ML Engineering in the Time of GPT-4 & PaLM 2

Digits engineers recently spoke at Google's North America Connect conference on the future of machine learning. This blog post expands on the presentation themes.

Over the past few months, we have witnessed groundbreaking developments in generative machine learning (ML) models, revolutionizing the potential impact ML can have across diverse industries. Today, machine learning capabilities can be integrated into applications in a matter of hours, as opposed to the days or even weeks it took in the past. This not only saves valuable time, but also empowers companies to embrace technological advances and bring innovation to market quickly.

As we attempt to understand the power of this rapidly evolving domain, we feel compelled to share our thoughts on the future of machine learning. Through this blog post, we aim to:

  1. Dissect the intricacies of the field
  2. Delve into the multifaceted aspects of generative machine learning via model APIs such as OpenAI's
  3. Discuss the benefits and downsides that have the potential to transform the lives of people around the world

Has Machine Learning Found Its Gutenberg Moment?

When we think of history's greatest technological leaps, the invention of the printing press in 1450 by Johannes Gutenberg in Mainz, Germany, is undoubtedly one of the most transformative. Gutenberg's press revolutionized how books were copied and distributed, no longer requiring them to be painstakingly hand-written by monks.

This innovation significantly altered access to knowledge, becoming a cornerstone of modern history and leading to increased literacy and widespread access to information. The “Gutenberg Moment.”

Are we experiencing a similar revolution in machine learning, specifically within the realm of generative AI?

Similar to how the Gutenberg Moment democratized access to information, the recent acceleration in access to generative AI has empowered businesses to swiftly adopt previously inaccessible technology such as Large Language Models (LLMs) and foster innovation. The autonomy to work with ML is moving outside the confines of large technology companies and closer to domain experts across industries.

As generative models continue to evolve, the question arises: Will this evolution redefine the core tasks machine learning engineers perform? Instead of generating datasets, training models, and evaluating them, will we shift our focus to engineering prompts for LLMs?
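To make that shift concrete, here is a minimal sketch of what "engineering a prompt" can look like in practice. It assumes the pre-1.0 `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name and response fields are illustrative and may differ in your setup:

```python
import os

import openai

# Assumes the pre-1.0 `openai` Python package; set your API key first.
openai.api_key = os.environ["OPENAI_API_KEY"]

# A task that once required collecting data and training a custom
# classifier, expressed instead as a plain-language prompt.
prompt = (
    "Classify the sentiment of the following product review as "
    "positive, negative, or neutral. Reply with a single word.\n\n"
    "Review: Setup took five minutes and it worked on the first try."
)

response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the output as repeatable as possible
)

print(response["choices"][0]["message"]["content"])  # e.g., "positive"
```

The entire "model development" step has collapsed into wording the instructions. Evaluating whether the output can be trusted, however, still takes discipline, as the next section discusses.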

Early Lessons Learned

When we first interacted with large language models, we were in awe of the human-like text they generated. However, drawing conclusions from brief interactions with these models can be misleading. It's essential to treat initial results with caution, as LLMs are capable of producing highly convincing "hallucinations," or fabricated information, within their output.

Moreover, LLMs may generate inconsistent outputs, reinforcing the need for human review when employing them. As we continue to explore the potential of generative AI, understanding and mitigating these limitations will foster progress and unlock more reliable and robust applications.
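One lightweight mitigation, sketched below without assuming any particular provider, is to sample the same prompt several times and route low-agreement answers to a human reviewer. The `call_model` argument is a hypothetical placeholder for whichever model API wrapper you use:

```python
from collections import Counter
from typing import Callable

def needs_human_review(
    call_model: Callable[[str], str],  # hypothetical wrapper around your model API
    prompt: str,
    n_samples: int = 5,
    min_agreement: float = 0.8,
) -> bool:
    """Sample the model several times and flag inconsistent answers.

    If the most common answer accounts for less than `min_agreement`
    of the samples, route the prompt to a human reviewer rather than
    trusting any single generation.
    """
    answers = [call_model(prompt).strip().lower() for _ in range(n_samples)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / n_samples < min_agreement
```

The sample count and agreement threshold trade cost against coverage; the key point is that API calls are cheap enough to buy redundancy where a single answer cannot be trusted.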

Is Machine Learning Commoditized?

For certain projects, machine learning does indeed appear to be commoditized by the capabilities of large language models. Typically, these are projects built on public data, or ones without specific deployment constraints (e.g., on-device processing). Projects without stringent security or privacy requirements can likewise benefit from accessible model APIs such as GPT-4 or PaLM 2.

However, not all projects fit into this commoditized landscape. Projects involving proprietary data or strict privacy requirements still tend to need custom-built ML solutions. Third-party model APIs may not capture the unique traits of proprietary datasets, may require impractically long prompts, or may not provide the necessary security measures. Furthermore, projects with low-latency requirements may also necessitate specialized ML solutions tailored to specific use cases, as work on low-latency inference for LLMs is still ongoing.

The importance of the underlying intellectual property (IP) should not be overlooked. If your data and custom models provide an unfair advantage, they are worth protecting and investing in further.

Should We Be Concerned About Model APIs?

Over the years, the machine learning community has consistently focused on achieving unbiased predictions, improving data and training transparency (e.g. through model cards), closing feedback loops for better model performance, ensuring user privacy, and enabling on-device inferences. However, as we move toward adopting third-party generative AI and incorporating model APIs, it's crucial to be aware of the potential issues and challenges they may pose.

Currently, the objectives mentioned above are not fully achievable with model APIs. There are concerns that, unlike more traditional ML models, generative models like GPT-4 may be more susceptible to producing biased results due to their complexity and the vast amounts of data they are trained on. Additionally, essential privacy guarantees may be compromised when processing user data via model APIs, since these services typically require transmitting data to remote servers.
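As a simple illustration of the privacy point, here is a minimal, deliberately naive sketch of redacting obvious PII before any text leaves your infrastructure. The patterns are illustrative only and nowhere near sufficient for production use:

```python
import re

# Illustrative patterns only; production systems need far more thorough
# PII detection (named-entity recognition, audits, allow-lists).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious PII with placeholder tokens before the text
    is ever transmitted to a third-party model API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane@example.com or 555-867-5309."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```

Note that the name "Jane" slips through: regexes alone cannot carry a privacy guarantee, which is exactly why projects with strict privacy requirements resist commoditization.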

Transparency regarding data and training is an ongoing challenge for model API developers. Industry-leading models may not fully disclose their inner workings, making it difficult for users, and even industry experts, to judge their ethical implications. Lastly, on-device inference, which has boosted privacy and efficiency in the past, is currently impeded by the large size and resource requirements of sophisticated generative models.

In summary, as we continue to integrate model APIs into generative AI applications, the ML and developer communities must remain cognizant of the limitations and risks associated with their use. To harness the advantages of such powerful technologies while upholding the standard objectives of privacy, transparency, and unbiased predictions, researchers and practitioners must be diligent in addressing these challenges.

How Is the Role of Machine Learning Engineers Changing?

The responsibilities of machine learning engineers have expanded beyond solely developing models to encompass a wider range of tasks associated with generative AI systems.

One of the key changes in our role is acting as effective moderators between various stakeholders. This involves liaising with clients, leaders, and other team members to ensure that a generative AI project is well-executed and that stakeholders (e.g., software engineers consuming third-party model APIs) understand the implications of model hyperparameters such as the sampling temperature.
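For example, the handful of sampling hyperparameters below are often all a consuming team directly controls, yet they materially change the output. This sketch again assumes the pre-1.0 `openai` package, with an illustrative model name and parameter values:

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

for temperature in (0.0, 0.7, 1.2):
    response = openai.ChatCompletion.create(
        model="gpt-4",            # illustrative model name
        messages=[{"role": "user", "content": "Suggest a name for a budgeting app."}],
        temperature=temperature,  # 0 ~ repeatable; higher ~ more varied output
        top_p=1.0,                # nucleus sampling cutoff; tune this or temperature, not both
        max_tokens=30,            # hard cap on response length (and therefore cost)
    )
    print(temperature, response["choices"][0]["message"]["content"])
```

A software engineer can ship a feature without understanding these knobs; an ML engineer who explains them is often the difference between flaky and dependable behavior in production.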

In addition to being moderators, ML engineers now serve as advisors regarding the risks and benefits of generative AI projects. We use our knowledge of the field to inform stakeholders about the potential outcomes and consequences of implementing a particular model, as well as to identify potential biases and ethical issues that should be managed proactively.

With these changes, ML engineers are transitioning from creators to consultants. The role is no longer focused solely on designing and implementing algorithms, but rather on guiding and supporting organizations in navigating the complex landscape of generative AI. This shift requires us to develop not only technical expertise, but also strong communication, collaboration, and critical thinking skills to address the challenges and opportunities that generative AI presents in various industries.

Conclusion

Although prompt design plays a significant role in the development of generative AI, it does not eliminate the need for machine learning expertise. As we continue to grapple with the engineering challenges associated with large language models, a deep understanding of ML becomes increasingly important for addressing concerns such as bias and safety effectively. To get the most value out of generative AI, organizations should focus custom ML efforts on projects with proprietary data, those involving "subjective" machine learning (e.g., similarity machine learning), and those with specific requirements around user privacy, security, and low latency. As experts and advisors, ML engineers must find the right balance and alignment among stakeholders to navigate the opportunities and challenges posed by this emerging technology.
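As a closing aside, "similarity machine learning" is easy to make concrete: score how alike two items are by comparing embedding vectors. The sketch below uses toy vectors in place of embeddings from a custom-trained encoder, which would be the proprietary part worth protecting:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score in [-1, 1]; higher means 'more alike'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for embeddings of, say, invoices produced by a
# custom-trained encoder over proprietary data (hypothetical example).
invoice_a = np.array([0.9, 0.1, 0.3])
invoice_b = np.array([0.8, 0.2, 0.4])
unrelated = np.array([-0.5, 0.9, 0.1])

print(cosine_similarity(invoice_a, invoice_b))  # ~0.98: likely the same vendor
print(cosine_similarity(invoice_a, unrelated))  # ~-0.33: probably unrelated
```

The scoring function is trivial; the value lies entirely in the encoder that produced the vectors, which is precisely the kind of IP worth investing in.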