Google I/O Analysis: PaLM 2 vs. Hyperscalers' Approach
While hyperscalers orchestrate disparate foundational models, Google flexes its engineering muscle with the highly adaptable PaLM 2.
May 12, 2023
At a Glance
- While hyperscalers orchestrate disparate foundational models, Google offers its highly adaptable PaLM 2.
- PaLM 2, a first-mover among large language models, powers 25 products and features within Google's portfolio.
Google I/O has come and gone, with the focus squarely placed on AI. Among the major announcements was PaLM 2, Google's latest large language model.
Regardless of the company's true feelings about the relative value of very large language models (LLMs) vs. smaller, open-source models, one thing is clear after the flurry of generative AI announcements from the search giant: Google intends to drive value through the direct application of its own considerable AI expertise.
Where we see many other hyperscale competitors emphasizing the orchestration of disparate foundational models (big, small, open and closed source), Google is flexing its engineering muscle with the release of a highly adaptable LLM, PaLM 2.
This model, a first-mover in the LLM market, powers no fewer than 25 products and features within Google's portfolio. It also showcases the vendor's ability to take one very large model and scale it down to run offline on a mobile phone.
Further, PaLM 2 can single-handedly tackle a disparate array of tasks beyond basic language and code generation, including scientific and mathematical concepts, across more than 100 languages. These capabilities will manifest within many existing LLM-powered projects such as Bard, which will now be able to generate code as well as text.
This DIY mega-model approach is interesting, as it runs counter to the direction Omdia expects the LLM market to take: the creation of many smaller models, each fine-tuned to support company-level and even user-level requirements.
Google evidently feels capable of handling this spectrum on its own, as evidenced by the way it is fine-tuning these models to tackle very specific use cases, whether answering health questions (with Med-PaLM 2) or assisting chief information security officers with tough security challenges.