During its I/O developer conference, Google unveiled PaLM 2, its latest large language model (LLM), which will power the updated Bard chat tool and serve as the foundation model for many new AI features. Developers can now access PaLM 2 through Google’s PaLM API, Firebase, and Colab.
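For developers, that access runs through the PaLM API. The snippet below is a minimal sketch, assuming the google-generativeai Python SDK and an API key from Google; the model name and parameters are illustrative and may differ from what your account exposes.

```python
# Minimal sketch of calling the PaLM API from Python.
# Assumes the google-generativeai SDK (`pip install google-generativeai`)
# and an API key issued by Google; the model name is illustrative.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # replace with your own key

response = palm.generate_text(
    model="models/text-bison-001",   # PaLM 2 text model exposed by the API
    prompt="Explain in one sentence what a TPU is.",
    temperature=0.2,
    max_output_tokens=128,
)

print(response.result)  # generated text, or None if the request was blocked
```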
Google, like OpenAI, did not share extensive technical information on the training process of PaLM 2, such as the number of parameters used (PaLM 1 has 540 billion parameters). However, Google did mention that PaLM 2 was developed using their latest JAX and TPU v4 infrastructure.
“What we found in our work is that it’s not really the sort of size of model — that the larger is not always better,” DeepMind VP Zoubin Ghahramani said in a press briefing ahead of today’s announcement.
He added, “That’s why we’ve provided a family of models of different sizes. We think that actually parameter count is not really a useful way of thinking about the capabilities of models and capabilities are really to be judged by people using the models and finding out whether they’re useful in the tests that they try to achieve with these models.”
Instead of technical details, Google has emphasized the capabilities of its new PaLM 2 model. PaLM 2 is said to offer improved common-sense reasoning, mathematics, and logic. The model was trained on a large corpus of math and science texts, as well as mathematical expressions, which allows it to solve math puzzles, reason through problems, and produce diagrams. This is a notable improvement, as large language models have historically struggled with math questions unless they relied on third-party plugins.
Google’s PaLM 2 also comes with enhanced support for coding, enabling improved code writing and debugging. The model was trained on 20 programming languages, including popular ones such as JavaScript and Python, as well as less common languages such as Prolog, Verilog, and Fortran. Codey, Google’s specialized model for coding and debugging, is built on top of PaLM 2 and powers the code completion and generation service Google announced today.
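Codey is surfaced through Vertex AI’s code generation models. The sketch below assumes the Vertex AI Python SDK and the code-bison model name; the project ID, region, and model version are placeholders, and availability depends on what your Google Cloud project has enabled.

```python
# Rough sketch of code generation with Codey via the Vertex AI Python SDK.
# Assumes `pip install google-cloud-aiplatform` and a GCP project with
# Vertex AI enabled; the model name "code-bison@001" is illustrative.
import vertexai
from vertexai.language_models import CodeGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")

model = CodeGenerationModel.from_pretrained("code-bison@001")
response = model.predict(
    prefix="Write a Python function that checks whether a string is a palindrome.",
    temperature=0.2,
    max_output_tokens=256,
)

print(response.text)  # suggested code returned by the model
```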
According to Google, PaLM 2 is not just a single model but a family of models, including Med-PaLM 2 for medical knowledge, Sec-PaLM for security, and a smaller version designed to run on smartphones. The company did not give a timeline for the smartphone version’s release, but said it could suit privacy-focused use cases. Google did not disclose which phone it tested the model on, but claims it can process 20 tokens per second.
Google has been cautious in launching its AI features, a point the company openly acknowledges. Its representatives have emphasized that Google intends to develop these tools responsibly and safely. However, since we were unable to test PaLM 2 before the announcement, it remains to be seen how well it performs and how it handles challenging scenarios.