Expanding access to our best AI models in Vertex AI
Over the last six months, we’ve launched our most capable models yet, including Gemini 1.5 Pro, and today we’re taking Gemini 1.5 Pro into public preview for our Cloud customers and developers. Gemini 1.5 Pro shows dramatically enhanced performance and includes a breakthrough in long-context understanding: it can consistently process 1 million tokens of information, opening up new possibilities for enterprises to create, discover and build using AI.
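To give a rough sense of scale, here is a back-of-envelope sizing of a 1-million-token context window. The ratios used (~4 characters or ~0.75 words per token) are common heuristics for English text, not exact figures for any particular tokenizer.

```python
# Back-of-envelope sizing for a 1-million-token context window.
# CHARS_PER_TOKEN and WORDS_PER_TOKEN are rough heuristics, not
# exact values for any specific tokenizer.

CONTEXT_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4       # heuristic: ~4 characters per token
WORDS_PER_TOKEN = 0.75    # heuristic: ~0.75 English words per token

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN          # ~4,000,000 characters
approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)     # ~750,000 words

print(f"~{approx_chars:,} characters, ~{approx_words:,} words")
```

Under these assumptions, a single request could carry on the order of hundreds of thousands of words of documents, transcripts or code.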
When combined with Gemini’s multimodal capabilities (which can process audio, video, text, code and more), long context enables enterprises to do things that just weren’t possible with AI before. For example, a gaming company could provide a video analysis of a player’s performance, along with tips to improve. Or an insurance company could combine video, images and text inputs to create an incident report, making the claims process easier.
We’re also expanding access to a new version of our open model Gemma, designed to help customers with code generation and other types of code assistance.
These models are now available on Vertex AI, Google Cloud’s platform for customizing and fully managing a wide range of leading gen AI models. More than 1 million developers are now using our generative AI across tools including AI Studio and Vertex AI. Additionally, through Vertex AI, customers can now augment and ground their models — connecting model outputs to verifiable sources of information — in two new ways. The first is with Google Search, which provides high-quality information to improve the accuracy of responses. The second is with your own data and sources of truth, such as enterprise applications like Workday or Salesforce and Google Cloud databases like BigQuery.
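As a minimal sketch of what the two grounding options look like in practice, the helper below builds a `generateContent`-style request body that attaches either a Google Search grounding tool or an enterprise data-store retrieval tool. The field names follow the public Vertex AI Gemini REST surface, but the function itself (`build_grounded_request`) and the example data-store path are illustrative assumptions, not code from this announcement.

```python
from typing import Optional


def build_grounded_request(prompt: str, datastore: Optional[str] = None) -> dict:
    """Build an illustrative generateContent request body with grounding.

    If `datastore` is None, the request grounds responses on Google Search.
    Otherwise it grounds on an enterprise Vertex AI Search data store
    (a stand-in for "your own data and sources of truth").
    """
    if datastore is None:
        # Ground on Google Search results.
        tool = {"googleSearchRetrieval": {}}
    else:
        # Ground on a customer-managed data store (hypothetical resource name).
        tool = {"retrieval": {"vertexAiSearch": {"datastore": datastore}}}
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "tools": [tool],
    }


# Grounded on Google Search:
search_request = build_grounded_request("What changed in our Q1 claims policy?")

# Grounded on enterprise data (hypothetical data store resource name):
enterprise_request = build_grounded_request(
    "Summarize open Salesforce cases",
    datastore="projects/my-project/locations/global/dataStores/my-store",
)
```

Sending either body to the model endpoint (with appropriate credentials) would return responses connected to the chosen source of truth; the choice between the two tools is the only difference in the request.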