OpenAI has just begun renting Google's artificial intelligence processors to power ChatGPT and other products, according to a source familiar with the situation.
OpenAI is one of the largest buyers of Nvidia's graphics processing units (GPUs), which it uses both to train models and for inference computing, the process in which an AI model applies its trained knowledge to make predictions or decisions based on new data.
OpenAI planned to add Google Cloud services to meet its growing demand for computing capacity, Reuters exclusively reported earlier this month, marking a surprising collaboration between two prominent AI competitors.
For Google, the deal comes as the company expands external access to its in-house tensor processing units (TPUs), which were historically reserved for internal use. That move has helped Google win customers including Apple, as well as startups Anthropic and Safe Superintelligence, two ChatGPT competitors founded by former OpenAI executives.
The decision to rent Google's TPUs marks the first time OpenAI has used non-Nvidia chips in a meaningful way, and it signals the Sam Altman-led company's shift away from relying on Microsoft's data centers. It could also boost TPUs as a cheaper alternative to Nvidia's GPUs, according to The Information, which first reported the development.
According to the report, OpenAI hopes the TPUs it rents through Google Cloud will help lower the cost of inference.
However, Google, which competes with OpenAI in the AI race, is not renting its most powerful TPUs to its rival, The Information reported.
Google declined to comment, and OpenAI did not immediately respond to Reuters' requests for comment.
Google's addition of OpenAI to its customer base shows how the tech giant has capitalized on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business.