On the opening day of its Google I/O developer conference in Mountain View on Wednesday, Google announced second-generation Tensor Processing Units (TPUs), the successor to the TPUs the search giant unveiled at the same conference last year. Optimised for AI computations, Google says the new TPUs deliver up to 180 teraflops of floating-point performance, and they will be available via Google Compute Engine.

"We’re bringing our new TPUs to Google[3] Compute Engine as Cloud TPUs, where you can connect them to virtual machines of all shapes and sizes and mix and match them with other types of hardware, including Skylake CPUs and NVIDIA GPUs," Jeff Dean, Google Senior Fellow, and Urs Hölzle, Senior Vice President, Google Cloud Infrastructure, said in a blog post[4].

Google says developers will be able to program the Cloud TPUs using TensorFlow, the open-source machine learning framework it announced back in 2015, as well as new high-level APIs, which will "make it easier to train machine learning models on CPUs, GPUs, or Cloud TPUs with only minimal code changes".
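Google didn't detail those new APIs in the announcement, but the device-agnostic workflow it describes maps onto TensorFlow's existing high-level Estimator interface, which already let a single model definition run on CPUs or GPUs without modification. Below is a minimal sketch assuming the TensorFlow 1.x-era tf.estimator API (the TPU-specific variants were not public at the time of the announcement); the one-layer model_fn and synthetic data are illustrative only. Pointing the same model_fn at a Cloud TPU backend would be the sort of "minimal code change" Google describes:

```python
import numpy as np
import tensorflow as tf

def model_fn(features, labels, mode):
    # A one-layer linear model; the same graph definition runs on CPU or GPU
    # unchanged, which is the portability the new APIs extend to Cloud TPUs.
    predictions = tf.layers.dense(features["x"], units=1)
    loss = tf.losses.mean_squared_error(labels, predictions)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

# The Estimator places the graph on an available device automatically.
estimator = tf.estimator.Estimator(model_fn=model_fn)

# Synthetic training data, purely for illustration.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.random.rand(256, 3).astype(np.float32)},
    y=np.random.rand(256, 1).astype(np.float32),
    batch_size=32, num_epochs=None, shuffle=True)

estimator.train(input_fn=train_input_fn, steps=100)
```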

Apart from the additional computing power, Google says the big difference is that the new TPUs can handle both training and inference, whereas the first-generation TPU accelerated inference only, with models having to be trained separately on other hardware.
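The distinction matters because training is strictly more work than inference: inference is a single forward pass through fixed weights, while every training step adds a backward pass and a weight update on top of that forward pass. A toy TensorFlow sketch of the two workloads (a hypothetical one-weight model, for illustration only):

```python
import tensorflow as tf

# Toy model: y = x * w, with a single trainable weight.
x = tf.placeholder(tf.float32, shape=[None, 1])
y_true = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable([[0.0]])
y_pred = tf.matmul(x, w)

loss = tf.reduce_mean(tf.square(y_pred - y_true))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training step: forward pass + gradient computation + weight update.
    # This is the workload the second-generation TPUs add support for.
    sess.run(train_op, feed_dict={x: [[1.0]], y_true: [[2.0]]})
    # Inference step: forward pass only, weights fixed.
    # This is all the first-generation TPU accelerated.
    print(sess.run(y_pred, feed_dict={x: [[1.0]]}))
```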

"Training a machine learning model is even more difficult than running it, and days or weeks of computation on the best available CPUs and GPUs are commonly required to reach state-of-the-art levels of accuracy," Google said in the blog post, adding that the new TPUs will make the process faster.

"One of our new large-scale translation models used to take a full day to train on 32 of the best...

Read more from our friends at NDTV/Gadgets