Earlier this week, Google released the second generation of its semiconductor chip specially designed to support artificial intelligence (AI).
The next generation of its Tensor Processing Unit (TPU) machine-learning chip, announced at the Google I/O developer conference, is designed to process data and speed up machine-learning tasks.
For advertisers, it may not mean much more than more precise ad targeting as the ad network draws inferences from the data it processes, but Google claims that under the hood, the latest generation of its TPU can deliver up to 180 teraflops of performance and is fast and powerful enough to train machine-learning models to identify flowers, vegetables and people.
The technology depends on the statistical likelihood that the model has correctly identified an image, and it serves up information accordingly.
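The article doesn't describe how such a system scores an image, but as a rough illustration, an image classifier typically converts its raw scores into a probability distribution (commonly via a softmax) and returns the most likely label. The labels and scores below are hypothetical, a minimal sketch rather than Google's actual pipeline:

```python
import math

def softmax(logits):
    # Turn raw classifier scores into probabilities that sum to 1.
    # Subtracting the max keeps exp() numerically stable.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three labels the article mentions.
labels = ["flower", "vegetable", "person"]
logits = [2.0, 0.5, 0.1]

probs = softmax(logits)

# The system "identifies" the image as whichever label is most likely,
# and that statistical confidence drives what information is served.
best = labels[probs.index(max(probs))]
confidence = max(probs)
```

A real deployment would compute the logits with a trained neural network; the decision step, however, is this same pick-the-most-likely-label logic.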
These chips are touted as a way for Google to generate revenue from non-advertising-related services, but in reality the next phase of AI-driven audio and visual ad services will require the processing power of the chip.
Advertisers want image recognition systems that rely on AI technology to tag everything from appliances to each square inch of the earth, pinpointing exact locations and location-based changes in behavior.
The latest version of the TPU chip was developed by retired University of California, Berkeley professor David Patterson, who joined Google last year to advance the chip, per one report.
A report written by Patterson and 75 engineers, scheduled to be presented next month at the International Symposium on Computer Architecture, concludes that the TPU runs 15 to 30 times faster and is 30 to 80 times more energy-efficient than contemporary processors from Intel and Nvidia.
Google also announced the TensorFlow Research Cloud, a program open to anyone conducting research. Those accepted into the program get access to a cluster of 1,000 Cloud TPUs for training. In exchange, Google asks users to share their research.