Chinese search engine Baidu has built a "supercomputer" that it says is more powerful than systems used by Google and Microsoft. Baidu calls the computer Minwa and believes it will raise the bar for artificial intelligence. For the nerds out there, it has 72 powerful processors and 144 graphics processors, the high-performance specialized chips typically used to process visual data.
Computing power matters, especially in cloud servers. It powers search engine queries, ad targeting and bidding systems, and website page views. The faster a system can identify a request and serve a result, the more likely it is to build customer loyalty for a brand. Yes, the time it takes to serve information on a mobile device or desktop has everything to do with brand loyalty.
Earlier this week, Baidu published a paper outlining Minwa's accomplishments. The company says the machine has been used to train machine-learning software that set a new record for recognizing images, beating a previous record set by Google. The results describe how the machine misidentified only 4.58% of pictures in a set of one million images. The previous best error rate was 4.82%, reported by Google in March 2015. One month prior, Microsoft had reported achieving 4.94%, becoming the first to beat the average human error rate of 5.1%.
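For context, those figures are error rates: the share of test images the software labels incorrectly, so lower is better. Here is a minimal sketch of how such a rate is computed, using made-up predictions and labels; the actual benchmark's scoring rules are not detailed in the article, so this assumes simple one-label-per-image matching:

```python
# Hypothetical illustration: an image-recognition "error rate" is the
# fraction of test images the model labels incorrectly.
def error_rate(predictions: list[str], ground_truth: list[str]) -> float:
    """Return the percentage of predictions that do not match the labels."""
    wrong = sum(p != t for p, t in zip(predictions, ground_truth))
    return 100 * wrong / len(ground_truth)

# Toy example: 2 mistakes out of 5 images -> 40.0% error.
preds  = ["cat", "dog", "car", "tree", "boat"]
labels = ["cat", "dog", "cat", "tree", "ship"]
print(error_rate(preds, labels))  # 40.0

# At Baidu's reported 4.58% on one million images, that would be
# roughly 45,800 misclassified pictures.
```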
Of course it takes more than raw processing power, but those percentages are the payoff: Baidu researchers say they have built a "large supercomputer" dedicated to training deep neural networks.
The promise of better recognition points to improvements in ad targeting based on images rather than keywords, especially as search ties visual images to intent. Baidu trained a "neural network" to recognize images, feeding the software high-resolution versions of pictures so it could develop a better understanding of the characteristics in the images.
MIT Technology Review explains it this way: The technique is a souped-up version of an approach first established decades ago, in which data is processed by a network of artificial neurons that manage information in ways loosely inspired by biological brains. Deep learning involves using larger neural networks than before, arranged in hierarchical layers, and training them with significantly larger collections of data, such as photos, text documents, or recorded speech.
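To make those "hierarchical layers" concrete, here is a minimal sketch of a deep neural network in plain Python with NumPy. This is a generic illustration of the technique, not Baidu's actual system: data flows through stacked layers of artificial neurons, each applying learned weights and a nonlinearity, with early layers picking up simple features and later layers combining them into class scores.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied by each artificial "neuron".
    return np.maximum(0, x)

def forward(image_vector, layers):
    """Pass an input through a stack of (weights, biases) layers."""
    activation = image_vector
    for weights, biases in layers:
        activation = relu(activation @ weights + biases)
    return activation

# Toy network: a 64-pixel input flows through three hierarchical layers.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(64, 32)), np.zeros(32)),  # layer 1: low-level features
    (rng.normal(size=(32, 16)), np.zeros(16)),  # layer 2: mid-level features
    (rng.normal(size=(16, 10)), np.zeros(10)),  # layer 3: class scores
]
scores = forward(rng.normal(size=64), layers)
print(scores.argmax())  # index of the highest-scoring class
```

Training is the expensive part: adjusting those weights across enormous collections of labeled examples, which is exactly the workload a GPU-heavy machine like Minwa is built to accelerate.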
You can access the report here.