Apple this week announced a new machine-learning framework aimed at making artificial-intelligence features run faster on mobile devices. Along with it, the company published documentation on how developers can integrate the technology into their products.
The key for Apple now is building out a developer network focused on AI products. Core ML's chief benefit, per Apple, is speed: it accelerates tasks such as face recognition and text analysis across the company's devices, from the Apple Watch to the iPhone.
Apple claims that on Inception-v3, an image-recognition benchmark, Core ML running on the iPhone 7 is six times faster than the Google Pixel and the Samsung Galaxy S8.
Core ML, which will ship as part of iOS 11, has privacy safeguards built in. All processing happens on-device, so the data that developers use to improve the user experience never leaves the phone. The framework also runs machine-learning models, such as neural networks, to make predictions locally.
The technology will underpin a variety of APIs for natural-language identification, face tracking and detection, landmark recognition, text detection, rectangle detection, barcode detection, object tracking and image registration.
Core ML will allow developers to take models built with tools such as XGBoost, Turi, Caffe, LIBSVM, scikit-learn and Keras and, using a machine-learning model converter, run them inside their apps.
Google also recently announced its own mobile machine-learning framework, TensorFlow Lite, at its I/O developer conference.