In ways big and small, artificial intelligence and machine learning are reshaping digital experiences.
In the latest example of this trend, Twitter is using neural networks to crop
picture previews in more eye-pleasing ways.
The goal is to create a more consistent UI experience, Zehan Wang and Lucas Theis, an engineering manager and a researcher at Twitter, note in a new blog
post. Going forward, Twitter's tech will focus on "salient" image regions, per the duo.
“A region having high saliency means that a person is likely to look at it when freely viewing the
image,” they explain. “Academics have studied and measured saliency by using eye trackers, which record the pixels people fixated with their eyes.”
Previously, Twitter used
face detection to focus the view on the most prominent face it could find in a picture. Not every picture includes faces, however. People are also drawn to text, animals, objects and regions of
high contrast.
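For illustration, here is a minimal sketch of that kind of face-based cropping, using OpenCV's stock Haar-cascade detector. The crop dimensions and the "largest face wins" heuristic are assumptions made for the example, not Twitter's actual implementation.

```python
# Rough sketch of face-based preview cropping, in the spirit of Twitter's
# earlier approach. The crop size and "largest face wins" rule are
# illustrative assumptions.
import cv2

def crop_around_largest_face(image, crop_w=600, crop_h=335):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    h, w = image.shape[:2]
    if len(faces) == 0:
        # No face found: fall back to a simple center crop.
        cx, cy = w // 2, h // 2
    else:
        # Center the crop on the most prominent (largest) detected face.
        x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        cx, cy = x + fw // 2, y + fh // 2

    # Clamp the crop window so it stays inside the image bounds.
    left = min(max(cx - crop_w // 2, 0), max(w - crop_w, 0))
    top = min(max(cy - crop_h // 2, 0), max(h - crop_h, 0))
    return image[top:top + crop_h, left:left + crop_w]

# "photo.jpg" is a placeholder path for the sake of the example.
preview = crop_around_largest_face(cv2.imread("photo.jpg"))
cv2.imwrite("preview.jpg", preview)
```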
“This data can be used to train neural networks and other algorithms to predict what people might want to look at,” according to Wang and
Theis.
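As a rough illustration of how a predicted saliency map might then drive the crop, the sketch below centers the preview window on the map's peak. The placeholder dimensions and the single-peak heuristic are assumptions, not Twitter's published method.

```python
# Illustrative sketch: once a model predicts a per-pixel saliency map, the
# preview crop can be centered on the most salient point. The saliency map
# here is synthetic; a real system would get it from a trained network.
import numpy as np

def crop_most_salient(image, saliency, crop_w=600, crop_h=335):
    """Crop `image` around the peak of the predicted `saliency` map."""
    h, w = image.shape[:2]
    # Location of the single most salient pixel (row, column).
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the crop window so it stays inside the image.
    left = int(min(max(cx - crop_w // 2, 0), max(w - crop_w, 0)))
    top = int(min(max(cy - crop_h // 2, 0), max(h - crop_h, 0)))
    return image[top:top + crop_h, left:left + crop_w]

# Toy usage with a random image and a synthetic saliency map.
img = np.random.randint(0, 255, size=(720, 1280, 3), dtype=np.uint8)
sal = np.random.rand(720, 1280)
preview = crop_most_salient(img, sal)
```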
Specifically, Twitter is using a technique called "knowledge distillation" to train a smaller network to imitate a slower but more powerful one. With this approach, an ensemble of large
networks is used to generate predictions on a set of images. These predictions, together with some third-party saliency data, are then used to train a smaller, faster network.
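Below is a compact sketch of that distillation setup, assuming PyTorch, toy fully convolutional saliency networks and a mean-squared-error imitation loss. The architectures and sizes are illustrative only, and a real pipeline would also mix in the third-party saliency data the post mentions.

```python
# Sketch of knowledge distillation as described: an ensemble of larger
# "teacher" networks produces saliency predictions, and a smaller, faster
# "student" network is trained to reproduce them.
import torch
import torch.nn as nn

def saliency_net(width):
    """Tiny fully convolutional net mapping an RGB image to a saliency map."""
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, 1, 1), nn.Sigmoid(),
    )

teachers = [saliency_net(width=64) for _ in range(3)]   # large, slow ensemble
student = saliency_net(width=8)                         # small, fast network
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(16, 3, 96, 96)  # stand-in for a batch of training images

# 1. The ensemble generates "soft" saliency targets for each image.
with torch.no_grad():
    targets = torch.stack([t(images) for t in teachers]).mean(dim=0)

# 2. The student is trained to imitate the ensemble's predictions.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(student(images), targets)
    loss.backward()
    optimizer.step()
```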
Twitter is hardly alone in betting on the technology. To keep its edge
in emerging technology, Apple is currently developing a processor devoted entirely to artificial intelligence.
Elsewhere in Techland, Twitter rivals are using AI and machine learning for
myriad tasks. Facebook recently said it was adding AI to existing efforts to prevent suicide.
Underscoring how highly they value such expertise, tech giants are paying AI and machine-learning specialists handsomely, reportedly shelling out $300,000 to $500,000 a year for top talent.