SciNet Search Engine Introduces 'Interactive Intent Modeling'

Search engines process billions of queries monthly. They are great at checking facts, returning information about nearby restaurants or movie theaters, and finding product information on retail Web sites, but they are not yet good at handling complex tasks that go beyond simple keyword queries, tasks often described as information exploration and discovery.

A new search engine developed at the Helsinki Institute for Information Technology (HIIT) focuses on interactive intent modeling, an approach that predicts the user's search intent by analyzing interactions between humans and information retrieval (IR) systems, whether through touchscreens on desktops or on mobile phones. Combining human-computer interaction and machine learning, the researchers believe it is a better method for modeling a searcher's intent.

The search engine, SciNet, turns searching into a recognition task by displaying keywords related to the user's query on a visual map of topics, Professor Giulio Jacucci said in a statement. In addition to Jacucci, researchers Tuukka Ruotsalo, Petri Myllymäki, and Samuel Kaski developed interactive intent modeling.

Interactive intent modeling, demonstrated on a touchscreen, uses an interactive cloud of related keywords that can be combined or moved closer to or farther from one another. From these movements, the system estimates the user's intent and maps it onto a visual display the researchers call the IntentRadar. The user directs the search by touching keywords and dragging them around the screen; a keyword's position changes the estimated intent, which in turn changes the results. As the IntentRadar refreshes, the search results update to match the new arrangement. The technology combines visualization of the search intent and its direction with interactive adaptation of the intent model, balancing exploration of the information space against exploitation of user feedback.
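To make the mechanism concrete, the minimal Python sketch below shows one way keyword positions on a radar could be translated into relevance weights and used to re-rank results. The function names (`intent_weights`, `rank_documents`) and the distance-to-weight formula are assumptions made for illustration, not SciNet's actual implementation.

```python
# Illustrative sketch of the IntentRadar idea, not SciNet's actual code.
# Assumption: a keyword's relevance is derived from its radial distance to
# the center of the radar, and documents are re-ranked by weighted keywords.

from math import hypot

def intent_weights(keyword_positions, radius=1.0):
    """Map each keyword's (x, y) radar position to a relevance weight.

    Keywords dragged to the center (distance 0) get weight 1.0;
    keywords at the rim (distance == radius) get weight 0.0.
    """
    weights = {}
    for kw, (x, y) in keyword_positions.items():
        distance = min(hypot(x, y), radius)
        weights[kw] = 1.0 - distance / radius
    return weights

def rank_documents(docs, weights):
    """Score each document by the summed weights of the keywords it contains."""
    def score(doc):
        terms = doc.lower().split()
        return sum(w for kw, w in weights.items() if kw in terms)
    return sorted(docs, key=score, reverse=True)

# The user drags "gestures" toward the center and "tracking" toward the rim.
positions = {"gestures": (0.1, 0.0),   # near the center: highly relevant
             "games":    (0.5, 0.5),   # mid-radar: somewhat relevant
             "tracking": (0.9, 0.3)}   # near the rim: barely relevant
docs = ["hand gestures for games",
        "eye tracking hardware",
        "gestures in virtual reality"]
print(rank_documents(docs, intent_weights(positions)))
```

Each drag updates the weights and therefore the ranking, which is the feedback loop the researchers describe: the display shows the current intent estimate, and the user's corrections feed back into the model.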

The researchers offer the example of searching for "3D gestures." In this case, the intent model treats gestures as a highly relevant intent and connects it to other intents such as video games, interaction, and virtual reality.
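For illustration, here is one simple way a system could surface such related intents, using keyword co-occurrence over a small corpus. The function name `related_intents`, the toy corpus, and the co-occurrence heuristic are all assumptions for this example; the researchers' actual intent model is more sophisticated.

```python
# A toy related-intents suggester based on co-occurrence counts.
# Not the SciNet model; purely to illustrate the idea of relating intents.

from collections import Counter

STOPWORDS = {"in", "for", "and", "the", "of"}

def related_intents(query_term, corpus, top_n=3):
    """Count terms that co-occur with the query term across documents."""
    co_occurrences = Counter()
    for doc in corpus:
        terms = set(doc.lower().split()) - STOPWORDS
        if query_term in terms:
            co_occurrences.update(terms - {query_term})
    return [term for term, _ in co_occurrences.most_common(top_n)]

corpus = ["gestures in video games",
          "gestures for virtual reality interaction",
          "gestures and interaction design"]
print(related_intents("gestures", corpus))  # e.g. ['interaction', 'video', 'games']
```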

Google, Bing, and Yahoo offer this sort of exploratory search, but it requires searchers to visit the pages served up in the results, an act that lengthens the search process. The researchers tested interactive intent modeling with 20 participants, comparing the system against a traditional query-based engine to determine whether it significantly improves retrieval effectiveness by providing access to more relevant information without requiring more time to find it.

The search engine could also interface with wearable devices. The researchers explain that "IR systems can be extended by augmenting a real scene with predictions of what the user might find useful, shown as augmented reality on head-mounted displays (HMDs)." A person's implicit and explicit reactions to the visualized content can reveal intent and help refine the intent model on that person's wearable device.

It turns out that augmenting a user's environment "when visiting a poster session at a conference with visual cues and information can help the system collect information about the user's intent even when the user is not actively engaged with a search engine."
