Understanding the world’s most complex search engine: the human visual system.
EyeQuant predicts human attention based on hundreds of eye-tracking studies.
Every second of our waking lives, the optic nerve transmits over 100 Mbit of data from our eyes’ retinas to our brain’s visual cortex. Colors, contrasts, luminance levels, contours, shapes, shadows – at this stage, the world is a chaotic mess. But that’s not how we see it: we perceive clearly defined objects and, ultimately, meaning. What exactly happens in between?
The short answer is: eye-movements! To make sense of the world, we perform around 3 quick, automatic eye-movements per second. Why? Because we can’t see sharply outside of the area we’re focusing on – our peripheral visual acuity is surprisingly poor. The image on the left depicts this effect nicely: the viewer is focusing on the bee and is therefore unable to perceive the rest of the scene sharply. Of course, we believe that our eyes provide us with a wonderful high-resolution view of the world – the fact that this is simply not the case is just one of many useful little secrets our brain keeps from us.
At over 6 billion eye-movements in a lifetime, this may well be the most frequently performed human activity.
But how does the brain decide where to focus next? What draws our eyes to specific parts of the world, or of a website, to make sense of it? Clearly, most eye-movements must be driven by the environment itself – but how exactly does this work?
As researchers, we have worked extensively on this question for over two decades at the California Institute of Technology, the University of Southern California and the Institute of Cognitive Science in Osnabrueck. In fact, Professor Christof Koch and Professor Laurent Itti laid the very foundations of neuroscientific attention research by pioneering the theory of 'Saliency Maps'. Saliency Maps are algorithms that analyze images based on statistical feature sets known to drive human attention in the first few seconds – effectively a simulation of what the human visual cortex does when it tries to figure out what is most 'interesting' about a scene.
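For the technically curious, the core idea behind a saliency map can be sketched in a few lines of Python. This is a deliberately minimal toy, not EyeQuant's production model: it uses a single intensity channel, a simple smoothing helper, and illustrative blur scales of our own choosing, whereas real Itti-Koch-style models combine many feature channels across multiple pyramid scales.

```python
import numpy as np

def blur(img, k):
    """Crude smoothing: k rounds of averaging each pixel with its neighbors
    (a rough stand-in for the Gaussian pyramids used in real saliency models)."""
    out = img.astype(float)
    for _ in range(k):
        padded = np.pad(out, 1, mode="edge")
        out = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] +
               padded[1:-1, 1:-1]) / 5.0
    return out

def saliency_map(image):
    """Toy center-surround saliency on a single intensity channel.

    A coarse 'surround' view is subtracted from a fine 'center' view;
    large absolute differences mark regions that stand out locally.
    """
    center = blur(image, 2)     # fine scale (assumed, for illustration)
    surround = blur(image, 10)  # coarse scale (assumed, for illustration)
    contrast = np.abs(center - surround)
    return contrast / (contrast.max() + 1e-9)  # normalize to [0, 1]

# A flat grey field with one bright square: the square should pop out.
img = np.full((64, 64), 0.5)
img[28:36, 28:36] = 1.0
sal = saliency_map(img)
print(sal[31, 31] > sal[5, 5])  # the odd-one-out region is more salient
```

In this sketch, the bright square produces a strong center-surround contrast while the uniform background produces almost none – the same intuition, scaled up across dozens of feature channels, that drives attention prediction for real scenes and websites.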
In 2005, we expanded this approach to a very specific domain that allowed for extremely controllable experiments: how do users look at websites? And which image features of websites are responsible for attracting attention?
Using data from over 500 subjects viewing thousands of websites, we identified over 50 statistical features that attracted online attention most efficiently. Through constant evaluation and fine-tuning of the attention model, we achieved over 90% predictive accuracy by 2009 – EyeQuant was born!
The beauty of EyeQuant’s technology is that it’s not a fixed algorithm, but a data-driven prediction engine. We’re constantly tracking new subjects on the latest websites to ensure that our models keep up with the latest cultural developments. To learn more about our future product features and R&D, visit our research section!
Sign up for a free test!
Enter your URL and get results within seconds: