Attention

We live in the age of distraction, and attention is the gatekeeper to an ad’s success. Without audience attention, everything else is secondary.


Tracking attention just like a human

Realeyes’ AI-powered platform is the most sophisticated on the market, using webcams to interpret audiences’ attention the same way a human would.

Audiences opt in via their PC or mobile devices, enabling their cameras during the test so we can classify behavioural cues such as eye movements, blinking, yawning and distracted head movements. Using our ground-breaking technology, machines have learned to read the same signals that the human brain instinctively processes to decide whether someone is paying attention or not.


Best in market


Smart

Realeyes’ attention metric is the most sophisticated on the market, tracking more behavioural cues than any other solution.

Scalable

Our attention solution can test multiple creatives at once, on a global scale and at machine speed. That combination of scale and speed simply isn’t possible with solutions that rely on monitoring participants’ brain activity or biometrics.

Precise

Our attention metric is just 4% shy of matching humans at telling whether someone is paying attention. Our machines are continuously learning to close this gap and even exceed human performance.

Robust

Unlike many solutions on the market, which rely heavily on eye tracking, our solution measures participants’ attention the same way a human would: through the set of subtle behavioural cues that the human brain instinctively processes to decide whether someone is paying attention or not.

Two Scores

Get two measures that gauge both the volume and the quality of attention your content will receive:

1. Attention Volume

Attention Volume shows the average level of attention respondents paid to the content. For example, a score of 50% means that, throughout the video, half of the viewers were attentive on average.

The more seconds of attention a video manages to grab from its audience, the higher this score will be.
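To make the arithmetic concrete, here is a minimal sketch in Python, assuming a hypothetical matrix of per-second attention labels (illustrative data only, not Realeyes’ actual pipeline):

```python
import numpy as np

# Hypothetical input: attention[i, t] is True when respondent i was
# judged attentive during second t of the video (made-up data).
attention = np.array([
    [1, 1, 0, 1, 0, 0, 1, 1],   # respondent 0
    [1, 0, 0, 0, 1, 1, 1, 0],   # respondent 1
    [0, 1, 1, 1, 1, 0, 0, 1],   # respondent 2
], dtype=bool)

# Attention Volume: the share of attentive viewer-seconds across the
# whole video. 0.5 means that, at any given moment, half the audience
# was attentive on average.
attention_volume = attention.mean()
print(f"Attention Volume: {attention_volume:.0%}")
```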

2. Attention Quality

Quality, on the other hand, indicates how long an audience was able to maintain continuous attention. It differs from Attention Volume in that the score is dictated not by the overall amount of attention, but by how that attention was distributed across the viewing.

Attention Quality decreases when respondents have short attention spans and get distracted regularly. It measures the proportion of the video for which respondents, on average, stayed continuously attentive. For example, a score of 50% means that, on average, respondents stayed attentive without interruption for half of the video.
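Under the same hypothetical per-second labels used in the Attention Volume sketch, Attention Quality can be read as each respondent’s longest uninterrupted attentive stretch, averaged across respondents:

```python
import numpy as np

def longest_run(is_attentive) -> int:
    """Length of the longest uninterrupted stretch of attentive seconds."""
    best = current = 0
    for attentive in is_attentive:
        current = current + 1 if attentive else 0
        best = max(best, current)
    return best

# Hypothetical per-second attention labels, one row per respondent
# (the same made-up data as in the Attention Volume sketch).
attention = np.array([
    [1, 1, 0, 1, 0, 0, 1, 1],
    [1, 0, 0, 0, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0, 0, 1],
], dtype=bool)

n_seconds = attention.shape[1]

# Attention Quality: each respondent's longest continuous attentive
# stretch as a share of the video, averaged across respondents.
attention_quality = np.mean([longest_run(row) / n_seconds for row in attention])
print(f"Attention Quality: {attention_quality:.0%}")
```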

Human-Level Accuracy

86% Precision

Currently, our human annotators agree with our attention classifier 86% of the time, only 4% away from the average human level of precision (humans agree with our ground-truth data 90% of the time).

We’re very close to being able to teach computers to measure attention as well as, if not better than, humans.

F1 Score

The harmonic mean of the model’s sensitivity (how often frames with attention are correctly picked by the classifier) and its precision (how often the frames picked as ‘attentive’ by our classifier are considered ‘attentive’ by annotators too). This score is very useful in understanding how well our classifier identifies attention levels.
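For the curious, the formula itself is simple. A minimal sketch, using hypothetical precision and sensitivity values rather than our published figures:

```python
def f1(precision: float, sensitivity: float) -> float:
    """F1: the harmonic mean of precision and sensitivity (recall)."""
    return 2 * precision * sensitivity / (precision + sensitivity)

# Hypothetical per-frame rates, for illustration only: the classifier
# finds 88% of truly attentive frames (sensitivity), and 86% of its
# 'attentive' calls match the annotators (precision).
print(f"F1: {f1(precision=0.86, sensitivity=0.88):.2f}")
```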

MCC

We favour the Matthews Correlation Coefficient (MCC) to assess the performance of our classifiers, because it accounts for the relative proportions of all possible prediction outcomes (true positives, true negatives, false positives, false negatives). If a classifier is great at predicting an outcome that happens often but poor at predicting the rare event, the MCC will reflect that much more clearly than other measures of accuracy.
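A small sketch shows why: a classifier that labels almost every frame ‘attentive’ can post high raw accuracy on an attention-heavy test set while its MCC stays low. All counts below are hypothetical:

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient over all four outcome counts."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / denominator if denominator else 0.0

# This classifier calls 990 of 1,000 frames 'attentive': raw accuracy
# is 91%, but the MCC (about 0.30) exposes its failure on the rare
# 'inattentive' class.
print(f"MCC: {mcc(tp=900, tn=10, fp=90, fn=0):.2f}")
```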

Precision

This metric computes how often the frames picked as ‘attentive’ by our classifier are considered ‘attentive’ by annotators too. The higher the precision, the more certain we are that frames the classifier labels ‘attentive’ agree with human annotators’ judgement.
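In code, precision is simply the share of true positives among everything the classifier flagged as positive; the frame counts below are hypothetical:

```python
def precision(tp: int, fp: int) -> float:
    """Share of frames labelled 'attentive' by the classifier that
    human annotators also labelled 'attentive'."""
    return tp / (tp + fp)

# Hypothetical counts: 860 of 1,000 'attentive' calls match annotators.
print(f"Precision: {precision(tp=860, fp=140):.0%}")
```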

Here’s the science bit

Our white paper covers the science behind our Attention Metric and how we’ve taught machines to recognise attention, just like humans.