
Marketers Could Use AI to Make Sure You See Their Ads—Here's How

In brief

  • AdGazer is a model that predicts human ad attention using eye-tracking–trained AI.
  • Page context accounts for at least a third of how much attention an ad gets.
  • An academic demo could quickly evolve into real ad-tech deployment.

Somewhere between the article you’re reading and the ad next to it, a quiet war is being waged for your eyeballs. Most display ads lose that war: people dislike them so much that AI companies like Perplexity and Anthropic are steering away from intrusive advertising altogether, looking for better monetization models.

But a new AI tool from researchers at the University of Maryland and Tilburg University wants to change that—by predicting, with unsettling accuracy, whether you’ll actually look at an ad before anyone bothers placing it there.

The tool is called AdGazer, and it works by analyzing both the advertisement itself and the webpage content surrounding it, then forecasting how long a typical viewer will look at the ad and its brand logo, drawing on a large body of eye-tracking data from advertising research.

The team trained the system on eye-tracking data from 3,531 digital display ads. Real people wore eye-tracking equipment, browsed pages, and their gaze patterns were recorded. AdGazer learned from all of it.

When tested on ads it had never seen before, its predictions correlated with actual human gaze measurements at 0.83, where 1.0 would mean a perfect match and 0 would mean no relationship at all.
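To make that number concrete, here is a toy Python sketch of what a correlation like that measures. The gaze durations below are invented for illustration and are not data from the study.

```python
# Illustrative only: toy numbers, not data from the AdGazer paper.
# Pearson correlation measures how tightly predicted and observed gaze
# durations move together, not the share of predictions that are "right."
import numpy as np
from scipy.stats import pearsonr

# Hypothetical gaze durations (seconds) for six held-out ads.
actual    = np.array([1.2, 0.4, 2.1, 0.9, 1.7, 0.3])
predicted = np.array([1.0, 0.6, 1.9, 1.1, 1.5, 0.5])

r, _ = pearsonr(actual, predicted)
print(f"correlation: {r:.2f}")  # 1.0 = perfect linear match, 0 = no relationship
```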

Unlike other tools that focus on the ad itself, AdGazer reads the whole page around it. The same luxury watch ad performs differently next to a financial news article than it does next to a sports score ticker.

The surrounding context, according to the study published in the Journal of Marketing, accounts for at least 33% of how much attention an ad gets—and about 20% of how long viewers look at the brand specifically. That’s a big deal for marketers who’ve long assumed the creative itself was doing all the heavy lifting.

The system uses a multimodal large language model to extract high-level topics from both the ad and the surrounding page content, then scores how well they match semantically, essentially how the ad's subject matter fits the context it is placed in. These topic embeddings feed into an XGBoost model, which combines them with lower-level visual features to produce a final attention score.
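For readers who want a rough mental model of that pipeline, here is a minimal Python sketch. The embeddings, feature names, and gaze durations are placeholders invented for illustration; this is not AdGazer's code, and the paper's multimodal LLM is stood in for by random vectors.

```python
# A minimal sketch of the pipeline described in the article, with made-up inputs:
# topic embeddings for the ad and the page (placeholders for LLM-derived topics),
# their cosine similarity as an "ad-context fit" signal, and a few low-level
# visual features, all fed to an XGBoost regressor that predicts gaze duration.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_ads, emb_dim = 200, 64

ad_topics   = rng.normal(size=(n_ads, emb_dim))   # stand-in for topic embeddings of each ad
page_topics = rng.normal(size=(n_ads, emb_dim))   # stand-in for embeddings of the surrounding page

# Semantic ad-context fit: cosine similarity between the two embeddings.
cos_sim = np.sum(ad_topics * page_topics, axis=1) / (
    np.linalg.norm(ad_topics, axis=1) * np.linalg.norm(page_topics, axis=1)
)

# Hypothetical low-level visual features (e.g., ad size, contrast, text density).
visual = rng.uniform(size=(n_ads, 3))

X = np.column_stack([ad_topics, page_topics, cos_sim, visual])
y = rng.gamma(shape=2.0, scale=0.8, size=n_ads)   # fake gaze durations in seconds

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)
print(model.predict(X[:3]))  # predicted gaze times for the first three ads
```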

The researchers also built an interface, Gazer 1.0, where you can upload your own ad, draw bounding boxes around the brand and visual elements, and get a predicted gaze time back in seconds—along with a heatmap showing which parts of the image the model thinks will draw the most attention. It runs without needing specialized hardware, though the full LLM-powered topic matching still requires a GPU environment not yet integrated into the public demo.

For now it’s an academic tool. But the architecture is already there. The gap between a research demo and a production ad-tech product is measured in months—not years.
