Television Viewership Reimagined Through Generative AI

Event Time

Originally Aired - Saturday, April 13   |   11:30 AM - 11:50 AM PT

Event Location

Pass Required: Core Education Collection Pass


Television viewership measurement, employing a variety of media metrics, has long been an established practice. It has evolved through methods such as sweep surveys and people meters, marking substantial progress in media measurement technology. Nevertheless, the landscape of television and OTT (Over-The-Top) platforms has grown increasingly complex over the last decade. While traditional media measurement can answer 'what' and 'when' questions, it falls short on 'why' questions. For example, it cannot explain why a show like 'XYZ' has experienced a consistent decline in TRP (Television Rating Point) ratings across all target metropolitan markets over the past week.

Several intrinsic and extrinsic factors influence the success of a television show and shape consumer behavior. These factors may include, but are not limited to, the launch of a similar show on a competing network with a nearly identical storyline, the introduction of a popular reality show, live events, socio-economic conditions, sudden plot twists, social media sentiment, and cyclical events such as summer holidays or parliamentary elections.

In this paper, we introduce a multi-dimensional Question-Answer (QnA) chatbot employing Retrieval Augmented Generation (RAG) and large language models (LLMs), such as Anthropic Claude v2. The use of RAG in LLM-powered QnA bots is common practice to provide additional context and reduce hallucinations. We begin by defining a graph that captures a show's dependence on various factors and their relative significance. Each node within the graph represents a RAG source, providing contextual information about a specific show. When asked about the reasons behind a show's poor TRP ratings based on viewership data, we gather contextual information from multiple sources, including social media, competitor data, machine learning-based content analysis, and socio-economic conditions. All of this information is supplied to the LLM as context, and the model is tasked with reasoning over it. The LLM can then propose the most plausible reason or causal factor for the underperformance. Furthermore, we can engage in chain-of-thought questioning to delve deeper into follow-up inquiries.
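The retrieval-and-reasoning flow described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the graph structure, source names, retrieved snippets, and function names are all hypothetical stand-ins, and the actual system would query real vector stores or APIs and invoke an LLM such as Claude v2 with the assembled prompt.

```python
# Hypothetical sketch of the multi-source RAG pattern: each node in a
# factor graph is a retrieval source, and every retrieved snippet is
# assembled into a single prompt for the LLM to reason over.

# Factor graph for a show: show -> contributing-factor nodes (illustrative).
FACTOR_GRAPH = {
    "show_xyz": [
        "social_media",
        "competitor_data",
        "content_analysis",
        "socio_economic",
    ],
}

# Stand-in retrievers; in practice each would query a vector store or an
# external feed (social listening, ratings data, ML content tags, etc.).
RETRIEVERS = {
    "social_media": lambda q: "Negative sentiment spike after the episode 12 plot twist.",
    "competitor_data": lambda q: "A rival network launched a similar drama in the same slot.",
    "content_analysis": lambda q: "ML tagging shows reduced screen time for the lead character.",
    "socio_economic": lambda q: "Summer holidays began; regional prime-time viewing dropped.",
}

def build_prompt(show: str, question: str) -> str:
    """Gather context from every RAG source linked to the show and compose
    one prompt asking the LLM for the most plausible cause."""
    sections = []
    for node in FACTOR_GRAPH[show]:
        snippet = RETRIEVERS[node](question)
        sections.append(f"[{node}] {snippet}")
    context = "\n".join(sections)
    return (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Using only the context above, identify the most plausible cause."
    )

prompt = build_prompt("show_xyz", "Why did show XYZ's TRP ratings decline last week?")
print(prompt)  # This prompt would then be sent to the LLM for reasoning.
```

A follow-up ("chain-of-thought") question would reuse the same assembled context plus the model's previous answer, letting the conversation drill deeper into a specific factor.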

Presented as part of:

Application of Large Language Models (LLM) in Media


Punyabrota Dasgupta
Principal Solutions Architect
AWS India
Maheshwaran G
Principal Solutions Architect
AWS India