Creating and Implementing AI-Powered Conversational Search to Drive Viewer Engagement


Event Time

Originally Aired - Saturday, April 13   |   12:10 PM - 12:30 PM PT

Pass Required: Core Education Collection Pass

Over the past year, since Generative AI gained widespread acceptance, the conversation in the media and entertainment industries has evolved. While early discussions centered on the technology's merits, the current focus is on effectively leveraging Generative AI to address longstanding, industry-specific challenges without breaking the bank.

While long-range projections for the global artificial intelligence market are massive (the market is anticipated to exceed $2.5 trillion by 2032), achieving service velocity early will be important to establishing business and market success. To make the most of AI's potential, media and entertainment (M&E) providers need a technology roadmap that lets them tap into the growing availability of AI marketplaces and identify the applications that can most rapidly capture audience mindshare and fill gaps in current streaming offerings.

This tech demo will show how broadcast and streaming providers can implement Generative AI capabilities in a short time. It will include an initial application demonstrating how a Generative AI implementation that leverages Large Language Models (LLMs) can improve the search and discovery experiences that have historically frustrated consumers.

Research on search time varies, but a consistent theme is the significant time investment required to find content of interest: 45 hours per year according to one calculation; 1.3 years of viewers' lives according to another. The demo will show how integration with Generative AI models can power conversational search that expedites results for viewers while amassing information that makes the system increasingly knowledgeable and accurate.

Some streaming providers have attacked the discovery issue by implementing AI solutions that leverage their existing metadata, but because searchable metadata is finite, consumers are constrained in what they can discover. The technology proposed in the demo differs significantly: it lets consumers draw on the entire body of public web information used to train the LLMs to find content in response to spoken queries or directives.

The demo will detail the implementation of the technology within existing streaming infrastructures, most notably the integration required to enable an installed CMS to process voice commands and, using an existing AI marketplace and LLMs, initiate web-based search functionality and communicate the results back to the viewer, as illustrated in the sketch below.
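
As a rough illustration of that flow, and only as a sketch, the hypothetical handler below shows how a CMS-side endpoint might accept a voice command, hand it to a speech-to-text step, pass the transcript to an LLM for interpretation, run the search, and return results to the viewer's device. The endpoint path, function names, and stub return values are assumptions made for illustration; they do not represent Quickplay's implementation or any particular AI-marketplace API.

```python
# Hypothetical sketch of the voice-search integration flow described above.
# The endpoint path, function names, and stub return values are illustrative
# placeholders, not part of any specific CMS or AI-marketplace API.
from flask import Flask, jsonify, request

app = Flask(__name__)


def transcribe_audio(audio_bytes: bytes) -> str:
    # Placeholder: a real implementation would call a speech-to-text service.
    return "find me a documentary about deep-sea exploration"


def interpret_query(transcript: str, prior_turns: list[str]) -> str:
    # Placeholder: a real implementation would send the transcript, along with
    # prior conversation turns, to an LLM and return a normalized search query.
    return transcript


def search_catalog(query: str) -> list[dict]:
    # Placeholder: a real implementation would query web-scale sources and the
    # provider's catalog, then rank the candidates.
    return [{"title": "Example Title", "query": query}]


@app.route("/voice-search", methods=["POST"])
def voice_search():
    # 1. The client posts the viewer's voice command as raw audio.
    transcript = transcribe_audio(request.data)
    # 2. The LLM interprets the request, optionally using earlier turns.
    query = interpret_query(transcript, prior_turns=[])
    # 3. Search results are gathered and returned to the viewer's device.
    results = search_catalog(query)
    return jsonify({"transcript": transcript, "results": results})
```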

The demo converts speech to text, which is then fed into the chat-bison model to determine the most suitable responses. Throughout a conversation, the AI technology and the CMS retain context to ensure that interactions are personalized and relevant. While the demo is specific to search, the technology can drive other applications, including the creation of highlight reels or even AI-generated feature content related to a specific topic.
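
As one possible sketch of the model call itself, the example below uses Google Cloud's Vertex AI Python SDK, which exposes chat-bison as a chat model whose session object carries earlier turns between messages; that retained history is one way to provide the conversational context described above. The project ID, region, system context, and sample transcripts are illustrative assumptions.

```python
# Minimal sketch: feeding speech-to-text transcripts into chat-bison via the
# Vertex AI SDK. The project ID, region, and sample transcripts are
# illustrative assumptions, not values from the demo.
import vertexai
from vertexai.language_models import ChatModel

vertexai.init(project="my-gcp-project", location="us-central1")

chat_model = ChatModel.from_pretrained("chat-bison")

# start_chat() returns a session that keeps prior turns, so follow-up
# questions are interpreted in context rather than as fresh queries.
chat = chat_model.start_chat(
    context="You help viewers find movies and shows to watch."
)

# First spoken query, already converted to text upstream.
first = chat.send_message(
    "Find me a thriller set in space from the last five years."
)
print(first.text)

# The follow-up relies on retained context: "the second one" refers back to
# the previous answer.
follow_up = chat.send_message("Tell me more about the second one.")
print(follow_up.text)
```

In practice, a chat session like this would likely be created per viewer and keyed to the CMS session so that follow-up voice commands resolve against the same conversation history.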


Presented as part of:

Application of Large Language Models (LLM) in Media


Speakers

Naveen Narayanan
Senior Director, Product Innovation and Strategy
Quickplay