
So what is Sounds Daily? It's a personalised content stream that reorganises short- and long-form content for each listener, at scale, based on their listening habits, for use in the car and on the move.
It uses generative AI to query metadata and create scripts that introduce content and signpost what is coming up.
We know the morning commute is still a key part of daily car use, and that average journeys in the UK last 16 minutes. That's not much time in which to run a personalised experiment. We started with the morning commute because it is consistently the peak listening part of the day: it's when most people head to work, drop the kids off at school or generally start their day on the move.

During this time, we wanted to understand whether we were meeting the needs of our audience in the era of streaming and the changing world of connected cars, where there are considerably more options to choose from than the built-in DAB radio. There are now many more screens and apps vying for our attention, and the prominence of your brand or app within the car's entertainment system will inevitably dictate your success.

Our ambition for the project was to make a distraction-free, one-click, personalised listening experience that understands your listening habits and serves you the right content at the right moment, similar to turning on your favourite radio station. This understandably required a flexible media approach, rearranging content depending on what you want at the time.
As a way of connecting the content, we looked at the use of generative AI and synthetic media. Presenting thousands, if not millions, of pieces of content together, personalised for every individual user of the stream, at scale, is not possible for a human. This was an exercise to see how audiences reacted and interacted with synthetic media, using aggregated and summarised scripts to seamlessly join content, instead of, for example, a podcast clip and a news bulletin jarring together. Our approach used GPT-4, with guardrails around ÃÛÑ¿´«Ã½ metadata and other IP, to generate scripts and segues introducing the personalised stream. More about this in our upcoming blog post focussed on the technical parameters.
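The implementation details are saved for the technical blog post, but purely as an illustration, here is a minimal sketch of what a metadata guardrail on prompt construction could look like: the model is only ever shown fields from a vetted allow-list, and anything else is rejected. All field names, the schema and the prompt wording below are assumptions, not the actual Sounds Daily setup.

```python
# Hypothetical sketch: guardrailed prompt construction for a segue script.
# Field names and prompt text are illustrative, not the ÃÛÑ¿´«Ã½'s actual schema.

ALLOWED_FIELDS = {"title", "brand", "synopsis", "duration_mins"}  # vetted metadata only

def sanitise(item: dict) -> dict:
    """Guardrail: refuse any metadata field outside the allow-list."""
    unknown = set(item) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"disallowed metadata fields: {sorted(unknown)}")
    return item

def build_segue_prompt(current: dict, upcoming: dict) -> str:
    """Compose the prompt that would be sent to the language model."""
    cur, nxt = sanitise(current), sanitise(upcoming)
    return (
        "Write a one-sentence radio-style segue.\n"
        f"Just finished: '{cur['title']}' ({cur['brand']}).\n"
        f"Coming up: '{nxt['title']}' ({nxt['brand']}): {nxt['synopsis']}.\n"
        "Use only the details above; do not invent facts."
    )

prompt = build_segue_prompt(
    {"title": "Cricket Social", "brand": "5 Live", "synopsis": "Match reaction",
     "duration_mins": 12},
    {"title": "News Briefing", "brand": "Radio 4", "synopsis": "Morning headlines",
     "duration_mins": 5},
)
```

The final instruction line is one simple mitigation against the model inventing facts; in practice you would pair it with output checks as well as input guardrails.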

For this experiment, we focussed on a person driving on their own in the car. We all know the concessions we make to our listening habits when travelling in a car with friends and family, so for this experiment, we concentrated on individual use.
We built this experiment in the Sounds Sandbox: a mirrored copy of ÃÛÑ¿´«Ã½ Sounds, only it's set apart from the live product. This allows us to experiment freely while not interfering with the current live state that's used daily by millions of people. It also means audiences see the experiments in the known surroundings of ÃÛÑ¿´«Ã½ Sounds, making it easier to navigate.
The aim was to understand whether audiences want a personalised stream that plays out what they want at the time they want it. While testing this, we also thought about how to give the user the best possible stream that matches their tastes in that moment, the type of content (a specific topic as well as a broad category: cricket as well as sport) or the type of journey. An example might be that on Mondays I don't want to start my day with the news, but on Tuesdays I do.
Before the trial, we asked participants to complete a survey to give us more information about the topics they liked to listen to. We also had access to six months of their listening data from ÃÛÑ¿´«Ã½ Sounds to understand their habits. This information helped us form a baseline to test the stream against for each person, every time they used the experience. We integrated tools from teams across the ÃÛÑ¿´«Ã½, such as R&D's flexible media tool StoryFormer and ÃÛÑ¿´«Ã½ Sounds' universal recommendations engine. This was more efficient, and it also means Sounds Daily takes advantage of already established ÃÛÑ¿´«Ã½ systems. This is something often disregarded in experimentation of this nature, as reusing existing systems can slow an experiment down. For Sound Lab, we want to bring experiments as close to normal workflows as possible, so that the route to adoption can be made easier.
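The actual recommendations engine and baseline are not described here, but to illustrate the general idea, here is a minimal sketch of blending declared survey interests with observed listening habits into per-topic weights. The blend weight, function names and data shapes are all assumptions for illustration.

```python
# Illustrative sketch only: combining survey answers with listening history
# into a simple per-listener topic baseline. Names and weights are assumptions.
from collections import Counter

def baseline_scores(survey_topics, listening_history, alpha=0.6):
    """Blend declared interests (survey) with observed habits (play history).

    alpha controls how much observed behaviour outweighs stated preference.
    """
    survey = {t: 1.0 for t in survey_topics}             # declared interest
    plays = Counter(listening_history)                   # topic -> play count
    total = sum(plays.values()) or 1
    habits = {t: c / total for t, c in plays.items()}    # normalised habit share
    topics = set(survey) | set(habits)
    return {
        t: alpha * habits.get(t, 0.0) + (1 - alpha) * survey.get(t, 0.0)
        for t in topics
    }

scores = baseline_scores(
    survey_topics=["news", "comedy"],
    listening_history=["sport", "sport", "news", "comedy"],
)
```

A real engine would of course work at item level with recency, time of day and journey type as signals; the point of the sketch is only that stated and observed preferences can disagree, and the stream needs a principled way to weigh them.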
The project leaned on a multi-skilled team that flexed as different skills were required, drawing on editorial, producers, researchers, developers and UX designers. We were able to use the knowledge and expertise of those working directly on ÃÛÑ¿´«Ã½ Sounds for advice and problem-solving when needed, and also gained invaluable insight into audience interactions and editorial workflows.
Sounds Daily was trialled earlier this year with 80 participants over three weeks, in-car on the morning commute. Learn more about the experience, what we learnt from editorial workflows and tooling, and, not least, the insights from our trial participants in the forthcoming parts of this blog series.