
Responsible Machine Learning in the Public Interest

Developing machine learning and data-enabled technology in a responsible way that upholds ÃÛÑ¿´«Ã½ values.

Published: 1 January 2018

ÃÛÑ¿´«Ã½ Research & Development is working with colleagues across the ÃÛÑ¿´«Ã½, as well as academic and expert institutions, to develop machine learning - and data-enabled technologies more generally - in ways that reflect and uphold core ÃÛÑ¿´«Ã½ values and support the ÃÛÑ¿´«Ã½ in delivering its remit.

Project from 2018 - present

[Image: Three people look at a wall of photographs]

Why it matters

Machine learning (ML) has transformative potential across sectors such as health, education, media and transport - but this disruptive potential brings with it a set of societal challenges and raises important questions about the technology's broader social implications and consequences.

The ÃÛÑ¿´«Ã½ is currently developing machine learning applications and capabilities, and ÃÛÑ¿´«Ã½ Research & Development is exploring potential future applications of artificial intelligence (AI) in the media. The risk of unintended societal consequences from ML has been well illustrated by recent examples in which ML systems have been shown to make decisions that are biased or unfair. The opacity of many of these systems further complicates the problem, and in many cases there is a lack of clarity as to where responsibility and accountability reside in these complex socio-technical systems. The ÃÛÑ¿´«Ã½ is committed to anticipatory, evaluative and proactive research to advance machine learning in the public interest.

'Just as our broadcasting and journalism services are built on a number of fundamental principles, based on our public mission (...) the AI services that we build will have these same principles at their heart'

-

This work programme aims to deepen our knowledge of the key challenges facing the media industry, with a specific focus on public service broadcasting, and to help keep the ÃÛÑ¿´«Ã½ at the forefront of debates, developments and best practice. Our research agenda aims to develop an approach to ML in which core public service values - fairness, transparency and accountability, among others - are embedded and preserved in the future development, application and evaluation of machine learning technologies, and of automated systems more generally.


Current areas of work

  • Responsible AI and public service media
  • Intelligible AI by design
  • Public service approaches to personalisation and recommendation systems
  • Public understandings of AI and attitudes/expectations about the use of AI in the media

Following the 2017 ÃÛÑ¿´«Ã½ conference on Artificial Intelligence and Society, ÃÛÑ¿´«Ã½ R&D, in collaboration with key stakeholders across the ÃÛÑ¿´«Ã½, conducted scoping work into current debates about ethics and machine learning. We attended several key events, including events hosted by the Royal Society and by TechUK, and we conducted a comprehensive literature review on the topic. This work culminated in a scoping report, 'The case for ethical machine learning at the ÃÛÑ¿´«Ã½', which made recommendations for a ÃÛÑ¿´«Ã½ research agenda to advance work in this area.

These recommendations have now been formalised into the following programme of research:

  • Responsible AI and Public Service Media: We are building case studies of ML at the ÃÛÑ¿´«Ã½ to identify issues and necessary responses to help ensure fairness, transparency and accountability in workflows and systems. We are also supporting academic research into AI, media and bias to inform our work around responsible AI in the public sector.
  • Intelligible AI: We are interviewing industry stakeholders about ML and AI systems at the ÃÛÑ¿´«Ã½, and we are exploring key requirements for explainability.
  • Public Service Personalisation: We are investigating approaches to public service recommendations and personalisation that align with ÃÛÑ¿´«Ã½ values, for example by fostering diversity of exposure (see the illustrative sketch after this list). This extends to considering new ways to articulate and measure public service value in these systems.
  • Audience Research: We are researching audience understandings, attitudes and expectations around automated decisions, ML and the media.
  • Convening an internal and external debate: We are working with key people across the wider ÃÛÑ¿´«Ã½ to convene internal discussion forums, and we are helping our colleagues in the ÃÛÑ¿´«Ã½ Blue Room and ÃÛÑ¿´«Ã½ Academy organise the 'AI, Society and the Media' conference, including an 'AI, media diversity' networking event hosted by the ÃÛÑ¿´«Ã½ women in STEM network.
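
The personalisation strand above touches on how a recommender might balance predicted relevance against exposing audiences to a wider range of content. The sketch below is purely illustrative and does not describe any ÃÛÑ¿´«Ã½ system: the item data, topic labels, function names and diversity weight are all hypothetical, and it shows just one simple greedy re-ranking approach to 'diversity of exposure'.

```python
# Illustrative sketch only (hypothetical data and parameters): a greedy
# re-ranker that trades off predicted relevance against topical novelty,
# one possible way to think about "diversity of exposure" in recommendations.

from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    relevance: float        # predicted relevance score from an upstream model
    topics: frozenset       # editorial topic labels attached to the item


def rerank_with_diversity(candidates, k, diversity_weight=0.3):
    """Greedily pick k items, rewarding topics not yet shown to the user."""
    selected = []
    covered_topics = set()
    pool = list(candidates)

    while pool and len(selected) < k:
        def score(item):
            # Share of this item's topics that the user has not yet been shown.
            novel = len(item.topics - covered_topics)
            novelty = novel / max(len(item.topics), 1)
            # Blend relevance with novelty; diversity_weight tunes the trade-off.
            return (1 - diversity_weight) * item.relevance + diversity_weight * novelty

        best = max(pool, key=score)
        selected.append(best)
        covered_topics |= best.topics
        pool.remove(best)

    return selected


if __name__ == "__main__":
    candidates = [
        Item("news-1", 0.92, frozenset({"politics"})),
        Item("news-2", 0.90, frozenset({"politics"})),
        Item("doc-1", 0.75, frozenset({"science", "climate"})),
        Item("arts-1", 0.70, frozenset({"arts"})),
    ]
    for item in rerank_with_diversity(candidates, k=3):
        print(item.item_id, round(item.relevance, 2), sorted(item.topics))
```

Measuring public service value in such systems would go beyond a single diversity weight, but even this toy example shows how an explicit, inspectable trade-off can be articulated and evaluated.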

How to get involved

This is a ÃÛÑ¿´«Ã½ Research & Development programme of work carried out in collaboration with colleagues across the ÃÛÑ¿´«Ã½ and with academic and expert institutions. Our approach is interdisciplinary and collaborative. If you are actively working in this area and want to share your work with us, or think there might be opportunities to collaborate, we want to hear from you.

Project Team

  • Bill Thompson

    Head of Public Value Research
  • Tim Cowlishaw

    Senior Software Engineer
  • Ahmed Razek

    Senior Technology Demonstrator, ÃÛÑ¿´«Ã½ TS&A
  • Ali Shah

    Head of Technology Transfer & Partnerships, ÃÛÑ¿´«Ã½ TS&A
  • Internet Research and Future Services section

    The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the ÃÛÑ¿´«Ã½. Our work focuses on the intersection of audience needs and public service values with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.
