Introduction
The evolution of AI in all its forms offers tremendous opportunities for creativity, innovation and improved productivity in every area of the BBC.
However, it can also pose significant risks. Wherever any form of AI - generative AI or AI more broadly - is used to create, present or distribute content it must comply with the BBC's Editorial Guidelines and its editorial values.
The pace of change in artificial intelligence, the range of different uses and the potential absence of human oversight all raise serious challenges for its deployment by the BBC and how it manages its future development.
The BBC has made clear that its use of AI will always be in line with its public service values, that it will prioritise talent and creativity, and that it will be open and accountable.
BBC use of AI must:
- never undermine the trust of audiences.
- always be transparent and accountable with effective and informed human oversight.
- always be used in a manner consistent with BBC editorial values, in particular accuracy, impartiality, fairness and privacy.
This Guidance is intended to outline how AI can be used by the BBC, and all those who supply content to the BBC, in a manner consistent with Editorial Guidelines and the BBC's editorial values.
It gives advice on where, when and how further advice should be obtained at any stage on any proposed editorial use of AI.
For staff and freelancers working for the BBC, it should be read alongside the BBC's AI Handbook: How to Use AI Responsibly which outlines key non-editorial issues to be considered and the processes which must be followed by all users of AI.
As experience, expertise and technology change, this guidance will be updated regularly.
What is Artificial Intelligence (AI)?
Artificial intelligence can be defined as a machine-based system that can perform tasks commonly associated with human intelligence, including making content, predictions and/or decisions.
Generative AI can be defined as a type of artificial intelligence capable of creating text, images, speech, music, video and code in response to prompts from a user.
Editorial issues in the use of AI
Whenever the use of AI is proposed, content creators, content curators or product teams must first consider whether both the deployment of AI in principle and the specific product or tool are appropriate for the task it is being required to do.
They should also be aware that AI may be integrated into tools provided by external suppliers or tools that are openly available on the internet.
Any use of AI by the BBC in the creation, presentation or distribution of content must be consistent with the Editorial Guidelines, including the principles of impartiality, accuracy, fairness and privacy.
Any use of AI by the BBC in the creation, presentation or distribution of content must include active human editorial oversight and approval, appropriate to the nature of its use and consistent with the Editorial Guidelines.
For example, oversight of a recommendation engine may be at a high level to ensure that its output is consistent with the Editorial Guidelines. But where an AI is used in data analysis for a journalistic project, human oversight should engage with the detailed output.
In all cases, where AI is used for editorial purposes there must be a senior editorial figure who is responsible and accountable for overseeing its deployment and continuing use. Editorial line managers must also make sure they are aware of, and effectively managing, any use of AI by their teams.
Any use of AI by the BBC in the creation, presentation or distribution of content must be transparent and clear to the audience. The audience should be informed in a manner appropriate to the context and it may be helpful to explain not just that AI has been used but how and why it has been used.
Any external AI tools used must normally be authorised in line with existing BBC software authorisation or procurement processes before being used. A full list of authorised tools can be found on the Responsible AI website on Gateway.
Particular care should be taken around the use of AI in content intended for under-18s.
Algorithmic bias and training data
The outcomes produced by AI are determined by both the algorithm behind it and the data that it has been trained on. Both the algorithm and the training data may introduce biases or inaccuracies into the outcomes of the AI.
For example, an early use of facial recognition software in a UK passport renewal system found it difficult to identify some skin tones, which made it impossible for those individuals to renew their passport online.
Any proposed use of any AI must consider whether any inherent biases affect its deployment by the BBC and therefore whether it is an appropriate tool.
Hallucinations
Generative AI operates by predicting likely responses to queries or instructions, based on the nature of its algorithm and training data, rather than providing content or answers that are necessarily factually accurate.
Any proposed use of generative AI must take into account the potential that content presented as accurate may in reality be a creation of the algorithm: a 'hallucination' or a fabrication with no basis in fact.
Plagiarism and mimicry
Similarly, generative AI may simply adapt content from a web search or from a database of trusted content and present it as original.
Any proposed use of AI must take into account the potential that content presented as original may in reality be plagiarised or mimicked.
The BBC also has a responsibility to consider not only the rights of creators and artists in its use of AI but also not to jeopardise the role that they play in the wider creative community. Any use of AI must consider the rights of talent and contributors, while also allowing for the creative use of new forms of expression.
Non-editorial issues in the use of AI
There are also important non-editorial issues that must be taken into account in any proposed responsible use of AI.
There may be legal and commercial rights issues affecting whether the user of the AI or developer of the AI owns or is liable for any output created by it.
Similarly, any information input into an AI may be used by the developer of that AI to train it further or be shared with third parties.
This may raise significant data protection or information security concerns around, for example, confidential or commercial information, copyright-protected content, or any personal data that is input into, produced by, or contained in the outputs of any tool.
Seeking guidance
For staff and freelancers working for the BBC, a proposal to use AI must first be referred to a senior editorial figure, who should consult Editorial Policy. Editorial Policy may consider referring proposed uses and questions to the AI Risk Advisory Group (AIRA), particularly non-editorial issues.
AIRA includes subject matter experts on AI risk from across the BBC, including legal, data protection, commercial and business affairs, and Infosec as well as editorial policy. This multi-disciplinary approach reflects the range of different issues that inform many deployments of AI. The AI Risk Advisory Group is able to give detailed advice on both the editorial and non-editorial risks in the use of AI.
Use cases
Using Generative AI to create content
Generative AI should not be used to directly create news content published or broadcast by BBC News/Nations, current affairs or factual journalism unless it is the subject of the content and its use is illustrative.
Exemptions, such as the creation of graphics that would previously have been created using conventional digital techniques, may be considered subject to a piloting process.
Using AI tools to create media may be otherwise considered where their use does not challenge the editorial meaning of the content, distort the meaning of events, alter the impact of genuine material or otherwise materially mislead audiences.
Examples of acceptable use might include creating a 'deepfake' face used to preserve anonymity in a documentary.
News, current affairs and factual journalism video and still images must not be manipulated beyond a sympathetic crop and minor adjustments to brightness and contrast.
Any generative AI options provided in editing software, such as (but not only) 'generative fill', which allow the addition or removal of content in images or video, should only be employed where their use would not materially mislead audiences.
Whenever these techniques are used they should be signalled to the audience in an appropriate manner in line with the BBC's AI Transparency advice.
Synthesised audio
The use of either synthesised or cloned audio raises particular opportunities and challenges for audience facing content.
It should not normally be used to clone the voice of another individual without consent unless there is strong editorial justification, and its use does not risk materially misleading audiences or creating a 'false history'.
For example, it would not normally be appropriate to 'voice up' social media posts or texts of speeches using cloned voices of a real person.
However, it may be possible to use either cloned or synthesised voices where it is editorially justified, both context and transparency can be provided, and there is no risk of materially misleading audiences. They may also be used to anonymise the voices of contributors where editorially justified. There may also be other non-editorial considerations, such as rights issues, to take into account. Editorial Policy should be consulted.
At all times any use of synthetic or fabricated voices must be clearly disclosed to avoid the risk of it being taken out of context and presented as genuine either on our own platforms or third party platforms. This is particularly the case when it is being used in audio content rather than video.
Audio editing functionality must not be used to materially mislead audiences in news, current affairs or factual journalism. For example, it may be used to 'de-umm' or edit interviews for clarity; it may also be used to improve the technical quality of any recording that might otherwise be difficult to hear.
However, it should not be used to alter the meaning of the content – for example by creating audio that was not originally recorded, including using edit smoothing tools that work by revoicing a phrase that would otherwise contain a noticeable edit.
Similarly, background noise should not be altered to present the impression that the audio was recorded in a different location.
Using AI to support editorial production or research
Generative AI or AI driven tools may be considered for use as part of the production process where they do not directly create content for publication but provide information, insight or analysis that might aid that process.
It may be used to assist or support editorial processes – for example to analyse data or documents, or to spark creativity such as through storyboarding or ideation techniques.
But using AI to direct news or current affairs content should be avoided where it may challenge, or create the perception of a challenge to, our editorial independence and integrity, for example by using it to provide questions for interviews.
At all times it is important to consider the nature, detail and quality of the instruction, or prompt, that is given to any AI, as well as whether a particular AI is appropriate at all, to avoid potentially inaccurate, inappropriate or biased outputs driven by the specific algorithm or the data it was trained on.
Similarly, any use of transcription or translation technologies requires careful human editorial oversight.
Any use of generative AI or AI-driven tools must be actively monitored, and their outcomes must be further assessed by human editorial oversight before being employed in BBC content.
Editorial Policy and the AI Risk Advisory Group should be consulted whenever it is proposed to use AI tools in this way whether they have been developed internally or acquired from external sources.
Any external AI tools used must be authorised in line with existing BBC software authorisation or procurement processes, before being used.
Using AI to distribute or curate content
The BBC is already using AI in personalisation and recommendation engines to curate content to audiences on platforms like iPlayer and Sounds.
Products that distribute content in this way are considered to be an editorial experience and are therefore subject to editorial approval and human oversight.
As more, and more sophisticated, products are developed they must be consistent with BBC editorial values including impartiality, fairness, and harm and offence.
There are particular considerations around reporting crime and court cases, where contempt issues may be a risk if content is recommended that may be prejudicial to a fair trial. Advice should be sought from the programme legal team.
There are also heightened risks to impartiality during election periods.
Use of AI by third parties, including independent producers
The BBC must take particular care about how AI may have been used in content it has acquired from or had supplied to it by third parties, whether that use was deliberate or inadvertent.
Independent production companies or anyone commissioned to make content for the BBC that involves the use of AI must do so in a manner that is consistent with the Editorial Guidelines and the relevant processes outlined in this guidance, including the principles of impartiality, accuracy, fairness and privacy.
A senior editorial figure who is responsible for compliance within the production team should be responsible and accountable for its use of AI.
Any proposed use of AI, where there may be a material impact on content for BBC audiences, should be discussed as part of the commissioning process. Independent production companies needing guidance should contact their commissioning executive, who may in turn consult Editorial Policy.
The BBC should also be aware of the potential use of AI in acquired content and ensure that its broadcast or publication is in line with this guidance.
BBC producers must also be mindful of the use of AI or synthetic media in material from external sources being used as part of BBC content for example in user generated content. They must always authenticate user generated content carefully and should consult experts in the UGC Hub/BBC Verify.