PPN 02/24 Improving Transparency of AI use in Procurement (HTML)
Published 25 March 2024
Issue
1. AI systems, tools and products are part of a rapidly growing and evolving market, and as such, there may be increased risks associated with their adoption. Care should be taken to ensure that AI is used appropriately, with due regard to both risks and opportunities. As the Government increases its adoption of AI, it is essential that these risks and opportunities are identified and managed as part of the Government’s commercial activities.
Dissemination and Scope
2. The contents of this Procurement Policy Note (PPN) apply to all Central Government Departments, their Executive Agencies and Non-Departmental Public Bodies, and are referred to in this PPN as ‘In-Scope Organisations’.
3. Please circulate this PPN within your organisation, particularly to those with a commercial, procurement and/or contract management role.
4. Other public sector contracting authorities may wish to apply the approach set out in this PPN.
Timing
5. In-Scope Organisations should note the provisions of this PPN.
For Information
6. There is a range of guidance available to support commercial teams to understand AI as a subject area, the appropriate use of AI within public services and how AI products should be procured. References to some of this guidance can be found in Annex A.
7. There are potential benefits to suppliers using AI to develop their bids, enabling them to bid for a greater number of public contracts. It is important to note that suppliers’ use of AI is not prohibited during the commercial process, but steps should be taken to understand the risks associated with the use of AI tools in this context, as would be the case if a bid writer had been used by the bidder. This may include:
- Asking suppliers to disclose their use of AI in the creation of their tender (See Annex B for an example of text that can be added to procurement documentation).
- Putting in place proportionate controls to ensure bidders do not use confidential contracting authority information, or information not already in the public domain, as training data for AI systems, e.g. using confidential Government tender documents to train AI or Large Language Models to create future tender responses.
- Undertaking appropriate and proportionate due diligence:
  - If suppliers use AI tools to create tender responses, additional due diligence may be required to ensure suppliers have the appropriate capacity and capability to fulfil the requirements of the contract. Such due diligence should be proportionate to any additional specific risk posed by the use of AI, and could include site visits, clarification questions or supplier presentations.
  - Additional due diligence should help to establish the accuracy, robustness and credibility of suppliers’ tenders through the use of clarifications or requesting additional supporting documentation, in the same way contracting authorities would approach any uncertainty or ambiguity in tenders.
- Planning for a general increase in activity, as suppliers may use AI to streamline or automate their processes and improve their bid writing capability and capacity, leading to an increase in clarification questions and tender responses.
- Potentially allowing more time in the procurement for due diligence and an increased volume of responses.
- Aligning more closely with internal customers and delivery teams to bring greater expertise on the implications and benefits of AI, relative to the subject matter of the contract.
8. In certain procurements where there are national security concerns in relation to the use of AI by suppliers, additional considerations and risk mitigations may be required. In such instances, commercial teams should engage with their Information Assurance and Security colleagues before launching the procurement, to ensure proportionate risk mitigations are implemented.
9. Commercial teams should take note of existing guidance[footnote 1] when purchasing AI services; however, they should also be aware that AI and Machine Learning are becoming increasingly prevalent in the delivery of “non-AI” services. Where AI is likely to be used in the delivery of a service, commercial teams may wish to require suppliers to declare this and provide further details (see Example Disclosure Question 3). This will enable commercial teams to consider any additional due diligence or contractual amendments to manage the impact of AI as part of the service delivery.
An illustrative example: Video Conferencing
A Contracting Authority is seeking to implement a new video conferencing suite in their offices. Key considerations for the functionality of this service may include the number of concurrent users, audio and visual call quality, and integration with existing software and services.
In such instances, a supplier may be able to provide a service that includes meeting transcription or live translation using Generative AI.
Additional consideration should therefore be given to how these meeting records will be used (e.g. for training of further AI models) and whether the information gathered is subject to Data Classification policies.
In this example, departments will likely want to ensure that proposed contractual terms make it clear that data retrieved from the videoconference is appropriately managed, e.g. not used for training purposes unless this has been specifically agreed and approved in writing by the client.
Background
10. Generative AI is a form of Artificial Intelligence (AI) - a broad field which aims to use computers to emulate the products of human intelligence - or to build capabilities which go beyond human intelligence. Unlike previous forms of AI, Generative AI produces new content, such as images, text or music.
11. AI systems, tools and products are part of a rapidly growing and evolving market, and as such, there may be increased risks associated with their adoption. Care should be taken to ensure that the use of AI is restricted to use cases where risks can be effectively understood and managed.
12. AI has the potential to accelerate and support decision making processes, especially through the use of large data sets. It is essential to ensure that decisions are made with the support of AI systems, not through reliance upon them, in accordance with the principles outlined in the Government’s Data Ethics Framework and guidance on Understanding Artificial Intelligence Ethics and Safety.
13. Content created with the support of Large Language Models (LLMs) may include inaccurate or misleading statements, where statements, facts or references appear plausible but are in fact false. LLMs are trained to predict a “statistically plausible” string of text; however, statistical plausibility does not necessarily mean that the statements are factually accurate. As LLMs do not have a contextual understanding of the question they are being asked, or the answer they are proposing, they are unable to identify or correct any errors they make in their response. Care must be taken both in the use of LLMs and in assessing returns that have used LLMs, in the form of additional due diligence.
Contact
Enquiries about this PPN should be directed to the Crown Commercial Service Helpdesk (telephone 0345 410 2222, email [email protected]).
Annex A - Guidance and Best Practice
Assessing if artificial intelligence is the right solution
Government Digital Service (GDS) and the Office for Artificial Intelligence (OAI) have published joint guidance[footnote 2] for anyone responsible for choosing technology in a public sector organisation; it will help you determine whether the use of AI is appropriate for your specific use case.
Guidelines for AI Procurement
Cabinet Office, Office for Artificial Intelligence (OAI) and the World Economic Forum have published joint guidance[footnote 3] to provide a set of guiding principles on how to buy AI technology, as well as insights on tackling challenges that may arise during procurement.
A guide to using artificial intelligence in the public sector
Government Digital Service (GDS) and the Office for Artificial Intelligence (OAI) have published joint guidance[footnote 4] on how to build and use AI in the public sector. This guidance covers how:
- to assess if using AI will help you meet user needs
- the public sector can best use AI
- to implement AI ethically, fairly and safely
You can contact [email protected] for further information about using AI in the public sector.
Guidance to civil servants on use of Generative AI
Generative Artificial Intelligence (Gen AI) is a subset of AI that focuses on creating new data. Unlike traditional machine learning models that are designed for specific tasks, Gen AI models are capable of generating new content, such as images, text, music, and more. CDDO has developed internal guidance[footnote 5] for the use of AI and Large Language Models (LLMs) across the Civil Service.
Generative AI Framework for HMG
Central Digital and Data Office (CDDO) have published further, more detailed guidance[footnote 6] on Generative AI. The Generative AI Framework for HMG is guidance on using Generative AI safely and securely for civil servants and people working in government organisations. The white paper A pro-innovation approach to AI regulation, sets out five principles to guide and inform AI development in all sectors. This framework builds on those principles to create ten core principles for generative AI use in government and public sector organisations.
Managing your artificial intelligence project
The Office for Artificial Intelligence (OAI) has published guidance[footnote 7] on how to manage a project which uses artificial intelligence.
Artificial intelligence in public services
The Equality and Human Rights Commission has published guidance[footnote 8] for organisations on procuring, commissioning, building or adapting AI for the workplace or the services they provide.
Data Ethics Framework
Guidance[footnote 9] for public sector organisations on how to use data appropriately and responsibly when planning, implementing, and evaluating a new policy or service.
Understanding Artificial Intelligence Ethics and Safety
The Office for Artificial Intelligence (OAI) and the Government Digital Service (GDS) have produced guidance[footnote 10] in partnership with The Alan Turing Institute. This guidance will help to ensure your project is fair, prevents bias or discrimination, and safeguards public trust in your project’s capacity to deliver safe and reliable AI, through the use of the ‘FAST Track Principles’: Fairness, Accountability, Sustainability and Transparency.
Links to available training resources
There is a range of training resources related to AI available through the Government Campus and Civil Service Learning.
Government Campus: Artificial Intelligence[footnote 11]
Civil Service Learning: Artificial Intelligence[footnote 12]
Annex B - AI Disclosure Questions
In order to establish whether AI has been used by suppliers to develop responses to questions in the procurement, a disclosure question can be added to the Invitation to Tender. This requires suppliers to disclose their use of AI when responding to the tender questions, or as part of their proposed delivery of the service.
There is a series of example disclosure questions set out below which may be helpful. The questions included in this annex should not be scored or taken into account in tender evaluation, and should be used for information only. Contracting authorities can, however, continue to ask and evaluate any further relevant questions about the use of AI as part of their award process which are specific to their requirements and compliant with procurement law. Whether, and if so how, any questions are to be scored should be set out in the procurement documents.
These questions may be helpful in informing wider commercial strategy, defining how to undertake meaningful due diligence, and managing risk.
Note: It is important that contracting authorities do not discriminate against particular suppliers in the use of these questions or in the interpretation of supplier responses.
Example Disclosure Question 1: For Information Only (Not Scored)
AI tools can be used to improve the efficiency of your bid writing process; however, they may also introduce an increased risk of misleading statements via ‘hallucination’.
Have you used AI or machine learning tools, including large language models, to assist in any part of your tender submission? This may include using these tools to support the drafting of responses to Award questions.
Yes/No
Please provide details: ………………
Where AI tools have been used to support the generation of Tender responses, please confirm that they have been checked and verified for accuracy:
Yes/No
Example Disclosure Question 2: For Information Only (Not Scored)
AI tools can be used to improve the efficiency of your bid writing process; however, they may also introduce an increased risk of misleading statements via ‘hallucination’.
Please detail any instances where AI or machine learning tools, including large language models, have been used to generate written content or support your bid submission.
Please provide details: ………………
Where AI tools have been used to support the generation of Tender responses, please confirm that they have been checked and verified for accuracy:
Yes/No
Example Disclosure Question 3: For Information Only (Not Scored)
Are AI or machine learning technologies used as part of the products/services you intend to provide to [Insert Contracting Authority Name]?
Yes/No
If Yes: Please describe how AI technologies are integrated into your service offerings.
Please provide details: ………………