This simple UK charity AI policy template is intended mainly for small charities. Larger charities, or those commissioning, designing, building or funding AI systems or linking AI to charity data sets, should use one or more of our more sophisticated toolkits at the bottom. The Charity AI Governance and Ethics Framework can also be used to create a more sophisticated policy or to extract items to include in existing policies. In addition, where appropriate, AI has been built into our 40+ downloadable charity policy templates, and our other toolkits below can be used to create additional charity AI policies. To access all our charity AI resources, go to our AI Services Page; to access everything, register and then log in.
This policy applies to all trustees, other volunteers, employees, contractors, and third-party representatives working on our behalf. Its requirements should be reflected in other policies and procedures, agreements and contracts, as necessary.
We define Artificial Intelligence (AI) as the ability of machines or software to perform tasks that would normally require human intelligence. AI systems can process data, learn from it, and make decisions or predictions based on that data. AI is a broad field that encompasses many different types of systems and approaches to machine intelligence, including rule-based AI, machine learning, neural networks, natural language processing and robotics.
All key AI decisions and proposals will be subject to scrutiny and approval by the trustee Board. The Board will be advised of any concerns or breaches in AI use and will review this policy and our AI performance annually to keep up with evolving AI technologies and ethical standards.
Use of AI by our charity will have appropriate human oversight, with humans responsible for making all final decisions on AI outputs. We will maintain oversight by monitoring AI systems’ performance, impact and compliance with this policy on an ongoing basis.
To support this, we will create any necessary guidelines on the collection, use and storage of data and on the use of algorithms in decision-making, including the steps we will take to ensure these are as fair and unbiased as reasonably possible. We will also ensure accountability for the decisions made by AI systems, which may include measures such as auditing, reporting and review processes.
We will support our people in adapting to the changes AI will bring by providing them with appropriate support and skills development and by taking their needs into account when designing roles and work procedures.
The requirements of our AI policy will be embedded in other relevant policies and procedures, contracts, agreements and other documentation, such as job descriptions. We will ensure that those in our charity with responsibility for, or involvement in, AI understand our charity AI policy and their responsibilities in delivering it, and are accountable for doing so.
Our AI risk analysis has included any specific groups who may be at risk and other reasonably foreseeable uses of the technology, including accidental or malicious misuse. The risks have been identified and quantified, and the avoidance/mitigation actions put in place will ensure that the level of risk remains within acceptable limits.
We have carried out a Data Protection Impact Assessment (DPIA) for AI and made any necessary changes to our policies and procedures. As part of that, insofar as reasonably possible, we will:
We are aware of the ICO guidance on AI and data protection and have reflected any additional requirements in our policies and procedures.
We are committed to genuinely engaging with our stakeholders to ensure that our AI is aligned with their needs and values. We factor into our risk analysis any exclusion or detriment to them based on their identity. We will take reasonable steps to avoid or minimise any exclusion or detriment and transparently communicate this. We will ensure that any AI-created content respects the dignity of individuals and represents them in the way they would wish to be represented, including being accurately depicted, for example in relation to disability equipment or religious dress.
We will make our AI systems and content as accessible as possible. Insofar as reasonably possible, we will use accurate, fair and representative data sets to ensure our AI systems and content are inclusive. We will ensure that any AI decisions are understandable and interpretable by stakeholders. This could involve documenting the logic behind AI decisions, providing clear explanations, and making sure that the reasoning is accessible to non-technical users.
All reasonable efforts will be made to identify any bias within an AI system we use and to ensure that any bias has either been eradicated or mitigated to the point where it is within an acceptable level of risk. We are open and transparent about any bias within an AI system (that we are aware of) and how we manage this.
Where AI is used to create content, there are appropriate checks and safeguards in place to ensure:
There is appropriate content moderation by humans, to minimise the potential for errors, bias, defamatory statements and similar issues.
We are aware of the environmental impact of AI due to its very high energy consumption. We will take this into account when considering our environmental impact and seek to make use of any emerging technologies that will help to minimise or mitigate this.
We will take all reasonable steps to identify copyrighted material. For any such material we use, we will ensure we have the copyright holder's agreement, or that the use falls within 'fair use' or another exception to copyright, the Open Government Licence (OGL), or some other free use category.
We will not knowingly use any online material, such as from social media accounts or online galleries, which has been marked as 'NoAI', 'NoImageAI', or similar.
We will take all reasonable steps to ensure that our use of AI does not have a negative impact on the legal rights and/or liberties of individuals or groups and complies with the Data Protection Act.
In particular, for any AI use of our data, we will ensure that the data is clean, complete and compliant, that we have appropriate consent, and that sensitive personal information is safeguarded.
We have robust cyber security procedures, which everyone is aware of and complies with consistently, to minimise the risk of AI-enabled scams and disinformation.
Version No | Approved By | Approval Date | Main Changes | Review Period
1.0 | Board | 21 May 2024 | Initial draft approved | Annually