This is the charity AI risk assessment template and risk register that informed our work in creating the Charity AI Governance & Ethics Framework. In making the assessment, I split AI risks into near, medium and long term, and also into risks to everyone and risks specific to charities.
It can be used by any charity (or anybody else) to quickly gain an overview of AI risk. You can also copy the AI risk register, or take from it the AI risks relevant to you and include these in your own charity risk assessments and register. You can also download an AI and Data Protection Risk Register toolkit and resources to assess and manage the risks for your own charity, built from the risk register questions in the Governance and Risk questionnaires.
The second section enables you to assess whether AI is a risk or an opportunity for your own charity. At the end, I've included other AI risk management resources and our charity AI FAQs.
The AI Governance & Ethics Framework can be used as a simple AI risk mitigation and avoidance tool. For those designing or commissioning AI systems, you may also want to have a look at the AI Design Principles guide and our Charity AI Data Protection Toolkit.
This risk register template details the main AI risks in likely chronological order, split into AI risks for all of us and, if applicable, specific charity AI risks.
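As a minimal sketch only (not part of the original template), the register below could also be held in code. The field names and the 1–5 likelihood/impact scale are illustrative assumptions, not taken from the template:

```python
from dataclasses import dataclass

# Hypothetical representation of one row of the AI risk register.
# Field names and the 1-5 likelihood/impact scale are illustrative
# assumptions, not part of the original template.
@dataclass
class RiskEntry:
    horizon: str            # "near", "medium" or "long" term
    name: str               # e.g. "Privacy"
    risk_for_everyone: str  # how the risk affects everyone
    charity_risk: str       # the charity-specific risk, if any
    likelihood: int         # 1 (rare) to 5 (almost certain)
    impact: int             # 1 (minor) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, as used in many
        # charity risk registers.
        return self.likelihood * self.impact

register = [
    RiskEntry("near", "Privacy",
              "Default data sharing with LLM data sets",
              "Sensitive data input without consent", 4, 4),
    RiskEntry("near", "Scams",
              "Far more convincing scams",
              "Loss of donor confidence", 3, 5),
]

# Sort so the highest-scoring risks appear first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:>2}  {entry.name} ({entry.horizon} term)")
```

Sorting by a likelihood-times-impact score is one common convention; your charity's own risk framework may weight these differently.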
Near Term Risk | AI Risks for Everyone | Charity AI Risk Assessment
1. Privacy | Large language models (LLMs) are populated by scraping data from the Internet, and the default in many AI systems is to share your data with the LLM data set. | Online content created by charities may be shared and used by others without regard to copyright and IP, and there is a risk of charity people unwittingly, and without appropriate consent, inputting sensitive personal and financial data into LLM data sets.
2. Legal Issues | Data is being input into LLMs without the permission of the content creators and disregarding copyright and other IP, which can then be used by anyone. | In many cases, charities may be happy that their content is used more widely but there may be issues around imagery, particularly of children and women. |
3. Disinformation | AI will enable disinformation to be made far more convincing. There are guardrails, but also an increasing number of jailbreak websites that enable these to be circumvented. | Charities campaign to counter abuse and discrimination, but AI will enable those creating this to do so in a far more convincing and compelling way. There is also a risk of charities using AI-generated imagery without clearly annotating it as such and, in doing so, undermining public trust.
4. Scams | AI is already enabling far more convincing scams. | There is a growing risk to charities and their beneficiaries. More widely, as people become less confident in their ability to tell a scam from a genuine fundraising campaign, scams will undermine public trust, making people less confident about donating.
5. Discrimination | AI does not itself discriminate, but the data sets used and how these are trained may well do so. Ensuring this doesn't happen may slow adoption and would increase cost, so there's a real risk of this not being done correctly. | There are risks of discrimination to charity beneficiaries. Charities also need to ensure that AI systems they adopt or create are not themselves discriminatory.
Medium Term Risk | AI Risks for Everyone | Charity AI Risk Assessment
1. Obsolescence | Organisations that fail to respond risk clinging to services and/or business models that people will no longer need, or that they can access for free, more effectively or more easily using AI. There are going to be losers. | The risk may be greater for charities that are larger and national, because they are less likely to have very niche or place-based activities that are too difficult or small to make it worth building AI for. Our risk checklist is at the bottom of our Impact of AI on Charities Insight Report.
2. Loss Of Human Agency | We could design jobs to retain human agency and achieve more, or cut jobs and dumb down work to cut costs. If we just cut costs, there is a risk of job losses and also of a mental health impact on those who remain in work. | AI offers charities a huge opportunity to achieve more, and our people are our single biggest asset. Dumbing down charity jobs may save some money, but there would be a much greater loss in terms of effectiveness and impact on our people.
3. Moral Outsourcing | A phrase coined by Rumman Chowdhury. Blaming the machine by applying a logic of sentience and choice to AI, and in doing so allowing those creating AI to effectively reallocate responsibility for the products they build onto the products themselves, rather than taking responsibility. The issues with AI arise because of the data sets we choose, how we train these and how we use the AI itself. We need to own the problem. | |
4. Dumbing Down Creativity | AI is already being used to create art and simpler news content. If this becomes widespread, it may well impact human artists, and the quality of news and other content may be dumbed down to become AI 'supermarket muzak'. It may be art, but the creative magic has gone. | |
Long Term Risk | AI Risks for Everyone | Charity AI Risk Assessment
1. Digital Moats | In Sep 23, the UK Competition & Markets Authority highlighted the risk of the AI market falling into the hands of a small number of companies, with a potential short-term consequence that consumers are exposed to significant levels of false information, AI-enabled fraud and fake reviews. In the long term, it could enable firms to gain or entrench positions of market power, and also result in companies charging high prices for using the technology. | Economic moat was a term coined by Warren Buffett for the potential of companies to become so dominant that they exclude all competition. The risk is that the small number of very large charities, which already receive the vast majority of sector fundraising income, will use their significant digital expertise to become even more dominant, to the detriment of smaller charities. Our AI Steppingstones Strategy was, in part, created in response to this risk.
2. Digital Super Exclusion | A Charity Excellence concept, so just our work. AI has the potential to hugely improve accessibility. However, there will always be those who cannot or will not use digital, often the most vulnerable. It seems likely to us that organisations may switch off legacy systems, either as too expensive to run for the remaining very small numbers, or on the assumption that they've solved the problem and these are no longer needed. The result would be far fewer digitally excluded people, but those who remain would become 'super excluded'. | |
3. The Paperclip Maximiser | A 2003 thought experiment, in which a computer wiped out humanity because we were getting in the way of its primary aim to maximise paperclips. This is obviously getting a lot of media attention, but we don't think it's a major risk until we achieve Artificial General Intelligence (AGI): human-like cognitive abilities. Nobody knows, but we think it's probably still years away and may only be partly achievable. | |
4. Loss of Critical Thinking by Humans | In the same way that many lost their mental arithmetic skills with the advent of calculators and Excel, there is a risk we may slowly lose our critical thinking abilities if we outsource them to AI. This may sound daft, but it has been flagged as a potentially serious and insidious risk by some leading thinkers, because critical thinking abilities are critical (funnily enough) to just about everything we do, from simple day-to-day decisions to 'should I press the button and start a nuclear war?' | |
Frontier AI refers to highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models. In Oct 23, the Government Office for Science published the Future Risks of Frontier AI, covering capabilities, other uncertainties and scenarios.
To find out if your charity is ready, or at risk, use our charity AI roadkill toolkit, which gives you the questions you need to ask yourself to find out.
Charity AI is not something that will pose a risk only in the future; it's already here and its use is likely to grow rapidly. Here are two examples of charity AI use.
The more people chat to them, the better they get and that's without taking into account the rapid advances in AI capabilities.
We believe that charities need to think about AI risk in three ways. Which of these apply to any given charity, and the extent to which each matters, will vary, but we think that all charities must take steps to safeguard themselves from AI risks.
As with any other charity risk, the key risks and the steps to take for AI will depend on your charity and your priorities. However, here are four.
Our Charity AI Toolbox explains all of our free charity AI services, with links to all of our charity toolkits, insight briefings, training and the ChatGPT launch pad for charities, as well as our lists of AI resources produced by others. You can also download a shareable infographic from the bottom of any AI page, including this one.
The National Cyber Security Centre (NCSC) provides advice, guidance and support on cyber security. Here is their small business guide and also another for individuals and families.
And the UK ICO has produced an AI and data protection risk toolkit.
In the US, NIST has an AI Risk Management Framework, which is intended to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
And here's the OWASP Machine Learning Security Top Ten.
A registered charity ourselves, the CEF works for any non-profit, not just charities.
Plus, 100+ downloadable funder lists, 40+ policies, 8 online health checks and the huge resource base.
Quick, simple and very effective.
Find Funding, Free Help & Resources - Everything Is Free.