A draft, practical toolkit for the design, procurement, commissioning or grant making of non-profit and charity AI systems. It's based on our own work designing and building charity AI systems.
This toolkit is part of our Charity AI Strategy Phase 2 (What Good Looks Like). This is an initial draft and all comments are very welcome; thank you. Send these to ian@charityexcellence.co.uk. All input that is used will be credited. To access all our free charity AI services, guides, tools and training, visit our AI services page.
If designing charity AI is a step too far for you, you can make sure you don't miss out by using Biomni's Charity Bot, which uses the same Tenjin AI system our own AI bunnies do.
AI is hugely powerful and is already enabling (and will increasingly enable) charities to both augment their capabilities and remove digital debt. However, there may already be existing, and potentially better, AI and non-AI solutions for charities. Before you start designing a new charity AI system, identify all the options and carry out a cost-benefit analysis, including financial and non-financial costs and benefits; a simple illustrative scoring sketch follows below. Some things to think about:
My thanks to Dorian Harris whose very helpful input led to this section being created.
It's easy to get a bit carried away but always useful to stand back at the start and check the basics first.
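To make the cost-benefit comparison concrete, here is a minimal, purely illustrative Python sketch. The options, weights and scores are all hypothetical, and a real appraisal would use your charity's own criteria and figures.

```python
# Illustrative only: compare candidate options (build AI, buy AI, non-AI fix)
# by weighting financial and non-financial costs and benefits.
# All option names, weights and 1-5 scores below are hypothetical.

OPTIONS = {
    "Build a new AI system":   {"financial": 2, "non_financial": 5},
    "Buy an existing AI tool": {"financial": 4, "non_financial": 4},
    "Non-AI process change":   {"financial": 5, "non_financial": 2},
}

# Weights reflect how much each dimension matters to you (they sum to 1.0).
WEIGHTS = {"financial": 0.6, "non_financial": 0.4}

def weighted_score(scores: dict) -> float:
    """Combine the 1-5 scores for each dimension into one weighted score."""
    return sum(WEIGHTS[dim] * value for dim, value in scores.items())

# Rank the options, highest weighted score first.
for name, scores in sorted(OPTIONS.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```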
I make no claim to be any kind of expert but, based on my own experience in building AI systems and reading the work of others, these are the principles we have adopted for Charity Excellence and which we have now published as this charity AI policy template. It's intended to be informed by input from anyone working on this issue and may be used by anyone. If you wish to provide input, please email me at ian@charityexcellence.co.uk.
Inadequate, poor-quality or badly cleansed charity data, and/or inadequacies in preparing the training data set, can (and has been known to) create inaccuracies, misinformation and/or bias.
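As a purely illustrative example of the kind of basic checks that can catch some of these problems before a data set is used, the sketch below uses pandas to flag missing values, duplicate records and obviously invalid rows. The file and column names are hypothetical.

```python
# Illustrative only: basic data-quality checks to run before a data set is
# used to train or ground an AI system. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("donations.csv")  # hypothetical charity data set

# 1. Missing values, which an AI system could silently mislearn from.
print(df.isna().sum())

# 2. Exact duplicate rows, which can over-weight some records.
print(f"Duplicate rows: {df.duplicated().sum()}")

# 3. Obviously invalid values, e.g. negative donation amounts.
print(f"Negative amounts: {(df['amount'] < 0).sum()}")
```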
The potential for AI to be used to manipulate or exploit users, inadvertently or otherwise, such as by promoting addictive behaviours or targeting vulnerable users.
Some users may be very receptive but others may be suspicious or feel threatened.
Explainability is seen as a key pillar of AI governance. It enables those using or impacted by AI systems to understand and challenge system outcomes/decisions, not least any bias within these. However, AI systems can be hugely complex and it may not be possible to explain the reasoning behind the results/decisions they make. This may lead to mistrust and an unwillingness to use a system. The use of new techniques, such as online continuous experimentation (Bing), may help to overcome this.
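Even where a full explanation of a complex model is out of reach, simpler model-agnostic techniques can at least show which inputs most influence its outputs. As one illustration (not the technique mentioned above), the sketch below uses scikit-learn's permutation importance on synthetic data.

```python
# Illustrative only: permutation importance, a model-agnostic way to show
# which input features most influence a model's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for real charity data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```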
AI systems may integrate with, and import/export data from, other existing systems, and there may be real benefits (and risks) in doing so.
Whilst it may not be essential, there are real benefits in consolidating all of a charity's data into a single source for its internal AI systems. The main issues in doing so are:
AI implementation may well impact on wider organisational issues, such as policies and training, and/or may require changes to working practices and job roles.
The use of generative AI is a high-profile issue and is known to come with serious risks. The benefits may be uncertain or not well understood, and many in the sector are tech-phobic.
You can download an AI and Data Risk Register and toolkit from the AI questions in the Governance and Risk questionnaires. Just register and then log in; it's free.
Here are some AI tools and information that may be useful in thinking about designing AI systems for charities and non-profits.
Our Charity AI Governance and Ethics Framework has been created to promote responsible use of AI by non-profits, by providing a simple, practical and flexible framework within which to manage these ethical challenges. It should be read in conjunction with these design principles.
The AI framework can be used by charities and non profits to:
For those commissioning, funding or designing AI, it can be attached to RFPs, contracts and grant agreements, or relevant extracts included within these.
We converted our own AI risk framework into one that can be used by all charities and others. It has all of the AI risks we've identified, including some more off-the-beaten-track ones, plus those we think will be of specific concern to charities.
A range of resources and tools of relevance to AI design, including standards and assurance systems.
In March 2023, the ICO updated its guidance on AI and data protection.
There's not much yet but, in September 2023, the Competition & Markets Authority published its review into AI Foundation Models and their impact on competition and consumer protection. This set out 7 principles designed to make developers accountable, prevent Big Tech tying up the tech in their walled platforms, and stop anti-competitive conduct like bundling.
These are focussed on markets and competition (obviously) but include some good thinking on issues such as open and closed models, interoperability, access, transparency and deployment options. These have been helpfully summarised in a table.
In October 2023, Stanford University's Human-Centered Artificial Intelligence institute (HAI) published the initial version of its Foundation Model Transparency Index (FMTI), which lays out the parameters for judging a model's transparency.
It grades companies on their disclosure of 100 different aspects of their AI foundation models, including how these were built and used in applications. I think it’s mainly intended to inform the development of Government regulation of AI but is interesting in that it shows how little transparency there is in practice and the differences between the model developers.
The Alan Turing Institute website for the AI standards community, dedicated to knowledge sharing, capacity building, and world-leading research. It aims to build a vibrant and diverse community around AI standards.
The Centre for Data Ethics and Innovation's portfolio of AI assurance techniques, and how to use it.
For the 2023 AI Safety Summit, the Government requested that leading AI companies outline their AI Safety Policies across nine areas of AI safety:
The Government's Emerging Processes for Frontier AI Safety complements companies' safety policies by providing a potential list of safety practices for frontier AI organisations.
And last but not least.
Always ensure that any agreement includes basic standards and terms to help your project work well.
A registered charity ourselves, the CEF works for any non-profit, not just charities.
Plus, 100+ downloadable funder lists, 40+ policies, 8 online health checks and the huge resource base.
Quick, simple and very effective.
Find Funding, Free Help & Resources - Everything Is Free.
This charity AI policy template is for general interest only and does not constitute professional legal or financial advice. I'm neither a lawyer nor an accountant, so I am not able to provide this, and I cannot write guidance that covers every charity or eventuality. I have included links to relevant regulatory guidance, which you must check to ensure that whatever you create correctly reflects your charity's needs and your obligations. In using this resource, you accept that I have no responsibility whatsoever for any harm, loss or other detriment that may arise from your use of my work. If you need professional advice, you must seek this from someone else. To do so, register, then log in and use the Help Finder directory to find pro bono support. Everything is free.