Charity AI Best Practice - AI System Design and Procurement

A simple, practical charity AI policy template for anyone designing, buying, commissioning, or making grants for non profit or charity AI systems

Charity AI Best Practice - AI System Design and Commissioning

A draft, practical toolkit for the design, procurement, commissioning, or grant making for non profit or charity AI systems.  It's based on our own work in designing and building charity AI systems.

This toolkit is part of our Charity AI Strategy Phase 2 (What Good Looks Like).  This is an initial draft and all comments will be very welcome, thank you.  Send these to ian@charityexcellence.co.uk.  All input that is used will be credited.  To access all our free charity AI services, guides, tools and training, visit our AI services page.

If designing charity AI is a step too far for you, you can make sure you don't miss out by using Biomni's Charity Bot, which uses the same Tenjin AI system our own AI bunnies do.

Do We Need To Design A New Charity AI System At All?

AI is hugely powerful and is (and will) enable charities to both augment their capabilities and remove digital debt. However, there may already be existing and, potentially better, charity AI and non-AI solutions. Before you start designing a new charity AI system, identify all the options and carry out a cost benefit analysis, including financial and non financial costs and benefits.  Some things to think about:

  • Don't be sucked in jumping onto the charity AI bandwagon just because it's the 'shiny new toy' everyone wants.
    • Is there an existing non AI solution?
    • And, if not, is this a problem that lends itself to a charity AI design solution, or maybe not?
  • Has someone already built a charity AI solution for this, or something similar?
    • Does it provide you with the solution you need?
    • If not, what worked well and not so well?
      • Use that to learn from their mistakes.
  • If the environment is part of your charity ethics, you may wish to consider that AI systems are very power intensive.

My thanks to Dorian Harris whose very helpful input led to this section being created.

Scoping Checklist

It's easy to get a bit carried away but always useful to stand back at the start and check the basics first.

  • We have clearly defined the problem we are trying to solve.
  • We are clear on how we expect AI to solve it.
  • We have built this into our tender documentation to collect and assess the suppliers’ claims about what their AI systems can achieve and.
  • We have or will have processes to continually monitor the AI systems once in use.

CHARITY AI POLICY - DESIGN PRINCIPLES

I make no claim to be any kind of expert but based on my own experience in building AI systems and reading the work of others, these are the principles we have adopted for Charity Excellence and which we have now published as this charity AI policy template.  It's intended to be informed by input by anyone working on this issue and may be used by anyone. If you wish to provide input, please e mail me at ian@charityexcellence.co.uk.

Charity AI - Data Quality & Data Set Training

Inadequate or poor quality or badly cleansed charity data and/or inadequacies in training the data set could (and has been known to) create inaccuracies, misinformation and/or bias.

  • Processes have been built into the design/project implementation that will give a high level of confidence this is not the case.
  • That should include assessment of the data and training processes, with testing of outputs to validate this and effective guardrails.

AI Potential For Exploitation

The potential for AI to be used to manipulate or exploit users, inadvertently or otherwise, such as promoting addictive behaviours or targeting vulnerable users.

  • Processes have been built into the design/project implementation that will give a high level of confidence this is not the case, particularly if users will be vulnerable adults or children.

New AI Cyber Threats

  • The proposed system has been assessed for vulnerabilities to new AI cyber threats, such as LLM prompt injection attacks, and any necessary action has been taken to ensure these will be adequately mitigated.

Onboarding Charity AI Users

Some users may be very receptive but others may be suspicious or feel threatened.

  • AI tools must be user-friendly and trustworthy.
  • Navigation should be intuitive and simple, avoid technical terms and.
  • Produce outputs and reports that are both understandable and useful to those using the system.

Charity AI Policy - Making Systems Transparent & Explainable

Explainability is seen as a key pillar of AI governance.  It enables those using or impacted by AI systems to understand and challenge system outcomes/decisions, not least any bias within these.  However, AI systems can be hugely complex and it may not be possible to explain the reasoning behind the results/decisions these may make. This may lead to mistrust and an unwillingness to use a system.  The use of new techniques, such as online continuous experimentation (Bing) may help to overcome this.

  • AI systems should be designed with processes that are transparent and explainable and/or have processes that reduce the risk of mistakes and bias to an acceptable degree.

Integration of Charity AI with Other Systems

AI systems may integrate with and import/export data from other existing systems or there may be real benefits/risks in doing so.

  • Linkages are mapped to identify and resolve any legal or organisational policy implications and the impact on wider issues, such as those below.

Consolidating Existing Charity Data

Whilst consolidating data may not be essential, there are real benefits in consolidating all of a charity's data into a single source for its internal AI systems.  The main issues in doing so are:

  • Data Fragmentation: Charities often collect data from various sources, including fundraising efforts, programme activities, donor interactions, and financial transactions. This data may be stored in different formats, databases, or even on paper, leading to fragmentation and difficulty in integrating it for AI purposes.
  • Data Quality: Ensuring the quality of data is crucial for effective AI adoption. Charities may encounter issues with incomplete, inaccurate, or outdated data, which can negatively impact AI algorithms' performance and decision-making.
  • Data Privacy and Compliance: Charities must comply with data protection regulations such as the General Data Protection Regulation (GDPR) in the UK. Consolidating data while ensuring compliance with these regulations, particularly regarding sensitive information, adds complexity to the process.

Other Charity AI Design Considerations

AI implementation may well impact on wider organisational issues, such as policies and training, and/or may require changes to working practices and job roles.

  • For this and other reasons above, genuine staff consultation and communication from the outset, through to post implementation is likely to be critical in successful deployment.

Communicating AI In Your Charity

The use of generative AI is a high profile issue and its use is known to come with serious risks.  The benefits may be uncertain or not understood and many in the sector are tech phobic.

  • Consideration should be given to how external stakeholders will perceive implementation and to communicate the above to them effectively.
  • Avoid jargon and, instead us plain English to articulate the process, timescales, benefits, risks and action being taken to mitigate the risks.

Charity AI Design Risk Management

  • A robust design risk management exercise has been carried out, which extends beyond the system itself.  For example, consideration being given to additional checks and controls by management, restricting access and not using the system for some activities/decisions.
  • Any necessary changes to existing risk systems, policies and procedures have been implemented.
  • Where a system is intended to be used by beneficiaries who may be unable or unwilling to use it, consideration should be given to making alternative provision for them.

CHARITY AI DESIGN RESOURCES AND TOOLS

Here are some AI tools and information that may be useful in thinking about designing AI systems for charities and non profits.

Charity AI Governance and Ethics Framework

Our Charity AI Governance and Ethics Framework has been created to promote responsible use of AI by non profits, by providing a simple, practical and flexible framework within which to manage these ethical challenges.  It should be read in conjunction with these design principles.

The AI framework can be used by charities and non profits to:

  • Create an AI framework for your non-profit and/or.
  • Embed relevant aspects in your existing procedures, such as.
    • Data protection, Equality, Diversity & Inclusion (EDI) and ethical fundraising policies.

For those commissioning, funding or designing AI, it can be attached to RFPs, contracts and grant agreements, or relevant extracts included within these.

Charity AI Risk Management Framework

We converted our own AI risk framework into one that can be used by all charities and others.  It has all of the AI risks we've identified, including some more off the beaten track ones, plus those we think will be of specific concern to charities.

OTHER AI DESIGN RESOURCES & TOOLS

A range of resources and tools of relevance to AI design, including standards and assurance systems.

AI Regulation - Data Protection

In March 2023, the ICO updated its guidance on AI and data protection.

AI Regulation - Markets & Competition

There's not much yet but in September 2023, the Competition & Market Authority published its review into AI Foundation Models, and their impact on competition and consumer protection.  This set out 7 principles designed to make developers accountable, prevent Big Tech tying up the tech in their walled platforms, and stop anti-competitive conduct like bundling.

These are focussed on markets and competition (obviously) but include some good thinking on issues such as open and closed models, interoperability, access, transparency and deployment options.  These have been helpfully summarised in a table.

AI Foundation Model Transparency

In October 2023, Stanford University (Human Centred Artificial Intelligence) published the initial version of its Foundation Model Transparency Index (FMTI) that lays out the parameters for judging a model's transparency.

It grades companies on their disclosure of 100 different aspects of their AI foundation models, including how these were built and used in applications.  I think it’s mainly intended to inform the development of Government regulation of AI but is interesting in that it shows how little transparency there is in practice and the differences between the model developers.

The AI Standards Hub

The Alan Turing Institute website for the AI standards community, dedicated to knowledge sharing, capacity building, and world-leading research.  It aims to build a vibrant and diverse community around AI standards.

CDEI AI Assurance Techniques

The Centre for Data Ethics and Innovation portfolio of AI assurance techniques and how to use it.

AI Safety Policy

For the 2023 AI Safety Summit, the Government requested that leading AI companies outline their AI Safety Policies across nine areas of AI safety:

  • Responsible Capability Scaling provides a framework for managing risk as organisations scale the capability of frontier AI systems, enabling companies to prepare for potential future, more dangerous AI risks before they occur.
  • Model Evaluations and Red Teaming can help assess the risks AI models pose and inform better decisions about training, securing, and deploying them.
  • Model Reporting and Information Sharing increases government visibility into frontier AI development and deployment and enables users to make well-informed choices about whether and how to use AI systems.
  • Security Controls Including Securing Model Weights are key underpinnings for the safety of an AI system.
  • Reporting Structure for Vulnerabilities enables outsiders to identify safety and security issues in an AI system.
  • Identifiers of AI-generated Material provide additional information about whether content has been AI generated or modified, helping to prevent the creation and distribution of deceptive AI-generated content.
  • Prioritising Research on Risks Posed by AI will help identify and address the emerging risks posed by frontier AI.
  • Preventing and Monitoring Model Misuse is important as, once deployed, AI systems can be intentionally misused for harmful outcomes.
  • Data Input Controls and Audits can help identify and remove training data likely to increase the dangerous capabilities their frontier AI systems possess, and the risks they pose.

The Government’s Emerging Processes for Frontier AI Safety complements companies’ safety policies by providing a potential list of frontier AI organisations’ safety policies.

A Free One-stop-shop for Everything You Charity Needs

A registered charity ourselves, the CEF works for any non profit, not just charities.

Plus, 100+downloadable funder lists, 40+ policies, 8 online health checks and the huge resource base.

Quick, simple and very effective.

Find Funding, Free Help & Resources - Everything Is Free.

Register Now!

Free Charity AI Services & Support

To access all our free charity AI services, guides, tools and training, visit our AI services page.

This Charity AI Policy Template Is Not Professional Advice

This charity AI policy template is for general interest only and does not constitute professional legal or financial advice.  I'm neither a lawyer, nor an accountant, so not able to provide this, and I cannot write guidance that covers every charity or eventuality.  I have included links to relevant regulatory guidance, which you must check to ensure that whatever you create reflects correctly your charity’s needs and your obligations.  In using this resource, you accept that I have no responsibility whatsoever from any harm, loss or other detriment that may arise from your use of my work.  If you need professional advice, you must seek this from someone else. To do so, register, then login and use the Help Finder directory to find pro bono support. Everything is free.

Register Now
We are very grateful to the organisations below for the funding and pro bono support they generously provide.

With 40,000 members, growing by 3500 a month, we are the largest and fastest growing UK charity community. How We Help Charities

View our Infographic

Charity Excellence Framework CIO

14 Blackmore Gate
Buckland
Buckinghamshire
United Kingdom
HP22 5JT
charity number: 1195568
Copyrights © 2016 - 2024 All Rights Reserved by Alumna Ltd.
Terms & ConditionsPrivacy Statement
Website by DJMWeb.co
linkedin facebook pinterest youtube rss twitter instagram facebook-blank rss-blank linkedin-blank pinterest youtube twitter instagram