Fairness Innovation Challenge - addressing bias and discrimination in AI
UK registered organisations can apply for a share of up to £400k for projects resulting in new solutions to address bias and discrimination in AI systems.
Opportunity Details
When
Registration Opens
16/10/2023
Registration Closes
13/12/2023
Award
Your project’s total costs can be up to £130,000 and will be 100% funded up to that maximum.
Organisation
Innovate UK
Innovate UK will work with the Centre for Data Ethics and Innovation (CDEI), part of the Department for Science, Innovation and Technology (DSIT), to invest up to £400,000 in innovation projects.
The aim of this competition is to drive the development of novel solutions to address bias and discrimination in artificial intelligence (AI) systems.
Our objectives are to:
- encourage the development of socio-technical approaches to fairness
- test how strategies to address bias and discrimination in AI systems can comply with relevant regulation including the Equality Act 2010, the UK General Data Protection Regulation (GDPR) and the Data Protection Act 2018
- provide greater clarity about how different assurance techniques can be applied in practice
Your proposal must address bias and discrimination in one of the following use cases:
- provided healthcare use case
- open use case
Your proposed solution must adopt a socio-technical approach to fairness, seeking to address not only statistical but also human and structural biases associated with the AI system in question.
-
To lead a project your organisation must be a UK registered:
- business of any size
- academic institution
- research and technology organisation (RTO)
- charity
- not for profit
- public sector organisation
An eligible organisation can lead on any number of distinct projects.
Subcontractors are allowed in this competition. We recognise that developing socio-technical solutions to address bias and discrimination in AI systems requires a breadth of knowledge and skills that may require you to work with different organisations as subcontractors.
Grant funding in this competition is awarded as Minimal Financial Assistance (MFA). This allows public bodies to award up to £315,000 to an enterprise in a 3-year rolling financial period. To establish your eligibility, we need to check that our support, added to the amount you have previously received, does not exceed the limit.
-
Your project must:
- have total project costs of up to £130,000
- carry out its project work in the UK
- intend to exploit the results from or in the UK
- start by 1 May 2024
- end by 31 March 2025
We are not funding projects that:
- do not adopt a socio-technical approach to fairness
- do not address at least two of the stages in the process of addressing bias and discrimination in AI systems
- do not evidence the potential for the proposed innovation to generate positive economic or societal impact
If you are proposing your own use case, we will not accept projects that are not transparent and open about the models, data and risks to fairness that the use case presents.
-
Assurance techniques include the methods and processes used to verify and ensure that systems and solutions meet certain standards, including those related to fairness.
Despite increased interest in addressing bias and discrimination in AI systems, organisations continue to face numerous challenges, including:
- a lack of clarity around best practice for the use of fairness metrics and toolkits
- limitations associated with technical approaches
- risks of breaching UK legislation
This competition aims to tackle these challenges in practice. You must propose a solution to address bias and discrimination in an AI system in one of the real world use cases:
- provided healthcare use case
- open use case
-
Your proposal must include:
- a description of the process you would adopt to detect and address bias and discrimination in the selected use case, including potential technical and socio-technical interventions
- an explanation of why you have selected this particular approach, for example, why you have chosen to use a particular fairness metric or socio-technical intervention
- an explanation of how you will also ensure broader ethical or legal fairness within the UK context, for example compliance with data protection legislation and equalities law, beyond just looking at technical and mathematical fairness
Your proposed solution must also address at least two of the following stages in the process of addressing bias and discrimination in AI systems:
- accessing demographic data (for bias detection)
- bias detection
- bias mitigation
- ongoing monitoring and evaluation
Your proposed solution must adopt a socio-technical, rather than purely mathematical or statistical, approach to achieving fairness.
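As a purely illustrative sketch, and not a recommended or complete solution, the Python snippet below shows what two of the stages listed above, bias detection and ongoing monitoring, might look like in their narrowest technical form. The classifier outputs, group labels, function names and alert threshold are all invented for illustration; a competition entry would need to embed steps like these within a wider socio-technical process.

```python
# Illustrative only: bias detection (via a demographic parity gap) and
# ongoing monitoring (re-checking that gap on new batches of predictions).
# Data, group labels and the threshold are hypothetical.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Bias detection: gap between the highest and lowest group selection rates."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

def monitor(batches, threshold):
    """Ongoing monitoring: flag any batch whose parity gap exceeds the threshold."""
    alerts = []
    for i, (preds, groups) in enumerate(batches):
        gap = demographic_parity_gap(preds, groups)
        if gap > threshold:
            alerts.append((i, round(gap, 2)))
    return alerts

# Toy usage: two batches of model outputs with a self-reported group label.
batches = [
    ([1, 0, 1, 1, 0, 0], ["A", "A", "A", "B", "B", "B"]),
    ([1, 1, 1, 0, 0, 0], ["A", "A", "A", "B", "B", "B"]),
]
print(monitor(batches, threshold=0.5))  # flags only the second batch
```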
-
A socio-technical approach considers the broader historical, social and cultural context in which an AI system is embedded and seeks to address both statistical and structural biases associated with the use of AI systems.
Possible socio-technical interventions include but are not limited to:
- participatory forms of data collection, audit or mitigation
- governance interventions addressing organisational biases
- intersectional bias analysis
- custom context-specific bias metrics
- engagement with subject matter experts
- investigating bias in human decision making processes surrounding the system
You can access information about socio-technical approaches to fairness in this paper and on page 10 of this guidance from the National Institute of Standards and Technology (NIST).
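To make one of the interventions above concrete, the hypothetical sketch below illustrates intersectional bias analysis: outcome rates are broken down across combinations of attributes rather than one attribute at a time. The attributes, outcomes and records are invented for illustration.

```python
# Illustrative only: intersectional bias analysis on invented data.
from collections import defaultdict

def intersectional_rates(records):
    """Positive-outcome rate for each (sex, age_band) combination."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["sex"], r["age_band"])
        totals[key] += 1
        positives[key] += r["outcome"]
    return {key: positives[key] / totals[key] for key in totals}

# Invented records; in practice these would come from the system under review.
records = [
    {"sex": "F", "age_band": "18-39", "outcome": 1},
    {"sex": "F", "age_band": "18-39", "outcome": 1},
    {"sex": "F", "age_band": "65+",   "outcome": 0},
    {"sex": "F", "age_band": "65+",   "outcome": 0},
    {"sex": "M", "age_band": "18-39", "outcome": 0},
    {"sex": "M", "age_band": "18-39", "outcome": 0},
    {"sex": "M", "age_band": "65+",   "outcome": 1},
    {"sex": "M", "age_band": "65+",   "outcome": 1},
]
for subgroup, rate in sorted(intersectional_rates(records).items()):
    print(subgroup, rate)
```

In this invented example each sex and each age band has the same overall outcome rate of 0.5, yet the combined subgroups sit at 0.0 or 1.0, which is the kind of disparity a single-attribute analysis can hide and an intersectional analysis is meant to surface.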
-
If successful, on completion of your funded project you will be required to attend a showcase event to present evidence of your outcomes.
You will also be required to share the outputs and outcomes of your project. At a minimum, this will include:
- a White Paper explaining the solution you developed, its impact, and lessons others can learn from your project
- if a method or tool is developed as part of the challenge, the code or description must be made available and open source
- if a proprietary method or tool is used as part of the challenge, a transparency record must be filled out and made publicly available, for example using the Algorithmic Transparency Recording Standard (ATRS) or a model card
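For illustration only, the snippet below sketches the kind of information a transparency record for a proprietary tool might capture. The field names and values are hypothetical and do not reproduce the official ATRS or model card templates.

```python
# Hypothetical transparency record for a proprietary bias-mitigation tool.
# Field names and values are illustrative only; consult the ATRS or a model
# card template for the actual required structure.
transparency_record = {
    "tool_name": "ExampleFairnessTool",   # invented tool name
    "supplier": "Example Ltd",
    "purpose": "Post-processing bias mitigation for a triage classifier",
    "data_used": "De-identified outcomes with self-reported demographic data",
    "known_limitations": [
        "Evaluated on binary protected attributes only",
        "No intersectional analysis performed",
    ],
    "contact": "fairness-team@example.org",
}
print(transparency_record["purpose"])
```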
-
This use case asks participants to submit fairness solutions to address bias and discrimination in the CogStack Foresight model developed by King's Health Partners and Health Data Research UK, with the support of NHS AI Lab. This is a generative AI model for predicting patient outcomes based on Electronic Health Records.
CogStack is a platform that has been deployed in several NHS hospitals. The platform includes tools for unstructured (text) health data centralisation, natural language processing for curation, as well as generative AI for longitudinal data analytics, forecasting and generation.
This generative AI, Foresight, is a Generative Pretrained Transformer (GPT) model. Foresight can forecast the next diagnostic codes, as well as other standardised medical codes including medications and symptoms, based on its source dataset. Foresight can also generate synthetic longitudinal health records that match the probability distributions of the source data, allowing pilots on synthetic data without direct access to private data.
As these AI models have been trained on real-world data, they carry the biases of their historical datasets, including demographic biases, styles of historical practice and biased missingness from data capture.
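The sketch below is a toy illustration, not the CogStack or Foresight code or API: a GPT-style model over medical codes reduces to a conditional distribution over the next code given the timeline so far, which is used both to forecast the next code and to generate synthetic records, and which also reproduces any biases in the source data. Here an invented bigram table stands in for the trained transformer, and all codes, probabilities and function names are made up.

```python
# Toy illustration only: next-code forecasting and synthetic record
# generation from a hypothetical conditional distribution over medical codes.
import random

# Hypothetical P(next code | previous code); codes and probabilities are invented.
NEXT_CODE = {
    "<start>":         {"hypertension": 0.6, "type-2-diabetes": 0.4},
    "hypertension":    {"statin": 0.5, "stroke": 0.2, "<end>": 0.3},
    "type-2-diabetes": {"metformin": 0.7, "retinopathy": 0.1, "<end>": 0.2},
    "statin":      {"<end>": 1.0},
    "stroke":      {"<end>": 1.0},
    "metformin":   {"<end>": 1.0},
    "retinopathy": {"<end>": 1.0},
}

def forecast(history):
    """Forecasting: distribution over the next medical code given a patient timeline."""
    return NEXT_CODE[history[-1] if history else "<start>"]

def generate_synthetic_record(rng, max_len=10):
    """Generation: sample a synthetic longitudinal record one code at a time."""
    record = []
    while len(record) < max_len:
        dist = forecast(record)
        code = rng.choices(list(dist), weights=list(dist.values()))[0]
        if code == "<end>":
            break
        record.append(code)
    return record

rng = random.Random(0)
print(forecast(["hypertension"]))                          # next-code probabilities
print([generate_synthetic_record(rng) for _ in range(3)])  # synthetic timelines
```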
-
For this option, you can propose your own use case. This includes AI models, systems and solutions at different stages of prototyping or deployment that are believed to be at risk of bias and discrimination.
If you are proposing your own use case, you must provide additional information in your application about:
- background or context: what you are using an AI-enabled system for, what the model is, why it is being used and what problem it solves
- potential risks to fairness: the fairness challenges associated with this system in this specific use case or context, and why it is difficult to make the system fairer
- technical details: a description of the data set, including its size and variables, as well as the learning algorithms used to train the models
Your use case and proposed solutions will need to be published or shareable. This challenge is only open to use cases that are transparent about their models, tools and data, as well as the challenges and potential solutions to fairness.
-
Innovate UK KTN held an online networking and briefing event on Tuesday 24 October; a recording of the event is available to watch.
If you would like help to find a project partner or use case, please contact Innovate UK KTN’s Robotics & AI team.