Fairness Innovation Challenge - addressing bias and discrimination in AI

UK registered organisations can apply for a share of up to £400k for projects resulting in new solutions to address bias and discrimination in AI systems.

Opportunity Details

Registration opens: 16/10/2023

Registration closes: 13/12/2023

Award: Your project’s total costs can be up to £130,000 and will be 100% funded up to that maximum.

Organisation: Innovate UK


In this funding competition, Innovate UK worked with the Centre for Data Ethics and Innovation (CDEI), part of the Department for Science, Innovation and Technology (DSIT), to invest up to £400,000 in innovation projects. The challenge formally launched on Monday 16 October 2023, closed on 13 December 2023, and asked for solutions that focus on real-world examples.

This competition is not open to new entrants: the details below, dating from 2023, are retained for reference. For recent updates, follow the Centre for Data Ethics and Innovation on LinkedIn, or on Twitter/X.

The Centre for Data Ethics and Innovation (CDEI) is launching a Fairness Innovation Challenge to drive the development of novel solutions to address bias and discrimination in AI systems. We are also delighted to deliver this challenge in partnership with the Equality and Human Rights Commission (EHRC) and the Information Commissioner’s Office (ICO), who will help guide winners through some of the legal and regulatory issues relating to the fairness implications of AI systems, and will use learnings from the challenge to shape their own broader regulatory guidance.

This competition aims to encourage the development of socio-technical approaches to fairness, provide greater clarity about how different assurance techniques can be applied in practice, and test how different strategies to address bias and discrimination in AI systems can comply with relevant regulation.

Winning proposals will receive grant funding to develop their solutions over a one-year period.

Our objectives are to:

  • encourage the development of socio-technical approaches to fairness
  • test how strategies to address bias and discrimination in AI systems can comply with relevant regulation including the Equality Act 2010, the UK General Data Protection Regulation (GDPR) and the Data Protection Act 2018
  • provide greater clarity about how different assurance techniques can be applied in practice

Your proposal must address bias and discrimination in one of the following use cases:

  • provided healthcare use case
  • open use case

Your proposed solution must adopt a socio-technical approach to fairness, seeking to address not only statistical but also human and structural biases associated with the AI system in question.

Outcomes and objectives

Ensuring AI systems are built and used fairly can be challenging but is hugely important if the potential benefits of AI are to be realised.

Recognising this, the government’s white paper “A pro-innovation approach to AI regulation” proposes fairness as one of five cross-cutting principles for AI regulation. Fairness encompasses a wide range of issues, including avoiding bias that can lead to discrimination.

This issue has been a core focus for CDEI since we were established in 2018. Our 2020 “Review into bias in algorithmic decision making” provided recommendations for government, regulators, and industry to tackle the risks of algorithmic bias. In 2021, we also published the “Roadmap to an Effective AI Assurance Ecosystem”, which set out how assurance techniques such as bias audit can help to measure, evaluate and communicate the fairness of AI systems. Most recently, we published our report “Enabling responsible access to demographic data to make AI systems fairer”, which explored novel solutions to help organisations to access the data they need to assess their AI systems for bias.

Over this period, bias and discrimination in AI systems have been a strong focus across industry and academia, with significant numbers of academic papers and developer toolkits emerging. However, organisations seeking to address bias and discrimination in AI systems in practice continue to face a range of challenges, including:

  • accessing the demographic data they need to identify and mitigate unfair bias and discrimination in their systems
  • determining what fair outcomes look like for any given AI system and how these can be achieved in practice through the selection and use of appropriate metrics, assurance tools and techniques, and socio-technical interventions
  • ensuring strategies to address bias and discrimination in AI systems comply with relevant regulatory frameworks, including equality and human rights law, data protection law, and sector-specific legislation

The Challenge will seek to address these issues by supporting the development of novel solutions to address bias and discrimination in AI systems. Winning solutions will implement socio-technical approaches to fairness, provide greater clarity about how different assurance techniques can be applied in practice, and test how strategies to address bias and discrimination in AI systems interact with and can comply with relevant regulation.

Timeline for submission

  • 16 Oct: The Fairness Innovation Challenge formally launches. The application portal on Innovate UK will open for submissions.
  • 19 Oct: CDEI hosts in-person launch event in London for prospective participants. You can register your interest via Eventbrite.
  • 24 Oct: Innovate UK hosts applicant briefing and networking session for prospective participants. More details to follow.
  • 13 Dec: Innovate UK portal for submissions closes.
  • Spring 2024: Challenge winners are announced.

Partners

The CDEI leads the Government’s work to enable trustworthy innovation using data and AI as part of the Department for Science, Innovation and Technology (DSIT).

CDEI will deliver the challenge with our delivery partner Innovate UK and in partnership with UK regulators, the Equality and Human Rights Commission (EHRC) and the Information Commissioner’s Office (ICO).

Innovate UK is the UK’s national innovation agency. Innovate UK supports business-led innovation in all sectors, technologies and UK regions. It helps businesses grow through the development and commercialisation of new products, processes, and services, supported by an outstanding innovation ecosystem that is agile, inclusive, and easy to navigate.

The Equality and Human Rights Commission is Great Britain’s national equality body and has been awarded an ‘A’ status as a National Human Rights Institution (NHRI) by the United Nations. The EHRC works to help make Britain fairer by safeguarding and enforcing the laws that protect people’s rights to fairness, dignity and respect.

The Information Commissioner’s Office (ICO) is the UK regulator for Data Protection and Freedom of Information, with key responsibilities under the UK General Data Protection Regulation (GDPR), Data Protection Act 2018 (DPA) and Freedom of Information Act 2000 (FOIA). The ICO’s role is to uphold information rights in the public interest. AI is a priority area for the ICO, with guidance being regularly published to support responsible innovation whilst protecting individual rights and freedoms.

  • To lead a project your organisation must be a UK registered:

    • business of any size
    • academic institution
    • research and technology organisation (RTO)
    • charity
    • not for profit
    • public sector organisation

    An eligible organisation can lead on any number of distinct projects.

    Subcontractors are allowed in this competition. We recognise that developing socio-technical solutions to address bias and discrimination in AI systems requires a breadth of knowledge and skills that may require you to work with different organisations as subcontractors.

    Grant funding in this competition is awarded as Minimal Financial Assistance (MFA). This allows public bodies to award up to £315,000 to an enterprise in a 3-year rolling financial period. To establish your eligibility, we need to check that our support, added to the amount you have previously received, does not exceed the limit.

  • Your project must:

    • have total project costs of up to £130,000
    • carry out its project work in the UK
    • intend to exploit the results from or in the UK
    • start by 1 May 2024
    • end by 31 March 2025

    We are not funding projects that:

    • do not adopt a socio-technical approach to fairness
    • do not address at least two of the stages in the process of addressing bias and discrimination in AI systems
    • do not evidence the potential for the proposed innovation to generate positive economic or societal impact

    If you are proposing your own use case, we will only accept projects that are transparent and open about the models, data and risks to fairness that the use case presents.

  • The aim of this competition is to drive the development of novel solutions to address bias and discrimination in artificial intelligence (AI) systems.

    Our objectives are to:

    • encourage the development of socio-technical approaches to fairness
    • test how strategies to address bias and discrimination in AI systems can comply with relevant regulation including the Equality Act 2010, the UK General Data Protection Regulation (GDPR) and the Data Protection Act 2018
    • provide greater clarity about how different assurance techniques can be applied in practice

    Assurance techniques include the methods and processes used to verify and ensure that systems and solutions meet certain standards, including those related to fairness.

    Despite increased interest in addressing bias and discrimination in AI systems, organisations continue to face numerous challenges, including:

    • a lack of clarity around best practice for the use of fairness metrics and toolkits
    • limitations associated with technical approaches
    • risks of breaching UK legislation

    This competition aims to tackle these challenges in practice.

  • You must propose a solution to address bias and discrimination in an AI system in one of the following real-world use cases:

    • provided healthcare use case
    • open use case

    Your proposal must include:

    • a description of the process you would adopt to detect and address bias and discrimination in the selected use case, including potential technical and socio-technical interventions
    • an explanation of why you have selected this particular approach, for example, why you have chosen to use a particular fairness metric or socio-technical intervention
    • an explanation of how you will also ensure broader ethical or legal fairness within the UK context, for example compliance with data protection legislation and equalities law, beyond just looking at technical and mathematical fairness

    Your proposed solution must also address at least two of the following stages in the process of addressing bias and discrimination in AI systems:

    • accessing demographic data (for bias detection)
    • bias detection
    • bias mitigation
    • ongoing monitoring and evaluation

    Your proposed solution must adopt a socio-technical, rather than purely mathematical or statistical, approach to achieving fairness.
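
    As an illustration only (this code is not part of the challenge guidance), the minimal Python sketch below covers two of the stages listed above, bias detection and bias mitigation, for a hypothetical binary classifier with a single protected attribute. The choice of demographic parity as the metric and per-group thresholds as the mitigation are assumptions made for the example, not a prescribed method.

    # Illustrative only: bias detection and a simple post-processing mitigation
    # for a binary classifier with one protected attribute. The metric
    # (demographic parity difference) and the per-group threshold adjustment
    # are assumptions for this sketch, not a method prescribed by the challenge.
    import numpy as np
    import pandas as pd

    def demographic_parity_difference(df, group_col, pred_col):
        """Largest gap in positive-prediction rate between demographic groups."""
        rates = df.groupby(group_col)[pred_col].mean()
        return rates.max() - rates.min()

    def mitigate_with_group_thresholds(df, group_col, score_col, target_rate):
        """Pick a per-group score threshold so each group's selection rate
        lands close to a shared target rate (one simple post-processing step)."""
        preds = pd.Series(0, index=df.index)
        for _, part in df.groupby(group_col):
            threshold = part[score_col].quantile(1 - target_rate)
            preds.loc[part.index] = (part[score_col] >= threshold).astype(int)
        return preds

    # Hypothetical data: model scores plus a demographic attribute.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({"group": rng.choice(["A", "B"], size=1_000)})
    # Give group "A" systematically higher scores so there is a gap to detect.
    df["score"] = rng.uniform(size=1_000) + np.where(df["group"] == "A", 0.15, 0.0)
    df["pred"] = (df["score"] >= 0.55).astype(int)

    print("gap before:", demographic_parity_difference(df, "group", "pred"))
    df["pred_adjusted"] = mitigate_with_group_thresholds(df, "group", "score", target_rate=0.5)
    print("gap after: ", demographic_parity_difference(df, "group", "pred_adjusted"))

    In a real proposal, the metric, mitigation strategy and ongoing monitoring plan would need to be justified against the specific use case and combined with the socio-technical interventions described below, rather than treated as a purely statistical fix.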

  • A socio-technical approach considers the broader historical, social and cultural context in which an AI system is embedded and seeks to address both statistical and structural biases associated with the use of AI systems.

    Possible socio-technical interventions include but are not limited to:

    • participatory forms of data collection, audit or mitigation
    • governance interventions addressing organisational biases
    • intersectional bias analysis
    • custom context-specific bias metrics
    • engagement with subject matter experts
    • investigating bias in human decision making processes surrounding the system

    You can access information about socio-technical approaches to fairness in this paper and on page 10 of this guidance from the National Institute of Standards and Technology (NIST).
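
    As a concrete, purely hypothetical illustration of one such intervention, the short Python sketch below performs an intersectional bias analysis: outcome rates are compared across the intersection of two demographic attributes rather than one attribute at a time. The attributes, data and pandas-based approach are assumptions made for the example, not part of the challenge guidance.

    # Illustrative sketch of intersectional bias analysis: compare outcome rates
    # across the intersection of two protected attributes, so that subgroups
    # hidden by single-attribute analysis (e.g. older women) become visible.
    # All attributes and records here are invented for the example.
    import pandas as pd

    def intersectional_rates(df, attrs, outcome_col):
        """Positive-outcome rate and sample size for every intersectional subgroup."""
        summary = df.groupby(attrs)[outcome_col].agg(rate="mean", n="size").reset_index()
        summary["gap_vs_overall"] = summary["rate"] - df[outcome_col].mean()
        return summary.sort_values("gap_vs_overall")

    # Hypothetical records with two demographic attributes and a model decision.
    df = pd.DataFrame({
        "sex":      ["F", "F", "M", "M", "F", "M", "F", "M"],
        "age_band": ["<40", "40+", "<40", "40+", "40+", "<40", "<40", "40+"],
        "approved": [1, 0, 1, 1, 0, 1, 1, 0],
    })

    print(intersectional_rates(df, ["sex", "age_band"], "approved"))

    The same pattern extends to custom context-specific metrics: the aggregation can be replaced by whatever quantity is meaningful for the use case, provided the small sample sizes of intersectional subgroups are reported alongside the rates.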

  • If successful, on completion of your funded project you will be required to attend a showcase event to present evidence of your outcomes.

    You will also be required to share the outputs and outcomes of your project. This will include, at a minimum:

    • a White Paper explaining the solution you developed, its impact, and lessons others can learn from your project
    • if a method or tool is developed as part of the challenge, the code or a description of it must be made available and open source
    • if a proprietary method or tool is used as part of the challenge, a transparency record must be completed and made publicly available, for example using the Algorithmic Transparency Recording Standard (ATRS) or a model card
  • This use case asks participants to submit fairness solutions to address bias and discrimination in the CogStack Foresight model developed by King’s Health Partners and Health Data Research UK, with the support of NHS AI Lab. This is a generative AI model for predicting patient outcomes based on Electronic Health Records.

    CogStack is a platform that has been deployed in several NHS hospitals. The platform includes tools for unstructured (text) health data centralisation, natural language processing for curation, as well as generative AI for longitudinal data analytics, forecasting and generation.

    This generative AI, Foresight, is a Generative Pretrained Transformer (GPT) model. Foresight can forecast the next diagnostic codes and other standardised medical codes, including medications and symptoms, based on its source dataset. Foresight can also generate synthetic longitudinal health records that match the probability distributions of the source data, allowing pilots on synthetic data without direct access to private data.

    As these AI models have been trained on real-world data, they carry the biases of their historical datasets, including demographic biases, styles of historical practice and biased missingness from data capture.
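
    To illustrate the general mechanism only (this is not Foresight or CogStack code, and the codes and records below are invented), a toy first-order Markov model over code sequences shows how any generative model that matches the conditional distributions of its source data will also reproduce that data’s historical biases in what it forecasts or synthesises.

    # Toy illustration, not Foresight: estimate P(next code | current code) from
    # hypothetical longitudinal records, then forecast by sampling from it. The
    # forecasts can only reflect what the historical data contains, so any
    # demographic bias or biased missingness in the records carries through.
    import random
    from collections import Counter, defaultdict

    # Invented records as sequences of standardised codes.
    records = [
        ["E11", "I10", "N18"],
        ["E11", "I10", "I25"],
        ["E11", "N18"],
        ["I10", "I25"],
    ]

    transitions = defaultdict(Counter)
    for record in records:
        for current, nxt in zip(record, record[1:]):
            transitions[current][nxt] += 1

    def forecast_next(code):
        """Sample the next code from the learned conditional distribution."""
        codes, weights = zip(*transitions[code].items())
        return random.choices(codes, weights=weights)[0]

    print(forecast_next("E11"))  # reflects whatever followed "E11" historically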

  • For this option, you can propose your own use case. This includes AI models, systems and solutions at different stages of prototyping or deployment that are believed to be at risk of bias and discrimination.

    If you are proposing your own use case, you must provide additional information in your application about:

    • background or context: what you are using an AI-enabled system for, what the model is, why it is being used, and what problem it solves
    • potential risks to fairness: what fairness challenges are associated with this system for this specific use case or context, and why it is difficult to make this system fairer
    • technical details: a description of the dataset, including its size and any variables, as well as the learning algorithms used to train the models

    Your use case and proposed solutions will need to be published or shareable. This challenge is only open to use cases that are transparent about their models, tools and data, as well as the challenges and potential solutions to fairness.

  • Innovate UK KTN held an online networking and briefing event on Tuesday 24 October; a recording of the session is available to watch.

    If you would like help to find a project partner or use case, please contact Innovate UK KTN’s Robotics & AI team.
