Delivering an Ethical AI Future

This week marked one year since Policy Connect launched the report ‘An Ethical AI Future: Guardrails and Catalysts to make Artificial Intelligence a Force for Good’. Since the report's release, there have been several key developments in Artificial Intelligence (AI) policy, with the Government publishing its consultation response to the AI White Paper earlier this year and the UK hosting the AI Safety Summit at Bletchley Park.

The use of AI continues to grow at a rapid pace, but regulation is lagging behind. The Government's approach to AI regulation is widely regarded as light-touch: it follows a principles-based framework that existing regulators are to translate into sector-specific action. For AI to be a force for good, industry needs an unambiguous and responsive regulatory environment that fosters growth and innovation and promotes public trust.

A year on, we look back at the report's key recommendations, outline the progress made on each, and suggest how the incoming government can go a step further to ensure that the UK is a global leader in AI.

Recommendation 1: The Government must step up engagement with the EU, US, and other bilateral partners.  

Arguably, the Government has made the most progress on this recommendation. In November 2023, it convened the first ever AI Safety Summit, bringing together international partners, leading AI companies, and civil society groups to discuss the risks of AI and ways to mitigate them. One of the summit's main achievements was the Bletchley Declaration, in which 28 countries committed to continue meeting to discuss AI risks. The summit did, however, fail to produce a clear consensus among the parties on establishing international standards for AI safety.

More recently, the UK and US signed a Memorandum of Understanding (MOU) under which they will work together to develop tests for the most advanced AI models. This is a positive development and aligns closely with the report's recommendation. To further promote international collaboration, it is vital that the UK also increases its engagement with the EU, which is leading on regulation through the EU AI Act.

Recommendation 2: The Government should establish a National Centre for AI. The institution should be a single, independent central body with strong regulatory authority, established in statute, and properly resourced. 

As an outcome of the AI Safety Summit, the Government launched the AI Safety Institute (AISI), a research organisation within the Department for Science, Innovation and Technology (DSIT). The Institute is responsible for testing and assessing the risks of advanced AI systems to inform policymakers, and for fostering collaboration across sectors and bodies to strengthen ethical AI development practices globally.

It is positive to see the Government establish a central body, built on the Frontier AI Taskforce, to assess the risks of AI models; however, the AISI's lack of regulatory powers is likely to limit its ability to enforce ethical practices. It is therefore imperative that the Government's regulatory framework creates an independent central body, on a statutory footing, to convene existing regulators and the AISI.

Recommendation 3: The Government should introduce statutory duties that are worded to require organisations to achieve the objective of ‘doing no harm’ through ‘cultural embedding’ of governance and leadership.  

As yet, the Government has not introduced any statutory ‘do no harm’ obligations on organisations using AI. Its approach relies on existing regulators implementing the principles-based framework in their respective sectors. To ensure that AI is developed and brought to market responsibly and without public harm, the report calls for internal governance measures and ways of working that embed the principles of fairness, transparency, and explainability into an organisation's culture.

These governance measures, to be introduced on a statutory basis, include requirements for an organisation's leadership to take account of the duty to ‘do no harm’, for a board member to be accountable for due diligence on AI and ethics, and for any large organisation to have an Ethics Advisory Committee.