
Table of contents
- 1. What were the committee’s findings?
- 2. What did the committee recommend?
- 3. How did the government respond?
- 4. What are other countries’ views on AI technology in the justice system?
- 5. What recent AI policy has the UK government announced?
- 6. Read more
On 28 November 2022, the House of Lords is due to consider the House of Lords Justice and Home Affairs Committee report ‘Technology rules? The advent of new technologies in the justice system’. The report was published on 30 March 2022.
The report set out the committee’s findings from its ‘new technologies and the application of the law’ inquiry, launched in May 2021. This examined the use of artificial intelligence (AI) technologies in the criminal justice system. Specifically, the report looked at tools that use algorithms or machine-learning technology to help apply the law in England and Wales. This includes algorithm-based technologies used to discover crimes, deter criminal behaviour, and rehabilitate or punish offenders. Technologies referred to in the committee’s report included:
- ‘predictive policing’ that uses historic data to predict where and when certain crimes could occur
- ‘visa streaming’ algorithms which triage visa applications to help visa-issuing authorities decide who to investigate
- facial recognition tools that assess whether separate images depict the same person
The committee considered principles for the safe and ethical use of AI technologies in helping to apply the law, as well as how these principles could be put into practice.
1. What were the committee’s findings?
Overall, the committee recognised the positive impact that AI could have on efficiency, productivity and problem solving in the justice system. However, it said a lack of minimum standards, transparency, evaluation and training in AI technologies meant that the public’s human rights and civil liberties could be compromised. The committee said addressing these issues would “consolidate the UK’s position as a frontrunner in the global race for AI, while respecting human rights and the rule of law”.
2. What did the committee recommend?
2.1 Legal and institutional frameworks
The committee raised several concerns, including that algorithmically manipulated evidence could pose serious risks to an individual’s right to a fair trial. It believed “precise documentation, evaluation by subject experts, and transparency when evidence is subject to algorithmic manipulation” were the most suitable safeguards to apply when such evidence was used in applying the law.
The committee also raised concerns that the government had no cross-departmental strategy on the use of new technologies in the justice system, as well as no clear line of accountability for technology misuse.
It made several recommendations, including that the government should:
- undertake a review to rationalise and consolidate governance structures on the use of new technologies
- establish an independent and statutory national body to govern the use of new technologies
- introduce primary and secondary legislation that sets out general principles and minimum standards for the use of technological solutions when applying the law
- issue national guidance for the use of technology in policing and the criminal justice system
- appoint a taskforce to ensure that there are links of accountability across England and Wales
2.2 Transparency
Public bodies and police forces are not obliged to disclose information on their use of AI technology. As such, the committee raised concerns about the absence of an appropriate scrutiny mechanism. It argued this limited the ability of the press, academia and Parliament to challenge AI technology use. It made several recommendations, including that the government should:
- make ‘transparency’ a statutory principle when introducing legislation on the use of technological solutions
2.3 Human-technology interactions
The committee found evidence that some users of AI technologies were failing to “meaningfully engage” with the outputs from those technologies. The committee said some outputs had been overrated or misinterpreted, and that some challenges to them had been “smothered”. It made several recommendations, including that the government should:
- commission research to determine how the use of predictive algorithms affected decision making
- require officers and officials to undertake training on “meaningful interaction with technologies”
- require technology producers to embed “explainability” into the tools to enable users to understand, scrutinise and interpret the technology’s outputs
2.4 Evaluation and oversight
The committee said that evaluating technologies prior to deployment, and throughout their lifecycle, would be beneficial. It found that police forces had insufficient resources and expertise to evaluate AI technologies. The committee stated that such evaluations would ensure police forces and public bodies could use the tools effectively. It made several recommendations on this basis, including that the government should:
- make comprehensive impact assessments mandatory each time an advanced technology was implemented in a new context or for a new purpose
- require a new independent and statutory national body (as mentioned in section 2.1 above) to introduce a certification system that would certify technologies before they could be deployed
- facilitate the development of statutory specialist ethics committees to scrutinise police forces’ use of technologies, and veto the deployment of new technologies where necessary
3. How did the government respond?
In its formal response published on 23 June 2022, the government disagreed with several of the committee’s recommendations. It said it did not agree with the committee report’s “characterisation that new technologies [would] inevitably override societal values or hand over judgement on matters of necessity and proportionality to machines”. The government emphasised that people and not machines should take the key decisions when applying the law.
On oversight, the government said that existing bodies (such as His Majesty’s Inspectorate of Constabulary and Fire and Rescue Services) already monitored AI technology usage by police forces. It accepted that technologies should be used in a responsible way. It also said technologies were needed to ensure the police could keep up with the digitisation of society and the economy. Referring to a “pressure on policing to respond to crime”, the government said that more emphasis should be placed on the benefits of automation. It argued that criminals continued to rely on technologies for illegal activity, and therefore police needed the tools to adapt quickly and keep pace.
Recommendations that the government disagreed with included the following:
- It was not persuaded that a new independent national body and certification system should be created. It said whilst certification worked in some contexts, it could also create false confidence and be costly.
- It disagreed with the idea of making transparency a statutory principle. It said some police forces were already showing openness about the technologies they used by providing resources, information and impact assessments on their websites. The government said making transparency a legal duty could limit the police’s current transparency efforts to whatever was set out in statute.
- On training, the government said it could not make the police and the judiciary undertake training on “meaningful interaction with technologies”. This was because training was the responsibility of the College of Policing and Judicial College, rather than the government.
- It disagreed that there should be statutory ethics groups created to scrutinise the use of technologies and veto deployment. Whilst the government acknowledged that ethics groups could provide helpful advice, it also argued that they should not be empowered to act on behalf of the public in this way because they would not be democratically elected.
There were some recommendations that the government agreed with:
- It agreed that impact assessments were an important part of the pre-deployment process of technologies. It said that justice sector organisations already had a legal duty to carry out equality impact assessments and data protection impact assessments prior to deploying new technologies.
- The government agreed that further guidance on new technologies for the police could help with operational deployment and officer confidence. However, it said this should not be centrally imposed guidance but instead be sector-led with the government’s support.
Following the government’s response, the committee said it was “disheartened” by the Home Office’s reaction to the committee’s findings. The committee’s chair, Baroness Hamwee (Liberal Democrat), said the government’s response had read to the committee as being “more satisfied with the current position” on technology use in the justice system than was “consonant with the evidence” the committee had received. Baroness Hamwee said that the committee hoped the government would explore the issue further during the House of Lords debate on the report.
4. What are other countries’ views on AI technology in the justice system?
Two EU agencies have recently considered how AI technologies could be used to support cross-border cooperation in criminal justice. In June 2022, a joint report by Eurojust, the EU agency for criminal justice cooperation, and eu-LISA, the EU agency for the operational management of large-scale IT systems in the area of freedom, security and justice, said that AI could help to reduce judicial authorities’ costs in the long term. Listing potential benefits, it said:
Efficiency gains, improvements in effectiveness, cost reduction: all of these performance improvements can eventually result in improved access to justice and reduce time to render judicial decisions.
However, the joint report said these benefits would depend upon the robustness and reliability of the technologies deployed. It also said the use of such technologies should be balanced against the need to ensure that fundamental rights were protected.
5. What recent AI policy has the UK government announced?
Since the publication of the committee’s report, the government has not announced any specific policy on AI technology use in applying the law. However, it has set out plans to further the UK’s AI innovation capability.
In July 2022, the government published an AI action plan which provided a summary of the government’s progress towards realising ambitions set out in the UK national AI strategy. Published in September 2021, this strategy aimed to build upon the 2017 industrial strategy which set out the government’s vision to make the UK a global centre for AI innovation. The AI action plan outlined activities taken by each government department. It said the Home Office had committed over £7mn towards developing the child abuse image database, which included the use of AI to help counter the threat of online child sexual abuse.
Alongside the AI action plan, the government also issued a policy paper on establishing a “pro-innovation” approach to regulating AI in the UK. This proposed that a clear, innovation-friendly and flexible regulatory framework for AI be created in the UK to promote investment and drive productivity. The government sought views on these framework proposals via a 10-week call for evidence that closed on 26 September 2022. Following this, the government said further details would be set out in a white paper later in 2022. The white paper has yet to be published.
6. Read more
- Unesco, ‘AI and the rule of law: Capacity building for judicial systems’, 19 October 2022
- The Stack, ‘Technology use in the justice system is a “wild west” warn peers’, 4 April 2022
- Pinsent Masons, ‘Technology to become embedded in UK justice system by 2040, senior judge suggests’, 22 March 2022
- Bernard Marr, ‘The five biggest tech trends in policing and law enforcement’, Forbes, 8 March 2022
- International Bar Association, ‘Artificial intelligence in criminal justice: Invasion or revolution?’, 13 December 2021
Cover image by Gerd Altmann on Pixabay.