Data, not decisions – the MoJ's AI Action Plan
In the wake of the Leveson Independent Review of the Criminal Courts and the Independent Sentencing Review, the Ministry of Justice (MoJ) has published its AI Action Plan for Justice. It is a policy document that signals a restrained approach to AI, acknowledging the concerns about, and limitations of, artificial intelligence in a system as important and complicated as the UK's justice system.
The Plan outlines how AI will be deployed "responsibly and proportionately" across courts, tribunals, prisons, probation services, and their supporting infrastructure. It is not a bullish bid for radical transformation – any reader of Leveson's Part 1 report could see that the use cases are being very carefully considered. The MoJ instead sets out a blueprint for cautious modernisation – one that proposes to preserve the integrity of legal principles while embracing technological efficiency.
Supporting, not replacing, human judgement
Central to the Plan is a commitment to judicial independence. AI is framed as a tool to support human decision-making, not to substitute for it. This distinction is more than semantic – it reflects a deliberate effort to reassure the public that human decision-making will be enshrined in the justice system at all levels. Algorithms and machine learning will support this, not replace it.
However, it is hard to escape the growing role of AI-parsed data in informing decisions. It appears that the further a process sits from judicial decisions, the more involved AI can be expected to be, as the MoJ pursues efficiencies at all levels.
Whilst the market for AI continues to grow, these systems retain an inherent opacity (the 'black box' problem) and ingrained biases. Humans have their own biases, but they remain accountable and must explain their decisions – AI does not. The MoJ is at pains to make clear that there will always be a 'human' mind at the core, even if the robot writes the judgment.
Transparency and trust
Transparency is a recurring theme. The MoJ has launched an AI Communications Plan and an online hub (ai.justice.gov.uk) to provide updates on models being piloted and scaled. An Ethics Framework is also in development, recognising the sensitivity of justice data and the need for robust privacy and security standards.
Rather than shying away, the MoJ has sensibly identified key limitations: siloed data and outdated infrastructure (across government, but particularly in the creaking justice system), poor-quality data, and a skills gap within government departments. These are not minor hurdles – they are structural issues that could undermine the effectiveness and fairness of AI deployment, and which government will have to address head on (at not insignificant cost). We can expect information architecture to be a key point in the modernisation of the justice system.
Practical applications
The Plan sets out several areas where AI is expected to deliver tangible benefits, including the automation of routine tasks (with notable pilot schemes in drafting and note-taking already underway in the probation service), with the aim of maximising both judicial and civil servant time for 'human' work requiring empathy and expertise.
The judiciary is being encouraged to utilise Microsoft's Copilot, with suggested use cases including bundle summarisation and chronology building. The probation service is being encouraged to use it for case notes and rehabilitation programmes, whilst prisons are being steered towards capacity management and inmate and staff learning programmes.
Of most interest to the public is likely to be the proposed 'public engagement' use. The MoJ couches its plans in careful wording to suggest they will not impact on the criminal courts, but it is clear that chatbots and call centres are getting AI services baked in, and that there is an ambition to use AI to triage and 'nudge' applicants away from the courts and into alternative dispute resolution (where possible).
What seems clear is that, whilst final decisions may be reserved for humans, the data and resources they rely on are going to be increasingly processed by AI. According to the Plan, this will impact every level of the justice system, from policy setting and legislation down to individual legal proceedings. In that world, transparency is key, so that decision-makers can be confident that bias and error do not creep in (or, more likely, are adequately risk-managed in the same way as human bias and error should be).
Legal sector implications
For the legal profession, there is a clear push from the MoJ for ‘responsible’ AI use, with proposed training initiatives for regulators such as the SRA, BSB, and CILEX Regulation. Guidance is expected to cascade through the profession, shaping what constitutes “responsible” AI use in legal practice.
This has been a topic of discussion in law firms for years, even before ChatGPT brought AI to the wider market and to public attention, and the regulators will need to be careful not to hamstring development and investment whilst balancing the risks that aren't otherwise dealt with through existing professional obligations.
It's clear the legal sector will be looking to AI for efficiency gains in routine tasks. However, AI is capable of far more than rote outputs, and the MoJ is possibly highlighting new frontiers to be explored, not least in drafting court documents (and in carefully checking that the Copilot chronology isn't full of AI-hallucinated events). The Plan does not delve too deeply into how it thinks the court should receive AI-driven material (as opposed to produce it), but Leveson has given clear warnings about the growing expertise required to unpick these systems and the decisions they inform. If policy and judicial decision-making rely on AI-parsed data, we might find challenges to decision-making becoming more and more technical, requiring expert evidence and perhaps a reconsideration of the procedural rules across the justice system.
A human-centred future
Perhaps the most heartening element of the Plan is the suggestion that focused, judicious investment will be made across government to bring AI into wider use via scalable and transparent procurement. The appointment of a Chief AI Officer – a human, not a bot – and recognition of the government's poor track record to date in retaining technical expertise in a competitive market are the first steps in overcoming systemic barriers to the adoption of technology.
Overall, this is a positive picture of judicial and judicious progressiveness, with a degree of central planning and economies of scale. The MoJ's AI Action Plan is not the dawn of the android arbitrators. It is a careful, measured step forward, with clear-eyed plans to manage the limitations of the technology and the MoJ's own limitations in funding and expertise. It is likely we will see tentative use of AI in the civil and family courts first. We will have to wait to see how Part 2 of the Leveson Review sets out the practical next steps for the criminal courts. We must also wait and see whether, despite the optimistic note, the public sector can successfully manage the wider adoption of AI.