
Technology in the public sector: guidance on using AI safely and securely

On 18 January 2024, the Cabinet Office and Central Digital and Data Office jointly published the Generative AI Framework for HM Government (the guidance), which provides guidance for civil servants and others working in government organisations on using generative artificial intelligence (AI) safely and securely.

Recommended use of AI

The guidance highlights that the most promising use cases are likely to be those which aim to:

  • Support digital enquiries: enable citizens to express their needs in natural language, and find the content and services which are most helpful to them.
  • Interpret requests: analyse correspondence or voice calls to understand citizens’ needs, and refer them to the place where they can best get help.
  • Enhance search: quickly retrieve relevant information to help answer citizens' queries.
  • Synthesise complex data: help users to understand large amounts of data and text, by producing simple summaries.
  • Generate output: produce first drafts of documents and correspondence.
  • Assist software development: support software engineers in producing code.
  • Summarise text and audio: convert emails and records of meetings into structured content, saving time in producing minutes and keeping records.
  • Improve accessibility: support conversion of content from text to audio, and translation between different languages.

Conversely, public servants should take great care and avoid using generative AI for:

  • Fully automated decision-making: significant decisions, such as those affecting someone’s health or safety, should not be made by generative AI alone.
  • High-risk/high-impact applications: generative AI should not be used on its own in high-risk areas which could cause harm to someone’s health, safety, fundamental rights, or the environment.
  • Low-latency applications: generative AI operates relatively slowly compared to other computer systems, and should not be used where an extremely rapid response is required.
  • High-accuracy results: generative AI is optimised for plausibility rather than accuracy, and should not be relied on as a sole source of truth without additional measures to ensure accuracy.
  • High-explainability contexts: a generative AI solution may be difficult or impossible to explain, meaning that it should not be used where it is essential to explain every step in a decision.
  • Limited data contexts: generative AI depends on large quantities of training data. Systems that have been trained on limited quantities of data, for example in specialist areas using legal or medical terminology, may produce skewed or inaccurate results.

Legal compliance

The guidance recognises that while generative AI is new, many of the legal issues surrounding it are not, and lawyers can help navigate common challenges relating to data protection, contractual issues, intellectual property and copyright, equality issues, public law principles and human rights.

Aside from legal considerations, the guidance helps public sector bodies navigate key ethical issues, providing practical tips such as:

  • clearly signpost when generative AI has been used to create content or is interacting with members of the public (where possible, label AI-generated content and consider embedding watermarking into the model; a simple labelling sketch follows this list);
  • collect feedback from diverse groups during user testing to understand how a generative AI system performs in real-world scenarios;
  • give citizens the option to be referred to a person and enable feedback from users and affected stakeholders.
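On the first of these tips, the minimal Python sketch below shows what systematic labelling of AI-generated content might look like in practice. The generate_reply function and the disclosure wording are illustrative assumptions, not drawn from the guidance.

```python
# A minimal sketch, assuming a hypothetical generate_reply() call and
# illustrative disclosure wording; neither is taken from the guidance.

AI_DISCLOSURE = (
    "This response was drafted with the help of generative AI. "
    "You can ask to speak to a person at any time."
)

def generate_reply(prompt: str) -> str:
    """Placeholder for a call to whatever generative model is in use."""
    return f"[model output for: {prompt}]"

def signposted_reply(prompt: str) -> str:
    """Attach an explicit AI-use label to every model-generated reply."""
    draft = generate_reply(prompt)
    return f"{draft}\n\n{AI_DISCLOSURE}"

print(signposted_reply("When is my next bin collection?"))
```

The point of the pattern is that the label is applied by the system itself, rather than being left to individual authors to remember.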

Ultimately, responsibility for any output or decision made or supported by an AI system always rests with the public organisation. The guidance is clear that public bodies must be able to explain, to any citizen using or impacted by a generative AI service, how the generative AI works and which factors influenced its decision-making and outputs.

Human oversight - still essential

The guidance warns that users need to be aware that outputs are statistically informed guesses rather than facts, and that processes need to allow for and consider individuals’ feedback, views and corrections of factual errors.

Although it is possible to use generative AI systems for automated decision-making, Article 22 of the UK GDPR currently prohibits decisions “based solely on automated processing” that have legal or “similarly significant” consequences for individuals. Services that affect a person’s legal status or legal rights must therefore use generative AI only for decision support, where the system assists a human decision-maker in their deliberation.
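As an illustration of that decision-support boundary, here is a minimal Python sketch in which the model may summarise a case and suggest an outcome, but only a named human officer can make the recorded decision. All class and function names are hypothetical, not drawn from the guidance.

```python
# A minimal sketch of a decision-support pattern: the model proposes,
# a named human officer decides, and the recorded decision is the human's.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    summary: str            # model-generated summary of the case
    suggested_outcome: str  # the model's suggestion; never acted on directly

@dataclass
class Decision:
    case_id: str
    outcome: str
    decided_by: str         # a named human officer, required for the record
    ai_assisted: bool       # recorded so the decision can be explained later

def record_decision(rec: Recommendation, officer: str, outcome: str) -> Decision:
    """A decision only exists once an identified human has made it."""
    if not officer:
        raise ValueError("A human decision-maker must be identified.")
    return Decision(rec.case_id, outcome, decided_by=officer, ai_assisted=True)

rec = Recommendation("case-001", "Summary of the evidence...", "approve")
decision = record_decision(rec, officer="J. Smith", outcome="approve")
```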

Whilst generative AI offers huge opportunities for innovation, public bodies should uphold citizens’ expectation ‘to be heard’ by a human when interacting with, and receiving services from, government. Digital innovation and the use of generative AI still need to align with human values and support the public good, but generative AI lacks emotional intelligence, personal experiences and emotions.

The guidance is a useful tool for public bodies, drawing together recommendations and guidance from multiple sources. Public sector organisations should use the guidance to check their thinking when it comes to using generative AI, so they can be confident they are building systems which maintain transparency and enhance public trust.

This framework aims to help readers understand generative AI, to guide anyone building generative AI solutions, and, most importantly, to lay out what must be taken into account to use generative AI safely and responsibly.

Tags

generative ai, public sector, transparency, digital technology, innovation, public services, local authorities, local government