Embedding a large language model into a UK government agency.
Change Management
The brief
We supported a major government agency to drive the responsible adoption and integration of a large language model (LLM) across more than 15,000 staff. The adoption posed numerous challenges, including legal and regulatory hurdles, technological integration complexities, and the need for employees to adapt to new tools and workflows. We led the change and people aspects of this transformative initiative, fostering collaboration, ensuring all legal and compliance requirements were met, enabling seamless integration, and minimising disruption to daily operations.
Designing a safe and compliant approach to AI adoption
Working within a highly secure environment, we collaborated as part of a multidisciplinary team including experts in technology, policy, law, ethics and security to develop a comprehensive AI strategy focused on safe, compliant and value-driven implementation. We co-designed robust policies, an ethical AI framework and stringent security controls to maintain public trust and ensure regulatory compliance.
Driving an adaptive and flexible approach to change
We adopted an agile approach, drawing on principles from experimentation and, importantly, maintaining communication with all employees throughout. Through an 8-week incubator, we explored human-machine teaming and ethical AI use, identifying over 150 use cases across business and operational areas, such as real-time threat detection using LLMs. A Proof of Concept (PoC) with 50 users tested the model’s performance and user experience, surfacing key insights for broader rollout and laying the foundation for successful scaling. We scaled the solution to over 1,000 users, supported by an intensive 6-week AI awareness campaign, immersive training and continuous feedback loops. We focused on real-time, operational and business-critical use cases, monitoring usage, prompts and LLM performance to gather rich data insights and ensure continual value for our users.
To foster cultural change, we launched department-wide awareness campaigns, tailored training programmes, and established an AI Champions network. We implemented continuous engagement strategies, including feedback loops, AI sandboxes for experimentation, and knowledge-sharing platforms. We conducted multiple demonstrations of the solution to thousands of stakeholders, designed to showcase the full range of functionalities, potential benefits and transformative value of the LLM. The demos included face-to-face sessions, virtual meetings and scenario-based team exercises. The feedback was invaluable, enabling us to refine our approach, secure stakeholder buy-in, and ensure alignment and support across the organisation.
Each iteration involved planning, execution, feedback gathering and adjustment based on the insights gained. We ran regular feedback sessions with end-users and stakeholders to gather input on the LLM’s functionality and usability. This feedback informed ongoing adjustments, ensuring the LLM met evolving needs and preferences and maintained stakeholder buy-in throughout.
We promoted cross-functional collaboration throughout the process. Team members from various domains – change, commercial, legal, engineering, data science and research – worked closely together, sharing expertise and insights that led to innovative solutions. We also embraced adaptive decision-making, enabling the project to respond effectively to changing circumstances and emerging insights. This flexible, collaborative approach accelerated problem-solving and change adoption, and enriched the project with diverse perspectives.
The difference we made
Now embedded in critical functions like security, HR and customer service, the LLM serves 15,000 users. Capabilities have been expanded with features such as retrieval-augmented generation, knowledge graphs and entity extraction, and users have been empowered with API access to develop their own solutions. We conducted extensive prompt engineering training, laying a solid foundation for optimising LLM output. This was crucial in enabling people to use the LLM effectively, crafting prompts that achieve the desired outcomes and improve overall system performance.
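The retrieval-augmented generation pattern mentioned above can be illustrated with a minimal sketch: fetch the most relevant internal document for a query, then ground the LLM prompt in it. Everything here – the toy corpus, the keyword-overlap scoring and the prompt template – is an illustrative assumption, not the agency's actual implementation, which would use vector embeddings and a secure LLM endpoint.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the document sharing the most words with the query
    (a stand-in for embedding-based similarity search)."""
    return max(corpus, key=lambda doc: len(tokens(query) & tokens(doc)))

def build_prompt(query: str, context: str) -> str:
    """Instruct the model to answer only from the retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal knowledge snippets.
corpus = [
    "Annual leave requests must be approved by a line manager.",
    "Security incidents should be reported within 24 hours.",
]

query = "What is the deadline for reporting security incidents?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

Grounding answers in retrieved organisational content in this way is what lets an LLM respond from an agency's own knowledge base rather than from its training data alone.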
Staff have reported increased confidence in using AI tools, greater trust in AI-augmented workflows, and notable improvements in human-machine teaming across key departmental functions. Efficiency gains and cost savings have simultaneously benefited the organisation’s bottom line.
Our AI adoption lead for this programme has shared insights at the Cross Government Change Managers Working Group and been invited to speak at multiple government departments, including the Department for Business & Trade (DBT), Counter Terrorism Policing, the Department for Energy Security & Net Zero and the Ministry of Justice. He has presented on AI adoption at major conferences such as the IRM Business Change & Transformation Conference, the IRM Business Analysis Conference, and the DBT Project Delivery & Change Conference.