Starmer’s race to AI-fy public services
What does Labour’s unfettered push for Big Tech and AI in government mean for public services in Britain?

A year after the Labour Party came to power in the UK, it seems all bets for “growth” are on artificial intelligence.
Since Prime Minister Keir Starmer unveiled the AI Opportunities Plan earlier this year, drafted by tech investor Matt Clifford before his swift return to industry, most government announcements on public services, from education to health to the civil service, have centred on AI rollouts. Yet the government remains strikingly evasive about how any of this will actually work.
As someone following these developments closely, I am none the wiser about how this AI-powered growth agenda is going to unfold, let alone solve problems ranging from growing NHS waiting lists and prison overcrowding to the lack of social housing, to name a few. One thing is certain, though: Starmer’s strategy is great news for Big Tech and its AI operations, with the industry enjoying unprecedented say and sway over the policies being created to govern its own products.
The latest case in point is the Prime Minister’s new AI advisor, Jade Leung, who replaces Matt Clifford. Leung is an engineer who formerly worked at OpenAI, one of the biggest and most highly valued AI companies in the world, and one that has grown closer to the British state, having recently agreed a major partnership to supply AI for public services. This raises the question: should government policy be overseen by former industry insiders who are likely to be sympathetic and uncritical towards these companies?
We can go higher up the government ranks. The current Secretary of State for Science, Innovation and Technology, Peter Kyle, is known to be so industry-friendly that he invited the CEO of Google DeepMind, Demis Hassabis, to “sense check” the government’s AI policy, which is intended to rein in the very sector Hassabis belongs to. As far as we know, Kyle didn’t ask anyone in civil society to do the same.
Kyle also promised Amazon that he would engage with the UK’s Competition and Markets Authority (CMA) and “advocate” for the company while the CMA was looking into its monopolistic business practices. The investigations into Amazon were eventually dropped, and the CMA got a new interim chair, Doug Gurr, the former head of Amazon UK. Surely a coincidence?
Reports also reveal that the Technology Secretary dined with OpenAI CEO Sam Altman in the months leading up to the announcement of the OpenAI-UK government strategic partnership.
Matt Clifford, the government’s previous AI advisor and drafter-in-chief of the AI policy now being unleashed, has even more direct known connections to the tech industry than Kyle. Clifford holds business interests in almost 500 tech and AI firms, including some that supply technology to the public sector.
Conflicts of interest are notoriously hard to navigate where government and industry meet, and the question of how close is too close seems up for debate. What is clearly problematic, though, is the amount of funding going towards initiatives led by American Big Tech, which directly contradicts the government’s own agenda of bolstering technological sovereignty. Not to mention the way Big Tech is being embedded, quickly and without scrutiny, in critical government infrastructure.

As part of the AI Opportunities Plan earlier in the year, the government secured a £14 billion commitment from the tech industry: US Big Tech firms like Amazon, Microsoft and Anthropic promised huge investments in AI tools and upskilling across government. How is this technological sovereignty? And what do these for-profit, multi-billion-dollar companies expect to get in return?
Beyond the OpenAI partnership, the UK government has also announced a major deal with Google Cloud to embed its services across public services. The opacity of the MoUs announcing these deals makes it hard to have confidence in how public data will be treated, what access “strategic partners” such as OpenAI will have, and for how long.
Given the DOGE-style mishandling of public data in the US, under the watchful eye of Big Tech billionaire Elon Musk, this should be a concern. The Department of Government Efficiency, set up by Donald Trump in his second presidency and until recently led by Musk, is reportedly responsible for taking sensitive labour data and accessing sensitive records of immigrant children, among several other worrying data compromises.
Early warning signs
The union between the UK government and the tech bros formally began in January 2025, when Starmer accepted Clifford’s AI Opportunities Plan and all 50 of its recommendations. These included building “AI growth zones” with the infrastructure capacity for data centres to power the AI industry, changes to public data flows, and, more broadly, the rapid embedding of (private sector) AI tools in the delivery of public services like health and education.
But Labour’s friendly stance towards American Big Tech was clear even earlier. One of Starmer’s very first acts as Prime Minister in 2024 was to cross the Atlantic to meet Donald Trump, then the Republican presidential candidate. That visit was widely read as laying the groundwork for a tech-focused trade deal, with the Starmer government signalling a willingness to go easy on American Big Tech in ways that would favour tariff negotiations.
Strikingly, the other pit stop Starmer made on that trip was at the offices of Palantir, a major US technology company, where he met its CEO Alex Karp, a move that underscored just how central US technology firms already were to his government’s agenda.

Palantir is a questionable American surveillance-tech firm that sells tools to help Immigration and Customs Enforcement (ICE) target immigrants for deportation in the US, and to help the Israeli military select kill targets in Gaza, among other things. That is the kind of company that was awarded a £330 million contract to take charge of data flows in the UK’s National Health Service (NHS) under the previous Tory government, and that is now increasingly getting cosy with the Labour government.
Palantir has already bagged contracts across the UK government — local councils, police forces, the Ministry of Defence, the Department for Environment, the Department for Levelling Up, Housing and Communities, and the NHS among others — with investigations showing how the company benefits from a “revolving door” with Westminster officials. But the trend does not start or stop with Palantir.
Big Tech sweep across the public sector
Democracy for Sale found that Labour ministers met Big Tech representatives an average of six times a week in the government’s first six months: 161 meetings in total, 90 of which were with Amazon, Apple, Google, Meta and Microsoft alone.
This close contact between ministers and industry is especially worrying given the sheer number of deals and partnerships the government has signed with Big Tech in the last year, some of which have already been mentioned in this piece.
On top of the £14 billion Big Tech investment commitment and wide-ranging deals with Google and OpenAI, the government has also launched ‘Humphrey’, a bundle of AI tools for the civil service. These tools rely on AI systems built by tech giants like Google and OpenAI, with the government paying per use. It is another example of how heavily government business now leans on Big Tech.
The various tools in the package are currently being used to summarise public responses to consultations and to help civil servants draft briefings, among other things. But there is a lack of transparency around how widely the tools have been adopted, what future use cases are planned, and the impact they could have on government jobs. As reported by the Guardian, Labour peer Shami Chakrabarti has also warned of gaps in knowledge around the biases and inaccuracies encountered in this process.
In the NHS, the government is introducing an updated NHS app, now complete with an “AI GP”. Dubbed the “ChatGPT of the NHS” and a “doctor in your pocket”, the tool is supposed to free up clinicians’ time and reduce the number of frontline workers needed in the long run, saving the NHS money. But it is unclear what the AI application will be built on, or how its efficacy will be measured and maintained, given the sensitive and critical nature of the information handled in a healthcare environment.
This is happening alongside an attempt at data centralisation in the NHS, overseen by the infamous American tech firm Palantir. Palantir offered the UK government a £1 contract to provide the data infrastructure for the vaccination rollout during the 2020 Covid pandemic. It then went on to bag the £330 million Federated Data Platform contract, giving it access to one of the most valuable healthcare databases in the world.
Critical questions have been raised about the procurement process in this case, but, significantly, the efficacy of the tool has also come under scrutiny: a resolution passed by the British Medical Association (BMA), along with reporting on hospital trusts, suggests that medical professionals are unhappy with Palantir’s platform and are refusing to use it.

This Covid backdoor trend can also be seen in other sectors Big Tech has entered, like education. During the pandemic, the then Tory government rolled out the red carpet for companies like Google and Microsoft, striking deals worth millions of pounds at a time when remote technologies had become truly essential. The government made grants available to schools across the country, mandated that they adopt either Google Classroom or Microsoft 365, and ran a device rollout giving laptops and tablets to schools. This was an emergency moment in which a lot of choices had to be made quickly - but the downsides have never been addressed. It effectively ensured that two big American companies captured the entire UK school market, including hooking kids on hardware such as Chromebooks.
This is troubling not only because of the monopoly status it has given Google and Microsoft (so much so that academics have called the phenomenon “the Googlization of schools”), but also because Big Tech products are not built with child safety in mind.
The Labour party seems to be pushing this trend further with its AI education strategy. In the last year, the Department for Education (DfE) has handed out several million pounds to build AI tools for children and teachers, but it remains unclear whether Big Tech will be the provider of this ed-tech or whether smaller companies will also benefit.
The Department for Education has launched an ‘EdTech Impact Testbed Pilot’, effectively turning schools into testing sites for new education technology. When The Citizens asked through a Freedom of Information request which companies were involved in this pilot, the department said none had yet been contracted. Yet, at the same time, its new funding announcements were paired with ‘Product Safety Expectations’ guidance on children’s tech safety drawn up with, and endorsed by, Big Tech firms. In other words, even while claiming no contracts are in place, the government appears to be shaping the framework hand in hand with Big Tech, signalling who is likely to benefit when the contracts do come.
It raises a clear conflict of interest: the companies that may later supply the technology are also helping to set the standards by which their own products will be judged.
Big Tech’s shadow is cast over the Ministry of Justice too, with The Guardian recently reporting that Justice Secretary Shabana Mahmood sat down with companies like Google, Amazon, Microsoft, Palantir and IBM to discuss introducing some truly dystopian technologies into the prison system, such as trackers inserted under offenders’ skin, real-time behaviour monitoring and robots to manage prisoners.
What side is the government on?
It hardly strengthens the government’s claim to impartiality when it openly and unapologetically takes the side of Big Tech over UK creatives in the AI copyright fight.
Under the Data (Use and Access) Bill, the government proposed a copyright exception that would allow big AI companies to scrape and use copyrighted material en masse, without disclosing what material they use or compensating creatives for it. The proposal led to a huge backlash from the creative and publishing industries, which criticised the government for favouring Big Tech’s interests over theirs.
There was strong opposition to the legislation in the House of Lords, which resulted in a prolonged battle. Baroness Kidron, the peer who fronted the fight in the Lords, described the government’s move as “state sanctioned theft” from a UK creative industry worth £124bn.
But eventually the Lords gave in, and the Bill was passed in June 2025 with a compromise: the government agreed to publish a report on its AI and copyright policy in the months following the Bill’s passage. That means some transparency measures could yet be implemented, but it fails to ensure that big AI companies stop benefiting from copyrighted content without having to pay.
In the end, it wasn’t Big Tech or the government that gave ground - it was creatives who lost out. The episode stands as a warning of where Labour’s growth agenda may be taking us.
Whether greater transparency will come remains to be seen. So far, our attempts to probe Big Tech’s influence over policymaking through Freedom of Information requests have largely been stonewalled. The government has repeatedly invoked exemptions to block disclosure of crucial ministerial meetings, leaving the public in the dark about who is shaping policy. It’s part of a wider pattern in which the state talks up openness on AI, while quietly shielding its dealings with Big Tech from scrutiny.