A day before the anti-graft organization Transparency International released its 2022 Corruption Perceptions Index (CPI), the UK government’s Public Sector Fraud Authority (PSFA) announced a £4 million contract award to a tech company to find and prevent more fraud across the public sector.
The move could well be perceived as a face-saver: in the global corruption report, the United Kingdom received its worst-ever score of 73, placing it 18th on the list, alongside Belgium and Japan. The report blamed “public spending and ministerial misconduct” for the poor UK performance.
The report described the UK’s score as having “dropped significantly” to its lowest level since the index began in 1995. The UK fell from 11th place in 2021, when it scored 78 points, to 18th last year.
“This sharp fall in the UK’s score is a powerful indictment of a recent decline in standards in government and controls over the use of taxpayer money. These findings should set alarm bells ringing in Downing Street. The underlying data clearly indicates that business executives and other experts are concerned about insufficient controls on the abuse of public office and increasingly view corruption and bribery as a real issue in Britain. This is the strongest signal yet that slipping standards are being noticed on the world stage,” Daniel Bruce, Chief Executive of Transparency International UK, told the Daily Mail.
Following the sacking of Conservative Party chairman Nadhim Zahawi for breaching the ministerial code over his tax affairs, Prime Minister Rishi Sunak reshuffled his ministers and created three new government departments aimed at curbing fraud, boosting economic growth and addressing the energy crisis. Downing Street said that the creation of the three new departments and one revamped department would “ensure the right skills and teams are focused on the Prime Minister’s five promises”. In his first major speech of 2023, Sunak promised to deliver “peace of mind” to the public.
Besides the reshuffle and the sacking, the award of the £4 million contract could be presented as a major step toward transparency. The contract is part of a wider investment across government to root out fraud against taxpayers, using modern tools and techniques to stop it before it happens.
“Fraud against the public purse is unacceptable and we’re stepping up the fight against those who wish to profit off the backs of taxpayers. Through the use of cutting-edge technology, the Public Sector Fraud Authority will use data and AI to help us in the fight against fraudsters,” said Baroness Neville-Rolfe, Cabinet Office Minister.
The role of technology
“AI/ML algorithms are used all around the globe by multiple financial organizations these days as a successful tool backing anti-money laundering (AML) efforts. Given the uniqueness of criminal behavior and the sophistication of new-age attacks, these algorithms never sleep and are constantly learning and adapting to changes.
“However, tech-savvy and advanced criminals with access to their own AI also stand a good chance of gaining access to the AML systems, where they manipulate the algorithms. These attacks begin with the creation of false and/or misleading data that leads to malfunctions and, ultimately, the system going rogue. Therefore, warning signs can only alert the user if the AML algorithms are monitored constantly,” said Phil Hermsen, Solutions Director, Data Science & AI, at HCLTech.
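The transaction-monitoring systems Hermsen describes typically start from statistical baselines before layering on ML. As a minimal, hypothetical sketch (not any vendor’s actual system; function name, window size and threshold are invented for illustration), a rolling z-score check flags payments that deviate sharply from recent history:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, window=30, threshold=3.0):
    """Flag any amount deviating more than `threshold` standard
    deviations from the rolling mean of the preceding window."""
    flags = []
    for i, amt in enumerate(amounts):
        history = amounts[max(0, i - window):i]
        if len(history) < 5:           # too little history to judge
            flags.append(False)
            continue
        mu, sigma = mean(history), stdev(history)
        flags.append(sigma > 0 and abs(amt - mu) > threshold * sigma)
    return flags

# A stream of routine payments with one outlier injected.
payments = [100.0, 102.5, 98.0, 101.0, 99.5, 103.0, 97.5, 100.5, 5000.0, 99.0]
print(flag_anomalies(payments))   # only the 5000.0 payment is flagged
```

Real AML pipelines are far richer (entity graphs, supervised models, feedback loops), but the sketch shows why constant monitoring matters: whatever enters `history` silently shifts the baseline the next decision is judged against.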
In yet another major step, the Sunak government recently launched a new program that aims to radically upskill civil servants in data science and analysis.
Speaking at the launch, Laura Gilbert, founder and director of the UK Prime Minister’s data science team, said: “By improving and connecting our data, upskilling our people and bringing them together across siloes, we can unleash a revolution that improves the lives of everyone in this country.”
Driven by cross-government ‘hackathons’, the first projects that civil servants will work on include maternal and infant safety, high-volume fraud and predicting serious crime. Government analysts and others will work together to link data and draw insights around difficult high-priority challenges.
“We know that fraudsters are a capable and committed adversary and the way they commit fraud is diverse and evolving. As criminals develop more sophisticated tools, we too must innovate and modernize our approach to prevent fraud,” said Mark Cheeseman, CEO of the PSFA, which was backed by £25 million in funding and has a first-year target of £180 million of recognized fraud benefits.
In terms of policy and governance, the UK made several notable moves last year. Here are some major highlights from the Office for AI:
- In July, the government published its first AI Action Plan, setting out key national priorities, including investment in the long-term needs of the AI ecosystem and ensuring that it benefits all sectors and regions
- The government also published an AI regulation policy paper, which prompted a full range of stakeholder feedback
- The UK’s new AI rulebook proposed a “pro-innovation framework” for regulating the technology
- Centre for Data Ethics and Innovation (CDEI) published its ‘Industry Temperature Check: Barriers and Enablers to AI Assurance’ report
- In June, with an aim to build public trust in AI and data-driven technologies, UKRI (UK Research and Innovation) launched its own £8.5 million research program on AI ethics and regulation in partnership with the Ada Lovelace Institute
- In collaboration with the BSI (the British Standards Institution) and the NPL (National Physical Laboratory), the Alan Turing Institute launched the AI Standards Hub
- Government invested £117 million in 1,000 new PhDs in AI
- It also invested £17 million in 2,000 new AI and data science conversion course scholarships
- In line with its dedicated Defence AI Strategy, the Ministry of Defence (MoD) launched the Defence AI Centre to accelerate the technology’s adoption across the armed forces
- MoD also published its policy statement ‘Ambitious, Safe, Responsible: Our approach to the delivery of AI-enabled capability in Defence’
- Defence Science and Technology Laboratory (DSTL), in partnership with the Alan Turing Institute, opened its AI research center for defense
- DSTL and the US Air Force Research Laboratory carried out the first deployment of their jointly developed AI toolbox in two military exercises
- Department for Business, Energy and Industrial Strategy (BEIS) contributed £1.5 million toward the AI for Decarbonisation programme to help speed the development of new technologies to reduce emissions
- BEIS also contributed £1.2 million towards the Net Zero Data Space for AI Applications
However, AI is also a tool that improves and enhances malware, making possible more advanced and sophisticated attacks.
“The vulnerabilities attached to machine learning must be understood before making any informed decision on risks and investments, because flaws within an ML model make the situation even more complicated and are being exploited by cybercriminals.

“To access personal and/or restricted data, which is readily available and relatively cheap, there’s an ongoing AI-led ‘cyberwar’ between these organizations with nefarious aims and the owners of the cloud platforms.

“Going beyond how much computing power you can throw at the problem, the main weapons now are the sophistication of your algorithms and the ability to quickly learn, adapt and counter what the opposition is doing,” adds Hermsen.
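The algorithm-manipulation risk Hermsen raises is commonly called data poisoning. A toy, hypothetical illustration (all data and names invented, far simpler than any real AML model): a naive nearest-centroid fraud classifier, and how fraudulent points mislabeled as “legitimate” in the training data drag the legitimate centroid until a suspicious transaction flips verdict:

```python
def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, legit_centroid, fraud_centroid):
    """Label x by whichever centroid is closer (squared Euclidean distance)."""
    d = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "fraud" if d(x, fraud_centroid) < d(x, legit_centroid) else "legit"

# Clean training data: legitimate activity clusters near (1, 1), fraud near (9, 9).
legit = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)]
fraud = [(9.0, 9.1), (8.8, 9.2), (9.2, 8.9)]

suspicious = (6.0, 6.0)
print(classify(suspicious, centroid(legit), centroid(fraud)))          # → fraud

# Poisoning: attacker-controlled points labeled "legitimate" pull that
# centroid toward fraud territory, so the same transaction now passes.
poisoned_legit = legit + [(8.0, 8.0), (8.5, 8.5), (7.5, 8.0)]
print(classify(suspicious, centroid(poisoned_legit), centroid(fraud))) # → legit
```

The same mechanism, at scale and against far more complex models, is why the quote stresses that AML algorithms going unmonitored is itself a vulnerability.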