Experts reveal UK election cyber threats

  • Britain is gearing up for hotly contested local and general elections this year, in which issues such as the high cost of living and immigration are expected to dominate.
  • Cyber experts expect malicious actors to interfere in the upcoming elections in a number of ways, not least through artificial intelligence-assisted disinformation.
  • State-sponsored cyber attacks are also expected to become more common in the run-up to the vote, according to the cybersecurity community.

According to cyber experts who spoke to CNBC, Britain is expected to face a barrage of state-sponsored cyber attacks and disinformation campaigns in the lead-up to the 2024 election — and artificial intelligence is a key threat.

Britons will vote in local elections on May 2, and a general election is expected in the second half of this year, although British Prime Minister Rishi Sunak has yet to commit to a date.

The vote comes as the country faces a number of issues, including a housing affordability crisis and sharp divisions over immigration and asylum.

Todd McKinnon, CEO of identity security firm Okta, told CNBC via email that “with the majority of UK citizens voting at polling stations on election day, I expect cyber security to become increasingly important. But the dangers will unfold in the months leading up to that day.”

It wouldn’t be the first time.

In 2016, both the US presidential election and the UK’s Brexit vote were marred by disinformation shared on social media platforms, allegedly by Russian state-linked groups, although Moscow denies these claims.

According to cyber experts, state actors have since launched routine attacks to manipulate election results in various countries.

Meanwhile, last week, Britain alleged that the Chinese state-linked hacking group APT 31 tried to access the email accounts of UK lawmakers, but said such attempts were unsuccessful. London has since imposed sanctions on Chinese individuals and a Wuhan-based technology firm believed to be a front for APT 31.

The US, Australia and New Zealand imposed their own restrictions. China has denied allegations of state-sponsored hacking, calling them “baseless”.

Cybercriminals using AI

Cybersecurity experts expect malicious actors to interfere with upcoming elections in a number of ways — not least through disinformation, which is expected to get worse this year due to the widespread use of artificial intelligence.

Artificial images, videos and audio created using computer graphics, simulation methods and AI – commonly known as “deepfakes” – are expected to become commonplace as it gets easier for people to create them, experts say.

“Nation-state actors and cybercriminals are likely to use AI-powered identity-based attacks, such as phishing, social engineering, ransomware and supply chain compromises, to target politicians, campaign staff and election-related organizations,” added Okta’s McKinnon.

“We also believe that AI and bot-powered content will be engineered by threat actors to push disinformation on an even greater scale than we’ve seen in previous election cycles.”

The cybersecurity community has called for international cooperation to increase awareness of this type of AI-generated disinformation, as well as to reduce the risk of such malicious activity.

Top electoral risk

AI-powered disinformation is the biggest threat to the 2024 elections, said Adam Meyers, head of counter adversary operations at cybersecurity firm CrowdStrike.

“Right now, generative AI can be used for harm or good, and so we see both applications being increasingly adopted every day,” Meyers told CNBC.

According to CrowdStrike’s latest annual threat report, China, Russia and Iran are highly likely to use tools like generative AI to carry out misinformation and disinformation operations against various global elections.

“This democratic process is extremely fragile,” Meyers told CNBC. “When you start looking at how hostile states like Russia or China or Iran can take advantage of generative AI and some of the newer technology to craft messages and use deepfakes to create a story or narrative that is compelling for people to accept, especially when people already have that kind of confirmation bias, it’s extremely dangerous.”

A key issue is that AI is lowering the barrier to entry for criminals who exploit people online. This has already happened in the form of scam emails created using easily accessible AI tools like ChatGPT.

According to Dan Holmes, fraud prevention expert at regulatory technology firm Feedzai, hackers are also launching more sophisticated and personalized attacks by training AI models on the data people make available on social media.

“You can train these voice AI models very easily … through social [media] exposure,” Holmes told CNBC in an interview. “It’s [about] getting that emotional level of engagement and coming up with something really creative.”

Ahead of the election, a fake AI-generated audio clip of opposition Labour Party leader Keir Starmer abusing party staff was posted on social media platform X in October 2023, according to fact-checking charity Full Fact.

This is just one example of the many deepfakes that cyber security experts are worried about as the UK elections approach later this year.

Elections are a test for tech giants

The local elections will serve as an important test for digital companies like Facebook owner Meta, Google and TikTok to keep their platforms free of misinformation.

Meta has already taken steps to add a “watermark” to AI-generated content so users know it’s not real.
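
Meta has not published the full mechanics of its system, but the basic idea of tagging AI-generated media can be sketched in a few lines. The snippet below is a minimal illustration, assuming the Pillow imaging library; the file names and the "ai_generated" metadata key are hypothetical, and real provenance schemes such as C2PA cryptographically sign this information so it cannot simply be stripped out.

```python
# Minimal sketch of labelling an image as AI-generated, assuming the
# Pillow library. The metadata key is illustrative only and is not
# Meta's actual scheme.
from PIL import Image, ImageDraw, PngImagePlugin

def label_ai_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # Visible watermark: draw a small caption in the corner.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))

    # Machine-readable flag: embed a text chunk in the PNG metadata
    # that downstream platforms could check before display.
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")
    img.save(dst_path, "PNG", pnginfo=meta)

def is_labelled(path: str) -> bool:
    # Read the flag back. Plain metadata like this is trivially
    # removable, which is why signed provenance standards exist.
    return Image.open(path).info.get("ai_generated") == "true"
```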

However, deepfake technology is becoming more sophisticated. And for many tech companies, the race to beat it is now about fighting fire with fire.

“Deepfakes have gone from being a theoretical thing to being very much in production today,” Onfido CEO Mike Tuchen told CNBC in an interview last year.

“Now there’s a cat and mouse game where it’s ‘AI vs. AI’ – using AI to detect deepfakes and minimize the impact for our customers is the big battle right now.”
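
Onfido has not detailed its models, but the general shape of such an “AI vs. AI” detector can be sketched: a standard image classifier with a two-way real-versus-fake head, scoring individual frames. The sketch below uses PyTorch and torchvision; the model here is untrained and would need fine-tuning on a labelled real/deepfake dataset before its scores meant anything.

```python
# Illustrative shape of a deepfake detector: a stock CNN backbone with
# a binary real/fake head. Not Onfido's system; weights are untrained.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.resnet18(weights=None)                 # generic backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # [real, fake] head
model.eval()

def fake_probability(image_path: str) -> float:
    # Score one frame; a video pipeline would average over many frames.
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return torch.softmax(logits, dim=1)[0, 1].item()  # P(fake)
```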

Cyber experts say it’s getting harder to tell what’s real — but there may be some signs that content has been digitally manipulated.

AI uses prompts to generate text, images and video, but it’s not always accurate. So, for example, if you’re watching an AI-generated video of a dinner and the spoon suddenly disappears, that’s an example of an AI flaw.
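
The disappearing spoon is a temporal-consistency failure, and that kind of glitch can be hunted for automatically. A minimal sketch, assuming OpenCV and NumPy, with an illustrative rather than calibrated threshold: flag any pair of consecutive frames whose pixel-level change is a statistical outlier against the video’s normal motion.

```python
# Minimal temporal-consistency check, assuming OpenCV (cv2) and NumPy;
# the z-score threshold is illustrative, not a calibrated detector.
import cv2
import numpy as np

def flag_abrupt_changes(video_path: str, z_thresh: float = 4.0) -> list[int]:
    cap = cv2.VideoCapture(video_path)
    diffs, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute pixel change between consecutive frames.
            diffs.append(float(cv2.absdiff(gray, prev).mean()))
        prev = gray
    cap.release()

    if len(diffs) < 2:
        return []
    diffs = np.array(diffs)
    mu, sigma = diffs.mean(), diffs.std() + 1e-9
    # Indices of frames whose change far exceeds typical motion, e.g.
    # an object popping out of existence between two frames.
    return [i + 1 for i, d in enumerate(diffs) if (d - mu) / sigma > z_thresh]
```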

Okta’s McKinnon added that “we’re sure to see more deepfakes during the election process, but one simple step we can all take is to verify the authenticity of something before we share it.”
