Russian Hackers Used AI to Hack Ukrainians


A wave of phishing emails sent to Ukrainians this summer came with a new twist: Russian hackers attached a program that used artificial intelligence (AI) to do the hacking. If installed, it would automatically search victims' computers for confidential files and send them back to Moscow.

The operation is the first known instance of Russian intelligence being caught building malicious code with large language models (LLMs), the type of AI chatbots that have become commonplace in corporate culture. It was detailed in July in technical reports from the Ukrainian government and several cybersecurity firms.

The Russians are probably not alone. Hackers of all stripes, including researchers, corporate defenders, spies, and cybercriminals, have been folding AI techniques into their work in recent months.

ChatGPT and other LLMs are still prone to mistakes, but they are highly proficient at detecting and summarizing text, processing language instructions, and translating plain English into computer code.

So far, the technology has not revolutionized hacking by turning novices into experts or allowing would-be cyberterrorists to knock out the power grid. But it is making skilled hackers faster and more capable. Cybersecurity firms and researchers are using AI as well, fueling an expanding game of cat and mouse between offensive hackers who find and exploit software flaws and the defenders who try to fix them first.

Heather Adkins, Google’s vice president of security engineering, said:

“It’s the beginning of the beginning. Maybe moving towards the middle of the beginning.”

Adkins’ team began a project in 2024 to use Google’s LLM, Gemini, to uncover critical software flaws before hackers could exploit them. Adkins said earlier this month that her team has so far found at least 20 significant, often-overlooked bugs in widely used software and notified the companies responsible so they could fix them. It’s an ongoing process.

According to her, none of the flaws have been alarming or something only a machine could have discovered. But using AI simply accelerates the process.

She said:

“No one has discovered anything new”.

Adam Meyers, a senior vice president at the cybersecurity firm CrowdStrike, said that in addition to using AI to help people who suspect they have been hacked, he is seeing growing evidence of its use among the criminal, Chinese, Russian, and Iranian hackers his company tracks.

He told NBC News:

“The more advanced adversaries are using it to their advantage. We’re seeing more and more of it every single day.”

The shift is only now beginning to catch up with hype that has circulated in the AI and cybersecurity industries since ChatGPT was released to the public in 2022. The technology hasn’t always worked as promised, and some cybersecurity specialists have complained about would-be hackers falling for fake, AI-generated vulnerability findings.

Since 2024, scammers and social engineers (the people in cyber operations who impersonate others or craft convincing phishing emails) have been using LLMs to appear more credible.

According to Will Pearce, CEO of DreadNode, one of the few new security firms that focuses on hacking with LLMs, using AI to directly hack targets has only recently begun to take off.

He said:

“At this point, the models and the technology are all really good.”

Pearce told NBC News that automated AI hacking tools are far more capable now than they were less than two years ago, when they required extensive tweaking to work correctly.

Hackers and cybersecurity experts have not settled the question of whether AI will ultimately help attackers or defenders more. For now, though, defense appears to be winning.

Last week, at the Def Con hacker conference in Las Vegas, Alexei Bulazel claimed that the trend will hold as long as most of the world’s most advanced computer companies are located in the United States.

Bulazel said:

“I very strongly believe that AI will be more advantageous for defenders than offense.”

He pointed out that it is rare for hackers to discover highly disruptive vulnerabilities in a major U.S. tech company’s software. AI, he said, is especially useful for identifying those flaws before criminals do.

Bulazel said:

“The types of things that AI is better at — identifying vulnerabilities in a low cost, easy way — really democratizes access to vulnerability information.”

But that pattern may not hold as the technology advances. One reason is that no one has yet released a free automated hacking tool, or penetration tester, that incorporates AI. Such tools, ostensibly apps that probe for weaknesses using the methods of criminal hackers, are already freely available online.

Adkins of Google said in a statement:

“If one incorporates an advanced LLM and it becomes freely available, it will probably mean open season on smaller companies’ programs. I think it’s also reasonable to assume that at some point someone will release [such a tool]. That’s the point at which I think it becomes a little dangerous.”

Meyers said:

“Agentic AI is really AI that can take action on your behalf, right? That will become the next insider threat, because, as organizations have these agentic AI deployed, they don’t have built-in guardrails to stop somebody from abusing it.”

Jazib Khaleel is the founder of TechObserver, a technology news website covering tech trends with a focus on the United Kingdom. He is a Google Certified Digital Marketing Strategist.