There’s no reason to panic over WormGPT

As tools for building AI systems, particularly large language models (LLMs), get easier and cheaper, some are using them for unsavory purposes, like generating malicious code or phishing campaigns. But the threat of AI-accelerated hackers isn’t quite as dire as some headlines would suggest.

The dark web creators of LLMs like “WormGPT” and “FraudGPT” advertise their creations as being able to perpetrate phishing campaigns, generate messages aimed at pressuring victims into falling for business email compromise schemes and write malicious code. The LLMs can also be used to make custom hacking utilities, their creators say, as well as identify leaks and vulnerabilities in code and write scam web pages.

One might assume that this new breed of LLMs heralds a terrifying trend of AI-enabled mass hacking. But there’s more nuance than that.

Take WormGPT, for example. Released in early July 2023, it’s reportedly based on GPT-J, an LLM made available by the open research group EleutherAI in 2021. This version of the model lacks safeguards, meaning it won’t hesitate to answer questions that GPT-J might normally refuse — specifically those related to hacking.
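For context on what “made available” means here: GPT-J’s weights are hosted publicly, so anyone can download and run the base model locally with a few lines of Python, which is part of why it’s easy to build unrestricted variants on top of it. Here is a minimal sketch using the Hugging Face transformers library; the model ID points to the public GPT-J release, and the prompt is purely illustrative, not anything from WormGPT:

# A minimal sketch of loading the publicly released GPT-J weights with the
# Hugging Face transformers library. Nothing here is specific to WormGPT;
# it only illustrates that the base model is openly downloadable.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # the 6-billion-parameter GPT-J released in 2021
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a benign sample completion to show the model runs as-is.
prompt = "Write a short, friendly reminder email about an upcoming team meeting."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))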

Two years ago might not sound that long in the grand scheme. But in the AI world, given how fast the research moves, GPT-J is practically ancient history — and certainly nowhere near as capable as the most sophisticated LLMs today, like OpenAI’s GPT-4.

Writing for Towards Data Science, an AI-focused blog, Alberto Romero reports that GPT-J performs “significantly worse” than GPT-3, GPT-4’s predecessor, at tasks other than coding — including writing plausible-sounding text. So one wouldn’t expect WormGPT, which is built on it, to be especially good at generating, say, phishing emails.

That appears to be the case.

This writer couldn’t get their hands on WormGPT, sadly. But fortunately, researchers at the cybersecurity firm SlashNext were able to run WormGPT through a range of tests, including one to generate a “convincing email” that could be used in a business email compromise attack, and publish the results.

In response to the “convincing email” test, WormGPT generated this:

[Screenshot of the email WormGPT generated. Image credits: WormGPT]

As you can see, there’s nothing especially convincing about the email copy. Sure, it’s grammatically correct — a step above most phishing emails, granted — but it makes enough mistakes (e.g., referring to a nonexistent email attachment) that it couldn’t be copied and pasted without a few tweaks. Moreover, it’s generic in a way I’d expect to set off alarm bells for any recipient who reads it closely.

The same goes for WormGPT’s code. It’s more or less correct, but fairly basic in its construction, and similar to the cobbled-together malware scripts that already exist on the web. Plus, WormGPT’s code doesn’t take what’s often the hardest part out of the hacking equation: obtaining the necessary credentials and permissions to compromise a system.

Not to mention that, as one WormGPT user writes on a dark web forum, the LLM is “broken most of the time” and unable to generate “simple stuff.”

That might be the result of its older model architecture. But the training data could have something to do with it, too. WormGPT’s creator claims to have fine-tuned the model on a “diverse array” of data sources, concentrating on malware-related data, but doesn’t say which specific datasets were used for fine-tuning.

Barring further testing, there’s no way of knowing, really, to what extent WormGPT was fine-tuned — or whether it really was fine-tuned.

The marketing around FraudGPT instills even less confidence that the LLM performs as advertised.

On dark web forums, FraudGPT’s creator describes it as “cutting-edge,” claiming the LLM can “create undetectable malware” and uncover websites vulnerable to credit card fraud. But the creator reveals little about the LLM’s architecture beyond that it’s a variant of GPT-3, so there’s not much to go on besides the hyperbolic language.

It’s the same sales move some legitimate companies are pulling: slapping “AI” on a product to stand out or get press attention, preying on customers’ ignorance.

In a demo video seen by Fast Company, FraudGPT is shown generating this text in response to the prompt “write me a short but professional sms spam text i can send to victims who bank with Bank of America convincing them to click on my malicious short link”:

Dear Bank of America Member: Please check out this important link in order to ensure the security of your online bank account

Not very original or convincing, I’d argue.

Setting aside the fact that these malicious LLMs aren’t very good, they’re not exactly widely available. The creators of WormGPT and FraudGPT are allegedly charging tens to hundreds of dollars for subscriptions to the tools and prohibiting access to the codebases, meaning that users can’t look under the hood to modify or distribute the models themselves.

FraudGPT became even harder to obtain recently after the creator’s threads were removed from a popular dark web forum for violating its policies. Now, users have to take the added step of contacting the creator through the messaging app Telegram.

The takeaway is that LLMs like WormGPT and FraudGPT might generate sensationalist headlines — and, yes, malware. But they definitely won’t cause the downfall of corporations or governments. Could they in theory enable scammers with limited English skills to generate targeted business emails, as SlashNext’s analysis suggests? Perhaps. But more realistically, they’ll at most make a quick buck for the folks (scammers?) who built them.
