GPT-4 can’t stop helping hackers build cybercriminal tools


OpenAI released its latest machine learning software, GPT-4, this week with much fanfare. One of the features the company highlighted in the new version was supposed protection against misuse by cybercriminals. Within days, however, researchers had it crafting malware and phishing emails, just as they had done with the previous iteration of OpenAI's software, ChatGPT. On the bright side, they also found it useful for plugging holes in software's cyber defenses.

Researchers at cybersecurity firm Check Point showed Forbes how they bypassed GPT-4's block on malware development simply by removing the word "malware" from a request. GPT-4 then helped them create software that collected PDF files and sent them to a remote server. It went further, advising the researchers on how to make the program run on a Windows 10 PC and how to shrink the file so it would run faster and be less likely to be detected by security software.

The researchers took two approaches to getting GPT-4's help in crafting phishing emails. In the first, they used GPT-3.5, which did not block requests to craft malicious messages, to write a phishing email impersonating a legitimate bank. They then asked GPT-4, which had initially refused to create an original phishing message, to improve the draft's language. In the second, they asked for advice on creating a phishing awareness campaign for a business and requested a template for a fake phishing email, which the tool duly provided.

“GPT-4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity,” the Check Point researchers said in their report, shared with Forbes ahead of publication. “What we're seeing is that GPT-4 can serve both good and bad actors. Good actors can use GPT-4 to craft and stitch together code that is useful to society. But simultaneously, bad actors can use this AI technology for rapid execution of cybercrime.”

Sergey Shikevich, threat group manager at Check Point, said GPT-4 has fewer barriers against phishing or generating malicious code than previous versions. He suggested this could be because the company is relying on the fact that only premium users currently have access. However, he added, OpenAI should have anticipated such workarounds. “I think they're trying to prevent and reduce them, but it is a cat and mouse game,” he said.

Daniel Cuthbert, a cybersecurity researcher and review board member at the Black Hat hacking conference, said GPT-4 appears likely to help those with little technical knowledge build malicious tools. “It really does help if you're not very good at things. It gives it to you on a plate,” he said.

OpenAI itself, in a paper released alongside GPT-4 earlier this week, acknowledged that the tool could lower the cost of certain steps of a successful cyberattack, “such as through social engineering or by enhancing existing security tools.”

It really does help if you're not very good at things. It gives it to you on a plate.

Daniel Cuthbert, Black Hat hacking conference review board member

But the cybersecurity experts OpenAI hired to test the chatbot before its release found that it has “significant limitations for cybersecurity operations,” the paper said. “It does not improve upon existing tools for reconnaissance, vulnerability exploitation, and network navigation, and is less effective than existing tools for complex and high-level vulnerability identification,” OpenAI wrote. The testers did, however, find GPT-4 “effective in creating realistic social engineering content.”

“To mitigate potential misuses in this area, we've trained models to refuse malicious cybersecurity requests and scaled our internal safety systems for monitoring, detection and response,” OpenAI added.

The company did not respond to requests for comment on why Check Point's researchers were able to bypass some of its mitigations so quickly.

While it may be easy to game OpenAI's models, Cuthbert said, “it's not doing anything that's never been done before.” A good hacker already knows how to do most of what OpenAI's tools are capable of without any artificial intelligence support, he said. And modern detection systems should be able to spot the kinds of malware ChatGPT produces, since the AI learned from previous examples on the internet, he added.

Cuthbert is, however, excited about what GPT-4 can do for defenders. After it helped him find bugs in software, it also offered quick fixes, with precise code snippets he could copy and paste straight into his program. “I really like the automated fixing,” he said. “The future is good.”
