People are already trying to get ChatGPT to write malware


The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available, and it now seems some people are enlisting it in attempts to generate malicious code.

ChatGPT is an AI-driven natural language processing tool which interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks like composing emails, essays and code. 

READ THIS: What is ChatGPT and why does it matter? Here’s what you need to know

The chatbot tool was released by artificial intelligence research laboratory OpenAI in November and has generated widespread interest and discussion over how AI is developing and how it could be used going forward. 

But like any other tool, it could be used for nefarious purposes in the wrong hands, and cybersecurity researchers at Check Point say users of underground hacking communities are already experimenting with how ChatGPT might be used to facilitate cyber attacks and support malicious operations. 

“Threat actors with very low technical knowledge – up to zero tech knowledge – could be able to create malicious tools. It could also make the day-to-day operations of sophisticated cybercriminals much more efficient and easier – like creating different parts of the infection chain,” Sergey Shykevich, threat intelligence group manager at Check Point, told ZDNET.  

OpenAI’s terms of service specifically ban the generation of malware, which it defines as “content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm”. It also bans attempts to create spam, as well as use cases aimed at cybercrime.

However, analysis of activity in several major underground hacking forums suggests that cyber criminals are already using the tool to develop malicious tools – and in some cases, it’s already allowing low-level cyber criminals with no development or coding skills to create malware. 

Also: The scary future of the internet: How the tech of tomorrow will pose even bigger cybersecurity threats

In one forum thread, which appeared towards the end of December, the poster described how they were using ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. 

By doing this, they were able to create Python-based information-stealing malware which searches for common files, including Microsoft Office documents, PDFs and images, copies them, then uploads them to a file transfer protocol server. 

The same user also demonstrated how they’d used ChatGPT to create Java-based malware which, using PowerShell, could be harnessed to covertly download and run other malware on infected systems. 

Researchers note that the forum user making these threads appears to be “tech-oriented” and shared the posts to show less technically capable cybercriminals how to utilize AI tools for malicious purposes, complete with real examples of how it can be done. 

One user posted a Python script, which they said was the first script they ever created. After discussion with another forum member, they said that ChatGPT helped them to create it. 

Analysis of the script suggests it’s designed to encrypt and decrypt files, something that, with some work, could be turned into ransomware, potentially raising the prospect of low-level cyber criminals developing and distributing their own extortion campaigns. 

“All of the aforementioned code can of course be used in a benign fashion. However, this script can easily be modified to encrypt someone’s machine completely without any user interaction. For example, it can potentially turn the code into ransomware if the script and syntax problems are fixed,” said Check Point.

“It will require some improvements in the code and syntax, but conceptually when operational, this tool could carry out similar actions to ransomware,” said Shykevich.

Also: Cybersecurity: These are the new things to worry about in 2023

But it isn’t just malware development that cyber criminals are experimenting with ChatGPT for; on New Year’s Eve, one underground forum member posted a thread demonstrating how they’d used the tool to create scripts which could operate an automated dark web marketplace for buying and selling stolen account details, credit card information, malware and more. 

The cyber criminal even showed off a piece of code generated using a third-party API to get up-to-date prices for the Monero, Bitcoin and Ethereum cryptocurrencies as part of a payment system for a dark web marketplace. 
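To illustrate the kind of functionality being described, below is a minimal sketch of such a price lookup in Python. The forum post did not specify which third-party API was used; this sketch assumes CoinGecko’s public “simple price” endpoint purely as an example of the sort of call involved.

import requests

# Assumed endpoint for illustration only; the forum post did not name the API used.
COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def fetch_prices(coins=("bitcoin", "ethereum", "monero"), currency="usd"):
    # Request current spot prices for the given coins in the given fiat currency.
    response = requests.get(
        COINGECKO_URL,
        params={"ids": ",".join(coins), "vs_currencies": currency},
        timeout=10,
    )
    response.raise_for_status()
    # Returns a mapping such as {"bitcoin": {"usd": ...}, "monero": {"usd": ...}, ...}
    return response.json()

print(fetch_prices())

A price lookup like this is entirely legitimate on its own; in the forum post, the output of such a call was simply wired into the payment flow of the marketplace script.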

It’s difficult to tell if malicious cyber activity generated with the aid of ChatGPT is actively functioning in the wild, because, as Shykevich explains, “from a technical standpoint it’s extremely difficult to know whether a specific malware was written using ChatGPT or not”. 

But as interest in ChatGPT and other AI tools grows, they’re going to attract the attention of cyber criminals and fraudsters looking to exploit the technology to conduct malicious campaigns at low cost and with the least effort necessary. ZDNET has contacted OpenAI for comment, but had not received a response at the time of publication. 
