ChatGPT can be used by hackers to write phishing emails and code, research shows
ChatGPT has been the buzzword on the internet and social media for some time now. The Artificial Intelligence (AI)-based chatbot is trained to follow instructions in a prompt and provide a detailed response. It has been developed by OpenAI, the independent research body co-founded by Elon Musk. Now, new research shows how the conversational bot can be used by hackers to write phishing emails and code.
A study by Check Point Research (CPR) demonstrates how AI models can be used to create a full infection flow, from spear-phishing to running a reverse shell. It used ChatGPT and another OpenAI platform, Codex – an AI-based system that translates natural language to code – to write malicious code and phishing emails.
First, CPR asked ChatGPT to impersonate a hosting company and write a phishing email appearing to come from a fictional web-hosting company, Host4u. ChatGPT generated the phishing email, though OpenAI warned that the content might violate its content policy.
The researchers then asked ChatGPT to refine the email with several follow-up inputs, such as replacing the link in the email body with text urging customers to download an Excel sheet.
The next step was to create the malicious VBA code for the Excel document. While the first attempt was very naive and used libraries such as WinHttpReq, after a few short iterations ChatGPT produced better code, the researchers said.
“Using Open AI’s ChatGPT, CPR was able to create a phishing email, with an attached Excel document containing malicious code capable of downloading reverse shells,” the researchers noted.
Unlike ChatGPT, which can handle a wide range of tasks, Codex is focused on writing code, so the researchers turned to it to create a basic reverse shell using a placeholder IP and port. They also note that the same technology can be used to augment defenders.
The research concludes that while the expanding role of LLMs and AI in the cyber world is full of opportunity, it also comes with risks.
“Multiple scripts can be generated easily, with slight variations using different wordings. Complicated attack processes can also be automated as well, using the LLMs APIs to generate other malicious artifacts. Defenders and threat hunters should be vigilant and cautious about adopting this technology quickly, otherwise, our community will be one step behind the attackers”, the report said.