Online scams are now significantly more dangerous because of ChatGPT
Because fraudsters have free access to ChatGPT, the AI-powered chatbot that never seems to leave the news, online scams may suddenly become far more harmful.
That is the conclusion of a report published earlier this month by cybersecurity firm Norton. In it, the company outlines three primary ways threat actors may exploit ChatGPT to make internet scams more effective: producing deepfake content, phishing at scale, and creating malware faster.
Furthermore, according to Norton, the capacity to produce "high-quality misinformation or disinformation at scale" may help bot farms fuel conflict more effectively by enabling threat actors to "sow mistrust and mold narratives in different languages."
The researchers also claim that fraudsters looking to manipulate reviews could have a field day with ChatGPT, generating phony reviews in bulk and in a variety of voices.
Norton further warns that the already-famous chatbot could be used in "harassment campaigns" on social media to bully or silence people, with results that could be "chilling."
Hackers can also use ChatGPT in phishing campaigns. These are frequently carried out by perpetrators who do not speak English as their first language, and their poor spelling and grammar often make obvious scam attempts easy for victims to spot. With ChatGPT, threat actors could produce convincing emails at scale.
Finally, creating malware may no longer be the domain of expert hackers. According to the researchers, with the right prompt, inexperienced malware developers can describe what they want to do and obtain functional code snippets.