Watch out - OpenAI is being spoofed as part of a major phishing attack

New research from Barracuda has revealed threat actors are now using OpenAI in impersonation campaigns that target businesses across the globe.

The attack uses an email impersonating OpenAI that delivers an ‘urgent message’ urging victims to update the payment information for their subscription via a conveniently supplied direct link - a textbook phishing technique.

The operation was far-reaching, with a single email being sent to over 1,000 users. The first red flag was the sender's email address, which did not match the official OpenAI domain (e.g. @openai.com). Instead, the message was sent from info@mta.topmarinelogistics.com.
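Checking whether the sender's domain actually matches the brand being claimed is something that can be automated. The following is a minimal sketch, assuming the raw message is available as a string; the expected domain value and the sample header are illustrative, not taken from Barracuda's report.

```python
from email import message_from_string
from email.utils import parseaddr

EXPECTED_DOMAIN = "openai.com"  # assumption: the domain the brand legitimately mails from

def sender_domain_matches(raw_message: str, expected_domain: str = EXPECTED_DOMAIN) -> bool:
    """Return True if the From: address belongs to the expected domain or a subdomain of it."""
    msg = message_from_string(raw_message)
    _, address = parseaddr(msg.get("From", ""))
    domain = address.rpartition("@")[2].lower()
    return domain == expected_domain or domain.endswith("." + expected_domain)

# Illustrative message modelled on the campaign described above
sample = "From: OpenAI Billing <info@mta.topmarinelogistics.com>\nSubject: Update your payment details\n\n(body)"
print(sender_domain_matches(sample))  # False - the sending domain is not openai.com
```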

AI-powered

Worryingly, the email passed DKIM and SPF checks, meaning it was sent from a server authorized to send email on behalf of the sending domain - not OpenAI's. The language in the email is typical of phishing attacks, pressuring the user to take immediate action and creating a sense of fear and urgency.
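SPF and DKIM results are normally recorded by the receiving mail server in an Authentication-Results header, and a ‘pass’ only vouches for the domain that actually sent the message, not the brand shown in the display name. Below is a minimal sketch of reading that header with Python's standard library; the raw message and header values are assumptions for illustration, not the actual campaign headers.

```python
import re
from email import message_from_string

# Illustrative raw message: header values are assumptions, not Barracuda's reported data
raw = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=mta.topmarinelogistics.com;
 dkim=pass header.d=mta.topmarinelogistics.com
From: OpenAI <info@mta.topmarinelogistics.com>
Subject: Action required: update your payment details

(body)
"""

msg = message_from_string(raw)
auth = msg.get("Authentication-Results", "")

spf = re.search(r"spf=(\w+)", auth)
dkim = re.search(r"dkim=(\w+).*?header\.d=([\w.\-]+)", auth, re.DOTALL)

print("SPF:", spf.group(1) if spf else "none")
print("DKIM:", dkim.group(1) if dkim else "none", "for", dkim.group(2) if dkim else "-")
# Even with spf=pass and dkim=pass, the authenticated domain here is
# mta.topmarinelogistics.com, not openai.com - which is why passing these
# checks alone does not prove the mail really came from OpenAI.
```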

This is far from the only AI-related malicious campaign reported in the last few months. Earlier in 2024, a Microsoft report found 87% of UK organizations are more susceptible to cyberattacks due to the increasing use of AI tools.

That’s not to mention the rise in deepfakes and convincing AI voice scams targeting businesses and consumers. Businesses around the world have already lost millions to deepfake fraud, and almost half have been targeted by this type of scam at some point.

The introduction of machine learning algorithms that can uncover and exploit software flaws means AI is driving a dramatic increase in the number of attacks.

Despite this, research indicates that 90% of cyberattacks will still involve some element of human interaction, such as phishing, so making sure everyone in your organization is trained to spot the signs of an attack remains the best protection for a business.
