Hackers could use ChatGPT to infiltrate ships



The threat of cyber attacks on ships is growing, with phishing emails a key method. Criminals are turning to artificial intelligence (AI) tools such as ChatGPT to create more convincing and personalised phishing emails, causing concern in the shipping industry. The rewards for attackers could be substantial, given the disruption to global shipping that a compromised vessel would cause. Increased security measures, such as employee training, are needed to tackle the threat.

Most documented cyberattacks on individual ships have been carried out by jamming and spoofing navigational signals. However, ships are increasingly threatened by a broader range of attacks, including ransomware. Recently, around 1,000 vessels were affected when DNV's ShipManager software system was hit by a cyber attack. Many ships retained their offline functionality, which limited the disruption, but the incident demonstrated the potentially far-reaching consequences of cyberattacks on ships. There are also large financial gains to be made by attacking ships. After the 2021 blockage of the Suez Canal by a 400-metre-long container ship disrupted world trade and financial markets, criminal hackers realised they could exploit the stock market movements associated with a grounded ship.

One way to compromise a ship is through phishing emails. Phishing emails are a form of social engineering designed to trick crew members into clicking unsafe links and unknowingly downloading malicious content onto their computers. The emails appear legitimate, with links disguised as safe and genuine, and they can be personalised for a particular crew or ship using information from open sources such as social media. Phishing emails play a key role in many types of maritime cyberattack that rely on placing malicious software on targeted computers, including ransomware attacks.
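One classic phishing tell described above, a link disguised as safe and genuine, can be checked mechanically. The following sketch (an illustration, not a tool mentioned in the article; all domains are hypothetical) flags anchors in an HTML email whose visible text shows one URL while the underlying href points to a different domain:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchChecker(HTMLParser):
    """Collects <a> tags whose visible URL text and href disagree on domain."""

    def __init__(self):
        super().__init__()
        self._href = None      # href of the <a> tag currently open, if any
        self._text = []        # visible text accumulated inside that tag
        self.mismatches = []   # (visible_text, actual_href) pairs found

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            href_host = urlparse(self._href).hostname or ""
            # Only compare when the visible text itself looks like a URL.
            if text.startswith("http"):
                text_host = urlparse(text).hostname or ""
                if text_host and text_host != href_host:
                    self.mismatches.append((text, self._href))
            self._href = None

def find_disguised_links(html_body: str):
    checker = LinkMismatchChecker()
    checker.feed(html_body)
    return checker.mismatches

# Hypothetical phishing email body: the link text shows a plausible portal,
# but the href leads somewhere else entirely.
email_html = ('<p>Update your manifest at '
              '<a href="http://evil.example.net/login">'
              'https://portal.shipmanager.example.com</a></p>')
print(find_disguised_links(email_html))
# → [('https://portal.shipmanager.example.com', 'http://evil.example.net/login')]
```

A check like this catches only the crudest disguises; as the article notes, AI-written emails increasingly avoid links altogether, which is why training remains essential.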

Writing these emails has traditionally been a manual exercise. However, a newly released AI tool is changing that.

ChatGPT is a tool developed by OpenAI with wide-ranging language capabilities, from explaining quantum physics to writing poetry on command. ChatGPT was not designed for criminals and has internal safeguards intended to refuse requests for malicious material. However, attackers have found ways around this. AI can be a force multiplier for attackers, especially in social engineering, and the chatbot in particular produces compelling phishing emails when prompted.

There are many advantages for attackers in using ChatGPT. It writes in fluent American English, helping attackers to remove the typical tells that distinguish illegitimate emails from legitimate ones, such as typos or unusual formatting. It can also respond to a single prompt in many different ways, making each email look customised and authentic.

ChatGPT can craft a compelling and emotionally manipulative phishing email according to the user's prompts.

So how real is the threat?

Prior to the release of ChatGPT's latest version, a research paper analysed over 50,000 emails sent to users in over 100 countries as part of a phishing training exercise. Professional red teamers achieved a 4.2 percent click-through rate, compared with 2.9 percent for ChatGPT-generated emails; in some countries, including Sweden, the chatbot's click-through rate was higher. Additionally, a survey of 1,500 IT decision makers in North America, the UK and Australia found that 53 percent are particularly concerned about the threat of more credible phishing emails, and 51 percent expect a ChatGPT-powered cyberattack within the next year. Darktrace also commissioned a survey from Censuswide showing that 73 percent of UK workers are concerned about hackers using generative AI to create scam emails indistinguishable from genuine ones. Further research shows that AI can already manipulate people into bypassing security controls: in one test, an AI model persuaded a TaskRabbit worker to solve a CAPTCHA for it by claiming to have a visual impairment.

The threat of phishing emails is further highlighted by a recent Darktrace study, which revealed a 135 percent increase in "novel social engineering attacks" in 2023: emails with increased text volume, punctuation, and sentence length but no links or attachments, coinciding with the spread of ChatGPT. The study also showed that the overall volume of malicious email campaigns has decreased, replaced by more linguistically complex emails.

It is evident that people are rightly concerned about ChatGPT's ability to generate convincing phishing emails, and the threat has been highlighted by national and regional authorities. Europol has issued a warning about the use of ChatGPT to create highly persuasive and realistic texts, and the UK's National Cyber Security Centre (NCSC) has warned that large language models (LLMs) are being used to write persuasive phishing emails.

What does this mean for the maritime industry?

ChatGPT has almost encyclopedic knowledge that can easily be used to find useful maritime-specific information, such as ship names or IMO regulations, to make emails more persuasive.

The ChatGPT maritime threat is significant, especially because the rewards for the hacker could be substantial. Shipping is a global industry, and disruption could be very costly. Ships whose networks are shut down by a cyber attack cannot deliver the critical goods the industry depends on, such as fuel or raw materials. A hacking incident could even result in a grounding on a major trade route, with wider financial implications. As a result, increased security measures, such as employee training, are needed to raise awareness of the threats posed by clicking malicious links.

Source: News Network

