Artificial Intelligence (AI) is a double-edged sword: it benefits organisations in many ways, namely by making them more productive and better able to defend against cyber threats, yet it is also being used by hackers as a powerful vehicle for cyberattacks.


It’s important to understand that, like any other technology out there, AI has weaknesses and can be hacked. So how exactly does this happen? What methods are being used? And what does it mean for the future of cybersecurity, specifically within organisations? Let’s dive into that.


A weapon for hackers

Alter Solutions’ cybersecurity expert Amine Boukar agrees that “we bear witness to a new class of cyberattacks that leverage AI technology to achieve malicious objectives”. “In recent years, AI has made tremendous progress. While most experts and enthusiasts agree that there is a hidden dark side lurking behind its apparent benefits, many discussions about the negative implications of AI neglect the impact on cybersecurity, focusing instead on its economic, ethical and social implications. However, concern is growing within the cybersecurity community regarding the impact that AI may have on the privacy and security of our information systems”, Amine points out.


In simple terms, AI systems use Machine Learning (ML) algorithms to process large amounts of data and generate tailored outputs. Because they depend on the datasets used to train those algorithms, an AI system can malfunction or expose vulnerabilities when its data is disrupted or unexpectedly changed.
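To make that data dependence concrete, consider the minimal sketch below. Everything in it is an illustrative assumption rather than something from the article (a synthetic dataset and scikit-learn models): the same classifier is trained twice, once on clean labels and once after an attacker has silently flipped a share of them, a simple form of data poisoning.

```python
# Hedged sketch: how corrupted training data degrades an AI model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary dataset standing in for "benign vs. malicious" samples.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean_model.score(X_test, y_test))

# Poisoning: flip 30% of the training labels, simulating an attacker
# who tampers with the data the system learns from.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
poisoned = y_train.copy()
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

On a typical run, the poisoned model scores noticeably below the clean baseline, which is exactly the kind of malfunction described above: the algorithm is unchanged, only its data was disrupted.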


Hackers see these technological advances as an opportunity to deceive AI and turn it to their own agenda. The question is: how? In fact, there are several ways cybercriminals can take advantage of AI technologies such as generative chatbots (e.g. ChatGPT), autocomplete, spam filters, or facial recognition systems. Amine Boukar explains a few of these methods:

  • Deepfake
    “As the name implies, this combination of ‘Deep Learning’ and ‘Fake’ consists of using deep generative models to fake video and audio content, with the aim of impersonating an individual or spreading misinformation. Deepfake technology is being used in social engineering attacks by replacing one person’s voice or face with that of another to deceive an individual into revealing sensitive information. From identity theft to extortion, the implications of this technology are endless. There is an urgent need today for robust measures to detect and prevent such attacks.”

  • AI-enhanced phishing
    “Not only has AI caused the emergence of new types of attacks like deepfake, but it has also enhanced traditional ones that have been around for a long time. Today, attackers are actively using AI to generate convincing e-mails for their phishing attacks, increasing the likelihood of tricking recipients into clicking on a malicious link or disclosing sensitive information.”

  • AI-powered malware
    “AI-powered malware has also raised concerns over the effectiveness of anti-malware protection solutions against this new threat. Unlike traditional malware, this new class of malware can adapt its behavior in real-time based on the environment where it is executed, making it harder for traditional security solutions to detect. One example is BlackMamba, a polymorphic malware developed as a Proof-of-Concept (PoC), which successfully managed to evade detection by different security solutions. While BlackMamba is only a PoC, and we have yet to encounter an AI-powered malware in the wild, it is important to stay alert and proactively prepare for the emergence of such threats.”


In addition to the strategies Amine mentioned, hackers are also becoming quite skilled at cracking CAPTCHAs and guessing passwords. Whenever an identity check is needed while browsing the Internet, we have all come across simple tests to prove we are human, such as typing the characters shown in a sequence of distorted letters or numbers, or identifying which images (among several) contain cars. That is a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). However, this technique can no longer keep out the more sophisticated intruders that some hackers have become: with the help of AI and ML, they can crack these tests, and guess passwords, with remarkable accuracy and speed.

More recently, a new AI tool designed specifically for malicious activities has emerged: WormGPT, a dark alternative to the GPT models. It is said to be optimised for the activities listed above (creating malware and phishing e-mails, for example) and, unlike ChatGPT, it does not restrict any illegal practice or text-generation request. There are no boundaries.

Although this whole scenario seems daunting, cybersecurity expert Amine Boukar clarifies that “harnessing AI for cyberattacks poses a significant challenge for threat actors”. “The complexity of AI-driven attacks requires a high level of expertise and resources, making it accessible only to sophisticated and advanced threat actors.”


What are the intentions of cybercriminals?

There is a myriad of possibilities, but one of the most common motivations is financial gain: opening up more theft and extortion opportunities, especially by targeting companies and big organisations.


Hackers may also be looking for some kind of recognition from their peers, or simply want to overcome a major challenge and get a sense of achievement.


There can also be military or political purposes, a practice known as hacktivism (e.g. influencing public opinion about a particular subject or person, or raising awareness of human rights issues within an organisation, among others).


In a professional context, there may also be an intention to commit corporate espionage, that is, to gain a commercial advantage over a competing organisation by accessing its data.


Leveraging Artificial Intelligence in cybersecurity

When it comes to AI-driven cyberattacks, fighting fire with fire seems to be the preferred approach. This means that companies and organisations are integrating AI and ML into their security systems to build powerful tools capable of detecting threats and responding to cyberattacks, and to keep refining the data those tools are trained on.


“There is an ongoing arms race between defenders and attackers, with both sides leveraging AI technology. As a result, the defending side is continuously developing and deploying AI-driven security solutions to counter AI threats”, Amine explains.
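As a small illustration of that defending side, the sketch below trains a tiny text classifier to flag phishing-style wording. It is a hedged toy example only (scikit-learn and a four-message, made-up dataset); real e-mail security products are trained on large labelled corpora and use far more signals than raw text.

```python
# Hedged sketch: a minimal AI-based phishing filter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your bank details to avoid account closure",
    "Meeting notes attached, see you at the standup tomorrow",
    "Quarterly report draft for your review before Friday",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Turn text into TF-IDF features and fit a linear classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please verify your password to keep your account active"]
print(model.predict(suspect))        # predicted class for the new message
print(model.predict_proba(suspect))  # confidence behind the decision
```

The same pipeline can be retrained as new phishing campaigns appear, which is what keeps the defending side in the race Amine describes.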


In fact, a recent study by Acumen Research and Consulting estimates that the global market for AI-based security products will grow at a compound annual rate of 27.8% between 2022 and 2030, reaching a market value of US$ 133.8 billion.


Benefits of using AI within cybersecurity
  • Better and faster threat response
    AI systems are trained to analyse large volumes of data and flag potential threats. Responses can also be automated, making them faster and sharper (see the sketch after this list).

  • Continuous learning
    One of the advantages of AI algorithms is that they learn from previous experience, becoming more effective against future attacks.

  • Less human error
    Because AI automates these processes, the need for human intervention is reduced and, with it, the risk of human error.

  • Cost savings
    By preventing data breaches and damaging cyberattacks, AI-driven cybersecurity helps organisations avoid unnecessary expenses. They can also save on labor costs, or make better use of their professionals for other core activities.
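The first two benefits above can be sketched in a few lines of code. The example below is purely illustrative (synthetic traffic features, an IsolationForest model from scikit-learn, and a made-up blocking rule, none of which come from the article): a detector learns what normal traffic looks like, and an automated handler then reacts to outliers without human intervention.

```python
# Hedged sketch: anomaly detection with an automated response.
# All feature names and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: (requests per minute, kilobytes transferred).
normal_traffic = rng.normal(loc=[50, 500], scale=[10, 100], size=(1000, 2))

# Train the detector on normal behaviour only.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

def handle_event(event: np.ndarray) -> str:
    """Automated response: block clear outliers, allow everything else."""
    if detector.predict(event.reshape(1, -1))[0] == -1:  # -1 marks an anomaly
        return "blocked"  # e.g. drop the connection or quarantine the host
    return "allowed"

print(handle_event(np.array([52.0, 480.0])))    # typical traffic -> allowed
print(handle_event(np.array([900.0, 9000.0])))  # extreme outlier -> blocked
```

Because the detector, not a human analyst, scores every event, the response is immediate; in a real deployment the "blocked" branch would call into a firewall or incident-response platform rather than return a string.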


Where AI meets traditional security practices

Experts point out that AI can’t counteract cybercriminals by itself, just as more traditional approaches can’t. That’s why the best defense is to have them work together and complement each other.


These are some of the security practices currently being reinforced by AI:

  • Pentesting (penetration testing);
  • Multi-Factor Authentication (MFA);
  • Biometric technology: facial recognition, fingerprint scanner, etc.;
  • Virus/Malware detection;
  • Data loss prevention;
  • Fraud detection/prevention;
  • Intrusion detection/prevention;
  • Identity and access management;
  • Privacy risk and compliance management.


A look into the future

Considering what we know so far, Alter Solutions’ cybersecurity specialist Amine Boukar concludes that “AI is posing a serious threat that security professionals need to consider, and it wouldn’t be wise to entirely discard this threat”. 


“On the other hand”, he believes, “we should not yield to the overhype that surrounds it. The excessive publicity that surrounds AI can easily lead to misconceptions and inaccurate decisions regarding security. The most reasonable approach is to assess the actual risks of AI while maintaining a balanced and rational perspective.”
