2023 will be remembered as the year that artificial intelligence (AI) went mainstream, with generative AI technologies gaining mass adoption. However, the concept of using machines to simulate human intelligence isn’t novel. For many years, AI technologies have been used to enhance network security and detect fraud and malware, tasks they handle far faster and more efficiently than humans.
At the same time, AI poses unique threats. Hackers and other parties with nefarious intent can leverage the same capabilities to automate attacks, poison AI models, and steal private data. Denial-of-service (DoS), brute-force, deepfake, and social engineering attacks are just a few of the threats that can harness artificial intelligence. Here’s an overview of these and other risks AI poses to cybersecurity:
Automated Malware Attacks
AI technologies like ChatGPT are fully capable of writing executable code with little human help. People with entry-level programming skills can leverage artificial intelligence to write and test working code with ease. Although ChatGPT includes safeguards meant to stop it from producing harmful output, such as malware code, skilled programmers can find ways around these measures. This has already been demonstrated: a researcher identified a loophole and used the tool to create a nearly undetectable data-theft executable.
AI-powered tools of the future will offer even more capabilities, allowing attackers to create and automate malicious bots that steal data or infect networks. The malware can also be programmed to send data back, which can then be fed into AI systems to analyze failed and successful attacks. This level of automation and efficiency will lead to shorter malware-creation cycles and sophisticated attacks in which AI systems make real-time decisions to elude traditional detectors. What’s more, all of this will be possible with minimal human intervention.
Deepfakes and Impersonation
Phishing and impersonation are among the traditional tricks hackers rely on to penetrate networks and invade people’s privacy. For instance, hackers can create a fake eCommerce or finance website, complete with the information and features found in the legitimate version of the site. They can then trick users into logging in by sending an email impersonating the official website administrators. The user may not be able to tell the official website from the fake one the hacker built to harvest login credentials. Logging into the fake site can expose the user’s information and leave their accounts vulnerable.
One way to counter such phishing and impersonation is through real-time OTP (one-time password) messages and verification. For instance, leading online casino sites use multi-factor authentication to verify sensitive actions, such as unusual logins, withdrawals, and changes to account information (phone number, password). They also let players set gaming and withdrawal limits and send immediate prompts when those limits are changed, allowing them to safely enjoy slots, roulette, and other real money products.
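To make that mechanism concrete, here is a minimal sketch of how time-based one-time passwords (TOTP), the codes behind many multi-factor prompts, can be generated and verified. It uses only the Python standard library; the function names and the example secret are illustrative assumptions, not any particular provider’s implementation.

```python
# Minimal TOTP sketch (RFC 6238 style), for illustration only.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)


def verify(user_code: str, secret_b32: str) -> bool:
    """Compare the code the user typed against the expected value in constant time."""
    return hmac.compare_digest(user_code, totp(secret_b32))


if __name__ == "__main__":
    shared_secret = "JBSWY3DPEHPK3PXP"  # example base32 secret shared at enrollment
    print("Current code:", totp(shared_secret))
    print("Verified:", verify(totp(shared_secret), shared_secret))
```

Because each code is valid only for a short time window, a code harvested by a fake site cannot be replayed later, though real-time phishing relays remain a risk.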
Still, the risk of deepfakes persists in other sectors. AI can produce convincing fake voices, images, and videos of real people, making it easier than ever to impersonate someone.
Enhanced Attack Optimization
As AI tools become cheaper and more accessible, attacks will become both more frequent and more sophisticated. Generative AI, large language models, and machine learning can be used to scale cyberattacks to unprecedented levels. AI reduces the effort required to create new attacks, shortening the time hackers need to adapt to new security patches. The technology can also craft more complex attacks by learning from existing data new ways to undermine network security.
Cybercriminals can exploit geopolitical tensions to mount large-scale attacks and leverage AI to optimize phishing and ransomware campaigns. With generative AI, they can train models to refine attacks and circumvent legacy firewalls and protocols, resulting in advanced, optimized cyberattacks. AI can analyze vast amounts of data and scenarios to surface new insights and identify vulnerabilities that would take the average hacker years to discover. Hackers can also test and polish new malware much faster.
Physical and Privacy Threats
AI systems will become more accessible to the general public, with the technology expected to bolster manufacturing, autonomous driving, construction, medical systems, and more. As more AI-based models replace traditional human-operated systems, the risk to physical safety will increase. For instance, malicious actors can breach and tamper with self-driving cars or drones, resulting in accidents and injuries to passengers. The same can be said of AI-based medical and construction systems that can be hacked to injure a patient or create a hazardous condition.
AI systems rely on massive amounts of training data to make human-like decisions. Some of this data is sensitive or confidential information that hackers can exploit if they manage to trick the AI system into revealing its training data.
For instance, a 2023 ChatGPT bug exposed parts of other users’ chat histories. If the same happened to an AI system built for healthcare, profiling, or marketing, it could result in massive privacy breaches. Hackers can also program AI systems to spy on networks and companies.
Key Takeaways
AI poses a range of cybersecurity risks, and the threat is expected to grow rapidly as AI tools become more accessible. However, like most cybersecurity issues of the past, the risks of AI can also be countered with AI, resulting in a race between hackers and security solution providers. Whoever builds the most sophisticated system of the moment, or anticipates the next move, wins, but only for a while. For businesses, there’s an urgent need to understand emerging cybersecurity issues, establish robust defenses, and train personnel to mitigate risks and get the most from existing safeguards.