Pulse Technology Blog

Does AI help create a better grade of cyber criminal?

Written by Pulse Technology | May 03, 2025

Most experts believe that the biggest cybersecurity threats today include the usual suspects: malware, social engineering, ransomware, and insider threats, among others.

One of those “others” is artificial intelligence (AI).

A Forbes article states that AI language models such as OpenAI’s ChatGPT, Microsoft’s Bing Chat and Google’s Bard AI continue to grow in their ability to process language. Other generative AI applications, including Midjourney, DALL-E and Stable Diffusion, can create visually appealing artwork with only a few prompts from the user.

In other words, AI’s abilities continue to expand rapidly. That’s great when AI is used for legitimate business purposes, but in the wrong hands it gives cyber criminals a significant advantage when they try to break into an unsuspecting company’s network. AI can reason, learn, make decisions and solve problems.

In the hands of cyber criminals, AI can enhance existing attacks, making it more difficult for antivirus software and spam filters to detect threats. AI can also be used to create fake data that sows confusion. For example, AI can impersonate public officials, an employer, a vendor or a trusted advisor.

A New York University article states that AI is being used to build on existing cyber attacks, making them more effective and efficient. Most commonly, AI lends itself to phishing scams, but it can be used to support the operations of many types of cyber attacks.

AI aids criminals in phishing and social engineering scams that target people. Large language models allow criminals to polish their scripts. In earlier days, scams were easier to identify because the language was never quite “just right”: the grammar seemed awkward and spelling errors gave them away. With polished grammar and the ability to reason and draw on large databases, AI makes cyber criminals more convincing and better communicators.

AI algorithms can automate parts of phishing campaigns, including sending mass emails, spoofing domains, and personalizing content. By analyzing significant amounts of data, AI tools can make their messages seem very real and very convincing. AI can also mine public data from social media platforms to build detailed profiles of a cyber criminal’s targets.

AI can further manipulate audio and visual media to make a message seem authentic. The NYU article notes that cyber criminals have used so-called “deepfake” techniques (a blend of “deep learning” and “fake”) to create and spread political misinformation, and even to trick a firm into transferring a large sum of money to a foreign bank account.

Another area of concern is AI’s ability to crack passwords. Cyber criminals use machine learning to improve their password-guessing algorithms, analyzing password datasets and generating likely variations.

You get the idea. While AI can be a big benefit for legitimate business purposes, those same capabilities can be used by cyber criminals to improve their methods of hacking into networks. Cyber criminals do not take vacations; they are constantly working to perfect their craft.

What can you do? Vigilance is key: stay aware of advances in AI and understand how cyber criminals may use them to their advantage, and to the disadvantage of business owners. We can help you protect your network against AI-driven threats. If you are looking for network protection, or even just want to have an initial conversation, please visit https://pulsetechnology.com or give us a call at 888-357-4277. We’re here to help!