IT leaders fear AI-driven cyber security costs will soar

IT leaders are concerned about the rocketing cost of cyber security tools, which are now flooded with AI features. Meanwhile, hackers are largely shunning AI, judging by the relatively few discussions about how they could use it posted on cybercrime forums.

In a survey of 400 IT security decision-makers by security company Sophos, 80% said they believe generative AI will significantly increase the cost of security tools. This tracks with separate Gartner research predicting that global tech spend will grow by almost 10% this year, largely due to AI infrastructure upgrades.

The Sophos research found that 99% of organizations list AI capabilities as a requirement for cyber security platforms, with the most common reason being to improve protection. However, only 20% of respondents cited this as their primary reason, indicating a lack of consensus on the necessity of AI tools in security.

Three-quarters of the leaders said that gauging the additional cost of AI features in their security tools is challenging. For example, Microsoft controversially increased the price of Office 365 by 45% this month to account for the inclusion of Copilot.

On the other hand, 87% of respondents believe AI-related efficiency savings will outweigh the added cost, which may explain why 65% have already adopted security solutions with AI capabilities. The release of the low-cost AI model DeepSeek R1 has raised hopes that the price of AI tools will soon drop across the board.

SEE: HackerOne: 48% of security professionals believe AI is risky

But costs aren’t the only concern highlighted by Sophos’ researchers. A significant 84% of security leaders worry that high expectations of AI tools’ capabilities will create pressure to reduce their teams’ headcount. An even greater proportion (89%) are concerned that flaws in the tools’ AI capabilities could work against them and introduce security threats.

“Poor quality and poorly implemented AI models can inadvertently introduce significant cyber security risk of their own, and the adage ‘garbage in, garbage out’ is especially relevant to AI,” warned Sophos researchers.

Cyber criminals don’t use AI as much as you might think

Security concerns may be discouraging cyber criminals from adopting AI as much as expected, according to separate research from Sophos. Despite analyst forecasts, the researchers found that AI is not yet widely used in cyberattacks. To assess the prevalence of AI use in the hacking community, Sophos examined posts on underground forums.

The researchers identified fewer than 150 posts about GPTs or large language models in the past year. By comparison, they found more than 1,000 posts about cryptocurrency and more than 600 threads related to the buying and selling of network access.

“Most threat actors on the cybercrime forums we investigated still don’t appear to be particularly enthusiastic or excited about generative AI, and we found no evidence of cyber criminals using it to develop new exploits or malware,” Sophos researchers wrote.

A Russian-language crime forum has had a dedicated AI area since 2019, but it holds only 300 threads, compared with more than 700 threads in the malware section and 1,700 in the network access section. However, the researchers noted that this could be considered “relatively rapid growth for a topic that has only been widely known in the last two years.”

Nevertheless, in one post, a user admitted to talking to a GPT for social reasons, to combat loneliness, rather than to stage a cyber attack. Another user replied that doing so is “bad for your opsec [operational security],” highlighting the community’s lack of trust in the technology.

Hackers use AI for spamming, intelligence gathering, and social engineering

Posts and threads that mention AI apply it to techniques such as spamming, open-source intelligence gathering, and social engineering; the latter includes using GPTs to generate phishing emails and spam texts.

Security company Vipre detected a 20% increase in business email compromise attacks in the second quarter of 2024 compared with the same period in 2023; AI was responsible for two-fifths of those attacks.

Other posts focus on “jailbreaking,” where models are instructed to bypass their safeguards with a carefully constructed prompt. Malicious chatbots designed specifically for cybercrime have been widespread since 2023. While older models such as WormGPT have fallen out of use, newer ones such as GhostGPT are still emerging.

Sophos’ research of the forums uncovered only a few “primitive and low-quality” attempts to generate malware, attack tools, and exploits using AI. Such efforts are not unheard of; in June, HP intercepted an email campaign spreading malware in the wild with a script that was “highly likely to have been written with the help of GenAI.”

Chatter about AI-generated code tended to be accompanied by sarcasm or criticism. For example, on a post containing allegedly handwritten code, one user replied, “Is this written with ChatGPT or something… this code clearly doesn’t work.” Sophos researchers said the general consensus was that using AI to create malware was for “lazy and/or low-skilled people looking for shortcuts.”

Interestingly, some posts discussed creating AI-enabled malware in an aspirational way, indicating that once the technology becomes available, the authors would like to use it in attacks. One post titled “The world’s first AI-driven autonomous C2” included the admission that “this is still just a product of my imagination for now.”

“Some users also use AI to automate routine tasks,” the researchers wrote. “But the consensus seems to be that most people don’t trust it for anything more complex.”
