2025 Cybersecurity Predictions Influenced by AI

January 7, 2025

When it comes to cybersecurity in 2025, artificial intelligence is top of mind for many analysts and professionals.

Artificial intelligence will be deployed by both adversaries and defenders, but attackers will benefit more from it, maintains Willy Leichter, CMO of AppSOC, an application security and vulnerability management provider in San Jose, Calif.

“We know that AI will be increasingly used on both sides of cyber warfare,” he told TechNewsWorld. “However, attackers will be less constrained because they care less about AI accuracy, ethics, or unintended consequences. Techniques like highly personalized phishing and scouring networks for legacy vulnerabilities will benefit from AI.”

“While AI has huge potential defensively, there are more constraints, both legal and practical, that will slow adoption,” he said.

Chris Hauk, consumer privacy champion at Pixel Privacy, a publisher of online consumer security and privacy guides, predicts that 2025 will be a year of AI versus AI, as the good guys use AI to defend against AI-powered cyberattacks.

“It will probably be a back-and-forth battle for a year as both sides use information gathered from previous attacks to set up new attacks and new defenses,” he told TechNewsWorld.

Mitigating the Security Risks of AI

Leichter also predicted that cyber adversaries will begin targeting AI systems more often. “AI technology vastly expands the attack surface, with rapidly emerging threats to models, datasets, and machine learning operations (MLOps) systems,” he explained. “Furthermore, when AI applications move from the lab to production, the full security implications won’t be understood until an inevitable breach occurs.”

Carl Holmqvist, founder and CEO of Lastwall, an identity security firm based in Honolulu, agreed.

“The unchecked, widespread deployment of AI tools, often launched without a strong security foundation, will have dire consequences in 2025,” he told TechNewsWorld.

“Lacking adequate privacy measures and security frameworks, these systems will become prime targets for breaches and manipulation,” he said. “This Wild West approach to AI deployment will leave data and decision-making systems dangerously exposed, forcing organizations to urgently prioritize foundational security controls, transparent AI frameworks, and continuous monitoring to mitigate these growing risks.”

Leichter also maintains that security teams will need to take more responsibility for securing AI systems in 2025.

“It sounds obvious, but in many organizations, early AI projects have been driven by data scientists and business specialists, who often bypass traditional application security processes,” he said. “Security teams will be fighting a losing battle if they try to block or slow down AI initiatives, but they must bring rogue AI projects under the protection and compliance umbrella.”

Leichter also noted that AI will expand the attack surface for adversaries targeting the software supply chain in 2025. “We’ve already seen supply chains become a major vector for attack, as complex software stacks rely heavily on third-party and open-source code,” he said. “The explosion of AI adoption makes this target even bigger, with complex new attack vectors on datasets and models.”

“Understanding model lineage and maintaining the integrity of changing datasets is a complex problem, and currently, there is no effective way for a poisoned AI model to unlearn toxic data,” he added.
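Leichter’s lineage point is, at bottom, a provenance problem: if you cannot prove what a dataset looked like when it was approved, you cannot tell when it has been quietly altered. A minimal sketch of one common safeguard, a hash manifest checked before every training run, might look like the following in Python (the directory and manifest names are illustrative assumptions, not any particular product’s):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash of a single dataset file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot(data_dir: str) -> dict[str, str]:
    """Map every file under data_dir to its current hash."""
    return {str(p): sha256_of(p) for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}

def build_manifest(data_dir: str, manifest: str = "manifest.json") -> None:
    """Run once at dataset sign-off to record the approved state."""
    Path(manifest).write_text(json.dumps(snapshot(data_dir), indent=2))

def verify_manifest(data_dir: str, manifest: str = "manifest.json") -> list[str]:
    """Return files added, removed, or modified since sign-off."""
    recorded = json.loads(Path(manifest).read_text())
    current = snapshot(data_dir)
    return sorted(p for p in recorded.keys() | current.keys() if recorded.get(p) != current.get(p))

if __name__ == "__main__":
    # build_manifest("training_data")  # run once, when the dataset is approved
    tampered = verify_manifest("training_data")  # run before every training job
    if tampered:
        raise SystemExit(f"Dataset integrity check failed: {tampered}")
```

Production supply chain tooling goes well beyond this, with signed attestations and provenance metadata, but even a plain manifest turns silent dataset tampering into a loud pre-training failure.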
Data Poisoning Threats to AI Models

Michael Lieberman, CTO and co-founder of Kusari, a software supply chain security company in Ridgefield, Conn., sees the poisoning of large language models as a significant development in 2025. “This approach is probably more resource-intensive than simpler tactics, such as distributing malicious open-source LLMs,” he told TechNewsWorld.

“Most organizations aren’t training their own models,” he explained. “Instead, they rely on pre-trained models, often available for free. The lack of transparency about the origin of these models makes it easy for malicious actors to introduce harmful ones, as evidenced by the Hugging Face malware incident.”

That incident occurred in early 2024, when it was discovered that roughly 100 models containing hidden backdoors capable of executing arbitrary code on users’ machines had been uploaded to the Hugging Face platform.
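Backdoors of that kind typically hide in a model’s serialized weights; Python’s pickle format, which many checkpoint files are built on, can run arbitrary code the moment a file is loaded. One defensive habit is to statically inspect a downloaded file’s pickle opcodes before ever loading it. The sketch below is a minimal illustration of that idea, not a substitute for purpose-built scanners (the file name is a placeholder, and real checkpoint formats often wrap the pickle inside a zip archive that must be unpacked first):

```python
import pickletools

# Modules a benign ML model pickle has no business importing.
SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins", "socket", "runpy", "importlib"}

def scan_pickle(path: str) -> list[str]:
    """List suspicious imports found in a pickle file, without executing it."""
    findings: list[str] = []
    recent_strings: list[str] = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            # Remember recent string pushes; STACK_GLOBAL consumes the last two.
            recent_strings = (recent_strings + [arg])[-2:]
        if opcode.name == "GLOBAL":
            # Protocol <= 3: arg is a space-separated "module name" pair.
            if arg.split()[0].split(".")[0] in SUSPICIOUS:
                findings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) == 2:
            # Protocol >= 4: module and attribute arrive as the two prior strings.
            if recent_strings[0].split(".")[0] in SUSPICIOUS:
                findings.append(" ".join(recent_strings))
    return findings

if __name__ == "__main__":
    hits = scan_pickle("downloaded_model.pkl")  # placeholder file name
    if hits:
        raise SystemExit(f"Refusing to load; suspicious imports found: {hits}")
```

Community tools such as picklescan perform this kind of static analysis more thoroughly, and weight-only formats like safetensors avoid embedding executable code altogether.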
“Future data poisoning efforts will likely target major players such as OpenAI, Meta, and Google, which train their models on massive datasets, making such attacks more challenging to detect,” Lieberman predicted.

“In 2025, attackers will likely outpace defenders,” he added. “Attackers are financially motivated, while defenders often struggle to secure adequate budgets because security is not generally seen as a revenue driver. It could take a significant AI supply chain breach, similar to the SolarWinds Sunburst incident, to prompt the industry to take the threat seriously.”

Thanks to AI, 2025 will also see more threat actors launching more sophisticated attacks.

“The more capable and accessible AI becomes, the lower the barrier to entry for less skilled attackers, and the faster the speed at which attacks can be carried out,” explained Justin Blackburn, a senior cloud threat detection engineer at AppOmni, a SaaS security management software company in San Mateo, Calif.

“Additionally, the rise of AI-powered bots will enable threat actors to launch large-scale attacks with minimal effort,” he told TechNewsWorld. “Armed with these AI-powered tools, even less capable adversaries may be able to gain unauthorized access to sensitive data and disrupt services on a scale previously seen only from more sophisticated, well-funded attackers.”

Script Kiddies Grow Up

In 2025, the rise of agentic AI (AI capable of making independent decisions, adapting to its environment, and taking action without direct human intervention) will also compound the problem for defenders.

“Advances in artificial intelligence are expected to empower non-state actors to develop autonomous cyber weapons,” said Jason Pittman, a collegiate associate professor in the School of Cybersecurity and Information Technology at the University of Maryland Global Campus in Adelphi, Md.

“Agentic AI operates autonomously with goal-directed behavior,” he told TechNewsWorld. “Such systems can use frontier algorithms to detect vulnerabilities, infiltrate systems, and evolve their strategies in real time without human steering.”

“These features distinguish it from other AI systems that rely on predetermined instructions and require human input,” he explained. “Like the Morris Worm of decades past, the release of agentic cyber weapons may begin as an accident, which is even more troubling. This is because the accessibility of advanced AI tools and the proliferation of open-source machine learning frameworks lower the barrier to developing sophisticated cyber weapons, and once one is developed, its strong autonomy can allow it to easily slip past safeguards.”

As damaging as AI can be in the hands of threat actors, it can also help better protect data, such as personally identifiable information (PII).

“After analyzing more than six million Google Drive files, we found that 40% of them contained PII, putting businesses at risk of a data breach,” said Rich Vibert, co-founder and CEO of Metomic, a data privacy platform in London.

“As we enter 2025, we’ll see more companies prioritize automated data classification methods to reduce the amount of vulnerable information inadvertently stored in publicly accessible files and collaborative workspaces across SaaS and cloud environments,” he continued.

“Businesses will increasingly deploy AI-powered tools that can automatically identify, tag, and protect sensitive information,” he said. “This shift will enable companies to keep up with the massive amounts of data generated every day, ensuring that sensitive data is continuously protected and that unnecessary exposure is minimized.”

Nevertheless, 2025 could usher in a wave of disillusionment among security professionals when the hype around AI collides with reality.

“CISOs will deprioritize gen AI use by 10% due to a lack of quantifiable value,” wrote Cody Scott, a senior analyst at Forrester Research, a market research firm headquartered in Cambridge, Mass., in a company blog.

“According to Forrester’s 2024 data, 35% of global CISOs and CIOs consider exploring and deploying use cases for gen AI to improve employee productivity a top priority,” he noted. “The security products market has been quick to hype the expected productivity benefits of gen AI, but a lack of tangible results is fueling disillusionment.”

“The idea of an autonomous security operations center using gen AI generated a lot of hype, but it couldn’t be further from reality,” he continued. “In 2025, the trend will continue, and security practitioners will sink deeper into disenchantment as inadequate budgets and unrealized AI benefits reduce the number of security-focused gen AI deployments.”
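The automated classification Vibert describes generally begins with mechanical pattern matching, with ML-based entity recognition layered on top. As a purely illustrative sketch of that first pass (the patterns, file types, and directory name are assumptions, and real classifiers add context checks and far more categories):

```python
import re
from pathlib import Path

# Illustrative first-pass patterns; production classifiers add
# ML-based named-entity recognition and context validation on top.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\b\+?\d{1,2}[ .-]?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def classify_file(path: Path) -> dict[str, int]:
    """Count matches per PII category in one text file."""
    text = path.read_text(errors="ignore")
    return {name: len(m) for name, rx in PII_PATTERNS.items() if (m := rx.findall(text))}

def scan_workspace(root: str) -> dict[str, dict[str, int]]:
    """Flag every text file under `root` that appears to contain PII."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        hits = classify_file(path)
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    for file, hits in scan_workspace("shared_drive").items():  # placeholder directory
        print(file, hits)
```

In a real deployment, the findings would feed a tagging and access-control workflow, restricting sharing and alerting owners, rather than a print loop.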