Cisco’s ‘Radical’ Approach to AI Security by info.odysseyx@gmail.com January 21, 2025 written by info.odysseyx@gmail.com January 21, 2025 0 comment 10 views 10 Cisco is taking a radical approach to AI protection in its new AI Defense solution. In an exclusive interview with Rowan Cheung on Sunday Rundown AICisco executive vice president and CPO Jitu Patel said that AI Defense is “taking a radical approach to address challenges that existing security solutions are not equipped to handle.” AI Defense, announced last week, addresses risks in developing and deploying AI applications as well as identifying where AI is used in an organization. AI Defense can protect AI systems from attacks and model behavior across platforms with features such as: detection of shadow and unauthorized AI applications across public and private clouds; Automated testing of AI models for hundreds of potential safety and security issues; And Continuous validation protection against potential security and safety threats, such as prompt injection, denial of service and sensitive data leakage. The solution allows security teams to better protect their organization’s data by providing a comprehensive view of AI apps used by employees, creating policies that limit access to non-authorized AI tools, and enforcing protection against threats and confidential data loss while ensuring compliance. . “The adoption of AI exposes companies to new risks that traditional cybersecurity solutions don’t address,” said Kent Noyce, global head of AI and cyber innovation at the technology services company. Global technology St. Louis, said in a statement. “Cisco AI Defense represents a significant advancement in AI security, providing complete visibility of an enterprise’s AI assets and protection against evolving threats.” AI is a positive step for security MJ Kaufman, an author and trainer O’Reilly MediaA Boston-based operator of a learning platform for technology professionals, confirmed Cisco’s analysis of existing cybersecurity solutions. “Cisco is right,” he told TechNewsWorld. “Existing tools fail to deal effectively with many attacks against AI systems, such as prompt injection attacks, data leaks, and unauthorized model actions.” “Implementers must take action and implement solutions aimed at addressing them,” he added. Cisco is in a unique position to provide such solutions, noted Jack E. Gould, founder and principal analyst Jay Gould AssociatesAn IT consulting firm in Northborough, Mass “That’s because they have a lot of data from networking telemetry that can be used to power the AI capabilities they want to protect,” he told TechNewsWorld. Cisco wants to provide security across platforms — on-premises, cloud and multi-cloud — and across models, he added. “It will be interesting to see how many companies adopt it,” he said. “Cisco is definitely going in the right direction with this kind of capability because companies, generally speaking, aren’t looking at it very effectively.” Providing multi-model, multi-cloud protection is critical to AI security. “Multi-model, multi-cloud AI solutions expand an organization’s attack surface by introducing complexity in disparate environments with inconsistent security protocols, multiple data transfer points, and coordinated monitoring and incident response—factors that threat actors can more easily exploit,” Patricia Thain , CEO and Co-Founder Private AIa data protection and privacy firm in Toronto, told TechNewsWorld Regarding limitations While Cisco’s approach to embedding security controls at the network level through their existing infrastructure mesh shows promise, it also reveals limitations, maintains Dev Nag, CEO and founder. QueryPalA customer support chatbot based in San Francisco. “While network-level visibility provides valuable telemetry, many AI-specific attacks occur at application and model layers that network monitoring alone cannot detect,” he told TechNewsWorld. “Last year’s acquisition of Robust Intelligence gives Cisco important capabilities around model validation and runtime security, but their focus on network integration can be a gap in securing the actual AI development lifecycle,” he said. “Critical areas such as training pipeline security, model supply chain validation, and fine-tuning fences require deep integration with MLOps tooling that goes beyond Cisco’s traditional network-centric paradigm.” “Think about the headaches we’ve seen with open source supply chain attacks where the offending code is publicly visible,” he added. “Model supply chain attacks are nearly impossible to detect by comparison.” Nag noted that from an implementation perspective, Cisco AI Defense appears primarily to be a repackaging of existing security products with some AI-specific monitoring capabilities on top. “While their massive deployment footprint provides benefits for enterprise-wide visibility, the solution seems more reactive than transformational for now,” he maintained. “For some organizations starting their AI journey that are already working with Cisco security products, Cisco AI defenses can provide effective control, but those pursuing advanced AI capabilities will likely need a more sophisticated security architecture built for machine learning systems.” For many organizations, mitigating AI risk requires human penetration testers who understand how to ask questions of models that reveal sensitive information, added Karen Walsh, CEO Allegro SolutionsA cyber security consulting firm in West Hartford, Conn. “Cisco’s release suggests that their ability to create model-specific pipelines will mitigate these risks to prevent AI from learning bad data, responding to malicious requests, and sharing unintended information,” he told TechNewsWorld. “At the very least, we can hope that this will identify and mitigate baseline problems so that pen testers can focus on more sophisticated AI compromise techniques.” Complex needs on the way to AGI Written for Kevin Okemwa Windows CentralNoting that the launch of AI Defense could not come at a better time as major AI labs are closing in on developing true artificial general intelligence (AGI), which is supposed to replicate human intelligence. “As the AGI goes on every year, it’s not going to increase,” said James McQuigan, an attorney at Security Awareness. KnowBe4Security Awareness Training Provider in Clearwater, Fla. “AGI’s ability to think like a human with intuition and adaptation could revolutionize industry, but it also introduces risks that could have far-reaching consequences,” he told TechNewsWorld. “A robust AI security solution ensures that AGI is developed responsibly, minimizing risks such as rogue decision-making or unintended consequences.” “AI security is not just a ‘nice to have’ or something to think about in the years to come,” he added. “This is critical as we move toward AGI.” Annihilation of existence? Okemwa also wrote: “While AI defense is a step in the right direction, its adoption in organizations and large AI labs remains to be seen. Interestingly, OpenAI CEO (Sam Altman) acknowledges the technology’s threat to humanity but believes that AI will be smart enough to prevent AI from causing existential destruction.” “I see some optimism about AI’s ability to self-regulate and prevent catastrophic outcomes, but I also note in acceptance that aligning advanced AI systems with human values is still not an imperative but an afterthought,” said Adam Ennamly, Chief Risk Officer. And security officials at the General Bank of Canada told TechNewsWorld. “The idea that AI will solve its own existential risks is dangerously optimistic, as demonstrated by current AI systems that can already be used to create malicious content and bypass security controls,” added Stephen Kowsky, Field CTO. Slash NextA computer and network security company, in Pleasanton, Calif. “Technical safeguards and human supervision remain essential as AI systems are fundamentally driven by their training data and programmed intentions, not an inherent desire for human well-being,” he told TechNewsWorld. “People are quite creative,” Gold added. “I don’t buy into this whole doomsday nonsense. We will figure out a way to make AI work for us and do it safely. That’s not to say there won’t be bumps along the way, but we won’t all end up in ‘The Matrix.’ Share 0 FacebookTwitterPinterestEmail info.odysseyx@gmail.com previous post Sample Proposal for “Launching a Telehealth Platform for Rural Populations” next post Tech Mix Key to Saving Ailing Federal Broadband Program: RPT You may also like Ride-sharing and Robotaxis Decopled Revenue Model Problems February 17, 2025 Web Raiders run the Global Brut Force attack from 2.5M IPS February 12, 2025 Generator Tech, Robot, risk of emerging February 11, 2025 Robotaxis is bringing in the lift dallas’ with ‘2026 with’ February 11, 2025 Why did Qualcom lose his first leadership February 10, 2025 Lenovo’s ThinkPad X 1 Carbon has rewrite my MacBook Pro February 5, 2025 Leave a Comment Cancel Reply Save my name, email, and website in this browser for the next time I comment.