
Global AI security hampered by indecision, regulatory delays


Governments want to build safety guardrails around artificial intelligence, but roadblocks and indecision among nations are delaying cross-border agreements on which priorities to set and which obstacles to avoid.

In November 2023, the United Kingdom published its Bletchley Declaration, an agreement with 28 countries, including the United States, China, and the European Union, to strengthen global cooperation on artificial intelligence safety.

Efforts to pursue AI safety regulations continued with the second Global AI Summit in May, during which the UK and the Republic of Korea secured a commitment from 16 global AI technology companies to a set of safety outcomes building on that agreement.

“The Declaration fulfills key summit objectives by establishing shared agreement and responsibility on the risks, opportunities, and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration,” Britain said in a separate statement accompanying the announcement.

The European Union’s AI Act, adopted in May, became the world’s first major law governing AI. It includes enforcement powers and penalties, such as fines of $38 million or 7% of annual global revenue, whichever is higher, for companies that violate the law.
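The Act’s tiered penalties apply whichever of the two figures is greater. A minimal sketch of that arithmetic in Python, using the dollar figure cited above (the exact caps in the Act are set in euros and vary by violation tier):

    def max_ai_act_penalty(annual_global_revenue_usd,
                           fixed_cap_usd=38_000_000,
                           revenue_share=0.07):
        # The applicable ceiling is whichever figure is greater:
        # the fixed fine or the share of annual global revenue.
        return max(fixed_cap_usd, revenue_share * annual_global_revenue_usd)

    # Example: a company with $2 billion in annual global revenue
    # faces up to max($38M, 7% of $2B) = $140 million.
    print(max_ai_act_penalty(2_000_000_000))  # 140000000.0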

More recently, in a Johnny-come-lately response, a bipartisan group of US senators recommended that Congress draft $32 billion in emergency spending legislation for AI and released a report saying the US must harness AI opportunities and address its risks.

“Governments need to engage with AI, especially when it comes to national security. We need to harness the opportunities of AI but also be wary of the risks. The only way for governments to do this is to be informed, and being informed requires a lot of time and money,” Joseph Thacker, principal AI engineer and security researcher at SaaS security company AppOmni, told TechNewsWorld.

AI security is essential for SaaS platforms

The importance of AI security grows by the day. Almost every software product, including AI applications, is now built as a software-as-a-service (SaaS) application, Thacker noted. Consequently, ensuring the security and integrity of these SaaS platforms will be critical.

“We need strong security measures for SaaS applications. Investing in SaaS security should be a top priority for any organization developing or deploying AI,” he suggested.

Existing SaaS vendors are adding AI to everything, introducing more risk. Government agencies should take this into consideration, he maintained.

US Response to AI Security Needs

Thacker wants the US government to take a faster and more deliberate approach to confronting the reality of missing AI safety standards. However, he praised the commitment of 16 major AI companies to prioritize the security and responsible deployment of frontier AI models.

“This shows a growing awareness of AI risks and a willingness to commit to reducing them. However, the real test will be how well these organizations follow through on their promises and how transparent they are in their security practices,” he said.

Still, his praise fell short in two key areas: he saw no mention of outcomes or aligned incentives. Both are extremely important, he added.

According to Thacker, accountability requires AI companies to disclose their safety frameworks, which would provide insight into the quality and depth of their testing. Transparency allows for public scrutiny.

“This can force knowledge sharing and the development of best practices across industries,” he observed.

Urgent legislative action is needed in this space. However, he thinks significant movement from the US government in the near future will be challenging, given how slowly US officials usually move.

“Having a bipartisan group come together to make these recommendations will hopefully start a lot of conversation,” he said.

Still navigating the unknown in AI regulation

The Global AI Summit was a big step forward in the evolution of AI, agrees Melissa Ruzzi, director of artificial intelligence at AppOmni. Regulations are key.

“But before we think about setting regulations, a lot more research needs to be done,” she told TechNewsWorld.

This is where collaboration among companies in the AI industry to voluntarily join AI safety initiatives is crucial, she added.

“The first challenge is to set limits and explore objective measures. I don’t think we’re ready to set them yet for the AI field as a whole,” said Ruzzi.

It will take more investigation and data to determine what those measures might be. Ruzzi added that one of the biggest challenges is keeping pace with technological development without hindering its advancement through AI regulations.

Start by defining AI harm

According to David Brauchler, chief security consultant at NCC Group, governments should look to harm definitions as a starting point for setting AI guidelines.

As AI technology becomes more commonplace, there may be a shift away from classifying an AI system’s risk by the computational capacity required to train it, a standard that featured in a recent US executive order.

Instead, classification may turn on the practical harm an AI system can cause in its functional context. Various pieces of legislation already hint at this possibility, he noted.

“For example, an AI system that controls traffic lights should incorporate much more security measures than a shopping assistant, even if the latter requires more computational power to train,” Brauchler told TechNewsWorld.
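To make the contrast concrete, here is a hypothetical Python sketch of the two classification approaches Brauchler describes. The compute threshold mirrors the reporting trigger in the 2023 US executive order; the context-to-risk mapping is purely illustrative:

    TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26  # reporting trigger cited in the 2023 US executive order

    # Harm-based tiers keyed on functional context, not training cost (illustrative assumptions).
    CONTEXT_RISK = {
        "traffic_signal_control": "high",  # direct physical-safety impact
        "shopping_assistant": "low",       # limited real-world harm
    }

    def risk_by_compute(training_flops):
        # Compute-based rule: cover anything trained above the threshold.
        return "covered" if training_flops >= TRAINING_COMPUTE_THRESHOLD_FLOPS else "not covered"

    def risk_by_context(deployment_context):
        # Harm-based rule: classify by where and how the system is deployed.
        return CONTEXT_RISK.get(deployment_context, "unclassified")

    # Brauchler's example: the shopping assistant may need more training
    # compute yet pose less practical harm than the traffic-light controller.
    print(risk_by_compute(5e26), risk_by_context("shopping_assistant"))       # covered low
    print(risk_by_compute(1e22), risk_by_context("traffic_signal_control"))   # not covered high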

So far, regulatory priorities for AI development and use have lacked a clear vision. Governments should prioritize the real impact on people from how these technologies are implemented, rather than trying to predict the long-term future of a rapidly changing technology, he observed.

If a present danger from AI technology emerges, governments can respond once that information is confirmed. Attempting to legislate against such threats in advance is a shot in the dark, Brauchler clarified.

“But if we look at preventing harm to individuals through impact-targeted legislation, we don’t have to predict how AI will change in form or fashion in the future,” he said.

Balanced government control, legislative oversight

Thacker sees a critical balance between regulation and oversight in controlling AI. The result should neither stifle innovation with heavy-handed legislation nor rely entirely on company self-regulation.

“I believe that a light-touch regulatory framework coupled with high-quality oversight is the way to go. Governments should guard and enforce compliance while allowing responsible development to continue,” he argued.

Thacker sees some parallels between the push for AI regulation and the nuclear weapons dynamic. He warned that countries that achieve AI dominance could gain significant economic and military advantages.

“This creates incentives for countries to rapidly develop AI capabilities. However, global cooperation on AI security is more feasible than it was for nuclear weapons because our network effects through the internet and social media are greater,” he observed.
