Talking about the impact of agentic artificial intelligence is complicated, given how early we are in its development (let alone its implementation in companies). There are gurus of all kinds proclaiming the umpteenth technological revolution, almost at the level of a transcendental change in the nature of work and social organization. Others, including executives involved in the future of these systems, cool the forecasts, setting deadlines of up to a decade before we see their true impact.

Be that as it may, the truth is that we are fully immersed in this matter, and the big brands in the sector are not going to let slip this entry point for automation and cost reduction across the global productive fabric. For now, there are few examples at scale on the horizon (in Spain there is hardly anything beyond Repsol's case). And while the focus is on the economic benefits and the occupational or social risks, it is hard to find anyone who also talks about the cybersecurity risks of the agentic era.

Here are some predictions in this regard. Palo Alto Networks estimates that autonomous agents will outnumber humans next year by a ratio of 82 to 1. It is an estimate that is difficult to believe given the current low adoption of these systems beyond pilot tests. IDC, together with Microsoft, presents somewhat more realistic figures: 1.3 billion agents in production by 2028, which means 0.37 agents per real worker (taking into account the 3.5 billion people who make up the world's active population).

Whether we believe some numbers, others, or none at all, the undeniable thing is that governing and securing corporate environments full of autonomous agents is not going to be easy. If the classic cybersecurity perimeter was first destroyed by the arrival of the cloud and teleworking, we now face controlling identities and accesses behind which there are no humans, but automated systems.

“Three out of four agentic AI projects pose a serious security risk, mainly due to a lack of governance,” Marc Sarrias, head of Palo Alto for Spain and Portugal, detailed this week. “A single falsified order can trigger a cascade of automated actions, eroding trust.” “Malicious elements can be placed in the agents themselves, in the data on which they work, or in the entire process around the agent,” added Jordi Botifoll, the firm's vice president for Southern EMEA and Emerging Markets.

Prompt injections, insiders and freelancers with access to the “keys to the kingdom” who are prime targets for cyber attackers, hidden backdoors with high privileges… There are many ways to compromise AI agents, the natural evolution of an artificial intelligence that is not exactly well secured either. In this regard, a Stanford study estimated that only 6% of organizations currently apply advanced cybersecurity frameworks for artificial intelligence.
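For illustration, one basic countermeasure against the attack vectors listed above is to place an allowlist gate between an agent and the actions it can trigger, so that a prompt-injected instruction cannot invoke high-privilege operations. This is a minimal sketch with hypothetical names (`ALLOWED_TOOLS`, `guarded_call`), not any real product's API:

```python
# Sketch: gate an agent's tool calls through an explicit allowlist
# before execution, so an injected instruction cannot reach
# high-privilege actions. All names here are illustrative.

ALLOWED_TOOLS = {"search_docs", "summarize"}  # low-privilege tools only

def guarded_call(tool: str, args: dict) -> str:
    """Refuse any tool the policy does not explicitly allow."""
    if tool not in ALLOWED_TOOLS:
        return f"blocked: '{tool}' is not in the allowlist"
    return f"executed: {tool}({args})"

# A prompt-injected request for a privileged action is rejected:
print(guarded_call("delete_records", {"table": "users"}))
# A legitimate low-privilege call passes:
print(guarded_call("search_docs", {"query": "quarterly report"}))
```

Real deployments layer this with identity controls and audit logging, but the principle is the same: the agent's own instructions are never the last line of defense.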

The answer to this threat is not entirely clear. According to experts, it will depend on a multitude of factors: from closer collaboration between CISOs and data analysts, to real governance of the underlying data, to identity controls that effectively cover agents. It will also depend on the firms that build these agents (whose guardrails are the first line of defense, though also the most vulnerable) and on the cybersecurity providers that manage to adapt to the new times.

Without a doubt, a topic that will generate plenty of discussion in the coming months, whichever adoption forecasts end up being confirmed. And one that is more than relevant right now, in the run-up to International Cybersecurity Day, celebrated this Sunday.
