Agentic AI Governance Challenges under the EU AI Act in 2026
AI News
🔬 Research
#agentic ai
#ai governance
#ai agents
#eu ai act
#regulatory response
#review
#regulatory review
Original source: AI News · Summarized and analyzed by Genesis Park
Summary
Under the EU AI Act, whose enforcement begins this August, governance of agentic AI operating in high-risk areas is becoming critical, and IT leaders must build thorough controls and audit systems to avoid substantial fines for violations. To that end, organisations must build into their systems unique identification for every agent, tamper-proof cryptographically secured activity logs, immediate revocation of authority in emergencies, and meaningful human oversight grounded in sufficient context. In multi-agent environments in particular, security policies must be tested rigorously from the development stage onward to prevent cascading failures, and every technical activity must be transparently demonstrable to regulators on demand.
Body
AI agents hold the promise of automatically moving data between systems and triggering decisions, but in some cases they can act without a clear record of what they did, when, and why. That creates a potential governance problem for which IT leaders are ultimately responsible. If an organisation can't trace an agent's actions and doesn't have proper control over its authority, leaders can't prove to regulators that a system is operating safely, or even lawfully. The issue is set to become more pressing from August this year, when enforcement of the EU AI Act kicks in. According to the text of the Act, there will be substantial penalties for failures of AI governance, especially in high-risk areas such as the processing of personally identifiable information or the execution of financial operations.

What IT leaders need to consider in the EU

Several steps can be taken to reduce risk. The ones that stand out are agent identity, comprehensive logs, policy checks, human oversight, rapid revocation, the availability of documentation from vendors, and the formulation of evidence for presentation to regulators.

Decision-makers have several options for creating a record of the activities undertaken by agentic systems. For example, a Python SDK (software development kit), Asqav, can sign each agent's action cryptographically and link all records into an immutable hash chain, a technique more commonly associated with blockchain technology. If someone or something changes or removes a record, verification of the chain fails. For governance teams, a verbose, centralised, possibly encrypted system of record for all agentic AIs provides data well beyond the scattered text logs produced by individual software platforms.
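The hash-chain technique described above can be sketched in a few lines of Python. This is a generic illustration of the idea, not the Asqav SDK's actual API; the class and method names (`AgentAuditLog`, `record`, `verify`) are invented for the example. Each entry is HMAC-signed and carries the hash of the previous entry, so altering or deleting any record breaks verification of the whole chain:

```python
import hashlib
import hmac
import json
import time

class AgentAuditLog:
    """Append-only log where each entry is HMAC-signed and chained to
    the previous entry's hash, so tampering breaks verification."""

    GENESIS = hashlib.sha256(b"genesis").hexdigest()

    def __init__(self, signing_key: bytes):
        self._key = signing_key
        self._entries = []
        self._last_hash = self.GENESIS

    def record(self, agent_id: str, action: str, detail: dict) -> dict:
        entry = {
            "agent_id": agent_id,
            "action": action,
            "detail": detail,
            "timestamp": time.time(),
            "prev_hash": self._last_hash,  # link to the previous entry
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["signature"] = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        self._last_hash = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-derive every hash and signature; any mismatch means tampering."""
        prev = self.GENESIS
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "signature"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            expected = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(entry["signature"], expected):
                return False
            prev = hashlib.sha256(payload).hexdigest()
        return True
```

Tampering with any recorded field, or removing an entry, causes `verify()` to return False, which is exactly the property auditors and regulators can rely on.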
Regardless of the technical details of how records are made and kept, IT leaders need to see exactly where, when, and how agentic instances are acting throughout the enterprise. Many organisations fail at this first step of recording automated, AI-driven activity. It is necessary to keep a registry of every agent in operation, with each uniquely identified, together with records of its capabilities and granted permissions. This 'agentic asset list' ties neatly into the requirements of Article 9 of the EU AI Act, under which, for high-risk areas, AI risk management has to be an ongoing, evidence-based process built into every stage of deployment (development, preparation, production) and kept under constant review. Decision-makers also need to be aware of the Act's Article 13: high-risk AI systems have to be designed so that those deploying them can understand a system's output. Thus, an AI system from a third party must be interpretable by its users (not an opaque code blob) and should be supplied with enough documentation to ensure its safe and lawful use. This requirement makes the choice of model and its method of deployment both a technical and a regulatory consideration.

Putting the brakes on

Any agentic deployment should offer a facility for revoking an AI's operating role, preferably within seconds, and that ability should be part of emergency-response processes. Revocation options should include the immediate removal of privileges, the immediate ceasing of API access, and the flushing of queued tasks. Human oversight, combined with enough context for humans to make informed decisions, means that human operators must be able to reject any proposed action. It is not adequate for the person reviewing a decision to see only a prompt or a confidence score.
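The 'agentic asset list' and rapid-revocation requirements can be combined in a minimal in-process sketch. Everything here is illustrative (the `AgentRegistry` class, the permission strings, the single-process queue are assumptions, not any vendor's API); a production system would back the registry with a database and propagate revocation to API gateways and task schedulers:

```python
import queue

class AgentRegistry:
    """Registry of every agent in operation: unique identity,
    capabilities, and granted permissions, plus an emergency revoke."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id: str, capabilities: list, permissions: set):
        self._agents[agent_id] = {
            "capabilities": list(capabilities),
            "permissions": set(permissions),
            "api_enabled": True,
            "task_queue": queue.Queue(),
            "revoked": False,
        }

    def authorize(self, agent_id: str, permission: str) -> bool:
        """Gate every action on identity, revocation state, and permission."""
        agent = self._agents.get(agent_id)
        return (agent is not None and not agent["revoked"]
                and agent["api_enabled"]
                and permission in agent["permissions"])

    def revoke(self, agent_id: str):
        """Emergency stop, intended to complete within seconds."""
        agent = self._agents[agent_id]
        agent["revoked"] = True
        agent["permissions"].clear()     # immediate removal of privileges
        agent["api_enabled"] = False     # immediate ceasing of API access
        while not agent["task_queue"].empty():
            agent["task_queue"].get_nowait()  # flush queued tasks
```

The design choice worth noting is that `authorize` is the single chokepoint: revocation works instantly because every subsequent action re-checks the registry rather than relying on credentials handed out earlier.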
Effective oversight needs contextual information, knowledge of every agent's authority, and enough time to intervene and prevent missteps.

Multi-agent considerations

While every agent's action should be recorded automatically and retained, multi-agent processes are particularly complex to track, as failures can occur along chains of agents. It is therefore important to test security policies during the development of any system that intends to use multiple agents. Finally, governing authorities may require logs and technical documentation at any time, and will certainly need them after any incident they have been made aware of.

Conclusion

The question for IT leaders considering AI on sensitive data or in high-risk environments is whether every aspect of the technology can be identified, constrained by policy, audited, interrupted, and explained. If the answer is unclear, governance is not yet in place.
This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.