
US Treasury Publishes AI Risk Guidebook for Financial Institutions

AI News | 💼 Business
#AI risk #tip #guidebook #financial institutions #risk management #US Treasury

Summary

To help the financial sector manage the risks that come with adopting artificial intelligence (AI), the US Treasury has released the Financial Services AI Risk Management Framework (FS AI RMF) and an accompanying Guidebook, developed in collaboration with more than 100 financial institutions. The framework supplements the general NIST guidance with 230 control objectives tailored to the financial sector, helping institutions assess their stage of AI adoption and manage risks such as algorithmic bias and cyber vulnerabilities. By applying differentiated controls at each stage, from initial to embedded, it aims to give financial institutions the governance needed to keep innovating with AI responsibly.

Why It Matters

Developer Perspective

Under review.

Researcher Perspective

Under review.

Business Perspective

Under review.

Full Article

The US Treasury has published a set of documents for the US financial services sector that set out a structured approach to managing AI risks in operations and policy (see the 'Resources and Downloads' subheading towards the bottom of the linked page). The CRI Financial Services AI Risk Management Framework (FS AI RMF) is accompanied by a Guidebook [.docx] that details the framework, which was developed through a collaboration among more than 100 financial institutions and industry organisations, with input from regulators and technical bodies. The objective of the FS AI RMF is to help financial institutions identify, evaluate, manage, and govern the risks associated with AI systems so that firms can continue adopting AI technologies responsibly.

Sector-specific framework

AI systems introduce risks that existing technology governance frameworks do not address, including algorithmic bias, limited transparency in decision processes, cyber vulnerabilities, and complex dependencies between systems and data. Large language models (LLMs) raise particular concerns because their behaviour can be difficult to interpret or predict: unlike traditional software, which is deterministic, an AI system's output varies depending on context. Financial institutions already operate under extensive regulation, and there is a raft of general guidance such as the NIST AI Risk Management Framework. Applying general frameworks to the operations of financial institutions, however, lacks the detail that reflects sector practices and regulatory expectations. The FS AI RMF is positioned as an extension of the NIST framework, adding sector-specific controls and practical implementation guidance. The Guidebook explains how firms can assess their current AI maturity and implement controls to limit their risk, with the aim of promoting consistent, responsible AI practices while supporting innovation in the sector.

Core structure

The FS AI RMF connects AI governance with the broader governance, risk, and compliance processes that already apply to financial institutions. The framework contains four main components. The first is an AI adoption stage questionnaire that lets organisations determine the maturity of their AI use. The second is a risk and control matrix containing a set of risk statements and control objectives aligned with the adoption stages. The Guidebook explains how to apply the framework, while a separate control objective reference guide provides examples of controls and supporting evidence. In total, the framework defines 230 control objectives organised under four functions adapted from the broader NIST AI Risk Management Framework: govern, map, measure, and manage. Each function contains categories and subcategories that describe the elements of effective AI risk management and governance.

Assessing AI maturity

The adoption stage questionnaire determines the extent to which an organisation is using AI. Some firms, for example, rely on traditional predictive models in limited applications, others deploy AI in core business processes, and others use AI only in customer-facing roles. The questionnaire helps organisations determine where they currently sit on the spectrum of AI use, evaluating factors such as the business impact of AI, governance arrangements, deployment models, use of third-party AI providers, organisational objectives, and data sensitivity. Based on this assessment, organisations are classified into four stages of AI adoption:

- initial stage: little or no operational AI deployment; AI may be under consideration but is not embedded.
- minimal stage: limited AI use in low-risk areas or isolated systems.
- evolving stage: more complex AI systems, including applications that involve sensitive data or external services.
- embedded stage: AI plays a significant role in business operations and decision-making.

These stages help institutions focus their efforts on controls appropriate to their maturity level. A firm at an early stage does not need to implement every control immediately, but as AI becomes more integrated, the framework introduces additional controls to address the growing level of risk.
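To make the stage-differentiated design more concrete, the sketch below models a few entries of a risk and control matrix in Python and filters them by adoption stage. It is only an illustration of the idea described above: the four functions and the four adoption stages come from the article, but the example objectives, identifiers, category names, and the stage at which each objective first applies are invented for demonstration and are not taken from the FS AI RMF.

```python
from dataclasses import dataclass
from enum import IntEnum


class AdoptionStage(IntEnum):
    """The four AI adoption stages described in the article, ordered by maturity."""
    INITIAL = 1
    MINIMAL = 2
    EVOLVING = 3
    EMBEDDED = 4


@dataclass(frozen=True)
class ControlObjective:
    """One entry in a risk and control matrix (fields are illustrative).

    `function` is one of the four NIST-derived functions (govern, map,
    measure, manage); `min_stage` is the earliest adoption stage at which
    this hypothetical objective would apply.
    """
    objective_id: str
    function: str        # "govern" | "map" | "measure" | "manage"
    category: str
    description: str
    min_stage: AdoptionStage


def applicable_controls(stage: AdoptionStage,
                        matrix: list[ControlObjective]) -> list[ControlObjective]:
    """Return the control objectives a firm at `stage` would be expected to meet."""
    return [c for c in matrix if stage >= c.min_stage]


# Hypothetical entries for illustration only -- not taken from the framework.
matrix = [
    ControlObjective("GV-01", "govern", "Accountability",
                     "Assign senior ownership for AI risk.", AdoptionStage.INITIAL),
    ControlObjective("MG-07", "manage", "Incident response",
                     "Maintain AI-specific incident response procedures.", AdoptionStage.MINIMAL),
    ControlObjective("MS-12", "measure", "Bias monitoring",
                     "Monitor models for algorithmic bias.", AdoptionStage.EVOLVING),
]

for control in applicable_controls(AdoptionStage.MINIMAL, matrix):
    print(control.objective_id, "-", control.description)
```

Read this way, moving to a later adoption stage simply widens the set of control objectives a firm is expected to evidence, which mirrors the Guidebook's point that early-stage firms do not need to implement every control immediately.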
Risk and control

The control objectives for each AI adoption stage address governance and operational topics including data quality management, fairness and bias monitoring, cybersecurity controls, transparency of AI decision processes, and operational resilience. The Guidebook provides examples of possible controls and of the types of evidence institutions can use to demonstrate compliance; each firm must determine which controls fit it best. The framework also recommends maintaining incident response procedures specific to AI systems and creating a central repository for tracking AI incidents, processes that help organisations detect failures and improve governance over time (a minimal sketch of such a repository appears at the end of this article).

Trustworthy AI

The framework incorporates principles for trustworthy AI defined as validity and reliability, safety, security
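The article does not describe how the central AI incident repository recommended under 'Risk and control' should be built, so the following is a purely illustrative sketch: it records incidents as JSON lines in a flat file, with fields chosen for demonstration rather than drawn from the Guidebook.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class AIIncident:
    """A single entry in a central AI incident repository (fields are illustrative)."""
    system: str                 # which AI system or model was involved
    summary: str                # what happened
    severity: str               # e.g. "low" | "medium" | "high"
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    remediation: str = ""       # follow-up actions taken


def record_incident(repository: Path, incident: AIIncident) -> None:
    """Append an incident to a JSON-lines file acting as the central repository."""
    with repository.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(incident)) + "\n")


record_incident(
    Path("ai_incidents.jsonl"),
    AIIncident(system="credit-scoring-model",
               summary="Drift in approval rates for one customer segment.",
               severity="medium",
               remediation="Model retrained; fairness checks added to monitoring."),
)
```

In practice a firm would more likely fold such records into its existing governance, risk, and compliance tooling, but even a minimal log like this supports the framework's stated goal of detecting failures and improving governance over time.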
