XAI: Explainable Artificial Intelligence

hackernews | 🔬 Research
#review #xai #machine-learning #explainable-ai #artificial-intelligence #autonomous-systems
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

Explainable Artificial Intelligence (XAI) focuses on making complex AI models and their decisions understandable to humans through transparent techniques. By leveraging methods like SHAP or LIME, XAI aims to reveal the reasoning behind AI outputs, addressing concerns about bias and trustworthiness in critical applications such as healthcare and finance.
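The core idea behind model-agnostic methods such as LIME is to probe a black-box model with perturbed inputs and fit a simple, interpretable surrogate around one prediction. The sketch below implements that idea in miniature with NumPy only; the sigmoid "black box" with hidden weights is a hypothetical stand-in for a real trained model, and the kernel width and sampling scale are illustrative choices, not values from any library:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an opaque model: a logistic score whose
# weights are hidden from the explainer. A real workflow would query
# an actual trained classifier here.
HIDDEN_W = np.array([2.0, -1.0, 0.1])

def black_box(X):
    return 1.0 / (1.0 + np.exp(-X @ HIDDEN_W))

def lime_style_explanation(x0, n_samples=2000, scale=0.3, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around x0 (the core LIME idea)."""
    # 1. Perturb the instance of interest.
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    # 2. Query the black box for predictions on the perturbations.
    y = black_box(X)
    # 3. Weight samples by proximity to x0 (RBF kernel).
    dists = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 4. Solve weighted least squares: (A^T W A) beta = A^T W y.
    A = np.hstack([np.ones((n_samples, 1)), X])   # intercept + features
    Aw = A * w[:, None]
    beta, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return beta[1:]                               # per-feature attributions

x0 = np.array([0.2, -0.1, 0.4])
attr = lime_style_explanation(x0)
print(attr)  # largest-magnitude attribution should be feature 0
```

The surrogate's coefficients recover the hidden weights' ordering: feature 0 dominates, feature 1 pushes the score down, feature 2 barely matters. Real SHAP or LIME usage adds careful sampling, feature discretization, and regularized surrogates on top of this skeleton.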

Full Text

Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machine's current inability to explain its decisions and actions to human users (Figure 1). The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.

The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:

- Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
- Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user (Figure 2). Our strategy is to pursue a variety of techniques in order to generate a portfolio of methods that will provide future developers with a range of design options covering the performance-versus-explainability trade space.
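The goal of models that "explain their rationale" can be illustrated, in a much-simplified form, by a classifier whose decision procedure is transparent enough to state its own reasoning. The toy below is a hypothetical illustration only (the feature name and threshold are invented); the XAI program targets far richer models and explanation interfaces:

```python
# Toy interpretable classifier: a single threshold rule that can state
# its own rationale in plain language. Hypothetical illustration only.
class ThresholdRule:
    def __init__(self, feature_name, threshold):
        self.feature_name = feature_name
        self.threshold = threshold

    def predict(self, x):
        return x > self.threshold

    def explain(self, x):
        relation = ">" if x > self.threshold else "<="
        return (f"Predicted {self.predict(x)} because {self.feature_name} "
                f"= {x:.1f} {relation} threshold {self.threshold:.1f}")

rule = ThresholdRule("signal_strength", 0.5)
print(rule.explain(0.8))
# prints "Predicted True because signal_strength = 0.8 > threshold 0.5"
```

The trade space mentioned above is exactly the gap between a rule like this, which is fully explainable but weak, and a deep network, which is accurate but opaque; the program's aim is techniques that push models toward both ends at once.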
XAI is one of a handful of current DARPA programs expected to enable "third-wave AI systems," where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real-world phenomena.

The XAI program is focused on the development of multiple systems by addressing challenge problems in two areas: (1) machine learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine learning problems to construct decision policies for an autonomous system to perform a variety of simulated missions. These two challenge problem areas were chosen to represent the intersection of two important machine learning approaches (classification and reinforcement learning) and two important operational problem areas for the DoD (intelligence analysis and autonomous systems). In addition, researchers are examining the psychology of explanation.

XAI research prototypes are tested and continually evaluated throughout the course of the program. In May 2018, XAI researchers demonstrated initial implementations of their explainable learning systems and presented results of initial pilot studies of their Phase 1 evaluations. Full Phase 1 system evaluations are expected in November 2018. At the end of the program, the final deliverable will be a toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems. After the program is complete, these toolkits would be available for further refinement and transition into defense or commercial applications.

This analysis was written by the Genesis Park editorial team with the aid of AI. The original article is available via the source link.

