Anthropic's latest AI model lets hackers carry out attacks faster.
🔬 Research
#ai model
#anthropic
#claude
#review
#security
#cybersecurity
#hacking
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
AI company Anthropic announced that it will not release its new security-focused model, "Mythos" (Claude Mythos Preview), to the general public because of the high risk that it could be abused for cybercrime. Instead, it is providing the model selectively to major global technology and security companies such as Amazon, Google, Apple, and Microsoft. The model can scan for and uncover software vulnerabilities faster than hundreds of human hackers, and it has already demonstrated substantial results, discovering thousands of previously unknown security flaws in recent weeks. Anthropic judged that if such powerful offensive capabilities reached attackers before defenders, security threats would only intensify, and took this step so that major companies and the US government could build up their defenses in advance.
Full text
Anthropic will make its new AI model available to some of the world's biggest cybersecurity and software firms in an effort to slow the arms race ignited by AI in the hands of hackers, the company said Tuesday. Amazon, Apple, Cisco, Google, JPMorgan Chase and Microsoft, among other firms, will now have access to Anthropic's Mythos model for cyber defense purposes. That includes finding bugs in those firms' software and testing whether specific hacking techniques work on their products.

Mythos (officially dubbed "Claude Mythos Preview") is not ready for a public launch because of the ways it could be abused by cybercriminals and spies, according to Anthropic. That prospect has prompted widespread concern in Washington and in Silicon Valley. Experts have told CNN that the speed and scale of AI agents looking for vulnerabilities, far beyond normal human capabilities, represent a sea change in cybersecurity. A single AI agent could scan for vulnerabilities and potentially exploit them faster and more persistently than hundreds of human hackers.

"We did not feel comfortable releasing this generally," Logan Graham, who heads the team at Anthropic that tests its AI models' defenses, told CNN. "We think that there's a long way to go to have the appropriate safeguards."

Anthropic has also briefed senior US officials "across the US government" on Mythos' full offensive and defensive cyber capabilities, an Anthropic official told CNN. The firm has also "made itself available to support the government's own testing and evaluation of the technology," the official said.

Anthropic executives hope the selective release of Mythos to companies that serve billions of users will help even the playing field with attackers. The goal is to head off major security flaws in widely used internet browsers and operating systems before they are released publicly.
Other firms or organizations that Anthropic said will have access to Mythos include chipmakers Broadcom and Nvidia; the nonprofit Linux Foundation, which supports the popular Linux operating system that powers many phones and supercomputers; and cybersecurity vendors CrowdStrike and Palo Alto Networks.

"If models are going to be this good — and probably much better than this — at all cybersecurity tasks, we need to prepare pretty fast," Graham told CNN. "The world is very different now if these model capabilities are going to be in our lives."

A blog post previewing Mythos' capabilities, which leaked last month, claimed that the AI model was "far ahead" of other models' cyber capabilities. Mythos "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders," said the blog post, which Fortune first reported. Some of the concerns around how Mythos could be abused by bad actors were overblown, experts previously told CNN. But the leak also pointed to an uncomfortable truth, those sources said: Barring a change in course, the gap between attackers and defenders enabled by AI could widen further.

Anthropic claims Mythos has already produced impactful results. The model has in recent weeks found "thousands" of previously unknown software vulnerabilities, a rate that far outpaces human researchers, the firm said. CNN could not immediately verify this figure. Such software flaws can be painstaking for human researchers to find and are coveted by spy agencies and cybercriminals for conducting stealthy hacks.

But cybersecurity experts have been using AI to protect against exploits long before Mythos arrived. Gadi Evron and other security researchers in December released a tool based on Anthropic's Claude model to generate fixes for severe software vulnerabilities. "Unlike attackers, defenders don't yet have AI capabilities accelerating them to the same degree," Evron, the founder of AI security firm Knostic, told CNN.
"However, the attack capabilities are available to attackers and defenders both, and defenders must use them if they're to keep up."

Correction: An earlier version of this story incorrectly described the technology Anthropic is making available to other companies. Anthropic is sharing its new AI model.
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.