Hackers Are Automating Cyberattacks With AI. Defenders Are Using It to Fight Back.

hackernews | 💼 Business
#ai #anthropic #claude #tip #defense #security #cyberattack #hacking
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

As hackers use generative AI to automate vulnerability scanning, malware development, and phishing campaigns, cybersecurity threats are growing more sophisticated. An Amazon report found that attackers used AI to strike more than 600 systems across over 55 countries, and CrowdStrike reports that post-breach breakout times have fallen 65 percent, to just 29 minutes. In response, defenders such as Anthropic and Darktrace are fighting fire with fire, hardening safeguards inside their models and rolling out AI-powered automated defense tools. Going forward, the advantage in cybersecurity will depend less on raw AI model capability than on who applies the technology faster and more strategically.

Article

Which side has the advantage will depend less on raw model capabilities and more on who adapts fastest.

Image credit: Joshua Woroniecki on Unsplash

Cybersecurity is an endless game of cat and mouse as attackers and defenders refine their tools. Generative AI systems are now joining the fray on both sides of the battlefield.

Though cybersecurity experts and model developers have been warning about potential AI-powered cyberattacks for years, there has been limited evidence hackers were widely exploiting the technology. But that is starting to change. Growing evidence shows hackers now routinely use the technology to turbocharge their search for vulnerabilities, develop new code exploits, and scale phishing campaigns. At the same time, AI firms are building defensive security measures directly into foundation models to keep pace with attackers. As cybersecurity becomes more automated, corporations will be forced to rapidly adapt as they grapple with the security of their products and systems in the age of AI.

A recent report by Amazon security researchers highlighted the growing sophistication of hackers' AI use. The researchers wrote that Russian-speaking attackers used multiple commercially available generative AI services to plan, manage, and conduct cyberattacks on organizations with misconfigured firewalls in over 55 countries this January and February. The attack targeted more than 600 systems protected by FortiGate firewalls and worked by scanning for internet-exposed login pages—these are essentially front doors leading into private company networks—and attempting to access them with commonly reused security credentials. Once inside, they extracted credential databases and targeted backup infrastructure. This activity suggests they may have been planning a ransomware attack.
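The credential-reuse pattern described above is one of the easier attack signatures to spot from the defender's side. As a minimal sketch (not any vendor's actual detection logic), a single source IP failing logins against many distinct accounts is a strong hint of password spraying rather than a user mistyping a password; the log tuple format and the five-account threshold here are illustrative assumptions:

```python
from collections import defaultdict

def find_spray_sources(events, min_accounts=5):
    """Flag source IPs that fail logins against many distinct accounts.

    events: iterable of (source_ip, username, success) tuples, e.g. parsed
    from firewall or VPN authentication logs (format is an assumption).
    """
    targets = defaultdict(set)
    for ip, user, success in events:
        if not success:  # only failed attempts contribute to the signal
            targets[ip].add(user)
    # One IP probing many different accounts suggests credential spraying;
    # a user retyping their own password only ever touches one account.
    return {ip for ip, users in targets.items() if len(users) >= min_accounts}

# Eight distinct accounts probed from one IP vs. a single benign failure.
events = [("203.0.113.9", f"user{i}", False) for i in range(8)]
events += [("198.51.100.4", "alice", False)]
print(find_spray_sources(events))  # {'203.0.113.9'}
```

Real deployments would add a time window and allow-lists, but the core idea — counting distinct targeted accounts per source, not raw failures — is what separates spraying from ordinary login noise.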
The researchers report the attack was largely unsuccessful but nonetheless highlighted how much AI can lower the barrier to large-scale attacks. Despite being relative amateurs, the group "achieved an operational scale that would have previously required a significantly larger and more skilled team," they wrote.

In the most vivid demonstration of AI's hacking potential, a research prototype known as PromptLock, created by a New York University researcher, used large language models to mount an entirely autonomous ransomware attack. The malware used AI to generate custom code in real time, scour the target system for sensitive data, and write personalized ransom notes based on what it found. While the tool was only a proof of concept, it highlighted the mounting threat of fully automated malware attacks.

A recent report from security firm CrowdStrike found that AI is also making attackers significantly more nimble. They discovered that average breakout times—the window between when an attacker first breaches a network and when they move into other systems—fell to just 29 minutes in 2025, 65 percent faster than in 2024.

In November, Anthropic also claimed it had detected a Chinese state-linked group using the company's Claude Code assistant to conduct a large-scale espionage campaign. The group used jailbreaks—prompts designed to bypass a model's safety settings—to trick Claude into carrying out the attacks. They also broke the campaign into smaller sub-tasks that looked more innocent. The company claimed the hackers used the tool to automate between 80 and 90 percent of the attack. "The sheer amount of work performed by the AI would have taken vast amounts of time for a human team," the company's researchers wrote in a blog post.
"At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match."

But while AI is reshaping the offensive cybersecurity landscape, defenders are deploying the tools too. In February, Anthropic released Claude Code Security, which can scan systems for vulnerabilities and propose fixes automatically. The tool can't carry out real-time security tasks like detecting and stopping live intrusions, but the news nonetheless sent stocks in traditional cybersecurity firms plummeting, according to Reuters.

Cybersecurity vendors are also embedding AI into their defensive platforms. CrowdStrike recently launched two new AI agents, one designed to analyze malware and suggest how to defend against it and another that actively combs through systems for emerging threats. Similarly, Darktrace has introduced new AI tools designed to automate the detection of suspicious network activity.

But perhaps one of the most promising applications for the technology is using it like a hacker to proactively probe defenses. Aikido Security recently released a new tool that uses agents to simulate cyberattacks on ea
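To make the vulnerability-scanning idea concrete, here is a deliberately toy sketch of the scan-and-report loop such tools automate. Real products like Claude Code Security reason about code far beyond pattern matching; the rules, messages, and sample code below are purely illustrative assumptions:

```python
import re

# Toy rule set: each rule pairs a pattern with a finding message.
# (Both are assumptions for illustration, not any real scanner's rules.)
RULES = [
    (re.compile(r"\beval\("), "use of eval() on untrusted input"),
    (re.compile(r"password\s*=\s*['\"]"), "hardcoded credential"),
]

def scan(source):
    """Return (line_number, message) findings for a blob of source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
print(scan(sample))
# [(1, 'hardcoded credential'), (2, 'use of eval() on untrusted input')]
```

The step AI adds on top of this classic static-analysis shape is judgment: deciding whether a flagged line is actually exploitable in context and proposing a concrete fix, which is precisely the part that used to require a human reviewer.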

This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
