Hackers are automating cyberattacks with AI. Defenders are using it to fight back.

Singularity Hub | 💼 Business
#ai #anthropic #claude #tip #securitytip #cybersecurity #automation #hacking
Original source: Singularity Hub · Summarized and analyzed by Genesis Park

Summary

Hackers are using AI to automate vulnerability discovery, exploit development, and large-scale phishing campaigns, and defenders are responding by building security features directly into foundation models. Recently, Russian-speaking hackers used commercial generative AI services to attack systems with misconfigured firewalls in more than 55 countries.

Article

Which side has the advantage will depend less on raw model capabilities and more on who adapts fastest.

Image credit: Joshua Woroniecki on Unsplash

Cybersecurity is an endless game of cat and mouse as attackers and defenders refine their tools. Generative AI systems are now joining the fray on both sides of the battlefield.

Though cybersecurity experts and model developers have been warning about potential AI-powered cyberattacks for years, there has been limited evidence hackers were widely exploiting the technology. But that is starting to change. Growing evidence shows hackers now routinely use the technology to turbocharge their search for vulnerabilities, develop new code exploits, and scale phishing campaigns. At the same time, AI firms are building defensive security measures directly into foundation models to keep pace with attackers. As cybersecurity becomes more automated, corporations will be forced to adapt rapidly as they grapple with the security of their products and systems in the age of AI.

A recent report by Amazon security researchers highlighted the growing sophistication of hackers' AI use. The researchers wrote that Russian-speaking attackers used multiple commercially available generative AI services to plan, manage, and conduct cyberattacks on organizations with misconfigured firewalls in over 55 countries this January and February.

The attack targeted more than 600 systems protected by FortiGate firewalls and worked by scanning for internet-exposed login pages—these are essentially front doors leading into private company networks—and attempting to access them with commonly reused security credentials. Once inside, the attackers extracted credential databases and targeted backup infrastructure. This activity suggests they may have been planning a ransomware attack.
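The flip side of that scanning technique is that defenders can run the same check against their own perimeter. Below is a minimal, hedged sketch of such an audit: it fetches each host you own and flags any that serve a credential form to the open internet, which is the misconfiguration the attackers were hunting for. The host list and page markers are illustrative assumptions, not part of the reported attack.

```python
# Defensive sketch: flag hosts on your OWN network whose admin login pages
# answer from the public internet. Hostnames and markers are illustrative.
import urllib.request

# Heuristic markers that a page is presenting a credential form.
LOGIN_MARKERS = ('name="username"', 'name="password"', 'type="password"')

def looks_like_login_page(html: str) -> bool:
    """Return True if the HTML appears to contain a login form."""
    html = html.lower()
    return any(marker in html for marker in LOGIN_MARKERS)

def audit_exposed_logins(hosts, timeout=5):
    """Return the subset of hosts that serve a login form publicly."""
    exposed = []
    for host in hosts:
        try:
            with urllib.request.urlopen(f"https://{host}/", timeout=timeout) as resp:
                if looks_like_login_page(resp.read().decode(errors="ignore")):
                    exposed.append(host)
        except OSError:
            continue  # unreachable from outside, so not publicly exposed
    return exposed
```

Run only against infrastructure you are authorized to test; a real audit would also check non-standard ports and VPN portals rather than just the HTTPS root page.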
The researchers report the attack was largely unsuccessful but nonetheless highlighted how much AI can lower the barrier to large-scale attacks. Despite being relative amateurs, the group "achieved an operational scale that would have previously required a significantly larger and more skilled team," they wrote.

In the most vivid demonstration of AI's hacking potential, a research prototype known as PromptLock, created by a New York University researcher, used large language models to mount an entirely autonomous ransomware attack. The malware used AI to generate custom code in real time, scour the target system for sensitive data, and write personalized ransom notes based on what it found. While the tool was only a proof of concept, it highlighted the mounting threat of fully automated malware attacks.

A recent report from security firm CrowdStrike found that AI is also making attackers significantly more nimble. They discovered that average breakout times—the window between when an attacker first breaches a network and when they move into other systems—fell to just 29 minutes in 2025, 65 percent faster than in 2024.

In November, Anthropic also claimed it had detected a Chinese state-linked group using the company's Claude Code assistant to conduct a large-scale espionage campaign. The group used jailbreaks—prompts designed to bypass a model's safety settings—to trick Claude into carrying out the attacks. They also broke the campaign into smaller sub-tasks that looked more innocent. The company claimed the hackers used the tool to automate between 80 and 90 percent of the attack. "The sheer amount of work performed by the AI would have taken vast amounts of time for a human team," the company's researchers wrote in a blog post.
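As a quick sanity check on the CrowdStrike numbers: under one reading of "65 percent faster," where the 2025 breakout time is 35 percent of the 2024 figure, a 29-minute average implies a 2024 average of roughly 83 minutes. The interpretation of the percentage is an assumption here, since the article does not define the ratio.

```python
# Back-of-envelope check of the reported breakout-time improvement.
# Assumption: "65 percent faster" means the 2025 time is (1 - 0.65) of
# the 2024 time; the report itself may define the ratio differently.
breakout_2025_min = 29
speedup = 0.65
implied_2024_min = breakout_2025_min / (1 - speedup)
print(round(implied_2024_min, 1))  # prints 82.9 under this reading
```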
"At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match."

But while AI is reshaping the offensive cybersecurity landscape, defenders are deploying the tools too. In February, Anthropic released Claude Code Security, which can scan systems for vulnerabilities and propose fixes automatically. The tool can't carry out real-time security tasks like detecting and stopping live intrusions, but the news nonetheless sent stocks in traditional cybersecurity firms plummeting, according to Reuters.

Cybersecurity vendors are also embedding AI into their defensive platforms. CrowdStrike recently launched two new AI agents, one designed to analyze malware and suggest how to defend against it and another that actively combs through systems for emerging threats. Similarly, Darktrace has introduced new AI tools designed to automate the detection of suspicious network activity.

But perhaps one of the most promising applications for the technology is using it like a hacker to proactively probe defenses. Aikido Security recently released a new tool that uses agents to simulate cyberattacks on ea
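The "scan systems for vulnerabilities and propose fixes" workflow described above can be sketched as a simple loop: read each source file, wrap it in a review prompt, and collect the model's findings. This is a minimal illustration of the pattern, not Anthropic's implementation; the model call is left as a pluggable callable because vendor SDKs differ, and the prompt text and context budget are assumptions.

```python
# Sketch of an automated scan-and-propose-fixes loop in the spirit of
# tools like Claude Code Security. ask_model is any callable that takes
# a prompt string and returns the model's text reply.
from pathlib import Path
from typing import Callable, Iterable

PROMPT_TEMPLATE = (
    "You are a security reviewer. List likely vulnerabilities in the "
    "following file and propose a fix for each.\n\nFile: {name}\n\n{code}"
)

def build_review_prompt(name: str, code: str, max_chars: int = 8000) -> str:
    """Truncate large files so each request stays within a context budget."""
    return PROMPT_TEMPLATE.format(name=name, code=code[:max_chars])

def scan_repository(paths: Iterable[Path],
                    ask_model: Callable[[str], str]) -> dict:
    """Send each source file to the model and collect its findings."""
    findings = {}
    for path in paths:
        prompt = build_review_prompt(path.name,
                                     path.read_text(errors="ignore"))
        findings[path.name] = ask_model(prompt)
    return findings
```

A production tool would add what this sketch omits: deduplicating findings across files, validating proposed fixes against tests, and rate-limiting requests.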

This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
