AI apathy is a tragedy, falling behind is a crime
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
Recent conversations in the security industry have centered on issues surrounding AI, such as hacking AI and orchestrating AI systems. Early skepticism about using AI, driven by problems like hallucinations, has given way to a growing anxiety about falling behind industry trends as the technology advances. The author stresses that taking even a day off from learning AI risks being left behind by a rapidly changing industry.
Body
Radar #18: Week of 04/27/2026

The Low Orbit Security Radar is a weekly security newsletter from an offensive practitioner's perspective, with curated news stories and links worth your time.

AI apathy is a tragedy, falling behind is a crime

The majority of conversations I've had recently have been centered around AI. Many are about how to hack AI or how to better orchestrate AI systems. It's all very exciting, but take a day off and you risk the industry leaving you behind.

For every conversation I have about unique projects people are creating, there are a dozen others with people who feel extremely uneasy about all of this. In the beginning, the conversations were about AI being no more than fancy autocomplete (a phrase I've long disliked) and LLMs being useless for any serious security work because of the risk of hallucinations. I understand why someone who isn't spending significant time with these tools could come to that conclusion, and early models were plagued with problems. Today, almost everyone (in this wonderful tech bubble in which you, dear reader, exist) has moved on to a different level of anxiety about AI. There are two reasons for this I want to touch on.

Too fast

The anxiety now is that it's too much. Too much to keep up with, too much to compete against. Too much to learn. Too much to bid against. Too much to ask of one person to drop everything they're doing to "learn AI".

To understand what is different, we can look at the rise of cloud computing. It was a colossal industry shift that changed how businesses do business. It dramatically shifted which skills are relevant and added a new domain that security professionals need to spend their already limited bandwidth on. Yet the ramp-up time companies and individuals had to acquire these new skills was measured in decades. Amazon S3 was launched 20 years ago last month.
There was a decade of gradual build-up that allowed people to learn about this new technology at their own pace. If you didn't pay attention for a day, nothing happened. You didn't miss anything. You weren't afraid you'd miss 22 releases from AWS tweaking how S3 worked. Today we are not so fortunate. The speed at which things are changing has gone from months, to weeks, to days.

Too much

Cloud computing did not affect every industry overnight. In 2006 a lawyer did not know what S3 was. An accountant was not worried about S3 having serious consequences for their job. A security engineer was probably hired because of it. AI is different. Every day the big three AI firms are pushing out tools that directly compete with law firms, cybersecurity companies, and web designers. All at once.

What to do

No one has a crystal ball, but cybersecurity has always been a winner-take-most field. The best in the field have far more job opportunities than those who are just working their 9-to-5, and AI seems to be supercharging that dynamic. Those who can spend the time, tokens, and mental effort to invest in these new tools seem to be pulling away from their peers. No one knows the implications of this, but there is no world in which knowing less is better than knowing more. The hard part is triaging the information and acting on it without becoming overwhelmed. If you can do that, you'll come out OK.

Caught My Eye

- OpenAnt: An open-source, LLM-based vulnerability discovery product. I tried it on a few projects and it produced some surprisingly good findings. It also cost $15 to run on a very small Go application...
- Using Claude Code: Session Management & 1M Context: How to manage your AI agent's context window.
- Ephemeral Leaks and Automated BGP Route Leak Detection: A great write-up from an expert digging deeper into the information I discussed in the 16th Radar issue about Venezuela.
- Mythos and its impact on security: An even-keeled take on the Mythos news. I agree that good AI usage requires an "oracle" to weed out false positives. What oracles are there for the kind of work you're doing with AI?
- Benchmarking Self-Hosted LLMs for Offensive Security: Another LLM benchmark for offensive security, where models tend to fail on multi-step exploitation without more advanced scaffolding or memory management.
- Chasing an Angry Spark: This story details a wild bit of tradecraft that someone clearly spent a lot of time building to avoid getting caught.
- How We Cut LLM Costs by 59% With Prompt Caching: Project Discovery on reducing the cost of using LLMs. Particularly notable given their Neo product, which uses AI to find vulnerabilities.
- Offensive Security Professional Titles: A site from Rob Fuller that aims to create transparency around the roles, responsibilities, and expectations of different cybersecurity jobs. It seems about as accurate as a general template can be.
- ludus_k8s: A collection of Ansible scripts to automate the creation of Kubernetes environments for security testing. It generously includes configurations for testing my Nodes/Proxy RCE disclosure.
- The zero-days are numbered: An account of how Firefox is using its early access to Mythos to enter "a world where we can finally find them [vulnerabilities] all". The post raises more questions for me than it answers; I'm not sure if it's supposed to be hopeful or scary.
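The prompt-caching item in the list above rests on simple arithmetic: when many requests share a long static prefix (system prompt, tool schemas), a provider that bills cached input tokens at a discount cuts the total bill roughly in proportion to the prefix's share of each request. A minimal sketch of that cost model follows; the per-token rate and the 90% cache discount are hypothetical illustration values, not Project Discovery's figures or any provider's actual pricing.

```python
# Toy cost model for prompt caching: a long shared prefix is billed at a
# discounted rate on cache hits. All prices here are assumed, not real.

RATE_PER_MTOK = 3.00    # hypothetical: $3 per 1M input tokens
CACHE_DISCOUNT = 0.10   # hypothetical: cached tokens cost 10% of the full rate

def request_cost(prefix_tokens: int, suffix_tokens: int, cache_hit: bool) -> float:
    """Dollar cost of one request; only the prefix can be served from cache."""
    prefix_rate = RATE_PER_MTOK * (CACHE_DISCOUNT if cache_hit else 1.0)
    return (prefix_tokens * prefix_rate + suffix_tokens * RATE_PER_MTOK) / 1e6

def batch_cost(n_requests: int, prefix_tokens: int, suffix_tokens: int,
               caching: bool) -> float:
    """Total cost of n requests that all share one static prefix."""
    total = 0.0
    for i in range(n_requests):
        # With caching on, only the first request pays full price for the prefix.
        total += request_cost(prefix_tokens, suffix_tokens,
                              cache_hit=caching and i > 0)
    return total

if __name__ == "__main__":
    # 100 requests, 8k-token shared prefix, 500-token unique suffix each.
    without = batch_cost(100, 8_000, 500, caching=False)
    cached = batch_cost(100, 8_000, 500, caching=True)
    print(f"without caching: ${without:.2f}")   # $2.55
    print(f"with caching:    ${cached:.2f}")    # $0.41
    print(f"savings: {1 - cached / without:.0%}")  # ~84% under these assumed prices
```

How close real savings land to the article's 59% depends on the prefix length, the cache hit rate, and the provider's actual discount; the point of the sketch is only that savings scale with the static prefix's share of the prompt.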
This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.