Newsfeed Curation SNS Dashboard Journal

Google unleashes more AI security agents to fight the baddies

hackernews | 🤖 AI Models

Summary

Google used its Cloud Next 2026 conference to announce a new wave of AI security agents and the tooling to govern them. Three agents enter preview – Threat Hunting, Detection Engineering, and Third-Party Context – while the Triage and Investigation agent, which Google says processed over five million alerts and cut a typical 30-minute manual analysis to 60 seconds, becomes generally available. Alongside the agents, Google announced Wiz's AI Bill of Materials (AI-BOM), the Gemini Enterprise Agent Platform for agent identities and governance, and Agent Gateway for policy enforcement over MCP and Agent2Agent connections – an "AI to fight AI" strategy overseen by humans.

Why it matters

Developer perspective

Under review.

Researcher perspective

Under review.

Business perspective

Under review.

Body

Along with a bunch of new services to make sure those same agents don't cause chaos

Google Cloud chief operating officer Francis deSouza has summed up his company's security strategy du jour as follows: "You need to use AI to fight AI." That also sums up all of the security services and products announced Wednesday at Google Cloud Next – and every other tech firm's strategy at this point in 2026. Google's version of this plan essentially boils down to deploying more AI agents to hunt for threats, plus more tools to secure this expanding AI agent fleet.

"It is very clear that we have moved from a human-led defense strategy, to a human-in-the-loop defense strategy, to an AI-led defense strategy that's overseen by humans," deSouza told reporters during a press conference ahead of Google's annual shindig in Las Vegas, happening this week. "Our model for the future is an agentic fleet that does a lot of the routine cyber security work at a machine pace and then is overseen by humans."

Also according to deSouza: Google's "full AI stack" – which sees it develop chips, models, and every layer in between – differentiates it from other security companies making the same promises about agentic AI ushering in a new age of effortless security. "We are able to be at the cutting edge of models because we build our own models," he said. "We work with the model team to understand the capabilities that are coming out to make sure that we can take advantage of them on day one and use those most sophisticated models available to create the agentic fleet."

More security agents…

After using these agents in its internal environment, Google on Wednesday introduced three new agents to customers, all in preview mode.
They follow the security-specific agents announced last year at Cloud Next, and build on the Wiz security agents (red, blue, and green) plus the dark-web-crawling threat-intel agents that debuted at RSA Conference last month.

The first, Google's Threat Hunting agent, helps security teams hunt for novel attack patterns and stealthy behaviors that might otherwise bypass defenses. "As the name implies, it looks for emerging threats in your organization's environment using intelligence from our Google Threat Intelligence and Mandiant best practices," deSouza said. "It does this continuously at infinite scale, much faster than you could do with a human-led defense."

The second agent is aptly named the Detection Engineering agent. Google says this one helps organizations identify security coverage gaps in IT environments, and then continuously creates new detections and detection rules based on those findings.

Finally, a soon-to-be-released Third-Party Context agent scours existing security workflows and enriches them using third-party data.

And Google's Triage and Investigation agent, announced at last year's conference, is now generally available. During the past 12 months it processed over five million alerts and reduced a typical 30-minute manual analysis down to 60 seconds, the company claims.

Plus: Google customers can build their own security agents with remote Google Cloud Model Context Protocol (MCP) server support for Google Security Operations, now generally available. Users can also access the MCP server client directly from the Google Security Operations chat interface – but this feature is only available in preview. "This means customers can create their own custom agents and have access to our Security Operations capabilities for their agents," deSouza said.

What could possibly go wrong?
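In practice, an MCP integration like the one described means a custom agent invokes Security Operations capabilities as tools via JSON-RPC messages. As a minimal sketch – the tool name and arguments below are hypothetical illustrations, not Google's actual API – an MCP `tools/call` request can be built like this:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call message (JSON-RPC 2.0 framing, per the MCP spec)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool and arguments: a custom agent asking a Security
# Operations MCP server to search recent high-severity alerts.
payload = mcp_tool_call(1, "search_alerts", {"severity": "HIGH", "hours": 24})
print(payload)
```

The agent would send this payload to the remote MCP server and receive a structured tool result back, which is what lets any MCP-speaking agent framework consume the same capabilities.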
And security tools to control the security agents

But if (let's say when) something does break, Google and Wiz have a plan for that, too. As Wiz co-founder and VP Yinon Costica said: "We are giving security teams the tools that can help them accelerate with AI and win AI by applying AI."

Google's Wiz acquisition, announced in March 2025 ahead of last year's Cloud Next, finally closed last month. "We are going deeper and deeper into how AI native development is being done, and it always starts with visibility at Wiz," Costica said.

Specifically: Wiz has a new AI Bill of Materials (AI-BOM) to help secure AI-generated code and mitigate the risk of shadow AI. As developers create new AI applications, they use various skills, SDK libraries, models, MCP servers, and other components, and "it becomes a very long list," Costica said. "We want to be able to provide security teams with the full list, the actual bill of material that was used to create these AI applications."

Wiz also now integrates with Lovable, running its security scanning inside that platform to make vulnerabilities, secrets, and misconfigurations visible to developers as they are vibe coding new applications – not after the fact. Additionally, inline AI security hooks integrate directly into IDEs and agent workflows to evaluate prompts and scan AI-generated output before the code is committed.

But wait, there's more!
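An AI-BOM is, at heart, an inventory grouped by the component categories Costica lists. As a toy sketch – the schema and the example entries are illustrative assumptions, not Wiz's actual format – such an inventory might be modeled like this:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIBom:
    """Minimal AI bill of materials: the AI components one application
    depends on. Categories mirror those named in the article; the schema
    itself is hypothetical."""
    application: str
    models: list = field(default_factory=list)
    sdk_libraries: list = field(default_factory=list)
    mcp_servers: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Hypothetical example entries for one application.
bom = AIBom(
    application="support-chatbot",
    models=["gemini-2.5-flash"],
    sdk_libraries=["langchain==0.3.1"],
    mcp_servers=["https://secops.example.com/mcp"],
)
print(bom.to_json())
```

The value for a security team is the aggregate: collecting one such record per application surfaces shadow-AI components that never went through review.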
In case you needed another platform to secure and manage agents, Google also announced the Gemini Enterprise Agent Platform, which it says will "enable access management and AI governance at scale" by assigning AI agents unique identities – which should allow them to operate autonomously with specific authentication flows, as opposed to running amok and causing chaos.

There's also a new service called Agent Gateway, which enables policy enforcement for all agent-to-agent and agent-to-tool connections via protocols like MCP and Agent2Agent (A2A). Google's runtime protection tool for model and agent interactions, Model Armor, is integrated with Agent Gateway.

This brave new world of using AI to fight AI remains untested, and there will be a battle royale between vendors for dominance. In the meantime, attackers are also using AI to boost the speed and sophistication of their attacks. An earlier Google-Mandiant report showed cybercrime "hand-off" times – where one crew gains initial access, then transfers that access to a second threat group such as a ransomware or data-theft gang – have dropped from eight hours to 22 seconds over the last three years. So it's no exaggeration to say that security teams need to move at machine speed, or the attackers will come out ahead. ®
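The combination of unique agent identities plus a gateway amounts to a policy check on every connection. As a deliberately simplified allow-list sketch – the rule shape, agent IDs, and target names are invented for illustration, not Google's implementation – the enforcement step looks like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    agent_id: str   # unique agent identity, as the platform assigns one
    target: str     # tool or peer agent the connection is aimed at
    protocol: str   # "mcp" (agent-to-tool) or "a2a" (agent-to-agent)

class GatewayPolicy:
    """Toy allow-list gateway: a connection is permitted only if an
    explicit rule matches the calling agent, target, and protocol."""
    def __init__(self, rules):
        self._rules = set(rules)

    def allows(self, agent_id: str, target: str, protocol: str) -> bool:
        return Rule(agent_id, target, protocol) in self._rules

policy = GatewayPolicy([
    Rule("triage-agent-01", "secops-alerts", "mcp"),
    Rule("triage-agent-01", "detection-agent-02", "a2a"),
])

print(policy.allows("triage-agent-01", "secops-alerts", "mcp"))  # True
print(policy.allows("triage-agent-01", "billing-api", "mcp"))    # False
```

A real gateway would evaluate richer attributes (credentials, payload inspection via something like Model Armor), but the core idea is the same: every agent-to-agent and agent-to-tool hop passes through an identity-aware policy decision point.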
