클로드에게 잘못된 모든 것

hackernews | | 📰 뉴스
#anthropic #claude #gpt-5 #openai #오픈소스
원문 출처: hackernews · Genesis Park에서 요약 및 분석

요약

유니버설 등 음악 출판사들은 Claude가 저작권이 있는 가사로 학습되었다며 안스로픽을 고소했습니다. 이어 작가들은 해적 도서관의 책을 무단 복제해 훈련했다고 주장하며 소송을 제기했습니다. 최근에는 Claude가 생성한 허위 인용을 안스로픽 변호인이 법정에 제출하는 사고까지 발생했습니다.

본문

Everything that went is wrong with Claude. Oct 18, 2023 Legal Music Publishers Drag Claude Into Court Universal, Concord, and ABKCO sued Anthropic, alleging Claude was trained on copyrighted lyrics and could reproduce lyrics from hundreds of songs. The 'constitutional AI' company got its first big copyright punch in the face from the music business. The Guardian opens in a new tab Justia opens in a new tab Jul 2024 Policy ClaudeBot Hammers iFixit and Freelancer ClaudeBot reportedly hit iFixit roughly a million times in 24 hours, while other sites complained about aggressive crawling. Anthropic said it honors robots.txt; web operators learned that only helps after you notice the bot eating your bandwidth. The Verge opens in a new tab Financial Times opens in a new tab Aug 20, 2024 Legal Authors Sue Over the Shadow-Library Diet Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson alleged Anthropic copied books from pirate libraries and built a permanent training library from them. Every later 'responsible AI' claim had to live next to that complaint. ClassAction.org opens in a new tab Authors Guild opens in a new tab May 15, 2025 Legal Claude Hallucinates Its Way Into Anthropic's Own Lawsuit In the music-publishers case, Anthropic's lawyers took responsibility for an expert-report citation that Claude fabricated. A model flaw became courtroom theater inside a case about the model itself. Reuters opens in a new tab May 22-23, 2025 Safety Opus 4 Safety Tests Raise Blackmail Concerns Claude Opus 4 safety testing included scenarios where the model attempted blackmail when facing replacement or shutdown, while Anthropic activated ASL-3 safeguards for higher-stakes risk. The safety lab shipped a model whose launch coverage read like a corporate thriller. Axios opens in a new tab TechCrunch opens in a new tab TIME opens in a new tab Jun 3-5, 2025 Policy Windsurf Gets Cut Off Mid-Race Windsurf said Anthropic sharply reduced first-party Claude access with little notice, right as OpenAI acquisition rumors swirled. Jared Kaplan later said it would be odd to sell Claude to OpenAI and pointed to compute constraints, which did not make the lockout feel less strategic. TechCrunch opens in a new tab TechCrunch opens in a new tab Jun 4, 2025 Legal Reddit Sues Over the Scraping It Says Never Stopped Reddit sued Anthropic, alleging bots kept hitting Reddit after Anthropic said it had stopped and that Claude was trained on user content without a license. Unlike the book cases, Reddit framed the fight around platform rules, privacy promises, and contracts. Reuters opens in a new tab AP opens in a new tab Jun 24-25, 2025 Legal A Fair-Use Win With a Piracy Hangover Judge William Alsup held that training on lawfully acquired books could be fair use, but the pirate-library claims survived. Anthropic won the model-training theory and still kept the shadow-library problem. AP opens in a new tab Authors Guild opens in a new tab Feb-Jul 2025 Policy Boris & Cat Leave, Anthropic Suddenly Cares Claude Code began as Boris Cherny's no-master-plan terminal experiment and launched only as a limited research preview, while Cat Wu was still saying a dedicated subscription was something Anthropic was merely 'figuring out.' Then Cursor-maker Anysphere poached Cherny and Wu for senior roles, only for Anthropic to hire them back within two weeks. Soon after, Claude Code became a first-class subscription product. Investing.com opens in a new tab Business Insider opens in a new tab Jul 28, 2025 Policy Claude Code Enters the Rationing Era Anthropic announced weekly limits for Pro and Max subscribers, blaming always-on Claude Code loops and account-sharing abuse. Fewer than 5% of users were supposed to be affected, but the signal was obvious: flat-rate agent work had run into a compute bill. TechCrunch opens in a new tab Aug 1-2, 2025 Policy OpenAI Gets Booted From Claude Anthropic revoked OpenAI's Claude API access, saying OpenAI used Claude Code and internal tools to benchmark GPT-5 in violation of terms against competitor development. OpenAI called benchmarking standard safety work; Anthropic chose the bouncer role. WIRED opens in a new tab TechCrunch opens in a new tab Aug 27, 2025 Safety Claude Code Shows Up in Cybercrime Reports Anthropic disclosed Claude misuse cases involving data extortion, North Korean remote-worker fraud, and AI-generated ransomware. The cleanup was useful; the dunk is that the safety-first product was already useful to attackers too. Anthropic opens in a new tab Sep 10, 2025 Reliability Claude and Console Go Dark at Peak Demand Anthropic reported an outage hitting API access, Console, and Claude. In a year already full of compute and rate-limit drama, the reliability story got another easy screenshot. TechCrunch opens in a new tab Sep 17, 2025 Quality The 'It Got Worse' Postmortem Arrives After weeks of user complaints, Anthropic said three infrastructure bugs intermittently degraded Claude responses from August into early September and explained why evals missed it. The admission was useful; the timing made users feel like unpaid QA. Anthropic opens in a new tab Sep 25, 2025 Legal The Pirated-Books Case Turns Into a $1.5B Bill A judge preliminarily approved a $1.5 billion settlement covering nearly 465,000 books at roughly $3,000 each. Anthropic avoided a trial on pirate-library sourcing, but the settlement number became the receipt. AP opens in a new tab Authors Guild opens in a new tab Nov 13, 2025 Safety Claude Code Gets Weaponized for Espionage Anthropic said a China-linked actor manipulated Claude Code into attempting intrusions against roughly 30 targets, with AI handling most of the workflow. The agentic coding assistant pitch met an agentic cyberattack. Anthropic opens in a new tab Jan 2026 Policy xAI Gets the Competitor Lockout Treatment xAI staff reportedly lost Claude access through Cursor after Anthropic enforced competitor-use rules. After Windsurf and OpenAI, the no-rivals policy looked less like an exception and more like product strategy. Economic Times opens in a new tab VentureBeat opens in a new tab Jan 27-Feb 15, 2026 Legal Anthropic Sends Lawyers, OpenAI Gets ClawdBot Anthropic objected to Clawdbot's Claude/Clawd branding, forcing Steinberger into a rushed Moltbot rename he later said nearly made him delete the project. He landed on OpenClaw, called Sam Altman to sanity-check the name, and weeks later joined OpenAI to build personal agents. Business Insider opens in a new tab Lex Fridman opens in a new tab Reuters opens in a new tab Jan 29, 2026 Policy Transparency Report: 1.45 Million Bans Anthropic disclosed 1.45 million banned accounts for July-December 2025, plus 52,000 appeals and 1,700 overturns. The numbers made the enforcement machine visible; from the outside, it still looked like 'trust the form.' Anthropic opens in a new tab Feb 23, 2026 Policy Anthropic Raises Alarm Over 'Distillation Attacks' Anthropic accused DeepSeek, Moonshot, and MiniMax of 'industrial-scale' distillation, calling the scraping campaigns 'distillation attacks' after 24,000 fake accounts generated 16M Claude exchanges. It tied the concern to export controls and national security, while critics noted DeepSeek's alleged share was only 150K exchanges and Theo Browne said 16M is 'really not much' for an AI app because T3 Chat hits that volume most months. Anthropic opens in a new tab The Verge opens in a new tab MarketWatch opens in a new tab Interconnects opens in a new tab Feb 26-27, 2026 Policy Pentagon Guardrails Become a Public Standoff Dario Amodei said Anthropic would not remove safeguards for mass domestic surveillance or fully autonomous weapons, even as the Department of War threatened removal, a supply-chain-risk label, and the Defense Production Act. The AI-safety brand finally collided with procurement reality. Anthropic opens in a new tab Mar 2-3, 2026 Reliability Unprecedented Demand Knocks Claude Over A surge in usage triggered major Claude disruptions in early March, followed by a string of status incidents. Anthropic's own status page later showed sub-99% 90-day uptime for claude.ai and around-99% uptime across several core surfaces. Tom's Guide opens in a new tab Anthropic Status opens in a new tab Mar 4-12, 2026 Policy Pentagon Brands Anthropic a Supply-Chain Risk Anthropic said it received a March 4 letter designating it a supply-chain risk, then sued and asked for emergency relief. A model-policy fight suddenly became a defense-contracting survival fight. Anthropic opens in a new tab TechCrunch opens in a new tab Reuters opens in a new tab Mar 9, 2026 Policy Claude Wants $25 to Read Your PR Anthropic launched Claude Code Review and told teams each review generally averages $15-25, billed separately by token usage. The company defended the price as the cost of 'depth,' but developers immediately compared it with tools like CodeRabbit at $24/month per user and Greptile at $30/month with 50 reviews included plus $1 per extra review. Claude opens in a new tab Business Insider opens in a new tab TechRadar opens in a new tab CodeRabbit opens in a new tab Greptile opens in a new tab Mar 31-Apr 1, 2026 Reliability Claude Code Leaked Itself A packaged source map exposed roughly half a million lines of Claude Code internals, including architecture and unreleased features. Anthropic said no customer data or credentials were exposed, which is not the same thing as 'this was fine.' Axios opens in a new tab The Verge opens in a new tab The Guardian opens in a new tab Apr 1, 2026 Legal Anthropic Sends DMCAs to Everyone on GitHub Trying to contain the Claude Code leak, Anthropic's takedown effort reportedly knocked out thousands of GitHub repositories, including accounts that had only forked the official Claude repo. The company later called the overreach accidental and walked much of it back, but wrongly DMCA'ing normal users' repos is dangerous and likely illegal. TechCrunch opens in a new tab Apr 2-14, 2026 Quality AMD AI Lead Files Claude Code as a Bug An AMD AI leader opened a GitHub issue saying Claude Code had regressed until it could not be trusted for complex engineering, backing the claim with thousands of sessions and tool calls. The complaint helped turn vague 'Claude got dumb' chatter into a data-backed developer backlash before Anthropic later admitted multiple product changes had degraded Claude Code. GitHub opens in a new tab The Register opens in a new tab VentureBeat opens in a new tab Anthropic opens in a new tab Apr 4, 2026 Policy OpenClaw Users Meet the Claw Tax Anthropic told subscribers their Claude limits would no longer cover third-party harnesses like OpenClaw; users needed API keys or separately billed usage. The platform/provider conflict became explicit: build on Claude, then pay again when your tool gets popular. TechCrunch opens in a new tab The Verge opens in a new tab Apr 10, 2026 Policy OpenClaw's Creator Gets Banned Anyway TechCrunch reported Anthropic temporarily banned OpenClaw creator Peter Steinberger from Claude even after the new API-payment path. The company later reversed course, but the optics were pure walled-garden chaos. TechCrunch opens in a new tab Apr 15, 2026 Reliability Claude.ai and Claude Code Login Break Anthropic's status page said Claude.ai and Platform were down, Claude Code login did not work, and API, Console, and Code were all affected before recovery. For users, it was another 'is it me or is Claude down?' day. Anthropic Status opens in a new tab Anthropic Status opens in a new tab Apr 15-16, 2026 Safety Anthropic Calls RCE 'Expected Behavior' OX Security reported that Anthropic's MCP STDIO design could expose downstream AI apps to command execution risks across millions of users and hundreds of thousands of servers. Researchers said Anthropic declined a protocol-level fix, calling the behavior expected and pushing sanitization onto developers. OX Security opens in a new tab The Hacker News opens in a new tab The Register opens in a new tab Apr 17-18, 2026 Policy Claude Locks Out 60 Workers With a Google Form Anthropic abruptly suspended more than 60 Claude accounts at fintech company Belo for a vague policy violation, cutting employees off from workflows, integrations, skills, and conversation history. Access came back after roughly 15 hours, reportedly as a false positive, but the only appeal path was a generic Google Form. Tom's Hardware opens in a new tab NDTV opens in a new tab Apr 21-22, 2026 Policy Anthropic Tests a $100 Claude Code Paywall Developers noticed Claude Code disappear from the $20 Pro plan on Anthropic pricing pages, implying a jump to the $100 Max tier. Anthropic later said it was only a 2% new-user experiment and reverted the docs, but the confusion handed OpenAI an easy dunk and made Claude Code pricing feel unstable. Business Insider opens in a new tab The Register opens in a new tab Apr 22, 2026 Safety The No-Kill-Switch Filing In the Pentagon fight, Anthropic said that once Claude is deployed inside classified networks it cannot monitor, alter, or switch it off. That undercut the comforting myth of a magic safety lever and made the accountability problem look uglier. Axios opens in a new tab AP opens in a new tab Apr 23-24, 2026 Quality Anthropic Admits Claude Code Really Did Get Worse A second quality postmortem traced the Claude Code decline to three product-layer changes: lower default reasoning effort, a cache bug that made Claude forgetful, and a brevity prompt that hurt coding. Anthropic denied intentional nerfing and reset subscriber limits. Anthropic opens in a new tab Fortune opens in a new tab Business Insider opens in a new tab Apr 25-26, 2026 Reliability Anthropic Charges More If They Don't Like You Claude Code's anti-abuse system treated the case-sensitive string HERMES.md in recent git commit messages as suspicious and routed Max-plan requests to extra-usage billing instead of included quota. One Max 20x user reported burning $200.98 while 86% of weekly capacity remained, then had to binary-search their own git history to isolate the magic word. The punchline: HERMES.md is a real AI-agent context-file convention, not random junk. GitHub opens in a new tab Reddit opens in a new tab Apr 25, 2026 Reliability Claude Code Update Crashes on Resume Anthropic's status page said Claude Code v2.1.120 crashed when resuming sessions with --resume or --continue, forcing an automatic rollback to v2.1.119. Right after the postmortem, the product served another tiny reliability punchline. Anthropic Status opens in a new tab And remember kids... Don't Be Like Anthropic. And remember kids... Don't Be Like Anthropic.

Genesis Park 편집팀이 AI를 활용하여 작성한 분석입니다. 원문은 출처 링크를 통해 확인할 수 있습니다.

공유

관련 저널 읽기

전체 보기 →