Humanoid data
MIT Technology Review AI
AI Models
#other ai
#ai models
#ai agents
#chatgpt
#gpt-4
#llm
#metagpt
#openai
#unite.ai
#guide
Summary
MetaGPT is a framework that combines a multi-agent system based on Standard Operating Procedures (SOPs) with meta-programming techniques, addressing the limitations of existing LLMs (large language models) in automating complex tasks. Its architecture, split into a foundational-components layer and a collaboration layer, assigns multiple AI agents specialized roles such as Product Manager and Architect, having them cooperate and review each other's code much like human teammates. Notably, by introducing code-review mechanisms such as precompilation execution, it achieved a code generation success rate (Pass@1) of 82.3% on the HumanEval and MBPP benchmarks, while demonstrating strong cost efficiency at an average of $1.09 per project.
Why it matters
Related entities
MetaGPT
LLM
Standard Operating Procedures
SOP
Multi-agent systems
Meta-programming
Product Manager
Architect
AI agents
HumanEval
MBPP
Full text
I was recently invited to join an app that would pay me cryptocurrency to film myself doing tasks like putting food into a bowl, microwaving it, and then taking it out. Another website suggested I try a new game in which I'd remotely control a robotic arm in Shenzhen, China, as it completed puzzles and tasks, to help improve the robot's dexterity. What on earth is happening?

Well, just as our words became training data for large language models, robotics companies are betting that data about the way we move will help them build more capable humanoid robots. They see humanoids—despite being trickier to train than simple robotic arms—as more easily slotting into the places where humans work today (and someday replacing them entirely).

This new notion for how to train humanoids arguably began with the launch of ChatGPT in 2022. Large language models were able to generate text through exposure to massive amounts of training data—every word ever written that AI companies could find (or, some argue, steal). Roboticists wanted to apply these scaling laws to robotics but lacked an internet-size collection of data describing how we move. Put off by how difficult such a collection would be to amass, companies used workarounds, like teaching robots to move in virtual simulations. However, simulations never perfectly model how things like friction or elasticity work in the real world, so the robots trained in them tended to (literally) stumble.

Now companies building humanoid robots have decided that collecting real-world data, as cumbersome as it is, could yield a massive payoff. That's where things got weird. Early efforts were quaint and academic. Labs collected hours and hours of data from people doing household tasks, like flipping waffles or cleaning their desks, while wearing cameras or handheld grippers. The data was shared openly.
But as venture capital money poured into robotics—$6.1 billion in 2025 for humanoids alone—the race to create this training data has gotten more competitive, and more elaborate. There are now training centers in China where people wear exoskeletons and virtual-reality hardware while they do the same repetitive task, like wiping a table, hundreds of times per day. Gig workers in Nigeria, Argentina, and India are filming themselves doing chores at home. Earlier this year, I learned that a delivery company in the US had outfitted its employees with sensors that track their movements as they carry boxes, in part to study injuries but also with the goal of training robots that could replace them.

All this points to a surreal future of work in which physical laborers increasingly become data collectors. But training robots on movement data we collect is still a complicated proposition. It's not clear that it's even possible to do it at the scale potentially needed to yield technical breakthroughs, let alone build a profitable business. What is the value of a clip of me opening my microwave? How many thousands of those moments would it take to teach a robot to cook dinner? Perhaps this'll be the year we find out.