Artificial scientists

MIT Technology Review AI | AI Services

Summary

According to AI news from ITmedia shared on X (formerly Twitter), OpenAI has added a new subscription tier, the "Pro plan," priced at $100 per month. The plan offers an option with substantially expanded usage limits for the high-performance Codex, letting users handle more code-writing and software-development work in an advanced AI environment.

Why it matters

Developer perspective

The expanded Codex quota in the OpenAI Pro plan lets developers treat AI not as a mere coding assistant but as a core development environment, maximizing productivity on longer-context code generation and sophisticated refactoring work.

Researcher perspective

This shows that large language models (LLMs) are being integrated as practical software-engineering tools rather than mere text generators, and it serves as a notable case study of user acceptance of high-priced paid models.

Business perspective

The introduction of a premium $100-per-month tier signals that AI coding tools are increasingly seen as essential, cost-saving investments in professional settings, and it marks a significant strategic shift toward higher platform profitability.

Related entities

X, ITmedia, OpenAI, Pro plan, Codex

Article

AI companies frequently invoke the possibility of AI-enabled scientific discovery as a justification for their existence: if the technology eventually cures cancer and solves climate change, then all the carbon emissions and slop videos will have been well worth it. Already, LLMs can assist scientists in all sorts of ways. They can point people to relevant studies in the literature, draft journal articles, and, of course, write code.

But AI companies and academic researchers alike have a much more ambitious vision for AI co-scientists. They want to develop systems that can act as a full member of a scientific team or, even more ambitiously, initiate and carry out research projects with limited human guidance.

Google DeepMind has invested heavily in scientific AI for years, and it paid off in 2024, when Demis Hassabis and John Jumper, the company's CEO and a director, respectively, won the Nobel Prize in chemistry for AlphaFold, a specialized system that can predict the three-dimensional structure of a protein. Now its competitors are working to catch up. In October 2025, OpenAI launched a team devoted to AI for science, and Anthropic announced several Claude features geared toward the biological sciences around the same time. OpenAI in particular has called building an autonomous researcher its "North Star." It just announced GPT-Rosalind, the first in a planned series of specialized scientific models. Google released its own AI co-scientist tool last February.

Under the hood, many of these AI-for-science systems are in fact multiple specialized AI agents working in concert. Google's co-scientist uses a supervisor agent, a generation agent, and a ranking agent, among several others, to generate potential hypotheses and research plans in response to a goal provided by a human scientist. More recently, researchers at Stanford's AI for Science Lab, led by James Zou, devised a "virtual lab" made up of agents that took on the roles of specialists in different scientific fields.
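The supervisor/generation/ranking division of labor described above can be sketched as a simple orchestration loop. This is a minimal illustration only: every class and function name here is hypothetical, and the random scores stand in for the LLM calls a real system would make.

```python
import random
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    score: float = 0.0

def generation_agent(goal: str, n: int = 5) -> list[Hypothesis]:
    # Stand-in for an LLM agent that drafts candidate hypotheses for a goal.
    return [Hypothesis(text=f"candidate {i} for: {goal}") for i in range(n)]

def ranking_agent(pool: list[Hypothesis]) -> list[Hypothesis]:
    # Stand-in for a tournament-style ranking agent; a real system would
    # compare hypotheses with an LLM judge rather than random scores.
    for h in pool:
        h.score = random.random()
    return sorted(pool, key=lambda h: h.score, reverse=True)

def supervisor_agent(goal: str, rounds: int = 3, keep: int = 2) -> list[Hypothesis]:
    # The supervisor orchestrates repeated generate -> rank cycles,
    # carrying only the best-ranked hypotheses into the next round.
    pool: list[Hypothesis] = []
    for _ in range(rounds):
        pool.extend(generation_agent(goal))
        pool = ranking_agent(pool)[:keep]
    return pool

if __name__ == "__main__":
    for h in supervisor_agent("find antibody fragments that bind SARS-CoV-2"):
        print(f"{h.score:.2f}  {h.text}")
```

The point of the sketch is the control flow: the supervisor never generates or ranks anything itself; it only routes work between narrower agents and prunes the shared pool.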
They found that their system could design new antibody fragments that bind to SARS-CoV-2, the virus that causes covid. Unlike human scientists, however, those teams of agents can't yet go out and test their ideas in the lab.

To overcome that limitation, some researchers are plugging LLMs into experiment-running robots. In February, OpenAI announced that it had connected GPT-5 directly with automated biological laboratories built by the company Ginkgo Bioworks so that the AI system could iteratively propose experiments and interpret the results with limited human involvement. This approach allowed the system to run a gargantuan number of experiments and create a recipe that reduced the cost of synthesizing a particular protein by 40%.

AI-powered science seems like a win for frontier labs and for society at large. But research suggests it could have unintended consequences. A recent Nature study found that while individual scientists see professional advantages from adopting AI, science on the whole may suffer, because AI reduces the scope of what the scientific community investigates. That might be because AI is especially good at analyzing preexisting data sets and literature, so scientists who use it gravitate toward established topic areas where large-scale data is available. That could leave fewer scientists to study problems less amenable to AI.

Integrating AI effectively into science is more than just a technical problem: Maintaining the vibrance and diversity of science in the AI era may require concerted effort from the scientific community.
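The propose/run/interpret cycle described for the GPT-5 and Ginkgo Bioworks setup can be sketched as a simple optimization loop. Everything here is a hypothetical stand-in, not either company's actual interface: `run_experiment` plays the role of the automated lab, and `propose_next` plays the role of the model interpreting results and suggesting the next experiment.

```python
import random

def run_experiment(recipe: dict) -> float:
    # Hypothetical stand-in for an automated lab run: returns the cost of
    # synthesizing the target protein with this recipe (lower is better).
    # A real system would dispatch to robots and wait for assay results.
    return (recipe["temp_c"] - 37.0) ** 2 + (recipe["feed_rate"] - 1.5) ** 2 + 10.0

def propose_next(recipe: dict) -> dict:
    # Stand-in for the model's "interpret results, propose the next
    # experiment" step; here a naive random tweak of one parameter.
    candidate = dict(recipe)
    key = random.choice(list(candidate))
    candidate[key] += random.uniform(-1.0, 1.0)
    return candidate

def optimize(rounds: int = 50) -> tuple[dict, float]:
    # Iteratively propose and run experiments, keeping any recipe that
    # lowers cost, mirroring the closed loop described in the article.
    recipe = {"temp_c": 30.0, "feed_rate": 0.5}
    cost = run_experiment(recipe)
    for _ in range(rounds):
        candidate = propose_next(recipe)
        candidate_cost = run_experiment(candidate)
        if candidate_cost < cost:
            recipe, cost = candidate, candidate_cost
    return recipe, cost

if __name__ == "__main__":
    best_recipe, best_cost = optimize()
    print(f"best cost: {best_cost:.1f}, recipe: {best_recipe}")
```

The design point is that the human drops out of the inner loop entirely: the model (here, `propose_next`) decides what to try next based only on measured outcomes, which is what lets such systems run a gargantuan number of experiments.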
