AI What Do: A Framework for Thinking About AI Power and Human Agency

Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

The author presents a framework for analyzing AI's influence and human agency, with the x-axis representing the locus of AI power (centralized vs. distributed) and the y-axis representing AI technical capability. Today we sit at a point (*) where technical capability is still low but power is concentrated in a handful of giant companies; from there, the future could move toward the 'Ag (Agency)' scenario, in which individuals own powerful AI, or the 'Bg (Borg)' scenario, in which we depend on a concentrated power. Since growth in technical capability is inevitable, the author argues, individuals should push toward distributing AI power and securing human agency by using a variety of models and running AI locally.

Full text

I’ve been thinking a great deal about how AI is reshaping the way we craft software, and work, and human activities in general. It’s been on my mind for years. Here’s a picture that shows my thoughts, with dot * in the lower right quadrant being where we are today:

The Axes

The x-axis represents where AI power sits, i.e. who owns the software and hardware assets running AI computation, where those assets are physically located, and how they are governed. The y-axis is about AI capabilities and human choices about how to actually use AI. I originally framed this as “AI % share of labor”. But we humans do so much more than just work. And it turns out AI is capable of non-work things, too. So the y-axis instead represents AI capabilities, a.k.a. “from a purely technical standpoint, what percentage of human activity could AI stand in for, if we wanted to use it as a substitute”. Those activities might be cognitive (happening now), or people-interactive, or physical (robot stuff).

Note: the y-axis is NOT actual substitution, i.e. “to what degree are we substituting AI for human efforts”. It represents what’s possible, the technical capability. And regardless of what’s possible, we have choices about how to apply the tech. (Think about nuclear tech for an analogy.)

You are here: *

Back to *: where we are today. I’ve drawn it far down on the y-axis to indicate today’s relatively low level of AI capability. Yes, there is a huge amount of chatter and hype about what AI can do. And in the software field, particularly, AI substitutability for humans is significant and growing fast. But in the full sphere of human activity there are many, many things AI cannot do. Yet.

* is drawn far to the right on the x-axis to indicate the current high degree of AI power centralization and concentration. The vast bulk of people using AI services today consume from a handful of AI labs: OpenAI, Google, Meta, DeepSeek, Microsoft.
Almost exclusively atop Nvidia’s silicon. Everything else on the graph is scenarios… speculations about the future.

Scenario No: No AI, thank you

Moving clockwise from *, the No scenario represents people who either cannot access AI (e.g. due to barriers like unreliable or costly electricity, limited compute access, etc.) or don’t want to. Some people will undoubtedly remain in this quadrant, but I believe it will be a small minority in the long run. Compare to adoption of electricity. 10%, maybe?

Scenarios Ag (Agency) and Bg (Borg)

Dots Ag and Bg represent extreme future worlds. In Ag, everyone has powerful AI running on a personal laptop or computing appliance. AI is ubiquitous, because it has become cheap to produce (R&D and train) and operate (inference, power consumption), and probably because the model algorithms and weights are open. Important: in this scenario, even though AI has immense capability, we may choose not to use it for certain applications, because those areas are reserved for human craft. We have agency. We can choose.

In Bg, a few or even just one mega-corp-government-borg-thingy wins out, and all the AI power is concentrated in one place. Individuals have effectively no agency over AI. Instead, we all rent AI services from the central power, with whatever assets we already had in the bank before AI became the main means of production. The rich get richer. Hopefully Borg-thingy is a nice landlord. Hope is a strategy, right?

Where to go from here? What to do?

The arrows radiating outwards from * represent possible next steps, and choices we need to make. On current trajectory, AI technical capabilities are growing quickly, so * will shift up the y-axis as time passes. This might be slowed by headwinds such as technical barriers, regulatory changes, or societal backlash.
But right now it looks like continued, fast technical capability growth, and both the major AI labs and smaller R&D entities are pushing hard for that. Tailwinds.

As for the x-axis, our aggregate heading seems far more uncertain to me. There are people trying to head in every direction. Impossible to predict. I’m not saying we are going to end up at Ag, or at Bg. I actually believe both are unlikely, for various reasons, and the more probable outcome is some unpredictable hybrid in-between scenario. (Unless AGI happens, in which case all knobs get turned straight to 11 and we tilt to one extreme or the other. Right, AI-labbers?)

What I am certain of is we are going to keep going “up” a long way in terms of AI capability and the opportunity/risk of substituting AIs for humans. In a very broad range of activities. As an individual I have no control over that. What I can do individually is choose where to head on the x-axis. I’m going to start heading towards Ag. More agency. Less Borg risk.

As a software person, concrete things I can do: use many different AI models, not just what’s convenient; run AI locally on my own hardware, not just in the cloud.
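The "many models, run locally" habit above can be sketched in a few lines. A minimal sketch, assuming OpenAI-compatible chat endpoints (Ollama, for instance, serves one at localhost:11434 by default); the endpoint URLs and model names below are illustrative placeholders, not anything from the original post:

```python
import json
import urllib.request

# Candidate endpoints: a local model first, hosted ones as alternatives.
# URLs and model names are assumptions for illustration.
ENDPOINTS = [
    ("http://localhost:11434/v1/chat/completions", "llama3"),
    # ("https://api.openai.com/v1/chat/completions", "gpt-4o"),  # hosted option
]

def build_request(url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style chat-completion POST request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

def ask_all(prompt: str) -> list[str]:
    """Send the same prompt to every configured endpoint and collect replies."""
    replies = []
    for url, model in ENDPOINTS:
        with urllib.request.urlopen(build_request(url, model, prompt)) as resp:
            data = json.load(resp)
            replies.append(data["choices"][0]["message"]["content"])
    return replies
```

Because local servers and hosted APIs share the same request shape here, switching between them is a one-line change to the endpoint list, which is the point: the model you use becomes a choice, not a default.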

This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
