Weaponized deepfakes
MIT Technology Review AI
Hardware/Semiconductors
#ai
#deepfakes
#misinformation
#security
#technology
#m5 max
#m5 pro
#macbook air
#macbook pro
#apple
Summary
On the 3rd (local time), Apple unveiled new MacBook Pro models powered by the M5 Pro and M5 Max chips, built on a fusion architecture that joins two 3-nanometer dies into a single package. The new chips use super cores that deliver world-leading single-threaded performance, improving multithreaded performance by up to 30% and AI compute performance by up to 4x over the previous generation. The MacBook Air also moves to the M5 chip, with significantly faster AI workloads and double the base storage, and a mini-LED 68.3 cm 5K Studio Display XDR was introduced alongside them. All of the newly announced products open for pre-order on the 4th in select countries, including Korea, and go on sale on the 11th.
Why it matters
Developer perspective
With the 3 nm fusion architecture and a substantially upgraded neural engine (NPU) on board, this is a technical milestone: high-performance on-device AI model training and inference become practical on a MacBook without any additional hardware (see the sketch after this section).
Researcher perspective
An important case study for analyzing how the shift from a 4 nm to a 3 nm process, together with die-to-die integration inside a single chip, relates performance-per-watt gains to portability.
Business perspective
By bringing demanding AI workloads down to entry-level products such as the MacBook Air, Apple aims to widen the technical gap with competitors in the high-performance laptop market and reinforce its lead in the premium device segment.
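A minimal sketch of what on-device inference can look like on Apple silicon, assuming PyTorch with the Metal (MPS) backend is installed; the tiny model and input shapes are placeholders for illustration, not anything tied to Apple's announcement.

```python
import torch
import torch.nn as nn

# Use the Metal (MPS) backend on Apple silicon, falling back to CPU elsewhere.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# Placeholder model: a small MLP standing in for whatever on-device model you run.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.GELU(),
    nn.Linear(1024, 10),
).to(device)

# Dummy batch; in practice this would be your preprocessed input.
x = torch.randn(32, 512, device=device)

with torch.no_grad():
    logits = model(x)

print(logits.shape, logits.device)  # torch.Size([32, 10]) on mps (or cpu)
```

Swapping the device string is the only change from a CPU run; heavier models simply benefit more from the on-device accelerator.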
Related entities
Apple
Fusion architecture
M5 Pro
M5 Max
MacBook Pro
Super core
MacBook Air
M5
Mini-LED
Studio Display XDR
Full text
For years, experts have warned that deepfakes—AI-generated videos, images, or audio recordings of people doing or saying things they haven’t actually done in real life—could be deployed in malicious ways. These dangers are now here. Improvements in deepfake technology, and the widespread availability of easy-to-use and cheap (or free) generative models, have made it easier than ever for anyone to fake reality in a way that’s increasingly difficult to spot.

We’re not just talking about AI slop, the often obviously fake content that has taken over the internet. Rather, weaponized deepfakes—from sexually explicit images to scam posts to political propaganda—may look startlingly real. There are already examples around the world of their inciting violence, trying to change minds (and maybe even votes), and generally sowing mistrust.

That’s why experts worry that weaponized deepfakes will further crater critical thinking skills, as well as our trust in institutions and each other. This has dire effects for society and governance—and, of course, for the people targeted. As with many other examples of technology’s harms, the human impacts will weigh disproportionately on women and marginalized groups; though the technology has evolved in the past few years, a 2023 study found that 98% of deepfakes were pornographic in nature, and 99% depicted women.

Just take Grok. Since Elon Musk launched the “edit image” function of this AI chatbot late last year, users have created millions of sexualized images, including many of children and women; one report estimated that 81% of these Grok-produced images depicted women. Despite widespread criticism, xAI’s initial response was to limit the feature to paying members; it has since blocked the nudity feature in jurisdictions where it is illegal.

There’s also been an explosion of political deepfakes. The Trump administration, for example, has regularly produced and shared AI-generated images and videos. Not all of them are even meant to look real, but others appear to be designed to sway public opinion and even humiliate the person depicted. In January, meanwhile, Texas attorney general Ken Paxton shared a video appearing to show his opponent in the Republican primary for a US Senate seat, Senator John Cornyn, dancing with Representative Jasmine Crockett, a contender for the Democratic nomination. But this never happened—a fact the ad did not disclose clearly.

Suggested solutions include instituting new technical safeguards and detection methods at the big AI firms, encouraging users to take more protective actions, and crafting new legislation or applying existing regulatory frameworks, like copyright law, to the issue. But these all have limits. Technical solutions can be bypassed; for instance, bad actors can simply switch to open-source models built without safeguards. Getting people to change how they behave, such as by watermarking photos or posting less personal information online, is simply unrealistic.

Stronger regulations require enforcement—and while President Trump has signed legislation that criminalizes deepfake porn, his administration continues to post other types of harmful deepfakes. In late January, for instance, the White House shared an altered image of a Minneapolis civil rights lawyer, darkening her skin and changing her facial expression from one of calm to exaggerated crying.

The problem could get much worse—and soon.
There are high-stakes midterm elections in the United States later this year, and the federal agencies that traditionally addressed elections-related information integrity have been weakened. So have many outside research groups dedicated to fact-checking and fighting election-related disinformation.
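On the watermarking point above, here is a toy, illustrative sketch of a least-significant-bit image watermark in Python (NumPy and Pillow assumed); it is not any AI firm's actual safeguard, and its last lines show why such naive marks are fragile: a single lossy re-encode typically wipes them out.

```python
import numpy as np
from PIL import Image

MESSAGE = b"provenance:demo"  # hypothetical provenance tag, not a real standard

def embed_lsb(img: Image.Image, payload: bytes) -> Image.Image:
    """Hide payload bits in the least significant bit of the red channel."""
    arr = np.array(img.convert("RGB"))
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = arr[..., 0].flatten()
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    arr[..., 0] = flat.reshape(arr[..., 0].shape)
    return Image.fromarray(arr)

def extract_lsb(img: Image.Image, n_bytes: int) -> bytes:
    """Read back the first n_bytes hidden by embed_lsb."""
    arr = np.array(img.convert("RGB"))
    bits = arr[..., 0].flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    original = Image.new("RGB", (128, 128), "gray")
    marked = embed_lsb(original, MESSAGE)
    print(extract_lsb(marked, len(MESSAGE)))    # b'provenance:demo'

    # A single lossy JPEG round-trip is usually enough to destroy the mark,
    # which is one reason purely technical safeguards are easy to bypass.
    marked.save("marked.jpg", quality=85)
    reloaded = Image.open("marked.jpg")
    print(extract_lsb(reloaded, len(MESSAGE)))  # garbled bytes
```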