How AI safety took a backseat to military money

Hey there, and welcome to Decoder! I’m Hayden Field, senior AI reporter at The Verge — and your Thursday episode guest host. I have another couple of shows for you while Nilay is out on parental leave, and we’re going to be spending more time diving into some of the unforeseen consequences of the generative AI boom.

Today, I’m talking with Heidy Khlaaf, who is chief AI scientist at the AI Now Institute and one of the industry’s leading experts on AI safety in autonomous weapons systems. Heidy has actually worked with OpenAI in the past; from late 2020 to mid-2021, she was a senior systems safety engineer there during a critical period, when the company was developing safety and risk assessment frameworks for its Codex coding tool.

Now, the same companies that once seemed to champion safety and ethics in their mission statements are actively selling and developing new technology for military applications.

In 2024, OpenAI removed a ban on “military and warfare” use cases from its terms of service. Since then, the company has signed a deal with autonomous weapons maker Anduril and, this past June, signed a $200 million Department of Defense contract. 

OpenAI is not alone. Anthropic, which has a reputation as one of the most safety-oriented AI labs, has partnered with Palantir to allow its models to be used for US defense and intelligence purposes, and it also landed its own $200 million DoD contract. And Big Tech players like Amazon, Google, and Microsoft, which have long worked with the government, are now also pushing AI products for defense and intelligence, despite growing outcry from critics and employee activist groups.

So I wanted to have Heidy on the show to walk me through this major shift in the AI industry, what’s motivating it, and why she thinks some of the leading AI companies are being far too cavalier about deploying generative AI in high-risk scenarios. I also wanted to know what this push to deploy military-grade AI means for bad actors who might want to use AI systems to develop chemical, biological, radiological, and nuclear weapons — a risk the AI companies themselves say they’re increasingly worried about. 

Okay, here’s Heidy Khlaaf on AI in the military. Here we go.

If you’d like to read more on what we talked about in this episode, check out the links below:

  • OpenAI is softening its stance on military use | The Verge
  • OpenAI awarded $200 million US defense contract | The Verge
  • OpenAI is partnering with defense tech company Anduril | The Verge
  • Anthropic launches new Claude service for military and intelligence use | The Verge
  • Anthropic, Palantir, Amazon team up on defense AI | Axios
  • Google scraps promise not to develop AI weapons | The Verge
  • Microsoft employees occupy headquarters in protest of Israel contracts | The Verge
  • Microsoft’s employee protests have reached a boiling point | The Verge

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
