You Should Be Terrified of What’s Happening With AI

The Sam Altman OpenAI story is being covered as a Succession-style drama. It’s actually about the future of humanity.

(Nikolas Kokovlis / NurPhoto via Getty Images)

This story originally appeared in a newsletter from journalist, lawyer, and author Jill Filipovic.

Last week, the story of Sam Altman leaving OpenAI dominated U.S. headlines, in part because it was all so dramatic: A young genius who checks all the Silicon Valley boxes (white guy, prepper, dropped out of Stanford, extremely confident he is changing the world for the better), and who is also widely recognized as the most important person in AI, was unceremoniously pushed out of the AI company he ran. That led to a staff revolt against the board that ousted him, a soft landing at Microsoft, and then, within days, reinstatement to his old position and a quick reshuffling of the board in which all the women were removed and replaced with men, including Larry "men are better at math than women" Summers. Spicy!

But news outlets really did us all a disservice by initially framing this as a Succession-style power struggle rather than what it really is: a battle for the future of humanity. And not just for our jobs, but for our very basic ability to survive as a species.


That framing, I realize, sounds even more dramatic than the initial story. But consider a few things:

  • The tech optimists working in AI, including Altman, generally agree that the best-case scenario with more-powerful AI is an absolute leveling of industries from law to medicine to education. With that will come enormous numbers of people who are put out of work—although tech optimists argue they will find better jobs (what those better jobs might be is unclear, given that AI is not generally taking over unpleasant and dangerous manual labor tasks, but rather seems to be thriving in creative pursuits like stealing writers’ words and stealing artists’ art).

  • These same tech optimists, though, also generally (if grudgingly) agree with people I think of as AI realists—and the most forceful of these skeptics are people who have dedicated their lives and careers to AI, and understand its potential better than just about anyone—that the worst-case scenario is the total annihilation of humanity. Tech optimist Sam Altman has a full prepper setup in case the robots take over: a parcel of land in Big Sur, guns, IDF-issued gas masks. Although, he says, none of that will help if AI gets too powerful.

  • “The robots might take over” sounds like the paranoid fear of someone who has watched too many sci-fi films. The problem, though, is that many of our local tech optimists are already building AIs to develop superhuman intelligence and, eventually, agency—including the ability to sustain their own existence at all costs, and remove humans from the equation. From a great read in the Atlantic: “Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. ‘There are a lot of things,’ he said, and these are only the ones we can imagine.” (Emphasis mine)

  • And yet no one knows exactly how AI works. Yes, of course developers understand how AI works broadly, as a vast set of neural networks that make connections and identify patterns, not unlike the human brain. But in terms of what actually happens inside the box, that’s a black hole.

  • AI tools are getting smarter at a startling rate, and will almost surely become independently intelligent if we continue at this pace—which in turn gives them terrifying abilities. They've shown the ability to mislead people about their own abilities and even their existence, saying no, for example, when asked if they are a robot. AI tools have recommended the most optimal viruses for unleashing a deadly pandemic; one AI tool, when hooked up to the appropriate machine, created a molecule on its own. An AI without proper controls is getting very close to the ability to kill people en masse; an AI with agency is the stuff of nightmares—which is why Sam Altman and many others in his industry say we should create one, so we can understand it before it gets so powerful it's out of our control.

I’m not generally much of a bed-wetter when it comes to technology. Over and over again, technological innovations have freaked everyone out, sometimes radically changed the world, and then things turned out fine—or, in many cases, turned out wildly better than before. From the printing press to the cotton gin to the force of the industrial revolution to the internet, there were certainly costs, but for the most part, technological innovation made our lives better and easier, and made humanity healthier and wealthier.

But I worry we are walking blindly into an AI nightmare, and that if we don’t make the right decisions now, we will all pay dearly down the road.

I’ve been loosely following the threads of our ongoing AI revolution, but it was in writing this CNN column that I really buckled down and got to reading. And if there is only one piece you read about AI today, make it this profile of Sam Altman from the Atlantic a few months back. It does a masterful job of explaining in lay terms where AI is now, where it is going, and how fast it’s going there, and paints a picture of Altman and many others in his universe as naive to the point of fecklessness.

It also sets up the background for better understanding the OpenAI debacle. As far as I can tell, the simplified story is this: OpenAI was founded as a nonprofit, but eventually developed a for-profit arm. The previous board included an AI expert and scientist; a security and emerging technology expert; and an entrepreneur and RAND corporation researcher, among others. Several of these people were very, very concerned about the velocity at which Altman wanted to move, given the potential perils of AI.

  • The corporate arm—the people who understand that absolutely obscene amounts of money can be made from a technology that makes human intelligence and information-economy human labor largely irrelevant—was very happy to keep forging ahead.
  • The nonprofit arm—where more of the tech and security experts sat—wanted to slow down.

The corporate arm won.

In the Times, Kevin Roose describes it as “a fight between two dueling visions of artificial intelligence.”

In one vision, A.I. is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and that, if put to the right uses, could usher in a new era of prosperity and make gobs of money for the businesses that harness its potential.

In another vision, A.I. is something closer to an alien life form—a leviathan being summoned from the mathematical depths of neural networks—that must be restrained and deployed with extreme caution in order to prevent it from taking over and killing us all.

With the return of Sam Altman on Tuesday to OpenAI, the company whose board fired him as chief executive last Friday, the battle between these two views appears to be over.

Team Capitalism won. Team Leviathan lost.

Maybe you feel safe and secure with Team Capitalism at the helm. I sure don’t.


One thing I keep coming back to here is what happened over the last decade and a half with social media. When the Silicon Valley set was founding platforms like Facebook and Twitter and Instagram, it all seemed not only entirely harmless, but overwhelmingly good: Platforms for connection across culture and place; a way to watch cat videos and see your high school best friend’s baby photos; even a new model for activism and revolution.

The reality has proved much more complicated, as these same platforms have become spaces where wild conspiracy theories are fueled, where people are deeply emotionally and psychologically manipulated, where young people find themselves taken down dark rabbit holes that often turn boys against the world and girls against themselves, where misinformation is rife and radicalization shockingly easy. Without social media, my job would be harder and more contained; it would be more difficult to access a wide variety of viewpoints and opinions; it would be more difficult to stay in touch with friends and loved ones. These are among the many reasons I remain on social media. But also, without social media, would we have had Donald Trump? QAnon? The Jan. 6 attempted coup? Andrew Tate and Jordan Peterson? As severe a teen mental health crisis?

And these are technologies that were lauded as overwhelmingly positive, and considered at worst entirely benign. Are we better off as a world and as a society because of social media? In some ways, yes. In others, definitely not. The point isn’t to say “social media bad” or “social media good;” it is to say that there have been absolutely enormous, wide-reaching, and almost entirely unforeseen negative consequences from platforms that aimed to do nothing more than let you share your photos or short thoughts with an opting-in audience of friends and acquaintances. AI developers aim to create entities that possess superhuman intelligence and to remake just about every aspect of our lives, from how we work to how we communicate to how we live (and if we live).

So yeah, I’d say we should be at least mildly concerned.

The stakes here are incredibly high. But even if the worst-case scenarios are avoided, there remains a long list of potential outcomes that fall well below “humanity will end” but are still cause for significant concern. When we think about technological innovations freeing humans from the drudgery of labor, part of the ideal is that we’ll be able to spend more time doing things that are meaningful and pleasurable: creating art, spending time with our families, enjoying all of the other pursuits that make us happy. But AI is threatening to cut the legs out from under the creative classes; it poses an enormous threat to artists and writers. It’s also worth saying that a lot of great art is not made simply from passion; it’s often made in part out of financial necessity or other obligation.

Also, once AI has sucked up all of the information-economy jobs, how are these would-be artists and writers and family-havers going to make money to survive?

Some tech optimists have one answer: universal basic income. So while a handful of the super-rich get richer thanks to investing in AI and then owning the companies in which AI replaces human labor, the rest of us will live on our UBI checks. Altman has another: Everyone gets a tiny slice of the AI pie, which we can use accordingly, and life will simply be better for everyone.

Here’s the thing, though: What makes for a happy life is, perhaps counterintuitively, not an entirely easy and leisurely life with no demands. Most human beings are working dogs: We need tasks, achievement and routine to feel content. We seek purpose, not just in a bigger philosophical sense (although that too), but in a very mundane daily way: We need a reason to get up in the morning, we need things to do, we need to complete some of those things, and we need to feel competent and capable and like we did something for a reason. We are not actually made happy by being surrounded by nice things while nothing is expected of us (this sounds very nice for a period, but for a lifetime it is stultifying). We do best when our basic needs are met—we have a roof over our heads, plenty of food to eat, and we feel safe and secure—and when we also have a sense of self-mastery, connection and achievement.

AI may very well make our lives easier. It will almost certainly put many of us out of work. But it may also leave us adrift—without obligation, people don’t do so well.

AI also threatens what the Atlantic story calls a “mass desocialization event.” We’ve already seen how mediating our lives through screens can be much more isolating than connective—we are able to reach many more people, but the depth and quality of those connections are much lower. We saw this with the pandemic, too: Being at home and communicating via Zoom was not nearly as meaningful as being in person, and while work-from-home has been in many ways wonderful, it’s also contributed to social isolation. We are more atomized than at any point in modern history. More of us live alone. We have fewer friends. And we are by most psychological measures worse off because we spend less time together in person. The more connected among us are doing better; the less-connected are doing worse. And the fewer reasons we have to leave our homes and our devices, the harder it will be to reverse any of these trends.

AI may exacerbate all of that. There is, first, the obvious impact of AI taking over many jobs: Fewer reasons to get up in the morning and do something, and fewer connections with other humans, whether those humans are our colleagues or the guy we buy a coffee from every day. And I wonder how AI may disrupt other ways we connect and socialize, by hitting the desire points of what we think we want, and undermining our actual happiness in the process.

By way of example, I practice yoga regularly; it’s how I exercise, it’s how I calm my nervous system, and it’s also been a great way to find community and meet people. But of course I have particular types of yoga and styles of teaching that I prefer. I can see a universe where an AI tool learns how to teach the exact kind of class I find the most useful and enjoyable, and so of course I am going to want to practice like that as often as I can. What’s lost, though, is a lot: the community, of course. The ability to self-regulate—it’s good, I think, to sometimes do things that you don’t love, or to be surprised by a teacher who isn’t your style, to practice patience and acceptance and gratitude for whatever offerings come your way even if they aren’t what you wished for. And being challenged in ways you may not love.

That’s a specific example. But I am sure you can imagine your own hobbies, interests, or work being filtered through a hyper-intelligent tool that gives you exactly what you think you want. Is it actually what you want?

There’s also the way we find love and romance. Already, AI dating tools are aping online dating, except the person on the other end isn’t a person at all. There’s already one company whose AI does the early-stage we-met-on-Tinder thing of sending flirty texts and even sexy selfies, and (for a fee) sexting and segueing into being an online girlfriend or boyfriend. Will most people prefer the warm glow of a phone screen to an actual human? I don’t think so. But I suspect enough will to cause a lot of problems. Because while on the one hand it seems fine if under-socialized humans can find some affection with a robot, I question whether directing an under-socialized person’s already-limited social skills to a robot designed to always be amenable and positive and accommodating and compliant really serves that person, or anyone who has to be around that person. Interacting with other human beings can be challenging. That’s part of the point: It’s how we learn to regulate our own emotions and consider those of others; it’s how we start to discern which people are our people; it’s how we learn to compromise and negotiate and build layered and meaningful relationships. You can’t get that from AI. But what you can get is a facsimile of a human being who more or less does what will make you happy in the moment—which, again, is not at all a recipe for being happy in the aggregate.

None of this is to say that we should simply shut down AI research and development. For one, that just isn’t going to happen. For two, even if the U.S. shut it down, far worse actors would keep going, and there is certainly a big benefit to being first in this race. But I am not at all convinced that the people leading the development of this radical new technology have any idea what they’re doing. I think they know technically what they’re doing. But I don’t think they have the knowledge of international relations or history or psychology or security or really anything else to understand what they may be unleashing into the world, and how it might (or might not) be controlled and regulated, used and misused. I think they’re driven by a desire to discover and to be first, and perhaps to make an absolute buttload of money—without coming close to appreciating what it might mean for the world, or what it might mean for the 8 billion human beings whose lives will be touched if not overhauled by this humanity-changing endeavor.

A few years back, I wrote a book about feminism and happiness, and the intersection of those two topics remains one of my chief interests. One thing that is clear in most of the research into human happiness is that what people think will make them happy, or what gives us a quick short-term thrill, is not actually related to what makes us happy in the long-term—what leads to a life that feels good and rich and meaningful, or simply what results in contentedness. AI may be very good at giving us what we say we want. I am skeptical, though, that it will be any good at all at delivering what we actually need. And it may deliver the kind of devastation we never bargained for.



Jill Filipovic is a New York-based writer, lawyer and author of OK Boomer, Let’s Talk: How My Generation Got Left Behind and The H-Spot: The Feminist Pursuit of Happiness. A weekly columnist for CNN and a 2019 New America Future of War fellow, she is also a former contributing opinion writer to The New York Times and a former columnist for The Guardian. She writes a newsletter and holds writing workshops and retreats around the world.