
How do we tell the difference between a genuine technological breakthrough and a well-packaged illusion? In an era where “AI” has become the catch-all solution for every human problem—from hiring the right candidate to predicting criminal recidivism—Arvind Narayanan and Sayash Kapoor provide a necessary reality check. Their book, AI Snake Oil, isn’t just a critique of the tech industry’s latest gold rush; it’s a manual for intellectual self-defense in a world increasingly saturated with hype.
The authors’ central thesis is as simple as it is unsettling: much of what we call artificial intelligence today doesn’t actually work as advertised, and in many cases, it cannot work. They draw a sharp, much-needed distinction between different categories of AI. While they acknowledge that generative AI (like ChatGPT) is a remarkable feat of engineering that provides real value for knowledge workers, they reserve their sharpest criticism for “predictive AI.” This is the “snake oil” of the title—systems that claim to predict complex social outcomes but often perform no better than a coin flip or a simple spreadsheet from the 1990s.
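The “no better than a coin flip” claim has a simple statistical root: when an outcome is mostly irreducible noise, even a perfect predictor cannot beat a trivial baseline by much. The toy simulation below (not from the book; every number in it is invented for illustration) compares the Bayes-optimal rule, which knows the true probabilities exactly, against always guessing the majority class:

```python
import random

random.seed(0)

# Simulate a binary social outcome that is mostly irreducible noise:
# a single weak "risk feature" shifts the probability only slightly.
# All parameters here are illustrative assumptions.
N = 100_000
rows = []
for _ in range(N):
    x = random.random()                 # hypothetical risk feature in [0, 1]
    p = 0.45 + 0.10 * x                 # true probability: weak signal
    y = 1 if random.random() < p else 0
    rows.append((x, y))

# "Complex model" stand-in: the Bayes-optimal rule, which uses the true
# probability -- an upper bound that no real model can exceed.
bayes_acc = sum(
    (1 if (0.45 + 0.10 * x) > 0.5 else 0) == y for x, y in rows
) / N

# Trivial baseline: always predict the majority class.
ones = sum(y for _, y in rows)
base = max(ones, N - ones) / N

print(f"Bayes-optimal accuracy: {bayes_acc:.3f}")
print(f"Majority baseline:      {base:.3f}")
```

Even the theoretical best model lands only a couple of percentage points above the baseline, which is exactly the pattern the authors report for real predictive-AI deployments: whatever sophistication the vendor adds, the ceiling set by the noise in human outcomes stays low.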
One of the most striking examples the authors share involves AI-driven hiring tools that analyze 30-second video clips to “score” a candidate’s personality. Narayanan and Kapoor reveal these tools for what they are: elaborate random-number generators. By showing how minor, irrelevant changes—like adding a bookshelf to a background—can radically alter a candidate’s score, they expose the “pseudoscience” fueling corporate decision-making. For anyone who has watched the tech industry’s evolution, it is a sobering reminder that “faster” does not always mean “better,” especially when we are automating life-altering decisions based on flawed data.
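The bookshelf anecdote illustrates a general failure mode: if a background object coincidentally correlates with the label in the training data, the model weights it heavily, and changing only the background swings the score. A stdlib-only sketch (the “bookshelf” flag, sample sizes, and training details are all hypothetical, not drawn from any real hiring product):

```python
import math
import random

random.seed(1)

# Toy "hiring" dataset: one genuinely irrelevant feature ("bookshelf
# visible") that happens to correlate with the label in this sample,
# plus a few noise features standing in for everything else.
def make_example(hired):
    bookshelf = 1.0 if random.random() < (0.9 if hired else 0.1) else 0.0
    noise = [random.gauss(0, 1) for _ in range(3)]
    return [bookshelf] + noise, 1.0 if hired else 0.0

train = [make_example(h) for h in [1] * 50 + [0] * 50]

# Plain logistic regression trained by full-batch gradient descent.
w, b, lr = [0.0] * 4, 0.0, 0.5
for _ in range(200):
    gw, gb = [0.0] * 4, 0.0
    for x, y in train:
        p = 1 / (1 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
        err = p - y
        for i in range(4):
            gw[i] += err * x[i]
        gb += err
    for i in range(4):
        w[i] -= lr * gw[i] / len(train)
    b -= lr * gb / len(train)

def score(x):
    return 1 / (1 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))

# The same "candidate", with only the background changed:
candidate = [0.0, 0.2, -0.1, 0.3]   # no bookshelf
with_shelf = [1.0] + candidate[1:]  # bookshelf added

print(f"score without bookshelf: {score(candidate):.2f}")
print(f"score with bookshelf:    {score(with_shelf):.2f}")
```

Because the spurious feature separates the two classes well in this small sample, the learner assigns it a large weight, and the identical candidate’s score jumps when a bookshelf is “added.” Nothing about the candidate changed; only the wallpaper did.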
The book also tackles the persistent myth that AI will “solve” the messiness of the internet. Whether it’s content moderation on social media or identifying deepfakes, the authors argue that we are asking machines to make nuanced human judgment calls that they simply aren’t equipped to handle. These aren’t just technical glitches that can be patched with the next update; they are fundamental limitations of the technology. By trying to outsource our moral and social responsibilities to algorithms, we aren’t just failing to solve the problem—we are abdicating those responsibilities altogether.
Despite the provocative title, this isn’t a “doom and gloom” manifesto. Narayanan and Kapoor are computer scientists who clearly appreciate the power of well-built technology. Their goal is to empower us to ask the right questions: Does this tool actually work? What evidence do we have? Is it solving a real problem, or just cutting corners? They advocate for a future defined by transparency and human oversight—a world where technology complements our intelligence rather than attempting to replace it.
Ultimately, AI Snake Oil is required reading for anyone navigating the modern tech landscape. It encourages us to look past the shiny demos and the breathless marketing to see the reality of what these systems can and cannot do. In the race to automate our world, we must ensure we aren’t trading our judgment for magic beans. As Narayanan and Kapoor so eloquently demonstrate, the most important “intelligence” in the room is still our own.