The AI We Deserve? Reflections on a Boston Review Discussion at Stanford
Yesterday, with great enthusiasm, the Stanford University community packed into Gates Hall for “AI We Deserve,” an event sponsored by the McCoy Family Center for Ethics in Society. Celebrating Boston Review’s AI Futures publication, the influential tech writer and theorist Evgeny Morozov appeared alongside Audrey Tang, free software activist and former digital minister of Taiwan, and Terry Winograd, the legendary Stanford computer scientist and AI pioneer, for a discussion of how to build political power and a technological future that serves us all. Tech journalist Brian Merchant moderated in an atmosphere of optimism and engagement around the question: What kind of AI do we deserve, and how do we get there?
Terry Winograd, who also launched Stanford Computer Science’s ethics courses in 1985, which I attended and later TA’d, embodies the best elements of American AI thinking and development. I was also greatly excited to hear Audrey Tang, who challenged the audience to think more expansively about the role of AI in society. Tang’s concept of “pre-bunking,” designing AI systems to prevent misinformation and social harm before they occur, offers a much-needed alternative to the endless cycle of debunking and damage control that currently dominates AI ethics discourse. Winograd, in turn, posed the essential question he has always asked:
"What different kinds of interaction could we have with AI beyond what we have now?"
This question arises with acute urgency in our current historical and political moment: AI tools, controlled by a few major players, are rapidly shaping the economy, society, and human life. But must their vision be our common destiny?
Winograd and Tang responded to Evgeny Morozov’s call to seek radically different ways of thinking about AI: not through the lens of corporate inevitability but through alternative historical and political possibilities. All three venerable speakers urged us to imagine a more democratic, participatory, and non-extractive AI, one never bound solely to the imperatives of Cold War militarization or corporate profit-chasing. There were also some jokes about the event’s title: Did we end up with the AI we deserve, in the sense of punishment for the hubris of blindly embracing Cold War logic?
While Winograd has helped both shape and critique the history of computing since the Cold War, Morozov, in both his essay and his talk, offers a counter-history that challenges the myth of technological inevitability: the idea that AI developed in the only way it ever could. His reflections help us remember that AI was not born in a vacuum; it was shaped to a large extent by military priorities, bureaucratic rationality, and efficiency-driven corporate culture.
This Cold War history of AI remains essential to every AI ethics course, and many important historical treatments already exist, including the seminal August 31, 1955 proposal by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”; Paul Edwards’ The Closed World; Yuchen Jiang et al.’s “Quo vadis artificial intelligence?”; and Meredith Whittaker’s “The Steep Cost of Capture.” AI’s history is thus one of power consolidation, but not only that. It is also a history of competing forces (some say wolves): centralization vs. decentralization, institutions vs. individuals, corporate control vs. counterpower.
Building on this contested history, Morozov’s engagement with sociological and philosophical traditions, from Claude Lévi-Strauss to the flâneur’s critical gaze, draws on the rich intellectual terrain that has long shaped AI ethics discourse. His invocation of a socialist “third way” and Western Marxist utopias, while more deeply rooted in academic debates, encourages audiences to remember that technological development has never been bound to a single trajectory.
Against this myth of inevitability, Morozov foregrounds alternative intellectual traditions, invoking Hubert Dreyfus, whose Heideggerian critique in What Computers Still Can’t Do exposed AI’s failure to account for embodied, non-rule-based intelligence, and Terry Winograd, who pivoted away from early AI toward human-centered computing. He also references Chile’s Project Cybersyn, a 1970s socialist experiment in computational governance, as an example of how AI could have evolved under different ideological conditions.
But if AI could have taken another path, why didn’t it? Morozov’s account highlights possibilities foreclosed by corporate and state power, but it risks romanticizing counterfactual utopias without fully reckoning with why these alternatives failed or remained marginal. The evolution of AI was not simply the result of missed opportunities; it was shaped by a dynamic interplay of ideological shifts, political contingencies, and institutional incentives. Computing has always contained two wolves: one centralizing, one decentralizing.
Take Terry Winograd’s transformation from AI researcher to human-centered computing advocate. In his pivotal essay “Thinking Machines: Can There Be? Are We?” Winograd critiques AI’s foundational assumptions, arguing that artificial intelligence as conceived in the mid-20th century was based on a bureaucratic model of intelligence that prioritized formalized, rule-based reasoning over human understanding and meaning-making. He writes:
Artificial intelligence, as now conceived, is limited to a very particular kind of intelligence: one that can usefully be likened to bureaucracy in its rigidity, obtuseness, and inability to adapt to changing circumstances.
Winograd’s shift toward human-centered computing was a direct challenge to the AI status quo. However, the ideas he developed were not immune to co-option. His student Larry Page took Winograd’s insights and built Google, a company that initially marketed itself as a tool to enhance human knowledge but ultimately became one of the world’s most powerful AI-driven data-extraction and control mechanisms. When corporations consolidate and absorb “human-centered” visions of AI, it is time to renew human-centered efforts, as many universities, including my own, are now doing.
Beyond asking how AI could have developed differently, we must focus on how it can be restructured now. Some of the most urgent questions are:
How can AI be governed democratically?
How do we prevent AI from further centralizing power under corporate monopolies?
Are there viable non-market, non-surveillance-driven AI models today?
Many excellent analyses already address these questions by examining power structures, funding sources, and institutional constraints, rather than imagining a cleaner ideological starting point.
This is why Tang’s idea of “pre-bunking” is so compelling—it forces us to think about how AI can be designed to prevent harm before it occurs, rather than just cleaning up the mess afterward. Tang offers a framework for practical intervention, not just historical critique. Similarly, Winograd’s call to rethink our interactions with AI invites concrete governance models that challenge the current order.
Once we ask “What if AI had been developed differently?” it’s time to move on to “What can we do now to change AI’s trajectory?” Morozov also invites his audiences to ask this question and intervene in AI’s development today. The conversation at Stanford yesterday made clear that we are not condemned to the AI we have inherited. The AI we deserve is the one we fight to build—through governance, intervention, and sustained public engagement.