How Computing Ethics Got “Woke”
Where we started and where we need to go
“AI is Communism; blockchain is Capitalism!” declared a Silicon Valley billionaire to my startled Stanford University Ethics and AI students, who blinked and tentatively raised their hands. Then, after several foreboding gestures about the AI arms race with China, a few deep sighs about the stagnation of Western progress, and a bit of avuncular advice to “ignore all the woke pressure and ethics hype and take care of your own interests first,” the tanned, white-polo-shirt-clad venture capitalist of European descent and American nationalist passions dashed off to another meeting.
The student-veterans in the class, a decade older than most Stanford undergrads (one had recently immigrated from China and served in Afghanistan), shared concerns about surveillance, autonomous weapons, and data security. Many students had wanted to inquire about political engagement and AI governance. No one, not even the sophomores who’d recently read Marx in their history classes, declared themselves a communist. Nor did anyone step forward as “woke,” which students described as a racist conservative appropriation of African American English (AAE) to caricature people who address social justice questions. I asked the students if any of them still use “woke” and was impressed that they did:
Just because woke’s been appropriated doesn’t mean it’s lost importance to our communities. Now there are two definitions. Theirs, which says we have too much power, and are trying to deny them free speech, and ours, which comes from Leadbelly warning Black people to watch out and Tupac schooling us to educate our minds.
[Image: Video still of Tupac Shakur from 1992 telling his interviewer “woke means educate your mind.”]
Given these competing definitions, students asked me to reflect on how we got to this moment, when the tech industry and computing education often use the same words oppositionally to criticize social problems. First, I’ll try to explain why “ethics” is sometimes equated with “woke” as a term of abuse in tech establishment circles; then I’ll describe how the techlash reshaped computing ethics education; and finally, I hope to offer definitions of ethics that might help our students.
Our VC classroom visitor’s enigmatic pronouncement “AI is Communism; blockchain is Capitalism” encapsulates many of the challenges of ethics education in computing. We understood him to mean that what gets called “artificial intelligence” is in fact massive data collection and machine surveillance, which threatens to make the US an invasive, autonomy-denying surveillance state like China. He’s wrong about neither the misnomer of AI nor the problem of surveillance, though some may dispute the US’s resemblance to China. As for the “blockchain is Capitalism” assertion, after the FTX disaster few people are so sanguine about the financial promise of blockchain. But from our guest’s libertarian perspective, blockchain appears distributed, decentralized, and confounding to surveillance.
Interestingly, many tech industry critics and AI ethicists, whom our guest derides as “woke,” share both his concerns about surveillance and his skepticism about “ethics,” which they view as an academic preoccupation that impedes students’ advancement. They also similarly value distributed, decentralized systems and power. Consider Dr. Timnit Gebru’s DAIR institute, which stands for “Distributed AI Research.” Dr. Gebru equally abhors surveillance, believes in cultivating distributed, decentralized perspectives on AI, and doubts that the traditional ethics long taught in university computing courses serve students, especially those from marginalized groups around the globe.
Despite these remarkable similarities, our classroom guest and tech critics like Dr. Gebru offer diametrically opposed definitions of ethics. The VC disparages “ethics” because he defines these as “woke” efforts to obtain more equity and inclusion in tech conversations. In contrast, Dr. Gebru and others reject “ethics” when these are narrowly defined by the academic discipline of American and European philosophy.
My students suspect the VC would be much more sympathetic to ethics if he defined them as Western philosophical value systems, but therein lies the problem. Starting at Stanford in the 1980s, computer science greats such as Professors Terry Winograd and Eric Roberts developed computing ethics courses teaching students how to identify responsibility for their own work as well as for software disasters ranging from the Therac-25 radiation deaths to the Northeast Blackout of 2003. Courses included brief modules on two of the dominant Western ethics frameworks to help students weigh the norms and duties (Kant) computer scientists must uphold, while also anticipating often unintended consequences (Bentham, Mill, and others) that might fail to serve humans and posterity. Many philosophers, myself included, often made one- or two-hour cameo appearances in computing ethics courses to explain these frameworks. Some of us ventured into virtue ethics, relational ethics, and natural law ethics because these have deep connections to non-Western ethics as well. From these early years through the 1990s, Stanford ethics courses drew on Professor Helen Nissenbaum’s description of the difficulties in addressing responsibility in computing, especially the problem of the “many hands” involved in the software development cycle.
Everyone, students and instructors alike, knew that inserting a couple of philosophy lectures into a designated computing ethics course was inadequate, but we remained uncertain how else to integrate the material. We also learned that even when we spent most of the class on Western philosophical ethics, as some of our Stanford versions of the course tried in 2015-16, the focus remained merely on individual responsibility. Students loved these philosophical excursions, especially with great teachers like Krister Johnson, but we wondered how to better address larger societal issues. While we criticized the myth of the lone computer “genius” or “geek,” in fact, when analyzing the software development cycle, we found ourselves drawing on Fred Brooks’ 1975 essay collection The Mythical Man-Month. Brooks’ influential book got many things right about the need for “small teams” who critically investigate code, but it also cemented the tech industry fantasy of the one “great” programmer whose work was worth more than that of perhaps 200 “mediocre” programmers. These myths die hard, especially in Silicon Valley.
Algorithmic technologies have partially succeeded in moving the conversation from lone engineering heroes and individuals responsible for a software error to the social impacts of machine models. But not without noisy, bristling debate. In 2017, the same year “woke” entered the Oxford English Dictionary with positive definitions of “social awareness” and “commitment to justice,” the infamous Google memo denounced efforts to address race and gender in tech work culture and algorithmic harm. This memo has been widely understood as the start of the Silicon Valley techlash, though public criticism of the tech industry and algorithmic harms arose much earlier, with bias issues in ImageNet in 2010.
By 2017, the techlash had caused a reckoning in both industry and academia. Women and gender minorities stepped in to criticize the memo, tech industry culture, and hype about machine learning models. Scandals and protests grabbed headlines in 2018: Cambridge Analytica, the revelations about Amazon’s sexist recruiting algorithm, the Google Walkout, and the Google employee protest against Project Maven. Engulfed in controversy, industry rushed to restore trust, launching ethics credos and other campaigns of its own. Critics quickly doubted their sincerity, especially regarding the development of bespoke principles and calls for internal regulation.
As pressure mounted, both industry and academia began to address questions of algorithmic impact: What problem is this technology trying to solve? What are we trying to optimize? Who benefits from the solution to this problem? Whom does this technology most negatively impact? Do we even need this technology? Should we build this? With much mainstream press fanfare, Professors Rob Reich, Mehran Sahami, and Jeremy Weinstein developed an ambitious multidisciplinary course, CS182: Ethics, Public Policy, and Technological Change, which shifted the focus from training for individual morality to policy making. Stanford has also since
adopted Harvard’s Embedded EthiCS model, which trains students to ask ethical questions as they learn to build new technologies.
Many argue, however, that such efforts need to more closely study the disproportionate harm to marginalized people. Among the earliest classes to embrace this task were Stanford Lecturer Dr. Cynthia Lee’s Race and Gender in Silicon Valley and Duke Professor Nicki Washington’s Race, Gender, Class, & Computing. Both courses address intersectional questions of race, gender, and class in computer science education. CU Boulder Professor Casey Fiesler began a now-legendary open Google Doc collecting Ethics and AI syllabi, which calls for a move away from the long-standing claim that engineers exert little influence over ethics. No more “I’m just an engineer” strategies of disengaged hibernation.
As computing ethics courses tried to answer social impact questions, they sought frameworks that center equity and inclusion, and began to broaden their definition of ethics to incorporate the critical approaches of sociology, anthropology, Black feminism, and non-Western frameworks. The search for actionable ethics for social problems helped computing ethics become “woke” in the sense Tupac meant when he said, “educate your minds.”
Broadened beyond academic philosophy, definitions of computing ethics now include the social sciences, or what others like Deb Raji, Morgan Klaus Scheuerman, and Razvan Amironesei call Humanistic Social Science (HSS), by which they mean interdisciplinary approaches to evaluating the harms of machine models. But the work is far from done. In their article “You Can’t Sit With Us: Exclusionary Pedagogy in AI Ethics Education,” Raji et al. survey the computing ethics education landscape after 2018 and still see many AI ethics courses that focus on the individual engineer, who, after learning a “sprinkling” of ethical theory, may imagine themselves capable of solving all problems, technical and ethical. This comical image of the engineer who sees themselves as an “ethical unicorn” results from a failure to teach engineers to think critically about their roles, and from longstanding “exclusionary pedagogy.” Raji et al. call for “frameworks of intervention based around existing problems, not anchored to the existing skills of those assumed to be in the position to address the problems.”
Now in the second decade of large algorithmic models, the teaching of computing ethics has made significant, if not uncontroversial, strides. It is clear a multidisciplinary, multi-pronged approach is needed, as we saw at Stanford HAI’s Embedded EthiCS conference in March, which showcased both ethics embedded in computer science courses and stand-alone approaches like Professor Washington’s. Mehran Sahami declared in 2017, “we want to make ethics as unavoidable as debugging”; six years later, he and his colleagues have indeed succeeded. Everyone still dreads debugging, though, and computer science ethics educators hope ethics can be made both useful and attractive, much like the promise of industry jobs (which seems weak at this moment) that draws so many students to computer science. One effort to generate such appeal can be seen in Stanford’s work to make computer science education more welcoming to low-income students from marginalized groups through programs like the Stanford Summer Engineering Academy (SSEA), led by Dr. Lee. Equity and inclusion inspire students to stick with the rigorous challenges of learning to code.
Can industry embrace such an approach? Can it rethink its current disdain for woke ethics and appreciate Tupac’s sense of “educating minds” without derision? Only if industry learns that ensuring a path for minoritized groups into tech leadership positions secures greater, more thoughtful innovation. It is long past time to ditch the cultural conformity Silicon Valley has always suffered but loudly disavowed.