Silicon Valley tech corporations know they need AI ethicists, but they strongly resist acting on ethicist recommendations. PR teams excel at their ethicswashing game, churning out virtue-conscious press releases and corporate landing pages. But when it comes to addressing long-standing problems like the inability to retain employees from historically marginalized groups, or the necessity of rethinking highly profitable algorithms that disproportionately harm these groups, corporations demur. In fact, it’s downright dangerous to be an AI ethicist in Silicon Valley. One can be “resigned” without resigning.
Some observers advise: “get management buy-in for ethics initiatives.” But just authoring the research remains perilous. Consider the fate of Google’s fired AI ethics co-leads Dr. Timnit Gebru and Dr. Margaret Mitchell. Many people have suggested “review” or “oversight boards” will hold tech accountable, but as yet these organizations remain beholden to industry and lack representation from marginalized groups. Students observing the tech ethics conversation articulate concerns about their own safety and ability to change tech.
Responses vary widely in different student communities:
The firing of Gebru and Mitchell sent a chill through the first-gen low-income (FLI) student community, dampening their hopes of being welcome and heard in tech.
Some students and faculty advocate a politics of refusal, saying no to tech work altogether.
Still, such refusal remains difficult to consider for many, including FLI students who see tech as their ticket out of poverty and a way to support their communities.
Other students remain interested in tech but feel alienated by both tech discourse and the academic critiques. They see only corporate ethicswashing on one hand, and “boring, repetitive, virtue-signaling” academic ethics on the other.
Boredom with these discussions, ethics ennui, remains a less common student objection, but it nevertheless suggests that academics like myself offer our own form of empty ethicswashing. If students lack recourse at work, then what exactly are my AI ethics courses doing for them, besides talking the talk without walking the walk?
I like to think my courses, and universities in general, can provide the analytical, technical, and organizational tools to equip students to change tech for the better. Students arrive understanding their CS education benefits from ethics discussions:
“I take many classes that teach me how to prototype, and often I wonder whether I should build something.”
Stanford provides ample opportunities to interrogate this question:
When students read the work of Tawana Petty, the National Organizing Director at Data for Black Lives and former Data Justice Program Director at Detroit Community Technology Project, it stuns them to learn that one in two people in communities of color already have their faces collected in government databases.
Another common question students ask is whether we need algorithms that determine gender from images—algorithms that often rely on an outdated gender binary and return spectacularly wrong labels.
Some Stanford classes assign innovative coding projects in which students learn how to mitigate code whose “accurate predictions” continue to harm disadvantaged populations.
The problems are clear, and some tech interventions appear promising, but ultimately, students have little recourse inside corporations. What if they refuse to build something or want to revise an algorithm? They can be easily replaced by someone more willing to do the work.
Recently, I announced an AI ethics workshop to my students and was delighted to see several in attendance, but I was disturbed when I received this text after the first hour of presentations:
I wondered why this tech industry-enthralled Computer Science freshman thought he’d heard Critical Race Theory mentioned, and why he refused it so strongly. The workshop speakers had repeatedly addressed the disproportionate harms AI causes to historically marginalized groups. Some participants also introduced themselves as speaking from “unceded native land.” They told the history of AI as arising from a Faustian pact with the Department of Defense during the Cold War. Most people on the Zoom screen had shared their pronouns and were talking about their “journey” from the tech industry to organizations critical of tech, some of which were universities. I suspected this student had developed an allergy to academic social justice rhetoric and especially resented its criticism of his chosen field.
I tried to assure him that academics use this language to be inclusive, to foreground the seizure of native land, that these issues are foundational to American society as well as tech, and that raising awareness is the first step toward action. The student retorted:
“All this foregrounding doesn’t go anywhere. Has it moved Stanford to pay reparations, or do anything? This is just empty virtue-signaling.”
I don't know if Stanford is paying any reparations to native people; I have never heard that discussed. In any case, I had to recognize the student’s point. More importantly, he was not objecting to the values of inclusion and reparation, but to the rhetoric and its failure to produce actionable results.
As sympathetic as this student seemed toward real inclusion, I also wondered if his aversion to Critical Race Theory (CRT) resulted from recent media campaigns against it.
For detractors, CRT in 2021 appears an easy straw man for rejecting conversations about racial justice. AI ethicists draw on Critical Race Theory, among many other methodologies, when criticizing the tech industry. So which was it? Was this student weary of politically correct academics, or annoyed at our powerlessness, or both?
Another student who left the workshop early voiced a common refrain of my low-income students from historically marginalized groups:
These FLI students know tech hurts their communities, but they would rather get a piece of the action than refuse a good job and be told to feel ethically superior about passing up a large tech salary. For this student, criticizing tech may highlight some truths, but all these AI ethics workshops overshadow many of the other injustices of the academic world, like debt-producing masters programs.
Confronting such remarks, I mull the choices available to AI ethicists. I believe there are actionable steps we can take.
Here are a couple of possible directions:
Many of our FLI students want to go into tech. Instead of shaming them, let’s advocate for them. The demographics of the tech industry will only change under pressure to invest in future employees. Right now, tech companies like Google cynically offer only UNPAID mentorships that lead to little more than canned advice from HR and a photo op. Now is the time to demand PAID opportunities for those young tech workers from historically marginalized groups and help them move up through the ranks to management. Let’s empower them to help build oversight boards and regulatory bodies that can win legal, cultural, and technological battles in tech.
AI ethicists should consider developing multiple rhetorics for their different audiences. Ethics findings can be presented in different rhetorical styles outside academia and communities of practice. This is a common problem for academic disciplines: the specialized language scholars use with one another is too often opaque to outsiders. It is important to find a democratic language to transmit specialized knowledge and help students organize.
For my students who suffer ethics ennui, I believe they possess a genuine interest in values of equity. They assert that the classic texts of CRT speak to them, that they are moved by the ideas and eloquence of W.E.B. Du Bois, Patricia Williams, Kimberlé Crenshaw, and others:
“It’s not the reading. It’s our reality: if we go into tech, we need solutions that address the constraints we face and the values we want to hold on to.”
The most lauded avenues of action annoy them. They tell me that the politics of refusal is for people who are otherwise gainfully employed and secure in their communities. We need to enable students to participate in ethics conversations in their own rhetorics and support those who wish to climb to positions of power in industry. We must help our students enforce not bespoke corporate pseudo-principles, but rather ethics grounded in human rights that protect them, their communities, and all of us.