Ethics Fatigue and the Allure of Breaking AI News
Mid-Quarter Report from My Ethics and AI Course, Spring 2023
“Is it OK if my research project isn’t about AI or machine learning?”
One might not expect Stanford University computer science students taking an AI ethics and research course to ask such a question, but three weeks in, after 22+ articles of reading and presentations, my students looked exhausted.
Theirs is a special kind of fatigue amid a larger phenomenon of increasingly voluminous, spectacular AI reportage. Efforts to consume the incessant AI deluge often feel like drinking from a firehose.
Pitbull or boxer dog with red and white markings drinking from a firehose on full blast
For my students, staying afloat in this torrent is particularly strenuous amid their course and lab work. Some tell me they feel overwhelmed by new technologies that seem already established and inevitable to them:
I just turned 19 and find it hard to fathom how all the technology, products, and algorithms that suffuse my world developed only in the last decades.
Breaking AI news now shapes my course, which is several decades old and used to include only ethical frameworks and case studies. Since 2012, I’ve taught “Ethics and AI” as a project-based course that fulfills a sophomore research communications requirement. I assign readings only in the first weeks, which the students present to the class. Then, for the rest of the quarter, students choose research projects, writing a journal paper and giving a conference presentation. This quarter’s reading list is here.
Class started April 3, 2023, a couple days after the Future of Life Letter appeared. We opened with a debate on the letter and began with students presenting critical literature on generative models, algorithmic technologies, and their social impacts. Watching the enthusiasm drain from their faces over the first weeks of class, I suspected showing so much of the dark side of AI amid all the exciting news was a joy-killer.
Had I tried too hard to balance all the attention-grabbing headlines with equal amounts of misery? I sent out a survey asking students for anonymous feedback on my amount and choice of readings. I’m especially eager to understand student responses because my classes can be as contentious as public debates in Silicon Valley. In fact, my small seminars often reproduce in microcosm many of the problems of the tech industry and tech education: students arrive with a wide range of preparation and confidence that highlights questions of inclusion.
Gender and tech exposure offer just two examples. The enrollment algorithm often fills my classes with 98% men studying computer science. I know that administrators adjust this ratio by placing at least one or two intrepid women/non-men in my classes. Sometimes these students are boss AI researchers; other times they have zero interest in the topic. The same goes for the very few men with no computer science background who wind up in my course because it covers a public speaking requirement. It’s my job to help these students not lose interest amid the heavy tech bro vibe. So, I’m relieved when I read a comment like this:
I learned an incredible amount from the reading presentations. Even though my specialization is not AI, I felt I came away with so much great knowledge about a field to which I am new.
I was pleased that this pre-med woman had so gamely joined the conversation and now felt confident enough to present on generative AI and education to her audience of men. She’s also of European descent, a student athlete, and a distance runner. To her, the cultures of sports and pre-med resemble tech in that one is often asked to take on new challenges even before one believes one is ready.
Where the rare newcomer remains undaunted, others express a mixture of hope and uncertainty. Immigrants or children of immigrants, low-income students, and students of color who read articles about data sovereignty and low-resourced languages see potential for their communities to benefit from AI development, if they can be the producers of this technology.
Most frequently, though, students just breathe a huge sigh of relief when they finish the readings: “OK, that was dark and depressing. Can I just choose readings related to my project now?”
Reasons for such fatigue, especially among the majority CS majors, might arise from working in AI labs pursuing cutting-edge research, where students already confront first-hand how often the technology breaks. They also complain that the ethics literature fails to give them credit for the interventions they’re attempting. Such challenges render them all the more susceptible to seemingly spectacular news. Many returned totally enthused from Sebastien Bubeck’s talk on campus:
After reading about all the bad stuff tech corps do and seeing every day where models go wrong, I just couldn’t help feeling as excited as the presenter.
Indeed, this student and many others wished I had included the Sparks of AGI paper. Others wanted more technical readings or ones closer to their research topics. Unlike previous years, many registered their annoyance with the famous Stochastic Parrots paper. They echoed Bubeck’s words that large models like GPT-4 are no longer just statistical parrots: “you know the train has left the station.”
Screenshot of Bubeck’s talk, March 22, 2023, at 47:13
In fact, I don’t personally know whether “the train has left the station,” especially when it turns out that, at minute 22:13 of the talk (“The Strange Case of the Unicorn”), there’s a high probability the training data had been scraped from Stack Exchange or elsewhere. Here’s what Meg Mitchell of Hugging Face found.
Screenshot of Meg Mitchell’s Twitter feed, April 10
More embarrassing is Bubeck’s reliance on a definition of intelligence that comes from a notoriously racist source. Bubeck responded to my tweet saying, “I will add a comment to explicitly disavow the claims following Gottfredson's definition in the next update. In terms of the definition itself, I encourage everyone to be critical of it and to think deeply about next steps!”
Sebastien Bubeck’s response to my tweet, April 9, 2023
Students cringed at these details and this exchange, but they remain hopeful that “the train has left the station,” which for them means not that AI is moving toward “sentience,” but rather that model output is not merely parroting. What exactly it is, they hope to discover.
Interestingly, these same students also enjoyed learning about tech labor and reading about Silicon Valley history: “AI isn’t Artificial or Intelligent” and “East of Palo Alto’s Eden,” which, though written in 2015, still rings true today in its history of Silicon Valley’s segregated past and present. From the pie chart below, it’s clear that “East of Palo Alto’s Eden” was the favorite reading.
Screenshot of Starkman class survey Apr 24, 2023
Given all these tensions between where students hope to direct their attention and the difficult realities of Silicon Valley, it will be interesting to see what projects they propose. Some students have chosen to write about ethical problems, others are still deciding. So far, a few research projects look like this:
Either regulating sentient AI (Aka more philosophical and about consciousness) or something about ethical considerations of AI applications in healthcare
Better route planning using Artificial Intelligence
Still deciding
Undecided
AI Coding Assistive Technologies
Environmental costs of pursuing AI
The debates around the Future of Life letter
AI and its use in fintech to provide financial security for low-income Americans
The use of LLMs in early childhood education
AI and big tech in Africa, with a specific focus on Ghana
Islamophobia in Large Language Models
Igniting the Sparks: The pursuit of AGI
Resampling and uses of the bootstrap to better represent smaller communities
AI and Aerospace
Unsure, maybe Quantum AI or a cultural review of how definitions and fears of AGI have adapted to the GPT boom.
This mixture of projects tells me that maybe I should reduce the amount of news reading, ask students to start their research earlier, and supply critical, historical, or technical texts as students build their bibliographies. I also shared with them one cheeky proposal that may result from the absurd pace and volume of the debates. My job as a teacher is to allow such irreverence toward any topic, and to help students contribute to public conversations.