I conducted a pilot in Summer 2020:
My preconception is that Computer Science does not have a rich history of ethics embedded into it. You go to Engineering or Bio-Engineering and those disciplines, they have a much more ingrained sense of these kinds of professional ethics and codes and norms, you know, versus someone like me who has multiple Computer Science degrees and I had never even heard of STS until I was well into my faculty life.
We really have an ideal about the individual entrepreneur creating a technology by himself…My students, I think, are trained by the culture they consume, by their other classes, by the context of their own homework that has them competing against each other to build designs, to approach any technological design…as something that emerges de novo like Athena from Zeus's head…But in the real world, no technology escapes the context of its design, which is always collaborative. And especially for something that is as big and expensive as AI…it is always something that is going to be designed by a large organization and implemented by a large organization. So it is just unfair to the reality of the world as it works to describe technology in general, but AI especially in particular, as anything but a product of specific institutions.
We spend time doing science fiction based stuff on AI, though I try not to let the conversation stray too far—'cause people get really interested in talking about, like, sexual consent for robots, which is an interesting intellectual conversation, I suppose, but not actually—'this isn't the stuff that we should be worrying about right now' is basically the issue there.
I think there's a social aspect here where people want to say, ‘well, there's this AI system, and I used this system, so this system has the responsibility for these decisions…I'm going to claim that it's unbiased in some way because I didn't make that decision, the system made that decision.’ So I think there's definitely that sort of mentality or lean when you're using AI. So the thing that I try and get at is, you can't make that claim, because somebody built that system, and that responsibility lies somewhere. It lies with you because you're using a system and need to have some kind of understanding about what the implications of that system are, and that responsibility also lies with the creator of that system, about how they built that system, what data they used, all these things, that you should be aware of the implications of this system's use.
The lack of consistency of content is not surprising considering the lack of standards in this space...This is not a bad thing; instead, the variability suggests that there is a lot that computing ethics educators could learn from each other.
AI Ethics Education:
It's not clear which lessons from which traditions are (for better or worse) just reproductions of siloed moral-theoretical commitments, which are the right ones to heed when teaching AI Ethics generally, and under what conditions these lessons change.