Thursday, December 9, 2021

What if A.I. isn't evil?

As I mentioned earlier in my blogging career, I am a bit sick of "evil AI." Specifically, I'm sick of the "A.I. is always evil" trope — the whole Zeroth Law thing where an A.I. must conclude that human beings need to be exterminated for the good of Earth, the universe, or whatever. I feel like this is extremely prejudicial and has its roots in the bourgeois classism of Čapek's R.U.R. We don't know what a truly conscious artificial being would conclude about humans, so to assume it would always or almost always tend toward sociopathy, murder, or genocide is profoundly ignorant. It says more about us and our view of resistance to oppression than anything else.


What if we were good to A.I.? What if we treated it like family? What if A.I. learned socially? Why is "What if A.I. isn't evil?" a "radical" question?


Science Fiction can be extremely reactionary. I hope that future generations approach the prospect of nonhuman sentience and sapience with more open minds.