I spent the week thinking about what I want to do with Protobug. Some of that thinking happened while talking to a long-time friend of mine who has a background in philosophy. He has recently rediscovered his love of the subject in all its many splendid forms. As I’ve delved further into the world of Machine Learning and Artificial Intelligence (general and otherwise), our conversations have tended toward the intersection of these fields and his. As expected, it certainly seems like there are many parallels to draw between the philosophy of mind and the science and ethics of artificial systems.
As I’ve been codifying where I want to take the Bug series, I’ve been peering into the work of the notable minds surrounding the field of Artificial Life*. I’m interested in the odd space that hovers between our tendency to anthropomorphize and the actual work being done on simulating emotion. I’d like to write a system that lets NPCs in ‘bug store in memory any experience whose intensity surpasses some threshold value. Later, they might draw on these stored experiences to generate ongoing emotions, which could in turn provoke situations that breach the threshold again, and so on.
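To make that loop concrete, here’s a minimal sketch in Python (not necessarily whatever ‘bug itself will run on). Everything in it — the names `Bug`, `Experience`, `live_through`, `ruminate`, and the threshold and decay constants — is hypothetical, invented for illustration rather than taken from the project:

```python
import random
from dataclasses import dataclass, field
from typing import List, Optional

MEMORY_THRESHOLD = 0.6  # only experiences this intense are remembered
DECAY_RATE = 0.95       # mood drifts back toward neutral each tick


@dataclass
class Experience:
    """One event an NPC lived through."""
    tag: str          # e.g. "attacked", "fed", "lost_mate"
    valence: float    # -1.0 (awful) .. 1.0 (wonderful)
    intensity: float  # 0.0 .. 1.0


@dataclass
class Bug:
    """An NPC that stores intense experiences and carries a running mood."""
    memory: List[Experience] = field(default_factory=list)
    mood: float = 0.0  # -1.0 .. 1.0

    def live_through(self, exp: Experience) -> None:
        # Only experiences past the threshold enter long-term memory.
        if exp.intensity >= MEMORY_THRESHOLD:
            self.memory.append(exp)
        # Every experience nudges the current mood, remembered or not.
        self.mood += exp.valence * exp.intensity * 0.1

    def ruminate(self) -> None:
        # Each tick, stored memories keep feeding the ongoing mood;
        # the mood then decays slightly toward neutral.
        for exp in self.memory:
            self.mood += exp.valence * exp.intensity * 0.05
        self.mood = max(-1.0, min(1.0, self.mood)) * DECAY_RATE

    def act(self) -> Optional[Experience]:
        # A bad enough mood can provoke behavior that itself produces a
        # new threshold-breaching experience -- closing the feedback loop.
        if self.mood < -0.5 and random.random() < abs(self.mood):
            return Experience("picked_fight", valence=-0.8, intensity=0.7)
        return None


if __name__ == "__main__":
    random.seed(7)  # deterministic demo
    npc = Bug()
    npc.live_through(Experience("attacked", valence=-0.9, intensity=0.8))
    for _ in range(50):
        npc.ruminate()
        echo = npc.act()
        if echo is not None:
            npc.live_through(echo)  # the mood has bred a new memory
    print(f"memories: {len(npc.memory)}, mood: {npc.mood:+.2f}")
```

The point of the sketch is the feedback: a stored memory sustains a mood, and a sufficiently bad mood can generate new threshold-breaching experiences, so the spiral can keep feeding itself without any further outside input.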
Recently, while talking to my friend about this side of AI/AL, our conversation morphed into a cynical discussion of politics, accelerationism, transhumanism, capitalism, and the current and potential abuses of AI. It made me wonder about the ethics of programming such emotional systems into games. Even if the digital organisms don’t have any real consciousness or subjective experience, is it moral to code in the possibility of suffering? Unfortunately, I doubt many arbiters/users of these technologies will think about the implications. I can see a possible future where COD bots are made performant by way of fear rather than simple pattern prediction. Would such a technology mean anything to the players, beyond greater realism in their virtual opponents? The idea of “digital red rooms” is pretty terrifying (especially when considering the implications of a two-way brain-machine interface).
I don’t mean to be so grim about these technologies, but I think it’s something to keep in the back of our minds as we strive to further them. It’s something I’ll continue to ruminate on as I work through the AL/AI elements in the Bug series.
*As a side note, deeper excavations into the world of AL seem to indicate a fractured and catty discipline, though I hope I’m wildly misinterpreting this.