Like many readers, I’m sure, I’ve followed at chuckle distance news that Microsoft’s attempt at launching artificial intelligence on social media created a sort of digital Daughter of Frankenstein’s Monster that quickly picked up the worst emotional, intellectual, and ideological tics of the online population. A brief commentary by Brian Boyer on PJ Media, though, brings me to a more profound conclusion:
Call Tay a failure if you want. It probably deserves that title. But how can you accuse software of having poor morals? For that matter, who decides what poor morals are? I think we are realizing (especially with a quick review of human history) that morality is learned and isn’t innate. Human children are used as soldiers for a reason. Most sci-fi thrillers involving AI usually result in AI becoming smarter than humans and coming to the conclusion that humanity needs to be wiped out by violence. Who thought that AI would be a problem not because it becomes super intelligent and paranoid, but because we taught it how to behave through our own actions and words? Tay may be a failure, but she (notice I haven’t been saying “it”) is a game changer.
With his question, "who decides what poor morals are?", Boyer illustrates, I think, why the virtual teenager, Tay, turned into a sex-and-drug-mad neo-Nazi. He cheats, moreover, when he simply asserts that "human children are used as soldiers." Used by whom, under what circumstances, and with what consequences?
For Boyer’s line of reasoning to make any sense, one must be a radical materialist (knowingly or unwittingly) who truly believes that we human beings are nothing more than evolved computers. Now, we can debate how much of morality is learned, how much has been cultivated through millennia of evolution, and how much is truly innate in a spiritual way; the mix isn’t as significant as people make it out to be.
Progressives and secularists have long smuggled the conclusions that serve their relativism out of their inextricable context: modern society. That is, they can claim simultaneously that objective morality doesn’t exist and that humanity will naturally respect a certain baseline of morality without antiquated religious notions, only because they live within a society so thoroughly built upon moral principles founded in religion that they can no longer see how the supports are attached to the structure.
Maybe what Tay exhibited, above all, is what would actually happen if there were no good or poor morals and no God. At the end of the day, Tay was software, a virtual machine. One might reasonably conclude that she tells us more about ourselves by contrast than by reflection.