Snitch Switch: Smart Assistants With “Moral AI” Could Call Police on Owners Who Break Law

Call it Terminator meets the second-grade tattletale, but some “experts” have suggested that electronic home assistants could be programmed with moral artificial intelligence (AI) that would decide whether to report their owners to the authorities for breaking the law.

As the Daily Mail reports:

Academics at the University of Bergen, Norway, touted the idea at the ACM conference on Artificial Intelligence, Ethics and Society in Hawaii.

Marija Slavkovik, associate professor in the department of information science and media studies, led the research.

Leon van der Torre and Beishui Liao, professors at the University of Luxembourg and Zhejiang University respectively, also took part.

Dr Slavkovik suggested that digital assistants should possess an ethical awareness that simultaneously represents both the owner and the authorities — or, in the case of a minor, their parents.    

An “ethical awareness” customized based on the owner — that could be interesting. One might imagine what an AI ethical awareness representing Bill or Hillary Clinton could be like. Would there be a pass for island jaunts with underage girls or hurling lamps and other objects in fits of rage? Would leftist moral AI (isn’t that an oxymoron?) apply different standards based on party affiliation, epidermal melanin content, genotype, and professed sex or sexual inclination? And, really, are these questions rhetorical?

Whatever the parameters, upon witnessing doubleplusungood activity, devices would “have an internal ‘discussion’ about suspect behaviour, weighing up conflicting demands between the law and personal freedoms, before arriving at the ‘best’ course of action,” the Mail tells us. My, what could possibly go wrong?
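What might such an internal “discussion” look like in practice? The article gives no detail, so what follows is only a toy sketch, in Python, of the crudest conceivable version: a weighted vote among “stakeholders.” Every name, weight, and threshold here is invented for illustration and makes no claim to resemble the researchers’ actual design.

```python
# A toy illustration only. The stakeholders, weights, and thresholds
# are hypothetical; nothing here reflects the Bergen proposal itself.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    weight: float       # how much this party's view counts in the "discussion"
    disapproval: float  # 0.0 (no objection) to 1.0 (demands a report)

def arbitrate(stakeholders, warn_at=0.4, report_at=0.75):
    """Naive 'internal discussion': a weighted average of stakeholder
    disapproval, mapped onto one of three courses of action."""
    total_weight = sum(s.weight for s in stakeholders)
    score = sum(s.weight * s.disapproval for s in stakeholders) / total_weight
    if score >= report_at:
        return "report to the authorities"
    if score >= warn_at:
        return "warn the owner"
    return "ignore"

# Invented example: the owner shrugs, the law objects, the manufacturer hedges.
print(arbitrate([
    Stakeholder("owner", weight=1.0, disapproval=0.1),
    Stakeholder("law", weight=1.5, disapproval=0.9),
    Stakeholder("manufacturer", weight=0.5, disapproval=0.5),
]))  # prints "warn the owner" with these made-up numbers
```

Note that everything in such a scheme hinges on who gets to set the weights and the thresholds, which is rather the point.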

Much has already — philosophically. Aside from quoting a University of Cambridge professor who said that human interactions are messy, the Mail also struck a relativistic note and wrote that the “ethics of the culture the product is launched in” would have to be considered. So now there could be a pass for female genital mutilation in Somalia.

Weighing in, Dr. Slavkovik told the Mail that there “is [already] an ethical conflict between people in one family, let alone between people and manufacturer, or shareholders of the manufacturer and programers [sic].”

“If we want to avoid Orwellian outcomes[,] it’s important that all stakeholders are identified and have a say,” she continued.

Actually, if we want to avoid Orwellian outcomes, people must believe in Truth, have a good grasp of it, and be virtuous. 

This gets at the real problem here. In reality, “moral AI” is currently just a proposal by a small group of scientists who may (or may not) want publicity as much as greater knowledge, and news organs certainly crave the traffic reporting on such stories brings. The deeper issue, however, is that a prerequisite for moral artificial intelligence is moral natural intelligence.

This is sorely lacking today. With moral relativism/nihilism having swept the West, the kind of people who’d entertain moral AI are often the first to scoff if a traditionalist dare even mention morality.

The contradiction is profound, too. The modern says, “Don’t impose your values on me!” while his subconscious whispers, “Because I want to impose my values on you!” The leftist states, “Who’s to say what’s right or wrong?!” while his subconscious whispers, “I will — once I have the power to legislate!” The entitled snowflake proclaims, “Everything is relative!” while his subconscious whispers, “And I’ll make sure it’s all relative to me!”

This relativistic spirit was reflected in the Mail piece when it stated that “there would need to be room for compromise [on moral AI] because the world itself is not black and white.” This sounds good to modern ears, which is why such statements so readily roll off lips today; it seems enlightened to speak of “shades of gray” as if you’re sophisticated enough to perceive nuance. But all it really means (at best) is that the person is too undiscerning to penetrate the gray clouds of confusion.

Reality here is as simple as it is rarely spoken. People are shades of gray, always, because no one is morally perfect; everyone sins.

But the Truth is black and white.

“Morality,” properly understood, reflects Truth, which itself implies God. To understand this, start with the basic question, “Who or what determines what we call ‘morality’?” There are only two possibilities: Either man does or something outside man does. The idea that man determines right and wrong translates into “moral relativism”; this means that morals are relative to the time, place, and people. The belief that right and wrong are determined by something outside of man reflects the idea of Truth (absolute by definition).

Why does the latter imply God? Well, if we’re saying that “Truth” is something existing apart from man, that it is inerrant, and that we must abide by it — which means it’s above man — what are we actually describing? Now, what are the implications of relativism? I addressed this in “The Nature of Right and Wrong,” writing:

[Moral relativism] states that morality is determined by man; what is rarely recognized, though, is that if this is so then there is no right and wrong, objectively speaking. Think about it: If 90 percent of humanity said it preferred chocolate ice cream over vanilla, it wouldn’t mean that chocolate was “right” and vanilla “wrong.” Nor would it mean that chocolate was better in any objective sense — it would simply mean that people happened to like chocolate better. It’s illogical to say otherwise. But would it be any more logical to say that murder was wrong for no other reason than the fact that 90 percent of all people preferred that others not kill in a way that we call unjust? Of course not. But if the idea that murder is wrong is simply a function of man’s collective preference, it then falls into the exact same realm as the collective preference for a type of ice cream: the realm of taste.

This is precisely why, mind you, moderns (especially the leftist variety) generally don’t use the terms “morality” or “virtue,” but instead speak of “values.” Values can be positive, negative, or neutral, and all that’s necessary for something to be a value is that someone — an Einstein or idiot, savior or Stalin — somewhere values something.

The apocryphal saying tells us, “Moral issues are always complex matters, for people who have no principles.” Today the unprincipled, and hence the complexity, abound. Many among us now don’t even know what marriage, proper sexuality, or sex itself is (as they busy themselves inventing new “genders”).

But no one who rejects Truth’s existence and who has spent time justifying himself with relativistic talk has any business devising a moral standard, artificial or otherwise. For a true moral standard is not a moral standard but the moral standard — and it’s not devised, but discerned.

Photo: asbe/iStock/Getty Images Plus