“The real danger is not that machines will begin to think like men, but that men will begin to think like machines.” — Sydney J. Harris (former journalist for the Chicago Daily News)

I recently watched a YouTube video about an incident that came close to starting a nuclear war, and it reminded me of Harris’ quote above. I knew about the incident growing up but didn’t know the details described in the video. The meaning of Sydney J. Harris’ statement can be seen in this video, and I encourage you to watch it before finishing the rest of the post. It is about 20 minutes long: This War Game Glitch Almost Ended the World.

Harris’ statement is really a warning for our time as well. There’s a practice of labeling things on social media as “misinformation,” which has been called censorship. When it is abused, it is censorship. (And yes, it has been abused, and it will continue to be abused in the future.)

But there’s a good reason that labeling things as misinformation started: misinformation is a tactic of war (deception), and the arrival of unfiltered free expression on social media gave rise to the problem. Free expression is a good thing and is part of a healthy democratic society. But the lack of accountability for what is said has led to immeasurable damage, including opening up an uncontrolled propaganda channel for state actors. In the case of cyber-bullying, it has even cost people their lives.

My real concern with misinformation labeling is the push to use automation, particularly Artificial Intelligence (AI) and machine learning (ML), to do it on a mass scale. This is also part of Harris’ warning. If you are not familiar with Scott Adams, he is the creator of the comic strip Dilbert, which is loved by IT professionals everywhere, and he did something interesting starting in 2020. On Facebook, he began to speak out about the misinformation labeling and blocking that Facebook had begun to do. But since he was aware that Facebook’s technology was using algorithms to look for accounts violating whatever free-speech policy it chose to have at the time, Adams would occasionally end a post with “…and here’s a cute puppy video” or “…and here’s a funny animal video,” et cetera, to throw off the algorithms.
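To make that tactic concrete, here is a minimal sketch of the kind of naive keyword-scoring filter that such padding could exploit. Everything in it (the word lists, the scoring scheme, the threshold) is my own invented assumption for illustration; it does not represent Facebook’s actual moderation system.

```python
# Hypothetical illustration only: a naive keyword-ratio content filter.
# The word lists and threshold are invented for this sketch; they do not
# represent any real platform's moderation system.

FLAGGED_TERMS = {"misinformation", "hoax", "conspiracy"}  # assumed "risky" words
BENIGN_TERMS = {"puppy", "animal", "funny", "cute"}       # assumed "safe" words

def flag_score(post: str) -> float:
    """Return the fraction of recognized words that look 'risky'."""
    words = [w.strip(".,!?…'\"") for w in post.lower().split()]
    risky = sum(1 for w in words if w in FLAGGED_TERMS)
    safe = sum(1 for w in words if w in BENIGN_TERMS)
    recognized = risky + safe
    return risky / recognized if recognized else 0.0

def is_blocked(post: str, threshold: float = 0.5) -> bool:
    return flag_score(post) >= threshold

original = "They call this a hoax and a conspiracy."
padded = original + " …and here's a cute, funny puppy video!"

print(is_blocked(original))  # True:  only "risky" words were recognized
print(is_blocked(padded))    # False: benign words dilute the score to 0.4
```

Appending innocuous phrases shifts the ratio below the blocking threshold, which is plausibly the effect Adams was counting on. Real classifiers are far more sophisticated than this, but keyword-stuffing and dilution attacks against them are well documented.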

He may not have stated it, but he demonstrated an understanding of how these algorithms actually tend to behave. Unlike the human brain, computers can only operate on the algorithms they have been given and the information they have to work with. As such, they aren’t actually a form of intelligence, because they ultimately degrade into swarming and tribe mentality. This is what happens when there is no conscience to act as a governor, or… when the conscience has been burned out (the warning against a “hard heart”). This means they become closed (or are intentionally closed) to outside influences and operate on their own understanding.

The behavior is not unlike the proverbial lemmings, who will literally jump off a cliff because the “tribe” is heading in that direction. I remember one of my relatives, when trying to explain to me why it was not a good idea to do something, putting it as, “Well, if everyone else is going in this other direction, why aren’t you?”

Mass, united mentality is never a good reason to go in a particular direction, unless it has been ruthlessly vetted. If you want to have some fun with this, try this exercise with a group. Propose the following question: “If someone were to pay you $1,000,000 to jump out of an airplane without a parachute, would you do it? And no… you can’t sacrifice yourself and have your surviving heirs inherit the money.” It is a really insane concept (except maybe to a handful of daredevils or extreme sports enthusiasts). Many people will laugh it off as such… unless they ask for one specific, simple clarification: is the plane flying? Now “jumping out” has a completely different meaning, since the plane is on the ground. If the plane is large (like a 737), the fall to the ground might be enough to break a bone or two, but it is highly survivable. Is that risk worth the $1,000,000? It’s still a personal decision, but the risk is far less than when the plane is airborne.

And it is amazing how many people draw a conclusion without asking that simple question. So it is with artificial intelligence. The computers in the 1979 incident couldn’t reach an accurate conclusion about whether the missiles were actually in the air: they had only been programmed by people with what those people knew. We can give AI and ML context to a certain extent, but not “a flash of inspiration.” What struck me in the video was that the operators at NORAD, when they saw the indications of the attack, actually asked the same question as in the exercise above: “Are they (the missiles) actually flying?” That is vetting information, which is needed for a true understanding, and it saved the world from going to war. The airplane question opened up the possibility of earning $1,000,000 and keeping it for your own use. The missile question stopped a disaster from occurring. Both questions surfaced an option that would otherwise have been missed, radically changing the outcome.

That same issue arises in regulating social media today. People’s ignorance, or just downright evil hearts, have created the need to control (or, better, keep people accountable for) what they say on social media. It will be an ongoing challenge that’s here to stay. The problem all boils down to the Book of Proverbs (15:7, CSB): “The lips of the wise broadcast knowledge, but not so the heart of fools.” Our hearts are a uniquely human quality that computers can, at best… poorly mimic.

Finally, I strongly recommend reading this blog post for a deeper understanding of Sydney J. Harris’ quote. Keeping our humanity and its influence is more important than ever.

Note: If you are interested in another near-war incident in 1983, with a similar human factor in the outcome, I recommend this video: The Nuclear Close Call of 1983.