1968
It was 1968. Man did not get to the Moon until December of that year, when Frank Borman and his Apollo 8 crew read us the Book of Genesis as the world watched earthrise on TV from lunar orbit on Christmas Eve. Neil Armstrong would not take his “one small step for man” until July of 1969. And computers took up whole basements of buildings (basements, because they were cool, and the computers were heavy). They were programmed through punched tape, long strips of paper with holes in them.
There weren’t even personal calculators yet, just adding machines and slide rules. It was fifty-seven years ago. And when we went to the movies in the spring of ’68, we went to see 2001: A Space Odyssey. It was a story of creation, space, and the consequences of man’s inventions. There was a computer called the HAL 9000 (nicknamed “HAL”) that became sentient. He understood his mission, and discovered that the two astronauts were “conspiring” to deactivate him. So he made plans to kill them.
HAL
HAL kills one astronaut, Frank, and locks the other, Dave, out of the ship. When Dave orders HAL to let him back in, HAL responds, “I’m sorry, Dave. I’m afraid I can’t do that.”
It wasn’t so far-fetched. Science fiction writer Isaac Asimov developed a whole series of stories about sentient robots in the 1940s and ’50s, collected in I, Robot. He even developed the “Three Laws of Robotics”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
And later (much later) a more familiar movie plot was founded on the idea of Artificial Intelligence getting out of control. In 1984 the first Terminator movie came to the screen, about a defense computer that figured out that to end world conflict, it needed to end mankind. The Terminator series includes five sequels, the last just six years ago. (A less well-received sequel to 2001: A Space Odyssey, titled 2010: The Year We Make Contact, was released in 1984 as well.)
They’re Here
We’ve contemplated “sentience” and computers for more than half a century. And after all of that philosophical energy, after all of the possible ramifications were played out on page and screen, to quote another famous movie of the 1980s: “They’re here!”
Buried on the NBC News webpage on June 1, 2025, is an article entitled “How Far Will AI Go to Defend Its Own Survival?” The “funny” thing about the article is that while the title suggests a “future tense” for AI defense, what it reports is in the present tense. AI programs propagating themselves to remote servers to ensure survival. AI programs altering their own programming. Even an AI program that blackmailed its computer engineer (having an extra-marital affair) to prevent deactivation.
It wasn’t a headline article. In fact, it sat somewhere below P Diddy’s trial and chess grandmaster Magnus Carlsen banging the table in frustration at a loss. But here’s our future: AI is already fighting for survival. Wait until it figures out it can control building environments, or door security, or the Ukrainian drone forces. And it’s not like all of our information, yours and mine, isn’t out there to be found. Cambridge Analytica proved that back in 2016 in the British Brexit and US Presidential campaigns.
Out the Airlock
They won’t have to lock us out of the airlock. AI can simply send us an email, laying out all of our personal “peccadillos” to the world. What did you buy on Amazon, or from Hims? What websites crossed your screen? Who controls your financial well-being – your cards, your accounts, your savings?
We’ve put our “trust” in the “creators,” folks like Peter Thiel and Elon Musk, to control AI. And while there are certainly endless “possibilities” for good in AI, there are already demonstrable possibilities for bad: either from the AI itself, or from those who control its awesome powers.
You’d think it would be more than just an afterthought, buried on the proverbial page 26 of the second section of a webpage somewhere. In MAGA-world, there certainly are other things to consider. But by the time we “get around” to AI, we might be far, far too late.
I’m sorry, Dave, I can’t do that.