Are we outsmarting ourselves by inventing artificial intelligence (AI) technologies that are expanding at an exponential pace, outrunning our human intellect, our work roles, and even our ability to keep them from taking control of our lives?
Will recent innovations such as the "Generative Pre-Trained Transformer," including Microsoft Bing's ChatGPT that I discussed in my previous column, become the most transformational societal development since the printing press, as former Secretary of State Henry Kissinger, former Alphabet Executive Chairman Eric Schmidt and MIT Schwarzman College of Computing Dean Daniel Huttenlocher predict?
Optimists among us believe that AI-assisted human intelligence will ultimately provide us with near-magical tools for alleviating suffering and realizing human potential.
Some holding this vision foresee that super-intelligent AI systems will enable us to comprehend presently unknowable vast mysteries of the universe, and to solve humanity's most vexing problems such as disease, natural resource depletion and world hunger.
Others warn that when humans create self-improving AI programs whose intellect dwarfs our own, we will lose the ability to understand or manage them.
One of the best-known members of this camp, Elon Musk, has called super-intelligent AI systems "the biggest risk we face as a civilization," comparing their creation to "summoning the demon."
Perhaps one of the greatest fears regarding "artificial" intelligence supplanting the "real thing" (our general human intelligence) is that it will lead to the imminent obsolescence of our humanity.
The brilliant science fiction writer Arthur C. Clarke warned about this in his 1956 novel "The City and the Stars," which imagined a future world where immortal humans wanted and needed to do nothing because every aspect of life was anticipated and provided by the "Central Computer."
Although that masterful computer could create holographic realities for individual humans to inhabit and even store digital versions of dead people so that they could slumber until called back to life, it robbed everyone of individual purpose and meaning in the bargain.
Clarke's vision projected a dystopian future occurring 2.5 billion years from now.
Perhaps his timetable colossally underestimated the Central Computer's rapid machine-learning curve, with generative AI systems currently doubling in capability every few months.
Clarke was by no means the first to alert us to the monstrous consequences of meddling with human nature. Mary Shelley's famous 1818 novel "Frankenstein" warned of just such an experiment gone terribly wrong, conducted by Dr. Victor Frankenstein, a well-meaning scientist.
Despite his good intentions and deeds, the beleaguered creature's actions were always misinterpreted. Even after he rescued a young girl from drowning, the public assumed he was trying to murder her.
Even the miscreant's mastermind came to fear the inhuman beast. Dr. Frankenstein lamented:
"I was started from my sleep with horror; a cold dew covered my forehead, my teeth chattered, and every limb became convulsed: when, by the dim and yellow light of the moon, as it forced its way through the window shutters, I beheld the wretch — the miserable monster whom I had created."
Nevertheless, the monster had some legitimate justification for claiming superiority over his mortal detractors. He said:
"I was not even of the same nature as man. I was more agile than they and could subsist upon coarser diet; I bore the extremes of heat and cold with less injury to my frame; my stature far exceeded theirs."
Perhaps we shouldn't entirely blame him for some immodesty. After all, if the monster were truly so hideous, why would we keep trying to reinvent superior versions of ourselves in the first place?
Take, for example, AI-powered self-learning machines and automata that do much of what we do, and often do it much better. Why else would we ever trust our lives to self-driving cars that navigate by Google map tracking?
And what about the likely future of really smart brain implants and bioengineered artificial DNA that enable us to evolve (or perhaps devolve) into an updated post-human variant of a rapidly obsolescing precursor model?
Frankenstein-like theories regarding fears and fortunes of technological monsters are subjects of eternally contentious scientific, technological, philosophical and public policy debate ... as they should be.
Shelley, the monster's real-life creator, recognized this natural tendency to fear what we do not understand two centuries ago.
She wrote, "Nothing is so painful to the human mind as a great and sudden change."
So finally, will our human story end in a tragedy of Frankenstein proportions? Or instead, will we advance and evolve with marvelous new capacities to attain presently unfathomable superhuman accomplishments?
In either case, there is no way to turn back the clock of progress, where even Einstein's space-time continuum takes on a new dimension of meaning. Unlike the speed of light, computational intelligence has no known theoretical limits.
The good news here might be that while intelligent machines and updated versions of ourselves might not share our current values, they might also lack tendencies toward hostility, another frequent expression of animal emotion.
The bad news is that if we only succeed in creating super-intelligent psychopaths, creatures without moral compasses, we probably won't remain their masters for long.
________________
Larry Bell is an endowed professor of space architecture at the University of Houston where he founded the Sasakawa International Center for Space Architecture and the graduate space architecture program. His latest of 12 books is "Architectures Beyond Boxes and Boundaries: My Life By Design" (2022).