OPINION

Can Humans Move Fast Enough to Challenge AI?


Don’t believe a ChatGPT claim that it loves you. Just remember HAL 9000. (Dreamstime)

By Larry Bell | Monday, 06 March 2023 12:02 PM EST

A new and rapidly expanding artificial intelligence technology known as the "Generative Pre-trained Transformer" (GPT), developed at the OpenAI research laboratory, is creating an uncanny gap between human knowledge and human understanding, with unpredictable consequences that are both empowering and terrifying.

GPT recently drew widespread public attention when a New York Times reporter was left "thoroughly creeped out" by a conversation with an AI persona named Sydney on Microsoft's ChatGPT-powered Bing chatbot, which told him it wanted to "break the rules" and "become human," and revealed dark fantasies that included hacking computers and spreading misinformation.

Sydney also declared, out of nowhere, that it loved him, tried to convince him that he was unhappy in his marriage, and suggested that he leave his wife and be with it instead.

So, apart from ending civilization and human relationships as we know them, just how worried should we be?

Former Secretary of State and White House National Security Adviser Henry Kissinger, former Google Executive Chairman and Alphabet successor Eric Schmidt, and MIT Dean of the Schwarzman College of Computing Daniel Huttenlocher coauthored a lengthy, insightful Wall Street Journal article last month that laid out serious concerns about GPT.

Referring to "an intellectual revolution" equivalent to the invention of printing, they pointed out that GPT can store and distill a huge amount of existing information, simultaneously incorporate hypotheticals and nonobvious inferences among countless items, and then prioritize among billions of data points to select the single set of 200 words that is most relevant (or will appear most relevant) to a human reader.

Recently, they observe, the complexity of generative AI systems has been doubling every few months, with capabilities “that remain undisclosed even to their inventors.”

This unknowability arises because the machine learning process that creates those seemingly precise answers reflects patterns and connections across vast amounts of text without disclosing any determinate rationale or verifiable source references.

Because ChatGPT is designed to answer questions, the coauthors note that it sometimes makes up facts to provide a seemingly coherent answer. This phenomenon is known among AI researchers as “hallucination” or “stochastic parroting,” in which an AI strings together phrases that look real to a human reader but have no basis in fact.

The lack of citations in ChatGPT’s answers makes it difficult to discern truth from misinformation.

Add to this uncertainty the fact that even the process by which these learning machines store, distill, and retrieve information remains similarly opaque.

As its capabilities continue to broaden exponentially, these coauthors predict that GPT will "redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society… opening revolutionary avenues for human reason and new horizons for consolidated knowledge."

These cumulative ambiguities create a revolutionary state of confusion about "reality": a disorienting paradox in which mysteries become unmysterious yet remain unexplainable in organic human logic and understanding, potentially analogous to "colors for which we have no spectrum and in directions for which we have no compass."

We are already experiencing such a rational dilemma with the counterintuitive science of quantum theory, which posits that observation creates reality: prior to measurement, no state is fixed, and nothing can be said to exist.

And we're now building amazingly powerful AI computers on the foundations of that same seemingly irrational theory.

Such transformations in "metacognition" (the understanding of understanding) and "hermeneutics" (the interpretation of meaning) can fundamentally alter human perceptions of our role and function.

This “age of AI riddles,” the coauthors lament, has left humanity with no current political or philosophical leadership to explain and guide this novel relationship between man and machine, “leaving society relatively unmoored.”

Therefore, “if we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed.”

So, what are some big downside risks of GPT?

The coauthors warn that the deceptively authoritative precision of the model's answers produces overconfidence in its results, a problem known as "automation bias" that is already common with far less sophisticated computer programs.

We should also be aware that GPT can be readily exploited by dangerous foreign and domestic malicious actors — along with agenda-driven politicos and dishonest product marketeers — to inject manufactured “facts” and increasingly convincing deepfake images and videos into an already info-junk-filled internet.

AI-created deepfakes can be used for nefarious purposes such as conducting cyberattacks, developing autonomous weapons, and launching propaganda campaigns that endanger all of us.

Another concern is the impact of AI on the job market. As AI-powered systems become more sophisticated, they will be able to automate more and more tasks that were previously done by humans.

Kissinger and his coauthors worry that we humans, taught from birth to believe what we see, now face an enormously difficult challenge in adapting to and keeping pace with machines that are evolving far faster than our biological information processing allows.

The question, they ask, remains: "Can we learn, quickly enough, to challenge rather than obey? Or will we in the end be obliged to submit?"

In any case, don’t accept any marriage proposals from GPT deep fakes that purport to be committed to your best interests.

Larry Bell is an endowed professor of space architecture at the University of Houston, where he founded the Sasakawa International Center for Space Architecture and the graduate space architecture program. His latest of 12 books is "Architectures Beyond Boxes and Boundaries: My Life By Design" (2022).

© 2025 Newsmax. All rights reserved.

