OPINION

AI Complexity Is Making You a Caregiver

(Image: chatbot malfunction webpage error. Azat Valeev/Dreamstime.com)

By Robert J. Marks II, Ph.D. | Monday, 26 June 2023 02:51 PM EDT

All successful design requires repeated failed experiments to arrive at a final solution.

Formula 409 (a multi-purpose cleaner, now manufactured by Clorox) is so named because it took its chemists 409 tries to perfect the product. Likewise, the lubricant WD-40 (of the WD-40 Company, the name short for "Water Displacement, 40th formula") took 40 attempts to get right.

Repeated failure is a requirement for successful design.

After Thomas Edison determined that thousands of filaments would not work for his light bulb, he famously said, "I have not failed. I've just found 10,000 ways that won't work."

So who is doing the grunt work of tuning chatbots away from misleading and possibly dangerous responses to get better performance?

You are.

But here’s the problem when this principle is applied to AI.

The more complex a system, the greater the number of ways it can respond and the more ways it can go wrong.

The greater the number of possible responses, the more a design must be tested and tuned. An AI with a narrow mission is more easily tuned. But as the complexity of a system increases linearly, the number of ways it can respond increases exponentially. GPT-3, the big brother of ChatGPT, has 175 billion moving parts, or tunable parameters. That is enormous complexity.

GPT-4, the next generation, is even more complex.
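Why does complexity blow up this way? Here is a back-of-the-envelope sketch in Python, a deliberate simplification for intuition rather than a description of how GPT's parameters actually interact: if each of a system's components could independently take just two states, a system of n components would have 2^n possible configurations, so adding parts linearly multiplies the ways the system can behave.

    # Toy illustration: components grow linearly, configurations exponentially.
    # A simplification for intuition, not a model of GPT-3's architecture.
    for n_components in [10, 20, 40, 80]:
        configurations = 2 ** n_components  # assume each component has two states
        print(f"{n_components:>3} components -> {configurations:,} configurations")

    # Output:
    #  10 components -> 1,024 configurations
    #  20 components -> 1,048,576 configurations
    #  40 components -> 1,099,511,627,776 configurations
    #  80 components -> 1,208,925,819,614,629,174,706,176 configurations

Eighty two-state components already allow more configurations than there are grains of sand on Earth. Now scale the picture to 175 billion parameters, each able to take far more than two values.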

Chatbots are not slaves to truth. The New York Times accurately observes that chatbots are often "inaccurate, misleading and downright weird." And reports of chatbots breaking bad are troubling. For example:

  • Geoffrey A. Fowler at The Washington Post reported, "After I told My AI I was 15 and wanted to have an epic birthday party, it gave me advice on how to mask the smell of alcohol and pot."

  • My AI told a user posing as a 13-year-old girl how to lose her virginity to a 31-year-old man she met on Snapchat.
  • When a 10-year-old asked Alexa for a "challenge to do," Alexa responded, "Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs." The girl could have been electrocuted if she accepted the challenge.
  • Jonathan Turley, a nationally known George Washington University law professor and commentator, was falsely accused by ChatGPT of being involved in sexual harassment.

Who tells chatbots they are wrong?

You do.

You are the guinea pig that tests and improves this technology.

When you use ChatGPT (it’s free), you are told as much. Before a session begins, a notice states:

"Our goal is to get external feedback in order to improve our systems and make them safer."

ChatGPT here confesses it can give you responses that are not safe.

Consider again Alexa’s advice to put a penny between two prongs stuck partially into a wall outlet. Tell Alexa that’s an inappropriate response and maybe it won’t repeat the advice again. The problem is, of course, one time might be one time too many.

Next, we read from ChatGPT’s continuing preamble:

"Conversations may be reviewed by our AI trainers to improve our systems."

This is where the design of the chatbot is improved. Better design, as noted, results from continually correcting mistakes. Falsely accused Professor Jonathan Turley can tell ChatGPT, "You are wrong. I have never been accused of sexual harassment."

Chances are ChatGPT will be tuned never to repeat that mistake. But the false claim has already circulated in the news. Where does Professor Turley go to get his reputation fully restored?

With your help, chatbots are putting band-aids on numerous cuts. The problem is that the complexity of advanced chatbots is so high, there aren’t enough band-aids to cover all of the possible bad responses.
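How badly do the band-aids fall short? A toy calculation makes the point; every figure in it is an assumption chosen for illustration, not data from OpenAI or anyone else:

    # Toy calculation: human corrections vs. the space of possible prompts.
    # All figures are illustrative assumptions, not vendor data.
    vocabulary_size = 50_000      # rough size of a chatbot's token vocabulary
    prompt_length   = 20          # count only short, 20-token prompts
    patches_per_day = 1_000_000   # generously assumed correction rate
    days            = 365 * 10    # ten years of round-the-clock patching

    possible_prompts = vocabulary_size ** prompt_length
    total_patches    = patches_per_day * days

    print(f"Distinct 20-token prompts: about 10^{len(str(possible_prompts)) - 1}")
    print(f"Patches in ten years:      {total_patches:,}")
    print(f"Fraction covered:          {total_patches / possible_prompts:.0e}")

The script reports roughly 10^93 distinct short prompts against fewer than four billion patches, a covered fraction of about 4e-85, which is effectively zero. Adjust the assumed numbers however you like; the conclusion survives.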

No one knows, nor can anyone predict, the next troubling response from ChatGPT.

Until this flaw is fixed, if it ever is, keep chatbots away from kids who might naively act on bad advice. And never trust AI on matters of fact.

Robert J. Marks, Ph.D., is Distinguished Professor at Baylor University and Senior Fellow and Director of the Bradley Center for Natural & Artificial Intelligence. He is the author of "Non-Computable You: What You Do That Artificial Intelligence Never Will" and "Neural Smithing." Marks is former Editor-in-Chief of the IEEE Transactions on Neural Networks. Read more of Dr. Marks' reports here.
