Tuesday, 28 March 2023

AI's moral deficit?

Robert J. Marks on AI's Glaring Errors


Robert J. Marks contributed a piece at The Daily Caller this week on artificial intelligence, ChatGPT, and the manifold problems of new AI systems like Google's Bard and older ones such as Amazon's Alexa. Dr. Marks directs Discovery Institute's Bradley Center and is the author of the recent DI Press book Non-Computable You. Despite the confidence Big Tech executives express in new AI, it makes glaring mistakes, although Marks believes AI has genuine uses and benefits. Snapchat's chatbot "My AI" advised someone posing as a disgruntled teenager on how to hide the smell of pot and alcohol. Microsoft's Bing bot professed its love for a tech journalist. A Google app made egregiously racist errors. And ChatGPT is politically biased despite claiming neutrality.

Marks writes, 
Many warn of the future dangers of artificial intelligence. Many envision AI becoming conscious and, like SkyNet in the Terminator franchise, taking over the world (This, by the way, will never happen). But make no mistake. LLMs are incredible for what they do right. I have used ChatGPT many times. But user beware. Don't trust what an LLM says, be aware of its biases and be ready for the occasional outlandish response.

Robert Marks, "Marks: From Political Bias to Outlandish Dares, Here's Why Robots Cannot Replace Us," The Daily Caller
Marks encourages readers to try out ChatGPT and come to their own conclusions. Be sure to read the rest of his article here.


