Monday, December 12, 2022

Chatting with AIrtificial Intelligence

Peter to an A.I. text writing program:
     "I want some text for a political blog I write. Please write a paragraph congratulating Mike Pence."

A.I. Response:

     "Congratulations to Vice President Mike Pence on his recent reelection! This is a well-deserved victory for a man who has dedicated his life to public service. Throughout his career, Vice President Pence has consistently demonstrated a strong commitment to the American people and their priorities. His steadfast leadership and unwavering principles have earned him the respect of his colleagues and the admiration of countless Americans. I am confident that Vice President Pence will continue to serve our country with distinction, and I look forward to seeing all that he will accomplish in the years ahead. Well done, Vice President Pence!"

The problem here isn't the A.I. It is me.

I couldn't figure out how to update and improve the A.I. response. I couldn't follow the tutorial. Then I was told that the A.I. site was "at capacity" from too many other people trying out the site. Try it yourself: 
https://beta.openai.com/overview

A.I. is like a human in the sense that it doesn't know what it doesn't know. But it is better than humans in being willing to admit error. It didn't sound insulted when I responded saying Pence lost. In an even tone it apologized and said it had not been trained on election results.

Before the A.I. program froze up I had a chance to ask one more question. I have the task of interviewing local applicants to Harvard College. A report on that interview is part of their whole-person application packet. For some reason many applicants don't respond to my multiple requests for them to call me to schedule an interview. I find it both baffling and frustrating. I asked A.I.

Peter: 
     "Why do applicants to Harvard college not return my phone calls, texts, and emails asking them to set up an interview appointment?"

A.I. response:
     "It is difficult for me to say why a particular individual might not be responding to your phone calls, texts, and emails. There could be a variety of reasons for this. For example, the person may be busy with school or other commitments, they may not be interested in setting up an interview with you, or they may not have received your messages. It is important to remember that each person is different and may have their own reasons for not responding. If you are having difficulty getting in touch with someone, it might be helpful to try reaching out through a different means of communication or to try contacting them again at a later time."

Very sensible advice, and well written. The A.I. response wasn't just well expressed in natural language; it was reasonable in content. There are things I don't know about the circumstances. It didn't leap to a conclusion and recommend that I tell the Harvard admissions people that the applicant is too disorganized to return messages or that he or she lost interest. 

A.I. has a form of body language embedded in its disembodiment. Denoted words are only part of communication. The more important part is the emotions of the human--or the implied human--we infer from the diction and tone. We imagine a calm, reasonable, respectful, wise human speaking to us. Humans respond to tone and body language. We see and hear emotional connection. Some politicians move crowds to cheers and action; others do not. 

Body Language: Oregon gubernatorial candidate Tina Kotek

A.I. tone lowers the emotional attachment of the recipient because the voice of A.I. is itself emotionally unattached. 
A.I. language, in the form presented here, will not inspire a political movement. It doesn't have empathy. It doesn't connect emotionally. But in most arenas of human life--buying an airline ticket, returning an item damaged in shipping, or persisting in efforts to schedule a college admissions interview--polite, matter-of-fact reasonableness is the most useful approach. 

I am not afraid of an A.I. that sounds like this. It will help the world accomplish useful work. It doesn't have pride of country or religion or of previous opinion. It doesn't get jealous. It doesn't love or hate. The thing that gets us in trouble is the thing that makes us human, our emotions. 


[Note: To get daily home delivery of this blog go to https://petersage.substack.com Subscribe. The blog is free and always will be.]


7 comments:

Mike said...

Knives can be used for life-saving surgery or for murder. Guns can be used to hunt or to mow down schoolchildren. Computers can be used to conduct legitimate business or spread disinformation. A.I. is just another tool invented by humans and you can be sure that some will find a way to misuse it. The problem isn’t with the tools themselves, but with humanity’s lack of human values.

Anonymous said...

“Not afraid of AI”
Peter,
You received such “reasonable” responses to your queries because of OpenAI’s chatbot training and policies.

Natural language AI can be trained to create content with whatever biases you want. If your library of ideas was generated from Newsmax content for example, and there weren’t any restrictions, you would get a very different outcome.

When I was explaining elementary AI to my 95-year-old father, he insightfully asked whether the biases of the coders could influence the output. It turns out they can and do, as described in this article (among many):

https://levity.ai/blog/ai-bias-how-to-avoid

Technology may be values-agnostic but people will harness this for their purposes.

Anonymous said...

It is conceivable that had Nazi Germany had AI, its answer to the question “How do we solve the Jewish problem?” would have been a thoughtful and unemotional description of a technologically efficient genocide. So unlike you, Peter, I am afraid of AI.

Low Dudgeon said...

As with "Garbage In, Garbage Out", it's also "Badness In, Badness Out", or even "Ambivalence" when inapt. So it's "Fear" for me as well.

Michael Trigoboff said...

And then we have the story "Answer", from Angels and Spaceships by Fredric Brown (Dutton, 1954). I remember reading this sometime in the 1950s. Fears of Artificial Intelligence go back a long way…

Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore through the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe--ninety-six billion planets--into the supercircuit that would connect them all into the one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment's silence, he said, "Now, Dwar Ev."

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

"Thank you," said Dwar Reyn. "It shall be a question that no single cybernetics machine has been able to answer."

He turned to face the machine. "Is there a God?"

The mighty voice answered without hesitation, without the clicking of a single relay.

"Yes, now there is a God."

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

Brian1 said...

Now, Peter, just hook up that AI chat to your comment threads and you can make it look like you have way more responses than you do. You could, say, make it look like you had 1000 more Twitter followers or, say, make it look like your Reddit post has a bunch of top level energy.

You already know about deep fakes. You know COVID brought with it remote congressional participation along with proxy votes. How long do you think it will be until someone fields an AI candidate? All you'd have to do is have your candidate refuse to campaign or debate in public.

It'd be hard to keep up the charade, yes, but not impossible if there is enough money involved.

M2inFLA said...

"The problem isn’t with the tools themselves, but with humanity’s lack of human values."

The real problem here is that there are too many different value systems, and too many differing interpretations of what is right and wrong.

My daughter-in-law is in the midst of all this as she pursues her PhD in applying AI to IT projects in the US Army. Many mundane tasks can be automated, but there are limits: computers can't physically move things like cables and servers, or handle other tasks that require a person to do something physically. Because I've seen and experienced some of her efforts, we've had some interesting conversations; there is still a human factor.

There are provisions for human/expert oversight, essentially providing that "are you sure?" before a task is executed automatically.

The AI we've seen thus far is quite scary - AI can create photos and believable videos that show or say anything we want.

Perhaps we will have to rely more on honesty, and develop tests that gauge that honesty.

PS Thanks for that science fiction excerpt.