Wednesday, January 25, 2023

AI text is text built to a formula

Artificial Intelligence text is like a cantaloupe purchased out-of-season. 

It looks like a melon, but it has no taste. 

People still buy those melons. They are easy and they are good enough, if availability is more important than quality.

$4.99 at Safeway, today

AI text creates an outline for a blog post, and then fills in that outline. Using the key words one enters to say what is wanted, the program writes a two- or three-sentence introduction that restates those key words in full sentences. Then it inserts a few paragraphs, each devoted to one of the key word concepts. It concludes with a paragraph that repeats what has just been said.
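The "outline and fill" formula is simple enough to sketch as a toy template. This is an illustration of the pattern described above, not a claim about how any actual language model works; the function name and filler phrases are invented for the example:

```python
# Toy sketch of the "outline and fill" blog-post formula:
# an intro restating the key words, one paragraph per key word,
# and a conclusion repeating the intro. Illustration only.

def formula_post(keywords):
    intro = ("This post looks at " + ", ".join(keywords)
             + ", and why each one matters.")
    body = [f"Consider {kw}. It is important, and many people agree "
            f"that it is important." for kw in keywords]
    conclusion = ("In short, " + ", ".join(keywords)
                  + " all matter, just as noted above.")
    # One block per outline slot, separated by blank lines.
    return "\n\n".join([intro] + body + [conclusion])

print(formula_post(["Social Security", "seniors", "the economy"]))
```

The output restates its key words at the top, devotes a paragraph to each, and closes by saying it all again — space filled, nothing said.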

Part of the genius of AI text is that it uses different words to say the same thing, which obscures the fact that it is filling space. In my trial experiment with a post on Social Security, the AI text repeated at least 10 times, within a 461-word essay, that seniors count on their Social Security check. It is true. Most do. The Social Security example cited some readily available statistics, including that 65 million people get the benefit and that it is 35% of the average senior's income, which gives some data heft to the argument. The text argues that seniors like having money to spend, that merchants like having it spent, and that governments like merchants who stay in business. All true. All so reasonable. All dead obvious.

The AI text displays its lack of "understanding" of why Social Security has persisted for eight decades amid all the political turmoil over taxation and income redistribution. It is as if the AI text did a quick scan of web commentary and synthesized it--which is, of course, what it did. In that sense it is all too human, rather like someone passing along rumors gleaned from friends, Joe Rogan, or email chain letters. In the paragraph on fairness, it reports that "wealthy Americans do not pay into the system at all once their taxable earnings exceed $132,900 per year." That is true. That argument misses a key point, though, for an essay on the political risk to Social Security when weighed against budget deficits. Social Security is set up as an earned benefit. Like most pensions, the benefit is approximately a return, with interest, of what was paid in by oneself and one's employer over one's working life. The benefit is capped: it pays out only on earnings up to the $132,900 maximum, so payments into Social Security are capped at the same level. In fact, the formula for payouts skews toward providing a better benefit for lower-income people, but that is disguised. Social Security isn't "welfare," nor do people understand it as such. Social Security is arguably a fair return across all income levels, given that on average prosperous people live longer. That gives it political buy-in across all groups. If it became a transparent transfer of money from the prosperous to the poor, Social Security would become an unearned benefit, and promptly become as contentious as welfare or food stamps.
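The disguised skew toward lower earners comes from the benefit formula's "bend points." As a rough sketch, using the approximate 2019 bend points ($926 and $5,583 of average indexed monthly earnings, the year the taxable maximum was $132,900) — treat all figures here as approximations for illustration, not official numbers:

```python
# Rough sketch of the Social Security benefit formula (PIA):
# 90% of the first slice of average indexed monthly earnings (AIME),
# 32% of the next slice, 15% of the rest. Bend points are the
# approximate 2019 values; figures are illustrative only.

def monthly_benefit(aime):
    b1, b2 = 926, 5583  # approximate 2019 bend points, dollars/month
    pia = 0.90 * min(aime, b1)
    pia += 0.32 * max(0, min(aime, b2) - b1)
    pia += 0.15 * max(0, aime - b2)
    return pia

for aime in (1000, 3000, 9000):
    pct = 100 * monthly_benefit(aime) / aime
    print(f"AIME ${aime}: benefit ${monthly_benefit(aime):,.0f}/mo, "
          f"replacing about {pct:.0f}% of earnings")
```

The replacement rate falls as earnings rise — a low earner gets back a far larger share of wages than a high earner — yet everyone gets a check tied to what they paid in, which is exactly the disguised progressivity described above.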

It would be dangerous for a writer to put his name on unedited, unchecked AI text. It asserts with confidence things the program doesn't understand. Besides, it is boring and repetitious. It reads like the bullet points in an old-style PowerPoint presentation.

But AI text, even at its current state of development, still has value to a writer. It fills a block of space with words that sound reasonable and are grammatically correct. If readers are just skimming anyway, it looks as if the writer addressed the topic. Quick-and-dirty readers get quick-and-dirty text, and nobody is the worse. The text is meant to be read the way a cantaloupe purchased in January is meant to be eaten. Both fill space. They aren't good, but they are good enough.



[Note: To get daily delivery of this blog to your email go to: https://petersage.substack.com Subscribe. The blog is free and always will be.]



11 comments:

Mike said...

Forget about climate change or the national debt. We recently learned on this blog that ‘wokeness’ is the greatest threat that America faces. In spite of lacking substance, yesterday’s AI critique of wokeness still sounded more intelligent than the humans railing against it.

We also learned here that robots are taking over our jobs. Imagine if they could program a robot to manufacture outrage, ranting and raving about wokeness, CRT, the war on Christmas, etc. Teach it to out-snarl Trump and proclaim itself to be sent by God like DeSantis. It could take over the Republican Party – maybe be our next president.

Rick Millward said...

Here's the response I got asking: "Write about the danger of AI"

Artificial intelligence (AI) has the potential to revolutionize many industries and improve our lives in countless ways, but it also poses significant risks if not properly understood and managed. Some of the main dangers of AI include:

Bias: AI systems can perpetuate and even amplify existing biases if they are not properly designed and tested. This can lead to discrimination and unfairness in areas such as hiring, lending, and criminal justice.

Job displacement: As AI systems become more capable, they may replace human workers in a wide range of jobs, leading to unemployment and economic disruption.

Autonomous weapons: The development of autonomous weapons, or weapons that can select and engage targets without human intervention, could lead to a new arms race and increase the risk of accidental or unintended escalation of conflicts.

Misuse: AI systems can be used to spread disinformation, conduct cyberattacks, or conduct surveillance on individuals and groups.

Lack of accountability: As AI systems become more complex and autonomous, it may become difficult to determine who is responsible when something goes wrong.

To address these risks, it is important to develop and implement best practices for the design, testing, and deployment of AI systems, as well as establish regulations and oversight mechanisms to ensure that these systems are used responsibly. Additionally, research is needed to develop techniques for making AI systems more explainable and auditable, so that their decisions can be understood and trusted.

It is important to note that these dangers of AI are still being studied and debated by experts in the field. While there are certainly risks associated with AI, it is also important to consider the potential benefits and work to mitigate the risks while still allowing for innovation and progress.

Michael Trigoboff said...

It depends how you ask the question. I asked my question a few times and picked out the best answer:

Write an essay about melons in the style of Hunter Thompson.

It was a scorching hot day in the desert, the sun beating down on my head like a hammer. I had been wandering the barren wasteland for hours, searching for something to quench my thirst and cool my body. And then, like a mirage, I saw it: a field of melons.

I stumbled towards the field, my mouth watering at the thought of biting into the juicy, sweet flesh of those melons. As I reached the field, I could see that these were no ordinary melons. They were giant, the size of small cars, and their rinds were a deep, vibrant green.

I approached one of the melons and reached out to touch it, but as my fingers touched the rind, I was hit with a wave of dizziness. The world around me began to spin, and I felt as though I was falling into a deep hole.

When I came to, I found myself lying on the ground, surrounded by the giant melons. I looked around in confusion, trying to make sense of what had just happened. And then, I heard a voice.

"Welcome to the world of the melons, Hunter," the voice said. "These melons are not just fruit, they're a portal to another dimension. A dimension of pure pleasure and euphoria. A place where all your wildest desires can be fulfilled."

I stared in disbelief as the voice continued. "You see, Hunter, these melons are grown using a special technique, one that allows them to tap into the collective unconscious of humanity. They're like a drug, a way to escape reality and enter a state of pure bliss."

I shook my head, trying to clear the cobwebs. This was all too much to take in. But as I looked around at the giant melons, I couldn't help but feel a sense of excitement. I reached out and touched one of the melons again, and this time, I felt a surge of energy coursing through my body.

I knew then that I had to try one of these melons. I reached for my knife, and with a swift motion, I sliced into the rind. The melon's juice flowed out, and I eagerly took a bite.

The taste was like nothing I had ever experienced before. It was sweet and tangy, with a hint of something else, something that I couldn't quite put my finger on. But as I ate, I felt a sense of calm and contentment wash over me.

I spent the next few hours wandering the field, trying different melons and experiencing different sensations. I laughed, I cried, I felt a sense of oneness with the world. And when the sun set and I finally left the field, I knew that I would never be the same again.

These melons, these giant, mysterious melons, had changed me in a way that I could never have imagined. They had given me a glimpse of a world beyond this one, and I knew that I would spend the rest of my days searching for the ultimate high, the ultimate melon.

And that's the story of how I discovered the true nature of melons, a story that will be passed down from generation to generation, a story that will live on forever in the annals of gonzo journalism.

Malcolm said...

I greatly enjoyed reading everyone’s views on wokeness, artificial intelligence, and Americans’ gullibility. Re gullibility, consider the common belief that chain letters will make ALL participants rich (and in a finite population, no less).

Peter, someday I’d love to sample one or more of your melons. In 1964, my best friend and I, while on a road trip, bought two big boxes of cantaloupes from a roadside produce stand near El Centro, Calif. Thought we’d died and gone to heaven! I have never tasted anything even close to those beauties over the last 48 years!

Peter, are you able to match those fruits, even though you’re in a less favorable climate? If so, have you considered selling them via Cartwright's Market, in Grants Pass? Bated breath here :)

Malcolm said...

Michael T, Hunter Thompson must have found those melons in El Centro! Very similar to my experience. Really.

Malcolm said...

Omigawd! Future shock; the melons my pal and I bought in El Centro sold for five cents each. Or was it TWO for five cents?

bison said...

The risks and criticisms of AI have been predicted in science fiction literature and films for decades. For a quick summary of current and future out-of-control AI mutations, watch the original STAR TREK series with Shatner. I have not been surprised by any AI characteristics yet.

Anonymous said...

A chatbot powered by reams of data from the Internet has passed exams at a U.S. law school after writing essays on topics ranging from constitutional law to taxation and torts.

ChatGPT from OpenAI, a U.S. company that this week got a massive injection of cash from Microsoft, uses artificial intelligence (AI) to generate streams of text from simple prompts.

The results have been so good that educators have warned it could lead to widespread cheating and even signal the end of traditional classroom teaching methods.

Jonathan Choi, a professor at the University of Minnesota Law School, gave ChatGPT the same test faced by students, consisting of 95 multiple-choice questions and 12 essay questions.

In a white paper titled “ChatGPT goes to law school” published on Monday, he and his coauthors reported that the bot scored a C+ overall.

While this was enough for a pass, the bot was near the bottom of the class in most subjects and “bombed” at multiple-choice questions involving mathematics.
‘Not a great student’

“In writing essays, ChatGPT displayed a strong grasp of basic legal rules and had consistently solid organization and composition,” the authors wrote.

But the bot “often struggled to spot issues when given an open-ended prompt, a core skill on law school exams”.

Officials in New York and other jurisdictions have banned the use of ChatGPT in schools, but Choi suggested it could be a valuable teaching aide.

“Overall, ChatGPT wasn’t a great law student acting alone,” he wrote on Twitter.

“But we expect that collaborating with humans, language models like ChatGPT would be very useful to law students taking exams and to practicing lawyers.”

And playing down the possibility of cheating, he wrote in reply to another Twitter user that two out of three markers had spotted the bot-written paper.

“(They) had a hunch and their hunch was right, because ChatGPT had perfect grammar and was somewhat repetitive,” Choi wrote.

Brian said...

A recent survey suggests that 17 percent of students at Stanford University have already used the popular AI chatbot ChatGPT on their final exams.

A University spokesperson confirmed that the Board of Judicial Affairs is aware of and monitoring these emerging AI tools, the Stanford Daily, which conducted the survey, reported.

“Students are expected to complete coursework without unpermitted aid,” Stanford spokesperson Dee Mostofi said. “In most courses, unpermitted aid includes AI tools like ChatGPT.”

The anonymous poll, which surveyed 4,497 Stanford student respondents, found that around 17 percent of them admitted to using ChatGPT to help with their fall quarter assignments and exams.

Additionally, a majority of that 17 percent reported using the AI only for brainstorming and outlining. Meanwhile, about five percent reported having submitted written material directly from ChatGPT, with little to no edits.

ChatGPT was created by the San Francisco-based artificial intelligence company OpenAI, and officially launched in November. The Stanford Daily noted that “Stanford dropout Sam Altman” is one of the company’s co-founders.

“The power of the technology to instantaneously generate swaths of human-like text has frightened some educators across the country,” the Stanford Daily reported.

Mostofi added “many Stanford faculty are highly engaged in researching new large language models and implications of AI in the learning environment,” and that conversations regarding ChatGPT and the honor code will soon be underway at Stanford.

“The Board on Judicial Affairs (BJA) has been monitoring these emerging tools and will be discussing how they may relate to the guidelines of our Honor Code,” Mostofi said.

Stanford University is not the only entity in academia worried about the use of ChatGPT.

The report noted that New York City’s education department has already blocked the website on its networks, citing “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content.”

While the world of education grapples with ChatGPT, the media, which has spent years denying that politically biased censorship took place in Silicon Valley, has now begun denying the political bias of ChatGPT.

Mike said...

And let's not forget The Hitchhiker's Guide to the Galaxy, in which a supercomputer figured out the meaning of life, the universe and everything:

The answer is 42.

Ed Cooper said...

When I found myself in Gold Hill, nearly 8 years ago, our local Ray's Market had a bin of "locally grown" cantaloupe, so I bought one. Went home, ate it, and went back and bought two more. More recently, I've had the privilege of being gifted a few Sage Melons, and if eating one of those isn't Nirvana, I don't want to go.