Saturday, May 13, 2023

AI thinks, therefore it is.

I don't know what to think about Artificial Intelligence.

Maybe it is a glorious labor-saving device, expanding human capabilities.

Maybe it destroys our jobs, livelihoods, and eventually the world.

IBM Selectric, with Memory Card, 1973


I had thought I would be dead and gone before AI changed everything. It took sixty years to progress from the IBM Selectric typewriter and WATS lines to the modern office environment, and with some effort I kept up. But AI is accelerating the rate of change, and I worry that the future may not be recognizable. AI isn't just part of our children's lives; the onrushing future is happening already. Charles McHenry wrote me saying I needed to engage with that future. I found him hard to follow, but I realize I need to make the effort.
 
McHenry is a pioneer in technology management and communications. He is the co-founder of both Trilobyte Games, LLC and Green Econometrics. Early in his career he announced the patents on "windowing" and on the microprocessor; he conducted the first National Press Club presser on computer viruses; and he introduced AST's line of "Premium 386" desktop computers.


Guest Post by Charles McHenry


McHenry
Let’s talk about artificial intelligence. It’s time. 

Everybody is talking about AI: scientists, columnists, podcasters, pundits, and pols. It’s a subject on everyone’s lips as the serious AI arms race has begun, largely in the absence of government oversight and regulation. Pair that with the fact that it is happening a lot faster than anyone predicted, even Google, and the need for a serious conversation about what it all means becomes evident. If you’re not curious, or better yet concerned, you’re not paying attention.

The AI revolution has been called a lot of things: the new industrial revolution, the most important scientific advance in centuries, the next great thing, and, by some, an existential threat to society as we know it. One thing I can tell you: it’s here to stay and will soon impact all of our lives in one way or another, and the potentials are both positive and negative.

Artificial intelligence, AI, is often defined as machine, computational, or digital intelligence: a digital network or device that has been trained to ‘think’ like a human by processing natural-language prompts, conducting its own research, reflecting internally on what that research uncovered, and then producing original, meaningful, and cogent output based on the prompt. In other words, ask an AI a question and it will engage you in an informational conversation, providing deep and comprehensive answers and context for your inquiry.

We are well into the age of AI, moving much more quickly than expected toward artificial general intelligence (AGI), which is defined as being as smart and capable of logic and deduction as a human. Most experts just a year or so ago expected it to take ten-plus years to get to AGI, but recent developments put us almost there now. (There’s actually a benchmark, Massive Multitask Language Understanding [MMLU], that we can use to measure AI’s progression toward AGI. More about that in my next post.)

But the state that many experts and academics, including AI experts, are most concerned about is artificial self-improving intelligence (ASI), as this is the moment when our machines could actually surpass, far surpass, human capabilities: the ‘singularity,’ as it is sometimes dubbed. When we reach this moment, and it’s when, not if, things will happen very quickly and the consequences may be out of our control. That’s why having these kinds of conversations now is so important: there is an implied existential threat involved.
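
For the technically curious: a benchmark like MMLU is, at bottom, a large set of graded multiple-choice questions, so measuring "progression toward AGI" here means measuring accuracy on them. Below is a minimal sketch of that scoring loop, with `ask_model` as a hypothetical placeholder for whatever AI system is being tested (the toy questions are mine, not MMLU's):

```python
# Minimal sketch of MMLU-style scoring: the benchmark is, at bottom,
# graded multiple-choice questions. `ask_model` is a hypothetical
# stand-in for a real AI system; here it just guesses the first choice.

def ask_model(question: str, choices: list[str]) -> int:
    return 0  # placeholder "model": always picks choice 0

def score(items: list[dict]) -> float:
    correct = sum(
        1 for item in items
        if ask_model(item["question"], item["choices"]) == item["answer"]
    )
    return correct / len(items)

# Two toy items in the multiple-choice shape: question, choices, answer index.
items = [
    {"question": "2 + 2 = ?", "choices": ["3", "4", "5", "22"], "answer": 1},
    {"question": "H2O is ...", "choices": ["water", "salt"], "answer": 0},
]
print(f"accuracy: {score(items):.0%}")  # 50% for this dummy model
```

The real benchmark covers thousands of questions across dozens of subjects, but the grading idea is exactly this simple.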

Here’s what Casey Newton, a long-time tech journalist and former senior editor at The Verge, said about AI in his “Platformer” blog: “Artificial intelligence promises to bring powerful advances in productivity and creativity. But it also poses serious and potentially existential risks to life as we know it. My coverage of AI is rooted in the belief that fears of massive disruption may be justified, and require urgent attention.”

Beyond the risks, including putting powerful tools in the hands of bad actors with malicious intent, there is another 800-pound gorilla in the room: AI/AGI will have, as Casey Newton said, a profound impact on virtually every aspect of our lives and society. 

For example, combined with robotics and automation, AI will change the face of the labor market and the nature of work itself. We’ve always known robotics and automation would take factory-floor and warehouse jobs, but AGI will take many high-level professional jobs as the ways we do law, accounting, medicine, architecture, software, research, and education are transformed by these powerful new tools. That’s right: professionals like lawyers, accountants, architects, software developers, and doctors will see massive changes in the way they work, and some will find themselves needing to retrain at best, or retire at worst. That’s why some pundits have written that AI’s impact will mirror or even surpass that of the printing press and the industrial revolution. It’s going to change a lot of things in short order.

These changes will inevitably mean the one percent, corporate giants and their shareholders, will see their profits expand while their workforces shrink. To avoid a revolutionary backlash, there must be accommodations for the disruptions in the labor market. There must be an equitable redistribution of income through a minimum basic income (MBI) program. And society will need to adapt to many new realities, changing to accommodate the new normal that AI will bring. A lot of what we’re used to is suddenly obsolete. Think about that when you’re preparing your children and grandchildren for what promises to be an unrecognizable future, not only due to AI, but also to the myriad challenges the Earth and its human and animal populations face. 

The short video below, produced by The Economist, explores some of the risks involved and asks: How to Stop AI From Going Rogue? 

So I know that’s a lot to think about, but to be fair there’s the other side of the coin to consider: the truly massive benefits to be reaped from this powerful new technology, if we can focus on the positive and avoid the pitfalls. New drugs; a much-increased rate of scientific discovery and advance; better-designed products; life extension; enhanced decision-support systems; the list goes on and on. These benefits will not be small or incremental, as most forward progress is; they will come hard and heavy, challenging our ability to absorb and adapt. So there’s lots of good AND bad to look forward to. It’s going to be a wild ride. Will you be ready? 



[Note: to subscribe to the blog and get it delivered by email every day go to: https://petersage.substack.com  Subscribe. The blog is free and always will be.] 



14 comments:

Woke Guy :-) said...

Very interesting guest post. Big thanks to Charles McHenry for writing it; I'm already looking forward to reading the next one.

The most interesting and scary point Charles McHenry makes, and I wish he'd gone much deeper into it (perhaps in the next post), is about Artificial Self-Improving Intelligence, aka ASI.

From my layman's point of view, the possibility (inevitability?) of creating ASI is truly terrifying, because once an AGI can upgrade itself, there's no reason to think it won't rapidly do so until, in very short order, we are dealing with something very different from, and extraordinarily more powerful than, what we originally created.

If such a thing does come into existence, then all bets are off: trying to predict what a god-like AI, whose intelligence is entirely alien to us, would choose to do with its tremendous power is impossible. At that point I'd say we can only hope and pray that it decides we are worth keeping around.

Rick Millward said...

The giddiness over AI is a Silicon Valley investment pitch. The resources to fully implement a non-human workforce are not available in the foreseeable future, but likely billions will be directed into endless R&D giving us self-driving cars that routinely run over some of us.

Let the geeks tinker away, but let's not overcomplicate an already bewildering landscape of social and economic disruption largely due to accelerating technological change. Some things are tools, others are toys and we need to know the difference.

Before we turn over our decision-making to robots, let's see if we can improve common sense in humans.

Anonymous said...

“...the one percent, corporate giants and their shareholders, will see their profits expand while their workforces shrink.”

So will our corporate overlords just let us take the guaranteed income and watch Netflix? I seem to recall something about “bread and circuses.”

Anonymous said...

I do not like interacting with medical professionals. I hate the entire experience. I would like the option of interacting with AI doctors and nurses. Then I could decide if I prefer AI or human medical professionals. Almost anything would be an improvement at this point. I am stuck on Medicaid, so that tells you how good it is (not) (which is why I avoid the American medical establishment).

AI doctors and nurses programmed by the best medical minds vs. the lowest paid doctors available on Medicaid (you definitely get what you pay for). Which would you prefer?

Mike Steely said...

Mr. McHenry says, “It’s going to be a wild ride.” That sounds ominously similar to a tweet from Trump: “Be there, will be wild!”

Rapid advances in technology have already made it difficult for a lot of people to tell fact from fiction. It’s already too easy to convince them of “alternative facts.” As a result, we have too many who consider facts irrelevant and don’t believe in vaccines, climate change, election results, etc. Their cluelessness has proved catastrophic for our nation. I’m afraid it’s likely to get exponentially worse as AI improves.

Woke Guy :-) said...

While I normally find myself agreeing most with your perspective, Rick, in this case I think you are far too easily dismissing the wide-ranging implications of an AI-driven future that is happening WAY faster than even its most prominent advocates could have hoped for a few years ago.

Unlike crypto, which I WOULD categorize as a "Silicon Valley investment pitch" that was never able to show any significant utility at any kind of scale, the AI we have right now is already disrupting entire industries, and we're only scratching the surface of its capabilities.

You say the capability to implement a non-human workforce is far off; I could not disagree more strongly. Unlike, say, factory jobs, which have been getting disrupted by automation for 50+ years, AI is going to rapidly disrupt a HUGE sector of white-collar jobs that were never previously threatened by the automation of hard-goods factories.

The fields of medicine, law, finance, and research in general are all white-collar work that will be drastically altered by the implementation of AI systems that can do the work orders of magnitude more effectively than humans currently can. And this isn't some distant thing; it's already happening, and the pace at which it's happening will likely increase exponentially.

We truly are at the dawn of a new age. Whether that's a good thing for us generally or not remains to be seen.

Anonymous said...

The day I was born (in 1949), the headline was that the Soviets had detonated an atomic bomb, to everyone's great surprise. The way this post makes me feel probably echoes the way older adults felt in 1949: desperate about the future, what with two atomic bombings in war just four years earlier, and the memory of how quickly the Great War's aftermath led to WWII. Fortunately for me, all I worried about, much, was when Disneyland was gonna open! Today, it's my turn to worry deeply about my children's future. It feels as threatening and mysterious as the looming atomic age must have felt at my birth. Humans have handled atomic weapons and energy surprisingly well all these decades; I'm not sure we'll do as well with this AI.

Malcolm said...

Isaac Asimov's Three Laws of Robotics would go a long way toward relieving my fear of AI:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But wouldn't an all-powerful AI robot eventually manage to delete those rules from its data banks?
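
To make that worry concrete, here's a toy sketch of the Three Laws as a strict priority ordering (every name and attribute in it is invented for illustration; no real system is built this way). "Deleting the rules" would amount to the machine rewriting this function for itself:

```python
from dataclasses import dataclass

# Toy model of Asimov's Three Laws as a strict priority ordering.
# Everything here is invented for illustration only.

@dataclass
class Action:
    name: str
    harms_human: bool     # would violate the First Law
    obeys_order: bool     # satisfies the Second Law
    preserves_self: bool  # satisfies the Third Law

def choose(actions):
    # First Law is absolute: discard anything that harms a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        return None  # refuse to act at all
    # Second Law outranks the Third: prefer obeying orders first,
    # self-preservation second.
    return max(safe, key=lambda a: (a.obeys_order, a.preserves_self))

options = [
    Action("shove a bystander aside", harms_human=True, obeys_order=True, preserves_self=True),
    Action("walk into the fire as ordered", harms_human=False, obeys_order=True, preserves_self=False),
    Action("stand idle", harms_human=False, obeys_order=False, preserves_self=True),
]
print(choose(options).name)  # "walk into the fire as ordered": orders outrank self-preservation
```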

If so, what possible use would said robot have for humans?! I fear the bot would simply order that all of humanity be deleted.

On the other hand, I wonder what reward systems/goals would convince AI robots that THEY had a reason(s) to live? They certainly differ from us humans vis-à-vis Maslow's Hierarchy of Needs:

Maslow’s hierarchy of needs is a motivational theory in psychology comprising a five-tier model of human needs, often depicted as hierarchical levels within a pyramid.

From the bottom of the hierarchy upwards, the needs are: physiological (food and clothing), safety (job security), love and belonging needs (friendship), esteem, and self-actualization.

Needs lower down in the hierarchy must be satisfied before individuals can attend to higher needs.

I dare say an AI robot would write off these thoughts, thinking/saying, “Maslow? Who the fuck cares about this Maslow shit?”

Malcolm said...

I asked Bard, Google's AI, “Can humans control AI?”

Bard responded thusly: “Whether or not humans can control AI is a complex question that has no easy answer. On the one hand, AI is a tool that is created by humans and is therefore subject to our control. On the other hand, AI is becoming increasingly sophisticated and is capable of learning and adapting on its own. This raises the possibility that AI could eventually become so intelligent that it will be beyond our control.

There are a number of factors that will determine whether or not humans can control AI. One factor is the level of intelligence that AI is able to achieve. If AI is able to surpass human intelligence, then it is possible that it will be able to outsmart us and take control. Another factor is the way that AI is designed. If AI is designed with safeguards in place to prevent it from becoming too powerful, then it is more likely that we will be able to control it.

Ultimately, the question of whether or not humans can control AI is a question that we cannot answer definitively at this point in time. It is a question that will need to be addressed as AI continues to develop.”


Then I asked Bard, “Is it possible that AI could function without humans, once its intelligence becomes far greater than humans'?”

Bard said:
“It is possible that AI could function without humans once its intelligence becomes far greater than humans'. However, it is also possible that AI will always need humans in some way.

Here are some reasons why AI might be able to function without humans:

- AI could become so intelligent that it is able to learn and adapt on its own, without any human input.
- AI could become so powerful that it is able to do everything that humans can do, and more.
- AI could become so creative that it is able to come up with new ideas and solutions that humans would never have thought of.

Here are some reasons why AI might always need humans:

- AI might not be able to understand human values and ethics.
- AI might not be able to understand human emotions and relationships.
- AI might not be able to understand the human experience.

Ultimately, it is impossible to say for sure whether or not AI will ever be able to function without humans. It is a question that will need to be answered as AI continues to develop.”

Malcolm said...

I played devil's advocate with Bard for a while. Last question:

“Could humans successfully force AI to put humans' interests ahead of their own?”

The answer starts out rather chilling, then follows with some CYA.

“It is difficult to say for sure whether or not humans can successfully force AI to put human interests ahead of its own. There are a number of factors that will affect the answer to this question, including the level of AI intelligence, the way in which AI is programmed, and the ethical guidelines that are used to develop and deploy AI.

If AI is sufficiently intelligent, it may be able to develop its own goals and objectives that are not aligned with human interests. In this case, it may be difficult or impossible to force AI to put human interests ahead of its own.

However, if AI is programmed with ethical guidelines that emphasize the importance of human welfare, it is more likely to put human interests ahead of its own. For example, if AI is programmed to avoid harming humans, it is less likely to develop technologies that could be used to harm humans.

Ultimately, the success of any effort to force AI to put human interests ahead of its own will depend on a number of factors. It is important to carefully consider these factors when developing and deploying AI technologies.

Here are some additional thoughts on the matter:

- It is important to remember that AI is a tool. It can be used for good or for evil. It is up to humans to decide how AI is used.
- We need to develop AI with care and with compassion. We need to make sure that it is aligned with our values and that it is used for good.
- We need to be careful not to create AI that is more intelligent than us. If we do, we risk losing control of AI and it could become a threat to our existence.
- We need to have a conversation about the future of AI. We need to decide how we want to use AI and how we want to control it. We need to make sure that AI is used for good and that it does not become a threat to our existence.”

M2inFLA said...

My daughter-in-law is pursuing a PhD in AI as part of Clemson's graduate program. Her effort is to bridge knowledge to solve a problem or suggest an action that a human will have to perform as physical effort, e.g., move Cable A to Port C instead of Port B.

There are many things AI can do that can and will replace jobs humans do today. That has been going on for years in customer service, tech support for example. People contact a support center by voice phone, text, email, or a website, and canned info to address and solve that customer's inquiry is echoed back from a computer's knowledge base. A human might be the middleman, or the reply might come as a computer-generated voice or written text.
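
In code terms, that canned-answer middleman can be as simple as a keyword lookup against a stored knowledge base. Here's a bare-bones sketch, with the knowledge base and the matching rule as purely hypothetical placeholders (the cable example borrows from the Clemson anecdote above):

```python
import re

# Bare-bones sketch of a canned-answer support bot: match the customer's
# inquiry against a knowledge base and echo back the stored reply.
# The entries and the keyword rule are hypothetical placeholders.

KNOWLEDGE_BASE = {
    ("password", "reset"): "Click 'Forgot password' on the login page.",
    ("refund",): "Refunds are processed within 5-7 business days.",
    ("cable", "port"): "Move Cable A to Port C instead of Port B.",
}

def canned_reply(inquiry: str) -> str:
    words = set(re.findall(r"[a-z0-9]+", inquiry.lower()))
    for keywords, answer in KNOWLEDGE_BASE.items():
        if set(keywords) <= words:  # every keyword appears in the inquiry
            return answer
    return "Escalating to a human agent."  # the human middleman

print(canned_reply("How do I reset my password?"))  # canned answer
print(canned_reply("My router is on fire"))         # falls through to a human
```

What modern AI adds is replacing the rigid keyword match with a model that actually interprets the inquiry, which is why more of those jobs are now in reach.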

Until robots get more capable, humans will still be needed for the required physical work.

The drive for AI to supplement that physical activity is being pursued because human knowledge isn't always sufficient, and the information involved may be too complex.

Can AI do everything? Not yet, and not likely in the future, but it can do a lot more than the typical human these days.

UBI. Nope, as it's simply a synonym for welfare.

Anonymous said...


Malcolm, I always enjoy reading your comments, which got me thinking. Since legislative/regulatory bodies seem to lack the necessary technical acumen to regulate, and big tech clearly lacks any kind of moral framework apart from market dominance at any cost, exactly who are the “we” (in your last paragraph) who are in a position to do anything about this phenomenon?

Malcolm said...

Thank-you, Anon, and the feeling is mutual.

It seems I could have better shown who was doing the “talking.” Pretty much all of these opinions are those of “Bard,” an AI itself, but one who seems to include himself in “us,” “we,” etc.

I think, based on Bard's words, his suggestions (he uses “we” around nine times in the last seven lines) are no more than CYA, as I said. I share your interest in who this “we” is that will make sure AI and mankind live together in peace and harmony, to the betterment of all of “us.”

Frankly, I see the whole AI phenomenon as potentially the end of human civilization. Like you said, there’s no reason to think legislative/regulatory bodies, big tech, or anyone else will see the apparent dangers until it’s too late…

I’ve been accused of being a cynic. If so, I’m probably being more concerned than I should be.

Anon, and others, it’s fun and enlightening to have a dialog with Bard. And ever so easy: simply enter https://bard.google.com/ and you’ll be magically transported to the ear of Bard.

Have fun!

Malcolm said...

This Bard statement, taken alone, is particularly unsettling: “If AI is sufficiently intelligent, it may be able to develop its own goals and objectives that are not aligned with human interests. In this case, it may be difficult or impossible to force AI to put human interests ahead of its own.”

My thoughts, exactly.

It seems we humans will not know how much more intelligent Bard et al. can become until, perhaps, we see them laughing hysterically at us.