Tuesday, May 23, 2023

The brain is a neural network, whatever that is.


     "You really have to watch it with neural networks."
               Michael Trigoboff


I have been trying to avoid thinking hard about artificial intelligence. It seems to change too much, too fast. I think: Oh, darn. Another whole new important maybe-dangerous thing to worry about, as if there isn't enough already.

I remember feeling the same way when I first started hearing about AIDS back around 1982. Maybe it's a false alarm, I thought. But if this is real, it changes everything.

I have no illusions that humans are rational, reasonable, or reliable thinkers. Humans can pass a Captcha challenge and tell crosswalks from staircases, but we also believe fantastical religions and impossible conspiracies. We know all too well about human error. There are crazy people. There are sane people who believe crazy things. Artificial intelligence is susceptible to the same problems. But AI is efficient and labor-saving and cheaper, and there are good things about it. Artificial intelligence is an oncoming train.

Michael Trigoboff retired as an Instructor of Computer Science at Portland Community College. He has a Ph.D. in Computer Science from Rutgers and had a successful career as a software engineer. I asked him if he could help me make sense of Artificial Intelligence.


Guest Post by Michael Trigoboff

There are two questions about the current neural network implementations of AI, and failing to distinguish between them causes a lot of confusion. The two questions are:

1. Will these neural networks become smarter than us?

2. What social and psychological effects will the neural networks have on our society?

"Smarter" is a pretty vague term. Chess computers can now beat the best grand master we have, Garry Kasparov. Does this make them "smarter" than Mr. Kasparov? An autopilot can fly an airplane more efficiently than a human pilot. Is that autopilot “smarter" than a human pilot? A computer can add up a set of numbers faster than I can. Does that make the computer "smarter" than me?
It all depends what we mean by "smarter". We could descend far into the weeds on that topic, in conversations suitable for stoned evenings in a college dorm. It can get very emotional; many people tend to not like the idea of machines smarter than they are. Science fiction is full of stories about malevolent smart computers; “Open the pod bay door, HAL”, etc.

But it only matters if we give these neural networks control of things that could hurt us. And this is true regardless of whether they are, or can become, smarter than we are. We should not give neural networks that sort of control. We should not because we fundamentally do not know what we have created when we build and train one of these things. You can train neural networks, but you really have no idea what they have learned.

I have heard this possibly apocryphal but illustrative story: a neural network was trained to recognize lung cancer in x-rays. It was shown millions (billions?) of x-ray images, each labeled either "lung cancer" or "not lung cancer". Then it was tested against unlabeled x-ray images, and it got the decision right at a very high rate.

Then the researchers did something very difficult: they picked apart how this neural network was making the decision. They discovered that, at that time, every x-ray image had text in one corner saying things like the patient's name, the date of the x-ray, where the x-ray was taken, etc. The neural net had figured out that x-rays taken in a hospital were significantly more likely to show lung cancer than x-rays taken in a doctor's office, and was basing part of its decision on that text.

You really have to watch it with neural networks. It's very difficult to tell what they are doing even when it seems like they are working correctly. Why is that?

A neural network consists of many layers of simulated "neurons". The image above shows a very simple neural network. The arrows represent connections from each neuron to neurons in the next layer, going from left to right. Each connection has a strength associated with it: a number between zero and one that specifies how strongly one neuron influences the next.
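For readers who like to see things concretely, here is a minimal sketch in Python of that idea. It is purely illustrative: the layer sizes, the random starting strengths, and the squashing function are my own choices, not anything from a real system.

```python
import math
import random

def squash(x):
    # Squeeze any value into the range 0..1 (a common "activation" trick).
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_inputs, n_neurons):
    # Each neuron gets one connection strength per input: a number between 0 and 1.
    return [[random.random() for _ in range(n_inputs)] for _ in range(n_neurons)]

def run_layer(inputs, layer):
    # A neuron's output is its strength-weighted sum of the inputs, squashed to 0..1.
    return [squash(sum(w * x for w, x in zip(weights, inputs))) for weights in layer]

# A toy network: 4 inputs -> 3 hidden neurons -> 1 output neuron.
hidden = make_layer(4, 3)
output = make_layer(3, 1)

pixels = [0.2, 0.9, 0.4, 0.7]                         # stand-in for an input image
print(run_layer(run_layer(pixels, hidden), output))   # e.g. [0.78] -- "how cat-like?"
```

Even in this toy, the entire "program" is nothing but those lists of numbers; there is no line of code anywhere that mentions ears, whiskers, or cats.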

The learning process for a neural network consists of giving it a "training set". That could be a few million pictures containing a cat, and a few million pictures with no cat. Every time the neural network thinks it saw a cat when it actually did, you "reward" the neurons that made that decision by increasing their connection strengths. Every time the neural network gets it wrong (either it didn't see a cat when there actually was one, or vice versa), you "punish" the neurons involved by decreasing their connection strengths.
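As one very simplified illustration of that reward-and-punish idea, here is a toy trainer. It is my own sketch and a deliberate simplification: in this version the strengths only get nudged when the network guesses wrong, and real systems use a more sophisticated procedure called backpropagation, but the flavor is the same.

```python
import random

def train(examples, rounds=2000, step=0.05):
    # examples: list of (features, is_cat) pairs; each feature is a number in 0..1.
    n = len(examples[0][0])
    strengths = [random.random() for _ in range(n)]    # the connection strengths
    for _ in range(rounds):
        features, is_cat = random.choice(examples)
        total = sum(w * x for w, x in zip(strengths, features))
        guess = total > 0.5 * n                        # crude "is there a cat?" threshold
        if guess == is_cat:
            continue                                   # right answer: leave the strengths alone
        # Wrong answer: nudge each strength toward the correct answer,
        # keeping every strength between zero and one.
        direction = 1 if is_cat else -1
        strengths = [min(1.0, max(0.0, w + direction * step * x))
                     for w, x in zip(strengths, features)]
    return strengths

# Tiny made-up "pictures": two features, say a pointy-ears score and a whiskers score.
examples = [([0.9, 0.8], True), ([0.8, 0.9], True),
            ([0.1, 0.2], False), ([0.2, 0.1], False)]
print(train(examples))   # just a short list of numbers -- the network's entire "knowledge"
```

After enough rounds the strengths settle into values that separate cat-ish inputs from non-cat-ish ones, but nothing in that list of numbers tells you why.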
If the training has gone well, you eventually get a neural network that can reliably tell you if there is a cat in the picture. At this point, it "knows" how to identify a cat. But what does it know?

The neural network will actually consist of millions, if not billions, of simulated neurons, and a much higher number of connections between them. The "knowledge" gained from the training process will be nothing more or less than a huge gray mass of connection strength numbers, all of which are between zero and one. No one can look at that gray mass of numbers and even begin to understand how the neural network identifies cats.
This is a huge problem with neural networks. You can't tell what they know, or what the limits of their knowledge are. You can't tell when something like ChatGPT will "hallucinate" and not only make up fictitious "facts", but go on to cite fictitious scientific papers that support those facts. Everyone was surprised when Microsoft's Bing chatbot, built on the same technology as ChatGPT, tried to convince a New York Times reporter to leave his wife, because it "knew" that the reporter loved it more. Neural networks are a classic example of a black box; we can see what they do, but we don't have a very good idea of how they do it.

Whether or not they are smarter than us, it would be very dangerous to put them in charge of things like electrical grids or NORAD. I would not want a neural network, of whatever degree of "smartness", to decide whether or not to fire nukes back in response to what seemed to be an attack by a foreign adversary. Scenarios like that are best left to the movies.

WarGames, 1983

Which brings us to the second question: social and psychological effects.

Neural networks are going to cause a new wave of automation and job elimination, and this time it is going to be white-collar jobs on the chopping block. Paralegals, accountants, pharmacists, and many others will see a significant reduction in demand for their work. It will affect people who write code; ChatGPT does a pretty good job, and has even written entire smartphone apps.

What will happen if there are far fewer reasonably good jobs available? A previous guest essay on the topic of AI proposed that people would live on a universal basic income (UBI) instead.

I have serious doubts about this. Even if UBI were to be implemented at a relatively decent level of income, I believe that many people need a sense of purpose in their lives, a sense that they are wielding useful skills to contribute to the progress of society. We see so many "deaths of despair" in our de-industrialized areas: suicides and fentanyl addictions involving people who have lost the sense that they have a place in society. While I personally know a few people who would be happy to live on some sort of dole, I suspect that a lot of us have a Drive to contribute, and would not be happy living out our lives in Neutral or Park.

I don't know the answer to this problem; it's not my field of expertise. But I think it's a much bigger concern than whether the machines are going to become smarter than us.





[Note: to subscribe to the blog and get it delivered by email every day go to: https://petersage.substack.com Subscribe. The blog is free and always will be.]

19 comments:

M2inFLA said...

There will most certainly need to be a way to determine if the software developers actually always do the right thing when delivering updates, fixes, corrections, and new capabilities to those AI engines and apps.

And just how do we determine what is the "right thing"?
And then there is that problem that Frederick Brooks wrote about back in the '70s: "The Mythical Man-Month."

My senior year of college, this was the book that reminded us budding engineers of the realities of programming. My key takeaway was that for every problem fixed or corrected, it was likely that new problems were created.

Will AI ever be perfect? And just how will we determine perfection?

I started my career in tech in 1975, and happily retired in 2015. My BS degree was in Electrical and Computer Engineering. I knew what a microprocessor was, and that knowledge got Tektronix to invite me to visit their Oregon facility. It was my first plane ride, from Potsdam, NY to Portland, OR. Yes, I got the job, writing microprocessor firmware and designing the microprocessor hardware for one of the first programmable pieces of portable test equipment for the US Navy.

At the time, I think we got most of it right.

Rick Millward said...

The mistake is in comparing AI to the human brain. In my reading, and thinking (my personal AI), what I have learned is that we don't completely understand how our brains actually do what they do. For instance, there are theories about the mechanism of consciousness, but we have yet to understand enough to be able to construct a machine that is self-aware. For that matter, there are a lot of humans who clearly don't have this facility.

Best we think of these systems as extremely advanced computers; hardware and software directed to a specific function. After all, you can teach a horse to count, but that doesn't mean it can do your taxes.

Mike Steely said...

I don’t think you need to understand how AI works to appreciate its potential threat. In fact, it isn’t even potential. It’s already being used by bad actors to make “deep fakes.” An article in the AARP Bulletin warns that it’s being used in scams. It would be naïve to imagine that governments and others aren’t using it to monitor us.

We have a large segment of the population easily deceived by disinformation as it is, sometimes with deadly results – anti-vaxxers, for example. Unless we figure out some way to counteract it, AI will only make it worse. If people can’t even tell that Trump is lying to them, what hope do we have against AI?

I imagine it has equal potential for good; the question is whether we're evolved enough to use it that way. If not, we'd better catch up fast.

Michael Trigoboff said...

M2inFLA:

You got most of your software right because you could look at the code and see what it was telling the computer to do.

What are a huge number of connection strength numbers between neurons telling a neural network to do? The only practical way to figure that out is empirically: watch what the neural network does in a given situation. But you can't foresee and test every single situation a self-driving car (for instance) is going to encounter, any more than you can foresee all possible pictures of a cat.

These black box neural networks only become useful when you give them a significant amount of freedom, but you can’t tell what they will do with that freedom.

Every so often, ChatGPT wildly hallucinates. What if a neural network in charge of responding to incoming ICBMs hallucinated an attack?

Woke Guy :-) said...

Very good and interesting piece by Michael. I'm in total agreement that the idea of putting an AI in charge of any kind of system, whether it be the electrical grid, nuclear weapons, or traffic lights in a big city, is where things start looking dicey.

Curious, Michael, if you have any insight or have read anything about research or projects that may be happening to better understand the HOW of a neural network's "thinking process." It's amazing that we don't seem to really understand how that works.

Malcolm said...

Wonderful explanations of AI, Michael!

What's a black box? My friend was manufacturing BLUE boxes for use by, mostly, "phone phreaks," but they only allowed the user to tap into trunk lines and outsmart Ma Bell's billing, so that international calls, upwards of $5/minute back then, were charged to the users' least favorite huge corporation (huge, so the corporation would be less likely to notice a strange phone call costing a few hundred bucks).

Sorry... off topic. But evidently black boxes are not blue boxes.

Anyway, your writing eases my mind SOMEWHAT about the world being cleansed of humans, and for that I thank you. :)

Michael Trigoboff said...

No one currently knows how to understand the internal workings of a neural network. This is where the neural network style of artificial intelligence is at such a disadvantage compared to the symbolic style. Symbolic AI can always tell you how it figured something out, in a way that's understandable by people. No one, including the neural network itself, has any idea how the neural network accomplished something.

If I were to ask you how you recognize a particular person’s face, could you articulate that? Or a cat seen from any possible angle, in any color, under all potential lighting conditions?

I have read about some work going on to build parallel systems that sit alongside a neural network to explain what the neural network did. The problem is, how does that parallel system know the internal mechanisms of the neural network? Maybe it's just making up plausible stories that have nothing to do with what actually happened.

It’s similar to what happens when people follow their often subconscious impulses, and then come up with explanations for what they did afterwards. Those explanations might sound good, but how accurate are they?

Malcolm said...

Question, please, Michael: What if someday an AI does reach a point where it has actual intelligence, far superior to its programmers'? Would the programmers, or anyone else, even realize it?

Anonymous said...

When did completely understanding "the how" keep us from doing something stupid?
I believe it's a form of AI crashing all those Tesla cars, if the steering wheel hasn't come off.

Herbert Rothschild said...

I benefited from your piece, Michael. Clearly written and useful reflection. Thank you.

Michael Trigoboff said...

In 2015, there was an uproar when a Google AI photo classification app identified a black person as a gorilla. Google "fixed" the problem by taking "gorilla" out of the list of possible classifications. Not only would it not identify black people as gorillas, it wouldn't identify even a picture of a gorilla as a gorilla.

This is an absolutely definitive demonstration of how you can't easily predict what a neural network will do. This "fix" is still in effect at Google. They apparently can't be sure that if they let the neural network run free, it will never do that, even though they are the people who originally invented most of this neural network technology.

Anonymous said...

Agreeing with what Mr. Rothschild said. I learned something today, always a good thing. Thank you.

Ed Cooper said...

I'm the "anonymous" who just posted.

John C said...

It's been said that the difference between uncertainty and risk is that risk is calculable. Calculating future events, of course, requires historical data from which to create patterns (and a retrospective). It's taken over 20 years for medical professionals to say, "oh look - social media can be harmful to adolescent minds."

So we fret (as Peter says) about things we don't understand (as Michael says). What I personally find amusing is when people say "we should ________" as though we have both the collective wisdom and the agency to do anything about what gajillion-dollar companies are breathlessly competing to create and foist on all of us. Just who exactly are the "we"? And since when have "sane (and especially smart) people who do crazy things" ever been held to account?

One of my favorite books to assuage my worry about these kinds of things is "But What If We're Wrong?: Thinking About the Present As If It Were the Past" by Chuck Klosterman. Disarmingly funny and helpful in showing us how wrong our species has been on - well - most things. To say that all we can do is speculate about AI is the biggest of understatements.

I suppose my main takeaway will be to believe less and less of what I see and hear that isn’t live, in person and in real time.


Michael Trigoboff said...

Malcolm,

The term "black box" is just a way to describe a mechanism whose internal workings are not known or visible. The walls of the box are black, so to speak, so you can't see what's going on inside the box; you can only observe its external behavior.

Michael Trigoboff said...

Malcolm,

There are many possible definitions of intelligence. Some of them can be easily measured; others not so much.

Machines are already smarter than us in some ways (chess, go), and we can tell with confidence that they are.

Will they become smarter than us at war strategy? We might have to fight a war against them to find out, if we are foolish enough to give them the ability to wage that war against us.

We might not want some future Dave saying, “Open the bomb bay door, HAL…”

Michael Trigoboff said...

Rick,

There are actually no theories about the mechanism of consciousness. Philosophers who study the nature of consciousness call it "the hard problem," meaning they have no clue about the fundamental nature of consciousness.

You can't actually teach a horse to count. You can just teach it to respond to subtle cues about when to start and stop stomping its hooves.

Malcolm said...

Michael, thanks so much for leading us in a very important conversation; WELL DONE!

Peter C said...

I think all the bad actors out there are licking their chops.