Observations and commentary on American politics and culture. Now read by 3,000 people every day.
Tuesday, February 28, 2023
Snow
"Joe Biden's tenuous legitimacy"
"It is galling to be lectured about democracy by a man who took power in an election so sketchy that many Americans don't believe it was even real."
Monday, February 27, 2023
Pretext and willful blindness
"I see nothing. I hear nothing." |
KOBI "Five on Five" February 24, 2023 |
Sunday, February 26, 2023
Easy Sunday: Athletes
Some people make hard things look easy.
Some people do hard things that look really hard to do. Today's Easy Sunday post is about that second kind.
This looks hard to do. Cheerleading acrobatics.
https://www.youtube.com/watch?v=hlOWbdM9r_I
[Note: For daily delivery of this blog to your email go to: https://petersage.substack.com Subscribe. The blog is free and always will be.]
Saturday, February 25, 2023
Settler Colonial
"Land of the Empire Builders,Land of the Golden West;Conquered and held by free men,Fairest and the best."
Pioneer Woodsman, atop the Oregon Capitol
Americans of European heritage did not colonize what became the United States. We conquered it. We displaced the people who were here and felt proud of it. We chose the state song. "Hail to thee, land of heroes, my Oregon."
Thoughts on one way to look at Westward expansion of the United States
How things look, the photo opportunity part of staging an event, may be important in creating favorable first impressions, but words – spoken or written – are a keystone of communication. When a word or phrase crops up in reading that runs counter to that which we “know” from past use, red flags go up. We either ignore the strange words or try to understand what the author is attempting to say.
It keeps bugging me. When I pick up my Oregon Historical Society Quarterly, a journal of Oregon history, or read one of my favorite magazines, High Country News, the label “colonial” or “settler colonial” crops up. Mostly it is used to describe some event or trend in the westward expansion of the United States. Sometimes the term is attached to here-and-now or recent practices such as logging timber.
So where did the term “settler colonialism” originate? Scholars trace it to two Australians, Patrick Wolfe and Lorenzo Veracini. They coined the phrase, then applied it as one way to analyze history. Veracini, in a 2013 journal article, characterized the concept as “an ongoing and uncompromising form of hyper-colonialism characterized by enhanced aggressiveness and exploitation.” By the 1970s, Veracini says, scholars began interpreting the concept as bringing with it “high standards of living and economic development.” The “invasion” of Indigenous land was “a structure, not an event”; settler colonialism “destroys to replace” (Wolfe, 2006). In contrast to the domination and exploitation practiced by external colonialism, settler colonialism overwhelmed and inevitably tried to extinguish the Indigenous population by pushing them to the margin (Veracini, 2013).
Wolfe, in an earlier paper, argues the “goal (of settler colonialism) was elimination of indigenous people.” Veracini observes that in North America “...Indians did not give up their land quickly, easily or entirely.”
From that launch in Australia, the settler-colonial interest of academia grew. There’s a quarterly online academic journal titled Settler Colonial Studies. Its purpose is “…to respond to what we believe is a growing demand for reflection and critical scholarship on settler colonialism as a distinct social and historical formation. We aim to establish settler colonial studies as a distinct field of scholarly research.”
Many of us grew up learning that the “colonial period” of American history coincided with the planting of British colonies, beginning with the Roanoke “Lost Colony” in 1587 and ending with the conclusion of the Revolutionary War (1783). Colonies by definition were dependent on the mother country and under the political control of that country. Using the “old” colonial definition, except for the Hudson’s Bay Company settlements in what is now Washington state, perhaps some Spanish mission settlements in California, and the fur-trade-era Russian North American outposts, there’s nothing colonial about the history of the U.S. West.
Clearly, HBC came west under a charter granted in 1670 by Charles II, King of England. That was an age of colonization. European nations were issuing charters to businesses and companies of settlers setting up shop around the globe. Those same nations were claiming sovereignty over lands with little or no regard for resident native peoples. For the Americas it began in earnest with Christopher Columbus’s 1492 voyage of discovery. Colonies sprang up, peopled by Europeans beholden to the investors in their companies and dependent on protection from the chartering government. Global trade drove it all. Tea from East India. Gold from Central and South America. Spice from the Far East. Furs from North America. HBC built fur-trading posts across the Pacific Northwest from 1820 to 1850 and hired Native Hawaiians to run its many farms growing food for residents of those posts.
Westward expansion of the United States had its origins in the 13 British colonies, even before the Revolutionary War ended England’s rule. George Washington as a young man was surveying – and buying – land to the west. After all, the London Company’s 1621 Virginia charter claimed the Ohio River country and implied a claim west to the Pacific Ocean, constrained by 31 degrees latitude on the south and 40 degrees on the north. That’s roughly from present-day Florida on the south to Pennsylvania on the north, westward to the Pacific coast.
Virginia colonists often had little regard for the Indians. After a conflict in 1622, colonists the next year invited Indians to a peace treaty celebration. One report says perhaps 50 Indians were shot and 200 poisoned. Bands of colonists prowled the countryside destroying the Indians’ cornfields. That war continued off and on for 10 years. Virginia would again launch a war on Indians in 1666, and again with the Maryland colony in 1675.
When the colonial era ended with the 1783 Treaty of Paris, the western boundary shrank to the Mississippi River. By the time the U.S. purchased France’s Louisiana Territory in 1803, historians say Americans actually outnumbered the French as residents within the sparsely settled lands stretching north to Canada. The non-native population was estimated at 60,000, perhaps half of them enslaved laborers on plantations along the lower Mississippi River.
The Library of Congress, which authors teaching materials on U.S. history, breaks eras down this way: Colonial Settlement 1600s-1763; American Revolution 1763-83; New Nation 1783-1815; National Expansion and Reform 1815-1880; Civil War and Reconstruction 1861-77; Rise of Industrial America 1876-1900; Progressive Era to New Era 1900-1929; Great Depression and WWII 1929-1945; Post-War United States 1945-68. We can add the Cold War 1947-91, the Vietnam War and Civil Rights 1954-75, an Energy Crisis 1973-80, the Internet 1995-present, and the War on Terror 2001-present.
Despite having an academic journal to its name, there’s no place to establish a “settler colonial” period after the American Revolution. But that’s no reason to ignore the sometimes-troubling history of our nation and the settling of the Western United States.
How we write and teach our history makes a difference. So do the words we use to chronicle that history. Ray Raphael, author of several books on U.S. history, concludes his 2004 “Founding Myths” by observing:
“Americans, from the beginning, were both bullies and democrats. Despite the hesitancy of elites, most patriots at the time of our nation’s birth believed that ordinary people were entitled to rule themselves and fully capable of doing so. They also believed they had the right, and even the obligation, to impose their will on people whom they deemed inferior.
These two core beliefs are key to understanding American history and American character, and we do an injustice to ourselves and our nation when we pretend otherwise.”
[Note: For daily delivery of this blog to your email go to: https://petersage.substack.com Subscribe. The blog is free and always will be.]
Friday, February 24, 2023
Train wreck in Ohio
A missed opportunity for Democrats, Biden, and Buttigieg.
A Financial Advisor's perspective.
The clients of a Financial Advisor sometimes get disappointed. A stock loses value. A successful Financial Advisor gets through these rough spots with the relationship strengthened, not frayed.
I write again about Buttigieg because my expectations of him are high as a potential candidate and officeholder. He made the mistakes of a rookie Financial Advisor in the train derailment disaster. By the time Buttigieg got to the scene, it was no longer a message of empathy. It was a message of me-too catch-up.
He is certainly smart enough to be a good president. He may be too smart, and not empathetic enough. His instincts may be wrong.
There is a difference between the political parties as regards regulations on corporations. Republicans are proudly the party of fewer regulations, calling them burdensome job-killers. Trump said he would repeal two regulations for every one regulation the government imposed. Democrats are more favorable toward regulations. After all, it was Democrats who wanted mask and vaccination mandates. Democrats make the case that regulations protect us; Republicans make the case that regulations stifle us. Everyone understands which party is which.
The derailment accident was a perfect object lesson in the value of regulations to protect innocent bystanders. The exact particulars of the safety protocols and how many workers were on that particular train are excuses and quibbles. No one cares. Politically, one simple thing happened: A big corporation fought safety regulations, and then something bad happened in East Palestine.
Democrats tried explaining it, blaming Republicans. See? We are on your side. White House spokesperson Andrew Bates wrote:
Congressional Republicans and former Trump Administration officials owe East Palestine an apology for selling them out to rail industry lobbyists when they dismantled Obama-Biden rail safety protections as well as EPA powers to rapidly contain spills. Congressional Republicans laid the groundwork for the Trump Administration to tear up requirements for more effective train brakes, and last year most House Republicans wanted to defund our ability to protect drinking water.
Worthless words.
Buttigieg tried explaining that the real work was being done by EPA officials from the start. He justified not being there physically.
There's two kinds of people who show up when you have that kind of disaster experience: people who are there because they have a specific job to do and are there to get something done, and people who are there to look good and have their picture taken.
More worthless words. Is that factually true? No matter. It sounds like an excuse from someone who doesn't want to be bothered.
The optics of this disaster played out exactly as I feared when I wrote three days ago. Eastern Ohio is a White working-class area that gave 40-point margins to Trump. J.D. Vance, the new GOP senator from Ohio, said,
"These are sort of our people. It's a reasonably rural community. It's been affected by industrialization. These are the people who really lost when we lost our manufacturing base to China. And these are the people who are going to be forgotten by the media unless certain voices make sure that their interests are at the forefront."
A Financial Advisor would consider East Palestine to be a "good client" of Team GOP. Team GOP showed up to save that relationship. They held a media event with Donald Trump and the local mayor in East Palestine. Trump donated water bottles. They all criticized Biden for not caring about Ohio. They did not try to explain railroad safety rules. They tried to show that, in adversity, they were standing alongside local people.
A successful Financial Advisor does not get through a rough spot of client disappointment by sending a research report. One calls. One talks. One meets. A successful Financial Advisor attends the funeral of a client if at all possible.
I still have a Financial Advisor's perspective. Republicans' opposition to railroad safety regulations makes these incidents more likely, not less likely. No matter. The GOP people made a house call. These are our people and we want to be with family at a time of adversity. The Democrats phoned it in. They came with technicians on the ground and written statements arguing they were right all along. Everything I experienced over a 30-year career as a Financial Advisor tells me the GOP keeps the account.
Thursday, February 23, 2023
Does AI Chat love me? Another view.
A computer scientist writes:
"Computers are not smart or conscious yet, and will not be in any foreseeable future."
Maybe I was fooled.
Last week I wrote about a conversation between a journalist and an Artificial Intelligence "being" named Sydney. Sydney was a combination of smart and silly in the immature way of an impulsive teenager, but it was conscious, I wrote. Or it simulated consciousness as well as any human does, or as well as did my Golden Lab, Brandy. Yesterday I posted a comment by John Coster, who took a theological approach to defining consciousness. Today I post the comment by another computer professional.
Michael Trigoboff is a member of the generation that created computers. He has a Ph.D. in Computer Science. He worked in industry and then as a professor of Computer Science at Portland Community College. He recently retired. He is open about his exploration of the nature of consciousness, with help from psychedelic medications, back when he was young.
Guest Post by Michael Trigoboff
What is the nature of consciousness? Of subjective experience? How can we tell if someone or something else is conscious?
These questions have puzzled and bedeviled philosophers for millennia. In our current era, philosophers refer to this as "the hard problem", meaning that they do not have even the first clue of an answer. The best analysis I have seen focuses on asking "What is it like?" to be a person, an elephant, etc. There are no clear answers to that question either. Apparently, what it's like to be a philosopher of consciousness includes a large component of frustration.
And now we have some new AI software: ChatGPT and Bing/Sydney. These new “large language models” behave as though they are manifesting sentient consciousness. They pass the Turing Test with flying colors. But it's way easier to convince a human that they are communicating with a conscious being than you might think.
In the mid-nineteen sixties, an MIT researcher named Joseph Weizenbaum created a program he called ELIZA. This program mimicked a Rogerian psychiatrist; it was built to reflect what you said back at you in that psychoanalytic style.
If you said to ELIZA, "I am feeling sad today", it would respond, "So you say you are feeling sad today". It did this through a small set of very simple rules, like substituting in the sentence the phrase "So you say you are feeling" for the phrase "I am feeling".
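To make the mechanism concrete, here is a minimal sketch of an ELIZA-style reflection rule in Python. It is illustrative only, not Weizenbaum's actual code; the patterns and responses are made up for the example.

```python
import re

# Tiny ELIZA-style rules: match a phrase, reflect it back in Rogerian style.
# (Hypothetical rules for illustration; the real ELIZA had a larger script.)
RULES = [
    (re.compile(r"i am feeling (.+)", re.IGNORECASE), "So you say you are feeling {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
]

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please tell me more."  # fallback when no rule matches

print(respond("I am feeling sad today"))  # -> So you say you are feeling sad today?
```

A handful of such substitutions is the entire trick; there is no understanding behind the response.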
ELIZA was neither conscious nor even very capable of carrying on a normal conversation. But if you stuck to the kinds of things you would say to a psychiatrist, it did a reasonable job of simulating its end of that sort of Rogerian interaction.
Once Weizenbaum finished writing ELIZA, he wanted to test it. This was back when people had secretaries, and he asked his secretary to talk to it. She started conversing with it and then asked Weizenbaum to leave the room because she had something personal she wanted to discuss with ELIZA. It apparently looked like free psychoanalysis to her.
If something as simple and dumb as ELIZA can fool someone into thinking that it's a conscious sentient being, it's not surprising that the new Bing or ChatGPT (which are much more complex and capable) can do it.
Bing and ChatGPT work by analyzing huge quantities of text from the Internet into next-word probabilities. Given a sequence of words, what's the most probable next word, based on that analysis? There's no consciousness behind that process, and these chatbots don't "know" anything. They just string word after word together based on the most probable next word.
It's amazing that a process this simple can look so much like a conscious sentient being; it's just a supercharged version of autocomplete. Large language models like this have been referred to as "stochastic parrots". What you're getting is nothing more than a probability-based distillation of all the sequences of words that the LLM was trained on.
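A toy version of that "probable next word" idea can be shown with simple word-pair counts. Real large language models use neural networks trained on vastly more text, but this made-up miniature captures the spirit of generating text one likely word at a time:

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny "corpus", then generate text
# by repeatedly sampling a likely next word. (Toy data, for illustration only.)
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

word = "the"
output = [word]
for _ in range(5):
    counts = following[word]
    if not counts:                      # no observed continuation: stop
        break
    words, weights = zip(*counts.items())
    word = random.choices(words, weights=weights)[0]
    output.append(word)

print(" ".join(output))                 # e.g. "the cat sat on the mat"
```

The output can look fluent, but nothing in the process "knows" what a cat or a mat is.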
We’re a very long way from reproducing anything like human intelligence. A lot of what is called AI these days (e.g. cars that drive themselves) might be more accurately described as Artificial Insects. Ants can “drive” themselves to and from their nests. Bees can do it in three dimensions.
Consider this: a Turing Machine is a little mechanism that moves around on a long tape, reacting to data recorded on the tape. Turing machines are important because they are mathematically equivalent to computers but are simple enough to be useful in proofs about what computers can and cannot do.
A ribosome is a cellular mechanism that moves around on a long tape (mRNA), creating proteins encoded by data on the “tape.” Ribosomes and Turing Machines seem pretty similar to me.
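For readers who have never seen one, here is a minimal Turing-machine sketch: a head moves along a tape, reading a symbol, writing a symbol, moving, and changing state according to a fixed rule table. The rule table below is a made-up toy (it flips bits and halts at the first blank), not anything specific from the post:

```python
# (state, symbol_read) -> (symbol_to_write, head_move, next_state)
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape, state="scan", pos=0):
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        pos += move
    return "".join(tape)

print(run(list("10110")))  # -> 01001
```

The ribosome analogy has the same shape: a simple mechanism stepping along a tape, with all the interesting behavior encoded in the rules and the tape.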
There are ~37 trillion cells in a human body, and ~10 million ribosomes in a human cell. Which means we each contain ~3.7 × 10^20 computer equivalents running in parallel (3.7 × 10^13 cells × 10^7 ribosomes per cell). And that’s just the ribosomes. Human intelligence and consciousness seem to be phenomena that emerge from that complexity.
The idea that we might produce something equivalent from even 10,000 computers running in parallel strikes me as unlikely. There’s a complexity barrier standing between AI and its goal.
The “neural networks” that power the current version of AI consist of layers of simulated neurons that connect to each other in a simple and unified way: every simulated neuron in one layer connects to every neuron in the next, repeated over millions or billions of simulated neurons.
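As a rough sketch of how uniform that wiring is, here is a tiny fully-connected network in plain Python. The layer sizes and random weights are toy values chosen purely for illustration:

```python
import math
import random

# 3 inputs -> 4 hidden neurons -> 2 outputs; every neuron connects to
# every neuron in the next layer. Random weights, purely illustrative.
layer_sizes = [3, 4, 2]
weights = [
    [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
]

def forward(inputs):
    activations = inputs
    for layer in weights:
        activations = [
            math.tanh(sum(w * a for w, a in zip(neuron_weights, activations)))
            for neuron_weights in layer
        ]
    return activations

print(forward([0.5, -0.2, 0.9]))  # two output activations
```

Scale those layer sizes up enormously and you have the shape of today's networks.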
This is nothing like the way the neurons are organized in a human or biological brain. The brain has distinct sub-organs and nuclei, all wired together in an amazingly complex way. Some people have said that a human brain is the most complex object that exists in this universe. Its organization makes the current neural networks look like simple toys by comparison.
We absolutely do not understand how the human brain functions. The AI neural networks only model the actions of simulated neurons hooked together in very simple structures. The neurons in human brains are wired together with a complexity that's beyond our ability to understand. And that's just the neurons. There are smaller cells in the brain called glia; they outnumber the neurons by orders of magnitude, and no one knows what their function is.
There is a small worm called C. elegans. Its nervous system contains exactly 302 neurons, and scientists have mapped all of their connections to each other and to the rest of the worm. Here's a description from the abstract of a scientific paper:
With only five olfactory neurons, C. elegans can dynamically respond to dozens of attractive and repellant odors. Thermosensory neurons enable the nematode to remember its cultivation temperature and to track narrow isotherms. Polymodal sensory neurons detect a wide range of nociceptive cues and signal robust escape responses. Pairing of sensory stimuli leads to long-lived changes in behavior consistent with associative learning. Worms exhibit social behaviors and complex ultradian rhythms driven by Ca2+ oscillators with clock-like properties.
No one knows how those 302 neurons are capable of producing this complex repertoire of behaviors; glial cells may be involved, but no one knows what their role might be.
To think that simply wired networks of large numbers of simulated neurons are going to be able to replicate, much less surpass, human intelligence is a combination of hubris and gullibility. It’s apparently easy for some folks to talk to ChatGPT or Bing/Sydney and come away thinking that they were speaking to something that was conscious; they have drawn an understandable but erroneous conclusion from that experience.
I was an AI researcher in the nineteen seventies. I finally left the field of AI firmly convinced that attempting AI was the appropriate punishment for committing the sin of pride: thinking that we could reproduce anything like human intelligence and consciousness with our current kind of digital computers.
Each of us experiences our own consciousness. This is the only way we can have knowledge of the presence or absence of consciousness. We cannot, barring very unusual circumstances or high doses of psychedelic drugs, directly experience the consciousness of another person. We are left with having to draw conclusions from, and generalize from, our own experience of consciousness.
Given that the only thing in this universe that I can verify the consciousness of is myself, and I experience myself as conscious, what reason would I have for concluding that anything else isn't conscious? Based on that thought, I believe that everything in this universe is conscious, although at various different levels depending on (perhaps) how complex that particular thing is.
Consciousness seems to arise, somehow, from complexity. The complexity of even the most complex current example of AI is still enormously less than the complexity of the human brain. They are not smart or conscious yet, and will not be in any foreseeable future.
That doesn't mean we cannot create AI software that does useful and potentially scary things for us. Just yesterday I heard a podcast about how neural networks have been trained to fly fighter planes in combat, and do it better than human pilots. That doesn't make them conscious or intelligent. Your thermostat keeps your house at a constant temperature; it's not conscious either.
[Note: For daily delivery of this blog to your email go to: https://petersage.substack.com Subscribe. The blog is free and always will be.]
Wednesday, February 22, 2023
Does AI Chat love me?
John Coster’s 44-year career has included developing dozens of global data centers for Microsoft and Lumen. He advises and has invested in digital tech startups, currently manages an engineering and technology innovation team for a national wireless carrier, and is co-inventor of five AI patents (filed). His recently completed graduate studies in Theology at Regent focused on morality in a technology-driven world.
The recent news and demonstrations of Microsoft’s and Google’s generative natural-language AI technology have triggered the kinds of big questions that in the past seem to have been relegated to undergraduate philosophy courses and the dusty bookshelves of theology schools.
I think the evolution of this new technology provides a fresh opportunity for us “moderns” to carefully reexamine our beliefs about two fundamental questions of our existence: What does it mean to be human? And why do we care so much? The terms ‘sentient being’ and ‘human parity’ have been bandied about with increasing frequency in AI circles for the last decade, but until now we haven’t faced the ethical implications of the real possibility of a machine with volition, and what that means for our future. This ain’t Sci-Fi anymore.
A recent experience helps me respond to those who are dismissive of AI’s technical merits and potential. About a month ago at work, I stopped to chat with a group of young post-doc data scientists, having just come from an AI Summit with leaders from Microsoft where they demonstrated GPT. These young scientists scoffed at the plausibility of what I had described. After all, they told me – they are on the leading edge of data science! The next day I received a mea culpa from one of them. They had dug into it and were indeed astonished – as we all are.
I began my unlikely journey with what is now called Ontological Engineering – essentially knowledge maps – back in 2009, when I was part of a start-up funded by BAE. In those early days, we had mathematicians and programmers develop rudimentary predictive models for engineering “Smart Cities,” which would enable infrastructure within communities to operate more as a holistic ecosystem. The main goal was to optimize system performance through predictability. Later, as we delved into human-machine intersections, we began to explore to what degree we could not only predict but influence human behavior – essentially data-driven social engineering. Some senior behavioral scientists in our cohort had researched the extent to which they could also manipulate what people believed was real, true, and good; they found it morally disturbing and left that research. I have thought a lot about that time as we’ve seen the power of digital technologies shape all of our worldviews. That ancient history was in 2014.
I have noticed that commenters on this blog can range from dismissive to hostile toward religion in general, and Christianity in particular. I’m confident there are many good reasons why that is. My hope in writing this is to offer one orthodox (small o) Christian theologian’s view of technology.
Most people in the West still believe that there is something inherently sacred about humans. It’s why we are outraged at injustice, especially against the vulnerable. If you think about it, that sacredness is at the center of the belief in “human rights”. Whether you are pro- this or anti-that, it almost always comes down to asserting a right. Where did this idea of “rights” come from? Some historians point to the advent of Christianity as a major inflection point in human history. It introduced the counter-cultural notion that people were intrinsically valuable, not merely subjects of rulers or useful masses that serve those in power. The revolutionary principle of Christianity was that every person, regardless of gender, race, intellect, social or economic status, or any other ability, is uniquely and equally made in the image of God. Only humans are God’s image-bearers and that makes each of us sacred in his sight. Being made in God’s image also means we have a spiritual part of us, an eternal soul, that transcends our physical existence. The core belief of ‘imago dei’ is what created the institutions (however flawed) for caring for orphans, public hospitals, charities, and schools – which did not exist apart from the elite before that time. The founders and early workers of those institutions were motivated by that core belief. Even if you are not a religious person, your moral belief in the distinct sacredness of people can be traced to early Christian thought. I suspect many will disagree, but I cannot find any other religion or philosophy with such an audacious claim – or that has had such a historical impact globally.
So that is (or should be) the Christian’s framework for what it means to be human. I need to be reminded of it myself (like loving my enemies), and to challenge other Christians to think about how their lives line up with that truth claim.
I agree with those who say that we are meaning-seeking beings who desperately need community, which we express through language. I think we feel threatened because we see the man-made machine as an imposter, hijacking the very thing that we know in our hearts makes us unique. It grates at our very notion of what it means to be human. It may some day be sentient (like Peter’s dog), but it will be neither sapient, nor transcendent.
[Note: For daily delivery of this blog to your email go to: https://petersage.substack.com Subscribe. The blog is free and always will be.]