"Regulating AI will be harder than regulating nuclear power. AI grew up in the wild."
New technology always seems Frankenstein-monster scary at first.
We are in the Frankenstein-monster stage.
*Tech billionaires at the Trump inauguration*
There is plenty of material for the writer of a political blog. Tulsi Gabbard and Donald Trump are saying "treason"; business people are dealing with tariff uncertainty; Ghislaine Maxwell is cutting some sort of deal with Trump; the U.S. dollar is down; the stock market is up; and the Portland Trail Blazers are making crazy trades. Amid all this, my college classmate Jim Stodder shared an observation about a technology that I expect will change the world as profoundly as the steam engine did.
Stodder teaches international economics and securities regulation at Boston University. He left school for a decade to knock around as a roughneck in the oil fields, then returned to formal studies and earned a Ph.D. in economics from Yale. His website is www.jimstodder.com.
Big Tech: Tired of Trump
Elon was the first to jump ship, but he will not be the last.
Before the election, the "tech bros" were aghast at the Biden administration's clear intent to regulate the hell out of AI. Watch the Instagram exchange between Marc Andreessen and Ben Horowitz as they recall with incredulity how Biden staffers told them that AI was a national security issue every bit as serious as nuclear power. So it would be regulated just as stringently, with basic research results fenced off as "state secrets."
As a result of such pronouncements, Andreessen and other tech bros decided to go all-in for Trump. We all saw Trump's inauguration seating chart. Why has this ardor started to cool? Why are people like Dario Amodei, CEO of Anthropic, calling for more regulation, not less? Let me advance several reasons based on what economists call "Increasing Returns to Scale."
Increasing Returns to Scale (IRS) means that when you double all the inputs, you more than double the output. Many people, with their instinctive distrust of the rich, think that’s always how Big Biz gets big, that everything works that way. It doesn’t. If it did, every industry would be dominated by just one gigantic firm – whoever got big first.
Virtually all firms – including ones based on AI – face a production function that looks like the letter "S". Collect all inputs into a single variable on the X-axis and put output on the Y-axis, and you get a giant "S" curve, tilted and stretched up and to the right. In the early stages, the curve grows steeper: output per unit of input is rising, and we have IRS. But about halfway up, output starts growing more slowly than inputs, and returns to scale begin to diminish.
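To make the S-curve concrete, here is a minimal sketch in Python. The logistic functional form and every parameter value are my own illustrative assumptions, not anything from Stodder's argument; the point is only the "doubling test" for returns to scale, i.e., whether doubling inputs more than doubles output.

```python
import math

def output(x, capacity=100.0, steepness=0.08, midpoint=60.0):
    """Illustrative S-shaped (logistic) production function:
    output accelerates at first, then flattens toward capacity."""
    return capacity / (1.0 + math.exp(-steepness * (x - midpoint)))

# The "doubling test": does doubling inputs more than double output?
for x in (10, 20, 30, 60, 90):
    ratio = output(2 * x) / output(x)
    regime = "IRS (ratio > 2)" if ratio > 2 else "decreasing returns (ratio < 2)"
    print(f"inputs {x:>3} -> {2 * x:>3}: output ratio = {ratio:.2f}  {regime}")
```

On these made-up numbers, the first three doublings more than double output (ratios of roughly 2.2, 4.3, and 6.0) while the doublings past the midpoint do not (about 2.0 and 1.1): exactly the tilted-S shape described above.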
Why AI Must Be Highly Regulated
1. A long IRS phase means small leads turn into much bigger ones.
2. The resources needed for "frontier"-level AI are unprecedented, with some CEOs predicting we will soon need data centers in the hundred-billion-dollar range. (See minute 18:20 in this Lex Fridman interview.)
3. Gigantic scale makes government control unavoidable, since:
--- a. Governments will have to help raise, protect, and ensure this investment.
--- b. The power of the AI-elite will make the robber barons look like small-town hustlers. Either the government controls them, or they own the government. I’m betting on the latter, at least for the medium-term.
4. The AI companies are starting to demand government regulation because:
--- a. It provides a screen against the anger of the public at this new concentration of wealth and power.
--- b. Regulation will reinforce the dominance of established U.S. firms like OpenAI, Anthropic, Microsoft, Amazon, and Google.
--- c. Investors want more predictability, not Trumpian chaos.
--- d. Given the deep concern of most AI experts about the human control and “alignment” of Artificial General Intelligence (AGI), an all-out “arms race” for AGI makes catastrophic outcomes more likely.
--- e. The AGI arms race with China is in full swing. This makes it harder for U.S. leaders to tell our own companies to tread more carefully. Nonetheless, we have no hope of "arms control" – persuading the Chinese to increase regulation and safety-checking – unless we are doing it with our own companies.
--- f. The computer power of AI is centralized – but the data it needs are everywhere. We have a massive opportunity for data sharing with our allies. This will require not just U.S. regulation, but U.S. laws for data privacy and protection – such as those the EU has been pioneering. If we want to compete with China, we need the full cooperation of all our former allies. Someone should tell Trump.
5. Regulating AI will be much harder than regulating nuclear power. Nuclear power was developed and initially provided by the federal government alone. AI grew up "in the wild." It will remain so unless it can somehow be corralled.
3 comments:
An acquaintance who works in AI told me about a year ago that the AI he works with was only somewhat helpful, since he had to check it for mistakes so often. About two months ago, he said, a new upgrade came out, and now he is ten times more productive. What he viewed as a bit of a burden is now essential to his work. It does all the tedious work for him, and what he pays $100 a month to use will soon cost $1,000 a month.
Given the competition that we are in with China, I don’t see how success in that competition will be compatible with any kind of government regulation. And besides, what would regulation even consist of?
Remember that even the experts who build these large language models do not understand how they work. They feel as though they opened a portal to a different universe and something came through it. But they don't know exactly what.
Suppose they regulate and restrict access to training data. You know who will not respect any such regulation? China. Does anyone in this country want China to win the AI competition? I certainly don’t.
I think that the toboggan has been pushed off the top of the hill, and we are now taking the ride. Here’s hoping we don’t fall off or cause an avalanche or some other disaster.
As to what regulation would look like, we can consider the NYT op-ed by Anthropic CEO Dario Amodei, linked to in my piece. He says that Anthropic and a few other biggies like OpenAI are agreeing (informally) to follow a set of "best practices" that will give forewarning, and reason for pause, when they think they might be getting close to autonomous (self-programming) AGI. He argues that such criteria should be enshrined in U.S. law and made the basis for international agreement.
OK, China might never agree to anything. But our best shot at getting them to do so – and preventing an "own goal" of disaster on our side – is to get the U.S. and its allies to start enforcing smart regulation. When top experts are telling us a disaster is very likely, "hoping" there will not be one is not a great strategy.