The “Liberation Day” announcement had mainstream media pundits scratching their heads. Why would we slap tariffs on an island inhabited solely by penguins, or on a military base in the Indian Ocean? And the so-called reciprocal tariffs were not reciprocal at all -- what we were proposing to charge didn’t match what others were charging us. So where did the numbers come from?
Over on X, journalist James Surowiecki seems to have solved the mystery: “Just figured out where these fake tariff rates come from. They didn't actually calculate tariff rates + non-tariff barriers, as they say they did. Instead, for every country, they just took our trade deficit with that country and divided it by the country's exports to us.”
“NOT TRUE,” said White House Deputy Press Secretary Kush Desai. I’m paraphrasing. What he actually said was, “No we literally calculated tariff and non tariff barriers” and then shared a screenshot of a fancy formula that worked out to… exactly what Surowiecki had suggested.
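The calculation Surowiecki described can be sketched in a few lines of code. To be clear, the function and variable names below are my own illustration of his description, not the administration’s actual methodology:

```python
def surowiecki_tariff(imports_from_country, exports_to_country):
    """Tariff rate as Surowiecki described it: the U.S. trade deficit with
    a country, divided by that country's exports to the U.S. (which is the
    same thing as U.S. imports from it). Illustrative only."""
    deficit = imports_from_country - exports_to_country
    return deficit / imports_from_country

# Hypothetical example: we import $100B from a country and export $60B to it.
# The deficit is $40B, so the "reciprocal" rate comes out to 40%.
print(surowiecki_tariff(100, 60))  # 0.4
```

Notice that nothing in this arithmetic measures what the other country actually charges us -- which is exactly Surowiecki’s point.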
OK, but how did they get the formula? Author Rohit Krishnan had an idea. He asked ChatGPT, Gemini, Claude and Grok how to easily impose tariffs — and, what do you know, they all gave pretty much the same formula.
Two weeks ago, I wrote that LLMs are the best general help desk. I can’t believe I have to say this, but THEY SHOULD NOT BE USED TO CALCULATE FOREIGN TRADE POLICY.
Instead of referencing my most recent column, perhaps we should reference one from last June, titled, “AI Is Anything But Intelligent.” I wrote: “ChatGPT, along with other LLMs and generative models, can do extraordinary, astonishing things -- but what they cannot do is grok. They cannot truly, deeply understand what they are doing or saying…
“Once we ourselves grok that what we’re dealing with is not actually intelligence, we can set aside the idea of AI as a new species… [and] appreciate it for what it really is: artificial enhancement.
“AI enhances our photos. It enhances our ability to summarize. It enhances our skill at scenario planning and folding proteins. It is like a robotic exoskeleton: it requires our own intentionality to work.

“But at the end of the day, it’s not intelligent -- and we are the ones who need to be accountable for what it does.”
That metaphor -- AI as robotic exoskeleton -- is one I’ve returned to over and over in the past year. When you put on a robotic exoskeleton, you are the one who decides where you’re going to go, which heavy things you’re going to pick up, which mountains you’re going to climb. If you crush someone’s skull with your robotic arms, you don’t get to blame the exoskeleton. You are still accountable.
AI does not make you smarter. It makes you more powerful -- and potentially more dangerous.
One of the biggest challenges of LLMs is that, in order to use them well, we have to be able to discern whether the information they give us is any good. Outsourcing our thinking to them is a surefire path to disaster. If we fail to thoughtfully assess what they come up with, we only have ourselves to blame when they come up with garbage -- or, as James Surowiecki called it, a “surprisingly silly” trade policy.