It’s notable that some weeks before Sam Altman recommended to the US Congress that AI should be licensed, a leaked memo from Google stated, “We have no moat, and neither does OpenAI.” Without casting aspersions on Altman’s motives, I think he’s someone who has thought seriously and deeply about good intentions, but licensing would create the very moat these platforms are hand-wringing about. Regulations de-risk the incumbents’ investments in AI tech by setting a barrier to market entry against the domestic competitors who actually observe them, while leaving foreign competitors who ignore the regulations unencumbered by American competition. The thing about moats is that they could never really stop an invading army, but they sure did keep out a lot of plebes.
I agree that licensing provides a lever for government to enforce rules on companies that can exploit an immensely powerful technology, and if you want popular assent and legitimacy beyond basic consumer desire, government is pretty much the only way to get it. I even like how Altman’s proposal offers government that option. But I still disagree with it.
The greater harm is that licensing AI companies accelerates the formation of “radical monopolies,” which is what the other social platforms became: your choices polarize between the banality of the hegemon and the quasi-criminal services of the proscribed alternatives. For anyone who remembers how “build your own social network” became “build your own card processing and commercial banking system,” a licensing regime for AI creates the exact same dynamic. I don’t want those licenses in the hands of my outgroup, but not because I’m worried about what they will do with the tech. Let them have the technology and let me build mine; handing them the tools to persecute my inventions and the people who actually want them is an objectively terrible idea.
Pretty much all software is going to be LLM-assisted within the next 24 months, so the remit of such a government AI licensing agency would rise like a tide to encompass all new software and devices. A montage of past successes at managing the black markets that follow such interventions would have to include the USG’s track records on alcohol, firearms, drugs, tobacco, and every other form of trafficking. Licensing AI prematurely problematizes it and substitutes a legal quagmire for innovation. Maybe this time it will be different, but probably worse.
I don’t think AI is a use case for licensing because it’s too early to start picking winners and losers. The culture is already too fractured for any current party to claim democratic legitimacy or represent the honest desires of the people they serve, and a technological change as radical as AI seems like just the thing to give some bullies pause and to dislodge some ensconced interests on the platforms, in media, in the culture, and among the remains of academia.
Under a licensing regime, the real money will be in regulatory arbitrage: offering unsanctioned, intact-AI services to customers stuck in the officially neutered-AI domain. The (unenforceable) regulation becomes a kind of “asshole filter”: it discourages rule-respecting developers by raising the bar to legitimate entry, and it creates outsize winner-take-all advantages for those who transgress the rules, further incentivizing assholes. It leaves you with something like media or politics, where the rules are vague or unevenly enforced, and then everyone acts surprised when only the worst of the worst seem to get in.
I admire Altman’s achievements, but I’m sure that he, of all people, can appreciate that when it comes to growing AI to its proper impact and potential, a founder may be too close to the technology, and possibly out of his depth in telling people how they should want to use it.