President Joe Biden’s executive order on artificial intelligence is facing pushback from multiple tech industry associations, which say the EO is too confusing, too broad, and could stifle innovation.
NetChoice, the U.S. Chamber of Commerce and the Software & Information Industry Association — which represent some of the largest AI and tech companies in the world — expressed several concerns about the long-awaited 111-page executive order, which marks the most aggressive step by the government to rein in the technology to date.
“Broad regulatory measures in Biden’s AI red tape wishlist will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation,” Carl Szabo, vice president and general counsel at NetChoice, an advocacy group that represents major AI companies such as Amazon, Google and Meta, said in a statement.
SIIA said in a statement that it had “top-level support” for the EO but also “concerns about some of the directions taken,” including with regard to the document’s effect on innovation and American tech leadership.
“We are concerned that the EO imposes requirements on the private sector that are not well calibrated to those risks and will impede innovation that is critical to realize the potential of AI to address societal challenges,” said Paul Lekas, senior vice president for global public policy & government affairs at SIIA, which represents major tech players including Adobe, Apple and Google. “While we support the measures to democratize research and access to AI resources and reform immigration policy, we believe the directive to the FTC to focus on competition in the AI markets will ultimately undermine the administration’s objectives to maintain U.S. technological leadership.”
This executive order is vast in scope, addressing multiple very difficult problems in responsible AI. It will be good for driving dialogue and investigation at agencies. But it is the equivalent of vaporware in software — something that sounds nice, doesn’t exist, and likely never will (at least in the form it was presented). While there is clearly a strong appetite for AI regulation in the United States, actual legislation is likely several years away. That said, this administration has given signals about what such regulation could include, and what it will ultimately look like will surely continue to evolve.
Rather than identifying specific problems and proposing targeted solutions, the order simply assumes that factors like computing power and the number of model parameters are the right metrics for assessing risk. No evidence is offered to justify these assumptions. Other components of the order are similarly simplistic. For example, it directs the Office of Management and Budget, the Commerce Department, and the Homeland Security Department to identify steps to watermark AI-generated content. This is a bit like putting a band-aid on a bone fracture: sophisticated bad actors will be able to remove watermarks or produce high-quality deepfake content without them.
One of the order’s more important mandates requires that companies developing the most advanced AI models report to the government information on model training, parameter weights, and safety testing. Transparency about the results of safety tests sounds practical, but in reality, it could discourage tech companies from doing more testing, since results need to be shared with the federal government. Moreover, the very essence of AI research is iterative experimentation, and this mandate could bog down companies in red tape and reporting when they should be tweaking their models to improve safety. Given these tradeoffs, it’s unclear that all the reporting will improve safety for anyone.