
Photo Credit: Steve Johnson
Governor Newsom issued that order yesterday, not long after the Framework introduced a variety of “legislative recommendations” at the federal level. Unsurprisingly, given the timing, there’s a political-posturing angle to California’s AI rules.
(The White House recommendations, we reported earlier in March, emphasized the goal of establishing “a federal AI policy framework” to “prevent a fragmented patchwork of state regulations that would hinder our national competitiveness.”)
That’s clear enough from the corresponding press release’s title – “As Trump rolls back protections, Governor Newsom signs first-of-its-kind executive order to strengthen AI protections and responsible use.”
Throw in the long list of rightsholder complaints against leading developers – most recently, BMG levied infringement claims against Anthropic – and it’s safe to say that the AI regulatory landscape contains more than a few moving parts.
Among other things, said certifications will address safeguards to prevent misuse involving “illegal content,” including sexually explicit deepfakes; “harmful bias”; and violations “of civil rights and civil liberties such as free speech, voting, [and] human autonomy.”
Additionally, the Government Operations Agency, “in consultation with DGS and CDT,” has 120 days to propose “any reforms to contractor responsibility provisions” along the same lines as those highlighted above.
And if the official “concludes that the designation is improper,” DGS and CDT “will jointly issue guidance ensuring that departments and agencies can continue to easily procure from that company.”
In closing, the order also calls on the agencies to establish state “employee access to vetted GenAI tools for general use cases with appropriate privacy and cybersecurity safeguards.”
Longer term, making AI watermarks the norm across the board would, of course, be a sweeping step – and a step with multiple considerations beneath the surface. How would doing so affect the growing collection of creations incorporating, but not solely resulting from, generative AI? And where will the “significantly manipulated” line be drawn?
Time will tell, but AI regulatory developments – including in Europe, where several AI legal disputes are unfolding – will be worth tracking moving forward.