AI panel pushes developer transparency

In a report this week, a group of AI and policy experts pushed policymakers to require developers to disclose more information about their models

In seeking to prevent artificial-intelligence systems from causing catastrophic or widespread harm without thwarting innovation, California should — at least initially — prioritize transparency, an advisory panel said in a report issued this week.

Any potential government regulation of AI models should be informed by evidence of how those systems work, how they were built, who is using them and the actual risks they pose, the Joint California Policy Working Group on AI Frontier Models said in its report. The group said transparency — in the form of increased data disclosures and reporting of hazardous events — is key to providing that evidence, but AI developers are currently doing a poor job of disclosing such information.

“Greater transparency, given current information deficits, can advance accountability, competition, and public trust,” the working group said in the report. “Policy that engenders transparency can enable more informed decision-making for consumers, the public, and future policymakers,” it said.

California’s efforts to regulate AI technology could have a large effect on the industry and on San Francisco. With the two most valuable and best-funded AI startups — OpenAI and Anthropic — based in The City, San Francisco has become ground zero for the nascent industry.

Gov. Gavin Newsom announced the formation of the advisory group in September at the same time he vetoed Senate Bill 1047, state Sen. Scott Wiener’s AI-safety legislation. Wiener’s bill would have required developers of cutting-edge AI systems to test them for risks of causing large-scale harm. It also would have authorized the attorney general to sue companies whose models led to such harm if they hadn’t followed the legislation’s requirements regarding testing and transparency.

In his veto message, Newsom said Wiener’s bill, which applied only to models that required certain amounts of computing power or money to develop, would likely cover AI systems that didn’t pose any threat and not cover ones that did. He also said any such safety regulations needed to be informed by more evidence of actual risks.
To come up with guidelines for future AI safety policies, Newsom appointed to his advisory panel Stanford University computer-science professor Fei-Fei Li; Jennifer Tour Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society; and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar.

In the report, the working group advised policymakers to encourage AI developers to disclose more information about the data they use to train their models, the steps they take to mitigate risks and secure their underlying technologies, the testing they do on their models and how widely their systems are being adopted.

The group also advised policymakers to consider creating a system into which developers and users could report harmful or hazardous incidents involving particular models. Additionally, policymakers should require third-party verification of data submitted by developers and should provide protections for whistleblowers who report risks or potential corporate policy violations, the working group said.

“A ‘trust but verify’ approach recognizes the distinctive insights and expertise of model developers about their technology while ensuring claims can be independently validated,” the group said.
“This second step, which is reinforced by whistleblower protections and third-party research protections, engenders public confidence to enable evidence-based governance.”

The group also called for policymakers to be thoughtful about which developers should be covered by such transparency and reporting requirements. They should avoid policies that focus on developers rather than the models they’re creating, the group said. Depending on how such policy is written, it might include companies that aren’t really developing advanced AI models while excluding others that clearly are. The group also said policymakers shouldn’t focus solely on the computing power required to train a model, because that factor isn’t necessarily well correlated with risk. Instead, it advised taking a flexible approach to determining which companies or models are covered by regulations and ensuring that such an approach could be easily updated as more evidence emerges.

Notably, the group did not recommend any particular measures or legislation. Nor did it look into how policymakers could mitigate the broader spectrum of actual and potential harm from AI, such as increases in carbon emissions, pollution and water consumption, the prospect of massive job losses, and the use of AI models for cyberattacks or to spread misinformation.

The report comes as Congress considers hobbling the ability of California and other states to regulate AI. A provision of the budget bill — which has been passed by the House of Representatives and was being debated by the Senate as of press time — would bar states from enacting new laws to govern AI and would nullify any existing laws.

That provision has drawn wide criticism, Wiener said in a statement following the working group’s report. “California still has a vital role to play in establishing safeguards for AI—we can set the standard that others will follow,” Wiener said.
“The recommendations in this report strike a thoughtful balance between the need for safeguards and the need to support innovation.”

If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at 415.515.5594.
