Berkeley advises evidence-based AI policy in new Science article

Overview


UC Berkeley scholars—including Jennifer Chayes, Ion Stoica, Dawn Song, and Emma Pierson—joined a team of 20 experts to publish an article titled “Advancing science- and evidence-based AI policy” in Science on July 31, 2025. The publication lays out a structured framework for governing the deployment of increasingly powerful AI systems by grounding policymaking in scientific evidence.


Core Components: Evidence-Based Policy Framework


The article tackles three pivotal questions:


1. How should evidence inform AI policy? Policy should be rooted in scientific analysis; evidence must both shape and be shaped by policy.

2. What is the current state of evidence? Evidence-gathering frameworks, such as certification and synthesis, are still nascent; much more development is needed.

3. How can policy accelerate evidence generation? Policies should actively incentivize evidence generation by funding research, encouraging transparency, and requiring model evaluations.


Policy Recommendations


The paper outlines several actionable proposals to operationalize evidence-based AI policy:

Safety disclosures from AI developers—e.g., model capabilities, training data summaries.

Mandatory pre-release evaluations to assess model risks before deployment.

Post-deployment monitoring (such as adverse-event reporting systems) to detect unintended harms.

Protections for third-party and independent researchers, including safe-harbor provisions.

Marginal-risk assessments, comparing the incremental risk of AI systems relative to existing technologies like search engines.
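
To make these mechanisms concrete, here is a minimal, purely illustrative Python sketch of how a safety disclosure, an adverse-event report, and a marginal-risk comparison might be represented in a reporting system. The Science article does not prescribe any data format or code; every class name, field, and number below is a hypothetical assumption introduced only for illustration.

from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class SafetyDisclosure:
    # Hypothetical pre-release disclosure a developer might file.
    model_name: str
    release_date: date
    capability_summary: str
    training_data_summary: str
    evaluation_results: List[str] = field(default_factory=list)


@dataclass
class AdverseEventReport:
    # Hypothetical post-deployment adverse-event record.
    model_name: str
    reported_on: date
    description: str
    severity: int  # illustrative scale: 1 (minor) to 5 (critical)


def marginal_risk(ai_incident_rate: float, baseline_incident_rate: float) -> float:
    # Incremental risk of an AI system relative to an existing baseline
    # technology (e.g., a search engine), expressed as a rate difference.
    return ai_incident_rate - baseline_incident_rate


if __name__ == "__main__":
    disclosure = SafetyDisclosure(
        model_name="example-frontier-model",          # placeholder name
        release_date=date(2025, 7, 31),
        capability_summary="Long-context reasoning; code generation.",
        training_data_summary="Public web text plus licensed corpora.",
        evaluation_results=["bio-risk eval: pass", "cyber eval: pass"],
    )
    report = AdverseEventReport(
        model_name=disclosure.model_name,
        reported_on=date(2025, 8, 15),
        description="Model produced actionable harmful instructions.",
        severity=3,
    )
    # Compare incidents per million queries against a baseline technology;
    # the rates here are made-up numbers for demonstration only.
    print(marginal_risk(ai_incident_rate=4.2, baseline_incident_rate=3.1))

In practice, a real disclosure or adverse-event schema would be defined by regulators and developers together; the point of the sketch is simply that each recommended mechanism maps onto concrete, auditable records.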


Relationship with California’s Frontier AI Policy Report


This Science article builds on the June 17, 2025 report co-led by Berkeley’s Jennifer Chayes and Stanford’s Fei-Fei Li, known as The California Report on Frontier AI Policy. That prior work emphasized:

Adverse-event reporting mechanisms to collect empirical harm data.

Model transparency and whistleblower protections to improve oversight.

Third-party evaluation to complement developer-led safety testing.


Broader Context & Caveats


While the emphasis on evidence-based AI policy is widely supported, academics have cautioned that overly rigid evidentiary standards could stall timely regulation. A 2025 analysis titled “Pitfalls of Evidence-Based AI Policy” highlights how requiring an excessively high standard of proof can delay responses to emerging risks, a dynamic historically seen in domains like tobacco and climate policy.


Summary Table


Publication: “Advancing science- and evidence-based AI policy” (Science, July 31, 2025)

Institutions: Berkeley among 20 institutions (Stanford, Harvard, Princeton, the Carnegie Endowment, and others)

Framework Questions: How evidence informs policy; the current state of the evidence landscape; how policy can catalyze evidence generation

Recommended Mechanisms: Safety disclosures, third-party evaluation, post-deployment monitoring, marginal-risk focus

Linked Initiatives: Builds on the California Report on Frontier AI Policy (June 2025), which makes similar recommendations

Critical Perspective: Warnings that strict evidence standards could slow regulation of urgent AI risks



