AI Governance and Transparency

While AI has the potential to streamline administrative workflows, enhance classroom experiences, and enable more efficient, data-driven decision-making across UC, we must remain vigilant about risks such as the accidental disclosure of proprietary, confidential, or personal data and the inadvertent violation of University policy or state and federal law. As we integrate AI into our daily lives, it is essential to adhere to the UC Responsible AI Principles and to the laws and policies that govern data use, treating AI applications with the same caution and compliance as any other use of data.

At the University of California, AI transparency means openness and clarity about how AI systems are designed, implemented, and used. Transparency fosters trust, ensures accountability, and allows stakeholders to understand and assess the impact and ethical implications of AI-driven decisions and processes.

UC Responsible AI Principles

The UC Presidential Working Group on AI developed the UC Responsible AI Principles to establish appropriate oversight and guardrails so that UC can harness the full potential of this transformative technology while addressing concerns about its risks, particularly bias and discrimination. The principles draw on a growing consensus around responsible AI and on frameworks implemented across the public and private sectors.

Aligning use cases with the UC Responsible AI Principles helps ensure that UC procures, develops, and implements AI-enabled tools that advance our systems and processes as well as our values.

What are the principles?

Appropriateness: The potential benefits and risks of AI and the needs and priorities of those affected should be carefully evaluated to determine whether AI should be applied or prohibited.

Transparency: Individuals should be informed when AI-enabled tools are being used. The methods should be explainable to the extent possible, and individuals should be able to understand AI-based outcomes, how to challenge them, and what meaningful remedies exist to address any harms caused.

Accuracy, Reliability, and Safety: AI-enabled tools should be effective, accurate, and reliable for their intended use, and verifiably safe and secure throughout their lifecycles.

Fairness and Non-Discrimination: AI-enabled tools should be assessed for bias and discrimination. Procedures should be put in place to proactively identify, mitigate, and remedy these harms.

Privacy and Security: AI-enabled tools should be designed in ways that maximize privacy and security of persons and personal data.

Human Values: AI-enabled tools should be developed and used in ways that support human values, such as human agency and dignity, and respect for civil and human rights. Adherence to civil rights laws and human rights principles must be examined when considering AI adoption in contexts where those rights could be violated.

Shared Benefit and Prosperity: AI-enabled tools should be inclusive and promote equitable benefits (e.g., social, economic, environmental) for all.

Accountability: The University of California should be held accountable for its development and use of AI systems in service provision in line with the above principles.

Applicable Laws and Policies

UC is subject to many laws, and has adopted numerous policies, that apply to AI even if they don't explicitly mention "artificial intelligence." Existing laws and policies on privacy, IT security, anti-discrimination, conduct, and more are just as relevant to AI and the data driving it as they are to any other technology. Essentially, if an action wasn't permissible before AI, it isn't permissible with AI.

Review a list of UC's applicable laws and policies and learn more about how these policies relate to AI.

Additionally, UC Legal has prepared a Legal Expertise Chart that guides users through the complex legal landscape of artificial intelligence (AI) across diverse areas, including privacy, intellectual property, discrimination, due process, and contract law. By mapping potential university activities to pertinent legal frameworks, laws, regulations, and guidance from federal and state agencies, this resource provides timely links to regulatory updates, best practices, and case law. It is an essential tool for supporting informed AI use and thorough risk assessment in a rapidly evolving AI policy environment.

UC AI Transparency Report

Transparency in the use of AI allows the University to more effectively assess potential risks and opportunities, analyze experiences and outcomes, and shape future initiatives. This includes developing policies for responsible AI use that promote efficiency, protect civil liberties, respect autonomy, and ensure equitable, positive outcomes.

To that end, the AI Council's Subcommittee on Transparency laid the initial operational groundwork for transparent AI use. The subcommittee conducted a systemwide survey to update the Council on UC's AI use trends since the 2021 Final Report and defined "high impact" AI uses as those affecting individuals, organizational units, or the University. Its final report highlights UC's AI activities and offers recommendations and observations to inform UC AI Council initiatives for the coming year.