Applicable Law and UC Policy
Laws and UC policies, both longstanding and new, govern the university's use and deployment of AI by regulating data privacy and security, transparency and accountability, and protection against bias.
Laws and Policies
University of California Statement of Ethical Values
The University of California's Statement of Ethical Values outlines the commitment of the university community to integrity, excellence, accountability, and respect. These principles necessarily extend to the use of AI by the university and members of the university community.
UC Standards of Ethical Conduct
The University of California's Standards of Ethical Conduct outline ethical behavior for all university members, emphasizing fair dealing, accountability, respect, legal compliance, conflict of interest management, and the responsible use of resources. These standards apply to AI governance by requiring ethical research, privacy protection, accurate data handling, and compliance with legal and professional standards.
Regents Policy 1111: Policy on Statement of Ethical Values and Standards of Ethical Conduct
The University of California's Statement of Ethical Values commits its community to the highest ethical standards, emphasizing integrity, excellence, accountability, and respect. The Standards of Ethical Conduct apply to all members, including faculty, staff, students, and affiliates, and cover areas like fair dealing, individual responsibility, respect for others, compliance with laws, conflict of interest management, research ethics, confidentiality, internal controls, resource use, financial reporting, and reporting violations. These principles require ethical AI governance, promoting responsible research, data privacy, and integrity in AI development and use.
Anti-Discrimination Policy
The Anti-Discrimination Policy ensures an inclusive environment free from discrimination and harassment, and details prohibited conduct and reporting procedures. In the context of AI, this policy emphasizes the need for AI systems to avoid perpetuating discrimination and to include safeguards against bias, ensuring equitable treatment for all protected categories.
UC Policy on Title IX
The Sexual Violence and Sexual Harassment (SVSH) Policy helps ensure a safe environment by defining prohibited behaviors, outlining individual rights and responsibilities, and establishing reporting procedures. Compliance with this policy mandates that AI systems respect these standards, preventing technology from facilitating or ignoring sexual misconduct and reinforcing a safe and respectful community.
PACAOS-140 Guidelines Applying to Non-Discrimination on the Basis of Disability
PACAOS-140 focuses on nondiscrimination on the basis of disability, ensuring that all university programs, services, and activities are accessible to individuals with disabilities. This policy intersects with AI use by mandating AI tools and systems adhere to these nondiscrimination guidelines. Thus, AI applications must not discriminate against individuals with disabilities and must be accessible and equitable in their deployment and operation.
IS-3: Electronic Information Security
BFB-IS-3 is designed to protect the confidentiality, integrity, and availability of electronic information within the UC system, and provides comprehensive guidelines for risk management, access control, and compliance with legal and regulatory requirements to ensure the security of institutional information and IT resources. IS-3 aligns with UC's commitment to responsible AI use, emphasizing transparency, accountability, and ethical decision-making in AI deployment, which is crucial for maintaining data integrity and security as outlined in the policy.
Account and Authentication Management Standard (AAMS)
The UC Account and Authentication Management Standard sets forth the minimum requirements for account, passphrase, and authentication management to protect UC's Institutional Information and IT Resources. It emphasizes the importance of secure access control and is a key component in safeguarding against unauthorized access to sensitive data, including that which may be used by AI systems. This policy ensures that AI technologies are accessed and managed securely, supporting the university's commitment to responsible AI use by preventing misuse and maintaining the integrity of AI operations.
RMP-2: Records Retention and Disposition: Principles, Processes and Guidelines
RMP-2: Records Retention and Disposition outlines the principles, processes, and guidelines for retaining and disposing of university records. The policy mandates systematic retention schedules and ensures that records are preserved for their required period based on administrative, legal, fiscal, and historical needs. This policy dictates how data, which may be used or generated by AI systems, should be managed, stored, and disposed of, ensuring compliance with legal and ethical standards for data protection and privacy.
RMP-7: Privacy of and Access to Information Responsibilities
BFB-RMP-7 focuses on protecting administrative records containing personally identifiable information (PII). It ensures that such records are managed according to privacy laws, including the California Information Practices Act, and sets guidelines for the collection, use, and dissemination of PII. This policy provides a framework to ensure that AI systems handling PII comply with privacy regulations, thus protecting individuals' data from misuse and ensuring ethical AI deployment within the university system.
HIPAA: Policies and Glossary
The HIPAA policies apply to all of UC’s covered entities, including medical centers, medical clinics, health care providers, health plans, and student health centers. The HIPAA Privacy Rule protects individuals’ medical records and protected health information (PHI), which is information that can be linked to a specific individual with regard to the provision of health care services. Under HIPAA, AI may only be deployed in health systems where patient data is secure and private, preventing data breaches and unauthorized access. When using AI for research or analysis, HIPAA allows the use of de-identified data, which is stripped of personal identifiers to ensure patient anonymity. When using identified data, such as for training AI models, patients must be informed about how their data will be used and must provide consent for its use.
Gramm-Leach-Bliley Act (GLBA) Compliance Plan
The Gramm-Leach-Bliley Act (GLBA) Compliance Plan aims to protect customers' personal information related to financial products, including student loans, by implementing risk assessments, safeguards, and workforce training. This policy emphasizes the need to integrate robust data protection measures into AI systems, ensuring compliance with federal data privacy regulations and preventing AI applications from compromising customer information, integrity, and privacy.
PACAOS-130 Disclosure of Information from Student Records
PACAOS-130, aligned with the Family Educational Rights and Privacy Act (FERPA), regulates the disclosure of student records, specifying the types of records covered, students' rights to access, conditions for non-consensual disclosure, and correction procedures. This policy stresses the importance of protecting student privacy in AI systems, requiring stringent data access controls, privacy safeguards, and transparency to prevent unauthorized disclosure and misuse of student information.
UC Statement of Privacy Values, Principles, and Balancing Test
The UC Statement of Privacy Values, Principles, and Balancing Test emphasizes the importance of privacy in protecting personal data within the university system. These principles focus on transparency, accountability, and minimizing data collection to ensure privacy and data security. This policy guides how AI systems must handle personal data responsibly, ensuring that AI deployment within the university respects privacy rights and complies with established data protection standards.
APM 010: Academic Freedom
APM-010 aims to uphold the principles of academic freedom, professional integrity, and fair treatment within the University of California system. With respect to its effects on academic freedom, AI should be treated like any other software system.
APM 015: Faculty Code of Conduct
APM 015 is focused on academic freedom, professional standards, and the responsibilities of faculty members. In the context of AI, faculty should be vigilant about biases and errors in AI tools that can affect equity and academic rigor.
APM 160: Maintenance of, Access to, and Opportunity to Request Amendment of Academic Personnel Records
APM 160 ensures the protection, access, and amendment rights of academic appointees' personnel records within the University of California system, balancing individual privacy with public transparency. AI systems handling academic records should maintain privacy, transparency, and the right to amend records, thereby upholding the same standards of confidentiality and fairness.
BUS-43: Materiel Management
BFB-BUS-43 outlines the guidelines for purchasing goods and services, emphasizing security, accountability, and compliance with regulatory standards. It includes procedures for efficient financial management, promoting supplier diversity, and ensuring non-discrimination, while encouraging strategic sourcing through regional and university-wide agreements to optimize procurement efficiency. This policy ensures AI-related acquisitions follow university-wide guidelines, promoting responsible use and compliance with ethical and legal standards in AI system deployment and management.
BUS-49: Policy for Cash and Cash Equivalents Received: Appendix B, Data Security
BFB-BUS-49 sets forth guidelines for handling cash and cash equivalents to ensure their security, accountability, and compliance with regulatory standards. It covers the receipt, safeguarding, and documentation of cash transactions within the university. In relation to AI governance, this policy emphasizes the need for AI systems managing financial transactions to adhere to these security and compliance standards, ensuring the integrity and proper handling of financial data within AI applications.
BUS-80: Insurance Programs for Information Technology Systems
BUS-80 establishes insurance programs to protect the university's IT infrastructure, including AI systems, from various risks by outlining the requirements and procedures for securing insurance coverage. This ensures AI technologies are covered under comprehensive risk management protocols, aligning with institutional standards for mitigating risks associated with AI deployment and operations. The policy thus integrates AI systems into the broader framework of IT risk management and insurance within the university.
AI Executive Order N-12-23
Governor Gavin Newsom's Executive Order on Artificial Intelligence (AI) establishes guidelines for the safe and ethical development, deployment, and use of AI technologies in California. It emphasizes the need for transparency, accountability, and public engagement, and mandates that state agencies assess and mitigate risks associated with AI. The order also promotes collaboration with various stakeholders to ensure AI benefits all Californians while addressing potential societal impacts.
Benefits and Risks of Generative AI Report
The report from California's Government Operations Agency outlines the implementation of Governor Newsom's Executive Order on Generative AI, detailing strategies for the responsible use of AI within state agencies. It emphasizes transparency, accountability, and ethical considerations in AI deployment, and includes recommendations for data privacy, security, and workforce training. The report aims to ensure that AI technologies benefit all Californians while mitigating potential risks.
California State Constitution
Article 1 Declaration of Rights, Section 1: “All people are by nature free and independent and have inalienable rights. Among these are enjoying and defending life and liberty, acquiring, possessing and protecting property, and pursuing and obtaining safety, happiness and privacy.”
California Information Practices Act of 1977 (Civil Code §§1798-1798.28)
Requirements for the protection of the privacy of individuals in the maintenance and dissemination of personal information within a state agency.
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence establishes guidelines to ensure AI technologies are developed responsibly, with a focus on safety, security, and equity. It mandates comprehensive risk management, promotes innovation and competition, and requires federal agencies to implement measures that prevent bias, protect privacy, and ensure consumer protection. This order aims to position the U.S. as a global leader in ethical AI development and use.
Gramm-Leach-Bliley Act of 1999
Federal law to protect consumers' personal financial information.
Health Insurance Portability and Accountability Act of 1996 (HIPAA)
The HIPAA Privacy Rule, effective April 14, 2003, established national standards to guard the privacy of a patient's protected health information.
Protected health information includes: Information created or received by a health care provider or health plan that includes health information or health care payment information plus information that personally identifies the individual patient or plan member.
Personal identifiers include: A patient's name and email, web site and home addresses; identifying numbers (including Social Security, medical records, insurance numbers, biomedical devices, vehicle identifiers and license numbers); full facial photos and other biometric identifiers; and dates (such as birth date, dates of admission and discharge, death).
Family Educational Rights and Privacy Act (FERPA)
FERPA, the Family Educational Rights and Privacy Act, regulates the privacy of student education records, giving students and parents specific rights to access and control the disclosure of these records.
The Privacy Act of 1974, 5 U.S.C. § 552a
Federal law governing the collection, maintenance, use, and dissemination of personal information about individuals held in systems of records by federal agencies.
The EU AI Act (European Commission)
The EU AI Act is the first comprehensive legal framework on AI worldwide, establishing clear requirements for AI developers and deployers. The Act is part of a broader strategy to ensure AI safety, protect fundamental rights, and foster innovation and investment in AI across the EU.
Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-modal Models (WHO)
Artificial Intelligence (AI) involves algorithms that learn from data to perform automated tasks without explicit human programming, while Generative AI uses trained algorithms to create new content like text, images, or video. This guidance focuses on large multi-modal models (LMMs), which process diverse data types to produce varied outputs and are expected to be widely used in health care, scientific research, public health, and drug development, although their broad capabilities remain unproven.
AI and Education: Guidance for Policymakers (UNESCO)
Artificial Intelligence (AI) in education holds significant promise for innovating teaching and learning, but also presents risks that outpace current policy and regulation. This publication provides guidance for policymakers on leveraging AI's opportunities while addressing associated challenges.
AI Risk Management Framework – National Institute of Standards and Technology (NIST)
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework intended to help organizations incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. It organizes AI risk management around four core functions, Govern, Map, Measure, and Manage, which apply across sectors and throughout the AI lifecycle.
AI Risk Management Framework – Generative AI Profile (draft)- NIST
This AI Risk Management Framework – Generative AI Profile is a companion resource for the AI Risk Management Framework (AI RMF 1.0) tailored to Generative AI, aligning with President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. It aims to help organizations incorporate trustworthiness into AI products by offering a framework for managing AI risks across various stages of the AI lifecycle. The profile provides insights and suggested actions for governing, mapping, measuring, and managing risks specific to Generative AI, applicable across different sectors.