Recent developments in the field of artificial intelligence (AI) have generated much discussion on the role of AI within higher education. Many questions have been raised about risks related to security, privacy, and ethical considerations. Because this field is quickly evolving, guidance is needed to help understand and evaluate these risks. The following are key points to understand:
- The University of Iowa does not have a contract or agreement for AI tools or services. This means that standard UI security, privacy, and compliance provisions are not in place when using these technologies.
- As with any other IT service or product with no university contract or agreement, AI tools should only be used with institutional data classified as PUBLIC (Low Sensitivity). See UI Data Classification Levels for descriptions and examples of each data classification.
- AI tools can generate incomplete, incorrect, or biased responses, so any output should be closely reviewed and verified by a human.
- AI-generated code should not be used for institutional IT systems and services unless it is reviewed by a human.
- Faculty, staff and students should be aware that OpenAI's Usage Policies disallow the use of its products for certain specific activities.
Risk in this area encompasses both potential harms and potential benefits. Because the benefits associated with responsible use of AI are significant, any decision on the use of AI must weigh both the potential positive and negative impacts.
A recent QuickPoll from EDUCAUSE identified many areas of concern related to the field of AI, and a Special Report from the same organization offers an opportunity for more in-depth exploration of the issues.
The National Institute of Standards and Technology (NIST) has published a draft AI Risk Management Framework to help organizations use a formal approach to managing AI risks. The Framework lists the following attributes of trustworthy AI:
- Valid and Reliable. Trustworthy AI produces accurate results within expected timeframes.
- Safe. Trustworthy AI produces results that conform to safety expectations for the environment in which the AI is used (e.g., healthcare, transportation).
- Fair, and Bias is Managed. Bias can manifest in many ways; standards and expectations for bias minimization should be defined prior to using AI.
- Secure and Resilient. Security is judged according to the standard triad of confidentiality, integrity and availability. Resilience is the degree to which the AI can withstand and recover from attack.
- Transparent and Accountable. Transparency refers to the ability to understand information about the AI system itself, as well as understanding when one is working with AI-generated (rather than human-generated) information. Accountability is the shared responsibility of the creators/vendors of the AI as well as those who have chosen to implement AI for a particular purpose.
- Explainable and Interpretable. These terms relate to the ability to explain how an output was generated, and how to understand the meaning of the output. NIST provides examples related to rental applications and medical diagnosis in NISTIR 8367, Psychological Foundations of Explainability and Interpretability in Artificial Intelligence.
- Privacy-enhanced. This refers to privacy from both a legal and an ethical standpoint, and may overlap with some of the previously listed attributes.
Appendix B of the framework includes a discussion of risks that are unique to AI. It is recommended to review these risks to understand how AI risk differs from more familiar technology risks.
Any implementation of artificial intelligence is subject to applicable university policies and standards, including the security review process. If you have questions about how to assess the above attributes for a given implementation of AI, please contact email@example.com.
If you have questions about using Artificial Intelligence tools in your teaching, including syllabus language, developing assessments, and the impact of browser plugins on online testing, please see the Office of Teaching, Learning, and Technology’s page.
Last updated 7/10/2023