USF Guidance for Ethical Generative AI Usage

Generative artificial intelligence (genAI) tools are rapidly transforming teaching and learning, research, and business practices at USF. These tools will shift our approach both to routine tasks and to new challenges. As genAI tools become increasingly available, USF provides guidance and considerations for incorporating them into various workflows.

Individual units and departments are responsible for actively exploring the usefulness and limitations of these technologies within their specific contexts and for sharing their experiences and expectations. As these tools continue to develop, our practices will evolve, and usage recommendations will be updated regularly.

Defining Generative AI Tools

GenAI refers to technologies that can automatically generate new, original content and assets including text, images, audio, video, and computer code. These tools work by analyzing patterns in training data, building an understanding of structure and style, and using that knowledge to create novel, customized outputs that mimic the training data while introducing variation and new ideas. The core capability is to synthesize new artifacts that are coherent, relevant, and potentially useful given a user-provided context.

GenAI Guiding Principles & Usage Limitations

  • Human-Centered Generative AI: Human intelligence is fundamental to using genAI technologies toward beneficial ends. Critical thinking, ethical awareness, and human judgment remain essential. These technologies should augment, not replace, our foundational strengths of cultivated talent, scholarly expertise, creative expression, intellectual community, and human capital. GenAI tools generate possibilities; human judgment determines which possibilities are worth realizing.
  • Individual Responsibility: Given this human-centered approach, individuals are responsible for understanding the limits and constraints of the genAI tools they use. Users are responsible for the validity, correctness, and usefulness of the content these tools generate. Verify facts and cross-check information, as generative AI models can produce incorrect information that nonetheless appears authoritative.
  • Awareness of Bias: Users should be aware that genAI models can perpetuate biases present in their training data, as can the applications used to interact with the underlying large language models. Users should proactively work to identify and mitigate harmful biases in generated content.
  • Sourcing Information: While genAI tools can synthesize novel content, they may rely on training data that contain original works, including potentially copyrighted materials. Users of genAI have an ethical responsibility to respect copyrights, avoid plagiarism, and provide proper citations for any generated content that incorporates or builds upon others' original work. Our academic principles require proper acknowledgement of sources, authors, and contributors.
  • Transparency of Use: Users should be aware when they are using genAI tools and should, when possible and appropriate, disclose whether content is AI-generated or human-generated. Disclosing the use of generative AI is especially important for published works, official communications, and student submissions where allowed by instructors. As the university integrates genAI tools into our academic integrity policies, misrepresenting AI-generated work as one's own goes against our central tenets. Transparency builds trust and understanding around the appropriate use of AI technologies.
  • Discipline-Specific Principles: Users of genAI should proactively stay informed about the policies, guidelines, and ethical considerations relevant to their field or specialty, including those established by professional organizations, journals, and publishers. In addition, use of genAI tools within university coursework is at the discretion of the instructor, and students are expected to follow course-specific policies.
  • Data Protection and Privacy: Users are expected to comply with current USF Technology Policies, particularly when using publicly available tools that may retain and use data entered into them. Users should not share student data, employment data, or other protected or sensitive information. Users must continue to comply with data privacy regulations, including FERPA and HIPAA, when interacting with public AI tools. USF faculty and staff have access to Copilot Enterprise, which provides a data-protected, text-based genAI tool; more information is available on the USF IT website. Current governance structures for institutional data and information will continue to review these technologies and explore their utility for advancing the university's mission.