Integrating AI policies into your syllabus helps uphold academic integrity and provides students with clear guidelines for responsible AI use.
Your pedagogy must drive your policy. Your stance should reflect the course's specific learning objectives, so the fundamental question behind any AI rule is: "How does this policy support or hinder what I want my students to learn?" This alignment gives students a clear, defensible rationale rather than an arbitrary restriction. The policy communicates what is valued in the course.
AI policies should make clear that the student is the ultimate author and is fully responsible for the entirety of their submitted work, regardless of the tools used in its creation. This is especially important given the known propensity of AI tools to "hallucinate" or create false citations. Your policy should be clear that "the AI did it" is not a valid defense for submitting inaccurate or plagiarized work. Students should be taught that AI-generated content is raw material that requires verification, evaluation, and revision. This makes the student a critical evaluator, fact-checker, and overseer of the information they present.
The foundation of academic integrity is transparency. Rather than attempting to catch or police students using AI, a more effective approach centers on openness and honesty. Even if the policy is total prohibition, there must be clear instructions on how students should cite and disclose their use of GenAI tools. This disclosure can range from a brief note such as "I used ChatGPT to brainstorm ideas" or "to check for grammatical errors" to more detailed requirements for process documentation throughout a scaffolded assignment. All of the major citation styles have developed formats for citing GenAI, and students should be directed to those resources.
A good AI policy should address ethical and practical issues related to GenAI usage. Data privacy, intellectual property, and bias and equity are all essential considerations. Students should be warned against inputting any personal, confidential, or proprietary information and, when possible, encouraged to use institutionally vetted AI tools. Students should also be reminded not to upload course materials into public AI tools without the instructor's or publisher's explicit permission. Finally, GenAI models trained on vast datasets from the internet can perpetuate and amplify harmful stereotypes related to race, gender, and other identities, and students should be encouraged to critically analyze generated content for such biases.
Given the limitations of AI detection tools, policies should actively discourage their use as the primary basis for academic misconduct allegations. An approach centered on policing and detection can create a climate of fear and mistrust, which discourages students from asking for clarification about policies and can lead to false accusations that are difficult for students to disprove. It is better to build a policy that fosters a culture of dialogue and trust. The first step in any inquiry about inappropriate AI use should be a conversation framed with curiosity, not accusation. Treat inappropriate AI use as a teachable moment, especially for first-time or unintentional violations.
Each assessment should include a specific academic integrity statement detailing whether AI tools are permissible. For example, include a note that states, "For this exam, no AI tools may be used," or "You may use AI for proofreading but not for content generation."
Clearly state the consequences of violating the AI policy and explain how such actions will impact the student’s academic record. Remind students of the importance of original work and how misuse of AI affects their learning experience.
Here is my current revised syllabus statement in which I try to focus primarily on transparency and critique:
The following policy outlines the appropriate use of Artificial Intelligence (AI) or Large Language Model (LLM) assistance in this class. Adherence to this policy is mandatory for all students.
Failure to adhere to this policy will be considered a violation of academic integrity and may result in disciplinary actions as outlined in the Code of Student Conduct. It is your responsibility to ensure that your use of AI tools aligns with these guidelines and to seek clarification when in doubt.
Any time you use AI in this class, you must include in your assignment submission comprehensive documentation that covers:
Any failure to follow this requirement will negatively impact your grade, will be reported to Student Conduct, and could subject you to legal liability. By remaining in this course, you agree to allow the instructor and/or CSCC to run your work through AI detectors in the course of investigating potential academic integrity violations.
I also include specific academic integrity statements for each assignment. Here is an example of how I've added AI expectations to those statements:
Links to sample syllabi statements