Background
In the realm of higher education, the advent of Artificial Intelligence (AI) presents both an opportunity and a challenge. When used appropriately, AI can support student learning and enhance productivity. However, the use of AI must not replace the critical thinking and problem-solving skills that are foundational to an engineer’s education. Because engineers routinely produce reports and a wide variety of other written documents, the ability to write proficiently is instrumental to an engineer’s success.
Current AI software is a form of natural language processing. This software offers many benefits, but it is not a cure-all, and it lacks the level of understanding most individuals would expect. In short, the software predicts what should be said next given a particular context or setting. In the context of engineering, AI software has no deep understanding of the content it generates. Thus, responsible use of AI software is paramount for students’ education.
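To make the "prediction" idea concrete, it can be sketched with a toy bigram model. This is a deliberately simplified stand-in, not how real AI software works internally (modern systems use neural networks trained on vast corpora), but the core principle of predicting a likely next word from the context so far is the same:

```python
# Toy illustration of next-token prediction: a bigram model that simply
# remembers which word most often follows each word in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the beam carries the load and the beam resists the moment".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # "beam" -- seen twice after "the"
print(predict_next("carries"))  # "the"
```

Notice that the model produces fluent-looking continuations without any understanding of beams or loads; it is only reproducing statistical patterns, which is precisely why AI-generated engineering content must be scrutinized.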
Our use of AI is guided by the preamble to the National Society of Professional Engineers (NSPE) Code of Ethics.
Guidelines
A student’s time at the University of Arizona is a phase of fundamental intellectual development, distinct from their future workforce experience. Through the lens of the NSPE Code of Ethics for Engineers, we will guide the use of AI tools in our courses as follows:
AI as a Supplement, not a Substitute for Student Work
AI should be considered a tool to supplement student learning, not replace it. AI can offer support by automating repetitive tasks, providing additional learning resources, and enhancing student engagement. The more advanced models can serve as supplemental tutors when office hours are unavailable. However, AI should not complete the bulk of a student’s work, as doing so deprives the student of a learning opportunity. Additionally, when AI software is used to supplement a student’s work, the student should scrutinize any work generated with AI assistance.
Prioritizing Critical Thinking
The cultivation of critical thinking skills is a primary objective of our courses. Higher-level thinking is developed through grappling with complex problems, applying new material, analyzing various scenarios, synthesizing insights, and evaluating learned concepts. AI tools should be used in ways that support and enhance this struggle, not circumvent it. Learning happens in the struggle; if there is no struggle, there is very little real learning.
I highly recommend that instructors include an assignment requiring students to critically analyze AI-generated text. For example, students could prompt the AI to generate text, assess its accuracy, and attempt to “break” the AI by getting it to state incorrect things (this often happens when math, equations, and the like are involved – see these links).
Appropriate Use of AI as a “Friend/Classmate”
AI can assist in the learning process, but it is crucial to ensure its use is appropriate and does not facilitate academic dishonesty. See UofA’s Code of Academic Integrity for more information on this topic. Students should regard AI as a classmate or friend when completing assignments or tasks. For “individual work,” students should avoid using AI software. When an assignment or task is classified as “group work,” however, students may use AI software appropriately according to these guidelines. AI should help students understand concepts or practice skills, not complete assignments on their behalf. Standard plagiarism protocols still apply.
Continuous Learning and Adaptation
We encourage students to maintain a growth mindset, and we also acknowledge the need for continual adaptation in our use of AI. We will regularly assess AI’s role in our courses and make necessary adjustments to ensure it enhances, rather than inhibits, student learning.
Citing AI Software
Guidelines for citing generative AI text are slowly emerging but not yet standardized. APA and MLA currently provide limited direction for citing AI software. Ongoing discussions will likely change our views and philosophy on the use of large language models (LLMs). For now, AI software should be cited as “personal communications – [Insert AI software].”