Navigating AI in the Classroom
As someone who teaches business analytics courses at the intersection of business and computer science, I rely heavily on technology in my teaching. The recent introduction of Generative AI (GenAI) tools, such as ChatGPT, has brought both exciting opportunities and challenges to how I teach. Over the past few years, I’ve had many conversations with students about how different their experiences are today compared to previous years, when homework and thesis work were entirely manual. Now, we find ourselves adapting to a world where AI-generated content is part of everyday learning and competes with search engines, textbooks, and class notes.
The discussion around AI in education is broad and nuanced. On one hand, it’s a powerful tool that reflects how industries evolve. On the other, some see it as a potential threat to traditional teaching methods. I’ve been asked several times how to design "AI-proof" assignments—tasks that AI can’t easily complete. After exploring different approaches, I realized that strategies like asking students to submit voice-annotated PowerPoints didn’t add much value and were very time-consuming to grade. Instead, I embraced AI’s potential while ensuring students learn the necessary foundational skills.
This semester is my first time openly allowing students to use GenAI in class. I’m trying to strike a balance between leveraging its benefits and understanding its drawbacks. Previously, I might spend three hours walking through one coding example with students as they either followed along or watched me type on the screen. Now, I can ask for their ideas, have GenAI generate the code, and review it with the class in real time. This allows me to cover multiple examples in a session tailored to students’ interests. For instance, we’ve built a walk-in clinic for superheroes, a grocery delivery app, and a booking system for a music studio, all while allowing more time for in-depth discussion.
That said, it’s still crucial that students truly learn the material. If a student is just starting out with coding and doesn’t grasp the core concepts, relying on AI can be counterproductive. Just because you can generate something with AI doesn’t mean it will be useful or actually solve the problem, and without the basics you can’t tell the difference. To address this, I’ve created a policy for AI use in my courses that allows students to benefit from AI while ensuring they also master the underlying principles.
Policy on GenAI Use
As AI continues to transform, threaten, and support education, depending on whom you ask, I’ve gone back and forth on how best to integrate these tools into my classroom. Over time, I’ve tested various approaches, some restrictive, others more open. This year, I’m trying a new policy that allows students to use generative AI tools, like ChatGPT, as part of their learning process. The idea is to let them experiment with AI while ensuring they do so thoughtfully, with an emphasis on understanding rather than just generating answers. This approach aims to help students explore AI’s potential while also focusing on building the critical skills central to their success.
My syllabus this year contains this policy:
Generative AI tools can be a useful resource, but they can also create complications if used improperly. In this course, you may use AI-generated code for your assignments, but your grade will reflect the quality of your final submission. If you rely solely on AI-generated content without understanding the underlying concepts, you will miss out on developing the critical thinking skills this course aims to teach.
Using generative AI in "auto-pilot" mode can result in code that lacks creativity or may contain errors, negatively impacting your grade. It is recommended that you focus on building a strong foundation in programming and data transformation. If you choose to use AI-generated code, you must indicate this in your assignment by adding a comment explaining how and why you used it. This ensures transparency and shows that you understand the work submitted.
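To make that expectation concrete, here is a minimal sketch of the kind of disclosure I have in mind. The function and the scenario are hypothetical, invented for illustration rather than taken from a real submission:

```python
# AI-use disclosure: I asked ChatGPT to draft this cleaning function because
# I wasn't sure how to handle unparseable dates. I checked its output against
# ten sample rows and changed the fill strategy from mean to median, since
# our transaction amounts are heavily skewed.
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Standardize an orders table before analysis."""
    df = df.copy()
    # Coerce bad date strings to NaT, then drop those rows.
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df = df.dropna(subset=["order_date"])
    # Median is more robust than mean for skewed dollar amounts.
    df["amount"] = df["amount"].fillna(df["amount"].median())
    return df
```

A comment like the one at the top takes a student two minutes to write, but it forces exactly the reflection the policy is after: what was delegated, what was verified, and what was changed.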
The goal of this policy is to strike a balance between innovation and foundational learning. While AI offers powerful resources that can enhance the educational experience, it's crucial that students develop a strong understanding of the material and not rely on technology as a crutch.
Performative Classes
In thinking about how AI fits into my teaching, I've often questioned the purpose of the classroom experience. Are we just there for students to watch me type so they can pick up techniques, or is it about presenting concepts they’ll go home and try on their own? Sometimes, in smaller classes, I’ve acted as a roving mentor—walking around to help students troubleshoot as they code. With HyFlex teaching, I’ve found a sweet spot: I can present a single problem to the class, solve it together once on the screen, and then move forward. It’s much more efficient than repeating the same solution twenty times individually.
Lately I have been asking students what interests them when they walk into class. This semester, for instance, a student wanted to design a health clinic for superheroes. We jumped into it by having GenAI generate a basic database structure. The example was entertaining and unique, giving us a great foundation for discussion. We analyzed what the AI provided, identified missing tables, and refined the relationships—plus, we still had time for three more examples. Instead of having students watch me type, we engaged pedagogically in a way that felt far more valuable.
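To give a flavour of how this plays out, here is a minimal sketch in the spirit of that session. The table and column names are my reconstruction for illustration, not the actual AI output from class:

```python
import sqlite3

# A stripped-down version of the kind of schema GenAI might propose first.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE heroes (
    hero_id INTEGER PRIMARY KEY,
    alias   TEXT NOT NULL,
    powers  TEXT
);
CREATE TABLE visits (
    visit_id   INTEGER PRIMARY KEY,
    hero_id    INTEGER REFERENCES heroes(hero_id),
    visit_date TEXT,
    complaint  TEXT
);
-- The class spots what's missing: nobody is treating these patients.
CREATE TABLE clinicians (
    clinician_id INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    specialty    TEXT
);
-- ...so we refine the relationships by linking each visit to a clinician.
ALTER TABLE visits ADD COLUMN clinician_id INTEGER REFERENCES clinicians(clinician_id);
""")
```

The pedagogy lives in the second half of the script: the AI hands us a plausible draft in seconds, and the class earns its keep by interrogating what the draft leaves out.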
Sometimes, though, I wonder if I’m “cheating” by not building everything from scratch. But then I remind myself that this is the direction our classrooms are heading. Students entering the workforce will likely use these technologies in their day-to-day roles, and it’s useful for them to see how low-value tasks can be automated and integrated into their workflows. While critical thinking remains essential, I feel that by automating repetitive parts, I can focus on the material's more complex and meaningful aspects.
The only downside I see is that while I still emphasize learning the fundamentals, I’m unsure if students follow through. Without that foundational knowledge, they may struggle when it comes time to debug or fully understand AI-generated solutions.
The Role of Context
A colleague asked me to review his midterm exam using AI. I couldn’t reliably score higher than 60 or 70 percent, even when using multiple AI models and providing extra context about the course, assignment and professor. This experience highlighted a key challenge: success in exams isn’t just about the questions themselves but how students connect their answers to the broader course material. In this case, the Canadian course material called for markedly different answers than the same course taught at a university in the United States. Strong responses require not only knowing the facts but also being able to integrate them with the concepts we’ve covered in class.
This is where AI can be both a help and a hindrance. While these tools are powerful, my students often struggle with understanding the core business or computer science problems they’re meant to solve. They can miss the critical context needed to address complex issues effectively. AI can assist by filling in some gaps, but true problem-solving and academic success come from human understanding and the ability to think critically.
In many ways, we’re in a “Cold War” with constantly evolving AI tools and shifting testing methods. This forces us to continually revisit what we try to accomplish in the classroom. It’s important to combine all these pieces, understand the context, and imagine new possibilities. We might soon have rich multimedia content that can be generated on demand or even virtual reality experiences that bring course topics to life in immersive ways. But at the end of the day, the depth of human comprehension and creativity drives meaningful learning and prepares students for real-world challenges.
A Shift in Teaching and Learning
Before GenAI tools became widely used, my interactions with students largely centred on explaining core concepts and answering questions about the course content. Students often emailed me for help with specific topics or to clarify points from HyFlex recorded lectures. I often used tools like Loom to create feedback videos to demonstrate solutions, walk through examples, and guide them through complex topics, creating a dynamic and responsive learning experience.
Now, with GenAI, these interactions have evolved. Students frequently ask me questions about AI-generated responses that they either don’t fully understand or suspect might be inaccurate. While GenAI provides fast answers, those answers can sometimes be misleading or subtly flawed, leading students down confusing paths if they aren’t equipped to evaluate the output critically. I’ve seen instances where AI-generated code, for example, contains logical errors or lacks the creativity needed for a particular business solution or unique data challenge, which are especially significant problems in a field like data science.
This change has transformed many of my email exchanges with students; instead of focusing solely on teaching concepts or clarifying content, I now spend time helping students understand what ChatGPT is "thinking" and why it generated a particular response. This shift reflects a broader evolution in learning: students need to grasp both the subject matter and how to interpret and critically assess AI output. While AI tools can feel like a quick path to answers, I stress to my students that a solid understanding of the basics is still crucial. Without this foundation, they can easily get drawn into what I call "AI rabbit holes." For example, ChatGPT might provide a convincing answer but subtly misrepresent key concepts or context, which can lead students down the wrong path. Without a strong grasp of the fundamentals, they may spend a lot of time troubleshooting without even realizing the real issue.
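A constructed example of the kind of subtle flaw I mean, using pandas (the numbers are invented to make the discrepancy obvious):

```python
import pandas as pd

df = pd.DataFrame({
    "store":   ["A", "A", "A", "B"],
    "revenue": [100, 100, 100, 400],
})

# A plausible-looking AI suggestion: take the mean of the per-store means.
# This silently weights every store equally, regardless of transaction count.
wrong = df.groupby("store")["revenue"].mean().mean()  # (100 + 400) / 2 = 250.0

# What the question actually asked: the overall mean revenue per transaction.
right = df["revenue"].mean()  # 700 / 4 = 175.0
```

Both lines run without error, which is exactly the trap: a student without the statistical fundamentals has no way to notice that only one of them answers the question.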
This underscores an essential point about AI in education: critical thinking and foundational knowledge are irreplaceable. I aim to teach students to use AI thoughtfully, recognizing when it’s helpful and when it could steer them into trouble. With a solid foundation, they’re better equipped to evaluate AI outputs and make informed decisions, ensuring they understand their work rather than relying on AI-generated answers.
Considering Equity in AI Usage
I also worry about equity in the class and how AI is regulated through policy, particularly if the instructor uses AI detector software that may be inaccurate or generate false positives. Students with access to more advanced tools, funding, and resources are likely to produce higher-quality work compared to those who don’t have the same support. If we don’t address this imbalance, we risk reinforcing existing inequalities. For instance, when we penalize students for plagiarism or low-quality AI-generated content, are we inadvertently penalizing those who lack access to the best tools? This raises essential questions about equity, fairness and access. To truly level the playing field, we must consider policies ensuring all students have equitable access to the technology and resources necessary to succeed. Without this, AI use regulations might deepen disparities rather than promote equal opportunity in education.
Opportunities and Challenges
Bringing AI into my courses has opened up many new learning opportunities. Students now have a tool that supports ideation, exploration, rapid prototyping, and a deeper engagement with the material. They can quickly generate code snippets, get explanations, and interact with AI in a way that resembles having an extra teaching assistant. But AI isn’t foolproof, and students need to learn when to trust its outputs and when to question them.
AI’s presence in the classroom has been transformative. It’s pushed me to adapt my teaching methods to leverage AI’s strengths while addressing its limitations. I aim to ensure that students aren’t just passive recipients of AI-generated content at the last minute before an assignment is due but active participants who think critically about the information they’re using. This approach aligns well with my mission to develop critical thinkers ready for a world where AI will only become more central.
Recommendations
As GenAI tools become more integrated into educational settings, educators face the challenge of effectively adapting their teaching methods to leverage these technologies. To navigate these complexities, educators must consider tailored approaches that align with their discipline’s unique needs while promoting responsible and inclusive AI use. My three recommendations centre on building capacity by openly discussing what AI represents for the discipline and how it impacts students.
1. Develop Discipline-Specific AI Policies
Recognize that AI’s role in the classroom will differ across disciplines. Create AI usage guidelines that reflect your field's unique goals, challenges, and values. In disciplines where creativity and critical thinking are central, consider policies encouraging students to analyze and critique AI-generated content, ensuring they understand how AI complements rather than replaces human input.
2. Promote Transparency and Responsible AI Use
Encourage students to be transparent about using AI tools in assignments and projects. Require students to annotate AI-generated content, explaining how and why they used it, fostering a sense of accountability and encouraging them to evaluate the AI’s output critically. This practice ensures academic integrity and helps students develop a deeper understanding of the underlying material.
3. Ensure Equitable Access to AI Resources
Address potential disparities in students' access to advanced AI tools by advocating equitable resource allocation. Provide access to AI tools within the institution or recommend accessible alternatives. Consider the implications of AI detectors and policies that penalize misuse, ensuring these do not unintentionally disadvantage students who may lack access to premium tools. By promoting equity, educators can support a fair and inclusive learning environment for all students.
Takeaways for Teaching Practice
The integration of generative AI tools into the classroom presents an invaluable opportunity to rethink teaching methods, assessment strategies, and how students engage with course material. While GenAI offers the promise of enhanced learning through rapid, personalized feedback, ideation, and explanation, it also underscores the enduring importance of foundational knowledge and critical thinking. Responses to AI adoption will naturally vary across disciplines: some may readily embrace AI as a powerful complement to human expertise, while others may view it as an existential threat. This diversity in approach highlights the necessity of tailoring AI policies to align with each field's unique needs and values.
For educators, the challenge lies in finding the right balance: capitalizing on AI’s strengths while ensuring students develop a thorough understanding of core concepts and the specific nuances of their disciplines. A thoughtful, future-oriented teaching approach requires transparency in AI use, a commitment to fostering critical analysis of AI-generated content, and a continued focus on equity. By embracing AI as an aid rather than a replacement for genuine learning, educators can prepare students for a future in which AI plays an integral role while recognizing that we all engage with these technologies in distinctive ways.
By encouraging a balanced approach to AI, educators can create a learning environment where students use it as a tool rather than a crutch. This helps them build stronger analytical skills and deepen their understanding of the subject. Students need to develop critical thinking skills to evaluate what the GenAI model outputs and how to leverage the technology properly.
In writing this article, I relied on several AI tools to help with ideation and refinement. I started with ChatGPT to generate ideas and outlines, then wrote the paragraphs and refined them further with Google Gemini. I used Grammarly for polishing and made additional edits with a local AI model, while manually tweaking sentences along the way. At some point, I just started writing on my own, deciding I could do a better job myself. At each stage, I guided, revised, and ensured the final message was clear and aligned with my goals. This process mirrors my classroom policy on AI: it can be a valuable aid, but human oversight and critical thinking are still essential components of effective learning. This paragraph was evaluated by GPTZero as 75% AI-generated (a verdict of uncertain accuracy) when it was, in fact, completely human-written.