Nobody Said It Was Easy
I may be biased, but I think my AI for Business course is a good course in our online portfolio. We average close to 400 students every semester, outcomes look solid, and evaluations are positive overall. But those same evaluations highlighted some real issues. Several videos felt repetitive. Ethics content showed up in four different modules. And students wanted more technical depth, not less.
We designed and built the course two years ago. Now, generative AI barely resembles what it looked like then. A refresh is clearly needed, but what should change, and how? I was reminded of a line from the Coldplay song "The Scientist": "Nobody said it was easy." (Coldplay, "The Scientist," 2002; the reverse-narrative video is worth watching on its own terms.)
Yet as I thought about how AI has changed over the last few years, and about the current course structure, I was reminded of the next line in that song: "No one ever said it would be this hard." Eight modules with 120 videos, each around eight minutes long; a field that's moved dramatically; student feedback pulling in different directions. Where do you even begin?
Rather than losing myself in the enormity of it all, I felt I needed to return to my roots and be a scientist. Run an experiment. See what happens.
So, like I am fond of telling others, I sat down with Claude and started a conversation.
The Experiment
Not one prompt, but an extended back-and-forth over about two hours total, spread across a few days. I'd work through a section, think about it, come back later and pick up where we left off. Most of the time was spent reading and thinking, not typing.
I fed it the current course structure and anonymized student evaluations, and we worked through the problem together. (Student evaluations are anonymized when faculty receive them; no student personally identifiable information was used at any point in this process.) The AI asked clarifying questions before proposing anything. I set the constraints: the course needs to be accurate, timely, and future-proof for two to three years. Budget matters. Fewer changes mean less money spent, but sound pedagogy was paramount. And this is a technical literacy course for business leaders, not a strategy seminar.
The conversation was genuinely collaborative. The AI proposed restructuring options. I pushed back when they didn't fit my philosophy. It pushed back when I was making assumptions I hadn't examined. Student feedback drove the major structural decisions, but the AI helped me respond to that feedback systematically rather than piecemeal. (AI tools were configured with appropriate privacy settings: Claude Max Pro with model training toggled off, a ChatGPT Education account, and Gemini with data retained at the University of Illinois. If you're considering this workflow, check your own institutional policies and tool configurations.)
What We Produced
Over our conversation, we worked through a comprehensive redesign scope. Not just which videos need updating, but the full architecture.
We rewrote course-level and MOOC-level learning outcomes. We restructured the module flow, merging and consolidating where it made sense and creating space for entirely new content. We estimated carryover for each module and assigned priority tiers: what needs to happen first and what can wait. Specific structural changes included merging the separate machine learning and deep learning modules into a single 'how AI learns' foundation, merging autonomy and robotics into one 'physical AI' module, creating an entirely new generative AI module, and moving ethics from four separate modules to contextual integration throughout the course. Per-module carryover estimates ranged from 70% new content in some modules to 60% retained in others, giving Teaching and Learning staff concrete data for workload planning.
One example: students specifically valued the infrastructure content, mentioning TPUs and ASICs by name. TPUs (Tensor Processing Units) and ASICs (Application-Specific Integrated Circuits) are specialized hardware designed for AI workloads, faster and more efficient than general-purpose processors for tasks like training and running neural networks. Without working through the full picture, I might have cut that material to make room. Instead, we kept it and found space elsewhere.
The output was a comprehensive scope document with learning outcomes, restructured flow, per-module revision estimates, and prioritized implementation tiers. The kind of deliverable that normally takes weeks of meetings between faculty and Teaching and Learning staff.
What I Couldn't Do With It
I had a full revision scope and a rough estimate of the resources required. But the course was going to be taught again in weeks, not months. I didn't have the time or resources to make these changes.
Cheng Li, a senior learning designer at Gies, and I figured out what we could do now to keep the course fresh while postponing the full revision. Our answer was module guides, concise documents that bridge the foundational Coursera videos to current developments in the field. Each module guide follows a consistent template: what the Coursera videos cover, what's new in the field since the videos were created, optional resources in two tiers (quick starts and deeper dives), and preparation for the live session. The consistent structure across all modules means students know what to expect. The videos still teach core concepts. The guides tell students what's new, point them to optional resources, and set up the live sessions where we focus on today's applications.
Cheng had a critical insight that shaped the whole approach. My initial instinct was too defensive about the somewhat outdated course content. He argued instead that we should stop apologizing and frame it confidently: foundational concepts plus current applications equals intentional course design. That reframing came from a human collaborator, not AI. Both mattered.
In keeping with the scientific mindset, I also used Claude to help draft the module guide content, iterating on structure, tone, and detail across another series of conversations. AI augmenting my work, not replacing it.
The course launches again soon, so I don't know yet whether the module guides work as intended, but I'm hopeful. I'm sharing the process, not claiming victory. Still, the scoping experiment wasn't wasted. It gave us clarity about the full picture and a practical interim path until we can prioritize the real revision.
Then I Realized What This Could Mean
I did this experiment out of curiosity, driven by student feedback, over a few days. The full implications, however, didn't hit me until later.
What if Teaching and Learning staff did this for faculty?
Feed the AI a current syllabus, anonymized student evaluations, and developments in the field. Have that same kind of conversation. Not one prompt, but a genuine back-and-forth. The goal: a scoped revision proposal that the faculty member reacts to rather than creates from scratch.
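For T&L teams that want to make this repeatable, the conversation itself can even be scaffolded in a few lines of code. Here's a minimal sketch, assuming the Anthropic Python SDK; the file names, model ID, and prompt wording are all illustrative, not a prescription:

```python
# A minimal sketch of the scoping conversation as a scripted workflow.
# Assumes the Anthropic Python SDK (pip install anthropic) with an
# ANTHROPIC_API_KEY in the environment. File names and the model ID
# are illustrative.
import anthropic

client = anthropic.Anthropic()

syllabus = open("syllabus.md").read()
evaluations = open("evaluations_anonymized.txt").read()  # no student PII

# Seed the conversation with the same constraints I set by hand:
# accurate, timely, future-proof for two to three years, budget-aware,
# and pitched as technical literacy for business leaders.
history = [{
    "role": "user",
    "content": (
        "You are helping scope a revision of an online course.\n"
        "Constraints: accurate, timely, future-proof for two to three "
        "years; budget matters, so prefer fewer changes when pedagogy "
        "allows; this is a technical literacy course for business "
        "leaders, not a strategy seminar.\n"
        "Ask clarifying questions before proposing anything.\n\n"
        f"CURRENT COURSE STRUCTURE:\n{syllabus}\n\n"
        f"ANONYMIZED STUDENT EVALUATIONS:\n{evaluations}"
    ),
}]

# The value is in the back-and-forth, not the first answer:
# read, think, push back, repeat.
while True:
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2000,
        messages=history,
    )
    text = reply.content[0].text
    print(text)
    answer = input("\nYour response (blank line to stop): ")
    if not answer.strip():
        break
    history.append({"role": "assistant", "content": text})
    history.append({"role": "user", "content": answer})
```

The script is just scaffolding. Everything that made my own exercise useful happened in the pauses between turns, where a person read, thought, and pushed back.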
That flips the traditional workflow. T&L staff move from reactive support to proactive course management. And it frees capacity: T&L teams stretched thin by revision scoping have less bandwidth for new course development, even when new courses might be the higher priority.
Starting From What's Actually Achievable
The traditional dynamic can be painful for everyone. Faculty want to change everything. We're academics, of course we do. T&L staff push back because of capacity constraints. Both sides may end up frustrated.
What if T&L staff instead said: "We have capacity for a 25% revision this cycle, and here's what that looks like for your course. We can do a fuller revision in a year. In the meantime, here's how we bridge the gap: module guides, updated assessments, and refreshed live sessions."
Faculty react to a concrete, realistic proposal rather than scoping big and getting negotiated down. The full scope document shows what each tier requires: now, next, and later. Faculty still weigh in on priorities. They just aren't starting from zero, and they aren't getting told "no" without an alternative.
Those per-module carryover estimates are data, not guesswork. They could feed directly into workload modeling and timeline planning. And if the scoping produces precise change orders, like "update this case study in lecture 3 of module 7" rather than "module 7 needs a refresh," even the production side gets easier. Precise change orders also pair naturally with avatar-generated video for surgical updates. Need to update a single case study? Re-script and regenerate just that segment. No studio booking, no faculty scheduling. I've written about this in my "I still haven't found what I'm looking for" thought piece.
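To make the production-side point concrete, here's a sketch of change orders as structured planning data. Everything in it is hypothetical: the field names, tiers, and hour estimates are invented for illustration, not drawn from any real system. The point is that a precise change order can be summed, sorted, and scheduled; "module 7 needs a refresh" can't.

```python
# Hypothetical change orders as planning data; fields and hours are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class ChangeOrder:
    module: int
    lecture: int
    description: str   # e.g., "update case study"
    tier: str          # "now", "next", or "later"
    est_hours: float   # production estimate for this one change

orders = [
    ChangeOrder(7, 3, "update case study", "now", 2.5),
    ChangeOrder(2, 1, "re-record intro with current examples", "next", 6.0),
    ChangeOrder(5, 4, "fold generative AI framing into lesson", "later", 10.0),
]

# Roll up workload by tier: the calculation that turns a scope document
# into a capacity conversation ("we can afford the 'now' tier this cycle").
for tier in ("now", "next", "later"):
    hours = sum(o.est_hours for o in orders if o.tier == tier)
    print(f"{tier}: {hours:.1f} production hours")
```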
The Broader Point
To be clear, this scope document is a snapshot in time, not a finished plan. If I ran the same exercise in six months or a year, the results would likely be different. The field will have moved, student needs may have shifted, and new tools will exist. Yet, that's actually the point. When scoping is fast and cheap, you don't have to commit to one plan and hope it ages well. You can revisit it as conditions change.
This started with student feedback and curiosity. It ended with implications I didn't expect. I was doing exactly what I tell others to do: experimenting with AI to learn how to collaborate with it effectively. The experiment produced something useful, and the process itself revealed a workflow that could matter well beyond my one course.
Faculty expertise and judgment are still essential. AI proposes, humans decide. Student feedback has to come from real evaluations, not AI assumptions. And the module guides are unproven until students engage with them.
But the tools exist now. The question is whether our higher ed workflows will adapt to use them.