The Innodative Disruptor

I Still Haven't Found What I'm Looking For

When people learn what I do, they're often intrigued. As Chief Disruption Officer, I spend most of my time contemplating the future. But despite their interest, I've learned that the hard part isn't the seeing; it's getting others to see what I see.

For years, I tried to communicate the future in my head. I would explain, hand-wave, and argue about what I saw clearly. And I would watch people nod politely without necessarily understanding.

The problem wasn't them. It was me. Specifically, my communication. I'm an astrophysicist by training, trying to explain AI implications to business people and institutional leaders. What was clear in my head—what I could see developing—didn't translate through words alone. The gap between what I could see and what I could show was wider than I'd realized.

This struggle reminds me of the U2 song "I Still Haven't Found What I'm Looking For": "I have climbed the highest mountains, I have run through the fields ..." but I still hadn't found a way to help them see what I could see.

SEEING IS BELIEVING

In late 2021, I was fortunate to work with Jake Kinsey, who had the insight to create a demo. We made an AI-generated avatar of me speaking: one version in English (the original November 2021 demo; the quality is impressive for 2021, but clearly synthetic), a second in Mandarin (impressive, but clearly not fluent). The technology, primitive at the time, worked by taking video of me, regenerating the audio to match a new script, and then modifying my lip movements to sync with the new audio.

Of course, I don't speak Mandarin; however, even I could tell my avatar's Mandarin wasn't right. But that almost made it more powerful. These two videos showed what was possible and where this field was heading.

I first showed this demo to Dean Brown. When I saw his reaction, I knew immediately this demo worked. This was what had been in my head, but now made tangible. Something people could actually see and share.

This seemingly simple demonstration helped shift our college's thinking about online education at scale. Not because the explanation was clearer, but because the demonstration made abstract capabilities real. The visual evidence carried authority that my words never could. We saw a new path to democratize education, regardless of location or language.

Seeing really was believing. The future of avatars felt clear. What I didn't yet see was how much more there was to show.

THE FIRST AVATARS: EXPANDING BELIEF

In general, people's reactions were similar: surprise, curiosity, some discomfort, but mostly fascination. We weren't talking about theoretical futures; we were watching them. Our conversations now used a shared language, and we could easily bring others along by showing them.

This is how you really drive institutional change. Not through position papers, but through demonstrations that shift the frame of what people believe is possible.

For a long time, visual evidence was enough. Seeing was believing. A video of something meant that something happened. Presence, real visual and audible presence, carried inherent authority.

But technology does not stand still. And neither does the meaning of what we see.

BUILDING THE FOUNDATION

Since the first demo, we haven't stood still. In 2023, with my graduate student Eamon Bracht and Sam Chen, the former Director of the Gies Disruption Lab, we created our first commercially developed avatar, using the ElevenLabs platform for voice synthesis and the Synthesia platform to generate matching video. (ElevenLabs is an American AI voice technology company, valued at $11 billion in a 2026 Series D, whose platform specializes in voice synthesis and cloning for realistic text-to-speech and voice replication. Synthesia is a UK AI firm, valued at $4 billion in a 2026 Series E, whose platform supports standard and user-created digital avatars.) Initially, this was an improved demo to highlight how the technology had changed. Unlike our original demo, this wasn't manipulated video; it was completely AI-generated.
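For the technically curious, here is a minimal sketch of what that two-platform pipeline can look like in code. To be clear, this is an illustration, not our actual production workflow: the endpoint shapes reflect the two platforms' publicly documented REST APIs as I understand them and may have changed, and the API keys, voice ID, and avatar ID are placeholders you would supply yourself.

    import requests

    ELEVENLABS_KEY = "your-elevenlabs-api-key"  # placeholder
    SYNTHESIA_KEY = "your-synthesia-api-key"    # placeholder
    VOICE_ID = "your-cloned-voice-id"           # a voice cloned from your recordings
    AVATAR_ID = "your-custom-avatar-id"         # an avatar trained on studio video

    def synthesize_voice(text: str, out_path: str = "narration.mp3") -> str:
        """Generate speech in the cloned voice via ElevenLabs text-to-speech."""
        resp = requests.post(
            f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
            headers={"xi-api-key": ELEVENLABS_KEY},
            json={"text": text, "model_id": "eleven_multilingual_v2"},
        )
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            f.write(resp.content)  # raw MP3 bytes
        return out_path

    def request_avatar_video(script: str) -> str:
        """Ask Synthesia to render the avatar speaking the script (an async job)."""
        resp = requests.post(
            "https://api.synthesia.io/v2/videos",
            headers={"Authorization": SYNTHESIA_KEY},
            json={
                "test": True,  # watermarked test render
                "input": [{"scriptText": script, "avatar": AVATAR_ID}],
            },
        )
        resp.raise_for_status()
        return resp.json()["id"]  # poll GET /v2/videos/{id} until it completes

    if __name__ == "__main__":
        script = "Welcome to Emerging Technology and Disruption."
        print("Audio written to", synthesize_voice(script))
        print("Synthesia render job:", request_avatar_video(script))

The division of labor mirrors the paragraph above: one platform handles the voice, the other generates the matching video.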

With help from Tim Anderson and Steven Pratten, we turned this new demo into a robust avatar (the first production avatar, a significant step up from the original 2021 demo) that was used for several videos in my new online course on Emerging Technology and Disruption. The avatar wasn't perfect (my mother immediately spotted a flaw: it mispronounced my last name), but it was actionable.

A year later, with help from the Teaching and Learning staff at Gies, we built a second, much-improved avatar that we used to record almost all of the videos for a new online course entitled AI for Business. (The full teaching and learning team won a college-level award for their work adopting AI in the creation of this new online course.) To maintain consistency, I wore the same outfit for my live-recorded videos and for the avatar training videos. There were no obvious visual cues that might indicate a difference.

But, to be clear, we were transparent about the process. We even created a video where I share the screen with my avatar: we both introduce ourselves, and I then casually tell the avatar, "Really, I think I've got this video." (The side-by-side introduction is embedded in the full AI for Business course highlight webpage: full transparency, showing students exactly what they were seeing.) My mother noticed my last name was now pronounced correctly!

Outside of the studio, Gies developed what I believe is the first Memorandum of Understanding between a college and faculty regarding avatar rights and expectations. (The MOU isn't perfect, but it was designed to be updated over time as we learn more about what works and what doesn't.) Together, we were learning in real time what policies this technology required.

When I first taught AI for Business in Spring 2025, students had access to all of the videos, both human and synthetic. We did not hide the existence of the avatar. Yet, in a student-led discussion about the use of the avatar, the responses were illuminating: some students admitted they hadn't noticed a difference or said it didn't matter, some hadn't realized an avatar was being used at all, and some claimed they had known all along.

Eventually, students christened the avatar "Professor Robert Burgundy" after Ron Burgundy from the movie Anchorman, because like the fictional news anchor, the avatar reads whatever it is given. (A student created an image of Professor Robert Burgundy; I liked the student's work so much I made it my course avatar on Canvas.)

The variation in students' responses was interesting, but it didn't fundamentally alter my thinking. These were static, pre-recorded videos. The interaction was one-way; students watched me or my avatar speaking. But I could already see the next step: what if students could stop and ask questions? What if the avatar could respond?

THE TECTONIC SHIFT

The next step in this journey took place in the fall of 2025, when my graduate student Xinyao Qian and I built our first interactive avatar using the HeyGen platform. (HeyGen is an American AI startup with an estimated valuation of $500 million based on their 2024 Series A; their platform includes pre-built and user-created standard and interactive avatars.) Four years after my first avatar demonstration to Dean Brown, I was demonstrating this new avatar to Dean Elliott. The visual quality is impressive: the new avatar looks and sounds like me. But like the first demo, this one had flaws. (The recorded demo shows the interactive avatar responding to "Who are you?"; notice the 10-20 second lag between question and response. The technology is almost there, but not quite, and that gap won't last.)

Additional demonstrations:

Who's the best professor?: testing for self-promotion

Top three learning points: processing course content

Say I'm dumb: boundary testing

The delay between question and response is too long; this version is essentially unusable. Yet, just like the first demo, this new demo provides a glimpse of the future.
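To make that lag concrete, here is a rough sketch of how one might measure it. The InteractiveAvatarSession class below is a hypothetical stand-in, not HeyGen's actual SDK; it simulates the 10-20 second delay we observed so the timing harness has something to measure.

    import time

    class InteractiveAvatarSession:
        """Hypothetical stand-in for a platform's streaming-avatar client."""

        def ask(self, question: str) -> str:
            # A real session would stream audio and video of the avatar
            # answering; here we simulate the lag we observed in our demo.
            time.sleep(15)
            return "I'm Professor Robert Burgundy, your course avatar."

    def measure_lag(session: InteractiveAvatarSession, question: str) -> float:
        """Return the seconds between asking a question and getting a response."""
        start = time.monotonic()
        session.ask(question)
        return time.monotonic() - start

    if __name__ == "__main__":
        lag = measure_lag(InteractiveAvatarSession(), "Who are you?")
        # Conversation tolerates roughly a second of silence; a double-digit
        # lag is what makes this version essentially unusable.
        print(f"Question-to-response lag: {lag:.1f}s")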

The new Professor Robert Burgundy won't get tired, won't get sick, and is always perfectly lit and clearly audible. The vision is now clear: scalable expertise, a global reach, 24/7 availability, and dissolved language barriers. Personal, interactive lectures at any time of day or night. Open office hours, all the time. And content that never gets stale: the interactive avatar can simply reference updated information whenever it becomes available.

In four short years, we went from hand-crafted manipulated video, to fully synthetic generation, to near real-time interactive dialogue. The capability is expanding rapidly, and the remaining gaps increasingly look like engineering challenges rather than fundamental limitations.

It was just another demo to the dean, but this time something felt fundamentally different. Not fear, but an awareness that the future will be different.

AUTHENTICITY OR NOT?

For centuries, seeing has been believing. (The phrase likely derives from the Bible, specifically John 20:29, where Jesus says, "Have you believed because you have seen me? Blessed are those who have not seen and yet have believed." The same passage gave us the phrase "Doubting Thomas.") Visual media implied trust; interactivity implied humanity. This new demo, however, implied something different.

In a world of synthetic media, authenticity must be external to the content; it can no longer be implied.

We can now build interactive, synthetic content. But we don't know how to define authenticity.

How do I confirm reality? Or confirm approved synthetic creations? How do you know what to trust when visual and audible evidence, even interactive evidence, is no longer sufficient?

As Bono sang, "I still haven't found what I'm looking for."

Each new avatar gets me closer to conveying the future I see—and reveals a new challenge I hadn't anticipated. Static demos had quality issues. Better quality revealed other limitations. Interactive avatars address those, but make authenticity urgent. We've built the avatars. We can see the trajectory. But we haven't yet found what comes next when interactive synthetic media becomes commonplace.

These aren't abstract questions. Professor Robert Burgundy is already here, and soon could be interactive. Change may seem slow, but looking back over the last few years, our avatar work demonstrates the opposite. The shift from assumed authenticity to signaled authenticity is not optional. But it must be intentional.

The future is visible. The question now is whether we are prepared to interpret what we see.


This article was developed with AI assistance for research, outlining, drafting, and editing. All ideas, experiences, and perspectives are my own.