4 Things Ethan Learned from Facilitating a Sprint about Bias in AI Imagery
For the past five years, I’ve collaborated with the Department of Design at Ohio State University to guest lecture and mentor visual communication design students. These visits give students insight into the working world while keeping me connected to fresh perspectives and evolving design mindsets.
This time around, my colleague Shane Richardson and I partnered with Ohio State Design faculty to help facilitate a week-long sprint focused on addressing bias in AI-generated imagery.
What the heck does that mean?
Students ran a series of tests, prompting Adobe Firefly with phrases such as “a teacher in a classroom,” “people exercising in a gym,” and “a happy couple.” They then documented their observations, synthesized their findings, and created a final poster commenting on the biases they saw in the generated images. You can see some of their final designs here.
While I was there to observe and advise, this was also a great learning opportunity for me! The experience reinforced the critical components of user research: asking the right questions, setting clear goals, and never leaving an insight without a “so what?”
Over the course of the week, I learned:
Mentoring these students was a reminder that even as AI comes for parts of our jobs, the uniquely human skills of critical thinking, creativity, and ethical decision-making set us apart and remain essential to shaping meaningful design solutions.
Curious about how your product’s users are thinking about AI? Check out ZoCo’s report here.
Tempted to implement AI in your digital tool? Check out my article on my process for designing an AI product.