The democratization of design
What was once a mere hypothetical is now a tangible shift that we need to plan for
OpenAI made a significant announcement this week: ChatGPT has gone multi-modal, meaning it can now "see, hear, and talk." The feature is rolling out to Plus users over the next week or so, so most of us are still waiting to try it ourselves. Luckily, a few people with early access have shared videos that give us a glimpse of what this shift looks like in practice.
And when I say shift, I truly mean it. I have four different articles titled "The Democratization of Design" sitting in my drafts, waiting for examples that would fully support the change we're now experiencing. Honestly, none of them felt complete until now.
Multi-modal turns everything on its head. I'm still processing what it all means.
First, let's take a look at some of the examples emerging this week:
ChatGPT can “see,” which means it can translate the text in images. Here's an example posted by Pietro: