Imagine a world where any toy idea or concept you could visualize in your mind could be instantly translated from text into a jpeg. Let’s say you envisioned a dancing plush hippo for a product or character but your sewing or illustration skills were sub-par. Still need a way to show your colleagues or customers what that hippo could look like? Enter DALL-E.
“dancing plush hippo in a tutu”
The images above were generated by artificial intelligence in 5.7 seconds. None of these images existed before.
While still in development, DALL-E may ultimately revolutionize creative processes across many industries, and it holds particular potential for the toy industry. Companies from small shops to enterprises, inventors, and licensors can all benefit from adding this software to their R&D arsenal.
DALL-E is an artificial intelligence software created by OpenAI that can generate digital images from text descriptions. Its name is a mashup of Pixar's WALL-E and Salvador Dalí, the famous artist known for his surrealist paintings.
While AI might conjure futuristic images of sentient robots serving you breakfast, this version of AI is available today and could make a profound impact in your toy business or career.
This software offers much more than just an abstract art tool, and toy industry professionals should take a closer look at how it could help (or hurt) them in the near future. While some see this tech as an entertaining mashup of new ideas with old art styles, I believe it's much more than that. DALL-E can be utilized by inventors, designers, founders/executives, and licensors. One just has to think a little outside the box about how to use it.
What makes DALL-E so incredible is that it doesn’t just find images from the internet that are relevant to the text input and regurgitate them. It actually manipulates and rearranges multiple elements in novel ways without explicit instructions.
For those of you who haven’t explored its capabilities, here are some of my favorites from their Instagram @openaidalle:
“A photo of Michelangelo’s sculpture of David wearing headphones DJing”
“Giant gold Rubik’s cube on the ground on earth, rendering, detailed, zoomed out”
“A portrait of an avocado made of yarn” 🧶
When I heard about this technology, I was floored. It was in closed beta testing, so I immediately applied for my company, Sky Castle, to join the beta program. OpenAI had only invited 10,000 people to join up to that point, and I didn't think their acceptance criteria would classify a toy company as a relevant applicant, so I was astonished when we were accepted a few days later! (I didn't get much work done that week.)
Here's how it works: enter any text prompt and DALL-E spits out four images. OpenAI last week announced it would be expanding the beta program by 1 million more people; the software now costs $0.13 per prompt, and you can save any image you create. OpenAI states that all images generated are copyright-free, giving you complete freedom to use any image created for personal or commercial purposes (e.g., children's book illustrations). However, images you create may infringe on existing copyrights, so user beware.
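For readers who would rather script that "one prompt in, four images out" workflow than click through a web page, here is a minimal sketch assuming OpenAI's `openai` Python package and its Images API; the helper and function names (`build_image_request`, `generate_concepts`) are my own, not part of any official interface:

```python
import os

def build_image_request(prompt: str, n: int = 4, size: str = "1024x1024") -> dict:
    """Bundle a text prompt into request parameters: one prompt in,
    n candidate images out (DALL-E's web beta returns four)."""
    return {"prompt": prompt, "n": n, "size": size}

def generate_concepts(prompt: str) -> list[str]:
    """Send the prompt to OpenAI's Images API and return the image URLs.
    Assumes `pip install openai` and an OPENAI_API_KEY env var."""
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.images.generate(**build_image_request(prompt))
    return [image.url for image in response.data]

# Usage (each call incurs the per-prompt fee):
# urls = generate_concepts("dancing plush hippo in a tutu")
```

Each call returns hosted URLs for the generated images, which you can download and drop straight into a pitch deck or concept board.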
When I was first given access to the platform, I started out entering random text prompts of funny art styles and futuristic images. It didn't take long before I started thinking about how this game-changing software could assist me in my business: "Wait…so all my chicken-scratch toy idea sketches could actually be rendered by AI?!" As long as I could articulate the idea simply and concisely, the software could generate images conveying what I was envisioning, perhaps much better than my sketches could.
Here are some bad toy idea examples:
“Collectible toy superheroes worn like jewelry”
“Mini Yeti Riding in a Kid’s backpack”
“Water bed game board”
While these might not be perfect image generations, this technology is in beta, and the level of “creativity” this software is already demonstrating is revolutionary.
I put “creativity” in air quotes because that’s not really what this program is doing. It’s using captions from billions of internet pictures to hack together new images based on a text prompt, but with the skill of a professional Photoshop designer.
Are you starting to see the potential here? Need a product idea render? Plug your text prompt and see if DALL-E can get you something workable before hiring a professional. Or at least, give yourself some sense of direction and inspiration you can share with a designer so you don’t have to rely solely on verbal or written direction.
Some are concerned this technology will steal jobs from designers and other creatives. That concern may be well-founded, but at least in the short term, I think this will be a useful tool that helps designers and non-designers collaborate on toy concepts.
One awesome aspect of this technology is that you can control the style of the output images. Whether you want digital art, a 35mm camera lens, or a crayon drawing, you can specify your medium.
Here is the same text prompt with three different filters:
“Collectable Plush Watermelon Bracelet, pencil sketch”
“Collectable Plush Watermelon Bracelet, 3D Render”
“Collectable Plush Watermelon Bracelet, photorealistic”
Need a kid wearing your concept for a pitch deck? DALL-E won’t show human faces, but here’s what you can get for concept-stage product presentations:
“Kid wearing Collectable Plush Watermelon Bracelet”
If you're working from home and missing whiteboard brainstorm sessions with colleagues, this tool can be an awesome sounding board. DALL-E is great to spitball concepts with, even if what it dishes out seems like it originated from the depths of a psychedelic trip.
DALL-E has been super useful for us at Sky Castle. One of our products, DoodleJamz, is a sensory art board full of squishy gel and beads to shape and doodle on swappable backer cards. We offer hundreds of free backer card designs on our website DoodleJamz.com for kids to print out at home and insert into their DoodleJamz for endless playability.
We’ve been utilizing the DALL-E technology to create lots of different backer card designs. Here are some that we will be incorporating into our holiday seasonal category on our website later this year.
We believe we’re just scratching the surface on how to properly use this technology!
For now, this technology may seem like mere entertainment, but it will become so much more. Soon, artificial intelligence will be able to create videos and animations, maybe even full-length feature films or music albums. What if DALL-E could develop a custom rollout of NFTs? Design a full lineup of doll personalities? Configure the block matrix for a new Lego vehicle?
While it might not seem that important now, this technology will soon be interwoven into the fabric of our creative industry, and the toy companies and professionals who realize this now and learn to utilize its powers may reap huge benefits.
DALL-E is like the first-gen video game console Atari. When it launched in 1977, there wasn’t massive consumer adoption or robust gaming capabilities, but it set the stage for the evolution of gaming. Forty-five years later, avid gamers are battling alien fighter jets in virtual reality, and the vast majority of young consumers play video games on a daily basis.
Some say that ignoring DALL-E and its capabilities could be like ignoring instant messenger in the nineties. This technology will soon become so robust that human and AI designs will be indistinguishable from each other.
Article Reference: https://peopleofplay.com/blog/dall-e-and-how-artificial-intelligence-will-affect-the-future-of-toy-design