Last spring, clothing brand Levi Strauss & Co. announced plans to introduce “AI-generated personalized models” to its online shopping platforms. These “body-inclusive avatars” would come in a variety of sizes, ages, and skin tones and would help Levi’s create a more “diverse” line in a way the company considered “sustainable.” Many (real) people were shocked. Why not give those jobs to real humans of the size, age, and skin tone Levi’s was looking for? Was “sustainable” simply PR-speak for “cheaper”? Levi’s later affirmed its “commitment to supporting multicultural creatives behind and in front of the camera.” But it didn’t abandon the partnership with Lalaland.ai, the Amsterdam-based company that created the models. (It’s on pause until Levi’s can formulate internal AI guidelines.)
That controversy put Lalaland on the map and led more major brands to pursue AI-generated models, says Duy Vo, Lalaland’s creative director. WIRED sat down with him to find out how to get an algorithm to smile correctly and not grow extra fingers.
The first step in creating the models is research. I look at what kinds of models walk the catwalks. I follow the latest trends in ecommerce. I find patterns, like what kinds of faces are in fashion this season. In some ways, the work I do now is similar to my former job as a fashion photographer for big magazines like Vogue and Harper’s Bazaar. I ask clients what kind of collection they want, what kind of model they envision. They might say something broad, like wanting the aesthetic of a Quentin Tarantino movie. I pull looks, photographs, and data from those images. Then we send it all to the machine learning team and essentially create a new persona on demand.
We start by making 3D models of a body. On top of that, we use generative AI to create the identities clients want to display, with different ethnicities, hair colors, and appearances. You can add freckles, slightly alter smiles, and add nail polish: all the finishing touches on a model. With AI, I am a photographer, a hairstylist, and a makeup artist all at the same time. Then you might have to revise the design based on client feedback, with further prompts or in Photoshop. Simply fixing something like a hairstyle and making sure it still works in all poses can take days.
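To make that layering concrete: one common way to put an identity on top of a fixed 3D body is image-to-image diffusion, where a rendered body is lightly repainted under a text prompt. Lalaland hasn’t published its pipeline, so the sketch below is only a generic illustration; the model ID, file paths, and parameter values are assumptions.

```python
# A minimal sketch of identity generation over a 3D render, NOT Lalaland's
# actual pipeline. Model ID, paths, and parameters are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

body_render = Image.open("body_render.png").convert("RGB")  # hypothetical 3D render

image = pipe(
    prompt="studio photo of a fashion model, light freckles, auburn hair, subtle smile",
    image=body_render,
    strength=0.35,       # low strength preserves the body's pose and proportions
    guidance_scale=7.5,
).images[0]
image.save("model_identity_v1.png")
```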
Then we “dress” the models. Many of our clients already use 3D software to design clothing, so it’s easy: we simply import those files and render the garments on our models. But not all brands design in 3D. In those cases, we collect garments from the brands and send them to a partner who can digitize them. They re-create the patterns, the fabrics, the textures, and all that.
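As a rough illustration of that import-and-render step (again, not Lalaland’s actual tooling), here is what it might look like scripted in Blender’s Python API, assuming the digitized garment arrives as an OBJ file; all file paths are placeholders.

```python
# A toy sketch of "dressing" a model in Blender, assuming digitized
# garments arrive as OBJ files. Paths are placeholders; run inside
# Blender (3.3+), which ships its own Python with the bpy module.
import bpy

# Bring the avatar body and the digitized garment into one scene.
bpy.ops.wm.obj_import(filepath="avatar_body.obj")
bpy.ops.wm.obj_import(filepath="garment_denim_jacket.obj")

# Render the dressed model to a still image.
scene = bpy.context.scene
scene.render.image_settings.file_format = "PNG"
scene.render.filepath = "//dressed_model.png"  # saved next to the .blend file
bpy.ops.render.render(write_still=True)
```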
AI can hallucinate. We have seen horror shows: models with three heads, or one head attached to a knee. Hands and feet are still difficult to fix; they appear with too many fingers or toes. You have to go back and have the AI try again. My role is to heal and guide the system to create attractive people and filter out all the bad things.
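In code, that “try again” workflow is essentially a generate-and-filter loop. The skeleton below is hypothetical: generate_candidate and passes_anatomy_check are placeholder names standing in for the real generator and for whatever review, human or automated, catches the extra fingers.

```python
# Hypothetical generate-and-filter loop, not Lalaland's code. Both helper
# functions are placeholders for a real image generator and a real anatomy
# check (in practice often a human reviewer or a trained detector).
from PIL import Image

MAX_ATTEMPTS = 5

def generate_candidate(seed: int) -> Image.Image:
    """Placeholder: call the image-generation pipeline with this seed."""
    raise NotImplementedError

def passes_anatomy_check(image: Image.Image) -> bool:
    """Placeholder: reject images with extra fingers, toes, or heads."""
    raise NotImplementedError

def healed_generation() -> Image.Image:
    # Re-roll the generator until a candidate survives the filter.
    for seed in range(MAX_ATTEMPTS):
        candidate = generate_candidate(seed)
        if passes_anatomy_check(candidate):
            return candidate
    raise RuntimeError("No acceptable image after retries; escalate to a human.")
```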
The salary for a position like this would be comparable to a tech job in the US, around $100,000 to $120,000, though salaries are a little different here in Amsterdam, which makes it hard to compare with Silicon Valley. You don’t need to know how to code to do this job. You need to know what the technology is capable of, but you also need to understand fashion and its history, and have good instincts. Anyone from the traditional fashion space could transition into this in a few weeks or months.
It is still difficult to make a complete advertising campaign with AI. Fashion is very specific and needs to be replicated exactly. Factor in things like lighting, and making everything look good gets difficult. You’ll still want traditional image makers to create beautiful photographs. AI is more a tool for creating images for commerce. But if you can communicate your message through synthetic images, why wouldn’t you?
– As told to Amanda Hoover