How Team Rolfes uses motion capture suits to create wild interactive experiences

Motion capture is typically used in the production of visual effects for films and video games, and VR is often thought of as a solitary experience. But Sam and Andy Rolfes have taken these behind-the-scenes tools and repurposed them for live, interactive, audience-driven shows. As Team Rolfes, the brothers work together as a design studio that relies heavily on abstraction and symbolism to create performance art that everyone in the room can enjoy.

During a Team Rolfes show, at least one model or dancer is strapped into a motion capture suit. Their movements drive digital actors on screen in real time, set to live music. The scenes are bizarre and hyper-stimulating, captured by VR-controlled cameras. Everything is live and reactive. At a recent MoMA performance, audience members could upload photos directly from their phones, and the images would end up on the models in an apocalyptic wasteland in real time.

In addition to their personal work creating live shows that integrate dance, fashion, and music, they also work with brands, making visuals for Nike, Adult Swim, and Super Deluxe. The duo currently has a residency at the Superchief Gallery in Brooklyn, where we caught up with them to find out more about the theatrics of bringing hardware and software together.

This interview has been lightly edited for clarity and concision.


What is your background and how did you end up in this?

Sam Rolfes: Andy and I both come from a kind of painter's background. We started with screen printing and mixed-media painting, and that subsequently developed into semi-digital things. Our mother had a 3D studio when we were little, so we were briefly introduced to Blender and things like that. But we found the wireframe meshes and all that a bit unimpressive — just sterile and boring. I don't think our interest lasted longer than a few years.

Andy Rolfes: No, because you had to move vertices around to make swords. It was like, "Well, 3D is math." What is this thing, and why is it so hard to make something cool?

SR: Yes, so we lost interest in that. But after I graduated from art school, I was part of various music scenes, making album art and flyers with a certain amount of digital elements — but always from a painter's background — and I found a 3D sculpting program that actually felt like modeling digital clay. That made it a lot more expressive than the purely technical tools we had tinkered with before.

AR: For the record, it's ZBrush.


SR: ZBrush. So that's how we could break into 3D and start making things. And once you're able to create assets — characters, objects — and iterate on them there, it's a bit of work, but not much, to set them up and animate them. I misused it, which is a recurring theme in our work: I used it almost immediately for live shows.

There are a few things I wanted to dive into. Andy, you said you do a lot of modeling. How is the process divided between the two of you?

SR: It trades off a bit, because Andy and I make different kinds of models. His are more classical and fashion-inspired — more realistic.

AR: Yes, one style is more romanticized, humanistic, surrealist stuff, and for the other I make 3D brushstrokes to build up the forms. So it's much more fashion-inspired.

SR: And mine in general are a bit more destroyed and abstract, or gnarled in some way — which I'm actually trying to get better about.


In general, in all our cases, the character personalities stem from technical limitations. This suit can't move well across the stage, so we're generally locked in place. And because of the sensors, it drifts over time, so the characters become increasingly gnarled. So mine are generally less human.

Does that lead you to play the more gnarled, less human characters?

SR: Yes. We have this other suit that is more precise and can move across the stage. It doesn't glitch as easily. So that one is for more human, more representative characters.

I also play the puppet characters, which are generally the least human. We build many different types of characters based on the input controls.

Can you play a character, record the animation, play it back live, and then go back and interact with that character yourself? Or is it one performer per character at a time?


SR: You can have several characters moving based on my movement. We can set which suit drives which characters. I move around, but we've tied them so they're just attached at the hip. That's partly because the sensors in the suit are so sensitive. When I first started doing it, I didn't realize that. The characters would just fly off into the stratosphere. I'd walk into the next room and there would be nobody there, and it would be like, "What do I do?"

So in the past, all of these characters were controlled by me at the same time, and we'd go from scene to scene — that's how I'd control the progression. I'd start simpler, then go bigger, then smaller.

Now we have two suits and they can move around a little more. So we can have a dialogue and let several different character pairs interact. But as for playing against myself: I did that for Adult Swim, where I recorded the main character's performance here, then put on the VR headset so I could see myself, and played against myself like that.

That's not exactly real time, because we recorded it and played it back. It was created live in a sense, but the two performances don't happen at the same time. Theoretically, you could loop and replay animation, which is an interesting idea. But I'd personally want to avoid that on stage, because then it becomes a question of: what is live? And what is canned?

Can you talk about the hardware you use and how you prepare for a show?


SR: So live, we now use two suits. It depends on the venue, but we have this Shadow mocap suit. They've sponsored us, and that's the baseline we started with; they helped us out a bit. We also just got this mocap suit from Xsens that can move around the stage more. It's a bit more shielded from electromagnetic interference and things like that. And that's for full-body tracking.

We also use the Vive, for all the other spatial stuff. We're increasingly using these Vive trackers for various stage props. But the most important elements are really just the motion controllers. I don't use the headset at all. I don't like it.

My problem is with the emerging experimental tech-art world and its relationship to increasingly vertically integrated megacorps as benefactors. That relationship becomes almost uncritical if it doesn't really take into account the, I don't know, the conflict of interest, perhaps. I have yet to see anyone do anti-capitalist work for Google.

Our practice comes entirely from choosing these tools specifically because they relate to the body and expression — not just because they're the newest thing. I think that's a core principle of our studio.

I want to ask about the software. I see you're using Unreal Engine. Was that an aesthetic choice over Unity?


SR: I tried both for the first video we did. And I soon realized that Unreal's visual scripting is really intuitive for me, because I'd learned Max/MSP and other visual scripting tools at art school.

AR: Well, in short, it's much quicker to make something beautiful in Unreal than it is in Unity. I worked in Unity for a few years, and it was fun and very stable, and I could do a lot with it — God knows there are plenty of indie people who use it. But Unreal just looks good at this level out of the box. In Unity you have to buy so many plug-ins to get it there: "Okay, I made it look nice with all this post-processing, and now I can actually do visual scripting." I think they might have only just gotten visual scripting for materials, and you still need extra plug-ins for all the things that come built into Unreal. So it's the ease of use, much of it.

SR: Granted, making it run efficiently from that point is much harder in Unreal than in Unity.

You've done work for Adult Swim and Super Deluxe. When a project like that comes up, do they come to you and say, "Make us something"?

SR: Yes, they come to us in different ways. Out of the blue they'll say: "We have a layout. Can you make a quick animation for that?"


With Super Deluxe, we started a relationship and tested a lot of these live things two years ago, when we'd barely gotten a handle on how to do it, and they were super open to experimenting.

Music is perhaps the more typical example, where it's like: "We have a song. We have a sort of motif. Do you want to develop something for that?" But the way it's usually worked, my preference has been for the musician to bring us in because they feel something suggestive in our work that matches theirs, instead of us just being part of the video commissioner's Rolodex, where they hit everyone up whenever they feel like: "We need a weird video. Let's get one of the weird guys."

So this is all visually scripted?

SR: Yes, all of it.

All right. So it's not C# or anything? It's all in the editor?


SR: Fully visually scripted, yes. If I had the money and the time, I would hire my most frequent developer, Eric, who works at Meow Wolf. They're a big art collective in Santa Fe. They actually hired half of my team — our developer, our network guy, our producer all moved to Santa Fe and started working with them. We've done a bit of work with them, but they're all there because they can have a steady income.

What are you doing while you're at Superchief? What's next?

SR: We've already recorded our motion capture for the Adult Swim video here. We rehearsed here with Justin for the MoMA thing. Superchief has been an incredibly generous benefactor, because we've been able to use the whole space.

We've just returned from a few dates in Australia. We played Dark Mofo festival with Marshstepper, which is this big crazy choreographed thing with 10 people on stage and guest musicians. There was a lot going on — two stages, multiple projectors. It's so wild.

Then we go to Berlin for Berlin Atonal with Marshstepper again. We're bringing Raymond Pinta, who we've worked with here before. There's this giant vertical screen in Kraftwerk — it's three stories tall, just a giant vertical screen — and we'll do a live feed downstairs on stage.