The musical I finished this year- which you'll see soon- was my first foray into doing my own motion capture, both body and face-
What I used for body mocap was-
Perception Neuron 3 sensors + suit- If I had to review it I'd give it a 6/10. It's pretty straightforward to set up, and when it works, it works well- BUT a constant issue I had was that the shoulder, elbow and knee sensors would drift over time- within a minute or two- so I had to constantly stop and reset the T-pose. I don't know how many shots it ruined by making a shoulder flip forward, the knees bend outwards or the elbows hyperextend. Putting on the suit and the whole process was annoying and time consuming to do alone while also running the laptop for recording.
So it became something I dreaded doing toward the end- it took 7-15 minutes each session to calibrate the sensors and put on the suit, and it would ruin take after take by flipping joints for no apparent reason. I ended up limiting how fast I moved, since fast movement seemed to trigger the drift much more. It dampened my enthusiasm over time, as I might perform a good take just for the suit/software to ruin it.
So after doing a film with it- if I had to use it again I would, but I don't really want to- so I'm looking at/testing other options
I checked out camera-based solutions, AI solutions, VR gear solutions, and expensive Vicon setups, but what looks like the best body mocap solution for my next project is the Sony Mocopi- I purchased one but won't be able to get it until the end of the month
It's quick to put on/take off, the results look good enough, and it has Unity integration- and it happens to be about 1/10 the price of suits/setups like the Perception Neuron 3 or Rokoko
Both the PN3 and the Mocopi stream animation data to Unity (BVH streaming), so the same Unity setup can record from both sources
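For context, BVH is a simple text format: a skeleton HIERARCHY section followed by a MOTION section of per-frame channel values. Here's a minimal sketch of parsing the MOTION part- the sample data is made up, and a real PN3/Mocopi stream carries a full skeleton with many more channels per frame:

```python
# Minimal sketch of parsing the MOTION section of BVH text.
# (Hypothetical sample data; real streams have far more channels.)

def parse_bvh_motion(text: str):
    """Return (frame_time, frames) from the MOTION section of BVH text."""
    lines = iter(text.splitlines())
    # Skip ahead to the MOTION section
    for line in lines:
        if line.strip() == "MOTION":
            break
    frame_count = int(next(lines).split(":")[1])   # "Frames: N"
    frame_time = float(next(lines).split(":")[1])  # "Frame Time: 0.0333333"
    frames = [[float(v) for v in next(lines).split()]
              for _ in range(frame_count)]
    return frame_time, frames

sample = """MOTION
Frames: 2
Frame Time: 0.0333333
0.0 90.0 0.0 10.5 -3.2 0.0
0.1 90.2 0.0 10.6 -3.1 0.0"""

frame_time, frames = parse_bvh_motion(sample)
print(frame_time, len(frames), frames[0][3])  # 0.0333333 2 10.5
```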
For face mocap I used
Unity Face Capture + iPhone 12- The free Face Capture app works really well, though it's a shame they just deprecated it 🤷‍♂️ It should keep working for a while though. After doing some research I found the iPhone 12 does the best depth tracking, which the app requires- so I got a used one for ~$200 USD. I didn't have a way to record the body and the face at the same time, so I recorded the face capture in a separate pass- which felt natural, since it was just lip syncing to a song.
For both the body and the face capture I had to manually control recording by pressing a start/stop button in Unity- not that big of a deal for face stuff, but for body mocap it was a hassle, especially because getting close to the laptop causes magnetic interference in the suit for a short time, making the sensors inaccurate.
So what am I changing for my next big production?
For the time being I'm going with the Mocopi for body motion capture, and I rigged up some tools to make the work much easier/smoother-
For the series I’m planning on doing I want to do a lot of improvised comedy dialogue- so I’d need to record body mocap, face mocap and an audio track all at the same time-
So with Unity Recorder + Unity Face Capture + Audacity + some custom code + a custom AutoHotkey script + VoiceAttack, I was able to rig things up so that…
Body mocap, face mocap, and an audio track are all recorded and synced together at once- and it's all triggered via voice commands, so I don't need to touch the computer at all while doing motion capture.
I just say “Start recording” and it records it all- I say “Stop recording” and it saves all the data to disk- then I just keep saying start/stop recording to record additional takes-
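I don't know the author's actual scripts, but the control flow above is easy to picture- a recognized phrase toggles recording and bumps a take counter. A minimal Python sketch of that state machine (the log list stands in for the real button presses that VoiceAttack/AutoHotkey would send to Unity and Audacity):

```python
# Sketch of the voice-triggered start/stop flow (hypothetical names;
# in the real setup VoiceAttack hears the phrase and an AutoHotkey
# script presses the actual record buttons).

class TakeController:
    def __init__(self):
        self.recording = False
        self.take = 0
        self.log = []  # stands in for pressing the real buttons

    def on_command(self, phrase: str):
        if phrase == "start recording" and not self.recording:
            self.take += 1
            self.recording = True
            self.log.append(f"take {self.take}: start body+face+audio")
        elif phrase == "stop recording" and self.recording:
            self.recording = False
            self.log.append(f"take {self.take}: stop, save all data to disk")

ctrl = TakeController()
for phrase in ["start recording", "stop recording",
               "start recording", "stop recording"]:
    ctrl.on_command(phrase)
print(ctrl.take)  # 2
```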
My scripts even rename the different files (body mocap .fbx, face mocap .fbx, audio .wav) with the take name/time/date etc. and move them into a new folder with a unique take name, so it's all easy to work with-
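The rename-and-move step might look something like this sketch- the function name, path layout, and timestamp format are all my own guesses, not the author's actual script:

```python
# Sketch of organizing one take's outputs into a uniquely named folder.
# (Hypothetical paths/names; the real scripts rename whatever Unity
# Recorder / Face Capture / Audacity wrote out.)
import shutil
from datetime import datetime
from pathlib import Path

def organize_take(take_name: str, body_fbx: Path, face_fbx: Path,
                  audio_wav: Path, out_root: Path) -> Path:
    """Move one take's body/face/audio files into a new take folder."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    take_dir = out_root / f"{take_name}_{stamp}"
    take_dir.mkdir(parents=True, exist_ok=True)
    for src, label in [(body_fbx, "body"), (face_fbx, "face"),
                       (audio_wav, "audio")]:
        dest = take_dir / f"{take_name}_{stamp}_{label}{src.suffix}"
        shutil.move(str(src), dest)
    return take_dir
```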
For the face capture I bought two different “head cam rig” type things so the iPhone stays pointed at your face- I'll use whichever one performs best-
But I'm really excited by this new setup and tools- I kept tweaking the scripts until they took all the pain out of recording motion capture alone, and yeah, it gives me body/face/audio all synced. For the audio I'm using a shotgun mic, since a wireless lapel would cause interference- AND I'm only using the audio as a source for Audimee as a voice changer, so the audio quality doesn't have to be good at all for that to work.
I also made this custom Unity editor window for finding/loading animation clips into Slate (or any other animation clip field). Natively, Unity sucks at this- it's a big time-wasting hassle- so I fixed it with this
Code here if you want to use it- it can favorite and hide clips as well
I finished the website for the musical and am just waiting for all the audio mixes to come back so I can put the whole film on the site-