I'm not familiar with Unreal, so I can only give you what I think the workflow would look like:
Ideally you'd bake out a range of motion (ROM) for your character using the offline solver (Ziva VFX for Maya). The ROM should hit all the poses you want to see in your realtime environment.
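As a rough illustration, the bake could be as simple as stepping through the timeline and duplicating the solved mesh at each frame. This is a minimal Maya Python sketch; the mesh name `body_sim` is a placeholder for whatever your Ziva output mesh is actually called:

```python
# Minimal sketch: bake the simulated ROM by duplicating the solved mesh per frame.
# 'body_sim' is a hypothetical name for the Ziva-simulated output mesh.
import maya.cmds as cmds

start = int(cmds.playbackOptions(query=True, minTime=True))
end = int(cmds.playbackOptions(query=True, maxTime=True))

baked = []
for frame in range(start, end + 1):
    cmds.currentTime(frame, edit=True)  # step the offline solver to this pose
    # The duplicate holds the solved shape at this frame.
    copy = cmds.duplicate('body_sim', name='body_sim_f{0}'.format(frame))[0]
    baked.append(copy)
```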
You'd then extract the per-vertex delta between each simulated pose and the corresponding skinCluster pose.
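At its core that's just a per-vertex subtraction between the two meshes at the same frame. A minimal sketch using the Maya Python API, with hypothetical mesh names:

```python
# Minimal sketch: per-vertex delta between the baked simulated mesh and the
# skinCluster-only mesh at the same pose. Mesh names are hypothetical.
import maya.api.OpenMaya as om

def get_points(mesh_name):
    """Return a mesh's object-space points as an MPointArray."""
    sel = om.MSelectionList()
    sel.add(mesh_name)
    fn = om.MFnMesh(sel.getDagPath(0))
    return fn.getPoints(om.MSpace.kObject)

sim_points = get_points('body_sim_f10')    # baked simulated pose
skin_points = get_points('body_skinned')   # skinCluster-only pose, same frame

# delta = simulated position - linear-skinned position, per vertex
deltas = [sim_points[i] - skin_points[i] for i in range(len(sim_points))]
```

One caveat: because the shape sits in front of the chain (before the skinCluster), a raw object-space delta isn't enough on its own; it has to be pushed back through the inverse of the skinning deformation, which is what shape-inversion tools (e.g. Chad Vernon's cvShapeInverter) are for.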
The realtime rig would have those deltas fed back in as front-of-chain blendshapes, a pose-space deformer (PSD), or similar. They'd be driven by joint angles or some other mechanism to make sure the shapes are triggered at the right times; a rough Maya-side version of that wiring is sketched below.
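The simplest form of that wiring in Maya is a driven key from a joint angle to the blendshape target weight. The node and attribute names here ('corrective_BS', 'elbow_fix', 'elbow_JNT') are hypothetical:

```python
# Minimal sketch: fade a corrective blendshape target in as a joint bends,
# using a driven key. All node/attribute names are hypothetical.
import maya.cmds as cmds

# Shape off at 0 degrees, fully on at 90 degrees of elbow bend.
cmds.setDrivenKeyframe('corrective_BS.elbow_fix',
                       currentDriver='elbow_JNT.rotateZ',
                       driverValue=0, value=0.0)
cmds.setDrivenKeyframe('corrective_BS.elbow_fix',
                       currentDriver='elbow_JNT.rotateZ',
                       driverValue=90, value=1.0)
```

A proper PSD setup would use something RBF-based rather than a single driven key, so the shapes interpolate sensibly between multiple poses, but the idea is the same.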
Someone who knows realtime better than me might like to comment.