floyd
Sorry not sure how I missed this.
They're not random, no.
I 'trained' the scapula before shipping the rig.
Here's a description of how that node works. Some people might find it useful.
The training process goes like this:
Have some animation on a hierarchy of joints (in our case it was retargeted mocap)
You need a single transform to train from. You could potentially use the humerus or the clavicle, but for a scapula you kind of need to know where both the humerus and clavicle are in order to position the scapula correctly. I think I ended up with a locator that was parented partway down the humerus but was always pointing back at the clavicle root, to attempt to capture both clavicle and humerus articulations. It's not perfect.
When you get to a frame where the scapula doesn't look right, rotate it so that it's in the right position.
You can think of this like a snapshot. You're saying whenever the clavicle/humerus look like this, I want the scapula to look like this. You add that 'snapshot' to the solver, and continue adding snapshots until you've got a decent coverage of the possible articulations of clavicle/humerus/scapula. You might end up with 10 snapshots, say.
The solver then tries to interpolate between the different snapshots. It might determine that on frame 10 you're mostly snapshot A with a little bit of D and F mixed in.
So back to the screenshot you shared. Each Input Compound is a snapshot. The data point is the rotation matrix of the locator I mentioned earlier.
The sample value is the Euler rotation value (xyz) of the scapula at that given data point.
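The whole process can be sketched as a small RBF (radial basis function) interpolation, which is one common way a solver like this works. This is just an illustrative sketch, not the node's actual internals; the snapshot data and all names below are made up. Each snapshot pairs a flattened locator rotation matrix (the data point) with a scapula Euler rotation (the sample value), and the solve guarantees each training pose is reproduced exactly:

```python
import numpy as np

# Hypothetical snapshot data. Each data point is a flattened 3x3 rotation
# matrix of the training locator; each sample value is the scapula's
# Euler rotation (xyz, degrees) captured for that pose.
poses = np.array([
    np.eye(3).ravel(),                                  # snapshot A: rest pose
    np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]]).ravel(),                  # snapshot B: 90 deg about Z
])
scapula_rot = np.array([
    [0., 0., 0.],                                       # scapula rotation for A
    [10., 5., 30.],                                     # scapula rotation for B
])

def solve_rbf(points, samples, sigma=1.0):
    """Solve per-snapshot weights so each training pose is hit exactly."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)                     # gaussian kernel matrix
    return np.linalg.solve(phi, samples)

def evaluate(query, points, weights, sigma=1.0):
    """Blend the snapshot samples based on distance to the query pose."""
    d = np.linalg.norm(points - query, axis=-1)
    phi = np.exp(-(d / sigma) ** 2)
    return phi @ weights                                # blended Euler rotation

weights = solve_rbf(poses, scapula_rot)
# Querying with a training pose returns that snapshot's scapula rotation;
# an in-between pose returns a blend, e.g. "mostly A with a bit of B".
rest = evaluate(poses[0], poses, weights)
```

On an intermediate frame, the query matrix sits between snapshots, so the kernel weights fall off with distance and the result is a weighted mix of the nearby samples.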
That's it!
Hope that's useful
Andy