Hello everyone. I am very interested in using Ziva Face for my upcoming project, but I have a question about the best way to implement it in my pipeline. I am wondering how the process of linking a Ziva head to an analyzer/retargeter for facial motion capture that uses pre-defined facial expressions would work. Does Ziva Face come with an analyzer/retargeter of its own, or should I just treat the Ziva face like a rigged head, create whatever expressions I need, and link it to the software of my choice?

I also remember reading on this forum, back in the early days of Ziva Face development, about the use of an infra-red camera to capture the actors' performances. Are there any requirements to get the best out of Ziva Face in terms of footage format, resolution, lighting, type of camera, etc.?
Thank you in advance and thanks for the great work!