I am new to Ziva Dynamics. I am a hobbyist developing a voice app. Right now I am using Google's Dialogflow ES for the ASR/NLU. A lot of this involves back-end programming, tool building and writing dialogue.
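For context, my back-end is essentially a Dialogflow ES fulfillment webhook with some tools behind it. A simplified sketch of its shape (the handler logic here is just illustrative, not my actual app):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    # Dialogflow ES POSTs the matched intent and slot parameters here
    req = request.get_json(force=True)
    intent = req["queryResult"]["intent"]["displayName"]
    params = req["queryResult"].get("parameters", {})

    # ...dispatch to my tools / business logic here...
    reply = f"Handled intent '{intent}'"

    # fulfillmentText is what Dialogflow returns to the user
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```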
I was blown away by the Emma virtual human: wow! It would be so cool if I could make an avatar for my voice app! Again, I am new to all this. Also, I consider myself more of a systems developer by experience. I know little about animation or VFX. I recently took a few natural language processing courses, so I have had some exposure to ML.
I am very interested in connecting a virtual human like Emma to my voice app. I want to determine what changes I would have to make to my back-end. How much more would I need to do to get something like Emma functional and running on a desktop or a mobile device?
I am particularly interested in getting the facial expressions and vocal intonations to match the mood of what is being said (e.g., successfully completing a task, hitting an error, asking for confirmation). I would probably use a dedicated system for the speech synthesis part; Dialogflow would be used for speech recognition.
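To make that concrete, here is the kind of mood-to-speech mapping I have in mind. The mood names, SSML prosody values, and function name are just my own guesses for illustration, not anything from Ziva or Google:

```python
# Hypothetical mood -> SSML prosody mapping (values are illustrative guesses).
# The same mood tag could also be sent to the avatar side to pick a facial expression.
MOOD_PROSODY = {
    "success": '<prosody pitch="+2st" rate="105%">{text}</prosody>',
    "error":   '<prosody pitch="-2st" rate="90%">{text}</prosody>',
    "confirm": '<prosody rate="95%">{text}</prosody>',
}

def to_ssml(text: str, mood: str) -> str:
    """Wrap reply text in SSML so the TTS engine shades the intonation."""
    body = MOOD_PROSODY.get(mood, "{text}").format(text=text)
    return f"<speak>{body}</speak>"

# e.g. to_ssml("Your file was saved.", "success") could then be handed to a
# TTS engine that accepts SSML input, such as Google Cloud Text-to-Speech.
```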
What would I need to get something like Emma running? I tend to use Linux Ubuntu machines; unfortunately, they tend to have old NVIDIA cards. Would I have to compile Unity? I tend to develop in Python, but I have used C#.
Since I know nothing about VFX or games (I played a little with Ink) and it would take too long to come up to speed, could someone "lend" me a virtual human so I can experiment? I want to see how hard it would be to wire a Ziva Dynamics system to an off-the-shelf voice platform like Google's. Am I looking at this the right way?
If someone can do the "lending" above and is in the Montreal area, that would be great! That way, we could meet and talk in person!
Of course, I would be happy to share what I learn!