Xplodwild wrote: ...
After some time using SB, I have another round of how-to questions for you, Ari.
1) With the code that's in my repository, I cannot get locomotion animations to work. The character moves in the 3D space, but no animation is played, and there is no visible error in the logs. I'm starting the locomotion using:
Code:
<locomotion manner="run" target="1000 0 400 -1000 0 1000 10 -10" />
I'm setting up the locomotion animations the same way you do in Python (see SmartBodyTools.cpp and SmartBodyManager::createCharacter; I mimicked a few of the Python scripts in C++ through the native interface), except I'm not retargeting the motions, since I'm using Utah and Utah's animations directly (no prefix).
The locomotion system can either use 'meat-hook' ("basic") animation, where the character is dragged around without any animation playing, or the example-based system ("example"), which uses several states consisting of many animations that control forward, sideways, turning and other movements (like you've got in the code). If all the needed states are present and the steering type is set to 'example', then the example-based animation should work. So on line 403 of SmartBodyTools.cpp, change "basic" to "example". Also, make sure that steering is enabled (steerManager->setEnable(true);) only after all the characters have been configured. Let me know if that fixes your problem.
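In the native interface, that boils down to something like the sketch below (a rough sketch only: the "utah" name, the "all" state-name prefix, and the sb/ header names are my assumptions and should match whatever your SmartBodyTools.cpp setup actually registers):

Code:
#include <sb/SBScene.h>
#include <sb/SBSteerManager.h>
#include <sb/SBSteerAgent.h>

// Rough sketch; "utah" and the "all" state-name prefix are placeholders for
// whatever your own locomotion state setup registers.
void setupExampleLocomotion()
{
    SmartBody::SBScene* scene = SmartBody::SBScene::getScene();
    SmartBody::SBSteerManager* steerManager = scene->getSteerManager();

    SmartBody::SBSteerAgent* steerAgent = steerManager->createSteerAgent("utah");
    steerAgent->setSteerStateNamePrefix("all");
    steerAgent->setSteerType("example"); // example-based rather than "basic" meat-hook

    // Enable steering only after every character has been configured
    steerManager->setEnable(true);
}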
2) I also cannot get lip syncing to work. Same as above:
https://github.com/xplodwild/OgreSmartB ... r.cpp#L170 is used to set up the character, and I'm starting a speech using this (taken from the SB manual):
Code:
sbmgr->executeBML(character->getName(), "<speech type=\"application/ssml+xml\">" \
"<sync id=\" T0\" time=\" .1\" />hello" \
"<sync id=\" T1\" time=\" .2\" />" \
"<sync id=\" T2\" time=\" .35\" />my" \
"<sync id=\" T3\" time=\" .4\" />" \
"<sync id=\" T4\" time=\" .6\" />name" \
"<sync id=\" T5\" time=\" .72\" />" \
"<sync id=\" T6\" time=\" .9\" />is" \
"<sync id=\" T7\" time=\" .1.07\" />" \
"<sync id=\" T8\" time=\" 1.4\" />Utah" \
"<sync id=\" T9\" time=\" 1.8\" />" \
"<lips viseme=\" _\" articulation=\" 1.0\" start=\" 0\" ready=\" 0.0132\" relax=\" 0.0468\" end=\" 0.06\" />" \
"<lips viseme=\" Z\" articulation=\" 1.0\" start=\" 0.06\" ready=\" 0.0952\" relax=\" 0.1848\" end=\" 0.22\" />" \
"<lips viseme=\" Er\" articulation=\" 1.0\" start=\" 0.22\" ready=\" 0.2442\" relax=\" 0.3058\" end=\" 0.33\" />" \
"<lips viseme=\" D\" articulation=\" 1.0\" start=\" 0.33\" ready=\" 0.3586\" relax=\" 0.4314\" end=\" 0.46\" />" \
"<lips viseme=\" OO\" articulation=\" 1.0\" start=\" 0.46\" ready=\" 0.4644\" relax=\" 0.4756\" end=\" 0.48\" />" \
"<lips viseme=\" oh\" articulation=\" 1.0\" start=\" 0.48\" ready=\" 0.4888\" relax=\" 0.5112\" end=\" 0.52\" />" \
"</speech>");
Unfortunately, no lips are moving. I don't plan to use TTS, but rather an externally played sound file, so I might be missing a config line somewhere.
There are two different lip syncing schemes: a lower-quality one that activates a mouth shape on every viseme, and a higher-quality one that accounts for combinations of visemes (the diphone-based method). Utah can only use the lower-quality one, since he doesn't have the mouth shapes needed for the higher-quality one (these are the same face shapes that FaceFX uses; the Brad and Rachel characters have them). So first and foremost, you have to put Utah in lower-quality lip syncing mode by setting the bool attribute 'useDiphone' to false (SmartBodyManager.cpp:171).
That said, there are two ways to drive the lip sync: via TTS, or from data embedded in a file. The command you're sending actually isn't BML, but rather our internal format that gives instructions on how to accomplish the lip syncing. Until now, we haven't needed to send this information directly to the character (it has always been gathered from a file, or sent from the TTS). To make this work, you can do the following:
1) Set Utah to lower quality lip sync mode: character->setBoolAttribute("useDiphone", false);
2) Place the contents of the XML above in a file called 'myspeech.bml'.
3) Include an audio file called 'myspeech.wav' in the same directory.
4) Tell Utah to read audio information from a file: character->setStringAttribute("voice", "audiofile");
5) Tell Utah where to find his sound files: character->setStringAttribute("voiceCode", ".");
6) Tell SmartBody where to find the audio files: scene->addAssetPath("audio", "path/to/my/directory");
7) Run BML like this (all of it is pulled together in the sketch below): sbmgr->executeBML(character->getName(), "<speech type=\"application/ssml+xml\" ref=\"myspeech\"></speech>");
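Pulled together, steps 1 through 7 look roughly like this (a sketch only: I'm using the raw scene/BML-processor calls here, so the final execBML line is presumably what your sbmgr->executeBML wrapper does under the hood, the header names are from memory, and the audio path is a placeholder for wherever myspeech.bml and myspeech.wav actually live):

Code:
#include <sb/SBScene.h>
#include <sb/SBCharacter.h>
#include <sb/SBBmlProcessor.h>

// Sketch of steps 1-7; 'character' is the SBCharacter your SmartBodyManager creates.
void setupFileBasedLipSync(SmartBody::SBCharacter* character)
{
    SmartBody::SBScene* scene = SmartBody::SBScene::getScene();

    // 1) Lower-quality (per-viseme) lip sync, since Utah lacks the FaceFX shapes
    character->setBoolAttribute("useDiphone", false);

    // 4) + 5) Read the lip sync data and audio from files instead of using TTS
    character->setStringAttribute("voice", "audiofile");
    character->setStringAttribute("voiceCode", ".");

    // 6) Directory that contains myspeech.bml and myspeech.wav (steps 2 and 3)
    scene->addAssetPath("audio", "path/to/my/directory");

    // 7) Reference the speech file by name, without the extension
    scene->getBmlProcessor()->execBML(character->getName(),
        "<speech type=\"application/ssml+xml\" ref=\"myspeech\"></speech>");
}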
Does that work for you?
Ari