Necessity is the mother of invention. I have made some truly amazing breakthroughs after making this post, which are being documented Twitter-fashion on the Facebook page. Hope you don't mind if I post a link to it? Not meant as self-promotion, really, but I've taken down the mailing list and that's the only page that features updates. I'd love to find someone really familiar with extracting geometry (with textures intact) out of Quake3 maps. That's what I'm struggling with right now, using data from Elite Force, which is based on Quake3. It's becoming something of a digital archeology project because only a small remnant of people are still involved with games of this vintage.
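For anyone curious what that extraction involves at the file level, here's a rough sketch. This assumes the commonly documented id Tech 3 layout (4-byte "IBSP" magic, int32 version, then a directory of 17 offset/length lump entries, with lump 1 holding 72-byte texture records); the texture path in the demo is made up, and Elite Force's files may deviate from stock Quake3, so treat it as a starting point rather than a working ripper.

```python
# Sketch: list the texture (shader) names referenced by a Quake3-style BSP.
# Assumes the standard id Tech 3 layout; Elite Force may differ.
import struct

NUM_LUMPS = 17       # lump count in the standard Q3 directory
LUMP_TEXTURES = 1    # directory index of the texture/shader lump
TEXTURE_RECORD = 72  # 64-byte name + int32 flags + int32 contents

def read_bsp_textures(data: bytes):
    """Return (version, texture names) parsed from raw BSP bytes."""
    magic, version = struct.unpack_from("<4si", data, 0)
    if magic != b"IBSP":
        raise ValueError("not an IBSP file: %r" % magic)
    # The lump directory starts right after magic + version (byte 8).
    offset, length = struct.unpack_from("<ii", data, 8 + LUMP_TEXTURES * 8)
    names = []
    for i in range(length // TEXTURE_RECORD):
        rec = data[offset + i * TEXTURE_RECORD:]
        names.append(rec[:64].split(b"\0", 1)[0].decode("ascii"))
    return version, names

# Demo on a synthetic one-texture BSP (hypothetical texture path):
record = b"textures/eftest/hull_plating".ljust(64, b"\0") + struct.pack("<ii", 0, 0)
data_start = 8 + NUM_LUMPS * 8
directory = b""
for i in range(NUM_LUMPS):
    if i == LUMP_TEXTURES:
        directory += struct.pack("<ii", data_start, len(record))
    else:
        directory += struct.pack("<ii", 0, 0)
blob = b"IBSP" + struct.pack("<i", 46) + directory + record
version, names = read_bsp_textures(blob)
```

The geometry lives in other lumps of the same directory (vertices, faces, mesh verts), so once the directory parsing works, pulling positions and UVs out is the same pattern with different record sizes.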
Fem Trekz is going to be a really funny beast for anyone familiar with Trek games, as it will be a true technology mashup: Xtranormal characters appearing to live inside the Bridge Commander and Elite Force environments. Originally I was going to do this via greenscreen, but now I can actually get these assets integrated. Taking this shortcut was the only possible way I could finish this project and have it look halfway decent. Unfortunately the creators of these mods are pretty hard to reach, otherwise I'd try to get the model data directly from them rather than going to rather extreme lengths to rip it out of the modpacks, but I am doing whatever is necessary, which includes employing outside help to work over the model data.
If I had enough information about Xtranormal's character data then I could make custom characters, give Guinan her hat, etc., but as it is, the absolute limit of Xtranormal modding is bringing in custom sets and props and making more limited customizations to characters (retexturing and swapping modular body parts). This is where Xtranormal has been, um, less than helpful. But I never thought I'd get anywhere near this level of creative control with this software, so I really can't complain about hitting absolute limits. If Fem Trekz appeared to have gone nowhere over the last year, with me frittering away the time on goofy little shorts about panty raids and junk in the trunk and whatnot, it's because a mountain of R&D took place behind the scenes to extend the software to the point where the show is both feasible and practical. I had to piggyback an advanced animation system on top of a crippled one. I did this because I simply wasn't satisfied with the look and feel of Moviestorm, iClone, or Muvizu characters.
Kirok, sorry to give you the War and Peace explanation, but it has to do with my creative process. I actually coined a phrase for it: Emergent Animation. The idea is I sit down in front of the computer (with all the preproduction taken care of ahead of time), drag the characters onto the stage, and pretty much don't look away from the screen until I have a finished scene some hours later. Going into it I only ever have an outline. I never write out the dialogue beforehand. The scene kind of writes itself spontaneously out of the spirit of the moment, and because everything is so close to realtime, I can very much role-play or puppeteer my way through it. The only things that are very well-defined are the characters. I know who they are cover to cover. I know what they stand for: their hopes, dreams, fears, and flaws. I lose myself in the persona of each character as I get into this very deep meditative state. I know this all sounds kind of freaky, but I love it, otherwise I wouldn't be busting my ass to do it. It's the reason machinima appeals to me rather than devoting years of effort to delivering some single prewritten storyline via more traditional animation programs. I enjoy getting into that zone more than anything else in the world (even if it has mostly produced ephemera like panty raid skits). Unlike other visual arts, it's very close to instant gratification. It's what I was never able to experience when I was in film school in the late 80s, nor during my abortive attempt to break into CGI in the mid to late 90s. You know, waiting for the workprints, waiting for things to render, waiting waiting waiting. Plus, it doesn't even feel like linear storytelling because the story can branch off in unexpected directions. Once I have done all the drudgery ahead of time to world-build, I can let these characters loose to explore, and by extension live vicariously through them. It's almost like lucid dreaming.
So it's a very closed-loop creative experience. The second I have to explain to a voice actor what to say and how to say it, with all the latency involved, it destroys the spontaneity. I just don't think I can do my best work under those constraints. I've tried it and actually been less satisfied with the delivery than the TTS. (Yes, it actually is possible for real people to give flatter performances than TTS.) And since I cannot have a voice actor at my beck and call 24/7, they cannot produce when I feel most inspired, and wouldn't like me constantly changing the script on them as I do with TTS. The process I've described above is something I can't really turn on and off like a light switch. I have to kind of ramp myself up for it, clear away all distractions, sit down, and go for broke. (With real voices I actually prefer to use a soundboard method of canned phrases, as I'm planning to do with Whoopi Goldberg as Guinan. That works surprisingly well for supporting characters.)
That being said, there is an element of teamwork necessary to pull off a show like this, and I am finally opening up my checkbook to pay for some A-class animation of the USS Earhart for the full-length pilot. Just as she's getting built within the story, she's also getting built behind the scenes as a model. It's just that when it comes to the, how shall I say it, directorial aspect of animation, where I get the characters onto the stage and start talking, I really prefer to do that myself. So the use of TTS is a necessary evil unless someone wants to literally be on call at will. Do I like the artifacts, or the fact that I can't get them to really raise and lower their voices? No. But I've been doing this for over two and a half years, and before that I had been playing with voice synths all the way back to Software Automatic Mouth, close to 30 years ago. So whatever the absolute upper limit is of what TTS can do in the right hands, supported by the writing, the gestures, the music, the editing, etc., I feel confident I'm the one doing it. If that's still not good enough for the majority of the target audience, I guess I will just have a hard time building a following. But that's the $10,000 question that I hope to have answered when the final story starts unrolling on YouTube. Will it be well received or will it fall flat? I don't know, but there's only one way to find out. All I know is that I like what I create, and I have anecdotal evidence that others do too.
I do have an open slot for Admiral Hall, a character who will appear only once, to give the captain her ship. She has an important monologue to deliver. For one-offs like that I am cool with using voice actors, and there is one character I will voice myself. Beyond that, I just fear it would destroy this unique creative process that I have embraced (and am trying to turn other people on to).
So if you know anybody who might be good for Admiral Hall (I was actually hoping for Nichelle Nichols, yeah right...), then please have them contact me, because otherwise I'm using the same voice as the Doctor, which is going to be painfully obvious.
Also, I'm now partnering with someone else who uses voice actors with Xtranormal and there may be some way for me to leverage that talent pool. I don't rule it out down the road, but I want to make sure I don't lose the very personal and immediate nature of what I'm doing in the process.
Maybe one way to do it would be to ADR the dialogue after I do a first pass with TTS. It wouldn't technically be ADR, since I can just go back into the projects and swap out the TTS for real voices, but it would be a similar process: the voice actor hears the original and records over it. But the one time I had someone replace TTS with real dialogue, I actually preferred the TTS, no offense to her. I now associate the characters very closely with their TTS voices, like Captain Bakshi's Indian accent. Someone else stepping in would have to match the timbre and cadence very closely, otherwise it wouldn't seem right to me.
Also, I'm already so mindful of the TTS's limitations that I write scenes very deliberately around them. So if those limitations weren't there anymore, if the characters could scream and shout and whisper and whine, my stories would be very, very different, and I'm not sure how I'd deal with that after wiring myself so specifically to the limits of XN. While I admit it's jarring at first, I think over time people have become more used to seeing XN animations as they've proliferated on YouTube, and are now used to the way the dialogue sounds, just as they might be used to crude visual animation styles like South Park cutouts or Filmation's rigid animation, where only the mouths and eyebrows moved. It's its own subgenre, IMHO. It will never compete with Avatar, obviously.
This post has been edited by mos6507: 21 February 2012 - 12:08 PM