BlenderCon 2006
Tue Oct 24, 2006 · 822 words

… was amazing. Met some very nice people there. Thanks to all of you! Will definitely try to be back next year, although I'm still a little woozy from all of it. Not sure if I could go tomorrow… 1/5 of the work is still to be done… I also put up some photos of the event (unfortunately I didn't have time to take many).

For future reference

Some words about the AV production. The whole show was done with 3 cameras + the projector feed. In the main hall, we had everything running into preview monitors and a video mixer that was connected to a little JVC MiniDV cam for passthrough digitizing. That was then fed into a G5 running Wirecast, which had two inputs (video from the cam + a still for fades) and two outputs: a streaming MP4 and raw DV with AAC audio, wrapped in a MOV file. The streaming was handled by Darwin Streaming Server; I still don't know what connection we were on.
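For anyone curious how viewers got at it: Darwin Streaming Server hands the broadcast out over RTSP, so watching was basically a matter of pointing QuickTime Player or VLC at an rtsp:// address, something along these lines (the hostname and file name here are placeholders, not the real ones):

    rtsp://stream.example.org/blendercon2006.sdp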

Wirecast worked pretty well. I really like that you can define as many inputs and outputs as you want. It did crash a few times, and I think it's way overpriced (470 EUR). I'm pretty sure someone will come along soon and create something similar for far less money (e.g. the next version of QT Broadcaster); 50 EUR would be OK.

For audio we had two mics in the main hall plus the presentation PC's sound, between which I tried desperately to mix (keeping all of them open would've been too noisy). Sound is hard in these little venues because there's little motivation for speakers to really make an effort and speak into the mic.

In the smaller room I just shot everything as DV (would HDV have made a difference?) off the screen with the Sony HVR-A1E and recorded as much as possible onto a FireStore FS-4 (I think this is the way to go for capturing, though it's a bit too pricey at the moment). The sound situation was a little better here since we just had a lapel mic attached to the speaker and the onboard cam mic for ambient sound and questions. The FS-4 holds 3 hours of material, which made timing difficult. Most of the sessions ran back-to-back, so I had no time to offload the material. This meant going to tape, and that's a nightmare because you have to switch tapes (not fun with a bottom-loader sitting on a tripod with several add-ons in a room where you can hardly turn around).

The final compression workflow was kind of a positive surprise. We ended up using VLC's Streaming/Transcoding Wizard which, considering the circumstances, did a splendid job. The main drawbacks were the lack of scaling/cropping controls and of precise control over compression parameters.
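For the record, the wizard basically builds a --sout chain for you, so the same job can be scripted from the command line. Roughly like this (codec names and bitrates here are illustrative, not the exact settings we used):

    vlc input.dv --sout "#transcode{vcodec=mp4v,vb=1024,acodec=mp4a,ab=128}:std{access=file,mux=mp4,dst=output.mp4}" vlc://quit

Handy if you have a whole folder of recordings to churn through overnight instead of clicking through the wizard for each file.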

There's a pretty big feature gap in VLC: it doesn't seem to support QuickTime reference movies. My plan was to store some of the shorter presentations as one big file, create a reference movie for each presentation, and encode those into separate files, but it didn't work. I think this could be a nice carrot for some VLC dev out there.

Finally, Marco used rsync to send everything to the Blender Foundation server. I'm still in the final phases of getting the third day and the second-room material online.
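In case it helps someone planning a similar setup, the transfer is basically a one-liner; the path and host below are placeholders, not the actual server details:

    rsync -avz --partial --progress ./blendercon-videos/ user@server.example.org:/srv/conference/

The --partial flag is the important bit for big video files over a flaky connection: an interrupted transfer resumes instead of starting over.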

Some things that could've been done better/differently:

