Technical Note - Using 'Processing' Software for Audio-Visuals
Using a combination of techniques to match beautiful music with beautiful graphics is what this video is all about. (However, it is said that beauty is in the eye of the beholder so maybe not everybody will agree about the beautiful.) There are two graphics generators employed, Processing for real time software generated patterns and Poser for the 3D humanoid figures. PaintShopPro Animator was used to generate video from the Processing stills and MoviePlusX5 film editor pulled together all the elements and synched the audio track.
The Processing software modules used, usually called scripts or sketches, build heavily on work done by others and published on the Processing web site. Producing graphics in sync with the soundtrack is no easy matter: the final video needs to run at 25 frames per second, but the computational power required to run the scripts means that the effective Processing frame rates can be as low as a few fps.
To get around this, the audio can be fed into a separate script that produces a sequence of 'driver numbers' (DNs), 25 numbers per second, effectively chopping the audio into 1/25th-second slices. The simplest method is to relate the DNs to the audio amplitude. A more complex method is to have sets of DNs representing the audio power in selected frequency bands, using readily available FFT (Fast Fourier Transform) scripts. The DNs are then used within the Processing graphics-generator script to modify the graphics, each DN in sequence being used to generate a new image. The images are then pulled together in the PaintShopPro Animator utility with the video output set to 25fps, so the resulting video snip is in sync with the audio. The audio can then be added back in using a movie editor, Serif's MoviePlusX5 in this case. There is a great deal of video editing involved to get everything to flow smoothly.
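The amplitude-based method can be sketched as follows. This is a minimal illustration, not the original script: it assumes mono samples in the range -1 to 1, uses RMS amplitude per slice, and scales the result to 0-255; the class and method names are my own.

```java
// Sketch of amplitude-based driver-number (DN) generation: one DN per
// 1/25 s slice of audio. Assumes mono samples in [-1, 1]; names are
// illustrative, not from the original scripts.
public class DriverNumbers {
    // Compute one DN per slice as the slice's RMS amplitude, scaled to
    // 0..255 for convenient use inside a Processing draw() loop.
    public static int[] amplitudeDNs(float[] samples, int sampleRate) {
        int sliceLen = sampleRate / 25;          // samples per 1/25 s slice
        int nSlices = samples.length / sliceLen; // number of DNs produced
        int[] dns = new int[nSlices];
        for (int i = 0; i < nSlices; i++) {
            double sumSq = 0;
            for (int j = 0; j < sliceLen; j++) {
                float s = samples[i * sliceLen + j];
                sumSq += s * s;
            }
            double rms = Math.sqrt(sumSq / sliceLen); // 0..1 for full-scale audio
            dns[i] = (int) Math.round(rms * 255);
        }
        return dns;
    }
}
```

The frequency-band variant would replace the RMS calculation with an FFT of each slice and sum the power within each chosen band, producing one DN set per band.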
For the I See You Clearly video, the slow nature of the music required a suitably slow set of DNs, so a somewhat different technique from those described above was used. The music's bar time was measured accurately (using the Audacity audio package) and a set of DNs was generated within the Processing script by chopping the bar time into 25 units per second to produce a slow ramp up and a fast ramp down as the script ran. The ramp value at any instant was the DN. The images produced in this way were then fed into the Animator utility to make a video clip, as described above. On seeing the final video, my impression was that the visuals were changing too quickly for the music, so I revamped it all using two bar times to generate the DNs instead of one. This worked well.
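A bar-synchronized ramp of this kind might look like the sketch below. The 80/20 split between rise and fall and the 0-255 output range are my assumptions; the original note only specifies a slow ramp up, a fast ramp down, and 25 values per second.

```java
// Sketch of bar-time DN generation: one DN per frame at the given fps,
// rising slowly over the first ~80% of the bar and falling quickly over
// the remainder. Proportions and range are illustrative assumptions.
public class BarRamp {
    public static int[] rampDNs(double barSeconds, int fps) {
        int nFrames = (int) Math.round(barSeconds * fps);
        int peak = (int) Math.round(nFrames * 0.8); // frame at which the ramp peaks
        int[] dns = new int[nFrames];
        for (int i = 0; i < nFrames; i++) {
            double v;
            if (i <= peak) {
                v = (double) i / peak;                            // slow rise
            } else {
                v = 1.0 - (double) (i - peak) / (nFrames - peak); // fast fall
            }
            dns[i] = (int) Math.round(v * 255);
        }
        return dns;
    }
}
```

Doubling `barSeconds` (i.e. generating the ramp over two bar times instead of one) halves the pace of the visuals, which is the revamp described above.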
Two Processing scripts were used for the graphics: the first a basic Particles script, the second an Emergence-type script. Those into Processing will know what these are; those who aren't don't need to! Both were hacked in various ways to produce the visual effects I wanted, in addition to generating the DNs.
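The wiring of a DN into a sketch's visuals can be as simple as a linear mapping onto whatever parameter is being animated. The helper below is illustrative only; in the actual sketches, draw() would read the DN for the current frame, apply a mapping like this to, say, particle size or spawn rate, and call saveFrame() so that each DN yields one still.

```java
// Illustrative mapping of a DN (0..255) onto an arbitrary visual
// parameter range, e.g. particle radius or colour intensity.
// Hypothetical helper, not code from the original sketches.
public class DnMapper {
    public static float map(int dn, float lo, float hi) {
        return lo + (dn / 255.0f) * (hi - lo);
    }
}
```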
It is hoped that this note may be of help to other Processing programmers interested in producing audio-visual work.