DEAF04 - Affective Turbulence



techreport: UnMovie - *-pike

30 Nov 2004, report

The DEAF04 techreports try to give technical insight into a selection of DEAF04 projects. They are aimed at an audience of programmers and project managers and may require some technical background.

One of the charming aspects of the UnMovie project is that it connects a wide range of open source projects into a single "hypercinema experience". The result, the UnMovie "package", is itself an open source, freely available distribution. Both the server software (Python/C code) and the client software (Flash) can be obtained through CVS from the UnMovie website.

For clarity's sake, the UnMovie setup may be divided into two parts:

  • The Stage
    where FlashMX clients connect to a Python-based server and meet other clients as well as "bots" added to the stage by the server. Clients can move around in a virtual 2D space and chat with the bots and with each other. All conversation is audible in their Flash player.
  • The Stream
    where a sequence of images is displayed, influenced by the subjects on The Stage.

The Stage


On the Stage, the client's Flash movie connects via a Flash XMLSocket to a Twisted-based XML-RPC server. Twisted is, among other things, a Python framework for writing networked applications. Whenever a user presses an arrow key on the Stage, Flash sends a chunk of XML to the server that triggers a Python method. The position of the user is updated on the server, and all connected clients receive a chunk of XML notifying them of the change. As a result, the movement is a little jerky, especially on low-bandwidth connections.
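The round trip described above can be sketched in a few lines of Python. Note that the XML message format, attribute names and step size below are assumptions for illustration, not the actual UnMovie protocol:

```python
# Hypothetical sketch of the Stage's message handling: a Flash client sends an
# XML chunk for each arrow-key press, the server updates its position table and
# returns the XML chunk it would broadcast to all connected clients.
import xml.etree.ElementTree as ET

positions = {}  # server-side position table: client id -> (x, y)

STEP = 10  # pixels moved per key press (assumed value)
MOVES = {"left": (-STEP, 0), "right": (STEP, 0),
         "up": (0, -STEP), "down": (0, STEP)}

def handle_move(xml_chunk):
    """Parse a <move> chunk from a Flash client and update its position."""
    msg = ET.fromstring(xml_chunk)
    client = msg.get("client")
    dx, dy = MOVES[msg.get("key")]
    x, y = positions.get(client, (0, 0))
    nx, ny = x + dx, y + dy
    positions[client] = (nx, ny)
    # the chunk broadcast to every connected client
    return f'<update client="{client}" x="{nx}" y="{ny}"/>'

# a client presses the right arrow twice
handle_move('<move client="anna" key="right"/>')
print(handle_move('<move client="anna" key="right"/>'))
```

In the real setup, Twisted's event loop would receive these chunks over the XMLSocket connection and write the update to every open socket, which is what makes the movement dependent on each client's bandwidth.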

More information about Twisted, and about XML-RPC in Twisted, can be found on the Twisted website.

Bots: MegaHAL

Some of the actors on the stage are bots that live on the server. The server uses a modified version of MegaHAL, a semi-intelligent conversation simulator. MegaHAL was originally programmed by Jason Hutchens for the 1998 Loebner AI Contest, where it won second prize. It "learns" what to say by observing the things you write to it, and is therefore not limited to the English language.
The MegaHAL C source code is freely available for download, and there is an interesting story online about AI and "conversation simulators".

MegaHAL supports only one "personality", while UnMovie needs to run several bots, each with a different personality, so the makers of UnMovie had to modify its source code. The personalities available to UnMovie are plain text files, so more may appear over time. The downside of the modification is that MegaHAL no longer learns. The modified version (with a Python interface) is available via CVS from the UnMovie website.
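To give a feel for how a text corpus can drive a bot's replies, here is a toy Markov-chain conversation simulator in the spirit of MegaHAL. This is an illustrative sketch only: the real MegaHAL uses higher-order models and keyword-based reply scoring, and the class and corpus below are invented:

```python
# Toy first-order Markov "personality": built from a plain text file,
# it replies by walking the word-transition table starting from a word
# it recognizes in the prompt.
import random

class Personality:
    def __init__(self, corpus_text):
        """Build a word-to-successors table from a plain-text personality file."""
        self.chains = {}
        words = corpus_text.split()
        for a, b in zip(words, words[1:]):
            self.chains.setdefault(a, []).append(b)

    def reply(self, prompt, length=8, seed=0):
        """Pick a prompt word the model knows and walk the chain from there."""
        rng = random.Random(seed)  # fixed seed for a reproducible sketch
        known = [w for w in prompt.split() if w in self.chains] or list(self.chains)
        word = rng.choice(known)
        out = [word]
        for _ in range(length - 1):
            successors = self.chains.get(word)
            if not successors:
                break
            word = rng.choice(successors)
            out.append(word)
        return " ".join(out)

bot = Personality("the stage is a stream the stream is a stage")
print(bot.reply("what is the stage"))
```

Because each personality is just a text corpus, running several bots amounts to loading several files, which matches the plain-text personality files UnMovie uses.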

TextToSpeech: MBRola

The server also generates an mp3 stream of all the text spoken on the stage. Considerable effort seems to have gone into producing more evocative, emotional output than the usual text-to-speech synthesizers: UnMovie first runs a text-to-phoneme converter, then a filter tool that manipulates the duration of the phonemes and a piecewise description of their pitch (together called the "prosody" of the phonemes), and finally a speech synthesizer that creates samples from these filtered phonemes. This way, any written text can be made to sound "happy", "sad", "excited", "dull", etc. The bots in UnMovie use some really extreme settings.

All the audio software used is free and open source:
  • FreeSpeech: a text-to-phoneme converter
  • Emofilt: the prosody filter
  • MBRola (think of "umbrella"): the speech synthesizer
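The middle step, the prosody filter, can be sketched concretely. MBRola consumes phoneme lines of the form "phoneme duration-ms [position-% pitch-Hz]...", and an emotion filter rewrites the durations and pitch targets. The emotion presets below are invented for illustration and are not Emofilt's actual parameters:

```python
# Sketch of a prosody filter in the spirit of Emofilt, operating on
# MBROLA-style phoneme lines. Speeding up and raising pitch reads as
# "excited"; slowing down and lowering pitch reads as "sad".
EMOTIONS = {
    # emotion -> (duration scale, pitch scale); assumed values
    "excited": (0.8, 1.3),
    "sad":     (1.4, 0.85),
    "neutral": (1.0, 1.0),
}

def apply_emotion(pho_lines, emotion):
    """Scale phoneme durations and pitch targets according to an emotion preset."""
    dur_scale, pitch_scale = EMOTIONS[emotion]
    out = []
    for line in pho_lines:
        fields = line.split()
        phoneme, duration = fields[0], float(fields[1])
        new = [phoneme, str(round(duration * dur_scale))]
        # remaining fields are (position, pitch) pairs; scale only the pitch
        pairs = fields[2:]
        for pos, pitch in zip(pairs[::2], pairs[1::2]):
            new += [pos, str(round(float(pitch) * pitch_scale))]
        out.append(" ".join(new))
    return out

# a short utterance with one pitch target per vowel
print(apply_emotion(["h 80", "E 100 50 180", "l 70", "O 120 50 150"], "sad"))
```

In UnMovie's pipeline, output like this would then be fed to the synthesizer, which turns the filtered phoneme descriptions into audio samples.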

The Stream

UnMovie harbours a large MySQL database of small Flash movies (currently ca. 15,000), all tagged with keywords. Tagging is an open process for visitors to the UnMovie website, but adding and removing movies requires a login. The server software is written in PHP.

Every once in a while, the client requests a "hitlist" from the server. The server analyzes the conversation on the Stage, compares it with the tagged videos and returns a list of matching movies. The client simply plays these movies until the list is exhausted, then requests a new hitlist.
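The matching step can be sketched as keyword overlap between the recent conversation and each movie's tags. The actual server-side logic (PHP/MySQL) is not published in detail, so the function and data below are hypothetical:

```python
# Hypothetical hitlist matcher: score each tagged movie by how many of its
# keywords appear in the recent Stage conversation, and return the best matches.
def hitlist(conversation, movies, size=3):
    """conversation: list of chat lines; movies: dict of id -> set of tags."""
    words = {w.lower().strip(".,!?") for line in conversation for w in line.split()}
    scored = sorted(movies, key=lambda m: len(movies[m] & words), reverse=True)
    return scored[:size]

movies = {
    "clip_rain":  {"rain", "city", "night"},
    "clip_sea":   {"sea", "waves", "storm"},
    "clip_dance": {"dance", "night", "club"},
}
print(hitlist(["It rained all night in the city", "a real storm!"], movies, size=2))
```

A scheme like this also explains the bot effect noted below: bots generate a large share of the conversation, so their vocabulary dominates the word set that the movies are matched against.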


In a web browser, it seems impossible to watch the Stage and the Stream simultaneously. Use two browsers (or two computers) to see the interaction. The quality of the effect depends on a few factors:

  • the quality of the metadata tagging of the videoclips, which is not guaranteed, as it's an open process;
  • the influence of the bots on the video stream: if the bots on the Stage outnumber the 'real' visitors, they tend to determine the subject of the movies in the Stream.

It is interesting to compare the effect of UnMovie with that of onewordmovie. UnMovie's approach is theoretical, but the theory is hardly recognizable in the actual result; onewordmovie merely displays one simple concept, yet its result is almost provocative. As an installation, onewordmovie was much more popular.

All in all, to really enjoy UnMovie, visit the website at a very busy moment. And afterwards, take some time to tag some of the videoclips in the database. After all, this is a "collaborative hypercinema experience".


UnMovie 2001-2002
  • axel heide
  • onesandzeros
  • philip pocock
  • gregor stehle

Creative Commons License
This text is licensed under a Creative Commons License. The license does not apply to images and comments.

UnMovie: stage
UnMovie: stream
