mikro is an interactive software framework for audiovisual performance. It takes a live audio stream as input, which serves as a catalyst for the audio and graphics algorithms. The framework comprises two independent software applications, one for each modality. The SuperCollider audio programming environment provides the framework for developing interactive musical algorithms, while the graphics side has gone through several implementations as custom-built applications in Objective-C and C++, the latest variant using the Cinder library. The two applications communicate via the Open Sound Control (OSC) protocol and can be deployed on different computers on the same local network, or in physically remote locations connected via the internet.
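To make the inter-application link concrete, here is a minimal sketch of how an OSC message is encoded on the wire, written in pure Python with no external dependencies. The address `/mikro/amp` and the amplitude value are illustrative assumptions, not names taken from the actual mikro codebase; a real deployment would more likely use an OSC library (or SuperCollider's built-in `NetAddr`) rather than hand-rolled packing.

```python
import struct

def osc_string(s: str) -> bytes:
    """Encode an OSC string: ASCII bytes, NUL-terminated, padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((-len(b)) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build a minimal OSC message carrying only float32 arguments."""
    type_tags = "," + "f" * len(floats)           # e.g. ",f" for one float
    payload = b"".join(struct.pack(">f", f) for f in floats)  # big-endian float32
    return osc_string(address) + osc_string(type_tags) + payload

# Hypothetical example: send the current amplitude to the graphics application.
packet = osc_message("/mikro/amp", 0.5)
# The packet could then go out over UDP, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 57120))
```

Because OSC messages are plain UDP datagrams, the two applications need no shared runtime: the audio and graphics processes only have to agree on the address space and argument types.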
There are several implementations of the framework, but all of them work on the same basic premise: they facilitate audiovisual interaction with improvised input. There is no linear progression in the composition and no predetermined musical structures; instead, each system learns in some way from the audio stream and generates an audiovisual response based on that learning.
Three implementations are presented on this website:
- In mikro:strukt, low-level machine-listening algorithms serve as the guiding principle of the compositional process.
- The mikro:skism environment is trained before the performance on a spectral categorisation of the input, and each category is then mapped to reactive audiovisual entities.
- bocca/mikro combines machine learning algorithms with a large database of synthesisers evolved using gene expression programming techniques.
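The mikro:skism idea of mapping spectral categories to audiovisual entities can be illustrated with a nearest-centroid classifier. The feature names, centroid values, and category labels below are invented for the sketch and do not reflect the actual training procedure; they only show how a learned categorisation can route an incoming analysis frame to a response.

```python
import math

def nearest_category(features, centroids):
    """Return the label of the centroid closest (Euclidean distance) to the feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Centroids learned in a hypothetical training phase (values illustrative),
# here over two normalised features: [spectral flatness, spectral centroid].
centroids = {
    "noisy":   [0.9, 0.2],
    "pitched": [0.1, 0.5],
}

frame = [0.15, 0.45]                            # features of the current analysis frame
category = nearest_category(frame, centroids)   # each category triggers its own entity
```

In performance, each classified frame would select the reactive audiovisual entity associated with its category, so the mapping from sound to image is fixed by the pre-performance training rather than by a linear score.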
The source code, however disorganised, is available on GitHub. The most recent version of the environment has been consolidated into the lambda repository, but earlier versions are accessible as well.