Sonification of Emotion State In Family-Run Businesses


I report on an application that uses auditory display of data representing an individual’s appreciation of their situation in the specific collaborative context of a family-owned company. The auditory display is an emotional mapping of the company-family structure, and thereby conveys the emotional impact of possible future scenarios if no intervention takes place. The structural parameters ‘family complexity’, ‘company complexity’, ‘company structure’ and ‘structural risk’ are mapped to structural aspects of the auditory display that are sufficiently similar to be readily appreciated with minimal preparation. The result is that the implicit emotional state of the analysis subject – a member of the family – is represented in the audio stream. This facilitates other family members’ empathy, because it circumvents subjective semantic interpretations and the potential rejection of a purely verbal interpretation of the data. The technique is general and may be applied to other collaborative situations where a self-learning approach is preferred.
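The paper does not publish its mapping, so the following is only an illustrative sketch of the kind of parameter mapping the abstract describes: the four structural parameters (here assumed to be normalised to 0..1) are mapped to hypothetical audio-display parameters such as voice count, tempo, texture and dissonance. All names and ranges are assumptions, not the author’s actual design.

```python
def map_to_audio(family_complexity, company_complexity,
                 company_structure, structural_risk):
    """Hypothetical mapping of structural parameters (each in [0, 1]) to
    audio-display parameters: more complexity -> more voices and a faster,
    denser texture; more risk -> greater dissonance."""
    for v in (family_complexity, company_complexity,
              company_structure, structural_risk):
        if not 0.0 <= v <= 1.0:
            raise ValueError("parameters must be normalised to [0, 1]")
    return {
        "voices": 1 + round(family_complexity * 7),      # 1..8 simultaneous voices
        "tempo_bpm": 60 + company_complexity * 80,       # 60..140 BPM
        "layering": "homophonic" if company_structure < 0.5 else "polyphonic",
        "dissonance": structural_risk,                   # 0 = consonant, 1 = dissonant
    }

# Example: a family-complex, structurally risky company maps to a dense,
# dissonant display.
params = map_to_audio(0.8, 0.4, 0.7, 0.9)
```

Because the mapping targets structural aspects of the sound rather than arbitrary timbral knobs, a listener can in principle compare two scenarios by ear without first learning a symbolic encoding.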
Funded by Danish Center for Design Research

Visualizing Structures of Speech Expressiveness



“Speech is both beautiful and informative. In this work, a conceptual study traversing the myth of the Tower of Babel and hypotheses on the emergence of articulated language is undertaken in order to create an artistic work investigating the nature of speech. Our interpretation is that a system able to recognize archetypal phonemes through vowels and consonants could be used with several languages to extract the expressive part of speech independently of the meaning of words. A conversion of speech energy into visual particles that form complex visual structures provides us with a means to represent this expressiveness of speech in a visual mode. The system is presented in an artwork whose scenario is inspired by various artistic and poetic works. The performance was presented at the Re:New festival in May 2008.”
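The artwork’s actual code is not published here, but the core idea of converting speech energy into particles can be sketched as follows: compute the RMS energy of each audio frame and spawn a proportional number of particles. Function names and the scaling are illustrative assumptions.

```python
import math

def frame_energy(samples):
    """Root-mean-square energy of one frame of audio samples in [-1, 1]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def particles_for_frame(samples, max_particles=200):
    """Illustrative sketch (not the artwork's actual code): louder frames
    spawn more visual particles; silence spawns almost none."""
    return min(max_particles, int(frame_energy(samples) * max_particles))

# A loud frame yields many particles, a quiet frame only a few:
loud = particles_for_frame([0.9, -0.8, 0.85, -0.9])
quiet = particles_for_frame([0.05, -0.04, 0.05, -0.05])
```

Because energy is independent of which words are spoken, a mapping of this kind tracks the expressive dynamics of speech rather than its semantic content, which is the separation the abstract describes.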

Funded by Aalborg University and Danish Center for Design Research.

Gesture and Emotion in Interactive Music: Artistic and Technological Challenges



“This dissertation presents a new and expanded context for interactive music based on Moore’s model for computer music (Moore 1990) and contextualises its findings using Lesaffre’s taxonomy for musical feature extraction and analysis (Lesaffre et al. 2003). In doing so, the dissertation examines music as an expressive art-form where musically significant data is present not only in the audio signal but also in human gestures and in physiological data. The dissertation shows the model’s foundation in human perception of music as a performed art, and points to the relevance and feasibility of including expression and emotion as a high-level signal processing means for bridging man and machine. The resulting model is multi-level (physical, sensorial, perceptual, formal, expressive) and multi-modal (sound, human gesture, physiological) which makes it applicable to purely musical contexts, as well as intermodal contexts where music is combined with visual and/or physiological data.

The model implies evaluating an interactive music system as a musical instrument design. Several properties are examined during the course of the dissertation, and models based on acoustic musical instruments have been avoided due to the expanded feature set of interactive music systems. A narrowing down of the properties is attempted in the dissertation’s conclusion, together with a preliminary model circumscription. In particular, it is pointed out that the high-level features of real-time analysis, data storage and processing, and synthesis make the system a musical instrument, and that the capability of real-time data storage and processing distinguishes the digital system as an unprecedented instrument, qualitatively different from all previous acoustic musical instruments. It is considered that a digital system’s particular form of sound synthesis only qualifies it as belonging to a category parallel to the acoustic instrument categories.

The model is the result of the author’s practical experience with interactive systems developed in 2001-06 for a body of commissioned works. The systems and their underlying procedures were conceived and developed to address needs inherent to the artistic ambitions of each work, and have all been thoroughly tested in many performances. The papers forming part of the dissertation describe the artistic and technological problems and their solutions. The solutions are readily expandable to similar problems in other contexts, and they all relate to general issues of their particular applicative area.”

Dissertation successfully defended at Oxford Brookes University, November 2006.

Applying a Performer’s Physical Gestures to Sound Synthesis in Real-Time



“Motivation and strategies for affecting electronic music through physical gestures are presented and discussed. Examples of such usage in practice are reported, and the results and future possibilities are discussed.”

In Proceedings of Australian Computer Music Conference 2006
Medi(t)ations: computers, music and intermedia. ISSN: 1448-7780
Adelaide, Australia, July 11-13 2006.

My main areas of research are advanced music composition, real-time music technology and music cognition. I am particularly interested in the relationship between core emotions and features of score notation and performance, as well as in novel methods for generative real-time performance that are founded in non-expert music cognition.

Some years ago I was involved in the Nordic SUM project – Systematic Understanding of Music – and a large part of the project concerned implementing concepts of probabilistic melody generation in computer code for real-time performance.
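The SUM project’s actual model is not described here, so the following is only a minimal sketch of one common approach to probabilistic melody generation: a first-order Markov chain over MIDI pitches. The transition table, note set and weights are illustrative assumptions, not the project’s implementation.

```python
import random

# Hypothetical first-order transition table: each pitch maps to a list of
# (next_pitch, probability) pairs over a C-major fragment (MIDI numbers).
TRANSITIONS = {
    60: [(62, 0.5), (64, 0.3), (67, 0.2)],   # from C4: step up or leap to G4
    62: [(60, 0.4), (64, 0.6)],
    64: [(62, 0.5), (65, 0.3), (67, 0.2)],
    65: [(64, 0.7), (67, 0.3)],
    67: [(65, 0.5), (64, 0.3), (60, 0.2)],
}

def generate_melody(start=60, length=16, seed=None):
    """Walk the transition table to produce a list of MIDI note numbers.
    A fixed seed makes the walk reproducible, which matters for testing
    but would normally be omitted in live performance."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        notes, weights = zip(*TRANSITIONS[melody[-1]])
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

melody = generate_melody(seed=42)
```

In a real-time setting, a generator of this kind would be stepped once per beat and its output sent to a synthesizer, with the transition weights themselves available as performance controls.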

This work has since found its way into several commercial releases and is today a constant presence in my live laptop performances, whether solo or in duo and small-group formats with a variety of instrumentalists.