
The Invisible Conductor

Created by Elyana Javaheri and Dror Margalit


The crowd is seated, the lights are on, and the orchestra is ready. In just a few seconds, a subtle hand gesture will cue the musicians to join in. The conductor raises their hands, and the music begins. In tonight's show, we have The Invisible Conductor - a musical experience where participants conduct a digital orchestra using body gestures.

There is something magical about watching a conductor use their entire body to direct an orchestra. We all know how music moves our bodies and makes us dance, but here the relationship is reversed: body movement is translated into a musical performance played by dozens of people simultaneously. Without taking part in it ourselves, we would probably never understand the feeling of this collective dialogue between the conductor and the orchestra.

Still, with The Invisible Conductor, we wanted to tap into that magical feeling of controlling music with one's body: participants are invited to conduct a digital orchestra using their arms. One arm controls the tempo, and the other controls which instruments are playing. Moreover, the digital orchestra is entirely sonic, using no visuals, which enhances the musical experience and enables participants to fully immerse themselves in the relationship between their bodies and the music.


Making of The Invisible Conductor

When working on this project, we faced two main challenges. The first was reliably translating movement in 3D space into the 2D environment of the camera. To do that, we used the ml5 PoseNet model to detect the positions of the arms. We then used the location of the left hand to determine which track to play - each track contains a different instrument from the same musical piece.
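
The post does not include the track-selection code itself, but a minimal sketch of the idea might look like this, using ml5's PoseNet API. The vertical-band mapping and the tracks array (an array of p5.Sound files, as in the snippets below) are our hypothetical stand-ins for the project's actual code:

let video;
let poseNet;
let leftWrist = { x: 0, y: 0 };

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // The callback fires once the PoseNet model has loaded.
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) {
      leftWrist = poses[0].pose.leftWrist;
    }
  });
}

function draw() {
  // Divide the frame height into one band per track; the band the
  // wrist is in decides which instrument is audible.
  const trackIndex = constrain(
    floor(map(leftWrist.y, 0, height, 0, tracks.length)),
    0, tracks.length - 1
  );
  for (let i = 0; i < tracks.length; i++) {
    tracks[i].setVolume(i === trackIndex ? 1 : 0);
  }
}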


The function of the right arm, however, was more challenging, as it dictates the tempo of the music. To measure its velocity, we averaged the distances between the last 60 positions of the right wrist, as shown in the code below. We then defined three tempo categories, each triggered by a different range of velocities. This, however, presented a challenge: how do we measure velocity consistently when someone stands closer to or farther from the camera?


While we did not entirely solve this, we found a workaround. Since the distance between the eyes is fairly consistent across people and cannot be changed, we used it to estimate how far a person stands from the camera. Only when a person stands at the right distance can the experience begin. This, however, raised the second main challenge we faced: how can one know that the experience is ready with no visual cues?
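
A sketch of how that eye distance could be computed from the same PoseNet pose; the function name is our own, and the 30-50 pixel gate it feeds appears in the initiation code further down:

let eyesDist = 0;

// Called with the poses from PoseNet's 'pose' event, as above.
function gotPoses(poses) {
  if (poses.length > 0) {
    const pose = poses[0].pose;
    // The pixel distance between the eyes shrinks as the participant
    // steps back, so it doubles as a distance-from-camera estimate.
    eyesDist = dist(
      pose.leftEye.x, pose.leftEye.y,
      pose.rightEye.x, pose.rightEye.y
    );
  }
}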


Because we wanted the experience to be fully sonic, we had to provide audio feedback to indicate the different stages of the experience as well as where participants should stand. Additionally, we did not want to use verbal instructions that might pull people out of the experience. Therefore, we used sound effects from an orchestra's sound environment to signal certain states. For example, when the ml5 model is ready, the sound of an orchestra tuning plays. Once it has played, participants know that the digital orchestra is ready to begin the show. In the future, we might add more sounds, such as applause or chatter, to indicate that the conductor is in the right position.
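
A minimal sketch of that cue, assuming a preloaded tuning recording (the filename is hypothetical):

let tuning;

function preload() {
  // Any recording of an orchestra tuning up would work here.
  tuning = loadSound('orchestra-tuning.mp3');
}

// Passed as the ready callback: poseNet = ml5.poseNet(video, modelReady);
function modelReady() {
  // The orchestra "tunes up" to tell participants the show can begin.
  tuning.play();
}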




Special thanks to Mary Markhvida for helping us with parts of this code.




Code for calculating the velocity and using it to change the tempo

// Frame-to-frame distance the right wrist has traveled.
let d = dist(last_x, last_y, rightWrist.x, rightWrist.y);
// Convert to a velocity, assuming a 60 fps draw loop, then scale down.
let v = d / (1 / 60) / 10;
v_arr.push(v);
last_x = rightWrist.x;
last_y = rightWrist.y;

// Once per second, average the collected velocities and reset the buffer.
if (frameCount % 60 === 1) {
  let total = 0;
  for (let i = 0; i < v_arr.length; i++) {
    total += v_arr[i];
  }
  avg = total / v_arr.length;
  v_arr = [];
}

// Map the average velocity to one of three playback tempos.
if (avg < 35) {
  playingTempo = 0.5;
} else if (avg < 100) {
  playingTempo = 1;
} else {
  playingTempo = 1.5;
}

// Apply the tempo to all four instrument tracks.
for (let i = 0; i < tracks.length; i++) {
  tracks[i].rate(playingTempo);
}




Code for initiating the experience only when the model is ready and the participant is in the right position

// Start the show only when the participant stands at the right distance
// (eyes 30-50 pixels apart on screen), the intro sound has finished,
// and the right arm is moving.
if (
  eyesDist > 30 &&
  eyesDist < 50 &&
  !intro.isPlaying() &&
  avg > 20
) {
  playTracks();
...

function playTracks() {
  // Start all tracks looping in sync, silent and at normal speed,
  // so individual instruments can be brought in later. Runs only once.
  if (!tracksArePlaying) {
    for (let i = 0; i < tracks.length; i++) {
      tracks[i].loop();
      tracks[i].setVolume(0);
      tracks[i].rate(1);
    }
    tracksArePlaying = true;
  }
}



