
Dario Mazzanti

Post Doc

Former colleague

Research Line

Advanced Robotics

About

Dario Mazzanti studied Computer Science Engineering at the University of Genova, Italy. He graduated in February 2011, presenting a thesis on Real Time Indexing for Motion Capture Applications, with a focus on Virtual Reality interaction. He received his PhD in 2015, presenting a thesis on the enhancement of User Experience in Interactive Environments. He is currently a Post Doc at Istituto Italiano di Tecnologia. His major interests are Human-Computer Interaction, Interactive Environments, Music, New Media and the use of technology within artistic and expressive fields.

Projects

Augmented Stage for Participatory Performances

[7] The Augmented Stage concept transforms a live performance stage into an Augmented Reality (AR) environment, which can be enjoyed through the cameras of the audience's personal smartphones or tablets. Large posters placed on the stage act as trackable AR targets, becoming part of the performance installation. The posters serve as placeholders for the AR elements characterizing the Augmented Stage. By viewing the targets through their devices' cameras, the audience can watch both the stage and the AR elements. Features of these AR objects are associated with visual and sonic controls. By manipulating these objects with their devices, spectators contribute to the performance outcome together with the performers. The changes made to the Augmented Stage by someone in the audience are perceived by everyone, simultaneously and coherently. Based on these changes, the AR environment controls sonic and visual features of the performance. A fixed camera can be pointed at the stage, framing the performers and the posters; its feed can be displayed, showing the entire audience the Augmented Stage and the interactions taking place within it.

The platform provides the freedom to design different kinds of choreographies and interactions, consistent with each performance's style and purpose. The simplicity of the setup permits staging performances in most venues. The use of spectators' personal devices allows the design of transparent and powerful audience and performer interactions, contributing to the generation of ever-changing performances. This kind of experience increases the audience's sense of reward and awareness of their contribution. AR can improve the transparency of the performers' actions as well. The concept of the Augmented Stage can be applied to all performing arts, including music, theater and dance.
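Since every spectator's edit must be perceived by everyone simultaneously and coherently, the devices need a shared-state mechanism. Below is a minimal Python sketch of that idea; it is not the project's actual implementation, and all names and parameters (StageRelay, reverb_amount, etc.) are illustrative.

    # Minimal sketch: a central relay applies a spectator's edit to the shared
    # Augmented Stage state and notifies every registered device, so all
    # spectators render the same scene. Names and parameters are illustrative.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class ARObject:
        name: str
        params: Dict[str, float] = field(default_factory=dict)  # mapped to sonic/visual controls

    class StageRelay:
        def __init__(self) -> None:
            self.objects: Dict[str, ARObject] = {}
            self.devices: List[Callable[[ARObject], None]] = []

        def register_device(self, on_update: Callable[[ARObject], None]) -> None:
            self.devices.append(on_update)

        def set_param(self, obj_name: str, param: str, value: float) -> None:
            obj = self.objects.setdefault(obj_name, ARObject(obj_name))
            obj.params[param] = value
            for notify in self.devices:  # every device perceives the change
                notify(obj)

    # Usage: two spectators' devices stay coherent.
    relay = StageRelay()
    relay.register_device(lambda o: print("device A sees", o.name, o.params))
    relay.register_device(lambda o: print("device B sees", o.name, o.params))
    relay.set_param("poster_1_halo", "reverb_amount", 0.7)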


Evaluation Metrics for Participatory Performances Technological Platforms

[7] Participatory performances allow the audience to interact with the piece of work presented by a performer. Spectators may be able to access different aspects of the performance, as individuals or as a whole crowd. Access to the performance can vary in quality and quantity, and can include real-time feedback given by the crowd to the performer, or direct control of audio and visual content by one or multiple participants. Research on specific interaction devices, techniques, mappings and proper interfaces is necessary in order to provide the audience of such performances with the desired level and quality of control. We defined a set of metrics for the evaluation of the concepts and platforms used by interactive performances (a usage sketch follows the list):

- Control Design Freedom: how freely audience interaction can be designed with the platform.
- System Versatility: overall simplicity of setting up the performance, and the performer's comfort on stage.
- Audience Interaction Transparency: clarity of the relation between the audience's manipulations and their effects.
- Audience Interaction Distribution: to what extent interaction can be distributed towards the participants (a single strongly centralized interface vs. every participant holding one).
- Focus: how easily the audience can freely focus on different performance aspects (the stage, their interaction, visuals, music, etc.).
- Active/Passive Audience Affinity: how similar the experience of the non-interacting audience can be to that of interacting spectators.
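As a concrete illustration, the six metrics could be applied as ordinal ratings and compared across platforms. This is a minimal sketch under assumed conventions; the 1-5 scale and the sample scores are invented for illustration and do not come from the paper.

    # Rate platforms on the six metrics (1 = low, 5 = high) and compare profiles.
    # The scale and the sample scores are illustrative, not taken from the paper.
    from dataclasses import dataclass, asdict

    @dataclass
    class PlatformScores:
        control_design_freedom: int
        system_versatility: int
        interaction_transparency: int
        interaction_distribution: int
        focus: int
        active_passive_affinity: int

    augmented_stage = PlatformScores(4, 4, 4, 5, 3, 4)        # hypothetical ratings
    centralized_interface = PlatformScores(2, 5, 3, 1, 4, 5)  # hypothetical ratings

    for metric, a in asdict(augmented_stage).items():
        b = asdict(centralized_interface)[metric]
        print(f"{metric:26s} {a} vs {b}")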

Scenography of Immersive Virtual Musical Instruments

[5] Immersive Virtual Musical Instruments (IVMIs) can be considered the meeting point between Music Technology and Virtual Reality. Being both musical instruments and elements of Virtual Environments, IVMIs require a transversal approach from their designers, in particular when the final aim is to play them in front of an audience, as part of a scenography. In this study, the main constraints of musical performances and Virtual Reality applications are combined into a set of dimensions meant to extensively describe IVMI stage setups. A number of existing stage setups are then classified using these dimensions, explaining how they were used to showcase live virtual performances and discussing their scenographic level.



Generative Art Laboratory at Festival della Scienza

The term "Generative Art" refers to art generated with the aid of an autonomous system. A multi-disciplinary laboratory addressing this topic was presented at the 2013 Festival della Scienza in Genova (Mazzanti, D., Zappi, V., Barresi, G.). Generative Art was introduced from a historical, perceptual and technological point of view. A number of interactive and non-interactive audio-visual applications were developed and presented, showing how Human-Computer Interaction research topics can be applied to the creation of autonomous systems capable of generating perceptually intriguing works.
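To make the "autonomous system" idea concrete, here is a tiny self-contained sketch in the spirit of the laboratory (not one of its actual applications): a few iterated random-walk rules produce a structured image with no further human intervention.

    # Tiny generative sketch: random-walk agents leave marks on a grid.
    # The rule set, not the artist, decides the final form.
    import random

    WIDTH, HEIGHT = 60, 24
    grid = [[" "] * WIDTH for _ in range(HEIGHT)]

    for _ in range(6):                      # six independent agents
        x, y = WIDTH // 2, HEIGHT // 2
        for _ in range(400):                # each takes 400 random steps
            grid[y][x] = "*"
            x = (x + random.choice((-1, 0, 1))) % WIDTH
            y = (y + random.choice((-1, 0, 1))) % HEIGHT

    print("\n".join("".join(row) for row in grid))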


Augmented Reality Interaction Paradigm for Mobile Devices and Collaborative Environments

[6] The computing and visualization capabilities of mobile devices support Augmented Reality solutions, allowing users to experience a real-time view of an existing environment enhanced with computer-generated information. The user is immersed in the real world, and may interact with a set of virtual objects represented on the smartphone screen as part of the real scene. This study introduces paradigms for interacting with objects within Augmented Reality environments through a smartphone. Augmented Reality environments can be easily integrated within real contexts using trackable images, called markers or targets.
We propose an interaction paradigm based on the concept of physical drag and drop of virtual objects associated with AR targets, performed with a handheld device. Through the device camera, users can look at the augmented environment and at the virtual objects associated with AR targets. These objects can be picked up from their target and linked to the smartphone with a simple accelerometer-based shake gesture, performed with the hand holding the device. Then, by carrying the smartphone, the user can move the virtual object in space. A second shake gesture performed in the direction of an AR target drops the object, associating it with the new target.
Multiple users can interact simultaneously with the environment, using multiple devices. The association of each object with specific AR targets is constantly updated on every device. Consequently, all users experience the same environment, in which the same objects are associated with the same targets. The idea of multiple users interacting with the same AR environment was exploited in [7] to create a shared interactive environment in which multiple users could modify the sonic and visual features of a musical participatory performance.
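A minimal sketch of the pick-up/drop logic, assuming a simple magnitude threshold for the shake gesture; the threshold value, names and data layout are illustrative, not the paper's implementation.

    # Shake detection plus a two-state pick-up/drop cycle. In the multi-user
    # system, 'associations' would additionally be synchronized across devices.
    import math

    SHAKE_G = 2.5  # assumed acceleration threshold, in g units

    def is_shake(ax: float, ay: float, az: float) -> bool:
        return math.sqrt(ax * ax + ay * ay + az * az) > SHAKE_G

    class DragAndDrop:
        def __init__(self) -> None:
            self.carried = None  # virtual object currently linked to the device

        def on_shake(self, visible_target: str, associations: dict) -> None:
            if self.carried is None and visible_target in associations:
                self.carried = associations.pop(visible_target)   # pick up
            elif self.carried is not None:
                associations[visible_target] = self.carried       # drop
                self.carried = None

    # Usage: the cube travels from target_A to target_B.
    assoc = {"target_A": "virtual_cube"}
    dnd = DragAndDrop()
    dnd.on_shake("target_A", assoc)
    dnd.on_shake("target_B", assoc)
    print(assoc)  # {'target_B': 'virtual_cube'}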


 

Composite Interface - Study on a Distractive User Interface

[4] This study analyzes the behavior and impressions of users interacting with a simple Virtual Environment in which two separate tasks are required. A novel interaction paradigm was designed around a composite interface combining two devices: a Microsoft Kinect and an Android smartphone. This hybrid interface allowed an experimental study in which subjects performed a main control task with their right-hand position, and an accessory task with their right thumb touching a smartphone screen. The experiment was designed to test whether the thumb task was capable of distracting the subject's attention from the arm task, and whether it could support user performance during each trial. Future developments of this composite interface could exploit the capabilities of the smartphone to enrich the input of Kinect tracking, allowing for finer interaction with features and elements of complex Virtual Environments.
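A minimal sketch of how the two input streams could be combined into one sample for the Virtual Environment; the stream formats and names are assumptions, not the study's actual software.

    # Fuse the Kinect right-hand joint (main task) with the smartphone touch
    # state (accessory thumb task) into a single input sample.
    from dataclasses import dataclass

    @dataclass
    class FusedInput:
        hand_x: float
        hand_y: float
        hand_z: float        # right-hand position from Kinect, in metres
        thumb_down: bool     # touch state from the smartphone screen

    def fuse(kinect_frame: dict, touch_event: dict) -> FusedInput:
        hx, hy, hz = kinect_frame["right_hand"]
        return FusedInput(hx, hy, hz, touch_event["pressed"])

    sample = fuse({"right_hand": (0.31, 1.12, 1.80)}, {"pressed": True})
    print(sample)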


Real Time Indexing for Motion Capture Applications

[3] A recurrent problem in motion capture is keeping a coherent indexing of different points during real-time tracking. The purpose of this study was to develop, test and exploit a real-time algorithm capable of addressing this problem. The current solution includes a main indexing algorithm and a secondary indexing-recovery algorithm. The main indexing technique was developed to keep the most correct indexing of an arbitrary number of points. The indexing-recovery technique adds an indexing-correction feature to the main algorithm. The recovery technique was designed with Virtual Reality applications in mind, but not exclusively. The development of a functional indexing-based Virtual Reality application allowed the testing of the algorithm in its entirety. During the tests, participants were asked to recreate a number of virtual object configurations inside a Virtual Reality environment, by copying, moving and deleting a given number of different objects. These interactions were triggered thanks to the algorithm's real-time distinction between three tracked fingers. Analysis of the test data gave numerical information on the algorithm's behavior, and observations made during development and testing provided useful clues for further developments [video].
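To illustrate the problem being solved, here is a minimal greedy nearest-neighbour sketch: each labeled point from the previous frame claims the closest unlabeled point in the new frame, so indices survive motion. The published algorithm, including its recovery step, is more elaborate than this.

    # Greedy frame-to-frame reindexing: indices follow the nearest points.
    import math

    def reindex(prev: dict, current: list) -> dict:
        """prev maps index -> (x, y, z); current is the new unlabeled frame."""
        remaining = list(current)
        assigned = {}
        for idx, p in prev.items():
            best = min(remaining, key=lambda q: math.dist(p, q))
            assigned[idx] = best
            remaining.remove(best)
        return assigned

    # Three tracked fingers, slightly moved between frames: indices survive.
    frame1 = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
    frame2 = [(1.05, 0.02, 0.0), (0.02, 0.98, 0.0), (0.01, -0.01, 0.0)]
    print(reindex(frame1, frame2))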



vrGrains

[2] Corpus-based concatenative synthesis has been approached from different perspectives by many researchers, generating a number of diverse solutions addressing target selection, corpus visualization and navigation. With this paper we introduce the concept of extended descriptor space, which permits arbitrary redistributions of audio units in space without affecting each unit's sonic content. This feature can be exploited in novel instruments and music applications to achieve spatial dispositions that enhance control and expression. Making use of Virtual Reality technology, we developed vrGrains, an immersive installation in which real-time corpus navigation is based on the concept of extended descriptor space and on the related audio-unit rearrangement capabilities. The user is free to explore a corpus represented by 3D units that physically surround her/him. The interface provides different natural-interaction modalities, allowing both controllable and chaotic audio-unit triggering and motion [video].
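The core idea can be sketched in a few lines: each unit's sonic descriptors stay fixed, while a separate spatial position can be rearranged freely. The field names, descriptor choices and file names below are illustrative, not vrGrains' actual data model.

    # Extended descriptor space: audio content is untouched; only the 3D
    # placement used for navigation is redistributed.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class AudioUnit:
        sample_path: str
        descriptors: Tuple[float, ...]          # e.g. (pitch, loudness): fixed
        position: Tuple[float, float, float]    # freely rearrangeable

    def rearrange_by_descriptor(units, axis: int) -> None:
        """Spread units along x according to one descriptor value."""
        for u in units:
            u.position = (u.descriptors[axis], u.position[1], u.position[2])

    corpus = [
        AudioUnit("grain_001.wav", (220.0, -12.0), (0.0, 0.0, 0.0)),
        AudioUnit("grain_002.wav", (440.0, -9.0), (0.0, 1.0, 0.0)),
    ]
    rearrange_by_descriptor(corpus, axis=0)  # spatialize by pitch
    print([u.position for u in corpus])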


Hybrid Reality Live Performances

[1] In this study we introduce a multimodal platform for Hybrid Reality live performances: by means of non-invasive Virtual Reality technology, we developed a system to present artists and interactive virtual objects in audio/visual choreographies on the same real stage. These choreographies can include spectators too, providing them with the possibility to directly modify the scene and its audio/visual features. We also introduce the first interactive performance staged with this technology, in which an electronic musician played live tracks while manipulating the 3D projected visuals. Questionnaires were distributed after the show, and in the last part of this work we discuss the analysis of the collected data, underlining positive and negative aspects of the proposed experience. This paper is accompanied by a performance proposal called Dissonance, in which two performers exploit the platform to create a progressive soundtrack along with the exploration of an interactive virtual environment [video].


Selected Publications

2014

  • [7] Augmented Stage for Participatory Performances, Mazzanti, D., Zappi, V., Caldwell, D. and Brogni, A.
    Proceedings of the International Conference on New Interfaces for Musical Expression, 30 June - 4 July 2014, London, UK.

  • [6] Repetitive Drag & Drop of AR Objects: a Pilot Study, Barresi, G., Mazzanti, D., Caldwell, D. and Brogni, A.
    Proceedings of 2014 IEEE International Conference on Complex, Intelligent and Software Intensive Systems (CISIS 2014), 2 - 4 July 2014, Birmingham, UK.

  • [5] Scenography of Immersive Virtual Musical Instruments, Berthaut F., Zappi, V., Mazzanti, D.
1st Workshop on Sonic Interactions for Virtual Environments at IEEE Virtual Reality 2014, 29 March 2014.

2013

  • [4] Distractive User Interface for Repetitive Motor Tasks: a Pilot Study, Barresi, G., Mazzanti, D., Caldwell, D. and Brogni, A.
    Proceedings of 2013 IEEE International Conference on Complex, Intelligent and Software Intensive Systems - International Workshop on Intelligent Interfaces for Human-Computer Interaction, 3 - 5 July 2013, Taichung, Taiwan.

2012

  • [3] Point Clouds Indexing in Real Time Motion Capture, Mazzanti, D., Zappi, V., Brogni, A. and Caldwell, D.
    Proceedings of the 18th International Conference on Virtual Systems and Multimedia, 2 - 5 September 2012, Milan, Italy.

  • [2] Concatenative Synthesis Unit Navigation and Dynamic Rearrangement in vrGrains, Zappi, V., Mazzanti, D., Brogni, A. and Caldwell, D.
    Proceedings of the 9th Sound and Music Computing Conference, 11 - 14 July 2012, Copenhagen, Denmark.

2011

  • [1] Design and Evaluation of a Hybrid Reality Performance, Zappi, V., Mazzanti, D., Brogni, A. and Caldwell, D.
    Proceedings of the International Conference on New Interfaces for Musical Expression, 30 May - 1 June 2011, Oslo, Norway.
