
Dario Mazzanti

Junior Technician

Research Line

Advanced Robotics

Center

IIT Central Research Labs Genova

Contacts

+39 010 2896 214

About

Dario Mazzanti graduated in February 2011 with a thesis focusing on Motion Capture and Virtual Reality interaction. In 2015 he obtained a Ph.D. in Computer Science at Istituto Italiano di Tecnologia (Italian Institute of Technology, IIT), presenting a thesis on Enhancing User Experience in Interactive Environments. One of the main topics of his work and research from 2010 to 2015 was the application of Virtual and Augmented Reality technologies to the design and enrichment of interactive music performances and installations. From 2015 to 2016 he worked as a Post-Doctoral researcher on the WearHap European Project, contributing to the integration of wearable haptic devices in Virtual Reality and teleoperation setups. From January 2017 to February 2018 he was a VR Developer at Singular Perception (Genova), working on the design and development of immersive VR games and creating audio/visual content and tools for the studio's artists. He is currently a Technician at IIT's Advanced Robotics Department, working on Virtual Reality and User Interfaces for Teleoperation.

His major interests are Human-Computer Interaction, Interactive Environments, Music, New Media and the use of technology within artistic and expressive fields.

Projects

Augmented Stage for Participatory Performances

[7] The Augmented Stage concept transforms a live performance stage into an Augmented Reality (AR) environment, which the audience can experience through the cameras of their personal smartphones or tablets. Large posters placed on the stage act as trackable AR targets and become part of the performance installation. The posters serve as placeholders for AR elements, characterizing the Augmented Stage. By framing the targets with their device cameras, the audience can see both the stage and the AR elements. Features of these AR objects are associated with visual and sonic controls. By manipulating these objects with their devices, spectators contribute to the performance outcome together with the performers. The changes made to the Augmented Stage by someone in the audience are perceived by everyone, simultaneously and coherently. Based on these changes, the AR environment controls sonic and visual features of the performance. A fixed camera can be pointed at the stage, framing the performers and the posters; its feed can be displayed to show the entire audience the Augmented Stage and the interactions taking place within it.

The platform gives designers the freedom to create different kinds of choreographies and interactions, consistent with the performance's style and purpose. The simplicity of the setup makes it possible to stage performances in most venues. The use of spectators' personal devices allows the design of transparent and powerful audience-performer interactions, contributing to ever-changing performances. This kind of experience increases the audience's sense of reward and awareness of their contribution. AR can improve the transparency of the performers' actions as well. The concept of the Augmented Stage can be applied to all performing arts, including music, theater and dance.
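As a rough illustration of the shared-state idea described above, the sketch below models the Augmented Stage as a mapping from AR targets to virtual-object parameters that is mirrored on every connected device. Class and parameter names (AugmentedStage, SpectatorDevice, filter_cutoff, and so on) are hypothetical and do not reflect the actual platform.

```python
# Minimal sketch (assumed names): a shared stage state that every spectator
# device mirrors, so any manipulation is perceived by everyone coherently.

class AugmentedStage:
    def __init__(self):
        # AR target id -> parameters of the virtual object anchored to it
        self.objects = {
            "poster_left":  {"scale": 1.0, "filter_cutoff": 800.0},
            "poster_right": {"scale": 1.0, "reverb_mix": 0.2},
        }
        self.devices = []                    # connected spectator devices

    def connect(self, device):
        self.devices.append(device)
        device.sync(self.objects)            # newcomers receive the current state

    def manipulate(self, target, param, value):
        """Called whenever any spectator manipulates an AR object."""
        self.objects[target][param] = value
        for device in self.devices:          # everyone perceives the same change
            device.sync(self.objects)


class SpectatorDevice:
    def __init__(self, name):
        self.name, self.view = name, {}

    def sync(self, objects):
        # Local copy used for rendering AR overlays and driving audio/visuals
        self.view = {target: dict(params) for target, params in objects.items()}


stage = AugmentedStage()
alice, bob = SpectatorDevice("alice"), SpectatorDevice("bob")
stage.connect(alice)
stage.connect(bob)
stage.manipulate("poster_left", "filter_cutoff", 1200.0)
assert alice.view == bob.view                # coherent state on every device
```

Broadcasting every manipulation to all connected devices is one simple way to obtain the simultaneous, coherent view mentioned above; the real system may synchronize state differently.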

 


 

Evaluation Metrics for Participatory Performances Technological Platforms

[7] Participatory performances allow the audience to interact with the piece of work presented by a performer. Spectators may be able to access different aspects of the performance, as individuals or as a whole crowd. Access to the performance can vary in quality and quantity, and can include real-time feedback given by the crowd to the performer, or direct control of audio and visual content by one or multiple participants. Research on specific interaction devices, techniques, mappings and interfaces is necessary in order to provide the audience of such performances with the desired level and quality of control. We defined a set of metrics for the evaluation of the concepts and platforms used by participatory performances:

- Control Design Freedom: how freely audience interaction can be designed with the platform.
- System Versatility: how simple the performance is to set up overall, and how comfortable the performer is on stage.
- Audience Interaction Transparency: how clear the relation is between the audience's manipulations and their effects.
- Audience Interaction Distribution: to what extent interaction can be distributed among the participants (from a strongly centralized interface to one interface per participant).
- Focus: how easily the audience can freely focus on different aspects of the performance (the stage, their interaction, visuals, music, etc.).
- Active/Passive Audience Affinity: how similar the experience of the non-interacting audience can be to that of interacting spectators.









Scenography of Immersive Virtual Musical Instruments

[5] Immersive Virtual Musical Instruments (IVMIs) can be seen as the meeting point between Music Technology and Virtual Reality. Being both musical instruments and elements of Virtual Environments, IVMIs require a transversal approach from their designers, in particular when the final aim is to play them in front of an audience as part of a scenography. In this study, the main constraints of musical performances and Virtual Reality applications are combined into a set of dimensions meant to extensively describe IVMI stage setups. A number of existing stage setups are then classified using these dimensions, explaining how they were used to showcase live virtual performances and discussing their scenographic level.



Generative Art Laboratory at Festival della Scienza

The term "Generative Art" refers to art generated with the aid of an autonomous system. A multi-disciplinary laboratory addressing this topic was presented at 2013 Genova's Festival della Scienza (Mazzanti, D., Zappi, V., Barresi, G.). Generative Art was introduced from a historical, perceptual and technological point of view. A number of interactive and non interactive audio-visual applications were developed and presented, proving how Human Computer Interaction research topics can be applied to the creation of autonomous systems capable of generating perceptually intriguing works.


 

 


 

Augmented Reality Interaction Paradigm for Mobile Devices and Collaborative Environments

[6] The computing and visualization capabilities of mobile devices support Augmented Reality solutions, allowing users to experience a real-time view of an existing environment enhanced with computer-generated information. The user is immersed in the real world and can interact with a set of virtual objects, which are rendered on the smartphone screen as part of the real scene. This study introduces paradigms for interacting with objects in Augmented Reality environments through a smartphone. Augmented Reality environments can be easily integrated within real contexts using trackable images, called markers or targets.
We propose an interaction paradigm based on the physical drag and drop of virtual objects associated with AR targets, performed with a handheld device. Through the device camera, users can look at the augmented environment and at the virtual objects associated with the AR targets. An object can be picked up from its target and linked to the smartphone with a simple accelerometer-based shake gesture, performed with the hand holding the device. By carrying the smartphone, the user can then move the virtual object in space. A second shake gesture performed towards another AR target drops the object, associating it with the new target.
Multiple users can interact simultaneously with the environment, using multiple devices. The association of each object with specific AR targets is constantly updated on every device. Consequently, all users experience the same environment, in which the same objects are associated with the same targets. The idea of multiple users interacting with the same AR environment was exploited in [7] to create a shared interactive environment in which multiple users could modify the sonic and visual features of a participatory musical performance.
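The sketch below is a minimal, hypothetical rendition of the pick/drop logic described above, assuming a shake can be detected as a spike in accelerometer magnitude; the threshold, class names and the single shared dictionary of object-target associations are illustrative, not the study's actual implementation.

```python
import math

SHAKE_THRESHOLD = 25.0      # m/s^2, assumed value well above gravity (~9.8)

def is_shake(accel_sample):
    """accel_sample: (ax, ay, az) read from the device accelerometer."""
    return math.sqrt(sum(a * a for a in accel_sample)) > SHAKE_THRESHOLD


class SharedARState:
    """Object-to-target associations, mirrored on every connected device."""
    def __init__(self):
        self.object_on_target = {"cube": "target_A"}    # object id -> target id


class HandheldDevice:
    def __init__(self, state):
        self.state, self.carried = state, None

    def on_shake(self, framed_target):
        """First shake picks up the object from the framed target;
        a second shake towards another target drops the carried object."""
        if self.carried is None:
            for obj, target in self.state.object_on_target.items():
                if target == framed_target:
                    self.carried = obj
                    self.state.object_on_target[obj] = None   # now carried
                    return f"picked {obj} from {framed_target}"
            return "nothing to pick here"
        obj, self.carried = self.carried, None
        self.state.object_on_target[obj] = framed_target
        return f"dropped {obj} on {framed_target}"


state = SharedARState()
phone = HandheldDevice(state)
if is_shake((3.0, 28.0, 1.0)):                  # shake while framing target_A
    print(phone.on_shake("target_A"))           # picked cube from target_A
if is_shake((30.0, 2.0, 0.5)):                  # shake while framing target_B
    print(phone.on_shake("target_B"))           # dropped cube on target_B
print(state.object_on_target)                   # {'cube': 'target_B'}
```

Keeping the object-target associations in one shared structure is what lets every device render the same scene after each drop.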


 

Composite Interface - Study on a Distractive User Interface

[4] This study analyzes the behavior and impressions of users interacting with a simple Virtual Environment in which two separate tasks are required. A novel interaction paradigm was designed around a composite interface combining two main devices: a Microsoft Kinect and an Android smartphone. This hybrid interface enabled an experimental study in which subjects performed a main control task with the position of their right hand, and an accessory task with their right thumb on the smartphone screen. The experiment was designed to assess whether the thumb task could distract the subject's attention from the upper-arm task, and whether it could support user performance during each trial. Future developments of this composite interface could exploit the smartphone to enrich the input of Kinect tracking, allowing finer interaction with the features and elements of complex Virtual Environments.










 

Real Time Indexing for Motion Capture Applications

[3] A recurrent problem in motion capture is keeping a coherent indexing of different points during real-time tracking. The purpose of this study was to develop, test and exploit a real-time algorithm capable of dealing with this issue. The current solution includes a main indexing algorithm and a secondary indexing-recovery algorithm. The main indexing technique was developed to maintain the most correct indexing of an arbitrary number of points. The indexing-recovery technique adds an indexing correction feature to the main algorithm, and was designed with Virtual Reality applications in mind, although not exclusively. The development of a functional indexing-based Virtual Reality application allowed the algorithm to be tested in its entirety. During the tests, participants were asked to recreate a number of virtual object configurations inside a Virtual Reality environment by copying, moving and deleting a given number of different objects. These interactions were triggered thanks to the algorithm's real-time distinction between three tracked fingers. Analysis of the test data provided numerical information on the algorithm's behavior, and observations made during development and testing provided useful clues for further developments [video].
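To make the indexing problem concrete, the sketch below shows one simple way of carrying point indices across frames: a greedy nearest-neighbour assignment between the previously indexed points and the points detected in the new frame. It only illustrates the general idea; the algorithms developed in the study are not reproduced here, and the function name is hypothetical.

```python
import math

def carry_indices(prev_points, new_points):
    """Greedy nearest-neighbour assignment of existing indices to new points.

    prev_points: dict index -> (x, y, z) from the previous frame.
    new_points:  list of (x, y, z) detected in the current frame.
    Returns a dict index -> (x, y, z) with indices carried over.
    """
    # Rank every (old index, new point) pair by distance, closest first.
    candidates = sorted(
        (math.dist(pos, new_points[j]), idx, j)
        for idx, pos in prev_points.items()
        for j in range(len(new_points))
    )
    assigned, used_indices, used_points = {}, set(), set()
    for _, idx, j in candidates:
        if idx in used_indices or j in used_points:
            continue
        assigned[idx] = new_points[j]
        used_indices.add(idx)
        used_points.add(j)
    return assigned

# Example: three tracked fingers whose detection order changes between frames.
frame_a = {0: (0.00, 0.0, 0.0), 1: (0.10, 0.0, 0.0), 2: (0.20, 0.0, 0.0)}
frame_b = [(0.21, 0.01, 0.0), (0.01, 0.00, 0.0), (0.11, 0.00, 0.0)]
print(carry_indices(frame_a, frame_b))
# {0: (0.01, 0.0, 0.0), 1: (0.11, 0.0, 0.0), 2: (0.21, 0.01, 0.0)}
```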




 

vrGrains

[2] Corpus-based concatenative synthesis has been approached from different perspectives by many researchers, generating a number of diverse solutions that address target selection, corpus visualization and navigation. With this paper we introduce the concept of extended descriptor space, which permits arbitrary redistributions of audio units in space without affecting each unit's sonic content. This feature can be exploited in novel instruments and music applications to achieve spatial arrangements that enhance control and expression. Making use of Virtual Reality technology, we developed vrGrains, an immersive installation in which real-time corpus navigation is based on the concept of extended descriptor space and on the related audio unit rearrangement capabilities. The user is free to explore a corpus represented by 3D units that physically surround her/him. Through natural interaction, the interface provides different interaction modalities that allow both controllable and chaotic audio unit triggering and motion [video].
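The sketch below illustrates the core of the extended descriptor space idea under simplified assumptions: each audio unit keeps its descriptors (its sonic content) untouched, while a separate spatial position, used for navigation and selection, can be rearranged freely. All names are hypothetical and the actual vrGrains implementation is not reproduced here.

```python
import math
from dataclasses import dataclass, field

@dataclass
class AudioUnit:
    descriptors: tuple                       # e.g. (pitch, loudness, centroid)
    position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

def rearrange(units, layout):
    """Redistribute the units in space without touching their descriptors."""
    for unit, pos in zip(units, layout):
        unit.position = list(pos)

def nearest_unit(units, hand_position):
    """Select the unit closest to the user's hand in the spatial layout."""
    return min(units, key=lambda u: math.dist(u.position, hand_position))

units = [AudioUnit((220.0, -12.0, 0.3)), AudioUnit((440.0, -6.0, 0.5))]
rearrange(units, [(0.0, 1.5, 0.2), (1.0, 1.5, 0.2)])   # spatial layout only
picked = nearest_unit(units, (0.9, 1.4, 0.2))
print(picked.descriptors)          # sonic content unchanged: (440.0, -6.0, 0.5)
```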

 


 

Hybrid Reality Live Performances

[1] In this study we introduce a multimodal platform for Hybrid Reality live performances: by means of non-invasive Virtual Reality technology, we developed a system to present artists and interactive virtual objects in audio/visual choreographies on the same real stage. These choreographies can also include spectators, giving them the possibility to directly modify the scene and its audio/visual features. We also introduce the first interactive performance staged with this technology, in which an electronic musician played live tracks while manipulating the 3D projected visuals. Questionnaires were distributed after the show; in the last part of this work we discuss the analysis of the collected data, highlighting positive and negative aspects of the proposed experience. This paper is accompanied by a performance proposal called Dissonance, in which two performers use the platform to create a progressive soundtrack while exploring an interactive virtual environment [video].





Selected Publications

2016

  • [9] Focus-Sensitive Dwell Time in EyeBCI: Pilot Study, Barresi, G., Tessadori, J., Schiatti, L., Mazzanti, D., Caldwell, D. and Mattos, L.
    Proceedings of the 8th Conference on Computer Science and Electronic Engineering, 2016

  • [8] HEXOTRAC: A Highly Under-Actuated Hand Exoskeleton for Finger Tracking and Force Feedback, Sarakoglou, I., Brygo, A., Mazzanti, D., Garcia Hernandez, N.V., Caldwell, D. and Tsagarakis, N.
    Proceedings of 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), 2016

2014

  • [7] Augmented Stage for Participatory Performances, Mazzanti, D., Zappi, V., Caldwell, D. and Brogni, A.
    Proceedings of the International Conference on New Interfaces for Musical Expression, 30 June - 4 July 2014, London, UK.

  • [6] Repetitive Drag & Drop of AR Objects: a Pilot Study, Barresi, G., Mazzanti, D., Caldwell, D. and Brogni, A.
    Proceedings of 2014 IEEE International Conference on Complex, Intelligent and Software Intensive Systems (CISIS 2014), 2 - 4 July 2014, Birmingham, UK.

  • [5] Scenography of Immersive Virtual Musical Instruments, Berthaut F., Zappi, V., Mazzanti, D.
    1st Workshop on Sonic Interactions for Virtual Environments at IEEE Virtual Reality 2014, March 29th 2014.

2013

  • [4] Distractive User Interface for Repetitive Motor Tasks: a Pilot Study, Barresi, G., Mazzanti, D., Caldwell, D. and Brogni, A.
    Proceedings of 2013 IEEE International Conference on Complex, Intelligent and Software Intensive Systems - International Workshop on Intelligent Interfaces for Human-Computer Interaction, 3 - 5 July 2013, Taichung, Taiwan.

2012

  • [3] Point Clouds Indexing in Real Time Motion Capture, Mazzanti, D., Zappi, V., Brogni, A. and Caldwell, D.
    Proceedings of the 18th International Conference on Virtual Systems and Multimedia, 2 - 5 September 2012, Milan, Italy.

  • [2] Concatenative Synthesis Unit Navigation and Dynamic Rearrangement in vrGrains, Zappi, V., Mazzanti, D., Brogni, A. and Caldwell, D.
    Proceedings of the 9th Sound and Music Computing Conference, 11 - 14 July 2012, Copenhagen, Denmark.

2011

  • [1] Design and Evaluation of a Hybrid Reality Performance, Zappi, V., Mazzanti, D., Brogni, A. and Caldwell, D.
    Proceedings of the International Conference on New Interfaces for Musical Expression, 30 May - 1 June 2011, Oslo, Norway.

 
