
Visual attention is essential for humans and animals interacting with the environment. Robots can similarly take advantage of attentional mechanisms to autonomously select and react to external stimuli. We are developing an attention system for iCub that will be able to direct its gaze to regions of the field of view containing potentially interesting stimuli, for example objects that are moving or reachable. To do so, we take inspiration from models of primate visual attention and adapt them to the event-driven paradigm. We are also working on their full implementation on spiking neuromorphic hardware, towards the design of a low-latency, low-cost attentive module.

We developed a bio-inspired, bottom-up, event-driven saliency model that follows the "Gestalt laws" to detect candidate objects in the scene. The Gestalt laws describe how human beings perceptually group entities in a scene in order to understand and interact with the environment. We adapted the original bio-inspired model, designed for frame-based cameras, to work with event-driven cameras. The resulting proto-object saliency pipeline is fully bio-inspired and runs online on iCub.
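To make the idea concrete, here is a heavily simplified, hypothetical sketch of an event-driven saliency map: events are accumulated into a surface and a center-surround (difference-of-Gaussians) operator stands in for the much richer proto-object grouping stages of the actual pipeline. All function names, the sensor resolution, and the filter sizes are invented for this illustration; they are not the iCub implementation.

```python
import numpy as np

def blur_1d(img, sigma, axis):
    """Separable Gaussian blur along one axis (pure NumPy)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.apply_along_axis(
        lambda v: np.convolve(v, k, mode="same"), axis, img)

def saliency_from_events(events, shape):
    """Accumulate (x, y, t, polarity) events into a surface, then
    apply a center-surround operator as a crude stand-in for
    proto-object grouping: clusters of events pop out."""
    surface = np.zeros(shape)
    for x, y, t, p in events:          # polarity ignored in this sketch
        surface[y, x] += 1.0
    center = blur_1d(blur_1d(surface, 1.0, 0), 1.0, 1)
    surround = blur_1d(blur_1d(surface, 4.0, 0), 4.0, 1)
    sal = np.clip(center - surround, 0.0, None)
    return sal / (sal.max() + 1e-9)   # normalize to [0, 1]

# a cluster of events is more salient than a lone event
events = [(100, 50, 0.0, 1)] * 20 + [(10, 10, 0.0, 1)]
sal = saliency_from_events(events, shape=(240, 304))
peak = np.unravel_index(np.argmax(sal), sal.shape)   # (row, col) = (50, 100)
```

The peak of the map would then drive the gaze controller toward the most salient region.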

We then added depth information, using a bio-inspired disparity extractor based on cooperative networks for event-driven cameras. The model runs online on the robot with low latency (~100 ms) and prioritizes objects closer to iCub.
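The cooperative-network idea can be illustrated with a toy Marr–Poggio-style relaxation on a single scanline: candidate matches support each other along the same disparity (continuity) and isolated, spurious matches die out. This is only a didactic sketch under strong simplifications (1-D, binary features, frame-like input); the names and parameters are invented, and the actual extractor is event-driven and operates on the full stereo stream.

```python
import numpy as np

def cooperative_stereo(left, right, max_disp, n_iter=3):
    """Toy cooperative network on one scanline.
    C[x, d] holds support for "pixel x in the left view matches
    pixel x - d in the right view". A match survives only while it
    receives excitatory support from a neighbour at the same
    disparity (continuity constraint)."""
    n = len(left)
    C = np.zeros((n, max_disp + 1), dtype=int)
    for d in range(max_disp + 1):
        # initial candidate matches: a feature present in both views
        C[d:, d] = left[d:] & right[:n - d]
    init = C.copy()
    for _ in range(n_iter):
        support = np.zeros_like(C)
        support[1:] += C[:-1]    # left neighbour, same disparity
        support[:-1] += C[1:]    # right neighbour, same disparity
        C = init * (support > 0) # unsupported matches are pruned
    return C.argmax(axis=1)      # winning disparity per pixel (0 if none)

# left view = right view shifted by a true disparity of 2
right = np.array([1 if x % 4 in (0, 1) else 0 for x in range(20)])
left = np.roll(right, 2)
disp = cooperative_stereo(left, right, max_disp=3)
```

Spurious matches at disparities 1 and 3 appear in isolation and are pruned, so the network converges to the true disparity of 2 at every feature pixel.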

The spiking implementation of the proto-object saliency model, running on the neuromorphic platform SpiNNaker, shows a latency of only ~17 ms.

Keywords: bio-inspired bottom-up proto-object saliency models, cooperative stereo matching, spiking neural networks.


