THE PROJECT

The TACO (Three-dimensional Adaptive Camera with Object Detection and Foveation) project aimed to enhance the capabilities of service robots by equipping their sensing system with true 3D foveation properties, increasing the robots' ability to interact with their environment in a more natural, human-like way.

Contact

Project Coordinator
TECHNIKON Forschungs- und Planungsgesellschaft mbH
Burgplatz 3a
A-9500 Villach
E-Mail: coordination(at)taco-project.eu
Web: www.technikon.com

Links

Related Projects

CustomPacker

Electronic consumer goods come in a large number of variants and are packaged manually. Automating the packaging process will decrease production cycle times and costs, even for mixed-variant production lines, allowing several production lines to be merged into a smaller number of packaging stations. CustomPacker aims at developing and integrating a scalable and flexible packaging assistant that aids human workers in packaging mid- to upper-sized and mostly heavy goods. The main goal is to design and assemble a packaging workstation mostly from standard hardware components, resulting in a universal handling system for different products.


CUSTOMPACKER Homepage: www.custompacker.eu

CORBYS

CORBYS focuses on robotic systems that have a symbiotic relationship with humans. These systems have to cope with highly dynamic environments, as humans are demanding, curious and often act unpredictably. CORBYS will design and implement a cognitive robot control architecture that allows the integration of 1) high-level cognitive control modules, 2) a semantically driven self-awareness module, and 3) a cognitive framework for anticipation of, and synergy with, human behaviour. These modules will be supported by an advanced multi-sensor system to facilitate dynamic environment perception, enabling the robot's behaviour to adapt to the user's changing requirements.


CORBYS Homepage: www.corbys.eu

DARWIN

Targeting both the assembly and service industries, the DARWIN project aims to develop an "acting, learning and reasoning" assembler robot that will ultimately be capable of assembling complex objects from their constituent parts. First steps will also be taken to explore the general repair problem. Functionally, the robot will operate in three modes: slave, semi-autonomous and fully autonomous.
Categorisation, object affordances, accurate manipulation and the discovery of naive physics will be acquired gradually by the robot. The reasoning system will exploit all of these experiences to allow the robot to go beyond experience when confronted with novel situations.


DARWIN Homepage: www.darwin-project.eu

I-SUR

This project will develop advanced technologies for automation in minimally invasive and open surgery. The introduction of ever more complex surgical devices, such as surgical robots and single-port minimally invasive instruments, highlights the need for new control technologies in the operating room. On the one hand, the complexity of these devices requires new coordination methods to ensure their smooth operation; on the other hand, it requires new interfaces that simplify their use for surgeons. Automation may thus improve performance and efficiency in the operating room without increasing operating costs.


I-SUR Homepage: www.isur.eu

IntellAct

IntellAct addresses the problem of understanding and exploiting the meaning (semantics) of manipulations in terms of objects, actions and their consequences, in order to reproduce human actions with machines. This is required in particular for interaction between humans and robots, in which the robot has to understand the human action and then transfer it to its own embodiment. IntellAct will enable this transfer not by copying the movements of the human but by transferring the human action at a semantic level. IntellAct addresses two major application areas: monitoring human manipulations for correctness, and efficiently teaching cognitive robots to perform manipulations in a wide variety of applications.


INTELLACT Homepage: www.intellact.eu

RoboEarth

The RoboEarth project takes a new approach to endowing robots with advanced perception and action capabilities, enabling them to carry out useful tasks autonomously. The core innovation is a World Wide Web-style database: RoboEarth. RoboEarth will allow robots to share any reusable knowledge independently of their hardware and configuration. When a robot starts performing a task, it can download available high-level knowledge about both the task and the environment; it can then translate this knowledge to its own hardware and configuration, and improve it by learning during the task. Finally, it will upload the improved knowledge to RoboEarth again.


RoboEarth Homepage: www.roboearth.org

TOMSY

The goal of the proposed research is to enable a generational leap in the techniques and scalability of motion synthesis systems. Motion synthesis is a key component of future robotic and cognitive systems, enabling their physical interaction with humans and physical manipulation of their environment. Existing motion synthesis algorithms are severely limited in their ability to cope with real-world objects such as flexible objects or objects with many degrees of freedom. The high dimensionality of the state and action space of such objects defies existing methods for perception, control and planning, and leads to poor generalisability of solutions in such domains. These limitations are a core obstacle for current robotics research. We propose to solve these problems by learning and exploiting appropriate topological representations, and by testing them on challenging domains: flexible multi-object manipulation, close-contact robot control and computer animation.


TOMSY Homepage: www.tomsy.eu

V-Charge

The V-Charge project is based on the vision that, driven by the required drastic decrease in CO2 emissions and energy consumption, mobility will undergo important changes in the years to come. These include new concepts for an optimal combination of public and individual transportation, as well as the introduction of electric cars that need coordinated recharging. A typical scenario might be the automatic drop-off and recovery of a car in front of a train station, with no need for the driver to take care of parking or recharging. Such new mobility concepts require, among other technologies, autonomous driving in designated areas. The objective of this project is to develop a smart car system that allows autonomous driving in designated areas (e.g. valet parking, park and ride) and can offer advanced driver support in urban environments.


V-CHARGE Homepage: www.v-charge.eu

VANAHEIM

The aim of VANAHEIM is to study innovative surveillance components for autonomous monitoring of complex audio/video surveillance infrastructures, such as those found in shopping malls or underground stations. To this end, VANAHEIM addresses three main application-driven research questions:

1. Scene activity modelling algorithms for automatic sensor selection in the control room. In everyday practice, surveillance video wall monitors frequently show empty scenes, while many other cameras are looking at scenes in which something (even something normal) is happening. Performing sensor selection at the control-room level, to autonomously select the streams to display, therefore seems necessary. While this is trivial when distinguishing empty from occupied scenes, models that characterise stream content in terms of usual and unusual activities become necessary when almost all scenes are occupied. The need for such selection is even more explicit for audio streams, where mosaicing of the data is not possible due to the transparent nature of sound. VANAHEIM thus targets the development of automatic audio/video components for selecting the streams presented to operators in the control room.

2. Investigation of behavioural cues for human-centred monitoring and reporting. VANAHEIM investigates the use of subtle human behavioural cues (head pose, body shape) and social models (e.g. about space occupancy) to perform live detection of well-defined scenarios of interest. The project targets three specific levels of monitoring: individuals, groups of people, and crowd/people flow. Lastly, VANAHEIM targets the development of situational awareness reporting, which aims at translating the ongoing activities of people into meaningful user-oriented figures, for example through a map-based overlay of the approximate location, number and behaviour of people in the infrastructure.

3. Collective behaviour modelling and online learning from long-term analysis of passenger activities. By combining cognitive science and ethological analysis, VANAHEIM aims at designing models for identifying and characterising the structures inherent in collective human behaviour. In other words, by continuously analysing, learning from and clustering information about users' locations, routes, activities, interactions with other passengers and/or equipment, and contextual data (time of day, density of people, ...), VANAHEIM targets a subsystem able to estimate long-term trends in large-scale human behaviour, allowing the discovery of comprehensive collective daily routines.

The evaluation and assessment of the subsystems developed within the project is covered by the participation of the Turin and Paris transport operators; deployment at both sites will make it possible to demonstrate the scalability as well as the performance of the developed system.


VANAHEIM Homepage: www.vanaheim-project.eu