15-07-2010 | CCG
CCG participates in a project that aims to automatically capture and recognise human sensory-motor activities.
The Computer Graphics Center (CCG), an interface institution of the University of Minho and the Engineering Institute of Coimbra for the TICE (ICT) area, is part of the European consortium responsible for developing the COGNITO project, an acronym for Cognitive Workflow Capturing and Rendering with On-Body Sensor Networks.
The project started last February and will run for 36 months. It is led by DFKI – Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (Germany) and includes several European partners, namely the University of Leeds (UK), the University of Bristol (UK), the University of Compiègne (France), Prototyping GmbH (Germany) and Technologie-Initiative SmartFactory KL e.V. (Germany). The COGNITO project is co-financed by the European Commission's Seventh Framework Programme, under theme ICT-2009.2.1, Cognitive Systems and Robotics – Information and Communication Technologies.
The automatic capture, recognition and rendering of human sensory-motor activities represent essential technologies in many diverse applications, ranging from 3D virtual manuals through to training simulators and novel computer games. Although capture systems already exist on the market, they focus primarily on capturing raw motion data, matched to a coarse model of the human body. Moreover, the recorded data is organised as a single cinematic sequence, with little or no reference to the underlying task activity or workflow patterns exhibited by the human subject.
The result is data that is difficult to use in all but the most straightforward of applications, requiring extensive editing and user manipulation, especially when cognitive understanding of human action is a key concern, such as in virtual manuals or training simulators.
The aim of the COGNITO project is to address these issues by advancing both the scope and the capability of human activity capture, recognition and rendering. Specifically, we propose to develop novel techniques that will allow cognitive workflow patterns to be analysed, learnt, recorded and subsequently rendered in a user-adaptive manner.
Our concern will be to map and closely couple both the afferent and efferent channels of the human subject, enabling activity data to be linked directly to workflow patterns and task completion. We will focus particularly on tasks involving the manual handling of objects and tools, given their importance in many industrial applications.
The key objectives of the project are to develop a novel on-body sensor network consisting of miniature inertial and vision sensors, to estimate an osteo-articular model of the human body, to recover the workflow digitally, and to develop novel rendering mechanisms for effective, user-adaptive visualisation. The work will be done within the context of designing effective user assistance systems based on Augmented Reality techniques for specialised industrial manufacture, and will be carried out in close collaboration with industrial and end-user partners.
Monitoring the workflow of a human activity will produce detailed information about both its semantics and its temporal aspects. This flow will be represented abstractly as "templates of action." With this information, it will be possible to automatically generate different Augmented Reality (AR) and Virtual Reality (VR) representations that support the end-user's interaction with the system, in the form of an interactive 3D guide or assisted training manual.
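To make the idea of a "template of action" more concrete, one could picture it as a small data structure that links an ordered sequence of workflow steps to the information needed to guide a user. The Python sketch below is purely illustrative: the project had not published a concrete representation at the time of writing, and all names and fields here are hypothetical assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionStep:
    """One atomic step in a workflow, e.g. 'pick up screwdriver' (hypothetical)."""
    name: str
    expected_duration_s: float          # nominal duration, e.g. from recorded demonstrations
    tools: List[str] = field(default_factory=list)

@dataclass
class ActionTemplate:
    """A 'template of action': an ordered abstraction of a recorded workflow."""
    task: str
    steps: List[ActionStep]

    def next_step(self, completed: int) -> ActionStep:
        """Return the step to present next (e.g. in AR goggles) after `completed` steps."""
        return self.steps[min(completed, len(self.steps) - 1)]

# Hypothetical example: a two-step assembly task
template = ActionTemplate(
    task="mount cover plate",
    steps=[
        ActionStep("position plate", 4.0),
        ActionStep("tighten screws", 10.0, tools=["screwdriver"]),
    ],
)
print(template.next_step(1).name)  # → tighten screws
```

A structure along these lines would let a guidance system both track progress (which step is complete) and drive the AR/VR rendering (what to show next), which is the coupling the paragraph above describes.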
This means, for example, that a factory worker might be trained to assemble the different parts of a machine with the support of the COGNITO system. The worker will be both monitored and guided through the steps needed to accomplish the whole task, with the COGNITO system showing, through AR goggles worn by the worker, what should be done at each step.
This is the main objective of CCG's participation in the project: to provide the means to automatically compose the VR and AR views, based on the workflow processed and previously recorded in the "templates of action." Another component to be provided by CCG is an editor of "templates of action" that makes it easy to correct or adjust previously recorded templates, or to replicate them. The COGNITO project will be implemented in close collaboration with industry partners and potential end-users.