2008
WeKnowIt: Emerging, Collective Intelligence for personal, organisational and social use

April 2008 (36 months)

The WeKnowIt project aims to extract knowledge at different levels (such as trend detection in mass intelligence) to support specific tasks, like decision making in emergencies or recommendations in its commercial application. It has its own social platform and network, with its own users and content. Among its main goals is to change the way information is shared among masses of people, offering an integrated approach ready to cover a variety of aspects of human life, both personal (e.g. entertainment and vacation time) and community related (e.g. handling of emergencies).
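
As a rough illustration of the trend-detection idea mentioned above, the sketch below flags tags whose recent frequency in user-contributed content spikes against their historical baseline. It is a minimal, hypothetical example: the window sizes, thresholds and data layout are invented, not WeKnowIt's actual pipeline.

```python
# Minimal sketch of trend detection over timestamped (tag, time) events;
# all parameters are illustrative, not from the WeKnowIt platform.
from collections import Counter

def trending_tags(events, t_now, window=3600.0, history=86400.0, ratio=3.0):
    """Flag tags whose count in the last `window` seconds exceeds `ratio`
    times their average per-window rate over the preceding `history`."""
    recent = Counter(tag for tag, t in events if t_now - window <= t <= t_now)
    past = Counter(tag for tag, t in events if t_now - history <= t < t_now - window)
    n_windows = (history - window) / window          # number of past windows
    trends = {}
    for tag, count in recent.items():
        baseline = past.get(tag, 0) / n_windows or 0.5   # smooth empty baselines
        if count / baseline >= ratio:
            trends[tag] = count / baseline
    return trends

# Example: a burst of "flood" reports in the last hour stands out.
events = [("flood", 100000.0 + i) for i in range(30)] + \
         [("concert", 50000.0 + 2000 * i) for i in range(10)]
print(trending_tags(events, t_now=100030.0))         # {'flood': 60.0}
```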


Funded under: FP7-ICT

REMINE: High performances prediction, detection and monitoring platform for patient safety risk management

February 2008 (36 months)

According to recent studies, Risks Against Patient Safety (RAPS) represent one of the most important causes of death in hospitals: during therapy, more than 8% of patients admitted to hospitals suffer from an additional disease that in almost 50% of cases leads either to death or to significant additional health problems. RAPS occur at every stage of the patient care process. The REMINE project idea originates from the common difficulty of conducting analysis, early identification and effective prevention of RAPS when significant masses of inhomogeneous data sources are stored in multimedia databases, in distributed environments with different care professionals simultaneously involved. To counter RAPS trends and the spread of malpractice, REMINE pursues a number of main objectives: a new technological platform and new organisational requirements for the care process. Its main elements are: mining of multimedia data; modelling, prediction and detection of RAPS; a RAPS management support system; and an info-broker patient safety framework. The main outcomes of REMINE will be: reduced time for collecting data, reduced time for RAPS analysis, standardisation of a common language, evolution of the interaction model, a reference framework, improved patient safety, and health care cost savings (with an estimated RAPS reduction of between 6% and 9%).
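
To make the detection idea concrete, here is a deliberately simplified, hypothetical sketch of rule-based RAPS scoring over a heterogeneous patient record; every feature name, rule and threshold is invented for illustration and is not drawn from the REMINE platform.

```python
# Hypothetical toy sketch: combine signals mined from different data
# sources into a single patient-safety risk score in [0, 1].
def raps_risk_score(record):
    score = 0.0
    if record.get("prescribed_drug") in record.get("allergy_list", []):
        score += 0.6  # drug-allergy conflict found in structured data
    if record.get("abnormal_lab_count", 0) >= 3:
        score += 0.2  # several out-of-range laboratory values
    if record.get("handovers_last_24h", 0) >= 2:
        score += 0.2  # frequent care handovers raise communication risk
    return min(score, 1.0)

patient = {
    "allergy_list": ["penicillin"],
    "prescribed_drug": "penicillin",
    "abnormal_lab_count": 1,
    "handovers_last_24h": 3,
}
print(raps_risk_score(patient))  # 0.8 -> alert the care team
```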


Funded under: FP7-ICT

METABO: Controlling Chronic Diseases related to Metabolic Disorders

January 2008 (52 months)

The aim of METABO is to set up a comprehensive platform, running both in clinical settings and in everyday-life environments, for continuous and multi-parametric monitoring of the metabolic status of patients with, or at risk of, diabetes and associated metabolic disorders. The parameters that will be monitored, in addition to traditional clinical and biomedical parameters, include subcutaneous glucose concentration, dietary habits, physical activity and energy expenditure, effects of ongoing treatments, and autonomic reactions. The data produced by METABO will be integrated with the clinical data and history of the patient and used in two major interrelated contexts of care: 1) setting up a dynamic model of the metabolic behaviour of the individual, to predict the influence and relative impact of specific treatments and of single parameters on glucose level; 2) building personalised care plans, integrated into the current clinical processes, linking the different actors in primary and secondary care and strengthening the active role of the patient.
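
As a sketch of what a dynamic model of individual metabolic behaviour can look like, the following uses the classic Bergman minimal model as a stand-in; METABO's actual models may differ entirely, and the parameter values below are illustrative, not clinically validated.

```python
# Bergman minimal model, Euler-integrated; purely illustrative parameters.
def simulate_glucose(meal_rate, insulin, hours=8.0, dt=1/60):
    p1, p2, p3 = 0.03, 0.02, 1e-5   # glucose effectiveness / insulin action
    Gb, Ib = 90.0, 10.0             # basal glucose (mg/dL) and insulin (uU/mL)
    G, X, t = Gb, 0.0, 0.0          # X: remote insulin effect; t in minutes
    trace = []
    while t < hours * 60:
        D = meal_rate(t)            # exogenous glucose appearance from meals
        I = insulin(t)              # plasma insulin level
        dG = -p1 * (G - Gb) - X * G + D
        dX = -p2 * X + p3 * (I - Ib)
        G, X, t = G + dG * dt, X + dX * dt, t + dt
        trace.append((t, G))
    return trace

# Example: a meal at t = 60 min with constant basal insulin.
meal = lambda t: 2.0 if 60 <= t <= 90 else 0.0
basal = lambda t: 10.0
peak = max(g for _, g in simulate_glucose(meal, basal))
print(f"predicted peak glucose: {peak:.0f} mg/dL")
```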

Funded under: FP7-ICT

2006
FEELIX GROWING: FEEL, Interact, eXpress: a Global appRoach to develOpment With INterdisciplinary Grounding

December 2006 (40 months)

The overall goal of this project is the interdisciplinary investigation of socially situated development from an integrated or “global” perspective, as a key paradigm towards achieving robots that interact with humans in their everyday environments in a rich, flexible, autonomous and user-centred way. To achieve this general goal, the project sets the following specific objectives: 1) identification of scenarios presenting key issues and typologies of problems in the investigation of global socially situated development of autonomous (biological and robotic) agents; 2) investigation of the roles of emotion, interaction, expression and their interplay in bootstrapping and driving socially situated development, including the implementation of robotic systems that improve on existing work in each of those aspects, and their testing in the identified key scenarios; 3) integration of (a) the above capabilities in at least two different robotic systems and (b) feedback across the disciplines involved; 4) identification of needs and key steps towards achieving standards in (a) the design of scenarios and problem typologies, (b) evaluation metrics, and (c) the design of robotic platforms and related technology that can realistically be integrated into people’s everyday life. FEELIX GROWING takes a highly interdisciplinary approach that combines theories, methods and technology from developmental and comparative psychology, neuroimaging, ethology, and autonomous and developmental robotics to investigate how socially situated development can be brought to robots that “grow up” and adapt to humans in everyday environments.

Funded under: FP6-IST

CALLAS: Conveying Affectiveness in Leading-edge Living Adaptive Systems

November 2006 (42 months)

CALLAS will investigate key aspects of Multimodal Affective Interfaces in the specific area of Art and Entertainment applications. As an integrated project, CALLAS will address the following high-level objectives: 1) to advance the state of the art in Multimodal Affective Interfaces by i) developing new emotional models able to take into account a comprehensive user experience in Digital Arts and Entertainment applications and ii) developing new modality-processing techniques to capture (and elicit) these new emotional categories; 2) to research, develop and integrate advanced software components, tailored to the processing of individual modalities and supporting the semantic recognition of emotions, making them available through a “living” repository called the CALLAS “shelf”; 3) to develop a software methodology for the development and engineering of Multimodal Interfaces that will make their development accessible to a larger community, i.e. assembling a Multimodal Interface from individual components will no longer require a deep understanding of theories of Multimodality. The effectiveness of the CALLAS approach in pursuing these objectives will be validated by developing significant research prototypes (or Showcases) in three major fields of Digital Arts and Entertainment: Augmented Reality for Art, Entertainment and Digital Theatre; Interactive Installations for Public Spaces; and Next-Generation Interactive Television. CALLAS also aims to ensure the sustainability and replicability of its technology results. This will be addressed mainly by supporting Technology Transfer, in particular towards SMEs operating in the new media sector, whether these SMEs are involved in Digital Arts and Entertainment or are innovative technology spin-offs.
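
A minimal sketch of the “shelf” idea follows: modality-processing components share one interface, so assembling a multimodal pipeline needs no knowledge of multimodality theory. The component classes and the naive fusion rule are invented examples, not actual CALLAS components.

```python
# Hypothetical "shelf" of affective components behind a common interface.
class AffectiveComponent:
    modality = "abstract"
    def analyse(self, signal) -> dict:
        raise NotImplementedError

class VoiceArousal(AffectiveComponent):
    modality = "audio"
    def analyse(self, signal):
        loudness = sum(abs(s) for s in signal) / len(signal)
        return {"arousal": min(loudness / 10.0, 1.0)}

class GestureEnergy(AffectiveComponent):
    modality = "video"
    def analyse(self, signal):
        movement = sum(signal) / len(signal)
        return {"arousal": min(movement, 1.0)}

def fuse(components, signals):
    """Average per-modality estimates: assembly needs no multimodality theory."""
    estimates = [c.analyse(signals[c.modality])
                 for c in components if c.modality in signals]
    keys = {k for e in estimates for k in e}
    return {k: sum(e.get(k, 0.0) for e in estimates) / len(estimates) for k in keys}

shelf = [VoiceArousal(), GestureEnergy()]
print(fuse(shelf, {"audio": [3.0, -4.0, 5.0], "video": [0.2, 0.4]}))
```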

Funded under: FP6-IST

AGENT-DYSL: Accommodative Intelligent Educational Environments for Dyslexic learners

September 2006 (36 months)

The overall objective of the AGENT-DYSL project is to contribute to narrowing the gap between good and poor (due to dyslexia) readers. AGENT-DYSL's main target group is school-aged children. AGENT-DYSL's approach to achieving this objective is to develop a "next generation assistive reading system" and to incorporate it into learning environments. The project addresses a main target of the eInclusion strategic objective: the development of next-generation assistive systems that empower persons with (in particular cognitive) disabilities to play a full role in society, to increase their autonomy and to realise their potential. To contribute effectively to this goal, the project focuses on the development of novel technologies and tools for supporting children with dyslexia in reading. One of the main objectives of AGENT-DYSL is the incorporation of these prominent features into assistive reading software, moving to the next generation of assistive software. The features include automated user modelling; age-appropriate and dyslexia-sensitive user interfaces; automatic monitoring of user progress; automatic tracking of the user’s psychological and emotional state; knowledge-assisted reasoning and evaluation of information; and personalised user interfaces that adapt to the individual requirements of each dyslexic learner. Moreover, the project appreciates the role of accommodative educational environments in obtaining the best results for inclusion purposes; in particular, it recognises that learners’ diversity is a strength for collaborative training environments (e.g. schools, education centres, workplaces) and that heterogeneous communities (groupings) have a built-in dynamic that can bring about development in learners with widely different potentials and competence profiles. In this framework, AGENT-DYSL also focuses on accommodative education environments for dyslexic learners, interweaving the above-mentioned technologies for evaluation of both the individual dyslexic learner and the context of the learning environment with a pedagogical perspective, and testing in three real environments in the United Kingdom, Denmark and Greece.
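
As a toy illustration of automated user modelling and adaptive presentation, the sketch below tracks per-word error rates and adjusts how a word is shown to the learner; the adaptation rules and thresholds are hypothetical, not AGENT-DYSL's.

```python
# Hypothetical reader model: per-word error rates drive presentation choices.
from collections import defaultdict

class ReaderModel:
    def __init__(self):
        self.attempts = defaultdict(int)
        self.errors = defaultdict(int)

    def record(self, word, correct):
        self.attempts[word] += 1
        self.errors[word] += 0 if correct else 1

    def difficulty(self, word):
        if self.attempts[word] == 0:
            return 0.5                    # unseen word: assume medium difficulty
        return self.errors[word] / self.attempts[word]

    def present(self, word):
        d = self.difficulty(word)
        if d > 0.6:
            return {"word": word, "syllabified": True, "highlight": True}
        if d > 0.3:
            return {"word": word, "syllabified": False, "highlight": True}
        return {"word": word, "syllabified": False, "highlight": False}

model = ReaderModel()
for ok in (False, False, True):
    model.record("through", ok)
print(model.present("through"))  # hard word -> syllabify and highlight
```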


Funded under: FP6-IST

Dianoema: Visual analysis and Gesture recognition for Sign Language modeling and robot tele-operation

June 2006 (18 months)

The project aims to develop innovative image analysis and computational algorithms for effectively detecting and tracking gestures in video sequences; to collect a parallel corpus of Greek Sign Language (GSL) and provide annotation and linguistic modelling of representative GSL phenomena; and to train the algorithms on the corpus and integrate them into a pilot system for robot remote control using a subset of simple gestures.
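
A minimal sketch of the detect-and-track step, using plain OpenCV (version 4 assumed) skin-colour segmentation; the HSV thresholds are rough illustrative values, and the project's actual algorithms are considerably more sophisticated.

```python
# Toy hand detection by skin-colour segmentation; thresholds are illustrative.
import cv2
import numpy as np

def track_hand(frame):
    """Return the bounding box of the largest skin-coloured region, if any."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(hand)  # (x, y, w, h) of the tracked hand

cap = cv2.VideoCapture(0)          # webcam as a stand-in video sequence
ok, frame = cap.read()
if ok:
    print(track_hand(frame))
cap.release()
```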


Funded under: GSRT EHG

IMAGINATION: Image-based Navigation in Multimedia Archives

May 2006 (36 months)

The main objective of IMAGINATION is to bring digital cultural and scientific resources closer to their users by making user interaction image-based and context-aware. The ultimate aim is to enable users to navigate through digital cultural and scientific resources through their images. IMAGINATION will provide a novel image-based access method to digital cultural and scientific resources, reducing complexity through the provision of an intuitive navigation method. It will facilitate an interactive and creative experience, providing intuitive navigation through images and parts of images. To do so, IMAGINATION will combine, apply and improve existing techniques to provide a new way of navigating cultural heritage multimedia archives. It will exploit the context of resources stored in its knowledge space by combining text-mining, image segmentation and image recognition algorithms. This combination will produce a synergy effect and result in semi-automatically generated, high-level semantic metadata. IMAGINATION's focus is on indexing, retrieving and exploring non-textual complex objects, and it will apply knowledge technologies and visualisation techniques for improved navigation of, and access to, multimedia collections. Comprehensive tool support (including an ontology editor and a semi-automated image annotation tool) will be provided, together with an easy-to-use web-based interface which visualises the contextualised content stored in the IMAGINATION knowledge space. A major outcome of the project will be the new and intuitive approach of navigation through images, together with a set of technologies and tools to support the annotation of images by manual, semi-automatic and automatic techniques.
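
To illustrate the synergy between visual features and textual metadata, the hypothetical sketch below propagates keywords to an unlabelled image from its most visually similar annotated neighbour, using a toy grey-level histogram as the "image recognition" step; IMAGINATION's actual algorithms are far richer.

```python
# Toy semi-automatic annotation: borrow keywords from the most similar image.
from collections import Counter

def histogram(pixels, bins=8):
    h = Counter(min(p * bins // 256, bins - 1) for p in pixels)
    return [h.get(b, 0) / len(pixels) for b in range(bins)]

def similarity(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))  # histogram intersection

def suggest_metadata(new_pixels, annotated):
    """annotated: list of (pixels, keywords) pairs from the knowledge space."""
    h_new = histogram(new_pixels)
    best = max(annotated, key=lambda item: similarity(h_new, histogram(item[0])))
    return best[1]  # keywords proposed for human confirmation

fresco = ([30, 40, 35, 50] * 100, ["fresco", "byzantine"])
statue = ([200, 220, 210, 230] * 100, ["statue", "marble"])
print(suggest_metadata([32, 45, 38, 52] * 100, [fresco, statue]))
```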

Funded under: FP6-IST

MESH: Multimedia Semantic Syndication for Enhanced News Services

March 2006 (36 months)

Multimedia Semantic Syndication for Enhanced News Services (MESH) will apply multimedia analysis and reasoning tools, network agents and content management techniques to extract, compare and combine meaning from multiple multimedia sources and produce advanced personalised multimedia summaries, deeply linked among themselves and to the original sources, providing end users with an easy-to-use “multimedia mesh” concept with enhanced navigation aids. A step further will empower users with the means to reuse available content by offering media enrichment and semantic mixing of both personal and network content, as well as automatic creation from semantic descriptions. Encompassing the whole system, dynamic usage management will be included to facilitate agreement between the players in the content chain (content providers, service providers and users). In a sentence, the project will create multimedia content brokers acting on behalf of users to acquire, process, create and present multimedia information, personalised (to the user) and adapted (to the usage environment). These functions will be fully demonstrated in the application area of news, through the creation of a platform that will unify news organisations through the online retrieval, editing, authoring and publishing of news items.
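
As a rough illustration of the content-broker idea, the sketch below ranks incoming news items against a user profile and emits a personalised summary whose entries stay linked to their original sources; the keyword-overlap scoring is a stand-in, not MESH's semantic machinery, and all names and URLs are invented.

```python
# Toy content broker: rank items by profile overlap, keep source links.
def score(item, profile):
    return len(set(item["keywords"]) & set(profile["interests"]))

def personalised_summary(items, profile, k=2):
    ranked = sorted(items, key=lambda it: score(it, profile), reverse=True)
    return [{"headline": it["headline"], "source": it["source"]}
            for it in ranked[:k] if score(it, profile) > 0]

items = [
    {"headline": "Flood warnings issued", "keywords": ["weather", "flood"],
     "source": "http://example.org/news/1"},
    {"headline": "Stock markets rally", "keywords": ["finance"],
     "source": "http://example.org/news/2"},
]
user = {"interests": ["weather", "travel"]}
print(personalised_summary(items, user))
```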


Funded under: FP6-IST

BOEMIE: Bootstrapping Ontology Evolution with Multimedia Information Extraction

March 2006 (36 months)

The main measurable objective of the project is to significantly improve the performance of existing single-modality approaches in terms of scalability and precision. Towards that goal, BOEMIE will deliver a new methodology for extraction and evolution, using a rich multimedia semantic model and realised as an open architecture. The architecture will be coupled with an appropriate set of tools implementing the advanced methods developed in BOEMIE. Furthermore, BOEMIE aims to initiate a new research activity on the automation of knowledge acquisition from multimedia content through ontology evolution. BOEMIE will pave the way towards automating the process of knowledge acquisition from multimedia content by introducing the notion of evolving multimedia ontologies, which will be used for the extraction of information from multimedia content in networked sources, both public and proprietary. BOEMIE advocates a synergistic approach that combines multimedia extraction and ontology evolution in a bootstrapping process involving, on the one hand, the continuous extraction of semantic information from multimedia content in order to populate and enrich the ontologies and, on the other hand, the deployment of these ontologies to enhance the robustness of the extraction system. The ambitious scope of the BOEMIE project and the proven specialised competence of the carefully composed project consortium ensure that the project will achieve the significant advancement of the state of the art needed to successfully merge the component technologies.
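
The bootstrapping loop can be sketched schematically: extraction with the current ontology yields both known-concept instances and unknown candidate terms, and the ontology is enriched with the candidates before the next pass. Everything below (the term heuristic, the data, text instead of multimedia) is an invented toy, not BOEMIE's methodology.

```python
# Schematic bootstrap: extract with the ontology, then evolve the ontology.
def extract(documents, ontology):
    """Return occurrences of known concepts and unknown candidate terms."""
    instances, candidates = [], set()
    for doc in documents:
        for token in doc.lower().split():
            if token in ontology:
                instances.append(token)
            elif len(token) > 7:          # toy heuristic for "interesting" terms
                candidates.add(token)
    return instances, candidates

def bootstrap(documents, ontology, rounds=3):
    instances = []
    for _ in range(rounds):
        instances, candidates = extract(documents, ontology)
        ontology |= candidates            # evolve: enrich with new concepts
    return ontology, instances

docs = ["the marathon winner crossed the finish line",
        "pole vault results and marathon highlights"]
print(bootstrap(docs, {"marathon"}))
```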


Funded under: FP6-IST