2006
AGENT-DYSL: Accommodative Intelligent Educational Environments for Dyslexic learners

September 2006 (36 months)

The overall objective of the AGENT-DYSL project is to contribute to narrowing the gap between good readers and poor readers whose difficulties are due to dyslexia. AGENT-DYSL's main target group is school-aged children, and its approach towards obtaining the objective is to develop a "next generation assistive reading system" and to incorporate it into learning environments. The project addresses a main target of the eInclusion Strategic Objective, which is the development of next-generation assistive systems that empower persons with (in particular cognitive) disabilities to play a full role in society, to increase their autonomy and to realise their potential. To contribute effectively to this goal, the project focuses on the development of novel technologies and tools for supporting children with dyslexia in reading. One of the main objectives of AGENT-DYSL is the incorporation of prominent features into assistive reading software, and the move to the next generation of assistive software. These features include automated user modelling; age-appropriate and dyslexia-sensitive user interfaces; automatic monitoring of user progress; automatic tracking of the user's psychological and emotional state; knowledge-assisted reasoning and evaluation of information; and personalised user interfaces that adapt to the individual requirements of each dyslexic learner. Moreover, the project appreciates the role of accommodative educational environments in obtaining the best results for inclusion purposes; in particular, it recognises that learners' diversity is a strength for collaborative training environments (e.g., schools, education centres, workplaces) and that heterogeneous communities (groupings) have a built-in dynamic that can bring about development in learners with widely different potentials and competence profiles.
In this framework, AGENT-DYSL also focuses on accommodative educational environments for dyslexic learners, interweaving the above-mentioned technologies for evaluating both the individual dyslexic learner and the context of the learning environment with a pedagogical perspective, and testing them in three real environments in the United Kingdom, Denmark and Greece.


Funded under: FP6-IST

Dianoema: Visual analysis and Gesture recognition for Sign Language modeling and robot tele-operation

June 2006 (18 months)

The project aims to develop innovative image analysis and computational algorithms for the effective detection and tracking of gestures in video sequences, to collect a parallel corpus of Greek Sign Language (GSL) and provide annotation and linguistic modelling of representative GSL phenomena, and to train the algorithms on the corpus and integrate them into a pilot robot tele-operation system driven by a subset of simple gestures.


Funded under: GSRT EHG


IMAGINATION: Image-based Navigation in Multimedia Archives

May 2006 (36 months)

The main objective of IMAGINATION is to bring digital cultural and scientific resources closer to their users by making user interaction image-based and context-aware. Our ultimate aim is to enable users to navigate through digital cultural and scientific resources through their images. IMAGINATION will provide a novel image-based access method to digital cultural and scientific resources and will reduce complexity through the provision of an intuitive navigation method. IMAGINATION will facilitate an interactive and creative experience, providing intuitive navigation through images and parts of images. To do so, IMAGINATION will combine, apply and improve existing techniques to provide a new way of navigating cultural heritage multimedia archives. It will exploit the context of resources stored in its knowledge space by combining text-mining, image segmentation and image recognition algorithms. This combination will produce a synergy effect and will result in semi-automatically generated, high-level semantic metadata. IMAGINATION's focus is on indexing, retrieving and exploring non-textual complex objects, and it will apply knowledge technologies and visualisation techniques for improved navigation and access to multimedia collections. Comprehensive tool support (including an ontology editor and a semi-automated image annotation tool) will be provided, together with an easy-to-use web-based interface which visualises the contextualised content stored in the IMAGINATION knowledge space. A major outcome of the project will be the new and intuitive approach of navigating through images, together with a set of technologies and tools to support the annotation of images by manual, semi-automatic and automatic techniques.

Funded under: FP6-IST

BOEMIE: Bootstrapping Ontology Evolution with Multimedia Information Extraction

March 2006 (36 months)

The main measurable objective of the project is to significantly improve the performance of existing single-modality approaches in terms of scalability and precision. Towards that goal, BOEMIE will deliver a new methodology for extraction and evolution, using a rich multimedia semantic model, realised as an open architecture. The architecture will be coupled with an appropriate set of tools implementing the advanced methods developed in BOEMIE. Furthermore, BOEMIE aims to initiate a new research activity on the automation of knowledge acquisition from multimedia content, through ontology evolution. BOEMIE will pave the way towards automating the process of knowledge acquisition from multimedia content by introducing the notion of evolving multimedia ontologies, which will be used for the extraction of information from multimedia content in networked sources, both public and proprietary. BOEMIE advocates a synergistic approach that combines multimedia extraction and ontology evolution in a bootstrapping process involving, on the one hand, the continuous extraction of semantic information from multimedia content in order to populate and enrich the ontologies and, on the other hand, the deployment of these ontologies to enhance the robustness of the extraction system. The ambitious scope of the BOEMIE project and the proven specialised competence of the carefully composed project consortium ensure that the project will achieve the significant advancement of the state of the art needed to successfully merge the component technologies.
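The bootstrapping process described above alternates two steps: the current ontology guides extraction from content, and the extracted concepts in turn enrich the ontology. The following is a minimal, purely illustrative sketch of such a loop; the function name, the set-based "ontology" and the toy co-occurrence rule for admitting new concepts are assumptions made for this example, not BOEMIE's actual method.

```python
def bootstrap(media_items, ontology, rounds=3):
    """Toy bootstrapping loop: alternate extraction (guided by the current
    ontology) with ontology enrichment, for a fixed number of rounds.

    media_items: list of sets of concept labels observed in each media item
    ontology:    set of concept labels currently known
    """
    ontology = set(ontology)
    for _ in range(rounds):
        extracted = set()
        for item in media_items:
            # Extraction step: recognise concepts the ontology already knows.
            known = {c for c in item if c in ontology}
            # Toy evolution rule (an assumption for this sketch): a novel
            # concept is admitted only if it co-occurs with a known one.
            novel = {c for c in item if c not in ontology and known}
            extracted |= novel
        # Enrichment step: populate the ontology with the extracted concepts,
        # so the next round of extraction is more robust.
        ontology |= extracted
    return ontology
```

Starting from an ontology containing only "pole-vault", an item {"pole-vault", "athlete"} lets "athlete" be admitted in round 1, which in turn lets a second item {"athlete", "stadium"} contribute "stadium" in round 2, illustrating how extraction and evolution feed each other.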


Funded under: FP6-IST

X-Media: Knowledge Sharing and Reuse across Media

March 2006 (48 months)

X-Media addresses the issue of knowledge management in complex distributed environments. It will study, develop and implement large-scale methodologies and techniques for knowledge management able to support the sharing and reuse of knowledge that is distributed across different media (images, documents and data) and repositories (databases, knowledge bases, document repositories, etc.) or that is inaccessible to current systems, which cannot capture the knowledge implicit across media. All the developed methodologies aim to integrate seamlessly with current work practices. Usability will be a major concern, together with ease of customisation for new applications. The technologies will support knowledge workers in an effective way, (i) hiding the complexity of the underlying search/retrieval process, (ii) providing natural access to knowledge, (iii) allowing interoperability between heterogeneous information resources and (iv) handling heterogeneity of data types (data, images, texts). The expected impact on organisations is to dramatically improve access to, sharing of and use of information by humans as well as by and between machines. Expected benefits are a dramatic reduction in management costs and the increased feasibility of complex knowledge management tasks.


Funded under: FP6-IST

MESH: Multimedia Semantic Syndication for Enhanced News Services

March 2006 (36 months)

Multimedia Semantic Syndication for Enhanced News Services (MESH) will apply multimedia analysis and reasoning tools, network agents and content management techniques to extract, compare and combine meaning from multiple multimedia sources, and to produce advanced personalised multimedia summaries, deeply linked among themselves and to the original sources, providing end users with an easy-to-use "multimedia mesh" concept with enhanced navigation aids. A step further, the project will empower users with the means to reuse available content by offering media enrichment and semantic mixing of both personal and network content, as well as automatic creation from semantic descriptions. Encompassing the whole system, dynamic usage management will be included to facilitate agreement between content-chain players (content providers, service providers and users). In a sentence, the project will create multimedia content brokers acting on behalf of users to acquire, process, create and present multimedia information that is personalised (to the user) and adapted (to the usage environment). These functions will be fully exhibited in the application area of news, through the creation of a platform that will unify news organisations through the online retrieval, editing, authoring and publishing of news items.


Funded under: FP6-IST

YSTERA: Analysis and Semantics of 3D Human motion for HCI and Animation of Virtual Characters

January 2006 (36 months)

This project aims at the theoretical examination and the experimental justification of a model which collects, semantically interprets and uses audiovisual information collected from humans. The goals are (a) supporting non-verbal human-machine interaction and (b) reconstructing the motion and "behaviour" of the human subjects within virtual environments.

Funded by the Greek General Secretariat for Research and Technology (PENED 2003)

K-Space: Knowledge Space of semantic inference for automatic annotation and retrieval of multimedia content

January 2006 (36 months)

K-Space is a network of leading research teams from academia and industry conducting integrative research and dissemination activities in semantic inference for automatic and semi-automatic annotation and retrieval of multimedia content. K-Space exploits the complementary expertise of the project partners, enables resource optimisation and fosters innovative research in the field. The aim of K-Space research is to narrow the gap between low-level content descriptions that can be computed automatically by a machine and the richness and subjectivity of semantics in high-level human interpretations of audiovisual media: the Semantic Gap. Specifically, K-Space integrative research focuses on three areas: 
- Content-based multimedia analysis: Tools and methodologies for low-level signal processing, object segmentation, audio/speech processing and text analysis, and audiovisual content structuring and description. 
- Knowledge extraction: Building of a multimedia ontology infrastructure, knowledge acquisition from multimedia content, knowledge-assisted multimedia analysis, context based multimedia mining and intelligent exploitation of user relevance feedback. 
- Semantic multimedia: knowledge representation for multimedia, distributed semantic management of multimedia data, semantics-based interaction with multimedia and multimodal media analysis. An objective of the Network is to implement an open and expandable framework for collaborative research based on a common reference system. Specific dissemination objectives of K-Space include: 
- To disseminate the technical developments of the network across the broad research community 
- To boost technology transfer to industry and contribute to related standardisation activities.


Funded under: FP6-IST

2004
Ask-IT: Ambient Intelligence System of Agents for Knowledge-based and Integrated Services for Mobility Impaired Users

October 2004 (48 months)

The ASK-IT integrated project aims to establish Ambient Intelligence (AmI) in semantic-web-enabled services, to support and promote the mobility of Mobility Impaired (MI) people, enabling the provision of personalised, self-configurable, intuitive and context-related applications and services and facilitating knowledge and content organisation and processing. MI people have a wide variety of functional limitations, from different types of physical impairments to activity limitations. ICT systems following "design for all" and adequate content are required, so as to take advantage of both internet- and mobile-based services. Within the project, MI-related infomobility content is collected, interfaced and managed in SP1 (Content for All), encompassing content related to transport, tourism and leisure, personal support services, work, business and education, social relations and community building. To offer the content, a number of advanced tools are developed within SP2 (Tools for All), such as enhanced-accuracy localisation, accessible intermodal route guidance modules, and interfaces to eCommerce/ePayment, domotics, health and emergency management, driver support, computer accessibility, eWorking and eLearning systems and assistive devices. Content and tools are integrated within an Ambient Intelligence Framework (SP3), by a Multi-Agent System of Intelligent Agents and a self-configurable User Interface, which offer service personalisation according to user profile, habits, preferences and context of use. 
This framework is interoperable in terms of the mobile devices and local and wide area networks used, trusted, and based on intuitive web semantics, thus offering seamless and device-independent service everywhere. The integrated ASK-IT service and system will be tested in 7 interconnected sites Europe-wide in SP4 (Accessible Europe), to prove that full travel accessibility for MI users can be achieved in a reliable and viable way.


Funded under: FP6-IST

MUSCLE: Multimedia Understanding through Semantics, Computation and Learning

March 2004 (48 months)

MUSCLE aims at creating and supporting a pan-European Network of Excellence to foster close collaboration between research groups in multimedia data mining on the one hand and machine learning on the other, in order to make breakthrough progress towards the following objectives: (i) harnessing the full potential of machine learning and cross-modal interaction for the (semi-)automatic generation of metadata with high semantic content for multimedia documents; (ii) applying machine learning to the creation of expressive, context-aware, self-learning and human-centred interfaces that will be able to effectively assist users in the exploration of complex and rich multimedia content; (iii) improving the interoperability and exchangeability of heterogeneous and distributed (meta)data by enabling data descriptions of high semantic content (e.g. ontologies, MPEG-7 and XML schemata) and inference schemes that can reason about these at the appropriate levels; (iv) through dissemination, training and industrial liaison, contributing to the distribution and uptake of the technology by relevant end-users such as industry, education and the service sector. Due to the convergence of several strands of scientific and technological progress, we are witnessing the emergence of unprecedented opportunities for the creation of a knowledge-driven society. Indeed, databases are accruing large amounts of complex multimedia documents, networks allow fast and almost ubiquitous access to an abundance of resources, and processors have the computational power to perform sophisticated and demanding algorithms. However, progress is hampered by the sheer amount and diversity of the available data. As a consequence, access can only be efficient if based directly on content and semantics, the extraction and indexing of which is only feasible if achieved automatically. 
Given the above, we feel that there is both a need and an opportunity to systematically incorporate machine learning into an integrated approach to multimedia data mining. Indeed, enriching multimedia databases with additional layers of automatically generated semantic metadata as well as with artificial intelligence to reason about these (meta)data, is the only conceivable way that we will be able to mine for complex content, and it is at this level that MUSCLE will focus its main effort. Realising this vision will require breakthrough progress to alleviate a number of key bottlenecks along the path from data to understanding.


Funded under: FP6-IST