Projects
This page contains a summary of projects related to the community group.
EUROSENTIMENT
The main concept of the EUROSENTIMENT project is to provide a shared language resource pool for fostering sentiment analysis. The specific objectives of the project are:
- Provide, for the first time, semantic interoperability and connectivity between several multilingual sentiment analysis resources available online. Semantic interoperability is based on domain ontologies linked to a domain-labelled WordNet and compatible with the existing Linked Data initiative and with EmotionML; a minimal sketch of an EmotionML annotation follows below.
- Reduce the cost of adding new language resources to the shared resource pool by providing best-practice guidelines and QA procedures based on a publicly available multilingual sentiment analysis corpus covering two different domains.
- Provide a self-sustainable and profitable framework for language resource sharing based on a community governance model, which offers contributors unwilling to grant free access the possibility of commercially exploiting the resources they provide.
- Demonstrate the impact of the developed pool through a publicly accessible multilingual demonstrator in the media domain, which will show how the different services can deliver high-quality results by working with specialised language resources, semantically integrate their results, and exploit these multilingual results through a semantic front-end.
More information: [1]
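Since interoperability in the pool is tied to EmotionML compatibility, the following minimal sketch shows what a machine-produced EmotionML annotation can look like, built here with Python's standard library. The element names and the big6 vocabulary URI follow the W3C EmotionML 1.0 specification; the annotated emotion itself is an invented example, not one of the project's resources.

```python
# Build a minimal EmotionML 1.0 document (illustrative values only).
import xml.etree.ElementTree as ET

NS = "http://www.w3.org/2009/10/emotionml"
ET.register_namespace("", NS)  # serialise with EmotionML as default namespace

root = ET.Element(f"{{{NS}}}emotionml")
emotion = ET.SubElement(
    root, f"{{{NS}}}emotion",
    {"category-set": "http://www.w3.org/TR/emotion-voc/xml#big6"})
ET.SubElement(emotion, f"{{{NS}}}category", {"name": "happiness"})

print(ET.tostring(root, encoding="unicode"))
```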
HUMAINE / Association for the Advancement of Affective Computing
Emotion-oriented computing is a broad research area involving many disciplines. The AAAC (Association for the Advancement of Affective Computing) grew out of the EU-funded network of excellence HUMAINE, a project that made a co-ordinated effort to reach a shared understanding of the issues involved and to propose exemplary research methods in the various areas.
More information: [2]
iHEARu
FP7 ERC Starting Grant (Runtime: 01.01.2014 - 31.12.2018; Partners: Technische Universitaet Muenchen)
Automatic speech and speaker recognition has recently matured to the degree that it has entered the daily lives of thousands of Europe's citizens, e.g., on their smartphones or in call services. During the next years, speech processing technology will move to a new level of social awareness to make interaction more intuitive, speech retrieval more efficient, and lend additional competence to computer-mediated communication and speech-analysis services in the commercial, health, security, and further sectors. To reach this goal, rich speaker traits and states such as age, height, personality, and physical and mental state, as carried by the tone of the voice and the spoken words, must be reliably identified by machines. In the iHEARu project, ground-breaking methodology, including novel techniques for multi-task and semi-supervised learning, will for the first time deliver intelligent, holistic, and evolving analysis in real-life conditions of universal speaker characteristics, which have so far been considered only in isolation. Today's sparseness of annotated realistic speech data will be overcome by large-scale speech and meta-data mining from public sources such as social media, crowd-sourcing for labelling and quality control, and shared semi-automatic annotation. All stages, from pre-processing and feature extraction to statistical modelling, will evolve in "life-long learning" according to new data, utilising feedback, deep, and evolutionary learning methods. Human-in-the-loop system validation and novel perception studies will analyse the self-organising systems and the relation of automatic signal processing to human interpretation in a previously unseen variety of speaker classification tasks. The project's work plan offers a unique opportunity to transfer current world-leading expertise in this field into a new de-facto standard of speaker characterisation methods and open-source tools ready for tomorrow's challenge of socially aware speech analysis.
More information: [3]
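The multi-task idea at the core of the project can be made concrete with a small sketch: one shared representation over acoustic features, with a separate output head per speaker attribute, trained jointly. This is a toy illustration with invented features and traits, not iHEARu's actual architecture.

```python
# Toy multi-task speaker-trait model: a shared encoder with one head per
# trait, optimised with a joint loss. All data here is synthetic.
import torch
import torch.nn as nn

class MultiTaskSpeakerNet(nn.Module):
    def __init__(self, n_features=40):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.age_head = nn.Linear(64, 3)    # e.g. young / middle-aged / senior
        self.state_head = nn.Linear(64, 2)  # e.g. neutral vs. stressed

    def forward(self, x):
        h = self.encoder(x)                 # representation shared by all tasks
        return self.age_head(h), self.state_head(h)

model = MultiTaskSpeakerNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 40)                     # stand-in per-utterance features
age_y = torch.randint(0, 3, (32,))
state_y = torch.randint(0, 2, (32,))

age_logits, state_logits = model(x)
loss = loss_fn(age_logits, age_y) + loss_fn(state_logits, state_y)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```

Because both heads back-propagate through the same encoder, labelled data for one trait also improves the representation used for the others, which is the appeal of holistic over isolated analysis.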
ASC-Inclusion
EU FP7 Specific Targeted Research Project (STREP) (Runtime: 01.11.2011 - 31.12.2014; Partners: University of Cambridge, Bar Ilan University, Compedia, University of Genoa, Karolinska Institutet, Autism Europe, TUM, Koc University, Spectrum ASC-Med)
Autism Spectrum Conditions (ASC, frequently referred to as ASD - Autism Spectrum Disorders) are neurodevelopmental conditions characterised by social communication difficulties and restricted and repetitive behaviour patterns. The project aims to create, and evaluate the effectiveness of, an internet-based platform aimed at children with ASC (and other groups, such as children with ADHD and socially neglected children) and those interested in their inclusion. The platform will combine several state-of-the-art technologies in one comprehensive virtual world, including analysis of users' gestures and facial and vocal expressions using a standard microphone and webcam, training through games, text communication with peers and smart agents, animation, and video and audio clips.
More information: [4]
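To give a flavour of the webcam-based analysis mentioned above, the sketch below shows a generic first step: locating the user's face in each frame with OpenCV. It is an illustration under assumptions, not the project's pipeline; expression analysis would operate on the detected face region.

```python
# Detect faces in live webcam frames with OpenCV's bundled Haar cascade.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```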
SEWA: Automatic Sentiment Estimation in the Wild
EU Horizon 2020 Innovation Action (IA) (Runtime: 01.02.2015 - 31.07.2018; Partners: Imperial College London, University of Passau, PlayGen Ltd, RealEyes)
The main aim of SEWA is to deploy and capitalise on existing state-of-the-art methodologies, models and algorithms for machine analysis of facial, vocal and verbal behaviour, and to adjust and combine them to realise naturalistic human-centric human-computer interaction (HCI) and computer-mediated face-to-face interaction (FF-HCI). This will involve the development of computer vision, speech processing and machine learning tools for automated understanding of human interactive behaviour in naturalistic contexts. The envisioned technology will be based on findings in the cognitive sciences and will comprise a set of audio and visual spatiotemporal methods for automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including continuous and discrete analysis of sentiment, liking and empathy.
MixedEmotions: Social Semantic Emotion Analysis for Innovative Multilingual Big Data Analytics Markets
EU Horizon 2020 Innovation Action (IA) (Runtime: 01.04.2015 - 31.03.2017; Partners: NUI Galway, Univ. Polit. Madrid, University of Passau, Expert Systems, Paradigma Tecnológico, TU Brno, Sindice Ltd., Deutsche Welle, Phonexia SRO, Millward Brown)
MixedEmotions will develop innovative multilingual, multi-modal Big Data analytics applications that derive a more complete emotional profile of user behaviour from mixed input channels: multilingual text data sources, A/V signal input (multilingual speech, audio, video), social media (social networks, comments), and structured data. Commercial applications (implemented as pilot projects) will be in Social TV, Brand Reputation Management and Call Centre Operations. Making sense of accumulated user interaction from different data sources, modalities and languages is challenging and has not yet been fully explored in an industrial context. Commercial solutions exist, but they do not address the multilingual aspect in a robust and large-scale setting, do not scale up to the huge data volumes that need to be processed, and do not integrate emotion analysis observations across data sources and/or modalities on a meaningful level. MixedEmotions will implement an integrated Big Linked Data platform for emotion analysis across heterogeneous data sources, languages and modalities, building on existing state-of-the-art tools, services and approaches, to enable the tracking of emotional aspects of user interaction and feedback at the entity level. The MixedEmotions platform will provide an integrated solution for:
- large-scale emotion analysis and fusion on heterogeneous, multilingual text, speech, video and social media data streams, leveraging open-access and proprietary data sources, and exploiting social context through social network graphs;
- semantic-level aggregation and integration of emotion information through robust extraction of social semantic knowledge graphs for emotion analysis along multidimensional clusters (see the sketch below).
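As a hint of what entity-level emotion observations look like as Linked Data, here is a minimal rdflib sketch. The vocabulary namespace and property names are placeholders invented for illustration, not the project's ontology.

```python
# Represent one entity-level emotion observation as RDF (placeholder vocabulary).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/emotion#")
g = Graph()
g.bind("ex", EX)

obs = URIRef("http://example.org/observation/1")
g.add((obs, RDF.type, EX.EmotionObservation))
g.add((obs, EX.aboutEntity, URIRef("http://example.org/brand/ACME")))
g.add((obs, EX.emotionCategory, Literal("joy")))
g.add((obs, EX.intensity, Literal(0.8, datatype=XSD.double)))
g.add((obs, EX.sourceModality, Literal("text")))

print(g.serialize(format="turtle"))
```

Once observations from text, speech and video are expressed in a shared graph like this, aggregation across sources and modalities reduces to queries over the graph.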
ARIA-VALUSPA: Artificial Retrieval of Information Assistants - Virtual Agents with Linguistic Understanding, Social skills, and Personalised Aspects
EU Horizon 2020 Research & Innovation Action (RIA) (Runtime: 01.01.2015 - 31.12.2017; Partners: University of Nottingham, Imperial College London, CNRS, University of Augsburg, University of Twente, Cereproc Ltd, La Cantoche Production)
The ARIA-VALUSPA project will create a ground-breaking new framework for the easy creation of Artificial Retrieval of Information Assistants (ARIAs) capable of holding multi-modal social interactions in challenging and unexpected situations. The system generates search queries and returns the requested information by interacting with humans through virtual characters. These virtual humans will be able to sustain an interaction with a user for some time and react appropriately to the user's verbal and non-verbal behaviour when presenting the requested information and refining search results. Using audio and video signals as input, both verbal and non-verbal components of human communication are captured. Together with a rich and realistic emotive personality model, a sophisticated dialogue management system decides how to respond to a user's input, be it a spoken sentence, a head nod, or a smile. The ARIA uses special speech synthesisers to create emotionally coloured speech and a fully expressive 3D face to render the chosen response. Back-channelling to indicate that the ARIA understood what the user meant, or returning a smile, are but a few of the many ways in which it can employ emotionally coloured social signals to improve communication. As part of the project, the consortium will develop two specific implementations of ARIAs for two different industrial applications. A ‘speaking book’ application will create an ARIA with a rich personality capturing the essence of a novel, whom users can ask novel-related questions. An ‘artificial travel agent’ web-based ARIA will be developed to help users find their perfect holiday – something that is difficult to do with existing web interfaces such as those of booking.com or tripadvisor.
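The decision step of such a dialogue manager can be caricatured in a few lines: the response depends jointly on the spoken input and the non-verbal cues observed. The rules and cue names below are invented for illustration; the project's actual dialogue and personality models are far richer.

```python
# Toy multi-modal response selection (invented rules, not ARIA-VALUSPA's).
def answer(utterance: str) -> str:
    # Placeholder for query generation and information retrieval.
    return f"results for '{utterance}'"

def choose_response(utterance: str, nonverbal: set) -> str:
    if "smile" in nonverbal:
        return "smile back, then say: " + answer(utterance)
    if "head_nod" in nonverbal and not utterance:
        return "back-channel: nod to signal understanding"
    return "say: " + answer(utterance)

print(choose_response("holidays in Crete", {"smile"}))
print(choose_response("", {"head_nod"}))
```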
PROPEREMO: Production and Perception of Emotions: An affective sciences approach
FP7 ERC Advanced Grant (Runtime: 01.03.2008 - 28.02.2015; Partners: University of Geneva (PI Klaus Scherer), TUM, Free University of Berlin)
Emotion is a prime example of the complexity of the human mind and behaviour: a psychobiological mechanism shaped by language and culture, which has puzzled scholars in the humanities and social sciences over the centuries. In an effort to reconcile conflicting theoretical traditions, we advocate a componential approach which treats event appraisal, motivational shifts, physiological responses, motor expression, and subjective feeling as dynamically interrelated and integrated components during emotion episodes. Using a prediction-generating theoretical model, we will address both production (elicitation and reaction patterns) and perception (observer inference of emotion from expressive cues). Key issues are the cognitive architecture and mental chronometry of appraisal, neurophysiological structures of relevance and valence detection, the emergence of conscious feelings due to the synchronization of brain/body systems, the generating mechanism for motor expression, the dimensionality of affective space, and the role of embodiment and empathy in perceiving and interpreting emotional expressions. Using multiple paradigms in laboratory, game, simulation, virtual reality, and field settings, we will critically test theory-driven hypotheses by examining brain structures and circuits (via neuroimagery), behaviour (via monitoring decisions and actions), psychophysiological responses (via electrographic recording), facial, vocal, and bodily expressions (via micro-coding and image processing), and conscious feeling (via advanced self-report procedures). In this endeavour, we benefit from extensive research experience, access to outstanding infrastructure, advanced analysis and synthesis methods, validated experimental paradigms as well as, most importantly, from the joint competence of an interdisciplinary affective science group involving philosophers, linguists, psychologists, neuroscientists, behavioural economists, anthropologists, and computer scientists.
SOMEDI: Social Media and Digital Interaction Intelligence
The amount of digital interaction data has soared with the digitisation of business processes and private communication since the advent of the Internet, producing an almost unfathomable volume of interaction traces. The goal of this project is to research machine learning and artificial intelligence techniques that can turn digital interaction data into Digital Interaction Intelligence, together with approaches for effectively entering and acting in social media, and for automating this process.