ISSUE-29: how should audio visual speech recognition be annotated
- State:
- CLOSED
- Product:
- EMMA
- Raised by:
- Michael Johnston
- Opened on:
- 2006-12-12
- Description:
- How should the results of audio visual speech recognition
be annotated?
mode=voice
medium=acoustic,visual?
More work is needed to clarify the difference in meaning
between medium and mode and how each applies to cases such as
AV speech recognition.
- Related Action Items:
- No related actions
- Related emails:
- ISSUE-98 (EMO-29): SMIL or EMMA-like representation of time? [EmotionML] (from sysbot+tracker@w3.org on 2009-11-02)
- Re: [emo] Issues in EmotionML (from ashimura@w3.org on 2009-10-31)
- [emo] Issues in EmotionML (from schroed@dfki.de on 2009-10-30)
- Re: [emma] resolution of open issues in issue tracker (from ashimura@w3.org on 2007-10-31)
- [emma] resolution of open issues in issue tracker (from johnston@research.att.com on 2007-10-29)
- Re: issue tracker issues (from ashimura@w3.org on 2007-03-28)
- [emma] draft 032107-diff (some more changes and list of open issues) (from paolo.baggia@loquendo.com on 2007-03-21)
- ISSUE-29: how should audio visual speech recognition be annotated [EMMA] (from dean+cgi@w3.org on 2006-12-12)
- Related notes:
This issue has been resolved as both medium and mode can
have multiple values:
mode=voice,camera
medium=acoustic,visual
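The resolution above can be sketched as an EMMA result for audio-visual speech recognition. This is an illustrative fragment, not text from the tracker: the element names and EMMA namespace follow the EMMA specification, the comma-separated attribute values follow the resolution's notation, and the interpretation id and recognized utterance are invented.

```xml
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- One interpretation produced by combined audio-visual recognition.
       Per the resolution, both medium and mode carry multiple values. -->
  <emma:interpretation id="int1"
                       emma:medium="acoustic,visual"
                       emma:mode="voice,camera">
    <!-- hypothetical recognized utterance -->
    flights from boston to denver
  </emma:interpretation>
</emma:emma>
```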