ISSUE-132: FURTHER INVESTIGATION: An example showing use of Media fragment URI to refer to portions of a signal
- State: OPEN
- Product: EMMA 1.1
- Raised by:
- Opened on: 2010-04-08
- Description:
- Related Action Items: No related actions
- Related emails:
- [emma] minutes August 2, 2012 (from dahl@conversational-technologies.com on 2012-08-02)
- [emma] June 21, 2012 minutes (from dahl@conversational-technologies.com on 2012-06-21)
- [emma] minutes April 8, 2010 (from dahl@conversational-technologies.com on 2010-04-08)
- ISSUE-132: An example showing use of Media fragment URI to refer to portions of a signal (from sysbot+tracker@w3.org on 2010-04-08)
Related notes:
Discussed during the June 14 call. This originally came from the Emotion subgroup. It would be fairly straightforward to add an example; this is just an extension of using emma:signal. One option is to allow emma:signal to carry fragment URIs, including within application semantics; alternatively, "signal" could be reserved for the whole signal and something else used for portions. Another option is an emma:group containing classifications of different parts of the utterance, or a space-separated list of fragment URIs. This could be useful for annotating audio or video. Could we just borrow something from the EmotionML spec?
Deborah Dahl, 14 Jun 2012, 15:00:56

Discussed during the June 21 call. We are starting to think that emma:group should be used rather than trying to annotate individual parts of a signal separately: just use a group with fragment URIs, so there is no need to distinguish between a full file and a portion of one.
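A sketch of the kind of example under discussion: an emma:group whose interpretations each point at a portion of the same audio file through a Media Fragments temporal URI (`#t=start,end`). This is an illustration only, not agreed wording for the spec; the file URL, ids, and the semantic payloads are invented for the example.

```xml
<emma:emma version="1.1"
           xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- Group two interpretations of portions of one signal.
       Each emma:signal value appends a Media Fragments
       temporal fragment (#t=start,end in seconds) to the
       URI of the whole recording. -->
  <emma:group id="grp1">
    <emma:interpretation id="int1"
        emma:signal="http://example.com/signals/utterance1.wav#t=0.0,1.5"
        emma:medium="acoustic" emma:mode="voice">
      <answer>yes</answer> <!-- illustrative application semantics -->
    </emma:interpretation>
    <emma:interpretation id="int2"
        emma:signal="http://example.com/signals/utterance1.wav#t=1.5,3.2"
        emma:medium="acoustic" emma:mode="voice">
      <answer>tomorrow</answer> <!-- illustrative application semantics -->
    </emma:interpretation>
  </emma:group>
</emma:emma>
```

With this approach the same emma:signal attribute covers both cases: a URI without a fragment refers to the whole file, and a URI with a temporal fragment refers to a portion of it.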