Authors: MichaelHausenblas, ...
Topics: scalability, video-blogging, GEO-tracking
Use Case: Building multimedial Semantic Web Applications
1. Introduction
This use case is about supporting the development of real, distributed Semantic Web applications in the domain of multimedia content. It discusses scalability and interoperability issues and proposes solutions that lower the barrier to implementing such multimedia Semantic Web applications.
2. Motivation
Shirin is an IT manager at an NGO called FWW (Foundation for Wildlife in the World) and wants to offer new multimedia services to inform and alert members, e.g.:
- Track your animal godchild (TyAG)
A service that allows a member to audio-visually track his or her godchild (using geo-spatial services, cameras, satellites, RFID, etc.). Donald Ator, a contributor to FWW, is the godfather of a whale. Using the TyAG service he is able to observe the route that his favourite whale takes (via GeoNames), and if the godchild is near an FWW observation point, Donald may also see some video footage. Currently the whales are somewhere around Thule Island. TyAG allows Donald to ask questions like: When will the whales be in my region?
- Video-news (vNews)
Having gathered good experiences with TyAG, Donald wants to be informed about news, upcoming events, etc. related to whales. The backbone of the vNews system is smart enough to understand that whales are a kind of animal that lives in the water. Any time an FWW member puts footage on the FWW-net that contains water animals, vNews, using automated feature extraction utilities, offers it to Donald to view as well (a sketch of this subsumption-based matching is given after this list of services). Note: there might be a potential use of the outcome of the News Use Case here.
- Interactive Annotation
A kind of video blogging (Parker et al., 2005) built on vNews. It enables members to share thoughts about endangered species etc., or to find out more about a specific entity in a (broadcast) video stream. To this end, vNews is able to automatically segment its video content and compile a list of the objects it contains. For each object in a video, a user can retrieve further information (by linking it to Wikipedia, etc.) and share her thoughts about it with other members of the vNews network.
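To make the vNews matching step concrete, the following is a minimal sketch (Python, rdflib) of the subsumption-based matching described above: a clip annotated as depicting a whale is found by a query for footage of water animals. All names used here (the fww: and media: namespaces, the depicts property, the small class hierarchy) are illustrative assumptions, not an actual FWW vocabulary.

```python
# Minimal sketch: RDFS subsumption-based matching of annotated footage.
# Namespaces, properties and class names are illustrative assumptions.
from rdflib import Graph, Namespace, RDF, RDFS

FWW = Namespace("http://fww.example.org/ontology#")
MEDIA = Namespace("http://fww.example.org/media#")

g = Graph()

# Tiny domain ontology: a Whale is a WaterAnimal, which is an Animal.
g.add((FWW.Whale, RDFS.subClassOf, FWW.WaterAnimal))
g.add((FWW.WaterAnimal, RDFS.subClassOf, FWW.Animal))

# A piece of member footage, annotated (e.g. by feature extraction) as depicting a whale.
g.add((MEDIA.clip42, RDF.type, FWW.VideoClip))
g.add((MEDIA.clip42, FWW.depicts, MEDIA.whale7))
g.add((MEDIA.whale7, RDF.type, FWW.Whale))

# Find all clips depicting some water animal, following the class hierarchy.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX fww:  <http://fww.example.org/ontology#>
SELECT ?clip WHERE {
    ?clip fww:depicts ?thing .
    ?thing a ?cls .
    ?cls rdfs:subClassOf* fww:WaterAnimal .
}
"""
for row in g.query(query):
    print(row.clip)   # -> http://fww.example.org/media#clip42
```

The same pattern would let vNews offer whale footage to a member subscribed to the broader topic "water animals" without any service-specific matching rules.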
3. Possible Solutions
Common to all of the services listed above is a substantial infrastructure that has to deal with the following challenges:
- Many different kinds of (multimedia) metadata (EXIF, GPS data, etc.) are used as input, so a common internal representation has to be found (e.g. MPEG-7) - INTEROPERABILITY (a small metadata-lifting sketch follows this list).
- For the domain (animals), a formal description needs to be defined - ONTOLOGY ENGINEERING (including visual-to-entity mapping).
- Due to the vast amount of metadata, a scalable approach has to be taken that can handle both low-level features (in MPEG-7) and high-level features (in RDF/OWL) - SCALABILITY.
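As a concrete illustration of the INTEROPERABILITY challenge, the following sketch (Python, rdflib) lifts a few already-extracted EXIF/GPS fields into a common internal RDF representation. The meta: namespace and its property names are assumptions; only the W3C WGS84 Basic Geo vocabulary is an existing vocabulary, and real source metadata would of course cover far more fields.

```python
# Minimal sketch: lifting heterogeneous source metadata (here, a few EXIF/GPS
# fields already extracted into a dict) into one internal RDF representation.
# The meta: namespace and property names are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

META = Namespace("http://fww.example.org/metadata#")          # assumed internal vocabulary
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")   # W3C Basic Geo vocabulary

def lift_exif(item_uri: str, exif: dict) -> Graph:
    """Map a small set of EXIF/GPS fields onto the common internal RDF model."""
    g = Graph()
    item = URIRef(item_uri)
    g.add((item, RDF.type, META.MediaItem))
    if "DateTimeOriginal" in exif:
        g.add((item, META.captured, Literal(exif["DateTimeOriginal"], datatype=XSD.dateTime)))
    if "GPSLatitude" in exif and "GPSLongitude" in exif:
        g.add((item, GEO.lat, Literal(exif["GPSLatitude"], datatype=XSD.decimal)))
        g.add((item, GEO.long, Literal(exif["GPSLongitude"], datatype=XSD.decimal)))
    return g

# Example: footage shot near Thule Island (coordinates are approximate).
g = lift_exif("http://fww.example.org/media/clip42",
              {"DateTimeOriginal": "2006-11-03T14:05:00",
               "GPSLatitude": "-59.45", "GPSLongitude": "-27.37"})
print(g.serialize(format="turtle"))
```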
We now try to give possible answers to the questions listed above, enabling Shirin to implement the services, in terms of:
- Giving hints, based on well-known ontology engineering methods, on how to model the domain.
- Giving support in selecting a representation that addresses both low-level and high-level features.
- Supplying a Semantic Web architect with an evaluation of RDF stores w.r.t. multimedia metadata (a toy sketch of such a query workload follows below).
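As for the last point, the following toy sketch (Python, rdflib, in-memory store) shows the kind of mixed low-level/high-level query such an evaluation would have to measure over realistic data volumes. The generated annotations, the fww: vocabulary and the in-memory store merely stand in for real multimedia metadata and the RDF stores under test; figures from such a setup are not representative of an actual benchmark.

```python
# Toy sketch: load synthetic annotation triples and time one mixed
# low-level/high-level query. All vocabulary terms are illustrative assumptions.
import time
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

FWW = Namespace("http://fww.example.org/ontology#")
MEDIA = Namespace("http://fww.example.org/media#")

g = Graph()

# Each clip gets a depicted species (high-level feature) and a dominant-colour
# value standing in for a low-level MPEG-7 descriptor.
species = [FWW.Whale, FWW.Seal, FWW.Penguin]
for i in range(10_000):
    clip = MEDIA[f"clip{i}"]
    g.add((clip, RDF.type, FWW.VideoClip))
    g.add((clip, FWW.depicts, species[i % len(species)]))
    g.add((clip, FWW.dominantColour, Literal(i % 256, datatype=XSD.integer)))

query = """
PREFIX fww: <http://fww.example.org/ontology#>
SELECT (COUNT(?clip) AS ?n) WHERE {
    ?clip fww:depicts fww:Whale ;
          fww:dominantColour ?c .
    FILTER (?c > 128)
}
"""

start = time.perf_counter()
result = list(g.query(query))
print(f"{result[0].n} matching clips, query took {time.perf_counter() - start:.3f}s")
```

A real evaluation would run queries of this shape against several candidate stores, at much larger scale and with genuine MPEG-7 and domain annotations.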
4. References
- (Parker et al., 2005)
C. Parker and S. Pfeiffer: Video Blogging: Content to the Max. IEEE MultiMedia, vol. 12, no. 2, pp. 4-8, 2005.