See also: IRC log
<AdrianP> Slides: http://ibis.in.tum.de/projects/paw/docs/HCLS_ResponderDemo.pdf
<Susie> Slides: http://ibis.in.tum.de/projects/paw/docs/HCLS_ResponderDemo.pdf
<matthiassamwald> sorry, nothing new about the SenseLab conversion, still waiting for CVS access to the W3C site.
(Adrian goes through slides at http://ibis.in.tum.de/projects/paw/docs/HCLS_ResponderDemo.pdf )
<ericP> there are a lot of layers here. i wonder which are backed by use cases
EricP: This treats Prova as the unifying layer. Another approach would be to put RDF adaptors on all the data sources and use SPARQL as the unifying language.
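EricP's alternative can be sketched in miniature: expose each source through an RDF adaptor, merge the views, and ask one SPARQL-style question across them. Everything below (the data, the predicates, the pattern matcher) is an invented toy illustration of the idea, not part of the demo.

```python
# Toy sketch (assumed data): two sources exposed as triples,
# merged into one unified view, then queried with a single
# SPARQL-like triple-pattern match.

# Hypothetical "adaptor" output from two data sources.
source_a = [("gene1", "expressedIn", "hippocampus"),
            ("gene2", "expressedIn", "cortex")]
source_b = [("gene1", "associatedWith", "alzheimers")]

merged = source_a + source_b  # the unified RDF-shaped view

def match(triples, pattern):
    """Return variable bindings; variables are strings starting with '?'."""
    s, p, o = pattern
    return [{var: val for var, val in zip((s, p, o), t) if var.startswith("?")}
            for t in triples
            if all(v.startswith("?") or v == tv for v, tv in zip((s, p, o), t))]

# One query spanning both sources: genes expressed in hippocampus
# that are also associated with Alzheimer's.
expressed = {b["?g"] for b in match(merged, ("?g", "expressedIn", "hippocampus"))}
assoc = {b["?g"] for b in match(merged, ("?g", "associatedWith", "alzheimers"))}
print(expressed & assoc)  # {'gene1'}
```

The point is architectural: the join happens in the query layer, so no single rule engine has to be the integration hub.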
<mscottm> another uk project 'comparagrid' also uses rules to create views of (OWL) data
Kei: How does your rules language compare to RuleML and OWL?
Adrian: Different family. SWRL is
a subfamily of RuleML. ReactionRuleML is intended for reactive
rules.
... It can implement workflow-like systems like BPEL.
... RuleML is more homogeneous integration, whereas
ReactionRuleML is more heterogeneous. You can use external schema
vocabularies to type the variables. You can have a certain OWL
ontology and give a variable the type of this class.
Susie: Plans to integrate with HCLS knowledge base?
Adrian: You can define
conditional decision logic, and then encode additional
decisions. You could also do transformational rules: pull the
data, then update to HCLS knowledgebase, though that would need
SPARQL update which doesn't yet exist.
... If there are queries that always repeat in a certain
binding -- author, patents, etc., -- then you can implement it
in a declarative rule language.
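Adrian's point, that a query pattern which always recurs with a certain binding (author, patents, etc.) can be captured once as a declarative rule, can be sketched roughly as follows. The facts, rule name, and predicates are all invented for illustration:

```python
# Hypothetical fact base: (subject, predicate, object) assertions.
facts = [("smith", "authored", "paper1"),
         ("smith", "holdsPatent", "patent9"),
         ("jones", "authored", "paper2")]

# Declarative rule: a recurring query pattern captured once.
# In Prolog-like notation: prolific(X) :- authored(X, _), holdsPatent(X, _).
rules = {
    "prolific": lambda f: {s for s, p, _ in f if p == "authored"}
                          & {s for s, p, _ in f if p == "holdsPatent"},
}

def ask(rule_name):
    """Evaluate the named rule against the fact base."""
    return rules[rule_name](facts)

print(ask("prolific"))  # {'smith'}
```

Once the pattern is named, callers ask for the rule rather than re-stating the query each time, which is the reuse Adrian describes.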
DBooth: What are you using this for?
Adrian: We use it for virtual
orgs, where the members are defined as a set of autonomous
agents with their internal decision logic. Other apps are IT
service mgmt, where you define quality contracts between
services and you need to monitor conformance, and that is easy
to do in this rule language.
... In HCLS it could be used as a heterogeneous approach for a
HCLS infrastructure.
... Please let me know of suggestions for applying this in
W3C.
<AdrianP> some links:
Matthias: Still working with Eric to get it on W3C server. Need to update my invited expert status.
EricP: I'm ready to do the next
step, now that invited expert status is done.
... Send me your SSH public key.
Susie: Status?
<mscottm> http://www.w3.org/2001/sw/hcls/notes/kb/
Scott: I have some new items that
were requested to be added, and I'll try to fill them in.
Starting with the motivation, I look at it as conveying the
steps and thoughts behind the design of this knowledgebase, so
that people can understand and reproduce it if desired --
ambitious. Added a lengthy intro, not sure it should be quite
so long. We've also got more info about the incorporated
databases. Would like to add discussion of how it needs to be
... maintained, next steps for improvement, detailed section on
1-2 conversions to RDF from legacy datastore.
... I've mailed to Alan to get input on which conversion to
cover.
... Added to the wish list: List of current implementations,
where to find them. Also listing of schema classes and
properties used. And an additional resources section in the
appendix.
... I'd like to ask Matthias for comments on next steps.
... Also would like input on current doc structure, or if
anyone wants to provide material. One tricky piece: explanation
of how evidence is done, but I may have a doc from Alan that
talks about it.
EricP: Sometimes evidence is handled by provenance in the SPARQL query, and sometimes it's in the data itself.
Scott: Yes, in some cases it's
encoded in the properties.
... There are a number of approaches that can be used to model
evidence, and we're using more than one, but we think evidence
is important, so we should at least describe the approaches
taken.
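The two approaches the group contrasts, evidence carried as provenance alongside a statement versus evidence baked into the property itself, can be shown side by side with toy data. All names and facts below are invented for illustration:

```python
# Approach 1: provenance kept alongside the statement
# (named-graph style quads; filtered at query time, as one
# might with SPARQL's GRAPH keyword).
quads = [("gene1", "locatedIn", "dendrite", "senselab_graph"),
         ("gene1", "locatedIn", "axon", "textmining_graph")]

curated = [(s, p, o) for s, p, o, g in quads if g == "senselab_graph"]

# Approach 2: evidence encoded in the property itself.
triples = [("gene1", "locatedIn_curated", "dendrite"),
           ("gene1", "locatedIn_predicted", "axon")]

curated2 = [(s, p, o) for s, p, o in triples if p.endswith("_curated")]

print(curated)   # [('gene1', 'locatedIn', 'dendrite')]
print(curated2)  # [('gene1', 'locatedIn_curated', 'dendrite')]
```

The first keeps the vocabulary small but pushes evidence handling into queries; the second makes evidence visible in the schema at the cost of multiplying properties. Since the KB uses more than one approach, documenting both, as Scott suggests, is the pragmatic choice.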
... Has everyone looked at it?
EricP: I have. I had envisioned the SenseLab conversion as being the detailed description of importing more than one knowledge base, so we wouldn't have to do that in this document -- we'd just reference it.
Susie: That makes sense to me.
Scott: I suppose we'll link out to a separate doc.
EricP: It would be nifty to have the motivated query incorporate some of the SenseLab data.
Scott: Good plan. Do we have such a query?
Matthias: Nothing we can show.
<scribe> ACTION: Kei to provide query that makes use of SenseLab graph [recorded in http://www.w3.org/2007/12/10-BioRDF-minutes.html#action01]
<matthiassamwald> I will just write an e-mail. Arghl.
<matthiassamwald> We cannot query our ontologies with SPARQL.
<matthiassamwald> At least not in a very intuitive manner.
EricP: I arbitrarily picked Banff query #2 for clarifying the motivations when someone is reading the doc. Any other query would be fine also if it touches on SenseLab.
Scott: People involved with incorporating the data should review to see if I have left out anything. Could also use help on getting good use cases.
<matthiassamwald> YES IT IS.
Scott: Was the DERI knowledge base sent out to the public?
<matthiassamwald> I'm in charge of that. If you have questions, please contact me at samwald@gmx.at. The web address is http://hcls.deri.ie
Don: It's unlikely ours will be ready in time.
<AdrianP> we might also setup a mirror for the KB if there is a need
<matthiassamwald> The problem is to keep the installations in sync.
Scott: The main point is to provide links in this section to encourage people to try it out. We could list installations, and those that are publicly accessible could have URLs.
<matthiassamwald> It is already hard enough to keep the DERI and Neurocommons KB in sync. With multiple mirrors, we need to set up some automated process.
<scribe> ACTION: Susie to ping Wright State University (Amit) to see if their database is up and available [recorded in http://www.w3.org/2007/12/10-BioRDF-minutes.html#action02]
Susie: in Sec 5, have you been able to incorporate all of the databases in Jonathan's ReadMe?
Scott: No. Some are listed
multiple times (e.g., MeSH). We should just list it once, and
maybe refer out to Jonathan's more complete table. Difficult
call because keeping these installations in sync is impossible,
but if people want the details they should see Jonathan's
table.
... There are still some things missing and I'll be adding
them. Do we list every little thing that's been put in? That
makes the list long.
Susie: My inclination: Add a small amount of info about all of the datasets, to avoid letting anyone feel unloved.
Scott: I don't mean leaving anyone out. It's really a technical detail. E.g., the mammalian part of OBO is in another section. I'll check with Jonathan about how to handle that.
Susie: You could send out email
to ask for feedback.
... Finish by 20th?
Scott: I could use some help. If there are diagrams that someone likes, let me know. Last week I applied Jambalaya to your ont, but maybe you meant something else.
Susie: I was hoping for a screen
shot of the OB ont.
... Screen shot of science commons ont would be interesting for
people to see.
EricP: We have until the 21st.
<kei> kei: sorry, got to go because of class
Monday 17-Dec-2007
ADJOURNED
<matthiassamwald> So... I need to generate a SSH key?
<scribe> Scribe: DBooth
<ericP> ssh-keygen
<matthiassamwald> Perfectly.
<matthiassamwald> Yes.
Present: Don, Olivier, Scott, Matthias, Adrian, Kei, Susie, DBooth, EricP
Date: 10 Dec 2007
Minutes: http://www.w3.org/2007/12/10-BioRDF-minutes.html
People with action items: Kei, Susie