See also: IRC log
<kaz> Scribe: Mark
<inserted> scribenick: Mark_Vic_
clarke: Kaz created a graph for his action item. Kaz will explain.
<kaz> http://www.w3.org/2011/webtv/wiki/Testing/survey-results#Graph_generated_using_the_above_results
kaz: I put the graph on the wiki.
... The graph includes all 53 results from the surveys.
... The color represents the category of each spec.
... The colors are described in the key below the graph.
... External survey results are on the X axis, internal survey results on the Y axis.
... The most important specs appear in the upper right.
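[A minimal sketch, not from the meeting, of how such a scatter plot could be generated in Python. The spec names, scores, and categories below are hypothetical stand-ins for the 53 survey results on the wiki.]

    import matplotlib.pyplot as plt

    # Hypothetical data: (spec, external score, internal score, category)
    results = [
        ("HTML5",        4.8, 4.9, "core"),
        ("Media Source", 4.2, 4.5, "media"),
        ("WebSockets",   3.1, 2.8, "network"),
    ]
    colors = {"core": "tab:blue", "media": "tab:red", "network": "tab:green"}

    for spec, ext, internal, cat in results:
        plt.scatter(ext, internal, color=colors[cat])  # color = spec category
        plt.annotate(spec, (ext, internal))

    plt.xlabel("External survey score")  # X axis: external survey
    plt.ylabel("Internal survey score")  # Y axis: internal survey
    plt.title("Spec testing priority (most important = upper right)")
    plt.show()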
<giuseppep> scribenick: giuseppep
<scribe> scribe: giuseppep
mark: The testing group now has a TV profile, which came from me as a temporary input while waiting for this group to finalize its work.
... We now need to provide something similar that replaces what they have.
... If we just give them this set of data, they would have to do this work themselves.
<inserted> scribenick: Mark_Vic_
<giuseppep> scribe: mark
giuseppe: We are mixing some internal and external list info, FYI.
... I agree we need to generate a profile, but I think a ranking is more useful than a binary threshold decision.
clarke: Agree that it's more useful to have a ranking, but also useful to have a threshold to get a first list
giuseppe: I think we should supply all this data to the testing group, and they can decide what to do.
... The coremob list was more from the app POV, whereas the TV list was more from the device POV.
clarke: Does everyone agree with that? [no disagreements]
... We could try more than one approach and compare & decide.
... I'd like volunteers to provide columns of priorities.
... Clarke & Mark Vickers volunteered to make ranking & threshold columns.
<kaz> ACTION: Clarke to work with Mark and create aggregated rank and mandatory/optional columns for the survey table [recorded in http://www.w3.org/2013/09/11-webtv-minutes.html#action02]
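[A minimal sketch of the two proposed columns. The scores, the aggregation rule (mean of the two surveys), and the threshold value are all illustrative assumptions; none of these values were decided in the meeting.]

    # Hypothetical scores: spec -> (external score, internal score)
    scores = {
        "HTML5": (4.8, 4.9),
        "Media Source": (4.2, 4.5),
        "WebSockets": (3.1, 2.8),
    }
    THRESHOLD = 4.0  # assumed cut-off between mandatory and optional

    # Aggregate the two surveys (here: simple mean) and rank descending.
    aggregated = {s: (ext + internal) / 2 for s, (ext, internal) in scores.items()}
    ranking = sorted(aggregated, key=aggregated.get, reverse=True)

    for rank, spec in enumerate(ranking, start=1):
        status = "mandatory" if aggregated[spec] >= THRESHOLD else "optional"
        print(f"{rank}. {spec}: {aggregated[spec]:.2f} ({status})")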
giuseppe: do we also include the graph?
clarke: yes [no one disagreed]
giuseppe: So the final result will be a ranking, a threshold, and the graph. [no one disagreed]
... In answer to a question on the list, we should send it to Tobie's testing coordination group.
clarke: So we send it to Tobie directly and let him distribute it from there.
<inserted> Clarke's generated text (Member only)
clarke: Moving to the paragraph (which is on GoToMeeting now)
giuseppe: Change the name: take the name from the wiki, or address it to Tobie.
... Second, also explain that this is different from the coremob profile in that it is focused on devices and not applications.
... Mobile was definitely focused on applications.
<kaz> ACTION: Clarke to edit introductory paragraph to include description of audience polled and add correct group name [recorded in http://www.w3.org/2013/09/11-webtv-minutes.html#action03]
mav: Suggest we explain where the data is from, but not characterize it as device vs. app.
clarke: any suggestion on where to publish this?
ddavis: I'll ask Tobie.
<ddavis> ACTION: ddavis to ask tobie about how/where to publish testing coverage list [recorded in http://www.w3.org/2013/09/11-webtv-minutes.html#action01]
<trackbot> Created ACTION-142 - Ask tobie about how/where to publish testing coverage list [on Daniel Davis - due 2013-09-18].
clarke: Giuseppe, do you want to discuss your comments next?
... The comments are grouped into categories.
<Clarke> Req. Document: https://docs.google.com/document/d/1zY5_C0ZK4_Z2_WMSf2MSu5pBI8tNAgmL2o-Ua_rv1Jg/edit?pli=1#heading=h.raug22jjh5xz
giuseppe: First comment: Descriptions needed.
clarke: agree
giuseppe: About the requirements themselves, one thing is missing: do we need the results to be machine-readable? Do we need the results to be tamper-proof?
clarke: I'll add that
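[A minimal sketch of what a machine-readable, tamper-evident result record could look like. The JSON layout and the HMAC signature are assumptions for illustration, not a format the group agreed on.]

    import hashlib, hmac, json

    result = {"spec": "Media Source", "tests_run": 120, "tests_passed": 113}

    payload = json.dumps(result, sort_keys=True).encode()  # canonical serialization
    secret = b"shared-secret"  # hypothetical key held by the publisher
    signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()

    record = {"result": result, "signature": signature}
    print(json.dumps(record, indent=2))

    # A consumer recomputes the HMAC over the same canonical payload and
    # compares signatures to detect tampering.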
giuseppe: Some of my comments may be resolved when descriptions are added, so I'll hold off on those
giuseppe: Also, I'm not sure why we distinguish general requirements from specific use case requirements.
clarke: This just reflects how the requirements were derived, but I'm not sure that evolutionary info is useful in the final report.
... In general, I welcome comments on how this report should be delivered.
giuseppe: I feel one general requirements list is more useful than the specific/general split.
... For example, there is duplication between the two lists.
... On the performance requirements: some may differ between groups. Should each requirement state which group it is for?
clarke: I like pointing groups at particular requirements
giuseppe: We also need to be specific about what we want from each group.
... After the first part of use case requirements, we jump to specific spec requirements (e.g. EME).
... It's not clear to me why we have specific requirements for some specs and not others.
... We also have specific spec requirements for specs that aren't in the priority table (e.g. the NSD spec).
clarke: The table mostly covers published specs.
giuseppe: We need to explain this in the document: explain why we picked certain specs for the ranking.
... For example, the section on specific specs could say: "We're monitoring these specs in development and here are some additional testing requirements..."
bin: If we submit the doc to specific groups, it's clear what we want, but what do we want from Tobie's group?
giuseppe: We want Tobie to update the TV mandatory spec list and also use the ranking as needed.
(Apologies for jumping the queue.)
gmandyam: I was in the coremob group, and part of the problem was that the specification list wasn't as useful without performance levels.
... For example, MSE is a requirement that may not work at the JavaScript performance level on mobile devices. So, are we going to address performance levels?
<gmandyam> Thanks for summarizing - Mark
giuseppe: I tried to add a general need for performance, but perhaps we also need to address it at the specific spec level, e.g. CSS animation.
... Of course, a performance spec would require a standard way to test performance.
clarke: Is performance level a spec issue or a market issue?
gmandyam: My point is that if you're just specifying functionality without performance, you're not providing a full service. Is performance in scope?
clarke: The group never addressed performance as a scope issue one way or the other
giuseppe: I think we need to address performance
clarke: My hesitation is that performance specs can get very complex
giuseppe: Not sure we need to provide the benchmarks in this group; perhaps just highlight which specs need performance testing and work with the specific working groups on developing performance tests.
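[A minimal sketch of the kind of repeatable performance check being discussed. The feature under test, the iteration count, and the per-run budget are all hypothetical.]

    import time

    def feature_under_test():
        # Stand-in for exercising an API a spec defines (e.g. appending a
        # media segment, running a CSS animation frame).
        sum(i * i for i in range(10_000))

    RUNS = 100
    BUDGET_SECONDS = 0.01  # assumed per-run performance budget

    start = time.perf_counter()
    for _ in range(RUNS):
        feature_under_test()
    per_run = (time.perf_counter() - start) / RUNS

    print(f"avg {per_run * 1000:.3f} ms/run ->",
          "PASS" if per_run <= BUDGET_SECONDS else "FAIL")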
clarke: Running out of time.
... Suggest taking the performance issue to the email list.
... We have a number of specific action items on the document.
... I'll add them as formal action items.
<Zakim> kaz, you wanted to ask if we want to invite Tobie to this call itself (maybe next time?)
kaz: It seems we need even more collaboration with Tobie's team, so why not have a joint meeting by phone and/or a joint F2F at TPAC?
clarke: Clarke & Kaz will follow up on joint meetings.
<kaz> ACTION: Clarke to work with Kaz and consider joint meeting(s) with general testing group (Tobie) and make recommendations [recorded in http://www.w3.org/2013/09/11-webtv-minutes.html#action04]
<Bin_Hu> http://www.principledtechnologies.com/benchmarkxprt/webxprt/
bin: Not sure performance is a priority of any W3C group right now.
... There is an external group (link above) that provides performance test info.
gmandyam: The decision to deprioritize performance is Tobie's. I have provided Tobie with a performance tool that could be used.
giuseppe: [unintelligible audio]
<Clarke> Thanks for scribing, Mark
clarke: Let's continue the discussion on the mail list.
... Meeting adjourned.
<kaz> [ adjourned ]