- # Session Start: Wed Oct 24 00:00:00 2012
- # Session Ident: #testing
- # [08:38] * Joins: shadi (shadi@public.irc.w3.org)
- # [12:29] * Joins: abarsto (~abarsto@public.irc.w3.org)
- # [12:29] * abarsto is now known as ArtB
- # [12:41] * Quits: ArtB (~abarsto@public.irc.w3.org) ("Leaving.")
- # [12:42] * Joins: abarsto (~abarsto@public.irc.w3.org)
- # [12:42] * abarsto is now known as ArtB
- # [14:06] * Joins: plh (plehegar@public.irc.w3.org)
- # [14:36] * Joins: darobin (rberjon@public.irc.w3.org)
- # [14:47] * Joins: Lachy (~Lachy@public.irc.w3.org)
- # [14:48] <Lachy> MikeSmith, ArtB told me to talk to you about updating http://w3c-test.org/framework/app/suite to incorporate the latest changes to the selectors api testsuite.
- # [14:49] <ArtB> PLH added some suites lately so he may be able to help too
- # [14:49] <Lachy> All of the tests have been rewritten now, so none of the old tests are there and all old results are no longer valid.
- # [14:49] <Lachy> Also, level1-all.xht needs to be added (replacing the former level1-xhtml.xht that has been removed)
- # [14:50] <Lachy> these tests: http://w3c-test.org/webapps/SelectorsAPI/tests/submissions/Opera/
- # [14:50] * plh wakes up
- # [14:50] <plh> I can help
- # [14:50] <plh> gimme 5 minutes
- # [14:51] <Lachy> ok
- # [14:53] <ArtB> PLH, fyi, Lachy has an ImplReport for Selectors v1 that includes data that meets the CR exit criteria http://dev.w3.org/2006/webapi/selectors-api-testsuite/level1-baseline-report.html
- # [15:01] <plh> Lachy, I did the update
- # [15:01] <plh> thank you for updating the test suite!
- # [15:01] <plh> the report generated by the framework is useless
- # [15:01] <plh> and it doesn't do justice to the test suite
- # [15:02] <Lachy> plh, in the dropdown list on this page, under single testcase, why does it list so many duplicates for the files? http://w3c-test.org/framework/app/suite/css-selectors-api-1-baseline/run
- # [15:02] <plh> hum...
- # [15:02] <plh> good question
- # [15:02] <plh> looking around
- # [15:03] <Lachy> and when I run it, it doesn't seem to submit the results automatically. It just gives me buttons to click pass, fail, cannot tell or skip. But it's not clear what should be pressed in the case where there are many passes and only a few failures.
- # [15:03] <Lachy> I really don't understand how this framework is supposed to work.
- # [15:04] <plh> not sure if it helps but I don't either :(
- # [15:04] <Lachy> haha.
- # [15:04] <Lachy> who wrote it or maintains it?
- # [15:04] <plh> robin maintains it
- # [15:04] <plh> I stopped using it to generate my own report
- # [15:05] <plh> I have my own runner on the side
- # [15:05] <Lachy> well, I would like to use it to submit results, but I don't know what I'm supposed to do to use it. I'll ask robin later when he's online.
- # [15:09] <plh> all right, this thing is driving me nuts
- # [15:09] <plh> I can't figure out why some of the files are appearing multiple times
- # [15:10] <plh> do you need help to generate a proper report?
- # [15:10] <plh> or does Charles have one already?
- # [15:10] <Lachy> I have an implementation report that I created by hand. ArtB linked to it above.
- # [15:11] <Lachy> It would be useful to get it automated somehow, if this framework can do it.
- # [15:11] <plh> agreed
- # [15:12] <Lachy> but, as I said, I have no idea how to make this framework do anything, so help would be appreciated.
- # [15:13] <plh> I think this framework needs a serious revision
- # [15:15] <ArtB> (perhaps we can have a related discussion next week during WebApps' f2f meeting, assuming we can get Robin to join ...)
- # [15:15] <Lachy> ok
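
The report Lachy put together by hand boils down to a per-test tally: the CR exit criteria mentioned above require at least two passing implementations for each test. Below is a minimal sketch of how such a tally could be produced from recorded results; the input file name and its JSON shape are hypothetical stand-ins, not the framework's actual data model.

```python
#!/usr/bin/env python
"""Build a simple implementation report from per-test results.

Assumes a hypothetical JSON file of the form:
  {"Opera 12": {"test-001": "pass", "test-002": "fail", ...},
   "Firefox 16": {...}, ...}
This is NOT the w3c-test.org framework's real schema, just an illustration.
"""
import json


def build_report(results_path):
    with open(results_path) as f:
        results = json.load(f)          # implementation -> {test: status}

    impls = sorted(results)
    tests = sorted({t for per_impl in results.values() for t in per_impl})

    rows = []
    for test in tests:
        statuses = [results[impl].get(test, "missing") for impl in impls]
        passes = statuses.count("pass")
        # CR exit criteria: at least two independent passing implementations.
        rows.append((test, statuses, passes >= 2))
    return impls, rows


if __name__ == "__main__":
    impls, rows = build_report("results.json")
    print("Test\t" + "\t".join(impls) + "\tMeets CR?")
    for test, statuses, ok in rows:
        print(test + "\t" + "\t".join(statuses) + "\t" + ("yes" if ok else "no"))
```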
- # [15:15] <darobin> hey Lachy
- # [15:15] <ArtB> btw, PLH - is the Browser Testing WG working on the framework?
- # [15:15] <darobin> automatic submission is currently disabled due to a bug
- # [15:15] <plh> nope, they're on the webdriver api
- # [15:15] <Lachy> oh, ok.
- # [15:15] <darobin> I'll put it back in when I get the ten minutes it requires
- # [15:16] <darobin> sorry about that
- # [15:16] <Lachy> darobin, can you look into the other issue plh was trying to solve with the duplicates showing up?
- # [15:16] * darobin reads the backlog
- # [15:17] <darobin> Lachy: in recent times, most of the work done on the framework has been to make the EU people who were paying for it happy
- # [15:17] <darobin> now that this is finally over (as of yesterday) I will start adding useful features again :)
- # [15:17] <Lachy> ok
- # [15:18] <Lachy> darobin, can we make it generate nice looking reports like the one I created by hand?
- # [15:18] <darobin> it also needs some refactoring, the original code base is very strongly oriented towards CSS-like testing, which doesn't help
- # [15:19] <darobin> Lachy: you mean like this one: http://dev.w3.org/2006/webapi/selectors-api-testsuite/level1-baseline-report.html ?
- # [15:19] <Lachy> yes.
- # [15:20] <Lachy> From what I can tell, it also seems to treat 1 file as a single test case with 1 result, rather than potentially many tests with individual results. That's not particularly useful.
- # [15:20] * ArtB had a good LOL re EU stuff above
- # [15:21] <darobin> Lachy: filed https://www.w3.org/Bugs/Public/show_bug.cgi?id=19688
- # [15:22] <darobin> and https://www.w3.org/Bugs/Public/show_bug.cgi?id=19689
- # [15:22] <Lachy> thanks
- # [15:24] <darobin> Lachy: as for producing a report like yours, assuming we had the data in the db, this should do the trick, no? http://w3c-test.org/framework/api/result-summary/css-selectors-api-1-baseline
- # [15:24] <darobin> ArtB: yeah, being paid by a EU project has pros and cons
- # [15:25] <darobin> in the pros, it meant that there actually was money at all to start with, and it did go into improvements
- # [15:25] <darobin> but in the cons it means that some less useful stuff needs to happen too
- # [15:25] <darobin> anyway, that's all behind us now :)
- # [15:26] <darobin> Lachy: as for treating results as one-file-one-result that's indeed one of the problems I meant when I said we inherited this from CSS
- # [15:26] <Lachy> darobin, that data lacks the granularity of individual test cases, and instead just reports pass/fail for the whole file.
- # [15:26] <Lachy> right.
- # [15:26] <darobin> then I'm afraid you'll have to wait for my refactoring
- # [15:26] <Lachy> yeah, no worries.
- # [15:26] <darobin> I certainly agree with you here, it's just taking a while to move from something good for CSS to something useful for APIs
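
The granularity issue in this exchange comes down to what a stored result represents: the CSS-inherited model keeps one verdict per file, while a testharness.js file such as those in the Selectors API suite contains many subtests, each with its own status. Here is a sketch of the two data models side by side; the class and field names are illustrative, not the framework's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List


# CSS-style model: one verdict for the whole file.
@dataclass
class FileResult:
    test_file: str
    status: str            # "pass" | "fail" | "cannot tell" | "skip"


# API-style model: a file holds many subtests, each with its own verdict.
@dataclass
class SubtestResult:
    name: str              # e.g. "querySelector with a valid selector"
    status: str            # "PASS" | "FAIL" | "TIMEOUT" | "NOTRUN"
    message: str = ""


@dataclass
class TestFileResult:
    test_file: str
    harness_status: str    # did the file itself load and complete?
    subtests: List[SubtestResult] = field(default_factory=list)

    def summary(self) -> Dict[str, int]:
        """Aggregate subtest statuses, e.g. {"PASS": 118, "FAIL": 3}."""
        counts: Dict[str, int] = {}
        for sub in self.subtests:
            counts[sub.status] = counts.get(sub.status, 0) + 1
        return counts
```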
- # [15:28] <ArtB> (darobin, just to be clear, I think the framework is coming along nicely and I want WebApps to use it even more.)
- # [15:29] <darobin> ArtB: yeah, no problem here, I'm much happier with people bringing problems up than with people ignoring it :)
- # [15:29] <darobin> it still has a fair way to go though
- # [15:29] <darobin> the good news is that HTML sort of needs good test management, so I have an excuse to move this forward :)
- # [15:31] <ArtB> that's great to read darobin
- # [15:31] <ArtB> until you made that clear, I thought Editor tasks were your main focus
- # [15:36] <darobin> ArtB: Editor tasks are my main focus overall, but during CR that includes the testing effort
- # [15:59] * Joins: odinho (~odinho@public.irc.w3.org)
- # [16:18] <jgraham> (fwiw I have a slightly different point of view which is that tools for creating implementation reports are a bad way for W3C to spend effort)
- # [16:19] <jgraham> It seems much more important to me to spend effort on helping browser vendors run the tests themselves
- # [16:19] <jgraham> (and other implementors ofc)
- # [16:19] <jgraham> And to spend effort on knowing what tests we have
- # [16:36] * Quits: Lachy (~Lachy@public.irc.w3.org) ("Computer has gone to sleep.")
- # [16:37] <darobin> jgraham: if we have a good way of running tests then producing reports from those should be relatively trivial
- # [17:14] <jgraham> darobin: Well for browser vendors, having a "relatively good way of running tests" typically involves a huge investment in infrastructure
- # [17:15] <jgraham> The W3C shouldn't try to repeat that in a cross-browser way (which would be even harder)
- # [17:23] <darobin> jgraham: there are different things here
- # [17:23] <darobin> one is having a good view over what tests we have and being able to expose that data in a usable fashion
- # [17:23] <darobin> the other side is that I presume that if we have that it can be plugged into whatever other tooling there is
- # [17:25] <jgraham> My point is that running many thousands of tests and dealing with the possibility that the browser will hang / crash / etc., whilst being automated in as many cases as possible (so you need to be able to take screenshots, run javascript tests in a good way, etc.) is really hard
- # [17:26] <jgraham> It's not really something that's easy to do with a javascript app running in the browser
- # [17:26] <jgraham> and to make it performant, you want to distribute it to multiple machines
- # [17:27] <jgraham> Trying to make it work for multiple browsers would probably involve multiple implementations
- # [17:27] <jgraham> So this is not a small project
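
One concrete part of what jgraham is describing: a runner has to assume any test can hang or crash the browser, so each run needs a timeout, a kill, and a fresh start. Below is a reduced sketch of that control flow using one subprocess per test; the browser command line and the way results are read back are placeholders, since real harnesses drive the browser over a control channel rather than reading exit codes.

```python
import subprocess

TIMEOUT_SECONDS = 60


def run_one_test(browser_cmd, test_url):
    """Run a single test in a fresh browser process and classify the outcome.

    browser_cmd is a placeholder, e.g. ["some-browser", "--headless"]; a real
    harness would collect the actual test result over a control channel.
    """
    try:
        proc = subprocess.run(
            browser_cmd + [test_url],
            capture_output=True,
            timeout=TIMEOUT_SECONDS,   # guards against hangs
        )
    except subprocess.TimeoutExpired:
        return "TIMEOUT"
    if proc.returncode != 0:
        return "CRASH"                 # browser died before reporting
    # In a real harness the verdict would come from the harness output,
    # not the exit code; this is just the control-flow skeleton.
    return "COMPLETED"


def run_suite(browser_cmd, test_urls):
    results = {}
    for url in test_urls:
        # A fresh process per test (or per small group) limits the damage a
        # crash or hang can do, at the cost of speed -- hence the appeal of
        # distributing runs across many machines.
        results[url] = run_one_test(browser_cmd, url)
    return results
```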
- # [17:28] <darobin> oh you mean full automation
- # [17:28] <jgraham> At the same time it is *far* less useful than having the browser vendors run W3C tests on their own infrastructure
- # [17:28] <darobin> yeah I'm not even looking at that yet
- # [17:28] <jgraham> Which afaict none of them do in a systematic way yet
- # [17:28] <darobin> I think one of the top priorities is bringing order to the tests
- # [17:29] <jgraham> So I think the W3C would be much better off spending the same resources helping browser vendors run the tests
- # [17:29] <darobin> making it easy to find what tests exist, in what state, in which suites, etc.
- # [17:29] <darobin> I think that's what we're geared towards
- # [17:29] <darobin> we need to sit down at TPAC and write down some ideas if you don't mind
- # [17:29] <jgraham> That kind of librarianship project does sound useful, but seems totally different to actually running the tests
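
The "librarianship" side darobin describes, knowing what tests exist, of what kind, and in which suites, can start from a script that walks the repository and writes a manifest. The layout and classification rules below are assumptions for illustration, e.g. treating any file that references testharness.js as a script test and the top-level directory as the suite name.

```python
import json
import os

TEST_EXTENSIONS = {".html", ".htm", ".xht", ".xhtml", ".svg"}


def classify(path):
    """Crude classification: script test if it references testharness.js."""
    try:
        with open(path, encoding="utf-8", errors="replace") as f:
            text = f.read()
    except OSError:
        return "unreadable"
    return "testharness" if "testharness.js" in text else "manual-or-reftest"


def build_manifest(root):
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in TEST_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            suite = rel.split(os.sep)[0]       # top-level dir as the "suite"
            manifest.setdefault(suite, []).append(
                {"path": rel, "type": classify(path)}
            )
    return manifest


if __name__ == "__main__":
    print(json.dumps(build_manifest("."), indent=2))
```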
- # [17:29] <jgraham> Yes, of course, that kind of thing is the point of going to TPAC :)
- # [17:30] <jgraham> Just some people try and fill the day with distracting WG meetings ;)
- # [17:59] <darobin> yeah, silly meetings
- # [18:00] <darobin> I think that there's running the tests and running the tests
- # [18:00] <darobin> running the tests in the way that a browser full regression TS might require is a hard project
- # [18:00] <darobin> allowing people who are looking at tests in a library to run them is useful, and easier
- # [18:04] <jgraham> So, to the extent that the implementation is trivial, I agree. Once it starts taking more time than "given we have a list of tests, we may as well hyperlink the test files so you can actually run them", I start to disagree
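
At the "trivial" end jgraham is willing to grant, a manifest like the one sketched above is enough to hyperlink every test file so a reader can open and run it by hand. A small sketch, reusing that assumed manifest shape:

```python
import html


def manifest_to_index(manifest, base_url):
    """Render a manifest ({suite: [{"path": ..., "type": ...}, ...]}) as a
    plain HTML page with one link per test file so each can be opened and run."""
    parts = ["<!DOCTYPE html><meta charset=utf-8><title>Test index</title>"]
    for suite in sorted(manifest):
        parts.append("<h2>" + html.escape(suite) + "</h2><ul>")
        for entry in sorted(manifest[suite], key=lambda e: e["path"]):
            href = base_url.rstrip("/") + "/" + entry["path"]
            parts.append(
                '<li><a href="' + html.escape(href, quote=True) + '">'
                + html.escape(entry["path"]) + "</a> ("
                + html.escape(entry["type"]) + ")</li>"
            )
        parts.append("</ul>")
    return "\n".join(parts)
```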
- # [18:27] * Disconnected
- # [18:28] * Attempting to rejoin channel #testing
- # [18:28] * Rejoined channel #testing
- # [18:29] * Joins: shadi (shadi@public.cloak)
- # [18:36] * Quits: darobin (rberjon@public.irc.w3.org) (Client closed connection)
- # [20:01] * Quits: Velmont (~Velmont@public.irc.w3.org) (Ping timeout: 20 seconds)
- # [20:10] * Joins: Velmont (~Velmont@public.cloak)
- # [21:13] * Joins: Lachy (~Lachy@public.cloak)
- # [22:00] * Quits: shadi (shadi@public.cloak)
- # [22:25] * Quits: ArtB (~abarsto@public.irc.w3.org) ("Leaving.")
- # [22:27] * Disconnected
- # [22:28] * Attempting to rejoin channel #testing
- # [22:28] * Rejoined channel #testing
- # [22:58] * Quits: plh (plehegar@public.irc.w3.org) ("always accept cookies")
- # [23:19] * Joins: abarsto (~abarsto@public.cloak)
- # [23:19] * abarsto is now known as ArtB
- # Session Close: Thu Oct 25 00:00:00 2012