- # Session Start: Fri Feb 14 00:00:00 2014
- # Session Ident: #testing
- # [00:18] * Joins: gitbot (~gitbot@public.cloak)
- # [00:18] -gitbot:#testing- [web-platform-tests] foolip pushed 3 new commits to master: https://github.com/w3c/web-platform-tests/compare/e29b6804fc82...76f698245b62
- # [00:18] -gitbot:#testing- web-platform-tests/master 6880471 Philip Jägenstedt: Automate audio_003.html...
- # [00:18] -gitbot:#testing- web-platform-tests/master 01462c7 Philip Jägenstedt: Remove overly helpful comments
- # [00:18] -gitbot:#testing- web-platform-tests/master 76f6982 Philip Jägenstedt: Merge pull request #623 from foolip/automate-audio_003...
- # [00:18] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
- # [00:18] * Joins: gitbot (~gitbot@public.cloak)
- # [00:18] -gitbot:#testing- [web-platform-tests] foolip closed pull request #623: Automate audio_003.html (master...automate-audio_003) https://github.com/w3c/web-platform-tests/pull/623
- # [00:18] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
- # [00:18] * Quits: plh (plehegar@public.cloak) ("Leaving")
- # [00:35] <gsnedders> Playing around with opjsunit, it does appear to find JIT bugs still.
- # [00:35] <gsnedders> So it does have some use.
- # [00:42] <gsnedders> SpiderMonkey fails 12 more tests running them 1.1k times (v. running them 1 time)
- # [00:42] <gsnedders> (IonMonkey's threshold is ~1k)
- # [00:50] <gsnedders> And an even larger number in V8 running them 10k times (Crankshaft appears to have no clear threshold, so I just went over-the-top on the assumption that'd get it invoked!).
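As a rough illustration of what running the same tests a thousand-plus times buys you, here is a minimal JavaScript sketch of the repeat-and-compare idea (the helper name and iteration count are illustrative, not opjsunit's actual API): call one test function enough times that the optimising JIT takes over, and fail if the optimised code ever disagrees with the first run.

```js
// Minimal sketch (hypothetical helper, not opjsunit's real API): run one test
// function enough times that the optimising JIT compiles it, and fail if the
// compiled version ever disagrees with the first (interpreted/baseline) run.
function repeatTest(testFn, runs) {
    var expected = testFn();            // first run: interpreter/baseline tier
    for (var i = 1; i < runs; i++) {
        var actual = testFn();          // later runs: optimised code once past the threshold
        if (actual !== expected) {
            throw new Error("diverged on iteration " + i + ": " +
                            actual + " !== " + expected);
        }
    }
}

// 1100 iterations comfortably clears IonMonkey's ~1000-call threshold.
repeatTest(function () { return (0.1 + 0.2).toFixed(1); }, 1100);
```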
- # [00:55] <gsnedders> jgraham: Any idea of how to find the diff in the failing tests? :P
- # [00:58] * Quits: rhauck (~Adium@public.cloak) ("Leaving.")
- # [01:01] * Joins: shepazutu (schepers@public.cloak)
- # [01:01] * Joins: rhauck (~Adium@public.cloak)
- # [01:06] * Quits: shepazu (schepers@public.cloak) (Ping timeout: 180 seconds)
- # [01:06] * shepazutu is now known as shepazu
- # [01:23] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
- # [01:43] * Quits: ptressel (~chatzilla@public.cloak) (Client closed connection)
- # [02:01] * Joins: yankhates_cb (~yankhatescb@public.cloak)
- # [02:02] * Joins: glenn (~gadams@public.cloak)
- # [02:03] * Quits: glenn (~gadams@public.cloak) (Client closed connection)
- # [02:34] * Quits: lmclister (~lmclister@public.cloak) ("")
- # [02:34] * lmclister_ is now known as lmclister
- # [02:38] * heycam is now known as heycam|away
- # [02:42] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
- # [03:01] * Quits: ato (~sid16069@public.cloak) (Client closed connection)
- # [03:01] * Joins: ato_ (~sid16069@public.cloak)
- # [03:57] * Quits: rhauck (~Adium@public.cloak) ("Leaving.")
- # [03:57] * Joins: rhauck (~Adium@public.cloak)
- # [04:04] * Quits: rhauck (~Adium@public.cloak) (Ping timeout: 180 seconds)
- # [04:05] * Quits: ArtB (~abarsto@public.cloak) ("Leaving.")
- # [04:10] * heycam|away is now known as heycam
- # [05:27] * Joins: lmcliste_ (~lmclister@public.cloak)
- # [05:30] * Joins: lmclist__ (~lmclister@public.cloak)
- # [05:32] * Joins: gitbot (~gitbot@public.cloak)
- # [05:32] -gitbot:#testing- [web-platform-tests] deniak pushed 1 new commit to master: https://github.com/w3c/web-platform-tests/commit/084a1af9fab1050bade8d6ce8af6ec6ffc037f5a
- # [05:32] -gitbot:#testing- web-platform-tests/master 084a1af xiaojunwu: Add tests to check dynamic urls changing
- # [05:32] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
- # [05:32] * Joins: gitbot (~gitbot@public.cloak)
- # [05:32] -gitbot:#testing- [web-platform-tests] deniak closed pull request #460: Add tests to check "Dynamic changes to base urls" (master...submission/xiaojunwu/urls-dynamic-change) https://github.com/w3c/web-platform-tests/pull/460
- # [05:32] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
- # [05:34] * Quits: lmcliste_ (~lmclister@public.cloak) (Ping timeout: 180 seconds)
- # [06:18] * Quits: lmclist__ (~lmclister@public.cloak) ("")
- # [07:01] * heycam is now known as heycam|away
- # [07:14] * Joins: yankhates_cb (~yankhatescb@public.cloak)
- # [07:52] * Joins: zcorpan (~zcorpan@public.cloak)
- # [07:54] * Quits: zcorpan (~zcorpan@public.cloak) (Client closed connection)
- # [08:06] * Joins: ptressel (~chatzilla@public.cloak)
- # [08:42] * Joins: Ms2ger (~Ms2ger@public.cloak)
- # [08:44] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Client closed connection)
- # [09:00] * Joins: dom (dom@public.cloak)
- # [10:15] * Joins: gitbot (~gitbot@public.cloak)
- # [10:15] -gitbot:#testing- [web-platform-tests] dontcallmedom opened pull request #633: Proposed script to facilitate but isolate testing of vendor-prefixed features (master...vendor-prefix-support) https://github.com/w3c/web-platform-tests/pull/633
- # [10:15] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
- # [10:24] * Quits: Lachy (~Lachy@public.cloak) ("My MacBook Pro has gone to sleep. ZZZzzz…")
- # [10:47] * Joins: Lachy (~Lachy@public.cloak)
- # [11:09] * Quits: leif (~leif@public.cloak) ("Leaving.")
- # [11:26] * Joins: gitbot (~gitbot@public.cloak)
- # [11:26] -gitbot:#testing- [web-platform-tests] xiaojunwu opened pull request #634: Add tests for the browsing contexts (master...submission/xiaojunwu/browsing-context-first-created) https://github.com/w3c/web-platform-tests/pull/634
- # [11:26] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
- # [11:46] * Joins: abarsto (~abarsto@public.cloak)
- # [11:46] * abarsto is now known as ArtB
- # [12:31] <jgraham> gsnedders: I don't understand the question
- # [12:31] <jgraham> dom: Tabs :(
- # [12:31] <jgraham> dom: Also, you should really turn on whatever your local equivalent of whitespace-mode is to see trailing whitespace in an angry red colour
- # [12:38] <MikeSmith> gsnedders: what code editor you use
- # [13:00] <Ms2ger> jgraham, he's got a set of failing tests when running them once, and a different set when running them in a tight loop
- # [13:01] <gsnedders> MikeSmith: emacs
- # [13:02] <gsnedders> jgraham: So opjsunit outputs one set of fails with -r 1, and another set of fails with -r 1100. How do I find the diff? Given the order of opjsunit's output is non-deterministic…
- # [13:03] <MikeSmith> gsnedders: OK I don't know how in emacs
- # [13:04] <gsnedders> (It's the non-determinism of the ordering of failed tests that makes it hard)
- # [13:05] <Ms2ger> sort | diff
- # [13:05] <gsnedders> Each failure is multiple lines.
- # [13:06] <Ms2ger> Boo
- # [13:06] <gsnedders> You ask this like I haven't *tried* to do this before.
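One way to attack the multi-line, unordered output problem is to group each failure block under its first line and set-diff the two runs. A hedged Node sketch follows; the `FAIL` marker and the file names are assumptions, since opjsunit's real output format isn't shown here.

```js
// Hedged sketch (Node): treat every line starting with "FAIL" as the header of
// a failure block, attach the following lines to it, and report blocks that
// appear in one run but not the other. The "FAIL" marker and file names are
// assumptions; opjsunit's real output format may differ.
const fs = require("fs");

function failureBlocks(path) {
    const blocks = new Map();
    let header = null;
    for (const line of fs.readFileSync(path, "utf8").split("\n")) {
        if (line.startsWith("FAIL")) {
            header = line;              // start of a new failure record
            blocks.set(header, [line]);
        } else if (header) {
            blocks.get(header).push(line);
        }
    }
    return blocks;
}

const single = failureBlocks("fails-r1.txt");
const looped = failureBlocks("fails-r1100.txt");
for (const [header, lines] of looped) {
    if (!single.has(header)) {
        console.log("Only fails under repetition:\n" + lines.join("\n") + "\n");
    }
}
```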
- # [13:08] <gsnedders> So I think --fail-list might work
- # [13:08] * gsnedders tries
- # [13:08] <gsnedders> (someone really needs to rewrite the harness, it's a horrid mess)
- # [13:09] <gsnedders> (disclaimer: I tried to rewrite it before, gave up)
- # [13:09] <Ms2ger> You? :)
- # [13:09] <gsnedders> See above disclaimer.
- # [13:09] <Ms2ger> Already got experience... sgtm
- # [13:10] <jgraham> It's not *that* horrid a mess iirc
- # [13:11] <gsnedders> I'd disagree :)
- # [13:11] <jgraham> Well I haven't looked for a while, but I don't remember it being too bad
- # [13:20] * MikeSmith only now sees that jgraham's advice about whitespace-mode was to dom and not to gsnedders
- # [13:20] <jgraham> Yeah, my advice to gsnedders is "don't be dissing my code"
- # [13:21] <jgraham> ;)
- # [13:21] * Ms2ger wonders if Opera particularly likes emacs
- # [13:22] <jgraham> (but actually constructive feedback welcome, because I just looked at it and I don't know what the main complaint is. I mean it's not perfect, but it works)
- # [13:22] <jgraham> Ms2ger: Not really I don't think
- # [13:22] * ato_ is now known as ato
- # [13:23] <Ms2ger> Oh, is that more code you wrote?
- # [13:24] <jgraham> I wrote most of the opjsunit harness
- # [13:24] <Ms2ger> Of course
- # [13:24] * Quits: Ms2ger (~Ms2ger@public.cloak) ("bbl")
- # [13:48] <gsnedders> jgraham: I thought it was kilsmo?
- # [13:48] <gsnedders> And it's not like it hasn't been heavily hacked on by me before. :)
- # [13:49] <gsnedders> jgraham: My main complaint is a surprising amount of it relies, subtly, on global state.
- # [13:59] <jgraham> gsnedders: I think kilsmo wrote one of the other harnesses and I then tried to unwrite that one
- # [14:00] <jgraham> But subtle global state is kind of hard to avoid when you are running lots of external processes simultaneously
- # [14:05] <gsnedders> It's not just that which is bad.
- # [14:06] <gsnedders> Anyhow, kilsmo's involvement predated my working on Carakan, so it's hardly surprising I scarcely know :)
- # [14:22] * Joins: Ms2ger (~Ms2ger@public.cloak)
- # [14:52] * Quits: ptressel (~chatzilla@public.cloak) ("zzz")
- # [15:02] <Ms2ger> gsnedders, can you get me the raw results for SM, if you don't have the diff?
- # [15:36] * Joins: AutomatedTester (~AutomatedTester@public.cloak)
- # [15:39] * Joins: plh (plehegar@public.cloak)
- # [15:41] * MarkS is now known as MarkS_home
- # [15:51] * Quits: Ms2ger (~Ms2ger@public.cloak) (Ping timeout: 180 seconds)
- # [16:01] * Joins: Ms2ger (~Ms2ger@public.cloak)
- # [16:11] <gsnedders> Ms2ger: I think I have the diff at home. But that's at home. :P
- # [16:11] <Ms2ger> gsnedders, either would be appreciated :)
- # [16:13] <jgraham> Ms2ger: You could just run it :)
- # [16:13] <gsnedders> Ms2ger: Left it running at 100k iterations on V8 overnight, so have a fair bit about Crankshaft too.
- # [16:14] <gsnedders> Ms2ger: essentially you just want python2 harness/opjsunit.py -s js -e spidermonkey -r 1100 --count --fail
- # [16:14] <gsnedders> (the engine argument just controls what arguments the shell needs)
- # [16:15] <gsnedders> (Pretty certain even at 100k Crankshaft didn't get invoked for a lot of the tests, but got lots of extra failures, which is good)
- # [16:16] <gsnedders> (I think sof looked at V8 with opjsunit before, not quite sure what he did though. And didn't get a response pinging him last night.)
- # [16:18] * Ms2ger kicks octal
- # [16:18] <gsnedders> Yeah, indeed.
- # [16:18] <gsnedders> Surprised that not all of those are wrapped in eval("09"), etc.
- # [16:18] <gsnedders> Most of the number parsing ones were exactly for that reason :P
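The eval wrapping matters because an engine that rejects a legacy literal such as `09` at parse time would otherwise fail to parse the whole file, losing every other test in it. A minimal, harness-independent illustration:

```js
// If an engine rejects the legacy literal "09" at parse time, having it inline
// would make the entire file unparseable and lose every other test in it.
// eval() defers parsing of just that literal to run time.
var result;
try {
    result = eval("09");            // sloppy-mode engines: decimal 9
} catch (e) {
    result = e.name;                // stricter contexts: "SyntaxError"
}
if (result !== 9 && result !== "SyntaxError") {
    throw new Error('unexpected handling of "09": ' + result);
}
```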
- # [16:21] <gsnedders> (But no, that opjsunit is still finding bugs shows how bad testing of JITing behaviour is. :()
- # [16:22] <Ms2ger> Absolutely
- # [16:22] <gsnedders> So sad that test262, despite my arguments, went to testing everything in global scope. Because at least when everything was in a function you could trivially run each test multiple times to test JIT behaviour. :(
- # [16:23] <gsnedders> (What I did was if the second iteration was different from the first, ignore it (assume it mutates state, etc.), otherwise look another hundred times or so.)
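That heuristic translates into roughly the following JavaScript sketch (names are illustrative, not any real harness's API): a test whose second run already disagrees with its first is assumed to mutate state and skipped; one that is stable for two runs gets run another hundred times to see whether the optimised code ever diverges.

```js
// Illustrative sketch of the heuristic described above (not a real harness API).
function checkUnderRepetition(testFn) {
    var first = testFn();
    if (testFn() !== first) {
        return "skipped";               // second run differs: test mutates state, not loop-safe
    }
    for (var i = 0; i < 100; i++) {     // stable so far: push it through the JIT
        if (testFn() !== first) {
            return "JIT divergence on extra iteration " + i;
        }
    }
    return "stable";
}

checkUnderRepetition(function () { return [1, 2, 3].indexOf(2); });
```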
- # [17:06] * Quits: Ms2ger (~Ms2ger@public.cloak) ("Leaving")
- # [17:14] * Quits: Lachy (~Lachy@public.cloak) ("My MacBook Pro has gone to sleep. ZZZzzz…")
- # [17:14] * Joins: Lachy (~Lachy@public.cloak)
- # [17:24] * Quits: Lachy (~Lachy@public.cloak) ("My MacBook Pro has gone to sleep. ZZZzzz…")
- # [17:31] * Joins: lmcliste_ (~lmclister@public.cloak)
- # [18:09] * Quits: dom (dom@public.cloak) (Ping timeout: 180 seconds)
- # [18:22] * Joins: yankhates_cb (~yankhatescb@public.cloak)
- # [18:27] * Joins: rhauck (~Adium@public.cloak)
- # [18:46] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
- # [18:48] * Quits: lmcliste_ (~lmclister@public.cloak) ("")
- # [19:00] * Joins: yankhates_cb (~yankhatescb@public.cloak)
- # [19:20] * Joins: lmcliste_ (~lmclister@public.cloak)
- # [19:38] * Quits: lmcliste_ (~lmclister@public.cloak) ("")
- # [19:42] <bterlson_> gsnedders: A large number of tests need to be at global scope to test global scope semantics. Do you think it's preferable to have two harnesses and two styles of tests to support JIT testing? An alternative might be to take the global scope test and wrap it in a function as a harnessing step...
- # [19:46] <jgraham> I don't think there's a problem supporting both styles of test in a single harness
- # [19:46] <jgraham> opjsunit does a test discovery phase before running the test
- # [19:47] <jgraham> If you need code at the global level (as opposed to just setting variables at the global level) you could flag the whole file as a test, rather than the functions in the file
- # [19:47] <jgraham> (we might even have already solved this; I don't remember)
- # [19:48] <jgraham> OTOH wrapping global code in a function is unsafe in the sense that it changes the semantics of the code
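A concrete example of the semantic difference being discussed, independent of any harness (run as a classic script or in a JS shell; a module's top level is already not the global scope):

```js
// At the top level, `var` creates a non-configurable property of the global
// object; the same statement inside a function does not touch the global at
// all, so wrapping changes what is observable.
var fromGlobal = 1;
var globalHasIt = Object.prototype.hasOwnProperty.call(globalThis, "fromGlobal");   // true

var functionHasIt;
(function () {
    var fromFunction = 1;
    functionHasIt = Object.prototype.hasOwnProperty.call(globalThis, "fromFunction"); // false
})();

if (!globalHasIt || functionHasIt) {
    throw new Error("expected var to behave differently at global and function scope");
}
```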
- # [19:50] <bterlson_> Yes, a significant number of tests must be in global scope, including setup and verification code. Not to mention, cross-script body and cross-realm tests need to be a thing as well
- # [19:53] <jgraham> So I'm not arguing that you can only test the global scope semantics in the global scope. But the vast majority of tests don't need to run in the global scope (and many *can't* run in the global scope in the sense that you require some sort of function call)
- # [19:54] <bterlson_> I want to separate language and library tests. Library tests are not really interesting in this discussion, function wrapping is fine for those (but really you probably want some bad ass DSL like Mocha or whatever). Library tests are typically not very interesting for JIT.
- # [19:55] <bterlson_> For language tests, I believe that fundamentally you should not introduce language features that are not relevant to what you're testing
- # [19:55] <jgraham> bterlson_: When you say "library", what do you mean?
- # [19:55] <bterlson_> Stuff like Array.prototype.splice
- # [19:55] <jgraham> OK
- # [19:56] <jgraham> I'm pretty sure you can find JIT bugs with those too
- # [19:56] <bterlson_> Sometimes. I think solving the library testing problem with a more expressive framework in test262 is a good thing.
- # [19:56] <jgraham> But in either case it seems like you are selling out the common case in favour of the uncommon case
- # [19:57] <jgraham> That is, by saying that global scope is the *only* scope allowed you are making it much more difficult to test JIT behaviour and favouring tests that are unlike most production code
- # [19:57] <bterlson_> If there is some semantics that doesn't care if it's in global scope or not, global scope is better because the test can be more easily mutated to run in whatever scope you want
- # [19:57] <bterlson_> JIT in global scope is also interesting
- # [19:58] <jgraham> Does anyone actually JIT in the global scope?
- # [19:58] <bterlson_> Yes.
- # [19:58] <jgraham> OK
- # [19:58] <bterlson_> Also different kinds of functions (decls, exprs, arrows, etc.)
- # [19:58] <jgraham> Anyway, I don't agree with the "more easily mutated"
- # [19:59] <bterlson_> blocks, etc.
- # [19:59] <jgraham> global scope just has different semantics
- # [19:59] <jgraham> bterlson_: Have you looked at the opjsunit tests?
- # [20:00] <jgraham> If they are still finding bugs 4 years after being written it seems like they must be doing something right
- # [20:00] <bterlson_> briefly googled, but I would be curious to hear more
- # [20:00] <bterlson_> Well, if it's the same tests, I'm not sure taking 4 years to find issues is a positive thing :)
- # [20:01] <jgraham> Well they were Opera-internal before
- # [20:01] <jgraham> https://github.com/operasoftware/presto-testo/tree/master/core/standards/scripts/opjsunit
- # [20:01] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
- # [20:01] <jgraham> https://github.com/operasoftware/presto-testo/tree/master/core/standards/scripts/opjsunit/tests has all the actual tests
- # [20:05] <bterlson_> interesting
- # [20:05] <bterlson_> wish we could test in eval but that's also a no-go from a test262 perspective I think
- # [20:08] <bterlson_> at any rate, in my experience jit testing can be accomplished with a simple harness that wraps in a function using string concat, runs it once and sees if it passes (or, just baselines the result) and then runs it repeatedly
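A rough sketch of that concat-wrap-baseline-repeat approach (the wrapping via `new Function`, the sample test body, and the iteration count are placeholders, not test262's actual tooling):

```js
// Rough sketch of "wrap a global-scope test in a function by string
// concatenation, baseline it once, then run it repeatedly".
function jitHarness(testSource, iterations) {
    // Blind string concatenation: the test body never knows it was wrapped.
    var testFn = new Function(testSource);
    var baseline = runOnce(testFn);            // first run establishes the expected outcome
    for (var i = 1; i < iterations; i++) {
        if (runOnce(testFn) !== baseline) {
            return "diverged after " + i + " iterations";
        }
    }
    return "stable for " + iterations + " iterations (baseline: " + baseline + ")";
}

function runOnce(fn) {
    try { return String(fn()); } catch (e) { return "threw: " + e.name; }
}

// Example: a tiny "test" whose body was written for the global scope.
jitHarness("var x = 0; for (var i = 0; i < 10; i++) { x += i; } return x;", 1100);
```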
- # [20:08] <jgraham> Yeah, that will work of course
- # [20:09] <jgraham> But if you write all your tests in the global scope it seems more likely that you will depend on global-isms than if you wrap them in functions by default
- # [20:11] <bterlson_> Define "global-isms"?
- # [20:12] <jgraham> Any of the behaviours that differ between global scope and function scope
- # [20:13] <bterlson_> Isn't it equally bad to depend on "function-isms"?
- # [20:14] <jgraham> Not for the case we are discussing here since your strategy is to wrap the code in a function and then run the function repeatedly.
- # [20:14] <bterlson_> Generally speaking, though?
- # [20:15] <bterlson_> test262 is for more than just JIT testing :)
- # [20:16] <jgraham> Generally speaking you have to pick one or the other. I don't see why global scope is preferable. It makes it harder to have multiple independent tests in a single file, harder to reuse the tests for JIT testing and is further from the common case of author code
- # [20:16] <bterlson_> I think it makes it easier for everything other than JIT testing.
- # [20:17] <jgraham> I don't understand why
- # [20:17] <bterlson_> and I don't see why concatenation is such a big deal
- # [20:17] <jgraham> Of tests in the same file?
- # [20:17] <jgraham> It makes authoring tests much easier
- # [20:17] <bterlson_> no I mean, concatenation to put a global scope test in a function
- # [20:18] <bterlson_> other stuff than jit testing is easier because, for example, if I want to test in global scope it's harder to remove a function wrapper (that could be declared in any number of ways) than to do blind string concat
- # [20:19] <jgraham> In general the same test isn't valid inside and outside global scope. But really removing a function wrapper doesn't seem that hard. You don't have to deal with pathological cases because you control the input
- # [20:20] <bterlson_> How do I control the input? In test262 right now there are multiple methods of calling runTestCase
- # [20:20] <bterlson_> anonymous function expr passed in, function declaration, var test = function(){}, maybe even others
- # [20:22] <jgraham> Well in opjsunit at least the input is very uniform
- # [20:22] <jgraham> So unwrapping wouldn't be that hard
- # [20:22] <jgraham> I would think
- # [20:23] <bterlson_> Even then, harder than doing blind string concat to wrap I think?
- # [20:23] <jgraham> Sure.
- # [20:23] <bterlson_> Multiple tests per file is also bad for language testing purposes
- # [20:23] <bterlson_> so that's the next point we can discuss if you're interested! :)
- # [20:24] * Quits: scheib (~sid4467@public.cloak) (Ping timeout: 180 seconds)
- # [20:25] <jgraham> Sure, in pathological cases it can affect things, and you will notice that in opjsunit there are some files with only one test
- # [20:25] <bterlson_> although again my thinking there is a special case of the general point: test262 language tests should be as simple as possible to most easily accommodate automated testing pipelines
- # [20:25] <jgraham> But you should optimise for the common case
- # [20:25] * Joins: scheib (~sid4467@public.cloak)
- # [20:26] <jgraham> Well I agree that being run in automation is a very valid goal
- # [20:26] * Quits: lmclister (~sid13822@public.cloak) (Ping timeout: 180 seconds)
- # [20:27] <jgraham> But certainly that wasn't a problem for opjsunit, particularly when using the js shell rather than a full browser
- # [20:27] <bterlson_> doesn't sound like it was mutating the tests in significant ways
- # [20:27] * Joins: lmclister_ (~sid13822@public.cloak)
- # [20:27] <bterlson_> or doing automated failure analysis (it's really nice to be able to plop the entire test into a bug and basically have the min repro)
- # [20:28] <bterlson_> also when bringing up new features, it sucks when an early syntax error blocks the entire test
- # [20:29] <bterlson_> Of course all of this is just based on how I've used test262 in the past. I think your points are equally valid for the record.
- # [20:30] <bterlson_> It is extremely annoying to author language tests in test262 today, although tooling can help with that some...
- # [20:31] <jgraham> By "automated failure analysis" do you mean minimisation of the TC using an automated tool to strip out features not needed to reproduce the fail, or something else?
- # [20:32] <bterlson_> You don't need that with one-test-per-file
- # [20:32] <bterlson_> because the assumption is that everything the author put in the file is required to reproduce the issue
- # [20:32] <bterlson_> that's why automated failure analysis is easier ;)
- # [20:34] <bterlson_> further things would be stuff like bucketing like failures based on data about the issue, resolving to known issues...
- # [20:34] <jgraham> Fair enough. Usually saying "testFoo in this file fails" is also effectively minimal, but yes there are edge cases where it only fails if you have exactly 32 functions in the global scope, or whatever
- # [20:34] <jgraham> Again, since you know the name of the function that failed as well as the file, those things don't seem much harder
- # [20:35] <bterlson_> I don't think that's effectively minimal
- # [20:35] <bterlson_> when you want to fix the bug you will almost for sure have to remove the excess stuff
- # [20:35] <bterlson_> esp. if it's hard to diagnose
- # [20:35] <jgraham> Sure, if it's a hard bug you will
- # [20:36] <jgraham> I have spent many hours of my life minimising multiple-tens of kbs whitespace-stripped, compiled, js so I appreciate the beauty of a minimal testcase
- # [20:36] * Joins: yankhates_cb (~yankhatescb@public.cloak)
- # [20:36] <jgraham> (or multiple hundreds of kb, or more)
- # [20:38] <jgraham> Possibly it depends what stage of development you are at. If you are just implementing a feature you are still likely to get lots of breakage of the non-pathological kind. If you are optimising an already implemented feature you are more likely to break strange edge cases
- # [20:38] <jgraham> So in the latter case you will get a higher ratio of bugs where unexecuted code in the same file makes a difference
- # [20:38] <jgraham> Although in the former case you will get more bugs in total
- # [20:39] <jgraham> (and probably more of both types)
- # [20:39] <bterlson_> I'm not sure I follow, but in my experience, 1tpf helps bootstrapping impls because each test depends on fewer things working
- # [20:43] <jgraham> Well I think opjsunit worked pretty well for Carakan at least
- # [20:43] <jgraham> Anyway, I'm heading home now
- # [20:44] <jgraham> Probably gsnedders will be along in a bit to contradict everything I said ;)
- # [20:44] <jgraham> Nice talking to you
- # [20:44] <bterlson_> I hope so! I haven't had such an interesting test262 conversation in a while :)
- # [20:44] <bterlson_> have a good one
- # [21:05] <plh> jgraham? re gsoc ideas
- # [21:06] <plh> which ones should we retain?
- # [21:06] <plh> is there one where you'd like to be a mentor?
- # [21:07] <plh> today is the deadline for gsoc
- # [21:08] <plh> https://etherpad.mozilla.org/GBHx8UkC9k has a bunch of comments but not clear if it converged
- # [21:09] * plh changes topic to 'Testing the Web Platform | http://testthewebforward.org | PR Count: 118 (2014-02-14)'
- # [21:52] * Joins: yankhates_cb_ (~yankhatescb@public.cloak)
- # [21:56] * Quits: ArtB (~abarsto@public.cloak) (Client closed connection)
- # [21:57] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
- # [22:08] * Joins: lmclister (~lmclister@public.cloak)
- # [22:09] * Joins: yankhates_cb (~yankhatescb@public.cloak)
- # [22:15] * Quits: yankhates_cb_ (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
- # [22:35] * Quits: lmclister (~lmclister@public.cloak) ("")
- # [22:35] * lmclister_ is now known as lmclister
- # [22:37] * Joins: lmcliste_ (~lmclister@public.cloak)
- # [22:44] * Quits: lmcliste_ (~lmclister@public.cloak) ("")
- # [22:54] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
- # [23:00] * Joins: lmcliste_ (~lmclister@public.cloak)
- # [23:03] * Joins: yankhates_cb (~yankhatescb@public.cloak)
- # [23:19] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
- # [23:30] <gsnedders> I disagree with everything jgraham said.
- # [23:30] <gsnedders> (I'll read the above later.)
- # [23:32] <gsnedders> jgraham: V8 obviously JITs the global scope (there is no interpreter), everyone else JITs hot loops in the global scope.
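In other words, top-level code is not exempt from optimisation: a hot enough loop can be tiered up mid-execution via on-stack replacement even though no function is ever called. A generic illustration, not tied to any particular engine's thresholds:

```js
// A hot loop at the top level: engines can switch to optimised code in the
// middle of the loop (on-stack replacement), so global-scope-only tests still
// end up exercising the JIT.
var total = 0;
for (var i = 0; i < 1e6; i++) {
    total += i & 7;                     // cycles 0..7, summing 28 per 8 iterations
}
if (total !== 3500000) {                // 125000 * 28
    throw new Error("unexpected sum from global-scope hot loop: " + total);
}
```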
- # [23:34] * Joins: rhauck1 (~Adium@public.cloak)
- # [23:36] * Quits: rhauck (~Adium@public.cloak) (Ping timeout: 180 seconds)
- # [23:39] <gsnedders> bterlson_: I'd much rather have two harnesses, tbh. The fact that there are subtle JIT bugs with property accesses (esp. when getters/setters are involved) shows that even "simple" things like property access make a lot of sense to test in such a way.
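The kind of property-access case being alluded to looks roughly like this (an illustrative shape-change test, not taken from opjsunit):

```js
// Illustrative only: a property read that is monomorphic long enough to get an
// optimised fast path, then has a getter installed. A JIT that keeps using the
// old fast path returns the stale data value instead of calling the getter.
var obj = { x: 1 };
var sum = 0;
for (var i = 0; i < 3000; i++) {
    if (i === 2000) {
        Object.defineProperty(obj, "x", {
            get: function () { return 100; }
        });
    }
    sum += obj.x;
}
// Correct result: 2000 * 1 + 1000 * 100 = 102000.
if (sum !== 102000) {
    throw new Error("property access went wrong under optimisation: " + sum);
}
```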
- # [23:40] <gsnedders> (I do actually agree with most of what jgraham said, scarily.)
- # [23:41] <gsnedders> (jgraham and I probably did the majority of the QA work on Carakan, FWIW, so we're probably a bit biased about it. :))
- # [23:44] <gsnedders> bterlson_: But a brief glance at the majority of tests failing in opjsunit in SpiderMonkey/V8 in hot loops shows most of them to be relatively simplistic things that are failing, which really isn't that promising.
- # [23:47] <gsnedders> bterlson_: The main problem with taking global scope tests and wrapping them is that they're almost never designed to be run in a loop, or written with any slight thought about it.
- # Session Close: Sat Feb 15 00:00:00 2014