Session Start: Fri Feb 14 00:00:00 2014
Session Ident: #testing
[00:18] * Joins: gitbot (~gitbot@public.cloak)
[00:18] -gitbot:#testing- [web-platform-tests] foolip pushed 3 new commits to master: https://github.com/w3c/web-platform-tests/compare/e29b6804fc82...76f698245b62
[00:18] -gitbot:#testing- web-platform-tests/master 6880471 Philip Jägenstedt: Automate audio_003.html...
[00:18] -gitbot:#testing- web-platform-tests/master 01462c7 Philip Jägenstedt: Remove overly helpful comments
[00:18] -gitbot:#testing- web-platform-tests/master 76f6982 Philip Jägenstedt: Merge pull request #623 from foolip/automate-audio_003...
[00:18] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
[00:18] * Joins: gitbot (~gitbot@public.cloak)
[00:18] -gitbot:#testing- [web-platform-tests] foolip closed pull request #623: Automate audio_003.html (master...automate-audio_003) https://github.com/w3c/web-platform-tests/pull/623
[00:18] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
[00:18] * Quits: plh (plehegar@public.cloak) ("Leaving")
[00:35] <gsnedders> Playing around with opjsunit, it does appear to find JIT bugs still.
[00:35] <gsnedders> So it does have some use.
[00:42] <gsnedders> SpiderMonkey fails 12 more tests running them 1.1k times (v. running them 1 time)
[00:42] <gsnedders> (IonMonkey's threshold is ~1k)
[00:50] <gsnedders> And an even larger number in V8 running them 10k times (Crankshaft appears to have no clear threshold, so I just went over-the-top on the assumption that'd get it invoked!).
[00:55] <gsnedders> jgraham: Any idea of how to find the diff in the failing tests? :P
[00:58] * Quits: rhauck (~Adium@public.cloak) ("Leaving.")
[01:01] * Joins: shepazutu (schepers@public.cloak)
[01:01] * Joins: rhauck (~Adium@public.cloak)
[01:06] * Quits: shepazu (schepers@public.cloak) (Ping timeout: 180 seconds)
[01:06] * shepazutu is now known as shepazu
[01:23] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
[01:43] * Quits: ptressel (~chatzilla@public.cloak) (Client closed connection)
[02:01] * Joins: yankhates_cb (~yankhatescb@public.cloak)
[02:02] * Joins: glenn (~gadams@public.cloak)
[02:03] * Quits: glenn (~gadams@public.cloak) (Client closed connection)
[02:34] * Quits: lmclister (~lmclister@public.cloak) ("")
[02:34] * lmclister_ is now known as lmclister
[02:38] * heycam is now known as heycam|away
[02:42] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
[03:01] * Quits: ato (~sid16069@public.cloak) (Client closed connection)
[03:01] * Joins: ato_ (~sid16069@public.cloak)
[03:57] * Quits: rhauck (~Adium@public.cloak) ("Leaving.")
[03:57] * Joins: rhauck (~Adium@public.cloak)
[04:04] * Quits: rhauck (~Adium@public.cloak) (Ping timeout: 180 seconds)
[04:05] * Quits: ArtB (~abarsto@public.cloak) ("Leaving.")
[04:10] * heycam|away is now known as heycam
[05:27] * Joins: lmcliste_ (~lmclister@public.cloak)
[05:30] * Joins: lmclist__ (~lmclister@public.cloak)
[05:32] * Joins: gitbot (~gitbot@public.cloak)
[05:32] -gitbot:#testing- [web-platform-tests] deniak pushed 1 new commit to master: https://github.com/w3c/web-platform-tests/commit/084a1af9fab1050bade8d6ce8af6ec6ffc037f5a
[05:32] -gitbot:#testing- web-platform-tests/master 084a1af xiaojunwu: Add tests to check dynamic urls changing
[05:32] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
[05:32] * Joins: gitbot (~gitbot@public.cloak)
[05:32] -gitbot:#testing- [web-platform-tests] deniak closed pull request #460: Add tests to check "Dynamic changes to base urls" (master...submission/xiaojunwu/urls-dynamic-change) https://github.com/w3c/web-platform-tests/pull/460
[05:32] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
[05:34] * Quits: lmcliste_ (~lmclister@public.cloak) (Ping timeout: 180 seconds)
[06:18] * Quits: lmclist__ (~lmclister@public.cloak) ("")
[07:01] * heycam is now known as heycam|away
[07:14] * Joins: yankhates_cb (~yankhatescb@public.cloak)
[07:52] * Joins: zcorpan (~zcorpan@public.cloak)
[07:54] * Quits: zcorpan (~zcorpan@public.cloak) (Client closed connection)
[08:06] * Joins: ptressel (~chatzilla@public.cloak)
[08:42] * Joins: Ms2ger (~Ms2ger@public.cloak)
[08:44] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Client closed connection)
[09:00] * Joins: dom (dom@public.cloak)
[10:15] * Joins: gitbot (~gitbot@public.cloak)
[10:15] -gitbot:#testing- [web-platform-tests] dontcallmedom opened pull request #633: Proposed script to facilitate but isolate testing of vendor-prefixed features (master...vendor-prefix-support) https://github.com/w3c/web-platform-tests/pull/633
[10:15] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
[10:24] * Quits: Lachy (~Lachy@public.cloak) ("My MacBook Pro has gone to sleep. ZZZzzz…")
[10:47] * Joins: Lachy (~Lachy@public.cloak)
[11:09] * Quits: leif (~leif@public.cloak) ("Leaving.")
[11:26] * Joins: gitbot (~gitbot@public.cloak)
[11:26] -gitbot:#testing- [web-platform-tests] xiaojunwu opened pull request #634: Add tests for the browsing contexts (master...submission/xiaojunwu/browsing-context-first-created) https://github.com/w3c/web-platform-tests/pull/634
[11:26] * Parts: gitbot (~gitbot@public.cloak) (gitbot)
[11:46] * Joins: abarsto (~abarsto@public.cloak)
[11:46] * abarsto is now known as ArtB
[12:31] <jgraham> gsnedders: I don't understand the question
[12:31] <jgraham> dom: Tabs :(
[12:31] <jgraham> dom: Also, you should really turn on whatever your local equivalent of whitespace-mode is to see trailing whitespace in an angry red colour
[12:38] <MikeSmith> gsnedders: what code editor you use
[13:00] <Ms2ger> jgraham, he's got a set of failing tests when running them once, and a different set when running them in a tight loop
[13:01] <gsnedders> MikeSmith: emacs
[13:02] <gsnedders> jgraham: So opjsunit outputs one set of fails with -r 1, and another set of fails with -r 1100. How do I find the diff? Given the order of opjsunit's output is non-deterministic…
[13:03] <MikeSmith> gsnedders: OK I don't know how in emacs
[13:04] <gsnedders> (It's the non-determinism of the ordering of failed tests that makes it hard)
[13:05] <Ms2ger> sort | diff
[13:05] <gsnedders> Each failure is multiple lines.
[13:06] <Ms2ger> Boo
[13:06] <gsnedders> You ask this like I haven't *tried* to do this before.
[13:08] <gsnedders> So I think --fail-list might work
[13:08] * gsnedders tries
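
One way to get that diff despite the multi-line failures and the non-deterministic ordering is to group the output into per-failure records and compare the records as sets. A minimal Python sketch, assuming (purely hypothetically) that each failure record begins with a line starting with "FAIL"; opjsunit's real output may need a different delimiter:

    import sys

    RECORD_START = "FAIL"  # assumed marker for the first line of each failure record

    def records(path):
        """Group a log into multi-line failure records, returned as a set of strings."""
        recs, current = set(), None
        with open(path) as f:
            for line in f:
                if line.startswith(RECORD_START):
                    if current is not None:
                        recs.add("".join(current))
                    current = [line]
                elif current is not None:
                    current.append(line)
        if current is not None:
            recs.add("".join(current))
        return recs

    if __name__ == "__main__":
        a, b = records(sys.argv[1]), records(sys.argv[2])
        for rec in sorted(a - b):
            print("only in %s:\n%s" % (sys.argv[1], rec))
        for rec in sorted(b - a):
            print("only in %s:\n%s" % (sys.argv[2], rec))

Run as, say, "python diff_fails.py run-r1.log run-r1100.log"; the script and log file names are placeholders.
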
[13:08] <gsnedders> (someone really needs to rewrite the harness, it's a horrid mess)
[13:09] <gsnedders> (disclaimer: I tried to rewrite it before, gave up)
[13:09] <Ms2ger> You? :)
[13:09] <gsnedders> See above disclaimer.
[13:09] <Ms2ger> Already got experience... sgtm
[13:10] <jgraham> It's not *that* horrid a mess iirc
[13:11] <gsnedders> I'd disagree :)
[13:11] <jgraham> Well I haven't looked for a while, but I don't remember it being too bad
[13:20] * MikeSmith only now sees that jgraham's advice about whitespace-mode was to dom and not to gsnedders
[13:20] <jgraham> Yeah, my advice to gsnedders is "don't be dissing my code"
[13:21] <jgraham> ;)
[13:21] * Ms2ger wonders if Opera particularly likes emacs
[13:22] <jgraham> (but actually constructive feedback welcome, because I just looked at it and I don't know what the main complaint is. I mean it's not perfect, but it works)
[13:22] <jgraham> Ms2ger: Not really I don't think
[13:22] * ato_ is now known as ato
[13:23] <Ms2ger> Oh, is that more code you wrote?
[13:24] <jgraham> I wrote most of the opjsunit harness
[13:24] <Ms2ger> Of course
[13:24] * Quits: Ms2ger (~Ms2ger@public.cloak) ("bbl")
[13:48] <gsnedders> jgraham: I thought it was kilsmo?
[13:48] <gsnedders> And it's not like it hasn't been heavily hacked on by me before. :)
[13:49] <gsnedders> jgraham: My main complaint is a surprising amount of it relies, subtly, on global state.
[13:59] <jgraham> gsnedders: I think kilsmo wrote one of the other harnesses and I then tried to unwrite that one
[14:00] <jgraham> But subtle global state is kind of hard to avoid when you are running lots of external processes simultaneously
[14:05] <gsnedders> It's not just that which is bad.
[14:06] <gsnedders> Anyhow, kilsmo's involvement predated my working on Carakan, so it's hardly surprising I scarcely know :)
[14:22] * Joins: Ms2ger (~Ms2ger@public.cloak)
[14:52] * Quits: ptressel (~chatzilla@public.cloak) ("zzz")
[15:02] <Ms2ger> gsnedders, can you get me the raw results for SM, if you don't have the diff?
[15:36] * Joins: AutomatedTester (~AutomatedTester@public.cloak)
[15:39] * Joins: plh (plehegar@public.cloak)
[15:41] * MarkS is now known as MarkS_home
[15:51] * Quits: Ms2ger (~Ms2ger@public.cloak) (Ping timeout: 180 seconds)
[16:01] * Joins: Ms2ger (~Ms2ger@public.cloak)
[16:11] <gsnedders> Ms2ger: I think I have the diff at home. But that's at home. :P
[16:11] <Ms2ger> gsnedders, either would be appreciated :)
[16:13] <jgraham> Ms2ger: You could just run it :)
[16:13] <gsnedders> Ms2ger: Left it running at 100k iterations on V8 overnight, so have a fair bit about Crankshaft too.
[16:14] <gsnedders> Ms2ger: essentially you just want python2 harness/opjsunit.py -s js -e spidermonkey -r 1100 --count --fail
[16:14] <gsnedders> (the engine argument just controls what arguments the shell needs)
[16:15] <gsnedders> (Pretty certain even at 100k Crankshaft didn't get invoked for a lot of the tests, but got lots of extra failures, which is good)
[16:16] <gsnedders> (I think sof looked at V8 with opjsunit before, not quite sure what he did though. And didn't get a response pinging him last night.)
[16:18] * Ms2ger kicks octal
[16:18] <gsnedders> Yeah, indeed.
[16:18] <gsnedders> Surprised that not all of those are wrapped in eval("09"), etc.
[16:18] <gsnedders> Most of the number parsing ones were exactly for that reason :P
[16:21] <gsnedders> (But no, that opjsunit is still finding bugs shows how bad testing of JITing behaviour is. :()
[16:22] <Ms2ger> Absolutely
[16:22] <gsnedders> So sad that test262, despite my arguments, went to testing everything in global scope. Because at least when everything was in a function you could trivially run each test multiple times to test JIT behaviour. :(
[16:23] <gsnedders> (What I did was if the second iteration was different from the first, ignore it (assume it mutates state, etc.), otherwise look another hundred times or so.)
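
That heuristic maps fairly directly onto a harness-side loop. A rough Python sketch, where run_test stands for a hypothetical callable that executes one test in a JS shell and returns its result; none of these names come from opjsunit itself:

    def jit_check(run_test, src, extra_runs=100):
        """Decide whether a test is safe to hammer in a loop, then do so.

        Returns None if the second run already differs from the first
        (the test presumably mutates shared state), otherwise the list of
        later iterations whose result differed from the first run.
        """
        first = run_test(src)
        if run_test(src) != first:
            return None  # state-mutating test: not meaningful to rerun
        differing = []
        for i in range(extra_runs):
            if run_test(src) != first:
                differing.append(i + 3)  # iterations 1 and 2 were the runs above
        return differing

Any non-empty list coming back marks a candidate JIT-sensitivity bug: the same source produced a different result once the engine had seen it enough times.
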
[17:06] * Quits: Ms2ger (~Ms2ger@public.cloak) ("Leaving")
[17:14] * Quits: Lachy (~Lachy@public.cloak) ("My MacBook Pro has gone to sleep. ZZZzzz…")
[17:14] * Joins: Lachy (~Lachy@public.cloak)
[17:24] * Quits: Lachy (~Lachy@public.cloak) ("My MacBook Pro has gone to sleep. ZZZzzz…")
[17:31] * Joins: lmcliste_ (~lmclister@public.cloak)
[18:09] * Quits: dom (dom@public.cloak) (Ping timeout: 180 seconds)
[18:22] * Joins: yankhates_cb (~yankhatescb@public.cloak)
[18:27] * Joins: rhauck (~Adium@public.cloak)
[18:46] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
[18:48] * Quits: lmcliste_ (~lmclister@public.cloak) ("")
[19:00] * Joins: yankhates_cb (~yankhatescb@public.cloak)
[19:20] * Joins: lmcliste_ (~lmclister@public.cloak)
[19:38] * Quits: lmcliste_ (~lmclister@public.cloak) ("")
[19:42] <bterlson_> gsnedders: A large number of tests need to be at global scope to test global scope semantics. Do you think it's preferable to have two harnesses and two styles of tests to support JIT testing? An alternative might be to take the global scope test and wrap it in a function as a harnessing step...
[19:46] <jgraham> I don't think there's a problem supporting both styles of test in a single harness
[19:46] <jgraham> opjsunit does a test discovery phase before running the test
[19:47] <jgraham> If you need code at the global level (as opposed to just setting variables at the global level) you could flag the whole file as a test, rather than the functions in the file
[19:47] <jgraham> (we might even have already solved this; I don't remember)
[19:48] <jgraham> OTOH wrapping global code in a function is unsafe in the sense that it changes the semantics of the code
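
A toy sketch of that kind of discovery step, not opjsunit's actual implementation: files carrying a hypothetical "// opjsunit: whole-file" marker are treated as a single whole-file test, and every other file contributes one test per top-level function whose name starts with "test":

    import os
    import re

    # Hypothetical marker; opjsunit's real flagging mechanism may well differ.
    WHOLE_FILE_FLAG = "// opjsunit: whole-file"
    TEST_FUNC_RE = re.compile(r"^function\s+(test\w*)\s*\(", re.MULTILINE)

    def discover(test_dir):
        """Yield (path, test_name) pairs; a test_name of None means "run the whole file"."""
        for root, _dirs, files in os.walk(test_dir):
            for name in files:
                if not name.endswith(".js"):
                    continue
                path = os.path.join(root, name)
                with open(path) as f:
                    src = f.read()
                if WHOLE_FILE_FLAG in src:
                    yield path, None  # the global code is the test itself
                else:
                    for match in TEST_FUNC_RE.finditer(src):
                        yield path, match.group(1)  # each test function is a test
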
[19:50] <bterlson_> Yes, a significant number of tests must be in global scope, including setup and verification code. Not to mention, cross-script body and cross-realm tests need to be a thing as well
[19:53] <jgraham> So I'm not arguing that you can only test the global scope semantics in the global scope. But the vast majority of tests don't need to run in the global scope (and many *can't* run in the global scope in the sense that you require some sort of function call)
[19:54] <bterlson_> I want to separate language and library tests. Library tests are not really interesting in this discussion, function wrapping is fine for those (but really you probably want some bad ass DSL like Mocha or whatever). Library tests are typically not very interesting for JIT.
[19:55] <bterlson_> For language tests, I believe that fundamentally you should not introduce language features that are not relevant to what you're testing
[19:55] <jgraham> bterlson_: When you sau "library", what do you mean?
[19:55] <jgraham> *say
[19:55] <bterlson_> Stuff like Array.prototype.splice
[19:55] <jgraham> OK
[19:56] <jgraham> I'm pretty sure you can find JIT bugs with those too
[19:56] <bterlson_> Sometimes. I think solving the library testing problem with a more expressive framework in test262 is a good thing.
[19:56] <jgraham> But in either case it seems like you are selling out the common case in favour of the uncommon case
[19:57] <jgraham> That is, by saying that global scope is the *only* scope allowed you are making it much more difficult to test JIT behaviour and favouring tests that are unlike most production code
[19:57] <bterlson_> If there is some semantics that doesn't care if it's in global scope or not, global scope is better because the test can be more easily mutated to run in whatever scope you want
[19:57] <bterlson_> JIT in global scope is also interesting
[19:58] <jgraham> Does anyone actually JIT in the global scope?
[19:58] <bterlson_> Yes.
[19:58] <jgraham> OK
[19:58] <bterlson_> Also different kinds of functions (decls, exprs, arrows, etc.)
[19:58] <jgraham> Anyway, I don't agree with the "more easily mutated"
[19:59] <bterlson_> blocks, etc.
[19:59] <jgraham> global scope just has different semantics
[19:59] <jgraham> bterlson_: Have you looked at the opjsunit tests?
[20:00] <jgraham> If they are still finding bugs 4 years after being written it seems like they must be doing something right
[20:00] <bterlson_> briefly googled, but I would be curious to hear more
[20:00] <bterlson_> Well, if it's the same tests, I'm not sure taking 4 years to find issues is a positive thing :)
[20:01] <jgraham> Well they were Opera-internal before
[20:01] <jgraham> https://github.com/operasoftware/presto-testo/tree/master/core/standards/scripts/opjsunit
[20:01] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
[20:01] <jgraham> https://github.com/operasoftware/presto-testo/tree/master/core/standards/scripts/opjsunit/tests has all the actual tests
[20:05] <bterlson_> interesting
[20:05] <bterlson_> wish we could test in eval but that's also a no-go from a test262 perspective I think
[20:08] <bterlson_> at any rate, in my experience jit testing can be accomplished with a simple harness that wraps in a function using string concat, runs it once and sees if it passes (or, just baselines the result) and then runs it repeatedly
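
A minimal sketch of that wrap-and-loop step done harness-side in Python with blind string concatenation; the wrapper names and the repeat count are illustrative, and (per the caveat above) wrapping does change the semantics of code that genuinely relies on global scope:

    def wrap_for_jit(test_src, runs=1100):
        """Wrap a global-scope test body in a function and call it in a hot loop.

        Blind string concatenation only: no parsing or rewriting of the test source.
        """
        return (
            "function __wrapped_test__() {\n"
            + test_src
            + "\n}\n"
            + "for (var __i__ = 0; __i__ < %d; __i__++) { __wrapped_test__(); }\n" % runs
        )

    # The resulting source would then be fed to whatever JS shell is in use,
    # e.g. written to a temporary file and passed to the SpiderMonkey or V8 shell.
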
[20:08] <jgraham> Yeah, that will work of course
[20:09] <jgraham> But if you write all your tests in the global scope it seems more likely that you will depend on global-isms than if you wrap them in functions by default
[20:11] <bterlson_> Define "global-isms"?
[20:12] <jgraham> Any of the behaviours that differ between global scope and function scope
[20:13] <bterlson_> Isn't it equally bad to depend on "function-isms"?
[20:14] <jgraham> Not for the case we are discussing here since your strategy is to wrap the code in a function and then run the function repeatedly.
[20:14] <bterlson_> Generally speaking, though?
[20:15] <bterlson_> test262 is for more than just JIT testing :)
[20:16] <jgraham> Generally speaking you have to pick one or the other. I don't see why global scope is preferable. It makes it harder to have multiple independent tests in a single file, harder to reuse the tests for JIT testing and is further from the common case of author code
[20:16] <bterlson_> I think it makes it easier for everything other than JIT testing.
[20:17] <jgraham> I don't understand why
[20:17] <bterlson_> and I don't see why concatenation is such a big deal
[20:17] <jgraham> Of tests in the same file?
[20:17] <jgraham> It makes authoring tests much easier
[20:17] <bterlson_> no I mean, concatenation to put a global scope test in a function
[20:18] <bterlson_> other stuff than jit testing is easier because, for example, if I want to test in global scope it's harder to remove a function wrapper (that could be declared in any number of ways) than to do blind string concat
[20:19] <jgraham> In general the same test isn't valid inside and outside global scope. But really removing a function wrapper doesn't seem that hard. You don't have to deal with pathological cases because you control the input
[20:20] <bterlson_> How do I control the input? In test262 right now there are multiple methods of calling runTestCase
[20:20] <bterlson_> anonymous function expr passed in, function declaration, var test = function(){}, maybe even others
[20:22] <jgraham> Well in opjsunit at least the input is very uniform
[20:22] <jgraham> So unwrapping wouldn't be that hard
[20:22] <jgraham> I would think
[20:23] <bterlson_> Even then, harder than doing blind string concat to wrap I think?
[20:23] <jgraham> Sure.
[20:23] <bterlson_> Multiple tests per file is also bad for language testing purposes
[20:23] <bterlson_> so that's the next point we can discuss if you're interested! :)
[20:24] * Quits: scheib (~sid4467@public.cloak) (Ping timeout: 180 seconds)
[20:25] <jgraham> Sure, in pathological cases it can affect things, and you will notice that in opjsunit there are some files with only one test
[20:25] <bterlson_> although again my thinking there is a special case of the general point: test262 language tests should be as simple as possible to most easily accommodate automated testing pipelines
[20:25] <jgraham> But you should optimise for the common case
[20:25] * Joins: scheib (~sid4467@public.cloak)
[20:26] <jgraham> Well I agree that being run in automation is a very valid goal
[20:26] * Quits: lmclister (~sid13822@public.cloak) (Ping timeout: 180 seconds)
[20:27] <jgraham> But certainly that wasn't a problem for opjsunit, particularly when using the js shell rather than a full browser
[20:27] <bterlson_> doesn't sound like it was mutating the tests in significant ways
[20:27] * Joins: lmclister_ (~sid13822@public.cloak)
[20:27] <bterlson_> or doing automated failure analysis (it's really nice to be able to plop the entire test into a bug and basically have the min repro)
[20:28] <bterlson_> also when bringing up new features, it sucks when an early syntax error blocks the entire test
[20:29] <bterlson_> Of course all of this is just based on how I've used test262 in the past. I think your points are equally valid for the record.
[20:30] <bterlson_> It is extremely annoying to author language tests in test262 today, although tooling can help with that some...
[20:31] <jgraham> By "automated failure analysis" do you mean minimisation of the TC using an automated tool to strip out features not needed to reproduce the fail, or something else?
[20:32] <bterlson_> You don't need that with one-test-per-file
[20:32] <bterlson_> because the assumption is that everything the author put in the file is required to reproduce the issue
[20:32] <bterlson_> that's why automated failure analysis is easier ;)
[20:34] <bterlson_> further things would be stuff like bucketing like failures based on data about the issue, resolving to known issues...
[20:34] <jgraham> Fair enough. Usually saying "testFoo in this file fails" is also effectively minimal, but yes there are edge cases where it only fails if you have exactly 32 functions in the global scope, or whatever
[20:34] <jgraham> Again, since you know the name of the function that failed as well as the file, those things don't seem much harder
[20:35] <bterlson_> I don't think that's effectively minimal
[20:35] <bterlson_> when you want to fix the bug you will almost for sure have to remove the excess stuff
[20:35] <bterlson_> esp. if it's hard to diagnose
[20:35] <jgraham> Sure, if it's a hard bug you will
[20:36] <jgraham> I have spent many hours of my life minimising multiple-tens of kbs whitespace-stripped, compiled, js so I appreciate the beauty of a minimal testcase
[20:36] * Joins: yankhates_cb (~yankhatescb@public.cloak)
[20:36] <jgraham> (or multiple hundreds of kb, or more)
[20:38] <jgraham> Possibly it depends what stage of development you are at. If you are just implementing a feature you are still likely to get lots of breakage of the non-pathological kind. If you are optimising an already implemented feature you are more likely to break strange edge cases
[20:38] <jgraham> So in the latter case you will get a higher ratio of bugs where unexecuted code in the same file makes a difference
[20:38] <jgraham> Although in the former case you will get more bugs in total
[20:39] <jgraham> (and probably more of both types)
[20:39] <bterlson_> I'm not sure I follow, but in my experience, 1tpf helps bootstrapping impls because each test depends on fewer things working
[20:43] <jgraham> Well I think opjsunit worked pretty well for Carakan at least
[20:43] <jgraham> Anyway, I'm heading home now
[20:44] <jgraham> Probably gsnedders will be along in a bit to contradict everything I said ;)
[20:44] <jgraham> Nice talking to you
[20:44] <bterlson_> I hope so! I haven't had such an interesting test262 conversation in a while :)
[20:44] <bterlson_> have a good one
[21:05] <plh> jgraham? re gsoc ideas
[21:06] <plh> which ones should we retain?
[21:06] <plh> is there one where you'd like to be a mentor?
[21:07] <plh> today is the deadline for gsoc
[21:08] <plh> https://etherpad.mozilla.org/GBHx8UkC9k has a bunch of comments but not clear if it converged
[21:09] * plh changes topic to 'Testing the Web Platform | http://testthewebforward.org | PR Count: 118 (2014-02-14)'
[21:52] * Joins: yankhates_cb_ (~yankhatescb@public.cloak)
[21:56] * Quits: ArtB (~abarsto@public.cloak) (Client closed connection)
[21:57] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
[22:08] * Joins: lmclister (~lmclister@public.cloak)
[22:09] * Joins: yankhates_cb (~yankhatescb@public.cloak)
[22:15] * Quits: yankhates_cb_ (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
[22:35] * Quits: lmclister (~lmclister@public.cloak) ("")
[22:35] * lmclister_ is now known as lmclister
[22:37] * Joins: lmcliste_ (~lmclister@public.cloak)
[22:44] * Quits: lmcliste_ (~lmclister@public.cloak) ("")
[22:54] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
[23:00] * Joins: lmcliste_ (~lmclister@public.cloak)
[23:03] * Joins: yankhates_cb (~yankhatescb@public.cloak)
[23:19] * Quits: yankhates_cb (~yankhatescb@public.cloak) (Ping timeout: 180 seconds)
[23:30] <gsnedders> I disagree with everything jgraham said.
[23:30] <gsnedders> (I'll read the above later.)
[23:32] <gsnedders> jgraham: V8 obviously JITs the global scope (there is no interpreter), everyone else JITs hot loops in the global scope.
[23:34] * Joins: rhauck1 (~Adium@public.cloak)
[23:36] * Quits: rhauck (~Adium@public.cloak) (Ping timeout: 180 seconds)
[23:39] <gsnedders> bterlson_: I'd much rather have two harnesses, tbh. The fact that there are subtle bugs with property accesses in JITs (esp. when getters/setters are involved) shows that even "simple" things like property access make a lot of sense to test in such a way.
[23:40] <gsnedders> (I do actually agree with most of what jgraham said, scarily.)
[23:41] <gsnedders> (Myself and jgraham probably did the majority of the QA work on Carakan, FWIW, so we're probably a bit biased about it. :))
[23:44] <gsnedders> bterlson_: But a brief glance at the majority of tests failing in opjsunit in SpiderMonkey/V8 in hot loops shows most of them to be relatively simplistic things that are failing, which really isn't that promising.
[23:47] <gsnedders> bterlson_: The main problem with taking global scope tests and wrapping them is that they're almost never designed to be run in a loop, or written with any slight thought about it.
Session Close: Sat Feb 15 00:00:00 2014
