[00:00] Tim_Smart: pong
[00:00] isaacs has joined the channel
[00:01] ryah_away: konobi: yo
[00:03] aho has joined the channel
[00:13] steadicat has joined the channel
[00:31] cc has joined the channel
[00:33] okito has joined the channel
[00:36] okito: Hi I was wondering if anyone could give me some insight on the status of packages in node.js
[00:37] okito: I know it is not in the core code now but is there a roadmap by any chance?
[00:37] ashb: http://github.com/isaacs/npm
[00:37] ashb: is one that i know of
[00:38] okito: I saw that and kiwi
[00:38] okito: And I saw some patches from Ryah mentioning changes to allow require to be replaced, which seems like a feature for packages also
[00:39] okito: Is npm long term the preferred solution?
[00:41] tmpvar has joined the channel
[00:42] okito has joined the channel
[00:43] jed has joined the channel
[01:00] BBB has joined the channel
[01:01] rektide: anyone seen the "test" module used anywhere? i dont understand from the spec how its intended to be used. spec at: http://wiki.commonjs.org/wiki/Unit_Testing/1.0
[01:01] rektide: doesnt look like the node.js tests use the test module
[01:08] ryah: rektide: require('assert')
[01:10] jcrosby has joined the channel
[01:12] rektide: assert i understand
[01:13] rektide: although it looks like i still have to DIY the test runner??
[01:13] ryah: yes
[01:13] ryah: check out the wiki there are a few projects, i think
[01:13] rektide: i was thinking the "test" module was intended in some way to organize a collection of unit tests, but again, i really dont understand the spec
[01:14] ryah: maybe - node doesn't implement that
[01:15] ryah: we've just got "assert"
[01:15] konobi: there's Test.Simple, QUnit, etc.
[01:19] jed: rektide: it's pretty easy to just use Makefile: http://github.com/jed/fab/blob/master/Makefile
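
A DIY runner in the spirit of ryah's and jed's suggestions can stay very small. A minimal sketch, assuming only require('assert') and the sys print helpers of the era; the test names and cases here are made up:

    var assert = require('assert');
    var sys = require('sys'); // print helpers available at the time

    // each test is just a named function that throws on failure
    var tests = {
      'addition works': function () { assert.equal(1 + 1, 2); },
      'strings concatenate': function () { assert.equal('a' + 'b', 'ab'); }
    };

    var failures = 0;
    for (var name in tests) {
      try {
        tests[name]();
        sys.puts('ok - ' + name);
      } catch (e) {
        failures++;
        sys.puts('not ok - ' + name + ': ' + e.message);
      }
    }
    sys.puts(failures + ' failure(s)');
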
[01:22] hassox: quick survey
[01:22] hassox: how many ppl have ruby installed on their systems?
[01:22] hassox: or rather the other way
[01:22] hassox: does anyone not have ruby installed?
[01:23] rektide: on what systems? this laptop has ruby, but my colo and NAS dont have ruby.
[01:24] hassox: on systems you develop / deploy to
[01:27] RayMorgan_ has joined the channel
[01:27] rektide: that applies to all of the above. i'm at 1/3.
[01:34] lvmike has joined the channel
[01:39] jcrosby has joined the channel
[01:46] binary42 has joined the channel
[01:50] konobi: hassox: perl... ftw
[01:50] konobi: ACTION *ducks*
[01:51] stephenlb: perl :D
[02:01] rektide: now i'm in for it
[02:01] rektide: i need a decent means of testing httpServer with pipelined clients
[02:02] rektide: http.createClient doesn't qualify, i don't believe curl qualifies
[02:07] RayMorgan has joined the channel
[02:09] bpot has joined the channel
[02:11] tmpvar: 99% complete with a pure js dom level 1 implementation, wut wut!
[02:12] stephenlb: rektide: what does qualify mean?
[02:13] jcrosby has joined the channel
[02:13] brosner has joined the channel
[02:16] brosner has joined the channel
[02:16] ashb: rektide: even curl's multi interface? can't do that with the `curl` binary, but the lib supports some form of internal select() or thereabouts
[02:17] ryah: tmpvar: cool - can't wait to check it out
[02:19] davidsklar has joined the channel
[02:20] tmpvar: http://github.com/tmpvar/jsdom -- needs some work but its passing 522/527 unit tests heh
[02:20] ashb: tmpvar: where did the tests come from?
[02:21] tmpvar: http://www.w3.org/2004/04/ecmascript/
[02:21] ashb: oh yes that wonderfulness
[02:21] ashb: tmpvar: in which case this might be useful to you:
[02:21] tmpvar: heh, yeah
[02:21] ashb: http://github.com/ashb/DOM-Tests
[02:22] ashb: i made a start (and got a reasonable way) towards generating a commonjs version for the assert module
[02:22] ashb: which i haven't pushed >_<
[02:22] tmpvar: ah
[02:22] tmpvar: these are the original tests im guessing?
[02:23] ashb: yeah
[02:23] tmpvar: ah
[02:23] ashb: yours have changes?
[02:23] tmpvar: i just used some php/regex magic to process all of the individual tests into a large file
[02:24] ashb: probably would have been easier than fighting the lovely XSLT/ant/hellmouth
[02:24] tmpvar: and im avoiding implementing a parser, so the test doms are being built manually
[02:24] ashb: no parser?
[02:24] tmpvar: hehe, yeah it was pretty simple
[02:24] tmpvar: yeah, no parser
[02:24] ashb: how do you get data into it then?
[02:25] tmpvar: http://github.com/tmpvar/jsdom/blob/master/test/level1/_files/staff.xml.js
[02:25] tmpvar: currently i've only implemented the XML specific dom parts
[02:26] ashb: should store it in jsonml ;)
[02:26] tmpvar: heh heh
[02:27] tmpvar: I did a quick poc with node-xml, and i think that my initial "no parser" road was the right path to take
[02:27] ashb: node-xml?
[02:27] ashb: yeah the dom API is much easier than an sgml/xml parser
[02:27] tmpvar: http://github.com/robrighter/node-xml
[02:28] tmpvar: agreed
[02:31] andy_l has joined the channel
[02:31] cloudhead has joined the channel
[02:31] tmpvar: couple more attribute tests and making namednodemap/nodelist live, and i'll be done with a leg of the journey heheh
[02:32] ashb: i went for binding a C library in hippo. decided it was easier
[02:32] tmpvar: gdome?
[02:32] ashb: arabica
[02:33] ashb: C++ lib that wraps either libxml2, expat, xerces or msxml
[02:33] ashb: and provides dom level2 (xml) and tag-soup parsing modes
[02:33] tmpvar: not bad, and it is bsd-ish licensed
[02:34] tmpvar: curious why i didnt see this before hah
[02:34] ashb: yeah. author has been receptive-ish to patches
[02:34] ashb: tho a bit slow to take them at times.
[02:35] tmpvar: hrm
[02:36] cloudhead: is there an easy way to chain async http requests?
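
One common answer to cloudhead's question is a small sequencing helper that runs callback-style steps one after another. A sketch under that assumption; fetchUser and fetchPosts are hypothetical async functions of the form fn(input, callback):

    // run callback-style steps in order, passing each result to the next step
    function chain(steps, done) {
      var i = 0;
      function next(result) {
        if (i >= steps.length) return done(result);
        steps[i++](result, next);
      }
      next(null);
    }

    chain([
      function (_, cb)    { fetchUser('cloudhead', cb); },   // hypothetical request #1
      function (user, cb) { fetchPosts(user, cb); }          // hypothetical request #2, uses #1's result
    ], function (posts) {
      // both requests have finished, in order
    });
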
[02:41] mikeal has joined the channel
[02:51] Tim_Smart: !log
[02:58] MattJ: Hmm, node-xml is a pain
[02:58] MattJ: or perhaps just the docs are :)
[02:58] tmpvar: heh
[02:58] tmpvar: its just sax right?
[02:59] MattJ: Yeah
[02:59] MattJ: But the API is wacky if you ask me
[02:59] tmpvar: yeah?
[02:59] ashb: thats SAX
[02:59] MattJ: It also says you can do: new libxml.SaxParser()
[02:59] MattJ: which you can't
[02:59] tmpvar: ooh
[02:59] tmpvar: yeah, i noticed that
[03:00] MattJ: I was expecting from the documentation that I would get a parser object from that, and then be able to set the callbacks on that object
[03:00] MattJ: But actually you have to pass it a function, which calls the necessary functions with the callbacks as parameters (!?)
[03:01] tmpvar: heh heh
[03:01] tmpvar: we just need someone to port tagsoup/beautiful soup heh
[03:01] MattJ: Hmm :)
[03:02] technoweenie has joined the channel
[03:02] jan____ has joined the channel
[03:02] MattJ: A push-based SAX parser is exactly what I'm after, just hoped for a more sensible API
[03:02] MattJ: But, I'll live :)
[03:04] MattJ: Hmm... I guess the idea is that you can re-use the callback factory
[03:04] MattJ: But that's actually something you'll rarely be able to do, except in simple scripts
[03:04] MattJ: Considering it gives no way to pass user-specific data to the callbacks
[03:05] MattJ: Ok, now I stop pointlessly ranting :)
[03:05] tmpvar: haha
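
For reference, the callback-factory style MattJ is describing looks roughly like this; the exact callback names (onStartElementNS, onCharacters, parseString) are assumptions reconstructed from node-xml's examples, not verified here:

    var xml = require('./node-xml');

    // instead of setting callbacks on the parser object, you hand the constructor
    // a function that receives a callback-registration object
    var parser = new xml.SaxParser(function (cb) {
      cb.onStartElementNS(function (elem, attrs) {
        // element opened
      });
      cb.onCharacters(function (chars) {
        // text content
      });
    });

    parser.parseString('<root>hi</root>');
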
[03:05] cadorn has joined the channel
[03:08] tmpvar: hrm, i have a feeling live list/map is just going to make things horribly slow with doing lookups everywhere
[03:09] tmpvar: no soft-refs for the fail! hehe
[03:10] Yuffster has joined the channel
[03:12] unomi has joined the channel
[03:12] unomi has joined the channel
[03:13] jan____ has joined the channel
[03:14] jan____ has joined the channel
[03:18] steadicat has joined the channel
[03:31] nodejs_v8 has joined the channel
[03:32] Tim_Smart: Huh awesome. IRC bot that connects to multiple networks and channels
[03:41] kriszyp has joined the channel
[03:50] r11t has joined the channel
[04:01] mikeal has joined the channel
[04:03] mattly has joined the channel
[04:05] jwm has joined the channel
[04:32] brosner has joined the channel
[04:32] sahnlam has joined the channel
[04:37] brosner has joined the channel
[04:58] andy_l has joined the channel
[05:22] jubos has joined the channel
[05:42] _sh has joined the channel
[05:45] RayMorgan has joined the channel
[05:48] cloudhead has joined the channel
[05:53] steadicat has joined the channel
[06:03] micheil: Tim_Smart: got it working?
[06:03] Tim_Smart: micheil: What working?
[06:03] micheil: the nodejs_v8?
[06:04] Tim_Smart: Yeah I just finished it :p
[06:04] micheil: nice work
[06:04] micheil: does it segfault any more?
[06:04] Tim_Smart: haha no, I used a different strategy
[06:05] micheil: oh
[06:05] Tim_Smart: I can hook it up to multiple IRC networks and channels with the current design
[06:05] micheil: nice
[06:05] Tim_Smart: And only run one instance
[06:06] micheil: that must require multiple tcpclients being active though
[06:06] Tim_Smart: yes it does
[06:07] nodejs_v8 has joined the channel
[06:07] nodejs_v8 has joined the channel
[06:07] Tim_Smart: OK there we are
[06:07] Tim_Smart: :p
[06:08] Tim_Smart: it's in #node.js, ##javascript and irc.moofspeak.net/#revolution
[06:08] Tim_Smart: v8> while(true) {}
[06:08] nodejs_v8: Tim_Smart: No Output.
[06:11] nodejs_v8 has joined the channel
[06:12] steadicat has joined the channel
[06:19] unomi: v8> this
[06:19] nodejs_v8 has joined the channel
[06:19] Tim_Smart: :p
[06:19] Tim_Smart: Was fixing a small bug
[06:20] unomi: I tried out nodeBot.js
[06:20] unomi: but it didn't know how to reconnect, and I am still too green to attack fixing it
[06:20] rednul has joined the channel
[06:20] _Ray_ has joined the channel
[06:20] tmpvar: !help
[06:20] tmpvar: heheh
[06:20] jwm: heh
[06:21] unomi: mind putting it in #yui ? consensus is that we could use an eval bot
[06:21] Tim_Smart: atm it just has a parser
[06:21] v8 has joined the channel
[06:21] creationix has joined the channel
[06:22] unomi: what nodeBot was doing was probably unsafe (Process.eval, I believe)
[06:22] _Ray_: This one has a simple C++ addon.
[06:22] unomi: but it was pretty cool as well, because vars were persistent
[06:22] unomi: so you could set more fancy functions at runtime
[06:22] _Ray_: v8> var i, props = []; for(i in this) { props.push(i); } props
[06:22] v8: _Ray_: ["i", "props"]
[06:23] unomi: v8> i
[06:23] v8: unomi: Exception: ReferenceError: i is not defined
[06:23] _Ray_: Naked scope, with a very simplistic pretty printer
[06:23] jwm: what do you guys recommend to learn
[06:24] jwm: mootools or jquery or yui
[06:24] jwm: hehe
[06:24] _Ray_: JavaScript :)
[06:24] jwm: yeah yeah :)
[06:24] jwm: seems jquery has a lot of force behind it
[06:24] mattly: jquery
[06:24] mattly: but also yeah javascript
[06:24] unomi: javascript
[06:24] botjs has joined the channel
[06:25] _Ray_: I'm pretty tired of people in ##javascript not understanding JS, only having done jQuery.
[06:25] mattly: prototype and mootools do a lot of work towards making javascript work more like ruby or python or something
[06:25] unomi: from that perspective I would say that yui is closer to 'javascript' than jquery
[06:25] _Ray_: Well, more sad than tired.
[06:25] _Ray_: Do prototype and MooTools still modify the core objects nowadays?
[06:26] jwm: I just don't understand the use for them but I've been out of javascript for 4 years now
[06:26] unomi: ` var hello = function('name'){return 'Hi there, ' +name+'!';};
[06:26] jwm: hehe
[06:26] botjs: SyntaxError: Unexpected string
[06:26] unomi: ` var hello = function(name){return 'Hi there, ' +name+'!';};
[06:26] botjs: undefined
[06:26] jwm: I'm experimenting with websockets
[06:26] _Ray_: v8> var hello; hello = function(name){return 'Hi there, ' +name+'!';};
[06:26] v8: _Ray_: function (name){return 'Hi there, ' +name+'!';}
[06:26] unomi: `hello('_Ray_');
[06:27] unomi: ` hello('_Ray_');
[06:27] botjs: Hi there, _Ray_!
[06:27] _Ray_: :)
[06:27] TIm_Smart: _Ray_: I'll show you my bot in a minute
[06:27] _Ray_: Doesn't the state get... polluted after a while?
[06:27] TIm_Smart: I avoided writing to a file :p
[06:27] _Ray_: Tim_Smart, createChildProcess?
[06:27] _Ray_: Unreliable, I've tested.
[06:27] _Ray_: Well, IPC with it.
[06:28] TIm_Smart: yeah, except I use it slightly differently
[06:28] Pilate: ` sys
[06:28] botjs: ReferenceError: sys is not defined
[06:28] _Ray_: Sadly the only reliable way I found of communicating with a process is writing to files.
[06:28] TIm_Smart: _Ray_: Only the process.stdio is unreliable
[06:28] _Ray_: Not afaik
[06:28] TIm_Smart: Mine has worked flawlessly everytime
[06:29] _Ray_: Exec itself is unreliable, because it's just createChildProcess
[06:29] _Ray_: It may just be an OS X thing
[06:29] andy_l has joined the channel
[06:29] TIm_Smart: We will find out, as soon as I add password support to the bot
[06:30] _Ray_: This way (files) has never failed me, and it's not really that bad.
[06:30] _Ray_: v8> while(1) {}
[06:30] v8: _Ray_: Timeout.
[06:31] Pilate: v8> process.createChildProcess("wget",["http://www.google.com/intl/en_ALL/images/logo.gif"])
[06:31] v8: Pilate: Exception: ReferenceError: process is not defined
[06:31] Pilate: ` process.createChildProcess("wget",["http://www.google.com/intl/en_ALL/images/logo.gif"])
[06:31] botjs: [object ChildProcess]
[06:32] TIm_Smart: oh dear
[06:32] Pilate: ;)
[06:32] _Ray_: This is a brand new context, there's no node.js in v8
[06:32] TIm_Smart: someone didn't set the context properly ;)
[06:32] _Ray_: (I specifically wanted it without anything)
[06:32] unomi: bye bye nodeBot
[06:33] Pilate: v8> GLOBAL
[06:33] v8: Pilate: Exception: ReferenceError: GLOBAL is not defined
[06:33] _Ray_: v8> var i, props = []; for(i in this) { props.push(i); } props
[06:33] v8: _Ray_: ["i", "props"]
[06:33] _Ray_: ^--
[06:35] Pilate: v8> var dump = function (ind) { var a; for (t in ind) {a+=ind[t]};return a}; dump(this);
[06:35] v8: Pilate: "undefinedfunction (ind) { var a; for (t in ind) {a+=ind[t]};return a}"
[06:36] inimino: there's js> here too (which is spidermonkey)
[06:37] inimino: js> var i, props = []; for(i in this) { props.push(i); } props
[06:37] gbot2: inimino: []
[06:37] _Ray_: Because you're not in global context
[06:37] joshthecoder: v8> 1/0
[06:37] v8: joshthecoder: Infinity
[06:37] joshthecoder: lol
[06:39] _Ray_: On the other hand...
[06:40] _Ray_: js> props = []; for(i in this) { props.push(i); } props
[06:40] gbot2: _Ray_: ["props"]
[06:40] Pilate: js> "a\r\na"
[06:40] gbot2: Pilate: "a
[06:40] jwm: is node.js pretty much overkill for just doing websockets for redis communication
[06:40] _Ray_: How odd. i should be global.
[06:40] jwm: hehe
[06:40] Pilate: js> "a\r\nQUIT lol"
[06:40] gbot2: Pilate: "a
[06:40] _Ray_: inimino, why doesn't i appear there?
[06:41] _Ray_: Hrm. Interesting.
[06:41] jwm: anyone compare the speed of say ruby eventmachine with node.js
[06:41] _Ray_: js> props = []; i = 0; for(i in this) { props.push(i); } props
[06:41] gbot2: _Ray_: ["props","i"]
[06:42] nodejs_v8 has joined the channel
[06:43] Pilate: js> var a; while(1){a++;}
[06:43] gbot2: Pilate: Timeout.
[06:43] _Ray_: v8> var a; while(1){a++;}
[06:43] nodejs_v8: _Ray_: No Output.
[06:44] Pilate: v8> "lol\r\nQUIT sanatize_me"
[06:44] nodejs_v8: Pilate: "lol
[06:44] jwm: I want to use nodejs and redis :)
[06:44] _Ray_: Hrm.
[06:44] _Ray_: Tim_Smart, you made v8 quit, didn't you :p
[06:44] TIm_Smart: _Ray_: I'm getting it to identify first
[06:45] _Ray_: I mean v8, not nodejs_v8
[06:45] _Ray_: Oh, no
[06:45] _Ray_: You didn't
[06:45] _Ray_: Pilate did ;)
[06:45] Pilate: :x guilty
[06:46] _Ray_: Hrm, odd.
[06:46] _Ray_: send("PRIVMSG "+who+" :"+what.replace(/\n/g, ""));, should have dealt with that.
[06:46] Pilate: \r is enough apparently
[06:47] brainproxy: something strange: I've got an instance of the demo chat server running locally; if I enter a numeric value as a chat message and submit it, after that new messages are not echoed back to the clients
[06:47] inimino: /[\r\n]/g
[06:47] Pilate: it strips \n
[06:47] _Ray_: Evil..
[06:47] _Ray_: Yeah, I'm changing it to that, but it's odd... that's up to the IRCd I guess
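
Combining _Ray_'s send line with inimino's character class gives something like the following; send, who and what are the bot's own variables:

    // strip both CR and LF -- a bare \r is enough to smuggle a second IRC command
    send("PRIVMSG " + who + " :" + what.replace(/[\r\n]/g, ""));
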
[06:47] isaacs has joined the channel
[06:48] v8 has joined the channel
[06:48] _Ray_: v8> "a\r\nQUIT lol"
[06:48] v8: _Ray_: "aQUIT lol"
[06:49] isaacs: haha, you guys are funny
[06:50] Pilate: v8> "a"+String.fromCharCode(10)+"QUIT"
[06:50] v8: Pilate: "aQUIT"
[06:52] _Ray_: It's an interesting thing that I can't use Function.prototype.toString.apply(somefunc); on the pretty printer, because somefunc comes from another context (i.e. was created with another Function object) and v8 complains that Function.prototype.toString isn't... "polymorphic" or something to that effect
[06:53] kennethkalmer has joined the channel
[06:55] _Ray_: The unfortunate side effect of that is...
[06:55] _Ray_: v8> var f = function() { return 1; }; f.toString = function() { return "function() { return 2; }"; }; f
[06:55] v8: _Ray_: function() { return 2; }
[06:56] _Ray_: I suppose I could use .constructor.prototype.toString.apply though.
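
The workaround _Ray_ mentions sidesteps a lying toString on the function itself by going through the function's own constructor; a small illustration:

    var f = function () { return 1; };
    f.toString = function () { return "function() { return 2; }"; };

    f.toString();                               // "function() { return 2; }" -- the lie
    f.constructor.prototype.toString.apply(f);  // "function () { return 1; }" -- the real source
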
[06:58] nodejs_v8 has joined the channel
[06:58] TIm_Smart: v8> while(true) {}
[06:58] nodejs_v8: TIm_Smart: No Output.
[06:58] v8: TIm_Smart: Timeout.
[06:58] _Ray_: ACTION suggests the bot recognize his own nick instead of v8> 
[06:59] Tim_Smart: Yeah easily changed :p
[06:59] isaacs: v8> __wrap__
[06:59] v8: isaacs: Exception: ReferenceError: __wrap__ is not defined
[06:59] nodejs_v8: isaacs: Exception: ReferenceError: __wrap__ is not defined
[06:59] Pilate: v8> this
[06:59] v8: Pilate: {}
[06:59] nodejs_v8: Pilate: {}
[06:59] isaacs: also, it should recognize "v8:" instead of "v8>"
[07:00] isaacs: v8: hi, there
[07:00] isaacs: then tab-completion would work
[07:00] _Ray_: Aight :p
[07:00] Pilate: v8> GLOBAL+process+sys+http
[07:00] v8: Pilate: Exception: ReferenceError: GLOBAL is not defined
[07:00] nodejs_v8: Pilate: Exception: ReferenceError: GLOBAL is not defined
[07:00] _Ray_: evalRegex = new RegExp("^"+botNick+"> (.*)$"),
[07:00] _Ray_: I guess that $ is superfluous.
[07:00] nodejs_v8 has joined the channel
[07:00] isaacs: _Ray_: i'm suggesting you change the > to [>:]
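
Applied to the evalRegex above, isaacs' suggestion (plus dropping the superfluous $) might look like:

    // accept both "v8> expr" and "v8: expr" so IRC tab-completion works
    evalRegex = new RegExp("^" + botNick + "[>:] ?(.*)");
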
[07:01] Tim_Smart: v8: 1+1
[07:01] nodejs_v8: Tim_Smart: 2
[07:01] micheil: v8> (function(){ var a=require("sys); a.puts("test"); })();
[07:01] v8: micheil: Exception: SyntaxError: Unexpected identifier
[07:01] micheil: aww..
[07:01] isaacs: v8: "hello
[07:01] nodejs_v8: isaacs: Exception: SyntaxError: Unexpected token ILLEGAL
[07:01] isaacs: v8: "hi"
[07:01] nodejs_v8: isaacs: "hi"
[07:01] _Ray_: isaacs, yeah, did it :)
[07:01] isaacs: nice
[07:01] isaacs: looks like v8 doesn't get it, but node_v8 does
[07:01] micheil: v8: function(){while(true){arguments.callee();}};
[07:01] nodejs_v8: micheil: Exception: SyntaxError: Unexpected token (
[07:02] isaacs: member:v8: (function(){while(true){arguments.callee();}})()
[07:02] micheil: v8: (function(){while(true){arguments.callee();}})();
[07:02] nodejs_v8: micheil: Exception: RangeError: Maximum call stack size exceeded
[07:02] isaacs: v8: (function(){while(true){arguments.callee();}})()
[07:02] micheil: woo.
[07:02] nodejs_v8: isaacs: Exception: RangeError: Maximum call stack size exceeded
[07:02] isaacs: nice
[07:02] isaacs: weird that the copy in colloquy made it some kinda weird hyperlink
[07:03] isaacs: v8: (function(){while(true);})()
[07:03] nodejs_v8: isaacs: No Output.
[07:03] v8 has joined the channel
[07:03] _Ray_: v8: "Hi!"
[07:03] v8: _Ray_: "Hi!"
[07:03] nodejs_v8: _Ray_: "Hi!"
[07:03] micheil: v8: return this;
[07:03] v8: micheil: Exception: SyntaxError: Illegal return statement
[07:03] nodejs_v8: micheil: Exception: SyntaxError: Illegal return statement
[07:03] _Ray_: ACTION re-suggests the bot understands his own nick ;p
[07:04] _Ray_: v8> var f = function() { return 1; }; f.toString = function() { return "function() { return 2; }"; }; f
[07:04] v8: _Ray_: function () { return 1; }
[07:04] _Ray_: :D
[07:04] nodejs_v8 has joined the channel
[07:04] Tim_Smart: The people @ javascript will be getting annoyed haha
[07:05] Tim_Smart: nodejs_v8: function(){}
[07:05] nodejs_v8: Tim_Smart: Exception: SyntaxError: Unexpected token (
[07:06] _Ray_: Function declaration != expression
[07:06] inimino: js> function f(){
[07:06] gbot2: inimino: Error: SyntaxError: missing } after function body: function f(){ ............^
[07:06] inimino: ACTION blames keyboard
[07:06] inimino: js> function f(){}
[07:06] gbot2: inimino: undefined
[07:06] Pilate: js> function f()\r\n
[07:06] gbot2: Pilate: Error: SyntaxError: illegal character: function f()\r\n ............^
[07:06] _Ray_: Yeah, that's SpiderMonkey
[07:06] inimino: js> (function f(){})
[07:06] gbot2: inimino:
[07:07] _Ray_: It tries to interpret it as a function expression, no idea why
[07:07] _Ray_: jsc does the same as v8
[07:07] nodejs_v8 has joined the channel
[07:07] inimino: _Ray_: hm?
[07:07] Tim_Smart: nodejs_v8: function(){}
[07:07] nodejs_v8: Tim_Smart: Exception: SyntaxError: Unexpected token :
[07:08] Tim_Smart: nodejs_v8: "test"
[07:08] nodejs_v8: Tim_Smart: Exception: SyntaxError: Unexpected token :
[07:08] _Ray_: eval expects a function body, and function(){} in a program body is a syntax error, isn't it?
[07:08] _Ray_: Well, I'm not sure. It's not a valid function declaration, but I don't see why it should interpret it as a function expression.
[07:08] Tim_Smart: I got another bug I think
[07:09] inimino: _Ray_: it doesn't
[07:09] aho: http://kaioa.com/b/0801/quine.htm <- did something stupid with that kind of bot a while ago
[07:09] _Ray_: inimino, sure?
[07:09] _Ray_: function declarations aren't expressions, they don't have return values
[07:09] _Ray_: yet the return value of "function(){}" in a function body seems to be a ref to the func
[07:10] _Ray_: ehrm, in program body
[07:10] inimino: _Ray_: hm, yeah I think I vaguely recall that behavior has changed a few times
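
The distinction the channel is circling, illustrated for v8 (SpiderMonkey's behavior differs, as gbot2 shows above):

    // a bare "function(){}" at statement level is parsed as a declaration,
    // and a declaration without a name is a SyntaxError in v8
    try { eval("function(){}"); }
    catch (e) { /* SyntaxError */ }

    // parentheses force it to be parsed as a function expression instead
    eval("(function(){})");   // returns the anonymous function
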
[07:10] nodejs_v8 has joined the channel
[07:10] Tim_Smart: nodejs_v8: "test"
[07:10] nodejs_v8: Tim_Smart: "test"
[07:10] Tim_Smart: nodejs_v8: function(){}
[07:10] nodejs_v8: Tim_Smart: function (){}
[07:11] _Ray_: nodejs_v8> var f = function() { return 1; }; f.toString = function() { return "function() { return 2; }"; }; f
[07:11] _Ray_: nodejs_v8: var f = function() { return 1; }; f.toString = function() { return "function() { return 2; }"; }; f
[07:11] nodejs_v8: _Ray_: Exception: SyntaxError: Unexpected token var
[07:11] Tim_Smart: hmm
[07:12] _Ray_: You're wrapping it in parenthesis, aren't you?
[07:12] Tim_Smart: :p
[07:12] _Ray_: ;)
[07:12] Tim_Smart: ACTION goes back and reverts it
[07:13] Tim_Smart: I should install filewatchers for the modules so I don't have to restart it :p
[07:13] nodejs_v8 has joined the channel
[07:14] _Ray_: nodejs_v8: var o = {}; o.o = o; o
[07:14] nodejs_v8: _Ray_: {"o": {"o": **RECURSION**}}
[07:14] inimino: _Ray_: mozilla did something different with bare function expressions in eval() for a while, but I think on current trunk that is a syntax error
[07:14] Pilate: v8> module
[07:14] _Ray_: Tim_Smart, add seen.push(obj) at the beginning of the pretty printer
[07:14] v8: Pilate: Exception: ReferenceError: module is not defined
[07:15] _Ray_: v8> var o = {}; o.o = o; o
[07:15] v8: _Ray_: {"o": **RECURSION**}
[07:16] nodejs_v8 has joined the channel
[07:17] Tim_Smart: nodejs_v8: var o = {}; o.o = o; o
[07:17] nodejs_v8: Tim_Smart: {"o": **RECURSION**}
[07:18] Tim_Smart: _Ray_: I got mine watching on several IRC servers atn
[07:18] Tim_Smart: *atm
[07:18] ayo has joined the channel
[07:19] Tim_Smart: I'll give you a tarball of the files
[07:23] nodejs_v8 has joined the channel
[07:25] Tim_Smart: unomi: Bot is in #yui now
[07:26] cedricv has joined the channel
[07:29] _Ray_ has joined the channel
[07:30] _Ray_: Hrmph. Stupid internet.
[07:31] okito has joined the channel
[07:31] Tim_Smart: _Ray_: http://dl.dropbox.com/u/396394/ircbot.tar.gz
[07:36] _Ray_: Tim_Smart, you did the lib?
[07:36] unomi: Thanks Tim_Smart
[07:37] Tim_Smart: the irc lib belongs to someone else, but I did a couple mods
[07:37] Tim_Smart: irc.js
[07:37] _Ray_: Ah, thought I'd seen that style somewhere
[07:38] _Ray_: Yeah, I remembered "this.send('USER', this.user, '0', '*', ':'+this.real);" from somewhere
[07:39] _Ray_: ah, there we go
[07:39] _Ray_: http://github.com/fwg/nodejs.irc/tree/master/irc/
[07:40] Tim_Smart: yeah it was used as the irclog backend
[07:40] kennethkalmer has joined the channel
[07:40] Tim_Smart: no not that one
[07:41] _Ray_: :o weird, it's what I remembered " this.raw("USER", this.username, '0', '*', ':'+this.realname);" from
[07:41] _Ray_: I was like, wtf, he's joining arguments? XD
[07:43] hassox has joined the channel
[07:45] Tim_Smart: nodejs_v8: var hassox = Infinite; hassox
[07:45] nodejs_v8: Tim_Smart: Exception: ReferenceError: Infinite is not defined
[07:45] Tim_Smart: nodejs_v8: var hassox = Infinity; hass
[07:45] nodejs_v8: Tim_Smart: Exception: ReferenceError: hass is not defined
[07:45] Tim_Smart: nodejs_v8: var hassox = Infinity; hassox
[07:45] nodejs_v8: Tim_Smart: Infinity
[07:45] hassox: haha
[07:45] hassox: nice one :D
[07:46] _Ray_: v8> Infinity = 2; 1/0 === Infinity
[07:47] _Ray_: ehrm
[07:47] _Ray_: js> Infinity = 2; 1/0 === Infinity
[07:47] gbot2: _Ray_: false
[07:47] Pilate: nodejs_v8: print3 = function() { var rv = ""; for (k in this) { rv += k + " ," } ; return rv }; print3()
[07:47] nodejs_v8: Pilate: "print3 ,"
[07:47] _sh has joined the channel
[07:48] _Ray_: What I find odd is that 'k' doesn't appear.
[07:48] Tim_Smart: nodejs_v8: this
[07:48] nodejs_v8: Tim_Smart: {}
[07:52] hassox: nodejs_v8: this
[07:52] nodejs_v8: hassox: {}
[07:52] hassox: tasty
[07:53] micheil: nodejs_v8: this.prototype.a = "a"; this;
[07:53] nodejs_v8: micheil: Exception: TypeError: Cannot set property 'a' of undefined
[07:53] micheil: nodejs_v8: this.a = "a"; this;
[07:53] nodejs_v8: micheil: {}
[07:55] Tim_Smart: nodejs_v8: var i = "test"; this
[07:55] nodejs_v8: Tim_Smart: {}
[07:56] Pilate: nodejs_v8: print
[07:56] nodejs_v8: Pilate: Exception: ReferenceError: print is not defined
[07:56] _Ray_: ACTION reminds people the module gives you a clean slate.
[07:58] creationix has left the channel
[07:58] creationix has joined the channel
[08:00] Tim_Smart: nodejs_v8: JSON
[08:00] nodejs_v8: Tim_Smart: {}
[08:01] Tim_Smart: nodejs_v8: JSON.stringify('{"test":"test}');
[08:01] nodejs_v8: Tim_Smart: ""{\"test\":\"test}""
[08:01] Tim_Smart: nodejs_v8: JSON.parse('{"test":"test"}');
[08:01] nodejs_v8: Tim_Smart: {"test": "test"}
[08:07] kennethkalmer has joined the channel
[08:11] zmoog has joined the channel
[08:15] markwubben has joined the channel
[08:21] mikeal has joined the channel
[08:28] rektide: i cant get a "new events.Promise()" to get any of its callbacks to fire
[08:28] rektide: http://cgit.voodoowarez.com/pipe-layer-js/plain/chain.js is the code i'm working with
[08:29] rektide: this.execute(ctx) is the entry point. it finishes with a call: this.chainResult.emitSuccess(ctx,result); , but the this.chainResult.addCallback() listeners never get called
[08:34] rektide: confused addCallback and addListener, appears to be working as expected now.
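
For reference, the Promise flow rektide is describing, as the node API of the time exposed it (error handling via addErrback/emitError omitted):

    var events = require('events');

    var promise = new events.Promise();

    // addCallback registers a success handler; addListener('success', ...) is the
    // lower-level event form it is easy to confuse it with
    promise.addCallback(function (ctx, result) {
      // fires when emitSuccess is called with these arguments
    });

    var ctx = {}, result = 42;        // placeholder values
    promise.emitSuccess(ctx, result);
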
[08:34] teemow has joined the channel
[08:35] lifo has joined the channel
[08:48] sudoer has joined the channel
[08:51] nodejs_v8 has joined the channel
[08:52] keeto_ has joined the channel
[08:56] brainproxy has joined the channel
[08:56] brainproxy has joined the channel
[08:56] brainproxy has joined the channel
[08:56] paulca has joined the channel
[09:04] brainproxy has joined the channel
[09:06] brainproxy has joined the channel
[09:07] brainproxy has joined the channel
[09:07] matthijs has joined the channel
[09:09] brainproxy has joined the channel
[09:10] brainproxy has joined the channel
[09:12] brainproxy has joined the channel
[09:14] brainproxy has joined the channel
[09:17] keeto has joined the channel
[10:17] felixge has joined the channel
[10:24] ithinkihaveacat has joined the channel
[10:35] paulca has joined the channel
[10:39] blackdog` has joined the channel
[10:40] felixge: ryah_away: you still up by any chance?
[11:06] micheil: local time: 6am
[11:06] micheil: probably not up yet.
[11:08] micheil: places ryah somewhere on the eastern side of the states.
[11:23] felixge: micheil: I think he's west coast
[11:23] felixge: bay area
[11:23] micheil: oh, okay
[11:23] micheil: well, wherever the ryah_away account is sitting is on an east coast box
[11:24] felixge: yeah, I think he's using ssh/screen to connect to IRC through his server
[11:24] felixge: :)
[11:24] micheil: felixge: just to check, it's 12:23 where you are, isn't it?
[11:24] felixge: it is
[11:24] micheil: colloquy has that local time feature under the user info dialog
[11:25] felixge: :)
[11:25] felixge: nice
[11:25] felixge: micheil: it says its really late where you are
[11:25] felixge: 10pm
[11:25] micheil: not really
[11:25] micheil: really late is 5am
[11:25] felixge: well, but it's 10pm?
[11:25] felixge: where are you?
[11:25] micheil: yeah
[11:26] micheil: Eastern australia
[11:32] webben has joined the channel
[11:33] rolfb has joined the channel
[11:37] bryanl has joined the channel
[12:01] felixge: ACTION I hate google groups
[12:01] felixge: your posts don't show up right away
[12:01] felixge: and sometimes the web interface just eats them
[12:01] felixge: :|
[12:07] micheil: I just email them to nodejs@googlegroups.com
[12:07] micheil: or whatever the url is
[12:10] felixge: right, but I don't subscribe to email
[12:10] felixge: as my inbox is a mess without it
[12:10] felixge: so if I want to reply ... the web interface it is
[12:10] felixge: :|
[12:11] micheil_mbp has joined the channel
[12:16] micheil: I find it funny when people say their inboxes are a mess.
[12:23] jwm: you should see mine
[12:23] jwm: I've got an account from 1997 on yahoo
[12:23] jwm: never deleted an email
[12:23] jwm: hehe
[12:23] micheil: I just set up tags and filters to automatically sort my mail
[12:24] micheil: rather than having 200 in my inbox, I have maybe 20 in the nodejs tag, 50 in the dojo related tags, etc.
[12:24] micheil: 20 and 50 aren't big numbers
[12:25] jwm: you have an email IQ
[12:25] jwm: I don't
[12:25] jwm: even though I've run several email servers
[12:25] micheil: heh heh
[12:25] jwm: and setup filters for people and antivirus etc
[12:27] joshbuddy has joined the channel
[12:28] jwm: how do you like node.js micheil
[12:28] micheil: hmm?
[12:29] micheil: jwm: that's a little of an odd question
[12:30] jwm: nah it's a simple one
[12:30] jwm: :)
[12:30] jwm: you're suppose to say you love it
[12:30] micheil: oh
[12:31] alex-desktop has joined the channel
[12:31] jwm: did you ever work with ruby
[12:31] micheil: once
[12:31] micheil: I've worked with django, rails, and sinatra, each just for learning
[12:32] jwm: ahh
[12:32] jwm: I'm curious why people are going the javascript on server way
[12:33] micheil: I'm going it because I like the javascript language
[12:33] micheil: and because it's new
[12:33] blackdog`: image size, hot reloading, above average speed for dynamic lang
[12:33] micheil: blackdog`: image size?
[12:33] blackdog`: size of the running executable
[12:33] jwm: I like the DOM
[12:34] jwm: :)
[12:34] blackdog`: in comparison with java it's terrific
[12:34] micheil: jwm: I like the DOM as well, just not on the server
[12:34] jwm: actually my entire project involves putting the/a DOM on the server
[12:34] jwm: so this project makes the most sense for me
[12:34] jwm: hehe
[12:34] micheil: man.. how can I write node-protocol so that it actually makes sense?
[12:49] micheil has joined the channel
[12:49] micheil: does this make sense: http://gist.github.com/292635 ?
[12:49] _Ray_ has joined the channel
[12:52] micheil: hmm..
[12:58] lattice_ has joined the channel
[12:58] dbrmr has joined the channel
[12:59] lattice_: felixge: hi :-) (lattice = blaine)
[12:59] felixge: lattice_: hey there : )
[12:59] felixge: how is it going?
[13:00] charlenopires has joined the channel
[13:00] lattice_: good! you?
[13:01] lattice_: So about this reloading thing!
[13:01] lattice_: I definitely agree that it's tricky to auto-reload all modules, mostly because those modules will be assigned to variables that we can't later re-assign.
[13:02] felixge: right
[13:02] lattice_: using a module.asyncReloadable callback, it'd definitely be possible, but then we're talking about async() requires, which is I think tricky for people to wrap their heads around; wrapping a module in requires kind of sucks
[13:03] felixge: oh, maybe I was confusing there
[13:03] lattice_: I guess we could imagine something like module.requires = function () { } where all requires that are to be reloadable must go...
[13:03] felixge: I don't believe that hot reloading should make any given app "reloadable"
[13:03] felixge: I think most apps will have 1 main module that boots the webserver
[13:03] felixge: and then 1 module that acts as the "request handler"
[13:03] felixge: if that module is reloaded, you essentially reloaded the entire app
[13:04] felixge: (other than the primitive web server)
[13:04] felixge: is that what you mean?
[13:04] lattice_: Right; so in that case, my thinking was, if you're just watching a symlink or that request handler for changes, then all of the request handler's dependencies will be reloaded (synchronously, thus without race conditions) when the module is re-required
[13:04] lattice_: yup
[13:05] MattJ has joined the channel
[13:06] felixge: lattice_: what happens if a utility function used by the request handler changes?
[13:06] felixge: you'd never notice and still be running old code
[13:07] jwm: your momma uses old code
[13:07] lattice_: jwm: your momma uses pascal
[13:07] jwm: doh
[13:07] jwm: mean :)
[13:08] lattice_: felixge: right, absolutely. Basically, my thought was that in order to keep things simple, reloading would need to be explicit.
[13:08] felixge: jwm: there are companies running their web stack on tcl - your mom is cool, don't worry ;)
[13:08] lattice_: lol
[13:08] lattice_: right now, if you want to reload, for example a rails app, you need to replace the entire app directory, and everything gets reloaded
[13:09] felixge: lattice_: right, that's my thinking as well. When I "deploy" I just send a signal to my process which tells it to reload the request handler which reloads all child modules
[13:09] lattice_: exactly; I think the signal is even unnecessary - frameworks should include file watchers that allow them to auto-reload their handlers.
[13:09] jwm: I used to be big into tcl and tk
[13:09] jwm: hehe
[13:09] jwm: bought some books on it
[13:10] felixge: lattice_: how do you avoid needing a file watcher for every file in the project?
[13:10] lattice_: watchFile() should be free on most platforms (it looks like OS X is using a timer, but there is an asynchronous api for getting file change notifications)
[13:10] lattice_: felixge: you don't - just use a symlink, or watch for changes in the top-level request handler.
[13:10] lattice_: Now, this assumes that your deploy process is atomic.
[13:10] felixge: lattice_: not sure I understand your symlink idea
[13:11] felixge: (my deployment process is just doing a 'git pull' on the remote)
[13:11] lattice_: sure - umm, trying to think how to format this in irc ;-)
[13:12] felixge: lattice_: paste bin?
[13:12] felixge: ;)
[13:12] lattice_: yeah - it's not working so well to describe it in prose.
[13:13] test-client has joined the channel
[13:13] micheil: oh, okay, so it joins..
[13:14] test-client has joined the channel
[13:14] micheil: hmm...
[13:14] lattice_: http://gist.github.com/292651
[13:15] test-client has joined the channel
[13:16] lattice_: So when you want to deploy new code, you create a new checkout (cp -R; git pull), create a new symlink to it ("pending"), and then rename "pending" to "current"
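
On the watching side, the symlink idea might look roughly like this; process.watchFile is the API discussed here, the path is made up, and reloadHandler() stands in for the cache-clearing re-require that the rest of this conversation works out:

    // "current" is the symlink that gets atomically renamed into place on deploy
    var currentLink = '/apps/myApp/current';

    process.watchFile(currentLink, function (curr, prev) {
      if (curr.mtime.getTime() !== prev.mtime.getTime()) {
        reloadHandler(); // hypothetical: purge the module cache and re-require the request handler
      }
    });
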
[13:16] felixge: lattice_: oh. Well that is quite restrictive in terms of dictating how to deploy
[13:16] felixge: is that the standard in rails?
[13:16] lattice_: pretty much, I think?
[13:16] felixge: I wouldn't know, not a rails guy ;)
[13:16] felixge: but I see what you mean
[13:16] felixge: and it would work, yes
[13:16] test-client has joined the channel
[13:17] lattice_: micheil: umm - maybe find a different room to test your client? ;-)
[13:17] micheil: oh, sorry.
[13:17] koprnicus has joined the channel
[13:17] lattice_: This is just one approach, keep in mind - you could just watch requestHandler.js, but then you have potential race conditions in terms of what code is actually loaded
[13:18] felixge: right
[13:18] lattice_: i.e., you can't guarantee that same-versioned code will be loaded in any instance of the handler
[13:18] felixge: well with your approach you'd still get in trouble if two versions are deployed within < 1s
[13:18] felixge: (unless you queue up reloading)
[13:19] lattice_: even if you queue up reloading, I think you run into those problems. That's part of the reason I tried to keep it really simple, and essentially leave the synchronising to the developer, rather than trying to be really clever.
[13:20] lattice_: I thought about using timestamps, etc, but even that is totally dependent on how deployment happens.
[13:20] micheil: lattice_: just trying to work out the best way to declare node-protocol
[13:21] lattice_: micheil: no worries ;-)
[13:22] lattice_: felixge: and of course, app or library developers could add in reload hooks anywhere they please. I definitely agree that it'd be nice to have the ability to reload on any-file change, but I worry a lot about the complexity of dependency tracking.
[13:23] _sh has left the channel
[13:23] lattice_: What about this - if we had a global module hierarchy, we could say something like "oh, uri.js was changed. find its top-level parent, and trigger an onReload event for that module"
[13:24] felixge: nah, I don't even want to go into thay
[13:24] felixge: * that
[13:24] felixge: my patch was just meant so that you could call require.hot() any time and as much as one pleases
[13:24] felixge: without running into issues
[13:24] felixge: that will be hard to understand without intimate knowledge of node's co-routines or the module loading system
[13:25] felixge: I don't want to address the problem regarding *when* to trigger require.hot
[13:25] felixge: that should be left to the developers
[13:25] felixge: and they can use your symlink strategy
[13:25] felixge: or just send a signal to the process
[13:25] felixge: whatever they like
[13:25] lattice_: Oh - right, that's not what I was saying.
[13:28] lattice_: just that right now, if you were to say "don't ever cache this module", then you're going to end up swapping out modules in old running requests
[13:30] lattice_: which might be useful, but I worry that it's more complicated to explain the relatively complex side-effects than to just force a single module to reload in a known-safe context.
[13:31] felixge: who says "don't ever cache this module" ?
[13:31] lattice_: that said, I really do agree that your idea makes sense.
[13:32] felixge: require.hot() just says load this module, but ignore anything other than the global cache
[13:32] lattice_: that's my reading of hot()? Maybe I'm misunderstanding it - let me take another look.
[13:33] lattice_: okay, sorry, I got caught in my own eddies.
[13:36] lattice_: So, reading the maybeLoadHotModule(), it looks like you're requiring that the requires are async? Am I reading that wrong?
[13:38] lattice_: I'm not sure that there's actually the real potential for massive storms of reloads; give me a sec, I think I have an approach to implement require.hot() in a "wait() friendly" way.
[13:40] _Ray_ has joined the channel
[13:41] aho has joined the channel
[13:43] felixge: lattice_: k, let me see :)
[13:43] lattice_: felixge: typing it up :-)
[13:50] bryanl has joined the channel
[13:55] lattice_: felixge: okay, http://gist.github.com/292682
[13:55] lattice_: it's totally untested, and just a sketch, but I think it should get the idea across
[13:56] felixge: lattice_: I thought we agreed that file watching should not be implemented in the node core?
[13:56] lattice_: felixge: I don't really see why not, as long as it's something that's just a helper, explicitly triggered by the developer
[13:58] felixge: lattice_: I'm not even sure what this would do. parent.require() is not going to update any existing references to the module
[13:58] lattice_: felixge: no - it's just a pre-cache for the next time require() is called
[13:58] lattice_: we could also aggregate reloads, so that if one path is watched by a number of modules, loadModule would only get called once
[13:58] felixge: oh
[13:58] lattice_: which isn't such a big deal, since it's all async, but it would cut down on IO and eval
[13:59] felixge: hm, I think its confusing
[13:59] felixge: and again, it only works if the module itself is changed
[14:00] felixge: if one of its children change, it has no effect
[14:00] lattice_: sure; but I think that's an intractable problem without some explicit handling
[14:00] felixge: yes. Well my solution is to not solve the "when" to reload problem
[14:01] felixge: Instead we should just provide a method to reload a module without referencing any cached modules in the process
[14:01] felixge: i.e. require.hot()
[14:01] lattice_: I think our approaches are the same, the only difference is instead of forcing a reload at require.hot(), just say "allow this module in this context to be reloaded when changed"
[14:01] felixge: it's the same as require.async() but it just ignores the cache
[14:02] felixge: no its different
[14:02] felixge: my approach will always reload all child modules
[14:02] felixge: yours won't unless each child module is explicitly included using require.hot()
[14:03] felixge: which becomes a problem as soon as you rely on 3rd party libs you may wish to update
[14:03] felixge: * update = reload
[14:06] lattice_: oh - okay, in that case couldn't we just: http://gist.github.com/292682
[14:06] lattice_: (again, this is just a sketch, the code's messier than I would commit ;-) )
[14:07] lattice_: maybe the naming is confusing - what about just having module.clearCache() ?
[14:08] lattice_: or parent.flushModuleCache(module_path)
[14:09] felixge: won't work. require.async emits 'exports' on success, not a reference to the module
[14:09] felixge: but even if it did, the children would already have been loaded using the cache, by the time you try to unset the cache its too late
[14:10] lattice_: sorry, require.flushModuleCache(module_path)
[14:10] lattice_: ahh, you're right, nevermind.
[14:10] felixge: my first idea was to clear the parent's cache before doing require.async
[14:10] felixge: this works somewhat
[14:11] lattice_: too many side-effects
[14:11] felixge: but as soon as you are loading 2 modules from the parent at the same time, the cache starts to populate
[14:11] felixge: and since the cache is re-checked for createModule, this gets you into race conditions
[14:12] felixge: I think the main thing that would simplify this whole mess is to use sync i/o for require()
[14:12] felixge: something ryan is seriously considering
[14:12] felixge: the only problem is that right now, we are supporting remote module loading (load a module from a url)
[14:12] lattice_: eek - that would be *very* scary in the case of an unstable filesystem
[14:12] felixge: lattice_: why?
[14:12] felixge: because your process is blocked the whole time?
[14:13] lattice_: felixge: yeah - flaky IO subsystems can mean *huge* delays
[14:13] felixge: hm, ok
[14:13] lattice_: especially if you're on a NAS, as in EC2 or other so-called cloud providers
[14:13] felixge: well there is an entirely different way to go about
[14:13] felixge: * about it
[14:13] pmuellr has joined the channel
[14:13] felixge: but this is not nearly as convenient
[14:13] felixge: net2 will eventually add support for web workers
[14:14] felixge: when you need to reload your code, you could just start new workers that will pick up the new code
[14:14] felixge: and all new requests go them
[14:14] felixge: (you can directly send the tcp request socket to the worker, so this is very little overhead)
[14:14] felixge: but you'll have to "phase out" old workers yourself
[14:15] rolfb has joined the channel
[14:15] lattice_: it seems kind of heavy for what should be a simple thing
[14:15] felixge: one benefit is that if there are exceptions, you main web server is not affected
[14:15] felixge: right
[14:15] felixge: I think I'll move to this setup with transload.it at some point
[14:15] felixge: but its overkill for most people
[14:15] lattice_: what if we incorporated the require queue with my async approach?
[14:16] felixge: I think there should be a queue for require, yes
[14:16] felixge: but I don't think we should use watchFile to automatically trigger reloads
[14:16] felixge: at least not from within the module loading system. People should do this from the outside
[14:17] lattice_: that would mean that you'd only be forcing re-requires occasionally, so the chance of doing a lot of wait()s is already low, but combined with a queue, it guarantees that multiple waits() won't be fired simultaneously
[14:17] lattice_: wait() doesn't block the whole node runtime, does it? My understanding is that it doesn't, but I don't know it very well.
[14:18] lattice_: what's the resistance to using watchFile? I worry that people will just ship code with require.hot() everywhere, which will be *massively* slower than require()
[14:18] lattice_: if we're going to give them a tool to shoot themselves in the foot, we should at least give them steel-toed boots. ;-)
[14:18] felixge: my worry is that watchFile will not fire when child modules get changed
[14:18] felixge: and not everybody wants to do a symlink deployment strategy
[14:19] lattice_: watchFile in the code I posted isn't dependent on symlinks
[14:19] felixge: lattice_: right, but its not catching child module changes either
[14:19] lattice_: felixge: this one is: http://gist.github.com/292682
[14:20] lattice_: that code will catch any child modules that have changed up until the parent module is reloaded
[14:20] felixge: lattice_: I think I'm missing something
[14:20] felixge: I can't see how this works
[14:21] felixge: walk me through it
[14:21] felixge: how would you use this?
[14:21] lattice_: So assuming you have some code, "var myModule = require('./myModule');"
[14:21] lattice_: sorry
[14:21] felixge: lets assume a 'requestHandler' module, ok?
[14:21] lattice_: So assuming you have some code, "var myModule = require.hot('./myModule');"
[14:21] lattice_: sure
[14:21] felixge: k
[14:22] lattice_: So assuming you have some code, "var requestHandler = require.hot('./requestHandler');"
[14:22] lattice_: now, when that gets called, it sets up a file watcher on "./requestHandler" (expanded to an absolute path) and immediately returns the current version of the module.
[14:22] felixge: k
[14:23] cloudhead has joined the channel
[14:23] lattice_: the asynchronous file watcher then waits for changes to requestHandler; when that happens:
[14:23] lattice_: 1. the request handler is re-loaded
[14:24] lattice_: 2. the parent module's cache of "./requestHandler" is cleared
[14:24] felixge: lattice_: right, shouldn't it be the other way around to have any effect ?
[14:25] lattice_: 3. (I'm noticing that I have duplicate code - ignore #1)
[14:25] lattice_: So it *should* be,
[14:25] lattice_: #1 the parent module's cache of "./requestHandler" is cleared
[14:25] lattice_: #2 the parent asynchronously requires "./requestHandler" (the new version)
[14:25] brosner has joined the channel
[14:26] lattice_: hmm, no, you're right, there needs to be a "force no parent cache" argument to the Module constructor
[14:26] lattice_: but assume that for the sake of argument
[14:27] lattice_: we could also just not pass in a parent to loadModule, and manually insert the loaded module into the parent's cache, which would have the same effect.
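
Put together, the walk-through above amounts to something like the following rough sketch; it is not the actual patch, and resolve(), parentModule, moduleCache and loadModule() are stand-ins for node's internal module machinery:

    require.hot = function (path) {
      var absPath = resolve(path, parentModule);      // hypothetical path resolution

      process.watchFile(absPath, function () {
        delete parentModule.moduleCache[absPath];     // 1. clear the parent's cache entry
        loadModule(path, parentModule);               // 2. async re-require; pre-warms the cache for the next require()
      });

      return require(path);                           // immediately return the current version
    };
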
[14:27] felixge: well there is still one problem remaining
[14:27] felixge: If only a child module changes, but it's parent module does not
[14:27] felixge: it will not trigger the watchFile
[14:28] felixge: example: You push a bugfix for an utility module
[14:28] n8o has joined the channel
[14:28] lattice_: hmmmm - I think this brings me back to a much earlier suggestion
[14:29] lattice_: what about having a module.watchThisForChangesAndClearMyModuleCacheAndRemoveMeFromMyParentsCache(parent) function?
[14:29] felixge: so you want to do watchFile() for every module?
[14:30] felixge: regardless how it got included?
[14:30] felixge: *probably* almost free on linux, but OSX might be busy doing nothing else but polling those files for changes ;)
[14:31] lattice_: node should use OSX's native event-driven file change interface - polling is unnecessary. BUT
[14:31] lattice_: no - just for modules whose cache hierarchy has some flag saying that they should be watched
[14:31] felixge: hm
[14:31] davidsklar has joined the channel
[14:31] felixge: I don't know
[14:31] lattice_: in any event, it'd be more efficient in any sort of high traffic situation than re-loading every child module on every require
[14:31] felixge: when I develop and I hit save and there is a syntax error, my server dies
[14:31] felixge: I'd rather tell my server to reload when I'm ready
[14:32] felixge: less magic, less trouble
[14:32] lattice_: I think the former is less magic - that's how PHP works. ;-)
[14:32] felixge: no, PHP has no state
[14:32] felixge: so a PHP request crashing is harmless
[14:32] felixge: a node process has state
[14:32] felixge: and crashing it sucks
[14:32] felixge: :)
[14:33] lattice_: well, all this assumes that there's a global exception handler. ;-)
[14:33] felixge: there is, but it's limited
[14:33] lattice_: no, I mean, if you hit save on PHP, all future requests will fail
[14:33] lattice_: it would easily catch this, though. ;-)
[14:33] felixge: lattice_: I'm talking about local development, the future request will only fail if there *is* a future request
[14:34] felixge: however, the node process will hit an error automatically
[14:34] felixge: let's go one step back
[14:34] felixge: your main problem is a high-traffic scenario, right?
[14:34] felixge: and you think my "full-reload" approach will suck there?
[14:34] lattice_: felixge: agreed - but wouldn't it make more sense to have a node flag to disable require() caching entirely for development mode?
[14:34] lattice_: right
[14:34] lattice_: (i mean node command line flag, to clarify)
[14:35] felixge: so for the high-traffic app we are assuming, let's say there is a request_handler module which needs reloading on deployment, right?
[14:36] felixge: my idea is that you manually communicate the need for reloading to the process and thus reloading only happens when it's actually required
[14:36] felixge: this is done via: `kill -USR1 <pid>`
[14:36] felixge: node can listen for this signal
[14:37] lattice_: felixge: I know - but that sort of interaction is a *much* higher barrier to entry
[14:37] felixge: lattice_: here is the code I'm using: https://gist.github.com/fcf7dbb089a810a4c728
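
The signal approach, in outline; a sketch rather than the gist itself, assuming signal events on process and the promise-returning require.hot() from the patch:

    var handler = require('./request_handler');

    // deploy script runs: kill -USR1 <pid>
    process.addListener('SIGUSR1', function () {
      require.hot('./request_handler').addCallback(function (fresh) {
        handler = fresh; // the reference must be swapped explicitly, as noted above
      });
    });
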
[14:38] steadicat has joined the channel
[14:38] lattice_: it seems like the target audience for node is php / javascript developers who want to write effective high-concurrency, asynchronous apps
[14:38] felixge: lattice_: why? You'll have to explicitly update your reference to the 'request_handler' module anyway
[14:38] lattice_: sure, but that can be handled in a framework, rather than by the developer themselves
[14:38] felixge: right, and they can do hot reloading by just saying: require.hot('./request_handler').addCallback(function(handler) { handler.handle(req); });
[14:39] felixge: for every request
[14:39] felixge: lattice_: again, the reference to the requestHandler module needs to be updated
[14:39] lattice_: I know - I'm assuming that's done in a context where the framework handles that.
[14:39] felixge: and teaching people to use an 'inline' require that uses a somewhat intransparent cache purging strategy is very bad
[14:40] felixge: because a) You should almost never use require anywhere other than in the header of your module anyway
[14:40] felixge: b) You shouldn't teach people to rely on sync operations anywhere
[14:40] lattice_: I don't think it's an opaque purging strategy - it's explicitly "require.hot() guarantees that the most recent version of the file will be loaded"
[14:41] felixge: give me a full example
[14:41] felixge: with a http server
[14:41] felixge: of how you'd use your api
[14:41] lattice_: watchFile is very reliable - if it's not, then there are bugs with the implementations
[14:41] felixge: (can be pseudo codish)
[14:41] lattice_: So here's the first example:
[14:41] lattice_: http://romeda.org/blog/2010/01/hot-code-loading-in-nodejs.html
[14:42] lattice_: but that last line could be converted to:
[14:42] felixge: lattice_: why not go the other way around. Build a sha1 of the modules that are loaded and check if it changed before deciding to use cache?
[14:42] lattice_: you're still doing IO in that case
[14:42] lattice_: http.createServer(require.hot('./myRequestHandler')).listen(8000);
[14:43] lattice_: which says "use the most recent version of ./myRequestHandler"
[14:43] lattice_: my watching code (on the gist) could be updated to add watchers for all child modules of myRequestHandler
[14:43] felixge: hm
[14:43] Booster has joined the channel
[14:44] felixge: tempting
[14:44] felixge: it just feels wrong on some levels to me
[14:44] lattice_: in that case, you're actually doing the require async *every time* except for the first one
[14:44] felixge: right
[14:44] lattice_: while the new module is loading, you just use the old one. ;-)
[14:44] felixge: so essentially you'd have to propagate the fact that you are loading a hot module to all child modules
[14:44] lattice_: right
[14:44] felixge: and setup watchers
[14:45] felixge: hmmm
[14:45] lattice_: it's *way* more efficient than reloading all the time - you could actually use my approach on a fully loaded server, I think.
[14:45] felixge: I might be coming around here
[14:45] felixge: well, the bootup is problematic
[14:46] felixge: as you could hit this code multiple times initially
[14:46] felixge: before the module is loaded for the first time
[14:46] felixge: a single require.hot('./myRequestHandler') call in the header would be better
[14:46] lattice_: right
[14:46] lattice_: well, no - hmm
[14:47] lattice_: potentially, I guess? We could introduce a global module cache that is guaranteed to be populated with the most recent version of any module?
[14:47] felixge: actually looking at your code again
[14:47] felixge: nvm
[14:47] felixge: the server wouldn't start before require.hot() finished the first time
[14:48] felixge: it would be more problematic if you'd give a closure to the createServer() and do the require.hot() inside of that
[14:48] lattice_: there is a wait() that returns immediately for subsequent calls
[14:48] felixge: so this is actually cool
[14:48] lattice_: right
[14:48] lattice_: I think if you were worried about startup, you could do a single require('./myRequestHandler') to get the cache primed, then use require.hot() in the createServer call
[14:48] felixge: actually wait
[14:48] felixge: no
[14:49] felixge: it doesn't work
[14:49] felixge: http.createServer(require.hot('./myRequestHandler').handler).listen(8000);
[14:49] felixge: would never reload
[14:49] lattice_: sorry, yes - you need it in a child level, not in createServer, since createServer creates its own closure
[14:49] felixge: it would just execute a single time
[14:49] felixge: so the problem I saw does exist because you have to move the require.hot() call into its own closure
[14:49] felixge: so its executed for each request
[14:49] lattice_: http.createServer(function(req,res) { require.hot('./myRequestHandler').handler(req, res) }).listen(8000)
[14:50] felixge: right
[14:50] felixge: but this has the problem I mentioned
[14:50] felixge: the server could be hit 100 times before the first require.hot() call finishes
[14:50] felixge: incurring 100 .wait() callstacks
[14:50] lattice_: just put "require('./myRequestHandler')" before the createServer call
[14:50] felixge: again, this is why I'd like to do require.hot() async
[14:51] felixge: lattice_: yeah, but people will fall into this trap
[14:51] felixge: we're setting them up for failure
[14:51] lattice_: ok - that's an interesting idea.
[14:51] lattice_: what about require.watch(path, callback)
[14:52] lattice_: that way, people could do something like (hypothetical)
[14:52] lattice_: require.watch('./myRequestHandler', function (requestHandler) {
[14:52] lattice_: httpServer.handler = requestHandler;
[14:52] lattice_: });
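
Pulled together with the http server from earlier, the proposal might read like this sketch (require.watch and require.hot as proposed in this discussion, handler.handle as in felixge's example):

    var http = require('http');

    var handler = require('./myRequestHandler');   // prime the cache once at boot

    http.createServer(function (req, res) {
      handler.handle(req, res);                    // always the most recently swapped-in version
    }).listen(8000);

    // fires whenever the module (or, ideally, one of its children) changes
    require.watch('./myRequestHandler', function (freshHandler) {
      handler = freshHandler;
    });
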
[14:52] felixge: hm
[14:52] felixge: that's a cool idea
[14:52] felixge: the other thing I was gonna say is this:
[14:52] felixge: https://gist.github.com/4892007b0a3223b0ea61
[14:53] felixge: but I dislike the overhead this will incur on every hit compared to my signal approach
[14:53] felixge: your watch() idea would circumvent this
[14:53] felixge: i.e. make it perform as well as my suggestion
[14:53] felixge: require.watch() is a very cool idea
[14:54] felixge: but I think it would be an addition to the require.hot() I'm proposing
[14:54] felixge: not a replacement
[14:54] lattice_: right - in this case, you could have different approaches - like, require.disableWatchFile();
[14:54] lattice_: or require.forceReload('module') <----- this could be put inside a signal watcher
[14:55] felixge: forceReload() is the same as require.hot() ?
[14:55] felixge: (i.e. the require.hot() from my patch?)
[14:55] lattice_: felixge: yes
[14:55] lattice_: well, similar
[14:55] lattice_: forceReload would just clear the cache
[14:55] lattice_: so that require.watch() would trigger
[14:55] cloudhead has joined the channel
[14:56] felixge: why would you need that if watchFile() operates properly?
[14:56] lattice_: felixge: I wouldn't ;-) I just want to take your goals into account.
[14:56] felixge: my goal is to have awesome hot reloading. I don't give a shit about signals :)
[14:56] lattice_: lol ;-)
[14:56] felixge: I like your idea of require.watch a lot
[14:57] felixge: but I think we need something like my require.hot() that we can execute when watch fires
[14:57] lattice_: it feels really effective to me, too - I think it's a nice balance.
[14:58] lattice_: by require.hot() in this case, you mean queuing requires?
[14:58] kennethk_ has joined the channel
[14:58] lattice_: what about just making require() queue in general? It seems like even without re-requiring, you could trigger problems (e.g., a deep or broad set of requires, fired asynchronously)
[14:59] felixge: no. I mean require.hot() would just reload a module while ignoring any cache there might be for it
[14:59] felixge: yeah, I think require should always be queued
[15:00] lattice_: ahh, yes - I think that would have to be assumed, that watch() reloads a fresh version of the module
[15:00] lattice_: how that's actually achieved is less important to me - though it would be interesting to be able to specify a "watchAllChildren" or something
[15:00] felixge: I don't think watch should *do* anything
[15:00] felixge: it should just trigger if a module (or one of its children) has changed
[15:01] felixge: not more, not less
[15:01] lattice_: ahh, I see. hmmmmmmm.
[15:01] lattice_: I think it's important that the require.watch callback gets called with the actual instantiated module, to keep the API simple
[15:01] felixge: it's essentially just a more convenient approach than my signals to figure out "when" to reload
[15:02] lattice_: because otherwise it's just the same as what I already have here: http://romeda.org/blog/2010/01/hot-code-loading-in-nodejs.html
[15:02] felixge: but that's no watching, that's dynamic reloading
[15:02] lattice_: sure it is - right at the top, I have a watchFile
[15:02] felixge: we could provide a higher level function for that
[15:03] felixge: require.onChange('./request_handler')
[15:03] r11t has joined the channel
[15:04] felixge: the second argument being a callback
[15:04] felixge: that gets a reference to the module
[15:05] lattice_: ok - so to step back a bit, the functionality of require.hot() is just "reload this module and any children"?
[15:05] lattice_: (ignoring for the moment the require queuing)
[15:05] felixge: yes!
[15:05] felixge: I think that's a core building block we need
[15:07] lattice_: alright! yes. I think it's really simple - using my code, require.hot = function (url, callback) { module.unCacheModule(url);
[15:07] lattice_: callback(require(url)); }
[15:07] lattice_: you could s/callback(require(url))/require.async(url, callback)/
[15:08] felixge: no, its not that simple
[15:09] felixge: if the module you are in references the same utility module that the child you are reloading references
[15:09] felixge: the child will inherit the cached version of this
[15:09] lattice_: it's true, though in that case, the parent module could just require.watch('utility-module'), too
[15:09] felixge: you need to do the reloading with a cache that is entirely clean of anything but internal modules
[15:09] felixge: otherwise it sucks
[15:10] felixge: yeah, but that seems stupid
[15:10] lattice_: ok - fair enough - I'm not totally convinced that it sucks, but I'm pretty ambivalent about the point. ;-)
[15:10] felixge: I'd say the rule should be to ignore cache completely for hot reloading
[15:10] felixge: ok
[15:10] felixge: :)
[15:10] felixge: well, it just introduces the chance of silent & hard to debug failures
[15:11] lattice_: fair enough. ;-) So I think that means we're agreed!
[15:11] cloudhead: in what release is __dirname supported? I just downloaded .26, and it doesn't seem to be there, is that right?
[15:11] felixge: cloudhead: HEAD only
[15:12] cloudhead: ah I see, I must have compiled from source on my other computer ; |
[15:12] cloudhead: er I mean, from github
[15:12] lattice_: (I'm going to reverse onChange and watch, since that makes more sense to me) so require.onChange(module, callback) is an asynchronous require that calls the callback when the module changes, and require.watch(module, callback) is an asynchronous require that calls the callback when the module changes with the new module included (i.e., it passes require.hot() to the callback).
[15:13] felixge: lattice_: to summarize: We need require.hot() which is reloading without cache. And we need require.watch() which is a recursive change watcher for the module and its children?
[15:13] felixge: and from those two combined, we could implement require.onChange()
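A rough sketch of how those two primitives could compose, using the watchFile facility mentioned earlier and the require.hot() from the sketch above; every name here is part of the proposal rather than a shipped API, the recursive watching of child modules is left out, and the module id is assumed to already be a resolvable file path:

    require.watch = function (url, callback) {
      require.hot(url, callback);                       // deliver the initial module
      process.watchFile(url, function (curr, prev) {
        if (curr.mtime.valueOf() !== prev.mtime.valueOf()) {
          require.hot(url, callback);                   // file changed: reload and re-deliver
        }
      });
    };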
[15:13] lattice_: can we switch the onChange / watch terminology?
[15:13] felixge: lattice_: switch it to what?
[15:14] felixge: I mean which is which?
[15:14] felixge: :D
[15:14] lattice_: just so that onChange detects file changes, whereas watch() is the combination of onChange and require.hot()
[15:14] lattice_: it makes more semantic sense, I think
[15:14] lattice_: onChange feels lower-level than watch()
[15:16] felixge: hm
[15:16] felixge: its probably still confusing terminology
[15:16] felixge: we need something better
[15:16] felixge: but I think we agree on the overall idea
[15:17] lattice_: how about we don't expose onChange(), but just watch(), and have it return the freshest version of the module, already instantiated?
[15:18] felixge: hm
[15:18] felixge: maybe
[15:18] lattice_: with the documented caveat that watch() should only be used where you actually want/need the modules reloaded
[15:18] felixge: I don't know why you would want to watch without reloading
[15:18] lattice_: right
[15:22] felixge: ok
[15:22] felixge: so maybe require.hot() should be named: require.unCached() ?
[15:23] felixge: hm maybe not
[15:23] felixge: I kinda like the name ;)
[15:26] felixge: lattice_: what ya think? We got a deal?
[15:26] felixge: :)
[15:26] _4get has joined the channel
[15:33] voodootikigod: ryah_away: ping me when you get online
[15:33] joshbuddy has joined the channel
[15:34] lattice_: felixge: sorry, got pulled away from the keyboard for a second - require.unCached() looks great - hot is a bit obscure - I like it, too, but it's probably easier to make it clear ;-)
[15:35] felixge: ok
[15:35] felixge: do you think we actually need your unCacheModule() function?
[15:35] felixge: I don't see a use case for it
[15:36] lattice_: felixge: I think the ability to re-require while preserving the parent's cache is a useful feature
[15:37] felixge: lattice_: for end users or hackers?
[15:37] felixge: I mean module.moduleCache is exposed to any module who wishes to modify it
[15:38] lattice_: I guess that's true - resolveModulePath isn't, though - that's the main reason for unCacheModule's existence
[15:38] lattice_: brb
[15:42] felixge: lattice_: right, require.resolve() or require.path() probably should get exposed
[15:43] lattice_: right - because resolveModulePath right now is just in the node.js context, I think the only way to do that is with a helper function like unCacheModule()
[15:44] lattice_: but I think it's not too big a deal - there are a few viable approaches, and I don't think anyone's going to be building apps that are widely distributed and dependent on the new api ;-) If you're not totally opposed, we should just ship it and then iterate as more people interact with it.
[15:45] felixge: yeah
[15:45] felixge: np on my end
[15:45] felixge: so remaining tasks
[15:45] felixge: a) Make all require calls use a queue, not just the hot ones
[15:45] felixge: b) Rename hot to unCached
[15:45] felixge: c) Fix unit tests to not use wait()
[15:46] felixge: d) Implement watch()
[15:46] felixge: anything I missed?
[15:46] lattice_: right. I don't think so.
[15:47] lattice_: the queue stuff should be transparent from the others, right?
[15:47] felixge: lattice_: transparent = independent ?
[15:47] felixge: yeah pretty much
[15:47] lattice_: right
[15:48] felixge: I can do the queue right now
[15:48] lattice_: cool - I have some things to do, so I can't make the changes right now, but I'm happy to tackle c & d in a few hours.
[15:48] lattice_: sounds good
[15:48] felixge: sounds good
[15:48] felixge: :D
[15:48] lattice_: cool - this'll be awesome. Oh! (e) document this stuff.
[15:48] felixge: I have stuff todo as well, but I'm using my procrastination energy to do good in the world :)
[15:48] felixge: yeah, ryan will ask for docs *g*
[15:49] lattice_: lol - procrastination++
[15:50] charlenopires has joined the channel
[15:51] kriszyp has joined the channel
[15:51] felixge: lattice_: trying to send you a pm, not sure you got it
[15:51] lattice_: hmm - oh, it's probably on my other computer
[15:51] lattice_: argh, damn NAT FTL.
[15:51] lattice_: one sec
[15:53] felixge: lattice: did it work now
[15:53] felixge: ah I think it did
[15:57] bentomas has joined the channel
[15:58] jubos has joined the channel
[15:58] alexiskander has joined the channel
[16:22] binary42 has joined the channel
[16:23] Connorhd has joined the channel
[16:24] binary42 has joined the channel
[16:31] pdelgallego has joined the channel
[16:43] okito has joined the channel
[16:43] rolfb has joined the channel
[17:00] matthijs has joined the channel
[17:04] okito has joined the channel
[17:06] steadicat has joined the channel
[17:13] brainproxy has joined the channel
[17:14] vhost- has left the channel
[17:16] creationix has joined the channel
[17:18] JimBastard has joined the channel
[17:22] JimBastard: sup inimino
[17:27] qFox has joined the channel
[17:31] brandon_beacher has joined the channel
[17:35] tlrobinson_ has joined the channel
[17:37] inimino: JimBastard: hey
[17:38] isaacs has joined the channel
[17:39] JimBastard: hey inimino, me and Tim_Smart started working on a google wave doc for a pretend node framework
[17:39] stephenlb has joined the channel
[17:40] JimBastard: you got any interest in taking a peek / waving
[17:40] JimBastard: im not sure if we are going to build it , as much as talk about it and write down wish lists
[17:41] inimino: JimBastard: sure, I'm not a big fan of Wave but I'm definitely interested in seeing your ideas
[17:42] gwoo: ACTION is not a wave fan either
[17:42] gwoo: but is also interested in seeing ideas :)
[17:44] inimino: (in my experience, the sooner you can get those ideas out of Wave and onto the real Web, the more exposure they'll get, and the more people are likely to get involved)
[17:45] Tim_Smart has joined the channel
[17:46] nodejs_v8 has joined the channel
[17:48] Tim_Smart: nodejs_v8: "lla2iah".split("").reverse().join("")
[17:48] nodejs_v8: Tim_Smart: "hai2all"
[17:48] sahnlam has joined the channel
[17:49] JimBastard: inimino, its only been a day
[17:49] JimBastard: but ya
[17:49] JimBastard: Tim_Smart can you invite inimino?
[17:49] Tim_Smart: sure
[17:49] Tim_Smart: his email?
[17:49] mattly has joined the channel
[17:49] JimBastard: i worked on in a bit last night
[17:50] mrd`: nodejs_v8: process
[17:50] nodejs_v8: mrd`: Exception: ReferenceError: process is not defined
[17:50] mrd`: nodejs_v8: global
[17:50] nodejs_v8: mrd`: Exception: ReferenceError: global is not defined
[17:50] mrd`: nodejs_v8: GLOBAL
[17:50] nodejs_v8: mrd`: Exception: ReferenceError: GLOBAL is not defined
[17:50] mrd`: Stupid security.
[17:50] stephenlb: heh
[17:50] Tim_Smart: mrd`: :p
[17:50] ryah: morning
[17:50] Tim_Smart: morning2 ryah
[17:51] inimino: morning ryah
[17:51] Tim_Smart: inimino: https://wave.google.com/wave/?pli=1#restored:wave:googlewave.com!w%252BayXwiDRMA
[17:51] isaacs: hey, so, you can require(url), right? but then, you can do relative requires inside that module.
[17:51] isaacs: any reason why that is?
[17:52] isaacs: i mean, is that by design, or is it a bug? because i'd love to be able to do that.
[17:53] ryah: by design
[17:54] Tim_Smart: JimBastard: http://github.com/gimite/web-socket-js
[17:55] Tim_Smart: JimBastard: It falls back to flash from native
[17:55] inimino: hm, Tim_Smart it says I am not a participant... I guess you need to invite inimino@inimino.org
[17:55] Tim_Smart: add yourself?
[17:55] Tim_Smart: it is supposed to be public >.>
[17:56] eikke has joined the channel
[17:56] Tim_Smart: added
[17:56] _Ray_ has joined the channel
[17:56] scudco has joined the channel
[17:57] creationix: does anyone want to be a contributor to howtonode.org
[17:57] inimino: Tim_Smart: ah, I got it now, thanks
[17:59] rtomayko has joined the channel
[17:59] creationix: I mean as a github collaborator with commit rights, anyone of course can write articles.
[17:59] omygawshkenas has joined the channel
[18:08] inimino: Tim_Smart: looks good
[18:09] JimBastard: you like so far inimino?
[18:09] inimino: JimBastard: yeah, I think it would be good to pull out stuff like jQuery or mustache.js, so as to avoid limiting the audience too much
[18:10] JimBastard: well, yeah
[18:10] inimino: people have their own library preferences, and if there's one 'blessed' library it will put other people off
[18:10] JimBastard: you could make all those plugable
[18:10] inimino: even if you can use others
[18:10] inimino: yes
[18:10] JimBastard: but i think it would be in the best interest to stress the defaults
[18:10] JimBastard: also, realistically you gotta start somewhere
[18:11] inimino: well, if you focus on the server-side, it shouldn't matter what client-side existing libraries people use
[18:12] inimino: the same goes for templating, I think, though maybe that's less clear-cut as you probably want to do templating on the server
[18:12] Tim_Smart: inimino: definitely
[18:12] JimBastard: inimino you could want it on both
[18:12] JimBastard: you could be templating client-side or server-side
[18:12] JimBastard: heavy ajax apps need client-side
[18:13] inimino: and if you do something really good with Web Sockets / Comet fallback, you'll probably need a custom client-side component for that
[18:13] inimino: that could be an interesting project in itself :-)
[18:13] inimino: I think someone said on the list if you make this easy and get it right you will clean up the Comet space, which I agree with
[18:13] Tim_Smart: http://github.com/gimite/web-socket-js
[18:13] inimino: I'd be interested in working on something like that
[18:14] isaacs: ryah: really? what's the problem with it? just complexity?
[18:15] inimino: isaacs: did you mean you /can't/ do them?
[18:15] isaacs: yes.
[18:15] ryah: isaacs: ?
[18:15] inimino: oh.
[18:15] isaacs: node http://blah/foo.js
[18:15] isaacs: if foo.js does require("./bar") then it fails
[18:15] isaacs: i'd like it to fetch http://blah/bar.js
[18:15] ryah: isaacs: oh - yeah it should - it's a bug
[18:15] isaacs: great.
[18:16] ryah: of course this is the part where i hate myself for accepting the sync module loading
[18:16] ryah: gr..
[18:17] JimBastard: back to work crap
[18:17] konobi: ACTION just wouldn't bother supporting remote js files
[18:17] JimBastard: inimino see if you can clean up any of those requirements / add more / take notes
[18:17] JimBastard: ill peep it later
[18:17] JimBastard: peace
[18:18] inimino: ryah: I do think it's possible to get rid of that, if we can restrict require() to only accept a static string...
[18:19] bpot has joined the channel
[18:19] nefariousD has joined the channel
[18:19] inimino: ryah: then you can have an async require()-alike that still works with normal modules
[18:20] ryah: inimino: not sure if i follow
[18:20] ryah: and by "not sure" i mean "i don't"
[18:20] Tim_Smart: Usually require is used on startup, so having it sync is alright. It would be cool having it async when hot-plugging
[18:20] inimino: hehe
[18:20] inimino: ryah: well, I mean if you have a regular old CommonJS module that has "require('bar')" at the top
[18:21] inimino: ryah: then you want to do async_require("foo") (where foo is that regular module requiring bar)
[18:21] ericflo has joined the channel
[18:21] mikeal has joined the channel
[18:21] itistoday has joined the channel
[18:21] Tim_Smart: inimino: just do async when a callback is detected
[18:22] Tim_Smart: require("", callback);
[18:22] inimino: ryah: as long as everything that calls require() does so with a static string, we can parse the module, find all the require() calls, cache all the module file bodies, and then load the initial module
[18:22] ryah: yeah - i guess - seems kind of messy
[18:22] lifo has joined the channel
[18:22] itistoday: this has been driving me crazy, hopefully you guys can help. just the other day i saw something (i think on HN) talking about dynamically reloading modules using watchFile or something like that
[18:22] inimino: Tim_Smart: that means you can't asynchronously load a CommonJS module, because CommonJS modules can have plain sync require() calls
[18:22] itistoday: anyone know what i'm talking about?
[18:23] itistoday: it was on some blog post
[18:23] ryah: why don't we just diverge from commonjs totally and have load()
[18:23] Tim_Smart: itistoday: yeah it has been thrown around a bit
[18:23] ryah: it can be async
[18:23] Tim_Smart: ryah: sounds good
[18:23] ryah: and we'll go back to having process.addListener("loaded")
[18:23] inimino: ryah: messy a bit, but it's just implementation messiness... callers can get the asynchronous loading without having to worry about rewriting modules to take advantage of it... I think it's a neat solution
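A sketch of the static-analysis step inimino describes: scan a module's source for require() calls with literal string arguments so their bodies can be fetched and cached before the module is evaluated. The regex and the surrounding fetch/cache plumbing are illustrative assumptions:

    function findStaticRequires(source) {
      var re = /require\s*\(\s*(['"])([^'"]+)\1\s*\)/g;
      var deps = [];
      var match;
      while ((match = re.exec(source)) !== null) {
        deps.push(match[2]);     // the literal module id
      }
      return deps;
    }

    // findStaticRequires("var a = require('./a'); var b = require(\"b\");")
    // => ['./a', 'b']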
[18:23] tlrobinson: nooooooo
[18:23] ryah: tlrobinson: :)
[18:23] itistoday: Tim_Smart: but do you know the blog post i'm referring to?
[18:23] Tim_Smart: ryah: make sure you can load multiple modules in one call
[18:24] tlrobinson: just implement require.async
[18:24] Tim_Smart: because I'm not attaching millions of listeners
[18:24] ryah: tlrobinson: the problem is when we do "node http://foo.com/bar.js"
[18:25] ryah: and then bar.js does a bunch of "require('./a') require('./b')
[18:25] inimino: ryah: that's fine with me, CommonJS-compatible async require() can still be layered on top of it by the messy solution I just described (in a library)
[18:25] ryah: tlrobinson: the bar module doesn't know it's being loaded over http - so it's implemented sync require
[18:26] inimino: ryah: hey what did you mean by error codes on the list?
[18:26] konobi: ryah: just kill off http loading
[18:26] ryah: konobi: that's one solution
[18:26] itistoday: I found it!
[18:26] itistoday: http://romeda.org/blog/2010/01/hot-code-loading-in-nodejs.html
[18:26] tlrobinson: ryah: does it really matter if you do a few synchronous operations at startup?
[18:26] itistoday: "hot code loading"
[18:26] konobi: we're not going to support it in smart... daft idea
[18:26] itistoday: neat stuff, check it out :-)
[18:27] stephenlb: creationix: cool@! how did you get and with which hook to make the website update on push?
[18:27] ryah: inimino: an exception object wrapping up an errno
[18:27] Tim_Smart: itistoday: We might support it in the web-framework me and others are going to work on
[18:28] itistoday: Tim_Smart: cool, do you have anything up yet? I see there's a lot of existing web frameworks for node.js
[18:28] creationix: stephenlb: github has a post commit hook service. You give them a url, and they will post to it on every git push
[18:28] inimino: ryah: I'd like it better if I don't have to type check
[18:28] Tim_Smart: itistoday: Not yet. Check out express
[18:28] ryah: inimino: http://github.com/ry/node/blob/fc025f878a0b7a5bbb5810005da3e09cb856b773/src/node_file.cc#L23-29
[18:28] creationix: I have a node server listening that does a git pull and re-generates the site
[18:29] stephenlb: creationix: cool, i'll check this out.
[18:29] itistoday: Tim_Smart: ok will do. btw, is there a web framework for node.js that does php-style inline templates?
[18:29] tlrobinson: ryah: however, if you want to ditch require, narwhal can add it back in. we'll be the commonjs layer on top of node
[18:29] ryah: inimino: agree - the first arg of the callback can always be reserved for error. fs.unlink("/tmp/foo", function (error) {})
[18:29] gwoo: itistoday: check out the ry/node wiki
[18:30] ryah: tlrobinson: that's not a bad idea
[18:30] itistoday: gwoo: ?
[18:30] itistoday: gwoo: link?
[18:30] gwoo: comin up
[18:30] Tim_Smart: itistoday: http://wiki.github.com/ry/node/
[18:30] inimino: ryah: I like that Error object with the errno on it
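What the error-first convention looks like from the caller's side; the callback shape follows ryah's example above, with the errno attached to an Error object (the module and property names are the proposal, not necessarily the API as it stood at the time):

    var sys = require('sys');
    var fs = require('fs');

    fs.unlink('/tmp/foo', function (error) {
      if (error) {
        // the first argument is reserved for failure: an Error carrying the numeric errno
        sys.puts('unlink failed, errno ' + error.errno);
        return;
      }
      sys.puts('unlinked /tmp/foo');   // a null error means success
    });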
[18:30] ryah: although i think we're a bit distant atm from the narwhal hookup
[18:30] gwoo: poor github
[18:30] gwoo: so slow for me today
[18:30] gwoo: thanks Tim_Smart
[18:31] gwoo: Tim_Smart: did you do node-blog?
[18:31] tlrobinson: ryah: i know kris kowal was working on it. i'm not sure how far along he was
[18:31] bentomas has left the channel
[18:31] Tim_Smart: gwoo: Nope
[18:31] bentomas has joined the channel
[18:31] ryah: tlrobinson: yeah - i'm working with him - slowly
[18:31] gwoo: ok, another Tim :)
[18:31] inimino: tlrobinson: yes, I think require() and such should be implemented above...
[18:31] inimino: tlrobinson: did you already implement something with parsing and looking up require() values?
[18:31] konobi: require("commonjs");
[18:31] inimino: tlrobinson: I thought I read somewhere some doing that
[18:32] itistoday: Tim_Smart: thanks, but it seems to be down
[18:32] ryah: nice thing about that is we could share all the module code with narwhal instead of reimplementing it
[18:32] Tim_Smart: itistoday: yes it does >.>
[18:32] ryah: just have a simple async loading system for node
[18:32] ryah: ''
[18:32] ryah: :P''
[18:32] gwoo: ryah: do you think it would be a good idea to have a #node-core channel? lots of people chatting about different things in here
[18:33] ryah: ^-- drooling desirably
[18:33] inimino: ryah: yeah, error as first parameter to the callback sounds good too
[18:33] tlrobinson: ryah: yeah. narwhal's module system has some neat features too
[18:33] aguynamedben has joined the channel
[18:33] tlrobinson: like hooks for custom loaders and stuff
[18:34] ryah: gwoo: nah - i like having everyone talking :)
[18:34] gwoo: hehe ok
[18:34] tlrobinson: (used for native modules and languages which target js)
[18:35] itistoday: are there any benchmarks that compare node.js to nginx?
[18:35] blackdog`: tlrobinson: anyone using that feature yet? is there example code ?
[18:35] ryah: itistoday: i've done some-- not sure i have anything online
[18:35] ryah: itistoday: it's much slower
[18:35] itistoday: ryah: node.js is?
[18:35] ryah: yes
[18:35] blackdog`: tlrobinson: i'd be interested in doing a loader for haxe
[18:36] tlrobinson: blackdog`: yeah narwhal supports coffeescript, parenscript, objective-j languages
[18:36] blackdog`: tlrobinson: ok, thx i'll check it out
[18:36] itistoday: ryah: hmm... so perhaps it would be better to run these javascript libraries through nginx?
[18:36] creationix: tlrobinson: is there a tool to pre-process files before requiring them?
[18:36] ryah: itistoday: ?
[18:37] itistoday: ryah: i.e. do cgi-type stuff with javascript web frameworks through nginx
[18:37] ryah: itistoday: yeah - that's what i do
[18:37] tlrobinson: creationix: you can add custom loaders to narwhal's require which do the preprocessing
[18:37] itistoday: ryah: do you have a blog?
[18:38] ryah: itistoday: http://four.livejournal.com/
[18:38] creationix: tlrobinson: very cool +1 for sharing narwhal's module system
[18:38] tlrobinson: creationix: objective-j's compiler is written in js so it runs in the same process. coffeescript and parenscript have to spawn ruby/lisp interpreters to do theirs
[18:39] itistoday: ryah: cool. do you have any links btw for doing cgi/javascript development through nginx?
[18:39] tlrobinson: blackdog`: i'd point you to examples, but github seems to be having trouble at the moment
[18:39] konobi: itistoday: see http proxy
[18:39] creationix: tlrobinson: actually Jeremy and I are porting the CoffeeScript compiler to node
[18:40] konobi: it's _just_ HTTP... nothing magical
[18:40] inimino: alright time to get some work done
[18:40] ryah: me too
[18:40] blackdog`: tlrobinson: ok, no worries, i'll find it
[18:40] inimino: ACTION afks
[18:40] creationix: tlrobinson: and I wrote another language (Jack) in javascript
[18:40] itistoday: konobi: proxy to what? node.js?
[18:40] tlrobinson: creationix: cool. by "node" i hope you mean "javascript" :)
[18:41] creationix: mostly
[18:41] unomi: itistoday: basically you setup nginx in front of node, and if, say, there is a request to /images /javascript or the like
[18:41] creationix: Jeremy and I actually have two implementations of CoffeeScript written in JavaScript
[18:41] tlrobinson: creationix: it would be nice if it ran in the browser too
[18:41] felixge: could somebody sum up the module system talk? I'm having a hard time cross-referencing the messages
[18:41] unomi: then you serve static resources via nginx
[18:41] unomi: otherwise you hand the request over to node
[18:41] felixge: blaine and I talked quite a bit about hot reloading earlier today
[18:41] creationix: mine is a simple function that takes a string and gives back a string, but I think Jeremy's is tightly coupled to node's IO apis
[18:42] creationix: tlrobinson: http://trycoffee.org and http://bit.ly/dsrgKv
[18:43] itistoday: unomi: i see, thanks!
[18:43] jubos has joined the channel
[18:43] tlrobinson: creationix: cool
[18:44] creationix: felixge: I think the basic idea was to not implement require in node, but let narwhal take care of that, and node could expose some low-level api that narwhal uses internally. Is this right tlrobinson?
[18:44] tlrobinson: creationix: yeah
[18:45] creationix: and then I got excited about the possibility of pre-processing filters :P
[18:45] tlrobinson: node would be the low level IO platform basically. narwhal would implement the commonjs module system and APIs
[18:45] felixge: hm
[18:45] itistoday: so, of the web frameworks on the ry/node wiki, which have php-style inline templating support? this is preferable to generating HTML via javascript on serverside because designers like the php-style better
[18:45] omygawshkenas has left the channel
[18:45] creationix: tlrobinson: one question, so would node itself not have require anymore, but only narwhal with a node engine?
[18:46] felixge: tlrobinson: but it would be possible to just use the current node API without any narwhal stuff?
[18:46] felixge: (no offense, I just don't want anything to be sync other than module loading)
[18:46] tlrobinson: creationix: it sounded like ryah_away wanted to remove require from node completely
[18:47] nodejs_v8 has joined the channel
[18:47] creationix: Interesting...
[18:47] tlrobinson: felixge: yeah you could still write apps just on node
[18:48] konobi: no reason narwhal couldn't just override node's require
[18:48] konobi: require("narwhal"); # voila
[18:48] v8 has joined the channel
[18:48] nodejs_v8 has joined the channel
[18:48] felixge: I guess I wouldn't mind this at all
[18:48] creationix: I like how it simplifies node's scope
[18:48] felixge: tlrobinson: does narwhal allow to require() without paying attention to any cache?
[18:49] tlrobinson: felixge: like the hot reloading stuff does?
[18:49] felixge: tlrobinson: yip
[18:49] tlrobinson: felixge: currently i don't think it does, but kris kowal is investigating what it would take
[18:49] tlrobinson: we have a different approach to hot reloading
[18:49] felixge: tlrobinson: what is it?
[18:49] felixge: I don't care about approach, I want results :)
[18:49] unomi: itistoday: there are a number of templating engines that are competing, I think they all work in the manner you are talking about
[18:49] tlrobinson: felixge: i wish github were up :)
[18:50] felixge: tlrobinson: hip hop is to blame :)
[18:50] tlrobinson: felixge: we throw out the entire cache of modules and reload everything. which is rather slow
[18:50] tlrobinson: but safer, i think
[18:50] gwoo: felixge: and the funny thing is it's not even available
[18:50] itistoday: unomi: thanks!
[18:50] gwoo: felixge: there is no source in the project
[18:51] unomi: github seems up here
[18:51] felixge: tlrobinson: that's a perfectly valid approach with a truly sync require() function
[18:51] gwoo: unomi: i think it depends on what you are accessing
[18:51] felixge: tlrobinson: it's problematic when multiple require() calls happen that share the same cache so
[18:51] unomi: well, thank god its 'git' then
[18:53] mikeal: why don't we use fork & pull rather than format-patch?
[18:53] unomi: email I guess?
[18:53] unomi: would you force everyone who contributes to create a github account?
[18:53] unomi: not that its that onerous I suppose ;)
[18:55] richtaur has joined the channel
[18:55] tlrobinson: felixge: which approach is problematic?
[18:55] lattice has joined the channel
[18:55] felixge: tlrobinson: purging cache for hot reloading if reloads can happen async
[18:56] felixge: tlrobinson: but I guess in narwhal it's all sync so it's not a problem?
[18:56] itistoday: thanks all, ciao
[18:56] itistoday has left the channel
[18:56] creationix: mikeal: ryah said it was because pull requests weren't transparent to the community
[18:56] tlrobinson: felixge: yeah all require's are sync
[18:56] felixge: tlrobinson: in that case your approach is perfectly safe
[18:56] joshbuddy has joined the channel
[18:56] joshbuddy has joined the channel
[18:56] Booster has joined the channel
[18:56] tlrobinson: felixge: if you can get this to load, here's the impl of the "Reloader" middleware in Jack: http://github.com/280north/jack/blob/master/lib/jack/reloader.js
[18:57] tlrobinson: the "sandbox" fuction there is just the "require" function
[18:57] felixge: tlrobinson: no luck yet, but I'll keep trying
[18:58] Tim_Smart: http://twitter.com/github
[18:59] tlrobinson: Tim_Smart: heh 32 minutes ago, eh?
[18:59] Tim_Smart: not sure how long queues take to resume...
[19:01] mikeal: creationix: ahh, that makes sense, but if you fork you will see all the other work in your network
[19:01] jcrosby_ has joined the channel
[19:01] mikeal: so maybe fork + emails to the dev list :)
[19:02] inimino: has anyone built node on FreeBSD?
[19:02] creationix: mikeal: That's what I usually do anyway. And I post the patch as a gist in case I notice a typo right after blasting the mailing list.. Then I can update it without a new email.
[19:03] mikeal: that's a good idea, i've never used gist like that :)
[19:04] inimino: I am getting (from make build): TypeError: cannot concatenate 'str' and 'NoneType' objects: File "/root/node-v0.1.26/deps/v8/SConstruct", line 608: 'help': 'the architecture to build for (' + ARCH_GUESS + ')'
[19:04] unomi: try hardcoding ARCH_GUESS ?
[19:04] technoweenie has joined the channel
[19:05] unomi: or preferably, extend the arch_guess code
[19:05] inimino: unomi: yeah, I ... guess
[19:06] unomi: I 'believe' its used mainly for where to drop the payloads
[19:06] inimino: ah, good
[19:06] unomi: don't take my word for it though, ryah_away or someone should know for sure
[19:08] inimino: yeah, I'll post on the list
[19:11] unomi: inimino: weird: http://github.com/kriskowal/node/commit/0cea946cb9030d772897b468e2f4e32e40842c2a
[19:11] unomi: looks like it truly doesn't matter
[19:13] unomi: http://groups.google.com/group/nodejs/browse_thread/thread/b61a21f303bb9f9b
[19:23] inimino: unomi: oh, thanks
[19:23] Tim_Smart: Gah github has got a reputation of failing now hasn't it?
[19:24] unomi: cloud computing is not a silver bullet
[19:25] Tim_Smart: not at all
[19:25] creationix has joined the channel
[19:26] unomi: to be fair though, they have become immensely popular and well trafficked
[19:26] Tim_Smart: yeah but they did go under a "uber upgrade"
[19:26] unomi: it is difficult to imagine that they had the optimism to envision the success they have now when they designed the foundation
[19:26] charlenopires has joined the channel
[19:27] Tim_Smart: yeah
[19:27] Tim_Smart: but then again, quite a lot of developers these days are always thinking about scalability
[19:28] unomi: but most think of it in terms of throwing more machines at it :p
[19:28] unomi: otherwise, wouldn't it all be running on erlang?
[19:29] unomi: ruby..
[19:29] unomi: is it still in ruby?
[19:30] Tim_Smart: github is done on rails
[19:31] unomi: http://www.infoq.com/interviews/preston-werner-powerset-github-ruby
[19:31] gwoo: http://github.com/blog/530-how-we-made-github-fast
[19:31] gwoo: :)
[19:32] teemow has joined the channel
[19:32] gwoo: teemow: !!
[19:33] gwoo: unomi: actually there was a decent podcast on The Changelog where defunkt talked about how surprised they have been by the success
[19:40] creationix: tlrobinson: How hard would it be to add a type flag to commonjs modules? You could flag modules as node-specific, cross-platform, etc
[19:40] tlrobinson: creationix: i think that belongs in packages
[19:40] tlrobinson: you could declare node as a dependency
[19:41] creationix: for example, I want to use Jison to make my language compilers, but it's not safe to use in its vanilla form because it requires the system module which node doesn't have
[19:42] RayMorgan has joined the channel
[19:42] creationix: but the commonjs version of showdown works just fine in node.
[19:42] creationix: I want a way to peruse the tusk catalog for libraries that would work in node
[19:43] tlrobinson: ah
[19:44] tlrobinson: the idea of commonjs is this wouldn't be a problem :)
[19:44] creationix: I know that ideally that's the goal, but it's not the reality
[19:45] creationix: maybe categorize modules as sync, async, doesn't matter, or both
[19:45] creationix: and have a commonjs specification for sync and async io operations. Node will support the async profile
[19:45] tlrobinson: the best i can suggest right now is to have the maintainers of the modules add keywords, tusk lets you search by keyword
[19:49] jcrosby has joined the channel
[19:52] inimino: ACTION looks at github, sees the ...whiskered fail-octopus? ...tentacle-tailed, wetsuit-wearing cat?
[19:55] technoweenie: one of their file servers went down or something
[19:55] Tim_Smart: Do you know if node.js makes use of multi-core machines?
[19:56] inimino: Tim_Smart: JavaScript is basically single-threaded by nature
[19:56] Tim_Smart: yeah, so run a instance per core?
[19:56] lattice: Tim_Smart: yup :-)
[19:56] inimino: or per network interface
[19:56] Tim_Smart: I guess because it is async that would work
[19:57] lattice: Tim_Smart: there's some work with web workers, but I think that's not ready for general consumption.
[19:57] Tim_Smart: unless you wanted to share in-memory data
[19:57] lattice: Tim_Smart: yeah, don't do that in general. ;-)
[19:57] Tim_Smart: mongodb is pretty fast, probably could utilize that
[20:00] creationix: I know Ryan is working on adding clustering to node
[20:01] creationix: http://www.amqp.org to be specific
[20:02] lattice: creationix: amqp is just messaging - it's good, but I'd be surprised (possibly excited, not sure) if ry was planning on making the integration transparent
[20:03] creationix: no, but it shouldn't be hard to add it on top I would think
[20:03] okito has joined the channel
[20:03] creationix: would be neat for it to be fully integrated though
[20:03] ryah_away: lattice: no
[20:03] lattice: creationix: definitely easy to add coordination on top; the problem is closures - they get really complicated.
[20:04] lattice: ryah_away: :-)
[20:05] lattice: you can serialize functions & modules, but serializing their execution context is much more difficult, so having a transparent cross-process messaging infrastructure is, erm, hard.
[20:05] lattice: erlang is able to have a uniform messaging system because they explicitly remove the context. Doing that to javascript would significantly suck. ;-)
[20:07] inimino: lattice: I don't think it sucks, that's what Web Workers does... you just have to adjust expectations about how you do things
[20:07] creationix: it's just a different level of integration I think
[20:08] lattice: inimino: oh, sure - no, I meant modifying javascript to transparently do remote execution means that you'd have to *remove* the concept of execution context in general - it's possible, but it would be horrible.
[20:08] Tim_Smart has joined the channel
[20:09] lattice: making people explicitly say "I want this to execute externally", but forcing them to provide the context is totally reasonable.
[20:10] inimino: lattice: oh, yes, agreed
[20:18] creationix: inimino: how is webworkers doing btw? I think it would be great for my use case of sharing some cpu-bound tasks across cpu cores
[20:25] inimino: creationix: I don't know, haven't followed it... I heard about some early implementations but nothing recently
[20:29] creationix: ok, will search the list
[20:30] Tim_Smart: hmm node.js is slightly slower at static requests than apache
[20:30] _Ray_: sure?
[20:31] Tim_Smart: yeah I just did a benchmark
[20:31] _Ray_: The "hello world" examples show a marked difference, in favor of node.
[20:31] Tim_Smart: _Ray_: I'm serving an image file
[20:32] _Ray_: I'd have to test that, but not all static requests, then, are slower than apache
[20:32] rictic has joined the channel
[20:32] Tim_Smart: yeah serving binary content from disk
[20:32] Tim_Smart: is what I am testing
[20:33] Tim_Smart: using the paperboy module
[20:35] halorgium has joined the channel
[20:36] Tim_Smart: I might make some modifications to paperboy to try to speed it up
[20:38] inimino: Tim_Smart: there's some experimental sendfile support which you can try too
[20:38] inimino: Tim_Smart: kind of undermaintained at the moment, but I'm still using it and it works
[20:39] Tim_Smart: inimino: Is it on the nodejs api page?
[20:39] inimino: Tim_Smart: I rather doubt it
[20:39] Tim_Smart: response.sendFile()?
[20:40] inimino: no, I don't think that's it, unless that's new
[20:40] inimino: but my understanding could be well out of date :-)
[20:40] Tim_Smart: inimino: what is the usage?
[20:40] inimino: Tim_Smart: http://boshi.inimino.org/3box/nhttpd/staticFile.js search for 'sendfile' on that page
[20:40] kennethkalmer has joined the channel
[20:41] inimino: Tim_Smart: note that the 'Resource temporarily unavailable' is required
[20:41] charlenopires has joined the channel
[20:41] inimino: Tim_Smart: and I don't remember if I benchmarked it or what the results were
[20:42] Tim_Smart: ok
[20:44] inimino: Tim_Smart: if you benchmark them I'd be curious to see the results
[20:44] Tim_Smart: sure
[20:45] jcrosby has joined the channel
[20:49] Tim_Smart: inimino: http://gist.github.com/293015 node.js was on port 8000
[20:49] mrd`: Before I comment on the sync API issue, can I ask what the motivation for adding it is? Is it people just want to be able to write code that way because it's less verbose, or is it because creating callback and invoking callback functions is expensive?
[20:50] inimino: mrd`: blocking the main thread is much more expensive, there are occasionally good reasons for using sync I/O but they are very rare
[20:50] mrd`: inimino: What are those good reasons, is what I'm asking.
[20:50] inimino: mrd`: you could use it to implement synchronous require(), which would be better than the way it's currently done
[20:51] mrd`: inimino: Is synchronous require() considered desirable?
[20:51] inimino: I can't think of another good reason, and I think there are better solutions for require()
[20:51] inimino: mrd`: it's easiest to use, by analogy with C #include, e.g.
[20:52] isaacs: ryah_away: ping
[20:52] mrd`: inimino: I don't think ease of use makes it inherently desirable.
[20:52] ryah_away: isaacs: hey
[20:52] inimino: mrd`: people expect to be able to write require(foo) and then use foo without having to put the rest of the file in a callback
[20:52] isaacs: so, mob brought up a good point about the stream api.
[20:52] isaacs: basically, it seems to assume that write() succeeds, either now, or later.
[20:53] mrd`: inimino: So are there any other reasons for wanting sync APIs other than implementing other sync APIs?
[20:53] isaacs: all the other concerns are bikesheds, and i could really go either way on, but that's got me thinking.
[20:53] Tim_Smart: inimino: it's not hard to do: require(["module", "http://somemodule/test.js", "etc"], myInitFunction);
[20:53] inimino: so, I got node built and running on FreeBSD :-)
[20:53] mrd`: Tim_Smart: Agreed.
[20:54] inimino: Tim_Smart: I agree, but... opinions vary
[20:54] ryah_away: isaacs: what's it got you thinking?
[20:54] inimino: Tim_Smart: and for CommonJS modules, it needs to be synchronous from the perspective of the module (though there's a solution to that as I described earlier today)
[20:54] mrd`: I think exposing any blocking API is a mistake.
[20:54] isaacs: well, i'm not sure. we could have write() return the number of bytes written, but if it's buffering, then maybe you don't know yet.
[20:55] isaacs: another option would be for write() to take a callback.
[20:55] Tim_Smart: inimino: did you see the benchmark?
[20:55] _Ray_ has joined the channel
[20:55] inimino: Tim_Smart: oh, no, looking now
[20:55] _Ray_: Huh. That's a first. Benchmarking node.js too much made OS X BSOD.
[20:55] isaacs: but i'm really trying not to bloat the complexity of using the API. i don't mind making Streams a bit more complex in their specification and internals, if it makes the interface more humane.
[20:56] Tim_Smart: _Ray_: hahahaha
[20:56] isaacs: stream.write(andThenWriteSomeMore) is kinda lame.
[20:57] mrd`: I'd rather the core API and language be strictly non-blocking. If people want to code in an extended syntax that has the appearance of blocking, that should be a strictly syntactic matter.
[20:57] ryah_away: isaacs: returning the number of bytes is okay
[20:57] isaacs: ryah_away: but how do you do that if it returns false and no bytes were written yet?
[20:57] ryah_away: isaacs: 0 if the send bufferis full
[20:57] mikeal has joined the channel
[20:58] inimino: Tim_Smart: hm, not bad, did you try with sendfile yet?
[20:58] Tim_Smart: there is no sendFile on the nodejs api page
[20:58] _Ray_: Oh, right. "Error: Too many open files".
[20:59] _Ray_: ACTION reminds self to .close().
[20:59] isaacs: ryah_away: that seems to imply that stream.write() is always synchronous.
[20:59] isaacs: which means, i can't create a stream, write to it, and then return it, and have my calling function be able to read what i just wrote.
[20:59] ryah_away: isaacs: how so?
[20:59] isaacs: if we defer the "data" event until the next tick, then that's possible.
[20:59] Tim_Smart: inimino: I test it now
[21:00] isaacs: however, deferring the "data" action until the next tick means that we don't know how many bytes were written.
[21:00] ryah_away: isaacs: you wouldn't do that though - usually there is some i/o involved
[21:00] isaacs: though, i guess, in that case, the pure-js-to-js stream object would just always return whatever the number of bytes are, or 0 if there's a buffer involved.
[21:00] ryah_away: isaacs: open a socketpair() and do it
[21:01] ryah_away: so let's have write() return number of bytes written synchronously to the kernel send buffer
[21:02] isaacs: what i want to do is mount an app that returns a stream, then hook that stream to the outgoing HTTP response, and then whenever the app write()s to the stream, it'll get shunted down the line to the http connection.
[21:02] ryah_away: for a file stream abstraction (writing to tmp file or something) write() will always return 0
[21:02] isaacs: i see.
[21:02] isaacs: so, yeah, in this case, write() will just always return 0, because it's always buffered to some degree.
[21:02] jan____ has joined the channel
[21:03] ryah_away: sockets can usually push data out directly
[21:03] ryah_away: you write it directly into the kernel
[21:03] isaacs: right
[21:03] isaacs: but we actually want a callstack of middleware around an app, and then have the app+middleware return a stream which can be listened to for output data asynchronously.
[21:04] ryah_away: i can imagine one problem
[21:04] isaacs: without having to hold the entire response body in memory all at once, or block other processes while slow things happen.
[21:04] ryah_away: socket.write(utf8string)
[21:04] ryah_away: var bytesWritten = socket.write(utf8string)
[21:04] ryah_away: if (bytesWritten < utf8string.length)
[21:04] isaacs: right, so that's gonna be two for ü, but "ü".length is only 1
[21:04] ryah_away: ^---
[21:04] isaacs: right
[21:05] isaacs: ACTION can haz ByteString?
[21:05] inimino: yeah, that's happened before
[21:05] inimino: yeah, that's the solution IMO :-)
[21:05] ryah_away: so the correct way to do that would be (using net2 notation)
[21:05] isaacs: that's exactly the problem that ByteArray and ByteString are there to solve.
[21:05] inimino: explicit conversion
[21:05] ryah_away: if (bytesWritten < Buffer.utf8ByteLength(utf8string))
[21:05] isaacs: well, you could just have write() convert to a ByteString.
[21:05] inimino: not if it returns partial writes.
[21:06] isaacs: then if (bytesWritten < utf8string.toByteString().length)
[21:06] inimino: var bytesToSend = Buffer.convert(utf8string)
[21:06] isaacs: sure, or do it before hand
[21:06] inimino: while (write (bytesToSend) < ...) ...
[21:07] ryah_away: so - what happens when you do socket.write(utf8String), at least in net2, is that it converts it to utf8 and writes it to a buffer
[21:07] ryah_away: then write(3POSIX) is called on that buffer
[21:07] ryah_away: if it doesn't write everything, then it puts the send head there
[21:07] ryah_away: (or wherever it was stopped)
[21:07] ryah_away: so it's all done in one conversion
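The length pitfall in one line: "ü".length is 1 in JavaScript, but its UTF-8 encoding is 2 bytes, so a completeness check has to compare against the byte length. Buffer.utf8ByteLength is ryah's net2 notation above, so treat the exact name as part of that proposal:

    function wroteEverything(bytesWritten, utf8string) {
      // compare against the encoded byte length, never the JS string length
      return bytesWritten >= Buffer.utf8ByteLength(utf8string);
    }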
[21:08] Tim_Smart: ryah_away: What are the arguments for posix.sendFile?
[21:08] ryah_away: Tim_Smart: it's broken - don't use it
[21:08] Tim_Smart: ok
[21:09] Tim_Smart: ryah_away: So the fastest way of serving static images and stuff would be a posix.read solution?
[21:10] ryah_away: Tim_Smart: web server?
[21:10] Tim_Smart: yes a web server
[21:11] ryah_away: Tim_Smart: the best way is to use nginx to serve static files
[21:11] Tim_Smart: ok
[21:11] isaacs: hm.. so, if write() returns the number of bytes written synchronously to the socket, or 0 in the case of stream abstractions, then I still don't quite know if it failed or not.
[21:11] isaacs: that is, i could write(someBigThing), it returns 0. how do i know if it was buffered, or just failed?
[21:12] inimino: isaacs: you'd probably need to have a separate way to signal failure on things that can fail
[21:12] inimino: (like an event or whatever)
[21:12] isaacs: right
[21:13] ryah_away: yeah, just add an error event
[21:13] ryah_away: maybe a member "bytesSent"
[21:13] ryah_away: which just is a running tally of total bytes sent on the connection
[21:14] ryah_away: i'd rather have write() return true /false
[21:15] mrd`: Have you guys looked at Adobe AIR's filesystem APIs?
[21:16] mrd`: nm, it all seems to be synchronous
[21:16] isaacs: ryah_away: i like that better.
[21:17] isaacs: so, we'd add to the API: bytesSent, "error" event, and keep write() semantics the way they are?
[21:17] ryah_away: yeah
[21:17] ryah_away: isaacs: need an "error" event for readable streams too
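Roughly what the agreed additions look like from the caller's side; the "error" event, bytesSent, and the unchanged true/false return of write() are the proposal, and `stream` stands in for any writable stream:

    function send(stream, chunk) {
      stream.addListener('error', function (e) {
        // failures surface here, not through write()'s return value
      });
      var flushed = stream.write(chunk);   // true: went out now, false: buffered (not an error)
      return flushed;                      // stream.bytesSent keeps a running tally of bytes sent
    }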
[21:18] isaacs: hm. i think that might appease the masses.
[21:19] rtomayko has joined the channel
[21:22] okito has joined the channel
[21:26] Tim_Smart: inimino: ok cool
[21:30] voodootikigod has joined the channel
[21:31] Tim_Smart: Question: If I had 4 instances of node running on a quad core machine, how would I distribute the requests?
[21:31] deanlandolt: Tim_Smart: nginx
[21:32] Tim_Smart: Is there a nginx guide somewhere? I have never used it
[21:32] ryah_away: people are really concerned about using multiple cores :)
[21:32] Tim_Smart: well make use of what you got!
[21:32] ryah_away: i'm pretty sure a single node can saturate your bandwidth
[21:33] richtaur has left the channel
[21:33] Tim_Smart: but would 1:1 node:core perform better is the real question?
[21:33] inimino: that's what I meant about one node per network interface ;-)
[21:34] ryah_away: i mean, one core is enough if your software is good
[21:34] ryah_away: :)
[21:34] Tim_Smart: OK, 1 instance it is
[21:34] inimino: CPU-bound node apps are (hopefully) quite the exceptional case
[21:34] ryah_away: Tim_Smart: but load balance behind nginx anyway for security
[21:34] ryah_away: node does crash
[21:34] ryah_away: would be good to have multiple instances
[21:35] deanlandolt: also, you may as well have nginx serving up your static files
[21:35] ryah_away: right
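What the node side of that setup might look like: each instance bound to its own local port, with nginx in front proxying dynamic requests to them and serving static files itself. The port handling and the loopback bind are illustrative assumptions, and the response calls follow the hello-world style of the era:

    var http = require('http');
    var port = 8001;   // change per instance (8001, 8002, ...) and point nginx at each

    http.createServer(function (req, res) {
      res.sendHeader(200, {'Content-Type': 'text/plain'});
      res.sendBody('dynamic response from the instance on port ' + port + '\n');
      res.finish();
    }).listen(port, '127.0.0.1');   // only nginx on the same box talks to these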
[21:35] Tim_Smart: yeah well, I need to learn about nginx anyways
[21:36] Tim_Smart: google time, unless someone knows a decent guide
[21:36] deanlandolt: Tim_Smart: ping the ML -- there's been some effort around getting tutorials together -- that'd be a /good/ one to request
[21:36] Tim_Smart: ML?
[21:36] deanlandolt: mailing list
[21:36] Tim_Smart: mailing list gotcha
[21:37] deanlandolt: or you can just be lazy like me and use apache because it's what you know :D
[21:37] deanlandolt: once perf's an issue you can transparently swap it out for nginx
[21:38] Tim_Smart: well considering I am used to PHP and apache, now that I'm seriously looking at node.js, I may as well take a look at a whole bunch of new tech while I am here
[21:38] konobi: nginx is easier from blank
[21:38] deanlandolt: aye, but not many of us are starting from blank :-/
[21:38] konobi: http://kovyrin.net/2006/05/18/nginx-as-reverse-proxy/
[21:39] Tim_Smart: mongodb looks like a db I will be trying too
[21:40] Tim_Smart: Hmm node.js could make a nice adserver
[21:40] Tim_Smart: just route all the static requests back to nginx
[21:40] deanlandolt: Tim_Smart: not /back/ -- if they go through nginx they'll never see node
[21:41] konobi: deanlandolt: unless you do the X- header
[21:41] Tim_Smart: yeah but you might need to do some database look ups etc
[21:41] deanlandolt: Tim_Smart: for static requests?
[21:41] deanlandolt: konobi: ?
[21:42] konobi: deanlandolt: http://foo.bar/advert.jpg # based on host, etc... send back different images
[21:42] Tim_Smart: yeah wait, I thought wrong
[21:42] deanlandolt: konobi: but what do you mean x- header?
[21:42] deanlandolt: x-sendfile?
[21:42] konobi: X-Accel-Redirect
[21:42] Tim_Smart: well node.js will serve dynamic javascript that points to static files
[21:44] deanlandolt: konobi: ahh, i see
[21:44] Tim_Smart: but it would be much cooler to just point to an image that routes to different images
[21:44] Tim_Smart: some sort of redirect was what I was meaning yeah
[21:44] deanlandolt: Tim_Smart: err, cool, but not restful
[21:44] deanlandolt: (unless you set the Vary header)
[21:45] onar_ has joined the channel
[21:45] konobi: deanlandolt: it's restful... it's just not cachable
[21:45] Tim_Smart: well Google's strategy is javascript and probably for good reason
[21:45] deanlandolt: okay, well, perhaps you don't want caching with an adserver anyway :D
[21:46] creationix has joined the channel
[21:59] markwubben has joined the channel
[22:02] cdorn has joined the channel
[22:03] Tim_Smart: I sent something off to the mailing list
[22:04] okito has joined the channel
[22:08] Booster has joined the channel
[22:09] Tim_Smart: X-Accel-Redirect looks very useful
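A sketch of the X-Accel-Redirect idea: node picks the ad, then hands the actual file transfer back to nginx via an internal-redirect header instead of streaming the bytes itself. pickAdFor() and the /protected-ads/ location are hypothetical, and nginx would need a matching internal location configured:

    var http = require('http');

    http.createServer(function (req, res) {
      var image = pickAdFor(req);   // hypothetical ad-selection logic (host, cookies, etc.)
      res.sendHeader(200, {'X-Accel-Redirect': '/protected-ads/' + image});
      res.finish();                 // nginx intercepts the header and serves the file
    }).listen(8001, '127.0.0.1');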
[22:13] hassox has joined the channel
[22:14] mikeal: what is the easiest way to do a Python style if __name__ == "__main__" in node
[22:14] mikeal: do I really have to iterate over process.ARGV and see if __filename is in it?
[22:14] kriszyp has joined the channel
[22:16] JimBastard has joined the channel
[22:18] Booster has joined the channel
[22:21] Tim_Smart: mikeal: I think ARGV is the only solution
[22:22] isaacs: wait, what now?
[22:22] Tim_Smart: if (process.cwd() + "/" + __filename === process.ARGV[0])
[22:23] isaacs: ew
[22:23] mikeal: it's not necessarily ARGV[0]
[22:23] isaacs: why would you want to do that?
[22:23] mikeal: if i do node /path/to/file
[22:23] isaacs: what's the goal here?
[22:23] isaacs: (also, better to use path.join())
[22:23] mikeal: i have some modules that can also be standalone scripts or requireable modules
[22:23] deanlandolt: isaacs: python's __name__ == "__main__"
[22:23] Tim_Smart: isaacs: To see if it is running as module or directly
[22:23] deanlandolt: isn't there a module.main or require.id == pattern?
[22:23] mikeal: ARGV and __filename are abspath
[22:24] isaacs: if (module.id === ".")
[22:24] isaacs: the commonjs spec says that there should be a require.main
[22:24] isaacs: node doesn't have that yet.
[22:24] deanlandolt: yeah...that's what i thought...something like require.id === require.main
[22:25] deanlandolt: isaacs: btw i've modified jack to do forEachables for the request.body to see how it'll look and an interesting thing is shaking out...
[22:25] isaacs: deanlandolt: what's that?
[22:26] isaacs: deanlandolt: (it'd be require.main === module, i think. or maybe require.main === exports.)
[22:26] isaacs: or maybe require.main === module.id
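The two checks being floated, side by side: module.id === "." reflects how the entry module is identified at the time, while require.main === module is the CommonJS-style form isaacs says node doesn't have yet, so treat it as proposed rather than available:

    if (module.id === '.') {
      // run directly via `node ./thisfile.js`: start the server / CLI behaviour
    } else {
      // pulled in via require(): only populate exports
    }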
[22:26] deanlandolt: i'm writing some Multipart middleware that will return a lazy array of objects that look a lot like jsgi requests, with a headers obj and a forEachable body...and if it's a nested multipart the forEachable body could return another jsgi request-like obj, all the way down
[22:27] deanlandolt: isaacs: yeah, that last one i think
[22:27] isaacs: deanlandolt: ok, nested multipart... i grok. what's shaking out of that?
[22:28] deanlandolt: the pattern of lazy arrays containing objects with bodies that can contain lazy arrays, etc...
[22:28] deanlandolt: all async and wwithout buffering
[22:29] isaacs: you can do that with streams, too, and only ever have one chunk held in memory at a time.
[22:29] deanlandolt: nested like that?
[22:30] deanlandolt: hmm, i guess
[22:30] deanlandolt: that's right, you can return objs in your streams too
[22:30] isaacs: when it gets back up to the server it has to be strings.
[22:30] isaacs: but the same thing is in foreachables, too
[22:31] deanlandolt: you mean in the response? sure
[22:31] deanlandolt: it's non-standard jsgi -- although, it'd make a nice jsgi extension proposal no matter which stream style gets chosen...being able to stream objects is huge
[22:32] okito has joined the channel
[22:33] isaacs: sure.
[22:33] isaacs: basically, the spec doesn't say what you can and can't do with middleware.
[22:33] isaacs: it just says that when the server gets the entire composed app, it should do X
[22:34] deanlandolt: it defines what /compliant/ should look like -- inputs and outputs and all that
[22:34] isaacs: right
[22:34] deanlandolt: but that doesn't mean you can't do what you will...just know where the boundary between compliant and extended is
[22:34] isaacs: but your "app" is just whatever's hanging out on exports.app. the inside functions could be doing whatever.
[22:35] isaacs: as long as what gets out is "compliant", you're compliant.
[22:35] deanlandolt: yeah, but at least at the top of your middleware chain you'll probably want to reuse other peoples middleware...
[22:36] deanlandolt: once you start doing crazy stuff like returning a non-standard response.body you'll no longer be able to reuse middleware (at least, middleware that cares about response.body)
[22:36] isaacs: sure
[22:37] isaacs: or you'll have to say "these three middlewares all go together" or something.
[22:37] isaacs: but frameworks do that all the time.
[22:37] tlrobinson: at that point it's not jsgi
[22:37] deanlandolt: it's /extended/ jsgi :D
[22:38] tlrobinson: jsgi servers/apps/middleware need to have well defined interfaces so they're stackable
[22:38] deanlandolt: tlrobinson: agreed but at a certain point in your middleware chain it may make sense to deviate
[22:38] tlrobinson: if "middleware" changes the interface then it's not jsgi compliant middleware
[22:39] deanlandolt: agreed -- but you can have custom application middleware and standard, reusable middleware -- i think both have their place
[22:39] tlrobinson: yeah
[22:40] mikeal: i think a certain level of interop is doable in JSGI but at some point it's the job of a web framework to provide infrastructure to make sure plugins don't step on each other
[22:41] mikeal: in other words, JSGI is **not** a web framework :)
[22:41] tlrobinson: mikeal: have you looked at Rails 3?
[22:41] tlrobinson: it makes heavy use of Rack
[22:41] mikeal: i like doing things that aren't painful
[22:41] isaacs: mikeal: sure, but the framework might have a collection of things called "filters" and "views" and "models", which each are just functions, or stream modifiers, or some other junk, and the framework turns all the loaded stuff into a jsgi-compliant app.
[22:41] deanlandolt: mikeal: i mostly buy that but it /is/ the means by which web frameworks can play nice together
[22:42] tlrobinson: the router is a Rack app, the controllers are Rack apps, etc
[22:42] mikeal: exactly, web frameworks should be built **on** JSGI
[22:42] isaacs: with jsgi, you could have that going on in one section of the site, and another jsgi framework in another, and a simple router that routes between them.
[22:42] mikeal: but JSGI out of the box shouldn't try too hard to protect plugins from each other
[22:42] isaacs: and they can play together.
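[editor's note: a toy illustration of the "simple router that routes between them" idea -- two independently built JSGI apps mounted by path prefix behind one composed app. The request/response shapes follow the synchronous JSGI convention (app(request) returning {status, headers, body}); this is a sketch, not any framework's actual router.]

    // mount JSGI apps from different frameworks under path prefixes (sketch)
    function mount(apps) {
      return function (request) {
        for (var prefix in apps) {
          if (request.pathInfo.indexOf(prefix) === 0) {
            return apps[prefix](request);
          }
        }
        return { status: 404, headers: { "Content-Type": "text/plain" }, body: ["not found"] };
      };
    }

    // exports.app = mount({ "/blog": blogFrameworkApp, "/admin": adminFrameworkApp });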
[22:42] mikeal: ideally, yes, that's what Pylons tries to do
[22:42] isaacs: ACTION .oO( it sounds like we all agree? )
[22:42] tlrobinson: mikeal: Rails 3 is not just "on" Rack like previous versions of Rails, it's a collection of Rack components
[22:43] mikeal: sounds a lot like Pylons
[22:43] deanlandolt: so a web framework can be /on/ jsgi or /of/ jsgi -- the latter just means more of it will be reusable in other projects/frameworks
[22:43] hassox: jsgi definitely should not try to protect middlewares from each other
[22:44] isaacs: ACTION .oO( yeah, we definitely all agree )
[22:44] hassox: deanlandolt: if it's on jsgi, it should be usable with any other jsgi bit of kit
[22:44] tlrobinson: i don't know what "protect" means
[22:44] mikeal: isaacs: yes, consensus!
[22:45] hassox: tlrobinson: I just mean that we should not try to put protection mechanisms in the spec. it should just say what's available and how to move about
[22:45] mikeal: tlrobinson: imagine you have two pieces of middleware that don't know about each other but both want to do stuff to the response body
[22:45] hassox: if devs do something stupid then that's up to them
[22:45] mikeal: it's possible that they can break each other
[22:45] deanlandolt: hassox: yes...but there's different levels of usable -- if it's /of/ jsgi its individual components can be swapped in and out of other projects' stacks
[22:45] mikeal: but if you build too much scaffolding to protect that from happening you end up with a web framework
[22:46] hassox: correct
[22:46] hassox: web frameworks are a level above jsgi
[22:46] hassox: let them worry about what works together
[22:46] hassox: if they have a cohesive stack
[22:46] mikeal: my only concern is that if JSGI starts turning in to a big web framework then it will NEVER get done because you can't build a web framework through consensus :)
[22:46] deanlandolt: hassox: as tlrobinson pointed out -- is rails 3 a /level above/ rack?
[22:46] hassox: yes
[22:46] hassox: rails 3 is built on rack
[22:46] jcrosby has joined the channel
[22:46] hassox: it's built by assembling rack applications
[22:46] hassox: it's directly comparable
[22:47] deanlandolt: mikeal: agreed but you don't have to worry about that -- there is no jsgi code per se
[22:47] deanlandolt: just a bunch of compliant implementations
[22:47] hassox: as are pancake, sinatra and merb
[22:47] mikeal: if you want to write something that modifies the response body in rails 3 you write it for some rails 3 component that uses Rack, not in Rack itself
[22:47] konobi: jack?
[22:47] tlrobinson: right, as i understand it, it's basically a bunch of rack components that can be assembled using the rails-y DSLs and what not
[22:47] deanlandolt: konobi: compliant implementation
[22:47] hassox: tlrobinson: not quite
[22:47] hassox: a rack app is just an object that responds to call and receives the env
[22:47] hassox: returns an array
[22:48] hassox: anything can be inserted into the stack
[22:48] technoweenie: the rails controller implements that interface
[22:48] tlrobinson: oh i know what rack is :)
[22:48] hassox: tlrobinson: right... so anything that implements it can be put in the stack
[22:48] hassox: jsgi is at the same level
[22:49] hassox: multiple frameworks should be able to be built on top
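[editor's note: the interface hassox is describing, spelled out. In Rack (Ruby) an app is any object responding to call(env) and returning [status, headers, body]; the JSGI analogue at the same level is just a function from a request object to a response object, and middleware is a function from app to app. Sketch only, with illustrative names.]

    // a JSGI app: request in, { status, headers, body } out
    function app(request) {
      return { status: 200, headers: { "Content-Type": "text/plain" }, body: ["hello"] };
    }

    // middleware: takes an app, returns another app with the same interface,
    // which is what makes the pieces stackable and swappable
    function withServerHeader(app) {
      return function (request) {
        var response = app(request);
        response.headers["X-Sketch"] = "1";
        return response;
      };
    }

    // exports.app = withServerHeader(app);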
[22:49] deanlandolt: hassox: yeah, i think that was tom's intent when he originally wrote jack
[22:49] deanlandolt: (hence the name...similar, eh? :)
[22:49] hassox: sinatra, pancake, rails, merb are all frameworks that exist on rack
[22:49] hassox: sure
[22:49] hassox: but its heart is sync...
[22:49] kriszyp: ACTION is enjoying listening to someone explain to tlrobinson what rack is :)
[22:49] hassox: lol
[22:49] hassox: yeah I know
[22:49] hassox: ;)
[22:49] hassox: I know tlrobinson is involved heavily in rack
[22:49] kriszyp: sorry, hassox, just being silly :)
[22:50] tlrobinson: my entire knowledge of rails 3 is based on yehuda's blog, in particular http://yehudakatz.com/2009/12/26/the-rails-3-router-rack-it-up/
[22:50] deanlandolt: hassox: what does "sync at heart" really mean? the async talk is hot and heavy
[22:50] hassox: tlrobinson: usher is another rack based router
[22:50] isaacs: software doesn't have hearts.
[22:50] hassox: deanlandolt: happy to go into it but my manager is looking over my shoulder ;)
[22:50] charlenopires has joined the channel
[22:50] hassox: isaacs: course they do!
[22:50] deanlandolt: hassox: heh...know the feeling :)
[22:51] mikeal: so, in summary, we all agree :)
[22:51] hassox: and sherpa is a router for jsgi
[22:51] mikeal: spec writing is the painful process of agreement
[22:52] hassox: I'm sure I missed heaps of the discussion ;)
[22:52] deanlandolt: mikeal: no doubt...but as frustrating as it is good specs are so totally worth it
[22:52] mikeal: i'm optimistic
[22:52] mikeal: i've certainly worked on enough bad specs that weren't worth it
[22:52] mikeal: see: CalDAV
[22:53] Tim_Smart: woo http://gist.github.com/293140
[22:54] hassox: tlrobinson: I didn't mean to be so forceful about rack... I'm just frustrated by people's understanding of it...
[22:54] hassox: not that I think you don't understand it ;)
[22:54] hassox: or anyone else in here...
[22:55] tlrobinson: i don't think my assessment of rails 3 being a bunch of rack components was too far off
[22:55] _Ray_: Tim_Smart, export is a reserved keyword in JS
[22:55] isaacs: Tim_Smart: also, that won't do anything.
[22:56] hassox: tlrobinson: it's not.. they are...
[22:56] hassox: but that's what rack apps are
[22:56] isaacs: you're overwriting "variable", but that's just a local param
[22:56] Tim_Smart: _Ray_: Yeah I changed it to prop
[22:57] tlrobinson: hassox: i mean rails 3 is composed of multiple rack components, it's not just one big framework which interfaces with rack at the bottom
[22:57] _Ray_: variable is of type object*, when you say variable = foo, you're just changing where the pointer points, not the old contents
[22:57] tlrobinson: (like it used to be)
[22:57] hassox: tlrobinson: right... yes
[22:57] tlrobinson: ok then we agree
[22:57] Tim_Smart: right, I'll pass it by string then
[22:58] hassox: tlrobinson: yes
[22:58] v8 has joined the channel
[22:58] hassox: imo a jsgi or rack framework should facilitate such a collection of components
[22:58] sudoer has joined the channel
[22:58] mikeal: it would be really nice if node-repl put '.' in the require path
[22:58] _Ray_: v8> var foo = 1; (function(foo) { foo = 2; })(foo); foo
[22:58] v8: _Ray_: 1
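[editor's note: expanding on _Ray_'s point and the v8 snippet above -- reassigning a function parameter only rebinds the local name, so the caller's variable is untouched; to affect the caller you either mutate a property of the passed-in object or, as Tim_Smart settles on, pass the property name as a string. Illustrative names only.]

    var config = { mode: "a" };

    // reassignment: only the local parameter changes
    (function (variable) { variable = { mode: "b" }; })(config);
    // config.mode is still "a"

    // property mutation: visible to the caller
    (function (variable) { variable.mode = "b"; })(config);
    // config.mode is now "b"

    // passing the property name as a string:
    function set(obj, prop, value) { obj[prop] = value; }
    set(config, "mode", "c");   // config.mode is now "c"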
[22:58] tlrobinson: also agreed
[22:59] hassox: but it's not up to jsgi or rack to worry about that level of stuff
[22:59] kriszyp: tlrobinson, do you have any idea how many JSGI middleware modules exist today?
[22:59] kriszyp: just out of curiosity..
[22:59] technoweenie: hassox: have you looked at fab?
[22:59] technoweenie: oh yea we talked about it already
[23:00] hassox: technoweenie: i need to look closer at it
[23:00] hassox: time is not on my side
[23:00] mikeal: imho anything that can be done in the application layer without inhibiting portability of the application to other servers, regardless of middleware concerns, should be left out of JSGI and handled by application developers or web frameworks
[23:00] hassox: technoweenie: can fab be written on top of ejsgi?
[23:00] technoweenie: no idea
[23:00] technoweenie: haven't looked at ejsgi
[23:00] tlrobinson: kriszyp: no, i think jack has about 25, george's nitro has maybe 10,
[23:01] jcrosby has joined the channel
[23:01] kriszyp: I think pintura has about 15, and bogart may have some too
[23:01] hassox: technoweenie: async jsgi (rack)
[23:01] deanlandolt: bogart has quite a few IIRC
[23:01] technoweenie: ah
[23:01] technoweenie: well, fab is async
[23:01] technoweenie: has middleware
[23:01] tlrobinson: hassox: fab should work on any of the async JSGI proposals
[23:01] deanlandolt: jed's already committed to that
[23:01] tlrobinson: they can all be implemented on top of each other pretty much
[23:01] hassox: tlrobinson: does that mean that all of the middleware components can be implemented as jsgi?
[23:02] tlrobinson: hassox: oh i don't know, i haven't looked at fab closely
[23:02] deanlandolt: hassox: not quite -- fab middleware would probably have to be wrapped to be used in a jsgi stack...jsgi middleware would probably have to be wrapped to be used in a fab stack...
[23:02] deanlandolt: but you can put a pure jsgi stack on top of fab with one wrapper
[23:02] deanlandolt: and vice versa
[23:02] tlrobinson: it would be nice if fab were just another JSGI router
[23:02] bentomas has left the channel
[23:03] deanlandolt: tlrobinson: i believe that's in the roadmap
[23:04] hassox: yeah if it were implemented using that then everything is re-usable :D
[23:04] jcrosby has joined the channel
[23:14] Tim_Smart: _Ray_: http://gist.github.com/293140
[23:19] Tim_Smart: going to test it now
[23:21] Tim_Smart: hmm needs some work
[23:23] bpot has joined the channel
[23:26] tlrobinson: http://ssjs.pbworks.com/Communicating-Event-Loops
[23:27] tlrobinson: featuring about 10 people in this irc channel in the background haha
[23:28] hassox: tlrobinson: where do you work?
[23:29] tlrobinson: (i see isaacs jan____ ryah mikeal jcrosby and myself... who else was there?)
[23:29] tlrobinson: hassox: 280 North
[23:29] hassox: where does everyone work?
[23:29] isaacs: huh?
[23:29] isaacs: wha?
[23:29] tlrobinson: isaacs: video from saturday
[23:29] isaacs: oh, hey, neat
[23:29] isaacs: kriskowal was there
[23:29] isaacs: lots of people, actually
[23:29] inimino: oh, cool
[23:30] hassox: Tim_Smart: kriszyp inimino deanlandolt where do you guys all work?
[23:30] hassox: ACTION is at NZX in melbourne
[23:30] mikeal: hassox: me, jchris and jan____ work for Relaxed / couch.io
[23:30] kriszyp: I work for SitePen
[23:30] Tim_Smart: self employed
[23:32] creationix has joined the channel
[23:32] hassox: mikeal: just started reading the couchdb book last night
[23:32] hassox: it's come a long way from when I last looked
[23:32] mikeal: which one?
[23:32] hassox: the one that just went to the iphone
[23:33] mikeal: the one jchris, jan and nslater wrote?
[23:33] hassox: yeah
[23:34] mikeal: awesome
[23:34] mikeal: it's a really fast moving project
[23:34] mikeal: there are even some amazing improvements since the book was published
[23:34] mikeal: and they only signed off like 3 or 4 weeks ago :0
[23:35] inimino: who's talking in this video?
[23:35] inimino: hassox: self-employed too
[23:36] hassox: mikeal: yeah last time i looked write performance was really bad
[23:36] tlrobinson: inimino: tom van cutsem, works at google
[23:36] hassox: damn this...
[23:36] hassox: I hate not living at the internet
[23:36] hassox: all the interesting stuff happens there
[23:36] mikeal: hassox: write performance has never been super bad, but it's definitely not bad anymore
[23:36] isaacs: hassox: you should move here.
[23:36] mikeal: i was testing concurrent writes today
[23:37] hassox: isaacs: many ppl I know have had a lot of trouble with visas
[23:37] hassox: and my job is in aus...
[23:37] mikeal: and on a Mac, which has the worst fsync ever, I was doing around 3 thousand writes a second with 200 concurrent writers
[23:37] inimino: tlrobinson: ah, cool
[23:37] isaacs: so just move to mexico and hop the border. tons of people do it.
[23:37] mikeal: the great thing about couchdb is the concurrent performance is linear
[23:37] mikeal: it doesn't drastically degrade at any point
[23:38] hassox: mikeal: to the same db or to a different one?
[23:38] hassox: isaacs: hah
[23:38] mikeal: same db
[23:38] hassox: right
[23:38] mikeal: multiple dbs would probably be faster
[23:39] mikeal: depending on the harddrive
[23:39] hassox: last time I checked it was peaking at 350rps to the same db with a 1 element object
[23:39] hassox: writes
[23:39] mikeal: this is a 2 element object
[23:39] mikeal: 200 writers slamming it, and it's doing over 2K, on a Mac
[23:40] mikeal: i get 350rps with my big ass doc and 300 writers :)
[23:40] mikeal: on linux it should be a lot better
[23:40] hassox: 350 with a J-Lo doc hey?
[23:40] mikeal: fsync on Mac is the worst
[23:40] hassox: not bad
[23:41] mikeal: i'm also working on some concurrent update/read tests
[23:41] mikeal: so 200 updaters and 400 readers
[23:41] mikeal: what is the average read time and the average write time
[23:41] mikeal: make sure that scales linearly
[23:42] mikeal: only problem is that when I start to do too many http clients in node i start to see a slowdown on the node side
[23:42] mikeal: which i'm pretty sure is just Mac OS's kqueue
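[editor's note: a rough sketch of the concurrent-writer throughput test mikeal describes -- N writers each issuing writes back to back against one database, then reporting writes per second. The HTTP PUT is stubbed out so the harness runs on its own; swap in a real client call against CouchDB. Writer counts, durations, and names are illustrative, not mikeal's actual harness, and this is written against current node conveniences (console.log, Date.now) rather than the 2010 API.]

    var WRITERS = 200;
    var DURATION_MS = 10000;
    var completed = 0;
    var deadline = Date.now() + DURATION_MS;

    // stand-in for an HTTP PUT of a small JSON doc to one database
    function putDoc(callback) {
      setTimeout(callback, 1);
    }

    function writer() {
      if (Date.now() >= deadline) return;
      putDoc(function () {
        completed++;
        writer();             // issue the next write as soon as this one finishes
      });
    }

    for (var i = 0; i < WRITERS; i++) writer();

    setTimeout(function () {
      console.log((completed / (DURATION_MS / 1000)) + " writes/sec with " + WRITERS + " writers");
    }, DURATION_MS + 100);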
[23:42] _4get has joined the channel
[23:46] okito has joined the channel
[23:52] bryanl has joined the channel
[23:53] jcrosby has joined the channel