[00:00] mikeal: so it does auto-escape
[00:04] sixtus42: mikeal: the question still is, whether the escape is 7bit safe or if there are unescaped utf8 characters in between
[00:05] kriszyp has joined the channel
[00:15] isaacs has joined the channel
[18:11] nodelog has joined the channel
[18:11] ryah: there is something that proxies jabber to a local irc server
[18:11] ryah: (which is the sort of thing that should be able to be written in node easily)
[18:12] ryah: (but i guess XMPP support is blocked for a few reasons)
[18:13] sixtus42 has joined the channel
[18:17] konobi: ryah: we'll probably need it at some point i would assume
[18:19] isaacs has joined the channel
[18:21] stephenlb has joined the channel
[18:22] isaacs: jed: hey, how would you feel about having (fab) return a JSGI-style object rather than an array?
[18:22] konobi: ryah, does node do any dtrace fun at all?
[18:22] jed: you mean to the response?
[18:22] isaacs: jed: yeah
[18:22] isaacs: it seems like fab functions can return [200, headers, body], yes?
[18:23] jed: right, but that's just a shortcut.
[18:23] isaacs: i see
[18:23] tlynn has joined the channel
[18:23] jed: it turns into respond( 200, headers, body, null)
[18:23] isaacs: could you make the shortcut be to return { status: 200, headers:headers, body:body}?
[18:23] isaacs: or, make that an also-ok shortcut?
[18:23] ryah: konobi: nope- i want to add dtrace probes to v8 at some point
[18:24] jed: which in turn becomes respond( 200 ); respond( headers )... etc.
[18:24] isaacs: riterite
[18:24] isaacs: i'm just considering the difficulty of mounting existing jsgi apps inside a fab framework.
[18:24] isaacs: which would be nice to do.
[18:24] jed: well, there might be some way, but it'd need reworking for sure.
[18:24] jed: i think existing jsgi might be hard.
[18:24] jed: but it'd be easy to turn a fab app into a jsgi app.
[18:24] isaacs: jed: sure, well... mostly i'm thinking about jsgi-with-streams
[18:25] jed: sure. i think the best way for that would be to always return an object.
[18:25] isaacs: formatting the request obj is the hard bit.
[18:25] jed: which could be incomplete.
[18:25] isaacs: (for fab)
[18:25] jed: and you'd return a bunch of objects.
[18:25] isaacs: but it could be pretty easily done with middleware.
[18:25] tlynn: probably a standard/newbie question, but what's the best way to get a suitable gnutls for nodejs on Ubuntu Hardy?
[18:26] isaacs: jed: well, going both ways would be keen.
[18:26] jed: maybe you could return {status: 200}, and then return { headers: headers }, etc?
[18:26] isaacs: hm.
[18:26] isaacs: i suppose in any event, i could just have a JSGI-to-(fab) middleware that sits inside of fab, takes the fab request, formats it to what JSGI wants it to be, calls the app, then turns the response into what (fab) expects.
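The two response shapes being discussed can be sketched as a pair of converters. This is a hedged illustration only: the function names are made up, not from fab or any JSGI library.

```javascript
// Illustrative converters between fab's array shortcut
// [status, headers, body] and a JSGI-style response object.
// Neither name comes from the real libraries.
function fabArrayToJsgi(res) {
  return { status: res[0], headers: res[1], body: res[2] };
}

function jsgiToFabArray(res) {
  return [res.status, res.headers, res.body];
}
```

A middleware like the one isaacs describes would apply conversions like these in each direction around the wrapped app.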
[18:27] jed: i really like the brevity of the fab approach, but realize that type sniffing isn't for everyone.
[18:27] isaacs: and you could do the same in the other direction.
[18:27] isaacs: jed: i actually really *like* the sniffing
[18:27] isaacs: it's haskelly
[18:27] isaacs: reminds me of pattern matching in erlang. it's good magic.
[18:27] jed: i'm not sure, but you might also get away with wrapping the jsgi middleware in its OWN wrapper to make it fab compatible.
[18:28] jed: as long as it's apples to apples.
[18:28] isaacs: jed: well, yeah, either direction.
[18:28] jed: the default actions of turning returns into responds are just middleware anyway.
[18:28] isaacs: i mean, i'm just calling the wrapper "middleware", but middleware can wrap in many layers.
[18:28] jed: the nice thing about the fab approach is it makes the fewest assumptions.
[18:29] jed: i like the idea of jsgi, but it's a bit of a moving target.
[18:29] isaacs: jed: not moving fast enough!
[18:29] jed: i really dislike input.read() etc as it stands now. (which is what you're fixin' to fix)
[18:29] isaacs: actually the spec has been pretty stable with only minor changes for the last few months. getting to proper streaming is key.
[18:30] jed: most of the jsgi-node wrapper is for input.
[18:30] jed: i still don't quite grok how it works.
[18:30] isaacs: ejsgi is very simple.
[18:30] jed: it certainly isn't "eventy".
[18:30] jed: sure sure. ejsgi is much nicer.
[18:30] steadicat has joined the channel
[18:30] isaacs: i DO need to support promise responses, though, since you might want to have middleware that modifies the header/status based on the written body.
[18:30] jed: ejsgi is much closer to (fab).
[18:30] isaacs: no way to do that without promises.
[18:31] jed: that's not true, is it?
[18:31] isaacs: jed: that's because (fab) is so pretty, and i'm envious!
[18:31] isaacs: jed: well, it's tricky, right?
[18:31] jed: well, you'd certainly need to buffer the headers.
[18:31] isaacs: let's say i have middleware that says "if your app writes 'foo' in the first 1000 bytes, then i'm going to 403 on it"
[18:32] jed: but that's just another piece of middleware, no?
[18:32] jed: that's possible with fab for sure.
[18:32] jed: the middleware that does that would just need to buffer.
[18:32] isaacs: well, sure, but in jsgi, you return the headers and status immediately, before the body has started being written
[18:32] jed: oh, really?
[18:32] jed: headers and status are sync?
[18:32] isaacs: so i need to have middleware that returns a promise, and once i know what the headers/status should be, i fulfill the promise.
[18:32] isaacs: yeah
[18:32] jed: yuck.
[18:32] jed: i don't like that.
[18:33] isaacs: they effectively are in node anyhow, and they're never that big.
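isaacs's "403 on 'foo'" example can be sketched as follows. It's a hedged illustration: the status can't be decided until the first part of the body has been seen, so the middleware hands back a promise for it. The standard Promise type is used here for clarity; node's promise API at the time (addCallback/emitSuccess) looked different.

```javascript
// Sketch: peek at the first `limit` bytes of the body before
// deciding the status, returning a promise for it.
// statusAfterPeek is an illustrative name, not a real API.
function statusAfterPeek(bodyChunks, limit) {
  return new Promise(function (resolve) {
    var seen = "";
    for (var i = 0; i < bodyChunks.length && seen.length < limit; i++) {
      seen += bodyChunks[i];
    }
    // 403 if "foo" shows up within the first `limit` bytes
    resolve(seen.slice(0, limit).indexOf("foo") !== -1 ? 403 : 200);
  });
}
```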
[18:33] paulca has joined the channel
[18:33] jed: i mean, i'd like to be able to search a database and return a 404 async.
[18:33] jed: jsgi can't do that?
[18:33] isaacs: well, sure, with promises.
[18:33] isaacs: that's why i need that bit.
[18:33] isaacs: having a stream for status and header would be overkill the vast majority of the time, though
[18:34] jed: hrm, yeah that's tough.
[18:34] jed: ACTION is not a huge fan of promises, to be honest.
[18:34] ryah: me neither :)
[18:34] ryah: seems more trouble than it's worth
[18:34] mikeal1 has joined the channel
[18:35] jed: i mean, a decent promise implementation is about the same size as the entire fab codebase.
[18:35] jed: i like the idea of "here's a function, put stuff in it, mkay?"
[18:37] jed: if that's what jsgi is about, i'd really rather do it my way and write a wrapper to make jsgi apps fab-compliant.
[18:37] eck has joined the channel
[18:37] isaacs: sure, and having two-way compatibility would be winful for everybody, i'm sure.
[18:38] isaacs: but yeah, promises are usually dicey.
[18:38] ryah: node 0.0.x didn't have promises or event emitters. you'd just do req.onBody = function (body) {}
[18:38] jed: like old-school dom events.
[18:38] ryah: yeah
[18:38] isaacs: crazy
[18:39] isaacs: i like the new-school addListener attachment better, personally
[18:39] jed: the event abstraction is a win, for sure. DOM scripting moved in that direction for that reason.
[18:39] jed: custom events in jQuery are great.
[18:39] jed: but the promises thing creates more mental overhead than this feeble mind can take.
[18:39] ryah: the object creation overhead bothers me at a fundamental level
[18:40] jed: isaacs: i agree with you, and see the value in the idea of addListener, but the jQuery approach to it, not w3c.
[18:41] ryah: have you seen webdatabase? they make no attempt at a promise
[18:41] ryah: http://dev.w3.org/html5/webdatabase/#asynchronous-database-api
[18:41] jed: they just use callbacks?
[18:42] ryah: yeah
[18:42] jed: the whole reason that people don't like callbacks is because you end up with nested functions, right?
[18:42] ryah: maybe this is a better link: http://dev.w3.org/html5/webdatabase/#introduction
[18:42] jed: and you can't chain?
[18:42] ryah: yeah
[18:42] jed: i think passing a response function or stream or whatever solves that pretty nicely.
[18:44] ryah: fs.readdir("/")(function (contents) { sys.p(contents); })
[18:44] ryah: yeah - it's pretty sexy
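One way the call shape ryah shows could work, sketched: the async function returns a function that accepts the continuation, so the caller chains a callback with no promise object in sight. `fakeReaddir` is a stand-in for illustration; it does not touch the filesystem.

```javascript
// Illustrative only: mimic the fs.readdir("/")(callback) shape
// by returning a function that takes the continuation.
function fakeReaddir(path) {
  var entries = ["a.txt", "b.txt"]; // pretend this is the listing of `path`
  return function (cb) {
    cb(entries);
  };
}

var got;
fakeReaddir("/")(function (contents) {
  got = contents;
});
```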
[18:45] jcrosby has joined the channel
[18:45] jed: well, i'm a bit of a one-trick pony, but my approach is 100% based on chaining. so you don't really need promises, you just write middleware.
[18:46] jed: chaining is great, but not when it creates more problems than it solves.
[18:46] tlynn: do any of you know of a js-on-java platform that can compete with node.js for speed?
[18:46] ryah: tlynn: i think rhino is the only js-on-java interpreter
[18:47] ryah: so - no
[18:47] ryah: :)
[18:48] tlynn: yeah. you don't know of anyone implementing node.js deps to run on rhino + some non-blocking framework?
[18:48] ryah: no
[18:48] tlynn: hmm. shame.
[18:48] tlynn: very tempting to do it myself.
[18:48] orlandov: tlynn: why do you need js on java?
[18:49] jcrosby: tlynn: i have some code that implements parts of node.js using rhino + netty in scala.
[18:49] tlynn: jcrosby: that sounds exactly like what I'm after
[18:49] jcrosby: i did it as an experiment and have been happy using node, but i can put what i have on github and share it.
[18:50] ryah: jcrosby: are you in sf?
[18:51] jcrosby: ryah: yes
[18:51] ryah: jcrosby: you should come to this http://ssjs.pbworks.com/San+Francisco+-+January+2010
[18:51] tlynn: orlandov: because I already have a java nio component of my system and java has its own deployment tools, so it would be easier to use that than the C implementation of node.js.
[18:52] tlynn: and because my server is Ubuntu 8.04 which doesn't seem friendly with gnutls 2.5.0+
[18:52] jcrosby: ryah: thanks. i just got back in town and had not seen that event yet. i'll make plans to be there.
[18:53] ryah: jcrosby: cool. it will be nice to meet you
[18:53] jcrosby: ryah_away: likewise
[18:56] tlynn: it also looks like jack may be what I'm after, but it doesn't look hugely async-aware.
[18:58] isaacs: tlynn: if you compile your scripts, and do a lot of jvm tweakery, you can get rhino to go very very fast.
[18:58] isaacs: like, comparable to node.
[18:58] isaacs: there's people at yahoo doing it for pipes and yql.
[18:58] tlynn: that's useful to know
[18:58] isaacs: but yeah, easily? no.
[18:59] isaacs: if you want something jacklike, you should keep an eye on my thing: http://github.com/isaacs/ejsgi
[18:59] isaacs: it's like jsgi, but with streams instead of a fully-baked body object.
[18:59] tlynn: tbh, I trust my skills at making js go fast, it's the high level api I want to get right
[18:59] tlynn: that sounds useful, thanks
[19:00] RayMorgan_ has joined the channel
[19:00] joshbuddy has joined the channel
[19:00] tlynn: and ejsgi should be implementable running in rhino you think?
[19:00] isaacs: tlynn: nope. it's node-only right now.
[19:00] isaacs: but if you wanted to port it, that'd be rockin
[19:01] tlynn: quite possibly
[19:01] isaacs: i'm not familiar with java's non-blocking stream APIs.
[19:01] tlynn: nor am I, but how bad can it be? ;-)
[19:01] isaacs: HA! tlynn it's JAVA.
[19:01] isaacs: it can be VERY VERY BAD.
[19:01] tlynn: :-)
[19:01] isaacs: but, basically, you'd just have to replace the stream.js with your rhino thing.
[19:03] jcrosby: tlynn: java's nio is indeed very java :-) check out netty for a sane wrapping of the complexity.
[19:07] jed has joined the channel
[19:08] tlynn: hmm. well, that's all useful information, thanks all. I'll think about it. ttyl.
[19:11] tlynn has left the channel
[19:12] paulca has joined the channel
[19:15] ericflo has joined the channel
[19:23] joshbuddy has joined the channel
[19:33] deanlandolt: jcrosby: i'm googlefailing -- linky to netty?
[19:36] jed: http://www.jboss.org/netty ?
[19:37] deanlandolt: thanks jed
[19:37] jed: deanlandolt: i'll pass on your thanks to the goog.
[19:38] deanlandolt: jed: heh...i tried netty js, netty rhino, and a bunch of combinations thereof
[19:38] deanlandolt: didn't think to try "netty"
[19:39] jed: deanlandolt: and here i thought you were in beijing.
[19:39] deanlandolt: :D
[19:43] mattly has joined the channel
[19:45] mikeal1: orlandov: you around?
[19:45] orlandov: mikeal1: hey
[19:45] mikeal1: i'm about to do some concurrent testing on CouchDB vs MongoDB
[19:45] mikeal1: and i'm looking at your client
[19:45] orlandov: sweet :)
[19:45] mikeal1: is insert() a blocking call?
[19:45] orlandov: mikeal1: no
[19:45] mikeal1: or does it return a Promise that just isn't in the example
[19:45] orlandov: it should buffer the message until the next write opportunity
[19:46] mikeal1: how do I tell when success is returned from Mongo?
[19:46] orlandov: insert() doesn't generate a response from the server so there's not really a callback for when it's "done"
[19:46] mikeal1: for real
[19:46] mikeal1: Mongo just hates data integrity
[19:46] orlandov: if i understand the mongo wire protocol correctly :)
[19:46] orlandov: i think only OP_FIND and OP_GET_MORE return responses
[19:47] deanlandolt: orlandov: what if it fails? (e.g. _id already exists or some other constraint is violated)
[19:47] orlandov: deanlandolt: iirc there's a command you can issue to check whether the last command failed
[19:47] deanlandolt: ah...fair enough
[19:47] mikeal1: deanlandolt: MongoDB isn't ACID compliant, so it doesn't provide a lot of normal integrity stuff
[19:47] orlandov: yeah, nosql and all that
[19:47] deanlandolt: it provides atomicity at the document level, just like couch
[19:47] mikeal1: other nosql databases are ACID compliant :)
[19:47] jamiew has joined the channel
[19:48] mikeal1: atomicity but not MVCC, edits are in place
[19:48] orlandov: yeah, i dont think mongodb has mvcc
[19:48] mikeal1: they insist that inplace edits are faster
[19:48] mikeal1: but they are wrong
[19:48] mikeal1: because discs have to seek :)
[19:48] deanlandolt: yeah, they imply that all over the docs
[19:49] orlandov: mikeal1: really? i thought acid compliance was not very common in the nosql world?
[19:49] mikeal1: CouchDB is ACID compliant
[19:49] deanlandolt: orlandov: it's not all that common...persvr's pretty close to ACID
[19:49] mikeal1: most of the non-in-memory databases are ACID compliant as well
[19:49] deanlandolt: mikeal1: what? you'd have to define ACID pretty narrowly for that to be true
[19:49] orlandov: yeah... what about transactions for instance?
[19:49] steadicat has joined the channel
[19:49] deanlandolt: inmemory DBs violate the "D" right off the bat
[19:50] mikeal1: transactions aren't defined in ACID the way SQL does them
[19:50] b0bg0d has joined the channel
[19:50] mikeal1: it just says that when you return true for a write it MUST be persisted to disc, and if the connection is cut off midway through, the db is fine
[19:50] mikeal1: MongoDB will straight up go corrupt if it crashes mid-write
[19:51] deanlandolt: mikeal1: it's got a repair fn (i know, i know...not quite the same :)
[19:51] mikeal1: i've heard a lot of people say that didn't work
[19:51] deanlandolt: i wonder what nosql implementations are actually ACID
[19:51] orlandov: i have heard riak comes close
[19:51] deanlandolt: hmm...ouch...i imagine it'll get more reliable with time...but yeah, i agree that's pretty tacky
[19:51] sudoer has joined the channel
[19:53] mikeal1: ACID compliance is a good standard to follow to make sure you won't corrupt the db
[19:53] mikeal1: you just can't try to follow it multi-node
[19:53] orlandov: err nm, i was confusing CAP with ACID for riak
[19:53] mikeal1: because then it is SQL specific
[19:55] micheil has joined the channel
[19:55] orlandov: in any case, mikeal1, i would be very interested to see what kind of numbers you get out of your test
[19:55] tmpvar has joined the channel
[19:55] mikeal1: so
[19:55] orlandov: i've been meaning to load test the mongodb driver for a while
[19:55] mikeal1: i don't think I can do it
[19:56] mikeal1: because the concurrency tests open x number of connections which send, wait to receive, and send again
[19:56] lifo has joined the channel
[19:56] mikeal1: and then I periodically take the average response time
[19:57] orlandov: this is where you needed that callback on insert completion?
[19:57] mikeal1: since MongoDB doesn't actually respond I don't know how to measure concurrent response times
[19:57] mikeal1: yeah
[19:58] mikeal1: because if I don't, I'm just going to fill up the socket buffers with write requests and then MongoDB will sit there for like 20 minutes after the test finishes, actually writing
[19:58] mikeal1: and I can't do anything like "last command" because the test is concurrent
[19:58] isaacs: mikeal: you considered doing a pausable stream?
[19:59] isaacs: (jumping in mid-convo, sorry if i'm missing the point you're talking about)
[19:59] mikeal1: i don't see how that would help?
[19:59] mikeal1: i still need something to base the response time from MongoDB on
[19:59] isaacs: oic, so the problem is that you need to know when the insert has completed?
[19:59] mikeal1: and writes don't get a response, so i have no assurance that writes actually happen
[19:59] orlandov: ACTION wonders if something could be hacked together with nextTick() to let the event loop catch up
[20:00] mikeal1: even then, I'm going to send requests as fast as they go, and MongoDB is just going to buffer the writes until it fills up the memory
[20:00] mikeal1: which is just begging for a crash
[20:01] mikeal1: what I want to see is if the actual write performance scales linearly like CouchDB or if it drastically degrades
[20:01] mikeal1: because I'm 99% sure it doesn't degrade linearly
[20:01] isaacs: mikeal1: i see, so the issue isn't pausing the stream, it's knowing WHEN to pause the stream.
[20:01] mikeal1: more than that, it's knowing when MongoDB actually accepts the request
[20:01] bpot has joined the channel
[20:02] mikeal1: even if it doesn't write it, even if it bulks its writes every second, i still need to know that the request is in the queue for the next bulk write
[20:02] isaacs: it doesn't give you ANY kind of ack?
[20:02] isaacs: not even a status code or whatever?
[20:02] isaacs: that seems dumb.
[20:02] mikeal1: yes, it is
[20:03] mikeal1: like i said, MongoDB just hates data integrity
[20:03] isaacs: yeah, apparently.
[20:03] isaacs: i mean, couchdb doesn't go too far out of its way to support integrity, but at least it doesn't actually *attack* it
[20:03] mikeal1: yeah it does
[20:03] mikeal1: CouchDB uses two append only btrees
[20:04] isaacs: well, if your PUT would break the integrity, then it fails
[20:04] isaacs: ie, if you don't have the current _rev
[20:04] isaacs: so you GET the new _rev, and then re-PUT
[20:04] isaacs: i mean, integrity is your responsibility as a user, but at least you have the tools you need.
[20:04] mikeal1: and won't respond with a 201 until it's actually finished writing the header on the btree, so it's assured that it's in the db and the db can survive a write failure, crash, or power failure
[20:04] isaacs: right
[20:04] orlandov: http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-OPREPLY
[20:05] isaacs: ACTION never checked out mongodb, but is now even more a fan of couchdb
[20:05] mikeal1: no, if the DB defines a transaction for a write then the response needs to not happen until the write is finished and assured
[20:05] mikeal1: which is what CouchDB does
[20:05] mikeal1: MongoDB just doesn't provide a transaction for the write
[20:06] mikeal1: orlandov: what is this exactly?
[20:06] mikeal1: another type of query?
[20:06] orlandov: mikeal1: basically it says only queries return a response from a server
[20:07] mikeal1: OH
[20:07] mikeal1: i just had an idea
[20:07] mikeal1: I can keep track of the number of inserts I've done
[20:07] mikeal1: and then do count() queries
[20:08] mikeal1: and compare the difference
[20:08] mikeal1: that won't give me comparable numbers with CouchDB, but at least it will tell me if it degrades linearly
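The measurement idea mikeal1 lands on can be sketched like this: count inserts issued on the client, then periodically compare against a count() query to see how far the server lags. `db` here is a stand-in object, not the real node-mongodb driver API.

```javascript
// Illustrative lag sampler: issued-insert count vs. server-side count().
// makeLagSampler and the `db` shape are assumptions for the sketch.
function makeLagSampler(db) {
  var issued = 0;
  return {
    insert: function (doc) {
      issued++;
      db.insert(doc);
    },
    sampleLag: function (cb) {
      db.count(function (persisted) {
        cb(issued - persisted); // inserts the server hasn't counted yet
      });
    }
  };
}
```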
[20:08] mikeal1: still tho
[20:08] mikeal1: orlandov: is there some way I can buffer my insert calls so that I don't just overflow everything
[20:09] jamiew has left the channel
[20:10] mikeal1: my concern is that I'm going to blow up the client code before I push the db very far
[20:10] orlandov: mikeal1: they are buffered now, but the buffer will just keep growing until it gets a chance to flush
[20:10] mikeal1: orlandov: is there any place that I can check when the flush happens?
[20:10] mikeal1: i don't want to do another write until it's flushed
[20:10] mikeal1: because I'm going to be using a bunch of concurrent connections
[20:11] orlandov: if you do a lot of writes without flushing it will keep growing the buffer (with horrible memcpy'ing) so that might be a worst case scenario
[20:11] isaacs: mikeal: you could also just not solve that problem.
[20:11] mikeal1: basically I would like to do widgets.insert(obj).addCallback(function () { doThisAgain;})
[20:11] orlandov: mikeal1: that's why i was thinking you could do something with nextTick which would throw it into the event loop and hopefully make sure it gets dealt with faster
[20:11] mikeal1: where the Promise success is met when the flush happens
[20:12] orlandov: mikeal1: right, i don't think there's anything like that currently (emit an event when the write is done), but i have been thinking about adding that
[20:12] mikeal1: that would be super helpful
[20:13] orlandov: right now, it just flushes the whole buffer in big chunks (without respect to command boundaries)
[20:13] teemow has joined the channel
[20:13] orlandov: this has given me some ideas that ill try out tonight :)
[20:13] mikeal1: awesome :)
[20:13] mikeal1: keep me in the loop :)
[20:13] mikeal1: actually
[20:13] mikeal1: i'll just watch this repository :)
[20:14] micheil: isaacs: we really need something like an fs send to show the streaming in action
[20:14] isaacs: micheil: ryah_away claims to intend to take the fs and http stuff in that direction post-net2
[20:15] micheil: yeah, I know, sounds good
[20:15] isaacs: so hopefully, my node-stream project will be outdated soon.
[20:15] micheil: I'm just hoping there's an underlying streaming api
[20:15] micheil: as in that it's not tied into any part specifically
[20:15] isaacs: micheil: that's how it's gonna be.
[20:15] isaacs: streams are very useful in a lot of random cases.
[20:15] micheil: time to go do stuff.. -out
[20:16] mikeal1: orlandov: also, you parse the obj sent to insert every time
[20:16] technoweenie has joined the channel
[20:16] mikeal1: it would be nice if I could send that a JSON-encoded string
[20:16] mikeal1: or another method I could send a JSON encoded string to
[20:17] fictorial has joined the channel
[20:17] mikeal1: so that if I want to test sending a JSON object many times I could parse it once and send it many times and I'm not testing the parse time anymore
[20:17] orlandov: mikeal1: yeah... it does a object -> bson every time
[20:17] orlandov: *conversion
[20:17] mikeal1: i see
[20:18] mikeal1: i really hate bson
[20:18] orlandov: so maybe there's a need for a BSON object that can hang around
[20:18] orlandov: heh
[20:18] mikeal1: that would be super useful
[20:18] orlandov: s/object/object type/
[20:18] around: orlandov: what what
[20:18] orlandov: in the butt?
[20:18] mikeal1: god damn, bson, it's like "let's just forget everything we learned about transport data for the last 15 years"
[20:19] around: orlandov: you said hang around
[20:19] around: orlandov: i wanna live
[20:19] orlandov: around: hang around so you dont have to convert an object to bson on every insert
[20:19] around: orlandov: passive aggressive bastard
[20:19] orlandov: ohhh
[20:19] around: :P
[20:19] orlandov: ACTION gets things
[20:19] orlandov: ever so slowly
[20:19] orlandov: hehehe
[20:20] around: o-0
[20:20] orlandov: no we need you, around
[20:20] orlandov: see what i did there? :)
[20:20] around: yea
[20:20] around: nice
[20:20] orlandov: mikeal1: what's your beef with bson? (i'm not defending it or mongodb, just curious)
[20:20] around: ill be here
[20:21] mikeal1: we all figured out a long time ago that using binary data in a transport format was bad at the application layer
[20:21] mikeal1: and then they decided that they knew better, and took JSON and somehow turned it into something harder to work with than XML
[20:21] orlandov: haha
[20:21] orlandov: fair enough
[20:22] orlandov: it does make writing a driver more complicated, that's for sure
[20:22] mikeal1: same thing with their socket protocol, it's SQL all over, I've gotta compile their libraries in order to build a client because I need the crazy bindings
[20:22] orlandov: and who doesn't love fucking around with machine endianess?
[20:22] mikeal1: it's my favorite :)
[20:22] mikeal1: it's just weird too
[20:22] mikeal1: they did all this in order to increase single writer performance
[20:23] orlandov: mikeal1: yep... and to be fair, that is very very fast
[20:23] mikeal1: the thing is….. nobody should really care about single writer performance
[20:23] mikeal1: at least on the web
[20:23] orlandov: but there is a cost
[20:23] around: orlandov: o_0
[20:23] mikeal1: you care about concurrent performance
[20:24] mikeal1: the difference between 5ms and 200ms doesn't really matter if 200ms stays solid with thousands of concurrent requests
[20:25] rednul_ has joined the channel
[20:25] jfd: mikeal1: Sorry to interrupt your discussion but I must comment on your latest statement. :-) 5ms and 200ms can be really important in some situations
[20:25] fictorial: games
[20:25] jfd: for example! :)
[20:26] jfd: but, I was thinking about realtime services
[20:26] jfd: A trend that is growing each day
[20:27] jfd: Take facebook for example, and their chat system. A diff of 195ms can make a big difference in the long run.
[20:29] mikeal1: so
[20:30] mikeal1: for push services I wouldn't care about single reader or writer either
[20:30] mikeal1: that also needs to scale concurrently
[20:30] jfd: I think the MongoDB implementation is great for a change. I like that they're using binary for transport. I will use it in my next project. It depends on quick access to a write buffer. Anyhow, I think they should have made dual access, both binary and JSON.
[20:30] mikeal1: have you looked at CouchDB _changes :)
[20:30] mikeal1: you can get all the changes to documents in the database on a continuous feed
[20:31] mikeal1: which is much faster than a poll, it scales concurrently, and it has a filter function that can filter the updates each client gets
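Consuming the continuous _changes feed mikeal1 describes is mostly line handling on the client: CouchDB (GET /db/_changes?feed=continuous) emits one JSON object per line, so a small line-buffering parser is the core of it. The HTTP plumbing is omitted here, and `makeChangesParser` is an illustrative name, not a real library API.

```javascript
// Illustrative parser for a newline-delimited JSON feed: hand it raw
// chunks as they arrive and it calls onChange once per complete line.
function makeChangesParser(onChange) {
  var buf = "";
  return function (chunk) {
    buf += chunk;
    var lines = buf.split("\n");
    buf = lines.pop(); // keep any partial trailing line for the next chunk
    lines.forEach(function (line) {
      if (line.trim()) onChange(JSON.parse(line));
    });
  };
}
```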
[20:31] mikeal1: and since erlang is much better at holding thousands of sockets open than C or C++ based servers are you can just offload it all to the db server and not keep another piece of comet infrastructure in place
[20:32] orlandov: woah there?
[20:32] jfd: Yes, I have made some serious tests with couch and I like it a lot. But couch isn't for every situation. If you create a plain old website (say a forum, or community) I would go with couch every time. But websites aren't the only kind of application living on the internet
[20:32] mikeal1: with MongoDB i'd need another Comet server, that can parse BSON, sitting there listening to updates and keeping channels open
[20:32] orlandov: how is erlang better at holding thousands of open sockets than c or c++?
[20:33] fictorial: orlandov: it isn't of course
[20:33] jcrosby has joined the channel
[20:33] mikeal1: i can't have the clients talk directly to MongoDB because browsers don't talk BSON and can't use the MongoDB socket protocol
[20:33] mikeal1: fictorial, orlandov: erlang concurrent socket handling is worlds better than pthread
[20:33] mikeal1: or nthread for that matter
[20:33] fictorial: what does pthread have to do with anything?
[20:33] orlandov: who said anything about threads
[20:34] deanlandolt: mikeal1: isn't the erlang vm /written/ in c?
[20:34] orlandov: libev is crazy fast
[20:34] mikeal1: sure, if you write your own event machine in C it's going to be plenty fast
[20:34] mikeal1: libev is nearly as fast as erlang
[20:34] fictorial: lol
[20:34] mikeal1: but erlang doesn't allow shared memory so the VM can create a lot more concurrent "processes"
[20:34] orlandov: i'd like to see the numbers
[20:35] fictorial: mikeal1: what on earth do you think erlang uses under the covers for socket handling?
[20:35] mikeal1: the erlang VM is written in C
[20:35] mikeal1: but erlang HTTP servers blow away apache httpd
[20:35] orlandov: the only way you could possibly get faster than libev is if you use kqueue/epoll & friends directly
[20:35] fictorial: "libev is nearly as fast as erlang" - this is classic
[20:35] mikeal1: look
[20:36] mikeal1: ericsson write erlang so that it could handle hundreds of thousands of cell phone calls on routers with a single core
[20:36] mikeal1: s/write/wrote
[20:36] orlandov: erlang may be a lot of things, but i don't think you're right here
[20:36] fictorial: which has nothing to do with libev and everything to do with lightweight processes
[20:36] orlandov: erlang is a very fault tolerant, hot swappable system
[20:37] mikeal1: http://en.wikipedia.org/wiki/Actor_model
[20:37] fictorial: on linux, libev uses epoll, and gasp, so does erlang
[20:40] mikeal1: so
[20:40] mikeal1: right now node doesn't support more than one processor
[20:40] orlandov: sure it does... createChild :)
[20:40] mikeal1: that creates a separate system process
[20:40] mikeal1: inside a single VM it doesn't support x operations over n cores
[20:41] mikeal1: i read at one point that the plan is to work on WorkerThreads
[20:41] mikeal1: and make them perform over n cores
[20:41] mikeal1: since they only use message passing it's not too hard
[20:41] fictorial: Web Workers
[20:41] mikeal1: but what is the plan for using libev over n cores?
[20:42] mikeal1: sorry, i spent too long inside Mozilla
[20:42] fictorial: "But what about multiple-processor concurrency? Threads are necessary to scale programs to multi-core computers. Processes are necessary to scale to multi-core computers, not memory-sharing threads."
[20:43] fictorial: http://nodejs.org/#about
[20:43] orlandov: argh hate to run but i gotta eat :) bbl
[20:43] mikeal1: fictorial: have you had to write a program that passes messages between n cores by hand using separate processes?
[20:44] fictorial: by hand? :)
[20:44] mikeal1: it's the most annoying thing in the world
[20:44] fictorial: annoying? there are a million forms of IPC
[20:44] mikeal1: what you want is some abstraction you can send messages to that doesn't share memory, and create thousands of them, and let the VM scale them over n cores
[20:45] fictorial: I'm not sure what problem you are really solving. Perhaps take a look at ZeroMQ.
[20:46] mikeal1: i have
[20:46] mikeal1: not lately tho
[20:46] eikke__ has joined the channel
[20:47] fictorial: ( erts/emulator/sys/common/erl_poll.c ) BTW
[20:47] fictorial: in OTP
[20:47] fictorial: anyway, gotta run
[20:47] mikeal1: i know that erlang uses epoll
[20:47] mikeal1 has left the channel
[20:47] mikeal1 has joined the channel
[20:47] mikeal1: whoops
[20:50] mikeal1: anyway, my point is that there is more to concurrency than just IO, and providing a good API to epoll and kqueue is the right route but you also need to scale that over n cores efficiently
[20:50] fictorial: can't argue with that.
[20:51] mikeal1: i have a lot more faith in node.js than anyone else at the moment in making that happen
[20:51] mikeal1: it's why I'm not writing Python right now :)
[20:51] fictorial: as much as I love node.js I'm not sure I follow that ... as Python has the multiprocessing module and node.js does not yet have web workers ...
[20:51] mikeal1: erlang figured it out really well, but you've gotta have a certain kind of brain to get erlang
[20:52] mikeal1: multiprocessing is a bandaid
[20:52] mikeal1: and honestly, not a very good one
[20:52] fictorial: I like erlang in theory but I cannot get over the syntax. I hate to say that because it sounds so shallow but egads.
[20:52] mikeal1: you hit system limits really fast if you create too many
[20:53] fictorial: too many what?
[20:53] mikeal1: actually, I don't mind the syntax just because you have to think so differently when writing erlang anyway that having a syntax that looks NOTHING like any other language is a little helpful
[20:53] mikeal1: too many processes
[20:54] fictorial: sure, I mean there are always real limits. You generally don't want to have processes >> cores anyway.
[20:54] mikeal1: multiprocessing uses separate system processes, with an entire other python vm, and fakes shared memory over a shared semaphore
[20:54] mikeal1: you know how in Python, after a certain point, adding more threads just makes things slower
[20:55] mikeal1: multiprocessing hits that much faster
[20:55] fictorial: these are not threads though, they are processes.
[20:55] mikeal1: basically, once you have a process for each thread, you need to stop
[20:55] fictorial: I see. Sure, it depends on what you are trying to accomplish (and how).
[20:55] mikeal1: which means again, as a developer, you need to write code specific to the number of cores you have and don't have a good API in between
[20:56] mikeal1: the nice thing about erlang, and I hate to say it but some of the new Java libraries and Objective-C APIs
[20:56] mikeal1: is that they provide a way for you to create a "process" abstraction thousands of times that will scale over however many cores are on the box
[20:56] charlenopires has joined the channel
[20:56] mikeal1: Python not only doesn't have that, but has no plans to implement anything like that
[20:57] mikeal1: also, Python can't really take advantage of non-blocking IO
[20:57] fictorial: Huh?
[20:57] mikeal1: because callbacks in Python suck and just aren't very well optimized
[20:57] mikeal1: that's why the coroutine stuff in Python is gaining so much momentum, it doesn't use callbacks and ends up being significantly faster than any of the Twisted stuff
[20:58] fictorial: er, that's kind of full of FUD there. There's select.{epoll,kqueue} in 2.6.
[20:58] konobi: mikeal1: Objective-C... do you mean the new C blocks support?
[20:58] mikeal1: yes, it's there
[20:58] fictorial: uh, Twisted's inlineCallbacks is using generators too so..
[20:58] mikeal1: konobi: no, the GrandCentral stuff
[20:58] mikeal1: basically it's a threading abstraction that uses n threads
[20:58] konobi: sorta one and the same really
[20:59] mikeal1: fictorial: but you add a function as a callback, which creates a closure, which is very poorly optimized in Python
[20:59] mikeal1: seriously, download Cython, and just write a simple function that returns True
[20:59] mikeal1: run it in Python, and then run it in Cython
[20:59] mikeal1: and Cython will be an order of magnitude faster
[20:59] mikeal1: because Python has a ridiculous amount of overhead in function execution
[21:00] mikeal1: Python closures look kinda like javascript closures would look if you stuck an eval() in the closure
[21:01] fictorial: to quote 30 rock: "this is boring. I'm bored now"
[21:01] fictorial has left the channel
[21:01] mikeal1: haha
[21:04] fictorial has joined the channel
[21:04] fictorial: I almost forgot: http://bit.ly/RFzjV
[21:05] mikeal1: this is the best part of that link "An empowering song with great Music and Lyrics by LEONCIE who will dazzle you with her energetic performance and Piano playing, on her flying magic carpet."
[21:10] richtaur has joined the channel
[21:25] jed has joined the channel
[21:29] quirkey has joined the channel
[21:29] mattly has joined the channel
[21:29] hassox has joined the channel
[21:34] eddanger has joined the channel
[21:39] hassox has joined the channel
[21:46] eikke_ has joined the channel
[21:46] jspiros has joined the channel
[21:48] voodootikigod has joined the channel
[21:53] micheil has joined the channel
[21:59] RayMorgan has joined the channel
[22:03] rictic has joined the channel
[22:06] un0mi has joined the channel
[22:06] paulca has joined the channel
[22:07] hassox has joined the channel
[22:26] hassox has joined the channel
[22:30] mikeal1 has joined the channel
[22:32] dekz has joined the channel
[22:39] aho has joined the channel
[22:41] joshbuddy has joined the channel
[22:56] [mighty]Mike has joined the channel
[22:57] sveisvei has joined the channel
[23:05] jcrosby has joined the channel
[23:06] CIA-75 has joined the channel
[23:07] jcrosby has joined the channel
[23:07] mikeal1 has joined the channel
[23:08] eikke_ has joined the channel
[23:08] charlenopires has joined the channel
[23:11] markwubben has joined the channel
[23:13] stuartross has joined the channel
[23:15] felixge has joined the channel
[23:21] charlenopires_ has joined the channel
[23:22] stuartross has left the channel
[23:22] jed has joined the channel
[23:22] okito has joined the channel
[23:25] mikeal1: orlandov: what version of MongoDB did you build your client against?
[23:27] orlandov: mikeal1: well, i don't build against a release of mongodb, i just use the mongo-c-driver from github
[23:28] mikeal1: ahh
[23:28] ryah: is mongo-c-driver async?
[23:28] mikeal1: i thought that was part of the full mongo distribution
[23:28] orlandov: ryah: no, but i have reimplemented the parts that aren't
[23:28] orlandov: i mostly use it for bson conversions and convenience functions
[23:28] ryah: orlandov: do you use libev ?
[23:28] mikeal1: this is painful
[23:29] orlandov: ryah: yep, very similar to your node_postgres (in fact, that was my skeleton)
[23:29] mikeal1: this whole mongo process is only slightly easier than setting up postgres
[23:29] ryah: orlandov: do you work for mongo?
[23:29] orlandov: ryah: nope
[23:29] ryah: you just like it?
[23:29] ryah: do they know you've done this?
[23:29] orlandov: ryah: it's okay, i rather like how simple it is to use
[23:30] brandon_beacher has joined the channel
[23:30] orlandov: ryah: perhaps, i havent announced it or anything
[23:30] orlandov: mostly because i'm still learning this stuff
[23:30] mikeal1: i think i saw that michael dude working on a client
[23:30] mikeal1: but it wasn't as nice looking as yours
[23:31] orlandov: ah yeah, i think that one just uses the mongo shell
[23:31] felixge: ryah: I saw your discussion with jed about promise alternatives earlier, I like the idea of returning a function
[23:32] ryah: felixge: yeah, might be cute
[23:32] ryah: i think if one did that there shouldn't be error events
[23:33] ryah: it should just be a "done" event
[23:33] felixge: ryah: yeah
[23:33] felixge: I guess the only thing we might miss is the ability to wait()
[23:33] ryah: well, i guess wait() could be done with that API
[23:33] felixge: how?
[23:34] ryah: x = function () {}; x.wait = ... ; return x
[23:34] felixge: yeah, but you don't want to do that by hand everywhere
[23:34] ryah: subclass Function
[23:34] felixge: but maybe it would be even simpler to just do events.wait(callbackFn)
[23:34] cadorn has joined the channel
[23:35] ryah: in fact we could probably support that api right now
[23:35] ryah: with promises
[23:35] ryah: promise = posix.cat("blah"); promise(function () {})
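ryah's callable-promise idea can be made concrete; a minimal sketch, assuming nothing about Node's actual promise implementation (`makePromise` and `cat` are illustrative names, not real API):

```javascript
// A minimal callable "promise": calling it with a function registers
// a callback, and it returns itself so callbacks can be chained.
function makePromise() {
  var callbacks = [];
  var resolved = false;
  var result;

  var promise = function (cb) {
    if (resolved) cb(result);   // already done: fire immediately
    else callbacks.push(cb);    // otherwise queue for later
    return promise;
  };

  promise.emitSuccess = function (value) {
    resolved = true;
    result = value;
    callbacks.forEach(function (cb) { cb(value); });
  };

  // a wait() as discussed above would block until resolution;
  // omitted here because it needs event-loop support
  return promise;
}

// Hypothetical posix.cat-style async operation built on it
function cat(path) {
  var promise = makePromise();
  setTimeout(function () {
    promise.emitSuccess("contents of " + path);
  }, 0);
  return promise;
}

cat("blah")(function (contents) {
  console.log(contents);
});
```

Since the promise returns itself, `promise(a)(b)` registers two callbacks, which is the chaining felixge mentions next.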
[23:35] felixge: ryah: would be fun to see how it feels, only having 1 kind of event would make all sorts of things easier
[23:36] felixge: like chaining
[23:36] ryah: yeah
[23:36] felixge: or grouping
[23:36] ryah: on the contrary you'll probably have branches for errors in each cb
[23:36] ryah: but maybe that's okay
[23:37] ryah: if () { } else {} is less than addCallback(function (){}).addErrback(function () {})
[23:37] felixge: true
[23:37] felixge: but it would be convenient to throw an exception by default when not handling an error
[23:37] ryah: yeah
[23:38] felixge: I guess that could be easily done as well
[23:38] ryah: if there are no callbacks and the first arg is instanceof Error, then throw it
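ryah's rule here is straightforward to state in code; a sketch of just the resolution step, with `callbacks` standing in for whatever list a promise keeps internally:

```javascript
// Sketch of ryah's rule: when resolving with an Error and no one
// has registered a callback, throw rather than fail silently.
function resolve(callbacks, value) {
  if (callbacks.length === 0 && value instanceof Error) {
    throw value; // unhandled error: crash loudly with a real message
  }
  callbacks.forEach(function (cb) { cb(value); });
}
```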
[23:38] felixge: promise = posix.cat("blah"); promise(true, function (
[23:38] Jed has joined the channel
[23:38] felixge: oh
[23:38] felixge: I was thinking that even if there was a callback
[23:38] jed_ has joined the channel
[23:38] felixge: but yeah that would work as well
[23:39] felixge: I mean how often will you not want to add a callback
[23:39] felixge: at least I haven't written much code that does that
[23:39] ryah: not often
[23:39] felixge: so if the first param to the callback fn is true you'd ask to handle errors
[23:39] felixge: otherwise they'll throw
[23:39] ryah: but if you do posix.cat("blah")(function (contents) { sys.puts(contents) })
[23:39] ryah: you'll see the error
[23:39] ryah: :)
[23:39] felixge: right :)
[23:40] felixge: but I kind of like explicitly asking for handling the error
[23:40] felixge: like you do with a try..catch block
[23:40] felixge: so the unexpected may never lead to silent issues
[23:40] felixge: writing async code is hard enough without stuff silently failing :)
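felixge's opt-in idea, passing `true` to say "I will handle errors myself", can be sketched like this (all names are illustrative; this is a guess at the shape, not what his gist necessarily contains):

```javascript
// Opt-in error handling sketch:
//   promise(true, cb) -> cb receives (error, value) and handles both
//   promise(cb)       -> errors throw instead of being passed along
function makePromise() {
  var promise = function (handleErrors, cb) {
    if (typeof handleErrors === "function") {
      cb = handleErrors;        // single-argument form: just a callback
      handleErrors = false;
    }
    promise._cb = cb;
    promise._handleErrors = handleErrors;
    return promise;
  };
  promise.emit = function (err, value) {
    if (err && !promise._handleErrors) throw err; // unhandled: crash loudly
    if (promise._handleErrors) promise._cb(err, value);
    else promise._cb(value);
  };
  return promise;
}
```

With this shape, the unexpected never fails silently: you either asked for the error explicitly, or it throws.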
[23:43] richtaur has left the channel
[23:43] ryah: yeah
[23:44] ryah: in net2 it's so great, cause the bindings actually raise errors they get from the posix layer
[23:44] ryah: so you have real error messages
[23:44] felixge: nice :)
[23:44] ryah: it just crashes and tells you where and why
[23:44] felixge: make things crash hard wherever you suspect problems, I like that : ).
[23:45] ryah: evcom really covers them up
[23:45] felixge: its the best way to get this stuff right
[23:45] felixge: yeah, I noticed : )
[23:45] felixge: It was happily telling me it was Success()ful in crashing the process *g*
[23:45] ryah: isn't it like super late in berlin?
[23:46] felixge: 00:42, its ok
[23:46] mediacoder: 00:43
[23:46] jed__ has joined the channel
[23:46] mediacoder: ok..a bit later here ;-)
[23:47] ryah: mediacoder: :D
[23:48] felixge: ryah: any idea how you would name those weird callback functions?
[23:48] isaacs has joined the channel
[23:48] moosecrumpet has joined the channel
[23:53] ryah: there was a proposal in commonjs like that
[23:53] ryah: i think the poster gave them a name
[23:53] felixge: http://gist.github.com/287381
[23:53] felixge: I just converted ben's promise example
[23:53] felixge: looks quite neat I think
[23:53] felixge: much less typing
[23:55] ryah: i think you need a second branch
[23:55] felixge: oh
[23:55] felixge: for handling error on the 2nd file?
[23:55] ryah: yeah
[23:56] jed___ has joined the channel
[23:56] jcrosby has joined the channel
[23:56] felixge: damn
[23:56] felixge: :)
[23:56] paulca has joined the channel
[23:57] felixge: I guess the only way to avoid that would be to have some indication on whether or not you'd like to handle errors
[23:57] felixge: like so: http://gist.github.com/287381
[23:58] ryah: okay
[23:59] ryah: i think its a bit hard to read, but sure
[23:59] felixge: hm, yeah
[23:59] felixge: I can see what you mean
[23:59] ryah: what if the first arg was always an error object
[23:59] ryah: or nothing?
[23:59] ryah: err, contents
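ryah's closing suggestion, making the callback's first argument always an error or nothing, is worth sketching, since it is the shape Node callbacks eventually settled on; `cat` here is a hypothetical stand-in, not a real posix binding:

```javascript
// Error-first callback sketch: the callback's first argument is
// either an Error or null, and the result comes second.
function cat(path, callback) {
  // simulated async read; a real version would hit the filesystem
  setTimeout(function () {
    if (path === "missing") {
      callback(new Error("cannot read " + path));
    } else {
      callback(null, "contents of " + path);
    }
  }, 0);
}

cat("blah", function (err, contents) {
  if (err) throw err;   // branch on the error first, then use the result
  console.log(contents);
});
```

This gives each callback its own if/else error branch, but needs no second branch elsewhere and nothing fails silently.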