PMXBOT Log file Viewer


#mongodb logs for Tuesday the 6th of November, 2012

[00:00:00] <Nerp> Due to the memory mapped nature of Mongo, is it normal for a mongod process to take up 80% of a systems 8 gigs of ram?
[00:38:57] <munro> 7/w 13
[00:43:50] <wereHamster> Nerp: yes
[00:43:58] <wereHamster> it uses all it can
[00:44:09] <wereHamster> there is no point in having ram and not using it.
[01:20:53] <Nerp> wereHamster, Thank you for the answer.
[05:06:33] <alfanso> anybody able to run mongodb with php 5.4.4?
[05:06:41] <alfanso> I'm not able to setup correctly
[05:07:01] <alfanso> I dropped mongodb ts .dll file in ext folder
[05:07:07] <alfanso> made required changes in php.ini
[05:07:17] <alfanso> but as soon as I make any request to server, it crashes
[05:07:25] <alfanso> and error is point to httpd.ext
[05:07:29] <alfanso> exe*
[08:38:01] <[AD]Turbo> hola
[08:38:13] <Zelest> o/
[10:01:07] <Zelest> When it comes to filesystems and operating systems, what should I pick and why? I'm currently looking at FreeBSD (UFS2), OpenBSD (FFS) and Linux (EXT4) ..
[10:01:59] <ron> I'd go with Linux if only because it is more widely used.
[10:05:22] <NodeX> Andriod ftw
[10:05:28] <NodeX> Android *
[11:00:22] <moian> Hi, using mainly aggregation framework, I try to keep all my collections in the RAM, but is the difference really relevant ?
[11:00:35] <moian> I mean, doing this, I need to use sharding with lots of clusters. My question is: Can I do something better by using replica set instead of sharding ?
[11:07:24] <moian> anyone know ?
[11:09:22] <phrearch> hello
[11:09:23] <NodeX> not really sure what you're asking
[11:10:10] <phrearch> im trying to create a reference between two schemas using mongoose, but it errors with a duplicate key error index
[11:10:16] <phrearch> http://paste.kde.org/597896/
[11:10:19] <phrearch> http://paste.kde.org/597902/
[11:10:27] <phrearch> any idea why this fails?
[11:11:02] <NodeX> mebbe you have a null key that's duplicating?
[11:12:02] <phrearch> NodeX: dont know, its complaining about l3m0n.activities.$email_1
[11:12:21] <phrearch> the reference user has an email field, but its supposed to only store an ObjectId to the user
[11:12:54] <phrearch> the same ObjectId can exist multiple times in the document right?
[11:12:59] <phrearch> ehm in the collection
[11:13:27] <NodeX> no
[11:13:32] <NodeX> well not as _id
[11:13:39] <NodeX> it can as anotherfield
[11:13:40] <phrearch> no, i saved it as _actor
[11:13:46] <NodeX> another field *
[11:13:49] <phrearch> _actor: {type: Schema.Types.ObjectId, ref: 'User'},
[11:14:06] <NodeX> sorry ^ means nothing to me, I dont use mongoose
[11:14:44] <moian> NodeX: I use aggregation framework on big collections, I think about keeping the full database in the RAM to keep it as fast as possible, but by doing this, I need to use sharding with lots of clusters. I'm asking if doing this will really increase performances, or if simply using replica set (to increase read/write capacity) could be as good or even better.
[11:14:52] <NodeX> your error is a dupe key either on "_id" or on something else that's been set as "unique"
[11:15:00] <phrearch> ok thanks anyway
[11:15:19] <phrearch> yea it looks like its somewhere set on email
[11:15:40] <phrearch> bit confusing, since i dont want to store the user again, but only the reference to a user in another collection
[11:16:15] <NodeX> this is what happens when "join" type logic makes its way into a driver/wrapper
[11:16:27] <NodeX> and into a database not meant for joining
[11:17:09] <NodeX> moian : indexes in RAM are always faster
[11:17:11] <phrearch> yea, its not meant for joining i read
[11:17:50] <phrearch> im accustomed to using joins in odm/orm
[11:18:17] <phrearch> not really sure how to do it otherwise
[11:18:52] <phrearch> looks like i need to reference in the other schema as well
[11:19:10] <NodeX> I wouldn't know about that
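The reference-only pattern phrearch is describing can be sketched with plain dicts (field names mirror the `_actor` field from the paste; plain strings stand in for ObjectIds so this runs without a server):

```python
# Reference-only modeling sketch: the activity stores ONLY the user's id
# under _actor -- no email field, so a unique index on activities.email
# can never see a duplicate.
user = {"_id": "507f191e810c19729de860ea", "email": "alice@example.com"}
activity = {"_id": "507f191e810c19729de860eb",
            "_actor": user["_id"],      # reference, not an embedded copy
            "verb": "login"}

def resolve_actor(activity, users_by_id):
    """Client-side 'join': look the referenced user up by id."""
    return users_by_id[activity["_actor"]]

users_by_id = {user["_id"]: user}
assert resolve_actor(activity, users_by_id)["email"] == "alice@example.com"
assert "email" not in activity          # nothing duplicated into the activity
```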
[11:20:06] <phrearch> is there a good readup about how to deal with the absence of joins in nosql?
[11:20:35] <NodeX> the general rule is to avoid them
[11:20:50] <NodeX> model your data so that it doesn't need them
[11:21:31] <phrearch> ok thanks. wont that mean duplicating a lot of fields or having huge collections?
[11:21:48] <phrearch> sorry, having a hard time adapt to nosql :)
[11:22:36] <NodeX> phrearch : it does yes
[11:23:04] <NodeX> but performance does not come cheap in terms of disk space, but luckily disks are not expensive
[11:23:06] <phrearch> hm, then the tradeoff is speed vs db-size?
[11:23:12] <phrearch> indeed
[11:23:21] <NodeX> that's the trade you have to decide for your app
[11:23:26] <NodeX> one size does not fit all
[11:23:27] <phrearch> how about update queries? arent they more expensive?
[11:23:36] <NodeX> depends what you're updating
[11:24:00] <NodeX> let's take a users forename/surname/email - this doesn't get updated very often if ever
[11:24:04] <moian> NodeX: ok thank you ! that's what I thought.. Do you know what is best to do about security, when we use sharding ?
[11:24:07] <phrearch> yea indeed
[11:24:28] <NodeX> so it's safe to store it in comments statically and update if it ever happens
[11:24:51] <NodeX> moian : what part of security
[11:24:56] <phrearch> ok thanks for explaining
[11:25:11] <NodeX> it's decisions you make as per your data phrearch
[11:25:24] <NodeX> my data is different and may not fit in your model and VV
[11:25:47] <moian> for the clusters: password, firewall ?
[11:26:54] <phrearch> NodeX: thanks, ill try to figure out how to model my data as efficient as possible for nosql
[11:27:29] <phrearch> the duplicate error is gone now. seems like i did an update to the model after one was saved to the db. removing the existing entries fixed the problem
[11:28:10] <phrearch> err nm. was looking at the wrong terminal
[11:28:16] <NodeX> lolol
[11:28:23] <phrearch> :)
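NodeX's advice above (copy rarely-changing user fields into each comment, and fix the copies on the rare rename) can be sketched like this; the collection and field names are invented for illustration:

```python
# Denormalized comments: the author's name is copied into each comment at
# write time, trading disk space for read speed (no join to render a page).
users = {"u1": {"_id": "u1", "name": "Ada Lovelace"}}
comments = [
    {"_id": "c1", "author_id": "u1", "author_name": "Ada Lovelace", "text": "hi"},
    {"_id": "c2", "author_id": "u1", "author_name": "Ada Lovelace", "text": "bye"},
]

def rename_user(user_id, new_name):
    """The rare write path: update the source doc, then every copy
    (in MongoDB this would be one multi-document update)."""
    users[user_id]["name"] = new_name
    for c in comments:
        if c["author_id"] == user_id:
            c["author_name"] = new_name

rename_user("u1", "Ada King")
assert all(c["author_name"] == "Ada King" for c in comments)
```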
[11:42:58] <moian> On my clusters, do using password to access mongo is enough, or is it better to set firewall and/or something else ?
[11:46:50] <NodeX> all of the above
[11:50:38] <jamma> hi all
[11:50:57] <jamma> is there any mongodb schema expert ?
[11:53:37] <ppetermann> you might want to ask your question instead of asking for someone who knows whatever
[11:55:51] <SLNP> Hi has anyone ever come across an issue where indexes don't replicate across to new replica set nodes?
[11:56:10] <jamma> ppetermann, yeah right but it's actually quite complicated
[11:56:37] <ppetermann> fine, so it is complicated. ill go back to work.
[11:56:56] <jamma> :d
[11:57:44] <jamma> what's the best way to maintain relations between collections ? I actually try to put everything into each document, but it is very slow, i have huge amount of data
[11:58:31] <ppetermann> optimize your finds, your aggregation, map/reduce incrementally
[11:59:19] <ppetermann> make sure you have enough ram for the indexes
[12:00:02] <jamma> i'd like to try to avoid map/reduce because of low performance ?
[12:54:10] <roxlu> hi
[12:54:41] <roxlu> when I have a gridfs in my database called "tweet" and a fs prefix of "images", how can I use mongofiles to list all files in the fs?
[12:55:07] <roxlu> I tried: ./mongofiles -d tweet list images
[13:15:31] <timroes> Hi, is there any possibility to use the Java driver to store a BSONObject into MongoDB?
[13:24:40] <HongKilDong> Hi all
[13:25:48] <HongKilDong> Executing this code in mongo shell for(var i=0;i<=10;i++) {echo 'it does not work';}; I get error "SyntaxError: missing ; before statement (shell):1" What's wrong with it ?"
[13:26:46] <timroes> HongKilDong: i guess you mean print?
[13:26:53] <NodeX> ^^
[13:26:54] <timroes> you are not coding php :)
[13:27:38] <HongKilDong> OK, print , but it still doesn't work
[13:27:40] <timroes> print("It does work");
[13:27:54] <timroes> still you are not coding php, dont skip the braces :)
[13:28:17] <HongKilDong> timroes cool, it works ! thx
[13:28:26] <timroes> you're welcome :)
[13:29:43] <roxlu> how do I set metadata of a gridfile when using the c-driver and when I write chunks using gridfile_write_buffer ?
[13:29:59] <roxlu> it seems that previously set meta data is lost
[13:31:29] <HongKilDong> guys. do you have this channel in search results on freenode ?
[13:32:04] <HongKilDong> i've managed to connect this channel only by typing /join #mongodb
[13:32:38] <HongKilDong> in search there is no #mongodb channel, it's strange
[13:32:45] <NodeX> +s
[13:33:04] <NodeX> not sure if search lists mode +s
[13:33:44] <HongKilDong> HongKilDong has quit to google )
[13:34:26] <ron> HongKilDong: 'searching' channels on IRC isn't the best way to find a channel.
[13:34:32] <NodeX> +1
[14:05:01] <eka> hi all
[14:05:05] <Zelest> o/
[14:05:32] <eka> is there any reason why inline map/reduce temporary collections are always indexed? that's taking a lot of time
[14:22:11] <Bartzy> What is the admin database?
[14:23:19] <eka> Bartzy: admin
[14:30:52] <roxlu> someone using the c-driver? (looks like one of the few?) Can't find any information on how to store meta data when writing chunks with write_buffer().
[14:53:13] <roxlu> ah... mongofiles only works with default .fs. prefixes .. arg
[15:31:04] <dorong> hey
[15:31:25] <ron> ho
[15:31:39] <dorong> is there a way to do a backup for a single server mongod, running on a raid 10 ?
[15:31:52] <dorong> that is, without any downtime - can't lock it
[15:33:44] <NodeX> mongoexport ?
[15:33:50] <NodeX> not sure if it locks or not
[15:34:57] <doxavore> mongoexport doesn't lock on its own, but i'm not sure how consistent your backup would be.
[15:36:05] <NodeX> you wont get a consistent backup without a lock - that's the reason for a lock lol
[15:55:28] <snizzo> hello+
[15:55:28] <dorong> what would you say will be the locking time for a db with 8 collection, ~42200 objects, average object size: 111, data size: 4-5mb, storage size: ~14mb, index size: ~5mb, file size: ~200mb?
[15:56:09] <ron> dorong: probably the same amount of time it'd take to count grains of sand in a bottle.
[15:56:42] <ron> dorong: in other words, there's no real way to answer that. does it run on a 486DX2?
[15:57:23] <snizzo> I'm building a little script for importing txt data into mongodb in php. I build the array in php but, once imported without errors mongo shows me empty fields :( like name : ""
[15:57:29] <dorong> well, saying there's no real way to answer that is an answer. all the other is less... but thanks.
[15:57:59] <NodeX> snizzo : pastebin your script
[15:58:01] <ron> dorong: dude, if I can't have my fun, what's the point of being here?
[15:59:07] <dorong> I just asked that question myself. well, the last part of it.
[15:59:07] <snizzo> NodeX: http://pastebin.com/RbiWH0Wa
[16:00:32] <NodeX> snizzo : what is $t ?
[16:00:54] <snizzo> with file() I parse a text file
[16:00:58] <snizzo> $t is each line
[16:00:59] <ron> dorong: there there. at least you now have a good reason to start using a replica set.
[16:01:24] <snizzo> a string
[16:01:33] <NodeX> that's great but can you verify that it exists?
[16:01:35] <dorong> you're absolutely correct. however, to do that, I must backup before.
[16:03:39] <snizzo> NodeX: that the string exists? An echo results in correct output
[16:04:16] <NodeX> I would suggest it's your wrapper then
[16:04:56] <snizzo> the library?
[16:05:03] <snizzo> ok
[16:10:19] <baptistem> Hey folks!
[16:11:12] <baptistem> I would like to know if mongodb is able to add a primary key to an existing table ( == not restarting the table from scratch and add all data) ?
[16:15:02] <snizzo> baptistem: yep use ensureIndex
[16:15:53] <Derick> baptistem: mongo has no "primary key" that is auto increment
[16:16:07] <Derick> the index on _id is always there, so that could be considered the "primary key"
[16:16:20] <Derick> but "restarting a table from scratch" is something I don't understand
[16:17:04] <baptistem> ok thx guys :) I may use more mongodb :)
[16:21:33] <eka> anyone knows why my map/reduce processes create temporary mr collections?
[16:23:39] <baptistem> Derick: on a Mysql db I found advice on internet that say : 'create a table with all good columns including your primary key columns, copy data from your old table and drop the old one'
[16:24:33] <Derick> Yes, but this is MongoDB and not MySQL.
[16:25:35] <NodeX> LOL
[16:26:59] <baptistem> yes sure, my question was more is mongodb able to do better than mysql with this solution
[16:27:32] <roxlu> Derick: do you maybe know more about the c-driver? (It seems there aren't a lot of people using this) I'm trying to find out how to add meta data to a gridfile incombination with gridfile_write_buffer? (the gridfile_writer_init has no meta data parameter)
[16:27:38] <NodeX> depends on your situation
[16:28:15] <Derick> baptistem: MongoDB uses indexes as well - but how you store data is different.
[16:28:37] <Derick> baptistem: Indexing in MongoDB can be tricky, so I would suggest you read up on a few articles and tutorials and perhaps presentations about them.
[16:28:50] <Derick> roxlu: sorry, I have not used the c-driver :-/
[16:28:59] <roxlu> Derick: ok np
[16:29:23] <baptistem> I saw some article on hackers news I will deep read them :) thx for time/advice
[16:29:28] <Derick> baptistem: http://docs.mongodb.org/manual/core/indexes/ is a good overview
[16:29:45] <Derick> as well as http://emptysquare.net/blog/optimizing-mongodb-compound-indexes/
[16:42:35] <MatheusOl> does mongodb keep statistical data about the collections distribution?
[16:42:50] <MatheusOl> I mean, it would use an index even if I want, let's say, 99% of the collection?
[16:44:32] <Derick> MatheusOl: no
[16:44:53] <Derick> no statistics are kept, but it does remember "the best index" for a specific query type
[16:45:25] <Derick> MatheusOl: for some information how indexes are picked and what is remembered, please read http://emptysquare.net/blog/optimizing-mongodb-compound-indexes/#optimizer
[16:57:08] <MatheusOl> Derick: thank you
[16:57:22] <MatheusOl> Derick: I'm checking your linking
[16:57:59] <Derick> MatheusOl: feel free to follow up with questions here!
[16:58:27] <MatheusOl> Oh... And would Mongo avoid using an index somehow (even if it could)?
[16:58:42] <Derick> if there is an index, it will use it
[16:58:49] <MatheusOl> humm
[16:58:58] <Derick> but not in every case can an index be used of course
[16:59:14] <MatheusOl> Yeah, this is obvious for me
[16:59:16] <Derick> but if there is an index that covers your query in some form, it will be used
[16:59:43] <MatheusOl> And is always a b-tree, right?
[16:59:54] <Derick> unless you use a "2d" (geo-index), yes
[17:00:03] <MatheusOl> Nice
[17:00:04] <MatheusOl> Thanks
[18:48:25] <atlantaman> hi folks, trying to set a username and password using simple ruby code, so auth i still have as false
[18:49:35] <atlantaman> i make the connection and do this - add_admin = db.add_user("super","password")
[18:49:49] <atlantaman> but i get add_user as undefined
[18:50:16] <atlantaman> any example of add_user in code
[18:52:08] <atlantaman> i can do it with a shell command and --eval, but think there is a cleaner way of doing it
[19:00:40] <atlantaman> never mind, simpler than i was thinking
[19:00:53] <atlantaman> i am good
[19:13:40] <smsfail> looking to implement chat in mongo
[19:13:52] <smsfail> anyone have any async examples or ways to implement?
[19:13:58] <smsfail> going to be rolling using the python driver
[19:15:04] <MatheusOl> web?
[19:17:53] <JakePee> smsfail: http://blog.mongodb.org/post/33837586050/reactivemongo-for-scala-unleashing-mongodb-streaming
[19:19:47] <smsfail> thanks JakePee
[19:19:53] <smsfail> scala sounds hard
[19:19:58] <smsfail> so I think i will have to find another route
[19:20:16] <JakePee> yeah, I believe it's pretty green
[19:21:19] <JakePee> http://reactivemongo.org/#samples
[19:25:09] <_m> smsfail: Scala is pretty easy to use, all told.
[19:25:36] <smsfail> _m: as far as my java foo goes its all js. No fundamental java.
[20:02:25] <ckd> Has anybody had problems dealing with multiple connections in the 1.3RC PHP driver?
[20:44:58] <bjori> hopefully not :)
[20:45:07] <bjori> as the 1.3 release is supposed to be solving all those problems :D
[20:46:04] <bjori> ckd: have you encountered any issues with it?
[20:49:09] <bjori> ckd: woha. just saw your post.. that seems.. scary
[21:12:14] <ckd> bjori: Could be scary, could be me being dumb! (Definitely hoping for the latter!)
[21:17:50] <bjori> ckd: no no, I've confirmed it locally - this is a bug in the driver
[21:18:07] <bjori> ckd: I'll look at it in few minutes - thanks for letting us know!
[21:18:30] <ckd> Interestingly, I did try the other, "proper" syntax, and it appears to work correctly
[21:21:25] <ckd> oops, spoke too soon, my test was inconclusive
[21:29:15] <mrobben> hi, i'm hitting a bug I'd like someone to comment on. When I call db.command{'convertToCapped'…} on a collection that doesn't exist, I get an error output (which is fine), but the error is returned in the result. Err is null. Gist is here: https://gist.github.com/4027649
[21:29:31] <mrobben> is this expected behavior? Should I be parsing the result field for error messages?
[21:30:49] <mrobben> config: driver version 1.1.11, node version v0.8.6, mongo shell 2.0.6
[21:39:33] <crudson> mrobben: I have found a couple of operations that the server recognizes as an error but the client isn't notified as such. e.g. https://jira.mongodb.org/browse/RUBY-492 - I would report it
[21:41:18] <mrobben> crudson: thx. How did you work around your issue? did you check the 'ok' field coming back from the server in the message body?
[21:41:46] <mrobben> crudson: thankfully, that's getting set in my case.
[21:42:40] <bjori> ckd: yeah, so what is happening is we are actually picking a random connection.. as if you were looking for a random secondary on a replicaset :P
[21:43:19] <crudson> I was just goofing around trying to determine what mongo does to determine a "system" collection, so I tried it. Browsing the server source code there is a IsSystem() (or similar) method that just checks the name, there is nothing else tagged for it to be a system collection, so I tried to save to
[21:43:23] <crudson> that
[21:43:29] <ckd> hah! finally something not my fault
[21:43:43] <crudson> so it wasn't really an issue for me, just something I found out whilst experimenting
[21:45:35] <crudson> I would probably parse the return message for now with an obvious TODO/NOTE tag in code that this will change.
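The workaround crudson suggests (parse the reply rather than trust the driver's err argument) can be sketched as a check on the command reply document; the reply shapes below are hypothetical, but `ok` and `errmsg` are the usual MongoDB command-reply fields:

```python
def command_failed(reply):
    """Treat a command reply as failed when ok is falsy or errmsg is set.

    MongoDB command replies carry {"ok": 1} on success and typically
    {"ok": 0, "errmsg": "..."} on failure -- even when, as in mrobben's
    gist, the driver's err callback argument comes back null.
    """
    return not reply.get("ok") or "errmsg" in reply

# Hypothetical replies mirroring the convertToCapped-on-missing-collection case:
assert command_failed({"ok": 0, "errmsg": "source collection does not exist"})
assert not command_failed({"ok": 1})
```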
[22:01:24] <epicheals> I currently have a single mongodb server. Is it possible to "add" storage to it by setting up a new mongodb server or will it be limited to the amount of storage of the smallest instance?
[22:01:34] <epicheals> If it is possible what would I google?
[22:06:31] <ckd> epicheals: are you talking about preallocation, or are just concerned that you're limited by the size of your instance's disk?
[22:06:50] <Xorlev> Hey guys, so here's the situation: We had an issue where our secondary was catching up from an overnight event. When it was ~7200s behind, the primary fell over and the secondary (7200s behind) took over while we diagnosed the server. We brought it back up, the old primary went into ROLLBACK and then "replSet too much data to roll back" ... that's okay. How
[22:06:50] <Xorlev> would I force the old primary in as the source of truth given that it's stuck in ROLLBACK mode? I'm guessing it won't be elected primary even if I go and do a rs.stepDown()
[22:07:18] <Xorlev> To be clear, I don't really care too much about what's happened in the last few hours, I'm okay that the rollback failed.
[22:07:58] <Xorlev> I'd just like to get the old primary back as primary (it's on better hardware) and then worry about resyncing the secondary.
[22:09:51] <ckd> xorlev: Does this help? http://dba.stackexchange.com/questions/18020/mongodb-replica-set-secondary-stuck-in-rollback-state
[22:12:55] <Xorlev> Not really. I'm in the situation where I don't want to hose the server in the ROLLBACK state and instead just want it to become the source of truth
[22:13:26] <epicheals> chk: the actual hard drive capacity
[22:13:44] <epicheals> chk: I was wondering if I could add more space via more servers
[22:14:27] <Xorlev> epicheals: Through sharding, you can get more space. Or just run everything on lvm if you're really worried about it then you can add new disks (preferably RAID pairs) to expand the space
[22:15:34] <epicheals> xorlev: cool ty :) I unfortunately can't add more hard drives, but I can learn sharding
[22:37:33] <epicheals> I'm confused about shard keys. "Shard keys that have a high correlation with insert time are poor choices for this reason; however, shard keys that have higher “randomness” satisfy this requirement better" then in the next sentence "will make it possible for the http://docs.mongodb.org/manual/reference/config-database/#mongos to return most query operations directly from a single specific mongod instance. Your shard key should b
[22:41:25] <epicheals> I'm guessing a good shard key would be something that's multiple columns with those columns being the most commonly selected columns?
[22:52:45] <Judson> I'm trying to administer a mongodb restored from a backup.
[22:53:08] <Judson> And I'm getting weird errors trying to add/remove users from one collection
[22:53:57] <Judson> db.removeUser("olduser") -> false
[22:54:27] <Judson> db.addUser("newuser", "password") -> uncaught exception: error { "$err" : "assertion db/key.cpp:409" }
[22:56:36] <PedjaM> Hola
[22:56:48] <PedjaM> Hey guys I need a bit of advice
[22:57:00] <PedjaM> if someone have few minutes to help it would be appreciated...
[22:57:06] <PedjaM> I want to use 2 mongo servers in a shard
[22:57:30] <PedjaM> There's only one collection in the database (~1 billion records)
[22:57:30] <PedjaM> Heavy writing
[22:57:32] <PedjaM> Records have only two fields:_id and an array { :_id => 12345, :pp => [ stuff, stuff... ] }
[22:57:47] <PedjaM> _id is my user_id, integer
[22:57:55] <PedjaM> _id is the only index in the collection
[22:58:14] <PedjaM> I want sharding key to be: _id%2 (because user_ids are linear, and I want both servers equally balanced on writing)
[22:58:22] <PedjaM> how can I do that ?
[22:58:42] <PedjaM> when I trieed, all the writing went to one server, after some time that would be balanced
[22:59:00] <PedjaM> but I wanna have equal writes to both servers...
[22:59:23] <PedjaM> so, if someone have a bit of time and is willing to help...
[23:09:55] <epicheals> PedjaM if you figure that out let me know
[23:10:12] <epicheals> I'm just starting on sharding as well
[23:10:53] <PedjaM> Well, I have hoped that someone will advise here ;)
[23:12:18] <PedjaM> It is kinda strange how sharding works, it would be nice if you can choose how something is sharded (% number of servers would be ideal for me)
[23:12:46] <PedjaM> I just hate to see one server dying while second one is idling
[23:13:22] <PedjaM> anyway, some link to read or advice over irc, anything would be helpful...
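One way to approximate the `_id % 2` distribution PedjaM wants (a data-modeling workaround, since MongoDB of this era had no computed shard keys) is to store the bucket as its own field at write time and shard on a compound key like `{bucket: 1, _id: 1}`. A sketch:

```python
def with_bucket(doc, n_shards=2):
    """Add a precomputed bucket field so {bucket: 1, _id: 1} can serve
    as the shard key (bucket is a hypothetical field name)."""
    doc = dict(doc)
    doc["bucket"] = doc["_id"] % n_shards
    return doc

# Linear user_ids, as in PedjaM's collection of {_id, pp} documents:
docs = [with_bucket({"_id": uid, "pp": []}) for uid in range(1, 101)]
per_bucket = [sum(1 for d in docs if d["bucket"] == b) for b in (0, 1)]
assert per_bucket == [50, 50]   # linear ids split evenly across both buckets
```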
[23:13:44] <jaha> Question: I cant seem to wrap my head around how to create this query. How would I go about taking 1 keyword/regex and search for the same keyword in 3 different fields and return a doc that matches in at least 1 of the fields?
[23:24:19] <jaha> nvm…duh.. use OR
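jaha's "use OR" answer can be spelled out: build one `$or` query document that reuses the same regex for each field (field names here are invented; `matches` evaluates the query client-side purely for illustration):

```python
import re

def any_field_query(keyword, fields):
    """Mongo-style $or query: match keyword (case-insensitive) in any field."""
    rx = re.compile(re.escape(keyword), re.IGNORECASE)
    return {"$or": [{field: rx} for field in fields]}

def matches(doc, query):
    """Client-side evaluation of the $or, mimicking what the server does."""
    return any(clause[f].search(doc.get(f, ""))
               for clause in query["$or"] for f in clause)

q = any_field_query("mongo", ["title", "body", "tags"])
assert matches({"title": "x", "body": "All about MongoDB", "tags": ""}, q)
assert not matches({"title": "x", "body": "y", "tags": "z"}, q)
```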
[23:30:34] <FrenkyNet> hi all, just read that article about MongoDB best practices that was just tweeted. Is safe => true always the way to go when not explicitly set to false?
[23:31:55] <ttdevelop> could you share that article?
[23:33:00] <FrenkyNet> ttdevelop: http://weblogs.asp.net/andresv/archive/2012/10/24/mongodb-usage-best-practices.aspx
[23:33:07] <ttdevelop> thanks :)
[23:33:15] <FrenkyNet> and https://twitter.com/mongodb/status/265957991969349633
[23:33:20] <FrenkyNet> that was the tweet
[23:34:19] <ttdevelop> seems like so many things going on around mongodb lately
[23:34:35] <FrenkyNet> btw, Derander / mstearn : might be a good hint to tweet about pecl install mongo-beta so people can install the RC of php 1.3, searched my ass off trying to figure out how to get the RC
[23:35:08] <FrenkyNet> ttdevelop: indeed indeed, also because so many platforms embrace it
[23:35:33] <FrenkyNet> the PHP community is catching up I think, I hear more and more people are using it
[23:36:47] <ttdevelop> FrenkyNet: it is certainly nice to never have to jump through the loops to make a JSON return from server
[23:37:05] <ttdevelop> also it's much more app friendly than oracle
[23:38:58] <FrenkyNet> wouldn't know, I just test the platform again scenario's and see what helps me the most
[23:39:13] <FrenkyNet> and which one is fun to work with
[23:46:01] <ckd> FrenkyNet: it's not necessarily their intent, but it seems to me like a good idea to have only people that can figure out how to install beta packages try out the RC
[23:46:24] <FrenkyNet> ckd: maybe so yeah
[23:46:37] <FrenkyNet> I'm just not a big PECL'er
[23:46:59] <ckd> FrenkyNet: neither am I… it took me a few mins to sort it out when I first tried the 1.3 releases
[23:47:29] <ckd> FrenkyNet: but I really wanted read pref support :)
[23:48:57] <FrenkyNet> ckd: I just really wanted the aggregation stuff
[23:49:54] <ckd> FrenkyNet: I actually didn't have any problems except for a bug in the RC that the 10gen boys tracked down quickly
[23:50:04] <FrenkyNet> what bug?
[23:50:16] <FrenkyNet> with the aggregation stuff?
[23:50:27] <ckd> FrenkyNet: no, just with using multiple connections
[23:50:34] <FrenkyNet> ahh ok
[23:50:53] <ckd> FrenkyNet: my particular setup uses two separate sets, one for normal shit, one for logging
[23:55:33] <ttdevelop> ckd and FrenkyNet: do you guys play with node.js?
[23:55:47] <FrenkyNet> ttdevelop: some, not a lot
[23:55:55] <FrenkyNet> as nothing serious
[23:56:00] <ttdevelop> trying to figure out if there is a need for mongoose
[23:56:00] <ckd> ttdevelop: same for me :)
[23:56:11] <ttdevelop> vs. mongodb's node driver
[23:56:18] <FrenkyNet> can't help you there, sorry
[23:57:08] <ttdevelop> np