[01:27:28] <staykov> i have an object with 3 arrays, i want to get the count for each
[01:27:44] <staykov> i dont need the actual data in the arrays
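A minimal sketch of what staykov asked for: use `$size` in a `$project` stage so the server returns only the lengths, never the array contents. The collection name `items` and field names `a`, `b`, `c` are assumptions for illustration.

```javascript
// Pipeline for the mongo shell: db.items.aggregate(pipeline)
// $size evaluates to the length of each array field server-side.
const pipeline = [
  { $project: { aCount: { $size: "$a" },
                bCount: { $size: "$b" },
                cCount: { $size: "$c" } } }
];

// In-memory illustration of what that stage computes for one document:
function projectSizes(doc) {
  return { aCount: doc.a.length, bCount: doc.b.length, cCount: doc.c.length };
}
console.log(projectSizes({ a: [1, 2], b: [], c: [3, 4, 5] }));
```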
[07:28:19] <masch> Hi. Is it possible to aggregate or query information from more than one collection at once? To reproduce something that looks like SQL JOINs?
[07:31:26] <masch> kali: okay so I either have to include this information in the same collection and maybe even live with redundancy, or I need to do multiple queries and merge in the "client"?
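A sketch of the second option masch mentions: one query per collection, then a client-side merge. There are no server-side joins here; collection names (`orders`, `users`) and keys are assumptions for illustration.

```javascript
// shell side (hypothetical names):
//   const orders = db.orders.find().toArray();
//   const users  = db.users.find({ _id: { $in: orders.map(o => o.userId) } }).toArray();

// Client-side merge: index the right-hand rows by key, then attach matches.
function joinByKey(left, right, leftKey, rightKey) {
  const index = new Map(right.map(r => [r[rightKey], r]));
  return left.map(l => Object.assign({}, l, { user: index.get(l[leftKey]) }));
}

const orders = [{ _id: 10, userId: 1, total: 5 }];
const users = [{ _id: 1, name: "ann" }];
console.log(joinByKey(orders, users, "userId", "_id"));
```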
[07:41:26] <shoshy> hey, i tried asking it yesterday without getting an answer.. is it possible to use mongo to query for 2 groups where one depends on the other? i want to group into "roots" and "leafs" where "roots" has "parent_ids" of size 0 and "leafs" has "children_ids" of size 0 AND none of their "parent_ids" is IN the "roots" group.
[07:53:35] <rspijker> shoshy: that’s a weird definition of leafs… Regardless, I can’t see a way to do it in a single query…
[07:55:14] <shoshy> rspijker: why weird? that's how i managed to store it in a hierarchical way... and had to use _ids string arrays as the objects get updated frequently
[07:56:31] <rspijker> only your definition of leaf is weird. Why isn’t something that’s directly beneath a root a leaf? As long as it has no children, it’s by definition a leaf
[07:58:02] <shoshy> because it might not have a root.
[08:01:33] <rspijker> so it’s either empty or not a tree…
[08:02:40] <shoshy> not empty, just not a tree, or not a tree YET
[08:03:13] <shoshy> as the DB grows it might become a tree in the future, hope it makes sense..
[08:16:09] <shoshy> it's very simple... only the roots of strongly connected components
[08:16:16] <rspijker> then I really don’t understand your initial question
[08:16:49] <rspijker> because there you speak about leafs that you want to get as well, which depend on the set of roots, making this hard
[08:17:41] <shoshy> yea, because saying "Strongly connected components" might sound too much and because in the situation of a 1-graphs the leafs are the roots. here:
[08:18:05] <shoshy> ok, lets say i have this in the db: {[1->2->3],[4->5->6->7->8->9->....->99],[100],[101],[102]}
[08:18:10] <rspijker> well, in the example, 3 satisfies your conditions for a leaf, yet now you say you don’t care about it
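No single-query answer surfaced in the discussion, so here is a hedged two-step sketch of shoshy's literal conditions: fetch the roots first, then exclude their ids from the leaf query with `$nin`. The collection name `nodes` is an assumption; the field names follow the question. Note the ambiguity rspijker points out: a singleton document (both arrays empty) matches both groups here.

```javascript
// shell side (hypothetical collection "nodes"):
//   const rootIds = db.nodes.find({ parent_ids: { $size: 0 } }, { _id: 1 })
//                     .toArray().map(d => d._id);
//   db.nodes.find({ children_ids: { $size: 0 },
//                   parent_ids: { $nin: rootIds } });

// In-memory illustration of the same two passes:
function split(docs) {
  const roots = docs.filter(d => d.parent_ids.length === 0);
  const rootIds = roots.map(d => d._id);
  const leafs = docs.filter(d =>
    d.children_ids.length === 0 &&
    !d.parent_ids.some(p => rootIds.includes(p)));
  return { roots, leafs };
}

const docs = [
  { _id: 1, parent_ids: [], children_ids: [2] },
  { _id: 2, parent_ids: [1], children_ids: [] },
  { _id: 3, parent_ids: [2], children_ids: [] },
  { _id: 100, parent_ids: [], children_ids: [] }
];
const { roots, leafs } = split(docs);
console.log(roots.map(d => d._id), leafs.map(d => d._id));
```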
[13:56:34] <grkblood13> would this be the place for mongo related nodejs questions?
[14:31:18] <rspijker> grkblood13: just ask your question
[14:33:16] <KamZou> Hi, what's the best way to upgrade from a 2.4 package to a 2.6 (on the same server I've multiple instances, a configserver, a primary RSM) on debian pls ?
[14:34:50] <rspijker> KamZou: there is documentation for this specific purpose
[14:40:35] <Jinsa> could someone help me a bit, I'm setting up SSL following the tutorial (creating .pem & co) but the server isn't starting anymore with ssmode=requireSSL and the pem file created :(
[14:41:07] <rspijker> KamZou: I don’t really see how them being on the same server impacts things all that much? If they are all on the same server, I’m guessing it’s not production, so you shouldn’t be too worried about downtime
[14:42:08] <rspijker> szborows: a collection can have at most 1 text index
[14:42:26] <rspijker> Jinsa: what errors? what does the log say?
[14:42:30] <szborows> rspijker: okay, but how can I remove existing one?
[14:42:38] <KamZou> rspijker, i've some instances on the same server (not all) but yes it's production
[14:43:24] <szborows> rspijker: I did db.coll.dropIndexes()
[14:43:32] <Jinsa> rspijker: nothing in /var/log/mongodb/mongod.log , is there somewhere else to check or do I have to install something more to get more logs?
[14:43:37] <szborows> so all non-id_'s indexes were removed
[14:43:42] <rspijker> KamZou: Is there a specific step in the docs I pointed you to that you can’t execute?
[14:44:40] <KamZou> rspijker, in 2.4 mongodb was using a "mongodb-10gen" package. Now it's split into multiple packages (one per role). So I first have to remove the package "mongodb-10gen" before I can install mongodb-org-server for instance.
[14:44:46] <rspijker> Jinsa: you sure that it’s logging to that file? How are you passing the options in?
[14:45:34] <Jinsa> rspijker: i'm using the service one (on debian) and it's configured by /etc/mongod.conf
[14:46:07] <KamZou> rspijker, my problem is : i've read that the way to upgrade is : 1) upgrade the metadata 2) upgrade configservs 3) upgrade ReplicaSet
[14:53:07] <rspijker> szborows: can you try again?
[14:53:14] <Jinsa> rspijker: I'm on last 2.6 mongodb-org-server from http://downloads-distro.mongodb.org
[14:53:27] <rspijker> KamZou: this isn’t foolproof of course… and it might not be a great idea… But then again, neither is running a bunch of stuff on the same host ;)
[14:53:47] <szborows> rspijker: I can, but not right now. How should I copy everything (including text indexes)?
[14:53:47] <rspijker> if it’s just a RSM, I’d advise just turning them off and using the other RSMs to do failover until you’ve updated what needs updating
[14:54:07] <rspijker> szborows: mongodump - mongorestore is probably safest option
[14:54:16] <rspijker> I meant running the create index command again btw
[14:54:22] <KamZou> rspijker, here there's no good choice... Too many servers
[14:54:23] <rspijker> just in case it was still doing background index removal before
[14:55:03] <rspijker> Jinsa: it might be net.ssl.mode then
[14:55:09] <rspijker> Jinsa: config file changed a bit, I believe
[14:55:10] <szborows> okay, thanks a lot. will get back here if the problem remains
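For Jinsa: rspijker is pointing at the 2.6 YAML config format, where the SSL options moved under `net.ssl`. A minimal sketch, assuming a combined certificate/key file; the `.pem` path is a placeholder, not a known value from the log:

```yaml
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
```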
[15:57:19] <grkblood13> I'm trying to use mongo as my sessionstore for a node.js but I'm having trouble reading my sessions in my instances on socket.io. https://gist.github.com/anonymous/83804ea87e80c34282e0
[16:03:45] <theRoUS> Derick: what ebook would you recommend for a mongodb n00b aspiring to become a pro?
[16:13:07] <nfroidure> Ho. Are mongoose schema statics reserved for functions, or can I attach values like objects as well?
[16:55:57] <tscanausa> is data even making it to nodes when in a network partition?
[16:56:57] <elm> o.k. I guess there needed to be some mechanism to ignore it until a new master was elected.
[16:58:05] <elm> if every client sent with his updates the id of the node he currently believed to be the master, ignoring such data should be quite straightforward, shouldn't it?
[17:11:08] <choosegoose> so I'm trying to install mongodb and it's giving me problems because I have my OS installed on a small partition, can I just have a mongodb running somewhere in my /home/choosegoose/?
[17:16:40] <cheeser> you can run it from anywhere you'd like for the record
[17:27:16] <Kaiju> I'm performing a map reduce on my dev mongod instance and its outputting to a collection as expected. I'm running the same command on my production mongos instance and the resulting collection is empty. The map reduce report from production shows the correct number of emits. But the resulting collection is empty. Is there a flag or something simple I'm missing?
[19:16:40] <aaronds> Hi, I'm currently using .findAndModify(query: {}, update: {"myField":true}); but I'm finding that the record will lose all its other fields... Why is this?
[19:21:30] <aaronds> Kaiju: so it does, thanks :-)
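The cause of aaronds' disappearing fields: a bare document in the `update` argument replaces the whole record, while `$set` modifies only the named field. A minimal sketch of the fix and of the two semantics; the collection name `records` is an assumption.

```javascript
// shell side (hypothetical collection "records"):
//   db.records.findAndModify({ query: {}, update: { $set: { myField: true } } });

// In-memory illustration of replace vs. $set semantics
// (a real replacement keeps _id; omitted here for brevity):
function applyUpdate(doc, update) {
  if (update.$set) return Object.assign({}, doc, update.$set); // merge
  return update; // bare document: full replacement, other fields lost
}

const doc = { _id: 1, name: "a", myField: false };
console.log(applyUpdate(doc, { myField: true }));
console.log(applyUpdate(doc, { $set: { myField: true } }));
```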
[19:33:44] <theRoUS> given a collection with the field 'category' with a limited number of values, what's the equivalent of 'select category,count(*) from collection group by category' ?
[19:34:07] <theRoUS> basically, i want all the values of 'category' and the number of times each value appears
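theRoUS's `select category, count(*) ... group by category` maps to a `$group` stage with `$sum: 1`. A minimal sketch; only the `category` field name comes from the question.

```javascript
// Pipeline for the mongo shell: db.collection.aggregate(pipeline)
const pipeline = [
  { $group: { _id: "$category", count: { $sum: 1 } } }
];

// In-memory illustration of what $group computes:
function groupCount(docs) {
  const counts = {};
  for (const d of docs) counts[d.category] = (counts[d.category] || 0) + 1;
  return counts;
}
console.log(groupCount([{ category: "a" }, { category: "b" }, { category: "a" }]));
```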
[21:30:35] <MacWinner> are most of the issues that folks bring up about mongo losing data or silently not writing data old news? ie, addressed in latest 2.6.x?
[21:31:04] <MacWinner> anytime I ask someone about their experience here in SF, I get vague FUD about how they love and hate mongo
[21:32:41] <MacWinner> just want to be as sure as possible that I'm not going to get bitten..
[21:36:49] <kali> MacWinner: there are still situations where lost writes can happen. you need to read and understand everything you found about replication and write concern
[21:38:22] <kali> MacWinner: http://aphyr.com/posts/284-call-me-maybe-mongodb this is testing in extremely adverse conditions, worth a read
[21:39:04] <kali> MacWinner: you can compare with the other system tested by the same guys in the same kind of conditions, and mongodb does not look so bad
[21:58:11] <rekibnikufesin> I'm prepping for 2.4 -> 2.6, running db.upgradeCheckAllDBs(), running for several hours, and has scrolled past my terminal buffer limit. Any chance that it's going to give me a summary output or log file when it's done?
[23:07:41] <joannac> hahuang65: I'm working. If you would like to pay me to get in my queue, I can give you ideas ;)
[23:10:31] <Kaiju> hahuang65: $ne with a $or should do the trick
[23:15:55] <hahuang65> joannac: that'd depend on if my company would be willing to do so :)
[23:16:18] <hahuang65> Kaiju: how so? $ne and $or would only work on fields I know the names of.
[23:18:31] <Kaiju> hahuang65: how about $and with {$exists : false} ?
[23:19:44] <hahuang65> Kaiju: I feel like you're not understanding what I'm trying to do?
[23:22:09] <Kaiju> hahuang65: I'm pretty sure you're looking for any documents that do not have the properties of foo and bar. I would write it like this {$and: [ {foo : {$exists : false}}, {bar : {$exists : false}}]} is that correct?
[23:23:11] <hahuang65> Kaiju: Nope. I'm looking for any documents that have fields OTHER than foo and bar. For example, if one document has foo = 1, bar = 2, and another document has foo = 1, bar = 2, and baz = 3, I want the document that has baz as a field
[23:24:32] <Kaiju> hahuang65: Oh gotcha, I'm not sure on that one. Without a regular pattern to follow I would handle that in the application layer.
[23:25:31] <hahuang65> Kaiju: I just needed it for a support document, so it wouldn't be application code anyways. That would never be performant.
[23:26:36] <Kaiju> hahuang65: There is always map reduce to get a performant solution over a large data set
[23:31:35] <hahuang65> Kaiju: right, but this is just for a report. We don't need it for app layer :)
[23:31:42] <hahuang65> Kaiju: thanks for the suggestions.
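For a one-off support report like hahuang65's, the "fields other than foo and bar" predicate can run in the application layer (or server-side via `$where`, which is slow but acceptable for a one-off). A minimal sketch; the known-field list and sample documents are assumptions.

```javascript
// True when the document has any key beyond the known ones.
function hasExtraFields(doc, known = ["_id", "foo", "bar"]) {
  return Object.keys(doc).some(k => !known.includes(k));
}

// shell side, same predicate server-side (slow; one-off use only):
//   db.coll.find({ $where: function () {
//     return Object.keys(this).some(k => ["_id", "foo", "bar"].indexOf(k) < 0);
//   } });

const docs = [
  { _id: 1, foo: 1, bar: 2 },
  { _id: 2, foo: 1, bar: 2, baz: 3 }
];
console.log(docs.filter(d => hasExtraFields(d)));
```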