PMXBOT Log file Viewer


#mongodb logs for Monday the 23rd of June, 2014

[01:27:14] <staykov> hey, what is the best way to count elements of an array?
[01:27:17] <staykov> $unwind?
[01:27:28] <staykov> i have an object with 3 arrays, i want to get the count for each
[01:27:44] <staykov> i dont need the actual data in the arrays
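staykov's question went unanswered in the log; one way to do this without `$unwind` is the `$size` aggregation operator (available in 2.6+). A sketch, with the collection name `mycoll` and array fields `a`, `b`, `c` as stand-ins:

```javascript
// Hypothetical document shape: { a: [...], b: [...], c: [...] }
// $project with $size returns each array's length without returning its contents.
const countPipeline = [
  {
    $project: {
      aCount: { $size: "$a" },
      bCount: { $size: "$b" },
      cCount: { $size: "$c" }
    }
  }
];
// Shell usage: db.mycoll.aggregate(countPipeline)
```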
[07:28:19] <masch> Hi. Is it possible to aggregate or query information from more than one collection at once? To reproduce something that looks like SQL JOINs?
[07:28:49] <kali> masch: nope
[07:31:26] <masch> kali: okay, so I either have to include this information in the same collection and maybe even live with redundancy, or I need to do multiple queries and merge in the "client"?
[07:39:27] <kali> that's about right
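The client-side merge kali and masch settle on might look like this sketch; the `orders`/`customers` collections and field names are made up for illustration:

```javascript
// Hypothetical collections "orders" and "customers"; both queries run in the shell:
//   const orders = db.orders.find({ status: "open" }).toArray();
//   const customers = db.customers.find(
//     { _id: { $in: orders.map(o => o.customerId) } }).toArray();

// The "JOIN" itself happens entirely client-side:
function joinOrdersToCustomers(orders, customers) {
  const byId = {};
  for (const c of customers) byId[c._id] = c;   // index customers by _id
  return orders.map(o => Object.assign({}, o, { customer: byId[o.customerId] }));
}
```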
[07:41:26] <shoshy> hey, i tried asking this yesterday without getting an answer.. is it possible to use mongo to query for 2 groups where one depends on the other? i want to group into "roots" and "leafs" where "roots" has "parent_ids" of size 0 and "leafs" has "children_ids" of size 0 AND none of their "parent_ids" is IN the "roots" group.
[07:53:35] <rspijker> shoshy: that’s a weird definition of leafs… Regardless, I can’t see a way to do it in a single query…
[07:55:14] <shoshy> rspijker: why weird? that's how i managed to store it in a hierarchical way... and had to use _ids string arrays as the objects get updated frequently
[07:56:31] <rspijker> only your definition of leaf is weird. Why isn’t something that’s directly beneath a root a leaf? As long as it has no children, it’s by definition a leaf
[07:58:02] <shoshy> because it might not have a root.
[08:01:33] <rspijker> so it’s either empty or not a tree…
[08:02:40] <shoshy> not empty, just not a tree , or not a tree YET
[08:03:13] <shoshy> as the DB grows it might become a tree in the future, hope it makes sense..
[08:03:23] <rspijker> not really :)
[08:03:30] <rspijker> at least, not from a graph theoretical pov
[08:03:35] <shoshy> true
[08:03:50] <shoshy> its not scc or a tree by mere definition
[08:03:57] <shoshy> but each component IS a tree
[08:04:03] <shoshy> a root is still a graph
[08:04:51] <rspijker> of course
[08:05:16] <rspijker> and that would preclude the root from the leafs set (although they are technically both)
[08:05:24] <shoshy> exactly :)
[08:05:44] <rspijker> but, currently, if you have a component consisting of 2 nodes, which will have a parent-child relation
[08:05:51] <rspijker> you’re also saying that the child is not a leaf
[08:06:02] <shoshy> no, the child is a leaf..
[08:06:09] <shoshy> i'm not saying that
[08:06:15] <rspijker> not by your definitions
[08:06:24] <shoshy> yes it does
[08:06:29] <shoshy> its parent_ids won't be empty
[08:06:57] <shoshy> it'll have the id of its parent(s)
[08:07:17] <rspijker> yes… and that parent will be a root
[08:07:27] <rspijker> since it does not have any parents, since it’s a 2 node component
[08:08:11] <shoshy> the parent is a root, yes... so i'll want to have it as a result of the query
[08:08:39] <rspijker> yes, but then your child has a parent which is in the roots group
[08:08:52] <rspijker> and your definition then states that it is not a leaf
[08:08:58] <shoshy> no....
[08:09:10] <shoshy> that's not my definition...
[08:09:24] <shoshy> it is not a leaf, true
[08:09:29] <rspijker> it is!
[08:09:34] <shoshy> i'm after 2 types of objects
[08:10:04] <rspijker> well, it's not a leaf by your definition of leaf, but it is a leaf.. :P
[08:10:16] <shoshy> the roots AND leafs (roots) with no children and no parents
[08:10:48] <rspijker> no children and no parents?
[08:10:51] <shoshy> so, if you talk in strongly connected components definitions, i'm only after the roots.
[08:11:42] <shoshy> yes, no children and no parents... a single node graph
[08:11:42] <rspijker> what if you have a graph like 1 — 2
[08:12:51] <shoshy> ok, what about it...
[08:13:09] <rspijker> let’s say 1 is the parent of 2
[08:13:22] <rspijker> in which groups would 1 and 2 fall?
[08:13:51] <shoshy> 2 doesn't interest me, only 1
[08:14:14] <shoshy> i'll only want to "take" the root
[08:14:25] <rspijker> ok
[08:14:32] <rspijker> and if you have 1 — 2 — 3?
[08:14:40] <shoshy> same thing...
[08:14:42] <rspijker> where 1 is the parent of 2, which is the parent of 3?
[08:14:42] <shoshy> only 1
[08:15:05] <rspijker> well, your definition calls 3 a leaf
[08:15:05] <shoshy> if i have only 1-graph
[08:15:17] <shoshy> i'll still be interested in 1
[08:15:23] <shoshy> because it is a leaf...
[08:15:25] <shoshy> it has no children
[08:15:44] <rspijker> well.. that’s what I’m asking, do you want to get that or not?
[08:15:54] <shoshy> 3? nope
[08:16:09] <shoshy> it's very simple... only the roots of strongly connected components
[08:16:16] <rspijker> then I really don't understand your initial question
[08:16:49] <rspijker> because there you speak about leafs that you want to get as well, which depend on the set of roots, making this hard
[08:17:41] <shoshy> yea, because saying "Strongly connected components" might sound like too much, and because in the situation of 1-graphs the leafs are the roots. here:
[08:18:05] <shoshy> ok, lets say i have this in the db: {[1->2->3],[4->5->6->7->8->9->....->99],[100],[101],[102]}
[08:18:10] <rspijker> well, in the example, 3 satisfies your conditions for a leaf, yet now you say you don't care about it
[08:18:19] <rspijker> ok
[08:18:25] <shoshy> the result i'm after : {[1],[4],[100],[101],[102]}
[08:18:46] <shoshy> because 3 is a leaf but it's not a root :)
[08:19:02] <rspijker> that’s satisfied by anything that has an empty parent_id set
[08:19:46] <rspijker> you seem to believe you need to do more than just check if the parent_id set is empty
[08:19:57] <rspijker> can you give me an example where that check would not be sufficient?
[08:22:12] <shoshy> i'm afraid of a situation where it says it has a parent id and the parent id doesn't exist in the db
[08:22:16] <shoshy> so it's a root...
[08:22:32] <shoshy> because of some application level / network error what ever..
[08:22:50] <shoshy> how can i check that it exists in the same query.. i saw there's $unwind
[08:22:54] <shoshy> in the aggregate
[08:23:15] <rspijker> there is
[08:23:17] <shoshy> but can i do $exists :{ "$parent_ids
[08:23:30] <shoshy> sorry... i mean: $exists: "$parent_ids"
[08:23:33] <rspijker> that will only check if the field exists
[08:23:41] <rspijker> so, you can, but it won’t do what you want
[08:23:52] <shoshy> yes, sorry... im new to mongo
[08:23:57] <rspijker> no worries
[08:23:59] <shoshy> it got me confused...
[08:24:53] <rspijker> I don’t think you can easily do this…
[08:25:03] <shoshy> now you understand what i'm after? its to use "aggregate" i guess, with $unwind and checking the $not: { $exist ... in the db
[08:25:08] <rspijker> referential integrity is somewhat difficult in mongo
[08:25:17] <rspijker> yeah, I get it
[08:25:19] <shoshy> ahhh....
[08:25:26] <rspijker> you want to check each parent_id for existence
[08:25:38] <shoshy> yes
[08:25:53] <rspijker> how performant does this have to be?
[08:26:26] <shoshy> the query will be limited to a small set 10s of records
[08:26:42] <shoshy> so it's nothing big... and i assume it'll be fast enough
[08:27:00] <rspijker> easiest way I can think of is to use javascript
[08:27:03] <shoshy> records = objects :)
[08:27:18] <rspijker> get a list of all _ids
[08:27:32] <shoshy> i see.. so application layer ? or need to delve into implementing the map reduce of mongo?
[08:27:57] <shoshy> or it's an overhead
[08:29:27] <rspijker> I really dislike map reduce
[08:29:30] <rspijker> I’d do it in the app layer
[08:29:34] <rspijker> just get a list of all _ids
[08:29:52] <rspijker> then use aggregate and after the unwind, do a $match where you filter out anything that isn’t in the list of _ids
[08:30:48] <shoshy> ahhh i see...
[08:31:08] <shoshy> ok, thanks a lot for your time and patience... going to test it out
[08:31:32] <rspijker> good luck :)
[08:32:21] <shoshy> ty!
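A sketch of rspijker's suggestion (the collection name `nodes` is hypothetical): fetch every `_id` first, then use `$unwind` plus a `$match` that keeps only parent references pointing at missing documents:

```javascript
// Step 1 (shell): collect every existing _id:
//   const existingIds = db.nodes.distinct("_id");

// Step 2: unwind parent_ids and keep only entries whose parent is missing.
// Documents surviving this pipeline claim a parent that isn't in the db,
// so they are effectively roots despite a non-empty parent_ids.
function danglingParentPipeline(existingIds) {
  return [
    { $unwind: "$parent_ids" },
    { $match: { parent_ids: { $nin: existingIds } } }
  ];
}
// Ordinary roots remain a plain query: db.nodes.find({ parent_ids: { $size: 0 } })
```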
[08:45:02] <kali> Nodex: if you read that, the air control strike is cancelled
[11:56:09] <prateekp> has anyone used mongoid from behind a proxy
[11:57:15] <kali> proxy ? what kind of proxy ?
[11:59:34] <prateekp> export http_proxy = http://proxy22.iitd.ernet.in:3128
[11:59:56] <kali> mongodb is not a http protocol
[12:00:04] <prateekp> everything other than mongoid works when i export proxy
[12:00:19] <prateekp> ohh
[12:00:21] <kali> so mongoid will rightfully ignore the http_proxy
[12:00:50] <prateekp> so there is no way to bypass proxy
[12:04:34] <kali> i'm not sure what you mean
[13:56:34] <grkblood13> would this be the place for mongo related nodejs questions?
[14:31:18] <rspijker> grkblood13: just ask your question
[14:33:16] <KamZou> Hi, what's the best way to upgrade from a 2.4 package to a 2.6 (on the same server i've multiple instances, a configserver, a primary RSM) on debian pls?
[14:34:50] <rspijker> KamZou: there is documentation for this specific purpose
[14:35:35] <rspijker> http://docs.mongodb.org/manual/release-notes/2.6-upgrade/
[14:38:07] <KamZou> rspijker, it's a very particular case here. That link doesn't mention multiple instances on the same server
[14:38:22] <szborows> hi there
[14:38:54] <szborows> what should I do when I get error "Found pre-existing index .... Disallowing creation of new index type 'text'"?
[14:39:16] <Jinsa> hi all
[14:40:35] <Jinsa> could someone help me a bit, I'm setting up SSL following the tutorial (creating .pem & co) but the server isn't starting anymore with ssmode=requireSSL and the pem file created :(
[14:41:07] <rspijker> KamZou: I don't really see how them being on the same server impacts things all that much? If they are all on the same server, I'm guessing it's not production, so you shouldn't be too worried about downtime
[14:42:08] <rspijker> szborows: a collection can have at most 1 text index
[14:42:26] <rspijker> Jinsa: what errors? what does the log say?
[14:42:30] <szborows> rspijker: okay, but how can I remove existing one?
[14:42:38] <KamZou> rspijker, i've some instances on the same server (not all) but yes it's production
[14:42:52] <rspijker> szborows: db.coll.dropIndex
[14:43:24] <szborows> rspijker: I did db.coll.dropIndexes()
[14:43:32] <Jinsa> rspijker: nothing in /var/log/mongodb/mongod.log , is there somewhere else to check or do I have to install something more to get more logs?
[14:43:37] <szborows> so all indexes except _id_ were removed
[14:43:42] <rspijker> KamZou: Is there a specific step in the docs I pointed you to that you can’t execute?
[14:43:46] <szborows> just one remains
[14:44:15] <rspijker> szborows: that’s weird… Is this a sharded cluster?
[14:44:22] <szborows> nope
[14:44:40] <KamZou> rspijker, in 2.4 mongodb was using a "mongodb-10gen" package. Now it's split into multiple packages (one per role). So i first have to remove the package "mongodb-10gen" before I can install mongodb-org-server for instance.
[14:44:46] <rspijker> Jinsa: you sure that it's logging to that file? How are you passing the options in?
[14:45:34] <Jinsa> rspijker: i'm using the service one (on debian) and it's configured by /etc/mongod.conf
[14:46:06] <Jinsa> rspijker: logpath=/var/log/mongodb/mongod.log
[14:46:07] <KamZou> rspijker, my problem is : i've read that the way to upgrade is : 1) upgrade the metadata 2) upgrade configservs 3) upgrade ReplicaSet
[14:46:28] <rspijker> Jinsa: ok
[14:46:34] <KamZou> rspijker, on my production cluster, on a particular server i've : a configserv and a RSM
[14:46:56] <rspijker> Jinsa: and when you say it isn’t starting… ?
[14:46:59] <rspijker> nothing at all?
[14:47:01] <KamZou> rspijker, so i'm upgrading the cfgserv, the RSM will have the upgrade at the same time
[14:47:08] <Jinsa> nothing :)
[14:47:29] <Jinsa> rspijker: [FAIL] Starting database: mongod failed!
[14:47:36] <rspijker> can you try starting it manually?
[14:47:47] <rspijker> you can still point it to the cfg file, just without the service
[14:48:10] <rspijker> mongod -f /etc/mongod.conf
[14:48:21] <Jinsa> rspijker: ok i'll try, but without SSL cong it's starting nicely
[14:48:29] <Jinsa> *conf
[14:48:46] <rspijker> szborows: so what does db.coll.getIndexes() give you?
[14:49:06] <rspijker> KamZou: I see
[14:49:12] <rspijker> but that’s just the binaries, right?
[14:49:19] <rspijker> as long as you turn them on in the right order...
[14:49:48] <szborows> rspijker: it gives me "{
[14:49:48] <szborows> "v" : 1,
[14:49:48] <szborows> "key" : {
[14:49:48] <szborows> "_id" : 1
[14:49:48] <szborows> },
[14:49:48] <szborows> "ns" : "employees.employees",
[14:49:49] <szborows> "name" : "_id_"
[14:49:49] <szborows> }
[14:50:01] <szborows> sorry, how can I get flat json?
[14:50:26] <rspijker> just use a pastebin next time :)
[14:50:33] <rspijker> “flat” json is unreadable anyway
[14:51:09] <Jinsa> rspijker: ok I have more info but it sounds crazy ^^ : Error parsing INI config file: unknown option sslMode
[14:51:13] <szborows> ;)
[14:51:22] <rspijker> Jinsa: what version are you on?
[14:51:32] <rspijker> this is weird szborows…
[14:51:51] <KamZou> rspijker, hmmm so the RSM instance will keep using the older binary in memory? Even while upgrading the configserv instance?
[14:51:53] <szborows> well maybe this is because of copy
[14:51:59] <rspijker> only thing I can think of is that it is still doing index removal in the background if you had just removed them
[14:52:02] <szborows> I copied collections between machines
[14:52:11] <szborows> I copied from Mongo.2.4.x to Mongo2.6.x
[14:52:11] <rspijker> shouldn’t matter all that much...
[14:52:17] <rspijker> ah, that might be a problem
[14:52:20] <rspijker> how did you copy?
[14:52:35] <szborows> sorry, I don't remember that
[14:52:36] <rspijker> KamZou: if you never turn it off, and assuming it’s linux
[14:52:40] <szborows> perhaps using copydb
[14:52:49] <KamZou> rspijker, yes and yes
[14:52:50] <KamZou> ok
[14:53:05] <KamZou> well thank you
[14:53:07] <rspijker> szborows: can you try again?
[14:53:14] <Jinsa> rspijker: I'm on last 2.6 mongodb-org-server from http://downloads-distro.mongodb.org
[14:53:27] <rspijker> KamZou: this isn’t foolproof of course… and it might not be a great idea… But then again, neither is running a bunch of stuff on the same host ;)
[14:53:47] <szborows> rspijker: I can, but not right now. How should I copy everything (including text indexes)?
[14:53:47] <rspijker> if it's just a RSM, I'd advise just turning them off and using the other RSMs to do failover until you've updated what needs updating
[14:54:07] <rspijker> szborows: mongodump - mongorestore is probably safest option
[14:54:16] <rspijker> I meant running the create index command again btw
[14:54:22] <KamZou> rspijker, there's no good choice here... Too many servers
[14:54:23] <rspijker> just in case it was still doing background index removal before
[14:55:03] <rspijker> Jinsa: it might be net.ssl.mode then
[14:55:09] <rspijker> Jinsa: config file changed a bit, I believe
[14:55:10] <szborows> okay, thanks a lot. will get back here if the problem remains
[14:55:13] <rspijker> Jinsa: http://docs.mongodb.org/manual/reference/configuration-options/
[14:55:18] <rspijker> szborows: cool :)
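If szborows goes the mongodump/mongorestore route suggested above, the standard invocation looks roughly like this (host names and paths are placeholders):

```shell
# Dump from the source host; index definitions travel in the dump metadata
mongodump --host old-host --db employees --out /tmp/dump

# Restore on the target host; indexes (including text) are recreated
mongorestore --host new-host /tmp/dump
```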
[14:55:24] <rspijker> KamZou: yeh, I know the feeling...
[14:56:30] <KamZou> rspijker, but i agree it'd be a better option
[14:57:47] <Jinsa> rspijker: "unknown option net.ssl.mode", arg! I'll check the doc but it seems the doc may not be up to date :'(
[14:58:25] <rspijker> Jinsa: might be problematic if you are mixing two styles of config file
[14:58:37] <rspijker> think the new style is json and it probably doesn’t mix well with the old style
[14:58:44] <rspijker> so you might have to redo the cfg file in the new format
[15:00:22] <Jinsa> rspijker: ok, is there a conv tool to help me a bit or is there a template I could start with? (thanks for your help really!)
[15:00:36] <rspijker> at the top of the document I linked there is an example
[15:01:00] <rspijker> so it’s not with dots, but indentation, more JSON-like almost
[15:01:30] <Jinsa> ok
[15:01:41] <Jinsa> so I just have to write it that way
[15:01:49] <Jinsa> and keep the same filename right?
[15:02:01] <rspijker> correct
[15:02:12] <Jinsa> ok I'll give a try :)
[15:03:01] <rspijker> good luck :)
[15:04:13] <Jinsa> do I have to put some {} like in JSON?
[15:08:00] <rspijker> Jinsa: nope
[15:08:15] <Jinsa> ok thanks, i'm writing it ^^
[15:15:43] <Jinsa> rspijker: arg, still one error: Unrecognized option: net.ssl.PEMKeyFile
[15:18:14] <rspijker> Jinsa: are you actually using the periods in there?
[15:19:32] <rspijker> net: <NEWLINE> ssl: <NEWLINE> PEMKeyFile: actual file<NEWLINE> mode: requiressl …
[15:19:42] <rspijker> I’ve got to go though
[15:19:46] <rspijker> good luck man :)
[15:19:46] <Jinsa> only spaces
[15:19:55] <Jinsa> ok thanks, bye ;)
[15:20:02] <rspijker> is it compiled with SSL support?
[15:20:10] <Jinsa> mmhhh...
[15:20:13] <rspijker> the default mongod binary does not have SSL support
[15:20:16] <Jinsa> arg!
[15:20:21] <rspijker> only the enterprise version and the specially compiled one
[15:20:22] <Jinsa> that may be the point
[15:20:25] <Jinsa> ok
[15:20:38] <rspijker> http://docs.mongodb.org/manual/tutorial/configure-ssl/
[15:21:01] <Jinsa> many thanks!
[15:21:04] <Jinsa> I'll do that
[15:21:05] <Jinsa> :)
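For reference, the 2.6-style YAML layout rspijker spells out above might look roughly like this in `/etc/mongod.conf` (paths are examples; as noted at the end of the thread, it also requires an SSL-enabled build):

```yaml
# 2.6-style config: nesting by indentation, no dots, no braces
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem
```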
[15:21:16] <kmac_> can anyone here help me reset MMS two-factor authentication?
[15:21:37] <rspijker> kmac_: launch a support ticket
[15:21:41] <rspijker> I had to do it this morning
[15:21:45] <kmac_> will do rspijker, thanks
[15:21:49] <rspijker> fairly annoying, but they resolved it real quick :)
[15:22:13] <rspijker> ok, now I’m really off
[15:57:19] <grkblood13> I'm trying to use mongo as my sessionstore for a node.js but I'm having trouble reading my sessions in my instances on socket.io. https://gist.github.com/anonymous/83804ea87e80c34282e0
[16:03:45] <theRoUS> Derick: what ebook would you recommend for a mongodb n00b aspiring to become a pro?
[16:13:07] <nfroidure> Hi. Are mongoose schema statics reserved for functions or can i attach values like objects too?
[16:44:39] <elm> let us consider a network partitioning:
[16:45:09] <elm> both sides will elect their own master because they don't know that the servers in the other partition are still running
[16:45:27] <elm> so what is gonna happen if you re-merge these partitions later on?
[16:45:39] <elm> then we have two masters!
[16:45:46] <elm> how will that work?
[16:46:22] <tscanausa> split brain is never fun
[16:47:22] <elm> yes, but NoSQL systems like mongodb would basically be supposed to cope with such events.
[16:48:10] <tscanausa> that is why they have things like arbiters and strongly recommend against setting yourself up for split brain.
[16:48:46] <elm> how does it work by principle?
[16:49:26] <tscanausa> it acts like a mongodb instance that holds no data and just votes in elections
[16:49:30] <elm> I suppose that a new master will be elected and that the arbiter will help them achieve a new consistent view.
[16:49:40] <elm> ah ok.
[16:50:10] <tscanausa> usually you put them in a third location like a 3rd dc
[16:50:27] <elm> and what about updates?
[16:50:54] <elm> will all applicable updates which are not in conflict with another update be applied?
[16:50:55] <tscanausa> what do you mean? a majority of nodes need to be available for write operations
[16:51:17] <elm> ah such write operations would fail; I see.
[16:52:19] <elm> and if you write something with a write concern of 1 or None then we still will have to think about partition merging
[16:53:19] <tscanausa> well you have bigger issues then partition merge at that point
[16:54:28] <elm> I believe. However if we assume that a new master is elected and that partitions will merge,
[16:54:33] <elm> so multiple update operations like this ({},{$inc:...}) would all get applied in sequence on merging?
[16:54:48] <elm> i.e. I would have the sum of all these operations at last?
[16:55:00] <elm> if there was a simple web page access counter.
[16:55:22] <elm> tscanausa: what issues exactly?
[16:55:57] <tscanausa> data even making it to nodes when in a network partition
[16:56:57] <elm> o.k. I guess there would need to be some mechanism to ignore it until a new master was elected.
[16:58:05] <elm> if every client sent with his updates the id of the node he currently believed to be the master, ignoring such data should be quite straightforward, shouldn't it?
[17:11:08] <choosegoose> so I'm trying to install mongodb and it's giving me problems because I have my OS installed on a small partition, can I just have a mongodb running somewhere in my /home/choosegoose/?
[17:15:33] <choosegoose> nvm figured it out
[17:16:40] <cheeser> you can run it from anywhere you'd like for the record
[17:27:16] <Kaiju> I'm performing a map reduce on my dev mongod instance and its outputting to a collection as expected. I'm running the same command on my production mongos instance and the resulting collection is empty. The map reduce report from production shows the correct number of emits. But the resulting collection is empty. Is there a flag or something simple I'm missing?
[17:27:57] <Kaiju> 1 mongos and 3 shards
[19:16:40] <aaronds> Hi, I'm currently using .findAndModify(query: {}, update: {"myField":true}); but I'm finding that the record will lose all its other fields... Why is this?
[19:17:37] <cheeser> from the shell?
[19:19:03] <aaronds> cheeser: both from the shell and via the Java API.
[19:19:47] <Kaiju> aaronds: $set prevents that
[19:21:30] <aaronds> Kaiju: so it does, thanks :-)
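The distinction Kaiju points at, sketched in shell syntax (collection name hypothetical): an update document without operators replaces the matched record wholesale, while `$set` patches only the named field:

```javascript
// REPLACES the matched document, leaving only myField (what aaronds hit):
//   db.coll.findAndModify({ query: {}, update: { myField: true } })

// PATCHES the matched document; its other fields survive:
const patchUpdate = { $set: { myField: true } };
//   db.coll.findAndModify({ query: {}, update: patchUpdate })
```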
[19:33:44] <theRoUS> given a collection with the field 'category' with a limited number of values, what's the equivalent of 'select category,count(*) from collection group by category' ?
[19:34:07] <theRoUS> basically, i want all the values of 'category' and the number of times each value appears
[19:34:33] <tscanausa1> theRoUS: use aggregation
[19:35:01] <theRoUS> tscanausa1: okay, but i'm a n00b. i haven't been able to figure the aggregation stuff out sufficiently yet
[19:41:33] <theRoUS> tscanausa: having trouble putting the aggregation stuff together properly
[19:43:56] <tscanausa> the id is the category and sum : { $sum :1 }
[19:44:13] <riceandbeans> if I feel somewhat comfortable with postgresql, how hard will mongodb be to learn?
[19:52:32] <theRoUS> tscanausa: yah, i just guessed that. lol
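Putting tscanausa's hint together, the SQL `select category,count(*) from collection group by category` maps onto a single `$group` stage:

```javascript
// _id takes the grouping key; { $sum: 1 } adds one per document in the group
const groupByCategory = [
  { $group: { _id: "$category", count: { $sum: 1 } } }
];
// Shell usage: db.collection.aggregate(groupByCategory)
```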
[19:54:39] <saml> you use pymongo?
[19:54:49] <saml> how can I set up logging so that each query is logged?
[20:33:18] <tscanausa> saml: the oplog has a rolling record of operations.
[20:33:38] <saml> yah i wanted to put logs in application
[20:33:56] <tscanausa> write a wrapper function?
[20:34:18] <saml> yah but i want to log queries that are copy-pastable into the mongo shell
[20:34:35] <saml> instead of foobar: {$date:...} I want foobar: ISODate(...)
[20:34:43] <saml> i guess i want too much
[20:58:15] <cozby> anyone know how to reduce the noise level in the logs
[20:58:29] <cozby> my logs are flooded with authentication notifications
[20:59:08] <cozby> I've searched online, but I haven't found a resolution to this so... wanted to verify if this has been resolved or not
[20:59:09] <cozby> ?
[21:30:35] <MacWinner> are most of the issues that folks bring up about mongo losing data or silently not writing data old news? ie, addressed in latest 2.6.x?
[21:31:04] <MacWinner> anytime I ask someone about their experience here in SF, I get vague FUD about how they love and hate mongo
[21:31:25] <cheeser> that was old news years ago
[21:31:42] <MacWinner> i've been using it in production for a while now for a simple caching layer and it seems to work great..
[21:31:54] <cheeser> it *is* great! ;)
[21:32:16] <MacWinner> cheeser, cool.. are there any similar issues outstanding now? I'm actually looking to migrate some file related stuff to GridFS
[21:32:41] <cheeser> none i'm aware of
[21:32:41] <MacWinner> just want to be as sure as possible that I'm not going to get bitten..
[21:36:49] <kali> MacWinner: there are still situations where lost writes can happen. you need to read and understand everything you can find about replication and write concern
[21:38:22] <kali> MacWinner: http://aphyr.com/posts/284-call-me-maybe-mongodb this is testing in extremely adverse conditions, worth a read
[21:39:04] <kali> MacWinner: you can compare with the other systems tested by the same guy in the same kind of conditions, and mongodb does not look so bad
[21:58:11] <rekibnikufesin> I'm prepping for 2.4 -> 2.6, running db.upgradeCheckAllDBs(); it's been running for several hours and has scrolled past my terminal buffer limit. Any chance it'll give me a summary output or log file when it's done?
[22:11:52] <MacWinner> kali, thank you!
[22:53:24] <hahuang65> hey guys, I need to find all documents in a collection that have any fields besides 'foo' and 'bar'. How might I do that?
[22:57:52] <joannac> do you know what the other fields are called?
[22:58:25] <hahuang65> joannac: no I don't. They're considered "metadata"
[22:58:37] <hahuang65> joannac: which is why this is problematic :p
[23:04:59] <hahuang65> joannac: no ideas? :p
[23:07:41] <joannac> hahuang65: I'm working. If you would like to pay me to get in my queue, I can give you ideas ;)
[23:10:31] <Kaiju> hahuang65: $ne with a $or should do the trick
[23:15:55] <hahuang65> joannac: that'd depend on if my company would be willing to do so :)
[23:16:18] <hahuang65> Kaiju: how so? $ne and $or would only work on fields for which I know the names of.
[23:18:31] <Kaiju> hahuang65: how about $and with {$exists : false} ?
[23:19:44] <hahuang65> Kaiju: I feel like you're not understanding what I'm trying to do?
[23:22:09] <Kaiju> hahuang65: I'm pretty sure you're looking for any documents that do not have the properties foo and bar. I would write it like this {$and: [ {foo : {$exists : false}}, {bar : {$exists : false}}]} is that correct?
[23:23:11] <hahuang65> Kaiju: Nope. I'm looking for any documents that have fields OTHER than foo and bar. For example, if one document has foo = 1, bar = 2, and another has foo = 1, bar = 2, and baz = 3, I want the document that has baz as a field
[23:24:32] <Kaiju> hahuang65: Oh gotcha, I'm not sure on that one. Without a regular pattern to follow I would handle that in the application layer.
[23:25:31] <hahuang65> Kaiju: I just needed it for a support document, so it wouldn't be application code anyways. That would never be performant.
[23:26:36] <Kaiju> hahuang65: There is always map reduce to get a performant solution over a large data set
[23:31:35] <hahuang65> Kaiju: right, but this is just for a report. We don't need it for app layer :)
[23:31:42] <hahuang65> Kaiju: thanks for the suggestions.
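For completeness, since the extra field names are unknown, the document keys themselves have to be inspected; the app-layer check discussed above could be as small as this sketch (the known field list comes from hahuang65's example):

```javascript
// True when a document carries any field beyond the known ones.
function hasExtraFields(doc, known) {
  known = known || ["_id", "foo", "bar"];
  return Object.keys(doc).some(function (k) { return known.indexOf(k) === -1; });
}
// Shell usage for a one-off report:
//   db.coll.find().forEach(function (d) { if (hasExtraFields(d)) printjson(d); });
```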