[04:22:56] <meltingwax> does db.collection.count() run in O(1) time?
[04:25:37] <TehShrike|Work> I'm saving documents with an array/set in it - how would I go about finding the document with the largest number of items in that list, or even just find the largest count of items in a set?
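A minimal mongo-shell sketch of one way to answer this, assuming a collection called docs with the set stored in an array field called items (both names are placeholders, not TehShrike|Work's schema): unwind the array, count elements per document, then sort and take the top result.

    // Count the elements of the (hypothetical) items array per document
    // and keep only the document with the largest count.
    db.docs.aggregate([
        { $unwind: "$items" },
        { $group: { _id: "$_id", itemCount: { $sum: 1 } } },
        { $sort: { itemCount: -1 } },
        { $limit: 1 }
    ])

On servers whose aggregation framework supports the $size expression, projecting { itemCount: { $size: "$items" } } and sorting on that avoids the unwind entirely.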
[06:23:00] <lgp171188> How do I run the db.serverStatus() command from pymongo?
[07:21:35] <lgp171188> I am trying to insert a python dictionary into a mongodb collection. The python dictionary is obtained by running the serverStatus command from pymongo. But when I try to insert the data, I get "bson.errors.InvalidDocument: key '.' must not contain '.'" What could be the issue here? I don't see any keys having a . in their names.
[07:36:53] <regreddit> why does this query work: users.findOne({ user_id: {$regex: userid, $options:'i'}}), but this does not: users.findOne({ user_id: {$regex: userid, $options:'i'}, field2:'bar'})?
[07:37:18] <regreddit> it seems that mongo $regex only allows a single field in the query
[07:38:25] <regreddit> i think this may be specific to the node native driver
[07:39:35] <mrpoundsign> are you sure field2: 'bar' exists?
[07:40:03] <regreddit> yes, the exact query works flawlessly in mongo shell
[07:40:30] <regreddit> copy/paste to node app, no results, but if i remove field2, it works
[07:41:53] <mrpoundsign> db.pixels.findOne({color: {$regex: /#/}, x: 1}).x resulted in NumberLong(1)
[07:42:17] <mrpoundsign> perhaps userid isn't a valid regex?
[07:42:33] <regreddit> mrpoundsign, can you try something for me? make the value for regex a variable, vs a literal
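For reference, a sketch of the variable-based query with the Node native driver; the connection string, the MongoClient callback API, and the escapeRegex helper are assumptions, not regreddit's actual code. A $regex condition can sit alongside plain equality conditions on other fields, so if this returns nothing the usual culprits are an unescaped metacharacter in userid or field2 simply not matching.

    var MongoClient = require('mongodb').MongoClient;

    // Escape regex metacharacters so ids like "bob+test" match literally.
    function escapeRegex(s) {
        return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    }

    MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
        if (err) throw err;
        var userid = 'SomeUser';                     // the value coming from a variable
        db.collection('users').findOne(
            { user_id: { $regex: escapeRegex(userid), $options: 'i' }, field2: 'bar' },
            function (err, doc) {
                if (err) throw err;
                console.log(doc);                    // null means "no match", not an error
                db.close();
            }
        );
    });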
[07:57:35] <regreddit> go is supposedly heavily used inside google, so it may hang around a while
[07:58:05] <mrpoundsign> If I'm wrong, and I never am... it will be. :p
[07:58:46] <mrpoundsign> I wrote http://www.pixelparty.us/ -- every pixel is in MongoDB. It runs in < 40MB of RAM, all over websockets. I'm having a blast :)
[07:59:42] <mrpoundsign> like, I am generating the export images on-the-fly haha. It's not optimized at all.
[08:23:48] <lgp171188> I ran the db.serverStatus() command on mongodb and in the data it returned, there is a key '.' in the value of the 'locks' key. Does mongodb allow '.' in key names?
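serverStatus() really does emit a literal '.' key inside the locks section (it represents the global lock), but documents you insert may not contain '.' in field names, which is what the earlier pymongo error was complaining about. A rough shell-side sketch of the workaround (the same idea applies in Python); the collection name and the '_' replacement character are arbitrary choices:

    // Recursively rename keys containing '.' before inserting the document.
    // Only plain objects and arrays are walked; Dates, NumberLongs and other
    // BSON types are passed through untouched.
    function sanitizeKeys(value) {
        if (value === null || typeof value !== 'object') return value;
        if (Array.isArray(value)) return value.map(sanitizeKeys);
        if (value.constructor !== Object) return value;
        var out = {};
        Object.keys(value).forEach(function (k) {
            out[k.replace(/\./g, '_')] = sanitizeKeys(value[k]);
        });
        return out;
    }

    db.server_status_log.insert(sanitizeKeys(db.serverStatus()));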
[10:11:32] <brucelee> is it a bad idea to run mongodb off an nfs export?
[14:37:24] <d0x> Hi, i've written an aggregation query (with an unwind and one projection) to get a flat json that i need to export as CSV for my boss. But the result is larger than 16MB. Is that an indicator that mongodb is not the best database for our use case, or should i use mapreduce or something like that?
[14:44:19] <d0x> hm, i think map reduce is not beneficial for me, because i want to transform the whole collection
[14:47:22] <ppetermann> i'm not sure what you are doing there, neither map/reduce nor the aggregation framework is really for transformation..
[14:49:29] <kali> mongodb is not for transformation at all
[14:49:36] <d0x> hm, sorry. i don't want to transform the current data
[14:49:50] <d0x> look, that's my aggregation query http://pastebin.com/bcaZVygw
[14:50:00] <d0x> and the result i need to deliver as CSV
[14:51:11] <kali> well, you can map/reduce to get the result you want in a new collection (who cares if there is no reduce) then dump the resulting collection with mongodump
[14:52:10] <kali> but you may want to check out tools like talend for the next time
[14:52:31] <kali> or just read the file, and rewrite it with any language you're comfortable with
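A sketch of kali's suggestion with made-up collection and field names: the map emits one flat row per array element (roughly what the $unwind did), the reduce is never really exercised because every key is unique, and the output lands in a new collection that mongoexport can turn into CSV.

    // Flatten each document into row-shaped documents in a new collection.
    db.events.mapReduce(
        function () {
            var doc = this;
            doc.entries.forEach(function (e) {
                // one emit per array element == one CSV row
                emit(doc._id + ':' + e.name, { platform: doc.platform, name: e.name, value: e.value });
            });
        },
        function (key, values) {
            return values[0];   // keys are unique, so nothing is ever aggregated
        },
        { out: 'flat_export' }
    );

Something along the lines of mongoexport -d yourdb -c flat_export --csv -f 'value.platform,value.name,value.value' -o export.csv (again with placeholder names) then writes the file; --csv needs an explicit field list, and the emitted values end up nested under 'value' in the output collection.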
[14:53:18] <d0x> i'm not sure whether mongodb is the right technology for our use case
[14:54:01] <d0x> We store data from different platforms to analyze it later
[17:52:17] <lampe2> i have a collection 'items'; every object in that collection has an entry "category", which is an array of strings like ["cat1","cat2"]. What is the best way to find all the distinct strings and return an object/array with only those strings? Example: item1.category = ["cat1","cat2"] and item2.category = ["cat2","cat3"], and i want category = ["cat1","cat2","cat3"]
[17:53:54] <kali> lampe2: look for the "distinct" command
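For lampe2's example that is a one-liner in the shell:

    // Collects every distinct value across all "category" arrays,
    // e.g. [ "cat1", "cat2", "cat3" ] for the two example items.
    db.items.distinct("category")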
[21:01:09] <cm0s> How do you usually translate a many-to-many relation in Mongodb? Do you create two documents, each with a field containing an array of ObjectIds to keep the references? Or do you keep the references on only one document?
[21:01:54] <cm0s> For example a Bookmark document and a Category document
[21:16:05] <kali> cm0s: whatever makes your application data access pattern efficient
[21:16:07] <cm0s> kali: yeah, it seems there is no generally accepted pattern for this case.
[21:16:07] <cm0s> I have a "relational db" background and changing the way to think about it is sometimes difficult ;)
[21:17:37] <kali> you need to think about the data access patterns
[21:17:39] <noizex> well, with m2m it's not really much different
[21:18:09] <noizex> you won't embed one inside another anyway
[21:19:21] <kali> instead of thinking of the data in a static way, you need to think about the access
[21:19:22] <kali> try to picture what your application will require, and what data layout can provide the data in an efficient way
[21:19:22] <cm0s> So it depends on whether I mainly need to read the data or modify it?
[21:19:22] <kali> i hope you get the idea, because i can't rephrase this idea anymore :)
[21:26:43] <kali> anyway, you get the idea. there is no rule. it's a compromise
[21:27:10] <cm0s> Yeap I got it. Thanks for your help !
[21:27:54] <kali> cm0s: just one thing: getting data out of mongodb tends to be way faster than a sql engine
[21:28:42] <kali> cm0s: so making two queries (one to get the bookmarks, then one to get the categories) with no denormalisation could get you quite a long way
[21:29:35] <kali> also, categories are usually a quite small dataset that will fit in mongodb's RAM (making access fast), or even in your application's RAM, making them negligible
[21:34:44] <cm0s> Do you think that if we create a totally normalized schema on mongodb, the perf for getting data would be pretty similar to (or even worse than) sql?
[21:36:53] <kali> no, i actually think if you craft the queries sensibly (like i said, read a page of bookmarks, then read all categories, so two queries), you can get better performance than relational databases
[21:37:41] <kali> but you can probably get one order of magnitude faster by denormalizing
[21:50:53] <cm0s> But if you make two separate queries you'll then have to "reassemble" the two result sets yourself, if, for example, you need a json document with an array of categories which each have their associated bookmarks embedded in the same document.
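A shell sketch of that two-queries-plus-reassembly step, with made-up field names (category_ids on each bookmark holding an array of category ObjectIds, created_at for paging):

    // 1) two queries: a page of bookmarks, then every category they reference
    var bookmarks = db.bookmarks.find().sort({ created_at: -1 }).limit(20).toArray();
    var categoryIds = [];
    bookmarks.forEach(function (b) {
        (b.category_ids || []).forEach(function (id) { categoryIds.push(id); });
    });
    var categories = db.categories.find({ _id: { $in: categoryIds } }).toArray();

    // 2) reassemble in the application: embed each category's bookmarks in it
    categories.forEach(function (c) {
        c.bookmarks = bookmarks.filter(function (b) {
            return (b.category_ids || []).some(function (id) {
                return String(id) === String(c._id);
            });
        });
    });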
[23:14:43] <swak> http://pastebin.com/eZFnp6YT Any ideas on having the "target.update({b" use the key parameter instead of something set like 'b'? This is with javascript.
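Assuming key and value are the variables from the paste (which can only be guessed at here), the usual trick is to build the selector with bracket notation instead of an object literal:

    // Computed property name instead of the literal 'b'.
    // (Newer JavaScript engines also accept { [key]: value } directly.)
    var selector = {};
    selector[key] = value;
    target.update(selector /* , ...the same remaining arguments as in the paste... */);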