[02:48:41] <bzittlau> I wonder if someone could help shed some light on an issue I'm trying to debug against our Mongo server.
[02:49:17] <bzittlau> We're getting cursor timeouts from the PHP driver which is causing some of our long lived processes to die. We have MMS set up, but I can't seem to connect the dots and figure out what's causing the timeouts to happen.
[04:44:19] <platzhirsch> Counting the length of sub-arrays is really hard. I want to compute the avg, max, min, etc.; pretty hard without $size: $field shipped yet
[06:25:35] <Aric> I have a large JSON array (3+ MB) and want to insert it into MongoDB, but it doesn't have any keys, though there is an easy pattern for them. Should I manually edit the JSON to add them, or is there some tool to insert into MongoDB with rules I specify?
[07:55:23] <alexandernst_> Since I'm able to store 2 files with the same name in GridFS, I'm wondering how I can delete just one of them via its _id instead of by name, as the docs say here: http://mongodb.github.io/node-mongodb-native/api-generated/gridstore.html#gridstore-unlink
[07:55:33] <alvesjnr> hi all; basic question about licensing: may I use MongoDB in a commercial closed-source project?
[08:34:18] <joannac> malladi: db.addUser(...) on the database you want to give them access to; whatever role(s) from here: http://docs.mongodb.org/manual/reference/user-privileges/ depending on what you want them to be able to do
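A minimal shell sketch of joannac's suggestion (MongoDB 2.4-era syntax; the database name, user, password, and role below are placeholders, and this needs a running mongod, so it is shown as an untested fragment):

```
use myDatabase
db.addUser({ user: "appUser", pwd: "secret", roles: ["readWrite"] })
```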
[09:26:19] <quattr8> I created a sharded collection with a shard key on _id (hashed), i only store sessions here and only need a key on _id, my insert/findOne queries are slow but my update queries are fast
[09:53:40] <Mastine> I have this collection, and I try to authenticate a client. My aggregation works fine, but returns only the "login" array. I would like to retrieve the _id / company of the root object. But I can't find how =/
[09:55:18] <joannac> Add them in your $project step?
[10:03:17] <mark____> http://pastebin.com/s2qvZz3h#! the error happens after it starts making connections with the database; within seconds it closes automatically, so the browser page contents that are supposed to come from the database never arrive
[11:24:04] <Mastine> Thanks kali! I tried this one before, but I didn't put the _id in the $project. So that was the way to do it..
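For reference, the fix Mastine describes might look like the pipeline below. The field names ("logins", "company") and the matched user are guesses at the schema under discussion, not taken from the actual collection; in 2.4-era aggregation, _id is included by default unless explicitly excluded, but listing it in the $project stage makes the intent obvious:

```javascript
// Hypothetical pipeline: unwind the embedded login array, match the one
// login being authenticated, and project the root-level fields alongside it.
var pipeline = [
  { $unwind: "$logins" },
  { $match: { "logins.name": "someUser" } },
  // Keep the root document's _id and company next to the matched login:
  { $project: { _id: 1, company: 1, login: "$logins" } }
];
```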
[11:48:24] <platzhirsch> I am having a hard time with the aggregate framework. Is it even possible to compute the average length of an array field over a collection?
[12:24:40] <themapplz> why is it that result.count turns out to be 42874 and result.length is only 784
[12:25:34] <kali> a few ground rules on IRC guys: 1/ never paste more than two lines of code on the #. 2/ don't address somebody in particular, many people can help
[12:31:32] <mark____> http://pastebin.com/s2qvZz3h#! the error happens after it starts making connections with the database; within seconds it closes automatically, so the browser page contents that are supposed to come from the database never arrive
[12:31:53] <kali> mark____: i have no idea what you're talking about.
[12:32:10] <kali> mark____: are you trying to access mongodb with your brower ?
[12:32:41] <mark____> Kali: yes, actually the data is to be loaded dynamically from mongodb in the browser
[12:34:15] <mark____> Kali: why does it refuse the connection in the end? the content to be loaded in the browser is supposed to come from mongodb, but suddenly mongodb ends the connection?
[12:36:47] <kali> mongodb is not meant to be accessed from a browser
[12:37:46] <themapplz> thanks for the link but damn i find it so confusing. does it mean that it will rerun the reduce function if the output is the same as the previous input?
[12:38:50] <mark____> kali: then that means i am not able to dynamically load data from the database.
[13:49:22] <platzhirsch> Is there an alternative to map reduce when it comes to counting the array lengths of documents?
[13:49:32] <themapplz> I'm trying to wrap my head around map and reduce as a very constrained query returns 40000+ records. now in order to grab only every 5th record i'm told i should use map/reduce. is there a better way?
[13:52:52] <platzhirsch> themapplz: I wish I knew, but I don't see how this would work in the aggregate framework
[13:56:09] <themapplz> platzhirsch: thx.. are ppl here only pm'ing? thought people were here to discuss (and help)
[13:56:24] <cheeser> if you want every 5th you'll have to use m/r or get a cursor and iterate it manually
[13:56:40] <platzhirsch> themapplz: that's often the case, depends on the time :)
[13:56:54] <cheeser> traffic varies during the day
[13:57:00] <themapplz> thx cheeser .. which is faster?
[13:57:18] <cheeser> depends on the size of your data. it may not matter.
[13:57:31] <themapplz> my db has ~400M docs and might need to iterate through a whole lot
[13:57:40] <cheeser> iterating the cursor will pull *all* the data the query returns over the wire only to discard 80% of it.
[13:58:09] <cheeser> on the other hand, i don't think m/r supports cursor based output so you'll be limited to 16MB responses, iirc.
[13:59:41] <themapplz> im at a standstill here.. the db is huge and querying all data, however constrained, is not suitable for graphic visualisation. i need to reduce, and figured every Nth value by map reduce is the way to go. is that correct? and if so
[14:09:25] <themapplz> just to try and make an average on val
[14:09:51] <themapplz> still don't see what it is i'm doing wrong
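Cheeser's "get a cursor and iterate it manually" option can be sketched like this. The cursor is stood in for by a plain array, since a runnable example can't assume a live server; with the real driver you'd advance the cursor with next() (or stream it) instead of indexing an array:

```javascript
// Keep only every Nth document from an ordered result set.
// `docs` simulates the documents a cursor would yield, in order.
function everyNth(docs, n) {
  var kept = [];
  for (var i = 0; i < docs.length; i++) {
    if (i % n === 0) kept.push(docs[i]);  // keep indexes 0, n, 2n, ...
  }
  return kept;
}
```

As cheeser notes, this still pulls every document over the wire; the filtering only saves client-side memory, not network or server work.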
[14:15:57] <platzhirsch> For map-reduce to count array lengths, do I need to loop over the array and emit 1 in the map function or can I invoke count() somehow?
[14:16:26] <cheeser> i thought there was a count function for arrays
[14:17:05] <platzhirsch> cheeser: where? There is a new aggregation operator coming, $size, which can read the size of an array, but apparently we won't see that before 2.6
[14:24:47] <cheeser> on the Array "object" (not a js guy...)
[14:25:06] <platzhirsch> Oh, yes. That's true, you can use .length
[15:15:15] <platzhirsch> okay, lesson learned. If I want to gather sophisticated statistics about my data, I better start to manage some extra variables that get incremented
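The map/reduce discussed above, emulated in plain JS so it runs without a server. The point cheeser raises is that the map function doesn't need to loop and emit 1 per element: JS arrays have a .length property, so map can emit the length directly and the reduce/finalize steps average the emitted values. The field name "tags" is a placeholder for whatever array field the collection actually has:

```javascript
// Sample documents standing in for a collection.
var docs = [
  { tags: ["a", "b"] },
  { tags: ["a", "b", "c", "d"] }
];

// Average array length over all documents.
function averageArrayLength(docs, field) {
  var total = 0;
  docs.forEach(function (doc) {
    total += doc[field].length;  // what map() would emit per document
  });
  return total / docs.length;    // what reduce/finalize would compute
}
```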
[15:15:33] <tomasso> getting to explore MongoDB.. is there any gui tool for linux?
[15:18:15] <cheeser> tomasso: there's this list: http://docs.mongodb.org/ecosystem/tools/administration-interfaces/ and my own ophelia http://www.antwerkz.com/ophelia
[16:29:41] <slava> cheeser: complaining about possibly having to support weird things that might not be documented. :P
[16:32:00] <slava> turns out dbHash is a bad way to get non-system non-hidden collection names ... when your DB is ~30GB
[17:10:26] <knownasilya> what's the best way to handle opening the db connection? open once and do everything? open multiple times? open and close multiple times?
[17:15:39] <kali> well, with normal stuff, you create an instance of MongoClient once and for all, and it manages connection pooling behind the scenes
[17:15:47] <kali> with the node clients, i have no idea
[17:16:19] <knownasilya> kali, do you have any links to examples with that format? I mainly see using Db(..., Server), and db.open()
[17:21:01] <knownasilya> kali, found this one http://mongodb.github.io/node-mongodb-native/driver-articles/mongoclient.html it says you can use MongoClient.db afterwards? (is the mongoClient.close() in the callback just for example?)
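The pattern from that article, roughly (1.x-era node driver; the URL, database, and collection names are placeholders, and this needs the mongodb npm package plus a running server, so it is an untested sketch). The close() in the docs' example is only there because the example exits immediately; a long-lived app keeps the handle open:

```javascript
var MongoClient = require('mongodb').MongoClient;

// Connect once at startup; the driver pools connections behind the scenes.
MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  // Hold on to `db` for the life of the process and reuse it everywhere.
  db.collection('sessions').findOne({}, function (err, doc) {
    // ... handle the result
  });
});
```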
[17:28:55] <venturecommunist> if i have a collection where there's a field 'color' that could be red, green or blue, and red, green and blue entries are all timestamped with a 'timestamp' field, is there a query i could do that would return a single cursor populated with the latest red, the latest green and the latest blue?
[17:33:25] <doxavore> is there a way to tokenize the values out of system.profile documents to get an idea of how many times (and total/avg time) a query with a particular shape took, even when its exact arguments don't match?
[17:34:36] <doxavore> that is, not for a particular query, but some way to get a list of "spent Nms over M calls querying by name and age"
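One way to read doxavore's "tokenize the values" idea, sketched in plain JS: replace every leaf value in a query document with a placeholder so that queries with the same shape compare equal, then group system.profile entries by that shape and sum their times. This is an assumed approach, not a built-in profiler feature in this era:

```javascript
// Reduce a query document to its shape: keys survive, values become "?".
function queryShape(query) {
  if (query === null || typeof query !== 'object' || Array.isArray(query)) {
    return '?';  // leaf value (or array of values): erase it
  }
  var shape = {};
  Object.keys(query).sort().forEach(function (k) {
    shape[k] = queryShape(query[k]);  // recurse into operators like $gt
  });
  return shape;
}
```

Grouping profile documents on JSON.stringify(queryShape(doc.query)) then gives "spent Nms over M calls" per shape.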
[17:38:00] <venturecommunist> might aggregation framework operators be the answer to my question?
[17:39:15] <kali> venturecommunist: you'd better run three queries
[17:39:57] <kali> venturecommunist: but, yes, i think you may be able to do it with aggregation (but it will probably be less efficient)
[17:40:33] <venturecommunist> what's the difference running with three queries? and will i still get a single mongo cursor at the end?
[17:41:19] <kali> if you have the right indexes (color, timestamp, in this order), it's just three lookups. with aggregation, you will access the whole collection
[17:43:52] <venturecommunist> if i had 10,000 colors i suppose aggregation would become important?
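What the aggregation alternative would compute, emulated in plain JS over sample documents (so it runs without a server): group by color and keep the document with the highest timestamp per color. With kali's suggested index, the three-query version is just a find({color: c}).sort({timestamp: -1}).limit(1) per color:

```javascript
// Latest document per color, one linear pass.
function latestPerColor(docs) {
  var latest = {};
  docs.forEach(function (doc) {
    if (!latest[doc.color] || doc.timestamp > latest[doc.color].timestamp) {
      latest[doc.color] = doc;
    }
  });
  return latest;  // map of color -> most recent document
}
```

With 10,000 colors, 10,000 indexed lookups may still beat one collection scan, but the single-pass/aggregation approach stops needing one round trip per color.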
[19:17:42] <platzhirsch> So I want to look up my documents by another field, rather than directly by the identifier. So I add a single-field index to speed up the queries; have I got that right?
[19:18:53] <cheeser> you want to query your collection using an arbitrary field rather than the _id, yes?
[19:19:03] <cheeser> so, yes, you'll want an index on that field
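In shell terms (2.4-era ensureIndex, later renamed createIndex; the collection and field names are placeholders, and this needs a running mongod, so it is shown untested):

```
db.myCollection.ensureIndex({ username: 1 })
db.myCollection.find({ username: "alice" })
```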
[21:17:17] <cha0ticz> Having an issue with sharded replica sets. I've setup the entire cluster but I saw that all of my data is still going to just one shard. Looking into it further I see the following error in my mongo log: [Balancer] moveChunk result: { errmsg: "exception: no primary shard configured for db: config", code: 8041, ok: 0.0 }
[21:17:44] <cha0ticz> everything i've searched so far says it may be due to not all shards being able to reach all the others and the config servers, but I just went through each one and was able to connect to everything via the mongo shell
[21:17:53] <cha0ticz> running out of ideas what may be going wrong =/
[21:40:16] <Zeeraw> I'm still experiencing major issues with DataFileSync
[21:40:44] <Zeeraw> It takes me between 10 to 45 seconds to flush the mmaps to disk
[21:41:19] <Zeeraw> Is there any way I can diagnose this?
[21:44:06] <dfarm> Hey guys, looking for a little update help say I have a bunch of documents like: {'a': 2.2, 'b': 1.3, 'c': 5.3} and I want to add a total field. I know it's pretty basic but I'd appreciate any help.
[21:44:46] <dfarm> I've tried doing an update with something like {'total': {'$add': blah}} but that doesn't work, apparently because the result isn't a document
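The reason dfarm's attempt fails is that $add is an aggregation operator, not an update operator, so (as of this era) the sum can't be computed inside a single update. A common workaround, sketched below with the document from the question, is to compute the total client-side and $set it per document:

```javascript
// Compute the total for one document client-side.
function withTotal(doc) {
  doc.total = doc.a + doc.b + doc.c;
  return doc;
}

// In the shell, the full pass would look roughly like (untested, needs a
// running server; "coll" is a placeholder collection name):
// db.coll.find().forEach(function (doc) {
//   db.coll.update({ _id: doc._id },
//                  { $set: { total: doc.a + doc.b + doc.c } });
// });
```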