[00:40:36] <lqez> You may start from here : http://blog.mongodb.org/post/24960636131/mongodb-for-the-php-mind-part-1
[01:30:03] <MrWebington> Hi, I'm trying to connect to my remote MongoDB server hosted on Linode via Robomongo. Problem is I think I need to authenticate and I never remember setting an admin user or a password :/
[01:32:58] <MrWebington> Robomongo keeps saying "Authentication skipped by you".
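If mongod is running without --auth, Robomongo can connect with no credentials at all; the "Authentication skipped by you" message only means none were supplied. If auth is enabled and no admin user was ever created, a minimal sketch (2.6-era shell) is to create one from a local shell session on the Linode box, where the localhost exception applies. The user name and password here are placeholders:

```
use admin
db.createUser({
  user: "adminUser",   // placeholder name
  pwd: "changeMe",     // placeholder password
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})
```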
[05:27:06] <mgeorge> would be cool if there was something like phpmyadmin for mongodb :)
[11:48:59] <jonjons> I was doing a backup with db.copyDatabase on the same machine on a 2 gig collection
[11:49:33] <jonjons> and mongod stopped during this process and I can't get it running again
[11:49:42] <jonjons> it's a live app, kind of freaking out
[11:49:58] <jonjons> "warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused"
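A hedged troubleshooting sketch for this situation: the usual first step is to read the tail of the mongod log to see why it refused to start, and only if it reports an unclean shutdown run a repair. The paths below are common defaults and may differ on your system (check /etc/mongod.conf for the real ones):

```
# Log path is an assumption; yours may differ
tail -n 50 /var/log/mongodb/mongod.log

# Only if the log shows an unclean shutdown, and after backing up
# the dbpath, a repair can bring the data files back to a
# consistent state:
mongod --dbpath /var/lib/mongodb --repair
```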
[15:02:37] <iapain> moshe: In most cases it doesn't matter, but if your array is very large then it will need to scan the array (if the array is indexed, it won't).
[15:03:06] <iapain> So if you index the array it should be as fast as a string field
[15:03:51] <moshe> iapain: I’m trying to build a relationship model, and trying to figure out which option is better
[15:04:37] <moshe> iapain: A - the child has a parent_id field, which is indexed, and then I query all children by finding all docs whose parent_id equals that id
[15:05:00] <moshe> B - the parent stores an array of child ids, and I query by finding all child docs with those ids
[15:10:29] <iapain> moshe: It seems to me that option A is better; however, option B is flexible, and if you index the child ids then it's as good as option A
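A shell sketch of the two options as discussed — collection and field names are made up for illustration:

```
// Option A: each child stores its parent's id; index it and
// query the children directly in one round trip.
db.children.ensureIndex({ parent_id: 1 })
db.children.find({ parent_id: parentId })

// Option B: the parent stores an array of child ids; fetch the
// parent, then pull the children by _id (which is always indexed).
var parent = db.parents.findOne({ _id: parentId })
db.children.find({ _id: { $in: parent.child_ids } })
```

Option B costs two round trips and grows the parent document as children are added, which is part of why A usually comes out ahead here.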
[15:26:55] <Baluse> I want to store some measurement data from inverters etc. There could be 210 different measurements. Is mongodb suitable for this?
[15:36:59] <rkgarcia> Baluse, mongodb is schemaless, scalable, fast and many other things
[15:49:39] <cheeser> the preferred term is "dynamic schema" rather than "schemaless"
[15:49:46] <cheeser> you still have to think about what you're doing...
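For what it's worth, a sketch of one-reading-per-document storage — all field names here are invented; the point is that many different measurement types can coexist without a fixed schema, but you still want an index on whatever you query by:

```
db.measurements.insert({
  inverter_id: "inv-042",   // hypothetical identifiers
  metric: "dc_voltage",
  value: 412.7,
  ts: new Date()
})
db.measurements.ensureIndex({ inverter_id: 1, metric: 1, ts: 1 })
```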
[16:03:37] <drag0nius> i want to count multiple things in the same aggregation, how can i do that?
[16:04:53] <drag0nius> i want to go through Logs collection counting logs of type 1 and type 2
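One way to get both counts in a single pass is a $group with a conditional $sum; the collection and field names below are guesses from the question:

```
db.Logs.aggregate([
  { $match: { type: { $in: [1, 2] } } },
  { $group: {
      _id: null,
      // each document adds 1 to exactly one of the two counters
      type1Count: { $sum: { $cond: [ { $eq: [ "$type", 1 ] }, 1, 0 ] } },
      type2Count: { $sum: { $cond: [ { $eq: [ "$type", 2 ] }, 1, 0 ] } }
  } }
])
```

Grouping on `_id: "$type"` with `{ $sum: 1 }` gives the same counts, just spread across one result document per type.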
[17:38:48] <jonjon> if I do a mongodump, then later add a few records and do mongodump again, those records will be merged into the previous dump, correct?
[17:39:56] <jonjon> and if I do a mongodump, then delete records from the collection, add others, and do mongodump again, will my deleted records still be available in the dump along with the new records?
[17:43:45] <cheeser> no merging is done with mongodump
[17:44:36] <jonjon> 90% of my database is records I don't need to have available, but I just want to store them in case I decide to create some stats or something in the future
[17:44:43] <jonjon> would mongoexport be the best choice for this then
[17:45:13] <jonjon> mongoexport with query, then delete with same query
[17:46:32] <cheeser> as long as that query is idempotent
[17:46:59] <cheeser> the results don't change between runs
[17:47:18] <jonjons> it's about 500,000 records per week
[17:48:27] <jonjons> 500k records in text files on a weekly basis seems a bit excessive?
[17:48:40] <jonjons> wondering what the best thing to do is
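A sketch of the export-then-delete idea — the database, collection, and query field are placeholders; the important part, per cheeser's point, is that the query match a stable set so nothing slips in between the export and the remove:

```
# Export the matching records to a JSON file first...
mongoexport --db app --collection logs \
  --query '{ "archived": true }' --out archived-logs.json

# ...then, after verifying the export, delete the same set:
mongo app --eval 'db.logs.remove({ archived: true })'
```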
[19:39:13] <tomahaug> Hi there :-) I'm currently facing some issues with mongodb not building 2dsphere indexes on all entries in the collection, has anyone ever experienced something similar?
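One thing worth checking (a guess, since the cause isn't stated): a 2dsphere index omits documents that lack the indexed field, and the build can fail on malformed GeoJSON. The collection and the field name "loc" below are assumptions:

```
// Count documents with a missing or structurally incomplete geometry:
db.places.find({ $or: [
  { loc: { $exists: false } },
  { "loc.type":        { $exists: false } },
  { "loc.coordinates": { $exists: false } }
] }).count()
```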
[21:03:50] <scrandaddy> Hey guys. I am beginning to design a real-time analytics platform. I would like to store my tracking pixel data in mongo. In the interest of keeping the tracking pixel request load down, I'm thinking I could store each request in Redis first, and then move them every second/minute/whatever to mongo in batches. Is this a viable strategy? Overkill?
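It's a reasonable pattern for smoothing write bursts. A Node.js sketch of the flush loop, assuming the node_redis and official mongodb drivers; the list key, database, and collection names are invented:

```
var redis = require("redis").createClient();
var MongoClient = require("mongodb").MongoClient;

MongoClient.connect("mongodb://localhost:27017/analytics", function (err, db) {
  if (err) throw err;
  var hits = db.collection("hits");

  // Every second, drain whatever the tracking-pixel handler LPUSHed
  // onto the "hits" list and write it to MongoDB as one batch.
  setInterval(function () {
    redis.multi()
      .lrange("hits", 0, -1)   // read the whole buffer...
      .del("hits")             // ...and clear it, atomically
      .exec(function (err, replies) {
        if (err || !replies[0] || replies[0].length === 0) return;
        var docs = replies[0].map(function (s) { return JSON.parse(s); });
        hits.insert(docs, function (err) {
          if (err) console.error("batch insert failed:", err);
        });
      });
  }, 1000);
});
```

The multi() makes the read-and-clear atomic on the Redis side, but a crash between the exec callback and the insert would still drop that batch, so the flush interval bounds the loss window.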