PMXBOT Log file Viewer


#mongodb logs for Saturday the 1st of November, 2014

[00:31:50] <droid909> guyz, with mongodb i don't need to write backend?
[00:32:08] <droid909> am i understand it right?
[00:32:19] <droid909> i use mysql currently
[00:33:10] <lqez> droid909: what is the exact meaning of 'write backend' ?
[00:34:10] <droid909> lqez: create REST stuff
[00:34:17] <lqez> MongoDB is not just a memory-based database. It uses 'files' to manage data and indexes.
[00:34:33] <droid909> hmm
[00:34:44] <droid909> ok, so it is memory-based
[00:34:44] <lqez> But these processes run via memory-mapped file i/o.
[00:34:56] <droid909> but i asked something else
[00:35:12] <droid909> lqez: i see json on the site
[00:35:35] <droid909> lqez: this is what i generate when using mysql
[00:35:42] <droid909> lqez: generate via php
[00:35:57] <droid909> lqez: with mongo db i can avoid this?
[00:36:05] <lqez> Yes, MongoDB uses JSON as protocol.
[00:36:16] <lqez> So you can use JSON natively
[00:36:43] <lqez> You can update parts of a document, and pull parts out when reading, too.
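The partial update and partial read lqez mentions can be sketched as follows. The collection and field names (`users`, `profile.city`) are made up for illustration; the shell calls use the standard `$set` operator and projection syntax.

```javascript
// Sketch, with hypothetical names, of partial updates and partial reads.
//
// Update a single field without rewriting the whole document:
//   db.users.update({ _id: 1 }, { $set: { "profile.city": "Oslo" } });
//
// Read back only the fields you need (a projection):
//   db.users.find({ _id: 1 }, { "profile.city": 1 });
//
// The projection idea, expressed in plain JavaScript:
function project(doc, fields) {
  const out = {};
  for (const f of fields) {
    if (f in doc) out[f] = doc[f];
  }
  return out;
}

const user = { _id: 1, name: "Ada", city: "Oslo", bio: "hello" };
const partial = project(user, ["_id", "city"]);
// partial contains only _id and city; name and bio are left out.
```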
[00:37:04] <droid909> lqez: i still use sql with mongodb?
[00:37:07] <lqez> Nope
[00:37:15] <lqez> There is no SQL layer in MongoDB
[00:37:28] <droid909> lqez: so, i get data by ids or all?
[00:37:29] <lqez> But you can still use aggregations via aggregate or map/reduce.
[00:37:49] <lqez> droid909: See http://docs.mongodb.org/manual/reference/sql-comparison/
[00:38:04] <lqez> And also : http://docs.mongodb.org/manual/reference/sql-aggregation-comparison/
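The SQL-to-aggregation mapping those pages describe can be sketched like this. The collection and field names (`orders`, `status`) are assumptions for illustration; the `$group`/`$sum` stage is the standard aggregation counterpart of SQL's `GROUP BY`/`COUNT(*)`.

```javascript
// Sketch of the SQL-to-aggregation mapping, with hypothetical names.
//
// SQL:    SELECT status, COUNT(*) FROM orders GROUP BY status;
// Shell:  db.orders.aggregate([
//           { $group: { _id: "$status", count: { $sum: 1 } } }
//         ]);
//
// What that $group stage does, expressed in plain JavaScript:
function groupCount(docs, field) {
  const out = {};
  for (const doc of docs) {
    const key = doc[field];
    out[key] = (out[key] || 0) + 1;
  }
  return out;
}

const orders = [
  { status: "shipped" },
  { status: "pending" },
  { status: "shipped" }
];
const byStatus = groupCount(orders, "status");
// byStatus counts two "shipped" orders and one "pending" order.
```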
[00:39:04] <lqez> May the javascript be with you :^D
[00:39:20] <lqez> There are several mongodb drivers for various platform/languages.
[00:39:20] <droid909> lqez: how js related to mongodb besides json
[00:39:38] <lqez> mongodb shell uses javascript as a primary language
[00:39:54] <lqez> but if you need php to play with mongodb, you can do that.
[00:40:02] <lqez> http://docs.mongodb.org/ecosystem/drivers/php/
[00:40:16] <lqez> pecl is ready.
[00:40:22] <droid909> lqez: i see, thank you
[00:40:36] <lqez> You may start from here : http://blog.mongodb.org/post/24960636131/mongodb-for-the-php-mind-part-1?_ga=1.131023951.1108412926.1414728469
[01:30:03] <MrWebington> Hi, I'm trying to connect to my remote MongoDB server hosted on Linode via Robomongo. Problem is I think I need to authenticate and I never remember setting an admin user or a password :/
[01:32:58] <MrWebington> Robomongo keeps saying "Authentication skipped by you".
[05:27:06] <mgeorge> would be cool if there was something like phpmyadmin for mongodb :)
[11:48:59] <jonjons> I was doing a backup with db.copyDatabase on same machine on a 2 gig collection
[11:49:33] <jonjons> and mongod stopped during this process and I can't get it running again
[11:49:42] <jonjons> its a live app, kind of freaking out
[11:49:58] <jonjons> "warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused"
[11:50:02] <jonjons> so mongod will not start now
[11:50:07] <cheeser> the one you were copying from or to?
[11:50:16] <cheeser> either way, check the logs and see what's going on
[11:50:18] <jonjons> I was doing on same host
[11:54:27] <jonjons> might be out of disk space? http://i.imgur.com/gf5j3HY.png
[11:54:46] <jonjons> 9 gig available though
[11:55:03] <cheeser> the logs?
[11:55:08] <jonjons> img is of logs
[11:55:41] <cheeser> tail it and see what it says when you try to start mongod
[11:55:55] <jonjons> ok
[12:01:16] <jonjons> its starting from boot, so I'm just restarting and see what gets added
[12:01:21] <jonjons> I'm not very good at linux :p
[12:03:50] <jonjons> cheeser: http://pastebin.com/ec6N24vB
[12:05:20] <jonjons> the live20141101 collection is the backup that I just want to delete
[12:05:30] <jonjons> is rm live20141101* a safe operation to do?
[12:05:37] <jonjons> from my db directory
[12:06:20] <cheeser> i would try starting with --noIndexBuildRetry
[12:06:39] <cheeser> looks like it's bombing trying to rebuild an index
[12:07:17] <jonjons> but if live20141101 is the offender I'm fine with deleting it
[12:07:49] <cheeser> well, whatever you're comfortable with but i'm not going to suggest that as I don't want to be responsible.
[12:07:55] <cheeser> it's *probably* ok.
[12:08:02] <jonjons> ok I'm just moving for now
[12:08:46] <jonjons> thanks for helping btw
[12:08:50] <cheeser> np
[12:37:34] <jonjons> I doubled the server memory and it booted
[12:37:41] <jonjons> jesus
[14:28:13] <moshe> hi
[14:29:32] <moshe> is there a difference, performance-wise, to query for an array of bson ids or (indexed) strings?
[14:40:58] <moshe> anyone?
[14:59:33] <iapain> moshe: Whatever is indexed, it’s faster to get that back. (provided its in RAM)
[15:00:00] <moshe> I know, but you could index either of them.
[15:00:19] <moshe> so I’m trying to understand if once it’s indexed - it doesn’t matter anymore
[15:00:37] <moshe> iapain: or does it?
[15:02:37] <iapain> moshe: In most of the cases it doesn’t matter but if your array is too large then it will need to scan the Array (if array is indexed then it wont).
[15:03:06] <iapain> So if you index array it should be as fast as string
[15:03:51] <moshe> iapain: I’m trying to build a relationship model, and trying to figure out which option is better
[15:04:37] <moshe> iapain: A - child has parent_id field, which is indexed, and then I query all childs by finding all docs with parent_id equals to that id
[15:05:00] <moshe> B - parent store an array of child ids, and query by finding all docs with that id
[15:10:29] <iapain> moshe: It seems to me that option A is better, however option B is flexible and if you index child ids then it’s as good as option A
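The two designs moshe is weighing can be sketched with hypothetical documents. All names here (`children`, `parent_id`, `child_ids`) are made up; the shell calls show the standard indexed-field lookup and `$in` membership query.

```javascript
// Sketch of the two relationship designs discussed above; names are hypothetical.
//
// Option A: each child stores an indexed parent_id.
//   db.children.createIndex({ parent_id: 1 });
//   db.children.find({ parent_id: parentId });
//
// Option B: the parent embeds an array of child ids.
//   db.children.find({ _id: { $in: parent.child_ids } });
//
// The two lookups, expressed in plain JavaScript:
const children = [
  { _id: 1, parent_id: 10 },
  { _id: 2, parent_id: 10 },
  { _id: 3, parent_id: 11 }
];
const parent = { _id: 10, child_ids: [1, 2] };

// Option A: filter children by parent_id
const byParentId = children.filter(c => c.parent_id === 10);

// Option B: filter children by membership in the parent's array
const byIdList = children.filter(c => parent.child_ids.includes(c._id));

// Both return children 1 and 2 here. A keeps the parent document small;
// B requires updating the parent every time a child is added or removed.
```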
[15:19:17] <Baluse> hello
[15:20:07] <moshe> iapain: ok, thanks.
[15:26:55] <Baluse> I want to store some measurement data from inverters etc. There could be 210 different measurements . Is mongodb suitable for this ?
[15:36:27] <rkgarcia> Baluse, may be
[15:36:59] <rkgarcia> Baluse, mongodb is schemaless, scalable, fast and many other things
[15:49:39] <cheeser> the preferred term is "dynamic schema" rather than "schemaless"
[15:49:46] <cheeser> you still have to think about what you're doing...
[16:03:37] <drag0nius> i want to count multiple things in same aggregation, how can i do that?
[16:04:53] <drag0nius> i want to go through Logs collection counting logs of type 1 and type 2
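No answer appears in the log, but one common approach to drag0nius's question is a single `$group` stage with conditional `$sum`s. The collection name (`Logs`) comes from the question; the field name `type` is an assumption.

```javascript
// Sketch: counting two log types in one aggregation pass.
// "type" as the field name is an assumption for illustration.
//
//   db.Logs.aggregate([
//     { $group: {
//         _id: null,
//         type1: { $sum: { $cond: [ { $eq: ["$type", 1] }, 1, 0 ] } },
//         type2: { $sum: { $cond: [ { $eq: ["$type", 2] }, 1, 0 ] } }
//     } }
//   ]);
//
// The same conditional-sum logic, expressed in plain JavaScript:
function countTypes(logs) {
  return logs.reduce(
    (acc, doc) => {
      if (doc.type === 1) acc.type1 += 1;
      if (doc.type === 2) acc.type2 += 1;
      return acc;
    },
    { type1: 0, type2: 0 }
  );
}

const sample = [{ type: 1 }, { type: 2 }, { type: 1 }, { type: 3 }];
const counts = countTypes(sample);
// counts tallies two type-1 logs and one type-2 log; type 3 is ignored.
```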
[17:38:48] <jonjon> if I do a mongodump, then later add few records and do mongodump again these records will be merged to previous dump correct?
[17:39:56] <jonjon> and if I do a mongodump,. Then delete records from collection, add others, do mongodump again. My deleted records will still be available in the dump + new records?
[17:43:45] <cheeser> no merging is done with mongodump
[17:43:52] <jonjon> oh ok thanks
[17:44:36] <jonjon> 90% of my database are records I don't need to have available, but I just want to store them if in the future I decide to create some stats or something
[17:44:43] <jonjon> would mongoexport be the best case for this then
[17:45:13] <jonjon> mongoexport with query, then delete with same query
[17:46:32] <cheeser> as long as that query is idempotent
[17:46:59] <cheeser> the results don't change between runs
[17:47:18] <jonjons> its about 500.000 records per week
[17:47:35] <jonjons> yes Ok
[17:48:27] <jonjons> 500k records in text files on a weekly basis seems a bit excessive?
[17:48:40] <jonjons> wondering what the best thing to do
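The export-then-delete idea jonjon describes could look like the sketch below. The database, collection, field, and cutoff date are all assumptions; the key point, per cheeser's remark, is that both steps use the same fixed bound so the matched set cannot change between runs.

```javascript
// Sketch of "mongoexport with a query, then delete with the same query".
// All names and the cutoff date are hypothetical.
//
// Outside the shell, export the old records:
//   mongoexport --db app --collection events \
//     --query '{ "ts": { "$lt": { "$date": "2014-10-01T00:00:00Z" } } }' \
//     --out events-archive.json
//
// Then, in the mongo shell, delete using the *same* fixed cutoff:
//   db.events.remove({ ts: { $lt: ISODate("2014-10-01T00:00:00Z") } });
//
// Why a fixed bound matters, expressed in plain JavaScript:
const cutoff = new Date("2014-10-01T00:00:00Z");
const matches = doc => doc.ts < cutoff; // fixed bound, not "now"

const events = [
  { ts: new Date("2014-09-15T00:00:00Z") },  // old: exported, then deleted
  { ts: new Date("2014-10-20T00:00:00Z") }   // new: untouched by both steps
];
const exported = events.filter(matches);
const remaining = events.filter(doc => !matches(doc));
// Every document lands in exactly one of the two sets, even if new
// records arrive between the export and the delete.
```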
[19:39:13] <tomahaug> Hi there :-) I'm currently facing some issues with mongodb not building 2dsphere indexes on all entries in the collection, has anyone ever experienced something similar?
[21:03:50] <scrandaddy> Hey guys. I am beginning to design a real-time analytics platform. I would like to store my tracking pixel data in mongo. In the interest of keeping the tracking pixel request load down, I'm thinking I could store each request in Redis first, and then move them every second/minute/whatever to mongo in batches. Is this a viable strategy? Overkill?