PMXBOT Log file Viewer


#mongodb logs for Sunday the 7th of September, 2014

[00:04:35] <joannac> oh i see
[00:04:42] <joannac> get rid of the "j = "
[00:04:55] <joannac> and then i guess that works
[00:33:21] <theekgb> wondering if anyone can help with how to remove a db from a secondary server
[00:46:25] <ag4ve> is there a way to get a mongo shell to connect to an offline db file?
[00:46:28] <joannac> theekgb: ?
[00:46:37] <joannac> ag4ve: no
[00:46:44] <joannac> start the mongod server, and connect to that
[00:47:13] <ag4ve> so can i make mongod use a fifo and then connect to that fifo?
[00:47:23] <ag4ve> or do i have to use a port?
[00:48:10] <joannac> I don't know what a fifo is
[00:48:50] <joannac> but yes, it needs a port
[00:49:38] <ag4ve> a named pipe. but i guess just binding to localhost is good enough
[00:50:48] <theekgb> i believe someone ran a script the wrong direction when writing backup scripts…it connects to the secondary to do the mongodump, they are only on the secondary though
[00:53:48] <joannac> you can't write to a secondary though...
[00:53:53] <joannac> anyway
[00:54:12] <joannac> bring the secondary down, start it as standalone (no --replSet option, and a different port)
[00:54:16] <joannac> and then drop your databases
[00:54:59] <theekgb> ah ok, didn't think about that, thank you
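The procedure joannac outlines can be sketched as follows; the dbpath, port, and database name are placeholders, not values from the log:

```javascript
// 1. From the host's shell, restart the secondary standalone — same dbpath,
//    no --replSet, and a different port so set members and clients
//    don't reconnect to it:
//      mongod --dbpath /data/db --port 37017
// 2. Connect a mongo shell to that port and drop the unwanted database:
db.getSiblingDB("mydump").dropDatabase()
// 3. Restart mongod with its original --replSet options to rejoin the set.
```

The port change matters: a node restarted on its usual port without --replSet still looks like the secondary to the rest of the set.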
[02:01:18] <pabst^> im a little confused on manual references, if I have a manual reference and I want to query a document that only has part of that manual reference how can I query for that?
[02:15:45] <joannac> pabst^: um, why don't you have the entire manual reference
[02:19:27] <pabst^> joannac: im not sure. im not sure I have grasped references yet
[02:21:25] <joannac> okay, well if you have the _id and you know which collection it is, you do another query
[02:22:40] <pabst^> joannac: thanks, that seems expensive, but I think I am thinking about it like sql
[02:25:12] <pabst^> so, i have a collection that has a reference to a collection that has a list of domains, I only want records in the first collection that reference a particular domain. In order to do that I need to get the _id of the domain first, and then I can do a find by that objectid on the first collections...
[02:25:23] <pabst^> makes more sense now that I am "saying" it out loud
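The two-step lookup pabst^ arrives at can be simulated with plain arrays; in a real deployment these would be `db.domains.findOne(...)` and `db.records.find(...)` calls, and all names here are illustrative:

```javascript
// Simulating a manual reference: records hold the _id of a domain document.
const domains = [
  { _id: "d1", name: "example.com" },
  { _id: "d2", name: "example.org" },
];
const records = [
  { _id: "r1", domain_id: "d1", path: "/a" },
  { _id: "r2", domain_id: "d2", path: "/b" },
  { _id: "r3", domain_id: "d1", path: "/c" },
];

// Step 1: resolve the domain name to its _id.
const domain = domains.find(d => d.name === "example.com");

// Step 2: query the referencing collection by that _id.
const matches = records.filter(r => r.domain_id === domain._id);
console.log(matches.map(r => r._id)); // ["r1", "r3"]
```

Unlike a SQL join, the second query is a separate round trip, which is the cost pabst^ notices.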
[08:35:46] <sweb> documentation needs to be updated http://docs.mongodb.org/manual/tutorial/enable-authentication/
[08:35:55] <sweb> createUser is not a function
[08:36:01] <sweb> addUser instead
[08:36:21] <sweb> but i have a problem using this method to enable authentication
[08:36:41] <sweb> i'm using the robomongo client to log in via the created user on `admin` db
[08:36:49] <sweb> also on terminal
[08:37:20] <sweb> http://pastebin.mozilla.org/6364111
[08:37:25] <sweb> what's my problem ?
[08:38:56] <sweb> http://pastebin.mozilla.org/6364155
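If sweb's server is pre-2.6, the error is expected rather than a docs bug: `db.createUser()` was introduced in MongoDB 2.6, while 2.4 and earlier use `db.addUser()`. A sketch of both forms, with placeholder credentials:

```javascript
// MongoDB 2.6+ (shell connected with `use admin` first):
db.createUser({
  user: "siteAdmin",
  pwd: "changeme",  // placeholder — not a real credential
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
});

// MongoDB 2.4 equivalent:
db.addUser({
  user: "siteAdmin",
  pwd: "changeme",
  roles: ["userAdminAnyDatabase"]
});
```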
[10:04:58] <JulienTant> hi there
[10:05:27] <JulienTant> What's the benefit of using $inc compared to a simple addition ?
[10:07:01] <joannac> how would you do the addition?
[10:09:36] <JulienTant> in my code a simple x = x+1
[10:10:31] <joannac> race condition
[10:10:46] <joannac> your code finds a document, adds 1, and then saves
[10:10:54] <joannac> who knows what's happened in the meantime
[10:12:53] <JulienTant> here's my use case, i have to retrieve a document in a videos collection and i want to increment the nb_view
[10:13:17] <JulienTant> ok
[10:13:22] <JulienTant> i just understood ^_^
[10:14:06] <JulienTant> my video information is outdated as soon as i have it, because the nb_view could have increased
[10:14:12] <JulienTant> thanks joannac
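The lost update joannac describes can be shown without a database: two clients both read the counter before either one writes back. Variable names here are illustrative:

```javascript
// Simulating the read-modify-write race on a view counter.
let doc = { nb_view: 5 };

// Client A and client B both read the current value...
const readA = doc.nb_view;
const readB = doc.nb_view;

// ...each adds 1 locally and writes the result back:
doc.nb_view = readA + 1;
doc.nb_view = readB + 1;

console.log(doc.nb_view); // 6 — one increment was lost; should be 7

// $inc applies the +1 atomically on the server instead, e.g.:
//   db.videos.update({ _id: videoId }, { $inc: { nb_view: 1 } })
// so concurrent increments cannot overwrite each other.
```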
[10:58:45] <ZenGeist> Hi! I have a problem with querying same collection in parallel
[10:59:37] <ZenGeist> I have something like hit parades, which I build with different conditions (queries), but from the same data (collection)
[11:00:16] <ZenGeist> The problem is, that my queries are sequential, not parallel
[11:00:31] <ZenGeist> Is it because of ReadLock?
[12:17:38] <sundaycoder> hello chaps - would this be a good place to grab a little help with getting started that the docs and google don't seem to cover?
[12:36:55] <mezod> hi, i am using mongo on windows and every day I have to delete the whole db or repair it because of "unclean shutdown". How am I supposed to shut down?
[14:50:25] <tab1293> can anyone tell me why I may be getting this error when trying to connect to mongo warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
[14:50:25] <tab1293> exception: connect failed
[15:23:07] <flok420> tab1293: because either it is not running or not listening on that address/port
[15:41:19] <mango_> bsondump -v <file>
[15:41:28] <mango_> what unit is the file size in?
[15:54:32] <skot> bytes
[15:55:42] <skot> If you look at the file size reported by the filesystem you will see the same number.
[21:00:43] <mango_> MongoDB queries (based on the M102) exam, are they mostly JavaScript execution?
[21:01:07] <mango_> Is that too broad of a statement to make?
[21:03:55] <mango_> I think it's also processing JSON and BSON
[21:03:59] <mango_> so, no.
[21:06:58] <tab1293> how long should an aggregate matching and sorting on one field take on a collection of about 1 million documents?
[21:07:17] <tab1293> > 5 mins?
[21:08:00] <partycoder> no
[21:08:34] <partycoder> that's not a correct approach to the problem
[21:08:49] <tab1293> okay well does making multiple writes to the DB affect the duration of an aggregate? cause this has been taking more than like 6 minutes now
[21:09:00] <tab1293> what's the correct approach then?
[21:09:09] <partycoder> consumed time will depend on what type of hardware is used
[21:09:27] <tab1293> I am on an amazon ec2 micro instance
[21:09:42] <partycoder> well micro instances should not be used for mongo
[21:10:15] <tab1293> okay so you're saying to speed up an aggregate of that magnitude a hardware upgrade is the only solution?
[21:10:24] <partycoder> no
[21:10:42] <partycoder> http://docs.mongodb.org/manual/reference/operator/meta/explain/
[21:10:53] <partycoder> first of all, run this in your query, understand the execution plan
[21:11:00] <partycoder> see what the bottleneck is
[21:11:18] <partycoder> eventually you will find out that you are running a query over non-indexed fields
[21:11:36] <partycoder> so you might want to add an index over the field you are running your query on
[21:11:43] <partycoder> however
[21:11:51] <partycoder> if your query does something inefficient
[21:12:00] <partycoder> such as scanning multiple fields in the document
[21:12:19] <partycoder> expect it to consume more and more time as the collection grows
[21:12:42] <partycoder> you can mitigate the problem by adding hardware but it will never scale horizontally
[21:12:58] <partycoder> does that make sense for you?
[21:13:13] <tab1293> okay I have only an index on the $sort operator, guess I need to add one to the $match operator too
[21:13:32] <tab1293> yeah it does, I still need to read that explain page though
[21:13:34] <partycoder> no i mean
[21:13:42] <partycoder> you are sorting over a field
[21:13:59] <partycoder> you can add indices for that field, for example
[21:14:16] <partycoder> usually queries involving only the collection indices are fast
[21:14:47] <partycoder> http://docs.mongodb.org/manual/tutorial/list-indexes/
[21:15:03] <tab1293> yeah I have to add an index for the field that I am matching
[21:15:12] <tab1293> I only had one for the index I was sorting
[21:15:18] <tab1293> I see my mistake, thank you
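The fix tab1293 lands on can be sketched in the mongo shell; the collection and field names are made up, and a single compound index can serve a $match-then-$sort pipeline:

```javascript
// Index the field being matched as well as the field being sorted on:
db.events.createIndex({ status: 1, created_at: -1 });

// Inspect the plan to confirm the index is used instead of a
// full collection scan:
db.events.find({ status: "active" }).sort({ created_at: -1 }).explain();
```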
[21:15:38] <partycoder> but, in general
[21:15:43] <partycoder> for query optimization
[21:15:47] <partycoder> run execution plans
[21:16:03] <partycoder> i mean, obtain execution plans with explain
[21:16:48] <partycoder> then, mongodb can log slow queries
[21:17:13] <partycoder> so you can optimize queries slowing the mongo server down
[21:17:30] <partycoder> another approach you can use is sharding
[21:17:40] <tab1293> cool I didn't know about that explain feature
[21:17:45] <tab1293> yeah I was reading about sharding today
[21:17:50] <partycoder> if you can...
[21:17:57] <partycoder> buy a book called scaling mongodb
[21:18:03] <partycoder> it's really short
[21:18:09] <partycoder> has lots of figures in it.
[21:18:13] <partycoder> but it's useful
[21:18:27] <partycoder> it's mostly about sharding
[21:18:53] <tab1293> nice, need a book to read on the train anyway
[21:19:10] <partycoder> it's really short, i was disappointed at how short it was
[21:19:29] <partycoder> like, around 40 pages
[21:19:32] <partycoder> or something like that
[23:33:09] <tab1293> why would I be getting a errno 111 on connection to mongod?
[23:45:56] <Hudolus> Hello guys, I am about to ask a lot of stupid questions
[23:46:03] <Hudolus> Please help me the best you can
[23:46:30] <Hudolus> I'm using Unreal Engine 4. It has a plugin called VaRest https://forums.unrealengine.com/showthread.php?19961-Plugin-Http-s-REST-blueprintable-JSON-query-and-Parse-API-manager-VaRest&highlight=JSON
[23:46:37] <Hudolus> I need to take data from my game
[23:46:41] <Hudolus> and put it into a DB
[23:46:56] <Hudolus> and be able to access it as well
[23:47:02] <Hudolus> but,
[23:47:18] <Hudolus> JSON files are normally capped at 16mb yes?
[23:47:20] <Hudolus> if thats the case
[23:47:41] <Hudolus> Each player will be using like 6 MB worth of space for their info
[23:47:47] <Hudolus> well thats quite an overshot
[23:47:59] <Hudolus> lets say each player uses 500 kb of space.
[23:48:24] <Hudolus> how do I tell my plugin which JSON file that that data needs to be stored in or where that data will be for the request