PMXBOT Log file Viewer

#mongodb logs for Saturday the 4th of October, 2014

[03:11:57] <ttxtt> is there a straightforward way of creating an incrementing number as one field in a collection of documents? How do I avoid a race condition where two clients try to create the same number twice?
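One common way to get this is a small "counters" collection updated with an atomic findAndModify / $inc, since the server applies the increment atomically and every caller gets a distinct value. A minimal pymongo sketch (the "counters" collection and "seq" field names are illustrative assumptions):

```python
from pymongo import MongoClient, ReturnDocument

db = MongoClient().test

def next_sequence(name):
    # $inc inside find_one_and_update runs atomically on the server,
    # so two clients can never be handed the same number.
    doc = db.counters.find_one_and_update(
        {"_id": name},
        {"$inc": {"seq": 1}},
        upsert=True,
        return_document=ReturnDocument.AFTER,
    )
    return doc["seq"]

print(next_sequence("invoice"))  # 1, then 2, then 3, ...
```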
[03:12:15] <Terabyte> hey
[03:13:44] <Terabyte> I'm writing some code in Java to connect to MongoDB, and I noticed it's possible to "fail to connect" to the database on startup. I wanted to have it at least try 5 more times, and was about to use Spring Batch to do this, but I saw there's an "autoretry" option in the MongoClientOptions object; on closer inspection it's deprecated, so I don't want to use it. How are you supposed to approach this?
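One plain approach, rather than relying on a deprecated driver option, is to wrap the initial connection in a retry loop. A sketch of that loop (shown with pymongo for brevity; the same structure carries over to the Java driver, and the attempt count, delay, and ping check are illustrative assumptions):

```python
import time
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

def connect_with_retry(uri="mongodb://localhost:27017", attempts=5, delay=2.0):
    last_exc = None
    for _ in range(attempts):
        try:
            client = MongoClient(uri, serverSelectionTimeoutMS=2000)
            client.admin.command("ping")  # force a round trip so failures surface here
            return client
        except ConnectionFailure as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc

client = connect_with_retry()
```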
[07:17:08] <Chepra> Hey, what's the proper way to do a repairDatabase on a replica set member?
[08:03:52] <jon___> Hey everybody I can't get my .gitignore file to ignore my MongoDB database... is there any gotcha in doing so? I just used "touch .gitignore" to create the file in my projects root directory, "nano .gitignore" to edit it to have two lines, "data" and "node_modules"... it is ignoring node_modules but keeps throwing me a file size error when I try to push a commit to github
[08:07:06] <kali> jon___: more a git question than a mongodb question, but... paste or gist the content of your .gitignore somewhere and the output of git status
[08:18:08] <jon___> kali the content of .gitignore is "node_modules" and then a new line and "data" those are the two names of the folders, i don't know how to copy from the terminal and i can't see the .gitignore file in the file browser.... git status new file: .DS_Store new file: README new file: app.js new file: bin/www new file: db.js new file: models/.DS_Store new file: models/user.js new file: npm-debug.log new file: pac
[08:18:20] <jon___> whoops that got cut off but my /data/ isnt in it
[08:19:05] <jon___> if i reset HEAD . then run git status
[08:19:09] <jon___> its not even in my untracked files
[08:26:41] <Bacta> Is MongoDB fast and webscale?
[08:26:45] <jon___> i just ran "git rm -r --cached .", "git reset HEAD .", "git add .gitignore", "git commit -m 'fixing .gitignore'", then "git push" and its still saying my database is too large...
[08:28:32] <Bacta> Is MongoDB fast and webscale?
[08:32:38] <Bacta> Is MongoDB fast and webscale?
[08:40:33] <Bacta> Is MongoDB fast and webscale?
[09:09:43] <krion> it is.
[12:33:19] <fl0w> ..
[16:56:17] <agend> hi
[16:56:51] <agend> are rs.initiate() and rs.add() idempotent?
[17:37:20] <olan> hey guys - don't know if anyone is around but I was hoping someone could shed some light on my current problem! To cut a long story short I have a replica set of two nodes. Both IPs recently changed and I'm trying to get them reconfigured.
[17:37:20] <olan> * using rs.config() i was able to update machine1's config to point at the updated ips.
[17:37:20] <olan> * when trying to do this on machine2, I got the following error message:
[17:37:20] <olan> {
[17:37:20] <olan> "ok" : 0,
[17:37:20] <olan> "errmsg" : "replSetReconfig command must be sent to the current replica set primary."
[17:37:21] <olan> }
[17:37:22] <olan> * both nodes are appearing as 'secondary'
[17:37:22] <olan> * rs.status() on machine1 looks ok, but rs.status() on machine2 is still pointing to the old IP.
[17:37:23] <olan> any help would be very much appreciated!
[22:47:01] <joannac> olan: you probably have to force the one on the secondary
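"Force" here presumably means a forced reconfig on a member that is not primary, i.e. the shell's rs.reconfig(cfg, {force: true}). A hedged pymongo sketch of the same command, run directly against the stale member (the hostname, member index, and new IP below are placeholders):

```python
from pymongo import MongoClient

# Connect directly to the member whose config is stale.
client = MongoClient("machine2.example.com", 27017)

# Fetch the current config, fix the out-of-date host, and bump the version.
cfg = client.admin.command("replSetGetConfig")["config"]
cfg["members"][1]["host"] = "NEW_IP:27017"  # member index 1 is a placeholder
cfg["version"] += 1

# force=True lets a non-primary accept the reconfig; treat it as a last resort.
client.admin.command("replSetReconfig", cfg, force=True)
```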
[23:10:40] <fpghost84> Hi. If I have a database with two collections which for simplicity have documents like {'name': name, 'val': val}, is it possible in mongodb to get a list of documents from the two collections where all the names match, and val1 (from collection1) > val2?
[23:12:42] <skot> no, you would need to do the comparison in the client. There are no joins or cross/multi-collection queries.
[23:14:53] <skot> you can do this in a few steps: 1.) get distinct names from the smallest collection, 2.) get values from those names from first collection -- 1+2 could be combined using aggregation, 3.) query collection 3 for matching names + value pairs to do the join + filter.
[23:15:04] <fpghost84> skot: ok so I just do it in pymongo say with a for loop on the two collections?
[23:15:17] <skot> s/3/2
[23:15:46] <fpghost84> ok let me try to understand this
[23:15:49] <skot> and yes, you can also do it one at a time in the client
[23:16:08] <skot> If you understand that one, it will be the easiest without learning more.
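A minimal pymongo sketch of that one-at-a-time loop, using the simplified schema from the question (the collection names collection1/collection2 are assumptions):

```python
from pymongo import MongoClient

db = MongoClient().test
coll1, coll2 = db.collection1, db.collection2

matches = []
for name in coll2.distinct("name"):        # step 1: distinct names (smaller collection)
    d2 = coll2.find_one({"name": name})    # step 2: its value in collection2
    # step 3: documents in collection1 with the same name and a larger val
    for d1 in coll1.find({"name": name, "val": {"$gt": d2["val"]}}):
        matches.append((d1, d2))
```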
[23:16:10] <fpghost84> would I lose any speed compared to sql joins? or will it be the same pretty much?
[23:16:37] <skot> It will be slower since you need to copy the data -- there are many things relational databases do to speed up joins.
[23:17:09] <skot> In practice, you may find it to be just fine, so try and see :)
[23:17:53] <fpghost84> thanks
[23:18:25] <skot> If the simple loop is too slow, play with grouping a few together with $or
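Continuing the sketch above, the $or suggestion amounts to batching several names into each query so one round trip covers many names (the batch size of 50 is an arbitrary assumption):

```python
from itertools import islice

def batches(iterable, size=50):
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

for names in batches(coll2.distinct("name")):
    vals = {d["name"]: d["val"] for d in coll2.find({"name": {"$in": names}})}
    clauses = [{"name": n, "val": {"$gt": v}} for n, v in vals.items()]
    for d1 in coll1.find({"$or": clauses}):
        matches.append((d1, vals[d1["name"]]))
```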
[23:18:47] <fpghost84> Before mongo I was doing it by storing my data in JSON and then using for loops to find matches on name, testing for val1>val2, and that was too slow
[23:19:22] <skot> If you don't have indexes it will be very slow, if the data is larger than memory.
[23:19:56] <fpghost84> indexes?
[23:20:02] <skot> like on name
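In pymongo terms (continuing the earlier sketch) that means indexing the lookup field, so the per-name queries become index scans rather than full collection scans:

```python
coll1.create_index("name")
coll2.create_index("name")
```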
[23:21:14] <fpghost84> ok guess I have to have a play with this and see
[23:21:41] <skot> good luck, and don't forget about the docs -- they are sometimes helpful
[23:22:58] <fpghost84> thanks