[00:13:10] <joannac> hahuang61: depends what the usecase is
[00:15:18] <hahuang61> joannac: we have 2 db clusters, and trying to cut over on 1 day for a seamless upgrade, so I've been dumping the database on one side to the other, taking deltas every so often, and we'd take smaller and smaller deltas
[00:15:43] <joannac> why not just use replication?
[00:15:47] <hahuang61> joannac: until the actual cutover, but basically this means that we have to shut down one side, refuse all traffic, and dump the entire collections that are mutable in our app, restore them, then bring it back
[00:16:08] <hahuang61> joannac: can I set up replication between 2.0.4 and 2.6.7?
[00:16:15] <hahuang61> joannac: like a hidden replica set member?
[00:16:32] <joannac> no, but neither can you mongodump from 2.0.4 and restore to 2.6.7
[00:16:54] <hahuang61> joannac: what? are you sure? Seems to be working fine.
[00:27:55] <hahuang61> joannac: wellllllllllll NOT exactly.
[00:28:30] <hahuang61> joannac: dumping from an unsharded mongo that takes a daily dump from a mongos for a mongo cluster. Don't ask me why they're doing this. No idea, they did this before I joined.
[00:28:36] <hahuang61> joannac: but then yes, restoring through a mongos.
[00:29:01] <hahuang61> joannac: the unsharded mongo is basically used for manual queries
[00:29:14] <joannac> okay, i will revise to "not recommended, but if it works, good for you"
[00:32:33] <gozu> now that i think about it, i will have to ask the guys running this 12-shard 3-replica various-extras clustah how their reporting thingie (one of the extras) works ... is there some way to tell a mongos to be a read-only frontend for rs-secondaries?
[00:33:25] <hahuang61> I'd love to hear it. Lemme know when you find out :)
[00:33:56] <hahuang61> joannac: so in any case, there's no way huh? Full dump full restore will "work" if it works.
[00:34:15] <hahuang61> mostly it's not a problem, but one of these collections is pretty large.
[00:45:23] <hahuang61> how much slower is mongoexport/mongoimport than mongodump/mongorestore?
[00:45:29] <hahuang61> trying to see if that's an option.
[00:46:27] <gozu> wouldn't expect the difference to matter if you are going through a mongos.
[00:48:17] <Boomtime> hahuang61: note that mongoexport/import transitions via JSON, which cannot express the same type fidelity as the binary format
[00:49:01] <gozu> (which, depending on the "not a schema", may actually be an advantage in this case...)
[00:49:38] <Boomtime> not when your large 64bit ints get rounded via doubles
[00:49:52] <gozu> yeah. that falls under "depends". :)
[00:50:39] <gozu> but going from a "speshul" 2.0.4 setup to 2.6.7 sounds like "depends" is the main theme.
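[note: a sketch of the type-fidelity issue Boomtime describes, in the mongo shell; the collection name is invented. mongodump/mongorestore round-trips BSON, but a tool that parses exported JSON numbers as doubles rounds anything past 2^53:]
    // a 64-bit value just past 2^53, where doubles lose integer precision
    db.fidelity.insert({n: NumberLong("9007199254740993")})
    db.fidelity.findOne().n   // NumberLong("9007199254740993") via the binary path
    // parsed back as a plain JSON double it becomes 9007199254740992, silently rounded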
[00:53:36] <hahuang61> yeah. It's a difficult situation.
[00:53:57] <hahuang61> it's also a difficult situation to be the guy with the most mongo experience when the 2 previous architects have quit
[00:54:18] <hahuang61> AFAIK, they tried upgrading to 2.2 and 2.4 before, in-place
[00:54:52] <hahuang61> and it failed for a few reasons. I don't know what they were, but at one point they blamed the yielding strategy for causing perf issues and rolled back the upgrade.
[01:14:37] <Trindaz> mongoose model runs save() just fine, no errors. debug output shows document had all its properties updated correctly. BUT. the save just never persists in the DB. How can this be?!
[01:15:32] <joannac> Trindaz: checking the right db and collection?
[01:16:17] <Trindaz> yup all definitely correct. I know this because I start by fetching the document from the DB and using console output to verify that it has exactly the right data. Also verified _id fields. It's the right document.
[01:17:32] <Trindaz> lets see what markModified has for me
[01:27:20] <Trindaz> however I'm 100% certain that the document has an id of 54cc287734072ab454ba1360
[01:27:42] <Trindaz> that's what the _id field is set to when the document is fetched from Mongo to make edits, and that's still the value of _id when the save is attempted
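[note: the classic cause of a mongoose save() that reports success but persists nothing is a path mongoose can't watch, e.g. a Schema.Types.Mixed field; markModified tells it the path is dirty. A hedged sketch, field name invented:]
    // 'meta' assumed to be a Mixed path; mongoose does not track mutations
    // inside Mixed values, so save() thinks nothing changed without this
    doc.meta.reviewed = true;
    doc.markModified('meta');   // explicitly flag the path as dirty
    doc.save(function (err) {
      if (err) return console.error(err);
    });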
[09:55:58] <bybb> Actually, from the beginning I have "Deploying changes in progress"
[10:15:32] <bybb> In MMS itself, is there a way to undo everything and start again?
[10:23:55] <oceanx_> hello, I didn't understand whether a cursor timeout during a long-running aggregation/mapreduce means the operation is aborted, or whether it continues running (I found the latter explanation in a stackoverflow answer). Can anyone help?
[10:25:22] <chou> hat-tip to agolden, "Thanks for the email reports. I'd rather have them in private email than not at all."
[10:44:33] <Chulbul> I'm new to mongodb. I have installed mongodb; now what's next? I'm reading from here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows . How do I set the data directory, and where?
[10:50:35] <bybb> Chulbul: I guess you should create it on C:
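[note: per the linked Windows tutorial, mongod defaults to \data\db on the current drive; you create the directory yourself and can override with --dbpath. The install path below is an assumption:]
    md \data\db
    C:\mongodb\bin\mongod.exe --dbpath "C:\data\db"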
[11:10:06] <oceanx_> Tug_: I don't think write performance should be an issue unless you enable write concern > 0
[11:10:44] <oceanx_> i usually prefer to reason in terms of queries I'll need to execute next, to decide the structure of the data I'm inserting in mongodb
[11:10:52] <Tug_> oceanx_: I have write concern > 0
[11:12:37] <oceanx_> i think there's some overhead from the find() mongo has got to execute before updating
[11:12:50] <oceanx_> while insertion will just append the document
[11:13:54] <Tug_> oceanx_: I see, maybe that's why I went with 1) in the first place. I'm just wondering if 2) could perform as well as it fits my model better
[11:15:07] <oceanx_> Tug_: depending on what you need later you could just use an index on id1
[11:15:35] <oceanx_> and read performance should not change much (if you need to find (y, v, z) based on x)
[11:16:13] <oceanx_> or use x as _id (and mongo will always use an index in that case)
[11:16:43] <Tug_> yes, I'm not too worried about read performance though
[11:17:16] <oceanx_> if you need to store more data for (id1: x) which is not changing the best thing could be using a collection (maybe capped) for this data
[11:17:41] <oceanx_> and then another collection with the other data which you are pushing using x as index
[11:18:14] <oceanx_> when you need to find() you can just do two finds on the two different collections and join client-side if needed
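[note: a rough mongo-shell sketch of the two-collection layout oceanx_ is suggesting; collection names are invented, x/y/v/z are from the discussion:]
    // immutable per-id data, keyed directly by x
    db.staticdata.insert({_id: x, info: "attributes of x that never change"})
    // the growing data in its own collection, one document per write
    db.events.insert({id2: x, data: [y, v, z]})
    // "join" client-side: two finds, merged in the application
    var s = db.staticdata.findOne({_id: x});
    var ev = db.events.find({id2: x}).toArray();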
[11:23:52] <Tug_> "Capped collections are fixed-size collections" << so it's in terms of number of documents in the collection
[11:25:59] <oceanx_> yes: "capped collection: A fixed-sized collection that automatically overwrites its oldest entries when it reaches its maximum size. The MongoDB oplog that is used in replication is a capped collection." (see Capped Collections)
[11:26:10] <Tug_> I see, so it's some sort of stack
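[note: closer to a ring buffer than a stack, since the oldest documents are overwritten first. Creation looks like this; size is in bytes and required, max (a document count) is optional, so it is capped by bytes first and document count second:]
    db.createCollection("log", {capped: true, size: 1048576, max: 1000})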
[11:26:20] <oceanx_> if you don't need oldish data you can use them
[11:26:42] <oceanx_> actually I think it was wrong for your use case, sorry :)
[11:27:05] <Tug_> No I can't use this. My documents have an expires field so they are removed based on the time not the quantity of data
[11:27:54] <Tug_> It looks interesting but it has a very narrow use case I guess
[11:28:49] <oceanx_> then just stick with the second scheme, using indexes where needed: {_id: new ObjectId(), id2: x, data: [y,v,z]}
[11:30:07] <oceanx_> you will have a chance to join client-side based on x (or aggregate server-side, or mapreduce/find().forEach(), and you can set an expiration timeout on the cursor if you have a lot of data)
[11:30:47] <oceanx_> and you just create an index on x to get better performance.
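[note: a sketch of the indexes being suggested, plus a TTL index for the time-based expiry Tug_ mentioned above; collection and field names are assumptions:]
    db.events.ensureIndex({id2: 1})   // fast lookup of every document for a given x
    // TTL index: a background task removes docs once 'expires' has passed
    db.events.ensureIndex({expires: 1}, {expireAfterSeconds: 0})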
[11:32:33] <Tug_> I won't need to merge on x I think.
[11:32:48] <Tug_> But I usually retrieve data for all x at the same time
[11:33:25] <Tug_> I just don't want to be limited on write performance if I have 100k values of z
[11:34:54] <Tug_> for all x, I meant every document matching x
[11:35:25] <oceanx_> yep, so just use the index on id2 (x) and you can fetch every matching document at the same time
[11:37:31] <Tug_> (I used id1 and id2 but yeah id1 can be _id)
[11:39:12] <Tug_> oceanx_: thanks for your input :)
[11:45:19] <Tug_> according to this article: http://askasya.com/post/largeembeddedarrays $push is more expensive, as the document needs to be moved on disk as it grows
[11:45:34] <Tug_> not sure it is up to date though, but it makes sense
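[note: on the askasya point: 2.6 uses powerOf2Sizes record allocation by default, which leaves padding so a growing $push forces fewer document moves. For a collection created under an older version it can be switched on; collection name assumed:]
    db.runCommand({collMod: "events", usePowerOf2Sizes: true})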
[13:01:23] <izx> I am trying to repair the mongodb using the option >> 'mongod -f /etc/mongodb.conf --repair' and I get this error >> pulp_database Cannot repair database pulp_database having size: 6390022144 (bytes) because free disk space is: 4213800960 (bytes). Any help or advice would be appreciated.
[13:04:41] <joannac> izx: well, the message says it all. you don't have enough free disk space to repair
[13:05:00] <joannac> izx: why are you repairing in the first place? reclaim disk space or corruption?
[13:05:29] <izx> joannac, Do i need to have the same amount of free space as the DB size?
[13:05:56] <joannac> just like it says in the documentation ;)
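[note: the manual asks for free space equal to the data set size plus roughly 2GB; if the data volume lacks it, --repairpath can point the scratch files at another volume, e.g.:]
    mongod --dbpath /var/lib/mongodb --repair --repairpath /mnt/spare/mongo_repair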
[13:06:05] <izx> joannac, I am not able to start the mongod service as it is throwing >> exception in initAndListen std::exception: boost::filesystem::status: Permission denied: "/var/lib/mongodb/mongod.lock", terminating
[13:10:10] <joannac> check up the directory tree? do all the directories have the right permission?
[13:11:47] <izx> joannac, Directory has the permission of 755
[13:16:21] <joannac> izx: okay, check the contents of that file
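[note: typical checks for that Permission denied error; the daemon user must own the dbpath and everything in it. The user/group name depends on the distro package:]
    ls -ld /var/lib/mongodb /var/lib/mongodb/mongod.lock
    sudo chown -R mongodb:mongodb /var/lib/mongodb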
[13:18:16] <izx> joannac, Initially while repairing the DB I got >> Can't specify both --journal and --repair options. I then removed the journaling option from the conf file and tried again. Is that fine?
[13:18:36] <joannac> well, you never told me why you were repairing
[13:19:28] <izx> joannac, I was not able to start the mongod service. I read in a forum that removing the mongod.lock and repairing would be the solution. So I tried it.
[13:20:07] <joannac> if that's the case why do you still have a mongod.lock file?
[13:20:33] <izx> joannac, I do not have the lock now as I have removed it. I just get the file system space error.
[13:32:34] <joannac> also, I don't know what you mean
[13:32:43] <joannac> you don't expect to be able to connect to your mongod?
[13:33:41] <bybb_> No, I need to figure out what a mongod is from MMS's point of view
[13:33:48] <bybb_> I just need to read the doc again
[13:34:09] <bybb_> because I didn't deploy anything yet
[13:34:43] <joannac> yeah, go read the docs. then deploy. then worry about monitoring
[13:43:54] <Tomasso> I'm executing a text search query from the ruby driver such as "$search" => "\\\"Thank you information\\\" Thank you information" in order to get either documents with the entire phrase or documents with any of the words of the entire phrase
[13:44:13] <Tomasso> i think i have problems with escape characters ...
[13:52:01] <Tomasso> and doing smth like $search => ("\"\"" + "Thank you information" + "\"\" " + "Thank you information") // Works for phrases but not for individual words
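[note: the equivalent in the mongo shell with the 2.6 $text syntax; one level of escaped quotes around the phrase is all the server needs, the triple backslashes above are Ruby string-literal escaping layered on top. Collection name invented:]
    // matches the exact phrase OR any of the individual words
    db.articles.find({$text: {$search: "\"Thank you information\" Thank you information"}})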
[14:28:55] <scellow> Hello guys, it's an emergency, I'm unable to import a database
[14:29:03] <scellow> i have the file dbName.0 and dbName.ns
[14:29:20] <scellow> if i put them in the folder of mongodb i get an error and i can't start mongo
[14:29:33] <scellow> Tue Feb 3 15:24:49.250 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
[14:30:10] <scellow> If someone could help me that'd be ultra nice :)
[18:48:45] <jpb> usually questions that are application-specific are not well suited to IRC
[18:49:23] <swapnesh> okkk... learnboost is maintaining mongoosejs
[18:49:34] <jpb> but questions about the ORM would go to mongoosejs
[18:51:30] <swapnesh> actually one problem I am constantly facing with the mongo command.. I am not sure if it's the way mongodb behaves.. let me discuss it with you then
[18:54:38] <swapnesh> I am using Ubuntu... and every time I boot the system I need to specify the db path and then run mongo
[18:54:53] <swapnesh> if I directly run mongo command it will always fail
[18:57:24] <swapnesh> at times I even need to rm the mongod.lock file whenever I start my PC
[18:57:56] <swapnesh> I followed everything http://docs.mongodb.org/manual/tutorial/recover-data-following-unexpected-shutdown/
[18:58:56] <swapnesh> from this, but still I need to run one instance with root privileges: > mongod --dbpath /var/lib/mongodb
[18:59:05] <swapnesh> and in next terminal > mongo
[18:59:30] <swapnesh> If I close the first terminal ..then mongo stops in the second terminal as well
[18:59:51] <swapnesh> let me know if this is ok...or if I am doing something wrong here ??
[19:01:38] <swapnesh> If this is off topic here, let me know the IRC channel where I can get info about this ???
[19:05:45] <kali> swapnesh: you need your system to start mongod: look there: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
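[note: with the packaged install from that tutorial, the init script starts mongod at boot with the dbpath from the config file, so no manual terminal session is needed:]
    sudo service mongod start   # service name is 'mongodb' with Ubuntu's own package
    # dbpath comes from the config file, e.g. in /etc/mongodb.conf:
    #   dbpath=/var/lib/mongodb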
[19:19:21] <ldiamond> I'm looking for a graph database solution. Anyone know anything good available? Or will it be easier to just hack one up on a mongodb?
[19:20:36] <ldiamond> I've looked at Cayley but it's too buggy and there's barely any development on it.
[19:59:38] <Nilium> I'm strangely pleased that I haven't actually had any trouble in the process of mongorestore-ing a 23gb collection so far
[21:01:13] <Tyler_> Is it possible to update using a callback? findOneAndUpdate({phone: req.phone}, {start: req.start, code: callback.code}, function(err, callback)});
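[note: the snippet above won't parse; a hedged corrected sketch with mongoose, where Model, req.start and req.code are assumptions, and update fields go under $set:]
    Model.findOneAndUpdate(
      {phone: req.phone},
      {$set: {start: req.start, code: req.code}},
      {new: true},                  // hand the updated document to the callback
      function (err, doc) {
        if (err) return console.error(err);
        console.log(doc);
      }
    );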
[21:50:34] <geri> hi, can mongodb be used as a distributed key-value store?
[21:51:58] <Hilli> scellow: If you are root, you have permission. But I don't think that mongod is running as root, is it?
[21:55:19] <Hilli> geri: I suppose, if you bend it. But there are probably better options, Riak or something like that. MongoDB is more document oriented, so your queries would be something like key: something and you would get value: somevalue out, instead of just gimme(key) => value
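[note: concretely, what Hilli describes, using _id as the key so the lookup rides the mandatory _id index; a minimal sketch:]
    db.kv.save({_id: "somekey", value: "somevalue"})   // insert-or-replace by key
    db.kv.findOne({_id: "somekey"}).value              // "somevalue"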