[00:09:45] <skot> babykosh, I'd suggest writing a small python/whatever-lang script to read from postgres and write to mongodb. CSV is a terrible format and will lose all kinds of type info
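A minimal sketch of that kind of migration script, here in Node.js with the 'pg' and 'mongodb' drivers (connection strings, table and collection names are placeholders, not anything from the discussion):

```js
// Copy rows from a PostgreSQL table into a MongoDB collection,
// preserving types (numbers, dates, booleans) that a CSV round trip would lose.
const { Client } = require('pg');
const { MongoClient } = require('mongodb');

async function migrate() {
  const pg = new Client({ connectionString: 'postgres://localhost/mydb' });
  await pg.connect();

  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const coll = mongo.db('mydb').collection('mytable');

  const res = await pg.query('SELECT * FROM mytable');
  if (res.rows.length > 0) {
    await coll.insertMany(res.rows);   // each row object becomes one document
  }

  await pg.end();
  await mongo.close();
}

migrate().catch(console.error);
```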
[08:28:47] <kas84> in a capped collection can I specify only the max documents to store? instead of bytes?
[08:29:39] <kas84> can I convert a regular collection to a capped one?
[08:30:27] <Boomtime> a capped collection is a fixed size on disk; the optional document limit is a convenience and is only applied in addition to the size limit
[08:31:07] <Boomtime> convert a collection to capped: http://docs.mongodb.org/manual/core/capped-collections/#convert-a-collection-to-capped
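For reference, the byte size is required when creating a capped collection; the document-count cap (max) is optional and enforced on top of it. A minimal shell sketch (collection names are placeholders):

```js
// size (bytes) is mandatory; max (document count) is optional and secondary.
db.createCollection("log", { capped: true, size: 100000, max: 5000 })

// Convert an existing collection to a capped one, per the linked docs.
db.runCommand({ convertToCapped: "events", size: 100000 })
```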
[09:23:48] <nomadist> suppose I have a document like this {'_id': 'ObjectId(...)', 'bag': { 'word1':{ 'pos':0,'neg':1 } } }
[09:24:31] <nomadist> and I want to add {'word2': { 'neg':0,'pos':1 } } to the 'bag' key in the previous document
[09:26:13] <nomadist> then db.mycollection.update({'_id':'ObjectId(...)'},{'$set':{'bag':{'word2':{'pos':1,'neg':0}}}}) is just replacing the previous 'word1' key with this new 'word2' key
[09:26:40] <nomadist> whereas I want to add this new key value pair to the 'bag' dictionary/object
[09:27:26] <nomadist> If anyone could help i'd be thankful. Been looking for a solution all morning and can't find anything.
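The usual fix for this (a sketch, since it isn't answered in the channel) is to use dot notation in $set so only the nested field is written, rather than replacing the whole bag subdocument:

```js
// Dot notation adds bag.word2 while leaving bag.word1 untouched.
db.mycollection.update(
  { _id: ObjectId("...") },
  { $set: { "bag.word2": { pos: 1, neg: 0 } } }
)
```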
[10:34:24] <jhon> i have a problem with mongodb aggregate
[10:36:02] <jhon> i have to show data for multiple days, about 1 million rows/day... but the aggregate fails with the message: exception: aggregation result exceeds maximum document size (16MB)
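A common workaround (a sketch, not something confirmed in the channel; collection name and pipeline are placeholders) is to return the aggregation through a cursor, or to write the results to another collection with $out, so everything is not packed into one 16MB result document (MongoDB 2.6+):

```js
// Stream results back through a cursor instead of a single document.
db.events.aggregate(pipeline, { cursor: { batchSize: 1000 } })

// Or send the results to a collection and query that afterwards.
db.events.aggregate([ { $group: { _id: "$day", total: { $sum: 1 } } },
                      { $out: "daily_totals" } ])
```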
[11:17:35] <ssarah> i just installed mongo. Didn't reboot yet. I got no /etc/default/mongodb file, but i have a /etc/init/mongodb.conf with an ENABLE_MONGOD="yes" line in it. should i change that?
[11:38:39] <jhon> i have to show data for multiple days, about 1 million rows/day... but the aggregate fails with the message: exception: aggregation result exceeds maximum document size (16MB). anyone?
[11:41:20] <jdj_dk> what's the best way to ensure that only valid documents are being stored in mongo? Has anybody made a schema to check against, or do I have to do this manually?
[11:43:27] <oznt> hi, can someone tell me how I access setup.cfg properties from my setup.py?
[11:47:45] <kees_> ssarah, nah, just create the /etc/default/mongodb file. the init script checks if it exists and includes it if it does
[12:00:00] <Sven_vB> hi all, can anyone help me upgrade MongoDB? http://paste.ubuntu.com/8410224/ while updating package lists on Ubuntu 12.04.5 LTS: bzip2: Data integrity error when decompressing.
[12:06:18] <gausie__> If I have a collection of subdocuments, how can I query to get 10 documents after one with a specific ObjectId
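One way to do that, if the documents can be ordered by _id (a sketch; the collection name is a placeholder), is a range query on _id:

```js
// Fetch the 10 documents whose _id comes after a known ObjectId, in _id order.
db.items.find({ _id: { $gt: ObjectId("...") } })
        .sort({ _id: 1 })
        .limit(10)
```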
[12:18:31] <hydrajump> hi anyone using the MongoDB official AMIs and can explain how you would deal with updates?
[12:35:56] <Sven_vB> looks like my problem was with another package, the aptitude output seems to be out of sync with the bzip error output.
[12:38:39] <sweber> mongo <hostname>:30000/<database> returns "connection refused". Is there a special invocation parameter for mongod to allow remote access?
[12:40:12] <cheeser> you have to bind to an IP other than localhost
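In practice that means starting mongod with --bind_ip (or setting bind_ip in the config file) so it listens on something other than 127.0.0.1; the address below is a placeholder:

```
# Listen on all interfaces instead of only localhost, on the port the client dials.
mongod --port 30000 --bind_ip 0.0.0.0
```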
[12:48:56] <sachinaddy> Hi.. Can we use JOINS in mongodb
[13:55:25] <jhon> does the cursor { } have a limit? and what is the difference between cursor: {} and cursor: { batchSize: 0 }
[14:03:41] <cheeser> jhon: that batchsize of 0 means the first batch is 0-sized
[14:03:54] <cheeser> there are some perf gains in some use cases using that. i forget the specifics.
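For reference, the two forms of the command look like this (collection name and pipeline are placeholders). With batchSize: 0 the initial reply carries no documents, only the cursor id, and the data arrives with the first getMore:

```js
// Default first batch vs. an empty first batch.
db.runCommand({ aggregate: "events", pipeline: [ { $match: {} } ], cursor: {} })
db.runCommand({ aggregate: "events", pipeline: [ { $match: {} } ], cursor: { batchSize: 0 } })
```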
[14:06:00] <Thijsc> Anybody using MongoDB with Erlang here by any chance? Wondering whether it's possible to do that in a stable way.
[14:08:27] <jhon> cheeser: which is the correct form to use, with batchSize or without?
[14:21:48] <sweeper> morning. has anyone seen mongo stop using an index for a given query for a period of time, and then start using it again a while later, for an identical query?
[15:32:14] <loincloth> i'm struggling to add a custom attribute for the JSON to a Mongoid::Document in Ruby... i've seen people use the term "decorate"... i don't want to persist this value, only add it to the objects before they are converted for JSON output
[15:34:41] <jb8> do myjson.toObject() then do myjson.firstName = 'John' ... you cannot add it right now because it is a mongo object, not a plain json object, even though it looks like one in the console
[15:35:32] <jb8> the object you are trying to add a custom attribute to has functions on it like myjson.save ... it is not typical json
[15:36:34] <loincloth> jb8: hmm ok i'll look into that thx
[15:43:54] <loincloth> jb8: i'm not sure we are talking about the same context... i am working with Mongoid models in Ruby... are you talking about JavaScript?
[15:44:14] <loincloth> i'm not seeing this toObject or to_object type stuff on the Mongoid side
[15:45:02] <loincloth> in a pinch i suppose i can parse the JSON i get as-is, add to that, and convert it back to json, but that suuuux
[15:47:25] <loincloth> hmm actually i might be able to get in before we go to json, turn the documents into a hash, modify that, and just send the hashes to json
[15:47:41] <loincloth> i see as_json(methods: :custom_attribute) seems to work
[15:47:52] <jb8> yes it was for javascript. I didn't think Ruby would do it differently for that though.
[15:52:05] <jb8> decorator and facade design patterns are a great way to go like you are doing
[15:52:22] <loincloth> i am thoroughly confused now, though, because i found a spot in this code where i think it is already doing what i thought should work: merely assigning to the document, document[:custom_attribute] = value, and that makes it into the json according to my tests so far
[15:52:49] <loincloth> jb8: yeah maybe i should consider it but i think i need to do more work making sure i see current options clearly
[15:54:04] <jb8> for sure on making sure your current situation is covered first
[15:54:55] <jb8> sometimes the attribute will apply in the current scope, but if it is not set correctly it will not persist on the object once you leave that scope, say to send it to your user or to the next function. just make sure your object works in more than one scope before you call it working.
[15:57:57] <sweeper> morning. has anyone seen mongo stop using an index for a given query for a period of time, and then start using it again a while later, for an identical query?
[16:00:34] <loincloth> jb8: thx yeah i gotta figure out which end is up here... starting to smell like user error :\
[16:01:53] <jb8> if you are on a replicated database and your replica does not have the index yet then it could appear to flip flop...just the first thought
[16:02:13] <jb8> if your query is not using the exact fields in the index...
[16:03:01] <jb8> your logs should tell you exactly nscanned and a little bit more about what may have happened
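You can also check index selection directly from the shell rather than the logs; a sketch with placeholder names:

```js
// explain() reports which index (if any) was chosen and the nscanned count.
db.items.find({ userId: 123 }).explain()
```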
[16:18:53] <context> 'Assertion failure false src/mongo/db/database.cpp:257' everything on google points to running 32bit mongo but we are running 64bit
[16:22:28] <loincloth> jb8: i think this is not only user error but just bad test writing :(
[16:23:01] <loincloth> some simple tests and various debugging have me convinced that assigning custom attributes to a Mongoid::Document in memory, and seeing them when you call to_json, is something that generally works
[17:48:21] <tispford> fun fact i just learned by reading mongo source: when currentOp() on a moveChunk says "msg" : "step5 of 6",
[17:48:33] <tispford> it means that it has *finished* step 5 and is currently on step 6
[17:48:42] <tispford> not "currently doing step 5 of 6" as i expected
[17:50:29] <tispford> always a good sign when, to figure out what's going on with some software, it's more productive to consult the source than the documentation
[17:54:02] <EmmEight> Developers aren't journalists.. it's hard mang
[18:16:51] <EricL> Anyone have or know of any scripts that dump your databases by collection and list only the indexes (i.e. for creating a dev env)?
[20:27:57] <jb8> i have 200,000 docs ... streaming 5,000 at a time in a loop... sometimes it goes all 5,000 at once, the next round it does 200, pauses for a second, then finishes the 5,000. Anyone know why mongodb is pausing? cpu is 1 percent, memory is 8 percent. it's a dump of the collection, no query involved.
[20:29:48] <skot> jb8, when iterating a cursor every once in a while it has to call to the server to get more results and you might just be seeing the network latency of a round-trip.
[20:30:16] <jb8> i also put the script on the database instance and same issue...any other ideas? thank you
[20:30:45] <skot> It can still happen locally, as the network overhead on a local instance isn't always zero.
[20:33:59] <tispford> yeah usually when i've seen behavior like that, the pause is just due to grabbing more results from the cursor
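If the pauses really are getMore round trips, asking for larger batches reduces how often the driver has to go back to the server; a sketch with placeholder names (the server still caps how much fits in each batch):

```js
// Request larger batches so the cursor needs fewer getMore round trips.
db.mycollection.find().batchSize(10000).forEach(function(doc) {
  // hand doc to the downstream consumer here
})
```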
[20:34:14] <tispford> you also mentioned your load and memory, but check out io utilization also
[20:34:39] <tispford> mongo is most frequently io-bound IME, though that is less true with SSDs
[20:35:32] <jb8> i need to stream to elasticsearch faster, and the streaming out of mongo is the bottleneck rather than the sharded elasticsearch... i need to get this up to 10 million records; the pauses are fine but I need it to not pause so I can push faster. just thinking out loud. i'll keep looking into this.
[20:36:41] <tispford> definitely take a gander at the output of iostat if you are on linux and haven't done so yet
[20:37:12] <jb8> i did and it's only reporting 8 block reads per second... which is low
[20:38:20] <jb8> I feel like if I could find a tool that would show me where I am hitting a limit it would help, but all the tools are saying that no limits are being hit, which makes me think some other limit is being hit that another tool could expose...
[20:41:18] <EricL> Anyone have or know of any scripts that dump your databases by collection and list only the indexes (i.e. for creating a dev env)?
[20:41:54] <EmmEight> EricL, I will write one for you, for 100 bucks
[20:41:56] <tispford> MMS can be pretty useful, though I think trying mongodump is the best suggestion so far
[20:41:57] <jb8> mongodump and mongorestore with -c can do that, but definitely for dev and low record counts only, like less than 100,000
[20:43:07] <jb8> put those commands in a script and you have one line per collection. so maybe 10 lines of code if you only have 10 collections in dev. not bad
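If the goal is really just the index definitions rather than the data, a short shell snippet can print them for every collection (a sketch):

```js
// Print the index definitions for every collection in the current database.
db.getCollectionNames().forEach(function(name) {
  print(name);
  printjson(db[name].getIndexes());
});
```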
[20:43:10] <EricL> EmmEight: I can write one for myself for free. I just didn't want to reinvent the wheel.
[20:44:56] <tispford> that is what you would call the "right answer"
[20:46:44] <tispford> anyone know what happens if you control+c a shardCollection() that's already created chunks but is in the process of moving them?
[20:47:01] <cheeser> the server should continue sharding.
[20:47:09] <cheeser> you're only really stopping the client, afaik.
[20:47:37] <tispford> makes sense, thanks. mostly just curious because this command is on course to take a week :p
[20:49:54] <skot> shardCollection should not take a week. It should take minutes at the most. It doesn't move the data before returning, just creates the initial chunk metadata
[20:50:44] <tispford> that's interesting, the command is blocking in the console and it's definitely moving chunks. maybe because the balancer is disabled?
[20:51:04] <tispford> i reenabled the balancer and that didn't seem to make a difference
[20:55:01] <tispford> commands_admin.cpp, "// only initially move chunks when using a hashed shard key"
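Putting those pieces together: a ranged shard key should return from shardCollection almost immediately and leave migrations to the balancer, while a hashed key pre-splits and moves initial chunks as part of the command. A sketch with placeholder names:

```js
sh.enableSharding("mydb")

// Ranged key: records metadata and returns quickly; the balancer moves chunks later.
sh.shardCollection("mydb.events", { userId: 1 })

// Hashed key: initial chunks are created and distributed up front.
sh.shardCollection("mydb.logs", { _id: "hashed" })

// Check whether the balancer is currently enabled.
sh.getBalancerState()
```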
[20:57:26] <tispford> second time today source-diving :p
[21:03:51] <wmayner> Hi folks, does anyone have experience getting Mongo to run on a QNAP NAS?
[21:08:48] <hydrajump> It would be great to talk to someone who has experience working with the MongoDB official AMIs on AWS.
[21:15:53] <Lope> I'm trying to enable the oplog on mongoDB for use with meteor.
[21:16:21] <Lope> in /etc/mongod.conf I've got replSet=rs0
[21:16:57] <Lope> when I try run db.users.update({'user':'admin'}, {$addToSet: {roles: 'clusterAdmin'}}); I get this error: WriteResult({ "writeError" : { "code" : undefined, "errmsg" : "not master" } })
[21:17:46] <Lope> I get a similar error when I try to run show collections; error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
[21:24:52] <tispford> I'm not familiar with this but there are some hits for that error: https://stackoverflow.com/questions/8990158/mongodb-replicates-and-error-err-not-master-and-slaveok-false-code
[21:35:56] <Lope> I actually just want to enable the oplog, I don't need a replica set
[21:36:07] <Lope> I recall doing this before, I just don't remember how.
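The usual cause of "not master" right after adding replSet is that the replica set was never initiated; a single-member set is enough to get an oplog (a sketch, assuming the member runs on localhost:27017):

```js
// Run once in the mongo shell after restarting mongod with replSet=rs0.
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "localhost:27017" } ] })

// Confirm the node becomes PRIMARY and the oplog collection exists.
rs.status()
db.getSiblingDB("local").oplog.rs.findOne()
```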