[00:01:37] <themoebius_> Is there any way to convert a stand-alone instance to a replica set without downtime? It seems like I need to add this replset parameter to the config and restart the stand-alone instance…
[00:07:29] <nemothekid> themoebius_: AFAIK, no because you have to change the config parameter
[00:10:40] <themoebius_> nemothekid: :( that's what I thought
[00:11:00] <themoebius_> was hoping there was a workaround like changing the config parameter directly in the config db
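The standard conversion needs one brief restart rather than extended downtime; a minimal sketch (the set name "rs0" and the dbpath are placeholders):

    # 1. Restart the standalone with a replica set name:
    #      mongod --dbpath /data/db --replSet rs0
    # 2. Initiate the set (rs.initiate() in the shell); via pymongo:
    from pymongo import MongoClient

    MongoClient("localhost", 27017).admin.command("replSetInitiate")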
[00:18:48] <nemothekid> How long does mongos --configdb <config server> --upgrade take? and can I run 2.2 mongos processes in parallel (so I don't have any downtime?)
[00:55:27] <antares_> Monger 1.5 RC1 is out: http://blog.clojurewerkz.org/blog/2013/03/21/monger-1-dot-5-0-rc1-is-released/
[01:53:50] <nemothekid> For the 2.4 upgrade I keep getting "error upgrading config database to v4 :: … :: caused by :: 13127 getMore: cursor didn't exist on server, possible restart or timeout?" I'd imagine a certain collection (we have 15000 total chunks) is causing the timeouts
[02:04:18] <zzing> I have been looking at the gridfs module for python, and I have been wondering if there is a way of obtaining a list of files with the same name.
[02:16:30] <sirious> zzing: you can use pymongo and just query the collection normally
[02:17:51] <zzing> I did try some of the basics of looking at the collection (I am very new to mongo), and did see the possibility. But I also read that gridfs was supposed to put the stuff back together as well if the files are large.
[02:18:54] <sirious> sure, but there is the .files collection, which stores metadata about the file (including filename)
[02:19:04] <sirious> and the .chunks collection, which is the file broken up into pieces
[02:19:38] <sirious> so if you just want a list of files with the same name (sounds more like you want to do an aggregation with the count of files with the same name)
[02:19:53] <sirious> you can do that using a mongo query, no need for gridfs
[02:52:50] <zzing> sirious, ok I will do that, and then make use of the id with the gridfs api
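A sketch of sirious's suggestion, assuming pymongo 3+ (where aggregate() returns a cursor) and the default "fs" GridFS prefix; the database name is a placeholder:

    import gridfs
    from pymongo import MongoClient

    db = MongoClient()["mydb"]
    fs = gridfs.GridFS(db)

    # filenames stored more than once, straight from the metadata collection
    dupes = db["fs.files"].aggregate([
        {"$group": {"_id": "$filename", "count": {"$sum": 1}, "ids": {"$push": "$_id"}}},
        {"$match": {"count": {"$gt": 1}}},
    ])
    for d in dupes:
        print(d["_id"], d["count"])
        data = fs.get(d["ids"][0]).read()  # GridFS reassembles the chunks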
[02:57:57] <nemothekid> For the 2.4 upgrade I keep getting "error upgrading config database to v4 :: … :: caused by :: 13127 getMore: cursor didn't exist on server, possible restart or timeout?" I'd imagine a certain collection (we have 15000 total chunks) is causing the timeouts
[04:44:18] <Jaymin> Since MongoDB has atomic single-document operations, one would be inclined to keep all entries of a financial transaction, i.e. debits and credits, in an array inside a single document.
[04:44:26] <Jaymin> However, I believe that in the long run this approach would be costly for analyzing the transactions in the array/sub-document. E.g. aggregation with $unwind would be costly and may not use an index.
[04:44:34] <Jaymin> Also, searching and listing transactions as a statement would mean referring to one entry of an array out of millions or trillions of transactions/records. Please let me know your thoughts.
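One hedged alternative to the array design Jaymin describes: one document per ledger entry with a compound index, so a statement is an indexed range query rather than a $unwind (collection and field names are illustrative):

    from datetime import datetime, timezone
    from pymongo import ASCENDING, MongoClient

    tx = MongoClient()["ledger"]["transactions"]
    tx.create_index([("account_id", ASCENDING), ("ts", ASCENDING)])

    tx.insert_one({"account_id": 42, "ts": datetime.now(timezone.utc),
                   "type": "debit", "amount": 100})
    statement = tx.find({"account_id": 42}).sort("ts", ASCENDING)  # uses the index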
[05:50:34] <sat2050> I have a lot of idle connections, and even though there aren't many requests the number of connections never goes below 450
[05:50:53] <sat2050> Can someone tell me why this should be happening
[05:51:08] <sat2050> And how can I verify whether the connections are valid or not
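One way to inspect the count, sketched with pymongo (serverStatus is standard; whether 450 is expected usually comes down to the drivers' connection pool sizes on the app servers):

    from pymongo import MongoClient

    status = MongoClient().admin.command("serverStatus")
    print(status["connections"])  # e.g. {'current': 450, 'available': ...}
    # db.currentOp(true) in the shell lists every connection, idle ones included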
[06:10:44] <wromanek> Hi guys! I just started with mongodb a few days ago... and there is one thing I'm wondering... is there any way to get a Collection (List, Set or Array) of values instead of Documents with those values?
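The server always returns documents, but both of these get close to a bare list of values; a pymongo sketch (collection and field names are made up):

    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["users"]
    unique_names = coll.distinct("name")  # de-duplicated list of values
    all_names = [d["name"] for d in coll.find({}, {"name": 1, "_id": 0})]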
[06:38:06] <BadDesign> What happens when I insert the same document again? Does MongoDB prevent duplication?
[06:39:05] <BadDesign> The object has all the fields exactly the same including the $oid
[06:40:29] <BadDesign> And I'm using the insert member function of a DBClientConnection object
[06:41:39] <AAA_awright> BadDesign: Did you try it?
[06:41:49] <AAA_awright> You should get a duplicate _id error
[06:42:46] <BadDesign> I've tried it and I don't see any duplicate error. The document is simply not inserted, or MongoDB does a check to see if it exists before insertion/update
[06:45:00] <AAA_awright> And where are you reading this from to see?
[06:46:31] <BadDesign> I use the mongo shell to check... My insertion is done to a different collection by using a BSONObj obtained by querying another collection
[06:47:22] <BadDesign> I can't find anything in the documentation for insert that says something about duplicates...
[06:50:38] <BadDesign> So I can't decide if this is expected behaviour or not
[06:50:55] <BadDesign> But it does what I want... it prevents insertion of duplicate documents...
[06:52:30] <AAA_awright> My error value shows an error when I try to insert an object with a duplicate value for a primary key like _id
[06:59:11] <BadDesign> The server's console doesn't show a thing, I have to check on the client
[07:12:54] <BadDesign> AAA_awright: it gives the duplicate key error... it's alright as this is what I want
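The server rejects the duplicate _id either way; with the legacy C++ driver the error only surfaces if you ask for it (e.g. getLastError after the insert), which would explain the earlier silence. The same experiment in pymongo 3+, where acknowledged writes raise:

    from pymongo import MongoClient
    from pymongo.errors import DuplicateKeyError

    coll = MongoClient()["test"]["demo"]
    coll.insert_one({"_id": 1, "x": "a"})
    try:
        coll.insert_one({"_id": 1, "x": "a"})  # same _id again
    except DuplicateKeyError as exc:
        print("rejected by the unique _id index:", exc)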
[07:48:56] <wromanek> Hi guys! I just started with mongodb a few days ago... and there is one thing I'm wondering... is there any way to get a Collection (List, Set or Array) of values instead of Documents with those values?
[09:54:02] <wromanek> Has anyone tried wrapping with Morphia and Java?
[10:01:51] <Sep1> Hi guys, I am trying to run a query like this on MongoDB (Mongoid) database: Article.where({'_id' => { "$gt" => Moped::BSON::ObjectId(id_from_database)}})
[10:01:51] <Sep1> but getting this error: undefined method 'id' for #<Array:0x007ff71d671b20>
[10:01:59] <Sep1> I've tried to google this issue, but still don't know how to fix it… could you help me with that? Thanks
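The error suggests id_from_database is an Array rather than a single id; the query pattern itself is fine. For comparison, the same _id-based pagination in Python (last_id is a placeholder hex string):

    from bson import ObjectId
    from pymongo import MongoClient

    articles = MongoClient()["mydb"]["articles"]
    page = articles.find({"_id": {"$gt": ObjectId(last_id)}}).sort("_id", 1).limit(20)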
[11:32:03] <davve> herro, i apologize if this isn't the right forum but. how do i deserialize bson with the native client? (a SlowBuffer)
[11:59:29] <kali> davve: it doesn't sound off topic, but... what do you mean ? :)
[11:59:43] <kali> davve: what do you call "native client" ?
[12:01:02] <davve> kali: the node.js client/driver "mongodb" :)
[12:01:55] <davve> i'm now using the mongodb.bson object's deserialize method and putting what i find in there, but it says corrupted, so i'm not sure if i need to provide metadata of some kind?
[12:02:53] <davve> for the record, i did not set up the collection, it is actually a queue in a production system
[12:05:06] <davve> guess i need to get a hold of someone who knows more about it than i do. kali: i appreciate you responding :) with all the join/parts here i was doubting anyone was actually reading this lol
[12:06:04] <kali> davve: it's a bit slow... and it's early for the US, and lunch time in Europe, so...
[12:06:44] <kali> davve: be patient, stay around, there are a few node users around here (not me)
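For reference, decoding raw BSON bytes with pymongo's bson package (raw_bytes is a placeholder; the node driver exposes a comparable deserialize(), and a "corrupted" error from a deserializer often means the buffer isn't a complete document, e.g. sliced at the wrong offset):

    import bson  # ships with pymongo

    docs = bson.decode_all(raw_bytes)  # list of dicts from a concatenated BSON stream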
[12:51:15] <DinMamma> Hiya, I run a 2.2 sharded cluster, and I needed a new mongos, so I deployed a chef role which just installs mongos on a node.
[12:51:47] <DinMamma> Ofc it installed the 2.4 version and complained in the logs that I needed to run mongos with --upgrade.
[12:52:21] <DinMamma> I didn't think that this was due to 2.4; I did a lot of replica set work on my shards last night, so I naively thought that it had something to do with this.
[12:57:51] <kali> i would assume it's linear on the size of the config database, not the size of the real database
[12:58:01] <kali> but once again... i don't know what i'm talking about ):
[12:58:23] <salentinux> guys, tonight my pc crashed while I was filling my journaled mongodb with some data. Now I tried to use mongo and I'm getting this error: http://pastie.org/6743329
[12:59:03] <kali> your data is too big for a 32bit mongodb
[12:59:23] <kali> iirc, journaling doubles the required address space
[12:59:53] <salentinux> ok,.. so what if I cannot switch to a 64-bit architecture?
[13:00:46] <kali> you can probably still dump it with bsondump
[13:03:01] <salentinux> I will try with bsondump. Another option: I have a 64-bit server (another machine). If I install mongodb and copy /var/lib/mongodb to that server, I should be able to recover?
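Copying the data files to the 64-bit machine should work, since the on-disk format is architecture-independent between 32-bit and 64-bit x86. The offline-tools route looks roughly like this (paths are placeholders; --dbpath is the 2.x-era offline mode of mongodump):

    # dump straight from the data files, no mongod needed
    mongodump --dbpath /var/lib/mongodb --out /tmp/dump
    # inspect a dumped collection as JSON
    bsondump /tmp/dump/mydb/mycollection.bson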
[13:46:32] <shadok> Hi! I have a quick question about virtual memory size utilization by mongodb under linux: my whole /var/lib/mongodb folder is 72GB and the virtual memory is at 140GB while my server only has 4GB RAM + 4GB swap. Two questions here:
[13:47:27] <shadok> Is there a relation between the size of my data folder and the virtual memory size, and if so why is the virtual memory double the size of the data?
[13:48:24] <shadok> Is this virtual memory consumption a symptom that something is wrong, or is it fine for mongo?
[13:51:25] <shadok> A third question would be: given that I don't have 140GB of memory anyway, wouldn't it be better to limit the overcommit in the kernel, or is there an advantage to having this much virtual memory?
[13:58:25] <kali> shadok: I think the most efficient thing is to read this: http://docs.mongodb.org/manual/faq/storage/ and then we'll try to fill in the remaining blanks :)
[13:58:42] <lwade> hey folks, anyone seen this before? If I start mongod with the --dbpath option on the command line it works fine, as both root and mongod user. If I try to run it using the service init script, it always tells me permission denied on the dbpath dir :S
[13:58:52] <DinMamma> kali: I just had to wait a while. The upgrade just finished.
[14:01:04] <kali> and something with "success" in it :)
[14:01:06] <DinMamma> Phew, crisis averted, I hope.
[14:01:28] <DinMamma> Now to find out how everything works with 2.2 mongods and upgraded configservers.
[14:01:34] <shadok> kali: ok, i just read some of this but some details were missing, I'll come back here as soon as I find which parts were missing, thanks :)
[14:01:37] <DinMamma> I have no intentions to switch to 2.4 just yet. :)
[14:07:33] <shadok> kali: ok, my bad, I didn't understand the first paragraph :/ This sentence in particular: "Memory-mapped files are the critical piece of the storage engine in MongoDB. By using memory mapped files MongoDB can treat the content of its data files as if they were in memory."
[14:08:29] <kali> shadok: yeah, the rest can be confusing if you don't understand that bit :)
[14:09:00] <shadok> kali: first question resolved :) Now i'd like to be reassured that this much virtual memory is "normal" (remember it's two times my (data + indexes + journal))
[14:09:47] <shadok> kali: I'm not that good outside sysadmin tasks and I don't really understand how mmap() works but it's already a bit better
[14:10:21] <kali> shadok: well, many people freak out at the beginning, so don't worry too much
[14:10:36] <shadok> one thing to note is we should have reclaimable space from a previous import and deleted files, so maybe that's related
[14:11:09] <shadok> kali: in fact it's working like a charm but we didn't start the load testing yet
[14:11:30] <kali> shadok: this is a good background paper on mmap (for varnish, but it's the same principle for mongodb) https://www.varnish-cache.org/trac/wiki/ArchitectNotes
[14:13:40] <shadok> kali: excellent, I looked at varnish a bit so it's perfect :)
[14:14:08] <kali> shadok: the important thing to worry about is the size of the working set, but it's quite difficult to evaluate. 2.4 (released two days ago) has a built-in "evaluator" for the working set size
[14:17:50] <shadok> kali: I was wondering too until this morning when a dev announced that 2.4 was out, I just upgraded to try the working set estimator
[14:20:22] <kali> shadok: thing is, this can only give relevant results in production
[14:22:09] <shadok> kali: yep, but I'm lucky: this is an internal documentation app
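The 2.4 estimator kali mentions is exposed through serverStatus; a sketch via pymongo (the workingSet section is 2.4-era and was removed from later servers):

    from pymongo import MongoClient

    ws = MongoClient().admin.command("serverStatus", workingSet=1)["workingSet"]
    print(ws)  # pagesInMemory, computationTimeMicros, overSeconds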
[14:27:20] <bobbytek> Why would MongoDB allow you to connect to an instance before you are authenticated? Or is this just a quirk of the shell?
[14:47:13] <fredix> I have this error with 2.4 : Thu Mar 21 14:44:41.700 [TTLMonitor] ERROR: error processing ttl for db: nodecast_prod 13312 replSet error : logOp() but not primary?
[14:47:34] <fredix> anyway it seems that my replica set works well
[14:57:54] <bobbytek> Nodex: correct. But couldn't this approach lead to DoS type attacks?
[15:15:57] <Nodex> bobbytek : it's up to your app to rate limit
[15:16:26] <rbasak> Hello! How would you like a trivial bugfix patch? github pull request only? Or jira issue? Or both? http://www.mongodb.org/about/contributors/ doesn't seem to provide any guidance on how you accept contributions.
[15:22:58] <ron> Nodex: you ignored my message yesterday ;)
[16:49:45] <nemothekid> What should be done when the cursor timeouts when upgrading to 2.4? (error upgrading config database to v4::could not copy data into new collection :: caused by :: 13127 getMore: cursor didn't exist on server, possible restart or timeout?)
[16:51:16] <kali> nemothekid: is that in the critical section ?
[16:52:00] <nemothekid> Don't think so, it seems to happen 15 minutes after I start the process
[16:52:12] <kali> if not, you can just start again. if you're in the critical section, look at that: http://docs.mongodb.org/manual/release-notes/2.4-upgrade/#resync-after-an-interruption-of-the-critical-section
[16:52:31] <nemothekid> Yes, I have started it multiple times, and it always results in the same error
[16:53:06] <nemothekid> Will the logs tell me if I'm in a critical section?
[16:53:34] <kali> look for "critical" in the upgrading mongos log
[16:53:50] <nemothekid> Then, no, I'm not in a critical section, and it stops 15 minutes after I started. If it helps I have over 15k chunks
[16:56:34] <kali> DinMamma reported it took about an hour for 11k chunks earlier today, but he did not report any timeout problems
[16:59:42] <DinMamma> nemothekid: what are your servers?
[17:00:17] <DinMamma> My mongod-servers are quite beefy, SSD+32g ram.
[17:00:25] <DinMamma> configservers are AWS micro-instances.
[17:00:34] <nemothekid> My mongod servers are the same
[17:00:51] <nemothekid> but my config servers are on HDDs (dual core, 4GB Ram)
[17:01:04] <DinMamma> Hmm, how big are your databases?
[17:02:08] <nemothekid> My database is a little over 2 TB, but if I have this right, my mongod machines shouldn't care? I believe it's the config servers that are timing out (which I think are 500MB)
[17:04:33] <kali> nemothekid: it might be worth trying again and again and again: if the upgrading process skips what has already been converted, it may work after a few iterations
[17:04:52] <kali> nemothekid: docs says, as long as you're not in the critical section, it's fine, so...
[17:05:55] <DinMamma> Sorry that I can't be of any help, but I would try kali's suggestion; as long as it's safe you should be fine.
[17:06:35] <DinMamma> Just put the mongos --upgrade in a loop :)
[17:06:36] <nemothekid> No problem, thanks for the help
[17:06:50] <DinMamma> That's a terrible idea which I think no one should try btw.
[17:07:05] <nemothekid> haha because if it does crash in a critical section...
[17:07:47] <DinMamma> Seems like one can recover from a critical crash though, according to the docs. Nothing I would like to try out for myself I have to say.
[17:10:15] <saml> com.mongodb.MongoException: not talking to master and retries used up
[17:16:49] <khinester> is there a way to replicate a mongodb but only with a small number of documents?
[17:20:16] <jgiorgi> any mongo engine for django 1.5?
[18:25:09] <yfeldblum> i'm assuming the answer is yes, but: should i be able to *migrate mongodb servers* by making a new server, adding it to the replset, stepping down the old server if it was primary, removing it from the replset, and decommissioning the old server?
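That rolling pattern is the standard one. A hedged pymongo sketch of the add step (replSetGetConfig needs MongoDB 3.0+; on older servers read the config from local.system.replset instead; host names are placeholders):

    from pymongo import MongoClient

    client = MongoClient("current-primary:27017")
    cfg = client.admin.command("replSetGetConfig")["config"]
    cfg["version"] += 1
    cfg["members"].append({"_id": max(m["_id"] for m in cfg["members"]) + 1,
                           "host": "new-server:27017"})
    client.admin.command("replSetReconfig", cfg)
    # once the new member reaches SECONDARY (replSetGetStatus), step down the
    # old primary (replSetStepDown), reconfig it out, then decommission it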
[19:35:03] <Brondoman> Found a bug I believe, but not sure where to submit it: when creating a '2dsphere' index on a large collection (10 million docs) the mongod process crashes. This is 2.4 on 64-bit Windows 8
[20:13:40] <durre> hi! I'm looking for a solution to run my integration tests (that are dependent on mongodb) on a cloud based CI-server. so far I've tried EmbedMongo. are there alternatives?
[21:07:29] <EricL> Does mongos automatically distribute queries between primary and secondary of a replSet on each shard?
[21:17:54] <EricL> In other words, if I have a cluster with 2 shards and 2 replicas per shard, how do I get mongos to read from the secondary in the replSet? Or does it do that by default?
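mongos routes reads to the primary by default; secondary reads are opt-in via read preferences. A pymongo 3+ sketch (host and names are placeholders):

    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongos-host:27017")
    db = client.get_database("mydb",
                             read_preference=ReadPreference.SECONDARY_PREFERRED)
    doc = db["mycoll"].find_one()  # may be served by a secondary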
[23:19:21] <freezey> looking to hire a mongoDBA if anyone is interested