#mongodb logs for Friday the 13th of November, 2015

[00:13:07] <quuxman> I'm having problems with excessive connections being opened, using pymongo 2.9
[00:13:37] <StephenLynx> you are probably opening a new connection for each request instead of reusing the first connection.
[00:13:49] <quuxman> where 1 server can create 5k connections. It seems pretty clear that the number of connections to the DB is proportional to the rate of queries
[00:14:42] <quuxman> has anyone experienced and fixed this problem with pymongo?
[00:15:02] <quuxman> StephenLynx: there should be a connection pool
[00:15:57] <StephenLynx> yes, but if you explicitly open a new connection, a driver usually opens a new connection.
[00:18:16] <quuxman> I'm using pymongo.MongoClient, which has a max_pool_size argument, defaulted to 100
[00:18:30] <cheeser> do you create a new client with each request?
[00:18:59] <quuxman> cheeser: it appears that's what's happening, which I'm trying to figure out how to fix
[00:20:01] <quuxman> I'm making a call to pymongo.MongoClient for each thread in each server, and the reference should be maintained between requests
[00:21:02] <quuxman> I assumed there would be a connection pool for each server process, that would be shared between threads
[00:21:41] <StephenLynx> only if you keep the reference to it and reuse.
[00:27:51] <quuxman> I'm certain the reference is kept. I've verified that code is only run once per server process
[00:28:46] <quuxman> (the call to MongoClient())
[00:31:21] <kakashiAL> I created a new database with mongodb
[00:31:37] <kakashiAL> what I don't understand is why I get a file that is 1.3GB big
[00:35:20] <quuxman> Mongo allocates growing room
[00:35:32] <quuxman> makes writes faster
[00:49:43] <quuxman> someone on StackOverflow suggests using pymongo's autoreconnect
[00:50:34] <quuxman> any thoughts on how I could log when a connection is created, so I can figure out when it's happening?
[00:51:22] <quuxman> my first thought is to dig up where that happens in pymongo, and just add a print statement to the library source, then point a load test at the server
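One low-tech way to get that logging without editing the library source is to wrap MongoClient.__init__ and record a stack trace each time a client is constructed; a debugging sketch, not production code:

    import logging
    import traceback
    import pymongo

    _orig_init = pymongo.MongoClient.__init__

    def _logged_init(self, *args, **kwargs):
        # Log a short stack trace so the load test reveals which code
        # path is constructing new clients.
        logging.warning("MongoClient created:\n%s",
                        "".join(traceback.format_stack(limit=8)))
        _orig_init(self, *args, **kwargs)

    pymongo.MongoClient.__init__ = _logged_init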
[02:59:52] <quuxman> It looks like there's a connection leak in my application, but only when there are > 10 server processes. It doesn't seem to be dependent on threads at all. Does this make any sense?
[03:03:11] <quuxman> nevermind. Connections stabilize at 2x the number of processes
[10:11:26] <bojanbg> Hi, I need a tip on how to create a mongo group query. I have {"_id": n, "user_id": "100", "txt": ...} and need to create a group where I concatenate all txt fields, grouped by user_id.
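The aggregation framework of that era had no accumulator that concatenates strings across documents, but $push can collect the txt values per user_id for joining client-side. A sketch in pymongo 3.x, with hypothetical database and collection names:

    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["messages"]

    pipeline = [
        # Collect every txt value for each user_id into an array.
        {"$group": {"_id": "$user_id", "txts": {"$push": "$txt"}}},
    ]
    for doc in coll.aggregate(pipeline):
        # Join the collected strings in the application.
        print(doc["_id"], " ".join(doc["txts"]))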
[12:41:26] <walm> I have a question about Bulk API, anyone good on that?
[12:42:55] <walm> I wonder if it will break on a unique index in a collection, or just carry on? Seems there is no option to set how it will handle that.
[12:53:42] <coudenysj> walm: it will error out on bulk write
[12:56:51] <walm> and no way to set it to not do that?
[13:09:09] <testmatico> Hello guys, I've got a question regarding a mongodb setup with replication and sharding
[13:09:47] <testmatico> Today I tried a failover on our setup by turning off one of the replica set nodes
[13:10:24] <testmatico> after that, logins via the mongos instances were getting pretty slow (about 25s)
[13:11:01] <testmatico> on the other hand, only turning off the mongod processes resulted in a correct failover
[13:11:33] <testmatico> is there some sort of timeout I have to configure in case of a network fault?
[13:14:19] <coudenysj> walm: why would you want to do that? just don't use the unique constraint if you don't want it
[13:15:12] <walm> well, I'd like it to just continue on with the rest of the bulk list
[13:16:57] <walm> but I guess I can switch to collection.insert and set writeConcern to w:0
[13:18:11] <coudenysj> walm: If you have a unique index, and you insert duplicate data, you will get errors anyhow
[13:24:06] <walm> yes I know, I'd just like it to continue inserting the ones that are valid in that case.
[13:25:58] <walm> let's say I'm inserting into a huge collection and want to do it in bulk/batches instead of one by one in the driver, so it's fast, similar to how mongorestore works, I guess.
[13:26:43] <walm> and if that task fails and restarts, it should not create duplicates, as there is a unique index on the collection.
[13:29:46] <walm> collection.insert with writeConcern set to w:0 should do the trick, but then I guess I have to handle the batching myself (the Bulk API limits each bulk to 1000 operations)
[13:35:41] <coudenysj> walm: https://docs.mongodb.org/manual/core/bulk-write-operations/#ordered-vs-unordered-operations
[13:38:01] <walm> man I've been reading this page over and over.. and how did I miss this "With an unordered list of operations, MongoDB can execute the operations in parallel. If an error occurs during the processing of one of the write operations, MongoDB will continue to process remaining write operations in the list."
[13:38:34] <walm> and I'm using an unordered list so all good then :)
[13:39:13] <walm> thanks coudenysj !
[13:41:43] <coudenysj> walm: yw
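A sketch of the unordered behavior in pymongo 3.x terms, with hypothetical names: with ordered=False the server continues past duplicate-key failures, and the errors are reported afterwards in a BulkWriteError rather than aborting the batch.

    from pymongo import InsertOne, MongoClient
    from pymongo.errors import BulkWriteError

    coll = MongoClient()["mydb"]["items"]
    ops = [InsertOne({"_id": i}) for i in [1, 2, 2, 3]]  # one deliberate duplicate

    try:
        result = coll.bulk_write(ops, ordered=False)
        print("inserted:", result.inserted_count)
    except BulkWriteError as exc:
        # The valid inserts still went through; the duplicate-key
        # failures are collected here instead of stopping the batch.
        print("inserted:", exc.details["nInserted"])
        print("write errors:", len(exc.details["writeErrors"]))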
[13:42:46] <dddh> are stored procedures planned, or is there already some way to load them?
[14:25:47] <avril14th> Hello, in mongodb, does querying several collections ensure that none can be modified between each collection query? (i.e. are all reads one global blocking operation?)
[14:26:20] <cheeser> "querying several collections?"
[14:26:39] <avril14th> well, you query one collection and also have the relations of what you get back queried, for instance
[14:26:45] <dddh> select for update ?
[14:26:56] <avril14th> select for display
[14:27:47] <cheeser> neither of those are mongo things
[14:28:02] <avril14th> so querying one collection for documents and have the documents relations also queried.
[14:28:21] <StephenLynx> there are no relations on mongo.
[14:29:15] <avril14th> references?
[14:29:29] <cheeser> references, yes. but no RI.
[14:30:33] <avril14th> sorry, my poor choice of words. so, can querying for documents and also for some references of these documents be done in such a way that I know that none could have been updated between queries?
[14:32:14] <StephenLynx> no
[14:32:55] <avril14th> ok, thank you
[14:32:58] <StephenLynx> and even references don't exist on a database level, only on a driver level.
[14:33:08] <StephenLynx> it's just syntactic sugar.
[14:33:20] <avril14th> understood
[14:33:40] <avril14th> I just don't know what I should call the stuff then :)
[14:34:08] <StephenLynx> that's why I said they don't exist.
[14:34:25] <StephenLynx> when I have to refer to them, I just call them "fake relations"
[14:34:40] <avril14th> ok
[14:34:43] <StephenLynx> because they are only used on an application level.
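What that looks like in practice, sketched in pymongo with hypothetical collection and field names: the "reference" is just an _id the application stored, and resolving it is a second, independent query with no server-side guarantee that the other collection was not modified in between.

    from pymongo import MongoClient

    db = MongoClient()["mydb"]

    post = db.posts.find_one({"title": "hello"})
    # Resolving the stored _id is an ordinary second query; the server
    # does not tie the two reads together in any way.
    author = db.authors.find_one({"_id": post["author_id"]})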
[15:53:28] <dgaff> Hey all
[15:54:55] <dgaff> I am upgrading a huge db from 2.6 to 3.0 - my documents all finished importing about 3 hours ago, and it looks like it's moved into the index building phase - based on the log output, it's a bit ambiguous whether it's building all the indexes per document at the same time, or going through and doing it sequentially (i.e., for each index, do all documents, rather than for each document do all indexes)
[15:55:04] <dgaff> given this gist, which is the case? https://gist.github.com/DGaffney/550b90d394358ad82b34
[16:04:03] <dgaff> Or does anyone know where I could go to find out that answer?
[16:04:32] <dgaff> If it's doing all indexes at once per document, then I will be done by end of day, but if not... it'll be a week maybe? ish?
[16:27:23] <m0rose> Anyone see what I'm doing wrong here? db.<coll>.find({ $where : "this.client_inputs._loan-id != this.loan_id" })
[16:28:27] <m0rose> if I extract "this.client_inputs._loan-id != this.loan_id" out into a variable and run .find({ $where : <var> }) it doesn't seem to help -- I get "ReferenceError: id is not defined near 's.loan_id'"
[16:39:42] <m0rose> "this.client_inputs[\"_loan-id\"] != this.loan_id" seems to solve it -- morning coffee is kicking in :)
[18:25:28] <chairmanmow> I'm having a bit of trouble getting some subdocuments to be pulled from a record based on a $lt criterion using a date in the subdocs; I've written up a summary of the problem here, if anyone sees what I'm doing wrong I'd appreciate any tips http://pastebin.com/HmBuSHfR
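The pastebin is gone, but the usual shape of that operation is an update whose $pull condition matches array elements by date; a sketch in pymongo 3.x with hypothetical collection and field names. A common pitfall with this pattern is comparing a BSON date against a string, which silently matches nothing.

    from datetime import datetime
    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["records"]
    cutoff = datetime(2015, 11, 1)

    # Remove every element of the "events" array whose "when" field is
    # older than the cutoff; the $lt value must be a real datetime.
    coll.update_many({}, {"$pull": {"events": {"when": {"$lt": cutoff}}}})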
[18:33:20] <dgaff> One more shot in the dark here: I am upgrading a huge db from 2.6 to 3.0 - my documents all finished importing about 3 hours ago, and it looks like it's moved into the index building phase - based on the log output, it's a bit ambiguous whether it's building all the indexes per document at the same time, or going through and doing it sequentially (i.e., for each index, do all documents, rather than for each document do all indexes)
[18:33:20] <dgaff> Given this gist, which is the case? https://gist.github.com/DGaffney/550b90d394358ad82b34
[19:12:37] <jbarker7> hey everyone
[19:13:29] <jbarker7> Been ripping my hair out on a mongo-connector issue - anyone willing to pair?
[22:01:42] <dbounds> Hi. Having a problem with the following query: https://gist.github.com/dbounds/d37a905d79635b5300f8
[22:02:12] <dbounds> It's giving me an "invalid operator '$avg'" for some reason.
[22:02:35] <dbounds> This is one of several aggregation queries, but the only one I have that uses $avg.
[22:03:04] <Smace> Anyone here using NeDB? It's a JS standalone MongoDB clone. Same API but limited.
[22:03:08] <dbounds> Any assistance would be greatly appreciated!
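The gist is gone, but "invalid operator '$avg'" typically means $avg was used outside an accumulator position; at the time it was only valid as the value of a $group field (later versions also allow it in $project). A sketch of the valid placement in pymongo 3.x, with hypothetical names:

    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["scores"]

    pipeline = [
        # $avg is an accumulator: it belongs inside $group, not as a
        # top-level pipeline stage or query operator.
        {"$group": {"_id": "$player", "avgScore": {"$avg": "$score"}}},
    ]
    for doc in coll.aggregate(pipeline):
        print(doc)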
[22:56:54] <godzirra> Is there a way to do a limit as part of the query inside of db.collection.find({})?
[23:00:41] <StephenLynx> yes
[23:01:00] <godzirra> Can you share what it is? :)
[23:01:06] <StephenLynx> limit
[23:02:55] <godzirra> What am I doing wrong here then? > db.ballots.find({$query: {}, "$orderby": { createdDate: -1 }, limit: 1})
[23:04:09] <StephenLynx> .limit()
[23:04:22] <godzirra> Right. I'm trying to get this to work as part of a mongodump query, so .limit() won't work.
[23:04:34] <godzirra> That's why I was asking if you could do it as part of the query, as opposed to doing it as .limit()
[23:04:47] <godzirra> The mongo docs say $limit should work, but even when I changed the above example to $limit, it didn't work.
[23:11:38] <joannac> godzirra: find the _id of the last value, and put $lt: LAST_ID in the query
[23:12:02] <godzirra> That's the only way?
[23:13:21] <joannac> yes. or limit it by putting the results into another collection, and dump the new collection
[23:13:32] <godzirra> okay. Thanks.
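A sketch of joannac's suggestion, assuming pymongo and hypothetical database and collection names: fetch the _id at the boundary you care about, then hand mongodump a filter built from it via its -q/--query flag.

    from bson.json_util import dumps
    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["ballots"]

    # Find the boundary document (here: newest by createdDate) and build
    # a query mongodump can evaluate, either matching it directly or
    # using $lt to take everything before it.
    boundary = coll.find().sort("createdDate", -1).limit(1)[0]
    print("mongodump -d mydb -c ballots -q '%s'"
          % dumps({"_id": boundary["_id"]}))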
[23:28:08] <godzirra> Can I run a single query using the mongo cli?
[23:37:27] <godzirra> I'm trying with --eval, but it always returns empty. --eval 'db.ballots.find().limit(1)'
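A likely explanation, from known mongo shell behavior rather than anything in the log: --eval prints the value of the expression, and a bare cursor does not render its documents, so the result needs to be materialized and printed explicitly, e.g.:

    mongo mydb --eval 'printjson(db.ballots.find().limit(1).toArray())'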