#mongodb logs for Friday the 13th of December, 2013

[00:01:11] <joannac> umm, no.
[00:01:40] <joannac> What would be the point of having an incorrect index? You might as well just drop it
[00:02:08] <joannac> (but please don't drop it on a replica set just like that, you'll cause problems. do a rolling index drop)
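For reference, joannac's "rolling" approach means dropping the index on one member at a time (each temporarily restarted outside the replica set) rather than issuing a single drop through the primary. The drop itself, in the shell (collection and index names illustrative):

    // Drop an index by name; in a rolling drop, run this against each
    // member while it is restarted standalone, then rejoin it to the set.
    db.mycoll.dropIndex("field_1");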
[00:06:51] <joannac> quuxman: bulk updates like update with upsert:true ?
[00:25:33] <stashspot> why does the server config auto_connect default to false?
[00:25:40] <stashspot> just curious
[03:46:20] <quuxman> joannac: I'm not concerned with insertions
[03:46:56] <quuxman> I'm modifying an attribute with a large index, and would like to be able to allow for the index to be inconsistent during a bulk update in exchange for reasonable performance
[03:47:42] <quuxman> I've encountered this problem with MySQL, and my solution was always to drop the table, recreate it, then run a bulk insert
[03:48:02] <quuxman> that generally ran 10 to 100 times faster than updating each record
[03:52:56] <joannac> quuxman: Sorry, I always do that. I meant multi:true
[03:53:08] <joannac> Somehow in my head multi and upsert are confused
[07:56:06] <bin> morning guys
[07:56:19] <bin> is there any way to search for example after the 10th element, but without skip..
[07:56:36] <bin> i mean exactly start from 10th
[08:03:52] <ghostbar> bin: why without skip?
[08:03:59] <bin> because skip is expensive
[08:04:17] <bin> you always have to read from the beginning
[08:04:24] <bin> and it's CPU consuming
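The workaround bin is hinting at is range-based paging on an indexed field, so each page seeks directly to its start instead of scanning past all the skipped documents. A minimal sketch, assuming a collection named items:

    // First page, ordered by _id (always indexed):
    var page = db.items.find().sort({_id: 1}).limit(10).toArray();
    var last = page[page.length - 1]._id;
    // The next page starts after the last _id seen; no skip, no rescan:
    db.items.find({_id: {$gt: last}}).sort({_id: 1}).limit(10);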
[10:11:48] <naillwind> wih db.collection.update({_id: "id"}, {$set: { fieldName: {} }}) I can add a field to a collection, but how do I create that field with an empty array?
[10:13:30] <naillwind> got it: db.collection.update({_id: "id"}, {$set: { fieldName: [] }})
[12:13:10] <wesley> Hi! command denied: { replSetHeartbeat: "rs0", v: 3, pv: 1, checkEmpty: false, from: "host-a-db20:27017" } Any clue where to look, or how to fix this?
[13:39:16] <roelmonnens> Hello I'm trying to build a notification-system. I want to push new items into a capped collection (is needed for tailable cursor). Then read new stuff with a tailable cursor. This will be used by 100 000 users. The structure of the objects is not clear and can be different with each insert. How do you determine how big the capped collection may be?
[13:39:27] <roelmonnens> Or should I make collections per user?
[13:50:05] <Nodex> you don't want 100,000 collections
[13:50:12] <Nodex> not even sure you can have that many anyway
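For the single capped collection roelmonnens describes, the size is fixed at creation time and has to be estimated as expected document size times the retention window; old entries are overwritten in insertion order. A sketch with illustrative numbers:

    // 100 MB capped collection for the notification feed:
    db.createCollection("notifications", {capped: true, size: 100 * 1024 * 1024});
    // Shell-side tailable cursor (drivers expose an equivalent option):
    var cur = db.notifications.find().addOption(DBQuery.Option.tailable);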
[14:01:46] <Nodex> anyone ever pushed $in to its limit?
[14:04:12] <Nodex> Derick : when a cursor / findOne() returns in php is the _id already cast as a MongoId() or does it have to be recast?
[14:54:15] <hell_razer> Hi, I can't find how to increment a counter only on a successful $addToSet... I'm going to use it for tracking the list length
[15:03:41] <Nodex> {$addToSet:{foo:"bar"},$inc:{some_counter:1}}
[15:14:15] <jordz> If I'm setting up a new sharded collection and I'm sharding by something like "username", I have 2 shards and I'm about to import 120,000 documents with ~20,000 different users, will it split the data over both shards initially or will it all end up on one?
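jordz's question hinges on chunk splitting: an empty sharded collection starts as a single chunk on one shard, so an initial bulk import lands there until the balancer splits and migrates. Pre-splitting avoids that hotspot; a hedged sketch, namespace and split point illustrative:

    // Shard on username, then pre-split so both shards take writes at once:
    sh.shardCollection("mydb.users", {username: 1});
    sh.splitAt("mydb.users", {username: "m"});  // roughly mid-alphabet
    // The balancer then migrates one of the resulting chunks to the other shard.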
[15:14:53] <Fishbowne> what is the best way to interface mongodb to a webserver?
[15:16:19] <Nodex> Fishbowne : via your server side language
[15:17:04] <Fishbowne> is there something like CouchDB's web interface ?
[15:18:48] <Nodex> no, not for writing, read only iirc
[15:18:58] <cheeser> Fishbowne: http://docs.mongodb.org/ecosystem/tools/administration-interfaces/
[15:20:00] <hell_razer> Nodex, {$addToSet:{foo:"bar"},$inc:{some_counter:1}} increments on every $addToSet, but I need it only on success, i.e. if the element was actually added, then increment the counter
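The usual pattern for hell_razer's case is to move the membership test into the query with $ne, so the whole update, $inc included, only applies when the element is absent ($push is then safe in place of $addToSet). A sketch with illustrative names:

    // Matches only when "bar" is NOT already in foo, so the counter
    // is incremented exactly when an element is actually added:
    db.coll.update(
        {_id: someId, foo: {$ne: "bar"}},
        {$push: {foo: "bar"}, $inc: {some_counter: 1}}
    );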
[15:37:34] <zumba_addict> morning folks, is mongodb the same as nosql?
[15:41:01] <nathanielc_> MongoDB is a form of a NoSQL database, there are many flavors of NoSQL. Take a look here: http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis
[15:41:11] <jordz> Schema wise, growing documents can be very bad, but I'm currently trialing a weekly data bucket for stats. Some documents may only be very small whereas others will probably be larger. Will the fact that document sizes can differ be an issue?
[15:42:16] <jordz> Each document contains 7 objects, one for each day, and these can in turn have 1000 objects in each day, or 10; the objects are simply a number and a key
[15:43:38] <jordz> but I'm wondering if this will cause a performance issue
[15:44:01] <jordz> Anyone have any input?
[15:44:45] <nathanielc_> jordz: Not an expert, but my understanding is that it's only documents that change (grow) over time that cause issues
[15:45:41] <nathanielc_> basically mongo monitors the growth rate of documents and adds a few extra bytes of padding on insert. If the documents are not changing then independent of size the padding will be little to none
[15:45:50] <cheeser> jordz: it won't be a problem, no.
[15:46:57] <Derick> Nodex: what do you mean by "recast" ?
[15:47:09] <jordz> Thanks guys! The documents will be varying in sizes, so maybe a week for one specific item will have 4 in each
[15:47:23] <jordz> whereas for another it could easily have 1000 for each of the 7 days in the root document
[15:48:08] <jordz> The documents are updated daily
[15:48:36] <jordz> I did read somewhere that preallocating data could reduce moves
[15:48:52] <jordz> I have yet to see any though
[15:53:20] <Nodex> Derick : cast it back to a MongoId
[15:58:12] <Derick> Nodex: ? huh what? You're not making sense
[16:02:34] <Nodex> lmao, how come? Q. Does the PHP driver cast _ids back to ObjectIds when it returns results from the database
[16:02:46] <Nodex> I can't see how that doesn't make sense
[16:28:18] <banzounet> Hey guys, when I try to save this (it's part of a dump from PHP): https://gist.github.com/BaNzounet/50299dd9179326f533ec I get: "zero-length keys are not allowed, did you use $ with double quotes?" and when I remove the mailchimp structure it doesn't fail. Any idea?
[16:29:45] <Derick> banzounet: you can only store public properties
[16:31:26] <banzounet> Derick: it doesn't fail with _export:protected => NULL, so I guess the mapper I've got does the conversion
[16:32:39] <Derick> banzounet: yes
[16:32:44] <Derick> the error message is a bit odd though
[16:36:02] <banzounet> Yep, and I don't really have much information in my stack trace ;/
[16:43:53] <jordz> What are the potential performance benefits of using daily buckets rather than weekly buckets containing days for stats data?
[16:52:46] <_Nodex> jordz : apart from the obvious data size saving?
[16:53:00] <jordz> Yeah
[16:53:10] <jordz> I'm just having a look at mms after a couple of hours
[16:53:21] <jordz> a lot of documents are being moved
[16:53:34] <jordz> so I want to preallocate the next day's data to speed up the updates
[16:53:38] <_Nodex> what sort of stats are they? - incrementals?
[16:54:50] <jordz> No, they're words, essentially with a day: [ { word: "test", number: 1 } ]
[16:55:01] <jordz> I'll pastebin a better representation
[16:55:05] <jordz> of the real data
[16:57:12] <jordz> Nodex, this is the schema some have thousands in, others have a few so I'm not sure if this is the right schema
[16:57:15] <jordz> http://pastebin.com/nrAPBnwC
[16:58:10] <jordz> I'm wondering if breaking it down into days would be better
[17:03:04] <Nodex> and "stat" is incremental?
[17:04:42] <jordz> No it only ever has one value a day
[17:04:52] <jordz> it could go up or down the next day
[17:06:28] <Nodex> I'm struggling to work out what you're trying to achieve with it
[17:08:10] <jordz> Essentially, every day I get information about those words and what number it is, for 100 or so items
[17:08:32] <jordz> so then I make 100 updates to 100 documents
[17:08:43] <jordz> and insert the word and number for that day in that week
[17:08:58] <jordz> insert = addToSet in this case
[17:09:24] <jordz> I want to keep the updates fast
[17:09:39] <jordz> and if the documents grow I don't want them to be moved
[17:09:55] <jordz> some items have 1000 words associated with them and some have only a few
[17:10:13] <jordz> that's why the document grows,
[17:10:18] <jordz> and I want to stop it moving
[17:10:33] <jordz> Cause that's the only performance bottleneck I can foresee, I think
[17:11:59] <cheeser> look into the powerof2 settings
[17:13:05] <kali> +1 for powerof2
[17:13:40] <jordz> Okay cool, how will that help though?
[17:13:46] <jordz> In terms of allocation, I thought it's for reusing space
[17:13:52] <jordz> more efficiently?
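For reference, the setting cheeser and kali point at is a per-collection flag (MongoDB 2.2+): records are allocated in power-of-2 sizes, so each record has slack for growth and documents that do move leave standard-sized holes that later documents can reuse:

    // Enable power-of-2 allocation on an existing collection:
    db.runCommand({collMod: "stats", usePowerOf2Sizes: true});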
[17:16:09] <jordz> Ahh
[17:16:15] <jordz> I understand
[17:16:40] <jordz> +1 for powerof2
[17:16:44] <jordz> :)
[17:16:49] <Nodex> +2 :P
[17:17:03] <jordz> ^2 ;)
[17:18:56] <Joeskyyy> +4?
[17:19:23] <Nodex> *6
[17:19:33] <Joeskyyy> wat
[17:19:42] <Nodex> + 4 * 6 / n
[17:19:49] <Nodex> = boom
[17:19:55] <jordz> (2 + x)(2^x)
[17:20:05] <jordz> n
[17:20:17] <Joeskyyy> It's still way too early for math
[17:20:22] <jordz> it's 5pm here
[17:20:24] <jordz> home time!
[17:20:27] <Derick> e ^ ( i * pi ) - 1
[17:20:44] <jordz> THanks for all help guys, I'll probably have more questions to ask!
[17:20:50] <jordz> See y'all later
[17:20:53] <jordz> well
[17:20:54] <Nodex> laters
[17:20:55] <jordz> I won't
[17:20:58] <jordz> see you...
[17:21:01] <jordz> but you get
[17:21:03] <jordz> it
[17:21:12] <Nodex> Derick : showing off your skills ? :P
[17:21:27] <Derick> no
[17:22:06] <Joeskyyy> ow my calculus ):
[17:22:10] <Derick> i got it wrong anyway
[17:22:12] <Derick> e ^ ( i * pi ) + 1
[17:22:19] <Derick> http://en.wikipedia.org/wiki/Euler%27s_identity
[17:23:08] <Nodex> my brain aches!
[17:23:22] <Nodex> it's not hydrated with enough alcohol
[17:23:30] <Joeskyyy> Agreed. And at that, lunch time.
[17:23:40] <Nodex> aka, I have blood in my alcohol stream :(
[17:29:12] <Derick> Nodex: you can also turn that around: I have blood in my alcohol stream ;-)
[17:29:55] <Nodex> but it's not a ;-) day when there is more blood than alcohol LOL
[18:43:43] <ppetermann> good day
[18:44:53] <ppetermann> i'm having a slight "i might not be able to do this (anymore) with my documents" problem here,
[18:45:24] <ppetermann> i used to be able to query the db during a map, but since a few versions that's not working anymore - is there any way to enable that?
[18:46:07] <ppetermann> problem is my documents don't have all data that i need during the mapping (and i can't add that information)
[18:47:38] <tomasso> I have a set of polygons on a map, then I query for a point located in another country to test, and I still get results.. :S should that be working fine?
[19:00:25] <bhangm> agg fwk question - Is there any way to $project the first element of an array, other than using $unwind w/ $group and some kind of ordering?
[19:01:29] <retran> bhangm, i don't know what your question means, because those variable names only have meaning to you
[19:01:45] <bhangm> retran: those are not variable names
[19:01:55] <bhangm> they are aggregation framework operators
[19:02:04] <retran> oh
[19:02:07] <retran> agg framework
[19:02:21] <retran> sorry
[19:02:31] <retran> i thought i was watching the php chan
[19:03:25] <bhangm> no worries
[19:07:00] <kali> bhangm: $unwind / $group with $first
[19:08:01] <bhangm> kali: thx, I thought so, was hoping to have some kind of positional extraction sort of like the array.$ syntax but I'll just use $unwind
[19:08:22] <kali> bhangm: have you tried array.0 ?
[19:08:36] <kali> bhangm: i don't think that works but...
[19:09:06] <bhangm> kali: I tried it, but no dice. It just added an empty array for the computed field
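kali's working suggestion, spelled out as a pipeline (collection and field names illustrative):

    // Keep only the first element of "arr" per document:
    db.coll.aggregate([
        {$unwind: "$arr"},
        {$group: {_id: "$_id", first: {$first: "$arr"}}}
    ]);
    // $unwind emits elements in array order, so $first picks the
    // original first element without any extra sort stage.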
[19:27:06] <h_ags> hey guys
[19:27:21] <h_ags> quick question about mongo/mongoose
[19:27:52] <h_ags> for some reason my queries are being put in the collection queue even though mongoose.connection.readyState == 1. Not sure where to go from here on debugging this. Any thoughts?
[21:11:34] <_maes_> hello everyone, could please someone shed some light on what this could be and how to fix this:
[21:11:34] <_maes_> [Balancer] caught exception while doing balance: can't move shard to its current location!
[21:21:05] <qswz> Fishbowne: mongo from javascript: http://jsbin.com/epudowe/1/edit
[23:18:03] <ram9> Hi.. we are trying to change 14k records with a bunch of atomic operations - the changes don't appear to be going through - we've tested our migration on 3 different environments and are seeing this behavior now in production - we've restored the previous release and db and now are trying to find out what could be wrong.
[23:18:29] <ram9> we are running 2.0.4
[23:37:57] <joannac> ram9: Hmm. What did you upgrade to?
[23:38:07] <joannac> ram9: single node, replica set, sharded?
[23:38:53] <ram9> joannac: replica set for backups
[23:39:14] <joannac> ram9: what are the operations you're running, and do they work in 2.0.4 but not in whatever you upgraded to?
[23:39:53] <ram9> they work in 2.0.4 - our migration is purely data.
[23:40:04] <ram9> trying to rename fields
[23:40:09] <ram9> delete some fields
[23:40:14] <ram9> add a field
[23:40:26] <ram9> using atomic operators
[23:40:45] <ram9> via ruby mongoid
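For reference, the kind of migration ram9 describes looks roughly like this from the shell (field names illustrative; Mongoid issues the equivalent operators):

    // 2.0-era positional form: update(query, update, upsert, multi)
    db.records.update(
        {},
        {$rename: {old_field: "new_field"},
         $unset:  {dead_field: 1},
         $set:    {added_field: null}},
        false, true
    );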
[23:43:53] <ram9> i'm going through a dump now - but thought i'd ask in case anyone recognized a pattern
[23:47:14] <ram9> is there a flag to sync writes early in mongo?
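On that last question: 2.0 has no global flag, but writes can be acknowledged or forced to disk per connection via getLastError (what drivers expose as safe mode / write concern). A sketch of the two durability options:

    // Block until the last write on this connection is durable:
    db.runCommand({getLastError: 1, j: true});     // wait for journal commit
    db.runCommand({getLastError: 1, fsync: true}); // flush data files instead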