PMXBOT Log file Viewer


#mongodb logs for Monday the 19th of November, 2012

[01:47:47] <mdedetrich> hey I am currently kind of new to mongodb, and I am wondering what is the best way to do a simple where clause
[01:48:19] <mdedetrich> basically I do a query over collection A, and in collection A there is an ID that points to a specific item in collection B
[01:49:13] <mdedetrich> so I do a query, get the results as an array of items in collection A
[01:50:26] <mdedetrich> and I want to do something like db.collectionB{($where: this._id = results}) where results is the list of results from collectionA
[01:50:31] <mdedetrich> is this the idiomatic way of doing it?
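What mdedetrich describes is usually done with `$in` rather than `$where`: collect the referenced IDs from the collection A results, then query collection B with them. A minimal sketch in plain JavaScript (the field name `bId` is made up for illustration):

```javascript
// Results from a query over collection A; each doc carries a reference
// into collection B (here called "bId" -- a hypothetical field name).
const resultsFromA = [
  { _id: 1, bId: "b1" },
  { _id: 2, bId: "b2" },
];

// Gather the referenced IDs...
const ids = resultsFromA.map((doc) => doc.bId);

// ...and build the filter you would pass to db.collectionB.find(...):
const filter = { _id: { $in: ids } };

console.log(JSON.stringify(filter));
// -> {"_id":{"$in":["b1","b2"]}}
```

In the mongo shell this becomes `db.collectionB.find({_id: {$in: [...]}})`, one round trip instead of a `$where` JavaScript scan.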
[01:50:56] <duncan_donuts> Does anyone know anything about Assertion failure b.empty() during mongorestore?
[02:02:13] <duncan_donuts> I've found something that suggests I delete the undefined stuff from the restore options, but then I get different errors.. JSON object size didn't match file size
[04:23:02] <tjmehta> hello anyone use node with mongo here?
[04:23:38] <tjmehta> i am seeing some strange issue with a node-mongo-native findAndModify on mongohq
[04:24:50] <tjmehta> it fails performing the update, it returns the document found (even though i have specified the new flag) but it does not return an error
[04:25:06] <tjmehta> and it only occurs on staging which is using mongohq as the db
[07:24:16] <syskk> are auto generated object IDs random? or are they easy to guess?
[07:24:36] <syskk> I want to get an invitation URL for my web app and was wondering if I could use object IDs for that
[07:27:42] <khurram> hi guys
[07:27:54] <khurram> how do I see the columns inside myCollection?
[07:28:05] <khurram> or the schema of myCollection?
[07:28:15] <khurram> from the ubuntu console?
[08:09:10] <finalspy> Hi everybody, I've just installed 3 mongod configuration servers and also 3 mongoS on 3 fresh new boxes ...
[08:09:39] <finalspy> unfortunately, the mongoS keep failing to start, complaining that the mongo conf servers are not in sync
[08:13:21] <finalspy> I've verified that my servers were synchronized with ntpdate (also added to crontab), I've tried to empty my /data/db/mongoConf folders... and restart everything... but that doesn't work
[08:13:38] <finalspy> Any ideas?
[08:17:36] <phira> finalspy: I guess this is a really stupid question, but are they all the same version?
[08:30:28] <kali> finalspy: do you have the exact error message? because it could simply mean they do not agree on what the config state is
[08:37:20] <palanglung> hello, can anyone tell me if we can use logic in reduce, like "if value more than zero then result++"?
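palanglung's question largely answers itself: a map-reduce reduce function is ordinary JavaScript, so conditionals are fine. A sketch (the counting logic is an illustration, not a production reduce):

```javascript
// A reduce function is plain JavaScript, so branching is allowed.
// Counting values greater than zero:
function reduceCount(key, values) {
  var result = 0;
  values.forEach(function (v) {
    if (v > 0) {
      result++;
    }
  });
  return result;
}

console.log(reduceCount("k", [3, -1, 0, 7])); // -> 2
```

Caveat: in real map-reduce the server may feed reduce's own output back in as a value (re-reduce), so a production reduce must be written to stay correct under that; plain sums usually are, this counting illustration is not.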
[08:44:58] <khurram> hello
[08:45:46] <khurram> there is a collection in my db called projects, no data is inside; how can I view the columns or fields it has?
[08:45:49] <[AD]Turbo> hola
[08:46:15] <Gargoyle> khurram: It doesn't have columns or fields
[08:46:32] <khurram> Gargoyle: then how do I insert the data?
[08:47:14] <Gargoyle> khurram: The short version is db.projects.insert({name: value, name: value})
[08:47:31] <Gargoyle> khurram: But it sounds like you need to do some reading of the docs.
[08:47:36] <khurram> hmm so it has no schema
[08:47:39] <finalspy> phira: yes they are :)
[08:47:50] <khurram> yep, you're right, I'm very new to mongo
[08:48:14] <Gargoyle> khurram: Give this a whirl for a start:- http://try.mongodb.org/
[08:48:32] <khurram> Gargoyle: thanks
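Since collections have no fixed schema, the usual way to see what one "looks like" is to inspect a sample document, e.g. `db.projects.findOne()` in the mongo shell. In plain JavaScript terms, a document's field names are simply its keys (the sample document below is made up):

```javascript
// Each document carries its own fields; there is no table schema to
// query. Given a sample document, its "columns" are just its keys:
const sampleDoc = { _id: 1, name: "demo", owner: "khurram" }; // hypothetical doc
const fields = Object.keys(sampleDoc);
console.log(fields.join(","));
// -> _id,name,owner
```

Note this only shows the fields of that one document; other documents in the same collection may have different fields.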
[08:49:04] <finalspy> phira: oh wait this time message changes .. it says no config server reachable
[08:49:22] <phira> I'm sorry, we've reached the limit of my mongo knowledge :/
[08:49:29] <finalspy> i'll investigate on that since this is a new message and get back to you later
[08:49:43] <finalspy> thanks anyway
[08:54:14] <finalspy> phira: strange ... it first says "creating new connection to:myserver:port" then "BackgroundJob starting connect BG" then "connected connection!", this 3 times (once for each conf server), then it ends up with "ERROR: config servers not in sync! no config servers reachable" and "configServer connection startup check failed"
[08:55:39] <finalspy> phira: I guess this is a login issue since the auth is activated on those servers !!!
[08:56:02] <phira> seems plausible
[08:56:13] <phira> we found auth a bit fiddly to set up initially.
[09:11:06] <Mmike> Can I have more than one slave in plain master-slave setup? (no replica sets, etc?)
[09:14:20] <mids> Mmike: yes, but why don't you use replica sets?
[09:15:03] <mids> replica sets are a functional superset of master/slave, and newer, more robust code
[09:16:50] <syskk> I removed an index from a field but still getting dups errors
[09:17:27] <syskk> Error: MongoError: E11000 duplicate key error index: dev.invitations.$requestID_1 dup key: { : null }
[09:18:25] <mids> syskk: check if there are other indexes? db.collection.getIndexes()
[09:19:02] <Mmike> mids, it's political :/
[09:19:23] <Mmike> mids, actually, dunno why there is a slave there in the first place...
[09:19:46] <mids> Mmike: cool, show 'them' the official MongoDB page that warns you not to use them :) http://www.mongodb.org/display/DOCS/Master+Slave
[09:20:18] <Mmike> i think someone set this up to have no downtime when dump/import is running (to clean up disk space)
[09:21:16] <Mmike> I now have a 300+GB datadir that shrinks to ca. 30GB when dump/import is performed. So I guess I'll do dump/import on the slave first, make it 'catch up' with the master, then promote that slave to a master, redo dump/import on the old master, and then promote it to master again
[09:22:21] <syskk> actually I'm using Mongoose, and just removed the unique: true field
[09:22:25] <syskk> but it seems it didn't update the db
[09:22:30] <syskk> how do i remove an index manually?
[09:23:42] <mids> syskk: db.collection.dropIndex(name)
[09:25:23] <syskk> mids: that worked, thanks
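The sequence mids suggested, as it would look in the mongo shell; the collection and index name below follow syskk's E11000 error message (`dev.invitations.$requestID_1`):

```javascript
// List the indexes on the collection -- the "name" field of each entry
// is what dropIndex() expects:
db.invitations.getIndexes()

// Drop the stale unique index by name (name taken from the error above):
db.invitations.dropIndex("requestID_1")
```

This matters with Mongoose because removing `unique: true` from a schema only stops Mongoose from *creating* the index; an index that already exists on the server stays until dropped by hand.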
[09:41:52] <chutsu> Hello I was wondering how does one randomly shard their data?
[09:42:13] <chutsu> Sorry, I meant pre-splitting manually
[09:42:25] <chutsu> and randomly
[10:42:03] <khurram> is there findAll() in mongodb
[10:43:40] <algernon> there's find(), and you can iterate
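algernon's point in shell form: `find()` with no filter matches every document, which is the closest thing to a `findAll()`. A sketch (collection name is made up):

```javascript
// find() with no arguments returns a cursor over every document:
db.mycollection.find().forEach(function (doc) {
  printjson(doc);
});

// Or pull the whole result set into an array at once (fine for small
// collections, memory-hungry for large ones):
var all = db.mycollection.find().toArray();
```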
[12:31:34] <diegoviola> hi
[12:33:57] <diegoviola> what's the mongodb equivalent of 'BETWEEN' and 'AND' in SQL
[12:34:09] <diegoviola> "The BETWEEN operator is used in a WHERE clause to select a range of data between two values."
[12:35:32] <Gargoyle> diegoviola: Yes. Look up the $gt and $lt operators. There is also an $and operator, but it is the default when passing multiple items as search params, and is only needed for specific situations.
[12:37:23] <diegoviola> thanks
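Gargoyle's answer in concrete form: an SQL `BETWEEN` becomes `$gte`/`$lte` bounds on one field, and putting both bounds in a single object is an implicit AND. A sketch with made-up sample data:

```javascript
// SQL: SELECT ... WHERE price BETWEEN 10 AND 20
// MongoDB: both bounds in one object form an implicit AND:
const filter = { price: { $gte: 10, $lte: 20 } };

// Applying the same predicate by hand to some sample docs:
const docs = [{ price: 5 }, { price: 15 }, { price: 25 }];
const inRange = docs.filter(
  (d) => d.price >= filter.price.$gte && d.price <= filter.price.$lte
);
console.log(inRange.length); // -> 1
```

In the mongo shell the equivalent query is `db.items.find({price: {$gte: 10, $lte: 20}})`.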
[13:18:50] <danielhunt> replica set question - i want to run a replica set with 2 nodes, with no arbiters, and have one as a permanent primary, and the other as a secondary that can never be promoted to primary. am i correct in saying that a combination of votes/priority will allow this?
[13:19:49] <Gargoyle> danielhunt: I don't think so!
[13:20:30] <Gargoyle> As far as I know, you need an odd number.
[13:21:17] <Gargoyle> If your intent is to have a lesser or remote machine for a backup, then I suppose the best way would be to run a second process on the primary to act as the 3rd arbiter.
[13:21:47] <Gargoyle> this way, if the backup link fails, the primary will remain primary.
[13:22:05] <Gargoyle> And the secondary won't promote itself.
[13:22:07] <danielhunt> that was my intent alright
[13:22:35] <danielhunt> my understanding was that hidden=1, priority=0, votes=0 on the secondary would prevent it from ever becoming primary
[13:22:42] <danielhunt> and the primary would be happy if the secondary fell over
[13:23:36] <Gargoyle> danielhunt: I've not dug that deep. If the secondary has no voting rights, then it could work.
[13:30:20] <danielhunt> Gargoyle: yeah, i'm just trying to find out why all the docs on mongodb ignore the 'standard' approach of 1 master, and 1 secondary for backup purposes. there doesn't seem (to me) to be any valid reason to *require* a 3rd server running as arbiter, when the voting values can be manipulated into allowing 2 servers to proceed
[13:31:48] <Gargoyle> danielhunt: It's not designed as a master/slave for backup purposes
[13:32:04] <Gargoyle> danielhunt: It's a fault tolerant cluster.
[13:32:08] <danielhunt> true, but there's no reason to ignore the usecase
[13:32:09] <danielhunt> :)
[13:32:21] <Gargoyle> danielhunt: Do you know what splitbrain is?
[13:32:40] <danielhunt> nope
[13:32:44] <danielhunt> never heard of it
[13:33:02] <danielhunt> oh wait - multi-primary?
[13:34:13] <Gargoyle> kind of. If your two servers can't see each other, but they can see for example, your internet gateway, how would they decide who was primary as they would both think they were working fine.
[13:34:29] <danielhunt> ya i know about that issue alright
[13:34:46] <danielhunt> that's actually exactly why i'm asking these questions. i want to tell my secondary that it can never, ever become primary
[13:34:51] <danielhunt> :)
[13:35:03] <danielhunt> and similarly, that my primary should always, always be primary
[13:35:40] <Gargoyle> I think you are on the right lines. You can deffo give the secondary a pri of 0 to never become primary.
[13:35:57] <danielhunt> the only downside to modifying the votes values for the 2 servers, that i can see, is that if i add more nodes into the replica set in the future, i may forget that the voting values are no longer 'normal', and have unexpected outcomes in the event of outages
[13:36:26] <Gargoyle> But I think you need a 3rd mongod process to make up the odd vote. That process can run on your master machine (no need for a third physical machine)
[13:37:15] <danielhunt> well, that's precisely why i'd up the primary's voting value from 1 to 2. but i don't know how that will behave, because every single reference to votes in the docs, refer to: 'DO NOT CHANGE THIS VALUE'. there isn't an explanation as to *why* not to change them, anywhere :(
[13:38:03] <Gargoyle> probably because someone with more experience that you or I in this area knows better! :)
[13:38:13] <danielhunt> i have no doubt about that ;)
[13:38:20] <danielhunt> still, that's no reason to ignore the details :p
[13:38:31] <Gargoyle> The arbiter process won't use much resource as it's not handling data or client connections.
[13:38:59] <wereHamster> danielhunt: the reason for three servers is that with two there can be no majority if one is down.
[13:39:19] <danielhunt> wereHamster: yup. (that's why i'm looking at up'ing the voting power of the primary)
[13:39:23] <wereHamster> I believe there must be a majority of servers, not only votes
[13:39:28] <danielhunt> orly
[13:39:39] <danielhunt> my understanding was that it was the majority of votes, not servers
[13:39:47] <wereHamster> either ask 10gen support or rtfs.
[13:40:05] <danielhunt> i'm rtf'ing everything at the moment ;)
[13:40:07] <wereHamster> oh, there is also the google group
[13:41:17] <danielhunt> clearly, the absolute best approach is to have an arbiter running on the same box as the primary. but this is only 'best' because it's documented
[13:41:26] <danielhunt> cheers Gargoyle/wereHamster
[13:41:30] <danielhunt> i appreciate the banter
[13:42:16] <Gargoyle> danielhunt: messing with vote numbers is doc'd… DONT! ;)
[13:42:36] <danielhunt> Gargoyle: hah! i'll obey docs like that only when they also document *why* not :p
[13:42:48] <danielhunt> otherwise, it's just an annoying big/little brother, telling me what to do
[13:42:56] <danielhunt> and no one ever, ever listens to them
[13:42:58] <danielhunt> :)
[13:49:53] <danielhunt> i've posted to the google group - hopefully someone will be able to shed some light on it
[13:49:57] <danielhunt> otherwise, arbiters ahoy!
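The layout Gargoyle and danielhunt converge on, a data-bearing primary, a backup secondary that can never be elected, and an arbiter process on the primary's box to keep the voter count odd, can be sketched as a replica set config. Hostnames and ports below are hypothetical, and this is a sketch rather than a tested configuration:

```javascript
// Hypothetical two-machine replica set: the arbiter runs as a second
// mongod process on the primary's machine, so if the link to the
// backup box fails, primary + arbiter still form a majority.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "primary-box:27017" },
    // priority 0 + hidden: holds data for backup, never becomes
    // primary, invisible to clients:
    { _id: 1, host: "backup-box:27017", priority: 0, hidden: true },
    // Arbiter: votes only, stores no data:
    { _id: 2, host: "primary-box:27018", arbiterOnly: true },
  ],
});
```

This avoids touching the `votes` values at all, which is exactly what the docs warn against.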
[14:53:23] <fommil> hi all – I'd like to enforce a uniqueness constraint on my collection, which is essentially a triple of fields. What are the options other than restructuring my document to have a field which is a BasicDBObject of these 3 key/value pairs?
[14:53:43] <eka> fommil: an index?
[14:54:03] <fommil> eka: the index only works for individual fields – I need one that works for a triple
[14:54:37] <eka> fommil: no... you can do an index from many fields
[14:54:53] <fommil> eka: I'll look closer at the Javadocs of that API – because that would be awesome
[14:55:34] <eka> fommil: I'm looking at MongoDB api... for java you should go to the java mongodb channel ;) but I presume that java api follow mongodb api standard
[14:55:42] <eka> fommil: look at ensureIndex
[14:55:58] <eka> fommil: http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/#db.collection.ensureIndex
[14:56:05] <fommil> eka: that's the one I'm using. I guess I need to create a custom DBObject instead of just a Simple key/value
[14:56:25] <fommil> eka: ok, cool – this seems to solve the problem. Thanks!
[14:56:43] <fommil> eka: (I'd abstracted away the raw API and forgot the details! :-) )
[14:56:48] <eka> fommil: ensureIndex({foo:1, bar:1}, unique=true) or something like that
[14:57:04] <eka> :)
[14:57:41] <eka> fommil: oh they are options soo ensureIndex(...., {unique:true})
[15:07:11] <fommil> if I build a unique index like {id: 1, other: 1} – does that mean that "id" and "other" have an index individually? (e.g. id/other is the uniqueness, but looking up by "id" is really fast)
[15:07:33] <fommil> eka: sorry, forgot to include you there
[15:08:11] <eka> fommil: no... no individual index
[15:08:27] <eka> fommil: you have to create it
[15:08:38] <fommil> so I need to do a unique index for {id, other} AND a non-unique index {id} AND {other}
[15:08:47] <eka> fommil: right
[15:09:01] <fommil> eka: ok, cool – verbose, but once it's done it's done :-)
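In shell form, the indexes fommil ends up with look like this (`ensureIndex` was the name of the call in this era; newer servers call it `createIndex`). One refinement to the exchange above: a compound index also serves queries on its *prefix*, so `{id: 1, other: 1}` already covers lookups by `id` alone, and only `other` needs its own index:

```javascript
// Uniqueness over the (id, other) pair:
db.mycollection.ensureIndex({ id: 1, other: 1 }, { unique: true });

// The compound index above already serves queries on its prefix {id: 1},
// so a separate index on "id" alone would be redundant. Only "other"
// on its own needs an extra (non-unique) index:
db.mycollection.ensureIndex({ other: 1 });
```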
[16:03:11] <ejsmith> I'm hoping to get a quick answer to an issue I'm having... I have a string _id on my documents that looks like this: "201207/-06:00/1ecd0826e447ad1e78877ab2" and "201211/-06:00/1ecd0826e447ad1e78877ab2"... I am trying to do a range query to select all the documents between those two ids.
[16:03:25] <ejsmith> but it's returning docs whose ids are not within the range
[16:30:36] <solussd> is it possible to match on the value of a $project earlier in the pipeline using the aggregation framework? e.g. {$project: {blah: "$something.value"}} followed by {$match: {"otherthing.value": "$blah"}}?
[17:00:20] <underguiz> i'm trying to create a backup using mongodump directly to a tarball but it's not working… seems like tar is expecting data to be fed to it but mongodump isn't feeding it
[17:01:10] <underguiz> I'm trying to do it running mongodump -o - | tar -zvf backup.tar.gz -
[17:01:40] <underguiz> anyone here is able to accomplish what I'm trying to do?
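mongodump of this era has no stdout/archive mode (`-o -` does not stream the dump), which is why nothing ever reaches tar; the pipe as written cannot work. A sketch of the two-step approach instead (paths are made up):

```shell
# Dump to a directory first, then archive that directory:
mongodump -o /tmp/mongo-backup
tar -czvf backup.tar.gz -C /tmp mongo-backup
rm -rf /tmp/mongo-backup
```

Note the original command was also missing tar's `c` (create) flag.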
[17:05:01] <nledon> Has anyone here actually tested sharding? I need to do it for a POC, I've looked into setting --chunkSize low but getShardVersion() shows nothing has happened. What steps would I need to take to ensure everything is as it should be?
[17:24:24] <tracker1> Looked around, and trying to find an aggregation framework example to return only the first item in a property with an array... ex: {listing_id:string,vehicle:{images:[...]}} I want the listing_id and the first image in vehicle.images
[17:24:54] <tracker1> or null if no images
[17:42:30] <tracker1> http://stackoverflow.com/questions/13459195/getting-only-the-first-item-for-an-array-property-in-mongodb
[18:18:09] <sirious> anyone have experience running a sharded mongo cluster using blades? if so how is the performance?
[18:44:43] <Jezzer12> Trying to modify a field in a document that is embedded in an array, which in turn is in a document that is in an array - is this even possible? My find query is matching the document..
[18:44:57] <Jezzer12> db.racing.find( {"races":{$elemMatch:{"id":"434200", "runners.id":"1610394"}}} )
[18:46:30] <Jezzer12> But my update query:
[18:46:38] <Jezzer12> db.racing.update( {"races":{$elemMatch:{"id":"434200", "runners.id":"1610394"}}}, {$set:{"races.$.runners.$.name":"John Doe"}} )
[18:47:14] <Jezzer12> is returning: "can't append to array using string field name [$]"
[18:47:46] <Jezzer12> Anyone able to offer any advice?
[18:54:56] <solussd> is it possible to match on the value of a $project earlier in the pipeline using the aggregation framework? e.g. {$project: {blah: "$something.value"}} followed by {$match: {"otherthing.value": "$blah"}}?
[18:55:22] <eka> solussd: yes
[18:56:18] <eka> solussd: but your match wouldn't use any index
[18:56:35] <eka> solussd: for match to use indexes it should be the first in the pipeline
[18:57:12] <eka> solussd: maybe you can have match, project, match ... with the first one you shrink the dataset
[19:19:44] <nledon> Trying to force sharding of data for a POC, set --chunkSize 1, not getting results I expected. Uploading a 1.1mb image expecting to see it on both shards. Am I doing this correctly? If so, how can I confirm my data has been partitioned?
[19:22:51] <nledon> More information on my issue is available here, thank you: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/-nQcAPGrjMo%5B1-25%5D
[19:27:52] <solussd> eka: thanks-- I tried this query and it does not return anything, though substituting the expected value of the project virtual field does.
[19:28:52] <solussd> in other words {$match: {"otherthing.value": "the_actual_value"}} works but not {$match: {"otherthing.value": "$blah"}}
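What solussd is hitting: in a `$match` stage the right-hand side is a literal value, so `"$blah"` is matched as the literal string, not as a field reference. One way around it is to compute the comparison as a projected boolean and match on that; a sketch (collection and field names follow the question, the first `$match` filter is made up):

```javascript
db.coll.aggregate([
  // A selective $match first, per eka's advice, so an index can be used
  // (the "status" field here is hypothetical):
  { $match: { status: "active" } },
  // Compute the field-to-field comparison with an aggregation
  // expression, since $match itself only takes literals:
  { $project: { matches: { $eq: ["$otherthing.value", "$something.value"] } } },
  // Then match on the computed boolean:
  { $match: { matches: true } },
]);
```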
[19:30:48] <Podpalacz> hi. I have a simple technical question; I couldn't find an answer on google. How does mongo handle requests? A queue, like mysql, or something else?
[20:35:43] <nledon> Does anyone here have experience with Sharding? If you have a sharded collection "images" and have 2 shards, what is the expected behaviour when you upload 500 1.1mb images to "images"?
[20:41:23] <kchodorow> nledon: it should split them between the shards
[20:42:34] <nledon> @kchodorow: Thank you! So this is not what is happening in my case. Would you mind looking at my sh.status()?
[20:44:14] <nledon> @kchodorow, If you're willing, you can find it here: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/-nQcAPGrjMo%5B1-25%5D
[20:45:53] <kchodorow> nledon: are you connected to mongos when you're uploading the images?
[20:46:08] <kchodorow> i.e., are you uploading the images through mongos?
[20:47:16] <nledon> Yes. 192.168.2.202:27018 is my Mongo-S's location. And it's set that way in my application.
[20:49:52] <kchodorow> nledon: are you using gridfs?
[20:50:11] <nledon> Yes
[20:50:15] <nledon> Java
[20:50:38] <kchodorow> alright, images.ns isn't a typical name for a gridfs collection, can you do "show collections" in the shell?
[20:51:39] <nledon> "show collections" via mongos returns nothing.
[20:52:04] <kchodorow> are you using the images db?
[20:52:40] <nledon> Yes. images db on shard dunnReplSetCharli
[20:53:09] <nledon> WHoops
[20:53:57] <nledon> Collections are: photo.files | photo.chunks | system.indexes
[20:54:14] <kchodorow> okay
[20:54:30] <kchodorow> you're going to need to shard photo.chunks to split the image data
[20:54:50] <kchodorow> you can shard photo.files to shard the metadata, but it's probably pretty small, up to you
[20:55:00] <nledon> Alright. Let me do that quickly.
[20:56:04] <nledon> "collectionsharded" : "photo.chunks", "ok" : 1 }
[20:56:05] <kchodorow> http://stackoverflow.com/questions/5344606/sharding-gridfs-on-mongodb
[20:56:27] <kchodorow> er, what are you using as shard key?
[20:56:57] <nledon> Uh oh... filename? Use shell memory xD
[20:57:29] <kchodorow> well, chunks doesn't have a filename field
[20:57:42] <kchodorow> so that isn't going to distribute
[20:57:55] <kchodorow> (because everything has the same value for the shard key: null)
[20:57:58] <nledon> Understood. How can I modify?
[20:58:47] <kchodorow> reload the data :( http://docs.mongodb.org/manual/faq/sharding/#faq-change-shard-key
[20:58:56] <nledon> Oh brother
[20:59:04] <kchodorow> sorry
[20:59:13] <nledon> Oh don't be, glad to have your help
[20:59:14] <nledon> xD
[20:59:29] <kchodorow> np
[20:59:41] <nledon> is files_id a valid field in .chunks?
[21:00:00] <nledon> According to your link it looks like it.
[21:00:22] <McSorley> Trying to modify a field in a document that is embedded in an array, which is in fact a document that is embedded in an array. My find query is matching the desired document to modify.. db.racing.find( {"races":{$elemMatch:{"id":"434200", "runners.id":"1610394"}}} )
[21:00:34] <McSorley> But my update query: db.racing.update( {"races":{$elemMatch:{"id":"434200", "runners.id":"1610394"}}}, {$set:{"races.$.runners.$.name":"John Doe"}} )
[21:00:34] <McSorley> is returning: "can't append to array using string field name [$]"
[21:00:38] <kchodorow> nledon: yes
[21:00:56] <McSorley> Is this even do-able with Mongo, Anyone able to offer any advice?
[21:01:00] <kchodorow> nledon: you can do a findOne() on .chunks to see what they look like
[21:02:01] <kchodorow> McSorley: no, not possible
[21:02:35] <McSorley> kchodorow: really? :-(
[21:02:46] <kchodorow> you might want to structure races.434200.1610394 or something
[21:03:14] <McSorley> kchodorow, back to the schema board perhaps!
[21:03:53] <McSorley> kchodorow: thanks for clearing that up anyways, is this something that we'll be likely seeing in the future?
[21:12:34] <kchodorow> McSorley: yes, but not for a while
[21:21:17] <McSorley> kchodorow: just out of curiosity.. the issue with my query above is because it would have to make use of the positional operator twice which is not currently supported right, trying to understand exactly why this is not yet possible. Thanks
[21:25:10] <kchodorow> McSorley: like, why it's not supported yet?
[21:26:17] <McSorley> yeah, just trying to figure out if this is due to the way I'm using the positional operator etc?
[21:30:16] <kchodorow> no, it's just a big change to the update code
[21:30:37] <kchodorow> which we're actually working on for the next release, to make it easier to add stuff like this
[21:31:02] <McSorley> kchodorow: superb, great to hear. thanks again
[21:31:11] <kchodorow> np
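One way around the limitation kchodorow describes is to flatten the schema so only one array level needs the positional operator, e.g. one document per race instead of an array of races. A sketch (the `races` collection below is a hypothetical restructuring of McSorley's data):

```javascript
// With one document per race, "runners" is the only array, so a single
// positional "$" suffices:
db.races.update(
  { id: "434200", "runners.id": "1610394" },
  { $set: { "runners.$.name": "John Doe" } }
);

// Two "$" placeholders in one path ("races.$.runners.$.name"), as in
// the original nested schema, is what the server rejects.
```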
[21:38:38] <ismell> ello
[21:40:07] <nledon> kchodorow: I'm back. With 500 images again ;p. And: mongos> db.adminCommand( { shardCollection : "photo.chunks" , key: { files_id:1 } } ) | I haven't noticed any migration yet. Is there a way to manually activate the balancer?
[21:41:21] <ismell> are there any plans to add eventmachine support to the mongo ruby drier?
[21:41:22] <ismell> drive
[21:41:28] <ismell> *driver
[21:42:34] <kchodorow> nledon: what does sh.status() say?
[21:44:25] <nledon> databases:
[21:44:25] <nledon> { "_id" : "admin", "partitioned" : false, "primary" : "config" }
[21:44:25] <nledon> { "_id" : "images", "partitioned" : true, "primary" : "dunnReplSetCharli" }
[21:44:25] <nledon> { "_id" : "photo", "partitioned" : true, "primary" : "dunnReplSetCharli" }
[21:44:25] <nledon> photo.chunks chunks:
[21:44:25] <nledon> dunnReplSetCharli 1
[21:44:25] <nledon> { "files_id" : { $minKey : 1 } } -->> { "files_id" : { $maxKey : 1 } } on : dunnReplSetCharli Timestamp(1000, 0)
[21:44:26] <nledon> { "_id" : "test", "partitioned" : false, "primary" : "dunnReplSetCharli" }
[21:44:26] <nledon> { "_id" : "d", "partitioned" : true, "primary" : "dunnReplSetCharli"}
[21:46:12] <kchodorow> nledon: can you pastebin your log from mongos, since sharding the collection?
[21:46:45] <nledon> Sure thing, one moment.
[21:57:42] <nledon> kchodorow: Log: http://pastebin.com/R5QMcv1y
[21:58:47] <kchodorow> nledon: that looks like the config server, not mongos
[22:00:18] <nledon> They're running on the same machine. And might be writing to the same log. Let me see if I can switch that up quickly. FYI, I did see something about the balancer at the end of the log.
[22:04:23] <kchodorow> yeah, the message about the balancer is the config server recording that a mongos is trying to balance chunks (good sign) but none of those log messages are anything a mongos would produce
[22:05:16] <nledon> Righto, got mongos printing to its own file. Would you like me to try sharding again? If so, would just 10 images suffice xD?
[22:05:35] <kchodorow> can you pastebin the mongos logs, first?
[22:05:41] <nledon> Sure
[22:05:48] <kchodorow> oh, it was writing to the same file as the config server?
[22:06:04] <kchodorow> well, let's see what you have
[22:07:52] <Raynos> would this be a good place for node mongodb driver questions
[22:08:43] <nledon> kchodorow: Yup. Writing to the same file as config server, but here is the mongos's log on its own: http://pastebin.com/ewKvjh5N
[22:10:40] <nledon> kchodorow: Looks dead after: Mon Nov 19 16:03:06 [Balancer] config servers and shards contacted successfully
[22:12:40] <kchodorow> nledon: that is the balancer trying to do stuff, is sh.status() still the same?
[22:13:49] <Xerosigma> #mongodb
[22:17:53] <nledon> kchodorow: Would this be of any use? ::: { _id: "balancer", state: 0, ts: ObjectId('50aaa2c550d0e9e9f63bdc5c') } update: { $set: { state: 1, who: "dunn-srv-bravo:27018:1353359681:41:Balancer:18467", process: "dunn-srv-bravo:27018:1353359681:41", when: new Date(1353360075230), why: "doing balance round", ts: ObjectId('50aaa2cb50d0e9e9f63bdc5d') } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros)
[22:17:53] <nledon> w:386 109ms
[22:18:44] <Zelest> I'm confused about mongoose for nodejs.. Why on earth does it use a schema? Isn't that one of the good parts of MongoDB, that it doesn't use schemas?
[22:18:58] <kchodorow> nledon: no, do you have any other images to load? inserts trigger splits/migrates
[22:19:18] <kchodorow> which should have started on its own, but it's tough to know why it didn't without the log
[22:19:58] <nledon> kchodorow: Yes, I could upload many smaller ones instead if that would suffice.
[22:20:41] <kchodorow> as long as it's into that collection it shouldn't really matter
[22:20:51] <nledon> Roger that. I'm on it.
[22:22:35] <nledon> Alright, pasting the log.
[22:24:27] <nledon> kchodorow: New log: http://pastebin.com/kK5Z8Zjm
[22:29:12] <nledon> kchodorow: :/ I don't see any significant difference in the log.
[22:30:40] <kchodorow> nledon: have to go for a bit
[22:31:04] <nledon> Alright. Thank you for your time.
[22:39:19] <nledon> If there is anyone else who knows sharding well and is willing to help: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/-nQcAPGrjMo%5B1-25%5D -Thank you
[22:44:16] <JakePee> is there any reason to automatically connect a new Mongo object if it will connect when queried anyway?
[22:44:25] <JakePee> i'm looking at the php constructor options
[22:49:07] <solussd> is it possible to use a field form the $project stage in an aggregation pipeline as the value for a $match stage? e.g. {$match : {some.key $field_from_project}} ?
[23:09:44] <FrenkyNet> Derick / mstearn: The params for remove and update on this page (in the options description) are reversed: http://www.php.net/manual/en/mongocollection.findandmodify.php and I hate the php.net bug reporter
[23:14:10] <Derick> FrenkyNet: can always put them in our JIRA
[23:14:56] <FrenkyNet> ah yeah, duh… my brain is still recovering from last weekend
[23:16:27] <Derick> too much booze? :-)
[23:17:42] <Derick> FrenkyNet: if you get it in *now* it'll get fixed soon too :-)
[23:17:55] <FrenkyNet> and BOOM! it's there
[23:18:48] <Derick> FrenkyNet: instead of (DOCS) in the prefix, just select the component "Documentation" ;-)
[23:18:51] <Derick> I've fixed that now
[23:19:00] <FrenkyNet> Derick: yeah, quite some. The beer sucked at the place I was at, so went for the Belgians
[23:19:22] <Derick> :-)
[23:19:26] <Derick> Where are you based btw?
[23:20:04] <FrenkyNet> Amsterdam :)
[23:20:49] <FrenkyNet> you in the states or the UK?
[23:22:58] <Derick> London
[23:23:13] <Derick> most of the time... I travel a lot too
[23:23:51] <FrenkyNet> sounds like you're having a good time then :) Amsterdam is nice, but getting to travel beats a lot of places