PMXBOT Log file Viewer


#mongodb logs for Tuesday the 15th of October, 2013

[00:29:07] <IrishGringo> what is wrong with this...
[00:29:09] <IrishGringo> db.products.update({_id : ObjectId("507d95d5719dbef170f15c00")},{$set:{ limits.sms.over_rate:0.01}})
[00:29:41] <IrishGringo> if I wanted to update a specific field a couple layers deep?
[00:34:45] <joannac> anything with a dot in it needs to be in double quotes
[00:36:23] <redsand> IrishGringo: nothing except your field with set needs to be in ""
[00:36:48] <IrishGringo> huhhhh????
[00:37:21] <redsand> yep
[00:37:56] <IrishGringo> db.products.update({_id : ObjectId("507d95d5719dbef170f15c00")},{$set:{ "limits.sms.over_rate":0.01}})
[00:38:27] <IrishGringo> ?
[00:38:54] <joannac> Yes. Try that
[00:39:02] <IrishGringo> ahhh... yea... worked...
[00:41:15] <IrishGringo> when doing a count... how can I find all the docs with a specific field?
[00:43:03] <decaf> find?
[00:43:14] <decaf> even I know this one
[00:44:11] <IrishGringo> yea... I know.. find... but the parameter is that a specific field exists... I am trying to figure out the parameter
[00:44:20] <IrishGringo> $exists... or something like that
[00:46:47] <decaf> console commands work also with drivers?
[00:47:49] <decaf> cause I didn't read that part yet :)
[00:48:46] <decaf> find returns an iterator, which has a nice forEach() method
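
As an aside for anyone reading later, a minimal mongo shell sketch of the $exists query IrishGringo was reaching for, reusing the products collection and field from earlier in the log:

    // count documents where the nested field is present
    db.products.count({ "limits.sms.over_rate": { $exists: true } })

    // find() returns a cursor, which has the forEach() decaf mentions
    db.products.find({ "limits.sms.over_rate": { $exists: true } }).forEach(function (doc) {
        printjson(doc._id);
    });
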
[02:00:58] <Frosh> Is there something to check if the last command ran successfully?
[02:04:16] <cheeser> Frosh: http://docs.mongodb.org/manual/reference/command/getLastError/
[02:04:23] <joannac> getLastError...what cheeser said
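
For reference, a quick shell sketch of checking the outcome of the last write with the linked getLastError command (the test collection is just a placeholder):

    db.test.insert({ x: 1 })
    db.runCommand({ getLastError: 1 })   // or the shell helper db.getLastErrorObj()
    // "err" is null when the last operation on this connection succeeded
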
[02:55:50] <chrisbelfield> lo all
[02:56:35] <joannac> Hi
[02:57:15] <chrisbelfield> Can someone help me with converting a mongo query into php mongocollection?
[07:46:57] <k_sze[work]> How do I stop mongos?
[07:47:09] <k_sze[work]> (ad hoc mongos, not daemon)
[07:49:15] <k_sze[work]> I started mongos from the command line, but how do I stop it?
[07:50:57] <k_sze[work]> like this: mongos --configdb cfg0.example.net:27019,cfg1.example.net:27019,cfg2.example.net:27019
[07:51:13] <k_sze[work]> if I press Ctrl-C, mongos won't quit, it seems.
[07:51:19] <k_sze[work]> so how do I stop it?
[07:51:22] <k_sze[work]> (properly)
[07:52:59] <k_sze[work]> I'm trying to tear down a test sharded cluster. I figure the first things to stop are the mongos routers, right? But I can't find in the documentation how to stop mongos properly.
[08:01:35] <Nodex> ./etc/init.d/mongos stop?
[08:02:24] <Zelest> not an option!
[08:03:47] <Nodex> dunno what that means :P
[08:04:15] <Kim^J> k_sze[work]: Ctrl+D ?
[08:05:21] <Nodex> ps ax | grep mongos | some fancy awk to kill the pid
[08:07:02] <Kim^J> In your terminal with mongos open, Ctrl+Z, kill %1, fg
[08:07:03] <Kim^J> Done!
[08:07:36] <k_sze[work]> Kim^J: well, *sometimes* Ctrl-C is sufficient to kill mongos
[08:07:52] <k_sze[work]> 1 out of the 3 mongos instances could be killed by Ctrl-C (SIGINT)
[08:08:00] <Kim^J> k_sze[work]: So? That doesn't help you right now does it? :P
[08:08:01] <k_sze[work]> The other two refused to
[08:08:02] <liquid-silence> how can one get a string back from mongo instead of the object id byte array
[08:08:11] <k_sze[work]> and then SIGTERM won't kill it either
[08:08:15] <k_sze[work]> I had to use SIGKILL
[08:10:44] <k_sze[work]> Doesn't it bother you that there seems to be no official way to quit mongos?
[08:10:57] <k_sze[work]> Ctrl-D is supposed to send SIGQUIT?
[08:12:08] <k_sze[work]> Or mongos expects an EOF from stdin as a signal to quit?
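
One clean way to stop a mongos, assuming you can still connect to it with the shell, is the shutdown command against its admin database; a sketch, not necessarily the only supported route:

    // in a mongo shell connected to the mongos (not a mongod)
    db.getSiblingDB("admin").shutdownServer()   // issues the shutdown command on the admin db

Failing that, a plain SIGTERM to the mongos process is the usual fallback.
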
[08:17:40] <stefancrs> morning
[08:22:04] <liquid-silence> morning
[08:24:02] <stefancrs> given that I have an array of embedded documents, like so: http://hastebin.com/qegotoqile.sm
[08:24:16] <stefancrs> is it possible to match documents that have a certain price for a given country?
[08:24:52] <stefancrs> or do I have to $unwind to do that? :)
[08:27:04] <stefancrs> what is this $elemMatch thing, hm
[08:32:24] <stefancrs> seems to do the trick, now if they only didn't use an ODM for the models in this project... :)
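
For anyone following along, a sketch of the $elemMatch query stefancrs seems to have ended up with; the prices array and its field names are invented here, since the hastebin paste isn't preserved:

    // match documents whose embedded price entry for a given country is below a threshold
    db.products.find({
        prices: { $elemMatch: { country: "SE", price: { $lte: 9.99 } } }
    })
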
[09:32:11] <Mohyt> "When removing a shard, the balancer migrates all chunks from the shard to other shards. After migrating all data and updating the meta data, you can safely remove the shard."
[09:32:45] <Mohyt> Is that mean, before shutting down any shard , first I have to igrate the chunks ?
[09:32:53] <Mohyt> err.. migrate
[09:34:36] <Nomikos> Mohyt: I've no clue but this page describes the process http://docs.mongodb.org/manual/tutorial/remove-shards-from-cluster/ maybe that helps?
[09:35:33] <Nomikos> ah, you were looking at the intro
[10:36:53] <Mohyt> I want to deploy a shard cluster
[10:37:07] <Mohyt> A good setup guide for that ?
[10:40:36] <joannac> http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/
[10:57:22] <Mohyt> Minimum servers required to set up a shard cluster?
[10:57:42] <Derick> in theory, 4
[10:58:38] <Derick> Or I suppose you can do it with 1, but that's not really helping then
[10:58:55] <joannac> GM Derick :)
[10:59:02] <Derick> almost afternoon!
[10:59:09] <joannac> 1 config, 1 mongos, 2 mongoDs in a repl set
[10:59:13] <Goopyo> Why can't a 2 member replica set choose a master if its priority is lower?
[10:59:29] <joannac> Goopyo: ...does not compute. What?
[10:59:33] <Derick> joannac: you can have a 1 member repl set
[11:00:13] <joannac> Wait, that's wrong
[11:00:14] <Goopyo> a 2 member rs cant elect a primary if communication breaks between them
[11:00:24] <joannac> 2 mongoDs not in a repl set, so you have 2 shards
[11:00:42] <joannac> Derick: in my defense I'm really tired :(
[11:00:50] <joannac> Goopyo: correct. neither has majority
[11:01:41] <Goopyo> right, why doesn't the one with higher priority become primary and the secondary fall off
[11:04:56] <Derick> Goopyo: don't you want to change "votes" instead of "priority" instead?
[11:05:05] <Number6> DON'T CHANGE VOTES
[11:05:20] <Derick> heh, I agree - but that does do what he wants :P
[11:05:35] <Number6> Adding an arbiter is far better than messing with votes
[11:06:41] <Number6> Changing votes should ONLY ever be used to get a RS back to production during a mass outage where the majority of nodes are down
[11:06:54] <decaf> how would you store 35000 small words? no searching required, just add and remove.
[11:07:06] <Number6> It is NOT designed to be used in the way Goopyo wants
[11:07:07] <joannac> Number6: Surely you could force-reconfig?
[11:07:33] <Number6> joannac: Yes, you can do that as well. IIRC vote manipulation came before that
[11:08:13] <joannac> Number6: Wait, how can you manipulate votes if you don't have majority?
[11:08:49] <joannac> You won't have a primary and hence can't reconfig without forcing it?
[11:09:11] <Number6> joannac: There was a magic way
[11:11:04] <Derick> Number6: so what are votes for then?
[11:11:35] <joannac> Votes should really only be 0 or 1
[11:11:46] <Number6> Derick: Elections.
[11:12:04] <Number6> Votes should really be 1, for all nodes.
[11:12:09] <Derick> Number6: I know that - but if you say that you shouldn't change it, then what's the point of having it? :)
[11:12:22] <Number6> If you have an even number of votes, add an arbiter to break any ties
[11:13:56] <joannac> Derick: you can set it to 0 for non voting members
[11:14:21] <joannac> Derick: the problem is, making it >1 can have weird side effects that people don't think through
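
To illustrate joannac's point, a shell sketch of the usual alternative to giving any member extra votes: make a member non-voting instead (the member index and intent are hypothetical):

    cfg = rs.conf()
    cfg.members[2].votes = 0      // this member no longer takes part in elections
    cfg.members[2].priority = 0   // and should not be electable either
    rs.reconfig(cfg)
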
[11:19:45] <Mohyt> So it is 1 mongos, 1 config and 2 mongod instances in a rs
[11:20:31] <Derick> + arbiter
[11:20:44] <Derick> but that can run on the "config server" node too
[11:20:57] <Derick> mongos typically runs on the machine that also runs the app/web server
[11:21:02] <Derick> at least, that's what I do
[11:21:12] <Derick> as you can use Unix domain sockets instead of TCP/IP
[11:22:22] <joannac> Derick: Surely you wouldn't want the 2 nodes in rs, you'd want them both to be standalone? Else you only have 1 shard and then, why shard at all?
[11:22:35] <Mohyt> Actually I just want to see how sharding works in mongo
[11:23:08] <Derick> joannac: yeah, sure...
[11:23:19] <Derick> Mohyt: if you're just playing, you can put it all on one machine (or even in a VM)
[11:23:23] <Mohyt> So I'd like to push a bulk of data and then see the redistribution
[11:23:25] <Derick> don't do that in production though :)
[11:23:38] <Mohyt> I am doing on VM
[11:23:58] <Mohyt> launching 4 vm for this purpose
[11:24:05] <Derick> that works too
[11:24:15] <Derick> it doesn't really matter in that case
[11:24:43] <Mohyt> I think it matters for this one https://jira.mongodb.org/browse/SERVER-9332
[11:24:57] <Mohyt> will help a broader view of things
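
A rough sketch of the minimal test setup being discussed (1 config server, 1 mongos, 2 shards); host names, ports and paths are placeholders, and this is only for experimenting, not production:

    // processes started outside the shell, e.g.:
    //   mongod --configsvr --dbpath /data/cfg0 --port 27019
    //   mongos --configdb cfg0.example.net:27019
    //   mongod --shardsvr --dbpath /data/sh0 --port 27018   (on host sh0)
    //   mongod --shardsvr --dbpath /data/sh1 --port 27018   (on host sh1)

    // then, in a mongo shell connected to the mongos:
    sh.addShard("sh0.example.net:27018")
    sh.addShard("sh1.example.net:27018")
    sh.enableSharding("test")
    sh.shardCollection("test.things", { _id: 1 })
    sh.status()
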
[11:39:40] <Nomikos> Does MongoDB have any kind of cache you need to clear if you want to find out which of two (types of) query is faster?
[11:41:08] <joannac> Erm, on the contrary. I think you want to get all of your data into memory so you don't have to wait for it to be paged in.
[11:42:09] <Derick> only "cache" I can think of that might influence benchmarking otherwise is the "explain cache"
[11:42:18] <Derick> MongoDB remembers which query plan works best
[11:42:34] <Derick> you can just "fix" that by running a query more than one time and removing the first and last results usually.
[11:42:40] <Derick> and of course, what joannac says
[11:42:43] <Nomikos> I mean, in MySQL you have to add a flag that tells it not to cache the results (or use the cache). I'm wondering if it's faster for certain search to store the values I'm searching on in an array field, or in separate fields
[11:43:08] <Derick> MongoDB doesn't have a query cache
[11:43:14] <Nomikos> then I wanted to try it out, and was wondering if there was some thing I needed to clear in between searches
[11:43:17] <Nomikos> ok
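
A small sketch of the kind of side-by-side test Nomikos could run in the shell; since there is no query cache, repeating each query a few times mainly just warms the data into memory (collection and field names are made up):

    for (var i = 0; i < 5; i++) {
        var t = Date.now();
        db.things.find({ tags: "red" }).itcount();   // array-field variant
        print("array field: " + (Date.now() - t) + " ms");
    }
    db.things.find({ tags: "red" }).explain()        // check which plan/index is used
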
[11:52:55] <clarkk> does anyone here use mongoose/node? The #mongoosejs channel is dead. I hope people won't mind me posting my question here
[11:53:14] <clarkk> I get this error. Can anyone give me some suggestions, please? "500 TypeError: Object function model() { Model.apply(this, arguments); } has no method 'mapReduce'" http://pastebin.ca/2466901
[11:56:31] <joannac> Erm, that class doesn't have a mapReduce method
[11:58:37] <joannac> http://stackoverflow.com/questions/8843264/mongoose-mapreduce says mongoose.connection.db.executeDbCommand({ mapreduce: 'schemaname', .....})
[12:00:05] <joannac> oops, sorry, not schemaname, collection_name
[12:05:59] <clarkk> joannac: it would be helpful if the docs actually told you how to define "User" in the example http://mongoosejs.com/docs/api.html#model_Model.mapReduce
[12:06:26] <clarkk> to me it looks like mapReduce is available on the model - Model.mapReduce(o, callback)
[12:11:12] <joannac> clarkk: Sure, and I agree, that's a bit deceptive. File a big report (or see if there is one already)
[12:11:22] <joannac> bug report*
[12:14:56] <stefuNz> Hi. I'm using mongodb for some time now and i was using it for only a few queries per minute with a replicaset of 3 nodes. Since yesterday, i'm using it for a few thousand queries per minute and some nodes sometimes get stuck and don't accept new connections until i kill them and restart them. What am i doing wrong?
[12:15:11] <stefuNz> Machine Ubuntu 12.04, current mongodb version
[12:15:15] <stefuNz> enough ram.
[12:18:32] <clarkk> joannac: this seems like a much more concise interface.. http://stackoverflow.com/a/18836261 but again, I have no idea how db or foo are defined
[12:19:09] <Nomikos> I've a collection with a field categories which is an array, but searching using c.find({categories: {$type: 3}}) yields 0, as does $type 4. $type 2 (string) finds them all. Why is this?
[12:19:39] <Nomikos> the items in that field are indeed strings, but ..
[12:19:47] <Nomikos> they sit in an array type thing
[12:21:11] <joannac> stefuNz: check ulimits and mongod logs
[12:22:15] <stefuNz> joannac: oh, right… didn't check the logs .. oh man :D ulimits are set to 65000 in the init-script (ulimit -n 65000) -- isn't that enough?
[12:22:32] <joannac> clarkk: that looks like it's the mongo shell. which means db is whatever db you're using, and foo is the collection
[12:24:53] <joannac> Nomikos: http://docs.mongodb.org/manual/reference/operator/query/type/
[12:24:58] <joannac> If the field holds an array, the $type operator performs the type check against the array elements and not the field.
[12:25:26] <Nomikos> aha, thanks
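
A tiny illustration of that behaviour, with a made-up document:

    db.c.insert({ categories: ["books", "music"] })
    db.c.find({ categories: { $type: 2 } }).count()   // 1: string, checked against the array elements
    db.c.find({ categories: { $type: 4 } }).count()   // 0: the array type of the field itself is not checked
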
[12:27:58] <joannac> stefuNz: check -u. also http://docs.mongodb.org/manual/reference/ulimit/
[12:28:11] <joannac> I'm out... you guys can help each other :)
[12:28:27] <stefuNz> joannac: Thank you very much … I think i just didn't close the connections :D
[12:28:32] <stefuNz> .. in my application
[12:28:34] <clarkk> joannac: it turns out I was using mongoose 2.7 rather than 3.6.20. The latter has the method Model.mapReduce :)
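
For completeness, a sketch of what Model.mapReduce looks like on mongoose 3.x; the model and fields here are invented, see the docs page clarkk linked above:

    var o = {};
    o.map = function () { emit(this.author, 1); };
    o.reduce = function (key, values) { return values.length; };

    Post.mapReduce(o, function (err, results) {
      if (err) throw err;
      console.log(results);
    });
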
[14:30:44] <pithagorians> hello all. getting such error in log files https://gist.github.com/anonymous/6992427
[14:31:05] <pithagorians> mongo version - 2.4.1
[14:31:19] <pithagorians> what could be the issue?
[14:32:29] <cheeser> a bad file it would seem.
[14:33:12] <ron> there are no bad files. only bad people.
[14:33:18] <pithagorians> :D
[14:33:49] <cheeser> take ron, for example. please. just take him anywhere.
[14:34:02] <ron> you love me.
[14:34:50] <cheeser> if it gets cold enough.
[14:35:15] <pithagorians> ok guys, glad that you love each other so much
[14:35:20] <pithagorians> but seriously
[14:35:28] <pithagorians> what can be the issue?
[14:35:34] <cheeser> a bad file.
[14:35:37] <pithagorians> is it a "bad file" ?
[14:35:46] <cheeser> back up and run repair like it suggests.
[14:49:56] <pithagorians> doing mongod --repair --repairpath /var/lib/mongodb
[14:50:20] <pithagorians> getting https://gist.github.com/anonymous/6992773
[14:50:36] <pithagorians> looks like it doesn
[14:50:39] <pithagorians> 't work
[14:50:42] <pithagorians> the repair
[18:12:35] <dragenesis> I'm using the java driver. After I call dbCollection.insert or some other operation, is there ever a time when the WriteResult will be null?
[18:15:07] <cheeser> dragenesis: shouldn't be, no.
[18:17:51] <dragenesis> cheeser, is there a way it can happen? My code is just: WriteResult result = collection.insert(document)
[18:18:15] <dragenesis> with document being the DBObject and collection is the DBCollection.
[18:45:51] <testing22_> so i see the section on structuring documents for pre-aggregated reporting, but now i'm wondering how i would structure the same thing to support multiple counters instead of just "hits" in the example. i'd like to have essentially several types of hits (or just generic counters) per timeframe. reference: http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/ any ideas how i would best go about this?
[18:50:51] <testing22_> the best way i could think of was to include an embedded object within each time bucket that contains the counters such as.. hourly: { "0" : { "foo" : x, "bar" : y } }. would this be feasible for aggregation? it seems so, but i'd like to make sure that is the best way
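
For what it's worth, a sketch of the corresponding upsert with several named counters per hour bucket, following the pattern in the linked use case (collection, _id scheme and counter names are illustrative):

    db.stats.daily.update(
        { _id: "site-a/2013-10-15" },
        { $inc: { "hourly.0.foo": 1, "hourly.0.bar": 1 } },
        { upsert: true }
    )
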
[18:52:56] <tkeith> I'm trying to do an update with a $setOnInsert, but it's failing. My code is: db.users.update({'username': user}, {'$set': {'token': token}, '$setOnInsert': {'username': user}}, upsert=True)
[18:53:02] <tkeith> and the error is: OperationFailure: Invalid modifier specified $setOnInsert
[18:53:28] <Goopyo> tkeith: I think what you want is a upsert
[18:54:09] <Goopyo> not an setOninsert,
[18:55:17] <tkeith> Goopyo: I'm setting upsert=True, but if it's inserting a new document I want it to set the username as well
[18:55:30] <Goopyo> it will, upserts insert the query params
[18:55:49] <Goopyo> basically if the document didnt exist it will have username and token fields
[18:55:54] <Goopyo> if it does the token field will update
[18:56:14] <Goopyo> thats what an upsert is, you dont need setoninsert anymore
[18:57:48] <tkeith> Goopyo: Ok, but what if I also want to add other fields only on insert, without changing for an already existing document? For example, I might want to set balance to 0 on insert, but leave it as-is if it already exists.
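
For the record, this is exactly the case $setOnInsert covers (it needs MongoDB 2.4+, which turns out to be tkeith's actual problem later in the log); a shell sketch with placeholder values:

    db.users.update(
        { username: "alice" },                  // copied into the document when the upsert inserts
        { $set: { token: "abc123" },            // always updated
          $setOnInsert: { balance: 0 } },       // only applied when the upsert inserts
        { upsert: true }
    )
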
[18:59:34] <cheeser> dragenesis: i can't think of a case where it would be null, no. i'd think that'd be a bug if that were ever the case.
[19:32:59] <elux> hey just wondering but whats the difference between $orderby and sort() ?
[21:06:36] <Pinkamena_D> has anyone ever had any luck directly converting mongodb collections into tabular data?
[21:07:52] <Pinkamena_D> I am making a program and the old (access 2 db) program showed "reports" in the form of sql tables that could be printed
[21:08:24] <Pinkamena_D> I am wondering what the most user friendly way to display the bson would be without having to write a different function for each collection
[21:08:25] <cheeser> having once tried that very approach, i can tell you that documents don't often map/render well as tables.
[21:09:37] <Pinkamena_D> so you think that the best way would be to make custom functions, rather than trying to design some kind of "folding table"
[21:10:05] <Pinkamena_D> (where you can expand a column into more columns, etc. )
[21:10:27] <cheeser> i just rendered the documents as json and let the window scroll vertically as necessary
[21:11:15] <Pinkamena_D> I have to make it at least a little bit more user friendly than that...but thanks for the suggestion.
[21:13:51] <mst1228> hi. can someone help me dial in a query using the $nin operator?
[21:13:54] <mst1228> gist: https://gist.github.com/thorsonmscott/2e9d87b53528614e4b41
[21:14:14] <mst1228> first line explains what I want to accomplish
[21:14:36] <cheeser> "$taggers.uid" perhaps?
[21:15:59] <mst1228> thanks, that does the trick
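
The rough shape of that suggestion (collection and values are guesses since the gist isn't preserved; in a plain find the path is normally written without the leading $):

    // exclude documents already tagged by the given user, assuming an embedded
    // taggers array of { uid: ... } subdocuments
    db.photos.find({ "taggers.uid": { $nin: [ "user123" ] } })
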
[21:45:06] <tkeith> I'm trying to do the following pymongo query, and it's failing: db.users.update({'username': user}, {'$set': {'token': token, 'username': user}, '$setOnInsert': {'groups': []}}, upsert=True)
[21:45:13] <tkeith> The error message is: OperationFailure: Invalid modifier specified $setOnInsert
[21:47:23] <tkeith> Nevermind, just solved it -- looks like my mongodb version is too old
[21:47:30] <tripflex> :P
[22:00:15] <tkeith> It looks like setOnInsert can't set the _id... if this is the case, how can I atomically insert a document with a unique username and custom _id?
[22:19:44] <clarkk> is there a way to limit the number of documents retrieved for each match of a property value. For example, if the property is "pet_type" and it contains the value "cat", "dog" or "rabbit", is there any way to ensure that only 5 cat records, 5 dog records and 5 rabbit records are
[22:19:44] <clarkk> returned?
[22:53:25] <fabiobat_> Is there a fuzzy, levenshtein or any similarity match on mongodb?
[23:00:46] <joannac> There's text search http://docs.mongodb.org/manual/core/index-text/
[23:18:21] <bjori> fabiobatalha: no, besides.. you should duplicate the string as a levenshtein and index on it if you need it frequently
[23:19:33] <bjori> wait, no. I'm lying. thats metaphone :)
[23:19:46] <bjori> levenshtein was distance