PMXBOT Log file Viewer


#mongodb logs for Friday the 6th of December, 2013

[01:54:36] <iwantoski> why can't i do db.collection.findOne({_id: "52a108be95687e0e6184e07f"}, function(err, item) { .. }) ?
[01:54:51] <iwantoski> am I missing something about finding by id?
[02:12:21] <iwantoski> Oh, it's not a string
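iwantoski's 02:12 realization is the whole fix: an auto-generated `_id` is a BSON ObjectId, not a string, so the hex string has to be wrapped before querying. A minimal shell sketch (the collection name `posts` is invented for the example; requires a running mongod):

```javascript
// The hex string alone matches nothing -- _id is stored as an ObjectId.
db.posts.findOne({_id: "52a108be95687e0e6184e07f"});            // null

// Wrapping it in ObjectId() makes the types agree:
db.posts.findOne({_id: ObjectId("52a108be95687e0e6184e07f")});
```

In the Node.js driver the same wrapper is exported by the driver package (`require('mongodb').ObjectID` in drivers of that era).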
[04:08:25] <a|3x> hi
[04:27:02] <a|3x> what could be wrong with my text search attempts? http://pastebin.com/GHkDbfP9 why is it not finding my document in both cases?
[07:59:14] <spicewiesel> hi all
[07:59:55] <spicewiesel> please, what's the best way to copy the data from one sharded setup (mongos+cfgsrv+replicaset) to a similar test environment? (same setup)
[08:45:32] <Seraf> hello everyone. I have a replicaset. All nodes are stuck in STARTUP2 mode ... I know which one should be the primary, how can I force it please ?
[08:45:48] <Seraf> (can't play with data and conf, it's in production :s )
[09:09:33] <liviud> hey
[09:10:35] <liviud> I have several nodes with 2.2.0 mongodb. If I upgrade some to 2.4 and use the 2.4 drivers in a replicaset configuration will the 2.4 drivers be able to talk to the 2.2.0 mongodbs ?
[09:16:09] <kali> liviud: check out the release notes for 2.4. IIRC, there is something to do if you're using sharding or it will break, but replica sets are fine. keep in mind it is better to migrate all nodes ASAP; a partially migrated state is not meant to last long
[09:20:03] <Seraf> hello everyone. I have a replicaset. All nodes are stuck in STARTUP2 mode ... I know which one should be the primary, how can I force it please ?
[09:33:52] <kali> Seraf: is it a brand new replicaset ?
[09:42:22] <liviud> kali: thanks
[09:42:40] <liviud> kali: yeah, that temporary setup is part of the migration movement
[09:42:49] <liviud> we still need it to be up while migrating it
[09:44:28] <kali> liviud: yeah, it's up and running. and there are no known problems with it.
[09:44:47] <liviud> also we seem to encounter a problem with a custom php app that's using the 1.4.3 driver
[09:45:31] <liviud> in the replicaset environment, if one of the nodes times out ( for example by lack of connectivity ) the driver seems to get stuck in the connected state for a large while...
[09:46:37] <liviud> which, even though we used the default timeouts code-wise, should by default be less than half a day I suppose :)
[09:47:20] <liviud> on the other hand if the node goes down ( kill -9 mongodb or stop it ) everything runs just fine
[10:29:33] <spicewiesel> is there a way to reset the local admin pw in a replicaSet?
[11:55:01] <remonvv> \o
[11:56:02] <Zelest> nazi
[12:10:07] <misterdai> Anyone pretty knowledgeable with projections and arrays of arrays? :)
[12:24:15] <Derick> no glaring
[12:26:31] <Derick> oh, adamcom on IRC?! since when does that happen
[12:26:35] <Derick> adamcom: just emailing your boss
[12:26:45] <adamcom> has been a while, tis true
[12:26:50] <adamcom> I lurk a lot
[12:27:04] <Derick> anyway, i've got mail for you!
[12:27:05] <joannac> camper!
[12:27:09] <joannac> >.>
[12:27:19] <Number6> Very voyeuristic
[12:28:45] <joannac> 4 active ops, is that a record?
[12:29:05] <Derick> pretty much
[12:29:12] <Number6> Yes. We should have a party
[12:29:48] <Derick> we are *having* a party
[12:30:41] <adamcom> Derick: replied :)
[12:31:20] <Derick> ta
[12:32:09] <Derick> now just waiting for the other dud
[12:32:10] <Derick> e
[14:54:32] <Industrial> Hi. I have a user collection for my webapp and I have a complete authentication setup now, however the authorization is limited to 'are you logged in or not'. How do I authorize users to perform requests/actions in my webapp based on a structure/collection in mongodb? Anyone done this before?
[14:55:14] <Industrial> I want to be able to specify that users can view all profiles but only edit their own profile, and a moderator or admin could do everything
[15:06:50] <Industrial> I guess for now a role enum/string will do.
[15:10:57] <Nodex> Industrial : I do that with an array of users / groups and intersect the current user with it
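The scheme Nodex describes can be sketched in a few lines of JavaScript: each document carries arrays of users and groups allowed to act on it, and the check intersects those with the current user. All field and collection names here are invented placeholders, not anything from Industrial's app:

```javascript
// Document-level access check: does the user appear in the document's
// allowed-users list, or belong to one of its allowed groups?
function canAccess(doc, user) {
  const allowed = new Set((doc.users || []).concat(doc.groups || []));
  return allowed.has(user.name) || (user.groups || []).some(g => allowed.has(g));
}
```

The same intersection can be pushed into the query itself with `$in`, e.g. `db.profiles.find({groups: {$in: currentUser.groups}})`, so MongoDB only returns documents the user may see.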
[15:18:21] <edugonch> Hello, I'm having some problems with rails 4 and mongoid, I'm getting this errors http://pastebin.com/8RaaSMDH
[15:18:39] <edugonch> this is my yaml file for mongoid http://pastebin.com/gpLBZZ6G
[15:19:21] <edugonch> this is new, I was able to run my app without this error some days before
[15:21:35] <Nodex> doesn't look like a mongo error to me
[15:27:15] <edugonch> when I ping 127.0.0.1:27017 I get unknown host 127.0.0.1:27017
[15:27:28] <edugonch> so I can't access the server also without rails
[15:27:30] <Zelest> remove :27017
[15:27:43] <Zelest> or what do you mean by ping?
[15:27:57] <Nodex> perhaps the mongodb isn't started?
[15:28:02] <edugonch> it is
[15:28:12] <Zelest> mongo 127.0.0.1:27017
[15:28:17] <Zelest> or just "mongo"
[15:28:53] <Zelest> Nodex, o/
[15:30:26] <edugonch> I can enter to the console, so might be a mongoid problem then
[16:15:37] <sflint> question...dropDatabase gives space back to the OS...but remove() on a collections does not give space back...right?
[16:26:36] <kali> yes
[16:37:02] <adamcom> correct - because the drop removes the actual files
[16:37:17] <adamcom> the remove leaves the files in place, deletes the data in them
[16:38:00] <adamcom> if you removed everything then did a db.repairDatabase() that would reclaim the space too, but the drop is going to be a lot quicker
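adamcom's three options side by side, as a shell sketch against a throwaway database (`things` is a placeholder collection; requires a running mongod):

```javascript
db.things.remove({});   // documents deleted, but file space stays allocated
db.repairDatabase();    // rewrites the data files, returning space to the OS (slow)
// versus
db.dropDatabase();      // removes the files outright -- much faster
```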
[17:04:47] <tiller> hi!
[17:05:09] <tiller> Is there a way to "find and remove" with mongodb?
[17:05:24] <tiller> meaning to retrieve what have been removed
[17:06:34] <cheeser> http://docs.mongodb.org/manual/reference/command/findAndModify/
[17:06:41] <cheeser> see the remove flag
[17:07:19] <tiller> oh shi-. I'm sorry cheeser! I used this method a lot for updated, never seen the remove flag =/
[17:07:25] <tiller> & thanks
[17:08:11] <cheeser> any time
[17:13:06] <tiller> huuum, cheeser> it doesn't work with multiple item, does it?
[17:13:25] <tiller> s/item/document
[17:14:24] <tiller> I'd need something what can find&remove a set of documents
[17:14:34] <tiller> that*
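As the thread establishes, findAndModify with `remove: true` returns and deletes exactly one document per call. For a whole set there are two sketches, neither atomic across documents (collection and field names invented for the example):

```javascript
// Fetch the set, then remove by the same query -- simple, but another
// client could write between the two statements:
var victims = db.items.find({status: "expired"}).toArray();
db.items.remove({status: "expired"});

// Or pop matches one at a time; each findAndModify call is atomic on
// its own document and returns null once nothing matches:
var doc;
while ((doc = db.items.findAndModify({query: {status: "expired"}, remove: true})) !== null) {
  // process doc
}
```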
[17:35:14] <insanidade> hi all. I'm using a script to restart and connect to mongodb but I have the following issue: the service restart output reports that mongodb was correctly restarted but any attempt to connect to it results in an output as if mongodb is down.
[17:35:42] <Nodex> perhaps it's down?
[17:35:50] <insanidade> Nodex: it's not.
[17:36:12] <tiller> it might need some time to be ready to rock?
[17:36:31] <insanidade> it works fine if I 'manually' try the same thing the script is doing.
[17:36:48] <insanidade> tiller: shall I use some sort of "sleep" command? how long would you suggest?
[17:36:51] <Nodex> permissions of the scropt?
[17:36:54] <Nodex> script*
[17:37:15] <insanidade> Nodex: http://paste.openstack.org/show/54612/
[17:37:57] <Nodex> who owns the script and who owns mongodb?
[17:38:06] <insanidade> Nodex: that's my script output. it works perfectly if I issue the same commands 'by hand'
[17:38:10] <Nodex> i.e. does your script have permission
[17:38:35] <insanidade> Nodex: yes, it does. I'm pretty sure due to other commands that need root permission being correctly executed.
[17:39:10] <insanidade> Nodex: I use 'sudo' and the password is known to the user running it.
[17:39:19] <insanidade> *know by the user
[17:39:35] <Nodex> then I'm out of ideas, check the mongo log
[17:39:42] <Nodex> might have a lock file in there or somehting
[17:40:22] <insanidade> Nodex: I just tried something to check if I should add some sort of 'wait' before mongodb is up. I think that's the case.
[17:40:47] <insanidade> Nodex: http://paste.openstack.org/show/54614/
[18:12:58] <quickdry21> How can I convert a replica set to a standalone in the config DB? I'm trying to test backups but don't want to have to deploy an entire replica set for each shard.
[18:14:00] <quickdry21> Possibly as easy as doing a config.shards.update()?
[18:28:51] <quickdry21> To answer my own question, yes, you can do a config.shards.update({_id: 'rs*': 'new-host:port'}) for each shard to convert it to a standalone and change the hostname all at once, then copy the configdb path to all three config servers
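The update as pasted isn't valid shell syntax. A form consistent with the config.shards schema (each shard document has an `_id` and a `host` string) would look like this; the shard name and host:port are placeholders, and this is quickdry21's test-environment hack, not a supported procedure:

```javascript
// Run via mongos against the config database.
var cfg = db.getSiblingDB("config");
cfg.shards.update({_id: "rs0"}, {$set: {host: "new-host:27017"}});
```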
[18:37:07] <sflint> anyone here good with aggregation
[18:37:12] <sflint> this is a pretty easy question
[18:37:32] <kali> you'd better ask it
[18:37:56] <sflint> i am trying to do a simple aggregation count
[18:38:00] <sflint> here is the document
[18:38:14] <cheeser> use a pastebin
[18:38:17] <kali> use gist or pastie
[18:38:28] <sflint> k
[18:38:34] <kali> pfiou...
[18:42:23] <sflint> found my own issue putting it in gist...thanks guys
[18:43:30] <cheeser> rubber ducking++
[18:54:35] <a|3x> what could be wrong with my text search attempts? http://pastebin.com/GHkDbfP9 why is it not finding my document in both cases?
[18:56:05] <cheeser> English vs english
[18:56:20] <cheeser> line 4 vs line 12
[19:01:13] <cheeser> a|3x: make sense?
[19:01:54] <a|3x> why would it be using my own field 'language'
[19:02:29] <a|3x> i did not see anything in the docs about this field in documents
[19:03:22] <cheeser> http://docs.mongodb.org/manual/reference/command/text/#dbcmd.text
[19:03:32] <cheeser> " If not specified, the search uses the default language of the index."
[19:03:59] <a|3x> this is in records
[19:04:19] <a|3x> the docs talk about 'language' field in the command
[19:07:13] <a|3x> cheeser, "default language of the index" sounds like an index property, not collection document field
[19:08:09] <cheeser> that field is reflected in the documents in a field by the same name
[19:08:26] <cheeser> iirc, it's an implicit field on documents indexed for text searching.
[19:09:02] <a|3x> where does it say in the docs you can have documents with different language settings?
[19:09:07] <cheeser> http://docs.mongodb.org/manual/tutorial/specify-language-for-text-index/#specify-the-index-language-within-the-document
[19:09:40] <a|3x> hm thanks
[19:09:43] <a|3x> that explains a lot
[19:09:43] <cheeser> np
[19:09:59] <a|3x> what if i want to have 2 fields with different languages in one record?
[19:10:09] <cheeser> i don't think you can
[19:10:12] <cheeser> or would want to
[19:10:20] <a|3x> why not
[19:10:23] <a|3x> seems reasonable
[19:10:44] <cheeser> really? you're going to have portuguese and english in one sentence?
[19:11:27] <a|3x> no, i would have {'english_page': '...','portuguese_page':'....'}
[19:12:01] <cheeser> oh, i see. i'd break those up in to different documents personally
[19:12:30] <a|3x> ya but thats just an example, there could be other usages
[19:12:41] <cheeser> i don't think that's supported.
[19:13:24] <a|3x> also, another problem is what i have actually, i am using 'language' for something else, i was debugging a problem with text search for hours until i found that my own language field is breaking things
[19:13:46] <a|3x> it's not a very good idea to decide to use 'language'
[19:13:53] <a|3x> its a very common word
[19:14:09] <a|3x> at least prepend a couple of _ to avoid collisions
[19:15:09] <a|3x> if you are going to do this, maybe allow db.quotes.ensureIndex( { quote: "text", language_field: 'blah' } )
[19:18:09] <cheeser> you can change the name of that column when you create the index.
[19:18:12] <joannac> a|3x: same page, "language_override" option
[19:18:17] <cheeser> iirc, that link shows how
[19:18:28] <cheeser> anyway, i'm off to the apple store :/
[19:24:28] <a|3x> wow didn't see that
[19:24:31] <a|3x> thanks
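Putting the thread together: the text index reads a per-document `language` field by default, and when that name is already taken (as in a|3x's schema) the `language_override` index option points it elsewhere. A shell sketch using the `quotes` collection from a|3x's own example (the override name `idioma` is just an illustration; requires a running mongod):

```javascript
// Per-document language now lives in "idioma"; a user-defined
// "language" field no longer affects text search.
db.quotes.ensureIndex(
  {quote: "text"},
  {language_override: "idioma"}
);
```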
[19:27:01] <cirwin> Is there any reason to use config passed in as a hash instead of in the URL? I'm using the mongoskin node driver to connect to a mongos and want to specify readPreference=secondaryPreferred
[19:27:17] <cirwin> it seems like just putting it in the URL should work, but I'm not sure how to test it
[19:50:29] <qswz> is it possible to do injections in mongo (like sql injections)?
[19:50:45] <qswz> I mean is it injection-safe?
[19:51:47] <bjori> qswz: if you don't filter your input then nothing is safe
[19:52:09] <bjori> qswz: for example, if the user input is used to structure your fieldnames and criteria then you are in big trouble
[19:52:46] <bjori> same with values. if the user changes the input to an array, then he can create a complex query that does totally different things then you wanted
[19:53:20] <bjori> so if you are doing fieldname $eq user-input, you better be sure user-input is a scalar value, not an array
[19:53:29] <qswz> for that I'll put an authentication layer
[19:53:33] <bjori> depends on your language how exactly this is structured
[19:55:16] <qswz> I'm mapping ajax requests in json, more or less directly to mongo's requests, even the collection [collectionName, 'find' or 'update'.., json, other json]
[19:56:21] <qswz> [collName, accessToken, <- forgot it
[19:58:08] <qswz> then there are access rights on documents, with hard-coded _canRead, _canUpdate fields, to restrict documents to a group
[20:01:57] <bjori> please don't use a user defined variable for picking operation or collection name
[20:02:00] <bjori> please filter it
[20:02:05] <bjori> filter input, escape output
[20:02:09] <bjori> words to live by
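bjori's advice ("filter input, escape output") can be sketched as a minimal guard in front of qswz's request mapper. Everything here is an invented placeholder for illustration: the whitelisted collection names, the allowed operations, and the scalar check that blocks the array-injection case bjori describes at 19:52:

```javascript
// Whitelists -- never derived from user input.
const ALLOWED_COLLECTIONS = new Set(['app_posts', 'app_profiles']);
const ALLOWED_OPS = new Set(['find', 'upsert', 'remove']);

function isScalar(v) {
  return v === null || ['string', 'number', 'boolean'].includes(typeof v);
}

// Throws unless the request names a whitelisted collection and operation
// and every criteria value is a scalar (rejecting {$gt: ...}-style
// operator injection in both keys and values).
function validateRequest(coll, op, criteria) {
  if (!ALLOWED_COLLECTIONS.has(coll)) throw new Error('collection not allowed');
  if (!ALLOWED_OPS.has(op)) throw new Error('operation not allowed');
  for (const key of Object.keys(criteria)) {
    if (key.startsWith('$')) throw new Error('operator in field name');
    if (!isScalar(criteria[key])) throw new Error('non-scalar criteria value');
  }
  return true;
}
```

A whitelist also covers the system-collection problem qswz runs into later (reading `system.users`): unknown collection names are rejected outright rather than merely prefixed.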
[20:08:30] <qswz> bjori see http://jsbin.com/UNIxosA/1/edit :)
[20:08:39] <qswz> (in console)
[20:09:59] <qswz> http://jsbin.com/UNIxosA/2/edit like this rather
[20:10:20] <qswz> only find, upsert and remove commands
[20:10:33] <qswz> but any collectionName, and jsons..
[20:12:07] <bjori> that looks scary
[20:12:15] <qswz> I know but
[20:13:42] <qswz> If I create a json with a valid token, I'm able to put _canUpsert in the document containing my email and others, and only those can upsert it, same goes for delete
[20:14:02] <qswz> I was just scared about hacky things
[20:14:55] <qswz> and to look stupid if that thing isn't safe
[20:22:50] <qswz> the db won't have passwords (openid auth)
[20:27:35] <qswz> just restricting collectionsnames, and it's 'safe' ?
[20:27:47] <qswz> (to avoid hitting system ones)
[20:35:44] <qswz> yeah, because I was able to read system.users :))
[20:36:21] <qswz> now usable collections are prefixed
[20:55:22] <qswz> http://jsbin.com/UNIxosA/3/edit more complete example
[21:53:10] <travisgriggs> i'm changing my document schema, changing to custom _id's. Is there a way to find() all docs that have an _id of type ObjectID? Or another technique to remove all of them?
[21:54:35] <cheeser> http://docs.mongodb.org/manual/reference/operator/query/type/
[21:55:14] <cheeser> db.collection.find({_id:{$type:7}})
[22:21:08] <travisgriggs> thanks cheesed!
[22:21:15] <travisgriggs> damn autocorrect
[22:21:22] <travisgriggs> thanks cheeser
[22:21:40] <cheeser> any time
[22:27:01] <travisgriggs> i just learned that remove() actually does a match, you don't just have to feed it one complete document at a time. i seriously love mongo
[22:27:27] <Joeskyyy> It's magical most times. :D
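The two findings in this thread combine directly: `$type: 7` matches the BSON ObjectId type, and remove() accepts the same query document as find(). A shell sketch (the collection name `things` is a placeholder; requires a running mongod):

```javascript
// Documents still carrying an auto-generated _id:
db.things.find({_id: {$type: 7}});
// remove() takes the same query -- no need to feed it whole documents:
db.things.remove({_id: {$type: 7}});
</imports>
```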
[22:44:18] <travisgriggs> having just switched one of my doc fields to be the _id... I find myself missing the more descriptive name. I wish there was a way at collection creation time to indicate an alternate name for the sacred/special _id field
[22:44:42] <retran> i'm glad there's not
[22:44:54] <retran> it's nice to have that consistency
[22:44:59] <Joeskyyy> +1 ^ i could see that being a big issue
[22:46:03] <jjbean> Hi, would like to offer advise about a making a geoWithin query through mongoid?
[22:46:21] <retran> i can't think of any benefit i've derived from having PK have an arbitrary name, in RDBMSes
[22:46:30] <retran> i just wish they were all named 1 thing
[22:46:35] <retran> "id"
[22:47:06] <retran> it's just one more stupid thing to familiarize myself with in a schema that has nothing to do with business logic
[22:48:01] <iwantoski> When I'm updating a document, it removes any field that isn't present in the update - why is that? Must I get the full object in order to preserve its completeness?
[22:48:22] <jjbean> *meant 'would anyone'. I can successfully make a geoWithin query from mongo shell, but not through mongoid
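For reference, a shell query of the kind jjbean says works directly (the collection, field name, and polygon coordinates are all invented for the sketch; requires a running mongod):

```javascript
// $geoWithin against GeoJSON data; a 2dsphere index speeds this up
// though $geoWithin itself does not strictly require one.
db.places.ensureIndex({loc: "2dsphere"});
db.places.find({
  loc: {
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [[[0, 0], [0, 10], [10, 10], [10, 0], [0, 0]]]
      }
    }
  }
});
```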
[22:49:04] <retran> does this chan support ORMs
[22:49:05] <tystr> huh
[22:49:07] <tystr> TypeError: Cannot read property 'version' of null at src/mongo/shell/utils.js:1008
[22:49:14] <tystr> get that when trying to reconfig my replset
[22:49:46] <retran> jjbean, question. what benefit do you get using an ORM with mongo?
[22:49:59] <retran> does it really speed up development time or anything?
[22:51:21] <jjbean> retran I'm familiar with rails and becoming introduced to mongo
[22:51:34] <retran> what does that have to do with my question?
[22:51:41] <retran> why are you using an ORM
[22:52:09] <Joeskyyy> iwantoski: http://docs.mongodb.org/manual/reference/method/db.collection.update/
[22:52:23] <Joeskyyy> By default it'll do that, if you wanna change just one part of the document, use $set
[22:52:23] <Joeskyyy> http://docs.mongodb.org/manual/reference/operator/update/set/#up._S_set
[22:52:27] <retran> you know that ORMs will remove some capabilities, they may not have every single feature properly abstracted
[22:52:44] <jjbean> I did not see why not to use it. I saw how I could fit the pieces of my web app togeather and decided to use it
[22:52:58] <retran> that's an interesting decision process
[22:53:06] <retran> now you are bearing the fruits of that decision
[22:53:15] <jjbean> haha yep
[22:54:52] <jjbean> I think you might be right about the geoWithin query feature in my case, but thought I would check here as i'm running out of ideas at the moment
[22:54:59] <travisgriggs> consistency is nice. it seems like you just trade "just one more stupid thing to familiarize myself with in a schema that has nothing to do with business logic" from one to the other
[22:55:35] <travisgriggs> you never have to worry about figuring out what the special/sacred field is. it's always the same. otoh, any time it's custom, you have to figure out what it actually is
[22:55:45] <iwantoski> Joeskyyy: Oh wow, I missed that .. I've been looking at that page for a while now :/
[22:55:56] <Joeskyyy> It's kinda hidden, one of the gotchas of mongo
[22:56:02] <travisgriggs> any code you write around it, you have to figure out "so what are they using _id for here?"
[22:56:05] <Joeskyyy> When you THINK update, you think it would update the field you give it haha
[22:56:25] <Joeskyyy> But ultimately it makes sense to update by changing the document by default, then using $set in all other cases.
[22:56:54] <travisgriggs> so i offer my second wish, slightly more educated now :) i wish there were a way to have field aliases. Then you could have _id always be there. but one could alias it where appropriate
[22:57:28] <Joeskyyy> I can live with that
[22:57:29] <Joeskyyy> (:
[22:57:55] <iwantoski> Joeskyyy: Yea sure, I'm not one to critize that solution because I'm new to both mongodb and nosql/non-relational dbs in general - I'm coming from a traditional setting with relational dbs :)
[22:58:08] <Joeskyyy> Welcome to the better side :D
[22:58:18] <iwantoski> Well see ;)
[22:59:54] <iwantoski> Hm, "while I have you on the line", I have a different question I haven't had time to test yet. regarding 2d indexes, which I've only read about - but they seem to be sub-optimal when latency is of importance, any thoughts/input regarding this?
[23:00:10] <retran> it's easy to know what they are using _id for in Mongo
[23:00:13] <retran> it's the PK :|
[23:00:22] <retran> there's no thinking about it
[23:01:33] <iwantoski> travisgriggs: I'm new but _id is just a PK, you can modify that if you need AFAIK?
[23:01:43] <retran> iwantoski, explain why 2nd indexes seem to be 'sub-optimal' to you?
[23:01:46] <Joeskyyy> I've actually not used 2d indexes, someone else may have a good idea on that.
[23:02:10] <retran> and what exactly you mean by "2nd indexes", explain that too
[23:02:26] <retran> oh 2d
[23:02:30] <retran> ha ha
[23:02:30] <iwantoski> retran: The impression I've got is that it's not as fast as - say postgresql's implementation
[23:03:10] <retran> how many things are better than postgres
[23:03:12] <iwantoski> (i'm going to do this using mongo for now anyway to try it out, but I just thought I could get some feedback on the matter)
[23:04:28] <iwantoski> I'm not trying to call out a db war, I'm just asking - because for what I'm building, I'm really concerned about low latency lookup from (2d) geospatial data which is highly concurrent.
[23:04:34] <travisgriggs> so you see one of my documents {'_id': 'F00008', 'last_update': <datetime>, 'ip_address': '10.20.30.40'}, and you know what I'm using for my PK retran?
[23:04:45] <retran> yes, you're using _id
[23:04:55] <iwantoski> travisgriggs: _id is the default unless you change it.