PMXBOT Log file Viewer


#mongodb logs for Tuesday the 8th of January, 2013

[06:58:20] <jwilliams_> when using mapreduce, is it possible to remove the key in the reduce phase?
[06:59:00] <jwilliams_> for example, function (key, values) { key.remove(); } so that the output collection won't show the key we don't want.
[07:09:41] <jwilliams_> or in finalize function?
[07:31:38] <Kneferilis_> hello
[07:31:56] <Kneferilis_> can you do queries as advanced as with SQL, in mongodb?
[07:34:41] <jwilliams_> there are some advanced queries that can be specified - http://www.mongodb.org/display/DOCS/Advanced+Queries
[07:35:08] <Kneferilis_> ok
[08:36:50] <[AD]Turbo> hola
[08:36:56] <Gargoyle> o/
[08:56:34] <Lujeni> Hello, someone can help me with this error http://pastebin.com/kG76dRVj ? Thx
[09:24:14] <arussel> I have a doc with a unique index (email + name), is it better to use native id and add unique index for (email + name) or use email + ":" + name as primary key ?
[09:24:34] <arussel> or is it the same for mongo ?
[09:35:26] <jwilliams_> i read that mongodb mapreduce should not access the database during the map, reduce, and finalize phases (http://docs.mongodb.org/manual/reference/commands/#mapReduce).
[09:35:39] <jwilliams_> does that mean mapreduce is only for read only operation?
[09:35:51] <ron> arussel: umm, you're asking whether you should add another field to a document just to use that as a key?
[09:38:04] <jwilliams_> and if data manipulation is needed (in e.g. a sharded mongo cluster), would mongo's plain javascript be performant? or what is recommended as an alternative?
[09:44:14] <kali> jwilliams_: mongo javascript is not performant
[09:45:03] <kali> jwilliams_: i would recommend avoiding it in production environments
[09:45:42] <ron> it's kinda weird. you'd expect javascript to be very performant.
[09:45:56] <kali> jwilliams_: it can be helpful in development / debugging situation, though, and can be acceptable for occasional batch processing
[09:46:32] <kali> whats... this smell...
[09:46:34] <kali> i know.
[09:46:36] <kali> troll
[09:47:58] <jwilliams_> kali: got it. thanks for the advice.
[09:48:23] <ron> kali: you love me.
[09:50:37] <arussel> ron: no, I have 2 fields, email and name. (email + name) should be unique. I'm asking if I should use email + ":" + name as id or use the default mongo id and add a unique index on email + name
[09:52:35] <ron> arussel: I see. well, that's one way to solve it, yeah, though it's a matter of preference. I think it may also depend on *how* you're going to use the _id of the document, whether they'll be exposed somehow. if they will, then you may have some privacy issues.
[09:54:43] <arussel> good point, thanks
[09:55:34] <ron> some people here say that as a best practice, the _id should remain the ObjectID. however, I must admit I've yet to hear arguments to explain WHY it is a best practice (again, not saying it isn't, just that I can't explain the reasoning of it).
[10:01:11] <NodeX> _id is always guaranteed to be unique across shards
[10:01:18] <NodeX> that's why ;)
[10:01:36] <arussel> aren't unique indexes the same ?
[10:01:44] <NodeX> no
[10:02:07] <NodeX> ObjectId is more analogous to primary key auto incr in SQL
[10:02:27] <NodeX> but it's on a cluster basis not per table
[10:02:39] <arussel> so if I want unique email + name, I should use it as pk
[10:03:02] <NodeX> as a hash?
[10:03:24] <arussel> new ObjectId(hash(myemail + ":" + myname))
[10:03:34] <NodeX> I wouldn't bother
[10:03:55] <NodeX> I would get the free oid and put a unique index on email and name
[10:04:04] <NodeX> because the chances are you will use them to look other things up
[10:04:31] <arussel> I still keep the email and name field
[10:04:45] <NodeX> with oid you get a free index as it's always indexed which is useful for asc/desc sorting
[10:04:49] <arussel> but having them in the pk allows me a quick lookup and ensures uniqueness
[10:05:01] <NodeX> so does a unique index
[10:05:26] <arussel> if I have the id of the user I'm looking for, which is not the case, but I have mail + name
[10:05:47] <arussel> oops, sorry, misunderstood your last statement
[10:05:48] <NodeX> are you going to be querying on name or email or both in other queries?
[10:06:07] <arussel> mainly getting user based on name and email
[10:06:20] <NodeX> mainly isn't 100%
[10:06:38] <arussel> no, but well over 98%
[10:07:08] <NodeX> my advice is use ObjectId and have a unique index on name + email - this gives the MOST flexibility
[10:30:12] <wereHamster> the argument against putting the email or username into _id is that _id cannot change, while the user might want to change his email/username.
[10:34:52] <ron> that's a very good point.
[10:35:38] <kali> ho yes, usernames and emails are bad ids
[10:37:00] <NodeX> I thought that was implicit so I didn't mention it lol
[10:38:10] <kali> NodeX: that said, i like to use "semantic" ids instead of arbitrary objectids when it's possible
[10:38:33] <ron> blasphemy!
[10:39:50] <NodeX> I am not sure what "semantic" id's are kali!
[10:40:01] <ron> NodeX: that's because you're dumb.
[10:40:04] <NodeX> I don't know big words sorry lol
[10:40:27] <kali> NodeX: ids that contain some information useful for the human who is debugging :)
[10:40:30] <NodeX> for a genious i'm pretty bad at english
[10:40:42] <NodeX> ah, yes good point
[10:41:00] <NodeX> most of my data doesn't really matter about that
[10:41:39] <kali> yeah, i say that as a principle, but the only cases of this i have in the dataset i'm working with are ids from other databases :)
[10:41:54] <kali> so they're not very semantic, and not ObjectIds
[10:42:04] <NodeX> ugh
[10:44:08] <NodeX> ron, how have you managed to keep your job?
[10:44:19] <NodeX> you can barely use a calculator :P
[10:44:21] <ron> NodeX: I pay THEM.
[10:44:23] <ron> :D
[10:44:26] <NodeX> nice
[10:44:35] <NodeX> come work for me then ;)
[10:45:15] <ron> well, considering that for your level I'd pay you maybe a penny a month, sure.
[10:45:41] <NodeX> LOLOL
[10:45:55] <NodeX> Now I can die peacefuly, I have heard it all!
[10:48:39] <ron> NodeX: please. please do.
[10:48:40] <ron> :D
[10:49:47] <aep> hi, when i have { a : { b : { c : 3 } } } how can i query for all documents where a.b.c is greater 3?
[10:50:00] <NodeX> Good new TV series started btw called "Up and under"
[10:50:22] <NodeX> db.foo.find({"a.b.c" : {$gt:3}});
[10:50:31] <aep> NodeX: ah!
[10:50:43] <aep> so it's not { a : { b : { c : { $gt: 3 } } } }
[10:50:43] <kali> NodeX: interesting, what's it about ?
[10:51:11] <NodeX> aep : no
[10:51:45] <NodeX> kali : it's about a gambling addict who moves from vegas to new york and becomes a book keeper for sports betting with some genious kid
[10:51:58] <NodeX> pilot was last friday night
[10:52:29] <NodeX> aep : the shell uses dot notation to reach into objects (same as json)
[10:52:51] <aep> cool, get it
[10:53:02] <aep> i can use the same syntax from C++/C can i?
[10:55:41] <NodeX> I wouldn't know sorry
[10:55:53] <NodeX> check the docs for the drivers
[10:55:57] <aep> yeah, cool
[11:00:14] <ron> NodeX: are you dead yet?
[11:00:45] <NodeX> no sorry
[11:00:52] <NodeX> got a few days left in me yet
[11:02:34] <ron> hmpf.
[11:10:37] <aep> can i access gridfs from the js shell?
[11:12:33] <Zelest> db.fs.files.findOne()
[11:12:56] <Zelest> but it's two collections.. one for the metadata and one for the chunks.
[11:13:53] <aep> hm i was hoping for a simple upload function
[11:14:01] <aep> i guess not
[11:14:42] <NodeX> mongo-files
[11:14:47] <NodeX> if I recall correctly
[11:15:10] <aep> ah nice!
[11:51:14] <jtopper> how is it that the docs at http://docs.mongodb.org/manual/reference/method/db.addUser/ don't cover the return values for that method?
[11:55:05] <Derick> jtopper: i think you'd find that for most commands right now
[11:55:12] <Derick> I think I have filed an issue for that, let me check
[11:57:16] <Derick> nope, only for RPs - jtopper can you file a documentation issue at https://jira.mongodb.org/browse/DOCS ?
[11:57:38] <jtopper> sure.
[11:58:23] <Derick> i'll upvote it too
[11:59:20] <jtopper> an "Improvement" ticket?
[12:15:27] <aep> files in gridfs are identified by path? rather then by object
[12:15:43] <aep> mongofiles now lists me 6 files with the same path, no idea how to get them
[12:16:11] <aep> GridFS.insert returns metadata. should i do something with it instead of putting it into my document?
[12:16:22] <aep> i thought that was some kind of reference to the file
[12:17:06] <Derick> they will all have a different objectid
[12:17:27] <aep> yeah, thats what i expected
[12:17:38] <aep> but mongofiles has no way of using that metadata i got to get the file
[12:18:35] <Derick> heh, indeed
[12:22:21] <lucian> i'm having trouble with mongod starting up on ubuntu 12.10, with both ubuntu and 10gen repos. service mongodb start gives me a pid, but there's no actual process running. if i start it again, it prints another pid, but same problem
[12:22:44] <NodeX> lucian : check the error log
[12:23:53] <lucian> NodeX: ah, leftover lock file. thanks :)
[12:24:25] <bika__> Does anyone known how to deepcopy a FixedOffset object in python?
[12:25:33] <NodeX> try in #python
[12:25:43] <rbika> I will, thx
[12:35:32] <sebastian> hi guys, db.copyDatabase can have an IP address as hostname?
[12:36:42] <NodeX> probably seeing as mongo is not bound to dns ;)
[12:38:00] <sebastian> ok, trying
[12:39:07] <NodeX> https://jira.mongodb.org/browse/SERVER-1014 <----- very annoying
[12:55:27] <sebastian> it worked, sweet
[13:32:26] <ron> anyone from bulgaria here?
[13:37:00] <Gargoyle> NodeX: Just out of curiosity, what is your use case for wanting to remove a non unique element?
[13:38:54] <NodeX> I was pulling an element from an array and it leaves a null (using $unset)
[13:39:01] <NodeX> and I needed to pull the null
[13:39:53] <Gargoyle> What was it an array of?
[13:40:43] <NodeX> blog post comments
[13:40:54] <NodeX> $pull does not work with the positional operator :(
[14:23:04] <sebastian> hey guys I'm trying to copy a database and it's saying "exception: E11000 duplicate key error index: .... blah" but there are no real duplicates in there
[14:23:07] <sebastian> what's going on?
[14:25:28] <NodeX> nulls are classed as dupes too
[15:04:38] <try_it> hello
[15:05:37] <try_it> try it out repl is broken, find() function returns empty array regardless of db state
[15:05:52] <try_it> is it ok?
[15:06:40] <NodeX> is there anything in the db?
[15:07:06] <try_it> Neptu: yep
[15:07:15] <try_it> save function returned "ok"
[15:07:32] <try_it> NodeX: is it working for you?
[15:08:21] <try_it> http://ompldr.org/vZ3pvcw
[15:08:32] <Neptu> try_it: ??
[15:08:50] <try_it> Neptu: ???
[15:08:59] <NodeX> I am not clicking that link
[15:09:35] <try_it> NodeX: I don't care.
[15:09:46] <NodeX> go troll somewhere else
[15:09:55] <try_it> NodeX: no you.
[15:10:10] <NodeX> lol
[15:10:58] <NodeX> world is full of script kiddies
[15:27:09] <try_it> so, db.scores.save({a: 99}); db.scores.find(); *should* return empty array?
[15:27:30] <NodeX> 42
[15:29:34] <mathieue__1> hi all. i have a lag on mongoid / replicaset when host is down (only mongo down == nolag ==ok). anyone ?
[15:35:04] <NodeX> added_epoch_int
[15:35:05] <NodeX> oops
[15:37:46] <Gargoyle> Careful, NodeX. You almost sent us all back to 1970!
[15:38:00] <NodeX> dang
[15:38:41] <Infin1ty> I have a cluster of 3 shards, each shard is a replicaset with 3 members, i want to add another 2 shards (3 replicaset members per shard)
[15:38:55] <Infin1ty> should i first create the replicasets as two standalone replicasets
[15:39:06] <Infin1ty> then add those replicasets to the cluster using sh.addShard()?
[15:48:32] <try_it> ok, i've installed mongodb on localhost and db.test.save({a: 10}); db.test.find() returns nonempty array with 'a' record.
[15:48:54] <try_it> so, can someone ban this shit spouting NodeX troll?
[15:49:09] <try_it> he doesn't help.
[15:50:28] <Gargoyle> try_it: This isn't kindergarten. If you failed to do that task, then you haven't read ANY docs, and there is a 99.999% chance your link is spam. If you really want someone to look, use a well known pastebin.
[15:50:58] <Gargoyle> try_it: And asking for bans will most likely only succeed in getting yourself banned.
[15:52:27] <try_it> Gargoyle: are you kidding? I haven't failed with mongo on localhost, the web repl is failing, and i'm not going to convince you of your incompetence since i don't much care about it
[16:00:02] <NodeX> LOL
[16:00:08] <NodeX> making friends again Gargoyle ?
[16:00:23] <Gargoyle> Yeah.
[16:00:24] <Gargoyle> :)
[16:00:25] <NodeX> if he didn't care then why was he nerd raging
[16:05:19] <obryan> is there some sort of recovery from this error "create failed in createPrivateMap" ?
[16:05:52] <obryan> i've tried googling this error but all i get are git commit code snippets
[16:06:41] <NodeX> is that a driver error
[16:06:48] <obryan> a driver?
[16:06:58] <obryan> i'm getting this in the mongo console
[16:07:13] <NodeX> ok, perhaps state that in the first place then ;)
[16:07:24] <obryan> well now i have :P
[16:08:29] <obryan> I've made sure the lock file was removed, it starts up with no errors in the journal
[16:08:39] <obryan> but when I try to access any collection it throws this error
[16:11:51] <obryan> anyone....anyone....
[16:11:55] <obryan> Beuller?
[16:12:34] <NodeX> it's his day off
[16:12:38] <NodeX> LOL
[16:12:45] <NodeX> (ferris that is)
[16:12:53] <obryan> I see lots of people asking about this error
[16:13:05] <obryan> so far a dozen different forums and messages asking WTF this error means
[16:13:19] <obryan> and pretty much the same nonresponse
[16:13:22] <obryan> thanks ten gen :)
[16:13:33] <NodeX> thanks for an open source great product?
[16:13:55] <obryan> yes, so great that it manages to self destruct every four days. quite amazing
[16:13:58] <NodeX> lose the sense of entitlement and have some patience, then perhaps you'll get your problem fixed
[16:14:15] <NodeX> go use something else if you're not happy with it, that's the great thing about choice
[16:15:11] <obryan> Spoken like a know it all with no answers
[16:15:54] <NodeX> Except I do have Answers, just not for your problem because it's never happened to me
[16:16:05] <NodeX> and an attitude like that will not get you help any faster
[16:16:53] <obryan> Then please, feel free to say nothing.
[16:17:00] <ron> nobody forces you to use MongoDB.
[16:17:18] <obryan> i wish that was true
[16:17:27] <obryan> I can't convince management to let me use Postgres
[16:17:31] <NodeX> did you ever think that if it's happening to a handful of people that you all have something in common
[16:17:40] <NodeX> or did that not cross your mind?
[16:17:42] <ron> it IS true. nobody forces you to work there.
[16:18:01] <obryan> No, I just assumed it was a random bit of chaos causing the error capriciously and for no reason at all
[16:18:15] <NodeX> then there is your first problem
[16:18:20] <obryan> Amazing how your condescending assumption that fits your expectations is precisely what the problem is.
[16:18:40] <NodeX> concedescending? lol
[16:18:40] <ron> well, no, the problem is what you're having. WE don't have that problem.
[16:19:00] <obryan> I remember we had a developer who repeatedly said the same problem
[16:19:03] <obryan> we fired him
[16:19:14] <obryan> "it works fine on my dev box"
[16:19:19] <NodeX> dude, you come in here demanding help, slating the product and have the stones to call me condescending?
[16:19:34] <obryan> if the shoe fits
[16:20:02] <obryan> and i ASKED, and got your snark as a reply
[16:20:03] <NodeX> I think you can pretty much count on getting no help at all on this one kid
[16:20:11] <NodeX> my snark ? LOL:
[16:20:29] <obryan> wow, shocking, you're going to be no help
[16:20:30] <NodeX> I asked you to state the problem - are we supposed to guess that it was related to the shell?
[16:21:11] <ron> well
[16:21:14] <NodeX> it's related to memory kid, now go read a book
[16:21:16] <ron> I think the following image is going to fit well
[16:21:31] <obryan> der gee thank you for that astounding insight
[16:21:51] <NodeX> LOL, grow up
[16:21:55] <ron> http://farm2.staticflickr.com/1368/987251565_22ea2338dd.jpg
[16:22:31] <NodeX> not once did you explain it may be due to memory... now you "claim" to know what it's related to. I bet people love talking to you kid
[16:22:42] <NodeX> ron : that's me dude
[16:22:43] <obryan> no, i know what it's not.
[16:22:47] <NodeX> where did you get my photo?
[16:22:48] <obryan> 3x the RAM is available
[16:22:53] <obryan> so NOT a memory issue
[16:22:57] <obryan> just because it says so
[16:23:01] <ron> NodeX: I *know*
[16:23:05] <NodeX> It's a MMap issue idiot
[16:23:39] <NodeX> another one for the ignore list
[16:23:44] <obryan> yes you are
[16:23:46] <obryan> good suggestion!
[16:23:56] <obryan> the only one you've made so far asshat
[16:24:03] <NodeX> gonna need to shard my ignore list pretty soon if idiots keep using mongo
[16:24:05] <NodeX> LOL
[16:26:10] <moogway> hi, i'm using ubuntu 12.04 (64 bit) and django 1.4 to develop a simple webapp (newbie here) and I wanted to use mongodb
[16:26:38] <moogway> could anyone please guide me in terms of what engines do I need to install? MongoEngine or Django-mongodb?
[16:27:26] <moogway> which is better and more supported? also, if I want to use Django, would using Mongo be a good idea?
[16:28:48] <Okasu> moogway: Using Mongo is useful only if you need NoSQL, web framework choice is unrelated. You can use Mongo even without it.
[16:29:32] <obryan> the framework should be your primary consideration
[16:30:02] <moogway> hmm, I understand that
[16:30:30] <obryan> Also if you have multiple options it often comes down to which one you prefer, in Ruby there is mongomapper and mongoid, I just happen to prefer mongoid.
[16:30:33] <moogway> but I wanted to use python and I like the concept of NoSQL (because it's intuitive for me, as compared to SQL)
[16:30:53] <Okasu> obryan: Yeah, mongoid is a good one.
[16:31:36] <obryan> Well from my scanty knowledge of python-mongo, django-mongodb comes up a lot, so something for popularity?
[16:31:46] <moogway> hmm, so I shouldn't worry about support and active development? Because it seems MongoEngine is not as well supported as django-mongodb
[16:31:50] <NodeX> from what I gather; most drivers do the same thing, a few have ORMs and things with them but they generally all give you API-like access to mongodb so your app will be agnostic
[16:32:13] <moogway> obryan: exactly, django-mongodb does come up a lot
[16:32:19] <obryan> Well that can happen to any project, but if its developed enough like Capistrano it can be essentially abandoned and be good for a long time
[16:33:16] <moogway> so it's cool to go ahead with anything, right?
[16:33:46] <obryan> if you have options, play with'm and go with the one you like
[16:34:02] <NodeX> I should say so
[16:34:03] <moogway> @obryan: yeah
[16:34:16] <moogway> thanks all
[16:34:23] <NodeX> You should not listen to obryan : he doesn't know what he's talking about
[19:22:01] <Edgan> Why is there no mongos.conf in the mongodb-10gen ubuntu-upstart packages?
[19:24:15] <Edgan> better said /etc/init/mongos.conf
[19:31:08] <rickibalboa> How do I deal with storing keys that have disallowed characters in them, such as dots. I can't not do it because the architecture is already in place a lot of changes would need to be made, is there a function for like encoding these or something?
[19:36:46] <rickibalboa> Never mind i'll just base64 encode it
[19:44:34] <jhammerman> Hi MongoDB users. I am spinning up a pair of MongoDB Replica Sets in AWS and both of the arbiters on one of the nodes are unable to communicate with the masters. I have tested inter-node connectivity; the wire is not the problem. The logs show this: DBClientBase::findN: transport error: 10, checkEmpty: false, from: $auth: {} }
[19:45:47] <jhammerman> The arbiter itself is telling me to run rs.initiate, but that can't be right. I've tried restarting all of the daemons, I've tried running rs.reconfig, and I've tried removing and re-adding the arbiter. I'm completely open to suggestions!
[19:46:42] <jhammerman> I do have noprealloc=true in the mongod.conf on the arbiter nodes, since EBS storage space is limited
[20:39:41] <kali> NodeX: i think you meant "over under"
[20:44:53] <moogway> hi, any idea about the integration of mongodb with django 1.4? is it on? does it work? or is it really buggy?
[20:45:20] <moogway> i've been reading up here and there and it seems to me that the thing is buggy
[21:40:57] <Zelest> http://www.youtube.com/watch?v=d8wEvWNrIxo .. porn! <3
[21:48:49] <ekristen> does there exist such a tool that is web based that allows for dynamic document creation/editing in mongo?
[22:07:10] <JakePee> Trying to write a script to migrate some of our stuff in our schema. Essentially convert objects to arrays. Anyone have any insight into optimizing this:
[22:07:46] <JakePee> https://gist.github.com/bb52d92171b8dceff996
[22:09:37] <JakePee> ~45 million records
[22:48:47] <timvdm> Hi
[22:48:51] <timvdm> Is it possible to extend mongodb with C++ plugins? I saw the src/db/module.h file which might be relevant but I can't find much information about this.
[22:56:05] <jaimef> is there a way to specify non-voting member when doing rs.add()?
[23:25:07] <ehershey> I don't think so
[23:25:50] <ehershey> oh no, sorry
[23:25:54] <ehershey> echo rs.add | mongo
[23:27:31] <ehershey> something like this would do that: rs.add({ host: 'somehost:12345', votes: 0 })
[23:27:38] <ehershey> I believe
[23:28:02] <ehershey> rs.add lets you pass in a hostport string or a member json object
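ehershey's suggestion written out as a shell fragment (a sketch against the replica-set API of that era, to be run on the primary; the `_id` value and the `priority` field are assumptions here, though non-voting members are conventionally also given priority 0 so they can never be elected):

```javascript
// From the mongo shell, on the primary: add a non-voting member.
rs.add({
  _id: 3,                 // next free member _id per rs.conf()
  host: "somehost:12345",
  votes: 0,               // non-voting
  priority: 0             // also keep it from becoming primary
})
```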
[23:36:24] <midinerd_> Hello!
[23:36:26] <midinerd_> I'm new to this.
[23:36:31] <midinerd_> to freenode*
[23:36:47] <midinerd_> Is this channel very active eh? ;);)
[23:36:58] <Auz> very
[23:37:04] <midinerd_> Awesome.
[23:37:25] <midinerd_> mongodb had a bit of a learning curve at first but I'm really digging it now...
[23:43:52] <midinerd_> With mongoexport - do I have to specify all fields in a collection or is there an arg I can use to indicate "export all fields" ?
[23:45:41] <Auz> I think if you leave off the --fields it will assume all fields.
[23:46:00] <midinerd_> Yeah, I figure, but it says "you need to specify fields"
[23:46:17] <midinerd_> The actual line I'm using is: mongoexport --host viglbuildXXXX:27017 --csv -o flat_table.csv -c jordanitems
[23:46:51] <Auz> you should specify a db
[23:48:56] <midinerd_> When I get an "Invalid BSON object type for CSV output: 6" <-- how do I interpret or break this message apart? what is the '6'
[23:51:08] <Auz> 6 is undefined
[23:51:16] <midinerd_> probably had a null in there somewhere.
[23:51:19] <midinerd_> thanks