#mongodb logs for Thursday the 7th of March, 2013

[00:07:57] <gyre007> I see so before I rebuild hekman I need to force reelection...
[00:08:09] <hekman> only on your primary
[00:08:13] <gyre007> I mean before I rebuild primary
[00:08:16] <gyre007> yup
[00:08:18] <gyre007> cool
[00:08:21] <gyre007> cheers hekman
[00:08:37] <hekman> you should automatically elect a new primary if you bring it down
[00:08:43] <hekman> just make sure it happens is what i mean :)
[00:08:47] <gyre007> I see
[00:08:49] <gyre007> ok
[00:20:03] <trevorfrancis> I am looking at building a mongo cluster: We are wanting to use it to store a large volume of simultaneous inserts
[00:20:12] <trevorfrancis> and to perform analytics work on them
[00:20:30] <trevorfrancis> does Mongo have an analytics framework?
[00:32:31] <trevorfrancis> also, how hard is it to add new nodes to a cluster?
[00:52:18] <bartzy> Hey
[00:52:29] <bartzy> I executed find() on the mongo shell
[00:52:32] <bartzy> and got this line
[00:52:38] <bartzy> { "_id" : ObjectId("5137e3fca4c9deee0500000d"), "name" : "Fun", "subcategories" : [ ObjectId("5137e3fca4c9deee05000007"), ObjectId("5137e3fca4c9deee0500000a") ] }
[00:53:02] <bartzy> Well, in the shell the spacing between each ObjectId in "subcategories" is much bigger. Any idea why the spacing is there anyway ?
[05:43:07] <zeroquake> what server side setup do i need in aws, for storing gps data coming from an android phonegap app, using mongodb?
[06:36:56] <someprimetime_> I'm returning a collection of 10 items that I'd like to loop through and augment an easy date to… Basically something that grabs the created date and converts it to a `X time ago` string… and add it to each item in the collection. What's the best way to handle something like this?
[06:37:17] <someprimetime_> Was thinking a for in?
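For a fetched array of plain documents, a `map` over the items is more idiomatic than a `for...in` (which iterates keys, not values). A minimal sketch — the `timeAgo`/`withTimeAgo` helper names and the `created` field are assumptions for illustration:

```javascript
// Hypothetical helper: turn a created-at Date into a rough "X time ago" string.
function timeAgo(created, now = new Date()) {
  const seconds = Math.floor((now - created) / 1000);
  const units = [
    ["year", 365 * 24 * 3600],
    ["day", 24 * 3600],
    ["hour", 3600],
    ["minute", 60],
  ];
  for (const [name, span] of units) {
    const count = Math.floor(seconds / span);
    if (count >= 1) return `${count} ${name}${count > 1 ? "s" : ""} ago`;
  }
  return "just now";
}

// Augment each item in the fetched collection (Array.map, not for...in).
function withTimeAgo(items, now = new Date()) {
  return items.map(item => ({ ...item, ago: timeAgo(item.created, now) }));
}
```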
[07:05:46] <rishabhverma> Hey. I'm trying to install MongoDB on a fresh Ubuntu 12.04 LTS machine. But have been getting this error, "locale::facet::_S_create_c_locale name not valid" repeatedly.
[07:06:00] <rishabhverma> Running locale-gen also doesn't help resolve this.
[07:06:11] <rishabhverma> Any ideas on what should I do in such a case/
[07:06:15] <rishabhverma> *?
[08:30:52] <[AD]Turbo> ciao
[08:56:05] <Katafalkas> Hey, I am wondering which image on amazon ec2 I should use for mongodb. 10gen by default uses standard AMIs. But I do like ubuntu. Also there was some benchmarks and they show that mongodb runs better on ubuntu. Any thoughts on that ?
[09:13:24] <richwol> I've seen a huge company store date/time as a string in the format 2013-03-07 09:10:12, the idea being it's easy to query (for example, to return all records matching a particular day you could do /2013-03-07/). I haven't read much about this technique and the performance of it - does anyone know if it's faster this way than doing a similar query against native date objects? Any advantages/disadvantages would also be appreciated so I can make a well informed decision.
[09:16:35] <saby> I have a collection containing a field status: {status.algo1: "required", status.algo2: "running", status.algo3: "required"}
[09:16:44] <saby> i.e status has some sub fields,
[09:17:43] <saby> is it possible to send a query such that it sends me any document for which anyone of the status is required?
[09:21:18] <eaSy60> Hello, how could I get the id of the last "inserted" document in mongoshell?
[09:23:21] <Nodex> eaSy60 : you can't unless you set it yourself
[09:23:34] <richwol> EaSy60: http://stackoverflow.com/questions/3611277/how-to-get-last-inserted-id-using-the-js-api
[09:23:49] <richwol> gives you an idea..
[09:24:07] <Nodex> ^^ (setting it yourself)
[09:24:26] <richwol> Yeah, just giving him an example
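Nodex's "setting it yourself" works because drivers (and the shell's `ObjectId()`) generate the 12-byte id client-side, so you know it before the insert reaches the server. A minimal sketch of that format — the `makeObjectIdHex` function is made up for illustration; in real code use the driver's own ObjectId:

```javascript
// Hypothetical minimal ObjectId-format generator: 4-byte big-endian epoch
// seconds, then 8 random bytes, as 24 hex characters. For illustration only.
function makeObjectIdHex(date = new Date()) {
  const ts = Math.floor(date.getTime() / 1000)
    .toString(16)
    .padStart(8, "0");
  let rest = "";
  for (let i = 0; i < 8; i++) {
    rest += Math.floor(Math.random() * 256).toString(16).padStart(2, "0");
  }
  return ts + rest;
}

// In the mongo shell the equivalent is:
//   var id = new ObjectId(); db.coll.insert({_id: id, ...});
// and `id` is the "last inserted id", known before the insert.
const id = makeObjectIdHex();
```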
[09:25:00] <Nodex> with regards to your regex question - the performance is about as bad as it can get
[09:25:33] <richwol> Nodex: Yeah I thought it might be, I can't work out why they've done it this way at all..
[09:25:59] <Nodex> personally I do this ... timestamp : epoch, ymd : Ymd
[09:26:14] <Nodex> ymd: 20130307
[09:26:41] <Nodex> it's sortable and readable
[09:26:49] <richwol> Oh ok, so you can just match ymd exactly, but you still have the timestamp if needed?
[09:26:55] <Nodex> correct
[09:27:12] <Nodex> if you need it any different then just add the difference you need
[09:27:20] <Nodex> example, Ym (201303)
[09:27:56] <eaSy60> richwol, Nodex: Thanks, I'm just coding a migration script so I don't care how much dirty it is
[09:27:57] <Nodex> some people store a date object ... date : {y:2013,m:03,d:07}
[09:28:05] <richwol> sounds good, the only issue for me is that I need to query on different timespans.. for example /2013-03/, /2013-03-06/, /2013-03-06 10/ etc. I guess for that reason I'm best off using the Date object
[09:28:08] <Nodex> it's a little more granular
[09:28:44] <Nodex> I would use range queries for timespans
[09:29:00] <richwol> Ah ok, another interesting approach. My only concern about this is the speed - it's gonna be a big table so whichever approach is fastest is gonna have bonus points for me!
[09:29:07] <richwol> sure
[09:29:30] <Nodex> define big
[09:29:44] <richwol> 20 million records +
[09:30:18] <richwol> The date field was going to make up part of a composite key
[09:31:16] <Nodex> are the records large in size?
[09:31:43] <richwol> Difficult to say, they can vary from about 100 bytes up to about 80k
[09:32:02] <richwol> Odd ones could be a lot more
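The timestamp-plus-ymd scheme Nodex describes can be sketched as a small helper that derives both fields at insert time. The `dateFields` name is made up for illustration; the query shapes at the end are plain objects in mongo-shell style:

```javascript
// Sketch of the "timestamp + ymd" scheme: keep the epoch for range queries
// and a sortable integer like 20130307 for exact matches (hypothetical helper).
function dateFields(d) {
  const ymd =
    d.getUTCFullYear() * 10000 + (d.getUTCMonth() + 1) * 100 + d.getUTCDate();
  return {
    timestamp: Math.floor(d.getTime() / 1000), // epoch seconds, for ranges
    ymd,                                       // e.g. 20130307, exact-day match
    ym: Math.floor(ymd / 100),                 // e.g. 201303, whole-month match
  };
}

// Query shapes: an exact-day match on the integer, or a range on the epoch.
const oneDay = { ymd: 20130307 };
const range = { timestamp: { $gte: 1362614400, $lt: 1362700800 } }; // 2013-03-07 UTC
```

Both avoid the regex-on-strings scan: an exact match or a `$gte`/`$lt` range can walk an index, while an unanchored regex cannot.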
[09:32:19] <saby> i have a field status containing multiple subfields, like status.algo1: "required" and stuff. Is it possible to get all documents which have any subfield's value as "required" ??
[09:33:00] <Nodex> db.foo.find({status.algo1: "required"});
[09:33:05] <Nodex> db.foo.find({"status.algo1": "required"});
[09:33:06] <Zelest> o/
[09:33:21] <saby> Nodex i have status.algo1, status.algo2, status.algo3
[09:33:23] <Nodex> richwol : if they fit in RAM you should be fine on performance
[09:34:08] <saby> db.foo.find({"status.algo1": "required", "status.algo2":"required"...});
[09:34:14] <richwol> nodex: Awesome. I suppose the other benefit of using the date object is that I can cater for data stored in different timezones.. the other methods are a bit limited for that
[09:34:25] <saby> is there a way to do status* : "required"
[09:34:31] <Nodex> saby : please pastebin a document
[09:34:36] <saby> such that I get all the records for which any status field is required
[09:35:20] <saby> sure Nodex
[09:35:41] <saby> Nodex http://pastebin.com/3iuCqpd3
[09:37:04] <saby> so Nodex I want to keep the find query generic as new subfields could get added in near future so I would like to fetch any doc with a status as required
[09:37:49] <Nodex> you're going to need an $or
[09:38:24] <saby> yes
[09:38:43] <saby> but is there a way of not specifying the individual field names?
[09:39:33] <Nodex> no
[09:42:25] <saby> Nodex any way to get the names of fields available in Mongo?
[09:45:17] <Nodex> you can loop a document I suppose
[09:48:56] <saby> hmmm that's gonna be a problem
[09:48:57] <saby> damn
[09:50:20] <kali> saby: maybe this can help: https://github.com/variety/variety
[09:50:38] <kali> ha. no. sorry
[09:50:40] <kali> forget it
[09:50:59] <saby> ok :)
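There is no `status.*` wildcard in the query language, but application code can do what Nodex hints at — loop one document's `status` keys and build the `$or` dynamically, so newly added algos are picked up without editing the query. A sketch; `buildStatusOr` is a made-up name:

```javascript
// Build { $or: [ {"status.algo1": "required"}, ... ] } from whatever keys
// a sample document's status subdocument happens to have (hypothetical helper).
function buildStatusOr(sampleStatus, wanted = "required") {
  const clauses = Object.keys(sampleStatus).map(key => ({
    [`status.${key}`]: wanted,
  }));
  return { $or: clauses };
}

// db.foo.findOne() would supply the sample; then:
//   db.foo.find(buildStatusOr(sample.status));
const query = buildStatusOr({ algo1: "x", algo2: "x", algo3: "x" });
```

The caveat is the one from the discussion: the key list comes from one looped document, so a key that exists only in other documents is still missed.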
[11:09:23] <multi_io> when you're storing "1:n relations" using DBRefs, do you commonly store the DBRefs (or simple IDs) on the n side (as a single reference), or on the 1 side (as an array of references)?
[11:09:54] <ron> depends.
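The two shapes that "depends" is choosing between, shown as plain documents (the field names are made up for illustration): a reference on the n side scales to unbounded n, while an array on the 1 side is convenient when n stays small and you usually read parent and children together.

```javascript
// (a) reference on the n side: each child points at its parent.
const childSide = { _id: "comment1", post_id: "post1", text: "hi" };

// (b) array of references on the 1 side: the parent lists its children.
const parentSide = { _id: "post1", comments: ["comment1", "comment2"] };
```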
[13:19:29] <Zelest> I'm having huge issues with failover and mongo in our replicaset.. :/
[13:19:55] <Zelest> Whenever the master dies, the secondary becomes master, but php-fpm give me "no suitable candidate" even if the master is available and up..
[13:20:03] <Zelest> same happens if the master goes down and comes back up short after
[13:20:09] <Zelest> only way to solve this is to restart php-fpm :/
[14:51:35] <jtomasrl> i have a items document that have nested orders with a created_at field, is it possible to order by that?
[14:51:43] <petto> I guys... I'm trying to sort a collection result by a text field, but it doesn't work with accented characters. For example (a,b,c,d,v, á)
[14:53:30] <Nodex> jtomasrl : use dot notation
[14:53:56] <petto> hey*
[14:56:21] <petto> db.content.find({_id:/PLACE/,title:{$ne:null}},{title:1,_id:0}).sort({title:1}); => {"title":"a"}, {"title":"b"}, {"title":"c"}, {"title":"á"}
[15:02:05] <kali> petto: yes, sort is strcmp based. nothing fancy happening
[15:04:29] <petto> kali: =/ How can I solve this? Or how can I improve this sort?
[15:06:05] <kali> petto: the only thing you can do is normalize the "title" by discarding the accents (and maybe case too). that will improve the situation for some languages (not all of them)
[15:09:42] <petto> oh my god!
[15:10:09] <strigga> mmmh?
[15:23:53] <kali> petto: you may want to watch/vote https://jira.mongodb.org/browse/SERVER-1920
[15:31:03] <Nodex> or just use english hahahahah
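Kali's workaround — store a second, normalized field and sort on that — can be sketched with Unicode NFD decomposition, which splits a character like `á` into `a` plus a combining mark that can be stripped. The `sortKey` helper and the `title_sort` field name are assumptions; as kali notes, this helps Latin-based accents but not every language:

```javascript
// Normalize a title for sorting: decompose (NFD), drop combining accent
// marks, and fold case. Store the result in a second field at insert time.
function sortKey(title) {
  return title
    .normalize("NFD")
    .replace(/[\u0300-\u036f]/g, "") // strip combining diacritical marks
    .toLowerCase();
}

// At insert time (sketch):
//   db.content.insert({ title: "á", title_sort: sortKey("á") })
// then sort({title_sort: 1}) instead of sort({title: 1}).
const titles = ["b", "á", "a", "c"];
titles.sort((x, y) => (sortKey(x) < sortKey(y) ? -1 : 1));
// "á" now sorts next to "a" instead of after "c"
```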
[16:05:12] <qhartman> I see that the method to increase the size of a capped collection in 1.x is to create a new larger collection and then copy the contents of the current one over to it.
[16:05:39] <qhartman> Is that the same process in 2.x, or has the feature I see talked about online about a more direct way to do that been implemented?
[16:16:00] <coogle> Does anyone know what this error means?
[16:16:00] <coogle> Couldn't pull any geometry out of $within query:
[16:16:06] <coogle> Couldn't pull any geometry out of $within query: { 0: [ -73.98085499999999, 40.75547299999999 ], 1: 0.0252321865809176 }
[16:17:23] <coogle> never mind... it's because i didn't have a 2d index
[16:25:59] <bartzy> Hey
[16:26:19] <bartzy> when I encode a MongoId (PHP driver) to JSON via json_encode, the result is :
[16:26:46] <bartzy> {"$id":"5137e3fca4c9deee05000001"}
[16:26:58] <bartzy> This is not expected - it should be just the string, not an object like that.
[16:27:30] <bartzy> What about MongoId implementing JsonSerializable ?
[16:32:24] <Derick> bartzy: file a feature request please - however, I don't quite agree it should just be a string, as it's a bit more than that
[16:32:29] <Derick> why are you turning it into JSON?
[16:32:40] <bartzy> Derick: To forward it to our JS app.
[16:32:52] <bartzy> our JS app need to know the id of that item/object/whatever...
[16:33:03] <bartzy> and that's our representation for the id (the ObjectId of the document)
[16:33:29] <bartzy> and now instead of going to photo.id (in JS for example), it needs to go to photo.id.$id , which is weird...
[16:33:35] <Derick> we do still have to support php 5.2 though.. and jsonserializable is 5.3 (or 5.4 even)
[16:34:07] <bartzy> Derick: I don't understand the reasoning behind having a json like that: {"$id":"5137e3fca4c9deee05000001"}
[16:34:23] <bartzy> It's not that json_decode would bring back MongoId to "life", it would just be stdClass
[16:34:28] <Derick> it's the same as vardump... so it makes sense if you look it like that
[16:34:44] <Derick> I do indeed not like the "$id" name though, as it's confusing
[16:34:51] <Derick> (and difficult to use in JSON)
[16:35:10] <bartzy> so most users that are just doing json_encode($results) from a mongo query to their JS app - need to go over each document and cast the MongoId to string ?
[16:35:30] <Derick> well, you just said you can do photo.id.$id yourself...
[16:35:54] <Derick> we can't really change behaviour either :-/
[16:36:16] <Derick> I think in the end, it would be best if you can specify your own classes instead of MongoID/MongoCursor etc, so that you can modify behviour yourself
[16:37:50] <bartzy> Derick: Wouldn't that be pretty slow ?
[16:38:11] <Derick> maybe, but at least we don't have to make a decision
[16:38:34] <Derick> implementing jsonserializable now, and changing it to a string for that will break BC, which we can't do before 2.0 (and that we'd like to avoid)
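Until the driver's behaviour changes, the `$id` wrapper has to be flattened on one side: in PHP, cast before encoding (`json_encode((string) $photo['_id'])`), or unwrap after parsing on the JS side. A sketch of the JS-side unwrap; `flattenId` is a made-up helper name:

```javascript
// Replace a {"$id": "..."} wrapper (as emitted by json_encode on a MongoId)
// with the plain hex string, leaving other documents untouched.
function flattenId(doc) {
  if (doc && doc._id && typeof doc._id.$id === "string") {
    return { ...doc, _id: doc._id.$id };
  }
  return doc;
}

const parsed = JSON.parse(
  '{"_id":{"$id":"5137e3fca4c9deee05000001"},"name":"Fun"}'
);
const photo = flattenId(parsed);
// photo._id is now the plain string, so JS code can use photo._id directly
```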
[16:48:33] <fredix> hi
[16:48:49] <fredix> it seems that mongo::ScopedDbConnection can connect to a single mongodb instance, right ?
[17:09:36] <bartzy> Derick: when 2.0 will arrive ? Why you can't do that before 2.0 ?
[17:09:49] <Derick> we can not break people's code
[17:09:58] <Derick> changing behaviour means breaking people's code
[17:10:02] <Derick> 2.0 is not planned
[17:19:16] <bean__> Backwards compat matters
[17:51:48] <jiffe98> why would show dbs not show all the available dbs?
[17:51:56] <jiffe98> I'm doing this through the router
[17:52:16] <jiffe98> it isn't listing my mail db which exists and when I use it, it shows me the collections in it fine
[18:27:38] <dmilith> hi
[18:27:43] <dmilith> I want to build mongo from source
[18:28:01] <dmilith> on normal systems like darwin/ freebsd with clang I'm adding --clang to scons
[18:28:06] <dmilith> and it works fine
[18:28:21] <dmilith> but documentation lacks on information of how to give explicit path to C++ compiler
[18:28:33] <dmilith> cause it's ridiculous to build mongodb with g++
[18:28:37] <dmilith> any hints?
[18:30:18] <dmilith> note: on linux I have clang in custom path
[18:30:31] <dmilith> (it's in PATH, but it's scons..)
[18:31:34] <wereHamster> scons --cxx=/path/to/...
[18:40:00] <dmilith> ah
[18:40:03] <dmilith> --cxx
[18:40:24] <dmilith> It's really fucking awesome to know. Cause no documentation ever mentioned such option.
[18:42:10] <wereHamster> not even scons --help?
[19:06:15] <dmilith> wereHamster:
[19:06:16] <dmilith> src/mongo/db/namespace_details.h:112:22: error: private field 'reserved2' is not used [-Werror,-Wunused-private-field]
[19:06:16] <dmilith> unsigned reserved2;
[19:06:30] <dmilith> and how to deal with such ridiculous "errors"?
[19:07:26] <dmilith> allright. I see
[19:09:03] <dmilith> no, I see only --ignore-errors
[19:09:07] <dmilith> and this is not same
[19:13:27] <dmilith> --warn=WARNING-SPEC
[19:13:31] <dmilith> what does it mean?
[19:19:28] <dmilith> no documentation at all..
[19:19:36] <dmilith> it's real open source project.
[19:19:47] <dmilith> I tried even read SConscript
[19:19:53] <wereHamster> you can get your full refund back if you don't like it.
[19:19:55] <dmilith> no fun at all
[19:19:59] <wereHamster> even with interest!
[19:20:02] <dmilith> I don't use it
[19:20:32] <dmilith> and never wont, but I need to build it for some other stupid people who will be
[19:21:16] <wereHamster> can't you use a binary package?
[19:22:43] <dmilith> nope
[19:22:52] <dmilith> I'm using very ancient distro
[19:23:06] <dmilith> I really want only to get rid of those warnings as errors
[19:23:16] <dmilith> and I got lost and never back, I promise
[19:23:43] <dmilith> --warn=WARNING-SPEC, --warning=WARNING-SPEC
[19:23:43] <dmilith> Enable or disable warnings.
[19:23:48] <dmilith> what does it mean?
[19:23:58] <dmilith> where can I find 1 word about it
[19:24:13] <wereHamster> use the source, luke
[19:27:09] <dmilith> yea, sure
[19:27:18] <dmilith> --clang --cxx='${SOFTWARE_DIR}Clang/exports/clang++ -w'
[19:27:35] <dmilith> is the solution, but i bet you wont even mention about it in your great fucked up site
[19:27:49] <dmilith> this project is so lame
[19:42:38] <SteveO7> Does anyone know if mongoimport can be used to update fields in an embedded document?
[19:56:21] <ehershey> what distro are you using?
[19:56:38] <ehershey> (dmilith)
[19:57:03] <dmilith> ehershey: debian stable
[19:57:18] <ehershey> debian stable is an ancient distro?
[19:59:32] <dmilith> it is
[20:00:05] <ehershey> I can't keep up
[22:15:10] <redsand> is it possible to add a new shard to an existing set if the set already has authentication enabled?
[22:38:13] <toxster> hi, i am setting up a sharded cluster, do i need to have mongod running on each server? now mongos complains that it's using the same port, or does mongos take over what mongod does?
[22:39:15] <toxster> should i just kill all mongod processes?
[22:42:29] <julian-delphiki> toxster: you'll want to change what port your mongod is listening on.
[22:43:01] <toxster> julian: ok do i need the original mongod daemons
[22:43:18] <julian-delphiki> yep
[22:43:23] <toxster> ok
[22:43:51] <toxster> what host do i direct my client at, and to which process? mongos, mongod configsvr, or original mongod
[22:44:17] <julian-delphiki> toxster: http://docs.mongodb.org/manual/core/sharded-clusters/#infrastructure-requirements-for-sharded-clusters
[22:45:11] <julian-delphiki> it'd be the mongos
[22:45:14] <fredix> hi
[22:45:25] <julian-delphiki> since, you know, the mongos is the server that controls the shards, etc
[22:45:33] <toxster> so my understanding is that i should direct my client towards a mongos host and port
[22:45:41] <fredix> what should the connection string to connect to a replica set with ScopedDbConnection ?
[22:47:09] <fredix> ScopedDbConnection *replicaset = ScopedDbConnection::getScopedDbConnection("yourreplicasetname/ip1,ip2,ip3" ); is it ok ?
[22:47:43] <julian-delphiki> toxster: yes, and the mongos should handle the rest transparently
[22:47:45] <julian-delphiki> i believe
[22:48:25] <toxster> ok, so i'll just change mongos's port and update my client code to connect to that port
[22:48:36] <toxster> all should be well?
[22:49:19] <toxster> only 1 mongos should be needed when i test right, the other is just for failover
[23:28:11] <toxster> weird, when i run mongos on port 27020, mongod --configsvr on 27018, and 10gen mongo 2.2.3 on 27017, when i start mongod on 27017 i get Fri Mar 8 00:20:15 got signal 2 (Interrupt), will terminate after current cmd ends
[23:33:12] <toxster> seemed to be that the configsvr has to be started after mongod