PMXBOT Log file Viewer


#mongodb logs for Thursday the 29th of January, 2015

[04:00:43] <speaker1234> I read up on indices and how to create them. My question now is how do you create an index when you first create a database? Maybe a better way to ask is how I guarantee indices when I re-create the database after dropping it?
[04:05:47] <cheeser> you call ensureIndex()
[04:06:30] <speaker1234> when?
[04:06:47] <cheeser> after you drop?
[04:07:10] <speaker1234> after I define the collection?
[04:07:22] <cheeser> you don't define collections as such
[04:07:29] <cheeser> you just start writing to them.
[04:07:39] <speaker1234> I know. What I've been doing is I make a connection to a client and then just start adding records
[04:08:05] <speaker1234> it behaves the same whether I'm creating a new database or using an old one
[04:08:10] <cheeser> right. so you just call ensureIndex() at some point
[04:08:45] <speaker1234> that could get ugly :-)
[04:09:06] <cheeser> it won't
[04:09:28] <speaker1234> I would think I'd want to only call it once when the database is just starting anew
[04:09:49] <cheeser> why? it's a no-op if the index already exists.
[04:10:43] <speaker1234> oh. That's interesting so every time I establish a connection to a database, I call ensure index. If it has records to create an index from, it will work automatically only once
[04:11:03] <speaker1234> work automatically I mean will create the database index automatically but do it only once
[04:11:04] <cheeser> no. just do it app start up.
[04:11:14] <cheeser> calling it over and over is just dumb
[04:11:28] <cheeser> it doesn't need data in the collection to create an index.
[04:12:07] <cheeser> anyway, i'm off to bed.
[04:12:12] <speaker1234> this is in a wsgi environment. Every time I get a connection from an external client, I make a connection to the database
[04:12:44] <speaker1234> so if it doesn't need data in the collection, maybe I can do it in my reset database shell script
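[Editor's note] cheeser's point above — that index creation is idempotent and needs no data in the collection — can be sketched like this. `FakeCollection` is a stand-in for a pymongo `Collection`, and the `date` field is illustrative:

```python
class FakeCollection:
    """Stand-in for pymongo's Collection; records index creations."""
    def __init__(self):
        self.indexes = []

    def create_index(self, keys):
        # Like the server, creating an index that already exists is a no-op.
        if keys not in self.indexes:
            self.indexes.append(keys)

def ensure_indexes(coll):
    """Call once at app startup; safe even on an empty collection."""
    coll.create_index([("date", 1)])

coll = FakeCollection()
ensure_indexes(coll)
ensure_indexes(coll)        # re-running changes nothing
print(coll.indexes)         # [[('date', 1)]]
```

With the real driver the same `ensure_indexes` call belongs in app startup (or the reset shell script), not per-request.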
[05:06:07] <Synt4x`> any reason why this wouldn't work?
[05:06:13] <Synt4x`> player = db.player.find({'parent_api__id':'stats', 'game__id':{'in':games_list}}).sort({'date':1})
[05:06:26] <Synt4x`> error is : TypeError: if no direction is specified, key_or_list must be an instance of list
[05:11:26] <joannac> Synt4x`: you want $in ?
[05:11:32] <joannac> also what's games_list
[05:12:33] <Synt4x`> joannac: sorry $in part works, I've used it a bunch, I just added a .sort() to it at the end and it seems to not work
[05:18:41] <joannac> Synt4x`: so to confirm, if you issue the exact same query without the .sort part, it works?
[05:19:23] <Synt4x`> joannac: correct, just checked again to make sure
[05:21:39] <joannac> oh
[05:21:44] <joannac> you're using python
[05:21:47] <joannac> right?
[05:21:54] <Synt4x`> yea, pymongo
[05:22:18] <joannac> http://stackoverflow.com/questions/8109122/how-to-sort-mongodb-with-pymongo
[05:22:30] <joannac> .sort("date", 1)
[05:22:34] <joannac> is the correct syntax
[05:22:53] <joannac> also next time please mention pymongo in the original question so I don't have to try and decipher ;)
[05:23:50] <Synt4x`> joannac: yea sorry about that I've only ever worked with Mongo in PyMongo so it slips my mind that there are other ways, I'll be more careful. Thank you
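[Editor's note] The shell and pymongo take different sort arguments — the shell takes a document, pymongo takes a key/direction pair or a list of pairs. A minimal illustration of the fix above (`games_list` values are hypothetical):

```python
games_list = ["g1", "g2"]   # hypothetical values

query = {"parent_api__id": "stats", "game__id": {"$in": games_list}}

shell_sort = {"date": 1}       # mongo shell form: .sort({date: 1})
pymongo_sort = [("date", 1)]   # pymongo form: .sort("date", 1) or .sort([("date", 1)])

# pymongo (not run here): db.player.find(query).sort(pymongo_sort)
print(pymongo_sort)
```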
[05:47:30] <Synt4x`> anybody had this happen to them before?
[05:47:31] <Synt4x`> pymongo.errors.OperationFailure: database error: Runner error: Overflow sort stage buffered data usage of 33554623 bytes exceeds internal limit of 33554432 bytes
[05:48:21] <Synt4x`> nevermind ignore that i'll find some solutions online :)
[05:53:55] <Synt4x`> nevermind, the answers online were pretty confusing to me. I'm only getting back 42,470 documents from my query that I'm trying to sort; it seems kind of crazy that that causes a memory overflow
[05:53:59] <Synt4x`> any ideas?
[05:56:49] <joannac> what's the average size of the document?
[05:57:03] <Synt4x`> joannac: whats the best way for me to check that?
[05:57:19] <joannac> db.collection.stats() will give you an estimate
[05:57:57] <Synt4x`> avgObjSize: 1084
[05:58:10] <joannac> right
[05:58:28] <joannac> 42k documents x 1k document size = 42MB. The limit is 32
[06:00:04] <Synt4x`> joannac: hrmmm, ok, is there an easy way to split it up into 2 queries? or perhaps another way to avoid that, I'm not sure what's best to do in this situation, first time happening to me
[06:00:04] <joannac> um, index it?
[06:00:07] <joannac> so you don't have to sort in memory
[06:02:16] <Synt4x`> joannac: hrmm and if it was indexed, say I went into each document and put an index in there, wouldn't I still need to use the .sort() function? Also, it already has an object.date field (that I am trying to sort on), can I use that as the index?
[06:03:27] <joannac> indexes are per collection
[06:04:44] <Synt4x`> joannac: have never heard of collections being indexed, I will google around some on the topic. thanks
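[Editor's note] joannac's arithmetic checks out — the numbers from the log put the sort comfortably over the in-memory sort limit from the error message:

```python
docs = 42_470
avg_obj_size = 1_084               # bytes, from db.collection.stats()
limit = 32 * 1024 * 1024           # 33554432 bytes, matching the error message

total = docs * avg_obj_size
print(total, total > limit)        # 46037480 True -> must index, not sort in memory
```

Hence the advice: an index on the sort field lets the server return documents in order without buffering them all in memory.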
[06:33:43] <Neo9> #join ubuntu
[06:33:54] <Neo9> #join ubuntu
[06:34:04] <Neo9> #join linux
[06:37:02] <joannac> Neo9: /join #ubuntu
[06:37:26] <Neo9> joannac: thanks
[06:57:55] <Jaevn> hello.. anybody there?
[07:00:06] <gihejk> im dropping virtual apples A_i on earth's spherical surface and each one has a radius R_i and a center [lng, lat]. Is there any built-in way yet to be able to query for all the apples whose radii lie within R_0 of some center [x,y]?
[07:01:00] <gihejk> this is simple for a set of apples {A_i} such that R_i = R_j for all i,j, but is more complicated when R_i =/= R_j for all i,j
[07:02:10] <gihejk> anyone have insight here ?
[07:02:21] <gihejk> It's essentially geofencing
[07:02:54] <gihejk> is there a more elegant solution other than dividing the surface up into a finite number of segments
[07:04:58] <joannac> gihejk: this sounds like a homework question, especially the way you described it
[07:05:07] <gihejk> im a math major
[07:05:13] <gihejk> i like to explain things in this language
[07:05:17] <gihejk> actually im not in school
[07:05:21] <gihejk> graduated
[07:05:21] <joannac> LaTeX ?
[07:05:23] <gihejk> im building an ap
[07:05:26] <gihejk> app*
[07:05:42] <gihejk> huh?
[07:05:48] <gihejk> what does this have to do with tex
[07:05:52] <gihejk> x_X
[07:05:58] <gihejk> im asking a query question
[07:06:07] <joannac> you're a LaTeX person, I can tell from the syntax, that's all
[07:06:14] <gihejk> no im a physics person
[07:06:15] <gihejk> lol
[07:06:26] <gihejk> and ive been in math classes
[07:06:27] <gihejk> idk how tex comes into this
[07:06:30] <gihejk> ahh well
[07:06:36] <gihejk> tex is the number one tool for research pubs
[07:06:43] <gihejk> i see why you would think that
[07:06:43] <gihejk> xD
[07:06:50] <gihejk> tbh i never had the patience for tex
[07:06:59] <gihejk> i made my partners write the projects
[07:07:07] <gihejk> anyways
[07:07:26] <joannac> gihejk: http://docs.mongodb.org/manual/reference/operator/query/geoIntersects/#op._S_geoIntersects
[07:07:42] <gihejk> thank you :}
[07:07:45] <gihejk> i will check it out
[07:09:46] <gihejk> joannac:
[07:09:51] <gihejk> i read that
[07:09:56] <gihejk> my only question is
[07:10:26] <gihejk> if i have an object X which has center (x,y) and radius R, if the polygon i specify intersects the circle, will that result come up?
[07:10:35] <gihejk> or does the center itself have to be in the polygon region
[07:10:56] <joannac> "Selects documents whose geospatial data intersects with a specified GeoJSON object; i.e. where the intersection of the data and the specified object is non-empty. This includes cases where the data and the specified object share an edge."
[07:11:09] <gihejk> so "geospatial data" refers to the entire circle?
[07:11:14] <gihejk> not jsut the center
[07:11:28] <joannac> yes, if your geospatial data is a circle and not a point
[07:11:39] <gihejk> :D!
[07:12:34] <gihejk> joannac (___._|__)___| so the object with center at that point over to the left will come up in the geoIntersects query of the rectangular polygon, correct?
[07:13:28] <joannac> Yes
[07:13:33] <gihejk> :D!
[07:13:34] <gihejk> wonderful
[07:13:37] <gihejk> many thanks joannac
[07:13:39] <joannac> The docs aren't ambiguous about this
[07:13:41] <gihejk> <3 to you
[07:13:49] <gihejk> i know but i am
[07:13:55] <joannac> Even an overlapping edge counts
[07:14:18] <gihejk> the docs will tell me how to create a circle ^^
[07:14:49] <gihejk> then i can develop variable radii circles
[07:15:01] <gihejk> and see which ones intersect with the polygon!
[07:17:06] <gihejk> yeah im not too worried about whether it's inclusive or not, the edge is such a small thing it wont make a huge difference in the app
[07:18:24] <gihejk> bye joannac thanks, again very much!
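[Editor's note] One way to get variable-radius circles into a $geoIntersects query, as discussed above, is to store each apple as a polygon approximating its circle. A rough sketch (planar degrees, so only reasonable for small radii away from the poles; all names and coordinates are illustrative):

```python
import math

def circle_polygon(lng, lat, radius_deg, n=32):
    """Approximate a circle as a closed GeoJSON Polygon ring."""
    ring = [
        [lng + radius_deg * math.cos(2 * math.pi * i / n),
         lat + radius_deg * math.sin(2 * math.pi * i / n)]
        for i in range(n)
    ]
    ring.append(ring[0])                      # GeoJSON rings must be closed
    return {"type": "Polygon", "coordinates": [ring]}

fence = circle_polygon(-73.96, 40.77, 0.02)
# query sketch, with "region" a hypothetical field holding each apple's polygon:
# db.apples.find({"region": {"$geoIntersects": {"$geometry": fence}}})
```

Because $geoIntersects compares whole geometries, an apple whose polygon merely overlaps the fence is matched, not just apples whose centers fall inside it.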
[07:20:11] <joannac> Jaevn: if you had asked your question you might have an answer by now ;)
[08:32:21] <fatdragon5> hi all
[08:34:51] <fatdragon5> i’ve been hearing some conflicting stories but perhaps the experts here know more :-) i have an enterprise app that will definitely require full ACID (multi-document transactions). I’ve heard that it will be addressed in the next major release after this upcoming one. But then from a different source I was told that it’s not on any roadmap…
[08:37:40] <fatdragon5> have you heard any news?
[10:03:31] <dorongutman> hey guys
[10:04:01] <dorongutman> if someone has any input on the following question that would be really helpful - https://groups.google.com/forum/#!topic/mongodb-user/LEnVxTmsWig
[10:21:16] <BurtyB> dorongutman, no idea here as I don't believe in AWS ;)
[10:22:17] <dorongutman> BurtyB: I have a fruitarian friend who doesn’t believe in hemoglobin
[13:17:20] <jhon> hi all
[13:18:07] <jhon> i save into a document a field id_image like this "id_image" : ObjectId("54c9140f57c84c5b1a8b459f")
[13:18:38] <jhon> now i want to find in another collection this ObjectId - ObjectId("54c9140f57c84c5b1a8b459f") with PHP
[13:19:58] <jhon> i know that in php i need to create the object with .. $id = new \ObjectId("54c9140f57c84c5b1a8b459f")
[13:20:17] <jhon> but now i want to use only 1 version
[13:20:25] <jhon> first version
[13:20:29] <jhon> is possible ?
[13:29:51] <keithy> hey guys
[13:30:23] <keithy> Can someone explain to me what the empty {} means in db.collection.findOne({}, {name : 1})
[13:30:41] <keithy> x
[13:30:50] <keithy> daf
[13:30:50] <Zelest> match anything
[13:31:09] <Zelest> o_O
[13:31:33] <Zelest> what's the officer problem?
[14:28:02] <andrer> I'm having some issues with a replica set. Three servers and the PHP driver (tried both 1.5.8 and 1.6). The seed list matches the replicaset configuration.
[14:28:12] <andrer> And everything works as long as all three are up
[14:28:39] <andrer> But when one is turned off, PHP continues to connect without any hitches, but any query ran results in a MongoCursorTimeoutException
[14:28:53] <andrer> It does not seem like the driver chooses one of the two other servers at all.
[14:41:14] <lolcookie> why is no one maintaining mongoskin?
[14:41:49] <cheeser> ask the devs?
[14:50:14] <StephenLynx> what is mongoskin?
[15:07:02] <ldiamond> Is there text search in mongo that allows to return "similar" results (like with trigrams or levenshtein distance and such)?
[15:43:05] <pward123_> does rs.initiate() need to be called each time mongo is started?
[15:43:56] <ehershey_> no just to do initial configuration
[16:04:31] <mephju> Hey guys! I have a mongoose question. The situation is that I am saving a rather deeply nested document and validators are not applied to the deeply nested portion of the document. Why would this happen?
[16:06:01] <mrmccrac> confused why im seeing lock status for Global, Database, and Collection on a particular query when running 3.0.0rc6
[16:06:14] <mrmccrac> do you need to run wiredtiger to do collection-level locking?
[16:06:27] <mrmccrac> and not MMAP
[16:07:29] <mrmccrac> an entry in currentOp() has:
[16:07:31] <mrmccrac> http://pastie.org/9871650
[16:08:02] <mrmccrac> that makes me think this operation needs three write locks?
[16:24:09] <pward123_> ehershey: I'm using the stackbrew/mongo:2.4 container with --volumes-from and the collection data persists, but I have to run rs.initiate() each time I rm/run the mongo container
[16:28:34] <djlee> Just done an upgrade on ubuntu, my mongo version as you can see from the aptitude log was updated : "[UPGRADE] mongodb-10gen:amd64 2.4.9 -> 2.4.12". However when i tried to start mongo after the upgrade, i started getting this error: "ERROR: Encountered an unrecognized index spec: { v: 1, key: { location: "2dsphere" }, ns: "loop.cities", name: "location_2dsphere", 2dsphereIndexVersion: 2 }" ... should this have happened?
[16:28:34] <djlee> Why did it happen? Since when did a bug/hotfix upgrade introduce such a big change
[16:32:47] <StephenLynx> try using mongo repository instead of the default one.
[16:32:59] <StephenLynx> 2.4.x is quite outdated, we are on 2.6.7
[16:35:40] <djlee> StephenLynx: think im using 10gen. I will check that though as my local mongo is 2.6 but staging server is 2.4 (yeah i know, stupid). But im more weirded out how the old version before upgrade managed to use an index version of 2, when a newer version post-upgrade couldn't boot with that same index
[16:36:22] <StephenLynx> your case must be the 3rd I saw this week with bugged indexes after upgrades from 2.4
[16:36:25] <djlee> StephenLynx: I do need to upgrade the server, so when i do, hopefully that database i had to disable will start working again. Just weird by whats happened
[16:37:44] <djlee> StephenLynx: Yeah but i upgraded from 2.4.9 to 2.4.12, if i'd have upgraded from 2.4 to 2.6 i'd understand something going wrong somewhere... If its just bad luck or no explanation thats fine. I just don't like not knowing why things have happened... curiosity more so than anything else :P
[16:39:15] <dorongutman> if someone has any input on the following question that would be really helpful - https://groups.google.com/forum/#!topic/mongodb-user/LEnVxTmsWig
[16:41:09] <StephenLynx> latest version: 2.2.2
[16:41:11] <StephenLynx> lol AWS
[16:45:00] <RaceCondition> is there sometimes a few bytes of space reserved for size-increasing updates in capped collections?
[16:45:54] <RaceCondition> I can update e.g. a "status" field of a document in a capped collection from "foo" to "foobar" without error
[16:46:02] <RaceCondition> whereas if that field value was previously a number, it fails
[16:46:46] <pward123_> StephenLynx: using 2.4 since that's what meteor uses in dev. pretty sure I saw the same thing in 2.6, but will verify
[16:48:11] <StephenLynx> what is meteor?
[16:48:48] <pward123_> https://www.meteor.com/
[16:49:21] <pward123_> was originally using 2.6 when I started having problems getting replica set working so I jumped back to 2.4
[16:50:00] <StephenLynx> "Accomplish in 10 lines what would otherwise take 1000, thanks to a reactive programming model that extends all the way from the database to the user's screen."
[16:50:18] <StephenLynx> the horror
[16:51:21] <NoOutlet> I would think ranman might have some insight about AWS.
[16:57:24] <C4nC> Hello everybody, I need an advice for a mongo sharding replicated cluster.
[16:57:58] <C4nC> I'm trying to figure out how I can achieve an HA cluster with 3 nodes
[16:58:38] <C4nC> it should be a sharding cluster (3 shard) and each replicated to the other nodes
[16:59:29] <C4nC> is there any good soul who could help me clarify some aspects of the config?
[17:08:17] <pward123_> StephenLynx: official mongo docker container running 2.6.7 -- I can docker stop/run the mongo container all day, but if I docker rm/run my collections still have data but I get the following error from mongod
[17:08:21] <pward123_> [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
[17:08:51] <pward123_> running the mongo shell against the container and executing rs.initiate() resolves the problem
[17:09:03] <pward123_> is there some file that's being modified by that command that's not in /data/db ?
[17:09:13] <StephenLynx> docker?
[17:09:18] <StephenLynx> that thing with a whale?
[17:21:50] <ldiamond> Is there text search in mongo that allows to return "similar" results (like with trigrams or levenshtein distance and such)?
[17:22:58] <macbroadcast> hello all, i want to import initial data into a mongodb but given command " mongorestore -h localhost -d leanote --directoryperdb /home/user1/leanote/mongodb_backup/leanote_install_data/ " gives me an "Unexpected identifier" whats wrong with the command ?
[17:23:52] <mrmccrac> ldiamond: you'll probably need to supplement your mongo instance w/ elasticsearch or something for doing that kind of search i imagine
[17:25:57] <ldiamond> mrmccrac: I see, do you know if mongo integrates well with solr or elastic search?
[17:28:33] <mrmccrac> http://blog.mongodb.org/post/95839709598/how-to-perform-fuzzy-matching-with-mongo-connector
[17:30:36] <ldiamond> Ok so what I'm looking for is mongo-connector
[17:30:37] <ldiamond> Thanks
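[Editor's note] For reference, the kind of similarity scoring ldiamond is after — and which MongoDB's built-in text search doesn't provide, hence the mongo-connector/Elasticsearch route — is edit distance. A classic dynamic-programming Levenshtein sketch:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))   # 3
```

Running this per-document in application code works for small collections; a search engine indexes n-grams so it scales.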
[17:38:53] <gt3> when doing a range with $gt and $lt and limit 100, i have some _id's that are missing. So when I do that, sometimes instead of getting 100 results back I get like 73. Is there a way to ensure I always get 100 back?
[17:42:05] <gt3> for example: User.find({'_id' : { "$gte" : 0, "$lt" : 100 } }).limit(100) # if any _id's fields are missing, i'll get less than 100 but I want it to skip those and keep going so I always get 100
[17:42:16] <StephenLynx> if you got 73 it's because only
[17:42:21] <StephenLynx> 73 documents matched it.
[17:42:22] <StephenLynx> right?
[17:42:25] <gt3> yeah
[17:42:34] <StephenLynx> afaik mongo cant do that.
[17:42:46] <StephenLynx> you will have to fill the rest manually.
[17:43:08] <gt3> ok thanks. i'm doing this from a python script, so I can probably just make a variable that I can adjust to get what i need then
[17:44:09] <gt3> or maybe i'll just search based on {} to get all results instead of by _id
[17:48:38] <NoOutlet> You could sort by _id so that you get the _ids closest to 0 if that is desired.
[17:48:45] <NoOutlet> And then limit to 100.
[17:50:38] <gt3> NoOutlet: hmm I tried that but it doesn't seem to be working...
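[Editor's note] NoOutlet's suggestion — sort on _id and limit, rather than assuming a contiguous range exists — simulated on a list with gaps (the data and collection name are made up):

```python
present_ids = [i for i in range(200) if i % 3 != 0]   # fake _ids with gaps

# Instead of find({_id: {$gte: 0, $lt: 100}}) and hoping all 100 exist,
# take the first 100 ids at or above the start value:
batch = sorted(i for i in present_ids if i >= 0)[:100]

print(len(batch))        # 100, despite the gaps
# pymongo sketch: db.users.find({"_id": {"$gte": 0}}).sort("_id", 1).limit(100)
```

For the next page, start the range after the last _id in the previous batch.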
[17:54:28] <NyB_> Hi (again), is there a way to configure (i.e. make it shut up) the logger in the Java 3.0 driver?
[18:40:15] <Aria> /part
[18:57:44] <roadrunneratwast> How does a mongoose schema map to the raw mongo db?
[18:58:23] <roadrunneratwast> For example if I have BookEntry = new Schema({paragraph: String})
[18:58:56] <roadrunneratwast> Does that map to the mongodb collection bookentry?
[19:01:42] <StephenLynx> afaik it just doesn't.
[19:01:57] <StephenLynx> I never used mongoose
[19:02:18] <StephenLynx> but from what I know of mongo
[19:02:26] <StephenLynx> it has nothing to ensure schemas
[19:02:35] <StephenLynx> you can just do a find on the console client
[19:38:02] <beardedhen> Hello
[19:38:42] <beardedhen> I'm trying to investigate why I have so much outbound traffic (using a local/private network for most traffic)
[19:39:10] <beardedhen> When running `netstat -t -u -c` I see a lot of
[19:39:11] <beardedhen> tcp 0 0 myserver.com:51885 myserver.com:27017 ESTABLISHED
[19:40:15] <beardedhen> It's a replicaset, but the slaves are all accessing private.myserver.com. Why is it trying to access itself externally rather than localhost?
[19:43:33] <pward123_> are there default locations where mongo stores config files?
[19:43:51] <beardedhen> sec
[19:44:15] <beardedhen> `/etc/mongod.conf`
[19:44:41] <pward123_> heh thanks. must not be fully awake yet. right after I pressed enter, I started a find/grep
[19:46:07] <styles> I have a document with comments. I have a comment have a "username" and "userid" (username for display purposes).
[19:46:16] <styles> if people change their username
[19:46:21] <styles> what's the best way to do this
[19:47:44] <pward123_> db.update({userid:"1234"},{$set:{username:"blah"}}) should do it I think
[19:49:04] <pward123_> anyone have an idea of where rs.initialize stores its changes?
[19:49:21] <StephenLynx> you either duplicate data and update everything with said duplicate or you perform additional queries to obtain the relational data.
[19:49:36] <StephenLynx> the solution pward provided is the best approach, imo.
[19:49:46] <StephenLynx> because people don't change usernames often.
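[Editor's note] The pattern pward123_ and StephenLynx describe — duplicate the username into each comment, then fix every copy when it changes — simulated on plain dicts (collection and field names are illustrative):

```python
comments = [
    {"userid": "1234", "username": "old_name", "text": "hi"},
    {"userid": "5678", "username": "other", "text": "yo"},
]

# Equivalent of a multi-document update:
#   db.comments.update({userid: "1234"}, {$set: {username: "new_name"}}, {multi: true})
for c in comments:
    if c["userid"] == "1234":
        c["username"] = "new_name"

print([c["username"] for c in comments])   # ['new_name', 'other']
```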
[19:51:08] <Derick> pward123_: local database, in system.replset
[19:55:47] <pward123_> Derick: if I run mongo and issue a rs.initiate(), stop mongo, wipe the dbpath clean and start mongo I still get info about replSet can't get local...
[19:56:10] <pward123_> is there a location other than dbpath where the initiate makes changes?
[19:58:46] <pward123_> the real problem is that I'm running mongo in a docker container that gets removed every time its run and I need to persist all data in a data volume container
[19:59:11] <pward123_> currently, I'm only persisting the dbpath tree. however, I'm still having to run rs.initiate each time I start the container
[20:00:55] <jayjo> I've set up a signal on the initialization of documents, post_init, and put a testing print statement to see when it executes. Whenever I start a connection to my db, it initializes every document in my database. Is this typical behavior?
[20:19:04] <joannac> jayjo: what does "initialisation" mean?
[20:19:51] <joannac> pward123_: nope, all in dbpath. are you sure you're saving the whole thing?
[20:25:50] <pward123_> joannac: yep. just learned about "docker diff". looks like the container is modifying files in /var/db as well. will try adding that
[20:41:17] <pward123_> what is /root/.dbshell ?
[20:42:38] <pward123_> ah nm
[20:52:01] <FunnyLookinHat> If I'm running a database for an internal tool with fairly low-load ( i.e. a dozen writes every 10 seconds ) - is there still a good reason to have a primary and secondary server setup? i.e. I don't do that for the MySQL applications we rely on - and they do just as much ( if not more ) work.
[21:02:32] <cheeser> probably not
[21:08:50] <joannac> FunnyLookinHat: as long as you're taking backups, and can take the downtime if the server goes down
[21:10:04] <FunnyLookinHat> Ah ok
[21:10:07] <FunnyLookinHat> yeah I have a daily image that I do
[21:10:12] <FunnyLookinHat> Thanks for the help!
[22:02:49] <ezakimak> in pymongo what's the diff between system.js and system_js ?
[22:03:38] <ezakimak> i recall vaguely that they aren't entirely interchangeable, but the docs leave it ambiguous
[22:26:53] <Skunkwaffle> I'd like to use mongo aggregation to find the total number of records that have a particular value, and divide them by the total number of documents that have another value. Here's what I'm doing now: http://pastie.org/9872417
[22:27:17] <gt3> anyone know why find({'_id' : { $gt : 0, $lt : 100 } }) would return 100 results fine from pymongo, but returns nothing directly in a mongo shell?
[22:28:33] <Skunkwaffle> This works fine for single events, but I'm not sure how to proceed if I want to use an array of values for each. Does anyone know how to match against an array in a $cond?
[22:28:39] <gt3> the field exists fine, and i get results if i type this directly in a mongo shell find({}, {'_id':1}).limit(3)
[22:30:19] <Skunkwaffle> gt3: are you sure you're using the correct database & collections? I've made that mistake before.
[22:31:29] <Skunkwaffle> Also check your datatypes. I'm not sure how pymongo interacts with mongo, but if your _id values are strings instead of ints, you might not be able to find them directly in the shell.
[22:32:44] <mrmccrac> connecting to a mongos vs. connecting directly to a shard?
[22:34:03] <gt3> Skunkwaffle: ah yeah they're strings
[22:35:53] <Skunkwaffle> I'd be suspicious of the results from pymongo too then. They may be fine, but they may also be matched lexicographically, which is probably not what you want.
[22:37:26] <gt3> Yeah. I've been getting funky results, so was hoping the shell would tell me why, but that's probably why. Which is not good because i don't know how else I can chunk this into ranges if i can't use _id
[22:37:44] <gt3> maybe objectId somehow
[22:37:57] <gt3> since it's a timestamp
[22:38:02] <Skunkwaffle> can still use _id, you just have to ensure your values are integers
[22:38:39] <Skunkwaffle> Oh, why not convert them to seconds then?
[22:41:49] <Skunkwaffle> You'll likely want to use 64 bit ints if you do though. Otherwise you'll run out of space in about 22 years.
[22:41:59] <Skunkwaffle> Run out of time I should say.
[22:51:47] <ezakimak> how many years does a 53-bit int provide?
[22:52:23] <cheeser> 4?
[22:52:33] <cheeser> depends on what that number represents
[22:52:40] <ezakimak> seconds
[22:52:53] <cheeser> well, that math is pretty easy...
[22:53:01] <cheeser> 2^53 --> seconds
[22:53:03] <ezakimak> 2^53 / 86400
[22:53:10] <cheeser> then convert to years
[22:53:14] <ezakimak> / 365
[22:53:48] <ezakimak> this # needs a nodebot like ##javascript
[22:54:33] <LouisT> why
[22:55:13] <ezakimak> convenience
[22:55:23] <LouisT> to do basic math?
[22:56:42] <jumpman> yes, basic math such as taking two to the fifty third and dividing it by eighty six thousand and four hundred
[22:56:48] <jumpman> basic things that even a child could do without a calculator
[22:56:55] <jumpman> why would you want that?
[22:58:34] <LouisT> that was already my question
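[Editor's note] The arithmetic the channel is ribbing each other about, done out:

```python
seconds = 2 ** 53
years = seconds / 86_400 / 365.25
print(f"{years:.3g}")    # about 2.85e+08, i.e. roughly 285 million years
```

So a 53-bit seconds counter is in no danger of overflowing, unlike the 32-bit case.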
[23:12:49] <djlee> Is there any way to downgrade a v2 2dsphere index on a mongo install that doesn't support them?
[23:19:18] <gt3> I don't get mongo. I have string _id's like "1", "2", etc but doing db.userprofile.find({}, {_id:1}).sort({$natural:1}) shows them as 1, 2, etc except suddenly it goes from 574 to 1068 to 547
[23:19:53] <gt3> is there a better way to sort numbers that are strings?
[23:23:26] <djlee> gt3: since when were mongoid's incremental?
[23:23:32] <gt3> things work fine from pymongo at least. I can find the highest id with User.find_one(sort=[("_id", -1)])
[23:24:15] <gt3> djlee: shrug, my brain just wants to work one way and going back between a db layer and direct mongo client is confusing me
[23:24:54] <gt3> think i just need to stay in pymongo where i'm having consistency
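[Editor's note] What gt3 is seeing is lexicographic ordering of string _ids — "1068" sorts before "547" because "1" < "5". In miniature:

```python
ids = ["574", "1068", "547"]

print(sorted(ids))            # ['1068', '547', '574']  (string order)
print(sorted(ids, key=int))   # ['547', '574', '1068']  (numeric order)
```

Storing the _ids as integers (or zero-padding the strings) makes the database's sort order match the numeric one.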
[23:28:50] <sssilver> Hey guys, I have a nested document in the form of {"a": 1, "b": 2, "c": {"aa": 11, "bb": 22}} -- I need to only update bb's value. But if I pass update() a data = {"c": {"bb": 22}}, naturally that ditches "aa". What's the way to do this?
[23:28:52] <sssilver> Thanks a lot!
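[Editor's note] The usual answer to sssilver's question is dot notation — {$set: {"c.bb": ...}} touches only that nested key instead of replacing the whole "c" subdocument. The effect, simulated on a plain dict:

```python
doc = {"a": 1, "b": 2, "c": {"aa": 11, "bb": 22}}

# update sketch: collection.update_one({...}, {"$set": {"c.bb": 99}})
doc["c"]["bb"] = 99            # only c.bb changes

print(doc["c"])                # {'aa': 11, 'bb': 99} -- "aa" survives
```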