#mongodb logs for Thursday the 11th of October, 2012

[02:26:51] <entropyfails> Heya, a quick hunt through the docs didn't help much: I want to see how many hits a particular collection is getting, but the log is mostly filled with initandlisten lines with no real data in them. Is there something to increase the log verbosity so I can get an idea of which queries get run most often?
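One option not mentioned in the log: the built-in query profiler records operations per database, which answers "which queries run most often" without raising mongod's verbosity. A minimal sketch from the mongo shell (the 100 ms threshold is an arbitrary choice):

    // level 1 profiles only operations slower than the slowms threshold;
    // level 2 would profile everything
    db.setProfilingLevel(1, 100);
    // profiled operations land in the capped system.profile collection
    db.system.profile.find().sort({ ts: -1 }).limit(5);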
[03:43:56] <Oddman> big up 10gen and their free online courses
[03:43:56] <Oddman> woot!
[04:40:01] <sat2050> hi
[04:40:07] <sat2050> i need some help
[04:40:44] <sat2050> my logs are getting flooded by these messages
[04:40:56] <sat2050> [rsSync] info DFM::findAll(): extent 5:74314000 was empty, skipping ahead. ns:local.replset.minvalid
[04:43:04] <scrappy> I have a mongo noob question about using $set,
[04:44:15] <scrappy> I'm using: collection.update(object, {$set : {"somekey" : value}, true, false);
[04:44:30] <scrappy> and mongo rejects this, telling me that I'm missing a colon after property id. why's that?
[04:45:37] <tpae> is value an object
[04:45:53] <scrappy> value is a defined variable
[04:46:05] <scrappy> it is a defined javascript object, so..yes
[04:46:57] <tpae> oh
[04:46:59] <tpae> it's missing }
[04:47:08] <tpae> {$set : {"somekey" : value } }
[04:48:10] <LouisT> ha, scrappy, i did the same thing multiple times yesterday -.-
[04:48:17] <scrappy> sure, sure. I fixed that, and mongo has the same complaint. I also tried changing the colon between $set and the curly brace to an =
[04:50:07] <tpae> what is object ?
[04:50:23] <tpae> i think you should check that also
[04:50:38] <scrappy> object is the return value from a .find() operation, and the find() was successful
[04:51:30] <tpae> hmm.. i don't see anything wrong :/
[04:52:18] <crudson> scrappy: if you have the full object already and are passing it in update to locate the document, then just change 'somekey' and call .save(object)
[04:54:42] <crudson> otherwise we need more details about what object and value are to diagnose this. if it's the result of a 'find' then it will be a cursor, which is invalid to pass back to update
[04:55:53] <scrappy> "value" is just this, in javascript: var value = "foo";
[04:56:05] <scrappy> but your point about the cursor being invalid sure is important.
[04:56:26] <scrappy> invalid as a param to update, that is.
[04:59:15] <crudson> so it can't be the exact result of find. Either findOne or a specific document in the result. But you can just pass the query you used to locate the document in the first place straight to update(), rather than fetching the document and passing that back.
[05:01:19] <scrappy> simply done, crudson. and now mongo says field names cannot start with $ [set]
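Putting crudson's advice together, a minimal sketch of the intended call in the mongo shell (the collection and field names here are hypothetical stand-ins for scrappy's):

    // the query goes in the first argument, the $set document in the second;
    // "field names cannot start with $" usually means they were swapped or merged
    var value = "foo";
    db.mycoll.update({ name: "example" },           // same criteria used to find the doc
                     { $set: { somekey: value } }); // update operator in its own document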
[05:56:07] <scrappy> exit
[06:56:54] <SeyelentEco> Just ran into interesting bug in PHP MongoDB Driver...
[06:57:35] <SeyelentEco> testing to see if string passed in is mongo id... so I create a new MongoId($query)
[06:57:53] <SeyelentEco> then test if $id == $query ... and that works.. normally
[06:58:05] <SeyelentEco> unless you pass in "pirates of the caribbean"
[06:58:17] <SeyelentEco> MongoId apparently thinks that's a valid MongoId
[06:58:31] <SeyelentEco> weird
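A plausible explanation, not confirmed in the log: "pirates of the caribbean" is exactly 24 characters, and some older driver versions validated ObjectId strings by length alone rather than requiring hex digits. A stricter pre-check, sketched in shell JavaScript:

    // accept only 24 hex characters before constructing an ObjectId
    function looksLikeObjectId(s) {
        return /^[0-9a-fA-F]{24}$/.test(s);
    }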
[07:24:54] <sunfmin> I have `readers` and `posts`, and I want to track which reader has already read which post, then sort the posts depending on the logged-in reader: unread posts on top, and within the unread and read groups, ordered by published time.
[07:24:55] <sunfmin> How should I store it in mongodb?
[07:25:04] <sunfmin> My first idea is:
[07:25:05] <sunfmin> - readers collection
[07:25:05] <sunfmin> - reader_posts collection {reader_id: 11111, entry_id: 2222, read: true, publishedat: "2012/12/12 12:11:11,112"}
[07:25:06] <sunfmin> - posts collection
[07:25:30] <sunfmin> Is this what people normally do, like in an RDBMS?
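Assuming the reader_posts schema sunfmin sketches above, the ordering he describes falls out of a compound sort, since false orders before true in BSON. A minimal sketch:

    // unread (read: false) first, newest first within each group
    db.reader_posts.find({ reader_id: 11111 }).sort({ read: 1, publishedat: -1 });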
[07:30:47] <SeyelentEco> is each post only for 1 reader or are they shared across?
[07:32:22] <[AD]Turbo> hi there
[07:44:00] <samurai2> christkv : hi, you're the one who made the Java example using mongo with storm, right? :)
[07:44:32] <christkv> yeah but have not touched the code in awhile alas
[07:45:30] <samurai2> because I have set up a small cluster just to test the code, but I don't see any pom.xml in the git repo
[07:46:00] <samurai2> and is it that storm can only be used on a capped collection? :)
[07:47:57] <samurai2> and basically, I'm still not quite sure what kind of application needs storm with mongo? :)
[07:48:15] <BlackPanx> hi everyone
[07:53:36] <BlackPanx> one question, we are interested in upgrading an existing mongo replica set from version 2.0.1-mongodb_1 to 2.2.0-mongodb_1
[07:53:44] <BlackPanx> did anyone have any problems ?
[07:53:49] <BlackPanx> we tried upgrading on a test database
[07:54:36] <BlackPanx> by taking down each member of the replica set and upgrading it to the latest from the repository
[07:54:42] <BlackPanx> and we didn't have problems
[07:54:55] <kali> BlackPanx: http://docs.mongodb.org/manual/release-notes/2.2/#upgrading-a-replica-set
[07:55:17] <BlackPanx> yes i know how to do it. we did it already... but i'm interested if there were any issues with the latest mongo
[07:55:36] <BlackPanx> cause this is production, and we wanted to wait for mongo 2.2.1 maybe
[07:55:39] <BlackPanx> to make sure it's stable
[07:55:40] <BlackPanx> but...
[07:55:59] <BlackPanx> there are so many features in 2.2 version that we want to skip waiting now
[07:56:05] <BlackPanx> and upgrade to 2.2.0 maybe
[07:56:28] <BlackPanx> but we just want to be sure there are no strange bugs and it's stable... anyone using it in production ?
[07:56:34] <kali> we've been running 2.2.0 for a few weeks here (in a replica set + sharding setup) no particular problem
[07:57:10] <BlackPanx> and with upgrade ?
[07:57:16] <BlackPanx> no issues ?
[07:58:20] <kali> not with the upgrade. there is this thing though: some findAndModify() behaviour has changed
[07:59:18] <kali> in our case, it was easy to fix, the trouble was more to find them all
[07:59:34] <kali> but the upgrade per se was as easy as advertised
[08:00:08] <kali> bump the secondaries one by one, stepDown primaries, bump the remaining nodes
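For reference, the rolling procedure kali summarizes, sketched as shell steps (ordering per the 2.2 release notes linked above):

    // 1. take each secondary down in turn, install the new binaries, restart it
    // 2. once all secondaries run 2.2, ask the primary to hand over:
    rs.stepDown();
    // 3. upgrade the former primary; it rejoins the set as a secondary
    rs.status();  // confirm every member is back in PRIMARY/SECONDARY state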
[08:00:09] <BlackPanx> great
[08:00:19] <BlackPanx> great
[08:00:23] <BlackPanx> that seems a piece of cake
[08:00:24] <BlackPanx> :)
[08:00:28] <joe_p> 2.2.1 should be GA this week
[08:00:37] <BlackPanx> but as i saw mongodb is fucking bulletproof shit...
[08:00:54] <BlackPanx> uff 2.2.1 this week ?!
[08:01:01] <BlackPanx> this would be awesome!
[08:01:01] <christkv> samurai2: so storm makes sense mostly with capped collections as it's for processing a continuous flow of information
[08:01:06] <BlackPanx> we can't wait for this update
[08:01:15] <BlackPanx> ... for sure
[08:01:53] <christkv> samurai: say f.ex count and group stuff without involving the actual server
[08:01:55] <BlackPanx> joe_p where did you see that ?... you think it will come to the repository this week ?
[08:02:15] <joe_p> rumor has it
[08:02:47] <BlackPanx> ok... well that's no good :) we'll just go with 2.2.0 i think
[08:03:06] <BlackPanx> i'm not really fond of rumors... usually they're wrong :)
[08:03:18] <BlackPanx> thanx for the heads up tho
[08:05:16] <christkv> BlackPanx: as far as I know it's scheduled for release tomorrow
[08:05:29] <christkv> BlackPanx: barring any last moment major changes
[08:05:47] <christkv> https://jira.mongodb.org/browse/SERVER/fixforversion/11494
[08:06:17] <BlackPanx> oh this is great!
[08:06:21] <BlackPanx> we will wait then
[08:06:25] <BlackPanx> this is awesome
[08:06:29] <BlackPanx> thanks christkv
[08:11:46] <Michale> hi, if I know the ObjectId, could I dump a gridfs file's content to a txt file using mongodb's shell cmd?
[08:14:57] <NodeX> mongofiles
[08:15:50] <Michale> Can I achieve it with something like "db.fs.chunks.find()" and dump the result to a file?
[08:20:39] <NodeX> use mongofiles
[08:22:15] <Michale> NodeX, thanks. My distribution happens to have no such tool. I will try to download it.
[08:22:56] <NodeX> it comes with all distributions afaik
[08:25:46] <Michale> NodeX: fedora14?
[08:28:04] <NodeX> that's an operating system not a mongodb dist
[08:30:35] <Michale> OK, I will try to build from source
[08:33:59] <samurai2> christkv : ic. :)
[08:50:47] <NodeX> Anyone know if mongo has a cap on the number of objects allowed in a field ... for example http://pastie.org/5034038 <-----
[08:51:03] <NodeX> I had 45 or so and it was not returning data as expected
[10:59:45] <jasiek> hey guys. i'm having an odd issue with a mongo instance, and i can't understand why it's happening.
[11:00:16] <jasiek> i have a script that goes over several million records and updates some of them
[11:00:34] <jasiek> occasionally (3 times in 20M), i get this message from the ruby driver: could not obtain connection within 5.0 seconds. The max pool size is currently 1; consider increasing the pool size or timeout.
[11:00:53] <jasiek> i don't understand why mongo would drop the connection in the first place.
[11:01:53] <jasiek> i'm running mongo 2.2.0, and ruby-mongo 1.5.2
[11:08:07] <DinMamma> Hiya.
[11:08:50] <BlackPanx> is it possible to replicate only a certain database in a replica set ?
[11:09:05] <BlackPanx> like with mysql for example... where you give it an ignore-table expression
[11:09:06] <DinMamma> I've copied over a database with db.copyDatabase; on the source server all the copying is completed and the _id index is built, but the db.copyDatabase command has yet to terminate (give me back the cursor). I run 2.2, anyone know why that might be?
[11:13:15] <DinMamma> db.currentOp says "msg" : "index: (3/3) btree-middle",
[11:13:43] <DinMamma> Which is strange since the log says it's done building the index
[11:14:52] <DinMamma> Ah, my mistake, something else is up. It has yet to clone over a different collection.
[11:14:55] <DinMamma> Must be a network issue
[12:10:46] <ryn> hi guys
[12:11:20] <ryn> http://pastie.org/5034777
[12:11:21] <ryn> SyntaxError: missing { before function body (shell):3
[12:11:25] <ryn> pls help
[12:13:04] <wereHamster> learn javascript
[12:14:43] <ron> wereHamster: dude, you just reminded me of MacYET :)
[12:15:13] <ron> algernon: blasphemy!
[12:15:23] <wereHamster> ron: line 3
[12:15:30] <wereHamster> ryn: ^^^
[12:16:25] <ryn> sorry guys, i'm a noob :(
[12:16:50] <NodeX> we thought MacYET returned the other day as a new IP / Nick
[12:16:59] <NodeX> (we being me and one unnamed other)
[12:30:14] <ryn> thnx a lot for the help
[13:09:37] <fredix> hi
[13:10:56] <fredix> question about the c++ api : const char* toto = GridFSChunk::data(int len) gives me the size of the chunk in len, but strlen(toto) does not return the same length
[13:12:37] <kali> fredix: strlen will get confused if the data is binary and contains byte 0
[13:16:12] <fredix> kali: indeed
[13:22:49] <Jt_-> hello guys
[13:23:26] <Jt_-> i have a little problem, i must migrate my db
[13:23:45] <Jt_-> i'm using mongodump & mongorestore
[13:24:13] <Jt_-> but when mongorestore finishes i get this error
[13:24:14] <Jt_-> Assertion: 13111:field not found, expected type 2
[13:24:43] <Jt_-> i read that it is an index problem caused by a different mongo version
[13:25:49] <Jt_-> i tried to reindex after restore, but when i dump the new db (with the new index) and restore, the error persists
[13:27:36] <Jt_-> anyone have an idea how to resolve this issue?
[14:08:09] <FriedBob> I've recently installed mongodb from packages on Ubuntu 12.04 LTS (mongod --version shows db version v2.0.4, pdfile version 4.5 ). For some reason, when I do "sudo service mongodb start" it starts, then immediately exits. If I do "sudo mongod" then it starts and runs. Any ideas on where to look for more info, or what might be the cause?
[14:08:44] <NodeX> check the logs
[14:08:57] <NodeX> perhaps the data dir does not exist - that's a common error
[14:10:49] <FriedBob> Initially it did not, but then I created it and was able to get it to start and stay running when I called it directly
[14:11:00] <NodeX> permissions then probably
[14:11:08] <NodeX> check the log file and it will tell you
[14:12:16] <FriedBob> Ok, will take another look once I get this fire out. Such a shame all the things that get in the way of "real" work at times.
[14:19:53] <FriedBob> Would it be logging to anywhere other than /var/log/mongodb/mongodb.log ? That file didn't get touched when I started mongodb as a service.
[14:20:54] <FriedBob> Though I did see " mongodb main process (16496) terminated with status 2" in syslog
[14:21:08] <NodeX> locate mongodb.conf
[14:21:15] <NodeX> see where you're logging it
[14:24:15] <FriedBob> "logpath=/var/log/mongodb/mongodb.log"
[14:24:41] <FriedBob> Though I did see in /etc/init/mongodb.conf it calls mongodb from there with "--quiet"
[14:24:50] <NodeX> does the dir exist?
[14:25:23] <FriedBob> Yes. I can tail -f /var/log/mongodb/mongodb.log and see stuff written to it when I call mongodb directly
[14:25:30] <FriedBob> Just not when I start it as a service
[14:25:51] <NodeX> try to remove the --quiet
[14:25:56] <FriedBob> I think the --quiet in the upstart stuff is preventing logging, or interfering
[14:28:32] <FriedBob> Nope.. it was perms
[14:28:43] <FriedBob> Looks like my puppet module is off.
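Since the root cause turned out to be permissions, a quick check along these lines would have shortened the hunt (paths and the mongodb user/group name assume the stock Ubuntu package layout):

    # the service runs as the mongodb user, so it must own the data and log dirs
    ls -ld /var/lib/mongodb /var/log/mongodb
    sudo chown -R mongodb:mongodb /var/lib/mongodb /var/log/mongodb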
[15:03:05] <cmacmurray> Good morning mongo users. I am writing some python mongodb management scripts, and I was wondering if someone could help me figure out the appropriate pymongo command to manipulate replica sets, specifically the rs.initiate() function. I have tried things like data = db.command('rs.initiate()') but it is not working; docs would be great
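The catch cmacmurray is hitting: rs.initiate() is a shell-only JavaScript helper, not a server command name. Drivers issue the underlying replSetInitiate command against the admin database. Sketched from the shell, with a hypothetical one-member config:

    // what the rs.initiate() helper sends under the hood
    db.adminCommand({ replSetInitiate: { _id: "rs0", members: [ { _id: 0, host: "localhost:27017" } ] } });

In pymongo the equivalent would be running that command document on the connection's admin database, rather than passing the helper's name as a string.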
[17:00:08] <Ahlee> trying to write a map/reduce query that only emits a field if the record doesn't contain another bit of information
[17:00:39] <Ahlee> should it just be if (this.deleted : {$exists : false})? That doesn't appear to be valid
[17:00:58] <Ahlee> and I don't know how to tell the JS shell what I mean. And that's the most frustrating.
[17:06:03] <crudson> Ahlee: why not include {field:{$exists:false}} in the initial map reduce query?
[17:07:11] <Ahlee> crudson: I didn't know you could :) My map/reduce experience is pretty minimal.
[17:07:55] <crudson> Ahlee: look at "Command Syntax" here to see what options you can give to map reduce: http://www.mongodb.org/display/DOCS/MapReduce
[17:09:53] <Ahlee> ah! so the map function can remain 'dumb', as only items matching the query will be sent to it
[17:11:14] <crudson> Ahlee: yep, you should minimize the number of input documents as appropriate
[17:20:00] <Ahlee> lesson learned there, order is important
[17:20:07] <Ahlee> query before out, bad.
[17:20:14] <Ahlee> anyawy, thanks so much crudson. Much appreciated.
[17:20:30] <crudson> anytime
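For anyone reading along, a minimal sketch of the pattern the exchange above settles on (collection and field names hypothetical): the query option prunes the input before map ever runs, so the map function stays "dumb".

    db.records.mapReduce(
        function () { emit(this.category, 1); },                // map: runs only on pre-filtered docs
        function (key, values) { return Array.sum(values); },  // reduce
        { query: { deleted: { $exists: false } }, out: { inline: 1 } }
    );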
[17:26:28] <codemagician> Are there any control scripts for mongod for Ubuntu server?
[17:27:41] <crudson> codemagician: not out of the box last time I saw
[18:03:18] <gdoko> Hi! I've noticed that the MongoDB documentation states that sharding is performed on a per-collection basis. However, the following open JIRA issue https://jira.mongodb.org/browse/SERVER-939 states (in the comments) that if you have e.g. 1,000 collections in 1 database, they will all be in the same shard. Doesn't this imply per-database sharding?
[18:27:12] <aster1sk_> Afternoon ladies and gentlemen.
[18:36:34] <aster1sk> I have a problem with my aggregate queries.
[18:36:41] <aster1sk> See this http://stats.dev.riboflav.in/app/issue_views
[18:36:52] <aster1sk> Click the query tab to see what I'm querying.
[18:37:37] <aster1sk> Without the $match I get (kind of) the result I want, but I can't $lt / $gt on date [d]
[18:40:00] <aster1sk> Ahh, ok well give me the details of exactly what you need and I'll see if I can source it.
[18:48:40] <Austin__> I seem to be having a problem with the init script for mongodb 2.2 from the official repos when installing to Amazon AWS using one of their "Amazon Linux" (CentOS-like) images.
[18:49:21] <Austin__> After installing, I do "sudo service mongod start" and then get http://pastie.org/5036620
[18:49:43] <Austin__> if I kill everything shown in that pastie, and then start mongo the same way, it works.
[18:50:12] <Austin__> this is problematic for me because my automatic initialization scripts hang while waiting for Mongo to actually start.
[18:50:29] <Austin__> (killing everything shown in the pastie means logging in with a separate session)
[18:50:55] <Austin__> er, by automatic initialization I mean server build scripts. I'm attempting to build this unattended.
[18:51:45] <Austin__> I don't have this problem if I install mongo20-10gen-server
[18:56:20] <Austin__> Is anyone familiar with this sort of behaviour?
[18:57:15] <aster1sk> Bahh... http://stats.dev.riboflav.in/app/issue_views -> click 'Query' : can someone explain why it's returning the entire document from the $match query?
[19:00:22] <aster1sk> If I remove the match it returns issue_views -> 45
[19:00:28] <aster1sk> But with the match everything breaks.
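aster1sk's paste links are dead, but the symptom he describes fits a common 2.2 aggregation mistake: a $match stage alone just filters and passes whole documents through, so the grouping has to follow it in the pipeline. A sketch with hypothetical field names, including the $lt/$gt date bounds he wanted:

    db.issue_views.aggregate([
        { $match: { d: { $gte: ISODate("2012-10-01"), $lt: ISODate("2012-10-11") } } },  // filter first
        { $group: { _id: null, issue_views: { $sum: 1 } } }                              // then aggregate
    ]);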
[19:05:30] <_m> Anyone have a sample query which returns a subset of fields? Using the documentation's example doesn't seem to work.
[19:05:43] <_m> From the docs: coll.find("_id" => id, :fields => ["name", "type"]).to_a
[19:06:02] <_m> Using Ruby driver 1.7.0
[19:06:57] <crudson> _m: ruby will mash those together into a single hash, you need to: col.find({:_id => id}, {:fields => [:name, :type]})
[19:07:13] <crudson> first is the query, second is options
[19:08:03] <crudson> ignore me using symbols instead of strings, doesn't matter, just a habit of mine
[19:10:09] <_m> crudson: Does matter, just not to mongo. =P
[19:10:42] <_m> crudson: Thanks, btw. The documentation led me to believe they were supposed to be in the same hash. Admittedly, that seemed really strange.
[19:11:52] <crudson> right, not when querying, just don't use symbols when referencing document keys in results. Yeah, it's the kind of thing you can stare at for a while and not see what's wrong :)
[19:22:57] <mobiltt> Hi everybody
[19:23:11] <mobiltt> MongoDB is cool.
[19:23:38] <mobiltt> I have a question though, if anyone wouldn't mind lending a hand
[19:25:05] <crudson> mobiltt: just ask the question, if someone can help they will try
[19:25:38] <mobiltt> I have an application running alright on mysql, but I have one table that I use as a kv store. I like mongo a little better for many reasons, so I'm offloading that table to a mongo collection. I'm coming to a situation where I need to associate data in mongodb with a number of mysql tables as well.
[19:26:49] <mobiltt> Has anyone had any experience, associating data, like "select ids from table where ids between 1 and 100" and then doing a mongodb find() on those 100 ids
[19:30:46] <mobiltt> That's of course an example situation. I store random data including some user data in the mongo kv store, but need to "join" that with tables in my mysql database. I know this is not ideal. But could anyone point me in the correct direction here?
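The stitching crudson goes on to describe usually boils down to an $in query on the ids pulled from the SQL side. A minimal sketch, with a hypothetical collection and key name:

    // ids fetched from mysql first, e.g. SELECT id FROM users WHERE id BETWEEN 1 AND 100
    var idsFromMysql = [1, 2, 3, 4, 5];
    // then one round trip to mongo for the matching documents
    db.kv_store.find({ user_id: { $in: idsFromMysql } });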
[19:42:03] <crudson> mobiltt: you're going to have to stitch those rows/documents together yourself. Depends how they are going to be used or offloaded. I have one (note: read-only) app that is hybrid mysql/mongodb that combines data from both in realtime. It works fine, but I'd think very carefully as to whether this is a sensible road to start down.
[19:42:36] <crudson> mobiltt: if using rails you could theoretically combine ActiveRecord with MongoMapper if you want to have "associations", but you'll have to define those manually
[19:43:44] <crudson> it's an architectural problem and as such hard to answer simply
[19:45:43] <mobiltt> crudson: thanks for your answer
[19:49:07] <mobiltt> crudson: I agree with you that it is firstly an architectural problem. I'm going to attempt to further remove the need for associations across database systems. I just wanted to know if anyone has had similar experiences, and what was the eventual outcome. A new smart db driver, or other changes..
[19:56:39] <crudson> mobiltt: I did it with bespoke application code, with parts abstracted to do the stitching between them. Again, a fuzzy answer.
[20:02:42] <bhosie> i have a collection that i'm trying to use '$addToSet $each' to append multiple sub docs to an array using the php driver. When i run the update, my new subdoc gets wrapped in another doc called {'$each' : ....} if i perform the update in the shell, it works as expected. i'm not sure if this is a bug or my syntax is botched.. http://pastebin.com/YyU8G7rU
[20:15:34] <crudson> bhosie: I don't use php, but is $value definitely an array?
[20:17:10] <bhosie> crudson: it is
[20:18:09] <bhosie> here's a var_dump() http://pastebin.com/YtWu4W0S
[20:25:41] <bhosie> is there a way to pass a json string straight through to the server? since it works on the shell, something like that might be a good workaround
[20:30:25] <crudson> bhosie: again, not a php user, but that looks like a single object (although the last paren is mismatched). Also, you care about uniqueness? if not does $pushAll work? What shell command works, paste that.
[20:36:13] <bhosie> crudson: ah i missed copying the first line. but yeah, that's an array - http://pastebin.com/MGc14ycS - uniqueness is necessary.... here's the working query http://pastebin.com/2xJXy33V
[20:39:51] <bhosie> oops. pasted the wrong working command. one sec
[20:39:55] <crudson> bhosie: the 'each' doesn't appear in your js one
[20:40:01] <ispuk> mongo rules
[20:46:40] <bhosie> db.fbinsights.update({"data.id" : "xxxx/insights/page_posts_impressions_frequency_distribution/days_28"}, {$addToSet : {'data.values' :{ $each : [{'testtesttest': 'hello world'}]}}})
[20:54:46] <bhosie> so yeah, that command on the shell works fine
[20:58:52] <crudson> bhosie: not sure :( Is that trailing comma at the end of the update command valid? I'd hope someone with php mongo api familiarity could help, I can't really spend too much time going through it. Can you at least inline something simple like [{'testtesttest': 'hello world'}]. Try and create the simplest php example that will work then start using your variables.
[21:01:08] <bhosie> crudson: yeah thanks for at least the sanity check and the time. before initially posting, i did narrow it down specifically to trying to push subdocs into the array. passing a simple array through the 'each' command like array('foo', 'bar', 'baz') works fine
[21:02:02] <bhosie> i'm thinking more and more this is a bug with the php driver.... just wanted some confirmation before submitting it through Jira
[21:04:55] <bhosie> an alternative would be to use the db->execute() function. but that causes a write lock :(
[21:10:50] <LouisT> would it be stupid to do: if ($col->count($obj) > 0) { $data = $col->findOne($obj); ... }
[21:11:29] <crudson> bhosie: yeah, good luck
[21:12:42] <crudson> LouisT: yes it would be, count performance with attributes is a current issue (https://jira.mongodb.org/browse/SERVER-1752). findOne() will use index correctly.
[21:12:55] <LouisT> crudson: ah ok, thanks
[21:16:26] <LouisT> crudson: findOne can use $or correct?
[21:17:50] <crudson> LouisT: anything you can pass to find, it will just limit to 1 and return null if no match. Type db.collection.findOne - with no parens in the shell to see what it does
[21:18:27] <LouisT> oh ok, had no idea shell did that
[21:19:04] <crudson> LouisT: it will print source for any js method - helpful for things like your question
[21:24:48] <crudson> LouisT: the key is it simply sets 'limit' to -1, which means cursor will be closed after '1' is read
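So the count-then-findOne pattern collapses to a single call, with a null check covering the "does it exist" question. Sketched in the mongo shell (field names hypothetical):

    // findOne accepts anything find does, $or included, and returns null on no match
    var doc = db.things.findOne({ $or: [ { a: 1 }, { b: 2 } ] });
    if (doc !== null) {
        printjson(doc);  // exactly one document, cursor already closed server-side
    }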
[21:30:36] <cjhanks> Is there an up to date git repo of just the Mongo Cpp driver?
[21:45:52] <piotr> what can I do with a cursor that times out?
[21:46:03] <piotr> I am in a python loop processing a collection
[21:46:09] <piotr> any ideas?
[21:52:19] <cjhanks> piotr: I had the same problem, I had to split up the query on the indexable field.
[21:53:07] <timeturner> should dates be stored as unix timestamp numbers?
[21:53:21] <timeturner> or should I store it as a BSON Date type
[21:53:23] <cjhanks> piotr: additionally you can specify NoTimeout in the query.
[21:53:26] <piotr> cjhanks: seems it can be disabled
[21:53:45] <piotr> will the cursor be closed correctly, without a timeout, on destruction?
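The option cjhanks mentions: a find can be flagged so the server never reaps the cursor for idleness. From the mongo shell it looks like the sketch below; in the Python driver of that era the rough equivalent was passing timeout=False to find(). The flip side, relevant to piotr's last question, is that a no-timeout cursor must be exhausted or explicitly closed, or it lingers server-side.

    // disable the server's ~10 minute idle timeout on this cursor
    var cur = db.big_collection.find().addOption(DBQuery.Option.noTimeout);
    while (cur.hasNext()) { printjson(cur.next()); }  // iterating to the end closes it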
[21:53:54] <_m> timeturner: Depends on your expected usage.
[21:54:43] <cjhanks> piotr, I wondered about that myself, since it's timed out server side. If the client side crashes, does it hang indefinitely?
[21:54:58] <cjhanks> That's why I segmented when I couldn't find an answer.
[21:55:50] <timeturner> well all my time rendering is done on the client side
[21:56:05] <timeturner> so I prefer to keep the data as compact as possible
[21:56:15] <timeturner> having said that what exactly is the BSON Date type?
[21:56:39] <timeturner> does mongodb internally store the unix timestamp but just display it as an ISODate when I query for it?
[21:56:48] <timeturner> or does it actually store that huge Date
[21:59:22] <cjhanks> timeturner: I believe it's a 64-bit field storing ms since 1970.
[22:00:54] <cjhanks> Unix time is likely 32-bit. So, it will be twice as large, "theoretically".
[22:04:45] <timeturner> unix time and "ms since 1970" is the same thing right?
[22:06:16] <jawr> hey, if i have information like this: http://pastie.org/5037747 what would be more efficient: to store it all in one collection and then have the program parse it (i.e. into groups of days/weeks/months), or have separate collections which would allow me to pull the information straight out of the database, but would mean duplicated data?
[22:07:46] <cjhanks> timeturner: I think its 32 bit since same epoch which is also 1970, but 64 bit doesn't have the Year 2038 problem.
[22:14:16] <timeturner> so it stores it as 64 bit timestamp representation which prevents the year 2038 problem if stored as a BSON Date type. otherwise it stores it as the regular 32 bit unix timestamp if stored as a number which does not prevent the year 2038 problem
[22:14:21] <timeturner> right?
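For the record, not stated in the log: BSON Date is indeed a signed 64-bit count of milliseconds since the Unix epoch, so it has no year-2038 limit; but a plain JavaScript number inserted from the shell is stored as a 64-bit double, not a 32-bit int, so it is no more compact. A sketch of the two shapes:

    // both fields occupy 8 bytes in BSON; the Date also round-trips as a real date type
    db.events.insert({ when_date: new Date(), when_ms: new Date().getTime() });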
[23:23:47] <WarDekar> how do i query for nested fields? like i have a collection with the structure {'a': {'b': c}}
[23:23:57] <WarDekar> and i want to query for all 'b's that match
[23:38:00] <crudson> WarDekar: the documentation on Querying and embedded objects will be good reading
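The short answer from that documentation: dot notation reaches into embedded documents. For WarDekar's shape, a minimal sketch (collection name and value hypothetical):

    // matches documents where the embedded field a.b equals the given value
    db.mycoll.find({ "a.b": "someValue" });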