#mongodb logs for Thursday the 28th of June, 2012

[00:10:47] <libbyh> hi all, having trouble accessing my mongodb from a remote location.
[00:17:01] <jY> probably not mongo's fault
[00:19:45] <dstorrs> libbyh: check ping on machine, check firewalls, verify you can ssh in, verify mongod running on target machine
[00:20:40] <libbyh> i think my problem is a firewall issue. ping OK, mongo via SSH ok, some ports OK. no response on 27017 or 28017 even though they're in my iptables
[00:21:16] <jY> is mongo listening to only 127.0.0.1?
[00:21:25] <libbyh> and they show as listening in netstat. i could connect when i was in the same building as the server but not from home. (not sure how much overlap btw IP of server and IP when I was able to connect.)
[00:21:31] <libbyh> i set bind_ip = 0.0.0.0
[00:34:43] <libbyh> thanks, dstorrs and jY. think my mongo config is ok and will go work on firewall.
[01:31:08] <dnnsmanace> hello, i am trying to set up a db structure where users get sent a message when they are inactive
[01:31:20] <dnnsmanace> i am trying to think of the best way to do this without parsing every user constantly
[01:32:06] <dnnsmanace> the app is based around uploading files
[01:32:16] <dnnsmanace> if they haven't uploaded anything in 30min, i want them to get a notification
[01:32:35] <dstorrs> dnnsmanace: you don't need anything server side for this. Do it client side.
[01:32:43] <dstorrs> doesn't need to touch the DB
[01:33:15] <dnnsmanace> i guess i am trying to think of the logic behind this in my app, doesnt have to be db
[01:33:16] <dstorrs> every time they take an action, cancel any existing idle_timer, do the action, then set a 30min idle_timer.
[01:33:32] <dstorrs> if an idle_timer pops, have the client send a notification to itself (update a view, whatever)
[01:33:50] <dnnsmanace> and if i have thousands of users there would be thousands of idle timers?
[01:33:54] <dnnsmanace> is that scalable?
[01:33:58] <dstorrs> yes, but all running client side.
[01:34:10] <dstorrs> i.e., not on your server, not on your DB
[01:34:22] <dnnsmanace> lets say the client is only an email
[01:34:34] <dnnsmanace> meaning its all done thru email, so i can't have it client side
[01:34:42] <dnnsmanace> my server has to count to 30, and then send an email notification
[01:35:02] <dstorrs> I thought you said they were uploading files. They are doing that by email ?
[01:35:14] <dnnsmanace> yeah.. dont ask :)
[01:35:36] <dstorrs> oohkay.
[01:35:55] <rossdm> if they are uploading via email then how can they have any sort of session that requires activity < 30 mins?
[01:36:41] <dnnsmanace> it requires a kind of continuous communication, so i want to track the time to the last email they sent
[01:36:48] <dnnsmanace> and if its > 30 i want to send a notification
[01:38:00] <dstorrs> have a collection "last_seen"; structure is: { _id : username, time : epoch / ISODate / whatever }
[01:38:30] <dstorrs> hm. correction:
[01:39:12] <dstorrs> meh, that'll work.
[01:39:20] <dstorrs> you could do it time-major, but this is fine.
[01:39:42] <dstorrs> then set a desc index on the 'time' field.
[01:40:30] <dstorrs> once a minute, do a 'find' for users with time <= 30_mins_ago. mail everyone you find. use a capped collection so you don't need to worry about storage
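A minimal mongo-shell sketch of the scheme dstorrs describes, for reference. The collection name, field names, and notification hook are all hypothetical, and note the comparison direction: inactive users are the ones whose last-seen time is at or before the cutoff.

    // descending index on 'time' so the periodic scan is cheap
    db.last_seen.ensureIndex({time: -1})

    // on every incoming email, upsert the sender's last-seen time
    db.last_seen.update({_id: "some_user"}, {$set: {time: new Date()}}, true)

    // once a minute: find users not seen for 30+ minutes
    var cutoff = new Date(Date.now() - 30 * 60 * 1000)
    db.last_seen.find({time: {$lte: cutoff}}).forEach(function (doc) {
        // send the notification email for doc._id here
    })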
[01:41:14] <dnnsmanace> that makes sense, and that shouldnt choke up the app with lots of users
[01:42:23] <dnnsmanace> thanks a lot
[01:44:09] <dstorrs> np
[01:46:12] <dstorrs> hm. actually, one correction. you need to also not keep pinging them after they get active again.
[01:46:59] <dnnsmanace> after that i just reset the timer?
[01:47:06] <dnnsmanace> and it wont come up in the search
[01:47:09] <dnnsmanace> correct?
[01:52:15] <dstorrs> oh, no. right, all good.
[01:52:20] <dstorrs> I confused myself for a moment there.
[01:52:32] <dstorrs> yes, just reset the timer and you're fine.
[01:52:57] <dstorrs> if you use a capped collection you may run into an issue with update, not sure. I know you can't remove from a CC
[01:54:44] <Ahlee> Hi, does mongo log when an index was created?
[01:55:00] <dnnsmanace> cool
[01:55:37] <dnnsmanace> Ahlee: i think part of the unique id is the date, google it
[01:56:00] <dstorrs> dnnsmanace: Only if it's an ObjectID but then yes.
[01:56:16] <dstorrs> but that's not what Ahlee is asking.
[01:56:22] <dnnsmanace> ok ill be quiet :)
[01:56:38] <dstorrs> Ahlee: check your log file. should be in /var/log/mongo/mongo.log
[01:56:48] <dstorrs> if not there, look at /etc/mongo.conf to see where it is.
[01:57:07] <dstorrs> try creating a collection, adding an index, then look in log to see what's there.
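A quick way to try that from the shell; the database and collection names here are made up:

    use test
    db.idxdemo.insert({a: 1})        // first insert creates the collection
    db.idxdemo.ensureIndex({a: 1})   // mongod logs the index build
    // now check the mongod log (e.g. /var/log/mongo/mongo.log) for the build entry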
[01:57:46] <Ahlee> thanks you two
[01:59:36] <Ahlee> Found it in the log, thanks.
[01:59:44] <Ahlee> that's freaking awesome.
[02:01:49] <dstorrs> np
[02:26:31] <macrover> I have a question about Timestamp conversion here: http://pastie.org/4163385
[02:26:37] <macrover> looking for pointers, thanks
[03:03:29] <lizzin> why doesn't `show dbs` from within the mongo client show collections created by my liftweb app using net.liftweb.record?
[03:07:16] <lizzin> instead, several files have been created in dbpath/, such as collection.0, collection.1, and collection.ns
[03:16:36] <Ahlee> lizzin: appears your collection name is collection, those files are the preallocated files for the collection namespace
[03:17:09] <Ahlee> lizzin: You should be able to run db.collection.find().foreach(printjson) to see what's being inserted
[03:18:41] <Ahlee> Does this mean I didn't get this field indexed, or it didn't get inserted? from my log file: Wed Jun 27 23:04:40 [rsSync] ibp.logging Btree::insert: key too large to index, skipping ibp.logging.$message_1_background_ 1362 { : "SurVo - survey():
[03:22:07] <lizzin> Ahlee: you're right, i changed the name of the collection to 'collection'. the real name is 'phonebook'
[03:22:15] <lizzin> Ahlee: > db.phonebook.find().foreach(printjson)
[03:22:15] <lizzin> Wed Jun 27 22:18:47 TypeError: db.phonebook.find().foreach is not a function (shell):1
[03:22:34] <lizzin> i am able to query the collection from my app successfully though
[03:23:27] <Ahlee> lizzin: sorry, forEach(printjson)
[03:23:54] <Ahlee> lizzin: so what's the question - you're looking for why the collection name is created under the db?
[03:24:13] <lizzin> Ahlee: forEach return nothing. just a new empty prompt
[03:24:27] <Ahlee> are you in the right db?
[03:25:12] <lizzin> Ahlee: i was hoping to add documents to the collection from within my app and then connect to the db via the mongo client to verify things are going as expected
[03:26:04] <lizzin> Ahlee: shouldn't `show dbs` list all db's?
[03:26:07] <Ahlee> lizzin: ok, so there are dbs, and collections in the db, you've issued use <db_name>, db.<collection>.count()
[03:26:11] <Ahlee> lizzin: yes, it should
[03:26:50] <lizzin> count() returns 0
[03:26:56] <Ahlee> You're not in the right db then
[03:27:07] <Ahlee> is my guess
[03:27:08] <lizzin> but `show dbs` doesn't even show the 'phonebook' collection
[03:27:15] <Ahlee> phonebook is a collection
[03:27:19] <Ahlee> dbs are not collections
[03:27:25] <Ahlee> dbs contain many collections
[03:27:34] <lizzin> hrm, guess im a bit lost on this then
[03:27:39] <Ahlee> i feel your pain.
[03:27:44] <lizzin> and collections contain documents right?
[03:27:50] <Ahlee> correct
[03:28:18] <Ahlee> so if my db name is foo, collection is bar
[03:28:20] <Ahlee> use foo
[03:28:25] <Ahlee> db.bar.count()
[03:28:30] <Ahlee> returns documents in bar collection, in foo database
[03:29:22] <lizzin> how do you create a db?
[03:29:49] <Ahlee> first use creates
[03:30:01] <Ahlee> so use 'asdf' will create a db with no collections
[03:30:19] <Ahlee> db.collection.insert({"a":1}) will preallocate and insert
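Putting Ahlee's explanation together as one shell session; 'foo' and 'bar' are placeholder names:

    use foo                  // switches to (and lazily creates) database 'foo'
    db.bar.insert({a: 1})    // first insert creates collection 'bar' and preallocates files
    show dbs                 // 'foo' now appears
    show collections         // lists 'bar' (plus system.indexes)
    db.bar.count()           // 1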
[03:33:49] <lizzin> hrmm
[03:36:03] <lizzin> so if 'use foobar' creates db 'foobar'
[03:36:40] <lizzin> doesn't db.foobar.save({name: 'ummyea'}) add this document to 'foobar'
[03:36:48] <lizzin> and if so, where is the collection in this?
[03:36:53] <lizzin> what is its name?
[03:37:17] <Ahlee> foobar is the collection name, the leading db I read as "this"
[03:37:34] <Ahlee> foobar collection, in foobar database
[03:37:40] <Ahlee> i think, let me double check that
[03:38:24] <lizzin> oh ok
[03:40:16] <Ahlee> yeah, that's what happened on my system when I just did it: I wound up with a database foobar, a collection named foobar in that database, and a single document consisting of an ObjectID and name: "ummyeah"
[03:44:59] <dnnsmanace> what format is this date: 2012-06-28T03:39:45.768Z
[03:46:12] <AAA_awright> dnnsmanace: ISO-8601
[03:46:44] <AAA_awright> What's the exact rule on . in property names? I have several documents with Object-hashtables of {"url": {information...}} and that doesn't seem to be a problem, except recently
[03:48:06] <dnnsmanace> whats the best way to compare ISO-8601 date and a date like this Thu, 28 Jun 2012 03:42:06 GMT
[03:49:22] <dstorrs> stupid question -- how do I find out what DB I'm currently in?
[03:50:00] <dnnsmanace> got it
[03:50:06] <dstorrs> dnnsmanace: convert them to a single format. compare. enjoy a milkbone in a commie-free world phase
[03:50:10] <AAA_awright> dnnsmanace: Use a date-time library, most of them should be able to convert arbitrary strings to dates... Javascript has the Date object, look at the documentation for that for the specifics
[03:50:25] <dnnsmanace> does $lt work on dates?
[03:51:14] <dstorrs> dnnsmanace: as a thought -- I personally have just been forcing everything to UTC before insert and storing it as epoch values.
[03:51:24] <dstorrs> it makes comparisons and range searches dead easy
[03:51:35] <dstorrs> and every date library in existence speaks epoch
[03:51:47] <dstorrs> YMMV
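A sketch of the epoch-based approach dstorrs describes; the 'events' collection and 'ts' field are invented for the example:

    // store times as milliseconds since the epoch, normalized to UTC before insert
    db.events.insert({user: "alice", ts: Date.now()})

    // range searches become plain numeric comparisons
    var thirtyMinAgo = Date.now() - 30 * 60 * 1000
    db.events.find({ts: {$lt: thirtyMinAgo}})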
[03:51:59] <Ahlee> dnnsmanace: I'm having issues with date comparisons. Namely, issues with javascript's ISODate and Date
[03:52:13] <AAA_awright> Javascript is fun, milliseconds since 1970 as a 64-bit float >_>
[03:52:16] <Ahlee> hasn't been an issue yet, so I'm avoiding diving into it
[03:52:57] <dnnsmanace> converted with javascript's .toISOString() and it works with $lt
[03:53:14] <dstorrs> so, anyone know how to determine current database?
[03:53:53] <AAA_awright> dstorrs: db.getName() ?
[03:54:38] <dstorrs> aha. thank you.
[03:55:06] <dstorrs> I missed that before.
[03:55:14] <AAA_awright> So, anyone an authority on the usage of "." here?
[03:59:19] <AAA_awright> > db.nodes.find({_id:ObjectId("4ebcb47a8bfb992231000001")},{menu:1});
[03:59:19] <AAA_awright> { "_id" : ObjectId("4ebcb47a8bfb992231000001"), "menu" : { "http://magnode.org/Menu_MainMenu" : { "title" : "Main Page", "weight" : 0 } } }
[03:59:26] <AAA_awright> er, well, yeah
[03:59:28] <AAA_awright> Should this be possible?
[04:00:00] <AAA_awright> Somehow, MongoDB happily stored my key names in the database without any domain restrictions whatsoever
[04:00:39] <Guest1231231> do you have a point?
[04:01:29] <AAA_awright> Can someone explain the behavior? Because now, it's not working.
[04:05:11] <wereHamster> AAA_awright: should what be possible?
[04:06:45] <AAA_awright> Should it be possible? idk, I just want to know what's the behavior that inserted that key into the database
[04:07:24] <wereHamster> I don't get your question...
[04:07:29] <Guest1231231> are you unable to ask something reasonable?
[04:07:55] <AAA_awright> Let's try again maybe?
[04:07:55] <AAA_awright> What's the exact rule on . in property names? I have several documents with Object-hashtables of {"url": {information...}} and that doesn't seem to be a problem, except recently
[04:08:01] <Guest1231231> or like usually: brain disabled?
[04:08:23] <wereHamster> AAA_awright: except recently? Can you elaborate on that?
[04:08:37] <AAA_awright> Now I'm getting a domain error, lemme see exactly
[04:08:46] <AAA_awright> "Error: Server Error: not okForStorage"
[04:08:48] <Ahlee> so mgd picking it up as a sub field
[04:09:09] <Ahlee> like "foo.bar" for {foo : { bar: 1, baz: 1}} ?
[04:13:38] <AAA_awright> Guest1231231: If you're going to be rude don't bring it into private messages. As the author of a program, I need to know the behavior of MongoDB so I can code accordingly. I'm asking for that behavior, since apparently it's not documented. Got it?
[04:15:13] <AAA_awright> Ahlee: As far as I can tell, no. It's the actual key name. Unless MongoDB collapses sub-objects into parents like {a:{b:1}} -> {"a.b":1}, but I wouldn't expect that
[04:15:58] <Ahlee> AAA_awright: Sadly, I can't speak definitively on that. I believe that "a.b" is a short cut for {a:{b:1}}
[04:16:10] <Ahlee> but, I'm too green to be confident.
[04:18:18] <AAA_awright> Ahlee: There's nothing I can do to make it spit out {"a.b.c":1}, it looks like it's only used in queries
[04:21:15] <deoxxa> AAA_awright: correct
[04:22:09] <deoxxa> AAA_awright: {"a.b.c": 1} means exactly what it looks like - it would translate to pseudocode of `if (a.b.c === 1) { ... }'
[04:22:21] <deoxxa> (with some guards around undefined values, etc)
[04:23:17] <deoxxa> unless that's not what you're asking
[04:23:28] <deoxxa> i'm not quite sure - you're not making a whole lot of sense unfortunately :(
[04:24:21] <AAA_awright> I've never had a problem inserting documents with properties containing "." until about 3 days ago
[04:24:37] <AAA_awright> I have several documents containing "." in property names
[04:24:45] <AAA_awright> I'm trying to figure out *how*
[04:25:05] <AAA_awright> Or rather, why I'm getting errors now and not earlier
[04:25:05] <deoxxa> ok, that's a better explanation
[04:25:29] <AAA_awright> But really I'm just looking for documentation on the subject
[04:25:38] <AAA_awright> But apparently no one's heard of any such documentation
[04:25:49] <AAA_awright> So now I'm thinking this is a bug?
[04:25:58] <Guest1231231> the bug is sitting in front of the keyboard
[04:26:23] <Ahlee> Keep on being helpful there big shooter.
[04:26:32] <deoxxa> ok, so after a cursory google search of "mongodb key dot"
[04:26:33] <AAA_awright> Guest1231231: Do you have an answer to my question to offer?
[04:26:38] <Max-P> Is it possible to update an associative array with a key that doesn't exist in an existing document? I need to {$set: {'folder.my_user_id': 'INBOX'}} but it seems to fail when the field is not already set =/ Thanks
[04:26:42] <deoxxa> i see https://jira.mongodb.org/browse/JAVA-151
[04:27:02] <Guest1231231> i tend to ignore questions from people that can not read other persons answers if they are perfectly correct
[04:27:43] <deoxxa> Guest1231231: what's up, MacYET?
[04:27:54] <AAA_awright> I've been using Javascript, specifically, mongolian for Node.js which uses (used?) the mongodb driver in turn
[04:28:18] <deoxxa> yes, node-mongodb-native
[04:30:08] <AAA_awright> Guest1231231: You asked "do you have a point?" before any other person even acknowledged my question. Again, rephrased, do you have something useful to contribute?
[04:30:30] <deoxxa> looks like it's never been valid, but there's been inconsistent driver support for stopping you from doing it
[04:31:07] <deoxxa> it doesn't *technically* get rejected at the database, but only to allow maximum flexibility for drivers implementing weird crap, by the looks of things
[04:31:32] <deoxxa> sane drivers will reject it, or at least strongly advise you not to do it
[04:31:37] <AAA_awright> I *assumed* it was done at the MongoDB level
[04:31:51] <AAA_awright> or rather, the server level
[04:32:06] <deoxxa> maybe it does now, but as far as i know it never was before
[04:32:13] <deoxxa> as long as you were talking to it at a low enough level
[04:32:16] <deoxxa> i.e. wire protocol
[04:33:49] <AAA_awright> I'll have to take a closer look at the driver to see then, now I'd guess that queries aren't sent as-is
[04:35:03] <deoxxa> might want to try sifting through the commit log of node-mongodb-native
[04:35:07] <deoxxa> see if it was recently changed
[04:36:36] <deoxxa> it's broken behaviour anyway, conceptually
[04:36:55] <deoxxa> it could only make things harder in the long run
[04:38:24] <AAA_awright> Having full range keys would be nice... Storing URIs is a major use-case I would think
[04:38:39] <deoxxa> as keys?
[04:38:54] <AAA_awright> Yeah
[04:38:57] <deoxxa> ugh
[04:39:22] <AAA_awright> <_<
[04:39:55] <deoxxa> well, it's a bad enough idea that drivers actively try to stop you from doing it
[04:40:05] <deoxxa> so maybe time to re-think that design
[04:42:01] <AAA_awright> Except the next-best design I have is... escape the dots in the URIs
[04:42:54] <AAA_awright> There's no reason not to attach this information to a document, it makes sense to have an Object of { URI : {...} }
[04:44:13] <AAA_awright> I can think of other use cases, storing link-level data about all the URLs you reference on a page. You want to know where they are, what version of the page they're linking to, etc.
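One way to implement the "escape the dots" fallback mentioned above is percent-style escaping in application code before a URI is used as a key. These helpers are hypothetical, not part of any driver:

    // escape '%' first so decoding round-trips, then the characters MongoDB
    // restricts in key names ('.' anywhere, '$' at the start of a key)
    function encodeKey(k) {
        return k.replace(/%/g, "%25").replace(/\./g, "%2E").replace(/\$/g, "%24")
    }
    function decodeKey(k) {
        return decodeURIComponent(k)
    }

    encodeKey("http://magnode.org/Menu_MainMenu")   // "http://magnode%2Eorg/Menu_MainMenu"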
[05:30:26] <ferrouswheel> have people come up with a way to kill a foreground indexing?
[05:30:34] <ferrouswheel> db.killOp does nothing
[05:31:50] <ferrouswheel> and it brings everything to a crashing halt. seems like a retarded design that background=False is default.
[05:35:38] <ferrouswheel> issue here: https://jira.mongodb.org/browse/SERVER-3067 seems like a big issue; i'm not the only one whose site it's brought down.
[05:36:02] <ferrouswheel> and you can't even cleanly shutdown the server either
[05:45:18] <ferrouswheel> kill -9 mongod it is then! ;-p
[07:37:14] <[AD]Turbo> hola
[07:54:48] <dstorrs> hello all
[07:55:06] <dstorrs> no one about at this late and lonely hour, I presume?
[08:14:33] <NodeX> wassup
[08:28:47] <LambdaDusk> Hi, I have seen that mongoose uses "Buffer" as a type for fields, I guess it's a BSON byte buffer... what is the use of that data type?
[08:30:49] <kali> LambdaDusk: store binary data ? :)
[08:31:23] <LambdaDusk> kali: But what makes Buffer better for this than String?
[08:32:55] <kali> assumptions are made about strings by mongo and the driver. they have to be utf-8 encoded, for instance
[08:33:25] <kali> so if you try to store an arbitrary byte array in mongo as a string, you'll get an error
[08:33:38] <kali> well, unless you're very lucky
[08:34:18] <kali> and it helps drivers do something meaningful with the data in typed languages
[09:02:31] <Gargoyle> Morning all!
[09:02:58] <Gargoyle> If I have shut down 2 nodes from a 3 node RS, can I force the remaining node to be Primary?
[09:04:00] <NodeX> I think it will auto elect as it's the only one left
[09:04:10] <NodeX> it will vote for itself
[09:04:41] <Gargoyle> Not at the moment. Its staying as Secondary. Can I force an election?
[09:05:31] <NodeX> restart it ?
[09:06:09] <Gargoyle> back up as secondary.
[09:06:15] <NodeX> on it's own ?
[09:06:19] <Gargoyle> yup
[09:06:36] <NodeX> http://www.mongodb.org/display/DOCS/Forcing+a+Member+to+be+Primary
[09:06:59] <Gargoyle> Just reading that now. Going to try increasing its priority.
[09:08:11] <Gargoyle> hmm. command must be sent to primary! :(
[09:08:48] <NodeX> bring them all up then do it
[09:10:08] <Gargoyle> got one of them back online, and the one I want to keep was promoted back to PRIMARY (I did originally shut down the secondaries).
[09:12:09] <Gargoyle> Nope. It drops to secondary as soon as it's the only one left. :(
[09:14:48] <Gargoyle> ahh. I have to remove the other members from the config.
[09:14:53] <Gargoyle> http://www.mongodb.org/display/DOCS/Reconfiguring+a+replica+set+when+members+are+down
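The reconfiguration Gargoyle lands on, sketched for the shell. The member index is a guess about the specific deployment, and force-reconfig against a non-primary requires MongoDB 2.0+:

    // on the surviving member
    cfg = rs.conf()
    cfg.members = [cfg.members[0]]    // keep only the member to promote (adjust the index)
    rs.reconfig(cfg, {force: true})   // force allows reconfig without a primary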
[10:07:38] <neil__g> is this the place to ask about the php mongodb driver?
[10:17:24] <NodeX> yep
[10:29:07] <c2c> Hello, I have a question I can't find in the docs: how can I find() documents that don't have a field 'blah' in a collection?
[10:29:39] <c2c> .find({blah: null}) only returns documents that have a blah field set to null
[10:29:47] <ron> c2c: search for $exists in the docs.
[10:29:56] <c2c> (blah is 'sparse', if it helps)
[10:30:02] <c2c> oh! that simple?
[10:30:11] <algernon> yes.
[10:30:13] <c2c> thanks .. sorry for such a dumb question
[10:30:23] <ron> no worries
[10:30:26] <ron> we've had dumber ones ;)
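The $exists form ron is pointing at, for the record:

    // documents where the 'blah' field is missing entirely
    db.coll.find({blah: {$exists: false}})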
[10:34:26] <NodeX> you can use $in : [null, ""]
[10:34:49] <ron> don't trust NodeX. he uses PHP.
[10:34:52] <NodeX> db.foo.find({field : {$in : [null, ""]}});
[10:35:08] <ron> MUHAHAHA!
[10:35:11] <NodeX> :P
[10:35:25] <NodeX> dont trust ron, he writes his code on a typewriter :P
[10:36:26] <ron> NodeX: yo mama!
[10:38:08] <NodeX> she taught you how to write code? lololol
[10:38:51] <NodeX> redis
[10:38:55] <NodeX> 2/3
[10:52:44] <neil__g> @NodeX I have a P-S-S replica set and I do a query on one of the servers. $res->count() returns the correct result count, but if I foreach() over $res it never enters the loop. Seems to be related to the secondaries being out of sync by an hour or so.
[11:11:22] <jamiel> does $res->count(true) return the correct result?
[11:14:23] <ron> that sounds very php-ish.
[11:39:05] <pilgo> Are there any tools that make schema migrations easier?
[11:40:47] <ron> considering mongo is schemaless, it can't be easier than that!
[11:40:58] <kali> what would you like such tool to help with specifically ?
[11:41:21] <kali> ron: mongodb is more or less schemaless, but usualy, your data isn't :)
[11:41:39] <ron> kali: are you mocking me?
[11:42:29] <kali> ron: no. i just mean i can see some parts of the data migrations where tools could help, but it's not really a mongodb issue imho
[11:42:44] <ron> kali: I was kidding. :)
[11:43:05] <kali> ok. don't try that with me, it does not work :)
[11:43:22] <ron> kali: you're humorless?
[11:43:45] <kali> yeah, the same way mongodb is schemaless
[11:44:03] <ron> ;)
[11:46:18] <NodeX> lol
[11:48:36] <pilgo> kali: I just created a 'date created' attribute for my collections and am wondering how I would migrate the existing documents to have a date. Any tips?
[11:48:40] <ron> NodeX: at least kali doesn't use PHP.
[11:48:53] <NodeX> PHP rocks - true story
[11:48:58] <pilgo> lol
[11:49:39] <ron> it's as useful as a rock. yeah.
[11:49:47] <kali> pilgo: if the collection is small, just iterate over it, get done with it and forget about it
[11:49:50] <NodeX> pilgo : db.foo.update({date: {$exists:false}},{$set : {date:"Somedate"}},false,true);
[11:50:07] <kali> pilgo: if the collection is big, fix the data in the collection every time you read it
[11:50:18] <NodeX> ron - out of interest what SS language do you use?
[11:50:28] <ron> NodeX: java.
[11:50:30] <kali> pilgo: migrations are a pain, because they are a pain for the database
[11:50:33] <NodeX> nuff said lol
[11:50:54] <kali> ron: i use ruby, which i'm not sure is much better than php
[11:51:06] <pilgo> kali: I guess I was just complicating things. I've got a few collections
[11:51:33] <ron> kali: it is
[11:51:47] <ron> kali: people actually thought about it before they created the language.
[11:51:48] <ron> :D
[11:52:38] <pilgo> Am I guaranteed to get the same order of documents if I sort on a date field and they all have the same date?
[11:52:56] <ron> you care about the order of the documents? O_O
[11:53:46] <kali> ron: i'm not sure it is a sufficient condition
[11:53:50] <pilgo> ron, care only in the sense that i'd rather they not be sorted willy-nilly
[11:54:47] <NodeX> all languages have their bad points - if you stay clear of them then you're fine
[11:55:21] <kali> yeah, let's kill the troll before it devours us
[11:56:39] <balboah> is there a way to check consistency of a database that is synced in a replicaset? For example to see that some data that should be there isn't?
[11:57:10] <NodeX> only by making sure it's written at write time iirc
[11:58:14] <ron> pilgo: sorting should be done on specific fields. how they are stored in the database shouldn't matter (that much).
[11:58:20] <balboah> maybe it's possible to make it resync specific db's?
[11:58:35] <kali> balboah: nope, replication is server wide
[11:58:52] <balboah> allright, I'll just clear it to 0 and get everything
[11:58:56] <pilgo> ron: Right, I mean what the db returns on find.
[11:59:16] <kali> balboah: yep. you're experiencing replication issues ?
[11:59:18] <ron> pilgo: if you use the .sort() operation, you have nothing to worry about.
[11:59:32] <balboah> kali: only because I've moved data files around and forgot some of it before I started it again
[12:00:03] <kali> balboah: yeah, in that case, it's probably better to start all over again :)
[12:00:14] <NodeX> ron = biggest troll on IRC
[12:00:27] <ron> NodeX: pfft. you wish.
[12:00:34] <balboah> oh well
[12:00:39] <NodeX> [12:55:21] <hell_razer> hello, anyone used https://github.com/iamteem/redisco ?
[12:00:39] <NodeX> [12:55:37] <ron> I'm guessing iamteem did.
[12:00:39] <balboah> it's only 97G ;)
[12:00:42] <NodeX> trolled!
[12:01:07] <ron> that's not trolling. that's answering a stupid question with a stupid answer.
[12:01:15] <NodeX> ^^ trolling
[12:01:28] <NodeX> or an offshoot of trolling
[12:01:47] <kali> balboah: it's the same number of keystrokes as 97k :)
[12:02:10] <kali> balboah: even if you'll kill one or two more polar bears
[12:02:37] <balboah> kali: hah
[12:02:57] <hell_razer> NodeX: )
[12:03:06] <NodeX> ;)
[12:06:32] <ojii> hi, does anyone know a workaround the bug in pymongo that prevents it from talking to remote (authenticated) databases?
[12:06:40] <ojii> (when in a threaded application)
[12:46:33] <ojii> does anyone here have a solution for http://stackoverflow.com/questions/9191136/pymongo-fails-to-work-with-multithreading?
[13:14:14] <WormDrink> Hi
[13:14:33] <WormDrink> For mongodb - in what size increments is the journal written?
[13:19:57] <mids> WormDrink: default journalCommitInterval is 100ms
[14:11:52] <pilgo> kali, ron, thanks for your help
[14:21:43] <ron> pilgo: I helped? o_O
[14:44:09] <cemalb> Hi. I have a replica set on EC2 w/ a primary, secondary and arbiter. We restarted all 3 individually and now get the error "not master or secondary, can't read" when trying to query the primary or secondary. rs.status() shows "loading local.system.replset config (LOADINGCONFIG)". What should be my next step in getting these back online?
[14:45:12] <cemalb> Also rs.initiate() gives "local.oplog.rs is not empty on the initiating member. cannot initiate."
[14:45:26] <kali> dont do initiate()
[14:45:49] <cemalb> Okay
[14:46:23] <kchodorow_> cemalb: can you pastebin a bigger chunk of the log?
[14:46:28] <kali> you're sure the data has been remounted at the right place ?
[14:47:05] <cemalb> kchodorow_, Absolutely. Anything I should paste in particular?
[14:47:16] <kali> cemalb: check that the servers still have the same hostnames and ips
[14:47:35] <cemalb> kali, okay
[14:47:39] <kchodorow_> ~50 lines from around the "loading local.system.replset config (LOADINGCONFIG)"
[14:49:02] <kali> cemalb: and kchodorow_ is right, the log would probably help
[14:54:24] <macrover> this is a question about updating timestamp value, http://pastie.org/4163385
[14:55:17] <cemalb> http://pastebin.com/jNrxyBMG
[14:55:46] <cemalb> There's an excerpt from the log
[14:56:46] <sharp15> how detailed is mongodb's permissions/access tracking? for instance, can i give a group of people access to a database and have mongo note who created (and when) which documents?
[14:58:09] <kali> cemalb: everything clear on the hostname front ? because "replSet error self not present in the repl set configuration" sounds like the replica can not find itself in the set
[14:59:21] <cemalb> Yeah, I was wondering about that. The hostname is mw-rs1-data1 which is in there though
[15:02:48] <kali> cemalb: can you check also that "host" in db.serverStatus() is consistent with what you expect ?
[15:06:10] <cemalb> kali: That looks correct.. "host" : "mw-rs1-data1"
[15:08:08] <kchodorow_> cemalb: can you start a mongo shell on mw-rs1-data1 by running: "mongo mw-rs1-data1:27017/test"
[15:09:36] <augustl> need a field in my db, "api token", some kind of UUID would do. Should I just generate my own, or does mongo have something built-in that ensures uniqueness etc?
[15:18:44] <cemalb> kchodorow_, kali: Welp, it was in fact a problem with the host name. It was mapped to the wrong IP. Everything's working now. Thanks for the help!
[15:25:38] <kchodorow_> cemalb: np!
[15:25:43] <kchodorow_> glad it worked out
[15:54:46] <JoeyJoeJo> I've got documents that include lat/lon. How can I find all documents in a collection that are some distance from a given lat/lon?
[15:56:49] <Derick> a maximum distance you mean? or more than a certain distance?
[15:57:31] <JoeyJoeJo> I want to find anything within x kilometer radius of a coordinate
[15:59:15] <drudge\work> geoNear
[16:00:00] <Ahlee> Does this mean I didn't get this field indexed, or it didn't get inserted? from my log file: Wed Jun 27 23:04:40 [rsSync] ibp.logging Btree::insert: key too large to index, skipping ibp.logging.$message_1_background_ 1362 { : "SurVo - survey(): <snip>
[16:00:38] <drudge\work> JoeyJoeJo: check out http://www.mongodb.org/display/DOCS/Geospatial+Indexing
[16:01:43] <JoeyJoeJo> drudge\work: I was just reading that actually. I have my lat and lon stored as db.collection.lat and db.collection.lon. Do I need to store them in one field?
[16:02:06] <drudge\work> yeah
[16:02:22] <JoeyJoeJo> ok, thanks
[16:03:31] <JoeyJoeJo> In that case I need to wipe all the data in a collection so I can re-import it properly. How can I do that?
[16:07:00] <drudge\work> JoeyJoeJo: you could loop over each rec and set a loc property
[16:07:58] <drudge\work> like:
[16:08:02] <drudge\work> http://pastie.org/4166390
[16:08:22] <drudge\work> er delete should be store.long and store.lat
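The pastie link above may rot, so here is a sketch of the kind of loop drudge\work describes, with the correction applied; the 'stores' collection name is a guess at the original paste, while 'long' and 'lat' are the field names drudge\work mentions:

    // fold the separate lat/long fields into a single 'loc' field, then index it
    db.stores.find().forEach(function (store) {
        db.stores.update(
            {_id: store._id},
            {$set: {loc: [store.long, store.lat]},   // [longitude, latitude] order
             $unset: {long: 1, lat: 1}}
        )
    })
    db.stores.ensureIndex({loc: "2d"})

    // then e.g.: db.runCommand({geoNear: "stores", near: [x, y], maxDistance: d})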
[16:23:54] <JoeyJoeJo> drudge\work: Thanks for your help. I can now run the geoNear command and return correct results
[16:24:07] <drudge\work> awesome, glad i could help
[16:55:03] <souza> Hello guys!
[16:57:15] <souza> I have the following bson in C (http://pastebin.com/ZzXtdPSM); it should return one result from mongodb, but it doesn't return anything, and i've no idea how to fix it, thanks!
[16:58:23] <souza> the "vm" object is a sub-object of another
[17:00:37] <multiHYP> hi
[17:00:57] <multiHYP> how can I find the length of an array inside a row from shell?
[17:03:41] <multiHYP> {"date":"20120628", "items":[{…},{…},{…},{…},{…},{…}]}; I want to find out the length of items array.
[17:04:25] <stefania> client-side
[17:04:43] <Derick> multiHYP: google "mongodb find size of array" - it's the first hit
[17:05:22] <multiHYP> I was reading that before coming in here and asking that question. sorry, i don't know how to ask it more clearly.
[17:05:28] <Derick> hm
[17:05:45] <Derick> are you doing a findOne() ?
[17:06:17] <multiHYP> no, on the example above I would do: db.collec.find({"date":"20120628"}).pretty();
[17:06:23] <ron> Derick: hmm, for a moment there you sounded like MACYet :)
[17:06:37] <Derick> multiHYP: but it's one document that matches date?
[17:06:42] <multiHYP> so count is obviously always one, because I only save one per day.
[17:06:47] <multiHYP> yes
[17:07:04] <Derick> db.collec.findOne({"date":"20120628"}).items.length
[17:07:08] <Derick> should do the trick
[17:07:10] <diminoten> is there a way to query for the count of documents in a collection which have arrays containing a certain value?
[17:07:12] <multiHYP> now I'm looking at map reduce suggestions, which seems a bit of an overkill.
[17:08:32] <multiHYP> brilliant, cheers Derick
[17:08:40] <multiHYP> wow 60.
[17:08:50] <Derick> multiHYP: it's just the javascript way of accessing the length of the array
[17:08:53] <diminoten> how does elemMatch work?
[17:09:13] <multiHYP> I did it first with find() and that does not have a length method.
[17:12:02] <multiHYP> migration is a pain, I don't know if it's worth combining 3 collections into 1 that would fit with my current new model. analytics is always relevant though.
[17:12:03] <diminoten> nm
[17:12:12] <stefania> elemMatch? as documented :-P
[17:43:20] <halcyon918> hey folks… I've got a question about rep sets… if I set the WriteConcern to REPLICAS_SAFE, and one replica gets the update, but the others do not… does Mongo rollback the save on the first replica so it's as if it never was saved to the first replica?
[17:47:14] <stefania> mongodb and rollback is a contradiction
[17:49:50] <kali> stefania: it's not that simple
[17:51:06] <kali> stefania: when a replica is primary and gets partitioned, it will roll back the last few writes it received before becoming aware it was partitioned
[17:51:49] <kali> halcyon918: as far as i know, this is the only case that can lead to rollback: resyncing a split replica set
[18:01:11] <dstorrs> hey all. I just connected to my development data and all the data is gone. Yet, db.coll.totalSize() has not changed.
[18:01:30] <dstorrs> this isn't a critical fail -- it's not prod data -- but I'm confused.
[18:02:03] <dstorrs> we're trying to see if the vanishing data was something we did or a Mongo issue. what could cause it aside from developer error?
[18:03:25] <dstorrs> also, am I correct that when a remove() is done, the disk-state of the data is marked 'deleted' (much like 'rm filename'), but it does not actually change the totalSize? for that you would need to run repairDatabase to compact, yes?
[18:09:35] <souza> Hello guys, i'm having a problem comparing dates with mongodb and C. this is my output: http://pastebin.com/59yuTimT It was returning all services in my database, but it must return only the services that have a date less than "1340906127000" (date in milliseconds); the last record has a date (in milliseconds) of 1340906187000 and must not be returned. does someone know how i can fix this?
[18:11:37] <dstorrs> souza: I can't make heads or tails of that output.
[18:11:47] <dstorrs> Can you post an example of two of the docs in your services collection?
[18:11:55] <dstorrs> (or wherever you store them)
[18:13:37] <souza> dstorrs: but the paste has more than one result (document), right?
[18:14:24] <fomatin> Hi
[18:14:43] <fomatin> I'm trying to use mongorestore to restore a database from S3, but I'm getting a seg fault error
[18:14:58] <fomatin> 71213 segmentation fault
[18:15:24] <souza> dstorrs: this is my code: http://pastebin.com/QvtsuEYq
[18:15:46] <dstorrs> souza: (brb)
[18:16:10] <souza> dstorrs: ok
[18:16:57] <Ahlee> Does this mean I didn't get this field indexed, or it didn't get inserted? from my log file: Wed Jun 27 23:04:40 [rsSync] ibp.logging Btree::insert: key too large to index, skipping ibp.logging.$message_1_background_ 1362 { : "SurVo - survey(): <snip>
[18:17:37] <dstorrs> souza: back.
[18:18:32] <souza> dstorrs: i've been on this problem for about two hours; it's too weird, because it shouldn't retrieve that last record
[18:19:21] <fomatin> this is what i'm getting after running mongorestore http://pastebin.com/hDZQNCSA
[18:19:46] <dstorrs> souza: if I'm reading this correctly, I think the query you're building is '{vm : { $lt : last_date}}'
[18:20:00] <dstorrs> does that want to be 'vm' or something more date-oriented ?
[18:20:58] <souza> dstorrs: i just want to get all documents that have the "last_date" $lt the given value.
[18:21:06] <dstorrs> like I said, your code is useless to me understanding what you need. I really need to see the data.
[18:21:25] <dstorrs> post two documents from your collection and I can probably give you the query you need.
[18:21:48] <souza> dstorrs: just a moment
[18:21:55] <dstorrs> but if you want what you said, then I think you just need to change "vm" to "last_date" in your query.
[18:22:55] <souza> dstorrs: the "vm" object is a sub-object of another object called "service", and this service has an array of "vm's"
[18:23:18] <dstorrs> ok. then post it, because I can't visualize it as described
[18:23:50] <souza> sorry, but post "what"? the structure from my service collection?
[18:25:21] <souza> dstorrs: http://pastebin.com/xggJfFJZ
[18:25:47] <souza> dstorrs: this?
[18:27:20] <halcyon918> kali: sorry for the late response about the replica set stuff… but does that mean that the primary will keep the write even though the other replicas didn't get it? I'm confused about the implications of a REPLICA_SAFE partial save
[18:27:28] <dstorrs> souza: perfect. reading it now.
[18:29:15] <dstorrs> souza: ok, so for each 'Services' object the 'vm' key has only one entry under it, yes?
[18:29:29] <dstorrs> or would it ever be an array?
[18:30:40] <souza> i have an array of vms, but i want to get all services with only the vms that have a date $lt the given date
[18:30:49] <dstorrs> if it's always exactly one, this will work: db.Service.find({ 'vm.last_date' : { $lt : THE_DATE_YOU_CARE_ABOUT } })
[18:32:24] <souza> dstorrs: Ok, but i'm doing this in C, and my query is something like this; it still returns dates $gt that value
[18:32:51] <souza> dstorrs: sorry for my poor english :(
[18:32:52] <dstorrs> if it gives you grief about 'last_date' sometimes being there and sometimes not, you can do this: db.Service.find({ $and : [ { 'vm.last_date' : { $exists : true } }, { 'vm.last_date' : { $lt : THE_DATE } } ] })
[18:33:21] <dstorrs> souza: no worries. my russian (?) is far worse.
[18:33:28] <multiHYP> football calling, seeyalater. :)
[18:34:53] <souza> dstorrs: my problem isn't whether it exists or not; my problem is that my query is retrieving values greater than that value, when i want only the ones less than it
[18:37:42] <souza> dstorrs: like in this paste http://pastebin.com/59yuTimT you can see the time in milliseconds in the first line, and the last record has a value greater than that one.
[18:39:47] <dstorrs> souza: please check that your query actually matches what I posted. Because what I saw in your code did not.
[18:40:59] <dstorrs> you were doing { 'vm' : { $lt : DATE }} when you needed {'vm.last_date' : { $lt : DATE}}
[18:45:22] <dstorrs> souza: make sense?
[18:45:59] <souza> dstorrs: i changed my code to this http://paste.org/51095, but it still doesn't work; it returns the vm with the higher date
[18:46:32] <souza> dstorrs: sorry for being so slow, my internet connection doesn't help
[18:47:19] <dstorrs> souza: stop working in C. Connect to the Mongo shell. Get a query that works and returns what you want. Then translate it to C.
[18:47:46] <souza> dstorrs: humm, good point!
[18:47:53] <dstorrs> I am not familiar with the C driver, but I am willing to bet money that you have still not accurately translated what I told you to use.
[18:48:54] <dstorrs> because as far as I can see, your query is now this: { vm : { $lt : 'vm.last_date' : Date }} which is not even syntactically correct
[18:51:33] <souza> dstorrs: and what is the correct query?
[18:51:56] <dstorrs> souza: I've posted it several times now. Please scrollback and read.
[18:52:46] <souza> db.Service.find('vm.last_date' : { $lt : 1340908730000 }] }); This? it was returning error. :(
[18:53:09] <dstorrs> unsurprising. it's not syntactically correct
[18:53:16] <souza> yes
[18:53:30] <souza> i posted the wrong query
[18:53:47] <souza> i tried this one > db.Service.find('vm.last_date' : { $lt : 1340908730000 });
[18:53:59] <Derick> you miss an extra set of { }
[18:54:07] <souza> and it doesn't work
[18:54:14] <dstorrs> souza: I'm trying to be nice about this, but I'm on three hours sleep and I'm starting to feel like I'm talking to a wall, because I keep posting the exact query and you keep not using it.
[18:54:21] <Derick> you need to send a javascript object to find()
[18:54:33] <Derick> not some stuff separated by a :
[18:54:52] <Derick> each object starts with a { , then has properties, and then a }
[18:55:04] <dstorrs> One last time. Do this: db.Service.find({ 'vm.last_date' : { $lt : 1340908730000 } })
[18:55:05] <Derick> a property is a keyname, followed by a : followed by a value
[18:55:20] <dstorrs> do it in the shell. not in C. it if returns the right thing, then translate to C.
[18:55:34] <souza> dstorrs: sorry, i'm over with this question.
[18:56:07] <souza> dstorrs: i'll try a little more, because this one doesn't return anything
[18:56:28] <Derick> because it's syntactically incorrect...
[18:56:33] <Derick> souza: did you read what I wrote?
[18:58:46] <dstorrs> thanks for helping Derick. maybe that was the concept I was not getting across to him.
[18:59:09] <Derick> np
[18:59:31] <Derick> dstorrs: i was thinking about just writing it in BNF though
[19:01:19] <dstorrs> Derick: I don't know if that would have been better or not. it seems that he had never really RTFM'd and had no mental model of Mongo queries. I suspect he was copy-pasting
[19:04:51] <souza_> dstorrs: thanks for the help, i don't get it yet, but i think i have a way now!
[19:05:17] <dstorrs> souza_: you're welcome and good luck.
[19:05:35] <souza_> dstorrs: thanks a lot
[19:06:19] <dstorrs> souza_: if you haven't done so, I strongly recommend sitting down and reading absolutely everything under http://docs.mongodb.org/manual/#for-developers
[19:06:44] <dstorrs> it will take several hours. It will save you several days over the next few weeks.
[19:07:37] <souza_> dstorrs: thanks i'll read! =)
[19:17:23] <souza_> dstorrs: i got the query, now i have to translate to C
[19:17:32] <dstorrs> what was the query?
[19:18:07] <souza_> i have to create a Date object and pass it in the query > db.Service.find({ "vm.last_date" : { $lt : date } });
[19:18:59] <dstorrs> glad you got it.
[19:35:02] <FerchoDB> Hi, we're doing some tests with TTL collections. We are trying the example on blog.mongodb.org but it doesn't remove the document at 30 seconds; it takes much longer before removing it
[19:35:13] <FerchoDB> do you know why this can be happening?
[19:35:50] <mids> which version do you have?
[19:37:39] <dnnsmanace> is it possible to push when doing update or only when doing find or findone
[19:40:44] <mids> dnnsmanace: you can push on update
[19:40:58] <dnnsmanace> $push
[19:41:01] <dnnsmanace> ic
[19:41:27] <FerchoDB> mids: we are testing with 2.1.
[19:41:32] <FerchoDB> 2.1.2, I'm sorry
[19:44:35] <dstorrs> FerchoDB: if you don't mind me asking, why are you using a dev version? does it have something specific you need?
[19:47:20] <FerchoDB> dstorrs, it's ok, we are just doing some documentation and investigation of new features in MongoDB
[19:47:41] <FerchoDB> these are not in-production databases.
[19:48:19] <dstorrs> makes sense. "being aware of the upcoming cool stuff" == "good" :>
[19:58:22] <FerchoDB> It takes 44 seconds to remove 30-second TTL documents
[19:58:56] <spikie> Hi all!
[19:59:57] <FerchoDB> MongoDB 2.1.2 takes 44 seconds to remove 30-second TTL documents. Is a 15-second lag expected?
[20:00:52] <php10> so i'm trying to build an aggregated stats collection. it takes every combination of the hit/sale metadata such as user, site, lander, geoip country, geoip region, http referrer, etc. and then has a bunch of counters: raw hit, index hit, unique hit, leads, etc. i need to query any of these fields and then be able to break down by any of these fields. is storing all of these combinations the best way to do this? it seems that after even a day of traffic the collection becomes massive
[20:01:33] <spikie> I use doctrine mongodb odm, and when I try to do a map reduce with the inline option, mongo doesn't return me a 'results' as expected, but a 'result' <collection_name>... could someone help me?
[20:02:06] <php10> but it seems that in order to keep any of the fields associated so i can break down the data, the only way i can aggregate is on unique combinations
[20:02:07] <dnnsmanace> how do i combine a regular update with $push?
[20:03:08] <dnnsmanace> { pair_email : "blah", { $push : {past_convos : exists.current_convo} } }
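The snippet above mixes the query document and the update document together; a combined update would look roughly like this. The collection name is taken from the error message later in the log, the $set value is a placeholder, and exists.current_convo is an application variable from the original snippet:

    db.convos.update(
        {pair_email: "blah"},                            // query
        {$set:  {current_convo: null},                   // the "regular update" part
         $push: {past_convos: exists.current_convo}}     // append to the array
    )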
[20:03:22] <JoeyJoeJo> How can I do a find to return all documents where x = 1 or x = 2?
[20:04:47] <php10> db.foo.find( { $or: [ { x: 1},{ x: 2} ] } )
[20:05:11] <JoeyJoeJo> thanks
[20:15:46] <FerchoDB> now MongoDB 2.1.2 takes about 90 seconds to remove 30-second TTL documents. is this because it's still unstable or because I'm missing something?
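For context on the delay FerchoDB is seeing: TTL expiry runs as a background task inside mongod that wakes roughly once a minute, so a document can outlive expireAfterSeconds by up to about 60 seconds; a 44-90 second total lifetime for a 30-second TTL is therefore within spec. A minimal TTL setup, with a made-up collection name:

    // documents expire ~30s after 'createdAt', enforced about once a minute
    db.ttl_demo.ensureIndex({createdAt: 1}, {expireAfterSeconds: 30})
    db.ttl_demo.insert({createdAt: new Date()})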
[20:18:17] <charnel> hi, when I try to connect to mongolab with the uri I am getting "unable to authenticate user mongodb", which is the protocol in the url. anyone know how to set the url?
[20:26:20] <dnnsmanace> hi i am getting a dup key error even though the key should be the unique id generated on creation of document
[20:26:21] <dnnsmanace> MongoError: E11000 duplicate key error index: dblblnd.convos.$id_1 dup key: { : null }
[20:27:25] <Derick> you're definitely trying to add a NULL _id though
[20:27:43] <dnnsmanace> right
[20:27:52] <dnnsmanace> but i am simply making a new document
[20:28:06] <dnnsmanace> so shouldn't it be autogenerating a unique id?
[21:00:57] <tystr> hey guys, what's your preferred method for backing up mongodb databases?
[21:09:36] <Ahlee> lie to the boss that it's being done, pray I don't need to restore
[21:09:45] <wereHamster> tystr: replica sets
[21:10:01] <tystr> we're using replica sets
[21:10:31] <Ahlee> tystr: LVM under your /data directory?
[21:10:37] <tystr> yes
[21:11:03] <Ahlee> lvm snapshot off a replicaSet member that's priority=0
[21:12:15] <Ahlee> I then mount the lvm snapshot and push the raw files over to a cheap coraid
[21:17:21] <tystr> Ahlee do you need to lock the database during the snapshot?
[21:17:47] <Ahlee> tystr: I don't and just rely on the journal to recover
[21:18:11] <Ahlee> I had no problems with it 'catching up' when I copied over to a node that fell too far behind
[21:31:45] <tystr> Ahlee cool, thanks for the feedback
[21:32:14] <tystr> I'm setting up our replica set on aws and want to make sure I have all the bases covered with regards to backups
[21:32:53] <tystr> I'm also looking at mongodump
[21:33:35] <tystr> not sure how that would perform with millions of documents in a collection, though
[21:57:12] <linsys> tystr: don't use mongodump
[21:57:37] <linsys> Use snapshots like Ahlee suggested.. it will be a lot less painful.
[21:59:30] <tystr> linsys no?
[21:59:36] <tystr> ah ok
[22:07:46] <tystr> yeah, looks like snapshots is the way to go
[22:12:13] <tomlikestorock> Is it better to store fewer fat rows with a lot of data in a row field, or is it better to split that field up into individual rows and store many smaller rows in a collection?
[22:12:47] <tomlikestorock> by rows I mean documents, I guess
[22:13:51] <Ahlee> "It depends" :)
[22:14:18] <Ahlee> I do the latter, and have issues with association logic
[22:26:12] <kenneth> hey all, i'm noticing something weird with bson
[22:26:16] <kenneth> php> =d('9b000000027575696400290000003637383533336230333162353535316261666432653966396433303564393762303030303030303000026d616369640029000000623766363932616661656564376639303639646563623236653137333236663839343436653462350002757569642d616476002100000036353133336262643863363234616438623535643536333235393838333837300')
[22:26:27] <kenneth> is equivalent to
[22:26:28] <kenneth> php> =d('9b000000027575696400290000003637383533336230333162353535316261666432653966396433303564393762303030303030303000026d616369640029000000623766363932616661656564376639303639646563623236653137333236663839343436653462350002757569642d616476002100000036353133336262643863363234616438623535643536333235393838333837300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000')
[22:26:41] <kenneth> where the string is hex-encoded bson data
[22:26:50] <kenneth> notice all the trailing null bytes?
[22:27:18] <kenneth> i'm encoding this using the C lib for bson as provided in the mongodb c driver
[22:29:28] <algernon> I assume it is so, because the bson has a length specifier in the beginning, and the php driver ignores any junk past that length
[23:06:43] <php10> question for php mongo users: trying to increment a field in an embedded document. the driver is building the $inc as follows: ['$inc']['sums']['hits'] = 1, but i'm receiving "Modifier $inc allowed for numbers only" ... note: i'm doing this on upsert. it works with ['$inc']['hits'] = 1
[23:08:04] <kenneth> algernon: well, i'm more curious why the C lib is generating a bunch of extra junk and making my payload significantly bigger
[23:10:30] <php10> my upsert inc array right from the driver: http://pastebin.com/8JnjbBtH
[23:12:32] <php10> oh i think i know, the $inc needs to be on each of the embedded fields
[23:22:00] <php10> nope... dot notation was the answer
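The dot-notation form php10 ends up with, sketched in the shell; the collection and field names are illustrative:

    // increment a counter inside the embedded 'sums' document,
    // creating the document on first sight (upsert = true)
    db.stats.update(
        {site: "example.com", day: "20120628"},
        {$inc: {"sums.hits": 1}},
        true    // upsert
    )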