[01:46:24] <thesteve0> crudson: no it is "" which is not null
[01:49:02] <crudson> thesteve0: but your .put() is not doing anything, or even the other methods mentioned. When you give it a DBObject I'd expect it to use the methods on the DBObject interface to query the state of your object (with keySet() etc). Again I apologize if I misunderstand it.
[03:10:46] <nvictor> i am trying to troubleshoot a mongodb server. everything seems to be set up correctly but not a single operation seems to affect it
[03:11:14] <nvictor> has anyone encountered this before?
[04:20:05] <hdm> random question on indexes; if an instance of mongod has multiple databases (call them A, B, C); if A is queried, the index is loaded into ram, then nothing happens and B/C are queried, will whatever ram was used for A's index get freed up for other queries?
[04:20:28] <hdm> assuming yes, but seeing performance differences between querying A and not querying A before doing large queries on B
[13:11:46] <flex__> Hello. Is there some simple way to do set-if-not-set atomically, like SETNX from Redis? I see $exists, but this check doesn't appear to be atomic...
[13:16:17] <ron> flex__: have you looked at findAndModify?
[13:21:17] <flex__> ron: not seen that, will look now, thanks! mongodb newbie, as it may be obvious ;-)
[13:21:38] <ron> flex__: no worries, we all start somewhere.
[13:23:13] <flex__> ron: Hm, not sure that's quite what I want. Whilst I can see that this solves half my problem, I don't see any atomic way to evaluate that the value is nonexistent between the find/set.
[13:56:35] <flux__> ron: Hi again. Sorry, I think I didn't make my problem clear enough. findAndModify works, but it also updates it if it exists. I'm only wanting to update if it doesn't exist.
[13:56:55] <socket> Hello, i'm looking for a tool like phpmyadmin for mongodb, got one?
[13:56:57] <flux__> That is, if it exists, I don't want to do anything.
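For reference, a shell-level sketch of a SETNX-style operation (collection and field names here are made up). An insert keyed on `_id` is atomic: `_id` always has a unique index, so the insert either creates the document or fails with a duplicate-key error, never overwriting an existing value.

```javascript
// Sketch of atomic set-if-not-set in the mongo shell; names are hypothetical.
// If a document with this _id already exists, the insert fails with a
// duplicate-key error and the existing value is left untouched.
db.settings.insert({ _id: "feature_flag", value: "on" });

// Alternative on 2.4+: an upsert with $setOnInsert. If the document exists,
// the update matches it and $setOnInsert changes nothing; if it does not
// exist, the upsert creates it with the given value.
db.settings.update(
    { _id: "feature_flag" },
    { $setOnInsert: { value: "on" } },
    { upsert: true }
);
```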
[14:10:28] <Gargoyle> socket: What is it you'll be mostly using it for?
[14:11:55] <Gargoyle> I use it for bits of dev work, quick queries, checking indexes. etc. Not really something I would rely on for making any serious changes (But then, I also stopped using phpMyAdmin about 10 years ago!)
[14:18:23] <Aric-> Can mongo do something similar to this: https://gist.github.com/04d8a40f68b6bdb132cc
[15:10:00] <NodeX> from what I gather it uses geo hashes
[15:11:32] <NodeX> it's no biggie for most operations because most people use $near anyway - the one large caveat with near is the results are limited to 5000 or something
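For context, a minimal $near query in the shell (collection and field names are assumptions). $near returns documents sorted by distance and caps the number of results it returns, so passing an explicit limit() is a good habit.

```javascript
// Hypothetical collection with a 2d geospatial index on "loc".
db.places.ensureIndex({ loc: "2d" });

// Results come back sorted nearest-first; $near caps the result set,
// so request only what you need with an explicit limit.
db.places.find({ loc: { $near: [-73.97, 40.77] } }).limit(50);
```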
[15:20:29] <NodeX> it is strange, mine doesn't do it
[15:22:07] <Gargoyle> It's not consistent, because when I ran it on a smaller script from the other day, I couldn't get it to repeat the miscount. But this is a large batch job resizing jpegs, so I suspect there is a lot more internal shuffling going on.
[15:22:51] <Gargoyle> just wondering if there is the possibility of it getting stuck in an infinite loop!
[15:23:15] <NodeX> i'm about to resize 200k jpegs so I'll add a counter and test on mine
[15:23:39] <Gargoyle> What are you using to resize?
[15:26:42] <Gargoyle> You think using a $set would make much difference when saving this kind of thing?
[15:26:45] <Infin1ty> I have 4 nodes + 1 arbiter replicaset, 3 nodes + 1 arbiter are in one DC, the 4th node is in other DC, what happens if i lose the 4th node and then take the 3rd node down as well? will there be any problems with primary election?
[15:27:35] <Gargoyle> Infin1ty: Primary elections need a majority. I think it's as simple as that.
[15:30:17] <Infin1ty> Gargoyle, hmm, so if i have now 3 nodes + 1 arbiter (1 node is down) and i'm taking another node down, i have 2 nodes + 1 arbiter, i think i'm good, no?
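Gargoyle's rule of thumb can be checked with quick arithmetic: a replica set needs a strict majority of all its configured voting members (data nodes and arbiters alike) to elect a primary.

```javascript
// Votes needed to elect a primary: a strict majority of all configured
// voting members, i.e. floor(totalVotes / 2) + 1.
function majority(totalVotes) {
  return Math.floor(totalVotes / 2) + 1;
}

// Infin1ty's set: 4 data nodes + 1 arbiter = 5 votes, so 3 are needed.
// With 2 members down, 3 voters remain: 3 >= 3, so a primary can be elected.
// Lose one more and 2 < 3: the set degrades to read-only secondaries.
var needed = majority(5); // 3
```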
[15:44:13] <Nicolas_Leonidas> I've created a collection called db.stats.largedaily
[15:44:33] <Nicolas_Leonidas> with php but when I do db.stats.largedaily.count() in mongo shell, it says TypeError: db.stats.largedaily has no properties (shell):1
[15:52:24] <Nicolas_Leonidas> This project I'm working on is about recording daily stats about events happening on the website, such as user clicks, page views and stuff, there will be around 100,000,000 documents per year
[15:52:33] <Nicolas_Leonidas> do you think I should be using mongodb for this?
[15:53:22] <Gargoyle> Nicolas_Leonidas: deffo! mongo is great for "stuff" :-)
[15:54:11] <Nicolas_Leonidas> ok, so I restarted the server, here are "show collections"
[15:56:00] <NodeX> it also didn't work because you issued the command wrong
[15:56:01] <Nicolas_Leonidas> I have a question about compound indexes, I want 5 fields to be indexed, should I use ensureIndex on each separately or in one shot?
[15:56:16] <NodeX> it's db.COLLECTION.count() not db.DATABASE.COLLECTION.count()
[15:56:29] <Nicolas_Leonidas> NodeX: right, I get it now
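A side note on the shell error above: the collection name itself contains a dot, and `db.stats` collides with the shell's built-in `db.stats()` helper, which is why property access fails. A hedged sketch, assuming the collection really is named `stats.largedaily` inside the current database:

```javascript
// db.stats is a shell helper function, not the "stats.largedaily"
// collection, so dotted property access breaks. getCollection() bypasses
// the helper and addresses the collection by its literal name:
db.getCollection("stats.largedaily").count();
```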
[15:56:49] <Gargoyle> NodeX: You got some diswexic fingwrs today!
[16:01:01] <Nicolas_Leonidas> so, db.largedaily.ensureIndex({year:1, month:1, day:1, lid:1, type:1 }) is the same as db.largedaily.ensureIndex({year:1}); db.largedaily.ensureIndex({month:1}) and ... ?
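The two are not equivalent. A single compound index can serve queries on any left-to-right prefix of its keys, while single-field indexes each serve only their own field. A sketch of the difference:

```javascript
// One compound index: serves queries filtering on {year}, {year, month},
// {year, month, day}, and so on -- any left-to-right prefix of the keys --
// and can sort on those prefixes too.
db.largedaily.ensureIndex({ year: 1, month: 1, day: 1, lid: 1, type: 1 });

// Five separate indexes: each serves only its own field. A query on
// {year, month, day} cannot combine them into one ordered index walk.
db.largedaily.ensureIndex({ year: 1 });
db.largedaily.ensureIndex({ month: 1 });
// ...and likewise for day, lid, type.
```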
[16:15:39] <hdm> not that ive noticed, im i/o bound, not cpu bound on inserts/updates
[16:15:56] <hdm> the ugly part is the corruption happened in a bson subdocument, so it took forever to figure out where/how
[16:16:19] <hdm> being religious about objcheck has prevented it from happening again, mongo doesnt otherwise validate BSON structure on insert from clients
[16:16:29] <tomreyn> i wouldnt even know where to start, i'm all new to mongodb
[16:16:54] <tomreyn> so thanks for this hint, that's really good to know.
[16:16:59] <hdm> you can turn it off and run without it if you feel it's a performance hit, i haven't noticed it though
[16:17:13] <hdm> for the initial import of 1.6 i would definitely do it though
[16:17:19] <hdm> just in case something got mangled
[16:17:42] <tomreyn> are you running replicasets with --objcheck in production?
[16:18:03] <tomreyn> i'm wondering whether it's noticeable overhead there
[16:18:11] <hdm> not even rs's right now, just single nodes with backups every 24h, i can always rebuild, but with objcheck
[16:18:37] <skot> It is fairly low overhead and just extra cpu (not usually a limiting resource)
[16:18:39] <hdm> ive got a ton of data but only one user, so a different use case than most
[16:18:56] <hdm> ~3.7T / 1.5 billion records, single server
[16:18:58] <tomreyn> hehe, i can bet what that data is
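The flag hdm and skot are discussing is a mongod startup option; a minimal sketch of enabling it (the dbpath is an assumption):

```shell
# Start mongod with server-side validation of client-supplied BSON.
# Malformed documents are rejected at insert time instead of being stored,
# at the cost of some extra CPU per write.
mongod --dbpath /data/db --objcheck
```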
[18:02:27] <sander__> I don't want to downgrade.. I want to be sure my current code works on future versions of mongodb
[18:03:16] <sander__> kali, In case i'm running mongodb as a service.. then it will be very bad if they decided to upgrade mongodb without it being compatible.
[18:03:58] <sander__> So just wondering how that's solved.
[18:06:45] <kali> sander__: the intent is clearly to be compatible, to allow most people to upgrade as soon as possible. that said, there are often a few corner cases where you need to adjust your code before upgrading
[18:15:18] <sander__> kali, do you have a good document with an introduction to mongodb?
[19:12:11] <hell_razer> hello i am using mongo from the official centos repo, i can not restart it, it doesn't release the console at /etc/init.d/centos restart/stop/start
[19:25:07] <jrdn> i tend to shorten field names, is there any way in mongo do to a similar thing as mysql's AS to rename a field? (in my rest app, i don't want to show shortened names)
[19:26:15] <jrdn> currently i have domain models that i'm passing to the view with the right field names, but this can be unnecessary overhead if i'm just doing a find({}) with no more manipulation ;(
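One option for jrdn's question: with the aggregation framework (2.2+), $project can rename fields server-side, which avoids a domain-model pass for plain reads. The short and long field names below are made up for illustration:

```javascript
// Rename short stored field names to friendly ones on the way out,
// roughly like SQL's "SELECT un AS userName". Names are hypothetical.
db.users.aggregate([
  { $project: { _id: 0, userName: "$un", createdAt: "$ca" } }
]);
```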
[19:27:22] <W0rmDrink> if I run this: http://pastebin.com/xzXbRGLE - the resulting types of "source_addr_ton" : 1, "source_addr_npi" : 1, is numberDouble
[20:19:20] <Xerosigma> Is there any documentation on setting up replication on Windows? Google is failing me.
[20:20:04] <wereHamster> Xerosigma: have you searched 'mongodb replication'?
[20:20:25] <wereHamster> replication is the same under any operating system, it's no different under windows than it is under linux
[20:21:16] <Xerosigma> I tried to use this for reference: http://docs.mongodb.org/manual/reference/configuration-options/
[20:21:29] <Xerosigma> However, it isn't really working out. xD
[20:22:49] <Xerosigma> I've also reviewed this: http://docs.mongodb.org/manual/tutorial/deploy-replica-set/
[20:24:32] <Xerosigma> Hmmm...perhaps it's the "Production" section that's confusing me.
[20:25:33] <Xerosigma> I'll start with the development set and work up then. Looks descriptive. Thank you.
[20:28:59] <wereHamster> "isn't really working out" is as useful as ... a pile of poo
[21:26:22] <fommil> Hi all – I'd like to be able to pass JSON directly to MongoDB as inserts (and queries) and to receive JSON in response using the Java driver. Is this possible?
[21:30:43] <nicobn> if I'm using sharding, is it right that I should stop specifying the replica set I want to use when I connect to the mongo server using a driver ?
[21:33:19] <_m> fommil: (with the 2.0 driver) Use com.mongodb.util.JSON
[21:33:42] <fommil> _m: interesting, I'll have a look.
[21:35:46] <fommil> _m: you mean DBCollection.save(DBObject) ?
[21:36:34] <fommil> _m: I actually already have the JSON marshalling sorted, I just want to be able to pass and receive JSON during inserting/reading
[21:37:17] <_m> In what sense? Serializing a JSON object as a field or using JSON as your query?
[21:37:56] <fommil> _m: JSON.parse returns an Object – I have no idea what that is. DBCollection.save takes in a DBObject.
[21:38:31] <_m> I would imagine the docs for the JSON class provide information about said object.
[21:39:00] <_m> class/module/whatever term Java uses
[21:39:07] <fommil> _m: I have an object, I already have code that can serialise/deserialise to a JSON String. I want to be able to pass that String into MongoDB to insert the object into the DB, and when I read rows, I want JSON to be returned.
[21:40:22] <fommil> _m: so I'm passing `String` in for CREATE and UPDATE, and I'm getting `String` out for the READ.
[21:40:50] <fommil> _m: right, that is totally not obvious from the Javadocs. Why the hell is the signature not DBObject?
[21:42:04] <fommil> _m: but in any case, this is still not optimal – I'd really like to be able to pass the JSON `String` directly to MongoDB so that there is not an intermediary layer of translation happening. In the console, isn't the driver just converting this back into a JSON string again?
[21:44:00] <ron> no. mongo doesn't store JSON, it stores BSON.
[21:45:45] <fommil> ron: whether it's JSON or BSON, using JSON.parse to get a DBObject, which is then converted into a String (J|B)SON when talking to MongoDB is inefficient
[21:46:51] <ron> BSON isn't a string. As for being converted to JSON, you don't know that it does that. As for efficiency, do you suffer any performance issues that you're trying to solve?
[21:48:03] <fommil> ron: yeah, performance really matters – I don't want to take stages that aren't needed. If there is a method in the API that takes raw JSON Strings, then that's perfect. If it doesn't exist, I'll profile and look at my options.
[21:48:35] <fommil> ron: well all my objects are made up of primitives that mean the JSON and BSON are equivalent.
[21:48:39] <ron> I didn't say performance doesn't matter. I asked whether you actually have any performance issues at the moment.
[21:49:06] <fommil> ron: I've not written it yet – so I want to use the right API the first time.
[21:49:26] <ron> do you write your code in a way that completely avoids autoboxing too?
[21:50:23] <_m> You can look at the code for the driver to see if what you're attempting is supported and simply not/poorly documented.
[21:50:54] <fommil> _m: well, thanks for the pointers to util.JSON. At least that's given me a way of avoiding writing the explicit CRUDs, even though it's not exactly what I was after. Now I can convert String <-> DBObject, which is useful
[21:51:24] <fommil> _m: maybe direct String JSON writing isn't supported.
[21:52:24] <fommil> _m: I've already got the code and am looking through it. It's not got the best documentation in the world – the DBObject cast on JSON.parse is a case in point. I don't see why that doesn't just return DBObject.
[21:53:00] <_m> What are the implications of parsing/encoding the string to a dbobject? Does that really cause Java to DIAF?
[21:53:55] <fommil> _m: hehe, well the app is about really high throughput JSON -> DB conversion. The REST -> JSON marshalling is super optimised, and I wince at the thought of an extra String -> DBObject -> JSON marshalling layer
[21:54:21] <fommil> _m: I'll have to profile to see if it's going to be a big problem, and if it is I'll deal with it then. Premature optimisation and all that
[21:59:18] <skot> fommil: there is no prebuilt thing that you want, but you can take a look at json.parse and the callback interfaces and go directly from string (json) to encoded bson (create a lazydbobject from bytes) and insert that, on the way out you can use a custom decoder as well. But the whole driver is made to produce DBObjects, basically.
[22:00:31] <skot> the lazy* classes are essentially byte[] holders which implement the DBObject interface
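A minimal Java sketch of the round trip _m suggested with com.mongodb.util.JSON (2.x driver; host, port, and collection names are assumptions). JSON.parse is declared to return Object, hence the cast to DBObject that fommil found surprising:

```java
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import com.mongodb.util.JSON;

public class JsonRoundTrip {
    public static void main(String[] args) throws Exception {
        // Assumed connection details for the sketch.
        Mongo mongo = new Mongo("localhost", 27017);
        DB db = mongo.getDB("test");
        DBCollection coll = db.getCollection("docs");

        // String -> DBObject: JSON.parse returns Object, hence the cast.
        DBObject doc = (DBObject) JSON.parse("{\"name\": \"widget\", \"qty\": 5}");
        coll.insert(doc);

        // DBObject -> String on the way back out.
        String json = JSON.serialize(coll.findOne());
        System.out.println(json);

        mongo.close();
    }
}
```

As skot notes, the driver is built around DBObject throughout, so this parse/serialize step is the supported path rather than passing raw JSON strings end to end.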
[22:44:21] <mrpro> looks like the more connections the less throughput
[23:09:14] <cornchips1> i would like to run mongo on top of dfs so i don't have to deal with sharding and so my dfs could take care of replication. would like to have multiple mongo servers as writers/readers.. any ideas if/how this can be done
[23:20:26] <jcims> this is going to be a terrible question from a complete n00b to nosql. i am aggregating software inventory information from approximately six different sources and am considering the use of mongodb as the repository. the different sources have different formats for the same data, and different confidence levels. generally speaking, would it be better to invest time normalizing the data up front, or getting the data into mongo as is and
[23:20:37] <jcims> probably a million records total