[02:40:32] <nathanielc> I am trying to understand how mongos works with chunks. Can someone explain the uses of the Chunk class vs the ChunkManager class?
[06:09:25] <mun> if the API allows storing lists and dictionaries, would these be stored as binary in the db?
[06:19:07] <mun> in fact, is every type stored as binary?
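To answer mun's question: yes, MongoDB serializes every document, including embedded lists and dictionaries, into BSON, a binary format, before storing it. As an illustration only (the helper names here are made up; real drivers ship a full BSON codec), here is a hand-rolled encoding of the document `{"a": 1}`:

```python
import struct

def encode_int32_field(name, value):
    # BSON element: type byte 0x10 (int32), cstring field name, little-endian int32
    return b"\x10" + name.encode() + b"\x00" + struct.pack("<i", value)

def encode_document(fields):
    # BSON document: int32 total length, element list, trailing 0x00 terminator
    body = fields + b"\x00"
    return struct.pack("<i", len(body) + 4) + body

doc = encode_document(encode_int32_field("a", 1))
print(doc.hex())  # → 0c0000001061000100000000
```

Lists are encoded the same way as embedded documents whose keys are "0", "1", "2", ..., so every stored value really is binary on disk and on the wire.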
[06:52:54] <Mulleteer> Hi, the mongodb Jira https://jira.mongodb.org does not seem to have option to create bugs for the Node.js driver (node-mongodb-native)
[06:53:25] <Mulleteer> there is no project for it when creating new issue, or then I'm just missing something
[07:41:43] <hedenberg> Has anyone experienced issues with running stored javascripts on larger amounts of data? I ran a script on 200mil rows which worked perfectly for most of the execution, then suddenly halted with "JavaScript execution failed: SyntaxError: Unexpected token :" Should I just treat it as a MongoDB bug? On smaller sets the function works perfectly.
[07:56:43] <poy> hi. i am trying to build an rpm for mongo and ran into this: cp: cannot stat `BINARIES/usr/bin': No such file or directory.
[12:11:46] <isart> I've removed a shard from my cluster, everything looks OK on sh.status() but if I try to get collection status I get the following msg "exception: socket exception [CONNECT_ERROR] for replicaSet3/10.1xxx:10000,10.xxx:10000"
[12:12:13] <isart> I removed it following the instructions on the site documentation
[12:59:38] <kali> kala_sifar: well, the document model is not ideal... you have to choose between a simple but highly space-inefficient schema, or one that is quite complex to manipulate
[12:59:48] <kali> so i'm not sure it's worth the trouble compared to specialized tools
[13:00:30] <kala_sifar> its a long debate but after all it depends on your use case
[13:01:21] <kali> yeah, i agree there is more than one answer to this question.
[13:02:03] <kala_sifar> i have been doing massive updates every day + read loads ... we have to update about 86 GB of data every day
[13:20:36] <hillct> Good morning all. I wonder, can anyone point me to an implementation in Node, of a GridFS style interface to mongodb that exposes a node filesystem api object?
[13:49:19] <eldub> When backing up a mongo db... what is the best way? I'm hearing a mongodump isn't the way. Should I just be backing up the /data/db folder?
[13:49:44] <cheeser> funny, that. we offer backup services. :D
[13:50:05] <Derick> eldub: you can only copy the data if you shutdown mongodb
[13:51:55] <Derick> a hidden secondary node is what most people use to run mongodump against
[13:51:56] <eldub> cheeser I appreciate the option, but we would only be keeping things in house.
[13:52:37] <cheeser> i just use mongodump for my back up but i'm just running an irc bot on it so i'm not worried about a few dropped documents on restore
[13:53:06] <eldub> Derick That sounds like a good approach. But from what I've gathered, mongodump isn't the ... best option?
[13:53:24] <Derick> well, you can also pause the secondary node and copy the data dir
[13:53:32] <eldub> Yea that's what I think I'm going to do
[13:53:36] <Derick> but that involves some trickery with changing the configuration of your set
[13:54:16] <eldub> I was thinking of just writing a script to shutdown mongod on an off hour when it's not being used, copy the dir, upon completion start back up mongod
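eldub's shutdown-copy-restart idea can be sketched as a dry run that only emits the commands rather than executing them; the `mongod` service name and the paths are assumptions for illustration and will differ per system:

```python
def backup_plan(data_dir, dest):
    # Sketch of the approach discussed above: stop mongod, copy the data
    # directory, restart. Commands are returned, not run, so the sequence
    # can be reviewed or fed to a scheduler.
    return [
        "systemctl stop mongod",
        f"cp -a {data_dir} {dest}",
        "systemctl start mongod",
    ]

for cmd in backup_plan("/data/db", "/backups/db-snapshot"):
    print(cmd)
```

The same sequence applies to Derick's suggestion of pausing a hidden secondary instead of the primary, which avoids downtime for the application.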
[14:10:37] <adamobr> how about array fields in a document? a new document per element when unwinding?
[14:16:27] <CIDIC> I have recently started reading about and learning to use mongodb. I just read this http://docs.mongodb.org/manual/core/write-concern/#write-concern
[14:16:53] <CIDIC> it seems like there are a lot of ways to lose data? or are these extreme corner cases?
[14:18:18] <pebble_> adamobr, I think mongo already does that, it creates documents based on the unique set of an array
[14:18:35] <pebble_> then you'd have to unwind those arrays
[14:19:33] <NodeX> CIDIC : you specify your write concern, by default it's set to 1
[14:22:58] <CIDIC> I have been asking what people think about using mongodb as the only db of a php content management system and I have gotten a lot of conflicting answers. what do you guys think?
[14:26:44] <CIDIC> Derick: in a cloud system that wouldn't happen very often ?
[14:27:10] <Derick> CIDIC: more than you think really, that's why in your app you need to handle the cases when the driver tells you something went wrong
[14:28:15] <CIDIC> Derick: say a user is updating content and submits it to the server and something goes wrong what would/should happen?
[14:28:42] <Derick> CIDIC: depending on your write concern: nothing (w=0, not the default), or the driver says "couldn't write"
[14:29:34] <NodeX> it also powers a multi tenant CRM, the fastest dedicated Job board in the UK and a very well known adult social network
[14:29:40] <Derick> do not think that MongoDB loses the data after it's stored; rather, there are cases where something transient doesn't *get* stored in the first place
[14:30:57] <Derick> but, I've seen some customers not checking errors and wondering why things went wrong.
[14:31:16] <Derick> just check/catch and handle the exceptions the driver throws and this is not a problem.
[14:32:01] <CIDIC> so really you should have a webform the user fills out and posts; if there is a write error, display a notification "Failed to write…", repopulate all the form fields with the submitted data, and let them click submit and try again?
[14:32:10] <NodeX> Derick : you ever known PHP to segfault when it can't connect to a MongoDB server ?
[14:32:33] <NodeX> CIDIC : would you do that with another database?
[14:32:36] <CIDIC> basically the same procedure for any db write operation
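The retry flow CIDIC describes can be sketched driver-agnostically; `WriteError` and `save_page` below are made-up stand-ins, since the real exception class depends on the driver:

```python
class WriteError(Exception):
    """Stand-in for the exception a MongoDB driver raises on a failed write."""

def save_page(doc, insert):
    # On failure, keep the submitted data so the form can be repopulated
    # and the user can retry, as discussed above.
    try:
        insert(doc)
        return {"ok": True}
    except WriteError as exc:
        return {"ok": False, "error": str(exc), "form_data": doc}

def failing_insert(doc):
    # Simulates a transient write failure (e.g. no primary reachable).
    raise WriteError("couldn't write")

result = save_page({"title": "Home"}, failing_insert)
print(result["ok"], result["error"])  # → False couldn't write
```

As NodeX points out, this is the same discipline any database client needs: check the result of every write and surface failures to the user.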
[14:52:03] <Tiller> Is it possible with an update in upsert mode to insert a field only if it doesn't exist yet?
[14:53:25] <NodeX> Derick : https://gist.github.com/anonymous/7233966 is that any more use?
[14:55:18] <NodeX> Tiller : there is a switch , one sec let me find it
[14:55:50] <Derick> NodeX: yes, it shows that the mongodb driver is not involved at all
[14:57:12] <Tiller> Oh, I think I get it NodeX. $setOnInsert
[14:57:21] <Tiller> Sorry for not seeing it earlier :(
[14:57:42] <NodeX> that's the one, I always forget its name
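The $setOnInsert behaviour Tiller found can be illustrated by simulating the upsert semantics in plain Python; this is a toy model of `db.coll.update(query, {$set: ..., $setOnInsert: ...}, {upsert: true})`, not a driver call. Fields under $setOnInsert are applied only when the update inserts a new document:

```python
def upsert(collection, query, set_fields, set_on_insert):
    # If a document matches, apply only the $set fields.
    for doc in collection:
        if all(doc.get(k) == v for k, v in query.items()):
            doc.update(set_fields)
            return doc
    # No match: insert, applying both $set and $setOnInsert fields.
    new_doc = {**query, **set_fields, **set_on_insert}
    collection.append(new_doc)
    return new_doc

coll = []
upsert(coll, {"_id": 1}, {"seen": 2}, {"created": "2013-10-30"})
upsert(coll, {"_id": 1}, {"seen": 3}, {"created": "2013-11-01"})
print(coll)  # "created" keeps its first value; "seen" is updated
```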
[14:57:55] <CIDIC> this is roughly the schema we use in our php cms with a mysql db. If we were to migrate it to mongodb how would you recommend structuring it? https://gist.github.com/unstoppablecarl/6b634491512f7ea9fb97
[14:58:09] <NodeX> Derick : if I remove mongodb call from the script it runs fine
[14:58:39] <Derick> but there is no call there at all yet
[14:59:01] <Derick> NodeX: it's just gotten to parsing the script
[14:59:07] <NodeX> it's in the construct, one second let me select a collection
[14:59:21] <Derick> NodeX: the backtrace does *not* show the mongo extension
[14:59:26] <Derick> NodeX: try turning off opcache
[15:04:59] <CIDIC> NodeX: got an explanation to go along with that?
[15:05:55] <NodeX> keys are pretty self explanatory
[15:06:44] <NodeX> that ^^ gives you something along the lines of http://www.nodex.co.uk/
[15:13:27] <NodeX> Derick : https://gist.github.com/anonymous/7234292 <----- php file contents at the top, debug information underneath. Opcache removed. Without the MongoClient() call there is no segfault
[15:18:57] <rclements> I am trying to save a Hash of objects as field and get an undefined method __bson_dump__ for the object type in the Hash I'm trying to save. I added https://gist.github.com/rclements/7234372 to my model and it worked.
[15:19:28] <rclements> My problem is I have another model in a gem I wrote that is giving me that error and doesn't use mongo. How do I handle the __bson_dump__ error in that model?
[15:20:39] <rclements> Is there a bson gem included in mongo I can just add to that gem that would give me that __bson_dump__ method to include?
[15:20:49] <Derick> NodeX: yay, so that crashes too
[15:21:04] <Derick> NodeX: can you post your whole shell session?
[15:45:42] <CIDIC> all the extra stuff is to make it effortless to create admin forms for pages in the end it is just a key value list per page
[15:45:57] <CIDIC> that is really all the front end uses
[15:46:20] <NodeX> then my advice is keep it embedded, even if that means data duplication
[15:46:42] <NodeX> every single page uses one db call with ONE query in my CMS
[15:47:37] <CIDIC> NodeX: I plan to embed the key value pairs all the extra meta data I am not sure about
[15:48:51] <CIDIC> the other issue with data duplication: I want to be able to update something that multiple pages reference, and I would have to change every page with a copy of the embedded data, right?
[15:52:17] <NodeX> what are you trying to achieve in the long run?
[15:55:44] <CIDIC> NodeX: flexible page data structures, and admin forms reflecting that structure that the app can generate automatically from the metadata set by the admin, yielding a key-value pair list for all pages that will be used by the front end.
[15:56:33] <NodeX> not really what I meant. What do you want to achieve by using mongodb for your datastore
[15:57:10] <CIDIC> NodeX: it would seem using mongodb would be less complex than mysql to achieve this
[15:59:12] <CIDIC> I guess it isn't a big problem but it would seem that scale could become a problem
[15:59:42] <CIDIC> if a site has a ton of pages that use the same page type (usually the default) and I make a change to the default page type it would have to go through and update every page document right?
[16:00:07] <NodeX> that's not hard just issue a multiple update
[16:00:08] <CIDIC> although it would have to do that anyway but not do as much
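The multi update NodeX mentions can be modeled in plain Python; this is a toy stand-in for `db.pages.update({...}, {$set: {...}}, {multi: true})`, showing that changing a shared page type is one statement rather than a per-page loop:

```python
def update_many(docs, query, set_fields):
    # Apply set_fields to every document matching query; return the match count.
    n = 0
    for doc in docs:
        if all(doc.get(k) == v for k, v in query.items()):
            doc.update(set_fields)
            n += 1
    return n

pages = [
    {"type": "default", "cols": 2},
    {"type": "default", "cols": 2},
    {"type": "blog", "cols": 1},
]
print(update_many(pages, {"type": "default"}, {"cols": 3}))  # → 2
```

On a real deployment the server does this work, so the cost is proportional to the matched documents but needs no application-side iteration.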
[16:16:26] <dbasaurus> In my mongo log, I see this error quite frequently -> ClientCursor::find(): cursor not found in map ……. My understanding is that this is caused by a timeout iterating through a find result set. Can anyone confirm? Also, I am seeing this message a lot as well -> [conn1296877] killcursors: found 0 of 1…. Does that statement carry the same meaning?
[16:27:43] <rclements> Anyone around that can possibly help with this Rails/Mongo issue? https://t.co/MJJbTqt31K
[16:33:45] <Derick> NodeX: how is it coming along?
[17:07:34] <KamZou> Hi, could you please tell me what's the max length of a mongo request ?
[20:27:39] <_sri> is $out support for the aggregate command currently broken in 2.5.3? it does not appear to send outputNs with the command reply, even though the target collection is created
[20:31:52] <cheeser> according to the docs $out only returns an empty "result" array and "ok" : 1
[20:32:44] <_sri> cheeser: and drivers are supposed to look through the pipeline spec to find the target collection?
[20:33:34] <cheeser> "The driver will return the raw document returned from the server. The user can decide whether to instantiate a collection using the name specified in the $out operator."
[20:34:02] <cheeser> at least in the java driver, cursors are lazy so we return a find() cursor on that collection.
[20:34:11] <cheeser> if you don't iterate, it costs nothing.
[20:35:54] <ShellFu> the hash that I'm passing doesn't change. I simply pass it to save as is and update as is. If I use one or the other then only part of the hash seems to be saved/updated
[20:36:30] <_sri> ah, i guess if it's guaranteed to be the last stage it makes sense
[20:37:29] <cheeser> yeah. it's required to be the last one.
[20:41:14] <_sri> i've only been looking at the perl driver, which still uses outputNs, i guess it needs to be updated again
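The behaviour cheeser and _sri settle on (an empty "result" array, $out required to be the last stage, and the target collection created as a side effect) can be modeled with a toy in-memory aggregate; the $match/$out handling below is a simplified sketch, not the server's implementation:

```python
def aggregate(db, source, pipeline):
    # Toy model of the aggregate command with a $out stage: $out must be
    # last, writes the results into the named collection, and the command
    # reply carries an empty result set.
    docs = list(db[source])
    out = None
    for i, stage in enumerate(pipeline):
        if "$match" in stage:
            q = stage["$match"]
            docs = [d for d in docs if all(d.get(k) == v for k, v in q.items())]
        elif "$out" in stage:
            assert i == len(pipeline) - 1, "$out must be the last stage"
            out = stage["$out"]
            db[out] = docs
    return {"result": [] if out else docs, "ok": 1}

db = {"events": [{"kind": "a"}, {"kind": "b"}]}
reply = aggregate(db, "events", [{"$match": {"kind": "a"}}, {"$out": "matched"}])
print(reply, db["matched"])
```

This mirrors why a driver can simply hand back a lazy cursor over the $out collection named in the pipeline, as cheeser describes for the Java driver, instead of relying on an outputNs field in the reply.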
[20:49:02] <schnittchen> I see a strange, but consistent, performance degradation after moving to a more powerful server...
[20:50:20] <schnittchen> like, factor 2 worse, even though everything is probably served from memory
[20:55:23] <angasulino> schnittchen, same physical location?
[20:56:00] <schnittchen> no..., why do you ask? I query locally
[22:26:56] <jesse_> hey guys. I'm trying to build a p2p application and would like to distribute it (on linux systems) with everything needed to run out of the box. The application itself is written in Python. Is it possible to box everything (drivers, pymongo, blah blah blah) and create a self-sustained package?
[22:34:36] <retran> what's this have to do with mongo, jesse
[22:54:27] <eldub> I'm still unable to change my primary's host from IP to host name