
#mongodb logs for Wednesday the 30th of October, 2013

[02:40:32] <nathanielc> I am trying to understand how mongos works with chunks. Can someone explain the uses of the Chunk class vs the ChunkManager class?
[06:09:06] <mun> hi
[06:09:25] <mun> if the API allows storing lists and dictionaries, would these be stored as binary in the db?
[06:19:07] <mun> in fact, is every type stored as binary?
[06:52:54] <Mulleteer> Hi, the mongodb Jira https://jira.mongodb.org does not seem to have option to create bugs for the Node.js driver (node-mongodb-native)
[06:53:25] <Mulleteer> there is no project for it when creating new issue, or then I'm just missing something
[07:41:43] <hedenberg> Have anyone experienced issues with running stored javascripts on larger amounts of data? Ran a script on 200mil rows which worked perfectly for most of the execution, then suddenly halted with "JavaScript execution failed: SyntaxError: Unexpected token :" Should I just treat it as a MongoDB bug? On smaller sets the function works perfectly.
[07:56:43] <poy> hi. i am trying to build an rpm for mongo and came up with this cp: cannot stat `BINARIES/usr/bin': No such file or directory.
[08:27:11] <[AD]Turbo> hi there
[09:00:19] <Number6> hipsterslapfight: Great nickname
[09:01:13] <hipsterslapfight> ha, thanks
[09:06:32] <NodeX> lmao
[09:12:48] <hedenberg> Have anyone experienced issues with running stored javascripts on larger amounts of data? Ran a script on 200mil rows which worked perfectly for most of the execution, then suddenly halted with "JavaScript execution failed: SyntaxError: Unexpected token :" Should I just treat it as a MongoDB bug? On smaller sets the function works perfectly.
[09:31:34] <Mulleteer> anything in mongo log?
[12:06:45] <BurtyB> hmm it's a shame a string isn't $gt or $lt a numeric confused me for a while :/
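BurtyB's surprise above has a concrete cause: MongoDB orders values by BSON type before comparing values, so a range operator like $gt or $lt never matches across type classes. A toy matcher (plain JavaScript, not the server's code, covering only a small subset of the real type order) makes the behaviour visible:

```javascript
// Toy illustration: numbers and strings live in different BSON type
// classes, so {$gt: 10} will never match the string "20" -- the types
// differ, not the magnitudes.
function bsonTypeRank(v) {
  // minimal subset of the BSON comparison order
  if (typeof v === "number") return 1;
  if (typeof v === "string") return 2;
  return 3;
}

function toyGt(docValue, queryValue) {
  // $gt only matches within the same type class
  if (bsonTypeRank(docValue) !== bsonTypeRank(queryValue)) return false;
  return docValue > queryValue;
}

console.log(toyGt(20, 10));   // true
console.log(toyGt("20", 10)); // false: string vs number, never $gt
```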
[12:10:21] <isart> Hi!
[12:11:13] <cHarNe2> exit
[12:11:46] <isart> I've removed a shard from my cluster, everything looks OK on sh.status() but if I try to get collection status I get the following msg "exception: socket exception [CONNECT_ERROR] for replicaSet3/10.1xxx:10000,10.xxx:10000"
[12:12:13] <isart> I removed it following the instructions on the site documentation
[12:52:37] <kala_sifar> hello
[12:52:47] <kala_sifar> i have a question
[12:52:59] <kala_sifar> if i want to increase the write capacity of my mongodb instances
[12:53:09] <kala_sifar> sharding is the obvious option to go with
[12:53:28] <kala_sifar> i now have a sharded cluster
[12:53:33] <kala_sifar> but i dont understand one thing
[12:53:52] <kala_sifar> i.e should different collections be updated using different mongos ?
[12:54:06] <Derick> no need
[12:54:25] <Derick> mongos knows (through the config servers) which collections are sharded
[12:54:38] <kala_sifar> and if i have two mongos running should i have to make 3 config servers for each of them ?
[12:54:41] <Derick> and where data lives
[12:54:49] <Derick> no, the config servers are for the whole cluster
[12:54:59] <kala_sifar> thankyou so much
[12:54:59] <kala_sifar> (Y)
[12:55:03] <Derick> you can have many mongos-es all using the same 3 config servers
[12:55:10] <Derick> they *have* to use the same three
[12:55:36] <kala_sifar> okay
[12:55:37] <kali> hehehe... i think i will have to stop telling people to avoid time series in mongodb, now
[12:56:55] <kala_sifar> timeseries ?
[12:56:57] <kala_sifar> works like a charm for me
[12:57:02] <kala_sifar> on *mongodb*
[12:59:38] <kali> kala_sifar: well, the document model is not ideal... you have to choose between a simple but highly space-inefficient schema or a quite complex one to manipulate
[12:59:48] <kali> so i'm not sure it's worth the trouble compared to specialized tools
[13:00:30] <kala_sifar> its a long debate but after all it depends on your use case
[13:01:21] <kali> yeah, i aggree there is more than one answer to this question.
[13:02:03] <kala_sifar> i have been doing massive updates every day + read loads ... we have to update about 86 GB of data every day
[13:02:14] <kala_sifar> mongodb never troubled us
[13:02:21] <kala_sifar> we have a very micro architechture
[13:02:23] <kala_sifar> just 2 shards
[13:02:31] <kala_sifar> total memory 12GBs
[13:02:33] <kala_sifar> xeon machines
[13:20:36] <hillct> Good morning all. I wonder, can anyone point me to an implementation in Node, of a GridFS style interface to mongodb that exposes a node filesystem api object?
[13:49:19] <eldub> When backing up a mongo db... what is the best way? I'm hearing a mongodump isn't the way. Should I just be backing up the /data/db folder?
[13:49:44] <cheeser> funny, that. we offer backup services. :D
[13:50:05] <Derick> eldub: you can only copy the data if you shutdown mongodb
[13:50:41] <cheeser> http://www.mongodb.com/blog/post/start-backing-mongodb-free-free-tier-mms-backup-now-available
[13:50:41] <eldub> Derick I have to shutdown mongod in order to backup the /data/db folder?
[13:50:53] <Derick> yes
[13:50:58] <cheeser> those files are being actively changed, so yeah.
[13:51:02] <Derick> so that's clearly not an option either...
[13:51:12] <Neptu> hej someone has experiemented with hardware.... what will be a perfect minimalistic shard??
[13:51:15] <eldub> Understood.
[13:51:55] <Derick> a hidden secondary node is what most people use to run mongodump against
[13:51:56] <eldub> cheeser I appreciate the option, but we would only be keeping things in house.
[13:52:37] <cheeser> i just use mongodump for my back up but i'm just running an irc bot on it so i'm not worried about a few dropped documents on restore
[13:53:06] <eldub> Derick That sounds like a good approach. But from what I've gathered, mongodump isn't the ... best option?
[13:53:24] <Derick> well, you can also pause the secondary node and copy the data dir
[13:53:32] <eldub> Yea that's what I think I'm going to do
[13:53:36] <Derick> but that involves some trickery with changing the configuration of your set
[13:53:45] <eldub> hmmm
[13:54:16] <eldub> I was thinking of just writing a script to shutdown mongod on an off hour when it's not being used, copy the dir, upon completion start back up mongod
[13:54:31] <Derick> looks like a hack :-)
[13:54:34] <cheeser> you could do that to a secondary instead.
[13:54:52] <Derick> cheeser: yeah, that's what I just suggested
[13:54:53] <eldub> cheeser better idea than mine. :)
[13:54:59] <eldub> Derick true -
[13:55:05] <cheeser> i'd be tempted to try something like this: remove secondary from set, mongodump, readd to set.
[13:55:46] <Derick> cheeser: that will break all connections though
[13:55:50] <eldub> cheeser think that's more efficient way than just stopping mongod, backup data dir, restart?
[13:56:07] <cheeser> Derick: if there are any to that secondary, yeah.
[14:00:50] <Derick> cheeser: reconfig also breaks the primary connections afaik
[14:04:05] <pebble_> Hello is there a way to $unwind an object?
[14:04:06] <cheeser> ah. didn't know that.
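Derick's "pause the secondary and copy the data dir" can also be done without reconfiguring the set, by blocking writes with fsyncLock. A mongo-shell sketch, not a complete backup procedure (run it against the secondary being copied, and keep the shell session open while the lock is held):

```javascript
// mongo shell, connected to the secondary being backed up
var admin = db.getSiblingDB("admin");
admin.fsyncLock();   // flush pending writes to disk and block new ones
// ...now copy the dbpath (e.g. /data/db) with cp or rsync from another
// terminal while writes are blocked...
admin.fsyncUnlock(); // resume writes once the copy has finished
```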
[14:04:19] <Derick> pebble_: nope
[14:04:27] <Derick> pebble_: but that would be nice :)
[14:05:07] <pebble_> @Derick, yeah, it would :) . I tried everything I could think, and I'm a sad panda now
[14:05:13] <pebble_> oh well, thanks!
[14:05:24] <Derick> pebble_: file a feature request in jira?
[14:05:48] <pebble_> Will do.
[14:06:27] <cheeser> what would unwinding an object (document?) look like?
[14:07:20] <pebble_> Well pretty much the same as unwinding an array. Each field of the object would form a new document
[14:07:48] <cheeser> ah
[14:10:37] <adamobr> how about array fields in a document? a new document to unwinding?
[14:16:27] <CIDIC> I have recently started reading about and learning ot use mongodb I just read this http://docs.mongodb.org/manual/core/write-concern/#write-concern
[14:16:53] <CIDIC> it seems like there are a lot of ways to loose data? or are these extreme corner cases?
[14:18:18] <pebble_> adamobr, I think mongo already does that, it creates documents based on the unique set of an array
[14:18:35] <pebble_> then you'd have to unwind those arrays
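The "$unwind an object" feature pebble_ is describing can be sketched client-side, since the aggregation stage itself only accepts arrays: each key/value pair of the named field becomes its own document. One possible shape (the `{k, v}` pair layout is my choice, not anything the server defines):

```javascript
// Client-side sketch of "$unwind on an object": produce one document per
// key/value pair of the named embedded-object field.
function unwindObject(doc, field) {
  const obj = doc[field];
  return Object.keys(obj).map(function (key) {
    const copy = Object.assign({}, doc);
    copy[field] = { k: key, v: obj[key] }; // hypothetical pair layout
    return copy;
  });
}

const out = unwindObject({ _id: 1, counts: { a: 2, b: 3 } }, "counts");
// out[0] -> { _id: 1, counts: { k: "a", v: 2 } }
// out[1] -> { _id: 1, counts: { k: "b", v: 3 } }
```

(Much later server versions grew an `$objectToArray` operator that, combined with the ordinary array `$unwind`, covers this use case.)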
[14:19:33] <NodeX> CIDIC : you specify your write concern, by default it's set to 1
[14:20:53] <pebble_> CIDIC : Mongodb isn't ACID compliant.
[14:21:03] <CIDIC> pebble_: what does that mean?
[14:21:12] <NodeX> Google ;)
[14:22:58] <CIDIC> I have been asking what people think about using mongodb as the only db of a php contentmanagement system and I have gotten a lot of conflicting answers. what do you guys think?
[14:23:06] <Derick> yes
[14:23:14] <Derick> it's an awesome fit
[14:23:59] <CIDIC> how often do these write failures come up?
[14:24:24] <Derick> CIDIC: hardware breaking
[14:24:30] <Derick> or network breaking
[14:24:44] <Zelest> http://devnull-as-a-service.com/ :-D
[14:26:44] <CIDIC> Derick: in a cloud system that wouldn't happen very often ?
[14:27:10] <Derick> CIDIC: more than you think really, that's why in your app you need to handle the cases when the driver tells you something went wrong
[14:28:15] <CIDIC> Derick: say a user is updating content and submits it to the server and something goes wrong what would/should happen?
[14:28:42] <Derick> CIDIC: depending on your write concern: nothing (w=0, not the default), the driver says "couldn't wrote"
[14:28:52] <Derick> write*
[14:29:00] <NodeX> CIDIC : I have been using mongodb as the main datastore of a CMS for around 3 years
[14:29:14] <Derick> then to be really sure, you can w=n with n>1 and you can ensure other edge cases as well
[14:29:18] <CIDIC> NodeX: what cms?
[14:29:34] <NodeX> it also powers a multi tenant CRM, the fastest dedicated Job board in the UK and a very well known adult social network
[14:29:40] <Derick> do not think that MongoDB loses the data after it's stored; rather, there are cases where something transient doesn't *get* stored in the first place
[14:29:42] <NodeX> CIDIC : CLosed source sorry
[14:29:56] <CIDIC> NodeX: what language? just curious
[14:30:01] <NodeX> php
[14:30:12] <NodeX> (mainly), touch of node here and there
[14:30:32] <CIDIC> have you guys ever actually had it happen to you?
[14:30:35] <NodeX> node -> Server side javascript*
[14:30:39] <Derick> CIDIC: not me
[14:30:44] <NodeX> I can't recall ever losing data
[14:30:52] <CIDIC> I mean a failed write
[14:30:57] <Derick> but, I've seen some customers not checking errors and wondering why things went wrong.
[14:31:16] <Derick> just check/catch and handle the exceptions the driver throws and this is not a problem.
[14:32:01] <CIDIC> so really you should have a webform user fills out posts, if there is a write error display a notification "Failed to write…" from the post repopulate all the form fields with the data submitted and they can click submit and try again?
[14:32:10] <NodeX> Derick : you ever known PHP to segfault when it can't connect to a MongoDB server ?
[14:32:33] <NodeX> CIDIC : would you do that with another database?
[14:32:36] <CIDIC> basically the same procedure for any db write operation
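The failure handling CIDIC describes hinges on the write concern, which controls when the driver reports a failed write back to the application. A mongo-shell sketch of the knobs being discussed (collection name and values are illustrative only):

```javascript
// mongo shell sketch: w:0 is fire-and-forget, w:1 waits for the primary
// to acknowledge, w:2+ additionally waits for replication to more members.
db.pages.insert({ slug: "home", body: "..." });
printjson(db.runCommand({ getLastError: 1, w: 1, wtimeout: 5000 }));
// in application code, the driver surfaces the same failure as an
// error/exception -- that's the point where the form should be
// repopulated and the user asked to retry
```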
[14:33:16] <Derick> NodeX: yes
[14:33:30] <NodeX> Derick : dang, how to get round it?
[14:33:36] <Derick> fix the driver
[14:33:45] <NodeX> update it?
[14:33:56] <Derick> well, or fix it if it is a bug in the latest :)
[14:34:15] <NodeX> I have a very weird mongodb install on a server that wont start with the normal init.d scripts, I have to fork it and nohup
[14:34:23] <NodeX> and for some reason the driver doesn't like this one bit
[14:34:33] <NodeX> mongo shell is fine to connect
[14:34:33] <Derick> hmm, gdb it?
[14:35:06] <NodeX> I ran an strace and it gets to SOCK_INET 127.0.0.1 somehting or other then segfaults
[14:35:19] <Derick> NodeX: strace is useless for debugging
[14:36:09] <NodeX> I just wanted to see where it stopped tbh
[14:36:55] <Derick> strace is still not handy for that ;-)
[14:37:35] <NodeX> I'll run a gdb on it
[14:38:12] <Derick> NodeX: USE_ZEND_ALLOC=0 gdb --args /path/to/php /path/to/script.php
[14:39:04] <NodeX> thanks Derick : will do it now
[14:39:26] <Derick> heh, i need to thank you helping to find a bug :P
[14:41:00] <NodeX> it just does nothing and has (gdb) prompt
[14:41:41] <NodeX> my bad, it's asking for a core dump, let me generate one
[14:42:52] <Derick> no
[14:42:55] <Derick> type "run" :-)
[14:43:14] <Derick> the (gdb) prompt is just what you want
[14:45:06] <NodeX> haha, sorry, not done this before
[14:46:55] <NodeX> https://gist.github.com/anonymous/7233867 <--- not a lot of help
[14:47:09] <Derick> no, it tells you it worked
[14:48:28] <NodeX> https://gist.github.com/anonymous/7233889 <--- after "bt"
[14:48:59] <Derick> hmm, not useful
[14:49:02] <Derick> there are no symbols
[14:49:29] <NodeX> gdb moaned about not having some debugging symbols
[14:49:45] <Derick> did yo uinstall php from a package?
[14:49:56] <NodeX> yeh 5.5 from apt
[14:50:06] <Derick> apt-get isntall php5-debug
[14:50:16] <NodeX> thought I had it sorry
[14:50:36] <Tiller> Hi!
[14:51:06] <Derick> hi
[14:51:08] <Derick> NodeX: that's ok
[14:52:03] <Tiller> Is it possible with an update in upsert mode to insert a field only if it doesn't exist yet?
[14:53:25] <NodeX> Derick : https://gist.github.com/anonymous/7233966 is that any more use?
[14:55:18] <NodeX> Tiller : there is a switch , one sec let me find it
[14:55:50] <Derick> NodeX: yes, it shows that the mongodb driver is not involved at all
[14:57:12] <Tiller> Oh, I think I get it NodeX. $setOnInsert
[14:57:21] <Tiller> Sorry for not seeing it early :(
[14:57:42] <NodeX> that's the one, I always forget it's name
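Tiller's find is worth spelling out: in an upsert, $setOnInsert fields are applied only when the update actually inserts a new document. A toy model of those semantics (plain JavaScript, not the server's implementation):

```javascript
// Toy model of upsert semantics: $set applies always, $setOnInsert only
// when the document is being created (existing === null here).
function applyUpsert(existing, update) {
  const inserting = existing === null;
  const doc = inserting ? {} : Object.assign({}, existing);
  Object.assign(doc, update.$set || {});
  if (inserting) Object.assign(doc, update.$setOnInsert || {});
  return doc;
}

const update = { $set: { views: 1 }, $setOnInsert: { createdAt: "2013-10-30" } };
applyUpsert(null, update);
// -> { views: 1, createdAt: "2013-10-30" }   (insert: both applied)
applyUpsert({ views: 0, createdAt: "2013-01-01" }, update);
// -> { views: 1, createdAt: "2013-01-01" }   (update: $setOnInsert skipped)
```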
[14:57:55] <CIDIC> this is roughly the schema we use in our php cms with a mysql db. If we were to migrate it to mongodb how would you recommend structuring it? https://gist.github.com/unstoppablecarl/6b634491512f7ea9fb97
[14:58:09] <NodeX> Derick : if I remove mongodb call from the script it runs fine
[14:58:39] <Derick> but there is no call there at all yet
[14:59:01] <Derick> NodeX: it's just gotten to parsing the script
[14:59:07] <NodeX> it's in the construct, one second let me select a collection
[14:59:21] <Derick> NodeX: the backtrace does *not* show the mongo extension
[14:59:26] <Derick> NodeX: try turning off opcache
[15:00:00] <NodeX> good idea
[15:00:10] <Derick> brb
[15:00:34] <NodeX> CIDIC : I advise you embed as much as possible, if you can wait I will show you my schema
[15:00:39] <NodeX> or an example of one
[15:00:50] <CIDIC> NodeX: I have all day :)
[15:00:59] <CIDIC> does what I linked to make sense?
[15:01:15] <NodeX> yes but it's very relational so it's not of much use to you
[15:01:33] <CIDIC> yea I know that is why I am asking, this is how we structure it using mysql
[15:04:07] <NodeX> CIDIC : https://gist.github.com/anonymous/7234148
[15:04:59] <CIDIC> NodeX: got an explaination to go along with that?
[15:05:55] <NodeX> keys are pretty self explanitory
[15:06:44] <NodeX> that ^^ gives you somehting a long the lines of http://www.nodex.co.uk/
[15:13:27] <NodeX> Derick : https://gist.github.com/anonymous/7234292 <----- php file contents at the top, debug information underneath. Opcache removed. Without the MongoClient() call there is no segault
[15:13:57] <Derick> NodeX: can I haz access?
[15:15:56] <NodeX> it's a client server, wish I could, he would shoot me haha
[15:16:07] <NodeX> as I say the mongodb install is kinda borked for some reason
[15:16:15] <Derick> looks like there is more borked
[15:16:21] <NodeX> it's only this one server too, every other one is fine
[15:16:28] <Derick> as I said, PHP hasn't gotten further than even *parsing* your script
[15:16:38] <Derick> let alone executing
[15:16:42] <NodeX> but all other php scripts run fine if I dont call mongodb which is weird
[15:16:49] <Derick> yes, it is
[15:17:01] <NodeX> you can see where my confusion lay
[15:18:28] <NodeX> https://gist.github.com/anonymous/7234379 <--- <?php echo("Hello world"); ?>
[15:18:57] <rclements> I am trying to save a Hash of objects as field and get an undefined method __bson_dump__ for the object type in the Hash I'm trying to save. I added https://gist.github.com/rclements/7234372 to my model and it worked.
[15:19:28] <rclements> My problem is I have another model in a gem I wrote that is giving me that error and doesn't use mongo. How do I handle the __bson_dump__ error in that model?
[15:20:39] <rclements> Is there a bson gem included in mongo I can just add to that gem that would give me that __bson_dump__ method to include?
[15:20:49] <Derick> NodeX: yay, so that crashes too
[15:21:04] <Derick> NodeX: can you post your whole shell session?
[15:21:42] <NodeX> you want me to record it?
[15:21:53] <Derick> copy and paste it into a gist? :)
[15:22:16] <NodeX> it crashes but it echo's out "hello world"
[15:22:45] <Derick> you're only showing me a part of things :-)
[15:23:33] <NodeX> https://gist.github.com/anonymous/7234469
[15:23:52] <NodeX> that's everything from running the gdb
[15:23:57] <Derick> duh
[15:24:04] <Derick> you forgot --args in your gdb line
[15:24:14] <Derick> it's just waiting for you to enter data ;-)
[15:24:14] <NodeX> hahahaah *facepalm
[15:24:21] <cheeser> :D
[15:24:24] <Derick> like you'd have typed "php" on the command line
[15:24:37] <Derick> ^C ...
[15:25:50] <NodeX> https://gist.github.com/anonymous/7234506 <---- now it has some mongodb references
[15:34:49] <CIDIC> NodeX: so you don't have any suggestions for how to structure my data?
[15:35:50] <NodeX> embed as much as possible and avoid relations
[15:36:19] <CIDIC> the requirements of my application demand some relationships
[15:43:04] <rclements> https://gist.github.com/rclements/7234787
[15:44:53] <NodeX> CIDIC : on the front end?
[15:45:17] <CIDIC> NodeX: not really
[15:45:42] <CIDIC> all the extra stuff is to make it effortless to create admin forms for pages in the end it is just a key value list per page
[15:45:57] <CIDIC> that is really all the front end uses
[15:46:20] <NodeX> then my advice is keep it embedded, even if that means data duplication
[15:46:42] <NodeX> every single page uses one db call with ONE query in my CMS
[15:47:37] <CIDIC> NodeX: I plan to embed the key value pairs all the extra meta data I am not sure about
[15:48:51] <CIDIC> the other issue with the data duplication, I want to be able to update something multiple pages reference and I would have to change every page with a copy of the embeded data right?
[15:49:01] <CIDIC> that sounds like a pain
[15:49:51] <NodeX> I am not sure what you mean sorry
[15:50:27] <CIDIC> so every page has a page type, and each page type has page properties
[15:50:47] <CIDIC> I want to be able to change a page type and multiple pages that reference it update
[15:51:01] <NodeX> right but that's an administration issue for the back end, not somehting that happens 2000 times a day?
[15:51:09] <CIDIC> no it isn't
[15:51:21] <CIDIC> it is an admin thing
[15:51:46] <NodeX> so writing a function to track what collections to update is a pain?
[15:51:53] <CIDIC> seems like I would have to write a sizable amount of code to keep those subdocuments in sync
[15:51:59] <CIDIC> ?
[15:52:17] <NodeX> what are you trying to achieve in the long run?
[15:55:44] <CIDIC> NodeX: flexible page data structure and admin forms that reflect that structure that can be generated by the app automatically from the metadata set by the admin. and yeild a key value pair list for all pages that will be used by the front end.
[15:56:33] <NodeX> not really what I meant. WHat do you want to achieve by using mongodb for your datastore
[15:57:10] <CIDIC> NodeX: it would seem using mongodb would be less complex than mysql to achieve this
[15:58:14] <CIDIC> I may be wrong
[15:58:38] <NodeX> you mentioned that you would have to write a lot of code to maintain things, why is thta a problem?
[15:58:41] <NodeX> that*
[15:59:12] <CIDIC> I guess it isn't a big problem but it would seem that scale could become a problem
[15:59:42] <CIDIC> if a site has a ton of pages that use the same page type (usually the default) and I make a change to the default page type it would have to go through and update every page document right?
[16:00:07] <NodeX> that's not hard just issue a multiple update
[16:00:08] <CIDIC> although it would have to do that anyway but not do as much
[16:16:26] <dbasaurus> In my mongo log, I see this error quite frequently -> ClientCursor::find(): cursor not found in map ……. My understanding is that this is caused by a timeout iterating through a find result set. Can anyone confirm? Also, I am seeing this message a lot as well -> [conn1296877] killcursors: found 0 of 1…. Is that statement carry the same meaning?
[16:27:43] <rclements> Anyone around that can possibly help with this Rails/Mongo issue? https://t.co/MJJbTqt31K
[16:33:45] <Derick> NodeX: how is it coming along?
[17:07:34] <KamZou> Hi, could you please tell my what's the max length of a mongo request ?
[17:08:05] <Derick> request or response?
[17:08:12] <KamZou> request
[17:08:47] <Derick> "maxMessageSizeBytes" : 48000000,
[17:08:57] <Derick> which each document being 16MB
[17:09:10] <Derick> (and a query is one document - about)
[17:09:45] <KamZou> Derick, and speaking of caracters ?
[17:10:04] <Derick> i can't answer that, as it depends on the amount of fields and types etc.
[17:10:33] <Derick> but that question more is... what are you trying to do? :-)
[17:10:36] <cheeser> it's just math at this point, KamZou
[17:14:24] <NodeX> Derick : any luck with that gdb ?
[17:14:34] <Derick> NodeX: which one?
[17:14:38] <NodeX> sorry, I missed your message
[17:14:40] <Derick> did I miss something?
[17:14:50] <NodeX> https://gist.github.com/anonymous/7234506
[17:14:57] <NodeX> that has some php_mongo_* information
[17:15:10] <Derick> yes
[17:15:13] <Derick> this makes more sense
[17:15:19] <NodeX> haha, my bad sorry
[17:15:34] <Derick> what I do not like is the "optimized out" stuff
[17:15:46] <NodeX> I can disable anything you want me to
[17:15:58] <Derick> which driver is this?
[17:16:04] <Derick> NodeX: you need to recompile to fix that
[17:16:11] <NodeX> the latest I think, I installed it about 5 weeks ago
[17:16:11] <Derick> (recompile the driver, and not strip it)
[17:16:32] <Derick> can you do:
[17:16:34] <Derick> frame 2
[17:16:43] <Derick> or rather: "bt full" ?
[17:17:04] <NodeX> Just going to pull the latest dirver and recompile, 1 sec
[17:17:25] <Derick> NodeX: afraik that I'll have to leave in 14 mins though
[17:18:29] <NodeX> https://gist.github.com/anonymous/7236468
[17:18:35] <NodeX> that's "bt full"
[17:18:47] <Derick> k
[17:18:58] <Derick> before recompiling, can I ask you to change the code here?
[17:19:39] <NodeX> https://gist.github.com/anonymous/7236479
[17:19:43] <NodeX> yeh, I'll change anythign
[17:19:52] <NodeX> we can leave it till tomorrow if you have to go ;)
[17:20:19] <Derick> add the code from example #1 in before your "new" (but remove the one in that example): http://us2.php.net/mongolog.setcallback
[17:21:25] <NodeX> you want me to just make a new file with that code?
[17:22:03] <Derick> sure,
[17:22:10] <Derick> I don't quite know what your code is now
[17:23:06] <NodeX> it's just $db=new MongoClient()
[17:23:11] <Derick> oh, then yes
[17:23:25] <NodeX> https://gist.github.com/anonymous/7236559
[17:23:32] <NodeX> that's example #1 from php.net
[17:23:42] <Derick> where is the output of the script? :)
[17:24:34] <NodeX> https://gist.github.com/anonymous/7236578
[17:24:56] <Derick> it must have shown more
[17:24:58] <NodeX> that's the same as if I do php-cgi c.php
[17:25:02] <NodeX> no, that's everything
[17:25:30] <Derick> that's not possible
[17:25:40] <NodeX> https://gist.github.com/anonymous/6d7fe69fa86fd390837c
[17:25:49] <Derick> can you run it with php -ddate.timezone=UTC yourscript.oho ?
[17:26:34] <NodeX> https://gist.github.com/anonymous/7236616
[17:26:43] <Derick> now *that* makes sense
[17:26:49] <NodeX> Couldn't connect to 'localhost:27017': get_server_flags: got unknown node type <-----
[17:27:02] <NodeX> this is why I think it stems from this mongodb installation
[17:27:25] <Derick> which mongod version is this?
[17:27:29] <NodeX> 2.4.6
[17:27:34] <Derick> it looks like its sending rubbish
[17:27:52] <NodeX> as I say it wont start with init.d .. keeps moaning about some C lib crap
[17:27:52] <Derick> there is still a bug, and I really like to find out what it is though
[17:28:02] <Derick> can we pick this up tomorrow?
[17:28:07] <NodeX> so I have to start with nohup mongodb --fork....
[17:28:09] <Derick> ping me in the morning if you wish
[17:28:09] <NodeX> yeh of course
[17:28:17] <NodeX> I'll ping you around 9 ;)
[17:28:22] <Derick> make it 10 ;-)
[17:28:22] <NodeX> have a good night and thanks :)
[17:28:26] <NodeX> 10 it is
[17:28:26] <Derick> np
[17:42:08] <bartzy> Hi
[17:42:17] <bartzy> What is the difference between mongodump and mongoexport ?
[18:05:56] <NodeX> export can dump to json / csv iirc
[18:06:36] <cheeser> mongodump is raw bson, iirc
[18:08:18] <NodeX> plus that lol
[20:27:39] <_sri> is $out support for the aggregate command currently broken in 2.5.3? it does not appear to send outputNs with the command reply, even though the target collection is created
[20:31:52] <cheeser> according to the docs $out only returns an empty "result" array and "ok" : 1
[20:32:44] <_sri> cheeser: and drivers are supposed to look through the pipeline spec to find the target collection?
[20:33:00] <cheeser> the java driver does, yes.
[20:33:05] <_sri> eeeep
[20:33:10] <_sri> thanks
[20:33:12] <cheeser> and i believe that's spec'd as well.
[20:33:14] <cheeser> one sec
[20:33:34] <cheeser> "The driver will return the raw document returned from the server. The user can decide whether to instantiate a collection using the name specified in the $out operator."
[20:34:02] <cheeser> at least in the java driver, cursors are lazy so we return a find() cursor on that collection.
[20:34:11] <cheeser> if you don't iterate, it costs nothing.
[20:34:15] <_sri> are those docs public?
[20:34:28] <cheeser> this one isn't. let me see what's on mongodb.org
[20:35:11] <ShellFu> Im using the ruby driver, and I cant seem to get a document updated in full without issuing both a save and an update.
[20:35:27] <cheeser> this is all i can find http://docs.mongodb.org/master/reference/operator/aggregation/out/#pipe._S_out
[20:35:53] <_sri> thanks again
[20:35:54] <ShellFu> hash that im passing doesnt change. I simply pass it to save as is and update as its. If i use one or the other than only part of the hash seems ot be saved/updated
[20:36:30] <_sri> ah, i guess if it's guaranteed to be the last stage it makes sense
[20:37:29] <cheeser> yeah. it's required to be the last one.
[20:41:14] <_sri> i've only been looking at the perl driver, which still uses outputNs, i guess it needs to be updated again
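The $out behaviour cheeser quotes can be sketched in the mongo shell: the command's "result" array comes back empty, and the driver (or user) reads the named collection afterwards. Collection and field names below are illustrative:

```javascript
// mongo shell sketch: $out materializes the pipeline output into a
// collection instead of returning it in the command reply.
db.events.aggregate([
  { $group: { _id: "$user", n: { $sum: 1 } } },
  { $out: "perUserCounts" }   // must be the final pipeline stage
]);
db.perUserCounts.find();      // read the materialized results back
```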
[20:49:02] <schnittchen> I see a strange, but consistent, performance degradation after moving to a more powerful server...
[20:50:20] <schnittchen> like, factor 2 worse, even though everything is probably served from memory
[20:55:23] <angasulino> schnittchen, same physical location?
[20:56:00] <schnittchen> no..., why do you ask? I query locally
[20:56:14] <angasulino> because latency
[20:57:05] <schnittchen> it's loopback in both cases
[20:57:46] <schnittchen> on the old machine, running inside a linux-vserver, on the new, inside an lxc container
[20:58:20] <schnittchen> is there a way to use unix domain sockets for connection, to eliminate that possibility?
[21:10:22] <schnittchen> i already ruled out the container virtualization as a cause
[21:20:56] <tbjers> Hello, is there a way to implement UUID _ids that can be sorted on based on a timestamp or similar, like a since_id?
[21:21:11] <tbjers> For greater granularity than ObjectId, that is.
[21:27:44] <http402> you could use longs in your _id property and sort there?
[21:33:52] <eldub> Something happened to my replicaset (it's only in testing thankfully) and it's like none of the members know about the set anymore.
[21:34:09] <eldub> when I issue a rs.status() here are my results: "errmsg" : "loading local.system.replset config (LOADINGCONFIG)"
[21:34:45] <eldub> db.isMaster()
[21:34:47] <eldub> oops
[21:35:58] <eldub> I found out why
[21:36:08] <eldub> I never changed my config to the new IP sheme -- duh
[21:40:09] <eldub> How do I change the hsot IPs in the replicaset config
[21:46:03] <eldub> cfg = rs.conf()
[21:59:45] <eldub> I figured it out --
[22:26:56] <jesse_> hey guys. I'm trying to build a p2p application and would like to distribute it (on linux systems) with everything needed to run out of the box. The application itself is written in Python. Is it possible to box everything (drivers, pymongo, blah blah blah) and create a self-sustained package?
[22:34:36] <retran> what's this have to do with mongo, jesse
[22:54:27] <eldub> I'm still unable to change my primary's host from IP to host name
[22:54:28] <eldub> any ideas?
[22:54:39] <eldub> rs.config() shows my primary as my IP but the rest of the replica set as hostnames..
[23:04:29] <joannac> eldub: reconfig?
[23:10:43] <eldub> joannac I'm reading on that, but when I do a reconfig and put in the host name of my node... it always says "cannot find self"
[23:25:51] <joannac> You could force it if you're sure your hostname is setup correctly eldub?
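For the record, the host change eldub is after is a plain reconfig issued from the primary; "cannot find self" usually means the new hostname doesn't resolve back to the member running the reconfig. A mongo-shell sketch (member index and hostname are hypothetical):

```javascript
// mongo shell, run against the current primary
cfg = rs.conf();
cfg.members[0].host = "db0.example.local:27017"; // hypothetical hostname
rs.reconfig(cfg);
// rs.reconfig(cfg, { force: true }) is the last resort joannac mentions,
// e.g. when the set has no reachable primary -- use with care
```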