#mongodb logs for Tuesday the 21st of January, 2014

[00:08:29] <donski> Hi, is it possible to have nested groupings as part of the mongo aggregation pipeline? I'm stuck on an issue where I want to have a nested grouping as the final output. I've detailed it here: http://pastie.org/8652193 if anyone could take a look and advise me
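
The usual answer to donski's question is to chain two $group stages: group on the compound key first, then $push those inner groups into an array. A minimal sketch in Python with pymongo, using hypothetical field names since the pastie contents aren't reproduced here:

    from pymongo import MongoClient

    db = MongoClient().mydb  # hypothetical database

    pipeline = [
        # First pass: group on the compound (outer, inner) key.
        {"$group": {
            "_id": {"country": "$country", "city": "$city"},
            "count": {"$sum": 1},
        }},
        # Second pass: regroup on the outer key, pushing the inner
        # groups into an array to get the nested shape.
        {"$group": {
            "_id": "$_id.country",
            "cities": {"$push": {"city": "$_id.city", "count": "$count"}},
        }},
    ]
    for doc in db.events.aggregate(pipeline):
        print(doc)
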
[01:31:29] <george2> can I force mongodb to use a sequential _id starting from 1?
[01:32:40] <arthurnn> george2: nope..
[01:32:53] <arthurnn> actually u cannot tell mongodb to do that. but u can simulate it.
[01:33:11] <arthurnn> see this http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
[01:33:14] <george2> ok. I'll just use another id field then, yes?
[01:33:23] <george2> perfect
[01:33:25] <george2> thanks
[01:33:37] <arthurnn> which lang are u using? which driver?
[01:34:09] <george2> mongoengine, django and Python.
[01:34:37] <arthurnn> i see... django might have something to do that for you.. i wrote a gem to do that behind the scenes in ruby
[01:35:26] <george2> unfortunately, MongoEngine's Django docs are rather... lacking, so I'm pretty much making stuff up as I go.
[01:36:04] <preaction> george2: iirc mongoengine has something to do it automatically. let me check my code quick
[01:36:41] <preaction> george2: it's called "SequenceField"
[01:37:16] <george2> preaction: nice, thanks!
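
For reference, a minimal mongoengine sketch of what preaction points to (model and field names hypothetical); SequenceField keeps a counter document in a separate collection and hands out 1, 2, 3, ... on save, much like the tutorial arthurnn linked:

    from mongoengine import Document, SequenceField, StringField, connect

    connect('mydb')  # hypothetical database

    class Ticket(Document):
        seq = SequenceField()    # auto-assigned: 1, 2, 3, ...
        title = StringField()

    t = Ticket(title='first ticket')
    t.save()
    print(t.seq)  # -> 1
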
[03:04:38] <SaintMoriarty> Hello
[03:05:38] <SaintMoriarty> I am currently writing a test app to get used to NoSQL and mongo. I am using the MEAN stack and wanted to see what is the best way to take an update POST in node and apply it to the model?
[03:15:03] <kveroneau> Hello everyone. I have a scary question about MongoDB. Although this video on YouTube was uploaded 2 years ago, I really need to know if this is still a fact, or was just FUD.
[03:15:35] <kveroneau> It says something about MongoDB not writing/flushing to disk often enough, making data loss more of a possibility.
[03:15:51] <kveroneau> Video I found is here: https://www.youtube.com/watch?v=URJeuxI7kHo
[03:16:42] <kveroneau> Is this FUD? Please say it is so, I really like MongoDB, but some things said there are sort of... well... scary for a DBA or someone who makes choices on what DB to use in an app.
[03:21:44] <retran> are you really a DBA
[03:22:18] <retran> if you were one, you'd simply test it
[03:22:32] <retran> instead of being weird and dramatic and talking about a TV show you saw
[03:22:52] <jiffe> youtube never lies
[03:24:17] <kveroneau> retran, never said I am a DBA. However, I am currently in a position where I need to confirm that MongoDB will work for my client's web application. I am doing some search, and have tested MongoDB with Python and it seems all too good to be true.
[03:24:34] <kveroneau> *research
[03:24:41] <retran> ok, it's too good
[03:24:57] <retran> if you want to make a real question, i suggest making a real question
[03:25:18] <retran> with things like "how often/under what circumstances does mongod flush"
[03:25:36] <retran> that's how computer scientists conduct research
[03:26:19] <retran> and then, like any good scientist, compare it to something else
[03:26:26] <kveroneau> Can the flushing be manually forced via the API?
[03:26:39] <retran> kveron, have you ever used a database before?
[03:27:42] <kveroneau> retran, yes I have used MySQL for quite awhile. I use transactions to make sure my changes are atomic.
[03:27:52] <retran> are you aware, for example, that "flush tables" in mysql doesn't actually literally guarantee the data is written once the command is done
[03:28:31] <retran> mysql using innodb has tons of data that is not yet flushed to disk at any given time
[03:28:44] <retran> and "flush tables" is no remedy
[03:28:51] <retran> the only remedy is mysql shutdown
[03:29:16] <retran> it has 0 things to do with the server, and everything to do with the nature of transactional processing
[03:29:27] <retran> 0 things to do with the quality of the server, i mean
[03:30:07] <retran> if you want to see heartbreaking data loss, talk to people who've done important things with Mysql
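
On kveroneau's question of forcing a flush through the API: yes. A minimal pymongo sketch (modern driver API, database and collection names hypothetical) showing both the fsync admin command and a journaled write concern:

    from pymongo import MongoClient, WriteConcern

    client = MongoClient()

    # Ask the server to flush all pending writes to disk right now
    # (fsync is an admin command).
    client.admin.command('fsync')

    # Or require journal acknowledgement per write: the call does not
    # return until the write has reached the on-disk journal.
    orders = client.mydb.get_collection('orders',
                                        write_concern=WriteConcern(j=True))
    orders.insert_one({'status': 'paid'})
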
[03:32:59] <kveroneau> Hmm, this has been most of the debate about SQL vs NoSQL: each party states that their DB has data integrity and the other's doesn't.
[03:34:07] <kveroneau> In the #django channel, they mentioned "risking your data integrity", whereas in this channel, you say there is data integrity.
[03:34:25] <retran> i suggest you provide facts
[03:34:57] <retran> i think you're more interested in personal drama
[03:38:04] <kveroneau> I already talked to Eliza recently about my personal drama, so I don't think that's it.
[03:39:57] <cheeser> there are no known data loss issues last i heard a few weeks back
[03:41:54] <kveroneau> cheeser, thank you. That's what I wanted to hear. I am sure there are lots of developers and production uses to report solid statistics.
[03:42:12] <retran> a simple affirmation is enough for you?
[03:42:20] <retran> what kind of research is this
[03:42:25] <cheeser> anecdata++
[03:42:43] <retran> heh
[03:46:10] <kveroneau> Just awful, a search for "mongodb case studies" isn't really that helpful, with the top results of course coming from mongodb's website.
[03:47:56] <cheeser> straight from the horse's mouth
[03:49:13] <kveroneau> The only information to reassure cheeser's claim is the company list on the MongoDB website, which shows lots of large companies with usually high volumes of traffic using MongoDB. For example eBay wouldn't use a DB without proper data integrity, would it?
[03:59:35] <cheeser> highly unlikely.
[03:59:52] <cheeser> just look over all the case studies on the site. lots of high end users.
[04:55:41] <btclovernick> mongodb old version was running
[04:55:52] <btclovernick> killed it
[04:55:55] <btclovernick> accidentally ran new version of mongodb
[04:56:00] <btclovernick> now it is not starting
[04:56:03] <btclovernick> so running repair
[04:56:08] <btclovernick> it is a 198GB db
[04:56:10] <btclovernick> how long will it run
[04:56:34] <btclovernick> how to recover db
[04:57:01] <cheeser> if you don't have backups, you'll probably just have to wait it out now.
[04:58:19] <btclovernick> ok cheeser: how long will it take
[04:59:03] <btclovernick> Filesystem Size Used Avail Use% Mounted on
[04:59:04] <btclovernick> 441G 344G 75G 83% /
[04:59:09] <btclovernick> theres only 75GB available
[04:59:16] <cheeser> no idea
[04:59:25] <btclovernick> initandlisten] Index: (1/3) External Sort Progress: 2730200/4305650 63%
[04:59:26] <btclovernick> Mon Jan 20 23:11:54.002 [initandlisten] Index: (1/3) External Sort Progress: 2772400/4305650 64%
[04:59:26] <btclovernick> Mon Jan 20 23:12:04.001 [initandlisten] Index: (1/3) External Sort Progress: 2815000/4305650 65%
[04:59:27] <btclovernick> it is here
[04:59:35] <btclovernick> Every 2.0s: du -sh Mon Jan 20 23:12:13 2014
[04:59:36] <btclovernick> 263G .
[04:59:39] <btclovernick> it is now at 263GB
[04:59:43] <btclovernick> scared to lose all the data
[04:59:49] <btclovernick> will all the data be recovered
[05:20:00] <Mallanaga> sup folks
[05:21:07] <Mallanaga> I'm trying to learn this stuff... and I'm using magic cards as a tool... what's a good way to denormalize a JSON file into several documents?
[05:22:32] <btclovernick> Index: (1/3) External Sort Progress: 3322900/4305650 77%
[05:22:33] <btclovernick> Mon Jan 20 23:34:55.009 [initandlisten] Index: (1/3) External Sort Progress: 3366500/4305650 78%
[05:22:34] <btclovernick> Mon Jan 20 23:35:05.013 [initandlisten] Index: (1/3) External Sort Progress: 3409100/4305650 79%
[05:22:37] <btclovernick> how long does this take
[05:23:49] <cheeser> btclovernick: please stop pasting in channel. use a pastebin.
[05:23:56] <btclovernick> ok
[05:23:57] <cheeser> also, looks like it's close to done. 80%
[05:24:45] <btclovernick> but it says index 1/3
[05:25:07] <cheeser> oh, i dunno.
[05:28:31] <btclovernick> http://pastie.org/8652675
[05:28:36] <btclovernick> 6% again
[05:30:28] <cheeser> looks like you'll get that for each collection as it rebuilds the indexes.
[05:37:18] <btclovernick> :(
[05:37:20] <btclovernick> this is bad
[05:37:27] <btclovernick> how long are the users going to get errors
[05:40:20] <cheeser> no idea. are they actively getting errors now? if the data is there but no indexes, it'll just be slow
[05:41:21] <btclovernick> no mongodb isn't starting
[05:41:23] <btclovernick> remember
[05:41:46] <btclovernick> running http://pastie.org/8652690
[05:41:50] <cheeser> well, it's starting. whether or not it's responding to requests is what i'm asking.
[05:42:08] <btclovernick> no
[05:50:05] <btclovernick> http://pastie.org/8652703
[05:50:22] <btclovernick> it needed cheeser:
[05:50:26] <btclovernick> but it is not opening agian
[05:50:26] <btclovernick> mongod dead but subsys locked
[05:50:28] <btclovernick> same problem
[05:50:48] <Mallanaga> question about mongoDB... noSQL in general, I suppose
[05:50:51] <btclovernick> Starting mongod: -bash: /var/log/mongodb/mongodb.log: Permission denied
[05:51:11] <Mallanaga> is the idea to have all the information for an instance of the model in the one document?
[05:52:41] <btclovernick> cheeser: please help repair is done
[05:52:43] <btclovernick> but not opening
[05:53:11] <cheeser> sorry. i have to head to bed now. it's 1am already. good luck.
[05:55:12] <btclovernick> cheeser : pls man
[05:55:19] <btclovernick> 16M 2014-01-20 21:05 transactions.ns
[05:55:19] <btclovernick> [mark@colo1 mongodb]$ sudo chown -R root:root /var/log/mongodb/
[05:55:21] <btclovernick> [mark@colo1 mongodb]$ sudo /etc/init.d/mongod restart
[05:55:22] <btclovernick> Stopping mongod: [FAILED]
[05:55:23] <btclovernick> Starting mongod: -bash: /var/log/mongodb/mongodb.log: Permission denied
[05:55:25] <btclovernick> [ OK ]
[05:55:26] <btclovernick> [mark@colo1 mongodb]$
[05:55:26] <btclovernick> still not opening
[06:00:55] <Mallanaga> nobody?
[06:01:05] <Mallanaga> crickets...
[09:03:46] <crashev> is there a way to output data from mongo shell to file ?
[09:04:15] <megawolt> yep
[09:04:38] <megawolt> http://docs.mongodb.org/v2.2/reference/mongoexport/
[09:04:38] <Nodex> mongo DATABASE commands.js > output.json
[09:05:28] <axi> mongodb on a raspberry pi -> segmentation fault! any ideas? pm me please
[09:05:51] <Nodex> axi, I am not sure mongodb is supported on the pi processor
[09:07:11] <Nodex> perhaps you can try this : http://c-mobberley.com/wordpress/index.php/2013/10/14/raspberry-pi-mongodb-installation-the-working-guide/
[09:08:16] <crashev> Nodex: ok, I saw that, but I was expecting some way to redirect it to a file from the console/mongo shell itself
[09:08:37] <crashev> megawolt: thx, will check it out, have not used this so far
[09:11:14] <Nodex> crashev : megawolt's offering is for EXPORTING data
[09:14:28] <megawolt> @crashev here is your sample http://pastie.org/8653084
[09:25:58] <crashev> thanks
[09:29:02] <megawolt> @crashev you're welcome
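
If a script is acceptable in place of the shell redirect, the same thing can be done from Python; a minimal pymongo sketch (database and collection names hypothetical), using bson's json_util so BSON types like ObjectId serialize cleanly:

    from bson import json_util
    from pymongo import MongoClient

    db = MongoClient().mydb  # hypothetical database

    # One JSON document per line, written straight to a file.
    with open('output.json', 'w') as f:
        for doc in db.mycollection.find():
            f.write(json_util.dumps(doc) + '\n')
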
[09:51:47] <r1pp3rj4ck> hey guys, i'm having a problem with mongodb
[09:51:58] <r1pp3rj4ck> using 2.4.9
[09:52:15] <r1pp3rj4ck> it seems that this tutorial is wrong: http://docs.mongodb.org/manual/tutorial/return-text-queries-using-only-text-index/
[09:52:55] <ron> the tutorial is never wrong.
[09:52:59] <r1pp3rj4ck> we need to sort by an integer instead of the score
[09:53:01] <ron> there are wrong people, not wrong tutorials.
[09:53:24] <r1pp3rj4ck> not saying i'm not wrong of course, that's why i came here to find someone who can correct me :)
[09:54:15] <ron> I was kidding though, the tutorial could be wrong. no idea.
[09:55:15] <r1pp3rj4ck> i know :)
[10:00:34] <r1pp3rj4ck> anyone else know something about this?
[10:01:12] <theblackbox> hello all, I'm just looking for a way to start the mongod as a service - it currently locks the terminal instance that inits it and this is very annoying when trying to script. I'm pretty damn sure I can't be the only one that's tackled this, but I can't seem to find any useful info
[10:01:53] <ncls> theblackbox: are you on linux ?
[10:01:59] <theblackbox> yep
[10:02:03] <r1pp3rj4ck> theblackbox, what distro?
[10:02:11] <theblackbox> debian
[10:02:25] <r1pp3rj4ck> write an init script
[10:02:37] <theblackbox> I have
[10:02:45] <theblackbox> it still locks the script
[10:03:01] <r1pp3rj4ck> how?
[10:03:33] <megawolt> do you use your script with db.eval() ?
[10:03:42] <theblackbox> locks/owns/binds - you know what I mean where the instance is bound to the running program?
[10:03:50] <theblackbox> megawolt: me? no
[10:04:23] <megawolt> ouww i just understood
[10:04:30] <megawolt> mongodb is a service
[10:04:46] <megawolt> it's normal for it to lock your terminal
[10:05:36] <megawolt> for example when you try to execute mongod in command prompt. It works but you cannot write anything
[10:05:46] <theblackbox> correct
[10:06:02] <megawolt> when you close the window mongod stops
[10:06:19] <theblackbox> but this cannot be the desired method of instantiating a MongoDB _daemon_ ... can it
[10:06:33] <megawolt> in windows you can use mongod as a windows service
[10:07:11] <r1pp3rj4ck> theblackbox, how do you start it?
[10:07:22] <megawolt> start it and leave it
[10:07:33] <megawolt> open an another blackbox for shell
[10:07:43] <megawolt> and connect to DB
[10:09:57] <theblackbox> but I'd like to script the starting and stopping of the DB as part of my deployment, so it would be desirable to swoop in on SSH wings and --restart the mongos instance but given the way the mongod "service" behaves this breaks my deployment script - there is no way I can automate the starting/stopping of mongod
[10:10:46] <r1pp3rj4ck> theblackbox, put an & on the end?
[10:11:45] <theblackbox> r1pp3rj4ck: ;)
[10:11:58] <Nodex> theblackbox : which operating system are you running?
[10:12:14] <theblackbox> hah, that's the doo-hicky I was looking for... I knew there was some way to force this
[10:12:24] <megawolt> @theblackbox also you can use db.shutdownServer() to close it... But restart can be a little complicated :d
[10:13:13] <theblackbox> megawolt: yeah I'm working on restart. I think if I keep tabs on a pid file I might save myself some pain. We'll see
[10:13:18] <Nodex> theblackbox : which operating system are you running?
[10:13:45] <theblackbox> Nodex: the mongod service is running on Debian, the deployment script is running on OS X with Bamboo installed
[10:14:03] <Nodex> and you simply want it to run int he background?
[10:14:06] <Nodex> in the*
[10:14:22] <r1pp3rj4ck> theblackbox, so is that what you were looking for?
[10:14:57] <theblackbox> Nodex: correct, but I think it was simply a matter of forgetting the trailing & on the init command. Once I've put that in place the rest should fall together
[10:15:14] <Nodex> false
[10:15:25] <Nodex> when you detach your terminal it will exit
[10:15:30] <Nodex> use --fork instead
[10:16:24] <r1pp3rj4ck> theblackbox, do what Nodex suggests, it's better :D
[10:16:39] <r1pp3rj4ck> didn't know there was a --fork param on it
[10:17:19] <Nodex> I'm not sure why you're not using the init scripts for debian though
[10:17:59] <theblackbox> Nodex: I didn't notice them?
[10:18:10] <r1pp3rj4ck> theblackbox, didn't you say you have init script?
[10:18:37] <theblackbox> yeah, just a plain skeleton I've edited
[10:18:37] <r1pp3rj4ck> if you use the init script, restarting mongodb should be as simple as /etc/init.d/mongodb restart
[10:20:02] <theblackbox> agh, cool, I didn't realise there was a deb package, that's much better!
[10:20:13] <r1pp3rj4ck> :D
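
For reference, both routes mentioned above work: mongod's --fork flag detaches the process from the terminal (it refuses to fork without a log destination), and the Debian package ships an init script. Minimal invocations, with hypothetical paths:

    # fork into the background; --logpath (or --syslog) is required
    mongod --config /etc/mongodb.conf --fork --logpath /var/log/mongodb/mongod.log

    # or, with the packaged init script:
    /etc/init.d/mongodb restart
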
[10:26:28] <r1pp3rj4ck> can i somehow sort the results by a specified integer instead of the score when running text command?
[10:26:52] <r1pp3rj4ck> it doesn't work as this tutorial suggests: http://docs.mongodb.org/manual/tutorial/return-text-queries-using-only-text-index/
[10:27:21] <r1pp3rj4ck> and I *really* need to do this somehow
[10:43:49] <abhishek> Hi everyone, is there a way to get the whole collection while using distinct in mongo?
[10:44:26] <kali> abhishek: explain what you want
[10:46:12] <abhishek> kali: For eg i have documents in a collection with duplicate emails, i can get the unique ones using db.collection.distinct('email') but this returns only the email values, is it possible to fetch the other fields too?
[10:46:31] <Nodex> aggregation framework or map/reduce it
[10:46:55] <abhishek> Aggregation would be nice for better speed i guess.
[10:47:51] <kali> yeah, aggregation is faster, but there is a 16MB limit on the result
[10:48:02] <kali> so in the end, you may need to go map/reduce
[10:50:09] <abhishek> oh, i forgot that. anyway, i don't think the size will ever go beyond that limit.
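
A minimal sketch of the aggregation route Nodex mentions, in Python with a modern pymongo (field names hypothetical): grouping on email yields one output document per distinct address, and $first carries the other fields along.

    from pymongo import MongoClient

    db = MongoClient().mydb  # hypothetical database

    pipeline = [
        {"$group": {
            "_id": "$email",               # one result per distinct email
            "name": {"$first": "$name"},   # keep fields from the first doc seen
            "copies": {"$sum": 1},         # how many docs share this email
        }},
    ]
    for doc in db.people.aggregate(pipeline):
        print(doc)
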
[11:47:51] <dump> #debian
[11:48:44] <Nodex> #windows
[11:49:57] <dump> Sorry, typed in the wrong place! :)
[12:21:34] <megawolt> #dos
[12:23:22] <ron> #mock
[12:34:28] <scristian> hi all, I have a replicaSet, is safe to run on primary db.repairDatabase() ?
[12:35:42] <megawolt> you should freeze the replica set and repair each member of the RS separately
[12:40:49] <kali> freeze ? what ?
[12:41:45] <megawolt> http://docs.mongodb.org/manual/reference/method/rs.freeze/
[12:42:06] <megawolt> or http://docs.mongodb.org/manual/reference/command/shutdown/
[12:42:53] <kali> ha. that just freezes one member... basically the idea of repairing a replica set is: run repair on each secondary (in turn), then switch over and run it on the former primary
[12:42:54] <scristian> I need to run repairDatabase only to reclaim disk space
[12:43:23] <kali> scristian: on which member ?
[12:44:35] <megawolt> i think the disk mapping is not directly transferred to the secondary
[12:45:13] <megawolt> just the oplog continues from there
[12:46:37] <scristian> On primary
[12:49:56] <kali> scristian: then you need a failover. you can either: run repair on the primary and let the other secondaries elect one of them as primary, or: trigger the failover yourself by freeze()ing the primary, or altering the configuration, or just stopping it
[12:50:29] <kali> scristian: in all cases, you'll have a few seconds of mayhem
[12:54:08] <scristian> great, thank you so much for the help
[12:55:02] <kali> scristian: you're aware you need twice the database size on disk for the repair to run ?
[12:56:01] <scristian> right now it is 40G, after repair it will be 5G. do I need twice 40G?
[12:57:16] <kali> scristian: mmm no. you should need the dataset size + a few GB
[12:57:27] <kali> scristian: so about 7GB in your case
[12:58:14] <kali> scristian: because another option is to just delete everything from the former primary disk, and let it pull the dataset from one of the two other nodes
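
kali's procedure, sketched with a recent pymongo (host and database names hypothetical); repairDatabase and replSetStepDown are real server commands, but treat this as an outline rather than a drop-in script:

    from pymongo import MongoClient

    # 1. Repair each secondary in turn, connecting to the member directly.
    secondary = MongoClient('secondary-host', 27017, directConnection=True)
    secondary.mydb.command('repairDatabase')  # needs dataset size + a few GB free

    # 2. Step the primary down and let the other members hold an election
    #    (the "few seconds of mayhem").
    primary = MongoClient('primary-host', 27017, directConnection=True)
    try:
        primary.admin.command('replSetStepDown', 60)
    except Exception:
        pass  # the server drops connections on stepdown; an error here is normal

    # 3. Repair the former primary the same way once it is a secondary.
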
[13:03:13] <chronos> hello guys, is anyone here working with Delphi and the Delphi Mongo Driver? i need some help with a find command
[13:15:51] <avril14th> Hello, is it possible to stream mongodb's log from a remote instance?
[14:02:23] <bdiu> Anyone interested in a few hours paid consulting? I'm trying to come up with the ideal strategy for map/reduce for ongoing data analysis of our data set...
[14:04:21] <ron> go Nodex
[14:06:20] <bdiu> nodex?
[14:06:56] <ron> Nodex
[14:08:59] <Nodex> no ta, too busy
[14:09:24] <ron> oh well
[14:11:16] <Nodex> bdiu : perhaps you can just ask the questions and someone will be able to help you anyway
[14:14:24] <bdiu> it's a broad question, I really want someone to dig in with me :-)
[14:14:38] <bdiu> anyone have any recommendations?
[14:15:00] <Nodex> 10gen did do consultations, not sure if they still do
[14:15:13] <bdiu> very very pricey
[14:16:12] <bdiu> i'd prefer to spend less than $200/hr
[14:16:15] <bdiu> (way less)
[14:16:40] <Nodex> probably not going to get a good consultant who knows what they're talking about unless you're willing to spend the money tbh
[14:17:04] <ron> yeah, like, I'm willing to do it for half that, but I don't know what I'm talking about.
[14:17:09] <Nodex> haha
[14:17:09] <bdiu> haha
[14:17:29] <bdiu> I think the $100/hr area is still very reasonable for a professional... just not an agency rate
[14:34:34] <lgp171188> How do I connect to the default test database as an admin user? I have enabled auth and added a user to the admin database. Using that user account and credentials I am able to connect to the admin database, but I want to be able to connect to the test database and all the databases. How do I do it?
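
What lgp171188 describes usually comes down to giving the admin-database user a role that spans databases and then authenticating against admin while working elsewhere. A minimal pymongo sketch (credentials hypothetical):

    from pymongo import MongoClient

    # Authenticate against admin; to touch test and other databases the
    # user needs a cross-database role such as readWriteAnyDatabase.
    client = MongoClient('mongodb://admin:secret@localhost:27017/?authSource=admin')
    print(client.test.mycollection.find_one())
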
[15:02:38] <Razz_> Hi
[15:02:58] <Razz_> I'm having an issue with Mongo that I've been banging my head against for the better part of two days:
[15:03:29] <Nodex> best to just ask :)
[15:04:19] <Razz_> I store an array of longs using the C++ API, which works just fine. However, when I try to retrieve the data again one of the documents (not all, just one) throws an assertion error 10320:BSONElement: Bad type 110 / 50 / etc
[15:04:57] <Razz_> so it seems that it goes wrong in the BSONElement::size() function, which is incapable of finding the type for that document, even though at insertion time they are all the same (NumberLong)
[15:05:26] <Razz_> google is trying to tell me the DB is corrupt, however if I set the value to 42 everywhere it works again -.-
[15:06:12] <Razz_> The odd thing is though that printing the document as JSON shows a correct JSON object and using the Mongo CLI also finds and shows the document just fine
[15:06:42] <Razz_> TL;DR I'm getting a BSONElement:Bad type <random number> error, any clues?
[15:11:07] <Joeskyyy> Quick glance at the c++ docs shows that BSONElement::size returns an int (if I'm reading this correctly)
[15:11:11] <Joeskyyy> http://api.mongodb.org/cplusplus/current/classmongo_1_1_b_s_o_n_element.html#a1d46b39f5c5a2235a1c5c73f22528974
[15:11:24] <Joeskyyy> Is it possible you're casting wrong somewhere? Or not casting when needed?
[15:11:51] <Razz_> Joeskyyy: no, it seems to get into the 'default' clause of the switch inside size()
[15:12:28] <Joeskyyy> Gotcha, been a while since I've C++'d, thought I'd take a stab ;)
[15:13:03] <Razz_> Joeskyyy: thx in any case :-)
[15:14:01] <Razz_> calling 'valid()' on the BSONObj also returns false, so the object is genuinely broken, just what about it is broken is a mystery to me
[15:16:50] <theblackbox> can I set a log level? I would like to see what command is being issued to the server from my node app
[15:21:02] <Joeskyyy> Last I heard the way to do that was to set the profiler level correctly, and the "slow queries" threshold to a really low value
[15:21:03] <Joeskyyy> http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
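
A minimal pymongo sketch of the profiler trick Joeskyyy describes (database name hypothetical): level 2 with slowms=0 records every operation, and the recorded operations land in the capped system.profile collection.

    from pymongo import MongoClient

    db = MongoClient().mydb  # hypothetical database

    # Level 2 profiles every operation; slowms=0 treats everything as slow.
    db.command('profile', 2, slowms=0)

    # The commands issued by the node app then show up here:
    for op in db['system.profile'].find().limit(5):
        print(op.get('op'), op.get('ns'), op.get('millis'))
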
[15:21:56] <richthegeek> can anyone give me a brief summary on the issue with excessive data usage? I've got a DB that's 2gb now with only ~270mb of data+indexes... the database is mostly append-only so it just seems insane?
[15:23:41] <Joeskyyy> richthegeek: Has to do with record padding most likely, mongo likes to use extra data to avoid moves on disk
[15:23:48] <Joeskyyy> http://docs.mongodb.org/manual/core/record-padding/
[15:23:55] <Joeskyyy> *is one with the docs today*
[15:23:59] <richthegeek> Joeskyyy: sure, but a factor of 10 seems quite excessive?
[15:24:07] <kali> richthegeek: or that http://docs.mongodb.org/manual/faq/storage/
[15:24:15] <Joeskyyy> ^ that to
[15:48:06] <b0c1> hmm, interesting
[15:58:42] <b0c1> ok, maybe later
[16:03:33] <oceanx> hello, I've created a replicaSet: two members (one primary and one secondary) and an arbiter. i shut mongo off beforehand (i converted a standalone mongo to a replicaset) and copied all the content over ssh to the secondary, then started all the mongod instances, configured the replicaSet on the primary and added the secondary and the arbiter
[16:04:46] <oceanx> now what's strange is that i only had a db called "applicationdb" (90GB), and now I can see there's also a db called "local" which weighs almost the same on both the secondary and the primary
[16:05:02] <cheeser> local is used for replication
[16:05:12] <cheeser> the oplog is there, e.g.
[16:06:33] <oceanx> in fact i could only find the oplog and a few other pieces of information inside. is there a way to be sure it doesn't weigh too much? or is it normal that it takes that much (right now I don't have a space problem since it resides on a 2tb xfs partition)
[16:07:37] <cheeser> well your oplog will be largeish to replicate the changes to the secondary
[16:07:59] <cheeser> i'm not sure it should be 90G but i'm not a guru on those bits.
[16:08:38] <joannac> it's 5% of your disk space iirc
[16:12:20] <giskard> hello!
[16:13:01] <giskard> did anyone here use mgo (golang) to rsInitiate
[16:18:19] <oceanx> thanks cheeser and joannac it was out of curiosity mainly! :)
[16:18:50] <oceanx> exit
[16:23:40] <cheeser> oceanx: any time
[17:05:40] <tiller> Hi there
[17:06:16] <tiller> Is it normal that with the Java driver, I can't serialize a class into a DBObject? Even if this class implements Serializable?
[17:10:32] <cheeser> pastebin your error
[17:11:53] <tiller> http://pastebin.com/HVXJEUZS
[17:12:00] <tiller> Message: http://pastebin.com/rQw0zx2R
[17:13:07] <tiller> The class that starts the serialization: http://pastebin.com/U6Kr3FUi
[17:13:14] <cheeser> what version of the driver?
[17:14:14] <cheeser> oh, you'll need a no-arg constructor
[17:14:15] <tiller> mongo-java-driver-2.11.3.jar
[17:15:05] <tiller> Is there a way to obtain an attribute-based serialization?
[17:15:19] <cheeser> though that's unlikely to be the cause of a serialization bug, it'll definitely hit you on the other end.
[17:15:30] <cheeser> use morphia
[17:29:11] <bdiu> Sorry to repeat my earlier query, but... anyone have any interest in a few hours of paid consulting to help with some architectural/query/map-reduce goals? If not, anyone have any recommendations for other individuals or companies that do this? (Not 10gen as their rate is way too high for me, $450/hr... ouch!)
[17:29:33] <bdiu> thanks
[17:55:33] <Nodex> bdiu : try People Per Hour or Freelancer maybe?
[17:56:58] <bdiu> Nodex: good idea... might head that way... just wary about the quality of assistance
[18:06:01] <tpayne> Does anyone know how i can run mongod on mavericks?
[18:06:13] <cheeser> homebrew works just fine
[18:07:24] <tpayne> cheeser: it installs fine
[18:07:42] <tpayne> but when i run it, i get this:
[18:07:47] <cheeser> and runs. i just upgraded this morning.
[18:07:55] <tpayne> http://dpaste.com/1563074/
[18:08:19] <cheeser> add -vv to your plist file and start it
[18:08:34] <tpayne> where can i find the plist?
[18:09:03] <cheeser> or run mongod -f /usr/local/etc/mongod.conf
[18:09:11] <cheeser> in /usr/local/opt/mongod
[18:09:33] <cheeser> running mongod directly will be more useful
[18:09:48] <tpayne> cheeser: i've been running it directly
[18:09:52] <tpayne> not using launchd
[18:10:33] <cheeser> run mongod -f /usr/local/etc/mongod.conf -vv
[18:10:47] <tpayne> right i just did that
[18:11:15] <tpayne> http://dpaste.com/1563090/
[18:12:18] <Joeskyyy> need a bit higher up in the log probably
[18:12:23] <cheeser> yeah
[18:12:57] <tpayne> ahhh ok one sec
[18:13:27] <tpayne> haha ok i fixed my own problem
[18:13:28] <Joeskyyy> tail -f just does the last 10 lines then keeps it open to stdout :P
[18:14:21] <tpayne> lol doh
[18:14:23] <tpayne> ok thanks guys
[18:14:31] <tpayne> i wasn't running it in sudo haha idiot!
[18:14:35] <Joeskyyy> it happens :D
[18:15:12] <cheeser> figured it was either port in use or file permissions :)
[20:01:03] <darkk^> I see quite an interesting behavior: mongodb can run long read (getmore) query for minutes even if the client is already disconnected and the socket is in CLOSE_WAIT state. I see the issue on 2.0.x branch, but I've not found any relevant bugs in jira. Have anyone seen behavior like that?
[20:05:00] <paulovap> is there a way to make text search ignore punctuation like "é" and recognize it as "e"?
[20:11:41] <kali> "é" is not punctuation, it's diacritics
[20:14:49] <BlakeRG> Hello, been banging my head against the wall on this for about an hour now, i just need to remove a single value from an array in a document (PHP) https://gist.github.com/BlakeGardner/615f9ff20959d4ab969a
[20:16:22] <kali> BlakeRG: "$pull" => {"segments.About" => "idfa" }
[20:17:51] <BlakeRG> kali: LOL i think that worked
[20:17:54] <BlakeRG> question though
[20:18:04] <BlakeRG> some of these segments.blah have spaces in the name
[20:18:10] <BlakeRG> think that would cause issues?
[20:29:12] <BlakeRG> kali: thanks for the help!
[20:29:16] <BlakeRG> i knew it had to be something simple
[20:35:52] <kali> BlakeRG: avoid using variable stuff as a docuemnt key... prefer { name: "About", values: ["", "" ] } or you'll regret it sooner or later
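
kali's answer restated in Python rather than PHP (collection name and _id hypothetical), along with the $pull for the document shape kali recommends:

    from pymongo import MongoClient

    db = MongoClient().mydb    # hypothetical database
    user_id = 'some-user-id'   # hypothetical _id

    # Remove one value from the nested array, as in the PHP snippet;
    # dot notation also works when a key contains spaces.
    db.users.update_one({'_id': user_id},
                        {'$pull': {'segments.About': 'idfa'}})

    # With the recommended shape, e.g.
    # {'segments': [{'name': 'About', 'values': ['idfa', ...]}]},
    # the same removal uses the positional operator:
    db.users.update_one({'_id': user_id, 'segments.name': 'About'},
                        {'$pull': {'segments.$.values': 'idfa'}})
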
[20:40:36] <treaves> When I use appendBinData() to create an entry on a BSONObj, what is the correct way to retrieve that data back off of the BSONValue?
[20:41:20] <BlakeRG> kali: will take that into consideration, i am just writing some scripts to remove data from an existing MongoDB that i didn't design
[20:41:47] <treaves> value() returns five bytes too much.
[20:52:10] <generic_nick> i have a process that does some backing up of old data to long term storage
[20:52:21] <generic_nick> i am continually seeing this as it fails and restarts: shard global version for collection is higher than trying to set to
[20:52:36] <generic_nick> i tried restarting mongos and running flushRouterConfig
[20:53:00] <generic_nick> there are multiple processes using that mongos, but only the one reading that specific collection is having issues
[21:43:45] <hugod> I'm trying to set up MMS, but it keeps picking up an unqualified hostname, which doesn't resolve. The replica set is configured using IP addresses. Can we configure MMS to force it to use ip's?
[21:45:29] <hugod> (I have the MMS agent running on the same node as the arbiter, and seed it with `localhost`)
[22:46:00] <proteneer> my document looks like { "foo": { "someList": { "array_1": [1234] } } }, and i want to do an $push on the "array_1"
[22:46:04] <proteneer> what should my update look like/
[22:54:20] <Joeskyyy> ({query}, {$push:{"foo.someList.array_1":5}})
[22:54:21] <Joeskyyy> should do it
[23:04:46] <proteneer> Joeskyyy, holy moly, that's easy! thanks :)
[23:06:10] <Joeskyyy> No prob :D
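
Joeskyyy's answer as a complete pymongo call (filter and value hypothetical):

    from pymongo import MongoClient

    db = MongoClient().mydb   # hypothetical database
    doc_id = 'abc123'         # hypothetical _id

    db.things.update_one(
        {'_id': doc_id},                          # the {query} part
        {'$push': {'foo.someList.array_1': 5}},   # append 5 to the nested array
    )
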
[23:50:26] <proteneer> Joeskyyy, so find_one returns the entire document, is there a way to query for only a field within the document? so in the above case, I only want to display the 'array_1' list
[23:54:29] <proteneer> nm
[23:54:32] <cheeser> not sure about the driver API you're using but find takes an optional projection
[23:54:35] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.find/
[23:54:36] <proteneer> well
[23:54:37] <Joeskyyy> ^
[23:54:38] <proteneer> i'm using python
[23:54:38] <Joeskyyy> That.
[23:54:47] <proteneer> i just had to pass in -fields
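
The "fields" argument proteneer found is a projection: the second parameter to find/find_one limits what comes back. A minimal pymongo sketch continuing the example above:

    from pymongo import MongoClient

    db = MongoClient().mydb   # hypothetical database
    doc_id = 'abc123'         # hypothetical _id

    doc = db.things.find_one(
        {'_id': doc_id},
        {'foo.someList.array_1': 1, '_id': 0},  # return only the nested array
    )
    print(doc)  # -> {'foo': {'someList': {'array_1': [1234, 5]}}}
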
[23:54:58] <justen_> What's the best way to handle saving changes to documents with populated fields? Right now I'm going through each field for each schema and changing the populated fields to _id's before saving, but it doesn't really feel like the right solution.
[23:55:53] <cheeser> well it certainly made no sense to me.
[23:55:57] <Joeskyyy> Yeah.
[23:56:07] <Joeskyyy> Gonna have to ask for a reclarification justen_ haha
[23:56:39] <justen_> ha ok. I have a document, and for display purposes on the client side I populate some fields in that document.
[23:56:57] <justen_> when I want to change any of the fields in that doc and go to save it
[23:57:18] <justen_> I manually go through all the populated fields to "reduce" them back to _id's
[23:57:28] <justen_> so that there isnt a save error
[23:57:36] <justen_> what's a better way?
[23:57:37] <cheeser> not helping :)
[23:57:48] <justen_> What's ambiguous?
[23:57:54] <Joeskyyy> … All of it. Sorry ):
[23:58:00] <Joeskyyy> Do you have a before/after example?
[23:58:10] <cheeser> what does "reduce them back to _id's[sic]" mean?