[00:33:34] <GrubLord> I’m doing some mongodb admin and I’d like help with a little issue - anyone around?
[00:33:50] <GothAlice> Probably several. Ask, don't ask to ask. What's the problem?
[00:34:12] <GrubLord> Cool. =) So, I’ve been following the LDAP instructions here: http://docs.mongodb.org/manual/tutorial/configure-ldap-sasl-openldap/
[00:34:34] <GrubLord> Everything works nicely, up to the point where I do the auth() function check at the bottom
[00:34:45] <GrubLord> Where I get “Error: 2 PLAIN mechanism support not compiled into client library.”
[00:35:21] <GrubLord> What I’d like to know is - what exactly is this mechanism not compiled into? MongoDB itself? saslauthd? Do I have to recompile mongo from source?
[00:35:59] <GrubLord> On here: http://search.cpan.org/~mongodb/MongoDB-v0.707.2.0/lib/MongoDB/MongoClient.pm I see an option for building the Perl driver with SASL support, but the previous page seems to imply that’s not necessary.
[00:36:31] <GrubLord> I’m running MongoDB v2.6.8, on CentOS 7
[00:37:11] <GrubLord> Installed it via the Mongo RPMs.
[00:44:47] <GothAlice> I'd expect to see client crashes if the CXX driver is compiled without --use-sasl-client, and issues if the installed SASL lib wasn't compiled with PLAIN support. Similarly, server-side.
[00:46:24] <GrubLord> What confuses me is that the saslauthd conf file is supposed to be set to MECH=ldap… saslauthd doesn’t even list a “PLAIN” mechanism in its list of mechanisms.
[00:46:27] <GrubLord> i.e. [root@localhost system]# saslauthd -v
[01:04:57] <GrubLord> Even though that’s exactly how they do it in: http://docs.mongodb.org/manual/tutorial/configure-ldap-sasl-openldap/
[01:48:09] <drlewis> Is there a way to slow down mongoimport to avoid disrupting my production servers?
[02:14:33] <GothAlice> drlewis: You could perform the inserts manually.
[02:14:53] <GothAlice> Then you would have full control over any rate limiting.
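[editor's note] A rough mongo-shell sketch of GothAlice's suggestion, assuming the documents have already been loaded into an array `docs` and a collection named `target` (both names made up): insert in small batches and pause between them, instead of letting mongoimport run flat out.

    var batchSize = 100;   // tune per your hardware
    var pauseMs = 250;     // pause between batches to limit production impact
    for (var i = 0; i < docs.length; i += batchSize) {
        db.target.insert(docs.slice(i, i + batchSize));
        sleep(pauseMs);    // mongo shell built-in, in milliseconds
    }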
[04:12:15] <jclif> Is anyone using MMS for backups on large datasets?
[04:21:01] <Freman> so... mongo is pretty magical... I have a small collection... { "name" : "role1", "description" : "a test role", "permit" : [ "scheduled", "persistent" ], "matches" : [ "shan" ] }
[04:21:24] <Freman> ^ with records like that (getting ahead of myself)
[04:21:58] <Freman> I'd like to do a find where doc.matches.filter(function(match){ var re = new RegExp(match, 'i'); return re.test($GIVENARG); });
[04:27:37] <Freman> or am I going to have to grab the entire collection and do it in app? (not really a problem, it's a small collection and it'll never be very big, but I thought it'd be cool to do it in mongo :D)
[04:33:02] <jclif> We're trying to get a 1.6 TB dataset from MMS but it's projected to take around 50 hours; is this something that people just grin and bear?
[04:37:01] <joannac> jclif: that's 8MB/s, which is pretty slow
[04:37:39] <Freman> db.cluster.find({$where: "var found = this.matches.filter(function(match){ var re = new RegExp(match, 'i'); return re.test('someshanserver'); }); return found.length > 0;"}) works :D
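[editor's note] The same query, reformatted as a function-valued $where for readability. $where runs JavaScript against every document and can't use indexes, which only works here because the collection stays small:

    db.cluster.find({ $where: function () {
        var target = "someshanserver";              // the $GIVENARG from above
        return this.matches.some(function (pattern) {
            return new RegExp(pattern, "i").test(target);
        });
    } });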
[04:37:41] <joannac> MMS outbound is faster than that .. are you maxing out your network?
[05:51:55] <arussel> is there a way to have a selector that returns an object but not its fields (without knowing all the fields in advance)? something like {"myobj.*" : 0}
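[editor's note] There's no {"myobj.*": 0} wildcard, but if the goal is simply to hide the subdocument's contents without naming them, excluding the embedded object as a whole does it; a minimal sketch with made-up names:

    // return every document minus the contents of the myobj subdocument
    db.coll.find({}, { myobj: 0 });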
[05:56:04] <cbuckley> hi there, I'm having trouble updating my mongodb install on CentOS 6 to 2.4.13. yum keeps telling me it's obsoleted by mongodb-org 2.6.8. Does anyone have any information on how to accomplish this, or have a URL I can look at?
[08:22:52] <Cygn> Is there any function in the mongodb query syntax that accepts a string and returns a regex expression? e.g. "/[a-Z]/" would return /[a-Z]/ ?
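[editor's note] No query operator in 2.6 compiles a string into a regex server-side, but the shell (and every driver) can build one client-side and pass it into the query; a sketch with made-up collection and field names:

    var pattern = "[a-z]";                 // the string form of the expression
    var re = new RegExp(pattern, "i");     // compile it client-side
    db.things.find({ name: re });          // same as {name: /[a-z]/i}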
[11:53:32] <tiwest> I've just upgraded php from 5.4.* to 5.6.6 on a CentOS 5.6 box and now it looks like the MongoDB driver is out of date. Is anyone able to point me in the right direction for upgrading that?
[13:24:48] <android6011> I need to create a query that grabs records for 2 days, and on both days only where the time is between 3:00PM and 5:00PM. Is $hour and $minute the best way to do this?
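[editor's note] Yes, via the aggregation pipeline (find() projections can't compute on dates). A hedged sketch, assuming a Date field called "ts"; the collection name and date range are made up, and $hour in 2.6 works in UTC:

    db.events.aggregate([
        { $match: { ts: { $gte: ISODate("2015-03-10T00:00:00Z"),
                          $lt:  ISODate("2015-03-12T00:00:00Z") } } },
        { $project: { doc: "$$ROOT", hour: { $hour: "$ts" } } },
        { $match: { hour: { $gte: 15, $lt: 17 } } }  // hours 15-16, i.e. 3:00PM up to 5:00PM
    ]);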
[13:35:08] <android6011> is there a recommended way of migrating data from postgresql into mongo? right now i have just been writing code to transform the data
[13:36:04] <Derick> android6011: that's probably the best way, but do realize that you often need to change your data schema too, to make optimal use of mongodb
[13:37:01] <android6011> ya, I have been, that's why I figured just writing code would be the best way
[13:38:06] <cheeser> that's what i ended up doing. thankfully my data was mostly flat and not terribly relational
[13:38:37] <android6011> also, for things we had before like "reference types" (a table with type values, e.g. event_types): the text on those is very small, so I'm assuming it's best to just put the type on each doc rather than requery by id to pull the type, especially when a doc might have 4 or 5 ref values on it?
[13:43:23] <d0x> Any ideas why com.mongodb.DBObject isn't available on my EMR cluster after putting the dependencies into the lib folder (as described in the mongo-hadoop connector EMR example)? I also put this question on Stack Overflow: http://stackoverflow.com/questions/28998333/connect-hadoophive-with-mongodb-on-aws-emr-class-not-found-com-mongodb-dbobjec
[13:44:01] <jokke> is it possible to make a query with the $in operator and preserve the order of the given array?
[13:45:17] <jokke> as in db.books.find({_id: {$in: [ObjectId('someid'), ObjectId('someotherid'), ObjectId('thirdid')]}})
[13:45:44] <jokke> which would return the books in order 'someid', 'someotherid', 'thirdid'
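[editor's note] $in makes no ordering promise, so the usual approach is to fetch and then re-sort client-side by each _id's position in the original array; a shell sketch (the ids below are the question's own placeholders, real ones are 24 hex characters):

    var ids = [ObjectId("someid"), ObjectId("someotherid"), ObjectId("thirdid")];
    var order = ids.map(function (id) { return id.str; });   // hex strings, for indexOf
    var books = db.books.find({ _id: { $in: ids } }).toArray();
    books.sort(function (a, b) {
        return order.indexOf(a._id.str) - order.indexOf(b._id.str);
    });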
[14:15:20] <dbclk> folks, I used rs.reconfig() to remove a replica set member from my primary that's complaining it can't connect to the primary over port 27017
[14:16:01] <dbclk> problem is, now, on the secondary machine, when I try to reconnect to the primary it's still complaining about the port, but when I go inside the mongo terminal it's marked as rs0:remove
[14:43:13] <spuz> Does the getN() method in the Java api ever return anything other than 0 after a document is inserted? http://api.mongodb.org/java/2.6/com/mongodb/WriteResult.html#getN%28%29
[14:43:34] <spuz> I seem to be receiving 0 even after a successful insert
[14:53:30] <Diegao> is there a way to run a distinct which returns a list of documents instead of the list of ids? I'm using mongoengine for python
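[editor's note] A hedged mongo-shell equivalent (mongoengine can run the same pipeline through aggregate()): group on the distinct field and keep one whole document per value with $first. The collection and field names are made up:

    db.coll.aggregate([
        { $group: { _id: "$category", doc: { $first: "$$ROOT" } } }
    ]);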
[15:10:50] <pamp> hi, when querying with find, is it possible to use statements like IF in the projection?
[15:15:16] <GothAlice> pamp: Alas, $cond, which sounds like what you want, is an aggregate operator. Ref: http://docs.mongodb.org/manual/reference/operator/aggregation/cond/
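[editor's note] A small example of that $cond operator standing in for an IF inside a $project stage; the collection and field names are assumptions:

    db.items.aggregate([
        { $project: {
            name: 1,
            status: { $cond: [ { $gte: ["$qty", 100] }, "bulk", "retail" ] }
        } }
    ]);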
[16:58:48] <technoid_> Anyone happen to know if the rockmongo project is still active?
[17:04:05] <boutell> Hi. I am inserting multiple documents with a single insert() call. There is a lot of confusing information about the limitations. Am I restricted to no more than 1,000 documents per insert? What is the maximum combined size of the documents? Right now my code just starts a new batch if the total raw BSON size of the documents exceeds 16MB, but I’m getting complaints that writes are being refused for batches near that limit.
[17:04:24] <boutell> I suppose inserting *one* 16MB document *must* be allowed, because that is the per-document limit.
[17:04:30] <boutell> (these are inserts in raw mode.)
[17:05:15] <boutell> I am guessing that there is overhead per document in the “batch,” which is pushing me just over 16MB if, for instance, there are four documents that are each exactly 4MB. But I don’t know how to account for the overhead correctly. Any thoughts?
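[editor's note] A hedged sketch of batch splitting that accounts for the framing overhead boutell suspects. In 2.6 the server advertises its limits via isMaster(): maxWriteBatchSize (1000 documents) and maxBsonObjectSize (16MB, which bounds the insert command as a whole, not just each document), so leaving headroom below 16MB avoids the refusals. Names like `docs` and `target` are made up:

    var limits = db.isMaster();
    var maxDocs  = limits.maxWriteBatchSize;          // 1000 in 2.6
    var maxBytes = limits.maxBsonObjectSize - 65536;  // headroom for batch framing
    var batch = [], size = 0;
    docs.forEach(function (doc) {
        var docSize = Object.bsonsize(doc);           // shell BSON-size helper
        if (batch.length && (batch.length >= maxDocs || size + docSize > maxBytes)) {
            db.target.insert(batch);                  // flush the current batch
            batch = []; size = 0;
        }
        batch.push(doc); size += docSize;
    });
    if (batch.length) db.target.insert(batch);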
[17:08:30] <technoid_> rockmongo.com appears to be down, and PHPMoAdmin just had a vulnerability found... what web-based GUI for managing Mongo would anyone recommend?
[17:13:18] <krisfremen> nothing that is written in php
[17:38:37] <schu> hi there. how can I install mongodb 2.4.9 on a debian system via apt? (I need this specific release but apt always tries to force me to use 3.0.0)
[17:49:11] <GothAlice> Storing them as proper date objects, just ignoring the date component, would not only be more efficient than storing them as strings like that, but would also allow you to use http://docs.mongodb.org/manual/reference/operator/aggregation/#date-operators
[17:49:42] <ndb> GothAlice: yes, I'm aware of the fact, just can't change it ASAP
[17:50:04] <GothAlice> ndb: I feel that pain. Alas, until you do, you can't do what you want.
[17:53:50] <wc-> hi all, currently in the process of upgrading to mongo 3, i am using user authorization and having some issues with connecting to the mongod v3 from a mongo client v2.0.4
[17:54:29] <wc-> were there some authorization-related changes in mongod v3 that require clients, code libraries etc to be updated?
[18:15:33] <wc-> for anyone else that might hit this auth with old clients problem, this jira ticket has helpful info and a temporary solution: https://jira.mongodb.org/browse/SERVER-17459
[18:32:15] <wc-> I'm trying to completely remove all traces of any user accounts I might have on this mongod instance
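[editor's note] A hedged sketch of that cleanup in the shell, assuming auth is temporarily disabled or the connected user has sufficient privileges; dropAllUsersFromDatabase is the 2.6+ command, and covers users stored in admin.system.users:

    db.getMongo().getDBNames().forEach(function (name) {
        db.getSiblingDB(name).runCommand({ dropAllUsersFromDatabase: 1 });
    });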
[18:56:38] <claygorman_> he's a sql guy so beware folks
[18:59:36] <claygorman_> i know there was a change in the auth scheme for 3.0 that I think broke my application when I upgraded, so i rolled back until i could figure it out
[18:59:48] <GothAlice> claygorman_: A popular topic today.
[19:03:45] <GothAlice> To test 3.0 out I asked MMS to spin up some mongods on some of my nodes, configured them for 2.6 default authentication, added the users I needed, waited a minute, then bulk loaded my data back. Didn't have a single hiccup with authentication at the application level after "pip install -U pymongo" to bump that package version.
[19:03:47] <claygorman_> i tried the 2.6 style auth schema version with mongo 3 hoping that would work
[19:04:51] <wc-> unfortunately there's no way I'm gonna get management to allow this web automation stuff to manage our prod instances until it's proven
[19:04:59] <wc-> so until then I'm stuck with .js files and ansible
[19:05:13] <GothAlice> Because that's so much… more reliable? O_o
[19:05:32] <phutchins> Anyone know much about BSON? I know this isn't really directly mongo related (although I'm pulling the BSON ids from mongo). I'm trying to decode the Mongo ObjectId (in ruby) and convert the object id string to binary. Anyone have any ideas on how?
[19:05:58] <wc-> when the automation first arrived we hooked up an aws account to it, it got stuck in some spin cycle spinning up instances and shutting them down
[19:08:22] <phutchins> GothAlice: it does... but I'm working toward using the general bson library to decode it... Hoping to find someone who knows it a bit better than I do to get pointed in the right direction
[19:08:47] <phutchins> GothAlice: I.e. I can do things like decode the date from the object id, etc... But I've got a more in depth task i'm working on
[19:08:50] <wc-> claygorman_: i had to do the following commands in a script, then start adding users, then enable auth and restart mongo:
[19:09:32] <phutchins> GothAlice: yeah I've gone through the docs...
[19:10:16] <phutchins> GothAlice: there's no to_binary method so I'm a little stuck...
[19:11:23] <GothAlice> phutchins: There isn't to_binary, but there certainly is an array of bytes: see the "data" instance attribute.
[19:12:43] <phutchins> GothAlice: ah, it wasn't clear to me that that was what that was... that's helpful, thanks
[19:40:58] <phutchins> GothAlice: so I get that I now have an array of bytes, but I'm still unclear on how to get this to binary or hex or something that I can decode. Have any idea?
[19:42:10] <GothAlice> … what is an array of bytes if not by definition "binary"?
[19:43:03] <GothAlice> phutchins: And define "decode". It sounds like you have everything you need and a general Ruby question about how to manipulate those things. Unfortunately, I don't Ruby, but #ruby may be able to help.
[19:43:42] <phutchins> GothAlice: true, I'm not forming my questions clearly :). This is definitely #ruby territory now... I appreciate the help!
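[editor's note] For reference, the classic 2.x ObjectId layout phutchins is decoding, sketched in the mongo shell (the 24-hex-character string is just the 12 bytes hex-encoded, so the same slicing works on any driver's string form):

    var oid = ObjectId();                             // any ObjectId
    var hex = oid.str;                                // 24 hex chars = 12 bytes
    var timestamp = parseInt(hex.substr(0, 8), 16);   // 4-byte Unix timestamp
    var machine   = hex.substr(8, 6);                 // 3-byte machine id
    var pid       = parseInt(hex.substr(14, 4), 16);  // 2-byte process id
    var counter   = parseInt(hex.substr(18, 6), 16);  // 3-byte counter
    print(new Date(timestamp * 1000));                // matches oid.getTimestamp()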
[22:46:54] <daidoji> hello, I'm testing a Mongo backup from an EC2 instance using journaling
[22:47:02] <daidoji> I take the snapshot and everything looks fine
[22:47:36] <daidoji> but when I set it back up to test a recovery from backup, it takes quite a while for the journal to recover
[23:16:18] <daidoji> joannac: ahhh I see, out of curiosity, why aren't these files captured in a snapshot?
[23:17:00] <joannac> are you restoring to the same place you did the backup from?
[23:18:19] <joannac> did you take an EC2 snapshot of everything (including journal)?
[23:18:20] <daidoji> joannac: no, I spun up a new instance to make sure the backup process would work and attached my snapshotted volumes to that new instance
[23:18:33] <daidoji> yeah, all three volumes except the root volume
[23:19:47] <joannac> "If the dbpath is mapped to multiple EBS volumes, then in order to guarantee the stability of the file-system you will need to Flush and Lock the Database."
[23:21:33] <daidoji> joannac: hmm, not really. The data is all on one volume, the journal and log are on another
[23:21:42] <daidoji> joannac: since all the data was on one volume I didn't do it
[23:22:18] <joannac> daidoji: the previous paragraph is "The journal file allows for roll forward recovery. The journal files are located in the dbpath directory so will be snapshotted at the same time as the database files."
[23:22:18] <daidoji> joannac: that's why I'm asking for clarification :)
[23:22:54] <joannac> so yes. journal is inside dbpath. journal is on a different volume. therefore dbpath is on multiple volumes. therefore you need to lock the database
[23:23:07] <daidoji> joannac: ahhh okay, thanks for clearing that up.
[23:23:20] <joannac> It's not unclear to me.... is there another way it should be worded?
[23:23:25] <GothAlice> Like nuking from orbit, it's the only way to be sure.
[23:23:47] <joannac> (genuine question, not being snarky)
[23:24:01] <daidoji> joannac: oh I'm not sure. It might have just been my read of it. (no offense taken)
[23:24:39] <daidoji> joannac: I did read it really quickly so it might just be reader error
[23:24:43] <joannac> okay. feel free to PM me if you think of anything
[23:24:50] <daidoji> joannac: sure, I appreciate your help
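[editor's note] The flush-and-lock step joannac is quoting, as it looks in the shell; writes block between the two calls, so keep the window short:

    db.fsyncLock();    // flush data and journal to disk, block writes
    // ... take the EBS snapshot(s) of every volume under dbpath here ...
    db.fsyncUnlock();  // release the lock once the snapshot has started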
[23:24:53] <joannac> daidoji: anyway back to the original question
[23:25:05] <joannac> on a fast filesystem, mongodb doesn't preallocate journal files
[23:25:25] <joannac> so for example, on my ssds, I only have 1 journal file and it's tiny
[23:25:35] <joannac> if I took a snapshot of that and put it onto slow disks
[23:25:56] <joannac> when mongodb starts up, it replays what's in the journal, then does some tests and goes "oh crap these disks suck. preallocate time!"
[23:26:32] <daidoji> joannac: ahhh I see. That makes sense
[23:27:40] <joannac> there's not really anything practical you can do at the snapshot time to prevent that. I guess if you knew that in advance, you should lock the database (which flushes changes from journal to data, and data to disk)
[23:28:03] <joannac> then you could just copy the data files and create your own journal files like in that docs page
[23:28:28] <joannac> but I'm not sure it's worth the effort?
[23:28:48] <daidoji> roger, yeah I think you're right
[23:28:59] <daidoji> it sounds like some work to save 10 min
[23:29:15] <daidoji> and it wasn't really a blocker
[23:31:48] <GothAlice> When our DB was on AWS, the inexpensive instances were still given just under 18GB of RAM. So we decided to roll with no persistent storage at all on our DB nodes.
[23:33:03] <GothAlice> (Fully self-configuring on startup, ephemeral VMs, with occasional snapshots and streamed archiving of the oplog to S3 in the event of catastrophic failure.)
[23:33:29] <GothAlice> Side benefit: point-in-time restores.
[23:34:53] <daidoji> GothAlice: oh really? Where'd you move to? Your own colo?