[08:26:39] <Mothership> Ravenheart have any idea how can I populate mongodb document field with a c# List<myCustomClass>? Can't figure it out.
[08:31:34] <Ravenheart> just add a property to your model of type List<Whatever>
[08:35:30] <rbott> hi folks. does the upgrade part in the official manual still apply (e.g. simply exchange the binaries), if you need to upgrade from 1.8 to 2.4?
[08:38:42] <rbott> if I understand the upgrade docs for the 2.0, 2.2 and 2.4 series correctly, the database files are binary-compatible. however, i could not find any information on whether 1.8 -> 2.4 works directly
[09:03:29] <Mothership_> Ravenheart, http://pastebin.com/VdFkYsQ0 what exactly should I do here?
[09:18:18] <Mothership_> Ravenheart, I know how to do queries. The problem is that I have a C# class representing the document, and when I save an instance of it to a document, the string fields save fine, and for arrays I can cast to (BsonArray) so that works too, but I can't save a List of a custom class as the value of a document's property.
[11:24:03] <jekle> abhishek: I saw an orm abstraction library for php that let you combine both. (https://github.com/jenssegers/laravel-mongodb/)
[11:32:02] <netQt> hi all, i'm trying to create replica sets. i start mongod with "mongod --port 27017 --dbpath /srv/mongodb/rs0-0 --replSet rs0 --smallfiles --oplogSize 128" but i keep getting this error
[11:32:20] <netQt> does anyone know how to fix this?
[11:32:36] <netQt> replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
[11:34:10] <kali> netQt: i'm not sure this is an error when you create the RS
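[Editor's note: kali is right, EMPTYCONFIG is expected at this stage. It just means the replica set has not been initiated yet. After starting mongod with --replSet, connect with the mongo shell and initiate it. A minimal sketch; the set name matches the command above, the host is a placeholder:

```
// connect to the mongod started with --replSet rs0, then:
rs.initiate()
// or supply an explicit configuration:
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] })
rs.status()   // the member should report itself as PRIMARY shortly after
```
]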
[11:34:33] <dawik> im wondering, if there is a convenient way to print/output a BinData object to binary representation? I can only find hex() and base64() methods
[11:34:56] <dawik> from the CLI that is, it is easier via a client :)
[11:35:12] <kali> dawik: not knowing what's inside? i'm not sure
[12:43:07] <_NiC> Can someone confirm that after setting a keyFile in my config, my replica set will work (as in, be replicated) regardless of what users I add to any of the databases, including the admin db?
[12:43:57] <rspijker> yeh, that should work fine _NiC
[12:45:03] <_NiC> Then I just need to figure out what users to add to administrate this whole thing.
[12:47:26] <_NiC> are there any "best practices" when it comes to general admin users, like one with userAdminAnyDatabase, one with clusterAdmin, and so on?
[12:48:46] <rspijker> _NiC: not sure if there are any defined best practices. We always just decide which separate roles we need. So does it make sense to split these responsibilities? If so, we do it, otherwise we don’t
[12:51:20] <_NiC> rspijker, It's a fairly small setup with limited number of people having access, so we could do fine with a single user.. My limited experience does not cover what makes sense yet :-)
[12:52:13] <rspijker> we generally have a single userAdmin, a clusterAdmin, instanceAdmin and then users for the separate DBs as makes sense
[12:52:55] <rspijker> so we split them up quite a bit, actually
[12:55:20] <_NiC> Hm, ok. instanceAdmin. *reads about*
[12:57:53] <rspijker> I don’t think that’s an actual mongo thing, just a term we use internally :)
[12:58:18] <rspijker> the clusterAdmin is responsible for the cluster administration, the instanceAdmin controls the replica sets inside of the shards
[13:00:12] <_NiC> Ok, I think that makes sense.. :-)
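[Editor's note: a minimal sketch of creating the split admin users discussed above, for the mongo shell against the admin database. User names and passwords are placeholders; this is the 2.6+ db.createUser syntax, older versions used db.addUser with a slightly different shape:

```
use admin
db.createUser({
  user: "useradmin",
  pwd:  "<password>",
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
})
db.createUser({
  user: "clusteradmin",
  pwd:  "<password>",
  roles: [{ role: "clusterAdmin", db: "admin" }]
})
```
]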
[13:15:44] <_NiC> When I access the :28017 web interface, what's the role needed for that
[13:19:03] <_NiC> The docs recommend disabling that for production systems, but I guess it's safe to have it up as long as you have it password protected? Also, can mongo serve it over https instead of http? Or do I need to stick something in front of it?
[13:24:23] <rspijker> fairly sure it can’t do http
[13:24:40] <rspijker> no idea what the roles are for the REST interface or how that would even work
[13:24:56] <rspijker> (can’t do https, that is of course)
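[Editor's note: mongod cannot serve the HTTP console over TLS itself, so if you do keep it up, one option is to terminate TLS in a reverse proxy in front of it. A minimal nginx sketch, assuming the console listens on localhost:28017; certificate paths are placeholders:

```
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/mongo-console.crt;
    ssl_certificate_key /etc/ssl/mongo-console.key;
    location / {
        proxy_pass http://127.0.0.1:28017;
    }
}
```
]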
[13:26:03] <_NiC> there's a few things to click on in there that requires REST... enabled it on my local test, it does provide some useful info
[13:26:11] <_NiC> I think I'll have to look into that a bit more as well
[13:33:30] <_NiC> The docs say for 'nohttpinterface' that "Authentication does not control or affect access to this interface.", but that's actually not the case.. I got a login popup in my browser when accessing the web interface, and I was able to log in with my userAdminAnyDatabase user..
[13:37:47] <_NiC> How do you do monitoring of mongodb?
[13:45:44] <rspijker> _NiC: MMS and we use Zabbix for the general monitoring
[13:48:57] <_NiC> rspijker, Feel like sharing your zabbix templates? :-)
[13:49:40] <rspijker> the mongo stuff is all from a plugin
[13:57:15] <rspijker> no worries. Zabbix is good to monitor overall health, in a more general sense. MMS can really help look into health as well as performance metrics
[13:57:41] <ue> hey, do i have to chown /data/db every time i want to run mongod?? what's the way to do it only once?
[13:58:05] <rspijker> why would you have to chown more than once?
[14:27:08] <ue> my username is of format: firstname.lastname
[14:28:48] <rspijker> ok… so… apparently: If the length of the username is greater than the length of the display column, the numeric user ID is displayed instead.
[14:29:14] <ue> is there a way to check if 1131087 is actually my user id?
[15:17:25] <MathiasM> Hi! What's the right way of doing the inverse of populate()? I.e. I want to save a document that should have a link id to another document, but I only have a unique string identifying that document. What's the correct way to lookup (or create) that document before saving the link to it in the first document?
[15:18:34] <MathiasM> can/should I do it in a pre("save") schema action?
[15:18:18] <andrei_> does anyone know how I can delete or reset two-factor authentication for mms?
[15:25:53] <tscanausa> having 40 Mongos servers is a nightmare
[15:35:00] <uehtesham90> hey, i have to do a mongodump of a database but i am getting the following error: assertion: 14035 couldn't write to file: errno:27 File too large
[15:35:08] <uehtesham90> is there a way around this
[15:35:25] <uehtesham90> i have sufficient space in my external drive
[15:45:22] <rspijker> uehtesham90: is your external drive formatted in some horrible file system?
[15:52:45] <rspijker> ok, so you can’t just reformat it
[15:52:53] <rspijker> well, then you’re kind of out of luck
[15:52:57] <uehtesham90> and the external drive i have has 400 GB of space free....and you still can't transfer the whole file?
[15:53:12] <rspijker> vfat can only handle files up to 4GB in size (2GB on older FAT16)
[15:53:31] <uehtesham90> assuming i move the existing data to another storage device, how can i reformat the drive?
[15:54:14] <mehwork> if i have a collection with: {'cars': {'bmw': '2014', 'audi': '2003'}} how can i use find to query cars.audi = 2003 ?
[15:55:32] <rspijker> uehtesham90: you can use a utility like gparted, probably easiest. There are command line tools as well, but they are a bit more complex if you don’t know too much about this stuff...
[15:56:01] <rspijker> mehwork: almost exactly like that…. db.collection.find({"cars.audi": "2003"})
[15:56:40] <uehtesham90> what filesystem should i reformat it to?
[15:56:48] <rspijker> mehwork: ehm… wait… I think I might have misread that
[15:56:56] <rspijker> uehtesham90: depends, want to use it on windows too?
[16:15:57] <umquant> Could someone give me a little assistance on updating the "values" array in this schema https://gist.github.com/anonymous/3955ecd223979f695d09
[16:58:46] <mehwork> if i have a collection with: {'cars': {'bmw': '2014', 'audi': '2003'}}. What should i pass to .find() to see the value of bmw?
[16:59:01] <mehwork> without knowing what the value is, i just want to display the value
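[Editor's note: dot notation reaches into embedded documents, and to display a value without knowing it you project the field, e.g. db.collection.find({}, {"cars.bmw": 1}). A pure-Python sketch of how a dotted path resolves against a document; field names are taken from the question:

```python
def resolve_dotted(doc, path):
    """Resolve a MongoDB-style dotted path (e.g. "cars.audi") against a plain
    dict, mimicking how the server matches fields of embedded documents."""
    current = doc
    for part in path.split("."):
        if isinstance(current, dict) and part in current:
            current = current[part]
        else:
            return None  # path does not exist in this document
    return current

doc = {"cars": {"bmw": "2014", "audi": "2003"}}
```

resolve_dotted(doc, "cars.audi") returns "2003", and a missing path such as "cars.volvo" yields None, which mirrors a non-matching query.]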
[17:41:55] <umquant> Any idea why a negative slice wouldn't work in a find? For example if I do -5 I still get the first 5 elements
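[Editor's note: a negative $slice is defined to return the last N elements, like Python's negative slicing, so getting the first five suggests the operator is not being applied, e.g. {values: {$slice: -5}} must go in the projection argument of find(), not the query. A small Python sketch of the two behaviours (function name is illustrative):

```python
def slice_projection(arr, n):
    """Mimic MongoDB's {$slice: n} projection: positive n returns the first
    n elements, negative n returns the last |n| elements."""
    return arr[:n] if n >= 0 else arr[n:]

values = list(range(10))
```

slice_projection(values, -5) gives [5, 6, 7, 8, 9] (the last five), while slice_projection(values, 5) gives the first five.]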
[18:45:30] <denis> Hello. I am trying to build the mongo-hhvm-driver but I am missing CMakeLists.txt. Does anyone know where to find that file? Thank you.
[18:59:08] <rickibalboa> I have a problem with a findOne call hanging in nodejs. No other calls hang, if I make the call from shell / elsewhere it works fine, just this specific process. Even restarting it doesn't fix anything. Wtf is going on?
[19:52:32] <stefandxm> i guess if you are to write manual projections and what not it would be a different thing
[19:52:46] <stefandxm> but with decent intellisense its all the same :)
[19:53:10] <cheeser> yeah. from driver code it doesn't matter overly much. it gets to be a pita in the shell, though.
[20:35:19] <achiang> hello, looking to mongoimport a largish dataset on a single, non-sharded instance. it's going very slowly. i did read that indexing should be turned off to improve performance, but am a bit confused on this point, because i need a unique index, and my data isn't clean (it may have dupes)
[20:36:22] <achiang> so i've been relying on mongo's upsert to clean up this data for me... that is -- on a dupe, just update the record
[20:36:48] <achiang> guessing this probably isn't the best way to do things... looking for advice/pointers
[20:37:50] <cheeser> upserts are going to be slower. it introduces 1 query for each imported row.
[20:42:41] <achiang> hm... i am doing pre-processing in python already anyway. perhaps i could hash the field i want to be unique. then when encountering a new document, check the hash, and if not exist, write to a new, clean set of data. then pass that to mongoimport, and build an index after
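[Editor's note: the pre-processing idea above can be sketched as follows; pure Python, with field and function names purely illustrative. Deduplicate before import, then let mongoimport run insert-only and build the unique index afterwards:

```python
import hashlib

def dedupe(records, key_field):
    """Keep only the first record seen for each value of key_field.
    Hashing the value keeps the seen-set small even when keys are large."""
    seen = set()
    clean = []
    for rec in records:
        digest = hashlib.sha1(repr(rec[key_field]).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            clean.append(rec)
    return clean
```

Write the cleaned records out as JSON lines, feed that to mongoimport, and create the unique index once the import finishes; this avoids the one-query-per-row cost of upserts that cheeser mentioned.]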
[20:46:03] <achiang> a semi-related question is, what is that "check 9 ..." output telling me at the end of a mongoimport?
[20:56:57] <achiang> oh interesting. /me discovers the dropDups arg to building an index
[21:26:56] <achiang> ok, even more interesting... after some experimentation, on a newly initialized db, and importing 30K records, db.collection.update(<field>, upsert: True) ends up inserting far fewer records vs. db.collection.insert(), even if i have a unique index on <field>
[21:28:06] <achiang> in my experiments, i drop the db inbetween...
[23:23:37] <achiang> how can i find which documents have this duplicate key: "E11000 duplicate key error index: openmotion.buses.$loc_2dsphere dup key: { : \"0f12220001223122\" }"
[23:23:53] <achiang> i'm not exactly sure what i should be searching for
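[Editor's note: the usual way to locate duplicates is to group on the indexed field and keep groups with a count above one, e.g. an aggregation along the lines of db.buses.aggregate([{$group: {_id: "$loc", n: {$sum: 1}, ids: {$push: "$_id"}}}, {$match: {n: {$gt: 1}}}]) (the field name "loc" is inferred from the index name in the error). The grouping logic, sketched in pure Python:

```python
from collections import defaultdict

def find_duplicates(docs, field):
    """Group documents by `field` and return the values that occur more than
    once, with the _ids of the offending documents (mirrors a $group followed
    by a $match on the count)."""
    groups = defaultdict(list)
    for d in docs:
        groups[d[field]].append(d["_id"])
    return {value: ids for value, ids in groups.items() if len(ids) > 1}
```
]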
[23:56:52] <seanp2k> so I've been playing with mongodb for all of about an hour now, and I'm wondering if I should do this logic in python or mongo: I have a collection with info on ~100 plugins for an unrelated system. Each plugin has a plugin key, and I want to have a different collection that has a list of keys to ignore (to be used later for alerting based on the results).
[23:57:36] <seanp2k> I could very easily just get the list of ignored keys, get a list of all the keys that match my other criteria, and in python say "remove the ones that match anything in the ignore list".
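[Editor's note: at ~100 plugins, doing this client-side is perfectly reasonable; server-side, the equivalent filter would be a query like {"key": {"$nin": ignored_keys}} (the field name "key" is assumed). A sketch of the client-side version:

```python
def filter_ignored(plugins, ignored_keys):
    """Drop plugins whose key appears in the ignore list. Converting the
    ignore list to a set makes each membership test O(1)."""
    ignored = set(ignored_keys)
    return [p for p in plugins if p["key"] not in ignored]
```

Either approach is fine at this scale; pushing the filter into the query just saves transferring documents you are going to discard anyway.]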