[00:40:42] <TTimo> hello. Are there conditions under which mongodb will write out queries to mongodb.log even though db.getProfilingLevel() says 0 ?
[00:40:48] <TTimo> seems to be what I'm seeing atm
[00:51:03] <TTimo> also .. I should be able to enable profiling on a secondary, correct? db.system.profile.find() says "not master and slaveOk=false"
[00:56:12] <skot> yes, but you must do rs.slaveOk() to query a secondary.
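    (For reference, a minimal shell sketch of what skot describes; the limit is just an example. Run rs.slaveOk() in the same shell session before querying the secondary.)
        rs.slaveOk()                        // allow reads on this secondary
        db.system.profile.find().limit(5)   // profiler output, once profiling is enabled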
[01:14:31] <jaha> Anyone got a good way to "clean" an excel or csv to prep it for mongoimport? im getting an exception "CSV file ends while inside quoted field"
[02:18:48] <doxavore> Is there a way to set nofile limits in an upstart .conf file?
[02:18:54] <doxavore> I seem to be running into SERVER-2200
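    (For what it's worth, upstart has a per-job stanza for this. A hedged sketch of what the job's .conf might contain, assuming it lives somewhere like /etc/init/mongodb.conf; the numbers are just examples.)
        # raise the open-file limit for this upstart job (soft limit, then hard limit)
        limit nofile 64000 64000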
[02:28:15] <samurai2> hi there, anyone know how to dump mongodump --out into remote folder? thanks :)
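    (mongodump itself only writes to a local path, so the usual options are to dump a remote server into a local directory with --host, or to point --out at a mounted remote filesystem. A hedged sketch; host and paths are placeholders.)
        # dump a remote mongod into a local (or NFS/sshfs-mounted) directory
        mongodump --host remote.example.com:27017 --out /mnt/backups/dump-$(date +%F)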
[02:34:03] <robdodson> has anyone ever tried mongoskin with node? https://github.com/kissjs/node-mongoskin
[02:34:11] <robdodson> curious if it's stable and easy to work with
[02:35:53] <addisonj> it's built on mongodb-native, which is 10gen-supported
[03:23:38] <speakingcode> hi, got a newb question here. i want to build a database where i have a collection of users, and each user will have some attribute, say Company. I want to match users by company, so in relational DB terms i would have a company table and make those serve as foreign keys for user records. in mongo, what would be the right approach?
[03:25:16] <speakingcode> i want to be able to basically query all the users with a given company. b/c of performance i don't want to iterate every user and find every match. would it be best to have a second object of companies, each of those containing the user? is there a way to cross-reference and cascade updates?
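    (A minimal shell sketch of the usual approach here; collection and field names are just examples. Store the company directly on each user document and index that field, so "all users with a given company" is a single indexed query rather than an iteration.)
        db.users.ensureIndex({ company: 1 })   // index the embedded company field
        db.users.find({ company: "Acme" })     // all users for a given company
    (If the company itself carries data, a separate companies collection can hold it, with users storing the company's _id; mongo won't cascade updates across collections, so the application has to do that itself.)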
[03:25:40] <opus_> if i want to insert a row and use the Next ID for an internal id
[03:26:13] <opus_> do I do db.collectionname.insert( { "name" : "bob" , { "$inc" : { internal_id : n }} );
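    (A hedged sketch of the usual counter pattern for this; collection and field names are placeholders. $inc can't run inside an insert, so keep a separate counters document and findAndModify it to get the next id.)
        var next = db.counters.findAndModify({
          query:  { _id: "internal_id" },
          update: { $inc: { seq: 1 } },   // atomically bump the counter
          new:    true,                   // return the updated document
          upsert: true                    // create the counter on first use
        });
        db.collectionname.insert({ name: "bob", internal_id: next.seq });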
[04:00:03] <robdodson> can I ask a very newb question? Is it possible to prepopulate a db with a json file of some kind? I'm trying to build a little prototype and I'm not sure how to bootstrap the db with data
[04:22:06] <michaeltwofish> robdodson: mongoimport can import json. http://docs.mongodb.org/manual/administration/import-export/
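    (A minimal example of what michaeltwofish points to; db, collection, and file names are placeholders. Add --jsonArray if the file is one big JSON array rather than one document per line.)
        mongoimport --db mydb --collection widgets --file seed.json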
[07:34:41] <opus_> because I started with a collection and can't define a schema after already having the collection. I guess a previous version of mongoose could 'learn' the schema from an existing collection, but they took that functionality out
[07:47:16] <Zelest> haven't done any benchmarks on the gridfs module yet
[07:48:01] <Zelest> but how can gridfs be faster than regular files? i mean, fine, mongodb/gridfs is all in memory.. but so are the regular files after being read once..
[07:48:23] <Zelest> i'm curious how "well" gridfs performs once the data isn't in memory though.
[07:48:36] <Zelest> not that I know how to test that..
[07:48:56] <opus_> can I change ObjectID to a Number and have it increment somehow?
[07:56:03] <NodeX> Zelest : I think you'll find it's not faster when you get down to it
[07:59:03] <Zelest> NodeX, well, I put file.jpg in the webroot.. put the same file in mongodb/gridfs and had a php script that reads the file and spits it out..
[07:59:27] <Zelest> then I ran siege, a benchmarking tool, requesting with 10 parallel connections for 30 seconds
[07:59:42] <Zelest> and I get almost twice as many requests using gridfs as with the regular file.
[08:36:39] <Zelest> 17.1k (static) vs 3.2k (gridfs) .. using a 77 byte file.
[08:38:18] <IAD> fs will be very slow after a million files ... =)
[08:38:44] <OpenGG> Guys, I ran into a problem: I had 3 collections, but after dropping one of them, db.stats() still shows "collections" : 3, and "fileSize" doesn't get reduced;
[08:38:47] <Zelest> 49.98 (static) vs 102 (gridfs) .. using a 1.4MB file.
[08:39:34] <OpenGG> db.repairDatabase() gives me "errmsg" : "exception: can't map file memory - mongo requires 64 bit build for larger datasets", "code" : 10084,
[09:58:09] <Zelest> ron, Yes, I can port it myself.. or compile from source.. but either way kind of defeats the point of a ports system, where I shouldn't have to worry about being outdated. ;-)
[09:58:39] <ron> Zelest: that's one solution. the other solution is to use a more common OS and not have to worry about port maintenance ;)
[09:58:47] <NodeX> I like to take an hour out of my day to annoy you ron
[10:01:03] <Zelest> packages just tend to b0rk on me.. i want a certain version (like now) and the package is only available in uber-super-scary-testing branches..
[10:06:14] <Zelest> I just have some feedback regarding the gridfs docs.. they're quite tricky to understand. :-P
[10:06:17] <NodeX> mind you, speaking of debian: about every 6 weeks my server decides it doesn't want to let me log in over SSH for NO reason, and the whole thing then needs a full cold restart and fsck. it's very strange
[10:13:44] <Zelest> the only thing (apart from transactions then, which i very rarely use) i lack in mongodb is a good fulltext search feature
[10:13:54] <NodeX> I also realised early on that all payment providers have a callback of some sort, and http isn't stateful or transactional, so if providers can do it with callbacks, why can't my app - which worked out well
[10:13:58] <Zelest> i sort of fell in love with postgres tsearch2.. it's insanely powerful
[10:22:37] <NodeX> I even mimicked 24's "send it to my screen" for contact record management and opening across a WAN in a web based CRM, which was pretty cool
[10:52:43] <Paizo> kk, that's a development version, i can't use it in a production environment. I would like to schedule the mongodb 2.2 update together with the php driver when it's officially out
[10:54:43] <NodeX> I use it fine in production tbh
[11:47:02] <remonvv> Which means the real question is, do we trust NodeX as an engineer?
[11:47:10] <remonvv> And I think we all know the answer to that question, don't we.
[11:48:32] <NodeX> is this attack me day or something
[12:22:48] <remonvv> If you need something endorsed let me know
[12:23:14] <ppetermann> i prefer having people endorse me when they know what i can do.. as for you, i know you can get endorsements =)
[12:24:04] <remonvv> That's what it is supposed to do but I'm getting endorsements for "Database Engineering" for a marketing person at a big media company and stuff like that.
[12:24:14] <remonvv> It's sort of a popular vote sort of thing because it gets autosuggested.
[12:24:26] <remonvv> Hence my "Receiving Endorsements" skill.
[12:24:31] <remonvv> Which, worryingly, is climbing to the top.
[12:24:55] <ppetermann> well the positive thing about endorsements is that you can see who endorses what
[12:36:05] <remonvv> Someone with that name just connected on LI. Good to have the handle <-> name combo ;)
[12:37:50] <ppetermann> that's the reason why i use my real name here
[12:39:11] <doxavore> Are there some normal causes of this message in 2.0.7? warning: virtual size (2195998MB) - mapped size (2190772MB) is large (4255MB). could indicate a memory leak
[13:05:43] <doxavore> Anyone know much about the ruby driver? In addition to the above MongoDB messages, I'm starting to get pool.ping_time==nil at: https://github.com/mongodb/mongo-ruby-driver/blob/master/lib/mongo/util/pool_manager.rb#L259
[13:12:31] <mnaumann> hi, i set up my first arbiter yesterday, and now i realise it doesn't use authentication (the other nodes do). how can i add authentication to the mongo shell of an arbiter? do i need to?
[13:16:09] <remonvv> if you can't remove auth altogether (you shouldn't really need it on an arbiter) you can add it to the arbiter just like any other node afaik.
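    (A hedged sketch of what that looks like in practice, assuming the replica set uses keyFile authentication; paths, port, and set name are placeholders. The arbiter is started with the same shared key file as the data-bearing members.)
        # arbiter started with the set's shared key file (hypothetical paths)
        mongod --replSet rs0 --port 30000 --dbpath /var/lib/mongo-arb \
               --keyFile /etc/mongodb/rs0.key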
[13:16:28] <remonvv> doxavore, not sure what's happening but it sounds like your node is having to deal with more data than the hardware can handle.
[13:17:05] <doxavore> it was running fine on a smaller machine - this one has faster disks and 2x the RAM :-/
[13:17:47] <remonvv> Ah, that does make it a little weird. It's hard to diagnose from a distance.
[13:18:15] <remonvv> Is it hot? As in did you give it time to swap the warmer data into memory?
[13:18:29] <remonvv> faults in mongostat should be an indication
[13:45:55] <remonvv> Ruby isn't a language I'm very comfortable with but your nil exception seems to be something that should never happen
[13:47:38] <remonvv> your application is actually configured to connect to the repset as a repset I assume?
[13:55:02] <doxavore> remonvv: Yes, the config is much the same as we've been using for about 18 months now, save for adding pool_size after moving to JRuby recently
[13:55:54] <doxavore> I'm at a loss as to cause/effect, but we also see periods where the DB seems to go into a complete lock, with locked % reported by mongostat of 500+ and queries dropping to 0
[14:04:05] <remonvv> hm, and you have results for db.currentOp() during that time?
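    (For reference, a minimal sketch of what to capture during such a lock-up; the opid below is hypothetical.)
        db.currentOp()      // snapshot of in-progress ops: look at secs_running,
                            // waitingForLock and the op/query fields
        db.killOp(12345)    // only if a specific runaway op has to be killed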
[14:14:42] <[AD]Turbo> I'm getting this error (updating a capped collection): MongoError: failing update: objects in a capped ns cannot grow
[14:18:04] <remonvv> [AD]Turbo, exactly that. You cannot update documents in a capped collection that cause the document to grow in size. Only in-place operations are allowed.
[14:42:16] <remonvv> The reason is that mongodb reserves a specific number of bytes for each document and counts on that not changing, to be able to do some of the optimizations that come with capped collections
[14:58:36] <kali> remonvv: you can workaround this by inserting padding when you create the documents in the cap collection (add a field: padding: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") and $unset it when you add your fields
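    (A minimal sketch of kali's padding trick; collection and field names are just examples.)
        // insert with throwaway padding so the document is allocated large enough
        db.cappedcoll.insert({ _id: 1, status: "new",
                               padding: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" })
        // later: grow the real fields and drop the padding in the same update,
        // so the document's total size does not increase
        db.cappedcoll.update({ _id: 1 },
                             { $set: { status: "done", result: "ok" },
                               $unset: { padding: 1 } })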
[15:09:48] <remonvv> kali, true ;) But when you find yourself updating things in capped collections that require that hack a lot you're probably using them wrong
[15:15:26] <aminal> hey guys, i'm getting errors like this: Assertion failure: _unindex failed: assertion db/pdfile.h:474 all throughout my logs for two of my replica sets. i'm running a 3 shard x 3 repl set member setup on version 2.0.4. googling this kind of sends me all over the place. any ideas what could be going on here?
[15:16:57] <TTimo> is there a way I can use the mongo shell on a secondary to run read only operations? I keep getting a "not master and slaveOk=false" and I don't understand how to work around it
[15:17:19] <TTimo> in this particular case, I enabled profiling on this secondary, and I can't even look at the results
[15:20:34] <TTimo> I don't have slaves, unless you mean secondaries in the replica set, which is more or less the same, and that's what I'm connected to ?
[15:22:25] <kali> remonvv: definitely :) but worth mentioning if it can help somebody
[15:23:05] <kali> TTimo: yep, slave == secondary. run rs.slaveOk() in the shell
[15:23:25] <kali> TTimo: slave refers to the obsolete "master/slave" replication system
[15:23:48] <TTimo> yup that was my understanding as well
[15:25:41] <TTimo> is that on #francaise? which network is this on again
[15:26:41] <kali> TTimo: freenode now, but it's quiet :)
[15:28:41] <coldwind> I'm developing an C# app with MongoDB where I store per-user tags for each document; I have tried two of the possible dictionary representations that the C# driver provides by default but both have serious problems for querying ( http://pastie.org/5104033 ). Anyone got a suggestion about how to approach this?
[15:33:27] <coldwind> The main problem with the Document representation is that keys (usernames) would need to be strings with a limited set of characters, or escaped; another problem is that .distinct() does not work on usernames. The problem with the ArrayOfDocuments representation is that I cannot "$addToSet" a tag to a user that does not exist yet.
[15:39:38] <Derick> NodeX: I forget things because there is no Jira ticket :P
[15:50:21] <Exitium> Hey, since upgrading to 2.2, we're getting a lot of cursor timeouts in php, any suggestions?
[15:51:17] <MongoDBIdiot> well, you could use a database...
[16:08:28] <Exitium> We only have 2 or 3 indexes on that collection I think, not 100% sure, I didn't write the software, but I'll add that to the list to explore
[16:08:33] <NodeX> if you add domain sockets back you can have two cards ;)
[16:08:58] <NodeX> also if it's an update a safe=true might stall it
[16:09:13] <NodeX> if for instance it's waiting for a "w=1" from the node
[16:11:12] <Exitium> Problem is, I've only really been told of the issue and been told to fix it lol, I didn't actually write the code or I'd have a better understanding of what's going on -.-
[16:11:34] <Exitium> I don't even know how many indexes they're using, or need etc.
[17:25:41] <NodeX> when I grow up I want to be just like him
[17:42:52] <Almindor> is there a way to get a dynamic document in csharp without mapping it to a class?
[17:43:21] <Almindor> we have subdocuments which could contain virtually anything in them and we need to get them into mono/csharp and work with them
[18:01:03] <NodeX> Almindor : that's kind of a large point of mongo / nosql so it should be possible
[18:26:16] <snizzzzle> I'm considering switching my entire database from mysql to mongo but I've heard that there are some reliability issues with mongo. Specifically, data corruption I've heard has been a problem. Is this true and what are some issues that I should familiar myself with?
[18:27:11] <TTimo> you shouldn't run standalone servers in production. basically you need to look at a 3 node replica set as your starting point
[18:27:29] <TTimo> that means a bit more infrastructure and setup difficulty
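    (A minimal sketch of that starting point; hostnames and the set name are placeholders. Run once from a shell connected to one of the members, after each mongod is started with the same --replSet name.)
        rs.initiate({
          _id: "rs0",
          members: [
            { _id: 0, host: "db1.example.com:27017" },
            { _id: 1, host: "db2.example.com:27017" },
            { _id: 2, host: "db3.example.com:27017" }
          ]
        })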
[18:27:32] <snizzzzle> NodeX: A thread was killed in the middle of execution and has corrupted a row of data
[18:27:59] <snizzzzle> TTimo: Are you referring to my question?
[18:28:09] <Gargoyle> snizzzzle: MySQL and Mongo are two totally separate things. I doubt you want to "switch" from one to the other. More likely, you are "Going to rewrite your app and make use of mongo in the new version"?
[18:30:29] <snizzzzle> Gargoyle: I know, but django facilitates the use of this. I will first consider keeping mysql as my main database and then using mongo as a side db for collections of data.
[18:31:07] <TTimo> snizzzzle: mongoengine or django-mongo ?
[18:33:14] <snizzzzle> TTimo: We would be starting with using mongo as a side option and then eventually pushing everything over to mongo but I wanted to see if there are reliability and data corruption issues....
[18:33:21] <TTimo> anyway, there are plenty of options to balance how fast versus safe you want to be with your data
[18:34:09] <NodeX> snizzzzle : if a thread dies in a mysql operation, what happens?
[18:34:32] <TTimo> yeah that particular example is probably no different
[18:36:12] <Gargoyle> snizzzzle: Mongo just gives you the option of not waiting around to find out! :)
[18:36:27] <TTimo> there are some interesting options though. you can have your operations wait until some number of replication slaves report having replicated your change
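    (A hedged sketch of that knob in the 2.x shell; collection name and numbers are just examples. Issue the write, then ask getLastError to wait until w members acknowledge it.)
        db.orders.insert({ status: "paid" })
        db.runCommand({ getLastError: 1, w: 2, wtimeout: 5000 })  // wait for 2 members, or give up after 5s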
[18:36:31] <NodeX> snizzzzle : you're talking about transactions?
[18:36:37] <NodeX> if so they do not exist in mongo
[18:37:33] <snizzzzle> NodeX: We're trying to get past the mysql issue of doing migrations on production. It takes down our service for about 10 minutes for a single added column...
[18:42:28] <aster1sk> Does the $or wrap the $regex or the other way around?
[18:43:28] <mordocai_work> Hello, I have a question on how to best do something the "mongo way" or nosql way. I have an inventory management app I'm working on and I found that on certain things I need what amounts to an enumeration in sql. I want to be able to store a list of options to be able to be used for a dropdown, but I want them in the database so they can be changed easily through the web interface. What is the best way to do this with mongo? (I
[18:43:33] <Gargoyle> aster1sk: IIRC it's $or: [ {}, {}, {} ] with your conditions in the {} as normal.
[18:46:20] <aster1sk> I care about being able to search more than one item.
[18:47:03] <NodeX> aster1sk : better to store like this as another field c..... idx_field : ['title','artist','trackname'] ... all lower cased and split on space
[18:47:04] <aster1sk> { title : { $regex : 'query' } } works perfectly
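    (Putting the two together: $or wraps the per-field $regex conditions, not the other way around. A hedged example; field names and the pattern are placeholders.)
        db.tracks.find({
          $or: [
            { title:  { $regex: 'query', $options: 'i' } },
            { artist: { $regex: 'query', $options: 'i' } }
          ]
        })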
[19:53:12] <diogogmt> The official mongodb GridFS doc page says to pass the content-type in the options obj: http://mongodb.github.com/node-mongodb-native/api-generated/gridstore.html
[20:42:02] <Nerp> when I run rs.stepDown() on the primary of one of my shards, it causes most of the mongos processes attached to it to crash, is there a better way to step a primary down that will prevent this from happening?
[20:49:28] <TTimo> pymongo requiring explicit closure of connections to replicaset is being a royal PITA
[20:56:53] <diogogmt> The GridStore of the mongodb native nodejs driver doesn't set the contentType for a gridfsFile if the mode is set to +w (edit). If the mode is set to w (truncate) then the contentType is properly set, anybody know why?
[21:05:10] <electricowl> Hi. Any suggestions for dealing with seemingly random MongoCursorExceptions in PHP?
[21:06:13] <Gargoyle> I'm thinking about reworking our models to use mongo directly to better leverage the drivers capabilities and to reduce memory usage. Probably best description would be some kind of hybrid model/odm. Anyone heard of anything similar?
[21:06:25] <ryan_> getting 'MongoCursorException' with message 'couldn't send query:' on php mongodb driver 1.2.12 - is the only option to downgrade?
[21:17:18] <newbie22> hello everyone, I am having trouble getting mongodb to run on my 32 bit windows XP system. Anyone have any recommendations?
[21:17:45] <newbie22> Error Message starts : "the procedure entry point interlockedcompareexchange64"
[21:19:18] <Gargoyle> newbie22: Also, I think you are going to run into many problems along the way if you use a 32 bit system.
[21:20:02] <newbie22> Gargoyle: well, they still make a 32 bit download for windows available.........
[21:22:18] <Gargoyle> newbie22: Can't offer much more help - been windows free (apart from web testing) for years. 32bit = old and mongo = new. I think the support will be limited.
[21:23:29] <newbie22> Gargoyle: you are correct, it is.............. I have been reading about it, so I will be purchasing a PC to install linux on next month... This is not my PC, so I cannot do a complete reconfig
[21:23:41] <Gargoyle> newbie22: Did you read the note on the download page? Says pretty much the same thing.
[21:23:58] <newbie22> Gargoyle: Yes I did, thanks for the help...
[22:29:30] <aster1sk> 'what a difference a key made'
[23:39:15] <Hoverbear> Hi all, I'm working with mongojs (https://github.com/gett/mongojs) and would like to have a reference in my foo document that links to a user (via ID) in my user document…. Could I do that within mongo, or do I need to process it myself in callbacks?
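    (A hedged sketch of doing that "join" by hand with mongojs, assuming foo documents store the user's _id in a userId field; all names below, including fooId and done, are placeholders. Mongo itself won't resolve the reference, so the app fetches the second document in the callback.)
        var mongojs = require('mongojs');
        var db = mongojs('mydb', ['foos', 'users']);

        db.foos.findOne({ _id: fooId }, function (err, foo) {
          if (err || !foo) return done(err);
          db.users.findOne({ _id: foo.userId }, function (err, user) {
            done(err, { foo: foo, user: user });  // both documents available here
          });
        });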
[23:41:06] <woodzee> i have a setup that has 3 shards that are all replicated - 9 instances altogether. i want to change the replica sets to use different host names on the secondary members. each time i try and remove the existing entries i get { "$err" : "socket exception", "code" : 11002 } when i run db.printShardingStatus()