[00:50:14] <mkmkmk> can i upgrade a single secondary to 2.2 to try out the aggregation framework (if i run it with slaveOk) or do i need to upgrade the whole deployment first
[00:56:48] <Yiq> So what's a typical relational case and a non-relational one? maybe i'm stupid but i have a hard time understanding exactly what relational means?
[01:13:31] <awpti> Howdy folks - I've taken up the task of learning how to use MongoDB, but I do have some questions on general performance: In what situations does MongoDB start bottlenecking?
[01:22:32] <Init--WithStyle-> hey guys... was hoping you could help me figure out how to do a range query for a [0,0] value
[01:22:39] <Init--WithStyle-> I have a value "loc"
[01:22:44] <Init--WithStyle-> it's an x,y coordinate
[01:23:01] <Init--WithStyle-> I want to do a range query between [0,0] and [500,500]
[01:23:07] <Init--WithStyle-> the "box" between that area
[01:24:01] <Init--WithStyle-> My index is a geospatial index..
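(A minimal sketch of that box query, assuming a collection named `places` with a 2d index on `loc`; note that a 2d index defaults to the range [-180, 180), so coordinates up to 500 need explicit bounds:)

    db.places.ensureIndex({ loc: "2d" }, { min: -1000, max: 1000 })  // bounds must cover all stored coordinates
    db.places.find({ loc: { $within: { $box: [[0, 0], [500, 500]] } } })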
[01:54:11] <bluethundr> hey according to phpinfo I'm definitely using the right php.ini yet for some reason even tho I've added extension=mongo.so to the ini file mongo still does not show up in phpinfo.. anyone have any ideas?
[08:34:35] <Ceeram> with db.collection.update( criteria, objNew, upsert, multi ), the examples for criteria show things like { name:"Joe" }, but can criteria be used as {_id:"mongoidhere"} as well?
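(Yes, _id works as criteria; a sketch, where the 24-hex-character id is a hypothetical placeholder, and a stored ObjectId must be wrapped in ObjectId() rather than passed as a bare string:)

    db.collection.update({ _id: ObjectId("503f1f77bcf86cd799439011") }, { $set: { name: "Joe" } })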
[09:06:13] <gigo1980> hi all, i have some mongos instances running on application nodes; if i make a change on the config servers (movePrimary of a database) the mongos instances do not recognize this
[10:31:27] <yati> Hi. suppose I have a document like {"foo": ["bar", "baz"]}. Now I want to append to that list only if that particular item does not exist already in the list. Is there a way to do that?
[10:32:29] <yati> I mean, appending "baz" to that list should do nothing as it already exists, but appending "hello" should append to it
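(That is exactly what the $addToSet operator does; a minimal sketch, with `docId` standing in for the document's _id:)

    db.coll.update({ _id: docId }, { $addToSet: { foo: "baz" } })    // no-op: "baz" is already in the array
    db.coll.update({ _id: docId }, { $addToSet: { foo: "hello" } })  // appends "hello"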
[11:16:47] <Pandabear> Does anyone have a clue how to fix this, I've searched the web and docs already
[11:18:40] <manveru> Pandabear: what are the permissions on /var/lib/mongodb ?
[11:23:16] <Pandabear> manveru: read/write on owner only
[11:24:51] <manveru> and what user executes mongo?
[11:26:32] <emocakes> depends on what you choose :p
[11:27:20] <Pandabear> I've fixed it already. It was the permissions on the parent folder, it was root user not mongodb
[11:27:30] <Pandabear> Thanks for the help manveru
[11:52:30] <yati> I am using pymongo(I did not find a dedicated channel for pymongo, so asking here) - does using db.add_son_manipulator() affect the underlying database in any way? Can it be safely called multiple times? (as in running my app multiple times)
[12:48:14] <brahman> Hi. My cfg servers' time is out of sync. Has anybody had any luck fixing the time skew without restarting the whole cluster?
[13:10:03] <Clex> Hi. The documentation says "DELETE /databases/{database}/collections/{collection}/{_id}" to remove a document with the rest API. But the code does not seem to handle it: http://pastie.org/pastes/4603540/text . What did I miss?
[13:22:51] <NodeX> can you paste the endpoint please
[13:23:04] <NodeX> pastebin ... that paste does not show the query
[13:25:22] <Clex> The mongod process includes a simple REST interface, with no support for insert/update/remove operations, as a convenience – it is generally used for monitoring/alerting scripts or administrative tasks. For full REST capabilities we recommend using an external tool such as Sleepy.Mongoose.
[13:39:05] <Gargoyle> Hey there. Anyone know if you can swap to the 10gen packages under ubuntu easily? (Is it just a case of uninstalling the ubuntu package, adding the 10gen repo and installing mongodb-10gen? Does the data get preserved, or do you need to dump and restore?)
[13:39:44] <algernon> the data location may be different between the two, but otherwise it should be easily swappable
[13:40:42] <Gargoyle> Can anyone confirm the data location for the 10gen package?
[13:41:09] <Gargoyle> And any known gotchas going from 2.0.4 to 2.0.7 ?
[13:41:20] <algernon> appears to be /var/lib/mongodb for mongodb-10gen.
[13:42:03] <Gargoyle> algernon: Seems to be that for the ubuntu package too! :)
[13:44:45] <algernon> you should be good to go then
[13:45:55] <Gargoyle> Will give it a bash later. running replica set so shouldn't be too much of a hassle to recover a slave if it all goes wrong
[14:15:26] <richwol> I've set up MongoDB on Amazon EC2 and plan to store the database on an EBS volume. I created a 3GB EBS volume, mounted it and specified for Mongo to use it as the data directory. When launching Mongo it starts eating up the disk space (I can see the file j._0 has used up all 3GB). When I then try to run any sort of query it complains that it can't because there's no disk space. Any tips?
[14:17:11] <BurtyB> richwol, --noprealloc will probably help
[14:17:42] <BurtyB> richwol, and --oplogSize too thinking about it
[14:17:54] <richwol> @BurtyB Wouldn't turning off prealloc slow down reads?
[14:18:44] <Derick> the journal by default is I think 1GB - per file
[14:19:08] <richwol> @Derick Do you know how much space? I couldn't find any minimum requirements.
[14:19:49] <Derick> richwol: what are you trying to do?
[14:19:55] <richwol> I'm happy to increase the size of the EBS (I don't want to sacrifice performance.. just unsure of what size I should be setting it to!)
[14:21:14] <richwol> Derick: Just a basic MongoDB setup. Quite heavy no of writes but the overall database will be fairly small (under 1GB of data)
[14:21:37] <Derick> right, but mongodb preallocates, and uses a journal
[14:21:49] <Derick> the journal alone takes up 3GB minimum (without using --smallfiles)
[14:22:04] <Derick> and data in your case probably either 3.75 or 5.75 GB
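(For a small-footprint deployment like this, the flags mentioned above can be combined at startup; a sketch with a hypothetical dbpath — --smallfiles caps journal files at 128MB each and shrinks data file preallocation, and --oplogSize is in MB:)

    mongod --dbpath /ebs/data --smallfiles --oplogSize 128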
[14:24:38] <Zelest> Ugh, just biked 50km.. and now my knee is b0rked..
[14:24:50] <Zelest> Sitting in the middle of a parking lot with the mac and waiting for the ride home.. :D
[14:26:01] <kali> Zelest: it makes sense actually... it avoids having big files if your base is small, and it avoids having too many files around if your base is big
[14:55:49] <Mmike> Hello. If I'm connecting to a mongodb cluster (i have 3 servers in replication), is it possible that a client library will connect to each of them somehow, even though i just connected to one of them?
[14:57:01] <Derick> Mmike: the php driver definitely connects to all of them
[14:58:57] <Mmike> So, if I have a setup where those 3 mongo servers are behind some loadbalancer (haproxy, let's say), and I do, in php: $bla = new Mongo("mongodb://loadbalancer-ip", ...);
[14:59:12] <Mmike> I will actually get a connection through the loadbalancer, but also directly to all the servers too?
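(For reference, the replica-set-aware way to connect with the PHP driver of this era is to pass a seed list plus the set name rather than a load balancer address — the driver then discovers the members itself; hostnames and set name here are hypothetical:)

    $m = new Mongo("mongodb://mongo1.example.com,mongo2.example.com,mongo3.example.com",
                   array("replicaSet" => "myReplSet"));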
[15:07:25] <NodeX> or just the web servers replicated
[15:09:50] <Gargoyle> Migrating from the Ubuntu 12.04LTS package for 2.0.4 to the 10gen packages for 2.0.7 seems to have been a flawless upgrade. (touch wood!)
[15:10:58] <Mmike> i have 3 mongodb servers in replication
[15:11:24] <Mmike> now, I didn't realize that there is one master and that the client library knows that, so no matter which one I connect to from my php code, I'll always end up on the master
[15:11:50] <Mmike> now, i just set up haproxy in front of those (one frontend IP that I connect to from php, and 3 backends, pointing to three mongodb servers)
[15:11:57] <Mmike> i can read/write via the haproxy IP with no issues
[15:12:21] <Mmike> but if I use the firewall to block access to just one mongodb server from the php box, I can't connect to the mongodbs via the haproxy ip
[15:12:32] <Mmike> and now I'd like to shoot my self in the head
[15:17:19] <Mmike> Derick, good to know :/ but I need it somehow today :)
[15:17:46] <Derick> you can use the master branch on github :-)
[15:17:54] <Gargoyle> jarrod: OK. But except for the easy path to sharding, it's just the same as having a norm replset config?
[15:17:56] <Mmike> jarrod, so, my three mongos, one is always master, right? And client library directs writes to the master, no matter to what instance I connect to?
[15:22:15] <Mmike> I guess I can have mongos loadbalanced somehow? Because that's what I do with my haproxies
[15:22:43] <Mmike> Although it would be really nice that once I connect to mongoB my client library is only connecting to mongoB, and not to all the boxes in the replication setup :)
[15:22:50] <Gargoyle> mongos is very lightweight isn't it?
[15:28:52] <Vile1> Guys, can you suggest any best practices for virtualising mongoDB?
[15:29:10] <Mmike> I guess there is no harm with haproxy, but no benefit either
[15:29:48] <Mmike> because client lib has no clue about haproxy and just connects to mongod, via haproxy. but then it connects to other mongods, so haproxy is really not needed there
[15:30:15] <Vile1> At the moment I'm planning to have instances with up to 500GB HDD, 4GB RAM, 1 vCPU
[15:40:04] <Vile1> NodeX: why would it? mongo is still single-threaded
[15:40:25] <jgornick> Hey guys, is it possible to setup an instance of mongo that can be treated as a sandboxed instance of another instance which is the production data?
[15:40:41] <Vile1> I've run into the situation where there's still RAM available, but mongo eats up 100% CPU most of the time
[15:41:34] <NodeX> Vile1 : do what you feel is best, you ask for input and pick holes in it, so do what suits your app ;)
[15:41:40] <jgornick> I'm essentially looking to perform inserts and updates against a sandboxed version of our production data to test the commands before performing them on the production instance.
[15:42:28] <Derick> Vile1: mongod is not single threaded
[15:42:40] <Vile1> NodeX: I got your point very well. In fact i am uncomfortable with 4 GB myself also
[15:43:11] <NodeX> jgornick : just change the DB name ;)
[15:43:13] <Vile1> But I think I'd better make more VMs (easy to add => clone)
[15:47:47] <Vile1> or just when you are going to try out something?
[15:47:55] <Gargoyle> jgornick: Are you wanting to sandbox the mongo install (for upgrade testing) or the database (for app testing)?
[15:49:11] <jgornick> I'm trying to have a sandboxed instance of my production database so I can emulate transactions. I would run my updates in the sandboxed database (which is synced with the production database), and if everything works, I will perform the same commits on the production database.
[15:49:46] <Vile1> jgornick: i.e. this is normal behaviour for your app?
[15:49:46] <jgornick> This way, if something happens on the sandboxed database, it doesn't affect production data.
[15:49:56] <jgornick> Vile1: This is something we need to implement.
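(One rough way to build such a sandbox from the shell is copyDatabase, run on the sandbox server; the database and host names here are hypothetical:)

    // pulls "production" from the prod host into a local db named "sandbox"
    db.copyDatabase("production", "sandbox", "prod.example.com")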
[15:51:44] <jgornick> Mongo doesn't want transaction support right? Meaning, they won't add it in the future?
[15:51:53] <Vile1> and if something breaks - then you just restore sandbox from the main
[15:52:04] <Gargoyle> jgornick: Although, you are probably thinking in a relational db way. You could probably rethink the whole idea of "transactions". This might also help:- http://cookbook.mongodb.org/patterns/perform-two-phase-commits/
[15:52:12] <NodeX> if you need, and I mean really need, transactions, use something with transaction support
[15:52:15] <jgornick> Gargoyle: Yeah, read that over many times ;)
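(For reference, a heavily condensed sketch of that cookbook pattern, with hypothetical account documents and a `txnId` placeholder:)

    // 1. record the intent
    db.transactions.insert({ _id: txnId, src: "A", dst: "B", amount: 100, state: "initial" })
    db.transactions.update({ _id: txnId, state: "initial" }, { $set: { state: "pending" } })
    // 2. apply to each account, tagging it with the txn id so a crash is recoverable
    db.accounts.update({ name: "A", pendingTransactions: { $ne: txnId } },
                       { $inc: { balance: -100 }, $push: { pendingTransactions: txnId } })
    db.accounts.update({ name: "B", pendingTransactions: { $ne: txnId } },
                       { $inc: { balance: 100 }, $push: { pendingTransactions: txnId } })
    // 3. commit, then clean up the markers
    db.transactions.update({ _id: txnId }, { $set: { state: "committed" } })
    db.accounts.update({ name: "A" }, { $pull: { pendingTransactions: txnId } })
    db.accounts.update({ name: "B" }, { $pull: { pendingTransactions: txnId } })
    db.transactions.update({ _id: txnId }, { $set: { state: "done" } })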
[15:58:50] <jgornick> NodeX: I'm seeing it and have the replica set docs open just to make sure, but I don't think it'll work for what I'm trying to do :)
[16:03:12] <NodeX> apart from the danger of it and other transactions being blocked while things are being restarted?
[16:03:15] <Gargoyle> Vile1: Because if there is more than one client, any kind of stop, restore snapshot, start system will not work.
[16:04:42] <Gargoyle> I've just written code that when 1 doc is changed, hundreds of others in the same and different collections also need to be changed.
[16:04:46] <Vile1> Gargoyle: obviously there MUST be a single client. or rather a "transactional mongo server" application
[16:05:17] <Vile1> so, all the other clients must connect to it
[16:05:30] <Gargoyle> But the end result is something that can be figured out after the fact, so if something goes wrong, I can have a cleanup script that can work out what it was supposed to be.
[16:05:35] <Vile1> otherwise some of the clients will have no transaction support
[16:06:47] <Vile1> or you can call that "transactions with database-level locking"
[16:08:27] <Vile1> I'm using this kind of approach when applying patches to VMs
[16:08:50] <Vile1> snapshot-apply-check => revert or delete snapshot
[16:09:27] <NodeX> the only way to truly transact is to lift all of the documents from the collection(s), put them in a transactional DB, do the work and re-join them back
[16:10:42] <Gargoyle> NodeX: And what if something fails while you are "putting back" and only half get put back.
[16:10:59] <NodeX> then the document doesn't get unlocked?
[16:11:04] <Vile1> NodeX: so all that's left is to write storage driver for mongodb based on transactional database?
[16:11:57] <NodeX> It's a hack, and I would never do it but it would work
[16:11:58] <Gargoyle> Anyway. It is dinner time, so I need to find my baseball bat so that I can make some scrambled eggs!
[16:12:14] <Vile1> Gargoyle: 11 past midnight for me
[16:12:32] <Vile1> but you are right about dinner time :)
[16:15:02] <alanhollis> Hi guys, I'm after a little help if anyone's about and bored?
[16:15:35] <alanhollis> I'm using map reduce on 2 different collections and combining them into one
[16:15:56] <alanhollis> I then do another map reduce on this reduced collection and inline the data out
[16:16:28] <alanhollis> unfortunately, unless the first permanent collection is completely unique (i.e. created with a timestamp)
[16:16:47] <alanhollis> inline seems to be doubling up the results of the reduce
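(A plausible cause, sketched below as a guess: an output mode of { reduce: ... } or { merge: ... } folds new results into whatever is already in the output collection, so rerunning the job doubles the counts; replacing on the first write avoids accumulating stale results. mapFn/reduceFn and the collection names are placeholders:)

    db.first.mapReduce(mapFn, reduceFn, { out: { replace: "combined" } })  // overwrite, don't accumulate
    db.second.mapReduce(mapFn, reduceFn, { out: { reduce: "combined" } })  // fold the second collection in
    db.combined.mapReduce(mapFn2, reduceFn2, { out: { inline: 1 } })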
[16:51:25] <jarrod> is it possible to do a bulk update
[16:51:34] <jarrod> with a list of documents that already have _ids
[16:56:46] <LesTR> hi, is it normal to have in one rs 2 secondary servers and no primary? : )
[17:14:36] <darq> hi. I have a collection of documents that look like this: {'name':'John', 'surname':'Smith'}. this works: db.names.find({'name':{'$regex': 'some-regex', '$options':'i'}}) How can i combine name and surname?
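(A sketch of both combinations, with the regexes kept as placeholders: $or matches if either field matches, while listing both fields in one query document requires both to match:)

    db.names.find({ $or: [ { name:    { $regex: 'some-regex', $options: 'i' } },
                           { surname: { $regex: 'some-regex', $options: 'i' } } ] })
    db.names.find({ name:    { $regex: 'some-regex',  $options: 'i' },
                    surname: { $regex: 'other-regex', $options: 'i' } })  // implicit AND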
[17:20:10] <monosym> hello, does anyone know of a good way to check for matches between two mongodb collections? the matches would be in the form of a database object string
[17:20:31] <monosym> I've found some people online suggest mapreduce, but is there another possible way to do it?
[17:21:02] <kali> how big are the two collections ?
[17:26:45] <jsmonkey123> Hi, I have a collection called Accounts and there I got a bunch of account hashes with nested hashes as well. Accounts contains the property "Sites", which has an array full of objects/hashes. I want to remove a hash from the array that is nested in Accounts.Sites. How do I best do this?
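($pull removes matching elements from an embedded array; a minimal sketch, where `accountId` and the `url` field are hypothetical:)

    db.Accounts.update({ _id: accountId }, { $pull: { Sites: { url: "example.com" } } })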
[17:34:58] <crudson> monosym: map reduce is probably your best bet I reckon with that many docs. there are other ways, like querying the larger for values in the smaller but you'll do a lot of queries (even chopping it into small $in queries)
[17:35:52] <jarrod> how do I do a bulk update with a list of docs that have existing _ids?
[17:37:35] <monosym> crudson could i do something like this? db.documents1.find("key" : db.documents2.find({}, {"search_key": 1}));
[17:37:58] <monosym> @crudson would that possibly work?
[17:39:15] <crudson> monosym: no you can't do that I'm afraid
[17:39:48] <monosym> crudson: why wouldnt that work? just curious
[17:40:09] <jarrod> adamcom: I have queried for many docs, updated each of them individually, and want to send that list back so that each is updated with its own data
[17:40:15] <jarrod> rather than saving each one individually
[17:40:21] <jarrod> i do not believe $in accomplishes that
[17:41:35] <adamcom> jarrod: so it's not the same update for each document?
[17:42:18] <jarrod> no, they have each been modified
[17:47:49] <adamcom> I don't think it's possible, if it doesn't fit into the update() paradigm - you can't pass in a list of operations as a batch in a single update (which is what I think you are looking for)
[17:48:10] <monosym> crudson: oh yeah, an example would be totally helpful
[17:48:10] <jarrod> so i can pass in many updates?
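(There is no batch form of update() in this era of MongoDB; the usual workaround is one save() per modified document, which upserts by _id. A sketch, assuming `docs` is the array of already-modified documents:)

    docs.forEach(function(doc) { db.coll.save(doc); })  // one round trip per document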
[18:22:05] <monosym> crudson: is it cool if i take you up on that offer to produce a mapreduce example? can't really find any online particular to my situation.
[18:25:26] <crudson> monosym: sorry got a mtg starting but if you're around in a couple hrs I was in the middle of making an example
[18:28:17] <monosym> crudson: I won't be around in a couple of hours but would you be so kind as to send it in an email? m2544314@gmail.com
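(For the record, a rough map-reduce sketch for the two-collection match problem, assuming both collections store the shared string in a field named `key`: each side emits a source-tagged count, and keys seen from both sides are the matches.)

    db.mr_matches.drop()  // start clean so reruns don't accumulate
    var reduce = function(k, vals) {
      var r = { a: 0, b: 0 };
      vals.forEach(function(v) { r.a += v.a; r.b += v.b; });
      return r;
    };
    db.documents1.mapReduce(function() { emit(this.key, { a: 1, b: 0 }); }, reduce,
                            { out: { reduce: "mr_matches" } });
    db.documents2.mapReduce(function() { emit(this.key, { a: 0, b: 1 }); }, reduce,
                            { out: { reduce: "mr_matches" } });
    db.mr_matches.find({ "value.a": { $gt: 0 }, "value.b": { $gt: 0 } })  // keys present in both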
[18:44:33] <Gizmo_x> Hi Guys, i wanna know if i can use the c# mongodb driver in an asp.net mvc 4 web app hosted on shared hosting; will there be a problem with the driver because the db is on a remote server?
[18:58:33] <Gizmo_x> can we include the c# driver in an asp.net website and then upload all the files for the web site, including the driver, to shared hosting to use a remote mongodb db?
[19:42:01] <arkban> design question: where you do put your ensureIndex() statements? do you bundle them in your app code (like in the initialization stuff) or do you use a separate script that's part of the install/deploy process?
[19:45:45] <crudson> arkban: for me, it's whenever the collection gets created or initially populated (if the population process would be hindered by the index). It could well be inside application code though, if using map reduce results for example. Note that there is no real 'harm' in doing ensureIndex if it already exists.
[19:46:48] <arkban> so you tie it to creation as opposed to first access by the app? that is when your app starts up, ensure all the indexes (which might do nothing) and proceed
[19:47:18] <arkban> i'm in a situation where the collection may already exist and i'm wondering what the common practices is
[19:53:08] <crudson> arkban: databases can be used by many applications, so is it an application's responsibility to ensure that the indexes it needs exist? if you say yes, then perhaps ensure they are there when the app boots. There can only be harm in doing that if you have a whopping collection and a big index, the creation of which will take some time.
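(A one-line illustration of that idempotency, with a hypothetical collection and field:)

    db.users.ensureIndex({ email: 1 })  // safe at every app boot: a no-op if the index already exists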
[20:05:30] <joshua> Hmm so there is no init script included with the mongodb rpms, just for mongod?
[20:26:56] <jgornick> Hey guys, does the aggregation framework in 2.2 allow you to aggregate over multiple collections?
[20:54:30] <crudson> jgornick: no don't think you can do that
[21:03:54] <camonz> hi, is there a way to update a set of records like db.elec.p.update({}, {name: cne_name.pq.split(" ").slice(1).join("_")}, false) ?
[21:13:22] <crudson> camonz: if you need fields that are computed like that, create them when you create/update the document. To update a batch, do db.elec.find().forEach(function(doc){ doc.name = ...; db.elec.save(doc); }) - non-atomic. Wrap in db.eval to block.