PMXBOT Log file Viewer

#mongodb logs for Tuesday the 15th of January, 2013

[00:36:13] <mattgordon> I'm on Mongo 1.8 and I'm trying to get all documents for which a field exists and is not null. What is the correct way to do this? I'm having trouble finding old docs and the information for the current version seems to be incorrect for my version
[00:50:22] <Auz> mattgordon, does $exists work for 1.8?
[00:50:23] <Auz> I think it does
[00:51:20] <mattgordon> db.conversions.findOne({'finished_at': {"$exists": true}}, {'finished_at': 1})
[00:51:20] <mattgordon> { "_id" : ObjectId("5046687b30b61f6249000126"), "finished_at" : null }
[00:51:38] <mattgordon> it brings along the null values (which disagrees with the 2.2 docs)
[00:52:08] <Auz> one sec
[00:52:49] <Auz> db.conversions.findOne({'finished_at': {"$exists": true}}, {'finished_at': 1})
[00:52:51] <Auz> oops
[00:53:11] <Auz> db.conversions.findOne({'finished_at': {"$exists": true, '$ne': null}}, {'finished_at': 1})
[00:55:32] <mattgordon> yeah that should do it. thank you! i'm seeing very different behavior between $exists: true, $nin: [null], and $not: {$in: [null]} so I was getting pretty frustrated
[00:56:00] <mattgordon> I assume 10gen cleaned it up for the new releases but that makes it difficult to find docs that don't secretly break for edge cases ;)
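The difference mattgordon ran into can be sketched in plain JavaScript (a simulation of the matching semantics, not the server's implementation; the documents below are made up):

```javascript
// Sketch of how {$exists: true} differs from adding {$ne: null}:
// $exists only checks that the field is present, so a field that
// is explicitly set to null still matches; $ne: null filters it out.
function matchesExists(doc, field) {
  return Object.prototype.hasOwnProperty.call(doc, field);
}

function matchesExistsNotNull(doc, field) {
  return matchesExists(doc, field) && doc[field] !== null;
}

const docs = [
  { _id: 1, finished_at: new Date("2013-01-15") },
  { _id: 2, finished_at: null },   // present but null: matches $exists alone
  { _id: 3 },                      // absent: matches neither
];

const existsOnly = docs.filter(d => matchesExists(d, "finished_at"));
const existsNotNull = docs.filter(d => matchesExistsNotNull(d, "finished_at"));
console.log(existsOnly.map(d => d._id));     // [ 1, 2 ]
console.log(existsNotNull.map(d => d._id));  // [ 1 ]
```

This is why the combined query `{'finished_at': {"$exists": true, '$ne': null}}` is the one that excludes the null-valued documents.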
[01:08:39] <jaimef> so trying to split a large RS by adding 4 more systems to the rs, then setting fsynclock on, shutting down half of the servers, and then change the replicaset name of those left running. shutdown and startup other group. is there an easier way to do this?
[02:50:04] <w3pm> if i have a process I want to read/write to a mongodb database, what's the "proper" way of setting this up securely so that no other processes on the network can access the db?
[02:50:59] <jesta> firewalls
[02:51:47] <w3pm> well lets say that process goes down and i have automated failover for another machine to take over as the read/write guy
[02:52:19] <w3pm> modifying the firewall each time that happens seems strange, maybe i make all the failover nodes trusted
[02:52:20] <w3pm> hrm
[02:52:53] <jesta> If it's just internal networking (and not public facing) but you still want security inside, there's a user/pass auth I believe too
[02:52:54] <w3pm> maybe a better question is, what's the usecase for db.addUser() and db.auth()?
[02:53:18] <jesta> http://docs.mongodb.org/manual/administration/security/#authentication
[02:54:32] <jesta> I've always just used firewalls though ;)
[02:54:47] <phira> we use auth when we have multiple applications with different dbs
[02:54:51] <jesta> and haven't had a site grow to the point where i need more than a single server
[02:55:03] <phira> as part of general defense in depth
[02:55:15] <phira> to prevent compromise on one application from necessarily making it easy to get to data from others
[02:57:48] <w3pm> how do you handle the passwords
[02:58:05] <w3pm> i guess i should read more in depth but i hope there's a better solution than just a plaintext password being stored somewhere
[02:59:06] <w3pm> phira: yeah thats sort of what i need to solve
[02:59:21] <phira> ew put the passwords in the application config
[02:59:38] <phira> it's just a defensive measure, it's not security. The protection from the general internet is handled by firewalls.
[02:59:39] <w3pm> well application config for me, erlang app, is just plaintext
[02:59:44] <phira> yep
[02:59:47] <phira> we just put it in plaintext
[03:00:03] <w3pm> hm ok
[03:00:04] <phira> if the attacker can get into the app, they've got the pwd regardless
[03:00:14] <w3pm> sure fair enough
[03:00:23] <phira> the objective of the password is to prevent them from expanding out beyond the compromised app, if possible.
[03:02:44] <w3pm> yeah maybe im overthinking it, at the end of the day if the private network is compromised really bad stuff will happen, pw or not
[03:03:00] <w3pm> probably just a speedbump at that point
[03:20:12] <phira> hrm
[03:20:22] <phira> can someone explain what the default write concern is for updates?
[03:20:26] <phira> we thought it was 1
[03:20:34] <phira> but it doesn't seem to be
[07:57:52] <strata> using the same version of pymongo (2.4.1) with mongo 2.2.2 on 3 different distros (ubuntu, fedora, arch), i create a record that looks like this: [{stuff: 'hi', somestuff: [{morestuff: 'hi', evenmorestuff: 'hi'}]}]
[07:59:22] <strata> when i pull that record out on arch (stuff = db.whatever.find()), stuff = that record exactly. on all other distros, it gets encapped like this {0: {the original record}}
[08:01:17] <strata> my only guess is the arch devs enabled some kind of quirk at compile time because it seems the other distros exhibit the latter behavior (which to me is faulty anyway but that's my opinion)
[08:04:00] <strata> yes. actually this is a mongo thing. db.something.save([{some stuff}]) saves as { "_id" : ObjectId("..."), "0" : {somestuff}}
[08:04:12] <strata> am i doing something wrong?
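One way to see what strata is hitting: a BSON document is a map of string keys to values, so an array handed in where a document is expected gets its indices turned into keys. A plain-JS sketch of that coercion (an illustration, not pymongo's or the shell's actual code):

```javascript
// Sketch: coercing an array into a key/value document turns the
// array indices into string keys, which is why save([{...}])
// stores { "0": {...} } instead of the bare subdocument.
function asDocument(value) {
  if (Array.isArray(value)) {
    const doc = {};
    value.forEach((item, i) => { doc[String(i)] = item; });
    return doc;
  }
  return value;
}

const record = [{ stuff: "hi", somestuff: [{ morestuff: "hi" }] }];
console.log(asDocument(record)); // { '0': { stuff: 'hi', somestuff: [...] } }
```

Passing the inner object itself (`save({...})` rather than `save([{...}])`) avoids the wrapping.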
[08:14:42] <oskie_> on the primary node (the oldest node) in the replica set the disk usage is 350G. On another it is 280G. What's all that wasted space? Is there a way to reclaim it?
[08:15:32] <Zelest> http://www.mongodb.org/display/DOCS/Excessive+Disk+Space
[08:17:33] <oskie> I forgot - I made a script to calculate difference between storageSize and dataSize some time ago... and that explains it
[08:18:13] <oskie> Zelest: thanks. compact locks the db too, doesn't it?
[08:18:42] <solars> hey, are v2.2.2 and v2.3.2 binary compatible? meaning, can I substitute the old version to test by replacing the binary?
[08:18:50] <solars> (disconnect last night in case someone has answered)
[08:19:31] <Zelest> oskie, think so, not 100% sure though.
[08:19:58] <Zelest> no idea solars, though, 2.3.2 isn't prod-friendly.. which I assume you're aware of. :)
[08:23:06] <solars> yep I am :) I think madveru told me that they are compatible but I'm not sure if I remember correctly
[08:37:44] <[AD]Turbo> hola
[08:41:39] <tomlikestorock> is there a way to elemmatch on items in a document and items in a list in that document at the same time?
[08:45:18] <tomlikestorock> that is, elemMatch on a field in a document, and one of the sub attributes of that field is a list, and specify conditions to elemmatch on items in that list as well
[09:02:31] <NodeX> Zelest : http://blog.mongodb.org/post/40513621310/mongodb-text-search-experimental-feature-in-mongodb
[09:06:29] <chovy> is there anything like phpmyadmin for mongodb?
[09:06:39] <chovy> i need to inspect a few things.
[09:07:13] <ron> yes. have you looked at the mongodb website at all?
[09:08:11] <chovy> i'm looking now
[09:08:18] <chovy> just looking for something easy to install on debian
[09:08:58] <chovy> something web-based
[09:09:31] <NodeX> I think there is something called phpmymongo
[09:09:54] <NodeX> http://phpmoadmin.com/
[09:10:22] <Zelest> NodeX, http://i.imgur.com/uuDPN.jpg (unrelated)
[09:11:15] <ron> there's rockmongo. I hate it, but it's probably the lesser evil of them all.
[09:11:41] <kali> mongohub (for macosx) is maintained by a colleague of mine
[09:11:43] <NodeX> power rangers?
[09:11:44] <NodeX> lol
[09:12:23] <ron> kali: too bad. now I have a mac and don't use mongo.
[09:12:46] <kali> ron: i have a mac, and i don't use it, tbh :)
[09:12:54] <chovy> ron: ok, that's what i'm looking at
[09:13:09] <ron> kali: poop on you!
[09:13:20] <NodeX> 3 weeks till release of open source mongo based webmail :D
[09:13:44] <ron> php based?
[09:13:50] <NodeX> of course
[09:13:53] <ron> then it sucks.
[09:13:53] <ron> :D
[09:14:01] <NodeX> apart from it doesn't suck but yeh
[09:14:27] <chovy> ok, to use mongohub...i have to configure my mongodb server to allow remote connections.
[09:15:59] <Gargoyle> chovy: Or use an SSH tunnel.
[09:16:31] <chovy> ok, wtf. I didn't even enter a pw and it connected
[09:16:40] <chovy> is it automatically using my ssh keys?
[09:16:49] <chovy> otherwise i have a major security problem
[09:16:58] <kali> i have no idea :)
[09:17:05] <kali> you'd better check :)
[09:17:08] <Gargoyle> Mongo does not have auth by default
[09:17:42] <chovy> so anyone can connect?
[09:17:57] <chovy> remotely?
[09:18:15] <NodeX> enter a firewall
[09:19:02] <solars> per default it's not open to the outside afaik
[09:19:09] <solars> only on localhost
[09:19:22] <kali> solars: wrong
[09:19:24] <NodeX> [09:13:24] <chovy> ok, to use mongohub...i have to configure my mongodb server to allow remote connections.
[09:19:39] <solars> it's bound to 0.0.0.0 ?
[09:19:48] <kali> solars: yes
[09:19:59] <solars> ah, alright, sorry
[09:21:14] <solars> so does anyone know if I can replace the 2.2.2 binary with 2.3.2?
[09:21:33] <kali> on a test environment ?
[09:21:35] <chovy> solars: i think not
[09:21:48] <solars> kali, on any environment
[09:21:57] <kali> solars: 2.3.2 is not for production use
[09:21:59] <chovy> i didn't create a user/pass or anything; I figured it would only accept local connections. But for the past 2 months my db has been accessible to anyone.
[09:22:11] <solars> kali, yes I know, I just want to know if I can replace it for testing or not
[09:22:19] <solars> (and switch back)
[09:22:36] <kali> solars: i haven't heard of data format change, so my guess is yes
[09:22:46] <solars> alright
[09:23:01] <kali> chovy: well, it's good you tried mongohub :)
[09:23:22] <solars> by not ready for production use you mean it's not guaranteed to be stable, right - or are there any performance issues because of debug outputs etc?
[09:24:01] <chovy> so how to i lock this thing down so the world can't read my db?
[09:24:40] <kali> solars: i think it's the same compilation options as the stable builds, so there should not be huge performance issues
[09:24:41] <NodeX> FIREWALL
[09:25:01] <solars> kali, nice, thanks a lot
[09:25:01] <kali> solars: but: the new features are probably buggy
[09:25:19] <kali> solars: and the quality is not at the same level as a production ready build
[09:25:20] <solars> yep, what I want to improve is the count queries, everything else is old stuff
[09:25:33] <solars> but I have huge problems with slow count queries
[09:26:10] <kali> solars: the optims have been implemented in 2.3.2 ?
[09:26:30] <solars> as someone told me there is an improvement for counts if there is an index
[09:26:34] <chovy> NodeX: so i should just block all traffic using iptables to the mongo port?
[09:26:36] <solars> which should solve the problem partially
[09:26:40] <chovy> and only allow my ip address?
[09:26:50] <solars> but I think there are other tickets for count as well, which might be not there yet
[09:27:09] <kali> chovy: or use bind_ip to limit to localhost connections
[09:27:17] <solars> but if it's binary compatible I'll just test it
[09:27:17] <kali> chovy: if that's enough for your use case
[09:27:41] <kali> solars: i'm afraid the optims on count require a change of format on the index, so you may need to reindex your data
[09:27:52] <kali> solars: and reindex again if you switch back to 2.2
[09:28:04] <kali> solars: unless these are not the same optims...
[09:28:20] <chovy> kali: i'm wondering...if mongohub is using my ssh keys to connect to the host.
[09:28:47] <kali> chovy: my colleque is not here yet :)
[09:29:01] <kali> chovy: try mongo in command line
[09:29:55] <chovy> kali: it connects w/o asking me to enter any credentials
[09:30:04] <kali> the CLI ?
[09:30:28] <chovy> yeah
[09:30:39] <chovy> mongo <prod-host>/<prod-db>
[09:30:47] <chovy> it just connects, and i can read all the data
[09:30:53] <kali> well, this does not make a ssh tunnel
[09:30:55] <chovy> but i have ssh keys on this machine.
[09:30:58] <kali> so you're wide open indeed
[09:31:28] <NodeX> give me your IP and I'll fix it :P
[09:31:32] <chovy> heh
[09:31:38] <solars> kali, hmm but are there any other changes that make it impossible to switch back?
[09:31:41] <solars> not sure how I can find out
[09:31:47] <solars> reindexing is not a problem of course
[09:31:51] <chovy> isn't that a little dumb to ship a database this way?
[09:32:01] <NodeX> chovy : not really
[09:32:17] <NodeX> a database stores data - it does not by definition do authentication
[09:32:32] <kali> chovy: isn't it a little dumb to setup a database with no firewall ? :)
[09:32:56] <chovy> yeah. but c'mon
[09:33:13] <chovy> should not be accepting remote connections w/o user/pass
[09:33:21] <kali> how did you install mongodb ?
[09:33:33] <solars> the question is if it is wise to install a db without knowing how it is configured :)
[09:33:56] <NodeX> no the question is "42"
[09:34:52] <chovy> kali: the instructions on 10gen web site
[09:34:58] <chovy> from debian package
[09:35:18] <chovy> nowhere does it say "this shit is wide open to the world"
[09:36:12] <NodeX> http://docs.mongodb.org/manual/administration/security/
[09:36:16] <NodeX> apart from there ;)
[09:36:54] <NodeX> on the plus side, you can always not use Mongo if you feel it's not right for you - free choice and all that
[09:36:57] <chovy> well, i did make an assumption.
[09:37:14] <chovy> naah, i think its great.
[09:37:40] <chovy> someone in here actually told me it only accepted local connections.
[09:37:43] <chovy> by default
[09:37:49] <chovy> so i figured i was safe
[09:37:53] <NodeX> the reason it's great and fast (not the main reason but partly) is because it takes all these things that a DB shouldn't really handle out of the equation
[09:38:14] <kali> chovy: i don't know... i have this bad habit of spending hours reading on a techno before deciding to download it, so sorry, but i can't relate there
[09:38:44] <NodeX> personally I read the config file and locked it down to localhost before ever starting it
[09:38:54] <kali> chovy: it is also the first bullet point on the production notes page
[09:39:03] <NodeX> hehe
[09:39:16] <chovy> i can add bind_ip to my config file i think?
[09:39:24] <NodeX> my guess is they assume that everyone runs firewalls
[09:39:25] <chovy> just make that 127.0.0.1
[09:39:36] <NodeX> yeh just uncomment the bind_ip part of the config
[09:39:45] <chovy> i wasn't running one
[09:39:49] <NodeX> if you want user/pass turn on authentication
[09:40:14] <chovy> it's not even there
[09:40:40] <kali> chovy: there is also the problem of keeping things simple for people who like to play with the new toy
[09:40:57] <kali> chovy: bootstrapping a SQL environment is difficult and boring as hell
[09:41:23] <chovy> bind_ip = 127.0.0.1
[09:41:59] <kali> chovy: i like to be able to run a mongod --dbpath . & in a shell and just use it as a sandbox for demos, trials and tests... that is not possible when configuring gets in the way
[09:42:01] <NodeX> https://gist.github.com/0699bbecc2ee864ce439 <--- easy firewall script ..... put it in a .sh, execute it, once happy use iptables-save
[09:42:30] <chovy> ok, at least i can't connect remotely anymore
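The lockdown being discussed is a one-line change in the mongod config (a sketch; the file is commonly /etc/mongodb.conf for the Debian package of this era, and the bind_ip line usually ships commented out):

```ini
# /etc/mongodb.conf (sketch) - listen on loopback only, so only
# local processes (or SSH tunnels) can reach the server
bind_ip = 127.0.0.1
port = 27017

# optionally, require credentials as well:
# auth = true
```

After editing, the mongod service needs a restart for the change to take effect; a firewall rule on port 27017 is still a sensible second layer.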
[09:42:51] <Gargoyle> chovy: From a sys admin point of view, you should assume "This shit is open to the world". If you give a server an internet facing IP address without a firewall, it's like leaving your front door wide open while you pop to the shop for milk.
[09:43:33] <chovy> ok, thanks for the tip.
[09:43:46] <chovy> i have firewalls on my other servers, just not this one.
[09:44:00] <chovy> Thankfully, i would only have to fire myself. as this is a side project, and no harm done.
[09:44:07] <NodeX> haha
[09:45:00] <Gargoyle> chovy: You might wanna go through this:- http://docs.mongodb.org/manual/administration/security/
[09:45:07] <chovy> this is why i do this shit in my spare time instead of claim to know what i'm doing at work.
[09:45:21] <chovy> yeah, i'm reading through that now.
[09:45:31] <Gargoyle> Personally, for the last 5 or 6 years, my webb apps connect to mysql using root account and no password.
[09:45:42] <Gargoyle> when they are on the same box.
[09:46:36] <NodeX> omg
[09:46:53] <NodeX> SQLi = whole server compromised
[09:47:06] <chovy> Gwayne: yeah, that's what i've always done, but mysql by default doesn't allow remote connections. I assumed this worked the same way.
[09:47:30] <chovy> oh, what's the tunnel trick to connect with mongohub?
[09:47:44] <oskie> is there a way to speed up oplog recovery? It seems the initial data replication to a new node is quite fast, but when it comes to oplog replay it takes 30s to apply 1min worth of changes
[09:48:56] <NodeX> Gargoyle : root sql account (unless you disable it) has exec privs by default on mysql which means a simple SQL injection would've rooted your entire server
[09:49:09] <oskie> (i guess it is all the indexes slowing it down)
[09:49:21] <Gargoyle> NodeX: So where's the problem?
[09:49:27] <NodeX> LOLOL
[09:49:34] <Gargoyle> In allowing SQL injection in the first place!
[09:50:01] <NodeX> new Injection methods arrive all the time, one simply cannot keep up with unreleased ones
[09:50:23] <NodeX> which means that your code to stop it will be circumvented
[09:53:12] <chovy> how do i tell mongodb to use my ~/.ssh/id_rsa.pub
[09:53:20] <chovy> i mean mongohub
[09:54:10] <Gargoyle> chovy: Read the docs!
[09:55:08] <chovy> Gargoyle: what docs
[09:55:25] <Gargoyle> chovy: For mongohub!?
[09:56:30] <chovy> yeah. what docs
[09:56:32] <chovy> there are none
[09:57:34] <Gargoyle> chovy: eeew!
[09:57:47] <chovy> for ssh tunneling, do i need to setup a tunnel, or does mongohub do it for me. it is unclear.
[09:59:32] <Gargoyle> looking at the screenshot I would guess the big ass "Use SSH Tunnel" tick box is a hint
[10:01:03] <NodeX> lmao
[10:01:50] <Gargoyle> In the main Host/port field, I would put localhost:27017, then Bind Address = 127.0.0.1 27017 (Assuming you dont have a local mongo instance)
[10:02:02] <Gargoyle> And SSH Host = your servers real IP
[10:09:11] <chovy> none of that works
[10:09:27] <chovy> in fact, there's a video on youtube explaining how to use a tunnel.
[10:09:37] <Gargoyle> chovy: Can you create the tunnel manually?
[10:09:47] <chovy> that's what the video is saying to do
[10:10:16] <Gargoyle> TBH, if there are not even basic docs, I'd use something else.
[10:10:30] <chovy> yeah.
[10:11:03] <Gargoyle> A combination of rockmongo and the built-in shell has served me well over the last year
[10:13:35] <Derick> chovy: http://derickrethans.nl/debugging-with-xdebug-and-firewalls.html has instructions for setting up ssh tunnels
[10:16:04] <Gargoyle> Nice writeup Derick. I think for outbound tunnels, -L is needed instead of -R ?
[10:17:40] <Derick> yes
[10:17:44] <chovy> ok, i got it working with ssh -L 9800:localhost:27017 <host>
[10:17:58] <chovy> then i just configure mongohub to use localhost:9800
[10:18:07] <Zelest> encryption.. pft!
[10:18:13] <Zelest> moar plaintext :)
[10:18:14] <Derick> Just -L 27017:localhost:27017 would work too
[10:18:46] <chovy> that port is already used because i have mongo on my mac. But i have no idea how to stop it.
[10:18:51] <Derick> ah, ok
[10:21:48] <chovy> this is throwing an error: db.foo.items.find({ObjectId: 50708c8895f1c16257000002})
[10:22:01] <Gargoyle> chovy: Correct
[10:22:16] <chovy> can i lookup by id?
[10:22:20] <Gargoyle> yes
[10:23:20] <Gargoyle> By default, an id *IS* an ObjectId, and its *KEY* is "_id"
[10:25:02] <chovy> db.foo.items.find({_id: "<id>"}) returns nothing
[10:25:11] <Gargoyle> chovy: Closer
[10:25:14] <NodeX> ObjectId("Foo")
[10:25:15] <Gargoyle> :)
[10:25:33] <Gargoyle> Oh… NodeX - You can't just go giving away answers! ;)
[10:25:37] <NodeX> quick question which is confusing me... say I have an images collection and normally I look up on gid (gallery_id) to get a list of all images in a gallery ... now I want to adapt my query to sort by _id and put a fed _id at the top but still return a list of all images in the gallery, with my "fed" id being the first in the list .. can this be done? (not something I have ever tried but one of you might have)
[10:25:40] <chovy> ahh
[10:25:49] <chovy> i didn't quote inside ObjectId()
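A sketch of why the quoting matters: the stored _id is an ObjectId value, not a string, and lookups compare type as well as value. Plain JS standing in for the BSON type (this ObjectId class is a hypothetical stand-in, not the real shell/driver class):

```javascript
// Hypothetical stand-in for BSON's ObjectId: equality requires both
// the same type and the same hex value, so querying with a bare
// string (or an unquoted literal) never matches a stored ObjectId.
class ObjectId {
  constructor(hex) { this.hex = hex; }
  equals(other) { return other instanceof ObjectId && other.hex === this.hex; }
}

const stored = new ObjectId("50708c8895f1c16257000002");

console.log(stored.equals(new ObjectId("50708c8895f1c16257000002"))); // true
console.log(stored.equals("50708c8895f1c16257000002"));               // false
```

Hence the working shell form is `db.foo.items.find({_id: ObjectId("50708c8895f1c16257000002")})`.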
[10:26:19] <NodeX> in fact scratch my question, it's quicker another way
[10:27:52] <chovy> i'm constantly running into this problem in production only, where i have objects created before I added attributes to the schema.
[10:28:14] <NodeX> not sure what that means chovy
[10:28:19] <chovy> but now all those objects throw errors, because they have no understanding of a sub-doc, like comments[] for example
[10:28:41] <Gargoyle> chovy: Welcome to mongo!
[10:28:53] <Gargoyle> chovy: What language are you programming in?
[10:29:03] <chovy> so i either need to migrate the data, or put a bunch of logic in my views to accommodate every possible object.
[10:29:07] <chovy> node
[10:29:17] <chovy> js
[10:29:36] <chovy> i have not found anything related to migrating mongo data.
[10:29:53] <NodeX> this is where the line gets blurred from relational mappers - they dont really have a place in Mongo imho
[10:29:54] <Gargoyle> chovy: Mongo is that way by design, so yes - in code you need logic to check for the existence of a specific sub doc before you use it.
[10:29:56] <chovy> which means, if it doesn't have a comments subdoc, then nobody can ever leave a comment.
[10:30:09] <Derick> that's not true
[10:30:10] <NodeX> imo they're created to make relational DB admins feel more comfortable
[10:30:19] <Derick> you can do a $push to an unexisting key
[10:30:20] <Gargoyle> chovy: Mongo will create the subdocs as needed.
[10:30:30] <chovy> i'm using mongoose
[10:30:42] <NodeX> $push is fine, $addToSet wont work
[10:30:45] <chovy> all i would need to do is call .save() on every object
[10:30:54] <chovy> but i need to do it using the mongoose schema.
[10:31:02] <Derick> NodeX: right
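Derick's point about $push can be sketched in plain JS (an illustration of the update semantics as described here, not the server code): when the key is absent, the update creates the array first, so old documents without a comments field can still take their first comment.

```javascript
// Sketch of $push semantics on a missing key: the array is created
// on first push, so pre-schema documents are not stuck.
function applyPush(doc, field, value) {
  const updated = { ...doc };
  if (!(field in updated)) {
    updated[field] = [];          // missing key: create the array
  }
  // (a real server would reject a push to an existing non-array field)
  updated[field] = updated[field].concat([value]);
  return updated;
}

const oldDoc = { _id: 1, title: "pre-schema post" };   // no comments field
const withComment = applyPush(oldDoc, "comments", { text: "first!" });
console.log(withComment.comments.length); // 1
```

The equivalent shell update would be along the lines of `db.posts.update({_id: 1}, {$push: {comments: {text: "first!"}}})`.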
[10:31:25] <Derick> chovy: as NodeX, I think ODMs hinder you with using MongoDB
[10:35:11] <chovy> yeah
[10:35:24] <chovy> i'm new to it, so it seemed like a safety net coming from rdbms world.
[10:35:49] <chovy> i think 'jake' will work, it's like rake for node. but i'm not sure if i can run a task on the server
[10:36:09] <Gargoyle> chovy: Hmmm. Personally, I'd go for learning the "native" stuff before you start wrapping and hiding behind an ODM/ORM/Whatever
[10:36:42] <Derick> yup
[10:37:52] <NodeX> +2
[10:38:11] <NodeX> you will find a lot more flexibility and freedom not using wrappers
[10:38:24] <NodeX> and probably some more performance I would imagine
[10:38:58] <chovy> yeah.
[10:39:26] <Derick> and you actually get to learn the product you're using :-)
[10:44:37] <solars> kali, any idea if something regarding auth has changed? I've replaced the binary with 2.3.2 and get: Tue Jan 15 11:42:27.503 [conn24] assertion 16550 not authorized for query on history_production.a
[10:44:43] <solars> for an auth that has worked before?
[10:46:54] <kali> solars: no idea
[10:47:06] <kali> solars: i'm not using auth, i'm not using 2.3
[10:47:07] <solars> weird
[10:48:14] <kali> and + 1 about understanding mongo before using ODM stuff
[10:48:42] <Derick> solars: only thing I know is that auth has been redone in 2.3/2.4
[10:48:55] <kali> one of the dramatic impacts of the ORMs is how SQL-illiterate most younger developers are
[10:49:02] <solars> Derick, any idea what this means?
[10:49:09] <Derick> nope
[10:49:15] <solars> alright
[10:49:20] <kali> i don't mind that much my grandmother not getting SQL
[10:50:08] <Gargoyle> kali: Indeed. In fact, I think that same mentality extends beyond development.
[10:50:52] <kali> yeah. you know algebra ? :)
[10:51:08] <Gargoyle> kali: Some people see "apt-get install" or "brew install" and use the commands but never bother reading any further, and wonder why their systems end up in such a mess.
[10:51:15] <Derick> :-)
[10:51:28] <Derick> or X is suddenly gone...
[10:51:57] <NodeX> lmao
[10:52:26] <Gargoyle> It took me 5 mins to install mongo on OSX - It took another 2 hours reading some basic info about Launch Control.
[10:52:37] <solars> but this discussion is endless, you could also argue that because of this, everyone would have to write assembler :)
[10:52:42] <NodeX> On the flip side of that, one can also get caught up in not understanding things and get too weighed down in "other people's" versions of "the done thing"
[10:52:46] <solars> abstraction isn't only a bad thing
[10:53:33] <kali> solars: it's good to have a good understanding of at least one level behind the one you're using daily
[10:53:43] <NodeX> One doesn't have to be a genius to write fast / efficient web apps - one just has to understand signal flow and have a basic knowledge of syntax
[10:53:49] <oskie> you don't need db.fsyncLock if you are backing up using LVM snapshots and the journal is on the same volume as the data, right?
[10:54:12] <NodeX> signal / data flow *
[10:54:18] <solars> kali, yeah, I agree
[10:54:27] <solars> almost essential I would say :)
[10:54:33] <solars> especially regarding ORMs
[10:54:44] <kali> solars: and it's vital to have somebody understanding DBs in a team of application developers, as it is important to have somebody around who can do a bit of assembly, or understand a tcp dump
[10:54:51] <solars> yeah
[10:55:01] <NodeX> ORMs (from the little I know of them) just add another layer of complexity to an app
[10:55:22] <NodeX> it's just one more thing that can break
[10:55:35] <solars> but also a layer of abstraction that increases productivity if done right
[10:55:39] <NodeX> a chain is only as strong as its weakest link
[10:55:55] <Gargoyle> NodeX: Not to mention that a lot of them mess up code too!
[10:55:56] <NodeX> I wouldn't know about productivity, I work alone
[10:56:15] <NodeX> Gargoyle : +1
[10:57:07] <kali> NodeX: some are really good... salat in the scala world is small, efficient, focused... and because scala is so statically typed, using mongo directly is really a pain
[10:57:11] <Gargoyle> I see no positive outcome from using a ORM/ODM that requires ether code annotations or public accessors/methods for everything.
[10:57:20] <kali> NodeX: but for dynamic languages, i tend to agree
[10:57:28] <Gargoyle> Most languages have some notion of serialization.
[10:57:46] <NodeX> I don't use (currently) a typed language, I'm teaching myself C so perhaps one day!
[10:58:09] <NodeX> I hear the argument for teams of people when working on large projects
[10:58:17] <NodeX> and the need for a common way to do things
[10:59:04] <kali> NodeX: well, as all you get from mongodb is basically Map[String,Object] (in scala notation) every time you read a field you need to cast it to the right type
[10:59:11] <NodeX> Anyone using ZeroMQ behind their mongo in here?
[10:59:22] <NodeX> Ah, kali - I do that anyway LOL
[10:59:34] <NodeX> just habit to cast everything when working in PHP !!
[10:59:54] <kali> yeah, i'm still trying to forget about that period of my life :)
[10:59:57] <NodeX> hehe
[11:01:05] <Gargoyle> NodeX: Same rule applies if you replace "working in PHP" with "working on the web"
[11:01:20] <kali> NodeX: the good thing about salat is... it's small. it just maps hash to object. it does not try to write queries for you
[11:02:37] <NodeX> cool
[11:03:50] <NodeX> not sure whether to wait for 2.4 to release mongo mail or not
[11:03:57] <NodeX> (for searching)
[11:56:50] <limpc> hi. does mongo still lock the entire instance on a write? or is it down to the db or collection level now?
[11:57:56] <NodeX> db level locking currently
[11:58:38] <limpc> when are they going to become collection level?
[11:59:24] <kali> maybe 2.4
[12:00:01] <limpc> thats quite a ways off
[12:00:18] <kali> it's the next production release...
[12:00:37] <limpc> latest is 2.2.2?
[12:00:43] <kali> yes
[12:00:51] <kali> 2.3 is the development branch
[12:00:57] <kali> so next major is 2.4
[12:01:42] <limpc> are we talking about 2.2.3 or 2.3.x?
[12:01:49] <limpc> i see 2.2.3 as an rc
[12:02:03] <kali> 2.2.3 is a bug fix release
[12:02:13] <kali> 2.3 is the development branch leading to 2.4
[12:03:12] <limpc> okay.
[12:04:40] <limpc> the mongo download page is structured a bit weird
[12:05:40] <Derick> limpc: collection level locking is not in 2.3/2.4
[12:05:58] <kali> Derick: ha ? ok
[12:06:20] <limpc> where is it in the roadmap? I've tried to find it but it's a lot of work clicking all those links
[12:06:50] <Derick> afaik, unscheduled right now: https://jira.mongodb.org/browse/SERVER-1240
[12:07:05] <limpc> thats a bummer
[12:07:57] <Derick> it might just not give as much as you think it would....
[12:08:01] <noobie25> i'm having trouble testing out geospatial queries. when i runCommand: geoNear ... i get results, but clearly there are better results available in proximity. I was storing geo x, y values as Float Double ... is that the underlying problem?
[12:08:18] <kali> limpc: this thing is... most of us (in the community, and i guess in the dev team too) think it is useless
[12:08:23] <Derick> with the lock yielding, collection level locking is likely not going to give a lot of extra performance
[12:08:53] <NodeX> noobie : can you pastebin your query and your indexes and a sample document?
[12:09:16] <Derick> noobie25: make sure you store things in lon, lat, and not lat, lon
[12:09:17] <limpc> well it should help reduce the number of replica sets needed for the same volume of queries, yes?
[12:09:57] <Derick> limpc: uh? what makes you jump to that conclusion?
[12:10:35] <vikash_> I am trying to insert data in mongo but after the first insertion I get "TypeError: object is not a function" [Source code -> https://gist.github.com/4538178 ] Please help
[12:10:45] <limpc> the main reason for replica sets is to mitigate backlogs from large numbers of inserts (e.g. high user concurrency), correct?
[12:11:54] <kali> limpc: the main drive of replica sets is replication and failover
[12:12:03] <Derick> kali: well, and read performance
[12:12:08] <kali> Derick: true.
[12:12:12] <Gargoyle> limpc: No I don't think so, since writes will be replicated!
[12:12:27] <NodeX> shards gain write performance, replica sets read (generally)
[12:12:30] <Derick> limpc: for write performance, you want sharding really...
[12:12:41] <NodeX> replica sets gain read *
[12:12:42] <limpc> sorry, had them flipped. its 6am and I havent slept yet
[12:13:33] <limpc> so i would think that by adding collection level locking, you'd be able to get more concurrency per shard, and reduce the number of shards necessary to support certain volumes
[12:14:07] <Derick> limpc: "think" is not a good benchmarking tool :-)
[12:14:11] <kali> limpc: it's not that clear, because you'll saturate the hardware at some point
[12:15:17] <vikash_> Any pointers?
[12:15:42] <limpc> hm isn't that like saying you're going to saturate a computer's northbridge before you max out all 6 cores of a 6 core cpu?
[12:17:01] <webber_> [13:07] <kali> limpc: this thing is... most of us (in the community, and i guess in the dev team too) think it is useless + [13:13] <@Derick> limpc: "think" is not a good benchmarking tool :-)
[12:17:18] <kali> webber_: this is helpful, thanks :)
[12:17:53] <webber_> thinking is a bad benchmark
[12:18:10] <NodeX> not when it's about nekkid chicks
[12:18:18] <NodeX> best benchmark ever!
[12:20:43] <noobie25> Derick: hi derick. i verified that values are being stored in the following order : lng, lat
[12:20:59] <limpc> ok well i worked at zynga. one of the reasons we decided to forgo mongo last year was because of the locking. it caused concern where high loads were involved, and we weren't convinced its speed justified the cost of the # of servers we'd probably need to replace mysql/memcached. Now I'm at another very high volume business and there's some similar concern.
[12:22:20] <limpc> if there were collection level locks, it'd get a lot more attention
[12:22:29] <NodeX> running mongo is certainly less hardware needy than running mysql
[12:23:01] <noobie25> Derick: however, when querying for San Francisco region: (122.41, 37.77) i get results from korea (127.23, 38.31) so i think it has something to do with these lng lat pairs.
[12:23:40] <Derick> sf is -122.41
[12:23:42] <Derick> not 122.41
[12:24:06] <Derick> west = -, east = +
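For reference, the corrected query would look something like the following in a 2.x-era mongo shell. The collection and field names here are illustrative, not noobie25's actual schema; the key point is that legacy 2d coordinates are stored and queried as [longitude, latitude], with western longitudes negative:

```
// sketch: legacy 2d geo index; coordinates are [lng, lat]
db.places.ensureIndex({ loc: "2d" });

// San Francisco is west of Greenwich, so the longitude is negative
db.places.find({ loc: { $near: [-122.41, 37.77] } }).limit(10);
```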
[12:24:35] <noobie25> ahhh!!
[12:24:36] <vikash__> I am trying to insert data in mongo but after the first insertion I get "TypeError: object is not a function" [Source code -> https://gist.github.com/4538178 ] Please help
[12:25:01] <NodeX> vikash__: it's in your driver, please see if you can paste the raw query sent to mongo
[12:25:10] <noobie25> Derick: thank you! life saver
[12:25:42] <noobie25> i'd rather store my lng, lat as strings ... do you know if mongo supports that datatype as well?
[12:26:22] <NodeX> no
[12:26:29] <vikash__> Nodex Can you please look at 86 and 142
[12:26:34] <NodeX> and they're not strings
[12:27:08] <NodeX> vikash__ : please make the output json friendly so we can see what's going on
[12:28:46] <vikash__> ok, but In the first time the dataToPush is getting inserted into the collection. Problem is, at the second time its giving me the error
[12:29:01] <vikash__> And I am very new to Node and mongo :)
[12:30:29] <NodeX> perhaps it's a unique key it's trying to overwrite?
[12:31:27] <vikash__> hmm,, how can I get rid of it then?
[12:31:33] <vikash__> or solve this problem
[12:31:40] <vikash__> and thanks NodeX
[12:31:49] <ron> don't thank him. consider him your bitch.
[12:31:52] <NodeX> Please pastebin a json representation of the object
[12:31:58] <NodeX> else we can't help
[12:34:07] <vikash__> NodeX, I hope this can help both of us :) https://gist.github.com/4538311
[12:36:24] <NodeX> can you goto the shell and pastebin the output of db.<your_channel_collection>.getIndexes()
[12:37:03] <NodeX> the collection is "chat" according to your gist
[12:37:03] <vikash__> Sure.
[12:38:04] <vikash__> https://gist.github.com/4538343
[12:39:40] <vikash__> NodeX, ^
[12:39:42] <NodeX> all I can think of is that it's trying for ObjectId and using Object instead
[12:39:53] <NodeX> which is why I need the raw query
[12:41:24] <vikash__> what do you mean by raw query? I have an object dataToPush. and when a user sends a message, it updates the fields in dataToPush and I am trying to insert this in mongodb
[12:42:01] <NodeX> the raw query sent to mongo .... i.e. what the string looks like when it leaves the driver
[12:43:06] <vikash__> By any chance did you mean this -> result ->[{"name":"Vikash","msg":"Hi","channel":"null","is_spam":false,"is_delete":false,"timestamp":"2013-01-15T12:31:38.054Z","_id":"50f54c44206f74e145000001"}]
[12:43:15] <vikash__> Data -> {"name":"Vikash","msg":"Hi","channel":"null","is_spam":false,"is_delete":false,"timestamp":"2013-01-15T12:31:38.054Z","_id":"50f54c44206f74e145000001"}
[12:43:16] <vikash__> debug - websocket writing 5:::{"name":"add_message","args":["Vikash","I am Vikash"]}
[12:43:26] <vikash__> Data -> {"name":"Vikash","msg":"Hi","channel":"null","is_spam":false,"is_delete":false,"timestamp":"2013-01-15T12:31:38.054Z","_id":"50f54c44206f74e145000001"}
[12:43:34] <vikash__> Sorry for pasting here :P
[12:43:39] <NodeX> No i didn't
[12:43:49] <NodeX> that's your own log from node
[12:43:52] <vikash__> My bad, me gusta
[12:44:10] <`3rdEden> \o/ socket.io debug messages
[12:45:06] <vikash__> debug - websocket writing 5:::{"name":"update_users","args":[{"0":"Vikash"},{"0":"Agrawal"}]} ?
[12:45:34] <`3rdEden> yep. that one
[12:45:59] <vikash__> NodeX, just one thing, with app.'s you can see https://gist.github.com/4536605 and run it, In worst case :)
[12:47:23] <vikash__> https://gist.github.com/4538391
[12:47:28] <NodeX> no thanks :)
[12:47:33] <vikash__> `3rdEden, NodeX ^
[12:47:34] <vikash__> :P
[12:47:40] <NodeX> and that debug is not what I am after
[12:49:02] <`3rdEden> vikash__: running the latest version
[12:49:07] <`3rdEden> *?
[12:49:13] <`3rdEden> of the mongodb driver
[12:49:39] <vikash__> MongoDB shell version: 2.2.2
[12:50:13] <NodeX> ok good luck
[12:50:18] <vikash__> Why?
[12:50:36] <`3rdEden> vikash__: but what version of the mongodb driver are you running in node?
[12:52:50] <vikash__> 1.2.8
[12:53:02] <vikash__> "_id": "mongodb@1.2.8"
[12:54:23] <vikash__> NodeX, `3rdEden ^
[12:54:30] <kali> webber_: my "thinking" relies on four years of mongodb prod experience, and actual study of the code, not a general principle...
[12:56:38] <`3rdEden> vikash__: i notice that 1.2.9 was released a couple of min ago
[12:57:26] <vikash__> Same here. But will the update solve my problem?
[13:00:09] <vikash__> Update and same problems
[13:00:34] <vikash__> NodeX, debug - websocket writing 5:::{"name":"add_message","args":["asda","asd"]}
[13:02:24] <NodeX> it's still not the debug
[13:02:36] <NodeX> +that I'm after so I bid you good luck
[13:05:17] <vikash__> NodeX, then how to get one/
[13:07:37] <NodeX> I dont have a clue, read your driver docs
[13:08:02] <vikash__> ok
[13:11:32] <Guest_1448> hey
[13:12:01] <Guest_1448> how can I do a $where query if one of the fields is in a subdocument?
[13:12:15] <NodeX> dot notation?
[13:12:30] <Guest_1448> I've tried .find({$where: 'this.foo != this.subdoc.foo'}) but that fails with subdoc has no properties errors
[13:13:21] <Guest_1448> hm there are some documents where subdoc doesn't exist
[13:13:26] <Guest_1448> could that be causing it?
[13:14:51] <Guest_1448> yeah..
[13:14:53] <Guest_1448> that was it
[13:14:55] <Guest_1448> ...
[13:15:17] <Guest_1448> .find({$where: 'this.subdoc && this.foo != this.subdoc.foo'}) works
[13:18:46] <Aktau> Good day!
[13:18:58] <Aktau> I'm building something with debian squeeze
[13:19:02] <Aktau> Which has mongodb 1.4.4
[13:19:13] <Aktau> I'm trying to add journalling but it doesn't recognize the option
[13:19:24] <Aktau> Is it even possible for that version, if somebody knows?
[13:23:18] <vikash__> I edited my JSON instead of dataToPush and made it individual in db.createCollection() and it works now
[13:32:51] <woozly> guys, how to join queries? I have two collections: db.queue( { "myid": 12345, "parent_id": 9126} ); and db.parents( { "myid": 9126, "count": 0 } ). How can I update db.parents by finding db.queue by id? I mean: queue.find({ "myid": 12345}) <--- get 'parent_id' from this result and use it to increase the parents() record :/
[13:34:31] <woozly> In SQL: UPDATE parents SET parents.count = 999 LEFT JOIN queue q ON q.myid = 12345 WHERE parents.myid = q.parent_id; (something like this, if I don't do it wrong.. :)
[13:34:41] <NodeX> mongo doesn't have joins
[13:34:51] <woozly> I need to do two queries for that?
[13:34:53] <NodeX> yup
[13:34:56] <woozly> :/
[13:34:59] <woozly> doh
[13:35:03] <woozly> okay thanks)
[13:35:11] <NodeX> :)
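Since mongo has no joins, NodeX's answer amounts to two round trips: a findOne on queue, then an update with $inc on parents using the value just fetched. The control flow can be sketched in plain JavaScript with in-memory stand-ins for the two collections (the shell equivalents are in the comments; the $inc amount of 1 is illustrative):

```javascript
// in-memory stand-ins for db.queue and db.parents, using woozly's sample data
const queue = [{ myid: 12345, parent_id: 9126 }];
const parents = [{ myid: 9126, count: 0 }];

// query 1: db.queue.findOne({ myid: 12345 })
const q = queue.find(d => d.myid === 12345);

// query 2: db.parents.update({ myid: q.parent_id }, { $inc: { count: 1 } })
const p = parents.find(d => d.myid === q.parent_id);
p.count += 1;
```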
[14:33:25] <solars> kali, alright so I made a trip to 2.3.2
[14:33:48] <solars> and noticed a performance decrease when inserting of almost 90%
[14:34:06] <solars> ok lets say 99%
[14:37:08] <NodeX> upgrade from what?
[14:37:32] <solars> I've tried to substitute 2.2.2 with 2.3.2 to check if the performance of my count queries improves (it's a huge bottleneck)
[14:37:48] <solars> the app processes messages on a queue and inserts into the mongodb database
[14:38:19] <solars> the queue had 20k messages, and I was able to insert those 20k in around 1 minute
[14:38:23] <solars> under 2.2.2
[14:38:38] <solars> with 2.3.2 the queue was filled up faster than the messages were inserted into the db
[14:38:50] <solars> I have no idea why
[14:38:56] <NodeX> load average: 46.73, 45.61, 40.23 oops
[14:39:10] <Zelest> :D
[14:39:24] <Derick> NodeX: it's nothing if it's not 4 digits
[14:39:36] <Zelest> it IS for digits!
[14:39:39] <Zelest> just a dot in between ;)
[14:39:43] <Derick> lame
[14:39:45] <Derick> ;-)
[14:40:07] <NodeX> it's onyl got 8 cores
[14:40:10] <NodeX> only*
[14:40:16] <NodeX> that's pure mongo doing that lol
[14:42:07] <Kaim> kali, just for your information :
[14:42:19] <Kaim> https://github.com/fluent/fluent-plugin-mongo/issues/22#issuecomment-12267557
[14:42:27] <Kaim> my bug is fixed :)
[14:43:15] <kali> Kaim: ok :)
[15:10:33] <amimknabben> http://www.allbuttonspressed.com/blog/django/2010/09/JOINs-via-denormalization-for-NoSQL-coders-Part-1-Intro
[15:10:43] <amimknabben> denormalize models and run threads, really?
[15:20:38] <doxavore> What kind of connection pool sizes are people using in multithreaded servers? I've tried everything from 10 to 512 in JRuby and I still run out.
[15:22:24] <mansoor-s> How do I go about searching with values
[15:22:40] <mansoor-s> say I have a value: "MyAwesomeValue" I want all documents that have the term Awesome in it
[15:24:15] <Derick> mansoor-s: you will need a regexp query, but that is not going to use an index
[15:24:32] <Derick> what is the key?
[15:25:50] <mansoor-s> hmm
[15:29:08] <MatheusOl> doxavore: the pool size will depend on your box capacity and the max number of connection
[15:29:34] <MatheusOl> mansoor-s: You mean, on any key?
[15:29:59] <mansoor-s> MatheusOl, no, on a single key
[15:30:05] <MatheusOl> mansoor-s: I think only map-reduce will solve that, at least if you don't know what are the keys
[15:30:13] <mansoor-s> so lets say my key is Name: "JamesBond"
[15:30:17] <mansoor-s> i want all names with James in them
[15:30:20] <MatheusOl> mansoor-s: Humm... So you can use Regex
[15:30:39] <MatheusOl> db.col.find({name: /James/});
[15:31:03] <MatheusOl> mansoor-s: Notice that it's not an indexable query
[15:31:34] <mansoor-s> Derick, MatheusOl I might have to RDS for this?
[15:32:22] <Derick> mansoor-s: an RDBMS can't index that either
[15:32:34] <MatheusOl> RDS?
[15:32:49] <MatheusOl> Derick: humm... Some can
[15:33:04] <MatheusOl> Derick: Not with this Regex, but with equivalent LIKE
[15:33:09] <Derick> MatheusOl: maybe as a full text index with split up words
[15:33:14] <Derick> MatheusOl: LIKE is not indexable either...
[15:34:45] <mansoor-s> hmmm
[15:34:58] <MatheusOl> Derick: Yes, it is
[15:35:04] <MatheusOl> Derick: Not with b-tree, of course
[15:35:18] <MatheusOl> Derick: But on PostgreSQL you can use GIN or GiST indexes for that
[15:40:34] <Derick> gist is not a type of index, but just a data structure (says wikipedia: http://en.wikipedia.org/wiki/GiST) - if you mean their tsearch2 index, then that is just FTS - which needs split up words (http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/tsearch-V2-intro.html) http://www.postgresql.org/docs/8.3/static/textsearch-indexes.html also talks for gist about "each word" and for gin about "GIN indexes store only the words"
[15:41:40] <mansoor-s> is it acceptable to do map reduce-es as a normal function of the application?
[15:41:44] <webber_> LIKE can be answered from a plain RDBMS index if LIKE uses an index prefix, e.g. LIKE "A%".
[15:42:30] <Derick> mansoor-s: M/R won't help you here
[15:42:39] <Derick> webber_: same in Mongo, with /^A/
[15:43:40] <MatheusOl> lol
[15:43:49] <Derick> mansoor-s: and, I would not use M/R unless you *really* need it - and you probably won't as we now have the aggregation framework
[15:44:02] <MatheusOl> B-Tree is also a data structure
[15:44:11] <Derick> yes, sure
[15:44:12] <kali> Derick: yeah, got me on this one
[15:44:14] <MatheusOl> http://www.sai.msu.su/~megera/wiki/wildspeed
[15:44:20] <MatheusOl> http://www.postgresql.org/docs/current/static/pgtrgm.html
[15:44:55] <mansoor-s> Derick, are you saying I can do something like what I want in Mongo?
[15:45:05] <mansoor-s> or I have to go with something lime MySQL or Postgres
[15:45:10] <Derick> mansoor-s: you are being really vague about what you want
[15:45:29] <Derick> you can query with a regular expression, it's just not going to use an index
[15:45:31] <MatheusOl> webber_, Derick: What I'm saying is that PostgreSQL can, in some cases, use an index with LIKE, even when the pattern starts with a wildcard: LIKE '%bla%'
[15:45:39] <MatheusOl> Using one of the extensions above
[15:45:57] <MatheusOl> And GIN or GiST indexes (yes they are data structure, and also can be used as indexes)
[15:47:36] <MatheusOl> Derick: I agree, I just mentioned it wouldn't use an index, but that's not always a bad thing. Also, generally, we expect some slowness searching with regular expressions (compared to equality searches)
[15:47:50] <mansoor-s> Derick, We are creating a search functionality for our application. We want to be able to search by value contents.
[15:48:58] <webber_> ... pg - nice!
[15:49:13] <Derick> mansoor-s: "value contents" ?
[15:49:31] <mansoor-s> "this is my string" i want to search the string
[15:49:42] <Derick> mansoor-s: can you provide a proper (slightly) complex document as well as what you're trying to look for?
[15:51:26] <mansoor-s> Book titles. I have a book (document) with Title(key) with "Harry Potter" (Value). When I search Harry, i want this document to be returned
[15:52:08] <mansoor-s> and its not necessarily by individual words either, so it could be Harrypotter and I still want it returned
[15:52:20] <mansoor-s> if i search for Harry
[15:52:47] <Derick> mansoor-s: only thing you can then do is using a regular expression search:
[15:53:07] <Derick> db.collection.find( { Title: /harry/i } );
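The trade-off in this exchange, an unanchored case-insensitive regex matches fused words like "Harrypotter" but cannot use an index, while an anchored case-sensitive prefix regex can, is easy to sanity-check with plain JavaScript regexes (the titles array is illustrative):

```javascript
const titles = ["Harry Potter", "Harrypotter", "Lord of the Rings"];

// Unanchored, case-insensitive: matches anywhere in the string, so it
// also finds "Harrypotter" -- but MongoDB must scan every document.
const substring = titles.filter(t => /harry/i.test(t));

// Anchored prefix (/^Harry/): MongoDB can use an index for this form,
// but only when it is case-sensitive, hence Gargoyle's suggestion below
// of lowercasing values in the application.
const prefix = titles.filter(t => /^Harry/.test(t));
```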
[15:58:59] <NodeX> good luck with performance on that lol
[16:00:30] <ehershey> ooh on what
[16:00:30] <MatheusOl> This may help: http://docs.mongodb.org/manual/tutorial/model-data-for-keyword-search/
[16:01:03] <Gargoyle> ehershey: Case insensitive regex matching
[16:01:09] <ehershey> ah
[16:02:01] <ehershey> people misuse regexes a lot
[16:02:11] <ehershey> in general
[16:03:43] <Derick> MatheusOl: he doesn't have keywords afaik... no way to split up text
[16:05:06] <MatheusOl> Derick: Well, he could split by words, but it wouldn't work in every case
[16:05:26] <NodeX> [15:51:04] <mansoor-s> and its not necessarily by individual words either, so it could be Harrypotter and I still want it returned
[16:05:32] <NodeX> ^^ nothing to split opn
[16:05:33] <NodeX> on *
[16:06:23] <Gargoyle> NodeX: I thought that meant a search for harry should also match harrypotter
[16:06:48] <NodeX> yes it does mean that but someone is saying to split on words
[16:06:53] <mansoor-s> sorry I was AFK
[16:07:01] <NodeX> and "Harrypotter" cannot be split on words as it's one word!
[16:07:43] <Gargoyle> NodeX: You could still use a regex; /^harry/ will match and use an index, but the app will have to convert all keywords to lowercase
[16:08:24] <mansoor-s> case isn't an issue
[16:08:28] <mansoor-s> if its not indexed
[16:08:31] <mansoor-s> will it kill performance
[16:08:42] <Derick> Gargoyle: only if it's at the start of a string though...
[16:08:48] <NodeX> ^^
[16:08:48] <mansoor-s> as this is the most/main used feature of the applicaiton
[16:09:18] <Derick> mansoor-s: it will not be *as fast* as with an index - but I'd argue it's still going to be as fast as any other rdbms would do it
[16:09:19] <NodeX> mansoor-s : you will never get performance without a prefix/index
[16:09:22] <MatheusOl> NodeX: I meant, if performance is really an issue, one can change the requirements. But I do agree, it doesn't do what he really wants
[16:10:17] <NodeX> performance is always an issue isn't it ? :P
[16:10:34] <MatheusOl> Sometimes it just arrives late
[16:11:07] <NodeX> but that should never be due to the database or the app code (if possible)
[16:12:04] <mansoor-s> Derick, would this not be indexed in a RDBMS?
[16:12:27] <Derick> the whole field will of course, just like with mongo
[16:12:43] <Derick> but it's very unlikely the rdbms can use an index efficiently for a LIKE query
[16:13:05] <MatheusOl> It is. I said that already
[16:13:27] <MatheusOl> With some extensions
[16:13:54] <NodeX> unfortunately (at present) mongodb is not suited for a performant text search
[16:14:13] <NodeX> 2.4 will address some issues but it was never meant to be a replacement for an RDBMS with FTM
[16:14:29] <NodeX> or a secondary search index (Lucene, ES etc)
[16:14:33] <MatheusOl> Of course not as fast as prefixed
[16:17:09] <MatheusOl> But I think is a good idea to use a secondary search index, as NodeX said
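The keyword-array approach from the manual page MatheusOl linked earlier can be sketched as follows. Collection and field names are illustrative; the idea is to store a lowercased array of keywords alongside the title and put a multikey index on it, so exact keyword lookups are indexed equality matches. Note it still would not match inside a fused word like "Harrypotter", which is the limitation NodeX points out:

```
// sketch of the keyword-search data model (2.2-era shell syntax)
db.books.insert({ Title: "Harry Potter", keywords: ["harry", "potter"] });
db.books.ensureIndex({ keywords: 1 });   // multikey index over the array

// indexed equality match against the array elements
db.books.find({ keywords: "harry" });
```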
[16:53:49] <coredump> So, I need to remove a replica member but it is the primary atm, use that procedure to force the primary to move to the other server and then stop/remove the primary?
[17:43:01] <motormal> hello all
[17:43:53] <motormal> anyone here up for helping me through a sharding question?
[17:53:57] <NodeX> you're better to just ask dude
[17:55:03] <UForgotten> Anyone seen this before? https://jira.mongodb.org/browse/SERVER-8178
[17:59:25] <konr_tra`> Is there a tool to effortlessly import gigabytes of kinky csv files (valid, but with entries containing newlines, escaped quotes and all sorts of nastiness) into mongo?
[18:04:16] <konr_tra`> oh wait, mongoimport does work with csv :D
[18:34:55] <limpc> hmm how did foursquare get over the 3.2 million collection limitation in mongo?
[18:36:44] <limpc> Our mongo struct is fairly complicated, we were wanting to make each user their own collection for speed. but --nssize is limited to 2047, which is just under 3.26 million collections
[18:37:42] <iggy__> hi, can someone help with an update query?
[19:49:35] <themoebius_> is there a way to reclaim disk space in place yet? i.e. without exporting the whole DB and reinitializing like in a repair?
[19:57:34] <UForgotten> themoebius_: there is a compact function that I read about, but use it at your own risk
[19:57:54] <UForgotten> and it requires a lot of head room to defrag
[19:58:14] <themoebius_> UForgotten: yeah but that doesn't reclaim disk space, it just allows mongo to reuse space it's already claimed
[19:58:42] <themoebius_> I mean my /db partition is like 95% full but the actual data size is far less
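For this 2.2-era question the two in-place options behave differently, which is the distinction themoebius_ is drawing. The collection name below is illustrative:

```
// repairDatabase rewrites the data files and returns freed space to the
// OS, but needs free disk roughly equal to the current data set size,
// which is a problem on a 95%-full partition
db.repairDatabase();

// compact defragments one collection in place; the space becomes
// reusable by MongoDB but is generally not returned to the OS
db.runCommand({ compact: "mycollection" });
```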
[20:16:50] <akeem> hi, I am trying to track down a slow process. its ok when I do an explain on the query but when I profile it takes a good amount of time
[20:17:36] <akeem> if it helps it an $or query with both sides of the or indexed
[20:35:04] <owen1> i have replica set of 3. i killed the primary and new primary was elected. i killed the new one but the last survivor, the 3rd, is still secondary. is it normal?
[20:35:57] <Gargoyle> owen1: Yes
[20:36:32] <Gargoyle> Your live and healthy nodes need to have a majority of your original configuration.
[20:47:27] <owen1> Gargoyle: got it. so if 2 are dead and the 3rd is secondary i am screwed
[20:47:40] <Gargoyle> owen1: Nope.
[20:48:25] <Gargoyle> owen1: The final server will not self promote as it does not know if the other servers are dead, or if IT has been segmented on the network.
[20:48:38] <Gargoyle> But you can manually promote it to a primary
[20:51:13] <Gargoyle> owen1: I can't remember exactly, but this is the general area of the docs. http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
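The linked tutorial boils down to a forced reconfiguration run on the surviving member. A sketch, with the member index illustrative (it depends on which member survived in your config):

```
// connect to the surviving secondary, then:
cfg = rs.conf()
cfg.members = [cfg.members[2]]       // keep only the reachable member(s)
rs.reconfig(cfg, { force: true })    // force is required without a majority
```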
[20:54:05] <owen1> Gargoyle: interesting. i wonder what happened if i have rs of 4.
[20:55:16] <Gargoyle> owen1: You shouldn't (can't)
[20:56:04] <owen1> Gargoyle: so stick to 3 or 5. i want to configure my hosts with a bash script. i don't want to run mongo console and type manual commands like rs.initiate, rs.add etc. is it possible and is there an example for doing that?
[20:56:11] <Gargoyle> A rs should have an odd number of members.
[20:56:34] <Gargoyle> owen1: It's all just javascript! :)
[20:57:00] <Gargoyle> make config.js, and then just do "mongo < myconfig.js" type thing
[21:05:57] <owen1> Gargoyle: oh. so u can dump data into the mongodb collections that will setup the replica set etc?
[21:06:22] <owen1> Gargoyle: can u send me links to examples? that's perfect
[21:07:14] <Gargoyle> if you redirect (or pipe) to mongo shell, its the same as if you were running those commands by typing them.
[21:08:32] <Gargoyle> owen1: Eg. create a test.js with the following:-
[21:08:33] <Gargoyle> use test;
[21:08:33] <Gargoyle> show collections;
[21:08:43] <Gargoyle> And then run mongo < test.js
[21:09:08] <owen1> ok
[21:10:03] <Gargoyle> owen1: Or you could write a script in whatever language you want, and pipe correct output to mongo
[21:11:00] <Gargoyle> Eg. PHP :- php -r 'echo "use test;\nshow collections";' |mongo
[21:11:48] <owen1> Gargoyle: yeah. i just did it with bash. mongo < start-mongo.js
[21:11:51] <owen1> awesome
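A minimal start-mongo.js along these lines might look like the following; the replica set name "abc" matches owen1's config, while the hostnames are placeholders:

```
// run once against the first member: mongo host1:27017 < start-mongo.js
rs.initiate({
  _id: "abc",
  members: [
    { _id: 0, host: "host1:27017" },
    { _id: 1, host: "host2:27017" },
    { _id: 2, host: "host3:27017" }
  ]
});
```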
[21:14:09] <owen1> btw, i noticed that even though i have replSet = abc in /etc/mongodb.conf it's being ignored, so i had to add it to the mongod command.
[21:58:26] <Hoverbear> Hi all, I'm working with Mongoose and have a query result (from like Foo.findById(myId)) and have assigned it to something… But now when I try to use the .id function on a sub doc I'm getting errors that it does not have such a method… Any ideas?
[22:58:57] <jaimef> does restarting a mongod help in anyway catch up on delay?
[23:15:34] <Derick> What's with all the new extra whitespace on the osm.org pages?! Lots of scrolling needed now :-/
[23:16:54] <Zelest> where does osm get all the data from?
[23:48:57] <owen1> what triggers an election for a new primary? only loss of connection to a member or are there more conditions like high cpu/memory/diskspace?