#mongodb logs for Friday the 26th of July, 2013

[00:04:36] <wc-> man I've messed up some permissions
[00:04:37] <wc-> couldn't open /srv/db/mongodb/maverick.ns errno:1 Operation not permitted
[00:05:40] <jfe> hi all
[00:06:06] <jfe> are there any particular advantages to using mongodb over a traditional relational db?
[00:06:19] <jcromartie> jfe: nope, no advantages
[00:06:21] <jcromartie> ...
[00:06:22] <wc-> nevermind i got it
[00:06:33] <jcromartie> sorry I felt like being a smartass
[00:07:01] <jfe> i've used relational databases for a while, but nosql seems to have taken off in popularity.
[00:07:21] <jcromartie> jfe: it has, for good reasons, and for not-so-good reasons
[00:07:32] <jcromartie> jfe: I think that you need a pretty compelling reason to use a document store
[00:07:58] <jcromartie> jfe: right now I'm prototyping an analytics system, and MongoDB is well-suited to capturing lots and lots of events over time
[00:08:11] <jcromartie> jfe: my only index is by time
[00:08:24] <jcromartie> jfe: and I can add any event fields I want at any time
[00:09:01] <jcromartie> jfe: and I have a collection per domain (i.e. customer)
[00:09:08] <jcromartie> so it's pretty well suited for this purpose
[00:09:28] <jcromartie> the data is not very relational
[00:10:16] <jcromartie> of course it *could* be fully normalized in a RDBMS, but it would take a lot of work
[00:10:42] <jfe> i like the simplicity of the bson format. sometimes i feel like relational databases are a little too heavy-weight.
[00:13:08] <jfe> you mentioned your "only index is by time." does this imply that nosql databases are only suited for queries on a single key?
[00:24:26] <jcromartie> jfe: you can index whatever fields you want
[00:24:36] <jcromartie> jfe: but just like in a RDBMS, each index has a cost
[00:25:10] <jcromartie> jfe: Relational databases are pretty fantastic at what they do. MongoDB is pretty fantastic at what it does.
[00:26:10] <jcromartie> There are not a lot of tools that are as good as SQL for aggregating over a bunch of normalized data.
[00:27:34] <jcromartie> I think it depends on how many entities you are modeling.
[00:28:10] <jcromartie> And the structure of those things.
[00:30:47] <jfe> hmm. i'm intrigued :)
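
A minimal sketch of the event-capture pattern jcromartie describes above: one collection per customer, a single index on the event timestamp, and event documents free to carry whatever fields show up. Collection and field names here are hypothetical.

    // one collection per customer; the only index is on the event timestamp
    db.events_acme.ensureIndex({ ts: 1 })
    // events can carry any fields, no schema change needed
    db.events_acme.insert({ ts: new Date(), type: "page_view", url: "/pricing" })
    db.events_acme.insert({ ts: new Date(), type: "signup", plan: "pro", seats: 5 })
    // time-range queries are served by the single ts index
    db.events_acme.find({ ts: { $gte: ISODate("2013-07-01"), $lt: ISODate("2013-08-01") } })
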
[01:33:03] <jcromartie> I don't have an adequate hardware setup to test this, but I wonder how query performance is affected by collection size?
[01:33:56] <jcromartie> Let's say I have 20B documents, and I want to select some sample of all of them.
[01:34:12] <jcromartie> is that possible?
[04:34:07] <xpen> hi guys, i have a problem
[04:34:49] <xpen> we have three nodes as a replica set without an arbiter
[04:35:35] <xpen> one node's status now is startup2
[04:36:34] <xpen> so, i'm wondering if that's because we don't have an arbiter and the startup2 node cannot determine which one is primary, and which one is secondary?
[04:37:55] <xpen> btw, we are using mongodb 2.4.2
[07:28:14] <[AD]Turbo> hola
[09:23:58] <_Heisenberg_> Hi! I wanted to remove mongod from the autostart services on ubuntu 12.04 via sudo update-rc.d -f mongodb remove, but the service keeps running after reboot.
[09:57:26] <rspijker> _Heisenberg_: Is the service called mongodb or mongod?
[10:05:27] <_Heisenberg_> rspijker: the script in /etc/init.d is called mongodb
[10:06:11] <rspijker> okay, I'm not that familiar with ubuntu. Why not just remove the script?
[10:06:29] <rspijker> is it an actual script or just a symlink?
[10:06:33] <_Heisenberg_> rspijker: I'm not sure if I will regret it ^^
[10:06:55] <rspijker> then move it elsewhere for safekeeping...
[10:07:12] <rspijker> it's not like your system is going to break because you moved the mongodb startup script :P
[10:07:18] <_Heisenberg_> rspijker: it's a symlink to /lib/init/upstart-job
[10:07:50] <_Heisenberg_> ok, I'll move it and see what happens ^^
[10:10:50] <_Heisenberg_> can't believe that bitch is still running
[10:13:06] <_Heisenberg_> ok there is another one at /etc/init/ i'll move it too...
[10:17:01] <_Heisenberg_> killed it, nice
[10:17:24] <_Heisenberg_> thanks rspijker
[10:17:45] <rspijker> sure
[11:22:09] <remonvv> \o
[11:29:18] <pl2> Hello, I restarted my mongod process to map to a different drive, and it's taking ages to start. Does anybody know what is going on behind the scenes when you use: mongod --dbpath /example/data/path ?
[11:29:48] <kali> pl2: i suspect preallocation
[11:29:54] <kali> pl2: look for mongodb logs
[11:30:13] <pl2> kali, yeah, i'm getting a lot of this in the console: Fri Jul 26 11:28:00 [clientcursormon] mem (MB) res:75 virt:1590 mapped:464
[11:30:31] <pl2> my battery is going to die.. :(
[11:30:37] <kali> pl2: i'm not looking for that
[11:31:06] <pl2> kali, pardon?
[11:32:01] <kali> pl2: these are not the log lines i was referring to... i'm looking for some examples to show you
[11:33:02] <pl2> kali, ok, cool, just thought I'd paste a line from the console to show you where it was at..
[11:33:22] <kali> pl2: what file system is your --dbpath on ?
[11:33:34] <pl2> ext3
[11:33:42] <kali> right. you need ext4
[11:33:45] <kali> or xfs
[11:34:21] <pl2> does that improve the speed?
[11:34:24] <kali> yes
[11:34:38] <kali> i'm looking for the reference, but i'm totally lost in the new documentation site :/
[11:34:58] <pl2> no worries, the drive is empty so I should be able to re-format quite easily
[11:35:19] <kali> pl2: http://docs.mongodb.org/manual/administration/production-notes/#mongodb-on-linux
[11:36:18] <pl2> kali, thanks man. I'll make those changes
[12:42:02] <Nodex> is master/slave still supported in the latest mongo versions and is it going to be deprecated?
[12:45:25] <kali> Nodex: i think it is still supported (and will stay supported) as it is the only way to get more than ~10 read slaves
[12:46:17] <Nodex> I just want a slave as a backup for a bit of redundancy
[12:58:08] <rspijker> Nodex: just out of curiosity, why the preference of master/slave over replica set?
[13:08:44] <kkuno> hi
[13:09:00] <kkuno> I want to create a date that is today + 1 month
[13:09:05] <kkuno> what can I do?
[13:10:42] <sybarite> can anyone explain to me the benefits of storing a simple reference vs a complex reference in the mongodb database?
[13:11:02] <Derick> references are really just ids
[13:11:06] <Derick> there is not a "real" reference
[13:11:07] <sybarite> I am using the php doctrine ODM for persistence
[13:11:36] <Nodex> unlucky
[13:21:48] <rspijker> kkuno: var d = new Date(); d.setMonth(d.getMonth()+1);
[13:22:21] <kkuno> rspijker: ok thanks
[13:22:36] <kkuno> can I do something similar specifying i.e. milliseconds?
[13:22:58] <rspijker> kkuno: it's just a javascript Date object
[13:23:12] <kkuno> mmmh ok
[13:23:23] <rspijker> so you can do loads of stuff with it, have a look at the reference of it
[13:23:24] <kkuno> I thought it was something of mongo
[13:23:27] <kkuno> ok
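
Since it is just a JavaScript Date object, as rspijker says, the same trick works at any granularity. A sketch answering kkuno's millisecond follow-up (the offsets are arbitrary examples):

    var d = new Date();
    d.setMonth(d.getMonth() + 1);                    // today + 1 month
    d.setMilliseconds(d.getMilliseconds() + 500);    // plus 500 ms
    // or work directly in epoch milliseconds:
    var e = new Date(Date.now() + 90 * 1000);        // 90 seconds from now
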
[13:39:02] <bobinator60> is there a standard/simple way to generate data for map clusters from a 2d-indexed field? https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png
[13:41:11] <ragusource> hey guys
[13:41:56] <ragusource> I'm having some trouble with setting up auth on mongodb, I added a user, enabled auth in the config and now i can't log in
[13:58:11] <cheeser> 3 more issues to fix on this release of morphia: https://github.com/mongodb/morphia/issues?milestone=2&state=open (#464 is in code review now)
[14:00:52] <smremde> i'm trying to work out why a 2dsphere query is taking so long... anyone around?
[14:01:49] <smremde> 107 million rows, "indexSizes" : { "_id_" : 6115893280, "geometry_2dsphere" : 2040770480 }
[14:02:19] <smremde> version: 2.4.5, i'm expecting around 10-20 results
[14:02:28] <cheeser> god. that's a 2G *index* :)
[14:03:01] <smremde> yeah, fits in memory
[14:03:56] <smremde> the db is around 270gb
[14:04:27] <smremde> but, like i said, the index fits in memory, and i'm expecting 10-20 results - so it's only 10-20 seeks on disk, and this thing has been running over 1 hour
[14:04:35] <remonvv> smremde: Fits in memory != is in memory. It'll still have to page the data in. Not that it has to have the entire index in memory to satisfy a query. What do you call "long"?
[14:04:37] <robothands> how to undo rs.initiate please? I ran this on the primary, and then accidentally on the secondary, so now I have 2 primaries
[14:05:15] <cheeser> robothands: not sure you can. i think you'll have to rebuild that secondary.
[14:05:42] <remonvv> smremde: Hours? In that case it's unlikely it's hitting an index at all
[14:06:11] <robothands> pants
[14:06:15] <smremde> redsand: i'm running the query, with a hint to use the index, and explain
[14:06:30] <smremde> remonvv: that was for you!
[14:06:50] <remonvv> smremde: Did you try the same query in a smaller test set?
[14:07:11] <cheeser> smremde: run explain on your query
[14:07:27] <remonvv> smremde: Hinting to an index that might not be compatible with the query plan needed for your query results in no index use at all.
[14:07:39] <remonvv> cheeser: Doesn't work. Explain performs the query so that'll take hours ;)
[14:07:57] <remonvv> It should have a mode where it just publishes the query plan really.
[14:08:03] <cheeser> true. but he'll know the lay of the land when it's done.
[14:08:06] <smremde> ok, let me double check and show you indexes and queries
[14:08:16] <remonvv> cheeser: true
[14:08:29] <drag> Hi. Does MongoDB maintain a log of all write/update operations performed on it?
[14:08:34] <remonvv> smremde: Alright, pastebin or similar please. Formatted.
[14:09:03] <remonvv> drag: Not a permanent one. It has something called the "oplog" which it uses for repset member synchronization. Why?
[14:09:04] <cheeser> all? no.
[14:10:03] <drag> remonvv, so if I wanted to keep a log of all such operations, then I need to do it in my own code, correct? Management just wants a log of all commits to our databases just in case something happens.
[14:10:29] <remonvv> drag: I...don't know where to start with that one. Management as in non-technical?
[14:10:47] <cheeser> drag: ...
[14:10:50] <remonvv> drag: Every time someone asks a question like that it's usually followed by a bad idea ;)
[14:11:00] <cheeser> and what do they expect to do with that information?
[14:12:04] <remonvv> drag: To answer your question first; yes you'd have to intercept every write and store it somehow. Of course that just moves the problem to that storage solution but..yeah.
[14:12:51] <remonvv> drag: There are *many* problems with something like that. It's much better/easier to use somewhat more standard durability solutions.
[14:13:42] <smremde> could you take a look at http://pastebin.com/dBzfyGWe ?
[14:13:52] <bobinator60> does anyone have a suggestion of how to use a 2D index to generate map clusters, like this: https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png
[14:14:07] <drag> remonvv, right now, we are using MySQL with Spring and it has an @Audited annotation that writes operations to an _AUD table. Non-technical management wants to keep this, so I was asked to look into whether or not it was possible and/or how much effort we would need to implement it in our change to MongoDB.
[14:14:31] <remonvv> smremde: "geometry" != "geomety"
[14:14:46] <remonvv> smremde: If that's the problem with your code rather than a typo in the paste then there you are.
[14:15:22] <bobinator60> +1 remonvv
[14:15:35] <drag> Thanks for the information, cheeser and remonvv.
[14:15:49] <bobinator60> smremde: i've done the typo/irc thing too many times. don't feel bad
[14:16:34] <remonvv> drag: No problem. Go for replica sets and/or backups. There are many problems with the solution of writing to B what you've written to A.
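
For what remonvv describes, the closest built-in facility is the oplog he mentions: a capped (rolling, not permanent) collection of recent writes, present when mongod runs as a replica-set member. A sketch of inspecting it from the shell; the namespace is a placeholder:

    var oplog = db.getSiblingDB("local").oplog.rs;
    // most recent operations against one namespace; op: "i" insert, "u" update, "d" delete
    oplog.find({ ns: "mydb.mycollection" }).sort({ $natural: -1 }).limit(10)

Because the oplog is capped and rolls over, it cannot replace a permanent audit trail like the @Audited tables drag mentions.
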
[14:16:51] <remonvv> smremde: It's cool. Happens to the best of us ;)
[14:17:07] <smremde> i have a feeling my index will be much bigger now...
[14:17:25] <remonvv> smremde: Might be, is it sparse?
[14:17:44] <robothands> so, I ran rs.initiate on the secondary node, I've removed mongo entirely and reinstalled, but when I connect to it after installing again, I get "mongod started without --replSet yet 1 document are present in local.system.replset" how to remove this configuration?
[14:18:01] <robothands> ..please
[14:18:23] <remonvv> robothands: Just clean it and add it as a fresh member and resync it.
[14:18:38] <robothands> how do I clean it?
[14:18:45] <robothands> sorry, new to mongo :)
[14:18:48] <remonvv> Delete your datafiles.
[14:18:58] <remonvv> You have a healthy primary no?
[14:19:01] <robothands> i tried, but error persists
[14:19:16] <robothands> yes...primary is fine
[14:19:28] <remonvv> Show me your current repset status
[14:19:30] <remonvv> pastie
[14:20:35] <robothands> http://pastie.org/8178014
[14:21:00] <remonvv> robothands: It says what's wrong. Run it in --replSet
[14:21:17] <robothands> ok, sorry, I assumed there was some config leftover from trying previously
[14:21:38] <remonvv> robothands: Nope. Members need to run with --replSet or the replication magic shall not happen.
[14:23:04] <robothands> so the document that it says is present in the local database... that isn't going to mess up the replication necessarily?
[14:24:07] <remonvv> Doubtful. The local database is for replication mechanics mostly. It's not your data.
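
The clean-and-resync procedure remonvv is describing, as a sketch (the set name, host, and dbpath are illustrative only):

    // 1. stop mongod on the broken secondary
    // 2. delete its datafiles, e.g. everything under /srv/db/mongodb
    // 3. restart it with --replSet rs0
    // then, from the healthy primary's shell:
    rs.add("secondary.example.com:27017")   // initial sync repopulates the member
    rs.status()                             // watch it go STARTUP2 -> SECONDARY
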
[14:24:26] <smremde> remonvv: perhaps, is it possible to change the parameters of a 2dsphere index? it uses google's S2 right?
[14:24:43] <remonvv> In this case it's the repset topology document in the local.system.replset collection
[14:25:10] <remonvv> smremde: The parameters?
[14:26:11] <smremde> maximum keys per object, also perhaps the key size?
[14:27:21] <remonvv> smremde: I'm not following.
[14:28:27] <smremde> https://code.google.com/p/s2-geometry-library/source/browse/geometry/s2regioncoverer.h#88
[14:29:50] <remonvv> smremde: I don't think there is any such functionality to tune geo indexes. What are you trying to do and what problem do you think it solves?
[14:32:34] <smremde> well, s2cells at the maximum depth are <1cm in size. i don't need that level of accuracy. decreasing the maximum depth, and lowering the maximum number of cells, would probably make my index more efficient :)
[14:34:22] <remonvv> smremde: Okay but I don't think 2dsphere indexes support any such tweaking. I've heard references to Google S2 but I'm not even sure if the current 2.4.x codebase uses that.
[14:35:38] <remonvv> smremde: Just checked the code, it does use S2.
[14:36:31] <remonvv> smremde: Try with a smaller set to see if it hits the index at all (and fix it if it does not). If the index works but it is too big or your queries too slow then start looking at optimization.
[14:43:56] <smremde> remonvv: well I have to rebuild the database now, as I actually messed up the geojson formatting :) i'll start the import tonight
[14:48:19] <remonvv> smremde: Okay ;) Good luck. I'd still suggest a partial set for all testing.
[14:59:50] <smremde> remonvv:where is the fun in that? xD
[15:01:22] <remonvv> smremde: Your definition of fun needs further review.
[15:13:26] <robothands> im in a bit of a hole with replica sets...could someone have a look at the errors here and let me know what you think
[15:13:27] <robothands> http://pastie.org/8178152
[15:13:59] <robothands> primary is fine, secondary won't seem to join and I can't run any commands (like adding user) on the secondary
[15:14:20] <robothands> checked the error and it says it's because I need slaveOk... but if it isn't in the cluster, it shouldn't matter?!?
[15:21:01] <Harageth_> Hey so I was just re-reading stuff on locking for mongo and I am slightly confused because it seems excessive that a write operation would put a lock on an entire mongod instance. Did I read that correctly? Or does it just lock a single document.
[15:34:49] <Nodex> it's database level currently
[15:34:55] <Nodex> (locking level)
[15:35:25] <Nodex> http://docs.mongodb.org/manual/faq/concurrency/#how-granular-are-locks-in-mongodb
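
You can also observe the database-level locking directly from the shell (as of the 2.4 series):

    db.serverStatus().locks   // per-database lock statistics
    db.currentOp()            // operations currently holding or waiting on locks
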
[15:37:19] <gee_totes> what's the best way to generate a unique id on a document that's suitable to be used in a url?
[15:37:44] <gee_totes> like i want to have my url /record/123 serve up record with id of 123
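
One common pattern for gee_totes' case is a counter collection incremented atomically with findAndModify, so each new document gets a short sequential _id usable in a URL. A sketch with hypothetical names:

    function nextId(name) {
        var ret = db.counters.findAndModify({
            query:  { _id: name },
            update: { $inc: { seq: 1 } },
            new:    true,
            upsert: true
        });
        return ret.seq;
    }
    db.records.insert({ _id: nextId("record"), title: "hello" });   // served at /record/123
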
[15:55:02] <robothands> how do I completely remove mongodb? I've removed via yum, deleted the database directory and config files, yet when I reinstall this node is still in the replica set it was before I removed :(
[15:57:35] <rud> hi, my mongod 2.4.3 (running on FreeBSD 9.1) refuses connections with the following error: can't create new thread, closing connection. googling results seem to imply a problem with max open files or max user processes, but I already have these set pretty high on this production server, plus the system logs don't report any kernel/system limit being reached.. Also, I only have a few internal processes that connect to mongodb, nothing exceeding 5-7 clients at any given time..
[15:57:46] <rud> so, in your opinion, what else could be the cause of such issue ...?
[16:04:08] <robothands> how do I completely remove mongodb? I've removed via yum, deleted the database directory and config files, yet when I reinstall this node is still in the replica set it was before I removed...this seems to be ridiculously difficult to do such a simple thing
[16:15:02] <ragusource> hey guys, when I add roles to a user, I can no longer log in with that user, please help!
[16:49:34] <rhalff> hi, is dot notation for keys undesired?
[16:49:47] <rhalff> I read this post which basically states it is not: http://stackoverflow.com/questions/12397118/mongodb-dot-in-key-name
[16:49:59] <rhalff> mongodb allows it but many drivers don't
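
The restriction is easy to see in the shell: dots are reserved for reaching into nested documents, so they are rejected in field names on insert:

    db.test.insert({ "a.b": 1 })      // fails: can't have . in field names
    db.test.insert({ a: { b: 1 } })   // fine; nesting is what dot notation is for
    db.test.find({ "a.b": 1 })        // dot notation in a query path reaches a.b
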
[16:59:21] <djBuss> Hi, I'm trying to start a database with mongo but I want to require authentication. I started reading the docs and it says that I have to use the 'localhost exception' for access. Can I create the user through the command line instead?
[17:05:41] <bcows> what is the easiest way to compare multiple values in a sub collection of a document? for instance I have doc->children and I have a search array of type children... I want to return all documents that have children matching the children in my search array
[17:06:28] <bcows> (children has fields on it like: name, date, etc.)
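
One way to express bcows' query is $all combined with $elemMatch, so each search entry must match a (possibly different) element of the children array. A sketch assuming the shape described:

    db.docs.find({
        children: { $all: [
            { $elemMatch: { name: "alice", date: ISODate("2010-01-01") } },
            { $elemMatch: { name: "bob" } }
        ]}
    })
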
[17:31:26] <LoneSoldier728> hey
[17:31:42] <LoneSoldier728> anyone know how to increment a field while $addToSet
[17:31:59] <LoneSoldier728> so increment it before $addToSet or while
[17:43:49] <LoneSoldier728> http://pastebin.com/index/bfhdsS9u
[17:43:54] <LoneSoldier728> anyone understand why this does not work
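
$addToSet and $inc can in fact be combined in one update, as long as they touch different fields. A sketch (names assumed, not taken from the pastebin):

    db.users.update(
        { _id: userId },
        { $addToSet: { badges: "first-post" }, $inc: { badgeCount: 1 } }
    )
    // note: the counter increments even when the value was already in the set;
    // to only count real additions, guard the query: { _id: userId, badges: { $ne: "first-post" } }
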
[17:58:53] <ashley_w_> i have two 12 core 60GB ram servers with tons of raid5 storage replacing a 4 core 32GB ram server running mongodb. the old server just has the one mongod instance running (2.2.2), and recently there have been some performance issues. wondering if doing sharding (no replica sets) is the best approach with the given hardware.
[18:12:45] <Nodex> what's your load ?
[18:14:43] <ashley_w_> it's a private database that i'd say has plenty of writes. it's not just store and query. some data might be stored and rarely read more than once.
[18:15:34] <Nodex> define plenty
[18:16:56] <ashley_w_> i have no analytics on this
[18:18:18] <Nodex> a rough guess?
[18:18:24] <ashley_w_> we have various databases for various projects, and reads vs writes may change over time
[18:19:15] <ashley_w_> i don't know what kind of info to even put in a guess. sorry.
[18:19:54] <Nodex> ok, good luck debugging and working out the bottleneck, not really a lot we can do to help
[18:21:20] <ashley_w_> right, wasn't looking for that level of detailed help. the new hardware should at least mitigate any issues for quite some time
[18:21:56] <ashley_w_> i just wanted to make sure it's a sane setup.
[18:22:31] <kali> a setup with no redundancy is insane. you need to set your box up as a replica set
[18:22:55] <ashley_w_> why is it insane?
[18:23:21] <kali> because a box can crash
[18:24:04] <kali> your controller can go crazy and wipe out your array
[18:24:15] <kali> a bug in mongodb can corrupt the db
[18:24:22] <kali> so you need replication
[18:24:28] <cheeser> a giant marshmallow man can stampede down the city streets...
[18:24:39] <ashley_w_> a bug i imagine would affect all replicas
[18:24:48] <kali> ashley_w_: not necessarily
[18:25:23] <kali> cheeser: are you the keymaster ?
[18:25:31] <Nodex> lmao
[18:25:35] <cheeser> different versions of mongo. interactions with varying hardware specs. faulty memory on one machine, etc.
[18:25:38] <Nodex> retro jokes ftw
[18:25:46] <cheeser> why are you the gatekeeper?
[18:25:58] <kali> cheeser: because of zuul
[18:26:13] <Nodex> kali : you watch Sons of Anarchy ?
[18:26:23] <ashley_w_> i'd rather have several smaller servers, but unfortunately i don't get to make hardware decisions
[18:26:46] <kali> ashley_w_: well, you asked for advice. i gave you mine :)
[18:27:03] <kali> Nodex: nope. rewinding 24, right now
[18:27:29] <Nodex> you should check it out, and I recently rewound 24 - up to season 6 in anticipation of next year :D
[18:28:11] <kali> tpb
[18:28:28] <ashley_w_> downtime of mongodb is non-SLA impacting.
[18:28:35] <Nodex> not sure what that means kali
[18:28:42] <kali> Nodex: thepiratebay
[18:28:55] <Nodex> ah, torrents are err bad
[18:29:06] <Nodex> the NSA haz your torrentz
[18:29:08] <kali> yep. i'll stop when netflix gets here. promise.
[18:29:23] <kali> we have prism, but not netflix
[18:29:30] <Nodex> haha
[18:30:01] <Nodex> the NSA could wipe out all child porn, piracy, online money laundering in one hit but they choose not to, I wonder why
[18:30:32] <kali> because ryan chapelle told tony not to
[18:32:30] <Nodex> ah, makes sense
[18:34:46] <kali> Nodex: have you tried Veep, btw ?
[18:36:35] <ashley_w_> another question: following http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/, it says nothing about having any backend mongod servers running, so sh.addShard(...) fails. i assume i am supposed to already have these running with --shardsvr.
[18:36:45] <Nodex> Yeh, didn't like it
[18:36:53] <ashley_w_> is my assumption correct? it works.
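
For the record, the shard servers do need to be running before sh.addShard will succeed. A sketch of the order of operations (hosts and names are placeholders):

    // on each shard host: mongod --shardsvr --dbpath /data/shard0 --port 27018
    // then, from a mongos:
    sh.addShard("host0.example.com:27018")
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.mycoll", { _id: 1 })
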
[18:41:00] <kali> Nodex: it convinced me to finally try seinfeld
[18:41:17] <Nodex> never really found that funny either
[18:41:57] <Nodex> few good new shows airing this season
[18:42:21] <kali> under the dome is alright
[18:44:25] <cheeser> i'm liking that one
[18:54:17] <Ahlee> i have a small collection, but heavy on updates. It's 100,000 records, but i'm attempting to push through ~20,000 updates per second to those 100k records. mongo initially has no issues, but will periodically freeze (guessing i'm hitting a buffer limit?). Right now running one instance (no shard, no replica set, just the one instance) - what are the first steps to track down the issue?
[18:54:48] <Ahlee> My db path is in /dev/shm (ruling out disk commit), memory set is less than a gig, system has 8 gigs of ram
[18:55:45] <Nodex> are the updates index bound?
[18:55:51] <Nodex> i/e do they update an index
[18:56:24] <Ahlee> Checking
[18:56:50] <Nodex> it could just be locking and choking the update
[18:57:21] <Ahlee> only index appears to be on _id
[18:57:45] <Nodex> are they large updates?
[18:57:50] <cheeser> you'll probably want to fix that.
[18:57:58] <Ahlee> db.paramters.getIndexes() returns just _id, at least
[18:58:12] <kali> periodic freeze ? what kind of file system are you running on ?
[18:58:38] <Ahlee> Nodex: let me pull a size, but they're not large
[18:58:50] <Ahlee> kali: none - I'm running dbpath in /dev/shm
[18:58:58] <Ahlee> well, i guess that'd technically be tmpfs
[18:59:03] <kali> ha.
[18:59:18] <kali> can you check what mongod has to say in the FileAllocator log lines ?
[18:59:48] <Ahlee> production is ext4, reproducable in dev on /dev/shm
[19:00:52] <kali> ok. it's unlikely to be preallocation then.
[19:01:00] <kali> but the logs will tell you for sure
[19:01:30] <Ahlee> nojournal = true, smallfiles=true, noprealloc=true, oplogsize = 1024
[19:02:43] <Ahlee> tcmalloc?
[19:03:44] <Ahlee> so effectively adding an index on the field used for the updates would be a wise next move is what i'm getting from this (man, that sounds painfully obvious)
[19:07:22] <ashley_w_> most indeed.
[19:07:51] <ashley_w_> i've seen adding an index change an update from taking several hours to just a few minutes
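
The fix being suggested to Ahlee, as a sketch (collection and field names hypothetical): index whatever field the updates select on, then confirm with explain that the query walks the index rather than scanning the whole collection.

    db.parameters.ensureIndex({ deviceId: 1 })
    db.parameters.find({ deviceId: "abc123" }).explain()
    // "cursor" should read BtreeCursor deviceId_1, not BasicCursor
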
[20:15:53] <Harageth_> So I was reading some of the documentation today and it was saying that when updating a document it locks the entire local instance of mongod.... Did I really read that correctly? That seems kind of overkill to lock the entire local database to update one document.
[20:20:32] <cheeser> it does lock the db yes.
[20:20:45] <cheeser> future versions will include more granular locking.
[20:49:50] <bobinator60> is there a good way to generate clusters for maps from the 2D indexes?
[20:50:59] <retran> what do you mean 'clusters for maps'
[20:51:12] <retran> mongo has a spatial index type
[20:56:14] <cheeser> morphia users: https://github.com/mongodb/morphia/pulse
[21:05:19] <bobinator60> yes, 'clusters for maps'
[21:05:34] <bobinator60> and yes, I already have a 2d spatial index
[21:06:37] <bobinator60> retran: yes, 'clusters for maps' based on our existing mongodb 2d spatial indexes
[21:09:03] <retran> i don't know what clusters for maps means
[21:09:48] <bobinator60> retran: this is a clustered map: http://www.kelvinluck.com/wp-content/uploads/2009/08/cluster_screenshot.png
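
There is no built-in clustering feature for this, but one server-side approach is to bin points into a coarse grid with the aggregation framework and render each bucket as a cluster marker. A sketch; the grid size and the loc.lat/loc.lng field layout are assumptions:

    db.places.aggregate([
        { $project: {
            latBin: { $subtract: [ "$loc.lat", { $mod: [ "$loc.lat", 0.5 ] } ] },
            lngBin: { $subtract: [ "$loc.lng", { $mod: [ "$loc.lng", 0.5 ] } ] }
        }},
        { $group: { _id: { lat: "$latBin", lng: "$lngBin" }, count: { $sum: 1 } } }
    ])
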
[21:23:20] <LoneSoldier728> hey
[21:23:27] <LoneSoldier728> http://pastebin.com/index/bfhdsS9u
[21:23:35] <LoneSoldier728> anyone know how to get the inc to work correctly
[22:05:09] <Ontological> I've got collection.name and collection.something.name and when I query the collection by name, I believe mongodb is returning both sets. Can I limit it to NOT return subdocuments or am I just seeing things?
[22:07:08] <bcows> is there an official debian/ubuntu package for the c++ driver ?
[22:18:12] <LoneSoldier728> how to do $inc on mongoose
[22:45:22] <bjori> LoneSoldier728: use findByIdAndUpdate instead
[22:45:36] <bjori> LoneSoldier728: you are searching by an id, but that method expects a search criteria :)
[22:46:02] <bjori> LoneSoldier728: http://mongoosejs.com/docs/api.html#model_Model.findOneAndUpdate
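
bjori's suggestion as a Mongoose sketch (model and field names assumed); findByIdAndUpdate takes the id directly, and the update document can carry $inc alongside $addToSet:

    User.findByIdAndUpdate(
        userId,
        { $addToSet: { followers: otherId }, $inc: { followerCount: 1 } },
        function (err, user) {
            if (err) return console.error(err);
            console.log("updated:", user._id);
        }
    );
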
[22:46:52] <bjori> Ontological: say what now?
[22:47:17] <bjori> Ontological: what exactly is your query?
[22:47:29] <bjori> bcows: I don't think so no
[22:47:45] <bjori> bcows: you need to build it yourself from the mongodb source :/
[22:54:20] <caitp> you know what would be kind of cool?
[22:57:50] <caitp> it would be cool if I could just say (in any particular RDBMS or otherwise), "hey, this table is going to need to support pagination, please make some stored procedures automatically to support pagination with a very simple API"
[22:58:42] <caitp> or any other common pattern, really
[23:52:18] <dougb> I'm trying to test something where I have a document that would have sub objects in it, and once I return that object initially, is there a way to search within that sub object?
[23:52:38] <dougb> sorry, return the document initially...after I have found it with a query
[23:53:18] <retran> start over and rephrase the question so i'm not confused
[23:56:31] <dougb> sorry, I have this 'User' document, and that document can have the status for multiple placements saved in it, as detailed here: http://pastebin.com/BDeyehv7
[23:56:57] <dougb> Once I return a specific user Document, is it possible for me to query the 'UserPlacements' field for a specific object?
[23:57:15] <retran> when you say "return"... what do you mean
[23:57:29] <retran> i'm guessing you're talking about the client application ?
[23:57:39] <retran> from there it's out of the hands of mongodb
[23:57:50] <dougb> when I query the collection based on the _id and it finds the specific document and returns it
[23:58:01] <dougb> ok
[23:58:03] <retran> then it's in your client application control
[23:58:24] <retran> you can implement a search routine there, i guess
[23:58:44] <retran> you could insert that object into a temp mongo collection, and search it
[23:59:07] <retran> you could try to craft a query that searches for what you're wanting to begin with
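
retran's last suggestion, as a sketch: push the filtering into the original query with dot notation, and use the positional projection to return only the matching array element (field names are guessed from the question, not the pastebin):

    db.users.find(
        { _id: userId, "UserPlacements.placementId": 42 },
        { "UserPlacements.$": 1 }   // project only the matching UserPlacements element
    )
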