[00:11:44] <Boomtime> it isn't mongodb's fault that the design generates a huge amount of indexes, those are choices made entirely by you (or your developers)
[00:12:08] <brotatochip> not so much, by the developers of a project called nodebb
[00:12:23] <Boomtime> fine, that still isn't mongodb's fault
[00:12:26] <brotatochip> which our PM and CTO have decided to incorporate into our platform
[00:12:38] <brotatochip> right, thank you for explaining the reason it was slow
[00:15:01] <Boomtime> welcome to the cost of full text searching
[00:15:29] <Boomtime> btw, it will become a little more efficient over time
[00:15:43] <Boomtime> but if you're not handling the load now, then that probably won't matter
[00:36:32] <brotatochip> the only time it matters, Boomtime, is if there is a disaster and I have to restore using mongodump (which is less likely as I have a secondary)
[00:37:14] <brotatochip> a hidden slave for backups is still a good idea, but restoration is never going to be fast as far as I can see
[00:38:03] <ianseyer> hi all. could somebody help me construct an aggregation pipeline? have been struggling to no avail
[00:38:53] <ianseyer> need to group all my documents by a field called 'code', and then find all those documents matching that code that have distinct titles
[00:41:49] <joannac> ianseyer: what do you have so far?
[00:44:05] <ianseyer> well, it's definitely worth mentioning this is my first serious exploration of mongo
[00:44:39] <ianseyer> but, here's my pipeline: [{'$group':{'_id':'$code'}}, {'$project':['title', 'code']}]
[00:44:41] <brotatochip> Boomtime: any idea as to how I can find out why my secondary keeps getting veto'd out of an initial sync?
[00:44:59] <ianseyer> grouping by code, then trying to get the title for each resulting document, to see if any code groupings had multiple titles?
[00:45:09] <ianseyer> (doing this in pymongo, btw)
[00:45:28] <joannac> ianseyer: erm, get rid of the second bit (the $project) and just print out what you have
[00:45:44] <joannac> I think it'll help you understand what's actually happening
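A minimal mongo-shell sketch of where joannac is steering this, assuming a collection named items ($project takes a document, not an array, which is why the original second stage errors):

```
// group by code, collecting the distinct titles seen for each code
db.items.aggregate([
  { $group: { _id: '$code', titles: { $addToSet: '$title' } } },
  // keep only codes that ended up with more than one distinct title
  { $match: { 'titles.1': { $exists: true } } }
])
```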
[00:48:26] <brotatochip> oh, nevermind, I just found out why in the log 2015-06-10T00:46:14.077+0000 [rsBackgroundSync] replSet error RS102 too stale to catch up
[00:49:33] <brotatochip> is there any way to force the initial sync?
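For reference, the standard way to force an initial sync on a too-stale member is to wipe its data and let it resync from scratch; a sketch, assuming /data/db is that member's dbpath and mongod runs as a service:

```
sudo service mongod stop
rm -rf /data/db/*          # assumes /data/db is this member's dbpath
sudo service mongod start  # the member rejoins and performs a full initial sync
```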
[01:01:43] <tejasmanohar> For a URL shortener app lol :P
[01:02:07] <GothAlice> https://twitter.com/GothAlice/status/582920470715965440 may be relevant. ;P
[01:02:09] <tejasmanohar> Just because it was easy on Heroku, and it works for our internal URL shortener that doesn't need that much space, 500MB is way more than enough
[01:05:40] <GothAlice> Yeah. For me, a cluster of HS-M5s from mongolab would cost me… $76K/month, going with the less-RAM one. $106K/mo. for the more RAM one.
[01:05:53] <GothAlice> Don't know about leader, the yeah was to the expensive comment. ;P
[01:06:07] <GothAlice> So compose is actually only 4x as expensive for my scale.
[01:06:31] <tejasmanohar> well i dont quite have that scale :P
[01:07:17] <tejasmanohar> I've heard a lot of things about MongoDB being terrible if my app scales out to multiple nodes, due to its poor distributed consistency, and especially because I'm charging users for things and then saving that in the db, so if it doesn't save, that's not good
[01:07:34] <tejasmanohar> Is this just people not configuring the DB properly, and is that a common situation out of the box?
[01:08:38] <GothAlice> https://blog.serverdensity.com/does-everyone-hate-mongodb/ covers most of the naysayers.
[01:08:49] <GothAlice> And the answer is yes, that is the most typical situation.
[01:08:57] <GothAlice> (Misunderstanding coupled with misconfiguration or misuse.)
[01:09:34] <GothAlice> The default settings used to be less… tolerant of ignorance… than they are today.
[01:14:57] <tejasmanohar> We then tested a configuration that prevents any possible data loss. In this configuration, MongoDB outperforms Cassandra and Couchbase by more than 25x, with latency that is more than 95% better than Cassandra, and more than 99.5% better than Couchbase.
[01:15:09] <tejasmanohar> not something i have to worry about right now much because i have no one using my app
[01:15:19] <tejasmanohar> but it's just something I want to think about, like do I need to be considering switching, etc.
[01:15:43] <GothAlice> Understanding how to construct your sharding keys for maximum benefit is something you'll need to worry about later, too.
[01:15:50] <tejasmanohar> GothAlice: for any financial application, would you recommend pushing up to a higher write concern than the default Acknowledged like Journaled?
[01:16:09] <tejasmanohar> With a journaled write concern, MongoDB acknowledges the write operation only after committing the data to the journal. This write concern ensures that MongoDB can recover the data following a shutdown or power interruption.
[01:16:32] <GothAlice> For financial information I recommend using a transactional database (TokuMX, a fork of MongoDB, or any actual transactional DB), or figuring out two-phase commits and rollback strategies.
[01:17:18] <tejasmanohar> you can just buy stuff (lottery tickets) in my app
[01:18:11] <tejasmanohar> Shoot, is TokuMX the same interface though? Like if I have a Node/Mongoose/Express app, can I still use it the same way as Mongo?
[01:18:15] <tejasmanohar> or is it a lot different?
[01:21:19] <tejasmanohar> gotta think about if we fall into that hehe
[01:21:57] <brotatochip> race condition attacks are super cool
[01:22:03] <tejasmanohar> What would be an example? Like a gambling casino app? GothAlice
[01:22:10] <GothAlice> You also want to know that if the user hits "Stop" in their browser part-way through a deduction from one account and addition to another, either both operations succeed or both operations fail.
[01:22:38] <GothAlice> If not, either money disappears from existence, or money can be spontaneously created.
[01:22:57] <tejasmanohar> Oh no we don't have a system like that
[01:23:00] <tejasmanohar> Not a finance application
[01:23:11] <tejasmanohar> You can just buy tickets, and Stripe handles the charges GothAlice
[01:23:11] <GothAlice> "Money" being the thing you're counting.
[01:24:13] <tejasmanohar> *sorry, I don't know why I kept doing capital h :P
[01:24:30] <tejasmanohar> Hm, GothAlice, but how would changing the database change that?
[01:24:32] <GothAlice> XD Thought you were having a eureka-seizure.
[01:24:51] <cheeser> in practice, financial transactions are never as simple as a single transaction "take from one give it to another" action
[01:25:09] <GothAlice> In TokuMX you can perform a bulk operation set that will either all succeed, or all fail, regardless of the client connection failing or even the server being written to disappearing. On startup, the incomplete transaction will be rolled back safely.
[01:25:34] <GothAlice> TokuMX is fully ACID compliant.
[01:25:56] <tejasmanohar> So you're saying I can connect to a TokuMX server the same way I can connect to mongo servers etc?
[01:26:00] <tejasmanohar> Like I don't need another driver?
[01:26:04] <tejasmanohar> Because I'm using Mongoose
[01:26:13] <tejasmanohar> And I don't see any documentation of this so I'm getting wary ;)
[01:26:22] <GothAlice> Pity about you using mongoose.
[01:26:27] <GothAlice> But yeah, it's wire-protocol compatible.
[01:27:34] <tejasmanohar> Alright, I'll have to give it a shot I think... so how do things just magically get better and more ACID compliant without changing the way I interact with the DB?
[01:27:41] <tejasmanohar> like why doesn't Mongo just incorporate those things? GothAlice
[01:28:01] <cheeser> part technical, part product reasons
[01:28:47] <cheeser> toku's txns, e.g., only apply to a single shard. mongo's philosophy to date has been to do nothing that can't be applied equally well in a sharded situation
[01:28:52] <StephenLynx> tejasmanohar you talking about toku?
[01:28:57] <GothAlice> TokuMX uses a very different behind-the-scenes structure that allows for point in time everything.
[01:29:08] <GothAlice> (Fractal trees instead of b-trees.)
[01:29:08] <StephenLynx> they made A LOT of compromises to achieve what they achieved.
[01:29:17] <StephenLynx> from what I heard, it doesn't even support unique indexes.
[01:29:19] <cheeser> StephenLynx: and not all of them good
[01:29:31] <StephenLynx> toku has a really narrow use case.
[01:29:33] <tejasmanohar> sheet, so I MAY have to change things in my code :P
[01:29:33] <GothAlice> Indeed. It's a trade-off for those with slightly different needs. :)
[01:29:53] <tejasmanohar> so I do need to change things in my code for that then
[01:30:03] <GothAlice> (Those needs being über performance, compression, which at the time was unique to TokuMX, and the aforementioned point-in-time everything that facilitates true ACID transactions.)
[01:30:14] <StephenLynx> toku is a very advanced tool that one shouldn't adopt without studying it a lot.
[01:30:17] <GothAlice> There would be certain patterns you would have to avoid.
[01:30:47] <StephenLynx> because it will impose severe limitations on what you can do.
[01:31:22] <tejasmanohar> trying to find the limitations doc
[01:31:38] <tejasmanohar> mongodb on 1 node with journaled write concern seems safe as ever
[01:33:49] <StephenLynx> study it, learn it, and adopt it if it's a fit.
[01:33:52] <tejasmanohar> Does anyone have performance stats on Journaled vs Acknowledged write concern speeds?
[01:36:00] <tejasmanohar> Is Write Concern something set at the driver level when you write to the DB, or in the DB's configuration itself? Looks like driver level, but checking since I don't see it written anywhere in the docs (prob just missing something)
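Write concern is normally set in the driver, per connection or per operation, rather than in the server's own configuration. A hypothetical Node.js sketch (collection and field names are made up):

```
// j: true makes the server wait for the journal commit before acknowledging
db.collection('purchases').insertOne(
  { user: 'tejas', tickets: 2 },
  { w: 1, j: true },
  function (err, result) {
    if (err) return console.error('write failed', err);
    // at this point the write has been committed to the journal
  }
);
```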
[01:43:36] <tejasmanohar> eventual consistency scares me
[01:43:49] <tejasmanohar> when we have time sensitive data like a lottery draw closing NOW :P
[01:43:50] <GothAlice> Depending on how you structure your updates (i.e. using two-phase, synchronization, stacks, queues, etc.) they can be highly resistant to partial writes.
[01:44:19] <cheeser> tejasmanohar: eventual only applies to the secondaries. so long as you do primary reads, you're fine.
[01:44:29] <GothAlice> Critical reads you point at primaries.
[01:44:53] <GothAlice> Only for queries where getting stale data is A-OK (i.e. user profile data like profile picture, etc.) do you direct them at secondaries.
[01:45:20] <cheeser> you need to be wary of rollbacks during primary elections but those are relatively rare and you can mitigate by using a higher write concern
[01:48:38] <tejasmanohar> im feeling a lot better after going over all this stuff here lol
[01:48:43] <GothAlice> For things like large-scale analytics, the laggiest data we've had back was 15 seconds old, and that's perfectly acceptable for fifteen-minute to one-hour granularity graph data. The data being closer to the people querying it was more important. :)
[01:48:43] <tejasmanohar> was pretty scared about launching before
[01:48:49] <cheeser> that book might be terrible but it's free :)
[01:50:26] <GothAlice> I also find the whitepapers and whatnot that are on mongodb.com (but not .org for some reason) to be quite useful in gauging capabilities and approaches to problems.
[01:50:38] <GothAlice> Helps to have examples to follow. :)
[01:50:47] <cheeser> .org is the community site. .com is the commercial side.
[02:07:57] <tejasmanohar> GothAlice: do you know Journaled vs Acknowledged speed?
[02:08:16] <tejasmanohar> That's the only thing I haven't been able to find
[02:08:21] <tejasmanohar> Just wanna know how much of a performance hit it is
[02:08:33] <cheeser> journaled will be slightly slower
[02:09:00] <GothAlice> Not sure of the applicability of these results: http://techidiocy.com/write-concern-mongodb-performance-comparison/
[02:09:11] <GothAlice> A drop to 25/sec seems extreme to me.
[02:09:14] <cheeser> all writes are journaled. the journaled write concern waits for that to happen vs acknowledged is just the server ack'ing the write back to the driver
[02:11:48] <cheeser> "This is true because you can only really get an idea of performance when you’re testing your own queries on your own hardware. Raw figures can seem impressive but they’re not representative of how your own application is likely to perform."
[02:13:15] <tejasmanohar> "all writes are journaled. the journaled write concern waits for that to happen vs acknowledged is just the server ack'ing the write back to the driver"
[02:13:22] <tejasmanohar> so it really doesn't make a difference
[02:13:29] <tejasmanohar> if the server crashes, I still lose data that's not journaled lol
[02:17:43] <Doyle> Hey. Does this look about right for a geo distributed replication-set with sharding? https://drive.google.com/file/d/0B5g2nsz5NekdSnB1RVoyeC1FM1k/view?usp=sharing
[03:18:09] <arussel> anyone know of a graphite plugin for mongo?
[03:39:37] <arussel> how do you get the lag of a secondary out of rs.status()?
[03:41:09] <joannac> arussel: diff the optime with the primary optime
[03:41:21] <Boomtime> @arussel: or use db.printSlaveReplicationInfo()
[03:41:42] <Boomtime> (what joannac says is correct, i just find the other command much easier)
[03:43:16] <arussel> Boomtime: I have to parse it in javascript to send it to graphite
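A rough mongo-shell sketch of the diff joannac describes, easy to adapt into a graphite feeder (assumes a primary is currently up):

```
var status = rs.status();
var primary = status.members.filter(function (m) {
  return m.stateStr === 'PRIMARY';
})[0];
status.members.forEach(function (m) {
  if (m.stateStr === 'SECONDARY') {
    // optimeDate is a Date; subtracting yields milliseconds
    print(m.name + ' lag: ' + (primary.optimeDate - m.optimeDate) / 1000 + 's');
  }
});
```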
[03:47:04] <sabrehagen> given that objectids store a timestamp in them, is there a query to check if any documents in a collection are newer than a given document by comparing ids?
[03:48:20] <joannac> yes, but it'd only be at second accuracy (the ObjectId timestamp has one-second resolution)
[03:50:26] <sabrehagen> that's plenty. how might i structure the query?
[03:50:46] <arussel> sabrehagen: Stackoverflow has plenty of examples
[03:51:38] <arussel> iirc, just $lt, $gt on _id should do
[03:53:25] <sabrehagen> arussel: thanks, was not immediately obvious when googling. will try this now.
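A minimal sketch of that query, assuming a collection named events and a reference document with _id givenId:

```
// ObjectIds embed a creation timestamp, so $gt on _id approximates "newer than"
var ref = db.events.findOne({ _id: givenId });
var newer = db.events.find({ _id: { $gt: ref._id } }).limit(1);
print(newer.hasNext() ? 'newer documents exist' : 'nothing newer');
```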
[04:11:57] <tejasmanohar> 557668d932d0f724c8af263a Unhandled rejection Error: Invalid val: {"_bsontype"=>"ObjectID", "id"=>"UvjÒ¬\u0092\f B\u0082\u009D8"} must be a string under 500 characters
[10:24:43] <pagios> i want to run a db on my rpi; the rpi can crash at any time (by removing the power, for example). is mongodb a good db to use in this case?
[10:24:54] <pagios> does it run on persistent storage or purely in volatile ram?
[11:23:58] <mehdy314> I have a python script that inserts docs from a mongo collection into another db using pymongo. the collection has 306099 docs; after running the script without any error, just 33426 of the docs had been inserted. i use the find method, by the way
[11:49:11] <mehdy314> solved! problem was in the script
[12:11:05] <Doyle> When running a distributed mongodb setup, is it recommended to place config servers and query routers at different geographical locations?
[12:14:07] <Doyle> Say you have US-EAST and US-WEST with a sharded replication set. You place the shards in EAST (Primary/Secondary/Arb), and another secondary for each shard (priority 0) in West.
[12:14:31] <Doyle> In this setup you have all the config servers and query routers in East. No problem.
[12:16:38] <Doyle> Can you place a new shard in West (Pri/Sec/Arb) and add it to the configuration so the config servers track all the meta data for it as well? Is there anything to be concerned with when spanning config servers and query routers across different sites?
[13:05:53] <leev> i have a cluster with a primary, secondary and arbiter. when I shutdown the secondary, the primary drops back to secondary. shouldn't it stay primary as it's also connected to the arbiter?
[13:24:26] <leev> i also get "[LockPinger] Socket say send() errno:9 Bad file descriptor" to all config servers
[13:26:31] <nfo> Hi. About the reuse of disk space freed by deleted documents: is setting a TTL on documents more efficient than removing the docs with the `delete` command? http://docs.mongodb.org/manual/tutorial/expire-data/ As far as I know, with mmapv1, one has to compact() or repairDatabase() or resync the collection/DB to be able to reuse disk space freed by deleted documents. is that true?
[13:27:21] <nfo> oh, it's more or less documented here: http://docs.mongodb.org/manual/faq/storage/#faq-empty-records
[14:00:55] <rickardo1> I am new to mongodb and only need it to handle a large amount of products (~1gb). they are listed under root with key "Data" : [....] Is there any way I can import them directly from the cmd line?
[14:01:48] <StephenLynx> How is this data formatted and organized?
[14:02:18] <StephenLynx> I have never used mongoimport, but I think it might be able to handle some common formats.
[14:02:48] <StephenLynx> if you have any chance of automatically importing it, it will be with mongoimport.
[14:12:21] <rickardo1> StephenLynx: mongoimport -d test -c Data data/products.json goes to 100%, but then a memory error.. I have 8 gb on this machine.. :/
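One hedged approach for a file whose root object wraps the records in a "Data" array: strip the wrapper outside of mongoimport so the importer streams one document per line instead of loading the whole array (file names here are guesses):

```
# emit one product per line (newline-delimited JSON)
jq -c '.Data[]' data/products.json > products.ndjson
# import without needing --jsonArray or a giant in-memory array
mongoimport -d test -c products --file products.ndjson
```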
[14:13:02] <saml> how can I drop databases with the prefix test?
[14:13:18] <saml> due to a bug in the test suite, it created so many test* databases
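A mongo-shell sketch of that cleanup; double-check the prefix only matches the junk databases before running it:

```
// drop every database whose name starts with "test"
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
  if (d.name.indexOf('test') === 0) {
    db.getSiblingDB(d.name).dropDatabase();
  }
});
```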
[14:16:31] <Doyle> Hey. When spanning multiple sites with a replication set, are the config and query servers distributed as well?
[14:16:39] <tubbo> any mongoid users here? wondering how i can solve a timeout issue with an aggregation i have. https://gist.github.com/tubbo/aecf639c8e683b43777c
[14:16:41] <Doyle> Couldn't find any specifics on that
[14:17:04] <tubbo> actually more just looking for documentation on how one would optimize it
[15:20:10] <yauh> I only want to find 10+1 documents, but the current player's position in the score list (you are on position #27153) should be listed as well
[15:57:48] <StephenLynx> never heard about mongoid :v
[15:58:01] <tubbo> it's an ODM for MongoDB in Rails
[15:58:33] <diegoaguilar> Hello, I'm trying to restore some databases
[15:58:50] <StephenLynx> it is specifically designed for the rails framework rather than just ruby?
[15:58:52] <diegoaguilar> I received multiple .json and .bson for each
[15:59:01] <diegoaguilar> but when I run mongorestore I get back:
[15:59:03] <Lonesoldier728> why would you say the value is wrong StephenLynx, in what sense?
[15:59:05] <diegoaguilar> "Failed: error scanning filesystem: error reading root dump folder: open dump: no such file or directory"
[15:59:40] <tubbo> StephenLynx: i suppose you could use it however you want, but it's definitely intended for use in a rails app. follows ActiveRecord's lead on a lot of things.
[16:01:03] <StephenLynx> lone, that looks plain wrong from everything I know of the standard driver.
[16:01:16] <StephenLynx> but again, mongoose might do some ass backward thing where that is right.
[16:01:23] <diegoaguilar> could anyone guide me on it?
[16:01:24] <StephenLynx> which is the reason I suggest not using it.
[16:07:11] <Lonesoldier728> well there is my code and question http://stackoverflow.com/questions/30761620/mongoose-populate-not-returning-results
[16:35:49] <jecran> Hello guys. I am using mongodb with node. https://gist.github.com/anonymous/d1920d655b0c1655e03f simple code here, I cannot figure out how to use the findOne() method instead of find(). Please help!
[16:37:04] <jecran> var dbs = db.collection('users').findOne(query); I tried this and similar and keep getting errors
[16:37:25] <diegoaguilar> jecran, what does query look like?
[16:37:50] <jecran> var query = {name:data.name}; this query works fine with the find() function
[16:38:16] <diegoaguilar> ok, and what are those errors?
[16:38:44] <diegoaguilar> are u using node "mongodb" module?
[16:38:48] <jecran> https://gist.github.com/anonymous/d1920d655b0c1655e03f the sample code. Just want to apply findOne() instead of find() lol.... 'object is not a function' is the error that I get
[16:39:04] <diegoaguilar> ok, this is because of what find and findOne return
[16:39:15] <jecran> diegoaguilar: the sample works
[16:39:19] <diegoaguilar> mongodb module is quite "odd" in this sense
[16:39:42] <StephenLynx> you are not passing anything to find
[16:39:51] <StephenLynx> your query variable is unused.
[16:40:02] <svm_invictvs> In Morphia, if I have an @Reference annotated Map<String, Foo> does that basically make it a mapping of strings to object ids?
[16:40:45] <svm_invictvs> And second question, does morphia cascade the put operation when persisting the object?
[16:41:26] <jecran> var dbs = db.collection('users').findOne(query, {}, function(err, doc){}) .... this produces null value without any errors
[16:42:29] <StephenLynx> probably because it didn't find anything.
[16:43:45] <jecran> StephenLynx: query = {name:data.name}; the sample here works fine and displays the data correctly, but I want just the 1 piece of data, not all.
[16:44:09] <StephenLynx> what you mean "1 piece of data"?
[16:46:21] <StephenLynx> instead of comparing it to null
[16:46:35] <StephenLynx> and you can just use doc.name instead of ['name']
[16:46:52] <StephenLynx> in general, refer to this : https://github.com/felixge/node-style-guide
[16:47:12] <jecran> StephenLynx: any change at all in the code to try to make it findOne() results in an error, except for: var dbs = db.collection('users').findOne(query, {}, function(err, doc){}) Which results in null. If I exclude the middle param {}, I get actual errors
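For reference, a minimal sketch of findOne with the node driver: it hands the document to a callback rather than returning it, so assigning its return value to a variable yields nothing useful:

```
// findOne delivers a single document (or null) via the callback
db.collection('users').findOne({ name: data.name }, function (err, doc) {
  if (err) return console.error(err);
  if (doc === null) return console.log('no matching user');
  console.log(doc.name);
});
```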
[16:55:57] <jecran> I'm playing with it lol.. back in 2 mins
[16:59:45] <jecran> StephenLynx: latest attempts..... no crashing, but still null. with or without using a cursor... https://gist.github.com/anonymous/79af1207cd4a19243355
[17:03:27] <StephenLynx> are you sure you have this data on your db?
[17:03:39] <StephenLynx> that you are using the right collection?
[17:04:09] <jecran> StephenLynx: Yes. its there, and displays when I search through the whole collection
[17:05:43] <jecran> StephenLynx: I don't know , but I just erased it all and started over, and it worked first try.... what the hey lol .... thanx for your input
[17:11:39] <StephenLynx> I am more concerned with skill than fun to be honest.
[17:12:24] <jecran> StephenLynx: turns out that all the reading I have done in my spare time has paid off. Learning concepts, not code, like inheritance and encapsulation etc
[17:12:26] <StephenLynx> you should derive enjoyment from the challenge of mastering it.
[17:13:20] <StephenLynx> not from spitting out easy and low-quality software.
[17:15:57] <jecran> agreed. And I still love it. And honestly, I learned regular java, and am more than happy to span off to all this: node, mongo, js, unity3d and c#, php.... And my code needs less refactoring every week :P
[17:16:10] <svm_invictvs> How do I get a sequential object ID in Morphia?
[17:16:59] <svm_invictvs> cheeser: Also looking at this example here: https://github.com/mongodb/morphia/blob/8f70190862c0094f02f3bf27713ac0f63d1f2cd2/morphia/src/test/java/org/mongodb/morphia/utils/LongIdEntity.java
[17:17:34] <cheeser> if you want that, you could findAndModify() to emulate a sequence in the DB
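A mongo-shell sketch of the counter pattern cheeser means, assuming a counters collection:

```
// atomically increment and return the next value in a named sequence
function nextSequence(name) {
  return db.counters.findAndModify({
    query: { _id: name },
    update: { $inc: { seq: 1 } },
    new: true,    // return the post-increment document
    upsert: true  // create the counter on first use
  }).seq;
}
var id = nextSequence('users');
```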
[17:22:17] <GothAlice> It is, but: no enforcement, no cascading rules, and ODMs that add such behaviour often trap users into thinking MongoDB does support it, who are then surprised when the command-line tools don't do the same thing.
[17:22:31] <GothAlice> (Ending up with bad data as a result.)
[17:22:38] <svm_invictvs> GothAlice: There have been such things as poorly written tools, that's why I asked.
[17:22:50] <svm_invictvs> Though I was 99% sure that it didn't try to do anything like that.
[17:23:00] <svm_invictvs> Now I'll shut up before cheeser comes at me with a fireaxe
[17:24:07] <GothAlice> When db.collection.remove(somecriteria) in the shell and db.collection.find(somecriteria).remove() in my app do different things… there may be a problem.
[17:24:56] <svm_invictvs> Cascading is a nightmare anyhow.
[17:24:56] <jecran> StephenLynx: https://gist.github.com/anonymous/3cec295353b57cd62658 .... any recommendations on this? I get all expected results now..... coolbeans, although stupidly simple
[17:26:25] <ggoodman> It appears that in the 2.x series of node-mongodb-native, the semantics of `findAndModify` have been significantly changed.
[17:27:33] <ggoodman> What is now available to do a conditional update on many documents and return the modified documents resulting from the operation?
[17:27:33] <svm_invictvs> What's wrong with findAndModify?
[17:27:34] <GothAlice> The edge cases are substantial: http://docs.mongodb.org/manual/reference/method/db.collection.findAndModify/#return-data
[17:39:50] <cheeser> if the server is gone, the agents are too.
[17:40:01] <cheeser> so they're not really around to notice anything.
[17:40:06] <GothAlice> shlant: MMS uses a once-per-minute ping, AFAIK, and debouncing means that the once-a-minute ping gets stretched for a few minutes before the web interface really notices the problem.
[17:40:27] <cheeser> the server has to account for lag and this and that.
[17:40:32] <GothAlice> (A single failure to ping might not indicate a problem. Two or more in a row missed = a problem.)
[17:40:43] <cheeser> that threshold is probably configurable. i don't recall.
[17:42:26] <GothAlice> Debouncing is useful, but does add to the latency of alerts. Nobody wants their monitoring to flap, though. (It's down! It's back! It's down! It's back! …)
[17:43:05] <shlant> understood, just seemed like 5 minutes of lost connectivity is more than a "flap"
[17:43:26] <shlant> I still have servers showing on MMS that _had_ monitoring and backup agents on them
[17:43:39] <shlant> I chose "uninstall" before shutting down the servers
[17:43:40] <GothAlice> Oh, MMS doesn't forget about hosts automatically.
[17:43:59] <GothAlice> You have to go in and trash the host mappings manually, AFAIK.
[17:48:19] <GothAlice> Hmm, also check Administration > Agents and try the "…" button, "Remove from MMS", if available in the server list against each host.
[17:48:43] <GothAlice> (If still enrolled in a replica set, you'll have to "Remove from Replica Set" first.)
[17:51:10] <shlant> yea I already removed the replica
[17:51:35] <shlant> I don't see an option under Agents to remove them
[17:51:47] <shlant> even though I already uninstalled them before killing the hosts
[17:51:56] <shlant> so not sure why they are even there still
[17:53:38] <shlant> like they are on the server and yet I don't have the option to uninstall them anymore as I already did
[18:02:32] <shlant> so I want the server to unfriend them
[18:02:32] <mbeacom> Hello everyone! Can someone please assist me? http://pastebin.com/YwLhKGMB In the provided JSON example, you'll find three example mongodb 2.6 results. I need to filter the results down to only return documents whose embedded messages don't have more than one distinct messages.author_id across all 'messages.$.author_id'
[18:09:29] <shlant1> so I guess I'll just wait and see if the servers go away, or contact support?
[18:14:52] <shlant1> but I guess it decided to just ignore my request
[18:15:01] <shlant1> but still remembers that I chose it
[18:15:19] <shlant1> dangit robots, do what I want
[18:21:16] <mbeacom> Is there a Bueller out there? If so, does Bueller want to help me resolve a simple aggregation pipeline issue? :D http://pastebin.com/YwLhKGMB
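A hedged sketch of one way to express that as a 2.6 pipeline; the collection name threads is assumed:

```
// collect the distinct author_ids per document, then keep single-author ones
db.threads.aggregate([
  { $unwind: '$messages' },
  { $group: { _id: '$_id', authors: { $addToSet: '$messages.author_id' } } },
  { $match: { authors: { $size: 1 } } }
])
```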
[19:18:40] <jacksnipe> Hey, I'm using mongo with python, flask, and the MongoEngine extension. Since I'm pretty sure MongoEngine is just a wrapper over the basic python mongo lib, do I have to worry about ints vs longs or other variable size stuff?
[19:20:18] <jacksnipe> nvm I need to be using an explicitly specified LongField
[19:33:02] <GothAlice> jacksnipe: As a note, there's also #mongoengine, and sometimes people even talk there. ;)
[19:41:51] <qrome> Let's say I wanted to start purging data, removing it from my mongodb after a certain time period and storing it somewhere slow. Is this normal?
[19:47:53] <qrome> I have an ephemeral app so this is perfect and will make sure queries are fast
[19:50:50] <deathanchor> WTF? rs_a:SECONDARY> rs.reconfig(cfg, { force : true }); "errmsg" : "exception: need most members up to reconfigure, not ok : host2:27018"
[19:52:10] <GothAlice> That node is in a failure state, at the moment, unable to hold an election.
[19:52:46] <GothAlice> Thus no way to propagate the reconfig to other nodes reliably, and yeah, even if you force it, it'll say no to that.
[19:53:20] <deathanchor> it's only a test setup anyway
[19:53:31] <GothAlice> One of the things I love about MongoDB, when in doubt, rm -rf. ;)
[19:53:42] <deathanchor> funny it is complaining about the new host
[19:53:54] <deathanchor> how can I check what mongodb think is the host name?
[19:54:24] <GothAlice> rs.conf() will hand you back the member list, with host names.
[19:54:29] <GothAlice> Those are the ones MongoDB would use to connect.
[19:55:17] <GothAlice> If DNS names are in there, DNS becomes critically important to the reliability of your DB cluster, which is why I have /etc/hosts managed by the cluster to automatically include hard references to all other hosts in the cluster. No DNS problems. :)
[19:55:45] <deathanchor> yeah, I think someone changed the box from hostname to hostname.fully.qualified
[19:57:08] <GothAlice> Fully-qualified is good. Changing is bad. ;)
[19:57:55] <deathanchor> yeah, hence the testing, someone messed with it and didn't bounce the service
[19:58:38] <deathanchor> oh man I totally broke it now
[19:59:49] <GothAlice> If you have fewer than 9 hosts, for the free tier, I can highly recommend https://mms.mongodb.com/ as a service to manage your cluster. Full disclosure: satisfied customer.
[20:00:07] <deathanchor> LOL fewer than 9 hosts...
[20:00:12] <GothAlice> You'll still need to resolve the DNS problems yourself, but everything else is made easier by MMS. :)
[20:00:42] <deathanchor> yeah I know now that too many things changed since it was last running
[20:08:54] <GothAlice> The old "2d" index, meant for backwards compatibility with 2.2 and earlier, assumes a flat two-dimensional euclidean plane.
[20:08:55] <cheeser> 2d sphere, as I understand it, is a more appropriate way to handle geo stuff than geo2d because distances and the like aren't as simplistic as a flat surface
[20:09:01] <jacksnipe> ah that's it, what's the speed difference between a Geo2D index and a 2dsphere?
[20:09:32] <GothAlice> There's a reason planes fly in a curve across the surface of the Earth instead of taking a "straight line" approach. Calculating distances around spheres can be mind-bending, and a curved path is often shorter.
[20:09:50] <jacksnipe> yeah, but is it a huge speed hit?
[20:10:09] <GothAlice> Optimization without measurement is by definition premature. Try both, and measure. ;)
[20:10:23] <GothAlice> It may depend very heavily on your queries.
[20:10:53] <jacksnipe> yeah, thought so. ATM I'm just looking for users in a radius, and calculating the distortion from the rect mapping to the sphere mapping
[20:11:02] <jacksnipe> and then accounting for that manually
[20:11:29] <GothAlice> 2dsphere would handle that for you. You can specify a search with a central point and circular radius.
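A minimal sketch of that kind of search, with made-up coordinates; $maxDistance is in meters for 2dsphere:

```
db.users.createIndex({ location: '2dsphere' });
// users within roughly 20 miles (~32 km) of a point
db.users.find({
  location: {
    $nearSphere: {
      $geometry: { type: 'Point', coordinates: [-76.6, 39.3] },
      $maxDistance: 32000
    }
  }
});
```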
[20:11:46] <jacksnipe> yeah I guess I just have to time it
[20:12:27] <jacksnipe> yeah I know what functionality I'd get I'm really _only_ concerned about the speed of the queries :P
[20:13:03] <GothAlice> I have peculiar requirements not covered by MongoDB's geographic capabilities; assuming operation on the surface of a spherical planetoid is highly restrictive for my needs. ;)
[20:14:01] <jacksnipe> yeah I need very low precision/accuracy (adding or dropping ~20 miles is nbd) so I was thinking that the Geo2D was good enough but I guess I'll take a look
[20:14:22] <GothAlice> centerSphere should work on geo2d, too.
[20:14:54] <GothAlice> And the note that geo2d is meant for legacy compatibility with 2.2 (and we're in 3.0…) should be a sign that its use is flatly deprecated.
[20:15:18] <GothAlice> (For use on mostly spherical planetoids, that is.)
[20:16:53] <jacksnipe> hmmm spherical only supports the standard projection though right?
[20:17:18] <GothAlice> I'm not sure of the exact projection being used in MongoDB.
[20:17:28] <jacksnipe> Am I SOL if, say, I want to use the USGS Maryland projection with spherical?
[20:45:44] <GothAlice> The more records you have, the more the field name overhead adds up. Renaming like this really hurts when I choose to use non-Python tools like the mongo shell, though.
[20:45:48] <deathanchor> GothAlice: what ODM you use for python?
[20:46:27] <deathanchor> gonna check it out, our devs use something for java for the same reason
[20:47:34] <GothAlice> MongoEngine (and ODMs in general) provide a bit more functionality than just field re-naming. ;) MongoEngine is a full schema enforcement system, with triggers (signals), etc.
[20:48:19] <deathanchor> Poop to that, I like freeballing it in my documents :D
[20:48:47] <GothAlice> Also handles things like caching references (storing "remote" fields alongside the ObjectId of the reference) automatically, such that updating the referenced document updates all cached values, too.
[20:58:32] <saml> update and then find right away gives inconsistent results
[20:59:00] <saml> i have integration tests that insert documents and remove them.. and execute count and other queries very fast. and sometimes tests fail
[21:08:26] <greyTEO> GothAlice, have you ever had an object override itself? e.g. let's say you have a web admin. Page 1 has an object and page 2 has the same object. Page 2 updates. Will page 1's update override page 2's? Similar to a race condition with "stale" data.
[21:08:56] <greyTEO> this would be if the user has 2 tabs open at the same time... all hypothetical of course.
[21:09:19] <GothAlice> "Last to save wins." is a typical approach.
[21:09:46] <GothAlice> Github, on the wiki portion of projects, will alert a user in the process of editing something that someone else managed to hit save first, and leaves resolving the conflict to the user.
[21:10:30] <greyTEO> but this isn't something Mongo would catch per se
[21:10:33] <GothAlice> The other extreme from "whoever saves last wins" is to have every field fully live. Someone changes something in one place, all other places are immediately updated, regardless of manually refreshing.
[21:11:11] <greyTEO> That is what I thought. I was wondering if there was some magical method for merging docs...lol
[21:11:54] <greyTEO> ok thanks. I can live with that.
[21:12:01] <GothAlice> Versioning would be a simple approach, and would leverage update-if-not-different queries.
[21:12:58] <GothAlice> I.e. user A and B load the form, each get version 1. User A presses save, db.foo.update({_id: …, version: 1}, …). User B presses save, the update can't find the record because the version is different, and user B is presented with the updated data, with their changes applied, and a message that someone else got to it first.
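That scheme as a mongo-shell sketch (field names are illustrative):

```
// the update matches only if nobody bumped the version since we loaded it
var res = db.foo.update(
  { _id: docId, version: 1 },
  { $set: { title: 'edited title' }, $inc: { version: 1 } }
);
if (res.nMatched === 0) {
  // someone else saved first: reload, merge, and let the user retry
}
```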
[21:14:36] <greyTEO> Yea that would be a fairly simple implementation. For my case, last to save wins works.
[21:14:50] <GothAlice> It's usually "good enough" for most apps. ;)
[21:15:01] <GothAlice> It's an edge case to handle, not usually a regular thing.
[21:15:02] <greyTEO> it would be User A overriding User A, going out of his way to be a PITA
[22:42:37] <jecran> Hi guys. Using express, I am just trying to do a simple 'post' but I can't access my data. If I do a 'get' everything is there, but I need post lol. app.post('/login', function(req, res){}); This is the line that handles the post. I make it inside this function but can't retrieve my data. Any ideas on what I'm doing wrong?
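A common culprit: GET parameters arrive on req.query for free, but POST bodies need parsing middleware before req.body is populated. A hypothetical sketch, assuming Express 4 with body-parser:

```
var bodyParser = require('body-parser');
app.use(bodyParser.urlencoded({ extended: false }));  // parse form posts
app.use(bodyParser.json());                           // parse JSON posts

app.post('/login', function (req, res) {
  console.log(req.body.name);  // form fields arrive on req.body
  res.send('ok');
});
```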