PMXBOT Log file Viewer


#mongodb logs for Tuesday the 25th of November, 2014

[00:12:09] <jumpman> dumb question, but how do I set a var to the results of a .find?
[00:12:31] <jumpman> right now var someVariable = collection.find(query) & console.log(someVariable) outputs what appears to be the entire mongo object xD
[00:13:04] <GothAlice> jumpman: That's because the result of .find() is a queryset cursor, not the record.
[00:13:20] <GothAlice> jumpman: You'll want to run .next() on that cursor and log *that*.
[00:13:21] <GothAlice> :)
[00:13:33] <GothAlice> (Potentially repeatedly.)
[00:14:22] <jumpman> I see! thanks :3
[00:26:25] <wsmoak> jumpman: in javascript you can also try putting .toArray after the find, and then it will be [ {…}, {…} ]
[00:26:52] <jumpman> I tried that and it was giving me an Object [Object object] has no method .toArray
[00:26:54] <jumpman> error
[00:26:59] <jumpman> :9
[00:27:01] <jumpman> *:(
[00:28:21] <GothAlice> .forEach(function(r){console.log(r);}) ?
[00:29:16] <wsmoak> .toArray(function(err,items){ … items is the array } but nvm… logging that just says Object. it was ruby that behaved better. :)
[00:30:07] <GothAlice> Not really.
[00:30:10] <jumpman> wsmoak, you're probably on to something correct as I found a stackoverflow post that suggested that as well
[00:30:49] <GothAlice> wsmoak: Ref: https://www.youtube.com/watch?v=Othc45WPBhA
[00:31:18] <GothAlice> (A short 4 minute video I'd recommend for anyone who likes JS or Ruby.)
[00:32:37] <jumpman> GothAlice, awesome!! forEach worked
[00:32:45] <GothAlice> jumpman: It never hurts to help. :)
[00:33:14] <wsmoak> jumpman: possibly where I found it! just looking for the javascript equivalent of .inspect … would be nice to dump the entire array without iterating
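
A minimal mongo shell sketch of the three approaches discussed above (.next(), .forEach(), .toArray()); the collection and query are illustrative, and note that in the Node.js driver .toArray() takes a callback rather than returning the array directly:

    // iterate the cursor one document at a time
    var cursor = db.things.find({status: "active"});
    while (cursor.hasNext()) {
        printjson(cursor.next());
    }

    // or let the cursor drive the iteration
    db.things.find({status: "active"}).forEach(function (doc) {
        printjson(doc);
    });

    // or pull the whole result set into an array (fine for small results)
    var docs = db.things.find({status: "active"}).toArray();
    print(docs.length);
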
[00:47:07] <BuildRPMPackage> Hello.. I am trying to package mongodb rpm using the mongo/buildscripts/packager.py
[00:47:17] <BuildRPMPackage> I get this error
[00:47:25] <BuildRPMPackage> http://pastie.org/9741506
[00:48:50] <joannac> BuildRPMPackage: did you check the source exists?
[00:49:11] <BuildRPMPackage> Can someone help me work around the debian man page installs when all I need is the rpm?
[00:49:23] <BuildRPMPackage> yep.. I am in the source directory
[00:50:26] <BuildRPMPackage> I had modified the packager.py slightly to exclude non-redhat distros
[00:51:55] <BuildRPMPackage> But it looks like for the man pages it still goes for the debian directory... can someone guide me how to circumvent this
[00:56:50] <BuildRPMPackage> http://pastie.org/9741506
[00:57:39] <BuildRPMPackage> joannac: could you help? I am building this on my CentOS machine
[01:02:05] <jumpman> shoot... no good. var cusor = EmailInterceptor.collection.find ( { $query: { user_id: User._id }, $orderby: { timestamp: -1 } });
[01:02:14] <jumpman> that's the "cusor", right? xD
[01:02:20] <jumpman> so I should be able to call .next() on it?
[01:02:59] <BuildRPMPackage> jumpman, you should be able to!
[01:03:02] <BuildRPMPackage> what do you see?
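
For comparison, a mongo-shell-style sketch of the same query using the cursor's .sort() modifier instead of the legacy $query/$orderby wrapper (EmailInterceptor.collection is taken from the message above; the exact cursor API depends on the driver/environment in use):

    var cursor = EmailInterceptor.collection
        .find({user_id: User._id})      // plain filter, no $query wrapper
        .sort({timestamp: -1});         // newest first, replaces $orderby

    if (cursor.hasNext()) {
        printjson(cursor.next());       // first matching document
    }
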
[01:03:07] <joannac> BuildRPMPackage: sorry, packaging is not something I know that much about
[01:04:06] <BuildRPMPackage> joannac: sigh. All my hopes dashed...!!! :) any rpm packaging experts in the room?
[01:06:08] <cheeser> probably in ##redhat or ##centos
[01:07:59] <BuildRPMPackage> cheeser, I was hoping to find some fellow mongodb packagers, the ##centos chaps will just throw the packager.py script at me and tell me to ask the writer of that script
[01:08:21] <cheeser> ah
[01:08:30] <Boomtime> unfortunately, that is you now
[01:08:41] <cheeser> why not use the provided RPMs?
[01:10:24] <BuildRPMPackage> I want to build my own for AGPL licensing purposes
[01:10:53] <cheeser> for what now?
[01:13:27] <BuildRPMPackage> I was told that the rpms provided are only to be used if I have some sort of a support agreement with mongodb Inc. If not, we are supposed to build our own..
[01:13:47] <BuildRPMPackage> I got the binaries built, but packaging is a beeeeeeeecccchhhh!
[01:14:04] <cheeser> uh. i'm not sure that's true at all.
[01:14:36] <joannac> I'm almost certain that's not true
[01:14:41] <cheeser> http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/
[01:14:49] <BuildRPMPackage> cheeser!!! - I wish I can tell that to my legal department
[01:15:05] <cheeser> i'm not sure who told you that but i've never heard that coming from mongodb
[01:16:01] <BuildRPMPackage> yeah.. I have been using the standard yum repo for years now...
[01:16:57] <BuildRPMPackage> but this year, I was told by legal to use the binaries compiled under AGPL, and seemingly the standard yum repo does not have those
[01:18:36] <cheeser> it's not like the kernel team erects an umbrella over the build machine with AGPL written on it.
[01:18:52] <cheeser> the AGPL is an agreement between you and mongodb
[01:20:05] <joannac> running `yum info` shows "License: AGPL 3.0"
[01:22:24] <BuildRPMPackage> joannac,cheeser, you can't see me, but I am dancing here!!! :) I was stupid to believe legal and not do my due diligence!
[01:22:37] <BuildRPMPackage> you are right, it's already under that
[01:22:58] <BuildRPMPackage> I am going to talk to Legal and see if I can get past without pulling my virtual teeth!
[01:23:35] <cheeser> take a long look at those "diplomas" on their office walls while you're in there. ;)
[01:26:40] <BuildRPMPackage> ;)
[01:39:11] <wsmoak> jumpman: I win. :) console.log("Items: %j", items);
[01:39:56] <b1205> I am running a lot of queries against MongoDB through an ORM - How can I monitor/log connection open/close events?
[04:27:02] <r01010010> hi
[04:27:47] <r01010010> i'm using the node.js driver for mongodb, and making inserts in series using async.eachSeries, but the inserts are not in order.... do you know why?
[04:28:12] <r01010010> here is the code..
[04:28:13] <r01010010> https://gist.github.com/r01010010/af8eeb39023ce1cc04f6
[04:38:28] <joannac> r01010010: did you forget that async = asynchronous = no order guaranteed?
[04:39:08] <r01010010> async is the name of the library; eachSeries is supposed to make the iteration synchronous
[04:46:17] <joannac> oh i see
[04:47:05] <joannac> well, figure out if it's an async problem or a mongodb problem
[04:47:25] <joannac> and once you've established if it's actually a mongodb problem, come back with the results of your testing
[04:50:05] <joannac> it works for me, i just tested
[04:50:08] <joannac> r01010010: ^^
[04:51:35] <r01010010> that's the problem, i'm not sure where exactly the problem is
[04:52:29] <r01010010> but it seemed to me as if mongodb returned something before actually completing the insert
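
For what it's worth, eachSeries only preserves order if the per-item callback fires from inside the insert's own callback; a sketch under that assumption (connection string and data are placeholders):

    var async = require('async');
    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
        if (err) throw err;
        var collection = db.collection('items');
        var docs = [{n: 1}, {n: 2}, {n: 3}];   // insertion order we want preserved

        async.eachSeries(docs, function (doc, done) {
            // only signal completion once mongodb acknowledges this insert;
            // calling done() any earlier lets the next iteration start too soon
            collection.insert(doc, {w: 1}, function (err) {
                done(err);
            });
        }, function (err) {
            if (err) console.error('insert failed:', err);
            db.close();
        });
    });
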
[05:16:26] <b1205> q
[05:16:27] <b1205> exit
[05:46:24] <winem_> good morning, I have a question to see if I understood it correctly: TTL requires an index, and the field with the timestamp can only have exactly one index. so it's not possible to create an index on the timestamp plus a 2nd compound index on 3 columns including the column with the timestamp, so the TTL is not possible. so we could not use this field in a compound index we would like to use as shard key
[05:47:27] <winem_> but we can use the expireAt, which is handled as a separate field by mongodb and requires a specific date when the document expires. so the column with the timestamp can be used for a compound index... and this might be used as shard key, right?
[05:48:14] <winem_> I'm talking about a timestamp set by the clients which are mostly mobile devices. not about the _id
[05:50:29] <joannac> why in the world would you want to use that as a shard key?
[05:51:57] <joannac> but yes, I think you could create a TTL index on the "expireAt" field
[05:52:09] <joannac> and also create another compound index with the "expireAt" field
[05:52:20] <winem_> it's the current plan of our devs and not yet final. the timestamp is the part we're not sure about.
[05:53:03] <winem_> let me check if I have the latest json and paste it as an example. any help is really appreciated, because we're just starting to use mongoDB
[05:59:11] <winem_> we will use it to store tracking data sent by mobile devices. http://pastie.org/private/ckhrzhsw5w4o5fxzlgua this is an example data set
[06:00:13] <winem_> the first one will occur once per session and the following documents will only include the session identifier
[06:00:53] <winem_> most queries will use the mandant, appID, userguid and the timestamp
[06:02:33] <winem_> and we have customers running thousands of devices and others which have less than 50. so we thought that the timestamp might be helpful to avoid too-large chunks due to its cardinality
[06:02:51] <winem_> customers = mandant
[06:03:08] <winem_> please take a look at the example and let me know what you think and recommend
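
A sketch of what joannac describes, using the field names from the example above (the collection name is made up):

    // TTL index: documents are removed once the clock passes their expireAt value
    db.tracking.createIndex({expireAt: 1}, {expireAfterSeconds: 0});

    // separate compound index covering the common query pattern
    // (mandant, appID, userguid, timestamp); a prefix of this could
    // also be evaluated as a shard key
    db.tracking.createIndex({mandant: 1, appID: 1, userguid: 1, timestamp: 1});
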
[08:39:38] <jsjc> I am a newbie and I am breaking my head over this weird issue… I am using Pymongo and trying to iterate a large collection of a bit over 1M documents.. when it gets to around 840K suddenly the iteration stops/halts and does not continue… what could be the issue??
[08:43:18] <joannac> cursor timeout? how quickly are you iterating?
[08:44:24] <jsjc> oh maybe a reason...
[08:44:35] <jsjc> I am iterating quite quickly but it's large so maybe not enough..
[08:44:41] <jsjc> let me check on the internet how to sort that
[08:47:03] <jsjc> joannac: I have tested with timeout=False… and same issue
[08:47:09] <jsjc> that is very odd!
[08:50:44] <jsjc> it takes around 30 seconds to come to a halt
[08:57:58] <jsjc> driving me nuts… cannot find the reason hehe
[09:49:08] <brakafor> is it normal for a mongo driver to return a string type for a reference? I have a reference in a document that if I view from the mongo shell is ObjectId("... however when I retrieve the same object in node its of type String
[09:53:16] <oznt> hi everyone, I was wondering: how do you monitor sharding and replica logs? Should I collect all my logs with a tool like rsyslog? or is it enough to look at the primary's log?
[10:14:08] <PirosB3> hi all
[10:14:19] <PirosB3> mongostat and mongotop show 0 usage of my MongoDB
[10:14:35] <PirosB3> still, it’s 200% CPU usage, and it’s 3GB of RAM
[10:14:42] <PirosB3> how can I find out what is taking so much time
[10:14:52] <PirosB3> and is making every query 10 seconds slow?
[10:33:43] <ssarah> Hei guys :)
[10:34:54] <ssarah> if i do db.foo.update(_id: ..., voters: { $ne: 'joe'}, etc
[10:35:28] <ssarah> and voters is an array, does $ne check whether or not the array contains the value, or does it compare against the whole array for strict equality?
[10:38:26] <ssarah> ah.. that would be $nin
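
To make that concrete: when voters is an array, {voters: {$ne: "joe"}} matches documents in which no element equals "joe", which is what makes the classic vote-once update work (a sketch; postId is a placeholder):

    // only matches if "joe" is not already in the voters array,
    // so the push/inc pair runs at most once per voter
    db.foo.update(
        {_id: postId, voters: {$ne: "joe"}},
        {$push: {voters: "joe"}, $inc: {votes: 1}}
    );
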
[10:51:49] <aagren> hello - is it possible to merge e.g. a 2-shard cluster into a single standalone by running db.copyDatabase 2 times on the standalone, once with each individual shard as source?
[12:03:19] <gargrag> Hello everyone,
[12:03:45] <gargrag> can somebody help me find out why 96% of one collection is in memory (mongomem)
[12:03:59] <gargrag> i was reading about how mongodb uses memory,
[12:04:17] <gargrag> but i still seem to have non-linear ram consumption
[12:29:29] <brakafor> is there any way to check if a bulk operation has any operations to execute before executing?
[12:29:53] <brakafor> ie to avoid throwing an error
[12:32:33] <Derick> which driver?
[12:33:04] <brakafor> node.js
[12:33:20] <Derick> In that case, I don't know :S
[12:35:39] <remonvv> \o
[12:46:55] <brakafor> try catch it is!
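
One way to avoid the empty-execute error without try/catch is to count the queued operations yourself before calling execute(); a sketch against the Node.js bulk API (collection, docs and the dirty flag are illustrative):

    var bulk = collection.initializeUnorderedBulkOp();
    var pending = 0;

    docs.forEach(function (doc) {
        if (doc.dirty) {               // whatever condition decides whether an op is queued
            bulk.find({_id: doc._id}).updateOne({$set: {dirty: false}});
            pending++;
        }
    });

    if (pending > 0) {                 // execute() errors if nothing was queued
        bulk.execute(function (err, result) {
            if (err) console.error(err);
        });
    }
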
[13:08:35] <vineetdaniel> how can we take backup of just config servers ?
[13:18:53] <kenITR> I am trying to add elements to 2 embedded documents deep in a single doc. I know the index of each subdoc and I use a positional to find the index of the sub-sub doc. This works in the shell, but when I go thru php (for $i; $i < $ln; $++ ) it adds the first one OK, but it creates a new array in the second. Is it possible to do this?
[13:20:50] <kenITR> (ignore the typo. I was just indicating that I use a for loop to do this)
[13:28:59] <remonvv> kenITR: Can't help you with the PHP but having arrays more than one level deep in your mongo documents is always going to be tricky for a variety of reasons. I suggest you adjust your schema.
[13:29:47] <kenITR> ouch. It might as well be relational.
[13:31:59] <kenITR> remonvv: but maybe I can get rid of one level. Thanks.
[13:32:57] <wsmoak> kenITR: post a sample document and some code if you want people to try it
[13:32:58] <remonvv> Not sure it has much to do with relationality. MongoDB's query language still contains a "flaw" that disallows more than one positional operator in a key. That, and conceptually multiple nesting in documents usually results in large documents that are ever growing and thus need to be moved around a lot.
[13:38:17] <kenITR> remonvv: I only use one .$. The other I know, so it is doc.section.1.page.$.paragraph then doc.section.2.page.$.paragraph As I say it works in the shell but seems to get mixed up in the for loop.
[14:00:44] <kenITR> wsmoak: OK I pasted the code. http://pastie.org/9742503 On the first loop it creates the element in the right place, on the second it creates an array containing the element in the right place.
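
For reference, the path kenITR describes only allows the positional $ at one level, so the outer section index has to be a literal; a mongo shell sketch (the document layout and the page "id" field are guesses based on the path above):

    // section index (1) is hard-coded; only the page level uses the positional $
    db.docs.update(
        {_id: docId, "section.1.page.id": pageId},
        {$push: {"section.1.page.$.paragraph": {text: "new paragraph"}}}
    );
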
[14:23:36] <NoReflex> hello! I'm looking into the sharding process of MongoDB. I can't find whether the chunkSize also includes the indexes for the actual data. I need to know whether this is true so I can estimate if the chosen shard key will distribute the data well.
[14:25:45] <NoReflex> also when using stats on a collection I have size and storageSize; which one is considered by the balancer process when comparing with the chunkSize option?
[14:59:21] <d-snp> hi, I get this error: ReplicaSetMonitor no master found for set: main_shard1, but if I connect to main_shard1 and do rs.status(), it thinks it's the master and everything seems fine
[14:59:45] <d-snp> I only get the exception when my service tries to write data, reading seems to go fine
[15:06:48] <d-snp> db.printShardingStatus gave "socket exception [CONNECT_ERROR] for main_shard1/db1:28010,db2:28011"
[15:07:21] <d-snp> telnetting those addresses worked, so I gave up and just restarted the mongos service, and that fixed it -_-
[15:08:25] <d-snp> I suspect it has something to do with some network reconfiguration I did earlier, but no idea why it would keep failing when the reconfiguration was hours ago and no ip addresses or hostnames changed..
[15:49:18] <elatomo> Hi! I would like to use the aggregation framework in a similar way to the SQL `select * ... group by f1,f2`. That means grouping and retrieving all fields (without manually specifying each of them)
[15:49:40] <elatomo> For example: `db.test.aggregate([ { $group: { _id: {f1: "$f1", f2: "$f2"}, f4: { $first: "$f4" },..., fN: { $first: "$fN" }, f3: { $addToSet: "$f3" }}} ])`
[15:52:53] <GothAlice> elatomo: db.example.aggregate([{$group: {_id: {f1: "$f1", f2: "$f2"}, subresults: {$push: "$$ROOT"}}}])
[15:53:15] <GothAlice> Because you are grouping, there may be multiple records with a matching f1/f2 pair.
[15:53:30] <GothAlice> This will create a list of matching documents for the group, for each group. :)
[15:54:00] <elatomo> GothAlice: thanks a lot! I'll check it out :)
[15:54:13] <GothAlice> http://try.mongodb.com/ and plug in: db.example.insert({f1: "foo", f2: "bar", f3: "baz", f4: "diz"}); db.example.aggregate([{$group: {_id: {f1: "$f1", f2: "$f2"}, subresults: {$push: "$$ROOT"}}}])
[15:54:14] <GothAlice> :)
[15:55:41] <elatomo> try.mongodb.com seems to redirect to mongodb.com, I'll test it on my box. Thanks again :)
[15:55:50] <GothAlice> Er, mongodb.org, sorry.
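
A quick illustration of what that $push of $$ROOT produces, assuming two documents share the same f1/f2 pair:

    db.example.insert({f1: "foo", f2: "bar", f3: "baz"});
    db.example.insert({f1: "foo", f2: "bar", f3: "qux"});

    db.example.aggregate([
        {$group: {_id: {f1: "$f1", f2: "$f2"}, subresults: {$push: "$$ROOT"}}}
    ]);
    // one group per f1/f2 pair, each carrying the complete original documents:
    // { "_id": {"f1": "foo", "f2": "bar"},
    //   "subresults": [ {"_id": ObjectId(...), "f1": "foo", "f2": "bar", "f3": "baz"},
    //                   {"_id": ObjectId(...), "f1": "foo", "f2": "bar", "f3": "qux"} ] }
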
[16:07:29] <hazmat> so reading through this http://www.zdnet.com/mongodb-cto-how-our-new-wiredtiger-storage-engine-will-earn-its-stripes-7000036047/
[16:07:43] <hazmat> it reads like 2.8 w/ wiredtiger has multi document transactions
[16:08:03] <hazmat> is that accurate.. i'm not seeing it in the storage interface
[16:20:09] <GothAlice> Ugh, cloning mongo always takes so long. T_T
[16:20:41] <GothAlice> 151MB later…
[16:21:43] <GothAlice> hazmat: wiredtiger_recovery_unit.cpp in db/storage/wiredtiger lines 199 (commit) 206 (rollback) and 216 (begin) transaction. :)
[16:23:48] <GothAlice> It would appear that the _abort() process in WiredTigerRecoveryUnit unwinds all recorded changes; this would naturally be multi-document if multiple document changes are recorded in the same iterator.
[16:24:59] <hazmat> GothAlice, backing up two dirs.. mostly i'm trying to find how thats exposed to apps
[16:25:44] <hazmat> ie. tokumx does a beginTxn/endTxn.. what's the equiv for apps with mongo..
[16:27:34] <hazmat> the unit of work looks like its tied to an operation context.. ie wrapped up as internal implementation detail of the existing interface instead of exposed as an app/api op
[16:28:05] <GothAlice> _oplogSetStartHack — lol, I love code spelunking
[16:28:18] <elatomo> GothAlice: the aggregation works fantastically, I just used $first instead of $push... Thanks again :)
[16:29:46] <hazmat> GothAlice, yeah.. its not confidence inspiring ;-)
[16:32:23] <GothAlice> The recovery unit code is used primarily within the wiredtiger_record_store and index implementations, exposed as a per-transaction session, with the low-level commit/abort behaviour wrapped around atomic operations like compact, etc. The transaction is being passed in as an OperationContext. A new OperationContext is constructed for each message received, in db.cpp.
[16:33:46] <GothAlice> (There's also what appears to be a light-weight OperationContextImpl factory in the GlobalEnvironmentMongoD class.)
[16:34:44] <GothAlice> (This factory-provided context is used in range delete operations… and that's about it, actually.)
[16:36:31] <hazmat> GothAlice, it mostly appears to be an internal impl detail instead of user exposed (ie the zdnet article is marketing bs). ie. unit of work commits are driving oplogs writes or wiredtiger txns
[16:37:02] <GothAlice> hazmat: Have you examined the test suite? (And yeah, that's what it looks like with my brief, unstudied examination of the CPP code. I don't CPP, so I may be missing things.)
[16:37:27] <GothAlice> (I.e. rollbacktests)
[16:37:43] <hazmat> GothAlice, thanks for dumpster diving with me.. i'll peek at the tests
[16:38:28] <GothAlice> Seems completely obscured by the per-message operation context.
[16:38:55] <GothAlice> OTOH, for bulk operations, this may imply that if one operation fails, all will be rolled back. It's worthy of testing.
[16:41:05] <GothAlice> From the article, "disk compression" — balls. My hand-rolled xz compression and the storage engine compression may end up duking it out. T_T
[17:42:13] <pbunny> hi, can i use mongo-cxx-driver to decode raw mongo protocol?
[17:42:23] <pbunny> i need to code an intermediate proxy
[17:42:45] <pbunny> that must decode mongo request, parse/modify it and send updated request to actual server
[17:49:08] <freeone3000> Hey. I'm doing the following aggregate query, and it's giving me the error "exception: aggregation result exceeds maximum document size". https://gist.github.com/freeone3000/4ce74ce29973f593b1c4 is my aggregation query. How can I narrow it down further than $project?
[17:51:10] <GothAlice> freeone3000: First, those multiple match stages are kinda weird.
[17:51:26] <GothAlice> freeone3000: I'd strongly recommend merging those together into the *first* $match.
[17:51:37] <freeone3000> GothAlice: That makes the query take much longer.
[17:51:46] <freeone3000> (source and dateAdded are indexed; requestInfo is not)
[17:52:05] <GothAlice> freeone3000: You'd need (and want) an index on requestInfo.userId anyway, for this.
[17:52:20] <Ryanaz> The download links are broken for me for downloading Mongo. Is there an alternative download location?
[17:52:33] <GothAlice> A single compound index on (source, dateAdded, requestInfo.userId) will cover this query, and all queries on just source/dateAdded, and all queries on just source.
[17:53:42] <GothAlice> freeone3000: However, you also want to use $out as your last processing stage, to write the results to a real collection for retrieval, since the entire result of that aggregation exceeds the size limit of a single document (16 MB).
[17:54:27] <freeone3000> GothAlice: Okay, so use $out, then after the aggregation query, make a second query to retrieve the actual data, then a third to delete the temporary collection?
[17:54:36] <GothAlice> Aye.
[17:54:48] <freeone3000> Alright. Thanks.
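
Putting the two suggestions together, a rough mongo shell sketch (the collection names, filter values and pipeline stand in for the ones in the gist):

    // compound index covering source, dateAdded and requestInfo.userId lookups
    db.events.createIndex({source: 1, dateAdded: 1, "requestInfo.userId": 1});

    // write the aggregation result to a scratch collection instead of returning
    // it inline, sidestepping the 16 MB limit on the response document
    db.events.aggregate([
        {$match: {source: "web", dateAdded: {$gte: ISODate("2014-11-01")}}},
        {$project: {"requestInfo.userId": 1, dateAdded: 1}},
        {$out: "agg_scratch"}
    ]);

    // then read the results back and clean up
    db.agg_scratch.find();
    db.agg_scratch.drop();
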
[17:55:25] <GothAlice> This, BTW, is exactly why I use pre-aggregated statistics. It reduces the data granularity from microsecond to per-hour, but I have to process substantially less data. (Two-week compound comparison charts take ~40millis to query and generate the JSON data to emit over HTTP.)
[17:56:34] <freeone3000> We have an SLA for response time - we'd have to pre-aggregate as a batch process anyway.
[17:56:50] <GothAlice> O_o Why not realtime pre-aggregation?
[17:57:17] <freeone3000> 'cause that'd put us over our 50ms response time.
[17:57:35] <GothAlice> (We record both literal click data, one document per click, as well as hand that record off to the aggregator to chop up and upset; delay between hit track and showing up in aggregated reports is ~200ms.)
[17:58:00] <GothAlice> (But it adds no delay to the request, which gets 302'd over to the final destination. We're like bit.ly at work…)
[17:58:19] <GothAlice> s/upset/upsert — right, disabling auto-correct.
[17:58:55] <freeone3000> Ah, right. Run it on a separate server. We can provision that and work into the new version, yeah. Product launched two weeks ago and I know we need to have analytics today.
[17:59:10] <freeone3000> (PREVIOUS solution was a real-time scrolling analysis which is what they asked for but apparently not what they wanted.)
[17:59:21] <GothAlice> Hit documents get inserted into a capped collection for chewing on by other processes that "tail" the capped collection for new data.
[18:01:28] <GothAlice> Logging of the full request, session data, slowms database queries for that request, and complete response adds only a few milliseconds to each of our requests. (The full-scale logging of _everything_ has a w=-1 write concern and is used for live support—we can watch our users in realtime—as well as replay in development for error reproduction.)
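
The capped-collection hand-off GothAlice describes can be set up like this (names and sizes are arbitrary); consumer processes then open a tailable cursor on the collection to pick up new hits as they arrive:

    // fixed-size collection that preserves insertion order and overwrites
    // the oldest documents once the size cap is reached
    db.createCollection("hits", {capped: true, size: 64 * 1024 * 1024});

    // producers just insert; the write adds almost nothing to request latency
    db.hits.insert({url: "/some/path", ts: new Date(), session: "abc123"});
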
[19:13:32] <freeone3000> GothAlice: https://gist.github.com/freeone3000/2b392eed520eeeccb083 gives me a collection without any data in it.
[19:20:26] <freeone3000> How can I wait on the result of an aggregation with $out?
[19:22:38] <GothAlice> freeone3000: From: http://docs.mongodb.org/v2.6/reference/operator/aggregation/out/#pipe._S_out "The collection is not visible until the aggregation completes."
[19:22:58] <GothAlice> Polling the list of collection names in the DB would seem to be the only way.
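
A sketch of that polling approach from another shell session, reusing the agg_scratch placeholder name from the earlier example:

    // the $out target only becomes visible once the aggregation completes
    while (db.getCollectionNames().indexOf("agg_scratch") === -1) {
        sleep(100);   // mongo shell helper, milliseconds
    }
    db.agg_scratch.find();
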
[19:25:29] <shoerain> is there a way to usefully view JSON on a website? Doesn't look like <table> elems. Being able to collapse/expand subfields would be pretty nifty, too.
[19:38:15] <freeone3000> http://mongodb.github.io/node-mongodb-native/2.0/ seems questionably-useful - I can't get to the documentation. Anyone have ideas?
[20:51:03] <Derek56> Hi all! Was hoping someone might be able to fix a bit of an issue I'm having. I have a Collection with about 700k documents in it, and I'm processing each document down in Node. When I get down to item 200,000 however, my cursor gets really really slow. Instead of getting through the 2000 or so documents a second I was doing previously, it gets to speeds like 30 a second. If I disable all the processing being done and simply let the cursor loop through, it still tends to get slow around 200k. Anyone ever have this problem before?
[21:21:30] <newbsduser> hello, is there a setting on mongodb instances for max memory usage
[21:21:36] <newbsduser> i want to define a limit
[21:23:21] <newbsduser> i want to limit cache size
[21:32:09] <blizzow1> http://docs.mongodb.org/manual/core/replica-set-arbiter/ says: "Only add an arbiter to sets with even numbers of members."
[21:32:10] <blizzow1> This seems wrong to me. I should add an arbiter to a replica set with an odd number of members so if a member dies, a voting tie cannot happen.
[21:36:41] <thevdude> you're basically saying "if it becomes even i should have an arbiter"
[21:37:59] <GothAlice> Having extra arbiters is generally a non-issue, FYI. Voting rules pretty much guarantee it won't get to a tie under all but the most "end of the world" scenarios.
[21:47:48] <freeone3000> Is the mongodb-native-driver supplied by mongo or node? I'm trying to figure out how to get all collection names in a database.
[21:48:27] <GothAlice> The native driver is written and maintained by the MongoDB folks, AFAIK. However as for being "supplied", I'm not sure how you get it into a project.
[21:53:08] <freeone3000> Okay. how do you get all collections in a database with the mongodb-native-driver? I'm able to use it for other things, but neither listCollections() as on the API nor getCollectionNames() as on mongodb docs are defined methods on a #<Db> object.
[21:54:17] <GothAlice> freeone3000: In a mongo shell, run: db.getCollectionNames
[21:54:20] <GothAlice> (Note the lack of parenthesis.)
[21:54:32] <GothAlice> You can do this on almost any shell built-in to figure out how the shell goes about doing things.
[21:55:29] <GothAlice> (Yay for dynamic scripting languages! ;)
[21:58:27] <freeone3000> Ah. Awesome, thanks.
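
For the record, both interfaces exist but differ by version; a sketch of each (the Node.js call assumes the 2.0 driver, where listCollections() returns a cursor; older drivers exposed a different call):

    // mongo shell: returns an array of collection name strings
    db.getCollectionNames();

    // Node.js driver 2.0 (db is the Db object from MongoClient.connect)
    db.listCollections().toArray(function (err, collections) {
        if (err) throw err;
        collections.forEach(function (c) {
            console.log(c.name);
        });
    });
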
[22:29:07] <Krazypoloc> Hey guys
[22:29:30] <Boomtime> hello
[22:29:48] <Krazypoloc> I'm trying to determine a way to roll out a production server in the most "best practice" way possible
[22:30:27] <Krazypoloc> We are using SOLR, mongodb hosted by ObjectRocket, on several linux servers
[22:30:45] <Krazypoloc> Each server will have quite a few solr cores
[22:30:49] <Krazypoloc> ~30
[22:31:52] <Krazypoloc> I'm trying to find the best way to automate the data collection that is being pulled from ObjectRocket
[23:40:28] <wsmoak> anyone around using node? how do I use the ‘wtimeout’ option with a remove? http://mongodb.github.io/node-mongodb-native/api-generated/collection.html#remove
[23:41:20] <wsmoak> I tried items.remove({"_id": ObjectID(req.params.id)},{ "wtimeout": 5000 }, function(err,count) {… }
[23:41:53] <wsmoak> it says ‘combines with w option’ but I’m not sure what the values for ‘w’ mean...
[23:43:31] <Boomtime> wsmoak: does what you tried not work? that looks about right...
[23:46:00] <wsmoak> Boomtime: nope, not without also setting ‘w’ to something. I set it to -1.
[23:47:08] <Boomtime> setting w:-1 means wtimeout is meaningless
[23:47:23] <wsmoak> yeah… I just figured that out :)
[23:47:58] <wsmoak> without the ‘w’ though, it just hangs the same way it does without wtimeout. 5000 is 5 seconds right ?
[23:48:21] <Boomtime> so set w:1
[23:48:53] <Boomtime> you probably want w:majority - however, it sounds like something else is wrong
[23:49:39] <Boomtime> it sounds like you've asked for an enormous delete operation or the query part takes a very long time to complete
[23:50:50] <wsmoak> still hangs with w: 1 .. what’s “wrong” is that the database is not available. I need the call to remove or whatever to time out so my api does not just hang and never respond.
[23:51:34] <Boomtime> in what way is the "database not available" that would cause a client to hang?
[23:53:14] <wsmoak> client (angular app) does $http.delete(‘api/item/sldjfsdlk’)
[23:53:38] <wsmoak> node app https://github.com/wsmoak/geojson-example/blob/master/server.js in app.delete
[23:53:38] <Boomtime> you might just need a connection timeout
[23:53:41] <Boomtime> http://docs.mongodb.org/manual/reference/connection-string/#uri.connectTimeoutMS
[23:54:11] <Boomtime> wtimeout is a server instruction, it tells the server not to spend more than that amount of time on the op
[23:54:23] <Boomtime> if the command is not getting to the server then wtimeout cannot possibly do anything
[23:56:05] <wsmoak> hmmm… blog says connect timeout is “This value is only used when making an initial connection to your database”
[23:56:23] <wsmoak> socket timeout I think
[23:57:35] <wsmoak> “The socket timeout option specifies to your driver how long to wait for responses from the server.”
[23:59:42] <wsmoak> thank you Boomtime
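
Pulling the thread together, a sketch of the two knobs discussed: a write concern with wtimeout on the remove itself, plus driver-level timeouts on the connection string (the URL, collection and id are placeholders standing in for the ones in wsmoak's server.js):

    var mongodb = require('mongodb');
    var MongoClient = mongodb.MongoClient;
    var ObjectID = mongodb.ObjectID;

    // socketTimeoutMS bounds how long the driver waits for any server response;
    // connectTimeoutMS only covers establishing the initial connection
    var url = 'mongodb://localhost:27017/test?connectTimeoutMS=5000&socketTimeoutMS=5000';

    MongoClient.connect(url, function (err, db) {
        if (err) throw err;
        var items = db.collection('items');
        var id = '547f9f0a2cd661c16b889999';   // stand-in for req.params.id from the route

        // w:1 asks for acknowledgement from the primary; wtimeout caps how long the
        // server waits for that write concern to be satisfied (5 seconds here)
        items.remove({_id: new ObjectID(id)}, {w: 1, wtimeout: 5000}, function (err, count) {
            if (err) console.error(err);
            else console.log('removed', count);
            db.close();
        });
    });
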