PMXBOT Log file Viewer


#mongodb logs for Tuesday the 10th of December, 2013

[00:08:28] <ericsaboia> cheeser: cause I want to bring all results, just ordered by the friends first
[00:09:02] <ericsaboia> so I'm trying to create the field "friends" to order by it
[00:18:14] <tkeith> If I use w=2, is there any good reason to set j=True?
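
[Note: w and j guard against different failures, so combining them can make sense: w waits for replication acknowledgements, j waits for the journal commit. A minimal sketch in later mongo-shell syntax; the collection and field names are made up:

    // w: 2     -> wait until two replica-set members have acknowledged the write
    // j: true  -> additionally wait until the write is in the on-disk journal,
    //             so a crash right after the acknowledgement cannot lose it
    db.events.insert({type: "click", ts: new Date()}, {writeConcern: {w: 2, j: true}})
]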
[05:54:55] <mark____> nohup python agent.py >> /YOUR_LOG_DIRECTORY/agent.log 2>&1 & ... I found that command in MongoDB Management Service, but I am unable to understand it. Can you please explain it a bit?
[08:24:14] <spicewiesel> hi all
[08:32:26] <dnsdds> Is there a way to suppress any errors that are produced?
[08:56:48] <dropindex> Hi guys. I have a question about dropIndex.
[08:57:20] <dropindex> does dropIndex work in the background, like ensureIndex's background option?
[08:59:01] <dropindex> I've googled a lot, but didn't find an answer.
[08:59:49] <dropindex> question again: does dropIndex() work in the background like ensureIndex?
[09:00:36] <dropindex> does the dropIndex() block all read/write operation?
[09:03:17] <dropindex> Warning This command obtains a write lock on the affected database and will block other operations until it has completed.
[09:03:31] <dropindex> OK. I see.
[09:04:21] <retran> now you know
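
[Note: for reference, the call being discussed; unlike ensureIndex there is no background option for dropping, and the warning dropindex quoted applies. Collection and index names here are examples:

    db.users.dropIndex("email_1")      // drop by index name
    db.users.dropIndex({email: 1})     // or by key pattern
    // both take a write lock on the database and block other operations until done
]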
[09:17:12] <dnsdds> Is there really no way to suppress errors? I'm doing a bulk insert with continueOnError, but it still returns the error, so my framework thinks it's a problem rather than intentional and okay
[09:20:51] <retran> your framework is poo
[09:22:46] <dnsdds> Arguably
[09:23:01] <retran> i'm just messin
[09:24:58] <dnsdds> But I mean, c'mon; there has to be some solution, right?
[09:25:40] <retran> you got me
[09:26:14] <retran> if i want to ignore errors, i would know how
[09:26:23] <retran> but i'm only familiar with PHP client
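
[Note: one common way to treat duplicate-key errors from a continueOnError bulk insert as non-fatal, sketched against the Node.js driver's callback API of that era; the collection variable, the done callback, and the exact error shape are assumptions:

    collection.insert(docs, {continueOnError: true, w: 1}, function (err, result) {
      if (err && err.code === 11000) {
        // duplicate keys were skipped intentionally, so report success
        return done(null, result);
      }
      done(err, result);
    });
]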
[09:37:26] <gmg85> hi guys? mongodb doesn't seem to be deleting documents immediately....what could be the problem?
[09:39:42] <Derick> do you use acknowledged writes?
[09:44:20] <tiller> Hi!
[09:44:45] <tiller> Can we ensure indexes like: "Make sure there is only one document having default: true"
[09:45:28] <Derick> tiller: no
[09:45:38] <tiller> m'okay, thanks!
[09:45:45] <Derick> you can make sure fields have a unique value though
[09:46:26] <tiller> yes but I just tried, and with a boolean you only have 3 values: true / false / null. I hoped that maybe the null value wouldn't be part of the unique field
[09:46:32] <tiller> s/field/index/
[09:46:55] <tiller> (when I mean null, I mean not specifying the field)
[09:47:27] <ron> there's a difference between a null value and an absent field.
[09:47:59] <tiller> ron> is there?
[09:48:20] <tiller> except that a null value would still return true for {field: {$exists: true}}
[09:48:55] <Derick> tiller: you can do that with a sparse index; (which doesn't include documents *without* the field)
[09:49:07] <Derick> but I am not sure you can combine that with a unique index
[09:49:34] <tiller> I'll try, thanks!
[09:50:43] <tiller> Derick> it works!
[09:50:51] <Derick> yay!
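
[Note: what tiller ended up with, roughly: a unique index that is also sparse only enforces uniqueness among documents that actually carry the field. 2013-era shell syntax; the collection name is made up:

    db.settings.ensureIndex({default: 1}, {unique: true, sparse: true})
    // documents without a "default" field are left out of the index entirely,
    // so at most one document can have default: true (and one default: false)
]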
[09:51:52] <mark____> Synchronous replication for session data means???
[09:52:28] <ecornips> Need some info on replica sets. I'm looking into the feasibility of running various secondary non-voting/read-only members across various data centres. Each can only see the Primary Mongos in the main data centre, and not other secondaries. Docs state each instance should see each other instance, but will it cause problems if two non-voting/read-only secondaries can't see each other on a permanent basis?
[09:52:50] <ecornips> (More specifically, this question: https://groups.google.com/forum/#!topic/mongodb-user/bimN-drQ_CM )
[09:53:14] <Derick> ecornips: all members need to be able to talk to all members
[09:53:38] <Derick> ecornips: you might want to look at master-slave replication instead...
[09:54:03] <ecornips> Derick: is there any particular reason why they need to talk to all members? Shouldn't reachability to the primary ones be the important factor?
[09:54:36] <ecornips> if you think about a simple DC1-DC2 replica set, if the link between the two goes down, won't the instances on both DCs still be queryable?
[09:54:50] <Derick> ecornips: that's not how a replicaset works or is designed. It's meant for failover, which requires communication between nodes. At least they need to know which other nodes exist, to figure out which one is the fastest to sync from
[09:55:05] <mark____> i think u miss my question
[09:55:31] <ecornips> Derick: ok, that's a shame. I was hoping to force them to sync from just a specific subset.
[09:55:33] <Derick> mark____: you didn't ask a question. You wrote a sentence with three question marks behind it really...
[09:56:05] <Derick> ecornips: as for "if you think about a simple DC1-DC2 replica set" —> yes
[09:56:11] <Derick> you can query both
[09:56:15] <Derick> but not write to any
[09:56:18] <ecornips> that's fine
[09:56:21] <Derick> (as you only have 2 nodes)
[09:56:22] <gmg85> Derick, acknowledged writes?
[09:56:54] <ecornips> it's a system that's caching data at the edges for faster access, with very low writes
[09:57:05] <bitinn> hi guys, appreciate some help on the user role required to run mongodump, details here: http://stackoverflow.com/questions/20490291/minimum-permission-for-using-mongodump-to-dump-a-specific-db
[09:57:12] <Derick> gmg85: mongodb supports acknowledged writes (ie, wait until the operation succeeded) and unacknowledged writes (where the operation immediately returns to the client)
[09:57:21] <mark____> @derick: http://www.severalnines.com/sites/default/files/docs/Severalnines_WP_Databases_Social_Gaming.pdf page 5 line no.4
[09:57:37] <Derick> in the latter case, the client (language driver really), can continue without all documents having been deleted already
[09:57:58] <Derick> mark____: can you please ask a simple question?
[09:58:34] <gmg85> Derick, aha...where do i configure that or are acknowledged writes the default?
[09:59:01] <mark____> I wasn't able to get the meaning of that line, whether it is synchronous or asynchronous
[09:59:10] <Derick> gmg85: in the connection string most often
[09:59:16] <Derick> gmg85: which language / driver do you use?
[09:59:33] <dnsdds> http://stackoverflow.com/questions/20491115/suppress-mongodb-unique-errors-on-node-js
[09:59:36] <ecornips> in the DC1-DC2 example, if DC1 has 2 instances and DC2 has 1 instance, and a split happens. DC1 will have a primary and a secondary, while DC2 will just have a secondary. In this case won't the secondary on DC1 still receive updates from the primary, as it's reachable?
[09:59:50] <Derick> ecornips: correct
[10:00:11] <gmg85> am using javascript nodejs
[10:00:17] <gmg85> native driver
[10:00:29] <tiller> If we use ensureIndex({field: "hashed"}); and that field is an array (field: [id1, id2, id3]) does the index work for each id, or only for the block of ids? (I'm not sure I'm very clear on my point)
[10:00:37] <Derick> mark____: "own memory bank, and will then batch-write the data back to the database server like" is on that line
[10:00:49] <Derick> gmg85: can you show the connection string?
[10:01:02] <ecornips> Derick: I'm struggling to see the technical constraint here then. If all secondaries can see the primary, they'll be able to sync even if it's perhaps not the most efficient path
[10:01:14] <Derick> tiller: for each id
[10:01:22] <tiller> thanks!
[10:01:59] <Derick> ecornips: yes, but there is no fixed primary in the replicaset concept. It's perfectly possible that at some point in the future (perhaps with a different configuration), the node in DC2 becomes primary
[10:02:47] <ecornips> Derick: agreed. In this case, I'd nominate a couple DC to have primary-capable instances (say 3 instances as per the example), and then have additional DCs with links to both of these primary-capable DCs
[10:02:52] <gmg85> Derick, not much of a connection string...this is what i have var db = mongodb.db('localhost', 27017, 'database_name');
[10:03:02] <Derick> gmg85: which driver is that?
[10:03:34] <ecornips> Derick: so all secondaries could see all primaries/voting-secondaries
[10:03:51] <Derick> ecornips: have you tried it? I am not sure if you can add a node that can't be seen by the majority... can't see why it shouldn't work besides getting lots of "cannot connect" log messages
[10:04:34] <ecornips> Derick: haven't tried it yet, that's the next step. Appreciate it's a bit of an annoying question, just trying to figure out any gotchas before I start building up 5-10 VMs ;)
[10:05:05] <Derick> :-)
[10:05:18] <Derick> ecornips: it is not something we'll recommend - but I do suggest you look at master-slave replication too
[10:05:39] <ecornips> Derick: thanks, will give it a look too
[10:05:44] <Derick> it's sort of deprecated, but it makes sense in a few cases (like this one)
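
[Note: for the kind of edge members ecornips describes, the usual replica-set configuration is priority 0 (never becomes primary) and votes 0 (takes no part in elections). A sketch; host names and array positions are placeholders:

    // add a new non-voting, non-electable member:
    rs.add({_id: 4, host: "dc2-edge-1:27017", priority: 0, votes: 0})

    // or change an existing member (index 4 in the members array is illustrative):
    cfg = rs.conf()
    cfg.members[4].priority = 0
    cfg.members[4].votes = 0
    rs.reconfig(cfg)
]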
[10:06:01] <gmg85> Derick, https://github.com/mongodb/node-mongodb-native
[10:06:02] <Derick> gmg85: ok, one sec
[10:06:35] <Derick> gmg85: and which version?
[10:07:56] <gmg85> Derick, Sorry...am using the mongodb-wrapper npm module...it uses the driver i pasted above...
[10:08:21] <Derick> gmg85: yes, but which version. Is it the latest?
[10:09:57] <gmg85> Derick, 1.03 its the latest version of mongodb-wrapper
[10:10:32] <gmg85> depends on mongoDb native 1.2.x
[10:10:40] <Derick> gmg85: you use the old "mongodb.db"
[10:10:56] <Derick> you should use what's on https://github.com/mongodb/node-mongodb-native/blob/master/docs/articles/MongoClient.md
[10:11:21] <gmg85> Derick, Found that the db method also accepts a connection string..
[10:11:45] <Derick> gmg85: the reason why I point you to MongoCLient, is because that has the interface that does acknowledged writes by default
[10:11:51] <Derick> and it's the new and updated client
[10:12:09] <Derick> gmg85: please read that page, and do as it says :)
[10:13:39] <gmg85> Derick, cool..thanks...let me give it a try
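
[Note: a minimal sketch of the MongoClient style Derick points to, which uses acknowledged writes (w: 1) by default; it assumes the 1.3-era callback API of node-mongodb-native, and the database and collection names are examples:

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/database_name', function (err, db) {
      if (err) throw err;
      db.collection('things').remove({done: true}, function (err, numRemoved) {
        // with acknowledged writes, this callback only fires once the server
        // has actually applied the delete
        console.log('removed %d documents', numRemoved);
        db.close();
      });
    });
]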
[10:31:45] <bitinn> Hi Derick, not sure if you got time to take a look at this question on stackoverflow http://stackoverflow.com/q/20490291/1677057 user role permission is kinda tricky.
[10:32:00] <Derick> bitinn: yeah, sorry, I've no idea
[10:33:09] <bitinn> no problem, one day my savior would come :)
[10:33:10] <movedx_> Is turning on write concern enough to gain data integrity?
[10:33:57] <Derick> movedx_: write concerns have little to do with integrity. And it depends on which ones you set too.
[10:34:39] <ArSn> "Error: 127.0.0.1:27017: invalid operator: $eq"
[10:34:45] <ArSn> was $eq removed or something? oO
[10:35:22] <ArSn> or doesn't it work with null?
[10:36:01] <Derick> ArSn: it never existed
[10:36:21] <ArSn> http://kwick.me/HSRv <-- what is this then?
[10:36:39] <Derick> that's for aggregation only.
[10:36:41] <Derick> not for normal queries
[10:37:00] <ArSn> ah I see
[10:37:09] <ArSn> and $ne and the other fellas exist as comparison operators too
[10:37:13] <ArSn> Derick: thanks for that hint
[10:37:29] <Derick> at some point, A/F will run with the normal query system too, but not yet
[10:37:45] <ArSn> I see
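
[Note: to restate the distinction: at the time of this log $eq existed only as an aggregation expression. In a normal find() equality is written directly, and null also matches documents where the field is missing. A quick sketch with a made-up collection:

    // normal query: equality is implicit
    db.things.find({status: null})          // matches status: null and missing status
    db.things.find({status: "active"})

    // aggregation expression form, where $eq does exist:
    db.things.aggregate([
      {$project: {statusIsNull: {$eq: ["$status", null]}}}
    ])
]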
[10:45:08] <movedx_> Derick: I'm just interested in gauranteed writes and data safety. I know how-to replicate and then shard, which is cool, but still, data actually being written is important.
[10:45:24] <movedx_> Derick: ACID compliance would be cool :D
[10:45:43] <Derick> you'd need a relational database for that
[10:46:02] <Derick> movedx_: journalling on for MongoDB is the most important one for "integrity"
[10:46:12] <Derick> but replication is always asynchronous
[10:46:22] <Derick> (but *can* be synchronous from a single client connection)
[10:46:44] <movedx_> So there is room for data loss before replication?
[10:47:08] <Derick> you need journalling on regardless
[10:47:19] <movedx_> OK cool
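
[Note: "journalling on" in the old INI-style config file, for reference; on 64-bit builds of that era it was already the default, so this line mostly makes the intent explicit. Per-write journal durability is then requested with the j: true write concern shown earlier.

    # /etc/mongod.conf
    journal = true
]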
[10:50:56] <movedx_> SO can it be argued that the main use case for Mongo, besides its NoSQL roots and scalability, is for data that isn't mission critical?
[10:51:31] <Derick> movedx_: you can argue about that :-)
[10:51:37] <kali> it can be argued, many angry blog posts do
[10:51:41] <Derick> but it's also used in mission critical places.
[10:51:43] <Derick> like banks
[10:54:19] <movedx_> Interesting. I reckon it's perfectly fine for almost any job. I just wish querying the damn thing was easier. I find the query and filter syntax, MapReduce, etc, a right pain to understand.
[10:58:39] <kali> i think that is highly subjective... from my experience pure beginners with no SQL background tend to struggle less with mongodb than SQL
[10:59:12] <kali> and are less likely to shoot themselves in the foot
[10:59:43] <Derick> movedx_: M/R is more of a pain than the Aggregation Framework though...
[11:01:01] <movedx_> aggregation is OK, I guess. I can get my head around it eventually.
[11:36:48] <Nodex> [10:56:54] <kali> i think that is highly subjective... from my experience pure beginners with no SQL background tend to struggle less with mongodb than SQL
[11:36:49] <Nodex> ++
[11:44:38] <gmg85> Derick, had nothing to do with mongodb...turns out nginx caches ajax requests? :@
[11:45:21] <gmg85> Derick, so when i deleted stuff..it was gone in mongodb but my ajax request returned it in the list..
[11:48:05] <Derick> gmg85: duh :-)
[11:48:11] <Nodex> gmg85 : nginx doesn't cache, your browser does
[11:48:35] <gmg85> based on headers it receives from nginx?
[11:48:38] <Nodex> + that particular request
[11:48:52] <Nodex> just add a timestamp to it.. or use a post
[13:30:40] <Nomikos> is there such a thing as a type-less search in MongoDB?
[13:30:53] <Nomikos> other than $or'ing it
[13:31:13] <Nomikos> mostly for integer vs string values
[13:32:10] <kali> $in : [42, "42"] is the best you'll get
[13:32:20] <kali> better cleanup your data :)
[13:32:31] <Nomikos> aye, fortunately it's still small
[13:32:33] <Nomikos> thanks :-)
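
[Note: kali's suggestion in full query form, with a made-up collection and field:

    // matches the value whether it was stored as a number or as a string
    db.items.find({code: {$in: [42, "42"]}})
]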
[13:32:36] <hanser> i have a collection that stores documents as like http://www.hastebin.com/xayujagiho.apache, how can i find document that contains given key ("DEF") and cpuid (3) for this structure ?
[13:33:30] <kali> hanser: what have you tried ?
[13:33:59] <hanser> kali: tbh, i didn't try, because i have no idea how can i query in arrays
[13:35:24] <kali> hanser: http://docs.mongodb.org/manual/tutorial/query-documents/
[13:37:27] <hanser> kali: tried , db.data.find( { key : "DEF", activator: { cpuid: 3 } } ) didn't work
[13:38:16] <hanser> kali: my problem is i have no idea how can i query in an array object
[13:38:25] <hanser> activator itself an array
[13:46:48] <hanser> kali: db.data.find( { 'key' : "DEF", 'activator.cpuid' : 3 } ) works, thanks
[13:51:09] <Nodex> amazing what reading docs does
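
[Note: since the hastebin paste is gone, a reconstruction of the shape being queried and the two usual forms; the extra "state" field in the $elemMatch example is hypothetical:

    // document shape, roughly: { key: "DEF", activator: [ { cpuid: 3, ... }, { cpuid: 7, ... } ] }

    // dot notation reaches into the array elements:
    db.data.find({key: "DEF", "activator.cpuid": 3})

    // $elemMatch is needed when several conditions must hold on the SAME array element:
    db.data.find({key: "DEF", activator: {$elemMatch: {cpuid: 3, state: "on"}}})
]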
[14:58:51] <ericsaboia1> hey guys, I need to sort users based on a list of ObjectIds, I'm trying to use aggregation with project to accomplish that
[14:58:57] <ericsaboia1> https://gist.github.com/ericsaboia/d6d04109569ce3f19181
[14:59:04] <ericsaboia1> But aggregation comparison operators do not accept "$in"
[14:59:09] <ericsaboia1> is there any way to do that without map/reduce?
[14:59:56] <Derick> ericsaboia1: not quite sure what you want to do
[15:00:20] <Derick> your $eq looks wrong
[15:01:31] <Derick> ericsaboia1: you can't use match operators ($eq) in $project
[15:02:00] <ericsaboia1> Derick: I wanna sort my activities based on users that are my friends first
[15:02:26] <ericsaboia1> so I'm trying to create the field "friend" to sort it
[15:02:52] <Derick> what are you sorting by?
[15:03:01] <ericsaboia1> aggregation comparison operators accept $eq :http://docs.mongodb.org/manual/reference/operator/aggregation/#comparison-operators
[15:03:17] <Derick> yes, but not in every operator
[15:03:51] <ericsaboia1> the problem is that it only compares exact matches
[15:04:07] <ericsaboia1> and I need to see if the user is inside a list
[15:04:23] <ericsaboia1> I have a list of users in my memory (friends of the logged user)
[15:04:50] <Derick> yes, but what are your sort criteria?
[15:04:51] <ericsaboia1> and I need to query a list of activities, bringing first activities from my friends
[15:05:08] <Derick> you probably should do that in your app
[15:05:13] <Derick> looks much easier to do
[15:05:38] <ericsaboia1> if the friend field works, I'll sort by it
[15:06:01] <ericsaboia1> I did that, the problem is that I have hundreds of activities
[15:06:29] <ericsaboia1> and I just want to bring the last 4
[15:06:48] <ericsaboia1> ordered by my friends first (if my friends did any of those)
[15:07:07] <ericsaboia1> So, to do that inside the app, I need to bring all activities, sort, and then slice
[15:07:42] <ericsaboia1> or I need to query twice, once just to find my friends' activities, and again to complete the data
[15:08:20] <ericsaboia1> this is taking longer than I would like
[15:08:44] <ericsaboia1> So I'm trying to do the job inside mongodb, bringing only 4 activities, ordered by my friends first
[15:09:01] <ericsaboia1> makes sense?
[15:09:06] <ericsaboia1> sorry for my bad english ;D
[15:09:37] <Derick> sorry, not following. I think your gist provides too little information.
[15:10:06] <ericsaboia1> ok.. let me try update it
[15:24:12] <ericsaboia1> Derick: see if u understand now please: https://gist.github.com/ericsaboia/d6d04109569ce3f19181#file-gistfile1-txt
[16:58:38] <retran> qwerty
[16:59:10] <ericsaboia1> Derick: did u see the link?
[16:59:14] <ericsaboia1> thanks for the help!
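
[Note: for what it's worth, on servers newer than the ones current in this log (the $in aggregation expression and $addFields arrived in 3.4), the "friends first, last 4" requirement can be pushed entirely into the pipeline. A sketch; friendIds is built by the application, and the field names are guesses based on the conversation:

    // friendIds: array of the logged-in user's friends' ObjectIds, built in the app
    db.activities.aggregate([
      {$addFields: {friend: {$in: ["$user_id", friendIds]}}},   // needs MongoDB 3.4+
      {$sort: {friend: -1, created_at: -1}},
      {$limit: 4}
    ])
]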
[18:48:48] <brockfredin> How do I connect to the mongo system dashboard?
[18:51:51] <cheeser> you mean mms?
[18:55:00] <brockfredin> cheeser Sure
[19:01:35] <cheeser> mms.mongodb.com ?
[19:14:38] <brockfredin> cheeser Thanks
[19:22:29] <includex> hey guys, any tip about this: TypeError: Cannot set property 'version' of undefined at src/mongo/shell/utils.js:1008? It happens when loading this sample.js: http://pastie.org/8542822
[19:22:30] <includex> ?
[19:39:17] <joannac> includex: works for me
[19:40:31] <joannac> What version?
[19:45:44] <tg2> anybody here from tokumx?
[19:46:00] <tg2> trying to import a mongodb dump... getting errmsg: "no such cmd: beginTransaction"
[19:52:50] <includex> joannac 2.4.8 on Debian 7.2
[20:01:11] <staben> Hi there!
[20:01:13] <staben> A User has many Requests. Some of the Requests has PriceQuotes. # E.g: u.requests.first.price_quotes . How can I do a query that returns only requests that has price quotes?
[20:01:38] <cheeser> $exist
[20:01:57] <cheeser> that looks more like a character flaw than a query command
[20:02:03] <ron> sexist
[20:02:05] <cheeser> that joke is $exist!
[20:02:17] <cheeser> oh, hi ron. how ironic. :D
[20:02:28] <ron> :D
[20:02:41] <staben> cheeser: Thanks! :)
[20:05:33] <staben> cheeser: I tried with "u.requests.where('price_quote_id').exists?" but that does not seem to work. Could you please help me out with the syntax? ( Just started using MongoDB a few days ago )
[20:07:09] <staben> yeah, I'm doing the query with Mongoid.
[20:08:17] <cheeser> http://docs.mongodb.org/manual/reference/operator/query/exists/
[20:09:59] <includex> joannac weird, just removed the port value from "host" : "hosname:port" and worked :/
[20:10:39] <cheeser> "hosname" or "hostname" ?
[20:10:50] <includex> cheeser hostname :) (typo)
[20:10:55] <cheeser> :D
[20:11:14] <includex> I'm going to rebuild the cluster (i'm using ansible, brb)
[20:11:19] <ron> I think cheeser's been drinking.
[20:11:43] <includex> ron he should give us some
[20:11:52] <staben> u.requests.where(:price_quotes.exists => true).all # Returns nothing, but "u.requests.first.price_quotes.first == PriceQuote"
[20:11:59] <ron> he's too far away from me.
[20:12:01] <ericsaboia1> staben: should be something like where(:price_quote_id.exists => true)
[20:12:07] <ericsaboia1> http://stackoverflow.com/questions/8963054/mongoid-how-to-query-for-all-objects-where-value-is-nil
[20:12:30] <includex> true
[20:24:22] <staben> ericsaboia1: Thanks, but does that work when the relationship is like this? "PriceQuote belongs_to :request" and "Request has_many :price_quotes" u.requests.where(:price_quote_ids.exists => true).first # => nil u.requests.collect(&:price_quote_ids) # => [[BSON::ObjectId('52a770664d6172d820010000')], []]
[20:25:56] <ericsaboia1> staben: how do u save that field when u do not have a relationship?
[20:26:25] <staben> ericsaboia1: which field?
[20:26:36] <ericsaboia1> price_quote_id
[20:27:02] <ericsaboia1> just an empty array?
[20:27:59] <staben> PriceQuote.last => #<PriceQuote _id: 52a77066XXX, user_id: BSON::ObjectId('52a770XXXX'), request_id: BSON::ObjectId('52XXX'), price: 1.5
[20:28:05] <ericsaboia1> if it is the case, I think u should try something like $ne: []
[20:29:09] <staben> ericsaboia1: A Request can receive many PriceQuotes, but it's not necessary that it has any.
[20:29:38] <ericsaboia1> so u are trying to find requests without any pricequotes, right?
[20:29:48] <staben> Yes! :-)
[20:30:05] <staben> And the other way around.
[20:30:15] <ericsaboia1> http://stackoverflow.com/questions/14789684/find-mongodb-records-where-array-field-is-not-empty-using-mongoose
[20:30:19] <staben> Requests that has received quotes.
[20:30:26] <staben> tnx for the link!
[20:30:56] <ericsaboia1> something like $size: {$gt: 0} should work
[20:31:02] <ericsaboia1> or $ne: []
[20:31:07] <ericsaboia1> and so on..
[20:31:48] <astropirate> Is anyone here using mgo?
[20:31:58] <astropirate> It isn't reading any of my bool typed fields
[20:32:02] <astropirate> but is reading everything else
[20:32:26] <ericsaboia1> staben: where is what u are looking for: http://stackoverflow.com/questions/12167403/how-do-i-query-for-item-with-non-empty-array-with-mongoid
[20:32:29] <ericsaboia1> *here
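
[Note: the raw queries behind those Mongoid calls, in shell form. $size only accepts a literal number, so the $size: {$gt: 0} variant mentioned above does not actually parse; $ne: [] or the positional trick is the usual workaround:

    // requests that have at least one price quote:
    db.requests.find({price_quote_ids: {$exists: true, $ne: []}})
    db.requests.find({"price_quote_ids.0": {$exists: true}})   // "element 0 exists"

    // requests with no price quotes at all:
    db.requests.find({$or: [{price_quote_ids: {$exists: false}}, {price_quote_ids: []}]})
]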
[20:34:25] <webus> hi to all!
[20:34:41] <webus> is gridfs faster than postgresql lob?
[20:35:52] <kali> yes, if you release both of them at the same time in vacuum, gridfs will hit the ground first
[20:38:56] <astropirate> kali, what if they are both suspended in a magnetic field?
[20:41:56] <kali> they're both made of plastics
[20:56:57] <ksanchez> Hi, where i can find big data docs to fill a mongodb database?
[20:57:12] <ksanchez> it's for testing purposes
[20:57:41] <astropirate> ksanchez, US census data
[20:59:32] <ksanchez> ok, thnx i'll check it out
[21:00:13] <bjori> ksanchez: http://mongodb-enron-email.s3-website-us-east-1.amazonaws.com/
[21:01:04] <ksanchez> Cool bjori :D
[21:27:15] <mikaeldice> I need to run addition or subtraction on a number value in a document, sometimes from multiple node threads simultaneously. I need to read the original value from this document prior to performing the operation. Is there a risk of a scenario where thread 1 reads the value, starts its math, then thread 2 reads the value and starts its math with a different add/subtract value, thread 1 writes its new value to the document, then thread 2 writes its computed value to the document, ignoring the change from the first thread? If that makes sense..
[21:28:29] <platzhirsch> Counting the number of associations for a document of mine takes too long. What's the strategy here? This is not indexable is it?
[21:29:20] <cheeser> can you use findAndModify?
[21:30:30] <platzhirsch> although it seems there was a nice fix to better performance on indexes for count
[21:31:41] <mikaeldice> hmm, not sure, I'll look into that cheeser
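
[Note: the race mikaeldice describes (read, compute in the client, write back) goes away if the server does the arithmetic atomically; findAndModify additionally hands back the resulting document in the same step. A sketch with made-up names:

    // atomic server-side arithmetic, no read-modify-write window:
    db.accounts.update({_id: "acct-1"}, {$inc: {balance: -5}})

    // when the code also needs the updated value:
    db.accounts.findAndModify({
      query:  {_id: "acct-1"},
      update: {$inc: {balance: -5}},
      new:    true              // return the post-update document
    })
]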
[21:37:35] <platzhirsch> Okay, nevermind. I simply put an index on the association, it's not sufficiently fast
[21:37:44] <platzhirsch> oh boy :)
[21:51:35] <intellix> mmm, my connections to Mongo keep going up... something wrong? [initandlisten] connection accepted from ?:50632 #71 (71 connections now open)
[21:52:33] <cheeser> are you sure you're closing the ones you're using?
[22:15:25] <intellix> not closing them, read that PHP will reuse connections
[22:16:49] <cheeser> most drivers will. some drivers, like java's, can leak connections if you don't close cursors properly
[22:44:49] <intellix> yeah, it seems to open a new connection on every page refresh
[22:49:42] <cheeser> check your cursors.
[22:55:38] <qswz> err.. there's a way to pool the connection with php?
[22:56:00] <qswz> bah anyway I'm not gonna use php
[22:57:11] <metasansana> +1 qswz
[23:00:34] <Derick> qswz: it happens automatically
[23:01:42] <qswz> oh they did good then
[23:01:48] <qswz> join #java
[23:02:00] <qswz> oops ^^
[23:20:07] <intellix> yeah, happens automatically supposedly. I've got about 1k connections, and I'm hitting some limit. Is there any recommendation about the amount of connections that should be allowed?
[23:28:58] <intellix> ok so it's based on your system ulimit, which is currently set to unlimited. Must be some other issue then. Saw 1k connections and a new one being opened on every page refresh, so I thought something was wrong. I think it all looks fine though, must be something else... was getting some dropped pipe errors
[23:35:56] <justin___> Howdy guys
[23:36:55] <justin___> Anybody have any insight on why quiet = 1 does not work, still seeing lots of logged events with rsyslog
[23:50:57] <joannac> justin___: ?
[23:51:13] <justin___> Hey joannac.
[23:51:25] <justin___> I have quiet = 1, but still seeing lots of messages, especially:
[23:51:44] <justin___> Dec 10 15:49:38 mongodb3 mongod.27017: Tue Dec 10 15:49:38.445 [conn128792] authenticate db: local { authenticate: 1, nonce: "XXXXXXXXXXXX", user: "__system", key: "XXXXXXXXXXXXXX" }
[23:51:48] <joannac> Did you restart mongod?
[23:51:52] <justin___> yup
[23:51:57] <justin___> using rsyslog
[23:52:19] <justin___> syslog = true
[23:52:19] <justin___> profile = 0
[23:52:20] <justin___> quiet = 1
[23:52:38] <joannac> is that in a config file?
[23:53:03] <justin___> yeah /etc/mongod.conf
[23:56:26] <joannac> I think you want quiet=true
[23:56:40] <justin___> Hum, 1 and true the same aren't they?
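
[Note: the form joannac suggests, in the same INI-style file justin___ pasted; whether that version's parser also accepts 1 as a boolean is exactly the open question, so true is the safer spelling, and mongod needs a restart to pick it up:

    # /etc/mongod.conf
    syslog = true
    profile = 0
    quiet = true
]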