PMXBOT Log file Viewer


#mongodb logs for Tuesday the 23rd of June, 2015

[02:22:35] <synergy_> In mongoose, what's the logic in having 5 connections open at a time when connecting to a database
[02:23:04] <GothAlice> Connection pooling to allow for handling of variable server load.
[02:23:09] <cheeser> that
[02:23:14] <cheeser> most drivers will do that
[02:24:00] <synergy_> So if one connection's "busy" another one can handle a query?
[02:24:33] <cheeser> basically
[02:25:28] <synergy_> Then what does "busy" actually mean in database terms?
[02:26:49] <GothAlice> Waiting for a response to a query.
[02:27:43] <synergy_> I assumed so
[02:27:48] <synergy_> Thanks
[02:28:00] <GothAlice> It never hurts to help. :)
[02:28:21] <synergy_> Agreed
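What synergy_ is describing is the driver's connection pool: while one pooled socket is waiting on a query response ("busy"), the others stay available for further queries. A minimal sketch of tuning it, assuming the mongoose package of the era (~4.x); the connection URL and the pool size of 10 are illustrative, the default being 5:

    var mongoose = require('mongoose');

    // server.poolSize is passed through to the underlying native driver; while
    // one pooled connection is awaiting a response, the rest can serve queries.
    mongoose.connect('mongodb://localhost:27017/mydb', {
      server: { poolSize: 10 }
    }, function (err) {
      if (err) throw err;
    });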
[02:39:35] <synergy_> Is this a reasonable channel to ask about general mongoose specific things?
[02:58:19] <synergy_> Why is it that when `console.log`ing mongoose.Schema, I'm not able to see any of the api methods?
[06:08:45] <sjose_> Hi, does mongodb store metadata about a document?
[07:50:57] <Axy> Hey all, I'm using node.js with mongodb
[07:51:19] <Axy> I'm using the native driver for node.js
[07:51:25] <Axy> I've realized that nothing is saving -
[07:51:33] <Axy> should I do anything specific for mongodb to save stuff?
[07:51:45] <Axy> I can add stuff to collections but they're not really being added
[07:51:54] <Axy> (zero entries)
[08:01:00] <coudenysj> Axy: have any code snippets?
[08:02:21] <Axy> coudenysj, http://pastebin.com/bJ4KA1SW this only
[08:02:27] <Axy> this is how I insert documents
[08:02:48] <Axy> I never do db.close();
[08:03:10] <Axy> I was just ctrl+c quitting when the program is done adding stuff
[08:04:13] <coudenysj> sure the "err" variable is empty? and how do you setup the rest (where is the collection var coming from)?
[08:16:24] <Axy> coudenysj, I just realized I get invalid ns error
[08:16:51] <Axy> what's that
[08:17:47] <coudenysj> what is the "connection string" you use?
[08:18:09] <Axy> mongodb://localhost:27017/mydb
[08:18:11] <Axy> coudenysj,
[08:18:46] <Axy> everything was working fine on my local machine, it started to behave weird when I put it on the server
[08:19:43] <coudenysj> and the collection name?
[08:19:52] <Axy> entries
[08:21:14] <coudenysj> and connecting via the mongo client works?
[08:25:59] <Axy> ARGGH coudenysj i've realized there was a typo at some random place
[08:26:17] <Axy> it's fixed now, thank you for helping
[08:27:51] <coudenysj> np
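Since the pastebin above will eventually expire, here is a hedged sketch of the pattern coudenysj was checking for, using the native Node.js driver of the time (~2.x); the connection string, collection name, and document are illustrative, and the point is that err must be inspected before assuming the write landed:

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
      if (err) throw err;                               // bad connection string fails here
      db.collection('entries').insertOne({ title: 'hello' }, function (err, result) {
        if (err) return console.error(err);             // e.g. an "invalid ns" error from a
                                                        // mistyped database or collection name
        console.log('inserted', result.insertedId);
        db.close();                                     // close after the callback fires,
      });                                               // rather than ctrl+c-ing the process
    });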
[08:45:18] <mitereiter> I have a sharded system
[08:45:18] <mitereiter> I made an index and got the error: key values too large to index
[08:45:18] <mitereiter> but the index was created on some shards, because when I want to insert on those shards I get the error: value too large for index
[08:45:18] <mitereiter> I can't see the index with db.col.getIndexes()
[08:45:19] <mitereiter> is this the normal behaviour?
[08:45:19] <mitereiter> can I drop the index through mongos?
[08:54:23] <zamnuts> mitereiter, was an index file created on the filesystem?
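A hedged sketch of how mitereiter's half-built index could be inspected and removed; "col" and the index name are placeholders. getIndexes() via mongos may not reflect every shard (which would explain the index being invisible here), so it is worth repeating directly against each shard's primary, while dropIndex() issued through mongos is relayed to the shards:

    db.col.getIndexes()              // via mongos, then repeat connected directly to each shard
    db.col.dropIndex("bigfield_1")   // by name as reported by getIndexes(), or by key pattern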
[09:42:14] <pamp> hi
[09:42:27] <pamp> Im in trouble with mongodb memory usage
[09:42:54] <pamp> mongodb consumes all the available memory on the server, but I have other things on this server
[09:43:11] <pamp> and when I do operations I get exceptions
[09:43:22] <pamp> because mongodb consumes all memory
[09:43:50] <pamp> at this time I can't get a dedicated mongodb server...
[09:44:09] <pamp> is there any workaround to prevent this from happening?
[09:48:08] <lqez> pamp: use another engine, not mmapv1
[09:48:34] <coudenysj> pamp: http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram
[09:48:50] <zamnuts> lqez, would enforcing ulimit crack down on mongodb memory usage?
[09:48:51] <lqez> like wiredTiger, you can set cacheSizeGB
[09:49:29] <lqez> I think it just causes a crash or a performance problem
[09:49:58] <pamp> ok thanks a lot for your inputs
[09:50:13] <zamnuts> performance problem no doubt - no RAM available for working set
[09:50:27] <zamnuts> i'm skeptical of a crash
[09:51:36] <zamnuts> i guess i'll have to try it out, i'm interested
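The cacheSizeGB knob lqez mentions exists only for the WiredTiger engine (MongoDB 3.0+); MMAPv1 has no equivalent cap, since it memory-maps the data files and lets the OS page cache grow to fill free RAM. A sketch of a capped startup, with the 2 GB figure and dbpath purely illustrative:

    mongod --storageEngine wiredTiger --wiredTigerCacheSizeGB 2 --dbpath /data/db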
[09:53:51] <pamp> another question. Would you recommend WiredTiger for production servers for now?
[09:56:29] <lqez> hmm I'm using tokuft , not wiredTiger.
[09:56:39] <lqez> I tested tokuft with wiredTiger at 3.0RC4,
[09:56:58] <zamnuts> i haven't had any problems w/ wiredtiger, but it has only been a month or so
[09:57:31] <lqez> But WT crashes when it stores more than physical memory size. However,
[09:57:49] <zamnuts> drastic performance difference (for my dataset anyway) with WT over mmapv1, and i'm getting at least 50% compression ratio... again for my dataset
[09:57:50] <lqez> At the stable release (not the RC), it looks very stable
[09:58:02] <lqez> zamnuts: agree++ compression +1
[09:59:00] <zamnuts> lqez, tokuft == tokutek ?
[09:59:02] <lqez> pamp: See http://www.acmebenchmarking.com/ if you're interested in other engines
[09:59:18] <lqez> zamnuts: yeap I used tokumxse - now it is 'tokuft'
[09:59:46] <zamnuts> like it? i only found out about it _after_ my WT migration :/ too late now
[10:00:06] <aps> Do the change entries stay forever in the journal?
[10:01:23] <zamnuts> aps, no, they are cycled out of the journal, but if you mean the oplog, then they stay there as long as your oplog limit hasn't been reached
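The distinction zamnuts draws: journal files are recycled within seconds, while the oplog is a capped collection whose entries survive only until its size limit pushes the oldest ones out. The retention window can be checked from the mongo shell:

    db.printReplicationInfo()
    // configured oplog size:   1024MB
    // log length start to end: 53423secs (14.84hrs)     <- example output; values vary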
[10:02:05] <pamp> once again, thanks for your inputs
[10:02:10] <pamp> they really helped
[10:02:43] <lqez> zamnuts: I had to choose newer engine at Mar, 2015 - If I had a lot of time, I may choose WT, not toku. lol - too late now (2)
[10:03:02] <lqez> pamp: good luck -
[10:03:54] <lqez> My colleague (who works on online game services) runs 20+ mongo shards/replicas and he recommends WT now
[10:04:10] <lqez> he said it's stable - not like the RC versions
[10:04:32] <lqez> Actually, I chose tokuft because WT crashes a lot at RC4
[10:05:09] <lqez> (I mean 3.0 RC4)
[10:06:35] <zamnuts> that's nice to know about the success of others, sometimes i feel alone :(
[10:08:09] <zamnuts> i just wonder if toku's improvements still have a leg up on WT, support for transaction is also intriguing
[10:16:46] <aps> I need to backup a DB from sharded cluster. Is there a FOSS solution similat to MMS available?
[10:16:57] <aps> s/similat/similar
[10:18:49] <joannac> aps: how large?
[10:19:29] <aps> joannac: ~600 GB
[10:20:20] <joannac> aps: http://docs.mongodb.org/manual/administration/backup-sharded-clusters/
[10:20:35] <joannac> probably filesystem snapshots would be most suitable for that size
[10:21:18] <aps> joannac: okay. thanks!
[10:21:52] <joannac> it's not the same as what you get from MMS though
[10:24:34] <aps> joannac: I believe MMS uses Oplogs for backup purposes? Isn't there a way to go through that path?
[10:25:28] <joannac> not unless you want to reimplement it
[10:39:46] <mtree> i have a collection where _id keys are sometimes strings and sometimes ObjectID, how to query them properly?
[10:41:33] <phutchins> Is there a way (in 2.6) to run with --sslMode requireSSL but not provide a sslCAFile ?
[10:42:14] <joannac> mtree: that seems really suboptimal. why?
[10:44:56] <Waheedi> hello everyone. I have two different environments running mongodb on each env. I did rsync on /data/mongodb files from env1 to env2
[10:45:34] <Waheedi> after the rsync completed I restarted the mongodb server, but I still can't find the latest records made on env1 reflected in the env2 database
[10:45:51] <Derick> you need to stop mongod on env1 before you sync
[10:46:24] <Waheedi> it's already read only (env1)
[10:46:27] <phutchins> Waheedi: you can stop a single secondary and copy from there
[10:46:41] <phutchins> Waheedi: how do you mean read only?
[10:46:46] <Derick> how did you make it read only?
[10:46:49] <phutchins> :)
[10:46:59] <Waheedi> db..fsynclock something
[10:47:46] <Waheedi> it's actually not read only, it's locked
[10:48:11] <Waheedi> which means it's read only
[10:48:17] <Waheedi> lol
[10:48:23] <Waheedi> locked from writes
[10:48:41] <phutchins> Waheedi: new document writes but I'd assume there are other types of writes that happen
[10:49:05] <Waheedi> alright
[10:49:19] <joannac> phutchins: what
[10:49:41] <Waheedi> joannac: thank god you are here :)
[10:50:00] <phutchins> could be wrong :)
[10:50:14] <mtree> joannac: don't ask me, it's just like that
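No resolution to mtree's question appears in the log, but for reference, a lookup against such a mixed collection has to try both representations explicitly, since a string and an ObjectId never compare equal. A sketch with an illustrative collection name and value:

    var id = "55896c2fe4b0a3b1c2d3e4f5";                      // 24 hex chars
    db.things.find({ $or: [ { _id: id }, { _id: ObjectId(id) } ] })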
[10:50:16] <phutchins> I've never had any luck copying a db from the fs level with mongo running...
[10:50:41] <Waheedi> i have done it few times
[10:50:53] <joannac> Waheedi: a write you did on env1 before fsyncLock, doesn't show up on env2?
[10:50:54] <Waheedi> but this time i'm using a different method which is rsync
[10:51:07] <Waheedi> yes joannac
[10:51:32] <Waheedi> just to be clear about the rsync I'm using https://en.wikipedia.org/wiki/Rsync
[10:51:34] <phutchins> Waheedi: i've done fs snapshots with mongo up which worked... But not all the time.
[10:51:45] <Waheedi> yes that was what I did
[10:51:53] <Waheedi> previously but not this time
[10:51:58] <phutchins> (and the last time I've done this was a few years ago so maybe wildly different now) :)
[10:52:08] <joannac> Waheedi: I suspect something else is going on
[10:52:30] <joannac> is env1 or env2 a replica set?
[10:52:32] <phutchins> the new cluster, are you bringing a single node up first to verify?
[10:52:47] <phutchins> maybe another node in the cluster is winning out and the new one is rolling back
[10:52:52] <Waheedi> env1 is a secondary replicaset
[10:52:59] <Waheedi> env2 is running in single mode
[10:53:07] <phutchins> gotcha
[10:53:09] <phutchins> hum
[10:53:44] <joannac> what does "secondary replicaset" mean?
[10:54:11] <Waheedi> it means env1 is running in a replica set as a secondary instance
[10:54:19] <joannac> are you searching by _id ?
[10:54:30] <joannac> does the document exist in env1?
[10:54:39] <phutchins> Waheedi: did you confirm that... joannac beat me to it
[10:54:42] <phutchins> :)
[10:54:45] <Waheedi> nop I'm just finding the last result from a collection
[10:54:55] <Waheedi> which has an old date (a month ago created at)
[10:55:08] <Waheedi> while the latest record in that collection in env1 is today
[10:55:53] <joannac> what's the second latest record in env1?
[10:56:16] <joannac> wait, how are you determining "latest"?
[10:56:19] <joannac> sort?
[10:56:22] <Waheedi> yes
[10:56:51] <Waheedi> db.websites.find().sort({_id:-1}).limit(2);
[10:58:52] <joannac> and the last _id in env2 is a month ago?
[10:59:12] <joannac> how many documents in env1 are there since 1 month ago?
[11:00:22] <joannac> ... oh well
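For reference, the procedure Waheedi describes, sketched end to end; the paths are illustrative. Anything written to env1 after the lock is released, or missed because rsync ran against changing files, simply will not be on env2, which is consistent with what joannac is probing for above:

    // on env1's secondary, in the mongo shell:
    db.fsyncLock()                                     // flush and block writes
    //   rsync -a /data/mongodb/ env2:/data/mongodb/   (run from a system shell)
    db.fsyncUnlock()                                   // release once rsync finishes
    // then restart mongod on env2 so it loads the copied files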
[12:05:02] <Naeblis> Is there a way to alias document fields?
[12:05:18] <deathanchor> why would you want to?
[12:05:43] <Naeblis> My problem: a user can be part of multiple organizations which are stored inside user.member_of. But at any given time, I would like a user.organization that represents the currently logged in org
[12:07:01] <deathanchor> and you want an alias in the users.org value?
[12:07:38] <Naeblis> yes.
[12:08:31] <deathanchor> I would just store a value there. AFAIK there is no way to alias
[12:08:56] <deathanchor> you might be able to make a data model to conform to your needs
[12:09:08] <deathanchor> via mongoengine or something similar
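deathanchor's point stands: there is no stored alias. If the goal is only to surface the current organization under a different name at read time, an aggregation $project can rename on the fly; this is a sketch, and current_org is a hypothetical stored field, not Naeblis's actual schema:

    var userId = ObjectId("55896c2fe4b0a3b1c2d3e4f5");        // illustrative
    db.users.aggregate([
      { $match: { _id: userId } },
      { $project: { member_of: 1, organization: "$current_org" } }
    ])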
[12:37:52] <leporello> Hi. How can i remove specified field from all documents in collections?
[12:38:43] <leporello> I'm using layouts.update( unknownArgument, { $unset: {'average': ''}}). What should I use in query to tell mongo to use all documents?
[12:39:24] <cheeser> s/unknownArgument/{}/
[12:40:02] <leporello> cheese, so obvious! Thanks :)
[12:40:09] <cheeser> any time
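Worth noting alongside cheeser's fix: the empty query {} matches every document, but update() also needs multi: true, or only the first match is modified:

    db.layouts.update({}, { $unset: { average: "" } }, { multi: true })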
[13:02:00] <pithagora> hello all. i have a server which entered into slave mode after the second one was disabled and the application can't write into it.
[13:02:38] <pithagora> is it correct that i have to drop the local db then restart the mongodb without replset option to have the server in standalone ?
[13:03:43] <coudenysj> you can just restart it without the replset, if I remember correct
[13:03:47] <cheeser> you shouldn't have to drop anything.
[13:03:53] <cheeser> just remove the replSet option
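cheeser's route (remove the replSet option and restart) yields a standalone. If pithagora instead wants to keep the replica set and just regain a writable primary on the surviving member, the documented alternative is a forced reconfig; a sketch, where the member index must be adjusted to whichever member is still up:

    cfg = rs.conf()
    cfg.members = [ cfg.members[0] ]        // keep only the surviving member
    rs.reconfig(cfg, { force: true })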
[15:11:18] <d-snp> hi
[15:11:51] <d-snp> we get this error in our cluster: errmsg: "mongos specified a different config database string : stored : 10.35.0.2:27019,10.65.0.3:27019,10.72.0.3:27019 vs given : 10.35.0.2:27019,10.65.0.3:27..."
[15:12:23] <d-snp> we tried updating those strings, but apparently they have them stored somewhere
[15:12:35] <d-snp> is it possible to remove that store? where can we find it?
[15:12:44] <d-snp> apparently restarting the config servers was not enough
[15:14:08] <d-snp> our servers get new ips every now and then
[15:16:02] <Waheedi> joannac: sorry for the disconnection
[15:16:05] <d-snp> http://docs.mongodb.org/manual/tutorial/migrate-config-servers-with-different-hostnames/ reading this now
[15:16:38] <Waheedi> there are almost 150 new documents in env1 that are not showing up at all in env2
[15:17:29] <einyx> d-snp: a better solution will be using hostnames i guess
[15:17:32] <Waheedi> joannac: i think a db.repairDatabase() would fix the issue, but I need to understand whats causing it and second the repair would take about 48 hours on my 400GB database
[15:17:36] <einyx> so you don't need to change them
[15:18:18] <d-snp> yes
[15:18:35] <d-snp> thanks, I don't know why we weren't doing that already
[15:48:16] <GothAlice> d-snp: Because DNS adds another point of failure to a server cluster, my servers automatically populate /etc/hosts as nodes come online and go offline.
[15:48:44] <GothAlice> (This removes external network resources as a potential issue.)
[17:19:12] <lmatteis> hi all. i tried creating an index using .createIndex()
[17:19:21] <lmatteis> how long does it take to create it. the command returned immediately
[17:19:32] <lmatteis> is it running in the background?
[17:19:42] <GothAlice> lmatteis: Only if you told it to do so.
[17:20:12] <lmatteis> no i just ran db['coll_name'].createIndex
[17:20:16] <lmatteis> and it returned quite fast
[17:20:19] <GothAlice> Run db.collection.getIndexes() to see if the collection actually has the index you want.
[17:20:28] <GothAlice> Uhm, .createIndex without parenthesis will do nothing.
[17:20:31] <GothAlice> It's a function.
[17:20:37] <GothAlice> You pass it arguments, right?
[17:20:41] <lmatteis> GothAlice: yes it's there
[17:20:51] <lmatteis> of course
[17:20:51] <GothAlice> Then it's done, and there. :)
[17:20:52] <lmatteis> :)
[17:20:56] <lmatteis> ah
[17:21:10] <lmatteis> ok then i'm still getting this and don't know why: { [MongoError: too much data for sort() with no index. add an index
[17:21:13] <lmatteis> or specify a smaller limit] name: 'MongoError' }
[17:21:24] <GothAlice> Foreground indexing will block other access to the collection while it's happening, but it's orders of magnitude faster than indexing in the background.
[17:21:36] <lmatteis> the code throwing that is: collection.find({},{"_id":0}).sort({"endpoint.datasets.0.label":1,"endpoint.uri":1})
[17:21:42] <GothAlice> ...
[17:21:51] <lmatteis> and db['ptasks_agg'].createIndex({"endpoint.datasets.0.label":1}) is how i created it
[17:22:01] <lmatteis> (or i should say, other developers created it)
[17:22:04] <GothAlice> I actually don't know if that would work, at all.
[17:22:21] <GothAlice> Direct access to list indexes like that is a sign you may need to denormalize to efficiently query.
[17:23:02] <GothAlice> Run an .explain() on that query, though. It might tell you why it's rejecting the index you added.
[17:23:29] <lmatteis> so like .explain() after the sort() ?
[17:23:38] <GothAlice> Aye.
[17:23:52] <lmatteis> ok got back an object
[17:24:02] <lmatteis> with no indexBounds
[17:24:07] <lmatteis> like an empty obj
[17:24:14] <GothAlice> Could you gist the whole explain result?
[17:24:21] <GothAlice> Gist/pastebin/etc.
[17:24:30] <lmatteis> and https://gist.github.com/lmatteis/3ac474c203062c48034e
[17:24:52] <GothAlice> Yup, you can't do what you are trying to do with those array indexes.
[17:25:12] <lmatteis> GothAlice: like the query is wrong?
[17:25:37] <GothAlice> You can index endpoint.datasets.label (without the index) to have MongoDB create a multi-key index.
[17:25:45] <GothAlice> (Er, without the .0 index, that is.)
[17:25:54] <lmatteis> how do i remove the other one
[17:26:01] <lmatteis> i just created
[17:26:10] <GothAlice> lmatteis: http://docs.mongodb.org/manual/tutorial/remove-indexes/ :)
[17:27:28] <lmatteis> GothAlice: so just like .createIndex({"endpoint.datasets.label":1})
[17:27:33] <codesine> Hi, anyone here for what I think is a basic question (:
[17:27:59] <GothAlice> lmatteis: It may be worthwhile to also add endpoint.uri to that index, as you're sorting on both.
[17:28:06] <GothAlice> codesine: Ask, don't ask to ask. :)
[17:28:08] <lmatteis> GothAlice: yeah that's there
[17:28:11] <codesine> I'm using mongoengine and for some reason things in arrays are showing up with a unicode in mongodb shell like so in this snippet:
[17:28:12] <lmatteis> (that's already there)
[17:28:14] <codesine> ... "results" : "[{u'q': 2}, u'test3']"...
[17:28:22] <codesine> but in mongodb i can't do stuff like
[17:28:24] <codesine> u"test"
[17:28:31] <GothAlice> lmatteis: "Already there" -> .createIndex({"endpoint.datasets.label":1}) <- not demonstrably.
[17:28:38] <lmatteis> GothAlice: ok that .createIndex({"endpoint.datasets.label":1}) returned really fast
[17:28:39] <codesine> it's causing me a problem cuz i want to do for instance
[17:28:48] <codesine> "results.q":
[17:28:51] <lmatteis> i wonder how it's able to create it so quickly but ok
[17:28:59] <lmatteis> like on couchdb it takes a bunch
[17:28:59] <codesine> but it can't find it cuz that 'u'
[17:29:03] <lmatteis> and this dataset is quite huge
[17:29:13] <GothAlice> codesine: You're not in a Mongo shell, you're in a Python one.
[17:29:21] <codesine> no no i'm in the mongoshell
[17:29:25] <lmatteis> GothAlice: sorry i sent you the wrong gist
[17:29:31] <GothAlice> The 'u's say otherwise, codesine.
[17:29:34] <codesine> db.smsdao.find() { "_id" : ObjectId("5589950783107d49b278090d"), "sms_to" : "6467061433", "body" : "test body", "sms_from" : "+16465128787", "timestamp" : ISODate("2015-06-23T17:19:03.025Z"), "clicks" : 0, "responses" : { "survey_for" : "6467061433", "results" : "{u'a': [{u'q': 2}, u'test3']}", "title" : "test surey title" } } > db.smsdao.find()
[17:29:37] <codesine> look
[17:29:41] <lmatteis> GothAlice: https://gist.github.com/lmatteis/2ac278767ed3c5badc0a
[17:29:48] <codesine> this is in mongoshell
[17:29:50] <lmatteis> it was wrong collection
[17:29:53] <codesine> i've never seen this ever before
[17:30:06] <codesine> I'm wondering how the hell the u got in mongoshell
[17:30:21] <GothAlice> codesine: Ah, you've got something in Python writing badly encoded JSON (or just a Python "repr" of a dictionary) into a string field.
[17:30:26] <GothAlice> That's pants-on-head crazy.
[17:30:40] <lmatteis> GothAlice: my problem?
[17:31:13] <codesine> I was inputting it through curl
[17:31:22] <codesine> hm...
[17:31:58] <codesine> hey gothalice
[17:32:02] <GothAlice> codesine: http://showterm.io/8f0134cbd73dcc4b2b187 < that string is Python.
[17:32:07] <codesine> just for grins, how can you do a find command on that string with u'
[17:32:31] <codesine> yeah i know the u is python unicode
[17:32:54] <codesine> i was just surprised how it even got thru like that, mongoengine only does it on arrays
[17:33:00] <codesine> on dynamicdocuments
[17:33:06] <lmatteis> GothAlice: just the complete situation in case you're interested: https://gist.github.com/lmatteis/7c2e98efb693dc9385dd
[17:33:07] <GothAlice> codesine: http://docs.mongodb.org/manual/reference/operator/query/regex/
[17:33:48] <GothAlice> lmatteis: You didn't read what I wrote earlier.
[17:34:07] <GothAlice> "It may be worthwhile to also add endpoint.uri to that index, as you're sorting on both."
[17:34:36] <GothAlice> You responded "Already there", which .createIndex({"endpoint.datasets.label":1}) and your latest gist say otherwise to.
[17:34:50] <codesine> oh yeah of course, regex! thanks alice!
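For reference, the regex stop-gap GothAlice pointed to, run against the document codesine pasted above (the pattern is illustrative); the durable fix is the one codesine reports later, storing a real subdocument instead of a Python repr string:

    db.smsdao.find({ "responses.results": /u'test3'/ })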
[17:35:11] <lmatteis> GothAlice: but it's there: "key": {"endpoint.uri" : 1 }
[17:36:13] <GothAlice> lmatteis: MongoDB can only use a single index per query, in most cases.
[17:36:24] <GothAlice> Thus you need a single, compound index which covers both fields.
[17:36:26] <lmatteis> ah!
[17:36:43] <GothAlice> In the same order you later use the fields.
[17:36:47] <GothAlice> With the same direction of sort.
[17:37:11] <lmatteis> GothAlice: https://gist.github.com/lmatteis/10fdba88745271216b15
[17:37:48] <GothAlice> Can you give me a dump of an example typical record from that collection?
[17:38:06] <lmatteis> ah wait
[17:38:14] <lmatteis> i added the index i had earlier (with 0.label)
[17:38:18] <GothAlice> Oh, also, your sort is wrong.
[17:38:24] <lmatteis> and something seems to happen
[17:40:36] <GothAlice> From the explain you seem to have 535 records in that collection. If it's complaining about there being too much data to sort, something is terribly wrong with how you structured your data.
[17:41:18] <lmatteis> yup
[17:41:20] <lmatteis> not my data
[17:41:34] <saml> what's good write concern to make sure data is completely saved?
[17:41:49] <saml> i tried w=majority&journal=true but still tests fail (find right after update)
[17:42:17] <saml> i can't tell how many replicaset members there are. so can't do w=100
[17:43:22] <saml> maybe it can't be done. currently 3 members. so w=3&journal=true . still tests fail sometimes
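A hedged sketch of spelling saml's write concern out with the Node driver of the era (host names, replica set name, and values are illustrative). Note that no w value helps if the follow-up find() is allowed to hit a secondary that has not replicated the update yet, which is one common way such tests go flaky:

    var MongoClient = require('mongodb').MongoClient;

    // w=majority waits for a majority of data-bearing members, journal=true for the
    // journal; readPreference=primary keeps reads on the node that took the write.
    var url = 'mongodb://host1,host2,host3/mydb' +
              '?replicaSet=rs0&w=majority&journal=true&readPreference=primary';

    MongoClient.connect(url, function (err, db) {
      if (err) throw err;
      db.collection('docs').updateOne({ _id: 1 }, { $set: { value: 42 } },
        { w: 'majority', j: true },                    // per-operation equivalent
        function (err) {
          if (err) throw err;
          db.collection('docs').findOne({ _id: 1 }, function (err, doc) {
            console.log(doc);                          // read-your-own-write via the primary
            db.close();
          });
        });
    });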
[17:43:27] <lmatteis> GothAlice: thanks, at least now it's working :) love you
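For reference, the shape of what GothAlice recommends in the exchange above: drop the positional index, build one compound index that matches the sort's fields, order, and direction (a multi-key index on the array field, without the .0), and sort on those same keys; collection and field names are taken from lmatteis's gists:

    db.ptasks_agg.dropIndex({ "endpoint.datasets.0.label": 1 })   // the earlier positional index
    db.ptasks_agg.createIndex({ "endpoint.datasets.label": 1, "endpoint.uri": 1 })
    // and query/sort with the same keys in the same order:
    // collection.find({}, { _id: 0 }).sort({ "endpoint.datasets.label": 1, "endpoint.uri": 1 })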
[17:47:32] <GothAlice> "So our new prospect wants the entry level Product X™, but they have some questions about searching the data. They can search for candidates with certain levels of experience in certain areas, right?" My response: At the entry level? http://s.webcore.io/image/2T2z070q072I0R1z1n09
[17:49:58] <GothAlice> saml: Your tests are flawed.
[17:50:06] <GothAlice> Each time you bring it up and I'm around, I mention this. ;P
[17:50:39] <StephenLynx> GothAlice I didn't get that prospect and entry level stuff :v
[17:51:19] <GothAlice> StephenLynx: A potential client is asking for doctorate-level natural language processing capability for the equivalent of pennies on the dollar. "Ha ha ha. No."
[17:51:26] <StephenLynx> lel
[17:51:37] <codesine> Woot, @gothalice, thanks for the help, turns out the flaskrestful lib i was using automatically uses 2 types it calls, str, or int, which mangled my embedded object. But I could pass it a dict and it did without the mangling (:
[17:56:18] <wojons> running into some errors on a sharded cluster with "thread over memory limit, cleaning up", and "Assertion failure messageShouldHaveNs() src/mongo/db/dbmessage.cpp 97" and "AssertionException handling request, closing client connection: 0 assertion src/mongo/db/dbmessage.cpp:97"
[17:56:39] <GothAlice> saml: https://github.com/marrow/cache/blob/develop/test/test_model.py < a pretty thorough test suite for my "MongoDB as cache" package. And they work reliably: https://travis-ci.org/marrow/cache/jobs/59758804#L387-L427
[17:57:42] <wojons> i am running version 3.0.0-rc8 even if i update to 3.0.4 i still have the same issue
[17:57:50] <GothAlice> wojons: Storage driver?
[17:58:18] <wojons> mmapv1 but let me double check
[17:58:36] <cheeser> if you have to check, it's almost certainly mmapv1
[17:58:46] <wojons> yeah i dont think i changed it cheeser
[17:58:54] <GothAlice> Set and forget is a thing, too.
[17:59:07] <cheeser> but less likely in this case :)
[18:00:44] <GothAlice> Alas, I can't immediately think of what would cause that in 3 beyond WiredTiger usage (with two directly related outstanding JIRA tickets). I would recommend opening up your own JIRA ticket with as much information as you can provide as to the runtime environment and configuration of the failing node.
[18:01:44] <wojons> GothAlice: mmv1
[18:22:48] <wojons> GothAlice: any ideas
[18:23:02] <GothAlice> Alas, I can't immediately think of what would cause that in 3 beyond WiredTiger usage (with two directly related outstanding JIRA tickets). I would recommend opening up your own JIRA ticket with as much information as you can provide as to the runtime environment and configuration of the failing node.
[18:23:34] <wojons> what jira tickets do you have for that
[18:27:19] <StephenLynx> IMO this guy is just flooding.
[18:27:28] <StephenLynx> "(Max SendQ exceeded)"
[18:28:06] <StephenLynx> can we have him banned? his constant logout-login is taking quite some screen space
[18:30:36] <GothAlice> StephenLynx: Most clients let you turn off non-chat messages.
[18:30:43] <GothAlice> I do so on channels I don't moderate.
[18:30:46] <StephenLynx> let me see
[18:31:43] <GothAlice> Max SendQ may also be the exact opposite of what you assumed: the server transmit buffer for that connection is filling. I.e. the client is "connected", but not successfully pulling in all data quickly enough. He may just have joined too many rooms at once. ;)
[18:32:22] <doc_tuna> i.e. /ignore #mongodb JOINS PARTS QUITS
[18:32:29] <doc_tuna> in irssi
[18:33:18] <GothAlice> Command+I, Shift+Tab four times, space, enter. (In Textual 5. ;)
[18:33:39] <StephenLynx> ah, found it.
[18:34:15] <mike_edmr> i do it for specific users when they are chronic rejoiners too
[18:34:43] <GothAlice> Depending on the messages surrounding the re-join, I sometimes PM people and advise on how to correct their problem.
[18:44:03] <kaliya> Hi. I have a replicaset for $database on three mongo instances. Two stopped because the partition where they store $database data filled up to 100%. I can discard the data... So I can remove $database.* files... But what is the procedure to follow here to remove those files? Without deleting the replicaset
[18:53:14] <deathanchor> StephenLynx must hate whem my autoaway re-nicks me
[18:53:23] <StephenLynx> hm?
[18:53:31] <StephenLynx> never noticed
[18:54:05] <StephenLynx> maybe, I have a very rigid schedule :v
[18:59:18] <GothAlice> There's a proper away flag, so re-naming is completely unnecessary.
[19:00:54] <cheeser> agreed
[19:05:48] <domo> hi
[19:05:53] <domo> can I restart mongo when creating an index
[19:06:03] <domo> if its blocking too much
[19:06:40] <domo> is there a risk of corrupting data?
[19:06:43] <cheeser> i wouldn't advise it
[19:06:57] <cheeser> you could kill that operation, though.
[19:07:18] <domo> yeah but what if we can't access the mongo cli?
[19:07:31] <cheeser> talk to an admin?
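A sketch of cheeser's suggestion, killing the operation instead of restarting mongod mid-build; the opid below is illustrative and comes from the currentOp() output. If the build does not abort immediately (as domo finds further down), it may only stop once it reaches an interruptible phase:

    db.currentOp().inprog.forEach(function (op) {
      if (op.msg && /index/i.test(op.msg)) printjson({ opid: op.opid, msg: op.msg });
    })
    db.killOp(12345)        // use the opid printed above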
[19:17:02] <saml> GothAlice, yes tests so bad
[19:17:54] <dan336> So funny story, I'm restoring a mongodb instance from a backup, and it is taking an extremely long time. right now the backup is about 63% done, and it has been running for over 24 hours, there are about 5 million records in the db. Are restores normally this slow?
[19:19:12] <deathanchor> dan336: is it building indexes?
[19:19:38] <dan336> deathanchor: how would I check that?
[19:19:54] <deathanchor> uh... it usually tells you what it is doing while doing a mongorestore
[19:20:59] <deathanchor> my experience has been 50% of the time is restoring the data, the other 50% was spent rebuilding the god aweful indexes the devs made.
[19:22:21] <GothAlice> Those sound like some unfortunate indexes.
[19:22:43] <dan336> that makes sense. we are using 2.6 and the only output we get from the command comes in the form:
[19:22:44] <dan336> 2015-06-23T14:07:24.169-0500            Progress: 1553171680/2537959363 61%     (bytes)
[19:22:44] <GothAlice> With ~1GB of data in one project at work, a mongorestore takes about 30 seconds, only 5 of which are spent on indexes.
[19:22:46] <deathanchor> GothAlice: unfortunate is supporting a dying product for another year and half.
[19:23:22] <GothAlice> Also helps that the 3.x tools are parallelized.
[19:24:15] <deathanchor> dan336: perhaps disk io?
[19:24:30] <domo> so killop doesn't seem to kill the index being created
[19:24:36] <domo> am I safe to restart mongod
[19:24:36] <dan336> its not disk io. cpu is capped at 100 for one core.
[19:24:38] <domo> and drop the index?
[19:24:47] <domo> is there any concerns around that?
[19:24:53] <GothAlice> dan336: Check the server load average (run "uptime") on the machine running the primary.
[19:25:11] <GothAlice> If that number exceeds the number of cores, something's up.
[19:25:53] <dan336> it has a load of 1
[19:27:03] <GothAlice> "iostat -x 1" output shoing anything with a overly-high "await" time?
[19:27:13] <GothAlice> *showing
[19:27:57] <dan336> there is nothing in wait.
[19:28:47] <GothAlice> Cool. That's fully eliminated as a potential problem. :) (EBS volumes, as an example, can sometimes get "stuck", causing await times to go nuts, which might not show as part of the load average.)
[19:29:17] <GothAlice> So, there is something you can do. You can restore the data without indexes, then tell the indexes to build in the background.
[19:29:27] <dan336> yeah we aren't on aws. we are running on our own servers.
[19:30:03] <GothAlice> This will take _substantially_ longer to fully build the indexes in the background, but at least the data will be accessible while that works away.
[19:31:06] <cheeser> background indexes aren't as space efficient fwiw
[19:31:12] <cheeser> but diskspace is cheap, right? :D
[19:31:13] <GothAlice> That too.
[19:31:37] <dan336> so if we do it in the background it will be using a lot of disk and ram? is that correct?
[19:31:48] <dan336> I mean more then it would originally.
[19:32:03] <GothAlice> For various definitions of "a lot".
[19:33:34] <dan336> are we looking at maybe a 2x increase?
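A sketch of GothAlice's suggestion for dan336: restore the data without indexes, then build them in the background so the collections stay usable while the slower (and, as cheeser notes, somewhat less compact) background build runs; the dump path, collection, and key are illustrative:

    //   mongorestore --noIndexRestore /path/to/dump      (run from a system shell)
    db.records.createIndex({ created_at: 1 }, { background: true })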
[19:36:59] <jr3> anyone famaliar with AWS auto scaling?
[20:42:22] <dan336> So we got our restore to finish, however when we created the backup we had 5.9 million or so records; now we only have 3.2 million. Does mongo not back up everything?
[20:50:28] <dan336> or is it possible that the restore process did not restore everything?
[20:54:33] <deathanchor> dan336: I'm going to go with the latter
[20:54:50] <deathanchor> unless your backup used some --query options
[21:04:31] <dan336> deathanchor: thanks, we found what we did. it was our mistake
[22:04:16] <Ruphin> I have a question regarding journaling and replica sets
[22:04:47] <Ruphin> We have an issue where due to large write volume the journaling creates a significant bottleneck in throughput
[22:05:45] <Ruphin> Is it possible to have the PRIMARY member of a replica set run without journaling to accept writes faster, and have a SECONDARY member run with oplog to have some safety in place in case of problems?
[22:12:10] <GothAlice> Ruphin: That would be bad.
[22:12:31] <GothAlice> If the primary goes poof, you're likely to lose data in that scenario. (Any data that has hit the primary but not been replicated out to a secondary yet.)
[22:12:41] <Ruphin> We can afford some loss
[22:13:32] <Ruphin> Let's say we only store statistics such as performance graphing
[22:13:37] <GothAlice> Then disabling journalling might be an optimization you can get away with. As per my usual quip, optimization without measurement is by definition premature. I'd benchmark your data in a development environment under both scenarios.
[22:13:47] <Ruphin> Losing up to an hour of data is fine
[22:13:55] <GothAlice> Nah, shouldn't be that bad.
[22:14:02] <GothAlice> We're talking maybe 15 seconds of data.
[22:14:06] <Ruphin> Right
[22:14:21] <Ruphin> What would be the consequence of disabling journaling on both primary and secondary?
[22:14:32] <GothAlice> Lack of node durability, that's it.
[22:14:57] <Ruphin> I mean, if I'm not journaling on the primary, does it matter if I journal on the secondary, or can I disable that as well without losing much
[22:15:23] <Ruphin> What I want to avoid at all cost is that the entire database is not recoverable if the entire cluster goes down due to power failure or something like that
[22:47:16] <AbuDhar> hey
[22:47:46] <AbuDhar> http://kopy.io/yue5u why does my third item not have ObjectID? :/
[23:02:37] <AbuDhar> never mind!
[23:06:31] <d-snp> hey :) I'm having trouble with oplog queries
[23:06:48] <d-snp> every 10-20 writes, an oplog query randomly takes 500-2000ms
[23:07:02] <d-snp> I think this is bottlenecking my write performance, anyone ever seen this before?
[23:10:16] <d-snp> QUERY [conn29] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1435098587000|18 } } cursorid:22303303740 ntoreturn:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:293 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } 1341ms
[23:49:01] <d-snp> I'm going to sleep, still curious so if someone has an idea please let me know :)