PMXBOT Log file Viewer


#mongodb logs for Monday the 25th of July, 2016

[10:28:39] <ceegee> hi
[10:29:41] <ceegee> the official documentation just describes how to install mongodb on debian wheezy, but what about debian jessie?
[13:28:24] <kali> hello, looking for a way to insert the result of an aggregation pipeline as new documents to a pre-existing collection... it looks like i need to go through a temporary collection, ok, but how to get the result from there to the final collection ?
[13:28:48] <kali> trying to avoid JS as much as possible, including eval and copyTo
[13:31:20] <cheeser> your application will have to handle that. whether that's js or not is up to you
[13:32:36] <kali> cheeser: ok, so i'm not missing something obvious :) thanks
[13:32:42] <cheeser> sadly, no.
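A sketch of the application-side approach cheeser describes, in Python against pymongo-style collection objects (the function name, collection names, and batch size are illustrative, not from the log): stream the aggregation cursor and bulk-insert the documents into the pre-existing target collection, with no server-side JS, eval, or copyTo involved.

```python
def copy_aggregation_results(source_coll, pipeline, target_coll, batch_size=1000):
    """Stream aggregation results from source_coll into an existing
    target_coll in batches, letting the target assign fresh _ids."""
    batch = []
    for doc in source_coll.aggregate(pipeline):
        doc.pop("_id", None)  # avoid duplicate-key errors on insert
        batch.append(doc)
        if len(batch) >= batch_size:
            target_coll.insert_many(batch)
            batch = []
    if batch:  # flush the final partial batch
        target_coll.insert_many(batch)
```

With real pymongo objects this would be called as `copy_aggregation_results(db.source, [{"$match": {...}}], db.target)`; the batching keeps memory bounded when the pipeline emits many documents.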
[13:51:41] <ams__> We have lots of apps that use mongodb with different scaling requirements. We're considering splitting out to a few different unlinked mongodb instances (cluster a for X, cluster b for Y). Does that sound like a reasonable thing to do? Or is there some feature of mongo we're missing out on?
[13:53:40] <cheeser> sounds reasonable as far as what you've told us.
[13:54:07] <cheeser> differing read/write loads yield different perf characteristics so it *can* make sense to split them up.
[13:55:59] <ams__> OK thanks. I'm just a bit concerned about the overhead of running (e.g.) 9 mongos vs running 3 mongos (in a cluster)
[13:57:36] <cheeser> oh, i wouldn't worry about *that*
[13:58:24] <ams__> The other thing is maintenance and updates, managing 1 cluster is easier than managing 3
[13:59:15] <cheeser> using mongo cloud it's even easier! ;)
[14:00:44] <ams__> backed by AWS?
[14:06:59] <cheeser> or azure
[14:46:35] <jokke> hey i have had a replica set node in RECOVERING mode for over 4 days now... Is this normal?
[14:50:33] <jokke> is it possible to display the process of recovery?
[14:54:27] <cheeser> 4 days seems excessive
[14:54:34] <jokke> the db is constantly being written to
[14:55:32] <jokke> yeah i think so too
[14:55:41] <jokke> but the logs show nothing out of the ordinary
[15:07:00] <quattro_> my secondary is stuck in rollback but there’s no rollback data dir
[15:07:37] <jokke> i also only have one secondary so it's kind of a pickle.. :/
[15:08:13] <ThePendulum> what method would you recommend for installing 3.2 on debian jessie?
[15:08:59] <cheeser> ThePendulum: the deb files
[15:11:31] <ThePendulum> I understand those are either up to 2.4 or for wheezy only?
[15:12:07] <quattro_> one of my secondaries is in ROLLBACK, there’s no rollback data on this server, it’s on another secondary that is not in rollback state, is this normal behaviour?
[15:17:15] <quattro_> can I just remove the whole rollback directory if I don’t need to recover anything?
[15:25:27] <cheeser> ThePendulum: the wheezy debs install fine on jessie
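For reference, cheeser's suggestion maps to the standard 3.2-era apt setup: point apt at the wheezy repository and install from there, which also works on jessie. A sketch following the MongoDB 3.2 installation docs of the time (repo URL and signing key are from those docs, assumed still current for mid-2016):

```shell
# Import the MongoDB 3.2 package signing key
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv EA312927

# Add the wheezy repo -- these debs also install fine on jessie
echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/3.2 main" | \
    sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

sudo apt-get update && sudo apt-get install -y mongodb-org
```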
[15:26:55] <ThePendulum> alright, well let me find the loose deb files
[15:28:14] <ThePendulum> guess I'm stuck with compiling on arm
[15:29:44] <cheeser> is there not already a 3.2 arm build?
[15:32:58] <jokke> about that progress for recovery? any way to get an ETA or so?
[15:44:39] <jokke> ah i got it
[15:44:48] <jokke> rs.printSlaveReplicationInfo()
[15:44:56] <jokke> uh oh
[15:45:09] <jokke> the lag increases
[15:45:25] <quattro_> jokke: only thing you can do is watch the disk space
[15:45:29] <jokke> shit
[15:45:44] <jokke> not true
[15:45:54] <jokke> rs.printSlaveReplicationInfo() like i said
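The lag jokke is reading off `rs.printSlaveReplicationInfo()` can also be computed from the raw `rs.status()` (replSetGetStatus) document, which is handy for scripted monitoring. A minimal sketch, assuming the standard `members`, `stateStr`, and `optimeDate` fields of that command's output; the helper name is illustrative:

```python
from datetime import datetime

def member_lag_seconds(status):
    """Compute per-secondary replication lag, in seconds, from a
    replSetGetStatus (rs.status()) document."""
    members = status["members"]
    primary = next(m for m in members if m["stateStr"] == "PRIMARY")
    return {
        m["name"]: (primary["optimeDate"] - m["optimeDate"]).total_seconds()
        for m in members
        if m["stateStr"] != "PRIMARY"
    }

# Synthetic status document with one primary and one lagging secondary
status = {"members": [
    {"name": "a:27017", "stateStr": "PRIMARY",
     "optimeDate": datetime(2016, 7, 25, 15, 45, 0)},
    {"name": "b:27017", "stateStr": "SECONDARY",
     "optimeDate": datetime(2016, 7, 25, 15, 44, 30)},
]}
print(member_lag_seconds(status))  # {'b:27017': 30.0}
```

Against a live deployment the status document would come from `client.admin.command("replSetGetStatus")` in pymongo; a lag that keeps growing, as jokke observes, means the secondary is falling further behind the primary's oplog.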
[15:58:41] <cagomez> what's the best mongodb module for flask?
[16:03:11] <jayjo> the driver for python is pymongo
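For Flask specifically, a common choice is the third-party Flask-PyMongo extension, which wraps pymongo with Flask's config handling. A minimal sketch, assuming a running mongod; the URI, database name, and `users` collection are placeholders:

```python
from flask import Flask, jsonify
from flask_pymongo import PyMongo  # third-party extension wrapping pymongo

app = Flask(__name__)
app.config["MONGO_URI"] = "mongodb://localhost:27017/mydb"  # placeholder URI
mongo = PyMongo(app)

@app.route("/users")
def users():
    # mongo.db is the database named in the URI; project away _id
    # so the documents are directly JSON-serializable
    return jsonify(list(mongo.db.users.find({}, {"_id": 0})))
```

Plain pymongo works just as well in Flask; the extension mainly saves the boilerplate of managing the client alongside the app config.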
[16:20:28] <ThePendulum> cheeser: can't find it if so
[18:11:50] <xingped> new to mongo here. is it possible to, in one query, return a referenced document in a query on the parent? i.e. do all the embedding/combining at the query level instead of making separate calls.
[18:12:13] <cheeser> no
[18:22:15] <xingped> okay thanks
[19:02:31] <n1colas> Hello
[20:36:11] <^GitGud^> for picture serving, better idea to store on mongodb thru gridfs or better to put filename in mongodb then letting webserver pick up and serve the file?
[20:36:14] <^GitGud^> quality is important
[20:38:19] <cheeser> picture quality would be the same regardless
[20:40:03] <^GitGud^> ok so which would you go with?
[20:45:24] <cheeser> depends on the language
[20:45:44] <cheeser> i wrote a java wrapper for the gridfs api so that's what *I* would use :)
[20:52:22] <StephenLynx> kek
[20:52:39] <StephenLynx> ^GitGud^, I store the files directly on gridfs.
[20:53:01] <StephenLynx> works perfectly and I don't have to bother with scaling.
[20:53:18] <^GitGud^> cheeser, nodejs
[20:53:24] <StephenLynx> same here.
[20:53:34] <StephenLynx> and I don't think the runtime environment matters.
[20:53:35] <^GitGud^> StephenLynx, oh i see. alright. maybe i will go the grid route then. thanks
[20:54:02] <StephenLynx> the biggest issue is "what do I do when I have so much data on these files that I can't keep them on a single server?"
[20:56:00] <^GitGud^> yeah i see your point
[20:56:26] <^GitGud^> i suppose with mongodb and sharding its possible to hold different data in different file systems
[20:56:31] <StephenLynx> exactly
[20:56:57] <cheeser> yeah. certainly querying by metadata is easier with gridfs
[20:56:58] <^GitGud^> but if its just a file system then its more difficult and i suppose with that option i'd have to write glue code to connect the stuff. making things more and more complex lol
[20:57:15] <^GitGud^> good points. thanks guys/girls
[20:57:24] <StephenLynx> plus with mongo you have its own memcache for said files.
[20:57:38] <^GitGud^> which is more optimized than anything i could come up with
[20:57:39] <^GitGud^> yea
[20:57:43] <StephenLynx> aye
[20:59:22] <StephenLynx> I can't say I ever had a single issue with that approach, and my software relies on it intensively.
[20:59:30] <StephenLynx> over a year now.
[20:59:34] <StephenLynx> multiple sites using it.
[20:59:58] <StephenLynx> especially now that I migrated my code to use the new node gridfs api.
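The "new node gridfs api" here is the Node.js driver's GridFSBucket (introduced in the 2.x driver, replacing the older GridStore). A minimal upload/download sketch, assuming a running mongod and a local `photo.jpg`; names and the `pictures` bucket are illustrative:

```javascript
const { MongoClient, GridFSBucket } = require('mongodb');
const fs = require('fs');

// 2.x-era driver: the connect callback yields the database handle
MongoClient.connect('mongodb://localhost:27017/pics', (err, db) => {
  if (err) throw err;
  const bucket = new GridFSBucket(db, { bucketName: 'pictures' });

  // Upload: pipe a local file into GridFS
  fs.createReadStream('./photo.jpg')
    .pipe(bucket.openUploadStream('photo.jpg'))
    .on('finish', () => {
      // Download: stream it back out, e.g. straight into an HTTP response
      bucket.openDownloadStreamByName('photo.jpg')
        .pipe(fs.createWriteStream('./copy.jpg'))
        .on('finish', () => db.close());
    });
});
```

Because both directions are plain streams, serving a picture is just piping the download stream into the response object, which is what makes the GridFS-only approach StephenLynx describes workable without separate filesystem glue.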
[22:38:38] <naf> hi
[22:38:46] <naf> does anybody know why i could be getting this error?:
[22:38:53] <naf> 2016-07-25T18:11:25.833-0400 Failed: error reading separator after document #1: bad JSON array format - found no opening bracket '[' in input source
[22:38:53] <naf> 2016-07-25T18:11:25.833-0400 imported 0 documents
[22:38:59] <naf> or at least how i can track down
[22:39:05] <naf> where in the file the invalidity is?