PMXBOT Log file Viewer

#mongodb logs for Tuesday the 15th of December, 2015

[00:33:02] <bros> I have a collection called orders that has a field called "items" that represents an array of objects. What's the best way to loop/aggregate over this? Map/reduce, aggregation?
[00:35:21] <cheeser> aggregation is better than map/reduce
[01:45:49] <C0nundrum> can you stream an aggregate query?
[01:49:53] <cheeser> you can get a cursor back, yes.
[01:52:45] <C0nundrum> I see docs for batches using cursor but would this work http://mongoosejs.com/docs/api.html#querystream_QueryStream on aggregate ?
[01:54:15] <cheeser> i'm not a mongoose user so i don't know how that'd look
[01:55:31] <C0nundrum> hm i see. I will try with cursors then
[02:12:09] <C0nundrum> ugh cursor returning undefined :/
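For reference, a sketch of the kind of aggregation bros and C0nundrum are discussing: unwind the embedded "items" array and group over it, streaming results from the cursor that aggregate() returns. Only the collection and field name "items" come from the question; the sku/qty fields and the stage contents are invented for illustration.

```javascript
// Pipeline over an embedded array: $unwind flattens "items", $group
// then totals a hypothetical "qty" per hypothetical "sku".
const itemsPipeline = [
  { $unwind: '$items' },
  { $group: { _id: '$items.sku', total: { $sum: '$items.qty' } } },
];

// With the native Node driver, aggregate() hands back a cursor you can
// iterate directly (no Mongoose QueryStream needed):
//   const cursor = db.collection('orders').aggregate(itemsPipeline);
//   for await (const doc of cursor) console.log(doc);

// In-memory simulation of what those two stages compute:
function simulate(orders) {
  const totals = {};
  for (const order of orders)
    for (const item of order.items)                            // $unwind
      totals[item.sku] = (totals[item.sku] || 0) + item.qty;   // $group + $sum
  return totals;
}

const sample = [
  { items: [{ sku: 'a', qty: 2 }, { sku: 'b', qty: 1 }] },
  { items: [{ sku: 'a', qty: 3 }] },
];
console.log(simulate(sample)); // { a: 5, b: 1 }
```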
[07:16:37] <Kosch> hey guys. mongo 3.0.7: I've set the systemlog to syslog, verbosity to 0 and profiling is also 0, but inserts/updates/queries... are still logged. Have I missed something?
[07:24:08] <C0nundrum> Is there a reason mongodb has the arbitrary memory limit of 100MB for aggregation pipelines ... when there is more than enough memory free on the system?
[07:26:26] <C0nundrum> Is it to control the amount of time pipeline stages are allowed to take?
[07:28:02] <Kosch> "100MB is just an arbitrary number and as for my knowledge the 100MB limit was because if we consider each document to be of size 100KB and each pipeline returns 1000 documents so it reaches 100MB limit." - seems indeed just a number they chose.
[07:28:58] <C0nundrum> Well i can understand that as a default. What i don't understand is why it isn't configurable
[07:29:21] <Kosch> feature request :)
[07:33:34] <C0nundrum> pains me to have 24GB of ram and have to use my hd for storing data that should be in memory x.x
[07:33:55] <Kosch> use ramdisk :-D
[07:34:20] <C0nundrum> what i mean is the fact that i have to use allowDiskUse on my aggregation pipeline :/
[07:34:29] <Kosch> yes
[07:35:35] <Kosch> so maybe mount _tmp directory as ramdisk... as workaround :)
[07:37:19] <Kosch> smth like mount -t tmpfs -o size=8G /my/little/pony/dbpath/_tmp ...
[07:37:39] <C0nundrum> hm true i could create a ram disk and have the _tmp directory be a symbolic link to it
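For context, the flag C0nundrum is referring to is an option on aggregate(): allowDiskUse lets stages that hit the 100MB in-memory limit spill to files in the dbpath's _tmp directory (the same directory Kosch suggests mounting as tmpfs). A minimal sketch; the pipeline stages are elided.

```javascript
// allowDiskUse is passed as an option alongside the pipeline; without
// it, a stage that exceeds the 100MB limit aborts the aggregation.
const aggOptions = { allowDiskUse: true };
// db.collection('orders').aggregate([/* stages */], aggOptions);
console.log(aggOptions.allowDiskUse); // true
```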
[08:06:33] <Rabenklaue> Hi, I need to implement a document-versioning, but I do not find a way querying all documents and return those with the highest (version) values. http://codepad.org/6sggdwGW
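One common shape for Rabenklaue's "highest version per document" query: sort descending by version, then $group taking the $first (i.e. highest-version) document per id. The field names docId and version are assumptions; the actual schema is in the codepad paste.

```javascript
// Sort puts the highest version first within each docId, so $first
// picks the latest revision of each document.
const versionPipeline = [
  { $sort: { docId: 1, version: -1 } },
  { $group: { _id: '$docId', latest: { $first: '$$ROOT' } } },
];
// db.collection('documents').aggregate(versionPipeline);

// In-memory equivalent, for illustration:
function latestVersions(docs) {
  const best = {};
  for (const d of docs)
    if (!best[d.docId] || d.version > best[d.docId].version)
      best[d.docId] = d;
  return best;
}

const versionDocs = [
  { docId: 'a', version: 1 },
  { docId: 'a', version: 3 },
  { docId: 'b', version: 2 },
];
console.log(latestVersions(versionDocs)); // a -> version 3, b -> version 2
```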
[08:28:50] <msn> is there a way i can make sure that read queries do not go to one secondary at all
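One standard answer to msn's question (not given in the channel): mark that secondary as hidden in the replica set config, so drivers never route reads to it while it keeps replicating and voting. Shell sketch; the member index 2 is an arbitrary example.

```javascript
// Hidden members must also have priority 0 so they can never be elected.
// cfg = rs.conf();
// cfg.members[2].priority = 0;
// cfg.members[2].hidden = true;
// rs.reconfig(cfg);

// The two settings that matter on that member:
const hiddenMember = { priority: 0, hidden: true };
console.log(hiddenMember.hidden && hiddenMember.priority === 0); // true
```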
[08:31:03] <sabrehagen> the syntax of the following criteria is suboptimal. will mongodb perform more poorly than if it were structured correctly, or does it normalise it internally and there is no performance degradation?
[08:31:07] <sabrehagen> { requestUser: { $nin : [ '5498e258a7fbbfcc12c3fa15', '565f9406ca3491aff9b9ad4a' ] }, $and : [ { requestUser: '562057d20cf8ea6727f91be4' } ] }
[08:31:11] <sabrehagen> (i have an index on requestUser)
[11:58:40] <mtree> what is the most efficient way to find if records with given string params exist?
[11:59:01] <mtree> i just need to know if there's any record with given params
[17:07:29] <Boemm> Hi Guys, I search for some web UI to connect to mongodb 3.0+ ... can somebody give me some tips about such a tool?
[17:08:27] <Boemm> It's mainly for doing queries ... but all I found seems to be pretty out of date ... and does not work with mongodb 3+
[17:11:03] <GothAlice> Has anyone tried storing gettext string translations in MongoDB? I'm thinking one record per language a la {_id: {lang: <IETF BCP-47 language code>, domain: 'messages'}, strings: [{key: value}, …]}
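GothAlice's proposed schema, written out as one concrete record; the language code and translation pair are invented examples of the shape she describes.

```javascript
// One record per (language, gettext domain), with the translation
// catalogue embedded as an array of key/value pairs:
const translationDoc = {
  _id: { lang: 'de-DE', domain: 'messages' },
  strings: [{ 'Hello, world!': 'Hallo, Welt!' }],
};
console.log(translationDoc._id.lang, translationDoc.strings.length); // de-DE 1
```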
[18:08:49] <Kristjan85> how can i remove the first entry in a collection?
[18:11:03] <cheeser> define "first"
[18:12:05] <Kristjan85> what does this do:
[18:12:06] <Kristjan85> db.collection('chattexts').remove({}, true);
[18:13:27] <Kristjan85> it's the first entry in a collection
[18:13:33] <Kristjan85> cheeser, can u help me?
[18:14:06] <Kristjan85> i am making a chatroom, but when the lines exceed 10 then i want to delete the oldest entry
[18:14:10] <mtree> is this { $and: [ { "result.ota": item.result.ota }, { "cacheKey": item.cacheKey } ] }
[18:14:13] <Kristjan85> from the chattexts
[18:14:25] <mtree> the same as this? {'ota': item.ota, 'cacheKey': item.cacheKey }
[18:14:37] <mtree> please ignore double quotes ;)
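On mtree's interleaved question: an explicit $and over two different fields is equivalent to simply listing both fields in one query document. But note the two forms quoted above are not the same query: the first matches the embedded path "result.ota", the second a top-level "ota". The item values here are invented placeholders.

```javascript
// Placeholder document standing in for mtree's `item`:
const item = { result: { ota: 'x' }, cacheKey: 'k' };

// Explicit $and over distinct paths...
const explicitAnd = {
  $and: [{ 'result.ota': item.result.ota }, { cacheKey: item.cacheKey }],
};
// ...is the same query as the implicit-AND form, provided the SAME
// dotted path is kept (writing plain 'ota' would query a different field):
const implicitAnd = { 'result.ota': item.result.ota, cacheKey: item.cacheKey };

console.log(implicitAnd['result.ota'], implicitAnd.cacheKey); // x k
```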
[18:16:44] <cheeser> Kristjan85: why would you want to limit it like that?
[18:17:06] <Kristjan85> its a simple chatroom for a test job
[18:17:21] <Kristjan85> i want the server to hold 5 text lines
[18:18:02] <cheeser> https://docs.mongodb.org/manual/core/capped-collections/
[18:18:48] <Kristjan85> this deletes all:
[18:18:48] <Kristjan85> db.collection('chattexts').remove({}, {justOne: true});
[18:18:57] <Kristjan85> i don't know why
[18:19:02] <Kristjan85> i read the docs also
[18:19:10] <Kristjan85> https://docs.mongodb.org/v3.0/reference/method/db.collection.remove/
[18:23:42] <Kristjan85> what does db.collection.findOne do?
[18:23:50] <Kristjan85> can it find first entry?
[18:24:09] <cheeser> use a capped collection
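Spelling out cheeser's suggestion: a capped collection with max: 10 evicts the oldest document automatically on insert, so no remove() call is needed at all. Shell sketch; size is a required byte bound and the 4096 here is an arbitrary choice.

```javascript
// Create once, then just insert; MongoDB keeps only the newest 10 docs:
// db.createCollection('chattexts', { capped: true, size: 4096, max: 10 });

// In-memory illustration of the FIFO behaviour a capped collection gives:
function cappedInsert(coll, doc, max) {
  coll.push(doc);
  if (coll.length > max) coll.shift(); // oldest entry is evicted
  return coll;
}

let chat = [];
for (let i = 1; i <= 12; i++) cappedInsert(chat, { line: i }, 10);
console.log(chat[0].line, chat.length); // 3 10
```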
[19:05:36] <vivoo> Hello
[19:06:30] <Derick> hi
[19:08:45] <vivoo> I'm wondering about using mongoDB for an app, but i need to import (with small processing) data from a mysql db and keep the mongo one up to date
[19:09:36] <vivoo> Any advice about the correct way to do that is welcome :)
[19:11:08] <cheeser> you want a constant sync from mysql to mongo? or just a one off?
[19:13:00] <vivoo> Well, i'd like to alert the admin that mysql changed, and let him allow the sync.
[19:14:15] <vivoo> The mysql db has an updates table, so it's pretty easy to detect changes.
[19:16:42] <vivoo> But i'd like to add processed fields during sync
[19:41:12] <StephenLynx> vivoo I would implement something on application level.
[19:41:19] <StephenLynx> I don't think there is anything out of the box.
[19:42:33] <dddh> http://arstechnica.com/security/2015/12/13-million-mackeeper-users-exposed-after-mongodb-door-was-left-open/
[19:42:52] <dddh> "there are at least 35,000 publicly available, unauthenticated instances of MongoDB running on the Internet. That's about 5,000 more than there used to be in July"
[19:43:13] <StephenLynx> kek
[19:43:16] <StephenLynx> that article
[19:43:21] <StephenLynx> I remember it.
[19:43:55] <vivoo> StephenLynx: Thanks for your answer
[19:43:59] <StephenLynx> that is really old news though, arstechnica is really crap
[19:45:40] <StephenLynx> oh hold on, that is another place that did the same old thing
[19:45:49] <StephenLynx> kek, I thought it was the same thing
[19:48:04] <dddh> I guess best practices for mongo is to run it on private IP address
[19:48:24] <cheeser> and it's the default install now
[19:49:23] <dddh> and IPv6 is disabled by default so the problem with publicly available instances is lame dbas
[19:52:03] <cheeser> pretty much
[19:53:56] <Derick> StephenLynx: it's a new article on the same thing though... yawn
[19:59:28] <Zelest> Derick, i promised to poke you for a friend to get pecl-mongo for php7 :)
[20:04:59] <StephenLynx> dddh just enable authentication, problem solved.
[20:05:09] <StephenLynx> or don't enable external connections.
[20:05:31] <StephenLynx> some people are just too dumb to realize that you have to have one or the other.
[20:05:45] <StephenLynx> it is the same as anything else.
[20:10:39] <dddh> StephenLynx: I prefer vrfs with private IP addresses
[20:11:16] <StephenLynx> or that
[20:11:46] <StephenLynx> my point is that these news are just testaments to the general IT workforce dumbassery
[20:12:12] <StephenLynx> but I wouldn't expect much more from such a company that sells crap like mackeeper
[20:16:08] <Derick> Zelest: not going to happen for pecl-mongo, only for pecl-mongodb
[21:08:04] <edrocks2> hello
[21:08:27] <edrocks2> can an op unban my other nick `edrocks`? irccloud spammed nickserv with a wrong password and got it banned
[21:14:04] <redondos> how can I make mongodump 3.0.1 limit to a certain number of documents?
[21:15:48] <edrocks2> redondos: maybe use `.limit()` with the --query flag
[21:15:59] <edrocks2> not sure if it will work though
[21:20:23] <edrocks> hello
[21:20:45] <edrocks> GothAlice: it seems to be working now
[21:21:11] <GothAlice> Ah, that might have been why I couldn't find anything. I was looking at the wrong channel's moderation queue. XP
[21:21:14] <edrocks> gothalice I think switching to edrocks2 then back fixed it. sorry for bothering you
[21:21:23] <GothAlice> No worries, I'm glad it's resolved!
[21:21:28] <edrocks> thanks!
[21:23:49] <GothAlice> edrocks: Even in the channel I was looking in, I did notice the massive irccloud reboot, though. Wagh. ;P
[21:24:09] <GothAlice> Hotfix all the things!
[21:24:32] <edrocks> that must be a pain for them to update
[21:25:27] <GothAlice> Luckily, I'd imagine most changes can be rolled out progressively and non-disruptively.
[21:25:53] <GothAlice> As users roll off a given server/balancer, retire it, while passing new connections to the new machines. :)
[23:12:53] <sabrehagen> the syntax of the following criteria is suboptimal. will mongodb perform more poorly than if it were structured correctly, or does it normalise it internally and there is no performance degradation?
[23:12:57] <sabrehagen> { requestUser: { $nin : [ '5498e258a7fbbfcc12c3fa15', '565f9406ca3491aff9b9ad4a' ] }, $and : [ { requestUser: '562057d20cf8ea6727f91be4' } ] }
[23:13:00] <sabrehagen> (i have an index on requestUser)
[23:15:32] <Boomtime> sabrehagen: the best way to know for certain is to run an explain - i'm not sure if it will trip over the $nin, which has sucky performance (because indexes are for presence matching, not absence matching)
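Boomtime's advice in concrete form: run explain on the exact criteria and check whether the winning plan uses the requestUser index. Shell sketch, commented out; the stage names below are the ones that appear in explain output.

```javascript
// db.coll.find({
//   requestUser: { $nin: ['5498e258a7fbbfcc12c3fa15', '565f9406ca3491aff9b9ad4a'] },
//   $and: [{ requestUser: '562057d20cf8ea6727f91be4' }],
// }).explain('executionStats');

// Roughly what to look for in queryPlanner.winningPlan:
const good = { stage: 'IXSCAN' };   // the index was used
const bad = { stage: 'COLLSCAN' };  // full collection scan, index skipped
console.log(good.stage, bad.stage); // IXSCAN COLLSCAN
```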
[23:16:54] <sabrehagen> Boomtime: okay, thanks. if i have any trouble interpreting the results, i guess i'll ask again :)
[23:17:28] <sabrehagen> also, as a general survey, when do you (and everyone else in the channel) pick mongodb as the db over competing technologies e.g. couch*, postgres.