#mongodb logs for Thursday the 24th of September, 2015

[02:54:40] <jigax> hello everyone, so I have a query which uses $inc and the $ positional operator to increment the quantity for a product in an object array, and it works as expected. How can I go about it if the product doesn't exist and I would like to insert it (upsert)?
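One common pattern for this, sketched below: attempt the positional $inc first, and fall back to a $push when no array element matched. The collection and field names (orders, items, sku, qty) are invented for illustration.

    // Try to increment an existing line item; if nothing matched,
    // push a new element instead. Names here are hypothetical.
    var res = db.orders.update(
        { _id: orderId, "items.sku": "abc" },
        { $inc: { "items.$.qty": 1 } }
    );
    if (res.nMatched === 0) {
        db.orders.update(
            { _id: orderId },
            { $push: { items: { sku: "abc", qty: 1 } } }
        );
    }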
[04:10:37] <speewave> Hey all, just a quick question to see if i'm doing something the right way, or the best way
[04:11:13] <speewave> I'm writing a music site, and every week, i want it to save the Plays and Purchases to an array, and clear the variable
[04:12:09] <Boomtime> hey speewave, please continue...
[04:12:42] <speewave> each song is a document, and they have the plays/purchases-this-week variables as well as PastWeeks as an array of these figures
[04:13:22] <speewave> so would the best thing to do be the obvious, write these to an array, push them, and clear the variables
[04:13:38] <speewave> if the site gets big, how big of a performance hit would i be taking
[04:14:26] <speewave> i'm thinking of using the available multi-update options and maybe the bulk write API i found in the docs today (i'm still rather new)
[04:14:40] <speewave> Using the Node driver btw, (straight mongodb, not mongoose)
[04:44:05] <rgenito> yayyyy mongo people
[04:47:24] <Boomtime> speewave: sorry, i was away a bit...
[04:47:35] <rgenito> in mongo, is there a way to have a unique field that ISN'T required?
[04:48:06] <Boomtime> rgenito: https://docs.mongodb.org/manual/core/index-sparse/#sparse-and-unique-properties
[04:48:30] <rgenito> ty
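The sparse-unique combination from that link, as a minimal shell sketch (collection and field names assumed):

    // Unique but not required: documents without "email" are simply
    // not indexed, so missing values never collide.
    db.users.createIndex({ email: 1 }, { unique: true, sparse: true })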
[04:48:38] <Boomtime> speewave: your use case requires more information - arrays are really handy in small doses
[04:48:57] <Boomtime> if the array content constantly grows you need to consider the impact that will have
[04:49:24] <rgenito> Boomtime, *sigh* :( maybe the problem isn't mongo itself, but the mongoengine wrapper i'm using for Django
[04:49:33] <Boomtime> it has two effects; adding to an array means the document grows in size - its allotted size on disk may not be enough and it might entail re-writing the whole document each time
[04:49:57] <Boomtime> also, a document has a maximum size - 16MB
[04:50:21] <Boomtime> you cannot keep growing an array forever, not only will that get slower over time, it will eventually become impossible
[04:50:55] <Boomtime> arrays are really handy for data that generally doesn't grow indefinitely - people's addresses for example, you tend not to have more than a few different mailing addresses
[04:51:27] <preaction> yeah. growing arrays poorly is a good way to make mongo devour disk space :(
[04:51:38] <Boomtime> an array in a user document could be used to hold every address that particular user needs - and you can even restrict them to, say, 100 addresses in your business logic
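One way to enforce that kind of cap in the database itself rather than in business logic, as a rough sketch (field names assumed): $push with $each and $slice keeps only the newest N elements.

    // Keep at most the 100 most recent addresses; older entries are
    // trimmed automatically on each push.
    db.users.update(
        { _id: userId },
        { $push: { addresses: { $each: [newAddress], $slice: -100 } } }
    )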
[04:51:48] <rgenito> preaction, hey :D
[04:52:40] <speewave> Well i was storing the data in an array for analytical purposes
[04:52:50] <speewave> so artists can see how their song is selling/playing each week
[04:53:03] <rgenito> ahh, is there a way for a collection to have MANY sparse indexes? ;x
[04:55:11] <speewave> i'm working with some other people, i don't think they've set a time frame for how many weeks will be stored in the DB *can't imagine it being too many*
[04:55:31] <rgenito> http://pastie.org/private/sbnizj9ri27zzk6l18cwq
[04:55:43] <rgenito> ^--- any suggestion on how i can prevent the same phone or email or ip from being in the whitelist twice?
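Going by the question, the sparse-unique approach linked above suggests one plausible answer (field names guessed from the question): a separate sparse unique index per field.

    // One sparse unique index per field: any single phone, email, or
    // ip can appear at most once, but each field may be absent.
    db.whitelist.createIndex({ phone: 1 }, { unique: true, sparse: true })
    db.whitelist.createIndex({ email: 1 }, { unique: true, sparse: true })
    db.whitelist.createIndex({ ip: 1 }, { unique: true, sparse: true })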
[04:55:48] <speewave> The arrays would be arrays of objects [{plays: #, purchases: #}]
[04:57:42] <speewave> what might be a better way to go about that?
[05:02:58] <speewave> or a better way to tackle this situation as a whole may be a better question ;)
[05:03:49] <_root> Hi, I'm doing geospatial queries in mongoose using .within.circle(), and I'd like to know what units the radius is in (miles or kilometers) and whether I need to divide it by 3963.2, the equatorial radius of the earth in miles? thanks :)
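The question went unanswered in the channel, but with the legacy $centerSphere form the radius is expressed in radians, so a distance in miles is divided by the earth's radius in miles (~3963.2). A raw-shell sketch (collection name and coordinates invented):

    // A 5-mile circle around a point: miles / 3963.2 gives radians.
    db.places.find({
        loc: { $geoWithin: { $centerSphere: [[-73.97, 40.77], 5 / 3963.2] } }
    })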
[05:11:13] <speewave> would nested objects be better than arrays?
[05:12:00] <speewave> so pastWeek: { <week#>: { plays: #, buys: # } }
[05:14:56] <speewave> or is that as bad as arrays?
[05:37:49] <speewave> ok after looking at the thing on how to do time series data, i think i know how i wanna tackle this, or have a very close idea
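The time-series write-up speewave mentions usually boils down to a pre-aggregated bucket document: one document per song per period (say, per year) with a fixed map of weeks, so documents stop growing after a bounded number of updates. A sketch with hypothetical names:

    // Upsert the per-year bucket and bump this week's play counter.
    db.weeklyStats.update(
        { songId: songId, year: 2015 },
        { $inc: { "weeks.39.plays": 1 } },
        { upsert: true }
    )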
[05:38:10] <speewave> well i g2g
[10:24:32] <moura> hey guys, i have a base64-encoded file already in memory - how do i save it to GridFS using express and mongoose? all the examples out there have a filepath
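One way this was commonly done with mongoose at the time, assuming the third-party gridfs-stream package (an untested sketch; variable names invented): decode the base64 string into a Buffer and end a GridFS write stream with it - no file path involved.

    // Write an in-memory base64 payload straight into GridFS.
    var Grid = require('gridfs-stream');
    var gfs = Grid(mongoose.connection.db, mongoose.mongo);
    var ws = gfs.createWriteStream({ filename: 'upload.bin' });
    ws.on('close', function (file) { console.log('stored', file._id); });
    ws.end(new Buffer(base64String, 'base64'));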
[13:38:10] <brianV> Hi all. Quick question - is it possible to mix strong consistency and eventual consistency within a single Mongo cluster? In practice, we have records that require both levels
[13:38:43] <brianV> can that be done on a per-request level? IE, say I want record A with strong consistency, Record X can have a weaker consistency guarantee?
[13:38:45] <StephenLynx> you mean, write concern?
[13:40:06] <brianV> StephenLynx: perhaps. Ultimately, I don't want to risk losing writes, so write concern would always be MAJORITY
[13:40:19] <brianV> anything less, there is potential for rollback if I understand
[13:40:19] <deathanchor> brianV: look at writeconcern tagging
[13:40:21] <StephenLynx> afaik, you set the write concern on the operation itself.
[13:40:34] <StephenLynx> but I have never used anything other than the default.
[13:41:45] <deathanchor> StephenLynx: the default is write to journal or disk on a single member
[13:42:13] <deathanchor> brianV: write concern majority will have a performance impact
[13:42:47] <brianV> deathanchor: yes, that's understood. Unfortunately, we really can't lose client records, so we want to ensure they are on the replica set
[13:42:59] <deathanchor> so I would use tagging or write concern majority for specific operations
[13:43:33] <deathanchor> depends on the write operation no?
[13:44:00] <brianV> deathanchor: ok, let me attempt to re-cast the question
[13:44:05] <deathanchor> like if you are just updating a user's login time, that's not critical info really. but if you are updating their password, I would use majority
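In the Node driver this per-operation choice looks roughly like the following (collection and field names assumed); the routine update keeps the default acknowledgement while the critical one waits for a majority:

    // Routine write: default write concern (w: 1).
    db.collection('users').updateOne(
        { _id: userId },
        { $set: { lastLogin: new Date() } },
        function (err, res) { /* ... */ }
    );

    // Critical write: wait for a majority of the replica set.
    db.collection('users').updateOne(
        { _id: userId },
        { $set: { passwordHash: hash } },
        { w: 'majority' },
        function (err, res) { /* ... */ }
    );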
[13:45:39] <brianV> deathanchor: ultimately, writes need to succeed for me. However, it's not a huge problem for *most* reads to show outdated data. ie, given a write and a (near-)synchronous read, while I need to ensure that the write is persisted as strongly as possible, it's actually OK for the read to return stale data while the write is happening
[13:46:22] <deathanchor> brianV: that's no majority then
[13:46:24] <brianV> however, for some queries, I absolutely need to have the latest, updated value
[13:46:47] <deathanchor> so for reads you require the latest, query the primary
[13:46:57] <deathanchor> for other reads, use secondaries
[13:46:57] <brianV> all writes go to the primary, right?
[13:47:09] <deathanchor> correct, writes can only be done on the primary
[13:47:54] <deathanchor> but if you want to guarantee that the write will always be propagated to the rest of the set eventually then you need to use majority
[13:48:01] <brianV> deathanchor: so in my case, I should do all writes as replica acknowledged, but in the meantime, reads that can have a weaker consistency guarantee can read from a secondary. queries that need a strong consistency guarantee should read from the primary
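Read preference can likewise be chosen per query in the Node driver, so the split brianV describes is expressible directly (sketch; names assumed):

    // Stale-tolerant read: allow a secondary to answer.
    db.collection('profiles').find({ userId: userId })
        .setReadPreference('secondaryPreferred')
        .toArray(function (err, docs) { /* ... */ });

    // Consistency-critical read: leave the default 'primary' preference.
    db.collection('orders').findOne({ _id: orderId }, function (err, doc) { /* ... */ });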
[13:48:04] <deathanchor> otherwise you have a chance of a rollback
[13:48:59] <deathanchor> if you're happy with the slight chance of a rollback, then the default (write to disk on primary) is enough.
[13:49:33] <deathanchor> I've had only one rollback and basically only lost 5 minutes worth of data which wasn't the worst thing ever
[13:50:02] <deathanchor> but anything more than write concern 1 (default) will have performance implications.
[13:50:56] <brianV> deathanchor: right, ok. Sounds like I can tweak it to what I need then. Different data has different needs for writes
[13:51:11] <deathanchor> brianV: yes that is best practice
[13:51:31] <brianV> deathanchor: if a client makes a purchase, we need to persist it absolutely. If they update their profile, we can afford to lose that in a rollback if necessary
[13:52:16] <deathanchor> critical data (ie. passwords, email, etc.) should have majority since you don't want that to roll back, but it changes less often than non-crit info (last login time, favourite colour, what you had for dinner)
[13:53:09] <brianV> deathanchor: of course. Ok, thanks so much for your help and patience!
[13:53:29] <deathanchor> eh, we are all here seeking help or offering it freely
[13:54:09] <brianV> deathanchor: well, hopefully soon I will be on the other side of the coin. At this point, Mongo is new to me, and I am proposing it for an insanely large project
[13:55:54] <deathanchor> so I'm working with another group who uses mongodb, and they can't maintain the performance as I do on my side. Write locks are a bitch in mongodb, thank god for tokumx getting around that (at a feature-deficit cost though).
[14:01:25] <cheeser> write locks? what's write locks, precious?
[15:32:58] <deathanchor> cheeser: exclusive write locking at collection level
[15:33:28] <deathanchor> tokumx has finer locks at document level for most operations
[15:47:06] <cheeser> deathanchor: it was a joke. 3.0 doesn't have the global/collection write locking.
[15:47:24] <deathanchor> uh.. not per the docs
[15:47:32] <deathanchor> still collection level locks
[15:47:40] <deathanchor> nothing finer than that
[15:47:48] <cheeser> if you use the wired tiger storage engine, no collection locks.
[15:53:40] <deathanchor> hmm.. I couldn't find that documented, you have a link to that?
[15:57:09] <cheeser> http://docs.mongodb.org/manual/core/storage/
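Per those docs, on 3.0 WiredTiger is opt-in and is selected when starting the server:

    mongod --storageEngine wiredTiger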
[16:00:23] <deathanchor> ah I was reading about the mmapv1 before which was collection level still
[16:00:35] <cheeser> and likely always will be
[16:00:42] <deathanchor> why is WT not the default in 3.0?
[16:00:43] <cheeser> WT is the default engine as of 3.2
[16:01:06] <deathanchor> is 3.2 stable released yet?
[16:01:25] <StephenLynx> we're at 3.0.x afaik
[16:06:36] <cheeser> deathanchor: it is not
[16:10:35] <deathanchor> is 3.2 supposed to get encryption of data at rest too? (disk encryption)
[16:10:58] <deathanchor> I watched that video on youtube from the mongo talks.
[16:11:28] <deathanchor> but the speaker was clear that she was "Not the PM and they would hunt me down with sticks" if she gave a date.
[16:18:05] <deathanchor> this part of the video: https://www.youtube.com/watch?v=g5LW6i-F9-E&t=2170
[16:20:56] <cheeser> ah, amalia.
[16:28:17] <deathanchor> she says next release, but is that 3.2 or something sooner or later, or is she basically playing the "I don't want to be beaten with sticks" plea?
[16:28:19] <Derick> afaik, 3.2 will get encryption at rest
[16:28:20] <deathanchor> woo! I'm still going to wait for benchmarks before moving to it, but I'll be sure to play with 3.2.
[18:35:33] <brianV> test
[19:07:39] <n0arch> when implementing x.509 certs for auth, do the config servers in a sharded cluster need certs?
[19:35:24] <melissamm> Hi guys, I'm having some trouble starting up mongo. I am using it on a remote machine that I SSH into. Should I change the bind_ip from 127.0.0.1 to the remote machine's IP address?
[19:35:52] <melissamm>
    melissa@simba:~$ sudo service mongod start
    mongod start/running, process 10623
    melissa@simba:~$ mongo
    MongoDB shell version: 2.6.11
    connecting to: test
    2015-09-24T19:27:55.279+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
    2015-09-24T19:27:55.280+0000 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
    exception: connect
[19:36:51] <melissamm> I get the same error message when I change the bind_ip in /etc/mongod.conf to the IP of my remote machine
[19:38:31] <n0arch> are you connecting to the database from the localhost or a remote host?
[19:40:47] <melissamm> everything is done at the remote host @n0arch
[19:41:11] <melissamm> I'm doing everything on the remote host, connecting, setting things up
[20:25:22] <alhof> are there any free tools to visualize mongo queries?
[20:26:31] <StephenLynx> visualize in what sense?
[20:32:24] <alhof> i am collecting a trove of numerical data in a collection. i would like an easy way to graph the results of queries i make to the collection. eg to graph the number of records per minute, to graph a key's value across records, things like that.
[20:33:01] <alhof> really excel-like things. i suppose i could export. but i was just wondering if there are any preexisting in-browser solutions.
[20:34:47] <StephenLynx> I have never heard of such a specialized tool for mongo.
[20:35:19] <StephenLynx> because you are not talking just about a GUI to query it, but a tool that interprets the data in a certain way.
[20:37:11] <alhof> StephenLynx okay, thanks
[21:11:03] <jimi_> If I have a document and that document has a property that is an array, how can i search that array to see if a value exists? findOne doesn't exist at the document level, correct?
[21:11:46] <cheeser> db.collection.find({somearray: 'blah'})
[21:11:52] <cheeser> iirc
[21:12:28] <jimi_> cheeser, I've already returned/fetched the row that i need, now i just want to check that object to see if it has some property... should i just do that in my scripting language?
[21:13:30] <cheeser> why not do it in the query?
[21:14:07] <daidoji> can I use the $type operator in an aggregation $group?
[21:14:30] <jimi_> cheeser, if i don't find a record within that array, i want to add it... if it doesn't find that entry, i still have to retrieve the record to add it, correct? (very new to mongo... just making sure i understand)
[21:14:56] <jimi_> cheeser, Something simple like this http://pastie.org/private/alovpqdls3vtn6ppisg
[21:16:58] <cheeser> you could use $addToSet
[21:18:01] <jimi_> :) didn't know about that function, that looks perfect - it adds the value unless it already exists
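For reference, the $addToSet version of that update looks something like this, reusing cheeser's field name (the document selector is assumed):

    // Adds 'blah' only if it is not already present in somearray.
    db.collection.update(
        { _id: docId },
        { $addToSet: { somearray: 'blah' } }
    )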
[21:22:33] <jimi_> cheeser, whatever they are paying you.. isn't enough
[21:22:40] <cheeser> :D
[21:22:47] <jimi_> cheeser, effective immediately, i am raising your pay... by 1/2... doubled
[21:22:49] <jimi_> :D
[21:22:56] <cheeser> glad it helped
[21:23:48] <jimi_> That's the ticket. lol i <3 mongo now
[21:24:08] <jimi_> Coming from MySQL and postgresql... having to query everything out, and all these crazy joins .. f*** that
[21:27:12] <daidoji> ...
[21:27:36] <daidoji> cheeser: do you know if I can aggregate by types?
[21:31:02] <cheeser> { $group : {$type : "$somefield"}} ?
[22:02:21] <daidoji> cheeser: oh sweet
[22:02:23] <daidoji> thanks
[22:19:01] <daidoji> can I query mongo to see if it was built with --ssl?
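One likely way to check, hedged since the exact field layout varies by version and build: the buildInfo server command reports OpenSSL details when the binary was compiled with SSL support.

    // If this prints OpenSSL info, the binary was built with SSL;
    // the field's presence/shape is an assumption, not verified here.
    db.runCommand({ buildInfo: 1 }).openssl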
[22:54:18] <cheeser> daidoji: did that $group work? because i kinda pulled it out of my ... you know.
[22:54:21] <cheeser> :D