PMXBOT Log file Viewer

#mongodb logs for Wednesday the 27th of May, 2015

[00:03:36] <xaxxon> can you restrict what fields you send back when you call find? like if there's some big field that you don't care about?
[00:04:44] <svm_invictvs> hm?
[00:04:48] <svm_invictvs> xaxxon: I think that's the filter
[00:05:08] <xaxxon> http://docs.mongodb.org/manual/tutorial/project-fields-from-query-results/ - I think I found it
[00:05:21] <svm_invictvs> xaxxon: /win 6
[00:05:31] <svm_invictvs> xaxxon: sec, sorry that was an error
[00:05:43] <svm_invictvs> xaxxon: Yeah, projection
[00:05:46] <xaxxon> damn. I thought i won 6 points
[00:05:52] <xaxxon> can you give me the 6 points anyhow?
[00:05:54] <svm_invictvs> xaxxon: Yes, that does exactly what you wanted it to do
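The projection xaxxon found can be sketched like this (collection and field names are invented for illustration; the plain-JS function only imitates the server-side behaviour):

```javascript
// Plain-JS imitation of find() projections. The real mongo shell calls:
//   db.posts.find({author: "xaxxon"}, {bigBlob: 0})  // exclude one big field
//   db.posts.find({author: "xaxxon"}, {title: 1})    // include title (and _id)
function project(doc, projection) {
  const keys = Object.keys(projection);
  const including = keys.some((k) => projection[k] === 1);
  const out = {};
  if (including) {
    if (projection._id !== 0) out._id = doc._id; // _id comes back unless suppressed
    for (const k of keys) {
      if (projection[k] === 1 && k in doc) out[k] = doc[k];
    }
  } else {
    for (const k of Object.keys(doc)) {
      if (projection[k] !== 0) out[k] = doc[k];
    }
  }
  return out;
}

const doc = { _id: 1, title: "hi", bigBlob: "x".repeat(1000) };
console.log(project(doc, { bigBlob: 0 })); // { _id: 1, title: 'hi' }
console.log(project(doc, { title: 1 }));   // { _id: 1, title: 'hi' }
```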
[00:05:55] <svm_invictvs> sure
[00:05:58] <xaxxon> YAY
[00:06:09] <svm_invictvs> /win 6 xaxxon
[00:06:19] <svm_invictvs> It's not working. I'll have to try again later :)
[00:06:29] <xaxxon> oh is that real?
[00:06:38] <xaxxon> I thought we were on whose line is it
[00:07:15] <svm_invictvs> xaxxon: No, that's a joke, I just use /say
[00:07:27] <svm_invictvs> xaxxon: /win 6 is actually the command in irssi to switch windows
[00:07:35] <xaxxon> oh :) haha
[00:07:47] <xaxxon> ok :)
[00:08:10] <svm_invictvs> xaxxon: Yeah
[00:08:20] <xaxxon> damn now I really want the points
[00:08:36] <xaxxon> it's like upvotes on a self.post on reddit
[01:00:54] <kryo_> hey i'm generating some sample data for my db
[01:01:08] <kryo_> first i remove all elements from a collection, then i create 6 new elements
[01:01:38] <kryo_> seems the remove({}) call sometimes removes the new elements
[01:01:59] <joannac> ... remove({}) removes everything
[01:02:19] <kryo_> yup i want to remove all existing elements, and add 6 new ones
[01:02:39] <joannac> okay. so remove({}) first, then insert your 6 new ones
[01:02:42] <kryo_> sometimes when i call the procedure though, i'm left with less than 6 new elements
[01:03:03] <joannac> are you doing it asynchronously or something?
[01:03:03] <kryo_> it seems to be random
[01:03:11] <kryo_> using node.js and mongoose
[01:03:29] <kryo_> it appears the remove and save calls are running asynchronously
[01:04:03] <kryo_> sometimes i'm left with 6 elements, sometimes 1 or 2, or 3
[01:04:29] <joannac> ...okay?
[01:04:48] <joannac> well, if they're running asynchronously, then... that happens sometimes?
[01:04:50] <kryo_> looking for someone to confirm if mongoose does things async
[01:04:58] <kryo_> guess i can find out
[01:05:40] <kryo_> welp i guess that solves it then
[01:05:46] <kryo_> thanks
[01:05:49] <joannac> np
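kryo_'s race can be reproduced with plain promises; the fix is simply to wait for the remove to finish before inserting (in mongoose terms, don't start the saves until the remove's callback/promise has completed). The functions below are illustrative stand-ins, not mongoose APIs:

```javascript
// Simulate the race: a slow remove({}) landing after the inserts.
// Each run gets its own in-memory "collection" so runs don't interfere.
function slowRemoveAll(docs) {
  return new Promise((resolve) =>
    setTimeout(() => { docs.length = 0; resolve(); }, 10));
}

function insert(docs, doc) {
  return new Promise((resolve) =>
    setTimeout(() => { docs.push(doc); resolve(); }, 1));
}

async function racy() {
  const docs = [];
  slowRemoveAll(docs); // BUG: not awaited, runs alongside the inserts
  await Promise.all([1, 2, 3, 4, 5, 6].map((n) => insert(docs, { n })));
  await new Promise((r) => setTimeout(r, 20)); // let the remove land late
  return docs.length; // fewer than 6 (here 0): the inserts were wiped
}

async function sequential() {
  const docs = [];
  await slowRemoveAll(docs); // wait for the remove to finish first
  await Promise.all([1, 2, 3, 4, 5, 6].map((n) => insert(docs, { n })));
  return docs.length; // always 6
}

racy().then((n) => console.log("racy left", n, "docs"));
sequential().then((n) => console.log("sequential left", n, "docs"));
```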
[01:06:21] <devinceble> How do I use GridFS on the new .Net MongoDB Driver 2.0?
[01:08:13] <Boomtime> @devinceble: you need to use the legacy API for GridFS for now as the async specification for the API is not final
[01:08:24] <Boomtime> see https://jira.mongodb.org/browse/CSHARP-1247
[01:09:58] <devinceble> @Boomtime Thanks
[01:25:25] <kba> if I'm using mongoose in nodejs and I have a mongoose object that I save, and I then update that object and save it again, is all the data transferred?
[01:26:04] <kba> for instance, I set obj.foo = 1, then obj.save(), then obj.bar = 1 and obj.save().
[01:26:19] <kba> is foo=1 transferred twice?
[01:31:44] <joannac> almost certainly yes
[02:16:09] <MacWinne_> in mongo, i see a certain document field is of type ISODate("2014-06-24T23:10:10Z") however when I query with mongoose, I seem to get back just the string '2014-06-24T23:10:10Z'
[02:16:18] <MacWinne_> even though my schema has defined the field as Date
[02:17:13] <MacWinne_> is this expected behavior? I've recently started using Mongoose.. was using PHP before, and I think PHP returned the date field correctly.. though am not completely sure
[02:21:49] <joannac> sounds like a mongoose thing
[02:22:05] <joannac> does it show up as a date if you query for it in the mongo shell?
[02:37:13] <MacWinne_> joannac, yeah showing up as ISODate("2014-06-24T23:10:10Z")
[02:37:16] <MacWinne_> inside the shell
[02:38:04] <joannac> MacWinne_: so it sounds like mongoose is doing its own processing
[02:38:19] <MacWinne_> yeah.. seems like it..
[02:39:27] <MacWinne_> crappy.. i define the field as Date in the mongoose schema
[02:42:12] <xshk> mongodb spawning multiple processes with ubuntu init.d restart?
[02:42:20] <xshk> old PID is not killed
[04:13:19] <fontanon> Hi everybody! Observing my database db.stats() I have one doubt ... is the free space on disk pre-allocated by mongodb equal to (fileSize - (storageSize + indexSize))?
[04:22:27] <Boomtime> @fontanon: i believe that will be very close, although not exact - the difference is in the amount of space remaining in the extents already allocated to the indexes
[04:23:05] <Boomtime> by way of comparison:
[04:23:20] <Boomtime> "storageSize" is the total amount of extents allocated for storage of data
[04:23:25] <fontanon> aha
[04:23:31] <Boomtime> "dataSize" is the actual size of the data contained in those extents
[04:23:43] <Boomtime> "indexSizes" is analogous to "dataSize" but for indexes
[04:23:48] <fontanon> yep
[04:23:57] <Boomtime> there is no analogous storageSize for indexes :/
[04:24:11] <Boomtime> that is my understanding, take it with a grain of salt
[04:25:09] <Boomtime> this means of course that you can't know precisely how much free space exists already in the free lists, but you can get a very close approximation
[04:25:10] <fontanon> Boomtime, so mongodb does not only pre-allocate space for data but for indexes too
[04:25:35] <fontanon> and this approximation is fileSize - (storageSize + indexSize)
[04:26:49] <Boomtime> the pre-allocation is for a database
[04:27:07] <Boomtime> from within those files, sections are carved out, called extents, and allocated to collections for data or indexes
[04:27:41] <Boomtime> when you work out the "free space" using the method you have, you are getting a combination of three things in the result:
[04:27:51] <Boomtime> 1. unassigned extents
[04:28:02] <Boomtime> 2. space on the free list for data within extents
[04:28:20] <Boomtime> 3. (small) spare spaces inside extents on the indexes
[04:28:44] <Boomtime> oh, i'm sorry, i think you have accounted for #2 by using storageSize
[04:29:07] <Boomtime> ok, so just #1 and #3 are combined in the result
[04:29:32] <fontanon> and is the unassigned extents' size close to the free pre-allocated space?
[04:29:51] <Boomtime> those are the same thing
[04:30:12] <fontanon> alright
[04:30:16] <Boomtime> unassigned extents exist inside pre-allocated files but are not attributed yet to any collection within that database
[04:30:34] <fontanon> understood
[04:30:40] <Boomtime> once an extent is assigned for data, storageSize increases
[04:31:04] <Boomtime> note that dataSize must increase too, though by far less most likely
[04:31:34] <Boomtime> indeed, it could be as little as one byte increase to dataSize that forces a new extent to be allocated - which might be 512MB
[04:31:59] <Boomtime> there is no real way to know this for indexes
[04:32:22] <Boomtime> if you insert a billion documents, then slowly remove all but one, you'll still have a large index
[04:33:37] <fontanon> the current behaviour I observe on my disk (by looking at df -h frequently) is: new writes use additional disk space, but this space turns free again once a couple of GB or so are consumed.
[04:34:06] <Boomtime> new writes should consume existing free space first, when possible
[04:34:25] <Boomtime> are there many deletes occurring?
[04:34:28] <fontanon> nope
[04:34:34] <Boomtime> right..
[04:34:35] <fontanon> no deletings at all now
[04:34:55] <Boomtime> ok, so new writes frequently trigger new allocations
[04:35:13] <fontanon> the point is i moved half of the chunks from the primary shard but I expected to see the proper free space
[04:35:41] <Boomtime> there will be no change on the filesystem from that
[04:35:50] <fontanon> Then I learnt mongodb is not returning free space to the OS
[04:35:55] <fontanon> yep
[04:36:40] <Boomtime> it is even possible that no extents got returned to the free list, depending on the distribution of documents that got transferred to the other shard
[04:37:06] <Boomtime> document storage will have been freed up, but whole extents might not have been released
[04:37:24] <fontanon> alright
[04:37:43] <fontanon> currently storageSize is the half of fileSize.
[04:38:19] <fontanon> in the primary shard
[04:38:48] <Boomtime> half is a relative metric - what are the numbers?
[04:39:01] <Boomtime> mongodb tries to keep ahead of the curve, so the last file allocated is almost always empty
[04:39:23] <Boomtime> it's also the largest file at the start, until the filesize reaches its maximum of 2GB
[04:39:49] <fontanon> "dataSize" : 23278657872, "storageSize" : 36217163664, "indexSize" : 4388868624, "fileSize" : 62180556800,
[04:40:56] <Boomtime> btw, passing "1024 * 1024" as a parameter to db.stats() will make that easier to read
[04:41:09] <fontanon> thats a good thing!
[04:42:01] <fontanon> "dataSize" : 22202, "storageSize" : 34539, "indexSize" : 4186, "fileSize" : 59300,
[04:42:53] <Boomtime> seems you have a good bit of free space in that db
[04:42:59] <fontanon> 59300 - (4186 + 34539) = 20575
[04:43:15] <fontanon> around 20GB, isn't it?
[04:43:41] <fontanon> perhaps the best thing would be for me to monitor the storageSize
[04:43:57] <Boomtime> yes
[04:44:16] <Boomtime> it reflects what you're actually using on disk, which ultimately is the most important
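For the record, fontanon's numbers check out; a quick recomputation of the approximation discussed above, from the byte-scale db.stats() values pasted earlier:

```javascript
// Free-space approximation from the conversation:
//   approxFree = fileSize - (storageSize + indexSize)
// (what remains is unassigned extents plus small slack inside index extents)
const stats = {
  dataSize:    23278657872,
  storageSize: 36217163664,
  indexSize:    4388868624,
  fileSize:    62180556800,
};

const approxFreeBytes = stats.fileSize - (stats.storageSize + stats.indexSize);
const gib = approxFreeBytes / 1024 ** 3;
console.log(approxFreeBytes, "bytes ≈", gib.toFixed(1), "GiB"); // ≈ 20.1 GiB
```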
[04:45:10] <fontanon> Ok Boomtime, really thanks for your invaluable help
[04:51:01] <zamnuts> db.authenticate hangs when attempting to authenticate against multiple mongos instances using node.js driver version 2.0.33: http://pastebin.com/WsxpvGxq any ideas?
[04:51:58] <zamnuts> driver version 1.4.x does not hang
[04:59:56] <fontanon> Boomtime, this is really interesting :) http://www.awesomescreenshot.com/image/266237/823d898ff3eb54b01c2cdaddd52d3909
[05:00:21] <fontanon> MMS is evidencing all we've talked about
[05:00:46] <fontanon> the chunk migration started about 12.00 as the graph shows
[05:01:47] <fontanon> the dataSize was shrinking during the chunk migration; once it finished, the storageSize shrank abruptly, while fileSize remained the same
[05:02:15] <fontanon> the three red vertical lines are me getting nervous
[05:02:28] <Boomtime> what did you do?
[05:03:02] <DragonPunch> do i have to clean user input before storing into it in mongodb?
[05:03:06] <DragonPunch> if im using nodejs on server side?
[05:03:31] <preaction> why would you not?
[05:03:40] <fontanon> Boomtime, I had MMS installed by the time the chunk migration started. I didn't realize how useful these graphs were to understand what happened with the expected free space
[05:04:27] <Boomtime> mms is pretty fantastic
[05:04:52] <fontanon> By me getting nervous, I meant me seeing how no new free space was returned to the OS, so I restarted mongod (big fail)
[05:05:07] <fontanon> Boomtime, indeed
[05:05:44] <zamnuts> DragonPunch, you don't have to worry about things like SQL injection with the nodejs mongodb driver
[05:06:42] <zamnuts> well... as long as the user input is being stored as a value, and not used as a field, config option, query struct, etc.
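zamnuts's caveat can be made concrete: user input stored as a value is harmless, but user input spliced into the query structure lets an attacker send an operator object instead of a string. A hypothetical login check (field names invented):

```javascript
// Why "stored as a value" is safe but "used as a query struct" is not.
// An attacker can submit an operator object where a string was expected:
const userInput = { $ne: null }; // attacker-supplied "password"

// Dropping it straight into a query builds:
//   { user: "admin", password: { $ne: null } }
// ...which matches ANY document whose password is non-null.

// A simple guard: insist on a scalar string before building the query
// (stripping $-prefixed keys is another common approach).
function asScalar(value) {
  if (typeof value !== "string") throw new TypeError("expected a string");
  return value;
}

try {
  asScalar(userInput); // throws before the query is ever built
} catch (e) {
  console.log("rejected:", e.message); // rejected: expected a string
}
```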
[05:08:54] <zamnuts> fontanon, Boomtime, chunk migrations will consolidate data files? i.e. compact?
[05:09:37] <fontanon> zamnuts, dunno but I believe not.
[05:09:39] <zamnuts> how else would physical space decline? unless the chunk migrations are moving from mmapv1 to WT w/ compression?
[05:10:18] <zamnuts> wait. dumb question. you're looking at an individual mongod.
[05:31:21] <Streemo> Not only would it be convenient, it would be beautiful if we could perform arbitrarily nested array queries. Unfortunately, this doesn't even work for N=1 level of nesting.
[05:32:47] <Streemo> for example doc = {field:'val','arrayField':[doc1,doc2,doc3,doc4,doc5,...]} arrayField could be considered its own sub-collection, in which case we could run a query on the parent collection, with a nested query on the sub collection so that the returned document does not contain a full array in the arrayField.
[05:34:59] <Streemo> db.collection.find({_id:ObjectId},{arrayField: subQuery: {find: {name:"john"}}}) would return {_id:ObjectId, arrayField:[{name:"john"}]} where arrayField only contains the elements which were matched by the subQuery.
[05:35:39] <Streemo> db.collection.find({_id:ObjectId},{arrayField: {subQuery: {find: {name:"john"}}}}) I mean
[05:36:05] <zamnuts> Streemo, aggregation framework perhaps?
[05:36:10] <Streemo> yes, indeed
[05:36:15] <Streemo> but still :/
[05:36:36] <Streemo> i was disappointed to find out that elemMatch was like a subquery that didn't actually return the matched results
[05:36:52] <zamnuts> technically you're matching the whole doc
[05:36:53] <Streemo> still very useful, no doubt.
[05:36:57] <Streemo> i know :/
[05:37:18] <Streemo> would be nice to have nested querying
[05:37:30] <zamnuts> IMO it would muck up the query language/structure
[05:38:31] <Streemo> well the syntax i proposed isn't unreadable
[05:38:56] <Streemo> and it could be off by default for performance, but turned on if the array has "isQuery: true" or something
[05:39:03] <Streemo> er, isCollection
[05:41:16] <Streemo> really it could just be a denormalized syntax for normalized collections
[05:41:56] <Streemo> so you could return a denormalized document with a Post/comment type relationship, even though the underlying collections are normalized
[05:44:03] <Streemo> zamnuts: anyways, i think it would be a nice API addition to at least be able to choose whether elem match returns a modified document or the document itself. Sure, it can be done in the aggregation framework ,but to me it seems like such a common thing that i would want a general syntax for it and not have to implement it on the fly every time.
[05:52:25] <zamnuts> Streemo, open up an Improvement in SERVER on JIRA :)
[05:52:48] <Streemo> zamnuts: would you support it?
[05:53:11] <zamnuts> probably not
[05:53:14] <zamnuts> ...lol, not gunna ie
[05:53:19] <zamnuts> s/ie/lie/
[05:53:21] <Streemo> zamnuts: i suspect my request is so common, because it is such an obvious thing to expect to be possible, that someone else must have already opened a similar ticket.
[05:53:35] <zamnuts> Streemo, there is a search feature :)
[05:53:38] <Streemo> zamnuts: fair, thanks for the honest answer though.
[05:54:55] <Streemo> zamnuts: honestly, i will not open a ticket. there are several solutions around what i am talking about - specifically, normalization.
[05:55:08] <Streemo> and of course the agg framework
[05:55:20] <Streemo> it probably will not get a second look, to be honest.
[06:03:49] <Boomtime> @Streemo: the answer will be aggregation, what it's designed to do - why is that not an option for you?
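At the time of this log, the usual aggregation shape for returning only the matching array elements was $unwind → $match → $group (the $filter projection operator arrived later, in 3.2). The names below are Streemo's illustrative ones; the plain-JS helper just shows the intended result:

```javascript
// The shape Streemo asked for: the parent doc with arrayField narrowed to
// matching elements only. A common shell pipeline for that effect:
//   db.coll.aggregate([
//     { $match:  { _id: someId } },
//     { $unwind: "$arrayField" },
//     { $match:  { "arrayField.name": "john" } },
//     { $group:  { _id: "$_id", arrayField: { $push: "$arrayField" } } },
//   ])
// Plain-JS equivalent of that pipeline's effect on one document:
function subQuery(doc, field, predicate) {
  return { ...doc, [field]: doc[field].filter(predicate) };
}

const parent = {
  _id: 42,
  arrayField: [{ name: "john" }, { name: "jane" }, { name: "john" }],
};

console.log(subQuery(parent, "arrayField", (e) => e.name === "john"));
// { _id: 42, arrayField: [ { name: 'john' }, { name: 'john' } ] }
```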
[06:40:59] <zamnuts> bump: any idea why authenticating against multiple mongos would hang the node.js driver?
[06:51:27] <Boomtime> @zamnuts: no, but are you using multiple mongos in the seed list to the same database or do you mean separate instances?
[06:52:50] <zamnuts> Boomtime, multiple mongos in the seedlist: mongodb://mongo1,mongo2/database
[06:53:12] <zamnuts> where mongo1 and mongo2 are mongos'
[06:54:00] <zamnuts> it connects just fine, i can attempt queries and get auth errors... but when doing db.authenticate (node.js driver), it just sits there, forever. when i use a single mongos in the seed list, everything is snappy.
[07:06:57] <Streemo> Boomtime: it is an option, but Meteor does not support livequery for aggregation operations
[07:10:36] <Streemo> structural question here. if you keep track of users' searches in the DB, would you store searches as an array field on users, or would you normalize it and have a searches collection which stores documents like this: {userId: someUser, search: searchCriterion}
[07:12:25] <Streemo> well gtg, ill ask tomorrow!
[07:19:45] <acidjazz> hey all
[07:20:07] <acidjazz> fresh aws ec2 install with the amazon ami, did a yum install mongodb-server
[07:20:30] <acidjazz> then a sudo service mongod start and got: /usr/bin/mongod: symbol lookup error: /usr/bin/mongod: undefined symbol: _ZN7pcrecpp2RE4InitEPKcPKNS_10RE_OptionsE
[07:21:03] <acidjazz> looks like the package is mongodb-server.x86_64 2.4.13-1.el6 @epel
[07:21:10] <preaction> perhaps PCRE's C++ library didn't get installed?
[07:21:59] <acidjazz> well i mean if it's a requirement for the package it should have installed it
[07:22:16] <acidjazz> heres my issue
[07:22:16] <acidjazz> http://stackoverflow.com/questions/20872774/epel-mongodb-will-not-start-on-ec2-amazon-ami
[07:22:52] <acidjazz> i have no problem going directly to mongodb for the repo and trying out 3.0/etc
[07:22:59] <acidjazz> but i was still pretty surprised to see this and wanted to report it
[07:23:10] <acidjazz> so ill go that route but dang.. cmon amazon..
[07:23:20] <preaction> doesn't seem like a mongodb problem. seems like an image problem or an os problem
[07:24:10] <acidjazz> yeap
[07:24:32] <acidjazz> ok so ill purge this and add mongo's epel.. just hope my old mongo dump goes into this 3.0 setup
[07:25:58] <phocks> Can anyone tell me the difference between var mydate1 = new Date() and var mydate2 = new ISODate() ??
[07:30:48] <acidjazz> phocks: ISODate is a wrapper
[07:31:05] <phocks> So it's just the same thing?
[07:31:39] <phocks> the two commands produce the same time it looks like (a few seconds later for the second one of course)
[07:31:47] <acidjazz> well you can pass it an ISODate but itll always return a Date format
[07:32:37] <phocks> ahh okay, so it's mainly for passing ISODate formatted strings?
[07:32:45] <acidjazz> yup
[07:34:23] <phocks> cool thanks
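As acidjazz says, the shell's ISODate() is a helper that builds an ordinary Date from an ISO-8601 string, and both store the same BSON date type. The string handling can be checked with plain JavaScript Dates:

```javascript
// ISODate("...") in the mongo shell parses an ISO-8601 string into a normal
// Date; plain JavaScript Dates parse the same strings:
const d = new Date("2014-06-24T23:10:10Z");

console.log(d instanceof Date); // true
console.log(d.toISOString());   // 2014-06-24T23:10:10.000Z
console.log(d.getTime());       // 1403651410000 (ms since the epoch)
```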
[07:37:30] <acidjazz> 2015-05-27T07:28:43.277+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[07:37:35] <acidjazz> whats this new thingy
[07:51:26] <Boomtime> acidjazz: THP have always been a bad thing, mongodb just warns you about it now
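The warning acidjazz pasted concerns transparent huge pages. The commonly documented remedy is to set both THP knobs to never before mongod starts (run as root; making it survive reboots depends on your init system, e.g. an rc.local or init script):

```shell
# Disable transparent huge pages for the current boot (requires root);
# MongoDB's docs recommend 'never' for both settings.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Verify: the active value is shown in brackets, e.g. "always madvise [never]"
cat /sys/kernel/mm/transparent_hugepage/defrag
```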
[07:51:57] <acidjazz> so usually i install php56-devel and use pecl to install the mongo driver
[07:52:26] <acidjazz> looks like thats not happening w/ this newer ami
[07:53:57] <acidjazz> good lord this t2.micro current gen ec2 is WAY faster than my older medium instance
[07:58:55] <Boomtime> micro instances are a crapshoot, they have no assurances so you run on spare cycles only - there is no point doing performance tests because it's rolling dice every time
[08:01:35] <acidjazz> Boomtime: is this you http://i.imgur.com/NLj6a13.jpg
[08:02:08] <acidjazz> because reading that i 100% imagined the skit with her saying that in response to my excitement
[08:04:12] <Boomtime> well, if you want to put a tonne of micros into a computing cluster where you don't care what each one achieves then go nuts
[08:04:21] <Boomtime> but don't put something you care about on them
[08:05:37] <acidjazz> waaa waaoooo
[08:30:47] <f31n> hi, i'm pretty new to mongo, actually to nosql. i asked yesterday what's wrong with this table layout in the eyes of nosql: http://pastebin.com/kJqSB0TB and worked this one out - is that better? do you have any ideas to make it even better? http://pastebin.com/wrh5axJw
[12:20:03] <aps> So I am in a weird sticky situation here. I had 1 primary + 1 secondary + 1 arbiter. Due to some unexpected shutdown both primary and secondary are in "recovering" mode now. How do I fix this since there's no primary?
[12:25:47] <aps> I don't have any important data, so I can go back to clean slate as well. But how do I do that?
[12:26:33] <cheeser> well, if you don't need to save your data, shut them down, delete the data dir, and bring 'em back up
[12:28:04] <aps> cheeser: I had tried something like that and I was getting "assert failed : no config object retrievable from local.system.replset" when I tried to start again
[12:29:47] <cheeser> you'll have to rebuild your replica set, too.
[12:30:32] <aps> cheeser: is there a way to fix this without deleting any data? In case this happens in future where I have some precious data
[12:30:51] <cheeser> there might be. i don't know.
[12:31:01] <aps> okay. Thanks :)
[12:37:14] <leandroa> hi, I've a doc like: {val: {"0": 231, "1": 323, "2": 23, ...., "31": 23}}.. is there a way to get a portion of val? like $gte 5 and $lte 20?
[12:38:16] <leandroa> filtering by key names I mean
[12:45:56] <joannac> leandroa: not without listing them all
[12:46:19] <joannac> aps: how in the world did you get both data nodes in recovering?
[12:47:00] <leandroa> joannac, thanks
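As joannac says, the keys have to be listed; a small helper can at least generate the projection instead of typing it out (field name val as in leandroa's document):

```javascript
// Build { "val.5": 1, "val.6": 1, ..., "val.20": 1 } programmatically,
// since there is no server-side "range of key names" operator for this.
function keyRangeProjection(field, from, to) {
  const projection = {};
  for (let i = from; i <= to; i++) projection[`${field}.${i}`] = 1;
  return projection;
}

const proj = keyRangeProjection("val", 5, 20);
console.log(Object.keys(proj).length); // 16
// Usage (mongo shell): db.coll.find({}, keyRangeProjection("val", 5, 20))
```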
[12:48:50] <aps> joannac: that is something even I'm confused about
[12:50:40] <aps> joannac: What I think happened was that 1st secondary went down, and by the time it was back, even primary was down and just arbiter was running. So, both members think they are out of sync and need to recover from primary
[12:54:04] <aps> joannac: This is what rs.status() on the Arbiter looks like - http://paste.ubuntu.com/11391107/
[13:10:40] <joannac> aps: you wiped both of them
[13:11:07] <aps> joannac: what can I do now? I don't care about the data
[13:11:23] <joannac> wipe the whole thing and start again
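cheeser's clean-slate procedure might look like the following sketch; the service name, dbpath, and hostnames are examples, not taken from the log:

```shell
# Hypothetical clean-slate rebuild of a wrecked replica set.
# On each data node and the arbiter:
sudo service mongod stop
sudo rm -rf /var/lib/mongodb/*   # wipes ALL data on that node
sudo service mongod start

# Then, in the mongo shell on ONE data node, re-create the replica set:
#   rs.initiate({
#     _id: "rs0",
#     members: [
#       { _id: 0, host: "db1:27017" },
#       { _id: 1, host: "db2:27017" },
#       { _id: 2, host: "arb1:27017", arbiterOnly: true },
#     ]
#   })
```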
[14:46:03] <ToTheInternet> is it bad practice to create new collections for every session? named something like collection-132095102 where the number is a session identifier
[14:46:26] <Derick> yes
[14:49:05] <ToTheInternet> kinda figured. i'm new to mongodb, and whenever i try to model data i end up using mongodb like sql
[14:49:36] <ToTheInternet> so i tried to think of the opposite extreme.. ;)
[15:11:57] <_ari> if i want to query everything that has an object id of "r-id"
[15:12:01] <_ari> how would i do that?
[15:12:16] <_ari> db.collection.find('r-id': {})
[15:12:16] <_ari> ?
[15:13:04] <StephenLynx> there is an operator for that.
[15:13:12] <StephenLynx> isSet or exists, I can't remember
[15:13:34] <_ari> hmm
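The operator StephenLynx is reaching for is $exists; in the shell that's db.collection.find({"r-id": {$exists: true}}). A plain-JS imitation of its semantics:

```javascript
// $exists: true selects documents where the field is present at all,
// whatever its value (even null still "exists").
// Shell form: db.collection.find({ "r-id": { $exists: true } })
function existsMatch(docs, field) {
  return docs.filter((doc) => field in doc);
}

const docs = [{ "r-id": 7, x: 1 }, { x: 2 }, { "r-id": null }];
console.log(existsMatch(docs, "r-id").length); // 2 (null still counts)
```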
[15:30:24] <pcr0> Hello!
[15:31:15] <pcr0> I'd like some help with a seemingly common error, "Failed to load C++ bson extension, using pure JS instead"
[15:31:34] <pcr0> I've described my problem here http://stackoverflow.com/questions/30486594/failed-to-load-c-bson-extension-using-pure-js-version-error-when-running-mong
[15:33:06] <StephenLynx> yeah, it failed to build properly.
[15:33:15] <StephenLynx> what version are you using of which platform?
[15:34:15] <StephenLynx> and how did you install the driver?
[15:34:25] <jubjub> Hi guys i need to make a notification collection with multiple types and there are 2 approaches i can think of. 1) multiple collections and merging them later on, 2) a single notifications collection with a type field. Which configuration is done more often in practice?
[15:35:45] <StephenLynx> if they are so similar they only have the "type" being different, I would use a single collection
[15:36:46] <StephenLynx> pcr0 you mentioned xcode, that is probably what is going on. from what I know, xcode is responsible for compiling stuff on OSX.
[15:36:58] <StephenLynx> so when the driver tries to compile, it fails.
[15:37:30] <pcr0> Hmm, I understand
[15:37:36] <pcr0> Trying to figure out how to go about from here
[15:38:36] <pcr0> The driver? using npm install. npm itself using homebrew
[15:38:56] <StephenLynx> I would use gnu/linux :v
[15:39:03] <StephenLynx> so you just need the compiler installed
[15:39:15] <StephenLynx> instead of a gigantic bloated thing that you will use only the compiler.
[15:41:07] <pcr0> Hmm, I'm trying to make do with what I have. Not having the giant bloated thing was my intention, I thought the CLI tools were all that other apps depended on. The problem is, reinstalling Xcode didn't fix it either.
[15:41:43] <StephenLynx> yeah, OSX is pretty bad for software development.
[15:41:59] <StephenLynx> I know because I started my career as an iOS dev.
[15:42:10] <cheeser> really? it's pretty good for me for a long time.
[15:42:16] <pcr0> Hmm, but c'mon, isn't it a step up from Windows?
[15:42:16] <cheeser> it's *been*...
[15:42:20] <StephenLynx> always been complete hell for me.
[15:42:27] <mike_edmr> *windows* is pretty bad for software development
[15:42:35] <StephenLynx> I dunno, windows at least is not as walled.
[15:42:48] <pcr0> Where are the walls in OS X?
[15:42:55] <StephenLynx> and it really doesn't matter, both are pathetic compared to gnu/linux.
[15:43:27] <StephenLynx> you have to sign for apple's development program.
[15:43:37] <StephenLynx> plus the update tools are awful.
[15:43:42] <jubjub> StephenLynx: well in the future there may be a different notification with different people affected and i may need to modify the structure
[15:43:43] <StephenLynx> you cannot use xcode while its updating
[15:43:47] <StephenLynx> it will discard progress if failure
[15:44:07] <StephenLynx> what do you mean?
[15:44:09] <StephenLynx> jubjub
[15:44:17] <StephenLynx> ah
[15:44:41] <jubjub> StephenLynx: i think i just answered my own question haha
[15:48:49] <StephenLynx> in general, I have actively developed in osx, windows and gnu/linux for a job. I cannot even compare how gnu/linux is superior to both of the others, pcr0
[15:50:09] <StephenLynx> you never have to jump through hoops imposed by some asinine executive.
[15:50:22] <StephenLynx> or plain dumb architecture.
[15:51:25] <pcr0> While I might change my mind in the future...I'm perfectly happy with OS X for all my nascent dev needs so far
[15:53:43] <StephenLynx> ¯\_(ツ)_/¯
[16:05:11] <jubjub> How can i parse a collection into a new one. For example {a: 10, b:20, c:30} -> {test1: 20, test:30}
[16:09:24] <StephenLynx> afaik, there is an $out operator for aggregation.
[16:09:56] <StephenLynx> it will get your documents and create a new collection composed of these results
[16:10:23] <StephenLynx> but if it's a one-time-only thing, you can just move or copy the collection and change its fields and remove others.
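A sketch of the $out approach for jubjub's example; the source and target collection names are invented, and the plain-JS function only shows what the $project stage does to one document:

```javascript
// Shell-style pipeline: rename b -> test1 and c -> test (dropping a),
// writing the results into a new collection via $out:
const pipeline = [
  { $project: { _id: 0, test1: "$b", test: "$c" } },
  { $out: "parsedCollection" },
];
// db.sourceCollection.aggregate(pipeline)

// What the $project stage does to one document, in plain JS:
function projectStage(doc) {
  return { test1: doc.b, test: doc.c };
}
console.log(projectStage({ a: 10, b: 20, c: 30 })); // { test1: 20, test: 30 }
```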
[16:26:32] <Foxandxss> Hello, I installed mongo yesterday on my vagrant, I created an app (without user/pass) and today it is not working
[16:26:41] <Foxandxss> I get a lot of: not authorized on news to execute command
[16:26:46] <Foxandxss> cant show dbs either
[16:47:43] <laurentide> i'm following this tutorial: http://code.tutsplus.com/tutorials/authenticating-nodejs-applications-with-passport--cms-21619. under the 'configuring Mongo' section, the author assumes i know how to set up a local Mongo instance, which I don't. is anyone able to point me to an accurate resource for this? thank you
[16:48:17] <cheeser> http://docs.mongodb.org/manual/installation/
[16:49:15] <StephenLynx> Foxandxss what is vagrant?
[16:49:44] <Foxandxss> StephenLynx: a virtual machine
[16:49:57] <Foxandxss> I ended using my local mongo for the app
[16:50:10] <StephenLynx> so disable authentication for the db and reload the mongod.
[16:50:37] <Foxandxss> StephenLynx: I think I had messed up two installations, nevermind my question
[16:51:12] <Foxandxss> vagrant's mongo was 2.x and my client on my computer was 3.0.3 so I uninstalled the older one to put a new one from mongodb's repo and after restarting vagrant it got screwed
[16:55:32] <StephenLynx> I never heard about vagrant. any particular reason you use it instead of virtualbox?
[16:55:56] <saml> vagrant uses virtualbox
[16:57:03] <StephenLynx> so what is vagrant for?
[16:57:52] <saml> it's a wrapper around virtualbox (and others) so that mac osx frontend devs can create development environment with ease
[16:59:57] <StephenLynx> ah
[16:59:59] <StephenLynx> bloat
[17:00:00] <StephenLynx> got it
[17:51:58] <Parabola> hey folks, I need to create a read only user for all databases (including future ones) the issue is there are no DBs, and i have no idea what DBs the admins will add (its a dev box)
[17:52:25] <Parabola> so i need a read only account for all non system DBs, for the rest of the developers who are testing. is this possible? i cannot find what im looking for in the docs or google
[17:57:24] <Parabola> nevermind, turns out that was easier than i thought it would be, there's a role for that
[18:12:12] <Parabola> woops
[18:50:00] <Vitium> "If the field holds an array, then the $in operator selects the documents whose field holds an array that contains at least one element that matches a value in the specified array (e.g. <value1>, <value2>, etc.)"
[18:50:22] <Vitium> Is there a way I can enforce that it must match all of the elements of the specified array?
[18:51:08] <cheeser> $elemMatch
[18:51:43] <Vitium> Fast, cheeser.
[18:51:44] <Vitium> Thanks.
[18:51:53] <Vitium> Let me test it.
[18:52:47] <cheeser> np
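For requiring that the array contain every element of the given list, the operator usually reached for is $all ($elemMatch instead matches a single array element against multiple criteria). A plain-JS imitation of $all's semantics:

```javascript
// $all selects docs whose array field contains every value in the list.
// Shell form: db.coll.find({ tags: { $all: ["a", "b"] } })
function allMatch(docs, field, required) {
  return docs.filter((doc) =>
    required.every((v) => Array.isArray(doc[field]) && doc[field].includes(v)));
}

const docs = [{ tags: ["a", "b", "c"] }, { tags: ["a"] }, { tags: ["b", "a"] }];
console.log(allMatch(docs, "tags", ["a", "b"]).length); // 2
```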
[19:58:57] <kenlik> hello. Is there any tool or way to create a data model visualization like a UML diagram?
[19:59:42] <kenlik> especially a way to represent embedded array fields ..
[20:22:51] <StephenLynx> afaik there are a couple of GUI clients for mongo, kenlik
[20:22:58] <StephenLynx> but I never heard people praising them.
[20:29:34] <kenlik> StephenLynx, Ok thanks!
[20:33:05] <kenlik> StephenLynx, I hope it can help --> http://atlanmod.github.io/json-discoverer
[20:33:14] <StephenLynx> nah
[20:33:30] <StephenLynx> I am not willing to bother with a gui client for mongo
[22:16:12] <Dev0n> hey, when modeling mongo collections, should I try to "forget" about the relational way of thinking (ER) and just dump what's related into a single collection?
[22:18:26] <cheeser> it's a little more complicated than that...
[22:18:49] <cheeser> http://docs.mongodb.org/manual/data-modeling/
[22:19:27] <Dev0n> thanks for the link, I'll have a read
[22:19:58] <cheeser> np
[22:20:39] <jrcconstela> Hello everyone.
[22:25:15] <jrcconstela> I've got a collection with bulk data. Does anyone know a tool to process its documents, whenever there are any?
[22:37:58] <Dev0n> cool cheeser, that cleared things up a bit :)
[22:38:43] <Dev0n> just read about the capped collections, would I be right in saying that these would still allow the normal find/sort/group etc. queries to be run as well?
[22:49:08] <cheeser> Dev0n: cool. happy to hear that.