[01:25:25] <kba> if I'm using mongoose in nodejs and I have a mongoose object that I save. If I then update that object and save it again, is all the data transferred?
[01:26:04] <kba> for instance, I set obj.foo = 1, then obj.save(), then obj.bar = 1 and obj.save().
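A minimal sketch of the answer, assuming a hypothetical Thing model: Mongoose tracks modified paths, so the first save() of a new document inserts the whole document, while each later save() sends only the fields changed since the previous save.

    // assumes: var Thing = mongoose.model('Thing', new mongoose.Schema({ foo: Number, bar: Number }));
    var obj = new Thing();
    obj.foo = 1;
    obj.save(function (err) {     // inserts the whole document
      obj.bar = 1;
      obj.save(function (err) {   // sends only the dirty path,
        // roughly: update({ _id: obj._id }, { $set: { bar: 1 } })
      });
    });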
[02:16:09] <MacWinne_> in mongo, i see a certain document field is of type ISODate("2014-06-24T23:10:10Z") however when I query with mongoose, I seem to get back just the string '2014-06-24T23:10:10Z'
[02:16:18] <MacWinne_> even though my schema has defined the field as Date
[02:17:13] <MacWinne_> is this expected behavior? I've recently started using Mongoose.. was using PHP before, and I think PHP returned the date field correctly.. though am not completely sure
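For reference, a minimal schema sketch with a hypothetical Event model; when the field is declared as Date, Mongoose casts the stored BSON date to a JavaScript Date object, so getting a plain string back usually means the schema in use doesn't actually cover that field.

    var mongoose = require('mongoose');

    var Event = mongoose.model('Event', new mongoose.Schema({
      createdAt: Date                 // stored as ISODate(...) in MongoDB
    }));

    Event.findOne(function (err, doc) {
      console.log(doc.createdAt instanceof Date); // expected: true
    });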
[04:13:19] <fontanon> Hi everybody! Looking at my database's db.stats() I have a question: is the free space on disk pre-allocated by MongoDB equal to (fileSize - (storageSize + indexSize))?
[04:22:27] <Boomtime> @fontanon: i believe that will be very close, although not exact - the difference is in the amount of space remaining in the extents already allocated to the indexes
[04:23:57] <Boomtime> there is no analogous storageSize for indexes :/
[04:24:11] <Boomtime> that is my understanding, take it with a grain of salt
[04:25:09] <Boomtime> this means of course that you can't know precisely how much free space exists already in the free lists, but you can get a very close approximation
[04:25:10] <fontanon> Boomtime, so mongodb pre-allocates space not only for data but for indexes too
[04:25:35] <fontanon> and this approximation is fileSize - (storageSize + indexSize)
[04:26:49] <Boomtime> the pre-allocation is for a database
[04:27:07] <Boomtime> from within those files, sections are carved out, called extents, and allocated to collections for data or indexes
[04:27:41] <Boomtime> when you work out the "free space" using the method you have, you are getting a combination of three things in the result:
[04:30:40] <Boomtime> once an extent is assigned for data, storageSize increases
[04:31:04] <Boomtime> note that dataSize must increase too, though most likely by far less
[04:31:34] <Boomtime> indeed, it could be as little as one byte increase to dataSize that forces a new extent to be allocated - which might be 512MB
[04:31:59] <Boomtime> there is no real way to know this for indexes
[04:32:22] <Boomtime> if you insert a billion documents, then slowly remove all but one, you'll still have a large index
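A mongo-shell sketch of the approximation being discussed; as noted above, it cannot account for free space inside extents already allocated to indexes, so it is only a close estimate.

    // approximate pre-allocated free space for the current database
    var s = db.stats();
    print("approx free bytes: " + (s.fileSize - (s.storageSize + s.indexSize)));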
[04:33:37] <fontanon> the current behaviour I observe on my disk (by running df -h frequently) is: new writes use additional disk space, but the space becomes free again after a couple of GB or so have been consumed.
[04:36:40] <Boomtime> it is even possible that no extents at all got returned to the free list, depending on the distribution of documents that got transferred to the other shard
[04:37:06] <Boomtime> document storage will have been freed up, but whole extents might not have been released
[04:44:16] <Boomtime> it reflects what you're actually using on disk, which ultimately is the most important
[04:45:10] <fontanon> Ok Boomtime, really thanks for your invaluable help
[04:51:01] <zamnuts> db.authenticate hangs when attempting to authenticate against multiple mongos instances using node.js driver version 2.0.33: http://pastebin.com/WsxpvGxq any ideas?
[04:51:58] <zamnuts> driver version 1.4.x does not hang
[04:59:56] <fontanon> Boomtime, this is really interesting :) http://www.awesomescreenshot.com/image/266237/823d898ff3eb54b01c2cdaddd52d3909
[05:00:21] <fontanon> MMS is evidencing all we've talked about
[05:00:46] <fontanon> the chunk migration started about 12.00, as the graph shows
[05:01:47] <fontanon> the dataSize was shrinking during the chunk migration; once it finished, the storageSize shrank abruptly, while fileSize remained the same
[05:02:15] <fontanon> the three red vertical lines are me getting nervous
[05:03:40] <fontanon> Boomtime, I had MMS installed by the time the chunk migration started. I hadn't realized how useful these graphs would be for understanding what happened with the expected free space
[05:05:44] <zamnuts> DragonPunch, you don't have to worry about things like SQL injection with the nodejs mongodb driver
[05:06:42] <zamnuts> well... as long as the user input is being stored as a value, and not used as a field, config option, query struct, etc.
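A sketch of that distinction, assuming db is an already-connected Db instance and userInput comes straight from a request: used strictly as a value the input cannot change the query's shape, but used as the query object itself it can.

    // safe: user input coerced to a string and used only as a value
    db.collection('users').findOne({ name: String(userInput) }, function (err, doc) {
      console.log(err, doc);
    });

    // dangerous: user input used as the query object itself;
    // userInput = { name: { $ne: null } } would match (almost) any user
    db.collection('users').findOne(userInput, function (err, doc) {
      console.log(err, doc);
    });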
[05:08:54] <zamnuts> fontanon, Boomtime, chunk migrations will consolidate data files? i.e. compact?
[05:09:37] <fontanon> zamnuts, dunno but I believe not.
[05:09:39] <zamnuts> how else would physical space decline? unless the chunk migrations are moving from mmapv1 to WT w/ compression?
[05:10:18] <zamnuts> wait. dumb question. you're looking at an individual mongod.
[05:31:21] <Streemo> Not only would it be convenient, it would be beautiful if we could perform arbitrarily nested array queries. Unfortunately, this doesn't even work for N=1 level of nesting.
[05:32:47] <Streemo> for example doc = {field:'val','arrayField':[doc1,doc2,doc3,doc4,doc5,...]} arrayField could be considered its own sub-collection, in which case we could run a query on the parent collection, with a nested query on the sub collection so that the returned document does not contain a full array in the arrayField.
[05:34:59] <Streemo> db.collection.find({_id:ObjectId},{arrayField: subQuery: {find: {name:"john"}}}) would return {_id:ObjectId, arrayField:[{name:"john"}]} where arrayField only contains the elements which were matched by the subQuery.
[05:35:39] <Streemo> db.collection.find({_id:ObjectId},{arrayField: {subQuery: {find: {name:"john"}}}}) I mean
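The closest existing feature is the $elemMatch projection, though it returns only the first matching element rather than all of them:

    // returns {_id: ..., arrayField: [<first element whose name is "john">]}
    db.collection.find(
      { _id: ObjectId("...") },                       // placeholder _id
      { arrayField: { $elemMatch: { name: "john" } } }
    )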
[05:41:16] <Streemo> really it could just be a denormalized syntax for normalized collections
[05:41:56] <Streemo> so you could return a denormalized document with a Post/Comment type relationship, even though the underlying collections are normalized
[05:44:03] <Streemo> zamnuts: anyways, i think it would be a nice API addition to at least be able to choose whether $elemMatch returns a modified document or the document itself. Sure, it can be done in the aggregation framework, but to me it seems like such a common thing that i would want a general syntax for it and not have to implement it on the fly every time.
[05:52:25] <zamnuts> Streemo, open up an Improvement in SERVER on JIRA :)
[05:52:48] <Streemo> zamnuts: would you support it?
[05:53:21] <Streemo> zamnuts: i suspect my request is so common, because it is such an obvious thing to expect to be possible, that someone else must have already opened a similar ticket.
[05:53:35] <zamnuts> Streemo, there is a search feature :)
[05:53:38] <Streemo> zamnuts: fair, thanks for the honest answer though.
[05:54:55] <Streemo> zamnuts: honestly, i will not open a ticket. there are several solutions around what i am talking about - specifically, normalization.
[05:55:08] <Streemo> and of course the agg framework
[05:55:20] <Streemo> it probably will not get a second look, to be honest.
[06:03:49] <Boomtime> @Streemo: the answer will be aggregation, what it's designed to do - why is that not an option for you?
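A sketch of the aggregation approach: $unwind flattens the array, $match keeps only the matching elements, and $group rebuilds the filtered array.

    db.collection.aggregate([
      { $match: { _id: ObjectId("...") } },        // narrow to the parent doc first
      { $unwind: "$arrayField" },                  // one document per array element
      { $match: { "arrayField.name": "john" } },   // keep matching elements only
      { $group: { _id: "$_id", arrayField: { $push: "$arrayField" } } }
    ])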
[06:40:59] <zamnuts> bump: any idea why authenticating against multiple mongos would hang the node.js driver?
[06:51:27] <Boomtime> @zamnuts: no, but are you using multiple mongos in the seed list to the same database or do you mean separate instances?
[06:52:50] <zamnuts> Boomtime, multiple mongos in the seedlist: mongodb://mongo1,mongo2/database
[06:53:12] <zamnuts> where mongo1 and mongo2 are mongos'
[06:54:00] <zamnuts> it connects just fine, i can attempt queries and get auth errors... but when doing db.authenticate (node.js driver), it just sits there, forever. when i use a single mongos in the seed list, everything is snappy.
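Not a confirmed fix for the hang, but a sketch of the usual alternative, with hypothetical hostnames and credentials: putting the credentials in the URI lets the driver authenticate during connect instead of via a separate db.authenticate() call.

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect(
      'mongodb://user:pass@mongo1,mongo2/database',   // both mongos in the seed list
      function (err, db) {
        if (err) throw err;
        db.collection('test').findOne({}, function (err, doc) {
          console.log(err, doc);
          db.close();
        });
      }
    );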
[07:06:57] <Streemo> Boomtime: it is an option, but Meteor does not support livequery for aggregation operations
[07:10:36] <Streemo> structural question here. if you keep track of users' searches in the DB, would you store searches as an array field on users, or would you normalize it and have a searches collection which stores documents like this: {userId: someUser, search: searchCriterion}
[07:20:07] <acidjazz> fresh aws ec2 install with the amazon ami, did a yum install mongodb-server
[07:20:30] <acidjazz> then a sudo service mongod start and got: /usr/bin/mongod: symbol lookup error: /usr/bin/mongod: undefined symbol: _ZN7pcrecpp2RE4InitEPKcPKNS_10RE_OptionsE
[07:21:03] <acidjazz> looks like the package is mongodb-server.x86_64 2.4.13-1.el6 @epel
[07:21:10] <preaction> perhaps PCRE's C++ library didn't get installed?
[07:21:59] <acidjazz> well i mean if it's a requirement for the package it should have installed it
[07:51:26] <Boomtime> acidjazz: THP have always been a bad thing, mongodb just warns you about it now
[07:51:57] <acidjazz> so usually i install php56-devel and use pecl to install the mongo driver
[07:52:26] <acidjazz> looks like thats not happening w/ this newer ami
[07:53:57] <acidjazz> good lord this t2.micro current gen ec2 is WAY faster than my older medium instance
[07:58:55] <Boomtime> micro instances are a crapshoot, they have no assurances so you run on spare cycles only - there is no point doing performance tests because it's rolling dice every time
[08:01:35] <acidjazz> Boomtime: is this you http://i.imgur.com/NLj6a13.jpg
[08:02:08] <acidjazz> because reading that i 100% imagined the skit with her saying that in response to my excitement
[08:04:12] <Boomtime> well, if you want to put a tonne of micros into a computing cluster where you don't care what each one achieves then go nuts
[08:04:21] <Boomtime> but don't put something you care about on them
[08:30:47] <f31n> hi, i'm pretty new to mongo, actually to nosql in general. i asked yesterday what's wrong with this table layout from a nosql point of view: http://pastebin.com/kJqSB0TB and worked this one out - is it better? do you have any ideas to make it even better? http://pastebin.com/wrh5axJw
[12:20:03] <aps> So I am in a weird sticky situation here. I had 1 primary + 1 secondary + 1 arbiter. Due to some unexpected shutdown both primary and secondary are in "recovering" mode now. How do I fix this since there's no primary?
[12:25:47] <aps> I don't have any important data, so I can go back to clean slate as well. But how do I do that?
[12:26:33] <cheeser> well, if you don't need to save your data, shut them down, delete the data dir, and bring 'em back up
[12:28:04] <aps> cheeser: I had tried something like that and I was getting "assert failed : no config object retrievable from local.system.replset" when I tried to start again
[12:29:47] <cheeser> you'll have to rebuild your replica set, too.
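A sketch of the rebuild being described, with hypothetical hostnames; after wiping the data directories and restarting the mongods, initiate on one member and re-add the others.

    // in the mongo shell on the member that should become primary
    rs.initiate()
    rs.add("mongo2.example.com:27017")        // the other data-bearing node
    rs.addArb("arbiter.example.com:27017")    // the arbiter
    rs.status()                               // verify a primary gets elected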
[12:30:32] <aps> cheeser: is there a way to fix this without deleting any data? In case this happens in the future when I have some precious data
[12:30:51] <cheeser> there might be. i don't know.
[12:37:14] <leandroa> hi, I've a doc like: {val: {"0": 231, "1": 323, "2": 23, ..., "31": 23}}.. is there a way to get a portion of val? like $gte 5 and $lte 20?
[12:38:16] <leandroa> filtering by key names I mean
[12:45:56] <joannac> leandroa: not without listing them all
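Listing them all can at least be generated instead of typed out; a shell sketch for keys 5 through 20, assuming a hypothetical docs collection:

    // builds a projection like {"val.5": 1, "val.6": 1, ..., "val.20": 1}
    var projection = {};
    for (var i = 5; i <= 20; i++) {
      projection["val." + i] = 1;
    }
    db.docs.find({}, projection)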
[12:46:19] <joannac> aps: how in the world did you get both data nodes in recovering?
[12:48:50] <aps> joannac: that is something even I'm confused about
[12:50:40] <aps> joannac: What I think happened was that 1st secondary went down, and by the time it was back, even primary was down and just arbiter was running. So, both members think they are out of sync and need to recover from primary
[12:54:04] <aps> joannac: This is how rs.status() on the arbiter looks - http://paste.ubuntu.com/11391107/
[13:11:07] <aps> joannac: what can I do now? I don't care about the data
[13:11:23] <joannac> wipe the whole thing and start again
[14:46:03] <ToTheInternet> is it bad practice to create new collections for every session? named something like collection-132095102 where the number is a session identifier
[15:31:15] <pcr0> I'd like some help with a seemingly common error, "Failed to load C++ bson extension, using pure JS instead"
[15:31:34] <pcr0> I've described my problem here http://stackoverflow.com/questions/30486594/failed-to-load-c-bson-extension-using-pure-js-version-error-when-running-mong
[15:33:06] <StephenLynx> yeah, it failed to build properly.
[15:33:15] <StephenLynx> what version are you using of which platform?
[15:34:15] <StephenLynx> and how did you install the driver?
[15:34:25] <jubjub> Hi guys i need to make a notification collection with multiple types and there are 2 approaches i can think of. 1) multiple collections and merging them later on, 2) a single notifications collection with a type field. Which configuration is done more often in practice?
[15:35:45] <StephenLynx> if they are so similar they only have the "type" being different, I would use a single collection
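A sketch of the single-collection shape, with hypothetical field names; the type field lets you query all notifications together or filter down to one kind.

    db.notifications.insert({
      type: "friend_request",       // discriminator field
      userId: ObjectId(),           // placeholder id
      createdAt: new Date(),
      read: false
    })

    db.notifications.find({ read: false })            // all types at once
    db.notifications.find({ type: "friend_request" }) // just one type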
[15:36:46] <StephenLynx> pcr0 you mentioned xcode, that is probably what is going on. from what I know, xcode is responsible for compiling stuff on OSX.
[15:36:58] <StephenLynx> so when the driver tries to compile, it fails.
[15:39:03] <StephenLynx> so you just need the compiler installed
[15:39:15] <StephenLynx> instead of a gigantic bloated thing of which you will use only the compiler.
[15:41:07] <pcr0> Hmm, I'm trying to make do with what I have. Not having the giant bloated thing was my intention, I thought the CLI tools were all that other apps depended on. The problem is, reinstalling Xcode didn't fix it either.
[15:41:43] <StephenLynx> yeah, OSX is pretty bad for software development.
[15:41:59] <StephenLynx> I know because I started my career as an iOS dev.
[15:42:10] <cheeser> really? it's been pretty good for me for a long time.
[15:42:16] <pcr0> Hmm, but c'mon, isn't it a step up from Windows?
[15:42:55] <StephenLynx> and it really doesn't matter, both are pathetic compared to gnu/linux.
[15:43:27] <StephenLynx> you have to sign for apple's development program.
[15:43:37] <StephenLynx> plus the update tools are awful.
[15:43:42] <jubjub> StephenLynx: well, in the future there may be a different notification with different people affected, and I may need to modify the structure
[15:43:43] <StephenLynx> you cannot use xcode while it's updating
[15:43:47] <StephenLynx> it will discard progress on failure
[15:44:41] <jubjub> StephenLynx: i think i just answered my own question haha
[15:48:49] <StephenLynx> in general: I have actively developed on osx, windows and gnu/linux for a job. I can't even begin to describe how superior gnu/linux is to both of the others, pcr0
[15:50:09] <StephenLynx> you never have to jump through hoops imposed by some asinine executive.
[15:50:22] <StephenLynx> or plain dumb architecture.
[15:51:25] <pcr0> While I might change my mind in the future...I'm perfectly happy with OS X for all my nascent dev needs so far
[16:47:43] <laurentide> i'm following this tutorial: http://code.tutsplus.com/tutorials/authenticating-nodejs-applications-with-passport--cms-21619. under the 'configuring Mongo' section, the author assumes i know how to set up a local Mongo instance, which I don't. is anyone able to point me to an accurate resource for this? thank you
[16:49:15] <StephenLynx> Foxandxss what is vagrant?
[16:49:44] <Foxandxss> StephenLynx: a virtual machine
[16:49:57] <Foxandxss> I ended using my local mongo for the app
[16:50:10] <StephenLynx> so disable authentication for the db and reload the mongod.
[16:50:37] <Foxandxss> StephenLynx: I think I had messed up two installations, nevermind my question
[16:51:12] <Foxandxss> vagrant's mongo was 2.x and my client on my computer was 3.0.3 so I uninstalled the older one to put a new one from mongodb's repo and after restarting vagrant it got screwed
[16:55:32] <StephenLynx> I never heard about vagrant. any particular reason you use it instead of virtualbox?
[17:51:58] <Parabola> hey folks, I need to create a read only user for all databases (including future ones) the issue is there are no DBs, and i have no idea what DBs the admins will add (its a dev box)
[17:52:25] <Parabola> so i need a read only account for all non system DBs, for the rest of the developers who are testing. is this possible? i cannot find what im looking for in the docs or google
[17:57:24] <Parabola> nevermind, turns out that was easier than i thought it would be, there's a role for that
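Presumably the built-in readAnyDatabase role; a shell sketch with hypothetical credentials, run against the admin database:

    use admin
    db.createUser({
      user: "devReader",
      pwd: "changeme",
      roles: [ { role: "readAnyDatabase", db: "admin" } ]
    })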
[18:50:00] <Vitium> "If the field holds an array, then the $in operator selects the documents whose field holds an array that contains at least one element that matches a value in the specified array (e.g. <value1>, <value2>, etc.)"
[18:50:22] <Vitium> Is there a way I can enforce that it must match all of the elements of the specified array?
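That is what the $all operator is for; a minimal sketch:

    // matches documents whose tags array contains BOTH "red" AND "blue",
    // in any order, possibly alongside other elements
    db.collection.find({ tags: { $all: ["red", "blue"] } })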
[20:33:30] <StephenLynx> I am not willing to bother with a gui client for mongo
[22:16:12] <Dev0n> hey, when modeling mongo collections, should I try to "forget" about the relational way of thinking (ER) and just dump what's related into a single collection?
[22:18:26] <cheeser> it's a little more complicated than that...
[22:25:15] <jrcconstela> I've got a collection with bulk data. Does anyone know of a tool to process its documents whenever there are any?
[22:37:58] <Dev0n> cool cheeser, that cleared things up a bit :)
[22:38:43] <Dev0n> just read about the capped collections, would I be right in saying that these would still allow the normal find/sort/group etc. queries to be run as well?
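Yes: capped collections mainly restrict deletes and document growth, while normal reads still work. A minimal shell sketch:

    // create a capped collection of at most 1 MB
    db.createCollection("events", { capped: true, size: 1048576 })

    db.events.insert({ level: "info", msg: "hello", at: new Date() })
    db.events.find({ level: "info" }).sort({ $natural: -1 })  // normal queries work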
[22:49:08] <cheeser> Dev0n: cool. happy to hear that.