[05:45:24] <spencercarnage> I have a question about mongoose Model.find().
[05:46:18] <spencercarnage> It’s been a while since I’ve used mongoose, but I recall that when I did a find on a model, it returned all of the fields of the schema. Now, I’m only getting a ‘name’ field, along with _id and __v.
[05:47:14] <spencercarnage> The model was generated from a yeoman generator which created the model with only name on the schema. I updated the schema with additional fields but I can’t get them to show up when I use find.
[06:00:46] <spencercarnage> nevermind. I’m dumb. new results have the new fields.
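For anyone who lands here later: fields added to a Mongoose schema only show up on documents written after the change; older documents keep the shape they were saved with. A minimal sketch, with hypothetical names:

    var mongoose = require('mongoose');

    // 'description' was added to the schema after some documents already existed
    var ThingSchema = new mongoose.Schema({
      name: String,
      description: String
    });
    var Thing = mongoose.model('Thing', ThingSchema);

    // documents saved before the schema change come back without 'description';
    // documents saved afterwards include it
    Thing.find(function (err, things) {
      console.log(things);
    });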
[06:58:08] <BigOrangeSU> Hi all, wondering if I can get some info about using a mongo shell version that is much older than the server version. What are the implications?
[07:02:43] <joannac> won't support some of the new shell helpers?
[07:03:33] <joannac> how old are we talking? a 2.2 shell might have trouble with inserting users into a 2.6 mongod
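User management is the concrete case: 2.6 replaced the old addUser helper and its privilege-document format with createUser, so an old shell's helper may write user documents a 2.6 mongod no longer accepts. The 2.6 form, with hypothetical credentials:

    // run from a 2.6 shell; a 2.2 shell only knows the older db.addUser() format
    use admin
    db.createUser({
      user: "appUser",
      pwd: "secret",
      roles: [ { role: "readWrite", db: "myapp" } ]
    })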
[08:29:41] <jasvir> hello all. I am looking for a demo application using mongodb and django. Does anyone have some recommendations?
[08:40:22] <nfroidure_> Does the $addToSet operator take more than one field? Something like $addToSet: ['$user', '$owner'] ?
[10:00:53] <rspijker> nfroidure_: do you want to add multiple things to a single field? Or do you want to add a single thing to multiple fields?
[10:01:09] <rspijker> because the addToSet in your example doesn’t make that much sense to me
[10:15:51] <nfroidure_> rspijker, i have two fields containing similar concepts, i'd like to select distinct rows based on those 2 fields combined.
[10:17:03] <nfroidure_> putting them in a set is the first step in my current attempt to achieve this
[10:18:25] <nfroidure_> i wanted to group on id, put field1 and field2 in a set, unwind on that set, and then group on the unwound value
[10:19:51] <rspijker> why don’t you just group on them directly?
[10:20:58] <nfroidure_> if i group them directly, i'll potentially have the same value twice
[10:21:05] <nfroidure_> that's what i want to avoid
[10:22:21] <rspijker> I’m not really sure how… but ok.
[10:22:37] <rspijker> if you want to add values from multiple source fields to a single field, you can use the $each modifier
[10:22:44] <rspijker> it’s documented on the $addToSet operator
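For reference, the $each rspijker points at is an update modifier, e.g. { $addToSet: { people: { $each: [ "a", "b" ] } } }. For the distinct-values-across-two-fields goal, a hedged 2.6 aggregation sketch, reusing the $user/$owner names from the example above:

    // collect both fields into per-group sets, merge them with $setUnion,
    // then unwind and group again so each distinct value appears exactly once
    db.docs.aggregate([
      { $group: { _id: "$_id",
                  users:  { $addToSet: "$user" },
                  owners: { $addToSet: "$owner" } } },
      { $project: { both: { $setUnion: [ "$users", "$owners" ] } } },
      { $unwind: "$both" },
      { $group: { _id: "$both" } }
    ])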
[10:26:08] <yruss972> Can someone take a look at this output from mongostat: http://pastebin.com/S6hx7U4D
[10:26:58] <yruss972> Our servers are showing mongodb using massive amounts of swap but really, we have very small databases with relatively little activity :?
[10:46:11] <nfroidure_> rspijker, thanks for the tip
[11:20:38] <yruss972> can't imagine a good reason to use so much memory
[11:20:39] <r1pp3rj4ck> i sent a mail to the users list https://groups.google.com/forum/#!topic/mongodb-user/r2ru3Mv6HJo
[11:20:52] <r1pp3rj4ck> and i figured i could do some benchmarking myself too
[11:20:54] <kali> yruss972: it's not memory, it's address space
[11:21:36] <yruss972> kali: but you agree that the numbers are weird?
[11:21:54] <kali> yruss972: but yeah, the figures seem atypically high. chances are smartos instrumentation of mmap is different from the more mainstream kernels
[11:22:23] <kali> yruss972: mmap implementation itself can differ for all i know
[11:22:23] <r1pp3rj4ck> this is what i have now for the bench https://gist.github.com/36c920b44d93603e0195
[11:22:50] <yruss972> In the time since the last paste- the process has reached 111G :/
[11:23:02] <r1pp3rj4ck> and it sometimes prints what it needs to print, but sometimes it throws this error: 2014-07-10T13:20:37.595+0200 error hasNext: false at src/mongo/shell/query.js:127
[11:25:35] <r1pp3rj4ck> but i assume it's not what causes this error
[11:28:38] <kali> r1pp3rj4ck: i agree, but i don't see anything fishy except this: when you ask for a sort on a big chunk of data (iirc, 1000 documents) with no matching index, bad things happen
[11:28:49] <kali> r1pp3rj4ck: sorry i can't be more specific here
[11:28:58] <kali> r1pp3rj4ck: try it with the index
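The gist itself isn't reproduced here, but kali's suggestion amounts to something like the following, with a hypothetical collection and sort field. Without a matching index the sort happens in memory and can abort the cursor, which surfaces in the shell as exactly that hasNext error:

    db.bench.ensureIndex({ ts: 1 })
    db.bench.find().sort({ ts: 1 }).limit(1000)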
[11:49:33] <kali> sweb: if you're not doing anything smart when generating them, if your servers are on time, and if you don't care about the one second granularity, it's safe
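sweb's question didn't make it into this log, but the caveats (how they're generated, server clocks, one-second granularity) read like they are about trusting the timestamp embedded in ObjectIds:

    // the first four bytes of an ObjectId are its creation time, whole seconds only
    var id = ObjectId();
    id.getTimestamp()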
[12:02:49] <Industrial> Say I'm receiving a stream of millisecond-accurate sensor data and I don't know upfront for each message which collection to put it in
[12:03:08] <Industrial> doing a db.collection(SOMEVAR).insert(data) A LOT of times per second
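db.collection(SOMEVAR) matches the Node.js driver API. A minimal sketch of that pattern, with hypothetical names, caching the handles so each message only costs the insert itself:

    var handles = {};  // collection handles keyed by name

    function insertReading(db, name, data) {
      if (!handles[name]) handles[name] = db.collection(name);
      handles[name].insert(data, function (err) {
        if (err) console.error('insert into ' + name + ' failed:', err);
      });
    }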
[13:41:34] <czajkowski> Folks, have you seen? The call for participation for main tracks and developer rooms at FOSDEM is now open: https://lists.fosdem.org/pipermail/fosdem/2014-July/002010.html
[14:54:39] <Industrial> considering the number of tickets, its level, the time it was created, and its current status, i don't think +1 will make a dent here
[15:45:21] <dypsilon> Hi, what is the point of such detailed RBAC authorization when mongodb forces one user per connection and the connection overhead is pretty high (approx. 10MB per connection)? Do I understand mongodb's security model correctly?
[15:51:38] <BaNzounet> Hey there, if I have to do a "join"-like thing, I have to do it externally, right?
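There is indeed no server-side join at this point, so the usual pattern is a second query from application code. A sketch with hypothetical collections and field names:

    // fetch the parent, then the document it references
    var order = db.orders.findOne({ _id: 123 });
    var user = db.users.findOne({ _id: order.userId });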
[16:33:20] <adamcom> where are you getting 10MB per connection?
[16:37:22] <dypsilon> So is it sane to base the security of the application completely on mongodb's security and create one connection per user?
[16:37:43] <adamcom> weirdly enough, that's the second time I've answered that in a week - don't think it was mentioned for months before that
[16:38:25] <adamcom> dypsilon: I'd never delegate all security to a database, regardless of which one
[16:39:04] <dypsilon> well, not all security but the access control part
[16:39:36] <adamcom> enforce on both, sure, and mismatches then act as a kind of checksum, but I would not want to be waiting on a bug fix from the database to plug a hole in my app
[16:40:04] <adamcom> plus, if your needs diverge and you need greater granularity…
[16:40:11] <dypsilon> that is good advice, indeed
[16:42:33] <adamcom> and, you have the headache of what happens with connection pooling and connection re-use - if you have to make it such that you tear down all connections… well, let's just say I have seen that (because of a bug with read preferences) and it ain't pretty - I've seen mongod go crazy because Linux was creating and destroying thousands of connections a second, ran out of ephemeral ports and all sorts of other ugliness. Pooling and re-use are needed for really
[16:43:07] <adamcom> not to mention the overhead - even at 1MB per connection, they still add up quickly (20GB of mem for 20,000 connections is still a lot for most people)
[17:31:57] <Nikola_> Unable to stop my balancer. Any advice?
[17:38:37] <Nikola_> Actually I managed to stop the balancer
[17:38:55] <Nikola_> But when i try to manually move a chunk I get error message "moveChunk failed to engage TO-shard in the data transfer: still waiting for a previous migrates data to get cleaned, can't accept new chunks"
[17:41:21] <adamcom> that's exactly what it sounds like - when a chunk is moved off a shard a background delete thread is spawned to clean up the chunk that was moved off
[17:41:39] <adamcom> if too many are active, it won't accept new chunks until the deletes finish
[17:42:03] <adamcom> the deletes should be pretty quick - after all, it's a delete on a chunk that was just read into memory recently in order to be migrated
[17:42:26] <adamcom> but if the shard is struggling, then they will take a while, and so it tells other migrations to back off until they finish
[17:42:52] <adamcom> you can stop them by stepping down the primary, but you will then have orphaned docs
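The step-down itself is one command, run on the shard's primary; 60 seconds is a hypothetical window during which it won't seek re-election:

    // the background delete threads die with the stepped-down primary;
    // any documents they hadn't removed yet are left behind as orphans
    rs.stepDown(60)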
[17:51:48] <Nikola_> hm. Not sure. There should be no load on the cluster now. I used to have 3 replica sets and I added another 6
[17:52:01] <Nikola_> created a new collection on rs7
[17:53:00] <Nikola_> the balancer was not distributing this collection so i tried a manual move of a chunk from rs7 to a new shard, rs4, and it worked. But i get the error when i try to move chunks to rs0,1,2
[17:53:12] <Nikola_> aka the original shards in the cluster
[17:54:04] <Nikola_> It has been in this state for days now, so i doubt it is just taking this long to delete the data
[18:07:24] <Nikola_> How can i see which chunks have not been cleaned up yet?
[18:07:41] <adamcom> if you are on 2.6, there is cleanupOrphaned
[18:07:57] <adamcom> before that, it needs scripts
[18:09:48] <adamcom> could be stuck, or the original shards could still be doing deletes - they would have been the sources of all migrations initially since the others were empty, so a lot of migrations from them
[18:10:53] <adamcom> as mentioned, you can step down the primary, clean up the orphans later with that command on 2.6
[18:11:00] <adamcom> for 2.4, there is a JS version: https://github.com/mongodb/support-tools/tree/master/orphanage
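On 2.6 the documented pattern is to run cleanupOrphaned against each shard's primary in a loop until it stops handing back a key; the namespace here is hypothetical:

    var nextKey = {};
    while (nextKey != null) {
      var result = db.adminCommand({
        cleanupOrphaned: "mydb.mycoll",
        startingFromKey: nextKey
      });
      if (result.ok != 1) { printjson(result); break; }  // failed or timed out
      nextKey = result.stoppedAtKey;
    }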
[18:36:16] <user123321> cheeser, I'd have a minimum of 2 identical Apache servers, load balanced, and I thought of using one common storage for all of the Apache servers.
[18:36:29] <user123321> cheeser, any advice on my scenario?
[18:37:32] <user123321> common storage would contain mongo db database
[18:39:34] <cheeser> why would the apache servers matter?
[19:08:55] <user123321> cheeser, sorry? hmm, I'd like to make my Apache servers connect to one data store, is this ok?
[19:09:53] <cheeser> well, your client code would just talk to wherever mongod is running.
[19:12:04] <user123321> cheeser, Solution 1: install mongo DB servers on both Apache servers, pointing at the remote storage. Solution 2: install the mongo DB server on the common storage and let the clients connect.
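cheeser's point reads as solution 2 being the normal shape: the web servers don't matter, each one just opens a client connection to wherever mongod runs. A sketch with the Node.js driver and a hypothetical host (the same idea holds for any driver):

    var MongoClient = require('mongodb').MongoClient;

    // both Apache boxes run identical client code; only the URL matters
    MongoClient.connect('mongodb://db.internal:27017/myapp', function (err, db) {
      if (err) throw err;
      db.collection('things').findOne({}, function (err, doc) {
        console.log(doc);
        db.close();
      });
    });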
[19:24:07] <user123321> cheeser, one of my friends has hosted a mongo DB on a remote host, I could access it with Robomongo, question is, could I get a copy of that db and host it on mine?
[20:54:20] <saml> db.docs.find({ModelNumber: '232.34'}).limit(3) only first 3
[20:58:23] <nylon> it's a text search so i can't limit it to just one field, because the search text may be contained in the itemName field or other fields (which i've had to omit due to the data being sensitive)
[21:01:08] <saml> oh i don't know then.. i use solr for full text search. haven't used full text search in mongo
[21:01:36] <saml> you might have to combine all fields
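On 2.6 there's no need to literally concatenate fields: a single text index can span several fields, or every string field via the $** wildcard. Collection and field names hypothetical:

    // index specific fields...
    db.items.ensureIndex({ itemName: "text", description: "text" })
    // ...or every string field in the document (a collection gets only one text index)
    // db.items.ensureIndex({ "$**": "text" })

    db.items.find({ $text: { $search: "widget" } })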
[21:03:11] <nylon> anybody here a text search expert?
[21:26:29] <staykov> if im using $elemMatch is there a way to also get fields from the parent object?
[21:26:55] <staykov> following: http://docs.mongodb.org/manual/reference/operator/query/elemMatch/ i mean can i also get a field from grades?
[21:27:17] <staykov> i am trying it by putting the field in the selector but it's not working, just checking if it's possible
[21:31:52] <staykov> nevermind i wasnt using my lib properly
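For the record, the original question does have an answer: a projection can mix parent fields with $elemMatch. A sketch with hypothetical data:

    // 'name' comes from the parent document; 'grades' is trimmed to the
    // first array element matching the $elemMatch condition
    db.students.find(
      { grades: { $elemMatch: { score: { $gte: 90 } } } },
      { name: 1, grades: { $elemMatch: { score: { $gte: 90 } } } }
    )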