#mongodb logs for Monday the 21st of September, 2015

[00:25:19] <StephenLynx> cheeser the cursor still closes after a couple of documents
[00:27:10] <cheeser> that's weird. between tail and await, it should just sit there
[00:27:25] <StephenLynx> and I tried just not doing anything with the documents, still closes.
[00:28:32] <StephenLynx> I think it's closing based on time.
[00:28:45] <StephenLynx> I didn't even send any documents and it closed after a few seconds.
[00:29:15] <cheeser> are there docs already in the collection when you open the cursor?
[00:29:38] <StephenLynx> yes.
[00:29:54] <cheeser> hrm. that should work.
[00:30:10] <cheeser> maybe a driver bug
[00:30:14] <StephenLynx> yeah, it's really weird: it reads all documents, works for a time, then closes.
[00:30:40] <StephenLynx> http://pastebin.com/4zEHPHD9
[00:30:45] <StephenLynx> this is the full code that handles it.
[00:31:03] <StephenLynx> it receives all existing documents, then it awaits new ones and works as intended.
[00:31:09] <StephenLynx> then it closes.
[00:31:38] <cheeser> hrm. yeah, i don't know what to tell you. i thought that'd work...
[00:39:24] <StephenLynx> god damn it, I hate jira
[00:39:43] <StephenLynx> "hurr, your password must have letters, numbers and an elven rune"
[00:44:08] <StephenLynx> https://jira.mongodb.org/browse/NODE-567 here
[01:00:33] <StephenLynx> cheeser I read this here: http://docs.mongodb.org/v3.0/reference/method/cursor.addOption/ " The sequence creates a cursor that will wait for few seconds after returning the full result set so that it can capture and return additional data added during the query:"
[01:00:49] <StephenLynx> so the cursor is not intended to stay open until manually closed?
[01:03:45] <cheeser> that's not my understanding
[01:04:33] <StephenLynx> have you ever used a tailable cursor like this? open it, leave it open for long periods of time and grabbing new documents?
[01:07:50] <cheeser> not for a long, long time, no. but i've used it in unit tests which doesn't really help with this...
[01:08:07] <cheeser> have you tried doing in the shell and ruling out the node driver?
[01:09:28] <StephenLynx> tried, but couldn't figure out how to fetch it from the terminal
[01:09:45] <StephenLynx> the command on that link didn't give me the new documents
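For reference, the pattern under discussion looks roughly like this in the shell, against a hypothetical capped collection (flags per the cursor.addOption page linked above):

```javascript
// Hypothetical capped collection; the flags mirror the docs page above.
var cursor = db.cappedLog.find().addOption(DBQuery.Option.tailable)
                                .addOption(DBQuery.Option.awaitData);
while (cursor.hasNext()) {
    printjson(cursor.next());  // existing docs first, then block for new inserts
}
// once hasNext() returns false the cursor is dead -- the premature
// close StephenLynx is reporting against the node driver in NODE-567
```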
[01:19:57] <edrocks> is there any way to set the primary in a replica set if you only have two members? Does increasing the priority of one work with only two members?
[01:23:30] <cheeser> use an arbiter
[01:25:42] <jtthedev> Sup al... Is there a reason I can't call substr in a function?
[01:25:46] <jtthedev> *all
[01:26:16] <jtthedev> ill create a gist
[01:26:17] <jtthedev> one sec
[01:28:52] <jtthedev> eg; https://gist.github.com/jterrero/cf59a13dac03f8d2a64d
[01:29:56] <jtthedev> works fine if I am not exporting it and sending results to console
[01:30:16] <jtthedev> but when adding similar logic to the app, it's giving me a TypeError
[01:30:57] <edrocks> cheeser: I only have two physical hosts though
[01:32:19] <joannac> edrocks: you can put the arbiter on a tiny machine
[01:32:32] <joannac> edrocks: in any case, yes, you can change the priority
[01:34:44] <edrocks> joannac: will a priority update work without the arbiter? they're colo servers so I'd have to host the arbiter elsewhere
[01:36:56] <joannac> yes
[01:37:06] <joannac> having an arbiter is just for HA reasons
[01:37:10] <joannac> currently you have no HA
[01:37:40] <edrocks> I know we have to buy another server for HA or go get a vm on gce or linode
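For the record, a priority change is a reconfig from the shell; a minimal sketch, with member array indexes chosen for illustration:

```javascript
// Run on the current primary; member indexes are illustrative.
cfg = rs.conf()
cfg.members[0].priority = 2   // make this member the preferred primary
cfg.members[1].priority = 1
rs.reconfig(cfg)              // may trigger an election
```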
[05:00:14] <antiPoP> Hi, I have a document A which has an array of B subdocuments. I want to get all A documents but filter B subdocuments based on a criteria.
[05:00:39] <antiPoP> This means if there is no match for B, I still can get A with no subdocuments. Can this be done?
[05:18:52] <antiPoP> nobody?
[05:43:16] <joannac> antiPoP: doesn't work like that
[05:43:33] <joannac> you could try the aggregation framework maybe
[05:53:48] <antiPoP> joannac, then this can't be done just with $elemMatch?
[05:55:14] <joannac> antiPoP: "if there is no match for B, I still can get A with no subdocuments" -> that's not how elemMatch works
[05:56:48] <antiPoP> joannac, not according to the docs, see http://docs.mongodb.org/manual/reference/operator/projection/elemMatch/#zipcode-search
[05:57:02] <antiPoP> elem with id 2 is missing there
[05:57:30] <antiPoP> mmm
[05:57:32] <antiPoP> wait
[06:00:49] <joannac> antiPoP: if you're happy with just the first matching B, sure
[06:02:06] <antiPoP> joannac, and If I need all matching B?
[06:04:16] <joannac> then you should be storing them in an array
[06:04:20] <joannac> shouldn't*
[06:04:41] <joannac> and you can use the aggregation framework
[06:05:32] <antiPoP> joannac, so moving B to a Document?
[06:12:50] <antiPoP> joannac, here is what I'm trying to achieve: https://gist.github.com/antiPoP/2e9eb36852c3e018b894
[06:12:50] <joannac> antiPoP: whatever makes sense for your application and usage patterns
[06:13:42] <antiPoP> The issue is that I don't want to use a relational approach... I would have used mysql then
[06:18:40] <joannac> antiPoP: well, your schema is not working for this use case.
[06:19:50] <antiPoP> joannac, seems you are right, I will rewrite it
[06:19:58] <antiPoP> got an idea
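A sketch of the aggregation approach joannac is pointing at; field names are illustrative, not taken from the gist:

```javascript
// Field and collection names are illustrative.
db.A.aggregate([
  { $unwind: "$subdocs" },                      // one document per B
  { $match: { "subdocs.status": "active" } },   // keep only matching Bs
  { $group: { _id: "$_id", subdocs: { $push: "$subdocs" } } }
])
// caveat: A documents with no matching B drop out of this pipeline,
// which is part of why the schema change came up
```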
[07:47:56] <antiPoP> how can I remove subdocuments from a query?
[08:12:23] <Mick27> Hello Folks
[08:12:51] <Mick27> what is the best practice for user creation, should I store the user in the db it has access to, or in the admin db?
[08:32:00] <mtree> why are you having a separate db for admin?
[08:43:41] <Mick27> mtree: I have nothing for now, I am just fishing for info :)
[09:06:28] <mtree> just keep all the users in a single collection
[12:40:19] <ams_> My mongod --journal recovery is taking a long time (1 hour and counting)
[12:40:27] <ams_> Any way to estimate how long it will take?
[12:59:07] <ams_> If I interrupt this will it recover from now or start all over again?
[13:05:10] <asteele> wish i could help ya mate but not sure
[13:05:47] <asteele> hopefully someone else with some more knowledge will be here ;)
[13:07:04] <ams_> Thanks :-)
[13:08:03] <ams_> The funny thing is, the process doesn't seem to be doing all that much. CPU + I/O is pretty low.
[13:17:02] <MatheusOl> I might be wrong, but journal recovery is completely serial
[13:20:21] <cheeser> by nature, it has to be...
[13:20:51] <ams_> It gets flushed every 60 seconds though, right? So I wouldn't have expected it to be huge
[13:20:56] <antiPoP> Is there something to populate mongo with fake data using mongoose?
[13:21:56] <ams_> We're on "Processing commit number 75585" now
[13:22:56] <cheeser> antiPoP: this might still work https://github.com/bak/mongo_populator
[13:23:43] <antiPoP> cheeser, I'll take a look, thanks
[13:24:17] <antiPoP> mmm... ruby :(
[14:15:01] <jordonbiondo> can anybody tell me why an ObjectId is supposedly a 12 byte value, but represented by a 24 byte hex string?
[14:15:20] <jordonbiondo> or for that matter, how can I get the actual 12 byte value from an ObjectId?
[14:18:44] <cheeser> depends on the language, i'd imagine
[14:19:48] <deathanchor> jordonbiondo: not sure what you mean... here is all you need: http://docs.mongodb.org/manual/reference/object-id/
[14:20:20] <jordonbiondo> I've read that document, that's where my question comes from
[14:21:04] <deathanchor> what are you going to do with the 12-bytes of data?
[14:21:19] <jordonbiondo> if the internal structure of an ObjectId is 12 bytes, why is it always represented with a 24 byte hex string? And is there a way to actually get the real 12 byte value from the hex representation
[14:22:05] <deathanchor> would you rather have zeros and ones?
[14:23:30] <jordonbiondo> omg
[14:23:45] <jordonbiondo> I'm an idiot
[14:24:40] <jordonbiondo> For some reason I was thinking 1 byte = 1 hex char, but it's 2 hex chars, looks like I have no issue
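In shell terms, the arithmetic works out like this:

```javascript
var id = ObjectId()   // 12 bytes internally
id.str                // e.g. "55ff1a2b3c4d5e6f70819203"
id.str.length         // 24: each byte prints as two hex characters
id.str.length / 2     // 12 bytes
```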
[14:25:36] <deathanchor> omg? is that Omicron Mayhem Games?
[14:25:55] <ams_> Anyone know if I stop my "mongod --journal" recovery midway, will it continue from where it left off?
[14:41:39] <deathanchor> ams_: not sure what you mean, the journal option is to use journalling.
[14:42:36] <ams_> deathanchor: it's used to apply journal updates, is it now?
[14:43:07] <ams_> *is it not
[14:43:20] <deathanchor> http://docs.mongodb.org/manual/tutorial/manage-journaling/#enable-journaling
[14:44:43] <deathanchor> it only turns on journalling, and then here is what happens after a crash when you had journalling on: http://docs.mongodb.org/manual/tutorial/manage-journaling/#recover-data-after-unexpected-shutdown
[14:46:06] <deathanchor> I've only had a few dirty shutdowns and journalling recovers automatically 90% of the time. the other 10% I had to do a full resync.
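As an aside, journaling activity is visible from the shell; a minimal sketch, assuming MMAPv1 with journaling enabled (per the pages linked above):

```javascript
// "dur" only appears in serverStatus output when journaling is on (MMAPv1).
db.serverStatus().dur   // commits, journaledMB, writeToDataFilesMB, ...
```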
[14:49:48] <androidbruce> i'm trying to enable ssl connections for mongodb 2.6, but it seems that ssl isn't compiled into the distributed packages?
[14:50:01] <cheeser> androidbruce: correct
[14:50:09] <cheeser> i think that's fixed in 3.0 packages, though.
[14:50:17] <androidbruce> cheeser: is there any work around?
[14:50:29] <ams_> deathanchor: ah maybe i'm being dumb then
[14:50:29] <cheeser> something to do with openssl/linux distro variance blahblahblah
[14:50:36] <androidbruce> makes sense.
[14:50:41] <cheeser> androidbruce: build from source or upgrade, really.
[14:50:52] <androidbruce> cheeser: yeah, that's what i assumed. thank you for confirming
[14:50:57] <cheeser> there might be a better choice. i haven't used ssl with mongo, though.
[14:51:01] <cheeser> androidbruce: no problem
[15:00:07] <StephenLynx> yeah. I ended up using TCP for what I was trying to use capped collections for
[15:00:26] <StephenLynx> the idea of reopening the cursor every 10 seconds didn't appeal to me.
[15:01:22] <bosyi> hi
[15:01:37] <bosyi> if I found a mistake in the docs, what do I need to do?
[15:01:54] <bosyi> http://docs.mongodb.org/master/tutorial/query-documents/#match-a-field-in-the-embedded-document-using-the-array-index
[15:01:55] <Derick> you can file a docs bug at http://jira.mongodb.org/browse/DOCS
[15:02:37] <Derick> what's the prob though?
[15:04:59] <bosyi> db.inventory.find( { 'memos.0.by': 'shipping' } )
[15:05:27] <Derick> yes, what about it?
[15:07:09] <bosyi> Consider that the inventory collection includes the following documents: *documents*. this .find() doesn't find anything
[15:07:44] <bosyi> there is no embedded doc at index 0 with a field 'by' that equals 'shipping'
[15:07:44] <Derick> sorry, it's not wrong.
[15:07:52] <Derick> sure there is, in:
[15:07:58] <Derick> memos: [ { memo: "on time", by: "shipping" }, { memo: "approved", by: "billing" } ]
[15:08:04] <Derick> in the first document just mentioned above
[15:22:15] <bosyi> yeah. i'm fucked up(
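To spell out why the query matches, using the first sample document Derick quoted:

```javascript
// The first sample document on the docs page has this shape:
db.inventory.insert({
  memos: [ { memo: "on time", by: "shipping" },
           { memo: "approved", by: "billing" } ]
})
// "memos.0.by" addresses the element at array index 0, whose
// "by" field is "shipping", so this matches:
db.inventory.find({ "memos.0.by": "shipping" })
```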
[17:22:16] <makufiru> Hey all, I'm looking for some resources that talk about use cases for MongoDB over SQL, and the benefits involved in that route. I'm also interested in hearing your own views if you have any.
[17:27:02] <StephenLynx> when you expect your dataset to require more and more space and you want to add more servers to the stack with ease
[17:27:35] <StephenLynx> when you can afford not having ACID and transactions are not worth the slower development cycle of using an RDB
[17:27:53] <StephenLynx> when you need to handle large datasets with efficiency
[17:28:02] <StephenLynx> when you have to store files in the database
[17:28:10] <StephenLynx> when you have to handle geo location queries.
[17:29:34] <makufiru> Thanks @StephenLynx those are really helpful points
[17:30:56] <StephenLynx> ah, when your data is not too relational
[17:31:43] <StephenLynx> also, there is a use-case for other no-sql databases.
[17:31:54] <StephenLynx> there is this misconception that there is sql and no-sql.
[17:32:05] <StephenLynx> the truth is that there is sql and then a lot of databases.
[17:32:35] <StephenLynx> there's little similarity in use cases and technology between mongo and redis, for example
[17:32:44] <StephenLynx> or between any of these and a graph database.
[17:32:57] <makufiru> Very interesting
[17:33:07] <StephenLynx> afaik, only sql dbs are similar to each other.
[17:33:15] <StephenLynx> when you leave that group, it's quite wild.
[17:33:24] <makufiru> And then hardly so, even then
[17:33:40] <makufiru> Large differences between mySQL/postgreSQL/MSSQL for instance
[17:33:46] <StephenLynx> yeah, but they share SOMETHING at their base.
[17:33:52] <StephenLynx> witch is SQL
[17:34:01] <deathanchor> a WITCH!
[17:34:06] <makufiru> But there are even gaps between what parts of SQL they all support
[17:34:06] <StephenLynx> :v
[17:34:20] <StephenLynx> yeah, but in comparison with the nothing that other databases share
[17:34:37] <makufiru> fair point
[18:22:39] <Doyle> Hey. Will this have any effect when the URI is pointing at 3 mongos rather than the RS members themselves? MongoConfigUtil.setReadSplitsFromSecondary
[18:23:09] <Doyle> for com.mongodb.hadoop.util.MongoConfigUtil
[19:57:33] <deathanchor> does mongodb 3.0 do collection write locks for updates that affect an index on that collection?
[20:24:30] <pokEarl> Hey friends, so I am told that it's bad to use FindOneById if you are just checking if the object is there, because using find.limit(1) and then .hasNext does not read the entire object. Is the same true if you are using projections then? Like if you exclude a lot of the object, will a find be faster than a findOne, because findOne does not exclude or something? I don't know :(
[20:35:21] <StephenLynx> >FindOneById
[20:35:28] <StephenLynx> I think that's from your driver.
[20:35:42] <StephenLynx> I don't remember ever seeing such function.
[20:36:25] <skullcrasher> can anybody tell me how I can update an array item with Java in mongodb?
[20:36:25] <StephenLynx> and from what I heard, a findOne is nothing but a find with a limit(1)
[20:36:26] <pokEarl> Yeah you are right, I meant findOne
[20:36:38] <StephenLynx> so there isn't any fundamental difference between them
[20:36:54] <StephenLynx> skullcrasher I think cheeser can
[20:37:02] <pokEarl> ook thanks
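A minimal shell sketch of the existence check being discussed; the collection name and someId are illustrative:

```javascript
// Project only _id so almost nothing is read or returned:
db.items.find({ _id: someId }, { _id: 1 }).limit(1).hasNext()

// vs. findOne, which fetches and returns the whole matching document:
db.items.findOne({ _id: someId })
```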
[20:39:53] <deathanchor> does mongodb 3.0 do collection write locks for updates that affect an index on that collection?
[21:09:22] <Torkable> is $regex slow like $where?
[21:09:29] <Torkable> seems like it would be slow
[21:09:30] <cheeser> depends on the regex
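To illustrate cheeser's point, a sketch assuming a hypothetical users collection with an index on { name: 1 }:

```javascript
db.users.find({ name: /^steph/ })  // anchored, case-sensitive prefix: can use the index
db.users.find({ name: /steph/ })   // unanchored: must examine every indexed value
```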
[23:24:10] <jigax> hello everyone. I'm trying to figure out the best way to design my schema. let's say I have a product and I would like to keep track of quantity for multiple users. does it make sense to push the user and quantity into an array in products?
[23:24:34] <StephenLynx> I would keep track of that in the user's own document.
[23:24:50] <StephenLynx> and duplicate the product identification into the array that contains that user's inventory.
[23:24:54] <StephenLynx> like
[23:25:13] <jigax> I see
[23:25:21] <StephenLynx> user = { products: [ { id: x, amount: 1 } ], name: "auser" }
[23:25:35] <jigax> awesome
[23:25:37] <jigax> thanks
[23:25:51] <StephenLynx> otherwise each product's document would potentially become huge
[23:26:03] <jigax> makes sense
[23:26:07] <StephenLynx> since it's more usual for a product to be owned by many users than for a user to own many products.
[23:26:14] <jigax> thanks for clarifying
[23:26:24] <jigax> ;)
[23:26:27] <StephenLynx> and even then you are usually dealing with all products instead of dealing with all users at once.
[23:26:38] <StephenLynx> but all of these are assumptions.
[23:26:46] <StephenLynx> your scenario might not fit what I assume.
[23:26:53] <jigax> your assumptions hit the nail on the head ;)
[23:26:57] <StephenLynx> v:
[23:27:06] <jigax> right on ;)
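A slightly fuller sketch of the shape StephenLynx is suggesting, with a quantity update; collection and field names are illustrative:

```javascript
// Ownership lives on the user; only the product id is duplicated.
db.users.insert({
  name: "auser",
  products: [ { id: productId, amount: 1 } ]
})

// Bump one user's quantity for one product via the positional operator:
db.users.update(
  { name: "auser", "products.id": productId },
  { $inc: { "products.$.amount": 1 } }
)
```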