#mongodb logs for Friday the 23rd of January, 2015

[01:18:04] <toter> Hi everybody... I have a user on my mongodb server that connects successfully using db.auth(). When I try to connect using Robomongo with authentication, it doesn't connect... Do I need to create this user again with additional roles?
[01:19:38] <joannac> what version of mongodb
[01:19:52] <joannac> are you sure you're authing against the right database with robomongo?
[01:26:19] <toter> joannac: mongodb version 2.6.7
[01:28:37] <toter> joannac: I'll run other tests here...
[01:29:22] <joannac> toter: robomongo doesn't support 2.6 stuff
[01:29:44] <joannac> google for it, but last I checked they only did 2.4
[01:29:52] <toter> joannac: Good to know... I'll try other mongodb gui's
[01:30:57] <joannac> toter: stennie is working on 2.8 stuff for it; it's a WIP
[01:36:03] <toter> joannac: I am so sorry to bother you... I found the problem: Firewall rules... It's working now
[01:36:25] <toter> joannac: Thank you for your help
[03:57:26] <hicker> Any Mongoose people? How do I execute something before every CRUD operation?
[03:59:02] <jsjc> what could be the issue for mongo getting faults/s all the time?
[03:59:21] <jsjc> How to troubleshoot it?
[04:27:48] <christo_m> is this the place to ask about mongoose also?
[04:29:51] <christo_m> I'm experiencing the same issue as this person: https://github.com/LearnBoost/mongoose/issues/1204 , with this code http://pastie.org/9853384 , trying to save this data: http://pastie.org/9853385
[04:30:10] <christo_m> the items subdocument isn't being updated.
[04:33:36] <christo_m> https://github.com/LearnBoost/mongoose/issues/2210 nvm. apparently saving on embedded documents isn't supported..
[04:38:42] <christo_m> oh wait, even better: doc.markModified('array');... apparently mongoose can't detect if an array has changed *facepalm*
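
For context, a minimal Mongoose sketch of the markModified workaround christo_m lands on; the `List` model, `listId`, and the `quantity` field are hypothetical, not taken from the pastes above.

    // In-place edits to array/mixed paths are not always picked up by Mongoose's
    // change tracking, so flag the path before saving.
    List.findById(listId, function (err, doc) {
      if (err) return console.error(err);
      doc.items[0].quantity = 5;    // in-place change Mongoose may not detect
      doc.markModified('items');    // explicitly mark the array path as dirty
      doc.save(function (err) {
        if (err) return console.error(err);
      });
    });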
[04:43:47] <DragonPunch> mongoose distinct between two fields??
[04:43:50] <DragonPunch> anyone know how to get that?
[07:39:56] <aaearon> i have a query that i want to return only if the particular document's codes are all marked 'complete':True. this is what i have so far: http://pastie.org/9853606
[07:40:02] <aaearon> is it possible to do this and how should i modify my query?
[07:45:08] <DragonPunch> what does mongodb return if it can't find anything
[07:45:34] <aaearon> from the console, nothing from my experience
[07:56:26] <joannac> aaearon: db.foo.find({"codes.complete": { $nin: [ false ]}})
[07:57:01] <joannac> DragonPunch: nothing. But not "undefined", as someone claimed the other day
[07:59:30] <aaearon> great joannac thank you!
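
To illustrate how joannac's $nin query behaves, here is a sketch against a hypothetical `foo` collection (the documents are made up): only documents whose codes array contains no element with complete: false come back.

    db.foo.insert({ _id: 1, codes: [ { code: "a", complete: true }, { code: "b", complete: true } ] })
    db.foo.insert({ _id: 2, codes: [ { code: "c", complete: true }, { code: "d", complete: false } ] })

    // Matches _id: 1 but not _id: 2, since _id: 2 has a code with complete: false.
    db.foo.find({ "codes.complete": { $nin: [ false ] } })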
[11:27:23] <gildo7> hi everybody
[12:37:41] <gildo7> I have a little question about a "join"
[12:37:48] <gildo7> 2 collections: notes and users
[12:37:58] <gildo7> a user can bookmark multiple notes
[12:38:33] <gildo7> when I fetch notes I'd like the json of each note to contain a boolean field "isBookmarked"
[12:39:34] <gildo7> clearly I could run two queries and "enrich" the response with some application logic
[12:39:49] <gildo7> I was just wondering if there's a way to do it in a single query to mongodb
[12:49:14] <StephenLynx> there are no joins in mongo.
[12:49:19] <StephenLynx> so, no.
[12:49:34] <StephenLynx> any tool that promises something like a join is just masquerading multiple queries.
[12:50:20] <gildo7> fair enough StephenLynx
[12:50:26] <gildo7> how would you solve the problem?
[12:50:51] <StephenLynx> I would model in such a way that I wouldn't need to use anything like a join.
[12:51:10] <gildo7> my problem is that notes are batch updated weekly
[12:51:25] <gildo7> so what I do is throw away all the old notes and add the new ones
[12:51:56] <gildo7> so I cannot add an array of users ids to a note (which is how I'd solve it usually)
[12:52:39] <StephenLynx> you can have an array with note ids on the users.
[12:52:53] <StephenLynx> so you are able to list them without a second query
[12:53:09] <StephenLynx> and you are able to fetch a single note when you need to.
[12:54:08] <gildo7> that's exactly what I do, I have an array of note ids on users
[12:54:33] <StephenLynx> so your model is already ok.
[12:54:54] <StephenLynx> now in your controller you just don't fetch the note itself when listing them to the user.
[12:54:56] <gildo7> but when the user runs a search I display the notes in a different way, according to the fact that they are bookmarked by that particular user or not
[12:55:31] <StephenLynx> can multiple users bookmark the same note?
[12:55:35] <gildo7> yes
[12:55:54] <StephenLynx> I would have a second list on the user with the notes he has bookmarked.
[12:56:05] <StephenLynx> you just have one with the ones he owns, right?
[12:56:29] <gildo7> yes, an array of ids, but I could add the whole note
[12:56:45] <StephenLynx> I would store the notes in a separate collection
[12:56:55] <gildo7> they already are
[12:57:01] <StephenLynx> ok
[12:57:11] <StephenLynx> so yeah, that's what I would do.
[12:57:28] <StephenLynx> you don't even need an array with the ids of notes the user owns
[12:57:38] <gildo7> ok but a note now does not have a "isBookmarked" field
[12:57:45] <StephenLynx> I would take it off.
[12:57:54] <gildo7> which is exactly what I'd like to add
[12:58:13] <StephenLynx> you can't have a boolean if the bookmark relation is n*n
[12:58:25] <StephenLynx> bookmarked by whom?
[12:58:35] <gildo7> sorry maybe my explanation was confused lemme retry
[12:58:55] <gildo7> a user, logged in, runs a search on notes
[12:59:24] <gildo7> the returned array can contain notes that he bookmarked or not
[12:59:55] <StephenLynx> and you wish to differentiate the ones he bookmarked?
[13:00:05] <gildo7> so I'd like the web service to return a JSON that, for each note, has a boolean "isBookmarked" field
[13:00:29] <gildo7> yes!
[13:00:42] <StephenLynx> ok. I would in the back-end read both the searched notes and the list of ids of bookmarked notes by the user.
[13:00:53] <StephenLynx> and add the boolean in the returned data.
[13:01:11] <gildo7> ok, that's exactly what I am doing at the moment :)
[13:01:25] <gildo7> I was wondering if there was a way to solve it via a single query
[13:01:32] <StephenLynx> nope.
[13:01:40] <gildo7> ok thanks
[13:01:46] <StephenLynx> unless you were to completely fuck up the model :v
[13:02:07] <gildo7> I am open to suggestions
[13:02:07] <StephenLynx> nosql in general has its limitations.
[13:02:17] <StephenLynx> they are not a panacea
[13:02:28] <StephenLynx> off to lunch
[13:02:28] <StephenLynx> gl
[13:02:31] <gildo7> but notes are "volatile" in this case
[13:02:48] <gildo7> enjoy your meal StephenLynx and thanks for listening
[13:02:51] <StephenLynx> np
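
As a sketch of the two-query approach gildo7 and StephenLynx settle on (Node.js driver, callback style; `userId`, `searchQuery`, the `bookmarkedNoteIds` field and the collection names are illustrative):

    db.collection('users').findOne({ _id: userId }, function (err, user) {
      if (err) return callback(err);
      db.collection('notes').find(searchQuery).toArray(function (err, notes) {
        if (err) return callback(err);
        // enrich the response in application code, since there is no join
        var bookmarked = {};
        (user.bookmarkedNoteIds || []).forEach(function (id) { bookmarked[String(id)] = true; });
        notes.forEach(function (note) { note.isBookmarked = !!bookmarked[String(note._id)]; });
        callback(null, notes);
      });
    });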
[13:43:32] <huleo> hello
[13:44:09] <huleo> this is probably more of a mongoose.js question, but anyway: I have population going on after my query
[13:44:30] <huleo> and I want to filter out documents whose field wasn't populated (because the referenced doc is no longer there)
[15:14:12] <FunnyLookinHat> Hey everyone - I've got a question with regards to maintaining an extra foreign_key of sorts on records that are being updated at a high frequency: http://pastebin.ubuntu.com/9835900/
[15:21:50] <StephenLynx> foreign keys?
[15:21:51] <StephenLynx> what
[15:23:19] <StephenLynx> what does each of these ids represent?
[15:29:31] <INIT_6_> Is there a way to store the oplog on another mount point. Want it in a different location than the data.
[15:32:52] <neo_44> FunnyLookinHat: why are you using a foreign_key at all....document database...not relational :)
[15:36:34] <neo_44> INIT_6_: nope
[15:36:59] <INIT_6_> Thought so, my google-fu didn't bring anything up. Thanks
[15:37:19] <neo_44> INIT_6_: what are you trying to solve with that?
[15:39:08] <INIT_6_> We need to make the oplog 2.5x bigger due to a complex recovery. However, the mount the database is on isn't big enough to support this. We have other options, so it's not the end of the world.
[15:42:43] <FunnyLookinHat> neo_44, foreign key is the wrong word to describe it
[15:42:46] <FunnyLookinHat> "identifier" might be better.
[15:43:03] <neo_44> INIT_6_: you can't resize the oplog of an already initiated replica set
[15:43:03] <FunnyLookinHat> StephenLynx, the IDs represent users and cookies
[15:43:12] <FunnyLookinHat> StephenLynx, so - "ref_id" is attached to a cookie
[15:43:29] <StephenLynx> why can't you just attach it to an user?
[15:43:42] <FunnyLookinHat> But if I have a user login on two separate devices I want to associate those cookies together for reporting purposes by setting "alt_id" to represent the user
[15:43:55] <StephenLynx> so you put the same user on both cookies
[15:43:59] <FunnyLookinHat> Right
[15:44:05] <StephenLynx> and use this user id the cookie gave you
[15:44:08] <FunnyLookinHat> And I don't necessarily know that the cookie belongs to each user until they've logged in
[15:44:21] <FunnyLookinHat> i.e. if someone visits my site and does a bunch of things and then LATER logs in - I want to associate all of that history together
[15:44:50] <neo_44> FunnyLookinHat: you can create a new record every log in ... bucket the results by the cookie
[15:44:55] <StephenLynx> cant you just store this history in local storage?
[15:45:12] <FunnyLookinHat> StephenLynx, lots and lots of data, and I need to generate reports on it :)
[15:45:15] <StephenLynx> and associate after the user logs in?
[15:45:42] <neo_44> FunnyLookinHat: don't use mongo then
[15:45:45] <neo_44> use elastic search
[15:45:45] <FunnyLookinHat> hehe
[15:46:00] <neo_44> pound that shit in with the cookie... then aggregate on it
[15:46:09] <FunnyLookinHat> LOL
[15:46:25] <neo_44> you can have that set up and in production in 1 day
[15:47:06] <neo_44> or you can use mongodb and use bucketing on the cookie
[15:47:46] <FunnyLookinHat> neo_44, so when you say bucketing on the cookie
[15:47:53] <FunnyLookinHat> You mean to generate a separate collection of cookie buckets?
[15:48:03] <neo_44> no
[15:48:06] <FunnyLookinHat> And when I identify a cookie - i cre.... oh
[15:48:16] <neo_44> bucketing is when you use a value to associate all documents
[15:48:45] <neo_44> {_id : ObjectID, cookie : "test cookie", blah:blah, blah:blah }
[15:48:47] <FunnyLookinHat> neo_44, Ah - bucketing is a mongodb thing?
[15:48:57] <neo_44> every associated record has the same cookie
[15:49:11] <FunnyLookinHat> Yeah so I index cookie as it is
[15:49:15] <neo_44> then you can use 1 query to get all documents that have the same cookie
[15:49:44] <neo_44> okay...so then you have another collection that is just a mapping of cookie to user
[15:50:22] <INIT_6_> neo_44: Sure you can: http://docs.mongodb.org/manual/tutorial/change-oplog-size/
[15:50:22] <FunnyLookinHat> neo_44, i.e. objects like this: { _id: ObjectID, user_id: "whatever", cookies: [ "1234", "5678" ] }
[15:51:34] <neo_44> INIT_6_: sorry I meant you couldn't just change it in the config
[15:51:58] <INIT_6_> very true.
[15:51:59] <neo_44> INIT_6_: it would be easier to create a new replica set and move the data with mongo-connector than to resize the oplog
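
For anyone following along: the oplog size can be checked from the shell and chosen up front for a new member, but resizing an existing member still means the manual procedure in the tutorial INIT_6_ linked. A quick sketch (the 10240 MB figure is just an example):

    // Current oplog size and the time window it covers, per member:
    db.getSiblingDB('local').printReplicationInfo()

    // When standing up a new member, the size can be chosen at start time:
    //   mongod --dbpath /data/db/mongod --replSet test-set --oplogSize 10240   (size in MB)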
[15:52:26] <neo_44> FunnyLookinHat: okay so that is the current document?
[15:52:43] <FunnyLookinHat> neo_44, well I'm asking if that would be the way to do what you're suggesting
[15:52:50] <neo_44> yes
[15:52:55] <FunnyLookinHat> Currently my document has the alt_id on each object
[15:53:05] <neo_44> and you could have device id, and any other key that identified a user
[15:53:09] <FunnyLookinHat> Because I heard "DENORMALIZE!!" when I started digging into this :)
[15:53:23] <neo_44> there is another way to do it....called a "key store". I have implemented this before as well.
[15:53:26] <FunnyLookinHat> neo_44, is there a way to query through that object ?
[15:53:36] <FunnyLookinHat> neo_44, i.e. could I query for all objects based on a user document?
[15:53:47] <FunnyLookinHat> all of the objects with the cookies that are on the user document, I mean
[15:54:01] <FunnyLookinHat> ( That sounds an awful lot like a SQL JOIN )
[15:54:14] <neo_44> yeah...you want to have a separate collection for a "key store"
[15:54:23] <neo_44> a key is anything that looks up a user
[15:54:37] <neo_44> so cookie, facebook id, device id, whatever id
[15:55:23] <neo_44> each document has {_id : <typeofkey value>, username : "User"}
[15:56:20] <neo_44> so {_id : "cookie <cookievalue>", user : "User"}
[15:56:36] <neo_44> this would allow you to track cookies and any other key that maps to a user
[15:56:56] <FunnyLookinHat> neo_44, and then I could query for all objects tied to a user ?
[15:57:05] <neo_44> yep...by key
[15:57:13] <FunnyLookinHat> OK - hold on.
[15:57:15] <neo_44> so all cookies....or all facebook id, etc
[15:57:49] <FunnyLookinHat> Let's call the objects with cookie_id and a bunch of other data "RECORDS" - and the objects with the user and cookies "USERS" -
[15:58:03] <FunnyLookinHat> So I could say "Give me all RECORDS tied to this USER by their cookie_id" ?
[15:58:09] <FunnyLookinHat> That's a join - isn't it?
[15:58:13] <FunnyLookinHat> And joins aren't possible
[15:58:25] <neo_44> well...they kind of are
[15:58:31] <neo_44> it would be a CODE join
[15:58:34] <neo_44> not a server join
[15:58:43] <neo_44> but you are right.
[15:59:11] <neo_44> the only other way to do it...is how you proposed....to have the cookie on every user record.....and the user name on every cookie record
[15:59:33] <neo_44> or an array of cookie id on the user
[15:59:41] <neo_44> but you would have to do 2 round trips to the database
[15:59:47] <neo_44> one for the array of cookie ids
[15:59:53] <neo_44> then one for the cookie records
[16:00:26] <FunnyLookinHat> Right right
[16:00:27] <FunnyLookinHat> O
[16:00:27] <FunnyLookinHat> Ok
[16:00:35] <neo_44> there isn't an easy way to do it
[16:00:36] <FunnyLookinHat> neo_44, thanks - I think I'm going to have to go back to the drawing board a bit :)
[16:00:57] <neo_44> take a look at kibana and elastic search
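
A rough sketch of the key-store plus two-round-trip pattern neo_44 describes; the `keys` and `records` collection names and the "cookie <value>" key format are illustrative, not something the server mandates:

    // One document per identifier (cookie, facebook id, device id, ...) mapping back to a user:
    db.keys.insert({ _id: "cookie 1234", user: "someUser" })
    db.keys.insert({ _id: "facebook 987", user: "someUser" })

    // Round trip 1: collect the user's cookie ids from the key store.
    var cookies = db.keys.find({ user: "someUser", _id: /^cookie / }).toArray()
                    .map(function (k) { return k._id.replace("cookie ", ""); });

    // Round trip 2: fetch every RECORD carrying one of those cookies.
    db.records.find({ cookie: { $in: cookies } })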
[16:09:18] <Kai_> Hey
[16:09:31] <Kai_> How to clear the internal connection pool of the c# driver?
[16:10:52] <neo_44> Kai_: what version ?
[16:11:35] <Kai_> Nvm. I think I found it.. It was a static method of MongoServer
[17:12:22] <Xeon06> Hey folks
[17:13:20] <Xeon06> I have documents with objects with a very precise pattern (author: {_id: ..., ...}) a bit throughout, and am wondering if there's an update query that would allow me to update them all
[17:13:26] <Xeon06> IE, they are not necessarily at the same level
[17:14:47] <canistr> Does anyone know what a MongoError: "No pending nonce" code 17 is?
[17:15:06] <StephenLynx> with an OR clause, I guess you could, Xeon06
[17:15:33] <canistr> getting this error when using mongodb driver for db.admin().authenticate(user, pass)
[17:18:57] <Xeon06> I just realized my question wasn't super clear. This is my data structure http://pastebin.com/aa7aQKJm
[17:19:17] <Xeon06> I'd like to update the author object in the root as well as any author object within the subdocuments, with an ID
[17:19:23] <Xeon06> Would I be better off doing two different queries?
[17:20:21] <StephenLynx> describe your match and update logic.
[17:20:27] <StephenLynx> in regular english.
[17:23:05] <Xeon06> I want to match and update objects within a document, as well as objects within an array in that same document
[17:23:58] <StephenLynx> no, YOUR logic, in this case.
[17:24:05] <StephenLynx> not the general operation.
[17:25:32] <Xeon06> Right now I'm just doing a standard update by matching author.id and using $set with author
[17:25:40] <Xeon06> Haven't figured out how to approach the array part
[17:26:18] <StephenLynx> still not what I need, I asked for what you want, not the unsatisfying logic you have.
[17:26:19] <StephenLynx> http://stackoverflow.com/questions/4669178/how-to-update-multiple-array-elements-in-mongodb
[17:26:27] <StephenLynx> but I don't think it's possible, anyway.
[17:31:02] <Xeon06> Thank you!
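
For reference, one way to approach what Xeon06 describes on a 2.6-era server, where no single update can touch every matching array element: update the root-level author with $set, then walk the embedded ones in application code. The `posts` collection, the `comments` array, `oldId` and `newName` are illustrative, following the author/_id shape in the pastebin:

    // 1) fix the root-level author on every matching document
    db.posts.update(
      { "author._id": oldId },
      { $set: { "author.name": newName } },
      { multi: true }
    )

    // 2) fix author objects nested inside the comments array, document by document
    db.posts.find({ "comments.author._id": oldId }).forEach(function (doc) {
      doc.comments.forEach(function (c) {
        if (c.author && String(c.author._id) === String(oldId)) { c.author.name = newName; }
      });
      db.posts.update({ _id: doc._id }, { $set: { comments: doc.comments } });
    })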
[17:54:42] <edcaceres> Hi guys im experiencing a problem with mongo client for java
[17:54:52] <edcaceres> this is stack
[17:54:54] <edcaceres> java.lang.IllegalArgumentException: response too long: 1256845501 at com.mongodb.Response.<init>(Response.java:49) at com.mongodb.DBPort$1.execute(DBPort.java:141) at com.mongodb.DBPort$1.execute(DBPort.java:135) at com.mongodb.DBPort.doOperation(DBPort.java:164) at com.mongodb.DBPort.call(DBPort.java:135) at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:289) at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:268) at com
[17:56:09] <kali> first, use a paste service somewhere
[17:56:37] <kali> second, what are you doing? it looks like you're trying to transfer a huge document
[17:59:57] <edcaceres> I'm not, the query is almost 50MB per request
[18:00:32] <kali> 50MB for a request ?
[18:00:44] <kali> that's huge
[18:01:04] <kali> and i think a request is limited to 16MB as any document anyway
[18:16:15] <edcaceres> for future reference on the "response too long" message: we just realized this problem is related to Java resources; the solution is to increase the Java heap memory as needed
[18:16:31] <edcaceres> thank you anyway
[19:36:56] <gansbrest> hi. What's the process of restoring a replica set on EC2? I took a snapshot, attached it to another box, modified the config by specifying a new replSet name, but the instance still has the old config from the old replica set, listing the old servers, and won't come up correctly
[19:37:04] <gansbrest> am I doing something wrong here?
[19:37:16] <gansbrest> running it like this sudo /usr/local/bin/mongod --dbpath /data/db/mongod --replSet test-set
[19:37:40] <gansbrest> gives me an error like this: "replSet info Couldn't load config yet. Sleeping 20sec and will try again." every 20 sec
[19:40:33] <gansbrest> and "replSet info self not present in the repl set configuration". It's a new node and should not be in this configuration. Should I manually clean config and define just this master node?
[19:59:22] <Letze> Can someone please explain to me why I would want to use MongoDB over MySQL (MariaDB) ?
[19:59:56] <Letze> I simply cannot paint a clear picture in my head.
[20:01:17] <KLVTZ> newb question: when I run db.users.save({name: '1'}, {name: '2'}), it only saves one. According to an example, I should be able to save two...
[20:04:49] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.save/
[20:05:06] <cheeser> the second document is for the write concern
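
In other words, save() stores a single document and treats a second argument as the write concern. A sketch of what does insert both (2.6-era shell):

    db.users.insert([ { name: '1' }, { name: '2' } ])   // bulk insert takes an array

    // or one call per document:
    db.users.save({ name: '1' })
    db.users.save({ name: '2' })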
[20:18:54] <adoniscik> can you count the number of results from an aggregation using count instead of group and sum?
[20:43:16] <DragonPunch> db.colection.find({Day: Monday, Tuesday});
[20:43:43] <DragonPunch> How can I get specific elements of an item in an array in Mongo
[21:11:16] <kexmex> guys. CoreOS + Docker (ubuntu) + MMS = can't install ;(
[21:27:05] <kexmex> does MMS agent dial out, or the port on the machine has to be opened up?
[21:36:02] <cheeser> kexmex: the agents call out
[21:36:15] <kexmex> thats cool
[21:36:19] <kexmex> feels more secure then :)
[21:36:37] <kexmex> i have a problem installing MMS on Ubuntu Docker
[21:36:40] <cheeser> definitely
[21:36:42] <kexmex> Docker running in a CoreOS
[22:04:34] <DragonPunch> whats the diff of having your mongodb like Data: { {}, {}, {}. {}, {}, } compared to [ {}, {}, {}, {}, {}, {} ]
[22:08:48] <morenoh149> -_- array vs object
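
Spelling that out as a sketch (a hypothetical `things` collection): the object form needs keys to even be valid BSON, while the array form is what operators like $elemMatch and the positional $ are built for.

    db.things.insert({ Data: { first: { x: 1 }, second: { x: 2 } } })   // object keyed by name
    db.things.insert({ Data: [ { x: 1 }, { x: 2 } ] })                  // array of subdocuments

    // the array form can be queried element-wise:
    db.things.find({ Data: { $elemMatch: { x: 2 } } })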
[23:03:03] <Tyler_> Hey guys! How do I update my callback?
[23:03:46] <Tyler_> like findOne({phone: 123456}, function(account) { account.balance = account.balance - 2; // how do I update account here? })
[23:03:54] <Tyler_> I tried $inc and it breaks it
[23:19:34] <GothAlice> Tyler_: Typically you'd use .update(query, update) where query is the {} argument to findOne(), and update is the {$inc} bit.
[23:19:44] <GothAlice> Rather than loading the whole record to save a minor change to it.
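
GothAlice's suggestion as a sketch (the `accounts` collection name is illustrative): decrement atomically with $inc instead of reading the document, changing it, and writing it back.

    db.accounts.update(
      { phone: 123456 },
      { $inc: { balance: -2 } }
    )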
[23:40:28] <adamsilver> how can I delete the second translation from this: https://bpaste.net/show/0cae17ca46c0
[23:57:07] <robo> Hi all - I'm attempting to pipe a mongodb collection into a scipy random forest trainer - any idea the best way to approach this?
[23:57:35] <robo> There seem to be a few ways to go about it, just interested if anyone has any experience in this area