PMXBOT Log file Viewer

#mongodb logs for Friday the 13th of February, 2015

[00:22:50] <tylerdmace> Let's say that I want to give each user of my application the ability to set a custom MongoDB URI to allow for their own local datastores to power my application. The models I create in my ODM are tied to a connection but since every user will have their own connection, I need a way for my models to be shareable across all the different connections. Has anyone done something like this before? Any insight as to a good solution?
[00:26:38] <tylerdmace> Essentially I want to have an application that allows for a custom connection to be made on user login. As soon as they logout (or after an expiration period) the connection should terminate. But I'd like the same RESTful API I have built to apply to all these connections, no matter where they come from. I'm having a really hard time finding a solution to this particular setup.
[00:27:14] <tylerdmace> Each user has a profile where they can set a custom Mongo URI. If they don't supply one, I want to connect to localhost. But if they do supply one, I'd like the connection to be made to that custom URI.
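One common Mongoose pattern for this, as a sketch (the cache, the default URI, and the schema below are illustrative, not from the discussion): keep schemas connection-independent and register them on each per-user connection, since one schema object can back a model on any number of connections.

    var mongoose = require('mongoose');

    var connections = {};  // per-URI connection cache (eviction/teardown omitted)

    function getConnection(uri) {
      uri = uri || 'mongodb://localhost/app';  // fall back to the local datastore
      if (!connections[uri]) {
        connections[uri] = mongoose.createConnection(uri);
      }
      return connections[uri];
    }

    // the same schema object can be registered on every connection:
    var tokenSchema = new mongoose.Schema({ guid: String, token: String });

    function getTokenModel(uri) {
      var conn = getConnection(uri);
      // reuse the model if this connection has already compiled it:
      return conn.models.RequestToken || conn.model('RequestToken', tokenSchema);
    }

Closing the cached connection (connections[uri].close()) on logout or after an expiry period would complete the lifecycle described above.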
[01:57:44] <haole> where do I get a list of the supported events in MongoDB's Node.js driver? like 'fullsetup' and 'close'
[08:02:32] <Folkol> Is it possible to configure a Mongo Client to be read only?
[08:19:55] <repxl> hey, what's up with safe:true? does it not work anymore? ....
[08:59:00] <repxl> can I use $set and $addToSet at once in one update query?
[08:59:28] <repxl> or do I have to first create the log data with $set and then run another query to increase it?
[08:59:42] <repxl> dead channel, no response.
[09:00:04] <Zelest> I would help you if I knew the answer :)
[09:00:13] <Zelest> Instead I sit quiet in my cage to avoid off-topic chitchat. ;)
[09:02:01] <kali> repxl: safe:true has been replaced by the write concern enumeration, and safe:true has been made more or less the default
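In current syntax, kali's point looks like this (a minimal shell sketch; the collection and values are illustrative): instead of safe:true you pass a write concern, and w:1, the acknowledged level, is the default.

    // roughly what safe:true used to mean:
    db.users.insert({ email: 'test' }, { writeConcern: { w: 1 } })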
[09:03:12] <kali> repxl: $addToSet will create an empty array if there is none (and I suspect $push is closer to what you need than $addToSet)
[09:05:05] <repxl> kali: can I create with $push the field:data structure I need and at the end increase some values, like how many times they tried to log in? or do I have to first create the structure with $set and then use $push to increment or add data?
[09:06:00] <kali> repxl: i'm not sure i understand completely what you're asking, an example would help
[09:06:41] <repxl> kali ok w8
[09:25:55] <repxl> kali this is how I create my log data: "db.users.update({email:'test'}, {$set:{log_data:{log_attempts:0,attempt_date:'some_Date'}}})" however I want to increment log_attempts and update the date every time the update runs.
[09:30:44] <kali> repxl: just calling $inc on log_attempts will do the right thing
[09:30:56] <kali> ha wait. no.
[09:31:02] <kali> don't use $set
[09:31:05] <kali> just $push
[09:31:23] <kali> even with the first log attempt
[09:31:28] <kali> it will create an array for you
[09:31:43] <repxl> kali hmm ok, I will try to figure it out somehow.
[09:32:14] <kali> really simple. replace $set with $push and you're done :)
[09:34:57] <repxl> kali "db.users.update({email:'test'}, {$push:{log_data:{log_attempts:0,attempt_date:'some_Date'}}, $inc:{log_data.log_attempts:+1}})" also, why can't I navigate to the subdocument log_attempts by using "."?
[09:48:46] <repxl> "db.users.update({email:'test'}, {$push:{log_data:{log_attempts:0,attempt_date:'some_Date'}}, $inc:{log_data:{log_attempts:1}}})" it tells me "ERROR: Cannot increment with non-numeric argument: {log_data: { log_attempts: 1 }}"
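The "." question has a simple answer: an unquoted dotted key is a JavaScript syntax error in the shell, so a dotted path has to be quoted, as in {$inc: {'log_data.log_attempts': 1}}. But with kali's approach no counter is needed at all: push one entry per attempt and let the array length be the count. A minimal sketch (field names from the examples above; the date value is illustrative):

    db.users.update(
      { email: 'test' },
      { $push: { log_data: { attempt_date: new Date() } } }
    )
    // number of attempts so far:
    db.users.findOne({ email: 'test' }).log_data.length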
[09:57:54] <DMANRJ> hello
[09:58:17] <DMANRJ> any way to speed up the JavaScript code in a map function?
[10:39:00] <DMANRJ> anyone alive ?
[10:42:47] <DMANRJ> how do I optimize the JS code in a MAP function?
[10:43:48] <kali> DMANRJ: not much you can do
[10:44:03] <LouisT> get a better CPU?
[10:44:28] <kali> DMANRJ: the aggregation framework will be faster than m/r if you can express what you need with it
[10:46:03] <DMANRJ> well, I need to manipulate items, so it's a bit hard to do IFs and such with the aggregation framework
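For what it's worth, the aggregation framework does have a conditional: $cond is its if/else. A minimal sketch against a hypothetical orders collection (names and threshold are illustrative):

    db.orders.aggregate([
      // label each document based on a condition:
      { $project: { size: { $cond: [ { $gte: ['$total', 100] }, 'big', 'small' ] } } },
      // then count per label:
      { $group: { _id: '$size', count: { $sum: 1 } } }
    ])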
[11:46:23] <Sticky> jiffe: no I did not ticket the auth idea
[12:07:34] <duckk> how does a chunk grow beyond the specified chunk size?
[12:08:56] <duckk> wasn't mongo supposed to change the shard or create a new chunk when one reaches its size limit?
[12:40:05] <duckk> I installed mongo but I don't have the "mongos" file. Did I miss anything when installing?
[13:11:09] <triplefoxxy> I'm using bulk operations that should operate on collection "foo", but they end up executing on "foo.collection". What's going on there?
[13:21:27] <Folkol_> Hello. Can I upload a JavaScript file to MongoDB from my Java client, so that I can use a module defined in this file in a mapReduce operation from the same Java client? (To avoid duplicating JavaScript code, since I am using the same module from the browser.)
[13:41:17] <StephenLynx> are you planning to get the function out of the database to execute it on the front-end?
[14:37:14] <Neo9> http://pastebin.com/tL3kfzT1
[14:38:14] <kenITR> Hello, I have identified a portion of my application as suitable for Mongo, The rest is MySQL. Can these co-exist on the same box and if so is there a way to manage the memory allocation?
[15:00:28] <StephenLynx> yes, they can co-exist.
[15:00:30] <StephenLynx> I've done that before.
[15:01:04] <StephenLynx> and as they don't even use the same port, you don't have to do anything, just get them running
[15:01:08] <StephenLynx> kenITR
[16:06:40] <_bahamas> hello! what can I use (Java) to discover the actual primary on a replica set?
[16:08:06] <cheeser> why would you need that? just write through the driver and it'll find the primary for you.
[16:08:41] <cheeser> the host(s) you give MongoClient are just a seed list. once the driver connects, it discovers the primary for you and tracks elections and primary changes transparently.
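If you do want to see the current primary by hand, the shell reports it (a shell sketch rather than Java; the hostname is illustrative):

    db.isMaster().primary   // e.g. "host1.example.com:27017"
    rs.status()             // full member list with states and heartbeats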
[16:10:50] <kenITR> Thank you, StephenLynx
[17:24:36] <_bahamas> thank you cheeser
[17:37:53] <NoOutlet> How's it going in here?
[18:36:14] <mrmccrac> anyone know about listCollections being very slow in mongo 3.0 / wiredtiger? http://pastie.org/9945147
[18:36:43] <mrmccrac> 88 seconds to do a listCollections; this is while I'm doing a high number of inserts/updates into the database
[18:41:07] <cheeser> nothing in jira that I can find
[18:41:14] <mrmccrac> ya I tried searching too
[18:41:20] <mrmccrac> it's strange
[18:41:24] <mrmccrac> the writes seem to be doing ok
[18:41:34] <mrmccrac> but yeah, this listCollections can't ever get a lock it seems
[18:42:45] <mrmccrac> I can do a find() query okay, no problems
[18:47:08] <tylerdmace> I could use some help figuring out how to correctly implement a Mongoose connection manager. Anyone familiar with Node + Mongo (and Mongoose) wanna take a look at this: http://stackoverflow.com/questions/28503464/writing-a-mongoose-connection-manager
[18:47:34] <StephenLynx> yeah, I've been using mongo and node for a while.
[18:47:47] <StephenLynx> I wouldn't wipe my ass with mongoose.
[18:48:07] <StephenLynx> tylerdmace
[18:48:10] <tylerdmace> :(
[18:48:24] <tylerdmace> Haha, well I can't swap it out (not my call) but I will consider that going forward
[18:48:30] <NoOutlet> I did one time. He bit me.
[18:52:48] <mrmccrac> what's funny is tab completion on the mongo cli always freezes up, because it also wants to do a listCollections
[18:55:56] <mrmccrac> ugh yeah, the second I stop doing any writes I'm able to do a listCollections no problem
[18:56:00] <cheeser> mrmccrac: you could try filing an issue and see what comes back
[18:56:08] <mrmccrac> surprised no one has seen this
[19:12:12] <mrmccrac> i think listCollections works differently now maybe, might've figured it out...
[19:26:32] <tpayne> Is there a way for me to remove duplicates but keep the latest entry instead of the first?
[19:26:51] <mrmccrac> how do you determine latest
[19:26:59] <tpayne> _id
[19:28:24] <StephenLynx> _id is always unique.
[19:28:24] <tpayne> perhaps i can write a script to do it, by sorting, taking the first, and then looping through any that are less than, and removing them
[19:28:37] <tpayne> if there's no easy way
[19:29:20] <GothAlice> Aggregate query $group on your unique constraint, ordered by _id in the direction you need with $first on the fields you want to keep, $out'd to a new, de-duplicated collection.
[19:29:38] <NoOutlet> There's no simple programmatic way to indicate a removal strategy when defining a unique index with dropDups.
[19:29:59] <NoOutlet> Exactly what I was going to suggest, Alice. :)
[19:32:03] <tpayne> GothAlice: can you help me write that?
[19:32:11] <GothAlice> Unfortunately not at the current time. :/
[19:32:20] <tpayne> ok, thanks
[19:32:26] <GothAlice> http://docs.mongodb.org/v2.6/reference/operator/aggregation/group/ should get you started
[19:33:30] <tpayne> cool i'll take a look
[19:34:53] <NoOutlet> db.col.aggregate([{$sort: {_id: -1}}, {$group: {_id: "$uniqueField", otherField: {$first: "$otherField"}, thirdField: {$first: "$thirdField"}}},{$out: "newCol"}])
[19:36:08] <NoOutlet> Possibly with a $project to set the grouped "_id" back to being "uniqueField".
[19:38:17] <NoOutlet> db.col.aggregate([{$sort: {_id: -1}}, {$group: {_id: "$uniqueField", id: {$first: "$_id"}, otherField: {$first: "$otherField"}, thirdField: {$first: "$thirdField"}}},{$project: {_id: "$id", uniqueField: "$_id", otherField: 1, thirdField: 1}},{$out: "newCol"}])
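Spread out for readability, NoOutlet's pipeline reads as follows ("uniqueField", "otherField", "thirdField", and "newCol" are placeholders for your own names):

    db.col.aggregate([
      // newest first, so $first picks the latest of each duplicate group:
      { $sort: { _id: -1 } },
      // collapse duplicates on the unique field:
      { $group: {
          _id: '$uniqueField',
          id:         { $first: '$_id' },
          otherField: { $first: '$otherField' },
          thirdField: { $first: '$thirdField' }
      } },
      // restore the original field layout:
      { $project: { _id: '$id', uniqueField: '$_id', otherField: 1, thirdField: 1 } },
      // write the de-duplicated result to a new collection:
      { $out: 'newCol' }
    ])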
[19:39:48] <tpayne> what's uniqueField in this case?
[19:40:20] <NoOutlet> Whatever field it is that is duplicated which you don't want duplicated...
[19:41:03] <tpayne> https://gist.github.com/troypayne/c7eb25e5db636b3bdb73
[19:41:26] <tpayne> this is the col. Simple 1-to-many where a user can have many request tokens. But I'm changing it so they can only have 1 request token, the latest one
[19:41:39] <tpayne> 1 to 1
[19:41:58] <tpayne> user is represented by guid
[19:42:15] <NoOutlet> Why not store the requestTokens on the user document?
[19:44:19] <NoOutlet> To answer your question (as opposed to questioning your solution), then the field you want to be unique sounds like it's the `guid` field.
[19:44:23] <tpayne> well, i want to keep them separate and i want to add an expiry col as well
[19:44:25] <tpayne> and they already exist
[19:44:49] <NoOutlet> When you say expiry col, you mean column, not collection?
[19:45:26] <NoOutlet> I would still argue that keeping them separate needlessly complicates things.
[19:46:43] <NoOutlet> You can have a field on the user called "requestToken" that is a subdocument with a "token" field where the value is the token string and an "expiredOn" field which is a date.
[19:47:25] <NoOutlet> You wouldn't be able to set the "expiredOn" field as a TTL index, but you could verify that the token is valid when fetching it.
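A sketch of the shape NoOutlet describes (values are illustrative), with the validity check done in the query itself:

    // user document with the token embedded:
    // { guid: 'user-guid',
    //   requestToken: { token: 'abc123', expiredOn: ISODate('2015-03-01T00:00:00Z') } }

    // fetch the user only while the token is still valid:
    db.users.findOne({
      'requestToken.token': 'abc123',
      'requestToken.expiredOn': { $gt: new Date() }
    })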
[19:47:46] <tpayne> even though mongodb is a non-relational db, I'm using it like one
[19:47:57] <tpayne> so i'm avoiding subdocuments
[19:48:17] <allcentury> hi all - anyone using the Mongo Ruby Driver? We're trying to connect to a docker container on AWS with an exposed port but not having much success. MongoClient.new('IP', PORT, options). Does IP need to be mongodb://IP ?
[19:50:21] <NoOutlet> :-/ .... Well, my advice is to not use it like a relational db.
[19:50:33] <tpayne> heh, i find it more flexible
[19:52:40] <StephenLynx> actually, relational dbs are the most flexible you can get.
[19:52:59] <mrmccrac> my joins, let me show you them
[19:53:29] <tpayne> i'm using mongodb as a relational db, with Scala to actually join them
[19:53:43] <StephenLynx> heh
[19:54:08] <tpayne> non blocking, it's really sexy
[19:54:25] <tpayne> no slow queries because of complex joins!
[19:54:38] <tpayne> and then flexibility of relational without coding myself into a corner with document mongo
[19:54:44] <StephenLynx> lol
[19:54:49] <StephenLynx> :^)
[19:55:04] <StephenLynx> you dun goofed
[19:55:10] <mrmccrac> I've done some extremely complicated joins that return very quickly
[19:55:12] <NoOutlet> Does T stand for Troll?
[19:55:36] <StephenLynx> I've read enough of The Daily WTF to know there are people doing stuff like that.
[19:55:47] <mrmccrac> mongo needs the right indexes just like any other db ever
[19:55:58] <tpayne> i don't see the problem
[19:56:12] <StephenLynx> course you don't
[19:56:12] <mrmccrac> for relational data i would still use sql
[19:56:38] <tpayne> StephenLynx: that was more of a question for you
[19:56:56] <StephenLynx> i g2g
[19:57:07] <StephenLynx> other people might be able to explain it to you
[19:57:55] <tpayne> such strong words but you still haven't given me a single thing
[19:58:00] <tpayne> on what's wrong with doing that
[20:03:17] <tpayne> NoOutlet: maybe you know?
[20:07:00] <NoOutlet> When someone uses a request token, you're going to have to find that token and get the GUID for the user. Then, you'll need to make a separate find for the user with the GUID. This is two queries instead of using a single query to simply find the user by requestToken.
[20:07:38] <NoOutlet> Well, that use case is more descriptive of a session, so it may not be the actual usage of these request tokens since I don't know what you're doing really.
[20:10:35] <NoOutlet> But the main point is the same. You can't do JOINs, so why set up your data schema to require JOINs which you then need to implement in Scala when you can so easily embed in a subdocument?
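NoOutlet's two-queries-versus-one point, as shell queries (names and the token value are illustrative):

    var someToken = 'abc123';  // illustrative value

    // separate collections: two round trips per lookup
    var t    = db.requestTokens.findOne({ token: someToken });
    var user = db.users.findOne({ guid: t.guid });

    // embedded: one
    var user2 = db.users.findOne({ 'requestToken.token': someToken });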
[20:10:53] <tpayne> no flexibility with subdocuments
[20:11:11] <tpayne> a user is a user, subdocuments can't be shared
[20:11:40] <tpayne> for example I return a list of notifications; these notifications have who posted them, and some other stuff.
[20:11:54] <tpayne> it's made up of two documents, not one
[20:12:05] <tpayne> so a subdocument would mean duplicate data
[20:12:18] <tpayne> and if a user changes their name, now i have to manage all the subdocuments and change them too? gets nasty
[20:12:31] <tpayne> so a relational db solves this, but now you have to join; however, joins are complex and slow
[20:12:42] <tpayne> so using Scala, i can actually put different collections on different clusters
[20:13:01] <tpayne> since user is used often, i can put that on 4 machines
[20:13:07] <tpayne> and put requestTokens on 1 machine
[20:13:57] <tpayne> and Scala allows me to build these complex models from many documents, non-blocking, without slow join queries bogging down the machine
[20:14:49] <tpayne> and the code clarity is another great thing, which i value over speed. I figure there are other ways to improve efficiency than to write rigid code
[20:14:56] <tpayne> anyway that's my two cents on why I do it this way
[20:25:55] <tpayne> NoOutlet: this is how i ended up doing it btw: https://gist.github.com/troypayne/1e3ad1cb4e8d5a378333
[20:26:50] <tpayne> boom
[20:26:55] <tpayne> worked like gangbusters
[21:48:47] <deanclkclk_> folks I'm getting an error I can't figure out
[21:48:48] <deanclkclk_> rs0:PRIMARY> local.system.replset Fri Feb 13 15:44:34.178 ReferenceError: local is not defined
[21:48:51] <deanclkclk_> why is this?
[21:49:00] <deanclkclk_> local is not defined?
[21:49:02] <deanclkclk_> plz help
[21:53:24] <mango_> what's the command you're using?
[21:53:28] <mango_> to get that error
[21:56:08] <deanclkclk_> I tried running this mango_ MongoDB shell version: 2.4.6 connecting to: commandCenter rs0:PRIMARY> rs0:PRIMARY> rs.initiate() { "info" : "try querying local.system.replset to see current configuration", "ok" : 0, "errmsg" : "already initialized" }
[21:56:48] <deanclkclk_> tried running -> rs.initiate()
[21:56:58] <deanclkclk_> it says I needed to query local.system.replset
[21:57:09] <deanclkclk_> I did and this thing says it doesn't know what local is
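The ReferenceError is just the shell: `local` on its own is not a defined variable. The error message means "query the replset collection in the local database", which from any database looks like:

    db.getSiblingDB('local').system.replset.findOne()   // current replica set config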
[21:57:21] <mango_> how many nodes in your replica set?
[21:57:24] <deanclkclk_> rename dbclk
[21:57:49] <deanclkclk_> 2 and one arbiter
[21:57:58] <mango_> have you declared them in a config?
[21:58:04] <mango_> config = ...
[21:58:20] <deanclkclk_> not sure what you're asking but, when I do rs.status()
[21:58:23] <deanclkclk_> it shows them
[21:58:53] <mango_> how did you state which nodes are in your replicaSet?
[22:00:54] <mango_> ?
[22:01:38] <dbclk> not sure what to tell you mango_ ..we have some script that does the configuration
[22:01:50] <dbclk> on the slave..we specify what the primary is
[22:02:16] <mango_> ok, are you able to share that in paste.it
[22:02:50] <mango_> Is everything else in your environment ok?
[22:02:56] <mango_> error logs?
[22:03:07] <mango_> other nodes up?
[22:04:06] <dbclk> getting some stuff in the logs
[22:04:08] <dbclk> one sec
[22:09:07] <fn_steve> hey, i was wondering if anyone could help me with an issue that's left me a bit confused.
[22:09:26] <fn_steve> i'm using the native mongo driver for node.js with a connection pool size of 5.
[22:10:11] <fn_steve> my application sets up the connection at start up and then shares the db reference, so there's only the single connection (well, 5 connections via the pool size)
[22:11:11] <fn_steve> however, the output from mongod on my local machine shows the connection count slowly increasing over time
[22:11:39] <fn_steve> after about a minute, it'll tick up to 6 connections
[22:11:49] <fn_steve> then a little while after, it'll go up to 7 connections
[22:12:08] <fn_steve> but it typically stops once it reaches 9 connections.
[22:12:27] <fn_steve> is this coming from my node.js app or is there something else going on here?
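One way to watch this from the mongod side while the app runs (numbers below are illustrative). Note that drivers commonly hold a few monitoring/housekeeping connections beyond the data pool, which may account for a count somewhat above poolSize:

    db.serverStatus().connections
    // e.g. { "current" : 9, "available" : 810, "totalCreated" : 42 }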
[22:23:51] <dbclk> how can I test if slave is talking to master?
[22:28:16] <doxavore> dbclk: in a replica set, you can run rs.status()
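rs.status() shows each member's state, heartbeats, and optimes; for a quick view of secondary lag, the shell of this era also has:

    rs.printSlaveReplicationInfo()   // prints how far each secondary is behind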
[22:29:22] <doxavore> so... according to the compact docs, disk storage is not released back to the system. are a repair (which requires 2x storage) or deleting _all_ the data and re-replicating it the only ways to recover disk space? o.O
[22:32:08] <doxavore> or does repair no longer use 2x storage, just data size + 2GB..
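For reference, the two commands under discussion (run against the database that owns the data; the collection name is illustrative):

    // defragments a collection in place; per the docs doxavore cites, it
    // does not return space to the OS on the MMAPv1 engine:
    db.runCommand({ compact: 'mycollection' })

    // rewrites the data files; needs free space roughly equal to the
    // current data set size plus 2GB:
    db.repairDatabase()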