PMXBOT Log file Viewer

#mongodb logs for Monday the 27th of April, 2015

[06:49:57] <thomaschaaf> hello I just upgraded from wheezy to jessie today on one of our servers. Now I can't start mongodb anymore: http://pastie.org/10115694
[06:50:11] <thomaschaaf> any idea what could cause this error?
[07:19:04] <thomaschaaf> reinstalling the mongodb package seems to have fixed it
[08:12:33] <errakeshpd> how to use rails and mongo together ( Model.find(1) is for rails and Table.find(1) for mongo ) so it may cause some confusion when reading the code
[08:21:46] <eren> hey folks, I switched to mongod 3.0.2 but I forgot to switch wiredTiger storage, it is still using the old backend
[08:21:52] <eren> how can I switch to wiredTiger without losing data?
[08:25:33] <Derick> eren: you need to re-import
[08:25:50] <Derick> (or, setup a replica set member with WT and let it sync)
[08:28:08] <eren> Derick: replica set could be a good solution. Unfortunately I have 1 mongo installation and it's actively used now, collecting meters from agents. I believe the collector cannot process them and write to mongo fast enough and the messages are queued in rabbit
[08:28:11] <mick27> do you guys know a way to tail the oplog to amazon sns ?
[08:29:15] <Derick> eren: wouldn't rabbit just queue them for a bit longer then?
[08:29:27] <Derick> (no idea how rabbitmq works ;-) )
[08:37:43] <eren> Derick: yeah, I have 2million messages in rabbit, that's way more than it should be :)
[08:38:36] <eren> I believe I will go with replica solution. I will spin up a new replica with wiredTiger, add this replica to single mongo instance, wait for sync, and ... ?
[08:39:23] <eren> should I switch to this replica as a main mongo database, or wait for the sync. I have 60GB of data currently
[08:39:34] <eren> when main mongo instance is down, rabbit will queue the messages
[08:55:03] <eren> Derick: I believe I will use mongodump/mongorestore
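For the record, the mongodump/mongorestore route eren settled on looks roughly like this. This is a sketch against a live server; the paths, ports, and dbpath are illustrative assumptions, not taken from the log:

```shell
# Dump everything from the old (pre-wiredTiger) instance
mongodump --host localhost --port 27017 --out /backup/dump

# Stop mongod, point it at a fresh dbpath, and restart with the
# new storage engine (mongod 3.0+)
mongod --storageEngine wiredTiger --dbpath /var/lib/mongo

# Restore into the wiredTiger instance; indexes are rebuilt
# from the metadata files in the dump
mongorestore --host localhost --port 27017 /backup/dump
```

This also explains eren's later question about index sizes: mongorestore rebuilds the indexes recorded in the dump's metadata, so no separate rebuild step is needed.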
[11:00:36] <amitprakash> Hi, for queries in currentOp which aren't being recorded by the virtue of being too large, is there a way for me to inspect what this query is?
[11:01:22] <eren> how can I instruct mongodb to rebuild indexes? I moved to wiredtiger via "mongodump/mongorestore" and I see that /var/lib/mongo is 350MB, where in the previous instance it was 7GB
[11:01:33] <eren> I believe it is because of nonexisting indexes
[11:05:15] <pamp> hi
[11:06:24] <pamp> which operating system do you guys recommend for mongodb, CentOS or Ubuntu?
[12:28:34] <razieliyo> hi
[12:28:52] <razieliyo> does mongorestore remove previous collection data?
[12:30:23] <snowcode> Anyone know what kind of error this is: "MongoError: n/a" ?
[13:24:21] <mtree> is it good practice to disable authentication requirement when automatically testing protected endpoints?
[13:24:37] <mtree> i guess its more of a nodejs question
[13:25:05] <StephenLynx> sort of unrelated but I suggest migrating from node.js to io.js.
[13:25:17] <StephenLynx> and why would it be more of a node question?
[13:26:02] <StephenLynx> what do you mean by automatically testing protected endpoints?
[13:41:29] <snowcode> I've executed a $geoNear query. Everything is okay until I add the sort key as a query param:
[13:41:41] <snowcode> query["$sort"] = { "distance" : -1};
[13:41:53] <snowcode> query execution fails with this error: Can't canonicalize query: BadValue unknown top level operator: $sort
[13:41:55] <snowcode> any idea?
[13:48:38] <snowcode> okay found. I cannot place it on query, need to use .sort
[13:49:42] <StephenLynx> you could if you were using aggregate.
[13:50:17] <StephenLynx> snowcode
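The distinction snowcode ran into, spelled out (the `db.points` collection name is an illustrative assumption; the `distance` field follows the log):

```javascript
// $sort is a pipeline stage, not a query operator, so putting it in
// the find() filter fails with "unknown top level operator: $sort":
const badQuery = { "$sort": { distance: -1 } };

// Instead, sort the cursor returned by find...
//   db.points.find(query).sort({ distance: -1 })
// ...or, as StephenLynx notes, make $sort its own stage in aggregate:
const sortStage = { $sort: { distance: -1 } };
// db.points.aggregate([ /* ...earlier stages... */ sortStage ])
```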
[14:03:38] <snowcode> StephenLynx Thank you :) Have you ever worked with $geoNear? I've tried to perform a query to get all the points near a [lon,lat] point but it returns wrong results (in fact it seems to return any results): https://gist.github.com/malcommac/72a689e6b8847fc5e289
[14:04:53] <StephenLynx> no, never used it.
[14:07:09] <snowcode> ok^
[15:40:18] <MadLamb> can i count the number of properties of an object in mongo? {1: "something", 2: "something2"}
[15:42:52] <StephenLynx> what?
[15:43:22] <saml> Object.keys(doc).length
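saml's one-liner spelled out (plain JavaScript; works the same in the mongo shell on a fetched document):

```javascript
// Counting the properties of a (sub)document client-side:
const doc = { 1: "something", 2: "something2" };
const count = Object.keys(doc).length; // 2
```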
[17:13:19] <MadLamb> what does undefined mean as the return of an update command?
[17:15:24] <MadLamb> the docs say it should return a WriteResult object, but i'm receiving undefined
[17:16:48] <StephenLynx> are you receiving an error too?
[17:16:52] <StephenLynx> what platform is that?
[17:17:14] <StephenLynx> is the operation actually happening?
[17:18:16] <MadLamb> no
[17:19:00] <MadLamb> print(db.Col.update(...))
[17:19:03] <MadLamb> undefined
[17:19:08] <MadLamb> StephenLynx, no error
[17:19:27] <StephenLynx> what platform?
[17:19:35] <MadLamb> mongo console
[17:19:42] <StephenLynx> the regular terminal?
[17:19:45] <MadLamb> yes
[17:20:14] <StephenLynx> when you perform the operation without the print, it proceeds?
[17:20:35] <MadLamb> it does not modify what it should be modifying.
[17:20:45] <StephenLynx> and what message it displays?
[17:21:12] <MadLamb> none, it returns as if the command succeeded. Then i tried to get the WriteResult as debug, but it's returning undefined instead.
[17:21:37] <StephenLynx> paste the displayed message after the succeeded update.
[17:23:37] <MadLamb> StephenLynx, there is no message. it returns to the terminal.
[17:23:41] <MadLamb> StephenLynx, https://gist.github.com/fabiocarneiro/cb539c48bd115fbcc0ca
[17:24:02] <StephenLynx> wait
[17:24:12] <StephenLynx> you are not starting the terminal client then?
[17:24:29] <jrbt> Hi!
[17:24:34] <MadLamb> StephenLynx, ? https://gist.github.com/fabiocarneiro/cb539c48bd115fbcc0ca
[17:24:48] <MadLamb> StephenLynx, added the output there.
[17:25:38] <StephenLynx> ok, so you have a js file and is using mongo to execute it?
[17:26:37] <MadLamb> StephenLynx, i'm running that directly in terminal as one-line-command. I just formatted it for readability purposes to show to you.
[17:27:04] <MadLamb> StephenLynx, by terminal i mean mongo console
[17:27:32] <StephenLynx> ok, so you open your terminal, you type "mongo", then you type all that code?
[17:27:42] <MadLamb> yes
[17:27:49] <StephenLynx> ok.
[17:27:58] <StephenLynx> try using just the update command.
[17:28:25] <StephenLynx> it should output stuff.
[17:29:55] <MadLamb> StephenLynx, found it.
[17:30:11] <MadLamb> {id: i._id} should be {_id: i._id}
[17:30:22] <MadLamb> it was missing an underscore in the identifier
[17:30:45] <MadLamb> :)
[17:31:43] <MadLamb> the weird part is that it still returned undefined, but the command worked. Maybe the WriteResult is returned in terminal instead of the default method return?
[17:34:18] <StephenLynx> don't know, I never added so much logic in the terminal.
[17:34:32] <StephenLynx> I never added any logic at all in the terminal.
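MadLamb's failure mode is worth spelling out: a filter of `{id: ...}` matches a field literally named `id`, which the documents don't have, so the update silently matches zero documents. A plain-JavaScript sketch of why the filter finds nothing (the sample documents are made up):

```javascript
const docs = [
  { _id: 1, name: "a" },
  { _id: 2, name: "b" },
];

// Wrong key: no document has a field called "id"
const wrongMatches = docs.filter((d) => d.id === 1).length;  // 0

// Right key: the primary key field is "_id"
const rightMatches = docs.filter((d) => d._id === 1).length; // 1
```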
[17:34:32] <snowcode> is it possible to have an aggregate query with multiple params? I've added a $geoNear but I would like to add some other conditions
[17:34:55] <StephenLynx> aggregate is designed for that purpose; you add operators in any amount or order.
[17:35:09] <StephenLynx> thats why I never use find.
[17:35:44] <snowcode> StephenLynx I've added another params to db.earthquakes.aggregate({"sourceID" : "444", $geoNear : {near : {type : "Point", coordinates : [13.369729,42.324681]},maxDistance : 20000,spherical : true,distanceField : 'distance'}})
[17:35:59] <snowcode> it returns an error: "exception: A pipeline stage specification object must contain exactly one field."
[17:36:15] <StephenLynx> because you must specify what the stage is.
[17:36:24] <StephenLynx> match, projection, sort.
[17:36:52] <StephenLynx> because of its flexibility, you must tell it what each block is.
[17:37:00] <StephenLynx> with find it knows it will just match and project.
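A corrected sketch of snowcode's pipeline: each stage is its own single-field document, $geoNear must be the first stage, and the plain `sourceID` condition moves into a separate `$match` (collection, field names, and coordinates are taken from the log):

```javascript
// Each element of the pipeline array is one stage with exactly one
// top-level operator; $geoNear is required to come first.
const pipeline = [
  {
    $geoNear: {
      near: { type: "Point", coordinates: [13.369729, 42.324681] },
      maxDistance: 20000,
      spherical: true,
      distanceField: "distance",
    },
  },
  { $match: { sourceID: "444" } },
];

// db.earthquakes.aggregate(pipeline)
```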
[17:48:41] <snowcode> damn distanceField is ignored without using aggregate
[17:48:57] <snowcode> with a simple find it will not be added as a field to the results of $geonear
[17:58:29] <Takumo> Somewhat dumb question, what's best way to do a Mongo version of an SQL group and count?
[17:59:10] <Takumo> so I get a count of documents bucketed by a field, e.g. "$user.language"
[17:59:13] <StephenLynx> you can use $group on aggregate
[17:59:33] <StephenLynx> and count on a find's result.
[17:59:42] <Takumo> ok
[17:59:46] <StephenLynx> you could also use group to count, but I believe count works better.
[18:00:39] <StephenLynx> but if you need to perform complex operations to get the desired count, then you will need to use aggregate to count.
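Takumo's SQL GROUP BY / COUNT translates to a `$group` stage with `$sum: 1`, using the field from the question ("$user.language"); the collection name is illustrative:

```javascript
// Equivalent of: SELECT user.language, COUNT(*) ... GROUP BY user.language
const pipeline = [
  {
    $group: {
      _id: "$user.language", // the bucket key
      count: { $sum: 1 },    // add one per document in the bucket
    },
  },
];

// db.docs.aggregate(pipeline)
```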
[18:01:44] <daidoji> hello, how do I query for ObjectID in array?
[18:01:49] <daidoji> like by type I mean?
[18:03:59] <snowcode> StephenLynx and $match can contains $lte,$gte keywords? {$match : {time : {$gte : "2015-03-23T00:00:00Z",$lte :"2015-03-23T23:59:00Z" }}}
[18:04:30] <StephenLynx> yeah.
[18:04:55] <snowcode> mmmh so I think i've something wrong in my query (date format?)
[18:08:20] <snowcode> oh well I missed ISODate()
[18:08:26] <snowcode> damn I need to stop using mongoose
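The fix snowcode found, spelled out: date comparisons need real Date values, not strings, because a field holding dates never string-compares equal. In the mongo shell you would wrap the bounds in ISODate(...); the equivalent with plain JavaScript Dates, using the bounds from the log:

```javascript
// Comparing against strings silently matches nothing when the field
// holds real dates; wrap the range bounds in Date/ISODate instead.
const match = {
  $match: {
    time: {
      $gte: new Date("2015-03-23T00:00:00Z"),
      $lte: new Date("2015-03-23T23:59:00Z"),
    },
  },
};
```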
[18:19:53] <Takumo> StephenLynx: awesome! Let's see how it works on this dataset I'm adding to at about 100 docs/s
[18:20:30] <StephenLynx> yeah, mongoose is crap
[18:20:33] <StephenLynx> snowcode
[18:33:32] <ToeSnacks> I have been draining a shard in my cluster for 3 days and the remaining chunks count has not gone down at all, how can I verify Mongo is actually draining and not in a hung state? Watching the logs on the primary of the draining shard shows it migrating data, but it's taking way too long.
[18:33:47] <daidoji> why do objectIds say they're strings when calling typeof()?
[18:36:18] <daidoji> oh wait nevermind
[18:46:22] <StaticIP> Question: How would I connect 2 collections into 1 and have access to the information for use on a view?
[19:21:39] <daidoji> StaticIP: what?
[19:26:08] <dj3000> hi. I get this error when trying to install mongodb on ubuntu: W: Failed to fetch http://repo.mongodb.org/apt/ubuntu/dists/vivid/mongodb-org/3.0/multiverse/binary-i386/Packages 404 Not Found
[19:26:44] <dj3000> any ideas? Looks like a problem with the MongoDB repo
[19:32:47] <StaticIP> daidoji, sorry for not clarifying. I want to bring together all the data in 2 collections. So say I have 1 collection called Users that has information about the user. The 2nd collection is called Events and it also has data. I want to connect to both, collect all the data, and then display it in my views.
[19:35:10] <StaticIP> it would be equivalent to sql where you do a JOIN and collect information from 2 tables.
[19:35:40] <StephenLynx> you don't.
[19:35:48] <StephenLynx> that is a join.
[19:35:53] <StephenLynx> you just don't.
[19:37:09] <StaticIP> ok.. can i have the reasoning? or perhaps a workaround that would be doable. because say, I want the user to see all the events he/she has been to.
[19:37:50] <cheeser> because mongo doesn't support joins.
[19:38:26] <ToeSnacks> how long should removing a shard take with only 300 gigs of data?
[19:39:03] <cheeser> depends on many different variables.
[19:39:34] <ToeSnacks> realistic range of time?
[19:41:06] <cheeser> no idea
[19:41:57] <StaticIP> ok how about this, how would i reference 2 collections to a user? or would the best way be to have a collection called Users and then inside of there have Events as an array of the events?
[19:42:50] <ToeSnacks> cheeser: do you know how to find out what/if mongo is moving from one shard to the other?
[19:46:15] <cheeser> maybe this: https://github.com/serverdensity/mongodb-balance-check
[19:48:49] <ToeSnacks> cheeser: thanks
[19:50:51] <StephenLynx> StaticIP in that case you would need to have dynamic creation of collections.
[19:50:57] <StephenLynx> it is possible but it is ugly as sin.
[19:51:09] <StephenLynx> and unmaintainable.
[19:51:29] <StephenLynx> you either will have to accept mongo's limitations or use something else.
[19:52:02] <StephenLynx> you could use sub-documents to replace your 1 * n relations though.
[19:52:13] <StephenLynx> like, an array in a field of a document.
[19:52:20] <StephenLynx> but that has its own limitations too.
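A minimal sketch of the embedded-events approach StephenLynx describes, i.e. the subarray replacing the separate Events collection (collection and field names here are illustrative assumptions):

```javascript
// One user document with its events embedded as a subarray,
// instead of a second collection joined at read time.
const user = {
  _id: "user1",
  name: "Alice",
  events: [
    { name: "MongoDB Days", date: new Date("2015-04-20") },
  ],
};

// Adding an event later is a $push update (mongo shell):
// db.users.update({ _id: "user1" },
//   { $push: { events: { name: "Meetup", date: new Date() } } })
```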
[19:54:08] <StaticIP> i was thinking that i am setting this up incorrectly for mongodb. i come from a php/mysql background and thought i could carry over some of the building process i would use there. but i guess i have to figure out a different way of doing it.
[19:54:29] <StephenLynx> yes, it is very different.
[19:54:49] <StephenLynx> and it by no means gives you the same flexibility in querying data.
[19:56:10] <StaticIP> i think, if i just use "Users" for storing all the data, meaning if there is an event that a user will be at, that information would go into the Users document as Events (an array)
[19:56:13] <StaticIP> correct?
[19:56:37] <StephenLynx> yes. you could have that.
[19:58:31] <StaticIP> and then when I want to display in a view for the Events for that particular User, it would just be all there. Am I thinking that correctly? Lol
[19:58:53] <StephenLynx> yes. but the problem arises when you DON'T want them all to be there.
[19:59:23] <StaticIP> I just started with node/express/mongodb, seriously like last week. I have gotten far in my web app but there were certain aspects I wasn't thinking about because of my sql mindset.
[19:59:30] <StaticIP> ah.
[19:59:50] <StephenLynx> I suggest just using io.js and its driver.
[20:00:09] <StephenLynx> express is crap and node.js has been outdated by io.js.
[20:00:12] <StaticIP> Alright, I'll take a look into that.
[20:00:19] <StephenLynx> back to track:
[20:00:34] <StephenLynx> you can't sort a subarray without using $unwind on it first.
[20:00:59] <StephenLynx> you would also have to unwind it in order to limit and skip.
[20:01:29] <StephenLynx> and if you have a REALLY big amount of data: each document can only hold 16MB, which is millions and millions of words, but it's a hard cap.
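The unwind point sketched as a pipeline: to sort, skip, or limit within a subarray you flatten it into one document per element first (the `users` collection and `events` array follow the hypothetical embedded-document example above):

```javascript
// Flatten the subarray, then sort/skip/limit the elements
const pipeline = [
  { $match: { _id: "user1" } },
  { $unwind: "$events" },           // one output document per array element
  { $sort: { "events.date": -1 } }, // newest event first
  { $skip: 0 },
  { $limit: 10 },
];

// db.users.aggregate(pipeline)
```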
[20:04:21] <StephenLynx> for generic business systems I don't believe mongodb to be the best option, TBH.
[20:05:00] <StephenLynx> first there are just too many relations, then you have how often business rules change
[20:05:23] <StephenLynx> and how usually they don't require that much of a fast db.
[20:07:08] <saml> but mongodb is web scale business
[20:07:14] <StephenLynx> :^)
[20:07:33] <StephenLynx> with differential tools for cloud leverage
[20:08:11] <StephenLynx> On the other hand, I would still use io.js, been using it with mysql and its pretty great.
[20:21:21] <ToeSnacks> are there any causes for 'ns not found, should be impossible' other than config desync?
[21:00:32] <girb1> help please .. how can I retrieve data that is 30 days old by "_id"
[21:11:48] <tubbo> girb1: rails provides a created_at column
[21:11:52] <tubbo> i'd start there
[21:12:47] <girb1> tubbo: any standalone api lib ?
[21:12:58] <tubbo> girb1: lol i thought this was #rubyonrails sorry :D
[21:13:03] <tubbo> girb1: never mind what i said
[21:13:23] <girb1> tubbo: :)
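For the record, girb1's question has a mongo-native answer that never came up: the first four bytes of an ObjectId are its creation timestamp in seconds, so a boundary ObjectId can be synthesized from a Date and used in a range query on `_id`. A sketch of building the boundary in plain JavaScript (in the mongo shell you would pass the hex string to ObjectId(); the collection name is hypothetical):

```javascript
// An ObjectId's first 4 bytes are seconds since the epoch, so an _id
// boundary for "30 days ago" can be built from a timestamp.
const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
const seconds = Math.floor(thirtyDaysAgo.getTime() / 1000);

// 8 hex digits of timestamp + 16 zero digits = a 24-char boundary id
const boundaryHex = seconds.toString(16).padStart(8, "0") + "0".repeat(16);

// mongo shell: documents created more than 30 days ago
// db.coll.find({ _id: { $lt: ObjectId(boundaryHex) } })
```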