[06:49:57] <thomaschaaf> hello I just upgraded from wheezy to jessie today on one of our servers. Now I can't start mongodb anymore: http://pastie.org/10115694
[06:50:11] <thomaschaaf> any idea what could cause this error?
[07:19:04] <thomaschaaf> reinstalling the mongodb package seems to have fixed it
[08:12:33] <errakeshpd> how do you use rails and mongo together? (Model.find(1) is for rails and Table.find(1) for mongo) so it may cause some confusion when reading the code
[08:21:46] <eren> hey folks, I switched to mongod 3.0.2 but I forgot to switch to the wiredTiger storage engine, it is still using the old backend
[08:21:52] <eren> how can I switch to wiredTiger without losing data?
[08:25:50] <Derick> (or, setup a replica set member with WT and let it sync)
[08:28:08] <eren> Derick: replica set could be a good solution. Unfortunately I have 1 mongo installation and it's actively used now, collecting meters from agents. I believe the collector cannot process them and write to mongo fast enough and the messages are queued in rabbit
[08:28:11] <mick27> do you guys know a way to tail the oplog to amazon sns ?
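(One common approach to mick27's question: the oplog on a replica set member is a capped collection, so it can be followed with a tailable cursor and each entry forwarded to an external service such as SNS by separate code. A hedged mongo shell sketch, assuming a replica set is running; the timestamp filter is a placeholder for "start from now":)

```javascript
// Tail the oplog with a tailable, await-data cursor (replica set required).
// Forwarding each entry to Amazon SNS would be done by external code.
var oplog = db.getSiblingDB("local").oplog.rs;
var cursor = oplog.find({ ts: { $gt: Timestamp(0, 0) } })  // placeholder start point
    .addOption(DBQuery.Option.tailable)
    .addOption(DBQuery.Option.awaitData);
while (cursor.hasNext()) {
    printjson(cursor.next());  // replace with a publish call to SNS
}
```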
[08:29:15] <Derick> eren: wouldn't rabbit just queue them for a bit longer then?
[08:29:27] <Derick> (no idea how rabbitmq works ;-) )
[08:37:43] <eren> Derick: yeah, I have 2 million messages in rabbit, that's way more than it should be :)
[08:38:36] <eren> I believe I will go with the replica solution. I will spin up a new replica with wiredTiger, add this replica to the single mongo instance, wait for sync, and ... ?
[08:39:23] <eren> should I switch to this replica as the main mongo database, or wait for the sync? I have 60GB of data currently
[08:39:34] <eren> when the main mongo instance is down, rabbit will queue the messages
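(The replica-set route Derick suggested can be sketched in the mongo shell roughly as follows; the hostname is a placeholder and this assumes the existing mongod has been restarted with `--replSet`:)

```javascript
// On the existing instance (restarted with --replSet rs0):
rs.initiate();                 // turn the standalone into a one-member replica set
rs.add("newhost:27017");       // new member started with --storageEngine wiredTiger
rs.status();                   // repeat until the new member reports SECONDARY

// Once the WT member is fully synced, hand over the primary role:
rs.stepDown();                 // run on the old primary; the WT member can be elected
```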
[08:55:03] <eren> Derick: I believe I will use mongodump/mongorestore
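(The dump/restore route eren settled on looks roughly like this; ports and paths are assumptions, not from the conversation:)

```shell
# Dump everything from the old (non-WT) instance:
mongodump --host localhost:27017 --out /backup/dump

# Start a fresh instance on the wiredTiger engine (empty dbpath):
mongod --dbpath /var/lib/mongo-wt --storageEngine wiredTiger --port 27018

# Restore the dump into the new instance:
mongorestore --host localhost:27018 /backup/dump
```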
[11:00:36] <amitprakash> Hi, for queries in currentOp which aren't being recorded by virtue of being too large, is there a way for me to inspect what the query is?
[11:01:22] <eren> how can I instruct mongodb to rebuild indexes? I moved to wiredTiger via "mongodump/mongorestore" and I see that /var/lib/mongo is 350MB, whereas in the previous instance it was 7GB
[11:01:33] <eren> I believe it is because of missing indexes
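(Worth noting: mongorestore recreates indexes from the dump's metadata by default, and wiredTiger's compression alone can account for much of the size drop. A quick shell sketch to verify what indexes exist, and to recreate one if needed; the collection and field names are placeholders:)

```javascript
// List every index on every collection in the current database:
db.getCollectionNames().forEach(function (name) {
    printjson(db[name].getIndexes());
});

// Recreate a missing index explicitly (placeholder names):
db.mycollection.createIndex({ someField: 1 });
```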
[14:03:38] <snowcode> StephenLynx Thank you :) Have you ever worked with $geoNear? I've tried to perform a query to get all the points near a [lon,lat] point but it returns wrong results (in fact it doesn't seem to return any results): https://gist.github.com/malcommac/72a689e6b8847fc5e289
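(The usual $geoNear pitfalls: the collection needs a geospatial index, $geoNear must be the first stage of the pipeline, and GeoJSON coordinates are [longitude, latitude]. A minimal sketch with placeholder names and coordinates:)

```javascript
// $geoNear requires a geospatial index on the queried field:
db.places.createIndex({ location: "2dsphere" });

db.places.aggregate([
    {
        $geoNear: {
            near: { type: "Point", coordinates: [-73.99, 40.73] },  // [lon, lat]
            distanceField: "dist",   // computed distance is written here
            maxDistance: 5000,       // metres, since near is a GeoJSON point
            spherical: true
        }
    }
]);
```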
[17:24:48] <MadLamb> StephenLynx, added the output there.
[17:25:38] <StephenLynx> ok, so you have a js file and are using mongo to execute it?
[17:26:37] <MadLamb> StephenLynx, i'm running that directly in terminal as one-line-command. I just formatted it for readability purposes to show to you.
[17:27:04] <MadLamb> StephenLynx, by terminal i mean mongo console
[17:27:32] <StephenLynx> ok, so you open your terminal, you type "mongo", then you type all that code?
[17:31:43] <MadLamb> the weird part is that it still returned undefined, but the command worked. Maybe the WriteResult is returned in terminal instead of the default method return?
[17:34:18] <StephenLynx> don't know, I never added so much logic in the terminal.
[17:34:32] <StephenLynx> I never added any logic at all in the terminal.
[17:34:32] <snowcode> is it possible to have an aggregate query with multiple params? I've added a $geoNear but I would like to add some other conditions
[17:34:55] <StephenLynx> aggregations are designed for that purpose, you add stages in any amount or order.
[17:35:09] <StephenLynx> that's why I never use find.
[18:33:32] <ToeSnacks> I have been draining a shard in my cluster for 3 days and the remaining chunks count has not gone down at all, how can I verify Mongo is actually draining and not in a hung state? Watching the logs on the primary of the draining shard shows it migrating data, but it's taking way too long.
[18:33:47] <daidoji> why do objectIds say they're strings when calling typeof()?
[19:26:08] <dj3000> hi. I get this error when trying to install mongodb on ubuntu: W: Failed to fetch http://repo.mongodb.org/apt/ubuntu/dists/vivid/mongodb-org/3.0/multiverse/binary-i386/Packages 404 Not Found
[19:26:44] <dj3000> any ideas? Looks like a problem with the MongoDB repo
[19:32:47] <StaticIP> daidoji, sorry for not clarifying. I want to bring in all the data in 2 collections. So say I have one collection called Users that has information about the user. The second collection is called Events and it also has data. I want to connect to both and collect all the data, and then in my views I want to display it.
[19:35:10] <StaticIP> it would be equal to say sql where you do a JOIN and collect information from 2 tables.
[19:37:09] <StaticIP> ok.. can I ask the reasoning? or perhaps a workaround that would be doable? because say, I want the user to see all the events he/she has been to.
[19:37:50] <cheeser> because mongo doesn't support joins.
[19:38:26] <ToeSnacks> how long should removing a shard take with only 300 gigs of data?
[19:39:03] <cheeser> depends on many different variables.
[19:41:57] <StaticIP> ok how about this, how would I reference 2 collections to a user? or would the best way be to have a collection called user and then inside of there have Events as an array of the events?
[19:42:50] <ToeSnacks> cheeser: do you know how to find out what/if mongo is moving from one shard to the other?
[19:51:29] <StephenLynx> you'll either have to accept mongo's limitations or use something else.
[19:52:02] <StephenLynx> you could use sub-documents to replace your 1:n relations though.
[19:52:13] <StephenLynx> like, an array in a field of a document.
[19:52:20] <StephenLynx> but that has its own limitations too.
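(The embedding approach being discussed, as a shell sketch; all field names and values are illustrative, not from the conversation:)

```javascript
// Embed a user's events as an array on the user document:
db.users.insert({
    name: "alice",
    events: [
        { title: "meetup", date: ISODate("2015-05-01") },
        { title: "conf",   date: ISODate("2015-06-12") }
    ]
});

// Append another event later:
db.users.update({ name: "alice" },
                { $push: { events: { title: "party", date: ISODate("2015-07-04") } } });

// One query fetches the user with all their events:
db.users.findOne({ name: "alice" });
```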
[19:54:08] <StaticIP> I was thinking that I am setting this up incorrectly for mongodb. I come from a php/mysql background and thought I could carry some of the building process from there over to here, but I guess I have to figure out a different way of doing it.
[19:54:29] <StephenLynx> yes, it is very different.
[19:54:49] <StephenLynx> and it by no means gives you the same flexibility in querying data.
[19:56:10] <StaticIP> I think, if I just use "Users" for storing all the data, meaning if there is an event that a user will be at, that information would go into the Users document as Events (an array)
[19:56:37] <StephenLynx> yes. you could have that.
[19:58:31] <StaticIP> and then when I want to display in a view for the Events for that particular User, it would just be all there. Am I thinking that correctly? Lol
[19:58:53] <StephenLynx> yes. but the problem arises when you DON'T want them all to be there.
[19:59:23] <StaticIP> I just started with node/express/mongodb, seriously like last week. I have gotten far in my web app but there were certain aspects I wasn't considering because of my sql way of thinking.
[20:00:34] <StephenLynx> you can't sort a subarray without using $unwind on it first.
[20:00:59] <StephenLynx> you would also have to unwind it in order to limit and skip.
[20:01:29] <StephenLynx> and if you have a REALLY big amount of data, each document can only hold 16MB. which is millions and millions of words, but it's a hard cap.
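(The unwind-then-paginate pattern StephenLynx describes, sketched with the same placeholder names as the embedding example above:)

```javascript
// Sort and paginate an embedded array: $unwind first, then sort/skip/limit.
db.users.aggregate([
    { $match: { name: "alice" } },        // pick the user
    { $unwind: "$events" },               // one document per array element
    { $sort: { "events.date": -1 } },     // newest event first
    { $skip: 10 },                        // page offset
    { $limit: 5 }                         // page size
]);
```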
[20:04:21] <StephenLynx> for generic business systems I don't believe mongodb to be the best option, TBH.
[20:05:00] <StephenLynx> first, there are just too many relations, then you have how often business rules change
[20:05:23] <StephenLynx> and how they usually don't require that fast a db.
[20:07:08] <saml> but mongodb is web scale business