PMXBOT Log file Viewer


#mongodb logs for Sunday the 26th of January, 2014

[11:13:47] <calaghan> hi. can someone tell me what's wrong with my geospatial query? http://pastebin.com/JuNmZyu2
[11:13:55] <calaghan> or maybe my index or data format is wrong...?
[11:41:38] <kali> calaghan: i think your index should be on "location" (the field) and not on coordinates
[11:42:11] <calaghan> ok, I'll try to fix it
[11:42:57] <kali> calaghan: same for the query.
[11:53:01] <calaghan> MongoError: Can't extract geo keys from object, malformed geometry?:{ type: "Point", coordinates: [ "25.1495367", "60.222693" ] } :(
[11:53:47] <calaghan> i just changed "coordinates" to "location"
[11:54:23] <calaghan> var location = { address : address, name : locationName, location : {type: "Point", coordinates : [ lon, lat ]} }; Locations._ensureIndex( { "location": "2dsphere" } );
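The error above points at the remaining problem: the coordinates are strings ("25.1495367"), and GeoJSON requires numbers, in [longitude, latitude] order. A minimal working sketch in the mongo shell, reusing the pasted values (collection name and the $near query are illustrative, not from the log):

    // coordinates must be numbers, not strings, in [longitude, latitude] order
    db.locations.insert({
        address: "Example street 1",           // hypothetical sample values
        name: "Example place",
        location: { type: "Point", coordinates: [ 25.1495367, 60.222693 ] }
    });

    // as kali says, the 2dsphere index goes on the field holding the GeoJSON object
    db.locations.ensureIndex({ "location": "2dsphere" });

    // and queries target that same field
    db.locations.find({
        location: {
            $near: {
                $geometry: { type: "Point", coordinates: [ 25.15, 60.22 ] },
                $maxDistance: 1000   // metres
            }
        }
    });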
[12:17:03] <curiositi> Hey, I'm going to have structures/entities that the count of their attributes may differ in time (may increase/decrease), Is it a wise decision to use MongoDB?
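For what it's worth, this is exactly the case document stores handle well: documents in one collection need not share a schema, so attributes can come and go per entity over time. A minimal illustration in the mongo shell (collection and field names are hypothetical):

    // two entities with different attribute sets in the same collection
    db.entities.insert({ name: "a", color: "red" });
    db.entities.insert({ name: "b", color: "blue", weight: 12, tags: ["x", "y"] });

    // fields can be added or removed per document later
    db.entities.update({ name: "a" }, { $set: { weight: 5 } });
    db.entities.update({ name: "b" }, { $unset: { tags: "" } });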
[14:11:29] <gencoya> Hi. I'm trying to generate a database dump file but I keep getting the following error (see end of message). I googled that it can be caused by the 4MB document size limit, but the message is about 724MB. It's not possible that there is that much data in this database. recv(): message len 759714643 is too large 759714643 DBClientCursor::init call() failed assertion: 10276 DBClientBase::findN: transport error: 0.0.0.0 query: { isd
[14:12:00] <gencoya> Can I ask for help figuring out what I could be doing wrong?
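No one answered here, but a message length in the hundreds of megabytes is usually not real data: one common cause is the client reading bytes from something that does not speak the mongod wire protocol (a wrong port, a proxy, mongod's HTTP status interface) and interpreting them as a message header. A first check worth doing; host, port, and database names below are assumptions:

    # confirm the target actually answers as mongod on that port
    mongo db.example.com:27017 --eval 'printjson(db.runCommand({ ping: 1 }))'

    # then dump with host and port stated explicitly
    mongodump --host db.example.com --port 27017 --db mydb --out /backup/dump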
[15:46:08] <TechMiX> Hey, I'm going to have structures/entities that the count of their attributes may differ in time (may increase/decrease), Is it a wise decision to use MongoDB?
[15:48:52] <twatson123> im having a lot of trouble with one of my replica set members, can someone help me out?
[15:50:11] <joannac> twatson123: sure, but i need more information
[15:51:09] <twatson123> ok so my site went down, so i went to the primary and tried rs.status() but got unauthorized
[15:51:25] <joannac> Okay. So authenticate?
[15:51:38] <twatson123> ye i tried but i still get unauthorized
[15:51:48] <twatson123> the thing is i never needed to authenticate
[15:51:48] <joannac> did the auth succeed?
[15:51:58] <twatson123> i use keyfile
[15:52:00] <twatson123> ye
[15:52:06] <joannac> that's not authenticating
[15:52:20] <joannac> that's so your replica set members can talk to each other when you have auth on
[15:52:42] <twatson123> ok so do i authenticate against admin db?
[15:52:51] <joannac> you need to actually authenticate like db.auth("user", "password")
[15:52:58] <joannac> with an appropriate user
[15:53:47] <twatson123> ok thanks, lol sorry about that
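For reference, joannac's point as a concrete sketch (user and password are placeholders): the keyfile only lets replica set members authenticate to each other; a client session still has to log in, typically against the admin database.

    // from the mongo shell, authenticate against the admin database
    db.getSiblingDB("admin").auth("myAdminUser", "password")   // returns 1 on success

    // after which this works
    rs.status()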
[15:54:02] <twatson123> ok so now im authenticated
[15:54:08] <twatson123> all members are secondary
[15:54:31] <joannac> how many members do you have?
[15:54:39] <twatson123> 3
[15:54:45] <joannac> how many are up?
[15:54:50] <twatson123> one is a hidden delayed
[15:54:52] <twatson123> 3
[15:55:00] <joannac> so you should have majority
[15:55:06] <joannac> check the logs?
[15:56:03] <twatson123> im getting a lot of "replset info [ip] thinks that we are down"
[15:57:00] <twatson123> [rsHealthPoll] replset info [ip]:27017 thinks that we are down
[15:57:04] <joannac> network problems?
[15:57:28] <joannac> heavy load on your primary?
[15:57:40] <twatson123> well rs.status on the primary says that the other members are up
[15:58:05] <joannac> the messages are on which node?
[15:58:15] <twatson123> my primary
[15:58:25] <twatson123> but..
[15:58:47] <twatson123> rs.status on a secondary says the primary is "(not reachable/healthy)"
[15:58:53] <joannac> right
[15:58:57] <joannac> network partition?
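Each node reports only its own view of the set, which is why the primary and secondary disagree here. A compact way to compare views, run in the shell on each node (a sketch, nothing more):

    // summarise what this node thinks of every member
    rs.status().members.forEach(function (m) {
        print(m.name + "  " + m.stateStr + "  health=" + m.health);
    });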
[16:01:21] <twatson123> im not sure what the problem is
[16:02:17] <joannac> your secondary can't contact your primary?
[16:02:31] <twatson123> yes but im not sure why
[16:02:50] <twatson123> thing is i didnt have to authenticate on the primary before today when i ran rs.status()
[16:02:50] <joannac> try mongo HOSTNAME:port on your secondary, using hostname and port of your primary
[16:04:10] <twatson123> i can connect to the primary from the secondary server using 'mongo HOSTNAME:PORT"
[16:04:20] <joannac> hrm
[16:04:37] <joannac> but you're still seeing connection problems in the logs?
[16:04:53] <joannac> the secondary logs
[16:05:23] <twatson123> its backup :O
[16:05:27] <twatson123> back up
[16:05:59] <twatson123> my site is back up anyway
[16:06:35] <twatson123> ok so my primary has been elected as primary again
[16:08:35] <twatson123> in my primary's log im still getting a lot of "thinks that we are down" messages
[16:09:09] <twatson123> and a lot of [conn281] recv(): message len 50412928 is too large. Max is 48000000
[16:09:50] <joannac> inserting large documents?
[16:11:34] <twatson123> i dont have large document sizes
[16:13:14] <joannac> well, i don't know what you're doing.
[16:13:21] <joannac> but that's what the message indicates
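For context: 48000000 bytes is the server's maximum size for a single wire-protocol message, while an individual document is capped at 16MB. So a 50MB message does not necessarily mean one huge document; a driver batching many documents into a single oversized insert or query message can trip this too, as can garbage bytes read as a length (see the dump error earlier in this log). One way to check a suspect document's size, with collection name and filter hypothetical:

    // BSON size of one suspect document, in bytes (single docs max out at 16MB)
    var doc = db.mycollection.findOne({ /* filter for the suspect doc */ });
    print(Object.bsonsize(doc));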
[18:29:43] <_rgn> hi. the docs say that updates on capped collections can't make the documents grow. what does that mean?
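No one answered _rgn, but the meaning is: documents in a capped collection are updated in place, so an update may not increase a document's byte size; it has to fit where it already sits. A sketch of the failure mode in the shell (names are made up, and the exact error wording varies by server version; the one shown is from the 2.4 era this log dates from):

    // a tiny capped collection
    db.createCollection("caplog", { capped: true, size: 4096 });
    db.caplog.insert({ _id: 1, msg: "short" });

    // same-size in-place update: fine
    db.caplog.update({ _id: 1 }, { $set: { msg: "shor2" } });

    // growing the document: rejected, e.g.
    // "failing update: objects in a capped ns cannot grow"
    db.caplog.update({ _id: 1 }, { $set: { msg: "a much longer message" } });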
[18:35:34] <Hyperking> why mongodb? im not trolling, just asking because im liking it but need a good reason to present to my bosses
[19:15:49] <Gargoyle> Hyperking: Because it fits your use case?
[20:06:41] <Leoss> is nosql more efficient than sql at finding documents in a very large collection?
[20:06:41] <Leoss> for example, if I have a 2D map where each square is represented by a document containing an X/Y pair,
[20:06:41] <Leoss> and I run the query 2DMap.find({X: {$gte: 0, $lt: 5}, Y: {$gte: 0, $lt: 5} }) (a query that should return 25 documents)
[20:06:41] <Leoss> how will the query time increase as the total collection size increases?
[20:08:38] <Leoss> will queries of this kind be viable even if I have like 1,000,000 documents?
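This went unanswered; the short version is that it depends on indexing, not on SQL vs NoSQL. With an index on the queried fields the server walks only the matching key range, so query time tracks the number of results plus an O(log n) descent into the B-tree, not the total collection size; without one, every query is a full collection scan. A million documents is fine with the right index. A sketch reusing Leoss's collection name (it has to go through getCollection() because it starts with a digit):

    var maps = db.getCollection("2DMap");

    // compound index covering both range predicates
    maps.ensureIndex({ X: 1, Y: 1 });

    maps.find({ X: { $gte: 0, $lt: 5 }, Y: { $gte: 0, $lt: 5 } });

    // explain() shows whether the index is used
    // ("cursor" : "BtreeCursor X_1_Y_1" rather than "BasicCursor" on 2.4-era servers)
    maps.find({ X: { $gte: 0, $lt: 5 }, Y: { $gte: 0, $lt: 5 } }).explain();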
[20:26:51] <Hitesh> Good afternoon guys
[20:26:56] <Hitesh> I am new in MongoDB
[20:27:08] <Hitesh> I have a couple of queries
[20:28:47] <Hitesh> I would like to process a large amount of data, let's say 10M rows... I need to aggregate or group them according to the respective rules... which MongoDB features are good for this, so that I can query efficiently and get the results in reasonable time?
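The usual answer to Hitesh's question is the aggregation framework: a pipeline of stages executed server-side, with $match placed first so an index can cut the 10M rows down before grouping. A hedged sketch with hypothetical collection and field names:

    db.orders.aggregate([
        { $match: { status: "complete" } },      // filter first (can use an index)
        { $group: {
            _id: "$customerId",                  // group key
            total: { $sum: "$amount" },
            count: { $sum: 1 }
        } },
        { $sort: { total: -1 } },
        { $limit: 10 }
    ]);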
[20:37:38] <magicat1> Hi everyone. I've been using mongodb for a while and I like it. But some people I talk to keep saying I shouldn't be using it. What are the criticisms of mongodb? Why do some people think it's bad?
[20:45:51] <BurtyB> magicat1, ask them?
[20:46:21] <kizzx2> hi all
[20:47:15] <kizzx2> the docs tell me that if i'm going to shard i should do it early, so i'm going to create a shard cluster with just one shard to begin with. does that make sense, or is it no better than not enabling sharding at all?
[20:48:13] <cheeser> you can run with mongos long before you ever shard.
[20:48:41] <cheeser> the main disadvantage of waiting too long to enable sharding on a collection is that it can take a long time to balance your data if there's a lot of it.
[20:53:11] <kizzx2> cheeser: and if i understood the docs correctly, during balancing the cluster is still available, albeit slower, right?
[20:56:26] <cheeser> correct
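cheeser's suggestion, sketched: run mongos (plus config servers) in front of a single shard from day one, and turning sharding on later is two commands rather than a re-architecture. Database, collection, and shard key below are placeholders; choosing the key is the part that deserves real thought.

    // on a mongos, when the time comes
    sh.enableSharding("mydb");
    sh.shardCollection("mydb.events", { userId: 1 });

    // watch the balancer distribute chunks; the cluster stays readable
    // and writable while this happens, just under extra load
    sh.status();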
[22:06:37] <future28> Hey all, I am brand new to mongodb and need some help. For now, all I want to do is take a text file I have and map it with a key value. For example - I want to map "1" to the text file body ie "Hello, this is my first go at mongo."
[22:06:55] <future28> Could someone point me on the best way to do this? I'm using unix if it helps.
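One simple way from the mongo shell, for a file comfortably under the 16MB document limit (for bigger files, GridFS via the mongofiles tool is the standard route); the path and collection name here are made up:

    // cat() is a mongo shell built-in that reads a local file into a string
    var body = cat("/home/me/hello.txt");
    db.texts.insert({ _id: "1", body: body });

    // fetch it back by key
    db.texts.findOne({ _id: "1" }).body;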