PMXBOT Log file Viewer

#mongodb logs for Monday the 8th of September, 2014

[01:12:05] <mango_> what is host set as by default
[01:14:37] <godelmang> i am diving into mongodb oh so soon lemme soak up the vibrations while i get there yallyall!
[01:15:18] <tab1293> How can I make sure that when trying to create a compound index, mongo doesn't run out of memory
[01:15:20] <godelmang> i have nice json documents im hoping its really easy to import into mongodb without too much doings
[01:15:35] <tab1293> it keeps crashing when I try to create an index on two fields
[01:16:26] <joannac> tab1293: um, that doesn't sound good
[01:16:26] <mango_> increase the size of your oplog maybe
[01:16:31] <joannac> what's the error in the logs?
[01:17:08] <tab1293> hold on let me get them up
[01:18:30] <tab1293> joannac http://pastebin.com/X9vepTTP
[01:20:01] <tab1293> I can't connect to mongo anymore after it crashes either
[01:20:21] <tab1293> I have to start and stop the service multiple times until it eventually will start working
[01:20:32] <joannac> how much memory do you have?
[01:20:44] <joannac> and how big is that collection?
[01:21:18] <tab1293> I have only a gig cause im running on an ec2 micro
[01:21:26] <tab1293> and the collection is about 1.5 million docs
[01:21:56] <joannac> yeah, you don't have enough memory
[01:22:00] <joannac> get a bigger instance
[01:22:21] <tab1293> bleh okay. There's no way to limit how much memory mongo uses?
[01:22:50] <joannac> even if there is, you'll get the same problem
[01:23:02] <joannac> mongodb needs more memory and can't get it
[01:24:25] <tab1293> does mongodb have any support to push and pull remote databases?
[01:24:55] <tab1293> so I can download the db locally, create the index on a machine with more memory and then push it back to the ec2?
[01:26:17] <joannac> mongodump / restore
[01:26:40] <tab1293> and that will include existing indexes and all?
[01:27:35] <joannac> yes
[01:27:44] <joannac> Actually, no
[01:28:23] <joannac> after you build the indexes, shut down the instance, and copy the data files over
[01:31:02] <tab1293> just launching a bigger instance. not worth the hassle
[01:31:12] <joannac> nod
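
A minimal pymongo sketch of the index build tab1293 was attempting; the real field names never appear in the log, so "user_id" and "created_at" below are hypothetical:

    # Build a compound index on two fields with pymongo (field names are made up).
    from pymongo import MongoClient, ASCENDING

    coll = MongoClient("localhost", 27017)["mydb"]["mycollection"]

    # background=True keeps the build from blocking other operations (modern servers
    # ignore the flag and always build in the background), but it does not reduce
    # the memory the build itself needs, which is why the 1 GB micro instance fails.
    coll.create_index([("user_id", ASCENDING), ("created_at", ASCENDING)],
                      background=True)
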
[02:45:19] <AlecTaylor> hi
[08:46:24] <jordana> Hmm, what happens when in a sharded cluster one of the shards runs out of space?
[08:46:36] <jordana> Will it continue to function or will mongod just exit?
[09:48:43] <yauh> Does mongos redirect an app's request to mongod, so that the application will connect to the mongod instead?
[09:48:56] <yauh> I am trying to set up firewalls and would like to only allow access to mongos - will that work?
[09:54:16] <jordana> yauh: mongos routes queries based on where data is stored across the cluster. Your connection from your app is to mongos
[09:54:33] <yauh> jordana: so my app never speaks directly to mongod, right?
[09:54:36] <yauh> only to mongos
[09:55:03] <jordana> Yes
[09:55:10] <yauh> great, thanks so much :)
[09:57:10] <Petazz> Hi! Anyone using pymongo to drive Mongo? I'm having trouble inserting stuff coming from an angular front-end saying InvalidDocument: key '$oid' must not start with '$'
[09:57:36] <Petazz> What should the key name be then if not exactly what mongo is returning?
[09:58:26] <Petazz> I guess BSON specs say that a key cannot start with $?
[10:17:10] <jordana> Petazz: I haven't really used the python driver but to me it looks like you're doing an update on $oid or trying to insert a doc with $oid
[10:17:28] <jordana> which won't work, because anything prefixed with $ is reserved for mongo
[10:17:50] <jordana> $oid as the field key
[10:18:04] <jordana> if you're trying to set the object id, usually _id is the field you want
[10:18:31] <Petazz> jordana: You're on it ;) What I'm doing now is trying to db.doc.update() what I previously got from find_one basically
[10:18:58] <Petazz> And since I'm pretty new to mongo, I don't get where the $oid key actually comes from
[10:19:09] <jordana> so you want to do update({ find }, { update })
[10:19:18] <Petazz> Browsing my db with mongo client shows _id on all documents
[10:19:39] <Petazz> But I guess pymongo as a driver translates _oid into $oid?
[10:19:51] <jordana> Let me have a look
[10:20:11] <jordana> do this
[10:20:58] <Petazz> Or is it this what causes the translation: http://docs.mongodb.org/manual/reference/mongodb-extended-json/#oid
[10:21:24] <Petazz> Should I first do a reverse translation from JSON to BSON?
[10:21:43] <jordana> collection.update({ '_id': ObjectId }, { $set: { fields... } })
[10:21:50] <jordana> No
[10:22:05] <jordana> Just take the object id out of the find you did and use it as reference
[10:22:13] <jordana> It should be of type ObjectId
[10:23:31] <Petazz> So using $set would fix it?
[10:23:48] <Petazz> The problem is that the document I'm trying to update has embedded oids
[10:24:06] <jordana> can you show me the document in pastebin from the mongo shell?
[10:24:08] <Petazz> And it would be a pain to go through all the keys to translate them. I cannot remove them
[10:24:17] <Petazz> Just a sec!
[10:26:34] <Petazz> jordana: Here's a subset of it: http://pastie.org/private/iqsdrrtqvhgw632bkjniw
[10:27:17] <jordana> Okay that all looks fine
[10:27:43] <jordana> what are you trying to accomplish?
[10:28:58] <Petazz> Working with that set through an API
[10:29:16] <Petazz> A backend I'm building with Python and using pymongo
[10:29:49] <Petazz> So I guess pymongo does a strict translation from the BSON ObjectId type to JSON, well, a dict
[10:30:12] <Petazz> So how could I manually work with the data as little as possible
[10:34:40] <jordana> Petazz: ObjectId when deserialized has multiple values embedded in it such as creation datetime etc.
[10:34:55] <jordana> To get it as a string there should just be a toString or cast available
[10:35:25] <jordana> What are you trying to accomplish with the update sorry
[10:35:29] <jordana> is what I meant to say
[10:35:42] <Petazz> Erm, why do people update their documents?
[10:35:56] <jordana> No, what are you trying to change?
[10:36:02] <Petazz> Does it matter?
[10:36:39] <jordana> You're getting an error with your update, I'm trying to figure out what query you're running to get the error so I can give you the query to update the document
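
For reference, the "$oid" error usually means the front-end is sending back MongoDB Extended JSON, in which an ObjectId is serialized as {"$oid": "..."}. One way to handle it, sketched here with invented collection and field names, is to parse the payload with bson.json_util so the $oid wrapper becomes a real ObjectId again, then update by _id:

    # Convert Extended JSON from the client back into BSON types before updating.
    from bson.json_util import loads
    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["docs"]

    raw = '{"_id": {"$oid": "540d1f5ce4b0a3f3b1a2c3d4"}, "title": "updated title"}'
    doc = loads(raw)          # json_util turns {"$oid": ...} into an ObjectId
    doc_id = doc.pop("_id")   # don't try to $set _id itself

    coll.update_one({"_id": doc_id}, {"$set": doc})
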
[11:56:54] <Baluse> hello
[11:57:42] <Baluse> I want to build an app that would save forms(documents)
[11:58:55] <Baluse> in an RDBMS, mapping all these would take hundreds of columns and nasty relationships. So I am considering mongodb.
[12:03:34] <Lucas1_> does anyone here speak Portuguese to help clear up some questions?
[12:04:19] <Bittu> Hi Everyone,
[12:04:55] <Bittu> is it possible to make a read-only query from a sharded replica set?
[12:05:42] <Bittu> I have 2 shard machine each has 1-replica set (say rs0 and rs1)
[12:06:02] <Bittu> i want to make a read only query from secondary of rs0 and rs1
[12:07:22] <Bittu> since the data are distributed across both shards... so how do I query the replicas?
[12:54:16] <jekle> Baluse: that's the reason I am here too :) no one wants to build EAV-like structures in relational databases anymore :D
[12:55:05] <cheeser> Bittu: you can set a read preference for the secondary but it's not recommended.
[13:02:53] <jekle> Baluse: this blog post shows a small example schema you might find interesting
[13:03:02] <jekle> Baluse: http://chilipepperdesign.com/2013/11/11/versioned-data-model-definitions-with-mongodb/
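
As a rough illustration of the document-per-form idea Baluse and jekle are discussing (all names invented for the example), one submission becomes a single document instead of dozens of columns and join tables:

    # Hypothetical form submission stored as one document.
    from datetime import datetime, timezone
    from pymongo import MongoClient

    forms = MongoClient()["appdb"]["form_submissions"]

    forms.insert_one({
        "form_name": "customer_survey",
        "submitted_at": datetime.now(timezone.utc),
        "fields": [  # per-form fields live in one array, no EAV tables needed
            {"name": "email", "type": "string", "value": "user@example.com"},
            {"name": "rating", "type": "int", "value": 4},
        ],
    })
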
[13:04:26] <Bittu> @cheeser: Is there any configuration so that I can set up a separate mongos to query the secondary replicas only?
[13:05:01] <cheeser> you configure your client accordingly. the mongo uri supports the readPreference parameter in (hopefully) all the drivers
[13:05:34] <Bittu> yes, it's supported by Pymongo, I went through it
[13:05:51] <cheeser> you should read over the read preference docs, though: http://docs.mongodb.org/manual/applications/replication/
[13:06:03] <cheeser> there are some caveats you should be aware of before doing secondary reads
[13:06:35] <Bittu> okay... thanks for answering the question
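
A sketch of what cheeser suggests, setting readPreference in the connection URI (hostname and collection names hypothetical); the main caveat is that secondary reads can return stale data, so the linked docs are worth reading first:

    # Route reads to secondaries when possible via the URI readPreference option.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://mongos1.example.com:27017/?readPreference=secondaryPreferred"
    )
    orders = client["shop"]["orders"]
    print(orders.find_one({"status": "shipped"}))
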
[15:35:33] <asdf23sdfs> hey ive got a collection of 40 million documents, and i need to access only 10% of them much more regularly. should i create an index on a boolean field?
[15:36:12] <cheeser> is it only that 10% that'll be marked as true?
[15:37:15] <asdf23sdfs> yeah
[15:37:47] <asdf23sdfs> and actually i screwed up the numbers...its 400 million and my focus will be on 40 million of them.
[15:38:27] <asdf23sdfs> basically, i'm doing > db.tweets.find({'hashtags.0': {'$exists':1}})
[15:38:49] <asdf23sdfs> so scanning 400 million to find the 40 million or so that have hashtags. looking for a faster way to do this.
[15:40:40] <saml> asdf23sdfs, doesn't 2.6 have sparse index?
[15:41:20] <asdf23sdfs> i think so. i'm new to mongodb. so i wasn't sure what the best way to do this. i haven't added this extra field yet, b/c i wasn't sure it was the best way to do it even.
[15:42:00] <saml> db.tweets.find({hashtags: {$not:{$size:0}}})
[15:42:08] <saml> if every document has hashtags: []
[15:43:32] <cheeser> probably better to not store hashtags if it's empty and just do a $exists on hashtags
[15:46:07] <asdf23sdfs> ok, so it sounds like an index is not the way to go then?
[15:47:14] <cheeser> an index is essential if you want answers before the sun dies.
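
Putting cheeser's and saml's suggestions together as a sketch (database name invented, collection name taken from the shell example above): omit the hashtags field when it would be empty, index it sparsely, and query with $exists:

    from pymongo import MongoClient, ASCENDING

    tweets = MongoClient()["mydb"]["tweets"]

    # A sparse index only contains documents that actually have a hashtags field,
    # so it covers roughly the 40 million tweets of interest, not all 400 million.
    tweets.create_index([("hashtags", ASCENDING)], sparse=True)

    # $exists: true can be answered from the sparse index; the hint forces it
    # if the planner doesn't pick it on its own.
    with_tags = tweets.find({"hashtags": {"$exists": True}}).hint(
        [("hashtags", ASCENDING)]
    )

On newer servers a partial index would be the more direct tool, but sparse plus $exists matches the 2.6-era advice in the log.
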
[16:29:55] <mango_> I've got some bson files I'm trying to restore to a config server
[16:30:47] <mango_> but I'm receiving an error saying it can't create databases on a --configsvr instance
[16:30:53] <mango_> what is this about?
[16:31:42] <Derick> mango_: config servers are *only* for sharding information
[16:33:08] <mango_> @Derick that's right, but I need to restore that data back to the config server
[16:33:15] <mango_> is the config server the right place?
[16:34:28] <mango_> I thought the configsvr is the right place, because it references the configdb as its datapath
[16:53:20] <adamcom> mango_: if you are going to write to a config server, even with mongorestore, you should do it through the mongos
[16:54:10] <adamcom> when it restores to the config database, mongos knows that it lives on the config servers, and anything written via the mongos to the config DB will be written to all 3
[17:08:29] <mango_> @adamcom - ok, I'll run the backup through mongos
[17:12:20] <mango_> now the shards can't be found
[17:12:25] <mango_> I'll try adding the shards.
[17:12:31] <mango_> via mongos
[19:42:00] <mango_> When restoring a shard do you restore directly to the shard or via mongos?
[19:42:08] <mango_> (i'm using mongorestore)
[20:16:13] <bsmartt`> I would greatly appreciate the opinion of an experienced designer on this schema/normalization design question. http://pastie.org/9537203
[20:27:59] <rfk> wondering about gridfs - has anyone written code to store the exact native filesystem representation of files in gridfs?
[20:28:28] <rfk> e.g. all the metadata that would be required to restore the file to its exact original state, like permissions etc
[20:48:02] <djMax> I have a replica set that I thought had a couple capped collections. Turns out not. disk is full, can't do anything. How do I wipe it without like re-imaging or reconfiguring the replica set?
[20:49:02] <cheeser> is dropping the collection acceptable?
[20:49:06] <djMax> yes
[20:49:16] <djMax> I was about to rm MyCollection* on both members.
[20:49:34] <cheeser> dropping should return the space back to the OS, iirc
[20:49:46] <djMax> drop doesn't work in the shell either
[20:49:51] <djMax> but maybe rm will?
[20:51:44] <cheeser> what does the shell say?
[20:52:44] <djMax> rm worked
[20:52:59] <djMax> drop tells me the same "I can't get a write lock because the disk is full" stuff
[20:52:59] <Zelest> cheeser, fox.. is the word you're looking for. ;)
[20:54:57] <cheeser> djMax: ah
[20:55:03] <cheeser> Zelest: :D
[20:55:08] <djMax> now to convert these suckers to capped
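
The conversion djMax mentions can be done with the convertToCapped command; a sketch with an invented cap size, keeping in mind that the command needs enough free disk to rewrite the collection and discards documents beyond the cap:

    from pymongo import MongoClient

    db = MongoClient()["mydb"]

    # size is the cap in bytes.
    db.command("convertToCapped", "MyCollection", size=512 * 1024 * 1024)
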
[22:24:23] <autoburger> so do I need mysql and mongo or am I good with mongo for my web app?
[22:28:39] <autoburger> whats the downside?
[22:34:22] <autoburger> :)