[00:08:36] <dexterity> I use flask and mongoengine and every time I use model_form(Task) I get AttributeError: 'dict' object has no attribute 'iteritems'
[00:09:15] <dexterity> I use it like described here: http://docs.mongodb.org/ecosystem/tutorial/write-a-tumblelog-application-with-flask-mongoengine/
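Editor's note: the `AttributeError: 'dict' object has no attribute 'iteritems'` above is the classic Python 2 vs. 3 symptom — `dict.iteritems()` was removed in Python 3, so library code written for Python 2 (as older flask-mongoengine/WTForms versions were) breaks when run on Python 3. A minimal illustration:

```python
# dict.iteritems() existed only in Python 2; on Python 3 it is gone
# and .items() is the replacement.
d = {"title": "Task", "done": False}

print(hasattr(d, "iteritems"))   # False on Python 3

for key, value in d.items():     # Python 3 spelling of d.iteritems()
    print(key, value)
```

The fix at the time was to run such code under Python 2, or upgrade to a library version ported to Python 3.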
[01:26:22] <dccc> hi folks, i'd like a second opinion on my data structure. i'm building a property management application. i have 'buildings' and within, 'units' as an array of subdocuments. the 'listings' documents duplicate data from buildings and units to facilitate search. this doesn't seem very efficient to me. perhaps sql is better for this kind of data.
[10:45:02] <ed_saturn> hello, my mongo query on a replicated sharded cluster is slow if it goes through mongos (the router); if the query goes to a replSet instance directly, everything is fine. any clue?
[10:46:06] <Alan> ... well I joined to ask a question, but answered it myself
[10:47:55] <Alan> (I was trying to figure out how I'm supposed to go from a full mongodb URI to the database it refers to - answer is MongoClient(uri).get_default_database())
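Editor's note: `MongoClient(uri).get_default_database()` simply returns the database named in the URI's path component. A rough stdlib-only sketch of that extraction (the helper name here is made up, not part of pymongo):

```python
from urllib.parse import urlparse

def default_database_name(uri):
    """Return the database name from a mongodb:// URI, or None if absent."""
    path = urlparse(uri).path          # e.g. "/mydb" from mongodb://host:27017/mydb
    name = path.lstrip("/")
    return name or None

print(default_database_name("mongodb://localhost:27017/mydb"))  # mydb
```

With the real client, `get_default_database()` raises if the URI names no database, which is why checking the URI up front can be useful.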
[12:30:44] <Rocketuncle> can u please look at this issue
[12:31:33] <Rocketuncle> @joannac r u from mongodb support team?
[12:35:19] <dragoonis_> I'm trying to back up a collection before I do some processing on it. I'm using db.collection.copyTo() but it's doing an eval underneath and LOCKING everything, so all reads are locked too. How can I back up a collection and have it not lock the mongod connection ?
[12:41:21] <dragoonis_> hey Derick I'm in PHP land right now, using the extension - I'm doing Mongo::execute("function(){ db.$from.copyTo('$to') }", array('nolock' => true));
[12:45:23] <Derick> dragoonis_: I think you'll just have to loop it yourself. Write a script that does a find() without query, and write each entry to the new collection
[13:21:55] <cozby> mongo documentation talks about snapping EBS often (on AWS of course), but snapping an EBS requires that disk be in a static state, and no writes are taking place
[13:22:37] <cozby> so I'm guessing that one must remove a replica instance from a replica set, then snapshot the ebs volume, then re-attach
[13:30:28] <cozby> here's what I'm trying to do (let me know if it's silly) - have my replica set in an autoscaling group, so if one instance goes down a new one comes up. this new instance's startup will pull the mongo files from a mongodump I take daily and store on S3. the new instance will copy the dump data and will be able to sync with the replica set again.
[13:31:09] <cozby> right now I'm configuring the dumping and re-importing piece
[13:31:41] <cozby> if a new instance is launched the data will be too stale, hence me copying the daily data dump at boot
[13:31:52] <cheeser> you still run the risk of the sync being too far behind.
[13:35:25] <cozby> cheeser: hmm I reckon it's 24 hours
[13:35:36] <cozby> so perhaps I bump my daily snapshots to every 8 hours
[13:35:53] <cozby> (am I being too paranoid here?)
[13:36:51] <joannac> if you're on EBS, why don't you take EBS snapshots?
[14:57:24] <Derick> so you can use the old date in the update's query then?
[14:57:32] <saml> the loop doesn't take too long to run.. if i refactor the script so that there's only one loop
[14:57:48] <Derick> for id in ids { collection.update({_id: id, date: old_date}, {'$set': {count: count[id], date: new_date}}) }
[14:58:00] <saml> but actually, it's talking to omniture. get the counts. do the loop. talk to a different service. do a different loop.. so the whole script takes 10 minutes to run
[14:58:13] <saml> and during 10 minutes, the app's query returns empty
[14:59:02] <saml> Derick, not sure what difference {_id: id, date: old_date} makes
[15:00:06] <Derick> but by checking for the old_date, you can run it multiple times without problems
[15:00:18] <saml> it's not really mongodb. i just need to refactor the script so that it collects counts from various services. and do the update in one loop
[15:00:21] <Derick> (that one single update query really)
[15:01:10] <saml> coll.find({date: new_date}) this is bad (app's query)
[15:02:01] <saml> let's say script updated one document. new_date is updated. so the app tries to find({date: new_date}), which returns single document only
[15:03:21] <saml> probably app should do coll.find(date < now and date > (now - 24hr))
[15:04:02] <saml> instead of finding the latest "date" in coll and doing coll.find(date == latest_date_found)
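Editor's note: the two ideas from this exchange can be simulated without a server. Matching on both `_id` and the old date (Derick's guarded update) makes the script safe to re-run, and saml's range query avoids the window where only some documents carry the new date. All names below are illustrative:

```python
import datetime

def guarded_update(docs, counts, old_date, new_date):
    """In-memory analogue of
    collection.update({_id: id, date: old_date}, {'$set': {...}}):
    only documents still carrying old_date are touched, so re-running
    the script is harmless."""
    for doc in docs:
        if doc["_id"] in counts and doc["date"] == old_date:
            doc["count"] = counts[doc["_id"]]
            doc["date"] = new_date

def fresh(docs, now, max_age=datetime.timedelta(hours=24)):
    """App-side analogue of coll.find(date < now and date > now - 24h),
    instead of querying for date == latest_date_found."""
    return [d for d in docs if now - max_age < d["date"] <= now]
```

The range query returns every document updated within the last day, so a half-finished update run no longer makes the app's query come back nearly empty.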
[15:36:58] <Shakyj> Hey, I am thinking on storing google ranking data in MongoDB, just trying to think of my structure, access will be mainly keyword then domain based so I came up with this structure https://gist.github.com/anonymous/60ed1fef3184b5134226 but would that allow me to sort the rankings by date? If so how?
[18:24:58] <ForSpareParts> Is there a way to retrieve only part of a Mongo document? I anticipate specific pieces of my documents getting huge, but it’s important I be able to get their top-level attributes without pulling down loads of data every time.
[18:25:11] <ForSpareParts> I’m using Mongoose, and I’m a total noob. Sorry if this is a stupid question.
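Editor's note: yes — MongoDB queries accept a projection, so you can fetch only the top-level attributes you need; in Mongoose that's the second argument to `find()`/`findOne()` or the `.select()` method. A tiny in-memory illustration of what a field projection does (the helper is made up for illustration):

```python
def project(doc, fields):
    """Keep only the requested top-level fields, like a MongoDB
    projection such as find({}, {"title": 1, "owner": 1})."""
    return {k: v for k, v in doc.items() if k in fields or k == "_id"}

doc = {"_id": 1, "title": "big doc", "owner": "me",
       "payload": ["lots", "of", "data"]}
print(project(doc, {"title", "owner"}))
# {'_id': 1, 'title': 'big doc', 'owner': 'me'}
```

The large `payload` field never crosses the wire; `_id` is returned by default unless explicitly excluded.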
[20:09:00] <cheeser> i was thinking you'd use somedoc but if that works for you...
[20:09:18] <NaN> cheeser: how do I tell $pull the position on the array?
[20:11:14] <cheeser> you have the document you're wanting to remove, just pass that with $pull
[20:45:25] <diphtherial> i'm not sure if this is a commonly-asked question, but just to verify: there's no guarantee of atomicity when updating multiple documents, right?
[20:47:01] <diphtherial> i was reading a bit about tokumx, which apparently offers transactions; what's the opinion about it here?
[20:47:18] <diphtherial> (the transactions unfortunately aren't going to work for us, since we're using mongodb with shards)
[20:59:17] <kali> diphtherial: i'm messing around with an alternative approach, if you want to have a look and comment... https://github.com/kali/extropy
[21:11:56] <diphtherial> kali: interesting! i'll read it in more depth in a moment
[21:17:28] <cozby> I'm trying to access the webconsole via curl
[21:17:34] <cozby> it works when I access it using localhost
[21:17:41] <cozby> but when I use the IP it says -- not allowed
[21:17:51] <cozby> looking at the headers there's some sort of basic auth enabled
[21:21:40] <ForSpareParts> How do you represent an arbitrary dictionary in a Mongoose schema? Something where the keys don’t have any special meaning, and they’re just used as an index to a model.
[21:22:15] <ForSpareParts> Like this: http://hastebin.com/kohapivige.json
[21:23:28] <ForSpareParts> In schema terms, there’s nothing special about the names ‘somewidget’ and ‘otherwidget’, they’re just how widgets are identified. But I don’t see a structure like that in Mongoose, so I’m assuming I should be doing something else — maybe making the name/id part of the widget somehow, and representing “widgets” as a list of objects...?
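Editor's note: ForSpareParts's guess is the usual answer for Mongoose of this era — move the key into the subdocument as a `name` field and store a list, so every widget fits one fixed schema. A small sketch of the transformation (field names are made up):

```python
def dict_to_subdocs(widgets):
    """Turn {'somewidget': {...}, 'otherwidget': {...}} into
    [{'name': 'somewidget', ...}, ...] so each widget matches a
    fixed schema with a 'name' field."""
    return [{"name": name, **attrs} for name, attrs in widgets.items()]

def find_widget(subdocs, name):
    """Look a widget back up by its name field."""
    return next((w for w in subdocs if w["name"] == name), None)
```

Server-side, this shape also lets you query individual widgets by name (`{"widgets.name": "somewidget"}`), which an arbitrary-key object does not.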
[22:57:40] <Jadenn> hiya. so i'm having an issue with sorting, i need to retrieve the newest records, which is fine, but it's not letting me sort again so they are in the right order.
[23:05:52] <NaN> cheeser: still couldn't do $pull with index. ex: { foo: [{},{},{}] } < I would like to remove the 3rd {} something like $pull: { foo.3: true } ...any clue?
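Editor's note: there is no positional `$pull` in MongoDB. The common workaround is two updates: `$unset` the element by index (which leaves a `null` in the slot) and then `$pull` the `null`. Simulated in memory here — the real thing is two `update` calls against the server:

```python
def pull_by_index(doc, field, index):
    """Mimic: update({$unset: {"foo.2": 1}}) followed by
    update({$pull: {foo: None}})."""
    arr = doc[field]
    if 0 <= index < len(arr):
        arr[index] = None                               # $unset leaves null in place
    doc[field] = [x for x in arr if x is not None]      # $pull: {field: null}

d = {"foo": [{"a": 1}, {"b": 2}, {"c": 3}]}
pull_by_index(d, "foo", 2)
print(d)  # {'foo': [{'a': 1}, {'b': 2}]}
```

The caveat on a live server is that another writer could insert between the two updates, so the `null` sweep may also remove genuinely-null elements.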
[23:06:33] <Jadenn> i guess i just had to apply a skip. works for me :P
[23:24:45] <b1nd3r> Hello. If my docs have boxes recorded inside a field, how can I search in mongo which boxes (docs) a point belongs to?
[23:27:18] <NaN> what's boxes recorded inside a field?
[23:31:06] <b1nd3r> NaN: a box, with bottom-left coordinates and upper-right coordinates inside my "bounds" field and type "Box". they represent a box on a map. how can I take a point and search my database for which boxes this point is inside of?
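Editor's note: the underlying test is simple — a point lies inside a box when it sits between the bottom-left and upper-right corners on both axes. A sketch with an assumed document shape:

```python
def boxes_containing(docs, point):
    """Return docs whose 'bounds' box [[x1, y1], [x2, y2]]
    (bottom-left, upper-right) contains the point."""
    px, py = point
    hits = []
    for doc in docs:
        (x1, y1), (x2, y2) = doc["bounds"]
        if x1 <= px <= x2 and y1 <= py <= y2:
            hits.append(doc)
    return hits
```

Note the direction of the query matters server-side: `$geoWithin` with `$box` finds indexed *points* inside a given box, which is the reverse of this question. To find stored *boxes* containing a given point, the usual approach is to store the boxes as GeoJSON Polygons with a 2dsphere index and query with `$geoIntersects` on the point.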