PMXBOT Log file Viewer


#mongodb logs for Thursday the 7th of November, 2013

[00:52:58] <lastk> hi guys, I have this object: {'title': "Hello", 'posts': [{'message':'messages'}]}. how can I do an update to add a new object to posts?
[00:53:44] <joannac> $addToSet / $push
[00:54:39] <lastk> thanks
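
(A minimal sketch of the $push approach joannac suggests, in mongo shell syntax; the collection name `articles` is illustrative:)

    db.articles.update(
        { title: "Hello" },
        { $push: { posts: { message: "another message" } } }
    )

$addToSet takes the same shape but skips the append when an identical element is already in the array.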
[01:51:20] <azathoth99> how do I backup mongo?
[02:12:01] <neeky> azathoth99, you create a replica if you want to backup online, or you dump the database if you can afford to lock it while the dump happens
[02:12:34] <cheeser> or use MongoDB's BRS
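
(For reference, the "dump the database" route neeky describes is the mongodump tool; a typical invocation, with the host and output path as placeholder values:)

    mongodump --host localhost:27017 --db mydb --out /backups/mydb-dump

The dump is restored with mongorestore pointed at the same directory.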
[02:14:14] <neeky> anyone want to talk schema validation design patterns? I'm trying to decide how to design a schema that will allow user/customer defined validations
[02:15:00] <neeky> I'm currently using mongoose, but use about 1% of what mongoose does, and have just about decided I don't need it
[03:10:26] <tpayne> is it a bad idea to use a record's _id value as a file name that's associated with it?
[03:25:45] <tpayne> how do I get the new _id associated with a saved document?
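
(On that second question: in the mongo shell the ObjectId is generated client-side and written onto the object you pass in, so it can be read straight after the insert; a small sketch, with `files` as an illustrative collection name:)

    var doc = { name: "example" };
    db.files.insert(doc);   // the shell assigns doc._id before sending
    print(doc._id);         // the new ObjectId, usable as a file name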
[03:43:21] <baegle> I am writing a script to dedupe some data. I have identified 2 records and I would like to strip their IDs and then create a checksum of their string representation. Do I have functions like stringify() and md5() to help me here?
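
(The mongo shell has helpers close to what baegle asks for: tojson() for a string representation and the built-in hex_md5(); a sketch of the checksum idea, with the collection name assumed:)

    function checksum(doc) {
        delete doc._id;              // strip the ID before hashing
        return hex_md5(tojson(doc)); // md5 of the string representation
    }
    var a = db.records.findOne({ /* first candidate */ });
    var b = db.records.findOne({ /* second candidate */ });
    print(checksum(a) === checksum(b));

One caveat: tojson() preserves the stored field order, so two logically equal documents only hash identically if their fields are in the same order.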
[06:31:34] <k_sze[work]> Out of curiosity: what happens if there is a netsplit among the servers of a replica set?
[06:32:29] <k_sze[work]> e.g. 4 replica set members: A, B, C, and D. A and B can still talk with each other, C and D can still talk with each other.
[06:32:40] <k_sze[work]> but A can't talk with C
[06:34:24] <k_sze[work]> Or heck, a 3-member rs, A, B, and C
[06:34:47] <k_sze[work]> C is not down, it just can't reach either A or B.
[06:44:54] <svector> why do I get Thu Nov 7 09:40:22.451 ReferenceError: crudb is not defined?
[06:45:08] <svector> show dbs gives:
[06:45:11] <svector> crudb 0.203125GB
[06:45:11] <svector> local 0.078125GB
[06:45:11] <svector> mydb 0.203125GB
[06:46:10] <ron> you get the error when you do what?
[06:49:50] <svector> ron, I have a collection by the name dudes and I'm trying to do crudb.dudes.find()
[06:50:00] <svector> ron, I'm just practicing :)
[06:50:31] <svector> looks like I'm allowed to do `use crudb` followed by db.dudes.find()
[06:50:40] <ron> yes.
[06:50:46] <ron> that's the syntax.
[06:56:10] <svector> ron, okay thanks
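
(For anyone hitting the same ReferenceError: database names are not shell variables; you either switch with the `use` helper or take a handle explicitly:)

    use crudb
    db.dudes.find()

    // or, without switching the current database:
    db.getSiblingDB("crudb").dudes.find()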
[08:53:06] <durre> I'm going to write a newsfeed using mongodb. right now I've decided on "fan out on write": I write the same object to all users that should see that event, then have an index on userId. how will that scale if there are millions and millions of documents? doing it this way, updating the document is not a good idea, I guess?
[09:03:27] <bcave> morning all
[09:11:51] <NodeX> durre : can you explain that a little deeper?
[09:15:00] <durre> NodeX: sure. so we have the concept of followers & followees. potentially a user can have _lots_ of followers, let's say 100,000. when a user performs certain actions the followers should see that. it's important that the read of the newsfeed is quick. writing isn't as important, but shouldn't be terribly slow either. so how should I model it?
[09:16:54] <NodeX> your scale will top out at 16mb if you store followers in the user document
[09:17:26] <durre> I'm storing the followers in a separate collection, not in the user document
[09:19:22] <NodeX> your bottleneck will be that join then
[09:20:27] <n008> ran iotop; I have 5 mongod processes running
[09:21:14] <durre> that's more of a "write bottleneck", which is hidden from the user. I basically only use the followers to determine who to write the newsfeed events to
[09:22:51] <NodeX> it's a read bottleneck when you need to read the followers the user has, in order to fan out to them
[09:25:08] <durre> but the user never reads the followers. it's a background job that reads them and then writes the newsfeed events
[09:29:40] <NodeX> it still gets read even if it is a background job
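
(A rough shell sketch of the fan-out-on-write flow durre describes, assuming follower edges stored as { userId, followerId } documents and a newsfeed collection indexed on userId; all names here are illustrative:)

    // background job: copy an event to every follower of the actor
    function fanOut(actorId, eventDoc) {
        db.followers.find({ userId: actorId }).forEach(function(f) {
            db.newsfeed.insert({
                userId: f.followerId,   // the reader this copy belongs to
                event: eventDoc,
                createdAt: new Date()
            });
        });
    }
    // reads stay a single indexed query:
    // db.newsfeed.find({ userId: me }).sort({ createdAt: -1 }).limit(20)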
[09:29:53] <k_sze[work]> I'm confused. So the official `mongo` command line client can automatically "detect" and "resolve" the replica set by giving it just one of the servers?
[09:30:40] <k_sze[work]> whereas client libraries like PyMongo need to be specifically told the addresses of all members in the replica set?
[09:31:07] <kali> k_sze[work]: it's just good practice to give all of them when possible, in case the one you're passing is down
[09:31:34] <kali> k_sze[work]: but when connecting, the client retrieves the full list of nodes
[09:33:05] <kali> k_sze[work]: to answer your previous question, an even number of replicas in a set is a bad idea, because it allows 50%/50% splits, in which case nobody gets a majority
[09:33:17] <kali> k_sze[work]: 3-node sets are not subject to the problem
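
(Worked out: electing a primary requires a strict majority of voting members, floor(N/2) + 1. In the 4-member 2/2 split above, each half has 2 votes but needs 3, so neither side can elect a primary and the set becomes read-only. With 3 members and one isolated node, the remaining pair still has 2 >= 2 votes and keeps a primary, which is why odd-sized sets avoid the problem.)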
[09:34:44] <k_sze[work]> So what's the proper way to specify multiple addresses with the `mongo` command line client?
[09:34:57] <k_sze[work]> the man page's examples are all single host only.
[09:35:28] <kali> ha... good one, no idea :)
[09:37:00] <kali> just... comma separated?
[09:37:34] <k_sze[work]> And how does it tell the difference between a simple replica set and a cluster? (Somehow the pymongo library makes a distinction, with separate classes for MongoClient and MongoReplicaSetClient)
[09:38:03] <k_sze[work]> (but MongoClient is used for both single host and clusters, which is doubly weird.)
[09:39:14] <k_sze[work]> In terms of hierarchy, one would think it goes from single-host -> replica set -> cluster
[09:45:06] <k_sze[work]> Even the latest doc at http://docs.mongodb.org/manual/reference/program/mongo/ doesn't say anything about replica sets or clusters.
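
(To close the loop: the shell accepts a replica-set seed list in --host as setName/host1,host2,..., for example, with placeholder names:)

    mongo --host rs0/alpha.example.com:27017,beta.example.com:27017 mydb

Supplying the set name lets the shell check it is talking to the right set and discover any members missing from the seed list.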
[11:57:37] <liquid-silence> hi all
[11:57:49] <liquid-silence> what's the easiest way to do base64 to a gridfs file?
[11:58:45] <liquid-silence> the client is posting base64 in the body and I want to save that file into gridfs
[11:59:06] <Derick> base64 decode it...
[12:00:09] <liquid-silence> and save the file locally to disk?
[12:00:18] <liquid-silence> then do a fs.open?
[12:01:31] <Derick> why? not sure which language you're using, but the PHP one allows you to read from a variable to store it into gridfs
[12:01:46] <liquid-silence> javascript
[12:01:52] <Derick> so node?
[12:01:58] <Derick> not sure how to do it with that
[12:02:09] <liquid-silence> yeah node
[12:02:13] <liquid-silence> currently I do
[12:02:29] <liquid-silence> http://pastebin.com/besxe1Ju
[12:02:33] <liquid-silence> to save the file
[12:02:41] <liquid-silence> if it's an actual file
[12:05:30] <Derick> all I can see in the docs is about gridfsstore, not just gridfs
[12:06:22] <liquid-silence> using mongolian
[12:06:23] <liquid-silence> for node
[12:09:50] <liquid-silence> Derick I think I will have to save it on local disk first
[12:09:53] <liquid-silence> and then add it to gridfs
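
(The local-disk detour should not be necessary: decoding the base64 into a Buffer and writing that into GridFS works directly. A sketch using the official node driver's GridStore rather than mongolian, whose file API differs; `db` is assumed to be an open driver connection and the file name is illustrative:)

    var GridStore = require('mongodb').GridStore;
    var buf = new Buffer(req.body.data, 'base64');  // decode the POSTed base64
    var gs = new GridStore(db, 'upload.bin', 'w');
    gs.open(function(err, store) {
        store.write(buf, function(err, store) {
            store.close(function(err, result) {
                // result holds the stored file's metadata
            });
        });
    });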
[13:17:25] <bmcgee> hey, if you have an object { foo: 1, bar: 2 } being used as the _id for a document, is it possible to sort by _id.foo for example?
[13:22:07] <_NodeX> I don't see why not
[13:22:17] <_NodeX> you should try it to be sure
[13:34:23] <bmcgee> tried it, didn't work
[15:01:29] <liquid-silence> how can I update an item in an array
[15:01:32] <liquid-silence> for instance
[15:02:04] <liquid-silence> db.items.find({ "item.subarray._id": "id"}, {$set: { subarrayitem.deleted: true }}
[15:02:08] <liquid-silence> is this possible?
[15:09:28] <Nodex> look at the positional operator
[15:09:33] <Nodex> "$"
[15:10:02] <liquid-silence> yeah
[15:10:04] <liquid-silence> sorted that
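
(For the record, the positional-operator version of the earlier attempt looks roughly like this; note it is an update(), not a find(), and $ resolves to the index of the first array element the query matched:)

    db.items.update(
        { "subarray._id": "id" },
        { $set: { "subarray.$.deleted": true } }
    )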
[15:10:10] <liquid-silence> btw Nodex sorted my other problem
[15:10:24] <liquid-silence> the docs were so out of date, I had to trawl through the source
[15:14:28] <liquid-silence> Nodex is there a way I can exclude items that have been flagged as deleted when I do db.items.find({ _id: "myid"})
[15:14:42] <liquid-silence> it's currently bringing back the deleted subarray items and the non-deleted ones
[15:19:24] <liquid-silence> or delete the specific item in the array
[15:23:02] <liquid-silence> can one exclude items when fetching?
[15:25:30] <liquid-silence> hmmm
[15:25:34] <liquid-silence> mstearn got a second?
[15:28:08] <Nodex> there are some docs on removing blank items from an array
[15:28:20] <liquid-silence> well
[15:28:24] <liquid-silence> actually
[15:28:29] <liquid-silence> the item will not be blank
[15:28:56] <liquid-silence> say I have document { name: "test", array: [ { deleted: true }, { deleted: false } ] }
[15:29:10] <liquid-silence> if I fetch the document I would want to exclude the deleted items in the array
[15:29:46] <Lipathor> hi, can I ask about the nodejs driver here?
[15:30:23] <liquid-silence> Nodex can you point me to the docs
[15:34:49] <Lipathor> does anybody know why I sometimes get empty subdocuments with the node driver?
[15:37:46] <Nodex> liquid-silence : the docs won't help you with it, the docs refer to removing (deleting) items
[15:38:05] <Nodex> Lipathor : please pastebin an example
[15:38:31] <liquid-silence> Nodex I can do this: db.collection("products").findOne({_id: id, deleted: false}, { books: { $elemMatch: { deleted: false } } })
[15:38:42] <liquid-silence> but what it does is remove all the product fields from the output
[15:38:52] <liquid-silence> so I just see the product id and the book array
[15:39:01] <liquid-silence> so product name is missing from the output
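
(The fields vanish because a projection containing only books: { $elemMatch: ... } acts as an inclusion projection, so everything unlisted except _id is dropped; naming the other fields explicitly keeps them, with the caveat that $elemMatch returns only the first matching array element. In shell syntax, with the same illustrative names:)

    db.products.findOne(
        { _id: id, deleted: false },
        { name: 1, books: { $elemMatch: { deleted: false } } }
    )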
[15:39:28] <Lipathor> Nodex: http://pastebin.com/p6FfLtGY
[15:40:01] <Lipathor> Nodex: On the first request all is ok, but all subsequent ones have an empty pl field
[15:40:36] <liquid-silence> Nodex how can I persist the other fields
[15:40:52] <Nodex> are you sure the subsequent docs have the pl fields?
[15:41:02] <Nodex> liquid-silence : do it in your app
[15:41:20] <Lipathor> Nodex: well, as you can see it is the same item with the same _id ...
[15:41:22] <liquid-silence> huh?
[15:41:29] <liquid-silence> Nodex that makes no sense
[15:41:58] <liquid-silence> that is returning the product id and the array with "non deleted" books
[15:42:02] <liquid-silence> but not the product name
[15:42:24] <Nodex> liquid-silence : I don't really have a clue what you're trying to do to be honest
[15:42:36] <liquid-silence> ok so I soft delete "books"
[15:42:42] <liquid-silence> now when I select the product
[15:42:50] <liquid-silence> I don't want the deleted items to come through
[15:42:52] <Nodex> all of your "hitting the enter key" when you type puts me off reading it
[15:43:09] <liquid-silence> Nodex, I am really sorry
[15:43:31] <Nodex> Lipathor : please pastebin your piece of code , I can only assume your code is deleting the object
[15:44:11] <Lipathor> Nodex: it is not, there is no delete operations there...
[15:45:01] <Nodex> ok good luck ;)
[15:45:37] <liquid-silence> Nodex, so if I flag book[0] as deleted I do not want it to be in the result, when I select the product, makes sense?
[15:45:44] <Lipathor> Nodex: and when I restart the app the first request is ok again, and the next ones are empty
[15:46:06] <Nodex> Lipathor : without seeing your code I cannot help you sorry so again good luck to you ;)
[15:46:22] <Nodex> liquid-silence : why don't you just filter it in your app ?
[15:46:47] <liquid-silence> because it's an unnecessary payload to send over the wire.
[15:47:43] <Nodex> well mongodb cannot do what you want so you're going to have to
[15:48:56] <Nodex> this is the only chance you have at mongo returning it without the array elements http://docs.mongodb.org/manual/reference/operator/projection/positional/
[15:49:10] <Nodex> best to hope your data structure matches what it needs
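
(One more option worth noting: the aggregation framework can return all non-deleted array elements rather than just the first, by unwinding and regrouping; a sketch in shell syntax with the same illustrative names as above:)

    db.products.aggregate(
        { $match: { _id: id } },
        { $unwind: "$books" },
        { $match: { "books.deleted": false } },
        { $group: { _id: "$_id",
                    name: { $first: "$name" },
                    books: { $push: "$books" } } }
    )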
[15:55:13] <tavooca> hello friends
[15:55:40] <ncls> hello tavooca
[16:00:28] <tavooca> I'm doing the MongoDB M101 course, but on homework HW3 the Python blog.py gives me a 500 error. What am I doing wrong?
[16:11:48] <tavooca> Template 'blog_template' not found.
[16:24:49] <skython> Is anyone using MongoDB with Ember.js ?
[16:48:30] <shantanoo> hi, is it possible to limit the size of the db/collection? (in standalone and shard configuration)
[16:49:01] <shantanoo> was going through http://stackoverflow.com/questions/5054628/limit-mongodb-database-size
[16:49:19] <shantanoo> it suggests db.createCollection("mycoll", {size:100000})
[16:49:53] <shantanoo> is it ok to use this?
[16:57:32] <azathoth99> I am looking for that answer too. I mean, disk space wise, are you supposed to just give mongo lots and hope?
[16:57:37] <azathoth99> hope is not a strategy
[16:58:01] <cheeser> it's a way of life
[16:58:23] <cheeser> I don't think you can limit standalone mongods
[16:58:35] <cheeser> I think you can only limit shard sizes.
[16:59:26] <Derick> azathoth99: setting a size only works with capped collections
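
(Tying the answers together: the size option from that Stack Overflow snippet only takes effect on capped collections, which must be declared as such:)

    db.createCollection("mycoll", { capped: true, size: 100000 })  // size in bytes

A capped collection overwrites its oldest documents when full rather than erroring, so it caps disk use but is not a quota in the usual sense.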
[17:01:13] <tomasso> is there some way to access mongodb from an android lib, as if it were a web service?
[17:01:41] <tomasso> but with a db driver, the way jdbc is for relational dbs, but for mongo
[17:01:42] <cheeser> you can't just use the java driver?
[17:02:11] <tomasso> mm I don't know... it should... shouldn't it?
[17:02:24] <cheeser> should work? yes.
[17:02:34] <tomasso> thanks xD
[17:30:22] <mongonewbie2387> Hi guys, is there a way to do something like what I'm trying to do here: http://pastie.org/8463181 ? Basically there is an entry in my collection with the field "astring" that has the value "asdasd(blahasdasdf", and I want to replace the "(" with " ( "
[17:30:53] <mongonewbie2387> the only way I've seen to do this is to iterate over each document, get the value out into a variable, change it, then write the variable back into the entry
[17:31:00] <mongonewbie2387> I was trying not to do that because it's very inefficient
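
(For what it is worth, the iterate-and-update loop is the standard answer here, since the server has no in-place string replace; a sketch against an illustrative collection name:)

    db.mycoll.find({ astring: /\(/ }).forEach(function(doc) {
        db.mycoll.update(
            { _id: doc._id },
            { $set: { astring: doc.astring.replace(/\(/g, " ( ") } }
        );
    });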
[17:40:10] <tomasso> I have a collection with lat and lng values. I'm reading up on geospatial indexes and found out they use GeoJSON objects, like { type : "Point" , coordinates : [ <longitude> , <latitude> ] }. Is there some way to specify my index to use the existing lat and lng keys, and not the GeoJSON point? or do I need to convert the collection from lat and lng to a GeoJSON point?
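
(On tomasso's geo question: the index wants the coordinates in a single field, so separate top-level lat/lng keys do need a one-time conversion; a sketch that rewrites them into GeoJSON and indexes the result, with illustrative names:)

    db.places.find().forEach(function(doc) {
        db.places.update(
            { _id: doc._id },
            { $set: { loc: { type: "Point", coordinates: [doc.lng, doc.lat] } } }
        );
    });
    db.places.ensureIndex({ loc: "2dsphere" });

Note that GeoJSON puts longitude first.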
[17:58:49] <jpiche> I was hoping to be able to run mongodb on a Raspberry Pi, but ARM isn't supported (see SERVER-1811). It looks like the SCons file for V8 is a big part of what is limiting it, but vanilla V8 compiles just fine on ARM. Anyone have any insights?
[18:00:03] <Derick> jpiche: mongodb doesn't run on ARM because we don't support big endian platforms yet
[18:01:47] <jpiche> Derick: I like the "yet". Is that something that's scheduled to be implemented?
[18:02:07] <Derick> I don't think it's high priority, but you never know what the future holds :)
[18:07:30] <jpiche> Derick: really? is there a lot of little endian specific code? When I was looking through the code, it looked to me like the asm blocks might have been the big holdup
[18:07:58] <Derick> yes, but it's not tested anywhere, so you've no idea where bugs might lurk
[19:54:05] <ph88_> in SQL I have a long table and I need to do WHERE Color = green and then get the unique values from the other columns. Would MongoDB be fast at doing this, or is it better if I stay with SQL? (it's ok if I need to change the data structure)
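
(The closest MongoDB equivalent is distinct() with a query filter, one call per column; field and collection names below are illustrative:)

    db.mytable.distinct("size", { Color: "green" })  // unique sizes among green rows

Whether it is fast depends on an index on the filtered field, much as it would in SQL.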
[21:57:51] <roryhughes> My latest mongodb, node.js, angular.js creation http://imageswaps.tk/
[23:13:47] <foobar2k> hey guys
[23:13:58] <foobar2k> i have a hashed index that we're using as a shard key
[23:14:11] <foobar2k> how can I query for documents that are in a particular chunk
[23:14:35] <foobar2k> e.g., my chunk's bounds are :bounds=>[{"project_id"=>-1696054874687179387}, {"project_id"=>-1672730701517160081}]
[23:14:46] <foobar2k> i want to pull out some documents from that chunk
[23:48:49] <foobar2k> anyone know how to look up documents in a given chunk?