PMXBOT Log file Viewer


#mongodb logs for Monday the 25th of June, 2012

[03:58:40] <charlesos> hey all, is anyone familiar w/ mongoose for node / willing to help w/ a newb question?
[04:00:09] <charlesos> i want to search for a document by objectId which could be one of a few different types of models, is there a good way to do this?
[04:03:56] <rageous> Anybody around?
[07:26:12] <samurai2> hi there, how to make MongoDB map/reduce process to elect the new master node in java driver? thanks
[07:28:48] <horseT> Hi, I want to reduce the space used by a database on creation (0.2 GB), is that configurable?
[07:31:43] <[AD]Turbo> hola
[09:56:28] <NodeX> http://highscalability.com/blog/2012/6/11/monday-fun-seven-databases-in-song.html .... lolol
[10:01:26] <macarthy> newbie question: I have a query that returns records, but when I wrap it in the function I get nothing, any ideas?
[10:03:03] <guest08152121> where is your code?
[10:04:50] <NodeX> pastebin your function
[10:04:54] <NodeX> and or query
[10:06:37] <macarthy> command line
[10:06:45] <guest08152121> provide code and query
[10:08:34] <macarthy> http://pastie.org/4147557
[10:10:14] <macarthy> basically function f(id) { db.mydb.find( { "thing": id } ) };
[10:10:24] <macarthy> is that correct syntax?
[10:10:46] <guest08152121> heard of "return" statement?
[10:10:51] <guest08152121> heard of find_one()?
[10:10:53] <macarthy> tried that
[10:10:57] <NodeX> I have never tried it like that but I would think it would be "return db.find....
[10:11:15] <guest08152121> please learn the javascript basics
[10:11:19] <NodeX> the trouble is it returns a cursor on a find()
[10:11:23] <macarthy> I see
[10:11:37] <NodeX> did you try ... return db.find();
[10:11:51] <macarthy> OK, return does work, I must have entered it wrong at the command line
[10:11:57] <macarthy> thanks
[10:12:12] <NodeX> ;)
[10:12:26] <macarthy> and I've been doing JS since netscape 1,2 guest08152121 :-)
[10:12:35] <macarthy> thank
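The exchange above hinges on a JavaScript basic: a function body without an explicit `return` evaluates to `undefined`, so the shell prints nothing. A minimal sketch with a stubbed collection (the stub `db` object is invented here so the snippet runs outside the mongo shell):

```javascript
// Stub standing in for the shell's db.mydb; only find() is needed here.
const db = { mydb: { find: (query) => ({ query, isCursor: true }) } };

// No `return`: the function evaluates to undefined, so nothing prints.
function broken(id) { db.mydb.find({ thing: id }); }

// With `return`: the cursor comes back, and the real shell would iterate
// and print it at the prompt.
function fixed(id) { return db.mydb.find({ thing: id }); }

console.log(broken(1)); // undefined
console.log(fixed(1));  // the stub "cursor" object
```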
[10:31:06] <magmatt> I'd like to store binary data... how do I do it?
[10:31:14] <magmatt> I'm using txmongo
[10:36:52] <Tobsn> in a file for example
[10:38:13] <NodeX> magmatt : gridFS
[10:38:46] <magmatt> NodeX: even for small files? I found pymongo.binary.Binary, which solves my immediate problem.
[10:40:53] <NodeX> this is the mongo channel, not python, so I assumed you wanted to store files inside mongo
[10:41:02] <NodeX> the way that is achieved is using gridFS
[10:52:05] <phira> I'd always store files in it given the option.
[11:03:07] <NodeX> it's certainly a good way to store files for portability
[11:33:53] <e-dard> Any mongoengine users in here?
[11:34:24] <e-dard> If one of my classes calls mongoengine.connect() to one DB on one host, and then later on another class somewhere else calls mongoengine.connect() to another DB on another host, what happens when in another class I do doc = Collection(**attributes); doc.save() ??
[11:54:00] <__neilg_> @Derick: i fixed my problem from yesterday (if you can remember it). Just ifdown/ifup'ing the interface gave replication a kick in the butt, probably because all active connections were killed?
[11:54:20] <__neilg_> (if you don't - replication was slow during initial sync)
[13:09:02] <NodeX> +1 on the new docs guys
[13:09:36] <NodeX> much easier to navigate
[14:21:13] <Pilate> Are there limitations on the query filter object when doing runCommand mapreduce? it seems to return 0 records when using a regex, but works fine if i match exactly
[14:26:10] <Pilate> example: https://gist.github.com/f8a377d2a30dc029ca50
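One common cause of Pilate's symptom (a sketch only, not a diagnosis of the gist, which was not reproduced here): a regex condition in a query document must be an actual regex object (or a `$regex` clause), not a plain string, which gets compared literally. A toy matcher with invented docs and field names illustrates the difference:

```javascript
// Toy stand-in for the server's query matching (docs/fields invented).
function matches(doc, query) {
  return Object.entries(query).every(([field, cond]) =>
    cond instanceof RegExp ? cond.test(doc[field]) : doc[field] === cond);
}

const docs = [{ name: "alpha" }, { name: "beta" }];
const exact = docs.filter(d => matches(d, { name: "alpha" })); // 1 hit
const regex = docs.filter(d => matches(d, { name: /^a/ }));    // 1 hit
const asStr = docs.filter(d => matches(d, { name: "^a" }));    // 0 hits: string compared literally
```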
[14:54:56] <therealkoopa> Using mongoose, I want to find an object and all its embedded documents, but filtering the embedded documents at the same time, based on a boolean flag. Is this possible?
[15:09:13] <remonvv> Not at a database level, no.
[15:31:17] <locojay> hi anyone familiar with mongo-hadoop streaming?
[16:23:16] <queso> Is there a way to prevent mongo from using +2GBs when initiating a new instance? I am testing it on a server that has just under 2GB of free space and it keeps running out of space.
[16:28:02] <FerchoDB> Hi. Are you familiar with C# driver? Is it possible to do myCollection.EnsureIndex(new IndexKeysBuilder().Ascending("address.city")); // i.e use dot notation
[16:31:01] <linsys> queso: Yes, http://www.mongodb.org/display/DOCS/Excessive+Disk+Space
[16:32:31] <augustl> hi folks. Want to set up some integration tests against a HTTP API that uses mongodb, and reset all data between each test. In postgres I just dropped all the tables and created new ones for each test. I suppose for mongo I can just delete all the dbs?
[16:35:17] <queso> linsys: The only suggestion that page has given me (and I found that page before coming to this channel) is setting it to not preallocate, which I did in /etc/mongod.conf; then sudo rm -rf /var/lib/mongo/* ; sudo service mongod start, and it again filled up my disk. Help?
[16:38:00] <FerchoDB> Hi. Are you familiar with C# driver? How can I index a sub-document? I tried "myCollection.EnsureIndex(new IndexKeysBuilder().Ascending("address.city")); "but doesn't work
[16:40:09] <linsys> queso: probably want to add "noprealloc = true" and oplogSize = 0 and nojournal = true
[16:40:13] <linsys> that should get you started
[16:40:20] <queso> linsys: great, thank you
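Putting linsys's suggestions together, a sketch of the relevant `/etc/mongod.conf` lines (2.0.x-era INI-style config; these options trade durability and replication headroom for disk space, so they are for test boxes only, and some builds may reject an oplog size of 0):

```ini
# Space-saving options for a disk-constrained test instance -- not for production
noprealloc = true   # don't preallocate data files ahead of use
nojournal = true    # skip journal file preallocation
smallfiles = true   # use smaller data-file sizes
oplogSize = 0       # minimal oplog; only meaningful when running with a replica set
```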
[16:40:38] <ankakusu> I'm a newbie for mongodb. I want to run it, and when I write "mongo" at terminal window I get the following error:
[16:40:41] <ankakusu> Error: couldn't connect to server 127.0.0.1 shell/mongo.js:84
[16:41:03] <ankakusu> how can I solve this problem?
[16:41:20] <linsys> ankakusu: you need to actually start mongodb type "mongod" first
[16:41:21] <linsys> then mongo
[16:41:30] <modcure> :)
[16:41:57] <NodeX> ankakusu : start your mongo instance
[16:42:37] <ankakusu> linsys and NodeX, thanks for reply. I started mongo instance as you suggested (mongod)
[16:42:44] <ankakusu> after all I entered mongo
[16:42:51] <ankakusu> I'm still getting the same error
[16:43:14] <NodeX> netstat -pan | grep mongo
[16:44:15] <ankakusu> NodeX, I typed what you write
[16:44:23] <ankakusu> and nothing returned.
[16:44:32] <NodeX> then your mongod is not running
[16:44:55] <ankakusu> ok. I retyped mongod
[16:45:03] <NodeX> tail /var/log/mongodb/mongodb.log
[16:45:09] <NodeX> there will be an error in there
[16:45:14] <ankakusu> it says: exception in initAndListen: 10296 dbpath (/data/db/) does not exist, terminating
[16:45:16] <ankakusu> at terminal
[16:45:29] <NodeX> then there is your problem
[16:45:42] <NodeX> either create /data/db/ or change the path in /etc/mongodb.conf
[16:47:39] <ankakusu> NodeX, I checked /etc/mongodb.conf
[16:47:56] <ankakusu> and my data store location is: dbpath=/var/lib/mongodb
[16:48:07] <ankakusu> I go to that directory
[16:48:14] <NodeX> then start mongod with the config file path
[16:50:01] <DwarfSmasher> can you have multiple dbpaths in a config file?
[16:50:21] <NodeX> no
[16:52:58] <VibesTriton> I'm new to mongodb. I want to update an object with $set like this... Collection.update({propname1:unique_key}, {$set{propname2:value}}) What I can't figure out is how to modify propname2 when it is located several objects deep inside the returned object. How can this be done?
[16:54:30] <ankakusu> NodeX, I started mongodb as follows:
[16:54:31] <ankakusu> sudo mongod -f /etc/mongodb.conf
[16:55:20] <ankakusu> is this what you suggested I do? (then start mongod with the config file path)
[17:03:14] <dgottlieb> VibesTriton: if you want to modify say, {person: {name: "VibesTriton"}} you can do: {$set: {"person.name": "TritonVibes"}}
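dgottlieb's answer relies on `$set` dot notation reaching into embedded documents. A toy re-implementation of that path resolution, runnable outside MongoDB (the helper name `setByPath` and the sample data are invented; the server does this natively):

```javascript
// Emulates what {$set: {"person.name": value}} does to a stored document.
function setByPath(doc, path, value) {
  const keys = path.split(".");
  let cur = doc;
  for (const key of keys.slice(0, -1)) {
    // Walk (or create) each intermediate embedded document.
    if (typeof cur[key] !== "object" || cur[key] === null) cur[key] = {};
    cur = cur[key];
  }
  cur[keys[keys.length - 1]] = value; // set the leaf field
  return doc;
}

const doc = { person: { name: "VibesTriton", address: { city: "Oslo" } } };
setByPath(doc, "person.name", "TritonVibes");   // one level deep
setByPath(doc, "person.address.city", "Bergen"); // several levels deep
```

The same dot-notation string works however many objects deep the field sits, which answers VibesTriton's question directly.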
[17:26:45] <crodas> hi there, is it possible (perhaps a known bug) to get the *same* document on MongoCursor with an empty query?
[17:26:58] <crodas> it's unlikely I guess, perhaps it is a bug on my app
[17:30:30] <Derick> crodas: it's possible if you modify the documents while iterating
[17:31:03] <crodas> Derick, is it possible to avoid it?
[17:31:20] <crodas> Derick, tailable(false) perhaps?
[17:31:23] <Derick> yes, by using a "snapshot" cursor: http://php.net/manual/en/mongocursor.snapshot.php
[17:31:36] <Derick> tailable only works for capped collections
[17:32:10] <crodas> Derick, thanks :)
[17:54:11] <jenner> guys, how do I force a new secondary to sync from other secondaries instead of primary on a 2.0.6 replica set? initialSync?
[18:55:54] <dstorrs> oh Mongo channel, grant to me your wisdom. I've got a couple of stats interns who need access to our DB in order to do their job. They have zero Mongo experience, and I don't want to give them access to the prod DB directly. The DB is too large to make copyDatabase desirable, although I suppose I could if I absolutely had to. What's my best option?
[18:56:19] <Derick> a secondary?
[18:56:28] <Derick> but that'd be read-only
[18:57:33] <dstorrs> actually, that's probably perfect.
[18:58:16] <dstorrs> Does the secondary have to be a precise copy, or could it have a locally-defined writable database that can be a sandbox for them?
[18:58:40] <Derick> secondaries are by definition a full copy, I suppose
[18:59:14] <dstorrs> but secondaries run their own mongod, yes?
[18:59:20] <Derick> yes
[18:59:35] <dstorrs> so if I have a DB named 'cm_prod' on primary, it gets duped to secondary.
[18:59:54] <Derick> yes
[19:00:02] <dstorrs> interns connect to secondary, can read from cm_prod. can they create a DB 'my_sandbox' that exists only there and is writable?
[19:00:14] <Derick> no
[19:00:55] <dstorrs> ok, so secondary means "forbid any writes to this mongod instance". got it.
[19:01:16] <dstorrs> are there any tools comparable to MySQLWorkbench that would allow them to view the data remotely?
[19:01:21] <Derick> yes, it's a pure read-only copy of a primary
[19:01:38] <Derick> dstorrs: sure, http://www.mongodb.org/display/DOCS/Admin+UIs
[19:02:12] <dstorrs> ah, sweet. thanks. I've RTFM'd pretty much everything else in the docs, but never paid attention to the GUI sections because I'm a CLI guy myself
[19:02:23] <Derick> http://stackoverflow.com/questions/10646954/is-there-a-good-admin-ui-for-mongodb-on-node-js-like-phpmyadmin/10650495
[19:40:56] <linsys> Derick: Why do you care if the admin ui is written in Node
[19:41:00] <linsys> it doesn't really matter does it?
[19:41:23] <Derick> I don't care
[19:41:40] <Derick> node is still a server side tech, not a GUI
[19:41:47] <ron> Lack of care is the biggest problem in the world.
[19:50:35] <FerchoDB> I could do it through the C# driver but cannot do it on the mongo shell: How can I delete a index by name?
[19:50:59] <Derick> db.col.dropIndex( 'name' );
[19:51:05] <FerchoDB> I have tried db.emp.dropIndex({name: "thename" }) and lots of variantes
[19:51:09] <FerchoDB> ahhh, let see
[19:51:27] <FerchoDB> dah, I feel stupid
[19:51:31] <FerchoDB> THanks
[19:51:35] <Derick> np :)
[19:51:45] <guest08152121> rtfm would have helped
[19:51:58] <FerchoDB> I could find that case
[19:52:01] <FerchoDB> I mean
[19:52:08] <guest08152121> you mean you can't google
[19:52:09] <FerchoDB> I could NOT find that case
[19:52:11] <geoffeg> db.col.help() is nice :)
[19:52:22] <guest08152121> google "Mongodb drop index"
[19:52:25] <geoffeg> also, db.help().. they're very... helpful! :)
[19:52:25] <guest08152121> and click on the first hit
[19:52:29] <guest08152121> seems to be very hard
[19:53:16] <FerchoDB> no, the documentation is very diffuse
[19:53:23] <FerchoDB> as there are so many drivers
[19:54:07] <guest08152121> no documentation????
[19:54:22] <geoffeg> the most helpful thing is to be abrasive to people's questions on IRC
[19:54:27] <FerchoDB> I haven't found but yes, surely it is in some forum i didn't see
[19:54:49] <Derick> guest08152121: please drop it
[19:54:55] <guest08152121> 4 out of 5 #mongodb people are blind, the rest is one-eyed
[19:55:16] <guest08152121> good night
[19:56:02] <Derick> FerchoDB: please feel free to ask questions here; googling does help though :)
[19:58:51] <FerchoDB> Thanks derick. It's not that I haven't googled it, but sometimes I try to get info from official doc because it's true that as there are so many drivers, google is a mess. But now I saw that db.dropIndex.help() for this case, gives a lot of info
[19:59:40] <FerchoDB> I have looots of tabs with results, also looking for some info related to the C# driver, but now I'm getting along with it
[20:00:25] <guest08152121> blablablabla
[20:00:45] <Derick> guest08152121: i said drop it
[20:01:07] <guest08152121> #mongodb.drop()
[20:01:56] <FerchoDB> oh, don't worry about the troll
[20:03:24] <Derick> FerchoDB: we need to, otherwise the atmosphere goes to *** here
[20:03:44] <ron> MACYet anyone?
[20:03:51] <Derick> sounded like him
[20:03:55] <ron> :D
[20:04:02] <Derick> he still whines on irc too
[20:04:04] <ron> such a cheerful guy/
[20:04:09] <Derick> erm, twitter
[20:04:20] <ron> you follow him on twitter? ;)
[20:04:27] <Derick> no, he showed up in a reply
[20:04:53] <ron> ah. never really understood twitter.
[20:05:02] <ron> it seems only the hip people know how to use it.
[20:09:56] <augustl> from the docs: "The maximum size of a collection name is 128 characters (including the name of the db and indexes). It is probably best to keep it under 80/90 chars." why is that?
[20:12:10] <wereHamster> why is what
[20:12:23] <wereHamster> the limit? Or the suggestion to keep it under 90?
[20:12:30] <augustl> the suggestion
[20:13:51] <wereHamster> no idea
[20:14:17] <joaquin> Hi all
[20:14:22] <kali> augustl: the collection name is used to build other names (like index names, or temporary collections for map reduce). if it's already big, you may run into... issues :)
[20:15:13] <augustl> ah, I see
[20:15:33] <joaquin> Do you know how to change the type of a field? type 1 (Double) to type 16 (Int), I have tried this but it is not working.
[20:15:35] <joaquin> db.meta.find({"pk":{"$type":1}}).count()
[20:15:35] <joaquin> 11
[20:16:04] <joaquin> db.meta.find({"pk":{"$type":1}}).forEach(function(item){item.pk = new parseInt(item.pk); db.meta.save(item);});
[20:16:49] <augustl> my app will store "events". I'm thinking one database per event makes sense, there'll be no shared data between events. But that means having a gazillion databases after some time. Is that a problem?
[20:17:13] <kali> joaquin: it's because of javascript lack of type. there are wrapper to specify what number type you want... lemme find you the page
[20:18:24] <kali> joaquin: http://www.mongodb.org/display/DOCS/Overview+-+The+MongoDB+Interactive+Shell#Overview-TheMongoDBInteractiveShell-Numbers
[20:18:41] <kali> joaquin: wrap item.pk in a NumberLong instead of a parseInt
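A sketch of the conversion kali suggests. `NumberLong` is a mongo-shell wrapper that produces a BSON 64-bit integer ($type 18), whereas `parseInt` just yields another JavaScript double; the wrapper is stubbed here so the snippet runs outside the shell, and the in-memory array stands in for the collection:

```javascript
// Stub of the shell's NumberLong so this runs under plain Node.
function NumberLong(v) {
  if (!(this instanceof NumberLong)) return new NumberLong(v);
  this.value = Math.trunc(v); // real shell stores a BSON 64-bit integer
}

// Stand-in for db.meta.find({"pk": {"$type": 1}}).forEach(...)
const metaDocs = [{ pk: 11 }, { pk: 42.7 }];
metaDocs.forEach(item => {
  item.pk = NumberLong(item.pk);
  // real shell: db.meta.save(item);
});
```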
[20:20:05] <kali> augustl: each database will be allocated separately on the disk, and each database take at least 200MB on the disk with default settings
[20:20:52] <kali> augustl: then, there is the issue with the gazillion. you need to define how big a gazillion you imagine
[20:20:59] <augustl> kali: would you say it's better to use one database and have a "event_id" type thing in the "attendants" collection?
[20:22:16] <joaquin> kali: I did this: db.meta.find({"pk":{"$type":1}}).forEach(function(item){item.pk = new NumberLong(item.pk); db.meta.save(item);});
[20:22:37] <joaquin> kali: but it does not change the type : db.meta.find({"pk":{"$type":1}}).count() 11
[20:22:39] <kali> augustl: what about one collection for attendants, one collection for events ?
[20:23:17] <augustl> kali: you mean one collection of attendants per event? Or one collection for all attendants, and storing the ID of the event along with the attendant and filter on that?
[20:24:22] <augustl> one collection per event seems bad too, since there's a limit to the number of collections
[20:24:41] <kali> augustl: it would work, but i don't know your use case, and you'll be the one to live with it :)
[20:25:06] <augustl> kali: it = one events collection, one attendants collection, and "foreign keys"?
[20:25:34] <kali> augustl: it's the "obvious" answer, but maybe it's wrong... for archiving old events for instance
[20:25:44] <kali> joaquin: i'm trying it :)
[20:27:23] <augustl> kali: why is each database at least 200MB btw, and can that number be decreased?
[20:28:24] <Derick> a little, by using the "smallfiles" config option
[20:28:35] <Derick> mongodb preallocates to speed things up
[20:28:37] <augustl> anyway, seems like one database would work, and then shard on the "event_id" in the attendants collection in the future
[20:28:47] <augustl> Derick: ah, I see
[20:29:10] <kali> augustl: its 64MB, not 200MB, according to the doc
[20:29:25] <Derick> actually, it's 192MB
[20:29:36] <Derick> just the first file is 64, but if you insert one doc, it creates the 128mb file too
[20:29:54] <kali> Derick: ha, ok :)
[20:29:55] <augustl> http://shop.oreilly.com/product/0636920018391.do aww, "This product has been canceled."
[20:31:04] <augustl> btw, having references to other documents is not "bad" or anything? And it's as easy as just putting the ID of another document in some document?
[20:31:14] <kali> joaquin: http://privatepaste.com/4ef75372b5 works for me, can you cut and paste what you're doing?
[20:31:58] <kali> augustl: it's just that. mongodb will not do anything with the references, it's entirely your job to deal with them
[20:32:04] <augustl> (and indexing it if I'm using it for filtering)
[20:32:08] <augustl> kali: I see
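kali's point, as a sketch (collection contents and ids invented): a reference is just the related document's `_id` stored as an ordinary field, and the "join" is entirely application code, typically a second lookup:

```javascript
// In-memory stand-ins for an events and an attendants collection.
const events = [{ _id: "ev1", title: "JSConf" }];
const attendants = [{ _id: "at1", name: "alice", event_id: "ev1" }];

// MongoDB will not follow event_id for you: resolve it yourself.
function eventFor(attendant) {
  return events.find(e => e._id === attendant.event_id);
}
```

In a real deployment this second lookup is another `find()` against the events collection, and the `event_id` field should be indexed if you filter on it.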
[20:33:33] <kali> augustl: it might also be interesing to consider the other way around : storing the event_ids in the attendant collection
[20:34:19] <kali> augustl: the logic being... one event will have thousands of attendees, whereas an attendant will have about a few dozen events, tops
[20:34:25] <kali> augustl: so it might be more tractable
[20:36:09] <joaquin> kali: http://privatepaste.com/697f31922f
[20:36:16] <joaquin> it is the same code
[20:36:22] <augustl> kali: that's what I was trying to say but apparently failed to :) An attendant stores the ID of the event it belongs to
[20:36:30] <joaquin> I am using mongodb 2.0.6
[20:36:56] <kali> joaquin: can you show me the result of the find() instead of the count ?
[20:37:09] <kali> augustl: ok, sorry, multitasking :)
[20:38:50] <joaquin> kali: Oh, it seems to work, but the type is still 1
[20:38:51] <joaquin> http://privatepaste.com/581df7c3af
[20:40:37] <kali> joaquin: this is weird.
[20:40:54] <kali> joaquin: lemme try on a 2.0.6
[20:43:36] <kali> joaquin: for me the $type is 18, as expected... I can't reproduce what you see
[20:45:12] <joaquin> kali: thanks anyway :)
[20:55:10] <augustl> any particular reason for not using 2.1.1? We'll be in production a couple of months from now, perhaps 2.1.1 is the recommended production release then (ish)?
[20:55:33] <augustl> where production = incredibly low scale beta with few customers :)
[21:04:48] <wereHamster> augustl: does 2.1.1 have any features that 2.0.x doesn't?
[21:04:59] <wereHamster> (features that you need...)
[21:05:31] <augustl> wereHamster: not sure :)
[21:05:42] <augustl> fewer build dependencies, heh
[21:06:23] <dstorrs> augustl: 2.1.1 won't be the recommended production then. Odd minor numbers are always dev versions
[21:06:30] <dstorrs> if 2.1 got promoted, it would be 2.2
[21:06:52] <augustl> ah, I see
[21:07:09] <dstorrs> so, yes, don't use a dev version that will always be a dev version and has no features you're sure you need as your production version. :>
[21:10:01] <FerchoDB> me again. I'm reading about indexes in official Doc, and I have a doubt. Here: http://www.mongodb.org/display/DOCS/Indexes#Indexes-AdditionalNotes . It says:
[21:10:05] <FerchoDB> "Document indexes like these cannot be used to search for queries using their (embedded) parts, like "state" in this example. When you search you must use a prefix, or the whole document, of the embedded document as if they were stored as opaque blobs"
[21:10:33] <FerchoDB> When it says "you must use a prefix", does it mean you can search only on the FIRST field with a prefix?
[21:12:20] <FerchoDB> I understand that if you make a search by an embedded document, it will not return results if you provide only one of the fields
[21:12:36] <FerchoDB> unless you use $gt like in that example,
[21:12:57] <FerchoDB> but I'm not sure what it means with "you must use a prefix" in that case
[22:38:59] <E1ven> Can anyone give me a list of indexes I should be using on GridFS for best practices? I'm indexing filename, and db.fs.chunks.ensureIndex({files_id:1, n:1},{unique:true}), although I understand the latter is supposed to be taken care of by the driver. Are there any others that I should add for best performance?
[23:08:37] <murrdoc> quick question … when using hint() with count in php … does the hint() call go before the count() call or after
[23:15:16] <dstorrs> murrdoc: I don't believe it matters, but TIAS
[23:15:34] <dstorrs> hey all, how do I stop a currently-running map reduce?
[23:17:18] <murrdoc> tias ?
[23:17:28] <murrdoc> dstorrs: sorry, not sure what tias is
[23:17:40] <dstorrs> murrdoc: try it and see. :>
[23:17:59] <murrdoc> oh ahaha
[23:18:16] <murrdoc> from what i read or heard… the way mongo picks an index to apply
[23:18:23] <murrdoc> when more than one are applicable
[23:18:31] <murrdoc> it fires off a query with each index
[23:18:40] <murrdoc> and then uses the first one to return
[23:18:51] <murrdoc> so i was wondering
[23:20:34] <murrdoc> dstorrs: thnx … so before works
[23:20:52] <dstorrs> good to know
[23:50:34] <ferrouswheel> hello world. slightly weird situation. Trying to solve "Connection reset by peer" using pymongo.
[23:51:12] <ferrouswheel> It occurs after a brief network outage, and is fixed by reloading apache2 (where pymongo is being used by a Django project).
[23:52:53] <ferrouswheel> Obviously manually reloading apache is not optimal, and somehow pymongo gets stuck in a non-useful state. Any ideas on how I can implement something to avoid the manual reload? Anyone had similar experiences?
[23:56:10] <sirpengi> that's strange
[23:56:33] <sirpengi> from what I remember the driver kept a connection pool around and handled reconnecting for you
[23:58:14] <ferrouswheel> sirpengi, I wonder if all connections in the pool got reset and it just needs to exhaust failing on all of them?