PMXBOT Log file Viewer


#mongodb logs for Thursday the 4th of October, 2012

[00:54:05] <Frozenlock> I just installed mongodb on a new ubuntu machine using apt-get. It seems that mongodb runs at startup; how can I prevent that?
[00:54:55] <aster1sk_> update-rc.d mongodb remove
[00:54:57] <aster1sk_> I believe is the syntax, otherwise `man update-rc.d`
[00:56:07] <Frozenlock> thanks, I'll try that!
[00:57:08] <aster1sk_> You may need a -f remove in there, otherwise deb* will not be happy.
[01:02:17] <Frozenlock> Weird, I still can't run mongo. (port already bound stuff) Another mongodb on the same LAN is irrelevant, right?
[01:04:24] <aster1sk_> Same LAN doesn't matter.
[01:04:35] <aster1sk_> What is the error you are receiving.
[01:07:17] <Frozenlock> ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:27017
[01:08:17] <aster1sk_> ps aux | grep mongo
[01:08:38] <aster1sk_> I'd bet there is an existing instance of mongo running on that host.
[01:08:49] <aster1sk_> If so try `/etc/init.d/mongodb stop`
[01:09:31] <aster1sk_> Run that PS command again, if it's still running issue a kill -9 $PID (where $PID is the Process ID)
[01:09:39] <aster1sk_> Also don't kill -9 with production data.
[01:09:50] <Frozenlock> mongodb 1045 0.6 0.3 131476 31764 ? Ssl 20:56 0:04 /usr/bin/mongod --config /etc/mongodb.conf There it is.
[01:10:01] <aster1sk_> Yeah it's still running.
[01:10:13] <aster1sk_> Which is why you can't start another instance, the socket is already bound.
[01:12:11] <Frozenlock> stop: Rejected send message, 1 matched rules; type="method_call", sender=":1.64" (uid=1000 pid=2999 comm="stop mongodb ") interface="com.ubuntu.Upstart0_6.Job" member="Stop" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
[01:12:29] <Frozenlock> But I don't mind restarting. Any clue where the autostart command might still be?
[01:12:44] <aster1sk_> `which mongodb`
[01:12:56] <aster1sk_> You used the 10gen apt repository to install mongodb?
[01:13:05] <Frozenlock> yes
[01:13:24] <Frozenlock> Hopefully it was the right way to do it..
[01:13:58] <aster1sk_> dpkg -l | grep mongo
[01:14:07] <aster1sk_> If there are many lines, please use pastie.org
[01:15:32] <Frozenlock> ii mongodb-10gen 2.2.0 An object/document-oriented database
[01:15:44] <aster1sk_> Well you have the correct one running.
[01:15:52] <aster1sk_> Is this running with production data?
[01:16:02] <aster1sk_> Or are you just setting it up to / learn / play ?
[01:16:02] <Frozenlock> eh... I'll try another reboot.
[01:16:27] <aster1sk_> So you're the impatient, brute-force kind like me then eh?
[01:16:31] <aster1sk_> ;P
[01:16:46] <Frozenlock> Well I copied the db from another computer. I WILL be production data soon :)
[01:17:08] <Frozenlock> *it
[01:17:53] <aster1sk_> Oh... OK - I'd suggest then making sure this rig is stable...
[01:18:06] <aster1sk_> Have you already placed the prod data in the mongo dbpath?
[01:18:29] <aster1sk_> Because that could be your issue too.
[01:18:58] <Frozenlock> Yes and no, I have a custom shell script where I start mongo and give it the dbpath as an argument.
[01:19:12] <Frozenlock> So I don't use the 'official' path.
[01:19:33] <aster1sk_> Ahh, well why use the init scripts then? Seems you've mangled this a touch (pls don't take offensively)
[01:19:50] <aster1sk_> Yeah, instead consider editing /etc/mongo*.conf
[01:20:32] <Frozenlock> No offense taken, I am but a newbie in this :P
[01:20:58] <Frozenlock> Sure I'll try that.
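For reference, the change aster1sk_ suggests above (editing /etc/mongo*.conf instead of passing a custom dbpath to a hand-rolled script) would look roughly like this in a 2.2-era /etc/mongodb.conf; the paths here are placeholders, not the user's actual values:

```
# /etc/mongodb.conf -- sample settings for the 10gen package (placeholder paths)
dbpath=/var/lib/mongodb    # point this at the custom data directory instead
logpath=/var/log/mongodb/mongodb.log
logappend=true
```

With the path in the config file, the packaged init script and a manual `mongod --config /etc/mongodb.conf` both start the same instance, which avoids the custom-script-plus-init conflict discussed below.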
[01:21:07] <aster1sk_> I'm no veteran but I have good *nix skills.
[01:21:27] <aster1sk_> You are dev / ops / cto?
[01:22:03] <Frozenlock> A poor mechanical engineer who's lost his way :P
[01:22:36] <aster1sk_> excellent, glad to hear that. Yeah I'd say for stabilitititity / maint use the paths specified by mongo.
[01:23:05] <aster1sk_> Custom script + init makes for ... undesirable results.
[01:30:47] <aster1sk_> LOL no prob.
[01:31:29] <Frozenlock> I'm still doing it
[01:31:46] <aster1sk_> Sweeet Frozenlock, LMK how it goes.
[01:45:57] <Frozenlock> Aaaaaand now mongo won't start. "ERROR: dbpath (/data/db/) does not exist" This is NOT the path I've given it in /etc/mongodb.conf.
[01:46:48] <aster1sk_> Hmm what's ps aux | grep mongo
[01:46:54] <aster1sk_> If there's a -f flag, shit's broke.
[01:47:09] <aster1sk_> Meaning a -f that doesn't point to /etc/mongo*
[01:47:53] <Frozenlock> 1000 2607 1.2 0.3 573136 27656 ? Sl 21:39 0:04 gedit /etc/mongodb.conf
[01:47:53] <Frozenlock> 1000 2714 0.0 0.0 13584 888 pts/0 S+ 21:45 0:00 grep --color=auto mongo
[01:48:11] <aster1sk_> Problem the first : gedit ;P
[01:48:35] <aster1sk_> J/K -- mongo is not pointed to your etc file.
[01:48:56] <Frozenlock> Yeah, like I said fresh install, haven't installed emacs yet.
[01:49:33] <aster1sk_> It's the GUI that scares me, but no matter.
[01:49:49] <aster1sk_> Check your /etc/init.d/mongodb file
[01:49:52] <aster1sk_> See where that points.
[01:49:59] <aster1sk_> It could also live under /etc/default/mongodb
[02:12:07] <Frozenlock> Could it be that tilde (~) doesn't work in the dbpath?
[02:25:30] <Frozenlock> Well I haven't found what was wrong... but at least now mongodb doesn't start on its own, which was my original goal :)
[02:25:52] <Frozenlock> So thank you very much for your assistance!
[02:26:18] <aster1sk_> Happy to help haha
[02:55:27] <aster1sk_> I suppose you all were watching the debate.
[03:10:56] <Skillachie> Anyone awake
[03:11:04] <aster1sk_> PONG
[03:11:18] <Skillachie> aster1sk_: haha ok kool
[03:12:01] <aster1sk_> But it's time to pizza. BBL
[03:13:55] <Skillachie> aster1sk_: oh man
[03:14:30] <Skillachie> I have a quick question about map reduce. I ran this same job before and it ran successfully
[03:14:31] <Skillachie> http://pastie.org/4905970
[03:14:54] <Skillachie> But now for some reason i get NaN for one of the aggregation totalFiles
[03:15:22] <aster1sk_> Have you considered the new Aggregation Framework in 2.1?
[03:15:29] <aster1sk_> Or are you jailed to M/R?
[03:15:55] <aster1sk_> http://docs.mongodb.org/manual/applications/aggregation/
[03:15:58] <aster1sk_> BBL Pizza
[03:16:08] <Skillachie> aster1sk_: looked at it but heard there would be a few limitations. So decided to just use mapreduce from the get-go
[03:16:15] <Skillachie> aster1sk_: no prob
[03:17:44] <Skillachie> if i replace tempFiles += values[i].num_files; with tempFiles += 1 here it works. Why is it not taking the value of num_files in the emit ?
[03:25:24] <aster1sk_> Are you using replication sets?
[03:25:34] <aster1sk_> Are you using read preference.
[03:25:54] <aster1sk_> If so drop M/R and use Aggregation Framework.
[03:26:06] <aster1sk_> M/R is a last ditch effort to get the things you want.
[03:27:58] <Skillachie> i am not using replica sets yet
[03:28:17] <aster1sk_> Well going to eat pizza and watch tv for a bit.
[03:28:34] <Skillachie> aster1sk_: no pizza
[03:29:45] <Skillachie> But i probably should, was working so wonderfully earlier on
[03:46:20] <aster1sk_> Skillachie: now it's time for bed, yo.
[03:46:31] <aster1sk_> Use M/R, I don't give a shit.
[03:46:42] <aster1sk_> But remember I suggested against it.
[03:46:55] <aster1sk_> Laterzzzz LOL~~~`~ <4
[03:47:51] <Skillachie> aster1sk_: I got it figured out. Seems i just made a mistake with the variable; the emit and the reduce need to use the exact same variable name.
[03:47:57] <Skillachie> aster1sk_: :P
[03:48:14] <aster1sk_> K goooood fur yoooo g'nite.
[03:48:29] <Skillachie> aster1sk_: problem solved. thanks for the pizza! Laterzz
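Skillachie's bug can be reproduced without a server: if the objects passed to emit() use one field name and the reduce function sums a different one, each addition is `number + undefined` and the total becomes NaN. A minimal sketch in plain JavaScript (the field names are illustrative, not taken from the pastie):

```javascript
// Simplified stand-in for a MongoDB reduce function: sum num_files per key.
function reduce(key, values) {
  var totalFiles = 0;
  for (var i = 0; i < values.length; i++) {
    totalFiles += values[i].num_files; // must match the field name used in emit()
  }
  return { num_files: totalFiles };
}

// emit() produced objects with the matching field name -> correct sum
var good = reduce("host1", [{ num_files: 2 }, { num_files: 3 }]);

// emit() produced objects under a *different* field name -> undefined sums to NaN
var bad = reduce("host1", [{ files: 2 }, { files: 3 }]);

console.log(good.num_files, bad.num_files); // 5 NaN
```

This is also why `tempFiles += 1` "worked": the literal 1 never depends on the mismatched field.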
[04:23:42] <elux> hey guys..
[04:23:56] <elux> i just upgraded mongodb but i cant seem to start it now.. in the logs i see: exception in initAndListen std::exception: locale::facet::_S_create_c_locale name not valid, terminating
[04:24:40] <elux> i upgraded from 2.0 -> 2.2
[04:25:14] <elux> hrmm maybe im wrong.. it might have been 1.8
[04:25:15] <elux> :S
[04:25:35] <elux> naw.. was 2.0.1
[04:26:35] <elux> btw.. https://gist.github.com/3831470
[04:31:36] <elux> i think this is my fault
[04:31:37] <elux> :S
[04:49:34] <elux> yea.. my bad
[04:49:35] <elux> sry
[06:14:42] <arussel> what libs can I use when writing functions ? (I'm looking for an easy way to compute an md5 or some other kind of hash)
[06:25:18] <arussel> answering my own question, hex_md5 is default js
[08:42:09] <leehambley> how does one get the space back from a database, after deleting it
[08:47:58] <NodeX> repairDatabase
[08:48:45] <NodeX> you shouldn't do it on a live system as it eats resources
[09:42:02] <dEPy> hi, I have question :)
[09:42:26] <dEPy> I have a chat server with chat channels, and the messages sent to a channel are logged in mongodb.
[09:42:46] <dEPy> Should I put the messages in a format where I specify the channel as a message attribute
[09:42:54] <dEPy> or should I make a collection for each channel?
[09:43:24] <dEPy> keep in mind that those collections would never be deleted (i need the messages to resolve disputes with clients)
[09:45:06] <dEPy> I just wanna know if there's a limit to how many collections you can have. Cause there's gonna be tens of thousands of channels. It's 1 on 1 chat and for each 1 on 1 chat 1 channel is created for the duration of that chat
[09:52:38] <christkv> dEPy: I would avoid using a collection per chat as it will make it harder for you to shard later if you need to scale up. there is also a hard limit of about 22K collections per db unless you change the --nssize parameter on mongod to be bigger than 16MB, as all the namespaces for the collections are stored in the namespace file.
[09:53:54] <dEPy> Hm. Yea. That's no good then.
[09:54:01] <dEPy> Thanks.
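christkv's advice sketched as plain objects: one messages collection with the channel stored as a field, so a channel is a query value rather than a namespace. The field and collection names here are illustrative, not from dEPy's actual schema:

```javascript
// One collection, channel stored as a field on each message document.
var message = {
  channel: 'chat-8231',   // id of the 1-on-1 chat this message belongs to
  from: 'client-42',
  text: 'hello',
  sentAt: new Date()
};

// A channel's history is then an ordinary query, e.g. in the shell:
//   db.messages.find({ channel: 'chat-8231' }).sort({ sentAt: 1 })
// and { channel: 1, sentAt: 1 } is a natural compound index (and shard-key candidate).
var historyFilter = { channel: 'chat-8231' };

console.log(historyFilter.channel === message.channel); // true
```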
[09:55:15] <dEPy> Hm. Second things is, I'll probably have huge amount of reads and writes.
[09:55:35] <Derick> dEPy: can you not archive chats that are over to a different collection?
[09:56:02] <dEPy> Writes happen for every msg sent, reads happen when a client reconnects, then the chat history is selected from mongo and sent to them.
[09:56:35] <dEPy> Hm. I could move documents that belong to finished chat to different collection probably.
[09:57:04] <dEPy> So everytime a chat finishes I would just select every message related to that chat and move them to another collection?
[09:57:37] <Derick> f.e.
[09:57:43] <Derick> but that might not be neccesary
[09:58:37] <dEPy> hm
[09:59:15] <dEPy> another thing I could do, is optimize mongo for writes, and put the chat history in redis and read from there, and once the chat is finished remove from redis
[10:00:03] <Derick> this sounds like premature optimisation :-)
[10:00:06] <dEPy> cause the logs from mongo will be read only in case of client disputes, which i think will be rare and even then, WE will read the logs. That means one db read every few days maybe.
[10:00:43] <dEPy> It sounds this way because it probably is. :) I'm just an optimization junkie :D
[10:01:23] <dEPy> But you're right. I won't complicate things for now. I can easily add redis caching if we run into performance problems.
[10:01:44] <dEPy> Thanks :)
[10:02:10] <Derick> as long as you keep it in mind... that's probably the most important thing
[10:59:06] <coopsh> how can the $showDiskLoc query options be used with the C++ driver?
[11:16:13] <coopsh> got it: Query query = BSONObjBuilder().append("$query", BSONObj()).append("$showDiskLoc", 1).obj();
[11:32:32] <rh1n0> Whats the best filesystem for using with mongo replica sets? (ubuntu 12 os). Im guessing xfs?
[11:34:28] <christkv> ext4 or xfs
[11:35:14] <christkv> http://www.mongodb.org/display/DOCS/Production+Notes
[11:45:59] <brauliobo> hello all, i'm using mongo in a rails app and mongodump/mongorestore for backups. i have multiple collections referencing one another with ids, but when i use mongorestore ids change and so references are lost. how can I preserve them?
[11:46:52] <quazimodo> long story short, is mongodb with rails easy or hard
[11:48:29] <brauliobo> quazimodo: http://railscasts.com/episodes/194-mongodb-and-mongomapper
[11:55:17] <quazimodo> sure, already have that
[11:55:27] <quazimodo> but my question is a bit more open and perhaps in depth
[11:55:33] <quazimodo> in the long run
[11:55:38] <quazimodo> is it gonna be easy or hard
[11:55:53] <quazimodo> and i know thats subjective, etc
[11:59:18] <brauliobo> the question is more on how you need sql for db
[11:59:36] <brauliobo> relational db thinking is totally different
[11:59:51] <brauliobo> http://www.mongodb.org/display/DOCS/MongoDB+Data+Modeling+and+Rails
[12:08:52] <NodeX> brauliobo : ids are not lost in a restore
[12:09:32] <NodeX> it would be pretty pointless if they were !!
[12:11:00] <kali> brauliobo: there are hundreds of blog posts that will give you more insight than a two sentence irc answer...
[12:54:41] <quazimodo> brauliobo: hrm, well I'm liking mongo
[12:54:48] <quazimodo> im not sure how fast/slow it is
[12:54:59] <quazimodo> im thinking its very nice for a multi-tenant environment
[13:28:34] <grondilu> /leave
[13:34:48] <rh1n0> I have set up a replica set with 2 nodes. Everything seems to be working but i am curious how to configure connections from our application. Should i be using a loadbalancer or shared ip so if one box fails the application can still reach mongo? thanks
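rh1n0's question goes unanswered in this log, so for the record: no load balancer is needed. Replica-set-aware drivers accept a seed list of members and discover the current primary themselves, failing over automatically on election. The standard connection string looks like the following (hostnames, port, db name, and set name are placeholders):

```
mongodb://mongo1.example.com:27017,mongo2.example.com:27017/mydb?replicaSet=rs0
```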
[13:38:49] <ankakusu> I'm using node.js and I want to store image files to mongodb database.
[13:38:54] <ankakusu> How can I do this?
[14:09:40] <ankakusu> what is the best way to insert a file into mongodb?
[14:09:53] <NodeX> gridfs
[14:18:33] <Gargoyle> ankakusu: What do you mean a file?
[14:33:09] <skrammy> hey all
[14:33:40] <skrammy> is it possible to name a map reduce job so i can identify it in the rest interface/console
[14:34:08] <skrammy> right now i have long running ones that sometimes are still running when the cron job starts a new one. the solution is for me to figure out how to do incremental map reduce but i want to figure this out before i implement that. any ideas?
[14:51:16] <ankakusu> Gargoyle, it can be a image, audio or video in my case.
[14:51:36] <Gargoyle> ankakusu: How big?
[14:52:53] <ankakusu> actually, if I can manage to insert just one single image file, I will try to insert bigger files.
[14:53:01] <Gargoyle> We are currently just storing images, flash and audio files in a collection using the binary data type. Under the understanding that we are limited by the 16mb doc limit on mongo 2.0
[14:53:37] <christkv> Gargoyle: yeah you want to use gridfs
[14:53:50] <christkv> also as it will chunk up the file
[14:53:56] <Gargoyle> christkv: No I dont!
[14:54:21] <skrammy> just repeating in case anyone didnt see: is it possible to name a map reduce job so i can identify it in the rest interface/console
[14:54:22] <Gargoyle> Not much point unless you are talking getting close to the 16mb limit
[14:54:34] <christkv> there is till a point
[14:54:36] <christkv> still
[14:54:38] <ankakusu> ok.
[14:54:42] <christkv> if you put it in a single doc
[14:54:52] <christkv> you might in worst case read ~16MB per file
[14:55:07] <christkv> even if they are only 3-4 MB you need to materialize the whole file into memory
[14:55:23] <christkv> the gridfs chunks are 256K by default
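The chunking christkv describes is easy to quantify: with the default 256 KB chunk size, GridFS splits a file into ceil(size / 256 KB) chunk documents, so a reader can stream chunk by chunk instead of materializing the whole file. A small sketch (the file sizes are illustrative):

```javascript
// Number of GridFS chunk documents for a file, given the default 256 KB chunks.
var DEFAULT_CHUNK_SIZE = 256 * 1024; // 262144 bytes

function gridfsChunkCount(fileSizeBytes, chunkSize) {
  chunkSize = chunkSize || DEFAULT_CHUNK_SIZE;
  return Math.ceil(fileSizeBytes / chunkSize);
}

console.log(gridfsChunkCount(4000000));          // a ~4 MB mp3 -> 16 chunks
console.log(gridfsChunkCount(32 * 1024 * 1024)); // a 32 MB blob -> 128 chunks
```

The 32 MB case is also the answer to the "how do you store a 32 mb blob?" question later in the log: GridFS sidesteps the 16 MB document limit precisely because no single chunk document comes near it.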
[14:55:46] <ankakusu> Gargoyle, how can I insert a binary data?
[14:56:18] <Gargoyle> ankakusu: Like I said, for PHP its a MongoBinData() object. not sure how other drivers do it
[14:56:36] <christkv> most drivers have a binary type
[14:56:52] <Gargoyle> christkv: And to load that into memory and say do some ImageMagick manipulation, i still need to load the whole thing
[14:57:16] <christkv> yes for that case it's true
[14:57:57] <christkv> but if you want to stream a mp3 or video over http then the chunks are better
[14:58:44] <Gargoyle> christkv: Assuming the driver / application being used supports streaming.
[14:59:47] <christkv> well if your http server buffers the whole result before outputting it then you have a problem :)
[15:00:07] <christkv> but for most modern ones you can write as you go
[15:00:36] <christkv> but I'm no php expert my domain is mostly node.js/java/ruby and erlang
[15:01:40] <quazimodo> wow
[15:01:44] <quazimodo> a document can only be 16mb eh
[15:02:02] <quazimodo> so i cant have a user object and just have *everything* in there
[15:02:16] <christkv> well depends on how much "everything" is
[15:02:19] <christkv> 16MB is a lot
[15:02:23] <Gargoyle> quazimodo: 16MB is a LOT of data
[15:02:37] <quazimodo> yeah i thought maybe i can make a single object per tenant
[15:02:46] <quazimodo> and have *all* the stuff for each in there
[15:02:50] <quazimodo> obviously not
[15:03:33] <christkv> you also want to avoid a "uncontrolled growing document" scenario
[15:03:53] <christkv> as mongo db will have to move the document around in memory as it grows out of its allocated space
[15:04:04] <quazimodo> hrm
[15:04:15] <quazimodo> i need to do some practical tutorials
[15:04:53] <quazimodo> like an account (business name) with user(s) and each has his/her own data, the business has categories of 'things' that the users can create/edit/share, etc
[15:05:39] <quazimodo> its not a work project so it'd be a great time to learn
[15:05:55] <christkv> if the things are very varied mongodb makes sense
[15:06:35] <christkv> I usually say one of the benefits of mongodb comes from the fact that documents are closer to objects
[15:06:37] <quazimodo> sure, i get the feeling for extremely rigid, non changing, pure data then relational is probably better
[15:07:10] <christkv> relational is better if the problem domain is strictly relational
[15:07:18] <christkv> most webapps seem to fall somewhat off that
[15:07:39] <quazimodo> right
[15:07:52] <quazimodo> i've just always used mysql or postgres
[15:08:01] <quazimodo> never really thought about that
[15:08:26] <christkv> well there's an opportunity to learn :)
[15:08:34] <quazimodo> sure
[15:10:02] <christkv> there are some good schema design presentations here http://www.10gen.com/presentations
[15:24:16] <quazimodo> how do you store a 32 mb blob?
[15:42:06] <ankakusu> Gargoyle, thanks for your help. By the way, do you also keep the exif files?
[15:42:23] <Gargoyle> exif files?
[15:45:54] <ankakusu> yes.
[15:46:00] <ankakusu> sorry.
[15:46:16] <ankakusu> I want to use the exif data in an image.
[15:46:34] <ankakusu> not exif files but, there is an exif data in an image.
[15:46:39] <Gargoyle> ankakusu: I thought the exif data was embedded into the image.
[15:46:44] <ankakusu> yes.
[15:46:59] <ankakusu> I store the image as a binary data
[15:47:05] <Gargoyle> then yes. I just read the file directly into the binary data store.
[15:47:36] <ankakusu> then I can access to the exif data.
[15:47:38] <ankakusu> ok.
[16:11:51] <Bartzy> Hi
[16:12:18] <Bartzy> Should I format a partition with any special options when using ext4 for /var/lib/mongodb ?
[16:24:49] <ElGrotto> Bartzy: I always recommend mounting filesystems with noatime... a write for every read op? no tyvm ;)
[16:28:26] <Bartzy> Yeah, got that :)
[16:28:31] <Bartzy> But for the mkfs.ext4 part ?
[16:29:25] <bhosie> I'm RTM on sharding. i've also watched the case study videos for Foursquare and Craigslist. The general consensus seems to be that you should plan ahead by choosing a shard key at the start rather than waiting until the day you may need to shard. If i'm using ec2, how big would a collection need to get before i would benefit from sharding, and if i never get that big is there a drawback to setting a shard key anyway?
[16:38:49] <linsys1> bhosie: no harm, but no need
[16:42:15] <bhosie> linsys1: ok cool
[17:42:19] <alaska> if my find clause references two keys for concurrency issues (let's say: db.collection.find({ name: 'foo', revision: 1}) and the 'name' key is already uniquely indexed, do I need to bother indexing the 'revision' key as well?
[17:43:03] <alaska> Basically, I'm combining this with upsert to ensure that I don't update anything in an entry that's been touched while I was looking at it client-side.
[17:44:06] <alaska> Should I use a compound index? Would it make a difference?
[18:05:21] <alaska> Looks like it's to the mailing list.
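The pattern alaska describes is optimistic concurrency control: match on both the key and the revision you last read, and bump the revision on every write; if another writer got there first, the match fails and nothing is updated. A server-free sketch of the logic (the field names follow alaska's example; the function is hypothetical):

```javascript
// Optimistic-concurrency update: apply changes only if the stored revision
// still matches the revision the client read earlier.
function updateIfUnchanged(doc, expectedRevision, changes) {
  if (doc.revision !== expectedRevision) {
    return false; // someone else modified it; caller should re-read and retry
  }
  Object.keys(changes).forEach(function (k) { doc[k] = changes[k]; });
  doc.revision += 1; // Mongo equivalent: { $set: changes, $inc: { revision: 1 } }
  return true;
}

var entry = { name: 'foo', revision: 1, body: 'old' };
var ok    = updateIfUnchanged(entry, 1, { body: 'new' });  // true, revision -> 2
var stale = updateIfUnchanged(entry, 1, { body: 'late' }); // false, no change

console.log(ok, stale, entry.revision); // true false 2
```

On the index question: since `name` is already uniquely indexed, the query narrows to at most one document before the revision check, so a compound `{ name: 1, revision: 1 }` index mostly saves one document fetch on the failed-match case; it is unlikely to make a measurable difference.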
[19:08:33] <arussel> is there a way to do a groupBy ? (e.g. get me all the documents grouped by date)
[19:21:48] <timeturner> does the Date type mean that mongo just shows the timestamp in a "Date" format but it actually just has the unix timestamp number? like 23509702384 or whatever
[19:22:04] <timeturner> otherwise it looks like a big waste of space
[19:35:56] <_m> timeturner: http://stackoverflow.com/questions/5168904/group-by-dates-in-mongodb
[19:36:14] <_m> Err… wrong person
[19:36:19] <_m> arussel: http://stackoverflow.com/questions/5168904/group-by-dates-in-mongodb
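In the 2.2 aggregation framework, arussel's groupBy maps to a $group stage; what it computes can be simulated in plain JavaScript (the document shape is illustrative):

```javascript
// What an aggregation like
//   db.events.aggregate({ $group: { _id: '$date', count: { $sum: 1 } } })
// computes, simulated over plain objects.
function groupByDate(docs) {
  var groups = {};
  docs.forEach(function (d) {
    groups[d.date] = (groups[d.date] || 0) + 1;
  });
  return groups;
}

var docs = [
  { date: '2012-10-04' },
  { date: '2012-10-04' },
  { date: '2012-10-05' }
];

console.log(groupByDate(docs)); // { '2012-10-04': 2, '2012-10-05': 1 }
```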
[19:37:18] <crudson> timeturner: http://www.mongodb.org/display/DOCS/Dates
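To answer timeturner directly: BSON's Date type is stored as a 64-bit integer of milliseconds since the Unix epoch; the human-readable form is only how the shell displays it, so there is no space overhead versus storing the number yourself. In JavaScript terms:

```javascript
// A JS Date wraps a millisecond Unix timestamp; BSON's Date stores that same int64.
var d = new Date('2012-10-04T00:00:00Z'); // the day of this log

console.log(d.getTime());                           // 1349308800000 (ms since epoch)
console.log(new Date(1349308800000).toISOString()); // round-trips to the same instant
```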
[20:04:12] <Almindor> how can you define and then call a function through the node/js driver? Also how to get the function result?
[20:04:54] <timeturner> through a driver
[20:05:00] <timeturner> like node-mongodb-native
[20:05:09] <timeturner> or mongoose, which is a fully functional ORM
[20:05:33] <timeturner> you mean receive it in the mongo console?
[20:05:35] <Almindor> timeturner: yes but I don't see a "define function" or "call function" method anywhere in the docs
[20:06:05] <timeturner> not sure
[20:06:19] <Almindor> well I have a rather complicated map/reduce wrapped in a mongodb/function, I'd like to just call that function from within node.js and just get the results
[20:06:26] <timeturner> you would for sure have to drop down to the native driver for that
[20:06:41] <timeturner> ah
[20:06:43] <Almindor> yeah mongoose is too much about tables
[20:07:44] <Almindor> I just don't see a way to actually tell the driver "here's a function, put it on the db" or "call this function, give me the result object"
[20:10:33] <jmar777> Almindor: like a stored procedure?
[20:12:51] <jmar777> Almindor: there's system.js - http://www.mongodb.org/display/DOCS/Server-side+Code+Execution#Server-sideCodeExecution-Storingfunctionsserverside
[20:13:33] <jmar777> Almindor: although that won't let you store down a full m/r operation. you can just encapsulate some of the logic in functions there. just note that mongodb advises against using this approach
[20:24:25] <Almindor> jmar777: no I don't need it stored, I'd just like to be able to define and call a function (mongoside) from a driver not a console
[20:27:36] <Almindor> found this benbuckman.net/tech/12/06/understanding-mapreduce-mongodb-nodejs-php-and-drupal so I guess I'm going to use the mongoose mapreduce directly
[20:27:51] <Almindor> well it's basically just mongodbdriver mapreduce
[21:04:56] <bagvendt> Hi guys, Would this be the right place to get some help for map reduce and/or distinct?
[21:38:24] <bagvendt> Any mapReduce wizards that want to help me out?
[21:40:29] <LouisT> if i'm using db.pastes.find().length();, how could i sort by unique by say 'User'?
[23:43:35] <chetane> Anyone familiar with building queries in Mongoose? I'm starting to learn and have a few questions
[23:44:35] <chetane> Specifically, in the following example: Person.find({ occupation: /host/ }) .where('name.last').equals('Ghost') -- Setting "occupation" in find(), is it equivalent to doing where("occupation").equals("host")?
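chetane's question also goes unanswered here, so for the record: yes, Mongoose merges chained where()/equals() calls into the same filter document that find() accepts, so both forms send the same query. The merged filter can be built by hand without Mongoose or a server (the shape below mirrors chetane's example):

```javascript
// The filter document either form of the Mongoose query would send.
var viaFind = { occupation: /host/, 'name.last': 'Ghost' };

// Simulating what .where('name.last').equals('Ghost') merges into
// find({ occupation: /host/ }):
var viaWhere = { occupation: /host/ };
viaWhere['name.last'] = 'Ghost';

console.log(JSON.stringify(Object.keys(viaFind)) ===
            JSON.stringify(Object.keys(viaWhere))); // true
```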
[23:47:07] <quazimodo> is there something like sqlite for mongo
[23:47:10] <quazimodo> for dev purposes