PMXBOT Log file Viewer

#mongodb logs for Thursday the 25th of October, 2012

[02:05:34] <dungeonduke> hello, excuse me for noob question, how many documents i can store in one collection, is there any limit?
[02:06:24] <skot> No limit.
[02:09:30] <dungeonduke> thanks
[02:37:10] <newbie22> *: Is there a GUI interface that I can run on Windows to connect to my mongo database running on a linux server ??
[02:37:43] <IAD> newbie22: you can use rock-mongo (it's like phpmyadmin)
[02:57:58] <skot> you can take a look here for more: http://www.mongodb.org/display/DOCS/Admin+UIs
[03:13:04] <Dr{Wh0}> is it possible to do a bitwise logical find? I see updates etc but can't find examples of bit searches.
[04:43:04] <jedir0x> I have a java based web app that is using mongodb. on a specific developer's machine (osx) this app has a problem where a DBCursor returns true for "hasNext()" but throws NoSuchElementException upon calling "next()" - anyone experience anything similar?
[04:56:20] <sqwishy> If I have an document with a list in some field, how do I form my query to limit how many items of the list I obtain?
[04:57:05] <Dennis-> you can not limit
[04:58:29] <sqwishy> So in that case, should I store those items as documents in a separate collection and have them refer back like in a relational database?
[04:59:25] <Dennis-> do whatever you need. it is not supported on the backend
[04:59:33] <Dennis-> slice in your app if needed
[04:59:46] <Dennis-> does this database support anything useful at all?
[05:00:07] <Dennis-> can this database do anything useful other than sucking at all ends?
[05:00:31] <Dennis-> this database is full of flaws and limitations. why use it?
[05:01:09] <jedir0x> db.documents.find().skip(x).limit(y);
[05:01:24] <jedir0x> oh, limit the items of the list, can't do that
[05:01:33] <jedir0x> if you need to query on those they should be somewhere else
[05:01:44] <sqwishy> jedir0x: In a different collection?
[05:01:46] <jedir0x> if you're used to RDBMS, think of a document as a row
[05:01:51] <jedir0x> ... kind of.
[05:01:59] <jedir0x> yes, different collection
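A minimal mongo shell sketch of the separate-collection pattern jedir0x suggests above (collection and field names are hypothetical):

    // Each list item lives in its own document and points back at its parent.
    var parentId = db.parents.findOne()._id;   // _id of the owning document
    db.items.insert({ parent_id: parentId, value: "example" });
    // The number of items returned can now be capped server-side:
    db.items.find({ parent_id: parentId }).limit(10);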
[06:29:08] <fbfb> anybody using libmongo-client
[06:29:08] <fbfb> ?
[06:30:48] <fbfb> i'm having trouble using bson_build_full to build queries using libmongo-client. anybody used it?
[06:30:52] <Dennis-> you are using it
[06:31:09] <fbfb> Dennis-: good one.
[06:31:17] <fbfb> Dennis: have you?
[06:32:09] <Dennis-> no
[06:42:02] <Gargoyle> I have an issue connecting to a replica set. Same code running mod_php 5.3.10, driver 1.3.0beta2, connecting to 2.2.0 server works OK. But on the new test platform which is php-fpm 5.4, driver 1.3.0beta2 and connecting to the same server fails with a "not master and slaveOk=false" whenever a query is attempted.
[06:43:03] <Gargoyle> This is my connection stuff:- https://gist.github.com/d6777a20bdf9d863fb1b
[06:54:10] <Noiano> hello
[06:54:45] <Noiano> I've installed mongo on my dev machine but I don't want it to start at boot
[06:54:59] <Noiano> strangely I see no entry in /etc/rcX.d
[06:55:14] <Noiano> but I still see mongod started by init....
[06:55:17] <Noiano> ideas?
[06:55:27] <Gargoyle> Noiano: What flavour and how did you install it?
[06:55:37] <crudson> Noiano: OS? and...what Gargoyle said also
[06:55:45] <Noiano> 32bit linux version, from repository
[06:55:58] <Dennis-> you have been asked about the OS
[06:56:03] <Gargoyle> Noiano: Seriously! Which one?
[06:56:09] <Noiano> ubuntu 12.04
[06:57:01] <Gargoyle> Noiano: So that should be using upstart not init.
[06:58:37] <Noiano> mmm let me see
[06:59:58] <Gargoyle> Noiano: "initctl list |grep mongo" will show you if it is running, but I am not familiar with the correct way to disable something
[07:00:17] <Noiano> I always used update-rcd
[07:00:39] <Noiano> definitely running
[07:08:40] <Noiano> done
[07:08:41] <Noiano> ;)
[07:09:01] <Gargoyle> Noiano: What was the proper way to disable it?
[07:09:25] <Noiano> simple,
[07:09:40] <Noiano> there's a ENABLE_MONGODB variable in /etc/init/mongodb.conf
[07:09:51] <Noiano> change ENABLE_MONGODB="yes" to ENABLE_MONGODB="no"
[07:40:37] <[AD]Turbo> hi there
[07:44:12] <Gargoyle> monring
[07:44:24] <Gargoyle> Or even, Morning!
[07:54:07] <NodeX> howdee
[07:56:06] <Gargoyle> ping Derick
[07:57:30] <Gargoyle> Derick: Hope zendcon is going well. I have another reproducible segfault using php 5.4 fpm and the HEAD driver from git. gdb (apport) retrace leads to:- #0 mongo_deregister_callback_from_connection (connection=0x164dc40, cursor=cursor@entry=0x1d8b608) at /root/mongo-php-driver/mcon/manager.c:317
[09:04:23] <MongoDBIdiot> back
[09:04:39] <NodeX> the room just got Dumber
[09:12:04] <MongoDBIdiot> because you opened your incompetent mouth?
[09:17:00] <NodeX> no, it happened just after you joined
[09:17:05] <NodeX> the collective IQ dropped
[09:19:20] <NodeX> my mission is to troll the bad kid today
[09:20:00] <NodeX> (in between work)
[09:20:05] <elninjo> lawl
[09:20:07] <MongoDBIdiot> just move one before you end up on my ignore list
[09:20:31] <NodeX> one -> on kid
[09:21:28] <NodeX> I do, but I feel it's important
[09:21:58] <MongoDBIdiot> NodeX is the prototype is the typical MongoDB dumb-ass
[09:23:19] <NodeX> of the typical *
[09:23:23] <NodeX> learn how to type kid
[09:23:34] <IAD1> http://lenta.iadlab.ru/wp-content/uploads/2012/10/1.jpg =)
[09:23:58] <NodeX> IAD1 : lol
[09:25:09] <MongoDBIdiot> #MongoDB, the community where the dumb asses are king
[09:25:32] <ppetermann> so, what brings you here?
[09:25:38] <elninjo> ./kickban pls
[09:25:50] <ppetermann> if its that bad, you probably shouldnt hang out here
[09:25:51] <NodeX> he is MacYET... he i going to get banned pretty soon
[09:25:52] <MongoDBIdiot> 10gen admins are sleeping, like always
[09:25:59] <NodeX> is *
[09:26:16] <MongoDBIdiot> learn to type,kid
[09:26:26] <NodeX> I did
[09:26:30] <NodeX> Kid ;)
[09:28:07] <Aim> oh great
[09:28:09] <kali> Derick: ping
[09:28:10] <Aim> everyone is here
[09:28:24] <NodeX> kali : he's at zendcon I think
[09:28:38] <Aim> ppetermann: php rocks
[09:28:47] <Gargoyle> NodeX: Probably somewhere near the bar! ;)
[09:29:07] <NodeX> yeh, let's hope
[09:29:46] <MongoDBIdiot> kids are calling the channel police
[09:29:48] <MongoDBIdiot> cute
[09:30:52] <NodeX> tough guy behind a keyboard
[09:39:22] <Gargoyle> If I connect to a local mongos which in turn connects to a replica set - does the driver know it's connecting to a replica set, or does it just think it's a local mongo db?
[09:40:07] <NodeX> the mongos is a router which sends the queries out and returns the data
[09:40:38] <NodeX> if you define to send/connect to a primary it will (should) connect to it
[09:41:18] <NodeX> I have recently seen some people who were having trouble connecting directly to replica sets with the PHP driver when specifying an RS
[09:41:44] <Gargoyle> NodeX: yes. seems beta 2 is borked in php 5.4
[09:41:57] <Gargoyle> HEAD works, but getting other segfaults.
[09:42:15] <Gargoyle> Which havent shown when using just a local db.
[09:42:41] <NodeX> :/
[09:42:57] <NodeX> what's your connection string look like?
[09:42:57] <Gargoyle> Will the mongos reconnect / use secondaries for reads, etc?
[09:43:17] <NodeX> setSlaveOk I think controls that
[09:43:25] <Gargoyle> mongodb://host:port,host:port,host:port
[09:43:32] <Gargoyle> setSlaveOk is deprecated
[09:44:01] <Gargoyle> and setReadPreference() didn't seem to be having any effect.
[09:44:38] <NodeX> maybe you're in purgatory
[09:44:51] <NodeX> in between the two functions
[09:45:05] <Gargoyle> updating to the HEAD of the driver solved the connection, but is randomly segfaulting. Possibly connected to timeouts or packet loss as I am connecting to the db via a vpn
[09:45:37] <NodeX> "Version 2.2 of mongos added full support for read preferences. When connecting to older mongos instances, Mongo::RP_PRIMARY_PREFERRED will send queries to secondaries. "
[09:45:51] <Gargoyle> But was thinking about looking into using a localhost mongos anyway. (I believe it will help future migration to a shard setup if needed?)
[09:46:43] <NodeX> have you tried query string http://www.php.net/manual/en/mongo.readpreferences.php
[09:46:45] <Gargoyle> ahh. right. Looks like I'll be doing a bit more reading this afternoon. :)
[09:46:50] <NodeX> syntax *
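For reference, the query-string syntax NodeX points at looks roughly like this (hosts and replica set name are placeholders):

    mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rsname&readPreference=primaryPreferred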
[09:48:19] <Gargoyle> nope. I'll wait to see if I can speak to Derick for 10 mins and see if I can dig up any useful info for him.
[09:55:16] <mcen> Can anyone help me with a mapreduce function? I have a collection with page views [{user_id:xxx, page:xxx},{}, {}] and want to find out "also looked at" for a page.
[09:55:18] <mcen> ?
[09:57:54] <mcen> Am I going to use something like http://cookbook.mongodb.org/patterns/unique_items_map_reduce/
[09:57:55] <mcen> ?
[09:59:53] <NodeX> you might be better with aggregation framework if you're on 2.2
[10:11:56] <mcen> Sad I'm on 2.0.2
[10:16:35] <mcen> Kind of want to collect on page_id other page_ids that have same user_id
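A rough sketch of one way to approach mcen's "also looked at" problem with mapReduce on 2.0 (collection and output names are hypothetical; the pair counting is left to a second pass):

    // Pass 1: collect the set of pages each user viewed.
    var map = function () { emit(this.user_id, { pages: [this.page] }); };
    var reduce = function (key, values) {
        var merged = { pages: [] };
        values.forEach(function (v) { merged.pages = merged.pages.concat(v.pages); });
        return merged;
    };
    db.pageviews.mapReduce(map, reduce, { out: "pages_by_user" });
    // Pass 2 (client side): for a given page, tally the other pages that
    // co-occur in the same users' page lists to build "also looked at".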
[10:18:13] <ppetermann> Aim: uhm, ok
[10:18:55] <IdiotAway> then you should upgrade to 2.2
[10:19:57] <ppetermann> Aim: not sure what you are trying to start there
[10:20:10] <Aim> meh was bored
[10:20:15] <Aim> but got something better to do now :)
[10:21:27] <ppetermann> dont forget the tissue, makes less of a mess then
[10:22:21] <Aim> I think I'll need a towel then
[10:22:38] <ppetermann> if you haven't unloaded for that long.. :p
[10:22:57] <Aim> but dont worry, i wont use my travel towel
[10:26:24] <NodeX> lol
[10:26:58] <Aim> milf? :)
[10:27:11] <NodeX> gilf
[10:27:17] <Aim> brrr
[10:27:20] <NodeX> lol
[10:27:27] <NodeX> she has a gunt
[10:27:30] <Aim> no thanks :)
[10:27:44] <Aim> im not that desperate yet
[10:28:00] <NodeX> even old fat ugly chicks need love
[10:28:17] <NodeX> think of it as "giving back
[10:28:27] <NodeX> it's a lot like open source really
[10:28:42] <Aim> i hope it isnt like gpl
[10:28:46] <Aim> would suck
[10:29:21] <NodeX> modify and pass on lmfao
[10:29:31] <NodeX> mod and redistribute LOL
[10:29:52] <Aim> yeah exactly
[10:30:02] <Aim> meaning you'll have to share all your gf with eachother
[10:30:05] <Aim> 's
[10:30:17] <Aim> oh god
[10:30:23] <Aim> what if one of them get kids
[10:30:36] <Aim> oh wait
[10:30:45] <Aim> then we can always change it to the Apache foundation license
[10:30:49] <Aim> and ditch them there
[10:30:50] <NodeX> well we could use git as the repo master
[10:30:58] <NodeX> then they can branch off
[10:31:09] <NodeX> pull requests sound messy though
[10:31:17] <Aim> lol
[10:31:45] <Aim> dont even wanna think about merging a branch into the head
[10:31:49] <NodeX> hahahaha
[10:31:58] <NodeX> nearly spat my tea all over my laptop then
[10:32:10] <Aim> ^^
[10:36:02] <IAD1> so, http://sphotos-b.xx.fbcdn.net/hphotos-prn1/525042_420793794642299_1396220444_n.jpg
[10:38:16] <ppetermann> tbh, i really wonder what makes someone take on a nickname like that
[10:38:27] <MongoDBIdiot> engineers cry "does not work", good engineers" use their brain first
[10:38:48] <ppetermann> you got a " too much
[10:39:07] <ppetermann> it hurts my brain to read your line :(
[10:45:47] <MongoDBIdiot> motivation: using your brain is the first step to becoming a good engineer
[10:46:28] <ppetermann> thank god i don't want to be an engineer.
[10:47:46] <unknet> Hi
[10:48:46] <MongoDBIdiot> perfect
[11:20:33] <giskard> hey!
[11:20:46] <giskard> I remember that mongoc will be gone soon
[11:21:01] <giskard> but I don't remember where i read it.. do you have a link for that or it's something that i made up
[11:22:24] <NodeX> mongoc?
[11:22:38] <Gargoyle> Is there a solution to needing an index with more than one array based field ?
[11:23:39] <giskard> mongoc == mongod --configsvr
[11:27:47] <ron> where's remonvv?
[11:28:30] <NodeX> in my pocket eating cheese
[11:29:37] <ron> dude, he's probably twice your size ;)
[11:30:07] <NodeX> wide not tall lol
[11:30:11] <NodeX> i'm nearly 6'5
[11:30:23] <NodeX> (not saying he is fat)
[11:30:46] <ron> he's taller :P
[11:31:31] <NodeX> really?
[12:12:10] <Gargoyle> Seeing this far too much at the mo :( - Segmentation fault: 11
[12:43:50] <unknet> Hi
[12:45:07] <unknet> Does anybody know if there are some performance issues when using embedded polymorphic relationships with mongoid?
[12:47:41] <NodeX> polymorphic relationships?
[12:50:08] <unknet> polymorphic classes
[12:50:51] <NodeX> you're going to have to explain that another way
[12:51:27] <NodeX> mongo has no idea of your data
[12:51:48] <unknet> NodeX in mongo you can define a relation as polymorphic
[12:52:21] <Gargoyle> unknet: In mongo, you don't define relations!
[12:53:04] <unknet> class Photo
[12:53:04] <unknet> include Mongoid::Document
[12:53:04] <unknet> embedded_in :photographic, polymorphic: true
[12:53:04] <unknet> end
[12:53:10] <ppetermann> thats not mongo
[12:53:37] <unknet> im refering to this
[12:53:45] <ppetermann> thats your ruby odm
[12:53:50] <unknet> if there are some performance issues by using it
[12:54:01] <unknet> i have said mongoid
[12:54:07] <unknet> not mongodb
[12:54:27] <ppetermann> 14:48 < unknet> NodeX in mongo you can define a relation as polymorphic
[12:54:53] <Gargoyle> yes, odm = performance penalty! Perhaps #ruby or some other odm specific chan may be of more help.
[12:55:14] <NodeX> sorry, I misread mongoid for mongodb
[12:55:17] <NodeX> my mistake
[12:56:20] <unknet> i want to know if this could bring some performance issues or if it works as any other relationship that you can define with mongoid
[12:56:38] <unknet> (with penalties for being an odm)
[12:56:51] <NodeX> you already penalise yourself with a relationship
[12:57:08] <unknet> it's an embedded relationship really
[12:57:08] <NodeX> I don't know the nitty gritty of mongoid and how it handles polymorphics
[12:57:20] <unknet> oh
[12:57:22] <unknet> thanks anyway
[12:57:25] <NodeX> then each of your queries to it requires another lookup
[12:57:30] <NodeX> which is a performance penalty
[12:57:44] <ppetermann> an embedded document is not really a relationship though =)
[12:57:50] <unknet> yes
[12:57:53] <unknet> is not
[12:58:23] <unknet> i think that there is no performance issues but...
[12:58:45] <unknet> i dont know for sure
[12:58:49] <NodeX> wait, is the whole doc embedded or the id and a relationship?
[12:58:55] <ppetermann> relations in mongoid according to its documentation use proxies, and build the relation in mongoid
[12:59:10] <unknet> im using only embedded "relations"
[12:59:13] <ppetermann> which means aggregation over it gets very complex
[13:01:27] <NodeX> you're welcome LOL
[13:20:21] <jawr> hmm, on an upsert {$inc: {"value": 1}} is setting the value to 0, is that normal behaviour?
[13:20:39] <NodeX> if it was -1 to begin wiht
[13:20:42] <NodeX> with*
[13:20:53] <jawr> well it was inserted
[13:21:15] <jawr> so i'm not entirely sure what the default is, i would have thought for an int it would be 0
[13:21:38] <jawr> hmm
[13:22:11] <NodeX> http://www.mongodb.org/display/DOCS/Updating#Updating-UpsertswithModifiers
[13:22:28] <NodeX> it should set it to 1 if the doc didn't exist
[13:22:57] <jawr> yeah it must have been something quirky i dropped the db and tried again and it worked as expected
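For the record, the documented upsert behaviour jawr eventually saw, as a shell sketch (collection name is hypothetical):

    // No matching document: the upsert creates one and $inc seeds value at 0 + 1.
    db.counters.update({ _id: "hits" }, { $inc: { value: 1 } }, true);  // third arg = upsert
    db.counters.findOne({ _id: "hits" });  // { _id: "hits", value: 1 }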
[13:33:13] <NodeX> I do rather like linked in
[13:33:23] <NodeX> it tells me which mongoer's are stalking me
[13:34:22] <Gargoyle> :)
[13:45:56] <Gargoyle> Note to self. Remember to resize user uploaded images when they upload them.
[13:46:15] <Gargoyle> currently re-processing 130,000 images. :S
[13:46:28] <kali> not that much
[13:46:54] <kali> whenever apple creates a new device with a new screen size, we re-generate a picture the right size for our full dataset
[13:47:21] <kali> it happened twice this year, and i thought it was going to be three times until a few hours ago :)
[13:47:32] <Gargoyle> kali :)
[13:47:37] <Gargoyle> How many images?
[13:47:54] <kali> more than 1M
[13:48:08] <kali> something like 1.2 or 1.3
[13:48:09] <Gargoyle> Eek. I bet that takes some time?
[13:48:31] <kali> it took about 24 hours for the iphone 5
[13:48:39] <kali> hadoop rules
[13:49:45] <Gargoyle> This has been running for about an hour on my MBA, and its about 1/2 way through the collection. But I know towards the end of the collection, there are a lot more items that are not images, so it should speed up.
[13:50:36] <NodeX> Gargoyle : a better way is to add them to a queue
[13:50:49] <NodeX> then they get done when it's efficient for the server
[13:51:37] <Gargoyle> NodeX: For the future, that will probably happen. For now, not so much of an issue to resize during upload.
[13:52:31] <Gargoyle> I suspect that if we could limit to a new browser, then there's probably a js + canvas script that will do it client side!
[13:53:05] <NodeX> I would hate to be one of your users lol
[13:53:13] <NodeX> using my CPU to do your work :P
[13:53:16] <Gargoyle> ha ha! :D
[13:57:01] <Gargoyle> Think I might have missed if anyone answered this earlier. But is there a solution to needing to have more than 1 array "column" in an index?
[13:57:32] <kali> Gargoyle: it's explicitly not supported, and for very good reasons :)
[13:57:32] <NodeX> I didn't know it was an issue
[13:57:47] <Gargoyle> kali: Oh!
[13:57:49] <Gargoyle> :)
[13:58:54] <Gargoyle> ok, given an example of doc = { flags: [1,2,3], other: [{id:4},{id:5},etc] }
[13:59:41] <Gargoyle> what's the solution to indexing. Just pick the one that yields the lowest subset of results and then let it scan them without the second column being indexed?
[14:00:32] <kali> Gargoyle: pick the one with the widest distribution of values
[14:00:43] <kali> Gargoyle: aka the most selective one
[14:01:28] <kali> Gargoyle: there are possible optimizations that can be done with having one index on each array, but as far as i know, they're not there yet
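A shell sketch of kali's advice, using Gargoyle's example document and assuming flags is the more selective field (collection name hypothetical):

    // Only one array-valued field can go in a compound index (multikey restriction),
    // so index the most selective array and let the other condition be filtered by scan.
    db.docs.ensureIndex({ flags: 1 });
    db.docs.find({ flags: 2, "other.id": 4 });  // uses the flags index, scans for other.id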
[14:21:27] <Gargoyle> Any thoughts on what would be faster? an index on item: thing, and using an $in query to search for multi "things", or an index on item: [thing, thing, thing] ?
[14:46:15] <newbie22> *: what is the meaning of the term "single-server durability" in mongo ???
[14:47:26] <kali> newbie22: it means mongodb will use a journal on disk to avoid losing the last minutes of writes
[14:47:34] <kali> last minute, not last minuteS
[14:47:44] <kali> nacer: in case of a server crash
[14:47:50] <meghan> newbie22 this doc may help http://www.mongodb.org/display/DOCS/Durability+and+Repair
[14:48:23] <newbie22> thanks guys
[14:48:47] <newbie22> *: I am new to mongodb and will most likely be asking more questions...
[15:04:48] <jordanorelli> offhand, anybody know if there's any preferred way to rename a field to _id? I have 2m documents with a third-party ID of id instead of _id that I'd like to rename, which $rename can't do because _id is immutable so it has to create new documents. any special case for this, or just do it manually?
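Since $rename can't target _id, the manual approach jordanorelli alludes to usually looks something like this (collection names hypothetical):

    // Copy each document with the third-party id promoted to _id, then swap collections.
    db.olddocs.find().forEach(function (doc) {
        doc._id = doc.id;
        delete doc.id;
        db.newdocs.insert(doc);
    });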
[15:09:41] <newbie22> *: Is there a windows GUI that I can use to connect to my mongodb running on a linux server ???
[15:10:47] <MatheusOl> newbie22: why don't you use the mongo shell?
[15:12:52] <newbie22> Matheus01: I am using the mongo shell on linux. This is just a curiosity question ....
[15:15:38] <MatheusOl> ok
[16:12:09] <Derick> Gargoyle: here now
[16:12:55] <Gargoyle> Hey there!
[16:13:01] <Gargoyle> How is zendcon?
[16:15:14] <Gargoyle> Derick: ^^
[16:15:43] <Derick> zendcon is almost over
[16:15:50] <Derick> wifi still crap :-)
[16:16:06] <Gargoyle> More importantly, how's the beer!? ;)
[16:16:15] <Derick> i don't drink beer :P
[16:16:37] <Gargoyle> whiskey (with or without the e)
[16:17:02] <Derick> without, but the selection I have at home is better :-)
[16:17:05] <Derick> anyway, wassup?
[16:17:57] <Gargoyle> I have setup a test server running php5.4 fpm, and using the exact same code as our live server, I had to update the driver to HEAD to get it to connect to the RS.
[16:18:18] <Gargoyle> But I seem to be able to get it to segfault quite reliably on the same page load.
[16:18:33] <Derick> from master? really?
[16:18:42] <Gargoyle> yup.
[16:18:51] <Derick> updated code I suppose?
[16:19:40] <Gargoyle> Is there any info I can dig up that could help you?
[16:19:59] <Derick> A backtrace would be awesome :-)
[16:20:11] <Gargoyle> You can just give me a holler next week if you want to chat about it when you have more time?
[16:20:26] <Derick> well, we want to release RC1 on tuesday or so :-)
[16:20:35] <Derick> it could be a quick fix...
[16:20:35] <Gargoyle> I have got gdb and it's an ubuntu box, so apport is on there.
[16:20:50] <Derick> so a backtrace, as well as a full log with mongolog would be awesome :-)
[16:21:17] <Gargoyle> An application log or the mongo logs?
[16:21:41] <Derick> application
[16:21:46] <Derick> unless you crash mongo? :)
[16:22:22] <Derick> MongoLog::setModule( MongoLog::ALL ); MongoLog::setLevel( MongoLog::ALL ); spits out a whole load of very useful information as PHP notices (so direct to log)
[16:24:16] <Gargoyle> I can probably get a backtrace if you help me a bit. Will need to take more time getting monolog into the code and trying to make sure CodeIgniter isn't turning off logging.
[16:24:44] <Gargoyle> But I can have the same page load 2 or 3 times, and then segfault on the 4th , 5th load. Etc!
[16:24:49] <Derick> not monolog, MongoLog :-)
[16:25:05] <Gargoyle> Oh! Silly me!
[16:25:11] <Derick> it's part of the extension
[16:25:44] <Gargoyle> Ahh. so I just shove that at the start of the front controller! :)
[16:25:58] <Derick> just do it before you do new Mongo();
[16:26:06] <Derick> otherwise we can't see everything
[16:26:46] <Gargoyle> OK, gimme 10 mins and I'll see what I can dig up.
[16:27:34] <krawek> guys what's the proper way to backup replica sets?
[16:31:56] <Gargoyle> Derick: Does MongoLog include some kind of anti-segfault code - because apart from all the php notices, the page is now loading ! :/
[16:32:18] <Derick> nope, it doesn't
[16:32:32] <Gargoyle> OK, It's done it on the third try!
[16:32:42] <Derick> yay :-/
[16:33:52] <Gargoyle> I'm still getting things setup and I dont think the notices are being logged to file, so are you happy with the HTML source for the PHP notices?
[16:34:35] <Derick> Gargoyle: hmm, it'd be a lot easier if it was in a file
[16:34:50] <Derick> (so that I can parse and scan through it)
[16:34:57] <Derick> I would also really still like the GDB backtrace
[16:35:15] <Gargoyle> OK. I should be able to get php-fpm to log them somewhere!
[16:35:33] <Derick> set_error_log( "/var/log/errors/log" ) ought to do it
[16:35:36] <Gargoyle> I have the .crash file and have run apport-retrace -R -g _usr_sbin_php5-fpm.33.crash
[16:35:39] <Derick> ah
[16:35:52] <Derick> that's useful, but I do need the output
[16:35:56] <Derick> also, what's apport? :)
[16:36:06] <krawek> Derick: is it ok to use mongodump --oplog to make a backup of a replica set db?
[16:36:34] <Derick> krawek: it depends, is that replicaset member part of the production nodes?
[16:36:42] <Derick> how much data are we speaking off?
[16:36:48] <Gargoyle> Derick: Some ubuntu program that wraps up core dumps with other info! It's given me some output (i'll pastebin), and dropped me to a gdb prompt
[16:36:57] <Derick> ok, cheers
[16:37:05] <Derick> I think, "bt full" would be best
[16:37:07] <krawek> it's really small, less than 100mb
[16:37:13] <Derick> krawek: then go for it
[16:37:35] <krawek> is it better to use --oplog ?
[16:37:51] <krawek> or mongodump alone is just fine
[16:38:14] <Gargoyle> Derick: Back trace (i think!) :) = https://gist.github.com/1b43fab96a2a7f8e8a98
[16:38:39] <Derick> krawek: I'd use it
[16:39:06] <Derick> Gargoyle: that looks good
[16:39:07] <krawek> Derick: why? faster? easier? smaller?
[16:39:18] <Derick> now need to find out *when* it happens, and the mongolog output helps with that
[16:39:32] <Derick> krawek: because it takes into account write operations that happen during the dump as well
[16:39:47] <Gargoyle> Derick: I couldn't figure out how to load the debug symbols.
[16:40:02] <krawek> ok so it's probably slower
[16:40:13] <Derick> Gargoyle: it's ok, we're getting the extension's one
[16:40:18] <Derick> krawek: maybe a little bit only
[16:40:55] <krawek> Derick: and this can be performed on any of the nodes right?
[16:41:01] <Derick> Gargoyle: in order to get debug symbols for php, install the ubuntu's php5-dbg package
[16:41:04] <Derick> krawek: yes
[16:41:08] <krawek> good
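The command under discussion, for reference (the output path is a placeholder):

    mongodump --oplog --out /backups/rs-dump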
[16:41:18] <Gargoyle> Derick: Cool. I'll try and setup this logging properly and give it another go, as we could be attempting to load any number of images from mongoBinData fields in other requests!
[16:41:48] <krawek> Derick: I was also wondering what happen when the oplog cap is reached and then you add new nodes, would that work?
[16:41:53] <Derick> this seems to do with some issues with cursor free
[16:42:03] <Derick> bjori wrote it, so we should have him take a look
[16:42:18] <Derick> krawek: new nodes do a full sync of the whole data first
[16:42:34] <krawek> great, thank you very much Derick
[16:42:40] <krawek> really appreciated
[16:48:18] <Goopyo> is there a way to disable MMO's on a collection/db
[16:48:28] <zakovyrya> Hi guys. I was wondering if it's possible to use the field inside an array for sorting?
[16:48:30] <Derick> MMO?
[16:48:37] <Goopyo> memory mapped objects
[16:48:56] <Derick> zakovyrya: sure, just use { 'field.nested' : 1 }
[16:49:20] <Derick> Goopyo: uh, no. MongoDB;s design is to use Memory Mapped Files
[16:49:43] <zakovyrya> Derick: Hm… what happens if I need just specific value?
[16:50:02] <Derick> zakovyrya: don't understand what you're asking now - can you give an example?
[16:50:15] <Goopyo> Derick: so fast timeseries data is a no go for mongo?
[16:51:04] <Derick> Goopyo: why wouldn't it be? What is your problem with memory mapped files?
[16:51:42] <Goopyo> Wasted ram on the timeseries over the other important data
[16:51:44] <therealkoopa> I have a question on collection layout. I have created a gist to reference: https://gist.github.com/3953966 In this example, sometimes widget 1 will need all of the gizmo information. Is it possible to run a query of the foobars collection to find the gizmo with that id, if you don't knwo the foobar id?
[16:51:53] <Goopyo> i.e. to much data overhead
[16:52:08] <therealkoopa> Or is it better to store all gizmos in their own collection, and each one would hold a reference to the foobar collection
[16:52:08] <Goopyo> i'm honestly hoping you persuade me otherwise
[16:52:10] <Derick> Goopyo: ? What are you trying to get at? data overhead?
[16:52:57] <zakovyrya> Derick: Sure. Here is the results that I have to sort: http://pastie.org/5115506
[16:53:02] <Derick> therealkoopa: how many updates are we thinking about, and how many "linked" objects?
[16:53:04] <Goopyo> here let me rephrase this: would you store realtime stock data into mongodb?
[16:53:31] <Derick> zakovyrya: and what would you want to sort on?
[16:53:49] <zakovyrya> I need to sort them by "buckets.added_on" field, but only the one that matches "bucket_id": "50881d05aa50af78869c5abb"
[16:54:00] <Derick> Goopyo: That's a bit of a biased question for me - I would suggest that you try it out; and you'll probably expect it to work well
[16:54:15] <zakovyrya> Potentially there might be lots of buckets
[16:54:19] <Goopyo> Biased as in you would say you would?
[16:54:36] <Derick> zakovyrya: that I don't think you can do in one go - you would probably have to split out the buckets into their own collection
[16:54:41] <Derick> Goopyo: I would give it a try, sure.
[16:55:15] <therealkoopa> Derick: I'm not sure, yet. I think foobar.gizmos would not be updated too frequently. There will be a lot of links to foobar.gizmo
[16:55:26] <zakovyrya> Derick: Thanks, I also thought that is too good to be true :)
[16:56:05] <zakovyrya> Derick: One more question - is it possible to do that in some kind of temporary collection?
[16:56:31] <Derick> zakovyrya: hmm, there is no such thing as a temp collection
[16:56:39] <Derick> but you can create a new one, and remove it yourself
[16:57:12] <zakovyrya> Derick: Got it. Thanks
[16:57:18] <therealkoopa> Derick: I think my approach will be fine, if it's possible to run a query to find a gizmo by its ID, outside of knowing the foobar id. Meaning, Find me this particular gizmo, but I don't know or care what foobar it's part of
[16:57:56] <Derick> therealkoopa: yes, I think so...
[16:58:00] <therealkoopa> I know I could do that easily if gizmos was its own collection
[16:58:14] <Derick> therealkoopa: I deal better with things other than "foobar" and "gizmo" :-)
[16:58:38] <Goopyo> Derick: One last thing would you index ticker and put them in one collection or would you put them in say 500 different collections (assuming you dont query across tickers)
[16:58:40] <therealkoopa> It just seems better if the gizmo is stored on the foobar in this case. I'm sorry, I tried to simplify my situation. I can come up with a better situation
[16:58:56] <NodeX> DJ GIzmo :)
[16:59:01] <NodeX> The dreamteam
[16:59:11] <Derick> Goopyo: I would put it in one now - as we don't have collection level locking yet
[16:59:41] <Goopyo> can you clarify 'collection level locking' ?
[16:59:51] <Goopyo> for safe writes?
[16:59:54] <Derick> mongodb only has a lock per database
[17:00:16] <Goopyo> gotcha
[17:00:19] <Derick> safe has nothing to do with it, but it's how mongodb handles concurrency
[17:00:50] <Derick> Collection level locking is a feature request: https://jira.mongodb.org/browse/SERVER-1240
[17:01:32] <Goopyo> btw does mongodb guarantee backwards compatibility?
[17:01:33] <therealkoopa> How about this. A query to find the part with an id of 2: https://gist.github.com/3953966
[17:02:10] <Derick> Goopyo: sure
[17:02:42] <Derick> therealkoopa: parts is a different collection?
[17:03:09] <krawek> Derick: is it planned to support different storage engines for mongodb?
[17:03:18] <Derick> therealkoopa: cause right now you'll have to do parts.top.id = 1 || parts.bottom.id = 1
[17:03:25] <Derick> krawek: not that I'm aware of
[17:03:34] <Derick> the data storage is very much ingrained into the design
[17:03:46] <therealkoopa> Derick: Ugh, yea I want something along the lines of parts.*.id = 1
[17:03:55] <krawek> ok
[17:04:06] <therealkoopa> So it might make sense to store parts in its own collection so I can grab by its id?
[17:04:23] <Derick> therealkoopa: yes, I think so
[17:04:43] <Derick> I go to go guys, might be back later.
[17:05:09] <Goopyo> My bad Derick, last thing I promise: which does mongodb handle better, lots of small writes or combined big writes?
[17:05:24] <Goopyo> oh shit nvm take care
[17:09:56] <krawek> Goopyo: I think you can scale writes using sharding
[17:12:16] <NodeX> yup
[17:14:48] <jgornick> Hey guys, is it possible to use the $pull update modifier on multiple levels of embedded documents?
[17:15:48] <NodeX> yep
[17:16:19] <jgornick> For example, let's say I have an issue document that embeds many task documents which contain comments. Using something like db.issues.update({ "_id" : ObjectId("507ee2de2c2414a257000025") }, { $pull: { tasks: { comments: { _id: ObjectId("507ee2dd2c2414a257000019") } } } }), I would want to remove the comment whose id matches the specified id.
[17:17:11] <NodeX> you need the positional operator
[17:17:26] <NodeX> if you want to do it by finding things
[17:17:45] <NodeX> http://www.mongodb.org/display/DOCS/Updating#Updating-The%24positionaloperator
[17:18:00] <jgornick> NodeX: Can you show me an example using my scenario?
[17:18:11] <jgornick> I've tried using the positional operator as well.
[17:18:51] <NodeX> infact you dont need it for that
[17:19:12] <NodeX> $pull : {"tasks.comments._id":ObjectId("507ee2dd2c2414a257000019") }
[17:19:15] <NodeX> that will work fine
[17:19:21] <jgornick> let me try.
[17:22:36] <jgornick> NodeX: That doesn't seem to work for me.
[17:22:54] <jgornick> Heading out to lunch… bbl.
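A commonly used workaround for jgornick's case is to match the task through the nested array and then $pull from it via the positional operator — a sketch reusing the ids from the discussion above, not guaranteed on every version:

    db.issues.update(
        { _id: ObjectId("507ee2de2c2414a257000025"),
          "tasks.comments._id": ObjectId("507ee2dd2c2414a257000019") },
        // "$" resolves to the first matching task; $pull then removes the comment.
        { $pull: { "tasks.$.comments": { _id: ObjectId("507ee2dd2c2414a257000019") } } }
    );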
[17:44:16] <newbie22> *: I am trying to connect to the HTTP web interface of my mongo databse. It is running on a server across the internet. Can anyone help me with this process ??
[17:50:29] <newbie22> *: Does anyone know how I can start my http web interface to run on a different port ???
[17:55:27] <skiz> I need to atomically insert a unique record into a sharded collection, which is sharded on _id. However I'm getting a "For non-multi updates, must have _id or full shard key ({ _id: 1.0 })". The lookup for the existing record is based on other fields and not the shard key. Anything I'm not seeing/doing here?
[17:56:12] <MatheusOl> newbie22: when you say "my http web interface", are talking about what exactly?
[17:56:32] <newbie22> http://www.mongodb.org/display/DOCS/Http+Interface
[17:57:02] <MatheusOl> newbie22: Sleepy?
[17:57:33] <newbie22> The server I am on only allows me to use ports 5000 to 5500.
[17:57:43] <MatheusOl> newbie22: there are plenty of tools on this page, which one are you using?
[17:58:01] <newbie22> HTTP Console
[18:01:01] <newbie22> MatheusOl: Any suggestions ???
[18:06:15] <MatheusOl> newbie22: actually, it will run on MongoDB's port + 1000
[18:06:39] <MatheusOl> newbie22: I don't think there is a way to configure it, but using a webserver you can do redirection
[18:06:54] <kali> or socat/rinetd at the tcp level
[18:07:54] <MatheusOl> yeah
[18:08:17] <kali> or change provider :)
[18:17:26] <newbie22> Matheus01: Thank you...... I have posted the question on "https://groups.google.com/forum/#!forum/mongodb-user"
[18:19:59] <newbie22> Matheus01: The answer should be in the mongodb.config file.
[18:20:36] <MatheusOl> newbie22: there is really no way to that, it's hard-coded at ./src/mongo/db/dbwebserver.cpp, line 531 (version 2.2.0): const int p = cmdLine.port + 1000;
[18:21:43] <newbie22> Matheus01: OK, that is great to know... I am learning and asking questions as I go.
[18:23:00] <MatheusOl> I think it's a bad thing to MongoDB
[18:23:07] <MatheusOl> But I can live with that
[18:23:29] <ashley_w> what does "If the first key is not present in the query, the index will only be used if hinted explicitly." mean from http://www.mongodb.org/display/DOCS/Indexes#Indexes-CompoundKeys ?
[18:24:11] <NodeX> means a compound key on a & b and a query on "b" must be hinted
[18:24:13] <newbie22> Yeah, if that is the way it is... then so be it... I am moving on..
[18:24:21] <NodeX> whereas a query on a + b doesnt
[18:24:32] <NodeX> or "a" doesnt
[18:25:26] <ashley_w> what does "hinted" mean?
[18:25:37] <NodeX> means you hint the query to use an index
[18:25:40] <NodeX> hint();
[18:26:04] <matiassingers> Hi guys
[18:26:05] <NodeX> http://www.mongodb.org/display/DOCS/Optimization#Optimization-Hint
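NodeX's explanation in shell form (hypothetical collection, fields a and b from his example):

    db.coll.ensureIndex({ a: 1, b: 1 });
    db.coll.find({ a: 1, b: 5 });                 // first key present: index used automatically
    db.coll.find({ b: 5 }).hint({ a: 1, b: 1 });  // b alone needs an explicit hint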
[18:26:58] <newbie22> Matheus01: I have just finished "The Little MongoDB Book", by Karl Seguin. I am looking for a good tutorial to start with for Mongo (pdf or http). DO YOU HAVE ANY SUGGESTIONS ?????
[18:27:35] <ashley_w> NodeX: ah, thanks. :)
[18:27:58] <NodeX> ;)
[18:28:12] <MatheusOl> newbie22: no
[18:28:34] <MatheusOl> newbie22: perhaps the O'Reilly ones may be good
[18:28:35] <newbie22> grat
[18:28:36] <newbie22> great
[18:29:39] <ashley_w> newbie22: i did the TRY IT OUT on http://www.mongodb.org/ and then jumped into production code
[18:29:46] <rowanu> got a server-side JS + sharded db question: if i want to run a script as close as possible to my sharded db, do i need to do it through a mongos process (which is connecting to all the shards), or can i just connect to one of the shards' mongod processes and run the script there?
[18:34:45] <newbie22> ashley_w: I am sorry, but I am not clear on what you are speaking on.
[18:35:11] <ashley_w> go to that site, click on that text, follow the instructions
[18:36:20] <ashley_w> i've read docs as i've gone along, which doesn't make for knowing the best way to use mongodb, but it works for our use case
[19:05:40] <Dr{Wh0}> any suggestions on a good key to use for sharding? I am testing simple stuff like a value from 0-15 but the distribution is not even when it creates the shard ranges. Can I control the ranges it creates?
[19:08:16] <Dr{Wh0}> http://pastebin.com/Tt8HR3H2 <- this is how it split it up.
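One way to control the ranges is to pre-split chunks manually — a sketch, assuming a namespace test.data and a shard key field called key:

    // Split the chunk covering {key: 8} into two at that value.
    db.adminCommand({ split: "test.data", middle: { key: 8 } });
    // Or with the 2.2 shell helper:
    sh.splitAt("test.data", { key: 8 });
    sh.status();  // inspect the resulting chunk ranges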
[19:10:00] <Zelest> when using GridFS, can one use '/' in the filename? like, create fake directories and such?
[19:13:02] <Dr{Wh0}> Zelest: afaik the path is just meta data you can make it what ever you want.
[21:29:51] <brandon-dacrib> is there a way for me to check the status of the replication to a new node that I have just added to a replica set?
[21:30:14] <brandon-dacrib> I basically want to sort out how long it will take before the data is replicated over
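From the mongo shell, these are the usual ways to watch a newly added member catch up:

    rs.status()                      // per-member state (STARTUP2/RECOVERING/SECONDARY) and optimes
    rs.printSlaveReplicationInfo()   // how far each member lags behind the primary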
[21:44:23] <unknet> what performs better in a sharded mongodb environment: querying a collection for all members with the same owner_id, or querying a collection with a list containing a limited number of member _ids?
[23:10:19] <Dr{Wh0}> trying to test sharding and see how it scales but I am not getting expected results. I have 4 shards setup and if I run my test insert app to add 5m rows as fast as possible I get 120k/s inserts if I direct each app to a specific shard. If I run just 2 apps connected to 2 separate routers connected to a sharded collection where I see "ok" distribution I end up with about 30k/s so it seems as if it does not scale correctly. Where could the bottleneck be?
[23:10:21] <Dr{Wh0}> I tried with 1 router or 2 routers I have 3 config servers.
[23:56:07] <jordanorelli> is it possible to run a query on just the data that's in memory?
[23:56:22] <jordanorelli> like, "query for this, but don't do a tablescan, and return what you've found so far".
[23:57:00] <jordanorelli> so that if my data is not all in memory, i can still query it safely, but without setting notablescan for the whole database. i just have one query that i want tablescans prevented on.
[23:57:33] <doxavore> Has anyone had luck using the ruby driver's pool_size config under JRuby? I seem to be getting a lot of errors around checking out connections under load.