#mongodb logs for Friday the 21st of September, 2012

[00:00:06] <Oddman> oh it's your loop that's inaccurate?
[00:00:12] <hadees> i guess if my site gets big enough where followers size is a problem I should switch to something other than mongo
[00:00:22] <jrxiii> I know there are more than 9.4k venues in the bay area.
[00:00:24] <hadees> for this
[00:00:48] <Oddman> i'd have a separate collection, hadees
[00:01:08] <hadees> Oddman: what would be in it?
[00:01:18] <Oddman> Followers collection, with the follower id and the followee id
[00:01:48] <jrxiii> hadees: queries like users.find({followers : 'jrxiii'}) would get crazy over time
[00:01:54] <Oddman> I mean if you're talking hundreds of thousands of followers, and you're worrying about document size and/or limits, then just separating out to a collection would work
[00:02:07] <Oddman> and it would be quite fast.
[00:02:29] <Oddman> but really, I wouldn't worry too much until you hit those snags
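A minimal shell sketch of the separate followers collection Oddman is describing (collection and field names here are illustrative, not from the discussion):

    // one document per follow relationship
    db.followers.insert({ follower_id: "jrxiii", followee_id: "hadees" })
    // index both directions so lookups stay fast as the collection grows
    db.followers.ensureIndex({ followee_id: 1 })
    db.followers.ensureIndex({ follower_id: 1 })
    // who follows hadees?
    db.followers.find({ followee_id: "hadees" })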
[00:04:06] <hadees> is there a good way to monitor that? like an alert if my document sizes get too big?
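For spot-checking, the shell can report a document's BSON size directly; a sketch (the users collection and name field are assumed, and any alerting would live in a cron job or similar):

    // size in bytes of one user document (the hard cap in 2.2 is 16MB = 16777216 bytes)
    Object.bsonsize(db.users.findOne({ name: "hadees" }))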
[00:51:39] <planas> I have not been able to get php to work with Ubuntu 12.04 after editing the php.ini file
[01:08:19] <hdm> somewhat random performance question; given 4 spinning disks, would it be more or less efficient to raid-0 them or run 4 shards each with one disk, if the machine has godly amounts of ram and processor cores? (256G ram, 16 cores)
[01:09:07] <hdm> guessing disk overhead is higher with 4 shards, but processing and i/o would be 4 x faster with shards
[02:05:29] <niriven> hi, if i find myself making very large documents (well, small at first in mongo, then they get bigger and bigger, etc) should i be breaking them apart?
[02:06:16] <niriven> eg, users -> events. events might get massive. should i make a users collection and an events collection and relate them in code instead of users containing events (which might get large, eg. large document)
[02:08:54] <mrpro> depends if you grab all user events every time
[02:09:00] <mrpro> or you only grab a subset
[02:10:09] <mrpro> massive doesnt sound good cause i'd think that the document would need to grow and things would need to be rearranged
[02:13:41] <niriven> mrpro: so is it common to store two separate documents, and relate in code?
[02:13:56] <mrpro> why two separate documents?
[02:14:06] <mrpro> maybe a doc for each event?
[02:14:17] <mrpro> or maybe… a doc for each month's worth of events? i dont know niriven
[02:15:29] <niriven> mrpro yeah i have one doc per event, and one doc per user, but if i store all the users events in the user doc, it'd probably be bigger than the recommended 16mb or whatever mongo recommends.
[02:15:41] <mrpro> yeh
[02:16:11] <niriven> i was just asking if that was bad or not :P
[02:16:32] <mrpro> i dont want to mislead
[02:16:44] <mrpro> dont know mongo that well
[02:24:03] <Oddman> niriven, really I'd be thinking about it from a query point of view
[02:24:18] <Oddman> if events are something you query for regularly (aka, they're a top-level data structure), have them in their own collection
[02:24:43] <Oddman> one thing to note is that mongodb has a recommended max size (it doesn't have a technical limit) of 16mb per document
[02:24:58] <Oddman> as I've mentioned a few times - deal with scaling issues when/if you get them
[02:25:17] <Oddman> and think more about how you'll be getting/retrieving data
[02:25:29] <niriven> Oddman: yeah the events are queried very often, but with some users as an AND criterion
[02:25:53] <Oddman> but are events queried separately from users?
[02:26:00] <niriven> eg. get all events that have this, this, and this, but also that are associated with this set of users
[02:26:09] <Oddman> users - plural?
[02:26:11] <niriven> Oddman: they can be
[02:26:14] <niriven> Oddman: yes.
[02:26:19] <Oddman> how many users are we talking per event?
[02:26:32] <Oddman> cos it might be better to have an events collection, with a users subset of data on the events collection
[02:26:32] <niriven> Oddman: sorry, each event has one or no user, wrong statement...
[02:26:34] <Oddman> rather than the other way around
[02:27:05] <Oddman> imho, and again this is pretty shallow considering I don't know the extent of your requirements - but it sounds like events should be its own collection, with a user_id
[02:27:21] <Oddman> where user_id is optional
[02:27:27] <niriven> Oddman: event and user have an id that can relate (i know, bad word here!). so, my plan was to have users and events, find the users im interested in, then find the events im interested in from there
[02:27:38] <Oddman> yup, sounds good :)
[02:27:49] <niriven> Oddman: ok cool, yeah i thought that was best too, but i figured it might be against the whole mongo thing to relate in code.
[02:27:59] <Oddman> no, that's actually the RIGHT way to do it :)
[02:28:11] <Oddman> with systems like mongo
[02:28:17] <niriven> i see, cool, thanks oddman :)
[02:28:17] <Oddman> your schema is generally defined by your application, not the database
[02:28:32] <Oddman> which is why systems like mongo are so refreshing :D
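A sketch of the two-step "relate in code" pattern niriven outlined above, assuming users and events collections joined on a user_id field (the plan filter is a made-up example):

    // step 1: find the users of interest
    var ids = db.users.find({ plan: "pro" }, { _id: 1 }).map(function (u) { return u._id; });
    // step 2: fetch their events, relating in application code rather than with a join
    db.events.find({ user_id: { $in: ids } });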
[02:31:04] <niriven> well its not an app, just analytics
[02:31:18] <niriven> a bunch of "whats happening under this context, and what users are involved"
[02:32:33] <niriven> but yeah :)
[02:34:25] <niriven> details tho are, 1000 users with some information in the user, 151 million events, and 6 million which relate to a valid user. right now i have a users, and events_assigned and events_unassigned, since all the queries are targeted to events that have valid users, and unassigned events might make it into assigned collection later if i find a user. from there ill find all users that fit some profile, then look at their events.
[02:38:16] <Oddman> wow...
[02:38:23] <Oddman> 151 million events for 1000 users!?!
[02:38:40] <Oddman> so every user has created 1.5 million events?
[02:38:51] <Oddman> er, 150,000 sorry. hehe
[02:39:21] <niriven> Oddman: no, only 6 million actually relate to one of those users
[02:39:32] <Oddman> one?
[02:39:48] <Oddman> bah, don't understand the context. haha
[02:40:50] <niriven> 151 million events captured. each event might or might not relate to one user in my db (there are 1000 of them), and there are 6 million with a user id, so thats 6 million / 1000 :)
[02:41:32] <Oddman> gotcha
[02:44:17] <niriven> so hmm maybe its better to have two collections, users (each with on average 600 events), and events, for the ones that don't tie to users
[02:46:26] <niriven> but if my document in mongo gets bigger than 16mb, thats not a prob? i know i can't send a doc over 16mb, which means i can't just save a full user if it exceeds 16mb
[02:47:23] <Oddman> I think what you'd best do here - is get all the events into an event collection
[02:47:27] <Oddman> create your indexes
[02:47:30] <Oddman> and see how it performs
[02:48:26] <niriven> tried that, didnt work. had to at least split out events that are assigned to a user and ones that are not, since i don't care about complex queries for the majority of the events (those that are not assigned to users). so indexing events im not interested in is a waste :)
[02:51:46] <Oddman> *nod*
[03:01:35] <Almindor> hello
[03:01:45] <Almindor> does $or return results in order of query?
[03:02:36] <Almindor> e.g.: { $or: [a, b, c] } if all were found would give a, b, c in that order?
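As a side note, $or makes no ordering promise, so an explicit sort is the safe way to get a deterministic order (a sketch; the tag field is illustrative):

    // $or results come back in whatever order the server produces them;
    // sort explicitly if a, b, c order is required
    db.things.find({ $or: [{ tag: "a" }, { tag: "b" }, { tag: "c" }] }).sort({ tag: 1 })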
[03:19:36] <niriven> so can a document actually exceed 16mb in mongo?
[03:25:42] <tinhead> I am using mongoid to maintain a cache. I insert like: Cache.where(:key => key).add_to_set(:route, route)
[03:25:57] <tinhead> How do I pull these using Mongoid?
[03:27:20] <tinhead> Honestly, I would expect Cache.where(:key => key).pull_all(:route) to work, but it doesn't.
[03:34:27] <niriven> anyone know what happens if you hit the document limit server side?
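For what it's worth, the server simply rejects a write whose resulting document would exceed the BSON limit; roughly (a sketch, with userId and bigEvent as placeholders, and the exact error text varies by version):

    // a $push that would grow a document past 16MB fails server-side with
    // something like "Resulting document after update is larger than 16777216"
    db.users.update({ _id: userId }, { $push: { events: bigEvent } })
    db.getLastError()  // surfaces the failure when not writing in safe mode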
[05:20:30] <johnny_rose> when using mongoid, is there a way to only fetch the fields defined in a subclass and not its parent?
[07:35:42] <[AD]Turbo> hola
[07:40:01] <Gargoyle> What's the overhead of running ensureIndex if the index already exists?
[07:44:23] <ron> I imagine that none?
[07:45:40] <Gargoyle> Is it possible to have an index changed without creating a new one if the ensureIndex doesn't match? by specifying an index name explicitly?
[07:46:35] <NodeX> eh?
[07:47:03] <Gargoyle> NodeX: Assume I have ensureIndex('col':1)
[07:47:40] <Gargoyle> Then I want ensureIndex('col':1, 'another':1), but I no longer want to keep the other index.
[07:48:41] <Gargoyle> However, If possible, I don't want to have a separate script - I just want the ensureIndex() statement as part of my db access code for that collection.
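ensureIndex won't alter an existing index in place, so the usual route is an explicit drop plus rebuild (a sketch against a hypothetical collection):

    // drop the old single-field index, then build the compound one
    db.mycol.dropIndex({ col: 1 })
    db.mycol.ensureIndex({ col: 1, another: 1 })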
[07:54:33] <Gargoyle> And if you really want a friday morning brain teaser, any thoughts on how I can make this faster? http://pastebin.com/RZjh2mh5
[07:55:32] <oxman> easy.
[07:55:33] <NodeX> so you want to add a compound key to your index
[07:55:36] <oxman> buy a new computer ;D
[07:55:39] <oxman> (sorry for the troll)
[07:55:40] <NodeX> is that what you were asking
[07:55:46] <oxman> it's only to give you an answer :)
[07:56:59] <Gargoyle> NodeX: Kind of. Rather than having a separate "database maintenance script", if the overhead of ensureIndex() is minimal when the index exists, I was thinking about just having ensureIndex calls inside the class that is querying that collection.
[07:57:41] <NodeX> the overhead depends on the size of the collection
[07:58:30] <Gargoyle> So if you have a large collection, calling ensureIndex() unnecessarily would be a bad idea?
[07:59:50] <NodeX> yes
[08:01:11] <Gargoyle> Is there a fast way to check for an index? (Trying to make code that sets up the db automatically, without requiring a "install" as such)
[08:01:36] <wereHamster> Gargoyle: db.system.indexes.find ...
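Filling out that hint, a sketch of checking for an index before deciding whether to build one (database and collection names assumed):

    // list every index on mydb.mycol (system.indexes is the pre-3.0 layout)
    db.system.indexes.find({ ns: "mydb.mycol" })
    // or test for one specific key pattern
    db.system.indexes.count({ ns: "mydb.mycol", key: { col: 1, another: 1 } }) > 0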
[08:08:11] <Gargoyle> any thoughts on that query? the collection only has 300,000 items, and 100+ms seems a long time
[08:08:57] <NodeX> $exists checks every key
[08:09:06] <hdm> yup ^
[08:09:13] <NodeX> if you dont need it dont use it
[08:09:23] <NodeX> or if you know the $type check on that
[08:10:08] <hdm> does type use indexes?
[08:14:13] <NodeX> the docs dont say if it does or not so I assume it does, as they normally tell you if they dont
[08:14:26] <Gargoyle> Ok. removing the exists helped a bit.
[08:17:17] <Gargoyle> If I am checking type, does that column still need to be in the index?
[08:22:02] <Gargoyle> Final one on indexes for now. If the results are being sorted, does the sort column need to be in the same index as the search params?
[08:23:43] <Gargoyle> s/column/parameter|entity|field|thing/
[08:23:44] <NodeX> yes, and the last part of the index
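Sketching that advice: equality fields first, the sort field as the last part of the compound index (names illustrative):

    // equality filter first, sort key last
    db.items.ensureIndex({ type: 1, created: -1 })
    // this query can then filter and sort from the same index
    db.items.find({ type: "post" }).sort({ created: -1 })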
[08:24:18] <Gargoyle> cool. We will have sub second page load times!
[08:25:25] <NodeX> :)
[09:56:51] <Gargoyle> Any PHP users with a free 10 mins, I would appreciate any feedback you might have on: https://github.com/gargoyle/MongoSession/blob/master/MongoSession.php
[09:58:18] <NodeX> I'll take a peek
[09:58:55] <Gargoyle> I think the main potential for a problem would be in _lock()
[09:59:10] <Gargoyle> and gc()
[10:06:59] <NodeX> 'unique' => false, <--- indexes are not unique by default
[10:10:03] <Gargoyle> ahh. that was left hanging from a previous version that had a separate session_id as a unique…
[10:13:06] <NodeX> just out of interest why is there a lock() in there
[10:13:32] <Gargoyle> an attempt to prevent session race conditions.
[10:14:55] <NodeX> it can only race itself though if you use session_id as the upsert key
[10:15:18] <NodeX> it requires the same user loads a page at exactly the same time on the same browser
[10:15:24] <NodeX> (twice)
[10:15:33] <Gargoyle> NodeX: Or two ajax requests
[10:15:55] <NodeX> yer or that
[10:16:07] <NodeX> mongo is a fire and forget approach
[10:16:19] <NodeX> first in the queue wins then the next
[10:16:29] <Gargoyle> PHP normally does it using flock on the filesystem
[11:37:32] <noordung> I have a question on Mongoose... If a document is new, I cannot call populate, but I need the populatable field to be there. How can I do this?
[11:38:42] <noordung> If the document is new, I'd generally have an ObjectId, instead of a full document...
[11:39:02] <noordung> So, there is no way I can fetch that object without knowing what is in ref
[11:43:32] <noordung> Ah, but I can inspect the schema! :D
[12:43:12] <Seidr> Heya, I'm doing a mapreduce on a dataset which results in an object (containing data like count, deviation values and various other bits)..is it possible to filter on the values within this object when doing a find on the resulting collection? I seem to be getting no results back - would I need to run a second mapreduce run on this data? Cheers for any advice =)
[13:00:58] <Seidr> Ahh, figured it out! :) I had to use dot-notation to reach into the inner document. Huzzah!
[13:37:20] <Gizmo_x> hi, i need an example of pagination in mongodb with php. for example, when i go to a page in the middle of the pages (say i have pages 1 to 1000), skip()->limit() will be slow as i hear. how can i implement fast paging?
[13:52:20] <noordung> Gizmo_x, you can employ 'fake' pagination, by chunking... create a document with an array of objectids which will hold a specified amount...
[13:54:07] <Gizmo_x> noordung: do you have any example that i can see?
[13:54:31] <noordung> Gizmo_x, no, not really... you can implement it yourself pretty easily
[13:54:56] <Gizmo_x> noordung: what i will do with that array of ids?
[13:56:35] <noordung> Well, lets assume that a chunk is a document with an array of object ids, and holds exactly 100 objectids. You can even call this chunk a 'page'. Assign an index property to it which will identify the page number. Then findOne by this index, and load all of the documents referenced by the ObjectIds in the array...
[13:57:01] <noordung> And voila, you have pagination...
[13:57:26] <noordung> (I think a similar method is described somewhere in the docs...)
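A sketch of the chunk idea as noordung describes it, before NodeX's objection below (100 ids per "page" document; names illustrative):

    // one "page" document holds 100 ObjectIds plus its page number
    db.pages.insert({ page: 42, ids: [ /* up to 100 ObjectIds */ ] })
    // to render page 42: one findOne, then one $in fetch
    var p = db.pages.findOne({ page: 42 })
    db.posts.find({ _id: { $in: p.ids } })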
[13:58:58] <NodeX> that's more expensive than paging normally
[13:59:30] <Gizmo_x> noordung: for every pageSize i will have a new document in the collection? and if i have 500 pages i have to create 50 chunk documents, because the page size will be 10?
[13:59:49] <NodeX> it's a bad idea
[13:59:56] <noordung> Gizmo_x, see what NodeX says...
[14:00:18] <Gizmo_x> is there better idea?
[14:00:19] <NodeX> it's more queries than you need
[14:00:22] <Gizmo_x> NodeX: ?
[14:00:30] <NodeX> I am reading your question
[14:00:39] <Gizmo_x> NodeX: ok thanks
[14:00:57] <NodeX> you cant really implement fast paging
[14:01:02] <Gizmo_x> also there is something like here http://stackoverflow.com/questions/9703319/mongodb-ranged-pagination
[14:01:22] <NodeX> paging isn't slow but it's not as fast as other operations
[14:01:29] <Gizmo_x> when i say fast, i mean: of the paging approaches in mongodb, which is best for dealing with a high amount of documents
[14:02:14] <NodeX> it depends what you call a high amount of documents
[14:02:44] <Gizmo_x> lets say 100 000 document or 1 000 000
[14:03:15] <NodeX> why would you ever want to go past page 100 is the question you want to answer
[14:04:03] <Gizmo_x> NodeX: i think that too, but lets say im an idiot and i want to find out if the app has bugs, or if i can make the app crash or spam
[14:04:06] <NodeX> personally if you're going past page 5 then something is wrong with the query, because the result(s) should've been found by then
[14:04:07] <noordung> NodeX, maybe implement an index property on every document, and then find on a lowerRange < index < upperRange?
[14:04:53] <noordung> NodeX, conventional logic tells me it would be faster, especially if you employ indexing on the index property...
[14:04:59] <NodeX> Gizmo_x : an expensive part of the page is the count() so firstly you need to cache that
[14:05:46] <Gizmo_x> NodeX: hmm but if i cache it i will never have the real count in real time right?
[14:06:02] <noordung> NodeX, where index is auto-incremented...
[14:06:04] <NodeX> it depends how write heavy your data is
[14:06:17] <NodeX> noordung : the index is not the problem
[14:07:46] <Gizmo_x> NodeX: you suggest to use only prev and next links for pagination? this way i will not use the last page as count(), and i can use ranges if that way is faster than skip()?
[14:08:43] <NodeX> the problem lies with where your documents live on disk/ram.. the first few pages will more than likely live next to each other, the rest could require massive disk seeks
[14:09:14] <NodeX> if you need pages all the way to the end then you should really throw a cache in the middle
[14:10:07] <NodeX> you should certainly cache the count because that is expensive
[14:10:18] <NodeX> invalidate it when a write happens that changes results
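Putting the two suggestions together, a sketch of ranged paging (per the Stack Overflow link above) with a cached count (collection names assumed):

    // ranged paging: remember the last _id on the current page and start
    // the next page from there, instead of using skip()
    var last = ObjectId("505c6f7d8b3f9a1e2c4d5e6f")  // placeholder: last _id seen
    db.posts.find({ _id: { $gt: last } }).sort({ _id: 1 }).limit(10)
    // cache the expensive count; invalidate on writes that change results
    db.counters.update({ _id: "posts" }, { $set: { n: db.posts.count() } }, true)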
[14:13:13] <Gizmo_x> NodeX: this noSQL is more complicated when you get deep into it. a little offtopic: which db is faster in this situation with offset and limit, sql or nosql? like mongodb vs mysql?
[14:14:28] <NodeX> it's been so long since I used MySQL I couldn't say tbh
[14:14:46] <NodeX> I dont have paging problems but I also dont try and page a million documents
[14:15:40] <Gizmo_x> NodeX: im trying to work out whether i'll need the option to list a lot of documents in the future; that's why im calculating now how to optimise my db for that
[14:18:04] <NodeX> if you need to page from 1 to 10,000 you're going to have to accept that it's an expensive operation
[14:19:13] <Alir3z4> What web-framework out there works really well and is integrated with mongodb?
[14:21:06] <NodeX> web framework?
[14:22:03] <Alir3z4> NodeX: Yes, Web-Framework, to dive into web app development
[14:22:44] <NodeX> right... but mongodb is a datastore
[14:23:26] <Alir3z4> I know it's datastore, but i guess maybe it's right place to ask about it
[14:24:12] <NodeX> I still dont understand what you mean by a web framework
[14:24:39] <Alir3z4> Basically i guess every web-framework out there was created to work with SQL databases
[14:24:51] <algernon> ...and you're wrong.
[14:25:09] <NodeX> but what is a "web framework" .. do you mean a server side scripting language?
[14:25:10] <Alir3z4> I mean not all of them
[14:25:31] <Gizmo_x> Alir3z4: Doctrine 2 has MongoDB support
[14:25:34] <Alir3z4> NodeX: http://en.wikipedia.org/wiki/Web_application_framework, http://en.wikipedia.org/wiki/Comparison_of_web_application_frameworks
[14:25:59] <NodeX> right, they are LANGUAGE frameworks
[14:26:14] <NodeX> and you should avoid them like the plague because they all carry bloat
[14:26:15] <Quantumplation> Are there any known issues with using both sort and limit on a mongo query? On my local machine, the code behaves perfectly (selects the items in descending date, so the newest items, then limits the results to 50), however once I deploy (same code, same database), it's selecting some number of items, limiting them to 50, then sorting those items by descending date.
[14:26:27] <Alir3z4> Gizmo_x: What about python or Java ?
[14:26:41] <Gizmo_x> Alir3z4: no idea about them, im php dev
[14:27:09] <Alir3z4> Thank you
[14:27:11] <NodeX> http://www.mongodb.org/display/DOCS/Java+Tutorial <---- 2 seconds in google
[14:28:16] <NodeX> Quantumplation : can you pastebin the query?
[14:29:41] <Alir3z4> NodeX: If you look at my first question you will see i'm looking for a web-framework that works really integrated with mongoDB. A lot of these web-frameworks around will lose some of their main functionality when going to work with MongoDB and other NoSQL data[base|store]s
[14:30:07] <Alir3z4> NodeX: That's because they're created to work with SQL-based databases, and yeah!
[14:30:53] <NodeX> If they're created to work with SQL databases then dont use them
[14:31:27] <NodeX> most of the discussion with any sort of framework in this chan revolves around the drivers themselves
[14:31:39] <NodeX> mainly java, node, php, python
[14:31:55] <Alir3z4> Yes, specially django/python
[14:32:06] <Gizmo_x> Alir3z4: http://code.google.com/p/morphia/ http://mongolink.org/ check this web sites will help you
[14:32:10] <NodeX> I have never seen anyone specifically model their app around a framework, to do so is silly
[14:32:20] <Quantumplation> Mongo C# drivers 2.0.5, http://pastebin.com/5C7khcR2
[14:32:39] <NodeX> Quantumplation : can you output the query into json?
[14:33:50] <Quantumplation> On my local machine, and/or the deployed machine?
[14:33:52] <Alir3z4> NodeX: You have to do that if you want to use that framework, but designing the database model for both nosql and sql will be damn confusing
[14:34:23] <Alir3z4> Gizmo_x: Seems it works with the Play framework too, let me check it out and try it for a couple of hours
[14:34:34] <NodeX> why would you design a model for both?
[14:34:44] <NodeX> Quantumplation : both
[14:35:09] <Gizmo_x> Alir3z4: GL
[14:35:37] <Gizmo_x> NodeX: ODM style instead of normal
[14:36:00] <NodeX> I dont know what ODM is sorry
[14:36:15] <Gizmo_x> NodeX: Object Document Mapper :)
[14:36:30] <Gizmo_x> NodeX: Like ORM for sql databases
[14:36:39] <NodeX> I dont know what ORM is either
[14:36:43] <Alir3z4> :D
[14:36:49] <NodeX> sorry, I don't do these trendy buzz words
[14:36:52] <Quantumplation> Sure, getting it from the deployed server is going to take me a minute though, hold on. In the mean time, here's from my local machine: http://pastebin.com/wFWTipQp
[14:37:11] <NodeX> Quantumplation: I mean the query
[14:37:14] <NodeX> not the results
[14:37:16] <Gizmo_x> NodeX: Object-relational mapping
[14:37:43] <NodeX> Gizmo_x : Means nothing to me sorry
[14:37:53] <Gizmo_x> NodeX: ok :)
[14:38:15] <NodeX> I assume it's some way to do something in a framework?
[14:38:16] <Quantumplation> er, oh. Is there a way, with the C# drivers, to output the query from the cursor object?
[14:38:32] <NodeX> just write the query as you would on the shell
[14:38:49] <NodeX> not sure if the driver has a helper for that
[14:39:28] <Alir3z4> NodeX: No, it doesn't depend on a framework or anything
[14:40:15] <NodeX> Alir3z4 : dont waste your time trying to explain it to me, it's not something I use or would ever need
[14:40:36] <Alir3z4> NodeX: What do you use then?
[14:40:57] <NodeX> things that are fast and not confusing ;)
[14:41:21] <NodeX> (and dont put yet another layer of crap in my stack)
[14:41:46] <Alir3z4> NodeX: I like Assembly too :d
[14:42:15] <NodeX> lol
[14:42:16] <Gizmo_x> Alir3z4: holy sh1t :) then
[14:43:42] <NodeX> I go on the idea that the quickest path is that of least resistance, so why would anyone ever want to put frameworks, ORMs etc in the way and cause friction
[14:44:29] <Gizmo_x> NodeX: it's easier to manage the db? because there are standards and team integration in enterprise projects
[14:44:58] <Gizmo_x> NodeX: it's a performance issue, but team work is more important in big projects
[14:45:07] <TTimo> hello. I am using mongoengine/pymongo .. are there tools to log/time the mongodb traffic?
[14:45:11] <Alir3z4> NodeX: The same reason why folks use GUI libraries to create desktop applications
[14:45:33] <TTimo> e.g. similar to logging ORM queries in Django when working against a traditional RDBMS
[14:45:35] <NodeX> Gizmo_x : I dont work in big teams so that's probably why
[14:46:03] <Gizmo_x> NodeX: ok just discussing, don't take it personally please
[14:46:14] <NodeX> I am not taking it personally
[14:46:26] <Alir3z4> he didn't take it personally
[14:46:28] <Alir3z4> :D
[14:46:33] <NodeX> Performance is and should be the number one priority of an app
[14:46:55] <Alir3z4> No actually i use these tools to speed up my work
[14:47:32] <NodeX> to the detriment of performance
[14:47:52] <NodeX> and that's the reason most of us use mongo... is for performance
[14:48:01] <NodeX> and/or scalability
[14:48:28] <algernon> then I'll let others worry about performance. :P
[14:48:30] <NodeX> PoC is a little different
[14:48:41] <NodeX> I lay POC out in one file LOLOL
[14:50:09] <Alir3z4> Performance shouldn't be the number one priority. If that were so, we should all create programs in 0101010010101
[14:50:38] <Alir3z4> I like to use mongo for scalability
[14:50:55] <NodeX> I feel sorry for the users of your apps if you dont prioritise it, or the admins of your servers
[14:51:12] <ppetermann> wonder how you scale without performance :)
[14:51:29] <NodeX> +1
[14:51:31] <Alir3z4> No i'm not saying forget about performance
[14:51:50] <NodeX> getting the user in and out of your stack in the fastest time is key
[14:51:58] <Quantumplation> NodeX: I can't seem to distill out exactly what the C# drivers are querying, but if I were to type it manually it'd be http://pastebin.com/9P1JY4Rc. Without knowing exactly what query C# is running though, that's not much help.
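For reference, when both sort and limit are set on a cursor the server applies the sort first and then the limit, whatever order the modifiers are chained in; a shell sketch of the intended query (field names assumed from the description above):

    // the 50 newest items, not 50 arbitrary items re-sorted
    db.items.find().sort({ date: -1 }).limit(50)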
[14:51:58] <Alir3z4> I'm saying that i shouldn't sacrifice for performance
[14:52:08] <ppetermann> sacrifice what?
[14:52:45] <NodeX> Quantumplation : do you have indexes on both machines?
[14:53:04] <Alir3z4> A lot of things. Time, resources and a lot of other things
[14:53:22] <Quantumplation> it's using the same database. (both connecting remotely to another server)
[14:53:30] <Alir3z4> But i guess because we are not from the same industry, we can't agree on this part
[14:53:48] <NodeX> what industry are you in?
[14:53:57] <ppetermann> NodeX: hardware sales i'd guess
[14:54:03] <Alir3z4> =))
[14:54:05] <NodeX> Quantumplation : are you saying there is a problem with the query ?
[14:54:44] <Alir3z4> Web-development somehow
[14:55:02] <NodeX> me too
[14:55:08] <ppetermann> oh i did that in the 90s
[14:55:15] <Alir3z4> No way!
[14:55:23] <ppetermann> still doing it "somehow"
[14:55:32] <Quantumplation> NodeX: I'm not sure. All I know is that the results returned when i run it on my local machine are correct, but the results returned when it's run from my deployed environment are incorrect.
[14:56:07] <Alir3z4> And of course desktop/mobile/ apps
[14:56:18] <Alir3z4> But mainly i like to work with mongo for the web
[14:56:27] <ppetermann> and when you end up serving enough concurrent requests, you will realize that performance is an integral part of whats called scalability
[14:56:27] <Alir3z4> mongodb*
[14:56:45] <Alir3z4> We just put in more servers
[14:56:55] <Alir3z4> optimize the code itself
[14:57:19] <Alir3z4> hurting ourselves with load balancers, nginx :d and others
[14:57:35] <NodeX> you can only optimise so much before you realise that your framework is the bottleneck
[14:57:50] <NodeX> then you're in a bad place because you have to re-write the whole app
[14:57:52] <ppetermann> well that depends on the framework ;)
[14:58:03] <NodeX> I have never seen one that's not bloated
[14:58:21] <Alir3z4> We can't say that about Django and other well-known web-frameworks out there
[14:59:22] <NodeX> I can't comment because I dont use python
[14:59:24] <ppetermann> sure i can say something about django. django is a nice framework to build average content serving websites on
[15:01:16] <Alir3z4> NodeX: What do you use?
[15:01:42] <NodeX> mostly php
[15:01:53] <NodeX> a touch of node.js
[15:02:01] <Alir3z4> ppetermann: I wrote a complete accounting app with it, and that was perfect with django, especially with its ORM
[15:02:48] <ppetermann> i sure hope you didn't use mongo there.
[15:03:05] <Alir3z4> i start the web with php and then went to rails for a year and half and then jump into django/python
[15:03:42] <ppetermann> imajes: haven't seen you in ages
[15:03:43] <ppetermann> how you doing
[15:03:44] <Alir3z4> ppetermann: Of course i didn't, accounting itself is full of relational data, which makes it hard to work with mongo and nosql
[15:03:53] <NodeX> php is great if you stay away from frameworks lol
[15:04:08] <ppetermann> Alir3z4: if THAT's your problem..
[15:04:14] <ppetermann> NodeX: i disagree
[15:04:27] <NodeX> ppetermann : that's your opinion ;)
[15:04:34] <NodeX> which you are entitled to
[15:04:49] <Alir3z4> ppetermann: 2-3 years ago we were laughing at NoSQL :D
[15:05:24] <Alir3z4> Now, NoSQL is laughing at us :D
[15:05:49] <ppetermann> Alir3z4: speak for yourself.
[15:06:04] <NodeX> I've been using mongo for a little over 2 years myself
[15:06:14] <ppetermann> about 3 years here
[15:06:18] <NodeX> and have not touched SQL in all that time
[15:06:26] <ppetermann> plus minus a month or two
[15:06:32] <ppetermann> oh, i have
[15:06:44] <NodeX> (apart from managing my mail server accounts)
[15:06:50] <ppetermann> because i wouldnt want anything that needs transactional safety done in mongo :)
[15:07:01] <Alir3z4> About 1 minute here, but it's been a couple of weeks that i've been trying to fit it into my stack
[15:07:06] <NodeX> I dont do transactional stuff so I'm ok
[15:09:08] <airportyh> Hi all, how do I write a group query using runCommand?
[15:10:08] <airportyh> example: https://gist.github.com/3762060 how would I convert that to use runCommand, so that I can get the "explain" stats?
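The gist itself isn't reproduced here, but the general shape of group via runCommand looks like this (a sketch borrowing the field names from NodeX's aggregate example below):

    db.runCommand({
        group: {
            ns: "collection",
            key: { element_id: 1 },
            cond: { browser: "Chrome", os: "MacOS", device_type: "Desktop" },
            $reduce: function (doc, out) { out.total += 1; },
            initial: { total: 0 }
        }
    })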
[15:10:21] <Alir3z4> Thank you all
[15:10:35] <NodeX> are you on 2.2 airportyh ?
[15:10:41] <NodeX> Alir3z4 : no probs, good luck ;)
[15:10:43] <airportyh> yes
[15:10:55] <NodeX> use the aggregation framework - much easier
[15:11:05] <NodeX> much faster too
[15:11:28] <airportyh> I thought I was using it in the example?
[15:11:46] <airportyh> isn't the "group" method part of the aggregation framework?
[15:11:51] <NodeX> no lol
[15:12:14] <NodeX> db.collection.aggregate({....});
[15:12:41] <NodeX> yours would be something along the lines of ....
[15:14:57] <airportyh> Ah, I see
[15:15:14] <airportyh> I saw this page titled Aggregation http://www.mongodb.org/display/DOCS/Aggregation and I assumed it was the page for aggregation framework
[15:16:50] <NodeX> db.collection.aggregate({$match:{browser:'Chrome', os:'MacOS', device_type:'Desktop'}}, {$group:{_id:"$element_id", total:{$sum:1}}});
[15:16:55] <NodeX> something like that
[15:17:16] <NodeX> probably a few mismatches as it's late on friday!!
[15:17:25] <airportyh> thanks!
[15:17:31] <airportyh> I'll give it a try
[15:19:12] <imajes> ppetermann: i usually hang out here :) where did we last meet/hangout
[15:20:40] <ppetermann> imajes: puh.. #php on efnet?
[15:20:59] <ppetermann> or was it #php-bugs?
[15:21:09] <ppetermann> or was it some doctrine related channel on this network - can't tell
[15:21:22] <ppetermann> i just saw your nick and was like "oh, right, james, i haven't seen that guy in ages"
[15:23:06] <imajes> heh, :)
[15:23:10] <imajes> i remember
[15:23:16] <imajes> prolly php-bugs
[15:31:21] <ppetermann> imajes: i think i used "disasta" there
[15:31:26] <ppetermann> as a nickname
[15:31:34] <imajes> yeah, that was right
[15:31:42] <imajes> i remember you changed bout the same time as i left
[15:32:46] <ppetermann> =)
[15:33:07] <ppetermann> anyway, beertime
[15:37:53] <imajes> :)
[16:26:51] <diegoviola> i'm trying to have an ajax/websocket thing that will retrieve changes in a collection every time there's a change... any ideas how i can accomplish this from the mongodb side?
[16:27:00] <diegoviola> how do i know when there's a change in a collection, etc
[16:40:47] <NodeX> diegoviola : you'll have to sort that in your app layer
[16:45:34] <diegoviola> ok ty
[17:22:18] <EatAtJoes> I installed this sonata admin bundle distro, and the demo page has no css applied. Is that normal?
[18:39:38] <alyman> Is there any way to provide a $hint for the findAndModify command?
[20:14:02] <diegoviola> i'm working with some code that saves data to a mongodb database, and i want to show notifications with websockets or ajax when data is saved in a collection... but i don't think mongodb supports triggers or anything like that, what do you guys recommend?
[20:14:15] <diegoviola> nvm
[20:14:45] <diegoviola> https://jira.mongodb.org/browse/SERVER-124
[20:14:48] <diegoviola> i've just seen this
[20:29:33] <diegoviola> any ideas please?
[20:37:44] <crudson> you can add websockets support at the tier that collects and saves data to mongodb
[20:37:57] <crudson> and pushes to websocket client at that time
[20:38:28] <crudson> rather than saving, then waiting for some notification from mongodb (which doesn't exist)
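One partial mongo-side option, if the data can live in a capped collection, is a tailable cursor; a sketch (this only approximates a trigger, which is why the app-layer push above is the usual advice):

    // tailable cursors only work on capped collections
    db.createCollection("notifications", { capped: true, size: 1048576 })
    var cur = db.notifications.find()
                .addOption(DBQuery.Option.tailable)
                .addOption(DBQuery.Option.awaitData)
    while (cur.hasNext()) {
        printjson(cur.next())  // blocks briefly waiting for new inserts
    }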
[20:39:11] <crudson> I wrote a blog on this a while ago for saving and broadcasting gps data with ruby/mongo/websockets/html5
[20:39:26] <crudson> it's pretty easy to tie together
[20:49:28] <diegoviola> crudson: cool, do you have a link?
[20:55:35] <crudson> found it on a machine here that isn't connected publicly at the moment - do you use ruby? I can pm you a copy of the page
[20:57:34] <diegoviola> ruby, yes
[21:12:08] <crudson> let me gitify it and put it on github
[21:21:31] <crudson> diegoviola: I have to pull together a few repos and blog posts to make it nice and pretty. pm me your email address and I will send you a msg in the next couple days.
[21:21:47] <diegoviola> crudson: thanks
[21:22:06] <diegoviola> pm'ed
[22:53:03] <jiffe98> is there a procedure for replacing a replica with another machine with the same hostname and IP?
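A hedged sketch of the usual approach: because the hostname and IP match the existing member entry in rs.conf(), no reconfig should be needed; the replacement machine just initial-syncs into the old member's slot:

    // on the new machine: same mongod options as the old member, empty dbpath
    //   mongod --replSet myset --dbpath /data/db ...
    // from the primary, watch it move through STARTUP2/RECOVERING to SECONDARY
    rs.status()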