PMXBOT Log file Viewer


#mongodb logs for Friday the 12th of October, 2012

[04:11:58] <jnh> Hey all.
[04:12:21] <jnh> I'm unable to compile MongoDB from source on my Linux PowerPC box:
[04:12:49] <jnh> g++ -o build/linux2/normal/mongo/base/status.o -c -Wnon-virtual-dtor -Woverloaded-virtual -fPIC -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -pipe -fno-builtin-memcmp -O3 -DBOOST_ALL_NO_LIB -D_SCONS -DMONGO_EXPOSE_MACROS -DSUPPORT_UTF8 -D_FILE_OFFSET_BITS=64 -DJS_C_STRINGS_ARE_UTF8 -DMONGO_HAVE_HEADER_UNISTD_H -DMONGO_HAVE_EXECINFO_BACKTRACE -DXP_UNIX -Ibuild/linux2/normal/third_party/boost -Isrc/
[04:12:49] <jnh> third_party/boost -Ibuild/linux2/normal/third_party/pcre-8.30 -Isrc/third_party/pcre-8.30 -Ibuild/linux2/normal -Isrc -Ibuild/linux2/normal/mongo -Isrc/mongo -Ibuild/linux2/normal/third_party/snappy -Isrc/third_party/snappy -Ibuild/linux2/normal/third_party/js-1.7 -Isrc/third_party/js-1.7 src/mongo/base/status.cpp
[04:12:49] <jnh> {standard input}: Assembler messages:
[04:12:51] <jnh> {standard input}:592: Error: Unrecognized opcode: `lock'
[04:12:53] <jnh> {standard input}:1144: Error: Unrecognized opcode: `lock'
[04:12:55] <jnh> scons: *** [build/linux2/normal/mongo/base/status.o] Error 1
[04:12:57] <jnh> scons: building terminated because of errors.
[04:12:59] <jnh> any ideas?
[04:26:50] <jrdn> Herro
[04:27:40] <jrdn> It seems that about once a day our mongodb's load spikes… what typically causes this?
[04:28:24] <mrpro> herro
[04:29:31] <codemagician> Why does my mongod process say there are still two connections open even though I have no open connections?
[04:29:57] <codemagician> "got signal 2 (Interrupt: 2), will terminate after current cmd ends"
[04:30:12] <codemagician> "[conn23] end connection 127.0.0.1:55016 (2 connections now open)"
[04:31:17] <mrpro> what makes you so sure?
[04:41:18] <jrdn> oOOOOo
[04:41:25] <mrpro> ?
[04:41:30] <jrdn> does anyone here actually utilize pre-aggregations ?
[04:41:42] <jrdn> and not just a single field, but more granular (minutely / hourly / daily / etc)
[04:43:56] <mrpro> jrdn
[04:44:02] <mrpro> like doing it urself?
[04:44:08] <jrdn> yeah.
[04:44:21] <mrpro> yea i did something that does 10sec counts
[04:44:25] <jrdn> I'm doing it right now but it's turned out to be kind of a pain
[04:44:28] <mrpro> like how many requests per 10 seconds
[04:44:36] <mrpro> but i did it in one field
[04:44:46] <jrdn> i see!
[04:44:52] <mrpro> like 12:45:10… 12:45:20 … etc
[04:44:56] <mrpro> dunno if its right, just getting a feel for it
[04:45:00] <mrpro> i want to make it configurable
[04:45:15] <jrdn> each one is a document?
[04:45:22] <mrpro> field
[04:45:27] <jrdn> for an entire day?
[04:45:33] <mrpro> yea like forever
[04:45:33] <mrpro> lol
[04:45:44] <mrpro> i can make it a doc tho
[04:45:46] <jrdn> wait one document holds 10 minute aggregations for every 10 minutes?
[04:45:50] <mrpro> actually i wanted to make it an array
[04:45:52] <mrpro> 10 second
[04:45:56] <jrdn> 10 seconds*
[04:46:00] <mrpro> yaw
[04:46:04] <jrdn> won't you hit the 16mb document limit? lol
[04:46:10] <mrpro> good point
[04:46:13] <jrdn> haha
[04:46:21] <mrpro> this isnt in prod :)
[04:46:28] <jrdn> oh good :P
[04:46:48] <jrdn> I have a document for a single day and I store every second of the day essentially
[04:46:49] <mrpro> maybe i'll do a document per day
[04:46:54] <mrpro> ahh
[04:46:59] <jrdn> I have "hourly" and "minutely" fields
[04:47:00] <mrpro> do you do an array or a field for each
[04:47:13] <jrdn> one sec
[04:51:51] <jrdn> @mrpro, my schema looks like https://gist.github.com/ba8fb717525f3058dd6c
[04:52:24] <jrdn> oops, plus I have a daily count too in there
[04:53:06] <mrpro> oahh
[04:53:11] <mrpro> you use github and shit
[04:53:16] <mrpro> entrepreneur type of guy?
[04:53:26] <jrdn> sorta. :P
[04:53:33] <mrpro> oh shit
[04:53:34] <mrpro> tt.fm bot
[04:53:36] <mrpro> i am on tt.fm all the time
[04:54:05] <jrdn> Ah, yeah I only toyed with the bot a while ago. Didn't finish it.
[04:54:11] <mrpro> ah
[04:54:12] <mrpro> what room
[04:54:42] <jrdn> anyway, as for the schema, it works well if you want the document in line.. but if you need to transform anything it becomes a pain in the ass
[04:55:02] <mrpro> new aggregation framework no?
[04:55:06] <jrdn> i was wondering if anyone has done something similar but then used the aggregation framework on top of it
[04:55:16] <jrdn> we have too much data for the aggregation framework :(
[04:55:28] <mrpro> it has limitation?
[04:55:40] <jrdn> not fast enough for real time charts :P
[04:56:04] <jrdn> http://dl.dropbox.com/u/65317585/Screenshots/_yhe.png
[04:56:27] <jrdn> stuff like that
[04:57:10] <mrpro> nice charts
[04:57:12] <mrpro> what do you use for that
[04:57:17] <jrdn> www.highcharts.com
[04:57:24] <mrpro> you really need real time?
[04:57:38] <jrdn> duh ;)
[04:57:41] <mrpro> i mean, whatever is inserting into mongo can also publish on UDP
[04:57:48] <mrpro> and feed the charts
[04:58:02] <jrdn> Yeah of course, there's always that
[04:58:03] <mrpro> rabbitmq or w/e
[04:58:44] <jrdn> That's later down the road.
[04:58:54] <mrpro> nice, gonna link my web developer to it
[04:59:11] <mrpro> i need a good one for ios tho
[04:59:27] <jrdn> But none-the-less, if something is broken for 5 minutes in our industry
[04:59:38] <jrdn> that can lead to a few thousand bucks that we lost
[04:59:59] <mrpro> broken how?
[05:00:17] <jrdn> so having the data pre-aggregated to send out alerts is the way to go vs using the aggregation framework for it
[05:00:20] <mrpro> ah so you move ads around real-time?
[05:00:21] <jrdn> unless you don't have that much data
[05:00:26] <jrdn> yeah basically
[05:00:35] <mrpro> yea i wouldn't think you'd wanna keep aggregating same shit
[05:00:36] <jrdn> we're essentially an ad server
[05:00:40] <jrdn> but our own partners and our own products
[05:00:41] <mrpro> thats cool man
[05:00:44] <mrpro> do you do affils?
[05:00:59] <mrpro> you know graphite?
[05:01:10] <mrpro> maybe u can feed graphite and then extract from graphite for charts
[05:01:26] <jrdn> so if a partner's site is down, basically we'd know if an aggregation showed for $0 within a 5 minute period so to say (although we take the average $ per minute over the last 7 days then gear alerts / metrics based off that)
[05:01:37] <jrdn> some stuff is affiliate based, yes
[05:01:45] <jrdn> we do a lot of lead gen
[05:01:45] <mrpro> i need affil
[05:01:57] <mrpro> can i ask u some vague questions about how it works?
[05:02:41] <jrdn> affiliate programs?
[05:02:52] <mrpro> ye
[05:03:20] <jrdn> it just comes down to how much money do you have to initially waste vs how good quality you can produce :P
[05:03:21] <mrpro> so from what i understand, if i charge for my product and an affil partner gets me a signup, i have to pay x% per signup coming from them
[05:03:28] <jrdn> OH
[05:03:39] <jrdn> yeah, basically
[05:03:41] <jrdn> there's tons of options
[05:03:43] <mrpro> but the question is
[05:03:50] <mrpro> what if my product is a month trial
[05:03:53] <jrdn> you can pay CPC / CPA, CPM, etc
[05:03:54] <mrpro> and after a month they sign up
[05:04:04] <mrpro> can it be arranged where affil gets paid if someone signs up in a month
[05:04:20] <jrdn> you should still pay out after the trial
[05:04:24] <jrdn> but yes, you can
[05:04:45] <jrdn> you just need to know the value of your users
[05:04:47] <jrdn> even if they don't sign up
[05:04:51] <mrpro> yea
[05:04:57] <mrpro> interesting
[05:05:19] <mrpro> but yea value is like average ppl signed up vs ppl that came right
[05:05:38] <mrpro> multiplied by TLV or w/e
[05:05:42] <jrdn> if you have 1000 trials a month and 100 signups, you take revenue / 1100, etc
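jrdn's arithmetic made explicit: value each funnel entrant at total revenue divided by everyone who entered, trials included. The revenue figure below is made up purely to show the division:

```javascript
// Value an affiliate-referred visitor at revenue / (trials + signups),
// per jrdn's "revenue / 1100" example above.
function valuePerUser(revenue, trials, signups) {
  return revenue / (trials + signups);
}

// 1000 trials and 100 signups in a month, with a hypothetical $22,000 revenue:
valuePerUser(22000, 1000, 100); // 20 dollars per funnel entrant
```

That per-entrant figure is what lets you pay an affiliate for a trial signup even before the trial converts.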
[05:05:46] <jrdn> yeah
[05:06:10] <mrpro> have you been using mongo for a while?
[05:06:22] <jrdn> ~ 5 months now in production
[05:06:24] <mrpro> we are developing a new product that uses it for DB but i am very weary so far
[05:06:36] <mrpro> what kinda setup you got…replica…shards…etc? …. do you regret?
[05:06:42] <jrdn> right now replica
[05:06:50] <jrdn> from my understanding, it's best to scale up, not out
[05:06:59] <mrpro> i.e.. more CPU ram?
[05:07:00] <jrdn> we'll shard eventually
[05:07:08] <jrdn> we're on EC2 currently
[05:07:14] <mrpro> oh
[05:07:19] <mrpro> you must be paying a shitload
[05:07:29] <jrdn> ~10k / month :(
[05:07:29] <jrdn> haha
[05:07:32] <mrpro> i am just renting dedicated servers from diff providers
[05:07:45] <mrpro> for like 150/mo get 10TB BW
[05:07:57] <mrpro> good to start with in my case
[05:08:10] <mrpro> btw, your replica is in diff physical loc?
[05:08:11] <jrdn> we just hired someone who works for EC2 to manage our servers… we were starting to do our own puppet management and stuff, but moved it over to someone else so we can focus on products and good code
[05:08:18] <jrdn> yeah.
[05:08:20] <jrdn> it is
[05:08:32] <mrpro> whats the latency like
[05:08:35] <mrpro> between those two
[05:08:42] <mrpro> when you save to mongo do you do safemode.w2?
[05:08:44] <jrdn> Hmm...
[05:08:59] <jrdn> it's all on the east coast in a different availability zone on AWS
[05:09:05] <jrdn> so it's near instant.. no delay
[05:09:11] <mrpro> oh
[05:09:12] <mrpro> got it
[05:09:18] <jrdn> but… i don't know how your app works
[05:10:02] <jrdn> but what we're going to do next is run Mongods on every app server that is kind of like a replica (not set up as a replica… AMPQ is going to send new data from one main replica to each web server)
[05:10:15] <jrdn> so all reads will actually be local, if the read doesn't exist, then it goes to replica
[05:10:29] <mrpro> ah
[05:10:38] <jrdn> then because mongo's writes are fast and asynchronous, we're going to write to the remote master as well as local
[05:10:41] <mrpro> like memcache sorta?
[05:10:44] <jrdn> yep
[05:11:02] <mrpro> mongos writes are fast?
[05:11:07] <jrdn> yep
[05:11:10] <mrpro> how so? because mongos journals and lets go?
[05:11:12] <mrpro> or you are not using safemode
[05:11:17] <jrdn> yeah no safemode
[05:11:20] <mrpro> oh then
[05:11:22] <mrpro> u can lose some
[05:11:22] <jrdn> so writes queue in memory
[05:11:24] <mrpro> but i guess u dont care
[05:11:27] <mrpro> you have streaming data
[05:11:33] <mrpro> my app needs durability lol
[05:11:42] <mrpro> and transactions too
[05:11:46] <jrdn> :X!
[05:11:46] <mrpro> i had to come up with some clever shit
[05:12:14] <jrdn> we're using it because it's mainly analytic stuff
[05:12:15] <mrpro> we coded our datalayer such that we can probably move to a regular RDBMS within a day or two
[05:12:16] <mrpro> just in case
[05:12:17] <jrdn> we do track revenue
[05:12:17] <Oddman> hence why you're called mrpro
[05:12:26] <mrpro> Oddman: not really
[05:12:31] <Oddman> what language, mrpro?
[05:12:36] <mrpro> mono c#
[05:12:39] <jrdn> mrpro, you should always code like that ;)
[05:12:40] <mrpro> another risky move
[05:12:40] <mrpro> haha
[05:12:41] <Oddman> ugh
[05:12:50] <mrpro> mongodb + mono c# <-- FML
[05:12:55] <Oddman> haha
[05:12:57] <jrdn> but i'm set on mongo currently
[05:13:00] <jrdn> don't even think about mysql
[05:13:02] <Oddman> why mono c#?
[05:13:05] <mrpro> oddman
[05:13:07] <mrpro> i love c#
[05:13:10] <mrpro> and didnt wanna pay for wind0z
[05:13:14] <Oddman> i see
[05:13:16] <mrpro> worst case we move to windows
[05:13:25] <jrdn> mrpro, why do you have to do safe = true on everything?
[05:13:27] <mrpro> we develop on windows desktops and then our CI builds using mono compiler
[05:13:31] <mrpro> jrdn: yep
[05:13:39] <mrpro> everything is safe
[05:13:40] <mrpro> :)
[05:13:42] <jrdn> why?
[05:13:48] <mrpro> cause we cant miss writes
[05:13:56] <mrpro> and notify client of success when it really isnt
[05:14:03] <mrpro> then client will be outta sync with server for next call
[05:14:14] <jrdn> hRmmm
[05:14:25] <mrpro> server has to be able to fail and read everything from DB when it comes back up and be in same state that the client is
[05:14:33] <mrpro> otherwise whatever client is sending it will not make any sense
[05:14:42] <mrpro> its a state machine basically
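The trade-off mrpro and jrdn are debating maps onto the driver's write-concern settings. The option shapes below follow the node driver's modern `writeConcern` form; the 2012-era drivers spelled the same idea as `safe: true` / `w: 2` on the operation options, so treat the exact spelling as version-dependent:

```javascript
// Fire-and-forget: the driver does not wait for server acknowledgement,
// so a failed write can be silently lost (jrdn's analytics use case).
const unacknowledged = { w: 0 };

// mrpro's "everything is safe": success is reported only after the primary
// AND one secondary have the write (what the chat calls safemode w2).
const replicaAcknowledged = { w: 2 };

// Journaled on the primary: the write survives a crash of mongod itself.
const journaled = { w: 1, j: true };

// Usage sketch (collection and doc are hypothetical):
// collection.insertOne(doc, { writeConcern: replicaAcknowledged });
```

The cost is latency per write; the benefit is exactly what mrpro's state machine needs — never telling the client "saved" when it wasn't.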
[05:14:48] <jrdn> yeah then why are you using mongo ;P
[05:14:59] <mrpro> cause its WEB SCALE
[05:15:01] <mrpro> :)
[05:15:15] <jrdn> you hipster
[05:16:10] <jrdn> our master has stepped down a few times due to latency, and we never really lost any data
[05:18:01] <jrdn> our last app was on mongo and was heavily write intensive… so we had to use mongo for writes
[05:18:22] <jrdn> to make mysql work before, we had to queue some writes in memcached then to mysql or to a mysql memory table, etc
[05:18:25] <jrdn> :X!
[05:19:22] <jrdn> but anyway, we're moving our environment so that our apps can run even if we completely lost all our mongos which won't happen.. we're going to have 1/2 servers on EC2 and 1/2 on Rackspace eventually
[05:22:04] <mrpro> damn
[05:22:10] <mrpro> isnt EC2 bad
[05:22:14] <mrpro> if a bad disk gets mounted
[05:22:20] <mrpro> your mongo will get a backlog
[05:25:20] <jrdn> yeah which is what monitoring is for
[05:25:25] <jrdn> ;p
[05:32:05] <mrpro> gawd
[05:32:08] <mrpro> Mongo is trouble
[05:32:08] <mrpro> :p
[05:32:27] <mrpro> i wanna use mongo cause user data in our app is completely separate fro other users
[05:32:33] <mrpro> so we will be able to shard by USER/STATE easily
[05:32:45] <mrpro> probably by country/state
[05:33:13] <mrpro> so its a pain with all the safemode crap and stuff like that, plus it takes time to come up with some clever stuff
[05:33:35] <mrpro> but i think so far we're doing ok with that… if all that works out, we should get reliability and also ability to shard it later on
[05:33:37] <mrpro> win win :)
[05:33:54] <LouisT> are EC2s worth it? i've been told to try the free one, but i fear i'd blow through the 15GB in no time
[05:34:13] <mrpro> i dont think they are
[05:34:18] <mrpro> i think its a ripoff
[05:34:19] <mrpro> :)
[05:34:40] <LouisT> i like my cheapish VPS'
[05:35:47] <mrpro> dedicated is pretty cheap
[05:35:53] <mrpro> i got decent ones for 150-200 a month
[05:36:04] <mrpro> i got a linux admin so he sets everything up, i dont have to bother with it
[05:36:31] <LouisT> oh.. setting stuff up is the fun part tbh
[05:39:04] <mrpro> no way
[05:39:11] <mrpro> vpn, raid…etc? :)
[05:39:18] <mrpro> ldap and other crap
[06:23:32] <arex\> Any tips for where to create a new tech blog?
[06:39:57] <chovy> how do i save an object?
[06:40:34] <chovy> http://pastie.org/5039730
[06:50:46] <crudson> chovy: find() returns a cursor to be iterated over, not a single document
[06:55:33] <chovy> crudson
[06:55:38] <chovy> i'm looking at the example here
[06:56:18] <chovy> oh. nm. it's using findOne()
[06:56:23] <chovy> hansk
[06:56:24] <chovy> thanks
[07:00:00] <crudson> :)
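chovy's bug in miniature: `find()` hands back a cursor to iterate, while `findOne()` hands back the document itself. These toy stand-ins (no server needed) show why reading a field straight off the `find()` result fails:

```javascript
// Minimal stand-ins for the two shapes the driver/shell returns.
const docs = [{ name: "ada" }, { name: "bob" }];

function find() {
  // A cursor: something you iterate, not a document.
  let i = 0;
  return { hasNext: () => i < docs.length, next: () => docs[i++] };
}

function findOne() {
  return docs[0]; // The first matching document itself.
}

findOne().name;     // "ada"
find().name;        // undefined -- the cursor has no such property
find().next().name; // "ada" -- iterate first, then read fields
```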
[07:58:39] <[AD]Turbo> yo
[09:23:54] <Dantas> hi all ! I'm on a new project using mongoose and mongodb, but when I try to query all documents ( 200k documents ) through mongoose it's too slow ( average response 5 seconds ). But when using the mongo repl, the response is immediate. What am I doing wrong?
[09:30:34] <remonvv> You're doing two different things most likely.
[09:30:44] <remonvv> Post your code and the shell query in a pastie.
[09:33:34] <Dantas> remonvv: Ok , i will do it right now
[10:48:17] <Guest52366> I am seeing a lot many mongo "down/slow to respond" in "rs" logs, in our live AWS replica-servers, the load is just normal, no network issues, the flip pattern looks like very random... any suggestions what to look at
[11:49:07] <tunnuz> Hi everyone
[11:49:21] <tunnuz> just a quick question: is it possible to compile mongodb with clang++/libc++?
[11:52:25] <tunnuz> (I'm building it from source.)
[12:13:56] <tunnuz> Ok, apparently scons --clang all is doing the trick, however I also need to compile it with libc++, how do I force that?
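For the libc++ half of tunnuz's question, no dedicated flag is claimed here; whether this era's SConstruct honors compiler/flag overrides is an assumption, and `scons --help` in the source tree lists what it actually supports. The generic SCons route would look like:

```shell
# Hypothetical invocation -- CXX/CXXFLAGS/LINKFLAGS overrides are an
# assumption about this SConstruct; verify against `scons --help`.
scons --clang \
      CXX=clang++ \
      CXXFLAGS="-stdlib=libc++" \
      LINKFLAGS="-stdlib=libc++" \
      all
```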
[12:51:20] <Lipathor> hi, i have problem with map/reduce, or key names, or sth. else
[12:51:44] <Lipathor> i have a document with keys 1,2,3..7
[12:51:59] <Lipathor> and the values of these keys are arrays
[12:52:14] <Lipathor> i want to do in map function: this.1.forEach
[12:52:54] <Lipathor> but i'm still getting SyntaxError :/
[12:53:01] <Lipathor> what's wrong?
[13:09:31] <NodeX> I'm pretty sure you can't do what you want because in Mongo you can access the first array member with foo.0.....
[13:12:40] <remonvv> this["1"]
[13:12:58] <NodeX> remonvv to the rescue
[13:13:44] <remonvv> well your answer is better, i'm uncomfortable with integer values as field names.
[13:13:50] <remonvv> for exactly this reason
[13:13:56] <NodeX> +1
[13:35:59] <tunnuz> Hi, is there anybody in there?
[13:36:11] <Lipathor> well i tried this["1"]
[13:36:16] <BlackPanx> hi guys
[13:36:23] <Lipathor> and I got TypeError: this['1'] has no properties
[13:36:25] <Lipathor> :/
[13:36:35] <BlackPanx> hows with scheduled release of 2.2.1 version today ? will it go live ?
[13:36:51] <tunnuz> Does anyone know if is it possible to compile mongodb with libc++?
[15:33:25] <remonvv> Lipathor, is "this" the actual document in your case?
[15:34:47] <remonvv> basically what you can do on JS object X where X is the document is X["1"] :
[15:34:52] <remonvv> mongos> db.test.save({"1":1})
[15:34:52] <remonvv> mongos> doc = db.test.find().next()
[15:34:52] <remonvv> { "_id" : ObjectId("5078380d59eb5dfed26cfea7"), "1" : 1 }
[15:34:52] <remonvv> mongos> doc["1"]
[15:34:52] <remonvv> 1
[15:34:55] <remonvv> oops
[15:35:01] <remonvv> well, that
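remonvv's point holds in any JavaScript, not just the shell: `this.1` is a syntax error because an identifier cannot start with a digit, while bracket notation reaches the same field fine. Lipathor's later `TypeError: this['1'] has no properties` most likely means some documents simply lack that field, so a guard is worth adding:

```javascript
const doc = { "1": ["a", "b", "c"] };

// doc.1.forEach(...)  <-- SyntaxError: property names can't start with a digit

// Bracket notation works, inside map functions and everywhere else.
// The guard covers documents where the field is missing.
const seen = [];
if (doc["1"]) {
  doc["1"].forEach(function (v) { seen.push(v); });
}
// seen is ["a", "b", "c"]
```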
[16:00:24] <bhosie> is there a query to get the size of a document, or to get the id of the largest document in a collection?
[16:01:02] <NodeX> no
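For what it's worth, the mongo shell does expose `Object.bsonsize(doc)` for a single document's size; there is just no server-side query for "largest document", so finding it means scanning. A portable sketch of that scan, using JSON length as a rough stand-in for BSON size so it runs anywhere:

```javascript
// Approximate "largest document" over an in-memory batch. JSON length is a
// proxy for BSON size here; in the mongo shell, Object.bsonsize(d) gives the
// real byte count inside the same loop.
function largest(docs) {
  let best = null, bestSize = -1;
  for (const d of docs) {
    const size = JSON.stringify(d).length;
    if (size > bestSize) { best = d; bestSize = size; }
  }
  return best;
}

largest([{ _id: 1 }, { _id: 2, blob: "x".repeat(100) }]); // the _id: 2 doc
```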
[16:01:17] <thesteve01> with the nodejs driver when I do an insert it looks like a doc is the object passed into the callback
[16:01:26] <thesteve01> what is that parameter actually?
[16:01:37] <thesteve01> is it the doc I just inserted?
[16:06:30] <timeturner> yes
[16:11:35] <thesteve01> timeturner: was that to me?
[16:11:40] <vdudukgian> hey, i'm having a moment of dumbness
[16:12:09] <vdudukgian> how can i query for rows that have a particular value for a key which consists of an array of strings?
[16:13:09] <vdudukgian> {$elemMath : 'thestring'}
[16:13:10] <vdudukgian> ?
[16:14:08] <Gargoyle> vdudukgian: You can just treat arrays like normal fields.
[16:14:19] <Gargoyle> field.name: 'value'
[16:14:32] <vdudukgian> really? lemme test that out. thanks
[16:14:33] <vdudukgian> !
[16:31:11] <emehrkay> how is mongo's performance when searching for something like a tag against millions of documents?
[16:33:49] <Derick> emehrkay: depends hugely on how it's indexed
[16:39:55] <emehrkay> What we're doing now is breaking down documents into keywords (and their properties; date, weight, etc.) and then searching against them by using an IN clause (mysql). We have millions of rows in mysql, but i feel that the number of entries could be drastically reduced using something like mongodb. Id still need to search against all documents. I want to explore this a bit more, just figured Id ask if anyone has done anything simil
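Derick's "depends on how it's indexed" in miniature: without an index a tag query scans every document; with one it probes a structure keyed by tag and touches only the matches. A toy inverted index (all names illustrative) shows the shape; in MongoDB itself, indexing an array field (`db.docs.ensureIndex({tags: 1})` in the 2012-era shell) builds a multikey index with one entry per array element, which is exactly what replaces the big `IN` clause:

```javascript
// Build an inverted index from tag -> documents: conceptually what a
// multikey index over an array field gives you.
function buildTagIndex(docs) {
  const index = new Map();
  for (const doc of docs) {
    for (const tag of doc.tags) {
      if (!index.has(tag)) index.set(tag, []);
      index.get(tag).push(doc);
    }
  }
  return index;
}

const sample = [
  { _id: 1, tags: ["mongodb", "irc"] },
  { _id: 2, tags: ["mysql"] },
];
const byTag = buildTagIndex(sample);
byTag.get("mongodb"); // [{ _id: 1, tags: ["mongodb", "irc"] }] -- no full scan
```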
[17:42:24] <hdon> hi all :) been a little while since i used scons. how can i tell it to take advantage of multiple cpu cores? i hit ^c and tried adding -j5 (i have 4 cpu cores) to my command (scons .) but ---- oh, there it goes. it's using them all now
[17:43:02] <hdon> new question: why does it look like scons is configuring build parameters when i run my scons command again, even after the build was already underway? is that normal scons behavior, or mongodb behavior?
[17:54:14] <hdon> why does scons check for pcap when building mongodb?
[18:06:30] <hdon> does mongo have man(1) pages?
[18:59:42] <ArturoVM> Hi there. I'm looking for help with the MongoDB Node.js driver, is this the right place to ask?
[19:11:48] <hdon> ArturoVM, i'm about to go on the same adventure :)
[19:12:01] <hdon> ArturoVM, might as well ask your question :)
[19:13:47] <ArturoVM> Oh, good luck, then :)
[19:14:04] <ArturoVM> Well my question is about usage and best practices regarding connections.
[19:14:15] <hdon> ArturoVM, i'm looking in npm now to see if there's one in there. is that where you got it?
[19:15:20] <hdon> i'm using node 0.8.11
[19:16:06] <ArturoVM> I'm using 0.8.9
[19:16:14] <ArturoVM> and yeah, that's where I got the driver
[19:16:22] <hdon> what npm packages did you install?
[19:16:57] <hdon> also... do you know how to use -g in npm? when i use -g, i never get the dependencies installed. and when i don't use -g, the dependencies are installed in subdirectories of each package :\
[19:17:12] <ArturoVM> mongodb
[19:17:19] <ArturoVM> ah, do you use `sudo`?
[19:18:24] <ArturoVM> Generally npm installs global pkgs to folders where you need sudo to make changes.
[19:19:45] <ArturoVM> hdon, I'm sorry if that seems dumb to ask, but you never know :)
[19:19:47] <hdon> ArturoVM, yeah i use root when i use -g
[19:20:02] <hdon> you'd be even more right to ask if you knew i was on ubuntu :|
[19:21:40] <ArturoVM> So after you've installed a package with -g, you can't import it? Is that it?
[19:26:39] <hdon> what is the mongodb shell made of? is it a completely custom repl? it seems to be javascript. but i don't have Object.keys()
[19:27:07] <hdon> ArturoVM, i can require() it, but it throws an error trying to require() its dependencies
[19:27:14] <hdon> ArturoVM, so for now i've given up on globally installed modules
[19:32:19] <ArturoVM> hdon, that is really strange. But it does install dependencies when you run npm install?
[19:33:30] <hdon> ArturoVM, yes, but when a dependency is installed, it isn't installed in the same directory as its dependent module, it's installed in a subdirectory. let me see if i have an example dir laying around..
[19:34:43] <hdon> ArturoVM, actually, i see this is happening even without -g. *pasting*
[19:35:16] <hdon> http://pastebin.mozilla.org/1865213
[19:36:21] <hdon> so if i "npm install supermodule" then it ends up with ./node_modules/supermodule but its dependency ultramodule doesn't end up in ./node_modules/ultramodule it ends up in ./node_modules/supermodule/node_modules/ultramodule
[19:36:48] <hdon> like, wth. am i gonna end up with multiple copies of modules? will the copy that gets loaded depend on the order i load my modules in? many questions...
[19:36:59] <hdon> i'm not a big fan of npm but i find myself using it a lot
[19:38:33] <ArturoVM> Ugh :/ yeah, dependency management in npm is weird. But yep i think that every module installs its own dependencies, no matter if you have them already. I could be wrong, though. Not much of an npm expert :P
[19:39:35] <ArturoVM> Well, I've got to go. See you later. FTR, this is the question I was going to ask: http://stackoverflow.com/questions/11799953/whats-the-best-practice-for-mongodb-connections-on-node-js
[19:59:56] <storrgie> I got asked by someone to help out with their project, they are 'adding me to their github' but while I was waiting I scanned their project host and noticed that they had both mongodb and mongodb console open to the outside. Is this common? I've only ever worked with mysql
[20:00:21] <storrgie> with the mongo client I can connect to their database without any authentication and actually browse around
[20:00:28] <storrgie> this seems like a bad thing to me...
[20:03:14] <storrgie> I want to be able to give them some pointers on how to lock this down... but maybe this is the way you host a mongodb and I'm just new/ignorant
[20:06:14] <crudson> storrgie: set the bind_ip option to restrict addresses to listen on
[20:06:50] <storrgie> crudson, so this is not common to expose it to the whole world?
[20:07:10] <crudson> well, the machine firewall may prevent access, I'd look there too
[20:07:20] <crudson> but it depends on how you want it deployed
[20:07:20] <storrgie> I installed the mongo client and I was able to get in without auth... I'm guessing from there I could try to elevate access
[20:07:35] <storrgie> well, it seems bad practice to allow for the entire DB to be viewable to anyone right?
[20:08:05] <crudson> I'd say
[20:08:32] <storrgie> alright, well... like I said before I'm _really_ new to mongo (as of 10 minutes ago)
[20:08:43] <crudson> adding authentication to mongo itself is appropriate too
[20:08:45] <storrgie> I didn't want to write them an email and say they were doing something silly if I was just being ignorant
[20:09:27] <crudson> but not always done if access is restricted to localhost that the application server is running on
[20:10:08] <storrgie> right
[20:10:15] <storrgie> but this instance doesn't have any network limitations
[20:10:22] <storrgie> no firewall either, atleast for these ports
[20:10:28] <storrgie> so I can hit it from my laptop here
[20:12:48] <storrgie> Yeah, so I feel fairly confident this is not a good thing
[20:14:49] <crudson> right, at least restrict to localhost and tunnel through ssh if you want to connect remotely
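The lockdown crudson describes, as a 2012-era `mongod` config fragment (old-style option names; enabling auth also requires creating users, which is a separate step):

```
# /etc/mongodb.conf -- listen only on loopback so only local processes
# (or an ssh tunnel) can reach the database.
bind_ip = 127.0.0.1
port = 27017

# Require authentication even for local connections.
auth = true
```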
[21:42:55] <ArturoVM> Hi there, once again :)
[22:50:49] <Aartsie> hi all, i have a collection created with the name user-log but know mongo think i want to read log ?
[23:17:02] <ArturoVM> Aartsie: I think it's standard practice (regardless of language/OS/environment) that when you want to name something and they're two separate words, you should either a) camel-case them or b) use underscores.
[23:17:38] <Aartsie> ArturoVM: yeah i think so :) i use debian :)
[23:18:29] <Aartsie> when im in the console and want to show all the records of a collection i got the first 10 and then it says 'have more' how can i see them all ?
[23:19:43] <LouisT> Aartsie: type 'it'
[23:20:05] <LouisT> i don't think you can actually print them all off at once
[23:20:31] <Aartsie> LouisT: Ok thank you that works :D
[23:24:58] <Aartsie> when i do db.stats() the storage size are bytes ?
[23:36:40] <crudson> Aartsie: you can get all records in one go by doing something like: db.col.find().map(function(d) { return d })
[23:37:26] <crudson> add a .forEach(function(d) { printjson(d) }) to the end to get normal output
[23:38:13] <LouisT> crudson: is that really a good idea? i could see that being very bad =/
[23:38:23] <crudson> Aartsie: just answering his question :)
[23:38:46] <crudson> LouisT: that was for you
[23:38:56] <crudson> of course you don't want to do this for a mega query
[23:40:42] <crudson> but say you know you want 50 and get them all printed without having to type 'it' many times, this could be useful: .find().limit(50).map(function(e){return e}).forEach(function(d) { printjson(d) })
[23:41:22] <crudson> actually you can get rid of the .map() bit totally
[23:41:37] <crudson> (getting late in the day, sorry)