PMXBOT Log file Viewer


#mongodb logs for Thursday the 30th of January, 2014

[00:00:41] <monmortal> try freebsd
[00:00:45] <monmortal> or archlinux
[00:04:27] <blizzow> Joeskyyy: 5306600
[00:06:09] <blizzow> I think it's correct. I had to connect to mongo using : mongo --host foo.bar.com -u user -p password admin
[00:06:09] <blizzow> then I had to issue: use dbname and the command seems to work.
[00:06:37] <blizzow> So now I'm trying to figure out how to connect straight to dbname instead of having to connect to admin and then use dbname.
[00:08:38] <blizzow> with the goal of eventually putting it in a bash script.
[00:16:13] <magglass2> blizzow: replacing "admin" with the name of your db should cause it to select that db by default; does that not work?
[00:17:09] <blizzow> magglass2, that does not work. Even though the user I'm trying this with has the role of dbAdminAnyDatabase.
[00:18:49] <magglass2> blizzow: check out http://docs.mongodb.org/manual/tutorial/add-user-to-database/ Is the user defined for the DB you're trying to use?
[00:19:43] <magglass2> if not, add the user to that specific db then give it another try
[00:21:44] <magglass2> dbAdminAnyDatabase will give them access to use any DB, but not log in to any DB
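
A minimal sketch of what magglass2 is suggesting, using the 2.4-era shell helpers (the user name, password, and "dbname" are the placeholders from the conversation):

    // connect to the target database as an admin and define the user on it
    use dbname
    db.addUser({ user: "user", pwd: "password", roles: ["readWrite", "dbAdmin"] })

    // afterwards this should connect straight to dbname instead of admin:
    // mongo --host foo.bar.com -u user -p password dbname
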
[00:54:21] *** athlon flooded the channel with off-topic spam (removed) ***
[00:55:05] <monmortal> and they ban me from mad chans
[01:34:21] <cheeser> and that is why we need to kick people more often :)
[05:21:28] <ranman> joannac has ops but did not use them
[07:35:27] <Maisa> Hello guys from Maisa Solutions Pvt Ltd
[08:08:04] <Alex__> Hello, everyone? Is it ok to ask a beginner question if I can't find how to do that in MongoDocs?
[08:14:57] <Alex__> Nobody here?
[08:18:50] <kali> just ask
[08:19:03] <kali> never ask meta questions on irc
[08:19:17] <kali> it never works
[08:21:05] <Alex__> Oh, thanks
[08:24:32] <Alex__> So i struggle with $and
[08:24:34] <Alex__> Let's say I have a schema: { aaa: [array of random length containing random letters from a to z] } and I want to select all elements which contain both a and b. Why doesn't this (implicit $and) work: db.sequences.findOne({ sequence: { $in: ["a"], $in: ["b"] } }) while this does: db.sequences.findOne({ $and: [{sequence: {$in: ["a"]}}, {sequence: {$in: ["b"]}}]})
[08:24:54] <Alex__> oh, and here is formatted version:
[08:25:07] <Alex__> {
[08:25:13] <Alex__> aaa: [array of random length containing random letters from a to z]
[08:25:17] <Alex__> }
[08:25:35] <kali> ha.
[08:25:54] <kali> rule #2: don't paste more than one line on irc, use pastie or pastebin or something similar
[08:26:51] <Alex__> sorry, I'm new to IRC
[08:26:56] <kali> i can see that :)
[08:27:38] <kali> ok, instead of the "schema", can you show me an example document ? (on pastebin/gist/whatever)
[08:28:48] <Alex__> Just a sec
[08:31:14] <Alex__> https://gist.github.com/alexander-i/8704608 - here is gist
[08:31:21] <Alex__> I don't have actual code
[08:31:51] <Alex__> I have a scala app that fills mongo with formatted entries
[08:32:26] <Alex__> Right now, all I want to do is to count all elements where sequence contains both elements a and b
[08:32:26] <kali> Alex__: schema is not a mongodb thing, so please show me an example document
[08:33:17] <kali> Alex__: i honestly can not picture what a document looks like from this
[08:35:52] <Alex__> https://gist.github.com/alexander-i/8704648 here it is
[08:37:21] <kali> Alex__: ok. what about db.sequences.find({ sequence: { $all: [ "google", "yandex" ] }}) ?
[08:40:05] <Alex__> kali: whoa, thanks! it works. Do you think, it's ok to use it on large (500k entries) datasets?
[08:40:36] <kali> you may want an index on sequences.sequence
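
Pulling the thread together, a shell sketch using the values from kali's example: the implicit form fails because a JS object literal cannot hold the key $in twice (the second silently overwrites the first), while $all asks for every listed value:

    // { $in: ["a"], $in: ["b"] } collapses to { $in: ["b"] } before Mongo ever sees it
    db.sequences.find({ sequence: { $all: ["google", "yandex"] } }).count()

    // kali's index suggestion, so the query stays fast at ~500k documents:
    db.sequences.ensureIndex({ sequence: 1 })
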
[08:40:56] <[AD]Turbo> hi there
[08:41:26] <Alex__> kali: thanks again! Will look into it
[11:37:22] <Nodex> http://iteration99.com/2013/php-json-licensing-and-php-5-5/
[11:37:23] <Nodex> LOL
[11:40:02] <Derick> what !
[11:40:08] <Derick> we've actually removed the json extension?!
[11:40:13] <Derick> what kind of bullshit is that
[11:40:19] <Derick> oh - not a #php channel
[11:41:05] <Derick> Nodex: it's nonsense, PHP has it bundled just fine
[11:53:03] <Nodex> after 5.5rc2 apparently not
[11:53:15] <Nodex> I guess we'll have to wait and see
[11:56:56] <Nodex> that's an old post, perhaps they resolved it
[12:20:38] <BurtyB> Nodex, looks like the bug is still open
[12:33:03] <Nodex> dang
[12:50:06] <oark> How could I $pullAll:{'options.votes.user':1} from http://pastebin.com/0VzdWsUi
[12:50:15] <oark> Doesn't seem to be working.
[12:50:48] <oark> (it's not because it's a string)
[13:05:23] <feedthecat> I'm trying to parse an sql dump file into json, can anyone please advise on how best to do it
[13:08:49] <Nodex> feedthecat : Google ;)
[13:08:52] <ncls> feedthecat: wow
[13:09:12] <ncls> feedthecat: I would find easier to make a json from the db data
[13:09:52] <feedthecat> yea, problem is i cannot export, must parse
[13:09:52] <ncls> like : making a "select" query, getting an array of hashes, and saving it
[13:10:23] <ncls> feedthecat: can't you import this dump into an sql database and make queries on it ?
[13:10:58] <feedthecat> ncls: No I cannot for this particular purpose
[13:11:03] <ncls> hm
[13:11:28] <ncls> where does the dump come from ?
[13:11:34] <ncls> I mean, what DB engine ?
[13:11:45] <feedthecat> It was exported mysql
[13:12:15] <Nodex> quicker to dump it back
[13:12:42] <ncls> but why couldn't you import it into one local mysql server and query on it with php or anything else ?
[13:13:10] <ncls> feedthecat: http://www.wampserver.com/en/ is very easy to setup on windows
[13:13:57] <ncls> it might look boring to do so, but it seems so much easier to me that I wouldn't do anything else
[13:14:14] <ncls> parsing an SQL dump file seems very hard
[13:14:21] <ncls> unless maybe some libraries do so
[13:16:09] <feedthecat> Yes actually it is, but for this specific task I'm being forced to do just that, and I have little idea how
[13:54:44] <brendan6> There is a warning on page http://docs.mongodb.org/manual/core/2dsphere/ of the Mongo docs that reads "Important MongoDB allows only one geospatial index per collection. You can create either a 2dsphere or a 2d per collection." but this doesn't appear to actually be the case. I have a test document that has 2x2dsphere indices and 1x2d index, all queryable with a $near. What gives?
[13:55:43] <Derick> brendan6: you can create them, but they're not necessarily used?
[13:55:47] <Derick> what does .explain() say?
[14:00:38] <brendan6> Derick: pastebin with only 2 2dspehere indices. Looks good and both return results http://pastebin.com/dhfgQBgC
[14:00:57] <brendan6> Going to add the 2d in now on another field
[14:02:55] <brendan6> Derick: http://pastebin.com/C5SA4drr ...so although a result IS returned, it appears the index is not being used.
[14:04:29] <brendan6> I think this suggests that I can have any number of 2dsphere indices or any number of 2d indices but not both? Am I correct to assume this?
[14:06:08] <Derick> well, I still think only one index is used, but I haven't tried that in a while
[14:06:15] <Derick> perhaps if you hint the index...
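
For reference, hinting and explaining a geo query looks like this in the shell (the collection and index name are hypothetical; by default the index would be named field_2dsphere):

    db.places.find({ loc: { $near: { $geometry: { type: "Point", coordinates: [-73.97, 40.77] } } } })
             .hint("loc_2dsphere")
             .explain()
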
[14:17:09] <ajph> hey. i am pre-aggregating data by UTC day with sub-document hours (see: http://pastie.org/8682308). It's recently been decided that we must now be able to represent that data in different timezones (whole hours). i'm not sure whether to keep what i've got in UTC days, query all required days from MongoDB, and add up the hours to make timezone-specific days in the backend - or change my schema so data is stored per-hour. Any insight?
[14:18:16] <Derick> ajph: add the hours in the aggregation query
[14:22:17] <ajph> Derick: the aggregation i need groups by day. would that be possible? i'm using the aggregation framework.
[14:22:48] <Derick> just use $sum, 3600 (for an hour f.e.) before the group
[14:29:40] <orweinberger> Question regarding sharding. Mongo's sharding manual tells me that I should set all shards to be replica sets on production. However the whole purpose of sharding is to divide the data between different mongo instances. So if all 3 shards are replica sets, doesn't that mean that they all hold the same data set? What's the point of sharding and using replica sets then?
[14:30:02] <joannac> no, each shard is backed by a replica set
[14:30:18] <joannac> so you have 3 shards, each of which is a e.g. 3 node replica set
[14:30:25] <joannac> for 9 data bearing nodes
[14:30:46] <ajph> Derick: i'm sorry, i don't understand. my aggregation looks like this: http://pastie.org/8682352 (excuse the Golang) - really appreciate your help
[14:30:53] <joannac> does that answer your question orweinberger?
[14:32:13] <joannac> ajph: if you need to change the timezone, have a $project clause first and then add the relevant offset to get your date in the right timezone
[14:33:12] <joannac> I'm not sure how separating by hour would help you
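
A rough sketch of that $project-then-$group idea, assuming raw events with a Date field ts (which, as ajph notes below, his pre-aggregated schema does not actually have):

    db.events.aggregate([
        // shift UTC into the user's zone (here UTC-5) before grouping by day
        { $project: { value: 1, localTs: { $subtract: ["$ts", 5 * 3600 * 1000] } } },
        { $group: { _id: { y: { $year: "$localTs" }, m: { $month: "$localTs" }, d: { $dayOfMonth: "$localTs" } },
                    total: { $sum: "$value" } } }
    ])
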
[14:33:15] <orweinberger> joannac: I think so, you mean that my mongos 'sees' my shards as standalone but the shards themselves are replica sets to avoid any data loss within that shard?
[14:33:36] <joannac> orweinberger: well, mongoS is well aware that your shards are replica sets, but yes
[14:34:42] <orweinberger> joannac: OK, so I need to run the rs.addShard() 3 times in case I have 3 shards, one with each rs0/rs1/rs2 respectively, correct?
[14:35:21] <brendan6> Derick: All my tests are indicating that multiple 2dsphere indices are absolutely fine. I feel that the documentation should read "Important MongoDB allows only one type of geospatial index per collection. You can create either a 2dsphere or a 2d per collection."
[14:35:39] <joannac> sh.addShard()
[14:37:00] <orweinberger> sorry, yes sh.addShard()
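
So the full sequence from a mongos, with invented host names, is one sh.addShard() per replica set:

    sh.addShard("rs0/rs0-a.example.com:27017,rs0-b.example.com:27017,rs0-c.example.com:27017")
    sh.addShard("rs1/rs1-a.example.com:27017")   // one seed host is enough,
    sh.addShard("rs2/rs2-a.example.com:27017")   // the other members are discovered
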
[14:37:54] <joannac> brendan6: Have you tried geoNear?
[14:38:47] <orweinberger> joannac: Thanks for your help!
[14:42:56] <ajph> joannac: even if i get my query date in the right timezone, how does that help me output results per-day in user-time, when the data is stored as 1 object per UTC date?
[14:43:11] <brendan6> joannac: I am using $near like so http://pastebin.com/rB6tjxkG. It's working great, I just think the documentation might be a little vague when explaining. I made a comment on this commit https://github.com/mongodb/docs/commit/e358f8d995d325e72000d135c519fd9c4dfeb685
[14:45:13] <ajph> there is one object like this per UTC day: http://pastie.org/8682308
[14:45:42] <michael_____> does it make sense to store nearly the same documents in different collections? like unpublished documents and published (second one should not be modified anymore)
[14:47:54] <ajph> if i can get any paid 10gen help on this i'd be happy to do that
[14:50:26] <joannac> ajph: https://www.mongodb.com/products/consulting/lightning-consult
[14:51:41] <ajph> joannac: thanks. i better wait until i have enough questions to fill an hour
[15:10:39] <Nodex> 450 an hour, wowzers
[15:12:08] <ajph> joannac: i think i see what you're saying now. the date field in my object is just a YYYY-MM-DD with no hour data. the hours are a subdocument, so that won't work
[15:13:01] <ajph> the hour fields are just incremented on an upsert as per: http://blog.mongodb.org/post/65517193370/schema-design-for-time-series-data-in-mongodb
[15:19:17] <Nodex> that blog post kind of contradicts what a lot of people say about not having values as keys, certainly numeric ones
[15:19:26] <Nodex> (personally I don't agree with it but hey)
[15:20:12] <Nodex> it would be far better to push the objects into an array as the array will have a numeric key anyway
[15:20:18] <ajph> Nodex: i believe it's the only way to do an increment on an upsert
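
The increment-on-upsert pattern from that blog post looks roughly like this (the day-keyed _id and field names here are illustrative, not ajph's actual schema):

    // the first touch creates the day document, later ones bump the hour bucket
    db.stats.update(
        { _id: "20140130/sensor-1" },
        { $inc: { "hours.14": 1, total: 1 } },
        { upsert: true }
    )
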
[15:21:10] <michael_____> when does it make sense to use another collection for the same document?
[15:21:46] <cheeser> archiving
[15:21:48] <ajph> michael_____: stale data that you're not going to use much?
[15:22:29] <michael_____> ajph: yes, and it also should not be modified anymore, kind of a snapshot
[15:22:29] <Nodex> if it's archived I would save the space and write it out to disk as a Json document personally, really depends if you're ever likely to access it
[15:22:37] <ron> cheeser: bless you
[15:23:14] <michael_____> Nodex: ajph well it will be used again, to export it would be not fit for us
[15:23:54] <Nodex> :)
[15:23:59] <michael_____> you can imagine the difference between published data that should not be edited again and drafted data that will be edited again
[15:26:39] <orweinberger> Question regarding sharding. I have 3 shards for 3 standalone mongod instances. I have a script running on a different machine which is pushing data to the mongoS instance. Now during the process I stop the mongod process on one of the shards to see that the sharding is supporting this failure. What happened was that there was no insert error in my script, everything seemed to be working however when I checked I saw that some of the pushed documents were missing. I'm guessing mongoS was trying to push them to the dead shard. Is this possible? Did I overlook something?
[15:29:12] <orweinberger> Should I configure anything to tell mongoS to 'assure' data so that when a shard dies it will not try to push future data into it until it comes back?
[15:31:37] <Joeskyyy> orweinberger: You'd need to introduce sharding with a replset to do that.
[15:31:48] <Joeskyyy> Each shard has its range of the shard key you chose when you sharded the collection.
[15:31:56] <b0ss_> Why does MongoDB use JSON and not XML, for instance?
[15:32:36] <kali> b0ss_: mongodb does not use json, it uses bson
[15:33:00] <Nodex> and if it did, xml is still bloated in comparison
[15:33:08] <Nodex> plus it's 2014 not 1992
[15:33:20] <b0ss_> kali: you're right. But why not something derived from XML or something else? I'd like to understand that design choice
[15:33:44] <kali> b0ss_: json is just simpler
[15:34:06] <Joeskyyy> lolol
[15:34:14] <orweinberger> Joeskyyy: But the replSet is there to ensure that data is not lost for each specific shard, am I wrong? I'm not interested in any mirrors or replicas of the data. I just want to shard the data between several instances. Is this not possible to achieve whilst making sure that if one shard dies the mongoS won't direct any queries to it?
[15:34:53] <kali> b0ss_: also, i think the master plan, at some point was to have a full-javascript stack (from db to browser) before focusing exclusively on the database
[15:35:26] <Joeskyyy> orweinberger: Not really without some crazy balancing script
[15:35:45] <orweinberger> So mongo forces me to replicate my data if I want sharding? that's weird...
[15:35:50] <b0ss_> a full-js stack? Would they eliminate the need for a standard back-end language ?
[15:36:15] <Nodex> why would you use XML over a json like structure given the choice?
[15:36:30] <Nodex> seeing that you're trying to stuff data into memory in an efficient way
[15:36:47] <Joeskyyy> orweinberger: You can send those requests, but you'll have data loss.
[15:36:54] <Joeskyyy> Because each shard has a specific dataset it's expecting.
[15:37:02] <Joeskyyy> So if that shard is offline, there's nowhere that data can go.
[15:37:18] <Joeskyyy> Because the other shards don't have the bounds for the shard key.
[15:37:47] <orweinberger> I thought mongoS's job was to connect between the shards and balance the load, so I figured mongoS should also know the state of each mongod instance to avoid sending data to a dead mongod instance.
[15:38:06] <orweinberger> oh I see what you mean about the shard keys
[15:38:10] <kali> b0ss_: i think the plan was to use js everywhere, but it will never see the light as far as i know
[15:38:40] <Joeskyyy> Yeah. It does try to send it technically, it just derps out because the shard is dead.
[15:38:54] <Joeskyyy> The mongoS says "Hey, you fit this shard key bounds, I'm sending you off"
[15:39:37] <orweinberger> Well it should at least output some kind of error I would think
[15:39:39] <b0ss_> kali: I wonder how suitable Node.js is for MongoDB !
[15:39:53] <kali> b0ss_: i wonder how suitable Node.js is for anything
[15:40:02] <Nodex> hahahaha
[15:40:12] <Nodex> kali : does a pretty good job with some API#s
[15:40:34] <kali> Nodex: yeah, it was an easy troll.
[15:41:53] <Nodex> kali : I personally don't like it for Mongo
[15:42:23] <Joeskyyy> orweinberger: It's because of the mongoS being stateless. It doesn't report errors like that, rather the mongod's do.
[15:44:47] <orweinberger> Joeskyyy: but the mongod instance is dead :) OK I get your point so I HAVE to set replSet if I want to use sharding correctly..
[15:46:11] <Joeskyyy> don't have to, but that's the best practice.
[15:46:18] <Joeskyyy> Just in case your shard goes offline
[15:46:18] <Joeskyyy> :D
[15:50:26] <kali> kali : I personally don't like it for anything
[15:50:35] <kali> wow, this substitution works very well :)
[15:50:48] <Nodex> haha
[15:53:43] <b0ss_> in my PHP script, how do I read the structure of a MongoDB collection and display it?
[15:54:18] <Nodex> you have to grab a document
[15:54:23] <orweinberger> Joeskyyy: What do you mean don't have to? If I don't set up the replica sets and one of my shards dies, I will lose data for sure..
[15:56:21] <Nodex> b0ss_ : You have to grab a document and loop it
[15:58:02] <b0ss_> Nodex: there might be documents with different contents / structure. I can not rely on only one, don't you agree? I should iterate over all !
[16:00:13] <Nodex> personally, if I have docs like that I just store a schema in a collection and update it, i.e. ONE document per collection name that has EVERY field
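
Nodex's "grab a document and loop it", translated to the shell (the PHP driver returns documents as arrays you can foreach over the same way; the collection name is a placeholder):

    var doc = db.mycollection.findOne();
    for (var field in doc) {
        print(field + ": " + typeof doc[field]);
    }
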
[16:05:00] <kali> b0ss_: you may want to have a look at https://github.com/variety/variety
[16:05:42] <Nodex> gonna be a long running process on large collections
[16:05:48] <Nodex> better to keep a running total imo
[16:05:57] <Nodex> running config*
[16:06:24] <kali> agreed. it's just nice to have something to verify it from time to time
[16:06:43] <kali> or when you're inheriting a database
[16:08:13] <b0ss_> Nodex: a running config ?
[16:08:22] <b0ss_> kali: that variety thing is just awesome !
[16:09:23] <Nodex> b0ss_ : as I explained :)
[16:09:33] <Nodex> [15:57:53] <Nodex> personally, if I have docs like that I just store a schema in a collection and update it, i.e. ONE document per collection name that has EVERY field
[16:16:21] <Gargoyle> b0ss_: Check out the rockmongo source and see how they do it.
[16:16:40] <b0ss_> thanks
[16:19:28] <Nodex> anyone sane would keep a running document, it's the most efficient way
[16:20:03] <kali> great. now i'm insane
[16:20:27] <Nodex> kali : are you programming though?
[16:20:46] <Nodex> as a programmer you would (if you really cared) keep the running document structure
[16:21:49] <kali> i'm mostly a developer yeah. i make the code document the db structure
[16:21:58] <Nodex> :D
[16:22:54] <Nodex> doesn't bother me - the structure, I fire all sorts of things at my docs
[16:23:24] <Nodex> hell my solr index is one massive bunch of dynamic fields with collection separated by a value of a field
[16:47:06] <fluxdude> for replication, does each mongod communicate directly with the other mongod's in the replica set?
[16:47:24] <fluxdude> details of the replication protocol and semantics seem a bit sketchy from the docs
[16:53:20] <Joeskyyy> fluxdude: Yeah, they all know of each other
[16:53:24] <Joeskyyy> And send regular heartbeats and such
[16:54:37] <_boot> is there a simple way to load balance reads over a replica set?
[16:57:25] <Joeskyyy> kind of? you can set preferences.
[16:57:42] <Joeskyyy> IT doesn't really load balance them, but it tells your drivers your preference for the reads.
[16:57:47] <Joeskyyy> http://docs.mongodb.org/manual/core/read-preference/
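
For example, from the shell (drivers expose the same modes; the collection name is made up):

    db.getMongo().setReadPref("secondaryPreferred")   // connection-wide
    db.users.find().readPref("secondaryPreferred")    // per-query
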
[17:02:23] <NyB> does anyone know if the new MongoDB Java driver (3.0) will be any faster than the current version?
[17:02:35] <fluxdude> Joeskyyy: thanks
[17:02:52] <fluxdude> wish it was a bit clearer on the process of communication at the tcp level so I could get a deeper understanding
[17:03:01] <fluxdude> gotta run
[17:22:43] <_boot> hmm, shame there isn't a "least busy" read preference
[17:39:36] <ekristen> my mongodb server seems to stall out randomly, not sure why, nothing in the log files
[17:40:59] <Nodex> define "stall"
[17:41:44] <salty-horse> is it possible for a collection's _id index to not be unique?
[17:41:58] <algernon> yes.
[17:42:49] <salty-horse> I was sure it's automatically enforced to be unique. guess I was wrong :/
[17:43:07] <Nodex> ObjectId's are
[17:43:19] <Derick> Nodex: no, _id is, as long as there is an index
[17:43:24] <Derick> you can drop the _id index
[17:43:47] <Nodex> so you can have two ObjectId's that are the same?
[17:43:55] <Nodex> learn something new every day :D
[17:43:56] <Derick> sure
[17:44:04] <Nodex> never knew that
[17:44:09] <Derick> they're created by the driver upon insert
[17:44:37] <Derick> it does look like an index on _id is implicitly unique though
[17:44:53] <Nodex> I thought it was for internal counters
[17:45:31] <salty-horse> Derick, I see an index, but it doesn't have a unique attribute
[17:45:40] <Derick> salty-horse: yes, I know - let me check something
[17:46:10] <salty-horse> Nodex, that makes two of us. and now I wonder if I had made more wrong assumptions in code
[17:46:23] <Derick> salty-horse: if you do this:
[17:46:26] <Derick> > db.createCollection( "test", { autoIndexId: false } );
[17:46:28] <Derick> > db.test.createIndex( { _id: 1 } );
[17:46:34] <Derick> then the index on _id is still unique
[17:46:48] <Derick> so I would say it's implicitly unique if it's on _id
[17:47:09] <Derick> also, after creating that index:
[17:47:12] <Derick> > db.test.dropIndex( { _id: 1 } );
[17:47:12] <Derick> { "nIndexesWas" : 1, "ok" : 0, "errmsg" : "may not delete _id index" }
[17:48:07] <salty-horse> what if I don't disable autoIndexId? :) is it unique, then? I looked over some of my collections, and none have a "unique" on the _id_ index
[17:48:19] <Derick> salty-horse: yeah, that's why I said:
[17:48:22] <Derick> 17:44 <@Derick> so I would say it's implicitly unique if it's on _id
[17:48:37] <Nodex> I think the main point is that in order to make ObjectId's non-unique-constrained you have to do some work
[17:48:50] <Nodex> so it's not something your code would be doing without you knowing it
[17:49:11] <Derick> you can cheat though, by making an index:
[17:49:14] <Derick> > db.test.createIndex( { _id: 1, not_here: 1 } );
[17:49:38] <Derick> it will work as well as just the "special case where there is an index on just _id"
[17:49:43] <Derick> > db.test.insert( { _id: 1 } );
[17:49:43] <Derick> > db.test.insert( { _id: 1 } );
[17:49:43] <Derick> >
[17:50:06] <salty-horse> I need to test something locally. sec :)
[17:50:33] <joannac> @derick stop teaching people to remove the _id index :p
[17:50:36] <Derick> interesting, if I do this:
[17:50:37] <Nodex> haha
[17:50:39] <Derick> > db.test.createIndex( { _id: 1, not_here: 1 } );
[17:51:25] <Derick> joannac: :-þ
[17:51:29] <Derick> salty-horse: yeah, don't do this
[17:52:51] <salty-horse> Derick, I think I reached an abnormal situation where I have a vanilla _id_ index, but two records with the same _id (created explicitly by me with "save()", thinking it will update the other record). testing
[17:53:29] <joannac> you shouldn't be able to have 2 docs with the same _id field
[17:53:35] <joannac> Are you sure that's what you have?
[17:54:54] <salty-horse> yup. it's in a sharded environment. let me write a test-case in pastebin
[17:54:59] <salty-horse> joannac, yes
[17:57:01] <salty-horse> http://pastebin.com/D1mmKn6z
[17:57:05] <salty-horse> joannac, Derick ^
[17:57:11] <salty-horse> should I report this?
[17:57:57] <Derick> salty-horse: is that without _id index?
[17:58:34] <salty-horse> Derick, with. this is a vanilla-ish created collection. I will try it on a fresh collection just to make sure. sec
[17:58:46] <salty-horse> (I tested it on a production collection that I did not create)
[17:59:31] <Derick> cause what you pasted does this for me: http://pastebin.com/Ef0kPYtH
[17:59:46] <salty-horse> Derick, is it a sharded collection?
[17:59:57] <Derick> nope
[18:00:14] <salty-horse> I wrote that it's sharded at the top. will create a REAL test case with shardCollection to be explicit
[18:00:18] <Derick> oh
[18:00:28] <salty-horse> just a moment
[18:03:02] <salty-horse> oh, wait. sharding on an index means that only that field can be unique...
[18:03:09] <salty-horse> *facepalm*
[18:03:14] <salty-horse> s/field/index
[18:03:38] <salty-horse> (so my _id_ index can't be unique)
[18:04:28] <salty-horse> http://docs.mongodb.org/manual/reference/limits/#Unique%20Indexes%20in%20Sharded%20Collections
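
A sketch of how salty-horse's duplicates can arise (all names invented): once a collection is sharded on a key other than _id, the _id index is only enforced per shard, so two shards can each hold a document with the same _id:

    sh.enableSharding("test")
    sh.shardCollection("test.things", { region: 1 })   // empty collection, shard key index auto-created
    db.things.save({ _id: 1, region: "eu" })   // routed to one shard
    db.things.save({ _id: 1, region: "us" })   // may be routed to another: same _id, two documents
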
[18:06:40] <DevRosemberg> Does anyone know how to get a Map from Mongo in Java?
[18:06:59] <DevRosemberg> like, i save a this private Map<String, String> petNames = new HashMap<String, String>();
[18:07:04] <DevRosemberg> but then how can i load it back again
[18:14:09] <ruphos> Derick: I ended up with a few docs with duplicate _ids on a non-sharded cluster after a migration. any idea how that could happen?
[18:15:00] <ruphos> does the insert write lock extend until the index is updated or could two docs inserted closely together possibly get the same id?
[18:15:51] <DevRosemberg> Hello?
[18:15:57] <DevRosemberg> can anyone reply?
[18:16:26] <DevRosemberg> i have been googling for hours and there is no way of getting data from mongo and putting it in a hashmap
[18:16:56] <ruphos> pretty easy with perl, not sure how java would do it ;)
[18:17:18] <ruphos> I believe cheeser works on a java driver, he may have more insight
[18:20:57] <DevRosemberg> cheeser: http://stackoverflow.com/questions/21464432/load-data-from-mongodb-to-hashmap
[18:23:31] <joannac> DevRosemberg: do a find, iterate the cursor, get the "PetNames" field, and insert it into your map?
[18:23:46] <DevRosemberg> how?
[18:23:53] <joannac> How for which part?
[18:24:34] <DevRosemberg> everything
[18:24:40] <DevRosemberg> like
[18:24:41] <DevRosemberg> can you show me?
[18:27:54] <cheeser> DevRosemberg: i'm not following your question. what's not working?
[18:28:20] <DevRosemberg> nothing is not working, i just dont know how to load data to a map from mongo
[18:28:54] <cheeser> new BasicDBObject().putAll(map)
[18:29:04] <cheeser> then stuff that ina collection
[18:31:30] <DevRosemberg> no
[18:31:32] <DevRosemberg> cheeser
[18:31:34] <DevRosemberg> FROM the Mongo
[18:31:38] <DevRosemberg> not TO the mongo
[18:31:43] <cheeser> oh, right. other way. new HashMap<>().putAll(dbObject)
[18:31:45] <cheeser> magic
[18:32:05] <DevRosemberg> um
[18:32:08] <DevRosemberg> let me put it in a pastie
[18:33:59] <DevRosemberg> if (object.containsField("PetNames")) {
[18:34:00] <DevRosemberg> BasicDBList values = (BasicDBList) object.get("PetNames");
[18:34:00] <DevRosemberg> for (Object listContent : values) {
[18:34:00] <DevRosemberg> BasicDBObject result = (BasicDBObject) listContent;
[18:34:00] <DevRosemberg> petNames.putAll(result);
[18:34:00] <DevRosemberg> }
[18:34:02] <DevRosemberg> }
[18:34:04] <DevRosemberg> noppe
[18:35:46] <DevRosemberg> cheeser
[18:36:41] <cheeser> use a pastebin next time please.
[18:37:03] <cheeser> soooooo what am I doing with that?
[18:37:03] <DevRosemberg> k
[18:37:10] <DevRosemberg> that doesn't work
[18:37:21] <cheeser> and by doesn't work you mean ... ?
[18:38:25] <DevRosemberg> putAll in map cannot be applied to BasicDBObject
[18:39:15] <cheeser> what's the *actual* compiler output?
[18:39:31] <DevRosemberg> i didnt even try to compile it
[18:39:32] <DevRosemberg> its an error
[18:39:33] <cheeser> i'm guessing it can find putAll(List)
[18:39:34] <DevRosemberg> on the writing
[18:39:35] <DevRosemberg> lol
[18:39:38] <cheeser> um. no.
[18:39:51] <cheeser> *some* compiler is giving you fits.
[18:41:25] <DevRosemberg> http://d.pr/i/FsGq
[18:41:28] <DevRosemberg> thats the error
[18:41:53] <DevRosemberg> http://gyazo.com/a07a4c39d2be596c990c2853b018451a
[18:41:54] <DevRosemberg> that one
[18:43:10] <cheeser> ah, yeah. BasicDBObject is a Map<String, Object> not a Map<String, String>
[18:43:22] <cheeser> so you'll have to iterate and put() each entry manually
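
What cheeser is describing, sketched in Java against the 2.x driver, assuming PetNames was stored as an embedded document and "object" is the DBObject fetched earlier in DevRosemberg's snippet (if PetNames is really a BasicDBList, unwrap the list first):

    import com.mongodb.BasicDBObject;
    import java.util.HashMap;
    import java.util.Map;

    // BasicDBObject implements Map<String, Object>, so the fix is to copy
    // entry by entry, casting each value down to String:
    BasicDBObject stored = (BasicDBObject) object.get("PetNames");
    Map<String, String> petNames = new HashMap<String, String>();
    for (Map.Entry<String, Object> entry : stored.entrySet()) {
        petNames.put(entry.getKey(), (String) entry.getValue());
    }
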
[18:43:44] <DevRosemberg> ?
[18:44:06] <cheeser> .
[18:58:34] <DevRosemberg> cheeser?
[18:59:16] <cheeser> yes?
[20:48:39] <kkspy[bnc]> Do I have any chance to have a conversation with developer of PHP driver?
[21:03:47] <joannac> Sure. You'd be better off asking your question though
[21:15:55] <TheDracle> So, I'm trying to update multiple embedded documents on a document.
[21:16:03] <TheDracle> Calls.update({'participants._id': previous_cm[0]._id, ended: false}, {$set: {'participants.$.state': 'waiting'}}, {multi: true});
[21:16:08] <TheDracle> I've tried this, but it doesn't seem to work.
[21:16:09] <unholycrab> if i do db.collection.ensureIndex( { a: 1}, { background : true}) on the primary, will the secondaries automatically start building the a: 1 index after it finishes?
[21:16:16] <unholycrab> mongod 2.4
[21:16:39] <TheDracle> I.E: I have multiple participants that meet the criteria that participants._id is equal to previous_cm[0]._id
[21:16:57] <TheDracle> That are in the participants document, and I want 'all' of them to have their state altered.
[21:22:34] <joannac> a db.find({'participants._id': previous_cm[0]._id, ended: false}) gives you how many results?
[21:22:41] <joannac> db.collection.find
[21:23:34] <TheDracle> 2
[21:24:06] <TheDracle> It works the first time.
[21:24:11] <TheDracle> When there's only one result.
[21:24:23] <TheDracle> And the second time, it doesn't, because I think it just ends up setting the first result again.
[21:24:41] <joannac> ?
[21:25:59] <TheDracle> Well, it's a call system, I Call, and when I hang up, I keep a record of the previous hang up.
[21:26:09] <TheDracle> So, this call sets the call state to 'hung_up'
[21:26:13] <TheDracle> And that succeeds.
[21:26:17] <TheDracle> If I join the same call again.
[21:26:22] <TheDracle> It creates a new entry.
[21:26:26] <TheDracle> With the state 'on_call'.
[21:26:35] <TheDracle> And then this call is made again to set the state to 'hung_up'
[21:26:45] <TheDracle> But it doesn't work on the second participant record for the same ID.
[21:26:51] <TheDracle> You gave me an idea though.
[21:27:02] <TheDracle> Just to filter for where status is not equal to hang_up
[21:27:06] <TheDracle> So I get one result.
[21:27:50] <joannac> TheDracle: http://pastebin.com/cR9j9EHh
[21:29:00] <TheDracle> Hm.
[21:29:06] <TheDracle> So it ought to work.
[22:21:50] <TheDracle> joannac, http://pastebin.com/sJkR5tat
[22:22:49] <TheDracle> version() -> 2.4.8
[22:22:59] <TheDracle> So confused...
[22:24:18] <joannac> TheDracle: That's not how positional operators work
[22:24:21] <joannac> THat's one document
[22:24:33] <joannac> multi is across multiple documents, not multiple array elements
[22:25:00] <TheDracle> Hm.. It looks like the same structure as in your pastebin.
[22:26:19] <TheDracle> You have {_id: XXX, participants: [{_id:0},{_id:0},{_id:1}]}
[22:26:26] <joannac> no I don't
[22:26:27] <TheDracle> They are the same aren't they?
[22:26:30] <joannac> I had 2 documents
[22:26:41] <TheDracle> Ahhh
[22:26:57] <TheDracle> Not two participants embedded in the top level documents.
[22:27:09] <TheDracle> With the same id...
[22:28:11] <TheDracle> So, does participants.$ only match the first matching embedded document in that array?
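
That is exactly the limitation (2.4-era semantics; names follow TheDracle's snippet): the positional $ resolves to the first array element that matched the query in each document, and {multi: true} only widens the update across documents:

    // touches ONE matching participant in EACH matching document
    db.Calls.update(
        { "participants._id": someId, ended: false },
        { $set: { "participants.$.state": "waiting" } },
        { multi: true }
    )
    // a common workaround of the era: re-run the update, with the query also
    // excluding already-updated elements, until it stops matching anything
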
[22:28:47] <asturel> is mongodb uses much memory or why is it limited to 2gb on 32bit?
[22:29:28] <ruphos> asturel: because 32bit
[22:29:43] <TheDracle> http://stackoverflow.com/questions/8505489/multiple-update-of-embedded-documents-properties
[22:30:02] <asturel> ruphos but i dont rly understand how's data affected
[22:30:13] <asturel> unless it would use >2GB mem
[22:30:40] <TheDracle> asturel, It optimizes the access to the database file by mapping it into memory.
[22:30:49] <ruphos> mongo uses memory mapped files. A 32-bit OS has a total max memory address space of 3.5GB
[22:30:57] <TheDracle> asturel, On 32-bit systems you have an exponentially smaller address space to map to.
[22:31:02] <ruphos> ^
[22:31:18] <ruphos> it's a limitation of the OS, not mongo
[22:31:24] <asturel> i know but i use pae
[22:31:54] <TheDracle> asturel, Is there a specific reason why you have to use 32-bit?
[22:32:06] <TheDracle> asturel, I would say.. Mongodb was not designed to work with 32-bit systems.
[22:32:19] <asturel> well there is no reason to use 64 for me.. with 3-4GB mem
[22:32:58] <ruphos> is there a gain from using 32-bit over 64-bit?
[22:33:12] <TheDracle> asturel, "Note that virtual address space is not increased by PAE. PAE is purely a Physical Address Extension to allow you to have more than about 3.5GB of RAM."
[22:33:22] <TheDracle> http://stackoverflow.com/questions/3717024/mongodb-assumes-64-bit-system-does-it-mean-the-hardware-or-the-os-kernel-or-bo
[22:33:37] <TheDracle> Yeah, it's really nothing but badness to use MongoDB with 32-bit.
[22:33:44] <TheDracle> It is the modern day trail of tears.
[22:33:47] <TheDracle> I would avoid it.
[22:34:03] <asturel> yeah but i installed the os like 5 years ago
[22:34:22] <TheDracle> Mongo is new and shiny... It doesn't want to be associated with your old smelly hardware.
[22:34:38] <retran> why ar eyou using 32 bit
[22:34:47] <TheDracle> He doesn't want to reinstall OS.
[22:34:48] <retran> that's ril rude
[22:34:48] <TheDracle> Makes sense.
[22:34:50] <asturel> well.. its a intel Qxxx series cpu
[22:35:07] <TheDracle> asturel, Go to amazon cloud.
[22:35:15] <retran> why didnt they make mongo work on 80286
[22:35:16] <asturel> hehe.. thats rly expensive
[22:35:17] <TheDracle> Make tiny instance 64-bit
[22:35:22] <TheDracle> It's free!
[22:35:27] <TheDracle> Or go to mongohq.
[22:35:28] <asturel> u mean the free trier?
[22:35:31] <TheDracle> And spin up a free instance.
[22:35:37] <retran> but free tier ec2 is poop slow
[22:35:42] <asturel> yeah..
[22:35:43] <retran> you'll hate life
[22:35:52] <retran> use DigitalOcean
[22:35:56] <TheDracle> retran, Something makes me think it will be faster than the system he's trying to put it on.
[22:36:06] <retran> true that
[22:36:11] <asturel> i tried to compile znc on free ec2 but it took hours if it didn't get killed
[22:36:24] <TheDracle> Compile...
[22:36:26] <TheDracle> ZNC?
[22:36:31] <retran> youll need to use compute optimized ec2
[22:36:35] <asturel> yeah its a bouncer
[22:36:38] <retran> and start pooping money
[22:36:41] <TheDracle> They have mongodb AWS instances.
[22:36:42] <retran> to pay for it
[22:36:45] <TheDracle> That have dedicated IOPS plans.
[22:36:48] <TheDracle> But they're pricey.
[22:36:52] <asturel> but i dont want to pay :D
[22:36:54] <retran> the mongo optimized ec2 you need to poop money to pay
[22:36:55] <TheDracle> If you're just prototyping, start small :0
[22:36:57] <retran> as well
[22:37:03] <TheDracle> https://www.mongohq.com/home
[22:37:09] <TheDracle> MongoHQ instances are pretty fast.
[22:37:09] <unholycrab> i use mongodb on aws, and IOPS
[22:37:14] <unholycrab> i dont pay for a special instance
[22:37:15] <retran> i'm having good luck at DO
[22:37:19] <TheDracle> But only provide 500MB for free.
[22:37:40] <retran> i have a over 40GB in mongo data dir
[22:37:50] <retran> blazing fast regex search
[22:37:52] <TheDracle> unholycrab, You mean, you use a tiny instance or something, with IOPS?
[22:37:57] <TheDracle> unholycrab, And get it pretty cheaply?
[22:38:01] <asturel> wellwell my 1 years log is only 1GB
[22:38:12] <retran> getting iops provisioned is never cheap
[22:38:15] <unholycrab> TheDracle: m2.4xlarge instances
[22:38:16] <retran> on ec2
[22:38:19] <unholycrab> for mongodb
[22:38:20] <retran> those are expensive
[22:38:20] <asturel> its not rly big i just wanted to use something instead of file logging
[22:38:24] <TheDracle> unholycrab, What are you paying/month?
[22:38:29] <unholycrab> a lot probablhy
[22:38:33] <TheDracle> unholycrab, Did you do the 4000 IOPS?
[22:38:39] <TheDracle> I'd like to know actually.
[22:38:45] <TheDracle> I set up a mongolab instance recently..
[22:38:49] <retran> i'm guesing $800+
[22:38:51] <TheDracle> That just goes to aws.
[22:39:00] <unholycrab> TheDracle: yeah
[22:39:04] <TheDracle> I'd sort of like to move directly to AWS myself...
[22:39:05] <unholycrab> 4000 iops
[22:39:07] <retran> i paid $200+ for a mediocre mongo system
[22:39:10] <TheDracle> But I was worried about $$.
[22:39:12] <asturel> anyway then mongodb generally uses huge amount of mem?
[22:39:12] <retran> at ec2
[22:39:27] <TheDracle> asturel, It doesn't use the memory, it just needs the virtual address space.
[22:39:53] <asturel> ah okay
[22:39:56] <TheDracle> asturel, It's just a technical issue with 32-bit systems that their address spaces are really small, and so the virtual address spaces that each protected instance gets is small.
[22:40:06] <retran> mongo is like any well written db... the more mem you give it the better it runs
[22:40:20] <TheDracle> asturel, Increasing to 64-bits increases address space size from 2^32 to 2^64
[22:40:25] <asturel> how fast is mongohq ?
[22:40:33] <retran> it's 42 fast
[22:40:44] <asturel> i know the address space thing, but until now i didn't have problems
[22:40:57] <TheDracle> 2**64= 4294967296/1024/1024 = 4096 MB
[22:41:10] <TheDracle> Er, 2**32
[22:41:29] <TheDracle> 2**64 = 17592186044416 MB
[22:42:07] <TheDracle> It is exactly 42 arbitrary units fast.
[22:42:10] <asturel> my logs from 2012-06 didnt reach 1GB :D
[22:42:34] <asturel> in fileformat.. but i guess it would use more in mongodb?
[22:42:58] <TheDracle> asturel, That could very well be the case depending on the layout of your DB.
[22:43:10] <retran> indexes have a space cost
[22:43:15] <retran> so of course it will use more space
[22:43:31] <TheDracle> asturel, Every document you store will duplicate field names, so it's data + field names + indexes + other stuff.
[22:43:37] <retran> each record is indexed, necessarily by at least 1 field. and more if you make it usefully indexed
[22:43:44] <asturel> but index is always gnerated?
[22:43:50] <TheDracle> asturel, Also, I've seen things where you modify documents and embed stuff, and all sorts of size weirdness can happen.
[22:43:51] <retran> of course, the object Id
[22:44:00] <retran> is always generated
[22:44:26] <asturel> :/
[22:44:37] <TheDracle> It's one of those 'it depends' questions.
[22:44:45] <retran> if you're worring about how a db system stores space, then you're worrying about "the wrong thing"
[22:44:56] <retran> go solve other problems with your time
[22:44:58] <retran> you're wasting it
[22:45:02] <asturel> :D
[22:45:43] <retran> you have 1 job dealing with space and your db... accommodate it
[22:45:49] <retran> accommodate what it wants
[22:46:03] <retran> don't try to change your schema to accommodate space
[22:46:05] <asturel> well i asked because that mongohq is limited to 500MB :D
[22:46:24] <retran> i wonder if they are measuring data dir
[22:46:34] <retran> pretty silly way to meter their service
[22:46:37] <asturel> then what they measure?
[22:46:40] <retran> if you ask me
[22:46:41] <retran> i have no clue
[22:46:49] <retran> i would never use shared hosting
[22:46:51] <asturel> sry but i'm a complete noob at nosql.. never used it
[22:47:16] <asturel> just thought i switch to it from file based logging to spare the i/o
[22:47:19] <retran> just spin up a digitalOcean machine
[22:47:26] <retran> it will cost you $5 and run mongo great
[22:47:33] <retran> SSD direct IO
[22:47:44] <retran> way better than all but the most pricey ec2 options
[22:47:53] <asturel> hehe :D
[22:47:57] <asturel> its not a huge service
[22:48:12] <retran> just spend $5 then at DO
[22:48:26] <retran> if you're concerned about 500MB being to small
[22:48:26] <TheDracle> asturel, Join the cloud....
[22:48:30] <asturel> :D
[22:48:41] <asturel> how much do u get for advertising DO :D
[22:48:49] <retran> nothing
[22:48:52] <retran> they suck
[22:48:54] <retran> dont use them
[22:49:04] <asturel> i'm just interested in nosql. but i don't want to pay for it
[22:49:19] <retran> ok go sweep floors
[22:49:25] <asturel> haha :D
[22:54:02] <asturel> i guess i just start on my own box.. 2GB would be enough for 2+ years :D
[23:29:58] <ctp> hi folks. i have a mongodb shard running. 3 config servers, 2 shard nodes and 1 mongos. the question now is: how to add new users? where to add them? standalone mongo was simple: auth=true and db.addUser("admin", "MyVerySecretMongoDBPassword")
[23:30:38] <ctp> s/I have a mongodb shard running/I have a mongodb cluster running/ :)
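
A sketch of the usual 2.4-era answer to ctp's question, untested here: users are added through a mongos on the admin database, and the cluster members authenticate to each other with a shared keyFile rather than a bare auth=true (the host name is invented):

    // from a shell connected to the mongos:  mongo --host mongos.example.com:27017
    use admin
    db.addUser({ user: "admin", pwd: "MyVerySecretMongoDBPassword", roles: ["userAdminAnyDatabase"] })
    // then start every mongod and mongos with --keyFile /path/to/keyfile
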