[00:06:09] <blizzow> I think it's correct. I had to connect to mongo using: mongo --host foo.bar.com -u user -p password admin
[00:06:09] <blizzow> then I had to issue: use dbname and the command seems to work.
[00:06:37] <blizzow> So now I'm trying to figure out how to connect straight to dbname instead of having to connect to admin and then use dbname.
[00:08:38] <blizzow> with the goal of eventually putting it in a bash script.
[00:16:13] <magglass2> blizzow: replacing "admin" with the name of your db should cause it to select that db by default; does that not work?
[00:17:09] <blizzow> magglass2, that does not work. Even though the user I'm trying this with has the role of dbAdminAnyDatabase.
[00:18:49] <magglass2> blizzow: check out http://docs.mongodb.org/manual/tutorial/add-user-to-database/ Is the user defined for the DB you're trying to use?
[00:19:43] <magglass2> if not, add the user to that specific db then give it another try
[00:21:44] <magglass2> dbAdminAnyDatabase will give them access to use any DB, but not log in to any DB
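A minimal sketch of what blizzow is after, assuming a 2.4-era mongo shell that supports --authenticationDatabase (host, user, and db names are placeholders): authenticate against admin while landing straight in the target db, which also drops cleanly into a bash script:

    # log in against admin but start in dbname (mongo shell 2.4+)
    mongo --host foo.bar.com -u user -p password --authenticationDatabase admin dbname

Alternatively, per magglass2's advice, define the user on dbname itself and connect to it directly.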
[08:24:34] <Alex__> Let's say I have a schema: { aaa: [array of random length containing random letters from a to z] } And I want to select all elements which contain both a and b. Why doesn't this (implicit $and) work: db.sequences.findOne({ sequence: { $in: ["a"], $in: ["b"] } }) And this does: db.sequences.findOne({ $and: [{sequence: {$in: ["a"]}}, {sequence: {$in: ["b"]}}]})
[08:24:54] <Alex__> oh, and here is formatted version:
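The likely culprit: in a JavaScript object literal, duplicate keys collapse, so { $in: ["a"], $in: ["b"] } is just { $in: ["b"] } by the time the server sees it. A sketch of the idiomatic alternative, using the collection and field names from Alex__'s example:

    // $all matches documents whose array contains every listed element
    db.sequences.findOne({ sequence: { $all: ["a", "b"] } })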
[13:12:42] <ncls> but why couldn't you import it into one local mysql server and query on it with php or anything else ?
[13:13:10] <ncls> feedthecat: http://www.wampserver.com/en/ is very easy to setup on windows
[13:13:57] <ncls> it might look boring to do so, but it seems so much easier to me that I wouldn't do anything else
[13:14:14] <ncls> parsing an SQL dump file seems very hard
[13:14:21] <ncls> unless maybe some libraries do so
[13:16:09] <feedthecat> Yes, actually it is, but for this specific task I have, I am being forced to do just that and I have little idea how
[13:54:44] <brendan6> There is a warning on page http://docs.mongodb.org/manual/core/2dsphere/ of the Mongo docs that reads "Important MongoDB allows only one geospatial index per collection. You can create either a 2dsphere or a 2d per collection." but this doesn't appear to actually be the case. I have a test document that has 2x2dsphere indices and 1x2d index, all queryable with a $near. What gives?
[13:55:43] <Derick> brendan6: you can create them, but they're not necessarily used?
[14:00:38] <brendan6> Derick: pastebin with only 2 2dsphere indices. Looks good and both return results http://pastebin.com/dhfgQBgC
[14:00:57] <brendan6> Going to add the 2d in now on another field
[14:02:55] <brendan6> Derick: http://pastebin.com/C5SA4drr ...so although a result IS returned, it appears the index is not being used.
[14:04:29] <brendan6> I think this suggests that I can have any number of 2dsphere indices or any number of 2d indices, but not both? Am I correct to assume this?
[14:06:08] <Derick> well, I still think only one index is used, but I haven't tried that in a while
[14:06:15] <Derick> perhaps if you hint the index...
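A sketch of Derick's suggestion with placeholder collection and field names: hint or explain the query to see which of the geospatial indexes is actually chosen:

    // two 2dsphere indexes on one collection; explain() reveals which one $near picks
    db.places.ensureIndex({ locA: "2dsphere" })
    db.places.ensureIndex({ locB: "2dsphere" })
    db.places.find({
        locA: { $near: { $geometry: { type: "Point", coordinates: [0, 0] } } }
    }).explain()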
[14:17:09] <ajph> hey. i am pre-aggregating data by UTC day with sub-document hours (see: http://pastie.org/8682308). It's recently been decided that we must now be able to represent that data in different timezones (whole hours). i'm not sure whether to keep what i've got in UTC days, query all required days from MongoDB, and add up the hours to make timezone-specific days in the backend - or change my schema so data is stored per-hour. Any insight?
[14:18:16] <Derick> ajph: add the hours in the aggregation query
[14:22:17] <ajph> Derick: the aggregation i need groups by day. would that be possible? i'm using the aggregation framework.
[14:22:48] <Derick> just use $sum, 3600 (for an hour f.e.) before the group
[14:29:40] <orweinberger> Question regarding sharding. Mongo's sharding manual tells me that I should set all shards to be replica sets on production. However the whole purpose of sharding is to divide the data between different mongo instances. So if all 3 shards are replica sets, doesn't that mean that they all hold the same data set? What's the point of sharding and using replica sets then?
[14:30:02] <joannac> no, each shard is backed by a replica set
[14:30:18] <joannac> so you have 3 shards, each of which is a e.g. 3 node replica set
[14:30:46] <ajph> Derick: i'm sorry, i don't understand. my aggregation looks like this: http://pastie.org/8682352 (excuse the Golang) - really appreciate your help
[14:30:53] <joannac> does that answer your question orweinberger?
[14:32:13] <joannac> ajph: if you need to change the timezone, have a $project clause first and then add the relevant offset to get your date in the right timezone
[14:33:12] <joannac> I'm not sure how separating by hour would help you
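A sketch of joannac's $project approach, assuming the raw data carries a BSON date field ts (ajph's per-day schema would need hour-level timestamps for this to apply); adding milliseconds to a date in $add yields a shifted date:

    // group by day in UTC+2 by shifting the timestamp before extracting date parts
    db.events.aggregate([
        { $project: { value: 1, localTime: { $add: [ "$ts", 2 * 3600 * 1000 ] } } },
        { $group: {
            _id: { y: { $year: "$localTime" }, m: { $month: "$localTime" }, d: { $dayOfMonth: "$localTime" } },
            total: { $sum: "$value" }
        } }
    ])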
[14:33:15] <orweinberger> joannac: I think so, you mean that my mongos 'sees' my shards as standalone but the shards themselves are replica sets to avoid any data loss within that shard?
[14:33:36] <joannac> orweinberger: well, mongoS is well aware that your shards are replica sets, but yes
[14:34:42] <orweinberger> joannac: OK, so I need to run sh.addShard() 3 times in case I have 3 shards, one with each rs0/rs1/rs2 respectively, correct?
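That's the shape of it: one sh.addShard() per shard, run against the mongos (hostnames below are placeholders):

    // one call per replica-set-backed shard
    sh.addShard("rs0/host0a:27017,host0b:27017,host0c:27017")
    sh.addShard("rs1/host1a:27017,host1b:27017,host1c:27017")
    sh.addShard("rs2/host2a:27017,host2b:27017,host2c:27017")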
[14:35:21] <brendan6> Derick: All my tests are indicating that multiple 2dsphere indices are absolutely fine. I feel that the documentation should read "Important MongoDB allows only one type of geospatial index per collection. You can create either a 2dsphere or a 2d per collection."
[14:37:54] <joannac> brendan6: Have you tried geoNear?
[14:38:47] <orweinberger> joannac: Thanks for your help!
[14:42:56] <ajph> joannac: even if i get my query date in the right timezone, how does that help me output results per-day in user-time, when the data is stored as 1 object per UTC date?
[14:43:11] <brendan6> joannac: I am using $near like so http://pastebin.com/rB6tjxkG. It's working great, I just think the documentation might be a little vague when explaining. I made a comment on this commit https://github.com/mongodb/docs/commit/e358f8d995d325e72000d135c519fd9c4dfeb685
[14:45:13] <ajph> there is one object like this per UTC day: http://pastie.org/8682308
[14:45:42] <michael_____> does it make sense to store nearly the same documents in different collections? like unpublished documents and published (second one should not be modified anymore)
[14:47:54] <ajph> if i can get any paid 10gen help on this i'd be happy to do that
[15:12:08] <ajph> joannac: i think i see what you're saying now. the date field in my object is just a YYYY-MM-DD with no hour data. the hours are a subdocument, so that won't work
[15:13:01] <ajph> the hour fields are just incremented on an upsert as per: http://blog.mongodb.org/post/65517193370/schema-design-for-time-series-data-in-mongodb
[15:19:17] <Nodex> that blog post kind of contradicts what a lot of people say about not having values as keys, certainly numeric ones
[15:19:26] <Nodex> (personally I don't agree with it but hey)
[15:20:12] <Nodex> it would be far better to push the objects into an array as the array will have a numeric key anyway
[15:20:18] <ajph> Nodex: i believe it's the only way to do an increment on an upsert
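For reference, a sketch of the upsert-with-$inc pattern from that blog post, with placeholder names: the dotted hour key is created on first touch, which an array position can't easily replicate in a single upsert:

    // one document per day; bump one hour bucket and the day total atomically
    db.stats.update(
        { _id: "site-1/2014-01-28" },
        { $inc: { "hours.14": 1, total: 1 } },
        { upsert: true }
    )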
[15:21:10] <michael_____> when does it make sense to use another collection for the same document?
[15:21:48] <ajph> michael_____: stale data that you're not going to use much?
[15:22:29] <michael_____> ajph: yes, and it also should not be modified anymore, kind of a snapshot
[15:22:29] <Nodex> if it's archived I would save the space and write it out to disk as a JSON document personally, really depends if you're ever likely to access it
[15:23:59] <michael_____> you can imagine the difference between published data that should not be edited again and drafted data that will be edited again
[15:26:39] <orweinberger> Question regarding sharding. I have 3 shards for 3 standalone mongod instances. I have a script running on a different machine which is pushing data to the mongoS instance. Now during the process I stop the mongod process on one of the shards to see that the sharding is supporting this failure. What happened was that there was no insert error in my script, everything seemed to be working, however when I checked I saw that some of the pushed documents were missing. I'm guessing mongoS was trying to push them to the dead shard. Is this possible? Did I overlook something?
[15:29:12] <orweinberger> Should I configure anything to tell mongoS to 'assure' data so that when a shard dies it will not try to push future data into it until it comes back?
[15:31:37] <Joeskyyy> orweinberger: You'd need to introduce sharding with a replset to do that.
[15:31:48] <Joeskyyy> Each shard has its range of the shard key you chose when you sharded the collection.
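For context, a sketch of where those ranges come from (db, collection, and key names are placeholders):

    // after this, each shard owns a contiguous range of userId values
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.events", { userId: 1 })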
[15:31:56] <b0ss_> Why does MongoDB use JSON and not XML, for instance?
[15:32:36] <kali> b0ss_: mongodb does not use json, it uses bson
[15:33:00] <Nodex> and if it did, xml is still bloated in comparison
[15:34:14] <orweinberger> Joeskyyy: But the replSet is there to ensure that data is not lost for each specific shard, am I wrong? I'm not interested in any mirrors or replicas of the data. I just want to shard the data between several instances. Is this not possible to achieve whilst making sure that if one shard dies the mongoS won't direct any queries to it?
[15:34:53] <kali> b0ss_: also, i think the master plan, at some point was to have a full-javascript stack (from db to browser) before focusing exclusively on the database
[15:35:26] <Joeskyyy> orweinberger: Not really without some crazy balancing script
[15:35:45] <orweinberger> So mongo forces me to replicate my data if I want sharding? that's weird...
[15:35:50] <b0ss_> a full-js stack? Would they eliminate the need for a standard back-end language ?
[15:36:15] <Nodex> why would you use XML over a json like structure given the choice?
[15:36:30] <Nodex> seeing that you're trying to stuff data into memory in an efficient way
[15:36:47] <Joeskyyy> orweinberger: You can send those requests, but you'll have data loss.
[15:36:54] <Joeskyyy> Because each shard has a specific dataset it's expecting.
[15:37:02] <Joeskyyy> So if that shard is offline, there's nowhere that data can go.
[15:37:18] <Joeskyyy> Because the other shards don't have the bounds for the shard key.
[15:37:47] <orweinberger> I thought the mongoS's job was to connect between the shards and balance the load, so I figured mongoS should also know the state of each mongod instance to avoid sending data to a dead one.
[15:38:06] <orweinberger> oh I see what you mean about the shard keys
[15:38:10] <kali> b0ss_: i think the plan was to use js everywhere, but it will never see the light as far as i know
[15:38:40] <Joeskyyy> Yeah. It does try to send it technically, it just derps out because the shard is dead.
[15:38:54] <Joeskyyy> The mongoS says "Hey, you fit this shard key bounds, I'm sending you off"
[15:39:37] <orweinberger> Well it should at least output some kind of error I would think
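One way to surface such an error from the shell, sketched here with a placeholder collection: ask for write acknowledgement, which should report a failure when the target shard is unreachable (drivers express the same thing as a write concern of w:1 or higher):

    db.mycoll.insert({ x: 1 })
    // err is non-null if the write could not be acknowledged
    printjson(db.runCommand({ getLastError: 1, w: 1 }))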
[15:39:39] <b0ss_> kali: I wonder how suitable Node.js is for MongoDB !
[15:39:53] <kali> b0ss_: i wonder how suitable Node.js is for anything
[15:40:12] <Nodex> kali : it does a pretty good job with some APIs
[15:40:34] <kali> Nodex: yeah, it was an easy troll.
[15:41:53] <Nodex> kali : I personally don't like it for Mongo
[15:42:23] <Joeskyyy> orweinberger: It's because of the mongoS being stateless. It doesn't report errors like that, rather the mongod's do.
[15:44:47] <orweinberger> Joeskyyy: but the mongod instance is dead :) OK I get your point so I HAVE to set replSet if I want to use sharding correctly..
[15:46:11] <Joeskyyy> don't have to, but that's the best practice.
[15:46:18] <Joeskyyy> Just in case your shard goes offline
[15:54:23] <orweinberger> Joeskyyy: What do you mean I don't have to? If I don't set up the replica sets and one of my shards dies, I will lose data for sure.
[15:56:21] <Nodex> b0ss_ : You have to grab a document and loop it
[15:58:02] <b0ss_> Nodex: there might be documents with different contents/structure. I cannot rely on only one, don't you agree? I should iterate over all!
[16:00:13] <Nodex> personally, if I have docs like that I just store a schema in a collection and update it, i.e. ONE document per collection name that has EVERY field
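A sketch of the grab-and-loop approach from above, with a placeholder collection name (as Nodex notes below, this scans everything, so it's slow on large collections):

    // collect the set of top-level field names seen anywhere in the collection
    var fields = {};
    db.mycollection.find().forEach(function(doc) {
        for (var key in doc) { fields[key] = true; }
    });
    printjson(Object.keys(fields));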
[16:05:00] <kali> b0ss_: you may want to have a look at https://github.com/variety/variety
[16:05:42] <Nodex> gonna be a long running process on large collections
[16:05:48] <Nodex> better to keep a running total imo
[16:09:33] <Nodex> [15:57:53] <Nodex> personally, if I have docs like that I just store a schema in a collection and update it, i.e. ONE document per collection name that has EVERY field
[16:16:21] <Gargoyle> b0ss_: Check out the rockmongo source and see how they do it.
[17:48:07] <salty-horse> what if I don't disable autoIndexId? :) is it unique, then? I looked over some of my collections, and none have a "unique" on the _id_ index
[17:48:19] <Derick> salty-horse: yeah, that's why I said:
[17:48:22] <Derick> 17:44 <@Derick> so I would say it's implicitly unique if it's on _id
[17:48:37] <Nodex> I think the main point is that in order to make ObjectIds non-unique-constrained you have to do some work
[17:48:50] <Nodex> so it's not something your code would be doing without you knowing it
[17:49:11] <Derick> you can cheat though, by making an index:
[17:51:29] <Derick> salty-horse: yeah, don't do this
[17:52:51] <salty-horse> Derick, I think I reached an abnormal situation where I have a vanilla _id_ index, but two records with the same _id (created explicitly by me with "save()", thinking it will update the other record). testing
[17:53:29] <joannac> you shouldn't be able to have 2 docs with the same _id field
[17:53:35] <joannac> Are you sure that's what you have?
[17:54:54] <salty-horse> yup. it's in a sharded environment. let me write a test-case in pastebin
[21:16:08] <TheDracle> I've tried this, but it doesn't seem to work.
[21:16:09] <unholycrab> if i do db.collection.ensureIndex( { a: 1}, { background : true}) on the primary, will the secondaries automatically start building the a: 1 index after it finishes?
[22:31:54] <TheDracle> asturel, Is there a specific reason why you have to use 32-bit?
[22:32:06] <TheDracle> asturel, I would say.. Mongodb was not designed to work with 32-bit systems.
[22:32:19] <asturel> well there is no reason to use 64 for me.. with 3-4GB mem
[22:32:58] <ruphos> is there a gain from using 32-bit over 64-bit?
[22:33:12] <TheDracle> asturel, "Note that virtual address space is not increased by PAE. PAE is purely a Physical Address Extension to allow you to have more than about 3.5GB of RAM."
[22:39:56] <TheDracle> asturel, It's just a technical issue with 32-bit systems that their address spaces are really small, and so the virtual address spaces that each protected instance gets is small.
[22:40:06] <retran> mongo is like any well written db... the more mem you give it the better it runs
[22:40:20] <TheDracle> asturel, Increasing to 64-bits increases address space size from 2^32 to 2^64
[22:54:02] <asturel> i guess i just start on my own box.. 2GB would be enough for 2+ years :D
[23:29:58] <ctp> hi folks. i have a mongodb shard running. 3 config servers, 2 shard nodes and 1 mongos. the question now is: how to add new users? where to add them? standalone mongo was simple: auth=true and db.addUser("admin", "MyVerySecretMongoDBPassword")
[23:30:38] <ctp> s/I have a mongodb shard running/I have a mongodb cluster running/ :)
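A sketch for the 2.4-era shell: users for a sharded cluster are created through the mongos (admin credentials end up on the config servers), not on the individual shards, and the cluster members authenticate to each other via a shared keyFile:

    // run this connected to the mongos, not a shard
    use admin
    db.addUser("admin", "MyVerySecretMongoDBPassword")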