PMXBOT Log file Viewer


#mongodb logs for Thursday the 6th of September, 2012

[00:07:21] <_m> EvanCarroll: Once you're logged in, there should be a "create issue" button next to the search bar. Or use the keyboard shortcut 'c'
[00:09:54] <_m> That's a lot of complaints for "I don't half care," btw.
[00:35:52] <heph_> hello
[00:36:31] <heph_> anyone here?
[00:39:33] <goleldar> hello
[00:43:56] <bizzle> In salat, how does one write a query that only returns a subset of the fields in a document?
[00:43:59] <goleldar> is it common practice, when referencing another collection by including its collectionId, to also include in the document any information you would like available from that referenced collection, to avoid making another query?
[00:44:10] <bizzle> do you make two different objects?
[00:45:03] <bizzle> like a UserBasic(email: String, username: String), UserDetailed(email.. bio... interests...)
[00:45:18] <goleldar> yes
[00:45:18] <bizzle> and then have them query the same collection?
[00:46:01] <bizzle> cool thanks
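What bizzle needs underneath Salat is MongoDB's field projection, the second argument to find() in the shell. As a hedged sketch of what a projection document does, in plain JavaScript (the helper, collection, and field names are invented for illustration):

```javascript
// Sketch of a MongoDB field projection: keep only the listed fields.
// In the shell this would be db.users.find({}, { email: 1, username: 1 }).
function applyProjection(doc, projection) {
  const out = {};
  for (const field of Object.keys(projection)) {
    if (projection[field] && field in doc) out[field] = doc[field];
  }
  return out;
}

const user = { email: 'sam@example.com', username: 'sam', bio: '...', interests: ['a'] };
const basic = applyProjection(user, { email: 1, username: 1 });
// basic holds only the "UserBasic" fields
```

So a single collection can back both a UserBasic and a UserDetailed object; only the projection differs.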
[00:46:09] <goleldar> i can post up my json for the collection
[00:48:17] <goleldar> http://pastie.org/private/or3wx4tfyu8znzxzcmdlgw
[00:48:46] <goleldar> i started to design my app and ran into a wall with modeling data that would cause me to use EAV in mysql
[00:56:21] <retran> anyone talking in here, about to deploy a mongodb for the first time
[00:56:45] <retran> i'm so freaking ... excited
[00:57:54] <goleldar> nice
[00:58:23] <goleldar> you already designed it?
[01:03:17] <retran> goleldar, no, it's a development system i'm 'deploying', but we hope to be live with it in under 1 month
[01:03:52] <retran> there's a current system that is based on mysql, which, is pretty clumsy
[01:04:30] <retran> it's a system for creating 'profiles' for actors showing demo reels, press photos, etc
[01:05:09] <goleldar> nice
[01:05:24] <goleldar> can you take a look at my collection schema and tell me if it looks good? i am just getting started
[01:05:33] <goleldar> http://pastie.org/private/or3wx4tfyu8znzxzcmdlgw
[01:07:23] <retran> interesting... but right off the bat it looks like
[01:07:39] <retran> you're not taking advantage of the fact that the schema is flexible
[01:07:57] <retran> in a given dataset
[01:08:22] <goleldar> i tried modeling this data in mysql and its a nightmare
[01:08:35] <retran> i can understand why you'd want to return a JSON document like that in, say, a web service request
[01:08:46] <retran> but for storing it, you wouldn't have to be that rigid/formal
[01:09:56] <retran> i guess if that's just your broad schema, wtf, it looks fine
[01:10:11] <retran> brb
[01:10:12] <goleldar> k
[01:19:25] <fractalid> hey - I have a hanging connection problem w/mongodb 2.2.0. on the server I've got about 3000 connections, 99% of which show as "established" in netstat
[01:19:39] <fractalid> but on the clients, the vast majority of connections show as "fin_wait1"
[01:20:19] <fractalid> on top of that, according to mongodb.log, it's refusing connections because it's hit its max of 6553
[01:22:33] <fractalid> I'll be afk for a bit, sorry - time to go home, I just wanted to get this out there. should be back in half an hour or so
[01:22:36] <fractalid> thanks in advance :)
[01:32:24] <retran> wish i had more experience to be able to give meaningful feedback on a schema
[02:17:00] <Bilge> How can you display stats for a given document such as storage space?
[02:57:35] <crudson> Bilge: what other than storage size were you hoping for? Determining the size of an object depends on what language you are using.
[03:38:23] <Init--WithStyle-> Does anyone know how running 6000 consecutive DB inserts into a mongoDB via a javascript driver could be causing the process to consume massive amounts of RAM??
[03:49:37] <Bilge> What? Why would it depend on language
[03:49:51] <Bilge> And I dunno, I just thought there would be some kind of stats thing, but I really just want to know how big documents are
[03:55:58] <Gavilan2> Is there any good "something" for storing and retrieving arbitrary objects in mongo from javascript (i want to associate the objects with their class once they get back)?
[04:09:29] <crudson> Bilge: you can get a fair amount of stats for server and dbs, but not really at the document level, other than average (db.stats()) and individual size (Object.bsonsize(o))
[04:11:43] <Bilge> In particular I really want to know if storing something as binary is smaller than storing it as a hex string (it should be)
[04:11:48] <Bilge> But I just want to be able to SEE it
[04:12:56] <Bilge> I can't normally because of the weird output format e.g. BinData(2, "BAAAAAA=")
[04:13:12] <Bilge> No idea how much storage it's really taking up
[04:13:15] <crudson> Bilge: as I said, use Object.bsonsize(o) from the shell. Otherwise google "mongodb bson size <some language>" for other languages, as there are different methods.
[04:14:36] <Bilge> What is "bsonsize"?
[04:15:08] <Bilge> A 32-char string has bsonsize of 315
[04:15:14] <Bilge> That doesn't seem right or make any sense to me
[04:17:54] <crudson> Bilge: a '32-char string' is not a valid bson object on its own.
[04:22:06] <Bilge> I wrapped it in "new String()"
[04:22:15] <Bilge> I have no idea what I'm doing or what a "bson object" is
[04:23:22] <crudson> Bilge: I'd read this to start: http://www.mongodb.org/display/DOCS/BSON
[04:31:59] <Ventrius> Good Evening!
[04:32:11] <Ventrius> I have an interesting optimization question
[04:33:04] <Ventrius> Is it at all efficient to have a clone of a db sitting on your local machine, and have it copy data from some far off machine
[04:33:34] <Ventrius> I ask because I need to minimize latency between queries of a single entry collection
[06:37:22] <null_route> Hey all - Where can I get "internals" information on how the Config servers work?
[06:37:40] <null_route> ...or do I just delve into the code?
[06:48:39] <null_route> For example, if I understand correctly, the Mongos processes keep the config servers updated
[06:48:55] <null_route> where does that happen in the code, or is there a wiki page which describes the process?
[07:24:57] <cmex> good morning
[07:25:39] <null_route> When performing a moveChunk, it seems as though " the destination shard connects to the config database and updates the chunk location in the cluster metadata" . How does the destination shard know where the config server is?
[07:32:02] <[AD]Turbo> hi there
[07:43:46] <ron> [AD]Turbo: do you ever get a reply to that? ;)
[07:46:33] <NodeX> I think it's auto magic
[07:46:45] <NodeX> every day is a different greeting around the same time
[07:48:57] <ron> different greeting? I thought it was the same
[07:49:07] <ron> and I thought he actually likes us. oh well.
[07:49:22] <NodeX> sometimes it's "Hola"
[07:49:33] <Gargoyle> Morn' o/
[07:49:41] <NodeX> hi2u2
[07:53:33] <ron> right.. I remember Hola.. which is odd considering he's in Italy.
[07:59:04] <cmex> NodeX:good morning
[07:59:58] <cmex> NodeX: tell me, is there any visual tool to see what's happening between replica set servers?
[08:00:30] <cmex> someone?
[08:00:52] <NodeX> cmex : I don't, sorry
[08:12:25] <Lujeni> cmex, http://blog.mongodb.org/post/28053108398/edda-a-log-visualizer-for-mongodb
[08:14:23] <Gargoyle> cmex: mms
[08:14:41] <Gargoyle> cmex: https://mms.10gen.com
[08:15:53] <cmex> Gargoyle: good morning :) we are using a "database master 5" here. there's an option to add a replica set
[08:16:05] <cmex> but no success doing it :((
[08:16:24] <cmex> Lujeni: thanks, i'll check this out
[08:16:39] <Gargoyle> cmex: Read the docs again! I have no idea what database master 5 is!
[08:16:50] <cmex> ok i will thanks
[08:18:34] <dstorrs> Hey folks. Anyone seen this before: Thu Sep 6 08:17:20 uncaught exception: error { "$err" : "socket exception", "code" : 11002 } ?
[08:18:54] <dstorrs> I have a 3 shard cluster (third one just added today).
[08:19:08] <dstorrs> The third one starts and I can connect, but then all commands yield this.
[08:20:26] <dstorrs> I'm Googling around and seeing something about "entire shard down" / "replica set primary unavailable" / "auth information unavailable" but none of those seem to apply.
[08:29:29] <null_route> dstorrs: did you REMOVE any members at any point?
[08:30:23] <dstorrs> yes. This shard was originally added last week and was not having data sent across. we added / removed it a couple of times before we got it working today
[08:31:35] <null_route> My guess: it's related to an annoying property of MongoDB - REMOVING a member causes the PRIMARY to step down and hold elections. When it does this, it closes all TCP connections. This causes the mongos processes, obviously, to lose connectivity.
[08:32:05] <null_route> they will reconnect, but only after a query to each shard
[08:32:25] <null_route> Your queries should be returning OK after a retry
[08:32:32] <dstorrs> I have stopped / started the mongod and mongos on this machine several times.
[08:32:43] <dstorrs> ok.
[08:32:55] <dstorrs> I'll kick all the mongo(d|s) instances, then query them.
[08:32:57] <dstorrs> thanks
[08:35:10] <dstorrs> whew. yep, worked.
[08:35:14] <dstorrs> thanks null_route
[10:05:14] <PDani> hi
[10:05:56] <PDani> can somebody tell me why exactly 2GB is the upper limit of MongoDataFile size?
[10:09:03] <Azoth> only on 32 bit systems
[10:09:47] <Azoth> and 2^31 = 2GB
[10:10:54] <algernon> the files are capped at 2GB everywhere.
[10:11:17] <Azoth> per file or per DB ?
[10:11:21] <algernon> per file.
[10:12:26] <Azoth> "32-bit builds are limited to around 2GB of data. See here for more information on the 32-bit limitation." - from the mongo download page
[10:12:32] <algernon> but the reason is probably similar, with the addition that a higher cap would increasingly waste more space.
[10:12:43] <NodeX> ^^
[10:12:44] <algernon> Azoth: the question was not about db size, but datafile sizes.
[10:49:46] <golddog> Hello
[10:51:05] <golddog> I can't seem to find any examples of how best to merge duplicate documents in a mongodb collection. I thought it sounded like a job for map reduce but i can't find any examples of map reduce that are not basically just collating statistics
[10:54:29] <golddog> if anyone could point me to any examples of how to do this I'd be eternally grateful
[11:12:03] <kali> golddog: it's not an easy problem, but if you can come up with a signature for your document (some id, or some generated id) you can group your collection counting the occurrences of this signature, then look at the counters > 1
[11:19:06] <golddog> kali: every document has an id and I only want to merge the ones with equal ids, so identifying the documents is no problem. Basically I have the output of an sql join in a collection and i want to convert from 5 documents with mostly identical properties, except for one property, to one document with that property as an array, as it should be
[11:20:53] <golddog> but I think I'll just remove everything and fix it before i insert it instead for now, just seemed like something one might like to do rather often
[11:21:21] <kali> so what's the problem ? just remove the dups ?
[11:23:42] <golddog> kali: the problem is basically this: I have {id: 1, name: 'foo', tag: 'dog'}, {id: 1, name: 'foo', tag: 'cat'} (in a large scale) and I'd like to merge every document with an identical id into {id: 1, name: 'foo', tag: ['dog', 'cat'] }
[11:24:16] <kali> ha, ok
[11:24:18] <golddog> I'm sure it's either impossible or incredibly easy but I just can't find any examples
[11:24:39] <kali> it's not that difficult, but it's considered out of mongo's strict scope
[11:24:55] <kali> it's something you would typically do in script or application code
[11:25:36] <kali> you could also imagine doing it in mongodb-side javascript, but mongo-side javascript is a Bad Idea for production
[11:26:05] <golddog> wasn't intending to do it in production, just trying to do a one-time fix of a broken data-set
[11:26:14] <golddog> so I'll be doing it in code instead.
[11:26:20] <golddog> but thanks for the help
[11:26:36] <kali> you're welcome
[11:28:05] <kali> Goopyo: as a matter of fact, I'm realizing the merging or rewriting you describe falls squarely within the scope of the aggregation framework
[11:28:29] <kali> Goopyo: not you, the right guy has left
[11:28:34] <kali> i hate when they do that.
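golddog's merge (the shape at 11:23) is straightforward in the application code kali recommends. A minimal plain-JavaScript sketch, assuming the documents fit in memory (the function name is made up); the merged documents would then be written back to a clean collection:

```javascript
// Collapse documents that share `key`, gathering the differing `field`
// into an array: {id:1,tag:'dog'},{id:1,tag:'cat'} -> {id:1,tag:['dog','cat']}.
function mergeDuplicates(docs, key, field) {
  const byId = new Map();
  for (const doc of docs) {
    const existing = byId.get(doc[key]);
    if (existing) existing[field].push(doc[field]);
    else byId.set(doc[key], { ...doc, [field]: [doc[field]] });
  }
  return [...byId.values()];
}

const merged = mergeDuplicates(
  [{ id: 1, name: 'foo', tag: 'dog' }, { id: 1, name: 'foo', tag: 'cat' }],
  'id', 'tag'
);
// merged is one document per id, with the tag values collected as an array
```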
[11:38:15] <Vile> I have documents like {_id: {a:1,t:"2012-01-01T00:00:01Z"}, value:123}, {_id: {a:1,t:"2012-01-01T00:00:51Z"}, value:100}, {_id: {a:1,t:"2012-01-01T00:01:41Z"}, value:0}. I want to build minute averages on it using the fact that value is considered constant until changed
[11:39:17] <NodeX> aggregation framework should be able to perform that
[11:40:57] <Vile> Currently i'm using map/reduce for that, but it is very slow, due to db.coll.findFirst({ t: {$lt:"2012-01-01T00:01:00Z"} })
[11:41:20] <Vile> in every map call
[11:41:49] <ppetermann> can you do incremental map/reduces?
[11:41:52] <NodeX> read the aggregation framework docs
[11:46:22] <Vile> ppetermann: yes, i'm doing them incrementally
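For what it's worth, Vile's model (a value holds until the next change) can also be averaged in application code, avoiding the per-map lookup. A sketch, assuming samples are sorted by time in seconds and at least one sample exists at or before the window start (all names here are invented):

```javascript
// Time-weighted average of a step function over the window [start, end):
// each value is treated as constant until the next sample (Vile's model).
// Assumes `samples` is sorted by t (seconds) and at least one sample
// falls at or before `start`.
function stepAverage(samples, start, end) {
  let sum = 0;
  let prev = null; // last sample seen at or before the current position
  for (const s of samples) {
    if (s.t <= start) { prev = s; continue; }
    if (s.t >= end) break;
    sum += prev.value * (s.t - Math.max(prev.t, start));
    prev = s;
  }
  sum += prev.value * (end - Math.max(prev.t, start));
  return sum / (end - start);
}

const samples = [
  { t: 1, value: 123 }, { t: 51, value: 100 }, { t: 101, value: 0 },
];
const avg = stepAverage(samples, 60, 120);
// minute [60,120): value is 100 for 41s, then 0 for 19s
```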
[12:41:56] <cmex> hi all
[12:43:31] <cmex> someone knows this error? "Unable to connect to a member of the replica set matching the read preference Primary"
[12:43:38] <cmex> can't find anything about it on the internet
[12:44:24] <kali> is your replica set happy ? does it have a primary ?
[12:46:53] <unsleep> mmm im a bit confused about something...
[12:47:15] <unsleep> is the mongo REST and HTTP interface public? O_o
[12:48:19] <cmex> it has primary
[12:48:31] <cmex> it only happens when i want to connect from my pc
[12:48:47] <cmex> when the application in production connects, it's ok
[12:49:25] <unsleep> http://37.247.51.141:28017/
[12:49:30] <unsleep> :-?
[12:49:37] <kali> cmex: it smells like firewall
[12:49:55] <kali> unsleep: it smells like no firewall
[12:52:14] <unsleep> it sounds weird to need to configure a firewall to avoid it ... why can't i disable it?
[12:54:26] <kali> unsleep: I guess not enough people have expressed the need...
[12:54:29] <algernon> unsleep: you can set nohttpinterface = true in /etc/mongodb.conf to disable it completely.
[12:56:20] <kali> algernon: ho, nice, I did not know that
[12:56:29] <algernon> the http interface defaults to localhost:28017, though
[12:56:51] <unsleep> solved :D
[12:56:51] <algernon> but perhaps bind_ip affects it too, dunno.
[12:57:19] <algernon> kali: neither did I, I grepped mongodb.conf for http :P
[12:59:33] <unsleep> it would be better to ask for a password. i think it will improve a lot in the future
[13:03:11] <unsleep> many thanks!
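For reference, the setting algernon found, as it would appear in /etc/mongodb.conf (a sketch; the bind_ip value is a placeholder, and whether bind_ip affects the HTTP interface is, as noted above, unconfirmed):

```ini
# Disable the built-in HTTP status interface (port 28017) entirely.
nohttpinterface = true
# bind_ip restricts the interfaces mongod listens on; 127.0.0.1 is an example.
bind_ip = 127.0.0.1
```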
[13:26:49] <cmex> kali:
[13:30:00] <Zelest> i bet he tried to say "don't use PHP 4.. ever!" :o
[13:31:02] <ravana> hi all
[13:32:07] <ravana> what is the type of unit which collection.datasize() returns?
[13:32:26] <ravana> is that Mega Byte?
[13:35:14] <ravana> hello..
[13:35:21] <ravana> is there someone who can help me?
[13:37:58] <algernon> ravana: kilobytes, by default (http://docs.mongodb.org/manual/reference/database-statistics/)
[13:38:20] <algernon> err, bytes by default. sorry.
[13:38:53] <ravana> thanks dude.
[13:38:55] <ravana> :)
[13:40:06] <thecodecowboy> Hi. I am considering using MongoDB for this data requirement: http://programmers.stackexchange.com/questions/161602/would-this-data-requirement-suit-a-document-oriented-database. Anyone have any advice on how to structure the data? Would it make sense to allow users to create collections ??
[13:44:40] <ppetermann> thecodecowboy: you know there is a max-amount-of-collections cap?
[13:45:09] <thecodecowboy> ppetermann, i do now!
[13:45:48] <DiogoTozzi> Morning guys!
[13:46:00] <thecodecowboy> ppetermann, baby steps in nosql, interested to learn and wondering if my requirement is a good fit. any thoughts?
[13:46:11] <DiogoTozzi> Has anybody here tested the Full Text Search feature in 2.3?
[13:46:15] <ppetermann> thecodecowboy: http://www.mongodb.org/display/DOCS/Use+Cases you might want to read this one
[13:49:34] <thecodecowboy> ppetermann, yeah, i am looking for the advice of someone with experience of mongodb rather than documentation which will not mention my use case
[13:49:54] <thecodecowboy> ppetermann, but thanks for the link
[13:50:18] <ppetermann> thecodecowboy: well this one is about use cases
[13:50:22] <ppetermann> :)
[13:52:24] <ppetermann> thecodecowboy: also i don't think you need one collection per user
[13:52:33] <ppetermann> i'd write a bit more, but i'm quite work-busy atm
[13:52:59] <thecodecowboy> ppetermann, no worries. i will do some more reading....
[13:58:51] <ravana> and the other thing is Object.bsonsize(db.pages.find({uname:"sam"}))
[13:59:05] <ravana> it says 50 bytes
[13:59:11] <ravana> i wonder
[13:59:15] <ravana> how this is possible
[13:59:59] <algernon> bson overhead + padding, I assume.
[14:00:14] <ravana> can you please explain this?
[14:00:27] <ravana> :D
[14:00:31] <algernon> there's _id, which is iirc 12 bytes in itself.
[14:00:39] <ravana> yes
[14:00:39] <algernon> and the whole thing is BSON encoded, which adds some more bytes
[14:01:34] <Aartsie> Derick: Tonight's webinar is at 19:00 Dutch time, right?
[14:02:03] <Vile> NodeX: Read the aggregation docs. Doesn't look like something that's going to help me :(
[14:02:16] <ravana> that's not the case, algernon:
[14:02:25] <algernon> ravana: the string is 3 bytes + null + 4 byte length; then there's the field name, 5+1 bytes + length + type; then there's the _id, 3+1 bytes for the name, 1 for the type, 12 for the data, and wrap it in a document (4+1+1 bytes)
[14:02:32] <ravana> my document has many elements
[14:02:41] <algernon> aha.
[14:03:12] <ravana> it contains nested documents
[14:04:11] <ravana> i guess 1 ASCII character is equal to 1 byte
[14:04:42] <ravana> so the 50 bytes means 50 ASCII chars
[14:04:51] <ravana> don't laugh the way i think :D
[14:05:09] <null_route> Take a look at http://www.slideshare.net/mdirolf/inside-mongodb-the-internals-of-an-opensource-database
[14:05:16] <null_route> it has some cool byte-level descriptions of the BSON encoding
[14:05:24] <null_route> A slide on the _id field
[14:05:32] <null_route> and a byte-level description of the protocol
[14:06:37] <algernon> ravana: I suppose the bsonsize there is not the size of the document, but the db object instead
[14:06:49] <algernon> at least http://www.mongodb.org/display/DOCS/dbshell+Reference seems to suggest that.
[14:07:37] <ravana> yes i think it is guys
[14:07:57] <algernon> pasting a document into Object.bsonsize() returns the correct result
[14:08:01] <ravana> "but the db object instead" I agree with this
[14:10:24] <ravana> "pasting a document into Object.bsonsize() returns the correct result" i didn't get this
[14:10:56] <ravana> algernon:
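algernon's byte counting can be checked by hand. A BSON document is a 4-byte length prefix, then one element per field (a type byte, the null-terminated field name, the payload), then a terminating zero byte. For a hypothetical document like { _id: ObjectId(...), uname: "sam" }, the arithmetic as a plain-JavaScript sketch:

```javascript
// Back-of-the-envelope BSON size of { _id: <ObjectId>, uname: "sam" },
// following algernon's breakdown. Not a BSON library - just the arithmetic.
const sizes = {
  docLength: 4,                              // leading int32: total length
  idElement: 1 + ('_id'.length + 1) + 12,    // type byte + "_id\0" + 12-byte ObjectId
  unameElement: 1 + ('uname'.length + 1)     // type byte + "uname\0"
              + 4 + ('sam'.length + 1),      // int32 string length + "sam\0"
  terminator: 1,                             // trailing 0x00
};
const total = Object.values(sizes).reduce((a, b) => a + b, 0);
// total: 37 bytes for this two-field document
```

This is also why ravana's numbers look odd: Object.bsonsize(db.pages.find(...)) measures the cursor object, not the document, whereas db.pages.findOne(...) returns an actual document to measure.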
[14:36:21] <joshua> what exactly goes on when you initiate a replica set? The local database is 6GB now
[14:36:56] <Vile> joshua: logs start being recorded
[14:37:40] <jY> joshua: it pre-allocates i think 10% of usable space
[14:38:56] <joshua> Is it possible for one server to be arbiter for 2 sets, or will I have to run on two ports? (Not my design)
[14:39:26] <jY> joshua: run 2 servers
[14:39:51] <joshua> Thanks. That's what I thought. I guess they wanted to use more shards so it's shared
[14:40:29] <Vile> what's wrong with one server (machine) being arbiter for multiple sets?
[14:46:31] <joshua> Vile I guess functionally it doesn't hurt. I might create a second init script with a conf so they can be started as a service instead of hacking something into rc.local.
[14:50:52] <timeturne> what is the best way to check the size a particular object will have when added to mongo as a document?
[14:52:01] <algernon> what language?
[14:52:17] <timeturne> node.js/javascript
[14:53:22] <timeturne> basically I want to embed a maximum number of the latest blog posts in a "section" document and then push the older ones to another collection
[14:53:30] <timeturne> kind of like a capped subdocument collection
[14:53:45] <algernon> timeturne: http://mongodb.github.com/node-mongodb-native/api-bson-generated/bson.html#bson-calculateobjectsize
[14:55:39] <timeturne> cool, I'll try that thanks!
[15:13:59] <timeturne> is there an objectid stored for subdocuments for querying purposes?
[15:16:15] <algernon> timeturne: nope, unless you explicitly add it
[15:17:48] <timeturne> would the index be as efficient if I added objectids from subdocuments to it? also, how would I go about actually adding the objectid to the default _id index?
[15:18:26] <algernon> you'd have two indexes
[15:18:42] <algernon> one for the main document's _id, and another for the subdocuments
[15:27:29] <timeturne> I think I'll implement a character limit on the blog post and just add a fixed number of blog posts rather than checking the size for now
[15:27:47] <timeturne> it would overcomplicate the system before I even launch for the first time
[15:27:55] <timeturne> better to leave that for later
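The scheme timeturne describes (a capped embedded list, with the overflow moved to another collection) can be sketched application-side. Everything here - names, shapes, the limit - is invented for illustration:

```javascript
// Sketch of the "capped subdocument" idea: keep the N newest posts embedded
// in the section document; the overflow is returned so the caller can insert
// it into an archive collection.
function capEmbeddedPosts(section, newPost, limit) {
  const posts = [newPost, ...section.latestPosts];
  return {
    updated: { ...section, latestPosts: posts.slice(0, limit) },
    archived: posts.slice(limit), // move these to the archive collection
  };
}

const section = { name: 'news', latestPosts: [{ id: 2 }, { id: 1 }] };
const { updated, archived } = capEmbeddedPosts(section, { id: 3 }, 2);
// updated keeps posts 3 and 2; post 1 goes to the archive
```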
[15:30:03] <goleldar> hello
[15:37:39] <fg3> data = { name: 'foo', items: [{ name: 'joe' }] } // how can I add a key/value to the hash under items?
[15:38:28] <goleldar> i am new to mongodb can someone look at my document schema and tell me if it looks good
[15:38:48] <manveru> fg3: data[:items] << {name: 'jake'}
[15:38:49] <goleldar> http://pastie.org/private/or3wx4tfyu8znzxzcmdlgw
[15:39:09] <manveru> assuming that's ruby :)
[15:39:29] <fg3> manveru, using the mongodb CLI
[15:39:32] <manveru> oh, wait, add to the existing hash?
[15:39:59] <manveru> fg3: data[:items].first[:name] = 'jake'
[15:40:01] <fg3> yes add a new k,v to the existing hash
[15:41:21] <fg3> result = { name: 'joe', colors: 'red' }
[15:42:00] <manveru> replace :name and 'jake' then :)
[15:42:07] <fg3> ok
[15:47:06] <fg3> cannot translate to CLI
[15:47:41] <manveru> ah, js then
[15:48:13] <manveru> data.items[0].colors = 'red'
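manveru's answer runs as-is, since the mongo shell is JavaScript:

```javascript
// Add a key/value to the hash under items (fg3's question).
const data = { name: 'foo', items: [{ name: 'joe' }] };
data.items[0].colors = 'red';
// data.items[0] is now { name: 'joe', colors: 'red' }
```

For a document already stored in a collection, the same path can be addressed with dot notation in an update, e.g. {$set: {"items.0.colors": "red"}}.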
[15:58:44] <bizzle> does a casbah MongoConnection need to be closed?
[15:58:45] <bizzle> it seems that the MongoConnection is actually a connection pool, I am confused
[17:52:18] <FreeFries> hello...I'm running into a problem with mongo shards+replica sets and I'm wondering if there's a way to turn in more debugging information...
[17:52:43] <FreeFries> I have two shards which I want to replicate to three servers, replica set repl_mongod02 works fine
[17:53:09] <FreeFries> replica set repl_mongod01 doesn't work at all failing with a "couldn't connect to new shard socket exception [CONNECT_ERROR] for repl_mongod01/..." error
[17:53:22] <R-66Y> is there a $size for associative arrays that just returns how many keys are in the array?
[17:53:32] <FreeFries> but when I change the "repl_mongod01" to "repl_mongod02" it says that it's not part of the set
[17:53:39] <FreeFries> so it's clearly connecting to *something* at that location
[17:54:06] <FreeFries> when I run mongo to the host and do a rs.status() I get "set" : "repl_mongod01",
[17:54:30] <FreeFries> so I have no idea why it's failing to connect
[17:54:53] <FreeFries> any way to get more information out of mongos, or anyone happen to know how to fix the problem?
[18:02:21] <elux> is there a profiling mode or something in mongodb? (2.2)
[18:02:36] <elux> im finding very simple queries are taking 120ms+ ... they really shouldn't.. and they should be using indexes
[18:02:47] <elux> so perhaps a way of analyzing if indexes are being used properly or whatever
[18:04:38] <kali> elux: db.foo.find(...).explain()
[18:05:41] <elux> thx
[18:07:24] <elux> thank you
[18:07:25] <elux> :)
[18:19:49] <elux> what is the difference between using sort() and $orderby .. ?
[18:19:55] <elux> in terms of performance or otherwise..?
[18:20:35] <elux> http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24orderby
[18:20:39] <elux> says its the same.. but perhaps not?
[18:21:57] <kali> look at the plans, but i'm pretty sure they're the same
[18:22:28] <peter_c> why would people use mongodb over mysql?
[18:22:57] <Gargoyle> peter_c: They do different things.
[18:23:35] <Gargoyle> peter_c: In some situations, a full relational db might be overkill
[18:23:36] <peter_c> what was lacking in the industry with all the relational databases around, that spawned mongodb
[18:23:40] <peter_c> oh
[18:23:44] <kali> peter_c: not a subject for irc, i think... you can find dozens of well-argued blog posts for one side of the argument or the other
[18:23:49] <peter_c> so it wasnt the lack of something
[18:24:00] <Gargoyle> peter_c: Mongo is not a relational db
[18:24:14] <peter_c> yeah its like nosql or something
[18:24:56] <Gargoyle> peter_c: As kali said, there's plenty of resources on the web to dig through if you want to bring yourself up to speed.
[18:25:03] <peter_c> yeah i cant understand any of em though
[18:25:16] <Gargoyle> did you start at mongodb.org ?
[18:25:17] <peter_c> im just a project manager and one developer suggested to switch to mongodb for our product
[18:25:29] <peter_c> and another is fighting for scaledb
[18:25:37] <peter_c> im trying to get a third opinion
[18:26:02] <peter_c> yeah i cant understand any of that
[18:26:14] <Gargoyle> peter_c: Seems the first task is to fully research if a document based db fits your product requirements.
[18:27:03] <peter_c> if mongodb is document based, what is mysql based on
[18:27:14] <lohkey> rows and tabular data
[18:27:17] <Gargoyle> tables and fields.
[18:27:20] <peter_c> oh
[18:28:00] <peter_c> so what's the most cookie-cutter example scenario where mongodb (document based dbs) would be hands down the better choice versus rows/tabular
[18:28:48] <FreeFries> how do I reset a replica set?
[18:28:55] <FreeFries> deleting the data directory doesn't seem to do it
[18:29:05] <Gargoyle> peter_c: http://www.mongodb.org/display/DOCS/Use+Cases
[18:29:20] <FreeFries> (by that I mean, how do I clear out all replica config information and start over?)
[18:30:18] <Gargoyle> FreeFries: That normally works. shutdown server, delete (or move) data dir, and start fresh.
[18:30:52] <Gargoyle> But if you have your replica set info in the config / startup param, it will reconnect and resync itself automatically
[18:31:15] <FreeFries> well, my replica set initially didn't work
[18:31:27] <FreeFries> so now I'm trying to blow away the config and start over
[18:32:59] <FreeFries> but it's giving me various errors
[18:33:17] <FreeFries> the most recent is "all members and seeds must be reachable to initiate set"
[18:34:01] <Gargoyle> FreeFries: Sounds like you are bringing the server back up still telling it to be part of a rs. Do you have it in the config file / startup options?
[18:34:53] <FreeFries> yeah, I want it to still be part of repl_mongod01, but I want to kill the entire thing and start over as if repl_mongod01 never existed
[18:34:58] <FreeFries> do I have to change the name of it?
[18:35:46] <Gargoyle> FreeFries: You will have to take all the servers in the set down together - then I think, initially boot them up without any reference to the RS
[18:39:12] <FreeFries> that's not working
[18:39:33] <FreeFries> I think I need to find where that rs.local stuff lives
[18:40:03] <FreeFries> I took down all members of the rs, and booted up just 1
[18:40:09] <Gargoyle> FreeFries: It lives in a database - if you have removed the data dir, it should be gone.
[18:40:21] <FreeFries> what data dir do I need to remove tho? not the mongodb one
[18:40:37] <FreeFries> the mongoconfig servers?
[18:41:00] <FreeFries> basically the rs was royally screwed up when I set it up initially
[18:41:02] <Gargoyle> FreeFries: Umm. not sure about those, I've not used them yet.
[18:41:14] <FreeFries> three servers, one of them saw itself as the only member of the set
[18:41:17] <FreeFries> the other two saw all 3
[18:41:41] <FreeFries> server 3 couldn't connect to server 1, server 2 saw itself only, server 1 could connect to all three
[18:41:53] <FreeFries> restarting server 2 caused it to think there were 2 primaries
[18:42:01] <FreeFries> but it did end up seeing all the members
[18:42:28] <bizzle> Is there a way to drop all the documents in a collection without dropping the indexes?
[18:42:30] <FreeFries> so I figured the easiest approach would be to destroy the rs configuration and start over
[18:42:42] <FreeFries> but apparently that's not very easy, heh
[18:43:49] <Gargoyle> FreeFries: Sounds like you need to sort the network out first! ;)
[18:44:02] <FreeFries> the network is fine
[18:44:17] <FreeFries> those servers are running two sets of mongo on different ports
[18:44:41] <FreeFries> repl_mongod01 and repl_mongod02; repl_mongod02
[18:44:43] <FreeFries> was fine
[18:45:10] <FreeFries> all ports are open between the servers, etc
[18:45:20] <FreeFries> so now I just want to start over because I think I screwed something up
[18:45:39] <FreeFries> when I was initially setting it up I issued a rs.initiate() and rs.config() on servers 1 and 2
[18:45:49] <FreeFries> and I think that caused the problem
[18:50:08] <muffs> is there a best practice for modifying / removing indexes with minimal effect to prod?
[18:50:23] <FreeFries> rs.config() returns null
[18:50:42] <FreeFries> rs.initiate() returns "all members and seeds must be reachable..."
[18:50:51] <FreeFries> so I'm not sure how to fix this problem
[18:51:14] <Gargoyle> sounds like you need to reconfigure
[18:51:38] <Gargoyle> if rs.config() returns null - doesn't that mean there is no rs config?
[18:51:43] <FreeFries> yes
[18:52:29] <FreeFries> rs.config returns "all members and seeds must be reachable to initiate set"
[18:52:41] <FreeFries> er
[18:52:44] <FreeFries> initiate
[18:52:46] <FreeFries> reconfig
[18:52:56] <FreeFries> returns "rs.conf() has no properties src/mongo/shell/utils.js:1735"
[18:53:20] <FreeFries> so...I dunno, I can't start a new one, but I apparently have no rs configuration
[18:54:44] <Gargoyle> have you specified a replSet name when starting the server?
[18:56:26] <FreeFries> yes
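For the record: the usual clean setup is to run rs.initiate() on exactly one member, passing an explicit config document; issuing rs.initiate() on two servers, as FreeFries describes at 18:45, produces exactly this kind of split set. A sketch with placeholder host names:

```javascript
// Run on ONE member only; the others are listed in the config and join on
// their own. Host names and ports here are placeholders.
cfg = {
    _id: "repl_mongod01",   // must match the replSet name the mongods were started with
    members: [
        { _id: 0, host: "server1.example.com:27018" },
        { _id: 1, host: "server2.example.com:27018" },
        { _id: 2, host: "server3.example.com:27018" }
    ]
};
rs.initiate(cfg);
```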
[19:41:55] <timkuijsten> someone any experience with using Timestamp objects and mongo's node driver?
[19:42:34] <timkuijsten> trying to search the oplog with it.. but somehow it looks like my timestamp objects are not used correctly in my finds..
[20:13:35] <FreeFries> hmm
[20:13:38] <FreeFries> no luck finding it
[20:13:51] <FreeFries> does anyone happen to know where mongo stores the replicaset config info?
[20:14:13] <FreeFries> I've cleared out the data directory in all of my mongods, along with all of my mongo configs
[20:14:45] <FreeFries> but the replica set config info is still floating around somewhere
[20:15:28] <FreeFries> or would anyone know of some linux tool that would let me find where it's getting the info from? (similar to filemon on windows?)
[20:30:30] <FreeFries> I also can't manually setup the rs config and run rs.reconfig(cfg, force : true)
[20:30:40] <FreeFries> gives an error of "rs.conf() has no properties"
[20:33:43] <FreeFries> (running 2.2)
[20:39:20] <FreeFries> maybe I can approach this from another side...how can I see what data is in db.local.system?
[20:39:41] <FreeFries> db.local.system.find() from a mongo console appears to do nothing
[20:39:51] <FreeFries> does that mean it's empty, or the command wasn't run?
[20:50:57] <Gavilan2> So! Is there any "transparent" library or thing to ORM into MongoDB from JavaScript? I'm looking for something that, taking any app that runs in memory, replaces an Array or a Collection with a CustomCollectionThatGoesIntoMongoDb... And then the app continues working with persistence... Is there anything like that? or do I need to make it?
[20:54:27] <gibletsngravy> Hi, I'm learning how to use the Mongo's profiling feature. When I query the system.profiles collection, why isn't the inserted data shown for "insert" operations (as updateobj is shown for update ops)?
[20:56:07] <Venom_X> Gavilan2: I'm aware of mongoose.js and mongoskin.js, both node.js packages
[20:56:32] <Gavilan2> Venom_X: Thx
[21:35:43] <Neptu> hej
[21:35:55] <Neptu> anyone using pymongo over replication??
[21:40:27] <jY> Neptu: master/slave?
[21:40:54] <Neptu> jY replica set yes
[21:41:14] <jY> ok i have
[21:41:17] <jY> whats the question?
[21:41:19] <Neptu> I just want to confirm that if i pass a list of servers it will detect and connect to the master automatically
[21:41:30] <jY> of course
[21:41:43] <Neptu> and if I pass only one and it's active, it should find the master as well even if it's not in the list
[21:41:43] <Neptu> ...
[21:42:09] <jY> no idea on that
[21:42:12] <Neptu> ok
[21:42:41] <Neptu> i mean if the inputs are seeds
[21:42:56] <Neptu> and connect can find the master with only one active
[21:43:00] <Neptu> (manual)
[21:43:26] <Neptu> The nodes passed to Connection() are called the seeds. As long as at least one of the seeds is online, the driver will be able to “discover” all of the nodes in the set and make a connection to the current primary
[21:43:47] <Neptu> not so sure about this
[21:45:56] <FreeFries> I couldn't find it, so I ended up reformatting the machine
[21:46:10] <FreeFries> which seems to have worked to remove whatever it was that mongo had left around remembering the repl set
[21:53:47] <jefferai> Hello...I'm trying to figure out how to put an TTL index on a value in an array
[21:54:17] <jefferai> I have records like { "registrations": [ { ......, "lastupdated": (date)}, {......}] }
[21:54:34] <jefferai> and I want to have items in the array expire
[21:54:47] <jefferai> is that possible, or do I need to split each item up into a separate document?
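(The answer jefferai needs: a TTL index, new in 2.2, removes whole documents, never individual array elements, so splitting each registration into its own document is the workable option. A sketch with placeholder collection name and interval:)

```javascript
// One document per registration, each with its own date field; mongod's TTL
// monitor then removes each document ~3600 seconds after `lastupdated`.
db.registrations.ensureIndex({ lastupdated: 1 }, { expireAfterSeconds: 3600 });
```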
[21:56:24] <fg3> data = { person: { name: 'joe', cars: [{name: 'ford'}] } } // how to add property to car?
[23:21:53] <zanefactory> qq, what's the syntax to turn slaveOk to false on a node?
[23:24:07] <zanefactory> so many docs about how to turn it on, can't find anything about how to turn it off