PMXBOT Log file Viewer


#mongodb logs for Tuesday the 20th of May, 2014

[00:12:02] <jamesaanderson> If I have a users collection that looks like this: http://pastie.org/9191184. I can't seem to figure out how to select a post by id and return only that post. What am I missing?
[00:17:28] <joannac> $elemMatch
[00:18:19] <joannac> no, I take that back. The positional operator http://docs.mongodb.org/manual/reference/operator/projection/positional/
[00:18:25] <joannac> jamesaanderson: ^^
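joannac's pointer, as a sketch in the mongo shell. The schema from the pastie isn't visible here, so the collection layout and the id value are assumptions: suppose each user document embeds a posts array.

```
// Assumed shape: { _id: ..., posts: [ { _id: ObjectId(...), ... }, ... ] }
db.users.find(
  { "posts._id": ObjectId("5375deadbeefdeadbeefdead") },  // match the post by id
  { "posts.$": 1 }   // positional projection: return only the matched element
)
```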
[00:18:30] <dgarstang> dangit. now I can't add a shard. ""errmsg" : "couldn't connect to new shard socket exception [CONNECT_ERROR] for rs1/mongo-shard-shard0-rs1.prod.foo.com:27017""
[00:19:25] <jamesaanderson> joannac: Thanks that's been frustrating me for a while
[00:20:34] <dgarstang> the mongos server obviously needs to be able to reach the shard on port 27017. Does the shard need to communicate back to the mongos box?
[00:22:29] <joannac> ...yes? otherwise none of your queries will return results?
[00:23:11] <dgarstang> joannac: moi?
[00:24:14] <dgarstang> joannac: i mean... do I have to open a firewall hole to allow the shard to communicate to mongos, in addition to allowing mongos to reach the shard
[00:24:22] <federated_life> dgarstang: yea..once you get it up and running, you'll notice that each mongoD of the replica set of each shard keeps connections open to the config servers , the mongoS
[00:25:11] <dgarstang> federated_life: in this particular case, I'm running the addshard command and getting a connect error. Running tcpdump on the shard box though shows that it's receiving the traffic from the mongos box
[00:26:04] <dgarstang> you're saying the shards need to talk to the config servers too? Yikes.
[00:27:55] <dgarstang> Added that. Didn't help.
[00:30:09] <dgarstang> Also allowed shards to talk to mongos. Didn't help.
[00:31:02] <joannac> that error looks like a problem connecting from mongoS to mongoD to me...
[00:32:32] <dgarstang> joannac: as I said above, tcpdump shows that the mongod shard is receiving traffic from the mongos box, so it's not a firewall/connectivity issue. It's also logging the connection but nothing else.
[00:33:20] <joannac> can you connect with a mongo shell?
[00:34:16] <dgarstang> joannac: yep... mongo --host mongo-shard-shard0-rs1.prod.foo.com --port 27017 works
[00:34:26] <dgarstang> ran that from the mongos box
[00:34:37] <dgarstang> so, mongos box can connect to shard box
[00:35:26] <dgarstang> but again, running sh.addShard("rs1/mongo-shard-shard0-rs1.prod.footest.com:27017") results in "errmsg" : "couldn't connect to new shard socket exception [CONNECT_ERROR] for rs1/mongo-shard-shard0-rs1.prod.footest.com:27017"
[00:40:26] <joannac> um, one is foo.com
[00:40:30] <joannac> the other is footest.com
[00:40:40] <joannac> unless you're just redacting inconsistently
[00:41:06] <cheeser> in which case you should get your redact together!
[00:41:51] <dgarstang> i'm redacting inconsistently. :)
[00:42:06] <dgarstang> stoopid redaction.
[00:42:32] <joannac> shard box is initialised?
[00:43:32] <dgarstang> joannac: actually, no. There's nothing in the mongo doc at http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/ about that. I did see someone saying you had to run rs.initialize() first but when I did that it complained mongod had not been started with the replset option.
[00:43:43] <joannac> ummmmm
[00:43:52] <dgarstang> um is never good
[00:45:18] <dgarstang> there's really nothing about initialising at http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster :)
[00:45:32] <joannac> okay, I interpret "adding a replica set" as having the pre-condition "set up a replica set"
[00:45:41] <joannac> but we can agree to disagree on that
[00:46:06] <dgarstang> I was under the impression, based on the doc, that running addShard() did that.
[00:48:08] <dgarstang> do I need to start mongod on the shard with the replset option (even though it's not mentioned in the doc?)
[00:48:21] <joannac> yes
[00:48:26] <joannac> if you want a replica set
[00:48:57] <dgarstang> ugh. mongo, y u no put in docs.
[00:53:04] <joannac> dgarstang: I'm trying to put together a docs ticket, could I pm you?
[01:01:16] <dgarstang> joannac: Sure
[01:09:56] <joannac> oh he's gone, but https://jira.mongodb.org/browse/DOCS-2561
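The step dgarstang was missing, sketched for the mongo shell (the host and set names mirror the redacted ones above and are illustrative only): each shard member must be started with the replica set name before rs.initiate() and sh.addShard() will work.

```
// 1. Start each member of the shard with the set name, e.g.:
//      mongod --replSet rs1 --port 27017 ...
// 2. Connect a shell to one member and initiate the set:
rs.initiate()
rs.status()   // wait until a PRIMARY has been elected
// 3. From a shell connected to the mongos, add the set as a shard:
sh.addShard("rs1/mongo-shard-shard0-rs1.prod.foo.com:27017")
```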
[01:12:30] <joshua> It's a good night to remove 50 million documents from a db.
[01:20:32] <joshua> 5-6k documents deleted per second, this should only take another couple hours. heh
[01:30:56] <nullne> morning, i have a problem but got no proper resolution via google.
[01:31:52] <nullne> i am running fedora 20, and installed the latest version of mongodb. I was able to run the mongod daemon as a service without error the very first time, but when I shut down my machine and came back, the service refused to start due to some failure.
[01:33:03] <nullne> this is the error log in /var/log/mongod/mongod.log
[01:33:07] <nullne> ***** SERVER RESTARTED *****
[01:33:07] <nullne> ERROR: Cannot write pid file to /var/run/mongodb/mongod.pid: No such file or directory
[01:34:32] <cheeser> looks /var/run/mongodb is missing
[01:34:33] <nullne> when i start the mongod: sudo service mongod start, i got this error message: Job for mongod.service failed. See 'systemctl status mongod.service' and 'journalctl -xn' for details
[01:34:37] <cheeser> looks *like*...
[01:35:04] <nullne> why?
[01:35:18] <cheeser> well, is it?
[01:35:38] <nullne> i followed this solution http://stackoverflow.com/questions/23086655/mongodb-service-will-not-start-after-initial-setup, and re-created the mongodb directory
[01:36:09] <nullne> yes, there is no mongodb directory
[01:37:22] <nullne> cheeser, do you know how to solve it ?
[01:37:31] <cheeser> create that directory
[01:38:01] <nullne> i have done "mkdir mongodb " already
[01:38:29] <nullne> and touched a new file mongodb.pid, and changed its mode to 777
[01:38:39] <nullne> bug it doesn't work
[01:39:21] <nullne> but it doesn't work
[01:40:12] <joannac> same error message?
[01:40:20] <nullne> cheeser, any ideas?
[01:40:32] <nullne> cheeser, yes same error mesage
[01:40:35] <nullne> cheeser, yes same error message
[01:40:45] <cheeser> mkdir -p /var/run/mongodb
[01:41:16] <cheeser> check permissions on that directory and the ownership. make sure it works for whatever users mongod runs as.
[01:45:26] <nullne> it works!
[01:45:32] <nullne> cheeser, thank you
[01:45:58] <cheeser> great
[01:46:44] <nullne> cheeser, after i changed the mode of directory mongos to 777,it goes
[01:46:55] <nullne> cheeser, after i changed the mode of directory mongodb to 777,it goes
[01:47:13] <cheeser> you really don't have to keep repeating everything.
[01:51:23] <nullne> cheeser, i misspell the word
[01:51:51] <joshua> chown the directory to whatever username the startup script uses
[01:52:13] <cheeser> it's ok. we can cope with typos
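cheeser's and joshua's fix in one place; the mongod service account name is an assumption (check the startup script), and chown is preferable to the chmod 777 used above.

```shell
# Recreate the missing pid directory and hand it to the mongod user
mkdir -p /var/run/mongodb
chown mongod:mongod /var/run/mongodb
```

On Fedora, /var/run points at /run, a tmpfs that is wiped at boot, which is why the directory existed on the first run but was gone after a restart.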
[05:56:42] <Soothsayer> Does the size of the field name matter from the point of view of sorting on that field name?
[07:38:27] <stevenm> Hey, anyone know of any prebuilt ARM (compatible with say a Raspberry Pi) packages?
[07:40:40] <rspijker> stevenm: don’t think mongod will work on ARM, prebuilt or otherwise
[07:41:52] <stevenm> rspijker, well I've found plenty of guides showing it being compiled on a Pi and working
[07:41:57] <stevenm> just after something precompiled
[07:42:06] <rspijker> Then I spoke too quickly
[07:42:26] <rspijker> I had it in my head that there were some issues around endianness etc.
[08:14:44] <justind> How do I use lodash's _.difference function on two arrays of ObjectId?
[08:18:20] <rspijker> justind: difference uses strict equality for comparisons
[08:19:55] <rspijker> that is, ===
[08:20:33] <justind> rspijker: Is there another function that will return the difference of two arrays of ObjectId?
[08:22:13] <rspijker> what is it you are actually after?
[08:22:26] <rspijker> so, when do you consider two ObjectIds different?
[08:24:17] <justind> The returned array should contain all ObjectId's in array 1, if that ObjectId is not in array 2.
[08:25:24] <rspijker> that doesn’t answer my question
[08:26:16] <justind> ObjectId("555") == ObjectId("555") != ObjectId("444")
[08:27:58] <rspijker> try mapping str over the arrays first
[08:28:19] <rspijker> what you want to compare is the string content of the ObjectId and not the actual Object
[08:31:42] <rspijker> then you’ll end up with a list of strings though, which might be ok, again, depends on what you want exactly
[08:32:30] <justind> Yeah I was thinking of doing that, but I was hoping there would be a more elegant solution.
[08:32:54] <justind> Thanks for the help, appreciate it very much.
[08:33:21] <rspijker> I’m not 100% up-to-speed on Lo-dash, so there might be some fancy-ness I don’t know about
[08:33:53] <rspijker> but I’m not aware of any differenceBy function or anything like that, where you can pass in a comparator-function, which is kind of what you want…
[08:34:03] <rspijker> I suppose you could just write it yourself as a wrapper
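rspijker's map-to-strings approach, written as the wrapper he suggests. Plain hex strings stand in for ObjectId values so the snippet runs without the mongodb driver; with real ObjectIds, String(id) calls their toString().

```javascript
// Difference of two arrays keyed on string content rather than object
// identity: keep each element of a whose string form is not in b.
function idDifference(a, b) {
  const seen = new Set(b.map(String)); // String(id) ~ id.toString()
  return a.filter(function (id) { return !seen.has(String(id)); });
}

const arr1 = ['555', '444', '333']; // stand-ins for ObjectId hex strings
const arr2 = ['555'];
console.log(idDifference(arr1, arr2)); // [ '444', '333' ]
```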
[08:47:36] <jacksmith> I need some advice. I have big data coming from some GPS devices, which i need to store for months and analyze. currently i am using MySQL but it is very slow for this work. can mongoDB be helpful to me in this case?
[08:49:48] <kali> jacksmith: mongo is good for small, fast, targeted requests. it should be fine for collecting the data. but if you plan big-scale data analysis, you may want to complement mongo with something from hadoop (like hive or pig) or spark/shark
[08:52:12] <jacksmith> @kali i am talking of DB not webserver
[08:53:30] <kali> i'm talking data
[09:03:11] <jacksmith> I need some advice. I have big data coming from some GPS devices, which i need to store for months and analyze. currently i am using MySQL but it is very slow for this work. can mongoDB be helpful to me in this case?
[09:06:23] <rspijker> sure, can be
[09:06:51] <rspijker> could also not be, depends on the exact usecase
[09:06:58] <rspijker> kinda like kali already tried to tell you...
[09:09:01] <Zelest> Is it possible to have a TTL index combined with a conditional/partial index?
[09:09:24] <Zelest> E.g, a TTL on field datetime but only if the field deleted is true?
[09:17:22] <rspijker> Zelest: pretty sure it’s not
[09:18:16] <Zelest> :(
[09:18:47] <rspijker> TTL doesn’t even support compound indices
[09:19:01] <Zelest> compound indices?
[09:19:22] <rspijker> indices on multiple fields
[09:19:38] <Zelest> indices = indexes?
[09:19:56] <rspijker> yes…
[09:20:43] <rspijker> let me guess, you’re from the US?
[09:21:09] <Zelest> I am not.
[09:21:50] <rspijker> hmmm, whois suggests Sweden :)
[09:22:05] <rspijker> either way, indexes is more prevalent in north america, I think
[09:22:30] <rspijker> indices is more prevalent in the rest of the world. Both are correct plurals of index though
[09:23:48] <Zelest> Sweden is correct, I've never heard indices before though. :P
[09:25:04] <rspijker> mongodb docs use indexes as well
[09:25:08] <rspijker> sounds wrong to me
[09:26:02] <rspijker> then again, the shell provides both db.coll.getIndexes() and db.coll.getIndices() :)
[09:27:09] <Zelest> Aah
[09:27:22] <Zelest> Well, today I learned something then. :)
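For reference, the plain single-field TTL index that is supported looks like this in the shell; the collection, field name, and expiry are illustrative.

```
// Documents are removed once `datetime` is more than 3600 seconds old.
// The TTL index must be on a single date field; compound and
// conditional/partial variants are not supported here.
db.coll.ensureIndex({ datetime: 1 }, { expireAfterSeconds: 3600 })
```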
[10:37:32] <talbott> hello mongoers
[10:37:37] <talbott> quick question
[10:37:48] <talbott> if a user has "dbAdminAnyDatabase"
[10:37:55] <talbott> and "readWriteAnyDatabase" roles
[10:38:27] <talbott> do they need some additional role to be able to authenticate to the db in the first place, instead of having to authenticate against admin and then change?
[10:57:20] <rspijker> talbott: they will need to auth against admin
[10:58:06] <rspijker> I think you can set up users on specific dbs as well that inherit from the admin db, but you would have to check the docs for details
[11:00:45] <netQt> Hi all. I need to traverse a very large db and make updates on each row. It would take too much time to select all the data and then process it. Is there a way to process while selecting?
[11:01:23] <netQt> I'm trying to avoid using limit
[11:03:01] <rspijker> be more specific
[11:04:44] <netQt> I want something like iterator. Without having to select all data before starting to make updates
[11:05:30] <rspijker> what do the updates look like?
[11:05:34] <rspijker> are they dependent on the data?
[11:06:03] <netQt> Assume we have 1 million documents in the db, and we need to update one of the fields in all of them
[11:06:36] <rspijker> db.collection.update() ...
[11:07:32] <netQt> I read info from each doc, make a request to some services according to the data, get data back, and update it in the db
[11:08:04] <netQt> that's why I need to select them, but I can't select 1 million rows at once
[11:08:29] <rspijker> well, if you do a find, the result is a cursor
[11:08:35] <rspijker> you can iterate over a cursor
[11:08:41] <rspijker> .hasNext() and .next()
[11:08:54] <netQt> oh, seems that's what I need
[11:10:15] <rspijker> it will perform the select first (I believe), so the query will still take time. But it won’t get all of the documents from the database, it just gives you this cursor into the resulting set
[11:12:33] <netQt> so I hope that won't take a large amount of memory, right?
[11:14:01] <rspijker> on the server side it will take the memory that it requires...
[11:14:19] <rspijker> the kernel will page stuff into RAM as needed
[11:14:37] <rspijker> in order to perform the query, the data will have to be checked...
[11:14:54] <rspijker> on your *client* though, it’s up to your application to handle it correctly
[11:15:14] <rspijker> that is, if you iterate over the cursor, technically you would never have to have more than 1 document in memory at the same time
[11:15:54] <netQt> oh, then I can't go that way. That's what I'm trying to avoid
[11:17:16] <rspijker> then chunk it with skip and limit
[11:17:31] <rspijker> but if your indices are bad, you’ll still end up with a large memory footprint
[11:17:37] <rspijker> since the query still needs to be run...
[11:19:43] <rspijker> or if you have a field that you know to be monotonically increasing (like an ObjectID), you could use findOnes with a $gt on the id field…
[11:20:11] <rspijker> but this is all very contrived… You should just let the kernel and mongod decide what to do, trust me when I say they’ll be better at it than you
[11:27:13] <netQt> I found this query streaming http://mongoosejs.com/docs/2.7.x/docs/querystream.html
[11:30:02] <rspijker> that’s mongoose… it doesn’t say what it does under the hood
[11:30:58] <netQt> yea, but I'm fine with using that :)
[11:31:30] <netQt> I'll try it now on large data to see how it behaves
[11:32:20] <rspijker> hmmm, apparently cursors behave a bit better nowadays: http://docs.mongodb.org/manual/core/cursors/
[11:33:04] <rspijker> so the cursor approach should work fine as well
[11:33:15] <rspijker> as long as you’re not sorting on an unindexed field
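rspijker's two suggestions sketched in the shell; the collection and field names are placeholders.

```
// 1. Iterate the cursor returned by find(); the client only holds the
//    current batch, not all 1 million documents.
var cur = db.coll.find();
while (cur.hasNext()) {
  var doc = cur.next();
  // ... call the external service with doc's data, then write back:
  db.coll.update({ _id: doc._id }, { $set: { field: "newValue" } });
}

// 2. Alternative chunking on a monotonically increasing field (_id):
//    fetch the next batch of documents after the last id seen.
var lastId = ObjectId("000000000000000000000000");
var batch = db.coll.find({ _id: { $gt: lastId } }).sort({ _id: 1 }).limit(1000);
```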
[11:47:21] <Fiibar> hi, how can i check for double uniqueness? i have a mongodb document in my symfony project. i want to have a check on two fields for uniqueness. how? mongodb/unique can be done on one field by default, right?
[11:50:36] <Fiibar> * @MongoDBUnique(fields = {"content", "people"}, message="exists.already") >> not workin'
[11:51:43] <Fiibar> its on a embeddeddocument
[11:55:52] <rspijker> is this a mongodb question or a symfony question?
[11:57:01] <rspijker> Fiibar: ^ ?
[11:59:25] <Fiibar> mongo
[11:59:41] <Fiibar> can you do such things on embedded documents ? rspijker
[12:00:09] <rspijker> the only way I know of to ensure uniqueness is by adding a unique index
[12:00:16] <Fiibar> ok thnx
[12:00:25] <rspijker> these work fine on embeded docs, as well as on compound indices
[12:00:34] <Fiibar> great
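rspijker's suggestion in shell form; the field names match Fiibar's, the collection name is a guess.

```
// Enforce uniqueness of the (content, people) pair via a compound
// unique index. For fields inside an embedded document, use the
// dotted path, e.g. "embedded.content".
db.mycoll.ensureIndex({ "content": 1, "people": 1 }, { unique: true })
```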
[13:05:00] <Industrial> Hi. I'm trying to build an export feature (to CSV) by merging several streams that come from mongodb aggregations. I'd like to support progress reporting while the export is running in the browser, so I need to know how many documents will be streamed from the aggregation and how far along it currently is (so I can send 20%, 30%, 40% etc to the browser). How would I do this?
[13:05:44] <tscanausa> Industrial: I am pretty sure you cant.
[13:05:47] <cheeser> aggregate out to a collection. count. export.
[13:36:24] <Industrial> tscanausa,
[13:36:29] <Industrial> http://mongodb.github.io/node-mongodb-native/api-generated/collection.html#aggregate
[13:36:35] <Industrial> says it returns a Cursor in 2.6
[13:36:44] <Industrial> http://mongodb.github.io/node-mongodb-native/api-generated/cursor.html#count
[13:36:48] <Industrial> says cursor has a count
[13:37:35] <Industrial> But the Stream object I'm getting from Collection#aggregate doesn't have a count method
[13:37:40] <tscanausa> the count exists after it is down
[13:37:49] <tscanausa> done*
[13:38:17] <Industrial> .. right so basically I can't count how many are left until it's done
[13:39:33] <Industrial> tscanausa, Is there an event for this time?
[13:43:53] <Industrial> tscanausa, so the method .count() is only put on the instance of Cursor after .. what event?
[13:48:12] <tscanausa> after the aggregation has fully completed
[13:58:26] <Industrial> tscanausa, so is that the 'end' event on the cursor object?
[13:58:41] <Industrial> it's a stream, so it's already sending output before it is done, instead of loading everything into memory, right?
[13:59:11] <Industrial> OR is it already done in mongodb with the aggregations
[13:59:17] <Industrial> before it sends the cursor object?
[13:59:23] <Fiibar> In Symfony config for MongoDB Doctrine bundle there is an option for "default_database" is there an easy way to set another database? Like different databases for each user for example.
[13:59:29] <Fiibar> exit
[14:02:05] <cheeser> before 2.6, there were no cursors involved with aggregations.
[14:03:46] <tscanausa> Industrial: in my experience the aggregation is already done by the time you get the cursor
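cheeser's "aggregate out to a collection, count, export" recipe, sketched for 2.6; the pipeline and collection names are placeholders.

```
// $out (new in 2.6) must be the last stage and writes the full
// aggregation result to a collection...
db.coll.aggregate([
  { $match: { /* ... */ } },
  { $out: "export_tmp" }
])

// ...whose size is then known up front, so the exporter can report
// progress while streaming export_tmp out as CSV.
var total = db.export_tmp.count();
```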
[14:15:18] <tscanausa> when I remove a shard, my mongos server complains about not being able to connect to the old shard. Is this expected?
[14:32:44] <wickwire> Hi everyone
[14:33:32] <wickwire> is there some way in mongodb, to omit the results enclosure for the documents returned by an aggregation framework query...?
[14:34:03] <wickwire> I'm trying to have the response formatted with the same structure of a find()
[14:38:19] <cheeser> use a cursor for your output.
[14:38:54] <rspijker> wickwire: just return xxx.result instead of xxx to whatever consumer you’re using?
[14:39:17] <wickwire> cheeser, rspijker thanks, I was just mistaken
[14:39:27] <wickwire> I could get away with a find and a regex
[14:39:36] <wickwire> where I was using an aggregate with match
[14:40:05] <wickwire> I think it's ok now, using find and the regex, I get the filtered documents and the same find() structure
[14:40:42] <rspijker> if all you were doing in the aggregate was a match, then you can do the same with a find
[14:43:21] <wickwire> yes you're right, I didn't get it at first
[14:46:17] <q85> Do the mongod instances (replicasets) have to be running to perform the meta-data upgrade when moving to 2.4.x?
[14:49:30] <tscanausa> q85: generally you leave everything running and can do a "liveish" upgrade but doing one server at a time
[14:50:37] <cheeser> http://docs.mongodb.org/manual/release-notes/2.6-upgrade/
[14:50:47] <cheeser> *to* 2.4? not from?
[14:50:53] <q85> tscanausa: but if a mongod was down at the time the mongos --upgrade was run, would the upgrade complete as normal?
[14:50:54] <cheeser> http://docs.mongodb.org/manual/release-notes/2.4-upgrade/
[14:50:55] <cheeser> :D
[14:51:07] <q85> cheeser: 2.2.5 to 2.4.9
[14:51:17] <q85> Yes, I've already read the upgrade docs.
[14:51:20] <cheeser> 2.4.10 is out, fwiw
[14:52:42] <q85> I'm curious: if the config servers are all up and a mongod is down, will the upgrade to the meta-data still complete ok?
[14:56:03] <tscanausa> from the release notes it looks like the upgrade just changes information on the config servers.
[15:25:46] <q85> tscanausa: that's also what I'm thinking. I'm going to assume the upgrade was completed successfully.
[15:45:39] <dgarstang> Confused about shards and replicasets...
[15:49:41] <dgarstang> Does each replicaset get its own mongod process?
[15:51:19] <cheeser> each replicaset *member* gets its own mongod
[15:53:59] <dgarstang> cheeser: ok, that's what I meant to say, thanks
[16:42:24] <zerooneone> hi there, i'm having errors trying to build the 26compat cxx driver on ubuntu 12.04 with errors like 'make_shared' is not a member of 'boost'
[16:42:56] <zerooneone> it's failing on replica_set_monitor.cpp
[16:43:13] <zerooneone> here's a paste bin http://paste.ubuntu.com/7493683/
[16:43:49] <zerooneone> i wonder if anyone has any suggestions on how to resolve this issue?
[17:04:17] <jimpop> does sharding have any benefit for small (<2GB) datasets?
[17:04:32] <cheeser> probably not
[17:04:36] <jimpop> ty
[17:12:01] <tscanausa> at under 2gb it probably adds overhead
[17:12:49] <jimpop> that's what I was thinking, but wasn't sure.
[17:18:15] <jimpop> with FTS (2.6.1), is it correct that "\"batmna\" joker" should only find documents containing both batman AND joker (logical AND)?
[17:18:26] <jimpop> *batman
[17:19:38] <jimpop> or do I need an explicit phrase within the inner quotes?
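A hedged answer to jimpop's question, worth verifying against the 2.6 text-search docs: unquoted terms are OR'd, while a quoted phrase is required, so the escaped-quote form makes batman mandatory and leaves joker optional (it only affects relevance). To require both terms, quote both. Assuming a text index exists on the collection:

```
// "batman" must appear; joker only influences the score.
db.coll.find({ $text: { $search: "\"batman\" joker" } })

// Logical AND of both terms: quote each one.
db.coll.find({ $text: { $search: "\"batman\" \"joker\"" } })
```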
[17:41:32] <dgarstang> Argh. I'm just not quite getting sharding and replicasets. If I have a shard, split across two boxes... what do I have? I have a single shard, correct? How many replicasets would I have? one or two?
[17:42:12] <cheeser> you'd have 2 shards and 0 replicas
[17:42:36] <cheeser> typically each shard has its own replica set
[17:42:55] <proteneer_osx> "members" : [{"_id" : 0,"host" : "mdb1:27017"}
[17:43:02] <proteneer_osx> how do i change the "host" part so it's actually foo.bar.com
[17:43:13] <proteneer_osx> i can't find the corresponding config value
[17:43:29] <dgarstang> cheeser: is there typically one replicaset per shard?
[17:43:40] <cheeser> didn't i just say that?
[17:44:19] <dgarstang> cheeser: you said 0 replicac
[17:44:22] <dgarstang> replicas
[17:44:28] <cheeser> 13:41 < cheeser> typically each shard has its own replica set
[17:44:40] <cheeser> but you have to set it up that way
[17:45:29] <dgarstang> cheeser: so I'm looking at the last diagram at http://www.severalnines.com/blog/turning-mongodb-replica-set-sharded-cluster
[17:45:51] <dgarstang> it's got 3 boxes in each shard...
[17:46:00] <dgarstang> one replica(set)
[17:46:14] <cheeser> yes. because they've set up a replica set for each shard.
[17:46:45] <dgarstang> i thought the shard just had some number of replicas. Why the primary/secondary stuff?
[17:47:11] <dgarstang> maybe I'm getting ahead of myself. but...
[17:47:21] <pmercado> hi, how to query with an id ???
[17:47:27] <dgarstang> one replicaset. So, all the boxes in the same shard would have in mongodb.conf replSet = samething
[17:47:42] <dgarstang> where something is the same
[17:48:16] <pmercado> find( { _id:ObjectId("53753204abce5effbeb3ffa6") } ); <--- in console, but using java driver ?
[17:49:18] <pmercado> o.append( "_id", new ObjectId( "id_string" ) ); <--???
[17:49:24] <dgarstang> cheeser: implementations I've looked at earlier only have two nodes in the same shard. That's the minimum? one primary and one secondary?
[17:51:21] <cheeser> you should have 2 secondaries typically
[18:03:42] <boomy> Hey guys, how would I calculate the rank of each item in a collection and add that as a field to the entry when I retrieve them?
[18:06:31] <cheeser> the rank?
[18:15:18] <alex_javascript> hey guys - how can I print the current config file the db is using?
[18:16:19] <cheeser> cat /etc/mongod.conf
[18:33:35] <kali> alex_javascript: it's shown in the startup logs too
[18:38:27] <alex_javascript> Is it supposed to be in YAML format?
[18:48:24] <dgarstang> Next! New mongo issue.... "[rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)" ... what's that mean?
[19:20:31] <dgarstang> I just set up a shard... and I think it's working but sh.status() is showing me nothing...
[19:25:57] <federated_life> dgarstang: pastebin the output
[19:29:47] <dgarstang> federated_life: from?
[19:30:03] <dgarstang> federated_life: router, shard node1 or shard node2?
[19:30:40] <federated_life> sh.status
[19:30:49] <dgarstang> on the mongos box?
[19:30:58] <federated_life> yea, its the only place you can run it
[19:31:50] <Frozenlock> Will a capped collection be faster than a normal collection where I manually remove the oldest element? (say after 2-3 complete rewrites)
[19:32:04] <dgarstang> http://pastebin.com/zMbMV8bc .... that shard isn't the right one. it's from an earlier attempt. the replicaset is called rs0
[19:40:33] <dgarstang> hmmm
[19:41:51] <tscanausa> how long will a mongos server try to talk to a removed shard?
[19:45:30] <proteneer_osx> why do mongo clients have to present SSL certificates?
[19:45:45] <proteneer_osx> is there a way to accept any SSL certificate?
[19:45:58] <proteneer_osx> i have server side certificates that are signed by CAs
[20:22:37] <pmercado> is mature morphia?
[20:22:58] <cheeser> yes
[20:26:18] <pmercado> https://github.com/mongodb/morphia/wiki/QuickStart#prepare-the-framework <--- here in code "Mongo mongo = ....." means MongoClient ?
[20:27:55] <cheeser> the docs are just a bit outdated.
[20:34:26] <pmercado> I'm interested in doing transparent development with mongo
[20:34:34] <pmercado> morphia seems to fit it
[20:35:54] <dgarstang> I'm running sh.status() on my mongos, and eventhough I think sharding has been set up, it's not showing me anything
[20:38:00] <dgarstang> nothing but pain mongo, nothing but pain "{ "ok" : 0, "errmsg" : "Can't remove last shard" }"
[20:39:13] <cheeser> pmercado: i'm not sure what that means.
[20:41:17] <dgarstang> ugh. anyone know of any mongo training in bay area?
[20:43:17] <cheeser> dgarstang: http://www.meetup.com/San-Francisco-MongoDB-User-Group/events/148827432/
[20:45:31] <dgarstang> city... I need to buy some soap. :)
[20:46:25] <dgarstang> although more of a general training would be better at this point I think.
[20:46:52] <dgarstang> because... every time I touch what I have, it breaks. Either it's fragile, or I don't have a clue
[20:48:39] <schimmy1> Does anyone have experience with 2.6 replication, and whether it:
[20:48:48] <schimmy1> a) uses the Batch API
[20:49:02] <cheeser> i don't think replication changed in 2.6.
[20:49:07] <schimmy1> b) holds the write lock for less time than 2.4 due to a.
[20:49:07] <cheeser> it's still based on the oplog
[20:49:37] <schimmy1> hmm - I figured that would be some low-hanging fruit to use the batch API with
[20:49:50] <cheeser> you mean the bulk API?
[20:49:54] <schimmy1> sorry, yes
[20:50:22] <cheeser> replication is more akin to streaming data than doing bulk/batch updates
[20:51:17] <schimmy1> My own personal use case is that I would trade off update latency and frequency for less write lock time on the replica
[20:51:48] <schimmy1> I have writers to the db, but there's no strict SLA on when those writes need to go through
[20:52:07] <schimmy1> and I have an API reading from the replica, which is ideally as low-latency as possible
[20:52:08] <cheeser> you can have a delayed replication but that's a terrible idea.
[20:54:23] <schimmy1> hmm - not sure that would work for it either, as the writes are fairly constant
[20:54:58] <schimmy1> the delayed replica would still have a stream of writes locking, just an hour-delayed stream, etc
[20:55:53] <schimmy1> ah well, thanks cheeser
[21:00:14] <cheeser> sure
[21:13:41] <pmercado> where can I find more info about the query syntax? fourStarHotels = ds.find(Hotel.class).field("stars").greaterThenEq(4).asList();
[21:21:49] <proteneer_osx> yay i finally deployed my 3 member replica set with SSL support, upstarts, and all that shebang on digital ocean
[21:21:58] <proteneer_osx> took 3 days from start to finish
[21:21:59] <proteneer_osx> what a blast.
[21:22:11] <proteneer_osx> i suddenly have a lot more respect for sys admins now
[21:22:21] <schimmy1> haha
[21:22:23] <cheeser> heh
[21:23:05] <proteneer_osx> it's actually pretty cost effective
[21:23:08] <proteneer_osx> compared to just about everything else
[21:23:20] <proteneer_osx> much better than AWS
[21:23:54] <dgarstang> Can't seem to get sharding to work. :(
[21:24:52] <pmercado> proteneer_osx: power user haha lol
[21:25:12] <dgarstang> both shard boxes _seem_ to be configured but sh.status() on the mongos shows no shards. Argh.
[21:25:12] <proteneer_osx> not relaly
[21:25:28] <proteneer_osx> we're a research lab so we can't afford the more corporate options
[21:25:51] <cheeser> why not self host then? that's pretty straightforward
[21:26:33] <proteneer_osx> well we'd rather not manage hardware :P
[21:26:46] <proteneer_osx> else we'd need to bitch at the university for money for clusters
[21:26:54] <proteneer_osx> and having a vm that i can destroy at any time is useful
[21:27:21] <dgarstang> when i run rs.initiate() on the shards, it says "already initialised" but the mongos shows no shards. Any ideas?
[21:27:56] <proteneer_osx> rs is for replica sets isn't it?
[21:28:06] <proteneer_osx> you're doing both sharding and replica sets?
[21:28:07] <Derick> dgarstang: you should run rs.initiate when connected to mongos
[21:28:12] <dgarstang> proteneer_osx: I think so.
[21:28:14] <Derick> proteneer_osx: that's the normal thing
[21:28:17] <proteneer_osx> oh
[21:28:29] <Derick> a shard consists of one node, or a replicaset
[21:28:39] <Derick> we recommend replicasets for failover
[21:29:07] <proteneer_osx> so you replicate your shards as well?
[21:29:10] <dgarstang> Derick: well... rs.initiate() on the mongos gets me "{ "ok" : 0, "errmsg" : "no such cmd: replSetInitiate", "code" : 59 }" Ugh
[21:29:20] <Derick> proteneer_osx: no, within each shard there is replication
[21:29:29] <proteneer_osx> hm
[21:29:32] <dgarstang> ... which is what I think I have
[21:29:42] <Derick> dgarstang: sorry, rsinitiate is not sharding
[21:29:45] <Derick> it's replicasets
[21:29:50] <Derick> you need to also set up sharding
[21:29:56] <dgarstang> Derick: I have
[21:29:57] <Derick> it's sh....
[21:30:02] <Derick> rs. has nothing to do with it
[21:30:14] <Derick> sh.status() shows?
[21:30:25] <dgarstang> hang on... sh.status() on mongos shows no shards
[21:30:37] <Derick> right
[21:30:42] <Derick> so you haven't set that up
[21:30:48] <dgarstang> Derick: I thought I had
[21:31:41] <dgarstang> assuming i had not... it's addShard on the mongos, right?
[21:31:46] <Derick> yes
[21:31:49] <dgarstang> k
[21:31:51] <Derick> but, read the docs
[21:32:08] <dgarstang> Derick: I have been. They are a missing important stuff
[21:32:17] <Derick> oh, such as?
[21:33:22] <dgarstang> can't remember. my mind is jello after spending days trying to get this to work. someone here was going to open a doc bug for me yesterday against one of the issues
[21:33:24] <proteneer_osx> shards a really confusing
[21:33:28] <proteneer_osx> are* really confusing
[21:33:42] <Derick> yes they are
[21:34:17] <proteneer_osx> 3 config servers?
[21:34:18] <proteneer_osx> jesus christ
[21:34:21] <proteneer_osx> 2 routers?
[21:34:32] <Derick> you only need two for failover
[21:34:45] <Derick> typically, you use one per application server
[21:34:51] <Derick> (one router (mongos) I mean)
[21:35:03] <proteneer_osx> talk about a fuck ton of overhead
[21:35:19] <proteneer_osx> how big do your collection sizes need to get
[21:35:22] <proteneer_osx> until you need to go sharding?
[21:35:26] <ap4y> you don't have to use dedicated servers for config servers or routers
[21:35:42] <dgarstang> i'm trying to config with chef which is making this even harder
[21:35:48] <Derick> as long as your working set fits in memory, you probably don't need it; unless your app gets limited due to write contention
[21:35:50] <ap4y> you can distribute them onto other machines
[21:36:03] <Derick> ap4y: yes, and mongos runs on your app server machine
[21:37:00] <ap4y> Derick: you can do the same with config and arbiters too
[21:37:19] <ap4y> will shrink hardware requirements for cluster a bit
[21:37:22] <Derick> ap4y: yes, for config you need to be careful and make sure they are on different machines though...
[21:37:33] <Derick> i mean, on different "sections" of your replicasets f.e.
[21:37:34] <ap4y> Derick: yeah, agree
[21:41:35] <Derick> http://www.kchodorow.com/blog/2010/08/09/sharding-and-replica-sets-illustrated/ is a good minimal setup
[21:42:26] <Derick> sorry, viable
[21:42:28] <Derick> not for production really
[21:42:41] <Derick> don't put two data carrying nodes (primary, secondary) on the same server
[22:05:39] <nukulb> joannac: thoughts on a good way to work around this - https://jira.mongodb.org/browse/SERVER-1243. I can loop over the array items and update manually but I am wondering, since it's set to P4, whether there might exist a better workaround
[22:10:13] <dgarstang> Is it possible to run addShard() on one of the shard boxes?
[22:49:45] <federated_life> dgarstang: no