[04:05:10] <butblack> hi, I set up mongoose and mongodb and I'm running mongoose.connect('mongodb://localhost/test'); after going into my terminal and running mongod in one terminal and mongo in another, how can I get access to this db?
[08:12:50] <LoneSoldier728> anyone here use mongoose?
[08:48:40] <dennismartnesson> Hi, I have a question about the server types to use for my replica set, and what strategy is used and proven with mongodb. Is anyone interested in answering the question?
[08:49:48] <kali> dennismartnesson: it's usually best to ask the "payload" question directly :)
[08:50:08] <kali> dennismartnesson: irc crowd is not good with meta-questions
[08:50:18] <dennismartnesson> kali: Okay, thank you
[08:51:15] <dennismartnesson> So I have set up my mongodb as a replica set and am getting ready to go to beta, and I've started to think about whether I should go with bigger servers or more smaller ones.
[08:53:12] <dennismartnesson> I know that this depends and all, but it's a fairly small dataset at this point and I am thinking about it more in a general sense
[08:55:02] <kali> dennismartnesson: i would say that it depends what kind of growth you expect. if you expect huge growth, sharding early might be the way (so use more, but smaller servers)
[08:55:30] <kali> dennismartnesson: if you're pretty sure you'll never need huge setups, scaling up is enough
[10:35:52] <masan> I do not want this operation: collection.find().sort({ name: 1 }). I want to display the keys of the resultant document in alphabetical order. If a document has many fields, i want the collection.find output document to display its fields in sorted order
[10:40:28] <kali> masan: i think it also sorts the fields (even if it's not advertised in the readme)
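If the driver doesn't return fields in the wanted order, a client-side sort of the keys is a simple fallback. A minimal sketch; `doc` is a stand-in for one result of collection.find(), and its fields are invented here:

```python
# Sketch: sort a result document's fields alphabetically on the client.
# `doc` stands in for one document returned by collection.find().
doc = {"zip": "10001", "name": "Ada", "age": 36}

sorted_doc = dict(sorted(doc.items()))  # keys in alphabetical order
print(list(sorted_doc))  # → ['age', 'name', 'zip']
```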
[11:17:32] <cell00> Anyone setup MSSQL to MongoDB replication before? Any suggestions on tools to use?
[11:18:26] <retran> that doesn't sound like replication
[11:18:36] <retran> that sounds like a batch process migration
[11:18:55] <retran> maybe i'm not very clever though
[11:20:50] <cell00> retran: A migration process could work, any existing tools for the job?
[11:24:06] <freeMind> hi guys, i'm facing some performance issues. i'm trying to insert a big amount of data (about 109783 documents) using the c++ driver and mongoDB 2.4.9; the insert takes a long time (about 20min) and the inserted data is corrupted
[15:29:02] <bobob> does anyone know a good solution to translate a "(key:42 AND key2:551) OR bob:20" style query to a mongodb query? in python if possible? thanks
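For reference, the MongoDB query that example string maps to would look like this. Field names and values are taken from the question; actually building the dict from the string would need a small parser (e.g. with pyparsing), which is out of scope for this sketch:

```python
# Sketch: the MongoDB equivalent of "(key:42 AND key2:551) OR bob:20".
# AND maps to $and (or just multiple keys in one dict), OR maps to $or.
query = {
    "$or": [
        {"$and": [{"key": 42}, {"key2": 551}]},
        {"bob": 20},
    ]
}
# Usable as-is with pymongo: collection.find(query)
```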
[16:22:52] <Chepra> Hi, is it possible to run a replica set with just one member for some hours?
[16:23:17] <Chepra> I know that this wouldn't add extra safety, but we need to hdd-upgrade the other nodes
[16:28:31] <starfly> Chepra: Yes, it's atypical and not recommended, but the primary member can run alone with the limitation you noted.
[16:33:42] <starfly> Chepra: Goes without saying, but you'd want to make sure your backups are current first.
[16:36:18] <kali> you need to boost the remaining member's "votes" in the replica set config so it gets strictly more than half the votes by itself, because it needs a strict majority to stay alive
[16:36:46] <kali> Chepra: why don't you "roll" the upgrade one node after the other?
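A sketch of the vote-boosting idea for a hypothetical 3-member set. Host names and the layout are assumptions; note this relies on MongoDB versions of this era (pre-3.0), where a member could carry more than one vote — newer versions cap votes at 1, and you'd remove the down members from the config instead:

```python
# Sketch (assumed 3-member set, pre-3.0 voting rules): give the
# surviving member enough votes to hold a strict majority alone.
members = [
    {"_id": 0, "host": "db0.example:27017", "votes": 3},  # surviving primary
    {"_id": 1, "host": "db1.example:27017", "votes": 1},  # down for upgrade
    {"_id": 2, "host": "db2.example:27017", "votes": 1},  # down for upgrade
]
total_votes = sum(m["votes"] for m in members)
assert members[0]["votes"] > total_votes / 2  # strict majority by itself

# Applying it needs a live primary; roughly (untested):
# client.admin.command("replSetReconfig",
#                      {"_id": "rs0", "version": 2, "members": members},
#                      force=True)
```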
[17:27:21] <Nakomis_> Hi, I've got an app that uses the java client to regularly poll a MongoDB router to check its status. After a while my app starts getting out of memory exceptions and jmap shows *lots* of BasicDBObject objects. I'm not caching them anywhere so is there any reason they're not getting garbage collected? If I close and recreate the MongoClient each time I poll the status, the problem goes away, but this is less than ideal
[17:29:20] <kali> Nakomis_: you may need to close the cursor explicitly
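The pattern kali suggests, sketched in Python for brevity (the app above uses the Java driver, where try-with-resources or an explicit cursor.close() is the equivalent). `FakeCursor` is a stand-in so the sketch is self-contained; the point is that closing the cursor releases the result objects it retains so they can be garbage collected:

```python
# Sketch: always close cursors deterministically instead of relying on
# garbage collection. FakeCursor stands in for a driver cursor here.
from contextlib import closing

class FakeCursor:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

cursor = FakeCursor()
with closing(cursor):
    pass  # iterate over results here
print(cursor.closed)  # → True
```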
[18:08:54] <pierre1_> Well, actually it's a problem. I'm not able to populate an array (type: ObjectId) in a document if the array elements don't have a 'ref'
[18:22:49] <pierre1_> So, resources is an array of references, but with no specific model (no ref attribute), because the type of document it points to is going to vary
[18:25:52] <rkgarcia> pierre1_: i don't know mongoose :( sorry
[20:07:31] <brucelee> i have 3-site replication, with 1 being the primary. but whenever the primary goes down due to a power outage or whatever, an election takes place, which is intended
[20:07:38] <brucelee> which causes the entire app to go down
[20:16:00] <starfly> brucelee: The app needs to be aware of and correct for (including waiting for resolution of) that kind of election scenario...
[20:39:41] <brucelee> starfly: ah so when the DB is up for reelection, it will warn the application, and the application will wait for the resolution of such an event
[20:40:00] <brucelee> starfly: any cookie cutter solution for this with tomcat?
[20:40:13] <brucelee> our new app is kind of suffering from these re-elections :P
[20:40:49] <cheeser> there is no warning. it just happens.
[20:41:10] <cheeser> you'll have to implement retry logic in your app or handle such failures gracefully.
[20:41:28] <cheeser> a lost db connection (whatever its type) should never cause your app to crash
[20:45:56] <starfly> correct, it is like any other app that needs persistence (regardless of replica set tech), if the database connection fails for whatever reason, the app code needs to be able to trap that condition and retry. If it doesn't, it isn't durable enough to survive a database failure. Many aren't, a good one is.
[20:46:56] <starfly> That of course, means a good app could survive short database outages like occur in MongoDB elections of a new primary, there aren't many apps that can survive for long without persistence :)
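The retry logic cheeser and starfly describe can be sketched like this. `flaky_write` simulates writes failing while a new primary is being elected; with pymongo you would catch pymongo.errors.AutoReconnect / ConnectionFailure instead of the stand-in ConnectionError:

```python
# Sketch: retry transient database failures with backoff instead of
# letting the app crash during a primary election.
import time

def with_retry(op, retries=5, delay=0.01, transient=(ConnectionError,)):
    for attempt in range(retries):
        try:
            return op()
        except transient:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay * (2 ** attempt))  # simple exponential backoff

# Usage: fails twice (as during an election), then succeeds.
calls = {"n": 0}
def flaky_write():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("no primary available")
    return "ok"

print(with_retry(flaky_write))  # → ok
```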
[21:11:04] <brucelee> starfly: what about the situation where a replica flaps (goes offline/online multiple times in a short period) due to an unstable connection
[21:11:12] <brucelee> the election process would be mindfucked wouldn't it?
[21:12:03] <starfly> ernetas: MongoDB is not really a consistent (across document) type of technology to begin with
[21:13:46] <starfly> brucelee: replica set member flapping (assuming you mean secondaries) is definitely a problem for any app components that depend upon reads, just as the same goes for app components depending upon writes to a primary. Bottom line, unstable connections will create issues
[21:16:16] <starfly> brucelee: regarding the election process, yes, of course that will be significantly impacted by nodes that can't reliably see each other on the network
[21:16:40] <brucelee> starfly: hmm what are some ways to deal with this?
[21:16:58] <brucelee> starfly: the only way would be to remove the nodes from service right?
[21:18:48] <starfly> brucelee: originally, you were referring to outages caused by power issues; now it sounds like you have issues with either significant network latency or an inability to maintain network connections. In the latter case, your only hope is to colocate secondaries in the same LAN/VLAN segment or find a way to increase network reliability
[21:20:10] <brucelee> starfly: just discussing all the issues i've experienced :)
[21:20:14] <starfly> brucelee: yes, if the nodes are not reliably able to talk with each other and flapping to the extent that elections are occurring a lot, then you should remove secondaries that are wreaking such havoc and as mentioned, place others closer...
[21:20:17] <brucelee> and stuff keeping me up at night, and weekends lately
[21:20:51] <starfly> brucelee: sure, that's what this space is for, no worries
[21:21:29] <brucelee> and what we do is remove one replication site from the load balancer
[21:21:38] <brucelee> so no users will be going there anymore
[21:22:30] <starfly> brucelee: sounds good, but you still have to worry about the effect of secondaries that can't reliably talk with other replica set members to avoid the excessive elections
[21:23:27] <starfly> brucelee: secondaries and primary that can't communicate in the replica set
[21:25:45] <brucelee> starfly: yeah, but if the secondary is not being served
[21:25:52] <cheeser> ernetas: probably. but for my needs it's sufficient.
[21:25:52] <brucelee> then it shouldn't matter right?
[21:26:21] <brucelee> to clarify, what i mean is, if the secondaries aren't part of the lb group, and users aren't getting there, then it doesn't matter right?
[21:26:46] <brucelee> or i thought users go to our app, and whenever they make a query from the database
[21:26:49] <brucelee> it goes back to the primary :P
[21:30:06] <starfly> brucelee: well, you still have to be concerned about whether your primary can retain a majority, etc., so it does matter that replica set members can talk to each other, at least that 2/3 (typical) are communicating. It might help to examine your replica set topology and see if you can use a member closer to your usual primary, whether a data-bearing secondary or an arbiter
[22:03:01] <gsd> does MongoClient.connect emit log events?
[22:03:38] <cheeser> probably depends on the driver
[22:26:12] <xaq> Hey guys, I have an interesting problem. I have a large deeply nested dataset, and I need to frequently look up and update deeply nested subdocuments. I can't call find and then save, because the updates are frequent enough that I would miss some.
[22:28:20] <xaq> So I end up with queries like User.update({ 'posts.comments.tags._id': '3240s9daf893' }, { $push: { 'posts.$.comments.$.tags' : { tagname: 'sample' } } });
[22:29:31] <xaq> Derick: I see. So there is no way to call 'update' on a deeply nested match to a complex query?
[22:31:28] <xaq> I would have to call 'find' and then manipulate the document, and then 'save' it. But if I'm doing this 5 times per second, then some data will be lost, because two manipulations will be doing different things to the same document.
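For context: MongoDB permits at most one positional `$` operator per update path, which is why the doubly-nested update at 22:28 can't work. A single nesting level still updates atomically; the field names below mirror xaq's hypothetical schema:

```python
# Sketch: only one positional "$" is allowed per update path, so
# 'posts.$.comments.$.tags' is rejected. One level works atomically:
filter_doc = {"posts._id": "3240s9daf893"}
update_doc = {"$push": {"posts.$.tags": {"tagname": "sample"}}}
# users.update_one(filter_doc, update_doc)  # needs a live collection

# (MongoDB 3.6+ adds the arrayFilters option for deeper nesting, but
# that postdates this discussion.)
```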
[22:37:16] <rafaelhbarros> xaq: you might find a better solution by restructuring your data
[22:37:28] <rafaelhbarros> xaq: if you have to do that many queries, it might not be worth having nested docs
[22:39:00] <xaq> rafaelhbarros: I think you are right. How would you recommend? Basically I have a pyramid structure of: w belongs to x belongs to y belongs to z, etc. It would make a ton of sense relationally, but I'm not sure how to best represent it in Mongo.
[22:40:10] <rafaelhbarros> xaq: by what I can see, it's a one way cascade, so, you can just have the ObjectId there, but yeah, relational would be the easiest way to go.
[22:42:03] <xaq> rafaelhbarros: what do you mean I can just have the ObjectId there? Can I update by a sub document id?
[22:45:45] <xaq> Looks like it's not possible: http://stackoverflow.com/questions/18173482/mongodb-update-deeply-nested-subdocument
[22:46:52] <xaq> I would use a relational db, but I'm using meteorJS, so mongo is the only option right now.
[22:47:42] <rafaelhbarros> xaq: not a bad option, but the SO solution is fine
[22:47:56] <rafaelhbarros> xaq: well, it looks fine to me
[22:49:09] <Diplomat> Hey guys, I have a quick question
[22:49:25] <Diplomat> I have a database that will house billions of rows.. and I need real time analysis
[22:49:42] <Diplomat> I mean, I need to query data from the database for real-time analysis
[22:50:04] <Diplomat> Would MongoDB be good enough? I need fast reading
[22:50:14] <xaq> rafaelhbarros: My arrays are going to be super long, so I can't do exactly what they do with "levels". I could index by objectId on the subdocument, I suppose.
[22:50:46] <Diplomat> I'm currently using Cassandra.. and it takes like 5 seconds to return 22k rows from SSD
[22:50:56] <xaq> What if I just make each subdocument its own collection, with a reference to its parent (basically using Mongodb as a relational db)?
[22:51:20] <xaq> Kind of like what they do here: http://docs.mongodb.org/ecosystem/use-cases/storing-comments/
[22:51:38] <xaq> Also, if the document size limit is 16MB, then I would exceed that under the current system.
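The flattened layout xaq is considering, along the lines of the storing-comments pattern linked above, could look like the following. All collection and field names are hypothetical:

```python
# Sketch: each nesting level becomes its own collection with a parent
# reference, so updates hit a small flat document instead of a deeply
# nested path, and no single document grows toward the 16MB limit.
post = {"_id": "p1", "author": "xaq"}
comment = {"_id": "c1", "post_id": "p1", "text": "hi"}
tag = {"_id": "t1", "comment_id": "c1", "tagname": "sample"}

# Updating a tag is now a flat, atomic, indexable operation, roughly:
# db.tags.update_one({"_id": "t1"}, {"$set": {"tagname": "renamed"}})
```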
[22:52:09] <rafaelhbarros> Diplomat: if you build your indexes right, for a flat query, it's quite fast
[23:13:35] <Zitter> Following a tutorial/guide, I get "E: Sub-process /usr/bin/dpkg returned an error code (1)" on my Debian amd64. Any hint on how to solve it? I'm googling but haven't solved it yet
[23:14:21] <Zitter> and of course I've done a "sudo apt-get remove mongodb-clients"
[23:19:19] <Zitter> even sudo apt-get install mongodb-10gen=2.2.3 doesn't work
[23:32:02] <Zitter> OK, found it. It is a problem of insufficient space in /var/lib/mongodb/journal
[23:57:53] <brucelee> what /proc/sys/net/ipv4/tcp_keepalive_time settings do you guys use?
[23:58:00] <brucelee> i think we initially had it for 7200