[03:43:58] <frodo_baggins> Oh boy... now I get to figure out how to include language in text search... @_@
[03:51:12] <frodo_baggins> Oh, well, it actually looks pretty easy.
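[Editor's note] For context on what frodo_baggins found "pretty easy": MongoDB text indexes take a `default_language`, individual documents can override it with a `language` field, and `$text` queries accept a `$language` option. A minimal sketch, shown as plain spec documents (collection and field names are assumed, and no server is contacted here):

```python
# Text index keys and options as they would be passed to create_index
# (collection/field names "articles", "title", "body" are hypothetical).
index_keys = [("title", "text"), ("body", "text")]
index_options = {"default_language": "english"}

# A document can opt into a different stemmer for this index:
doc = {"title": "playa", "body": "una playa en alcúdia", "language": "spanish"}

# Query side: $text with an explicit $language override.
query = {"$text": {"$search": "playa", "$language": "spanish"}}

# Against a live server this would be roughly:
#   coll.create_index(index_keys, default_language="english")
#   coll.find(query)
print(query["$text"]["$language"])
```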
[04:37:39] <k0emt> trying to get mms running on an Azure Ubuntu 12.04 instance. MongoDB is configured and accessible on the VM and from the world given appropriate CNAMES and internal/external IPs. admin db is set up with an account for mms. mms control panel has several hosts added (various attempts at CNAME and IP) along with db credentials. However, the agent doesn't seem to be reporting anything up.
[11:45:49] <kristian_> it says mongodb BSON "conform to the JSON RFC" -- but still I "can't have . in field names". I would say this is an incompatibility
[11:47:25] <Derick> kristian_: where do you read that?
[13:24:30] <talbott> but how do i allow writes into my secondary?
[13:24:54] <talbott> i have a connection from a supplier straight into my mongo (primary)
[13:25:03] <talbott> if it becomes secondary, their writes are failing
[13:25:29] <Nodex> I don't think writes can go to secondaries
[13:26:55] <talbott> is my architecture off then? I have a primary, a secondary and an arbiter
[13:27:17] <talbott> they do a great job of failing over, but I can't tell the supplier to fail over (because they have no mechanism to take a list of hosts, only one)
[13:28:43] <Nodex> if you want HA then you need to shard and replicate
[13:28:57] <Nodex> if you want read scaling then replication will suffice
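[Editor's note] Nodex is right that replica-set writes only go to the primary; talbott's actual problem is the supplier connecting to a single host. The standard fix is a replica-set connection string listing several members, so the driver discovers the current primary after a failover. A sketch with made-up hostnames:

```python
# A replica-set URI (hostnames hypothetical). A driver given this string
# discovers the current primary and routes writes to it, so failover is
# handled client-side; no per-host configuration is needed.
hosts = ["db1.example.com:27017", "db2.example.com:27017"]
uri = "mongodb://" + ",".join(hosts) + "/?replicaSet=rs0"
print(uri)
```

If the supplier's tooling truly accepts only one hostname, a DNS name repointed at the current primary is a workaround, but the multi-host URI is the supported mechanism.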
[13:36:23] <rustx> Nodex: sorry, may I try to explain my problem to you again?
[13:37:40] <rustx> Nodex: i have 4 vms ( 1 ruby on rails VM as backend) and 3 Mongo VMs built on a replicaset ( 1 primary, 2 secondary)
[13:38:12] <rustx> Nodex: my ruby on rails VM can read and write documents on the replicaset, thanks to the ruby-mongo-driver
[13:38:27] <rustx> Nodex : so, from an irb shell, I can find all collections from the replicaset, and write into it
[13:39:02] <rustx> Nodex : but when i go on the primary Mongo vm, and try db.database.find(), I don't find the documents that were created by the ruby-mongo-driver
[13:40:20] <rustx> Nodex: i was wondering if the replicaset was keeping document writes in RAM before writing them to disk? or does a replicaset write to disk the same way a standalone mongod does locally?
[13:41:21] <rustx> Nodex: sorry if my question makes no sense .. i hope that explanation was better
[13:42:42] <rustx> Nodex: to sum up, the problem i have is that i don't find the documents created by my ruby-mongo-driver on my mongo primary vm locally
[13:42:51] <rustx> is there a sync delay to write documents on disk ?
[13:45:48] <Nodex> rustx : where are you writing the docs to?
[17:09:31] <mcr> Can I run a mongos on multiple systems? I want to migrate where it runs from being at HQ, to being in a cloud, but I don't want to interrupt all the applications talking to it
[17:11:00] <mcr> I think that I can, and I also think that having multiple mongos running is not unusual.
[17:12:45] <cheeser> more than one on a single machine is rather pointless, yes.
[17:29:10] <skot> (for the server, but you should be able to compile/use any client like c++/c/java/python/node.js/etc)
[17:36:54] <moleWork> in node.js what is preferred: var mongoClient = new MongoClient(new Server('localhost', 27017)); mongoClient.open()........ or MongoClient.connect("mongodb://url....
[17:37:32] <LouisT> moleWork: i would assume it's what you prefer
[17:38:30] <moleWork> i'm trying to tell if there are differences... like one has maxPoolSize and one has poolSize... not sure if they differ in behavior or not yet... poolSize creates all the connections and doesn't scale on demand... not sure if .connect with maxPoolSize has different behavior
[17:38:42] <moleWork> also i'm trying to figure out how to deal with driver state
[17:39:35] <moleWork> like using .open... is it: if (mongoClient._db.state != "connected") { /* i can't use you right now */ }
[17:40:32] <moleWork> trying to figure out what the differences are right now
[17:40:58] <moleWork> trying to avoid "Connection was destroyed by application" errors
[17:48:28] <moleWork> also seems weird you can specify connectTimeoutMS and socketTimeoutMS in the url and in the options hash (along with maxPoolSize in the url and poolSize in the options hash)
[18:09:55] <moleWork> looks like maxPoolSize is the same as poolSize, just weird naming... maxPoolSize isn't a maximum pool size, it is the pool size, so if you specify it, it opens that many connections right away and doesn't scale up... which is the same as what poolSize does
[18:17:45] <q85> mcr, yes, you can have multiple mongos instances, on different hosts, connected to the one cluster. You can also have multiple mongos instances, on the same host, each connected to a different cluster.
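[Editor's note] For mcr's migration case specifically: since multiple mongos routers can serve one cluster, clients can list both the old and new router in their URI during the cutover, then the HQ instance can be retired without interrupting applications. A sketch (hostnames and database name are made up):

```python
# During migration, list both mongos routers in the client URI; drivers
# will use whichever is reachable, so the HQ one can be shut down later.
uri = "mongodb://mongos-hq.example.com:27017,mongos-cloud.example.com:27017/mydb"
print(uri)
```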
[21:03:37] <ranman> grails: 1) put a u in front of the string 'alcudia' and make sure you're using the accented character like this u'alcúdia' and also pass in the ignore case option
[21:04:51] <ranman> frodo_baggins: sweet glad that works!
[21:20:32] <ranman> grails: your other option is to enumerate the set of accents that are possible in spanish and create a search function that creates a regex that matches all those special characters
[21:20:55] <ranman> grails: or you can pass on to your users that they have to type the special characters :D
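[Editor's note] ranman's second option — enumerating the Spanish accents and generating a regex — can be sketched like this (the character-class table below is an illustrative subset, not exhaustive):

```python
import re

# Expand each letter that has a Spanish accented variant into a character
# class, so a plain query like "alcudia" also matches "alcúdia".
ACCENT_CLASSES = {
    "a": "[aá]", "e": "[eé]", "i": "[ií]",
    "o": "[oó]", "u": "[uúü]", "n": "[nñ]",
}

def accent_insensitive_pattern(term):
    """Build a regex source string that ignores Spanish accents."""
    return "".join(ACCENT_CLASSES.get(ch, re.escape(ch)) for ch in term.lower())

pattern = re.compile(accent_insensitive_pattern("alcudia"), re.IGNORECASE)
print(bool(pattern.search("Alcúdia")))  # True: matches despite the accent
```

The resulting pattern string could then be handed to a MongoDB `$regex` query.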
[21:56:07] <TheFinal> if i use an encrypted connection to the mongodb port..
[21:56:54] <ranman> TheFinal: security mostly, also it's inefficient, what exactly are you trying to do with your iOS app? Have you considered just using a backend like parse instead?
[21:57:21] <frodo_baggins> Does MongoDB have any conventions for the naming of document properties?
[21:57:30] <frodo_baggins> Such as... a_property_name, or aPropertyName?
[21:58:05] <TheFinal> what i’m trying to do is a prototype, needs to be a quick one, of a sort of “social” app based on user coordinates. now i need to realize a proof of concept
[21:58:24] <ranman> TheFinal: use parse, putting a database driver in the iOS app is incorrect.
[23:11:26] <frodo_baggins> Hmm, it looks like my upsert isn't quite working as I had hoped.
[23:26:15] <frodo_baggins> Does an update query have to have all the fields in order to match an existing document?
[23:27:36] <ranman> frodo_baggins: fields no? array elements yes
[23:27:47] <ranman> frodo_baggins: as always :), can you share the doc you want to match and the query you're using?
[23:28:05] <frodo_baggins> Ah, that may be my issue then, the document I'm hoping will be matched has array elements.
[23:29:47] <frodo_baggins> Which, in my case aren't exactly necessary.
[23:29:59] <ranman> you can use $in query to match just some array elements
[23:40:45] <frodo_baggins> What I want to do basically is check for two fields, increment another field, while using the upsert option.
[23:41:02] <frodo_baggins> And have it match a document that has many other fields.
[23:41:32] <frodo_baggins> Here's a code sample: https://gist.github.com/blafreniere/b16b78dc9824e35771f8
[23:45:14] <ezrios> so I'm using mongoengine for Python and I'm not sure I understand the difference between the `unique` and `unique_with` constraints
[23:45:41] <ezrios> well actually I just don't really know what `unique_with` means
[23:46:02] <ezrios> does it mean that a particular combination of the fields given is unique?
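[Editor's note] ezrios's guess is correct: `unique=True` constrains one field on its own, while `unique_with` makes the *combination* of fields unique. Under the hood that corresponds to a compound unique index. A sketch of the equivalent raw index spec (field names are hypothetical; the mongoengine class is shown only in a comment since it needs the library):

```python
# mongoengine's unique_with is equivalent to a compound unique index:
compound_index_keys = [("first_name", 1), ("last_name", 1)]
compound_index_options = {"unique": True}

# mongoengine form (not executed here):
#   class Person(Document):
#       first_name = StringField(unique_with="last_name")
#       last_name = StringField()
# Two people may share a first_name or a last_name, but not both.
```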
[23:47:56] <kireevco> Hi all, does anyone use puppet to orchestrate mongodb cluster configuration? Thanks!
[23:54:09] <moleWork> damn... anyone run mongodb on azure before
[23:54:13] <frodo_baggins> Perhaps this is what I need? http://docs.mongodb.org/manual/reference/operator/update/setOnInsert/
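[Editor's note] `$setOnInsert` does fit what frodo_baggins described: match on two fields, increment a counter, and only write the extra fields when the upsert actually inserts. A sketch with made-up field names, shown as plain filter/update documents:

```python
# Match on two fields, $inc a counter, and use $setOnInsert so the extra
# field is written only when the upsert inserts a new document.
filter_doc = {"user": "frodo", "page": "/home"}
update_doc = {
    "$inc": {"visits": 1},
    "$setOnInsert": {"created_at": "2013-10-08"},
}

# Against a live collection this would be roughly:
#   coll.update_one(filter_doc, update_doc, upsert=True)
```

On a match, only `$inc` applies; documents with additional fields still match, since a query matches on the fields it names (arrays excepted, as ranman noted).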
[23:54:47] <moleWork> i've been playing with keepalives and everything.... to try to make this connection pool stable but all of a sudden i get "end connection" and it shuts down the whole connection pool and starts it up again
[23:55:01] <moleWork> this desk is getting flipped