[03:32:00] <s3k7i0n8> OperationFailure: database error: not authorized for query on....
[06:04:18] <IrishGringo> I'm an iPhone and Android developer, possibly looking at Mongo to replicate data back to my apps. Are there native libs for Mongo, à la Couchbase Lite?
[10:08:04] <trupheenix> my MongoDB instance is not starting up after I added the following line to my config file: setParameter = textSearchEnabled = true
[10:08:14] <trupheenix> I am not even getting any errors in my log file output
[10:08:22] <trupheenix> anyone know why my mongo is even refusing to start?
[10:08:28] <trupheenix> removing the above line solves the problem.
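For reference, the 2.4-era INI-style mongod.conf documents setParameter values with no spaces around the inner equals sign; assuming that config format, a likely fix for the line above is:

```ini
# mongod.conf (pre-2.6 INI style)
# setParameter takes name=value, with no spaces around the inner '='
setParameter = textSearchEnabled=true
```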
[12:48:01] <grexican> hello all. I have a question about the aggregation pipeline and $elemMatch. I'm trying to roll up a collection into an array using $addToSet, then I'm trying to match items in my new array against another value in my document by using $elemMatch
[12:49:20] <Derick> grexican: you can't do it like that
[12:49:35] <grexican> So I roll up and I find out if there are any elements with the Closed flag set by using HasClosed: { $max: '$Closed' }. My subdocument is then each year + its closed flag. I then want to filter my subdocument and remove any element where Closed != HasClosed
[12:51:17] <grexican> yea I see that hehe. Basically what I'm trying to do is the SQL equivalent of a subquery, where I'd get the element with the max Year within the group 'Associations.ClientId' given that ClientId's max Closed flag.
[12:51:36] <grexican> i.e. I want to prefer getting a Closed year if there is one, but if there isn't one, I'll take the most recent not-Closed year
[12:52:29] <grexican> The only way I know how to do it at this point is two separate queries and process them -- one that gets Closed years and one that gets non-Closed years. I'd like to avoid that, though, because I'd like to use other aggregation features
[12:53:35] <Derick> grexican: you can use $eq with two field names though
[12:54:21] <Derick> grexican: and you don't need an elemMatch either
[12:56:22] <grexican> hrmmm ok. I'll experiment a bit and see if I can get this further. I feel like I need to be able to include a $match and $group in the same query. Let me give it a go and I'll report back. Thanks Derick
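A minimal sketch of what Derick is hinting at, with field names guessed from the conversation: group with $addToSet and $max, then compare the two fields with an aggregation-expression $eq in a $project instead of using $elemMatch.

```javascript
db.collection.aggregate([
  { $group: {
      _id: "$Associations.ClientId",
      Years:     { $addToSet: { Year: "$Year", Closed: "$Closed" } },
      HasClosed: { $max: "$Closed" }
  } },
  { $unwind: "$Years" },
  // $eq can compare two fields of the same document; no $elemMatch needed
  { $project: { Years: 1, HasClosed: 1,
                keep: { $eq: [ "$Years.Closed", "$HasClosed" ] } } },
  { $match: { keep: true } },
  // re-group, keeping only the elements whose Closed matches HasClosed
  { $group: { _id: "$_id", Years: { $push: "$Years" } } }
])
```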
[13:26:58] <crashev> hello, anyone know what this error means? (using php5-mongo and trying to connect; it used to work with PHP 5.3 and stopped working after upgrading to PHP 5.4) => 'Failed to connect to: server.com:27017: get_server_flags: got unknown node type'
[13:29:30] <Derick> crashev: which version of php5-mongo is that?
[14:16:16] <ymas> hello everyone, I'm currently building a system which allows users to take a survey on the phone. Our provider posts the response to each question individually to a callback url I specify. Rather than initiating and storing each response in my RDBMS, I was thinking of storing the responses in MongoDB and then persisting to the RDBMS at the end of the call, is this a sane use case for MongoDB?
[14:17:58] <ymas> I could write the responses to the RDBMS directly, but the anticipated volume is quite large
[14:18:26] <ymas> so I was looking to use an intermediate storage and then persist to the RDBMS as a way to reduce load
[14:18:39] <Nodex> seems more suited to something like redis
[14:19:02] <Nodex> but you won't get guaranteed consistency in the case of a network partition
[14:19:09] <whaley> ymas: incidentally mongo is very well suited for "event data", which is what your response data looks like. what you do after that is entirely up to you
[14:19:15] <cheeser> or just drop the rdbms altogether if it can't handle the load
[14:19:15] <Nodex> seems like you really really need that data
[14:19:46] <whaley> ymas: there are tools within (and written for) mongo to aggregate those very easily to do whatever you need to do afterwards... including writing the aggregations to another mongo collection
[14:20:24] <cheeser> well, until 2.6 you'll have to write out to that other collection manually.
[14:20:39] <cheeser> 2.6 gives you $out in the aggregation pipeline to do that automatically for you.
[14:20:57] <cheeser> but it'll clobber the existing data so it's not super great for incremental stuff.
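For illustration, a sketch of the 2.6 $out stage with hypothetical collection and field names; as cheeser notes, each run replaces the target collection wholesale.

```javascript
db.responses.aggregate([
  { $group: { _id: "$callId", answers: { $push: "$answer" } } },
  // $out (MongoDB 2.6+) writes the pipeline's results to another
  // collection, clobbering its previous contents on every run
  { $out: "call_summaries" }
])
```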
[14:22:19] <Nodex> to be honest it's huge overkill just to save load; it's much better suited to a queue system
[14:25:05] <ymas> Nodex, I need to persist the data as it comes in and I also need to persist the call id (and some more metadata). Ideally, I'd like one CRUD operation to the RDBMS per call; my worry about a key-value store is consistency
[14:26:04] <ymas> whaley, yes, it is incremental event data. Basically the user gets a question, keys in a response, and the system posts to me until the call terminates, so being able to store everything and then aggregate and send to the RDBMS is precisely what I'm looking to do
[14:36:01] <xialvjun> sorry, '3q' is Chinese internet English for 'thank you'
[14:36:06] <ymas> cheeser, the phone option is a new addition to an existing online option so I don't think I need to throw the back-end away but I will consider your point :) I just want to reduce load.
[14:36:17] <ymas> Many thanks all for your suggestions, much appreciated.
[14:37:11] <Derick> xialvjun: $inc is safe. It's an atomic operation that will always be executed. If you do $set in two threads then you might just get *one* of the new values.
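A small shell illustration of the difference Derick describes, using a hypothetical counter document:

```javascript
// $inc is atomic on the server: two concurrent increments both apply.
db.counters.update({ _id: "hits" }, { $inc: { n: 1 } })

// A read-modify-write with $set is not: if two threads both read n=5
// and each writes $set n=6, one increment is lost.
var doc = db.counters.findOne({ _id: "hits" })
db.counters.update({ _id: "hits" }, { $set: { n: doc.n + 1 } })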
[14:46:23] <drunkgoat> hi! i have a question regarding schema design. i'm building a twitter-like app... when presenting a feed for a user, i need to query for all posts made by followed users. it doesn't seem wise to put all followed users in a subdocument - that array might grow to, let's say, a thousand ObjectIds... besides, i sometimes need to query for all of a user's followers.. so i thought, maybe i need
[14:46:23] <drunkgoat> a collection for "follows", where documents look like { followingUser : a, followedUser : b }, and just keep counters of total following/followers in the user document. in that case, to present a feed i'll need to first query for all followed users, and then for posts made by them... what do you think? thanks!
[14:47:45] <cheeser> i'd be ok putting followers as an array in a users collection
[14:47:48] <crashev> Derick: I wonder how to check which php5-mongo version I have, any clues? I have the version from dotdeb for Debian, showing 5.4.24-1~dotdeb.0, and I see nothing special in phpinfo()
[14:49:26] <crashev> Derick: what exactly does 'get_server_flags: got unknown node type' mean? Is it some kind of version conflict between the library and the remote server?
[14:50:18] <Derick> crashev: yes - which php5-mongo version do you have?
[14:50:32] <Derick> crashev: I don't know what packaging system you're using
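One way to check the extension's own version from the command line (the 5.4.24 string above is the PHP version, not the driver's), assuming the CLI loads the same php.ini:

```sh
php --ri mongo                            # dumps the mongo extension's info block, version included
php -r 'echo phpversion("mongo"), "\n";'  # prints just the extension version
```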
[14:50:41] <drunkgoat> cheeser: but what if a certain user has thousands of followers or he follows thousands of others?
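A hedged sketch of the "follows" collection drunkgoat describes, which sidesteps unbounded arrays; all collection and field names here are hypothetical:

```javascript
// One document per follow edge, indexed in both directions.
db.follows.insert({ followingUser: userA, followedUser: userB })
db.follows.ensureIndex({ followingUser: 1, followedUser: 1 }, { unique: true })
db.follows.ensureIndex({ followedUser: 1 })

// Building a feed is then two queries: who does userA follow...
var followed = db.follows.find({ followingUser: userA })
                         .map(function (f) { return f.followedUser; })
// ...and what did they post?
db.posts.find({ author: { $in: followed } }).sort({ created: -1 }).limit(50)
```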
[15:02:16] <Derick> There is: http://docs.mongodb.org/manual/release-notes/1.6/#upgrading http://docs.mongodb.org/manual/release-notes/1.8/#upgrading http://docs.mongodb.org/manual/release-notes/2.0/#upgrading http://docs.mongodb.org/manual/release-notes/2.2/#upgrading and http://docs.mongodb.org/manual/release-notes/2.4-upgrade/
[15:02:58] <crashev> Derick: oh great, I was just looking for this, thank you very much for the help
[15:03:25] <Derick> trying to figure out where to get 1.6 and 1.8 right now
[15:45:21] <pmitros> If I turn off write concerns, what happens if my Mongo server becomes overloaded (e.g. CPU-bound or disk bound)? Does it still slow down when I make the call (e.g. it adds to the queue during the call, but just doesn't let me know until it is done processing), or will it be like a lot of UDP packets falling into the ether?
[15:47:30] <Derick> pmitros: it will slow down, and not drop anything
[15:47:40] <Derick> (as long as you don't end up hitting default cursor timeouts)
[15:52:06] <pmitros> You don't, by chance, know of a page where I can read about where/how that happens (one level deeper than http://docs.mongodb.org/manual/core/write-concern/)?
[15:52:52] <Nodex> if you turn off write concerns it will just fire and forget as Derick says
[15:54:58] <pmitros> Nodex: That's the opposite of what Derick said.
[15:55:21] <Derick> write concern is enabled by default
[15:56:20] <pmitros> The key question is what "fire" means. Does it go into a queue (and slow down if the queue is full), and just not acknowledge the actual write, or does it just send e.g. some UDP packets where you don't know if it actually made it into a queue?
[15:58:10] <Derick> pmitros: it does not use UDP, only TCP
[15:58:39] <pmitros> The specific use-case is I have a set of Python processes that would like to dump data into Mongo as fast as Mongo can handle it. There is a pool of such processes, each inside a while loop ("while stuff to dump: dump it to Mongo"). I'm wondering if w=0 will result in the writes throttling to the speed Mongo can write to disk, or in the data getting dropped.
[15:58:41] <Derick> but with "fire and forget" (writeConcern of 0), the client does not know whether the server is doing something with it. It does not mean the server does not get the command
[15:59:14] <Derick> pmitros: data will not get dropped unless the server gets into some error situation (duplicate key, disk full...)
[15:59:47] <pmitros> Derick: Is running out of queue space in the incoming queue (not disk space) an error situation?
[16:00:15] <Derick> there is no such thing in mongodb
[16:01:57] <pmitros> Derick: Is it a pool of processes like a web server? Is there an architecture diagram somewhere?
[16:06:33] <Derick> pmitros: it's a pool of threads - one per connection IIRC. Not sure of a diagram
[16:09:46] <pmitros> Thank you. That makes perfect sense.
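To make Derick's point concrete, a shell-level sketch of an unacknowledged write (collection and fields hypothetical):

```javascript
// w:0 means the client sends the insert over TCP and returns without
// waiting for acknowledgement. The write is still queued and applied;
// TCP back-pressure throttles the client if the server falls behind.
// You simply won't hear about server-side errors (duplicate key, etc.).
db.events.insert({ ts: new Date(), payload: "..." },
                 { writeConcern: { w: 0 } })
```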
[20:08:17] <remote> would it be possible to create a unique constraint that requires two fields to both be duplicated before it applies?
[20:09:29] <remote> i would like to allow duplicate station_id entries when they are of type "historical" but not when they are of type "location"
[20:10:05] <kali> remote: yes, just create a unique index with your two fields
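A sketch of kali's suggestion, with a hypothetical collection name. One caveat worth noting: a compound unique index permits exactly one document per (station_id, type) pair, so it also caps "historical" entries at one per station rather than allowing arbitrarily many.

```javascript
// Unique on the pair: the same station_id may appear once per type,
// so a "historical" and a "location" entry can coexist, but two
// "location" entries for one station_id are rejected.
db.stations.ensureIndex({ station_id: 1, type: 1 }, { unique: true })
```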
[20:36:28] <emiel1> Hi all, small question (I hope):
[20:38:34] <emiel1> i'm using mongoose, and have 2 schemas, rooms and messages. I populate the messages on a room using populate(), but I want to limit the messages. I need to use match: { $gte: '**' }, but I want to match ** against a field in the room collection; is this possible?
[20:39:11] <emiel1> when I use match : { created: { $gte: 'room.join' } } it gives me a cast error
[20:39:27] <emiel1> cannot convert Date to string..
[20:45:11] <kali> emiel1: you can't get it with the $gt/$lt/... syntax. you can use $where, or maybe the aggregation framework.
[20:45:52] <kali> emiel1: this may have a significant performance impact, as indexes are inoperative for this kind of filtering
[20:48:18] <emiel1> Thanks kali, I think my schema design is wrong. What kind of schema would you suggest for the following situation: rooms have messages, messages have a creation date, and I want to get all the rooms and their messages while limiting the returned messages based on the creation date..
[20:48:45] <emiel1> I would like to do this with one query if possible :)
[20:50:04] <kali> keep enough recent messages in the room object
[20:53:27] <emiel1> I don't want to use it for 'the last 5 messages'; i need this for a user's history (the user has a join date for a room, and i want to use this date to fetch the messages)
[20:55:27] <kali> emiel1: you may want to check this out, then: http://fr.slideshare.net/jrosoff/mongodb-advanced-schema-design-inboxes
[20:57:08] <emiel1> Kali: thanks for your help, i'll check it out :-)
[20:58:00] <kali> emiel1: there should be a blog entry about the same topic somewhere
[21:19:48] <emiel1> kali: thanks for your help! I added a 'recipient' field to the messages and use that to filter only the messages for the user
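A hypothetical mongoose sketch of the approach emiel1 landed on: since populate()'s match must be a constant query (not a reference to a room field), stamping each message with a recipient field turns the per-user filter into a plain equality. Schema and field names are assumptions.

```javascript
// Assumed schemas: Room has { members: [...], messages: [ObjectId ref 'Message'] },
// and each Message gets a 'recipient' field when it is delivered to a user.
Room.find({ members: userId })
    .populate({ path: 'messages', match: { recipient: userId } })
    .exec(function (err, rooms) {
      // rooms[i].messages now contains only this user's messages
    });
```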
[23:17:38] <italomaia> hi folks. I'm thinking about renting three OpenVZ VPSes to run Mongo for my blog
[23:17:52] <italomaia> how good a setup is this?
[23:47:25] <asido> I have this configuration: http://i.imgur.com/ggtyWkz.png
[23:47:32] <asido> but I still get this error: ---> MongoDB.Driver.MongoConnectionException: Unable to connect to a member of the replica set matching the read preference SecondaryPreferred (tags = [{dc.andromeda:1}])
[23:56:54] <Joeskyyy> Might help to see the snippet where it's reading.