[00:27:06] <Bilge> I have an embedded document called emails.address and I have an index on emails.address but when I do db.User.find({ 'emails.address': "an@example.com" }).explain() I find that "indexOnly" is false
[01:11:40] <therealkoopa> I'm struggling with the $match in an aggregation. If I put { $match: { 'people.inactive' : false, _id: id }}, it returns nothing when every person in a document is inactive, but I want it to always return the document, just with the inactive people filtered out.
[03:51:10] <mrpro> when are collection locks coming?
[03:55:01] <ckd> mrpro: https://jira.mongodb.org/browse/SERVER-1240 it's mentioned somewhere near the top that it's scheduled for 2.4, but i'm looking for more concrete info
[03:56:45] <ckd> mrpro: i should walk that back, http://www.mongodb.org/display/DOCS/How+does+concurrency+work mentions it's not necessarily going into 2.4
[07:58:34] <unknet> How can i query a subdocument with mongoid?
[08:06:33] <samurai2> Hi there, if I use compound keys as my shard key and my query uses only one of them, will that query be run in parallel? thanks :)
[09:25:01] <Lujeni> Hello - a query with the findOne operator works well. However, when i use the *find* operator, mongo doesn't return the cursor ( collection = ~200 docs ). Someone have an idea pls?
[10:13:16] <Derick> why build it from source when you don't have to?
[10:13:43] <bigmeow> Derick: just kidding, the main reason is that i am not root on that machine, so i have to build it from source :( what a pain :(
[10:13:43] <NodeX> coz that's what the cool kids do
[10:15:09] <bigmeow> NodeX: dude, i am not a kid :( i am an old man :)
[10:15:17] <Derick> bigmeow: I thought RPM allowed for package relocations for this purpose?
[10:15:45] <Derick> bigmeow: what you can do, is download the RPM source and compile that - it should have all the required patches in there
[10:15:46] <bigmeow> Derick: can i install an rpm to my home directory?
[10:16:00] <Derick> bigmeow: I believe you can - but I have no RPM-based system to test here
[10:16:22] <bigmeow> Derick: i have only tried to install an rpm using rpm -ivh xx.rpm, and it will install that package to a place where root privilege is needed :(
[10:16:38] <NodeX> bigmeow : dude, I am not a surfer LOL :P
[10:17:12] <bigmeow> Derick: dude, do you use debian? then how to install mongodb to my home directory on debian?
[10:17:36] <Derick> bigmeow: just download the binary off mongodb.org/downloads
[10:18:01] <Derick> Download the .tgz, unpack, and run ./mongod (path might be different)
[10:18:20] <bigmeow> NodeX: i am that poor suffering guy :(
[10:18:51] <bigmeow> Derick: what is the format of mongodb file?
[10:19:02] <bigmeow> Derick: seems it is kind of binary file
[10:35:29] <roxlu> I've been asking on the google groups and in this channel for a way to set meta data for files when using gridfs and the c-driver. When using buffered writes there seems to be no way to add meta data? (when doing a normal gridfs insert you can pass a bson object as meta data)
[10:51:43] <nico__> Hi, I'm using pymongo and I'm testing the ReplicaSetConnection with read_preference. However with a small python script which lists a collection, the find() seems to be frozen at the end of the loop. If I run the same script with Connection (Single Node) instead of ReplicaSetConnection, the find() terminates normally...
[11:05:51] <bigmeow> Derick: dude, what is this? http://www.slideshare.net/pengwynn/mongodb-ruby-document-store-that-doesnt-rhyme-with-ouch
[11:06:10] <bigmeow> Derick: why is it some kind of zip file and not a pdf document? :( why?
[11:46:25] <Gargoyle> Derick: Last night when I updated my blog - I am running RC1 at home (and now RC2) on the server - the result from aggregate() was different. Is that something specific to the aggregate() function, or results in general?
[15:01:42] <therealkoopa> Is it possible to use an aggregation to filter subdocuments based on position? I'd like to find a document that has an array of subdocuments, but only return the subdocuments that come before the one with a given id.
[15:04:34] <Derick> therealkoopa: https://jira.mongodb.org/browse/SERVER-6074 like that?
[15:05:56] <therealkoopa> I don't think I need slice. I would like something like "All subdocuments that are before the subdocument with an id of 10", but I don't know if that's possible
[15:38:57] <NodeX> hanging one of my apps and it's getting annoying
[15:41:39] <therealkoopa> I'm able to get my query to work within the mongo command line, but the node mongo driver doesn't return results when I have the match in there: { $match: { 'changesets._id': { $gte: mongoose.Schema.ObjectId("50a110dafe69c6dee500000c") }}}. Any ideas how I can figure out why the match doesn't return anything from within the node mongo driver?
[15:54:47] <Derick> ckd: yay - let me know how it goes!
[16:21:11] <alexnixon> hi all. I have two collections, c1 and c2, in the same database which have almost identical documents, except in c1 all documents have a field "x" which is not present in c2. I would like to copy the field "f" from each document in c1 into the corresponding document in c2. I have written the following: https://www.refheap.com/paste/6579 however it's incredibly slow - some updates are taking upwards of 30 seconds. Does anyone have ideas for how I could a
[16:22:05] <kali> your question has been trucated by your irc client...
[16:22:30] <alexnixon> bah. I'll post separate sentences.
[16:22:42] <alexnixon> I have two collections, c1 and c2, in the same database which have almost identical documents, except in c1 all documents have a field "x" which is not present in c2
[16:22:48] <alexnixon> I would like to copy the field "f" from each document in c1 into the corresponding document in c2
[16:22:52] <alexnixon> I have written the following: https://www.refheap.com/paste/6579 however it's incredibly slow - some updates are taking upwards of 30 seconds
[16:22:57] <alexnixon> Does anyone have ideas for how I could achieve the same effect, but more quickly?
[16:23:26] <kali> alexnixon: do you have an index on c2.title ?
[16:24:09] <kchodorow> do they have the same _ids?
[16:24:24] <alexnixon> there is an index on title, yes. the _id's are different
[16:29:54] <alexnixon> it's not all inserts that are taking that long - it's just that when I hammer the console with db.currentOp(), I often see long running inserts
[16:31:59] <alexnixon> is there a general solution to the "collection migration" problem - i.e. you want to update a whole bunch of documents before the universe expires?
[16:32:51] <alexnixon> I've hit this problem before and wasn't too sure what to do. Perhaps a mongodump/mongorestore, with some flag telling mongo to leave some extra padding for each doc when importing?
[16:35:22] <kali> alexnixon: no general solutions, more like evasive action, here
[16:39:36] <kchodorow> alexnixon: you could try compacting the collection first, you can tell compact to leave padding on each document (see paddingFactor: http://docs.mongodb.org/manual/reference/command/compact/)
[16:39:49] <kchodorow> but then you're just trading update time for compaction time
[16:41:52] <neiz> anyone familiar with the 10gen c# driver?
[16:42:07] <ron> don't ask meta questions. ask your question.
[16:43:11] <alexnixon> kchodorow: thanks for that - it could well be quicker than waiting for the updates...worth a try at least
[16:43:46] <kchodorow> alexnixon: compact does block everything else from running, so don't do it on a live system
[16:44:44] <alexnixon> kchodorow: cool that's not a problem for me right now as I'm doing this manipulation off-line, and then will dump/restore to the live system when it's finished.
[16:46:52] <neiz> I am updating a collection by using Collection.Update.PushWrapped("Item", item) to push my object in as an array. Once inserted as an array, how do I access individual elements? (10gen c# driver)
[17:14:35] <hays> is it fair to think of mongodb as just a giant hash table
[17:16:16] <kali> hays: not really, it has some knowledge of what is in the document, so you can have indexes, server side filtering, selective retrieval, atomic updates
[19:08:37] <srp_> Hello, I have this BSON in my database.. http://paste.kde.org/604988/ I want to update this by removing {"50a13bec21966423837a02cd" : 0 } . how do i do this? I'm a newbie and tried this(http://paste.kde.org/605000/) query but i think its wrong! i'd be glad if someone helped me! :)
[19:09:02] <totic> wereHamster: if you want to save space you don't use strings.. if you want to make it human readable you don't use numbers.. the combo is enums
[19:09:10] <totic> saved as numbers, represented as strings
[19:09:24] <wereHamster> totic: great. So what was yer question again?
[19:13:07] <srp_> wereHamster: I tried it.. not getting the query right :( this is what i tried.. db.sim.update({"type" : "note"}, {"similarity" : {"$pull" : [{"50a13bec21966423837a02cd": 1}]}}) but it said "uncaught exception: field names cannot start with $ [$pull]"
[19:14:06] <wereHamster> srp_: read the $pull syntax
[19:14:24] <srp_> wereHamster: ok.. just a min.. :)
[19:16:50] <srp_> wereHamster: ah.. saw it.. i understood that i cannot use $pull here because i cannot know the value in advance. I only know the key in the example i showed! how do i do that?
[19:17:50] <srp_> wereHamster: is there a wildcard in mongodb which matches anything in the place of value.. ?
[19:20:55] <srp_> wereHamster: no problem.. thats fine.. i can store those numbers as strings as i only have a small number of these floating-point values.. how can i use a wildcard on strings?
[19:21:56] <wereHamster> I'm just guessing here. regex (// is an empty regex)
[19:22:25] <srp_> wereHamster: ok.. let me try that.. thanks :)
[19:53:03] <fideloper> Hey! Does anyone have any opinions on rethinkdb? Looks interesting.
[19:53:09] <fideloper> Wondering what ya'll thought about it
[19:56:08] <FrenkyNet> fideloper: no php support, aka fail for me
[19:57:07] <fideloper> yeah I noticed that as well - same here
[19:57:25] <fideloper> that's a big segment of web dev to ignore
[20:03:30] <ee99ee> hi... I'm having trouble with mongo and PHP, getting this simple query to work: http://pastebin.com/E51XQ44d
[20:03:40] <ee99ee> anyone have any ideas? it returns the right data at the command line, but not within PHP
[20:03:54] <ee99ee> I think it has something to do with the nested array
[20:06:15] <Gargoyle> ee99ee: Your array looks OK. But I have never used new MongoCollection(). Do you get the same result if you use $this->mongodb->users->find() ?
[20:16:25] <Derick> ee99ee: instead of: new MongoCollection($this->mongodb, 'users'); you can just do $this->mongodb->users
[20:16:28] <Gargoyle> ee99ee: You are deffo connecting to all the right db's / collections. Not using a test db by mistake (Not that I have ever done that and run myself round in circles for hours!! ;)
[20:16:39] <Derick> also, is $this->mongodb the database, or the collection?
[20:33:46] <Nicolas_Leonidas> This changes the "_id" attribute of docs, correct? is there a way to preserve the id that rows have in smalldaily, when they are moved to largedaily?
[20:35:06] <Gargoyle> I would have thought the _id would be preserved. Mongo only creates one if there isn't one in the doc you are saving.
[20:44:34] <Gargoyle> Derick: client server 1 segfaulted 3 times on the run. I realised this isn't the one I installed valgrind on, so switched to the other box - and the script runs fine!
[20:45:06] <Gargoyle> going to see if I can spot a difference - both servers should be identical.
[20:45:27] <Derick> Gargoyle: export USE_ZEND_ALLOC=0 to make debugging easier
[21:33:36] <Nicolas_Leonidas> when I do this in shell => db.largedaily.find({"$and":{"date":{"$gt":"2010-01-01","$lte":"2010-01-02"},"lid":6209}});
[21:33:45] <Nicolas_Leonidas> it says error: { "$err" : "$and expression must be a nonempty array", "code" : 14816 }
[21:34:33] <Derick> Nicolas_Leonidas: $and wants an array ([]) not an object ({})
[21:34:46] <Gargoyle> Derick: Line 30 is the one segfaulting. But ONLY when the other commands have run. If I remove lines 3 to 29 which is essentially what the previous loop iteration is doing, it does not crash.
[21:35:19] <Derick> Gargoyle: and what does gdb's "bt full" say?
[21:35:31] <Derick> Gargoyle: also, did you run this with the env var "USE_ZEND_ALLOC=0" ?
[21:35:37] <Gargoyle> Not put gdb onto this box yet.
[22:04:32] <Nicolas_Leonidas> Derick: I asked my question with more details here: http://stackoverflow.com/questions/13352302/mongodb-search-using-a-date-range-and-another-condition
[22:39:42] <navu> I'm quite new to mongodb and wanted to find out some information. I'm planning to deploy a mongodb project to Azure and would like to take advantage of the replica sets
[22:57:30] <timhaines> Guys - a newb question. When does it make sense to put many collections inside one DB, and when should collections have their own DBs? I read the write lock is now set at DB level. Does this mean it's beneficial for each collection to have their own db?
[23:10:31] <Freso> What's the "definitive guide" book for MongoDB? I see both O'Reilly and Apress having one.
[23:13:56] <jin_> Hey guys, I have a question regarding setting up replicasets. Currently, I'm setting up the mongod instances on three separate VMs using a chef recipe. After it is setup, one of the VM executes a recipe that initializes it by itself, then gathers the other nodes' fqdn to reconfigure the replicaset. The first node has a configuration file setup as it should, but the other nodes are not initializing, although they are being contacted by
[23:14:06] <jin_> I would appreciate any pointers.
[23:40:02] <ckd> Freso: Kristina is hard at work on a second edition of her O'Reilly book
[23:40:55] <ckd> timhaines: that's an excellent example of when it's great to have one collection per DB
[23:41:37] <timhaines> ckd: Is there any benefit to having more than one collection in a db?
[23:42:22] <Freso> ckd: It probably won't be done by Nov. 24th though. :)
[23:43:07] <ckd> Freso: Probably not :) hers is def a great book, but factor in that both books are already a bit out of date
[23:43:19] <ckd> Freso: but the basic concepts are def there
[23:44:37] <Freso> ckd: Yeah. I'm compiling a list of books for my birthday if anyone wants to make a donation for a project I'm working on (which will eventually utilise MongoDB).
[23:44:57] <Freso> ckd: "list of books", read: Amazon wishlist
[23:45:29] <ckd> timhaines: that's a question for somebody more competent than myself… believe it or not, for what I use Mongo for, I don't use map reduce or aggregation, so i don't know if either of those are limited in cross-database stuff