[01:37:27] <azbyin> hi all.. what is the recommended way to store images?
[01:38:10] <azbyin> I'm not sure if I want to use gridfs as the images will be small, usually < 1MB, but guaranteed to be less than the BSON limit of 16MB
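Since azbyin's images stay well under the 16MB BSON limit, they can be embedded directly in a document as binary data; GridFS is mainly needed for files above that limit or when streaming partial reads matters. A minimal sketch in the mongo shell, assuming a hypothetical `images` collection (requires a running mongod):

```javascript
// Store a small image inline as BinData (subtype 0 = generic binary).
// The base64 payload and field names here are illustrative.
db.images.insert({
    filename: "photo.jpg",
    contentType: "image/jpeg",
    data: BinData(0, "<base64-encoded image bytes>"),
    uploadedAt: new Date()
})
```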
[06:43:35] <x1337807x> Anybody familiar with Mongoid?
[06:45:57] <x1337807x> I'm using mongoid in a rails-api app and trying to store JSON in mongodb in a pretty basic case I think. I have one model called Result and if I call Result.mongoize I see something different than what is eventually stored.
[08:47:59] <moian> I've a collection with 80M+ objects, and I just found out that about 4k objects are duplicated (between 2 and 9000 times each). The only operation I use on this collection is an upsert, so I don't understand how duplicates are possible
[09:25:05] <kali> moian: i'm having a look, i was in a meeting
[09:26:56] <kali> moian: agreed, it's weird. any chance there are other code paths that may have inserted data there?
[09:27:02] <kali> moian: is the collection sharded ?
[09:31:00] <moian> kali: no problem! yes the collection is sharded, and the duplicates were inserted just after I added a new shard to the cluster, don't know if it can be linked. This is the only code used to insert & update, no chance that it comes from another place...
[09:38:29] <kali> moian: what are the shard keys ? i think this kind of weird phenomenon can happen during re-balancing, but i'm not 100% sure about it
[09:40:04] <moian> kali: the shard key is `{ slt: 1 }` where slt is randomly generated from a string (mid here)
[10:14:01] <Maro_> Hi, does anyone know how to get a replica set going again after all the members have been stopped on Amazon AWS? I get "errmsg" : "loading local.system.replset config (LOADINGCONFIG)" on rs.status()... can't seem to fix :/
[10:33:32] <mjburgess> hello, I'm getting linking errors when building against the mongo C driver, specifically undefined reference to mongo_client, etc.; it is linking against /usr/local/lib/libmongoc.so and it's obviously finding the header include. any thoughts?
[10:33:57] <mjburgess> this is ubuntu 12.04 and I had to install with make since it's on a VM without internet access (I sftp'd a tar of the git repo)
[10:43:45] <daslicht> Error: Cannot use a writeConcern without a provided callback
[10:43:58] <daslicht> interestingly, I had used that function successfully a while back
[10:57:52] <mjburgess> i don't know what changed, but if that question comes up again, recommend "make clean" / "make" / "sudo make install" & chmod u+r on /usr/local/lib
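mjburgess's fix, written out as the sequence of commands implied above (a sketch; the source directory name is an assumption, and this needs to be run from the driver's extracted source tree):

```shell
# Rebuild and reinstall the C driver from a clean tree, then make sure
# the installed library is readable by non-root users.
cd mongo-c-driver        # hypothetical path to the extracted repo
make clean
make
sudo make install
sudo chmod -R u+r /usr/local/lib
sudo ldconfig            # refresh the linker cache after installing a new .so
```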
[11:13:25] <daslicht> is it possible to use the $exists operator when inserting? or do i need 2 queries?
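Query operators like $exists apply to the query part of an update, not to a plain insert, so daslicht's case is usually handled with a single upsert rather than two queries. A sketch in the mongo shell, with a hypothetical `users` collection and field names (requires a running mongod):

```javascript
// The query side may use $exists; $setOnInsert only fires when the
// upsert actually inserts a new document (MongoDB 2.4+).
db.users.update(
    { email: "a@example.com" },
    { $setOnInsert: { createdAt: new Date() },
      $set: { lastSeen: new Date() } },
    { upsert: true }
)
```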
[12:34:47] <tairov> I tried the cloneDatabase command, but it seems it only copied part of the data. How do I copy all data from an instance?
[12:36:24] <tairov> there is a database db1 on instance1. I opened a mongo shell connected to instance2, and then ran: use mydb; db.cloneDatabase("instance1");
[12:37:25] <tairov> all collections were copied, but not fully. db.collection.count() differs between the two instances
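One way to see exactly which collections came over incomplete is to print per-collection counts on each instance and compare. A sketch in the mongo shell (the database name `db1` is from the conversation above; run against each instance in turn):

```javascript
// Print a count for every collection in db1 so the two instances
// can be diffed side by side.
db.getSiblingDB("db1").getCollectionNames().forEach(function (name) {
    print(name + ": " + db.getSiblingDB("db1")[name].count());
})
```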
[13:34:32] <dandre> I have a document structure like {foo:bar,baz:{x:0,y:1}}. How can I replace all of baz without affecting the other parts of the document?
[13:35:45] <dandre> I have tried update with {$set:{baz:{...}}} but it doesn't update baz once set
[13:36:04] <moian> dandre: you can use an update query with { $set: { baz: {...} } }
[13:36:52] <bplaa-yai> dandre: did you pass the multi option to the query ?
[13:37:04] <dandre> ok I'll try again but it doesn't seem to update correctly
[13:38:49] <moogway> there is a collection of emails, each document has sender-id and receiver-id, now I want to find all the messages sent or received by a user. Does that mean I have to check on both sender-id and receiver-id or is there some way to make a compound index?
[13:39:09] <bplaa-yai> dandre : you have to set the multiple (or multi, can't remember exactly) option on the query
[13:39:23] <gizmo_x> Hi everyone, how can I select in mongodb using a php query like this: find all rows where age = 15 or age = 23 or ... i.e. the same field matched against several specific values?
[13:40:54] <dandre> oops sorry, in fact the query works
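Putting moian's and bplaa-yai's answers together for dandre's document shape, a sketch in the mongo shell (the match condition and new values are illustrative; requires a running mongod):

```javascript
// $set on the embedded field replaces the whole `baz` subdocument
// while leaving sibling fields like `foo` untouched.
db.coll.update(
    { foo: "bar" },                      // hypothetical match
    { $set: { baz: { x: 5, y: 7 } } },   // replaces baz entirely
    { multi: true }                      // apply to every matching document
)
```

This also answers gizmo_x's question in passing: matching one field against several values is `{ age: { $in: [15, 23] } }`, which the PHP driver expresses as the equivalent nested array.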
[13:43:53] <moogway> Nodex: Could you please help with this?
[13:47:27] <bplaa-yai> Anyone using c++ driver with replica set ? I'm wondering if there is any way to specify read preference (2.4.4 client)
[13:47:51] <bplaa-yai> preferably at connection level, but query level would be okay if it cannot be done at connection level
[13:47:59] <Siyfion> I'm saving messages in a messageQueue implementation in mongoDB, and I'm thinking that I need a reference field so that two-way communication can occur whilst still knowing what messages they are responding to...
[13:48:36] <Siyfion> Is it just a simple case of use another ObjectId ?
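Yes, a plain ObjectId reference is the usual pattern for this. A sketch in the mongo shell, with a hypothetical `messages` collection and an `inReplyTo` field name of my own invention (requires a running mongod):

```javascript
// Thread replies by storing the _id of the message being answered.
var original = db.messages.findOne({ /* match the message to reply to */ });
db.messages.insert({
    body: "reply text",
    inReplyTo: original._id,   // plain ObjectId reference back to the original
    createdAt: new Date()
});
// keep thread lookups indexed
db.messages.ensureIndex({ inReplyTo: 1 });
```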
[13:48:58] <Nodex> moogway, you need a compound index and an $or... or (I would) have structured my data to have from / to in an array and just index the array
[13:56:10] <moogway> so my query would be - db.foo.find({arr: {$in: [userobjectid]}})
[13:57:57] <Nodex> [14:38:20] <moogway> there is a collection of emails, each document has sender-id and receiver-id, now I want to find all the messages sent or received by a user
[13:58:09] <Nodex> if that's how you saved them then yes
[13:59:06] <moogway> yeah but I hadn't saved the sender-id and the receiver-id in an array yet, will do that now
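The array layout Nodex suggests, written out as a sketch in the mongo shell (the field name `parties` and variable names are assumptions; requires a running mongod). Indexing an array field gives a multikey index, so one indexed query covers both directions:

```javascript
// Store both parties of an email in one array field and index it.
db.emails.insert({ parties: [senderId, receiverId], body: "hello" });
db.emails.ensureIndex({ parties: 1 });

// One indexed query finds everything sent OR received by a user:
db.emails.find({ parties: userObjectId });
// equivalently: db.emails.find({ parties: { $in: [userObjectId] } })
```

With the original two-field layout, the same result needs `{ $or: [{ senderId: userObjectId }, { receiverId: userObjectId }] }` plus an index on each field.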
[14:13:19] <Skunkwaffle> Does anyone know if it's possible to group documents with a datetime field by date only (i.e. ignoring the time) in aggregation?
[14:16:36] <bplaa-yai> Skunkwaffle : it's possible by creating a document in the $project stage, using $hour, $dayOfMonth, etc. to extract components of the date object, then grouping by the created document
[14:16:57] <bplaa-yai> not sure if there's a better way, but that's how I do it
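bplaa-yai's approach, sketched in the mongo shell (the collection name `events` and field `createdAt` are assumptions; requires a running mongod):

```javascript
// $project the date down to its calendar parts, then $group on them,
// which effectively truncates the time portion.
db.events.aggregate([
    { $project: {
        y: { $year: "$createdAt" },
        m: { $month: "$createdAt" },
        d: { $dayOfMonth: "$createdAt" }
    } },
    { $group: {
        _id: { year: "$y", month: "$m", day: "$d" },
        count: { $sum: 1 }
    } }
])
```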
[14:22:13] <cr3> hi folks, db.stats(1024*1024) is showing 11525 objects with an avgObjSize of 454. does that mean I have 454*11525 == 5232350Mb == 5Tb of data?
[14:25:09] <cr3> aha, nevermind! The documentation contradicts itself: first it says that the result is "in bytes", second it says that it's "dataSize divided by number of documents."
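For the record, cr3's arithmetic resolves to megabytes, not terabytes, once avgObjSize is read as bytes. A quick sanity check:

```javascript
// If avgObjSize is in bytes, 11525 objects averaging 454 bytes
// is about 5 MB of data, not 5 TB.
const objects = 11525;
const avgObjSizeBytes = 454;
const dataSizeBytes = objects * avgObjSizeBytes;
const dataSizeMB = dataSizeBytes / (1024 * 1024);
console.log(dataSizeBytes, dataSizeMB.toFixed(2)); // 5232350 bytes, ~4.99 MB
```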
[15:33:20] <Zardosht> is there a way to use the C# driver to get persistent connection, such that consecutive operations are guaranteed to happen on the same connection and not different connections from a pool?
[15:54:46] <gyre007> is there any danger of locking Mongo when dropping a LARGE collection?
[15:55:22] <gyre007> I'm about to drop 20G collection but I'm afraid it's going to lock up MongoDB
[16:36:43] <astro73|roam> is there any reason to have w=0?
[17:04:57] <starfly> astro73|roam: better performance, less assurance writes will complete
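The trade-off starfly describes, sketched in the mongo shell (assuming a 2.6+ shell that accepts a write-concern document on insert; older drivers expressed the same choice via getLastError or a w=0 connection option; collection names are illustrative):

```javascript
// w:0 returns immediately with no acknowledgement: fast, fire-and-forget,
// but a failed write is silently lost. w:1 waits for the primary to ack.
db.logs.insert({ msg: "best-effort entry" }, { writeConcern: { w: 0 } });
db.orders.insert({ total: 42 },             { writeConcern: { w: 1 } });
```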
[18:06:28] <grouch-dev> Hi all. Is it normal that during an initial sync the secondaries appear as unreachable, and the primary drops to secondary?
[18:07:46] <grouch-dev> we have 5 servers: 1 primary, 3 secondaries, 1 arbiter. I did a restore on the primary, then added the arbiter, then the secondaries. Right now there is no primary for some reason