PMXBOT Log file Viewer


#mongodb logs for Friday the 28th of June, 2013

[01:37:27] <azbyin> hi all.. what is the recommended way to store images?
[01:38:10] <azbyin> I'm not sure if I want to use gridfs as the images will be small, usually < 1MB, but guaranteed to be less than the BSON limit of 16MB
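[Editor's note: azbyin's question never gets an answer in this log. For what it's worth, documents below the 16MB BSON limit can hold binary data inline, so small images don't strictly need GridFS; GridFS mainly pays off for larger files or streamed/partial reads. A node-style sketch of an inline-image document (collection and field names are hypothetical):]

```javascript
// Store a small image inline as binary data in a regular document,
// rather than chunking it through GridFS. The node driver serializes
// a Buffer to BSON binary data automatically.
const imageBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // e.g. a PNG header
const doc = {
  filename: "avatar.png",
  contentType: "image/png",
  data: imageBytes,        // inline binary; fine while well under 16MB
  size: imageBytes.length,
};
// With a live connection this would just be:
//   db.collection("images").insert(doc, callback)
```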
[06:43:35] <x1337807x> Anybody familiar with Mongoid?
[06:45:57] <x1337807x> I'm using mongoid in a rails-api app and trying to store JSON in mongodb in a pretty basic case I think. I have one model called Result and if I call Result.mongoize I see something different than what is eventually stored.
[06:47:08] <x1337807x> I'm calling Result.create(key1: {nested_key1: 'a'}, key2: {nested_key2: 'b'})
[06:47:18] <x1337807x> I end up losing the second hash.
[06:47:36] <x1337807x> The JSON is a bit larger but certainly isn't hitting any kind of disk size limit
[06:48:10] <x1337807x> What sort of things should I check?
[06:49:28] <x1337807x> Turns out metadata is a reserved word, fail :(
[07:35:24] <[AD]Turbo> hola
[07:48:51] <kali> bonjour !
[08:36:11] <Nodex> g'day
[08:44:36] <moian> hello everybody!
[08:47:59] <moian> I've a collection with 80M+ objects, and I just found out that there are about 4k objects that are duplicated (between 2 and 9000 times each). Anyway, the only operation I use on this collection is an upsert, so I don't understand how it is possible to have duplicates
[08:48:22] <moian> I'm using nodejs driver
[08:50:46] <moian> All objects are similar, only _id are different
[08:55:15] <moian> Anybody ?
[08:56:17] <kali> well, upsert decides on _ids
[08:56:30] <kali> so if you have different _ids, upsert will insert
[08:57:27] <moian> I don't use _id in my queries
[08:58:28] <kali> moian: you're confused about what upsert does. http://docs.mongodb.org/manual/core/update/#update-operations-with-the-upsert-flag
[08:58:48] <kali> mmmm or maybe not
[08:59:12] <kali> can you show us some examples ? upsert code and dup data ?
[08:59:35] <moian> yes sure
[09:05:56] <moian> kali: https://gist.github.com/n1t0/5883465
[09:19:39] <moian> kali: weird right ?
[09:25:05] <kali> moian: i'm having a look, i was in a meeting
[09:26:56] <kali> moian: agreed, it's weird. any chance there are other code paths that may have inserted data there ?
[09:27:02] <kali> moian: is the collection sharded ?
[09:31:00] <moian> kali: no problem! yes the collection is sharded, and the duplicates have been inserted just after I add a new shard to the cluster, don't know if it can be linked. This is the only code used to insert & update, no chance that it comes from another place...
[09:38:29] <kali> moian: what are the shard keys ? i think this kind of weird phenomenon can happen during re-balancing, but i'm not 100% sure about it
[09:40:04] <moian> kali: the shard key is `{ slt: 1 }` where slt is randomly generated from a string (mid here)
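[Editor's note: the behaviour kali describes, upsert matching on the query document rather than on _id, can be sketched without a driver. `upsert` below is a hypothetical in-memory model of the server's logic, not the node API. On a single mongod this is why moian should not see duplicates; on a sharded cluster, as kali suggests, documents visible twice during or after a chunk migration can look like duplicates:]

```javascript
// Toy model of upsert semantics: match on the query document;
// update the first match, or insert a new document if nothing matches.
function upsert(collection, query, setFields) {
  const match = collection.find(doc =>
    Object.keys(query).every(k => doc[k] === query[k]));
  if (match) {
    Object.assign(match, setFields);             // update in place
  } else {
    collection.push({ ...query, ...setFields }); // insert query + update fields
  }
}

const coll = [];
upsert(coll, { mid: "abc" }, { count: 1 });
upsert(coll, { mid: "abc" }, { count: 2 }); // matches: updates, no duplicate
// coll.length === 1, coll[0].count === 2
```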
[10:14:01] <Maro_> Hi, does anyone know how to get a replica set going again after all the members have been stopped on Amazon AWS? I get "errmsg" : "loading local.system.replset config (LOADINGCONFIG)" on rs.status()...can't seem to fix :/
[10:33:32] <mjburgess> hello, im getting linking errors when building against the mongo c driver, specifically undefined reference to mongo_client, etc.; it is linking against /usr/local/lib/libmongoc.so and it's obv. finding the header include. any thoughts?
[10:33:57] <mjburgess> this is ubuntu 12.04 and i had to install with make since its on a vm without internet access (I sftp'd a tar of the git repo)
[10:43:37] <daslicht> is this not valid?:
[10:43:39] <daslicht> db.collection('user', function(err, collection)
[10:43:41] <daslicht> {
[10:43:43] <daslicht> collection.ensureIndex({email:1},{unique:true});
[10:43:45] <daslicht> Error: Cannot use a writeConcern without a provided callback
[10:43:58] <daslicht> interesting is that i had used that function a while back successfully
[10:57:52] <mjburgess> i dont know what changed, but if that Q comes up again, recommend "make clean"/"make"/sudo make install & chmod u+r on /usr/local/lib
[11:13:25] <daslicht> is it possible to use the $exists operator when inserting? or do i need 2 queries?
[11:30:02] <moian> kali: any idea ?
[11:35:16] <daslicht> i have managed to create a unique index which works using nodejs as the following:
[11:35:31] <daslicht> https://gist.github.com/daslicht/3be28d9cbf9c3d4f7068
[11:35:38] <daslicht> the created index looks like this:
[11:35:50] <daslicht> https://gist.github.com/daslicht/c90d020125fd35d86514
[11:36:04] <tairov> hi all
[11:36:35] <daslicht> is it correct that 2 indexes are created one named email and the other emial_1 ?
[11:36:41] <daslicht> email_1
[11:39:44] <moian> daslicht: from what I see, you only have an index on _id an one on email, which seems right
[11:40:41] <daslicht> ahhh i see
[11:40:50] <daslicht> its just one object
[11:41:15] <daslicht> now after looking at my own gist its more clear
[11:41:25] <daslicht> in terminal it looked not correct
[11:41:26] <daslicht> cool
[11:41:28] <daslicht> thank you
[11:51:31] <moian> daslicht: you're welcome!
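[Editor's note: the `email_1` daslicht asks about is MongoDB's default index name, each key field joined with its sort direction by underscores, so `{email: 1}` becomes `email_1`. The earlier "Cannot use a writeConcern without a provided callback" error is the node driver insisting on a callback for write operations like ensureIndex. A sketch of the naming convention; `defaultIndexName` is a hypothetical helper, not a driver function:]

```javascript
// Reproduce MongoDB's default index naming: field/direction pairs
// joined by underscores.
function defaultIndexName(keyPattern) {
  return Object.entries(keyPattern)
    .map(([field, dir]) => `${field}_${dir}`)
    .join("_");
}
// defaultIndexName({ email: 1 })    -> "email_1"
// defaultIndexName({ a: 1, b: -1 }) -> "a_1_b_-1"
```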
[12:01:18] <daslicht> thank you !
[12:34:47] <tairov> I tried the cloneDatabase command, but it seems it only copied part of the data. How do I copy all data from an instance?
[12:36:24] <tairov> there is a database db1 on the instance1. I opened mongoshell with connection to instance2. And then enter command use mydb; db.cloneDatabase("instance1");
[12:37:25] <tairov> all collections were copied, but not fully: db.collection.count() differs between the two
[12:44:52] <tairov> Guys?
[13:33:01] <dandre> hello,
[13:34:32] <dandre> I have a document structure like {foo:bar,baz:{x:0,y:1}}. How can I replace all of baz without affecting the other parts of the document ?
[13:35:45] <dandre> I have tried update with {$set:{baz:{...}} but it doesn't update baz once set
[13:36:04] <moian> dandre: you can use an update query with { $set: { baz: {...} } }
[13:36:52] <bplaa-yai> dandre: did you pass the multi option to the query ?
[13:37:04] <dandre> ok I'll try again but it doesn't seem to update correctly
[13:37:07] <dandre> no
[13:37:09] <moogway> hi all, have a pretty basic query about indexing
[13:37:46] <Nodex> best to ask
[13:38:00] <Nodex> asking to ask never gets a good response
[13:38:00] <dandre> I have only one document to update
[13:38:17] <Nodex> dandre .. look at $set
[13:38:49] <moogway> there is a collection of emails, each document has sender-id and receiver-id, now I want to find all the messages sent or received by a user. Does that mean I have to check on both sender-id and receiver-id or is there some way to make a compound index?
[13:39:09] <bplaa-yai> dandre : have you set the multiple (or multi, can't remember exactly) option on the query?
[13:39:23] <gizmo_x> Hi everyone, how can i select in mongodb using a php query like this: find all rows where age = 15 or age = 23 or ... i.e. several specific values for the same field?
[13:40:54] <dandre> oups sorry, in fact the query works
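[Editor's note: as moian says, `$set` on a top-level field swaps in the whole embedded document and leaves sibling fields alone. A plain-object sketch of that semantics; `applySet` is a hypothetical shallow model of the server's behaviour for top-level keys, not the driver:]

```javascript
// Model $set for top-level keys: each key in the spec replaces that
// field wholesale; everything else in the document is untouched.
function applySet(doc, setSpec) {
  return { ...doc, ...setSpec };
}

const before = { foo: "bar", baz: { x: 0, y: 1 } };
const after = applySet(before, { baz: { x: 5, z: 9 } });
// after.foo is still "bar"; after.baz is entirely replaced: { x: 5, z: 9 }
```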
[13:41:08] <moogway> any help?
[13:43:53] <moogway> Nodex: Could you please help with this?
[13:47:27] <bplaa-yai> Anyone using c++ driver with replica set ? I'm wondering if there is any way to specify read preference (2.4.4 client)
[13:47:51] <bplaa-yai> preferably at connection level, but query level would be okay if it cannot be done at connection level
[13:47:59] <Siyfion> I'm saving messages in a messageQueue implementation in mongoDB, and I'm thinking that I need a reference field so that two way communication can occur whilst still knowing what messages they are responding to...
[13:48:36] <Siyfion> Is it just a simple case of use another ObjectId ?
[13:48:58] <Nodex> moogway, you need a compound index and an $or... or (I would) have structured my data to have from / to in an array and just index the array
[13:49:20] <Nodex> gizmo_x : you also need an $or
[13:50:07] <gizmo_x> Nodex: can you show me full query?
[13:50:24] <Nodex> yes but what would you have learnt?
[13:50:34] <Nodex> !google MongoDB $or
[13:50:35] <pmxbot> http://docs.mongodb.org/manual/reference/operator/or - $or — MongoDB Manual 2.4.4
[13:50:43] <gizmo_x> Nodex: i need example to start :)
[13:50:51] <Nodex> there is an example ^
[13:50:52] <gizmo_x> ty
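[Editor's note: the filter document Nodex is pointing gizmo_x at looks like this in shell syntax; when the alternatives are all on the same field, `$in` is the tidier equivalent. The PHP driver builds the same structure out of nested arrays. Collection name hypothetical:]

```javascript
// Shell-style filter documents for "age = 15 or age = 23".
const withOr = { $or: [ { age: 15 }, { age: 23 } ] };
const withIn = { age: { $in: [15, 23] } }; // equivalent, simpler for one field
// db.people.find(withIn)
```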
[13:51:24] <moogway> Nodex: thanks mate, will the array thing work when both the fields are object ids from another collection?
[13:52:00] <moogway> Nodex: I don't see why it shouldn't but I'm new and I can't take anything for granted
[13:53:26] <Nodex> it's a simple comparison so yes
[13:53:48] <moogway> Nodex: thanks again
[13:53:53] <Nodex> db.foo.find({arr: {$in: [123,456]}}); <---- get all docs where arr = 123 or 456
[13:54:36] <moogway> Nodex: wow, thanks
[13:56:10] <moogway> so my query would be - db.foo.find({arr: {$in: [userobjectid]}})
[13:57:57] <Nodex> [14:38:20] <moogway> there is a collection of emails, each document has sender-id and receiver-id, now I want to find all the messages sent or received by a user
[13:58:09] <Nodex> if that's how you saved them then yes
[13:59:06] <moogway> yeah but I hadn't saved the sender-id and the receiver-id in an array yet, will do that now
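[Editor's note: a sketch of the schema Nodex suggests: put both parties in one array field and index that field; an equality filter against an array matches any element, so one indexed query covers "sent or received". In-memory model with hypothetical field names, not a live query:]

```javascript
// Messages stored with both participants in one indexed array.
const messages = [
  { _id: 1, participants: ["alice", "bob"],  body: "hi" },
  { _id: 2, participants: ["bob", "carol"],  body: "yo" },
];

// Equivalent of db.messages.find({participants: "alice"}):
// equality against an array field matches if any element equals the value.
const matches = messages.filter(m => m.participants.includes("alice"));
// matches contains only message _id 1
```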
[14:13:19] <Skunkwaffle> Does anyone know if it's possible to group documents with a dataetime field by date only (ie. ignoring the time) in aggregation?
[14:16:36] <bplaa-yai> Skunkwaffle : it's possible by creating a document in the $project stage, using $hour, $dayOfMonth, etc to extract components of the date object, then grouping by the created document
[14:16:57] <bplaa-yai> not sure if there's a better way, but that's how I do it
[14:17:31] <Skunkwaffle> ah great
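[Editor's note: bplaa-yai's recipe as a pipeline sketch: `$project` the date into its components, then `$group` on them. The date-part operators are `$year`, `$month`, and `$dayOfMonth`; `created` and `amount` are hypothetical field names:]

```javascript
// Group documents by calendar date, ignoring the time-of-day component.
const pipeline = [
  { $project: {
      y: { $year: "$created" },
      m: { $month: "$created" },
      d: { $dayOfMonth: "$created" },
      amount: 1,
  } },
  { $group: {
      _id: { y: "$y", m: "$m", d: "$d" },
      total: { $sum: "$amount" },
      count: { $sum: 1 },
  } },
];
// db.events.aggregate(pipeline)
```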
[14:22:13] <cr3> hi folks, db.stats(1024*1024) is showing 11525 objects with an avgObjSize of 454. does that mean I have 454*11525 == 5232350Mb == 5Tb of data?
[14:25:09] <cr3> aha, nevermind! The documentation contradicts itself: first it says that the result is "in bytes", second it says that it's "dataSize divided by number of documents."
[14:32:20] <Nodex> 5232350 = bytes not MB
[14:32:38] <Nodex> 4.9mb ;)
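[Editor's note: Nodex's correction, with the arithmetic spelled out: avgObjSize is an average size in bytes, so multiplying it by the object count gives bytes, about 5 MB here rather than 5 TB:]

```javascript
// avgObjSize (bytes) times object count gives total data size in bytes.
const objects = 11525;
const avgObjSize = 454;                  // bytes
const totalBytes = objects * avgObjSize; // 5232350 bytes
const totalMB = totalBytes / (1024 * 1024);
// totalMB is roughly 4.99 -- Nodex's "4.9mb ;)"
```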
[15:33:20] <Zardosht> is there a way to use the C# driver to get persistent connection, such that consecutive operations are guaranteed to happen on the same connection and not different connections from a pool?
[15:54:42] <bplaa-yai> answer to myself : query(namespace, QUERY("field" << value).readPref(ReadPreference_Nearest, tags), ...)
[15:54:46] <gyre007> is there any danger of locking the Mongo when dropping LARGE collection ?
[15:55:22] <gyre007> I'm about to drop 20G collection but I'm afraid it's going to lock up MongoDB
[16:36:43] <astro73|roam> is there any reason to have w=0?
[17:04:57] <starfly> astro73|roam: better performance, less assurance writes will complete
[18:06:28] <grouch-dev> Hi all. Is it normal that during an initial sync that the secondaries appear as unreachable, and the primary drops to secondary?
[18:07:46] <grouch-dev> we have 5 servers, 1 pri, 3 secs, 1 arb. I did a restore on the primary, and then added the arb, then the secondaries. Right now there is no primary for some reason
[18:48:23] <ehershey> 2.2.5 is out! yay!
[20:08:20] <kkouddous> hello
[20:08:27] <kkouddous> what does ntoreturn -1 mean
[20:08:38] <kkouddous> 0 means a limit was not specificed
[20:08:43] <kkouddous> what's -1
[20:28:43] <asdfpenguin1> how can I use $pop to update http://pastebin.com/WtdnDXyg ...if I wanted to remove f1 : something from the array
[20:29:43] <asdfpenguin1> is it like update({'b.c.f1' : "something"},{$pop : {"f1" : 1}})
[20:30:22] <asdfpenguin1> er, I want to remove the whole object containing it
[20:34:46] <asdfpenguin1> like how do you even use $pop with objects
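[Editor's note: asdfpenguin1's question goes unanswered in the log. `$pop` only removes the first or last element of an array; removing whichever element matches a condition is what `$pull` is for. A sketch with field names loosely based on the question (the pastebin's exact structure is unknown):]

```javascript
// $pull removes every array element matching the condition, e.g.:
//   db.coll.update({}, { $pull: { "b.c": { f1: "something" } } })
const update = { $pull: { "b.c": { f1: "something" } } };

// Toy model of what the server does to the targeted array:
const arr = [ { f1: "something", f2: 1 }, { f1: "other", f2: 2 } ];
const pulled = arr.filter(el => el.f1 !== "something");
// pulled keeps only { f1: "other", f2: 2 }
```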
[21:33:30] <warz07> Can anyone help me understand how to add a new field to an array using the $update method. I have a collection
[21:33:52] <warz07> db.users {emails: [{field1, field2}]}.
[21:34:03] <warz07> i would like to add 'field3' for all my users with a default value
[21:36:48] <warz07> i tried this
[21:36:49] <warz07> > db.users.update({_id: "hyf3tkXsWjFKxQrkk"}, {$set: {"emails.$.primary": true}})
[21:37:01] <warz07> where 'primary' is the new field i want to add
[21:37:12] <warz07> And i always get a Cannot apply the positional operator without a corresponding query field containing an array.
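[Editor's note: the error warz07 hits is because the positional `$` resolves to the index of the first array element matched by the *query*, so the query itself has to reach into `emails`. A sketch of the difference; the `emails.address` field is hypothetical, and updating every element of every user's array in one statement isn't possible in 2.4's update language:]

```javascript
// Fails: the query never matches inside the emails array, so "$" has
// no position to resolve to.
const broken = [
  { _id: "hyf3tkXsWjFKxQrkk" },
  { $set: { "emails.$.primary": true } },
];

// Works: the query matches an array element, so "$" points at it.
const fixed = [
  { _id: "hyf3tkXsWjFKxQrkk", "emails.address": "a@example.com" },
  { $set: { "emails.$.primary": true } },
];
// db.users.update(fixed[0], fixed[1])
```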
[21:50:21] <winterpk> Good morning everyone
[21:50:57] <winterpk> Does anyone know how to check a gridfs cursor to count the number of results, in php.
[21:50:57] <winterpk> ?
[21:51:10] <winterpk> Is there a proper way to do this?
[21:55:06] <winterpk> nvm just figured it out. apparently the cursor has a count method even though it's not in the mongo php docs.