[01:24:23] <hpekdemir> hi all. how would I remove this user (I made a mistake with function "db.addUser"): { "_id" : ObjectId("53000b4f6401744411cd375c"), "user" : { "user" : "blabla", pwd: ....} }
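A mongo shell sketch of one way to clean this up, assuming the document shown lives in the current database's system.users collection (the _id is the one from the message above):

```
// Remove the mistakenly created user document by its _id:
db.system.users.remove({ _id: ObjectId("53000b4f6401744411cd375c") })

// The 2.4-era helper removes by username, but it matches a top-level
// "user" string field, which this nested document does not have:
// db.removeUser("blabla")
```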
[01:50:10] <granger915> so i'm new to mongodb and rockmongo...i'm totally confused...i kinda understand the json format, but i can't seem to figure out how to write to a file like you can with phpmyadmin
[02:06:22] <granger915> is anyone alive in this channel?
[03:44:08] <EldonMcGuinness> hey everyone, was trying to use upsert with a custom _id but it does not seem to take the _id I specify in the query during an insert. I tried looking around but some people say it should work while others do not. Any one here have an idea on it?
[03:53:03] <EldonMcGuinness> welp nevermind, looks like you can put it in the set section, didn't think you could
[04:14:29] <tripflex> yeah you can set _id to whatever you want
[04:14:39] <tripflex> if you don't it will create a random unique one of its own
[04:15:13] <EldonMcGuinness> yea I was reading that it would cause an error if you try to update with _id set since _id should not be changed
[04:15:39] <EldonMcGuinness> but apparently that is not the case :-) Not that I'm changing the _id though heh
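The behaviour EldonMcGuinness settled on can be sketched in the mongo shell like this (collection and field names are made up): when an upsert has to insert, exact-match conditions from the query document, including _id, are copied into the new document.

```
db.things.update(
    { _id: "my-custom-id" },         // on insert, this _id is used as-is
    { $set: { status: "active" } },  // applied on both insert and update
    { upsert: true }
)
```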
[05:29:43] <langemann> Heya, anyone around? I have a page with 8 documents, I want to click a button and retrieve new 8 documents based on the objectID of the 8th previous document. Bad/good idea? :)
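Generally a good idea: range-based paging on _id avoids the growing cost of skip offsets. A hedged mongo shell sketch (collection name and sort direction are assumptions; note ObjectId order only roughly tracks insertion time across multiple clients):

```
// First page of 8:
db.posts.find().sort({ _id: 1 }).limit(8)

// Next page: everything after the _id of the 8th document shown:
db.posts.find({ _id: { $gt: lastSeenId } }).sort({ _id: 1 }).limit(8)
```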
[11:51:27] <fl0w> So I have the following setup - pseudo-ish: https://dpaste.de/5usM The plan is that I'll be finding the whole document where I am included in _authors. Then I'd like to reduce all articles where I'm not included. The query aside, is this a good approach?
[11:52:22] <fl0w> (mongodb first-timer, trying to comprehend schema design coming from a typical relational database)
[11:53:00] <kali> well, at least, you're considering the querying part
[11:53:17] <kali> which is key in designing for mongodb and nosql
[11:54:46] <fl0w> "at least" - meaning I'm way off?
[11:55:20] <kali> nope. i am just not sure i understand what you're trying to do
[12:00:06] <fl0w> Oh, sorry. I'll try to rephrase my question. Is my example - generally speaking - a valid approach to schema design? I figure I'll need to ensure indexes on _authors: [ ObjectId ], and I will not ever be close to the 16mb limit (this is a pseudo example - the real world scenario isn't article related). I'll be juggling 1..~20 articles for each document.
[12:00:57] <kali> that part sounds fine. i was a bit more worried about the "reduce all articles where I'm not included" part
[12:04:57] <kali> but the aggregation pipeline might help
[12:05:37] <kali> map/reduce is more flexible, but slower than the pipeline, so anything that can be done with the pipeline should be done with it
[12:07:02] <fl0w> Alright, I'll take a read and a stab at testing this - and I'll figure out if my setup holds true to what I'm trying to accomplish! Many thanks kali.
[12:16:57] <fl0w> kali: The grouping part of an aggregation seems to be the reducing that I want! Aww yee. However, should I be worried that I'll be indexing a JSON array? (my use case would be equivalent to how tags are showcased in random mongodb examples)
[12:18:01] <fl0w> But it'll be an in-app join (since I need to normalise users b/c of other circumstances)
[12:20:16] <kali> indexing a field in an array of objects is just fine, so that part is fine
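For the "reduce all articles where I'm not included" part, the aggregation pipeline kali mentions could look roughly like this - a sketch only, since the collection name, field names, and document shape are guesses based on the pseudo-example:

```
var me = ObjectId("...");  // the current user's id (placeholder)
db.docs.aggregate([
    { $match: { _authors: me } },             // documents I belong to
    { $unwind: "$articles" },                 // one pipeline doc per article
    { $match: { "articles._authors": me } },  // drop articles I'm not on
    { $group: { _id: "$_id",                  // fold back per document
                articles: { $push: "$articles" } } }
])
```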
[12:22:05] <orweinberger> Let's say I have a mongod instance with a 1GB database in it. Now I create a mongos instance and add that db as a shard. Now I set up another empty mongod instance and add that one to the mongos as a shard. Will data from the first mongod be divided onto the new shard as well? or will only new data from that point on be sharded?
[12:23:04] <fl0w> Sweet. I'm not working with massive data anyway (actually, it's quite miniature) - but I'm new so I'd like to get the cogwheels working correctly (in my head)
[12:23:23] <kali> orweinberger: you need to enable sharding on the databases and on the collections you want sharded, but then the balancer will start moving things around
[12:23:54] <orweinberger> kali, OK, but will it start from that point onwards or will take also the data that is currently in the old shard and split it to the new shard?
[12:24:42] <kali> yes it will rebalance old stuff too
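For reference, the steps kali describes, as a mongo shell sketch run against mongos (the database, collection, and shard key are placeholders):

```
sh.enableSharding("mydb")                      // enable on the database
sh.shardCollection("mydb.mycoll", { _id: 1 })  // then on each collection
// Once both shards are registered, the balancer migrates existing
// chunks - including pre-existing data - onto the new shard.
```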
[13:25:41] <Batmandakh> I want to ask about a db design best practice...
[13:26:04] <Batmandakh> So, I'm developing a system that could become bigger...
[13:26:27] <Batmandakh> I'm thinking of 2 collections those are "users" and "items"
[13:26:56] <Batmandakh> especially items could cause a problem if I did it this way
[13:27:34] <Batmandakh> Items have some indexes such as id, categories and status,.. etc
[13:27:49] <Batmandakh> one user could have millions of items.
[13:28:37] <Batmandakh> so, what about this? if i get thousands of users, each with millions of items... what's its future?
[13:28:49] <Batmandakh> I'm sorry for my bad English :-D
[13:29:42] <Batmandakh> Will the items collection slow down my system?(in other words my db's work?)
[13:30:43] <Batmandakh> I think I must insert the user id into the items collection as a link...
[13:31:48] <Batmandakh> I'm trying hard to do it in the most clear and optimal way... later I could replicate and shard my infrastructure when the time comes...
[13:32:59] <Batmandakh> the system will count and analyze items very frequently...
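One common answer to the question above is exactly the linking Batmandakh describes: one document per item carrying the owner's id, with a compound index so per-user queries stay fast even at millions of items per user. A sketch with made-up field names:

```
// items: one document per item, referencing its owner
db.items.insert({
    userId: ObjectId("..."),   // _id of the owning user (placeholder)
    category: "books",
    status: "active"
})

// Compound index so "items of user X in category Y with status Z"
// never scans the whole collection:
db.items.ensureIndex({ userId: 1, category: 1, status: 1 })
```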
[14:51:39] <hexdump> hi is there a way to set index on all collections in database? in one command?
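There is no single built-in command for that, but the shell can loop over the collections itself; a sketch (the createdAt index key is a placeholder):

```
db.getCollectionNames().forEach(function (name) {
    if (name.indexOf("system.") !== 0) {       // skip system collections
        db.getCollection(name).ensureIndex({ createdAt: 1 });
    }
});
```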
[16:01:25] <bobinator60> could someone take a look at my query & explain() and tell me why indexOnly is not True?
[16:02:32] <bobinator60> i'm also curious about why the indexBounds for attributes.kind is 1:1
[20:03:08] <squeakytoy> I have a question, for maybe you guys who has used mongodb more.. seriously. Object ID is autogenerated based on a lot of factors, right? And its pure numeric, right? How big can that number be?
[20:06:07] <squeakytoy> for example: domain.com/user/507f191e810c19729de860ea/
[20:06:54] <fl0w> squeakytoy: You could, but wouldn't it be nicer to do domain.tld/user/:username instead?
[20:07:57] <squeakytoy> ive been thinking about that a LOT
[20:08:33] <squeakytoy> the problem is, correct me if you have another opinion, that it will be a race condition. someone will take domain.com/user/obama/ when they are not obama
[20:08:50] <squeakytoy> wont that just promote identity thefts?
[20:10:02] <squeakytoy> Twitter has now "verified" status, which must be a huge time consuming investment
[20:10:38] <fl0w> Can't say much in that regard - there's no context. Depends on the service you're planning on providing. However, if you get to twitter size then you can just sell it or buy others to solve that issue :)
[20:11:51] <squeakytoy> Yea, but as a service provider (twitter) i wont be able to sell
[20:12:16] <squeakytoy> or selling the service entirely? uhm. odd reaction to solve identity thefts?
[20:13:21] <fl0w> squeakytoy: I was kidding (though badly) - because I have no idea what you're trying to do. If usernames are aliases, then it doesn't really matter in my opinion (much like having an alias @ a games forum). If you're doing something more serious, then maybe you're right.
[20:14:21] <squeakytoy> but a mongodb object id is rather long. I could shorten it, but then I would.. actually not make it scalable
[20:14:33] <squeakytoy> if i came up with my own id for documents
[20:16:04] <squeakytoy> correct me if i am wrong,but the whole beauty with object id is that once generated, its pretty unique, hence you do not query the whole database to inject a new document?
[20:16:25] <squeakytoy> in other traditional databases, you need to scan through the whole database in order to make sure the id is free?
[20:18:23] <fl0w> squeakytoy: Either way you'll be going through some kind of index.
[20:25:48] <leifw> squeakytoy: most databases just use a regular autoincrement value, this doesn't work if you have sharding though, you could generate the same _id on two machines and then migrate those docs' containing chunks together and lose _id uniqueness
[20:26:12] <leifw> objectid is only 12 bytes, it's not "rather long"
[20:26:16] <squeakytoy> hence why mongodb has object id?
[20:26:50] <leifw> I believe that's why they created object id, it's partially an autoincrement value, partially determined by a unique-ish machine id and PID
[20:27:36] <squeakytoy> so, if you have multiple mongodbs, the generated ids are so unique so they are probably safe to insert it directly?
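Yes - the classic ObjectId layout makes collisions across machines practically impossible without any coordination: 4 bytes of unix timestamp, 3 bytes of machine id, 2 bytes of PID, and a 3-byte counter. A small plain-JavaScript decoder to illustrate (the function name is mine; the id is the one from the URL example earlier in the channel):

```javascript
// Decode the 24-hex-char rendering of a 12-byte ObjectId:
// 4-byte timestamp | 3-byte machine id | 2-byte pid | 3-byte counter.
function decodeObjectId(hex) {
    if (!/^[0-9a-fA-F]{24}$/.test(hex)) {
        throw new Error("not a 24-hex-char ObjectId");
    }
    return {
        timestamp: parseInt(hex.slice(0, 8), 16),   // seconds since epoch
        machine:   parseInt(hex.slice(8, 14), 16),
        pid:       parseInt(hex.slice(14, 18), 16),
        counter:   parseInt(hex.slice(18, 24), 16)
    };
}

var parts = decodeObjectId("507f191e810c19729de860ea");
console.log(new Date(parts.timestamp * 1000).toISOString());
// → 2012-10-17T20:46:22.000Z
```

Two processes can only collide if they share the same second, machine id, PID, and counter value - which is why no "scan the whole database for a free id" step is needed.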
[22:48:14] <mboman> joannac, so the following would remove the 'yara' key if it is empty? db.vxcage.update({'yara':{$size:0}}, { $unset: {'yara':""}, { multi: true }})
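Almost - the braces around { multi: true } are misplaced in the snippet above: the options document is the third argument to update, not part of the update document. A corrected mongo shell sketch:

```
db.vxcage.update(
    { yara: { $size: 0 } },    // match docs whose yara array is empty
    { $unset: { yara: "" } },  // remove the yara field entirely
    { multi: true }            // apply to every matching document
)
```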