#mongodb logs for Thursday the 3rd of October, 2013

[02:02:24] <kt> Atomic increment-- $inc -- can anyone point me to some documentation or examples where this kind of operation supports a floor (i.e., zero)?
[02:03:14] <cheeser> can't you just filter out documents that are already 0?
[02:04:07] <kt> I suppose, sure
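
A sketch of cheeser's approach (collection and field names are placeholders): put the floor into the update's filter, so a document already at zero never matches and is never decremented below it.

    // only documents still above zero match; the filter acts as the floor
    db.counters.update(
        { _id: 'abc', count: { $gt: 0 } },
        { $inc: { count: -1 } }
    )
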
[02:44:50] <Zeeraw> Hey guys, I have an issue with DataFileSync mmap drop taking forever to complete
[02:45:18] <Zeeraw> When it runs it locks reads and writes and ruins performance
[02:45:50] <Zeeraw> Anything I could do to my environment and configuration to reduce the amount of time it takes to do these dumps?
[06:00:24] <johnnode> how do we use $inc for multiple fields in the same db query? e.g: db.users.update( {_id:'ab'}, { $inc:{field1:1} , $inc:{field2:1} } ) -> I tested it, but it only applied to "field2", not "field1". Thanks for help.
[06:11:00] <Soothsayer> Hi
[06:13:43] <Soothsayer> I am using the aggregation framework. I've defined pipeline stages such that I'm now getting an array of Products as results, like [{ _id, title, price }, ...]. How do I define the next pipeline stage such that I get the count of products, the minimum price, the maximum price and also the list of product ids?
[06:14:57] <redsand_> add another group
[06:15:19] <redsand_> and define the new column names
[06:15:51] <redsand_> google and you'll find examples of multiple pipeline aggregation
[06:50:07] <Soothsayer> redsand_: you still here?
[06:50:11] <Soothsayer> redsand_: sorry, missed your message
[06:51:32] <Soothsayer> redsand_: how do I define a new column name which holds the values of all product ids?
[07:06:20] <johnnode> how do we use $inc for multiple fields in the same db query? e.g: db.users.update( {_id:'ab'}, { $inc:{field1:1} , $inc:{field2:1} } ) -> I tested but only applied for the "field2", not "field1". Thanks for help.
[07:29:51] <redsand_> Soothsayer: ie: group_avg : { "$avg" : "$group_column_name" }
[07:30:25] <kali> johnnode: db.users.update( {_id:'ab'}, { $inc:{field1:1, field2:1} } )
[07:30:46] <kali> i hope you've found the answer in the meantime, though
[07:32:20] <Soothsayer> redsand: I was looking for $addToSet found it
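
A sketch of the extra $group stage redsand_ suggests, assuming a products collection and the { _id, title, price } shape from the earlier stages:

    db.products.aggregate(
        /* ...earlier pipeline stages producing { _id, title, price }... */
        { $group: {
            _id: null,                          // one group spanning all products
            count: { $sum: 1 },
            minPrice: { $min: '$price' },
            maxPrice: { $max: '$price' },
            productIds: { $addToSet: '$_id' }   // the operator Soothsayer found
        } }
    )
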
[07:33:18] <johnnode> kali: you did a great job. thanks a lot!
[07:33:54] <johnnode> how wonderful it is, mongo!
[07:42:11] <lzakrzewski> hi
[07:54:45] <lzakrzewski> are there features in mongo to check roles?
[07:56:08] <Kim^J> lzakrzewski: ? Are you asking about permissions on the database?
[08:00:05] <Kim^J> Hm, is there a limit on how many items you can have in an array that you send to the $in operator? I see something about 400k when you use two $in operators, but I'm only using one.
[08:23:53] <gerryvdm_mbp> is there an expected behavior for doing .sort({field:0}) ?
[08:24:45] <kali> it's not specified, as far as i know
[08:33:34] <bodik> hi
[08:33:47] <bodik> is the configdb data the same on all config servers?
[08:34:12] <bodik> or rather, what happens when one config server's data is lost? can i replicate it from the other two?
[08:36:00] <retran> what's a configdb
[08:36:41] <bodik> content of --dbpath on --configsvr
[08:37:45] <bodik> well what i'm really up to is, i have a private cloud with mongodb deployed, and i've created a script for starting up a replicated sharded cluster
[08:38:01] <bodik> https://gist.github.com/bodik/6801018
[08:38:14] <bodik> https://gist.github.com/bodik/6801007
[08:38:38] <bodik> so far so good, but this creates only a single config server; when it's lost i'm screwed
[08:39:07] <bodik> i can run three of them, no problem, but what happens when one of them is lost forever
[08:46:18] <quattr8> why can't I use $hint with findOne?
[08:48:26] <kali> quattr8: findOne() is just a convenience helper. it's the same as find().limit(1).pretty()[0]
[08:49:13] <quattr8> I see i'll give that a try thanks
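
A sketch of kali's workaround (collection, query, and index names assumed): findOne() takes no hint, but the equivalent find() does.

    db.coll.find({ status: 'active' }).hint({ status: 1 }).limit(1)
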
[08:49:26] <retran> what does pretty() mean
[08:49:51] <retran> oh that's why findOne displays with tabs/newlines
[08:49:57] <retran> in a shell env
[08:50:32] <retran> i like how mongo is straightforward about such things
[09:14:44] <bodik> i found it, configs are the same so i can recover anytime when i have at least one copy
[09:16:27] <mark____> How do I get write permission in mongodb? as of now i am only able to use read permission
[09:17:31] <joannac> Change your user's permissions?
[09:19:48] <mark____> @joannac: how do i?
[09:19:59] <mark____> can you explain plz
[09:22:49] <joannac> mark____: What user are you authed as?
[09:23:21] <mark____> @joannac: what do you mean?
[09:23:56] <Rhaven> Hello everyone, i've got some trouble with migration between shards. After a few google searches, it seems like some corrupted files caused this error. http://pastebin.com/ZPeGeMm9
[09:24:02] <joannac> mark____: is that a user on the admin database, or your own database?
[09:24:27] <mark____> i have my own
[09:24:41] <mark____> but i want to add a user in admin
[09:24:41] <Rhaven> Someone have an idea about how i can solve it?
[09:24:45] <mark____> also on mine
[09:26:02] <joannac> mark____: give your user appropriate roles. Add an appropriate admin user
[09:26:05] <joannac> mark____: http://docs.mongodb.org/manual/reference/user-privileges/
[09:26:19] <joannac> mark____: http://docs.mongodb.org/manual/tutorial/add-user-administrator/
[09:26:35] <quattr8> Is anyone here using MMS? How long does it take for a new host to get data in?
[09:27:17] <joannac> Rhaven: Was that connection the one doing the migration?
[09:27:22] <mark____> @joannac: and in my database
[09:27:23] <joannac> quattr8: up to 15 mins
[09:29:02] <joannac> mark____: yes, get an account with the right privileges to either 1. create a new user with write privileges, or 2. give an existing user write privs
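
A sketch of what joannac's links describe, using the 2.4-era addUser() helper; user names, passwords, and the database name are placeholders:

    use admin
    db.addUser({ user: 'siteAdmin', pwd: '...', roles: ['userAdminAnyDatabase'] })
    use mydb
    db.addUser({ user: 'mark', pwd: '...', roles: ['readWrite'] })
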
[09:30:58] <Rhaven> joannac: This migration is the result of the mongo balancer process
[09:32:58] <Rhaven> joannac: But i think that there is something wrong in local.oplog.rs and i don't know how to solve it
[09:37:08] <roflmaus> Does Mongo by default allow anyone to store and retrieve data, even over a network?
[09:37:47] <Rhaven> roflmaus: Yes
[09:37:48] <mark____> @joannac: can you please help me by typing the commands, because i don't understand the links you provided to me
[09:38:24] <roflmaus> Rhaven, and it is very insecure, right? Requires some configuration to deal with it?
[09:41:38] <Rhaven> roflmaus: sure it is. In my case i use some firewall rules to allow only trusted sources
[09:43:03] <Rhaven> roflmaus: But you can also enable user authentication on mongo
[09:53:55] <joannac> Rhaven: do you have a replset? why do you think it's in the oplog?
[09:56:50] <quattr8> joannac: no data is coming in at mms.mongodb.com, it just says "no data" and there's nothing in the logs or any errors
[09:57:05] <quattr8> it just connects to the mongos right? or do i have to install the monitoring agent?
[09:59:10] <joannac> quattr8: you need to install the monitoring agent.
[10:00:32] <Rhaven> joannac: Yes, i have 5 replica sets in a sharded environment. And i think this is about the oplog because it says "problem detected during query over local.oplog.rs : { $err: "BSONElement: bad type 49", code: 10320 }"
[10:00:49] <roflmaus> Rhaven, thank you.
[10:01:38] <joannac> Rhaven: aha.
[10:02:08] <Rhaven> joannac: ?
[10:02:28] <Rhaven> roflmaus: Np
[10:07:31] <Rhaven> joannac: And that's the rs.status() output. http://pastebin.com/m0R6ppN3
[10:12:08] <joannac> Rhaven: repair() and resync ?
[10:13:06] <joannac> Not much else I can recommend without more context
[10:18:59] <Rhaven1> joannac: Should i run db.repair() on the primary or secondary member?
[10:26:12] <joannac> Rhaven: primary.
[10:26:16] <joannac> Rhaven: also, what version?
[10:28:24] <bmcgee> Hey guys, I need a little help figuring out a query. I have a doc that has a timestamp and a value x. I want to grab the last N docs where the total for x in the document set is no more than some parameter f. I want the full document. Is aggregation the way to go?
[10:29:58] <retran> sort, limit, find $gt
[10:30:02] <joannac> Why can't you do that with a find?
[10:30:30] <bmcgee> can find aggregate the x field and limit it when the sum is > f?
[10:31:06] <bmcgee> as i'm going back in time i need to aggregate the x field
[10:31:11] <joannac> bmcgee: what are you aggregating on?
[10:31:27] <bmcgee> joannac: let me phrase this a bit better, i'll throw together a gist
[10:31:36] <joannac> Yes :)
[10:32:41] <Rhaven> joannac: 2.2.3
[10:34:22] <joannac> Rhaven: --objcheck might help you in future
[10:34:25] <Tiller> Hello!
[10:35:28] <bmcgee> joannac: http://pastebin.com/YuFmQ3GT
[10:36:04] <Tiller> Guys, is there a way to do something like .find({sign: {$in: [1, 2, 3]}, chainId: <?>}) to retrieve 3 rows having sign equal to 1, 2 and 3 AND the same chainId?
[10:40:33] <joannac> bmcgee: that doesn't help. are you adding all values of the field "foo" together over all docs? How do you decide which docs need to be added together?
[10:41:11] <Rhaven> joannac: Just to be sure, we are talking about the db.repairDatabase() command? Will it block all operations across all shards or just on the replica set? Should i turn off the production website?
[10:41:18] <bmcgee> joannac: start from now, go backwards in time, add foo as you go, if the sum of foo > x then stop
[10:41:34] <bmcgee> joannac: that defines the document set
[10:45:58] <joannac> bmcgee: oh. I don't know if that's doable inside the shell.
[10:46:46] <bmcgee> The alternative is I just create a cursor and stripe through on the server side myself
[10:46:57] <bmcgee> was curious if it could be offloaded
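
A sketch of the cursor approach bmcgee describes (collection and timestamp field names assumed; foo and f come from his example):

    var docs = [], total = 0, f = 100;                   // f: the budget parameter
    var cur = db.events.find().sort({ timestamp: -1 });  // newest first
    while (cur.hasNext()) {
        var doc = cur.next();
        if (total + doc.foo > f) break;                  // stop once the sum would exceed f
        total += doc.foo;
        docs.push(doc);
    }
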
[10:47:54] <mark____> how do i globally give write permission in mongodb
[10:54:38] <bartzy> Hi
[10:54:47] <bartzy> Do field names get saved in an index ?
[10:55:28] <bartzy> i.e. if I have 100 million records in a single-field index. The field is called "status"; "status" as ASCII is 6 bytes. Will the index now have 6 * 100M bytes, only for the name? :|
[10:55:44] <bartzy> Or are the savings from short field names ONLY for the data files themselves, meaning not the index part?
[11:03:31] <joannac> bartzy: http://pastebin.com/UEYd6Gu7
[11:04:02] <bartzy> joannac: So no. Thanks!
[11:04:20] <bartzy> So basically short field names save space only in the data files themselves - so it really isn't that important?
[11:04:28] <bartzy> I have a ~200-300 million docs collection
[11:04:35] <bartzy> and I can shave around 20 bytes of field names
[11:04:55] <bartzy> That's ~5.5GB in the data files.
[11:05:18] <bartzy> That means I'll have less "real" data to have in RAM, right ?
[11:22:53] <kali> grmbl github
[11:38:37] <remonvv> kali: DDoS-ing a public source repository. One wonders what the motivation can be there.
[11:40:04] <gerryvdm_mbp> reminding you of spof in your workflow :)
[11:50:08] <kali> remonvv: yeah. i would not do it.
[11:50:30] <remonvv> kali: Amen. The spof argument is a decent one though ;)
[11:50:57] <kali> remonvv: it did hit us, we had to run hotfix deployments manually this morning
[11:51:42] <kali> i'm not sure antagonizing github users is a very wise move
[11:52:21] <remonvv> To be honest we're not even sure it's a DDoS. DDoS is an awfully easy way to get out of the blame game when a big service goes down.
[11:57:19] <Soothsayer> In the aggregation framework, if I apply a $sort before a $group in the pipeline, the sort doesn't seem to work.. is there a priority in which they are executed?
[11:58:18] <kali> Soothsayer: they are executed in the order you write the pipeline
[11:59:02] <kali> Soothsayer: well, that's not technically exact, but with the same semantics
[12:03:27] <Soothsayer> kali: how does $group deal with a sorting that took place in the previous stage of the pipeline?
[12:03:42] <Soothsayer> "Important The output of $group is not ordered."
[12:03:44] <Soothsayer> hmm
[12:06:50] <kali> Soothsayer: i think group will maintain the order of its input among the arrays of subdocuments it outputs, but it will probably break the order of the top-level documents
[12:08:14] <Soothsayer> kali: In my previous pipeline stage, I have a list of Product documents with id, title, finalPrice. In the next stage, I'm sorting the data by 'price'
[12:08:52] <Soothsayer> and then I am doing a $group where I am using $addToSet to add all the Product ids into a field of the final result.
[12:08:57] <Soothsayer> The order of ids in this field is not the same as the result of the sort.
[12:09:36] <kali> Soothsayer: can you gist two or three documents and your current pipeline so that we can see it and try to tinker with it ?
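
Worth noting here: $addToSet builds a set with no defined order, while $push appends documents in the order they reach the $group stage, so after a $sort the ids keep the sorted order. A sketch with Soothsayer's field names:

    db.products.aggregate(
        { $sort: { finalPrice: 1 } },                              // cheapest first
        { $group: { _id: null, productIds: { $push: '$_id' } } }   // ids stay sorted
    )
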
[12:26:33] <Soothsayer> kali: can I get a result from the middle of an aggregation pipeline into my application?
[12:26:37] <Soothsayer> before it goes to the next step
[12:30:48] <kali> Soothsayer: nope, you have to discard the steps you don't want and hope you won't break the 16MB limit
[12:31:19] <Soothsayer> kali: so i have to run the aggregation multiple times then?
[12:32:17] <kali> why ?
[12:49:38] <themapplz821> hi there
[12:49:54] <themapplz821> anybody here?
[12:51:15] <Soothsayer> kali: if I have a Pipeline like A | B | C | D and I want the result of B and D in my application. Can I achieve that?
[13:01:59] <kali> ha, no, indeed, you need to run 'a|b' and then 'a|b|c|d' separately
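
In other words (a sketch, with placeholder stage variables):

    var ab   = db.coll.aggregate(stageA, stageB);                  // result of B
    var abcd = db.coll.aggregate(stageA, stageB, stageC, stageD);  // result of D
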
[13:19:54] <themapplz> Hi everyone from Copenhagen. I'm looking for some help with (map/)reducing a query. Maybe someone can help?
[13:19:54] <themapplz> The first of a few questions:
[13:19:54] <themapplz> I have a fairly large mongo database (~400M docs) in the following form:
[13:19:55] <themapplz> { "_id" : ObjectId("7472922e49a664e3ac2568f7"), "homeId" : 32168, "sensor" : "z1t", "date" : ISODate("2013-08-18T22:17:09Z"), "val" : 2137 }
[13:19:57] <themapplz> There are approx. 200 distinct homeIds and some 15 distinct sensors.
[13:19:59] <themapplz> Running a normal find() query with a certain condition (one sensor, one homeId, a date-range ) returns, say, 50000 values.
[13:20:01] <themapplz> But doing the same condition through map/reduce
[13:20:03] <themapplz> function(key, values) {return {count:values.length};}
[13:20:05] <themapplz> and it returns less than a thousand.
[13:20:07] <themapplz> Can somebody explain to me why that is?
[13:20:09] <themapplz> Thanks in advance
[13:23:33] <themapplz> anybuddy?
[13:28:32] <richthegeek> hey, is there anyone here who knows the internal workings of the Node driver? Got an idea for a caching wrapper but I just need to know about how it works
[13:29:13] <richthegeek> so the question is: if I do something like collection.find().sort({column: 1}) ... does that actually do anything on the collection, or does it all occur after I call each() or toArray() or similar?
[13:41:11] <kali> themapplz: paste us your entire mapreduce somewhere
[13:48:29] <themapplz> ok
[13:50:07] <themapplz> kali: http://pastebin.com/memQuGj7
[13:50:42] <themapplz> it's really quite simple at this point- just to do some testing
[13:51:08] <themapplz> but I need to know that the dataset I'm working on is correct
[14:02:45] <kali> themapplz: you're doing it wrong. the reducer can be called more than once, so its input and output must be homogeneous
[14:03:48] <themapplz> kali: but the map only emits everything once, no?
[14:04:16] <cheeser> you can emit multiple times, iirc
[14:05:09] <themapplz> :( i'm afraid i don't follow
[14:05:39] <kali> yes, your map() is correct. but the reduce() is wrong
[14:05:51] <kali> well, no.
[14:05:56] <kali> even the map() is wrong actually
[14:06:05] <kali> let me show you.
[14:06:16] <themapplz> so how would i simply count what gets in and out
[14:07:46] <kali> themapplz: http://pastebin.com/Tddqn6cT
[14:08:01] <kali> (untested)
[14:08:23] <themapplz> right
[14:08:25] <themapplz> ok cool
[14:08:29] <themapplz> i'll test that
[14:08:52] <kali> the second argument of the emit(), each member of the values array in the reducer, and the return value of the reducer have the same format
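
A minimal count sketch following that rule, using the homeId and sensor fields from the question (the collection name is assumed):

    var map = function() {
        emit(this.homeId, { count: 1 });            // same shape as the reduce output
    };
    var reduce = function(key, values) {
        var total = 0;
        values.forEach(function(v) { total += v.count; });
        return { count: total };                    // same shape as each emitted value
    };
    db.readings.mapReduce(map, reduce, {
        query: { sensor: 'z1t', homeId: 32168 },    // the find() condition goes here
        out: { inline: 1 }
    });
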
[14:10:30] <themapplz> kali: i sent you a PM
[14:11:11] <kali> yes. i prefer to keep conversation here, so somebody can jump in to help
[14:11:26] <themapplz> ok cool
[14:11:58] <themapplz> are you on here a lot? i'm out the door but will be back again tomorrow - will you be ?
[14:12:31] <kali> i'm on here waaaay too much.
[14:12:35] <themapplz> lol
[14:12:42] <themapplz> i'm on here waaay too little :D
[14:13:03] <themapplz> cool! thanks for your help so far!!
[14:13:15] <kali> you're welcome
[14:57:12] <dandre> hello,
[14:57:19] <dandre> please see http://pastebin.fr/28953
[14:58:45] <Soothsayer_> dandre: you want to return only the subdocument ?
[15:00:00] <dandre> yes
[15:00:16] <Soothsayer_> dandre: I don't think thats possible. You can only control the fields returned by mongodb.
[15:00:39] <Soothsayer_> You will have to get the whole of "foo"
[15:00:43] <Soothsayer_> and then filter within your application
[15:00:57] <dandre> ah ok
[15:01:51] <dandre> this will result in more network traffic if the array is big
[15:02:23] <Soothsayer_> dandre: that's acceptable.
[15:02:35] <kali> Soothsayer_: there is now
[15:02:44] <kali> Soothsayer_: a way to filter subdocs
[15:03:02] <Soothsayer_> kali: really?
[15:03:18] <kali> yes
[15:03:25] <dandre> kali: how?
[15:03:50] <Soothsayer_> unless you mean aggregate ?
[15:03:57] <kali> http://docs.mongodb.org/manual/reference/projection/positional/
[15:04:39] <kali> and you can also use a $elemMatch http://docs.mongodb.org/manual/reference/projection/elemMatch/#proj._S_elemMatch
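
A sketch of both projections against a document like { _id: 1, foo: [ { a: 1 }, { a: 2 } ] }:

    db.coll.find({ 'foo.a': 2 }, { 'foo.$': 1 })          // positional $: condition lives in the query
    db.coll.find({}, { foo: { $elemMatch: { a: 2 } } })   // $elemMatch: condition lives in the projection

Both return only the first matching element of the foo array.
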
[15:05:02] <Soothsayer_> kali: nice .. i stand corrected ^^ dandre
[15:05:09] <Soothsayer_> I've been using positional operators so far only in updates.
[15:05:13] <Soothsayer_> This is mongo 2.2+
[15:05:32] <dandre> this is what I have tried to do
[15:06:01] <kali> $elemMatch is 2.2+, i think $ is 2.4+ (can't see it specified in the page)
[15:07:52] <Soothsayer_> dandre: check your version of mongodb
[15:07:52] <Soothsayer_> ?
[15:10:53] <kali> dandre: http://uu.zoy.fr/p/tzudoda#clef=xsoivvdifoonngkh with 2.4
[15:11:09] <dandre> my version is 2.0.4
[15:11:24] <kali> that's ancient
[15:11:47] <kali> you should consider upgrading, there is a huge functional gap between 2.0 and 2.2
[15:12:06] <kali> and 2.4 adds a few goodies too
[15:18:57] <dandre> ok I'll try to upgrade
[15:20:03] <kurtis> Hey guys, is there a good way to adjust the output of map-reduce data? Specifically, I'd like to pull some values out of the _id subdocument and make them top-level objects (for indexing purposes)
[15:20:35] <kurtis> Or am I limited to just running a .forEach on the collection?
[15:33:50] <bartzy> Asked this in here today but got no answer
[15:34:08] <bartzy> if my field names are ~6GB for a big collection, that means I have less space for actual data in RAM, right ?
[15:35:14] <kali> bartzy: yes. if you have documents with lots of smallish values, it makes sense to keep the field names small
[15:35:51] <bartzy> Not so smallish, much bigger than the 20 bytes I'm saving with short field names
[15:35:54] <bartzy> but still, 6GB.
[15:36:17] <bartzy> So with 96GB RAM, more than 6% of it going to field names is not the best I guess ?
[15:36:32] <cheeser> probably not
[15:37:14] <kali> well, to be fair, the document model is not the most compact way to store data :)
[15:40:21] <bartzy> kali: Yeah, i don't care about data on the drives
[15:40:26] <bartzy> only about RAM (which is more expensive)
[15:40:34] <bartzy> kali: Why doesn't Mongo deal with it internally? Some kind of hashmap?
[15:40:47] <kali> bartzy: there is a bug somewhere, you can vote on it
[15:40:53] <bartzy> oh great, thanks
[15:42:09] <cheeser> "deal with it?"
[15:43:07] <bartzy> cheeser: With the fact that field names are included in each document instead of some reference
[15:43:36] <bartzy> like, internally storing something in the document, and resolve it when querying it
[15:45:04] <spicewiesel> hi all
[15:48:30] <cheeser> right. https://jira.mongodb.org/browse/SERVER-3288
[15:56:24] <kali> cheeser: yeah, there are other avatars of it :)
[15:56:42] <kali> cheeser: from the three digits era
[16:30:42] <blerght_> hi, say I have documents of the following form: { feature: 'age', value: 20, users: [1, 2, 3] }, etc. Now I want to select an array of users aged between 10 and 20. What would be the most efficient way to do this ? (I'm a bit confused between the different ways of aggregating, which one would work best here ?)
[17:08:26] <kali> BlackPanx: what do you want to aggregate ? this is a regular find()
[17:09:03] <kali> BlackPanx: haaa... never mind :)
[17:10:10] <kali> BlackPanx: if you can tolerate to reorganize the data at application level, the find() is the most efficient way. if not, look at the aggregation pipeline
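
For blerght_'s document shape, the regular find() kali means would look something like this (collection name assumed):

    db.features.find(
        { feature: 'age', value: { $gte: 10, $lte: 20 } },  // ages 10 through 20
        { users: 1, _id: 0 }                                // return just the user arrays
    )
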
[17:42:58] <jfine> How do I determine a field type in a document?
[17:43:52] <Zelest> typeof doc.field ?
[17:44:26] <jfine> For instance I have a document with a field unit, and when I do a findOne() it shows as a string (assuming because what I'm seeing is json) but I'm pretty sure they're stored as symbols
[17:45:24] <jfine> That just returns string :-/
[17:46:28] <jfine> When I find({unit: {$type: 2}}) it's empty
[17:46:29] <jfine> but
[17:46:46] <jfine> type 14 it finds them all
[17:47:11] <jfine> Based on http://docs.mongodb.org/manual/reference/operator/type/ 14 is Symbol and 2 is String.
[17:47:23] <jfine> I'm totally stumped on how to query for the type of unit.
[17:47:49] <jfine> Googled for over a half hour and have asked many others.
[17:47:55] <jfine> *sigh*
[17:47:57] <jyee> http://bsonspec.org/#/specification
[17:48:06] <jyee> according to that, symbol is deprecated
[17:48:54] <kali> jfine: if that's just for investigation purposes, you can mongodump the document and hexdump the bson
[17:49:10] <jfine> ah, ok, so I need to work directly with the bson
[17:49:40] <kali> jfine: i can't think of any other way to confirm that
[17:49:44] <jfine> jyee: I should bring that to the attention of the mongoid folks since they're relying on it.
[17:49:50] <jfine> Cool, thanks kali.
[18:29:20] <niftylettuce> built w/mongodb ... https://wakeup.io -- free wakeup call service
[18:36:57] <kzim_> hello, is there a way to see if auth is enabled on a shard from the config database?
[18:52:18] <JakePee_> Is there a rule of thumb for when it's better to return a larger result set and parse the data yourself vs. when you should pass a more complex query to mongo?
[20:09:23] <TkTech> I'm trying to find a way of getting pymongo to use a specific socket, can't seem to find any easy way of replacing the pool mechanism.
[20:09:42] <TkTech> (I have a socket-like object created by punching through a couple of SSH tunnels that I need pymongo to use)
[20:28:41] <quuxman> From time to time, under very moderate query load, pymongo throws "AutoReconnect: [Errno 104] Connection reset by peer"
[20:28:58] <quuxman> if I restart the process everything seems to work normally again
[20:40:57] <quuxman> nevermind, I was reaching the connection limit
[21:27:16] <scyth> does anyone know how to save an ISODate() object from nodejs? In nodejs, date objects are just that - Date() objects - and when they are saved, they're saved as strings, which prevents me from using the expireAfterSeconds index option
[21:28:42] <LouisT> you can't convert it back to a date object?
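
A sketch of what LouisT suggests, using the node driver: store a real Date object and it is saved as a BSON date (displayed as ISODate() in the shell), which a TTL index accepts. Collection and field names are assumed:

    // somewhere after connecting with the node driver:
    collection.insert({ createdAt: new Date() }, function(err, result) {
        if (err) throw err;
    });
    // TTL index: documents expire an hour after createdAt
    collection.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 }, function(err) {
        if (err) throw err;
    });
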
[22:09:40] <calstad> So I'm a complete mongo noob and have a question about how to use the aggregation pipeline that I have outlined here https://gist.github.com/calstad/6817868
[22:26:50] <retran> why is everyone doing cooler things in mongo than me
[22:26:54] <retran> i'm just storing boring shit
[22:28:47] <matt1> Heya, is there anything wrong with using mongoengine and pymongo in the same app? mongoengine will handle all user/site related stuff and pymongo the content. I'm using flask if it makes any difference.
[22:58:51] <JFrame> Hey guys, I've been trying out mongodb today and it seems it's what i need, but i have to do one last thing. when using save (in Java) with a new DBObject, i need to give a key and a value, and the key is like 12345-1. i can search like find( { "12345-1.game_id" : 100 }); how could I search with that "KEY" as a wildcard? like search every object that has game_id as 100
[23:00:18] <retran> why is your key so weird
[23:00:25] <retran> you're abusing keys
[23:00:39] <JFrame> how could I do it better then?
[23:00:52] <JFrame> I'm just a total newbie in mongo db
[23:00:54] <retran> i dont think you understand the purpose of the key
[23:01:01] <retran> i mean, it's apparent you dont
[23:01:10] <JFrame> could you explain?
[23:01:39] <retran> it should be {"game_id":100,"something_else":"12345-1"}
[23:01:42] <retran> for example
[23:01:59] <retran> or are you talking about a nested document
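
With retran's shape, the "wildcard" search becomes an ordinary query (a sketch, collection name assumed):

    db.coll.find({ game_id: 100 })   // matches every document whose game_id is 100
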
[23:02:29] <crudson> JFrame: pastie one of your documents and we can see how you are structuring them
[23:02:38] <JFrame> it's huge
[23:02:56] <JFrame> http://euw.leagueoflegends.com/tribunal/en/get_reform_game/3190/1/ you can even see it on internet
[23:03:34] <retran> do you work for riot
[23:03:42] <JFrame> nope
[23:03:52] <retran> what is that then
[23:04:12] <JFrame> I'd like to know some statistics
[23:04:19] <retran> i had a recruiter who tried to hire me there
[23:04:30] <JFrame> on riot?
[23:04:36] <retran> yeh
[23:04:52] <JFrame> i actually tried to have an internship there
[23:04:57] <JFrame> and they told me you suck, gtfo
[23:05:19] <retran> i wasn't a good enough fit for them, only for one reason
[23:05:25] <rafaelhbarros> JFrame: hostile, never work on a hostile place.
[23:05:32] <retran> they liked my php portfolio
[23:05:39] <retran> they liked everything except
[23:05:45] <JFrame> rafaelhbarros hm?
[23:05:48] <retran> i didnt have enough to show how serious of a gamer i was
[23:05:56] <retran> they only want to hire "serious gamers"
[23:05:57] <JFrame> they didnt tell me that exactly but heh
[23:06:04] <JFrame> oh, I see
[23:06:07] <retran> they dont seem like a hostile place at all
[23:06:13] <retran> in fact. mega cool
[23:06:28] <retran> my only gaming experience is casual playing of WoW
[23:06:35] <rafaelhbarros> retran: the way JFrame displayed it...
[23:06:36] <retran> and NES emulators
[23:06:44] <JFrame> so i think i'll add that "something_else":"12345-1"
[23:06:52] <JFrame> rafaelhbarros it was just a figure of speech
[23:06:55] <JFrame> they didnt say that xD
[23:06:59] <rafaelhbarros> ok
[23:07:06] <retran> it seems the culture there is like
[23:07:15] <retran> they want ppl really interested in gaming
[23:07:17] <retran> primarily
[23:07:31] <retran> and they'll train you for the work whatever it is
[23:07:43] <JFrame> well, if you like gaming and you like their game, you'll do it with passion
[23:07:49] <retran> if you show some promise of competency
[23:07:54] <retran> no..
[23:08:07] <JFrame> it's the way i see it
[23:08:08] <retran> they're looking for programmers who are going to know what players want
[23:08:17] <retran> passion isn't really their goal
[23:08:17] <JFrame> well, that too
[23:08:39] <crudson> JFrame: so what are you wishing to do with this document?
[23:08:41] <JFrame> anyways they listen too much to people
[23:08:59] <JFrame> crudson save some of them and get some statistics
[23:09:24] <JFrame> and retran if i had to do it with the "key" how would you do it?
[23:09:54] <retran> the key is the part on the left
[23:10:04] <retran> heh
[23:10:36] <retran> you dont want keys to be variable things
[23:10:43] <retran> though its certainly possible
[23:10:54] <retran> just makes referencing them nuts
[23:11:16] <JFrame> I see
[23:11:19] <retran> meaning, you can do it, just don't plan to have a way to obtain that document easily with it
[23:11:27] <retran> i mean, dont expect to
[23:11:30] <JFrame> okay
[23:13:28] <JFrame> and what about without key?
[23:14:06] <retran> what do you mean without key
[23:14:23] <retran> example
[23:14:24] <retran> ?
[23:15:01] <JFrame> "" as a key
[23:15:06] <JFrame> just a zero length key
[23:15:23] <retran> i dont see any examples of poor keys in that document you pasted
[23:15:38] <retran> mongo does not allow 0 len keys
[23:16:24] <retran> the document you pasted has a very rigid consistent schema actually
[23:17:59] <JFrame> coll.save(new BasicDBObjectBuilder().add("", objectjson).get());
[23:18:03] <JFrame> I actually used this
[23:18:05] <JFrame> and it let me
[23:18:58] <retran> you must be performing voodoo
[23:19:06] <retran> mongo dont allow zero-len keys
[23:19:32] <retran> i have no clue what that function crap is you pasted
[23:19:42] <JFrame> from java
[23:20:54] <retran> well that still wont mean much to me
[23:21:27] <retran> i'm guessing that that insert is not succeeding
[23:21:32] <retran> or it's adding some weird key
[23:21:39] <retran> that it generates
[23:21:47] <retran> mongo does not allow 0 len keys
[23:21:55] <retran> use this knowledge my friend
[23:22:11] <JFrame> http://i.imgur.com/bhKrpq7.png
[23:22:24] <JFrame> see, under "_id"
[23:22:47] <retran> your screenshot doesn't include that
[23:22:56] <retran> its not collapsed
[23:23:31] <JFrame> http://i.imgur.com/aY9exLh.png
[23:24:21] <retran> i see no 0-len keys
[23:24:28] <JFrame> db.tribunal.find( { '.platform_game_id' : 387832405 }); <- i can search with this
[23:25:10] <JFrame> isnt that a 0 len key?
[23:25:14] <JFrame> else im just freaking lost
[23:25:58] <retran> you're lost
[23:27:07] <retran> its a nested document
[23:27:18] <JFrame> I see
[23:28:39] <JFrame> and is it possible to do it not-nested?
[23:31:11] <retran> yes
[23:31:17] <retran> in fact, you're going out of your way to nest
[23:32:54] <JFrame> woho!
[23:32:56] <JFrame> I understood it
[23:33:10] <JFrame> lulz i was doing it so bad xD
[23:34:06] <JFrame> http://i.imgur.com/bZOAtDt.png :D
[23:34:18] <JFrame> now I have it so cool xD
[23:35:03] <JFrame> I love you retran xD
[23:35:13] <retran> thanks i can feel your love
[23:35:22] <JFrame> :D
[23:35:40] <JFrame> now I only need tons of storage to save everything xD
[23:36:12] <retran> and index
[23:36:21] <retran> you should plan your indexes
[23:36:28] <retran> and those will take up space too
[23:36:45] <JFrame> could you tell me more about it?
[23:37:02] <retran> plan for every find("myKey")
[23:37:27] <retran> need to have an index on keys you will be find()'ing on
[23:37:32] <JFrame> last year I learned sql and now i have almost all schemas broken
[23:37:34] <JFrame> I see
[23:37:54] <retran> if you dont it will start getting really slow, if you have tons of documents
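
Concretely (a sketch, reusing the game_id query from earlier):

    db.tribunal.ensureIndex({ game_id: 1 })   // backs find({ game_id: 100 })
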
[23:38:20] <JFrame> if i write too many indexes, it'll be huge, right?
[23:38:35] <retran> indexes have a storage cost
[23:40:02] <retran> once indexes are built, mongo is awesome with them
[23:40:19] <retran> for example, i have a massive mongo db that is basically a mirror of IMDB
[23:40:25] <JFrame> woho
[23:40:32] <JFrame> well
[23:40:36] <retran> and it can run on a 512MB vps very fast
[23:40:59] <JFrame> the storage would be
[23:41:12] <retran> the indexes take up GBs
[23:41:13] <retran> yes
[23:41:19] <retran> but big deal
[23:41:30] <JFrame> objectsize*1310162+indexes
[23:41:45] <retran> i dunno if that's correct
[23:42:00] <retran> i wouldn't worry about that
[23:42:07] <retran> unless you have some extreme edge case
[23:42:15] <retran> i see no reason to think you're an edge case
[23:42:40] <JFrame> I think 3 indexes are enough
[23:42:43] <JFrame> i dont think its too much
[23:43:04] <retran> i wouldn't think about it in numbers of indexes
[23:43:12] <JFrame> so?
[23:43:15] <retran> i would think about it in terms of what searches you'll be doing
[23:43:24] <retran> and create an index for each one
[23:44:01] <JFrame> is it possible to add an index later?
[23:44:06] <retran> sure
[23:44:26] <retran> db.collectionName.ensureIndex({ key1: 1, key2: 1 })
[23:44:38] <JFrame> and it'd take the storage it needs and everything would be awesome, right?
[23:45:35] <JFrame> so awesome
[23:45:45] <retran> yeah
[23:45:58] <JFrame> Then 3 indexes would be enough right now
[23:46:18] <retran> dont ask me
[23:46:31] <retran> that's dependant on your app
[23:46:37] <JFrame> i know
[23:46:49] <JFrame> well, i'm like thinking loud
[23:48:03] <platzhirsch> How does it affect the queries if I use descending indexing? key : -1 as opposed to key : 1 ?
[23:50:18] <JFrame> thank you retran
[23:50:22] <retran> http://docs.mongodb.org/manual/core/index-single/
[23:50:56] <platzhirsch> retran: so the query results are just returned in another order
[23:52:22] <retran> it matters if you return more than one document
[23:52:49] <retran> for single doc, wont matter
[23:53:17] <platzhirsch> That's reasonable, I am just wondering because for my document mapper I can specify a sort order, too... but that's just for the relation :)
[23:53:21] <retran> i mean, key
[23:53:24] <retran> sorry not doc
[23:53:29] <retran> multi key indexes
[23:53:31] <retran> it matters
[23:53:32] <platzhirsch> oh
[23:53:37] <retran> otherwise it wont
[23:54:01] <platzhirsch> so it would not matter if { date: 1 } or { date: -1 } ?
[23:54:09] <retran> no
[23:54:11] <platzhirsch> only { date: 1, name: -1 } etc.
[23:54:21] <retran> you are correct , yes
[23:55:03] <platzhirsch> ok, it seems a bit inconvenient to have to define it then anyway, but i guess it has its reason :)
[23:55:23] <retran> its for multi-key indexes
[23:55:30] <retran> nature of b-tree
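
A sketch of where direction matters:

    db.coll.ensureIndex({ date: 1, name: -1 })
    // serves sort({ date: 1, name: -1 }) and its mirror sort({ date: -1, name: 1 }),
    // but not sort({ date: 1, name: 1 }); a single-key index works in either direction
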
[23:56:16] <platzhirsch> I have to restudy the data structure, will hit me sooner or later in an interview question
[23:56:43] <platzhirsch> thanks retran. I am astonished by the MongoDB documentation; I was 1.5 years ago already, and it has improved even more