[02:06:54] <tejasmanohar> should i be using a counter cache for this? i feel like the other method would be quite inefficient.
[02:06:58] <tejasmanohar> but i also hate data duplication
[02:07:22] <tejasmanohar> should i just cache my queries? O.o
[03:10:31] <JaVaSan> Hi, is there any way to create a query that retrieves a nested array as the root of the response? (i.e. instead of {"fieldArray" : [...] } I would like something like {[...]}, where fieldArray is a nested field that is an array of documents)
[03:13:21] <joannac> you know that {[...]} is not a valid document, right?
[03:16:40] <JaVaSan> @joannac, sorry, I mean something like [{...}, {...}]
[03:18:19] <JaVaSan> so, instead of {"fieldArray" : [{...}, {...}] } I would like to retrieve [{...}, {...}].
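A query result is always a set of documents, so the server can't return a bare array as the top level. One hedged workaround in the 3.0-era shell is to $unwind the array and let the client collect the elements (collection and field names below are placeholders):

    // assumes documents shaped like { _id: ..., fieldArray: [ {...}, {...} ] }
    db.coll.aggregate([
      { $match: { _id: someId } },                      // someId is a placeholder
      { $unwind: "$fieldArray" },
      { $project: { _id: 0, element: "$fieldArray" } }  // one result doc per array element
    ])
    // each result is { element: {...} }; the client can map out .element
    // to build the plain [ {...}, {...} ] array it wanted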
[09:26:08] <carver404> hi, i'm getting a 404 while installing mongodb 3.0.x on fedora.. any pointers?
[09:26:38] <carver404> the baseurl https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/ gives 404
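A likely cause on Fedora: $releasever expands to the Fedora release number (e.g. 22), and repo.mongodb.org only has directories for RHEL major versions. A sketch of a repo file with the version pinned (adjust 7 to the RHEL line that matches your system):

    # /etc/yum.repos.d/mongodb-org-3.0.repo
    [mongodb-org-3.0]
    name=MongoDB Repository
    baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.0/x86_64/
    gpgcheck=0
    enabled=1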
[11:16:10] <Siyfion> I keep getting a "MongoError: cursor killed or timed out" when performing a simple .find({}) on a collection with around 10MB of data in it... Any idea why that might be?
[11:16:21] <Siyfion> Or even how I can diagnose the issue?
[11:19:00] <Siyfion> Looks like this issue: https://jira.mongodb.org/browse/NODE-300
[11:19:08] <Siyfion> But it's supposed to be fixed :/
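If the fix in NODE-300 doesn't apply, two things worth trying, sketched here under the assumption of the 2.x Node.js driver and a placeholder collection name:

    // ask the server not to time the cursor out, and keep batches small so each
    // getMore returns quickly
    var cursor = db.collection('items').find({});
    cursor.addCursorFlag('noCursorTimeout', true);
    cursor.batchSize(100);
    cursor.each(function (err, doc) {
      if (err) throw err;
      if (doc === null) return;   // cursor exhausted
      // process doc here
    });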
[11:31:49] <tibyke> I'm trying to find in a db and order the results by the multiplication (or whatever mathematical relation) of two fields. should it be whatever.find(..., ...).aggregate() or whatever.aggregate().find(..., ...)? neither of these works, actually :)
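find() and aggregate() don't chain; the filter goes inside the pipeline as a $match stage, the product is computed in a $project, and $sort orders on it. A sketch with placeholder field names a and b:

    db.whatever.aggregate([
      { $match: { /* your find() criteria */ } },
      { $project: { a: 1, b: 1, product: { $multiply: [ "$a", "$b" ] } } },
      { $sort: { product: -1 } }
    ])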
[15:40:47] <Owner> well in this case, all copies of the application need oplog
[15:45:30] <zacharypch> HI - is there a configuration option in mongo3 for how hyphens are treated in text indexes? I want to keep English as the language choice, just stop hyphens being treated as word separators, since my dataset has a lot of hyphenated terms
[15:45:31] <zacharypch> also numbers next to letters, like abc123 should be one word
[15:48:29] <zacharypch> cheeser: how are the languages defined? can I invent my own language?
[15:49:46] <Owner> cheeser, well for performance reasons this app needs the oplog, but for economic reasons, they only want one mongo cluster for all the environments to share (with separate dbs)
[15:50:22] <Owner> but cross zone data leakage is bad for security
[15:55:17] <cheeser> zacharypch: you can't as far as i know.
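There's no index option for this, so one hedged workaround is to maintain a normalized copy of the text and put the text index on that copy instead (collection and field names below are hypothetical):

    db.docs.find().forEach(function (doc) {
      // strip hyphens so "abc-123" stays one token when tokenized
      var normalized = doc.body.replace(/-/g, "");
      db.docs.update({ _id: doc._id }, { $set: { bodySearch: normalized } });
    });
    db.docs.createIndex({ bodySearch: "text" }, { default_language: "english" });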
[15:55:43] <cheeser> Owner: if you don't expose the oplog directly to users, you can filter by db
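A sketch of that idea in the shell: tail the oplog yourself and only pass along entries whose namespace belongs to the environment's own db ("appdb" is a placeholder):

    var oplog = db.getSiblingDB("local").oplog.rs;
    var cursor = oplog.find({ ns: /^appdb\./ })
                      .addOption(DBQuery.Option.tailable)
                      .addOption(DBQuery.Option.awaitData);
    while (cursor.hasNext()) {
      printjson(cursor.next());   // hand only these entries to the app
    }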
[16:14:29] <deathanchor> what's the right way to have a two-member replset with no 3rd member for the short term?
[16:22:58] <deathanchor> was thinking of giving primary votes : 2 just to break ties for now
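Note that 3.0 rejects votes greater than 1, so a hedged alternative for the short term is to make the secondary non-voting, which leaves the primary a majority of voters on its own:

    cfg = rs.conf()
    cfg.members[1].votes = 0      // secondary no longer votes
    cfg.members[1].priority = 0   // a non-voting member must have priority 0
    rs.reconfig(cfg)
    // trade-off: if the primary dies the secondary cannot be elected automatically;
    // you'd have to reconfigure by hand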
[16:24:29] <mensgis> hey all! I'm switching from an application that created IDs like kScTJM9n+g2TUU9eVfQu .. To do things right, right from the beginning, I'd like to use valid ObjectIds. Do you have any idea how to change the existing data easily?
[16:26:06] <mensgis> I am speaking about the _id attribute.. will removing it just create a new one? if so I might go for updating references manually
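_id is immutable, so removing or rewriting it in place won't work; the usual route is to insert a copy under a new ObjectId and delete the old document. A sketch, assuming the old _ids are strings and "mycoll" is a placeholder:

    db.mycoll.find({ _id: { $type: 2 } }).forEach(function (doc) {   // type 2 = string
      var oldId = doc._id;
      doc._id = ObjectId();
      db.mycoll.insert(doc);
      db.mycoll.remove({ _id: oldId });
      // any collection that references oldId needs the same old -> new mapping applied
    });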
[16:27:50] <mak``> Hello :) If I don't care about sequentiality of numeric IDs, but I want uniqueness and for the service to not be a SPOF, how can I do this? I need numeric 32bit integer IDs for MongoDB and sync it with SQL Server in a distributed system.
[16:35:18] <StephenLynx> why don't you do the opposite?
[16:35:19] <mak``> deathanchor: ObjectID is 12 bytes
[16:35:32] <StephenLynx> and store on mongo the id that you have on the legacy system?
[16:36:22] <mak``> StephenLynx: the current system generates the ID in SQL Server and persists it in MongoDB, but it's a nightmare of message queues, and we need to move the master from the legacy system to the new system and ultimately switch off SQL Server
[17:02:15] <StephenLynx> generate sequential 32 bits ids in your application
[17:02:30] <StephenLynx> and set a unique index on the collections for this field.
[17:02:54] <StephenLynx> this way you are guaranteed to never have a duplicate.
[17:05:16] <cheeser> or use findAndModify() to let the db track those for you.
[17:05:54] <cheeser> it'd mean an extra db trip but it'd be durable across application runs and unique across a cluster (provided you have more than one instance of the app running)
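A sketch of that counter pattern in the shell (collection and sequence names are placeholders):

    function nextId(name) {
      var ret = db.counters.findAndModify({
        query: { _id: name },
        update: { $inc: { seq: 1 } },
        new: true,
        upsert: true
      });
      return ret.seq;   // unique and increasing across all app instances
                        // (wrap in NumberInt() if it must be stored as a 32-bit int)
    }
    db.things.insert({ _id: nextId("thingid"), label: "example" });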
[19:40:28] <deathanchor> use cheeser's link to change it while mongod is running
[19:40:38] <deathanchor> and my link to set it for every mongod startup
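The startup-time equivalent, sketched as a YAML config snippet (this matches the "level 0 but log slow ops" setting discussed below):

    # mongod.conf
    operationProfiling:
      mode: "off"              # profiler off; slow ops still go to the mongod log
      slowOpThresholdMs: 100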
[19:41:22] <deathanchor> I noticed that large document inserts usually show up in the logs.
[19:43:10] <silasx> Thanks a ton guys! My setting is currently {was: 0, slowms: 100} … so that means the profiling level is 0, but it will still log ops over 100ms?
[19:44:09] <silasx> And there’s no separate profiling log from the system log?
[19:45:32] <deathanchor> silasx: 0 only writes to logs, 1 writes slow ops to profile collection (capped collection), 2 writes all ops of that db
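For reference, the runtime commands that go with those levels:

    db.setProfilingLevel(1, 100)   // level 1: ops slower than 100ms go to system.profile
    db.getProfilingStatus()        // -> { "was" : 1, "slowms" : 100 }
    db.setProfilingLevel(0)        // back to level 0: slow ops only appear in the mongod log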
[19:54:24] <deathanchor> basically it's a find().limit(1).pretty()
[19:54:32] <silasx> @deathanchor: Ah, okay, so the 100ms threshold applies all the time. Yeah, some of these may just be genuinely slow, some are big inserts, but that still seems like a long time. Will have to look into it, but this explains a lot.
[19:55:18] <StephenLynx> in the meanwhile, if I want to check whether TWO documents exist, will a count be faster than a find and toArray, since it performs less work for each document found?
[19:55:19] <deathanchor> silasx: I was curious how big these docs were and how slow they were. I deal with some big docs also
[19:56:00] <deathanchor> StephenLynx: nah, I would do a find().limit(2)
[19:56:18] <deathanchor> StephenLynx: count would scrub through your whole damn collection again
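A sketch of the limit(2) existence check (query and collection names are placeholders):

    var matches = db.coll.find({ status: "pending" }, { _id: 1 }).limit(2).toArray();
    var bothExist = (matches.length === 2);
    // an unrestricted count() would keep counting every matching document
    // instead of stopping after the second match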
[19:58:27] <silasx> let me take a look at some of these
[20:09:10] <deathanchor> limit... for when you just don't care about the rest.
[20:09:21] <silasx> @deathanchor if you're still around, a bunch of these are insertions of docs around 3000 bytes in length … doesn’t seem like they’d take a long time
[20:10:39] <deathanchor> the w: and r: values on the long line are microseconds spent waiting for a write/read lock.
[20:14:35] <silasx> so … next steps to diagnose something like that?
[20:14:48] <silasx> oooh, give give! I love my vim :-]
[20:14:51] <deathanchor> silasx: what problem are you trying to solve?
[20:15:36] <silasx> @deathanchor well, a few: 1) We don’t like data being written to logs for security reasons, and 2) now that we see these are long ops, why are they taking so long
[20:16:47] <deathanchor> silasx: let me publish my vim file on github
[20:18:19] <deathanchor> well slowms you can set to some large number
[20:18:33] <deathanchor> write locks, sounds like contention to me.
[20:19:50] <silasx> wait, what about slowms? What would I set to a large number? Looking up contention
[20:21:27] <beekin> When using the aggregation pipeline, I read somewhere that it's best practice to $match and $project early on in order to trim data. Does this mean that I should always be projecting only what's needed for the entire aggregation?
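Roughly, yes: $match first so fewer documents enter the pipeline, then $project only the fields the later stages actually use. A sketch with placeholder names:

    db.orders.aggregate([
      { $match: { status: "shipped" } },                     // drop documents early
      { $project: { customerId: 1, total: 1 } },             // keep only what $group needs
      { $group: { _id: "$customerId", spent: { $sum: "$total" } } }
    ])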