[01:32:44] <Whisket> anyone know why db.getCollectionNames() would return an empty list? I have a bunch of data in the db but that doesn't seem to know any of the collections
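[A common cause of an empty result is being connected to a different database than expected; a minimal mongo-shell check, where "mydb" is just a placeholder name:]
    db.getName()                                   // which database is the shell currently using?
    db.getSiblingDB("mydb").getCollectionNames()   // check the database that actually holds the data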
[01:46:41] <Freman> queryguard now kills running queries when a client disconnects... it's a bit brute force and will absolutely kill other queries but meh :D
[07:45:33] <Lope> if I want to update many mongoDB documents by _id, can I do something like this (this is made-up code)? db.collection('foo').updateMany([{_id:'abc'},{_id:'def'}],{$set:{bar:0}});
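[A sketch of the usual approach: pass one filter using $in rather than an array of filters; the ids and collection name are the made-up ones from the question:]
    db.collection('foo').updateMany(
      { _id: { $in: ['abc', 'def'] } },   // match any of the listed _ids
      { $set: { bar: 0 } }                // apply the same update to all of them
    );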
[09:08:18] <smashway> hi there, I am a complete noob and fishing for information. My database here http://pastebin.com/zykhwR6j doesn't look like mongodb, does it? What could be its format?
[09:44:04] <wieshka> Hi everyone, having a situation: replica set, 3 members, node A with prio 1.5, B with 1.2, C with 1.0. The server running node A had to be re-installed for other reasons, so before putting it down I did rs.stepDown() and node B took the primary role. Now B is primary and C is secondary. What happens when I now start node A with prio 1.5 but with an empty data dir (fresh install of the same mongo version with an identical config)?
[09:44:18] <wieshka> 1) It will sync with current primary
[09:44:27] <wieshka> 2) It will take over as primary and corrupt the current primary
[09:45:35] <wieshka> and in case of 1), will it take back the primary role after the oplog is in sync?
[09:46:39] <Derick> it will not "corrupt" current primary
[09:46:51] <Derick> it will sync, and then take over as primary again
[09:47:07] <Derick> but, you really should not try to micromanage the replica set
[09:47:53] <wieshka> what do you mean by micromanage here? All I expect now, in the perfect-case scenario, is: I start mongod on server A with an empty data dir, it syncs with the current primary and then takes over the primary role
[09:48:02] <wieshka> if that's what I should expect, then perfect
[09:48:12] <Derick> micromanaging meaning giving replica set members different priorities
[09:48:20] <Derick> you almost never actually need that
[09:48:45] <wieshka> it's an old mongo version, deployed according to the mongo docs that were current at that time :)
[09:49:07] <Derick> I also didn't think you could give non-integer priorities...
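[For reference, member priorities live in the replica set config; a hedged sketch of resetting them all to the default of 1 from the mongo shell, run against the primary:]
    var cfg = rs.conf();
    cfg.members.forEach(function (m) { m.priority = 1; });  // equal priority for every member
    rs.reconfig(cfg);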
[10:29:45] <tantamount> It's not the first match in the pipeline, in case that matters
[10:29:58] <Derick> yes, you can't have "$source_list.user" there I believe, but let me check
[10:30:14] <tantamount> You mean I can't read an embedded field?
[10:30:39] <Derick> no, you can't compare (eq) with a variable, it needs to be a constant
[10:31:01] <tantamount> Yes, that's what seems to be happening, but how do I match a field...
[10:31:16] <Derick> tantamount: if you can come up with a few sample documents, and what you want out of it and put it in a pastebin, I can give it a try (and explain it)
[10:31:37] <tantamount> It's private data so I probably can't share it, but the document will look as you imagine from that query
[10:31:59] <tantamount> target_list is just a DBRef and source_list.user is also a DBRef, and that's it. Doesn't even matter that they're DBRefs, though
[10:32:03] <Derick> tantamount: you can make up different words in it
[10:32:11] <Derick> but I really need to see the example data
[10:32:27] <tantamount> Why? You already know what I'm trying to do... I just want to find matching fields
[10:32:38] <tantamount> I don't understand why it's so difficult
[10:34:09] <tantamount> I guess my answer is that I cannot match fields, as showcased by this answer http://stackoverflow.com/questions/14501337/mongodb-aggregation-framework-match-between-fields
[10:34:51] <tantamount> One has to add an extra projection stage to be able to specify a constant in the match
[10:35:47] <tantamount> If you want sample data you can use the data from the link
[10:41:53] <tantamount> Because even though you can compare nested fields, the match would eliminate the entire document, when I only want to eliminate the unwound document
[10:42:15] <tantamount> i.e. I want to manipulate a list in the document
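[For the record, the workaround discussed above looks roughly like this; the collection name "coll" is a placeholder, the field names are taken from the conversation, and whether $eq compares two DBRefs as expected is worth verifying:]
    db.coll.aggregate([
      { $unwind: "$source_list" },                                  // one output doc per list entry
      { $project: {
          original: "$$ROOT",
          sameUser: { $eq: ["$source_list.user", "$target_list"] }  // field-to-field comparison
      } },
      { $match: { sameUser: true } }                                // keeps only the matching entries
    ]);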
[10:42:18] <Complexity> I'm new to sharding in Mongo and I've set up Hyper-V on my local machine, with a couple of Ubuntu virtual servers.
[10:43:00] <Complexity> Now, I'm facing an issue: as soon as I enable sharding, the insert speed drops from 600 docs/sec to 300 docs/sec, so a factor of 2 slower. I use the hashed "_id" field, so the documents are distributed over 2 shards.
[10:43:22] <Complexity> How is that possible? I thought that sharding was supposed to provide a higher insert speed.
[10:54:09] <forcebanana> Complexity: why only inserting on 2 shards? my understanding is that for insert performance to increase linearly you must perform inserts on every shard (?)
[10:55:31] <Complexity> I don't quite follow you here.
[10:55:59] <forcebanana> it is my understanding that inserting on only a subset will not increase performance
[10:56:10] <Complexity> forcebanana: I connect to my mongos instance and let the insert happen. The router (mongos) will write to both shards. However, I don't understand why the performance goes down.
[10:56:22] <Complexity> forcebanana: What do you mean by inserting only a subset?
[10:57:03] <forcebanana> ah, ok… perhaps i misread what you wrote. apologies
[11:00:03] <Complexity> MongoCollection: I've already done it and I've been trying to get sharding to work for 4 days now. I used the '_id' field as a key, which is not good as all writes go to the last shard. I now use a hashed value of the '_id' field, which distributes writes across the shards evenly, and the performance goes down by a factor of 2. The same is true when sharding on another field which I fill randomly.
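[For context, the hashed-key setup being described is roughly this; the namespace "test.docs" is a placeholder:]
    sh.enableSharding("test")
    sh.shardCollection("test.docs", { _id: "hashed" })   // spread inserts across both shards
    sh.status()                                          // check the chunk distribution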
[11:05:00] <Complexity> forcebanana: Do you have any idea why I have these problems with sharding?
[11:14:47] <forcebanana> Complexity: without any further inspection and just off of the top of my head, i’d suspect that your key is not optimal (i.e. not sufficiently random to scale write operations appropriately)
[11:17:51] <Complexity> forcebanana: I can confirm that my write operations are distributed evenly. Connecting to both mongod instances shows roughly the same number of items. Can you point me in the right direction on how to debug this particular issue?
[11:25:03] <forcebanana> Complexity: do you have any other operations or bottlenecks which may be impeding write performance? because if it’s not the distribution, it’s likely this
[11:25:44] <Complexity> It's a clean install at this point. However, all servers are running virtual on a host with a single SSD. Can that be the bottleneck?
[11:26:30] <forcebanana> all machines on a single node? it’s possible. is this a POC or test env?
[11:27:08] <forcebanana> because this is the point where someone would highly advise you not to do that otherwise :-)
[11:27:44] <forcebanana> are you monitoring IO in any way?
[11:27:48] <Complexity> This is a POC, just to test how to configure Sharding and see the performance gain (however, it's a loss right now :) )
[11:28:13] <Complexity> Just Task Manager, but it does not come near 100%
[11:29:59] <Complexity> I mean that the disk does not come near 100% of I/O usage. I'm monitoring the Host
[11:32:42] <forcebanana> not sure how to take that measurement… typically, SSDs have a max throughput measured in IOPS (less useful), Rand/Seq Read/Write Ops/sec (far more useful). not sure how “100%” aligns with what your SSD actually is doing or can do
[11:33:41] <Complexity> So, your guess is that it should be better when using 2 different physical disks?
[11:35:24] <forcebanana> you’d be better off using multiple physical nodes, but that could possibly be your issue and might present a “fix” for it. i honestly don’t know without measurements that align with storage I/O performance
[11:35:48] <forcebanana> which Windows OS is this?
[11:36:44] <forcebanana> not really designed to be a server OS
[11:37:12] <forcebanana> certainly not intended for that purpose, i mean
[11:37:41] <Complexity> I totally agree on that one but I don't have anything else since it's a POC.
[11:39:05] <Complexity> I'll take your advice and try to do it on multiple physical nodes. This would probably be best as it seems that I'm facing a bottleneck somewhere, however, I don't know where for the moment.
[11:41:21] <forcebanana> yeah. bottlenecks could literally be anywhere in the system you’re using… hard to tell without data
[11:41:55] <forcebanana> how many nodes in your cluster?
[11:43:29] <Complexity> 1 router, 1 configuration and 2 shards
[11:43:32] <forcebanana> >>> I totally agree on that one but I don't have anything else since it's a POC. <—- Linux ;-)
[11:43:37] <Complexity> no replica sets at this point.
[11:43:45] <Complexity> would like to keep it as easy as possible.
[11:44:09] <forcebanana> when you say 2 shards, are you also saying 2 (mongod) nodes?
[11:46:43] <forcebanana> and i know this is a cliche (and also unhelpful at best), but whenever I hear someone running mongo on windows this is my immediate knee-jerk reaction: https://media.giphy.com/media/DGiZfWmc0HWms/giphy.gif
[11:47:41] <forcebanana> there are plenty of easy to follow guides out there to help you set up a *nix box for a) virtualization and b) mongodb… i’d highly suggest that you do that. save yourself the frustration
[11:48:40] <forcebanana> IIRC digitalocean has some good ones
[11:49:19] <StephenLynx> digital ocean sucks balls though
[11:56:10] <sireseog> When inserting a new document into a collection, i want to get an object from another collection by its id and insert it, how can i achieve this?
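[There is no join at insert time, so this is usually two round trips; a sketch with made-up collection names "profiles" and "posts" and a placeholder id:]
    var profile = db.profiles.findOne({ _id: someId });     // fetch the referenced object
    db.posts.insertOne({ author: profile, text: "..." });   // embed it in the new document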
[12:11:11] <Complexity> Just to tell you, I'm not running Mongo on Windows, I'm using a Windows Host, but Mongo is running on Ubuntu Server 14.04.3 LTS :-)
[12:24:16] <StephenLynx> you can configure it to change its permissions system?
[12:26:49] <Complexity> No, that's true. But are you going to guarantee me that Linux is bug- and exploit-free? :-) (I do get your point)
[12:27:36] <StephenLynx> it's not, but at least the bugs and exploits don't stay around for years and years without a fix because the only people with access to the code don't give a shit about actual quality and just want to take more of your cash.
[12:27:53] <StephenLynx> ffs, w10 could be exploited via the scrollbar code.
[12:28:49] <StephenLynx> I still hope at least you don't run windows on a live server.
[13:20:08] <spuz> if there were a significant number of bugs i'd definitely consider something else though
[13:23:13] <StephenLynx> not to mention that its useless on a protected server.
[13:23:26] <StephenLynx> since you can't connect from outside on it and you don't have a GUI either.
[13:30:20] <spuz> StephenLynx: yeah definitely good to know how to use the console for remote machines
[13:30:30] <spuz> we have all sorts of VPNs and ssh tunnels for that
[13:41:46] <Keksike> hey, I have mongodb running on my client's server. How could I analyze what is happening when I do something (reads, writes, etc.)? I want to see what is taking such a long time with some of my processes.
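[One built-in option is the database profiler together with currentOp; a minimal sketch, where the 100 ms threshold is just an example value:]
    db.setProfilingLevel(1, 100)                                   // record operations slower than 100 ms
    db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()    // inspect the most recent slow ops
    db.currentOp()                                                 // see what is running right now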
[14:26:43] <phishy> If a document has a property that is a large array, and I only want to find the document and the count of the array instead of the array, do I use find() syntax or some mapReduce?
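[The aggregation framework can return just the length via $size; a sketch assuming the array field is called "items" and the document is looked up by a placeholder id:]
    db.coll.aggregate([
      { $match: { _id: someId } },                        // find the document
      { $project: { itemCount: { $size: "$items" } } }    // return the count instead of the array
    ]);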
[16:48:25] <Ange7> i have one collection with : docA, docB, docC, docB, docB, docC
[16:49:04] <Ange7> I want a pipeline which finds docA, because docB & docC appear more than 1 time
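[A hedged sketch of one way to do this, assuming there is a field such as "name" that identifies the duplicates:]
    db.coll.aggregate([
      { $group: { _id: "$name", count: { $sum: 1 }, ids: { $push: "$_id" } } },
      { $match: { count: 1 } }   // keep only values that occur exactly once, e.g. docA
    ]);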
[17:17:33] <saira_123> Hi professionals, can someone please comment on https://goo.gl/lzYK9o, my mongo performance comparison research?
[17:17:59] <saira_123> Hi professionals, can someone please comment on https://goo.gl/lzYK9o, my mongo performance comparison research? Am I going in the right direction? Any suggestions?
[17:18:39] <saira_123> i am testing mongo performance using ycsb
[17:41:01] <CustosLimen> saira_123, while I can get it there - it's a bit of a schlep
[17:42:02] <saira_123> CustosLimen if you just type a command without arguments in mongo shell you will get its definition
[17:42:35] <CustosLimen> saira_123, not for injected functions: https://bpaste.net/show/ee224f5fe0f3
[18:28:35] <CustosLimen> is there a way to access config variables (like listening port) from mongo shell ?
[18:48:27] <uuanton> if replica set try db.isMaster()
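[For the listening port specifically, the shell can also read back the parsed startup options; a short sketch:]
    db.serverCmdLineOpts()                    // parsed config, including net.port
    db.adminCommand({ getCmdLineOpts: 1 })    // the underlying command, without the shell helper
    db.isMaster().me                          // host:port of this member when in a replica set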
[18:56:47] <uuanton> assuming that writes are coming to that database1
[19:04:29] <uuanton> 1. Connect the new database to the prod replica set as a new member 2. catch up with the data 3. detach from the replica set 4. drop the local database to drop the replica settings 5. set up as standalone (remove repl=123 from /etc/mongod.conf) 6. restart 7. change app code to write to the new machine
[19:05:58] <uuanton> between 6 and 7 there will be downtime. If anyone could suggest anything
[19:28:34] <godzirra> Why would fetching a document via mongoose show a different __v than looking in mongo?
[19:31:16] <renlo> what file extension do you guys normally use for files with mongo queries in them (equiv to .sql)
[19:47:30] <renlo> I guess the answer is that files with mongo queries in them are generally .js files
[21:07:21] <justink101> In node, tailing the oplog and seeing `ts: { _bsontype: 'Timestamp', low_: 1, high_: 1455137585 }` what is the recommended way to just get a native javascript number (timestamp) from this?
[21:07:33] <justink101> I.e. want to store as JSON
[21:08:31] <justink101> Should I just pull ts.high_? Seems hacky. Is there a function that deserializes _bsontype: 'Timestamp'?
[21:17:53] <justink101> Like the following works, but makes baby jesus cry: if(doc.ts && doc.ts.high_) { doc.ts = doc.ts.high_; }
[21:18:31] <justink101> Is there a native JS way to turn a BSON timestamp into a native number
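[A hedged sketch, assuming the Timestamp type from the js-bson package used by the Node driver, where the high 32 bits are seconds since the epoch and the low 32 bits are an ordinal:]
    // use the public accessor instead of reaching into ts.high_
    var seconds = doc.ts.getHighBits();
    var asDate = new Date(seconds * 1000);   // or keep `seconds` if a plain number is enough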
[23:34:31] <magicantler> in a shard, if i want to keep the data grouped together by the user_id field, so that objects of the same user are together.. should i use a range or hash shard on user_id?
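[For reference, the two options look like this with a placeholder namespace; documents sharing the exact same user_id value end up in the same chunk either way, the difference is how ranges of neighbouring user_ids are laid out:]
    sh.shardCollection("mydb.events", { user_id: 1 })         // ranged key: keeps user_id order, supports targeted range queries
    sh.shardCollection("mydb.events", { user_id: "hashed" })  // hashed key: evens out write distribution across shards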
[23:42:12] <Freman> oh, even better, mongo seems to completely ignore any fields it doesn't understand