[01:40:25] <ranman> hey guys -- if I have a document {_id: ObjectId(), value: "really really long string"} -- what's the difference between db.collection.ensureIndex({"value": "hashed"}) vs db.collection.ensureIndex({"value": 1}, {unique: true})
[01:41:04] <php> When using the Java client — how can I fetch results from the Mongo instance and return values only once I have the results?
[01:41:58] <ranman> php: mongodb uses the concept of a cursor to continue grabbing all of the data, but if you just want all of it you can write a quick wrapper to go through the cursor and build your array
[01:43:05] <ranman> you can call toArray on the cursor
[01:44:10] <php> Latest client, latest server version.
[01:44:15] <cheeser> i'm not sure what your question even means php
[01:44:42] <ranman> cheeser I'm pretty sure he means just getting the objects instead of the cursor
[01:45:04] <ranman> php be mindful of the warning on the page I sent you: "Warning: Calling toArray or length on a DBCursor will irrevocably turn it into an array. This means that, if the cursor was iterating over ten million results (which it was lazily fetching from the database), suddenly there will be a ten-million element array in memory."
[01:45:48] <cheeser> iterate the cursor like any other Iterator and process each document
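A minimal shell sketch of the two approaches above (collection name "mycoll" is illustrative; the Java driver's DBCursor behaves the same way):

    // lazy iteration: documents are fetched from the server in batches
    var cursor = db.mycoll.find();
    while (cursor.hasNext()) {
        printjson(cursor.next());   // process one document at a time
    }
    // versus materializing everything at once -- safe only for small result sets:
    var all = db.mycoll.find().toArray();   // the entire result set now lives in memory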
[01:46:14] <ranman> hey cheeser maybe you know the answer to this: if I have a document {_id: ObjectId(), value: "really really long string"} -- what's the difference between db.collection.ensureIndex({"value": "hashed"}) vs db.collection.ensureIndex({"value": 1}, {unique: true})
[01:46:50] <ranman> my suspicion is that hashed will be more efficient
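The question isn't answered in-channel, but per the MongoDB docs the two forms differ in more than efficiency -- a sketch of both, side by side:

    // hashed index: stores a hash of the (long) string -- smaller keys,
    // equality lookups only, and it cannot carry a unique constraint
    db.collection.ensureIndex({value: "hashed"})
    // ascending index with unique constraint: the full string is the key,
    // supports range queries and enforces uniqueness
    db.collection.ensureIndex({value: 1}, {unique: true})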
[01:46:57] <php> The problem I face right now is that I have to use Mongo in a separate thread. Mongo's client won't work in my default thread, because it's a plugin which has thread protection. I have to use Mongo in a Runnable.
[01:47:22] <php> Right now, I can grab the results just fine, but since I'm using a Runnable — it's grabbing the values too late. I am returning an empty List
[01:48:17] <php> I'll sort of show the flow... one minute.
[01:48:48] <ranman> php: can you use a lock on the list, acquire it in the runnable and wait on that to be released to proceed?
[01:49:05] <php> getAllTransactionsForUser(UUID uuid) -> schedules a Runnable (allows Mongo to use its multithreaded functionality), RETURNS -> Mongo finally gets the results, and adds them to the List
[02:50:12] <cheeser> the threadedness is orthogonal to your query needs. get one working then the other. combine.
[04:19:15] <arussel> in https://docs.mongodb.org/manual/reference/method/db.collection.update/ , I don't really understand:
[04:19:23] <arussel> If all update() operations complete the query portion before any client successfully inserts data, and there is no unique index on the name field, then each update operation may result in an insert.
[04:19:45] <arussel> could someone show me a step-by-step situation where the problem arises?
[04:25:56] <Boomtime> arussel: this is specifically when you use the 'upsert' option
[04:27:21] <Boomtime> specifying upsert behavior makes update behave essentially like two non-atomic operations - it attempts to update atomically first, but if the match predicate does not match anything, then it attempts a pure insert operation - given these are executed as two distinct steps, there is a race condition with other clients
[04:28:05] <Boomtime> if you have a lot of clients updating the same document (or potential to update the same document) and you specify 'upsert' behavior then you might get duplicates
[04:28:47] <Boomtime> again, this only applies to 'upsert'
[04:29:08] <Boomtime> if you are worried about this, and cannot use a unique index, then do not use upsert
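A shell sketch of the race Boomtime describes, using the same document shape as the docs example (collection name "foo" follows arussel's later example):

    // two clients run this concurrently against a collection with
    // no unique index on "name" and no matching document yet:
    db.foo.update({name: "Andy"}, {$set: {score: 1}}, {upsert: true})
    // client A: query matches nothing -> inserts {name: "Andy", score: 1}
    // client B: its query also ran before A's insert landed -> inserts a duplicate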
[04:31:16] <arussel> Boomtime: I'm not 'worried about it', but we discovered an awful lot of duplicates in our db because of it (most likely)
[04:32:08] <arussel> but this should not happen if the update part is made exclusively of modifier, shouldn't it ?
[04:33:25] <arussel> Boomtime: and also, when this happens, duplicate documents should have very close _id timestamps, no?
[04:47:05] <Boomtime> arussel: "but this should not happen if the update part is made exclusively of modifier, shouldn't it ?" <- wat? i don't know what you mean
[04:47:13] <Boomtime> "duplicate documents should have very close _id timestamp, no " <- no
[04:47:39] <Boomtime> the generation of _id is client-side, they might be very close, or they might be unrelated, i don't know
[04:48:07] <Boomtime> indeed _id is the best way of preventing this, update by _id and you can't go wrong
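A one-line illustration of Boomtime's point (the variable someId is a placeholder): since _id always carries a unique index, a racing second upsert fails with a duplicate-key error instead of creating a second document.

    db.foo.update({_id: someId}, {$set: {rating: 1, score: 1}}, {upsert: true})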
[04:50:34] <arussel> Boomtime: db.foo.update({name: "Andy"}, {$set:{rating: 1, score:1}}, {upsert:true}) => can this create duplicate ?
[04:51:53] <arussel> what I meant for document creation is that this happens only when 2 upsert operations happen at 'about the same time', so the creation timestamps of the documents should be very close.
[05:03:36] <Boomtime> arussel: does the "name" field have a unique index?
[05:04:16] <Boomtime> "2 upsert operation happens at 'about the same time' so if I the creation timestamp of the document should be very close" <- yes
[05:06:28] <arussel> Boomtime: thanks, that was very helpful
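For completeness, the unique-index safeguard Boomtime is asking about; with this in place, the losing client in the race gets a duplicate-key error rather than a second document:

    db.foo.ensureIndex({name: 1}, {unique: true})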
[08:39:44] <procton> Apart from using $maxTimeMS, is there anything else that can cause an "operation exceeded time limit" error? (MongoDB 3.0.6)
[08:44:41] <procton> I have a capped collection which after a few days gets the above "exceeded time limit" error, but I do not understand why.
[09:15:39] <Hypfer> hi everyone, can I turn off replication without losing the database?
[09:17:15] <kali> Hypfer: if you restart one node without the replica setting, it will behave as a standalone node and have the data
[09:20:03] <kali> Hypfer: mmm... short answer is... discard the setting, restart mongod, but if you're not comfortable with doing that on your own, I wonder if i'm not just enabling you to do something stupid
[09:21:37] <Hypfer> kali: what I want to do is migrate to another datacenter with 0 downtime
[09:21:49] <Hypfer> for that I want mongo to replicate over there and then disable the original node
[09:22:41] <kali> in that case, you should just add the new datacenter nodes to your replica set, then remove the old ones, i think
[09:22:51] <kali> and you will not have it with zero downtime.
[09:23:02] <kali> a replica set reconfiguration always takes a few seconds
[09:30:21] <Hypfer> kali: the current machine isn't part of a replica set atm
[09:31:44] <ooskapenaar> Hypfer: you'll need at least one restart to switch on replSet
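A rough shell sketch of the migration kali and ooskapenaar describe (hostnames are placeholders), after restarting the existing mongod with a replSet name:

    rs.initiate()                   // turn the standalone node into a one-member set
    rs.add("newdc-host1:27017")     // add the new-datacenter node(s)
    // wait for initial sync to finish (watch rs.status()), then:
    rs.stepDown()                   // on the old node, if it is still primary
    rs.remove("olddc-host1:27017")  // on the new primary, retire the old node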
[09:32:23] <ooskapenaar> kali&Hypfer: Can one minimise the downtime, by manually nominating the master and making sure there are enough nodes in the network (also arbiters)? (just an idea I've never done that)
[09:53:20] <ooskapenaar> I have about 20 or 30 servers running mongoDb and one of them recently started throwing Segmentation Faults regularly. https://gist.github.com/ooskapenaar/03a702e9384f84176349
[09:53:36] <ooskapenaar> This is using mongoDb 3.0.7
[09:54:19] <ooskapenaar> Via mongodb JS drivers v2.0.47
[09:55:44] <ooskapenaar> Anyone using mongoDb 3.0+ who has had a similar experience? I checked out the mongodb JIRA and there don't seem to be any relevant tickets when searching for status=open and segmentation
[09:58:24] <Hypfer> hm, okay. thanks. Will do some testing now
[10:22:53] <chovy> anyone know why my config files are diff on mac?
[10:23:07] <chovy> the format is a nested structure on the mac
[10:40:15] <Kosch> hiho. Since I could not find any SRPM: can one of you tell me which options are used to compile the official rpm packages for rhel 6?
[10:40:57] <chovy> in the old configs there was noauth flag i don't see that in new ones
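This is the format change chovy is seeing: newer packages ship a YAML config, which is why it looks nested, and the old auth/noauth flag maps to security.authorization there. A sketch (paths illustrative):

    # old ini-style mongod.conf:
    #   auth = true
    #   dbpath = /var/lib/mongodb
    # YAML equivalent (MongoDB 2.6+):
    security:
      authorization: enabled
    storage:
      dbPath: /var/lib/mongodb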
[10:53:07] <arussel> I'm still seeing issues with upsert. Given an existing document {a: 1, b: 2, c: 3}, sometimes update({a: 1, b: 2}, {$set: {c: 4}}, {upsert: true}) will create a new document
[10:54:22] <arussel> I do a lot of concurrent upserts when this happens, but the original document was created long before, so this isn't a case of an update/create race condition
[10:54:33] <arussel> this is a case of update not finding an existing document
[12:01:41] <zerobaud> I really don't get the NoSQL concept, okay... you have tables with columns and rows... yet no FKs or PKs. Is this supposed to speed up searches? what other benefits are there?
[12:01:51] <zerobaud> Why wouldn't I just use normal SQL?
[12:42:42] <StephenLynx> you have to drop the index and recreate it.
[12:42:52] <StephenLynx> or you are talking about the document?
[12:43:00] <StephenLynx> and updating values that are part of an index?
[12:44:59] <repxxl> StephenLynx yes, I'm updating username and username_lowercase fields (the lowercase version is indexed); somehow I cannot update them and I don't know why
[16:04:23] <Sagar> greetings, before, to enable user login we had to use auth=true in the config
[16:04:37] <Sagar> now we have 3.0.7, how do we enable it in the configuration?
[16:22:13] <billifischmaul> please help me! I need to test if a user has the given id, the specified projectid and a role with a given name: http://pastebin.com/g5izkvRV my code gives me an error: {roles.role.name: roleName} Unexpected token . Thank you!
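Not answered in-channel, but the error points at the unquoted dotted key: in JavaScript, a field path containing dots must be quoted to be a valid object literal. A sketch against his predicate:

    // {roles.role.name: roleName}   -> syntax error at the "."
    // quoting the path fixes it:
    {"roles.role.name": roleName}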
[16:59:49] <pchoo> I'm playing around in the mongo shell, can I have an array of ISODates, and iterate over them, calling an aggregate pipeline which uses the date?
[17:00:18] <deathanchor> pchoo: it's basically programming in javascript
[17:00:36] <cheeser> aggregation pipelines operate on collections...
[17:03:13] <pchoo> deathanchor/cheeser: I've got a collection which has a date for several different events: created, resolved, and closed. there will always be a created date, then the record may be marked as resolved and then closed, or it may be closed without being resolved (i.e. closed has a date, resolved is still null)
[17:05:04] <pchoo> I'm hoping to easily be able to get out of the collection a statistic for how many documents were open (i.e. no closed date) at the end of each month, and the same for unresolved tickets (i.e. open before the end of the month, not resolved ever, or resolved after etc.)
[17:05:25] <pchoo> and I'm struggling to be able to do this in one aggregation pipeline (or one for each balance)
[17:05:40] <pchoo> I know that the data structure is not ideal, but I can't do anything about that right now.
[17:05:59] <deathanchor> pchoo: you might want to look at: date : { $gte : ISODate("2015-11-01") }
[17:06:40] <pchoo> deathanchor: yes, I'm aware of all of that already, i'm struggling with the aggregation pipeline for it
[17:08:33] <pchoo> Unfortunately the ops team the reports were designed for decided to convolute and change their process, and that means the way we designed the data initially does not lend itself to easily getting what they now want :(
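A sketch of the shell loop pchoo is after, combining deathanchor's date-comparison hint with one aggregate call per month-end (collection name "tickets" and the dates are illustrative; field names follow pchoo's description):

    var monthEnds = [ISODate("2015-09-30"), ISODate("2015-10-31"), ISODate("2015-11-30")];
    monthEnds.forEach(function (d) {
        // open at month-end: created by then, and not yet closed at that point
        var res = db.tickets.aggregate([
            {$match: {created: {$lte: d},
                      $or: [{closed: null}, {closed: {$gt: d}}]}},
            {$group: {_id: null, open: {$sum: 1}}}
        ]).toArray();
        print(d.toISOString() + ": " + (res.length ? res[0].open : 0) + " open");
    });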
[17:52:31] <pamp> is there a command like "touch" for the WiredTiger storage engine?
[18:44:55] <serversides> Ok noob question here. I have set up a new installation of Mongo according to https://www.digitalocean.com/community/tutorials/how-to-install-mongodb-on-ubuntu-14-04 , but how do I set up the admin account?
[19:17:23] <serversides> How do I create a new db user in mongodb?
[19:17:35] <serversides> I keep getting TypeError: Property 'addUser' of object 0 is not a function
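Not answered in-channel, but that error is expected on 3.0: db.addUser() was removed in MongoDB 3.0 in favor of db.createUser(). A sketch of creating an admin user (username, password, and role choice are illustrative):

    use admin
    db.createUser({
      user: "admin",
      pwd: "choose-a-strong-password",
      roles: [{role: "userAdminAnyDatabase", db: "admin"}]
    })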
[20:40:13] <serversides> Hello again, where can I find the mongodb config file? Instructions I have say I need to go to /etc/mongo.conf and uncomment "auth=true". But no dice