PMXBOT Log file Viewer

#mongodb logs for Wednesday the 22nd of April, 2015

[00:39:41] <LurkAshFlake> when i run db.naissance.drop() in my ubuntu terminal it returns false
[00:42:15] <LurkAshFlake> i can't drop any of my collections
[01:25:02] <joannac> LurkAshFlake: pastebin exact command and output
[08:15:19] <KekSi> quick question since i'm not entirely sure about this right now: is it possible to add more shards to an already sharded collection? it looks to me like that wouldn't be a problem and the only problematic thing would be to change the shard key later
[08:28:14] <Boomtime> KekSi: you can add shards any time you like; note that shards are added to the cluster as a whole, sharded collections merely make use of shards as necessary
[08:29:03] <Boomtime> also, changing the shard key is a difficult problem any time, regardless of how many shards you have - you effectively must dump and re-import your entire collection to change the shard key
[08:39:44] <KekSi> thanks boomtime, that's what i thought
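
(For reference, adding a shard is done from a mongos; a minimal sketch, with a hypothetical replica-set name and hostname:)

    // run against a mongos
    sh.addShard("rs2/shard2.example.com:27017")
    sh.status()  // the new shard should appear, and the balancer will start moving chunks
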
[10:51:22] <djlee> Hi all, if i have a list of object id's represented as strings, is there a way of doing an IN query against _id without converting them all to ObjectIds, like: find({_id:{"$in":[ "stringid", "stringid2" ]}})
[10:51:30] <Derick> no
[10:51:34] <djlee> darn
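
(The usual workaround is to convert the strings before querying; a minimal mongo-shell sketch, with a hypothetical collection name and ids:)

    var ids = ["5537a5bbe4b0c3a2d1f00001", "5537a5bbe4b0c3a2d1f00002"];
    db.things.find({ _id: { $in: ids.map(function (s) { return ObjectId(s); }) } })
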
[11:07:47] <mah0-deh0> hello all
[11:10:54] <mah0-deh0> I have a visits collection that I would like to "group", i.e. a user can only have a single visit on a given day, so I need to get all their visits by using the min in_timestamp and max out_timestamp and summing each visit duration (which may or may not be available). What is the best way to achieve this?
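
(A hedged aggregation sketch for that kind of grouping, assuming a visits collection with user, in_timestamp, out_timestamp and duration fields; all field names are guesses from the question, and $dateToString needs MongoDB 3.0+:)

    db.visits.aggregate([
      { $group: {
          _id: { user: "$user",
                 day:  { $dateToString: { format: "%Y-%m-%d", date: "$in_timestamp" } } },
          first_in:       { $min: "$in_timestamp" },
          last_out:       { $max: "$out_timestamp" },
          total_duration: { $sum: "$duration" }   // missing durations count as 0
      } }
    ])
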
[12:27:21] <allaire> Hi. We're using the MongoDB Amazon AMI image for our app. we just upgraded from 2.4 to 2.6.9. sudo service mongodb start is not working but sudo mongod --config /etc/mongod.conf is working, any ideas?
[12:27:50] <allaire> Only getting Starting mongod: [FAILED], nothing is logged.
[12:30:34] <gabrielsch> hey, is there any way to group datetime by day in mongodb queries?
[12:32:35] <gabrielsch> for example, I want to find how many products were created for each user in the last 30 days...
[12:32:39] <gabrielsch> I have the following schema: https://gist.github.com/gabrielsch/1d8a0d8125a21378f35c
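
(Something along these lines might work, assuming the products carry createdBy and createdAt fields; the names are guesses, since the gist isn't reproduced here:)

    var since = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
    db.products.aggregate([
      { $match: { createdAt: { $gte: since } } },
      { $group: {
          _id: { user: "$createdBy",
                 day:  { $dateToString: { format: "%Y-%m-%d", date: "$createdAt" } } },
          count: { $sum: 1 }
      } }
    ])
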
[12:36:23] <allaire> detailed question here: https://groups.google.com/forum/#!topic/mongodb-user/XpuKeygrrWU
[13:27:48] <teejay23> hello, how can I see live changes in Mongo using the CLI?
[13:28:59] <teejay23> what's the command for that?
[13:29:22] <cheeser> live changes?
[13:30:26] <teejay23> yeah like redis has >$ redis-cli monitor
[13:31:11] <Derick> i guess you could write a little script to tail the oplog... but that's not what it's really meant for
[13:32:11] <teejay23> actually when i run tests, the usual CRUD, there's no way to actually see that data was in mongo and got deleted...
[13:32:32] <cheeser> query after a write/update, and after a delete...
[13:32:37] <cheeser> like testing any database...
[13:33:15] <teejay23> hmm.. makes sense.
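
(A rough mongo-shell sketch of the oplog-tailing idea Derick mentions; it requires a replica set, since the oplog lives in the local database, and the oplog format is internal, so treat it as a debugging aid only:)

    var oplog = db.getSiblingDB("local").getCollection("oplog.rs");
    var last  = oplog.find().sort({ $natural: -1 }).limit(1).next().ts;
    var cur   = oplog.find({ ts: { $gt: last } })
                     .addOption(DBQuery.Option.tailable)
                     .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) { printjson(cur.next()); }  // loop exits if the cursor goes quiet
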
[13:43:44] <amitprakash> Hi, how can I sync my collection data from mongo to EMR? I was looking at the mongodb hadoop connector, but the documentation doesn't seem to indicate anything about how information is continuously updated from mongo to HDFS?
[13:44:13] <cheeser> because mongo-hadoop doesn't provide a sync facility
[13:44:22] <cheeser> you can build one with it, though.
[13:45:45] <StephenLynx> EMR being
[13:46:02] <StephenLynx> electronic health record?
[13:47:26] <cheeser> elastic map reduce
[14:01:02] <amitprakash> cheeser, aight, thanks
[14:07:45] <GothAlice> Oh, HL7 encoding, whoever invented you needs to be taken out back and shot. (Speaking of EMR in the "electronic medical records" sense.)
[14:20:56] <saml> hey do you use mongo connector?
[14:21:08] <saml> so many bugs
[14:21:15] <saml> is it better to tail oplog myself?
[14:21:32] <cheeser> mongo connector is used in a lot of places...
[14:21:46] <saml> it stopped working since upgrading to wiredtiger storage engine
[14:22:32] <cheeser> oh, interesting...
[14:26:40] <GothAlice> Connector uses a lot of deep tricks to do what it does, including tailing the oplog. If there are _any_ differences in the status information (there are) or operations themselves (there shouldn't be, but could…) then Mongo Connector will have to be updated to recognize it.
[14:28:39] <cheeser> only two issues against wired tiger: https://github.com/10gen-labs/mongo-connector/issues?utf8=%E2%9C%93&q=is%3Aissue+wired+tiger
[14:32:41] <saml> if i can reproduce at smaller testcase, will create ticket
[14:35:30] <pamp> Anyone here with Csharp driver experience?
[14:49:41] <dberry> is there a good study guide out there for the mongodb DBA exam(C100DBA)?
[14:51:26] <saml> wasn't mongodb created so that you don't need a dba?
[14:52:27] <StephenLynx> who gives out this certificate? 10gen?
[14:52:58] <cheeser> saml: no
[14:54:48] <dberry> MongoDB puts out the certificate
[15:03:57] <saml> http://docs.mongodb.org/manual/release-notes/3.0-upgrade/#change-storage-engine-to-wiredtiger
[15:04:03] <saml> what to do with replication?
[15:04:38] <saml> mongodump, take a node out of replication, change storage engine, restart the node (not joined in replicaset), mongorestore, let it join. repeat?
[15:04:55] <saml> or do I not have to worry about taking it out of replication?
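
(For a replica set, the documented approach is a rolling per-member change rather than mongodump/mongorestore: each member resyncs its data from the set after being restarted on the new engine. A rough sketch; the paths and set name are hypothetical:)

    # on one secondary at a time:
    mongo admin --eval 'db.shutdownServer()'            # stop the member cleanly
    mv /data/db /data/db.mmapv1 && mkdir /data/db       # set aside the old MMAPv1 files
    mongod --replSet rs0 --dbpath /data/db --storageEngine wiredTiger
    # wait for it to initial-sync back to SECONDARY (rs.status()),
    # then repeat; rs.stepDown() the primary and convert it last.
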
[15:04:58] <cheeser> if you use mms, we can change the storage engine for you
[15:05:13] <saml> we don't use mms. is it free?
[15:05:36] <saml> oh no, we have our servers. can't go cloud
[15:05:58] <cheeser> https://mms.mongodb.com/
[15:06:02] <StephenLynx> didn't know cheeser was a mongodb employee.
[15:06:05] <cheeser> you can use local machines
[15:06:20] <cheeser> StephenLynx: java driver, morphia, mongo-hadoop, and mms. :)
[15:06:45] <StephenLynx> and saml, from what I heard, it's not advisable to use wired tiger in production yet.
[15:06:57] <saml> but we need to open up some port.. etc so that mms can connect to our servers?
[15:07:04] <saml> really?
[15:07:09] <saml> StephenLynx, got a link?
[15:07:20] <StephenLynx> GothAlice that says it.
[15:07:27] <StephenLynx> and afaik it still isn't the default engine.
[15:07:31] <saml> yah she has 20000TB of data
[15:07:41] <saml> i see
[15:07:43] <StephenLynx> and I heard a couple people having issues with wt.
[15:07:52] <GothAlice> Notably memory management issues.
[15:08:30] <StephenLynx> so my guess is you will want to hold off a while before switching your servers to wt.
[15:08:43] <cheeser> saml: the agents open the connection outward so you shouldn't need any firewall tweaking
[15:08:58] <cheeser> StephenLynx: it should be the default in 3.2
[15:09:08] <GothAlice> saml: https://jira.mongodb.org/browse/SERVER-17421 https://jira.mongodb.org/browse/SERVER-17424 https://jira.mongodb.org/browse/SERVER-17386 https://jira.mongodb.org/browse/SERVER-17456 https://jira.mongodb.org/browse/SERVER-17542 ancillary: https://jira.mongodb.org/browse/SERVER-16311
[15:09:09] <saml> connection's via ssh ?
[15:09:24] <StephenLynx> so it might still take a few months?
[15:09:34] <cheeser> well, ssl at any rate
[15:09:54] <cheeser> StephenLynx: yeah. not sure there's a firm/public date for 3.2
[15:10:08] <StephenLynx> just guessing from how often I see releases.
[15:10:22] <StephenLynx> 3.0 came out a month or so ago, and it's already on 3.0.2.
[15:10:45] <cheeser> *fingers crossed*
[15:10:50] <saml> cheeser, so mongodb recommends wiredtiger on production?
[15:11:31] <saml> what's wiredtiger for even :P
[15:11:36] <cheeser> i don't recall what our official recommendation is. i know mms uses it in production.
[15:11:51] <cheeser> better disk usage, better concurrency, etc.
[15:12:02] <cheeser> document level locking is available via WT
[15:12:11] <saml> wiredtiger is default storage engine for mms? or can convert to wiredtiger on demand?
[15:14:41] <saml> i prefer wifilions
[15:17:03] <cheeser> mms uses WT on some of its nodes. when you provision via mms, it uses the default engine, which is mmapv1
[15:17:15] <cheeser> but you can tell it to configure your nodes with WT if you want.
[15:18:02] <GothAlice> Indeed; I tested out my dataset that way. It was fun to watch the elected primary symbol rotate around the nodes during my testing. :3
[15:19:13] <GothAlice> The primary literally couldn't stay up for longer than 30 seconds under my 1K ops/sec benchmark. :(
[15:31:46] <saml> 1k ops/sec is web scale depending on the size of each op
[15:40:56] <GothAlice> saml: It was ~700MB of data, initially loading the first 500MB then querying that data to generate and upsert the rest (combined reads, getMores, and updates).
[15:41:40] <GothAlice> A typical pre-aggregation workload; the "benchmark" is actually my at-work project's data bootstrap and migration run, part of which re-calculates aggregate click stats.
[15:42:33] <saml> so complicated. i just want to start mongod, no configuration and everything works well up to web scale
[15:42:53] <saml> i guess mongod conf yaml is still shorter than postgres conf
[15:43:28] <GothAlice> saml: Slap the MMS agent on a few machines, and yeah, it basically is that simple. (I'm just using somewhat under-sized VMs with tight memory constraints, thus running into multiple issues with WiredTiger.)
[15:45:40] <GothAlice> ^_^ I'm guessing if I threw 64GiB of RAM at my VMs, my test suite wouldn't make them crash. 2GiB, though, and yeah, there be problems.
[15:47:01] <saml> oh i see. you should download more ram
[16:32:06] <Croves> I'm learning MongoDB and I have a question. My app has two collections: products and tickets. One ticket can store any number of products. Imagine a restaurant that assigns many orders to a specific table
[16:32:48] <Croves> I need to relate these two tables. As a mysql user, I'm about to create a products_in_ticket collection and relate them in this new object.
[16:33:04] <Croves> Is there a better approach? If so, what is it? Are there foreign keys in MongoDB?
[16:34:21] <StephenLynx> no, there aren't foreign keys in mongo, and you never create collections for n*n relations.
[16:34:40] <StephenLynx> you should have an array in a field in the ticket
[16:34:48] <StephenLynx> this array would hold the products it contains.
[16:35:20] <Croves> But what if I need to change some information in products?
[16:35:30] <Croves> I have to update all tickets with that product?
[16:35:34] <StephenLynx> so you just keep track of its unique id.
[16:36:02] <StephenLynx> I would use a readable name as its unique id.
[16:36:15] <StephenLynx> so you don't need to duplicate the name and keep track of it.
[16:37:44] <Croves> a name as unique id?
[16:40:30] <StephenLynx> yeah
[16:41:19] <StephenLynx> if you want to have relations in mongo, you are already off road.
[16:41:42] <StephenLynx> don't expect to end the ride without mud in your windshield :v
[16:42:44] <Croves> StephenLynx: So, my collection should look like something like this? http://pastebin.com/507mKFWQ
[16:43:21] <StephenLynx> yeah.
[16:43:39] <StephenLynx> duplicate the price, because it might change and you won't want to change it where it has already been recorded anyway.
[16:43:59] <StephenLynx> you might want to keep track of the date the item was added too.
[16:44:20] <StephenLynx> so you can keep track of how much something cost when it was added to the ticket, too.
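
(To make that concrete, a ticket document along the lines being discussed might look like the sketch below; the field names are illustrative, not Croves' actual pastebin, with the product's readable name used as its id and the price and addedAt duplicated per line item:)

    {
      _id: ObjectId("5537b0d2e4b0c3a2d1f00042"),
      table: 12,
      openedAt: ISODate("2015-04-22T16:30:00Z"),
      products: [
        { product: "espresso",   price: 2.50, addedAt: ISODate("2015-04-22T16:31:00Z") },
        { product: "cheesecake", price: 4.00, addedAt: ISODate("2015-04-22T16:40:00Z") }
      ]
    }
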
[16:45:04] <Croves> So, anyway.. I still need that products collection?
[16:45:19] <StephenLynx> yes.
[16:45:53] <StephenLynx> otherwise you wouldn't be able to check the final cost of a ticket
[16:46:07] <StephenLynx> and it would be harder to list all available products.
[16:46:19] <StephenLynx> but you could do without it.
[16:46:23] <StephenLynx> it wouldn't be impossible.
[16:46:56] <Croves> I don't understand :/
[16:47:09] <StephenLynx> because you see
[16:47:21] <StephenLynx> for you to list all available products without a products collection
[16:47:40] <StephenLynx> you would have to get all tickets, which are far more numerous than the available products, unwind them, and group
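
(The collection-less approach StephenLynx describes would be something like this aggregation sketch, with tickets shaped as in the example above; this is what a separate products collection lets you avoid:)

    db.tickets.aggregate([
      { $unwind: "$products" },
      { $group: { _id: "$products.product" } }   // one result per distinct product ever sold
    ])
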
[16:48:05] <mah0-deh0> Croves, I'm not the best at mongo, but one of your first steps should be to figure out whether you really need MongoDB or a SQL database, as your data is relational
[16:48:44] <StephenLynx> well, you can do with a few relations in mongo. but if you need a join, then you really can't use mongo.
[16:48:46] <Croves> StephenLynx: This part I get it...
[16:49:09] <StephenLynx> if your number of queries increases with the number of results, you can't use mongo.
[16:49:34] <Croves> Would you use Mongo for a restaurant software?
[16:50:06] <StephenLynx> no
[16:50:09] <StephenLynx> I wouldn't.
[16:50:16] <mah0-deh0> me neither :P
[16:50:27] <mah0-deh0> because I think that's relational data
[16:50:29] <StephenLynx> first: in that case performance is not an issue.
[16:50:36] <StephenLynx> second: you have a lot of relational data.
[16:50:59] <StephenLynx> third: the requirements are likely to change often.
[16:51:41] <StephenLynx> so no, I would not use mongo for any physical business at all.
[16:52:08] <StephenLynx> unless you are integrating a huge amount of end-points.
[16:52:16] <StephenLynx> like, global scale.
[16:52:20] <Croves> So what would you use mongo for?
[16:52:33] <StephenLynx> for online services.
[16:52:48] <StephenLynx> for stuff that requires a lot of reading and writing and expects to have many users.
[16:52:54] <StephenLynx> and it's specialized.
[16:53:17] <Croves> Without relational data?
[16:53:28] <StephenLynx> not without, but few relational cases.
[16:53:47] <StephenLynx> as I said, you can go with fake relations, if you understand how that limits you.
[16:55:16] <Croves> So, I guess I'll have to use MySQL then
[16:55:22] <StephenLynx> if you can use fake relations without performing additional queries to the db, you don't really lose much performance, or anything at all. and if you would otherwise need to unwind a large set of data to perform a sort, you would even gain, IMO.
[16:55:25] <mah0-deh0> Croves: you should google "mongodb vs mysql" to figure out what do you need, there are A LOT of articles covering this topic already
[16:55:31] <StephenLynx> I suggest mariadb.
[16:55:37] <StephenLynx> it's a mysql fork, but better.
[16:56:29] <StephenLynx> mysql went downhill since oracle took it over, from what I heard.
[16:56:40] <StephenLynx> mariadb is from one of its creators.
[16:58:02] <Croves> I heard about that too, StephenLynx
[16:59:18] <Croves> My biggest fear is about performance. I'll run this app in a Raspberry Pi
[16:59:56] <StephenLynx> in that case
[17:00:01] <StephenLynx> I wouldn't run a database.
[17:00:29] <StephenLynx> I am not sure
[17:00:32] <StephenLynx> but
[17:00:52] <StephenLynx> I don't think stuff like this is designed with that kind of hardware in mind.
[17:01:30] <StephenLynx> an option for that kind of hardware would be sqlite.
[17:01:37] <cheeser> mongo isn't officially supported on that hardware at any rate.
[17:01:37] <Croves> StephenLynx: I have software that manages an entire restaurant. Now I'm building a new version of it, so the waiters will use iPads to take customers' orders and so on...
[17:01:50] <cheeser> there are ARM builds out there but they're community driven so caveat emptor
[17:02:06] <Croves> But for many reasons, I can't keep this software on the web. I really need a physical server into my customer's restaurant
[17:02:28] <StephenLynx> I understand.
[17:02:59] <Croves> So, I was reading about the RPi and ARM and I don't think I'll be able to run it on one... Probably I'll have to switch to a micro-ITX PC
[17:03:10] <StephenLynx> or use something like sqlite.
[17:03:23] <StephenLynx> you won't have that much load anyway.
[17:03:29] <Croves> Yeah, I thought about SQLite before mysql or postgres
[17:03:48] <StephenLynx> a full database server is kind of overkill for a system that runs only locally and is not heavy on data load.
[17:04:07] <StephenLynx> and it would be cheaper for the waiters to use some other kind of tablet.
[17:04:19] <StephenLynx> and easier to work with too.
[17:04:30] <StephenLynx> have you ever developed on iOS and android?
[17:04:32] <GothAlice> I've used SQLite on small apps before, for example, an online massage booking system.
[17:04:46] <GothAlice> The write load is so negligible that it worked out A-OK.
[17:04:48] <Croves> Actually, I just have a simple RESTful server built with NodeJS. Everything else is processed by the browser with Angular
[17:05:17] <Croves> StephenLynx: I'm using IONIC to build HTML5 apps
[17:05:31] <StephenLynx> btw, if you use node, I suggest migrating to io.js. it's a fork, but much, much better.
[17:06:10] <Croves> StephenLynx: What's io.js?
[17:06:27] <StephenLynx> a fork of node. all the contributors that are not on joyent's payroll migrated to it.
[17:06:42] <StephenLynx> it is faster and more stable than node.
[17:06:49] <Croves> Oh, really?
[17:06:51] <StephenLynx> aye.
[17:06:59] <StephenLynx> have been using it for a while now.
[17:07:34] <StephenLynx> just released a project that uses it, https://play.google.com/store/apps/details?id=com.cheshire.lynxhub
[17:09:09] <Croves> Nice!
[18:16:00] <juliofreitas> Hi! How can I see how long my search takes?
[18:18:27] <saml> juliofreitas, from mongo shell? or node.js?
[18:18:48] <saml> mongod logs long running queries
[18:18:52] <juliofreitas> sorry! For the moment, from the mongo shell
[18:19:07] <juliofreitas> but answers through node are welcome too!
[18:20:48] <saml> db.docs.find(query).explain("executionStats")
[18:23:17] <juliofreitas> saml, fine thank you :D
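
(The interesting numbers are under executionStats in that explain output; a small sketch, using the same db.docs and query as saml's line above:)

    var stats = db.docs.find(query).explain("executionStats").executionStats;
    print(stats.executionTimeMillis + " ms, " +
          stats.totalDocsExamined + " docs examined, " +
          stats.nReturned + " returned");
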
[18:25:25] <juliofreitas> I've another problem. I'm using: mongoimport -d testParser -c myCollection --type tsv --file myfile.tsv --headerline and in my collection there is a key/value pair like this: "": "". How can I remove it?
[18:26:15] <GothAlice> db.docs.update({"": {"$exists": 1}}, {"$unset": {"": ""}}) — might work, juliofreitas.
[18:26:40] <StephenLynx> does he need the first block?
[18:26:51] <StephenLynx> wouldn't a {} work?
[18:26:54] <GothAlice> The query portion? It's an optimization to avoid needless updates against records not affected.
[18:27:03] <StephenLynx> indeed.
[18:27:11] <GothAlice> No point unsetting something that isn't actually set, eh?
[18:27:46] <juliofreitas> GothAlice, isn't there a way to not create this key/value in the first place?
[18:28:00] <GothAlice> juliofreitas: Only by fixing your source data. ;)
[18:28:33] <GothAlice> mongoimport is only doing what you're telling it to do, so it really thinks you have an extra field in there named "".
[18:28:41] <juliofreitas> GothAlice, :( I'm importing from third party :/
[18:29:01] <GothAlice> If you need to do processing, you're going to have to write an import tool yourself, alas.
[18:30:45] <juliofreitas> Ok, I found a module called csv-parser. Maybe I'll use.
[18:31:48] <juliofreitas> GothAlice, db.collection.update({"": {"$exists": 1}}, {"$unset": {"": ""}}) works at mongodb 3.x?
[18:32:02] <StephenLynx> yes.
[18:32:27] <GothAlice> Nope.
[18:32:31] <juliofreitas> write error, code 56 "errmsg" : "An empty update path is not valid."
[18:32:36] <GothAlice> Indeed.
[18:32:40] <GothAlice> That data is somewhat boned.
[18:33:41] <GothAlice> StephenLynx: https://gist.github.com/amcgregor/22b70fae30cdfce3e4ed for your benefit, it's not a "showterm". ;)
[18:34:02] <GothAlice> Seems a bit strange to let one insert an empty field name, but then never be able to manipulate it. XD
[18:34:09] <StephenLynx> heh
[18:34:18] <StephenLynx> look at that
[18:34:43] <StephenLynx> I really wouldn't expect that.
[18:34:55] <GothAlice> I'll dig around and see if there's an existing JIRA ticket for it after work. :)
[18:36:49] <StephenLynx> so he will have to manually import/export it if he really doesn't want the empty field.
[18:46:26] <GothAlice> Indeed, StephenLynx.
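
(Since the server rejects an empty update path, one possible workaround for a one-off cleanup is to rewrite the affected documents from the shell, dropping the key client-side; a sketch only, not tested against that exact data, using the collection name from the mongoimport command above:)

    db.myCollection.find({ "": { $exists: true } }).forEach(function (doc) {
      delete doc[""];               // remove the empty-named key in the shell
      db.myCollection.save(doc);    // replace the stored document wholesale
    });
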
[19:40:23] <StephenLynx> hey, if a json object has a key with ":", how do I read it? object['lala:lolol']?
[19:59:49] <GothAlice> StephenLynx: Yup, that'd be the approach.
[21:45:15] <miceiken> Hello guys. Do I just store password hashes in a StringField?
[21:57:04] <GothAlice> miceiken: Unless you are aware of the crypto security issues involved, I recommend storing scrypt hashes rather than rolling your own (i.e. "just sha it").
[21:57:09] <GothAlice> miceiken: Which language are you using?
[21:58:07] <GothAlice> ccccccbhkhknlivieuivldrtenvfjkdetebetjetrbrn
[21:58:48] <GothAlice> Heh; sorry 'bout that. Sometimes picking up my laptop triggers my OTP token. XD Time to generate a new one somewhere else…
[21:59:51] <cheeser> bcrypt++
[22:00:10] <GothAlice> Hmm. Blowfish.
[22:01:14] <GothAlice> Still; scrypt handles things like salting for you, it's also designed purposefully to be slow. (And memory hard in addition to being CPU hard, to prevent parallelization attacks. Literally the result of every previous loop iteration, internally, is needed to produce the next iteration.)
[22:02:47] <GothAlice> This differs from bcrypt in that bcrypt uses symmetric encryption to make itself slow, and doesn't have that type of feedback setup; increasing the number of needed iterations increases CPU difficulty, but not memory.
[22:02:50] <GothAlice> cheeser: ^
[22:03:16] <miceiken> GothAlice, sorry, Python, yeah I haven't decided how to hash it yet
[22:03:27] <GothAlice> Ooh, Python. :3
[22:03:40] <GothAlice> https://warehouse.python.org/project/scrypt/
[22:04:41] <cheeser> GothAlice: interesting. i'll see if there's an implementation for the jvm. surely there is.
[22:04:50] <GothAlice> Pretty sure there would be, yeah.
[22:04:53] <miceiken> thanks ill check it out GothAlice
[22:05:15] <cheeser> boom. https://github.com/wg/scrypt
[22:05:16] <GothAlice> miceiken: https://github.com/bravecollective/core/blob/develop/brave/core/util/field.py#L14-L48 < I use the MongoEngine ODM on top of pymongo; if you do, too, you can borrow my PasswordField implementation. (MIT license.)
[22:05:26] <miceiken> hahahahhaa oh my god
[22:05:30] <miceiken> I was just gonna ask
[22:05:36] <miceiken> Are you the alice from brave it?
[22:05:55] <GothAlice> miceiken: I sure am. AKA Draleth. :)
[22:06:24] <GothAlice> Though it's been a year and a bit since I was in that position. :P
[22:06:24] <miceiken> wow, 7o
[22:06:29] <GothAlice> 7o
[22:06:58] <miceiken> yeah, I heard there was something going on. Anyway, I am literally writing a microservices thing now because I got so inspired by the brave collective IT
[22:07:05] <GothAlice> \o/
[22:07:32] <GothAlice> miceiken: For general web dev chat, there's ##webdev, which I also camp in, BTW.
[22:07:40] <cheeser> this? http://www.themittani.com/news/brave-it-infrastructure-shut-down
[22:07:46] <miceiken> I'll hop in, I've got some questions
[22:07:51] <GothAlice> cheeser: Yup; they forgot to pay the server bills.
[22:07:59] <cheeser> wankers
[22:08:03] <GothAlice> The sheer volume of drama caused in EVE Online by people not paying bills is kinda insane.
[22:08:12] <miceiken> it's pretty funny
[22:09:02] <GothAlice> cheeser: The result of that, BTW, was doxxing and some rather tedious conversations with my employers. (They agreed that gamers getting that upset over volunteer services going away are, themselves, pants-on-head.)
[22:09:26] <miceiken> =/ doxxing is just stupid
[22:10:06] <cheeser> reminds me of a t-shirt i wish i'd bought years ago: do not taunt me for I am root and have not the need for subtlety.
[22:10:17] <GothAlice> Heh.
[22:10:48] <cheeser> probably from the BOFH heyday
[22:11:29] <GothAlice> cheeser: I use lines similar to that fairly frequently. http://s.webcore.io/image/3A313C1q1A0y < side effect of camping ##webdev with a female nick.
[22:12:44] <cheeser> that was in PM?
[22:12:56] <GothAlice> Aye.
[22:13:02] <cheeser> what a tool
[22:13:07] <GothAlice> Spontaneous amorous PMs FTL.
[22:14:32] <GothAlice> cheeser: Could be worse. Incidents like https://gist.github.com/amcgregor/1447023efb76b97af637 aren't uncommon, either.
[22:17:54] <GothAlice> Yeah. I'd say it destroyed my faith in humanity, but I had little enough going in. ;)
[22:18:22] <cheeser> https://s-media-cache-ak0.pinimg.com/736x/22/34/fa/2234fa431771307180c6071e21080fa2.jpg
[22:20:21] <Feadurn> Hi everyone
[22:22:47] <Feadurn> Is it possible to query a database to find all the documents that have a specific value for a field, and then do it again with the next different value?
[22:23:08] <Feadurn> I have seen that distinct can output all the distinct values, but the dataset contains millions of rows
[22:23:29] <Feadurn> so how can I do that without knowing all the distinct values in advance and without scanning them several times
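
(One possible approach is a single aggregation pass that groups on the field, so each distinct value comes back with its count without re-scanning the data; the field and collection names here are hypothetical:)

    db.events.aggregate([
      { $group: { _id: "$category", count: { $sum: 1 } } }
    ], { allowDiskUse: true })
    // then fetch any one group's documents as needed:
    db.events.find({ category: "someValue" })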