[00:08:55] <hjando> Hey there. I'm attracted to mongo for its lack of schema for quicker development and the simplicity of storing my data as documents. However, I really don't think my data model has any reason to be otherwise. My data is structured and has a few relations. Is it worth it to shoehorn what should probably be represented in a relational database into mongo?
[02:31:32] <melissamm> Does anyone know how to store collections as heaps? http://stackoverflow.com/questions/32940372/how-to-store-data-in-a-mongodb-collection-as-a-heap
[02:55:24] <melissamm> db.foo.find().sort({the_field: 1}).limit(1) Sorting basically the same thing hundreds of times per minute would be bad, wouldn't it? Or is MongoDB so very efficient that it can handle this?
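If the_field carries an ascending index, that sort is satisfied by walking the index and the limit(1) stops after a single key, so running it hundreds of times per minute is cheap. A minimal sketch, reusing the collection and field names from the question above:

    // One-time: index the sort field.
    db.foo.createIndex({ the_field: 1 })
    // With the index in place, this reads one index entry instead of
    // sorting the whole collection on every call.
    db.foo.find().sort({ the_field: 1 }).limit(1)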
[08:04:53] <pagios> 2015-10-05T11:00:14.517+0300 I JOURNAL [initandlisten] preallocating a journal file /media/removable/SD Card/mongodb/data/db/journal/prealloc.0 <-- when mongodb is being started, initialization hangs for like 5 minutes at this step any idea?
[08:06:15] <pagios> JOURNAL [durability] Durability thread started
[08:06:16] <pagios> 2015-10-05T11:01:22.709+0300 I JOURNAL [journal writer] Journal writer thread started <-- then i get it working
[08:06:16] <kali> pagios: you're probably using a file system that does not allow fast preallocation (some version of FAT probably, as it is a SD card)
[08:06:31] <pagios> kali, yes i am using an sdcard, formatted as ext4
[08:07:31] <kali> mmm ext4 should be fine, actually
[08:08:23] <kali> sdcards are usually slow, but it should not be that much of an issue with ext4
[09:07:27] <BlackPanx> does mongodump create smaller backup than database actually is ?
[09:08:09] <BlackPanx> i have 160GB used space from mongodb and i still have 122GB free space. i'd like to do mongodump on same server. problem is i can't expand the disk space...
[09:10:11] <BlackPanx> in rockmongo i see: Size, Storage Size, Data Size, Index Size, Collections, Objects. is there any field i can "discard" when calculating storage used by backup ?
[09:29:18] <Uatec_> the documentation says that {"$date": xxx } and new Date(xxx) are semantically the same (http://docs.mongodb.org/manual/reference/mongodb-extended-json/)
[09:29:25] <Uatec_> however my queries only work when i use new Date(xxx) but not using $date...
[09:29:31] <Uatec_> how come this happens? does anybody know if i'm missing something?
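A likely explanation: {"$date": ...} is the MongoDB Extended JSON representation, which is understood by tools that parse Extended JSON (mongoimport, drivers' JSON helpers), not by the query language itself. Passed straight into a query it is just a subdocument with a field literally named "$date". A small shell sketch with hypothetical collection and field names:

    // Fails or matches nothing useful: $date is not a query operator,
    // so the server treats this as a literal subdocument.
    db.events.find({ created: { "$date": "2015-10-05T00:00:00Z" } })
    // The shell (and driver query builders) want a real Date value instead:
    db.events.find({ created: { $gte: new Date("2015-10-05T00:00:00Z") } })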
[10:19:08] <amcsi_work> Is it possible to index an array of references?
[10:19:46] <amcsi_work> and do selection based on whether an item has a reference to a specific id?
[10:34:44] <Ross_> Could anybody help me with the following - I have a simple collection with 2 fields - _id and condition (hash)
[10:35:13] <Ross_> Condition is a hash tree with many nested levels and flexible structure
[10:35:32] <Ross_> I need to search the condition field for existence of specific leaves
[10:36:01] <Ross_> for example does condition contain { name: Alex; status: Active }
[10:36:27] <Ross_> { name: Alex; status: Active } may be anywhere so position of it is not known
[10:36:34] <Ross_> therefore I cannot use dotted notation
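There's no built-in way to match a subdocument at an unknown position or depth, so this usually ends up as a data-modelling question. One workaround sometimes used (a sketch with hypothetical names, not a drop-in answer) is to mirror the leaves of the condition tree into a flat array, which turns "does it contain this leaf anywhere" into a plain $elemMatch:

    // Store the tree as-is, plus a flattened copy of its leaves.
    db.items.insert({
        condition: { group: { any: [ { name: "Alex", status: "Active" } ] } },
        conditionLeaves: [ { name: "Alex", status: "Active" } ]
    })
    // Position no longer matters; match any leaf with both fields set this way.
    db.items.find({ conditionLeaves: { $elemMatch: { name: "Alex", status: "Active" } } })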
[11:20:47] <amcsi_work> how do I list items from a collection where prop1 (that contains an array of references) contains a reference of my choice?
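An index on an array field is multikey, so each element gets its own index entry, and a plain equality query matches any document whose array contains that value. A sketch with an assumed collection name and an example id:

    // Multikey index over the array of references.
    db.posts.createIndex({ prop1: 1 })
    // Matches documents where ANY element of prop1 equals this id, using the index.
    db.posts.find({ prop1: ObjectId("507f191e810c19729de860ea") })
    // If the array holds DBRefs rather than bare ObjectIds, query the $id part:
    db.posts.find({ "prop1.$id": ObjectId("507f191e810c19729de860ea") })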
[11:39:10] <Mattias> BlackPanx: You can dump it over ssh to your local machine. Won't even touch the server space
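For what it's worth, a dump is usually smaller than the reported storage size, since it contains the documents plus index definitions but not the index contents or preallocated file space. And as Mattias says, mongodump doesn't have to run on the server itself; assuming the mongod port is reachable from your workstation, something along these lines writes the dump locally (host and db names are placeholders):

    # Run on your local machine; nothing is written on the server.
    mongodump --host db.example.com --port 27017 --db mydb --out ./dump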
[11:58:14] <BlackPanx> does anyone remember since which version of 2.4.* mongodb, oplog syntax changed ?
[11:58:19] <BlackPanx> i know there were differences
[11:59:42] <BlackPanx> it was: 1373658837:993 before, then it was t: 203948203948 i: 234 or was it the other way around
[13:35:15] <BlackPanx> i wish to rename my replSet name.... there can be "downtime". Can i simply just shutdown mongodb instances and change replSet, then start them up again ?
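Simply changing the --replSet parameter usually isn't enough, because the set name is also persisted in each member's local database (local.system.replset). A commonly described approach, with downtime, is to shut the members down, restart them without --replSet, drop the local database (which discards the old config and the oplog), then restart with the new name and re-initiate; secondaries may need to resync, so double-check the procedure for your version first. You can inspect what's currently stored with:

    // Run against a member to see the persisted replica set config;
    // its _id field is the set name.
    use local
    db.system.replset.find().pretty()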
[14:24:39] <maxkelley> hi all, does anyone have suggestions on storing application settings in mongodb? For example, essentially what I want is a collection within my database which is "globalSettings", which would have a single dictionary of key-value pairs... Is that the right way to go about it?
[14:25:23] <cheeser> unless some of those settings are connection credentials. ;)
[14:26:06] <maxkelley> hahaha, no :) I guess my question is, should I be creating key-value pairs so that each pair is a document? I guess how do I enforce that there's only ever one document?
[14:26:29] <cheeser> i do one document, personally
[14:27:43] <saml> is mongodb a good company to work at?
[14:28:46] <saml> it depends. if you do one document, you need utility functions to update a single entry
[14:28:51] <maxkelley> so, as a test, I put some values in the first document, as an example: {'name': 'Max'}, and then I tried to do an update({}, {'Gender': 'Male'}, upsert=True), and I ended up with just {'Gender': 'Male'}... is there a more elegant way than appending the document?
[14:29:05] <saml> if you do multiple docs where _id is configuration key, you can just use mongod shell
[14:29:40] <saml> and you need to think about different environments.. if you do qa, staging, production, ... etc
[14:31:34] <saml> downside of multiple docs is that you need utility functions to get full configuration. or db.globalSettings.find()
[14:32:06] <saml> not sure why you want to put configuration in db though
[14:32:42] <saml> i version control configuration file. and have a static http server serving it with basic auth.. and apps just do http GET at startup or reload
[14:34:19] <maxkelley> Yeah, so this is going to serve as a database for a python application which users run on their own machines, so we need someplace to store common values like directories, server IPs, etc.
[14:34:58] <maxkelley> So it's not a web application at all, so there won't necessarily be staging, production, etc. environments per se? The DB server will be running locally on each user's machine.
[14:35:08] <Uatec_> @cheeser, Kibana stores its config in ElasticSearch...
[14:35:52] <saml> you're distributing python interpreter + your scripts + mongodb to users?
[14:35:53] <maxkelley> I'm migrating from SQLite because the application data is essentially large nested dictionaries which don't lend themselves well to SQLite storage, but mongodb is all about that.
[14:39:07] <maxkelley> saml: because we need to implement a database for the rest of the data, so I figured why not just add a collection to it? that is a valid point, though.
[14:39:34] <saml> oh i see. bundling mongodb just sounded ambitious
[14:40:33] <maxkelley> haha yeah... I got the idea from Ubiquiti's Unifi controller S/W, which is a software controller for wireless access points... it bundles Mongo!
[14:40:36] <saml> i bet there's pure python dictionary database thingy
[14:42:22] <maxkelley> hmmmm... I might look into that.
[14:44:08] <saml> if you want simple key-val, there's always dbm
[14:44:59] <saml> but looks like you want more structure. just include mongodb. make mongodb largest installed database in the world
[14:45:33] <maxkelley> hahaha, thanks, I'll try my best :)
[14:49:15] <symbol> Any of you node.js devs have an opinion on waterline? I like the native node.js driver but things just feel messy. I'd like to get some sort of structure and I know Mongoose is the devil.
[15:00:58] <symbol> That's a good point - I'm mostly looking to create some clean structure. All the examples on the web I've found keep showing examples that open a MongoClient for each collection.
[15:01:06] <StephenLynx> I can understand them in a different context, but in JS it's very redundant.
[15:10:56] <StephenLynx> the json api is just named api because it uses the api subdomain.
[15:11:08] <StephenLynx> the form api is named form and uses the main domain.
[15:11:18] <symbol> I know you don't like Express but that's what I'm currently using and I was struggling with where to store the DB logic and the actual driver code. I like how you structured things.
[15:13:34] <StephenLynx> it will make your software faster, easier to read and easier to maintain.
[15:14:24] <symbol> I've never done it but I'll give it a try.
[15:14:38] <StephenLynx> and unless your development cycle is extremely short, like 2 weeks, the time you spend doing it is negligible.
[15:14:54] <StephenLynx> plus it will teach you about how the runtime environment works.
[15:15:32] <symbol> I suppose it's about time I stop staying high level
[15:15:44] <symbol> Thanks for the words of wisdom StephenLynx
[15:15:57] <StephenLynx> that is a relative concept. IMO, what I do is high level.
[15:16:13] <StephenLynx> I don't know how http is implemented, I write scripts that run on native code.
[15:16:40] <StephenLynx> that can't be considered low level when there is so much below my work.
[15:17:25] <StephenLynx> using webframeworks is just over-abstracting, IMO. you are abstracting something abstracted.
[15:17:59] <StephenLynx> the difference lies when the abstraction affects your system logic and design.
[15:18:07] <StephenLynx> that is when I find it excessive.
[15:19:28] <StephenLynx> not that I oppose any framework, I find them justifiable in a few cases, like when you build an engine and provide a framework to access it. examples being game engines and SDKs (android, ios)
[15:19:58] <StephenLynx> or when you already know the foundation of the runtime environment and just want to assemble a quick project in a very short timespan.
[15:20:16] <StephenLynx> so you won't face an issue where you're clueless about what's causing it.
[15:23:42] <StephenLynx> you wouldn't believe the crap I read from the node community before I isolated myself from them
[15:24:06] <StephenLynx> literally there was a dude that said he wanted to not have to write any code and let dependencies do everything for him.
[15:24:13] <symbol_> Well, your code speaks for you. It is readable and nicely organized.
[15:24:37] <symbol_> You've definitely encouraged me to try my hand at it. I thought I was being low level using something like express instead of Sails.
[15:55:38] <nosocks> Hi all. Do indexes apply retroactively to data already stored in mongo? We had some slow queries, and a co-worker applied an index to the field causing slow queries, but it doesn't seem to have sped up our queries at all
[15:55:47] <nosocks> Is there a command to apply that index to existing data?
[15:57:14] <StephenLynx> maybe the problem was not the field not being indexed, maybe he applied it to the wrong collection
[15:58:40] <nosocks> StephenLynx: Thanks for the answer. I am 100% sure about that field being the culprit, but perhaps my co-worker applied the index incorrectly. I'll have to have a closer look
[16:11:44] <nosocks> When an index is being created in mongo, how do we know when it is completed? When I run .getIndexes(), I see "background": true
[16:12:29] <kali> nosocks: look if you can find it in db.currentOp()
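A rough way to spot a running build from the shell, assuming a 3.0-era server where the operation's msg field mentions the index build and its progress (the exact wording varies by version):

    // List in-progress operations whose message looks like an index build.
    db.currentOp(true).inprog.filter(function (op) {
        return op.msg && /index build/i.test(op.msg);
    })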
[17:16:30] <daidoji1> what I'm trying to do isn't really kosher for a document database anyways but I thought I would ask
[17:18:19] <symbol_> daidoji1: I'm assuming it's just the driver formatting it. The shell does the same thing despite the integer being stored as an int32
[17:18:52] <daidoji1> symbol_: yeah thats what I figure too
[17:19:20] <daidoji1> doesn't seem like there's info on how to override that or what the rules are for that in the documentation though :-(
[17:20:45] <symbol_> daidoji1: Are you saying that you store something like 10 and get 10.0 back?
[17:28:12] <daidoji1> but between the loose typing of pymongo and mongo it's hard to tell where/how/why that happens
[17:29:03] <symbol_> I think that's pretty standard behavior from the Mongo driver and you just need to turn it into an int application side.
[17:30:37] <daidoji1> symbol_: yeah apparently I have to do voodoo like this to store as a certain type on the loading side http://stackoverflow.com/questions/8817856/pymongo-64bit-unsigned-integer
[17:30:51] <daidoji1> but this particular tool was only supposed to warn on incorrect types
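For checking what actually got stored, the shell can both force a numeric type and filter on it; note that a bare literal in the shell is a double, which is often the source of the 10 vs 10.0 confusion. A sketch with a hypothetical collection:

    db.nums.insert({ n: 10 })              // bare shell literal: stored as a double
    db.nums.insert({ n: NumberInt(10) })   // 32-bit integer
    db.nums.insert({ n: NumberLong(10) })  // 64-bit integer
    // $type can verify what is really stored: 1 = double, 16 = int32, 18 = int64.
    db.nums.find({ n: { $type: 16 } })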
[17:32:54] <topwobble> If I index foo, will count({foo: {$exists: false}}) be able to run using just the index (ie. not go to disk)? I am not sure if undefined values get indexed
[17:33:51] <cheeser> topwobble: run that with .explain() and see
[17:34:51] <topwobble> I'd like to figure that out before running it, as it may take a long time if it's not using the index
[18:49:21] <deathanchor> or remove the secs part to see what is actively writing/reading
[18:49:47] <topwobble> deathanchor nice! So flexible!
[19:10:49] <topwobble> ok indexed staging. This is the explain() of find({key: null}). Looks like it's 100% indexed... right? https://gist.github.com/objectiveSee/eaf97ea74ee10f948678
[19:11:01] <topwobble> never seen `KEEP_MUTATIONS` before
[19:17:18] <G1eb> Hi! what are the popular frameworks for rapid webdev using mongo as db nowadays?
[19:36:29] <StephenLynx> it was not this strict enterprise mentality that caused people to over-engineer and ruin it.
[19:36:59] <StephenLynx> it was just developers being awful and being praised for being awful because they were awful while sounding professional and knowledgeable.
[19:37:44] <StephenLynx> it was caused by this culture that leads incompetent developers to get by just by throwing around buzzwords and technologies, and no one bats an eye that the person might not know what the fuck they're doing.
[19:38:11] <StephenLynx> this is what ruins any accessible and popular technology.
[19:38:15] <StephenLynx> the same happens in java.
[19:38:19] <stickperson> i'm trying to clone a collection and get the error “not authorized on <user> to execute command <command>”
[19:38:44] <StephenLynx> this is what made PHP even worse on top of all problems it had from birth.
[19:41:22] <RWOverdijk> Question.. Like $in, is there something to do the reverse? So, if a value is an array, check if it contains "x"?
[19:41:43] <RWOverdijk> Wow that was horribly vague. Allow me to rephrase.
[19:42:24] <RWOverdijk> I have documents in my mongodb database. These documents have a field called 'appliesTo', which is of type array. I want to query all documents that have value 'x' somewhere in `appliesTo`
[19:43:07] <RWOverdijk> StephenLynx, Doesn't $all require that all values match?
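No reverse operator is needed here: equality against an array field already matches documents where any element equals the value, and $all is only for requiring several values at once. A sketch with an assumed collection name:

    // "appliesTo contains 'x'": plain equality is enough.
    db.docs.find({ appliesTo: "x" })
    // $all would require every listed value to be present in the array.
    db.docs.find({ appliesTo: { $all: ["x", "y"] } })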
[20:27:02] <dj3000> basically, there seems to be a bunch of cached ("backed up") inserts forming a queue, and this data is stale. I want to clear them.
[20:28:02] <Fardin> Is there any plan for an official Go driver?
[20:42:47] <dj3000> i wonder if it's because it's in a replica set. so i delete data from the master node, and maybe the others send it old data....but that wouldn't make sense.
[20:53:23] <shlant> is it possible to have a replica set on one server? I have mongo in docker and I can't seem to get it working as it looks like having all the members with the same hostname causes problems
[20:53:59] <StephenLynx> do they have different ips?
[20:56:26] <dj3000> i think at least the port number has to be different
[20:57:48] <shlant> I have the port different on the host. Each container uses 27017 internally, but exposes 27017, 27018 and 27019 to the host
[20:58:05] <deathanchor> shlant: don't use localhost
[20:58:09] <shlant> and the container IPs are different, but I don't know what mongo uses
[20:58:13] <deathanchor> shlant: use the actual hostname
[20:58:59] <deathanchor> localhost, good for a single member set
[21:00:20] <shlant> I'm not using localhost, I'm using a routable hostname, but it's the hostname of the host. something like blah.node.consul:27017/27018/27019
[21:00:38] <shlant> do I need differing hostnames?
[21:02:28] <shlant> Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: csm-dev.node.consul:27017; the following nodes did not respond affirmatively: csm-dev.node.consul:27018 failed with Failed attempt to connect to csm-dev.node.consul:27018; couldn't connect to server csm-dev.node.consul:27018 (172.17.5.182), connection attempt failed",
[21:02:48] <shlant> I can telnet to that address on the host
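Members of a set can share a hostname as long as the ports differ, but the quorum check runs from the mongod processes themselves, so each container (not just the docker host) has to be able to reach csm-dev.node.consul:27018 and :27019. Assuming that routing is fixed, an initiation along these lines (the set name is an assumption) should work from the member on 27017:

    rs.initiate({
        _id: "rs0",
        members: [
            { _id: 0, host: "csm-dev.node.consul:27017" },
            { _id: 1, host: "csm-dev.node.consul:27018" },
            { _id: 2, host: "csm-dev.node.consul:27019" }
        ]
    })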
[21:03:21] <RWOverdijk> Is it possible to sort on multiple fields? So first sort on a date field, and then sort within a time field within those dates?
[21:03:38] <RWOverdijk> As in, 10:00 next day, should be grouped with 10:00 current day
[21:03:50] <RWOverdijk> They should be sorted on their own dates
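sort() takes multiple keys and applies them in order, so documents come back ordered by date first and by time within each date. A sketch with assumed collection and field names:

    db.events.find().sort({ date: 1, time: 1 })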
[22:39:44] <topwobble> So, `db.games.count({mode: {$exists:false}})` took > 2 hours to complete on a 30M collection. `mode` was indexed however. Any thoughts on how to speed that up?
[22:40:44] <topwobble> The same query with a specific value for mode is instant. I thought that mongo uses `null` as the value for undefined values in an index
[22:40:56] <StephenLynx> yeah, I heard count ops are slow
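One thing worth trying, assuming no document stores an explicit null in mode: missing fields are indexed as null, and an equality match on null can usually make better use of the index bounds than $exists:false on servers of that era (it will also match explicit nulls, which may or may not be acceptable). Compare both with .explain() before trusting either on a 30M-document collection:

    // Counts documents where mode is missing OR explicitly null.
    db.games.count({ mode: null })
    // Force the single-field index if the planner doesn't pick it (index spec assumed).
    db.games.find({ mode: null }).hint({ mode: 1 }).count()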