PMXBOT Log file Viewer


#mongodb logs for Monday the 20th of April, 2015

[01:00:42] <davidbanham> Hi all. I'm getting my butt kicked by setting up a replica set. I think I have tracked the problem to: "Received heartbeat from member with the same member ID as ourself: 0"
[01:00:45] <davidbanham> However I cannot find any information on mongodb member IDs.
[01:03:34] <clay__> hello all
[01:04:02] <clay__> I am kind of in a bind and was hoping someone more knowledgeable than me could help :)
[01:04:42] <clay__> error loading initial database config information :: caused by :: Couldn't load a valid config for voipo.cdrtest after 3 attempts. Please try again.
[01:04:51] <clay__> that's the error I'm getting on my sharded cluster
[01:05:21] <clay__> I've done some research online and people point to the config servers as a possible problem but I'm not sure what command to run to "repair" the config server
[01:17:44] <joannac> davidbanham: check rs.conf() on all members of the replica set
[01:18:20] <joannac> clay__: look at the logs before that, why can't it load the config? what's the error?
[01:18:34] <davidbanham> joannac: On the master, it was listing the other two nodes as unresponsive. Connectivity between all members is fine. I can mongo --host from one to another without problems.
[01:18:36] <clay__> alright
[01:18:46] <davidbanham> The other two nodes acted as if they'd never heard of the master
[01:19:25] <joannac> davidbanham: define "unresponsive"
[01:20:04] <davidbanham> "stateStr" : "(not reachable/healthy)",
[01:20:16] <davidbanham> "lastHeartbeatMessage" : "Received heartbeat from member with the same member ID as ourself: 0",
[01:20:40] <davidbanham> And paradoxically: "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
[01:21:51] <joannac> davidbanham: pastebin
[01:22:17] <joannac> davidbanham: did you by chance rs.initiate() on more than one member or something?
[01:22:31] <davidbanham> https://gist.github.com/davidbanham/aa9f7c24b3932a995f59
[01:22:53] <davidbanham> joannac: Out of desperation, yes, but it was already doing this before that.
[01:23:00] <joannac> davidbanham: start again
[01:23:48] <joannac> you may have had connection problems before, but you were *not* getting the same error
[01:23:55] <joannac> but now, you have to start again
[01:23:57] <davidbanham> Righto.
[01:25:00] <davidbanham> Is removing the contents of /var/lib/mongodb sufficient or is there other state persisted elsewhere?
[01:25:08] <joannac> shut down the mongod
[01:25:23] <joannac> is there any data in this replica set?
[01:25:28] <davidbanham> Not yet
[01:25:31] <joannac> okay
[01:25:48] <joannac> then remove the contents of that directory, assuming that's your dbpath
[01:26:03] <davidbanham> It is, and it's done.
[01:26:07] <joannac> cool
[01:26:12] <joannac> start the 3 mongods
[01:26:18] <joannac> rs.initiate() on ONE only
[01:26:23] <joannac> and rs.add() the others
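[Editor's note: joannac's recovery steps above, sketched end to end. This is an illustrative sketch only; the dbpath is the one from this conversation, and "mongod2"/"mongod3" are placeholder hostnames. Wiping the dbpath is only safe here because the set holds no data yet.]

```javascript
// On each node: shut down mongod, then clear the dbpath.
//   $ sudo service mongod stop
//   $ rm -rf /var/lib/mongodb/*
//   $ sudo service mongod start

// Then, in the mongo shell on ONE member only:
rs.initiate()

// Once that member becomes primary, add the others from the same shell:
rs.add("mongod2:27017")
rs.add("mongod3:27017")

// Verify that all members show up healthy:
rs.status()
```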
[01:26:41] <davidbanham> Need to fool around creating the admin users first. Will ping when I'm done, thanks.
[01:27:36] <joannac> ...what?
[01:28:22] <joannac> oh, i guess that's okay
[01:30:47] <davidbanham> It won't let you rs.initiate() unless you're a root perms user. It won't let you add a root user without a majority. It's a bit of a dance to start in non-rs configured mode, add the users, then restart in the rs configured mode, auth, etc etc.
[01:30:51] <davidbanham> Anyway:
[01:30:52] <davidbanham> https://gist.github.com/davidbanham/aa9f7c24b3932a995f59
[01:32:03] <davidbanham> And then on both the children
[01:32:03] <davidbanham> https://gist.github.com/davidbanham/5f0054091902e55a7ffd
[01:32:25] <clay__> I'm seeing ChunkManager::_isValid failed: allOfType(MinKey, chunkMap.begin()->second->getMin())
[01:35:38] <joannac> davidbanham: "mongo --host mongod2 --port 27017" on the host "ip-10-1-17-121" and let me know the output
[01:36:08] <joannac> clay__: weird... what were you doing?
[01:36:18] <clay__> well i added another shard earlier today
[01:36:21] <clay__> and let it do its thing
[01:36:33] <clay__> then all i see is Assertion failed while processing query op for voipo.$cmd :: caused by :: 13282 error loading initial database config information :: caused by :: Couldn't load a valid config for DATABASE.COLLECTION after 3 attempts. Please try again.
[01:36:34] <joannac> clay__: all the config servers are up?
[01:36:35] <clay__> over and over
[01:36:37] <clay__> yeah
[01:36:41] <clay__> just checked that
[01:36:48] <joannac> can you query them?
[01:37:08] <joannac> check config.chunks
[01:37:39] <clay__> yeah i see a lot of chunks
[01:37:48] <clay__> 209 documents
[01:38:06] <joannac> check for the database.collection you're getting errors for
[01:38:37] <clay__> the database in question doesn't even load
[01:38:41] <clay__> when using the mongos
[01:38:51] <clay__> when using mongod on the shards themselves
[01:38:53] <clay__> i can see all the data
[01:38:54] <joannac> okay... that's because the chunks won't load
[01:39:04] <joannac> so go check the chunks
[01:39:41] <davidbanham> joannac: Connects successfully, works like a champ. There's definitely connectivity between all hosts
[01:40:47] <joannac> davidbanham: "mongo --host ip-10-1-17-121 --port 27017" from mongo2
[01:41:25] <joannac> davidbanham: if that works, but mongo2 still doesn't get the config, then check the logs from mongo2
[01:41:35] <davidbanham> Ohhhhhh.
[01:41:57] <clay__> so weird... i went to all 3 primary in the 3 shards and i can see 10k+ documents
[01:42:02] <clay__> which sounds right... i had about 30k total
[01:42:09] <davidbanham> mongo --host mongod1 from mongod2 works fine, but not mongo --host ip-10-1-17-121
[01:42:43] <joannac> davidbanham: there you go. fix the config to use "mongo1"
[01:43:45] <clay__> when i try to go to mongos and query
[01:43:46] <clay__> mongos> show collections;
[01:43:47] <clay__> 2015-04-20T01:42:43.295+0000 E QUERY Error: error: {
[01:43:47] <clay__> "$err" : "error loading initial database config information :: caused by :: Couldn't load a valid config for voipo.sms after 3 attempts. Please try again.",
[01:43:47] <clay__> "code" : 13282
[01:43:47] <clay__> }
[01:44:29] <joannac> clay__: 1. pastebin. 2. i *know*. if you want help, please just do what I told you
[01:44:39] <davidbanham> There doesn't appear to be an option in the config to specify hostname, unless I'm blind.
[01:44:40] <clay__> ok sorry
[01:44:54] <joannac> davidbanham: ? yeah there is
[01:45:03] <clay__> you said go check the chunks... how can I do that? I am not certain exactly what you mean by that
[01:45:05] <davidbanham> Blindness is always an option
[01:45:29] <joannac> davidbanham: the bit that says "name"?
[01:45:42] <joannac> clay__: db.chunks.find({_id: "database.collection"})
[01:45:58] <clay__> on the admin database or config?
[01:46:01] <joannac> actually
[01:46:03] <joannac> on config
[01:46:16] <joannac> db.chunks.find({ns: "db.coll"})
[01:46:49] <clay__> ok i found 8 documents
[01:46:53] <joannac> clay__: pastebin
[01:47:25] <clay__> you want me to pastebin the output of my search results?
[01:47:30] <joannac> clay__: yes
[01:47:47] <davidbanham> The only reference I can find to name is the replica set name
[01:48:02] <clay__> http://pastebin.com/nAyVbG0V
[01:48:52] <joannac> clay__: you're missing chunks
[01:49:08] <joannac> do the same query connecting to each of your config servers directly
[01:49:15] <clay__> ok
[01:50:19] <joannac> davidbanham: erm, rs.conf()
[01:50:41] <davidbanham> Ah, right. I was thinking /etc/mongod.conf
[01:52:35] <clay__> i connected to all 3 on port 27019 directly and did that query and 8 documents appear on each... the same ones
[01:52:44] <joannac> clay__: well, you're missing chunks
[01:52:52] <joannac> where did they go? who last touched the server?
[01:52:57] <clay__> only me
[01:53:17] <davidbanham> I think I'll just burn it down and start again with every machine knowing its externally reachable hostname at an OS level. Thanks for helping me track down the issue, joannac
[01:53:49] <joannac> clay__: well, when did it break? are you out of disk, maybe?
[01:53:53] <joannac> davidbanham: np, good luck
[01:53:55] <clay__> about 3 hours ago
[01:54:03] <clay__> disk seems ok on all machines
[01:54:06] <clay__> i checked that first
[01:54:49] <clay__> so unless i do a full cluster restore... what are my options?
[01:56:23] <clay__> i would have figured if chunks were missing mongo would just return partial data
[02:01:02] <joannac> clay__: um, no. if you're missing chunks that's bad and you should fix it
[02:01:18] <clay__> alright
[02:01:40] <joannac> if you have all the data you could manually fix the documents in config.chunks
[02:01:59] <joannac> if you're willing to lose data since the backup, you could restore from backup, assuming your backup is good
[02:03:38] <clay__> kk
[02:10:07] <clay__> ok i think i fixed it with your help
[02:10:09] <clay__> i found a backup
[02:10:13] <clay__> copied all the chunks
[02:10:15] <clay__> and imported
[02:14:18] <davidbanham> Cluster is up and healthy! Yay.
[02:20:21] <clay__> thank you joannac for your help
[02:20:49] <clay__> for my reference.. how could you tell by looking at that pastebin that documents were missing?
[02:21:03] <joannac> the log says "no chunk with MinKey"
[02:21:19] <joannac> and you don't have a chunk with MinKey
[02:21:34] <joannac> also, each chunk's minkey should match the maxkey of the previous chunk
[02:21:38] <joannac> you have a couple of gaps
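[Editor's note: joannac's two checks (a chunk whose lower bound is MinKey must exist, and each chunk's min must equal the previous chunk's max) can be sketched in plain JavaScript. This is an illustrative sketch, not a MongoDB tool: shard-key bounds are simplified to numbers, where real config.chunks documents use BSON MinKey/MaxKey and possibly compound keys.]

```javascript
// Find holes in a collection's chunk range coverage, the symptom clay__ hit.
function findChunkGaps(chunks) {
  // Order chunks by lower bound, as the chunk map does.
  const sorted = [...chunks].sort((a, b) => a.min - b.min);
  const gaps = [];
  for (let i = 1; i < sorted.length; i++) {
    // Contiguous coverage requires each min to equal the previous max.
    if (sorted[i - 1].max !== sorted[i].min) {
      gaps.push({ after: sorted[i - 1].max, before: sorted[i].min });
    }
  }
  return gaps;
}

// Example: the chunk covering [20, 30) is missing.
const chunks = [
  { min: 0, max: 10 },
  { min: 10, max: 20 },
  { min: 30, max: 40 },
];
console.log(findChunkGaps(chunks)); // one gap: between 20 and 30
```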
[02:22:07] <clay__> ahhhh thank you
[02:53:46] <reactormonk> muh, TIL you need to specify -d manually with mongorestore
[02:58:08] <reactormonk> how do I go browse around the data of the mongodb DB?
[02:59:19] <joannac> mongo shell
[03:05:06] <reactormonk> thanks
[06:07:03] <jry123> hey guys, quick question -- i have documents with 15+ language translations which can be quite large (well under the 16mb document size though)... i only need to use the keys for the current user's language, so obviously would be electing to only return those fields.... is this strategy ok? or should i split the translations into multiple documents?
[06:08:13] <jry123> obviously the single document approach is way easier, i just wonder from a memory perspective, there will be millions of records in this collection (converting from an old MySQL DB)
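[Editor's note: jry123's "only return those fields" idea maps to a query projection. A mongo shell sketch; the collection name, slug field, and translations layout are hypothetical, assuming translations keyed by language code inside one document.]

```javascript
// Keep every translation embedded in the one document, but project out only
// the current user's language so the other 14+ are never sent to the client.
var lang = "en";
var projection = { slug: 1 };
projection["translations." + lang] = 1;

db.articles.findOne({ slug: "about-us" }, projection)
```

Note that projection trims what goes over the wire and into driver memory; the server still reads the full document into its working set, which is the memory cost to weigh against splitting translations into separate documents.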
[07:41:29] <gothos> Hello, anyone here familiar with casbah? I'm trying to extract data from an array that has some object in it, it looks like this: "perfData" : [ { "pi" : "3.1" , "minute" : "3"}, ... ]
[07:41:31] <gothos> I'm getting a MongoDBList but can't find a way to get the object within, any recommendations? :)
[08:50:00] <isale-eko> http://stackoverflow.com/questions/29719471/mongodb-mongoid-rails-unable-to-increment-counters-in-a-hash-in-embedded-documen
[09:07:22] <willemb> Hi. I need some advice about running a mongodb replication set across multiple datacentres.
[09:21:29] <sudomarize> If i'm storing categories in mongo, what's the best way to do this? e.g. for a category "House Cleaning", how should i store this?
[09:23:08] <willemb> I have 3 nodes in dc A and 2 in dc B. the B-side's priority is set to 0 so that they don't become master (the services that query this db are all in dc A). However, when dc A goes awol, it is pretty hard to make one of them a primary while there is no primary available.
[09:42:35] <sudomarize> why is this channel so dead?
[09:46:40] <Derick> sudomarize: everybody is checking their mails first on Monday morning - and, it's still sleep time in the US.
[09:47:04] <sudomarize> Derick: ah ok fair enough
[09:47:38] <Derick> willemb: I'd make the DC A nodes have priority 2, and the ones in DC B priority 1
[09:47:47] <Derick> willemb: so that they can become primary
[09:48:11] <Derick> willemb: but note, that if DC A goes fully down, the 2 (out of 5) in DC B can not see the majority, and hence not elect a primary
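[Editor's note: Derick's majority arithmetic, as a tiny sketch. A replica set can only elect a primary when a strict majority of all configured voting members is reachable.]

```javascript
// Votes needed for an election: a strict majority of ALL configured members,
// not just the ones currently reachable.
function votesNeeded(totalMembers) {
  return Math.floor(totalMembers / 2) + 1;
}

function canElectPrimary(reachableMembers, totalMembers) {
  return reachableMembers >= votesNeeded(totalMembers);
}

// willemb's layout: 3 nodes in DC A, 2 in DC B, 5 members total.
console.log(votesNeeded(5));        // 3
console.log(canElectPrimary(3, 5)); // true: DC A alone can elect a primary
console.log(canElectPrimary(2, 5)); // false: DC B alone cannot
```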
[09:53:26] <sudomarize> Derick, do you have any suggestions about the best way of setting up categories in my schema? i'm a bit stuck here
[09:53:52] <Derick> sudomarize: You need to provide a little more information
[09:54:06] <Derick> on what you need to store, on how you add the data, and how you need to query it
[09:57:08] <sudomarize> Derick: no worries. So i'm creating a marketplace that allows users to create tasks. In terms of the parameters of listing a task (for the lister), i want to provide most of the structure, i.e. there will be a set of predefined categories and subcategories (3 tiers). But I imagine there will still be significant variation, even between two tasks with the same category tree, so would implementing a tag: {} object in my schema (and then doing a search for both categories and tags) be a good idea?
[09:58:17] <sudomarize> i'm also not sure what the best way of storing that value in my db is, i.e. should i store it just as the formatted string, e.g. "House Cleaning", or as a key-value pair, something like {category: {name: "house-cleaning", formatted: "House Cleaning"}}
[09:59:41] <sudomarize> i've looked on the mongo page for creating categories, which seems relatively complex compared to the SQL solutions which i'm used to
[09:59:54] <KekSi> anyone had this problem before? 2015-04-20T09:57:14.038+0000 E - [mongosMain] error upgrading config database to v6 :: caused by :: could not load config version for upgrade :: caused by :: socket exception [CLOSED]
[10:00:25] <KekSi> when i'm trying to restart a router after upgrading docker
[10:01:03] <sudomarize> and finally i'm a little worried about how making changes to the category structure will affect searches, i.e. is there a way to make changes to my category tree without causing issues when i run queries?
[10:01:09] <sudomarize> sorry for the long post
[10:05:43] <sudomarize> Derick: also what way do you think is best of adding categories? should i add a large amount initially, or add them slowly after they are requested by users? basically i want to be able to manage which categories are able to be searched, so only very popular categories are shown
[10:07:03] <Derick> sudomarize: hmm, tricky :-)
[10:07:12] <Derick> what would you do about the tiers?
[10:13:19] <sudomarize> Derick: in terms of having a dynamic set of category tiers, i'm still not sure. But generally speaking i want to manage the quality of the categories that are available to users, to make search and listings really easy. there's a good guide on the mongo site about setting up product categories (inc a tiered system), so with a few adjustments that can easily be modelled to my particular needs
[10:13:45] <Derick> okay
[10:14:01] <Derick> so you're asking how to store a category with each post? I don't quite understand yet
[10:17:56] <sudomarize> Derick: basically i'm asking 1. what's the best way of storing categories in mongo (e.g. an easy json name and then a formatted name for HTML like {key: "housecleaning", formatted: "House Cleaning"}), 2. is there a way to make just a category tree dynamic, that is to say, adding, removing or moving categories up or down the tree and 3. Assuming i have a relatively strict category system in place, is the best way of differentiating listings with the same categories using a tagging system?
[10:18:19] <sudomarize> Derick: hope that's not too vague, still trying to figure a lot of this stuff out at the moment
[10:18:42] <Derick> 1 - store it as you'd show it. It means less things to store
[10:18:54] <dvass> I just set up MMS
[10:19:08] <dvass> how can I manage my collections and data from it?
[10:19:46] <Derick> 2 - storing path structures is not very simple to do. But I think you found the docs on doing that. But I am not sure what you mean by "just make a category tree dynamic"
[10:19:54] <Derick> and 3 - what do you need to differentiate on?
[10:23:34] <sudomarize> Derick: basically so that i can edit and move categories up and down the category tree, say from tier 3 to tier 2
[10:24:13] <Derick> sudomarize: that's for question 2, right?
[10:24:18] <sudomarize> Derick: RE 3: well if a particular user has expertise in a subset of a tier 3 category, then there's no way of users searching for tasks to be able to find it
[10:24:20] <sudomarize> Derick: yep
[10:24:40] <Derick> sudomarize: i think you'll have to do most in your application for the tree management
[10:25:01] <Derick> sudomarize: oh - hmm, for 3 - yeah, tags will work I suppose.
[10:25:24] <Derick> but what if a task requires multiple categories?
[10:25:58] <sudomarize> Derick: say gardening was a tier 3 category, and there's a lister who has expertise in growing roses, currently there's no way for them to let users looking for gardeners know this (except in the description of the task, but users can only search with categories <- to make the experience cleaner)
[10:26:33] <sudomarize> Derick: what would an example of multiple categories be?
[10:27:22] <Derick> sudomarize: a task that requires both rose-tending and fence repair?
[10:27:25] <sudomarize> multiple tags or multiple categories, because they would only be able to add multiple categories due to the nature of the category tree
[10:27:41] <sudomarize> Derick: ah right i see what you're saying
[10:28:04] <Derick> i'd probably make "growing roses" a tier 4 cat
[10:28:32] <sudomarize> Derick: this is a perfect example of where i think tags would be useful
[10:28:52] <sudomarize> Derick: otherwise i'd have to make n categories to fit every particular skill set out there
[10:29:19] <Derick> that's true
[10:29:29] <sudomarize> growing roses -> rare temperate roses -> a particular species of rose -> etc
[10:30:03] <sudomarize> i mean 3 tiers is a relatively arbitrary number of tiers to have, but i think it gives good depth while retaining a relatively easy UX
[10:30:50] <sudomarize> Derick: but that's also what i was talking about before with a dynamic tree. there are probably some instances where >3 tiers is required
[10:32:08] <sudomarize> is it just me, or is this quite a complex problem?
[10:32:24] <sudomarize> really having trouble getting my head around it
[10:46:07] <Derick> sudomarize: trees in a database are always tricky
[10:47:02] <Derick> (unless you use a graph database)
[10:57:15] <KekSi> right, apparently it was a docker problem (upgraded from 1.5 to 1.6) -- just in case someone else is having odd problems
[12:39:41] <sudomarize> Derick: it seems like it
[12:40:16] <deathanchor> so one of my secondaries fell out of sync beyond the replWindow, and is stuck in recovery with this in the logs: [rsBackgroundSync] replSet not trying to sync from MEMBER it is vetoed for 80 more seconds
[12:40:41] <deathanchor> no way to recover other than clearing the data dir and doing a fresh sync?
[12:42:44] <cheeser> unless you have a backup via, say, mms.
[12:42:55] <cheeser> you could restore a checkpoint and then sync from there.
[12:43:24] <srimon> Hi all
[12:43:39] <deathanchor> yeah same problem. lots of data to sync. fresh sync looks like my only choice
[12:43:40] <sudomarize> cheeser, any suggestions you have about how to implement this tiered category system?
[12:43:50] <cheeser> not really
[12:44:20] <cheeser> lots of pros/cons to any solution
[12:45:36] <sudomarize> cheeser: how would you got about it?
[12:45:57] <sudomarize> really out of my depth here
[13:06:41] <arussel> I have a replicat set without authorization/authentication. I'd like to introduce it, so: 1: db.createUser({user: "root", pwd: "foobar", roles: [{role: "userAdminAnyDatabase", db: "admin"}]})
[13:07:09] <arussel> 2. db.createUser({user: "myapp", pwd: "foobar2", roles: [{role: "readWrite", db: "myappdb"}]})
[13:07:26] <arussel> and from now on, the app has to use username/password
[13:07:46] <arussel> Do I have to create the users on all the nodes of the replica set ?
[13:08:20] <arussel> would the app stop having access after step 1 or after step 2 ?
[13:41:08] <willemb> Derick: thanks. that's pretty obvious, isn't it?
[13:48:30] <willemb> one problem: bad config for member[3]: "this version of mongod only supports priorities 0 and 1"
[13:52:39] <willemb> i guess i am running too old a version
[13:55:19] <Derick> willemb: or perhaps it's an arbiter?
[14:09:19] <greyTEO> GothAlice, you mentioned you are on rackspace, correct?
[14:09:25] <greyTEO> or were on rackspace.
[14:10:38] <greyTEO> Did you ever use cloud server images as backups? http://docs.mongodb.org/ecosystem/platforms/rackspace-cloud/#cloud-servers-images
[14:16:59] <GothAlice> greyTEO: None of my backup mechanisms require locking like that example. Interesting that it's an option, though. (Also: don't get me started on the pricing of this type of "backup".)
[14:18:21] <GothAlice> Backups should be a) as cheap as possible, b) as difficult to restore as possible. "Deep storage" is great. If it takes an hour to get anything out of it, one tries extra special hard to make sure one doesn't need to use it.
[14:26:29] <greyTEO> Yea they are going to wreck you in the cloud storage price for those backups.
[14:28:20] <greyTEO> https://github.com/failshell/mongolvmsnapback/blob/master/mongolvmsnapback.sh
[14:29:23] <GothAlice> I also get giggles every time I read through platform instructions that focus on single distributions. "Here's how you do everything you need to do on Rackspace… if you use Debian/Ubuntu."
[14:30:35] <greyTEO> Tis true about the backups being hard to extract data from. PITA is the best motivation
[14:31:37] <greyTEO> I thought it was weird they have a section for Rackspace...on ubuntu. Shouldn't they just have a section for Ubuntu and stop recommending those specific solutions?
[14:31:40] <GothAlice> Alice's Law #105: "Do you have a backup?" means "I can't fix this." ;)
[14:32:13] <greyTEO> Usually the 2nd question
[14:32:20] <GothAlice> Yeah, I wish these sections only had details that actually were unique to that platform. (I.e. the backup thing.) Installation instructions are best left to _installation instructions_.
[14:32:46] <greyTEO> 1.) what happened? 2.) do you have a backup? 3.) ???? lawlz
[14:32:47] <GothAlice> (And there really didn't seem to be anything unique from an installation standpoint.)
[14:33:17] <greyTEO> no not really.
[14:34:22] <greyTEO> LVM looks to be the most widely used option. I'm not that familiar with it.
[14:35:10] <greyTEO> mongodump doesn't seem to be that scalable and doesn't cover indexes
[14:35:52] <GothAlice> Hmm; not sure where you'd get those impressions from.
[14:37:48] <GothAlice> You could always run multiple dumps in parallel, each dumping different collection sets, potentially from different replica secondaries. The latest version even incorporates parallelism directly into the restore tool, too.
[14:38:27] <greyTEO> I thought on restore you have to rebuild the indices?
[14:39:15] <GothAlice> And information about indexes is, in fact, backed up. "Backing up" an index doesn't actually make any sense. The location of the data in the restored stripes will differ from the original stripes, so the original index data is utterly useless anyway…
[14:41:11] <greyTEO> the data would be useless but it would cut back on the restore time. depending on the indices and amount of data though
[14:41:18] <GothAlice> No, it wouldn't.
[14:41:22] <GothAlice> In fact, it'd slow it down.
[14:41:31] <GothAlice> You'd be "restoring" bogus data you'd then have to re-iterate and correct anyway.
[14:41:54] <GothAlice> Thus there's literally no point in even attempting to do that; just rebuild the index either as you go, or once at the end, and be done with it. (Iterate once, not many times.)
[14:42:39] <greyTEO> well restoring from a dump would re-import. I am thinking more point in time snapshots.
[14:42:48] <GothAlice> You _can not snapshot indexes_.
[14:43:02] <GothAlice> Index data outside of the literal instance that is storing the at-rest data is _meaningless_.
[14:43:25] <greyTEO> if you snapshot the the instance, it would.
[14:43:31] <greyTEO> it would be a full restore
[14:44:49] <GothAlice> There are several types of snapshot; only a filesystem-level snapshot would benefit from preserving index data, and, well, has the added bonus of MongoDB being completely unaware. So yeah, that'd work. (It's also the least efficient method to do a backup.) mongodump with the --oplog option is also a point-in-time snapshot.
[14:45:08] <GothAlice> It's still a re-compacted dump, though, so storing index data would continue to be a waste here.
[14:46:35] <greyTEO> I was just reading the mongodump with oplog give PIT
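[Editor's note: the point-in-time dump greyTEO is reading about, sketched as shell commands. Hostname and output path are placeholders; adapt to your deployment.]

```shell
# Dump while the member keeps taking writes; --oplog also captures the oplog
# entries written during the dump, so the backup is a consistent
# point-in-time snapshot as of the moment the dump finishes.
mongodump --host mongod1 --port 27017 --oplog --out /backups/2015-04-20

# Restore, replaying the captured oplog slice to reach that point in time.
mongorestore --host mongod1 --port 27017 --oplogReplay /backups/2015-04-20
```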
[14:47:30] <GothAlice> Of course, in production systems "high availability" should be the only backup one needs. Have replication secondaries in-DC, outside the DC, at your office, at your home… so worst case, you need to switch hosting providers or data centres? Spin up a node, let it replicate, and keep on keeping on. ;)
[14:51:23] <greyTEO> Filesystem might not be the most efficient but wouldn't that be the best way to get a hot backup without interrupting performance...(given you could run it at your slowest time of traffic)
[14:51:41] <GothAlice> greyTEO: No, replication = best hot backup.
[14:52:15] <GothAlice> Replication typically has no "instantaneous load" (i.e. it's constantly streaming, so shouldn't have peaks in activity that slow down the rest of the system).
[14:53:20] <GothAlice> See: http://docs.mongodb.org/manual/core/replica-set-high-availability/
[14:54:00] <greyTEO> Corruption or server failures wont replicate, but deletes and drop collections could.
[14:54:20] <GothAlice> Delete a collection, do a snapshot. That collection's gone.
[14:54:25] <GothAlice> So there's no difference, there.
[14:54:54] <GothAlice> However you can also delay your replicas by however much time you want, to allow for rollbacks in the event of user error.
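[Editor's note: the delayed-replica rollback window GothAlice describes, configured from the mongo shell. The member index is a placeholder; in 2015-era MongoDB the field is slaveDelay (newer releases renamed it secondaryDelaySecs).]

```javascript
// Make member 3 a hidden secondary that applies writes one hour behind the
// primary, leaving a window to recover from accidental deletes or drops.
cfg = rs.conf()
cfg.members[3].priority = 0      // a delayed member must never become primary
cfg.members[3].hidden = true     // keep it invisible to client reads
cfg.members[3].slaveDelay = 3600 // lag, in seconds
rs.reconfig(cfg)
```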
[14:55:06] <greyTEO> that is what I was looking for
[14:55:17] <GothAlice> http://docs.mongodb.org/manual/core/replication-introduction/
[14:55:20] <GothAlice> ^ Read through all of this.
[14:56:15] <GothAlice> Replication is how you get reliable service in MongoDB. Point in time snapshots / backups are not. (Always remember law #105. Backups are what happens when everything else is on fire.)
[14:56:29] <GothAlice> Actually, totally need to add that quip as a follow-up.
[14:57:29] <greyTEO> I appreciate the info. Im still learning the in's & out's of mongo
[14:57:37] <greyTEO> you have helped a lot.
[14:57:44] <GothAlice> It never hurts to help.
[14:58:25] <greyTEO> with mysql http://www.percona.com/ is the best way for hot backups and PIT. restore is crazy easy and it's all filesystem based.
[14:58:49] <greyTEO> I was applying the same logic but I guess it doesn't always apply
[14:59:06] <GothAlice> It can be. Unless AWS decides that if your EC2 instance needs to touch an EBS volume, lock both the instance and the volume. Which _has happened multiple times_ on that service. ;)
[14:59:18] <GothAlice> I.e. filesystem snapshots as a backup process can destroy your business.
[14:59:26] <cheeser> heh
[14:59:56] <GothAlice> Also MySQL. (In one of those failures I had to spend the 36 hours prior to getting my wisdom teeth extracted to reverse engineer the on-disk InnoDB format. Managed to recover all the data, amazingly.)
[15:00:12] <GothAlice> (http://grimoire.ca/mysql/choose-something-else ;)
[15:01:22] <GothAlice> greyTEO: I switched to Postgres, and use WALe to stream the write ahead log, i.e. oplog, straight from the Postgres server to S3 and several other destinations. None of my Postgres servers had permanent storage, because like MongoDB, I had them configured to self-repair on startup.
[15:01:32] <GothAlice> These days I just use MongoDB. ;)
[15:02:09] <Vitium> Hello
[15:02:27] <Vitium> I've just started using MongoDB and it's my first time using a NoSql database
[15:02:41] <GothAlice> Vitium: Do you have previous experience with relational databases?
[15:02:43] <Vitium> I'm trying to make a website with products and these products have reviews
[15:02:47] <Vitium> Yes GothAlice
[15:02:54] <Vitium> I'm still seeing things in a relational way
[15:02:58] <digi604> Hi everybody… what is the easiest way to copy a db from a replica set with 2 servers to a 6 server cluster? db.copyDatabase seems not to work on clusters…
[15:03:06] <Vitium> I'd create a table for the product and a table for the reviews
[15:03:08] <GothAlice> Vitium: http://www.javaworld.com/article/2088406/enterprise-java/how-to-screw-up-your-mongodb-schema-design.html may be useful, if you're up on the formal terms. :)
[15:03:29] <GothAlice> digi604: Easy? Mongodump/mongorestore.
[15:04:07] <digi604> GothAlice: the db is too big for that one...
[15:04:51] <GothAlice> digi604: You do realize mongodump/mongorestore aren't limited to only running locally? I.e. you can run it from a machine with enough scratch space.
[15:05:14] <GothAlice> (Or attach an EBS volume or equivalent, for the duration of the migration.)
[15:05:25] <digi604> GothAlice: ok that seems to be the way to go...
[15:05:38] <GothAlice> Won't be fast, but you only asked for easy. ;)
[15:05:55] <digi604> GothAlice: what is the hard way to do it?
[15:06:15] <digi604> no let me rephrase this: what is the fastest way to do it...
[15:06:35] <GothAlice> digi604: Introduce the machine with the data on it into the new cluster, and let it re-balance. That's the hard way. ;)
[15:07:02] <GothAlice> I don't know which would be faster. Balancing can be quite slow.
[15:07:24] <digi604> GothAlice: can i remove hosts afterwards?
[15:07:31] <GothAlice> Indeed you can.
[15:08:30] <GothAlice> digi604: http://docs.mongodb.org/manual/reference/replication/ and http://docs.mongodb.org/manual/administration/replica-sets/ for some reading.
[15:09:05] <digi604> so if i have 3 shards and 3 replicas…. and i add the 2 existing nodes to it as an additional shard i can later remove the 2 existing nodes (one complete shard)?
[15:09:10] <GothAlice> Vitium: The article I linked should be quite helpful in exploring your current model. In MongoDB, dependant data like the reviews can often be embedded within the "parent" document, since when getting the review you'd likely also be wanting a copy of the product data, and when you delete a product, cleaning up the reviews is a natural second step.
[15:09:16] <GothAlice> (Embedding means it's all handled in one step.)
[15:09:45] <GothAlice> digi604: Yes; though when you ask it to remove the node, you'll have to wait for it to shuffle the data off that node before shutting it down.
[15:10:00] <digi604> GothAlice: ok that helped… thnx
[15:10:20] <GothAlice> digi604: I'd still recommend the dump/restore approach. ;)
[15:10:41] <GothAlice> Fewer things can go wrong. ^_^
[15:10:42] <Vitium> GothAlice, You mean like an array?
[15:11:21] <GothAlice> Vitium: Exactly; your reviews would be an array of embedded documents. e.g. product = {_id: …, name: "Walkman", manufacturer: "Sony", reviews: [{author: "Alice", comment: "It's boss."}, …]}
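[Editor's note: GothAlice's embedded-review shape in use, as a mongo shell sketch using 2015-era shell methods. Collection and field names are illustrative.]

```javascript
// Create a product with its reviews embedded as an array of sub-documents.
db.products.insert({
  name: "Walkman",
  manufacturer: "Sony",
  reviews: [
    { author: "Alice", comment: "It's boss." }
  ]
})

// Adding a review is a single atomic update on the parent document,
// and deleting the product removes its reviews in the same step.
db.products.update(
  { name: "Walkman" },
  { $push: { reviews: { author: "Bob", comment: "Still works." } } }
)
```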
[15:11:35] <Vitium> Damn
[15:11:48] <Vitium> I can do that?
[15:11:49] <Vitium> WOOT
[15:11:52] <GothAlice> Yeeeah.
[15:11:52] <GothAlice> :3
[15:11:55] <GothAlice> NOTE
[15:12:06] <GothAlice> There is a 16 MB limit on the size of a single top-level document.
[15:12:16] <GothAlice> This roughly equates to around 3 million words of English text.
[15:12:32] <Vitium> What if I want more than that?
[15:12:42] <Vitium> One-To-Many
[15:13:06] <GothAlice> When there are so many reviews on something that the record fills up, you need to have a contingency plan. In my forums, where I store replies to a thread within the thread itself, I handle this by automatically starting a new thread, then linking the old one to the new one.
[15:13:59] <GothAlice> Vitium: It's important to not go overboard with nesting things; you can only effectively query one array per document per query.
[15:14:09] <GothAlice> (And no joins…)
[15:15:23] <GothAlice> You _can_ have "references" (typically just the ID of the target record, or a DBRef) between collections (tables), but without joins any time you need to gather information from a different collection, you need to perform an additional query.
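The extra round-trip GothAlice mentions looks like this in application code; the two "collections" are simulated here with in-memory arrays, so the sketch runs without a server:

```javascript
// Two "collections", held in memory to mimic the two separate queries:
var products = [{ _id: 10, name: "Walkman" }];
var reviews = [
  { _id: 1, product_id: 10, author: "Alice" },
  { _id: 2, product_id: 10, author: "Bob" }
];

// Query 1: fetch the product.
var product = products.filter(function (p) { return p._id === 10; })[0];

// Query 2 (the "fake join"): fetch reviews via the stored reference.
// In the shell this would be db.reviews.find({ product_id: product._id }).
var productReviews = reviews.filter(function (r) {
  return r.product_id === product._id;
});
```

With embedding, that second query disappears entirely; with references, every gather costs another round-trip.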
[15:15:40] <Vitium> Yeah that's what I was thinking
[15:15:43] <Vitium> Hard to see things in a new way
[15:15:48] <Vitium> Keep thinking of primary keys
[15:15:56] <GothAlice> The article I linked should help with that.
[15:16:02] <GothAlice> "Identifying documents is key."
[15:16:35] <Vitium> reading
[15:17:01] <GothAlice> Any time you get the sensation of wanting a JOIN, smack yourself. MongoDB has no relational capability whatsoever, and modelling with such non-existent capability in mind will only lead to slow queries and mangled code. ;)
[15:17:30] <Vitium> I'd slap myself into a comma though
[15:17:32] <GothAlice> See also: http://docs.mongodb.org/manual/reference/sql-comparison/ and http://docs.mongodb.org/manual/reference/sql-aggregation-comparison/
[15:17:47] <Vitium> coma*
[15:18:15] <GothAlice> Just repeat this mantra: "My data is more varied and wonderful than a spreadsheet."
[15:18:37] <Vitium> SELECT *
[15:18:40] <Vitium> oops
[15:18:44] <GothAlice> XP
[15:18:49] <GothAlice> That'd be db.collection.find()
[15:18:59] <Vitium> I know that!
[15:20:01] <StephenLynx> What she said. Any time you need to do something that would require a join in sql, you are doing it wrong.
[15:20:30] <StephenLynx> I personally see (few) uses for fake relations, but don't you ever think of them as your first, second or third option.
[15:20:35] <GothAlice> "The Gods of Excel have poisoned your minds against your data, friends!"
[15:24:48] <Vitium> Aren't they basically showing a join with the One-To-Many example? http://blog.mongodb.org/post/87200945828/6-rules-of-thumb-for-mongodb-schema-design-part-1
[15:25:23] <Vitium> The array is just a bunch of ObjectIDs
[15:28:33] <GothAlice> Vitium: Indeed, that example is simulating a fake join at the application level.
[15:29:22] <GothAlice> However the full impact of this is difficult to gauge from that example: it implies a second complete round-trip, you currently can't get results back in arbitrary orders (i.e. give me records X, Y, and Z in the order [X, Y, Z]), etc.
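The arbitrary-order limitation is straightforward to work around client-side; a sketch (IDs and documents are made up):

```javascript
// The order the application needs, vs. the order the server happened to return
// the documents, e.g. from find({ _id: { $in: wanted } }):
var wanted = [7, 2, 5];
var results = [{ _id: 2 }, { _id: 5 }, { _id: 7 }];

// Index by _id, then re-impose the desired order in the application:
var byId = {};
results.forEach(function (doc) { byId[doc._id] = doc; });
var ordered = wanted.map(function (id) { return byId[id]; });
```

It's cheap, but it's one more piece of join-like bookkeeping the application has to carry.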
[15:31:10] <GothAlice> Unfortunately, that article states "Each Part is a stand-alone document, so it’s easy to search them and update them independently." which would seem to imply that you lose this ability if you embed. You do not.
[15:31:32] <Vitium> I'm just concerned about the size
[15:31:38] <GothAlice> 3 million words.
[15:31:42] <GothAlice> That's a _lot_ of words.
[15:31:59] <GothAlice> To the point that the entire gaming forums I was converting from phpBB to my MongoDB solution in Python could fit in a single record.
[15:32:24] <StephenLynx> yeah, unless you expect your subdocuments to be really, really, really, huge, size is not an issue.
[15:32:55] <Vitium> So 16MB is the max size for any document?
[15:33:17] <GothAlice> Yes.
[15:35:42] <greyTEO> GothAlice, http://s3.amazonaws.com/info-mongodb-com/MongoDB-backup-disaster-recovery.pdf
[15:36:10] <greyTEO> bottom of page 3.
[15:36:33] <GothAlice> greyTEO: Anything more specific I should be looking for than that? (And numbered page 3, or PDF page 3?)
[15:36:41] <greyTEO> Mongo instilling fear in the use of mongodump...
[15:36:53] <greyTEO> numbered page 3
[15:36:56] <greyTEO> pdf page 5
[15:37:01] <greyTEO> right column
[15:37:35] <greyTEO> Their docs give mixed signals sometimes.
[15:37:47] <GothAlice> Uhm.
[15:37:54] <GothAlice> I suspect people read too much into these things.
[15:38:09] <GothAlice> "While mongodump is sufficient for small deployments, it is not appropriate for larger systems. mongodump exerts too much load to be a truly scalable solution. It is not an incremental approach, so it requires a complete dump at each snapshot point, which is resource-intensive." — All true.
[15:38:26] <GothAlice> Also, notably, all things that will vary from deployment to deployment and application to application in terms of acceptable load, etc.
[15:38:53] <greyTEO> lol they recommend filesystem snapshot
[15:38:58] <greyTEO> which is way more complicated
[15:39:07] <StephenLynx> really?
[15:39:12] <greyTEO> Yes it varies
[15:39:20] <StephenLynx> wouldn't it just be a "cp" command?
[15:39:22] <GothAlice> Well, they're pimping Ops Manager / MMS.
[15:39:29] <GothAlice> That is a mongodb.com white paper, after all.
[15:39:50] <greyTEO> for sure that is their goal for this one
[15:39:59] <Left_Turn> from the docs: db.runCommand(command) ... shouldn't there be a 2nd context/mapping argument for search?
[15:40:00] <GothAlice> Again: the simplest, easiest approach is replication. Replication is love. Replication is life.
[15:40:23] <GothAlice> Left_Turn: The number of times I've had to call runCommand myself can be counted on no hands. What're you trying to do?
[15:40:53] <Left_Turn> hdb.mbox.runCommand("text", {"search" : "term"}) GothAlice
[15:41:19] <Left_Turn> is that valid?
[15:41:31] <Left_Turn> http://docs.mongodb.org/manual/reference/method/db.runCommand/
[15:41:42] <GothAlice> I honestly have no idea why you're trying to do that, that way.
[15:41:43] <greyTEO> GothAlice, lol you have a way with words...
[15:41:49] <Left_Turn> oh
[15:41:58] <Derick> db.mbox.find( { $text : { '$search' : "string" } } );
[15:42:04] <GothAlice> ^ There we go.
[15:42:07] <Left_Turn> ive never used mongodb before GothAlice .. forgive me:(
[15:42:08] <Derick> no need to use a command any more
[15:42:10] <Derick> http://docs.mongodb.org/manual/reference/operator/query/text/
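For reference, the `$text` form that link describes builds a query document like this; the collection name and search term are placeholders, and the collection needs a text index for the query to succeed:

```javascript
// Query document for the $text operator (MongoDB 2.6+).
var query = { $text: { $search: "term" } };

// In the shell, after db.mbox.createIndex({ body: "text" }):
//   db.mbox.find(query)
// replaces the older runCommand("text", ...) invocation.
```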
[15:42:15] <Left_Turn> oh i see
[15:42:20] <Left_Turn> thanks!
[15:42:24] <GothAlice> Old docs from somewhere told you to do it that way, Left_Turn?
[15:42:31] <Left_Turn> yes:(
[15:42:34] <GothAlice> Link?
[15:42:44] <Left_Turn> it's a python book
[15:42:53] <Left_Turn> that uses mongo shell
[15:43:19] <Left_Turn> i see the light now GothAlice :)
[15:43:56] <Derick> Left_Turn: technically, what you wrote should still work
[15:43:58] <cheeser> walk calmly in to it. the pain will be over soon.
[15:44:11] <Left_Turn> heh
[15:44:18] <Left_Turn> ah ok Derick
[15:44:19] <Derick> Left_Turn: http://docs.mongodb.org/manual/reference/command/text/
[15:45:13] <GothAlice> Manual invocation of runCommand is a good warning sign of something being done more "craftily" than it needs to be, though. Driver support is pretty solid for almost all commands, excluding some pretty low-level administrative ones.
[15:45:58] <Derick> GothAlice: for the new PHP/HHVM driver we're not implementing *any* helpers in the base extension
[15:46:05] <Derick> but leave it to a library on top
[15:46:08] <Left_Turn> ah ok ... I'll take the advice... also thanks Derick for all the links.. got a lot of reading to do:)
[15:46:18] <Derick> Left_Turn: have fun :)
[15:46:27] <Left_Turn> thanks!
[15:47:03] <GothAlice> Derick: Well, that does seem to be PHP's modus operandi: expose raw C function calls to the application layer. Which C calls? ALL THE C CALLS!
[15:47:05] <GothAlice> ;)
[15:47:14] <Derick> heh
[15:47:29] <Derick> only for the older stuff though
[15:47:43] <Derick> exposing C calls straight into userland makes for shitty APIs
[15:48:00] <GothAlice> Ref: any function with the word "real" in its name.
[15:48:28] <Derick> that's mysql ;-)
[15:50:28] <StephenLynx> lol php
[15:52:07] <digi604> How do you delete a DB from a cluster?
[15:56:03] <cheeser> use somedb
[15:56:08] <cheeser> db.dropDatabase()
[15:57:34] <digi604> sh.status() still displays the database
[15:57:55] <cheeser> https://jira.mongodb.org/browse/SERVER-17397
[16:02:11] <juliofreitas> Hi! Every 15 minutes I get a file with time series data, structured this way: time (2015-03-27T20:00:01Z), state (string), owner (string). What's the best way to store it and search with great performance, given my searches will be by day, choosing an interval (15 minutes, 30 minutes, 1 hour...)?
[16:04:00] <cheeser> juliofreitas: this might help: http://blog.mongodb.org/post/65517193370/schema-design-for-time-series-data-in-mongodb
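The bucketing approach from that article can be sketched like this: one document per owner per hour, with a pre-allocated slot for each 15-minute sample (field names and the slot layout are illustrative, not prescribed by the article):

```javascript
// Build an hourly bucket document with one slot per 15-minute sample.
function hourBucket(owner, hourStart) {
  return {
    _id: owner + ":" + hourStart.toISOString(),
    owner: owner,
    hour: hourStart,
    samples: { "0": null, "15": null, "30": null, "45": null }
  };
}

var bucket = hourBucket("alice", new Date("2015-03-27T20:00:00Z"));

// Recording the 20:15 sample is a single in-place update, e.g. in the shell:
//   db.series.update({ _id: bucket._id },
//     { $set: { "samples.15": { state: "up" } } })
bucket.samples["15"] = { state: "up" };
```

Querying a day at 15/30/60-minute granularity then means fetching a small, predictable number of bucket documents rather than scanning raw samples.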
[16:05:20] <juliofreitas> cheeser, fine fine thank you :D
[16:21:39] <ParisHolley> “driver is incompatible with this server version”, on 3.0.2 with latest 2.x of mongo native driver and mongoose
[16:22:17] <StephenLynx> have you tried without mongoose?
[16:22:32] <StephenLynx> I have just deployed a server and didn't have any issues with the driver.
[16:22:39] <ParisHolley> no, all my projects use mongoose, did a server migration and things just seemed to work fine, except one app
[16:22:57] <ParisHolley> seems to be happening out of nowhere
[16:24:02] <ParisHolley> and, it only happens on writes
[16:24:07] <ParisHolley> can read without issue
[16:26:58] <ParisHolley> nvm
[16:27:05] <ParisHolley> just wiped out node_modules, all is well
[16:31:41] <snowcode> what does "Can't use $geoIntersects with String." mean?
[16:31:55] <snowcode> my query is "AlarmModel.find({geo: { $geoIntersects: { $geometry: {type:'Point',coordinates:[12.506129,41.889117]} } } },".....
[16:34:00] <Derick> snowcode: perhaps your index is the issue? the query seems alright
[16:39:16] <snowcode> Derick my model is pretty simple http://pastebin.com/Rwi7WLhT
[16:39:28] <snowcode> I really don't understand what's wrong...
[16:40:17] <int3nsity> I'm using mongoose in nodejs, is it recommended or you prefer a more low level mongo driver?
[16:40:35] <StephenLynx> I prefer to use the regular driver.
[16:40:47] <StephenLynx> mongoose is a notorious source of headaches.
[16:40:56] <StephenLynx> and since you are using node, I suggest migrating to io.js.
[17:10:38] <snowcode> Derick I've also validated my polygon on http://geojsonlint.com . But it still returns http://pastebin.com/vuRGuXqy
[17:10:50] <snowcode> what the hell is wrong with this geometry :|
[17:12:06] <Derick> snowcode: which MongoDB version?
[17:22:21] <christo_m> i just setup mongo-express but its asking me to authenticate
[17:22:45] <christo_m> no idea what creds to use
[17:23:44] <snowcode> Derick: 2.6.7
[17:25:54] <StephenLynx> yeah, express is pretty crap.
[17:26:08] <StephenLynx> I wouldn't touch it with a mile long pole, christo_m
[17:29:57] <Vitium> You don't like the node.js express framework, StephenLynx?
[17:30:05] <StephenLynx> nope.
[17:30:11] <StephenLynx> it is awful.
[17:30:35] <Vitium> I was going to learn it but ended up choosing meteor because of how low level expressed looked
[17:30:43] <Vitium> express*
[17:30:57] <Vitium> Now Meteor seems a bit too high level
[17:31:00] <snowcode> uh "Point must only contain numeric elements"
[17:31:02] <snowcode> lol
[17:31:13] <StephenLynx> any web framework that doesn't provide any features is a cancer.
[17:31:24] <Vitium> Lol
[17:31:33] <StephenLynx> I can understand something like the android framework, that is used for you to interact with the android API
[17:31:40] <StephenLynx> but these don't do anything you can't already do.
[17:31:49] <StephenLynx> all while eating up performance and adding vulnerabilities.
[17:32:06] <StephenLynx> and completely changing the workflow, making it harder to get documentation and reference.
[17:32:48] <Vitium> Having to write your own HTTP server surprised me
[17:32:54] <StephenLynx> people just use them because they don't know any better. I had a dude that thought he needed them for http.
[17:32:56] <StephenLynx> there
[17:33:23] <cheeser> everyone should write an http server.
[17:33:25] <StephenLynx> you don't write your own http server, that is already built-in in the runtime environment.
[17:33:31] <StephenLynx> you just ask it to run.
[17:33:38] <cheeser> if only to highlight how much we need a better protocol :)
[17:33:56] <StephenLynx> but people don't even try, so they keep stuck with these frameworks.
[17:34:02] <Vitium> Yeah but coming from a .net background it seemed too much
[17:34:05] <Vitium> Maybe I'm just spoiled
[17:34:08] <StephenLynx> yes, you are.
[17:34:35] <StephenLynx> but now you are not even using something that is the default.
[17:34:42] <StephenLynx> and is not officially supported.
[17:34:57] <Vitium> That's one of my worries
[17:35:03] <StephenLynx> in the long run, using these frameworks is asking to get bitten in the ass hard.
[17:35:19] <Vitium> Well meteor has a decent community
[17:35:25] <Vitium> And a company behind it
[17:35:29] <StephenLynx> it remains a sub community.
[17:35:43] <StephenLynx> when you don't use these you just have the whole community.
[17:35:51] <StephenLynx> instead of a set of people that happen to use the same framework as you.
[17:35:53] <Vitium> True
[17:36:00] <Vitium> Never had to worry about that with asp.net
[17:36:07] <Vitium> Everyone just used the same thing
[17:36:10] <StephenLynx> yeah.
[17:36:18] <StephenLynx> you are not in kansas anymore, dorothy.
[17:38:01] <StephenLynx> Ive been working with node (now io.js) since september 2014 I guess, using just the RE and mongo driver, I am glad to work with web for the first time in my life.
[17:38:38] <StephenLynx> just released one of my two projects that use a back-end https://gitlab.com/mrseth/bck_lynxhub
[17:39:02] <StephenLynx> not having to deal with bloat is a bliss.
[17:39:54] <StephenLynx> so yeah, I suggest you at least trying to not use a framework for once, Vitium
[17:40:05] <StephenLynx> also, migrate to io.js, is everything node is, but better and faster.
[17:40:39] <Vitium> StephenLynx, Using node.js without a framework?
[17:40:49] <StephenLynx> yes.
[17:40:53] <Vitium> I'd have to manually deal with gets and posts?
[17:41:05] <StephenLynx> you can just write how you deal with them once.
[17:41:09] <StephenLynx> and reuse it.
[17:41:16] <Vitium> Writing that tho
[17:41:18] <Vitium> lol
[17:41:21] <StephenLynx> and is not hard anyways.
[17:41:41] <StephenLynx> have you ever tried it?
[17:41:44] <Vitium> No
[17:41:47] <StephenLynx> ...
[17:41:54] <Vitium> Lol
[17:42:00] <christo_m> StephenLynx: i dont want to use db.users.findOne({_id:ObjectId(...)}) just to look at document structure
[17:42:03] <christo_m> etc
[17:42:08] <christo_m> i want a phpmyadmin style thing
[17:42:17] <StephenLynx> then you still don't need mongoose.
[17:42:30] <StephenLynx> you need a db manager with a GUI.
[17:42:33] <StephenLynx> which mongoose isn't.
[17:42:46] <Vitium> I'm learning this for a school project, I don't have enough time to code all that
[17:42:59] <snowcode> okay now there is a "Point must only contain numeric elements" error inserting my GeoJSON Data into mongodb. If I try to insert a simple coordinate point [[lon,lat]] it works
[17:43:00] <StephenLynx> the learning curve is higher for these frameworks.
[17:43:06] <snowcode> If I try to insert a polygon
[17:43:08] <Vitium> I have a week, StephenLynx
[17:43:11] <snowcode> it returns this error
[17:43:12] <StephenLynx> because you need to learn 2 things instead of one.
[17:43:13] <snowcode> any idea?
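For what it's worth, "Point must only contain numeric elements" usually means a coordinate arrived as a string rather than a number. A sketch of a well-formed GeoJSON Polygon with a numeric sanity check (the coordinates are arbitrary):

```javascript
// A valid GeoJSON Polygon: an array of linear rings, each ring an array of
// [longitude, latitude] pairs of *numbers*, with the last point equal to the first.
var polygon = {
  type: "Polygon",
  coordinates: [[
    [12.50, 41.88],
    [12.51, 41.88],
    [12.51, 41.89],
    [12.50, 41.88]  // ring closed
  ]]
};

// The error typically fires when a pair looks like ["12.50", "41.88"] instead,
// e.g. values read from a form or CSV without conversion. Check every element:
var allNumeric = polygon.coordinates[0].every(function (pt) {
  return pt.every(function (n) { return typeof n === "number"; });
});
```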
[17:43:37] <christo_m> StephenLynx: okay well.
[17:43:43] <christo_m> not sure what to do now
[17:43:48] <christo_m> guess ill stare at my monitor for a bit till something makes sense
[17:44:01] <StephenLynx> what I did was to write the schema in the documentation
[17:44:04] <StephenLynx> and refer to it.
[17:44:10] <StephenLynx> since mongo doesn't guarantee a schema at all.
[17:44:14] <christo_m> StephenLynx: id like to just look at data and manipulate
[17:44:28] <StephenLynx> then you need a manager with a GUI.
[17:44:32] <christo_m> i want to see if when im adding things that its actually reflected in the database
[17:44:37] <christo_m> okay what do you recommend
[17:44:47] <StephenLynx> you can always check the insert result.
[17:44:53] <StephenLynx> that is provided by the driver.
[17:45:21] <StephenLynx> https://gitlab.com/mrseth/bck_lynxhub/blob/master/doc/model.txt the model documentation I told you about.
[17:45:34] <StephenLynx> I don't need a tool to check the schema because I have it defined.
[17:49:45] <christo_m> StephenLynx: i get that
[17:49:48] <christo_m> but id like to look at the actual data
[17:49:56] <christo_m> thats why i wanted to use mongo-express
[17:50:03] <StephenLynx> I use the terminal client for that.
[17:50:08] <christo_m> i know you do
[17:50:12] <StephenLynx> just add a pretty() in the end.
[17:50:12] <christo_m> and like i said earlier, i don't want to
[17:50:30] <StephenLynx> well, if you don't want to use the right tools, use the wrong ones then.
[17:50:44] <StephenLynx> its your project that will be compromised because of what you "want".
[17:50:54] <StephenLynx> instead of you doing what you "need".
[17:51:14] <christo_m> so because id rather use a GUI the implication is that im using the wrong tools?
[17:51:22] <StephenLynx> no.
[17:51:37] <StephenLynx> I already told you to use a manager with a GUI
[17:51:41] <christo_m> its cool if you dont agree with or like mongo-express but saying "just use the terminal" isn't a solution to what im saying
[17:51:44] <christo_m> yes, then i asked you which
[17:51:48] <StephenLynx> you don't need express anywhere for that.
[17:51:56] <StephenLynx> well, you asked me which.
[17:52:01] <StephenLynx> and I use nothing but the terminal.
[17:52:07] <christo_m> welp gg i guess
[17:52:28] <StephenLynx> can't you try google for a manager with a GUI?
[17:53:10] <christo_m> StephenLynx: http://docs.mongodb.org/ecosystem/tools/administration-interfaces/ i was here and mongo-express was listed there
[17:53:24] <christo_m> but you called it the wrong tool so now i dont know what to believe
[17:53:29] <christo_m> foundations broken etc
[17:53:36] <StephenLynx> aaaah
[17:53:44] <StephenLynx> mongo-express is not what I thought it was.
[17:53:49] <christo_m> :P
[17:53:52] <StephenLynx> I thought it was something for your server.
[17:53:56] <StephenLynx> and not just a regular client.
[17:53:59] <StephenLynx> yeah, use it, w/e
[17:54:06] <christo_m> well its asking me for creds
[17:54:12] <christo_m> and i dont remember setting any up when i installed mongodb
[17:54:26] <StephenLynx> yeah, that is the reason I wouldn't use it.
[17:54:27] <christo_m> my user creds didn't work
[17:54:40] <StephenLynx> try giving it blank fields.
[17:55:51] <christo_m> hmm nope
[17:55:59] <StephenLynx> yeah, dunno
[17:56:05] <StephenLynx> GUI managers suck :^)
[17:57:01] <markin> i've been happy with mongohub
[17:57:29] <christo_m> oh nice
[17:57:32] <christo_m> mac native, ill try it, thx
[18:13:48] <gswallow> were there deprecation notices that simple things like Mongo::Connection.new() would just *stop* working when other peoples' code requires mongo gems, and you released 2.0.0?
[18:16:10] <GothAlice> gswallow: There are still people in existence who don't pin version number ranges? O_o
[18:17:06] <GothAlice> The latest update broke a fair number of things in Python-land, but "pymongo<3" is really, really easy to define as your package's requirement.
[18:19:42] <gswallow> not my package, but were there deprecation warnings?
[18:20:01] <gswallow> because I'd have thought about it sooner if there were (unless I ignored them, which is equally likely)
[18:26:37] <arussel> is that true: if I don't add the keyFile attribute, then anyone can do anything (if they can reach the host)?
[18:26:58] <StephenLynx> no.
[18:27:14] <StephenLynx> you would also have to allow for external connections.
[18:27:43] <StephenLynx> by default one can only connect from the host itself.
[18:28:00] <StephenLynx> so the person would have to log in as a user on the server first.
[18:28:09] <arussel> ok, thanks
[18:28:25] <GothAlice> But! If you do open it up to connections from anywhere, be sure to either enable authentication (and use good passphrases) and/or set up strong firewall rules.
[18:28:49] <arussel> so 1. I add keyFile to each of my nodes. 2. add an admin user 3.add an 'app' user
[18:29:04] <GothAlice> Pretty much, yes.
[18:29:08] <StephenLynx> if you want to allow for external connections, yes.
[18:29:46] <arussel> does adding the keyFile attribute enable authentication, or is it the admin user creation ?
[18:30:07] <arussel> I'm trying to find out when I should take my app down and when to restart it with its new credentials
[18:30:10] <GothAlice> keyFile requires and enables auth automatically.
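The three steps arussel lists can be sketched as follows; the user names, passwords, roles, and database name are placeholders, and the `createUser` calls are what you would run in the shell once each node is started with `--keyFile`:

```javascript
// Step 2: admin user, created against the admin database.
var adminUser = {
  user: "admin",
  pwd: "a long passphrase",
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
};

// Step 3: application user, scoped to the app's database.
var appUser = {
  user: "app",
  pwd: "another long passphrase",
  roles: [{ role: "readWrite", db: "myapp" }]
};

// In the shell:
//   use admin;  db.createUser(adminUser);
//   use myapp;  db.createUser(appUser);
```

The app then gets restarted pointing at the `app` credentials once auth is live.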
[18:30:35] <arussel> thanks
[18:33:49] <arussel> if the app is passing credentials but there is no keyFile, would the credentials just be ignored or would this throw an error ?
[18:34:48] <StephenLynx> from my experience
[18:35:02] <StephenLynx> it only throws an error when you try to do something the user is not allowed.
[18:35:12] <StephenLynx> is not allowed to do unless authenticated*
[18:35:44] <StephenLynx> I could get it to connect and get references to collections fine with a failed authentication.
[18:37:41] <gnu_d> Hi, I'm researching sharding. Say I have two mongo servers: the first will hold the newest data, and the other will hold data older than 6 months. Is it OK to make a scheduled job that searches for documents older than x time and relocates them to the other shard, or does this happen automatically when configuring the shard? If so, can you tell me how?
[18:39:53] <cheeser> gnu_d: you might consider http://docs.mongodb.org/manual/core/tag-aware-sharding/
[18:39:53] <arussel> StephenLynx: thanks
[18:40:06] <cheeser> have a process tag your old data and let the balancer handle it
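cheeser's suggestion can be sketched roughly like this; the shard names, namespace, and `ts` field are illustrative, and the `sh.*` helpers shown in the comments are the shell's tag-aware-sharding commands:

```javascript
// In the shell, roughly:
//   sh.addShardTag("shard0000", "recent")
//   sh.addShardTag("shard0001", "archive")
//   sh.addTagRange("mydb.events", { ts: MinKey }, { ts: cutoff }, "archive")
//   sh.addTagRange("mydb.events", { ts: cutoff }, { ts: MaxKey }, "recent")
// The balancer then migrates chunks to match the tags; a scheduled job only
// needs to advance the cutoff, not move documents itself.
var SIX_MONTHS_MS = 6 * 30 * 24 * 3600 * 1000; // rough 6-month window
var cutoff = new Date(Date.now() - SIX_MONTHS_MS);
```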
[18:40:07] <greyTEO> does anyone have any experience using http://mongify.com/?
[18:41:40] <StephenLynx> that sounds interesting, greyTEO, but I wouldn't trust an automated tool to do something like that.
[18:41:59] <StephenLynx> migrating a db like that is a one-time operation that needs some careful thought.
[18:42:18] <greyTEO> StephenLynx, those were my thoughts
[18:42:38] <gnu_d> cheeser: does this balancer needs to be configured, or it's on by default ?
[18:42:52] <greyTEO> just wondering if anyone has used it. I am currently handling this migration with custom code.
[18:43:23] <StephenLynx> yeah, I would keep on doing that. is not much about the process itself, is much more about the design of the new database.
[18:43:46] <StephenLynx> you must consider how you query your data when designing a non-relational database.
[18:44:03] <StephenLynx> it can break your legs going for the wrong choice there.
[18:44:24] <greyTEO> this has been a difficult challenge.
[18:45:14] <greyTEO> Makes you think a lot about the product/features when you move away from relational structures
[18:45:25] <StephenLynx> yes.
[18:45:26] <cheeser> gnu_d: it's what does the sharding
[18:45:40] <StephenLynx> I had to refactor quite some times with my project during its development.
[18:46:01] <StephenLynx> and there will always be limitations when using non-relational databases.
[18:46:46] <greyTEO> Luckily I am making heavy use of elasticsearch, so my database change didnt disrupt too much
[18:47:06] <gnu_d> cheeser: so, how many mongodb instances are needed ?
[18:47:21] <cheeser> for?
[18:50:20] <gnu_d> cheeser: for the sharded tagging.
[18:51:01] <cheeser> it's just sharding...
[18:58:14] <jeromegn> any maintainers of the mongo-ruby-driver gem hang around here?
[18:59:51] <StephenLynx> I really doubt.
[19:00:03] <StephenLynx> is the driver maintained by 10gen?
[19:00:39] <cheeser> well, there is *a* supported ruby driver, yes. i think that's the one.
[19:00:52] <cheeser> but there are several around, i think, and i'm not a ruby guy.
[19:01:26] <jeromegn> yes it’s maintained by 10gen
[19:01:36] <StephenLynx> hm
[19:01:41] <StephenLynx> yeah, then you might have a chance.
[19:01:44] <jeromegn> well the “mongo” one
[19:01:59] <StephenLynx> but I never see any ruby discussion here. I've seen more C++ talk than Ruby.
[19:02:16] <jeromegn> interesting
[19:20:04] <christo_m> StephenLynx: quick question, data design thing
[19:20:21] <christo_m> I have a concept of a feed or queue for users. is it better to have it be a nested subdocument?
[19:20:38] <christo_m> its a 1-1 relationship so having it separated out doesn't make much sense.. except for the fact that i dont want to dirty up the user code i have currently.
[19:20:47] <christo_m> id rather queues have their own endpoint in the API etc
[19:21:06] <StephenLynx> you don't have to dirty your user code.
[19:21:12] <StephenLynx> why would you need to do that?
[19:21:17] <christo_m> im using angular fullstack
[19:21:20] <StephenLynx> ah
[19:21:21] <christo_m> i just go off the generator
[19:21:23] <StephenLynx> well
[19:21:30] <StephenLynx> I wouldnt use angular.
[19:21:38] <christo_m> well angular has nothing to do with this
[19:21:42] <christo_m> as its express/node stuff
[19:21:47] <StephenLynx> I still wouldnt use it.
[19:21:54] <christo_m> lol, why
[19:21:54] <StephenLynx> as well as express.
[19:22:10] <StephenLynx> because it is unnecessary bloat.
[19:22:20] <StephenLynx> it takes control away from you
[19:22:32] <StephenLynx> and makes you shape your project around your dependency.
[19:22:32] <christo_m> seems alright to me
[19:22:55] <StephenLynx> you could just query for the user's queue separately.
[19:23:05] <christo_m> im using ionic framework on mobile so i like that ill be able to reuse the services im creating.
[19:23:05] <StephenLynx> without touching the code you already have.
[19:23:10] <christo_m> yes thats what i was thinking
[19:23:12] <christo_m> not sure about performance though
[19:23:20] <StephenLynx> whats wrong with it?
[19:23:26] <StephenLynx> a query is a query.
[19:23:30] <christo_m> well, ill have to map reduce or something won't i?
[19:23:38] <christo_m> need to find the queues that belong to the user in question etc
[19:23:45] <StephenLynx> a regular query so far.
[19:23:54] <StephenLynx> you can use aggregate to shape more or less.
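The "regular query so far" point can be sketched concretely; the collection and field names here are hypothetical, chosen to match christo_m's queue-per-user idea:

```javascript
// A queues collection keyed by user: one indexed lookup, no map-reduce needed.
var queueQuery = { userId: "abc123" };
// In the shell: db.queues.findOne(queueQuery)

// aggregate() can reshape the result, e.g. flatten the items and sort newest first:
var pipeline = [
  { $match: { userId: "abc123" } },
  { $unwind: "$items" },
  { $sort: { "items.addedAt": -1 } }
];
// In the shell: db.queues.aggregate(pipeline)
```

Either way the existing user documents stay untouched, which was the original concern.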
[19:24:17] <StephenLynx> and what is ionic?
[19:24:25] <christo_m> StephenLynx: http://ionicframework.com/
[19:24:30] <christo_m> ^ why i got into angular in the first place
[19:25:03] <christo_m> StephenLynx: okay so my rest endpoint will be like /queue/:userid
[19:25:07] <christo_m> i guess?
[19:25:08] <StephenLynx> why not just using the regular SDK?
[19:25:13] <christo_m> what regular SDK
[19:25:16] <StephenLynx> I don't know
[19:25:21] <StephenLynx> you can make it be if you want.
[19:25:30] <christo_m> well im trying to design this properly..
[19:25:34] <StephenLynx> I don't work with pseudo REST.
[19:25:36] <christo_m> i know i can make it whatever i want but ya
[19:25:49] <StephenLynx> I just make separate files and give them descriptive names.
[19:26:10] <christo_m> yes well , the other way will be /users/queue
[19:26:12] <christo_m> or something
[19:26:15] <christo_m> just seems weird
[19:26:26] <StephenLynx> https://gitlab.com/mrseth/bck_lynxhub/tree/master/pages
[19:26:31] <christo_m> because i have the queue as a nested document
[19:26:33] <christo_m> of a user right now
[19:26:43] <StephenLynx> I just use a flat hierarchy
[19:27:01] <StephenLynx> because querying for data is not just grabbing whats in your db.
[19:27:11] <christo_m> right, so flat would be having /queues as its own thing.
[19:27:18] <StephenLynx> yeah
[19:27:34] <StephenLynx> https://gitlab.com/mrseth/bck_lynxhub/blob/master/doc/backend_interface.txt
[19:27:35] <christo_m> and not having a nested document structure
[19:27:44] <StephenLynx> you can.
[19:27:53] <StephenLynx> one thing is unrelated to the other.
[19:28:14] <StephenLynx> this is the problem with using these kinds of tools I tell people not to.
[19:28:29] <StephenLynx> they don't realize how free they are to shape their work.
[19:28:40] <christo_m> id rather have some level of convention.
[19:29:00] <christo_m> it lets me know others have established some patterns and that things will scale
[19:29:08] <StephenLynx> first
[19:29:20] <StephenLynx> you can just define what your patterns are and be consistent about it.
[19:29:26] <StephenLynx> you don't need a dependency for that.
[19:29:29] <StephenLynx> and second
[19:29:46] <StephenLynx> scalability is mostly a buzzword that has nothing to do with it anyway.
[19:29:54] <christo_m> https://github.com/DaftMonk/generator-angular-fullstack
[19:29:59] <christo_m> this is what i use
[19:30:05] <StephenLynx> yeah, that is crap.
[19:30:15] <christo_m> lol ok
[19:30:34] <StephenLynx> dependencies on top of dependencies on top of dependencies on top of dependencies.
[19:30:44] <StephenLynx> that add no real functionality.
[19:30:48] <StephenLynx> it is the same but different.
[19:31:01] <christo_m> yes well, that is programming
[19:31:04] <christo_m> dependencies on dependencies
[19:31:12] <StephenLynx> <StephenLynx> that add no real functionality.
[19:31:21] <StephenLynx> thats the difference.
[19:31:28] <StephenLynx> when I use mongo driver, for example
[19:31:30] <christo_m> yeoman is useful
[19:31:34] <christo_m> it scaffolds things for me
[19:31:40] <StephenLynx> what does it do?
[19:31:42] <christo_m> bower is useful
[19:31:45] <StephenLynx> bower
[19:31:48] <StephenLynx> is awful
[19:31:52] <christo_m> npm is useful
[19:31:58] <christo_m> karma is useful
[19:32:00] <StephenLynx> yeah, npm is.
[19:32:02] <christo_m> protractor is useful
[19:32:06] <StephenLynx> never heard of karma or protractor.
[19:32:09] <StephenLynx> what these do?
[19:32:13] <christo_m> its for testing
[19:32:17] <StephenLynx> lol
[19:32:28] <StephenLynx> anyway, let me give you how I use dependencies
[19:32:38] <StephenLynx> when I use the mongo driver, for example
[19:32:38] <christo_m> unit, and e2e
[19:32:40] <christo_m> ok
[19:33:12] <StephenLynx> I use it because it does something that I really need, that is specialized and has lots of details to implement.
[19:33:23] <christo_m> well congratulations i guess?
[19:33:30] <StephenLynx> when I use imagemagick I use it because doing image manipulation is not trivial.
[19:33:33] <StephenLynx> but these
[19:33:35] <christo_m> im working on something that has deadlines, i dont have time to muck about with configuration when something works out of the box
[19:33:47] <StephenLynx> yeah, but when you add dependencies that change your workflow
[19:33:50] <christo_m> i have a lot of work done in just 2 days with this setup
[19:33:57] <StephenLynx> you are increasing the learning curve.
[19:34:00] <christo_m> grunt is another tool in there that is amazing
[19:34:06] <StephenLynx> grunt is really bad.
[19:34:07] <christo_m> minifies , bundles, obfuscates everything
[19:34:13] <StephenLynx> >obfuscates
[19:34:14] <christo_m> yes i prefer gulp myself but this uses grunt
[19:34:14] <GothAlice> …
[19:34:18] <StephenLynx> solving a non issue.
[19:34:31] <GothAlice> Obfuscated CSS/JS?
[17:35:07] <StephenLynx> anything that involves a build step for stuff that can just be interpreted is bad.
[19:35:51] <StephenLynx> unless it is some sort of really incredible optimization that you really, really need
[19:36:07] <cheeser> we use it to minimize our js/css
[19:36:13] <christo_m> i enjoy dependency injection with angular, i enjoy two way data binding
[19:36:17] <christo_m> i dont find any of it bloated at all tbh
[19:36:23] <christo_m> at least compared to what i did with jquery..
[19:36:29] <GothAlice> Heeeeh.
[19:36:31] <christo_m> im aware there are other frameworks out there like react.js etc
[19:36:45] <christo_m> but i can swap things later , im just trying to scaffold this quickly.
[19:38:10] <StephenLynx> whatever floats your goat, m8
[19:38:17] <StephenLynx> you asked me how to design it.
[19:38:36] <christo_m> yes i was strictly talking about an API decision
[19:38:39] <christo_m> nothing to do with front end really.
[19:39:19] <christo_m> also, data design question, because i dont have much experience with mongo/mongoose
[19:39:36] <GothAlice> mongoose. ._.
[19:39:39] <StephenLynx> lol
[19:39:41] <StephenLynx> mongoose
[19:39:51] <christo_m> i guess thats wrong too
[19:39:54] <christo_m> oh well :)
[19:39:55] <StephenLynx> yeah
[19:40:08] <GothAlice> It's a greater-than-average source of problems.
[19:40:32] <StephenLynx> learn from the elders. http://homepage.cs.uri.edu/~thenry/resources/unix_art/ch01s06.html
[19:40:34] <christo_m> whats funny is i can go into another channel
[19:40:41] <christo_m> and mention mongodb, and people will tell me how that tech is shitty
[19:40:43] <cheeser> me, too!
[19:40:45] <christo_m> and how i should use their favorite bullshit
[19:40:46] <cheeser> oh.
[19:41:01] <christo_m> the elitism on freenode is just crazy
[19:41:02] <GothAlice> christo_m: https://blog.serverdensity.com/does-everyone-hate-mongodb/
[19:41:02] <StephenLynx> yeah, I dont tell what to use.
[19:41:04] <StephenLynx> just what to not use.
[19:41:12] <StephenLynx> hey, I am an elitist on rizon too :c
[19:42:47] <GothAlice> I'm biased, as a contributor/owner to/of several of the projects I'll mention, but MongoDB has replaced a multitude of other services (queues, caches, etc.) for me, I use an ODM called MongoEngine that supplies, application-side, many nice things like efficient fake joins using caching, triggers, reverse delete rules, etc, and build a multitude of packages (under the "marrow" org on Github) that depend on it.
[19:43:02] <GothAlice> MongoDB is fantastic. MongoEngine is great. Mongoose is… a bad teacher.
[19:43:22] <christo_m> well again, this is what came out of the box with the generator
[19:43:24] <christo_m> i dont *have* to use it
[19:43:36] <christo_m> so when i hit the problems you're talking about, ill consider it, seems to work fine for me right now
[19:44:59] <StephenLynx> that is the problem with these kinds of solutions. it is one big thing. it is heavy and obscure and asks you to shape your problem around its solution.
[19:45:10] <GothAlice> christo_m: The link I provided counters many of the common points others may attempt to use to nay-say MongoDB and can help avoid falling into some of the same traps. :)
[19:46:25] <christo_m> GothAlice: ya a lot of the things mentioned there are over my head right now
[19:46:33] <christo_m> id like to have the problems theyre talking about, means i have users :D
[19:46:53] <christo_m> StephenLynx: except it isnt like that at all.. because the components are right there in front of you, there's nothing magical happening
[19:47:04] <christo_m> you can swap whatever you want
[19:47:07] <StephenLynx> hm
[19:47:17] <GothAlice> MongoDB "eats its own dog food" when it comes to APIs and whatnot; the lack of magic is itself almost magical.
[19:47:38] <GothAlice> (Thus application-side triggers, application-side reverse delete rules, application-side everything, basically.)
[19:47:55] <GothAlice> It's a refreshing approach, vs. relational databases that continually pile on more features into the core.
[19:47:56] <christo_m> have mercy i am noob
[19:53:32] <StephenLynx> yeah, it saddens me when people start piling up stuff on mongo.
[19:54:09] <StephenLynx> All I can think is that "this is not how it was supposed to be"
[19:54:48] <cheeser> pesky applications!
[20:03:49] <GothAlice> Now here's an interesting question: BSON itself has no particular restrictions on field names. (It's a combined Pascal+C string.) Most drivers seem to enforce no $ in field names. DBRef is a "complex" type (i.e. a compound of simpler types) that uses $ in its field names. Is it OK to create custom types like DBRef for your own use?
[20:03:53] <GothAlice> And notably, are there any examples of doing this?
[20:07:39] <StephenLynx> can you use $ on a field name in the terminal?
[20:07:52] <cheeser> StephenLynx: i don't think so
[20:08:03] <cheeser> GothAlice: there'd be no serverside support for it...
[20:08:22] <cheeser> but as far as bson goes, it'd just have to serialize to a bson document.
[20:08:56] <StephenLynx> so does mongo itself enforce field names without $ as well?
[20:09:04] <GothAlice> That's also not exactly what I'm describing, StephenLynx. I don't wish to manually commit embedded documents with $ in their field names, I'm wanting to create a new BSON adapter (like DBRef) that self-encodes other types. Mostly looking for an example of extending pymongo for additional types.
[20:09:53] <GothAlice> StephenLynx: pymongo.errors.InvalidName: key '{key}' must not start with '$'
[20:09:59] <GothAlice> It's checked in the client driver.
[20:10:09] <StephenLynx> yes, but is it checked on the server itself?
[20:10:17] <StephenLynx> thats why I asked about the terminal.
[20:10:33] <GothAlice> The mongo shell is a client driver…
[20:10:35] <GothAlice> ^_^
[20:10:39] <StephenLynx> ah
[20:11:20] <GothAlice> And as expected, it explodes when I attempt to use $ in a field name, at the client level. (src/mongo/shell/collection.js)
[20:12:38] <cheeser> right. fields can't have "." in the name either
[20:13:29] <GothAlice> cheeser: It's always dubious how much of the validation goes on server-side. "[object Object]" being a valid collection name always gives me pause to shake my head. ;)
[20:14:51] <cheeser> javascript--
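The validation discussed above can be sketched as a small checker that approximates what the client drivers reject (this mirrors the quoted pymongo error message, it is not pymongo's actual code):

```python
def check_key(key: str) -> None:
    """Approximate the client-side field-name checks discussed above:
    top-level keys must not start with '$' and must not contain '.'.
    This imitates what pymongo and the mongo shell enforce; the server
    itself does far less of this validation."""
    if key.startswith("$"):
        raise ValueError("key %r must not start with '$'" % key)
    if "." in key:
        raise ValueError("key %r must not contain '.'" % key)

check_key("name")  # passes silently
for bad in ("$set", "a.b"):
    try:
        check_key(bad)
    except ValueError as exc:
        print("rejected:", exc)
```

Compound types like DBRef sidestep the check because the driver itself emits the `$`-prefixed field names, rather than accepting them from user documents.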
[20:16:15] <christo_m> [CastError: Cast to ObjectId failed for value "add" at path "_id"]
[20:16:21] <christo_m> i guess the mongoose errors have begun ;)
[20:16:33] <GothAlice> Heh, yup.
[20:16:37] <StephenLynx> heh
[20:16:46] <StephenLynx> "Less is more".
[20:17:21] <GothAlice> Biggest problem I frequently see with Mongoose users: it effectively forces you to throw your hands up and treat all ObjectIds as plain strings. (I.e. the hex encoded binary form, taking up more than 2x as much space, and heaven help you if you mix real ObjectIds and string-form ones.)
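The space point can be checked directly: an ObjectId is 12 bytes of binary, while its hex string form is 24 characters, so storing strings more than doubles the size, and the two forms never compare equal. A quick sketch using random bytes as a stand-in for a real ObjectId:

```python
import os

# Stand-in for an ObjectId's 12 raw bytes (no bson dependency needed
# for the size comparison itself).
raw = os.urandom(12)
hex_form = raw.hex()  # what string-based code ends up storing

assert len(raw) == 12        # binary ObjectId: 12 bytes
assert len(hex_form) == 24   # hex string form: 24 bytes, plus overhead

# The mixing hazard: a binary id and its string form never match,
# so queries silently find nothing when the types disagree.
assert raw != hex_form.encode()
```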
[20:19:42] <mortal1> howdy folks
[20:20:16] <GothAlice> Why hello there.
[20:20:25] <mortal1> I did a db.collection.insert, instead of an upsert, and i think it may have overwritten my collection
[20:20:34] <StephenLynx> first time I see someone get greeted back in freenode :v
[20:20:44] <mortal1> :)
[20:20:50] <StephenLynx> mortal1 that is not possible.
[20:20:59] <GothAlice> mortal1: Insert won't delete or overwrite things.
[20:21:02] <StephenLynx> check if you are using the correct database.
[20:21:08] <GothAlice> And the correct collection name.
[20:21:10] <mortal1> oh good!
[20:21:26] <mortal1> that means it was screwed up to begin with lol
[20:21:37] <StephenLynx> on the other hand, update might completely overwrite a document if you forget $set.
[20:21:41] <StephenLynx> happened to me a couple of times :v
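StephenLynx's foot-gun can be modeled with a toy simulation of the difference between a $set update and a plain replacement document (an approximation for illustration, not the server's actual update code):

```python
def apply_update(doc: dict, update: dict) -> dict:
    """Toy model of MongoDB update semantics: an update document made
    of operators like $set merges fields into the existing document;
    a plain document replaces everything except _id."""
    if any(k.startswith("$") for k in update):
        out = dict(doc)
        out.update(update.get("$set", {}))
        return out
    # Plain replacement: only _id survives.
    return {"_id": doc["_id"], **update}

doc = {"_id": 1, "name": "ada", "karma": 10}
print(apply_update(doc, {"$set": {"karma": 11}}))
# -> {'_id': 1, 'name': 'ada', 'karma': 11}
print(apply_update(doc, {"karma": 11}))  # forgot $set
# -> {'_id': 1, 'karma': 11}  ... 'name' is gone
```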
[20:23:33] <mortal1> GothAlice: I once had hipchat magically insert a ; in a sql statement, right before the where
[20:23:43] <GothAlice> Haaaaah.
[20:23:43] <mortal1> that was a bad day
[20:24:44] <mortal1> now , every time someone sends me code via hipchat, my eye twitches a little
[20:27:15] <christo_m> GothAlice: {"message":"Cast to ObjectId failed for value \"add\" at path \"_id\"","name":"CastError","type":"ObjectId","value":"add","path":"_id"} i dont even know what this is doing
[20:27:24] <christo_m> im not doing anything special.. just routed to an empty function with express.js
[20:27:38] <GothAlice> I have no idea what that means or is doing, either.
[20:29:08] <jhoff909> is there a recommended replica set config when running on linux in Azure? and where there are clients external to Azure?
[20:33:12] <StephenLynx> do you really have to use azure?
[20:33:29] <StephenLynx> I heard its pretty expensive.
[20:33:36] <StephenLynx> and its proprietary
[20:36:05] <mortal1> StephenLynx: i hear they're now supporting linux on it, which is ironic
[20:36:33] <mortal1> we are talking about MS's cloud right?
[20:36:52] <StephenLynx> well, not supporting it would be having way too much faith in people's stupidity.
[20:36:59] <StephenLynx> and yes.
[20:37:06] <StephenLynx> MS's cloud.
[21:03:13] <jhoff909> yes, it's MS cloud and yes, they support linux
[21:03:27] <jhoff909> aws is proprietary too :)
[21:06:42] <snowcode> anyone know how to store a GeoJSON Polygon into mongodb? It keeps saying "Point must only contain numeric elements" but i want to store a polygon, not a point
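For reference, a GeoJSON Polygon document that a 2dsphere index will accept is shaped like the sketch below; the error quoted above often means the coordinates are malformed or a legacy 2d index/query is treating the field as a point. Field names here are hypothetical:

```python
# A valid GeoJSON Polygon: "coordinates" is a list of linear rings,
# each ring a list of [longitude, latitude] pairs whose first and
# last points are identical (a closed ring).
polygon_doc = {
    "name": "example area",          # hypothetical field
    "loc": {                         # hypothetical field, needs a 2dsphere index
        "type": "Polygon",
        "coordinates": [[
            [0.0, 0.0], [3.0, 0.0], [3.0, 2.0], [0.0, 2.0], [0.0, 0.0],
        ]],
    },
}

ring = polygon_doc["loc"]["coordinates"][0]
assert ring[0] == ring[-1]  # the ring must be closed
assert all(isinstance(c, float) for pt in ring for c in pt)
```

A legacy `2d` index only understands flat coordinate pairs, which is one way to end up with a "Point must only contain numeric elements" complaint about a polygon.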
[21:08:52] <sudomarize> I have a Task collection, and each Task has been put into a category, e.g. one Task may have a category "iPads" <- "Apple" <- "Hardware" <- "Repair" (where the categories are a mongo tree, each one with a "parent" key pointing to the id of its category parent)
[21:09:59] <sudomarize> Would it make sense to put the category tree for a particular Task into a {categories: []} array in my Task document?
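One common approach for the tree sudomarize describes is to denormalize the ancestor chain into the Task document at write time. A toy in-memory sketch (hypothetical data and names, not a specific schema recommendation):

```python
# Hypothetical in-memory stand-in for the categories collection:
# each category points at its parent by id, as described above.
categories = {
    1: {"_id": 1, "name": "Repair",   "parent": None},
    2: {"_id": 2, "name": "Hardware", "parent": 1},
    3: {"_id": 3, "name": "Apple",    "parent": 2},
    4: {"_id": 4, "name": "iPads",    "parent": 3},
}

def ancestor_names(cat_id):
    """Walk parent pointers to build the denormalized chain a Task
    document could cache in a {"categories": [...]} array, trading a
    write-time walk for cheap reads."""
    chain = []
    while cat_id is not None:
        cat = categories[cat_id]
        chain.append(cat["name"])
        cat_id = cat["parent"]
    return chain

task = {"title": "cracked screen", "categories": ancestor_names(4)}
print(task["categories"])
# -> ['iPads', 'Apple', 'Hardware', 'Repair']
```

The cost is that renaming or re-parenting a category means updating every Task that cached the chain, so it fits best when the tree changes rarely.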
[21:28:23] <greyTEO> what is the deepest level an object should be nested? speaking in terms of best practices...
[21:33:36] <GothAlice> greyTEO: No more than one level of nesting, unless you are _extremely_ careful.
[21:34:01] <greyTEO> so "address.locale.postalCode" is frowned upon?
[21:34:14] <GothAlice> How would you update that?
[21:34:25] <GothAlice> Well, also, it depends on if lists are involved.
[21:34:25] <greyTEO> just replace local
[21:34:34] <greyTEO> locale is an object
[21:35:42] <greyTEO> if it's best to flatten, I will extend the address object rather than introducing another.
[21:35:51] <GothAlice> Straight nesting of document in document is OK up to pretty much any level.
[21:35:58] <GothAlice> You run into trouble when you introduce arrays.
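GothAlice's point that plain nesting works at any depth, because dot notation reaches in, can be sketched with a hypothetical `set_path` helper imitating a `{"$set": {"address.locale.postalCode": ...}}` update (a simulation, not driver code):

```python
def set_path(doc: dict, path: str, value) -> dict:
    """Toy model of $set with dot notation: walk nested documents
    segment by segment and assign only the leaf, leaving sibling
    fields untouched. Mutates doc in place for simplicity."""
    parts = path.split(".")
    node = doc
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return doc

user = {"address": {"locale": {"postalCode": "H0H 0H0", "country": "CA"}}}
set_path(user, "address.locale.postalCode", "90210")
print(user["address"]["locale"])
# -> {'postalCode': '90210', 'country': 'CA'}  (country untouched)
```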
[21:36:40] <greyTEO> Yeah, I have noticed that with arrays...kind of tricky
[21:37:10] <greyTEO> I always use $set on the array and replace it rather than updating a single element...if I need arrays
[21:37:32] <GothAlice> That's not good.
[21:37:40] <GothAlice> $push/$pushAll/$pull are your friends.
[21:37:59] <GothAlice> ($pushAll being deprecated these days as the functionality has been rolled into $push)
[21:38:09] <greyTEO> or $addToSet I should say
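The operators mentioned above can be imitated in-memory to show why they beat replacing the whole array: each touches only the element involved. A toy sketch with hypothetical helpers, not MongoDB's own code:

```python
def push(arr: list, value) -> list:
    """Imitates $push: always appends."""
    return arr + [value]

def add_to_set(arr: list, value) -> list:
    """Imitates $addToSet: appends only if the value is absent."""
    return arr if value in arr else arr + [value]

def pull(arr: list, value) -> list:
    """Imitates $pull: removes every matching element."""
    return [v for v in arr if v != value]

tags = ["db", "nosql"]
print(push(tags, "db"))        # -> ['db', 'nosql', 'db']
print(add_to_set(tags, "db"))  # -> ['db', 'nosql']
print(pull(tags, "db"))        # -> ['nosql']
```

Replacing the whole array with $set instead risks clobbering concurrent writers, since the last full-array write wins.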
[23:26:30] <bros> I know you can query for the nth index in an array. (ex. log.0.time) How can you do the last index? -1? $? where clause?
[23:53:01] <GothAlice> That page took 15 seconds to load.
[23:53:07] <GothAlice> Er, wrong window. Sorry!
[23:54:55] <GothAlice> bros: Alas, you can't query the last element with standard queries unless you know how many elements there are. You can project to the last using http://docs.mongodb.org/manual/reference/operator/projection/slice/ or use http://docs.mongodb.org/manual/reference/operator/aggregation/unwind/ followed by an additional $match in an aggregate query.
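GothAlice's $slice suggestion can be modeled with a toy projection helper (a simulation of `{field: {"$slice": -n}}` for illustration, not the server's implementation):

```python
def slice_last(doc: dict, field: str, n: int = 1) -> dict:
    """Toy model of projecting {field: {"$slice": -n}}: keep only the
    last n elements of the array, which answers "last index" queries
    like log.0.time-in-reverse without knowing the array's length."""
    out = dict(doc)
    out[field] = doc[field][-n:]
    return out

entry = {"_id": 1, "log": [{"time": 1}, {"time": 2}, {"time": 3}]}
print(slice_last(entry, "log"))
# -> {'_id': 1, 'log': [{'time': 3}]}
```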