PMXBOT Log file Viewer


#mongodb logs for Tuesday the 5th of July, 2016

[06:03:27] <7GHABHM56> Hi there, can I set [secondary to other hidden replica ] replication without touching anything on my primary node ?
[06:04:51] <kurushiyama> 7GHABHM56 Huh?
[06:05:20] <kurushiyama> 7GHABHM56 Please explain in more detail what you want to achieve – and why.
[06:06:39] <7GHABHM56> I mean, is it possible like, I have a secondary replica, I want to make a clone of it,(yet another secondary hidden replica) but don't want (actually afraid) to touch primary node ? kurushiyama
[06:07:59] <7GHABHM56> kurushiyama: what's your recommendation on this ?
[06:08:02] <kurushiyama> 7GHABHM56 You want to instruct a hidden _member_ to replicate from a specific secondary or you want a different replica set to read from a specific secondary (which would not make much sense).
[06:08:50] <7GHABHM56> kurushiyama: I want to do replicate from secondary, yes specific one
[06:09:50] <kurushiyama> 7GHABHM56 You want a MEMBER to replicate from that secondary? Or a second replica set?
[06:11:17] <7GHABHM56> I want to make new replica set (fresh start) which syncs everything from a secondary node kurushiyama
[06:13:00] <kurushiyama> 7GHABHM56 That is not possible.
[06:13:07] <kurushiyama> 7GHABHM56 Here is what you should do.
[06:13:20] <kurushiyama> 7GHABHM56 Add the new nodes to the existing replica set.
[06:13:37] <kurushiyama> One by one.
[06:13:39] <7GHABHM56> ah hmmm
[06:13:59] <kurushiyama> 7GHABHM56 You add one, you remove one, to keep an uneven number of members.
[06:14:20] <kurushiyama> 7GHABHM56 When the new node finished syncing, you repeat the process.
[06:14:42] <bambanx> how can i keep mongodb running ?
[06:15:00] <kurushiyama> bambanx You should explain your problem in a bit more detail.
[06:16:02] <bambanx> i connect to my remote server on webfaction using ssh, then i run $HOME/webapps/application/mongodb-linux-architecture-version/bin/mongod --auth --dbpath $HOME/webapps/application/data/ --port number
[06:16:15] <bambanx> but when i close the terminal it is stopped
[06:16:23] <7GHABHM56> run it as daemon bambanx
[06:16:34] <bambanx> can't run it as a service because it's shared hosting
[06:16:47] <bambanx> i read some trick like a fork method?
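The "fork method" bambanx read about is most likely mongod's own daemonize flag: started with --fork, the process detaches from the terminal and survives the SSH session closing (--fork requires --logpath or --syslog). A sketch reusing the paths from his command above; the logpath is a placeholder:

```shell
$HOME/webapps/application/mongodb-linux-architecture-version/bin/mongod --auth \
    --dbpath $HOME/webapps/application/data/ --port <number> \
    --fork --logpath $HOME/webapps/application/mongod.log
```

This only daemonizes the process; whether a shared host tolerates long-running daemons at all is a separate question.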
[06:17:54] <kurushiyama> 7GHABHM56 Basically all you need is to start your new members with the same replSet name, and then use https://docs.mongodb.com/manual/reference/method/rs.add/ and https://docs.mongodb.com/manual/reference/method/rs.remove/
[06:18:14] <7GHABHM56> hmm, you don't have admin privileges then bambanx
[06:18:39] <kurushiyama> bambanx Running mongodb in a shared hosting env is a Very Bad Idea™ .
[06:18:44] <bambanx> 7GHABHM56, not at all
[06:19:20] <kurushiyama> 7GHABHM56 How does your current replset look like? Is there an arbiter?
[06:19:32] <bambanx> the hosting recommend me this method , what u think guys? https://community.webfaction.com/questions/6157/watchdog-script-to-keep-process-running-with-cron
[06:19:52] <kurushiyama> bambanx Here is what I think: Don't.
[06:20:07] <kurushiyama> bambanx Use a hosting better suited for your needs.
[06:20:20] <kurushiyama> bambanx How big is your DB?
[06:20:21] <bambanx> it's what i have now
[06:20:25] <bambanx> small
[06:20:30] <kurushiyama> <500MB?
[06:20:39] <7GHABHM56> kurushiyama: what if I add new set with this config
[06:20:51] <bambanx> my database is empty, i am just starting to learn to use it with node
[06:20:56] <kurushiyama> 7GHABHM56 ?
[06:21:00] <7GHABHM56> #eg: rs.add({_id: 4, host: "192.168.2.123:27017", priority: 0, hidden: true, votes: 0 })
[06:21:00] <kurushiyama> bambanx Do it locally.
[06:21:20] <kurushiyama> 7GHABHM56 What? why hidden?
[06:21:32] <7GHABHM56> don't want any trouble kurushiyama
[06:21:36] <kurushiyama> 7GHABHM56 Please, how does your current replset look like? How many nodes, is there an arbiter?
[06:21:54] <kurushiyama> 7GHABHM56 Hence I am asking you questions you refuse to answer... :/
[06:22:20] <kurushiyama> bambanx Or, for learning, use mongolabs
[06:22:24] <7GHABHM56> currently there are 3 nodes (1 primary, 1 secondary and 1 arbiter) kurushiyama
[06:22:55] <7GHABHM56> but I have explicitly set high priority, for primary
[06:23:07] <kurushiyama> 7GHABHM56 Ok, here is what I would do, fwiw. Add the first new node, remove the arbiter. No harm done, since no impact on the replication factor.
[06:23:20] <kurushiyama> 7GHABHM56 Wait till the new node is synced.
[06:24:13] <7GHABHM56> I want to add a new set, such that I could sync from a secondary, is that not possible, or just not a recommended method ?
[06:24:32] <kurushiyama> 7GHABHM56 As I said right at the beginning: No, that is not possible.
[06:25:25] <kurushiyama> 7GHABHM56 After the new node is synced, add the next of the two nodes, remove the old secondary.
[06:25:35] <kurushiyama> s/two/new/
[06:25:50] <kurushiyama> 7GHABHM56 Wait till synced.
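In mongo shell terms, the one-in-one-out procedure kurushiyama describes looks roughly like this. Hostnames are placeholders and the commands must run against the primary of a live replica set, so treat it as a sketch rather than something to paste:

```javascript
// Current set: 1 primary, 1 secondary, 1 arbiter.
// Round 1: add the first new data-bearing node, remove the arbiter
// so the number of voting members stays odd.
rs.add("new-node-1:27017")
rs.remove("arbiter-host:27017")

// Watch rs.status() until the new member reaches state SECONDARY, then:
rs.add("new-node-2:27017")
rs.remove("old-secondary:27017")
// ...and again wait for the initial sync to finish before touching anything else.
```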
[06:25:57] <7GHABHM56> :( I thought you would recommend sth like this: https://docs.mongodb.com/manual/reference/method/rs.syncFrom/#rs.syncFrom
[06:26:13] <7GHABHM56> 12:00 <kurushiyama> 7GHABHM56 After the new node is synced, add the next of the two nodes, remove the old secondary.
[06:26:20] <kurushiyama> 7GHABHM56 YOU CAN NOT ADD A REPLICA SET TO ANOTHER.
[06:26:38] <7GHABHM56> sorry, that was my buffer
[06:27:15] <7GHABHM56> lunch time, so hungry, brb
[06:27:20] <7GHABHM56> thanks
[06:27:28] <kurushiyama> 7GHABHM56 Sure.
[06:27:50] <kurushiyama> bambanx https://mlab.com
[06:27:59] <kurushiyama> bambanx I use it quite often. ;)
[06:28:23] <bambanx> thanks kurushiyama
[06:30:20] <kurushiyama> bambanx You are welcome
[06:59:25] <sumi> hello
[07:05:35] <bambanx> hello sumi
[07:08:36] <7GHABHM56> back
[07:09:02] <7GHABHM56> kurushiyama: have you tried rocksdb (mongorocks) ?
[07:10:50] <kurushiyama> 7GHABHM56 Not interested in it. wiredTiger is pretty good at what it does, and I prefer battle tested storage engines for my persistence. Forgive me, but given your skill level, you probably should not make any experiments until you have at least a bit more experience.
[07:14:06] <7GHABHM56> hmm, rocksdb seems pretty promising, compared to what I am using (Memory Map)
[07:14:58] <7GHABHM56> even the backup strategy is less painful
[07:29:08] <7GHABHM56> We benefited a lot from what percona offered (tools like the online schema changer for mysql on a live production env).. so I'm kinda more biased towards experimenting with percona mongo with rocksdb.
[07:30:53] <7GHABHM56> kurushiyama: any awesome urls (blogs, youtube) or books you want to recommend for boosting my mongo skills ?
[07:31:59] <kurushiyama> 7GHABHM56 Are you more an administrator or a developer?
[07:33:59] <7GHABHM56> more aligned to ops than to dev
[07:34:41] <7GHABHM56> why kurushiyama ?
[07:34:58] <kurushiyama> 7GHABHM56 Because the magic link depends on that ;P
[07:35:21] <kurushiyama> 7GHABHM56 https://university.mongodb.com/courses/M102/about
[07:36:02] <7GHABHM56> 😄
[07:38:45] <7GHABHM56> where could I dive in to learn about storage engines and replication... for that course I will have to wait (which I will)... and time is what I have not
[07:54:20] <kurushiyama> 7GHABHM56 Rushing that kind of stuff usually bites you in the neck, and rather sooner than later.
[07:54:32] <kurushiyama> 7GHABHM56 Start reading the docs – thoroughly.
[08:10:44] <7GHABHM56> 😏 ok kurushiyama, I shall
[08:50:09] <7GHABHM56> hey, what does the "size:" field signify in db.table_name.stats() ?
[09:00:53] <krion> hello
[09:01:07] <krion> prior to mongo 2.6, chunks weren't archived ?
[09:06:28] <krion> ok, read the doc
[09:07:38] <krion> what's the good way to set sharding.archiveMovedChunks to false
[10:24:05] <cxeq> hey guys quick super noob question about mongodb: if I have a collection of clients with each client on an individual document, and I want to make a list of items sold to each particular client, or something along those lines
[10:24:12] <cxeq> do I make a new collection, or new documents
[10:24:36] <cxeq> does the ID link persist over collections or just within a collection
[11:30:25] <Zelest> Derick, I fail to create an upsert with the new driver.. :(
[11:32:04] <Zelest> Derick, http://pastie.org/private/hii6tn65hy3jza1ouxzxcq
[11:32:36] <Zelest> the error I get is "Uncaught MongoDB\Driver\Exception\BulkWriteException: Cannot do an empty bulk write" o_O
[11:33:37] <Zelest> what am I missing here?
[12:32:14] <sumi> hello
[12:44:30] <Zelest> hey
[12:45:36] <chris|> is there a way to get a measure of the size of the queries sent to the mongo cluster?
[12:48:29] <deathanchor> size?
[12:49:08] <chris|> in bytes or any factor of bytes
[12:50:43] <deathanchor> Object.bsonsize( db.collection.findOne() )
[12:50:57] <deathanchor> that gives one doc size
[12:51:37] <deathanchor> or you can do a db.collection.stats() to find out the average size of the docs and guesstimate from there and a count of your query
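deathanchor's two suggestions, spelled out as mongo shell snippets. The collection name and query are placeholders, and both need a live connection:

```javascript
// Exact BSON size, in bytes, of one concrete document:
Object.bsonsize(db.mycoll.findOne())

// Guesstimate for a whole result set: average document size times match count.
var s = db.mycoll.stats()
s.avgObjSize * db.mycoll.find(myQuery).count()
```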
[12:51:52] <_matix> i have a db dump folder where all of the files are named {prefix}\{collectionname}.bson or {prefix}\{collectionname}.metadata.json. I'm certain that {prefix} is the db name. how do i restore this?
[12:54:05] <chris|> deathanchor: all true, but also not what I am looking for. I am looking for a metric or a stat of some kind that tells me about the size of the document that is sent to the cluster as part of the query
[13:19:28] <Zelest> Derick, nvm, solved it.. it's me being a moron as usual :D
[13:26:03] <GothAlice> chris|: You can also easily do some napkin-calculations using the sizes given in: http://bsonspec.org/spec.html + protocol documentation in https://github.com/mongodb/specifications/tree/master/source
[13:27:32] <GothAlice> The protocol is itself BSON, so, it's just a matter of adding up the parts. Or, wiring together the BSON protocol parts from your language driver of choice (i.e. the "bson" module from "pymongo", in the Python case) to add an event listener for different types of communication, then use a similar bsonsize() thing on the messages.
[13:27:38] <GothAlice> (I.e. you can collect these stats yourself.)
[13:28:54] <GothAlice> https://api.mongodb.com/python/3.1/api/pymongo/monitoring.html
[13:29:01] <Zelest> GothAlice, I wrote my first own aggregate thingie today! :D
[13:29:05] <GothAlice> :D
[13:29:41] <GothAlice> Heh, the aggregate I'm working on this morning would take three separate IRC lines, minified. XD
[13:30:10] <GothAlice> http://s.webcore.io/0L3E25451v0Z < have a screenshot instead. ^_^;
[13:32:38] <Zelest> wow, sexy :D
[13:32:45] <Zelest> what does $lookup do?
[13:32:50] <Zelest> (yes, i'm too lazy to look it up :P)
[13:33:00] <GothAlice> It's a left outer join.
[13:33:12] <Zelest> :O
[13:33:15] <GothAlice> >:D
[13:33:30] <Zelest> wait, I can do joins with mongo now? o_O
[13:33:50] <GothAlice> In fact, you can. You can have any join you want as long as it's a left outer. (Ford joke.)
[13:34:19] <Zelest> :O
[13:34:31] <Zelest> wow
[13:34:55] <Zelest> the good news? i can do joins! the bad news? i have to redo parts of my application now :(
[13:35:01] <GothAlice> It's… still not recommended to structure your models this way, as it's still "inefficient" (just _slightly_ less, now), but sometimes it can't be avoided.
[13:35:16] <Zelest> yeah
[13:35:47] <Zelest> in my case, i have one to many relation.. where the many _id's are stored in a sub-document array
[13:35:58] <Zelest> and when querying, i wish to fetch the "parent" to those id's
[13:36:25] <Zelest> now i fetch the array and loop through it and fetch each _id individually.. which probably is way more inefficient as $lookup
[13:36:34] <GothAlice> *efficient
[13:36:42] <Zelest> it is?
[13:37:02] <Zelest> meant*
[13:37:55] <GothAlice> Yes and no. If you have an array of IDs you need to look up, you'll need to $unwind on that, then $lookup the resulting singular values, then $unwind that, then $group on the original ID ($push'ing the individual foreign values back into an array). It _might_ be more efficient than doing it application-side, and will certainly benefit from future enhancements to $lookup (a number are planned).
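GothAlice's $unwind / $lookup / $unwind / $group chain, written out as a pipeline. The collection and field names here ("clients", "inventory", "items") are invented for illustration:

```javascript
// Each client document holds an "items" array of _ids referencing
// documents in a hypothetical "inventory" collection.
const pipeline = [
  { $unwind: "$items" },          // one document per referenced _id
  { $lookup: {                    // left outer join against the foreign collection
      from: "inventory",
      localField: "items",
      foreignField: "_id",
      as: "item"
  } },
  { $unwind: "$item" },           // $lookup emits an array; flatten it
  { $group: {                     // reassemble one document per client,
      _id: "$_id",                // $push'ing the joined values back into an array
      items: { $push: "$item" }
  } }
];
// Usage: db.clients.aggregate(pipeline)
```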
[13:38:26] <GothAlice> When in doubt, try it out (and don't forget to measure! ;)
[13:39:02] <Zelest> That's what she said! ;)
[13:39:41] <GothAlice> Zelest: It's #159: https://gist.github.com/amcgregor/9f5c0e7b30b7fc042d81
[13:39:42] <GothAlice> ;P
[13:40:13] <Zelest> Haha
[13:41:18] <GothAlice> Heh, hopefully not all of them!
[13:41:45] <Zelest> Nah, but some of them seems very solid
[13:42:11] <GothAlice> Some are simply truisms, such as "Everything is air-droppable at least once."
[13:42:12] <GothAlice> ;P
[13:42:31] <Zelest> Well, that one fits with my own "Stupidity should be painful" law ;)
[13:42:51] <GothAlice> Heh. Ref: 141/142.
[13:45:14] <Zelest> GothAlice, mind if I privmsg you btw?
[13:45:36] <GothAlice> Yeah, no worries; I'll be back in about 10m, though.
[13:45:57] <Zelest> Ah, okey :)
[14:03:30] <chris|> GothAlice: the thing is, I am seeing some serious impact on the mongo with pretty small changes in oplog, so I am currently trying to verify that someone is just "doing it wrong", assuming he is just sending huge documents over the wire that contain next to no updatable data. I could of course review all the application code and calculate data usage for data types I did not write, or hack on the application
[14:03:32] <chris|> drivers, but that is somewhat like searching for a needle in a haystack
[14:06:46] <GothAlice> Well, given most modern drivers incorporate listener (i.e. event, or callback) functionalities, monitoring communication shouldn't be too difficult.
[14:20:18] <GothAlice> chris|: I've had users store documents containing JSON strings as primary data storage, so I feel your pain. ("I wanted to store JSON… what's wrong with that?" :head-desk: ;)
[14:38:21] <yopp> um. I wonder how bad it will be to run mongo in docker with ceph-backed volume driver
[14:47:56] <chris|> yopp: probably very
[14:48:05] <cheeser> yopp: http://bit.ly/29nBTHV
[14:59:19] <yopp> and how bad is to have nbd backed device with wired tiger?
[15:32:51] <krion> i got an error "cannot move chunk, data transfer error"
[15:32:57] <krion> I wonder how i can investigate
[15:41:36] <krion> ok, the balancer was off, for no reason...
[15:46:09] <krion> ok, this doesn't help
[19:13:05] <cruisibesares> anyone know what the replSetUpdatePosition operation is doing? I'm seeing it in the log taking 58 seconds
[19:43:39] <zylo4747> I am running MongoDB 2.6 and would like to migrate databases to another replica set. If I have all of the users set up in the admin database on the target and I restore a database with mongorestore, will the users and their permissions still be there or will they get reset?
[20:14:37] <GothAlice> zylo4747: The roles assigned to users should be preserved. Fine-grained permissions come down to the names of the "namespaces" used (typically just the collection names) and should be unaffected. (I.e. there's no hidden identifier used that gets reset.)
[20:14:52] <GothAlice> Unrelated: http://s.webcore.io/020e1y1n0G0r < a common MongoEngine problem I loathe. Instead of raising the validation error where it's checked, which would provide, y'know, a _useful_ stack trace, it instead gathers errors and raises them at the last (and shallowest) location possible, making it utterly useless. Given that message, which field exploded? What was the bad value used?
[20:26:35] <nalum> Hello all, I'm having a bit of trouble running mongo-connector. I have a 3 node replica set running on kubernetes and am trying to get mongo-connector to sync data from it to elasticsearch, but it is giving me the following error
[20:26:51] <nalum> pymongo.errors.ServerSelectionTimeoutError: mongodb-svc-02:27017: [Errno -3] Try again,mongodb-svc-01:27017: [Errno -3] Try again,mongodb-svc-00:27017: [Errno -3] Try again
[20:26:58] <nalum> Any one got any ideas about this?
[20:35:38] <kurushiyama> @GothAlice Mylady, you are asking for too much ;)
[20:36:27] <GothAlice> kurushiyama: Exceptions in my libraries tend to be exceptionally informative. There's no reason not to be. (I.e. they don't just tell you what the problem is, and where the problem is, but often also explicitly how to fix the problem.)
[20:37:53] <GothAlice> They also don't defer such error generation until later, which obfuscates the source. If you assign an invalid value to the attribute, it immediately explodes in Marrow Mongo, rather than deferring to the Active Record .save() method in ME's case.
[20:39:07] <kurushiyama> @GothAlice I agree with your approach, and I guess most users appreciate that behaviour. But it requires discipline and dedication to quality. Qualities desperately missed in certain subgroups of devs ;)
[20:40:46] <GothAlice> kurushiyama: Hehehe, did you see my "migrate away from ME" meta-/tracking ticket? It's kinda astounding. https://github.com/marrow/contentment/issues/12
[20:40:59] <GothAlice> After 0.10, ME turned into Python's own Mongoose.
[20:41:42] <kurushiyama> @GothAlice Well, I am a Java and Go dev, so I am not actively following marrow, I am afraid.
[20:42:06] <GothAlice> Certainly no worries; I linked it awhile ago, you might have seen it.
[20:42:52] <kurushiyama> Albeit I still try to wrap my head about Mongoose. Maybe I am just missing its point, given my lack of knowledge regarding JS.
[20:43:14] <kurushiyama> And node, specifically.
[20:43:55] <StephenLynx> nah, its shit.
[20:44:00] <StephenLynx> kill it with fire
[20:44:05] <nalum> :D
[20:44:33] <StephenLynx> the only reason mongoose took off was the dev working for mongodb and being able to promote it
[20:49:09] <kurushiyama> Well, I guess we all agree that it enforces a bad, if not outright dangerous, pattern of approaching data modelling. Aside from that, maybe it gives the JS developers something they miss within the driver. Don't know.
[20:50:22] <StephenLynx> nope
[20:50:24] <StephenLynx> not really
[20:50:37] <StephenLynx> been using the driver for years
[20:50:42] <StephenLynx> there is nothing missing there
[20:51:10] <kurushiyama> StephenLynx Given your level of experience with it and MongoDB in general...
[20:51:32] <StephenLynx> the problem is how bad the average node.js dev is.
[20:51:41] <StephenLynx> no surprise they use crap all the time
[20:51:56] <cheeser> starting with js ;)
[20:52:00] <StephenLynx> kek
[20:52:11] <StephenLynx> js is alright, though. lots of pitfalls though
[20:52:28] <StephenLynx> being good at js is more about knowing what to not use than knowing what to use.
[20:52:44] <kurushiyama> I have come to some sort of love with grunt, react and reflux...
[20:53:15] <StephenLynx> aw
[20:53:18] <StephenLynx> ew*
[20:53:19] <StephenLynx> grunt
[20:53:37] <StephenLynx> grunt is cancer imo
[20:54:55] <kurushiyama> It does all I need, and admirably so – but then, what I need to do is very basic: minify, uglify, htmlprocess a bit, LESS here and there.
[20:56:47] <StephenLynx> meh
[20:57:06] <StephenLynx> I abhor transpiling as a whole.
[21:03:52] <nalum> I started out with JS in the mid 90s and loved it for a few years, I have no patients for it now
[21:04:51] <cheeser> they all passed away, eh? that's horrible.
[21:05:28] <GothAlice> JS is a terrible anti-language, with multiple examples (some trivial, such as 'typeof "string" !== typeof new String("also a string")', or the "NaNNaN… Batman!" bit, others pointing straight to the specification, such as the weak comparison decision tree…)
[21:05:44] <StephenLynx> kek
[21:07:06] <nalum> :D *patience
[21:09:09] <nalum> Is there a tool other than mongo-connector that can sync data between mongodb and elasticsearch?
[21:09:57] <GothAlice> nalum: … depends on what you need, and what you mean by synchronization. https://github.com/richardwilly98/elasticsearch-river-mongodb/ is an alternate approach.
[21:10:41] <nalum> Oh, didn't think "rivers" were used anymore
[21:11:16] <nalum> I need an initial sync of data and then to sync operations from the oplog
[21:11:33] <nalum> so mongo-connector fits, but if I can't get past this error then it's an issue
[21:11:58] <GothAlice> What's the specific error?
[21:12:19] <nalum> pymongo.errors.ServerSelectionTimeoutError: mongodb-svc-02:27017: [Errno -3] Try again,mongodb-svc-01:27017: [Errno -3] Try again,mongodb-svc-00:27017: [Errno -3] Try again
[21:12:47] <GothAlice> Hmm. Which mongod version are you using, and which version of pymongo is installed?
[21:12:59] <nalum> Running 3.2
[21:13:09] <nalum> I'll check the pymongo version now
[21:13:32] <GothAlice> https://docs.mongodb.com/manual/release-notes/3.2-compatibility/
[21:13:45] <Zelest> *yawns*
[21:14:05] <nalum> pymongo 3.2.2
[21:14:15] <GothAlice> Quite curious.
[21:14:40] <GothAlice> http://api.mongodb.com/python/current/tutorial.html < following the first few steps here (up to "getting a collection") works in a "python" REPL shell?
[21:15:14] <nalum> I'm running on kubernetes, let me get a pod up and running that I can test that with
[21:15:59] <GothAlice> Hmm; more layers I can't really assist with, if the problem lies somewhere in there. (As an aside: when testing issues, I tend to boil problems down to the absolute minimum number of moving parts to help in diagnosing the true cause.)
[21:22:29] <GothAlice> (On my work project, wrestling with aggregates, phase one refactor has improved report generation performance ~1700x. $lookup, you are my new best friend.)
[21:23:19] <GothAlice> It's glorious when stupidly complex reports generate with no perceptible delay between filter selection and result. \o/
[21:23:51] <nalum> :D
[21:24:34] <GothAlice> Still takes 15 seconds for the "all companies" (unfiltered) resultset, but that only impacts staff, not users, so I'm okay with that. XP
[21:38:15] <nalum> GothAlice: worked fine connecting directly from python
[21:47:37] <kurushiyama> @GothAlice Sounds like a successful day.
[21:50:57] <GothAlice> nalum: Sorry, was walking home from work on this gorgeous late afternoon. Hmm. I'd have to dig in to how mongo-connector is set up to see what the issue is, and I have no ElasticSearch handy to test it against. :(
[21:51:15] <nalum> No worries, thanks for the help :)
[21:52:04] <GothAlice> For a long time mongo-connector was stuck on the old connection driver, preventing use with 3.0 (making use of updated authentication) or 3.2 (updated protocol).
[21:52:12] <GothAlice> (Which is why my first question related to pymongo version.)
[21:54:41] <nalum> Ah. I've been using the connector with 3.2 for a little while now. I'm currently migrating to kubernetes and just started getting mongo-connector set up today
[21:56:22] <dsc_> trying to do 'between' query with '$lt' and '$gt', is this the correct way? http://pastie.org/pastes/10899758/text?key=w2oeoxhokgwsjpxrfrw
[21:56:38] <dsc_> trying to get all `info.score`'s between 6-8
[21:56:44] <dsc_> or 5-8 ;-)
[22:00:51] <GothAlice> dsc_: So, that's the trick. 6-8 inclusive (6 and 8 are valid), or exclusive (6 and 8 are not valid)?
[22:01:10] <dsc_> are valid! and anything in between
[22:01:20] <dsc_> first time doing mongodb queries, no idea what im doing. Is what I pasted correct?
[22:03:25] <dsc_> GothAlice: oh, you mean `>=` or `>`
[22:03:36] <dsc_> I am hoping to get `>=`
[22:03:43] <dsc_> and <=
[22:03:51] <GothAlice> Then you should be using $gte and $lte, not $gt / $lt. :)
[22:03:59] <dsc_> right
[22:04:20] <GothAlice> However, more generally, you don't need the $and. 95% of the time, "and" is assumed.
[22:04:37] <GothAlice> filter = {"info.score": {"$lt": 5, "$gt": 8}}
[22:04:57] <GothAlice> That would be the simplified version, not using an $and. Now, is info an array of values, or a single sub-document?
[22:05:49] <dsc_> GothAlice: I see it in python has a dict so I'm guessing sub-document
[22:05:53] <dsc_> as a dict*
[22:06:07] <GothAlice> Aye, that'd be the pairing, there. So at this point, that's it.
[22:06:14] <dsc_> awesome ^^
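For the record, the working inclusive-range filter for the sub-document case, with the bounds the right way around (5 and 8 inclusive, per dsc_'s "5-8"):

```javascript
// Matches documents whose info.score lies in [5, 8].
const filter = { "info.score": { $gte: 5, $lte: 8 } };
// Usage: db.collection.find(filter)
```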
[22:06:20] <GothAlice> If "info" were an array, the query would mean something slightly different. ;)
[22:06:46] <dsc_> I'm not sure how `.score` would work on `info` if `info` were an array tho?
[22:07:04] <GothAlice> Almost the way you'd expect. info.score would refer to _any score_ in the info array.
[22:07:31] <dsc_> right
[22:07:55] <dsc_> what does the operator $lte behave on an array then? :P
[22:08:01] <dsc_> is that even possible?
[22:08:06] <dsc_> how does the*
[22:08:08] <GothAlice> Thus your range query would mean "match any document that contains any array element whose "score" field is < 5, and that contains any array element whose "score" field is > 8". That's where $elemMatch comes in handy; that pins the array subsection of your query to matching a single array element that matches all criteria, instead of splitting them up like the previous example.
[22:08:48] <GothAlice> (I keep using < / $lt and > / $gt because that's what's what was in your original example, pretend I'm using $gte/$lte and whatnot. ;)
[22:08:55] <dsc_> sure
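The distinction GothAlice describes, side by side. With an array-valued "info", the plain form lets each bound be satisfied by a different element; $elemMatch requires a single element to satisfy both:

```javascript
// "loose": some element >= 5 AND some (possibly different) element <= 8.
const loose = { "info.score": { $gte: 5, $lte: 8 } };

// "pinned": at least one element whose score is in [5, 8].
const pinned = { info: { $elemMatch: { score: { $gte: 5, $lte: 8 } } } };
```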
[22:09:15] <dsc_> I'm not sure what you just typed though :D
[22:09:23] <dsc_> if `info` == [1,2,3]
[22:09:26] <dsc_> and you do info.score
[22:09:35] <GothAlice> score isn't a thing in that example.
[22:09:39] <dsc_> aah ok
[22:09:46] <GothAlice> info = [{score: 1}, {score: 2}, …]
[22:09:48] <nalum> Alright, gonna have to continue this tomorrow :(, night and thanks for the help GothAlice
[22:09:56] <dsc_> right right
[22:09:58] <dsc_> :)
[22:10:03] <dsc_> thats cool
[22:10:07] <GothAlice> nalum: It never hurts to help, and apologies for not fully solving your issue. We've at least ruled out certain things, though! :)
[22:10:44] <nalum> No worries at all :D
[22:12:03] <GothAlice> dsc_: To clarify, in the non-$elemMatch case, if "info" is an array of sub-documents with a score field, the two parts of the query don't necessarily mean it's a single array element matching both.
[22:14:00] <dsc_> GothAlice: I think I know what you mean
[22:14:31] <dsc_> It would do 1 part of the query on a element?
[22:14:36] <dsc_> (only 1)
[22:14:53] <dsc_> or not?
[22:15:43] <dsc_> yeah i dont understand
[22:15:45] <dsc_> but that's okay
[22:15:46] <dsc_> xD
[23:09:42] <zylo4747> can i stop mongodb, copy the data files to a target replica set and have that replica set automatically "see" those database files as another database?
[23:18:04] <cheeser> like so? https://docs.mongodb.com/manual/tutorial/resync-replica-set-member/#sync-by-copying-data-files-from-another-member
[23:19:54] <fission6> this is kind of weird… i haven't used mongo in a year or so, i just went to start it up so i could run a web app i develop locally and I can't find the mongo command
[23:19:58] <fission6> any ideas?
[23:27:03] <cheeser> fission6: um. 1. your path is wrong. 2. mongodb isn't actually installed. 3. whut?
[23:28:31] <fission6> cheeser: i think it was 2
[23:28:36] <fission6> i must have tweaked it over the year
[23:29:53] <cheeser> 2 would be my guess as well.
[23:35:20] <zylo4747> thanks @cheeser, that's exactly what I wanted
[23:35:25] <zylo4747> I'll give it a shot
[23:39:46] <cheeser> great