[00:27:35] <daidoji> diegoaguilar: mongoimport has the --upsert flag, which is nice, but I think mongorestore operates on the dump file itself and doesn't do many smart things
[00:27:44] <daidoji> I usually stick with mongoimport with --upsert
[00:27:51] <daidoji> although you can have schema issues when you do that
[00:28:43] <daidoji> mongorestore preserves key and value types if I'm not mistaken, while mongoimport/mongoexport use JavaScript conversion, which can mess things up if you're, say, using ISODates or whatnot
[00:29:33] <diegoaguilar> hmm I have one schema which is "not" going to change
[00:29:53] <diegoaguilar> but I should have an upsert option
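A minimal sketch of the mongoimport workflow daidoji describes, with hypothetical database, collection, and file names (--upsert matches on _id by default):

    # export a collection to JSON, then re-import it, updating existing documents
    mongoexport --db mydb --collection posts --out posts.json
    mongoimport --db mydb --collection posts --file posts.json --upsert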
[02:22:26] <nattyrice> joannac, I have.. for the last week.. all around the same issue, and there hasn't been a single response. not only that, but nobody in the channel has said anything about anything else either. I am pretty sure I am the only one who has typed anything in that channel for the last week.
[02:31:31] <atomicb0mb> Hello, I'm studying MongoDB, and sometimes the word "atomic" is used, as in "operations are not atomic, isolated transactions." What does "atomic" stand for? Thanks
[02:40:07] <atomicb0mb> any documentation I can read about it?
[02:45:00] <ehershey> an operation (or set of operations) is atomic, linearizable, indivisible or uninterruptible if it appears to the rest of the system to occur instantaneously. Atomicity is a guarantee of isolation from concurrent processes. Additionally, atomic operations commonly have a succeed-or-fail definition — they either successfully change the state of the system, or have no apparent effect.
[02:46:02] <nattyrice> from what I understand (I've never read specifically about it), atomic means that once called, it comes back guaranteed either to have completed or failed, with nothing about the process being pushed off to be completed later. atomicb0mb
[02:46:39] <nattyrice> think atomic as in the smallest indivisible unit operation
[02:51:09] <atomicb0mb> indivisible? it says that they are separated by pauses. The system behaves as if each operation occurred instantly, separated by pauses.
[02:55:34] <atomicb0mb> hmmm, I'm seeing an atomic operation as an isolated operation (that's why indivisible) with 2 possible results: everything occurs, or nothing occurs...
[02:55:51] <atomicb0mb> something like that ehershey nattyrice ?
[03:02:56] <atomicb0mb> ok great. Thanks for the light, guys. I'm gonna go back to the course hehehe
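For the record, MongoDB's atomic unit is the single-document write: a concurrent reader sees either all of an update or none of it. A small shell sketch (collection and field names are made up):

    // $inc and $set here apply together or not at all; no partial state is visible
    db.accounts.update(
        { _id: 1 },
        { $inc: { balance: -50 }, $set: { updated: new Date() } }
    )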
[03:39:30] <felixjet_> are the M102 and M101JS exams free, or $150?
[03:39:44] <felixjet_> I'm confused because the homework and stuff is free
[08:50:42] <amharris> Hey, folks. It's going to be my first time venturing into MongoDB but I've had some troubles starting the `mongod` service and was hoping that someone could give me a helping hand.
[08:57:31] <mbuf> I have attached an EBS snapshot (from recovery) to a new instance, removed /var/mongodb/local.* and /var/mongodb/mongod.lock files; when I try to start the mongod instance on the primary node, it says couldn't get file length when opening mapping /var/mongodb/local.1 boost::filesystem::file_size: No such file or directory: "/var/mongodb/local.1"
[08:57:42] <mbuf> when restoring, should I only remove the local.* files from the secondary nodes?
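For reference, the cleanup mbuf describes looks roughly like this, assuming the /var/mongodb dbpath from the question (which nodes should keep their local.* files is exactly what's being asked):

    # with mongod stopped, remove the replica-set-local files and the stale lock
    rm /var/mongodb/local.* /var/mongodb/mongod.lock
    # then restart against the same dbpath
    mongod --dbpath /var/mongodb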
[09:43:34] <bartzy> Is it possible to “add back” data to the aggregation pipeline, which $match filtered out..?
[09:44:06] <coudenysj> bartzy: i don't think so, maybe you should reorder the pipeline?
[09:44:25] <bartzy> for example: I have a posts collection with user_id. I match only the posts in the last month, group by user_id to get all the user_ids that posted in the last month. Now I want to count how many posts each of these user_ids made for all eternity (not just the last month)
[09:44:36] <bartzy> coudenysj: OK, but I think I need to save a temp collection anyway? I can’t think of a way not to
[09:45:14] <bartzy> coudenysj: I need to get all the user_ids that made more than X (let's say 50) posts (all time), AND at least one of those posts was in the last month.
[09:48:42] <coudenysj> bartzy: now that's a different query :), your last line suggests a query with 2 find entries
[09:49:13] <bartzy> coudenysj: So save to a temp collection..?
[09:50:35] <coudenysj> do you want a list of users that match the criteria?
[09:51:02] <coudenysj> or do you want the list of users and the actual dates, etc…
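One way to avoid a temp collection here is a single pipeline that counts each user's all-time and last-month posts and then filters on both; a sketch assuming hypothetical posts, user_id, and created names:

    var monthAgo = new Date(Date.now() - 30 * 24 * 3600 * 1000);
    db.posts.aggregate([
        { $group: {
            _id: "$user_id",
            total:  { $sum: 1 },
            recent: { $sum: { $cond: [ { $gte: [ "$created", monthAgo ] }, 1, 0 ] } }
        } },
        // users with 50+ posts all-time, at least one of them in the last month
        { $match: { total: { $gte: 50 }, recent: { $gte: 1 } } }
    ])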
[12:14:23] <aps> While doing rs.stepDown() on a primary, I get the following error. Is it supposed to be like this? https://www.irccloud.com/pastebin/OiCFHP74/
[12:23:00] <nixstt> if I have 1 primary, 1 secondary, and 1 arbiter and wanted to add another secondary, should I add the new secondary as a non-voting member while the initial sync is taking place?
[12:43:18] <kali> nixstt: you don't really have to, unless you have reasons to worry about the chances of having a 2-2 split in your replica set during the replication time
[12:44:59] <nixstt> kali: I thought it would be better, initial sync takes really long since the dataset is almost 1tb and takes about 24 hours
[12:46:06] <kali> nixstt: that's up to you, really. note that the "slightly more risky" time is not the initial sync time but the time when you have an even number of nodes in the cluster
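If you do want the new node kept out of elections while it syncs, the rs.add() form looks roughly like this (hostname and _id are hypothetical); priority and votes can be raised with rs.reconfig() once the sync finishes:

    rs.add({ _id: 3, host: "newnode.example.com:27017", priority: 0, votes: 0 })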
[13:32:47] <cheeser> you should post that to the mongodb-users list. more of the kernel devs will see it there.
[13:46:12] <amharris> Hey, folks. It's going to be my first time venturing into MongoDB but I've had some troubles starting the `mongod` service and was hoping that someone could give me a helping hand. As a heads-up, I'm running Debian Wheezy (x86_64).
[14:32:23] <bartzy> And MongoDB people here perhaps? kali ? :p
[14:32:44] <bartzy> I have a weird issue - I upgraded from 2.4 to 2.6.11 recently, and today I got this error (and the process crashed): DR102 too much data written uncommitted 314.577MB
[14:33:24] <bartzy> The 2.4 environment was extremely stable - I never had an issue and almost never had to fall back to the secondary (other than maintenance periods)
[14:36:42] <bartzy> just had another crash with that log...
[15:31:39] <stuntmachine> is there a recommended way to take a nightly mongodump of all data on a replica set setup? not sure if there are any best practices that aren't covered in the docs.
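A common shape for this is a nightly cron job pointing mongodump at a secondary, with --oplog for a consistent point-in-time snapshot (host and output path are made up):

    # dump from a secondary to keep load off the primary
    mongodump --host secondary.example.com --port 27017 --oplog --out /backups/$(date +%F)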
[15:54:59] <jr3> We are considering creating a separate database with the same schema but only for a division of location, e.g., US would have its own database, EU would have its own
[15:55:41] <jr3> seems like the long-term complexity of doing it like that would outweigh just adding a region property to the current schema and constraining on region when needed
[15:58:13] <cheeser> when building a CMS at a former gig, we would just add a websiteId to every query that needed it. wasn't so hard to do.
[15:59:48] <jr3> so your model tracked that websiteId right? That's what I'm leaning to
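The single-database approach jr3 is leaning toward just means every query carries the region; a sketch with made-up collection and field names:

    db.posts.find({ region: "EU", author_id: 42 })
    // a compound index keeps the extra constraint cheap
    db.posts.ensureIndex({ region: 1, author_id: 1 })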
[16:28:08] <terminal_echo> hey guys, how do I properly set up the permissions for /data/db?
[16:28:19] <terminal_echo> The documentation says to make sure the permissions are set up properly :)
[16:28:48] <terminal_echo> and clearly when I run 'mongod' I get permission denied, because currently /data/db is owned by root :)
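The usual fix is to hand the data directory to whatever user mongod runs as; a sketch assuming the mongodb user created by the Debian packages:

    sudo chown -R mongodb:mongodb /data/db
    # or, for a quick local test, own it yourself
    sudo chown -R $(whoami) /data/db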
[16:35:20] <Doyle> Hey. Say I have a RS with a primary/secondary, one is added via IP and I want to change it to a DNS name in the conf. If it's the secondary, do I just do the reconfigure operation that's described in the manual?
[16:36:33] <Doyle> Or can I do an rs.remove() and rs.add() to adjust it without it doing an initial sync?
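The reconfigure route from the manual is roughly the following, run on the primary (member index and hostname are hypothetical); it changes the address in place without triggering an initial sync:

    cfg = rs.conf()
    cfg.members[1].host = "mongo2.example.com:27017"
    rs.reconfig(cfg)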
[16:43:13] <aadityatalwai> Are there any known issues with permissions for `serverStatus` in Mongo Cloud Manager? We have our homerolled monitoring agent trying to call `serverStatus` with the `clusterMonitor` and `readAnyDatabase` roles, but are apparently not authorized.
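For comparison, clusterMonitor is normally sufficient to run the command by hand; a sketch with a made-up user name:

    use admin
    db.grantRolesToUser("monitor", [ { role: "clusterMonitor", db: "admin" } ])
    db.runCommand({ serverStatus: 1 })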
[17:54:20] <jr3> is it common to connect to a mongo instance and just leave it connected during the lifetime of a node app?
[17:54:47] <jr3> or should we be connecting/disconnecting on every request
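The usual answer is the former: open one client at startup and reuse its connection pool for every request. A sketch with the Node.js driver (URL, db name, and the Express-style app object are assumptions):

    var MongoClient = require('mongodb').MongoClient;

    // connect once when the app boots; the driver maintains its own pool
    MongoClient.connect('mongodb://localhost:27017/myapp', function (err, db) {
        if (err) throw err;
        app.locals.db = db; // hand this to request handlers instead of reconnecting
    });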
[20:25:50] <doc_tuna> perhaps they meant... [{column1: 1}, {column2: -1}]?
[20:25:55] <doc_tuna> i'm not sure of that syntax, sorry
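If the question was about sort order, the shell takes a single document rather than an array; a sketch reusing the field names from the snippet above:

    db.coll.find().sort({ column1: 1, column2: -1 })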
[20:26:58] <lemmi_> Hey there. If my data model is kind of relational and I don't have an immediate need for performance, should I just stick to a sql database? Everything about my web app / model is pretty conventional at this point
[20:28:51] <atomicb0mb> doc_tuna no problem, thank you :)
[20:29:39] <doc_tuna> lemmi_: if it were me i would identify a particular use case where having json docs as opposed to rows/tables suits the application
[20:29:48] <doc_tuna> and experiment with using mongodb for that
[20:30:33] <xar-> Derick: just finding the training challenging, is there a suitable place to discuss homework(s)?
[20:34:08] <Razerglass> hey friends I have a question regarding $unset, is it possible to 'delete' a key+value from an object and not the entire object?
[20:35:14] <doc_tuna> I believe that's what $unset does
[20:36:12] <doc_tuna> do you mean from a nested object?
[20:36:26] <Razerglass> I'm using $unset: { person: 'john' }; if I have an object of persons with values of names, that deletes my whole persons object out of my DB instead of 'plucking' a single person
[20:36:46] <Razerglass> is there different syntax i should be following?
[20:37:12] <doc_tuna> the 'john' part doesn't matter in $unset
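To remove one nested key rather than the whole embedded document, $unset takes dot notation; a sketch with made-up names:

    // removes only persons.john, leaving the rest of the persons object intact
    db.coll.update({ _id: 1 }, { $unset: { "persons.john": "" } })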
[22:57:35] <jmorph> hmm ok, i think i'm just getting used to embedded docs
[22:58:00] <jmorph> digging that deep in each user document isn't a big deal?
[23:02:47] <daidoji> jmorph: no, the depth probably doesn't matter very much
[23:03:35] <daidoji> remember that in Mongo all the documents are stored as BSON objects, so the read (into memory) of the object probably takes most of the query time; once it's in your cursor, querying into that document is probably only a fraction of the time, even if you go several levels deep
[23:04:35] <daidoji> in fact, I'm not really an expert on how JavaScript objects store their keys, but they might just store them all flat in a hash, in which case depth would matter even less: key lookup would be O(1)
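For what it's worth, reaching into deep embedded docs is just dot notation; a sketch with made-up field names:

    db.users.find({ "profile.settings.notifications.email": true })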
[23:07:54] <MalteJ> which tool do you use to collect system status information (cpu, memory usage, disk i/o) and send it to some database?
[23:10:45] <NaN> Why can't I find by _id in this simple query? -- https://ghostbin.com/paste/qzqmj
[23:12:30] <cheeser> because _id is a String there and not an ObjectId
[23:12:31] <MatheusOl> NaN: I might be wrong, but it seems that it is literally the string "ObjectID(52e964d1a9673f80abdfa8b5)", not an ObjectID with value 52e964d1a9673f80abdfa8b5
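In other words, the query needs a real ObjectId rather than its string form (collection name is hypothetical):

    // matches a document whose _id was stored as an ObjectId
    db.things.find({ _id: ObjectId("52e964d1a9673f80abdfa8b5") })
    // a plain string will not match an ObjectId _id
    db.things.find({ _id: "52e964d1a9673f80abdfa8b5" })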