#mongodb logs for Tuesday the 18th of August, 2015

[00:24:27] <diegoaguilar> Hello, I just read docs on http://docs.mongodb.org/v2.6/reference/program/mongorestore/
[00:24:48] <diegoaguilar> so, I got a question: is there any "upsert" alternative for importing?
[00:24:57] <diegoaguilar> the --drop argument to mongorestore might not be what I want
[00:27:13] <daidoji> diegoaguilar: not with mongorestore I don't think
[00:27:24] <diegoaguilar> so what would be an option?
[00:27:30] <diegoaguilar> mongorestore's alternative?
[00:27:35] <daidoji> diegoaguilar: mongoimport has the --upsert flag which is nice but I think restore operates on the file itself and doesn't do many smart things
[00:27:44] <daidoji> I usually stick with mongoimport with --upsert
[00:27:51] <daidoji> although you can have schema issues when you do that
[00:28:43] <daidoji> mongorestore preserves key and value types if I'm not mistaken, while mongoimport/mongoexport uses javascript conversion which can mess things up if you're say using ISODates or whatnot
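
For reference, a hedged sketch of the mongoimport invocation daidoji describes (database, collection, and file names are hypothetical); --upsertFields controls which fields identify an existing document:

    mongoimport --db mydb --collection items --file items.json \
        --upsert --upsertFields _id
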
[00:29:33] <diegoaguilar> hmm I have one schema which is "not" going to change
[00:29:53] <diegoaguilar> but I should have an upsert option
[00:29:56] <diegoaguilar> I mean, are both safe?
[00:30:11] <diegoaguilar> not thinking about upsert, schema change ... I
[00:31:10] <daidoji> diegoaguilar: what do you mean by "safe"?
[00:32:39] <diegoaguilar> like production-ready
[01:51:43] <tejasmanohar> db.cahsouts.update({_id:ObjectId("..")}, {$set:{status:"processing"}})
[01:52:01] <tejasmanohar> oh nvm sorry
[01:57:42] <nattyrice> Could I get some mongoosejs specific help in here? Their irc channel is practically dead.
[02:12:14] <joannac> nattyrice: maybe you should try the support channels for mongoosejs?
[02:13:05] <joannac> the other support channels*
[02:22:26] <nattyrice> joannac, I have.. for the last week.. all around the same issue and there hasn't been a single response. not only that, but nobody in the channel has said anything about anything else either. I am pretty sure I am the only one to have typed anything in that channel for the last week.
[02:31:31] <atomicb0mb> Hello, I'm studying DBs (mongo), and sometimes the word "atomic" is used, as in "operations are not atomic, isolated transactions". What does "atomic" stand for? Thanks
[02:40:07] <atomicb0mb> any documents I can read about it?
[02:44:14] <ehershey> atomicb0mb: interesting question
[02:44:24] <ehershey> it basically means black-or-white: completed or not
[02:44:41] <ehershey> https://en.wikipedia.org/wiki/Linearizability
[02:45:00] <ehershey> an operation (or set of operations) is atomic, linearizable, indivisible or uninterruptible if it appears to the rest of the system to occur instantaneously. Atomicity is a guarantee of isolation from concurrent processes. Additionally, atomic operations commonly have a succeed-or-fail definition — they either successfully change the state of the system, or have no apparent effect.
[02:46:02] <nattyrice> from what I understand, as I've never read specifically about it, atomic means that once called it comes back guaranteed either to have completed or failed, with nothing about the process being pushed off to be completed later. atomicb0mb
[02:46:39] <nattyrice> think of atomic as in the smallest indivisible unit of operation
[02:51:09] <atomicb0mb> indivisible? it says that it is separated by pauses. The system behaves as if each operation occurred instantly, separated by pauses.
[02:55:34] <atomicb0mb> hummmm, I'm seeing an atomic operation as an isolated operation (that's why indivisible) with 2 possible results: everything occurs, or nothing occurs...
[02:55:51] <atomicb0mb> something like that ehershey nattyrice ?
[02:59:29] <ehershey> yeah
[02:59:35] <ehershey> I think that sums it up correctly
[02:59:43] <ehershey> but wikipedia is a bit more specific
[02:59:51] <ehershey> "no apparent effect"
[02:59:54] <nattyrice> yep
[02:59:56] <ehershey> like something can occur
[03:00:03] <ehershey> but the effect should not be visible
[03:00:07] <ehershey> from the relevant pov
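
A concrete mongo-shell illustration (an editor's sketch, not from the discussion; the collection name is hypothetical): a single-document update such as $inc is atomic, so a concurrent reader sees the counter either before or after the increment, never a partial write, and the operation either applies fully or not at all:

    db.counters.update(
        { _id: "pageviews" },   // match a single document
        { $inc: { n: 1 } }      // atomically increment it
    )
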
[03:02:56] <atomicb0mb> ok great. Thanks for the light, guys. I'm gonna go back to the course hehehe
[03:39:30] <felixjet_> are the M102 and M101JS exams free, or $150?
[03:39:44] <felixjet_> I'm confused because the homework and stuff is free
[08:50:42] <amharris> Hey, folks. It's going to be my first time venturing into MongoDB but I've had some troubles starting the `mongod` service and was hoping that someone could give me a helping hand.
[08:57:31] <mbuf> I have attached an EBS snapshot (from recovery) to a new instance, removed /var/mongodb/local.* and /var/mongodb/mongod.lock files; when I try to start the mongod instance on the primary node, it says couldn't get file length when opening mapping /var/mongodb/local.1 boost::filesystem::file_size: No such file or directory: "/var/mongodb/local.1"
[08:57:42] <mbuf> when restoring, should I only remove the local.* files from the secondary nodes?
[09:08:47] <mbuf> joannac: ^^ ?
[09:43:15] <bartzy> Hello
[09:43:34] <bartzy> Is it possible to “add back” data to the aggregation pipeline, which $match filtered out..?
[09:44:06] <coudenysj> bartzy: i don't think so, maybe you should reorder the pipeline?
[09:44:25] <bartzy> for example: I have a posts collection with user_id. I match only the posts in the last month, group by user_id to get all the user_id’s that posted in the last month. Now I want to count how many posts each of these user_id’s did for all eternity (not just the last month)
[09:44:36] <bartzy> coudenysj: OK, but I think I need to save a temp collection anyway? I can’t think of a way not to
[09:45:14] <bartzy> coudenysj: I need to get all the user_id’s that did more than X (let's say 50) posts (in all time), AND that at least one of these posts was in the last month.
[09:48:42] <coudenysj> bartzy: now that's a different query :), your last line suggests a query with 2 find entries
[09:49:13] <bartzy> coudenysj: So save to a temp collection..?
[09:50:35] <coudenysj> do you want a list of users that match the criteria?
[09:51:02] <coudenysj> or do you want the list of users and the actual dates, etc…
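
A hedged sketch of one way to express bartzy's requirement without a temp collection (the field names user_id and created_at are assumptions): group all posts per user, keeping the all-time count and the newest post date, then filter on both:

    db.posts.aggregate([
        { $group: {
            _id: "$user_id",
            total: { $sum: 1 },                  // all-time post count
            lastPost: { $max: "$created_at" }    // most recent post date
        } },
        { $match: {
            total: { $gt: 50 },                  // more than X posts overall
            lastPost: { $gte: new Date(Date.now() - 30 * 24 * 3600 * 1000) }
        } }
    ])
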
[12:14:23] <aps> While doing rs.stepDown() on a primary, I get the following error. Is this supposed to be like this? https://www.irccloud.com/pastebin/OiCFHP74/
[12:23:00] <nixstt> if I have 1 primary, 1 secondary, and 1 arbiter and wanted to add another secondary, should I add the new secondary as a non-voting replica set member while the initial sync is taking place?
[12:43:18] <kali> nixstt: you don't really have to, unless you have reasons to worry about the chances of having a 2-2 split in your replica set during the replication time
[12:44:59] <nixstt> kali: I thought it would be better, initial sync takes really long since the dataset is almost 1 TB and takes about 24 hours
[12:46:06] <kali> nixstt: that's up to you, really. note that the "slightly more risky" time is not the initial sync time but the time when you have an even number of nodes in the cluster
[12:47:20] <nixstt> kali: so during the initial sync it won’t vote?
[12:47:46] <kali> nixstt: it will vote the same way as when it is synced
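
If one did want the new member to sit out elections during the sync, a hedged sketch (the member _id, index, and hostname are hypothetical) is to add it with zero votes and priority 0, then reconfigure once the initial sync completes:

    rs.add({ _id: 3, host: "mongo4.example.com:27017", priority: 0, votes: 0 })
    // after the initial sync finishes:
    cfg = rs.conf()
    cfg.members[3].votes = 1      // index 3 assumed; check rs.conf() first
    cfg.members[3].priority = 1
    rs.reconfig(cfg)
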
[13:26:33] <sascha> hey there
[13:27:49] <sascha> my mongodb server 3.0.5 is segfaulting when i fire several group() commands
[13:28:01] <sascha> is there any way to get more information from a segfault?
[13:28:26] <sascha> last 2 lines before the stack trace:
[13:28:28] <sascha> 2015-08-18T15:20:40.844+0200 F - [conn2] Invalid access at address: 0
[13:28:30] <sascha> 2015-08-18T15:20:40.859+0200 F - [conn2] Got signal: 11 (Segmentation fault).
[13:32:47] <cheeser> you should post that to the mongodb-users list. more of the kernel devs will see it there.
[13:46:12] <amharris> Hey, folks. It's going to be my first time venturing into MongoDB but I've had some troubles starting the `mongod` service and was hoping that someone could give me a helping hand. As a heads-up, I'm running Debian Wheezy (x86_64).
[13:53:58] <sascha> cheeser: ty
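
One general way to get more out of a segfault (an editor's sketch using standard tools, not advice given in the channel) is to enable core dumps before starting mongod and then inspect the core with gdb:

    ulimit -c unlimited                  # allow core files in this shell
    mongod --config /etc/mongod.conf
    # after the crash:
    gdb /usr/bin/mongod /path/to/core    # then `bt` prints a backtrace
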
[14:09:47] <fcanela> Hello. Is it normal that under node, the each method on a cursor brings only the first 1k results?
[14:11:00] <cheeser> i wouldn't think so but i'm not a js guy so i've never used it.
[14:11:05] <fcanela> with "each method" I was referring to performing cursor.each()
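
For reference, a hedged sketch of each() with the 2015-era node driver: it should walk the entire result set, with batchSize only controlling how many documents come back per network round trip (an assumption based on the driver docs of the day):

    db.collection('items').find().batchSize(1000).each(function (err, doc) {
        if (err) throw err;
        if (doc === null) return console.log('done');  // cursor exhausted
        // process doc here
    });
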
[14:31:58] <bartzy> Hey
[14:32:23] <bartzy> Any MongoDB people here perhaps? kali ? :p
[14:32:44] <bartzy> I have a weird issue - I upgraded from 2.4 to 2.6.11 recently, and today I got this error (and the process crashed): DR102 too much data written uncommitted 314.577MB
[14:33:24] <bartzy> The 2.4 environment was extremely stable - I never had an issue and almost never had to fallback to the secondary (other than maintenance periods)
[14:36:42] <bartzy> just had another crash with that log...
[14:36:45] <bartzy> Anyone have any idea..?
[14:49:15] <kali> bartzy: i am not from mongodb :P
[14:49:23] <bartzy> oh, I was sure you were ...
[14:49:34] <bartzy> I guess it’s the level of depth you’re going into… :)
[14:49:49] <bartzy> Anyway I’m having a real weird production crisis here
[14:52:17] <bartzy> kali: perhaps you have any idea what can cause this?
[14:52:48] <kali> bartzy: no, i have no idea what this is about
[14:52:53] <kali> sorry
[14:52:55] <bartzy> ok, sorry :)
[15:31:39] <stuntmachine> is there a recommended way to take a nightly mongodump of all data on a replica set setup? not sure if there are any best practices that aren't covered in the docs.
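
A hedged sketch of one common pattern (hostname and paths hypothetical): a cron entry that runs mongodump nightly against a secondary, with --oplog for a consistent point-in-time snapshot:

    # crontab entry: 2am nightly dump from a secondary member (% escaped for cron)
    0 2 * * * mongodump --host mongo-secondary.example.com --port 27017 --oplog --out /backups/mongo-$(date +\%F)
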
[15:54:59] <jr3> We are considering creating a separate database with the same schema but only for a division of location, e.g., the US would have its own database, the EU would have its own
[15:55:41] <jr3> seems like the long-term complexity of doing it like that would outweigh just adding a region property to the current schema and constraining on region when needed
[15:58:13] <cheeser> when building a CMS at a former gig, we would just add a websiteId to every query that needed it. wasn't so hard to do.
[15:59:48] <jr3> so your model tracked that websiteId right? That's what I'm leaning to
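
A hedged sketch of that single-database approach, with a region stamp in place of cheeser's websiteId (collection and field names hypothetical):

    db.orders.createIndex({ region: 1, status: 1 })    // index the discriminator
    db.orders.find({ region: "EU", status: "open" })   // constrain every query on it
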
[16:28:08] <terminal_echo> hey guys, how do I properly set up the permissions for /data/db?
[16:28:19] <terminal_echo> The documentation says to make sure to set up the permissions properly :)
[16:28:48] <terminal_echo> and clearly when I run 'mongod' I get permission denied because currently /data/db is owned by root :)
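
A hedged sketch of the usual fix (the service account is often named mongodb on Debian/Ubuntu packages; that name is an assumption here):

    sudo chown -R mongodb:mongodb /data/db   # give the mongod user ownership
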
[16:35:20] <Doyle> Hey. Say I have a RS with a master/secondary, one is added via IP and I want to change it to a DNS name in the conf. If it's the secondary do I just do the reconfigure operation that's described in the manual?
[16:36:33] <Doyle> Or can I do an rs.remove and rs.add to adjust it without it doing an initial sync?
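
For reference, a hedged sketch of the manual's reconfigure approach, which avoids an initial sync (the member index and hostname are hypothetical):

    cfg = rs.conf()
    cfg.members[1].host = "mongo2.example.com:27017"   // swap the IP for a DNS name
    rs.reconfig(cfg)
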
[16:43:13] <aadityatalwai> Are there any known issues with permissions for `serverStatus` in Mongo Cloud Manager? We have our home-rolled monitoring agent trying to call `serverStatus` with the `clusterMonitor` and `readAnyDatabase` roles, but are apparently not authorized.
[17:54:20] <jr3> is it common to connect to a mongo instance and just leave it connected during the lifetime of a node app?
[17:54:47] <jr3> or should we be connecting/disconnecting on every request
[17:56:10] <cheeser> i should think not...
[17:56:25] <cheeser> you'd pay the cost of renegotiating a connection each time. less than optimal.
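
A hedged sketch of the connect-once pattern with the 2015-era node driver (URL and database name hypothetical):

    var MongoClient = require('mongodb').MongoClient;

    var db;  // opened once at startup, reused for the life of the process
    MongoClient.connect('mongodb://localhost:27017/myapp', function (err, database) {
        if (err) throw err;
        db = database;
        // start the HTTP server here; request handlers share `db`
    });
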
[18:41:46] <d4rklit3> hi
[18:41:56] <d4rklit3> i have an ubuntu server with mongodb running
[18:42:06] <d4rklit3> all of a sudden the service just stopped and refuses to restart
[18:42:15] <d4rklit3> where is the error log?
[18:43:13] <cheeser> cat /etc/mongod.conf
[18:43:17] <cheeser> er, *check*
[18:44:16] <d4rklit3> yeah i see that
[18:44:22] <d4rklit3> that file doesn't show any errors
[18:45:07] <d4rklit3> wow hexchat, you suck
[18:45:23] <d4rklit3> it's in /var/log/mongodb/mongod.log
[18:45:34] <d4rklit3> no errors
[18:52:24] <d4rklit3> :(
[19:46:42] <deathanchor> anyone know how to get around this limitation? aggregation result exceeds maximum document size (16MB)
[19:47:23] <cheeser> use a cursor for your output
[19:47:35] <deathanchor> how do I specify that?
[19:48:34] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.aggregate/#db.collection.aggregate
[19:49:16] <deathanchor> yeah reading that, so something like aggregate([stuff], { cursor : 1000 }) ?
[19:50:12] <deathanchor> ah bangin, yeah pretty much that
[19:50:15] <deathanchor> thx
[19:50:42] <cheeser> np
[19:53:38] <deathanchor> ah poopsteak, introduced in 2.6, gotta do some finagling.
[20:12:29] <cheeser> heh. yeah. time to upgrade! 3.0's been out for months and 3.2's right around the corner.
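
Per the linked aggregate() reference, the cursor option takes a document rather than a number, e.g.:

    db.collection.aggregate(
        [ /* pipeline stages */ ],
        { cursor: { batchSize: 1000 } }   // return a cursor instead of a single 16MB document
    )
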
[20:14:41] <atomicb0mb> hello, what am I missing here: db.collection('foo').find().sort( [ [ "field", 1], ["otherField", -1 ] ] ).toArray(function(err,docs){......} ?
[20:15:28] <atomicb0mb> I get an error: can't canonicalize query: badvalue bad order array [2]
[20:16:50] <doc_tuna> .sort({ column1: 1, column2: -1})
[20:16:56] <doc_tuna> not whatever structure you have, afaik
[20:17:17] <xar-> anyone here participating in the mongodb university training?
[20:17:24] <xar-> wondering if this might be a good forum to discuss it
[20:19:46] <Derick> you can ask questions here, but not for the homework ;-)
[20:21:57] <atomicb0mb> hummm.. that worked doc_tuna, thank you. But I read that if I put it in arrays like [ [], [] ] it respects the order of the sort.
[20:25:20] <atomicb0mb> or am I trippin?
[20:25:50] <doc_tuna> perhaps they meant... [{column1: 1}, {column2: -1}]?
[20:25:55] <doc_tuna> i'm not sure of that syntax, sorry
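
For reference, a minimal sketch of the object form that worked for atomicb0mb (field names taken from the discussion):

    db.collection('foo')
        .find()
        .sort({ field: 1, otherField: -1 })   // field ascending, then otherField descending
        .toArray(function (err, docs) { /* ... */ });
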
[20:26:58] <lemmi_> Hey there. If my data model is kind of relational and I don't have an immediate need for performance, should I just stick to a sql database? Everything about my web app / model is pretty conventional at this point
[20:28:51] <atomicb0mb> doc_tuna no problem, thank you :)
[20:29:39] <doc_tuna> lemmi_: if it were me i would identify a particular use case where having json docs as opposed to rows/tables suits the application
[20:29:48] <doc_tuna> and experiment with using mongodb for that
[20:30:33] <xar-> Derick: just finding the training challenging, is there a suitable place to discuss homework(s)?
[20:31:44] <Derick> xar-: sorry - I don't know
[20:31:48] <lemmi_> doc_tuna: if i can't, probably just stick with sql?
[20:32:18] <doc_tuna> yes, unless you can say for sure mongo would improve your architecture I wouldn't switch just for the sake of switching
[20:32:34] <lemmi_> fair enough, thank you :)
[20:34:08] <Razerglass> hey friends I have a question regarding $unset, is it possible to 'delete' a key+value from an object and not the entire object?
[20:35:14] <doc_tuna> I believe that's what unset does
[20:36:12] <doc_tuna> do you mean from a nested object?
[20:36:18] <Razerglass> yes
[20:36:26] <Razerglass> I'm using $unset: { person: 'john' }; if I have an object of persons with values of names, that would delete my whole persons object out of my DB instead of 'plucking' a single person
[20:36:46] <Razerglass> is there different syntax I should be following?
[20:37:12] <doc_tuna> the 'john' part doesnt matter in $unset
[20:37:15] <doc_tuna> its not for matching
[20:37:30] <Razerglass> is it possible to match with unset?
[20:37:43] <doc_tuna> what you do is match in the first part
[20:38:06] <Razerglass> $unset: { person['john'] },?
[20:38:26] <doc_tuna> what does your object look like
[20:39:00] <doc_tuna> is it: {person: {john: {obj}, jane: {obj}}} ?
[20:39:02] <Razerglass> person{1:'john', 2:'jim'} etc
[20:39:05] <doc_tuna> ah
[20:39:34] <Razerglass> it's only an object in my collection
[20:39:36] <Razerglass> no nesting
[20:39:37] <doc_tuna> your keys are integers there
[20:39:43] <Razerglass> yes
[20:40:13] <doc_tuna> you'd do $unset: {'person.1': ""}
[20:40:18] <doc_tuna> i think
[20:40:25] <Razerglass> ill give that a shot, thanks doc_tuna
[20:47:54] <Razerglass> doc_tuna it didn't like that
[20:48:05] <Razerglass> I couldn't imagine it would be a string right there, would it?
[20:48:47] <doc_tuna> sry gotta go bud
[20:48:58] <Razerglass> thanks for your time
[20:50:23] <Razerglass> ok that actually works $unset: {'person.27': ""},
[20:50:33] <Razerglass> but how would I use a dynamic variable in place of 27?
[20:50:48] <Razerglass> I cannot figure that out... person.myvariable
[21:05:47] <Razerglass> got it, had to dynamically create my query, for example: var tmpObj = {}; tmpObj['person.'+number] = ""; then $unset: tmpObj
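
Spelled out, the dynamic-key approach Razerglass landed on looks roughly like this (the update selector is hypothetical):

    var number = 27;
    var tmpObj = {};
    tmpObj['person.' + number] = "";   // the value is ignored by $unset
    db.mycollection.update({ _id: someId }, { $unset: tmpObj });
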
[21:41:56] <d4rklit3> hi can i set logrotate in the mongod.conf ?
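
For what it's worth, a hedged sketch of the 3.0-era YAML option (rotation itself is still triggered externally, e.g. by SIGUSR1 or the logRotate server command):

    systemLog:
      destination: file
      path: /var/log/mongodb/mongod.log
      logRotate: rename   # rename the old file rather than reopening it
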
[21:46:08] <leptone> I'm getting this error
[21:46:09] <leptone> https://gist.github.com/leptone/6f7d754e0a89803cdcd8
[21:46:43] <leptone> idk why my mongo server is not running or what turns it off or how to turn it back on
[21:47:00] <leptone> but I'm pretty sure the problem is the mongodb server isn't running
[21:47:42] <leptone> again idk how it gets turned off in the first place or how to turn it on, but I have had this issue before
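
A hedged first-aid sketch for this situation (Debian/Ubuntu-style sysvinit service names assumed):

    sudo service mongod status                # is it running?
    sudo service mongod start                 # try to bring it back up
    tail -n 50 /var/log/mongodb/mongod.log    # look for the reason it stopped
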
[22:51:04] <jmorph> Is querying for embedded documents something to be avoided in mongo?
[22:51:21] <jmorph> I see there's support but it seems like a lot of searching
[22:52:50] <daidoji> jmorph: what do you mean?
[22:53:36] <jmorph> It just seems a lot more cumbersome than searching through a flat table
[22:54:41] <jmorph> if i wanted to grab all users.posts.comments.likes > 20 or something
[22:55:18] <daidoji> jmorph: as far as I'm aware, denormalizing everything into embedded documents is considered best practice in the mongo world
[22:55:51] <daidoji> db.collection.find({'users.posts.comments.likes': {$gt: 20}})
[22:56:01] <daidoji> ???
[22:57:35] <jmorph> hmm ok, i think i'm just getting used to embedded docs
[22:58:00] <jmorph> digging that deep in each user document isn't a big deal?
[23:02:47] <daidoji> jmorph: no, the depth probably doesn't matter very much
[23:03:35] <daidoji> remember that in Mongo all the documents are stored as JSON objects, so the read (into memory) of the javascript object probably takes most of the query time; once it's in your cursor, querying into that document is probably only a fraction of the time, even if you go several levels deep
[23:04:35] <daidoji> in fact, I'm not really an expert on how javascript objects store their keys; they might just store them all flat in a hash, in which case it would matter even less, since key lookup would be O(1)
[23:07:54] <MalteJ> which tool do you use to collect system status information (cpu, memory usage, disk i/o) and send it to some database?
[23:10:45] <NaN> Why can't I find by _id in this simple query? -- https://ghostbin.com/paste/qzqmj
[23:10:53] <NaN> What am I doing wrong?
[23:12:30] <cheeser> because _id is a String there and not an ObjectId
[23:12:31] <MatheusOl> NaN: I might be wrong, but it seems that it is literally "ObjectID(52e964d1a9673f80abdfa8b5)", not an ObjectID with value 52e964d1a9673f80abdfa8b5
[23:12:38] <cheeser> yep
[23:12:40] <MatheusOl> :)
[23:15:33] <NaN> wow, so how could that be possible? I mean, all my ObjectIDs are actually Strings, not ObjectIDs
[23:15:37] <NaN> that's bad :S
[23:15:54] <NaN> you were right MatheusOl, it's a String
[23:16:10] <NaN> cheeser: you are right
[23:16:31] <MatheusOl> Seems that you are doing something wrong when you save it
[23:18:41] <NaN> I did a js script
[23:31:48] <NaN> thanks guys, MatheusOl, cheeser, I'll check my save script
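
To make cheeser's and MatheusOl's point concrete (a minimal sketch; the collection name is hypothetical): a string _id and an ObjectId _id are different BSON types, so the two queries below match different documents:

    db.things.find({ _id: "52e964d1a9673f80abdfa8b5" })            // matches string _ids only
    db.things.find({ _id: ObjectId("52e964d1a9673f80abdfa8b5") })  // matches ObjectId _ids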