#mongodb logs for Friday the 25th of September, 2015

[00:15:38] <darius93> Does mongo perform any locks? e.g. if I am concurrently writing (inserting and/or updating) documents within a collection, does it lock so that, say, thread B has to wait for thread A to finish its operation before it can write or update a document?
[00:18:45] <StephenLynx> yes.
[00:18:56] <StephenLynx> afaik it locks collections, and will lock only documents on WT (WiredTiger)
[00:19:33] <daidoji> darius93: oh does it ever
[00:19:45] <daidoji> it locks on nearly every operation
[00:19:48] <daidoji> even copyTo
[00:25:11] <cheeser> any write will acquire an exclusive lock
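A quick way to observe this from the shell, as a rough sketch (field layout varies by server version and storage engine; MMAPv1 locks at database/collection granularity, WiredTiger per document):

    // cumulative lock acquisition counts since startup
    db.serverStatus().locks
    // operations currently blocked waiting on a lock
    db.currentOp({ waitingForLock: true })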
[00:45:39] <daidoji> cheeser: how can I tell if my mongo build was built with ssl?
[00:45:42] <daidoji> 2.6
[00:54:22] <daidoji> anyone?
[01:15:46] <cheeser> daidoji: probably wasn't with 2.6
[01:16:10] <cheeser> but you could always try starting mongod with ssl turned on and see if it complains.
[01:29:01] <daidoji> cheeser: :-(
[01:29:07] <daidoji> it wasn't
[01:29:10] <daidoji> oh well
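Following cheeser's suggestion, a minimal way to probe an existing binary (the paths here are hypothetical; a 2.6 build compiled without SSL rejects the option during argument parsing):

    mongod --sslMode requireSSL \
           --sslPEMKeyFile /etc/ssl/mongodb.pem \
           --dbpath /tmp/ssltest
    # no SSL support: fails immediately with an "unknown option" style error
    # SSL support: fails later (or starts), complaining about the PEM file instead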
[05:59:25] <Jonno_FTW> in pymongo, if I use find(), iterate over the cursor, and change the document, how do I write that changed doc back to mongo? will it break the cursor?
[06:10:00] <Boomtime> Jonno_FTW: many drivers have a 'save' method and no, it won't break the cursor, though if you changed the values in the doc that your cursor is using to enumerate, there is the possibility of running over the same document again
[06:11:30] <Boomtime> be aware that the 'save' method, for those drivers which have such, is usually a full update - it replaces the entire existing document with the one you modified ('save' usually can't detect what you've changed)
[06:12:18] <Jonno_FTW> Boomtime: I've gone with update_one, like this: collection.update_one({"_id": i["_id"]}, {"$set": {"anomaly_score": anomalyScore}})
[06:12:53] <Boomtime> right, that's better
[06:13:18] <Boomtime> you are only sending the data that is needed, and providing the server with the best possible chance for optimization
[06:13:31] <Jonno_FTW> just wondering, because in other languages, like Java, updating a collection while iterating over it breaks the cursor
[06:14:56] <Boomtime> mongodb isn't a language - the 'collection' on the server is concurrently accessible and you will see changes as they occur
[06:15:57] <Boomtime> if you run a query for a large number of documents, it is entirely possible that by the time you've enumerated towards the end, more documents have been inserted in the interim - and it is even possible your cursor will return some (or all, or possibly none) of the new documents that got inserted during that time
[06:16:40] <Boomtime> the only thing you are guaranteed is that you'll never see a partial update to a single document
[06:16:52] <Boomtime> all document updates are atomic
[06:17:08] <Jonno_FTW> cool
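Putting Boomtime's advice together, the whole pattern looks roughly like this in PyMongo (a sketch; the database, collection, filter, and compute_score are hypothetical stand-ins):

    from pymongo import MongoClient

    collection = MongoClient().mydb.activity  # hypothetical database/collection

    for doc in collection.find({"anomaly_score": {"$exists": False}}):
        score = compute_score(doc)  # hypothetical scoring function
        # $set ships only the changed field; the open cursor is not invalidated,
        # though a modified document may in principle be returned again
        collection.update_one({"_id": doc["_id"]}, {"$set": {"anomaly_score": score}})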
[06:37:49] <dddh> what about mongodb certification?
[06:38:45] <dddh> for example, if you already have m101, 102 and 202
[06:41:16] <dddh> should I take the c100dev/dba exams, or are m101p/102/202 enough?
[07:11:39] <antiPoP> Hi, I'm trying to make an OR search between text and number fields, but when I do it, it says it's not indexed although there is an index there. What am I doing wrong? Here are the commands and output: https://gist.github.com/antiPoP/a5e38acc07b631b257d0
[07:16:29] <antiPoP> I updated the gist
[07:50:11] <AzaToth> So, I'm trying to do a small migration of a db via node-migration. The script in question can be found at https://paste.debian.net/313320/ and the db in question is a 2.4. The problem I have is that users.find() inside the forEach() doesn't give me any data at all on the server, but does for me locally; so I wonder, can I not query a second collection inside a forEach on a cursor from the first one?
[07:51:15] <AzaToth> if I test executing the "users" query outside the loop, it returns data
[07:52:27] <AzaToth> any help would be welcome
[07:54:01] <ax003d> Are you using the same version of the mongodb shell locally & remotely?
[07:57:36] <AzaToth> yes, 2.0.43
[07:57:47] <movedx> What could be the reason why 'mongod' won't start from the official init script provided by your RPMs, or from a custom systemd service file I wrote, but does start when I run it as root in a simple manner: mongod -f /etc/mongodb.conf
[07:58:16] <joannac> check the logs?
[07:59:01] <movedx> If I run it as root directly, it logs fine. If I run it using your init script, it doesn't log. It fails with absolutely no clue as to what is going on.
[07:59:20] <AzaToth> ax003d: though I notice the local execution isn't fully working either; it doesn't get the same number of users every run
[07:59:33] <AzaToth> like it's some kind of race condition
[08:01:27] <joannac> movedx: set -x the init script and see where it gets stuck?
[08:01:35] <AzaToth> ax003d: basically I want to set an array in "companies" containing all "users" who have a reference to said company in their "owns" array, if said user's "needsInvite" flag is false
[08:01:48] <movedx> joannac: Good idea. Let me see what that generates.
[08:02:26] <AzaToth> due to an added feature I want to update the database with initial data based on that
[08:02:54] <AzaToth> but I'm just getting stuck running the migration :(
[08:03:42] <AzaToth> even tried to run it from scratch as a plain script, without using node-migration, but I get the same issue: not all "users" are found every time
[08:04:22] <movedx> joannac: It's CentOS 7, so it rolls over to systemd, of course. I can't find where your RPM puts the service file.
[08:06:37] <AzaToth> ax003d: do you know if there's a bug in 2.4 that might cause this?
[08:06:49] <AzaToth> can't really upgrade to 2.6 :(
[08:11:15] <AzaToth> ax003d: just tried locally to install mongodb 3.0.6 and same thing happens :(
[08:11:23] <movedx> joannac: So systemd supports init scripts, and it is executing the init script, but the output is nonexistent. I've tried writing my own service file, but the same issue arises.
[08:11:42] <AzaToth> can't really figure out what I'm doing wrong
[08:12:01] <movedx> joannac: I'm going to try removing the user from the service file, to see if it starts as root from the service file.
[08:12:46] <movedx> The answer is no.
[08:23:07] <movedx> I just don't get what I'm doing wrong here. I don't believe I've changed anything. Even on a freshly installed, minimal CentOS 7 box it doesn't work.
[08:25:02] <movedx> The executable exits with an exit code of '1': http://docs.mongodb.org/manual/reference/exit-codes/ -- wonderful.
[08:36:02] <movedx> And this is all I get out of it: ERROR: child process failed, exited with error number 1
[08:36:19] <movedx> 342 people and no one is even willing to offer any ideas?
[08:43:25] <movedx> "Failed global initialization: FileNotOpen Failed probe for "/data/db/mongodb.log": Permission denied" -- for the root user? Permission denied, for... the root user?
[08:44:13] <BurtyB> does the dir exist? are you using that horror called selinux? anything in the system logs?
[08:50:05] <movedx> Yes. Yes (why on Earth would you turn it off?). No. I'm going step-by-step through my Ansible code to determine if I'm doing something wrong, but really, all it does is download and install the RPM for the official repositories, install MongoDB 3.0.5, drop the configuration onto disk (http://pastebin.com/h9GvxMPR), and start the service, which, of course, fails.
[09:02:08] <AzaToth> Think I solved it by doing http://paste2.org/5X0sW3At
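The likely culprit behind the flaky counts is that the node driver's queries are asynchronous: kicking off users.find() inside a synchronous forEach() and letting the migration step finish before the callbacks return produces exactly this "different number of users every run" race. One way to structure it, sketched against the callback-style driver API of that era (the "members" field and the done() completion callback are hypothetical):

    db.collection('companies').find().toArray(function (err, companies) {
      if (err) throw err;
      var pending = companies.length;
      companies.forEach(function (company) {
        // users who reference this company in "owns" and don't need an invite
        db.collection('users')
          .find({ owns: company._id, needsInvite: false })
          .toArray(function (err, users) {
            if (err) throw err;
            db.collection('companies').update(
              { _id: company._id },
              { $set: { members: users.map(function (u) { return u._id; }) } },
              function (err) {
                if (err) throw err;
                if (--pending === 0) done(); // hypothetical: close db / finish step
              });
          });
      });
    });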
[10:26:12] <jamiel> Hi all, I am trying to find a way to prevent remove operations on a secondary replica set member so it can act as an "archiving member". Am I correct in thinking I can't assign a user to the operations syncing the oplog to make use of privileges for this?
[10:49:35] <m3t4lukas> hey guys
[10:50:15] <m3t4lukas> how do I filter for all documents where the field "deleted" does not exist?
[11:12:12] <m3t4lukas> found it :)
[11:13:00] <m3t4lukas> the query is {"deleted": {"$exists": false}} for anyone stumbling upon it
[11:15:27] <m3t4lukas> now I'm ready to do big data with mongoDB, I will never delete anything from now on :)
[18:21:02] <jmzc> hi
[18:21:47] <daidoji> hi
[18:22:20] <jmzc> I'm getting weird behaviour with the $elemMatch operator
[18:22:35] <jmzc> with
[18:22:36] <jmzc> db.Session.find({"events":{ $elemMatch: { "eventDescription":"CALL_REQUESTED", "eventTime" : { $gte : ISODate("2015-09-21T06:00:00Z") }, "eventTime" : { $lte : ISODate("2015-09-21T07:00:00Z") }}}});
[18:23:55] <jmzc> I'm only getting records before ISODate("2015-09-21T07:00:00Z"); it ignores "eventTime" : { $gte : ISODate("2015-09-21T06:00:00Z") }
[18:25:00] <jmzc> can't I use the same property ("eventTime") twice in an $elemMatch query?
[18:25:51] <jmzc> I thought $elemMatch was required to match ALL the conditions
[18:26:22] <jmzc> It's as if the second condition overrides the first one
[18:36:46] <daidoji> jmzc: that's because queries in mongo are javascript objects
[18:36:52] <daidoji> so the second "eventTime" key is overwriting the first one
[18:37:41] <jmzc> ok
[18:37:51] <jmzc> and how can I do that?
[18:38:12] <jmzc> $and operator ?
[18:39:40] <daidoji> yes
[18:39:50] <daidoji> $and operator is what you want there
[18:40:41] <jmzc> ok,thanks !
[18:41:15] <daidoji> np
[18:47:06] <jmzc> daidoji,
[18:47:07] <jmzc> db.Session.find({"events":{ $elemMatch: { "eventDescription":"CALL_REQUESTED", $and : [ "eventTime" : { $gte : ISODate("2015-09-21T06:00:00Z") }, "eventTime" : { $lte : ISODate("2015-09-21T07:00:00Z") } ] }}});
[18:47:07] <jmzc> 2015-09-25T20:42:34.674+0200 E QUERY SyntaxError: Unexpected token :
[18:47:11] <jmzc> why?
[18:47:28] <daidoji> I don't think that's the way to use $and
[18:47:58] <daidoji> isn't it like "field1: {$and: [{ $gte : ISODate("2015-09-21T06:00:00Z") }, "eventTime" : { $lte : ISODate("2015-09-21T07:00:00Z") }]}" or something like that?
[18:48:00] <daidoji> let me check
[18:48:23] <jmzc> yes, you're right
[18:48:31] <jmzc> wait, i'll check it
[18:48:43] <daidoji> oh no you are right
[18:48:46] <daidoji> my apologies
[18:49:01] <daidoji> I think you have to wrap your array items as separate documents
[18:49:03] <jmzc> :-/
[18:49:30] <daidoji> db.Session.find({"events":{ $elemMatch: { "eventDescription":"CALL_REQUESTED", $and : [{ "eventTime" : { $gte : ISODate("2015-09-21T06:00:00Z")} }, {"eventTime" : { $lte : ISODate("2015-09-21T07:00:00Z") }} ] }}});
[18:49:34] <daidoji> like that
[18:54:18] <daidoji> actually you might have to wrap that $and in its own document too
[18:54:18] <jmzc> daidoji, perfect.. that worked
[18:54:19] <jmzc> I'm a newbie with mongodb and it's hard, thanks
[18:54:19] <daidoji> np, glad I could help
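For the record, the $and isn't strictly required here: both bounds can sit in a single predicate object under one "eventTime" key, which sidesteps the duplicate-key problem entirely:

    db.Session.find({ "events": { $elemMatch: {
        "eventDescription": "CALL_REQUESTED",
        "eventTime": { $gte: ISODate("2015-09-21T06:00:00Z"),
                       $lte: ISODate("2015-09-21T07:00:00Z") }
    }}});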
[19:04:34] <mbwe> i was wondering, what is the best way to store timeseries data with mongodb? or could somebody give me some pointers to some docs dealing with this kind of data? or, if mongodb is not the best choice, what other db could i use?
[19:06:07] <daidoji> mbwe: I have not had a chance to use it, but I've heard good things about influxDB
[19:07:07] <daidoji> mbwe: no docs on hand, but if you think of each document as a "tick" in your time-series (assuming uniform distribution), then setting each tick document's _id to the time it was taken seems like a good way to do things
[19:07:13] <mbwe> ok, but is that to say that mongodb is not a good fit for timeseries, daidoji? i prefer mongo to be honest
[19:07:30] <daidoji> mbwe: I mean, it depends on what you want to use it for
[19:07:35] <daidoji> if you're just storing stuff it's probably okay
[19:07:49] <daidoji> if you want to be able to do ad-hoc analysis and querying over your time series, it's probably not the best fit imo
[19:08:06] <mbwe> well that would be the usecase to be honest
[19:08:11] <mbwe> i just found something
[19:08:18] <mbwe> a blogpost on the site
[19:08:21] <mbwe> http://blog.mongodb.org/post/65517193370/schema-design-for-time-series-data-in-mongodb
[19:08:25] <mbwe> have to read it first
[19:09:01] <daidoji> mbwe: word, good luck
[19:09:07] <daidoji> it's lunch time for me anyways
[19:09:56] <mbwe> ah thanks and have a nice lunch
[19:14:09] <StephenLynx> kek
[19:14:13] <StephenLynx> that blog thing
[19:14:18] <StephenLynx> is exactly what I do on lynxchan
[19:14:43] <StephenLynx> to track board posting stats
[19:19:37] <mbwe> hi StephenLynx, lynxchan, what is that?
[19:19:45] <StephenLynx> lynxhub.com
[19:19:49] <StephenLynx> a chan engine
[19:20:06] <mbwe> let me have a look
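The core idea of the post linked above is bucketing: instead of one document per reading, pre-allocate one document per source per time window and $set readings into numbered slots, so each write is a small in-place update. A minimal sketch in the shell (collection and field names are hypothetical):

    // one document per sensor per hour, one slot per minute
    db.ticks.insert({
        _id: "sensor-1:2015-09-25T19",
        values: {}  // filled in as readings arrive
    });
    // recording minute 42 of that hour is a single in-place update
    db.ticks.update(
        { _id: "sensor-1:2015-09-25T19" },
        { $set: { "values.42": 3.7 } }
    );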
[20:26:06] <[diecast]> having some issues. both of my secondaries in a replica set are down and have "oplog too far out of sync" errors. I stopped them and started to work on the primary, but now the primary says it is a secondary
[20:26:17] <[diecast]> it's the only mongo server online now, how can I force it to be primary?
[20:39:31] <[diecast]> nm, got it
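For anyone hitting the same thing: a primary that can no longer see a majority of its replica set steps down to secondary by design. The usual escape hatch is to force a reconfig that contains only the surviving member, sketched here assuming it is members[0] in the current config:

    // run on the surviving member
    cfg = rs.conf();
    cfg.members = [cfg.members[0]];     // keep only this node
    rs.reconfig(cfg, { force: true });  // force is permitted from a SECONDARY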
[21:16:55] <ciwolsey> is there a way to do a mongorestore from the mongo shell?
[23:10:18] <macwinner> hi, any ideas why my query stream for mongo prints out only the first record of my returned data here: var stream = Activity.find(filter).read("primaryPreferred").stream(); stream.('data', console.log);
[23:10:36] <macwinner> i've tried attaching to the end and error events, but nothing gets printed
[23:37:20] <daidoji> macwinner: what's a query stream?
[23:37:57] <daidoji> and where is read() defined?
[23:47:52] <macwinner> daidoji: oh.. this is a mongoose thing
[23:51:50] <macwinner> daidoji: ahh, nm.. found the issue
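For the record, the snippet as pasted contains a syntax slip: stream.('data', console.log) is missing the method name, so nothing is ever wired up. With the Mongoose QueryStream of that era being a standard readable stream, the usual wiring would be roughly:

    var stream = Activity.find(filter).read('primaryPreferred').stream();
    stream.on('data', console.log);    // one callback per returned document
    stream.on('error', console.error);
    stream.on('end', function () { console.log('done'); });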