
#mongodb logs for Wednesday the 4th of June, 2014

[00:23:40] <jason0_> Quick question-- is there any read performance gain when using replicasets?
[00:24:31] <proteneer_osx> yes
[00:24:34] <proteneer_osx> you read from secondaries
[00:24:37] <proteneer_osx> which reduces load on primaries
[00:26:41] <jason0_> proteneer_osx: will most drivers distribute queries over multiple replicasets? For example a complex find operation that has to scan all elements, will this become distributed with replicasets?
[00:26:58] <proteneer_osx> depends
[00:27:02] <proteneer_osx> i set my driver to use SECONDARY_PREFERRED
[00:35:28] <jason0_> proteneer_osx: i see. but it would be up to the driver to distribute the query.
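A minimal sketch of the read-preference setting proteneer_osx mentions, in mongo shell syntax (the collection and query are illustrative; driver-level syntax differs):

    // send reads to a secondary when one is available, otherwise to the primary
    db.getMongo().setReadPref("secondaryPreferred")
    db.users.find({ active: true })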
[02:38:00] <gancl> og01: http://stackoverflow.com/questions/24017372/nodejs-mongodb-findandmodify-exception-e11000-duplicate-key-error-index Thanks!
[02:56:26] <proteneer_osx> any idea what MongoError: auth failed
[02:56:27] <proteneer_osx> means?
[04:50:12] <n4l> anyone know why I would be unable to drop a db? in the mongo shell im doing use [database];
[04:50:17] <n4l> db.dropDatabase();
[04:51:21] <n4l> also tried db.dropAllUsers();
[04:53:05] <n4l> still getting data back from my express app.
[04:53:28] <n4l> [ { _id: 538ea1d644f4078d5f000001, name: 'Arthur Dent' }, { _id: 538ea1d644f4078d5f000002, name: 'Ford Prefect' } ]
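For reference, a sketch of dropping a database from the mongo shell (the database name is a placeholder):

    use mydb
    db.dropDatabase()   // removes every collection in the current database
    show dbs            // verify the database is gone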
[05:53:57] <nezt> Hey all
[05:54:55] <nezt> is anyone here?
[06:08:00] <josias> no ... my question wasn't answered either.
[06:08:26] <josias> :(
[06:12:11] <josias> maybe you know an answer? http://pastie.org/9254622#22-24,28
[06:41:00] <Egoist> Hi
[06:41:30] <Egoist> is there any way to check from mongos instance that configsvr are working?
[06:44:08] <Egoist> is there any way to check from mongos instance that configsvr are working?
[06:49:20] <Egoist> is there someone who knows about mongo shard clusters?
[07:08:17] <Egoist> i have a problem with shard cluster
[07:09:06] <Egoist> can anyone help me?
[07:16:30] <Egoist> Hello
[07:16:40] <Egoist> does anyone know about shard clusters?
[07:18:24] <kali> Egoist: if the config servers are fine, queries on magic collections from the "config" database will work
[07:18:37] <kali> Egoist: but the logs are usually the go-to place when nothing is right
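A sketch of the kind of check kali describes, run from a shell connected to the mongos (the config database and its collections are the standard ones):

    db.getSiblingDB("config").shards.find()    // lists the shards if metadata reads work
    db.getSiblingDB("config").chunks.count()
    sh.status()                                // summarises the sharding metadata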
[07:20:46] <rspijker> Egoist: general tip, just state your problem instead of saying you have a problem and waiting for someone to respond to that. If you state your problem and someone knows about it, they’ll probably respond
[07:30:17] <Egoist> this is what i have in my mongos log: could not verify that config servers are in sync
[07:31:59] <kali> gist / paste the relevant section of the log somewhere
[08:25:15] <Egoist> is there any way to check configsvr from mongos instance, before i connect them to each other?
[08:28:23] <rspijker> how do you mean exactly?
[08:28:28] <rspijker> what is it you want to check?
[08:29:39] <joannac> josias: you have to match exactly
[08:30:12] <joannac> point: {"key": "Master..."} does not match as there's another field (name: test1)
[08:30:39] <Egoist> i want to check that the configsvr is waiting for connections, but i want to do this from another host
[08:31:09] <joannac> connect to it directly?
[08:31:25] <joannac> mongo host:port
[08:32:40] <josias> @joannac but how? the software doesn't know the name... why isn't it possible to make a deep elemMatch?
[08:34:00] <rspijker> josias: point.key : “Master”
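To illustrate the difference joannac and rspijker are describing (collection and values are illustrative):

    // exact-match form: only matches if the embedded document is exactly { key: "Master" }
    db.coll.find({ point: { key: "Master" } })
    // dot-notation form: matches whenever point.key is "Master", regardless of other fields
    db.coll.find({ "point.key": "Master" })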
[08:35:39] <josias> @rspijker: as i wrote: that isn't possible in morphia 'hasThisElement()'
[08:36:35] <rspijker> josias: I think I joined after your initial question, sorry :)
[08:37:14] <josias> rspijker: http://pastie.org/9254622#
[08:41:01] <rspijker> so… it’s a morphia issue…
[08:42:53] <josias> yes, but a mongodb-issue, too
[08:43:34] <josias> i thought morphia is somehow connected with the mongodb-dev-team?
[08:43:48] <Derick> josias: I think we help them out
[08:46:37] <abdullah> hello
[08:48:08] <abbood1515> can someone help me with this question please?
[08:48:27] <abbood1515> http://stackoverflow.com/questions/24032571/how-to-search-in-mongodb-by-querying-multiple-fields-in-the-same-document
[09:26:05] <AlexejK> when trying to see if Mongo needs more RAM under e.g Aggregations, is looking at serverStatus.recordStats.accessNotInMemory a proper thing to do?
[09:30:28] <rspijker> AlexejK: not really… That value can just mean you are aggregating ‘cold’ data
[09:31:53] <rspijker> if you are under continuous load and you see a lot of paging in/out, then you would want to increase RAM
[09:48:14] <AlexejK> rspijker: thanks, which info should I be looking at to see the paging?
[09:51:53] <rspijker> AlexejK: you can use mongotop, only works properly in Linux though...
[09:52:02] <rspijker> in Windows you will have a slightly harder time
[09:52:10] <AlexejK> Linux here, so should be good
[09:52:12] <rspijker> MMS can also detect paging
[09:52:18] <rspijker> if you’re using that
[09:53:05] <AlexejK> yes, thankfully we are
[09:53:59] <rspijker> it has page faults listed for the hosts as well
[09:56:09] <AlexejK> so is my understanding correct that high page fault is bad? (sorry, not so good with that)
[10:09:04] <rspijker> AlexejK: yes
[10:09:39] <rspijker> a page fault basically means that mongo accesses a piece of virtual memory that is not in RAM. Which in turn means that it has to go and retrieve the info from disk and load it into RAM
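One way to see the same counter outside MMS, as a sketch (extra_info.page_faults is reported on Linux; mongostat also prints a per-second faults column there):

    db.serverStatus().extra_info.page_faults   // cumulative hard page faults since mongod started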
[10:09:39] <AlexejK> then i think i have a problem as during the heavy operation time (the one i've seen) page faults on primary suddenly spiked to 100-150
[10:10:02] <rspijker> well… it can’t always be helped
[10:10:13] <AlexejK> understandable, all depends on data size :-)
[10:10:15] <rspijker> if you are accessing information that’s old or not used often, it’s always going to have to be paged in
[10:10:54] <rspijker> 100-150 per sec?
[10:11:26] <AlexejK> don't think i can see that in the graph on MMS
[10:11:38] <AlexejK> it's the spikes there with 1 min granularity
[10:12:35] <rspijker> yeah, but you can tell it to show you average/sec or total
[10:12:41] <rspijker> avg/sec is the default
[10:12:43] <AlexejK> oh let me see
[10:12:55] <rspijker> which means that during that minute, it does an average of 100-150 page faults per second
[10:12:55] <AlexejK> yes its avg/sec
[10:13:22] <rspijker> which is high but not extreme
[10:13:38] <rspijker> this was due to an aggregation you said?
[10:13:59] <AlexejK> yes, we have some aggregation jobs running.. they use both Map Reduce and Aggregation framework operations
[10:14:08] <AlexejK> maybe i should give some context on a problem im trying to resolve:
[10:15:10] <AlexejK> we have a java app that starts behaving really sluggish and DB queries suddenly return nothing when looking up a value that has just been written to DB (at least sent to socket) and this gives errors back through our API. this only happens when these aggregations are running
[10:15:53] <AlexejK> this specific entity is read from primary to ensure we respond fastest possible back and don't wait for replication to slaves
[10:17:27] <AlexejK> basically we create a session key and this key lookup fails for us when mongo is running a big operation.. this also happens if we do a delete operation on many entries.. which partially is understandable but currently my guess is that IO performance is a problem and possibly memory (not ruling out app code problem as well)
[10:34:28] <rspijker> AlexejK: hmmmm, that’s a little strange, but possible
[10:35:00] <rspijker> you could use getLastError to ensure you wait for the write to complete
[10:35:32] <AlexejK> we used write concern acknowledge from primary
[10:35:45] <AlexejK> isn't it doing the same thing inside of Java driver?
[10:35:51] <rspijker> ah, in 2.6 this was changed I see
[10:35:56] <rspijker> no clue, never used the java driver
[10:36:07] <rspijker> does seem reasonable
[10:36:07] <AlexejK> ok ill check the source of that one
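For reference, the two acknowledgement styles being discussed, in shell syntax (the collection and document are made up; the Java driver's acknowledged write concern corresponds to w:1):

    // 2.6-style: write concern passed with the write itself
    db.sessions.insert({ key: "abc123" }, { writeConcern: { w: 1 } })
    // older style: issue the write, then wait on getLastError
    db.sessions.insert({ key: "abc123" })
    db.runCommand({ getLastError: 1, w: 1 })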
[10:36:22] <rspijker> lunch time :)
[10:36:41] <AlexejK> have a good one and thanks for pointing me in right directions :)
[11:24:09] <narutimateum> anyone here use laravel eloquent jessenger with mongodb?
[12:25:35] <minimoo1> hey, anyone know a mongoose help channel?
[12:28:01] <Nodex> some people in here use Mongoose
[12:28:09] <Nodex> try asking your question, you might get lucky
[12:34:57] <minimoo1> i have a schema containing array of objectids, i was wondering if there is a way to remove one of the values in the array with a single query
[12:36:40] <rspijker> $pull in mongodb
[12:36:50] <rspijker> sure there is a mongoose ‘wrapper’ for that
[12:38:59] <minimoo1> thanks
[12:42:30] <AlexejK> humm suddenly mms-monitoring agent started complaining about "Either the server detected the possibility of another monitoring agent running, or no Hosts are configured on the MMS group." And all i did was upgrade from agent 2.1 to 2.2
[12:43:56] <AlexejK> don't have multiple agents running in that installation (however do have several different agents for my group for different server groups, but they are isolated from each other)
[12:49:01] <minimoo1> i found this http://stackoverflow.com/questions/19786075/mongoose-deleting-pull-a-document-within-an-array-does-not-work-with-objectid
[12:49:03] <minimoo1> but it's not working :|
[12:53:26] <Nodex> define not working... does it not have a job?
[12:55:31] <minimoo1> oh wait :)
[13:17:38] <minimoo1> still no luck
[13:41:50] <minimoo1> can someone tell me please why this doesn't work? http://jsfiddle.net/gD8ft/ (ignore syntax errors if missed anything)
[13:46:11] <kali> minimoo1: your ids are ObjectIds, and your query looks for strings
[13:46:46] <minimoo1> i tried using casting too
[13:46:55] <minimoo1> like this
[13:48:35] <minimoo1> http://jsfiddle.net/gD8ft/1/
[13:49:31] <minimoo1> there is no error, just nothing happens and the returned obj is just 1
[13:49:51] <kali> minimoo1: time to go to the mongodb shell and try to make it work without mongoose
[13:50:07] <kali> minimoo1: wait. I know.
[13:50:37] <kali> minimoo1: $pull: { appointments : { '_id' : mongoose.Types.ObjectId('538f04e45a4e06a01b000013') } } }. you want $pull: { appointments : mongoose.Types.ObjectId('538f04e45a4e06a01b000013') } }
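In shell terms, the distinction kali is making (the user _id here is hypothetical):

    // if appointments is an array of plain ObjectIds, pull the value itself
    db.users.update(
        { _id: ObjectId("538ea1d644f4078d5f000001") },
        { $pull: { appointments: ObjectId("538f04e45a4e06a01b000013") } }
    )
    // the { _id: ... } form only matches when appointments holds embedded documents with an _id field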
[13:53:08] <minimoo1> ooh
[13:53:12] <minimoo1> damn
[13:53:15] <minimoo1> let me try
[13:53:48] <minimoo1> oh wait
[13:57:46] <minimoo1> still not working
[13:58:21] <kali> well, this time, try it in the shell :)
[13:58:30] <minimoo1> when i cast the string to objectid
[13:58:45] <kali> it's not a cast, but whatever
[13:59:02] <minimoo1> yea but i'm not sure what the right word is
[13:59:07] <minimoo1> create an object
[13:59:27] <minimoo1> i'll try in shell
[14:08:11] <jaggerwang> I installed mms and mms-backup, there is a replica set being monitored and backed up, all is going well. But when I terminate the backup, it keeps its status in terminating, and when I try to remove hosts from monitoring, it says “This host cannot be deleted because it is enabled for backup”.
[14:08:45] <minimoo1> kali
[14:09:17] <minimoo1> it's something about the condition
[14:11:43] <jaggerwang> How can I remove the backup completely?
[14:12:05] <kali> minimoo1: show us what you're doing in the shell
[14:13:19] <minimoo1> one moment
[14:15:02] <minimoo1> i found out why: i used $or instead of $in
[14:37:59] <AlexejK> out of curiosity, is there a reason why MMS does not allow logical grouping of server+agent combos? I mean we have several separate/isolated datacenters and want to manage them in one place.. But now it seems (i think this changed recently) we have to have 1 group per isolated datacenter
[14:52:35] <Tulsene> hi everyones
[14:54:43] <rspijker> hey
[14:56:51] <AlexejK> hello
[15:15:43] <Tulsene> hi
[15:16:04] <Nodex> best to just ask the question
[15:16:49] <Tulsene> ok
[15:17:45] <Tulsene> I'm getting started with mongoDB and trying to set up authentication on a standalone cluster, and after activating auth in /etc/mongodb.conf I'm trying this command db.createUser( { user: "siteUserAdmin", pwd: "password", roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] } )
[15:18:20] <Tulsene> but I get this error: property 'createUser' of object admin is not a function
[15:18:59] <Tulsene> and if I try addUser instead of createUser I get this: Roles must be non-empty string
[15:21:58] <rspijker> Tulsene: which version of mongod and mongo shell?
[15:22:06] <Tulsene> 2.6
[15:23:04] <Tulsene> @rspijker sry fail, that's 2.4 on this desktop
[15:23:15] <Tulsene> try to update and come back :x
[15:23:21] <rspijker> :)
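For reference, the user-creation syntax differs between the two shell versions involved (a sketch reusing Tulsene's user and roles):

    // 2.6 shell
    db.createUser({ user: "siteUserAdmin", pwd: "password",
                    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] })
    // 2.4 shell: no createUser, and roles are plain strings
    db.addUser({ user: "siteUserAdmin", pwd: "password",
                 roles: [ "userAdminAnyDatabase" ] })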
[15:52:38] <Egoist> Hello
[15:52:43] <Egoist> i have a question
[15:53:01] <Egoist> could the config servers for a sharded cluster be a replica set?
[16:03:50] <kesroesw1yth> How can I query for documents where 1 or more objects don't have a key by a particular name, or where that key exists but is empty?
[16:04:07] <kesroesw1yth> If this isn't possible with one query, that's fine.
[16:04:36] <AlexejK> you can check if key exists with $exists i think
[16:04:42] <AlexejK> http://docs.mongodb.org/manual/reference/operator/query/exists/
[16:04:49] <kesroesw1yth> I am testing with that now.
[16:05:21] <AlexejK> there are also a few "see-also" links that i think you can look into.. I think you can do it in one query with a logical or
[16:05:36] <kesroesw1yth> The issue is that the documents have any number of properties which contain objects, and I need to check all of them recursively and return true if any are missing a given key.
[16:05:54] <Nodex> "" != not exists
[16:06:05] <kesroesw1yth> Sorry, I use key and property interchangeably.
[16:06:10] <kesroesw1yth> Nodex, right.
[16:06:28] <Nodex> you might want to run two checks... one for $exists:false and one for key:""
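A sketch of the two checks Nodex mentions combined into one query with $or, for a single known field (collection and field names are placeholders):

    db.docs.find({ $or: [
        { someKey: { $exists: false } },   // field missing entirely
        { someKey: "" }                    // field present but empty
    ] })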
[16:06:48] <kesroesw1yth> Nodex, I can do that, but the recursive part is what I'm having trouble with.
[16:07:07] <Nodex> I don't follow what you;re trying to do
[16:07:38] <AlexejK> I don't think you can check EVERY key in the document to look for a sub-key's existence, can you?
[16:07:40] <kesroesw1yth> Basically return any document where any nested object it contains does not have a property by a name I provide in the query.
[16:07:50] <kesroesw1yth> AlexejK: That's what I'm trying to find out. :)
[16:07:55] <Nodex> you can use dot notation to reach into documents
[16:08:16] <Nodex> foo.bar.baz => {foo:{bar:{baz:"Hi"}}}
[16:08:24] <kesroesw1yth> Nodex, I do know that much, but the issue there is that I can't know the subdocument names beforehand.
[16:08:27] <AlexejK> but for that you need to know foo and bar names
[16:08:31] <kesroesw1yth> Right.
[16:08:47] <Nodex> then you need to loop it, a document has no concept of "this"
[16:08:59] <kesroesw1yth> Loop in my code?
[16:09:04] <Nodex> yup
[16:09:22] <kesroesw1yth> I tried that. Perl is a real pain about nested hashes when they come from Mongo for some reason.
[16:09:40] <Nodex> if you're looking for any key that doesn't exist or is null then you will have to loop EVERY key in a document and check it
[16:10:37] <kesroesw1yth> Yep.
[16:11:11] <saml> that's gonna be expensive. redesign?
[16:11:31] <Nodex> one off, doesn't have to be too expensive
[16:11:34] <kesroesw1yth> saml Probably I just won't do it this way.
[16:11:36] <Nodex> majority is in perl
[16:12:14] <kesroesw1yth> Not sure why but I can't seem to iterate through nested hashes like usual if the original hash came from Mongo.
[16:12:51] <Nodex> perhaps it has a weird type?
[16:13:00] <Nodex> try casting it? - dunno if you can do that in perl
[16:13:04] <kesroesw1yth> Should just be objects.
[16:13:52] <kesroesw1yth> I can create a hash when retrieving the object from Mongo, but I can't seem to refer to its keys with foreach(keys%$hash) like normal.
[16:14:00] <kesroesw1yth> Anyways, lunch time. Thanks for the help.
[16:14:38] <Nodex> enjoy
[16:24:12] <AlexejK> In MMS, when i look at OPCounters, I think i understand the "getmore" graph, but what is counted in "command"
[16:24:34] <AlexejK> I see that one spiking up quite often under heavier load
[16:26:33] <AlexejK> I assume Map-reduce is counted as command?
[16:29:35] <q851> Question for you ladies and gents: I have a host running mongos that connects to my cluster. Can I update the binary while mongos is still running THEN restart the process? I need to minimize downtime while upgrading the 2.0.X mongos components.
[16:31:18] <q851> AlexejK: I believe those are internal commands for replication and such.
[16:49:55] <Egoist> could the configsvr be a replica set?
[17:40:59] <subway> Howdy... I'm in the process of taking a snapshot of a couple of existing databases in one replica set, and seeding a fresh replica set with them.
[17:41:45] <subway> With mongo stopped, can I safely just grab all but local.* and journal from my source data dir, and drop them into an empty data dir on the first node in my new replica set?
[18:08:38] <ddod> Mongo noob here using node native: I currently have a function that makes sure people can't use the same value of something that someone else has (e.g. to make sure every user has a unique email address). Is this possible to do in the findandmodify function by itself? I don't really understand the ensureIndex function and when I would call that.
[18:20:51] <tscanausa> ddod: ensure index just creates indexes.
[18:21:47] <tscanausa> so you would create a unique index on the email address field, and then an error should be raised when you try to insert a duplicate.
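A sketch of what tscanausa describes (collection and field names assumed from the question):

    db.users.ensureIndex({ email: 1 }, { unique: true })   // build the unique index once
    db.users.insert({ email: "a@example.com" })            // ok
    db.users.insert({ email: "a@example.com" })            // fails with an E11000 duplicate key error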
[18:22:42] <nickmbailey> git status
[18:23:01] <nickmbailey> doh >_<
[18:23:05] <tscanausa> lots of files
[18:32:00] <gancl> Hi! In Mongoose, how do I get findOneAndUpdate to insert a new score when one doesn't exist and increase the score when it does?
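A sketch of the upsert-plus-$inc pattern the question is after, in shell findAndModify terms (Mongoose's findOneAndUpdate accepts the same update document; the collection and query are made up):

    db.scores.findAndModify({
        query: { user: "gancl" },
        update: { $inc: { score: 1 } },   // inserts score: 1 if missing, increments otherwise
        upsert: true,
        new: true
    })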
[19:16:46] <Dieterbe> hey, in my config i have 'noprealloc = true' and also 'journal = false', but yet when i start mongo it always tries to allocate 3GB of journal files ><
[19:25:29] <ranman> Dieterbe: can you c/p your whole config somewhere?
[19:32:36] <Dieterbe> ranman: here you go https://gist.github.com/Dieterbe/10b424f332bc0f259c92
[19:38:25] <ranman> Dieterbe: 3GB of journal files? what version of mongo?
[19:40:36] <ranman> Dieterbe: the only difference in my config is that I have a space after the config
[19:40:46] <ranman> s/the config/the journal
[19:40:49] <ranman> and before the =
[19:40:49] <Dieterbe> ranman: v2.6.1
[19:41:08] <Dieterbe> i guess i can try that
[19:42:12] <Dieterbe> same problem
[19:43:10] <ranman> Dieterbe: you also have two journal entries
[19:45:07] <Dieterbe> yeah when i remove one of them, same result
[19:47:48] <unkleara> Hi, I have an existing rails app using mongodb, and I am building a feature with node that needs to query the same database. Are there any resources to guide me on how to set it up properly? Do I create a new db user for the node app or use the same as rails? I tried using mongoose but I think it requires a schema to be defined and I don't want it to create new collections. I just need to read from the existing database…
[19:56:08] <Dieterbe> ranman: ugh apparently for 64bit machine it's "nojournal = true"
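The config-file distinction Dieterbe ran into, sketched in the old .conf syntax used in the gist:

    # on 64-bit builds journaling is on by default; as noted above, disable it with nojournal rather than journal = false
    nojournal = true
    noprealloc = true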
[20:05:03] <mistawright> hi guys i need some help. my server ran out of space due to me filling up my mongodb databases. how can i clear these databases? I also cannot seem to log in to mongo on the server
[20:06:12] <_newb> what is the best way to query if a document exists (return the id) before doing an insert?
[20:13:01] <_newb> what is the best way to query if a document exists (return the id) before doing an insert?
[20:51:00] <sshaginyan> Is this room dead?
[20:52:51] <tscanausa> 400 idle users is not exactly dead
[20:53:14] <Dieterbe> i'm not feeling that great but i'm not dead
[20:53:21] <sshaginyan> I want to do something like this. db.test.update({ domain: 'atest.com'}, { $push: { technologies: { name: 'jquery', $push: { verifications: { proof: 'thisthat', source: 'http://www.google.com/' } } } } }, { upsert: true })
[20:53:42] <sshaginyan> The second push doesn't work
[20:54:13] <sshaginyan> Would I have to make two queries for this to work?
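Update operators cannot be nested inside one another, so roughly the options are to push the whole embedded structure in one go, or to target the inner array with the positional operator in a separate update (a sketch reusing sshaginyan's field names):

    // push a technology that already carries its verifications array
    db.test.update(
        { domain: "atest.com" },
        { $push: { technologies: { name: "jquery",
                                   verifications: [ { proof: "thisthat", source: "http://www.google.com/" } ] } } },
        { upsert: true }
    )
    // later, append another verification to the jquery entry
    db.test.update(
        { domain: "atest.com", "technologies.name": "jquery" },
        { $push: { "technologies.$.verifications": { proof: "thisthat", source: "http://www.google.com/" } } }
    )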
[20:57:56] <sshaginyan> See.... dead :)
[21:02:05] <_newb> lol @ "Is this room dead?"
[21:11:50] <asido> I need to query documents by taking an average of 2 attributes in the document and comparing whether it's within the bounds I want. does this sound like a doable thing with one query?
[21:12:29] <tscanausa> asido: you need an aggregation query
[21:13:00] <asido> tscanausa, I understand that, just don't have an idea how to start yet
[21:15:43] <tscanausa> http://docs.mongodb.org/v2.4/core/aggregation-pipeline/
[21:18:35] <insanidade> hi all. what's the best cursor function for finding out the current size of an 'ever growing' collection?
[21:18:51] <insanidade> stats() ?
[21:19:01] <tscanausa> your _id in $group would be the document id and then you would have your "total" field as the $avg of the 2 fields
[21:20:50] <tscanausa> insanidade: http://docs.mongodb.org/manual/reference/command/collStats/
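For reference, either form returns the size and document count (the collection name is a placeholder):

    db.mycoll.stats()                         // shell helper
    db.runCommand({ collStats: "mycoll" })    // underlying command
    db.mycoll.count()                         // just the document count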
[21:28:59] <asido> how can I perform $sum on 2 attributes inside $match ?
[21:29:17] <asido> I want to perform comparison on the result of that sum
[21:30:05] <bowtie> xdg: hi, what else can I do for you :)
[21:40:55] <John_____> hi, any with knowledge of mongoid?
[21:45:19] <asido> anyone has a clue why my $add doesn't work: http://paste.kde.org/pf7orxt6l ?
[21:58:02] <asido> is it possible to do something like this: db.coll.aggregate({ $match : { AttrInt1+AttrInt2: { $gt : 0, $lt : 100 } } }) ?
[21:58:26] <asido> what I am trying to do is to match a sum of attributes
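$match cannot evaluate an expression like AttrInt1 + AttrInt2 directly; the usual pattern is to compute the value in a $project stage and match on the result (a sketch with placeholder collection and field names):

    db.coll.aggregate([
        { $project: { total: { $add: [ "$AttrInt1", "$AttrInt2" ] } } },
        { $match: { total: { $gt: 0, $lt: 100 } } }
    ])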
[23:33:51] <codenado> hi all. I have a question about many-to-many relationships. I have users and groups, users can be in many groups, groups contain many users. What’s the preferred way to store that? One option is a person list within the group document, and a group list on the user document, but that means data is duplicated
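One common layout for this case is to keep an array of group _ids on each user document and derive both directions from it, rather than duplicating membership on both sides (a sketch; collection names and ids are illustrative):

    db.users.insert({ name: "alice", groupIds: [ ObjectId("53900000000000000000aaa1") ] })
    // all members of a group
    db.users.find({ groupIds: ObjectId("53900000000000000000aaa1") })
    // all groups for a user
    db.groups.find({ _id: { $in: db.users.findOne({ name: "alice" }).groupIds } })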