PMXBOT Log file Viewer

#mongodb logs for Monday the 5th of May, 2014

[00:50:45] <rednovae> i have taggable content. post = {..., tags: ['one', 'two', 'three']}. I want to query this content to select documents that have given tags. for example, I want to query all documents that have tags two and three, and also all documents that have tags two or three
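A minimal mongo-shell sketch of the two queries described above, assuming a hypothetical posts collection with the tags field from the example:

    // documents that have BOTH "two" and "three" among their tags
    db.posts.find({ tags: { $all: ["two", "three"] } })
    // documents that have EITHER "two" or "three"
    db.posts.find({ tags: { $in: ["two", "three"] } })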
[04:26:10] <loklaan> hi, is there a way to use Grid and GridStore without having to db.close() after every class chain
[04:29:33] <k_sze[work]> If my documents have an array field, and I want to find the documents where 'foo' is among the values of the array field, how do I do it?
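A plain equality match is enough here, since MongoDB matches a scalar query value against the elements of an array field. Sketch, with hypothetical collection and field names:

    // returns documents whose array field contains 'foo'
    db.docs.find({ arrayField: "foo" })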
[05:24:02] <chrisbelfield> Hey all :)
[05:27:37] <fghd> hello!
[05:27:44] <fghd> sorry for my english
[05:28:05] <fghd> i have a little problem with auth on a mongodb instance via localhost
[05:29:37] <fghd> i've created a user with admin privileges on my localhost machine. if i try to connect to the instance i receive a message that i'm not authorized
[05:30:12] <fghd> mongodb.conf is edited to auth=true of course
[05:30:45] <fghd> i want to connect without password only at localhost
[05:31:22] <fghd> i added enableLocalhostAuthBypass=0 to my init file
[05:31:33] <fghd> but i'm still getting an error
[05:31:54] <fghd> what am i doing wrong? please help
[05:32:02] <fghd> mongo version is 2.4.8
[06:10:57] <fghd> :(
[06:21:47] <joannac> fghd: once you have a user defined and auth on, you have to auth
[06:21:51] <joannac> there's no bypass
[06:22:36] <joannac> further documented here: http://docs.mongodb.org/manual/core/authentication/#localhost-exception
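A minimal sketch of authenticating once auth=true is set and a user exists, with placeholder credentials:

    // from the mongo shell
    use admin
    db.auth("admin", "secret")
    // or at connection time:
    // mongo --host localhost -u admin -p secret --authenticationDatabase admin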
[06:38:52] <jbergstroem> Yo. Scons bails while linking with argument list too long. Suggestions? :) https://gist.githubusercontent.com/jbergstroem/ef7acf18fed90da42243/raw/gistfile1.txt
[06:39:11] <jbergstroem> (mongodb 2.6.0)
[06:42:21] <Agnar> hello
[06:43:44] <ranman> jbergstroem: can you hit us up with the exact scons command you ran
[06:44:36] <Agnar> while trying to get the mongodb 2.6 package for solaris running on solaris10/amd64, I noticed that a) the needed mongodb-extras.tgz is no longer available and b) the software is linked against the illumos libc, which makes it impossible to run the software on stock solaris.
[06:45:35] <ranman> Agnar: I think they're working on better solaris support, can you search for a ticket jira.mongodb.org or open one with your findings?
[06:45:50] <Agnar> ranman: will do, thank you
[06:46:01] <ranman> Agnar: thanks!
[06:46:21] <ranman> Agnar: feel free to link it here when done, I'm interested in the result
[06:46:21] <Agnar> ranman: I need to create an account, right? ;)
[06:46:32] <ranman> yup
[06:47:31] <jbergstroem> ranman: hang on, pasting full build log at some pastebin
[06:48:38] <jbergstroem> ranman: https://paste.lugons.org/raw/5612/
[06:48:56] <Agnar> ranman: https://jira.mongodb.org/browse/SERVER-13446 - it's already there :)
[06:49:25] <ranman> Agnar: upvote that shiz
[06:49:36] <ranman> woah that's assigned to my old roommate.
[06:49:57] <Agnar> hehe
[06:50:29] <Agnar> building mongodb on a vanilla sol10 install is a pita, so I prefer the binary package ;)
[06:54:36] <jbergstroem> ranman: is it default behaviour in Scons to append build info to the directory build structure? Seems a bit.. excessive ..to me
[06:56:51] <jbergstroem> default or not, it's a pretty big part of the problem.
[06:57:38] <ranman> jbergstroem: I know not... but if you stay in this channel it will be solved
[06:57:53] <joannac> ranman: he moved out?!
[06:58:02] <ranman> joannac: I did
[06:58:07] <ranman> joannac: I moved to brooklyn
[06:58:07] <joannac> ranman: :o
[06:58:18] <ranman> or I do starting june 1st
[06:59:04] <jbergstroem> so, are bugs filed at github or jira?
[06:59:15] <ranman> jira is faster
[06:59:32] <jbergstroem> what does faster mean? are both officially supported?
[06:59:42] <ranman> no, jira is officially supported
[06:59:50] <ranman> github is more of a "it will be read eventually"
[07:00:02] <jbergstroem> roger that
[07:03:05] <jbergstroem> bug here: https://jira.mongodb.org/browse/SERVER-13829
[07:04:28] <ranman> jbergstroem thanks!
[07:25:24] <Richhh> I have a field f which holds an array of objects eg f:{a:'x',b:'y'}, how can I query just for ({ f:{a:'x'} }) (any value of b)
[07:25:54] <Richhh> I have a field f which holds an array of objects eg f:[{a:'x',b:'y'},{a:'z',b:'z'}], how can I query just for ({ f:{a:'x'} }) (any value of b) *
[07:27:25] <Richhh> I tried ({f:{a:'x',b:{ $exists : true }} }) but no luck
[07:31:13] <ranman> Richhh: have you tried something like {'f.a': x} ?
[07:36:50] <Richhh> oh, yeah that works, thanks ranman
[07:37:02] <ranman> NP, GL
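A sketch of the pattern ranman suggests, reusing Richhh's field names on a hypothetical collection:

    // matches if ANY element of the f array has a == 'x', whatever b is
    db.coll.find({ "f.a": "x" })
    // if several conditions must hold on the SAME array element, use $elemMatch
    db.coll.find({ f: { $elemMatch: { a: "x", b: { $exists: true } } } })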
[07:59:40] <Agnar> by the way - are there any efforts to make the mongodb server runnable on big-endian?
[10:29:04] <remonvv> Hi all. Our mongos process is complaining about config servers being out of sync. How do we determine (reliably) which config server has the most recent config data?
[10:29:05] <z1mme> So i have a user collection with the user accounts able to log in to my system, and each user belongs to one or more organizations. the users will have some organization specific information, where's a good place to put that?
[10:30:20] <remonvv> How many organizations typically per user?
[10:31:19] <z1mme> The edge-case would be a user that belongs to many organizations, usually the user just belongs to one org
[10:32:41] <z1mme> 90%: 1 org, 5%: less than 5 orgs, 5%: more than 5 orgs
[10:32:47] <z1mme> of the users
[10:34:42] <remonvv> More as in dozens or thousands?
[10:35:18] <z1mme> dozens
[10:35:19] <remonvv> Basically, if there are few organizations per user, if they change rarely if at all, and if you regularly need the organizations a user is part of, then embedding is the route.
[10:35:36] <remonvv> In all other scenarios I'd probably go for a dedicated collection
[10:38:50] <z1mme> I'll do a dedicated collection, because the organization specific information will be manipulated more than the data in the users collection
[10:38:59] <z1mme> thnxz
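A rough sketch of the two shapes remonvv contrasts, with hypothetical collection and field names:

    // embedded: org-specific info lives inside the user document
    db.users.insert({ name: "alice", orgs: [ { org: "acme", role: "admin" } ] })
    // referenced: a dedicated membership collection, one document per user/org pair
    db.memberships.insert({ user: "alice", org: "acme", role: "admin" })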
[12:59:41] <jet> in the mongoc driver, I don't understand which command creates the tcp connection and how I can get an error code if the connection fails
[14:42:11] <rspijker> anyone know of any reason why my netout in mongostat should be fairly large (around 15M) for no apparent reason?
[14:50:18] <q851> rspijker: can you post the full mongostat output?
[14:50:38] <cheeser> preferably to a pastebin :)
[14:52:45] <rspijker> haha, sure
[14:54:39] <rspijker> q851: http://pastie.org/9142808
[14:56:58] <q851> rspijker: probably because of the getmore operations.
[14:57:39] <Frosh> Is there a simple way to duplicate the _id column since django doesn't like displaying _id?
[14:58:38] <rspijker> q851: maybe... but there's not that much going on... As in, there should be pretty much nothing going on. But if I use nethogs to inspect my output, mongos is sending between 2.5MB/s and 7MB/s (capital B)... Which seems excessive
[14:58:39] <q851> rspijker: do you have profiling turned on?
[14:58:45] <rspijker> nope
[15:01:08] <q851> you might want to run a mongotop to find out what db has the most read operations. Then turn on full profiling for that db.
[15:01:37] <q851> That way we can get more information about what reads are hitting the db.
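A minimal sketch of turning full profiling on for one database and reading the results back, assuming the shell is pointed at the busiest db from mongotop:

    use gateway                      // the db showing the most reads here
    db.setProfilingLevel(2)          // 2 = profile all operations
    // ...let some traffic through, then inspect and switch it off again
    db.system.profile.find().sort({ ts: -1 }).limit(10)
    db.setProfilingLevel(0)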
[15:04:21] <rspijker> q851: mongotop just shows like 50ms of read time
[15:04:31] <rspijker> I'll run it again, this is from memory
[15:04:54] <rspijker> also, the traffic all seems to be coming from the mongos
[15:04:54] <q851> run with 60 secs on data
[15:05:00] <rspijker> almost nothing from the mongod
[15:05:13] <q851> of*
[15:07:32] <rspijker> most of them are empty (all 0ms), some of them have something between 50 and 150ms on read concentrated on a single DB, nothing on write. Sometimes there will be 1 or 2ms on the users collection for auth purposes
[15:08:36] <rspijker> ah, that's every second
[15:08:41] <rspijker> let me do the 60 secs thing
[15:08:50] <rspijker> didn't catch the extra option in --help
[15:09:39] <rspijker> wait... how on earth do I do that if I don't want to put my password on the commandline? :/
[15:10:08] <rspijker> nvm, figured it out
[15:12:30] <rspijker> q851: http://pastie.org/private/97xisktybr7trbijfqq4g
[15:13:04] <rspijker> doesn't look like anything that should keep the thing sending so much :/
[15:13:40] <q851> that's not why we did it.
[15:14:08] <q851> we wanted to know which db to turn full logging on for.
[15:15:42] <q851> looks like gateway is getting most of the action.
[15:15:48] <rspijker> it should
[15:16:09] <rspijker> my other mongos (same shard) is showing lines like this exclusively
[15:16:17] <rspijker> 0 0 0 0 0 1 286m 7m 0 62b 728b 3 RTR 17:13:19
[15:16:31] <rspijker> which looks a lot more reasonable to me
[15:16:56] <q851> ProcessRun is a sharded collection?
[15:17:31] <rspijker> probably not, actually
[15:17:59] <rspijker> don't think anything is sharded atm
[15:18:05] <q851> ok
[15:18:15] <rspijker> new setup still getting it ready for load testing and then production
[15:18:31] <q851> let me know when you have full logging turned on.
[15:18:37] <rspijker> (Y)
[15:18:41] <rspijker> thanks for your help so far
[15:18:43] <q851> np
[15:23:21] <q851> db.system.profile.find({"ns" : "gateway.ProcessRun"}, {"query" : 1}).sort({"ts" : -1}).limit(20)
[15:24:23] <q851> actually, getmore wont have a query.
[15:32:03] <rspijker> I've had it turned on for a while now
[15:32:30] <rspijker> the find doesn't return any results
[15:32:44] <rspijker> (even if I drop the query part and only query on the ns)
[15:32:45] <q851> rspijker: you can turn it off now.
[15:32:49] <rspijker> I already did
[15:32:52] <q851> ok
[15:34:29] <q851> can you pastie the output from this db.system.profile.aggregate({"$group" : {"_id" : "$ns", "count" : {"$sum" : 1}}})
[15:36:13] <rspijker> formatted a bit: http://pastie.org/private/sxuq5y21aqu59dvzn6vjg
[15:39:00] <morfin> hello
[15:39:54] <morfin> how do i perform some selection of information using ODBC (that should be configurable)?
[15:42:53] <q851> rspijker: db.system.profile.aggregate({"$match" : {"ns" : "gateway.ProcessTaskLog"}}, {"$group" : {"_id" : "$op", "count" : {"$sum" : 1}}})
[15:45:01] <rspijker> q851:
[15:45:02] <rspijker> update - 634
[15:45:03] <rspijker> query - 634
[15:46:10] <rspijker> morfin: I don't understand your question. Isn't the entire purpose of ODBC to not have to know how mongo does that?
[15:55:50] <q851> Gah, more missing info from the mongo docs. Eternally frustrating. Does anybody know what RTR means under the repl column of the mongostat output?
[15:57:17] <kali> q851: RTR - mongos process ("router")
[15:57:21] <rspijker> router
[15:57:27] <rspijker> damn, too slow
[15:57:36] <kali> q851: i guess it shows for mongos
[15:57:44] <rspijker> it does, this is all on a mongos
[15:58:10] <q851> ty
[15:58:44] <kali> q851: for future reference, "mongostat --help"
[15:59:55] <q851> didn't check there. I had the incorrect assumption the mongo docs would be the most robust.
[16:00:30] <kali> not this time :)
[16:03:35] <wc-> hi all, i have a string field that is one of 3 possible values
[16:03:45] <wc-> im trying to do an aggregation query that will give me the count of those 3 values
[16:04:15] <wc-> if x == 'a' then 'a' = a +1, else if x == 'b' then 'b' = b + 1 etc
[16:04:55] <wc-> not exactly sure how to do that when already calculating other averages and things in the aggregation query
[16:05:54] <q851> wc-: db.<collection>.aggregate({"$group" : {"_id" : "$<some_field>", "count" : {"$sum" : 1}}})
[16:07:04] <wc-> what if i already have a $group with an _id == null
[16:07:56] <q851> rspijker: so if I'm understanding the situation, you ran mongostat on mongos? What shards did you turn profiling on?
[16:08:05] <morfin> i want to have something like "stored procedure"
[16:08:30] <kali> wc-: $group : { _id: ..., a: { $sum: { $cond : [ { $eq : [ "$x", "a" ] }, 1, 0 ] } }, b: ... } ?
[16:08:35] <morfin> so i can run it from ODBC and from somewhere else, because that's the only way to integrate with the other software
[16:08:49] <saml> db.articles.aggregate({$group:{_id:'$blog',count:{$sum:1}}})
[16:08:57] <wc-> ah nice kali, i was going down that path but had the $eq syntax all wrong
[16:08:58] <saml> hehe now i can get how many articles there are per blog
[16:09:21] <kali> wc-: i haven't tried it, it's from memory, but it should not be very far from this
[16:09:32] <kali> wc-: and i'm sure you can get it working
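A cleaned-up sketch of the $cond counting idiom kali describes, folded into a single $group with _id: null; collection and field names are hypothetical:

    db.coll.aggregate([
      { $group: {
          _id: null,
          a: { $sum: { $cond: [ { $eq: [ "$x", "a" ] }, 1, 0 ] } },
          b: { $sum: { $cond: [ { $eq: [ "$x", "b" ] }, 1, 0 ] } },
          c: { $sum: { $cond: [ { $eq: [ "$x", "c" ] }, 1, 0 ] } },
          avgSomething: { $avg: "$someField" }  // other accumulators can sit alongside
      } }
    ])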
[16:11:13] <kali> morfin: odbc ? really ? you found a connector ?
[16:11:19] <morfin> huh?
[16:11:51] <morfin> there is a project
[16:11:56] <morfin> not sure how it works
[16:12:46] <morfin> anyway
[16:14:07] <morfin> my problem is this: i need dynamic rules to select information from the database, and the same rules should be applied when running from different applications. Any ideas how i can do that?
[16:14:22] <morfin> *dynamic and configurable
[16:18:31] <rspijker> q851: the only active one
[16:20:47] <morfin> i guess that's impossible
[16:21:57] <q851> rspijker: what does the mongostat look like when you run it on the replica?
[16:22:54] <morfin> http://docs.mongodb.org/manual/tutorial/store-javascript-function-on-server/ why this is not recommended?
[16:23:09] <morfin> what's the problem with functions stored on the server?
[16:23:33] <rspijker> q851: you mean the actual mongod?
[16:23:36] <rspijker> or the other mongos?
[16:23:52] <q851> yes, the mongod.
[16:27:06] <rspijker> q851:
[16:27:07] <rspijker> *0 4 3 *0 6 7|0 0 9.18g 19.2g 791m 0 gateway:0.2% 0 0|0 0|0 2k 9m 58 shard1 PRI 18:25:46
[16:27:19] <rspijker> hmm, i'll just pastie it
[16:27:45] <rspijker> http://pastie.org/pastes/9143061/text?key=s6wssfqyowsdfuac2ndoa
[16:29:41] <q851> rspijker: I don't think the primary should be showing asterisks like that.
[16:34:19] <q851> rspijker: does this show anything rs.status().syncingTo
[16:34:35] <q851> run on mongod
[16:50:57] <rspijker> q851: output is the same on both set members
[16:51:03] <rspijker> output of mongostat that is
[16:51:49] <rspijker> looks like the asterisks are just there when something is 0
[16:51:57] <q851> rs.status().syncingTo
[16:52:20] <rspijker> although it says * means replicated op...
[16:52:52] <q851> So, do both members say they are primary?
[16:53:02] <q851> Do you have an arbiter?
[16:53:16] <rspijker> no
[16:53:30] <rspijker> no, both don't say they are primary
[16:53:30] <q851> You have two members?
[16:53:34] <rspijker> yes, I do have an arbiter
[16:53:37] <q851> ok
[16:53:43] <rspijker> the primary has no syncingTo
[16:53:49] <rspijker> the secondary has syncingTo set to the primary
[16:53:58] <q851> well, that's good.
[16:54:05] <rspijker> yeh
[16:58:46] <rspijker> q851: I have to run, thanks for your help so far. I'll have to pick up the investigation again later
[16:58:55] <rspijker> have a nice day :)
[17:31:19] <modcure> about updates: when updating at the collection level, would mongodb place a lock at the collection or the database level?
[17:35:09] <cheeser> the db in 2.6. 2.8 will have document level locking.
[17:36:29] <modcure> thank you
[19:25:15] <shesek> I want to update one of a few fields where they have some specific value, and set it to a new value. Basically. `update({ foo: 1 }, { $set: { foo: 2 } })`, `update({ bar: 1 }, { $set: { bar: 2 } })` and `update({ qux: 1 }, { $set: { qux: 2 } })`
[19:25:29] <shesek> Can I do that in one update statement somehow, or do I have to issue one for each field?
[19:28:37] <q851> shesek: see the $set parameter: http://docs.mongodb.org/manual/reference/method/db.collection.update/#update-specific-fields
[19:32:27] <kali> shesek: you need to create this different statement, but you can send them in one single round trip since 2.6 and the update command
[19:33:15] <kali> shesek: http://docs.mongodb.org/manual/reference/command/update/#dbcmd.update
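A sketch of sending the three statements in one round trip via the 2.6 update command, using shesek's field names and a hypothetical collection:

    db.runCommand({
      update: "coll",
      updates: [
        { q: { foo: 1 }, u: { $set: { foo: 2 } }, multi: true },
        { q: { bar: 1 }, u: { $set: { bar: 2 } }, multi: true },
        { q: { qux: 1 }, u: { $set: { qux: 2 } }, multi: true }
      ],
      ordered: false
    })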
[19:34:12] <pscheie> We're using mongodb 2.4.9/10, installed via yum from the mongo repo.
[19:35:26] <pscheie> But something changed recently such that when we run 'yum install mongo-10gen-server' it says that version is obsolete and forces the install of the mongodb-org packages, which are version 2.6, which we don't want.
[19:35:58] <pscheie> Why is this being forced on us and how do we get around it?
[19:36:40] <cheeser> the packages were renamed out of the 10gen namespace and into the mongodb namespace to coordinate with the corporate name change.
[19:37:00] <cheeser> afaik, there are no 2.6 packages under 10gen
[19:37:11] <pscheie> cheeser, yes, but it wasn't just a name change.
[19:37:50] <cheeser> what else changed?
[19:38:04] <shesek> kali, thanks!
[19:38:07] <pscheie> If 'yum install mongodb-org' installed 2.4.10, it would be okay,
[19:38:22] <pscheie> But it installs 2.6.x and we're not ready to go to that version yet.
[19:38:47] <kali> pscheie: http://docs.mongodb.org/v2.4/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/#manage-installed-versions
[19:38:54] <kali> pscheie: you need to "pin" the version
[19:39:54] <pscheie> kali, thanks for the tip.
[19:43:13] <pscheie> Unfortunately, we're using puppet & spacewalk and we don't really have a yum.conf file.
[19:46:19] <pscheie> It is irritating that even if I specify an explicit package, say, mongo-10gen-server-2.4.10-mongodb_1, mongo's repo has been configured not to give it to me.
[19:46:54] <cheeser> i doubt that it's configured not to give you anything.
[19:47:10] <kali> can't help much, i'm a .deb guy
[19:47:17] <kali> and i use chef :)
[19:47:23] <cheeser> yeah. i'm on ubuntu myself, too.
[19:47:37] <pscheie> "Package mongo-10gen-server is obsoleted by mongodb-org-server, trying to install mongodb-org-server-2.6.1-0.2.rc0.x86_64 instead" is what it says.
[19:48:26] <pscheie> heh, well, today I wish we were using chef. But that was a decision made long before I got here.
[19:48:49] <pscheie> Sadly, I'm dead in the water with this.
[19:49:42] <pscheie> puppet does have a 'install_options' parameter and it seems like I ought to be able to specify --exclude parameters.
[19:49:58] <pscheie> But it's not working so far.
[20:50:45] <rakkaus_> hi guys! how do i install mongo 2.4.10 10gen on CentOS?
[20:50:54] <rakkaus_> it says that it is obsolete
[20:50:58] <rakkaus_> and is going to install 2.6
[20:51:18] <rakkaus_> I need 2.4.10
[20:51:52] <rakkaus_> it's available during yum list
[20:53:11] <rakkaus_> please any help will be helpful
[20:57:26] <ehershey> rakkaus_: you can run: yum install mongo-10gen-2.4.10 mongo-10gen-server-2.4.10 --exclude mongodb-org,mongodb-org-server
[20:58:37] <rakkaus_> but i'm doing this using puppet as a "package"
[21:06:16] <rakkaus_> can I add exclude to repo file?
[21:06:19] <rakkaus_> how to do this?
[21:06:24] <rakkaus_> will it work?
[21:08:41] <rakkaus_> but it looks like a buggy repo
[21:09:01] <rakkaus_> it must install the version which I want
[21:10:56] <ehershey> rakkaus_: you can add the exclude option to your yum.conf
[21:11:36] <rakkaus_> yes, already did, but
[21:11:46] <rakkaus_> it's not working well
[21:11:49] <rakkaus_> anyway
[21:15:30] <ehershey> but working?
[21:15:32] <ehershey> just not well?
[21:18:54] <harttho> Anyone know the tradeoffs of bulk inserts with sharded collections?
[22:16:48] <kreantos> hi, has anyone experience with morphia?
[22:18:08] <cheeser> what's up?
[22:22:29] <kreantos> is there a possibility to use "BasicDBObject" queries together with morphia?
[22:22:57] <cheeser> nothing comes to mind. what did you have in mind?
[22:32:12] <scottyob> Hi all. I'm wondering about the best way to shard my dataset. Let's say I'm keeping user-download counters and the most common operation is an update (upsert). Would I be better off sharding on the username or the date? If I shard by the date, with every day that passes I imagine there's a lot of rebalancing going on
[22:34:57] <joannac> don't shard with a monotonically increasing shard key
[22:35:42] <scottyob> Well that settles that then. Sharding on the username :). Sharding should make these updates go faster, with a smaller set to look through out of that sharded set, yeah?
[22:37:58] <joannac> well, it'll be targeted, yes
[22:51:26] <scottyob> oh man. I don't really get what I'm doing. I didn't tell this to be unique. The username is not unique, the username AND date combinations together are unique { "v" : 1, "name" : "username_1", "key" : { "username" : 1 }, "unique" : true, "ns" : "herbert.users", "sparse" : 1 }
[22:57:34] <scottyob> no wait. I'm an idiot
[23:01:39] <scottyob> if I always want to search and update based on a field 'username', and a 'date' field.. would I be best doing ensureIndex( {username: "hashed", date: 1} )?
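One caveat: in the 2.x series a hashed field cannot be part of a compound index, so a common arrangement is a hashed shard key plus a separate compound index for the username+date lookups. Sketch, reusing the herbert.users namespace from above:

    // hashed shard key spreads usernames evenly across shards
    sh.shardCollection("herbert.users", { username: "hashed" })
    // ordinary compound index to serve the username+date queries and updates
    db.users.ensureIndex({ username: 1, date: 1 })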
[23:49:53] <mikeylikesit> Hi all, I have a question about collection design in mongo, I come from a sql background so things have been a bit different.
[23:51:48] <mikeylikesit> I am attempting to do something that I would have typically done with a linking table and joins in sql
[23:52:47] <mikeylikesit> ie, I have a table (let's call it group) and I have a table (let's call it messages) and I have yet another table (users)
[23:53:59] <mikeylikesit> groups have many members and many messages, in sql I would have just had linking tables to link user_ids to group_ids and message_ids to group_ids
[23:54:06] <mikeylikesit> what is the blessed way to do this in mongo?
[23:54:31] <mikeylikesit> I have heard people talking about embedding, but from a performance perspective, how long will it take for that to get out of hand?
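A rough sketch of the referencing pattern that usually replaces SQL linking tables, with hypothetical collection and field names:

    // groups hold an array of member ids; messages point back at their group
    db.users.insert({ _id: "alice" })
    db.groups.insert({ _id: "devs", members: [ "alice" ] })
    db.messages.insert({ group: "devs", from: "alice", text: "hi" })
    // a group's messages come back with a single query, no join needed
    db.messages.find({ group: "devs" })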