PMXBOT Log file Viewer


#mongodb logs for Thursday the 25th of September, 2014

[00:00:35] <hydrajump> joannac: sure I'll have to try it.
[00:01:38] <hydrajump> Different Q about MMS. Can the metrics that the MMS agent gathers be accessed via an API without using the hosted or on premise MMS server?
[00:01:49] <donotpet> Hi. If I have a working replica set, my application can write to the primary, and read from the secondaries. After a failure, how can my app know who is the primary?
[00:02:29] <donotpet> is there some config for a primary ip listener a'la keepalived I'm missing?
[00:05:05] <joannac> hydrajump: not yet, i think. coming soon.
[00:05:32] <joannac> donotpet: every driver has a replica set connection, which will auto reconnect when the primary changes
[00:06:10] <donotpet> joannac: that is what I hoped you would say! Thank you for the confirmation. It worked that way in my VM setup
[00:06:22] <donotpet> wanted to make sure I wasn't just getting lucky before moving to hardware.
[00:06:36] <donotpet> thank you for responding. :)
[00:08:30] <hydrajump> joannac: ok is MMS for monitoring only HIPAA compliant?
[00:09:29] <joannac> hydrajump: erm, no idea. what's needed for HIPAA compliance?
[00:10:06] <joannac> my guess would be no, you'd need to run MMS on your own servers
[00:10:15] <hydrajump> right
[00:10:51] <hydrajump> are there other ways to get similar monitoring metrics without MMS?
[00:11:13] <joannac> ..roll your own?
[00:13:09] <hydrajump> yeah it would be easier if the MMS agent itself was opensource and/or allowed sending the collected data to something other than MMS server ;)
[00:16:06] <hydrajump> looks like that is planned https://jira.mongodb.org/browse/MMS-1853?jql=component%20%3D%20%22Monitoring%20Agent%22%20AND%20project%20%3D%20MMS
[00:22:41] <hydrajump> joannac: this is helpful http://mms.mongodb.com/help/reference/monitoring/#database-commands-used-by-the-monitoring-agent
[00:23:30] <hydrajump> so I assume that what the MMS monitoring agent does is execute mongo commands on each mongodb instance and then ship the data to the MMS server
[00:24:44] <joannac> yes
[00:27:24] <hydrajump> cool
[00:28:13] <hydrajump> that should allow me to do something similar. thanks joannac
[03:57:04] <d0tn3t> i have 150GB on SQL 2008, i want to use mongodb with mssql to improve performance
[03:57:13] <d0tn3t> can i do it???
[03:57:17] <d0tn3t> how??
[03:59:27] <d0tn3t> help me
[03:59:48] <Xe> d0tn3t: you need to first install gentoo
[04:00:21] <Xe> then you can set up mongodb with mssql and get /dev/null sharding
[04:00:40] <d0tn3t> but i'm using SQLserver 2008
[04:00:48] <d0tn3t> my db size = 150GB
[04:01:07] <Xe> install gentoo
[04:01:16] <Xe> all your mmsql problems will go away
[04:01:39] <d0tn3t> mssql can run on gentoo ???
[04:01:54] <d0tn3t> how to sharding mssql to mongodb
[04:03:19] <d0tn3t> ??
[04:10:45] <joannac> d0tn3t: what
[04:11:11] <joannac> what you're asking doesn't make sense
[04:11:13] <d0tn3t> can i use mssql with mongodb???
[04:11:38] <Xe> no
[04:11:40] <d0tn3t> i want sharding mssql with mongodb
[04:12:06] <joannac> what does "sharding mssql with mongodb" mean?
[04:12:26] <d0tn3t> i have 150GB mssql
[04:12:40] <d0tn3t> i want to increase performance,
[04:13:34] <joannac> d0tn3t: okay. so you want to move to mongodb?
[04:13:46] <d0tn3t> yes
[04:13:52] <joannac> okay
[04:13:56] <joannac> so what's your question?
[04:14:01] <d0tn3t> what must i do
[04:14:39] <joannac> dump all your data, redesign your schema, modify your data for the new schema, start up some mongod instances, put all your data in mongodb
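A sketch of the "redesign your schema" step joannac lists above: relational child-table rows are typically folded into embedded arrays on the parent document. This is an illustration only; the table and field names ("customers", "orders", "qty") are invented, not from the conversation.

```python
# Hypothetical sketch: folding rows from a relational child table into
# embedded documents on each parent, as part of a SQL-to-MongoDB redesign.
customers = [{"_id": 1, "name": "Ada"}, {"_id": 2, "name": "Bob"}]
orders = [
    {"customer_id": 1, "item": "disk", "qty": 2},
    {"customer_id": 1, "item": "ram", "qty": 1},
    {"customer_id": 2, "item": "cpu", "qty": 1},
]

def to_documents(customers, orders):
    """Embed each customer's orders so one document answers one query."""
    docs = []
    for c in customers:
        doc = dict(c)
        doc["orders"] = [
            {"item": o["item"], "qty": o["qty"]}
            for o in orders
            if o["customer_id"] == c["_id"]
        ]
        docs.append(doc)
    return docs

docs = to_documents(customers, orders)
```

Each resulting document then maps naturally onto a single `insert` per customer.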
[04:53:09] <FatBoyXPC> Can I aggregate the data and return all results in the same call?
[04:54:12] <Boomtime> yes
[04:55:21] <FatBoyXPC> Well, and project, but I assume that's like adding a projection any other time.
[04:56:39] <FatBoyXPC> Boomtime: If I were to gist up what I want my data to look like returned to me, could you point me to an example that's similar to what I want?
[04:56:52] <Boomtime> nope, i'm not doing your work for you
[04:57:13] <Boomtime> the documentation for the pipeline stages is very good
[04:57:21] <FatBoyXPC> That's why i asked to be pointed to an example, not have you do it for me...
[04:57:37] <FatBoyXPC> The documentation might be good but this aggregation stuff isn't easy to understand, to me, at least.
[04:58:31] <Boomtime> http://docs.mongodb.org/manual/reference/operator/aggregation-pipeline/
[04:58:41] <Boomtime> every single one of the operators has an example on how to use it
[04:59:02] <Boomtime> so what you are asking for is for me to search for the appropriate documentation link, that is unreasonable
[05:00:17] <FatBoyXPC> I was hoping you'd know where to look. I can match, group, and project just fine. All I'm doing is summing up some numbers with the match/group, then I want a total number of those sums, too. That's all fine and dandy. I just don't know how to then also see all matching results. it's trivial to do it in 2 queries but I feel like it's possible in 1.
[05:00:50] <FatBoyXPC> Well, I haven't projected this specific query yet, but I've done it in the past. It's just not where my hangup is.
[05:00:58] <Boomtime> ok, you are much further along than i understood your statements to mean
[05:01:43] <Boomtime> you can use multiple $group stages
[05:01:46] <FatBoyXPC> Yeah, you're too used to noobs coming in and asking for shit to just get done. I understand why you have that attitude, but it really sucks.
[05:01:52] <FatBoyXPC> Oh, I was unaware of that.
[05:02:07] <Boomtime> if you have an intermediate set of results you can just keep stacking your pipeline with more operators.. it's fun
[05:02:49] <FatBoyXPC> But how would that help me in getting all matching results? Ideally the query is find all clients in zipcodes x, y, and z. And they have sent files field, and accepted files field, I want to sum those up per zip code (that part is easy peasy), then total those (also easy), but then return all results from the first bit.
[05:02:52] <Boomtime> so.. $match -> $project -> $group -> $unwind -> $match -> $group is a totally reasonable pipeline
[05:03:07] <FatBoyXPC> Do I have to unwind? :/
[05:03:22] <Boomtime> who knows, that was an example
[05:03:22] <joannac> FatBoyXPC: um, not possible
[05:03:35] <FatBoyXPC> joannac: so it has to be 2 queries?
[05:03:38] <joannac> yes
[05:03:46] <FatBoyXPC> bah. Alright, oh well.
[05:03:59] <FatBoyXPC> Wait - can I aggregate on a returned data set?
[05:04:05] <joannac> you can't keep all your data and oh by the way aggregate it too
[05:04:51] <FatBoyXPC> Oh well, I'll just do 2 calls then. One for results, one for data.
[05:05:30] <FatBoyXPC> *one for results, one for aggregate
[05:05:41] <joannac> what you want is something like
[05:06:19] <joannac> db.coll.find({a:1}) and also db.coll.aggregate([{$match: {a:1}}, ....other aggregation stuff here])
[05:06:24] <joannac> right?
[05:06:30] <FatBoyXPC> yeah basically
[05:06:40] <Boomtime> ok, that is two unrelated things
[05:06:51] <FatBoyXPC> it's "find all matching clients where blah blah blah" then aggregate some of the fields in the matched set
[05:07:00] <FatBoyXPC> but I also want all the results of the find all matching clients
[05:07:25] <FatBoyXPC> The idea is the data gets returned to a list of all clients, but there's also a "Totals" table to get a quick glance at the numbers.
[05:07:56] <joannac> Yeah, that's 2 separate things
[05:08:04] <joannac> a normal find and an aggregate
[05:08:15] <FatBoyXPC> Alright. I'll just call the db twice.
[05:11:14] <FatBoyXPC> now, with projection, I can have 2 levels of grouping do get sums of provided zips, and then total sums, right?
[05:11:34] <FatBoyXPC> *tog et
[05:11:41] <FatBoyXPC> I give up on fixing the typos.
[05:12:22] <Boomtime> yes, you can just keep piping into $group until you get the final collated result you want
[05:13:43] <FatBoyXPC> Well yes, but I mean like, I want a group of sums of field A, sums of field B, then another group of total sum field A, total sum field B. But I want both those returned. I imagine I can do that with $project?
[05:15:17] <FatBoyXPC> Boomtime: The idea is sum of field a and sum of field b for zipcode X. sum of A, sum of B for zipcode Y. Then another group of total sum field A, total sum field B.
[05:18:44] <Boomtime> so group on "zip" and then use $sum twice in that stage, A: { $sum: "$A" }, B: { $sum: "$B" }
[05:19:05] <Boomtime> or something like that, check the syntax, that is close if not exact
[05:19:15] <Boomtime> http://docs.mongodb.org/manual/reference/operator/aggregation/sum/#example
[05:19:16] <FatBoyXPC> Yeah, I do that for field a sum and field b sum, but what about for total sum A, total sum B?
[05:19:46] <FatBoyXPC> I feel like a projection should be able to handle that, shouldn't it?
[05:19:48] <Boomtime> so that would be the "different result sets" problem again
[05:20:16] <FatBoyXPC> if it's a 'different results' problem then I'll just do the math on the javascript front end when I iterate over the totals, that's trivial enough.
[05:20:26] <Boomtime> you want the intermediate stages results, but you also want to use those results with another operation
[05:20:54] <FatBoyXPC> I guess I feel like project could handle that.
[05:21:03] <Boomtime> i can see a way to do it using an array to temporarily hold the intermediate results, but that is kind of tricky
[05:21:28] <FatBoyXPC> If it gets too tricky then it's not worth it, honestly. It's easy enough to do in node or js front end
[05:21:30] <Boomtime> i think you would be better off just summing the complete totals on the client
[05:22:34] <FatBoyXPC> Right. I iterate over them anyway on the client to throw them in a table, doesn't hurt to just add another var and var += var
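A plain-Python sketch of the plan the two of them settle on: group records by zip code and sum two fields per group (what a `$group` stage with two `$sum` accumulators computes), then add up the grand totals on the client while iterating for display. The field names ("zip", "sent", "accepted") are invented stand-ins for FatBoyXPC's actual schema.

```python
# Per-zip sums, mimicking {$group: {_id: "$zip", sent: {$sum: "$sent"},
# accepted: {$sum: "$accepted"}}}; grand totals computed client-side.
records = [
    {"zip": "11111", "sent": 10, "accepted": 7},
    {"zip": "11111", "sent": 5, "accepted": 5},
    {"zip": "22222", "sent": 8, "accepted": 2},
]

def group_by_zip(records):
    groups = {}
    for r in records:
        g = groups.setdefault(r["zip"], {"sent": 0, "accepted": 0})
        g["sent"] += r["sent"]
        g["accepted"] += r["accepted"]
    return groups

groups = group_by_zip(records)
# The "var += var" pass on the client, done while filling the totals table:
total_sent = sum(g["sent"] for g in groups.values())
total_accepted = sum(g["accepted"] for g in groups.values())
```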
[05:23:10] <joannac> FatBoyXPC: I feel like there's a conceptual misunderstanding here
[05:23:22] <joannac> at each stage of the pipeline, all your documents must look the same
[05:23:38] <joannac> there's no way of saying "all my documents should look like this, oh except for the last one"
[05:23:43] <FatBoyXPC> There's a LOT of me not understanding aggregation :P
[05:24:01] <FatBoyXPC> I really thought $project could do more than it can, I guess
[05:27:15] <FatBoyXPC> Really, since these are all sums, I could do *all* of this math on the js client, and still only have a single db call, but meh, I like having the assurance the db will be right, I suppose.
[05:30:48] <Boomtime> if you try to do all the math on the client you will need to pull half the database across the network - i suspect this will kill you pretty quick
[05:31:12] <Boomtime> you should definitely do what you can on the server, get the results you must, then fill in any missing math on the client at the end
[05:31:13] <FatBoyXPC> Remember - I'm returning the results anyway - and also iterating over them anyway (to put them in a table)
[05:31:31] <Boomtime> how big is the collection?
[05:31:42] <FatBoyXPC> I think 300k records, let me check
[05:31:45] <Boomtime> and does your table load every result before being able to be displayed?
[05:32:01] <Boomtime> or does it paginate the results like any sensible and scalable system does
[05:32:08] <FatBoyXPC> 349,137 records (and this is a static collection)
[05:32:38] <Boomtime> no human can possibly make sense of a table containing 349k records displayed all at once, that is silly
[05:32:43] <FatBoyXPC> Oh, no no
[05:32:57] <FatBoyXPC> the total collection size is 349k records. I figure at most they'll see a few hundred.
[05:33:16] <Boomtime> so you only actually read a few hundred from the cursor then right? right?
[05:33:21] <FatBoyXPC> Yeah
[05:33:38] <FatBoyXPC> But honestly, I might start limiting to roughly 50 results. They really only care about the highest sellers, really. He has me sort from highest to lowest honestly, but he swears to me it's good to see all the results.
[05:33:52] <FatBoyXPC> *obviously, not honestly.
[05:34:12] <Boomtime> excellent, the aggregation can save you having to iterate over the other 348,000+ results just to do some basic math
[05:34:21] <FatBoyXPC> well it'd be over a few hundred, but yeah.
[05:34:26] <FatBoyXPC> Even still, it saves it.
[12:59:39] <tirengarfio> How to install MongoId?, I mean I have just started with mongodb using this symfony tutorial: http://symfony.com/doc/current/bundles/DoctrineMongoDBBundle/index.html. but when I try to use persist as the tutorial says, Im getting this error: Attempted to load class "MongoId" from the global namespace in /home/tirengarfio/workspace/mongo/vendor/doctrine/mongodb-odm/lib/Doctrine/ODM/MongoDB/Id/AutoGenerator.php line 36. Did you forget a use statement for this class?
[13:00:34] <tirengarfio> I have tried "sudo gem install mongoid" but the problem persists..
[13:03:32] <Derick> tirengarfio: gems have nothing to do with php - it's a ruby thing
[13:03:48] <tirengarfio> yes, I supposed that..
[13:04:01] <tirengarfio> are you Rethans? it's me Javi, from PHP SummerCamp
[13:04:11] <Derick> tirengarfio: did you install the mongodb php extension? I think Symfony/Doctrine can't find it
[13:04:14] <Derick> oh hi :-)
[13:04:21] <tirengarfio> hi!
[13:05:07] <tirengarfio> I installed these three things: apt-get install php-pear > apt-get install php5-dev > pecl install mongo
[13:05:35] <Derick> okay, and did you add extension=mongo.so to your php.ini file?
[13:05:40] <tirengarfio> yes
[13:05:55] <Derick> and phpinfo() shows a section on mongodb?
[13:07:29] <tirengarfio> no
[13:07:56] <Derick> then I think you've either not added it to the right php.ini, or didn't restart your webserver (or php-fpm if you're using that)
[13:08:12] <tirengarfio> oh, I installed it in cli/php.ini..
[13:08:22] <tirengarfio> well wrote
[13:08:37] <Derick> that means "php -m | grep mongo" should show it
[13:11:29] <tirengarfio> now it is working, thanks
[13:12:03] <Derick> np :-)
[13:33:16] <FezVrasta> hey guys... if I've this doc: {"foo": [ {"a": true, "b": true}, {"a": true, "b": false} ]} how can I select only the "foos" which have both values true?
[13:42:25] <FezVrasta> nevermind, found... $elemMatch
[13:42:44] <Derick> do not forget that mongodb will return the whole document though
[13:43:11] <FezVrasta> yes it's what I want
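A plain-Python sketch of what `$elemMatch` selects here: documents where a *single* array element satisfies all the conditions at once, as opposed to different elements each satisfying one condition. As Derick notes, the whole document comes back, matching and non-matching elements alike.

```python
# Documents mirror FezVrasta's shape: an array "foo" of {a, b} sub-documents.
docs = [
    {"foo": [{"a": True, "b": True}, {"a": True, "b": False}]},
    {"foo": [{"a": True, "b": False}, {"a": False, "b": True}]},
]

def elem_match(docs, field, **conds):
    """Keep documents where at least one element meets every condition."""
    return [
        d for d in docs
        if any(all(e.get(k) == v for k, v in conds.items()) for e in d[field])
    ]

matched = elem_match(docs, "foo", a=True, b=True)
# Only the first document qualifies: its first element has both flags true.
# The second document has each flag true in *different* elements, so it is out.
```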
[15:27:47] <jordz> Has anyone had an issue where mongo shards on a single shard somehow?
[15:27:53] <jordz> i do sh.status()
[15:28:18] <jordz> and it tells me there 50 chunks on shard001 and 50 on shard0001
[15:28:22] <jordz> the balancer is moving chunks
[15:28:27] <jordz> but it's the same machine?
[15:31:41] <jordz> moreover
[15:31:46] <jordz> how the hell do I fix it?!
[15:32:00] <jordz> literally it's telling me it's sharded but it's not
[15:43:08] <wc-> hi all, im using pymongo to insert a document into a collection, one of the fields has a value of None in python, when i check it in mongo the value is NaN
[15:43:28] <wc-> i would like that value to be stored as null in mongo, any idea how to force that to happen?
[15:48:46] <wc-> im using pymongo 2.7.2
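BSON distinguishes `null` from the floating-point `NaN`, and PyMongo normally stores Python `None` as `null`, so a `NaN` in the stored document usually means the value was a float NaN (e.g. out of numpy/pandas) before the insert rather than a true `None`. A hedged sketch of scrubbing such values before calling `insert`; the document shape is invented:

```python
import math

def scrub_nan(doc):
    """Replace float NaN values with None so they store as BSON null."""
    return {
        k: (None if isinstance(v, float) and math.isnan(v) else v)
        for k, v in doc.items()
    }

doc = {"name": "x", "score": float("nan"), "note": None}
clean = scrub_nan(doc)
# clean["score"] is now None; passing `clean` to collection.insert would
# store it as null instead of NaN.
```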
[16:27:05] <tirengarfio> any good "phpadmin" for mongodb?
[16:29:46] <rafaelhbarros> there is a few
[16:29:58] <rafaelhbarros> tirengarfio: http://docs.mongodb.org/ecosystem/tools/administration-interfaces/
[16:45:15] <obeardly> Hello fellow nerds; is it acceptable to ask questions about MMS in this channel?
[16:54:04] <obeardly> Bueller? Bueller? Bueller?
[17:02:01] <tirengarfio> I have this schema: https://gist.github.com/Ziiweb/b4ed1f57449c87eec2b2 but when I remove that category, the products associated is not removed.. I expected the deletion in CASCADE
[17:02:13] <tirengarfio> the product*
[17:05:45] <obeardly> tirengarfio: Tough crowd in here today
[17:06:03] <tirengarfio> :)
[17:06:20] <tirengarfio> bad askers maybe :)
[17:06:27] <obeardly> Wish I could help, but it's out of my expertise
[17:07:32] <obeardly> I've been trying to get some help on MMS to no avail, but I'm not even sure if this is the right place to ask
[17:10:45] <isevi> hello all, I'm trying to set up a mirror of the downloads-distro.mongodb.org site for internal use, and can't find any info on this... any leads?
[17:15:27] <obeardly> isevi: Isn't that just an apache webserver? Can't you just rsync it to your local server?
[18:42:36] <keeger> hello
[18:43:16] <keeger> i am looking to store like an upgrade queue in a document, and i want to return a "view" of that document
[18:44:24] <keeger> so if i have a document: town: townHall { level: 1 }, townHallUpgrade [ { level: 2, atTime: 10:52 am }, { level: 3, atTime: 12:20 pm }]
[18:45:14] <keeger> is there a way i can query it and return: town: townHall { level 2 }, townHallUpgrade [ { level: 3, atTime: 12:20 pm} ] if the query is done at 10:53?
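One way to get the "view" keeger describes (no reply appears in the log) is to compute it app-side after fetching the document: apply every queued upgrade whose time has passed and keep the rest as the pending queue. A sketch under that assumption, with times flattened to plain numbers for simplicity:

```python
# Document shape mirrors keeger's example: current townHall level plus a
# queue of scheduled upgrades.
town = {
    "townHall": {"level": 1},
    "townHallUpgrade": [
        {"level": 2, "atTime": 1052},   # stands in for 10:52 am
        {"level": 3, "atTime": 1220},   # stands in for 12:20 pm
    ],
}

def view_at(town, now):
    """Return the document as it should look at time `now`."""
    done = [u for u in town["townHallUpgrade"] if u["atTime"] <= now]
    pending = [u for u in town["townHallUpgrade"] if u["atTime"] > now]
    level = max([town["townHall"]["level"]] + [u["level"] for u in done])
    return {"townHall": {"level": level}, "townHallUpgrade": pending}

v = view_at(town, 1053)  # queried "at 10:53"
```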
[19:25:48] <dgarstang> i just did a fresh install of mongo. I thought before the admin user had a password assigned, I could connect locally as admin without a password???
[19:28:37] <cheeser> http://docs.mongodb.org/manual/core/authentication/#localhost-exception
[19:30:16] <dgarstang> cheeser: I'm using a library here which, according to tcpdump, is connecting locally as the admin user.... is the username required when using the localhost exception?
[19:30:45] <dgarstang> f*ckin ruby
[19:31:50] <dgarstang> The mongodb documentaion says nothing about this
[19:31:50] <decompiled> you have to enable auth on mongo, it's off by default
[19:32:18] <dgarstang> decompiled: err, after the localhost exception?
[19:32:40] <cheeser> once you have a user, i think you have to use it.
[19:32:47] <dgarstang> cheeser: this is before the user
[19:32:48] <cheeser> i.e., the localhost exception no longer applies
[19:33:00] <dgarstang> yes, but what cli options do you use ?
[19:33:08] <cheeser> oh, then you don't need the username afaik. because there's no user to have a name.
[19:33:13] <dgarstang> this library seems to be using -u admin, and I have no idea why
[19:33:15] <decompiled> mongo localhost/admin -uadmin -p
[19:33:29] <dgarstang> decompiled: really? That's with the localhost exception?
[19:33:35] <decompiled> I always have to auth
[19:33:37] <dgarstang> it's not just ... 'mongo' ?
[19:33:47] <decompiled> I don't think I use localhost exception I guess haha
[19:34:49] <dgarstang> sigh
[19:34:49] <decompiled> yeah, it says this option only applies when there are no users
[19:35:09] <dgarstang> decompiled: right, but ... would you use -u admin ?
[19:35:15] <decompiled> for sure
[19:35:15] <decompiled> always
[19:35:23] <dgarstang> trying to work out why this library is doing it
[19:42:50] <dgarstang> hang on... you'd use -u admin WITH the localhost exception?
[19:43:46] <dgarstang> i really don't understand that, given the name of the admin user is arbitrary.
[19:46:16] <decompiled> I mean, the first user I created I called admin
[19:48:04] <dgarstang> but it didn't have to be admin...
[19:50:15] <decompiled> thats true
[20:27:53] <TSMluffy> is there a way to directly load a dump.tar.gz?
[20:44:51] <James1x0> Anyone know if you can populate a path like path.to.array.$.withref in mongoose?
[20:52:20] <speaker123> hi i've got a mongodump that is not completing. just stalls at 13% of the creation of some bson file. come back >8 hours later, same spot
[20:52:44] <speaker123> how can i figure out what is going on. nothing else is writing to the db (or even reading)
[20:53:24] <javeln> good afternoon everybody
[20:54:09] <javeln> so we just upgraded to mongo-hadoop 1.3.0 and it seems like the "ignore duplicate keys" on insert behavior changed?
[20:54:50] <javeln> is there config option or some way to get the pre 1.3 duplicate key behavior with out downgrading?
[21:18:02] <nicolas_leonidas> hi I have this http://pastie.org/9595115
[21:18:16] <nicolas_leonidas> but I don't need all the data all I need to know is how many results this query returns
[21:18:31] <nicolas_leonidas> .aggregate(STUFF).count() doesn't work is there a way to find out?
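No reply to nicolas_leonidas appears in the log, but one common answer is that `.count()` belongs to find cursors, not aggregation; a count can instead be produced *inside* the pipeline by appending a final `{$group: {_id: null, count: {$sum: 1}}}` stage. A plain-Python sketch of what that stage computes over the pipeline's output:

```python
# Stand-in for the documents the earlier pipeline stages emit.
results = [{"_id": 1}, {"_id": 2}, {"_id": 3}]

def count_stage(results):
    """What {$group: {_id: null, count: {$sum: 1}}} produces: one document
    whose count field is 1 summed once per input document."""
    return [{"_id": None, "count": sum(1 for _ in results)}]

counted = count_stage(results)
```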
[21:21:16] <donotpet> has anyone gotten "invalid char in key file" when using a keyfile?
[21:21:32] <donotpet> (on linux)
[21:21:49] <donotpet> (mongo 2.6.4)
[21:28:11] <frodopwns> i have a collection "Users" …each user has a list called sites….each item in sites is a dict {"id":"id", "roles":[roles]}, how could i query for all users that have a site with an id of X?
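frodopwns's question goes unanswered in the log; in the shell this is typically `db.Users.find({"sites.id": X})`, since dot notation matches against each element of an array of sub-documents. A plain-Python sketch of that matching rule, with invented user data:

```python
# Each user has a "sites" list of {"id": ..., "roles": [...]} sub-documents,
# as described in the question.
users = [
    {"name": "u1", "sites": [{"id": "a", "roles": ["admin"]}]},
    {"name": "u2", "sites": [{"id": "b", "roles": []}, {"id": "a", "roles": []}]},
    {"name": "u3", "sites": [{"id": "c", "roles": []}]},
]

def users_with_site(users, site_id):
    """Mimic find({"sites.id": site_id}): match if any element's id matches."""
    return [u for u in users if any(s["id"] == site_id for s in u["sites"])]

matched = users_with_site(users, "a")
```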
[21:46:19] <keeger> in mongo, can i do an update across a collection, using the current value to set another one?
[21:46:33] <keeger> sql: update table set columnX = columnX + columnY
[21:46:53] <keeger> all the examples i see of update use a passed in value
[21:49:10] <daidoji> keeger: mapReduce?
[21:49:37] <keeger> daidoji, you can use that in an update?
[21:49:44] <keeger> is there an example?
[21:50:17] <daidoji> keeger: not really. Mongo doesn't do well on key manipulations within the document, you're kind of expected to do that all in the app
[21:50:36] <keeger> daidoji, ah ok. thx
[21:50:52] <daidoji> although maybe update({field: 'bar'}, { $inc: {field1: "$field2"}}) notation might work
[21:50:58] <daidoji> I don't know, I ahven't tried it
[21:51:16] <keeger> daidoji, hmm, lemme give that a shot
[21:56:10] <keeger> doesnt look like it. gives an error about a non numeric value
[21:57:53] <daidoji> keeger: yeah thats another thing, there's no explicit casting so if you didn't store them as the BSON "number" then all numeric operations on values within your document won't work
[21:58:02] <daidoji> errr implicit casting
[21:58:28] <daidoji> which basically means you need to mapReduce the values out with the keys you want to update, and then $set via a javascript function (if you want to do everything in Mongo)
[21:58:32] <daidoji> (or just do all that in the app)
[21:58:44] <daidoji> which is the typical path of least resistance imo
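A sketch of the "do it in the app" path daidoji recommends: since update operators like `$inc` take literal values rather than references to another field of the same document, the usual workaround is to fetch each document, compute `x + y` client-side, and write the result back (with pymongo, one update per document). Field names are keeger's SQL columns renamed to x/y for illustration:

```python
# Stand-in for the collection's documents.
docs = [
    {"_id": 1, "x": 10, "y": 5},
    {"_id": 2, "x": 3, "y": 4},
]

def add_y_into_x(docs):
    """Client-side equivalent of SQL's: UPDATE t SET x = x + y."""
    updated = []
    for d in docs:
        new = dict(d)
        new["x"] = d["x"] + d["y"]
        updated.append(new)
    return updated

# In a real app, each element of add_y_into_x(...) would be written back,
# e.g. one update per _id.
```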
[21:59:11] <donotpet> anyone? invalid char in keyfile?
[22:00:27] <daidoji> donotpet: sorry bro/broette never heard of it
[22:09:44] <decompiled> I use a keyfile and have never seen that error. Any ^M characters or something?
[22:10:38] <donotpet> It's complaining about the - char I think
[22:11:32] <donotpet> there are "-" in the pemfile
[22:11:35] <donotpet> "-----BEGIN PRIVATE KEY-----"
[22:13:15] <decompiled> mine doesn't have BEGIN PRIVATE KEY, but I wasn't the admin that generated the keyfile
[22:13:41] <decompiled> both the END and BEGIN lines do not exist in my file
[22:19:25] <donotpet> hmm
[22:19:33] <donotpet> that's helpful! thank you!
[22:40:57] <donotpet> @decompiled: thanks! that worked.