PMXBOT Log file Viewer


#mongodb logs for Monday the 6th of January, 2014

[00:20:19] <vaerros> any way to leverage $avg to work with occurrences of documents as opposed to a specific field?
[00:20:41] <vaerros> counts of documents with fields of a certain value, that is
[07:40:34] <Logicgate> anyone around
[07:40:39] <Logicgate> I need help with aggregation
[07:42:05] <Logicgate> I have a collection with objects looking like this {date: ISODate Object, type: 'type1'}, {date: ISODate Object, type: 'type2'},
[07:42:22] <Logicgate> I want to aggregate by date how many of each type there are
[07:44:36] <Logicgate> so the results get returned like so
[07:45:17] <Logicgate> {_id: {date: ISODate Object}, type1: 102, type2: 239}
[07:51:37] <IAD> Logicgate: http://docs.mongodb.org/manual/tutorial/aggregation-with-user-preference-data/#return-usernames-ordered-by-join-month
[07:53:54] <Logicgate> IAD, that won't work
[07:54:02] <Logicgate> as I'm trying to get the count for 2 values in one object
[07:57:04] <Logicgate> i'm trying to aggregate two different types of data in this case.
[07:57:46] <IAD> Logicgate: looks like you need to $group the result http://docs.mongodb.org/manual/reference/operator/aggregation/addToSet/#grp._S_addToSet
[07:58:39] <Logicgate> i could also add a count field to my data of 1
[07:58:47] <Logicgate> hmmm that wouldn't work.
[07:59:23] <Logicgate> I don't know how to achieve this.
[07:59:29] <Logicgate> i've done some pretty serious aggregations before
[07:59:35] <Logicgate> this one is twisting my mind
[08:01:37] <IAD> Logicgate: you need something like this http://pastebin.com/pnZEw0Yp
[08:02:04] <Logicgate> but $addToSet puts it in an array!
[08:02:11] <Logicgate> so I guess I could unwind it after
[08:02:13] <Logicgate> blah
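IAD's $addToSet suggestion collects values into an array that then needs an $unwind; the counts Logicgate describes can come straight out of a single $group using $sum over a $cond. A sketch (not run against a live server; the collection name `events` is made up), with a pure-Python rendering of what the stage computes:

```python
from collections import Counter

# One $group keyed on date; $cond turns each type into a 0/1 that $sum
# totals, so no array + $unwind is needed afterwards.
pipeline = [
    {"$group": {
        "_id": {"date": "$date"},
        "type1": {"$sum": {"$cond": [{"$eq": ["$type", "type1"]}, 1, 0]}},
        "type2": {"$sum": {"$cond": [{"$eq": ["$type", "type2"]}, 1, 0]}},
    }},
]
# results = db.events.aggregate(pipeline)  # hypothetical collection name

# Pure-Python picture of what that $group computes on three sample docs:
docs = [
    {"date": "2014-01-06", "type": "type1"},
    {"date": "2014-01-06", "type": "type2"},
    {"date": "2014-01-06", "type": "type1"},
]
counts = Counter((d["date"], d["type"]) for d in docs)
result = {"_id": {"date": "2014-01-06"},
          "type1": counts[("2014-01-06", "type1")],
          "type2": counts[("2014-01-06", "type2")]}
```

The hardcoded type1/type2 field names mirror the output shape Logicgate asked for at 07:45; each distinct type needs its own $cond branch in this approach.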
[08:58:49] <idank> how do I specify read preferences when connecting to a mongos with pymongo?
[09:28:25] <lqez> idank: http://api.mongodb.org/python/current/api/pymongo/#pymongo.read_preferences.ReadPreference
[09:28:45] <lqez> "MongoClient connected to a mongos, with a sharded cluster of replica sets:"
[09:29:29] <idank> lqez: thanks
[09:30:57] <idank> I have a two-member replica set + arbiter, with the secondary's priority set to 0. When I take down the primary, reads don't work. Why is that? I thought only writes would be disabled because the master is offline
[09:34:58] <samurai2> idank : because by default the secondary node is not slaveOk (meaning you can't read from the slave)
[09:35:18] <idank> samurai2: how do I turn it on?
[09:35:34] <samurai2> idank : from the console on the secondary node
[09:35:52] <idank> samurai2: with what command
[09:36:39] <samurai2> idank : http://docs.mongodb.org/manual/reference/method/rs.slaveOk/
[09:38:09] <idank> samurai2: I see that it's deprecated in favor of read preference which I specified 'secondary_preferred'
[09:42:59] <samurai2> idank : maybe you should check with rs.slaveOk() first on the node you want to do read
[09:45:42] <idank> samurai2: slaveOk seems to work, strange
[09:46:42] <samurai2> idank : I think it's only a preference method; it doesn't automatically enable reads from the slave node by itself
[09:47:05] <samurai2> idank : anyway, I always use default read from primary :)
[09:47:42] <samurai2> because secondary can have stale data
[09:48:21] <idank> samurai2: right, but this is mostly a read only cluster
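idank's symptom can be pictured as a read-routing decision: with the default mode (primary) and the primary down, there is nothing eligible to read from, while secondaryPreferred falls back to the surviving secondary. A toy pure-Python model of that selection (no driver involved; this only illustrates the behavior discussed above):

```python
def pick_member(members, mode):
    """Toy model of read routing; members are dicts with a 'state' key."""
    primaries = [m for m in members if m["state"] == "PRIMARY"]
    secondaries = [m for m in members if m["state"] == "SECONDARY"]
    if mode == "primary":
        return primaries[0] if primaries else None
    if mode == "secondaryPreferred":
        pool = secondaries or primaries
        return pool[0] if pool else None
    raise ValueError("unknown mode: %s" % mode)

# Primary taken down, priority-0 secondary still up:
members = [{"state": "SECONDARY", "host": "b"}]
default_read = pick_member(members, "primary")              # nothing to read
fallback_read = pick_member(members, "secondaryPreferred")  # the secondary
```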
[09:52:35] <tiller> Hi there!
[10:05:04] <tiller> Does anyone have an idea what could cause this error: com.mongodb.MongoException: have to have sort key in projection and removing it
[10:06:18] <tiller> The code I'm using is pretty simple: http://pastebin.com/2h4J6Hwa
[10:06:30] <tiller> and it works fine if I type the query into the console directly
[10:18:13] <tiller> If I remove the sorting part of my java code, it works fine...
[10:18:26] <tiller> I think this is a bug with the java driver?
[10:32:28] <tiller> =/
[13:23:27] <harshada_> how to perform db.stores.distinct("merchant",{merchant:/adi/ig}) in pymongo? Any Idea?
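The shell's /adi/ig literal has no direct equivalent in Python; the usual translation is a compiled `re` pattern passed as the query value, with distinct called on the resulting cursor. A sketch (the live collection call is commented out; the shell's g flag has no meaning in a query and can be dropped):

```python
import re

# /adi/i becomes a compiled pattern; pymongo sends it as a regex query.
pattern = re.compile("adi", re.IGNORECASE)
# merchants = db.stores.find({"merchant": pattern}).distinct("merchant")

# The pattern itself matches case-insensitively:
matches = [s for s in ["Adidas", "ADI-tech", "nike"] if pattern.search(s)]
```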
[14:25:57] <locojay> hi can i use route53 as a private dns for name lookup for instance in a private vpc (database, ....)
[14:41:10] <jiltr> Hello, I'm developing a C# application using MongoDB. I have a table which will contain some products and their prices, and the prices are updated every 30 minutes. How can I ensure that the table is always up to date, and how can I avoid syncing the whole table even if the table is unchanged?
[14:41:29] <ppetermann> a table?
[14:41:53] <jiltr> I mean a collection
[14:47:50] <Nodex> sync?
[14:48:23] <Nodex> that's an application problem - to keep your prices up to date
[14:49:51] <jiltr> Yes. Should I make a smaller table where I store the last update time, because the product table will be large?
[14:51:25] <Nodex> define large?
[15:01:13] <jiltr> @Nodex I'd say about 20mb
[15:02:32] <Nodex> not exactly what I would call large. Personally I would store the last updated time with the product and the price
[15:02:49] <Nodex> then conditionally update it
[15:05:21] <jiltr> I would have a loop where it always updates from the database, so it would try to update the whole db every 500ms
[15:05:53] <Nodex> why 500ms ? you said every 30 mins
[15:07:26] <jiltr> The prices itself updates every 30 minutes
[15:07:38] <jiltr> But I have to work with the prices in the application
[15:07:53] <Nodex> I don't see the problem you're having
[15:09:27] <jiltr> Well, let me explain. I'm developing a price engine which checks prices in online stores; every 30 minutes it updates the prices in the database. I have some other threads which constantly check the db for the prices, and if a price reaches a specific level it will do a specific action.
[15:29:36] <trakowski77> Hi, I'm trying to resolve the following error on 60M inserts
[15:29:49] <trakowski77> Fatal error in CALL_AND_RETRY_2
[15:29:56] <trakowski77> Allocation failed - process out of memory
[15:30:26] <trakowski77> I've looked it up on google, and it is mostly referred to in context of aggregation
[15:30:34] <Derick> trakowski77: are you running a 32bit mongod?
[15:30:46] <trakowski77> That I need to check
[15:30:52] <trakowski77> with SA
[15:31:17] <Derick> how do you access mongodb? with the mongo shell?
[15:31:43] <trakowski77> It's the PHP driver
[15:32:19] <Derick> trakowski77: can you run:
[15:32:37] <trakowski77> version is 64bit
[15:32:43] <trakowski77> apparently :)
[15:32:51] <Derick> trakowski77: how do you know?
[15:32:52] <trakowski77> but if you have a way to check it, I'll do it
[15:32:56] <Derick> one sec
[15:32:57] <trakowski77> SA told me
[15:34:47] <Derick> $m = new MongoClient;
[15:34:47] <Derick> var_dump($m->db->command(array('hostInfo' => 1)));
[15:34:51] <Derick> and put the output somewhere?
[15:34:55] <trakowski77> sure
[15:35:40] <Derick> sorry
[15:35:57] <Derick> instead of hostInfo, please use buildinfo
[15:39:20] <Derick> trakowski77: any luck?
[15:41:27] <trakowski77> Derick: http://pastebin.com/0AjQkX60
[15:43:37] <Derick> ["bits"]=> int(64)
[15:43:40] <Derick> or, so 64bits
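The same bitness check Derick ran through PHP, sketched in Python: buildinfo reports the server's word size in its "bits" field. The sample dict below mimics the relevant part of trakowski77's paste (values illustrative; the live command call is commented out):

```python
# build_info = MongoClient().admin.command("buildinfo")  # live version
build_info = {"version": "2.4.8", "bits": 64}  # illustrative sample
is_64bit = build_info["bits"] == 64
```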
[15:43:47] <Derick> now, next thing to look at is your mongod log
[15:44:14] <trakowski77> Derick: username 'hennzo' is a guy I'm working with on this. He may answer some questions as well ...
[15:44:21] <trakowski77> I'm looking to get that
[15:46:53] <trakowski77> Do you need the whole thing ?
[15:47:04] <trakowski77> or just a part related to failure ?
[15:47:38] <Derick> around the failure would be a good start
[15:49:28] <trakowski77> http://pastebin.com/BbiwZH4B
[15:50:39] <Nodex> rather a long lock
[15:51:20] <Derick> 1123 seconds is rather long indeed
[15:51:31] <Nodex> there is a 9 second one near the bottom
[15:51:41] <Nodex> reslen:77 9684ms
[15:51:47] <Derick> your writes are really large too
[15:51:56] <Derick> 6.2MB
[15:51:59] <Derick> what are you doing? :-)
[15:52:14] <trakowski77> pumping data from mysql into mongo
[15:52:22] <trakowski77> to see if we can get better performance
[15:52:55] <trakowski77> So we're doing it too quickly ?
[15:53:15] <Derick> oh
[15:53:19] <Derick> not saying that
[15:53:23] <Derick> just wondering what you were doing :)
[15:53:30] <Derick> but you have write contention I think
[15:53:34] <Derick> or slow disks
[15:53:39] <trakowski77> yeah
[15:53:44] <trakowski77> it is a slow machine
[15:53:49] <trakowski77> dev box
[15:54:20] <trakowski77> we're just trying to see limitations
[15:54:24] <Derick> trakowski77: and you show us a mongostat — better to do that when you get to the "out of memory"
[15:54:35] <Derick> I am thinking you might have memory overcommit turned off or so
[15:55:32] <trakowski77> OK, so you want me to trigger the problem again and then ... ?
[15:55:43] <trakowski77> execute mongostat with some params ?
[15:56:36] <Derick> trakowski77: no params needed
[15:57:07] <trakowski77> ok, let me get on it
[16:00:27] <trakowski77> hennzo is trying to trigger it, it's gonna take few minutes
[16:00:59] <Derick> that's ok!
[16:01:06] <Derick> i'll be here for another hour at least
[16:02:11] <_aeris_> hello :)
[16:03:18] <_aeris_> i'm currently looking for a nosql backend to store some test results to generate reports afterwards
[16:04:21] <_aeris_> mongodb seems great, but i can't manage to retrieve my data the way i want
[16:04:45] <_aeris_> here is some example document i want to store : http://paste.imirhil.fr/?b0d46513dd019a7f#fAuxOQkgaxudGrLH7MsPdx26Ivm3Vu1epx2SZyXVWVs=
[16:05:32] <_aeris_> user can input any filter on the 'config' fields, and i have to aggregate the corresponding 'results' in a report
[16:06:21] <_aeris_> but i can't find a way to retrieve only the parts matching a criterion; mongodb seems only able to return the full document :(
[16:06:26] <_aeris_> am i wrong ?
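_aeris_ is right that find() returns whole documents; the aggregation pipeline, however, can $unwind the embedded array and $match the pieces, so only the matching sub-items come back. A sketch with made-up field names ('config.browser', 'results', and 'results.status' are assumptions, since the pasted document isn't reproduced in the log):

```python
# Filter documents on a config field, explode the results array,
# then keep only the array entries matching a second criterion.
pipeline = [
    {"$match": {"config.browser": "firefox"}},  # user's filter on config
    {"$unwind": "$results"},                    # one doc per results entry
    {"$match": {"results.status": "ok"}},       # keep only matching pieces
]
# db.tests.aggregate(pipeline)  # 'tests' is a hypothetical collection
```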
[16:09:53] <Kudos> anyone else having issues with the MMS api returning HTML instead of JSON?
[16:10:07] <sflint_> question
[16:10:14] <Kudos> MMS seems to think my backup agent is down, but it's not, it's choking on HTML
[16:10:17] <sflint_> i ran this mongorestore -h inny-p-p0520-prd-mdb-15 --port 27017 --oplogReplay --oplogLimit 1385947526:6 dump/oplogR
[16:10:25] <sflint_> to replay an oplog
[16:10:39] <sflint_> results are "Applied 595563 oplog entries out of 238371751 (237776188 skipped)"
[16:11:00] <sflint_> i then re-ran the same command adding "-vvvvv" to see why something was skipped
[16:11:19] <sflint_> i have checked the log and there is nothing about skipping an oplog entry...can anyone tell me why this would be skipped?
[16:11:28] <sflint_> and why so many were skipped?
[16:19:37] <ujan> Kudos: same here, throttling data for backup agents, monitoring agents seem to be fine
[16:19:58] <Kudos> ujan: ok, it's not just me then
[16:20:10] <Kudos> i'll just ignore it for now
[16:26:01] <trakowski77> Derick: It looks like the problem resolved itself
[16:26:16] <trakowski77> Derick: At least we're unable to trigger it again
[16:26:44] <trakowski77> Derick: Thanks for your help, and have a good day !
[16:27:49] <Derick> hehe
[16:27:50] <Derick> ok :)
[16:27:58] <Derick> please do report it when it happens again
[16:42:01] <Nodex> amazing how much work gets done when HN is down LOL
[17:00:11] <rafaelhbarros> Good morning, new project will be coming up this february, and I need a few use cases that used mongo and scaled well, or some videos / literature. Any guidance is appreciated.
[17:00:21] <rafaelhbarros> Good afternoon / depending on timezone.
[17:01:01] <Nodex> probably better to give people an idea of your project and they advise you
[17:02:27] <rafaelhbarros> user generated content, it's a side project of a currently successful "social-network" with a specific area
[17:05:27] <rafaelhbarros> now about this: http://nosql.mypopescu.com/post/12466059249/anonymous-post-dont-use-mongodb, are some of these issues fixed?
[17:08:54] <cheeser> i would probably not put any stock in anonymous, self-described rants like that, personally.
[17:10:39] <Nodex> certainly the global lock doesn't exist anymore
[17:11:08] <rafaelhbarros> cheeser: Yeah, I know about the locks
[17:11:18] <rafaelhbarros> cheeser: but I'm looking at random bad stuff to write them down later.
[17:11:30] <rafaelhbarros> cheeser: I'm going to advocate in favor of mongodb, btw
[17:11:35] <Nodex> and that user seems to have a problem with his own architecture. He assumed that MongoDB did things that it never really claimed
[17:11:46] <Nodex> which is what most people who moan about it do
[17:12:26] <WhereIsMySpoon> Yo, if I have an object with a list of strings in mongo, how do I remove a string from that list/add a string to that list?
[17:12:43] <Nodex> what is a list of strings?
[17:12:47] <cheeser> $pull
[17:12:55] <WhereIsMySpoon> ["blah", "foo"]
[17:12:59] <Nodex> oh an ARRAY
[17:13:03] <WhereIsMySpoon> yea sorry
[17:13:09] <Nodex> ^^ $pull
[17:13:13] <rafaelhbarros> $pull will remove from list, $push will append
[17:13:21] <WhereIsMySpoon> i'll look that up, thanks
[17:13:26] <Nodex> and $pop if you want to pop
[17:13:28] <rafaelhbarros> I <3 pull and I forgot about it last week.
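The three operators just named, written as pymongo-style update documents, plus a pure-Python rendering of their effect on WhereIsMySpoon's ["blah", "foo"] array (the 'tags' field name is made up for the example):

```python
pull_update = {"$pull": {"tags": "blah"}}  # remove matching values
push_update = {"$push": {"tags": "baz"}}   # append a value
pop_update = {"$pop": {"tags": 1}}         # drop the last element (-1: first)
# collection.update({"_id": doc_id}, pull_update)  # hedged, not executed

# What $pull then $push do to the array, in plain Python:
tags = ["blah", "foo"]
tags = [t for t in tags if t != "blah"]  # $pull "blah"
tags.append("baz")                       # $push "baz"
```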
[18:07:52] <anykey> is it normal that the mongodb process causes a lot of wakeups even if the system is completely idle?
[18:08:15] <anykey> there's no requests to be answered
[18:08:56] <cheeser> sharded?
[18:09:01] <anykey> no
[18:09:05] <anykey> it's just my development laptop
[18:09:09] <cheeser> strange
[18:09:15] <anykey> there's absolutely nothing going on with the db
[18:10:37] <anykey> another thing: I want to completely migrate to SSDs in the near future for battery power savings and performance reasons. Now I read on the SSD page that write endurance is a concern and got afraid of that. I'm just using mongo for development...
[18:11:13] <anykey> am I reading too much into it?
[18:11:50] <cheeser> probably
[18:12:50] <anykey> well I could turn mongodb off if it's not in use anyway
[18:13:30] <anykey> OR run it completely on tmpfs :-)
[18:32:31] <cortexman> i have n documents in my database, i.e., {a:*, b, c:*}, and I would like a unique list of all a values - can this be done?
[18:38:39] <cheeser> db.collection.distinct("a")
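distinct is the right call here; pymongo spells it the same way. A pure-Python sketch of what it computes over cortexman's document shape (sample values invented):

```python
# unique_a = db.collection.distinct("a")  # hedged, not executed here
docs = [{"a": 1, "b": 2}, {"a": 3}, {"a": 1}]  # sample documents
unique_a = sorted({d["a"] for d in docs})      # one entry per distinct a
```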
[20:36:19] <trakowski77> Hi
[20:36:29] <trakowski77> Derick: Are you still there ?
[20:36:55] <Derick> physically yes :)
[20:37:08] <trakowski77> Derick: lol, the crash happened again
[20:37:21] <trakowski77> Derick: We spoke about it earlier
[20:37:50] <Derick> Yes, I remember that shutdown issue :)
[20:37:51] <trakowski77> Same error: Fatal error in CALL_AND_RETRY_2
[20:38:09] <trakowski77> Allocation failed - process out of memory
[20:38:53] <Derick> what was mongostat saying?
[20:39:06] <trakowski77> nothing, we ran it after the fact
[20:39:12] <Derick> bleh, too late :)
[20:39:13] <trakowski77> I think we need to run it while it runs right ?
[20:39:18] <Derick> yes
[20:39:24] <trakowski77> We will try to restart it and do that
[20:39:35] <trakowski77> is it some output we should log into a file ?
[20:39:42] <Derick> that might be handy
[20:39:50] <Derick> as you won't miss when things happen
[20:39:55] <trakowski77> ok, let me set this up
[22:20:13] <OliverJAsh> is it possible to compare two object IDs in just javascript? similar to the way you can use $lt/$gt in a query
[22:20:20] <OliverJAsh> except i don't want to do it in a query
[22:29:03] <OliverJAsh> asked here: http://stackoverflow.com/questions/20960467/compare-objectids-for-greater-smaller-in-javascript
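Since an ObjectId begins with a big-endian creation timestamp, its 24-character hex string compares in creation order as a plain string, which is the usual client-side answer to OliverJAsh's question (the same trick works with < on id.toString() in JavaScript). Sketched with illustrative ids:

```python
# Two illustrative ids whose leading timestamp bytes differ by one second:
older = "52c8c3250000000000000000"
newer = "52c8c3260000000000000000"
older_first = older < newer  # plain string comparison follows creation order
```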