#mongodb logs for Monday the 3rd of November, 2014

[00:05:04] <vlitzer_> so i just installed mongodb-org on my debian wheezy box following the mongodb instructions, and the installation fails saying that the server is not configured
[00:07:06] <Boomtime> can you put the output of the install into a pastebin/gist and link here? include the command you used to start it and all output that follows
[00:08:54] <vlitzer_> http://pastebin.com/uKMRLm7m
[00:09:28] <vlitzer_> i just did apt-get install mongodb-org
[00:09:37] <vlitzer_> after adding the repositories
[00:17:58] <vlitzer_> it's like 3 steps and it failed. I am trying to re-download the packages and purge the installation
[00:18:12] <vlitzer_> I've never had apt fail like this
[00:43:01] <vlitzer_> nope, it didn't work :/
[03:07:43] <Timmay> Hello?
[03:08:44] <Timmay> I am working with morphia in java and have a problem I am hoping someone can help me with?
[10:45:28] <Forest> Hello, can anyone help me solve this error: { [MongoError: $near requires a point, given { type: "Point", coordinates: [ "17.237571000000003", "48.18046400000001" ] }] name: 'MongoError' } ?
[10:46:09] <Derick> you need numbers, not strings with numbers for the coordinates
[10:46:57] <Forest> @Derick: aha so i need to convert the strings to real numbers? ok let me check how its done in node
[10:47:49] <Derick> I don't know that
[10:48:40] <Forest> @Derick: Thank you it works now :)
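
A minimal mongo shell sketch of Derick's fix, assuming a hypothetical "places" collection with a 2dsphere index on a "loc" field; the point is converting the coordinate strings to numbers (e.g. with parseFloat) before building the $near query.

    // hypothetical names ("places", "loc"); $near rejects string coordinates,
    // so convert them to numbers first
    var lngStr = "17.237571000000003", latStr = "48.18046400000001";
    db.places.createIndex({ loc: "2dsphere" });
    db.places.find({
        loc: { $near: { type: "Point",
                        coordinates: [ parseFloat(lngStr), parseFloat(latStr) ] } }
    });
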
[12:40:50] <snoobydoo> Hello guys, I'm trying to return records matching a specific date, say today. How do I do that? Eg: db.expenses.find({ date: { $gt: new Date(2014, 11, 02), $lt: new Date(2014, 11, 04)} }) This doesnt seem to work
[13:00:47] <flyingkiwi> snoobydoo, query looks good so far.
[13:01:36] <snoobydoo> Hmm idk why it doesn't return anything though. So, is that how it has to be done? I was wondering if I was doing it the right way
[13:01:38] <flyingkiwi> snoobydoo, what do you mean with "doesnt seem to work"? Any error messages?
[13:01:49] <snoobydoo> No, it doesn't return anything. No records are returned
[13:02:06] <snoobydoo> Though there are records that have been inserted today
[13:02:26] <flyingkiwi> snoobydoo, what datatype do those documents have for "date"? are they strings, maybe?
[13:02:52] <snoobydoo> They're in ISODate format
[13:06:56] <flyingkiwi> snoobydoo, but are they also the ISODate type? Because your query will not work if you have a "ISODate format" as a string like "2014-11-03T12:42:34.897Z"
[13:07:37] <snoobydoo> Yes they are. Here's a sample field: "title" : "Uber", "cost" : "5.00", "location" : "Home", "category" : "Transport", "date"
[13:07:52] <snoobydoo> OOps sorry. My bad.
[13:10:03] <snoobydoo> Lol
[13:10:09] <flyingkiwi> me too!
[13:10:10] <snoobydoo> Here's a sample record, flyingkiwi http://pastie.org/9693208
[13:15:32] <diegows> good morning
[13:15:34] <diegows> has anyone faced an issue with the php-mongo driver and replica sets? especially when the first node on the list is down and php tries to connect to it every time a page is loaded
[13:17:56] <joannac> diegows: i remember a bug like that being fixed in the PHP driver
[13:18:30] <diegows> I'll check
[13:18:32] <diegows> but anyway
[13:19:43] <diegows> there is a theoretical issue: php is stateless, so there is always a risk of hitting a dead node sometimes... not sure what the fix was in that case
[13:19:52] <diegows> I'll check php-driver history
[13:20:24] <Derick> diegows: it will do it for every process, at least once
[13:20:54] <Derick> and it can take up to 60 secs for it to notice
[13:20:57] <diegows> workaround?
[13:22:33] <Derick> set the timeout to a shorter interval
[13:22:39] <Derick> though not less than 5 secs
[13:22:53] <Derick> mongo.is_master_interval iirc, but check the real name
[13:27:30] <flyingkiwi> snoobydoo, haha
[13:27:41] <flyingkiwi> snoobydoo, I've got a surprise for you
[13:27:51] <snoobydoo> flyingkiwi: You do?
[13:29:09] <flyingkiwi> snoobydoo, the month is zero-based, so it starts from 0. 0 = january, 1 = feb, ...
[13:30:19] <flyingkiwi> snoobydoo, so try { date: { $gt: new Date(2014, 11, 02), $lt: new Date(2014, 11, 04)} }
[13:30:26] <flyingkiwi> ah
[13:30:28] <snoobydoo> I did
[13:30:31] <flyingkiwi> snoobydoo, { date: { $gt: new Date(2014, 10, 02), $lt: new Date(2014, 10, 04)} }
[13:30:32] <flyingkiwi> this
[13:30:41] <snoobydoo> Oh
[13:30:46] <snoobydoo> gimme a sec
[13:31:01] <Derick> be careful with 04 and the like, as 08 will likely not work
[13:32:57] <flyingkiwi> Derick is right
[13:34:46] <snoobydoo> Well, idk why it returns these records now: http://puu.sh/cBChV/ba522cd464.png I mean, I'm looking for records after the 3rd and before the 5th, i.e. tomorrow; it shouldn't return anything
[13:38:30] <snoobydoo> and I'm curious as to why 08 won't work?
[13:38:42] <Derick> it's octal
[13:38:49] <Derick> because it starts with a 0
[13:38:54] <Derick> and octal 08 does not exist
[13:39:09] <snoobydoo> Ah! right
[13:39:23] <snoobydoo> So, whats the best way to do this?
[13:40:47] <Derick> use 8 instead of 08
[13:42:30] <snoobydoo> oh okay
[13:43:13] <flyingkiwi> snoobydoo, no, you're looking for the 3rd 00:00:01 to the 4th 23:59:59
[13:44:11] <flyingkiwi> at least your query states that
[13:44:26] <flyingkiwi> so results looks good so far
[13:45:32] <snoobydoo> Oh. So, I'd have to mention hours:minutes:seconds too.
[13:45:49] <snoobydoo> Thats a pain -.-
[13:45:57] <snoobydoo> Anyway, i'm glad i got this fixed
[13:46:13] <snoobydoo> Thank you, flyingkiwi and Derick! :)
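
Pulling the thread's advice together, a minimal mongo shell sketch of a "records for one day" query, assuming the "expenses" collection from snoobydoo's example and a "date" field stored as a real ISODate: JavaScript months are zero-based, leading zeros such as 08 risk octal parsing, and the bounds need to span the whole day.

    // November 3rd, 2014 -> month 10, because JS Date months start at 0;
    // plain 3 and 4 avoid the leading-zero (octal) pitfall Derick mentions
    var start = new Date(2014, 10, 3);   // 2014-11-03 00:00:00 local time
    var end   = new Date(2014, 10, 4);   // 2014-11-04 00:00:00 local time
    db.expenses.find({ date: { $gte: start, $lt: end } });
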
[14:23:33] <flyingkiwi> I'm still waiting for my 10k chunks to migrate. 1% is moved so far. I noticed my lock% is now at 70% on the new shard (compared to 10% on the old one)
[14:23:50] <flyingkiwi> So - is migrating causing any aggressive locking?
[14:23:58] <YourNickname> lo
[14:25:30] <Guest34403> is there something in the aggregation framework that will allow you to iterate over previous pipeline results and call $match on that result set?
[14:30:31] <Guest34403> in other words, is there a more efficient way of doing this query: http://pastebin.com/qinnLvz3
[14:32:28] <Guest34403> can I avoid the forEach loops?
[14:33:36] <krion> hello everyone
[14:33:59] <krion> i got a replica set with two VMs, one running the primary, the other running the secondary and also the arbiter
[14:34:08] <krion> i know it's a bad design...
[14:34:24] <krion> the thing is i want to poweroff the vm with the secondary and the arbiter
[14:35:10] <krion> will the primary stay the primary and not complain when the secondary and the arbiter come back ?
[14:36:53] <cheeser> you'll go into read-only mode
[14:37:08] <krion> really ?
[14:37:53] <krion> while the secondary and the arbiter are not there ?
[14:38:21] <Mmike> Hello, lads. Do you know, under what (which) circumstances could one end up with two primaries in the replset cluster?
[14:38:54] <krion> Mmike: i guess if you don't have an arbiter, that's possible
[14:39:18] <krion> 4 nodes without an arbiter could lead to two primaries, in my understanding
[14:40:05] <leifw> krion: no
[14:40:44] <leifw> if you turn off the machine with the secondary and arbiter, the primary will step down to secondary because it doesn't know it's in the majority component anymore
[14:41:18] <krion> oh
[14:41:23] <leifw> Mmike, krion: 4 nodes with no arbiter require 3 votes to elect a primary, you still won't get dual primaries in normal operation
[14:41:36] <mmars> the swap on my mongodb server is constantly utilized, 75% or greater. anyone have some good starting points for where to look?
[14:41:52] <leifw> Mmike: you can get 2 primaries if for some reason nobody can see the old primary but the old primary doesn't yet realize that it should step down
[14:42:09] <krion> cheeser: leifw thanks !
[14:42:14] <leifw> the other machines may elect a new primary, the old one is supposed to step down on its own but for any number of reasons it might not
[14:42:24] <Mmike> hm
[14:43:20] <cxz> how can i clear a field in a json array in an array?
[14:43:22] <krion> leifw: do you think i could find documentation that says exactly what you said?
[14:43:23] <Mmike> leifw: i'm actually trying to fix a local orchestration script/framework. I create one mongodb box. Then I add another one, and only after that I have two primaries. And i'm not sure how that can happen.
[14:43:44] <Mmike> mmars: what is your vm.swappiness set to?
[14:43:44] <krion> leifw: it's not that i don't trust you, i just need "official" documentation for it
[14:43:50] <mmars> 60
[14:44:00] <Mmike> mmars: lower that - i'd go as low as 1
[14:44:03] <leifw> krion: you probably can
[14:44:11] <krion> thanks i'll google for it
[14:44:12] <cxz> ie, removing field1 in: mydict: {[{'field1': ''}, {'field1': ''}]}
[14:44:45] <Mmike> mmars: and ++ on Axis and Allies, love that one :D
[14:45:03] <leifw> Mmike: like I said, there are many possible reasons, you'd probably need an expert to look at your logs and rs.status()
[14:46:02] <leifw> i'd start by looking at rs.status() on both machines to see what they think the other one is doing
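
A short mongo shell sketch of the check leifw suggests; run it against each member and compare what every node believes about the others. The fields used are standard rs.status() output.

    // list each member's name and the state this node thinks it is in
    rs.status().members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr + " (health: " + m.health + ")");
    });
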
[14:50:45] <mmars> Mmike: it's set to 60
[14:51:48] <Mmike> mmars: set it to 1.
[14:52:40] <mmars> Mmike: OK, so mongodb doesn't have much control over whether it gets swapped or not?
[14:52:43] <Mmike> mmars: that number tells the linux kernel how eager it should be when deciding what to swap. If the number is large it will swap more often, regardless of available ram, so that when you actually run out of ram you don't have to do the swap-out then - stuff is already in swap
[14:52:52] <Mmike> no, this is a kernel issue/feature
[14:52:57] <mmars> ah ok
[14:53:27] <Mmike> having swappiness set to 60 or more is ok for a desktop with limited amounts of ram (2-4 GB these days, I'd say)
[14:53:43] <Mmike> but on servers I found out that having lower swappiness usually is beter
[14:54:06] <mmars> thanks for the input. i'll add that.
[14:54:29] <Mmike> keep in mind that setting swappiness to 0 doesn't mean 'do not swap at all if you have enough ram'
[14:54:49] <Mmike> but read the docs about that one, not really 100% sure how 0 is interpreted
[14:55:14] <Mmike> leifw: both machines think they're primary. rs.status() gives me two primaries
[14:55:41] <Mmike> leifw: http://jebo.me/pas/3
[15:15:57] <leifw> Mmike: yeah that machine looks like it's never gotten a heartbeat from the other one, that's no good
[15:19:10] <Mmike> leifw: heartbeat goes over plain tcp?
[15:20:12] <cheeser> everything does
[17:03:31] <krion> stupid question but i can't have two arbiters, right?
[17:03:55] <cheeser> sure you can
[17:04:07] <krion> hum
[17:04:31] <krion> my goal is to have a primary, a secondary with one arbiter, and another secondary with another arbiter
[17:05:38] <krion> then shut down one secondary with its arbiter, reinstall, and re-add it
[17:05:40] <krion> great
[17:47:08] <izolate> has the mongo service been renamed from mongod to mongodb?
[17:47:32] <izolate> i've seen both "service mongod" and "service mongodb" - is there a difference?
[17:49:12] <Derick> izolate: it's been renamed
[17:50:56] <izolate> cheers
[17:52:03] <izolate> is mongodb the new name?
[17:52:15] <Derick> yes
[17:52:59] <izolate> thx
[18:10:53] <Mmike> after i've done rs.initiate(), is there a way to undo that?
[18:12:29] <Mmike> i had a cluster with two failed nodes, i rs.remove()d them, then added the first one back (rs.add), and ended up with two primaries :)
[18:12:45] <Mmike> i'm assuming someone did rs.initiate on the last-added node too
[18:15:37] <DanielKarp> Question: I have a collection with (this is a slight simplification) a unique index on userid and objectid. Now, elsewhere in the application, I merge objectid 1 and 2. What I'd like to do is update all records that have objectid = 1 to 2, but if any user is linked to both objects to begin with, the update gives a duplicate key error and stops updating other records. Is there any mongodb equivalent to the MySQL "UPDATE IGNORE" that could help here?
[18:27:57] <izolate> I can start the mongodb process with "service mongodb start", but when I do "service mongodb stop" it says "unknown instance". Any idea why?
[18:30:02] <MrDHat_> I have multiple arrays of json stored in a model and want to append different values to them. Is there a way to do this directly without manually taking out every field and appending the values?
[19:13:43] <izolate> does "expect daemon" in the mongo.conf do anything?
[19:32:03] <leifw> Mmike: replica set config is stored in the local db
[19:32:16] <leifw> if you start the server without --replSet you can modify this manually
[19:32:21] <leifw> BE CAREFUL
[19:32:28] <leifw> but it's fairly straightforward
[19:35:00] <GothAlice> MrDHat_: I believe what you are looking for is $push: http://docs.mongodb.org/v2.6/reference/operator/update/push/
[19:37:49] <GothAlice> MrDHat_: Note that there are some severe restrictions on how you can query your data when you nest deeply or have multiple lists of sub-documents. I.e. you can only $elemMatch one list from a document at a time.
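
A minimal mongo shell sketch of the $push operator GothAlice links to, with hypothetical "posts"/"comments" names rather than MrDHat_'s actual schema; it appends one sub-document to an array field without rewriting the rest of the document.

    // append one comment to the "comments" array of a single document;
    // postId is a stand-in for a real document's _id
    var postId = ObjectId();
    db.posts.update(
        { _id: postId },
        { $push: { comments: { author: "someone", text: "hello", at: new Date() } } }
    );
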
[19:52:49] <scruz> hello.
[19:53:18] <scruz> if i have DBRefs, can i access the linked documents in a mapreduce?
[19:57:09] <GothAlice> scruz: I believe so, using somedbref.fetch() — I think.
[19:58:17] <GothAlice> Hmm; might be somedbref.fetch(somedbref.getCollection(), somedbref.getId())
[19:58:36] <GothAlice> Nope, .fetch() pulls those values from the instance. Cool. :)
[20:00:48] <scruz> i’ll look up .fetch() then. thanks, GothAlice
[20:04:05] <shoerain> wouldn't mongotop use the same credentials that mongodb has? db.auth('username','password') works fine, but mongotop -u username -p password does not, hum dum
[20:26:06] <shcovina> Hey there. I am looking at mongodb as a solution for a real-time analytics platform. I would push my pixel requests into mongo as they come in. However, I'm a little worried about doing that many single writes. Does mongo do better with batch writes? I was thinking I could push each request to redis and batch write to mongo every minute or so
[20:27:07] <GothAlice> shcovina: I think I've heard this before. :) http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework
[20:27:48] <shcovina> oh cool, i'll give that a read, thank you :)
[20:28:03] <GothAlice> I log large volumes of data directly to MongoDB; this includes system and application logs, a complete copy of every webapp request and response (including associated session, cookie, etc. data), &c., &c.
[20:28:31] <shcovina> hmm, do you ever write one at a time?
[20:28:41] <GothAlice> I always write one at a time; events are logged as they happen.
[20:29:02] <shcovina> ah ok!
[20:29:04] <GothAlice> See also: http://docs.mongodb.org/ecosystem/use-cases/storing-log-data/ (which has the same "lots of individual writes" concern).
[20:29:17] <shcovina> cool i'll check it out too
[20:29:57] <GothAlice> To save some time when querying, I follow the "pre-aggregated analytics" approach at the same time as logging the raw event data, a la: https://gist.github.com/amcgregor/1ca13e5a74b2ac318017#file-sample-py
[20:30:07] <GothAlice> (Click tracking data.)
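
A minimal mongo shell sketch of the pre-aggregated analytics pattern GothAlice refers to (see her gist and the storing-log-data use case above); the collection layout and names here are illustrative, not hers. Each event does one upsert that bumps per-day and per-hour counters alongside the raw log write.

    // one counter document per page per day; "page_stats" and the field
    // names are assumptions for the sketch
    var now = new Date();
    var inc = { total: 1 };
    inc["hours." + now.getHours()] = 1;            // e.g. { total: 1, "hours.14": 1 }
    db.page_stats.update(
        { page: "/landing", day: now.toISOString().slice(0, 10) },
        { $inc: inc },
        { upsert: true }                           // create the doc on first hit
    );
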
[20:32:29] <GothAlice> shcovina: Total latency added to each HTTP roundtrip when recording all associated request/response data: 6-12ms. (With a w=0 "no concern" write concern and skipping the "successfully written to the wire" check, I don't really care if the occasional log entry gets lost in the mix, or that I may lose a few minutes of logs if the database cluster suddenly dies. Keeps it fast. ;)
[20:33:20] <shcovina> oh wow
[20:33:26] <shcovina> that's pretty fast
[20:34:23] <GothAlice> shcovina: http://twitpic.com/12qr1a and http://twitpic.com/12vmdg show off the first draft of this logging setup. :)
[20:50:40] <shcovina> thanks for the links GothAlice
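
A short mongo shell sketch of the trade-off GothAlice describes: an unacknowledged (w: 0) insert for log events skips waiting on the server, so latency stays low at the cost of possibly losing the occasional entry. Collection and field names are illustrative.

    // fire-and-forget insert: no acknowledgement is requested, so a failed
    // write is never reported back to the caller
    db.events.insert(
        { at: new Date(), path: "/checkout", status: 200, duration_ms: 8 },
        { writeConcern: { w: 0 } }
    );
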