[00:05:04] <vlitzer_> so i just installed mongodb-org on my debian wheezy box following the mongodb instructions, and the installation fails saying that the server is not configured
[00:07:06] <Boomtime> can you put the output of the install into a pastebin/gist and link here? include the command you used to start it and all output that follows
[03:08:44] <Timmay> I am working with morphia in java and have a problem I am hoping someone can help me with.
[10:45:28] <Forest> Hello, can anyone help me solve this error: { [MongoError: $near requires a point, given { type: "Point", coordinates: [ "17.237571000000003", "48.18046400000001" ] }] name: 'MongoError' } ?
[10:46:09] <Derick> you need numbers, not strings with numbers for the coordinates
[10:46:57] <Forest> @Derick: aha so i need to convert the strings to real numbers? ok let me check how it's done in node
[10:48:40] <Forest> @Derick: Thank you it works now :)
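A minimal sketch of the fix Derick describes, assuming the coordinates arrive as strings (e.g. from a query string) in a Node.js app; the collection name and callback are illustrative:

    // coordinates arrive as strings, e.g. from req.query
    var lng = parseFloat("17.237571000000003");
    var lat = parseFloat("48.18046400000001");

    db.collection('places').find({
      location: {
        $near: {
          // numbers, not strings; requires a 2dsphere index on "location"
          $geometry: { type: "Point", coordinates: [lng, lat] }
        }
      }
    }).toArray(function (err, docs) { /* ... */ });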
[12:40:50] <snoobydoo> Hello guys, I'm trying to return records matching a specific date, say today. How do I do that? E.g. db.expenses.find({ date: { $gt: new Date(2014, 11, 02), $lt: new Date(2014, 11, 04)} }) This doesn't seem to work
[13:00:47] <flyingkiwi> snoobydoo, query looks good so far.
[13:01:36] <snoobydoo> Hmm idk why it doesn't return anything though. So, is that how it has to be done? I was wondering if I was doing it the right way
[13:01:38] <flyingkiwi> snoobydoo, what do you mean with "doesnt seem to work"? Any error messages?
[13:01:49] <snoobydoo> No, it doesn't return anything. No records are returned
[13:02:06] <snoobydoo> Though there are records that have been inserted today
[13:02:26] <flyingkiwi> snoobydoo, what datatype do those documents have for "date"? are they strings, maybe?
[13:06:56] <flyingkiwi> snoobydoo, but are they also the ISODate type? Because your query will not work if you have an "ISODate format" stored as a string like "2014-11-03T12:42:34.897Z"
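To illustrate the distinction flyingkiwi is drawing, in the mongo shell (collection and field names follow snoobydoo's example):

    // matches: "date" was stored as a real Date, so range comparison works
    db.expenses.insert({ desc: "coffee", date: new Date() })
    db.expenses.find({ date: { $gt: new Date(2014, 10, 2), $lt: new Date(2014, 10, 4) } })

    // never matches a Date range: the value is a string, compared as a string
    db.expenses.insert({ desc: "tea", date: "2014-11-03T12:42:34.897Z" })

(Note the months: JavaScript Date months are 0-indexed, so 10 is November.)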
[13:15:34] <diegows> has anyone faced an issue with the php-mongo driver and replica sets? especially when the first node in the list is down and php tries to connect to it every time a page is loaded
[13:17:56] <joannac> diegows: i remember a bug like that being fixed in the PHP driver
[13:19:43] <diegows> there is a theoretical issue, php is stateless, so there is always a risk of using a dead node sometimes... not sure what the fix was in that case
[13:19:52] <diegows> I'll check php-driver history
[13:20:24] <Derick> diegows: it will do it for every process, at least once
[13:20:54] <Derick> and it can take up to 60 secs for it to notice
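For context, the usual mitigation is to put every replica set member in the connection seed list so the driver can skip past a dead first node; hostnames and set name here are placeholders:

    mongodb://node1.example.com,node2.example.com,node3.example.com/mydb?replicaSet=rs0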
[13:34:46] <snoobydoo> Well, idk why it returns these records now: http://puu.sh/cBChV/ba522cd464.png I mean, I'm looking for records after the 3rd and before the 5th, i.e. tomorrow; it shouldn't return anything
[13:38:30] <snoobydoo> and I'm curious as to why 08 won't work?
[13:45:57] <snoobydoo> Anyway, i'm glad i got this fixed
[13:46:13] <snoobydoo> Thank you, flyingkiwi and Derick! :)
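What snoobydoo likely hit: JavaScript Date months are 0-indexed, so new Date(2014, 11, 2) is December 2nd, not November 2nd; and in older shells a leading zero marks a number literal as octal, so 08 (8 is not a valid octal digit) can be rejected while 02 silently parses. Roughly:

    new Date(2014, 11, 2)   // December 2nd, 2014 - month 11 is December
    new Date(2014, 10, 3)   // November 3rd, 2014
    new Date(2014, 11, 08)  // may fail: leading zero makes 08 an octal literal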
[14:23:33] <flyingkiwi> I'm still waiting for my 10k chunks to migrate. 1% is moved so far. I noticed my lock% is now at 70% on the new shard (compared to 10% on the old one)
[14:23:50] <flyingkiwi> So - is migrating causing any aggressive locking?
[14:25:30] <Guest34403> is there something in the aggregation framework that will allow you to iterate over previous pipeline results and call $match on that result set?
[14:30:31] <Guest34403> in other words, is there a more efficient way of doing this query: http://pastebin.com/qinnLvz3
[14:32:28] <Guest34403> can I avoid the forEach loops?
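Without seeing the pastebin, a generic sketch of how forEach loops are usually avoided: $match may appear more than once in a pipeline, so grouped results can be filtered again server-side (field names here are invented):

    db.events.aggregate([
      { $match: { type: "click" } },                    // initial filter
      { $group: { _id: "$userId", n: { $sum: 1 } } },   // aggregate per user
      { $match: { n: { $gte: 5 } } }                    // filter the $group output
    ])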
[14:40:44] <leifw> if you turn off the machine with the secondary and arbiter, the primary will step down to secondary because it can no longer see that it's in the majority component
[14:41:23] <leifw> Mmike, krion: 4 nodes with no arbiter require 3 votes to elect a primary, you still won't get dual primaries in normal operation
[14:41:36] <mmars> the swap on my mongodb server is constantly utilized, 75% or greater. anyone have some good starting points for where to look?
[14:41:52] <leifw> Mmike: you can get 2 primaries if for some reason nobody can see the old primary but the old primary doesn't yet realize that it should step down
[14:42:14] <leifw> the other machines may elect a new primary, the old one is supposed to step down on its own but for any number of reasons it might not
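A sketch of the arithmetic leifw is describing: electing a primary needs a strict majority of votes, so with 4 voting members the majority is 3 (hostnames illustrative):

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "a.example.com:27017" },
        { _id: 1, host: "b.example.com:27017" },
        { _id: 2, host: "c.example.com:27017" },
        { _id: 3, host: "d.example.com:27017" }
      ]
    })
    // 4 voting members -> majority is 3; a clean 2/2 split can elect
    // no primary at all, rather than two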
[14:43:20] <cxz> how can i clear a field in a json array nested inside another array?
[14:43:22] <krion> leifw: do you think i could find documentation that says exactly what you said?
[14:43:23] <Mmike> leifw: i'm actually trying to fix a local orchestration script/framework. I create one mongodb box. Then I add another one, and only after that do I have two primaries. And i'm not sure how that can happen.
[14:43:44] <Mmike> mmars: what is your vm.swapiness set to?
[14:43:44] <krion> leifw: it's not that i don't trust you, i just need "official" documentation for it
[14:52:40] <mmars> Mmike: OK, so mongodb doesn't have much control over whether it gets swapped or not?
[14:52:43] <Mmike> mmars: that number tells the linux kernel how 'opportune' it is when deciding what to swap. If the number is large it will swap more often, regardless of available ram. The idea is that when you actually run out of ram you don't have to do the swap-out right then - the stuff is already in swap
[14:52:52] <Mmike> no, this is a kernel issue/feature
[18:10:53] <Mmike> after i've done rs.initiate(), is there a way to undo that?
[18:12:29] <Mmike> i had a cluster with two failed nodes, i rs.remove()d them, then added the first one back (rs.add), and ended up with two primaries :)
[18:12:45] <Mmike> i'm assuming someone did rs.initiate on the last-added node too
[18:15:37] <DanielKarp> Question: I have a collection with (this is a slight simplification) a unique index on userid and objectid. Now, elsewhere in the application, I merge objectid 1 and 2. What I'd like to do is update all records that have objectid = 1 to 2, but if any user is linked to both objects to begin with, the update gives a duplicate key error and stops updating other records. Is there any mongodb equivalent to the MySQL "UPDATE IGNORE" that could help here?
[18:27:57] <izolate> I can start the mongodb process with "service mongodb start", but when I do "service mongodb stop" it says "unknown instance". Any idea why?
[18:30:02] <MrDHat_> I have multiple arrays of JSON documents stored in a model and want to append different values to them. Is there a way to do this directly without manually pulling out every field and appending the values?
[19:13:43] <izolate> does "expect daemon" in the mongo.conf do anything?
[19:32:03] <leifw> Mmike: replica set config is stored in the local db
[19:32:16] <leifw> if you start the server without --replSet you can modify this manually
[19:32:28] <leifw> but it's fairly straightforward
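A sketch of the manual route leifw describes; treat it as destructive, since it wipes the node's replica set config:

    // start mongod without --replSet, connect a mongo shell, then:
    use local
    db.system.replset.remove({})
    // restart with --replSet afterwards and rs.initiate()/rs.add() from scratch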
[19:35:00] <GothAlice> MrDHat_: I believe what you are looking for is $push: http://docs.mongodb.org/v2.6/reference/operator/update/push/
[19:37:49] <GothAlice> MrDHat_: Note that there are some severe restrictions on how you can query your data when you deeply nest, or broadly when you have multiple lists of sub-documents. I.e. you can only $elemMatch one list from a document at a time.
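A minimal $push example for MrDHat_'s case; the collection, filter, and array field are placeholders, and $each appends several values in one update:

    db.models.update(
      { _id: someId },
      { $push: { values: { $each: [1, 2, 3] } } }   // appends without reading the array back
    )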
[19:53:18] <scruz> if i have DBRefs, can i access the linked documents in a mapreduce?
[19:57:09] <GothAlice> scruz: I believe so, using somedbref.fetch() — I think.
[19:58:17] <GothAlice> Hmm; might be somedbref.fetch(somedbref.getCollection(), somedbref.getId())
[19:58:36] <GothAlice> Nope, .fetch() pulls those values from the instance. Cool. :)
[20:00:48] <scruz> i’ll look up .fetch() then. thanks, GothAlice
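For reference, .fetch() on a DBRef works in the mongo shell as below; whether it is also available inside server-side map/reduce functions is less certain, so test before relying on it:

    var doc = db.orders.findOne()        // suppose doc.customer holds a DBRef
    var customer = doc.customer.fetch()  // follows the ref, returns the linked document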
[20:04:05] <shoerain> wouldn't mongotop use the same credentials that mongod has? db.auth('username','password') works fine, but mongotop -u username -p password does not, hum dum
[20:26:06] <shcovina> Hey there. I am looking at mongodb as a solution for a real-time analytics platform. I would push my pixel requests into mongo as they come in. However, I'm a little worried about doing that many single writes. Does mongo do better with batch writes? I was thinking I could push each request to redis and batch write to mongo every minute or so
[20:27:07] <GothAlice> shcovina: I think I've heard this before. :) http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework
[20:27:48] <shcovina> oh cool, i'll give that a read, thank you :)
[20:28:03] <GothAlice> I log large volumes of data directly to MongoDB; this includes system and application logs, a complete copy of every webapp request and response (including associated session, cookie, etc. data), &c., &c.
[20:28:31] <shcovina> hmm, do you ever write one at a time?
[20:28:41] <GothAlice> I always write one at a time; events are logged as they happen.
[20:29:04] <GothAlice> See also: http://docs.mongodb.org/ecosystem/use-cases/storing-log-data/ (which has the same "lots of individual writes" concern).
[20:29:57] <GothAlice> To save some time when querying, I follow the "pre-aggregated analytics" approach at the same time as logging the raw event data, a la: https://gist.github.com/amcgregor/1ca13e5a74b2ac318017#file-sample-py
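The pre-aggregated pattern boils down to upserting counters into time-bucketed documents as each raw event is logged; a rough sketch with invented names:

    // one document per site per hour; counters grow atomically with $inc
    db.stats_hourly.update(
      { site: "example.com", hour: ISODate("2014-11-04T20:00:00Z") },
      { $inc: { total: 1, "status.200": 1 } },
      { upsert: true }
    )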
[20:32:29] <GothAlice> shcovina: Total latency added to each HTTP roundtrip when recording all associated request/response data: 6-12ms. (With a w=0 "unacknowledged" write concern and skipping the "successfully written to the wire" check; I don't really care if the occasional log entry gets lost in the mix, or that I may lose a few minutes of logs if the database cluster suddenly dies. Keeps it fast. ;)
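And the write-concern trade-off she describes, as it looks in the 2.6 shell; the collection name is illustrative:

    // w: 0 returns without waiting for acknowledgement - fast,
    // at the cost of possibly losing the occasional entry
    db.events.insert(
      { ts: new Date(), level: "info", msg: "request completed" },
      { writeConcern: { w: 0 } }
    )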