PMXBOT Log file Viewer


#mongodb logs for Monday the 4th of August, 2014

[02:02:01] <jaccob> if we just work on the localhost there is no point to bson?
[02:10:00] <jaccob> mongo data is in bson?
[02:13:02] <jaccob> yes
[02:23:45] <sflint> json
[02:29:53] <jaccob> sflint, bson
[02:31:26] <jaccob> http://docs.mongodb.org/manual/core/document/#structure MongoDB stores documents on disk in the BSON serialization format.
[02:31:54] <sflint> on disk bson
[02:32:09] <sflint> which are JSON-style data structures composed of field-and-value pairs:
[02:32:37] <jaccob> I thought maybe bson was for networking, but it is the disk format
[02:32:45] <jaccob> format of data stored on disk
[02:32:55] <sflint> yeah it is to save space on disk
[02:33:26] <sflint> i think in 2.8 or 3.0 they are going to remove 'key' names and let you compress data on disk
[02:33:34] <sflint> cause you technically only need the key once
[02:34:49] <jaccob> cool
[02:35:06] <sflint> jaccob: why do you care? just curious?
[02:35:14] <sflint> how are you planning to access the data?
[02:35:21] <sflint> or were you just asking?
[02:36:34] <jaccob> when you use .find in the shell, there is no mention of bson, but when you use mgo, the golang driver, it explicitly marshals/unmarshals bson so I was confused
[02:37:18] <sflint> desmondHume : it is horrible to use skip and limit...unless you have to
[02:37:33] <sflint> oh
[02:38:01] <sflint> i have never written anything in go
[02:38:16] <sflint> but if you serialize it to an object it shouldn't matter, right?
[02:43:39] <jaccob> yeah true
[02:44:10] <jaccob> you just pass your json to bson.M()
[07:12:39] <salty-horse> hey. I have a dump file I created with "mongodump -o -". How do I feed it into mongorestore? mongorestore seems to work on directories, but I only have a single file
[07:14:17] <joannac> mongorestore filename doesn't work?
[07:19:01] <salty-horse> joannac, nope. seems like I had to rename it to end with ".bson". this really isn't documented. had to look at the source for restore.cpp
[07:19:41] <salty-horse> should I file a bug?
[07:53:25] <joannac> yeah, file a DOCS ticket
[07:53:34] <joannac> salty-horse: ^^
[08:13:30] <salty-horse> joannac, https://jira.mongodb.org/browse/DOCS-3861
[08:20:51] <remonvv> How does one execute a save with w:1 from the shell?
[08:21:03] <remonvv> I thought it was db.col.save(doc, {w:1}) but it isn't ;)
[08:24:14] <salty-horse> remonvv, http://docs.mongodb.org/manual/reference/method/db.collection.save/
[08:24:44] <salty-horse> { writeConcern: {w: 1}} maybe?
[08:24:50] <remonvv> Ah, yeah, that doesn't seem to work either.
[08:24:54] <remonvv> Let me double check
[08:25:06] <remonvv> db.test.save({a:i}, {writeConcern:{w:1}})
[08:25:18] <remonvv> Trying to replicate some 2.4 vs 2.6 performance issues.
[08:26:05] <remonvv> "not work" = no performance difference
[08:26:17] <remonvv> Which might be expected given the recent changes actually.
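For context: in 2.6 the shell's save accepts a write concern as an option document, while 2.4 reached the same effect through getLastError. A minimal sketch of both (collection name assumed); note the shell has long issued getLastError automatically after interactive writes, which may be why no difference shows up:

    // 2.6+: per-operation write concern as an option document
    db.test.save({ a: 1 }, { writeConcern: { w: 1 } })
    // 2.4 equivalent: perform the write, then request acknowledgement
    db.test.save({ a: 1 })
    db.runCommand({ getLastError: 1, w: 1 })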
[09:08:04] <jaccob> anyone using go? anyone use mgo? I tried: query := c.Find(bson.M{}, bson.M{"_id":0}); but I got error: too many arguments in call to c.Find
[09:25:59] <d0x> Hi, my document has a field called "status". Is there a way to aggregate the highest amount of consecutive status occurrences when the collection is ordered by date?
[09:27:23] <d0x> The result should look like this: { status : "ERR", maxConsequentCount : 1337 }, { status:"SUC", maxConsequentCount : 1000000}, ...
[09:37:27] <remonvv> d0x: I cannot think of a practical way to do that within AF. Not sure if it's possible.
[09:38:43] <remonvv> Perhaps with a $cond operator you can reset a counter as a first stage, then do a second stage to $max it
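The aggregation framework at this point has no straightforward way to compare a document with its predecessor, so a client-side pass over a date-ordered cursor is the simple fallback; a minimal shell sketch, assuming a collection named coll with status and date fields:

    var max = {}, cur = null, count = 0;
    db.coll.find().sort({ date: 1 }).forEach(function (doc) {
        if (doc.status === cur) { count++; } else { cur = doc.status; count = 1; }
        if (!max[cur] || count > max[cur]) { max[cur] = count; }  // track the longest run per status
    });
    printjson(max);  // e.g. { "ERR" : 1337, "SUC" : 1000000 }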
[09:51:43] <piotrkowalczuk> hi guys, i read this art: http://blog.mongodb.org/post/87200945828/6-rules-of-thumb-for-mongodb-schema-design-part-1
[09:52:07] <piotrkowalczuk> and my question is what has better performance
[09:52:26] <piotrkowalczuk> query with $in from one to many example
[09:53:11] <piotrkowalczuk> or find({host:host._id}... from one to squillions
[09:53:18] <piotrkowalczuk> in these use cases
[09:53:40] <piotrkowalczuk> in sql, where in [array of thousands] is kind of overkill
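The two patterns from that post boil down to the queries below; both are single indexed lookups, but the $in form has to match every id in the array, which is what makes huge arrays feel like overkill. A sketch using the blog post's hypothetical collections:

    // one-to-many: the parent embeds an array of child _ids
    var product = db.products.findOne({ sku: "example" });   // { ..., parts: [ObjectId, ...] }
    db.parts.find({ _id: { $in: product.parts } });
    // one-to-squillions: each child stores its parent's _id instead
    var host = db.hosts.findOne({ name: "example.com" });
    db.logmsg.find({ host: host._id });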
[10:22:41] <d0x> remonvv2: hm, i don't have an idea how to refer to the previous doc
[10:25:40] <d0x> o i think i have an idea
[10:47:37] <d0x> Having smth. like this: { id : 1 },{ id : 2 },{ id : 3 },{ id : 4 },{ id : 5 },… Is it possible to group them in groups like 1-3, 2-4, 3-5, 4-n?
[10:48:49] <d0x> Its every time from "$id" to "$id + 2"
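Since a document can land in only one $group bucket, overlapping windows like these are easier with map-reduce, where one document may emit into several keys; a rough sketch, assuming integer ids (window starts below the minimum id would need filtering afterwards):

    var map = function () {
        // a doc with id i belongs to the windows starting at i-2, i-1 and i
        for (var s = this.id - 2; s <= this.id; s++) { emit(s, { ids: [this.id] }); }
    };
    var reduce = function (key, values) {
        var out = { ids: [] };
        values.forEach(function (v) { out.ids = out.ids.concat(v.ids); });
        return out;
    };
    db.docs.mapReduce(map, reduce, { out: { inline: 1 } });  // each result key s holds the ids in [s, s+2]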
[10:53:59] <blitzm> Hi, i have a problem with mongoDB connections. We are 6 developers that use the same configuration in a vagrantbox for a php project with mongoDB. Only one of our developers has the problem that he gets the error "Connection terminated by foreign host" instantly. On refresh everything may work fine, or he gets the error over and over again. In the mongoDB logs I only see an entry when he got connected properly. Can someone give me a hint please how to
[10:53:59] <blitzm> debug? (the other 5 devs with exactly the same drivers and the same connection speed on the same network have no problems at all.)
[12:09:51] <jaccob> when I do a find() in the shell it limits the results, how can I increase it?
[12:10:26] <sorribas> jaccob you can type 'it' for more results afterwards.
[12:10:40] <rasputnik> anyone have a procedure to clone a replica set? need to back port a prod. db to a dev. environment. doesn't have to be realtime, just consistent - so i'm wondering if i can get the data from one of the secondaries and ship that somehow
[12:11:54] <rasputnik> jaccob: DBQuery.shellBatchSize = N before the query
[12:13:09] <jaccob> sorribas, thx
[12:13:11] <jaccob> rasputnik, thx
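For reference, the shell prints 20 documents per batch by default; rasputnik's knob looks like this (collection name assumed):

    DBQuery.shellBatchSize = 300;  // print 300 docs before prompting to type "it" for more
    db.coll.find();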
[13:05:18] <tadeboro> Hi all.
[13:05:42] <tadeboro> Is it possible to aggregate data over local time interval?
[13:06:01] <Derick> not yet :-/
[13:06:14] <tadeboro> For example, to obtain a sum of daily income in LT.
[13:06:19] <tadeboro> Darn;)
[13:06:34] <tadeboro> Looks like some client-side magic to the rescue.
[13:06:42] <Derick> at our latest hackaton I hacked it in
[13:07:53] <tadeboro> Derick, nice. But unfortunately I'm stuck with 2.4 for now.
[13:08:35] <rasputnik> where does a replica set's configuration live? i want to copy databases only from one rs -> another, but don't want the new rs to think it's part of the old one.
[13:12:28] <rspijker> rasputnik: The configuration for a replica set is stored as a document in the system.replset collection in the local database.
[13:13:19] <rspijker> since replication isn’t on a DB level, but on an instance level. You can copy over a DB without having to worry about RS confusion
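To inspect it from the shell on any member:

    // the replica set configuration document rspijker described
    db.getSiblingDB("local").system.replset.findOne()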
[13:13:21] <_boot> can I run compact on a mongos?
[13:15:12] <rspijker> _boot: pretty sure you can’t...
[13:15:22] <rasputnik> rspijker: cool, so all three members essentially have a copy - i.e. if i took datafiles from one production replica member, i could in theory load all preprod replica set members with those files?
[13:15:27] <_boot> didn't think so :(
[13:15:47] <rspijker> rasputnik: yes
[13:16:03] <rspijker> be aware of copying data from an active system though…
[13:16:21] <rspijker> best to bring a secondary down, or lock it, before you start the copy and then bring it back in after you’re done
[13:16:40] <rspijker> otherwise there are no guarantees about the files being consistent or even valid
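The "lock it" option is fsyncLock, which flushes pending writes and blocks new ones for the duration of the file copy; a minimal sketch:

    db.fsyncLock()     // flush to disk and block further writes
    // ... copy the dbpath files at the filesystem level ...
    db.fsyncUnlock()   // let the member resume and catch up via the oplog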
[13:17:06] <_boot> is there any reason I'd see "estimated data per chunk : 91.53MiB" on a shard with chunksize set to 64?
[13:17:53] <rasputnik> rspijker: no that's fine, we'd shut down the detached replica member before shipping the files
[13:18:13] <rasputnik> basically we've split one huge db into smaller ones, just need to back port that to preprod
[13:19:07] <rasputnik> does mongo scan its dbdir for datafiles on startup, or would i somehow have to tell it about the new databases?
[13:19:28] <rspijker> _boot: chunksize isn’t an exact science. There are loads of rules that govern splitting. Could be your shardkey choice. What if there are only 2 values for the shardkey? It can’t ever split into more than 2
[13:20:13] <rspijker> rasputnik: fairly sure it just checks the dbpath
[13:20:23] <rspijker> specifically the .ns file will be checked
[13:20:34] <rspijker> but that’s separate per db, so should be no problem if you just copy everything
[13:21:35] <rasputnik> rspijker: cool - my other plan was to create 'stub' dbs in preprod to be sure mongo knew about them, then shut it down and overwrite the dbname.N and dbname.ns files for that DB with the prod. data
[13:22:25] <_boot> hmm, the shard key is based on a date (it's the only thing we can guarantee to be in the queries with the application)
[13:22:41] <rasputnik> just trying to work out what order to bring the replicas up that way. guess stop all 3, then start up primary, then a secondary at a time.
[13:22:42] <_boot> there should be lots of different values
[13:23:25] <rspijker> _boot: was just an example. There can be many reasons, like I said. Splitting is governed by a bunch of rules
[13:23:37] <_boot> okey
[13:23:44] <rasputnik> thanks for the second pair of eyes anyway, rspijker
[13:24:10] <rspijker> rasputnik: on preprod? Shouldn’t really matter… I would stop all 3. Then copy over the data to all 3. Once that’s done, just bring em up one at a time...
[13:24:19] <rspijker> ‘primary first’ makes little sense
[13:24:33] <rspijker> since the primary node is ‘dynamic’ anyway ;)
[13:24:33] <rasputnik> rspijker: yeah i guess first guy awake is the primary :)
[13:24:39] <rspijker> exactly
[13:41:17] <bo> LouisT, i figured out my issue, needed a connect-mongo package
[13:54:01] <Gargoyle> Anyone got any thoughts on AWS EBS vs Instance Storage ?
[13:54:26] <Number6> Gargoyle: Normal EBS (that is, not provisioned) is "bursty"
[13:54:51] <Number6> Instance storage is good, but when the machine dies, you lose all the data
[13:55:14] <Gargoyle> Number6: Yeah. I'm leaning towards instance storage for the speed, and the fact that the replicaset is redundant
[13:55:52] <Gargoyle> If a node dies, it comes back clean.
[13:56:20] <Gargoyle> Is the only downside having to do a full resync?
[13:56:53] <kali> Gargoyle: well, if there is a major outage on ec2 and the three servers are hit...
[13:57:31] <Derick> could that be mitigated by having a hidden node on EBS?
[13:57:34] <Gargoyle> kali: Isn't an outage of that magnitude likely to take EBS volumes with it?
[13:57:47] <Number6> Derick: Yes, it could. Have a node with proper disk behind it
[13:57:55] <kali> Derick: that's an option, yeah
[13:58:06] <Number6> Gargoyle: It's possible, but I'd want to have a FIVE node RS as opposed to a 3 node RS
[13:58:17] <kali> Gargoyle: not clear. EBS is such a complex beast that i would not dare make a prediction
[13:58:41] <kali> that said, us-east has not seen a bigger-than-one-zone failure for quite a while now.
[13:58:43] <Gargoyle> Number6: And have 2 of the 5 in another region?
[13:58:55] <Number6> Gargoyle: Yeah, as hidden nodes
[13:59:25] <Number6> Gargoyle: Actually, put a hidden node in two different regions, just to be safer
[14:00:03] <kali> or on a synology somewhere
[14:14:01] <klevison> for a geolocation-based application, is mongoDB the best choice? (5 million requests per day / 600GB of data, currently on SQL Server)
[14:19:28] <saml> klevison, yes. it's web scale database
[14:20:33] <rspijker> ugh
[14:20:35] <klevison> saml, I've read about mongo's storage consumption.. is it true?
[14:20:54] <saml> storage consumption?
[14:21:11] <saml> if you set up lots of indexes, it'll require more storage
[14:21:27] <saml> how many differet queries does your app run?
[14:21:36] <saml> how many of them need to be fast?
[14:21:37] <rspijker> klevison: it can consume a lot of storage. It’s gotten significantly better with 2.6 though, since powerOf2Sizes became the default.
[14:22:05] <saml> i have 2GB database. without indexes, queries go real slow
[14:22:20] <saml> can't imagine 600GB database
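The allocation change rspijker mentioned can be switched on per collection in 2.4 (2.6 makes it the default for new collections); a sketch with an assumed collection name:

    db.runCommand({ collMod: "mycoll", usePowerOf2Sizes: true })
    db.mycoll.stats()  // compare storageSize against dataSize to gauge the overhead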
[14:24:22] <klevison> nowadays, we have a sql server database..
[14:24:51] <klevison> and we plan to migrate to mongoDB or another noSQL
[14:25:10] <rspijker> whether something is the ‘best choice’ for you is really not something that can be answered based on so little information. Nor on any information, really… If you really want to know, you’ll have to test several alternatives with your specific use-case and load
[14:25:13] <klevison> we need performance and easy polygon-based queries
[14:26:38] <saml> klevison, is your data related?
[14:26:42] <saml> or all denormalized?
[14:26:59] <klevison> related
[14:27:17] <saml> you can still keep SQL backend. and use mongodb as fast-read frontend
[14:27:30] <saml> mongodb is so bad at data integrity. there's none
[14:27:43] <saml> so if things are related, just keep in mind that there's no join
[14:28:26] <klevison> fast-read frontend? :O
[14:28:35] <saml> and you need to denormalize everything. have a background job keep updating your denormalized table
[14:29:04] <saml> i mean, things get written to your SQL backend. and have trigger to write to mongodb
[14:29:20] <klevison> hum
[14:29:22] <saml> assuming mongodb queries are faster. but you should really measure
[14:29:55] <saml> what is your app klevison ?
[14:30:15] <saml> depends on your app, mongodb can be the right thing to use
[14:30:30] <saml> but, yes, mongodb has geolocation queries
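For the polygon queries klevison needs, the usual shape is a 2dsphere index plus $geoWithin; a minimal sketch with hypothetical collection and field names (GeoJSON coordinates are [longitude, latitude] and the ring must close on its first point):

    db.measurements.ensureIndex({ loc: "2dsphere" })
    db.measurements.find({
        loc: { $geoWithin: { $geometry: {
            type: "Polygon",
            coordinates: [[ [-43.3, -22.9], [-43.1, -22.9], [-43.1, -23.1],
                            [-43.3, -23.1], [-43.3, -22.9] ]]
        } } }
    })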
[14:30:48] <klevison> saml, http://crowdmobi.com.br
[14:31:45] <saml> so people review cellphone signals per location?
[14:31:57] <klevison> so so
[14:31:59] <saml> like restaurant reviews but instead of restaurants, cellphone
[14:32:20] <saml> not sure where geolocation query comes in
[14:32:37] <klevison> you can measure your signal per region (geolocation).. and see all carriers in any location...
[14:32:51] <klevison> network data too..
[14:33:02] <saml> oh it sends stuff automatically. not reviews
[14:33:39] <klevison> for signal yeah.. sends automatically.. for network speed, no, it's manual
[14:33:57] <saml> so is the data document based?
[14:34:22] <saml> one JSON document containing lon-lat, signal, speed.. etc
[14:34:36] <klevison> now we have a rest webservice... (raptor) communicating with a sql server database
[14:34:51] <klevison> yeah... json communication
[14:35:05] <saml> mongodb is good at that
[14:35:18] <saml> you can prototype things quickly
[14:35:26] <saml> try mongodb
[14:35:51] <saml> i don't have experience with geolocation queries. so i can't tell. others might have opinions
[14:36:13] <saml> all i know is that it's easy to get started
[14:36:38] <saml> one problem i encountered was that data became messy since I was migrating data from other database
[14:36:55] <klevison> hum
[14:37:03] <saml> absence of database level schema or data integrity check gave me lots of issues
[14:37:35] <saml> had to make sure the import script did things properly, that application logic would not write data with different fields.. etc
[14:37:35] <bo> does mongo suck or do i?
[14:38:00] <saml> yah especially when many apps talk to same mongo, you must make sure all apps read/write same json format
[14:38:45] <saml> so i ended up with a "thin" HTTP server in front of mongodb and have all apps go through that HTTP
[14:39:07] <saml> that HTTP server became the largest piece of software in the company
[14:39:45] <rspijker> bo: who knows… maybe both
[14:40:01] <bo> ty
[14:40:34] <saml> hence emergence of Data Engineers
[14:40:52] <saml> they just write "RESTful API" on top of "Big Data" database such as mongo
[14:40:59] <saml> NoSQL
[14:41:37] <saml> good thing is that you can implement more advanced and tricky data integration logic, not just in terms of relational database schema
[14:41:38] <klevison> saml, what is the best advantage of mongodb? what is its purpose?
[14:41:47] <saml> it was so easy to get started
[14:42:09] <saml> i mean, not being forced to think about structure of data
[14:42:13] <saml> made prototyping a breeze
[14:42:36] <saml> of course in the end, had to implement structure integrity stuff inside REST API
[14:42:47] <saml> so some might argue that it's not really a strong point
[14:43:00] <Derick> hmm, you do need to think about your data structure, but it's a lot more flexible to change later - so great for prototyping.
[14:43:15] <saml> but it's really easy to get started. and.. though i never dealt with big data, i heard mongodb doesn't even sweat with terabytes of data
[14:43:25] <remonvv2> flexibility is nice after the prototyping phase as well
[14:43:35] <Derick> certainly
[14:44:07] <remonvv2> Well, there are very few databases that do have significant issues with TB scale datasets.
[14:44:48] <saml> yah just don't be dumb like me. think about structure from the beginning
[14:45:18] <saml> and if you made a mistake or something, still it can be fixed. even gradually
[14:45:43] <saml> unlike SQL, where everything will break if schema changes and apps aren't updated
[14:47:37] <klevison> humm
[14:47:45] <klevison> thanks a lot saml
[14:51:11] <remonvv> You know what's interesting; even at the most recent Amazon AWS Summit people are still claiming that SQL is the best starting point for beginning database engineers/developers.
[14:51:40] <remonvv> I struggle to find motivation for that viewpoint other than perhaps "It's the most mature database tech"
[14:57:57] <saml> in my experience, when it comes to collaboration with other developers, database schema and programming language type system are miles better than no schema and dynamic language
[14:58:24] <saml> "better" as in subjective feeling
[14:58:25] <baegle> Does anyone have any idea how this might happen and how to remove it: {"grade", "level" : { }}
[14:58:42] <saml> that's not valid json?
[14:58:48] <saml> {"grade"} ?
[14:59:08] <saml> baegle, copy paste actual query and output?
[14:59:19] <baegle> THat's the output of a find
[14:59:41] <saml> i haven't seen such output.
[14:59:44] <baegle> Me neither
[14:59:50] <saml> {"grade": {"level":{}}} ?
[15:00:04] <baegle> that's not what it says
[15:00:14] <saml> are you using mongo shell?
[15:00:15] <baegle> I have nested objects, this is not that
[15:00:16] <baegle> yes
[15:00:29] <saml> can you copy paste find and output? just curious
[15:00:56] <baegle> db.users.find({username:{$in:["cm_0803t1","cm_0803s1"]}});
[15:01:50] <baegle> { "_id" : ObjectId("515f3354d798d85c17000002"), "customDemographics" : { }, "grade", "level" : { }, "domainId" : "515f3354d798d85c17000002", "custom" : { "grade" : "8720", "level" : "7252" }
[15:02:53] <saml> db.users.findOne({_id: ObjectId("515f3354d798d85c17000002")}) can you do this in pastebin ?
[15:02:59] <saml> result of that
[15:04:03] <baegle> Relevant line is: "grade", "level" : {},
[15:05:55] <saml> db.users.find({_id: ObjectId("515f3354d798d85c17000002")}).forEach(printjson)
[15:06:12] <baegle> http://pastebin.com/0gtVD2Em
[15:06:36] <saml> wait never mind
[15:06:49] <saml> printjson doesn't really print valid json
[15:06:59] <baegle> That pastebin is findone
[15:07:47] <saml> db.users.find({_id: ObjectId("515f3354d798d85c17000002")}).forEach(function(doc){print(JSON.stringify(doc));});
[15:07:49] <saml> can you do this?
[15:08:11] <baegle> on Aug 4 15:05:44 ReferenceError: JSON is not defined (shell):1
[15:08:40] <saml> eh.. what's your mongod version and mongo version?
[15:08:56] <baegle> shell is 2.0.6
[15:09:05] <saml> i'm assuming the key is "grade, level": {}
[15:09:19] <saml> but mongo shell displays like "grade", "level"
[15:09:33] <saml> or, maybe you inserted {["grade", "level"]: {}}
[15:09:53] <baegle> oh
[15:10:02] <saml> i don't know.. i can't even insert such
[15:10:02] <baegle> it's actually "grade\", \"level"
[15:10:07] <baegle> jeez
[15:10:11] <saml> hahah
[15:10:12] <baegle> how ridiculous
[15:12:30] <penzur> j postgresql
[15:12:33] <saml> db.users.update({}, {$unset: {'grade", "level': ''}}, {multi:true})
[15:12:41] <baegle> yeah, just did that
[15:13:06] <saml> on production, right?
[15:13:22] <baegle> hahaha
[15:13:23] <saml> i once dropped production collection. it was crazy
[15:13:24] <baegle> no
[15:46:04] <jaccob> how should I store time, if I just care about a 24hr clock like hr:min:sec and not day month or year
[15:52:32] <kali> jaccob: seconds elapsed since midnight ? as an integer ?
[15:53:25] <jaccob> kali, no just a 24hr time like 00:00:00
[15:55:10] <kali> well, a string then ?
[15:57:01] <jaccob> kali, but then I might want to select between 08:00:00 amd 09:00:00
[15:58:27] <saml> you don't want to store entire iso date?
[16:00:06] <kali> jaccob: well, an int allows you to do that, a cleanly formatted string too
[16:00:37] <jaccob> saml, no preference, but I just need to select stuff based on hour of day. Whatever works really
[16:01:14] <jaccob> kali, how would I select say between 08:34:00 and 08:34:55 ?
[16:02:24] <saml> db.docs.find({time: {$lt:'08:34:55', $gt:'08:34:00'}})
[16:03:36] <jaccob> saml, really, that would work? it would convert to int or byte and the comparison would be the same as comparing actual times?
[16:04:44] <saml> no it'll be string comparison
[16:04:59] <saml> i wonder, if you store the full datetime, whether you can query the time only
[16:05:01] <jaccob> saml, but it would still line up as if you were doing time comparison eh
[16:05:02] <saml> it'll be useful
[16:05:12] <jaccob> saml, thx
[16:05:15] <saml> for example, if you have a collection of all images in the world
[16:05:36] <saml> db.images.find({taken: at 6am})
[16:05:50] <saml> all images that are taken at 6am regardless of date, season, year..
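The string-range trick works because zero-padded "HH:MM:SS" strings sort lexicographically in the same order as the times they encode; a sketch using jaccob's field names from later in the log:

    db.stoptimes.ensureIndex({ departuretime: 1 })
    db.stoptimes.find({ departuretime: { $gte: "08:34:00", $lte: "08:34:55" } })
    // an unpadded value like "8:05:00" would sort after "07:59:59" and break the range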
[16:07:13] <synestry> Hey, does anyone have any good information/links on how to structure documents with scalability in mind?
[16:07:54] <jaccob> saml, would be cool
[16:08:16] <saml> i think you need to store date and time as separate fields at index time
[16:53:28] <jaccob> how do you cancel or break from a command if you typed some bad syntax?
[16:54:26] <jaccob> nevermind, hitting return a couple times does it
[16:54:40] <jaccob> db.stoptimes.find({"stopid":2418, "departuretime": { $gt: "07:45:00", $lt: "08:00:00 } }, {"_id":0 } )
[16:56:01] <jaccob> saml, doesn't work
[17:01:19] <jaccob> I have, here a sample of db.stoptimes.find(); { "stopid" : 2425 }, then I try: db.stoptimes.find({ "stopid": { $gt: "2400", $lt: "2500" } } ); but I don't get any results
[17:01:45] <jaccob> oh, I passed a string, instead of int
[17:05:54] <jaccob> saml, nevermind, it works, thx again
[17:06:23] <Zelest> Derick, around?
[17:08:37] <Zelest> http://pastie.org/pastes/9444840/text?key=dpf8gxh93qce9niultoiiw .. that's 1 second of my mongod.log and I'm using php-fpm, which I assumed used persistent connections.. How come it does all the connect/disconnects?
[17:10:52] <Derick> Zelest: yes
[17:10:57] <Zelest> ^
[17:11:19] <Derick> you should get one per process
[17:11:21] <Zelest> I use php-fpm with static forking.. 10 workers constantly.
[17:11:30] <Derick> so, 10 then
[17:11:32] <Zelest> yeah, but why does it keep disconnecting/connecting?
[17:11:35] <Derick> but, are you calling close?
[17:11:40] <Zelest> nope
[17:11:55] <Zelest> not that I know of anyway.. :o
[17:11:58] <Zelest> *hits grep*
[17:12:07] <Derick> are your FPM processes set up to do only one request perhaps?
[17:12:35] <Zelest> not to my knowledge at least, i'll double check that
[17:13:01] <Derick> easy to spot - just do ps aux | grep fpm and check the PIDs
[17:13:32] <Zelest> ah true, nah, they stay the same
[17:14:05] <Zelest> nope, no close()
[17:14:40] <Derick> hm
[17:14:58] <Derick> which version of the driver is this?
[17:15:15] <Zelest> 1.5.2
[17:15:24] <Derick> do rather new
[17:15:26] <Derick> so*
[17:15:42] <Derick> can't think of a recent error...
[17:16:00] <Derick> are you seeing segfaults in your log? (you shouldn't, as the PIDs don't recycle)
[17:16:12] <Zelest> nope
[17:16:49] <Derick> Zelest: can you make a mongolog thingy?
[17:16:51] <Derick> something like:
[17:17:41] <Derick> MongoLog::setModule(MongoLog::ALL); MongoLog::setLevel(MongoLog::ALL); MongoLog::setCallback(function($a, $b, $c) { echo $c, "\n"; });
[17:18:57] <Zelest> uhm..
[17:19:20] <Zelest> can I do this in a single script? or must it be one of the "heavy" ones?
[17:19:25] <Zelest> (it's a prod-environment)
[17:19:29] <Derick> ah
[17:19:44] <Derick> try it with a separate script first
[17:19:52] <Derick> how often do connections recycle?
[17:22:51] <Zelest> uhm.. no idea :o
[17:23:41] <Zelest> oh, it seems to be doing something called "mongo_connection_destroy" :o
[17:23:46] <Zelest> (check privmsg)
[17:24:07] <Derick> ah, I know what this is
[17:24:14] <Zelest> host vs ip?
[17:24:15] <Derick> you provide IPs in your connection string
[17:24:19] <Zelest> aaah
[17:24:21] <Derick> but replicasets are configured with hostnames
[17:24:26] <Derick> ismaster: the server name (vps8.adrecord.com:27017) did not match with what we thought it'd be (109.74.12.31:27017).
[17:24:31] <Derick> sorry!
[17:24:33] <Derick> sorry!
[17:24:35] <Zelest> haha
[17:24:37] <Zelest> no worries :)
[17:24:39] <Derick> :-/
[17:24:40] <Zelest> firewalled anyway :P
[17:24:44] <Zelest> but yeah, makes sense
[17:24:53] <Zelest> I'll try and fix that :D
[17:24:54] <Derick> pasted wrongly :-(
[17:25:17] <Derick> but yeah, because the names don't match, we create a new connection and drop the old one
[17:25:23] <Derick> for every single request
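In other words, the seed list should use the hostnames the replica set was configured with, so the hostname the driver reads back from ismaster matches its seed; a hypothetical connection string (only the first host appears in the log, the rest is illustrative):

    mongodb://vps8.adrecord.com:27017,vps9.adrecord.com:27017/?replicaSet=myReplSet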
[17:26:59] <Zelest> solved it.. awesome :D
[17:27:04] <Zelest> Thanks!
[17:30:19] <Derick> np :-)
[17:30:25] <Derick> did you just quick edit that on production? ;-)
[17:30:38] <Zelest> I did ;)
[17:31:31] <Zelest> I don't mind tiny adjustments that might blow up and force me to fix it real quick.. but doing heavy debugging on a machine that serves thousands of requests each sec.. sounds nasty :P
[17:35:03] <Derick> :D
[17:51:34] <Zelest> Derick, 2014-08-04T19:48:37.929+0200 [TTLMonitor] ERROR: key for ttl index can only have 1 field
[17:51:48] <Zelest> i know the error, probably me testing.. but how can I find which index?
[17:52:51] <Derick> i think that error message should include the name - JIRA ticket please
[17:53:03] <Derick> but otherwise, you need to do a getIndexes for every db
[17:53:06] <Derick> and collection
[17:53:18] <Zelest> that's what I did, until I got bored :P
[17:53:27] <Derick> write a script! :-)
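A throwaway shell script for exactly that, walking every database and printing TTL indexes whose key has more than one field:

    db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
        var dbh = db.getSiblingDB(d.name);
        dbh.getCollectionNames().forEach(function (c) {
            dbh.getCollection(c).getIndexes().forEach(function (ix) {
                // TTL indexes carry expireAfterSeconds and must have a single-field key
                if ("expireAfterSeconds" in ix && Object.keys(ix.key).length > 1)
                    print(d.name + "." + c + " : " + ix.name);
            });
        });
    });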
[17:54:22] <Zelest> hehe
[17:57:15] <Zelest> ...
[17:57:29] <Zelest> it's the "foo" collection in the "test" database..
[17:57:54] <Derick> haha
[17:57:58] <Derick> you could have guessed that one :P
[17:58:41] <Zelest> :P
[18:01:28] <Zelest> off-topic as hell, but yesterday I learned that nginx is rather dodgy about If-Modified-Since.. the date must be exact to the Last-Modified for it to reply 304... :o
[18:10:09] <klevison> can anyone recommend me a GUI to access a local mongo database?
[18:10:15] <Zelest> Derick, is it normal for the replicaset members to connect to each other every now and then even though they're already "linked" ?
[18:10:27] <Derick> yes
[18:10:32] <Derick> why, dunno
[18:10:47] <Zelest> who are you answering? haha
[18:10:54] <Derick> you
[18:11:52] <Zelest> check priv :)
[18:12:02] <Zelest> those two IP's are the other replica members
[18:12:33] <Derick> i don't know why it does that
[18:12:57] <Zelest> anything to worry about?
[18:31:03] <Zelest> Derick, added a file (jpeg) using the MongoBinData class.. how do I read it back? Tried echoing it "as it is" and __toString(), still no luck.. :o
[18:34:21] <Zelest> Derick, nvm... $f['content']->bin did the trick :)
[19:19:23] <mango_> How do I delete an empty database from MongoDB?
[19:19:40] <mango_> I tried db.dropDatabase(databasename) but it just emptied it,
[19:19:50] <mango_> the database name still exists when I type show dbs
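For the record, the shell helper db.dropDatabase() ignores its argument and drops whatever database the shell is currently using, which is why passing a name does not do what mango_ expected; the usual shape is:

    use databasename    // switch to the database to be deleted
    db.dropDatabase()   // drops the current database
    show dbs            // its name should no longer be listed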
[19:24:28] <huleo> hi
[19:25:08] <saml> hi huleboer
[19:25:12] <saml> hi huleboer
[19:25:15] <saml> hi huleo
[19:25:18] <saml> damn
[19:26:02] <huleo> no worries samlo
[19:26:05] <huleo> smal
[19:26:08] <huleo> saml
[19:26:09] <huleo> *
[19:26:19] <saml> how are you doing
[19:26:30] <huleo> good, I came here to ask a question
[19:26:48] <saml> i would like to hear your question
[19:26:51] <huleo> is there any magic pill that would make collections keep documents for only X units of time? or something similar?
[19:27:09] <saml> capped colleciton?
[19:27:25] <huleo> uh, capped collection is new term for me
[19:27:25] <saml> http://docs.mongodb.org/manual/core/capped-collections/
[19:27:59] <huleo> hmm
[19:28:06] <huleo> that's capping by number of documents
[19:28:17] <huleo> close, closer, but not close enough
[19:28:21] <huleo> I need time thingy
[19:30:06] <huleo> oh
[19:30:07] <huleo> TTL
[19:30:12] <huleo> looks warm, warmer
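The TTL index huleo landed on looks like this; a background thread removes documents once their indexed timestamp is more than expireAfterSeconds in the past (collection and field names assumed):

    db.events.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })
    // a doc whose createdAt is over an hour old becomes eligible for deletion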
[19:54:07] <babykosh> mongo gods…how do I stop mongod from cli … on mac
[19:54:29] <Derick> Ctrl-C
[19:54:36] <Derick> I guess Option-C ?
[19:54:54] <babykosh> hmmm Option-C only works if terminal is still open
[19:55:01] <babykosh> terminal closed
[19:55:10] <Derick> ah
[19:55:11] <babykosh> but mongod still in background
[19:55:19] <Derick> so, if mongod is running, the following shows (2) lines:
[19:55:22] <Derick> ps aux | grep mongod
[19:56:41] <babykosh> tranlannhi 53435 0.0 0.0 2442000 616 s000 S+ 12:53PM 0:00.00 grep mongod
[19:57:50] <babykosh> any thoughts?
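babykosh's ps output shows only the grep process itself, so mongod was seemingly no longer running; when it is, either of these stops it cleanly on a mac:

    ps aux | grep [m]ongod    # find the mongod PID (the [m] trick hides grep itself)
    kill <PID>                # SIGTERM makes mongod shut down cleanly

    # or, from a mongo shell:
    #   use admin
    #   db.shutdownServer()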
[21:51:16] <lkannan> Hey guys! I am reading a bunch of blog posts about the oplog in mongo and I am unclear if it is even recommended for apps to consume the oplog directly. I just feel like it is a bad idea to depend on the oplog. Just wanted to check with the experts.
[21:52:03] <stefandxm> i would say its very dumb yeap
[21:52:26] <stefandxm> but i am not expert
[21:52:40] <stefandxm> and at mongodb world there were a lot of talks about the oplog / demos of how to consume it
[21:53:14] <stefandxm> imo oplog should be an implementation detail that should be considered as a "as-is" and subject to change even in minors
[22:06:23] <lkannan> Yeah, these blog posts suggest you *can* build your app to depend on oplogs. I just can't reconcile with it. Are there any mongo devs who can help?
[22:09:36] <stefandxm> there were some crazy demos at mongodb world, where they mirrored a mongodb database with sql and what not
[22:09:55] <stefandxm> imo a freakshow but.. they were there on behalf of mongodb
[22:10:05] <stefandxm> so i guess they wont change the oplog in a minor at least
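For the curious, such demos generally tail local.oplog.rs with a tailable cursor; a minimal shell sketch, with the same caveat that the oplog's contents are an implementation detail:

    var oplog = db.getSiblingDB("local").oplog.rs;
    var last = oplog.find().sort({ $natural: -1 }).limit(1).next().ts;  // newest entry's timestamp
    var cur = oplog.find({ ts: { $gt: last } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) { printjson(cur.next()); }  // follows new writes as they land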
[22:12:29] <lkannan> heh. The perils of trying to understand a new NoSQL datastore at every other job.
[22:14:10] <stefandxm> luckily i am not depending on it in that way
[22:14:16] <stefandxm> but.. i was a bit flabbergasted :D
[23:00:40] <huleo> guys
[23:01:38] <stefandxm> women!