PMXBOT Log file Viewer

#mongodb logs for Sunday the 2nd of September, 2012

[01:42:25] <LuckySMack> is there any way i could bundle a mongo instance with a desktop app/executable so it has a local database? for either python or node.js
[04:31:27] <kstzz> We just had the following happen recently: http://pastebin.com/rVuW4GPc
[04:31:43] <kstzz> Sun Sep 2 10:32:36 [journal] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x7f7748c98000 errno:6 No such device or address
[04:32:10] <kstzz> it shuts mongo down after that.
[06:44:55] <LuckySMack|sgs3> If I was making a python or node.js application, is there a way I could bundle a mongo instance with it? This way it's integrated with the app and the user doesn't have to install mongo themselves?
[06:45:49] <LuckySMack|sgs3> I would be making a desktop app as an executable
[06:54:42] <kstzz> LuckySMack|sgs3: It's possible, but it's going to mean your app is licensed the same way as mongo itself
[06:55:27] <kstzz> LuckySMack|sgs3: You can't use a database that's more suited toward embedding? Like sqlite?
[06:57:06] <kstzz> LuckySMack|sgs3: or Xapian
[07:01:04] <mids> LuckySMack|sgs3: http://stackoverflow.com/questions/6115637/can-mongodb-be-used-as-an-embedded-database
[07:05:20] <kstzz> mids: even if he did that though, he would be paddling upstream in a strong current. There are so many other issues that can come up that it doesn't really make sense to embed mongo at all.
[07:06:11] <mids> kstzz: yup
[07:08:20] <mids> plus creating a python/node.js desktop app turned into a single executable can be tricky as well
[07:09:11] <kstzz> I am a little too lazy right now to look at the source code. We have a network mounted filesystem that mongo is currently using for the data store and journaling. If I change where journals go to the local filesystem, can I avoid: http://pastebin.com/rVuW4GPc when network issues arise?
[07:09:28] <kstzz> and still have mongo in a functional state
[07:10:22] <kstzz> mids: with python it's easy. node.js maybe not, it's never occurred to me to even use it like that
[07:18:05] <LuckySMack|sgs3> mids: the plan was for the installer to be proprietary if possible. It's only available to those who have paid for app access. I'll have to look into sqlite though
[07:18:55] <kstzz> LuckySMack|sgs3: what type of data are you storing in the database? sqlite is notoriously easy to embed, and commercially viable. Xapian is also very easy to embed, and depending on what you're doing it can also be a good fit.
[07:19:58] <LuckySMack|sgs3> Is there a way I could use sqlite to sync to a remote db? Think like an inventory management db. When you create a new item locally it syncs that data up to the remote db
[07:20:27] <LuckySMack|sgs3> Inventory management and in store POS
[07:20:51] <kstzz> LuckySMack|sgs3: that's not a feature of the database. CouchDB might be your best friend here?
[07:21:58] <LuckySMack|sgs3> So it won't be like amazon with millions of items. It will be for individual stores. And that store will be able to sync its data online.
[07:23:33] <LuckySMack|sgs3> Yea I would likely be handling the syncing. I will have to look into couch more too though.
[07:27:24] <kstzz> LuckySMack|sgs3: yeah, either write your own sync routines and use sqlite, or read up on how people are using couchdb for offline sync.
[07:29:32] <LuckySMack|sgs3> How different would I have to make the db structure if I used sqlite? And synced up to mongo? I haven't used it before.
[07:30:27] <kstzz> LuckySMack|sgs3: it's up to you, sqlite is a traditional RDBMS so you could just write some web service as an API and convert it to whatever structure you are using in mongo.
[07:34:41] <LuckySMack|sgs3> Hrmm. I'll have to look into them. Thanks.
[10:01:20] <kristaps> Hello
[10:07:02] <kristaps> I have a collection Items and each item can have one or more images. Which is the better approach - the Item has a list field with image ids, or each image has the item's ID?
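
Both layouts kristaps describes are workable; a minimal shell sketch of the two (collection and field names are invented for illustration). Embedding the ids keeps item reads simple, while a back-reference on each image scales better if an item can accumulate many images:

    // Option 1: the item carries an array of image ids
    db.items.insert({ _id: 1, name: "chair", imageIds: [101, 102] })
    db.images.find({ _id: { $in: db.items.findOne({ _id: 1 }).imageIds } })

    // Option 2: each image points back at its item
    db.images.insert({ _id: 101, itemId: 1, file: "chair-front.jpg" })
    db.images.find({ itemId: 1 })   // wants an index on itemId
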
[10:55:13] <Neptu> hey, if you do a drop in mongo why is it not releasing the space?
[10:56:55] <Derick> it will reuse the space though
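
(Context: with the mmap storage files of that era, a drop leaves the allocated files in place and the space is only reused by later writes, as Derick says. If the disk space itself is needed back, repairDatabase rewrites the files, at the cost of blocking the database and needing roughly as much free disk again; the db name below is illustrative.)

    use mydb
    db.repairDatabase()   // rewrites the data files, returning freed space to the OS
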
[10:58:53] <Neptu> Derick: ok, understood
[10:59:26] <Neptu> so far mongo is going quite well
[10:59:27] <Neptu> :D
[11:00:12] <Neptu> how do i see all the indexes of a collection?
[11:00:26] <Neptu> found it
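
(The call Neptu presumably found, for reference:)

    db.mycollection.getIndexes()   // all indexes on one collection
    db.system.indexes.find()       // 2.x-era: every index in the current database
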
[11:02:55] <Gargoyle> Derick: Can I ask a question? ;)
[11:03:43] <Derick> http://www.explosm.net/comics/2885/
[11:03:49] <Gargoyle> :)
[11:04:42] <Derick> :-)
[11:07:52] <kali> Derick: nice
[11:08:28] <Neptu> what was the name of that framework to get queues in mongo?
[11:09:30] <kali> Neptu: mongo-resque ?
[11:10:19] <Neptu> kali: u recommend this?
[11:12:18] <kali> it works
[11:13:03] <Neptu> ok
[11:13:08] <Neptu> fair enough
[11:13:31] <Neptu> let's see if i have time to implement it
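
mongo-resque is one packaged option; the pattern such libraries sit on is an atomic claim via findAndModify, simple enough to sketch in the shell (collection and field names here are invented):

    // producer
    db.jobs.insert({ payload: "send-email", state: "pending", created: new Date() })

    // consumer: atomically take the oldest pending job
    var job = db.jobs.findAndModify({
        query:  { state: "pending" },
        sort:   { created: 1 },
        update: { $set: { state: "working", started: new Date() } }
    })
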
[11:13:59] <Gargoyle> Derick: How's beta2 looking?
[11:19:27] <Derick> Gargoyle: two or three issues to solve first
[11:19:55] <Derick> i'm enjoying weekend today though
[11:21:02] <Gargoyle> Good. I'm enjoying half of it! Golf this morning, work this afternoon! What you up to?
[11:21:47] <Derick> watching paralympics and a walk in the park!
[11:25:39] <Gargoyle> Just getting a quick VM upgrade so I can hopefully catch the football on the internet somewhere. :)
[11:28:27] <UnSleep> what about when a server has two hard drives?
[11:28:47] <Gargoyle> UnSleep: What about it?
[11:28:51] <UnSleep> i need about 2tb of data
[11:29:13] <Gargoyle> and?
[11:29:36] <UnSleep> but i don't know if i can configure two hds in the same server... i saw how to set up a replica set, or in my case sharding, but... the config looks for a single drive
[11:31:18] <Gargoyle> UnSleep: I doubt you can split your mongo data across two drives, but you have many options from an OS point of view normally.
[11:31:50] <UnSleep> it looks like i need to config gridfs
[11:31:59] <UnSleep> (if it's possible)
[11:32:44] <UnSleep> it looks easy to deploy into vps servers with one HD each
[11:32:55] <kali> UnSleep: gridfs is a protocol for storing big blobs of data in mongodb
[11:33:04] <kali> UnSleep: i'm not sure how it helps with your use case
[11:33:17] <UnSleep> but... does there exist a frontend control panel for those things?
[11:34:52] <UnSleep> oh sorry i confused gridfs with hadoop hdfs
[11:35:57] <kali> hadoop hdfs is probably useless as a filesystem unless you plan to use hadoop mapreduce or hadoop hbase
[11:38:36] <UnSleep> yep
[11:45:36] <UnSleep> is there a master in the replica set? or are the same parameters configured on each server? (which would make it possible to clone a vps image directly)
[11:47:42] <kali> by default all hosts are the same, and they elect a primary, but you can bias the election with configuration
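
The configuration bias kali refers to is the per-member priority setting; a sketch from the 2.x shell:

    var cfg = rs.conf()
    cfg.members[0].priority = 2   // preferred primary
    cfg.members[1].priority = 1
    cfg.members[2].priority = 0   // never eligible to become primary
    rs.reconfig(cfg)
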
[12:10:09] <UnSleep> many thanks!
[13:41:05] <durre> I have a document which has a list of ObjectId's… I want to retrieve all documents which have the id X in them. not sure how to do this with salat.
[13:46:30] <IAD> durre: you can use find with the "$in" operator: http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24in
[13:46:49] <kali> durre: how is salat the issue ? salat does not deal with sending queries, but only translating results
[14:18:46] <durre> kali: I know how to query with the mongo shell, but not with salat. this is what I'm trying: "dao.find(MongoDBObject("taggedPeople" -> MongoDBObject("$in" -> Array(childId))))"
[14:23:59] <kali> durre: ok, and what's wrong ? no match ?
[14:24:51] <durre> kali: yep, no match. I'm starting to think I'm saving the wrong id's in the first place
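
For reference: in the mongo shell a plain equality already matches documents whose array contains the value, so both of these find the documents tagged with the child (field name from durre's query; the collection name is assumed):

    db.docs.find({ taggedPeople: childId })              // array contains childId
    db.docs.find({ taggedPeople: { $in: [childId] } })   // equivalent for a single id
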
[14:25:54] <cmex> hi all
[14:26:09] <cmex> i have a question
[14:26:22] <cmex> do inserts lock the collection, the document, or the db?
[14:26:30] <cmex> sorry for my english
[14:26:39] <kali> durre: you can try mongosniff or the mongodb optimizer to check if the query is correct
[14:27:17] <kali> cmex: db with 2.2, whole server with 2.0. but inserts are *very* fast, so it's not usually a problem
[14:28:09] <cmex> i have inserts all the time, i think it can be a problem to get reports from these collections :((
[14:29:12] <cmex> kali: is there any architecture for this type of problem: i have constant inserts and need to get reports from those collections ... now it's about 9m documents
[14:29:17] <cmex> more or less
[14:29:17] <kali> cmex: how many inserts a second do you expect ?
[14:29:43] <cmex> i think 20-30
[14:29:47] <cmex> + -
[14:29:50] <kali> this is nothing.
[14:30:10] <kali> reporting will be an issue though
[14:30:31] <kali> it's not really what mongodb is best at
[14:30:34] <cmex> inserts are not the problem. the problem is the reports from these collections
[14:30:45] <kali> you plan on running map reduce ? aggregation framework ?
[14:31:15] <cmex> we're running map reduce functions
[14:31:30] <cmex> aggregation is in 2.2 and it's still unstable?
[14:31:39] <kali> 2.2 is stable
[14:31:49] <cmex> it was released?
[14:31:57] <kali> yes.
[14:32:01] <kali> where have you been ? :)
[14:32:07] <kali> look at the topic
[14:32:14] <cmex> aggregation framework is faster than mapreduce?
[14:32:19] <wereHamster> or the mongodb homepage
[14:32:26] <durre> kali: I had forgotten to remove old garbage data which confused my program. thx for the tip about mongosniff… that will come in handy
[14:32:32] <cmex> ok
[14:32:56] <kali> cmex: i certainly hope so
[14:32:59] <cmex> ok we will update ours tomorrow
[14:33:00] <cmex> :))
[14:33:06] <kali> cmex: can't see how it could be slower :)
[14:33:22] <cmex> now i need to wait about 1.5 - 2 minutes for each report :((
[14:34:19] <kali> cmex: try the AF, but i don't think you'll get fast reports on 9m docs with it either
[14:34:30] <kali> cmex: fast as in "virtually instant"
[14:34:36] <cmex> it will be much bigger
[14:35:00] <kali> cmex: at least you'll be able to run several reports concurrently
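
A sketch of the kind of 2.2 aggregation kali is suggesting, counting impressions per user over a date range (the collection and field names come from cmex's later description, so treat them as assumptions):

    db.statistics.aggregate(
        { $match: { insertDate: { $gte: ISODate("2012-08-01"),
                                  $lt:  ISODate("2012-09-01") } } },
        { $group: { _id: "$userId", impressions: { $sum: 1 } } }
    )
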
[14:35:35] <cmex> i can't tell if it's my problem or if it's normal for counts from mongo by dates to be this slow
[14:35:43] <cmex> it really takes about 2 minutes
[14:36:07] <kali> cmex: mongo is not terrific at counting stuff...
[14:36:25] <kali> cmex: there are some improvements in the pipe for 2.4
[14:36:31] <cmex> so what can u suggest ?
[14:36:43] <cmex> replicate it into sql?
[14:36:48] <cmex> or something?
[14:36:49] <kali> run the report asynchronously ?
[14:36:57] <cmex> that's what i'm doing
[14:37:00] <kali> so they're ready when the users need them ?
[14:37:34] <cmex> ahh, u were talking about aggregation before
[14:37:52] <cmex> kali: we're talking about a really big amount of data
[14:38:07] <cmex> we've been testing it for less than a month and it's like 9m documents
[14:38:28] <cmex> i don't think we can aggregate everything they want in reports :((
[14:41:13] <kali> well, the AF will help, but it's not a silver bullet. you can't expect any tech to scan an arbitrarily high number of documents in an arbitrarily small time
[14:44:59] <cmex> so you're saying that only caching things will work?
[14:45:19] <cmex> :((
[14:45:43] <Gargoyle> cmex: Caching or pre-calculating.
[14:46:18] <Gargoyle> There is a finite amount of time it is going to take to process 9m docs. What type of data are you storing and reporting?
[14:46:31] <cmex> Gargoyle: that's the problem, i can't see a way to precalculate all user requests
[14:47:02] <cmex> let's say it's page impressions from date to date
[14:47:27] <cmex> i have a document userId : ..... insertDate:
[14:47:38] <cmex> and another couple of properties
[14:48:14] <cmex> any time the user chooses another from/to date range and gets a count of impressions
[14:48:37] <Gargoyle> Where are you storing the impressions?
[14:49:09] <cmex> i have a collection called statistics
[14:49:28] <Gargoyle> cmex: Are you recording every hit separately?
[14:49:33] <cmex> and it has objects with userId, insertDate
[14:49:44] <cmex> i'm making a bulk insert
[14:50:02] <cmex> yes, there's no precalculation or aggregation on insert
[14:50:52] <cmex> can sharding help with this type of problem?
[14:51:53] <Gargoyle> Possibly, but you also might need to double check how you are storing data. I used to work in the telematics industry, and we were pulling reports from 150m+ rows.
[14:53:19] <Gargoyle> Sooner or later, you are going to need to limit the range for a "live" report, and fall back to some sort of archive system. eg. You can fetch live reports for the last 6 months, but if you need to go back further, it's submitted to a job queue and produced at a later time.
[14:53:52] <Gargoyle> Also, have you checked you have the best indexes?
[14:54:27] <Gargoyle> So if you are always searching user, from and to - are they all in a single index?
[14:59:51] <cmex> yes we have userId and insertDate in a single index
[15:00:20] <cmex> but it's still 2 minutes on a collection of 11m records
[15:00:49] <Gargoyle> Why don't you pastebin an example doc, and your query (with explain)?
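
The sort of thing Gargoyle is asking for: the compound index plus the range query with its explain output (2.x shell; names assumed from the discussion):

    db.statistics.ensureIndex({ userId: 1, insertDate: 1 })
    db.statistics.find({
        userId: 1,
        insertDate: { $gte: ISODate("2012-08-01"), $lt: ISODate("2012-09-01") }
    }).explain()
    // a healthy plan shows cursor: "BtreeCursor userId_1_insertDate_1"
    // and nscanned close to n
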
[15:01:00] <cmex> there are no complex or nested objects in it
[15:01:53] <Gargoyle> cmex: Then you could experiment with storing that info in a different DB that lends itself to this type of reporting ?
[15:02:22] <cmex> that's the data of one document
[15:02:27] <cmex> 503e22d6a695e05a60c26d63 10 1 1 747 1 0 1 72 3717107 779745257 2012-07-25T21:00:00Z
[15:02:33] <cmex> sorry
[15:04:33] <kali> cmex: I assume the problem only exists if the user asks for a wide timespan ? smaller timespans should be fine, right ?
[15:04:41] <cmex> someProp:503e22d6a695e05a60c26d63 someProp:10 someProp:1 userId:1 someProp:747 someProp:1 someProp:0 someProp:1 someProp:72 someProp:3717107 someProp:779745257 someProp:2012 someProp:07 dateInserted:25T21:00:00Z
[15:04:53] <Gargoyle> cmex: Pastebin!
[15:05:15] <cmex> what is pastebin? sorry for noob question :))
[15:05:23] <Gargoyle> google it!
[15:05:46] <Gargoyle> cmex: http://pastie.org
[15:07:29] <cmex> http://pastie.org/4651254
[15:08:55] <cmex> Gargoyle, kali: do you see?
[15:08:58] <Gargoyle> cmex: So the *ONLY* data relevant to your query is userid and date?
[15:09:16] <cmex> in the simple case, yes
[15:09:36] <cmex> i have another query that i need to mapreduce by another property
[15:09:57] <cmex> but once again, at the end it's about some property and insertedDate
[15:09:59] <Gargoyle> In a typical case, what would give the lower cardinality, userId or date range?
[15:11:34] <kali> cmex: you haven't answered my question about timespans
[15:12:31] <cmex> inserted date is unique
[15:13:21] <cmex> it's aggregated by userId between selected dates
[15:13:47] <kali> cmex: I assume the problem only exists if the user asks for a wide timespan ? smaller timespans should be fine, right ?
[15:14:04] <cmex> kali: yes, this is the problem, by default it's a full month
[15:14:42] <Gargoyle> cmex, how many docs would you store for 1 month (for all users)?
[15:14:46] <kali> ok, so why don't you pre-aggregate data by month ?
[15:14:56] <cmex> for now it's like 9m
[15:15:47] <cmex> because we need to drill down at the daily level
[15:16:14] <cmex> the only thing i see is to aggregate on a daily basis
[15:16:41] <kali> use your actual collection for drill downs, a new aggregated one for month queries
[15:16:55] <cmex> kali: do we have any jobs in mongodb?
[15:17:23] <cmex> i mean something like in mssql
[15:17:37] <cmex> any scheduled jobs?
[15:17:47] <kali> nope
[15:17:51] <Gargoyle> cmex: You'd do that in your OS
[15:17:52] <kali> but cron is your friend
[15:18:03] <cmex> mine is windows scheduler
[15:18:05] <cmex> :)))))))))))))))
[15:18:20] <Gargoyle> Ahh! There's your problem! ;)
[15:18:25] <cmex> ok :))
[15:18:38] <cmex> ok, so i don't see any other way, just to aggregate on a daily basis
[15:19:04] <cmex> so it will reduce the amount by a couple of hundred times
[15:19:09] <Gargoyle> cmex: If daily is the lowest resolution you need, then it would be a good start.
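
A minimal sketch of that daily pre-aggregation: upsert one counter document per user per day at write time, then report against the much smaller collection (the collection name and day encoding are assumptions):

    // in the insert path: bump the day's counter
    db.daily_stats.update(
        { userId: 1, day: "2012-09-02" },
        { $inc: { impressions: 1 } },
        true    // upsert: creates the doc the first time
    )

    // at report time: sum ~30 small docs instead of scanning millions
    db.daily_stats.find({ userId: 1,
                          day: { $gte: "2012-09-01", $lte: "2012-09-30" } })
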
[15:19:18] <cmex> thanks guys, thanks a lot
[15:19:28] <cmex> i'm going to jump from the 5th floor
[15:19:29] <cmex> :)))
[15:19:52] <kali> always look on the bright side of life
[15:19:55] <cmex> :))
[15:20:06] <Gargoyle> cmex: You could have 150M rows!
[15:20:08] <Gargoyle> :P
[15:20:19] <cmex> i will ...
[15:20:27] <cmex> i don't wanna think about it even
[15:20:28] <cmex> :))
[15:20:32] <Gargoyle> then mongo is probably the wrong tool!
[15:20:54] <cmex> Gargoyle: so what u did was precalculation?
[15:21:01] <Gargoyle> No.
[15:21:06] <cmex> we added mongo for this :)
[15:21:13] <Gargoyle> We used MSSQL!
[15:21:43] <cmex> Gargoyle: and mssql was ok with the amount of inserts?
[15:22:03] <Gargoyle> Why wouldn't it be?
[15:22:14] <kali> 20 inserts a second ? :)
[15:22:20] <Gargoyle> How fast are your inserts?
[15:22:21] <cmex> this was our problem, the insert speed of mssql
[15:23:08] <cmex> ok guys, i need to go, talk to you tomorrow
[15:23:13] <Gargoyle> cmex: mongo is not faster!
[15:23:15] <cmex> thanks a lot !!!
[15:23:33] <cmex> thanks and bye all
[15:27:19] <durre> I have a case class with the field "parents: Option[List[ObjectId]]" .. when I retrieve the class and try to access the parents, it's a com.mongodb.BasicDBList instead
[15:27:38] <zanefactory> qq: if I get this error, what's the best way to remediate:
[15:27:40] <zanefactory> replSet error rollback : can't rollback drop database full resync will be required
[15:28:01] <zanefactory> do i mongodump on one of my other slaves, mongorestore on the broken one? how does it know where to restart replication from?
[15:29:36] <kali> zanefactory: mongodb can do that for you. stop the broken secondary, remove the content of its dbpath, and start it again
[15:29:44] <kali> zanefactory: just make sure the primary is fine
[15:29:54] <zanefactory> ok
[15:30:06] <kali> zanefactory: and it's better to get rid of the stuff by moving it away than deleting it (if you have enough disk space)
[15:30:25] <Gargoyle> just in case!
[15:30:27] <Gargoyle> ;)
[15:30:30] <kali> yeah
[15:30:42] <kali> these instructions are given AS IS and all that
[15:30:47] <zanefactory> ha yeah, just move all the .[n] and .ns files
[15:31:15] <Gargoyle> zanefactory: Last time I did it, I moved the whole parent directory
[15:31:20] <Bilge> If you store compound data in a field, such as image dimensions (e.g. '123x456'), is it possible to query just the width or height in Mongo?
[15:31:25] <zanefactory> the whole mongodb dir?
[15:31:28] <zanefactory> and let it restore completely?
[15:31:33] <zanefactory> not just the db in question
[15:31:38] <Gargoyle> Bilge: If you store it so.
[15:31:54] <Bilge> what
[15:31:55] <kali> zanefactory: yes, the whole thing, replication is server wide not database wide
[15:32:01] <zanefactory> gotcha
[15:32:22] <kali> zanefactory: mongo will not create the empty dir, so if you move it away, you need to mkdir it
[15:32:35] <kali> zanefactory: and chown it
[15:32:37] <Gargoyle> Bilge: Assuming you have {image: { width: 123, height: 123, … etc}}
[15:32:49] <zanefactory> yup
[15:33:00] <Gargoyle> Bilge: Then you can query with {"image.width": 123}
[15:33:06] <Bilge> No
[15:33:12] <Bilge> It is a string field containing the string '123x456'
[15:33:26] <Gargoyle> Bilge: Then you are down to regex i think
[15:34:34] <Gargoyle> Bilge: /^123x/ would match width and /x456$/ would match height. As a very crude example
[15:35:24] <Bilge> What if you want to find a range of widths
[15:35:29] <Bilge> e.g. 100-200px
[15:35:41] <Gargoyle> Bilge: then store them separately!
[15:35:58] <Bilge> Mongo doesn't seem to have a very powerful query API
[15:36:24] <Gargoyle> Bilge: And is there another DB that can do that for you?
[15:36:37] <kali> Bilge: you can use $where and write javascript... don't complain if it's slow
[15:37:24] <Gargoyle> Bilge: Also, if you are querying nested document type data, and you are not storing it as a nested document, then you only have your own design decisions to blame - not mongo.
[15:43:50] <Bilge> I'm not doing anything at this stage other than investigating
[15:44:02] <Bilge> Not really sure where you get off judging everyone
[15:44:39] <Gargoyle> Bilge: Not judging - just telling how it is!
[15:45:07] <kali> Bilge: the thing is, you can't just push data in any form into any database and expect it to solve your problem
[15:45:12] <Bilge> Even if they were separate fields, how would you query a range?
[15:45:31] <Gargoyle> Bilge: Using greater than and less than - like any other DB
[15:45:33] <kali> in your width x height case, any db would be as dumb as mongodb
[15:46:05] <Bilge> MySQL has a plethora of functions for dealing with processed data
[15:46:21] <kali> Bilge: http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%3C%2C%3C%3D%2C%3E%2C%3E%3D
[15:46:24] <Bilge> String manipulation functions in particular
[15:46:42] <kali> Bilge: yes, but that's equivalent to $where in mongodb. it will be un-indexable and slow
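
To make the contrast concrete: packed into one string, the width is only reachable per document via $where-style JavaScript, which cannot use an index; stored as separate fields it becomes an ordinary indexed range query (the field names here are illustrative):

    // '123x456' in one string field: JavaScript per document, unindexable and slow
    db.images.find({ $where: function () {
        var w = parseInt(this.size.split('x')[0]);
        return w >= 100 && w <= 200;
    } })

    // width and height as their own fields: indexable range query
    db.images.ensureIndex({ width: 1 })
    db.images.find({ width: { $gte: 100, $lte: 200 } })
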
[15:47:17] <Gargoyle> Bilge: Also, it begs the question: why do you need to do that kind of manipulation in the database?
[15:47:52] <Bilge> Real world scenarios
[15:48:13] <kali> aw, sorry
[15:48:24] <kali> world
[15:48:26] <kali> zzzzzip
[15:49:19] <Bilge> Could you be any more mad?
[15:49:39] <Gargoyle> Bilge: Well, if you throw up an example or two, you'll probably get an equiv. But if you are looking for a more extensive API, then it's quite possible mongo is not the solution for your app.
[15:50:22] <Bilge> Mongo is already the solution for my app, I'm just interested in its capabilities
[15:51:10] <Gargoyle> Well, your example question hinted at an issue with data design, not mongo's capabilities.
[15:52:34] <Bilge> Assumptions
[15:55:31] <Gargoyle> eh?
[15:56:35] <Bilge> That's what you assume
[15:57:12] <Gargoyle> Nope. It's a fact from the example you gave.
[15:57:19] <Bilge> Problem with IRC geeks is that even though you have the answers people need you also assume everyone is doing everything wrong and you only enjoy being here because you get off on preaching your moral code to everyone in the hopes of changing the world one idiot at a time
[15:57:49] <Bilge> You assume I've designed my database wrong so you'll be the first to jump all over it and tell me how wrong I am
[15:57:54] <Bilge> Because that's what you enjoy doing
[15:58:16] <Bilge> You assume I'm doing everything wrong when in fact I'm doing it right
[15:58:28] <Bilge> I haven't designed anything at this point, I'm merely here to figure out how I should do it
[15:58:32] <kali> well, obviously, you don't need our help
[15:58:47] <kali> wonder why you even bothered asking
[15:58:50] <Gargoyle> No. You asked how can you query a range on a bit of data being stored non optimally. I pointed out the non optimal storage of your data!
[15:59:22] <Gargoyle> Bilge: But feel free to get lost and seek the same answer from some other source of information!
[15:59:42] <Bilge> Feel free to get mad
[16:00:27] <Gargoyle> Not getting mad
[16:02:34] <Gargoyle> Anyone got any tips on how to chase down apache segfaults on a default ubuntu 12.04 install?
[16:20:31] <Gargoyle> Note to self: When configuring NTP, it helps to open the network port on the firewall. (Server clocks have skewed by over an hour!)
[16:21:42] <Gargoyle> Could this mess up my replSet if they suddenly jump back an hour?
[16:40:02] <kali> Gargoyle: /me remembers the night of 30th june and trembles
[16:40:35] <Gargoyle> kali: What happened on that night?
[16:40:50] <kali> Gargoyle: the leap second broke all my jvms
[16:41:00] <kali> but mongo was ok :)
[16:41:00] <Gargoyle> :(
[16:53:14] <Neptu> hey, using the pymongo driver, I wonder: if I have a key:value, is it better to store the value as an array or as a tuple??
[16:56:36] <Neptu> stupid question anyway
[16:59:49] <garrettwilkin> total mongo n00b here
[17:00:09] <garrettwilkin> by default mongo runs on port 27021?
[17:00:59] <garrettwilkin> oh its 27017
[17:01:05] <garrettwilkin> so when i install this on my server
[17:01:14] <garrettwilkin> is there anything i need to do to allow remote connections?
[17:01:27] <garrettwilkin> or should that work out of the box?
[17:10:01] <wereHamster> garrettwilkin: google 'mongodb ports'
[17:10:33] <garrettwilkin> yea I'm reading a bit up on it
[17:10:46] <garrettwilkin> I'm not able to telnet to port 27017 on my host
[17:10:52] <garrettwilkin> so I'm guessing that's a problem
[17:31:52] <garrettwilkin> should i be able to telnet to my mongo port?
[17:31:59] <garrettwilkin> maybe that's not a valid test
[17:32:11] <garrettwilkin> i did find ufw for managing firewall settings on Ubuntu 10.04
[17:36:02] <garrettwilkin> where do i find my mongo config file?
[17:36:15] <garrettwilkin> I'm wondering if i need to change this bind setting
[17:37:16] <garrettwilkin> looks like that will be the issue
[17:37:28] <garrettwilkin> since my config at /etc/mongodb.conf has this setting
[17:37:34] <garrettwilkin> bind_ip = 127.0.0.1
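
For anyone reading along later: the change garrettwilkin is circling is exactly that line, plus the firewall. A sketch, not a recommendation, since a 2.x mongod with no auth configured is wide open once it listens beyond localhost (service name assumed from the Ubuntu package):

    # /etc/mongodb.conf
    bind_ip = 0.0.0.0      # or better, a specific private interface address

    $ sudo service mongodb restart
    $ sudo ufw allow 27017
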
[17:59:55] <Vile> Hi All!
[18:00:26] <Vile> I'm back again with strange questions
[18:02:13] <Vile> Does anybody here have experience dealing with timeseries? I.e. the kind of data where each record has a timestamp, and those timestamps are not equidistant
[18:03:16] <Vile> i'm trying to calculate some aggregates on those using map/reduce (average, for example)
[18:04:22] <Vile> but to calculate the average properly for some time period, i need to know the previous value. the one that is just outside the time period
[18:05:12] <Vile> at the moment i'm doing a query inside 'map'
[18:05:38] <Vile> but this totally kills performance
[18:06:54] <Vile> any ideas how to improve?
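
One common way around the per-document lookup Vile describes: fetch the single record preceding the window once, outside map/reduce, and hand it to the map function through the scope option (a sketch; the collection and the ts/value/sensor field names are assumptions):

    var start = ISODate("2012-09-01"), end = ISODate("2012-09-02")

    // one indexed query for the last value before the window
    var prev = db.ts.find({ ts: { $lt: start } }).sort({ ts: -1 }).limit(1).next()

    db.ts.mapReduce(
        function () {
            // 'prev' is visible here via scope, e.g. for time-weighted averages
            emit(this.sensor, { sum: this.value, n: 1 })
        },
        function (key, vals) {
            var r = { sum: 0, n: 0 }
            vals.forEach(function (v) { r.sum += v.sum; r.n += v.n })
            return r
        },
        { query: { ts: { $gte: start, $lt: end } },
          scope: { prev: prev },
          out: { inline: 1 } }
    )
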
[21:59:52] <Dr{Wh0}> Q. if I shut down the primary of a replica set, how long should I expect it to take for a new primary to be elected? I have 4 servers: 1 arbiter, 2 secondaries, and one master.
[22:02:06] <Derick> 20-30 secs
[22:04:20] <Dr{Wh0}> k, must be a problem then, i waited 10 min. I have a mix of versions, 2.0 arb and slaves and one 2.2 master, probably the reason?
[22:09:09] <Dr{Wh0}> i'm good with testing and happy with 2.2, i'm going to update everyone and see what happens.
[22:18:04] <Dr{Wh0}> hmm, maybe it's how it became primary. I had forced it to primary before I turned it off, trying to test stuff in code
[22:25:51] <Dr{Wh0}> seems to be a bug. If you force a member to be primary and shut it down, the system will never elect a new primary
[22:29:00] <Dr{Wh0}> i think at minimum someone should update this page http://www.mongodb.org/display/DOCS/Forcing+a+Member+to+be+Primary to explain the dangers of using db.adminCommand({replSetStepDown:1000000, force:1})