[01:42:25] <LuckySMack> Is there any way I could bundle a mongo instance with a desktop app/executable so it has a local database? For either python or node.js
[04:31:27] <kstzz> We just had the following happen recently: http://pastebin.com/rVuW4GPc
[04:31:43] <kstzz> Sun Sep 2 10:32:36 [journal] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x7f7748c98000 errno:6 No such device or address
[04:32:10] <kstzz> it shuts mongo down after that.
[06:44:55] <LuckySMack|sgs3> If I was making a python or node.js application, is there a way I could bundle a mongo instance with it? That way it's integrated with the app and the user doesn't have to install mongo themselves?
[06:45:49] <LuckySMack|sgs3> I would be making a desktop app as an executable
[06:54:42] <kstzz> LuckySMack|sgs3: It's possible, but it's going to mean your app has to be licensed the same way as mongo itself
[06:55:27] <kstzz> LuckySMack|sgs3: You can't use a database that's more suited toward embedding? Like sqlite?
[07:05:20] <kstzz> mids: even if he did that though, he would be paddling upstream in a strong current. There are so many other issues that can come up that it doesn't really make sense to embed mongo at all.
[07:08:20] <mids> plus creating a python/node.js desktop app turned into a single executable can be tricky as well
[07:09:11] <kstzz> I am a little too lazy right now to look at the source code. We have a network mounted filesystem that mongo is currently using for the data store and journaling. If I change where journals go to the local filesystem, can I avoid: http://pastebin.com/rVuW4GPc when network issues arise?
[07:09:28] <kstzz> and still have mongo in a functional state
[07:10:22] <kstzz> mids: with python it's easy. node.js maybe not, it's never occurred to me to even use it like that
[07:18:05] <LuckySMack|sgs3> mids: the plan was for the installer to be proprietary if possible. It's only available to those who have paid for app access. I'll have to look into sqlite though
[07:18:55] <kstzz> LuckySMack|sgs3: what type of data are you storing in the database? sqlite is notoriously easy to embed, and commercially viable. Xapian is also very easy to embed, and depending on what you're doing it can also be a good fit.
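A minimal sketch of the embedded route kstzz is describing, assuming a Python desktop app: sqlite ships with the standard library, so the local database is just a file next to the executable. The inventory table and its columns are purely illustrative.

```python
# Embedded local database for a Python desktop app, using the sqlite3 module
# from the standard library. The schema below is illustrative only.
import sqlite3

conn = sqlite3.connect("inventory.db")  # file is created on first run
conn.execute(
    "CREATE TABLE IF NOT EXISTS items ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " name TEXT NOT NULL,"
    " quantity INTEGER DEFAULT 0,"
    " synced INTEGER DEFAULT 0)"   # 0 = not yet pushed to any remote db
)
conn.execute("INSERT INTO items (name, quantity) VALUES (?, ?)", ("widget", 10))
conn.commit()

for row in conn.execute("SELECT id, name, quantity FROM items"):
    print(row)
```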
[07:19:58] <LuckySMack|sgs3> Is there a way I could use sqlite to sync to a remote db? Think like an inventory management db. When you create a new item locally it syncs that data up to the remote db
[07:20:27] <LuckySMack|sgs3> Inventory management and in store POS
[07:20:51] <kstzz> LuckySMack|sgs3: that's not a feature of the database. CouchDB might be your best friend here?
[07:21:58] <LuckySMack|sgs3> So it won't be like Amazon with millions of items. It will be for individual stores, and that store will be able to sync its data online.
[07:23:33] <LuckySMack|sgs3> Yea I would likely be handling the syncing. I will have to look into couch more too though.
[07:27:24] <kstzz> LuckySMack|sgs3: yeah, either write your own sync routines and use sqlite, or read up on how people are using couchdb for offline sync.
[07:29:32] <LuckySMack|sgs3> How different would I have to make the db structure if I used sqlite? And synced up to mongo? I haven't used it before.
[07:30:27] <kstzz> LuckySMack|sgs3: it's up to you, sqlite is a traditional RDBMS so you could just write some web service as an API and convert it to whatever structure you are using in mongo.
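A hedged sketch of the "write your own sync routine" idea: rows not yet pushed are read from the local sqlite file and upserted into a remote mongo collection. For brevity it writes straight to mongo with pymongo instead of going through the web-service layer kstzz suggests; the hosts, db/collection names and the `synced` flag are assumptions made for the example.

```python
# Push locally created rows from sqlite up to a remote mongo collection.
# Hosts, db/collection names and the 'synced' flag are invented for this sketch.
import sqlite3
from pymongo import MongoClient

local = sqlite3.connect("inventory.db")
remote = MongoClient("mongodb://remote.example.com:27017").store.items

rows = local.execute(
    "SELECT id, name, quantity FROM items WHERE synced = 0"
).fetchall()

for item_id, name, quantity in rows:
    # Use the sqlite row id as a stable key so a retried sync does not
    # create duplicate documents on the remote side.
    remote.update_one({"local_id": item_id},
                      {"$set": {"name": name, "quantity": quantity}},
                      upsert=True)
    local.execute("UPDATE items SET synced = 1 WHERE id = ?", (item_id,))

local.commit()
```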
[07:34:41] <LuckySMack|sgs3> Hrmm. I'll have to look into them. Thanks.
[10:07:02] <kristaps> I have a collection Items and each item can have one or more images. Which is the better approach: the Item has a list field with image ids, or each image has the item's ID?
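Two hypothetical document shapes for kristaps' question, to make the trade-off concrete:

```python
# Option 1: the item carries a list of image ids. One read of the item gives
# you the ids of all its images; good when the image set is small and stable.
item = {"_id": 1, "name": "red chair", "image_ids": [101, 102, 103]}

# Option 2: each image points back at its item. Adding or removing images
# never rewrites the (growing) item document; fetching them is one indexed
# query, e.g. db.images.find({"item_id": 1}).
image = {"_id": 101, "item_id": 1, "url": "http://example.com/101.jpg"}
```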
[10:55:13] <Neptu> hey, if you do a drop in mongo why is it not releasing the space?
[10:56:55] <Derick> it will reuse the space though
[11:29:36] <UnSleep> but I don't know if I can configure two HDs in the same server... I saw how to set up a replica set, or in my case sharding, but... the config looks for a single drive
[11:31:18] <Gargoyle> UnSleep: I doubt you can split your mongo data across two drives, but you have many options from an OS point of view normally.
[11:31:50] <UnSleep> it looks like I need to configure gridfs
[11:45:36] <UnSleep> is there a master in the replica set? or are the same parameters configured on each server? (and would that make it possible to clone a vps image directly)
[11:47:42] <kali> by default all hosts are the same, and they elect a primary, but you can bias the election with configuration
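A sketch of the election bias kali mentions, assuming a MongoDB 2.x replica set reachable with pymongo: all members default to priority 1, and raising one member's priority makes it the preferred primary. Host names are placeholders, and the reconfig has to be issued against the current primary.

```python
# Bias the replica set election by giving one member a higher priority.
# Host names are placeholders; run the reconfig against the current primary.
from pymongo import MongoClient

client = MongoClient("node1.example.com", 27017)

# In MongoDB 2.x the replica set configuration lives in local.system.replset.
config = client.local.system.replset.find_one()

for member in config["members"]:
    if member["host"] == "node1.example.com:27017":
        member["priority"] = 2   # default priority is 1

config["version"] += 1           # every reconfig must bump the version

client.admin.command("replSetReconfig", config)
```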
[13:41:05] <durre> I have a document which has a list of ObjectId's… I want to retrieve all documents which have the id X in them. not sure how to do this with salat.
[13:46:30] <IAD> durre: you can use find with the "$in" operator: http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24in
[13:46:49] <kali> durre: how is salat the issue? salat does not deal with sending queries, it only translates results
[14:18:46] <durre> kali: I know how to query with the mongo shell, but not with salat. this is what I'm trying: "dao.find(MongoDBObject("taggedPeople" -> MongoDBObject("$in" -> Array(childId))))"
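For comparison, a driver-neutral version of that query in pymongo (the db/collection names and the id value are hypothetical): equality against an array field already matches documents whose array contains the value, so the explicit $in wrapper is optional.

```python
# Find all documents whose taggedPeople array contains a given ObjectId.
# The db/collection names and the id are hypothetical.
from bson import ObjectId
from pymongo import MongoClient

coll = MongoClient().mydb.items
child_id = ObjectId("504f1e1d0000000000000001")

coll.find({"taggedPeople": child_id})                  # array containment
coll.find({"taggedPeople": {"$in": [child_id]}})       # equivalent with $in
```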
[14:23:59] <kali> durre: ok, and what's wrong ? no match ?
[14:24:51] <durre> kali: yep, no match. I'm starting to think I'm saving the wrong id's in the first place
[14:26:39] <kali> durre: you can try mongosniff or the mongodb optimizer to check if the query is correct
[14:27:17] <kali> cmex: db with 2.2, whole server with 2.0. but inserts are *very* fast, so it's not usually a problem
[14:28:09] <cmex> I have inserts all the time, I think it can be a problem to get reports from these collections :((
[14:29:12] <cmex> kali: is there an architecture for this type of problem: I have constant inserts and need to get reports from those collections... now it's about 9m documents
[14:32:26] <durre> kali: I had forgotten to remove old garbage data which confused my program. thx for the tip about mongosniff… that will come in handy
[14:41:13] <kali> well, the AF will help, but it's not a silver bullet. you can't expect any tech to scan an arbitrarily high number of documents in an arbitrarily small time
[14:44:59] <cmex> so you're saying that only caching things will work?
[14:51:53] <Gargoyle> Possibly, but you also might need to double check how you are storing data. I used to work in the telematics industry, and we were pulling reports from 150m+ rows.
[14:53:19] <Gargoyle> Sooner or later, you are going to need to limit the range for a "live" report, and fall back to some sort of archive system. eg. You can fetch live reports for the last 6 months, but if you need to go back further, it's submitted to a job queue and produced at a later time.
[14:53:52] <Gargoyle> Also, have you checked you have the best indexes?
[14:54:27] <Gargoyle> So if you are always searching user, from and to - are they all in a single index?
[14:59:51] <cmex> yes we have userId and insertDate in a single index
[15:00:20] <cmex> but it's still 2 minutes on a collection of 11m records
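A sketch of the compound index Gargoyle is describing, using the field names cmex gives (userId, insertDate); the db/collection names and dates are invented. With both fields in one index, the report query can seek to the user and range-scan the dates instead of scanning all 11m documents.

```python
# Compound index covering the equality field and the range field, so the
# report query walks the index instead of the whole collection.
from datetime import datetime
from pymongo import ASCENDING, MongoClient

events = MongoClient().reporting.events   # hypothetical names

events.create_index([("userId", ASCENDING), ("insertDate", ASCENDING)])

report = events.find({
    "userId": 42,
    "insertDate": {"$gte": datetime(2012, 3, 1), "$lt": datetime(2012, 9, 1)},
})
```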
[15:27:19] <durre> I have a case class with the field "parents: Option[List[ObjectId]]" .. when I retrieve the class and try to access the parents, I get a com.mongodb.BasicDBList instead
[15:27:38] <zanefactory> qq: if I get this error, what's the best way to remediate:
[15:27:40] <zanefactory> replSet error rollback : can't rollback drop database full resync will be required
[15:28:01] <zanefactory> do I mongodump on one of my other slaves, and mongorestore on the broken one? how does it know where to restart replication from
[15:29:36] <kali> zanefactory: mongodb can do that for you. stop the broken secondary, remove the content of its dbpath, and start it again
[15:29:44] <kali> zanefactory: just make sure the primary is fine
[15:30:42] <kali> these instructions are given AS IS and all that
[15:30:47] <zanefactory> ha yeah, just move all the .[n] and .ns files
[15:31:15] <Gargoyle> zanefactory: Last time I did it, I moved the whole parent directory
[15:31:20] <Bilge> If you store compound data in a field, such as image dimensions (e.g. '123x456'), is it possible to query just the width or height in Mongo?
[15:35:41] <Gargoyle> Bilge: then store them separately!
[15:35:58] <Bilge> Mongo doesn't seem to have a very powerful query API
[15:36:24] <Gargoyle> Bilge: And is there another DB that can do that for you?
[15:36:37] <kali> Bilge: you can use $where and write javascript... don't complain if it's slow
[15:37:24] <Gargoyle> Bilge: Also, if you are querying nested document type data, and you are not storing it as a nested document, then you only have your own design decisions to blame - not mongo.
[15:43:50] <Bilge> I'm not doing anything at this stage other than investigating
[15:44:02] <Bilge> Not really sure where you get off judging everyone
[15:44:39] <Gargoyle> Bilge: Not judging - just telling how it is!
[15:45:07] <kali> Bilge: the thing is, you can't just push data in any form into any database and expect it to solve your problem
[15:45:12] <Bilge> Even if they were separate fields, how would you query a range?
[15:45:31] <Gargoyle> Bilge: Using greater than and less than - like any other DB
[15:45:33] <kali> in your width x height case, any db would be as dumb as mongodb
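To make that concrete, a small pymongo sketch (names invented) of storing the dimensions as two numeric fields so that a range query is just an ordinary comparison:

```python
# Store width and height as separate numeric fields; range queries are then
# plain comparisons. Names are invented for the example.
from pymongo import MongoClient

images = MongoClient().mydb.images

images.insert_one({"name": "cat.jpg", "width": 123, "height": 456})

# All images between 100 and 200 pixels wide:
wide_enough = images.find({"width": {"$gte": 100, "$lte": 200}})
```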
[15:46:05] <Bilge> MySQL has a plethora of functions for dealing with processed data
[15:49:39] <Gargoyle> Bilge: Well, if you throw up an example or two, you'll probably get an equiv. But if you are looking for a more extensive API, then it's quite possible mongo is not the solution for your app.
[15:50:22] <Bilge> Mongo is already the solution for my app, I'm just interested in its capabilities
[15:51:10] <Gargoyle> Well, your example question hinted at an issue with data design, not mongo's capabilities.
[15:57:12] <Gargoyle> Nope. It's a fact from the example you gave.
[15:57:19] <Bilge> Problem with IRC geeks is that even though you have the answers people need, you also assume everyone is doing everything wrong, and you only enjoy being here because you get off on preaching your moral code to everyone in the hopes of changing the world one idiot at a time
[15:57:49] <Bilge> You assume I've designed my database wrong so you'll be the first to jump all over it and tell me how wrong I am
[15:57:54] <Bilge> Because that's what you enjoy doing
[15:58:16] <Bilge> You assume I'm doing everything wrong when in fact I'm doing it right
[15:58:28] <Bilge> I haven't designed anything at this point, I'm merely here to figure out how I should do it
[15:58:32] <kali> well, obviously, you don't need our help
[15:58:47] <kali> wonder why you even bothered asking
[15:58:50] <Gargoyle> No. You asked how you can query a range on a bit of data being stored non-optimally. I pointed out the non-optimal storage of your data!
[15:59:22] <Gargoyle> Bilge: But feel free to get lost and seek the same answer from some other source of information!
[16:02:34] <Gargoyle> Anyone got any tips on where to chase down apache segfaults on a default ubuntu 12.04 install?
[16:20:31] <Gargoyle> Note to self: When configuring NTP, it helps to open the network port on the firewall. (Server clocks have skewed by over an hour!)
[16:21:42] <Gargoyle> Could this mess up my replSet if they suddenly jump back an hour?
[16:40:02] <kali> Gargoyle: /me remembers the night of 30th june and trembles
[16:40:35] <Gargoyle> kali: What happened on that night?
[16:40:50] <kali> Gargoyle: the leap second broke all my JVMs
[18:00:26] <Vile> I'm back again with strange questions
[18:02:13] <Vile> Does anybody here have experience with dealing with time series? I.e. the kind of data where each record has a timestamp, and those timestamps are not equidistant
[18:03:16] <Vile> I'm trying to calculate some aggregates on those using map/reduce (average, for example)
[18:04:22] <Vile> but to calculate the average properly for some time period, I need to know the previous value, the one that is just outside of the time period
[18:05:12] <Vile> at the moment I'm doing a query inside 'map'
[18:05:38] <Vile> but this totally kills performance
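One possible workaround, sketched in Python with invented collection and field names: fetch the single sample that precedes the window once on the client, instead of querying inside every map() call, then compute a time-weighted average over the irregular samples.

```python
# Time-weighted average over an irregular time series, fetching the boundary
# sample once per series on the client instead of inside map().
# Collection and field names are invented.
from datetime import datetime
from pymongo import DESCENDING, MongoClient

samples = MongoClient().metrics.samples
series_id = "sensor-1"
start, end = datetime(2012, 9, 1), datetime(2012, 9, 2)

# The sample that was current when the window opened (outside the window).
prev = samples.find_one({"series": series_id, "ts": {"$lt": start}},
                        sort=[("ts", DESCENDING)])

points = list(samples.find({"series": series_id,
                            "ts": {"$gte": start, "$lt": end}}).sort("ts", 1))
if prev is not None:
    points.insert(0, {"ts": start, "value": prev["value"]})

# Each value counts for as long as it was the current one.
total = 0.0
for cur, nxt in zip(points, points[1:] + [{"ts": end}]):
    total += cur["value"] * (nxt["ts"] - cur["ts"]).total_seconds()
average = total / (end - start).total_seconds() if points else None
```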
[21:59:52] <Dr{Wh0}> Q: if I shut down the primary of a replica set, how long should I expect it to take for a new primary to be elected? I have 4 servers: 1 arbiter, 2 secondaries and one master.
[22:04:20] <Dr{Wh0}> k. must be a problem, I waited 10 min. I have a mix of versions, 2.0 arb and slaves and one 2.2 master, probably the reason?
[22:09:09] <Dr{Wh0}> I'm good with testing and happy with 2.2, I'm going to update everyone and see what happens.
[22:18:04] <Dr{Wh0}> hmm, maybe it's because of how it became primary. I had forced it to primary before I turned it off, trying to test stuff in code
[22:25:51] <Dr{Wh0}> seems to be a bug. If you force a member to be primary and shut it down, the system will never elect a new primary
[22:29:00] <Dr{Wh0}> I think at a minimum someone should update this page http://www.mongodb.org/display/DOCS/Forcing+a+Member+to+be+Primary to explain the dangers of using db.adminCommand({replSetStepDown:1000000, force:1})
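For illustration only, a pymongo sketch of the same command with a short step-down window; the huge value in the quoted command (1000000 seconds, roughly 11 days) is what can leave the set unable to elect anyone once the forced primary is shut down. The host name is a placeholder.

```python
# Step a member down for two minutes instead of ~11 days, so the set can
# still elect a primary if the member you promoted later goes away.
# The host name is a placeholder.
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient("old-primary.example.com", 27017)
try:
    # Equivalent of db.adminCommand({replSetStepDown: 120}) in the shell.
    client.admin.command("replSetStepDown", 120)
except ConnectionFailure:
    # The server drops client connections when it steps down, so a
    # connection error here is expected rather than fatal.
    pass
```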