PMXBOT Log file Viewer

#mongodb logs for Thursday the 23rd of August, 2012

[00:06:10] <zacinthus> I am seeing some weird issues with the new TTL feature
[00:06:29] <zacinthus> I have 2 databases, and inside each of them, I have just one table each
[00:06:42] <zacinthus> and both of them have TTL based indexes on a datetime field
[00:06:50] <zacinthus> one of them is expiring data properly
[00:06:55] <zacinthus> and another is just not doing any expiry
[00:07:01] <zacinthus> completely strange
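
As an aside, a minimal sketch of the TTL setup under discussion (collection and field names are invented; shell syntax of that era). A detail worth checking in a case like this: TTL only removes documents whose indexed field holds an actual BSON Date, and the background monitor only wakes periodically, so expiry is never instantaneous.

    // hypothetical collection; expire documents one hour after createdAt
    db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })
    db.sessions.insert({ createdAt: new Date(), user: "zac" })   // eligible for expiry
    db.sessions.insert({ createdAt: "2012-08-23" })              // a string, not a Date: never expires

The string-vs-Date distinction is a common reason one collection expires and another does not.
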
[00:11:59] <owen1> crudson1: thanks
[01:13:44] <Init--WithStyle-> Hey guys... I would like to set up a collection of data on my server before hitting the database... but have no idea how to set the collection up correctly.
[01:13:59] <Init--WithStyle-> Initially I tried just doing an insert on the database for every piece of my collection but... it's too big
[01:14:04] <Init--WithStyle-> It's a 2000 x 2000 array
[01:14:15] <Init--WithStyle-> too many database hits
[01:15:07] <_johnny> Init--WithStyle-: you can use mongoimport directly on the data (if you're able to have the mongod shut down while you do it)
[01:15:31] <Init--WithStyle-> _johnny: could you point me towards an example/some literature?
[01:15:37] <Init--WithStyle-> I'm not sure what mongoimport is
[01:16:48] <_johnny> yes. it's part of mongodb, as an import util, for json/mongo/csv/tsv data: http://www.mongodb.org/display/DOCS/Import+Export+Tools
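
A typical invocation looks like this (database, collection, and file names are placeholders); by default mongoimport expects one JSON document per line, or a single array with --jsonArray:

    mongoimport --db mydb --collection grid --file data.json
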
[01:17:02] <Init--WithStyle-> The main thing is programmatically prepping this array for sending to MongoDB where it can be unpacked into my collection...
[01:17:36] <_johnny> right, parsing is usually the intensive part
[01:18:06] <_johnny> i was prepping some xml to json which took me 4 hours. the import of json to mongo took 30 minutes :)
[01:19:24] <crudson1> Init--WithStyle-: what form is the data in currently?
[01:19:38] <Init--WithStyle-> just a 2d array created via javascript
[01:19:56] <Init--WithStyle-> right now i'm parsing through every part of the array in a for loop and doing a mongo insert
[01:20:11] <Init--WithStyle-> seems i'm getting cut off for some reason at ~ line 546 of the 2d array..
[01:20:15] <Init--WithStyle-> maybe i'm hitting it too intensively
[01:21:31] <Init--WithStyle-> i'm using nodejitsu.. if there was some way I could just push the whole array over and then have it unpack itself... maybe that would work better?
[01:22:11] <Init--WithStyle-> Am i approaching this completely wrong?
[01:22:14] <crudson1> Init--WithStyle-: so it's being generated programmatically. If performing inserts in realtime is slow (or getting slower over time, which could be for a number of reasons) then you could output the json for each document to a file and import that afterwards (as _johnny suggested)
[01:22:51] <Init--WithStyle-> crudson1: is doing an insert for each part of the array the correct way to go here?
[01:23:10] <Init--WithStyle-> this is for my initial population of the collection
[01:24:55] <Init--WithStyle-> for some reason things just stop when I get to line 576 of my 2d array :/
[01:25:00] <crudson1> Init--WithStyle-: it depends whether you've decided on the best document structure for this data. Have you decided how it will be queried or analyzed, as how you are representing it should be a consideration at this stage.
[01:25:13] <Init--WithStyle-> yes
[01:25:19] <Init--WithStyle-> it's a geospatially indexed collection
[01:25:31] <Init--WithStyle-> so i'm dropping into an x,y loc for each piece of data
[01:25:57] <crudson1> did you create the geo index before you started inserting?
[01:27:43] <Init--WithStyle-> no crudson1
[01:28:02] <Init--WithStyle-> I am doing inserts using a loc: [x,y] with some small additional data
[01:28:16] <crudson1> I think you should try dumping the json to a file then sucking in the whole lot in one go.
[01:28:17] <Init--WithStyle-> the total size of the array is 2000 x 2000 so ~ 1,600,000 entries
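
For context, a sketch of the setup being described (collection name and payload invented); creating the 2d index before bulk-loading 1.6M documents means every insert also pays for an index update, which is one plausible source of the slowdown mentioned below:

    db.grid.ensureIndex({ loc: "2d" })             // geospatial index on the loc field
    db.grid.insert({ loc: [12, 7], value: 42 })    // one document per cell of the 2000 x 2000 array
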
[01:29:42] <Init--WithStyle-> crudson1: it seems it was a problem with my mongohub
[01:29:55] <Init--WithStyle-> it takes time for the system to index and properly show the actual count of objects in the database I guess
[01:30:12] <Init--WithStyle-> it's updating at approx. 5000 entries per sec
[01:30:26] <Init--WithStyle-> maybe it's buffered
[01:30:40] <Init--WithStyle-> could be that jitsu buffered the inserts and is sequentially pushing them in
[01:37:16] <Init--WithStyle-> Strange.... it seems at a certain point my inserts take forever to complete....
[01:39:17] <Init--WithStyle-> Do I need to have multiple primary keys or something?
[02:59:15] <Glace> Any good experiences with rep set on ebs raid 0?
[03:19:38] <Glace> Is there any reason to not use raid 0 if you have rep sets+ snapshots?
[03:20:15] <Init--WithStyle-> I wish i knew what you were talking about Glace :D
[03:21:29] <Glace> Hmm.. I see that all examples of mongodb with replica sets on ec2 use raid10 on ebs volumes. I was wondering why not just raid0, since the data is replicated, plus journaling and taking ebs snapshots
[03:50:16] <geoffeg> raid1 doubles read throughput?
[04:07:52] <circlicious> if i am doing a mapReduce and there are 1000 requests made at once, is it going to be performant if i keep creating a tmp collection for each operation and then dropping them?
[04:14:00] <circlicious> can i filter over the resultset returned by mapreduce when using inline:true ?
[04:57:37] <ravana> is there an equivalent to mysql sql_calc_found_rows in Mongo?
[04:59:10] <IAD> $cursor->count() and $cursor->count(true) for PHP
[05:02:30] <ravana> if i put $collection->find()->limit(1) there, can i expect the same behavior as mysql?
[05:03:29] <IAD> O_o
[05:04:54] <ravana> :D
[05:05:04] <ravana> no right?
[05:06:17] <IAD> MongoDB returns a cursor to one document
[05:11:08] <ravana> i solved this in a different manner ;) thanks for your time
[05:11:49] <ravana> it is something like $collection->count(array('active'=>1));
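
In shell terms, the distinction IAD was pointing at looks like this (hypothetical collection name):

    db.items.find({ active: 1 }).limit(1).count()       // total matching documents, ignoring the limit
    db.items.find({ active: 1 }).limit(1).count(true)   // applies skip/limit, so at most 1 here
    db.items.count({ active: 1 })                       // shorthand for the first form
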
[05:11:52] <circlicious> can you help me iad
[05:17:56] <IAD> circlicious: http://www.triggeredmessaging.com/blog/mongodb-with-high-volume-data
[05:26:13] <circlicious> IAD: wa?
[06:01:39] <IAD> circlicious: what does wa mean? warped armbands or watery apple?
[06:16:30] <circlicious> IAD: what?
[06:16:37] <circlicious> that article is not related to my problem
[06:27:58] <abhi9> how do I sum one field over the elements of an array?
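
The question above goes unanswered in the log; one hedged sketch, assuming a server with the aggregation framework (2.2+) and invented collection/field names, is to $unwind the array and $sum the field:

    db.orders.aggregate(
        { $unwind: "$items" },                                        // one doc per array element
        { $group: { _id: "$_id", total: { $sum: "$items.price" } } }  // re-group and sum per document
    )
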
[07:49:34] <NodeX> http://www.businessinsider.com/mongodb-2012-5
[07:50:22] <NodeX> The trouble with that is now we're going to get a lot of idiots who don't have a clue how to write efficient queries and program code, bringing the overall speed of mongo down and giving it a bad name :/
[07:51:26] <NodeX> oh well, I suppose take the good with the bad
[08:18:48] <kali> NodeX: they are already here anyway, the post is actually 4 months old
[08:18:56] <kali> or 3.
[08:29:55] <NodeX> LOL
[08:30:05] <algernon> NodeX: well, there have always been people writing mongodb-using apps in php... *ducks*
[08:32:45] <NodeX> but it's a funny thing when those apps out-perform so-called "faster" languages
[08:32:46] <NodeX> LOL
[08:36:26] <algernon> you can write shit code in any language, but I had my troll hat on anyway :)
[08:39:58] <NodeX> I agree, bad programming is language-agnostic!
[08:43:31] <jQuy> What is the best GUI for MongoDB and Windows 7?
[08:44:00] <BurtyB> Putty :)
[08:44:25] <NodeX> ^^
[08:44:44] <NodeX> there are a few, never used one myself but I see people talk about rockmongo and phpmymongo
[08:45:20] <jQuy> NodeX: I have Node.js backend
[08:45:39] <jQuy> so I can't use any PHP GUIs
[08:46:42] <NodeX> how come ?
[08:47:21] <NodeX> http://www.mongodb.org/display/DOCS/Http+Interface
[08:47:27] <NodeX> First result on google
[08:47:31] <NodeX> Lazy :/
[08:49:18] <jQuy> Maybe I use MonjaDB
[08:49:31] <jQuy> It works on Eclipse
[09:00:14] <yatiohi> Hello, I want to "break" a replica set and switch to a single instance. Do I have to just restart the server without the --replSet parameter?
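
A hedged sketch of the usual recipe (verify against the docs for your version): restart mongod without --replSet, then, if desired, remove the stale replica-set config, which lives in the local database:

    mongod --dbpath /data/db          # no --replSet this time
    # then, from the mongo shell, clear the leftover config:
    # > use local
    # > db.system.replset.remove({})
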
[09:06:49] <_johnny> jQuy: i "expose" a db with rockmongo, and i never use php in my stack. it's very lightweight, and just an instance of php-cgi. personally i find most UIs, both app and web, rather limiting, but for basic stuff either of them seems to do
[09:07:06] <_johnny> and besides, like NodeX said, there's MongoDB Rest which is node based :p
[09:07:45] <_johnny> NodeX: reminds me of a chat i saw on nodejs yesterday. "i'd never use mongo". i got curious, so i asked why. he wanted a rest interface. lol
[09:09:02] <jQuy> _johnny: I'll google for MongoDB Rest
[09:09:56] <jQuy> _johnny: yesterday I was advised to use Mongoose
[09:10:16] <_johnny> mongoose seems to be a popular one as well, yes
[09:15:57] <jQuy> MongoDB Rest server doesn't start up!
[09:16:57] <jQuy> I installed it via npm and followed the instructions there: https://github.com/tdegrunt/mongodb-rest
[09:17:36] <_johnny> did you adjust config.json?
[09:17:44] <jQuy> no I didn't
[09:18:09] <_johnny> seems to have non-standard port numbers, so that could be why
[09:18:48] <jQuy> My Express 3 server uses port 3000
[09:19:19] <_johnny> you just put a config.json similar to that in the repo, in the dir you're currently in, and issue a mongodb-rest
[09:19:25] <_johnny> https://github.com/tdegrunt/mongodb-rest/blob/master/config.json
[09:20:30] <jQuy> So I have to change those port settings?
[09:20:34] <_johnny> e.g., # cd ~; vi config.json; mongodb-rest
[09:20:58] <_johnny> right. if you have an express server running on 3000, your mongod is probably running on something else
[09:21:10] <_johnny> you can check that in your mongod.conf, which is usually in /etc/mongodb.conf
[09:21:20] <_johnny> and by default is 27017
[09:22:00] <jQuy> yes, that's the default port
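
For reference, the config is roughly of this shape (consult the repo's copy for the authoritative fields); the point _johnny is making is that the db port must match your mongod, and the server port must not collide with anything else, such as the Express app already on 3000:

    {
        "db":     { "port": 27017, "host": "localhost" },
        "server": { "port": 3001,  "address": "0.0.0.0" }
    }
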
[09:24:16] <jQuy> Oh no! I can't end the mongod process
[09:25:18] <Derick> why not?
[09:25:53] <jQuy> I think because it runs as a Windows service
[09:26:12] <_johnny> services.msc (or what it's called), you should be able to stop it from there
[09:32:20] <circlicious> so... how does one filter a resultset from an inline mapReduce? :D
[09:32:47] <jQuy> _johnny: thanks, it worked
[09:32:55] <[AD]Turbo> hola
[09:33:59] <jQuy> _johnny: mongodb-rest command still doesn't work. " 'mongodb-rest' is not recognized as an internal or external command, operable program or batch file."
[09:39:13] <jQuy> I might use Mongoose. It works better.
[09:56:47] <jQuy> Is it wise to create your own js-file for data modelling?
[09:59:20] <jwilliams> is there any place that a mongo admin can check for slow updates?
[10:02:27] <IAD> jwilliams: http://www.mongodb.org/display/DOCS/Database+Profiler#DatabaseProfiler-EnablingProfiling
[10:03:19] <jwilliams> iad: thanks.
[10:05:26] <_johnny> jQuy: try npm -g install mongodb-rest (notice the -g)
[10:06:01] <_johnny> i'm not entirely sure how installs work on windows. the place where your mongodb-rest.exe is, needs to be in your %PATH%
[10:11:47] <lizzin> http://pastie.org/4573524
[10:12:08] <lizzin> how would you guys suggest going about inserting that json array into a collection?
[10:12:16] <lizzin> im used to dealing with key value pairs
[10:12:20] <lizzin> with scala
[10:12:32] <lizzin> im willing to use any lang/library here though
[10:14:31] <circlicious> so you cannot tell me how to filter out results from inline mapReduce?
[10:22:56] <NodeX> lizzin : you just add it... what's the error?
[10:24:19] <lizzin> NodeX: you mean add the entire array as a single element?
[10:25:15] <lizzin> NodeX: i am used to working with something alongs the lines of {"dba_name":"63D6EF50-0E38-446B-B09B-E0FD60FFA169"}
[10:25:40] <lizzin> where i would use a scala case class and then extract that json, then save
[10:25:56] <lizzin> but here i start off with just a json array without any keys
[10:25:58] <NodeX> I don't know what a scala case class is
[10:26:07] <NodeX> add a key to it then
[10:26:20] <NodeX> db.foo.insert({key:YOUR_ARRAY})
[10:26:32] <lizzin> yea, thats easy enough
[10:26:51] <lizzin> but i want to break the array up using the keys on the bottom of that pastie
[10:27:17] <NodeX> that's an appside problem
[10:27:25] <NodeX> and quite an easy one
[10:27:28] <lizzin> true
[10:27:38] <lizzin> how would you do it?
[10:28:09] <NodeX> split the array into chunks of the same size as your keys then loop each chunk and push it
[10:29:10] <lizzin> right
[10:29:20] <NodeX> 19 keys, split the array by 19 and you'll have zero-based array members that will match your keys (as long as the key order is correct), then loop the keys and assign each part of the chunk to a key:value array, then add the whole lot back together and insert
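
A minimal JavaScript sketch of what NodeX describes; only dba_name comes from the conversation, the other keys and the collection name are placeholders for the ones in the pastie:

    var keys = ["dba_name" /* , ...the other 18 keys from the pastie, in order... */];
    var flat = [/* the flat JSON array from the pastie */];
    for (var i = 0; i < flat.length; i += keys.length) {
        var doc = {};
        for (var j = 0; j < keys.length; j++) {
            doc[keys[j]] = flat[i + j];   // zip each chunk's values with the keys, in order
        }
        db.records.insert(doc);           // hypothetical collection name
    }
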
[10:29:44] <lizzin> i need to explore the json libraries more. would be extremely helpful if there was a jsonArray to List method. then i could just zip the two
[10:30:38] <lizzin> true
[10:30:38] <NodeX> I wouldn't know about that, it's not difficult to do so perhaps a method exists
[10:32:21] <lizzin> NodeX: i see what you mean
[10:32:34] <lizzin> NodeX: thanks
[10:34:34] <NodeX> ;)
[11:44:28] <Vile> Hi! I still need an idea. I have a hierarchically arranged collection (using materialized path). I need to do a m/r on it, but… for proper processing of each document i need all of its parents
[11:45:43] <NodeX> perhaps ask 10gen for some professional consulting
[11:45:49] <NodeX> nobody seems to know
[11:53:21] <remonvv> Vile, that simply isn't going to be possible through m/r with that schema.
[11:53:50] <jQuy> mongodb-rest contains deprecated code
[11:54:25] <fredix> hi
[11:54:46] <fredix> does an update with the upsert flag=true create a collection if it does not exist?
[11:55:22] <NodeX> any insert / update creates a collection if it doesn't exist
[11:55:25] <Vile> remonvv: I'm currently doing it with m/r
[11:55:31] <fredix> NodeX: ok thx!
[11:55:49] <Vile> but it is slow, because i have to run subqueries from map()
[11:55:50] <fredix> so there is a failure in my code
[11:56:42] <NodeX> fredix : your driver should normally catch an error
[11:56:57] <fredix> NodeX: i'm using the c++ driver
[11:57:05] <Vile> the problem is, it is impossible to run "find()" from within map()
[11:57:51] <Vile> (on the second thought this might be a good thing as well)
[11:58:06] <fredix> NodeX: and my code catches no error, but my collection isn't created
[11:59:08] <NodeX> fredix : I wouldn't know about that driver, consult the driver docs regarding error catching
[11:59:24] <fredix> yep
[11:59:38] <Vile> any other ideas on how to handle tree structures in mongo? Maybe I'm storing them incorrectly...
[12:00:22] <NodeX> pastebin your schema again
[12:04:52] <Vile> NodeX: i'm using nested set. Just a simple schema to store file-tree like structure
[12:05:54] <NodeX> cool but I left my crystal ball in my other Hard drive
[12:06:15] <NodeX> and your "simple nested set" might be different to mine
[12:11:56] <Vile> NodeX: { _id, full_path:"_id/_id/_id", data:{some data here} } :)
[12:12:23] <Vile> not nested set, sorry.
[12:13:10] <Vile> tree structure with materialized path.
[12:13:41] <Vile> probably for the nested set the processing like i want can be done
[12:14:15] <NodeX> and what do you need to do with it again
[12:17:07] <Vile> NodeX: final purpose is to get the hierarchical search
[12:18:03] <NodeX> I don't understand what that is, sorry
[12:19:21] <Vile> i.e. i have tree like: {path:"/a", data:"hello"}, {path:"/a/b", data:"world"}. Then search for the terms "hello", "world" should return node "/a/b"
[12:20:08] <NodeX> and this cannot be done appside?
[12:21:00] <Vile> NodeX: sorry for being unclear. No, it can not => nodes are in the database and there is quite a large number of them
[12:22:14] <NodeX> http://stackoverflow.com/questions/1619058/storing-directory-hierarchy-in-a-key-value-data-store
[12:22:20] <NodeX> (the one with 40 answers)
[12:23:19] <remonvv> Not the same problem I think.
[12:23:44] <remonvv> In fact it can be argued that hierarchy has very little to do with this.
[12:23:53] <NodeX> there is a recipe on the mongo site somewhere
[12:24:51] <remonvv> Why not just store it as {path:"/a", data:["hello"]}, {path:"/a/b", data:["hello", "world"]}?
[12:25:13] <remonvv> That removes all hierarchical complexity with search at the expense of additional disk space/document size
[12:25:27] <Vile> NodeX: I'm using materialized path (same as in the answer)
[12:25:46] <Vile> remonvv: because objects can be very large
[12:26:15] <remonvv> can be or always are?
[12:26:24] <remonvv> If so :
[12:26:45] <remonvv> {path:"/a", data:[dataId1]}, {path:"/a/b", data:[dataId1, dataId2]}?
[12:26:58] <remonvv> separate collection for the content, and you can dump m/r
[12:27:24] <remonvv> Which is a bit of a red flag feature for production services anyway
[12:28:02] <remonvv> Is there a logical limit to the amount of children every parent can have?
[12:28:24] <Vile> remonvv: what will it give me? I need to search based on the data contents
[12:28:36] <Vile> remonvv: not really
[12:28:51] <Vile> could be very deep hierarchy
[12:29:02] <remonvv> search data -> get ids -> do query on tree data
[12:29:05] <remonvv> alright
[12:29:11] <Vile> but we are considering up to roughly 20 levels
[12:29:28] <Vile> search data for what?
[12:30:28] <remonvv> you just said you need to search on content right? Put your content elements in a flat collection, search that, fetch the _ids and use those _ids to query on the tree node data to get whatever it is you need that for.
[12:30:53] <remonvv> What is this for anyway?
[12:31:06] <Vile> i have two search terms
[12:32:12] <Vile> (or more). If one of them appears higher on the hierarchy and another is deeper - the deepest level which has all the search terms matched is considered a match
[12:32:26] <remonvv> You need to write out what you're trying to do functionally somewhere. That might be easier.
[12:32:59] <remonvv> Right but that doesn't require storing it hierarchically at all. The content elements then simply need to know how deep they are rather than what their exact path is.
[12:33:07] <remonvv> In which case you can simply sort them
[12:33:13] <Vile> i.e. search matches if this item and all of its parent items on the hierarchy match all the search term
[12:33:19] <remonvv> Unless "Hello, World" != "World, Hello"
[12:33:31] <remonvv> I have to go for a bit
[12:34:47] <Vile> remonvv: parent has "hello" somewhere in the data, child has "world" (but does not have "hello"). search terms are hello && world. only child item matches (because hello is contained in the parent)
[12:34:53] <remonvv> {pathDepth:1, data:"hello"}, {pathDepth:2, data:"world"} -> find({data:{$in:["hello", "world"]}}).sort({pathDepth:-1})
[12:34:55] <remonvv> oversimplified
[12:38:36] <Vile> remonvv: could be other way around
[12:40:15] <Vile> parent has "hello" and child has "world"
[12:40:38] <Vile> or child has both
[12:49:03] <jQuy> testing
[12:49:39] <Vile> remonvv: should be find({data:{$all:["hello", "world"]}}) but in the collection where each node contains all the data from its parents
[12:50:27] <Vile> (but i can not use such a collection, because objects could be large and updates will be a nightmare)
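
To make that last exchange concrete, a toy version of the denormalized layout being weighed (field names invented): each node carries its own terms plus every ancestor's, so a single $all query finds the deepest match:

    db.nodes.insert({ path: "/a",   terms: ["hello"] })
    db.nodes.insert({ path: "/a/b", terms: ["hello", "world"] })
    db.nodes.find({ terms: { $all: ["hello", "world"] } })   // matches only /a/b

As Vile notes, the cost is that every change to a parent's data has to be fanned out to all of its descendants.
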
[12:57:13] <circlicious> can i use mongodb on one server from another server?
[12:57:43] <ron> yes?
[12:57:49] <circlicious> how?
[13:04:50] <NodeX> anyone had this error in GridFS before: "MongoGridFSException with message 'couldn't find file size'"?
[13:05:17] <NodeX> circlicious : change the connection string to an external IP
[13:07:58] <ron> circlicious: by using the host and port?
[13:08:16] <ron> circlicious: I don't really understand the question.
[13:09:24] <circlicious> sorry i'll try
[13:12:08] <NodeX> nvm, fixed it
[13:21:42] <remonvv> Anyone attending the Munich event here?
[14:13:09] <doxavore> Pro tip: under no circumstance should one use ext3 with MongoDB. Yuck. It's bringing everything down on every new file allocation.
[14:13:25] <doxavore> Is there a way to check and see how close MongoDB is to thinking it needs to allocate a new file?
[14:22:26] <doxavore> Or even a way to pre-allocate a few files at a time, so I can control and work around the server coming to a stand-still?
[14:26:56] <NodeX> it pre-allocates in chunks, doesn't it
[14:28:10] <doxavore> NodeX: yeah... I'm using it for GridFS and have to continue running on ext3 for at least a few more days. I'd just like it to not keep bringing everything down. :-/
[14:28:33] <brahmana> Hi all
[14:28:42] <brahmana> I am running Mongodb (db version v2.0.4, pdfile version 4.5) on Ubuntu 12.04
[14:28:52] <brahmana> Journaling is enabled
[14:29:10] <brahmana> A little while ago it crashed because of lack of permission to create a _tmp directory
[14:29:25] <brahmana> I set the permissions right and restarted the mongod
[14:29:35] <brahmana> Now inserts don't work and I see assertions in the log
[14:29:37] <NodeX> what's your write throughput, doxavore?
[14:30:26] <brahmana> Here is the log : http://pastebin.com/EVExxKAg
[14:31:07] <brahmana> Any hints?
[14:31:15] <brahmana> I wouldn't need repair as journaling is on, right?
[14:33:01] <doxavore> NodeX: For the disks or MongoDB?
[14:34:10] <NodeX> your write rate in mongo
[14:34:23] <NodeX> (just trying to work out where the bottleneck is)
[14:35:57] <doxavore> nothing very big, we hover around 5-10 inserts/sec, GridFS increasing around 300-400MB/hour
[14:38:05] <thewanderer1> hi. let's say I use Mongo as a data warehouse and have daily data migrations from various sources to one collection. How do I ensure the collection integrity, i.e. it contains the old dataset, or the new dataset, but not partial data?
[14:38:31] <thewanderer1> filesystem analogy: write the new file first, then rename old file to new file
[14:38:50] <thewanderer1> err, other way round (but you get the idea)
[14:49:10] <remonvv> thewanderer1, I believe db.old.drop() -> db.imported.renameCollection("old", true) should do the trick
[14:49:19] <remonvv> The rename is atomic
[14:49:37] <remonvv> Possible states are then old, new, or no data
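
Spelled out in the shell, remonvv's suggestion is a one-liner; the second argument asks the server to drop the target as part of the rename, which makes the separate drop() optional:

    db.imported.renameCollection("old", true)   // true = dropTarget
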
[14:54:04] <circlicious> anyone using mongoid?
[14:59:53] <thewanderer1> remonvv, hmm, can't I do an in-place rename?
[15:00:06] <jmar777> Anyone got some recent benchmarks with the aggregation framework? testing right now against some eventing/olap use cases. getting ~1sec to aggregate across ~100k events. am i seeing roughly the best I can expect?
[15:38:52] <brahmana> Hi all. (again.. got disconnected earlier)...
[15:39:18] <brahmana> So does anyone know what's causing this assertion: http://pastebin.com/EVExxKAg ?
[15:48:11] <circlicious> (Could not connect to a primary node for replica set
[15:48:39] <circlicious> tried to set the connection string to ip:port from the other server (mongoid ruby library). that's what i get, what should be done?
[15:55:13] <estebistec> for a read-heavy app, what's a good lower bound on the journal interval? I'd like to crank it down at least somewhat from the 100ms default, but don't want to go crazy
[15:55:46] <estebistec> Is it cheap enough when there are no writes such that 10ms j-interval wouldn't cost me much cpu or other resource contention for the DB?
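
If memory serves, the relevant knob in that era was --journalCommitInterval, in milliseconds with a bounded valid range (check the docs for your version), e.g.:

    mongod --journal --journalCommitInterval 10
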
[16:53:21] <Lujeni> Hello - is it possible to specify a query for the mongo_connector tool? If I want to only store documents older than 90 days, for example. thx
[17:26:02] <Almindor> hello
[17:26:13] <Almindor> what's the correct date format for the JSON import using mongoimport?
[17:29:25] <crudson1> Almindor: see the section "Dates" http://www.mongodb.org/display/DOCS/mongoexport
[17:30:51] <crudson1> Almindor: try exporting one document that contains a date and examine that
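
crudson1's suggestion made concrete: in the extended JSON that mongoimport of that era understands, a date field comes out looking like this (milliseconds since the epoch; the field name is invented):

    { "created" : { "$date" : 1345680000000 } }
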
[17:33:50] <eka> hi all... anyone knows if ensure_index from pymongo behaves as ensureIndex from the shell? I mean, I don't want the index to be recreated
[17:53:54] <linsys> eka: yes it does
[17:54:23] <eka> linsys: so it doesn't recreate the whole thing... thanks
[18:10:35] <LesTR> hello guys, is there any option to sync one secondary server from another secondary in one replica set?
[18:11:44] <LesTR> we now have an absolutely broken replica set with 5 servers
[18:12:27] <LesTR> 1 secondary server has a little delay (3h), 2 have 12h and the last is down
[18:12:47] <LesTR> imho all 3 up secondary servers read the oplog from the master
[18:12:57] <LesTR> is it possible to read it from another server?
[18:13:58] <LesTR> example: one server with 12h lag could read from the one with 3h, and the second from the master
[18:14:10] <LesTR> can i do it on 2.0.7?
[18:24:39] <LesTR> does someone have an idea about this? Please : )
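
For what it's worth, this became possible in later releases via the replSetSyncFrom command (2.2+, if memory serves); on 2.0.7 there is no supported way to pick a sync source. A sketch, run on the lagging member:

    rs.syncFrom("secondary-with-3h-lag:27017")   // hostname is a placeholder
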
[19:27:16] <ninjai> can someone help me with authentication? I ran the command db.auth('admin', 'password') and it says "1" after. I try to use the init script to start graylog2 and I get an auth fail from mongo. Why?
[19:32:08] <linsys> ninjai: did you restart with --auth?
[19:33:26] <ninjai> linsys, no, because I cannot find out how. I use an init script in /etc/init.d, and there is not even a mention of the mongo command in it
[19:33:37] <ninjai> is there some other way I should be adding it in?
[19:35:11] <ninjai> when I try to stop the init script/service, and use "mongod --auth", I get this error: exception in initAndListen: 10296 dbpath (/data/db/) does not exist, terminating
[19:35:36] <linsys> err, what OS?
[19:37:29] <ninjai> ubuntu 12.04 server
[19:38:55] <linsys> In /etc/init/mongodb.conf you should have the ability to add --auth
[19:39:16] <linsys> it isn't in /etc/init.d/ because Ubuntu uses upstart scripts for a lot of stuff, including mongodb
[19:40:27] <ninjai> ok
[19:40:32] <ninjai> so how do I log in now?
[19:40:41] <ninjai> if I were to go use the db in cli?
[19:40:44] <linsys> with the user you created
[19:40:46] <ninjai> i did use admin
[19:40:50] <ninjai> then show dbs
[19:40:56] <ninjai> and says i need a login
[19:40:59] <ninjai> i don't know how to log in
[19:41:29] <linsys> use admin
[19:41:35] <ninjai> hm.
[19:41:37] <ninjai> well i did that
[19:41:39] <linsys> db.auth("someAdminUser", password)
[19:41:43] <linsys> http://www.mongodb.org/display/DOCS/Security+and+Authentication
[19:41:45] <ninjai> did that previously too
[19:41:56] <linsys> did you do it after you restarted mongodb with --auth
[19:42:16] <ninjai> no
[19:42:30] <ninjai> i did db.auth('admin', 'password'), and it said "1"
[19:43:11] <linsys> ok, first before you do --auth you need to add a user, then start mongodb with --auth then login
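
The sequence linsys is outlining, in shell form (user and password are placeholders; addUser is the pre-2.6 helper):

    > use admin
    > db.addUser("admin", "password")   // create the user BEFORE enabling --auth
    // restart mongod with --auth, then:
    > use admin
    > db.auth("admin", "password")      // returns 1 on success, 0 on failure
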
[19:44:57] <ninjai> i had
[19:44:58] <ninjai> i have this
[19:44:59] <ninjai> http://pastebin.com/hgrueNAy
[20:19:06] <quuxman> hi all. I just created a little helper library for pymongo. This is sort of an RFC...
[20:19:09] <quuxman> http://bpaste.net/show/41740/
[20:20:43] <quuxman> a couple examples of its use: db.Pages.search( mq().all('tags_index', 'food', 'art') )
[20:20:47] <quuxman> db.Feed.search( m().bt('created', now() - 86400 * 7, now()).one('class_name', 'Broadcast', 'Star') )
[20:21:13] <quuxman> oops, s/one/is1/ (just renamed that, as it conflicts with Python's 'in')
[20:24:16] <quuxman> I would figure I'm recreating something out there, but I haven't found it
[20:42:55] <geoffeg> findAndModify cannot operate as a cursor, right? if i wanted to use findAndModify's semantics with thousands of documents, i would have to run findAndModify thousands of times in a loop?
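
geoffeg has it right: findAndModify returns a single document, so a loop is the usual workaround. A sketch with invented collection, query, and update:

    var doc;
    while ((doc = db.jobs.findAndModify({
        query:  { state: "new" },
        update: { $set: { state: "taken" } }
    })) != null) {
        // process doc; each iteration atomically claims one document
    }
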
[20:43:11] <quuxman> does anybody even use pymongo here?
[20:47:23] <[MAN]> I don't
[20:59:21] <jgornick> Hey guys, with Mongo 2.0.x, does this issue still exist? http://stackoverflow.com/questions/6743849/mongodb-unique-index-on-array-elements-property
[21:22:37] <crudson1> jgornick: I don't think there is a planned feature for this at the index level. Has to be enforced at the application level.
[21:23:07] <jgornick> crudson1: Ok :( Thanks for taking a look at that!
[21:23:40] <crudson1> It comes up a fair bit - I've been searching the jira for matching issues, but I don't see any planned feature.
[21:26:30] <crudson1> jgornick: you may be able to use a feature of the language you use (e.g. Set vs Array), but you may not get automatic serialization.
[21:27:20] <jgornick> crudson1: using PHP
[21:30:04] <ninjai> how do i grant my admin user r/w to the admin database
[21:33:13] <crudson1> jgornick: don't know the php api sorry, but "uniqueness in an array" should be fairly standard material. Just find the most sensible part of your application or object model to manage the constraint.
[21:33:47] <jgornick> crudson1: Yes, pretty simple to implement. I'll check it out. Thanks for the insight.
[21:41:44] <quuxman> pretty curious about which APIs people here use for Mongo
[21:49:48] <eka> hi, is there any operation that, based on a query, will insert if there is no doc and do nothing if there is? I don't have an _id, just a query
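
The question goes unanswered here; on later servers (2.4+ adds $setOnInsert), one hedged way to express it is an upsert whose update document only applies at insert time:

    db.things.update(
        { name: "foo" },                          // the query
        { $setOnInsert: { name: "foo", n: 0 } },  // written only when inserting
        true                                      // upsert; a no-op if a match already exists
    )
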
[21:52:02] <ninjai> when I do a db.auth("user","pass"), what does the 1 or 0 that follows mean?
[22:24:56] <nexion> hey guys, is it possible to sort the result of a .find by a value that's computed from two other columns, all within mongodb?
[22:26:50] <nexion> like if I have a list of {a: 3, b: 5}, {a: 1, b: 7}, and I call .find(/* a + b > 7 */) or similar for getting the results in order by (a+b)?
[22:32:02] <crudson1> nexion: sure, depending on how big the query result size is
[22:32:58] <crudson1> nexion: for in-memory result sets, you can do db.col.aggregate({$project:{c:{$add:['$a','$b']}}}, {$sort:{c:-1}})
[22:34:52] <crudson1> nexion: oh I missed the > 7 bit
[22:36:37] <crudson1> but that will do such ordering for you, for the find bit you can use $where, but the two can't be used together
[22:37:38] <crudson1> actually you could do it all with .aggregate(), sorry am doing multiple things currently
[22:40:22] <crudson1> nexion: like aggregate({$project:{a:true,b:true,c:{$add:['$a','$b']}}}, {$match:{c:{$gt:7}}}, {$sort:{c:-1}})
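
Run against the two sample documents from the question, both sum to 8, so both pass the $gt: 7 match and come back with c: 8 (assuming a collection named col):

    db.col.insert({ a: 3, b: 5 })
    db.col.insert({ a: 1, b: 7 })
    db.col.aggregate(
        { $project: { a: true, b: true, c: { $add: ['$a', '$b'] } } },
        { $match: { c: { $gt: 7 } } },
        { $sort: { c: -1 } }
    )
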
[22:44:57] <MikeFair> Is there anything in MongoDB that's the equivalent of Couch Apps?
[22:46:18] <MikeFair> I'd like to make a small mobile app for the Android platform. I'd like the database for this app to synchronize with a server whenever its available
[22:46:59] <MikeFair> The database is pretty simple, it's basically a list of contact groups
[22:53:22] <nexion> crudson1: ty