PMXBOT Log file Viewer

#mongodb logs for Tuesday the 22nd of April, 2014

[00:51:20] <Ephexeve> Hey guys, I wonder, is there a way to add a password to a database? So the only way to query it is with a password? otherwise there is NO way to access it?
[00:52:12] <cheeser> you mean authentication?
[00:53:00] <Ephexeve> cheeser: Well, that would be when connecting to the server. Anyone could use/create whatever, but when using a certain database, you would need a password for it
[00:53:59] <Ephexeve> say, database foo, anyone would be able to query, but when trying to query something from database x, you would need a password
[00:54:25] <cheeser> sounds like this to me: http://docs.mongodb.org/manual/tutorial/enable-authentication/
[00:54:44] <Ephexeve> let me give a check
[00:55:33] <Ephexeve> cheeser: Could be, but I wonder, for ANY database he would need to authenticate, correct? Isn't there a way to make this for one database only?
[00:55:47] <Ephexeve> Anyone could connect and do whatever they want, except query database x
[00:55:51] <cheeser> i believe so, yes. read the tut and see.
[00:56:06] <Ephexeve> Thanks
[01:53:15] <k_sze[work]> I'm trying to perform an inline map-reduce on a secondary of my replication set, but I'm getting the "not master" error message.
[01:53:21] <k_sze[work]> What might I be doing wrong?
[01:54:27] <k_sze[work]> The command I am trying is this: db.my_collection.mapReduce(mapper, reducer, {out: {inline: 1}, sort: {some_key: 1}, jsMode: true})
[02:32:52] <k_sze[work]> And when I do "show collections" on the secondary, I get "not master and slaveOk=false"
[02:33:48] <cheeser> rs.slaveOk(true)
[02:35:44] <k_sze[work]> ok.
[02:35:53] <k_sze[work]> What's the equivalent using the pymongo api though?
[02:37:15] <k_sze[work]> ReadPreference.PRIMARY_PREFERRED?
[02:38:15] <k_sze[work]> ok, so that solved *part* of my problem.
[02:38:40] <k_sze[work]> The other part of my problem is that I somehow can't use sorting in my inline mapreduce.
[02:39:16] <k_sze[work]> As soon as I tell it to sort, it gives me zero input, zero emit, and zero reduce, and zero output.
[02:42:40] <k_sze[work]> Or has sorting never worked with *inline* mapreduce?
[02:49:39] <k_sze[work]> Actually, sorting has stopped working for me even with non-inline mapReduce on the master.
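The shape of the inline mapReduce call k_sze is running can be sanity-checked outside the server. Below is a minimal in-memory sketch of the semantics (pre-sort the input, let the mapper emit key/value pairs, group by key, reduce each group) in plain JavaScript, with invented documents and field names; it is not the server's implementation, just a way to check mapper/reducer logic.

```javascript
// In-memory sketch of mapReduce with a sort option: sort the input,
// let the mapper emit (key, value) pairs, group by key, then reduce.
// Docs and field names are invented for illustration.
function mapReduceInline(docs, sortKey, mapFn, reduceFn) {
  const sorted = [...docs].sort((a, b) => a[sortKey] - b[sortKey]);
  const groups = new Map();
  for (const doc of sorted) {
    // mapFn calls emit(key, value); collect emitted values per key
    mapFn(doc, (key, value) => {
      if (!groups.has(key)) groups.set(key, []);
      groups.get(key).push(value);
    });
  }
  const out = [];
  for (const [key, values] of groups) {
    out.push({
      _id: key,
      value: values.length === 1 ? values[0] : reduceFn(key, values),
    });
  }
  return out;
}

const docs = [
  { some_key: 2, user: "a", n: 1 },
  { some_key: 1, user: "b", n: 2 },
  { some_key: 3, user: "a", n: 3 },
];
const result = mapReduceInline(
  docs, "some_key",
  (doc, emit) => emit(doc.user, doc.n),
  (key, values) => values.reduce((s, v) => s + v, 0)
);
// result: [{_id: "b", value: 2}, {_id: "a", value: 4}]
```

If a real mapReduce with a sort returns zero input/emit/reduce, the sort spec or the sorted field itself is the first thing to check; a sketch like this confirms the mapper and reducer are not at fault.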
[04:05:31] <xissburg> Does this look good (python)? users.update({'_id': {'$in': [userAId, userBId}}, {'$set': {'available': False}}, {'multi': True})
[04:05:53] <xissburg> I want to set the 'available' attribute of both users to false
[04:06:30] <xissburg> bracket missing .... users.update({'_id': {'$in': [userAId, userBId]}}, {'$set': {'available': False}}, {'multi': True})
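As a plain-JavaScript illustration of what that corrected update should do to the matched documents (the ids and docs below are invented): an `$in` on `_id` selects both users, and `$set` with multi applies to every match. One caveat worth noting, hedged since the exact pymongo version isn't stated: in pymongo's legacy `update`, `multi` is a keyword argument (`multi=True`); a third positional dict like `{'multi': True}` would bind to the `upsert` parameter instead.

```javascript
// What update({_id: {$in: ids}}, {$set: {available: false}}, multi)
// amounts to over an in-memory array. Ids and docs are invented.
const userAId = "a1", userBId = "b2";
const users = [
  { _id: userAId, available: true },
  { _id: userBId, available: true },
  { _id: "c3", available: true },
];
const ids = [userAId, userBId];
for (const u of users) {
  if (ids.includes(u._id)) u.available = false; // $set on every match (multi)
}
// users "a1" and "b2" now have available: false; "c3" is untouched
```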
[04:49:42] <armyriad> How do I remove all the documents in a collection that match a particular criterion except the most recent five?
[04:50:05] <armyriad> Most recent five documents, that is, not the most recent five criterion
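A common answer to armyriad's question is a two-step pattern: fetch the `_id`s of the five newest documents matching the criterion, then remove every matching document whose `_id` is not in that list (i.e. `remove({...criterion, _id: {$nin: keepIds}})`). The selection logic, simulated in memory with invented fields:

```javascript
// Keep the five most recent docs matching a criterion; delete the rest.
// Simulated in memory; `ts` stands in for whatever "most recent" means.
const docs = Array.from({ length: 8 }, (_, i) => ({ _id: i, ts: i * 10, tag: "x" }));
const matching = docs.filter((d) => d.tag === "x");
const keepIds = [...matching]
  .sort((a, b) => b.ts - a.ts)   // newest first
  .slice(0, 5)
  .map((d) => d._id);
const toDelete = matching
  .filter((d) => !keepIds.includes(d._id))
  .map((d) => d._id);
// keepIds: [7, 6, 5, 4, 3]; toDelete: [0, 1, 2]
```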
[04:59:34] <arlev> Can a mongodb query use different indexes for the find() and sort()? Or can only one index be used for the cursor?
[07:06:01] <morfin> hello
[07:08:27] <morfin> what do you think: how well would MongoDB work for the following task? i need to create an interface where some managers will fill out some information for some clients, and there will be lots of different client types with totally different fields (only 2-3 will always be the same)
[07:10:01] <morfin> also i'll need to search by some of the fields, and all that does not fit into a relational database (i do not want to add new fields each time i need to add a new type)
[07:27:36] <kali> morfin: the search is the tricky part
[07:28:18] <kali> morfin: if there is more than a few hundreds records, you'll need indexes
[07:29:00] <kali> morfin: so chances are you'll need indexes on these optional fields
[07:29:17] <kali> morfin: (you may even need full text indexes actually)
[07:29:49] <kali> morfin: you may want to look at: http://blog.mongodb.org/post/59757486344/faceted-search-with-mongodb
[07:39:01] <morfin> i am thinking about making my app a scalable/robust application
[07:39:09] <morfin> oops
[07:39:36] <morfin> because when i use PostgreSQL i can use hacks like xpath but that's slow/ugly
[08:35:42] <hoczaj> hi
[08:36:28] <Zelest> heya
[08:51:52] <hoczaj> Is there any way to search for a string in the whole document?
[08:52:26] <hoczaj> I would like to accomplish the following: split a string on every space (" ")
[08:52:53] <hoczaj> For example: John Tomas California I will get [ 'John', 'Tomas', 'California']
[08:54:04] <Nodex> across every field?
[08:54:05] <hoczaj> I want to do a query, that will give me the result if in the following document it is matching somehow: {name: 'John Tomas', state: 'California', City:'Test'}
[08:54:08] <hoczaj> Yes
[08:54:10] <Nodex> No
[08:55:29] <hoczaj> I tried the following:
[08:55:44] <Nodex> it's not possible
[08:55:59] <hoczaj> Client.find({ $or:[ {name: {$in:arr}},{state: {$in:arr}} ]}, function (err, docs){ res.json(docs);});
[08:56:01] <hoczaj> ahh I see :(
[08:56:22] <Nodex> that's what "No" means :)
[08:56:34] <hoczaj> Any work around? :)
[08:57:00] <Nodex> you will have to either make a text index with a concat yourself or explode all the fields and store them in another field
[08:57:15] <rspijker> well... the query you wrote should 'work' right?
[08:57:28] <hoczaj> because the code that I mentioned almost worked. The only problem is the Name field.
[08:57:32] <rspijker> it might not do exactly what you want it to do
[08:57:39] <Nodex> {index_searchable:["John","Thomas","California","Test"]}
[08:58:19] <hoczaj> Because it will do the following: "John Tomas" in ['John', 'Tomas'] returns false. :(
[08:58:31] <hoczaj> So almost good, but not that good. :D
[08:58:49] <hoczaj> it works for fields like State and City, where you usually do not have a space (" ")
[08:59:51] <hoczaj> Hmm so you are suggesting to get every word place in a index_searchable, so i can just: index_searchable: {$in:arr}
[08:59:54] <rspijker> you could use $text on the name field, I suppose: http://docs.mongodb.org/manual/reference/operator/query/text/
[09:00:50] <Nodex> either will work fine
[09:01:05] <Nodex> $text has the added benefit of stemming and ranking
[09:06:24] <hoczaj> hm i'll give it a try
[09:24:20] <hoczaj> hm
[09:24:21] <hoczaj> error: { "$err" : "invalid operator: $search", "code" : 10068 }
[09:27:18] <hoczaj> ah nevermind... i have 2.4 ;) and it is supported in 2.6+
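Both options discussed above (Nodex's precomputed token array queried with `$in`, and rspijker's `$text` index, which needs 2.6+) rest on the same idea: multi-word values like "John Tomas" must be broken into single tokens before matching. The tokenizing approach, sketched in plain JavaScript with the example document from the conversation:

```javascript
// Build an index_searchable array by splitting every string field on
// whitespace, so multi-word values like "John Tomas" still match
// single-word queries via {index_searchable: {$in: arr}}.
function buildSearchable(doc) {
  const tokens = [];
  for (const value of Object.values(doc)) {
    if (typeof value === "string") {
      tokens.push(...value.toLowerCase().split(/\s+/));
    }
  }
  return [...new Set(tokens)]; // dedupe
}

const doc = { name: "John Tomas", state: "California", City: "Test" };
const index_searchable = buildSearchable(doc);
// index_searchable: ["john", "tomas", "california", "test"]
const matches = ["tomas"].some((t) => index_searchable.includes(t));
// matches: true, where {name: {$in: ["tomas"]}} would have failed
```

The `$text` route trades this manual bookkeeping for a server-maintained index, plus stemming and relevance ranking, as noted above.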
[09:40:00] <thanasisk> hi all. I am getting $err" : "not master and slaveOk=false", "code" : 13435 - even if I have configured it as a STANDALONE machine - anyone care to share where the misconfiguration might lie?
[09:40:24] <hoczaj> rspijker: $text wow it is cool and working !
[09:40:37] <rspijker> hoczaj: glad to hear it
[09:40:56] <rspijker> thanasisk: what does rs.status() print?
[09:42:14] <hoczaj> It is a bit strange, moving from mySQL to mongoDB (and from php to nodejs), but definitely getting better and better. :)
[09:42:46] <thanasisk> rspijker: > rs.status()
[09:42:46] <thanasisk> { "ok" : 0, "errmsg" : "not running with --replSet" }
[09:44:08] <hoczaj> rspijker: maybe could you help me with one more thing? When i start my application (node server.js) after the initialization i find this: Failed to load c++ bson extension, using pure JS version. It is working fine, but if I could make this disappear it would be nice.
[09:44:24] <rspijker> thanasisk: that's fairly weird.... What's the command line you're using for the mongod?
[09:44:52] <rspijker> hoczaj: OS?
[09:44:59] <thanasisk> mongodb 27263 1.5 10.3 445326380 2549436 ? Ssl Apr21 13:28 /usr/bin/mongod --config /etc/mongod.conf
[09:45:49] <rspijker> thanasisk: this is version 2.6?
[09:46:03] <thanasisk> yes
[09:46:56] <rspijker> and in /etc/mongod.conf, is there anything like slave=true or something?
[09:47:48] <rspijker> (in older version that line was normally commented out... haven't played with 2.6 yet)
[09:48:21] <thanasisk> let me verify
[09:49:03] <thanasisk> no this line is not present
[09:50:46] <thanasisk> the only non-commented out values are dbpath, logpath and logappend
[09:51:24] <thanasisk> rest are commented out by a different admin
[09:51:53] <rspijker> any particular command you get that error on?
[09:51:57] <rspijker> or do you get it always?
[09:52:32] <thanasisk> db.printCollectionStats()
[09:52:32] <thanasisk> this is where i got it
[09:55:20] <Lujeni> Hi - the best way to insert a document if not exists. Find and insert (if no result)? insert + unique index or update + upsert?
[09:57:11] <Nodex> upsert with $setOnInsert flag?
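Nodex's suggestion, sketched against an in-memory store (collection and field names invented): with upsert, `$setOnInsert` fields are applied only when the document is created, which makes insert-if-missing idempotent; repeating the call neither duplicates the document nor overwrites the insert-time fields.

```javascript
// upsert with $setOnInsert semantics: if a doc matches the query,
// leave the $setOnInsert fields alone; otherwise insert the query
// fields plus the $setOnInsert fields.
function upsert(coll, query, setOnInsert) {
  const found = coll.find((d) =>
    Object.entries(query).every(([k, v]) => d[k] === v)
  );
  if (found) return found;           // matched: $setOnInsert NOT applied
  const doc = { ...query, ...setOnInsert };
  coll.push(doc);
  return doc;
}

const coll = [];
upsert(coll, { key: "a" }, { createdAt: 1 });
upsert(coll, { key: "a" }, { createdAt: 2 }); // no duplicate; createdAt stays 1
// coll: [{ key: "a", createdAt: 1 }]
```

A unique index (as leifw recommends later in the log) is still the only way to make this race-free across concurrent writers.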
[09:59:05] <thanasisk> also: is there a support channel for MMS monitoring?
[10:07:08] <thanasisk> Either not primary agent, or no hosts configured. Sleeping...
[10:07:20] <thanasisk> wth is up with the color codes … effing macs
[10:12:52] <hoczaj> rspijker: sorry, did not reply, i was busy with the textScore. :)
[10:12:55] <hoczaj> rspijker: ubuntu 13.10
[10:15:32] <rspijker> hoczaj: looks like the bson module wasn't built correctly by npm then
[10:15:45] <rspijker> don't know enough about that to help you unfortunately
[10:16:14] <hoczaj> okay, thanks anyway :)
[10:20:40] <thanasisk> how can i use mms monitoring with multiple VPNed datacenters? (opening a connection to each datacenter is not an option)
[10:23:43] <rspijker> thanasisk: you run an agent on one of the machines which can connect outwards to MMS, for each datacenter
[10:24:15] <rspijker> so, you could run the agents on the VPN concentrators
[10:24:46] <hoczaj> rspijker: db.clients.find( { $text: { $search: "Hocza tér" } }, { score: { $meta: "textScore" } } ).sort({ score: { $meta: "textScore" } }).limit(3); It is working fine. Showing the three most relevant matches.
[10:25:05] <hoczaj> However when I want to use with the mongoose
[10:25:37] <hoczaj> Client.find({$text: { $search: req.param('q') }},{ score: { $meta: "textScore" } }).sort({ score: { $meta: "textScore" } }).exec( function (err, docs){res.json(docs);});
[10:25:56] <hoczaj> TypeError: Invalid sort value: {score: [object Object] }
[10:25:57] <hoczaj> :<
[10:28:19] <hoczaj> kinda hate when I copy paste the same code that worked at mongo CLI, and not working in my app :S
[10:31:30] <rspijker> hmmm, why can't you sort on an object? :/
[10:32:34] <rspijker> .sort("score":{$meta:"textScore"}) instead?
[10:33:25] <rspijker> dunno that much about mongoose really
[10:36:04] <thanasisk> rspijker: i tried that but they complain they are not the primary ones
[10:36:35] <thanasisk> rspijker: Either not primary agent, or no hosts configured. Sleeping...
[10:36:54] <abhishekg> Hi, wanted to check if mongoose ODM 3.8.8 could be used with the latest mongodb version
[10:45:12] <thanasisk> is there a mailing list? i can only see announce …
[10:47:04] <abhishekg> Hi all, wanted to check if mongoose ODM 3.8.8 could be used with the latest mongodb version
[10:53:52] <gskill934> how can i sort a date with foursquare rogue?
[11:27:45] <hoczaj> rspijker: I did with aggregate.
[11:27:54] <hoczaj> rspijker: Client.aggregate([{$match: {$text: { $search: req.param('q') }}},{ $sort: { score: { $meta: "textScore" } } }]).exec( function (err, docs){res.json(docs);});
[11:28:17] <rspijker> hoczaj: cool :)
[11:28:18] <hoczaj> rspijker: Not it is working fine. I do not know, whats wrong with the default sort(), however it did work.
[11:28:23] <hoczaj> Now*
[11:28:25] <hoczaj> :D
[11:28:40] <rspijker> yeah, think they changed formats for sort() a couple of times with mongoos
[11:28:43] <rspijker> e
[11:29:41] <hoczaj> Now my application can search ALL fields, and shows the most relevant customers. :)
[11:29:43] <hoczaj> *cheers*
[12:41:35] <tscanausa> does mongo db reallocate memory like the java heap?
[12:41:42] <tscanausa> preallocate*
[12:45:27] <kali> well, it does preallocate disk space
[12:46:11] <kali> and it will mmap this space. but that is not the same as preallocating physical memory
[12:46:42] <tscanausa> disk space is no problem, but when I do a large operation, the memory grows, and then 4 hours after completing the op it is still using 20-200x the ram
[12:47:08] <kali> it's not RAM per se. it's address space
[12:47:18] <kali> the kernel swaps it in and out
[12:47:34] <kali> tscanausa: http://docs.mongodb.org/manual/faq/storage/
[12:48:42] <tscanausa> so it is expected for the kernel to swap my server until it dies and I need to restart it?
[12:50:51] <kali> mmm no. ok. your problem is different
[12:50:58] <kali> what kind of operation are you running ?
[12:57:56] <tscanausa> map reduce since aggregation is too limited to handle it
[13:05:21] <kali> with 2.4 ? 2.6 ?
[13:08:39] <tscanausa> 2.4
[13:09:47] <kali> there was a leak in early 2.4 rcs, but i assume you have a stable 2.4... https://jira.mongodb.org/browse/SERVER-8442
[13:10:24] <kali> are you doing something fishy in your map and/or reduce function ?
[13:10:52] <tscanausa> I dont think so. what would be fishy?
[13:11:26] <kali> using anything but "this" and the function parameters
[13:12:47] <tscanausa> so kinda
[13:13:03] <kali> aha :)
[13:13:08] <kali> wanna show us ?
[13:13:10] <tscanausa> by strict definition I would say yes but by loose no
[13:13:16] <tscanausa> sure
[13:14:20] <tscanausa> https://gist.github.com/anonymous/11178688
[13:17:17] <kali> well, except for a missing "var" in front of "hours", i don't think this is fishy
[13:19:09] <tscanausa> awesome, any other suspicions as to what might cause unresponsive nodes and prolonged memory "holdage"?
[13:20:37] <kali> nope, sorry.
[13:20:51] <tscanausa> thanks for the help kali.
[13:21:05] <kali> :/
[13:23:59] <tscanausa> Its not your fault.
[15:33:23] <herbrandson> I have a question about using composite primary keys in mongodb. specifically around performance.
[15:33:31] <herbrandson> i posted about it on SO here... http://stackoverflow.com/questions/23164417/mongodb-and-composite-primary-keys
[15:34:31] <herbrandson> the unique id for my records is made up of two separate fields. neither one is unique by themselves, but together they will be unique
[15:34:42] <herbrandson> what's the best way to store this?
[15:39:54] <Nodex> 2.6 has some work done for index intersecting
[15:40:06] <Nodex> or another way, using multiple indexes in one uery
[15:40:09] <Nodex> query
[15:40:49] <herbrandson> no, i'm still designing things. just wanting to know what the best direction is
[15:41:23] <herbrandson> i know that in a RDBMS having non-sequential keys can be really bad for performance when doing inserts
[15:41:38] <Nodex> and as I have just said, 2.6 has work on index intersection
[15:42:00] <Nodex> are you worried about insert performance or read performance?
[15:42:06] <herbrandson> and i've seen some things on the internets that make it sound like the same is true in mongo
[15:42:59] <herbrandson> there will be more reads than inserts
[15:43:16] <herbrandson> but i don't want to cripple insert perf either
[15:43:35] <Nodex> well proper use of an index will keep both of them optimla
[15:43:38] <Nodex> optimal *
[15:44:00] <herbrandson> ok, great
[15:44:33] <herbrandson> so there are still a few ways I could store this
[15:44:47] <herbrandson> as a composite object in the _id field
[15:45:02] <herbrandson> or as multiple fields
[15:45:25] <herbrandson> it's not at all clear to me which i should choose and why
[15:45:50] <herbrandson> (i'm very new to mongo and still trying to figure out best practices)
[15:45:55] <Nodex> at this point I am not sure it matters
[15:46:16] <Nodex> without index intersection there would've been an index performance and size tradeoff
[15:47:37] <herbrandson> ok, that's good to know
[15:47:47] <herbrandson> are there any other trade offs i should be aware of?
[15:48:17] <herbrandson> i guess that w/ a composite key you can run into issues w/ the order of fields being important when doing queries
[15:48:21] <herbrandson> anything else?
[15:49:22] <Nodex> hard to answer without knowing what you're trying to achieve
[15:51:15] <herbrandson> another option is to concatenate them into one field. that option might be easier for "foreign key" type relationships
[15:51:36] <herbrandson> but i'd end up with something like a 256 bit key :)
[15:51:44] <Nodex> you should avoid relationships where possible
[15:51:45] <herbrandson> not sure what implications that might have on perf either
[15:53:30] <herbrandson> hmmmmm. ok, well that raises another question then
[15:54:00] <herbrandson> i have a structure where there is a game that has several hundred plays
[15:54:07] <herbrandson> the game will be edited almost never
[15:54:40] <herbrandson> but the plays will have dozens of edit's per second (for about a week)
[15:55:18] <herbrandson> i'd like to split the "play" documents into their own collection to help ease contention when being edited
[15:55:31] <herbrandson> is this not the correct approach in mongodb?
[15:55:38] <Nodex> the write lock is db wide not collection wide at present
[15:55:38] <cozby> whats best practice for monitoring a replica set...? I have 3 boxes, 1 primary 2 reps - in a production setup how do I monitor all three instances via http console?
[15:55:59] <cozby> do I proxy just the primary web console?
[15:56:07] <cozby> what about the other two reps?
[15:56:14] <cozby> (I hope I make sense)
[15:56:15] <herbrandson> oh. really?
[15:56:22] <Nodex> yup
[15:56:29] <herbrandson> so when someone does a write, it locks the entire db?
[15:56:35] <Nodex> correct :/
[15:56:56] <herbrandson> wow. ok. i didn't realize that
[15:56:56] <Nodex> check the roadmap for if/when this will change
[15:57:06] <herbrandson> how does that scale?
[15:57:27] <Nodex> For some people it's a pain, for others not a pain, it varies app to app
[15:58:13] <Nodex> I notice it every now and again. My app might freeze for 200ms while it's queued
[15:59:03] <Nodex> for anything write burst intensive I just put a cache in the middle (redis) and do all the work in there then move it to Mongodb later
[16:00:05] <herbrandson> ok. that's really good to know
[16:00:06] <herbrandson> i have to say, that might have made me pee my pants just a little :/
[16:00:14] <herbrandson> our app will be very bursty
[16:01:10] <Nodex> generally speaking it's not a problem. A lot of high burst apps use mongodb
[16:04:51] <herbrandson> ok, so there's a global lock when doing writes. so splitting plays into separate documents won't help with contention. what about issues with having to move records on disk as they grow?
[16:05:27] <herbrandson> i guess if everything is just one document, i'd just need to make sure there's plenty of padding for that collection, no?
[16:05:28] <Nodex> you can pad them if you need to
[16:05:39] <herbrandson> i believe that mongo does that automatically?
[16:05:46] <Nodex> but eventually your OS will start paging
[16:06:02] <cheeser> using powerOf2 sizing will help
[16:06:10] <Nodex> a certain amount of padding is done automatically, but too much is a waste of space
[16:07:33] <herbrandson> ok, this has been REALLY helpful. thanks so much
[16:07:46] <herbrandson> oh, back to the original question...
[16:08:09] <herbrandson> is there a performance hit on inserts for non sequential id's?
[16:09:10] <Nodex> I don't have a clue, I always use ObjectId's
[16:15:34] <thomasreggi> Im trying to query if field "x" doesn't exists or if it does and is false. https://gist.github.com/reggi/11185165 Ideas?
[16:19:17] <Nodex> tried an $or?
[16:19:55] <Nodex> in fact, not sure you can do that in one query
[16:20:17] <Nodex> unless you simply want an $nin or something
[16:25:01] <Lujeni> Hello - the best way to avoid duplicate data - find and insert if not exist ? or insert with a unique index (and try/catch) ?
[16:37:28] <leifw> Unique index is the right way
[16:52:01] <Veejay> Hi, is there a way to query a collection like: give me all the documents whose ID modulo N is 1?
[16:53:08] <Veejay> Ah my bad for not trying hard enough, there's a $mod
[16:53:18] <Veejay> http://docs.mongodb.org/manual/reference/operator/query/mod/
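What Veejay found: `{field: {$mod: [divisor, remainder]}}` matches documents where the numeric field satisfies `field % divisor == remainder` (it works on numeric fields, not on ObjectIds). The semantics in plain JavaScript, with invented docs:

```javascript
// {n: {$mod: [divisor, remainder]}} keeps docs where
// n % divisor === remainder.
function modFilter(docs, field, divisor, remainder) {
  return docs.filter((d) => d[field] % divisor === remainder);
}

const docs = [{ n: 1 }, { n: 4 }, { n: 7 }, { n: 9 }];
const hits = modFilter(docs, "n", 3, 1).map((d) => d.n);
// hits: [1, 4, 7]  (each is congruent to 1 mod 3; 9 is not)
```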
[17:30:31] <rickibalboa> I'm using the node native driver and I want to issue a command which isn't accessible via the api. it's "rs.initiate()", is it possible to do this?
[17:37:02] <rickibalboa> nvm found it
[17:43:46] <Tokenizer> hi, i'm new to mongodb but trying to learn, like many, by relating the traditional RDBMS type of approach to no-sql object based databases. So imagine I have a mongo host named "localhost". my connection string is "http://localhost/my_bag". I want to now store objects inside "some_collection". But I want to have some sort of organization within it. For instance don't dump all objects together
[17:43:46] <Tokenizer> in it in one big bag. I want to be able to have things like db.my_bag.oranges.find('1') .... (this is the notion of sub-collections of types). Where can I find info on this?
[17:48:46] <cheeser> no such thing as sub collections.
[17:52:15] <Tokenizer> so i want to use one mongo server here locally for all dev's
[17:52:28] <Tokenizer> http://someip/somebucket
[17:53:05] <Tokenizer> http://someip/someother_bucket still for dev's but for another type of object
[17:53:08] <Tokenizer> now for staging
[17:53:34] <Tokenizer> i have to make 2 new buckets to be able to distinguish my data collections?
[17:53:56] <Tokenizer> i thought thinking mongo is like thinking documents
[17:54:07] <Tokenizer> on a document file system, i can have a staging folder
[17:54:10] <Tokenizer> dev folder
[17:54:14] <Tokenizer> in them folders
[17:54:19] <Tokenizer> etc etc
[17:54:31] <Tokenizer> how's that sort of done in Mongo?
[17:58:35] <cheeser> you have databases that hold collections that contain documents
[17:58:49] <cheeser> db/collection/document
[17:59:11] <Tokenizer> so in that sense
[17:59:18] <Tokenizer> a ms word document
[17:59:46] <Tokenizer> goes in the same same "collection" as a let's say photoshop document
[17:59:52] <Tokenizer> yes that's one way
[18:00:02] <Tokenizer> but is that THE only way
[18:00:33] <Tokenizer> what if i want db/collection/documents/words/ and db/collection/documents/txt/
[18:00:41] <Tokenizer> and put things in there
[18:01:16] <cheeser> words can be a subdocument or an array in a document
[18:01:34] <Tokenizer> how do i make "sub" documents
[18:01:51] <Tokenizer> this notion of "sub" document is what I'm trying to find docs for
[18:01:54] <cheeser> http://docs.mongodb.org/manual/data-modeling/
[18:03:38] <Tokenizer> k
[18:04:16] <Tokenizer> so it all looks like right from the root "db.*" you are dealing with documents..... so let me ask this then. How do you go about using the same mongo server for both staging, and dev
[18:04:37] <Tokenizer> for the same app that puts stuff in let's say a collection name "fruits"
[18:05:03] <Tokenizer> "db.fruits"... or namely "http://myserver/fruits"
[18:07:48] <cheeser> no
[18:08:05] <cheeser> db.collection <-- that's a collection not a document
[18:08:41] <Tokenizer> so i should have db.dev
[18:08:46] <Tokenizer> for all my dev collection
[18:08:48] <Tokenizer> inside it
[18:08:52] <Tokenizer> under it's collections
[18:09:00] <Tokenizer> then put "fruits"
[18:09:03] <Tokenizer> "books"
[18:09:05] <Tokenizer> etc etc
[18:09:16] <Tokenizer> correct?
[18:09:21] <cheeser> sure. but try not hitting enter so much.
[18:09:32] <Tokenizer> sorry
[18:22:16] <kali> that's what a tokenizer does...
[18:26:19] <Tokenizer> thanks for the help that clarified
[18:54:26] <diogogmt> anybody has an example of the yml file config for mongo 2.6?
[18:57:27] <kali> diogogmt: that's not a mongodb thing. it's a mongoose or mongoid or play or whatever you're using. and it's probably the same as 2.4 anyway
[18:59:13] <diogogmt> kali: i’m trying to launch the mongodb deamon, it says on the docs that it accepts a yml file
[18:59:30] <diogogmt> i’m not trying to connect via an app, just to start the db
[18:59:32] <kali> show me the doc
[19:01:12] <diogogmt> http://docs.mongodb.org/manual/reference/configuration-options/#config-file-format
[19:01:12] <diogogmt> it says explicitly: “Changed in version 2.6: MongoDB introduces a YAML-based configuration file format. The 2.4 configuration file format remains for backward compatibility."
[19:01:12] <diogogmt> however with a yml config the db fails to start
[19:02:25] <kali> ha ok, sorry, this is new, I had never heard of it
[19:02:38] <kali> you get some kind of error message ?
[19:02:47] <Joeskyyy> i didn't know about that either… gonna make ansible even easier to read for deployments haha
[19:06:13] <Nodex> I hope they don't move to yaml
[19:06:20] <Nodex> permanently I mean
[19:06:32] <kali> i don't care, i don't use the config files
[19:06:34] <diogogmt> about to fork child process, waiting until server is ready for connections.
[19:06:35] <diogogmt> forked process: 8227
[19:06:35] <diogogmt> ERROR: child process failed, exited with error number 1
[19:06:58] <diogogmt> what da hell does error number 1 mean?
[19:08:43] <kali> diogogmt: can you check what command line options it is given ? --fork, i would assume ?
[19:26:42] <Acatalepsy> Hello, #mongodb .
[19:27:20] <Acatalepsy> I'm look for some help deciding how to store some records.
[19:30:23] <Acatalepsy> I've got a bunch of (time-sorted) records to filter by. However, some of those records reference other records. When I filter all records and find some record A, I want to also find record B that references record A. The catch is that this is recursive - if there's some record C that references B that references A, I want the query to return A, B,
[19:30:23] <Acatalepsy> and C (sorted by time) whenever my filter conditions match A.
[19:30:49] <Acatalepsy> How do, #mongodb ?
[19:39:12] <Acatalepsy> ...anyone on?
[19:42:09] <tscanausa> Acatalepsy: probably not going to work
[19:43:22] <Joeskyyy> I love when people use overly verbose sentences when talking about joins
[19:43:23] <Joeskyyy> lol
[19:44:06] <Acatalepsy> Hey, I'm only sort of doing a join here.
[19:44:37] <Joeskyyy> haha
[19:45:21] <Acatalepsy> Since all of this is within one collection I was vaguely hoping that there'd be a way to make it work within a single collection.
[19:45:27] <Acatalepsy> *single query.
[19:47:30] <Acatalepsy> Ie, something like find( {$or: [ {attribute: value }, {reference.attribute: value} ]})
[19:47:37] <Acatalepsy> Or something like that.
[20:10:56] <Acatalepsy> This is a chatty bunch.
[20:18:45] <Mark_> Acatalepsy, im new to mongo as well but theres really no join, single query type solution for that
[20:18:52] <Mark_> you might be able to do some server side scripting though
[20:19:03] <Mark_> or map/reduce but i dont know if its faster than just two round trip queries
[20:19:19] <kali> map reduce is not fast
[20:19:28] <kali> server side scripting is a terrible idea
[20:19:54] <kali> you need to pick a schema that will make your queries fast
[20:19:57] <Acatalepsy> What it seems like I'm going to do is embed the documents.
[20:20:53] <kali> that is probably a better approach
[20:21:21] <Acatalepsy> IE, give each document a 'references' array, and then do an $or.
[20:21:35] <Acatalepsy> The documents are not that big, really.
[20:21:58] <Acatalepsy> Any performance issues with respect to $or I should be aware of?
[20:23:41] <redsand> $maybe
[20:24:44] <Acatalepsy> Well, $or and $elemMatch .
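Acatalepsy's embedded-references plan can be sketched as a predicate: each record carries a `references` array holding the attributes of the records it (transitively) points at, so a single `$or` over the record's own field and the embedded array picks up A, B, and C together, time-sorted, without recursion at query time. A plain-JavaScript simulation with invented records and field names:

```javascript
// Match records whose own attribute equals the value, OR whose
// embedded references array contains it, i.e. the shape of
// find({$or: [{attr: v}, {"references.attr": v}]}).sort({time: 1}).
function matchOr(records, value) {
  return records
    .filter(
      (r) =>
        r.attr === value ||
        r.references.some((ref) => ref.attr === value)
    )
    .sort((a, b) => a.time - b.time);
}

const records = [
  { _id: "A", time: 1, attr: "x", references: [] },
  { _id: "B", time: 2, attr: "y", references: [{ attr: "x" }] },
  // C embeds its whole transitive chain, so one query finds it via "x"
  { _id: "C", time: 3, attr: "z", references: [{ attr: "x" }, { attr: "y" }] },
  { _id: "D", time: 4, attr: "q", references: [] },
];
const found = matchOr(records, "x").map((r) => r._id);
// found: ["A", "B", "C"]; D is excluded
```

The cost of this design is write-time bookkeeping: whenever C references B, C must copy in B's references too.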
[20:39:36] <ap_> greetings. is it possible to port/transfer a mongodb collection over to neo4j for graph analysis?
[20:39:55] <ap_> i've been trying to figure out how, but it has escaped me
[20:40:16] <cheeser> maybe with the mongo-connector
[20:44:19] <skot> ap_: you would need to write code to do something like that.
[20:44:47] <skot> ( you will need to define edges and relationships for your graph )
[20:45:25] <ap_> i see
[20:45:35] <ap_> so i'll need to define those before i try to send them over to neo4j?
[20:45:49] <skot> Well, depending on how you want it to work, yes.
[20:45:52] <ap_> https://developers.google.com/transit/gtfs/reference?csw=1
[20:45:59] <ap_> that's the data set
[20:46:36] <ap_> any ideas on what to start with?
[20:46:39] <skot> I'd suggest asking the neo4j people if anyone is working with that dataset to help you get a good model
[20:46:49] <ap_> ok
[20:47:06] <ap_> thanks!
[20:47:06] <skot> It is pretty unrelated to mongodb…
[20:47:16] <ap_> right
[20:47:24] <ap_> i'm just using node-gtfs with mongodb
[20:47:29] <ap_> and would like to keep that in place
[20:48:47] <skot> if this isn't you, you may want to watch this: http://stackoverflow.com/questions/23228460/gtfs-to-neo4j-using-node
[20:49:03] <ap_> nope, that's me!
[20:49:04] <ap_> haha
[20:49:08] <ap_> well done though
[20:50:09] <xissburg> sup mongoes
[20:50:19] <skot> ap_: looked at this? https://skillsmatter.com/skillscasts/5026-transport-network-route-finding-using-a-graph
[20:50:48] <ap_> no i haven't found that
[20:50:57] <ap_> excellent....will watch now
[20:50:59] <ap_> thank you!
[20:51:09] <skot> anyway, I'd hit up the neo4j list/irc
[20:51:19] <ap_> will do
[20:51:31] <ap_> thanks for your help, even though slightly unrelated to mongo
[21:01:43] <Simonn> mongodb = good db
[21:07:44] <brucelee> is there any recommendation against mongodb being on lvm?
[21:08:11] <brucelee> right now its on ebs, and on top of that, lvm
[21:10:40] <skot> no, that is fine.
[21:16:47] <xissburg> mongodb is cute, isn't it?
[21:18:00] <brucelee> skot: sw33t
[21:18:01] <brucelee> :p
[21:18:07] <phuh> Postgre supports json data type. How is mongodb compared to postgre in terms of its json doc storage function?
[21:18:15] <brucelee> is mongodb gridfs suitable for storing a shit ton of binary files?
[21:18:33] <brucelee> somehow we did that, and we reindex'd a few times, not knowing reindexing multiple times would corrupt the db
[21:18:51] <brucelee> how do we fix it? we tried the --repair option and it didnt work
[23:12:27] <LucasKA> ould someone suggest a mongo data model for this scenario? The use is tracking volunteer hours, I need to track all volunteers
[23:12:29] <LucasKA> | and how many hours they volunteered on a given night (shift), but also need to track each shift that a specific volunteer
[23:13:16] <LucasKA> Wow that got garbled.
[23:13:34] <LucasKA> It was supposed to say: Could someone suggest a mongo data model for this scenario? The use is tracking volunteer hours, I need to track all volunteers and how many hours they volunteered on a given night (shift), but also need to track each shift that a specific volunteer participated in, and be able to access their total hours per month (or any custom date range)
[23:17:07] <redfox_> Not that mongo wouldn't be able to do this, but this sounds like a perfect RDBMS scenario to me.
[23:19:41] <LucasKA> I'm using Meteor, so mongo is already included, I'm not really a backend person so it's already kinda tough for me. I'd like to stick to this, instead of worrying about wiring up another backend.
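For what it's worth, one plausible shape for LucasKA's data (not the only one): a `shifts` collection with one document per volunteer per shift, which makes "total hours for a volunteer over any date range" a simple match-and-sum, and maps directly onto an aggregation pipeline like `[{$match: ...}, {$group: {_id: "$volunteerId", total: {$sum: "$hours"}}}]`. Sketched in plain JavaScript with invented field names:

```javascript
// One document per volunteer per shift. ISO date strings compare
// lexicographically, so a [from, to) range filter is a plain comparison.
const shifts = [
  { volunteerId: "v1", date: "2014-04-01", hours: 4 },
  { volunteerId: "v1", date: "2014-04-15", hours: 3 },
  { volunteerId: "v2", date: "2014-04-15", hours: 5 },
  { volunteerId: "v1", date: "2014-05-02", hours: 2 },
];

function totalHours(shifts, volunteerId, from, to) {
  return shifts
    .filter((s) => s.volunteerId === volunteerId && s.date >= from && s.date < to)
    .reduce((sum, s) => sum + s.hours, 0);
}

const april = totalHours(shifts, "v1", "2014-04-01", "2014-05-01");
// april: 7 (4 + 3; the May shift falls outside the range)
```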