PMXBOT Log file Viewer


#mongodb logs for Friday the 15th of July, 2016

[00:29:35] <freem|nd> sup
[00:29:48] <freem|nd> can anyone please help with mongoose and unique indexes?
[00:32:53] <freem|nd> looks like it is just impossible to do unique indexes for subdocuments..
[00:43:22] <cheeser> shouldn't be. at least not in mongo. mongoose is another story.
[00:43:45] <Boomtime> ^ +1
[00:57:46] <MacWinner> freem|nd, you should be able to use dotted notation.. like schema.index({'parent.sub.field': 1})
[00:58:12] <MacWinner> haven't tried it myself, but I believe it to be true
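A minimal sketch of the dotted-notation index MacWinner describes, assuming a hypothetical mongoose schema with a nested subdocument field (untested, as he says, and uniqueness across subdocuments may still not behave the way freem|nd wants):

    // Hypothetical mongoose schema; model and field names are made up for illustration.
    const mongoose = require('mongoose');

    const parentSchema = new mongoose.Schema({
      parent: {
        sub: {
          field: String
        }
      }
    });

    // Declare an index on the nested path using dotted notation.
    parentSchema.index({ 'parent.sub.field': 1 }, { unique: true });

    module.exports = mongoose.model('Parent', parentSchema);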
[01:14:39] <minnesotags> I've searched, but can't find the answer. I have used "mongod -dbpath" to ensure the directory is /var/lib/mongodb. I can see that the dbpath is /var/lib/mongodb, but if I run mongod --repair, or mongod -shutdown, it still looks for the dbpath of /data/db and says it can't find the server. How do I remove this reference to /data/db?
[01:16:25] <IronMike> I'm new to MongoDB and NoSQL. I'm building a web service and want to use MongoDB as the datastore. However, I'm concerned about the consistency model for things like security tokens and such. Is this outside the scope of use cases for a db like mongo?
[03:35:34] <freem|nd> what is the best way to avoid unique key on subdocument
[06:48:01] <sumi> hello
[06:59:29] <joannac> hi sumi. did you have a question?
[06:59:56] <sumi> no thx
[07:00:06] <Zelest> i do! why does all the media say "nice truck attack" when it's not nice at all? :(
[07:00:18] <Zelest> sorry, couldn't resist.. :/ *horrible person*
[08:10:04] <Lonesoldier728> Hey so I am trying to query with not-equal and a second argument, but it's not working
[08:10:21] <Lonesoldier728> User.find({device: 2, username: {$ne: "test1"}})
[08:25:27] <koren> Lonesoldier728: can you show the document you want to query? Your request seems good. Also, do you use the mongo shell directly or are you using a driver?
[08:28:10] <Lonesoldier728> mongoose
[08:31:43] <koren> sorry I don't know mongoose, but your query is valid MongoDB, so maybe this comes from your model?
[09:05:52] <obiwahn> Derick: is there some wiredtiger related development channel or is that now all mongodb internal?
[09:14:16] <Derick> obiwahn: I don't think there ever was one
[09:14:48] <Derick> obiwahn: best bet is probably the mongodb-dev google group: https://groups.google.com/forum/#!forum/mongodb-dev
[09:17:33] <Zelest> wohoo! new laptop ordered! \o/
[09:19:54] <obiwahn> At the moment I am measuring wiredtiger against rocksdb. There seems to be a problem when the data does not fit into wiredtiger's cache. I wonder if I'm missing some configuration option..
[09:20:29] <Derick> Zelest: which one?
[09:20:37] <Derick> obiwahn: don't think I can help you with that one - sorry.
[09:20:46] <Zelest> a 2015 Lenovo X1 Carbon..
[09:22:11] <obiwahn> Does anyone happen to know if this is the "normal" behaviour? I got 300 * 10^6 key value pairs. And running over a selection of 30 * 10^6 elements takes about 6000 seconds:(
[09:22:19] <obiwahn> Zelest: the x1 is so sweet:)
[09:22:31] <Zelest> :D
[09:22:46] <Zelest> it will be my first one, so hopefully you're right :)
[09:22:46] <obiwahn> the new ones got 16gb ram:)
[09:22:56] <Zelest> yeah
[09:23:00] <Zelest> no issue for me though :)
[09:23:15] <Derick> i'm looking at a t460s - with 32GB ram
[09:23:16] <Zelest> and no skylake support yet
[09:23:26] <Zelest> all I need is chromium and vim so..
[09:23:31] <Derick> but, need to save money...
[09:23:39] <Zelest> my boss paid for mine so :)
[09:23:53] <obiwahn> i got the new x1 but i work like you Zelest
[09:23:54] <Zelest> got 512gb ssd, touchscreen, 4G/LTE, etc
[09:23:55] <Derick> I got a nice big fast desktop box from mine...
[09:24:09] <Zelest> ah
[09:24:31] <Zelest> obiwahn, yeah, but I can't wait for skylake support in BSD, so I settled with broadwell :)
[09:24:50] <Zelest> will probably regret it when skylake support is out, but meh, that's a problem for future Zelest :)
[09:24:55] <obiwahn> my bluetooth is not working on linux
[09:25:14] <obiwahn> That is a bit annoying:)
[09:25:21] <Zelest> what do you use bluetooth for? :P
[09:25:47] <obiwahn> i got a mouse for the x1
[09:25:59] <Zelest> ah
[09:26:09] <obiwahn> with dash loading. and it has a touchpad for presentations
[09:26:14] <Zelest> i have my old $10 microsoft mouse :D
[09:26:29] <Zelest> easily one of the best mouses i've ever had
[09:26:42] <obiwahn> it has some extra dongle - so it is not a big problem:)
[09:27:08] <Zelest> :)
[09:27:11] <obiwahn> at my desktop i use the razer deathadder for years now:P
[09:27:32] <obiwahn> pretty happy with that - it breaks every 3 years but that is ok:)
[09:28:08] <Zelest> hehe
[09:28:16] <Zelest> 4 more hours! then it's vacation! \o/
[09:28:31] <Derick> yay
[09:28:33] <obiwahn> yeahy:)
[09:29:02] <Zelest> one week in lisbon and a new laptop waiting for me when I get home :D
[09:30:56] <obiwahn> Zelest: are you from europe?
[09:31:14] <obiwahn> and what will you do in portugal?
[09:31:57] <Zelest> yeah, northern europe, Sweden :)
[09:32:07] <obiwahn> ah cool
[09:32:11] <Zelest> just relax, explore the city and look at old buildings and what not :)
[09:32:14] <Zelest> might visit Sintra
[09:32:20] <obiwahn> that is one of the countries i need to visit again
[09:32:29] <Zelest> oh?
[09:32:33] <obiwahn> i was in sweden as a child
[09:32:51] <Zelest> ooh, thought you meant portugal :)
[09:32:58] <obiwahn> i live in germany:)
[09:33:38] <Zelest> Ah :)
[09:33:44] <Zelest> was in Berlin last summer
[09:33:53] <Zelest> really wanna go there again
[09:34:15] <obiwahn> in the winter we have the meetingcpp conference:P
[09:34:22] <koren> in three months...
[09:34:28] <obiwahn> maybe you can ask your boss to send you there
[09:35:03] <Zelest> but I might go there anyway, and pay for it myself :D
[09:35:16] <Zelest> then I can visit c-base
[09:35:29] <Zelest> plan to learn cpp though
[09:39:52] <obiwahn> :) i plan to learn more javascript :)
[09:40:26] <Zelest> sounds like a date! ;)
[09:40:49] <Zelest> i'm somewhat allergic to javascript though
[09:40:54] <Zelest> or, the way some people use it
[09:41:09] <Zelest> all the messy "effects" people demand these days
[09:41:27] <Zelest> whatever happened to http://motherfuckingwebsite.com ? :(
[13:46:03] <atbe> Hey cheeser you around?
[13:46:12] <atbe> I tested the snapshot build posted in the bug report
[13:46:28] <atbe> Mongo client was created successfully.
[13:47:04] <atbe> But a SocketException is raised immediately after. The address is available on my network and port 27017 is open.
[13:47:33] <atbe> Normal behaviour and maybe an issue with my network?
[13:48:03] <atbe> https://www.irccloud.com/pastebin/IhNEVVmG/mongo_client_ipv6_stacktrace
[13:48:21] <atbe> java.net.ConnectException: No route to host
[13:48:38] <atbe> Must be my network. But ServerAddress parsing is fixed!
[13:50:19] <cheeser> could be network issues, yeah. it's not far after *this* code that we're down into jdk/netty code.
[13:57:34] <atbe> cheeser: yep, nice work. Looking forward to 3.3.0
[13:58:01] <atbe> Could I make a recommendation in regards to MongoClient instantiation?
[14:00:09] <atbe> When we began using the Mongo Java Driver, my colleague did not check whether the hosts provided to the MongoClient constructor were online in the first place (some of the hosts were down because EngOps was still configuring things). Instead of waiting out a 10000 ms timeout before raising SocketExceptions, it could be useful to check whether the socket is open in the first place.
[14:01:14] <atbe> I wrote a simple hostIsUp function that I would use to filter the potential hosts I would pass into the MongoClient constructor.
[14:01:14] <cheeser> the list of hosts is a seed list. the client will attempt to connect to the first it can find and discover the topology of your cluster.
[14:01:26] <atbe> https://www.irccloud.com/pastebin/Q32A3NdK/
[14:01:48] <cheeser> that's a race condition, though. what if the host goes down after that check?
[14:02:12] <cheeser> that timeout is configurable, though, i believe.
[14:02:19] <atbe> It is configurable
[14:03:12] <atbe> I agree, it is a race condition but at least a warning in the logs would help so users do not produce hanging threads if they for some reason need multiple MongoClients to target different components of the cluster.
[14:03:35] <atbe> This way they can choose whether they want to create potentially unresponsive MongoClients
[14:03:42] <atbe> Just a thought
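The discussion above is about the Java driver; purely as an illustration of the same two points (hosts as a seed list, configurable timeouts instead of pre-checking sockets), here is a rough sketch with the Node.js driver. The host names and timeout values are made up, and the option names follow recent versions of the Node driver:

    // Sketch only: multi-host seed list plus shortened timeouts in the Node.js driver.
    const { MongoClient } = require('mongodb');

    // The driver treats this as a seed list: it connects to whichever member it
    // can reach and discovers the rest of the cluster topology from there.
    const uri = 'mongodb://host1.example:27017,host2.example:27017/?replicaSet=rs0';

    // Rather than pre-filtering hosts (which is racy, as cheeser notes),
    // lower the timeouts so an unreachable seed fails fast.
    const client = new MongoClient(uri, {
      connectTimeoutMS: 2000,          // per-socket connect timeout
      serverSelectionTimeoutMS: 5000   // how long to wait for a usable server
    });

    client.connect()
      .then(() => console.log('connected'))
      .catch(err => console.error('no reachable host:', err.message));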
[14:16:17] <cheeser> you can always file a jira to that effect. that's going to be more effective than an irc conversation ;)
[14:34:59] <atbe> cheeser: good deal
[14:36:40] <tantamount> File an entire JIRA! ^_^
[14:49:24] <saml> in db.coll.update can I replace matching value of an array field?
[14:50:42] <koren> sure! with the $ operator. db.col.update({'arrayItems.property': 'match me!'}, { $set: {'arrayItems.$.matched': true}})
[14:50:52] <saml> db.articles.update({author: 'Web Scale'}, {$set: {author: new Array where author Web Scale is replaced with WebScale
[14:51:23] <saml> what's .$.
[14:51:34] <koren> I think I misunderstood your question
[14:51:37] <saml> Acts as a placeholder to update the first element that matches the query condition in an update.
[14:51:50] <saml> https://docs.mongodb.com/manual/reference/operator/update/positional/#up._S_ nice
[14:53:35] <saml> db.docs.update({author:'foo'}, {$set:{'author.$': 'Foo'}}, {upsert:false, multi:true})
[14:53:38] <saml> thanks
[14:54:31] <koren> you may have to run the query multiple times
[14:54:41] <koren> in case there is more than one 'foo' in a single array
[14:55:25] <koren> { author: ['foo', 'bar', 'foo'] } after your query will be { author: ['Foo', 'bar', 'foo'] }
[14:56:10] <koren> so it may not be suitable for your needs; for one-shot maintenance it's ok but maybe you need something else
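A shell sketch of what koren means: because the positional $ operator only touches the first matching array element per document, a one-shot maintenance fix can loop until nothing is modified any more (collection and values taken from saml's example):

    // Repeat until no document still has 'foo' in its author array;
    // each pass fixes only the first matching element per document.
    var res;
    do {
      res = db.docs.update(
        { author: 'foo' },
        { $set: { 'author.$': 'Foo' } },
        { multi: true }
      );
    } while (res.nModified > 0);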
[15:00:50] <Lope> I'm running this update function update({},{$pull:{'profile.wish':webref,'profile.own':webref}},{multi:true});
[15:01:07] <Lope> it's unfortunately causing 6 updates, when only 2 documents have the webref in question.
[15:01:27] <Lope> is there a way to specify a query so that I only update the relevant documents?
[15:05:41] <cheeser> your query is empty...
[15:06:00] <cheeser> you're basically saying, "update every document with this set of updates"
[15:08:37] <Lope> yes. this works, but I'm not sure if it's ideal: {$or:[{'profile.own':'foo'},{'profile.wish':'foo'}]}
[15:08:56] <Lope> should I be adding $elemMatch to the above?
[15:12:54] <Lope> does it make any difference whether I search for an element inside an array in a document using $eq or $elemMatch?
[15:13:39] <koren> yes, elemMatch will return documents that match all the conditions inside elemMatch
[15:14:04] <Lope> oh, so $eq is a simpler operator?
[15:14:29] <Lope> So $elemMatch is when you have multiple conditions, and $eq is simply one, equals?
[15:14:37] <koren> { elem: [{ a: 1, b: 2}, {a: 3, b: 4}] }, { elem: [{ a: 1, b: 1}, { a: 3, b: 4}] }
[15:14:51] <koren> no, because you can use $and for multiple conditions with $eq
[15:14:56] <koren> but elemMatch will go deep down arrays
[15:15:14] <koren> and return documents that contain objects matching all the conditions
[15:15:37] <Lope> okay
[15:15:39] <koren> with $and and $eq, in the document I pasted you will have a hard time getting documents that have an array element with a: 1 and b: 2
[15:16:24] <koren> because it will return this document too: { elem: [{ a: 1, b: 1}, { a: 2, b: 2}]}
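A shell sketch of the difference koren is describing, using his two example documents (the collection name is made up):

    // Two documents with arrays of subdocuments, as pasted above.
    db.things.insertMany([
      { elem: [ { a: 1, b: 2 }, { a: 3, b: 4 } ] },
      { elem: [ { a: 1, b: 1 }, { a: 2, b: 2 } ] }
    ]);

    // Plain dotted conditions: "some element has a: 1" AND "some element has b: 2".
    // The two conditions may be satisfied by different elements, so both documents match.
    db.things.find({ 'elem.a': 1, 'elem.b': 2 });

    // $elemMatch: a single array element must satisfy all the conditions,
    // so only the first document matches.
    db.things.find({ elem: { $elemMatch: { a: 1, b: 2 } } });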
[15:21:06] <Lope> Thanks
[15:25:57] <tantamount> How can I print out a document that's not in a cursor?
[15:28:34] <Derick> printJson() ?
[15:32:39] <Karbowiak> any of you able to help me figure out where i'm going wrong with the following query: http://pastebin.com/0g9pFa6j
[15:34:05] <Karbowiak> in sql it returns a list of regionIDs, together with the number of kills it's found - but in mongo i just get an array like: array("_id" => array("source" => regionID), "count" => 2.0)
[15:35:00] <koren> you put the sum in the group id
[15:35:11] <koren> group only by region ID and add the sum in a property other than _id
[15:35:20] <koren> oh sorry, misread
[15:36:48] <Karbowiak> i'm honestly lost, lol
[15:37:37] <Derick> let's have a look
[15:38:14] <Derick> {$group: {_id: {source: "regionID"}, count: {$sum: 1}}},
[15:38:17] <Derick> ought to be:
[15:38:33] <Derick> {$group: {_id: '$regionID' }, count: { $sum: 1 } },
[15:38:40] <Derick> not sure where this "source" came from
[15:39:00] <Derick> the sort is wrong too
[15:39:08] <Derick> {$sort: {"count": -1}},
[15:39:09] <Derick> should be:
[15:39:18] <Derick> hmm, wait
[15:39:40] <Karbowiak> i was googling to find some info on it, and one site said to use source :D
[15:39:41] <Derick> no, the sort is fine.
[15:39:47] <Derick> Karbowiak: do you have a link?
[15:40:13] <Karbowiak> https://www.mkyong.com/mongodb/mongodb-group-count-and-sort-example/
[15:40:35] <Derick> oh, in the 3rd one
[15:40:47] <Karbowiak> hm, now it says: "errmsg" : "A pipeline stage specification object must contain exactly one field.",
[15:40:57] <Derick> Karbowiak: can you pastebin again?
[15:41:11] <Karbowiak> http://pastebin.com/9C8SUcum
[15:41:30] <Derick> oh, you need [ ] around the pipeline. Each stage is an array element.
[15:41:47] <Derick> and there is an error in the group still
[15:41:51] <Karbowiak> oh, so [$match ...], [$group ...]
[15:42:00] <Derick> (because I got it wrong)
[15:42:12] <Derick> {$group: {_id: '$regionID' , count: { $sum: 1 } } },
[15:42:22] <koren> no, [{$match:{}}, {$group: {}}]
[15:42:57] <Derick> yeah
[15:44:29] <Karbowiak> \o/
[15:44:38] <Karbowiak> http://pastebin.com/SV5zCvsg
[15:44:41] <Karbowiak> you guys rock <3
[15:45:52] <Karbowiak> only question left really, is it possible to make _id become regionID? or should i just accept it :D
[15:46:36] <Karbowiak> or i guess, add more fields to the output
[15:48:29] <Derick> you can, with a projection later
[15:48:54] <Derick> { $project: { 'regionID': '$_id', 'count' : '$count' } }
[15:49:35] <koren> I think you can also { $id: 0 } in the $project to remove it completely, otherwise pipeline keeps it
[15:50:34] <Derick> '$_id': 0
[15:51:14] <Karbowiak> "_id": 0 did it
[15:51:19] <Karbowiak> neat
[15:53:05] <Karbowiak> thanks to the both of you tho! this really _REALLY_ helped
[15:53:19] <Karbowiak> i sort of actually understand it now.. sort of..
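Putting the corrected pieces together, the pipeline Karbowiak ended up with would look roughly like this; the collection name is a guess and the $match stage stands in for whatever filter was in the original pastebin:

    // Count kills per region, sort by count, and rename _id back to regionID.
    db.kills.aggregate([
      { $match: { /* original filter from the pastebin goes here */ } },
      { $group: { _id: '$regionID', count: { $sum: 1 } } },
      { $sort: { count: -1 } },
      { $project: { _id: 0, regionID: '$_id', count: 1 } }
    ]);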
[16:00:54] <tantamount> How am I meant to debug, "E QUERY undefined at src/mongo/shell/assert.js:0"
[16:03:46] <braqoon> Hi I'm trying to transfer db.collection.find().sort({$natural:1}).limit(1); to pymongo syntax. Got db.collection.find(sort=('$natural', 1), limit=1) but it throws me an error "too many values to unpack". Any thoughts ?
[16:19:51] <Teaboy> anyone know why createIndex() in php might not be working for me?
[16:28:43] <btorch> is there a way to make mongodb not choose a rs member's name automatically ?
[16:29:36] <btorch> like it's choosing "name" : "mongo1:27017" but instead I want it to use the IP, or it has to be something like mongo1.local
[16:30:13] <koren> you should put it in the hostname of the replicaset, ensure it is resolvable and you can use it when doing your rs.add
[16:30:47] <koren> but clients must use the same name, so they should also be able to resolve it
[16:30:58] <koren> (from what I understood)
[16:35:34] <btorch> koren: not sure what you mean, cause I don't see any option for that to include in the mongod.conf under replication
[16:37:43] <btorch> all I did so far was to run rs.initiate()
[16:38:05] <btorch> and then tried rs.conf() but that just gives back "not authorized on admin to execute command"
[16:38:17] <btorch> rs.status() works though
[17:42:34] <atbe> Does anyone know where I can configure syslog forwarding to a virtual ip?
[17:42:54] <atbe> https://docs.mongodb.com/manual/reference/configuration-options/ I found this but it does not specify output that is not local
[18:20:23] <kurushiyama> btorch You should read about DNS and what it means...
[18:21:30] <kurushiyama> btorch The name is supposed to be a DNS hostname, which mongo uses to resolve the IP.
[18:23:54] <kurushiyama> btorch And actually, you should never use an IP, not even an A record, for replica set member addresses. Actually, the suggested way of addressing replica set members and config servers is CNAMEs. You should only deviate from this best practice if you _really, really, really_ understand what you are doing.
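A sketch of what kurushiyama is describing: initiate the replica set with resolvable DNS names instead of letting mongod pick up the machine's own hostname. The set name and hostnames below are placeholders:

    // Run once on one member; every member name must be resolvable by all
    // other members *and* by all clients.
    rs.initiate({
      _id: 'rs0',
      members: [
        { _id: 0, host: 'mongo1.example.local:27017' },
        { _id: 1, host: 'mongo2.example.local:27017' },
        { _id: 2, host: 'mongo3.example.local:27017' }
      ]
    });

    // Or, after a bare rs.initiate(), add further members by name:
    rs.add('mongo2.example.local:27017');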
[19:00:06] <theGleep> Hi, all - I have another question later ... but first; the welcome message says there's a searchable log of conversations? How do I search it?
[19:18:02] <cheeser> theGleep: you open the url with "logger" in the name
[19:31:47] <theGleep> I did ... and I got: a calendar, a list of channels, one day of conversations. I couldn't find any way to search the history. :(
[19:32:59] <cheeser> click on the day and you'll get that day's logs.
[19:34:16] <theGleep> I kinda figured that :) ... but is there a way to search across days? I'd like to see if my *real* question has been asked before I ask it again ... but it's not worth the time browsing through who-knows-how-many-days of log to find it
[19:34:31] <cheeser> i don't think there is
[19:35:17] <theGleep> Bummer .. pmxbot said "searchable" when I logged in, and I was hoping I could find it in the histories.
[19:35:19] <theGleep> Oh, well.
[19:36:13] <theGleep> So, here's my *real* question: Using Node.js and the promises version of the driver: mongo.connect(url).then(...) - what's the best way to handle closing the database?
[19:38:40] <koren> I personally do something quite ugly: I connect with the promise on app start and put the db into global.db, then I access it everywhere directly via db, and I register a handler for sigint/sigterm to perform a db.close
[19:39:36] <koren> never heard of best practice about this, I would be interested too
[19:40:40] <theGleep> koren: I played around with that approach and bailed on it...don't remember why...I think it had to do with timeouts or disconnects or something...
[19:41:25] <theGleep> What I'm doing now is akin to the documentation (which only shows the callback approach, not the promises approach) - and has collection.db.close() buried *deep* in the callback chain.
[19:41:41] <koren> theGleep: yeah I understand. I deliberately don't handle the connection drop errors, so the service crashes on purpose and is then started again; that way I don't have this kind of issue (a service running without a connection)
[19:42:01] <koren> I read something about connection close, you should keep it open for the lifetime of the app and never close it
[19:42:24] <koren> unless your node app is running as cgi cli or something
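A rough sketch of the pattern koren describes (connect once at startup, share the handle, close only on shutdown), written against the 2.x Node driver being discussed, where MongoClient.connect() resolves with the db object; the module layout is just one possible arrangement:

    // db.js -- connect once at startup, share the handle, close only on shutdown.
    const MongoClient = require('mongodb').MongoClient;

    let db = null;

    function connect(url) {
      return MongoClient.connect(url).then(function (database) {
        db = database;
        return db;
      });
    }

    function getDb() {
      if (!db) throw new Error('connect() has not completed yet');
      return db;
    }

    // Close the single shared connection when the process is told to stop.
    process.on('SIGINT', function () {
      if (db) db.close().then(function () { process.exit(0); });
      else process.exit(0);
    });

    module.exports = { connect: connect, getDb: getDb };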
[19:42:28] <theGleep> I *really* don't like that - the "imbalance" really bugs me
[19:42:52] <theGleep> Was that connection.close or db.close? is there a difference?
[19:43:25] <koren> I was talking about db close sorry, just talked about connection since it is actually a tcp connection
[19:43:41] <theGleep> that's cool...
[19:44:24] <theGleep> The 3.2 docs seem to indicate the db should be closed after every call ... it looks like it's inherent in the architecture (based on the docs)
[19:44:39] <koren> hum this surprises me
[19:44:43] <koren> it adds a big overhead
[19:44:47] <theGleep> Did me, too.
[19:44:48] <koren> have you a reference for this?
[19:44:57] <theGleep> let me see if I can find the url for it
[19:45:19] <theGleep> http://mongodb.github.io/node-mongodb-native/2.1/tutorials/crud/
[19:45:26] <theGleep> (I just happened to have it open :)
[19:46:05] <theGleep> see how you only get the db through the call to MongoClient.connect()?
[19:46:08] <koren> indeed
[19:46:24] <koren> they close it every time but in a real app I think it is bad practice
[19:46:40] <theGleep> You *can* get the DB in a then-able: MongoClient.connect(url).then(function(db) {})
[19:46:45] <theGleep> but you have the same problem...
[19:47:08] <Karbowiak> hey guys, me again. I've setup a text index on a string in a collection, the names can have weird symbols, including -, when i'm doing db.getCollection('corporations').find({$text: {$search: "4M-Corp"}}) it never actually matches 100% on 4M-Corp, instead it just matches everything, any way to make it take the - into account as a part of the string?
[19:47:08] <Karbowiak> :)
[19:47:09] <theGleep> I agree - it seems to force a bad practice.
[19:47:46] <Karbowiak> and by weird symbols i mean symbols, non-weird.. i suck at wording things
[19:47:48] <theGleep> Karbowiak: can you escape the -?
[19:47:51] <koren> Karbowiak: have you tried to escape it? \-
[19:47:54] <theGleep> :)
[19:48:02] <Karbowiak> yep
[19:48:04] <Karbowiak> same result
[19:48:28] <koren> I think $search returns something with a rating for a full text search
[19:48:28] <Karbowiak> if it's just 4M it finds 4 results as expected, but as soon as the - is there it just matches everything
[19:48:33] <theGleep> Big-K: I would guess that your search is using user input, so you can't easily convert it to a regular expression...
[19:48:46] <theGleep> ?
[19:49:13] <koren> what about db.getCollection('corporations').find({$text: /4M\-Corp/}) ? :D
[19:50:16] <Karbowiak> text expects an object :P
[19:50:36] <theGleep> that's what I was thinking - but it requires a "hard-coded" regular expression. But if the regular expression works, it wouldn't be that hard to create a text equivalent and do a replacement
[19:51:50] <Karbowiak> ahh
[19:51:54] <Karbowiak> db.getCollection('corporations').find({$text: {$search: "\"4M-CORP\""}})
[19:51:58] <theGleep> Here's a wild side-idea...maybe you could include a property "words" that would be text-field.split(" ") (or a regular expression to split on any whitespace) ... then you could search for a word...
[19:52:03] <Karbowiak> the other way it searches for 4M and CORP
[19:52:04] <Karbowiak> lol
[19:52:09] <theGleep> just a random thought
[19:52:51] <theGleep> So "\"4M-CORP\" worked correctly?
[19:53:13] <Karbowiak> yeah
[19:53:35] <koren> hey so you had " inside the string
[19:54:16] <theGleep> interesting ... that implies that the parser *honors* an exact-term spec; but by default assumes something else.
[19:54:37] <theGleep> I'm a noob, btw, so you can ignore anything I have to say ... unless it's a good idea :)
[19:54:44] <Karbowiak> db.getCollection('corporations').find({$text: {$search: "\"4M-\""}}, {score: {$meta: "textScore"}}).sort({"$score": -1}) neat, this will do
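For reference, the pattern that ended up working: escaped double quotes inside the $search string make $text treat 4M-Corp as a phrase instead of OR-ing the tokens, and the textScore projection and sort rank the best matches first.

    // Phrase search: the inner \"...\" keeps "4M-Corp" together instead of
    // degenerating into (4M OR Corp).
    db.getCollection('corporations')
      .find(
        { $text: { $search: "\"4M-Corp\"" } },
        { score: { $meta: "textScore" } }
      )
      .sort({ score: { $meta: "textScore" } });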
[19:55:05] <theGleep> cool.
[19:55:18] <theGleep> glad I could be there for you :)
[19:55:30] <theGleep> so ... back to the discussion of db.close()
[19:56:34] <Karbowiak> thanks guys! :)
[19:56:44] <Karbowiak> sorry for the interruption
[19:57:05] <theGleep> the only way I can really see to make the code cleaner is like you said you did, koren: create an object that keeps the database around ... but the only way to account for accidental closures is to call "getDB" which would re-connect when needed...
[19:57:30] <theGleep> (No prblem Karbowiak - makes me feel better to give back when I'm looking for help myself! :)
[19:58:51] <theGleep> ... and if I have to call getDB() anyway...the only value I'm getting is to have the "db.close()" be centralized somewhere ... and that creates the other problem of how do I fire that other somewhere? I'm not aware of any inherent "onunload" event for objects during GC...
[20:00:49] <koren> well in our services we do a db connect, add the db to a global, and let the service crash if the db is not available, so it is restarted (docker container with auto restart)
[20:00:58] <koren> so if it crashes once in a while it's not a big deal
[20:01:01] <koren> and no overhead in code
[20:01:07] <theGleep> with promises, it seems there *could* be something like connect(url).then(db => {}).finally(db => db.close)
[20:01:47] <theGleep> Yeah .. I am figuring that at least one of my projects will need to have "forever app.js" on it. Don't like that approach, tho
[20:02:42] <theGleep> I'm actually looking into using the Bluebird promise library, which provides the concept of a "disposer" - but I haven't figured out how that works. The docs are written from an insider's view, so I'm having trouble really understanding how to use it.
[20:03:43] <theGleep> But if there's an official way that makes more sense, I'd rather do that.
[20:06:16] <Karbowiak> daym, it seems the php driver doesn't support the $meta operator, so i can't sort by score
[20:06:47] <theGleep> Oh? I didn't know that :) :) :)
[20:06:56] <Karbowiak> $find = $this->collection->find(array("\$text" => array("\$search" => "\"$searchTerm\""), "score" => array("\$meta" => "textScore")), array("sort" => array("\$score" => -1)))->toArray();
[20:07:10] <Karbowiak> unless the score part should go into the options part, lemme test that
[20:07:10] <Karbowiak> :D
[20:07:39] <Karbowiak> yeah, yeah, nevermind - that did it
[20:07:39] <Karbowiak> lol
[20:07:59] <theGleep> cool.
[20:14:09] <theGleep> So - for those checking in but not scrolling back; here's my question. What's the official best practice for db.close() when using the promises version of the API that's exampled here: http://mongodb.github.io/node-mongodb-native/2.1/tutorials/crud/
[20:14:14] <theGleep> ?
[21:27:57] <JFlash> please help. I'm trying to start mongo service but it wont read my config file
[21:28:38] <JFlash> says error reading config file, no such file
[21:28:41] <JFlash> but the file is there
[21:28:46] <JFlash> the location is correct
[21:28:56] <JFlash> /etc/mongod.conf
[21:29:09] <theGleep> Rights issue?
[21:29:35] <JFlash> i tried sudo start instead
[21:29:43] <JFlash> same error
[21:30:26] <theGleep> you can 'cat' the file?
[21:32:01] <JFlash> yes
[21:32:07] <JFlash> i can
[21:32:10] <JFlash> here are the permissions
[21:32:12] <JFlash> -rw-r--r-- 1 root root 1693 Jan 13 2015 /etc/mongod.conf
[21:34:32] <JFlash> should I change the permission to user mongo or something
[21:34:37] <JFlash> this so frustrating
[21:35:25] <cheeser> how are you running mongod?
[21:35:34] <JFlash> yes
[21:35:40] <JFlash> service start mongod
[21:35:43] <JFlash> oops
[21:35:49] <JFlash> service mongod start
[21:35:53] <JFlash> status works
[21:36:08] <JFlash> start works but then when I check status I see it failed
[21:36:27] <cheeser> do this: mongod -f /etc/mongod.conf
[21:36:31] <theGleep> what about the folder permissions? You have "x"?
[21:36:32] <cheeser> see what output you get
[21:36:53] <cheeser> it's /etc. if perms on *that* folder get screwed up, linux catches fire.
[21:37:15] <JFlash> I get this error
[21:37:18] <theGleep> Ah ... right. I realized that a few minutes ago -- and promptly forgot it again
[21:37:20] <theGleep> :)
[21:37:33] <JFlash> Failed global initialization: FileNotOpen Failed to open "/var/log/mongodb/mongod.log
[21:37:47] <cheeser> mkdir /var/log/mongodb
[21:37:48] <cheeser> try again
[21:37:56] <cheeser> as root, of course
[21:38:02] <cheeser> or the service user for mongod
[21:38:23] <JFlash> cannot create. file exists
[21:39:45] <theGleep> ... any chance you have another copy already running?
[21:39:49] <cheeser> is it a file? or a directory?
[21:39:52] <cheeser> also, that.
[21:40:06] <JFlash> another copy of what?
[21:40:16] <theGleep> mongod
[21:40:26] <cheeser> ps auxw | grep mongod
[21:40:31] <JFlash> how can I tell?
[21:40:36] <JFlash> okey
[21:42:47] <cheeser> ok. burrito time. good luck, JFlash. :D
[21:42:56] <JFlash> so I had 3 lines
[21:43:02] <JFlash> I closed all terminal
[21:43:05] <JFlash> I got one line
[21:43:12] <JFlash> and I tried again and got the same error
[21:43:34] <JFlash> this is the line I got:
[21:43:35] <JFlash> joao 21554 0.0 0.0 9496 2184 pts/1 S+ 18:32 0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS
[21:45:01] <theGleep> Hmm...been a long time since I did a "kill" in *nix ... could you "kill mongod"?
[21:45:27] <theGleep> ... then try the non-service start command again?
[21:45:50] <koren> well yes you can kill the process and restart it
[21:46:03] <koren> but don't force kill it, that may corrupt data; kill and wait
[21:46:15] <koren> but it will keep the mongo.lock file
[21:46:22] <koren> and it will run a check on your data I think
[21:46:28] <koren> on the next startup
[21:46:54] <theGleep> Yeah - it would be a worst-case option.
[21:47:18] <theGleep> OK...second-to-worst; reboot would be the real worst-case...
[21:47:20] <theGleep> :)
[21:47:29] <theGleep> (wait...re-install)
[21:49:20] <koren> btw JFlash, when you ps -x | grep you see your own grep command in the ps output
[21:49:26] <koren> so you will always have 1 match
[21:49:43] <koren> since ps -x returns the "grep mongod" line and you grep on that
[21:50:02] <JFlash> guys thank you I have temporarily made it work
[21:50:15] <JFlash> for a permanent fix I will update ubuntu tomorrow
[21:50:25] <JFlash> what I did was to call mongod directly
[21:50:37] <JFlash> then it complained that data/db did not exist
[21:50:51] <JFlash> so I created this folder and now it seems to be up
[21:51:08] <koren> you should create a startup script in case your server restarts, if you launch it manually
[21:51:32] <JFlash> koren, agreed, but since I'm running an old ubuntu, this is going to be a never ending nightmare
[21:51:50] <JFlash> koren, I need to upgrade mongo and ubuntu first of all
[21:52:06] <JFlash> koren, cheeser and everybody, thank you for your help
[21:52:15] <theGleep> Glad I could contribute!
[21:52:41] <JFlash> (this is my personal laptop, not a production server btw. just mentioning)
[22:00:03] <theGleep> So - for those checking in but not scrolling back; here's my question. What's the official best practice for db.close() when using the promises version of the API that's exampled here: http://mongodb.github.io/node-mongodb-native/2.1/tutorials/crud/ ?
[23:32:49] <StephenLynx> theGleep, I wouldn`t use promises to begin with.