PMXBOT Log file Viewer


#mongodb logs for Friday the 7th of August, 2015

[00:09:12] <jpfarias> hey guys
[00:09:24] <jpfarias> does $text: $search scan the whole collection?
[00:13:16] <acidjazz> hey all
[00:13:26] <cheeser> unless there are other filters, sure.
[00:13:36] <acidjazz> i was wondering.. i need to start putting mongoid's in URLS
[00:13:38] <cheeser> but it's an indexed search so it's not *that* bad
[00:13:53] <acidjazz> is there any sweet method to shorten them? maybe a base62 or something?
[00:15:01] <cheeser> it's only 24 characters...
[00:16:30] <acidjazz> thats pretty long for a url
[00:16:40] <acidjazz> and recognizable
[00:16:43] <acidjazz> and incrementable
[00:18:12] <acidjazz> looks like the best method seems to be a second id
[00:18:22] <acidjazz> base62 only shaves off 4 characters
[00:18:39] <cheeser> seems kinda redundant. and probably prone to the same problems.
[00:19:06] <acidjazz> look at youtube ids
[00:19:08] <cheeser> putting the id in is fine provided your app properly checks access
[00:19:17] <acidjazz> its not a security issue
[00:19:24] <acidjazz> its convenience
[00:19:37] <cheeser> so just put your ObjectId in the url and be done with it.
[00:19:38] <acidjazz> i want my sharable links short and sweet
[00:20:01] <cheeser> then use bit.ly
[00:20:04] <acidjazz> you own a lot of duct tape dont you
[00:20:12] <cheeser> nope
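For reference on the length math above: a 24-hex-char ObjectId encodes 96 bits, and 96 bits fit in at most 17 base62 digits, so the token shrinks to roughly 16 characters. A sketch, assuming a conventional alphabet and a made-up sample id:

```javascript
// Base62-encoding the 24-hex-char ObjectId: treat it as one 96-bit integer
// (BigInt) and re-express it in a 62-character alphabet. The alphabet order
// and the sample id below are assumptions for illustration.
const ALPHABET =
  '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';

function base62Encode(hexId) {
  let n = BigInt('0x' + hexId);
  if (n === 0n) return ALPHABET[0];
  let out = '';
  while (n > 0n) {
    out = ALPHABET[Number(n % 62n)] + out;
    n /= 62n;
  }
  return out;
}

function base62Decode(token) {
  let n = 0n;
  for (const ch of token) n = n * 62n + BigInt(ALPHABET.indexOf(ch));
  return n.toString(16).padStart(24, '0'); // restore stripped leading zeros
}

const id = '55c4edc3a7b9f6a841000001'; // made-up ObjectId-like hex string
const short = base62Encode(id);        // ~16 chars for a 2015-era timestamp prefix
```

Whether ~16 characters is short enough is a separate question; a second sequential id, as discussed above, stays shorter but needs its own uniqueness guarantee.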
[00:43:39] <acidjazz> ok here's a better question
[00:43:51] <acidjazz> so my site is displaying a single document on a page
[00:43:56] <acidjazz> w/ a shareable url
[00:44:01] <acidjazz> and im adding next/prev buttons
[00:44:18] <acidjazz> so i want to show the next document that would appear.. say if i did a .find and sorted by a certain parameter
[00:44:41] <acidjazz> i cant really think of an ideal way to just get the next document
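A common answer to the next/previous-document question is a range query on the sort field: ask for the first document past the current one under the chosen sort. The sketch below simulates the pattern over a plain array; the `score` field and the sample data are made up:

```javascript
// Range-query pattern for next/previous document. In the shell the equivalent is
//   db.items.find({score: {$gt: current.score}}).sort({score: 1}).limit(1)
// (with an _id tie-breaker added when the sort field is not unique).
const docs = [
  { _id: 'a', score: 10 },
  { _id: 'b', score: 20 },
  { _id: 'c', score: 30 },
];

function nextDoc(current) {
  return docs
    .filter(d => d.score > current.score)
    .sort((x, y) => x.score - y.score)[0] || null;
}

function prevDoc(current) {
  // shell: db.items.find({score: {$lt: current.score}}).sort({score: -1}).limit(1)
  return docs
    .filter(d => d.score < current.score)
    .sort((x, y) => y.score - x.score)[0] || null;
}
```

With an index on the sort field, the real query seeks straight to the neighbor instead of scanning.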
[01:17:23] <hexsprite> hey, im wondering what sort of index I would need to optimize a query like this? db.freelist.find({userId: 'dtyb7ZSJoSMtT96Wo', duration: {$gte: 1}, end: {$gt: new ISODate()}}).explain()
[01:18:16] <hexsprite> I tried the obvious {userId: 1, duration: 1, end: 1} but I get a bunch of nscannedObjects
[01:20:32] <hexsprite> It does say BtreeCursor userId_1_duration_1_end_1 but I would assume nscannedObjects should be 0 if the indexes are working correctly?
[01:21:04] <hexsprite> here's the explain results http://pastebin.com/PRayVx1F
[01:34:50] <jkevinburton> anyone have a sec to save me from my sanity about indexes? What would be the proper index for this query? find({name: /gaga/ig}, {name: 1, sku: 1, status: 1}).sort({status: -1, fans: -1, name: 1}).limit(5)
[01:57:11] <jkevinburton> anyone have a sec to save me from my sanity about indexes? What would be the proper index for this query? find({name: /gaga/ig}, {name: 1, sku: 1, status: 1}).sort({status: -1, fans: -1, name: 1}).limit(5)
[01:57:31] <jkevinburton> (sorry for the double post, i was sent offline and didn't know if anyone responded)
[02:00:58] <StephenLynx> If I am not mistaken, regex queries do not use indexes.
[02:01:04] <StephenLynx> jkevinburton
[02:01:36] <jkevinburton> shitty -- How does one do a search and then sort that search?
[02:01:47] <jkevinburton> (sorry for the n00b question)
[02:02:49] <cheeser> StephenLynx: if it's a tail end regex it can
[02:03:25] <StephenLynx> got it
[02:04:45] <jkevinburton> it's a search anywhere in string
[02:05:13] <jkevinburton> can you use a sort index if you aggregate using regex?
[08:20:24] <neuromute9> hey folks! I've got a strange problem. I have a node/express app that runs fine in Chrom{e|ium} on Ubuntu, until I populate the database with an old import. Then it crashes the browser on every page load. Weirdly, it doesn't crash the browser on another Ubuntu machine running Chromium. It also doesn't crash Chrome on OSX or Windows. Just this one machine. If I drop the database, the site works fine. I'm guessing
[08:20:25] <neuromute9> it's a character/validation issue, but how can I check/fix this?
[08:56:26] <amitprakash> Hi, I am getting Unrecognized option: storage.dbpath while trying to run mongod
[08:56:28] <amitprakash> what gives?
[10:54:34] <kenalex> hello
[10:55:22] <kenalex> do any of you guys know how to update all embeded documents in an array field using the typed methods of the c# mongodb driver ?
[12:51:47] <jamiel> Hi all, we removed records from a sharded collection of a specific shard key and are seeing some strange behaviour. We can't locate the records using find() , but count() reports them and we can see them on the shard if we go directly to the shard. Are we missing a step? Important point is probably that we remove all records from a single shard key and it
[12:51:47] <jamiel> was part of a jumbo chunk.
[12:52:16] <jamiel> mongos> db.contents.find({"campaigns.id": null});
[12:52:17] <jamiel> mongos> db.contents.count({"campaigns.id": null});
[12:52:17] <jamiel> 42119
[12:53:12] <jamiel> Anyone seen this behaviour before?
[12:55:35] <joannac> count is not accurate through a mongos
[12:56:21] <joannac> it's weird you see them on the shard though
[12:56:48] <joannac> is it the same shard that those documents should belong on?
[12:57:02] <joannac> jamiel: ^^
[12:58:07] <jamiel> I believe so, would need to look at the chunks again
[13:03:25] <jamiel> Oh, they are gone now from the shard, it takes a while I guess?
[13:04:48] <joannac> it shouldn't?
[13:50:51] <christo_m> hello, i currently have a schema like this: http://pastie.org/10335533
[13:50:57] <christo_m> but i dont think im designing this properly
[13:51:30] <christo_m> i basically want queue items to belong to many users . but i want each individual user to have info about that queue item (like whether they like it or not etc) , i guess im looking for some concept of a join table?
[13:51:52] <christo_m> in my current system queue items are duplicated which may have the same videoId but they're referencing the same content
[13:52:07] <christo_m> they're duplicated for each user though, which is fine since its mongo and to my understanding the denormalization there is desired.
[13:58:52] <christo_m> anyone? :(
[14:40:38] <christo_m> hello?
[14:40:56] <christo_m> 350 people idling, true
[14:42:26] <dddh> christo_m: hello
[14:42:55] <christo_m> dddh: are you able to see my earlier question?
[14:43:00] <christo_m> it was just a design question
[14:44:47] <christo_m> i basically want a single source of truth of a queueItem, and allow users to add these items plus extra info relevant to them.. like whether they like that content or not
[14:49:04] <christo_m> http://blog.mongodb.org/post/87200945828/6-rules-of-thumb-for-mongodb-schema-design-part-1
[14:49:47] <christo_m> dddh: i followed the one to squillions part , because i figured this system is something like twitter.. where it would be easier to reference the owner of the tweets rather than embedding or referencing the tweets in the user object.. which would exceed the 16mb limit eventually.
[14:50:14] <christo_m> the problem is, i dont want to duplicate my "tweets" so to speak. i still want to know what the original one is so i can count likes or retweets etc
[14:50:17] <christo_m> so how do i do that?
[14:52:18] <christo_m> the problem with that tutorial is it isnt one to squillions i want, its actually squillions to squillions i want lol
[14:57:45] <deathanchor> christo_m: you are facing the usual, many-to-many problem, just need to figure out a way or use another db type.
[14:58:01] <christo_m> and if i take the same approach.. that is, storing object ids of queueitems in users and objectids of users in queueitems, it isnt hard to see that if a particular queueitem is liked by too many different users, or if a particular user likes too many pieces of different content, that the 16 mb will be exceeded
[14:58:13] <christo_m> deathanchor: well im just thinking aloud here, but i dont think it will work for me..
[14:59:20] <deathanchor> maybe try something like neo4j?
[14:59:59] <christo_m> deathanchor: its a little late im pretty entrenched in mongo..
[15:00:13] <deathanchor> it's never too late for anything
[15:00:21] <christo_m> let me rephrase
[15:00:25] <christo_m> im far too lazy to redesign the db
[15:00:43] <christo_m> im one dev here, this is kind of a run and gun operation, but i guess if i did my research i wouldn't be in this mess.
[15:01:19] <deathanchor> well have fun with the many-to-many problem
[15:08:29] <christo_m> deathanchor: hahaha
[15:08:31] <christo_m> gg
[15:39:36] <gentunian> I'm new to mongo and I'm trying to do some aggregation that I'm not sure if possible. All is resumed in this gist: https://gist.github.com/gentunian/cfb826d6ca49bd3d9e38
[15:39:43] <gentunian> can I do this just with aggregation?
[15:40:45] <amz3> I have a question regarding the way wiredtiger deals with index and raw values. I've setup a table with the config key_format=SS value_format=U. There is an index over this table. When I fetch the U column. the value is prefixed with a size integer. Seems like a bug to me.
[15:42:14] <deathanchor> gentunian: nope. your end result is taking a value and making it into a field
[15:42:50] <deathanchor> gentunian: maybe mapReduce?
[15:43:23] <gentunian> deathanchor, I will try. I saw an example while reading the doc but didnt dig deeper.
[15:43:34] <gentunian> thanks
[16:17:27] <jr3> how do I run "sudo mongod --config /etc/mongod.conf" on startup?
[16:17:34] <jr3> or how should I is a better question
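One common way to run that command at boot, instead of sudo-ing it by hand, is a service unit; the MongoDB packages normally install one for you. A minimal sketch assuming a systemd-based distro and the unit name `mongod.service`:

```ini
# /etc/systemd/system/mongod.service -- minimal sketch; the official
# packages usually ship a more complete unit already
[Unit]
Description=MongoDB database server
After=network.target

[Service]
User=mongodb
ExecStart=/usr/bin/mongod --config /etc/mongod.conf

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable mongod` and `sudo systemctl start mongod`. On upstart-based Ubuntu releases of that era, the packaged init job plays the same role. Note the unit runs as the `mongodb` user rather than via sudo.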
[16:20:50] <Mia> Hi there
[16:21:00] <Mia> is there any easy way to get nth document in mongodb?
[16:21:12] <Mia> like treating the whole collection as an array and getting the nth document
[16:23:54] <StephenLynx> skip
[16:24:08] <Mia> StephenLynx, skip processes the entire collection, very slow
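For context on why skip gets slow: skip(n) is equivalent to indexing into the sorted result, so the server still walks past the n skipped documents, whereas a range query on an indexed field seeks directly to the next one. A sketch of both patterns, simulated over an array with made-up data:

```javascript
// nth document by skip vs. next document by range query.
const coll = [{ _id: 1 }, { _id: 2 }, { _id: 3 }, { _id: 4 }];

// shell equivalent: db.coll.find().sort({_id: 1}).skip(n).limit(1)
// -- linear in n, since the skipped entries are still traversed server-side
function nthBySkip(n) {
  return coll.slice().sort((a, b) => a._id - b._id)[n] || null;
}

// shell equivalent: db.coll.find({_id: {$gt: lastSeenId}}).sort({_id: 1}).limit(1)
// -- an index on _id lets this seek straight to the neighbor
function nextAfter(lastSeenId) {
  return coll
    .filter(d => d._id > lastSeenId)
    .sort((a, b) => a._id - b._id)[0] || null;
}
```

The range form only gives "the next one after a known anchor", not a random nth, which is why truly random access to the nth document has no cheap equivalent.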
[16:36:22] <kba> I'm getting: "Error: Argument passed in must be a single String of 12 bytes or a string of 24 hex characters" when trying to compare two documents, but only when I'm comparing them in a mongoose model method
[16:37:47] <StephenLynx> it is using the _id
[16:37:57] <StephenLynx> I got that error once, when I passed an invalid _id
[16:38:02] <StephenLynx> I suggest you stop using mongoose.
[16:38:07] <StephenLynx> it is nothing but trouble.
[16:38:30] <kba> sounds more like a mongoose problem, I suppose, but it just seems too odd and I've been at it for more than 4 hours; already wrote a bug report for mongoose, but haven't submitted it, because I keep being confused about where the bug comes from
[16:38:40] <kba> yes, I've found that out by now, but think I'm going to stick it out
[16:38:55] <kba> rewriting everything seems like too much work now, but i won't be using mongoose in the future
[16:39:38] <kba> either way, yes, I'm just comparing the objects, like this.ownA.equals(otherA)
[16:40:02] <kba> should I be doing ._id? Because when I do that, mongoose throws an error instead (which I found a patch for)
[16:40:08] <kba> found=made a patch for
[16:40:15] <StephenLynx> no idea.
[16:40:45] <kba> mongoose is a wrapper around the nodejs mongodb driver, so how does it work in mongodb?
[16:40:54] <kba> it seems the equals method is mostly a wrapper as well
[16:44:48] <StephenLynx> yes, it is just a wrapper around the native driver, afaik.
[16:45:05] <StephenLynx> look into its dependencies
[16:45:11] <kba> I know it's a wrapper.
[16:45:27] <kba> I was asking if comparing a to b._id is okay
[16:45:30] <kba> a.equals(b._id)
[16:45:44] <kba> according to the documentation, it wants just objects
[16:45:49] <kba> a.equals(b)
[16:45:53] <kba> from what I can see quickly
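The error quoted above is the driver's argument check: an ObjectId constructor accepts either a 12-byte string/buffer or a 24-character hex string, so handing it anything else (for instance a whole document where an `_id` was expected) trips it. A rough sketch of that check and of a hex-level comparison; this is an illustration of the rule, not mongoose's or the driver's actual implementation:

```javascript
// Mirrors the "12 bytes or 24 hex characters" rule from the error message.
function isValidObjectIdString(s) {
  if (typeof s !== 'string') return false;
  return s.length === 12 || /^[0-9a-fA-F]{24}$/.test(s);
}

// Comparing a document against an id usually means comparing hex forms;
// this helper accepts either a {_id: ...} document or a bare id string.
function sameId(a, b) {
  const hex = v => String(v && v._id !== undefined ? v._id : v);
  return hex(a).toLowerCase() === hex(b).toLowerCase();
}
```

If `this.ownA` holds a populated document rather than an ObjectId, `equals(otherA)` receives the whole object, which matches the error kba is seeing.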
[16:49:14] <kenalex> hello
[19:04:56] <deathanchor> hey how do I make mongo compress the files?
[19:05:05] <deathanchor> "remove the huge gaps"
[19:05:17] <StephenLynx> there is something
[19:05:27] <StephenLynx> rebuild?
[19:05:39] <StephenLynx> why you need that, anyway?
[19:05:57] <StephenLynx> http://docs.mongodb.org/manual/reference/command/compact/
[19:07:01] <deathanchor> I let a db have about 100+ daily collections, and now I have exported older ones and purged them
[19:07:06] <deathanchor> but the db files take up a lot of space
[19:07:34] <StephenLynx> it will be reclaimed by mongo
[19:07:39] <StephenLynx> no need to worry about that.
[19:09:14] <deathanchor> yeah, but if I rebuild it will save me about 500GB of space
[19:11:59] <StephenLynx> and what will that space be used for?
[19:19:33] <deathanchor> reducing the disk drive
[19:19:46] <deathanchor> compact is for collections
[19:19:57] <deathanchor> and repairDatabase will lock up the whole thing
[19:20:25] <StephenLynx> welp
[19:20:32] <StephenLynx> you can just let it be
[19:20:40] <deathanchor> yeah sucks for me, going to let it be
[19:20:40] <StephenLynx> it's not like you are using the hd for anything else
[19:20:49] <StephenLynx> if downtime is a concern
[19:20:54] <deathanchor> well I wanted to deprovision the extra disks
[19:21:12] <StephenLynx> why rock the boat?
[19:21:19] <StephenLynx> they are going to be used eventually
[19:22:25] <deathanchor> not really
[19:22:32] <deathanchor> daily data is about 1GB
[19:22:49] <deathanchor> someone broke the cron job which cleaned it up
[19:22:54] <deathanchor> for a year or two
[19:22:58] <StephenLynx> oh lawdy
[19:23:01] <deathanchor> :D
[19:23:22] <deathanchor> yeah I should also really have a secondary to handle this
[19:23:33] <StephenLynx> and then it kept eating disks and disks and no one bothered to check what was being recorded?
[19:23:40] <deathanchor> 1 member replset :D
[19:23:43] <StephenLynx> not to mention dynamic collection creation
[19:24:12] <deathanchor> it is recording metric data for "historical purposes"
[19:24:13] <deathanchor> :D
[19:24:35] <deathanchor> I should just do a drop db at UTC midnight when the collection name changes
[19:26:03] <StephenLynx> if I were in charge of that, I would save this data as plain-text
[19:26:19] <StephenLynx> and move them when needed.
[19:43:29] <deathanchor> it's query-able data, and I have queried it a few times
[19:48:56] <StephenLynx> IMO, a few times is not enough to put it on the database if that is so much data.
[19:49:08] <StephenLynx> being in plain-text files doesn't keep the application from reading it.
[21:00:18] <kenalex> in your opinion does it make sense to develop an app to run on mongodb that will never need more than one node (not a lot of data)?
[21:01:58] <cheeser> sure. why not?
[21:25:24] <alexi5> I was wondering what type of applications are document databases more suited for than rdbms? And also are document databases best used when run on more than one node ?