PMXBOT Log file Viewer


#mongodb logs for Monday the 2nd of February, 2015

[00:54:57] <sabrehagen1> morenoh150: not quite, think of this structure: { channel: 'MTV', subscribers: [ObjectID1, ObjectID2, ObjectID3, etc] }. There are many of these. I want to return all of the channels, ordered by the number of subscribers. can this be done without the aggregation pipeline?
[00:59:31] <joannac> sabrehagen1: no, unless there's a field with the length of the subscribers array
[01:05:46] <sabrehagen1> joannac: thanks joanna :)
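joannac's suggestion of keeping a length field alongside the array might look like this in the mongo shell (a sketch only; the collection and field names are assumptions, and it needs a live server):

```javascript
// Sketch of joannac's suggestion: maintain a subscriberCount field so a
// plain find() can sort without the aggregation pipeline.
// (Hypothetical collection/field names; requires a running mongod.)

// Add a subscriber and bump the counter in the same update:
db.channels.update(
    { channel: 'MTV' },
    { $push: { subscribers: newSubscriberId }, $inc: { subscriberCount: 1 } }
);

// Channels ordered by number of subscribers, no aggregation needed:
db.channels.find().sort({ subscriberCount: -1 });
```

(With the aggregation pipeline this could instead be a `$project` using `$size` on the array, followed by `$sort`.)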
[04:31:37] <tim_t> Hi. Using Morphia with Java. How can I update an embedded HashMap<String, Object> entry in place? The only way I can do this right now is put the entire hashmap into memory, change values and then save the new state of the hashmap, which if it is large is quite a wasteful operation.
[04:32:34] <tim_t> I guess another way of asking is how do I target a deeply embedded object for an update?
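For reference, the shell-level operation is a `$set` with dot notation, which touches only the targeted entry rather than rewriting the whole map (a sketch with hypothetical names):

```javascript
// Update one entry of an embedded map in place via dot notation,
// without reading the whole map back into memory first.
// (Hypothetical collection and field names.)
db.items.update(
    { _id: someId },
    { $set: { 'attributes.color': 'blue' } }
);
```

In Morphia this should map onto an `UpdateOperations.set()` call with the same dotted path.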
[05:49:56] <lyscer> Hello all, I am looking at trying out mongodb for the first time for a project that I am working on that will have many rows and was hoping that you could tell me if my data structure will potentially hose me in the future. I am going to be searching only on the "number" document; http://pastebin.com/z2UUgP2U
[06:18:32] <morenoh149> lyscer: use .pretty() to make it readable
[06:19:17] <morenoh149> lyscer: looks good to me
[06:19:40] <morenoh149> you'll want to build indexes that reflect how you're going to be accessing this data later
[06:20:14] <lyscer> morenoh149, thanks for looking at that; I wasn't sure if each key/value pair needed to be structured as an object like _id is
[06:24:17] <morenoh149> lyscer: which key/value pair?
[06:24:26] <lyscer> all of them
[06:24:41] <lyscer> wasn't sure if I was structuring the key/values correctly
[06:25:23] <lyscer> it seems like as long as the program that will be reading the values can easily pull and parse I should be fine - which I know the key I will be using 100% of the time so I think it should be fine
[09:35:18] <robopuff> Hi guys, can you help me with rewriting this $where into basic statements? https://gist.github.com/robopuff/c851959a59706ed67ff4
[10:38:24] <morenoh150> robopuff: db.calendar.find({ date : { $gte: start, $lt: end }})
[10:38:55] <davidcsi> Hello guys, I posted an issue i'm having with mongodb, can you please help me out? it seems to me either there's a bug or a lock issue or (of course) I have an indexing problem. http://stackoverflow.com/questions/28253720/mongodb-upsert-doesnt-update-if-locked
[10:48:11] <morenoh150> davidcsi: concurrent updates should be queued http://stackoverflow.com/questions/6997835/how-does-mongodb-deal-with-concurent-updates
[10:48:31] <morenoh150> unless there's a limited size to the queue I'm not aware of. All updates should eventually happen.
[10:51:01] <morenoh150> davidcsi: 'packet' doesn't look like an array in the example
[10:51:14] <morenoh150> do you mean nested document?
[10:52:51] <robopuff> @morenoh150 not really - calendar is an array from a bigger document, and I need to check if a calendar entry exists (between start & end) and, if status: false occurs, hide it, or if the dates do not exist return the alwaysAvailable state. While writing this I've found a solution: calendar: { $not: { $elemMatch: { date: {$gte: 'start', $lt: 'end'}, status: false} } }
[10:57:55] <morenoh150> robopuff: yay :)
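Expanded into a full query, robopuff's solution reads as follows (the collection name and the `start`/`end` variables are placeholders):

```javascript
// Match documents whose calendar array contains no entry in the
// [start, end) window that has status: false.
// (Hypothetical collection name; requires a running mongod.)
db.items.find({
    calendar: {
        $not: {
            $elemMatch: {
                date: { $gte: start, $lt: end },
                status: false
            }
        }
    }
});
```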
[10:58:26] <morenoh150> davidcsi: I'm not sure mongodb was intended to store network traffic. It may not be fast enough for that
[11:00:47] <joannac> davidcsi: what's the full logline for "can't unlock b/c of recursive lock"?
[11:01:47] <Climax777> Why would capped collections deliver the same performance as standard collections? using rc7 with wired tiger
[11:05:19] <davidcsi> @morenoh150 in which example? you mean the documents pasted? yes, those aren't arrays, i had to change upserts to inserts, but you know what i mean. If you need it i can paste a real array. The point of pasting those two was to show how they'd look individually.
[11:07:05] <morenoh150> davidcsi: no in the description you explicitly state `these are arrays and you'll see why` then I saw documents
[11:07:23] <davidcsi> @joannac: mongodb.log.1:2015-01-30T20:34:31.243-0600 [conn5] warning: ClientCursor::staticYield can't unlock b/c of recursive lock ns: top: { opid: 6810, active: true, secs_running: 0, microsecs_running: 142309, op: "query", ns: "tracer", query: { $eval: CodeWScope( return db.packets.count( { "deleted": "1" } );, {}), args: [] }, client: "127.0.0.
[11:07:24] <davidcsi> 1:52060", desc: "conn5", threadId: "0x7f6d6bed5700", connectionId: 5, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } }
[11:07:37] <joannac> davidcsi: yes, that's an $eval
[11:07:44] <joannac> why are you running eval?
[11:07:57] <davidcsi> nope
[11:09:07] <joannac> davidcsi: ?
[11:09:12] <davidcsi> morenoh150: "you'll see why" I meant that because all packets belong to the same transaction, they would end up as an array, the field "packet" is an array where new packets are push'd into
[11:09:23] <davidcsi> i'm not
[11:09:34] <joannac> davidcsi: okay, someone is running eval against your server
[11:09:38] <davidcsi> i'm not running an eval
[11:09:46] <morenoh150> oh
[11:09:54] <davidcsi> oh sh*t
[11:10:01] <davidcsi> i gave you the wrong log
[11:10:05] <davidcsi> lol, hold on
[11:53:35] <Mmike> is there a gui-like tool for mongodb?
[11:53:50] <Mmike> can be web based, but I'd prefer gtk/qt/ncurses like one :)
[11:54:29] <_rgn> robomongo or something
[11:54:30] <_rgn> google it
[11:56:22] <davidcsi> robomongo is really the best there is... as limited as it is
[12:01:23] <Mmike> ack
[12:01:24] <Mmike> thnx lads
[12:44:57] <BadCodSmell> I want to have a key like [], why can't I?
[12:47:32] <BadCodSmell> ah wait, it's robomongo breaking thankfully
[12:47:33] <BadCodSmell> phew
[14:56:30] <bybb> Hi everyone
[14:57:20] <bybb> I'm running a dedicated server, after reading the doc I wonder "do I need to use a replica set?"
[14:58:20] <bybb> Because it's written "use a replica set in production" but "run a replica set on different physical machines"
[14:58:34] <Derick> bybb: up to you. If you have no problem with your application/server going down, then it's fine.
[14:58:59] <Derick> I do however think that you should have a replicaset in production, and, indeed, on three different physical machines
[14:59:29] <cheeser> i run a single node server but it's just an irc/logging bot
[14:59:43] <cheeser> so downtimes mean nothing to me. anything more important and I'd have a replSet
[14:59:54] <Derick> i've a demo site with a single node too
[15:00:06] <Derick> even if the data gets corrupted I don't care (not that that ever happened)
[15:00:12] <Derick> i can always reload it
[15:00:20] <bybb> Derick: well the server is pretty reliable until now. It's my first project with mongodb, do I still need a replica set on a single server?
[15:00:28] <cheeser> basically. hell, i use mongodump for nightly backups. real top shelf stuff. :)
[15:00:46] <Derick> bybb: any important production environment should use a replicaset
[15:01:05] <cheeser> running a single node of mongodb is no more dangerous (or reliable) than a single pgsql server.
[15:01:29] <cheeser> it's a SPOF either way. mongodb makes it easier to remove that than others, though.
[15:03:27] <Derick> yeah
[15:03:41] <Derick> that wasn't the case with 1.4 though :)
[15:03:48] <bybb> Well I'll use the strategy I use for the services, multiple services for a single server. It'll buy me some time to restart it, if it's down
[15:03:55] <bybb> It's a side project
[15:04:11] <Derick> bybb: mongodb, when having lots of data, wants all your memory
[15:04:21] <Derick> that could cause contention with a web server f.e.
[15:04:31] <bybb> Derick: for sure, I won't have lot of data
[15:04:42] <Derick> will it all fit in RAM?
[15:06:14] <bybb> I forgot, but it's at least 4GB
[15:06:19] <bybb> maybe 8GB
[15:07:01] <bybb> And I guess I won't have more than ten people at the same time on the server
[15:07:15] <bybb> Does Mongodb really need so much RAM?
[15:11:10] <Derick> bybb: it likes to use all of it
[15:11:38] <Derick> it's perfectly possible to run it on the same server as a web server, but don't expect great performance
[15:11:57] <Derick> but if you don't really care, go for it - if it becomes a problem you can always split it off
[15:14:18] <bybb> Derick: well I'll probably run MongoDB in three Docker containers
[15:14:32] <bybb> Memory and CPU can be limited
[15:14:43] <Derick> on the same physical machine?
[15:14:51] <bybb> yes
[15:14:56] <Derick> that's kinda pointless though
[15:15:36] <cheeser> and self-defeating. they'll be clamoring for RAM
[15:15:37] <bybb> not if I can limit to 1 or 2GB of RAM
[15:15:57] <Derick> that's ... not a good idea
[15:16:06] <Derick> better to have one container with one node with 6GB of RAM
[15:16:29] <bybb> Well why not
[15:16:48] <Derick> because it can only cache a third of the data that way
[15:16:56] <Derick> it's going to be horrendous for performance
[15:17:18] <bybb> a third?
[15:17:44] <Derick> three nodes/containers with 2GB each vs one node/container with 6GB memory
[15:18:03] <bybb> Ok I get it
[15:18:24] <bybb> I thought I might need 18GB to cache everything
[15:18:28] <bybb> you scared me
[15:18:34] <Derick> well, you might -
[15:18:39] <Derick> depends on how much data you have
[15:19:11] <bybb> Really not much
[15:19:26] <Derick> that's not a quantity ;-)
[15:19:48] <bybb> It's a B2B service with not so many customers I guess
[15:20:27] <bybb> Well if it's a problem, It will be a problem I'll be glad to have
[15:20:33] <Derick> fair point
[15:20:45] <bybb> It'll mean I'll have the money for new servers
[15:21:53] <bybb> Thanks anyway I'll stick with a 6GB container, the "less worst" solution
[15:22:35] <cheeser> you could mitigate any catastrophe by using mms backup (and automation for what it's worth)
[15:22:48] <cheeser> at least you'd have offsite backups should your server catch fire.
[15:23:35] <bybb> cheeser: i'll have a look, I forgot about MMS
[15:23:45] <cheeser> it's what I do. :D
[15:24:47] <bybb> I'm just a developer doing everything
[17:31:15] <panshulG> Hello.. I am using Spring-data-mongodb -> how can I restrict the length of a text field in my domain for mongodb?
[17:31:53] <panshulG> or should I be asking this question somewhere else... if so please point me to that IRC channel
[17:40:35] <StephenLynx> it's not that you should ask that somewhere else, it's that you have an issue with some unusual 3rd party software, so you may not find many people here
[17:40:41] <StephenLynx> that could help you.
[17:41:04] <StephenLynx> but if you find a place that discusses spring then you probably will have better chances.
[17:41:18] <StephenLynx> personally I don't use anything on top of mongo.
[17:41:27] <panshulG> cool thanks...
[17:42:16] <StephenLynx> in my code I manually get a substring when I want to.
[17:43:27] <panshulG> well using mongodb directly in a Java Application is not always the best idea
[17:43:55] <panshulG> worst if you have a Spring Enterprise Application :P
[17:45:23] <StephenLynx> why?
[17:47:45] <panshulG> It's more about making your life easier in complex scenarios... like using an ORM framework is better than using old JDBC
[17:49:36] <StephenLynx> you still have to define your rules for data. but without a framework you have less bloat and you are able to find issues more easily because you took a layer of generic software.
[17:50:44] <Derick> StephenLynx: or you have to fight the layer...
[17:51:15] <StephenLynx> not to mention when the frameworks put constraints on you.
[17:51:57] <Sticky__> there are positives and negatives to ORM, if performance is not a huge issue that you are willing to sacrifice some for simpler code...fair enough
[17:52:51] <StephenLynx> the problem with the "simplicity" these frameworks provide is that it's not real simplicity. on the contrary, the code is even more complex, it's just hidden.
[17:53:16] <Sticky__> yeah, I agree when things go wrong they are a nightmare to debug
[17:53:25] <ldiamond> What do you guys use for admin client on mongo?
[17:53:34] <StephenLynx> mongo shell
[17:54:01] <panshulG> mms
[17:54:23] <Derick> shell
[17:54:57] <Derick> panshulG: in which way?
[17:55:45] <ldiamond> panshulG: mms being a hosted managed mongo
[17:55:47] <ldiamond> ?
[17:57:01] <StephenLynx> yeah, I looked into mms and I was like "lolwtf"
[17:57:37] <panshulG> oh sorry.. i thought by admin you meant.. hosting and backup and monitoring
[17:57:44] <panshulG> my bad
[17:58:10] <panshulG> for everything else i use shell or for GUI Robomongo
[19:00:03] <twinings> does the new security.authorization config value replace the old --auth config parameter? and without it, in both cases, you don't need to log in?
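For twinings' question: in the YAML config format, `security.authorization` is the counterpart of the old `--auth` flag, and with neither set, clients can connect without authenticating. A minimal sketch:

```yaml
# mongod.conf - YAML equivalent of starting mongod with --auth
security:
  authorization: enabled
```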
[19:48:53] <morenoh150> Mmike: mongohub is the best on osx
[19:49:36] <Mmike> morenoh150, linux here :/
[20:11:40] <lilgiant> hello, short question: is the _id field indexed by default or should i include an index on this field explicit in the schema definition?
[20:12:04] <octoquad> yes, it's indexed by default
[20:12:09] <cheeser> yes, by default it gets a unique index
[20:13:17] <lilgiant> thanks :)
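The automatic unique index on `_id` can be confirmed with `getIndexes()`; no explicit entry in the schema definition is needed (a sketch; exact output varies by server version):

```javascript
// The _id index is created automatically with the collection.
// (Hypothetical collection name; requires a running mongod.)
db.mycollection.getIndexes()
// returns something like:
// [ { v: 1, key: { _id: 1 }, name: "_id_", ns: "test.mycollection" } ]
```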
[21:05:54] <rodd> Hi, I'm having some issues setting up an admin user on 2.4, I added a user through db.addUser with the following role: userAdminAnyDatabase. I can connect with its credentials (mongo -u...) but whenever I try to query I get unauthorized, what am I missing?
[21:16:33] <Torkable> mongoose is shit and I hate it
[21:16:38] <Torkable> thank you for listening
[21:19:54] <iksik> ;-DD
[21:40:17] <joannac> rodd: erm, it's all in the name?
[21:40:35] <joannac> userAdmin == user administration only. no reads. no writes. only stuff relating to users
[21:40:44] <rodd> joannac: i disabled auth
[21:41:30] <joannac> rodd: okay. well now you know if you want to turn auth on in future
[21:41:49] <rodd> joannac: thx
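On 2.4, a setup that both administers users and reads/writes data needs database-level roles in addition to `userAdminAnyDatabase`. A sketch with placeholder names and passwords:

```javascript
// userAdminAnyDatabase grants user management only - no reads, no writes.
// Grant data access per database with e.g. readWrite.
// (Placeholder user names and passwords; requires a running mongod.)
use admin
db.addUser({ user: 'admin', pwd: '<password>', roles: ['userAdminAnyDatabase'] })
use mydb
db.addUser({ user: 'appuser', pwd: '<password>', roles: ['readWrite'] })
```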
[23:28:17] <gozu> anyone willing to take a small fix for an extremely nubtasmic bug in the mongodb perl module without me jumping through various "sign up for random websites on the internet" hoops?
[23:28:59] <gozu> (the summary is ... in C, there _is_ a difference between "checking the boolean value of a pointer" and "checking the boolean value of the thing pointered to" ...)
[23:45:11] <hahuang61> any way to get mongorestore to upsert instead of straight insert?
[23:45:38] <hahuang61> I've been restoring mongodumps and hadn't noticed that it just drops the restore if that _id already exists, whereas I want it to update.
[23:57:03] <joannac> hahuang61: requested but not yet implemented
[23:57:05] <joannac> https://jira.mongodb.org/browse/TOOLS-121
[23:57:42] <hahuang61> joannac: yeah, I just looked it up. Shit. So I have to take a massive dump and drop the collections basically and then restore
[23:59:17] <hahuang61> joannac: no other workarounds huh?
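One possible workaround until TOOLS-121 lands is to round-trip through JSON instead of BSON (a sketch with placeholder names; note that BSON types which don't survive mongoexport's JSON output are a risk):

```sh
# Export to JSON, then replay with mongoimport --upsert, which updates
# documents whose _id already exists instead of skipping them.
# (Placeholder database/collection names; requires a running mongod.)
mongoexport -d mydb -c mycoll -o mycoll.json
mongoimport -d mydb -c mycoll --file mycoll.json --upsert
```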