PMXBOT Log file Viewer


#mongodb logs for Friday the 25th of July, 2014

[06:58:29] <rspijker> d-rock1: had to leave yesterday, did you manage to find out what was going on with the hidden collections?
[08:02:33] <movedx> Once sharding has been enabled, can it be disabled later on?
[08:03:02] <movedx> So if I enable sharding on an existing collection, one which is growing in size now, and then need to roll back during the maintenance, can I do so?
[08:06:49] <joannac> movedx: erm, you can
[08:06:51] <joannac> it's not easy
[08:08:26] <movedx> Hmm. I can't find the steps anywhere. What's the process, joannac?
[08:10:31] <joannac> removeShard and movePrimary for each shard until you have 1 shard left
[08:10:42] <joannac> shut down mongoS and config servers
[08:10:48] <joannac> step down the primary of your last shard
[08:10:52] <joannac> voila, single replica set
[08:24:45] <movedx> Wow. OK. Yeah that's... steps... a lot of :P
[08:26:12] <rspijker> you should really make your shards replica sets… Then you can actually do maintenance...
[08:27:09] <movedx> They are already, now that I think of it.
[08:27:48] <rspijker> So just use rolling maintenance :)
[08:28:36] <movedx> Yeah I'm new to this stuff, so I'm trying to get my head around the architecture.
[08:31:34] <movedx> rspijker: Can you explain the steps involved in that? I will review the architecture of this cluster now.
[08:33:24] <rspijker> well… if your shard is a RS, you can just bring down each secondary and the shard will continue to function fine. You perform maintenance on the secondary and bring it back up. Repeat for each secondary. Then you step down the primary, it becomes secondary and you do the same thing to that. Then you’re done. Maintenance performed on all nodes without a second of downtime
[08:34:40] <rspijker> for this to work properly (and for you to have actual guarantees about data retention etc.) you need 2 real secondaries in each RS
[08:34:44] <movedx> Right, OK. So I'm going to log in to the system now and determine primaries/secondaries, etc.
[08:37:14] <movedx> rspijker: It looks like we have two shards with three replica sets.
[08:38:15] <rspijker> eh… what?
[08:38:27] <movedx> Yeah, see, I don't "get it."
[08:38:28] <rspijker> 2 shards which each are replica sets with 3 members?
[08:38:56] <movedx> I think so? We have Shard 1, Server 1, 2 and 3. Then Shard 2, Server 1, 2 and 3.
[08:39:05] <movedx> Someone else set this up, and I've never done Mongo stuff really.
[08:39:38] <movedx> So I assume the servers are replica sets?
[08:39:56] <rspijker> they are replica set members, most likely, yes
[08:42:46] <movedx> rspijker: So for each "secondary" (Server 2 + 3), I take them out the cluster and then follow joannac's steps?
[08:43:02] <rspijker> no. You don’t need joannac’s steps
[08:43:24] <rspijker> if you just want to perform maintenance on the hosts, you don’t have to remove the entire shard
[08:44:18] <movedx> OK, sorry I'm confused. I thought what you were recommending was supplementary to what joannac was suggesting. How do your steps reduce a sharded collection to a non-sharded collection?
[08:44:54] <rspijker> they don’t… that was the entire point
[08:45:24] <rspijker> why would you want to unshard for maintenance?
[08:46:03] <movedx> Sorry, I never clarified "maintenance." So we have unsharded collections which we want to shard.
[08:50:50] <Katafalkas> Deploying 3 config servers for production - the guide at http://docs.mongodb.org/manual/tutorial/deploy-config-servers/ says I just start all three of them with 'mongod --configsvr' ... does this mean that the config servers are not in a replica set ?
[08:51:26] <Katafalkas> let's say later I would like to increase the size of one of the config servers ... how do I add an additional config server, like a replica set member ?
[08:52:04] <Derick> right, config servers are not a replica set
[08:52:10] <Derick> you can only have 3
[08:52:16] <Derick> (or 1, but not for production)
[08:53:09] <Katafalkas> and if later in production I decide to increase the size of the config servers ? is there a nice way to migrate ? like adding a replica set member ?
[08:53:37] <Derick> you just need to shut one config server down
[08:53:53] <Derick> replace it --- potentially on the same IP address --- and bring it up
[08:54:12] <Derick> if there aren't 3 active config servers, mongodb can't migrate chunks though
[08:54:20] <Derick> but why would you make a config server larger?
[08:54:59] <Katafalkas> and the data chunks will sync between servers ?
[08:55:13] <Derick> hmm, that is a good question
[08:55:15] <Derick> I don't know!
[08:55:52] <Katafalkas> we are starting a project, so I don't want to get xlarge servers for config servers, would like to get micro or small servers
[08:55:54] <Katafalkas> at the start
[08:56:00] <Derick> http://docs.mongodb.org/manual/tutorial/replace-config-server/
[08:56:00] <Katafalkas> and later migrate to larger ones
[08:56:10] <Derick> micros have terrible throughput though
[08:57:02] <Katafalkas> kk. cheers for info.
[08:59:21] <movedx> rspijker: Sorry had a meeting. So we have unsharded collections, now reaching 10s of GBs in size, so we want to shard them. We need a regression plan should things go south.
[08:59:37] <movedx> rspijker: So after creating the index, I will then enable sharding etc.
[08:59:55] <movedx> rspijker: So I need a regression plan to rollback the sharding in the event something fails or stops working.
[09:00:01] <rspijker> movedx: that’s what joannac suggested. That is in no way maintenance though…
[09:00:35] <movedx> rspijker: It's a maintenance in ITIL terms. Perhaps MongoDB has its own definition, but it is a maintenance in... the entire IT industry ;)
[09:01:51] <Nodex> it's normally called a migration but that's potatoes
[09:02:03] <Derick> movedx: ITIL has stupid names
[09:02:11] <movedx> Derick: Tell me about it.
[09:03:15] <movedx> Derick: No, wait... don't tell me about it!
[09:03:17] <movedx> :P
[09:03:47] <Derick> :-)
[09:03:51] <Nodex> regression is easy. Take a snap shot, if all fails then remove the shards and put the snapshot back
[09:03:55] <rspijker> maintenance, to me, is the act of keeping something maintained. As in, keeping it in good condition. What you are doing isn’t that…
[09:04:01] <Nodex> don't need an ITIL piece of paper for that ;)
[09:08:02] <movedx> So going off of joannac's instructions (and not bickering about the definition of an industry standard term), I should basically removeShard() on the replica servers (the non-primary ones for that shard) and then movePrimary() those shards to a single/current primary?
[09:09:39] <movedx> Why have I got to shutdown the mongoS instance and config servers after doing this?
[09:12:23] <Nodex> movedx : just a quick point on your "industry standard term" comment. I have been in the industry for a very, very long time and I had never even heard of ITIL until today. Also, the (proper) dictionary definition of "maintenance" is closer to what rspijker stated, which is what most "normal" people would also think of, making certain questions ambiguous if you're throwing these terms around.
[09:13:27] <movedx> I'll work with what I have been given thus far. Thanks for the help guys.
[09:14:29] <movedx> Luckily we're moving away from Mongo in a month or two after we've migrated the data.
[09:14:36] <movedx> It's going to be bliss.
[09:15:12] <rspijker> Well, it certainly looks like you put in the effort to understand it before arriving at a conclusion
[09:15:14] <rspijker> best of luck
[09:16:21] <movedx> I wish I could understand it - it's nice tech. I really like the idea and I love the scalability of it.
[09:18:29] <Nodex> unfortunately it can't be learned on paper, you must develop 'things' with it to have a deep understanding of it
[09:18:29] <movedx> Some good results here: https://jira.mongodb.org/browse/SERVER-9845
[09:18:46] <movedx> Nodex: Like most technologies, in fact.
[09:19:30] <Nodex> experience always trumps being taught in an education environment
[09:19:49] <Nodex> any robot can teach a monkey to tap keys lol
[09:19:57] <movedx> "The best suggestion I can offer is that you schedule a maintenance window where there are no writes to the system..." heh.
[09:21:35] <Nodex> the ONLY way to migrate this properly is to make the system read-only for the migration time
[09:21:55] <movedx> Aye.
[09:22:12] <Nodex> take a dump, shard, if sharding fails then remove the shards and restore the dump
[09:22:26] <Nodex> it doesn't get simpler than that
[09:23:39] <crs_> Hello All. A quick question. I am trying to insert a json file into mongodb. Everything seems to be working fine except the date. It is getting stored as string. This is the sample json {"d":{"t":"2014-07-24T01:00:00Z"}}. I read that by making this as {"d":{"t":new Date("2014-07-24T01:00:00Z")}} it will work fine. But is this the only way to insert a json date into mongodb?
[09:24:27] <Nodex> there is no such thing as a json date
[09:24:36] <rspijker> JSON != BSON
[09:25:09] <crs_> my bad, yes BSON date
[09:28:28] <rspijker> and yes, afaik that’s the only way to insert a date through JSON
[09:29:04] <rspijker> http://stackoverflow.com/questions/12186006/inserting-json-into-mongodb-converting-dates-from-strings-automatically this might be useful though crs_
[09:29:07] <movedx> Nodex: So these steps: http://docs.mongodb.org/manual/tutorial/manage-sharded-cluster-balancer/#disable-the-balancer
[09:30:13] <movedx> Nodex: Sorry wrong link: http://docs.mongodb.org/manual/tutorial/backup-sharded-cluster-with-database-dumps/#procedure
[09:30:31] <crs_> Thanks rspijker, I will try doing something similar
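The conversion in rspijker's StackOverflow link boils down to a `JSON.parse` reviver. A minimal plain-JavaScript sketch (the ISO-8601 regex, and the assumption that every string matching it really is a date, are mine, not from the link):

```javascript
// Revive ISO-8601 date strings into Date objects while parsing JSON,
// so a driver can store them as BSON dates instead of strings.
// Assumption: any string matching this pattern really is a date.
const ISO_DATE = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z$/;

function parseWithDates(json) {
  return JSON.parse(json, (key, value) =>
    typeof value === "string" && ISO_DATE.test(value) ? new Date(value) : value
  );
}

// crs_'s sample document:
const doc = parseWithDates('{"d":{"t":"2014-07-24T01:00:00Z"}}');
// doc.d.t is now a Date object rather than a string
```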
[09:32:08] <Nodex> movedx : I thought you had NOT sharded and you wanted to shard now no?
[09:33:09] <movedx> Nodex: That's a good point. Sorry I never clicked onto that. Yes so I only have to dump from the primary the collection exists on?
[09:33:45] <movedx> Then if it goes belly up, delete the shards in the sharded collection and bring the backup back in?
[09:33:55] <Nodex> correct
[09:34:14] <Nodex> if you have the disk space this is probably the best solution, certainly the easiest
[09:34:18] <movedx> OK, that seems sensible.
[09:34:48] <rspijker> so… how will you deal with all the data that is added between making the backup and deciding you don’t want to shard anyway?
[09:35:11] <movedx> Good question.
[09:35:51] <movedx> Management want a zero downtime solution. *sigh*
[09:36:12] <Nodex> that's not going to happen
[09:36:39] <movedx> Indeed.
[09:36:44] <Nodex> well that's a lie, you can setup a single instance to accept the writes and read from a replica set then migrate the new writes into the new cluster
[09:37:05] <Nodex> seems like a lot of effort to avoid ~20 mins downtime
[09:37:29] <rspijker> Why are you assuming it goes belly-up? And, in the case that it does that the answer is to just stop sharding altogether and go back to unsharded operation?
[09:38:30] <movedx> rspijker: All other suggestions are welcome. And I believe having a plan to deal with failure is a good idea - you disagree with this? I shouldn't plan for this going wrong due to a bug or incorrect operation on the data?
[09:39:45] <arussel> I'm starting mongo using: mongod --dbpath, is there any option I can set to see all queries being output ?
[09:42:13] <Nodex> you can raise the profiling level to 2 and tail the log
[09:45:38] <rspijker> movedx: having a disaster recovery plan is good, of course. However, this isn’t that... You are considering implementing a certain ‘technology’ or way of operating and are planning against that not working. In my opinion that’s not a decision you make on a production system. You test it on a copy of your data in a staging setup and, if it all works out well there, then you can start using it in production. You are attempting to apply a mechanism you would use for dealing with bugs as a way to decide which technologies and techniques to use in your system. That won't end well.
[09:45:47] <arussel> Nodex: wouldn't that be in the output of mongod --dbpath ?
[09:46:14] <rspijker> arussel: set slowms to 1
[09:46:38] <movedx> rspijker: Yep, I agree. I wish we had the automated steps in place to bring up a staging MongoDB cluster.
[09:46:53] <rspijker> then all queries should be printed to stdout (or log if defined)
[09:46:57] <Derick> i think we're working on that through MMS automation
[09:47:31] <arussel> rspijker thanks better, but hitting: $msg: "query not recording (too large)"
[09:48:58] <rspijker> arussel: not sure if there is a way around that, tbh
[09:51:12] <arussel> it is not that large, it is just an aggregate with 5 entries ...
[09:52:41] <rspijker> fairly sure the buffer is only 256 bytes, anything larger than that will generate the message you are seeing
[09:53:00] <movedx> rspijker: Thanks for all the help, by the way, it's appreciated.
[09:53:33] <rspijker> no worries
[09:55:09] <arussel> rspijker thanks, I'll try to log some other way
[09:55:33] <rspijker> arussel: maybe mongosniff will work
[09:56:36] <rspijker> perhaps in combination with --diaglog
[09:57:03] <rspijker> I think all of the regular built-in logging capabilities will run into the same 256B wall
[10:01:23] <joannac> 256B?
[10:03:13] <arussel> rspijker thanks, I'll just add more logging to all connecting apps
[10:03:51] <rspijker> joannac: 256 bytes
[10:04:15] <joannac> pretty sure it's bigger than that
[10:04:21] <movedx> Does dropping a sharded collection delete all replicas of the collection/shards? Basically does it "clean out the house?"
[10:04:56] <joannac> movedx: delete all replicas?
[10:04:59] <joannac> what
[10:05:32] <joannac> if you drop a collection, all data that the mongoS knows about in the collection gets deleted
[10:05:48] <movedx> Sorry, what I mean to ask is: if I delete a sharded collection, is it removed from the cluster as a whole in a single operation?
[10:05:53] <joannac> so yes, if it's across 3 shards, the dropCollection gets passed to all shards
[10:06:00] <movedx> OK, thank you.
[10:06:24] <rspijker> joannac: I have to admit I’ve not read it anywhere official. Only in comments in multiple places (e.g., on here https://jira.mongodb.org/browse/SERVER-1794). So not sure whether it’s the actual truth or just a repeated misconception
[10:08:17] <joannac> rspijker: I've seen super long log lines so my gut feel is it's bigger
[10:08:24] <joannac> but now I'm doubting myself ... :(
[10:08:41] <rspijker> joannac: you might very well be right… I think I’ve seen longer ones as well...
[10:08:54] <rspijker> also not sure whether the 256B only applies to currentOp() or also to logging
[10:19:38] <talbott> hello mongers
[10:20:43] <talbott> quick q, if i've removed millions of docs, do you know how i free up that space on disk?
[10:23:00] <joannac> repair or resync
[10:30:37] <talbott> thanks
[10:32:58] <dawik> why doesn't removing documents free up space?
[10:35:54] <Nodex> because it's memory mapped files
[10:36:06] <Nodex> everything is pre-allocated
[10:42:28] <dawik> hmm, then why would it be stored on disc?
[10:43:11] <dawik> even if it were pre-allocated it would need to allocate more (and free memory at times) you would think
[11:01:54] <Nodex> the space is pre-allocated to ensure things try to sit next to each other on disc to avoid massive disk seeks
[11:35:11] <rdsoze> Is it true that $exists doesn't use an index ?
[11:35:47] <rdsoze> Found a SO question which references to older docs stating the same: http://stackoverflow.com/questions/9009987/improve-querying-fields-exist-in-mongodb
[11:35:55] <rdsoze> Can't find anything about it now
[11:46:13] <rdsoze> Anyone has insights on using $exists with indexes ?
[11:59:51] <joannac> rdsoze: Works in 2.6 iirc
[12:01:06] <rdsoze> joannac: So $exists uses an index in 2.6 ?
[12:02:38] <joannac> yes
[12:04:26] <joannac> actually, looks like it worked in 2.4 too?
[12:05:46] <rdsoze> joannac: My queries using $exists have slowed down significantly as the collection size has increased. Will using a sparse index here help ?
[12:07:27] <joannac> umm
[12:07:31] <joannac> $exists:true?
[12:07:48] <joannac> what do your queries look like?
[12:09:19] <rdsoze> joannac: find({"results.rank": {$exists: 0}, "results.0": {$exists: 1}}).sort({ uid: 1})
[12:09:42] <joannac> results is a subdoc or an array?
[12:09:52] <rdsoze> joannac: array
[12:10:05] <joannac> sparse index won't help you with that
[12:10:34] <rdsoze> joannac: ok
[12:10:40] <joannac> run your query with a .explain(true)
[12:10:48] <joannac> and pastebin the result
[12:30:39] <rdsoze> joannac: http://pastebin.com/NdmC1pyw
[12:33:15] <joannac> yeah, look at the nscanned
[12:33:32] <joannac> you end up scanning more using the index on results.rank
[12:33:41] <joannac> plus you have to sort in memory
[12:52:42] <rdsoze> joannac: ahh ok. But the cursor is a BtreeCursor which means it is using an index right ?
[12:53:14] <rdsoze> joannac: then why is the nscanned so high ?
[12:59:23] <rspijker> rdsoze: because it’s not using an index for the search
[12:59:56] <rspijker> it’s using the uid and sids index. You aren’t using those in your search
[13:01:11] <rspijker> the one using the results.rank scans far less
[13:06:10] <rdsoze> rspijker: ok. I have an index on results.rank but it's not using it.
[13:06:28] <rdsoze> rspijker: not sure why
[14:40:55] <remonvv> \o
[14:42:52] <rspijker> o/
[14:43:18] <remonvv> Anyone aware of why arrays are stored as index keyed maps in BSON. Always struck me as odd.
[14:44:54] <kali> remonvv: simplifying the implementation ?
[14:45:41] <remonvv> kali: That's my theory. It's not a particularly convincing one though. We're doing something where we store quite a bit of integers in an array and the size overhead is dizzying.
[14:45:59] <remonvv> kali : What I mean is, is it actually more simple?
[14:46:32] <rspijker> kinda.. It’s saying an array is actually a very specific instance of this underlying thing we already have all of the helper methods for...
[14:47:12] <rspijker> what kind of size overhead are we talking here?
[14:47:23] <rspijker> well… double, I suppose?
[14:48:07] <remonvv> give or take, the array is internally stored as {"0":12, "1":34} and so on.
[14:48:15] <remonvv> Actually quite a bit more than double.
[14:48:35] <Nodex> it's strange to store the key when it's not needed for computation
[14:48:45] <Nodex> especially as a string
[14:48:59] <rspijker> that’s weird… why is it more than double?
[14:49:31] <rspijker> o, multi char strings are of course far less optimal to store than their integer representation, nvm
[14:50:04] <kali> well, yes and no
[14:50:41] <remonvv> The optimal way to store an array (int array in this example) is 1 byte for element type, 4 bytes for array size, and 4 bytes per value.
[14:50:45] <kali> "0" is two bytes 30,0
[14:50:54] <kali> ha, plus the four byte size :)
[14:51:03] <kali> ok, nevermind
[14:51:37] <Nodex> it's too close to the weekend to be discussing this haha
[14:51:50] <Nodex> very nearly beer-o-clock
[14:51:58] <kali> yeah, beer time minus 50 minutes
[14:52:02] <Nodex> hah
[14:52:13] <rspijker> remonvv: do arrays have an element type?
[14:52:33] <rspijker> I’ve never actually tried it, but can you have an array like [“string”, 12, true] ?
[14:52:39] <remonvv> No, currently an array is a full BSON document
[14:53:16] <kali> remonvv: well, rspijker is right, your scheme only works for array of a single type
[14:53:17] <remonvv> so you get all the overhead: the total size in bytes, each element has an (unnecessary) null-terminated utf8 key string, etc.
[14:53:35] <remonvv> not really, bson already reserves a byte for the element type
[14:54:08] <remonvv> there is no overhead between "this is a float" and "this is an array of floats"
[14:54:22] <rspijker> still, 4 bytes for array size and 5 bytes per value would be quite an improvement
[14:54:22] <remonvv> with the small exception that the total needs to be <256
[14:54:48] <remonvv> yes, but for primitive arrays especially (so, fixed size elements) it can be better still.
[14:55:00] <kali> remonvv: trouble is, how to encode [ 0, true ] ? you need a type for each element
[14:55:06] <remonvv> Anyway, I understand we're not going to solve this now. Was just curious if someone ever caught the reasoning behind it
[14:55:32] <remonvv> kali: True, in such cases you need a type per element. It's still a good optimization.
[14:55:45] <kali> remonvv: agreed.
[14:56:24] <remonvv> And for the additional optimization of "this is an array of type X" as I mentioned it'd need some sort of optional strict typing check.
[14:56:33] <remonvv> So yeah, byte per element.
[14:56:41] <remonvv> Oh well, food for thought I suppose.
[15:01:03] <remonvv> Hm, so an int array with a single value in current BSON is 1(type == array) + 4(docsize of array) + 1(cstring element type) + 2(smallest cstring key) + 1(x10 integer element type) + 4(integer value) + 1(0x00 doc terminator) = 14
[15:03:42] <remonvv> Yeah. Beer.
[15:03:50] <rspijker> remonvv: it’s 16, according to my shell
[15:04:00] <remonvv> Hm, wonder what I'm missing.
[15:04:15] <remonvv> That's just the array value or the enclosing BSON as well?
[15:04:30] <rspijker> Object.bsonsize([1])
[15:04:44] <remonvv> Hm, '0' = 2 bytes right?
[15:04:56] <remonvv> yeah, should be
[15:05:04] <remonvv> I could open the shell too I suppose ;)
[15:05:19] <rspijker> ‘0’ as in a character 0? or an integer?
[15:05:26] <rspijker> (shouldn’t matter)
[15:05:28] <remonvv> the cstring
[15:05:34] <remonvv> as specced in the bson spec
[15:05:38] <remonvv> document key values are cstring
[15:05:46] <remonvv> cstring is a null-terminated utf8
[15:06:12] <rspijker> I see
[15:06:25] <rspijker> then yes, it should be 2 bytes
[15:07:01] <remonvv> the overhead per document is 5 bytes, the minimum element key is then 2 bytes, an int value is 5 (1 for type, 4 for value)
[15:09:48] <remonvv> rspijker: It should be 12
[15:10:23] <remonvv> Oh...1 != NumberInt(1)
[15:10:34] <rspijker> hah, yeh
[15:10:41] <rspijker> the shell makes that an int64, probably
[15:10:53] <rspijker> yes
[15:11:05] <rspijker> Object.bsonsize([NumberInt(1)]) is 12
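The 12-byte figure remonvv and rspijker converge on can be reproduced with a little arithmetic over the fields the BSON spec defines for an embedded array document (a sketch, assuming int32 elements and the spec's decimal-string keys):

```javascript
// Per the BSON spec, an array is a document whose keys are "0", "1", ...
// Size of the array document itself: int32 length prefix, then per element
// a type byte, a null-terminated decimal key, and the value, then 0x00.
function bsonInt32ArraySize(values) {
  let size = 4 + 1; // length prefix + trailing 0x00 terminator
  values.forEach((_, i) => {
    const keyBytes = String(i).length + 1;       // decimal key + null byte
    size += 1 /* type byte (0x10 = int32) */ + keyBytes + 4 /* value */;
  });
  return size;
}

console.log(bsonInt32ArraySize([1]));      // 12, matching Object.bsonsize([NumberInt(1)])
console.log(bsonInt32ArraySize([12, 34])); // 19, remonvv's {"0":12, "1":34} example
```

Compare that with remonvv's "optimal" layout of 4 bytes of length plus 4 bytes per int32 value: the per-element key and type bytes are exactly the overhead being discussed.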
[15:12:16] <plowe> hello
[15:12:21] <Derick> hello
[15:12:40] <plowe> mind if i ask a beginner question
[15:12:52] <Derick> go ahead
[15:13:00] <rspijker> sure, remonvv’s been doing it for an hour
[15:13:11] <plowe> I want to $set a field in all documents within all arrays within a document within a document. Basically, I want to do this
[15:13:11] <plowe> {$set : {'documentname.*anyandallstrings*.*anyandallnum*.fieldname' : value}}
[15:13:11] <plowe> How is this done?
[15:13:22] <plowe> pretyped came out weird
[15:13:46] <Derick> plowe: no, I don't think you can
[15:13:50] <plowe> oh
[15:14:01] <plowe> well that sucks
[15:14:16] <Derick> plowe: it's often a question that pops up when your data model isn't ideal
[15:14:22] <plowe> its not my data model
[15:14:30] <Derick> somebody elses? :)
[15:14:30] <rspijker> that’s not ideal
[15:14:35] <bdiu> Is someone with experience optimizing MapReduce/Aggregations available for a screenshare consulting session? (paid)
[15:14:41] <bdiu> msg me please :-)
[15:15:07] <joannac> bdiu: how much?
[15:15:26] <bdiu> whatever a reasonable hourly rate is
[15:16:31] <remonvv> Someone has a beginnner question and suddenly @Derick wakes up.
[15:16:31] <joannac> what do you find reasonable?
[15:16:49] <plowe> so is it just impossible
[15:16:55] <remonvv> If I can't buy a Ferrari at the end of it I'm not interested.
[15:17:24] <Derick> remonvv: :-þ
[15:17:52] <Derick> plowe: in that way, yes - set can only update *one* field. In your case, you need to do it client side (but you lose atomicity of the update)
[15:17:59] <Derick> ah, boo
[15:18:02] <bdiu> joannac: ... if you have experience and think you can help send me a msg with your rate and we can chat...
[15:18:04] <Derick> maybe they saw that
[15:18:45] <rspijker> remonvv: that’ll be a cheap session: http://www.ebay.com/itm/Was-Opened-FERRARI-F355-SPIDER-BLACK-1-18-DIECAST-MODEL-CAR-BY-HOTWHEELS-/141353645345
[15:46:12] <fontanon> Hi everyone, I'm looking for some advice on mongodb design patterns. Let's suppose I have a mongodb collection for storing a timeline of events, I mean, with a timestamp each. For example, a phone call registry. A requirement of the app using this mongodb collection is: displaying the last call per phone number with the sum of the call durations. Which design pattern makes more sense? [Design pattern 1] The app makes queries with the aggregation framework to calculate the sum of call-durations and returns it attached to the last phone-call (sort by timestamp:-1). [Design pattern 2] Keep a separate collection with one record per phone number. On each new phone-call, do a mongodb findAndUpdate on the proper record.
[15:47:33] <Derick> I'd do: store an entry for each call in one collection *and* store an entry for each phone number in another containing the number, and the sum-of-calls
[15:47:48] <Derick> upon insertion of the call entry, also update the phone number entry to add the new time
[15:48:10] <fontanon> Derick, more or less the design pattern 2.
[15:48:41] <Derick> yes, but keeping each single *entry* out of there
[15:48:48] <Derick> and no need for findAndUpdate either...
[15:49:46] <fontanon> Derick, how is the sum-of-calls calculated in that collection? I've read something about some mongodb facilities to do this.
[15:50:27] <Derick> say, each entry has:
[15:50:54] <Derick> { phonenr: '555-111-2345', start: 1406303313, end: 1406303913, duration: 600 }
[15:51:00] <Derick> so when you insert that, also do:
[15:51:42] <Derick> db.phonenrs.update( { _id: '555-111-2345' }, { $inc: { total_duration: 600 } } );
[15:52:48] <fontanon> $inc operator
[15:52:49] <fontanon> great
[15:53:13] <fontanon> this sounds awesome Derick, thanks !
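Derick's two-collection pattern can be checked without a server by simulating the bookkeeping in plain JavaScript (a sketch; the `last_call` field is my addition to cover fontanon's "last call per number" requirement, and the collection/field names follow Derick's example):

```javascript
// Simulate the two-collection design: every call is appended to `calls`,
// and a per-number summary in `phonenrs` gets its total incremented,
// mirroring the insert + $inc pair Derick describes.
const calls = [];
const phonenrs = {}; // keyed by _id (the phone number)

function recordCall(call) {
  calls.push(call); // insert into the calls collection
  // upsert-style: create the summary entry on first sight, then $inc:
  const entry =
    phonenrs[call.phonenr] || (phonenrs[call.phonenr] = { total_duration: 0 });
  entry.total_duration += call.duration;
  entry.last_call = call.start; // track the latest call too
}

recordCall({ phonenr: "555-111-2345", start: 1406303313, end: 1406303913, duration: 600 });
recordCall({ phonenr: "555-111-2345", start: 1406307000, end: 1406307300, duration: 300 });
// phonenrs["555-111-2345"] is now { total_duration: 900, last_call: 1406307000 }
```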
[15:58:22] <d0liver> Hi, I'm relatively new to mongo. I'm trying to store a relatively small amount of binary data as a field on one of my collections. The field is the binary data for a profile image. The data is being stored as a string for some reason. I can verify this with db.myCollection.find({imageBlob: {$type: 2}}). What do I need to do to make mongo store this as binary data?
[17:04:25] <s2013> does mongodb have limitation on the size of the collection being imported/exported?
[17:30:00] <in_deep_thought> Is there a mongodb equivalent of sqlite? like where the db can just be a file that you access?
[17:45:45] <zfjohnny> What can I add to my mongo.conf file in order to get a full query profile log file in my development server?
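zfjohnny's question goes unanswered in the log. Combining what rspijker told arussel earlier (profiling level 2, slowms at 1), a 2014-era ini-style mongod.conf for a development box might look like the sketch below; the log path is a placeholder, and exact option support should be verified against your server version:

```ini
# Profile every operation (level 2) and log it.
profile = 2
# Treat anything slower than 1 ms as "slow" so it is written to the log.
slowms = 1
# Send output to a log file instead of stdout.
logpath = /var/log/mongodb/mongod.log
```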
[19:20:28] <zfjohnny> Is anybody in this room? Been monitoring for over an hour.. no messages
[19:24:01] <in_deep_thought> zfjohnny: I ask questions from time to time. I bet you are more interested in answer givers than question askers but here I am!
[19:24:35] <ehershey> there are 387 people here
[19:24:48] <ehershey> they're all just quiet
[19:25:15] <in_deep_thought> shhh
[19:25:40] <ehershey> some are in deep thought
[19:25:48] <ehershey> some are asleep
[19:26:03] <in_deep_thought> lol
[19:38:24] <s2013> is this channel ever active?
[19:38:51] <s2013> this is the least active out of any major tech channel ive ever seen
[19:51:28] <in_deep_thought> s2013: I think it's because it's Friday.. Im not sure. usually has some activity
[19:59:00] <SubCreative> Running into an issue with making an update using $set, I have an object like so { user: "bar", details: { item1: "bar" }}
[19:59:31] <SubCreative> Every time I try to make an update to item1, if there are other items in the same object, it will remove all the others and only keep my updated item.
[20:00:11] <SubCreative> { user: "bar", details: { item1: "bar", item2: "mongodb" }} Becomes ------> { user: "bar", details: { item1: "update" }}
[20:01:12] <SubCreative> How can I add or update to a single key/value, while still retaining all my others?
[20:11:02] <ehershey> SubCreative: use $set
[20:11:19] <SubCreative> That's what I'm using...
[20:13:37] <ehershey> db.stuff.update({}, { $set: { "details.item1": "update" } })
[20:14:32] <SubCreative> db.collection('users').updateById(user, {$set: {apiKeys: {TestKey: apiKey}}}, function(err, result){
[20:14:38] <SubCreative> I'm using node, but essentially it's identical.
[20:14:58] <ehershey> you have to do $set: { apiKeys.TestKey: apiKey }
[20:15:13] <SubCreative> let me try
[20:16:32] <SubCreative> That doesn't work for me.
[20:16:51] <SubCreative> db.collection('users').updateById(user, {$set: {apiKeys.TestKey: apiKey}}, function(err, result){
[20:17:25] <ehershey> what happens when you try it?
[20:17:29] <ehershey> is there an error?
[20:17:36] <ehershey> you probably have to quote that object key
[20:17:39] <SubCreative> SyntaxError: Unexpected token .
[20:17:43] <SubCreative> ah
[20:17:50] <ehershey> $set: { "apiKeys.TestKey": apiKey }
[20:18:41] <SubCreative> Good to go, thanks boss.
[20:22:37] <SubCreative> Makes sense now why it was clearing the other keys.
[20:22:45] <SubCreative> I was updating the key apiKeys
[20:22:55] <SubCreative> not just the key inside
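SubCreative's bug is easy to reproduce without a server. A plain-JavaScript simulation of $set's path semantics (a sketch, not MongoDB's actual implementation) shows why the quoted dotted key preserves sibling fields while setting the whole subdocument replaces them:

```javascript
// Minimal simulation of MongoDB's $set: a dot-separated path addresses a
// nested field; a path without dots replaces the whole value at that key.
function applySet(doc, setSpec) {
  const out = JSON.parse(JSON.stringify(doc)); // deep copy for clarity
  for (const [path, value] of Object.entries(setSpec)) {
    const keys = path.split(".");
    let node = out;
    for (const k of keys.slice(0, -1)) {
      if (typeof node[k] !== "object" || node[k] === null) node[k] = {};
      node = node[k];
    }
    node[keys[keys.length - 1]] = value;
  }
  return out;
}

const user = { user: "bar", details: { item1: "bar", item2: "mongodb" } };

// Setting the whole subdocument wipes item2 (SubCreative's bug):
const replaced = applySet(user, { details: { item1: "update" } });
// => { user: "bar", details: { item1: "update" } }

// Dot notation only touches item1:
const dotted = applySet(user, { "details.item1": "update" });
// => { user: "bar", details: { item1: "update", item2: "mongodb" } }
```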
[20:28:14] <mn3monic> hi, I can't find how to update a subset, in mongodb, help is appreciated
[20:32:55] <nicolas_leonidas> hi, I'm trying to learn how to aggregate, I have a collection with documents that look like this:
[20:32:56] <nicolas_leonidas> http://paste.debian.net/111689/
[20:33:31] <nicolas_leonidas> I need to aggregate on browsing_metadata.ipgeo.country for example to see how many people are from what countries
[20:34:12] <nicolas_leonidas> the problem I'm facing now is using the dots in a hierarchy, $project and $group don't seem to be doing it
[20:34:46] <nicolas_leonidas> I'd like to modify this command to do what I said http://paste.debian.net/111690/
[20:34:55] <nicolas_leonidas> how can I achieve this? any help is highly appreciated
[20:35:02] <mn3monic> dots !
[20:35:04] <mn3monic> you're the best, man !
[20:37:03] <nicolas_leonidas> mn3monic, I'm a newbie sorry
[20:37:09] <nicolas_leonidas> what's wrong with dots?
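nicolas_leonidas's pipeline question never gets a direct answer in the log. For reference, dotted paths do work in $group when prefixed with `$`; the commented pipeline below uses the field names from his description (the collection name `visits` is an assumption), and the grouping logic itself can be checked in plain JavaScript:

```javascript
// In the mongo shell the count-per-country pipeline would look like:
//   db.visits.aggregate([
//     { $group: { _id: "$browsing_metadata.ipgeo.country",
//                 count: { $sum: 1 } } }
//   ])
//
// The same group-and-count, simulated over plain objects:
function countByPath(docs, path) {
  const keys = path.split(".");
  const counts = {};
  for (const doc of docs) {
    // Walk the dotted path down into the document.
    const val = keys.reduce((node, k) => (node == null ? node : node[k]), doc);
    counts[val] = (counts[val] || 0) + 1;
  }
  return counts;
}

const docs = [
  { browsing_metadata: { ipgeo: { country: "CA" } } },
  { browsing_metadata: { ipgeo: { country: "CA" } } },
  { browsing_metadata: { ipgeo: { country: "FR" } } },
];
const byCountry = countByPath(docs, "browsing_metadata.ipgeo.country");
// => { CA: 2, FR: 1 }
```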
[21:14:10] <leru> Hi. I don't know anything about mongodb. Is it just like javascript but as a database?
[21:53:44] <mango_> I'm using MMS, and have 'Host is unreachable.' I've just started my MongoDB server, how do I verify the connection again?
[22:18:14] <mango_> Anyone out there help with a MMS issue?
[22:25:36] <AlexZanf> hey guys I have a user model, and i am trying to add profile images to users. Should i just add a images[int,int] list to the user model? with the first param being the image slot, and the 2nd param being the image name or id? or should i make a new image model and compose with it?