#mongodb logs for Wednesday the 31st of July, 2013

[00:00:35] <skot> find({a: {$in: [1,4,5,6]}})
[00:01:07] <skot> https://github.com/mongodb/node-mongodb-native#find
[00:17:31] <EGreg> $in! THANKS
[00:31:45] <Petrochus> updating a document in-place, like via .update({_id: 1}, {$push: {"array": val}}) should be better/faster than doing a findOne and later save, right?
[00:34:26] <skot> yes, reduces network round-trips and data transmitted.
[00:36:40] <Petrochus> skot: well, let's assume the database will always be located on localhost
[00:38:42] <skot> Internally it has to do the find and then update the document in memory, and write it out. So in practice, it is really about the same.
[00:39:45] <skot> As for which is better, let me ask you this, what happens if two parties try to find+save at the same time with different data?
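(A minimal shell sketch of the two approaches being contrasted, with an illustrative collection; it also illustrates skot's race question — the read-modify-write variant can silently drop one party's change:)

    // Atomic in-place update: one round-trip, applied atomically on the
    // server, so concurrent writers can't lose each other's pushes
    db.docs.update({_id: 1}, {$push: {array: 42}})

    // find + save: two round-trips, and if two clients interleave here,
    // the second save() overwrites the first client's push
    var doc = db.docs.findOne({_id: 1})
    doc.array.push(42)
    db.docs.save(doc)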
[00:44:14] <epsas> Hello - is this a place to ask motor questions too?
[02:06:54] <minerale> skot: I have nested classes, and for some reason Morphia is trying to serialize the parent class. (Because the parent class is referenced under the default this$0 ) Is there a way to stop that?
[02:13:09] <cheeser> minerale: declare the inner class as static
[03:17:11] <LoneSoldier728> hey
[03:17:22] <LoneSoldier728> Friend.findOneAndUpdate({userId: mongoose.Types.ObjectId(req.signedCookies.userid)}, {
[03:17:22] <LoneSoldier728> friendStatus: {$set: {fuId: mongoose.Types.ObjectId(req.body.friend), status: 3}}
[03:17:22] <LoneSoldier728> }
[03:17:35] <LoneSoldier728> friendStatus: {$set: {fuId: mongoose.Types.ObjectId(req.body.friend), status: 3}}
[03:17:37] <LoneSoldier728> how do i set
[03:17:39] <LoneSoldier728> the status in there
[03:36:32] <jackh> all, when I do jstests, some tests returned with error code 253, do you know why?
[03:37:32] <retran> what's a jstests
[03:41:17] <jackh> retran: it's bulk_insert.js
[03:41:45] <retran> what are you referring to
[03:42:11] <jackh> I just do this test, it failed, i want to know why
[03:42:18] <retran> why would i know the contents of your javascript file
[03:42:58] <jackh> it's just under jstests directory
[03:44:35] <retran> hmm these are tests for checking if the source you've been editing for mongod produced unwanted effects?
[03:48:03] <LoneSoldier728> http://pastebin.com/N24JetZB
[03:48:09] <LoneSoldier728> anyone know how to call $set correctly
[04:03:05] <LoneSoldier728> anyone awake?
[04:10:39] <LoneSoldier728> anyone hereeee
[04:20:29] <retran> ask your question
[04:20:33] <retran> dont just poll
[05:21:46] <dotpot`> morning guys :)
[05:21:46] <dotpot`> question: how do I increase, let's say, "calculation_weight" in aggregation when a document property does not exist or is None or empty?
[06:12:23] <neophy> /msg NickServ identify neo$#123
[06:30:46] <dotpot> neophy: change your nickserv password, since this channel is logged (http://irclogger.com/.mongodb) .)
[06:31:06] <neophy> yes I changed it
[06:32:02] <dotpot> good :)
[06:33:45] <dotpot> question: via aggregation, how do I increase, let's say, "weight" when a document property, let's say "address", does not exist or is None or empty?
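(The question goes unanswered in the log; a minimal 2.4-era aggregation sketch of one way to do it, with illustrative collection and field names, assuming documents carry a numeric "weight":)

    // boost "weight" when "address" is missing, null, or empty
    db.items.aggregate([
        {$project: {
            weight: {$cond: [
                {$or: [
                    {$eq: [{$ifNull: ["$address", null]}, null]},  // missing or null
                    {$eq: ["$address", ""]}                        // empty string
                ]},
                {$add: ["$weight", 1]},  // increased weight
                "$weight"                // unchanged otherwise
            ]}
        }}
    ])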
[06:37:27] <LoneSoldier728> http://pastebin.com/ipFL1UqB hey anyone know how to avoid 4 queries
[06:37:44] <LoneSoldier728> since I cannot have pull and addToSet in the same query
[06:38:03] <LoneSoldier728> anyone please?
[07:26:18] <weblife> I just finished a three part tutorial on how to get going with Node.js / MongoDB on Ubuntu and then deploy it to cloud services. Could I get any interested parties to review it for fixes / ideas on how to improve it? https://github.com/TheMindCompany/mongonode-app/blob/master/tutorial.pdf
[07:34:18] <[AD]Turbo> ciao
[08:03:10] <hard_day> Hi all, i'm still stuck on the cursor problem (cursor not found on server), any idea? could it be network latency or is it a MongoDB bug?
[08:05:26] <rspijker> you are going to have to be a bit more specific...
[08:05:35] <rspijker> what is the exact error you are getting, when are you getting it, etc.
[08:07:26] <hard_day> rspijker: yes of course i'm so sorry this is all information http://pastie.org/8186478
[08:08:12] <hard_day> and this is a find db['etl2013070118'].find();
[08:13:00] <hard_day> rspijker do you need more information?
[08:14:45] <rspijker> hard_day: the bit of surrounding code might help
[08:15:46] <rspijker> and are you saying you have 47000 collections? :/
[08:17:29] <hard_day> rspijker: i can't because i'm using an ETL tool https://github.com/pentaho/pentaho-mongodb-plugin
[08:18:04] <hard_day> no, sorry, 4700 objects, not collections, my mistake sorry
[08:19:09] <rspijker> ok… well, then my guess is the bug is within pentaho
[08:19:27] <rspijker> it's trying to do something with a cursor that doesn't exist, it looks like.
[08:20:43] <hard_day> me too rspijker, i think it's a pentaho bug, thanks for your time
[08:24:46] <rspijker> sure, np
[12:09:40] <Industrial> How much of my database will mongodb keep in memory? I want to have 13 databases / collections and be able to efficiently grab time slices
[12:10:31] <Industrial> I guess the most recent history will always be most hit
[12:11:44] <cheeser> as much as it can
[12:13:35] <Industrial> ok
[12:17:30] <rspijker> mongodb itself doesn't handle that, it pretty much offloads that decision to the kernel
[12:18:01] <rspijker> that is, it memory maps all of the data and it is then paged in/out of physical memory by the kernel
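(A quick shell check of what the kernel has actually paged in; output values here are illustrative, sizes in MB:)

    db.serverStatus().mem
    // { "resident" : 1507,     // physical RAM currently holding mongod pages
    //   "virtual"  : 20893,    // total virtual address space
    //   "mapped"   : 10240, ...}  // data files memory-mapped by mongod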
[13:09:54] <remonvv> \o
[13:11:26] <rspijker> o/
[14:06:40] <hard_day> can i see the socket timeout with any command?
[14:09:10] <DestinyAwaits> Hey Guys a question
[14:10:18] <DestinyAwaits> as there are 2 data structures inside a doc, dicts and lists, when should one ask for a list and when prefer a sub-document
[14:10:21] <DestinyAwaits> ?
[14:12:32] <remonvv> Are you asking which is more appropriate when?
[14:13:11] <DestinyAwaits> well yes that one and the other is when it makes a sense to create a sub document or a list?
[14:13:32] <remonvv> If so, that's a very hard question to answer. They're two rather different things. It's a similar problem to when you use a List or Set rather than a Map in code.
[14:14:46] <remonvv> When to embed what is an equally difficult question to answer in a few sentences.
[14:15:18] <remonvv> What language are you used to?
[14:15:30] <DestinyAwaits> not a problem I have time if you don't mind explaining.. :)
[14:15:34] <DestinyAwaits> Java
[14:15:54] <remonvv> I have to get into a conference call in a bit.
[14:16:38] <DestinyAwaits> well the one thing you mentioned above when to use List, Sets and Map there are certain criteria for that Big-O notation and key-value pairs etc
[14:17:01] <DestinyAwaits> ah ok
[14:17:04] <DestinyAwaits> not a problem
[14:17:06] <DestinyAwaits> :)
[14:17:26] <DestinyAwaits> I hope someone else when gets active might help.. :)
[14:17:40] <remonvv> Well I have a few more minutes. Maybe some more specific questions might help ;)
[14:18:05] <cheeser> perhaps you want a list of subdocuments.
[14:18:15] <cheeser> the question, without context, is next to meaningless.
[14:18:20] <DestinyAwaits> hehe.. I am starting with the language so I don't know much.. :)
[14:18:24] <remonvv> Are you asking what the practical difference is between [{key:<key>, value:<value>}, .., {key:<key>, value:<value>}] versus {<key>:<value>, .., <key>:<value>}?
[14:18:36] <cheeser> you're asking how to model your data when we know nothing about your data or how it will be used.
[14:19:06] <remonvv> There is that.
[14:19:27] <remonvv> Read up on MongoDB (and on Java if needed), especially best practice blogs/articles.
[14:19:44] <DestinyAwaits> cheeser: I might be wrong here but it's a general question I think to me as a starter.. on what basis one must choose what DS to use
[14:19:47] <DestinyAwaits> :)
[14:19:52] <DestinyAwaits> I may be wrong as I said
[14:20:04] <remonvv> You base it on research fine sir ;)
[14:20:32] <DestinyAwaits> hmm
[14:20:38] <remonvv> Get a good grasp of the problem you're trying to solve, determine what operations are common and which are rare, find persistence solutions that do what is common well, and then try it.
[14:20:47] <cheeser> a list is used to store scalar elements of like kind (usually)
[14:21:06] <cheeser> a subdocument is grouping related elements together in a single unit.
[14:21:47] <DestinyAwaits> cheeser: this answers my part of my question
[14:22:05] <DestinyAwaits> s/my/a
[14:22:18] <remonvv> Think of an embedded document as the MongoDB equivalent of a class field that refers to another instance rather than a scalar/primitive. e.g. class Foo { Bar bar;} would be db.foos{bar:{..}} in JSON/BSON/MongoDB
[14:22:48] <DestinyAwaits> cheeser: Something related to performance when to avoid between the two or something like that.. I hope I explained the question.. :)
[14:23:06] <remonvv> Small footnote that arrays can contain objects as well as scalar elements, and usually do.
[14:23:42] <cheeser> right. arrays can contain subdocuments.
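(A small illustrative document showing the three shapes just mentioned; all names are made up:)

    db.users.insert({
        name: "ada",
        tags: ["admin", "beta"],                  // array of scalars
        address: {city: "London", zip: "EC1A"},   // embedded subdocument
        friends: [                                // array of subdocuments
            {fuId: 1, status: 2},
            {fuId: 2, status: 1}
        ]
    })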
[14:23:43] <Nodex> 42
[14:23:44] <DestinyAwaits> remonvv: Really know I am getting some answers that I was looking for.. :)
[14:24:01] <remonvv> DestinyAwaits: We're carpet bombing you with information, bound to hit something ;)
[14:24:12] <DestinyAwaits> lol
[14:25:18] <DestinyAwaits> Now the second part of my question when to use what? Time Complexities and all ?
[14:25:33] <Nodex> depends on your data and end goal :)
[14:25:34] <remonvv> DestinyAwaits: As for performance, hard to say. Arrays tend to grow in size as the document ages in most use cases which means MongoDB will have to move it around every so often (or very often, depending). Embedded documents by their nature are fairly static in size.
[14:26:02] <remonvv> Another thing is that arrays are a bit tricky to index on. Each array element is indexed separately so for large arrays that may mean you can't really index them practically.
[14:26:12] <remonvv> Fields in sub-documents can be indexed relatively easily
[14:27:27] <remonvv> DestinyAwaits: I think the conclusion you have to reach here is that these sort of questions are very context sensitive. Try to make something, see what problems you run into and try and fix them.
[14:27:34] <DestinyAwaits> remonvv: so if we talk of normal complexity, documents are much easier to work with and maintain.. right?
[14:28:03] <remonvv> DestinyAwaits: It's as easy as you make your schema. And there are some pitfalls. Nested arrays are problematic for example.
[14:30:23] <DestinyAwaits> still around remonvv ?
[14:30:31] <remonvv> For embedding itself a very rough guideline could be: if the relationship with the sub entity is 1:1, embed a document; if it's 1:N where N is small, use an array of objects; if it's 1:N with a big N, use a separate collection. There are a ton of exceptions to the former but that's the rough first guideline.
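(The rough guideline, sketched with hypothetical collections:)

    // 1:1 -> embed a single subdocument
    db.users.insert({_id: 1, profile: {bio: "...", site: "..."}})
    // 1:N, small N -> array of subdocuments on the parent
    db.users.update({_id: 1}, {$push: {phones: {type: "home", num: "555-0100"}}})
    // 1:N, large N -> separate collection referencing the parent
    db.comments.insert({userId: 1, text: "..."})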
[14:30:36] <DestinyAwaits> ah ok
[14:30:43] <remonvv> For another 15 minutes or so. Waiting for french people to arrive.
[14:31:32] <DestinyAwaits> that I guess will be enough
[14:31:33] <DestinyAwaits> :)
[14:31:40] <remonvv> DestinyAwaits: Just have a go. Experiment.
[14:31:40] <Nodex> if 42 != 43 do finish();
[14:31:54] <DestinyAwaits> ah not a problem
[14:32:03] <DestinyAwaits> remonvv: thanks mate :)
[14:32:19] <remonvv> DestinyAwaits: No problem, good luck
[14:32:28] <DestinyAwaits> Nodex: Sounds Good.. ;)
[14:32:39] <DestinyAwaits> remonvv: Thanks.. :)
[14:32:54] <remonvv> Nodex++
[14:32:58] <remonvv> Aw man.
[14:33:06] <remonvv> cheeser: mongobot?
[15:11:33] <teeceepee> I am setting up replication using mongodb.conf
[15:11:38] <teeceepee> its a slave
[15:11:45] <teeceepee> what goes in source
[15:14:26] <pwelch> hey everyone. if I have 3 mongos running on localhost that are part of a replica set, does that mean I can't have another host join that replica set? I got a weird error about it can only be localhost
[15:15:16] <teeceepee> whats the difference with slave and only and slave and source (with master's ip)
[15:15:20] <teeceepee> between*
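(For reference, the legacy options teeceepee is asking about looked roughly like this in mongodb.conf — shown only because it's the question; as noted below, master/slave is deprecated in favor of replica sets:)

    # on the master:
    master = true
    # on the slave; "source" is the master's host:port
    slave = true
    source = masterhost:27017
    # "only" optionally restricts the slave to replicating one database
    only = mydb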
[15:20:09] <Infin1ty> When recovering a new replica set member using another member's data files, do i need to do anything special or just start the mongod instance as normal (same parameters as before)?
[15:20:22] <Infin1ty> no special switch parameter?
[15:43:40] <LoneSoldier728> hey y'all
[15:46:36] <jordiOS> Hello, I wonder if I can use the mongodump and mongoimport utilities on their own (I mean, can the files go on the apache server or do they have to live on the mongodb server), thanks
[15:47:11] <LoneSoldier728> does anyoe here know how to query and update
[15:47:21] <LoneSoldier728> part of an object that is nested in an array
[15:47:48] <teeceepee> I just got bitten by mongodb's 32-bit limit
[15:49:05] <remonvv> teeceepee: I can't decipher your question. Master/slave replication is deprecated. But if you mean primary/secondary then follow the appropriate manuals for setup
[15:49:21] <remonvv> LoneSoldier728: Post your object in a pastie along with what you need to do
[15:49:29] <teeceepee> remonvv point me to manual pls
[15:49:36] <remonvv> google
[15:49:42] <teeceepee> i don't know the diff btw master/slave and primary/secondary
[15:49:51] <teeceepee> google.com is censored at workplace
[15:49:57] <teeceepee> actually blocked
[15:49:58] <LoneSoldier728> http://pastebin.com/KWj9HGp3
[15:50:08] <remonvv> teeceepee: Where in the world are you?
[15:50:12] <teeceepee> China
[15:50:13] <LoneSoldier728> I am trying to update the current object
[15:50:22] <remonvv> teeceepee: I...well....fair enough
[15:50:22] <LoneSoldier728> to change the status to 3 as opposed to 2 or 1
[15:50:29] <teeceepee> :p
[15:50:51] <remonvv> LoneSoldier728: Post the document, not your code ;) What does it look like in the database?
[15:51:30] <remonvv> teeceepee: http://docs.mongodb.org/manual/core/replication-introduction/
[15:51:33] <LoneSoldier728> oh ok
[15:51:41] <remonvv> LoneSoldier728: Sorry, no energy to decipher code ;)
[15:51:58] <teeceepee> remonvv thanks, trying to prevent write on secondary…alright ?
[15:52:14] <weblife> I finished a three part tutorial on how to get going with Node.js / MongoDB on Ubuntu and then deploy it to cloud services. Could I get any interested parties to review it for fixes / ideas on how to improve it? https://github.com/TheMindCompany/mongonode-app/blob/master/tutorial.pdf
[15:52:30] <LoneSoldier728> http://pastebin.com/Fr7S2DL9
[15:52:35] <LoneSoldier728> there remonvv
[15:54:51] <remonvv> and you want to set all status fields to 3 or a specific one based on fuId?
[15:55:03] <LoneSoldier728> specific one
[15:55:05] <LoneSoldier728> based on fuId
[15:55:30] <LoneSoldier728> i know about the $. positional operator but that does not work in mongoose
[15:55:36] <LoneSoldier728> so I need to find a way around that
[15:55:53] <remonvv> Oh, well why didn't you say so.
[15:56:03] <remonvv> Not possible then without read-modify-write
[15:56:15] <remonvv> And people really need to stop using mongoose if that limitation actually exists
[15:56:30] <remonvv> You can't send raw queries over the line somehow?
[15:56:33] <LoneSoldier728> the thing is I tried it and it always gives me errors
[15:56:49] <remonvv> Okay, so it probably is possible but you can't get it to work?
[15:57:15] <LoneSoldier728> I would really like to know
[15:57:24] <LoneSoldier728> how if it is
[15:57:44] <remonvv> It's possible because I just Googled it and I have hits on the topic
[15:58:25] <remonvv> http://diogogmt.wordpress.com/2012/03/23/update-elementmatch-and-the-positional-operator-on-mongodbmongoose/
[15:58:40] <remonvv> And you know how to update if you can use $ I assume?
[15:59:16] <LoneSoldier728> I might be doing it wrong
[15:59:20] <LoneSoldier728> I was thinking like this
[15:59:28] <LoneSoldier728> friendStatus.$.status
[15:59:30] <LoneSoldier728> ?
[15:59:30] <remonvv> The odds of that are extremely high kind sir.
[15:59:45] <remonvv> Looks good.
[16:00:13] <LoneSoldier728> do I have to include a push
[16:00:18] <LoneSoldier728> or something infront of it
[16:00:24] <remonvv> update({"friendStatus.fuId":<YOUR ID>}, {$set:{"friendStatus.$.status":3}})
[16:00:24] <LoneSoldier728> because that could have been the downfall of it
[16:00:51] <remonvv> all the $ does is address a specific array element, you still need an operation to apply on it ($set in this case)
[16:01:04] <LoneSoldier728> ok and also
[16:01:11] <LoneSoldier728> I have to query the user first
[16:01:22] <LoneSoldier728> ok let me try this
[16:01:34] <remonvv> Yes, read $ as "the index of the element you found with my query"
[16:01:36] <LoneSoldier728> thanks
[16:01:50] <remonvv> in your schema the above works, if you query for more than one field in the array elements use $elemMatch
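(Sketch of that $elemMatch variant, matching on two fields of the same array element; myUserId and friendId stand in for real ObjectIds:)

    db.friends.update(
        {userId: myUserId, friendStatus: {$elemMatch: {fuId: friendId, status: 1}}},
        {$set: {"friendStatus.$.status": 3}}  // $ = index of the matched element
    )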
[16:05:03] <LoneSoldier728> it errors me for the period
[16:05:34] <remonvv> Did you quote it?
[16:05:39] <LoneSoldier728> {friendStatus.fuId: mongoose.Types.ObjectId(req.body.friend)}, {$set: {friendStatus.$.status: 2}}
[16:05:47] <LoneSoldier728> in mongoose I have to quote it still
[16:05:48] <remonvv> Quotes are your friend
[16:05:56] <remonvv> I assume so.
[16:06:05] <remonvv> I never use mongoose but it's a safe bet
[16:07:09] <remonvv> cheeser: Is there a Morphia roadmap somewhere/
[16:07:11] <LoneSoldier728> ya lets see
[16:07:15] <LoneSoldier728> no errors now
[16:07:18] <LoneSoldier728> with the quote
[16:07:24] <remonvv> LoneSoldier728: I live in hope.
[16:08:44] <LoneSoldier728> ya it says cannot append to array using string field name fuId
[16:09:01] <remonvv> you're not appending to an array, you're setting a value
[16:09:05] <remonvv> you're doing something wrong
[16:09:28] <LoneSoldier728> Friend.findOneAndUpdate({userId: mongoose.Types.ObjectId(req.signedCookies.userid)},
[16:09:29] <LoneSoldier728> {"friendStatus.fuId": mongoose.Types.ObjectId(req.body.friend)}, {$set: {"friendStatus.$.status": 2}}
[16:09:37] <LoneSoldier728> that is the code?
[16:09:57] <remonvv> What's giving the error? mongoose or mongo?
[16:10:17] <LoneSoldier728> MongoError
[16:10:23] <LoneSoldier728> code 13048
[16:12:37] <remonvv> Let me test this for you.
[16:13:03] <remonvv> By the way, your example data has the same value for fuId for both elements.
[16:13:15] <remonvv> It's not supported to set both of them
[16:13:26] <remonvv> But I assume that's a copy-paste
[16:13:43] <remonvv> Anyway, one second while I go beyond the call of duty here.
[16:13:55] <LoneSoldier728> thank u kind sir
[16:14:04] <ron> remonvv: he called you 'old'.
[16:14:35] <LoneSoldier728> who ill beat em up!
[16:14:52] <remonvv> LoneSoldier728: http://pastebin.com/Wat21TeT
[16:15:09] <remonvv> LoneSoldier728: Works as expected. The problem is somewhere between you and mongodb.
[16:15:15] <remonvv> Likely you ;)
[16:15:20] <remonvv> ron: I like being called old.
[16:15:28] <remonvv> ron: And wise.
[16:15:36] <ron> remonvv: I'm not, because then I'm old too.
[16:15:45] <remonvv> ron: We'll be old together
[16:15:54] <ron> and die at the same time?
[16:16:00] <LoneSoldier728> http://pastebin.com/KWj9HGp3
[16:16:01] <remonvv> No, you can go first.
[16:16:11] <LoneSoldier728> you see the findOneAndUpdate
[16:16:15] <ron> dude, you're taller. it's a known fact dude people die younger.
[16:16:25] <remonvv> LoneSoldier728: $addToSet != $set
[16:16:37] <remonvv> ron: dude people?
[16:16:47] <remonvv> ron: I'm offended on behalf of all the dudes
[16:16:50] <ron> err, older people.
[16:16:59] <remonvv> no, you mean taller people
[16:17:00] <Derick> ron, remonvv : you remind me of the two grumpy old men from the Muppet Show :P
[16:17:12] <remonvv> Derick: Which one am I?
[16:17:13] <ron> Derick: Thanks!!! :}
[16:17:18] <LoneSoldier728> oh i know
[16:17:19] <ron> remonvv: the ugly one
[16:17:21] <LoneSoldier728> i am just saying
[16:17:26] <LoneSoldier728> look at what I am querying
[16:17:38] <LoneSoldier728> before u query the friendstatus.fuid
[16:17:52] <remonvv> LoneSoldier728: Yah, to get a value for $
[16:17:54] <LoneSoldier728> because I need to query the user first, then see
[16:17:57] <remonvv> you can add the user id as well
[16:18:02] <LoneSoldier728> the friendstat fuid
[16:18:03] <remonvv> so add the user
[16:18:05] <LoneSoldier728> then set the status
[16:18:05] <remonvv> you need both
[16:18:32] <remonvv> db.test.findAndModify({userId:<your user>, "friendStatus.fuId":ObjectId("51f8adf8410182f00d000004")}, {$set:{"friendStatus.$.status":3}})
[16:18:39] <remonvv> but that error is coming from $addToSet
[16:33:59] <LoneSoldier728> sorry have to step out but ya
[16:34:03] <LoneSoldier728> I took out addtoset
[16:34:04] <LoneSoldier728> already
[16:34:13] <LoneSoldier728> i was getting that error for this
[16:34:30] <LoneSoldier728> http://pastebin.com/SXK2kM43
[16:35:23] <remonvv> That doesn't look right
[16:35:25] <remonvv> Too many brackets
[16:35:53] <remonvv> http://pastebin.com/e7c57qKR
[16:36:28] <remonvv> Replace with your exact paste
[16:51:12] <NaN> if I have a collection like this > http://pastie.org/private/sjbcr6yjqrfiaskxudhha < should I index the keys from the doc with more keys?
[16:52:57] <remonvv> I assume that's not an actual example. How selective is "a"?
[16:53:27] <remonvv> Meaning, for a typical value of "a" what percentage of documents would match?
[16:54:13] <remonvv> You can index on {a:1}, {a:1, c:1} or {a:1, c:1, e:1} but I suspect the first is the most useful.
[16:54:26] <NaN> it's just an example, not real data, let's suppose "a" is date
[16:54:38] <remonvv> How many documents per unique date?
[16:54:44] <remonvv> That's what it depends on.
[16:55:04] <remonvv> If you give a value for that date and you're already left with a handful of documents at most there's no value in adding other fields to the index usually
[16:55:05] <NaN> remonvv: the thing is that I'm using the "no schema" support so my documents are quite variable
[16:55:12] <NaN> there's no document with the same date
[16:55:21] <remonvv> Then definitely only on "a"
[16:55:30] <remonvv> And flexible schemas are very tricky.
[16:55:38] <remonvv> I would try and avoid that where possible.
[16:55:44] <remonvv> schemaless != do whatever you want
[16:56:18] <NaN> yeah but what about index, I couldn't index everything
[16:56:50] <remonvv> That's the problem with having flexible schemas ;)
[16:56:57] <NaN> :/
[16:56:59] <remonvv> You can't index everything, so you need to make decisions.
[16:57:25] <remonvv> In this case, if "a" would always be a date, would always be part of a query and if there is always at most one document with the same date you only need an index on "a"
[16:57:29] <remonvv> It can't get any faster.
[16:57:54] <NaN> what happens if I don't create an index?
[16:58:04] <remonvv> every query becomes a table scan
[16:58:20] <remonvv> That means it becomes somewhere between slower and unacceptably slow depending on the size of your data set.
[16:59:07] <remonvv> MongoDB would have to walk through the entire collection (assuming you didn't specify a limit) for every query.
[16:59:33] <remonvv> In all but a very few exceptions that isn't an option for production level software.
[17:00:05] <NaN> ok let's say that I index the date key because most of my queries will be about the date, but... what if my queries include more keys with $exists? will the date index be enough?
[17:00:34] <remonvv> Yes
[17:00:51] <remonvv> If your date index already reduces the candidate list to a single document the rest doesn't matter.
[17:00:59] <NaN> great!
[17:01:14] <remonvv> If your queries don't include that date it might be useful to add indexes to other fields.
[17:01:23] <remonvv> But if those fields are variable that's going to be tricky
[17:01:44] <NaN> the date will always be in the query
[17:01:45] <NaN> :D
[17:02:18] <remonvv> then look no further
[17:02:34] <NaN> it will be like a "time line" db, so all the queries are about the date
[17:02:35] <NaN> thanks man :)
[17:02:39] <remonvv> No problem
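(Pulling the thread together — a 2.4-era shell sketch with an illustrative collection:)

    db.timeline.ensureIndex({date: 1})
    db.timeline.find({date: ISODate("2013-07-31"), a: {$exists: true}}).explain()
    // "cursor" : "BtreeCursor date_1" in the explain output means the index
    // was used; "BasicCursor" would mean a full collection scan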
[17:30:27] <rud> i'm running mongod 2.4.3 for a backend app that only opens a few connections to mongod, but makes a lot of requests to it. problem is after a while, mongod stops serving these connections, and all new connections attempts (even via --shell) are failing. i cannot figure if the issue is operating system related (it probably is, my mongod runs in a jailed freebsd-9.1-stable), or a 2.4.3 issue (i haven't seen anything related to this in the change log of 2.4.4, but
[17:30:27] <rud> could be wrong), or something … totally different ..
[17:30:43] <rud> i've gathered info on this pastebin, in case one of you good souls feels like helping :) http://pastebin.com/4dAt1VTu
[17:32:53] <cofeineSunshine> hi
[17:33:46] <cofeineSunshine> .sort('started_at', -1) helps me sort the query properly
[17:34:04] <cofeineSunshine> but I need null values to be first, not last?
[17:34:27] <cofeineSunshine> is there a NULL FIRST option?
[17:34:48] <remonvv> No
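(There's no NULLS FIRST option; a common workaround is sorting on a computed key, e.g. via aggregation — a sketch with illustrative names:)

    db.jobs.aggregate([
        {$project: {
            started_at: 1,
            has_started: {$cond: [
                {$eq: [{$ifNull: ["$started_at", null]}, null]}, 0, 1
            ]}
        }},
        {$sort: {has_started: 1, started_at: -1}}  // null/missing first, then newest
    ])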
[17:46:13] <fogus> hello. I have an existing 1.8 install and I want to set up master/slave replication to a new host. should I run 1.8 on the new host or something newer?
[17:47:04] <remonvv> Anything lower than 2.2 is end-of-life
[17:47:10] <remonvv> So is master/slave replication
[17:48:12] <remonvv> You'll want to upgrade to 2.4.x, switch to replication sets
[17:49:59] <fogus> I do want that.
[17:50:12] <fogus> but I have been required to keep the 1.8 master
[17:50:39] <remonvv> By who?
[17:50:43] <fogus> my boss
[17:51:02] <remonvv> And this is a production system with significant business value I assume.
[17:51:10] <fogus> I did tell him about 1.8 being EOL
[17:51:23] <remonvv> Well that and there's a healthy amount of issues resolved since.
[17:51:23] <fogus> yeah, we have a big prod system that runs against the 1.8
[17:51:31] <remonvv> It's incredibly risky not to migrate.
[17:52:09] <fogus> we do plan to, but it's even more risky (imo) to not have any replication
[17:52:46] <remonvv> I agree. But why not set up a 2.4 replication set, export from the old system, import to the new system and done.
[17:52:53] <remonvv> I assume you're allowed a maintenance window.
[17:53:06] <remonvv> Anyway, I think the newer versions do support master/slave
[17:53:21] <remonvv> But as I mentioned very few people use it these days since it has issues.
[17:56:05] <fogus> yeah, I explained all that
[17:56:18] <fogus> the main concern he has is that the code won't run against 2.x
[17:57:05] <fogus> or rather that it will need to be QAed
[17:58:32] <remonvv> Well, yes. I think the query language and such is the same but it would be hard to argue against QA-ing after a db upgrade.
[18:27:17] <lucasliendo> Hi everyone !
[18:29:35] <remonvv> Hi ;)
[18:29:45] <NaN_> hi lucasliendo
[18:36:04] <lucasliendo> Got a simple question... I'm running a shard (I think it's well configured). I'm trying to restore a DB from scratch and wanted to test the hashed index, so after: db.runCommand({enableSharding: "myNewDB"}) I run: sh.shardCollection("myNewDB.myNewCollection", {field : "hashed"})
[18:36:36] <lucasliendo> then I get : { "ok" : 0, "errmsg" : "shard keys must all be ascending" }
[18:37:00] <lucasliendo> which I think does not make any sense because the collection is empty...
[18:37:54] <lucasliendo> However if I run a getIndexes on the collection I see that the index was created and the type is hashed indeed
[18:38:11] <lucasliendo> so...is this ok ?
[18:38:51] <lucasliendo> thanks in advance !
[18:43:43] <remonvv> Is that literally your shardCollection command or was that an example?
[18:45:11] <remonvv> What db version are you on?
[18:47:03] <remonvv> First rule of support channels: Respond to questions if you want someone to look at your problem ;)
[18:47:32] <remonvv> Anyway, you're almost certainly talking to a mongo version that does not support hashed indexes.
[18:49:39] <lucasliendo> I'm using 2.2.4 version
[18:50:09] <remonvv> hashed indexes are 2.4+
[18:50:23] <lucasliendo> you're right
[18:50:25] <remonvv> 2.2 will look at your shard key value and expect an integer value.
[18:50:30] <remonvv> Hence the error
[18:50:40] <remonvv> It's not a very good error message, admittedly.
[18:50:48] <lucasliendo> I was so sure that was running 2.4+ that didn't check that :P
[18:50:52] <remonvv> ;)
[18:50:55] <remonvv> Happens to the best of us
[18:51:22] <lucasliendo> you're right again, the message is not very descriptive...
[18:51:31] <lucasliendo> thanks a lot for your time !
[18:52:13] <remonvv> Well, it's trying to check if the key value is positive (e.g. {key: 1}) so if the positive test fails it gives that error (assuming someone entered key: -1 or something).
[18:52:18] <remonvv> So yeah, bad error handling.
[18:52:24] <remonvv> Or well bad error message.
[18:52:30] <remonvv> No problem.
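(For the record, the quick check that would have caught this; hashed shard keys and hashed indexes require 2.4+:)

    db.version()  // "2.2.4" here — too old for {field: "hashed"}
    sh.shardCollection("myNewDB.myNewCollection", {field: "hashed"})  // 2.4+ only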
[19:14:03] <bjorn248> hey, I stopped by here the other day with a question, specifically, is it possible to use mongos as a dispatcher for a non-sharded replica set. I want to point my application at the mongos and not the primary of my replica set so in case of failover, mongos will handle the routing automatically. However, I only have a replica set currently (one primary, two secondaries). I don't have any configdbs for mongos to connect to. Last time I was here a fe
[19:16:40] <remonvv> bjorn248: If you can't have config servers you're better off with connecting directly. An appropriate read preference will deal with failover for you.
[19:17:32] <remonvv> bjorn248: Or more accurately, the repset will handle failover. Certain read preferences will make that minimally noticeable on the client end of things.
[19:18:01] <bjorn248> remonvv: let's say I could have config servers, can I have config servers without sharding?
[19:18:46] <remonvv> You can have config servers and a single shard and not have sharded collections
[19:19:11] <remonvv> There's a small distinction.
[19:19:48] <remonvv> You set up a replica set as the only shard, and don't shard any collections.
[19:20:14] <remonvv> The nice thing about that setup is that if at any point you do decide to add shards and shard collections you can without downtime.
[19:20:50] <bjorn248> I see, so basically one shard that itself is a replica set
[19:21:08] <remonvv> Right. Most shards are in practice.
[19:21:19] <remonvv> And config servers can be small machines so that shouldn't add too much cost.
[19:21:30] <remonvv> Have 1 or 3, the latter being strongly preferred.
[19:21:31] <bjorn248> yeah I have machines to spare, so that's not a problem
[19:22:11] <remonvv> Such luxury ;)
[19:22:22] <remonvv> So yeah, 3 config, 1 mongod, mongos local to your app server(s)
[19:22:27] <remonvv> sorry, 3 mongod
[19:22:29] <remonvv> 1 repset
[19:23:16] <bjorn248> yeah I mean that's what I have now...3 mongod, 1 replica set
[19:23:40] <remonvv> Yeah, so add 3 config servers, add mongos processes to your app servers and off you go.
[19:23:49] <remonvv> Setup is a bit different but well documented.
[19:24:32] <bjorn248> and so if I have 1 shard, nothing is actually sharded as long as I don't shard the collections? the shard is basically just a layer around the replica set that allows mongos to handle automatic failover?
[19:24:46] <remonvv> Right.
[19:24:50] <bjorn248> alright
[19:24:51] <bjorn248> cool
[19:24:53] <remonvv> Think of it as sharding enabled.
[19:25:09] <remonvv> Sharding itself requires a few additional steps that are not relevant to you.
[19:25:22] <remonvv> But note that automatic failover happens without mongos as well.
[19:25:33] <remonvv> The difference there isn't huge.
[19:26:07] <remonvv> You'll still get errors (one per connection or many, depending on read pref) that you have to deal with during election phases.
[19:50:37] <ThePrimeMedian> Hi all.. I am looking at the mongodb docs, under http://docs.mongodb.org/manual/use-cases/storing-comments/ and what is this code? str_path = '.'.join('replies.%d' % part for part in path) am I supposed to loop through or ?
[19:51:12] <ThePrimeMedian> is that python?
[19:51:30] <ThePrimeMedian> can someone change that line to reg javascript?
[19:52:56] <bjorn248> remonvv: well automatic failover doesn't happen because without mongos where do I point my application if I don't have a dispatcher in between the database and the application? The application will try to connect to a primary that is down, and just time out, how does it know about the new primary?
[19:53:43] <bjorn248> sure there is a new primary somewhere, but my application doesn't know about it
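(This is what the direct, replica-set-aware connection remonvv means looks like: hand the driver a seed list plus the set name and it re-discovers the new primary after an election — host names and set name are illustrative:)

    mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0&readPreference=primaryPreferred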
[19:55:07] <LoneSoldier728> hey
[19:55:11] <LoneSoldier728> remonv u here?
[19:56:05] <remonvv> Yep
[19:56:31] <LoneSoldier728> hey so ya
[19:56:36] <LoneSoldier728> not getting any errors
[19:56:42] <LoneSoldier728> but the status doesnt seem to change either
[19:56:58] <LoneSoldier728> nvm
[19:56:59] <LoneSoldier728> lol
[19:57:02] <LoneSoldier728> wrong status set
[19:57:05] <LoneSoldier728> i think it worked
[19:57:10] <ThePrimeMedian> http://pastebin.com/kLn67f9T can someone change that from python to javascript (i found this in the mongo docs)
[19:57:11] <remonvv> \o/
[19:57:15] <LoneSoldier728> awesome
[19:57:25] <LoneSoldier728> ! i think it is all good now!
[19:57:25] <pmxbot> i think it is all good now! died 20 years ago, Death just hasn't built up the courage to tell him yet.
[19:57:51] <remonvv> ThePrimeMedian: Try #javascript
[19:58:20] <LoneSoldier728> if I want to set a second status in there do i do it like so? {$set: {'friendStatus.$.status': 3}, {second: 2}
[19:58:26] <LoneSoldier728> or right after 3 put a comma
[19:58:32] <LoneSoldier728> and add the field and value there
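(Unanswered in the log; the second guess is right — additional fields go inside the same $set document, comma-separated:)

    db.friends.update(query,
        {$set: {"friendStatus.$.status": 3, "friendStatus.$.second": 2}})
    // "query" and "second" are placeholders from the question above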
[20:08:01] <ThePrimeMedian> remonvv, i tried #javascript and i tried #python -- no answer in javascript and people in python are just rude and basically told me to f* off
[20:08:42] <ThePrimeMedian> all i want to know is how to convert this simple statement from python to reg. javascript syntax as I found it in mongodb's docs
[20:08:42] <ThePrimeMedian> str_path = '.'.join('replies.%d' % part for part in path)
[20:15:44] <cheeser> remonvv: there's the issue tracker...
[20:16:24] <remonvv> cheeser: hm?
[20:16:32] <cheeser> the morphia roadmap
[20:16:37] <remonvv> oh right, forgot i asked ;)
[20:16:49] <cheeser> sorry. i'm at a client site with a tiny window that's about 15 lines tall.
[20:16:50] <cheeser> :)
[20:17:01] <leifw> ThePrimeMeridian: maybe this: str_path = ''; for (var part in path) { if (str_path.length > 0) { str_path += '.'; } str_path += 'replies.' + part; }
[20:17:17] <leifw> ThePrimeMedian: I'm not great at js
[20:17:34] <leifw> wow eyes good job on that one
[20:17:40] <remonvv> cheeser: Sounds comfy ;) I've been playing around with the idea to open source a sanitized version of our mongo stuff but I was curious if it might not be more interesting to contribute to morphia instead.
[20:17:51] <ThePrimeMedian> leifw: thanks.. I just didn't know if % part for part in path was a comment or a loop
[20:17:55] <remonvv> cheeser: Hence being curious about a roadmap, long term planning, etc.
[20:18:05] <cheeser> PM?
[20:18:11] <remonvv> cheeser: sure
[20:18:26] <remonvv> leifw: You're the TokuMX guy no?
[20:18:56] <leifw> ThePrimeMedian: ('replies.%d' % part) is string interpolation, similar to ('replies.' + part) in js
[20:19:27] <ThePrimeMedian> cheers leifw
[20:19:41] <LoneSoldier728> can i ask a loop question here?
[20:19:51] <leifw> ThePrimeMedian: (f(x) for x in list_of_xs) is python for: l = []; for (var x in list_of_xs) { l += [f(x)]; }
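(A more direct JS equivalent of the Python one-liner, for the record:)

    // str_path = '.'.join('replies.%d' % part for part in path)
    var str_path = path.map(function (part) { return 'replies.' + part; }).join('.');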
[20:19:59] <leifw> remonvv: one of them, yeah
[20:20:19] <LoneSoldier728> i am looping thru an array of results, and if the user matches element i in the loop then I want to return true otherwise it is false
[20:20:40] <LoneSoldier728> however, the first value it hits returns false and it doesn't loop thru the rest to check for true
[20:20:54] <ThePrimeMedian> start with false and then loop..
[20:21:06] <ThePrimeMedian> if value true, mark variable to trye
[20:21:08] <ThePrimeMedian> true
[20:21:40] <LoneSoldier728> the problem is, if it is false or true
[20:21:43] <LoneSoldier728> it renders something
[20:22:10] <LoneSoldier728> so i guess should i create a variable to store the answer and then return it at the end of the loop and render based on that
[20:22:16] <ThePrimeMedian> right
[20:22:20] <LoneSoldier728> k
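(The pattern ThePrimeMedian is describing, sketched in JS; results, userId, and render() are illustrative stand-ins:)

    var found = false;
    for (var i = 0; i < results.length; i++) {
        if (results[i].userId === userId) {  // the user matches this element
            found = true;
            break;
        }
    }
    render(found);  // render once, based on the accumulated answer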
[20:26:43] <EmoSpice> So, I'm attempting to find some information on using ming (the python schema library) to retrieve only certain fields out of a document. if I do something like "File.m.find({'type': 'android'}, {'sha256': 1, '_id': 0})", I end up getting the full document filled out with dummy data. Is there a way to avoid this?
[20:27:01] <remonvv> leifw: Sorry, got distracted. I was curious if TokuMX follows the same consistency contract and so on compared to vanilla. And if not what are the differences. I read something about count() being different, any others?
[20:27:58] <leifw> remonvv: what do you mean by consistency, are you asking about replication and write concern?
[20:29:08] <remonvv> remonvv: Well there are some claims about very large speed improvements. Does that come at any cost? atomicity of single writes, different yielding patterns for multi writes, write->read consistency and so on.
[20:29:13] <remonvv> lol
[20:29:14] <remonvv> err
[20:29:23] <remonvv> I talked to myself there.
[20:29:27] <remonvv> Was aimed at you leifw
[20:29:38] <remonvv> I suppose the question is, performance at the cost of what?
[20:29:40] <leifw> the downsides of tokumx are: no 2d/2dsphere/geo indexes, no 2.4 support yet (but we have backported some things like hashed shard keys), and since count() needs to check MVCC information for each doc, it's slower, because the implementation is more like find().itcount() in that we have to check each document (but it's faster than vanilla's find().itcount() because our range queries are faster)
[20:30:00] <leifw> our atomicity and concurrency guarantees are stronger than in vanilla mongodb
[20:30:09] <leifw> we use MVCC snapshots and transactions under the hood
[20:30:18] <remonvv> ah so your indexing approach doesn't support the interval optimization?
[20:30:41] <remonvv> leifw: Interesting.
[20:31:02] <leifw> so if you do db.foo.insert([{a:1}, {a:2}, {a:3}]), either they all go in or none of them go in. in vanilla, if it fails halfway through, then the ones in the front get committed and the ones at and after the failure don't make it in
[20:31:23] <leifw> yes, the interval optimization is invalid due to concurrent writes
[20:31:50] <leifw> for example, if you start a count() operation, then someone else does an insert before you're done, you don't want the count() to include that new document because it came after your operation started
[20:31:54] <remonvv> okay so behaviour is different (but arguably better)
[20:31:59] <leifw> yep
[20:32:21] <leifw> you don't get weird behavior like long queries that see duplicate documents because the document got moved by an update
[20:32:34] <leifw> this all falls out of transactions and MVCC
[20:32:50] <leifw> you also get concurrent writers on the same collection
[20:32:58] <remonvv> I see, and you support all write concerns?
[20:33:00] <leifw> (i.e. document-level locking)
[20:33:01] <leifw> yep
[20:34:07] <remonvv> If you put on your objective hat, what are the downsides other than trailing behind official mongodb releases.
[20:34:09] <leifw> write concern is slightly different ATM but we will probably fix it soon: currently, when getLastError returns and says it's on, say, 3 replicas, that means that the operation got copied into that many replicas' oplogs, but it doesn't mean that a subsequent read will necessarily see the result of the operation in the collections
[20:34:29] <leifw> i.e. there is a small race between when the data is safe on the secondary and when it is *queryable* on the secondary
[20:34:37] <remonvv> subsequent reads to secondaries?
[20:34:39] <leifw> we plan to fix this but we are considering UI options right now
[20:34:40] <leifw> yep
[20:34:49] <remonvv> that's expected though, secondary reads are eventually consistent.
[20:35:10] <remonvv> as in, that's how vanilla behaves afaik
[20:35:13] <leifw> so if you do getLastError({w:'all'}) and then fire off a slaveOk read immediately when it returns, you *might* get old data when you wouldn't in vanilla
[20:35:27] <leifw> it's a tiny race though, and most applications are resilient to it anyway
[20:35:31] <leifw> but it's worth mentioning
[20:35:41] <remonvv> vanilla waits for w > 1 writes until the oplog reaches the write rather than the write being committed to the oplog?
[20:35:44] <remonvv> That's news.
[20:35:48] <remonvv> To me, anyway
[20:36:15] <leifw> when vanilla says "yep, that's on N secondaries" it means that you can immediately read that data from the collections on those secondaries
[20:36:16] <remonvv> That does seem like a bit of a detail.
[20:36:20] <leifw> for us it's just a guarantee that it's durable
[20:37:04] <remonvv> disk and mem usage similar/less/more?
[20:37:38] <leifw> <hat class="objective">if you need geo/2d/2dsphere/text indexes or the authorization stuff in 2.4, please come talk to us and we can see if it makes financial sense for us to backport those features.
[20:37:43] <leifw> otherwise, if I wanted the mongodb data model and I didn't work here, I would start with tokumx and not even consider mongodb (seriously, not just being a marketer here)
[20:38:12] <leifw> as far as future features, we plan to keep up with them, obviously no guarantees can be made, but we are pretty confident that we own the code now
[20:38:17] <leifw> </hat>
[20:38:56] <leifw> disk usage is a lot smaller, because we compress aggressively (fractal trees have large blocks which is good for standard compressors like zlib), and we also don't fragment the way vanilla mongodb does, so you don't need compact() or reIndex()
[20:39:16] <leifw> compression is particularly good in the mongodb world (as compared to mysql), because of all those repeated field names
[20:39:37] <remonvv> true but it does eat cpu cycles.
[20:39:55] <leifw> <hat class="objective">also we haven't tested MMS and we're pretty sure MMS backup won't work</hat>
[20:39:58] <leifw> it does
[20:40:16] <remonvv> and supported platforms is somewhat limited i noticed.
[20:40:19] <leifw> memory usage is maybe a little less because we aren't holding the fragmentation in memory (we don't use mmap)
[20:40:24] <leifw> cpu usage is higher for sure
[20:40:31] <leifw> yep, 64 bit linux only
[20:40:39] <remonvv> covers 90%, but still
[20:40:43] <leifw> we have built on osx and freebsd but we don't test it much
[20:41:02] <leifw> if you really need it ported to something, obviously come talk to us, we can do it we just haven't yet
[20:41:20] <remonvv> no we run 64-bit linux
[20:41:30] <leifw> ok
[20:42:07] <remonvv> I'll give it a go on one of our test clusters
[20:42:33] <leifw> cool, let us know how it goes
[20:42:56] <remonvv> will do
[20:42:58] <leifw> I'm always in #tokutek (but very often afk), and you can email our support alias or the google group
[20:43:58] <remonvv> alright
[22:38:17] <Ontological> I am having trouble $pull'ing the subdocument to which my query match belongs. Any suggestions? http://pastie.org/private/m5s6e2hwvdzzopuw1exmfa
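(The pastie is private, so the document shape is unknown; generically, $pull removes every array element matching a query document — a hedged sketch with made-up names:)

    // removes each element of "items" whose status is "done"
    db.coll.update({_id: docId}, {$pull: {items: {status: "done"}}})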
[22:41:31] <minerale> skot: One more morphia question, how do I avoid having morphia insert a "className": "com.foo.bar" field when using toDBObject() ?
[23:37:17] <zachrab> how can I return a query with newest to oldest document sorting?
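(One way, assuming auto-generated _ids: an ObjectId embeds its creation timestamp, so sorting on _id descending gives newest-first:)

    db.coll.find().sort({_id: -1})
    // or, with an explicit timestamp field:
    db.coll.find().sort({created_at: -1})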