PMXBOT Log file Viewer


#mongodb logs for Thursday the 7th of August, 2014

[00:45:37] <tjmehta> hi all I am having trouble using the $elemMatch operator
[00:45:55] <tjmehta> It is continually returning all the items in the subdocument array
[01:00:27] <tejas-manohar> has anyone here worked with mongoose?
[01:46:43] <joannac> tjmehta: could you share your query?
[01:48:28] <tjmehta> i figured it out
[01:48:38] <tjmehta> didn't realize elemMatch wasn't part of the query
[01:48:43] <tjmehta> but actually part of the fields
[01:48:52] <tjmehta> thanks joannac
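What tjmehta ran into is that `$elemMatch` exists both as a query operator and as a projection operator, with different results. A sketch with invented collection/field names:

```javascript
// As part of the query: matches documents whose array has at least one
// qualifying element, but still returns the whole array.
db.orders.find({ items: { $elemMatch: { sku: "abc", qty: { $gt: 2 } } } })

// As part of the fields (projection): returns only the first matching
// array element, which is what tjmehta was after.
db.orders.find({}, { items: { $elemMatch: { sku: "abc" } } })
```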
[04:42:27] <chungbd> hi all, i have 3 config servers for my sharding. I don't know whether to back up all 3 servers or only one. Please help me
[05:15:26] <joannac> chungbd: just backing up one is fine
[05:16:52] <chungbd> joannac, thanks :)
[06:20:55] <boo1ean> Hi! Is it possible to make lockless reads in mongo?
[08:42:08] <Derick> mornin'
[08:47:10] <rspijker> good morning Derick
[11:15:22] <Ampere-> How to copy / paste in mongodb shell?
[11:43:45] <joannac> um
[11:44:27] <joannac> whatever terminal you use has copy and paste functionality
[11:49:11] <slampis> Hello, I am having trouble understanding the status of this feature: https://jira.mongodb.org/browse/SERVER-9395. Is it already supported in mongodb 2.6?
[11:53:44] <rspijker> slampis: yes, should be
[11:54:04] <rspijker> 2.5.x are dev/test releases. 2.6.x is stable and should include all 2.5.x changes
[11:55:00] <slampis> rspijker: I’ve been trying to use it but it looks like the option is ignored. Also in the official documentation there is still no reference to $minDistance : http://docs.mongodb.org/manual/reference/operator/query/maxDistance/
[11:58:55] <rspijker> slampis: it not being in the docs is apparently known: https://jira.mongodb.org/browse/DOCS-1702
[11:59:11] <rspijker> there’s also a link there to an example that actually uses it, so have a look at that and see if you can get it working? :)
[12:02:15] <slampis> rspijker: It’s where I found out about the $minDistance option, see my comment in the disqus section :). Anyway I’ll check again the syntax.. Maybe I am doing something wrong.
[12:08:43] <rspijker> slampis: ah, ok. I’ve never used it myself, so can’t really say anything about it in terms of existence. If you really can't get it to work, mongodb is open source, so you could just check out the github repo for 2.6 and see if the fix made it in.
[12:14:21] <slampis> rspijker: OK, thanks
[13:35:42] <jaccob> Hi, I have a baffling situation. My query for distinct values seems to be giving back data. Please see: http://play.golang.org/p/6Bz5AjOIDz
[13:38:10] <rspijker> jaccob: find in the shell doesn’t necessarily show all results
[13:38:28] <rspijker> it returns a cursor and by default it will show the first x values
[13:39:49] <rspijker> do db.trips.count({"routeid":2792});
[13:40:02] <rspijker> if it’s more than 20, you're just being tricked by the display
[13:40:24] <rspijker> the default mongo shell should normally tell you about this btw...
[13:41:10] <jaccob> rspijker, haha, yes of course
[13:41:33] <jaccob> rspijker, thx
[13:41:38] <rspijker> np
[13:58:34] <jekle> Hi all. I am trying to find a schema to query products from categories. each product has a specified position per category. After some research I found a stack overflow thread with a good looking answer. however, the proposed schema violates the rule of not having values as array keys. Now I am unsure: can the "positions" array be indexed properly?
[13:58:53] <jekle> oh: http://stackoverflow.com/a/18993578
[13:59:50] <Derick> jekle: what's the position for?
[14:00:36] <jekle> Derick: it is the position of the product within the given category
[14:01:17] <Derick> i don't understand what that means
[14:02:09] <jekle> hm, I have a hard time explaining this use case well, my english isn't great :/
[14:02:18] <Derick> jekle: can you show with an example?
[14:03:09] <jekle> Derick: yeah. I will try to come up with something more specific.
[14:04:35] <rspijker> are your categories sorted?
[14:04:39] <rspijker> if so, why?
[14:07:27] <remonvv> \o
[14:09:12] <rspijker> o/
[14:09:29] <jekle> rspijker: the categories aren't sorted, or rather that's not the problem here. one product document has a relationship with many category documents. I want to query products from category x sorted by position. all related products could be related to another category with different positioning.
[14:09:38] <jekle> sorry guys :D
[14:10:05] <rspijker> so, the contents of a category are sorted
[14:10:16] <rspijker> as in, the products are ordered inside of a category
[14:11:09] <jekle> rspijker: exactly, yes. that's how I should have written it in the first place
[14:12:11] <jekle> I guess the proposed schema would work for that but I am not sure if its the right way
[14:13:15] <ssarah_> http://pastiebin.com/53c6812d60c4c <- anyone can tell me the reason for the error?
[14:14:28] <pta9000> I got performance problems due to mongodb being too passive about acquiring memory. it seems not to touch inactive memory at all, but will only take in memory when it is actually free. otherwise it will just linger at a bare minimum (say 120m). any ideas?
[14:14:42] <remonvv> ssarah_: Incompatible locale on your system
[14:15:04] <ssarah_> hmmm? pt, en, that stuff? how do i fix it?
[14:15:33] <pta9000> mongod running on debian, bare-metal with 32g-48g
[14:15:41] <remonvv> ssarah_: http://stackoverflow.com/questions/19668570/cannot-start-mongodb-stdexception-localefacet-s-create-c-locale-name-not
[14:15:43] <remonvv> First google hit
[14:17:30] <ssarah_> ty, google, ty remonvv. lemme read dat
[14:19:03] <rspijker> if it’s a one time thing, just go LC_ALL=C mongod … ssarah_
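Both fixes mentioned here, spelled out as commands (not tested here; the permanent fix assumes a Debian/Ubuntu box, per the linked Stack Overflow answer):

```shell
# one-off workaround: start mongod under the C locale
LC_ALL=C mongod

# likely permanent fix: generate the missing locale, then restart mongod
sudo locale-gen en_US.UTF-8    # or: sudo dpkg-reconfigure locales
```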
[14:21:18] <rspijker> jekle: you could keep your original schema (the array of documents) and use the aggregation framework to do the sorting
[14:21:32] <rspijker> depending on the size of your collections, that might be fine
[14:21:57] <rspijker> or was it specifically for the purpose of being able to index it?
[14:22:04] <rspijker> suggesting a rather large collection...
[14:25:06] <jekle> rspijker: our collections are kinda small sized. just a few thousand documents. I just want to make it right. indexable, yes. to be able to scale up.
[14:25:34] <rspijker> well… what part do you actually need to index? the search or the sort?
[14:25:55] <rspijker> if searching will always reduce the size down to a manageable amount of docs, then sorting that without an index might not really be an issue
[14:26:40] <jekle> rspijker: well I admit I don't know the answer to those questions. I just want our queries to be fast :)
[14:26:49] <rspijker> who doesn’t
[14:26:54] <jekle> ^^
[14:27:42] <rspijker> if the amount of products in any given category is limited, you could even embed them
[14:27:55] <rspijker> you’d have some duplication, but it would be pretty fast… :)
[14:28:52] <remonvv> Don't make me slap you rspijker
[14:28:57] <rspijker> hahaha
[14:29:03] <rspijker> I knew someone was going to bite at that
[14:29:10] <rspijker> had my money on either you or Derick
[14:31:06] <jekle> rspijker: I am not sure if that's the best schema for us because we need to do more "where" querying on the products collection besides the category relation. if they are all embedded documents I bet that becomes harder to do. and duplication doesn't sound fun :/ I am so new to nosql, it drives me crazy but it's challenging and fun :D
[14:31:34] <rspijker> it almost certainly isn’t a good idea
[14:31:50] <remonvv> And we can debate about the "almost"
[14:34:09] <rspijker> there are cases where it could work…
[14:34:14] <rspijker> this isn’t one of them
[14:35:01] <joshua> You can have more than one index, or compound indexes. If your database will have more reads than writes it would be fine having more than one.
[14:35:25] <joshua> But it's best to keep the indexes so they fit in memory for performance.
[14:36:47] <jekle> rspijker: roger. will stay on the current track then. thanks for the talking! back to code now
[14:36:58] <rspijker> have fun!
[14:53:34] <remonvv> People overestimate the number of good reasons there are for embedding collections.
[14:57:24] <joshua> We switched developers on our application and I don't know if they understand that. They seem to create a new database for everything
[15:03:40] <ssarah_> when im starting a config server, mongod --configsvr --dbpath /data/configdb --port 27019, do i need to make a dir for each?
[15:10:58] <rasputnik> ssaraH: running several on the same box? yes, they need their own data
[15:14:19] <ssaraH> aight, ty
[15:14:34] <rspijker> if you’re going to be running them on the same box, why bother with more than 1?
[15:14:34] <ssaraH> but it didnt print out any error when i tried running it twice
[15:14:39] <ssaraH> true true
[15:14:44] <ssaraH> i was just following the tutorial
[15:14:51] <ssaraH> ty guys
[15:22:40] <ssaraH> do i need to run these mongod commands as sudo?
[15:24:43] <rspijker> ssaraH: you shouldn't have to
[15:25:00] <rspijker> make sure the directories are accessible to the user mongo is running as though...
[15:25:11] <rspijker> a lot of tutorials will use something ridiculous like /data
[15:25:20] <ssaraH> yeh, that's what the official is using
[15:26:01] <ssaraH> what dirs should i be using?
[15:26:05] <Derick> it's the default :-/
[15:26:15] <ssaraH> but that's root
[15:26:27] <ssaraH> ah well, sudo then...
[15:26:30] <Derick> :-/ ← unhappy face
[15:27:07] <rspijker> noooooooo
[15:27:29] <dawik> sad panda
[15:27:53] <rspijker> Derick: it’s fine for actual deployments. For tutorials where they have you running it on your own machine, it makes little sense...
[15:28:08] <Derick> i know
[15:28:13] <rspijker> ssaraH: just chmod the dir, or use one in your home
[15:28:21] <rspijker> don’t sudo
[15:28:24] <rspijker> just… don't
[15:28:56] <dawik> it kills kittens, and makes pandas very very sad
[15:29:13] <dawik> also can mess up your system real bad
[15:29:33] <Derick> http://2.bp.blogspot.com/_LNhJJWWBTdo/TDw7GrQVuWI/AAAAAAAAIOM/1OaujBWHk9o/s400/sadpanda.jpg
[15:29:47] <dawik> exactly so
[15:32:33] <ssaraH> i noted that
[15:32:52] <ssaraH> but i'm just messing around
[15:33:29] <rspijker> ssaraH: that really only makes it worse...
[15:33:53] <ssaraH> im so manly...
[16:19:12] <remonvv> You're a man?
[16:21:02] <jaccob> how can I do like a console.log() when I write a script in the shell?
[16:30:07] <ssaraH> guys, when i try to run mongos with the default service mongo already running i get a port already in use error
[16:30:09] <ssaraH> is it normal?
[16:30:19] <ssaraH> (yeh, me guy)
[16:33:22] <themime> i called .update() and it seems to have replaced the whole object with the field I wanted to update - ie update({},{field:val}) instead of updating the object with that new field, it replaced the object with {field:val} - is this normal behavior?
[16:34:01] <themime> i wanted to add a field to that object (really to all objects on the collection but its a new collection with one object so i'm kinda just playing around right now)
[16:35:03] <themime> ssaraH: i'm new to mongo but not to port binding, it sounds like you have the service already running or you are using a custom port thats already in use. what port is it?
[16:36:30] <ssaraH> im also new, it's the default 27017
[16:36:47] <ssaraH> i think mongos tries to use the default mongo port, by its own default
[16:36:55] <themime> yes it does
[16:37:06] <themime> and i reread your post, it almost sounds like you started the service and then tried to start the server?
[16:37:20] <ssaraH> i installed mongo as a previous step
[16:37:26] <ssaraH> it runs on system boot
[16:37:31] <ssaraH> so... yeh i had to stop it
[16:37:45] <themime> so you stopped it, and now when you try to start it manually its still bound?
[16:37:53] <ssaraH> nein nein,now it seems cool
[16:37:57] <themime> by bound i mean it sayS "port already in use"
[16:37:59] <themime> aaah okay cool
[16:40:39] <ssaraH> ty, themime, hope someone helps you with your question up there.
[16:41:48] <themime> ssaraH: i think i just found a stack overflow article for it, i believe i need to use "$set":fieldName as the object param in the update method
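The distinction themime found, sketched with invented names: a bare document as the second argument of update() replaces the matched document, while `$set` modifies it in place:

```javascript
// replaces the first matching document entirely (only _id survives)
db.things.update({}, { status: "new" })

// adds/updates just that field, on every matching document
db.things.update({}, { $set: { status: "new" } }, { multi: true })
```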
[16:53:34] <ssaraH> cool =O
[16:53:49] <ssaraH> everyone else is asleep
[16:53:56] <ssaraH> *crickets*
[17:16:15] <themime> haha i guess so. its early afternoon here
[17:25:36] <ssaraH> 18 here =O
[17:26:49] <stefandxm> you allowed staying up this late?
[17:27:22] <stefandxm> shoot me
[17:33:58] <user123321> Is it difficult to setup a replication?
[17:34:45] <themime> try it and find out :)
[17:35:05] <themime> im new to mongo so i don't know, but i do know your question is vague and even an experienced person here would have difficulty answering
[17:36:06] <user123321> Well then, I'd try to be more specific :)
[17:47:54] <pgentoo-> I need to update all documents in a collection and add a new field ("utc2") which is a function of another field ("utc"). Basically, "utc" is an ISODate() with full resolution, where i want utc2 to have everything more precise than the hour truncated. Any suggestions on how to do this efficiently? I'm working with a collection having around 500M records.
[17:53:22] <pgentoo-> i was just thinking of doing a foreach on the set to update, and calling collection.update() for each record, and then defining the new field based on the other one, but not sure if this is the most efficient approach or if there is some better way to go about it. Ideas appreciated. :)
[17:54:51] <themime> pgentoo-: im new to mongo but not to development. id say unless efficiency is an actual concern, do what is easiest and makes the most sense to you. mongo may have some fancy way to handle it buuut it does seem a little complicated haha
[17:55:09] <themime> and what gets the job done :)
[17:59:13] <pgentoo-> yeah, that's the approach i was going to go with unless someone chimed in with some magic. :)
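The forEach approach pgentoo- settles on might look like this (a sketch, not run against a live cluster; "utc"/"utc2" are the field names from the question, the collection name is a placeholder, and the bulk loop only runs where a mongo shell `db` exists):

```javascript
// Truncate a date to the hour, dropping minutes/seconds/milliseconds (UTC).
function truncateToHour(d) {
  var t = new Date(d.getTime());
  t.setUTCMinutes(0, 0, 0); // zero minutes, seconds, and milliseconds
  return t;
}

if (typeof db !== "undefined") {
  // Only touch documents that don't have utc2 yet, so the job can be resumed.
  db.events.find({ utc2: { $exists: false } }).forEach(function (doc) {
    db.events.update({ _id: doc._id },
                     { $set: { utc2: truncateToHour(doc.utc) } });
  });
}
```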
[18:14:59] <ssaraH> http://pastiebin.com/53c6812d60c4c <- does this look good? i still have to add arbiters i think
[18:31:14] <ssaraH> can i add the arbiter directly to that file?
[18:31:22] <ssaraH> or am i just doing crazy shit all the way?
[18:33:03] <ssaraH> rs.initiate( rsconf ) <- this the way im going to use it
[18:33:09] <ssaraH> (intend to)
[18:41:29] <jaccob> I made a collection called 3to5, and when I did >show collections I see it there but when I try db.3to5.find() I get SyntaxError: missing ; before statement (shell):1
[18:44:13] <jaccob> Nevermind: db["3to5"].find()
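The bracket workaround jaccob found is equivalent to `db.getCollection()`, which exists for exactly this case of collection names that aren't valid JS identifiers:

```javascript
db.getCollection("3to5").find()
```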
[18:57:52] <mrgcohen> hey
[18:58:50] <mrgcohen> i have a mongodb related question about moving or rolling averages... anyone i can bug?
[18:59:12] <ranman> mrgcohen ask yo question
[18:59:19] <mrgcohen> cool
[18:59:30] <ranman> no promises on an answer though :/
[18:59:54] <mrgcohen> i'm trying to create a moving average plot of time-series data
[19:00:08] <mrgcohen> is there an easy way to calculate a moving average for a collection of time-series data
[19:00:36] <mrgcohen> for instance if you had 50 years of daily data points
[19:00:57] <mrgcohen> how could you create a 3 month moving average from that with mongodb
[19:01:11] <mrgcohen> definitely can't do it with aggregation framework since it would require multiple docs at once
[19:01:17] <mrgcohen> can you do it with map-reduce
[19:01:44] <mrgcohen> or would you have to pull down a chunk of data
[19:02:29] <mrgcohen> create the avg per interval in ruby (or node, or python etc) and push that onto an array, and then continue pulling chunks until you're done
[19:02:36] <mrgcohen> there must be a better way to do this
[19:02:44] <mrgcohen> it makes me sad ;(
[19:02:58] <mrgcohen> any ideas?
[19:03:03] <mrgcohen> does that make any sense?
[19:03:10] <mrgcohen> hopefully it's not too much of a ramble
[19:04:27] <mrgcohen> i created a stack exchange question but still was hoping for a better solution
[19:04:28] <mrgcohen> anywho
[19:04:29] <mrgcohen> http://stackoverflow.com/questions/25151042/moving-averages-with-mongodbs-aggregation-framework
[19:04:42] <mrgcohen> @ranman any ideas :)
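For the chunk-and-average fallback mrgcohen describes, the per-window arithmetic in application code is straightforward. A sketch using a window measured in number of points rather than calendar months (a simplification):

```javascript
// Simple moving average over a sliding window of `window` points.
// Returns one average per fully-filled window.
function movingAverage(values, window) {
  var out = [];
  var sum = 0;
  for (var i = 0; i < values.length; i++) {
    sum += values[i];
    if (i >= window) sum -= values[i - window]; // drop the value that left the window
    if (i >= window - 1) out.push(sum / window);
  }
  return out;
}
```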
[19:38:52] <rasputnik> user123321: replica sets are pretty straightforward in mongo, is that what you mean?
[19:45:20] <user123321> rasputnik, I would like to have a backup DB server in case the main DB server goes down. They're accessed by at least 2 web servers which are load balanced. I'm wondering what my options are. For example, load balancing mongo db? Or, just making the backup become active etc.
[19:45:59] <rasputnik> user123321: a replica set might be worth a look. only one takes writes but you can read from the secondaries. get 3.
[19:46:43] <rasputnik> working out pretty well for us when it comes to availability during tuning/patching etc. too
[19:47:41] <user123321> rasputnik, Cool. Is a replica automatically synchronized?
[19:47:45] <user123321> realtime?
[19:48:13] <rasputnik> user123321: nothing is realtime. but typically sub second lag.
[19:48:37] <rasputnik> user123321: go read over: http://docs.mongodb.org/manual/core/replication-introduction/
[19:49:13] <user123321> Cool. So am I supposed to use a thing like Keepalived to automatically make a replica DB to receive writes if the main DB goes down?
[19:49:23] <user123321> thanks
[19:49:48] <kali> user123321: the mongod client takes care of that
[19:50:09] <user123321> oh nice
[20:16:11] <pgentoo-> Ok, per my previous question, i ended up with just a foreach over all the documents in the collection, but at the current rate of around 1500/s, it'll take me about 4 days. :(
[20:29:30] <pgentoo-> Ok, running locally on the primary puts me more at 24hrs, which is acceptable i suppose, but still a long time. :(
[20:36:53] <umquant> I have a schema that has multiple embedded subdocuments. When I do a slice operation on one of the arrays it returns the slice in addition to all the other fields.
[20:37:05] <umquant> https://gist.github.com/anonymous/32e5f778d464c80034d9
[20:37:49] <umquant> Is there any way to return only the slice results besides setting each projection I don't want to zero?
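One commonly suggested answer (field names invented): `$slice` by itself doesn't switch the projection into inclusion mode, but adding any inclusion does, so only the included fields plus the sliced array come back:

```javascript
// Without "name: 1" every other field would still be returned alongside
// the sliced array.
db.devices.find({}, { _id: 0, name: 1, readings: { $slice: -5 } })
```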
[20:40:05] <Antiarc> Hey folks. Does anyone know what the status of the mongo-ruby-driver is, in terms of how close the 2.x is to ready for release?
[20:59:22] <DubLo7> Forgive me for this question - my boss is … questionable sanity. Is it possible to send data to mongo as an array and have mongo run a query on that array instead of against the stored records? Just do its thing against the data sent in the same request.
[21:00:31] <the-erm> I'm kinda lost, how would you do the equivalent of: SELECT col FROM table WHERE col1 != col2; in mongo?
[21:03:27] <the-erm> DubLo7: Sounds like to me you're trying to do a subquery, and I'm not sure if it's even possible in mongo.
[21:04:26] <the-erm> the-erm: look at the $where command
[21:04:28] <the-erm> thanks erm.
[21:06:51] <DubLo7> the-erm: More like sending the data to mongo, NOT saving it, yet somehow magically having mongo run a query against an arbitrary array of objects as an in-memory-only collection.
[21:07:01] <DubLo7> I don't think such a thing is possible
[21:07:44] <DubLo7> I think it would be quicker to just run the query locally, but the idea is to have mongo queries run against arrays of stuff without going through the motions of saving it.
[21:13:47] <DubLo7> OK, clarification - instead of the query parameter, we provide an array of already computed query results as part of the mapReduce command.
[21:24:36] <DubLo7> Q2 - can emit value be an object?
[23:25:13] <rex> hi all
[23:25:35] <EmmEight> Hello
[23:25:55] <rex> let's say I want to add a field called lc_uname to all documents , which already have a field called uname, which usually isn't already lowercase
[23:26:32] <rex> db.users.update({},{$set:{lc_uname:doc.uname}) ? or something like that?
[23:26:33] <EmmEight> why would you store that? It seems like if you are just trying to get uname, can't you use whatever language you are using to convert to lowercase?
[23:26:57] <rex> for indexing purposes. im making my own case-ignoring index.
[23:26:58] <EmmEight> Seems like data replication IMO
[23:27:52] <rex> so, you search for user by lowercasing your search string first, then the db is indexed by lc_uname
[23:28:18] <rex> if I don't do that, mongo will do a regex check or some kind of string op without using the index. unless you know another way...
[23:28:31] <EmmEight> Make an index on the field as it is, query on it with a mongo regex that has start and end anchors
[23:29:08] <EmmEight> http://docs.mongodb.org/manual/reference/operator/query/regex/
[23:29:19] <joannac> rex: text index?
[23:29:50] <rex> so, when I do the query, I say db.users.find({uname : $regex{ ... }}) and the db can still use the index?
[23:29:58] <rex> i guess i should read from that link now
[23:30:58] <rex> joannac: .. just a normal index. it's a switch, right? and it basically sorts the entries, so it can find things really fast...
[23:32:13] <joannac> rex: http://docs.mongodb.org/manual/core/index-text/
[23:32:20] <joannac> (that may be overkill for you though)
[23:33:18] <rex> yah that sounds like overkill. seriously, im trying to let users have their own personal flavor in their uname with caps and stuff, but the essential identity of the user is without any caps
[23:33:46] <rex> since unames are really small, i dont see the big deal in adding a field.
[23:35:10] <joannac> rex: test and see what you get
[23:36:01] <rex> actually, the last 3 paragraphs in http://docs.mongodb.org/manual/reference/operator/query/regex/ have convinced me that it's ok to just use $regex if I use a prefix expression
[23:36:08] <joannac> I think the regex query will almost certainly be slower
[23:36:08] <rex> with an index
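The distinction rex is relying on, sketched: a left-anchored, case-sensitive regex is a "prefix expression" and can use an index on the field; a case-insensitive one cannot, which is why the extra lowercase field wins:

```javascript
db.users.find({ uname: /^joh/ })      // prefix expression: can use the index
db.users.find({ uname: /^joh/i })     // case-insensitive: can't use the index efficiently
db.users.find({ lc_uname: "john" })   // rex's plan: exact match on an indexed field
```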
[23:36:21] <rex> yah.. hm. you're right. itll be slower, but itll save space
[23:36:36] <rex> but , like i said, these are just unames...
[23:36:42] <joannac> right. so it's a balance :)
[23:36:50] <rex> ok extra field, im sticking to it.
[23:37:07] <rex> i will be doing this query often, and i dont want to have to use a regex all the time
[23:37:54] <rex> now, im back to "how do you add that field to all users" ... is there a way to refer to field of the same doc?
[23:38:02] <rex> db.users.update({},{$set:{lc_uname:doc.uname}) ? or something like that?
[23:38:06] <joannac> db.coll.find().forEach(...)
[23:39:22] <rex> beautiful! thanks
[23:39:53] <rex> there's the big benefit of javascript right there. any function(myDoc){ } !
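Filling in joannac's forEach hint (a sketch, not run against a live database; the helper is plain JS, and the bulk loop only runs inside a mongo shell where `db` exists):

```javascript
// Lowercase a username for the case-ignoring index field.
function lcUname(uname) {
  return uname.toLowerCase();
}

if (typeof db !== "undefined") {
  db.users.find({ uname: { $exists: true } }).forEach(function (doc) {
    db.users.update({ _id: doc._id },
                    { $set: { lc_uname: lcUname(doc.uname) } });
  });
}
```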
[23:54:40] <babykosh> Mongodb gods…how to do a many_to_many relationship?
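babykosh's question goes unanswered in the log; the usual document-model answer is an array of references on one side (a sketch, all collection/field names invented):

```javascript
// Each student holds an array of course ids; query from either direction.
db.students.insert({ _id: "sam", course_ids: ["math101", "bio201"] })
db.courses.insert({ _id: "math101", title: "Algebra" })

db.students.find({ course_ids: "math101" })               // who takes math101?
db.courses.find({ _id: { $in: ["math101", "bio201"] } })  // what does sam take?
```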