[11:44:27] <joannac> in whatever terminal you use, has copy and paste functionalities
[11:49:11] <slampis> Hello, I am having trouble understanding the status of this feature: https://jira.mongodb.org/browse/SERVER-9395. Is it already supported in mongodb 2.6?
[11:54:04] <rspijker> 2.5.x are dev/test releases. 2.6.x is stable and should include all 2.5.x changes
[11:55:00] <slampis> rspijker: I’ve been trying to use it but it looks like the option is ignored. Also in the official documentation there is still no reference to $minDistance : http://docs.mongodb.org/manual/reference/operator/query/maxDistance/
[11:58:55] <rspijker> slampis: it not being in the docs is apparently known: https://jira.mongodb.org/browse/DOCS-1702
[11:59:11] <rspijker> there’s also a link there to an example that actually uses it, so have a look at that and see if you can get it working? :)
[12:02:15] <slampis> rspijker: It’s where I found out about the $minDistance option, see my comment in the disqus section :). Anyway I’ll check again the syntax.. Maybe I am doing something wrong.
[12:08:43] <rspijker> slampis: ah, ok. I’ve never used it myself, so can’t really say anything about whether it exists. If you really can’t get it to work, mongodb is open source, so you could just check out the github repo for 2.6 and see if the fix made it in.
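For reference, a hedged sketch of the 2.6 syntax under discussion: $minDistance is accepted inside $near when a GeoJSON $geometry point and a 2dsphere index are used (the collection and field names here are made up, not from the original conversation):

```javascript
// Hypothetical collection "places" with a GeoJSON point field "loc".
db.places.ensureIndex({loc: "2dsphere"});
db.places.find({
  loc: {
    $near: {
      $geometry: {type: "Point", coordinates: [-73.9667, 40.78]},
      $minDistance: 1000, // meters; ignored in the legacy-coordinate form
      $maxDistance: 5000
    }
  }
});
```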
[13:35:42] <jaccob> Hi, I have a baffling situation. My query for distinct values seems to be giving back data. Please see: http://play.golang.org/p/6Bz5AjOIDz
[13:38:10] <rspijker> jaccob: find in the shell doesn’t necessarily show all results
[13:38:28] <rspijker> it returns a cursor and by default it will show the first x values
[13:39:49] <rspijker> do db.trips.count({"routeid": 2792});
[13:40:02] <rspijker> if it’s more than 20, you’re just being tricked by the display
[13:40:24] <rspijker> the default mongo shell should normally tell you about this btw...
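The behaviour rspijker describes is easy to confirm in the shell; the field name below follows the paste above:

```javascript
// The shell prints only the first 20 documents of a cursor by default.
db.trips.count({routeid: 2792}); // the true number of matches
// At the prompt, type "it" to page through the next batch of results.
```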
[13:58:34] <jekle> Hi all. I am trying to find a schema to query products from categories. Each product has a specified position per category. After some research I found a Stack Overflow thread with a good-looking answer. However, the proposed schema violates the rule of not having values as array keys. Now I am unsure: can the "positions" array be indexed properly?
[14:09:29] <jekle> rspijker: the categories aren't sorted, or rather, that's not the problem here. One product document has a relationship with many category documents. I want to query products from category x sorted by position. All related products could be related to another category with different positioning.
[14:10:05] <rspijker> so, the contents of a category are sorted
[14:10:16] <rspijker> as in, the products are ordered inside of a category
[14:11:09] <jekle> rspijker: exactly, yes. That's how I should have written it in the first place
[14:12:11] <jekle> I guess the proposed schema would work for that, but I am not sure if it's the right way
[14:13:15] <ssarah_> http://pastiebin.com/53c6812d60c4c <- anyone can tell me the reason for the error?
[14:14:28] <pta9000> I've got performance problems due to mongodb being too passive about acquiring memory. It seems not to touch inactive memory at all, but will only take memory when it is actually free; otherwise it will just linger at a bare minimum (say 120m). Any ideas?
[14:14:42] <remonvv> ssarah_: Incompatible locale on your system
[14:15:04] <ssarah_> hmmm? pt, en, that stuff? how do i fix it?
[14:15:33] <pta9000> mongod running on debian, bare-metal with 32g-48g
[14:17:30] <ssarah_> ty, google, ty remonvv. lemme read dat
[14:19:03] <rspijker> if it’s a one time thing, just go LC_ALL=C mongod … ssarah_
[14:21:18] <rspijker> jekle: you could keep your original schema (the array of documents) and use the aggregation framework to do the sorting
[14:21:32] <rspijker> depending on the size of your collections, that might be fine
[14:21:57] <rspijker> or was it specifically for the purpose of being able to index it?
[14:22:04] <rspijker> suggesting a rather large collection...
[14:25:06] <jekle> rspijker: our collections are kinda small sized, just a few thousand documents. I just want to do it right. Indexable, yes, to be able to scale up.
[14:25:34] <rspijker> well… what part do you actually need to index? the search or the sort?
[14:25:55] <rspijker> if searching will always reduce the size down to a manageable amount of docs, then sorting that without an index might not really be an issue
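A hedged sketch of the aggregation approach rspijker mentions, assuming a hypothetical schema where each product embeds its per-category positions (field and collection names are made up):

```javascript
// Hypothetical product documents:
//   { name: "...", positions: [{category: "x", pos: 3}, ...] }
db.products.aggregate([
  {$match: {"positions.category": "x"}}, // products related to category x
  {$unwind: "$positions"},               // one doc per (product, category)
  {$match: {"positions.category": "x"}}, // keep only the category-x entries
  {$sort: {"positions.pos": 1}}          // order by position within x
]);
```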
[14:26:40] <jekle> rspijker: well I admit I don't know the answer to that question. I just want our queries to be fast :)
[14:29:03] <rspijker> I knew someone was going to bite at that
[14:29:10] <rspijker> had my money on either you or Derick
[14:31:06] <jekle> rspijker: I am not sure if that's the best schema for us, because we need to do more "where" querying on the products collection besides the category relation. If they are all embedded documents I bet that becomes harder to do, and duplication doesn't sound fun :/ I am so new to nosql, it drives me crazy, but it's challenging and fun :D
[14:31:34] <rspijker> it almost certainly isn’t a good idea
[14:31:50] <remonvv> And we can debate about the "almost"
[14:34:09] <rspijker> there are cases where it could work…
[14:35:01] <joshua> You can have more than one index, or compound indexes. If your database will have more reads than writes it would be fine having more than one.
[14:35:25] <joshua> But it's best to keep the indexes small enough to fit in memory, for performance.
[14:36:47] <jekle> rspijker: roger. will stay on the current track then. thanks for the talking! back to code now
[14:53:34] <remonvv> People overestimate the number of good reasons there are for embedding collections.
[14:57:24] <joshua> We switched developers on our application and I don't know if they understand that. They seem to create a new database for everything
[15:03:40] <ssarah_> when im starting a config server, mongod --configsvr --dbpath /data/configdb --port 27019, do i need to make a dir for each?
[15:10:58] <rasputnik> ssaraH: running several on the same box? yes, they need their own data
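A minimal sketch of what that looks like for three config servers on one box (paths and ports here are illustrative, not from the original conversation):

```shell
# Each mongod on the same machine needs its own dbpath and its own port.
mkdir -p /data/configdb1 /data/configdb2 /data/configdb3
mongod --configsvr --dbpath /data/configdb1 --port 27019
mongod --configsvr --dbpath /data/configdb2 --port 27020
mongod --configsvr --dbpath /data/configdb3 --port 27021
```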
[15:27:53] <rspijker> Derick: it’s fine for actual deployments. For tutorials where they have you running it on your own machine, it makes little sense...
[16:33:22] <themime> i called .update() and it seems to have replaced the whole object with the field I wanted to update - ie update({},{field:val}) - instead of updating the object with that new field, it replaced the object with {field:val} - is this normal behavior?
[16:34:01] <themime> i wanted to add a field to that object (really to all objects on the collection but its a new collection with one object so i'm kinda just playing around right now)
[16:35:03] <themime> ssaraH: i'm new to mongo but not to port binding; it sounds like you have the service already running, or you are using a custom port that's already in use. what port is it?
[16:36:30] <ssaraH> im also new, its the default 27017
[16:36:47] <ssaraH> i think mongos tries to use the default mongo port, by its own default
[16:40:39] <ssaraH> ty, themime, hope someone helps you with your question up there.
[16:41:48] <themime> ssaraH: i think i just found a stack overflow article for it, i believe i need to use {$set: {fieldName: value}} as the update param in the update method
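The difference themime ran into, sketched with a hypothetical collection: without an update operator the second argument replaces the whole document, while $set only touches the named field:

```javascript
db.things.update({}, {field: "val"});                        // REPLACES the doc
db.things.update({}, {$set: {field: "val"}});                // adds/changes one field
db.things.update({}, {$set: {field: "val"}}, {multi: true}); // ...on all matching docs
```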
[17:35:05] <themime> im new to mongo so i don't know, but i do know your question is vague and even an experienced person here would have difficulty answering
[17:36:06] <user123321> Well then, I'd try to be more specific :)
[17:47:54] <pgentoo-> I need to update all documents in a collection and add a new field ("utc2") which is a function of another field ("utc"). Basically, "utc" is an ISODate() with full resolution, where i want utc2 to have everything more precise than the hour truncated. Any suggestions on how to do this efficiently? I'm working with a collection having around 500M records.
[17:53:22] <pgentoo-> i was just thinking of doing a foreach on the set to update, and calling collection.update() for each record, and then defining the new field based on the other one, but not sure if this is the most efficient approach or if there is some better way to go about it. Ideas appreciated. :)
[17:54:51] <themime> pgentoo-: im new to mongo but not to development. id say unless efficiency is an actual concern, do what is easiest and makes the most sense to you. mongo may have some fancy way to handle it buuut it does seem a little complicated haha
[17:55:09] <themime> and what gets the job done :)
[17:59:13] <pgentoo-> yeah, that's the approach i was going to go with unless someone chimed in with some magic. :)
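A minimal sketch of the hour-truncation logic in plain JavaScript; the helper name and the forEach pattern in the comment are illustrative, not from the original discussion:

```javascript
// Truncate a full-resolution timestamp to the hour (minutes, seconds,
// and milliseconds zeroed). The same logic can run inside a shell loop:
//   db.coll.find().forEach(function (doc) {
//     db.coll.update({_id: doc._id},
//                    {$set: {utc2: truncateToHour(doc.utc)}});
//   });
function truncateToHour(date) {
  var d = new Date(date.getTime());
  d.setUTCMinutes(0, 0, 0); // zero minutes, seconds, milliseconds at once
  return d;
}
```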
[18:14:59] <ssaraH> http://pastiebin.com/53c6812d60c4c <- does this look good? i still have to add arbiters i think
[18:31:14] <ssaraH> can i add the arbiter directly to that file?
[18:31:22] <ssaraH> or am i just doing crazy shit all the way?
[18:33:03] <ssaraH> rs.initiate( rsconf ) <- this the way im going to use it
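A hedged sketch of ssaraH's question: an arbiter can be listed directly in the config document passed to rs.initiate() by marking the member with arbiterOnly: true (host names below are made up):

```javascript
var rsconf = {
  _id: "rs0",
  members: [
    {_id: 0, host: "db1:27017"},
    {_id: 1, host: "db2:27017"},
    {_id: 2, host: "arb1:27017", arbiterOnly: true} // the arbiter member
  ]
};
rs.initiate(rsconf);
// Alternatively, after initiation: rs.addArb("arb1:27017");
```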
[18:41:29] <jaccob> I made a collection called 3to5, and when I did >show collections I see it there, but when I try db.3to5.find() I get SyntaxError: missing ; before statement (shell):1
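The error jaccob hits is a JavaScript limitation, not a mongo one: a name starting with a digit isn't a valid identifier, so dot notation can't be used. Bracket or getCollection access works instead:

```javascript
// "3to5" can't follow a dot, but these two forms are equivalent to it:
db["3to5"].find();
db.getCollection("3to5").find();
```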
[18:59:30] <ranman> no promises on an answer though :/
[18:59:54] <mrgcohen> i'm trying to create a moving average plot of time-series data
[19:00:08] <mrgcohen> is there an easy way to calculate a moving average for a collection of time-series data
[19:00:36] <mrgcohen> for instance if you had 50 years of daily data points
[19:00:57] <mrgcohen> how could you create a 3 month moving average from that with mongodb
[19:01:11] <mrgcohen> definitely can't do it with aggregation framework since it would require multiple docs at once
[19:01:17] <mrgcohen> can you do it with map-reduce
[19:01:44] <mrgcohen> or would you have to pull down a chunk of data
[19:02:29] <mrgcohen> create the avg per interval in ruby (or node, or python etc) and push that onto an array, and then continue pulling chunks until you're done
[19:02:36] <mrgcohen> there must be a better way to do this
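The pull-chunks-and-average approach mrgcohen describes can be sketched client-side. This is plain JavaScript over already-fetched, date-sorted daily values; the function and parameter names are made up (windowSize would be roughly 90 for a 3-month window of daily points):

```javascript
// Simple moving average over values already sorted by date, one per day.
function movingAverage(values, windowSize) {
  var out = [];
  var sum = 0;
  for (var i = 0; i < values.length; i++) {
    sum += values[i];
    if (i >= windowSize) sum -= values[i - windowSize]; // slide the window
    if (i >= windowSize - 1) out.push(sum / windowSize);
  }
  return out;
}
```

Keeping a running sum rather than re-summing each window makes this O(n) regardless of window size, which matters at 50 years of daily points.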
[19:38:52] <rasputnik> user123321: replica sets are pretty straightforward in mongo, is that what you mean?
[19:45:20] <user123321> rasputnik, I would like to have a backup DB server in case the main DB server goes down. They're accessed by at least 2 web servers which are load balanced. I'm wondering what my options are. For example, load balancing mongo db? Or, just making the backup become active etc.
[19:45:59] <rasputnik> user123321: a replica set might be worth a look. only one takes writes but you can read from the secondaries. get 3.
[19:46:43] <rasputnik> working out pretty well for us when it comes to availability during tuning/patching etc. too
[19:47:41] <user123321> rasputnik, Cool. Is a replica automatically synchronized?
[19:48:13] <rasputnik> user123321: nothing is realtime. but typically sub second lag.
[19:48:37] <rasputnik> user123321: go read over: http://docs.mongodb.org/manual/core/replication-introduction/
[19:49:13] <user123321> Cool. So am I supposed to use a thing like Keepalived to automatically make a replica DB to receive writes if the main DB goes down?
[20:16:11] <pgentoo-> Ok, per my previous question, i ended up with just a foreach over all the documents in the collection, but at the current rate of around 1500/s, it'll take me about 4 days. :(
[20:29:30] <pgentoo-> Ok, running locally on the primary puts me closer to 24hrs, which is acceptable i suppose, but still a long time. :(
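One possible speed-up for this kind of mass update in 2.6 is the bulk-write API, which batches the per-document round trips; a hedged sketch with hypothetical collection and field names:

```javascript
var bulk = db.coll.initializeUnorderedBulkOp();
var n = 0;
db.coll.find({}, {utc: 1}).forEach(function (doc) {
  bulk.find({_id: doc._id}).updateOne({$set: {utc2: doc.utc}});
  if (++n % 1000 === 0) {             // flush a batch every 1000 ops
    bulk.execute();
    bulk = db.coll.initializeUnorderedBulkOp();
  }
});
if (n % 1000 !== 0) bulk.execute();   // flush the remainder
```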
[20:36:53] <umquant> I have a schema that has multiple embedded subdocuments. When I do a slice operation on one of the arrays it returns the slice in addition to all the other fields.
[20:37:49] <umquant> Is there any way to return only the slice results besides setting each projection field I don't want to zero?
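A hedged sketch of the usual workaround for umquant's question: $slice on its own returns every other field too, but it can be combined with an inclusion projection so only the named fields come back (collection and field names are made up):

```javascript
db.coll.find(
  {},
  {comments: {$slice: [10, 5]}, // skip 10, return 5 array elements
   title: 1,                    // plus only the fields listed explicitly
   _id: 0}
);
```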
[20:40:05] <Antiarc> Hey folks. Does anyone know what the status of the mongo-ruby-driver is, in terms of how close the 2.x is to ready for release?
[20:59:22] <DubLo7> Forgive me for this question - my boss is … questionable sanity. Is it possible to send data to mongo as an array and have mongo run a query on that array instead of on the mongo records? Just do its thing against the data sent in the same request.
[21:00:31] <the-erm> I'm kinda lost, how would you do the equivalent of: SELECT col FROM table WHERE col1 != col2; in mongo?
[21:03:27] <the-erm> DubLo7: Sounds like to me you're trying to do a subquery, and I'm not sure if it's even possible in mongo.
[21:04:26] <the-erm> the-erm: look at the $where command
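For the-erm's question, a hedged sketch of the $where approach: it runs a JavaScript predicate per document and cannot use an index, so it is slow on large collections (names mirror the SQL example):

```javascript
// Equivalent of: SELECT col FROM table WHERE col1 != col2;
db.table.find({$where: "this.col1 != this.col2"}, {col: 1});
```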
[21:06:51] <DubLo7> the-erm: More like sending the data to mongo, NOT saving it, yet somehow magically having mongo run a query against an arbitrary array of objects as an in-memory-only collection.
[21:07:01] <DubLo7> I don't think such a thing is possible
[21:07:44] <DubLo7> I think it would be quicker to just run the query locally, but the idea is to have mongo queries run against arrays of stuff without going through the motions of saving it.
[21:13:47] <DubLo7> OK, clarification - instead of the query parameter, we provide an array of already computed query results as part of the mapReduce command.
[21:24:36] <DubLo7> Q2 - can emit value be an object?
[23:25:55] <rex> let's say I want to add a field called lc_uname to all documents , which already have a field called uname, which usually isn't already lowercase
[23:26:32] <rex> db.users.update({},{$set:{lc_uname:doc.uname}}) ? or something like that?
[23:26:33] <EmmEight> why would you store that? It seems like if you are just trying to get uname, cant you use whatever language you are using to convert to lowercase?
[23:26:57] <rex> for indexing purposes. im making my own case-ignoring index.
[23:26:58] <EmmEight> Seems like data replication IMO
[23:27:52] <rex> so, you search for user by lowercasing your search string first, then the db is indexed by lc_uname
[23:28:18] <rex> if I don't do that, mongo will do a regex check or some kind of string op without using the index. unless you know another way...
[23:28:31] <EmmEight> Make an index on the field how it is, query on it with a mongo regex that has start and end anchor
[23:32:20] <joannac> (that may be overkill for you though)
[23:33:18] <rex> yah that sounds like overkill. seriously, im trying to let users have their own personal flavor in their uname with caps and stuff, but the essential identity of the user is without any caps
[23:33:46] <rex> since unames are really small, i dont see the big deal in adding a field.
[23:35:10] <joannac> rex: test and see what you get
[23:36:01] <rex> actually, the last 3 paragraphs in http://docs.mongodb.org/manual/reference/operator/query/regex/ have convinced me that it's ok to just use $regex if I use a prefix expression
[23:36:08] <joannac> I think the regex query will almost certainly be slower
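The lc_uname backfill rex describes could be sketched like this; a plain {$set: ...} cannot reference another field of the same document, so each document has to be read and rewritten (collection and field names follow the discussion):

```javascript
db.users.find({uname: {$exists: true}}).forEach(function (doc) {
  db.users.update({_id: doc._id},
                  {$set: {lc_uname: doc.uname.toLowerCase()}});
});
db.users.ensureIndex({lc_uname: 1}); // then index the lowercase copy
```

Searches then lowercase the input first: db.users.find({lc_uname: searchString.toLowerCase()}), which can use the index, unlike a case-insensitive $regex.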