[03:30:10] <pngl> Do people use the projection part of queries a lot or do people get all the records back and sort things out in their language's driver?
[03:31:47] <cheeser> depends on the use case, i think.
[03:39:40] <tpayne> I don't know why all of the casbah examples are so bad
[03:39:54] <tpayne> they go from 0 to 10 instead of 0 to 1
[03:44:32] <cheeser> trying to remember who works on that...
[03:45:04] <cheeser> i'll see if i can't shake some trees tomorrow. you could probably file an issue or two on the docs/examples
[03:49:08] <tpayne> cheeser maybe i can ask you how to do something via the command line, that way i'll know if it's possible or not
[03:51:37] <tpayne> i want to query mongo to give me all of this data back, in a bucket, or a Map[String, List[String]]
[03:52:19] <tpayne> so that i have Map("hello" -> List("world", "tpayne"), "goodbye" -> List("buddy"))
[03:52:47] <tpayne> that's it. I was looking into using coll.aggregate to return something like this
[03:53:18] <tpayne> groupBy "key" and get back a Map[String, List[String]]
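A minimal shell sketch of that aggregation, assuming documents shaped like {key: "hello", value: "world"} (both field names are assumptions); casbah's coll.aggregate accepts the same pipeline stages:

    // group by "key" and collect the matching values into an array per key
    db.coll.aggregate([
        { $group: { _id: "$key", values: { $push: "$value" } } }
    ])
    // result documents look like:
    //   { _id: "hello",   values: ["world", "tpayne"] }
    //   { _id: "goodbye", values: ["buddy"] }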
[06:00:31] <raar> Hi, I just removed all collections from a 800GB database and I want to re-claim (un-allocate) this space. I'm considering deleting the whole database, but I don't want a global write lock for more than a couple of seconds - any idea how long deleting this DB might take?
[06:03:00] <raar> any advice would be greatly appreciated
[07:39:54] <rspijker> cmex: so more like page -> comment -> user -> name ...
[07:40:20] <raar> I don't have access to the filesystem at present (it's being hosted by mongohq) - repairDatabase() on 800GB is going to take a while I assume, no? I don't want a global lock of more than a couple of seconds
[07:40:23] <rspijker> it is? That's a bad idea, I think
[07:40:38] <rspijker> raar: the time it takes depends on the size of the *actual* data
[07:41:06] <rspijker> raar: the problem with repairDatabase is that it needs 800GB to start if your current storage size is 800GB
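For reference, the two routes being weighed, as shell commands (the database name is an assumption); dropDatabase removes the database and its files, while repairDatabase rewrites the files in place and, as noted above, needs roughly as much free space as the current storage size:

    use mydb              // the 800GB database (name assumed)
    db.dropDatabase()     // drop the database and release its files
    // or, to keep the database but rewrite/compact its files (slow, needs free space):
    db.repairDatabase()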
[08:53:15] <_Heisenberg_> Hi! I installed mongodb on ubuntu 12.04, which automatically installs start scripts in /etc/init and /etc/init.d ... If I want to automatically start a config server instead of a normal mongod instance, do I have to change the start scripts or just a config file?
[08:56:31] <rspijker> _Heisenberg_: if you want to start both, you need to add another start script for that
[08:56:47] <rspijker> if you want to start a config server _instead_ of a normal mongod, then you only need to modify the config file it uses
[09:04:54] <rspijker> just cat that one and see what it says
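A sketch of what that change might look like, assuming the Ubuntu package's /etc/mongodb.conf and upstart job (paths and port are assumptions):

    # /etc/mongodb.conf -- turn this mongod into a config server
    configsvr = true
    port = 27019
    dbpath = /var/lib/mongodb

    # or start one by hand instead of editing the packaged config:
    mongod --configsvr --port 27019 --dbpath /var/lib/mongodb-configdb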
[09:08:53] <_Heisenberg_> seems like i fucked up the installation. can't even start the mongod by using sudo service mongodb start, ps ax returns no running mongod instance
[09:09:29] <Nodex> by default mongodb uses an init / upstart file on ubuntu
[09:09:39] <Nodex> (provided you installed it with apt or w/e)
[10:36:47] <aandy> hi, i'm running 2.4.3, and have a weird situation i don't understand. i have a collection, empty, then import a json file with mongoimport which reports ~1k imported objects. from the shell, db.find()/db.findOne() still says null. i then try to insert a doc manually from the shell, and it can see only that doc. i export the whole (supposedly 1 object) collection and end up with a ~1k json file again. the only setup for the collection i did was to add db.ensur
[10:37:14] <aandy> am i the only one who sees some sort of black magic going on?
[10:37:37] <Zelest> are you switching database when using the shell?
[10:37:49] <Zelest> (stupid question, but just making sure :-)
[10:37:50] <aandy> it's part of a 1 master, 2 slaves setup if that helps shed some light on it. all actions done to the primary
[10:38:06] <aandy> not stupid at all, and from what i can tell, i'm not
[11:10:48] <_Heisenberg_> I set up a replica set in order to deploy a sharded test cluster. am I on the correct track? what makes me suspicious is that all replicas seem to be "primaries" ---> http://pastebin.com/z6q4yTRB
[12:10:21] <hard_day> i have a problem with a cursor. i've been reading the docs and the google group but the problem continues and i have no idea what's wrong http://pastie.org/8186478
[12:10:38] <amitprakash> Hi, is it possible to pass the document name as a variable? i.e. instead of saying db.Document.find, I want to call db.find(documentName, params)
[12:12:24] <kali> amitprakash: you mean the collection name, right ? if not, you're seriously confused
[12:12:45] <kali> hard_day: is it a long-running query ?
[12:13:11] <amitprakash> kali, yes, the collectionname
[12:13:31] <kali> amitprakash: db[collectionName].find(params) then
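In other words, a tiny shell sketch (most drivers expose the same bracket or getCollection() lookup; the collection name here is just an example):

    var collectionName = "users";                    // name held in a variable
    db[collectionName].find(params);                 // same as db.users.find(params)
    db.getCollection(collectionName).find(params);   // equivalent, also works for awkward names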
[12:14:31] <hard_day> Hi kali, no, it's very short, it's like this: db.2013070100.find()
[12:20:33] <kali> hard_day: http://docs.mongodb.org/manual/reference/limits/#Restriction on Collection Names
[12:23:39] <hard_day> kali, so the problem is the collection name?
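Likely yes, at least on the shell side: a name that starts with a digit is legal for MongoDB itself, but db.2013070100 is not valid JavaScript property access, so the shell needs an explicit lookup — a quick sketch:

    db.getCollection("2013070100").find()
    // or
    db["2013070100"].find()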
[12:25:07] <jimbosimo> hi, question if anyone knows, what will happen if I issue a multi-document update instruction and server crashes before all documents are updated? will it redo the last operation when it goes up again ?
[12:44:11] <dotpot> aggregation related question: http://pastie.org/private/yxr90lqcxaz9csuuzmpaoa
[13:08:07] <_Heisenberg_> if I try adding a replica as a shard I keep getting "couldn't connect to new shard ReplicaSetMonitor no master found for set: rs0" even if the replica set has one primary and two slaves in it
[13:09:40] <kali> _Heisenberg_: what does your addShard command look like ?
[13:28:06] <_Heisenberg_> kali: sorry, just dropped out. my command looks like this: mongos> sh.addShard("rs0/10.1.1.49:27017")
[13:33:39] <_Heisenberg_> kali: see here with response: http://pastebin.com/wPpTYD7S
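One thing worth trying (a hedged sketch; the extra hostnames are assumptions): give addShard the full member list so mongos can find the primary even if the first host it probes is a secondary or unreachable, and make sure the host:port entries match what the members advertise in rs.conf():

    sh.addShard("rs0/10.1.1.49:27017,10.1.1.50:27017,10.1.1.51:27017")
    // the member addresses must match the "host" values shown by rs.conf() on the set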
[14:28:24] <remonvv> To be fair I just spit out some tea when someone mentioned ron said something useful.
[14:28:46] <remonvv> I'm so proud. It's...it's hard to put into words really. I need a moment to gather myself.
[14:29:12] <ron> Nodex: cheeser knows me for far longer than you do. and trust me, once he learns you adore php, he would have as little respect towards you as me. probably even moreso.
[14:29:27] <remonvv> Ohhh...he makes a valid point.
[14:31:02] <Nodex> yet ron i am still better than you in spite of these things so that really doesn't say a lot about you does it :)
[14:31:23] <ron> Nodex: I love how delusional you are. it makes you so... special.
[14:36:00] <cheeser> i'm trying to fix those issues. :)
[14:36:06] <cheeser> what were they do you recall?
[14:36:18] <ron> the project was dead. I call that an issue.
[14:36:32] <cheeser> not dead. it was just resting.
[14:36:39] <cheeser> morphia prefers keeping on its back.
[14:36:53] <remonvv> cheeser: It wasn't super active, I think only scott was maintaining it. And I don't like the API. And there were quite a few bugs at the time but I can't comment on that now.
[14:37:24] <cheeser> the API will likely shift quite a bit with the switch to the 3.x driver underneath
[14:37:40] <ron> dude, it was dead. dead, dead, dead.
[14:38:47] <remonvv> cheeser: 3.0 will change details, maybe. But the general approach of using builder patterns for query composition and whatnot will stay the same.
[14:38:52] <ron> remonvv: cheeser took over the development of Morphia as part of the 10gen team.
[14:39:15] <remonvv> ron: Don't think that changes that much ;)
[14:39:26] <cheeser> the 3.x driver does a lot more than it used to, so morphia will trim down a bit, i expect.
[14:39:36] <ron> remonvv: just wanted you to know :)
[14:40:43] <remonvv> cheeser: Okay, but that's moving things around. The gap between JSON/shell syntax and Morphia/driver code is too big in my opinion, and much more verbose.
[14:41:16] <cheeser> you want your java code to look like the mongo shell?
[14:41:50] <remonvv> cheeser: Wouldn't go that far but our developers pretty much universally wanted to get rid of Morphia in favor of a more..hm...JPA-ish style.
[14:42:04] <ron> Nodex: sssh... let the grownups talk.
[14:45:03] <remonvv> cheeser: It's hard for me to explain in detail, not a native english speaker ;) Our code basically looks something like this: User u = find("{userId: ?1}").one(); List<User> users = find("{someState: ?state}").all()
[14:45:27] <ron> remonvv: that's your excuse? not a native english speaker? pfft.
[14:45:34] <cheeser> oh, right. so you can do a .set("state", someValue), yeah?
[14:45:44] <ron> Nodex *is* a native english speaker and he can't speak it properly.
[14:46:00] <Nodex> ron there is a difference between speaking and typing
[14:47:49] <cheeser> so basically you want a mongo version of JPAQL
[14:48:38] <ron> when cheeser asks things that way, I'm never sure if he's asking sarcastically or not.
[14:48:39] <remonvv> cheeser: Well, JPAQL is a new query language spec, I rather think mongodb's query language should be mirrored as closely as possible in APIs
[14:49:34] <remonvv> cheeser: I've had this discussion with scott as well and I think it basically boils down to type safety and whatnot. Our stuff has to do runtime type checking and whatnot.
[14:49:39] <cheeser> remonvv: right. i've played with doing that in ophelia. it's a bit tricky but I've learned a way to help that out. it just means a server round trip in the easy solution.
[14:50:17] <cheeser> basically morphia is JPA's criteria API for mongo when you want something more like its QL
[14:51:19] <remonvv> cheeser: Right, we need maintainable/readable code and Morphia/driver code doesn't give you that in our experience.
[14:52:04] <remonvv> cheeser: It becomes cluttered, verbose and so forth. This may improve in 3.0/Morphia next gen but at that time we had to decide.
[14:52:05] <cheeser> i wrote critter to help with the maintainability but it's kinda going in the other direction.
[14:52:52] <_Heisenberg_> ron: I don't get it, maybe you can help me: what do the two parameters of "repeat" mean in the http://www.json-generator.com/ thing?
[14:52:56] <remonvv> cheeser: And right now it's a 3-year-old layer that runs on production clusters that routinely host 100,000+ concurrent users, so for us switching back to something else that we don't have to maintain is only an option if the functional gap is relatively limited and the robustness equal or better.
[14:54:11] <cheeser> it's mildly experimental. i've used it in a couple of projects and it works well enough so far.
[14:54:50] <remonvv> cheeser: Another thing is that we had to add a ton of other features that were then (and sometimes still) not supported. For example, hash indexes and automatic hashing of shard key values, validation of cluster topology metadata versus our document POJOs (uses similar model to Morphia but with slightly different annotations and sometimes different behaviour).
[15:06:36] <remonvv> And the driver is going to do mapping and whatnot?
[15:07:27] <cheeser> 3.x has encoders/decoders which do a lot of the work shuttling data that morphia's been doing. morphia will use/build on that functionality.
[15:08:46] <remonvv> Hm, suppose that's not a bad thing.
[15:09:25] <cheeser> i think it'll be good overall. i'm hoping much of morphia can be trimmed down.
[15:09:39] <Derick> cheeser: did you not see my PM? :-)
[15:10:21] <remonvv> Yeah, as long as the driver itself doesn't get too feature-creepy. It should be a driver with a low-level interface. People (or 10gen) can build separately packaged magic on top of that.
[15:10:35] <cheeser> Derick: sorry. i had a whitelist running. i need to update my config to not load it by default anymore. try again. :)
[15:10:45] <cheeser> remonvv: i think that's the plan.
[15:23:51] <flaper87> Hey guys, Can you help me figuring out why this query is not using _id's index for sorting documents? http://bpaste.net/show/OX41rlImcg2eHZYe6V4c/
[15:25:08] <Derick> flaper87: MongoDB can only use one index per query
[15:25:25] <Derick> and it uses "active" already for the match, and it can't use that for the _id sort
[15:25:50] <flaper87> Derick: yup, I know that but, the _id is part of 'active'
[15:25:56] <cheeser> i was gonna suggest dropping the sort and seeing if it uses the index or not.
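For reference, the usual shape that lets a single index serve both the filter and the sort is the equality field first and the sort field last (a sketch; the field and collection names are assumptions, explain() shows whether the planner actually picks it):

    db.coll.ensureIndex({ active: 1, _id: 1 })
    db.coll.find({ active: true }).sort({ _id: 1 }).explain()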
[15:36:36] <remonvv> Derick: Actually never saw that, awesome.
[16:01:30] <ashley_w_> i'm trying to do a query where the timestamp is newer than another timestamp, but i get "$err" : "error on invocation of $where function:\nJS Error: ReferenceError: o is not defined nofile_b:0" http://pastebin.com/RaAf7XfT
[16:02:42] <Nodex> you cannot reference "this" unless you're map/reducing
[16:14:03] <ashley_w_> this includes ObjectId(), which getTimestamp() is a part of
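If one side of the comparison is a known time rather than another field, $where can be avoided entirely by building an ObjectId bound from that time — a common shell trick, since ObjectIds start with a 4-byte timestamp (field name and date are assumptions):

    var since = new Date("2013-08-01T00:00:00Z");
    // 8 hex chars of epoch seconds + 16 zeroes = a valid 24-char ObjectId lower bound
    var minId = ObjectId(Math.floor(since.getTime() / 1000).toString(16) + "0000000000000000");
    db.coll.find({ _id: { $gt: minId } })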
[17:40:47] <rogerthealien> hi, i have a question and was hoping someone here could help... I'm getting "TypeError: if no direction is specified, key_or_list must be an instance of list" when trying to call collection.ensure_index({'attributes.id':1}) through pymongo, and the pymongo documentation doesn't seem to be very helpful on this
[17:42:48] <idank> I have two collections: companies and employees. When choosing between embedding employees or putting it in a separate collection with a company reference, how does one deal with high read throughput (embedding is superior) and thousands of employees per company, surpassing the 16MB limit per document (GridFS sounds like overkill)?
[17:48:03] <DanWilson> idank: I'd say put a reference to the companyID on the employee
[17:49:29] <idank> DanWilson: but that kills read performance when reading companies and getting their employees
[17:59:14] <Nodex> idank : I am talking about a quick view..... for example, if you only need the name of each employee in a company result before drilling down further, then embed each name plus a reference to the _id of said employee in the other collection
[17:59:49] <idank> Nodex: I'm not keeping a lot per employee, just a few fields
[18:01:33] <Nodex> if you don't need the data until someone clicks and drills down then there is no point
[18:01:34] <idank> companies with 2000 employees will still exceed the 16MB limit
[18:02:15] <Nodex> 2000 array members of "not a lot of data" is not 16mb
[18:02:39] <idank> maybe it's 4000, I can't remember
[18:02:53] <Nodex> still won't be 16mb but i understand the point
[18:03:40] <Nodex> if you're constrained then you have no choice but to reference, in which case you should embed the _id of the employee and do an $in - which may or may not be faster
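A sketch of that hybrid layout (collection and field names are assumptions): the company document embeds only the employee _ids, and the full employee documents live in their own collection:

    // company doc keeps a lightweight list of references:
    //   { _id: ..., name: "Acme", employeeIds: [ObjectId("..."), ObjectId("...")] }
    var company = db.companies.findOne({ name: "Acme" });
    db.employees.find({ _id: { $in: company.employeeIds } })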
[18:04:09] <idank> that's what I used to do in a similar collection, it was terrible
[18:04:27] <idank> but there the documents referenced were bigger, that may have contributed
[18:04:46] <Nodex> you might have to chunk the query
[18:11:32] <peterjohnson> I'm trying to search a message database for any messages with body containing website links (starts with www. , http or contains .com) but for some reason when I try db.messages.find( { type : 0 , body : { $regex: '\.com', $options: 's' } } the period isn't included in the search, so any result with "com" comes back
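A likely culprit: in a JavaScript string '\.' collapses to just '.', so the server receives the regex .com (any character followed by com). Escaping the backslash, or using a regex literal, keeps the dot literal — a sketch:

    db.messages.find({ type: 0, body: { $regex: '\\.com', $options: 's' } })
    // or, with a regex literal:
    db.messages.find({ type: 0, body: /\.com/ })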
[18:11:35] <Nodex> not a lot you can do about it tbh, you can't control when an employee is added and thus cannot control the seek
[18:12:07] <idank> well if they're embedded, there are no seeks w.r.t. employees
[18:12:21] <Nodex> your limit nulls that so it's not an option
[18:13:02] <idank> right, well splitting the array could work when embedding
[18:13:09] <idank> or compressing the employees array
[18:18:11] <idank> it's always process a company with all of its employees
[18:18:30] <Nodex> here are some specs... our CRM has roughly 25M docs with perhaps 50M associated to the docs - contacts/notes/documents (pdf's etc) and I can fire well over 1000 req/s at it without it blinking
[18:24:40] <Nodex> i.e. processed the scraping elsewhere and then inserted to mongo
[18:24:58] <Nodex> mongo adds a certain amount of padding per document in case it grows a little, but not a lot
[18:25:33] <Nodex> I understand if you scrape again and find another page you missed that it will be at the end (fragmentation) but that's just the way it is
[18:26:06] <idank> you're talking about what will happen when embedding
[18:43:31] <bluefoxxx> I have a node on a secure network that I would like to connect to a replica set as a non-voting, hidden member. Ideally it would connect to the MongoDB replica set in the DMZ and keep a running replica of all the data.
[18:43:49] <bluefoxxx> the idea being that this one is on our secure network, and is pulling data for backup.
[18:44:40] <bluefoxxx> we have Symantec back-up software, so the client has to connect to the server rather than a back-up agent supplying a pull service (it does, but they have to have two-way connection initiation--client must connect to the server, server must open connections to the client)
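For reference, a shell sketch of adding that kind of member from the current primary (the hostname and _id are assumptions):

    rs.add({ _id: 3, host: "backup.internal.example:27017", priority: 0, hidden: true, votes: 0 })
    // priority 0 + hidden keeps it out of elections and invisible to clients;
    // votes: 0 makes it non-voting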
[18:45:28] <slowest> I saw that there's group by and count/sum functions for mongodb. Will this work properly if you have a large data set spread over several servers? I guess it would be quite slow if it had to fetch from all servers? Even if it's properly indexed it would still have to talk to all servers?
[18:46:16] <cheeser> because unless you're sharding, all of a collection can be found on each machine in the repl set
[18:46:36] <mfletcher> Hi, anyone here with experience in load balancing mongodb servers? I'd like to use a load balancer to balance connections between a bunch of mongodb servers that would be in a replica set. Are there any side effects to load balancing all the nodes in the replica set, including the primary node?
[18:47:11] <cheeser> you could just set your read preference to allow for secondaries.
[18:47:22] <jblack> What is this? replication&sharding hour?
[18:47:25] <cheeser> but i don't know that a load balancer would really buy you anything.
[18:47:33] <remonvv> mfletcher: Load balancing doesn't get you anything
[18:48:17] <remonvv> slowest: The AF is sharding compatible but it's..hm...not all pipelines scale
[18:48:48] <mfletcher> I was just thinking from the point of view that if I have web servers pointed at the mongodb servers, and one of them goes down, or out for maintenance, I'd still like that web server to serve traffic
[18:52:00] <remonvv> cheeser: No not that one. It's a two-part. I think currently read secondary is conceptually broken in mongos due to a weird sticky connection rule and there's a special Java only implementation of that read preference that makes it different for that environment.
[18:52:17] <cheeser> ok. i should probably deal with 889 at some point.
[18:52:44] <mfletcher> We're using python with pymongo. I'm not familiar with the implementation; one of my devs tells me he just maintains a list of the mongo servers in a config file, but I figured there might be a better way to do it, since we'd be scaling up and down the number of mongo servers in the farm
[18:53:18] <remonvv> cheeser: https://jira.mongodb.org/browse/SERVER-9788, the Java note is in Randolph's comment "Actually, the Java driver is a special case where the pinning behavior must be explicitly requested by the user."
[18:54:33] <remonvv> cheeser: I don't recognize 889. I've done tests with that setup and it works fine.
[18:54:50] <remonvv> cheeser: The issue there is most likely the "closest first" system in the current driver
[18:56:38] <remonvv> cheeser: Not sure if they/you are redoing the io/network thing but another weird issue is that you can improve throughput by having multiple Mongo(Client) instances. Seems to be a locking issue somewhere.
[18:58:27] <remonvv> mfletcher: Is the setup actually a repset?
[19:00:06] <remonvv> mfletcher: Either way, the better way to do it is put mongos in front of it and let the config servers deal with cluster topology data. Your app should only have a configurable endpoint for mongos. If you want to connect directly to a repset you will need all mongod addresses.
[19:00:18] <remonvv> mfletcher: And for that reason it's not very common anymore afaik.
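In connection-string terms the difference looks roughly like this (hostnames, ports, database name and set name are all assumptions):

    # direct to the replica set: the client needs a seed list of members
    mongodb://10.0.0.1:27017,10.0.0.2:27017,10.0.0.3:27017/mydb?replicaSet=rs0

    # through mongos: the app only needs the mongos endpoint(s)
    mongodb://mongos1.example:27017/mydb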
[19:11:29] <mfletcher> Thanks remonvv, I'll pass this on.
[19:12:27] <remonvv> mfletcher: yw, the only downside to mongos is that it is relatively cpu hungry, so it will slightly affect the net throughput of your app per box if it's under pressure
[19:12:34] <remonvv> but then that should never really happen.
[19:32:19] <Ontological> I'm having a hard time using simple regex on pymongo. Here is an example of my problem: http://pastie.org/private/1rzmozkankutqc7thbcjng
[19:34:37] <remonvv> Ontological: You need to use $regex to tell MongoDB it has to match a regex
[19:35:16] <remonvv> Ontological: Also, if at all possible try to reproduce your problem in the shell first
[19:36:17] <Ontological> Well, the shell does it correctly because the shell doesn't wrap the pattern in quotes. That's my problem: it's being treated as a string
[19:36:23] <Ontological> I'll just force $regex, thanks
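For the record, the operator form remonvv means, shown in the shell (field name and pattern are assumptions); the same document structure is what a driver like pymongo sends when the pattern is kept as a plain string under $regex:

    db.stuff.find({ name: { $regex: "^foo", $options: "i" } })
    // with $regex the pattern travels as a string, so nothing has to be
    // wrapped in a native regex type on the client side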
[19:56:18] <konr`> So, dear folks, is there a utility to import data from an .sql file to a collection in mongo?
[19:57:24] <jblack> konr`: Not that I'm aware of, but you should be able to use a csv gem and mongo to do it easily enough.
[19:58:00] <jblack> It'll end up being maybe 20 lines of code, including the "db" setups
[19:58:22] <jblack> the actual select/insert will be about 3 lines of code.
[19:58:59] <jblack> http://ruby-doc.org/stdlib-1.9.2/libdoc/csv/rdoc/CSV.html : CSV.foreach("path/to/file.csv", headers: true) do |row| coll.insert(row.to_hash); end
[20:00:45] <pwelch> anyone seen this error when trying to connect to a replicaset with ruby driver: Mongo::ConnectionFailure: Failed to connect to any given member.
[20:02:16] <ranman> pwelch: all the members in the seed list are down?
[20:02:56] <pwelch> ranman: can you expand on that? If I use the MongoClient.new it connects to a single instance fine
[20:03:54] <ranman> pwelch: try with a MongoReplicaSetClient?
[20:41:33] <remonvv> cheeser: Ugh. I still haven't figured out a good API for the AF in Java
[20:41:38] <caitp> for a document with a start date and end date, I need to be able to query for documents where the start date has a time which starts at a certain hour
[20:42:59] <remonvv> caitp: Ignoring $where, which you shouldn't use; that's not really possible. The standard way of doing that sort of thing is to store the exact data you want to query on. In this case, whenever you update the start date, also write a field for the start date's hour of the day.
[20:43:08] <remonvv> And query (and index) on that.
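A sketch of that pattern (collection and field names are assumptions; eventId and newStart stand for values the application already has): keep the derived hour in sync with the start date, then index and query it:

    db.events.update(
        { _id: eventId },
        { $set: { startDate: newStart, startHour: newStart.getUTCHours() } }
    )
    db.events.ensureIndex({ startHour: 1 })
    db.events.find({ startHour: { $gte: 18 } })   // events starting at 18:00 UTC or later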
[20:43:41] <caitp> I am currently storing auxiliary fields which are UTCHours() * 3600 + UTCMinutes() * 60
[20:52:32] <remonvv> everything is UTC until you show a date to an end user
[20:52:52] <caitp> I'm not sure why this doesn't make sense to you, I think I'm communicating clearly :l
[20:53:24] <remonvv> caitp: I'm not disagreeing. But if you're getting 1am UTC out of a simple Date.getTime() manipulation for a UTC Date that's midnight something's going wrong no?
[20:53:46] <caitp> so saying that demonstrates that you don't understand anything I've said :[
[20:54:25] <remonvv> caitp: That's certainly one of the possibilities.
[20:54:47] <cheeser> so try again using different words :)
[21:01:58] <caitp> what I need is literally just hours/minutes, I can't take the date into account. eg, "Show me the events that happen after midnight", not "Show me the events that happen after midnight of august 1st"
[21:02:21] <caitp> but of course, "after midnight" doesn't really give you anything meaningful, that would give you everything
[21:02:28] <caitp> so there's really no good way to do this
[21:06:52] <remonvv> caitp: getTime() returns the number of ms since epoch for the given Date. If you strip the date and millisecond components you will have a seconds in day value.
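Concretely, something like this (a sketch in shell/JavaScript):

    var d = new Date();                                           // any Date
    var secondsInDay = Math.floor(d.getTime() / 1000) % 86400;    // 0..86399, UTC
    // e.g. 01:00 UTC gives 3600, midnight UTC gives 0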
[21:07:35] <caitp> this was the first strategy that I tried
[21:07:42] <caitp> aside from not dealing with the timezone issue, it doesn't work anyways
[21:11:32] <remonvv> caitp: Dude. There is no timezone issue. It's UTC. And it does work.
[21:12:26] <remonvv> Show your code that goes from a Date instance d to seconds-in-day
[21:13:02] <remonvv> Print current UTC date, then calc those seconds, convert them to hours and minute and show me that's a different value than current UTC hours:minutes
[21:13:36] <caitp> maybe you don't understand what I mean by "doesn't work"
[21:15:16] <caitp> so, using a UTC timestamp isn't going to be helpful here, because UTC midnight is 0, and >= 0 is going to return everything, and <= 0 is going to return almost nothing -- Local 6:00 pm might be UTC midnight, for instance
[21:16:01] <caitp> unless I pad it with some arbitrary amount
[21:17:12] <caitp> which is problematic in other ways
[21:20:51] <remonvv> Right, so separate the date component from the hours (or seconds, whatever) in your doc, both UTC. Do the timezone conversion to UTC, you'll get one or two candidate matches (two if your range runs across two UTC days even if it's a single TZ local day) and combine app-side as needed.
[21:21:33] <remonvv> It just means you have to query for two documents sometimes if your range intersects UTC midnight. It's not exactly rocket science.
[21:23:32] <remonvv> Not really. Our stuff (amongst other things) tracks daily and hourly leaderboards for gaming services that do roughly that.
[21:23:34] <caitp> "You'll get one or two candidate matches (two if your range runs across two UTC days even if it's asingle TZ local day)" -- I don't follow your logic, one or two candidate matches for what?
[21:24:12] <remonvv> Was your original problem that you need to find documents that match a certain "hours of the day" range?
[21:24:37] <remonvv> e.g. "Get all foos that were created between start time and end time of day T in timezone Z"
[21:25:38] <caitp> the service is an event management thing, so events have startdates/times and enddates/times -- the UI is supposed to enable you to search for dates/times starting at or after X or ending at or before X
[21:26:01] <remonvv> yes but X as in absolute date or X as in time of the day (so date component removed)
[21:26:07] <caitp> dates is easy, date+time is easy, time alone is significantly less easy
[21:28:02] <remonvv> Say you store your times in your event as {utcDateInDays:221323, utcTimeOfDayInSeconds:8731}
[21:28:26] <remonvv> That format would make it super easy for a UTC based range check but you have timezones
[21:28:46] <remonvv> so in cases where start.getUTCDay() != end.getUTCDay() you need to create two separate queries
[21:29:28] <remonvv> One that grabs the events from your local TZ start to UTC midnight and one from UTC midnight to your TZ end (your TZ values converted to UTC)
[21:30:26] <remonvv> Just remove the TZ conversion from the problem and create code that allows range queries on UTC start to end where end - start < 24
[21:33:41] <remonvv> So say your local TZ range is from 15:00pm to 23:00pm that might convert to say; UTC 19:00pm to 03:00am (next day), then you'll grab all events that are 19:00-00:00 on utcDateInDays = X and all events 00:00-03:00 on utcDateInDays = X + 1
[21:50:34] <remonvv> so if your UTC converted range is still within the same UTC day you can simply TZ convert, if it spans two UTC days you have to use an $or query to grab both ranges.
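Sketching the earlier 15:00-23:00 example as a query (field names follow the {utcDateInDays, utcTimeOfDayInSeconds} shape above; X stands for the UTC day number):

    // local 15:00-23:00 converts to UTC 19:00 (68400s) on day X through 03:00 (10800s) on day X + 1
    db.events.find({ $or: [
        { utcDateInDays: X,     utcTimeOfDayInSeconds: { $gte: 68400, $lte: 86399 } },
        { utcDateInDays: X + 1, utcTimeOfDayInSeconds: { $gte: 0,     $lte: 10800 } }
    ]})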
[21:51:11] <remonvv> If you need date ranging it's slightly more complicated but still doable.
[21:52:19] <remonvv> caitp : Test it in a spreadsheet or something if you can't wrap your head around it.
[21:52:30] <caitp> I think I get what you're saying
[21:53:45] <remonvv> What do you think is the problem?
[21:54:09] <caitp> Just trying to wrap head around the queries I need to write, so
[21:54:55] <caitp> in a case where I am only specifying a start time, I need to say "show me events that start between <UTC startTime> and UTC midnight"
[21:55:12] <remonvv> well, the only small footnote is that the right side of your range has to be *inclusive* on the highest possible value (23 if you do hours) rather than the 0 (00:00) i mentioned to clarify the approach
[21:55:34] <remonvv> right, both UTC start and UTC midnight in your time unit, hours I think
[21:55:41] <caitp> so like, between 14:45 and 23:59
[22:00:25] <remonvv> Don't think it's a good match here but you can check it out
[22:01:10] <caitp> yeah it didn't seem like it was going to make things simpler :( although this looks like it could turn into a potentially complicated bad query too
[22:05:07] <Chammmmm> Hi Guys, I keep getting this error message: balancer move failed: { ok: 0.0, errmsg: "" } ... I got this after adding a shard.. everything was going smoothly and now it cannot move chunks.. and I get an empty error message.. (running mongo 2.4.2) I tried to restart all the mongos.. and restart the config servers.. but nothing
[22:19:19] <remonvv> (although at some point we converted leaderboards to have timezone metadata because we rarely had cross TZ leaderboards and such ;) )