[01:49:22] <dbousamra> Hi all. Can anyone help me with an aggregation query. Here is my query and result: https://gist.github.com/dbousamra/14ca3ff7733fe0952a67
[01:56:23] <dbousamra> I am trying to get rid of those key names. I just want an array of snapshots, without keys
[02:01:37] <Boomtime> can you give an example of the result you want?
[02:18:22] <dbousamra> So yeah. Not sure how to remove keys
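(The gist isn't preserved, so the document shape here is a guess. Assuming the aggregation result holds the snapshots as an object keyed by name, a modern MongoDB (3.4.4+) pipeline can strip the keys with $objectToArray and $map — a sketch only, with hypothetical collection and field names:)

```javascript
// Hypothetical shape: { _id: ..., snapshots: { "key1": {...}, "key2": {...} } }
db.metrics.aggregate([
  { $project: {
      snapshots: {
        $map: {
          input: { $objectToArray: "$snapshots" },  // [{ k: "key1", v: {...} }, ...]
          as: "s",
          in: "$$s.v"                               // keep only the values
        }
      }
  }}
])
```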
[11:13:48] <cnu> Hi, is there some documentation on the range of the score calculated for each document when one does a full text search?
[11:17:38] <rspijker> cnu: isn't score just the weighted sum of frequency?
[11:19:25] <cnu> Is there a limit of minimum score and maximum score? Need to use that score with a number from another program and get a normalized total score.
[11:19:41] <cnu> If I know the range, will be helpful to normalize it.
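(There is no documented upper bound for textScore — it is roughly a weighted term-frequency sum, as rspijker says — so the usual approach is to normalize relative to the batch of results you actually got back. A minimal sketch:)

```javascript
// Min-max normalize a batch of text-search scores into [0, 1].
// MongoDB's textScore has no fixed maximum, so normalization can
// only be done relative to the scores returned by a given query.
function normalizeScores(scores) {
  const min = Math.min(...scores);
  const max = Math.max(...scores);
  if (max === min) return scores.map(() => 1); // all equally relevant
  return scores.map(s => (s - min) / (max - min));
}

console.log(normalizeScores([0.75, 1.5, 3.0])); // lowest -> 0, highest -> 1
```

This makes scores from two different systems comparable, at the cost that a score of 1 only means "best in this result set", not "perfect match".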
[11:36:41] <xdotcommer> kind of like $first in the aggregation framework, but one that works on update. $setOnInsert won't work because it's for an entire document.
[11:37:04] <xdotcommer> i.e. if the field is already set I don't want to update it
[11:42:31] <boo1ean> Hi folks! I'm experiencing issues with a 2dsphere index. I have a large collection (about 50 GB) and each doc has its location specified as a multipolygon. I'm searching by point on this collection and it works blazing fast, BUT for some rare cases it is very, very slow. I've noticed that some of the slow cases are docs with big-area polygons. There are also cases where mongo locks all reads for more than 90 seconds. What is the reason for the slow search in some cases? Is it just the worst case for the search algorithm? How can I work around this issue?
[11:48:55] <rspijker> xdotcommer: make the query part $exists:false ?
[11:49:44] <xdotcommer> rspijker: this is an insert/upsert
[11:50:53] <xdotcommer> and it includes many other fields as well
[11:51:46] <rspijker> xdotcommer: and you only want the one specific field to behave this way?
[11:51:59] <xdotcommer> rspijker: exactly the others get $set or $inc
[11:52:58] <rspijker> so why won’t setOnInsert work then?
[11:53:34] <xdotcommer> rspijker: because $setOnInsert applies to the entire document
[11:56:26] <xdotcommer> or a reasonable work around
[11:56:54] <rspijker> yeah, I can’t see a single-query answer
[12:00:08] <xdotcommer> dual query i guess or aggregation .. not clean :(
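(The dual-query workaround being alluded to can be sketched as follows — each update is atomic on its own, but the pair is not atomic together. Collection and field names are hypothetical, matching the OHLC discussion below:)

```javascript
// Step 1: upsert the rolling fields; this also creates the document if missing.
db.bars.update(
  { _id: minuteId },
  { $inc: { volume: qty },
    $max: { high: price },
    $min: { low: price },
    $set: { close: price } },
  { upsert: true }
);
// Step 2: set "open" only if nothing has set it yet. The $exists guard makes
// this write-once per document, though with concurrent writers "first to reach
// step 2" is not guaranteed to be the earliest trade.
db.bars.update(
  { _id: minuteId, open: { $exists: false } },
  { $set: { open: price } }
);
```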
[12:00:36] <rspijker> xdotcommer: vote for this, I suppose… https://jira.mongodb.org/browse/SERVER-6566
[12:00:39] <xdotcommer> since this is for financial data... I'd also prefer it to be atomic
[12:01:03] <xdotcommer> rspijker: nice thanks... somebody worded it better :)
[12:01:30] <xdotcommer> lol @ first comment "This is a feature that is sorely lacking in Mongo. We have to do > 25 queries in some cases to work around the problem."
[12:03:25] <rspijker> yeah, if you happen to have a use-case where it’s needed then I can imagine it’s quite a big gap...
[12:05:44] <xdotcommer> rspijker: this is for financial data, the "open" value... so when you are inserting trades and splitting them up by second/minute/hour... you have the typical open, high, low, close... the last 3 work great with $max, $min, $set respectively, but the first one is missing
[12:06:15] <xdotcommer> ie.. open... would keep getting overwritten
[12:07:01] <rspijker> well… can’t you guarantee that when you create the document you fill in the open value?
[12:07:12] <rspijker> or is this embedded somehow with many opens?
[12:11:58] <xdotcommer> rspijker: yea the document contains data from hour to second resolution
[12:12:25] <gnaddel> Hi there, question about the bindIp option in /etc/mongodb.conf: Can I use wildcards in that? I would like two things to be accepted: 1: localhost and 2: my university's IP range. So would "127.0.0.1,my.uni.range.*" work?
[12:13:11] <xdotcommer> so it will have an "open" 60 times in each minute, multiplied by 60 minutes, so 3600 opens :)
[12:16:07] <rspijker> gnaddel: bindIP is what you bind on… not where you allow connections FROM...
[12:16:50] <rspijker> unless you dynamically get an ip address at uni every time and you always want to bind to that regardless of what it is
[12:16:58] <rspijker> in which case, you’re probably out of luck…
[12:18:48] <gnaddel> rspijker: I was under the impression that it would only accept connections from hosts it is bound to: "Specifies the IP address that mongos or mongod binds to in order to listen for connections from applications. "
[12:21:25] <rspijker> gnaddel: yeah, I can see how it might be misleading. You define there which interfaces to listen on, roughly speaking. If you want to limit access from external sources, you need to set that up in your firewall
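(In concrete terms, the split of responsibilities rspijker describes looks roughly like this — a sketch assuming a 2.4-era ini-style config, iptables, and 192.168.0.0/16 standing in for the university range:)

```conf
# /etc/mongodb.conf -- bind_ip lists the interfaces mongod listens ON:
bind_ip = 127.0.0.1,10.0.0.5    # loopback + the server's own LAN address

# Restricting who may connect FROM outside is the firewall's job, e.g.:
#   iptables -A INPUT -p tcp --dport 27017 -s 127.0.0.1 -j ACCEPT
#   iptables -A INPUT -p tcp --dport 27017 -s 192.168.0.0/16 -j ACCEPT
#   iptables -A INPUT -p tcp --dport 27017 -j DROP
```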
[12:48:33] <ernetas_> So regarding yesterday's issue of high CPU usage after MongoDB upgrade, we filed a bug report: https://jira.mongodb.org/browse/PHP-1164 Is anyone else able to reproduce it?
[13:54:05] <_boot> right, but will the shards the mongos talks to end up with a collection using the correct strategy or will they just use the default?
[13:54:37] <kali> _boot: you're referring to the usePowerOf2 flag ?
[14:22:49] <abusado> there you go... thank you very very much
[14:36:27] <Sjimi> Is it possible when using aggregate to project a String to an Int64 by any chance? Or is there another solution that doesn't involve fiddeling with the data?
[14:39:48] <Sjimi> The $sum group accumulator does not work with Strings; unfortunately it is practically impossible to convert the data itself. Thus projecting to Int would be great.
[14:40:31] <tscanausa> I do not think aggregate will allow it, but map-reduce probably will
[14:42:25] <Sjimi> Okay I'll have a deeper look at it tscanausa, thanks.
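(At the time of this conversation the aggregation framework had no cast operator, hence the map-reduce suggestion. In MongoDB 4.0+ this became possible directly with $toInt / $convert — a sketch with hypothetical collection and field names:)

```javascript
// MongoDB 4.0+ only: cast the string field inside the pipeline.
db.sales.aggregate([
  { $group: {
      _id: "$region",
      total: { $sum: { $toInt: "$amount" } }   // "42" -> 42 before summing
  }}
])
```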
[15:28:09] <rickitan2> Hey guys can any one give me a hand with this:
[17:59:01] <pgentoo-> i am doing around 3000 queries (by _id only) on a large mongo collection (~500M documents), and am seeing upwards of 85% lock percentage on that database. I thought reads in this fashion wouldn't lock things up. Is this normal?
[18:00:39] <wc-> hi all, i am hitting a lot of errors i havent seen before as i attempt to do a query with 2 different 2d geo query operators
[18:01:02] <wc-> i have two polygons, (one happens to be a box) and i want to return objects that are only in their intersection
[18:01:16] <wc-> so i tried an $and with two different geoWithin queries inside it
[18:01:25] <wc-> and I'm getting an error: Overflow hashed AND stage buffered data usage of 33563702 bytes exceeds internal limit of 33554432 bytes
[18:08:19] <wc-> does anyone know how i could go about returning only documents that satisfy both these polygon requirements?
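(One way around the hashed-AND buffer limit is to avoid giving the server two geo predicates at all: compute the intersection polygon client-side with a geometry library, e.g. Turf.js, then issue a single $geoWithin. A sketch, with "intersection" assumed to be the precomputed GeoJSON polygon and "places"/"loc" hypothetical names:)

```javascript
// intersection = turf.intersect(polygonA, boxAsPolygon) or similar,
// computed in the client before querying:
db.places.find({
  loc: { $geoWithin: { $geometry: intersection } }
})
```

The other common fallback is to run one $geoWithin server-side and filter the results against the second polygon in the client.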
[18:12:04] <pgentoo-> Ah ha, i found an update that i didn't think was happening (relying on mongo query to limit whether an update happened, instead of doing in my client code to prevent the call from happening). :)
[18:31:05] <abusado> $display=$list->find($_POST["search"]); <--- is that correct?
[20:39:37] <culthero> I read that; "desired" seemed ambiguous, and I wondered, for other desired major features, how many really make it into development releases on time
[20:40:39] <culthero> It just happens to specifically impact some decisions on.. a potential product
[20:42:41] <cheeser> generally speaking, until a feature is delivered it's risky to base any decision on it.
[20:45:27] <culthero> Of course. My general approach for what I am using isn't going to change, however avoiding having to think about deploying something we might start offering as a service as a mongo sharded cluster would be nice. In my head, the range limitation means that fulltext indexes on timestamped data could be fast if done in small enough chunks on a very limited computer
[20:47:26] <culthero> You could let the performance characteristics of the server decide how your application should be configured, and having 1 server is an easier sell than having x servers with two different kinds of (mongo + elasticsearch / solr / whatever)
[21:20:03] <q85> I have a test cluster with a primary, a secondary, an arbiter, one config server, and one mongos. All components are 2.4.9. The secondary is hidden with priority 0. I have a script connecting to mongos with a read preference of secondaryPreferred. I would have assumed that since the secondary is hidden, mongos would route the queries to the primary. Instead, it is routing queries to the hidden secondary. Is this behavior expected?
[21:38:11] <cbuckley> hi there, I'm running into some issues with importing a dump on mongo 2.4.9
[21:38:13] <cbuckley> Btree::insert: key too large to index
[21:38:26] <cbuckley> Everything I've found online has solutions for 2.6, but I'm struggling to find anything for 2.4
[21:44:58] <joannac> q85: yes, although I can't see how it would happen. How did you tell the queries were going to the hidden secondary? Could you pastebin your rs.conf()?
[21:45:50] <q85> I have full profiling turned on. Yes, one moment.
[21:46:08] <joannac> abusado: not sure what you want. db.foo.save({a:[1, 2, ['a']]}) ?
[21:47:43] <joannac> cbuckley: actually, i should correct a detail. in 2.6 the insert or update will fail if the key is too long. the server will *not* crash. not sure where I got that from
[21:48:13] <joannac> cbuckley: the behaviour in 2.4 is that the insert/update succeeds but the index(es) are just incomplete
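(Since 2.4 silently leaves the over-long entries out of the index, one practical step before moving such a dump to 2.6 is to locate the offending documents; index keys must stay under roughly 1024 bytes. A sketch, assuming the failing index is on a hypothetical "url" field of a "pages" collection:)

```javascript
// Flag documents whose indexed field is near/over the ~1024-byte key limit.
db.pages.find({ $where: "this.url && this.url.length > 1000" })
        .forEach(function (doc) { printjson(doc._id); });
```

On 2.6 itself, starting mongod with --setParameter failIndexKeyTooLong=false restores the permissive 2.4 behavior, which can buy time during a migration.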
[22:02:39] <q85> script with connection string: http://pastebin.com/ppLikkCu
[22:09:04] <q85> joannac: and here is part of the log for the secondary showing config version 5 being applied half an hour before I ran the script: http://pastebin.com/28agwJxK
[22:12:08] <joannac> Tue Aug 12 15:54:03.295 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 5 5
[22:12:24] <joannac> the secondary definitely has the same config?
[22:14:41] <q85> both are on version 5 and both list the secondary as hidden with priority 0.
[22:14:50] <joannac> i've been testing for about 15 minutes, and I can't reproduce it
[22:18:04] <q85> all components in the test environment and prod are on 2.4.9
[22:19:39] <q85> The version 5 warning is because I stored rs.conf() to a variable to make the changes to the member, but didn't remove the version.
[22:25:43] <q85> joannac: in testing now, I'm unable to hit the hidden secondary. Does mongos issue an isMaster command to the primary to receive the list of members for each query it directs?
[22:27:30] <joannac> q85: only at the start of a connection
[22:54:43] <q85> joannac: I was able to reproduce it again. It only happens if I do not remove the key "version" when I apply the new config AND the value of the version key is equal to the current version. The new config is still "applied"; however, I'm also still able to hit the hidden secondary.
[23:10:33] <q85> joannac: I've now been able to reproduce the situation as long as the new config includes the version key (any value). Even if the value is correct. The config is applied, it corrects the value of the version key, but the hidden member still receives queries.
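(For reference, the reconfig pattern that avoids the stale-version pitfall q85 describes is to mutate a fresh rs.conf() and bump its version explicitly — a sketch, assuming member 1 is the secondary being hidden:)

```javascript
// Reconfigure from a fresh copy of the current config rather than
// re-submitting an object that still carries an old version value.
var cfg = rs.conf();
cfg.members[1].hidden = true;    // assuming member 1 is the secondary
cfg.members[1].priority = 0;
cfg.version = cfg.version + 1;   // must be strictly higher than the current version
rs.reconfig(cfg);
```

Note also joannac's earlier point that mongos checks member state at the start of a connection: clients holding connections opened before the reconfig may keep routing to the now-hidden member until they reconnect.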