[00:34:56] <jtomasrl> if I want to use aggregate and unwind a key that doesn't exist, will I get errors?
[00:42:39] <Glacee> Any plan to release 2.2.3 this week?
[01:16:21] <`f`o`o`m`a`n> i find the CLI to be a bit awkward to use. For instance, one common mistake I run into when I'm typing fast is that I'll accidentally hit the return key in the middle of typing a query, and then I need to copy and paste the half-finished query, CTRL-C out of the whole damn shell, reconnect, and go from there
[01:16:42] <`f`o`o`m`a`n> it would be much nicer if there was an easy way to back up and correct mistakes
[01:17:18] <`f`o`o`m`a`n> This is just an example of what i think are the many rough edges of the CLI. I would love to see this thing improved
[01:17:33] <`f`o`o`m`a`n> maybe i am just missing some pro tips
[01:59:40] <bolbat> hello guys - i have a question related to https://jira.mongodb.org/browse/JAVA-648 - how can I check the Java Driver initialisation status when the replica set is not available at application startup time - I don't want to write hacks like the ones described in https://jira.mongodb.org/browse/JAVA-649
[03:36:34] <salentinux> Hi guys, I'm reading about sharding but I can't figure out if a collection with 2 or more unique indexes can be sharded correctly while keeping the uniqueness across the entire collection.
[03:57:46] <dangayle> Just had a fabulous meetup here in Spokane. Might have converted two programmers over to MongoDB :)
[09:52:04] <Guest_1448> if I have a unique index on a field, is it possible to make it so a new .insert() will overwrite previous instead of just erroring out because of duplicate key?
[09:55:16] <Guest_1448> it says it rejects duplicates
[09:55:50] <Guest_1448> is it possible to make it so that it overwrites (I don't care if it deletes then inserts or just updates) if there's a duplicate value for the unique field?
[09:56:50] <NodeX> it would not be a unique index if it allowed writes would it ;)
[11:35:44] <andrepadez> Hi, can somebody help me please? https://gist.github.com/4663588 - the DB is well populated, but when i run Accounts->List, sometimes it shows alright, sometimes I get "Cannot read property 'length' of undefined" for account.locations .... please i am going out of my mind
[11:37:49] <NodeX> it's an error in your app/function
[11:38:11] <NodeX> most probably caused by the lack of a value in the database where you were expecting one
[11:46:45] <andrepadez> NodeX: That's what i'm sure is not happening
[11:47:24] <andrepadez> the database is well filled, and it doesn't change between page refreshes
[11:47:51] <andrepadez> i'm going to try using a recursive function, to guarantee order of execution, inside that for
[12:03:13] <Guest_1448> andrepadez: I think you want Location.find(..).toArray(callback)
[14:23:24] <ron> lasagne: don't worry about it. you're good now. just ask your question and you'll be okay.
[14:23:26] <rideh> lasagne: I'm not sure what ron is referring to as i'm not using a typical irc client but usually you just type in here and wait for response, no msg people or notices
[14:23:45] <ron> I was referring to this: Notice from lasagne: hey guys, can anyone help me? I#m tryin' to import a csv file using this command: mongoimport -d finance -c values --type csv --file hwoern/dev.hwoern.de/finance/comma.csv --headerline
[14:24:23] <kali> wow, a channel notice ! i don't think i have seen one since the last millennium :)
[14:24:26] <rideh> ron, oh hehe, yeah there are certainly times i'd like to use something like that as it can take some time to get attention but its bad form ;)
[14:25:27] <ron> well, no worries, mistakes happen. I admit I was a bit harsh, I didn't think that lasagne could simply be new to irc. my apologies.
[14:25:41] <ron> but other than that, please try to help the poor dude (or dudette)
[14:26:13] <lasagne> thank you guys ;) Yes I'm new to IRC
[14:26:54] <rideh> lasagne: are you running the command after connecting to the mongo server? you want to do it from the command line prior
[14:27:03] <rideh> so from bash (or whatever shell) issue mongoimport
[14:27:39] <lasagne> ok thank you all. this was my mistake!
[14:27:49] <rideh> as you are specifying the db and collection mongoimport will make its own connection. wow i helped someone?
[14:29:44] <rideh> so i also have a question regarding mongoimport but the problem is more to do with the source data. I have a semi-broken csv file that i need to repair: issues with new lines not escaped or encoded as crlf or something. I've tried converting line endings and encoding but it's still finding a lot more records (new lines) than are supposed to be there. When i open it in sublime text it looks bad, but excel is interpreting it properly. I've tried
[14:29:45] <rideh> saving new copies from excel hoping it'd repair it, with no success (no surprise). any tips?
[14:41:36] <salentinux> is it possibile to shard a collection that has 2 or more unique indexes?
[14:47:05] <kali> salentinux: yes, but the unique index key must be a superset of the sharding key
[14:47:14] <nemothekid> Our PRIMARY machine just went into ROLLBACK and our secondary (which only synced 4% of the data) is now PRIMARY.
[14:47:28] <kali> nemothekid: you have an arbiter ?
[14:48:08] <nemothekid> our secondary's network card died or something during the sync
[14:48:09] <kali> nemothekid: is it a prod issue ?
[14:48:12] <JoeyJoeJo> Does this mean it took fsync 1579ms to run? command admin.$cmd command: { fsync: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:773 reslen:51 1579ms
[15:01:59] <nemothekid> drop the local directory or the local database?
[15:03:55] <kali> nemothekid: i would actually stop the server, rm -rf the dir, and start the server with --replset
[15:05:07] <nemothekid> But I should only be deleting the local.* files?
[15:08:15] <CraigH> hi, I'm having an issue with a query using $gte. The type is int and i'm casting my query to match; the query structure is $query = array('timestamp' => array('$gte' => 12345));
[15:28:56] <CraigH> I think this is due to max int size... the timestamp field in the mongo collection comes from javascript and is in milliseconds, eg: 1359013477223; max int size is 2147483647, so i think when i'm trying to query, the int value is being capped or ignored?
[15:43:32] <NodeX> Derick did a write up about this, one sec
[16:03:09] <Derick> Long answer is that it was like this from way before I started. Adding documents to mongo with an object syntax directly is really tricky.
[16:03:22] <Derick> Objects are also a lot heavier on the PHP side
[16:07:38] <Tobsn> sure inserting i understand, but reading out should keep the doc structure as it is in the db
[16:46:28] <sirious> the latter being the correct indexed field
[16:46:51] <sirious> developer copied the wrong query and sent me down a rabbithole
[16:47:05] <sirious> if one of the clauses in the $or is not indexed, it seems none of the clauses will use an index
[16:47:20] <remonvv> Hm, that wouldn't be my expectation but that is possible.
[16:47:41] <remonvv> As in, I haven't encountered that specific situation
[17:21:53] <thurmda> I'm struggling with an aggregation task. I had hoped it could be done with the aggregation framework or map reduce but I'm not so sure...
[17:22:25] <remonvv> thurmda, let's have a look. Pastie the source data and your result format
[17:22:31] <thurmda> I want to chart geo data drilling down into state > county > city
[17:22:49] <thurmda> I want to create something like this : http://bl.ocks.org/4063423
[17:23:07] <thurmda> Here is the data supporting that graph : http://bl.ocks.org/d/4063423/flare.json
[17:23:26] <thurmda> that is used in many d3 examples actually
[17:23:30] <remonvv> Right, and what's your source data?
[17:23:39] <thurmda> I want to do similar with geo data though
[17:24:20] <remonvv> Okay, but you said you had issues with the aggregation part. I'm assuming there is data with geoloc fields that you need to aggregate to something along the lines of that JSON
[17:25:04] <thurmda> I have locale codes broken out into {country : 'US', language : 'en'} for each web request
[17:26:26] <thurmda> I'd like to roll them up like { 'US' : {'en' : 5000, 'es' : 1000, 'fr' : 200}, 'FR' : {'fr': 3000, 'en' : 200}} ...
[17:27:19] <thurmda> So I'd have a step where I grouped by Country and then with in that grouped by language
[17:27:29] <thurmda> I can't find an example like that
[17:34:48] <remonvv> His issue is that he wants a field value in his source data to become a field name in the aggregated data and group on that field.
[17:34:52] <thurmda> I have.. but haven't mastered it yet
[17:36:36] <remonvv> thurmda, I don't think this is possible. What you can do is : {countries:[{name:"US", count: 9203092, languages:[{name:"en", count: 29382}]}]}
[17:37:30] <remonvv> AF is not something I use often so my expertise is limited as well I'm afraid but I don't recall a feature where you can use field values and project them to field names. Checking docs though.
[17:43:41] <thurmda> but I can't get the grouping by language inside there
[17:44:18] <thurmda> I can't find any aggregation example anywhere that has any form of hierarchy
[17:44:58] <JoeyJoeJo> Can mongo report the number of inserts/sec? I have a few different insert jobs running and I'd like to get total rate for all of them combined
[17:45:10] <remonvv> I think the first step has to be the $group on languages
[17:45:19] <remonvv> I'm tempted to give this a go now
[19:04:55] <waheedi> i have a replica set of three servers; the primary and one secondary are getting queries normally but the second secondary gets no queries at all
[19:04:59] <strnadj> waheedi: You can try it :) If i know the answer, i will help you :)
[19:06:48] <strnadj> How do you mean "second secondary got no queries at all"?
[19:39:20] <BadCodSmell> for example, the index {a:1,b:1,c:-1} works fine for find({a:123,b:567,c:89}) but I also have the index {b:1}, and when I did find({a:123,b:567}).sort({c:-1}); it uses {b:1} instead
[19:39:38] <BadCodSmell> I thought a btree could handle this
[19:41:02] <BadCodSmell> I don't understand why this isn't index only
[20:02:25] <owen1> what read preference modes are available on 2.0.1? this page (http://docs.mongodb.org/manual/applications/replication/#replica-set-read-preference-behavior-nearest) says that these modes are only in 2.2. what does that mean for me?
[20:03:02] <owen1> i have 3 hosts in 1 datacenter and 2 in the other. i want to allow clients to connect to the nearest host.
[20:03:36] <owen1> i also want to be able to read from each host
[21:23:46] <nanashiRei> Hi, i want to select everything from a collection that has "timestamp" (Date Obj) within the 24h span of the previous day. can't figure it out... whenever i add a condition to my group query it just ... "crashes" (0 docs result)
[21:24:39] <nanashiRei> So i mean from the current view the span i would want is 28.Jan 00:00 - 23:59
[21:29:51] <nanashiRei> https://gist.github.com/af999fc63775d167798e here i illustrated what i'm doing
[21:30:33] <nanashiRei> if anyone can help, pleeeeaaaaasseeeeeee ^^ my brain starts hurting from this
[21:30:58] <nanashiRei> and i need to get stuff out of the log db soon, got 12,5m docs already D:
[21:58:01] <apwalk> nanashiRei: cond.processed needs a comma after it and cond.timestamp is missing an ending }
[21:58:29] <apwalk> just a helpful guess, since i'm not familiar with that query syntax
[21:59:00] <nanashiRei> naw, the syntax is just roughly converted from coffeescript
[21:59:26] <nanashiRei> it works, as long as i leave out the cond part with the timestamp
[22:00:40] <owen1> how to make each of the secondaries readable on 2.0.1 ?
[22:40:19] <waheedi_> anyone got good experience with replica sets?
[22:40:38] <waheedi_> i have three servers one primary two secondaries
[22:42:18] <waheedi_> my primary and one of the secondaries do reads, writes and updates, but the third secondary doesn't do any reads
[22:43:56] <rickibalboa> I'm working with quite a large result set, around 10 million documents in one collection, and doing some processing on it very often. In the app that does the processing, would it be more efficient or quicker to stream the results using the native driver, or to get them in sections and do the processing?
[22:55:01] <nanashiRei> rickibalboa: that question answers itself
[22:55:16] <waheedi_> what about minerale nanashiRei
[22:55:17] <nanashiRei> but it depends on how you process the data and what you need to process
[23:48:32] <ejcweb> waheedi: Ah, I do actually see data inside /var/lib/mongodb. Perhaps I'm just connecting to the wrong db or something.
[23:56:03] <owen1> is there documentation for older versions of mongo? i would like to know what options are available to me regarding read preference in 2.0.1.