#mongodb logs for Tuesday the 29th of January, 2013

[00:11:47] <dangayle> Can someone help me with this?
[00:11:48] <dangayle> http://stackoverflow.com/questions/14573471/mongodb-aggregation-framework-sort-by-length-of-array
[00:34:56] <jtomasrl> if I want to use aggregate and unwind a key that doesn't exist, will I get errors?
[00:42:39] <Glacee> Any plan to release 2.2.3 this week?
[01:16:21] <`f`o`o`m`a`n> i find the CLI to be a bit awkward to use. For instance, one common mistake I run into when I'm typing fast: I'll accidentally hit the return key in the middle of typing a query, and then I need to copy and paste the half-finished query, then CTRL-C out of the whole damn shell, then reconnect, and go from there
[01:16:42] <`f`o`o`m`a`n> it would be much nicer if there was an easy way to back up and correct mistakes
[01:17:18] <`f`o`o`m`a`n> This is just an example of what i think are the many rough edges of the CLI. I would love to see this thing improved
[01:17:33] <`f`o`o`m`a`n> maybe i am just missing some pro tips
[01:59:40] <bolbat> hello guys - i have a question related to https://jira.mongodb.org/browse/JAVA-648 - how can i check the Java Driver initialisation status when the replica set is not available at application startup time - i don't want to write hacks like those described in https://jira.mongodb.org/browse/JAVA-649
[03:36:34] <salentinux> Hi guys, I'm reading about sharding but I can't figure out if a collection with 2 or more unique indexes can be sharded correctly, keeping the uniqueness across the entire collection.
[03:57:46] <dangayle> Just had a fabulous meetup here in Spokane. Might have converted two programmers over to MongoDB :)
[06:10:46] <zaki_> hello mongoids
[07:49:27] <svm_invictvs> Hello
[08:17:57] <foofoobar> Hi. I have a collection "Folders" and a collection "Cards". One folder can have many cards
[08:18:15] <foofoobar> Now I want to show all folders and the number of cards in it.
[08:18:36] <foofoobar> Should I do a count for each folder or should I put a field "CardCount" in each folder object?
[08:24:38] <IAD> it depends on the current architecture. what's the price of supporting the actual value of "CardCount"?
[08:26:29] <foofoobar> IAD, the price? You mean the effort I have to do by inc/dec while adding/removing?
[08:28:04] <IAD> foofoobar: yes, how difficult it is for your ODM/Mapper ...
[08:28:17] <foofoobar> IAD, its not. So I will do this, thanks
[08:37:09] <[AD]Turbo> hello
[09:52:04] <Guest_1448> if I have a unique index on a field, is it possible to make it so a new .insert() will overwrite previous instead of just erroring out because of duplicate key?
[09:54:08] <IAD> Guest_1448: http://docs.mongodb.org/manual/core/indexes/#index-type-unique
[09:54:58] <Guest_1448> yes, I read that
[09:55:16] <Guest_1448> it says it rejects duplicates
[09:55:50] <Guest_1448> is it possible to make it so that it overwrites (I don't care if it deletes then inserts or just updates) if there's a duplicate value for the unique field?
[09:56:50] <NodeX> it wouldn't be a unique index if it allowed duplicate writes, would it ;)
[09:57:15] <NodeX> what you're after is an upsert
[09:57:26] <Guest_1448> yes, I guess
[09:57:50] <Guest_1448> but it would still be unique in that there won't be two documents in the collection with the same value on that field
[09:58:13] <Guest_1448> well, the main reason I'm not using upsert/save is because I'm doing batch inserts()
[09:58:31] <Guest_1448> mongo doesn't support that with update/save
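(A per-document upsert, roughly what NodeX is describing, would look like this in the shell; the collection and field names here are made up. As Guest_1448 notes, insert() can take an array but update() in this era can't, so this is one round trip per document.)

    // upsert: update the doc matching the unique field if it exists,
    // otherwise insert it -- no duplicate-key error either way
    db.things.update(
        { name: "foo" },              // match on the unique-indexed field
        { name: "foo", value: 42 },   // full replacement document
        { upsert: true }
    );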
[10:10:05] <Guest_1448> on a collection of ~300 documents, I'm basically overwriting the whole collection each time I do an update
[10:10:27] <Guest_1448> would it be better to just drop() the collection or call .remove() on it to remove all documents?
[10:11:36] <IAD> drop is faster
[10:12:14] <IAD> but even better is db.dropDatabase() =)
[10:12:55] <Guest_1448> ehm
[10:13:15] <Guest_1448> well, actually now that I think about it, I could just create a new collection each time and leave the drops for later..
[10:15:04] <IAD> Guest_1448: maybe this will help http://docs.mongodb.org/manual/applications/server-side-javascript/
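(For the drop-vs-remove question above, the two options look like this; "mystuff" is a placeholder name. The practical difference: drop() also discards the collection's indexes, remove() keeps them.)

    db.mystuff.drop();      // fast: deletes the collection and its indexes
    db.mystuff.remove({});  // slower: deletes every document, keeps the indexes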
[10:50:30] <NodeX> Well symlinking didn't work :/
[10:59:31] <NodeX> scratch that, got it working :D
[11:00:59] <ron> \o.
[11:01:00] <ron> er
[11:01:03] <ron> \o/
[11:35:44] <andrepadez> Hi, can somebody help me please? https://gist.github.com/4663588 - the DB is well populated, but when i run Accounts->List, sometimes it shows alright, sometimes I get "Cannot read property 'length' of undefined" for account.locations .... please i am going out of my mind
[11:36:42] <NodeX> that's an app-side error
[11:37:40] <andrepadez> can you explain?
[11:37:49] <NodeX> it's an error in your app/function
[11:38:11] <NodeX> most probably caused by the lack of a value in the database when you were expecting one
[11:46:45] <andrepadez> NodeX: That's what i'm sure is not happening
[11:47:24] <andrepadez> the database is well filled, and it doesn't change between page refreshes
[11:47:51] <andrepadez> i'm going to try using a recursive function, to guarantee order of execution, inside that for
[11:52:02] <bolbat> hello guys - i have a question related to https://jira.mongodb.org/browse/JAVA-648 - how can i check the Java Driver initialisation status when the replica set is not available at application startup time - i don't want to write hacks like those described in https://jira.mongodb.org/browse/JAVA-649
[11:57:39] <salentinux> Hi guys, I'm reading about sharding but I can't figure out if a collection with 2 or more unique indexes can be sharded correctly, keeping the uniqueness across the entire collection.
[12:03:13] <Guest_1448> andrepadez: I think you want Location.find(..).toArray(callback)
[12:17:08] <andrepadez> Guest_1448: thanks
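(A sketch of Guest_1448's suggestion with the node native driver; the callback-and-guard shape is the point, and the names are only loosely based on the gist, purely illustrative.)

    // wait for the full result set instead of assuming the cursor is done,
    // and guard against documents that have no "locations" array at all
    accounts.find({}).toArray(function (err, docs) {
        if (err) throw err;
        docs.forEach(function (account) {
            var count = (account.locations || []).length; // avoids "length of undefined"
            console.log(account.name, count);
        });
    });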
[12:18:29] <ron> Guest_1448: don't be a guest. be a real person.
[12:36:47] <mantovani> hi
[12:37:07] <mantovani> where can I find out how aggregation works in mongodb, I mean internally
[12:37:11] <mantovani> ?
[12:46:49] <NodeX> not sure what that means
[12:47:51] <mantovani> what exactly ?
[12:51:26] <NodeX> "internally"
[12:53:43] <mantovani> How does MongoDB do the aggregate operation internally? Like, after you use an aggregate function across 4 shards, how does it "join" the results?
[12:53:55] <NodeX> the same as every other operation
[12:54:31] <kali> the code :)
[12:54:42] <kali> the ultimate documentation
[12:54:52] <ron> kali++
[12:56:11] <chrisq> omg, the code is not the docs; if it is, your project is shit
[12:56:46] <NodeX> internal workings will always be in the docs as they're not really of much use to other people
[12:56:51] <NodeX> in the code *
[12:57:31] <kali> chrisq: there is doc. what mantovani asks is internal implementation stuff.
[12:57:41] <mantovani> kali: yes.
[12:57:56] <NodeX> it merges the same as every other operation that's sharded
[12:58:08] <NodeX> there really is no other way to merge sharded results
[12:58:35] <mantovani> what's the name of the algorithm?
[12:59:18] <NodeX> go scan the source code and find out ;)
[14:18:10] <lasagne> I get this error message: Tue Jan 29 15:16:40 SyntaxError: missing ; before statement (shell):1
[14:18:37] <ron> wtf is wrong with you?
[14:19:28] <lasagne> i thought this is the mongodb channel?
[14:20:17] <skot> lasagne, what ron could state in a nicer way is that you are running a program from the mongo javascript shell
[14:20:18] <ron> it is, but sending a channel notice is considered a bit rude.
[14:20:33] <skot> oh, missed that
[14:20:34] <ron> skot: no, that's not what I meant :)
[14:21:03] <lasagne> Sorry guys I'm new here
[14:21:13] <lasagne> What would be the appropriate way?
[14:23:08] <NodeX> best to just ask the problem
[14:23:24] <ron> lasagne: don't worry about it. you're good now. just ask your question and you'll be okay.
[14:23:26] <rideh> lasagne: I'm not sure what ron is referring to, as i'm not using a typical irc client, but usually you just type in here and wait for a response; no msg'ing people or notices
[14:23:45] <ron> I was referring to this: Notice from lasagne: hey guys, can anyone help me? I'm tryin' to import a csv file using this command: mongoimport -d finance -c values --type csv --file hwoern/dev.hwoern.de/finance/comma.csv --headerline
[14:24:23] <kali> wow, a channel notice! i don't think i have seen one since the last millennium :)
[14:24:26] <rideh> ron, oh hehe, yeah there are certainly times i'd like to use something like that, as it can take some time to get attention, but it's bad form ;)
[14:25:27] <ron> well, no worries, mistakes happen. I admit I was a bit harsh, I didn't think that lasagne could simply be new to irc. my apologies.
[14:25:41] <ron> but other than that, please try to help the poor dude (or dudette)
[14:26:13] <lasagne> thank you guys ;) Yes I'm new to IRC
[14:26:54] <rideh> lasagne: are you running the command after connecting to the mongo server? you want to do it from the command line prior
[14:27:03] <rideh> so from bash (or whatever shell) issue mongoimport
[14:27:39] <lasagne> ok thank you all. this was my mistake!
[14:27:49] <rideh> as you are specifying the db and collection mongoimport will make its own connection. wow i helped someone?
[14:27:54] <rideh> +1 for this noob
[14:29:44] <rideh> so i also have a question regarding mongoimport but the problem is more to do with the source data. I have a semi-broken csv file that i need to repair. issues with new lines not escaped or encoded as crlf or something. I've tried converting line endings and encoding but it's still finding a lot more records (new lines) than are supposed to be there. When i open it in sublime text it looks bad but excel is interpreting it properly. I've tried
[14:29:45] <rideh> saving new copies from excel hoping it'd repair it, with no success (no surprise). any tips?
[14:41:36] <salentinux> is it possible to shard a collection that has 2 or more unique indexes?
[14:47:05] <kali> salentinux: yes, but the unique index key must be a superset of the sharding key
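(In shell terms, that means uniqueness can only be enforced cluster-wide through the shard key; a sketch with hypothetical names:)

    // shard on the field that must stay unique, passing unique: true
    sh.shardCollection("mydb.users", { email: 1 }, true);
    // any other unique index must include the shard key as a prefix --
    // a second, independent unique field can't be enforced across shards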
[14:47:14] <nemothekid> Our PRIMARY machine just went into ROLLBACK and our secondary (which only synced 4% of the data) is now PRIMARY.
[14:47:28] <kali> nemothekid: you have an arbiter ?
[14:47:32] <nemothekid> yes
[14:47:41] <kali> oh! you were not synced
[14:47:49] <nemothekid> no we weren't
[14:48:08] <nemothekid> our secondary's network card blew up or died or something during the sync
[14:48:09] <kali> nemothekid: is it a prod issue ?
[14:48:12] <JoeyJoeJo> Does this mean it took fsync 1579ms to run? command admin.$cmd command: { fsync: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:773 reslen:51 1579ms
[14:48:16] <nemothekid> yes
[14:48:55] <kali> nemothekid: i would stop the primary and the secondary and restart the primary without the replset parameter to come back online
[14:49:11] <JoeyJoeJo> nemothekid: That seems to be a long time. Is there a way I can make it faster?
[14:49:30] <nemothekid> JoeyJoeJo: I wasn't answering your question sorry
[14:49:43] <JoeyJoeJo> No problem, that was my mistake
[14:52:34] <nemothekid> The rollback machine hangs on shutdown.
[14:52:55] <nemothekid> Is it really rolling back or could this just be a status error?
[14:53:13] <kali> nemothekid: i have no idea :/
[14:55:02] <nemothekid> Think it's safe to kill the process and pray to the journal?
[14:56:08] <nemothekid> What would I even do after I bring it online without the replset parameter?
[14:57:08] <kali> nemothekid: well, you'd be online. i think from there i would re-start the replication from scratch
[14:58:18] <nemothekid> so then restart the database with the replSet flag?
[14:58:50] <kali> nemothekid: drop the "local" directory, restart with the replSet flag, and add the arbiter and secondary again
[14:58:56] <kali> this one is documented somewhere
[15:00:25] <nemothekid> okay I killed the processes and removed the flag. Doesn't look like we lost any data. Thanks for your help
[15:00:55] <kali> nemothekid: good.
[15:01:59] <nemothekid> drop the local directory or the local database?
[15:03:55] <kali> nemothekid: i would actually stop the server, rm -rf the dir, and start the server with --replset
[15:05:07] <nemothekid> But I should only be deleting the local.* files?
[15:08:15] <CraigH> hi, I'm having an issue with a query using $gte. The type is int and I'm casting my query to match; the query structure is $query = array('timestamp' => array('$gte' => 12345));
[15:08:27] <CraigH> should this wokr?
[15:08:31] <CraigH> work...
[15:08:34] <kali> namidark: http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/#reconfigure-by-breaking-the-mirror
[15:08:41] <kali> oops :)
[15:08:44] <kali> nemothekid: http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/#reconfigure-by-breaking-the-mirror
[15:08:50] <kali> namidark: please disregard
[15:09:12] <kali> nemothekid: you're right, they are files, not a directory, my mistake
[15:09:42] <NodeX> CraigH : make sure your data is cast to an int also
[15:09:54] <CraigH> thanks, yes it is
[15:10:09] <NodeX> $query = array('timestamp' => array('$gte' => (int)12345));
[15:10:44] <CraigH> NodeX: I have tried it, still no luck. It ignores the $gte
[15:11:01] <NodeX> can you pastebin db.collection.findOne()
[15:11:28] <NodeX> and your full php query not just that line
[15:13:48] <CraigH> that is the entire query
[15:14:00] <NodeX> the rest of the code
[15:15:24] <CraigH> not that easy really as there are a few layers of abstraction in our app
[15:16:46] <CraigH> boils down to roughly: $docs = $this->db->{$this->collection}->find($query, $fields)->sort($sort)
[15:17:38] <CraigH> tried removing the sort and fields array, still no luck
[15:19:26] <NodeX> without seeing a findOne() and the query that's sent to the server it's very hard to help
[15:26:36] <remonvv> \o
[15:27:06] <NodeX> o/
[15:28:56] <CraigH> I think this is due to max int size... the timestamp field in the mongo collection is from javascript and is in milliseconds, eg: 1359013477223; max int size is 2147483647, so I think when I'm trying to query, the int value is being capped or ignored?
[15:43:32] <NodeX> Derick did a write up about this, one sec
[15:43:50] <NodeX> http://derickrethans.nl/64bit-ints-in-mongodb.html
[15:52:16] <CraigH> Very nice... seems to fix it! nice one
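(For anyone skimming later: millisecond timestamps overflow a 32-bit int, so the value has to travel as a 64-bit integer end to end. In the shell that looks like the line below; "events" and "timestamp" are placeholder names, and on the PHP side Derick's post above covers the driver settings for native 64-bit ints.)

    // 1359013477223 > 2147483647 (32-bit int max), so force a 64-bit value
    db.events.find({ timestamp: { $gte: NumberLong("1359013477223") } });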
[15:58:19] <Tobsn> Derick, you know why the php drivers always return arrays even if it's an object in the doc?
[15:58:50] <Tobsn> data:[{test:123}] would turn into data[0]['test']
[15:58:51] <Derick> Short answer: no
[15:58:58] <Tobsn> when it should be [0]->test
[16:01:01] <Tobsn> hmm... weird thing is, i can't even find anything about it
[16:01:05] <Tobsn> like nobody ever complained about it
[16:01:14] <Tobsn> wrote kristina on twitter, we'll see
[16:01:17] <Tobsn> it's super odd...
[16:03:09] <Derick> Long answer is that it was like this from way before I started. Adding documents to mongo with an object syntax directly is really tricky.
[16:03:22] <Derick> Objects are also a lot heavier on the PHP side
[16:07:38] <Tobsn> sure inserting i understand, but reading out should keep the doc structure as it is in the db
[16:08:10] <Tobsn> ah found it
[16:08:11] <Tobsn> https://jira.mongodb.org/browse/PHP-16
[16:08:15] <Tobsn> 3 years old...
[16:11:10] <Tobsn> ah was implemented, then reverted again
[16:14:46] <Derick> Tobsn: stdclass is pretty useless, but PHP has lots of array functions. So I think it makes sense
[16:15:12] <Tobsn> but you can save it as object
[16:15:27] <Tobsn> so every time you want to save it back as an object or add one object you insert it as a new stdClass
[16:15:33] <Tobsn> but you don't get stdClass from mongo
[16:16:23] <sirious> the mongo docs say that when doing an $or search, each clause is done in parallel and can use a different index
[16:16:44] <sirious> but i'm noticing that both of them are returning using basiccursor and aren't using btree
[16:16:55] <sirious> so it seems it's ignoring the index when using $or
[16:16:58] <sirious> anyone else seeing this?
[16:21:33] <NodeX> try hinting
[16:27:22] <sirious> NodeX: the documentation doesn't talk about hinting
[16:27:27] <sirious> i'm just going off of the documentation
[16:31:32] <remonvv> sirious, does it hit the index if you execute the clauses separately?
[16:31:45] <remonvv> $or clauses definitely go through the query optimizer
[16:36:19] <sirious> yea, if i do a find on each clause separately, they hit the correct index
[16:36:35] <remonvv> and the $or is top level?
[16:36:58] <sirious> top level as in not searching within a subdocument?
[16:37:32] <remonvv> top level as in the only element in the query, eg find({$or:[..,..]}) and not find({blah:.., $or:[..,..]})
[16:37:38] <sirious> ahh, yes
[16:37:38] <remonvv> top level isn't the appropriate qualification
[16:37:46] <remonvv> but my english was failing me ;)
[16:37:49] <sirious> :)
[16:37:57] <sirious> yea, it's the only element in the query
[16:38:04] <remonvv> ok, can you pastie the query?
[16:38:34] <sirious> db.samples.find({$or: [{'detection.results.result': 'stack_strings'}, {'analysis.detection.results': 'stack_strings'}]})
[16:46:02] <sirious> remonvv: i just figured it out
[16:46:05] <sirious> sonofa
[16:46:22] <sirious> analysis.detection.results != analysis.results.result
[16:46:28] <sirious> the latter being the correct indexed field
[16:46:51] <sirious> developer copied the wrong query and sent me down a rabbithole
[16:47:05] <sirious> if one of the clauses in the $or is not indexed, all clauses won't use an index, it seems
[16:47:20] <remonvv> Hm, that wouldn't be my expectation but that is possible.
[16:47:41] <remonvv> As in, I haven't encountered that specific situation
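(A quick way to check what sirious was checking, using the corrected field name from above; explain() on an $or lists each clause's cursor separately.)

    db.samples.ensureIndex({ "detection.results.result": 1 });
    db.samples.ensureIndex({ "analysis.results.result": 1 });
    db.samples.find({ $or: [
        { "detection.results.result": "stack_strings" },
        { "analysis.results.result": "stack_strings" }
    ] }).explain();
    // each clause should report a BtreeCursor; a BasicCursor means that
    // clause has no usable index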
[17:21:53] <thurmda> I'm struggling with an aggregation task. I had hoped it could be done with the aggregation framework or map reduce but I'm not so sure...
[17:22:25] <remonvv> thurmda, let's have a look. Pastie the source data and your result format
[17:22:31] <thurmda> I want to chart geo data drilling down into state > county > city
[17:22:49] <thurmda> I want to create something like this : http://bl.ocks.org/4063423
[17:23:07] <thurmda> Here is the data supporting that graph : http://bl.ocks.org/d/4063423/flare.json
[17:23:26] <thurmda> that is used in many d3 examples actually
[17:23:30] <remonvv> Right, and what's your source data?
[17:23:39] <thurmda> I want to do similar with geo data though
[17:23:41] <remonvv> schema*
[17:24:20] <remonvv> Okay, but you said you had issues with the aggregation part. I'm assuming there is data with geoloc fields that you need to aggregate to something along the lines of that JSON
[17:25:04] <thurmda> I have locale codes broken out into {country : 'US', language : 'en'} for each web request
[17:26:26] <thurmda> I'd like to roll them up like { 'US' : {'en' : 5000, 'es' : 1000, 'fr' : 200}, 'FR' : {'fr': 3000, 'en' : 200}} ...
[17:27:19] <thurmda> So I'd have a step where I grouped by Country and then within that grouped by language
[17:27:29] <thurmda> I can't find an example like that
[17:30:37] <remonvv> Hm
[17:33:47] <IAD> thurmda: look at $unwind
[17:34:10] <remonvv> unwind is an array operation
[17:34:48] <remonvv> His issue is that he wants a field value in his source data to become a field name in the aggregated data and group on that field.
[17:34:52] <thurmda> I have.. but haven't mastered it yet
[17:36:36] <remonvv> thurmda, I don't think this is possible. What you can do is : {countries:[{name:"US", count: 9203092, languages:[{name:"en", count: 29382}]}]}
[17:37:30] <remonvv> AF is not something I use often so my expertise is limited as well I'm afraid but I don't recall a feature where you can use field values and project them to field names. Checking docs though.
[17:38:15] <thurmda> I could work with that...
[17:38:28] <thurmda> https://gist.github.com/4666040
[17:38:50] <thurmda> I'd do something like that by looping over all docs but...
[17:39:03] <remonvv> charming ;)
[17:39:45] <thurmda> I'm still not quite sure how to approach it to produce the output you suggested
[17:40:06] <thurmda> can I achieve that with an aggregation pipeline?
[17:41:12] <remonvv> I suspect so.
[17:41:50] <remonvv> Sorry, I'm not familiar enough to really help you along without starting up MongoDB and test a few things and I'm about to leave.
[17:42:10] <thurmda> np
[17:42:46] <thurmda> I appreciate the help so far
[17:43:15] <thurmda> I can group by country
[17:43:37] <remonvv> Yeah that's easy: {$group:{_id:"$country", count:{$sum:1}}}
[17:43:41] <thurmda> but I can't get the grouping by language inside there
[17:44:18] <thurmda> I can't find any aggregation example anywhere that has any form of hierarchy
[17:44:58] <JoeyJoeJo> Can mongo report the number of inserts/sec? I have a few different insert jobs running and I'd like to get total rate for all of them combined
[17:45:10] <remonvv> I think the first step has to be the $group on languages
[17:45:19] <remonvv> I'm tempted to give this a go now
[17:45:24] <thurmda> haha
[17:45:39] <remonvv> One sec, I'll make some test data and see ;)
[17:45:42] <remonvv> Challenge accepted.
[17:45:47] <thurmda> There are so many uses for this
[17:45:48] <remonvv> AF is my blind spot in MongoDB.
[17:45:55] <remonvv> Ya, hence being interested ;)
[17:46:36] <thurmda> Tags within article sections
[17:46:58] <thurmda> products within ecommerce catalogs
[17:48:58] <svm_invictvs> Does Mongo have the concept of a temporary database?
[17:49:07] <svm_invictvs> er, temporary collection?
[17:50:12] <mr_smith> svm_invictvs: there are time-to-live collections which expire individual documents.
[17:50:49] <svm_invictvs> Is that the "options" parameter passed to the collection when you create it?
[17:51:41] <mr_smith> svm_invictvs: no, i think it's an option when creating an index.
[17:52:27] <svm_invictvs> Oh, no that just controls caps
[17:52:45] <remonvv> thurmda: Ya, hence being interested ;) Let me have a quick look. One second.
[17:53:42] <svm_invictvs> Ah
[17:53:42] <svm_invictvs> http://docs.mongodb.org/manual/tutorial/expire-data/
[17:55:03] <svm_invictvs> hmmm
[17:55:17] <svm_invictvs> Looks like removal is neither timely nor guaranteed
[17:56:04] <mr_smith> svm_invictvs: well, it's hard to go WEBSCALE! if you start doing boring stuff like guarantees.
[17:56:30] <svm_invictvs> mr_smith: Can't tell if sarcastic...
[17:57:12] <svm_invictvs> mr_smith: It's like a garbage collector, though.
[17:57:58] <mr_smith> svm_invictvs: yeah. i mean it's probably not any worse than any other cache expiration.
[17:58:37] <svm_invictvs> Oh of course
[17:58:54] <svm_invictvs> In reality how much longer will objects linger?
[17:59:18] <mr_smith> i really don't know. i've only used it on one project and we never had a problem.
[17:59:25] <svm_invictvs> interesting
[17:59:34] <svm_invictvs> Well, I'll fiddle with it
[18:00:42] <svm_invictvs> mr_smith: Thanks!
[18:00:47] <mr_smith> yw
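(The TTL setup mr_smith is referring to is an index option, not a collection option; a minimal sketch with placeholder names. The background sweep runs roughly once a minute, which is why removal is "neither timely nor guaranteed".)

    // docs are deleted once createdAt is more than 3600 seconds old
    db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });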
[18:22:15] <thurmda> @remonvv … I think I did it!
[18:22:51] <thurmda> my schema was actually a little more complex than I was presenting
[18:22:59] <thurmda> about to share a gist now though
[18:24:27] <thurmda> https://gist.github.com/4666040
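(For the record, the usual shape for this kind of two-level rollup is two chained $group stages; a sketch assuming one document per request with country/language fields, which is close to, but not exactly, thurmda's schema.)

    db.requests.aggregate([
        // stage 1: one counter per (country, language) pair
        { $group: { _id: { country: "$country", language: "$language" },
                    count: { $sum: 1 } } },
        // stage 2: fold each country's languages into an array
        { $group: { _id: "$_id.country",
                    languages: { $push: { name: "$_id.language",
                                          count: "$count" } } } }
    ]);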
[18:37:04] <waheedi> hello
[18:37:10] <waheedi> i have a replica set of three servers
[18:37:39] <waheedi> the primary and another secondary are getting quires normally but the second secondary has no quires at all
[18:38:08] <waheedi> queries*
[18:38:33] <waheedi> hello
[18:38:36] <waheedi> anone here?
[18:38:41] <waheedi> anyone*
[18:40:51] <waheedi> hello people
[18:40:57] <waheedi> are you all asleep or what?
[18:59:20] <strnadj> waheedi: hello, it's 19:39 here :)
[18:59:30] <strnadj> so we are start working :)
[19:00:07] <waheedi> :)
[19:00:11] <waheedi> i didn't get that
[19:00:22] <waheedi> anyhow thank you strnadj
[19:01:49] <strnadj> waheedi: sorry, i wrote it badly, we're starting to work now :)
[19:02:14] <waheedi> good. so can i ask now
[19:02:14] <waheedi> ?
[19:04:54] <waheedi> strnadj: are you here?
[19:04:55] <waheedi> i have a replica set of three servers, the primary and another secondary are getting queries normally but the second secondary got no queries at all
[19:04:59] <strnadj> waheedi: You can try it :) If i know the answer, i will help you :)
[19:06:48] <strnadj> How do you mean "second secondary got no queries at all"?
[19:06:55] <strnadj> what kind of queries is it?
[19:07:02] <waheedi> lol
[19:07:03] <strnadj> and how do you connect to database?
[19:07:23] <waheedi> i have three different applications written in different languages
[19:07:33] <waheedi> one is ruby another is c++ and another lua
[19:07:46] <waheedi> and none of the three can read from the third secondary server
[19:13:34] <strnadj> hm interesting, i've never connected to mongo from ruby, c++ or lua :(
[19:14:15] <strnadj> what does rs.status() report?
[19:20:44] <waheedi> here you go http://pastebin.com/qLxvpCLa
[19:30:48] <waheedi> hello
[19:30:51] <waheedi> anyone can help
[19:31:11] <waheedi> yeah mongo support is a paid service and you guys seem to get motivated by money
[19:32:25] <JoeyJoeJo> Does anyone know if the C driver is any faster than the python or PHP drivers?
[19:33:10] <JoeyJoeJo> Specifically faster at inserting
[19:34:34] <tjgillies> i have > db.bird_logs.find()
[19:34:34] <tjgillies> { "_id" : ObjectId("51082367688b7479be000001"), "entry" : { "severity" : "WARN", "time" : ISODate("2013-01-29T19:30:47.478Z"), "program" : null, "hostname" : "Heathers-MacBook-Pro.local", "message" : "test" }, "created_at" : ISODate("2013-01-29T19:30:47.487Z"), "updated_at" : ISODate("2013-01-29T19:30:47.487Z") }
[19:34:47] <tjgillies> anyone know how i can get entries with severity WARN?
[19:35:26] <JoeyJoeJo> I think it would be {entry:{severity:'WARN'}}
[19:36:09] <tjgillies> db.bird_logs.find({entry:{severity:'WARN'}}) returns nada
[19:37:18] <BadCodSmell> How come mongo can't use a composite key for sorting?
[19:37:23] <TkTech> It's .find({'entry.severity': 'WARN'}) is it not?
[19:37:30] <TkTech> tjgillies: ^
[19:37:48] <BadCodSmell> When I try to sort my result it switches to using an index on a single field.
[19:37:59] <tjgillies> TkTech: thanks
[19:39:20] <BadCodSmell> for example, the index {a:1,b:1,c:-1} works fine for find({a:123,b:567,c:89}) but I also have the index {b:1}, and when I do find({a:123,b:567}).sort({c:-1}) it uses {b:1} instead
[19:39:38] <BadCodSmell> I thought a btree could handle this
[19:41:02] <BadCodSmell> I don't understand why this isn't index only
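(No answer appears in the log, but the standard workaround when the optimizer picks the wrong index is a hint; a sketch against BadCodSmell's example indexes, collection name made up:)

    db.coll.find({ a: 123, b: 567 })
           .sort({ c: -1 })
           .hint({ a: 1, b: 1, c: -1 });  // force the compound index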
[20:02:25] <owen1> what read preference modes are available on 2.0.1? this page (http://docs.mongodb.org/manual/applications/replication/#replica-set-read-preference-behavior-nearest) says those modes are only in 2.2. what does that mean for me?
[20:03:02] <owen1> i have 3 hosts in 1 datacenter and 2 in the other. i want to allow clients to connect to the nearest host.
[20:03:36] <owen1> i also want to be able to read from each host
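(On 2.0.x the named read-preference modes don't exist yet; secondary reads are opted into with slaveOk. In the shell that's the lines below, and the drivers have an equivalent slaveOk/secondary option per connection.)

    rs.slaveOk();                // allow reads on the secondary you're connected to
    db.getMongo().setSlaveOk();  // equivalent, per connection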
[21:23:46] <nanashiRei> Hi, i want to select everything from a collection that has "timestamp" (Date obj) within the 24h span of the last day. can't figure it out... whenever i add a condition to my group query it just... "crashes" (0 docs result)
[21:24:39] <nanashiRei> So i mean from the current view the span i would want is Jan 28 00:00 - 23:59
[21:29:51] <nanashiRei> https://gist.github.com/af999fc63775d167798e here i illustrated what i'm doing
[21:30:33] <nanashiRei> if anyone can help, pleeeeaaaaasseeeeeee ^^ my brain starts hurting from this
[21:30:58] <nanashiRei> and i need to get stuff out of the log db soon, got 12.5M docs already D:
[21:58:01] <apwalk> nanashiRei: cond.processed needs a comma after it and cond.timestamp is missing an ending }
[21:58:29] <apwalk> just a helpful guess, since i'm not familiar with that query syntax
[21:59:00] <nanashiRei> naw, the syntax is just roughly converted from coffeescript
[21:59:26] <nanashiRei> it works, as long as i leave out the cond part with the timestamp
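(The usual shape for "everything from yesterday" is a half-open ISODate range; a sketch with a placeholder collection name. The same pair of conditions can sit inside a group()'s cond document.)

    db.logs.find({
        timestamp: {
            $gte: ISODate("2013-01-28T00:00:00Z"),
            $lt:  ISODate("2013-01-29T00:00:00Z")
        }
    });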
[22:00:40] <owen1> how to make each of the secondaries readable on 2.0.1 ?
[22:40:06] <waheedi_> Hello
[22:40:19] <waheedi_> anyone got a good experience with replicasets
[22:40:38] <waheedi_> i have three servers one primary two secondaries
[22:42:18] <waheedi_> my primary and one of the secondaries do reads, writes and updates, but the third one, a secondary, doesn't do any reads
[22:43:56] <rickibalboa> I'm working with quite a large result set and doing some processing on it very often, around 10 million documents in one collection. in the app that does the processing, would it be more efficient or quicker to stream the results using the native driver, or get it in sections and do the processing.. ?
[22:55:01] <nanashiRei> rickibalboa: that question answers itself
[22:55:16] <waheedi_> what about minerale nanashiRei
[22:55:17] <nanashiRei> but it depends on how you process the data and what you need to process
[22:55:34] <nanashiRei> sorry what?
[22:55:50] <waheedi_> sorry nanashiRei what about my question :)
[22:57:24] <nanashiRei> what question?
[22:57:36] <waheedi_> i have three servers one primary two secondaries
[22:57:37] <nanashiRei> minerale?! don't know what that is :S
[22:57:45] <waheedi_> my primary and one of the secondaries do reads, writes and updates, but the third one, a secondary, doesn't do any reads
[22:57:51] <waheedi_> sorry for confusion nanashiRei
[22:58:15] <nanashiRei> oh, my answer is for rickibalboa
[23:45:20] <ejcweb> When running mongo locally on Ubuntu, the data I save doesn't seem to persist (ie. if I reboot it isn't there). Is this normal?
[23:45:28] <ejcweb> (ie. saving into db.test)
[23:45:43] <waheedi> locally?
[23:46:06] <waheedi> where is your db dir?
[23:47:03] <waheedi> dbpath?
[23:48:32] <ejcweb> waheedi: Ah, I do actually see data inside /var/lib/mongodb. Perhaps I'm just connecting to the wrong db or something.
[23:56:03] <owen1> is there documentation for older versions of mongo? i would like to know what options are available to me regarding read preference in 2.0.1.
[23:57:00] <waheedi> owen1: http://docs.mongodb.org/manual/applications/replication/#read-preference