[07:30:32] <lfamorim> SENT ON 31/01/2013 AND IT STILL HASN'T ARRIVED. I WOULD LIKE TO KNOW WHEN IT WILL ARRIVE. I HOPE IT ARRIVES AND THAT THE PRODUCT IS CORRECT AND WITHOUT ANY PROBLEM." }, "updated" : ISODate("2013-03-06T18:37:30.539Z") }
[11:45:29] <Derick> otherwise you will forget once and get crap
[11:45:41] <Nodex> for 3 hours I have been wondering why I could not remote-post to Facebook
[11:45:42] <Derick> iirc, native long is only used for retrieval
[11:45:43] <flok> hello. if I have a document with 3 key/value fields a, b and c, and I would like to update that document using the Java API update() command, is it possible to update only field b, or am I required to also add fields a and c to the update document? or can I just say: change the document to a BasicDBObject with only b=something in it?
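(This is what the $set operator is for. A minimal mongo-shell sketch, assuming a hypothetical collection "docs"; in the Java driver the same shape is built with nested BasicDBObjects.)

    // $set touches only the named field; a and c are left alone
    db.docs.update({ _id: 1 }, { $set: { b: "something" } })
    // without $set, the second argument REPLACES the whole document,
    // so a and c would be lost:
    db.docs.update({ _id: 1 }, { b: "something" })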
[11:45:53] <Nodex> it turns out it was the 64bit int saving badly hahahaha
[11:59:09] <flok> yeah, that's a nosql BerkeleyDB-ish key/value database
[12:00:44] <Derick> yeah, I know... just had never heard anybody actually using it
[12:11:08] <aboudreault> We use it too, as a caching step for loading/organizing a lot of data. works well.
[12:18:50] <flok> it does not seem to be entirely safe, this Kyoto Cabinet: if I do a System.exit(1) somewhere instead of a clean close, I occasionally get an error about the file length not being correct(?!)
[13:00:43] <Nodex> anyone suffer from Comment/forum spam from proxies?
[13:13:20] <strigga> anybody know a good article about user administration and the general concepts in mongodb?
[13:26:08] <lujeni> strigga, the official documentation is nice
[13:31:31] <strigga> lujeni: Hm, I looked at that. I think I more or less understood it. Actually I think my main issue is "translating" that into PHP :) Anyway, that's a PHP issue and not a mongodb issue, it seems
[13:33:41] <lujeni> strigga, so you should read the documentation about the PHP driver
[13:33:56] <strigga> lujeni: Yep - you caught me doing that :)
[13:34:54] <strigga> lujeni: I really have to get used to the concept that there are no actual "queries" as such :)
[13:44:01] <failshell> hello. does anyone know of a good product to monitor a sharded cluster? for example, the state the cluster is in? also, im looking for a backup tool to backup said cluster
[13:45:22] <majoh> I'm trying to get MMS to generate alerts but it's not working. Just a couple of minutes ago the status page said there was some issue with the service. Exactly how reliable are the MMS alerts?
[13:48:48] <majoh> Derick: yeah, I've seen that.. doesn't seem to be up-to-date though... in the MMS interface there are more than three alert types for instance... Oh, well, I'll try to get it working
[13:49:14] <Derick> majoh: ok, I'll poke people about that
[14:00:58] <Derick> majoh: there is an (internal) ticket for adding those metrics to the docs.
[14:18:48] <failshell> Derick: ive added a config server from my cluster to MMS. how long will it take before it finds the other nodes in the cluster? and i start to get data?
[14:21:59] <Derick> failshell: i thought you should give it a mongos...
[15:07:48] <pats_Toms> I am going to make my mongodb secure
[15:09:29] <pats_Toms> how do I define a username and password?
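(A minimal sketch of the 2.x-era answer, assuming the shell helpers of that release line; addUser was later replaced by createUser.)

    // on the admin database, create a credential:
    use admin
    db.addUser("admin", "secret")
    // then restart mongod with authentication enabled (mongod --auth,
    // or auth = true in mongod.conf), and log in from the shell:
    db.auth("admin", "secret")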
[15:15:28] <therealkasey> pymongo question: db.test.find({"_id" : <id>}, {"_id" : 1}) # assuming _id is indexed, will this pull the _id out of index or am i still going to hit disk?
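(The shell equivalent, as a sketch; "test" is the collection from the question and someId is a placeholder. explain() tells you whether the query was covered.)

    db.test.find({ _id: someId }, { _id: 1 }).explain()
    // in the 2.x explain output, "indexOnly" : true means the result
    // was served entirely from the index, with no document fetch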
[15:56:47] <JoeyJoeJo> Does anyone know of an existing method to dump pcap data into mongo? I was hoping that there was something that could take live output from something like tcpdump and put it straight into mongo
[16:06:41] <eka> hi, is there a driver for Go language?
[16:10:26] <strigga> Hi there, I am trying to connect to a mongo server (10gen 20 on Ubuntu) from two different machines. One is a Windows machine, from which I can connect with no problem. The other one is another Ubuntu machine, where the connection does not work. I am using "mongo host/db" and it never finishes, and never gives an error message.
[16:16:53] <strigga> eka: I don't think Ubuntu blocks anything outgoing by default, does it? I installed no packet filters. It's a "naked" Ubuntu 12.6 with a webserver (apache2) and the mongo package
[16:17:13] <eka> strigga: not by default. did you try telnet host port?
[16:18:02] <eka> strigga: also try mongo --host host --port port
[16:20:50] <strigga> Strange. Lemme check for packet filters on that machine
[16:20:57] <eka> strigga: so you may have a problem there
[16:21:52] <strigga> eka: strange thing is - it must be on the machine trying to connect (not the mongo server) I would not know a reason why I should block a port which I haven't used until 10 minutes ago :).. The connection to a mysql-server on the same route is working OK
[16:24:22] <strigga> anybody got a mongo server available I can try to connect to from my client that cannot connect? Just for a quick test...
[16:31:00] <failshell> hello. in my sharded cluster, i added a new node to our replica sets. the RS see them. but not the cluster. how do i tell it there's a new node in both RS?
[17:24:12] <rpcesar> I'm having this really strange issue: I'm getting the "too much data for sort" error telling me to use an index, but an explain shows that it is using an index. I've tried every combination of values I can think of, and I've checked JIRA. While there was a bug report open for a similar issue around 1.8, we are running 2.1 and it was marked as fixed.
[17:25:13] <rpcesar> the first 2 are explains on the query, before and after a hint. then it shows the error (which is thrown on the first iteration of the cursor), and the final dump is the actual query being sent to it
[17:26:20] <rpcesar> here's the code snippet producing that (minus the exception call)
[17:34:35] <rpcesar> that query is generated by a search engine; it can include a couple dozen more (optional) fields...
[17:35:06] <rpcesar> sounds like I'll wind up having 50+ indexes.
[17:35:24] <rpcesar> is there an alternative, or am I not understanding correctly?
[17:35:26] <kali> rpcesar: it can tolerate a few not-very-selective fields being absent, but you may need to hint the right index
[17:36:49] <rpcesar> I'm fine with hinting. is there a "rule" of some sort I can base this around? "a few not-very-selective fields being absent" doesn't really tell me much. am I going to be better off doing the sort client side?
[17:37:33] <rpcesar> i was trying to do it client side because I only wanted the _ids from them (didn't need the askingprice returned)
[17:38:04] <kali> rpcesar: it might be an option... highly flexible multicriteria is a pain to deal with
[17:39:40] <kali> rpcesar: on the other hand, you may find that most queries will have a common set or subset of fields, so you can index these fields together (with or without the sorting field)
[17:40:09] <rpcesar> yes, the ones you saw are common. well kinda, some queries don't even come with a city, just a zip
[17:40:33] <rpcesar> state is guaranteed, l_statuscat is guaranteed, the rest of it is a complex search of documents; we have indexes on those items
[17:40:55] <rpcesar> but if I have to have an index on every permutation of terms, I can say goodbye to reasonable ram constraints for holding the index
[17:42:15] <rpcesar> the query optimizer (and hint) already shows its using an index, but apparently not "right". all I want in the case of this query is to perform the search indicated by the query (which works), sort by asking price (to give us a usable sort order), and return the _id's of the results.
[17:43:24] <Killerguy> eka, I mean doing multiple update at once for different match
[17:44:02] <eka> Killerguy: doesn't make much sense… how would you do that in 1 sentence?
[17:44:02] <rpcesar> kali: so if I'm correct in understanding you, you're saying that an index, to be used by sort (and not run into that issue with large result sets), needs to be a compound index covering all fields (and permutations) of the search just to do that?
[17:44:08] <kali> rpcesar: you need to understand how the optimizer works, but basically only one index is used
[17:44:21] <rpcesar> and that index has to cover what?
[17:44:39] <rpcesar> yea, i saw it basically selects the one at the top, and hint moves it there
[17:45:04] <kali> rpcesar: it has to cover a "selective enough" subset of the fields on which you filter. i'm sorry, i can't be more specific there
[17:46:55] <rpcesar> kali: so is it basically that the limitation is imposed on the height/depth of the btree which is selected? and that error is thrown when the number of scans would be too high?
[17:46:58] <failshell> ive added a node to each of my replica sets. but my sharded cluster doesnt see them. what should i do?
[17:49:39] <kali> rpcesar: well, if the combination of fields in the index is a subset of the combination of fields you're filtering on, AND the sortable key comes last in the index definition, then the optimizer can walk the index and will just have to scan the btree slice and pick the records that match your filtering conditions
[17:50:02] <kali> rpcesar: in that case, the picked documents are in the right order
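(A generic sketch of the rule kali is describing; the collection and field names here are hypothetical. The filtered fields form a prefix of the index and the sort key comes last, so results come back from the index already ordered.)

    db.listings.ensureIndex({ status: 1, city: 1, price: 1 })
    // equality filters on the index prefix, sort on the trailing key:
    db.listings.find({ status: "active", city: "Austin" }).sort({ price: 1 })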
[17:50:19] <rpcesar> "AND the sortable key comes last in the index defintion" - i think that might be what im missing
[17:50:52] <rpcesar> kali: for the record, does it matter if you return the field or not?
[17:52:21] <kali> check out one of the presentations about indexing and optimizing. you need to put yourself in the shoes of the optimizer
[17:56:16] <rpcesar> kali, set the index on {'L_StatusCat' : 1, 'AskingPrice' : 1} and it's working now. I'm wondering if I should be adding "State" to that as well, since that's in all queries, but I'm not sure if the relative impact would be positive or negative.
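(That setup, sketched in 2.x shell syntax with rpcesar's field names; the collection name and the "Active" value are guesses.)

    db.listings.ensureIndex({ L_StatusCat: 1, AskingPrice: 1 })
    db.listings.find({ L_StatusCat: "Active" }, { _id: 1 })
               .sort({ AskingPrice: 1 })
               .hint({ L_StatusCat: 1, AskingPrice: 1 })  // optional: force this index
    // explain() on this cursor should show scanAndOrder: false,
    // i.e. the sort is being satisfied by the index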
[17:56:24] <rpcesar> (state is barely segmented, we only cover 2)
[17:57:14] <rpcesar> take that back, 4, but the bucket size for 2 of those segments is about 30 results, the other 2 are around 50,000
[17:57:25] <kali> rpcesar: don't bother if it's not selective; it will ruin the "sort by index" for requests not including it
[17:58:01] <rpcesar> and "AskingPrice" being indexed isnt a bad thing? (considering its EXTREEMLY selective, there are very few prices that are the same obviously)
[17:58:29] <rpcesar> I've generally been running by the hard and fast rule (if it doesn't segment well, don't index it) but I'm wondering if that was an idiotic assumption to make
[17:58:43] <kali> if you need to sort on it, you need it in the index, and at the end
[17:59:09] <rpcesar> got it, i think you cleared it up for me very well, thanks :)
[18:52:39] <CupOfCocoa> I have a shard across 4 servers here and a local mongos process on port 3303 on the machine I am logged in to. However, when I run 'mongo localhost:3303' I get a connection error, but if I substitute the LAN IP of the server for localhost it works. Am I missing something, or why does localhost not work?
[18:53:48] <CupOfCocoa> fwiw there is also a local mongod process running as part of the shard on the machine but on a different port
[18:58:08] <troydm> say my document sometimes has a property named 'userIds'
[18:58:24] <troydm> how do i check using query if it has that property?
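($exists is the operator for this; a minimal sketch, with a hypothetical collection name.)

    db.things.find({ userIds: { $exists: true } })   // documents that have the field
    db.things.find({ userIds: { $exists: false } })  // documents that lack it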
[21:13:25] <bobinator60> hello mongo people. if I have an array in my document, and I do a query against the array with a projection, can I append NEW elements to the projection? or do I have to manifest the entire projection in the client?
[21:13:44] <bobinator60> sorry: new elements to the ARRAY
[21:14:05] <bobinator60> and do I have to manifest the entire array in the client?
[21:21:51] <therealkasey> not sure what you mean by "manifest". you can do an in-place update back to the document if your concern is how to update the array after reading it.
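(A sketch of that in-place append, so the client never has to read back the whole array; collection and field names are hypothetical.)

    // append one element to the array field:
    db.docs.update({ _id: someId }, { $push: { tags: newElement } })
    // in the 2.x era, $pushAll appended several at once:
    db.docs.update({ _id: someId }, { $pushAll: { tags: [e1, e2] } })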
[21:29:08] <gazarsgo> aww man, the mongodb ami in the cloudformation template isn't even 2012.09...
[21:29:32] <gazarsgo> is there a better ami-id to plug into my cloudformation script than ami-e565ba8c
[22:08:54] <bobinator60> therealkasey: thanks. 'manifest' means 'materialize', i.e. reading the entire mongodb document into the client, updating it, and sending it back
[22:10:39] <therealkasey> [what i suggested earlier] there are potential race conditions (since you're reading then writing the array), could be problematic depending on what's in it.
[22:12:02] <therealkasey> i like to keep a version counter in documents where that can be a problem. include the version number in your projection, use it as a condition of your update statement (and $inc it) so it will fail if something else touches the doc between read and write.
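(A sketch of that version-counter pattern in the shell; field names are made up.)

    // read the array and the current version together:
    var doc = db.docs.findOne({ _id: someId }, { tags: 1, version: 1 })
    // ... modify doc.tags on the client ...
    db.docs.update(
        { _id: someId, version: doc.version },            // matches only if untouched
        { $set: { tags: doc.tags }, $inc: { version: 1 } }
    )
    // if another writer got in between, nothing matches; check the
    // update result (getLastError's n) and retry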
[23:51:54] <sinisa> anyone from business insider here?
[23:54:33] <ins0mnia> is it possible to update a nested item like so db.mycollection.update({item.id.name}, {$set:value})?
[23:54:53] <ins0mnia> given id is mongo's object id
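(Nested fields are addressed with dot notation; a sketch with the field names guessed from the question, and a made-up ObjectId.)

    db.mycollection.update(
        { "item.id": ObjectId("5141f0f7e4b0d8a3c7e1a001") },  // hypothetical id
        { $set: { "item.name": "newValue" } }
    )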