PMXBOT Log file Viewer


#mongodb logs for Wednesday the 13th of March, 2013

[05:36:43] <someprimetime_> anyone had any experience with solr + mongo?
[05:37:14] <ron> sure
[07:29:29] <lfamorim> Hello guys
[07:29:57] <lfamorim> I have a document whose keys look like "dict.key-name"
[07:30:07] <lfamorim> i think the minus sign is blocking me from querying the dictionary
[07:30:15] <lfamorim> how can i escape the minus sign?
[07:30:23] <lfamorim> > db.confiometro.find({"consumidor.primeiro-nome" : "TAFARREL", "consumidor.sobrenome" : "PINHEIRO"})
[07:30:31] <lfamorim> > db.confiometro.find({"consumidor.nome" : "TAFFAREL MAIA PINHEIRO"})
[07:30:32] <lfamorim> { "_id" : ObjectId("51378cea7f8b9a706f00107d"), "reclamacao-id" : 37510, "created" : ISODate("2013-03-06T18:37:30.527Z"), "consumidor" : { "nome" : "TAFFAREL MAIA PINHEIRO", "primeiro-nome" : "TAFFAREL", "sobrenome" : "PINHEIRO" }, "empresa" : "BEST INFORMATICA", "data" : ISODate("2013-02-14T19:05:00Z"), "localidade" : { "cidade" : "FORTALEZA", "estado" : "CE" }, "opiniao" : { "titulo" : "ENTREGA DO PRODUTO", "texto" : "MEU PEDIDO FOI
[07:30:32] <lfamorim> ENVIADO DIA 31/01/2013 E AINDA NÃO CHEGOU GOSTARIA DE SABER QUANDO IRÁ CHEGAR, ESPERO QUE CHEGUE E QUE O PRODUTO ESTEJA CERTO E SEM NENHUM PROBLEMA." }, "updated" : ISODate("2013-03-06T18:37:30.539Z") }
[07:31:17] <lfamorim> someone knows how?
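For the record, the hyphen needs no escaping: a quoted key in the query document is valid as-is, both in the shell and in drivers, and dot notation still reaches the nested field. Note also that the log's first query spells the name "TAFARREL" while the stored document has "TAFFAREL", which alone would make that match fail. A minimal Python sketch (collection and field names taken from the log; the dot-notation resolver is a pure-Python model, not driver code):

```python
# Hyphenated field names need no escaping in MongoDB -- quoting the key
# in the query document is enough. Dot notation still works:
# "consumidor.primeiro-nome" reaches {"consumidor": {"primeiro-nome": ...}}.
query = {
    "consumidor.primeiro-nome": "TAFFAREL",   # hyphen inside a quoted key
    "consumidor.sobrenome": "PINHEIRO",
}
# With a live pymongo collection this would be:
# db.confiometro.find(query)

def resolve(doc, dotted_key):
    """Pure-Python model of how dot notation walks a nested document."""
    for part in dotted_key.split("."):
        doc = doc[part]
    return doc

doc = {"consumidor": {"primeiro-nome": "TAFFAREL", "sobrenome": "PINHEIRO"}}
```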
[08:08:54] <cloudgeek> Please make at least 3379MB available in /var/lib/mongo/journal
[08:09:14] <cloudgeek> I'm using a Red Hat EC2
[08:09:18] <cloudgeek> instance on AWS
[08:09:29] <cloudgeek> not able to start mongodb
[08:19:43] <Nodex> is there a question in there somewhere?
[08:35:51] <[AD]Turbo> hello
[09:53:14] <bmcgee> hey guys, any of you using mongodb-async-driver?
[10:47:45] <Zelest> o/ remonvv
[11:38:35] <Nodex> Derick : ping
[11:39:18] <Zelest> bwahaha!
[11:39:32] <Zelest> :'(
[11:39:38] <Derick> Nodex: yes?
[11:39:58] <Zelest> ron, see, how you interrupted me so the packet got through! :(
[11:40:00] <Zelest> now*
[11:40:08] <ron> \o/
[11:40:57] <Nodex> 1 sec Derick : I think your blog answered my question - just testing
[11:41:12] <Derick> :D
[11:41:27] <Nodex> No it didn't
[11:41:48] <Nodex> ok, I have a 64 bit int which is returning as 1.7690865566035E+14 ... how do I get that back to an actual integer?
[11:43:06] <Derick> you got it as a float then
[11:43:12] <Derick> did you set mongo.native_long ?
[11:43:15] <Derick> (to 1)
[11:43:19] <Nodex> yeh
[11:43:29] <Nodex> it saves as "NumberLong(.....)" in mongo
[11:43:38] <Nodex> but returns as ^^ in the driver
[11:44:07] <Derick> numberlong on the shell makes sense
[11:44:15] <Derick> but you should get it back as a real int in PHP
[11:44:25] <Derick> if not, you don't have mongo.native_long set
[11:44:58] <Nodex> I have to set it in the retrieval as well as the saving?
[11:45:00] <Nodex> my bad
[11:45:08] <Derick> you need to set it globally
[11:45:29] <Derick> otherwise you will forget once and get crap
[11:45:41] <Nodex> 3 hours I have been wondering why I could not remote post to facebook
[11:45:42] <Derick> iirc, native long is only used for retrieval
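The setting Derick refers to is a php.ini directive of the legacy mongo PHP extension; setting it globally, as he recommends, avoids forgetting it in individual scripts. A minimal php.ini fragment:

```ini
; php.ini -- legacy "mongo" PHP extension
; Return BSON 64-bit integers as native PHP ints (on 64-bit platforms)
; instead of floats, which silently lose precision above 2^53.
mongo.native_long = 1
```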
[11:45:43] <flok> hello. if i have a document with 3 key/value fields a, b and c and i would like to update that document using the java api update() command, is it possible to only update field b and am I required to also add field a and c to the update-document? or can I only say change document to a BasicDBObject with in that b=something?
[11:45:53] <Nodex> it turns out it was the 64bit int saving badly hahahaha
[11:45:59] <Nodex> and sending the wrong app id
[11:46:03] <Derick> flok: have a look at the $set operator
[11:47:31] <Nodex> thanks Derick :)
[11:47:41] <Nodex> sometimes my stupidity amazes me
[11:57:44] <flok> Derick: i did some googling and i think i understand the $set construction. thanks.
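A sketch of the $set behaviour Derick points flok at, modelled in pure Python (field names a, b, c from flok's question): only the fields named under $set change; everything else in the document is left untouched, so there is no need to resend a and c.

```python
def apply_set(doc, set_spec):
    """Model MongoDB's {"$set": {...}} on a flat document:
    listed fields are overwritten or added, all other fields are kept."""
    updated = dict(doc)
    updated.update(set_spec)
    return updated

doc = {"a": 1, "b": 2, "c": 3}
update = {"$set": {"b": "something"}}   # partial update: only b changes

# With a live collection (pymongo syntax, names assumed) this would be:
# coll.update_one({"_id": some_id}, update)
```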
[11:58:01] <Derick> np :-)
[11:58:02] <flok> fwiw: i'm migrating a kyoto cabinet app to mongodb
[11:58:19] <Derick> kyoto cabinet
[11:59:09] <flok> yeah, that's a nosql BerkeleyDB-ish key/value database
[12:00:44] <Derick> yeah, I know... just had never heard anybody actually using it
[12:11:08] <aboudreault> We use it too, as a caching step for loading/organizing a lot of data. works well.
[12:18:50] <flok> it does not seem to be entirely safe, this kyoto cabinet: if I do a System.exit(1) somewhere instead of a clean close, I occasionally get an error about the file length not being correct(?!)
[12:23:26] <Derick> oi
[13:00:43] <Nodex> anyone suffer from Comment/forum spam from proxies?
[13:13:20] <strigga> anybody knows a good article about user administration and the general concepts in mongodb?
[13:26:08] <lujeni> strigga, the official documentation is nice
[13:31:31] <strigga> lujeni: Hm I looked at that. I think I more or less understood. Actually I think my main issue ist "translating" that into PHP :) Anyway that's a php issue and not a mongodb issue it seems
[13:31:41] <strigga> lujeni: thanks
[13:33:41] <lujeni> strigga, so you should read the documentation about the PHP driver
[13:33:56] <strigga> lujeni: Yep - you caught me doing that :)
[13:34:54] <strigga> lujeni: I really have to get used to the concept that there are no actual "queries" as such :)
[13:44:01] <failshell> hello. does anyone know of a good product to monitor a sharded cluster? for example, the state the cluster is in? also, im looking for a backup tool to backup said cluster
[13:44:19] <Derick> firelynx: have you seen MMS?
[13:44:25] <Derick> failshell: have you seen MMS?
[13:44:32] <failshell> no what is it?
[13:44:44] <Derick> it's the Mongo Monitoring Service
[13:45:02] <firelynx> is that the most failed invention ever for cellphones?
[13:45:05] <firelynx> :p
[13:45:22] <Derick> http://www.10gen.com/products/mongodb-monitoring-service
[13:45:22] <majoh> I'm trying to get MMS to generate alerts but it's not working. Just a couple of minutes ago the status page said there was some issue with the service. Exactly how reliable are the MMS alerts?
[13:45:40] <Derick> http://mms.10gen.com/help/
[13:45:51] <failshell> more importantly, sounds like something that's free now and will be expensive later
[13:45:56] <Derick> majoh: I don't know that, it's best to report that as an issue in jira
[13:46:09] <Derick> failshell: From what I know, MMS is going to stay free.
[13:46:38] <majoh> is there any "uptime" statistics or what not? not sure I want to rely on mms if it's not able to generate alerts... :/
[13:46:46] <Derick> It's a hosted service, and it might be possible that we'll start offering a non-hosted version at some point too.
[13:47:11] <failshell> ok, well, im going to look into it
[13:47:14] <Derick> majoh: I don't know it too well, but the alerts are described here: http://mms.10gen.com/help/usage.html#alerts
[13:47:22] <failshell> as right now, our monitoring is not sufficient
[13:47:27] <failshell> our cluster crashed yesterday
[13:47:31] <failshell> and i have no clue why
[13:48:48] <majoh> Derick: yeah, I've seen that.. doesn't seem to be up-to-date though... in the MMS interface there are more than three alert types for instance... Oh, well, I'll try to get it working
[13:49:14] <Derick> majoh: ok, I'll poke people about that
[13:50:26] <Derick> ah, the metrics are new
[14:00:58] <Derick> majoh: there is an (internal) ticket for adding those metrics to the docs.
[14:18:48] <failshell> Derick: ive added a config server from my cluster to MMS. how long will it take before it finds the other nodes in the cluster? and i start to get data?
[14:21:59] <Derick> failshell: i thought you should give it a mongos...
[14:22:43] <failshell> ah
[14:50:59] <failshell> Derick: the munin agent, does that need to be installed on all nodes? or just the agent?
[14:51:15] <Derick> iirc, all nodes
[14:51:24] <Derick> but I have never tried that one myself
[14:55:16] <pats_Toms> hi, is there any way to see which configuration file mongodb is using?
[15:02:14] <Nodex> ps ax | grep mongodb
[15:02:17] <Nodex> ^^
[15:07:27] <pats_Toms> thanks Nodex
[15:07:48] <pats_Toms> I am going to make my mongodb secure
[15:09:29] <pats_Toms> how to define username and password?
[15:15:28] <therealkasey> pymongo question: db.test.find({"_id" : <id>}, {"_id" : 1}) # assuming _id is indexed, will this pull the _id out of index or am i still going to hit disk?
[15:17:24] <therealkasey> rather: db.test. find_one({"_id" : <id>}, {"_id" : True})
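What therealkasey is describing is a covered query: when every field in both the filter and the projection lives in a single index, MongoDB can answer from the index alone without fetching the document. The way to verify it is explain() output (indexOnly in the 2.x era). A hedged sketch, with a pure-Python model of an inclusion projection:

```python
# Covered-query sketch (pymongo syntax; no live server here, _id value assumed).
filter_doc = {"_id": 123}
projection = {"_id": True}   # include _id only, suppress everything else
# db.test.find_one(filter_doc, projection)
# db.test.find(filter_doc, projection).explain()  # check for index-only access

def project(doc, projection):
    """Model an inclusion projection: keep only the listed truthy fields
    (plus _id, which is included unless explicitly excluded)."""
    keep = {k for k, v in projection.items() if v}
    keep.add("_id")
    return {k: v for k, v in doc.items() if k in keep}
```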
[15:18:28] <Derick> Nodex: are you running PHP driver 1.3.4 in a sharded production environment?
[15:18:32] <Derick> (or anybody else really)
[15:21:22] <pats_Toms> sharder environment?
[15:21:29] <pats_Toms> you mean shared, Derick?
[15:21:57] <Derick> no, sharded
[15:56:47] <JoeyJoeJo> Does anyone know of an existing method to dump pcap data into mongo? I was hoping that there was something that could take live output from something like tcpdump and put it straight into mongo
[16:06:41] <eka> hi, is there a driver for Go language?
[16:10:26] <strigga> Hi there, I am trying to connect to a mongo server (10gen 20 on Ubuntu) from two different machines. One is a windows machine, on which I can connect with no problem. The other one is another Ubuntu machine, where the connection does not work. I am using "mongo host/db" and it never finishes with an error message.
[16:10:31] <strigga> Anybody seen that before?
[16:10:37] <strigga> All machines are in the same network
[16:11:01] <eka> found http://labix.org/mgo
[16:13:22] <bean> strigga: I would just do "mongo host" then "use dbname" on the shell
[16:13:43] <eka> strigga: how do you connect on linux?
[16:13:46] <bean> strigga: though it depends on if its set to listen on localhost only on ubuntu
[16:14:42] <strigga> bean: makes no difference. It defaults to "test" (connecting to:host/test). I can connect from a windows machine.
[16:15:02] <strigga> eka: mongo host/db - make sure to install the latest mongo package from the 10gen repository
[16:15:26] <pats_Toms> when I use db.system.users.find() I got my created user but I can't get him working
[16:15:27] <eka> strigga: any firewall that you may have in your linux box that could block you out connection?
[16:16:08] <pats_Toms> { "_id" : ObjectId("51409894a50ab25054b9e01f"), "user" : "oscar", "readOnly" : false, "pwd" : "somehash" }
[16:16:14] <pats_Toms> something wrong?
[16:16:53] <strigga> eka: I don't think ubuntu blocks anything outgoing by default, does it? I installed no packet filters. It's a "naked" ubuntu 12.6 with a webserver (apache2), and the mongo package
[16:17:13] <eka> strigga: not by default. did you try telnet host port?
[16:18:02] <eka> strigga: also try mongo --host host --port port
[16:18:39] <strigga> eka: OK hang on
[16:20:39] <strigga> eka: telnet: no connection.
[16:20:50] <strigga> Strange. Lemme check for packet filters on that machine
[16:20:57] <eka> strigga: so you may have a problem there
[16:21:52] <strigga> eka: strange thing is - it must be on the machine trying to connect (not the mongo server) I would not know a reason why I should block a port which I haven't used until 10 minutes ago :).. The connection to a mysql-server on the same route is working OK
[16:24:22] <strigga> anybody got a mongo server available I can try to connect to from my client that cannot connect? Just for a quick test,..
[16:25:20] <eka> lol
[16:25:28] <eka> sorry
[16:25:35] <strigga> eka: was worth a try, wasn't it :)
[16:26:06] <eka> strigga: IMHO most of the mongo machines run in a LAN
[16:26:17] <eka> so no exterior connection
[16:26:23] <eka> YMMV
[16:26:26] <strigga> eka: I would guess so. Or at least have IP-Addresses bound
[16:30:05] <strigga> eka: trying to connect to a local instance from that machine is working..
[16:30:32] <strigga> so: 27017 does not seem to be blocked
[16:30:34] <strigga> arrrgh
[16:31:00] <failshell> hello. in my sharded cluster, i added a new node to our replica sets. the RS see them. but not the cluster. how do i tell it there's a new node in both RS?
[16:53:45] <strigga> eka: no firewall :)
[16:54:28] <eka> strigga: still something is wrong there… can you paste your mongo command… maybe there is a typo there that you didn't see
[16:55:09] <strigga> I tried several different versions now.. :) Lemme re-try them and post them here...
[16:55:35] <strigga> mongo host
[16:55:47] <strigga> mongo --host host --port 27017
[16:56:01] <strigga> (sorry, I will not post the hostname here as that one IS in the internet) :)
[16:56:21] <Killerguy> hi
[16:56:27] <Killerguy> is it possible to do bulk update?
[16:56:38] <Killerguy> like insert
[16:56:58] <strigga> mongo fullyqualified/database
[16:57:49] <Killerguy> hum?
[16:58:28] <strigga> eka: got it!!!
[16:58:34] <eka> strigga: what was it?
[16:58:43] <strigga> eka: dns returned the wrong IP address
[16:58:52] <strigga> arrrgh :D
[16:59:06] <strigga> eka: but anyway thanks for your efforts!!
[16:59:11] <eka> np
[17:00:22] <eka> Killerguy: what do you mean by bulk update? if your query matches many docs, all of them will be updated…
[17:00:49] <eka> Killerguy: http://docs.mongodb.org/manual/applications/update/#update-multiple-documents
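The multi-document form eka's link describes: a single update touches only the first matching document unless the multi flag is set (update_many in later pymongo). A sketch with assumed field names, plus a pure-Python model of the multi semantics:

```python
# Multi-document update sketch (pymongo syntax, filter/field names assumed).
filter_doc = {"status": "pending"}
change = {"$set": {"status": "done"}}
# coll.update(filter_doc, change, multi=True)   # legacy pymongo API
# coll.update_many(filter_doc, change)          # pymongo 3+ API

def update_many(docs, filter_doc, set_spec):
    """Model multi=True: apply set_spec to EVERY doc matching the filter.
    Returns the number of documents modified."""
    n = 0
    for doc in docs:
        if all(doc.get(k) == v for k, v in filter_doc.items()):
            doc.update(set_spec)
            n += 1
    return n
```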
[17:24:12] <rpcesar> im having this really strange issue. im getting the too much data for sort issue, please use an index, but an explain shows that it is using an index. ive tried every combination of values I can think of, as well as checked jira. While there was a bug report open for a similar issue ~1.8, we are running 2.1 and it was marked as fixed.
[17:24:26] <rpcesar> http://pastie.org/6473906
[17:25:13] <rpcesar> the first 2 are explains on the query, before and after a hint. then it shows the error (which is thrown on the first iteration of the cursor), and the final dump is the actual query being sent to it
[17:26:20] <rpcesar> heres the code snippet producing that (minus the exception call)
[17:26:21] <rpcesar> http://pastie.org/6473983
[17:27:06] <rpcesar> any ideas as to what could be the issue here? is there something simple im missing?
[17:28:16] <kali> rpcesar: wwwaaa... this php dump are really ugly
[17:28:24] <kali> rpcesar: would you mind doing the same in the mongo shell ?
[17:29:25] <rpcesar> yea, php is ugly. sure, the top pastie is the explains and the sort though,
[17:29:58] <kali> where is the query ?
[17:30:19] <rpcesar> very bottom of http://pastie.org/6473906
[17:30:34] <kali> ha !
[17:30:38] <rpcesar> that last json dump is the query being passed to it, sort is being applied on askingprice. the very top of the link
[17:30:41] <kali> you need a composite index
[17:30:43] <rpcesar> is the explain
[17:30:55] <rpcesar> what do you mean?
[17:31:02] <rpcesar> im not using compound keys for that query
[17:31:11] <kali> {state:1, city:1, classtype:1, l_statuscat:1}
[17:31:53] <rpcesar> i need to have an index covering all of those fields?
[17:32:24] <rpcesar> that query can be much more complex then that unfortunately, this is a simple case thats failing.
[17:33:42] <kali> rpcesar: all the fields you're querying, + the fields you're sorting on at the end
[17:33:53] <kali> rpcesar: and in the right order (but you only have one)
[17:33:57] <rpcesar> and it all has to be a single compound index?
[17:34:10] <kali> you may want to have a look at one of these presentations... http://www.10gen.com/presentations/indexing-and-query-optimization-1
[17:34:13] <kali> yes.
[17:34:35] <rpcesar> that query is generated by a search engine, it can include a couple dozen more (optional) fields...
[17:35:06] <rpcesar> sounds like i'll wind up having 50+ indexes.
[17:35:24] <rpcesar> is there an alternative or am i not understanding correctly?
[17:35:26] <kali> rpcesar: it can tolerate a few not-very-selective fields being absent, but you may need to hint the right index
[17:36:49] <rpcesar> im fine with hinting. is there a "rule" of some sort I can base this around? "a few not-very-selective fields being absent" doesn't really tell me much. am I going to be better off doing the sort client side?
[17:37:33] <rpcesar> i was trying to do it client side because i only wanted the id's from them (didnt need the askingprice returned)
[17:38:04] <kali> rpcesar: it might be an option... highly flexible multicriteria is a pain to deal with
[17:39:40] <kali> rpcesar: on the other hand, you may find that most queries will have a common set or subset of fields, so you can index these fields together (with or without the sorting field)
[17:40:09] <rpcesar> yes, the ones you saw are common. well kinda, some queries don't even come with a city, just a zip
[17:40:33] <rpcesar> state is guaranteed, l_statuscat is guaranteed, the rest of it is a complex search of documents, we have indexes on those items
[17:40:55] <rpcesar> but if I have to have an index on every permutation of terms, i can say goodbye to reasonable ram constraints for holding the index
[17:42:15] <rpcesar> the query optimizer (and hint) already shows its using an index, but apparently not "right". all I want in the case of this query is to perform the search indicated by the query (which works), sort by asking price (to give us a usable sort order), and return the _id's of the results.
[17:43:24] <Killerguy> eka, I mean doing multiple update at once for different match
[17:44:02] <eka> Killerguy: doesn't make much sense… how would you do that in 1 sentence ?
[17:44:02] <rpcesar> kali: so if im correct in understanding you, your saying that an index , to be used by sort (and not run into that issue with large result sets) needs to be a multi-key index covering all fields (and permutations) of the search just to do that?
[17:44:08] <kali> rpcesar: you need to understand how the optimizer works, but basically only one index is used
[17:44:21] <rpcesar> and that index has to cover what?
[17:44:39] <rpcesar> yea, i saw it basically selects the one at the top, and hint moves it there
[17:45:04] <kali> rpcesar: it has to cover a "selective enough" subset of the fields on which you filter. i'm sorry, i can't be more specific there
[17:46:55] <rpcesar> kali: so is it basically that the limitation is imposed on the height/depth of the btree which is selected? and that error is thrown when the number of scans would be too high?
[17:46:58] <failshell> ive added a node to each of my replica sets. but my sharded cluster doesnt see them. what should i do?
[17:49:39] <kali> rpcesar: well, if the combination of fields in the index is a subset of the combination of fields you're filtering on, AND the sortable key comes last in the index definition, then the optimizer can parse the index and will just have to parse the btree slice and pick the records that match your filtering conditions
[17:50:02] <kali> rpcesar: in that case, the picked documents are in the right order
[17:50:19] <rpcesar> "AND the sortable key comes last in the index defintion" - i think that might be what im missing
[17:50:52] <rpcesar> kali: for the record, does it matter if you return the field or not?
[17:51:03] <kali> rpcesar: nope
[17:51:16] <kali> (well not in that case)
[17:52:21] <kali> check out one of the presentation about indexing and optimizing. you need to put yourself in the shoes of the optimizer
[17:56:16] <rpcesar> kali, set the index on {'L_StatusCat' : 1, 'AskingPrice' : 1} and its working now. im wondering if I should be adding "State" to that as well since thats in all queries, but not sure if the relative impact would be positive or negative.
[17:56:24] <rpcesar> (state is barely segmented, we only cover 2)
[17:57:14] <rpcesar> take that back, 4, but the bucket size for 2 of those segments is about 30 results, the other 2 are around 50,000
[17:57:25] <kali> rpcesar: don't bother if it's not selective, it will ruin the "sort by index" for requests not including it
[17:58:01] <rpcesar> and "AskingPrice" being indexed isn't a bad thing? (considering it's EXTREMELY selective, there are very few prices that are the same obviously)
[17:58:29] <rpcesar> ive generally been running by the hard and fast rule (if it doesn't segment well, don't index it) but im wondering if that was an idiotic assumption to make
[17:58:43] <kali> if you need to sort on it, you need it in the index, and at the end
[17:59:09] <rpcesar> got it, i think you cleared it up for me very well, thanks :)
[17:59:19] <kali> good.
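The rule kali lands on, in code: filter fields first, sort field last, all in one compound index (field names L_StatusCat / AskingPrice from the log). Because btree entries are stored in index order, documents for a given L_StatusCat come out already sorted by AskingPrice, so no in-memory sort (and no "too much data for sort" error). A sketch, with a pure-Python model of the btree ordering:

```python
# Compound index: equality-filtered field first, sort field last.
index_spec = [("L_StatusCat", 1), ("AskingPrice", 1)]
# With a live pymongo collection (hypothetical names):
# coll.create_index(index_spec)
# coll.find({"L_StatusCat": "Active"}).sort("AskingPrice", 1)

# Model: entries sorted by (L_StatusCat, AskingPrice) -- scanning one
# L_StatusCat slice of the "btree" yields prices already in order.
entries = [("Active", 300), ("Sold", 100), ("Active", 100), ("Active", 200)]
btree_order = sorted(entries)
active_prices = [price for status, price in btree_order if status == "Active"]
```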
[18:52:39] <CupOfCocoa> I have a shard across 4 servers here and a local mongos process on port 3303 on the machine I am logged in to. However, when I run 'mongo localhost:3303' I get a connection error, but if I substitute localhost for the LAN IP of the server it works. Am I missing something or why does localhost not work?
[18:53:48] <CupOfCocoa> fwiw there is also a local mongod process running as part of the shard on the machine but on a different port
[18:58:08] <troydm> say my document sometimes has a property named 'userIds'
[18:58:14] <troydm> sometimes it doesn't
[18:58:24] <troydm> how do i check using query if it has that property?
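The operator troydm is after is $exists, which matches on the presence of a field regardless of its value (a field set to null still counts as present). A sketch with a pure-Python model:

```python
# $exists sketch: match documents that have (or lack) the 'userIds' field.
query_has = {"userIds": {"$exists": True}}
query_missing = {"userIds": {"$exists": False}}
# db.coll.find(query_has)   # shell / pymongo form, collection name assumed

def matches_exists(doc, field, should_exist):
    """Model {field: {"$exists": should_exist}}: presence only --
    a field explicitly set to None/null still counts as present."""
    return (field in doc) == should_exist
```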
[21:13:25] <bobinator60> hello mongo people. if I have an array in my document, and I do a query against the array with a projection, can I append NEW elements to the projection? or do I have to manifest the entire projection in the client?
[21:13:44] <bobinator60> sorry: new elements to the ARRAY
[21:14:05] <bobinator60> and do I have to manifest the entire array in the client?
[21:21:51] <therealkasey> not sure what you mean by "manifest". you can do an in-place update back to the document if your concern is how to update the array after reading it.
[21:29:08] <gazarsgo> aww man, the mongodb ami in the cloudformation template isn't even 2012.09...
[21:29:32] <gazarsgo> is there a better ami-id to plug into my cloudformation script than ami-e565ba8c
[22:08:54] <bobinator60> therealkasey: thanks. 'manifest' means to 'materialize', meaning reading the entire mongodb document into the client, updating it, and sending it back
[22:10:39] <therealkasey> [what i suggested earlier] there are potential race conditions (since you're reading then writing the array), could be problematic depending on what's in it.
[22:12:02] <therealkasey> i like to keep a version counter in documents where that can be a problem. include the version number in your projection, use it as a condition of your update statement (and $inc it) so it will fail if something else touches the doc between read and write.
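therealkasey's version-counter pattern is optimistic concurrency: read the document along with its version, then make the write conditional on the version being unchanged and bump it in the same operation. A pure-Python model of that compare-and-swap step (the version field name _v is an assumption):

```python
def versioned_update(doc, expected_version, set_spec, version_field="_v"):
    """Model a conditional update: apply set_spec and bump the version
    only if the stored version still matches the one read earlier.
    Returns True on success, False if another writer got in between."""
    if doc.get(version_field) != expected_version:
        return False                  # lost the race: retry from a fresh read
    doc.update(set_spec)
    doc[version_field] = expected_version + 1
    return True

# Live pymongo equivalent (hypothetical names), one atomic operation:
# coll.update_one({"_id": _id, "_v": v_read},
#                 {"$set": {...}, "$inc": {"_v": 1}})
```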
[23:51:54] <sinisa> anyone from business insider here?
[23:54:33] <ins0mnia> is it possible to update a nested item like so db.mycollection.update({item.id.name}, {$set:value})?
[23:54:53] <ins0mnia> given id is mongo's object id
[23:56:26] <ins0mnia> anyone? ;)
[23:56:32] <sinisa> insomnia... wait a sec :)
[23:57:02] <ins0mnia> hehe
[23:59:03] <sinisa> it should be something with positional
[23:59:04] <sinisa> $
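Spelled out, the positional-operator answer sinisa is reaching for: match the array element in the filter, then use $ in the update path to address that same element. A sketch with assumed field names, plus a pure-Python model of what $ does:

```python
# Positional-operator sketch: $ refers to the FIRST array element matched
# by the filter. Shell / pymongo form (collection and field names assumed):
# db.mycollection.update(
#     {"items._id": some_object_id},
#     {"$set": {"items.$.name": "new name"}},
# )

def positional_set(doc, array_field, match_key, match_val, set_key, new_val):
    """Model {"$set": {"<array>.$.<key>": val}}: update the first element
    of doc[array_field] whose match_key equals match_val."""
    for element in doc[array_field]:
        if element.get(match_key) == match_val:
            element[set_key] = new_val
            return True
    return False
```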