[07:55:52] <acidjazz> small question: i am querying data from the mongo JS prompt and i want to only show a 2nd dimension of a field, like going from > db.wap.find({}, {SSID: true}); to > db.wap.find({}, {SSID.encryption: true}); which obviously doesn't work.. is this possible?
[07:56:08] <joannac> why does that obviously not work?
[07:57:01] <acidjazz> Wed Dec 17 23:55:30.858 SyntaxError: Unexpected token .
[08:08:22] <aburass> hi all, can you please advise which is better to use, mongoDB or Redis as a nosql persistent store? and which is easier to maintain and configure in terms of administration
[08:09:21] <joannac> acidjazz: DBQuery.shellBatchSize = X; for appropriate value of X
[08:12:38] <acidjazz> lol joannac look at the 4th answer http://stackoverflow.com/questions/3705517/how-to-print-out-more-than-20-items-documents-in-mongodbs-shell
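For reference, the shell setting joannac points to; the value 300 here is an arbitrary example:

    DBQuery.shellBatchSize = 300;  // the shell now prints up to 300 documents per page
    db.wap.find({}, { "SSID.encryption": true });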
[08:28:38] <aburass> hello, can you please advise?
[08:36:06] <aburass> modulus: so i cannot use redis as a nosql store?
[08:38:14] <kali> aburass: both are great tools, relatively easy to use and maintain. both widely extend the k/v store idea, but in very different directions
[08:38:55] <kali> aburass: you need to read about them, there is not a one line answer to this question
[08:40:31] <aburass> kali: i worked with redis as a cache store only, but not as a nosql store. from your knowledge and experience, is it as reliable and easy as mongo?
[08:41:23] <kali> ask the redis guys, my experience with it is too old to be of value
[09:13:43] <Guest44126> i have a problem with dropping a collection: http://pastie.org/9787740 could someone help me?
[09:30:38] <mnms_> Guys, I want to use mongodb for a system which will store logs: 200,000 records per day or more. I will need to make some aggregates on this data, simple operations. Is mongodb a good choice for that scenario, and what problems could I have?
[09:32:03] <mnms_> I would like to use features like sharding and replication also. A lot of people try to convince me mongodb is not a good choice and that I could have problems with data analysis
[09:32:17] <winem_> did you take a look at the ELK stack? elasticsearch, logstash or fluentd and kibana as dashboard. logstash or fluentd might do your job
[09:32:46] <mnms_> 2 years on the market sounds very risky to me
[09:35:53] <mnms_> So where could mongodb have a big problem? :)
[09:40:35] <winem_> don't see any problems. but you might get the same result with less effort for reporting and aggregation, e.g. if you use a tool whose scope is exactly that
[09:40:49] <winem_> but this is the wrong channel. so let's concentrate on mongodb here :)
[09:44:40] <Guest44126> winem_, help me with remove shard http://pastie.org/9787701
[09:47:54] <mnms_> winem_: my question was about mongodb ;)
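A minimal sketch of the kind of simple daily aggregate mnms_ describes, assuming a hypothetical logs collection with ts and level fields:

    // count log entries per level for one day (collection and field names are assumptions)
    db.logs.aggregate([
      { $match: { ts: { $gte: ISODate("2014-12-17"), $lt: ISODate("2014-12-18") } } },
      { $group: { _id: "$level", count: { $sum: 1 } } },
      { $sort: { count: -1 } }
    ]);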
[09:58:50] <winem_> please run db.locks.find() against the config database to see if there are any locks
[10:08:44] <Guest44126> winem_, yes i have more than 2000 records in this collection
[10:09:51] <Guest44126> but in locks there is one record with the same name as the collection which can't be moved
[10:09:59] <Guest44126> what should i do to fix it?
[10:12:22] <Guest44126> i have 3 chunks which can't be migrated to another shard and the move error is: the collection metadata could not be locked with lock migrate-{ temp_actual_time: MinKey }
[10:13:19] <Guest44126> if i run removeshard command i see draining ongoing and "chunks" : NumberLong(3)
[10:14:51] <Guest44126> what is the best way to solve it?
[10:16:58] <winem_> do you run mongodb 2.2 or newer?
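For the stuck migration lock, the usual first step is to inspect the lock documents on a mongos; forcing a lock free by hand is a last resort that is only safe when the balancer is stopped and no migration is actually running. The namespace below is a placeholder:

    use config
    db.locks.find({ _id: "mydb.mycoll" }).pretty()   // state: 2 means the lock is currently held
    sh.setBalancerState(false)                       // stop the balancer before touching anything
    db.locks.update({ _id: "mydb.mycoll", state: 2 }, { $set: { state: 0 } })  // force-release a stale lock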
[12:51:54] <Absorbent> I'm building a relatively small blog with koajs
[12:52:21] <Absorbent> Should I make a separate collection for the tags and the comments, or keep them in the same collection as the posts?
[14:04:27] <sgo11> hi, a newbie here. I read the doc; it says "the server will automatically close the cursor after 10 minutes of inactivity". but I tried it in the mongo shell with c = db.testData.find(), and after 20 minutes I can still access c.next(). why?
[14:12:28] <agenteo> hi, can you do an ORDER BY FIELD() in mongodb?
[14:14:28] <Absorbent> I think this channel is dead
[14:14:36] <Absorbent> never had an answered question here
[14:25:48] <sgo11> agenteo, what time can you normally get an answer here? which time zone? thanks.
[14:25:49] <safzouf> We need to fix a cursor timeout
[14:26:22] <safzouf> the problem is that maxTimeMS is only available in 2.6!
[14:26:48] <safzouf> And if I try to call the function from PHP I get an error that the function is not defined
[14:27:00] <kali> sgo11: because the results are fetched in batches. so you don't need to get back to the server each time you call next()
[14:27:01] <safzouf> how can we fix a timeout from PHP?
[14:27:07] <agenteo> I am on EST. I remember asking questions in the afternoon and getting feedback; I don't mean immediately, but I remember getting good insight after 10 or 15 minutes
[14:27:38] <Guest44126> someone know how to fix this problem: the collection metadata could not be locked with lock migrate-{ temp_actual_time: MinKey } ?
[14:28:26] <safzouf> Any timeout experts here? :)
[14:28:49] <kali> safzouf: you can tweak the behaviour when performing the query
[14:28:52] <sgo11> kali, thanks a lot for your reply. how can I test the cursor timeout then? I am really new to mongodb.
[14:34:43] <sgo11> kali, the point is: since I am a non-English speaker, without tests I may misunderstand those English words. tests give me confidence. :)
[14:35:02] <sgo11> kali, by the way, does "fetch in batch" mean "store the result in memory, so every time I call c.next(), it fetches the result from memory instead of from the server"?
[14:35:28] <kali> safzouf: mongo has supported at least "no timeout" for ages
[14:35:41] <kali> safzouf: i'm not sure about specific non infinite values
[14:35:56] <kali> safzouf: so maybe there is something weird with the driver
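In the shell, both ends of what kali describes look like this (maxTimeMS needs 2.6+; the collection name is a placeholder, and whether the PHP driver of this era exposes the same knobs depends on the driver version):

    db.mycoll.find().maxTimeMS(5000);                      // cap server-side execution time
    db.mycoll.find().addOption(DBQuery.Option.noTimeout);  // ask the server never to expire the cursor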
[14:37:01] <kali> sgo11: well, this is very much a corner case, but i think you can call .batchSize(1) on your cursor
[14:47:21] <sgo11> kali, maybe I misunderstand this cursor timeout concept. if the results are fetched in a single batch and iterating the cursor does not need to go back to the server anymore, what is the point of keeping the cursor open on the server for 10 minutes? (does my question make sense?) thanks.
[14:54:09] <kali> sgo11: results are fetched in batches no larger than the "batchSize". so if the result set is bigger, the server keeps the cursor around for you to be able to pull more results
[14:54:46] <kali> sgo11: the timeout only exists to discard cursors that have been forgotten, by crashed clients for instance
[14:57:34] <sgo11> kali, ok, finally got it. thank you very much. :)
[14:58:47] <sgo11> finally, I got "Error: getMore: cursor didn't exist on server, possible restart or timeout?". very cool.
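Putting kali's suggestion together with sgo11's result, the test looks roughly like this (using a batch size of 2 rather than 1, since the old wire protocol treats a batch size of 1 like limit(1) and closes the cursor immediately):

    var c = db.testData.find().batchSize(2);  // small batches force repeated trips to the server
    c.next(); c.next(); c.next();             // the third call issues a getMore
    // wait more than 10 minutes, then:
    c.next();  // "Error: getMore: cursor didn't exist on server, possible restart or timeout?"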
[16:50:04] <felipesabino> can anybody help me automate a replica set set up using docker? http://stackoverflow.com/questions/27449306/what-is-the-proper-way-of-setting-a-mongodb-replica-set-using-docker-and-fig
[16:50:04] <felipesabino> I am having a hard time having to go to the images and type rs.initiate() all the time, it is not an automatic process at all...
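One common way to script the step felipesabino is typing by hand is to run the shell non-interactively against one member once all containers are up; the host names here are assumptions:

    mongo --host mongo1 --eval 'rs.initiate({_id: "rs0", members: [
      { _id: 0, host: "mongo1:27017" },
      { _id: 1, host: "mongo2:27017" },
      { _id: 2, host: "mongo3:27017" }
    ]})'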
[17:28:15] <Max-P> Hi, I have a text index in one of my collections, but for some reason it is very slow the first time I query a keyword. According to the profile, it took just a little over 12 seconds for the query. Following queries are always near instant. Any ideas why?
[17:31:03] <Max-P> The entire database's index size is 800MB, so it should fit in RAM just fine, but the current memory usage is only 150MB.
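That pattern is consistent with a cold cache: the index is paged in from disk the first time it is hit. On the MMAPv1 engine of that era, one way to pre-warm it after startup is the touch command; the collection name is a placeholder:

    db.runCommand({ touch: "articles", data: false, index: true });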
[17:45:52] <geri> hi, does someone use other databases than mongodb?
[17:51:11] <Max-P> geri, Why? What are your goals? Are you trying to compare databases?
[17:51:40] <geri> trying to understand what both are good at
[18:32:26] <vacho> can someone please take a look here and tell me if that's a good structure? Any advice is appreciated: http://stackoverflow.com/questions/27536820/whats-a-good-data-model-in-mongo-for-my-data
[18:35:19] <Max-P> vacho, Looks pretty good to me. That's sort of the point of having JSON documents.
[18:36:20] <Max-P> You can easily update the subdocuments with the positional operators and $set. But unless you plan on editing those really often, you might as well just load and save the whole document entirely.
[18:38:07] <vacho> Max-P: ok, I was more worried if there are any drawbacks to this approach. I am new to MongoDB, just want to get some expert advice before I build my entire app around this data model :)
[18:38:30] <vacho> Max-P: And I really appreciate your time to review my data model, thanks!
[18:40:40] <Max-P> vacho, The best way to store documents really depends on how you will be accessing and updating your data.
[18:41:47] <Max-P> Sometimes it's convenient to make the structure a tiny bit different for performance, but most of the time the logical way works just fine
[18:43:51] <vacho> Max-P: well, I will store about 50MB of data total, and I will have around 50-100 updates a day. The system will be used by 10 users daily.. nothing crazy
[18:55:23] <Max-P> vacho, I update large documents in chunks of 2-4MB quite often in my app. In your case it won't matter at all, no worries
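A sketch of the positional-operator update Max-P mentioned above; the collection and field names are hypothetical stand-ins for vacho's model:

    db.projects.update(
      { _id: 1, "tasks.name": "design" },     // match one element of the tasks array
      { $set: { "tasks.$.status": "done" } }  // $ refers to the matched array element
    );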
[19:29:37] <cheeser> with wiredtiger, yes. kind of.
[19:29:41] <mike_edmr> you know, that missing feature where it manages disk allocation in real time instead of growing uncontrollably and requiring you to step down servers, compact, and rejoin
[19:54:28] <mrmccrac> for what it's worth, replacing NumberLong(1) instances with 1 (and not "1") in the dump .json files did seem to work
[19:56:03] <jabclab> hey all, just wondering if a large log file can potentially slow down Mongo querying at all?
[19:56:11] <jabclab> and what about particularly verbose logging?
[19:57:54] <mrmccrac> file system logging can often be a source of high I/O usage
[19:58:06] <mrmccrac> and yes degrade performance for others
[19:58:50] <jabclab> mrmccrac: great, thanks- i'm guessing a mongod.log of ~3GB is far above what it should be?
[19:59:44] <mrmccrac> difficult to say since i dont know your environment
[20:01:24] <jabclab> sure, but rotating this file will likely lead to a performance improvement? or would it likely be negligible?
[20:02:06] <mrmccrac> the size of the file itself probably doesn't matter too much; it's mostly the total number of individual writes, regardless of current log file size
[20:02:33] <mrmccrac> in other words rotating wont give you a noticeable performance improvement i suspect
[20:02:47] <mrmccrac> reducing the total number of log messages being recorded will
[20:06:22] <jabclab> ah ok brilliant, thanks- and that's achieved just by adding `quiet = true` to mongod.conf?
[20:08:01] <mrmccrac> i believe there is a logLevel option to control the verbosity of logging
[20:12:18] <jabclab> ok great, thanks for all your help :)
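For reference, both knobs are runtime server commands; the verbosity value 0 is the default, quietest setting:

    db.adminCommand({ setParameter: 1, logLevel: 0 });  // lower log verbosity without a restart
    db.adminCommand({ logRotate: 1 });                  // rotate the log file without a restart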
[20:51:53] <keksike> I'm using mongoose to connect to mongodb. For some reason my mongod.exe log shows that it opens 5 connections when I run the server. Why is this?
[20:51:55] <culthero> what is the place in debian where you can add some scripts to run on boot? Someone from here helped me with it and for the life of me I don't know where it is
[20:52:29] <keksike> https://github.com/Keksike/luuppiquotes/blob/master/app.js heres my node server
[21:46:35] <mrmccrac> 2.8 will have document level locking with WiredTiger
[22:09:10] <Guest11305> hi. I have a collection with thousands of documents which all contain a polygon or shape based on coordinates, and I would like to query a single coordinate on the earth and, for each shape it is contained in, return that document
[22:09:32] <Guest11305> I tried using $geoWithin, but i'm not sure it works with a coordinate
[22:10:28] <Guest11305> for example, db.stores.find{
[22:10:43] <Guest11305> does not work because i think point does not work with $geoWithin
[22:11:49] <Guest11305> can anyone suggest a good way to query my document, given that POLYGON is a polygon of a city which is in the stores document?
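For point-in-polygon queries the operator is usually $geoIntersects rather than $geoWithin ($geoWithin asks which documents fall inside a given shape, the reverse of this question). A sketch, assuming the polygons live in a 2dsphere-indexed field named loc:

    db.stores.find({
      loc: {
        $geoIntersects: {
          $geometry: { type: "Point", coordinates: [ -73.98, 40.75 ] }  // [longitude, latitude]
        }
      }
    });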
[23:08:36] <armin> hi. i just created a db called ara and use that one, but ara.createUser() tells me "ReferenceError: ara is not defined (shell):1" - any help?
[23:13:57] <chetandhembre_> are you using the mongo shell?
[23:15:41] <Synt4x`> getting an error I haven't seen before, it works in Mongo shell but not in PyMongo. Here is the mongo shell command: db.converted_pbp.find({'game-code':20001}).sort({'timestamp':1}).limit(10)
[23:15:53] <Synt4x`> here is the PyMongo command: dataSet = db.converted_pbp.find({'game-code':20001}).sort({'timestamp':1})
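The mismatch is in sort(): the shell takes a document, but PyMongo takes a key name or a list of (key, direction) pairs, so passing a dict raises an error. A sketch of the equivalent call (client and database names are assumptions):

    import pymongo

    client = pymongo.MongoClient()
    db = client['mydb']
    dataSet = db.converted_pbp.find({'game-code': 20001}) \
                              .sort([('timestamp', pymongo.ASCENDING)]) \
                              .limit(10)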
[23:21:01] <armin> any hint what i'm doing wrong when getting a "Fri Dec 19 00:18:50 uncaught exception: password can't be empty" exception when trying to create a user using db.addUser? (i specified pwd)
[23:52:57] <chetandhembre_> have you switched to the admin database?
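For armin's two errors: the shell never defines a variable named after the database; you switch with use and then call methods on db, and the 2.6 createUser form takes the password inside the document. The user name, password, and role below are examples:

    use ara
    db.createUser({
      user: "araUser",
      pwd: "s3cret",
      roles: [ { role: "readWrite", db: "ara" } ]
    });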