PMXBOT Log file Viewer


#mongodb logs for Friday the 19th of December, 2014

[03:47:56] <jerome-> Is it possible to modify an index? Change a field from ascending to descending for example
[03:48:09] <jerome-> or remove a field in an index
[03:55:21] <joannac> jerome-: drop, reindex
[03:55:28] <jerome-> that's all?
[03:55:45] <joannac> yes?
[03:55:57] <jerome-> I'm working on an UI to create, drop and maybe update indexes
[03:56:16] <jerome-> so I guess it simplify my UI :)
[03:56:17] <jerome-> thanks
[04:01:45] <acidjazz> if i have an attribute in my collection/document that's an object of key=>value
[04:01:52] <acidjazz> can i sort by its amount of keys?
[04:02:06] <acidjazz> sort of like how JS does Object.keys(myobj).length ?
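
There is no direct server-side analogue of Object.keys(myobj).length in the MongoDB of this era. Two hedged sketches in the mongo shell, with hypothetical collection and field names; the second needs a much newer server (3.4+) that supports $objectToArray:

    // option 1: store a key count next to the object at write time, then sort by it
    db.things.insert({ meta: { a: 1, b: 2 }, metaKeyCount: 2 })
    db.things.find().sort({ metaKeyCount: -1 })

    // option 2 (MongoDB 3.4+): compute the key count in the aggregation pipeline
    db.things.aggregate([
      { $addFields: { keyCount: { $size: { $objectToArray: "$meta" } } } },
      { $sort: { keyCount: -1 } }
    ])
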
[04:26:21] <jerome-> it seems that ensureIndex is deprecated, but in the docs I see only examples with ensureIndex
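
A minimal mongo shell sketch of the drop-and-recreate approach joannac describes above, with hypothetical collection and field names; createIndex is the replacement name behind the deprecation warnings, while older shells and most docs of the time still spell it ensureIndex:

    db.items.dropIndex({ price: 1 })      // drop the existing ascending index
    db.items.createIndex({ price: -1 })   // recreate it with the field descending
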
[07:09:45] <yoshk> can someone recommend ways to speed up tests that involve interaction with MongoDB? loading and discarding of test fixture for each test case is a tad too slow, so i'm looking for something like an in-ram instance or mock library.
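
One common approach at the time, sketched with hypothetical paths and ports: point a throwaway mongod at a RAM-backed directory so fixture loads and drops never touch disk:

    mkdir -p /dev/shm/mongo-test
    mongod --dbpath /dev/shm/mongo-test --port 27018 --nojournal --smallfiles --noprealloc
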
[07:24:11] <fred-fri> I'm a n00b, would appreciate any input on this mongoose schema, mainly the tags and the scores. Are they correctly defined? Is it ok to simply have them defined inside the IdeaSchema like this, or should they have their own separate schemas and be referenced similarly to how user is? http://pastebin.com/t9SGbvzn
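
Without seeing the pastebin, a hedged mongoose sketch of the pattern being asked about (all names hypothetical): simple tags and scores can stay embedded in the schema, while user remains a reference:

    var mongoose = require('mongoose');
    var IdeaSchema = new mongoose.Schema({
      user:   { type: mongoose.Schema.Types.ObjectId, ref: 'User' },  // referenced document
      tags:   [String],                                               // embedded array of strings
      scores: [{ value: Number, votedAt: Date }]                      // embedded subdocuments
    });
    module.exports = mongoose.model('Idea', IdeaSchema);
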
[08:01:34] <ut2k3> Hi guys, I have 4 SSH tunnels. 2 tunnels (ports 27017, 27018) go to a replica set which is also running locally on ports 27017 and 27018. The other 2 tunnels (ports 28017, 28018) go to another replica set which is running locally on ports 27017 and 27018. When I use the PHP mongo driver to connect to the replica set on 28017,28018, PHP does not use this replica set; it uses 27017,27018. Does the PHP driver read the replicati
[09:51:22] <Guest44126> hi, my config server floods my log file with "CMD fsync: sync:1 lock:0", so my log file is very large. How do I disable this? I tried adding quiet: true in the config but the logs are still there
[10:07:12] <jabclab> hey guys, would anyone be available to give me some advice around distinct queries?
[10:23:08] <techsethi_> Hi Folks, I am having an issue where explain suggests the query is doing a full collection scan. Could you please give me some pointers? Here is my gist. https://gist.github.com/yayati-tc/16f89e03d045bfa4ef28
[11:20:01] <avril14th> Hello, I have a collection of floats, is there a way to write a query matching the entry with the float closest to a given number?
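
There is no single "closest value" operator; a hedged mongo shell sketch with hypothetical names: fetch the nearest neighbour on each side of the target (an index on the field keeps both lookups cheap) and compare them in the client:

    var target = 3.7;
    var above = db.values.find({ v: { $gte: target } }).sort({ v: 1 }).limit(1).toArray()[0];
    var below = db.values.find({ v: { $lte: target } }).sort({ v: -1 }).limit(1).toArray()[0];
    var closest = !above ? below
                : !below ? above
                : (above.v - target <= target - below.v ? above : below);
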
[14:15:27] <daniel86> Hi! I am currently trying to figure out why some queries turn slow after adding some new data to mongo. Using the explain method I saw that "indexBounds" have a strange value after appending the data. Am I right that this value should represent the range given by the query? Here is the output for the query before adding the data (about 4GB of data in the DB): http://pastebin.com/tsaTFfbr ; And here is the output after adding about
[14:15:27] <daniel86> Any ideas why the lower bound turned into 'true' instead of being a date?
[14:39:05] <chetandhembre__> is there an upsert command in mongodb? I am using pedis
[14:39:21] <chetandhembre__> sorry mongo native client driver
[14:39:38] <cheeser> "native client driver?"
[14:41:21] <chetandhembre__> java driver provided by mongodb
[14:42:02] <cheeser> ah
[14:43:05] <Deck`> Are there any features of 64-bit mongodb regarding memory usage? I have a dispute. We have a mongod instance on a server with 4GB RAM; free shows that 2.5GB is free. My opponent says (literally) we have to increase RAM to 16GB so that mongo will be able to use memory properly
[14:43:14] <Deck`> I doubt that
[14:43:30] <cheeser> chetandhembre__: https://api.mongodb.org/java/current/com/mongodb/DBCollection.html#update(com.mongodb.DBObject,%20com.mongodb.DBObject,%20boolean,%20boolean)
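
The last two boolean arguments of the linked Java method are upsert and multi; the mongo shell equivalent, sketched with hypothetical names, is:

    db.counters.update(
      { _id: "pageviews" },    // query
      { $inc: { n: 1 } },      // update
      { upsert: true }         // insert the document if nothing matches
    )
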
[14:56:11] <Sticky> Deck`: more memory could improve disk caching, but whether it would help, or whether you actually need it, depends on the usage
[14:57:15] <Deck`> Sticky, yes, I know it. mongodb tries to use the whole memory and mmaps files. If another process needs some memory, mongo yields it, right?
[14:58:47] <Sticky> yeah, I believe so looking at http://docs.mongodb.org/manual/faq/storage/
[14:59:07] <daniel86> Seems my problem is similar to this one: http://stackoverflow.com/questions/13121817/nested-queries-date-range
[14:59:07] <daniel86> Still no clue what's wrong in particular; the person on stackoverflow just stated that it might be "unclean data". But what is wrong with the data in particular? Since it's 5GB of data I can't find anything by looking at it
[15:00:34] <Sticky> Deck`: I would look at the stuff about the size of your working set, and if it does fit in 4gb, if not, maybe more ram would help
[15:01:12] <brianseeders> by "unclean", he maybe means that the guy stored a bool and a timestamp in the same field
[15:01:20] <brianseeders> like a bool in one document but a timestamp in another
[15:03:13] <daniel86> yes that makes sense. but queries for 'true' or 'false' as value for transforms.header.stamp return nothing
[15:03:36] <Deck`> Sticky, https://cpaste.org/pcvudtmwv here are stats of db. What would you suggest?
[15:03:55] <daniel86> brianseeders: any way to query for the types used in a particular field or something similar ?
[15:04:25] <daniel86> i just want to validate what's wrong so that i can tell it to the guy who gave me the data
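
One way to check, sketched in the mongo shell using the same $type probe daniel86 runs later: count documents per BSON type code for the field (2 = string, 8 = bool, 9 = date, 17 = timestamp):

    [2, 8, 9, 17].forEach(function (t) {
      print("type " + t + ": " +
            db.tf.count({ "transforms.header.stamp": { $type: t } }));
    });
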
[15:05:48] <brianseeders> Deck` it looks like you have less than 4gb of data+indexes
[15:05:56] <brianseeders> if you got more RAM, you wouldn't have anything to put into it
[15:06:18] <brianseeders> except connections I guess
[15:06:30] <brianseeders> but you have plenty of memory for that free
[15:06:59] <Deck`> yes, I see. thank you
[15:07:28] <asd1> helllo
[15:09:33] <asd1> \join #ruby
[15:10:35] <cheeser> nevar!
[15:13:37] <daniel86> Ok.... `db.tf.find( { "transforms.header.stamp": { $type: 9 } } )` returns many results and all other types return nothing, except for timestamp (17), where I get the following error: error: { "$err" : "wrong type for field () 17 != 9", "code" : 13111 }
[15:13:59] <daniel86> so that means that there is a timestamp value there where it should be a date ?
[15:16:50] <brianseeders> well
[15:17:12] <brianseeders> I don't really know anything about the timestamp type, but this says:
[15:17:14] <brianseeders> "The BSON timestamp type is for internal MongoDB use. For most cases, in application development, you will want to use the BSON date type. See Date for more information."
[15:18:10] <brianseeders> so it sounds like it should be a date
[15:20:27] <daniel86> yes. The entries look like this: { "stamp" : { "$date" : 1396512465172 } } which might correspond to dates. Still haven't found the timestamp entries in the data
[15:28:36] <daniel86> grepping the json dump for "\$timestamp" also yields nothing
[15:43:18] <chetandhembre_> how can I do an upsert for multiple documents?
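
A single update with upsert:true can insert at most one document when nothing matches, so per-document upserts are usually done with the 2.6 bulk API; a hedged shell sketch with hypothetical names (the Java driver exposes the same operations):

    var bulk = db.users.initializeUnorderedBulkOp();
    bulk.find({ _id: 1 }).upsert().updateOne({ $set: { name: "alice" } });
    bulk.find({ _id: 2 }).upsert().updateOne({ $set: { name: "bob" } });
    bulk.execute();   // each document is inserted or updated independently
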
[15:47:37] <daniel86> brianseeders: thanks for your help. I had no luck yet tracking down the problem (also not sure if the wrong type for field error really means that there are timestamp values). I will try to find it out next week
[15:49:18] <brianseeders> no prob, sorry I couldn't help more
[17:21:31] <blizzow> If I run a mongorestore of the same bson file twice, will it overwrite the original records or duplicate them? Assuming the records have not changed since they were originally imported.
[17:54:38] <vacho> gentlemen... db.shows.insert({"name": "Big Bang Theory, The", "seasons": [{"number": "3", "episodes": [{"number": "1", 'title' : 'episode 1'}, {"number": "2", 'title' : 'episode 2'}]}]})
[17:55:05] <vacho> why does that insert a whole new document? I already have an existing document with "name" : "Big Bang Theory, The" .... any help is appreciated
[17:55:17] <vacho> I am trying to update an existing document.
[18:07:27] <vacho> whoa, help is pretty limited here it seems.
[18:10:32] <cheeser> inserts always create new docs
[18:18:44] <vacho> cheeser: ok, I am trying with $set, but having the same experience.
[18:19:04] <vacho> cheeser: if I do set on a key that does not exist.. will it still insert/update it?
[18:19:30] <cheeser> you set fields not documents.
[18:19:49] <vacho> cheeser: yes, if I set a field that does not exist.. will it automatically create that field?
[18:20:24] <cheeser> what happened when you tried?
[18:21:01] <vacho> cheeser: it created a whole new document, even though my first parameter(the query) matches an existing document
[18:21:20] <cheeser> pastebin your stuff
[18:22:48] <Guest11305> hi I have a polygon which is failing to be indexed with a 2dsphere, but it is a valid geojson polygon
[18:22:59] <Guest11305> can anyone help me troubleshoot the issue/
[18:26:06] <ronyo> hi everyone, I have a collection with a valid geojson polygon and i'm unable to create a 2dsphere index on it, the error is malformed geometry..
[18:26:25] <ronyo> is anyone here an available to help/
[18:26:33] <ronyo> *and
[18:26:55] <vacho> cheeser: http://pastie.org/9790340
[18:27:09] <cheeser> insert is not an update
[18:29:10] <vacho> cheeser: I thought insert with the $set operator makes it an update?
[18:29:23] <cheeser> why would you think that?
[18:29:35] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.insert/
[18:30:01] <vacho> cheeser: sorry.. I mean.. I am not updating in this case. I am adding a new season (season 2), so it's not an update, it should be an insert?
[18:30:50] <cheeser> $set is only for updates
[18:30:53] <mrmccrac> (dont know your issue but sounds like you might want to know about upserts)
[18:31:12] <cheeser> or just the update() function
[18:31:25] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.update/
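
The distinction that matters in the exchange that follows, sketched with the show document above (the extra field name is hypothetical): an update document without operators replaces everything except _id, while $set touches only the named fields:

    // replacement: the whole document body is overwritten with { seasons: [] }
    db.shows.update({ name: "Big Bang Theory, The" }, { seasons: [] })

    // $set: only the named field changes, the rest of the document is left alone
    db.shows.update({ name: "Big Bang Theory, The" }, { $set: { network: "CBS" } })
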
[18:31:53] <vacho> db.shows.update({"name": "Big Bang Theory, The"}, {"seasons": [{"number": "2", "episodes": [{"number": "1", 'title' : 'episode 1'}, {"number": "2", 'title' : 'episode 2'}]}]} )
[18:32:17] <vacho> cheeser: that still creates a new document instead of updating current.
[18:32:32] <cheeser> db.shows.count() returns 2?
[18:33:43] <vacho> cheeser: I see what's happening..I am trying to add a new season (season 2), but my query is overwriting season 1 and replacing it with season 2
[18:33:49] <cheeser> yep.
[18:33:58] <cheeser> you want $push or $addToSet
[18:35:15] <vacho> i'll look into push
[18:37:22] <vacho> db.shows.update({"name": "Big Bang Theory, The"}, {$push: {"seasons": [{"season number": "1", "episodes": [{"episode number": "1", 'title' : 'episode 1'}, {"episode number": "2", 'title' : 'episode 2'}]}]}})
[18:37:32] <vacho> still does not add it.
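
A hedged sketch of what the suggested $push might look like here; note that wrapping the season in [ ] (as in the attempt above) appends a nested array as one element, whereas pushing the object itself appends one season:

    db.shows.update(
      { name: "Big Bang Theory, The" },
      { $push: { seasons: { number: "2",
                            episodes: [ { number: "1", title: "episode 1" },
                                        { number: "2", title: "episode 2" } ] } } }
    )
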
[18:38:46] <cheeser> pastebin: find before, update, and find after
[18:39:04] <vacho> the results of find? sure.
[18:40:30] <vacho> http://pastie.org/9790358
[18:42:30] <cheeser> update() not insert()
[18:44:59] <vacho> cheeser: my bad.. http://pastie.org/9790364
[18:50:37] <cheeser> do this:
[18:50:54] <cheeser> is this a dev db that you can trash?
[18:51:55] <cheeser> well, anyway:
[18:52:33] <cheeser> db.shows.count(); db.shows.find(); db.shows.update(/* your updates here */); db.shows.count(); db.shows.find()
[18:57:08] <ronyo> Hi ! i just posted this question to stackoverflow I have been able to reproduce my issue and have it up there .
[18:57:15] <ronyo> Can anyone help me with it? http://stackoverflow.com/questions/27572195/2dsphere-index-on-document-with-valid-geojson-polygon-fails-with-error-cant-ex
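
For reference, a minimal sketch of a polygon that does index (collection and field names hypothetical): the outer ring is an array of [longitude, latitude] pairs whose first and last positions are identical, and it must not self-intersect:

    db.areas.insert({
      loc: {
        type: "Polygon",
        coordinates: [ [ [0, 0], [3, 6], [6, 1], [0, 0] ] ]   // closed ring
      }
    })
    db.areas.ensureIndex({ loc: "2dsphere" })   // createIndex in newer shells
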
[19:01:40] <ronyo> ..
[19:02:01] <MacWinner> if I have a simple replica set that I want to distribute across 2 datacenters, what is the maximum latency between the nodes that Mongo can tolerate? Is it ok to replicate across geographic areas?
[19:02:14] <ronyo> ..
[19:02:14] <ronyo> ..
[19:02:16] <MacWinner> any pointers to articles or insight into this would be greatly appreciated
[19:09:02] <vacho> cheeser: yes I can def trash it..brb gonna run ur cmd's
[19:16:47] <dundermifflinpc> Hey guys, any idea why this error is produced: "No package mongodb-org is available."
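
That error usually means the MongoDB yum repository is not configured; the 2.6-era install docs have you create a repo file roughly like the one below (the exact baseurl is an assumption here, check the current docs) and then install again:

    # /etc/yum.repos.d/mongodb.repo
    [mongodb]
    name=MongoDB Repository
    baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
    gpgcheck=0
    enabled=1

    sudo yum install -y mongodb-org
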
[19:21:19] <chetandhembre_> ERROR: child process failed, exited with error number 1
[19:21:30] <chetandhembre_> I am getting this error while starting mongod
[22:09:10] <_newb> could i humbly request a scan of my question: http://stackoverflow.com/questions/10277174/upsert-in-an-embedded-document
[22:34:34] <glaukommatos> Is there some limit to the number of meters I can specify when using geoNear with a 2dsphere index? I've noticed that no matter how big I make max_distance, I'm not seeing any results that go further than about 900km. Is this a normal limitation, or should I theoretically be able to use a massive number and get everything in the index to show up?
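
One thing worth checking, sketched with hypothetical names: the geoNear command returns at most 100 documents by default regardless of maxDistance, so a result cut-off can masquerade as a distance cut-off; the num option raises the cap:

    db.runCommand({
      geoNear: "places",
      near: { type: "Point", coordinates: [ -122.4, 37.8 ] },
      spherical: true,
      maxDistance: 5000000,   // metres
      num: 1000               // default is 100
    })
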
[23:23:02] <JeffC_NN> What happens when a mongo node runs out of disk space?
[23:54:15] <cheeser> JeffC_NN: things break