PMXBOT Log file Viewer


#mongodb logs for Wednesday the 2nd of July, 2014

[00:04:19] <leotr> does db.collection.reIndex() block database?
[00:07:45] <cheeser> iirc, it takes a lock out on the index at least and probably the database.
[00:09:12] <joannac> probably the database, same as the ensureIndex() command
[00:09:30] <cheeser> almost certainly the same code path
[00:13:14] <leotr> i have a collection with 300 000 docs in it. How much time does it take to index it on a single field?
[00:13:41] <leotr> i did background: true... how do i see progress of index creation?
[00:14:11] <cheeser> db.currentOp() will show if it's still running.
[00:14:20] <cheeser> not sure progress is exposed, though
[00:14:46] <leotr> yes it shows!
[00:14:48] <leotr> thanks
[00:15:57] <leotr> by the way shouldn't it be fast to create an index for ~400 000 documents?
[00:16:19] <leotr> or probably it depends on other db operations
[00:21:14] <joannac> yes
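As cheeser notes above, db.currentOp() is the way to see a background index build; the entries of interest sit in its inprog array, with a msg field that reports documents scanned. A minimal sketch of picking those entries out (the op shapes below are illustrative, not real server output — in the shell you would pass db.currentOp().inprog):

```javascript
// Sketch: filter db.currentOp().inprog for in-progress index builds.
// Background builds report a msg beginning with "Index Build", which
// carries a documents-scanned progress counter.
function indexBuilds(inprog) {
  return inprog.filter(function (op) {
    return op.msg && op.msg.indexOf("Index Build") === 0;
  });
}

// Illustrative data only, not real currentOp output:
var ops = [
  { opid: 1, op: "query", msg: "" },
  { opid: 2, op: "insert", msg: "Index Build (background) 120000/300000 40%" }
];
var builds = indexBuilds(ops);
// builds holds the single index-build op; its msg shows the progress
```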
[02:21:50] <bePolite> Good day
[02:23:45] <bePolite> I am trying to start mongo db and I get the error http://git.io/zpbl4g
[02:24:45] <joannac> are you running a mongod process?
[02:26:03] <bePolite> joannac
[02:26:09] <bePolite> I don't think so
[02:26:33] <LouisT> oh, joannac, not sure if you're the one to talk to about this.. but on MMS, i get an alert saying that my mongodb instance is connectible from the internet, but i only run mongodb on localhost..?
[02:26:34] <bePolite> joannac: I'm just running 'mongo' from my terminal
[02:27:26] <joannac> bePolite: then that's why it's not working
[02:28:23] <joannac> you need to start the server process, so you have something to connect to http://docs.mongodb.org/manual/tutorial/manage-mongodb-processes/#start-mongod-processes
[02:28:41] <joannac> LouisT: you have bindIp on?
[02:28:50] <LouisT> yes, to 127.0.0.1 only
[02:29:35] <LouisT> i assume the mms service runs a mongodb instance as well.. do you think it might be connecting back to itself as i've added my server as a domain that points back to localhost?
[02:31:30] <joannac> LouisT: perhaps. what group?
[02:31:40] <LouisT> LTDev
[02:34:34] <joannac> oic
[02:34:45] <joannac> yeah, file a mms ticket (mms feature request)
[02:38:13] <bePolite> Thanks joannac that worked
[02:49:55] <NEO_> anyone there?
[02:50:12] <joannac> NEO_: don't ask to ask. just ask your question
[02:50:24] <joannac> you could've had an answer by now, assuming I could answer it
[02:50:55] <NEO_> Mongodb has a new Full Text Search with MongoDB and Node.js
[02:51:06] <NEO_> does that also work with gridfs
[02:51:24] <NEO_> gridfs is a real file system based on mongodb
[02:51:51] <joannac> gridfs is for storing files...
[02:51:57] <joannac> text search is for text fields
[02:52:46] <joannac> I'm not sure how you think text search would work there
[02:53:36] <NEO_> i want to word mine the data on the file system
[02:54:19] <joannac> but gridfs stores chunks of files in binary format...
[02:54:31] <joannac> it's not in a form you can mine
[02:55:40] <NEO_> and what is the difference then with mySQL
[02:59:00] <NEO_> Thanks for your time joannac
[03:00:16] <joannac> NEO_: um, in what sense?
[03:04:22] <NEO_> Hey @joannac
[05:48:53] <virgilivs> does mongo support $geoWithin applied to GeoJSON GeometryCollections, or must I query individual Geometries one at a time?
[05:51:12] <virgilivs> The reason I ask is that I would like to query for points contained in any polygon in a set of polygons.
[05:53:16] <virgilivs> Perhaps I should try MultiPolygon
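MultiPolygon is indeed the usual fit here: MongoDB 2.6+ accepts a MultiPolygon $geometry in $geoWithin, which matches points inside any of its polygons in one query. A sketch of the query document (the collection and field names are made up for illustration):

```javascript
// Sketch: one $geoWithin query over several polygons via MultiPolygon,
// instead of querying each Polygon geometry one at a time.
var query = {
  loc: {
    $geoWithin: {
      $geometry: {
        type: "MultiPolygon",
        coordinates: [
          [[[0, 0], [0, 10], [10, 10], [10, 0], [0, 0]]],       // polygon 1
          [[[20, 20], [20, 30], [30, 30], [30, 20], [20, 20]]]  // polygon 2
        ]
      }
    }
  }
};
// In the shell: db.places.find(query)
```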
[06:03:36] <geohot> so perf question, i'm inserting 50000 docs in a new collection
[06:03:58] <geohot> this is taking 8 seconds in mongoc, but only 4 seconds in python
[06:04:48] <geohot> first off, both these times seem quite high, but second off, how could python be twice as fast?
[06:05:08] <geohot> doing bulk inserts in both, dropping the collection before hand
[06:18:48] <joannac> what's mongoc?
[06:21:23] <geohot> the mongo c native driver
[06:21:58] <geohot> hmm, so i was using the mongo bundled with meteor
[06:22:28] <geohot> i switched to the latest and it's fast, who knows
[06:22:42] <geohot> ~1 second for c, ~4 seconds for python
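For comparison with the driver timings above, the 2.6 mongo shell exposes the same bulk-insert path via the Bulk API. This is a server-bound sketch (it needs a running mongod, and the collection name is illustrative), shown untested:

```javascript
// Sketch: bulk-insert 50000 docs from the 2.6 mongo shell.
var bulk = db.testcol.initializeUnorderedBulkOp();
for (var i = 0; i < 50000; i++) {
  bulk.insert({ seq: i });
}
bulk.execute(); // one round of batched inserts, comparable to the driver tests
```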
[06:55:51] <mehwork> in mongoose, how do you say: new mongoose.Schema({ things: {'foo': 'bar', 'baz': 'quux' } }); In other words, how do you make a schema of an array of type String?
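In mongoose, an array-of-strings field is declared with `[String]` rather than an object literal of sample values. A sketch (this requires the mongoose package and a connection, so it is shown as an untested outline):

```javascript
// Sketch of a mongoose schema with an array-of-String field.
var mongoose = require("mongoose");

var thingSchema = new mongoose.Schema({
  things: [String]  // an array whose elements are cast to String
});
```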
[08:01:01] <Stiles> Hey guys. I'm using GridFS to store photos and pull them when they're needed. It seems that GridFS is REALLY slow at pulling the photos out though (like 2-3 seconds) the more I pull, the longer it seems to take. If I have like 5 on a page the first might take 2-3 then second is 3-4 third will be 4-5 etc..
[08:02:17] <rspijker> how big are the photos and how large are your chunks?
[08:03:46] <Stiles> rspijker, the photos vary but the larger one I have seems to be 181917
[08:04:03] <Stiles> 90089 seems to be the largest actually
[08:04:11] <rspijker> bytes?
[08:04:19] <Stiles> yeah
[08:04:24] <Stiles> so no larger than 100kb
[08:04:26] <rspijker> and how large are the chunks?
[08:05:07] <Stiles> How can I tell?
[08:06:51] <Stiles> Ah I see 261120
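The 261120 figure is GridFS's default chunkSize (255 * 1024 bytes), so each of these sub-100KB photos fits in a single chunk and chunking is unlikely to be the bottleneck. A quick check of the arithmetic:

```javascript
// How many chunks does a GridFS file of a given length occupy?
function chunkCount(fileLength, chunkSize) {
  return Math.ceil(fileLength / chunkSize);
}

var DEFAULT_CHUNK_SIZE = 255 * 1024; // 261120, the value Stiles sees
chunkCount(181917, DEFAULT_CHUNK_SIZE); // the largest photo: a single chunk
```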
[08:28:28] <spacepluk> should I expect better mapreduce performance by using a cluster?
[08:33:27] <kali> spacepluk: sharding may improve things, but you have to try it
[08:34:45] <spacepluk> I see, thanks
[08:35:20] <dandre> Hello,
[08:37:50] <dandre> I can read on this page:http://docs.mongodb.org/manual/faq/fundamentals/ that 32 bits version of MongoDB are limited to 2G of data.
[08:37:50] <dandre> and also that 'do not deploy MongoDB to production on 32-bit machines.'
[08:37:50] <dandre> I wonder if, in my case where the 2GB limitation isn't an issue, it is safe to use the 32-bit version of mongodb
[08:54:38] <inad922> hi
[08:54:56] <inad922> How can I bypass the limitation on find on a collection to display only 10 results?
[08:55:10] <inad922> I only have 117 elements in the collection and I want to see them all
[08:55:35] <lqez> See the manual : http://docs.mongodb.org/manual/reference/method/db.collection.find/
[08:55:42] <lqez> there are 'limit' and 'skip' method
[08:55:56] <stefandxm> the limit is just in mongoshell no?
[08:56:03] <lqez> yes
[08:56:04] <Nodex> it's only applied in the shell
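As stefandxm and Nodex say, the cap is a mongo shell display convenience, not a server-side limit; the shell prints a screenful at a time and waits for "it". Two server-bound sketches for seeing everything at once (untested here, collection name illustrative):

```javascript
// In the mongo shell only: raise the per-screen batch size,
// or materialize the whole cursor in one go.
DBQuery.shellBatchSize = 200;      // print up to 200 docs per screenful
db.mycollection.find().toArray();  // or pull the full result set at once
```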
[09:13:33] <W0rmDr1nk> dandre, I dont think it is
[09:13:53] <W0rmDr1nk> dandre, there is some things I read about 32 bit mode that sounded scary - but cant remember specifics
[09:14:09] <W0rmDr1nk> dandre, I think it silently drops data once it gets too big
[09:14:18] <W0rmDr1nk> dandre, why not run in 64 bit ?
[09:14:36] <W0rmDr1nk> dandre, it's 2014 after all - EM64T/AMD64 have been around for a long time by now
[09:21:58] <dandre> W0rmDr1nk: yes but I have a pretty large number (more than 100) of servers that can't be upgraded to a 64-bit kernel. And the app I must run on them uses mongodb
[09:22:11] <W0rmDr1nk> hmm
[09:22:18] <W0rmDr1nk> dandre, there are some Issues with 32 bit
[09:22:40] <W0rmDr1nk> http://blog.serverdensity.com/does-everyone-hate-mongodb/
[09:22:44] <W0rmDr1nk> look there
[09:23:54] <W0rmDr1nk> link is broken
[09:24:03] <W0rmDr1nk> https://web.archive.org/web/20140531134319/https://blog.serverdensity.com/does-everyone-hate-mongodb/
[09:24:28] <W0rmDr1nk> just search for 32
[09:25:01] <W0rmDr1nk> actually ok - the problem was not related to 32 bit
[09:25:08] <W0rmDr1nk> it was cos they were using unconfirmed writes
[09:25:13] <W0rmDr1nk> writes = updates
[09:27:28] <stefandxm> http://i.imgur.com/gZApYAw.jpg
[09:57:52] <W0rmDr1nk> dandre, so it should be fine (no warranties ;))
[09:58:13] <W0rmDr1nk> stefandxm, [nsfw] plz ;)
[10:08:55] <dandre> ok thanks
[11:13:15] <Guest85656> got a best-practice question ... using vb.net (c# driver)... i have this complex linq query that to me is taking too long to run (around 4 seconds) ... i think it can be faster if i use either the aggregation framework or map reduce ... when creating either of those... can i store those routines in system.js within mongodb and call them from the webpage ... or should i just write it in the webpage (i prefer server... just wondering)
[11:59:44] <stefan_l> I have a setup with two mongo shards, the application ought to only perform only INSERT queries, however monitoring indicates not just INSERTs like expected but also GETMORE and some UPDATE queries. Is this to be expected due to sharding rebalancing or is there some other probable cause?
[12:13:40] <tscanausa> updates will happen to the config database as information moves around. but it could also be your mongo client
[12:13:57] <rspijker> getmore is probably just replication
[12:14:05] <rspijker> but it could be all kinds of stuff, really
[12:14:13] <rspijker> anything that gets results from a cursor
[12:18:04] <stefan_l> OK, thanks! I see updates on the data-shards themselves
[12:18:38] <stefan_l> The application would normally just write data, not read anything
[12:23:01] <__NiC> I'm using mongo 2.4, and when I set keyFile, is any of the replication between my nodes affected by creating users in any of the databases, or is it purely done through the keyfile?
[12:24:44] <__NiC> (or does it only affect my ability to get the nice replica set prompt)
[12:51:55] <__NiC> hm, users being messed up doesn't seem to impact the replica set itself from what I can see
[12:53:12] <__NiC> But I read something about a user having to have read access to config and/or local to be able to add shards or something like that. I didn't quite get how that fit into everything else though..
[13:26:47] <saml> so i get segfault when running mongoimport
[13:27:02] <saml> mongoimport -h rs0/mongo01,mongo02
[13:27:14] <saml> and mongo01 and mongo02 use different mongod versions lolololololols
[13:27:18] <saml> 2.2 and 2.4
[13:32:13] <tscanausa> that could be a problem. typically you don't want to run mixed versions of cluster software
[13:41:00] <TeTeT> Hi, is there a way to unset all fields in a document that are an empty string ""?
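One way to answer TeTeT's question is to build a $unset spec per document from its empty-string fields and apply it keyed on _id. The helper below is plain JS and the shell usage is sketched in comments (collection name illustrative):

```javascript
// Build a $unset spec listing every field of doc whose value is "".
function emptyStringUnsets(doc) {
  var unsets = {};
  for (var key in doc) {
    if (doc[key] === "") {
      unsets[key] = ""; // the value is ignored by $unset
    }
  }
  return unsets;
}

// In the mongo shell (sketch, needs a running server):
// db.mycollection.find().forEach(function (doc) {
//   var unsets = emptyStringUnsets(doc);
//   if (Object.keys(unsets).length > 0) {
//     db.mycollection.update({ _id: doc._id }, { $unset: unsets });
//   }
// });

var spec = emptyStringUnsets({ _id: 1, a: "", b: "keep", c: "" });
// spec names only the empty-string fields a and c
```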
[14:14:06] <ue> hello, im having trouble upgrading mongodb from 2.4.9 to 2.6.1
[14:14:26] <ue> can someone help me with the best way to upgrade?
[14:14:50] <ehershey_> are you having specific problems or just don't know where to begin?
[14:15:13] <ue> i already had mongo 2.4.9 working
[14:15:37] <ue> but when downloaded 2.6.3, it was working only on the directory where i downloaded 2.6.3
[14:15:52] <ue> i want my default mongo to run 2.6.3
[14:16:03] <ue> is there a way to uninstall 2.4.9?
[14:17:31] <ehershey> of course
[14:17:41] <ehershey> what os are you on and how did you install 2.4.9?
[14:18:30] <ue> mac os
[14:18:41] <ue> i did it via homebrew
[14:19:28] <ue> https://www.irccloud.com/pastebin/OOpVZhXb
[14:27:06] <ehershey> that doesn't seem to have anything to do with mongodb
[14:27:34] <ehershey> did you install 2.6.3 via homebrew?
[14:27:42] <ehershey> homebrew should make it pretty easy
[14:29:56] <ue> i wanted to do it via homebrew but i got the above error :(
[14:31:54] <ehershey> you could try asking in #homebrew
[14:32:02] <ehershey> but like I said I don't think that has anything to do with mongodb
[14:32:32] <ehershey> you can move the 2.6.3 install folder anywhere you like
[14:32:38] <ehershey> but I would fix your homebrew environment
[14:32:46] <ehershey> maybe run 'brew doctor' or reinstall homebrew
[14:36:28] <ue> ill do that
[14:36:47] <ue> but another question is: what is the best way to remove mongo 2.4.9
[14:36:57] <ue> like all the data and connections
[14:42:53] <ue> they say i need an invitation to join #homebrew
[14:43:09] <ue> if u r there, can u send me an invitation?
[15:20:45] <rspijker> ue: I can join just fine… maybe you need to be identified?
[15:24:45] <ue> how can i do that?
[15:30:15] <rspijker> ue: /msg nickserv help
[15:31:35] <jamesshijie> Hello, I'm using pymongo and have a query that I'm doing a count() on that's very very slow
[15:31:49] <jamesshijie> I'm new to MongoDB, but I think what I need is some good indexing
[15:32:06] <jamesshijie> Can I get some advice on how to index this specific query?
[15:32:11] <jamesshijie> Here's the query:
[15:32:13] <jamesshijie> events = db_events.find({'user_id': self.id, 'action': 'slide_complete', 'location': {'location_id': slide, 'location_type': 16}})
[15:32:25] <jamesshijie> then I do an events.count() on it
[15:32:44] <jamesshijie> I'm using v2.4.9 but it's still painfully slow (15 second page load times)
[15:34:48] <ue> maybe u can index on location
[15:34:55] <kali> jamesshijie: have you tried the obvious index ? {user_id:1, action: 1, location:1 } ?
[15:35:01] <ue> i think the function is: ensureIndex
[15:35:43] <kali> jamesshijie: also, count() has better optimizations in 2.6 than 2.4 when indexes are present
[15:35:54] <jamesshijie> kali: that's good to know
[15:36:02] <jamesshijie> I'll upgrade and put that index in
[15:36:26] <kali> jamesshijie: even in 2.4, it should make a big difference
[15:36:29] <jamesshijie> Would I need to specify the index in my query?
[15:37:00] <kali> jamesshijie: you don't need to. just create the index, the optimizer should pick it
[15:37:09] <kali> jamesshijie: explain() on the query will confirm it
[15:37:32] <jamesshijie> ok great. Thanks a ton. I'll implement that and see how it goes. Might come back if I get stuck, hopefully I won't.
[15:37:34] <kali> jamesshijie: and you can hint() the right index to the optimizer, but it should not be necessary
[15:54:45] <jamesshijie> Holy crap
[15:54:55] <jamesshijie> kali: It's orders of magnitude faster
[15:55:07] <jamesshijie> kali: I can't thank you enough. Gahh this is amazing.
[16:04:56] <ue> how did u create the index??
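The index kali suggested is created in the shell roughly as below; this is a server-bound sketch (untested here) using the 2.4-era ensureIndex helper, later renamed createIndex:

```javascript
// Compound index matching the query's equality predicates.
db.events.ensureIndex({ user_id: 1, action: 1, location: 1 });

// Confirm the optimizer picks it up: in 2.4, explain() should report a
// BtreeCursor on this index rather than a BasicCursor (collection scan).
db.events.find({
  user_id: 123,
  action: "slide_complete",
  location: { location_id: 456, location_type: 16 }
}).explain();
```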
[16:15:33] <ArSn> when I do a remove without any query, where does it start to remove? from the lowest ID?
[16:33:34] <d0x> Hi, to keep the jsons in sync with my application i need to update all jsons. currently i implement it like this: db.col.find().forEach(function(doc){...}). Is there a way to compute the .forEach on all cores?
[16:37:52] <d0x> Is it possible to execute http://docs.mongodb.org/manual/reference/method/cursor.forEach/ in parallel?
[17:29:46] <d0x> Is it possible to execute http://docs.mongodb.org/manual/reference/method/cursor.forEac
[17:29:49] <d0x> ups
[17:30:06] <d0x> Hm, i just saw that even map reduce is not running on multiple cors
[17:30:08] <d0x> cores
[17:30:12] <d0x> hm
[17:34:05] <kali> d0x: nope, it's not. mongodb is not optimized for that, but for fast queries
[17:34:27] <kali> d0x: if you need heavy batch processing, you'll have to pull the data out and process it asynchronously
[17:35:18] <kali> d0x: and if it's in the context of interactive use, you need to remodel the data
[17:35:24] <d0x> kali: I just need it if we perform an update on our backend (to keep the structure in sync)
[17:35:46] <d0x> I mean with update "deploying a new version"
[17:36:20] <kali> d0x: well, mongodb is not good for massive update either. no database is :)
[17:37:27] <kali> d0x: progressive data migration is the only valid option when the database gets bigger
[17:39:40] <d0x> okay, ty. Do you know any use cases describing "progressive data migration" in a java/spring context?
[17:41:38] <kali> d0x: nope. and let's face it, it's a mess
[17:41:59] <d0x> kali: okay, thanks :)
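The "progressive data migration" kali mentions is usually done lazily: tag documents with a schema version and upgrade each one the first time it is read, instead of rewriting the whole collection at deploy time. A minimal sketch with made-up field names:

```javascript
// Lazy migration sketch: upgrade a document to the current schema version.
var CURRENT_VERSION = 2;

function migrate(doc) {
  var v = doc.schemaVersion || 1;
  if (v < 2) {
    // Illustrative v1 -> v2 change: split a single "name" field.
    var parts = (doc.name || "").split(" ");
    doc.firstName = parts[0] || "";
    doc.lastName = parts.slice(1).join(" ");
    delete doc.name;
  }
  doc.schemaVersion = CURRENT_VERSION;
  return doc;
}

// On read: migrate(doc), then persist it back with an update keyed on _id,
// so each document is rewritten at most once.
var migrated = migrate({ _id: 1, name: "Ada Lovelace" });
```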
[18:10:44] <alex285> hello, I need some theoretical documentation
[18:10:53] <alex285> I have an account/profile model
[18:11:33] <alex285> and I was wondering if I should embed the account inside profile (I think that's the right approach) or the profile inside account
[18:11:52] <alex285> profile will be connected to lot of things
[18:12:00] <alex285> any recommendations plz?
[18:14:25] <tscanausa> alex285: modeling questions are generally better when you describe your use case in more detail
[18:16:09] <alex285> tscanausa,Each user has a unique profile. Profile keeps ratings, karma and things that a user has uploaded. It is a download service
[18:16:57] <alex285> tscanausa, I am using "User" only for authentication
[18:17:48] <kali> why do you need two concepts ?
[18:18:34] <tscanausa> does a user need multiple price
[18:18:43] <tscanausa> multiple profiles?
[18:18:43] <alex285> kali, because I don't want to extend the authentication I am using (omniauth)
[18:19:17] <alex285> tscanausa, no a user has only one profile
[18:19:33] <alex285> I can embed profile into a user
[18:19:45] <alex285> or embed a user to a profile
[18:20:03] <alex285> right now I am using the second strategy
[18:20:26] <tscanausa> they seem like the same thing and you are adding unneeded complexity
[18:20:28] <alex285> but the 1st is kinda easier, so I want to ask if it is right too
[18:20:55] <alex285> so you reckon to have a single object?
[18:22:18] <alex285> but will user.profile => artwork work?
[18:22:38] <tscanausa> i reckon you have a single object.
[18:22:39] <alex285> can I embed profile to a user and then connect things to profile?
[18:25:41] <alex285> tscanausa, I don't want to use the single-model approach, to save me maintaining the authentication system (devise in this case)
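The single-object shape tscanausa recommends keeps the auth fields at the top level and embeds the one-per-user profile; uploads and ratings then hang off the embedded sub-document. A sketch with made-up field names:

```javascript
// Sketch: one document per user, profile embedded since it's strictly 1:1.
var user = {
  _id: 1,
  email: "user@example.com",   // authentication fields stay at the top level
  profile: {                   // embedded profile sub-document
    karma: 42,
    ratings: [5, 4],
    uploads: ["artwork-123"]   // references to uploaded items
  }
};
// user.profile.uploads reaches the connected things directly, no join needed
```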
[20:12:15] <wiscas> hello guys, I've tried to upgrade a mongodb hdd as it was running out of space, so I've copied the content from one mountpoint to a new one, but when I'm trying to restart the db I get the following error:
[20:12:15] <wiscas> Wed Jul 2 20:09:20.654 [conn20] xxxxxx.userNonAtomic Deleted record list corrupted in bucket 3, link number 7, invalid link is 6647145:1ef, throwing Fatal Assertion
[20:12:36] <wiscas> I'm not a mongodb expert.. can someone tell me what I'm looking at here and how to fix it?
[20:12:44] <wiscas> where to read about it would be cool too :)
[20:13:57] <akp> hello.
[20:14:24] <wiscas> hello
[20:15:03] <akp> how does one do an insert in to an existing collection? i've done db.<collection>.insert ( {} ); my data
[20:15:14] <cheeser> well, just like that.
[20:15:28] <akp> but when i do the db.<collections>.find() it only returns the entry i just made
[20:15:33] <wiscas> also, --repair is not fixing it
[20:16:46] <akp> am i doing something wrong with the way i am using the insert statement?
[20:21:09] <wiscas> anyone can give me some lights pls?
[20:26:04] <stefandxm> wiscas, it's not good for you
[20:26:36] <wiscas> is there any way to fix it?
[20:26:46] <stefandxm> yeah just stop it
[20:27:07] <wiscas> i dont have any mongod process running at this point
[20:27:12] <wiscas> that happens when I'm starting it up
[20:29:31] <wiscas> stefandxm, I've also tried to start it with --journal and still it will crash with the same error
[21:12:24] <Pulpie> is there a way to see when clients connected to a mongodb?
[21:12:35] <Pulpie> even if they aren't active connections
[21:13:20] <LouisT> uhh read the logs?
[21:13:32] <Pulpie> hmm
[21:13:41] <Pulpie> im not sure I have access to those...
[21:18:53] <Pulpie> oh I do :) yay
[21:19:00] <ehershey> yay
[21:21:54] <federated_life> Pulpie: likely, you'll want to do db.currentOp(), or db.adminCommand({ connPoolStats: 1 })
[21:33:29] <Pulpie> federated_life: for what?
[21:33:41] <Pulpie> to find current operations right?
[21:33:54] <Pulpie> I needed past ones, logs worked well
[22:26:51] <mango_> hi wondering if anyone is starting the M202 next week?