#mongodb logs for Wednesday the 6th of November, 2013

[00:02:48] <bjori> tab1293: I suspect the interviews start early december
[00:03:36] <bjori> tab1293: I don't know though.. may depend on which office you applied to?
[00:04:29] <tab1293> bjori: Okay just wondering, thanks for everything!
[00:06:04] <bjori> :)
[00:11:03] <rcsrcs> Can I ask a concept question here?
[00:11:13] <jkitchen> why not
[00:12:18] <rcsrcs> OK - new to mongo. Have an instance up. I created an admin user per the docs. Stopped mongo instance, restarted with auth=true. When I connect via command-line mongo.exe with user/pass I don't get an error. But I get unauthorized if I try to do anything (like "show dbs"). This is win8
[00:13:29] <rcsrcs> Any ideas on what I'm missing to get this working?
[00:18:26] <bjori> did you login to the database you created the user on?
[00:18:34] <bjori> and which mongod version is this?
[00:19:30] <rcsrcs> yes - on the same machine, just ran "mongo -u admin -p mongopass --authenticationDatabase admin"
[00:20:06] <rcsrcs> version 2.4.7
[00:21:03] <bjori> rcsrcs: try just running "mongo" then type: use admin
[00:21:09] <bjori> db.auth("admin", "mongopass")
[00:21:27] <rcsrcs> yes - that worked
[00:21:42] <bjori> and then 'show dbs'
[00:21:53] <bjori> wait, how did it work? did it return 1 or 0?
[00:22:14] <rcsrcs> Sorry - first part.. did login, got a "1" back
[00:22:22] <rcsrcs> did show dbs and got: listDatabases failed: { "ok" : 0, "errmsg" : "unauthorized" }
[00:22:34] <bjori> how did you create the user?
[00:22:50] <bjori> sounds like you didn't create an admin user, just a lousy user without any roles
[00:22:59] <rcsrcs> db.addUser( { user: "admin",
[00:22:59] <rcsrcs> pwd: "mongopass",
[00:23:00] <rcsrcs> roles: [ "userAdminAnyDatabase" ] } )
[00:23:32] <rcsrcs> without auth turned on, I first did "db = db.getSiblingDB('admin')"
[00:23:37] <rcsrcs> then that statement - and that's it
[00:24:30] <bjori> you really shouldn't be overwriting the 'db' variable, it's magical :)
[00:24:36] <bjori> but it shouldn't have done any harm in this case
[00:25:01] <rcsrcs> it's from the docs! http://docs.mongodb.org/manual/tutorial/add-user-administrator/
[00:25:08] <bjori> do show collections
[00:25:11] <joannac> userAdmin is user administration
[00:25:23] <rcsrcs> > show collections
[00:25:24] <rcsrcs> Tue Nov 05 19:24:03.820 error: {
[00:25:24] <rcsrcs> "$err" : "not authorized for query on admin.system.namespaces",
[00:25:24] <rcsrcs> "code" : 16550
[00:25:24] <rcsrcs> } at src/mongo/shell/query.js:128
[00:25:29] <joannac> I don't believe it gives you access to anything else
[00:25:31] <bjori> heh
[00:25:49] <joannac> other than user administration, which is system.users on every database
[00:25:56] <rcsrcs> Oh - that would make sense. So how to create a 'regular' read/write user for a database?
[00:26:16] <joannac> http://docs.mongodb.org/manual/reference/user-privileges/
[00:26:50] <rcsrcs> Sweet - thanks!
[00:27:04] <rcsrcs> OK - this gets me unstuck, thanks gang!!
[00:27:56] <bjori> that doc page is.. not cool
[00:30:35] <joannac> bjori: PM?
[00:32:26] <bjori> sure
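
For reference, joannac's point above is that userAdminAnyDatabase grants user administration only, not data access. A minimal sketch of a 2.4-era admin user that can also run "show dbs" (db.addUser() and these role names are the 2.4 syntax; listDatabases required clusterAdmin in that version):

    // in the mongo shell, before enabling auth (or while logged in as a user admin)
    use admin
    db.addUser({
        user: "admin",
        pwd: "mongopass",
        roles: [
            "userAdminAnyDatabase",   // manage users on any database
            "readWriteAnyDatabase",   // actually read and write data
            "clusterAdmin"            // listDatabases, i.e. "show dbs", in 2.4
        ]
    })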
[00:38:32] <synth_> hello, i'm new to mongodb and i'm currently connecting via the pymongo driver. i'm curious if there is an easy way to list collections for a database connection?
[00:42:50] <bjori> synth_: http://api.mongodb.org/python/current/api/pymongo/database.html#pymongo.database.Database.collection_names
[00:43:16] <synth_> thank you!
[00:45:01] <rcsrcs> FYI - in case it helps someone else (I saw this channel is logged). To create a new read/write user for a database, you can do this: create the database, create the user, then authenticate as that user - with this syntax
[00:45:03] <rcsrcs> > use TestDB
[00:45:03] <rcsrcs> switched to db TestDB
[00:45:04] <rcsrcs> > db.addUser( { user: "testdbuser",
[00:45:04] <rcsrcs> ... pwd: "mongopass",
[00:45:04] <rcsrcs> ... roles: [ "readWrite" ] } )
[00:45:04] <rcsrcs> > db.auth("testdbuser","mongopass")
[00:45:20] <rcsrcs> thanks again everyone...
[00:46:44] <synth_> probably should have pastebin'd that
[01:25:26] <bakhtiyor> Hi there
[01:29:43] <bakhtiyor> Anyone there?
[01:35:12] <Fervicus> what's the best way to get a random document from my collection?
[01:52:28] <rdegges_> Hi all, I'm new to mongo, and trying to figure out how to select all distinct city, states from my app. I'd like to do the equivalent of: select distinct city, state from businesses;
[01:52:39] <rdegges_> It looks like the .find() command can't be used with a distinct clause.
[01:52:47] <rdegges_> And the .distinct() clause doesn't seem to do what I need it to do.
[01:52:56] <rdegges_> I need to get documents that have a distinct city AND state pair together.
[01:53:00] <rdegges_> Any suggestions?
[01:54:25] <rdegges_> I'd love to do something like: mongo.db.businesses.find({}, {'state_abbreviation': 1, 'city': 1}).distinct({'city': 1, 'state': 1}) or something
[01:59:32] <k_sze[work]> Is it a bad idea to have a mix of document types in a collection?
[02:00:41] <rdegges_> Hey all, in regards to my previous question, I got the following solution working: http://stackoverflow.com/questions/11973725/how-to-efficiently-perform-distinct-with-multiple-keys
[02:00:48] <rdegges_> (Just in case anyone else is interested)
[02:00:56] <rdegges_> It does seem to return a weird result set format though, but oh well!
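
The Stack Overflow answer rdegges_ found boils down to grouping on the pair of fields. A minimal sketch (field names taken from his question; in 2.4 the shell wraps the output in a "result" array, which is the "weird result set format" he mentions):

    db.businesses.aggregate([
        { $group: { _id: { city: "$city", state: "$state" } } }
    ])
    // => { "result" : [ { "_id" : { "city" : ..., "state" : ... } }, ... ], "ok" : 1 }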
[02:27:44] <basichash> How can I start mongodb?
[02:27:46] <basichash> *run
[02:28:03] <jkitchen> basichash: how did you install it?
[02:28:09] <basichash> apt
[02:28:28] <jkitchen> on ubuntu, debian, mint?
[02:28:39] <basichash> yeah ubuntu 12.04
[02:28:45] <basichash> thought mongod did it
[02:28:47] <jkitchen> service mongodb start
[02:29:04] <basichash> brilliant, thanks
[02:29:22] <jkitchen> yupyup
[02:31:11] <basichash> jkitchen: when I try to run the shell, I'm getting "couldn't connect to server 127.0.0.1 shell/mongo.js:84; exception: connect failed". Is that due to localhost being in use?
[02:31:17] <basichash> or something else
[02:31:51] <jkitchen> basichash: is mongod actually running?
[02:33:07] <basichash> jkitchen: how can I check?
[02:33:24] <jkitchen> basichash: 'service mongodb status' might help, or you can just check the process list
[02:33:38] <basichash> stop/waiting
[02:34:07] <basichash> not too sure what that means really lol
[02:34:34] <jkitchen> means upstart thinks the service is not running
[02:34:37] <jkitchen> 'ps aux | grep mongo'
[02:35:00] <basichash> the process does exist
[02:35:49] <jkitchen> more than likely it's one you started earlier with 'mongod'
[02:35:50] <jkitchen> kill it.
[02:35:55] <jkitchen> then service mongodb start
[02:37:24] <basichash> right, same message I got before
[02:37:31] <basichash> so assuming it's started now
[02:38:14] <jkitchen> service mongodb status
[02:38:16] <jkitchen> find out
[02:38:51] <basichash> same as before, 'mongodb stop/waiting'
[02:39:05] <jkitchen> ok, check process list again.
[02:39:18] <basichash> it's there
[02:39:23] <jkitchen> different pid?
[02:39:32] <basichash> yep
[02:39:50] <jkitchen> shrug
[02:39:55] <basichash> I do get this error when trying to start: "Wed Nov 6 02:33:23 [initandlisten] exception in initAndListen: 10296 dbpath (/data/db/) does not exist, terminating"
[02:41:00] <joannac> basichash: So does that path exist?
[02:41:21] <basichash> nope
[02:44:11] <basichash> I'm going to try to reinstall it
[02:48:52] <joannac> You don't need to reinstall mongodb
[02:49:01] <joannac> But mongodb needs somewhere to put its data files
[02:49:09] <joannac> the default place is /data/db
[02:49:21] <jkitchen> the default place from ubuntu's config is /var/lib/mongodb
[02:50:03] <joannac> well, the place it's looking for is /data/db
[02:51:27] <jkitchen> if he just ran 'mongod' from the command line, it's probably using all the defaults, yea
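
To summarize the thread: the Ubuntu package starts mongod with /etc/mongodb.conf (dbpath /var/lib/mongodb), while a bare `mongod` defaults to /data/db, which must already exist. A sketch of both fixes, assuming the standard package layout:

    # option 1: use the packaged service and its config
    sudo service mongodb start
    tail /var/log/mongodb/mongodb.log     # check why it stopped, if it did
    # option 2: create the default dbpath for a bare `mongod`
    sudo mkdir -p /data/db
    sudo chown `whoami` /data/db
    mongod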
[04:50:44] <pplcf> "Wed Nov 6 05:40:19.269 [conn1534] insert stats.vk_send_gift ninserted:1 keyUpdates:0 locks(micros) w:117 209ms",
[04:50:45] <pplcf> "Wed Nov 6 05:41:02.969 [conn1535] insert stats.vk_ho_finish ninserted:1 keyUpdates:0 locks(micros) w:94 554ms",
[04:50:55] <pplcf> I have a bunch of these in my log
[04:51:11] <pplcf> that's not good, right?
[06:16:36] <jinmatt> what's the best way to shut down mongodb? Should I use db.shutdownServer() from the console or just shut down the service from bash? when I use 'sudo service mongodb stop', the next time I start it again I cannot access the mongo console due to the mongod.lock
[06:29:34] <Garo_> I'm planning to move a config server in my setup. Is it ok if the hostname is the same but the underlying ip changes? All places (which I could find) use the hostname to specify the server
[07:20:56] <kali> Garo_: http://docs.mongodb.org/manual/tutorial/migrate-config-servers-with-same-hostname/
[07:22:21] <Garo_> kali: thanks, I looked another doc before and it wasn't as clear on the topic
[07:52:50] <jinmatt> what's the best way to shut down mongodb? Should I use db.shutdownServer() from the console or just shut down the service from bash? when I use 'sudo service mongodb stop', the next time I start it again I cannot access the mongo console due to the mongod.lock
[07:53:08] <jinmatt> btw I'm running on a ubuntu 12.04 LTS server
[07:56:40] <NodeX> service mongodb stop
[07:57:07] <NodeX> rm the lock, that shouldn't happen
[07:58:31] <pplcf> is there a way to disable indexing for some part of data? i.e. older than three months
[07:59:38] <NodeX> no, data is indexed period
[08:00:09] <NodeX> if you want just three months of data then I suggest you use a capped collection or move the data to another collection and index that
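
A minimal sketch of the capped-collection option NodeX mentions (collection name and size are hypothetical; a capped collection discards its oldest documents once the size limit is reached):

    db.createCollection("recentStats", { capped: true, size: 512 * 1024 * 1024 })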
[08:00:19] <jinmatt> NodeX: the lock happens when something is not right, doesn't it? if I use db.shutdownServer() and then run 'service mongodb start' the next time, I can successfully log into the mongo console without errors about the lock
[08:01:38] <NodeX> a lock occurs when a clean shutdown doesn't happen
[08:01:43] <jinmatt> ya
[08:01:58] <jinmatt> so is there something wrong with service mongo stop?
[08:02:11] <jinmatt> *mongodb stop
[08:22:45] <NodeX> jinmatt : I can't answer that; perhaps it doesn't have permissions?
[08:23:00] <jinmatt> ok
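
For reference, the clean-shutdown sequence being discussed, as it would look from the mongo shell (the lock-file advice is NodeX's; with journaling enabled mongod recovers on restart, without it you would want --repair):

    use admin
    db.shutdownServer()
    // after an unclean shutdown, per NodeX:
    //   sudo rm /var/lib/mongodb/mongod.lock
    //   sudo service mongodb start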
[08:30:20] <liquid-silence> anyone here using mongodb-native with node?
[08:38:25] <NodeX> liquid-silence : yeh
[08:38:42] <liquid-silence> NodeX I have a massive question
[08:38:48] <liquid-silence> mind if I pm for a few minutes?
[08:39:56] <NodeX> I block PMs sorry
[08:40:02] <NodeX> best to just ask it in here ;)
[08:40:20] <liquid-silence> ok so
[08:40:37] <liquid-silence> https://github.com/mongodb/node-mongodb-native/blob/1.4/examples/gridfs.js
[08:40:40] <liquid-silence> line 57
[08:41:00] <liquid-silence> http://pastebin.com/jU1G4TD1
[08:41:06] <liquid-silence> that my implementation of that example
[08:41:16] <singh_abhinav> is it good practice to store user payment related data in mongodb
[08:41:30] <liquid-silence> I think it's tracing back to this prototype
[08:41:30] <liquid-silence> https://github.com/mongodb/node-mongodb-native/blob/1.4/lib/mongodb/gridfs/gridstore.js#L744
[08:41:50] <liquid-silence> but I am getting first argument must be a string or buffer
[08:42:36] <NodeX> singh_abhinav : do you mean do the whole transaction thru mongodb?
[08:43:05] <liquid-silence> NodeX any ideas?
[08:43:27] <singh_abhinav> yeah NodeX starting from shopping cart to final checkout and payment
[08:44:27] <NodeX> are you sure the ranges are true?
[08:44:55] <liquid-silence> well yes
[08:44:57] <liquid-silence> if I do
[08:45:01] <liquid-silence> console.log(data);
[08:45:02] <NodeX> singh_abhinav : Mongodb doesn't support ACID or rollbacks or anything like that so it would all really depend on how much you trust your app to not break
[08:45:02] <liquid-silence> i get
[08:45:15] <NodeX> \n is not punctuation
[08:46:58] <liquid-silence> start is 0 and chunksize is 1
[08:47:11] <liquid-silence> sorry chunksize is 2
[08:47:26] <liquid-silence> so it should read everything between byte 0 and 2
[08:47:30] <NodeX> what error does it yield?
[08:47:41] <singh_abhinav> NodeX: okay got it ..i will need to store user payments data in mysql then
[08:47:59] <liquid-silence> NodeX if I do GridStore.read(db, 'test', start, chunksize, function(err, data) {
[08:48:33] <NodeX> singh_abhinav : I didn't say that
[08:48:54] <liquid-silence> ok getting file does not exist now
[08:49:07] <liquid-silence> Filename in gridfs is test though
[08:50:08] <liquid-silence> even if I change test to the id of the object
[08:50:41] <singh_abhinav> NodeX: then what should be the ideal solution then? app may break, network failure may happen, database may crash
[08:52:24] <NodeX> singh_abhinav : Mysql is not the only database in the world that supports transactions/rollbacks
[08:55:43] <liquid-silence> NodeX why is it not finding my file?
[08:57:31] <NodeX> I don't have a clue, perhaps it doesn't exist?
[08:57:42] <liquid-silence> I am looking at it in robomongo
[08:57:43] <liquid-silence> :(
[08:57:55] <NodeX> different db maybe?
[08:58:17] <liquid-silence> ok sorted this one issue
[08:58:36] <liquid-silence> now it finds the file
[08:59:00] <liquid-silence> and writes out <Buffer 00 00 00 20 66 74 79 70 6d 70 34 32 00 00 00 00 69 73 6f 6d 69 73 6f 32 61 76 63 31 6d 70 34 31 00 00 6d f4 6d 6f 6f 76 00 00 00 6c 6d 76 68 64 00 00 0........>
[08:59:03] <liquid-silence> which is good
[08:59:11] <liquid-silence> but then it stalls
[08:59:15] <liquid-silence> nothing else happens
[08:59:52] <liquid-silence> the consumption device is not starting to play the video, it's just hanging; if I leave this for a few minutes I am sure it will start playing
[09:01:30] <liquid-silence> NodeX when doing GridStore.read(db, '52794a1100000066f7000005', 'test.mp4', start, chunksize, function(err, data) { } I get [Error: Illegal mode 2]
[09:01:50] <NodeX> I am not sure that's a mongodb problem, once you have the data and can write it, it becomes the web servers problem
[09:02:20] <liquid-silence> I know this, but my implementation of this is wrong
[09:02:33] <liquid-silence> and I am at the point where I don't know how to fix this anymore
[09:02:33] <NodeX> is the 5279..... an object id?
[09:03:48] <liquid-silence> no its a string
[09:03:50] <NodeX> you have 6 args in that not 5
[09:03:56] <liquid-silence> that is saved to _id
[09:04:11] <NodeX> GridStore.read(db, 'foobar2', 6, 7, function(err, data) { <----- 5 args including the callback
[09:04:17] <liquid-silence> when I do GridStore.read(db, 'test.mp4', start, chunksize function(err, data) {
[09:04:36] <NodeX> GridStore.read(db, '52794a1100000066f7000005', 'test.mp4', start, chunksize, function(err, data)<----- 6 args including callback
[09:04:48] <liquid-silence> [Error: File does not exist]
[09:04:54] <liquid-silence> but
[09:05:36] <liquid-silence> GridStore.exist(db, "test.mp4", function(err, result) { } is true
[09:06:35] <NodeX> I don't know the answer sorry, try posting on stack overflow or something
[09:07:14] <liquid-silence> NodeX doing GridStore.read(db, 'test.mp4', function(err, data) {
[09:07:23] <liquid-silence> That actually writes a buffer to the console
[09:07:58] <liquid-silence> thanks for the help anyway NodeX
[09:08:57] <liquid-silence> NodeX weird GridStore.read(db, 'test.mp4',6,7, function(err, data) { } writes a buffer
[09:09:26] <liquid-silence> now I am confused
[09:09:46] <liquid-silence> I presume 6 is start and 7 is size?
[09:11:50] <NodeX> it would appear so
[09:12:22] <liquid-silence> so the 6 = start, 7 = chunkzise?
[09:12:34] <liquid-silence> s/chunkzise/chunksize
[09:15:32] <liquid-silence> NodeX doing GridStore.read(db, 'test.mp4',0,7, function(err, data) { } it says the file does not exist
[09:17:52] <liquid-silence> NodeX start must be 1
[09:18:01] <liquid-silence> for some reason but yet, the code fails
[09:19:45] <liquid-silence> NodeX we were wrong: 6 == length, 7 == offset, so 6 == chunksize and 7 == start
[09:21:29] <liquid-silence> NodeX ok I think I am getting some where, its writing two buffers straight after each other
[09:21:37] <liquid-silence> <Buffer 00 00>
[09:21:37] <liquid-silence> <Buffer 00 00 00 20 66 74 79 70 6d 70 34 32 00 00 00 00
[09:21:37] <liquid-silence> 0 ...>
[09:21:49] <liquid-silence> but the player is still stalling
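
liquid-silence's conclusion matches the 1.x node driver docs: the optional arguments to GridStore.read are length first, then offset. A minimal sketch of reading a byte range (connection string is hypothetical; file name from the discussion):

    var MongoClient = require('mongodb').MongoClient,
        GridStore   = require('mongodb').GridStore;

    MongoClient.connect('mongodb://localhost:27017/test', function(err, db) {
        if (err) throw err;
        // GridStore.read(db, name, [length], [offset], callback)
        GridStore.read(db, 'test.mp4', 64 * 1024, 0, function(err, data) {
            if (err) throw err;
            console.log(data.length);   // data is a Buffer
            db.close();
        });
    });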
[10:33:01] <robothands> hi all, Mongo noob here. I've read that if your indexes won't fit in memory, then you will experience performance problems as they must be served from disk....makes sense
[10:33:27] <robothands> but adding up my indexes reaches 9.33GB on a server with 8GB RAM, and I see no issues, any hints as to what I'm missing here?
[10:33:37] <robothands> bit vague, sorry
[10:33:49] <joannac> Not all your indexes need to be paged in?
[10:34:19] <robothands> I see
[10:34:24] <kali> robothands: well, most of your indexes fit. maybe that's good enough for your database load
[10:34:54] <robothands> joannac any way I can dig further here and see what is paged?
[10:35:11] <robothands> thanks kali
[10:36:13] <kali> robothands: this may also hint that you have useless indexes, to be honest :)
[10:36:45] <kali> robothands: you might want to check that your app is actually using all of them
[10:36:48] <robothands> wouldn't be surprised, we have next to no Mongo knowledge in house :)
[10:39:14] <kali> robothands: you may want to try and play with this: http://eng.wish.com/mongomem-memory-usage-by-collection-in-mongodb/ (i never got it working, but haven't tried very hard)
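
A quick way to see where robothands' 9.33GB of indexes actually lives, sketched for the shell (totalIndexSize() reports per collection; db.stats().indexSize gives the per-database total):

    db.getCollectionNames().forEach(function(name) {
        print(name + ": " + db[name].totalIndexSize() + " bytes of indexes");
    });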
[10:40:08] <frankblizzard> whats the best way to check if a substring 'q' / query is either in 'name' or 'description' in mongodb?
[10:40:36] <liquid-silence> gridstore, mongodb native, is there a way to pass in the _id and not the name of the file?
[10:41:19] <frankblizzard> $or?
[10:44:29] <NodeX> frankblizzard : $or
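
A sketch of that $or answer with a regex substring match (collection name hypothetical; note an unanchored regex cannot use an index efficiently):

    var q = "pizza";   // the substring being searched for
    db.items.find({
        $or: [
            { name:        { $regex: q } },
            { description: { $regex: q } }
        ]
    });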
[11:27:50] <mariayan> why does mongodb return new fields called '__t'?
[11:29:04] <ron> are you anti-__t?
[11:37:25] <mariayan> ron: No, we just throw an error if there are any unrecognized fields in the request. The previous mongodb did not return __t
[12:09:12] <cheeser> mariayan: could you pastebin an example?
[12:50:47] <Walex> we have a semi-typical issue with MongoDB and memory usage and disk traffic, and first let me say that I understand well the memory-mapped file access logic and cache... It is not about that.
[12:52:02] <Walex> the issue is that the 'mongod' process (unfortunately 2.0.6) has an "anon" memory segment that eventually grows from a few GiB to 11-12GiB, and then disk traffic becomes very heavy as it crowds out the page cache.
[12:52:50] <Derick> 2.0.6?
[12:52:58] <Derick> I think it's time for you to upgrade, three times.
[12:53:04] <Derick> sorry, two times
[12:53:15] <Walex> the "anon" memory segment is perhaps likely a 'malloc' arena, but I don't really know. It started growing like crazy after a restart last week, and seems to grow linearly with number of queries received...
[12:53:50] <Walex> Derick: regrettably we are still running Debian 6/Squeeze :-/ and indeed considering doing or using a backport.
[12:54:01] <Derick> Walex: we have our own apt repository
[12:54:08] <Derick> please do not use the distrbution's ones
[12:54:50] <Derick> http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/
[12:54:51] <Walex> Derick: our team is Debian fanatics, but I guess given the issue, we can negotiate the use of a less sanctified repository :-)
[12:58:32] <bcave> hi all
[12:58:55] <Walex> BTW it would be interesting to know anyhow what that huge anon-mapped segment is being used for, given that MongoDB is supposed to use page cache for the memory mapped database files, indices and journals...
[12:59:18] <Derick> it memory maps files, I think "anon" is also used for that
[13:02:18] <bcave> i was wondering if anyone could help with determining an age from a date in mongo? I've been trying {$sub[new Date(),"user.dateOfBirth"]} and a number of other variations. user.dateofBirth is an ISODate
[13:02:45] <Walex> Derick: nah, it cannot be because 'anon' memory is process own data space, indeed usually 'malloc' arenas.
[13:03:10] <Derick> Walex: then, it could be a memory leak I suppose...
[13:03:44] <Walex> Derick: yes, I was worrying about that, and it is strange that it just "happened". I guess that the best advice remains to upgrade to a later version.
[13:26:24] <amitprakash> Hi, how can I cast Gridout object to a File type object? [python]
[13:27:26] <amitprakash> basically, I am using openpyxl, and it uses zipfile to open the file passed if isinstance(<file>, file) == False ... since zipfile expects a file path, I can't pass gridout to openpyxl
[13:27:42] <amitprakash> so I was wondering if I could typecast it to File
[13:55:00] <tomtom_> Hi everyone
[13:56:56] <tomtom_> I am using mongodb with mongoid on my rails app, and when I send a lot of requests I get a lot of 500s. It seems to come from mongodb not handling all the requests, but I don't know why, and I wish I could have some help
[14:02:39] <NodeX> are you having write locks?
[14:03:30] <tomtom_> I think so yes
[14:03:39] <tomtom_> but how can I see that ?
[14:04:31] <tomtom_> sorry for the noob questions but I am very newb to mongodb
[14:04:58] <NodeX> tail the mongodb.log
[14:05:13] <NodeX> tail -f /path/to/your/mongodb/log
[14:05:32] <tomtom_> nothing shows up
[14:05:34] <tomtom_> except
[14:06:13] <tomtom_> Wed Nov 6 14:03:40.329 [initandlisten] connection accepted from 127.0.0.1:43669 #106 (1 connection now open)
[14:06:13] <tomtom_> Wed Nov 6 14:03:40.354 [initandlisten] connection accepted from 127.0.0.1:43670 #107 (2 connections now open)
[14:06:13] <tomtom_> Wed Nov 6 14:03:40.388 [initandlisten] connection accepted from 127.0.0.1:43671 #108 (3 connections now open)
[14:06:13] <tomtom_> Wed Nov 6 14:03:40.413 [initandlisten] connection accepted from 127.0.0.1:43672 #109 (4 connections now open)
[14:06:21] <cheeser> oi. not here.
[14:06:31] <tomtom_> oops that wasn't refreshing
[14:06:49] <Derick> tomtom_: *please* use a pastebin
[14:08:33] <tomtom_> ok I'll use it next time
[14:09:06] <tomtom_> It seems I go up to 5 or 6 connections but I don't see any error message or anything
[14:09:27] <NodeX> are you doing silly things like closing the connection each time?
[14:10:18] <Zelest> sounds like you're using apache :o
[14:10:32] <tomtom_> well I am letting mongoid handle these things for me. I am using it for my rails app
[14:10:53] <Zelest> doesn't it spawn new workers as needed?
[14:10:54] <tomtom_> and I am behind puma on my local machine
[14:11:06] <Zelest> oh, then I'm clueless
[14:12:18] <NodeX> is it happening on reads or writes?
[14:12:49] <tomtom_> write. my service is batch inserting records
[14:14:05] <tomtom_> or documents as you call it
[14:15:33] <tomtom_> sorry for the copy / paste but it seems pastebin just went offline
[14:15:40] <tomtom_> 127.0.0.1 - - [06/Nov/2013 13:07:19] "POST /permutations/collect HTTP/1.1" 200 - 0.0241
[14:15:40] <tomtom_> 127.0.0.1 - - [06/Nov/2013 13:07:19] "POST /permutations/collect HTTP/1.1" 200 - 0.0251
[14:15:40] <tomtom_> 127.0.0.1 - - [06/Nov/2013 13:07:20] "POST /permutations/collect HTTP/1.1" 500 - 1.1278
[14:15:40] <tomtom_> 127.0.0.1 - - [06/Nov/2013 13:07:20] "POST /permutations/collect HTTP/1.1" 200 - 0.0276
[14:16:00] <tomtom_> that's the sort of trace I get from puma
[14:18:08] <cheeser> gist.github.com, ftr
[14:22:41] <NodeX> it's yielding a write lock for whatever reason
[14:23:24] <tomtom_> thx cheeser, done; maybe you will have more of a clue: https://gist.github.com/lothar59/7336788
[14:23:44] <tomtom_> hum
[14:26:24] <Walex> tomtom_: there are probably dozens of pastebin sites...
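
Two quick checks for NodeX's write-lock theory, sketched for 2.4-era tooling:

    // in the mongo shell: shows in-flight operations and their lock state
    db.currentOp()
    // from bash: run `mongostat 1` and watch the "locked db" and qr|qw columns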
[15:39:54] <eldub> Anyone know why I get this error sometimes? service mongod status
[15:39:54] <eldub> mongod dead but pid file exists
[15:42:45] <cheeser> probably your server died and couldn't clean up that pid
[15:42:50] <cheeser> pid *file*
[15:55:35] <platzhirsch> How can I query a document where I specify date nearest to the one I pass
[15:56:19] <NodeX> eh?
[15:57:23] <platzhirsch> let me rephrase: I want to query a document based on a date field. The query should then return the document with the date nearest to the one specified in the query. If there is a document with the same date, that one; if not, the one with the smallest time difference on the date field
[15:58:10] <platzhirsch> I think its discussed here: http://cookbook.mongodb.org/patterns/date_range/
[15:58:39] <NodeX> before the date or after it?
[15:59:20] <platzhirsch> NodeX: both, the one that's nearest
[15:59:51] <NodeX> then you need to construct a query that goes either side of the date and do a sort on the date field
[16:00:02] <platzhirsch> you query for 2013-01-15 on documents with (2013-01-01, 2013-01-31, 2013-01-14) then you want to get 2013-01-14
[16:00:26] <platzhirsch> NodeX: makes sense, okay thank you :)
[16:00:47] <NodeX> A........Date.........B <--- sort on "Date"
[16:00:55] <cheeser> it's not *quite* that simple though.
[16:00:56] <NodeX> A&B being the outer bounds of it
[16:01:39] <NodeX> it's a tricky one to get right tbh, personally I would construct it with a date just in the future and go from 0 - FUTURE_DATE
[16:02:00] <jolene> [off-topic spam flood elided]
[16:02:03] <NodeX> but that might bring back a very large cursor - depends on your situation tbh
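
A sketch of the query-either-side-and-sort approach NodeX describes, using platzhirsch's example date (collection and field names hypothetical):

    var target = ISODate("2013-01-15");
    // nearest document on or before the target, and nearest after it
    var before = db.events.find({ date: { $lte: target } }).sort({ date: -1 }).limit(1).toArray()[0];
    var after  = db.events.find({ date: { $gt:  target } }).sort({ date:  1 }).limit(1).toArray()[0];
    // client side: return whichever of the two is closer to target (either may be missing)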
[16:02:25] <NodeX> ransom
[16:02:27] <NodeX> random*
[16:04:17] <Number6> That was... interesting
[16:07:06] <cheeser> "she" got k-lined by the netops so "she" won't be back. under that nick anyway.
[16:08:12] <platzhirsch> I cannot find this piece of prose on the interwebs ^^
[16:08:38] <Derick> platzhirsch: maybe you should start a blog
[16:08:51] <Derick> wanna bet DyanneNova is one too? :-/
[16:09:17] <DyanneNova> hmm, what am I?
[16:09:18] <platzhirsch> Derick: I have one, not many posts though. Do you mean write about the date issue?
[16:09:23] <Derick> DyanneNova: never mind
[16:09:33] <Derick> DyanneNova: just a very very similar host to a spammer that just showed up :)
[16:09:48] <Derick> platzhirsch: date issue?
[16:09:59] <Derick> ah, missed that
[16:10:04] <DyanneNova> ah, ok, nope, I'm not a spammer
[16:10:05] <platzhirsch> Derick: what were you talking about? Write a blog about this "prose" that got posted :P
[16:10:09] <Derick> DyanneNova: yay! :-)
[16:10:29] <Derick> platzhirsch: :-)
[16:10:34] <platzhirsch> ;)
[16:10:42] <Derick> platzhirsch: blog posts are good - it's my way of remembering things I've done in the past
[16:10:58] <Derick> i've had it happen that when googling for a problem a few months later, I find my own blog post :)
[16:11:13] <platzhirsch> Derick: hopefully I will pick it up soon, similar practice to commit every day
[16:11:14] <ron> ah, I mostly try to forget my past.
[16:12:26] <NodeX> Interwebz trolls ftw
[16:12:40] <ron> NodeX: makes you feel at home, eh?
[16:12:42] <ron> ;)
[16:13:00] <NodeX> I am king troll
[16:34:49] <zymogens> Does anyone know if it possible to find().populate().find() all in one query?
[16:35:40] <zymogens> i.e. find some based on a condition.. populate one of the fields.... and then do a find on those populated fields... then return the results from the server?
[16:40:57] <NodeX> eh?
[16:41:27] <NodeX> you want to find() then iterate, add a field, then do another find, all in one query?
[16:42:44] <zymogens> I wanted to find a set of documents. Then populate them with a doc each one is referencing. then search that populated doc.
[16:43:01] <zymogens> returning a subset of the original set of documents.
[16:43:17] <NodeX> so you want to join in one query?
[16:43:40] <zymogens> Kind of.
[16:43:51] <NodeX> then no, you would have to map/reduce
[16:43:51] <zymogens> but by using populate
[16:44:04] <zymogens> oh ok.. will look into that... thanks
[16:46:27] <NodeX> I would advise you to avoid joins wherever possible
[16:47:01] <zymogens> Have been trying not to... thanks NodeX
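
For the record, the usual driver-side substitute for the join zymogens wants is two queries plus a client-side filter, sketched here with hypothetical posts/authors collections:

    // 1. fetch the outer set
    var posts = db.posts.find({ published: true }).toArray();
    // 2. fetch only the referenced documents that also match the second condition
    var ids = posts.map(function(p) { return p.authorId; });
    var ok  = {};
    db.authors.find({ _id: { $in: ids }, active: true }, { _id: 1 })
              .forEach(function(a) { ok[a._id] = true; });
    // 3. the subset of the original documents whose reference matched
    var result = posts.filter(function(p) { return ok[p.authorId]; });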
[17:25:34] <lfamorim> Anyone know how I cover up this query?
[17:25:35] <lfamorim> http://pastebin.com/sKpHEBJM
[18:23:42] <sec^nd> Would it be better for me to store 10MB data chunks in one document, or store them in a GridFS-like manner? I need it to be able to handle many reads and writes concurrently.
[18:29:57] <sec^nd> -.-
[18:30:02] <sec^nd> Would it be better for me to store 10MB data chunks in one document, or store them in a GridFS-like manner? I need it to be able to handle many reads and writes concurrently.
[19:30:40] <RDove> If MongoDB isn't configured to be secure with user/password authentication, and db.authenticate() is called in Java, what will be the result of the boolean?
[20:56:00] <rdegges_> Hey guys, I've got a quick question about using the aggregate function.
[20:56:27] <rdegges_> I'm using aggregate with $group() as a way to do a distinct() query. Essentially I'm trying to grab two distinct columns from a large collection (300 million documents).
[20:56:32] <rdegges_> I'm currently doing this with:
[20:56:43] <joannac> mongodb distinct
[20:57:02] <rdegges_> joannac: I tried that, but it only lets you do distinct on a single column.
[20:57:09] <rdegges_> I've got to do it on two together (state, city)
[20:57:15] <joannac> ah, okay.
[20:57:20] <rdegges_> But my question is:
[20:57:32] <rdegges_> How can I make my aggregate function scale when I've got 300 million documents?
[20:57:43] <rdegges_> I've noticed that it's doing everything in a single iteration right now, which is pretty slow.
[20:57:51] <rdegges_> It's not giving me a cursor to iterate through or anything
[20:58:04] <joannac> DO you have indexes?
[20:58:17] <rdegges_> Yes, I have a compound index on those two fields (city, state)
[20:58:39] <joannac> How fast is it, and how fast do you want it to be?
[20:58:40] <rdegges_> I suppose I'm mostly worried about it getting progressively slower as I add in documents.
[20:58:55] <rdegges_> So, right now I have roughly 300m documents, and it's taking about 1 minute.
[20:59:15] <rdegges_> I'd like for it to take a few seconds if possible
[21:04:33] <joannac> If you put a $limit in, does that help? do you need it to be sorted?
[21:04:56] <joannac> Does the explain output show it using the index?
[21:10:19] <rdegges_> joannac: let me check on that
[21:10:21] <rdegges_> I'm not sure =0
[21:14:45] <rdegges_> Also: how can I do an aggregate against an array?
[21:15:09] <rdegges_> So for instance, I have a field called 'categories', that's an array ( ['pizza-place', 'office', '...'] )
[21:15:26] <rdegges_> I'd like to include each category individually as part of my group clause for my aggregate function.
[21:15:55] <joannac> $unwind
[21:16:01] <rdegges_> That way, if I say: {'$group': {'_id': {'city': '$city', 'state': '$state', 'categories': '$categories'}}}
[21:16:07] <rdegges_> oO
[21:16:18] <cheeser> i'm working on the aggregation api for morphia as we speak!
[21:16:20] <rdegges_> How do I apply that in my group clause (a bit new to mongo, sorry for the ignorance).
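
A sketch of joannac's $unwind suggestion feeding the $group rdegges_ already has (field names from his messages):

    db.businesses.aggregate([
        { $unwind: "$categories" },   // emits one document per array element
        { $group: { _id: { city: "$city", state: "$state", category: "$categories" } } }
    ])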
[22:51:24] <RDove> is this date in a BSON format? .append("date", new Date());
[22:53:21] <tystr> anyone in san diego? going to the meetup tonight?
[23:12:10] <RDove> How do you convert the java date into a ISODate? append("date", new Date()); is what I have right this second -- but it comes out looking like a "string" date instead of something that can be read using TTL
[23:22:31] <smw> anyone have an example of using mapreduce to modify records?
[23:23:15] <smw> I don't really want map reduce. I want to change an old schema
[23:23:27] <tystr> o__O
[23:23:41] <tystr> why would you use map/reduce for that
[23:24:12] <smw> tystr, I don't know, it looked like the thing to do. I realized the stupidity of that and decided to ask my real question :-)
[23:24:20] <tystr> lol
[23:24:27] <smw> I just want to run through every record and apply a mapping function to each one.
[23:24:33] <smw> how would you do that?
[23:24:48] <tystr> you could use any language with a mongo driver to do that
[23:25:06] <smw> tystr, atomically?
[23:25:46] <smw> tystr, if possible, I would like to use javascript running on the mongo server. I am trying to learn mongo from this :-)
[23:47:18] <joannac> smw: will the aggregation framework do what you want?
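
What tystr describes can be done straight from the shell: iterate every document and rewrite it. A sketch with a hypothetical schema change; note each save is atomic per document only, so the migration as a whole is not atomic:

    db.users.find().forEach(function(doc) {
        doc.name = { first: doc.first, last: doc.last };   // hypothetical mapping
        delete doc.first;
        delete doc.last;
        db.users.save(doc);
    });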
[23:50:45] <azathoth99> node.js!+mongo!! aren't javascript apps simply small cgi?
[23:50:51] <azathoth99> "API"
[23:50:56] <azathoth99> just a load of cgi scripts in js?