#mongodb logs for Wednesday the 7th of November, 2012

[00:15:33] <dblado> in the product category use case would you add a 'root' category so even a top level category would have a 'parent'?
[00:17:24] <FrenkyNet> Derick: I've found a bug in the php RC, the connect option is either ignored, or the reporting is wrong: http://snipr.it/~qr
[00:26:11] <astropirate> Can the size and count of a Capped Collection be changed after it has been created?
[00:26:21] <FrenkyNet> for whoever's interested, I've created an issue for it: https://jira.mongodb.org/browse/PHP-560
[00:39:56] <tjr9898> I'm working on dev homework 2.2 and am able to print out the sorted hw scores with pymongo. I can't get the update to work using the same iteration
[00:47:10] <tjr9898> anyone else complete the dev homework?
[00:51:44] <ckd> FrenkyNet: It may have been deprecated
[00:54:20] <tjr9898> what's that
[01:01:58] <ckd> FrenkyNet: (but i don't think it was, nice catch)
[01:02:12] <bjori> that should work just fine
[01:02:42] <bjori> (and I can't reproduce that issue, and our tests for it pass :])
[01:02:51] <ckd> bjori: I see it here too
[01:02:56] <bjori> O.o
[01:03:01] <bjori> ckd: which PHP version?
[01:03:10] <ckd> 5.4 something
[01:03:16] <bjori> ehh
[01:03:49] <bjori> $ php5.4 -n -c php.ini -r '$m = new Mongo("standalone", array("connect" => false)); var_dump($m->connected);'
[01:03:52] <bjori> bool(false)
[01:06:38] <bjori> ckd: do you mind running MongoLog::setLevel(MongoLog::ALL); MongoLog::setModule(MongoLog::ALL); $m = new Mongo("..", array("connect" => false)); echo "Dumping prop\n"; var_dump($m->connected); for me and pastebin the output?
[01:06:52] <ckd> of course not, gimme a sec
[01:14:23] <ckd> bjori: http://pastebin.com/cWYiFSVm
[01:17:52] <bjori> uhm
[01:18:01] <bjori> that's not the same code..
[01:19:47] <bjori> what is line 34?
[01:19:59] <ckd> oh, sorry about the line numbers, i have a lot of commented-out shite before it
[01:20:31] <bjori> yeah.. but what is on that line? the var_dump()?
[01:20:46] <ckd> probably, one sec
[01:20:55] <bjori> and this is a replicaset connection, not a standalone server :)
[01:21:14] <bjori> $ php5.4 -n -c php.ini -r '$m = new Mongo("primary", array("replicaSet" => true, "connect" => false)); var_dump($m->connected);'
[01:21:18] <bjori> bool(false)
[01:21:20] <bjori> well, even that works locally though :)
[01:22:46] <ckd> i'm still seeing it on my local VM
[01:22:54] <ckd> getting that log for you
[01:25:12] <ckd> hm, now i can't reproduce it
[01:25:22] <ckd> for a bit, i was getting different results every other page load
[01:26:54] <bjori> that could have been the persistent connections
[01:27:12] <bjori> if you hit the same process again the connection is still open
[01:27:30] <ckd> oh, it's because i changed 'standalone' to 'localhost'
[01:27:43] <ckd> derp
[01:28:12] <bjori> standalone is my local hostname for one of the standalone vms I'm running :)
[01:28:52] <bjori> but if you are running an app on the same server, and hit a process that already had a connected mongodb server, it will still be connected
[01:29:11] <bjori> since connections are persistent between requests, just not between processes
[01:30:06] <ckd> yeah, i shut down fpm, and now i consistently see it working as expected locally
[01:30:57] <bjori> :)
[01:31:20] <ckd> FrenkyNet: does that solve your issue?
[02:27:13] <ttdevelop> is there anyone here who used mongoose before?
[02:42:41] <zastern> I'm trying to set a variable in the mongo shell equal to the contents of an object returned with a db.foo.find({ some_condition }). I'm doing it like this but it's not working, any thoughts? t = db.products.find({_id : ObjectId("507d95d5719dbef170f15c00")})
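For reference, find() returns a cursor rather than the matched document, which is likely why the assignment above "doesn't work"; a minimal shell sketch of the usual fix, reusing the collection and _id from the question:

    // findOne() returns the matching document itself:
    t = db.products.findOne({_id: ObjectId("507d95d5719dbef170f15c00")})
    // or keep find() and advance the cursor by one:
    t = db.products.find({_id: ObjectId("507d95d5719dbef170f15c00")}).next()
    printjson(t)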
[02:48:33] <mrobben> astropirate: not size.
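For reference, a minimal shell sketch of the usual workaround, since a capped collection's size cannot be changed in place (collection names here are illustrative, not from the log):

    // convert a (non-capped) collection, giving the new size in bytes:
    db.runCommand({ convertToCapped: "mylogs", size: 100 * 1024 * 1024 })
    // or recreate from scratch; a fresh capped collection can also cap the document count:
    db.createCollection("mylogs2", { capped: true, size: 1048576, max: 5000 })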
[02:49:04] <mrobben> ttdevelop: I have.
[02:49:16] <ttdevelop> hi mrobben
[02:49:24] <ttdevelop> i'm stuck with a db unauthorized issue
[02:49:33] <ttdevelop> when i'm trying to use mongoose to do a find() on an existing collection
[02:49:46] <mrobben> is it local or remote?
[02:50:22] <ttdevelop> i'm working with openshift for this
[02:50:29] <ttdevelop> trying to translate this
[02:50:30] <ttdevelop> self.db.collection('parkpoints').find().toArray(function(err, names) {
[02:50:33] <ttdevelop> to mongoose
[02:51:19] <mrobben> mongoose is ODM based - you create a schema for the collection you're trying to call 'find' on.
[02:51:38] <mrobben> then you instantiate the schema to create a model and apply the operation to that.
[02:52:03] <mrobben> if you have an existing schema defined, throw it in a gist and I'll take a look
[02:52:09] <ttdevelop> what i don't really get from the examples is that there's little mention of collections
[02:52:26] <ttdevelop> schema looks like it is the properties for the collection, like name, address
[02:52:31] <ttdevelop> same as sql
[02:52:32] <mrobben> yep.
[02:52:39] <ttdevelop> but then say i have
[02:52:46] <ttdevelop> parks.parkpoints.find() to run
[02:52:57] <ttdevelop> there is no mention of where i should define the parks and parkpoints collections
[02:53:13] <ttdevelop> i will try to push this to a github
[02:53:38] <mrobben> use gist.github.com
[02:53:56] <mrobben> the key is that you don't have to define the collection explicitly.
[02:54:20] <mrobben> when you create the schema instance, it automagically maps that to a collection for you.
[02:54:54] <ttdevelop> hmm
[02:55:28] <ttdevelop> that's what i'm trying to do from what i learned here : http://stackoverflow.com/questions/5794834/how-to-access-a-preexisting-collection-with-mongoose
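For reference, a minimal mongoose sketch of what mrobben describes (mongoose 3.x era; the schema fields are assumptions, and the key point is the third argument to model(), which pins the model to the pre-existing 'parkpoints' collection instead of letting mongoose derive a name):

    var mongoose = require('mongoose');
    mongoose.connect('mongodb://localhost/parks');

    // schema fields are illustrative; the third argument names the existing collection
    var ParkPoint = mongoose.model('ParkPoint',
        new mongoose.Schema({ name: String, pos: [Number] }),
        'parkpoints');

    ParkPoint.find({}, function (err, points) {
        if (err) throw err;
        console.log(points);
    });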
[06:35:16] <ryn1> guys, pls help. i'm trying to export data from mongodb using this command --> mongoexport -d dbName -c myCollection -f title,meta,contents --csv -o test_export.csv .. my problem is the contents field in the exported file doesn't get the complete data, and the rest of the content is converted to an ellipsis..
[06:36:15] <ryn1> how can i get the exact content and remove that ellipsis?
[06:38:40] <ryn1> anyone can help?
[06:39:58] <crudson> what types of objects are meta and contents? (j|b)son doesn't translate to csv particularly well except for the simplest of data types
[06:42:33] <crudson> have to run, but I bet that it's some nested attribute, so I'd advise using json as the export format
[07:16:33] <ryn1> meta and content are just strings..
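For reference, crudson's suggestion as a command (a sketch of the same export without --csv; mongoexport then emits one JSON document per line, which sidesteps CSV's handling of long or nested fields):

    mongoexport -d dbName -c myCollection -f title,meta,contents -o test_export.json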
[08:37:10] <samurai2> hi there, is there a way to change some field value in input collection while we're still in the map phase of the map/reduce? thanks :)
[08:41:23] <[AD]Turbo> hello
[12:10:18] <Danielpk> How can I start mongodb on OSX? I tried "mongod --fork --logpath /var/log/mongodb.log --logappend" but it didn't start. Any tip?
[12:10:36] <kali> Danielpk: look at the log
[12:10:46] <frsk> or the command output
[12:11:40] <Danielpk> Humm i got this error "exception in initAndListen: 10296 dbpath (/data/db/) does not exist…."
[12:12:00] <kali> seems pretty self explanatory, i think
[12:12:55] <Danielpk> dont :D
[12:12:57] <Danielpk> Ops
[12:12:58] <Danielpk> done*
[12:12:58] <Danielpk> :D
[12:13:07] <Danielpk> thx for help,
[12:19:04] <arkin> Danielpk: you might also need to sudo the command
[12:19:17] <Danielpk> arkin: worked fine without sudo.
[12:19:54] <arkin> sweet
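For reference, the usual fix for that error (a sketch; /data/db is the default dbpath from the error message, and it must exist and be writable by the user running mongod):

    sudo mkdir -p /data/db
    sudo chown `whoami` /data/db
    mongod --fork --logpath /var/log/mongodb.log --logappend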
[14:13:51] <Derick> FrenkyNet: here?
[14:14:01] <ron> Derick: I'm here. does that count?
[14:14:09] <Derick> no
[14:14:12] <Derick> sorry
[14:14:23] <ron> :(
[14:14:28] <ron> I knew you didn't like me.
[14:14:39] <Derick> It's more that you didn't file PHP-560 and FrenkyNet did :)
[14:14:57] <FrenkyNet> Derick: yeah here
[14:14:58] <ron> is PHP-560 like ORA-600?
[14:16:04] <FrenkyNet> Derick: the issue is weird, need a second for the logs
[14:18:06] <Derick> ron: what's ORA-600?
[14:19:50] <modcure> ORA is an oracle db
[14:20:08] <modcure> ORA-00600 (ORA 600) error is a generic internal error from Oracle
[14:21:12] <eka> hi all... I see that .hint() on a find improves the performance of my query. Is there a way to give hint() to aggregation or map/reduce?
[14:28:56] <FrenkyNet> Derick: http://snipr.it/~qw
[14:29:37] <Derick> that's all good
[14:29:55] <FrenkyNet> how can that be good?
[14:30:02] <Derick> the connect option isn't part of the "connection string" but it's still interpreted later
[14:30:31] <Derick> can you do a var_dump( $connection) after that?
[14:30:40] <Derick> (also, we probably should take this off-channel)
[14:31:27] <FrenkyNet> Derick: pm'd you
[14:38:10] <ron> Derick: ORA-600 is oracle's way of saying 'Something went wrong, it's really bad, but we have no idea what the problem is, so you're pretty much screwed. Good luck!'.
[14:38:39] <Derick> ah
[14:38:54] <Derick> no, PHP-560 is not like that
[14:40:17] <ron> Derick: okay, was just curious.
[14:40:46] <Derick> https://jira.mongodb.org/browse/PHP-560 ← in case you care
[14:41:20] <ron> Derick: I see. perhaps a JIRA bot would be nice in the channel.
[15:42:59] <sander__> Does anyone know of a good beginners' guide to mongodb and nosql.. for those who already know SQL well?
[15:53:42] <eka> sander__: http://nosql-database.org/
[15:57:11] <sander__> eka: I can't see any introduction to mongodb there.
[15:57:26] <eka> sander__: you said nosql ...
[15:57:29] <eka> sander__: wait
[15:58:02] <eka> sander__: http://www.mongodb.org/display/DOCS/SQL+to+Mongo+Mapping+Chart
[15:58:13] <eka> sander__: http://www.mongodb.org/display/DOCS/Introduction
[16:01:27] <sander__> eka: What do the 1's mean here?: db.users.find({}, {a:1,b:1})
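For reference, the second argument to find() there is a projection document; a minimal sketch using the same call:

    // 1 includes a field, 0 excludes it; _id comes back unless explicitly excluded
    db.users.find({}, {a: 1, b: 1})          // returns only _id, a, and b
    db.users.find({}, {a: 1, b: 1, _id: 0})  // returns only a and b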
[16:01:45] <squawknull> sander__: 7 databases in 7 weeks is awesome... http://pragprog.com/book/rwdata/seven-databases-in-seven-weeks
[16:01:59] <squawknull> it's not specifically on mongo, but it's a good primer on nosql in general, and the different approaches
[16:03:16] <squawknull> sander__: fowler's book is also supposed to be really good. i've not gotten around to reading it yet. http://martinfowler.com/books/nosql.html
[16:04:38] <sander__> squawknull, Thanks.. but I prefer something I don't have to order first.
[16:38:15] <prune> hello there
[16:38:41] <prune> anybody ? :)
[16:39:01] <Derick> plenty of people here, but we're not all going to say "hello"
[16:39:11] <prune> yep :)
[16:39:33] <prune> maybe one of you can help me build a good schema for a foursquare-like application
[16:40:54] <prune> as people do "checkins" but never "checkout", I'm wondering where to keep the checkins (which is a triple of user, location and date) and where to keep the current user count on each location (which is equal to the number of checkins in the last 30 minutes)
[16:41:37] <ckd> heh
[16:41:45] <ckd> Derick: testing your patch now
[16:41:58] <prune> For the moment my idea is a collection for places, a collection for users and a 'live and highly changing' collection for checkins, where I may duplicate some information
[16:42:10] <Derick> ckd: just realized it still has a problem for replicasets
[16:42:33] <prune> anyone worked on a problem like that ?
[16:42:41] <eka> prune: that sounds good
[16:42:59] <ckd> Derick: ah, I'll hold off
[16:43:22] <Derick> ckd: I would still appreciate if you could test it
[16:43:26] <Derick> it doesn't make things worse
[16:43:34] <ckd> Derick: haha, i'll not hold off :)
[16:43:39] <prune> eka: but how will I display the "checkins count" for every place? a map/reduce every 5 minutes in a backend task?
[16:44:00] <eka> prune: aggregation is faster now
[16:44:09] <Derick> why do you need m/R for that in the first place?
[16:44:31] <Derick> yes, aggregation - what eka says
[16:44:44] <prune> I need the number of checkins in the last 30 mins for each place to display on the map
[16:44:56] <prune> let me check the docs on aggregation
[16:45:25] <Derick> ckd: also, writing test cases for this is a pain :)
[16:45:29] <ckd> Derick: did your changes get rolled into pecl?
[16:45:38] <Derick> ckd: no, just on github
[16:45:46] <Derick> it'll get into 1.3.0RC2
[16:46:06] <ckd> I was just making sure I can duplicate the orig problem on a different box, and it reports i'm running RC2-dev
[16:46:19] <ckd> at any rate, all good… building from the pull now
[16:46:20] <Derick> yes, that's from github
[16:46:45] <ckd> oh, to clarify, RC2-dev got installed via pecl
[16:47:23] <prune> ok, will try to use the aggregate framework
[16:47:30] <prune> many thanks for the hint
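For reference, a minimal sketch of the suggested aggregation (2.2 pipeline syntax; the checkins collection and field names are assumptions based on prune's user/location/date triple):

    var cutoff = new Date(Date.now() - 30 * 60 * 1000);
    db.checkins.aggregate(
        { $match: { date: { $gte: cutoff } } },               // checkins in the last 30 minutes
        { $group: { _id: "$location", count: { $sum: 1 } } }  // current count per place
    );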
[16:48:52] <Derick> ckd: uh - that seems odd
[16:49:03] <Derick> ckd: maybe I messed up the package version nr in it though
[16:53:30] <ckd> Derick: Yep, you're right… I just never noticed it, RC1 does report itself as RC2-dev
[16:59:01] <JakePee> Does anyone have any insight regarding the impact of highly volatile collections, essentially used as a cache/staging area?
[16:59:12] <FrenkyNet> Derick: what are the default levels for MongoLog? I removed the log setting but the settings remain the same lol
[16:59:23] <Derick> FrenkyNet: no defaults at all
[16:59:39] <JakePee> Collections would be filled every 10 seconds or so by a threaded application, then processed and dumped every 30 seconds
[16:59:52] <FrenkyNet> I guess I'll try to reset php...
[17:00:01] <Derick> FrenkyNet: should not be necessary
[17:00:38] <FrenkyNet> ah lol, was editing the wrong file, my bad
[17:00:52] <FrenkyNet> hahaha
[17:01:01] <FrenkyNet> my mistake, thanks! :)
[17:04:45] <podman> Good afternoon.
[17:08:32] <ckd> Derick: Appears to not fix it :/
[17:08:59] <Derick> ckd: are you certain?
[17:09:18] <Derick> it most definitely fixes it here
[17:09:44] <Derick> you have two normal connections right, no replicasets?
[17:11:02] <podman> I've got a problem that I'm trying to track down and I'm really not sure where to start looking. I have a ruby process that pops messages off of a queue and then upserts documents in mongodb. I have this process running on two different machines. one runs fine, but the other one occasionally gets stuck for about 10 seconds while updating a document.
[17:13:06] <ckd> Derick: one is a replica, one is standalone
[17:15:19] <Derick> ckd: okay, that should still work - I tried that case too.
[17:15:39] <Derick> ckd: you're certain you deployed the latest version of the extension?
[17:16:16] <ckd> let me just double check it's built from your branch
[17:16:24] <Derick> ckd: can I suggest you try to use MongoLog
[17:16:30] <ckd> of course
[17:16:33] <Derick> ckd: yes, not just the repository, also the correct branch
[17:16:59] <Derick> you need to use: git checkout PHP-559-random-connection
[17:17:11] <Derick> I have to leave in 10 mins though - have another appointment
[17:17:11] <ckd> there's a VERY good chance it might have pulled from your repo's master
[17:17:19] <Derick> thought so :-)
[17:18:07] <podman> anyone have any ideas?
[17:18:53] <Derick> podman: investigated what's in the log file?
[17:18:55] <ckd> yep, that's definitely what happened
[17:18:57] <ckd> rebuilding now
[17:19:05] <podman> Derick: the mongodb log?
[17:19:29] <Derick> yes
[17:19:55] <Derick> ckd: make sure you have all your replicaset members in your connection string for now
[17:20:08] <podman> Derick: yeah, not seeing anything out of sorts there, as far as i can tell
[17:20:10] <Derick> (that's the thing I still need to fix)
[17:21:29] <Derick> podman: hmm - sorry for now then
[17:22:10] <podman> a few updates are taking a little more than 100ms, but the client is hanging for 10 seconds
[17:22:35] <ckd> Actually gave it a shot before I saw your note about members, seems to work
[17:22:53] <Derick> woo!
[17:23:13] <Derick> podman: it could be happening on the client side too of course
[17:23:23] <podman> Derick: right.
[17:23:50] <Derick> ckd: i'll attempt the replicaset variant fix tomorrow
[17:23:57] <Derick> going to have a quiet evening tonight :)
[17:24:02] <ckd> hah!
[17:24:19] <podman> i would have thought it was a locking issue, but you'd think both clients would be affected. it seems to only be one client though
[17:24:39] <ckd> yeah, it's not super mission critical for me, I was just using that second connection to write out logs to a separate instance, but it was on my list to take that out of the path anyway
[17:24:51] <ckd> but happy to test any scenarios out
[17:25:42] <Derick> awesome - thanks
[17:25:47] <Derick> I might take you up on that :-)
[17:25:55] <Derick> Gargoyle: no progress on your issue yet, but we're looking too
[17:28:06] <ckd> Derick: by all means… very much appreciate your help, you guys are fantastic
[17:29:56] <ckd> although…. i maybe have another problem :)
[17:32:10] <ckd> that test code, I wrapped it in a for block to insert a record 1000 times… mongo keeps trying to use the same id, so it's failing
[17:48:36] <ckd> oh, interesting, I never realized that the driver adds the _id to the original object
[17:56:00] <ckd> oh, right, pass by reference. i should have more coffee
[18:25:17] <podman> Derick: ok, i figured out the problem. nothing to do with mongodb afterall
[18:41:18] <AugCampos> Is it possible to get all different values for a field in all docs in a collection?
[18:43:49] <AugCampos> ex: { name: "zzz", type: ["typea", "typeb"]} , { name: "zzz2", type: ["typea", "typec", "typed"]} and retrieve ["typea", "typeb", "typec", "typed"]?
[18:46:19] <AugCampos> Is it possible to get all different values for a field in all docs in a collection? ex: { name: "zzz", type: ["typea", "typeb"]} , { name: "zzz2", type: ["typea", "typec", "typed"]} and retrieve ["typea", "typeb", "typec", "typed"]?
[18:46:49] <choover> Hello! Noob here... I am trying to set up a replica set on my mac. I want all three mongod servers on my localhost to be part of a replica set per the directions here: http://docs.mongodb.org/manual/tutorial/deploy-replica-set/ ... I have followed the instructions successfully up to step 6 where it says to add members to the replica set you need to call the add method on the rs object like: rs.add("localhost:27018") ... When I at
[18:46:54] <choover> can't use localhost in repl set member names except when using it for all members
[18:47:01] <choover> "assertionCode" : 13393,
[18:52:04] <psyprax> Hi all
[18:52:27] <psyprax> I'm new to mongodb and a have a quick question
[18:53:45] <psyprax> I'm interested in storing a large (perhaps 1M by 1M) 2-dimensional matrix (not sparse, but not completely dense either) whose cells will be updated frequently
[18:54:25] <kali> psyprax: what about the read ?
[18:54:27] <psyprax> given mongodb's document size limits, do you think this would be possible?
[18:54:54] <kali> psyprax: honestly, i'm not sure it is the right fit
[18:54:55] <psyprax> kali: it wouldn't be read too frequently
[18:55:14] <psyprax> ok, that's what I was afraid of
[18:55:49] <kali> psyprax: if you store it in the form of a collection of (x,y,value), you'll have a big overhead
[18:55:53] <psyprax> kali: actually it would need to be read frequently too since I'm interested in updates, not just writes
[18:55:55] <kali> because of the document model
[18:57:15] <kali> psyprax: maybe you store it per row or column, depending on how dense it is. each document in mongodb must be below 16MB
[18:57:46] <kali> my advice: if you can choose the technology, don't make this your first experience with mongodb, as i fear it will be painful :)
[18:58:53] <psyprax> kali: haha, ok. Thanks! I'll experiment a bit with an (x,y, value) representation, but will probably go with something else as you said
[18:59:15] <kali> psyprax: at least, use short field names
[18:59:33] <kali> x,y,v :)
[19:00:05] <psyprax> ok, will do
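For reference, a minimal sketch of the (x, y, value) layout kali describes, with the short field names he suggests (the collection name is an assumption):

    db.matrix.ensureIndex({ x: 1, y: 1 }, { unique: true });
    // update a cell in place, creating it if absent (third argument = upsert):
    db.matrix.update({ x: 123, y: 456 }, { $set: { v: 0.5 } }, true);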
[19:18:00] <AugCampos> Is it possible to get all different values for a field in all docs in a collection? ex: { name: "zzz", type: ["typea", "typeb"]} , { name: "zzz2", type: ["typea", "typec", "typed"]} and retrieve ["typea", "typeb", "typec", "typed"]?
[19:18:09] <AugCampos> any help, please....
[19:21:02] <crudson> AugCampos: There is a 'distinct' aggregate function.
[19:24:11] <AugCampos> crudson: the examples in (http://www.mongodb.org/display/DOCS/Aggregation#Aggregation-Distinct) only show it for simple value fields; does it work for arrays too?
[19:24:22] <crudson> AugCampos: did you try it? :)
[19:30:15] <tystr> hello
[19:30:38] <tystr> I'm getting this error in the logs of both of my secondary replica set members:
[19:30:39] <tystr> info DFM::findAll(): extent 4:5c117000 was empty, skipping ahead. ns:local.replset.minvalid
[19:31:19] <tystr> I'm concerned b/c both are receiving this error 2-3 times per second
[19:31:36] <AugCampos> crudson: not yet
[19:31:43] <ckd> tystr: did you Google it?
[19:31:47] <tystr> yeah
[19:31:55] <tystr> seems like it's not a huge issue, from what I found
[19:32:03] <crudson> AugCampos: well if you're too busy then I did try it, and here is the result: http://pastie.org/5342020
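For reference, a minimal sketch of what the pastie demonstrates, using the documents from AugCampos's example (the collection name is illustrative); distinct does descend into array values:

    db.things.insert({ name: "zzz",  type: ["typea", "typeb"] })
    db.things.insert({ name: "zzz2", type: ["typea", "typec", "typed"] })
    db.things.distinct("type")
    // => [ "typea", "typeb", "typec", "typed" ]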
[19:32:03] <tystr> but I'm getting so many of them, I'm a little concerned
[19:32:55] <tystr> hmm also seems like it could be related to slow queries?
[19:32:56] <ckd> tystr: that first result, the group thread…. did that reassure you in any way?
[19:33:35] <tystr> ckd ya, a little
[19:33:41] <tystr> we're not deleting anything
[19:33:47] <ckd> (I don't know anything about it, so I'm very curious now)
[19:33:50] <tystr> heh
[19:34:04] <tystr> well our traffic is a bit higher today, and this is the first time I've seen this error
[19:34:44] <ckd> have you used the profiler?
[19:34:46] <tystr> er "log entry"
[19:34:49] <tystr> in the past,
[19:35:01] <tystr> I'm about to turn it on again and see if there's some slow queries
[19:35:14] <tystr> we've been getting load spikes here and there all day since our traffic spiked
[19:35:56] <ckd> yeah, i think that'll be a good starting point
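For reference, a minimal sketch of the profiler step tystr mentions (the 100 ms threshold is illustrative):

    db.setProfilingLevel(1, 100)  // profile operations slower than 100 ms
    db.system.profile.find().sort({ ts: -1 }).limit(5)  // most recent slow ops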
[19:36:40] <tystr> seems like google says large deletes and slow queries are a common cause
[19:36:51] <tystr> if there's massive deletes, then I've got a serious problem lol
[19:37:07] <ckd> hah
[19:37:36] <ckd> well, if you're seeing high traffic today and def not doing deletes, then slow queries seems like the obvious culprit
[19:37:42] <tystr> yeah
[19:37:46] <tystr> makes sense
[19:38:09] <ckd> i don't have the luxury of high traffic yet, so I haven't encountered that, but I'd be curious to know what caused it for you in case i ever do see it
[19:39:48] <AugCampos> crudson: Many thanks...
[19:40:00] <crudson> AugCampos: I didn't necessarily expect it to work, but always worth trying. The docs are good, but not always complete.
[19:43:07] <Derick> podman: what was it?
[19:45:37] <podman> Derick: issues with a geoip gem i'm using. for some reason some lookups are taking almost 10 seconds
[19:45:48] <Derick> ah, good to know
[19:45:48] <podman> wonder if it's an EBS issue or something. can't really tell
[19:45:51] <Derick> how did you find out?
[19:46:56] <podman> Derick: i just timed every single thing that i was doing
[19:47:08] <podman> all of the updates to mongodb were fast
[19:47:16] <podman> few milliseconds each at most
[19:51:18] <podman> going to try launching an instance with ephemeral storage and see if that does anything to help
[19:53:44] <tystr> hmmm
[19:53:50] <tystr> doesn't seem to be any slow queries
[19:59:09] <diegobz> Hi. Is there a $unwind like operator that can be used with regular .find()?
[20:34:06] <JakePee> anyone know if it's better to let php's built-in connection pooling manage multiple connections or to just use
[20:34:16] <JakePee> a global connection
[20:36:55] <crudson> diegobz: it's an aggregate operation only
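For reference, a minimal sketch of the aggregate route crudson points to (collection and field names are assumptions):

    db.posts.aggregate(
        { $match: { author: "someone" } },
        { $unwind: "$tags" }    // emits one output document per array element
    )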
[20:52:16] <ckd> JakePee: The upcoming 1.3 release changes how all of that works
[20:54:43] <ckd> JakePee: They, among other things, rewrote the connection stuff… Hannes is doing a talk on it next week: http://www.10gen.com/events/webinar/whats-new-php-driver
[20:56:45] <rshade98> is there a way to configure a replica set from the mongo ruby driver
[21:00:54] <diegobz> crudson: ok, thanks.
[21:02:08] <JakePee> ckd: I actually just threw together some tests against it with 100,000 iterations doing a findOne against the same collection. If i create a connection on each iteration, it took 7.68110585213 seconds. If I reuse the same connection, 6.49627995491 seconds.
[21:02:34] <ckd> JakePee: which version of the driver are you using?
[21:02:36] <JakePee> not exactly an exhaustive test, but it shows that there's not a lot of overhead in creating a connection
[21:03:31] <JakePee> 1.2.9
[21:04:19] <JakePee> also, if you don't dereference the connections, a single pool only handles up to ~190 connections
[21:04:22] <ckd> try it against 1.3.0beta2
[21:04:23] <JakePee> probably setup specific
[21:05:25] <ckd> err, maybe RC1, but don't use it in production
[21:05:33] <ckd> "The new framework no longer has the concept of a connection pool, but instead make sure there is only one connect per node/db/username."
[21:08:37] <ckd> JakePee: hopefully that'll save you some time, the issue is somewhat academic at this point :)
[21:27:55] <JakePee> ckd: thanks, I was more curious if there was any reason to leverage the connection pooling but it would seem to be more of a beneficial exercise for a threaded environment
[21:30:08] <Derick> ckd: btw, beta2 isn't much better... it has other interesting issues
[21:30:36] <JakePee> that is, it seems to have more of an application with asynchronous requests
[21:44:15] <ckd> Derick: anything bad enough to make me cry into my pillow?
[21:44:58] <Derick> ckd: no :-)
[21:45:08] <Derick> but I am not sure what monsters you are afraid of :)
[21:45:38] <ckd> well, let's just say that I tested beta 1 and 2 for about 5 minutes each before I deployed 'em in production
[21:45:45] <ckd> so…. I guess I'm not afraid of much :P
[21:48:59] <Derick> oi!
[21:49:00] <Derick> haha
[21:49:05] <Derick> ah well
[21:49:11] <Derick> we do also need users like that :-)
[21:49:24] <ckd> although that cluster is pretty much doing all reads, so I figured nothing too horrible could happen
[21:49:55] <ckd> that's why i thought it was quite funny that i stumbled upon that particular issue
[21:51:31] <Derick> ckd: we'll put you on a list to try out RC's ;-)
[21:51:56] <ckd> speaking of which… so tossing just one host from the replica set doesn't have any issues for me
[21:52:33] <ckd> but passing a string like 'mongodb://host1:port,host2:port,etc' to the constructor makes things cranky
[21:52:39] <Derick> cranky how?
[21:53:21] <ckd> let me check my logs...
[21:57:57] <ckd> nope, that's me crying wolf again. there was a \n in the string
[21:58:13] <Derick> ;-)
[21:58:25] <ckd> so it actually works both ways
[21:58:34] <ckd> but you were expecting one to not?
[21:58:56] <Derick> uh?
[21:59:41] <ckd> i vaguely remember you suggesting something about it only working if i specify all members
[22:00:02] <Derick> oh no, I would just recommend that
[22:00:08] <Derick> especially with my branch
[22:00:17] <ckd> oh! ok
[22:00:21] <Derick> as it doesn't use all other hosts right now on every request but the 1st
[22:44:00] <rshade98> anyone know what's wrong with this
[22:44:06] <rshade98> db.command({ "replSetInitiates" : { _id: 'repl1', members: [{_id: 0, host: '10.202.18.48'},{_id: 1, host: '10.144.153.251'},{_id: 2, host: '10.144.15.96'}, ] } })
[22:45:12] <cjhanks> Hey all -- with the C++ version client drivers; is it possible to set the FD_CLOEXEC flag on the DBClientConnection without modifying the driver source code?
[22:49:07] <JakePee> rshade98: db.runCommand?
[22:52:14] <cjhanks> In fact I can't find the tcp code in the source. Any suggestions would be useful.
[22:53:12] <rshade98> Jake I am using the ruby driver
[22:53:31] <rshade98> the only thing I see is a command
[22:54:02] <JakePee> is that the exact command you're running?
[22:54:15] <JakePee> it's 'replSetInitiate' not 'replSetInitiates'
[22:54:21] <Guest19926> Installed MongoDB on a 32-bit debian machine via the 10gen package. When I start mongo, it drops after about 30 seconds without warning. Before it does, I am able to visit the web admin and see things. Any ideas as to why? Thanks.
[22:54:31] <rshade98> yeah, haha that is a bad typo, lol
[22:55:12] <cjhanks> Guest19926: My experience is the packages are terrible and you're better off just building it yourself.
[22:56:10] <Guest19926> cjhanks: Thank you for that knowledge. I will definitely try self installation.
[22:56:15] <rshade98> odd number list for Hash
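For reference, the corrected command in mongo shell form (a sketch using the hosts from above, with the command name fixed per JakePee and the trailing comma in the members array removed):

    rs.initiate({
        _id: "repl1",
        members: [
            { _id: 0, host: "10.202.18.48" },
            { _id: 1, host: "10.144.153.251" },
            { _id: 2, host: "10.144.15.96" }
        ]
    })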
[23:05:59] <ckd> Guest19926: what do the logs show
[23:15:53] <Guest19926> ckd: Last couple messages say: connection accepted from 192.168.2.159 (my laptop while on the network) Invalid operation at address: 0x819b503 from thread: conn1
[23:15:53] <Guest19926> ckd: Never warns me about the shut down though.
[23:17:03] <ckd> Guest19926: what processor are you running on?
[23:17:54] <ckd> Guest19926: See if this is relevant to your specific setup: https://jira.mongodb.org/browse/SERVER-7012
[23:18:07] <Guest19926> ckd: Also note, I do get a warning on startup: 32-bit servers don't have journaling by default. Please use --journal if you want durability.
[23:18:22] <Guest19926> ckd: How would I check that on Debian?
[23:19:05] <ckd> cat /proc/cpuinfo
[23:19:52] <Guest19926> Thanks, there we go. AMD Athlon(tm) XP 3000+
[23:21:49] <ckd> Guest19926: That processor is from '03, I think it's reasonable to assume you could be running into the issue reported there
[23:22:40] <ckd> Guest19926: But I know VERY little about this sorta thing
[23:23:59] <ckd> yeah
[23:24:26] <ckd> Guest19926: I looked at some of the related issues, and it would seem that your processor isn't supported
[23:25:00] <Guest19926> Ahhhhh
[23:25:04] <ckd> Guest19926: There's an open ticket to have Mongo check if it's running on an unsupported CPU, but it's not implemented yet
[23:25:23] <Guest19926> Thank you good sir. You saved me a lot of time.
[23:25:31] <ckd> But for the record, 10gen's packages are quite stable
[23:25:52] <Guest19926> Roger that.
[23:25:58] <ckd> (at least in my experience)
[23:27:14] <Guest19926> Thanks again.
[23:27:23] <ckd> absolutely my pleasure
[23:36:19] <cjhanks> ckd: Fedora bunked my system.
[23:37:33] <cjhanks> ckd: That is, the 10gen RPM for Fedora hung the init system and was completely screwed. The Debian package requires reconfiguring, though it didn't hang the init.
[23:45:57] <spartango> hi, new to mongo (coming from cassandra)...question about data model: if i have a list of strings that i want to associate with an id... is storing them as keys in a document with no/empty values looked down upon?
[23:52:11] <rossdm> that sounds kind of bogus, why not store them in an array?
[23:54:03] <spartango> it's not all that different, i could definitely do that. is there a particular reason to store them in an array?
[23:55:53] <rossdm> well you said that you have a list of strings, which makes me think that they're related in some way and it would be nice to iterate over them. Making them keys in an object with null values sounds like it doesn't represent the data properly. Hard to say more without knowing specifics
[23:57:41] <spartango> they're URLs for parts of a file, accumulated over time. so you can imagine that i'll want to append to that list a bunch of times, but when the parts are their i'll probably grab all the strings at once
[23:57:47] <spartango> *there
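For reference, a minimal sketch of the array layout rossdm suggests (collection and field names, and the fileId variable, are assumptions): each part URL is one atomic append, and all the strings come back with a single findOne.

    // append a part URL as it arrives ($addToSet skips duplicates; $push would allow them):
    db.files.update({ _id: fileId },
                    { $addToSet: { parts: "http://example.com/f/part7" } })
    // when the parts are there, grab them all at once:
    db.files.findOne({ _id: fileId }, { parts: 1 })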