[00:15:33] <dblado> in the product category use case would you add a 'root' category so even a top level category would have a 'parent'?
[00:17:24] <FrenkyNet> Derick: I've found a bug in the php RC, the connect option is either ignored, or the reporting is wrong: http://snipr.it/~qr
[00:26:11] <astropirate> Can the size and count of a Capped Collection be changed after it has been created?
[00:26:21] <FrenkyNet> for whoever's interested, I've created an issue for it: https://jira.mongodb.org/browse/PHP-560
[00:39:56] <tjr9898> I'm working on dev homework 2.2 and am able to print out the sorted hw scores with pymongo. I can't get the update to work using the same iterator
[00:47:10] <tjr9898> anyone else complete the dev homework
[00:51:44] <ckd> FrenkyNet: It may have been deprecated
[01:06:38] <bjori> ckd: do you mind running MongoLog::setLevel(MongoLog::ALL); MongoLog::setModule(MongoLog::ALL); $m = new Mongo("..", array("connect" => false)); echo "Dumping prop\n"; var_dump($m->connected); for me and pastebin the output?
[01:28:12] <bjori> standalone is my local hostname for one of the standalone vms I'm running :)
[01:28:52] <bjori> but if you are running an app on the same server, and hit a process that already had a connected mongodb server, it will still be connected
[01:29:11] <bjori> since connections are persistent between requests, just not between processes
[01:30:06] <ckd> yeah, i shutdown fpm, and now i consistently see it working as expected locally
[01:31:20] <ckd> FrenkyNet: does that solve your issue?
[02:27:13] <ttdevelop> is there anyone here who used mongoose before?
[02:42:41] <zastern> I'm trying to set a variable in the mongo shell equal to the contents of an object returned with a db.foo.find({ some_condition }). I'm doing it like this but it's not working, any thoughts? t = db.products.find({_id : ObjectId("507d95d5719dbef170f15c00")})
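In the mongo shell, find() returns a cursor rather than the matched document, so a minimal sketch of the usual fix (reusing zastern's _id) would be:

    // findOne() returns the document itself (or null)
    t = db.products.findOne({_id: ObjectId("507d95d5719dbef170f15c00")})
    // or materialize the cursor into an array of documents
    t = db.products.find({_id: ObjectId("507d95d5719dbef170f15c00")}).toArray()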
[02:55:28] <ttdevelop> that's what i'm trying to do from what i learned here : http://stackoverflow.com/questions/5794834/how-to-access-a-preexisting-collection-with-mongoose
[06:35:16] <ryn1> guys, pls help, I'm trying to export data from mongodb using this command --> mongoexport -d dbName -c myCollection -f title,meta,contents --csv -o test_export.csv .. my problem is, the content field in the exported file doesn't get the complete data content and converts the rest of the content to an ellipsis..
[06:36:15] <ryn1> how can i get the exact content and remove that ellipsis?
[06:39:58] <crudson> what types of objects are meta and contents? (j|b)son doesn't translate to csv particularly well except for the simplest of data types
[06:42:33] <crudson> have to run, but I bet that it's some nested attribute, so I'd advise using json as the export format
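For reference, a sketch of the same export with JSON output, reusing ryn1's database, collection and field names (mongoexport emits JSON when --csv is omitted):

    mongoexport -d dbName -c myCollection -f title,meta,contents -o test_export.json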
[07:16:33] <ryn1> meta and content are just strings..
[08:37:10] <samurai2> hi there, is there a way to change some field value in input collection while we're still in the map phase of the map/reduce? thanks :)
[14:20:08] <modcure> ORA-00600 (ORA 600) error is a generic internal error from Oracle
[14:21:12] <eka> hi all... I see that .hint() on a find improves the performance of my query. Is there a way to give hint() to aggregation or map/reduce?
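For reference, a hint on a find in the mongo shell looks like the following; the collection, query, and index here are hypothetical:

    // force the query to use the index on status
    db.items.find({status: "active"}).hint({status: 1})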
[14:38:10] <ron> Derick: ORA-600 is oracle's way of saying 'Something went wrong, it's really bad, but we have no idea what the problem is, so you're pretty much screwed. Good luck!'.
[16:01:27] <sander__> eka: What do the 1's mean here?: db.users.find({}, {a:1,b:1})
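The second argument to find() is a projection: a 1 means "include this field" in the returned documents (_id is included by default). A small sketch using sander__'s example:

    // return only a, b and _id for every document
    db.users.find({}, {a: 1, b: 1})
    // add _id: 0 to drop _id as well
    db.users.find({}, {a: 1, b: 1, _id: 0})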
[16:01:45] <squawknull> sander__: 7 databases in 7 weeks is awesome... http://pragprog.com/book/rwdata/seven-databases-in-seven-weeks
[16:01:59] <squawknull> it's not specifically on mongo, but it's a good primer on nosql in general, and the different approaches
[16:03:16] <squawknull> sander__: fowler's book is also supposed to be really good. i've not gotten around to reading it yet. http://martinfowler.com/books/nosql.html
[16:04:38] <sander__> squawknull, Thanks.. but I prefer something I don't have to order first.
[16:39:33] <prune> maybe one of you can help me build a good schema for a foursquare-like application
[16:40:54] <prune> as people do "checkins" but never "checkout", I'm wondering where to keep the checkins (which is a triple of user, location and date) and where to keep the current user count on each location (which is equal to the number of checkins in the last 30 minutes)
[16:41:58] <prune> For the moment my idea is a collection for places, a collection for users and a 'live and highly changing' collection for checkins, where I may duplicate some information
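A minimal mongo shell sketch of that checkins collection and the "current user count" query; the collection and field names are hypothetical, and userId/placeId are placeholders:

    // one document per checkin
    db.checkins.insert({user: userId, place: placeId, at: new Date()})
    // index so the "last 30 minutes" count stays cheap
    db.checkins.ensureIndex({place: 1, at: 1})
    // current user count for a place = checkins in the last 30 minutes
    db.checkins.count({place: placeId, at: {$gt: new Date(Date.now() - 30 * 60 * 1000)}})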
[16:42:10] <Derick> ckd: just realized it still has a problem for replicasets
[16:42:33] <prune> anyone worked on a problem like that ?
[17:09:18] <Derick> it most definitely fixes it here
[17:09:44] <Derick> you have two normal connections right, no replicasets?
[17:11:02] <podman> I've got a problem that I'm trying to track down and I'm really not sure where to start looking. I have a ruby processes that pops messages off of a queue and then upserts documents in mongodb. I have this process running on two different machines. one runs fine, but the other one occasionally gets stuck for about 10 seconds while updating a document.
[17:13:06] <ckd> Derick: one is a replica, one is standalone
[17:15:19] <Derick> ckd: okay, that should still work - I tried that case too.
[17:15:39] <Derick> ckd: you're certain you deployed the latest version of the extension?
[17:16:16] <ckd> let me just double check it's built from your branch
[17:16:24] <Derick> ckd: can I suggest you try to use MongoLog
[17:24:19] <podman> i would have thought it was a locking issue, but you'd think both clients would be affected. it seems to only be one client though
[17:24:39] <ckd> yeah, it's not super mission critical for me, I was just using that second connection to write out logs to a separate instance, but it was on my list to take that out of the path anyway
[17:24:51] <ckd> but happy to test any scenarios out
[18:46:19] <AugCampos> Is it possible to get all distinct values for a field across all docs in a collection? ex: { name: "zzz", type: ["typea", "typeb"]} , { name: "zzz2", type: ["typea", "typec", "typed"]} and retrieve ["typea", "typeb", "typec", "typed"]?
[18:46:49] <choover> Hello! Noob here... I am trying to set up a replica set on my mac. I want all three mongod servers on my localhost to be part of a replicaset per these directions here: http://docs.mongodb.org/manual/tutorial/deploy-replica-set/ ... I have followed the instructions successfully up to step 6 where it says to add members to the replica set you need to call the add method on the rs object like: rs.add("localhost:27018") ... When I at
[18:46:54] <choover> can't use localhost in repl set member names except when using it for all members
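Since all three mongods are on localhost, one way around that error is to make every member use localhost, e.g. by initiating with an explicit config instead of mixing hostnames; this sketch assumes the set was started with --replSet rs0 and the ports from the tutorial:

    rs.initiate({
        _id: "rs0",
        members: [
            {_id: 0, host: "localhost:27017"},
            {_id: 1, host: "localhost:27018"},
            {_id: 2, host: "localhost:27019"}
        ]
    })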
[18:52:27] <psyprax> I'm new to mongodb and a have a quick question
[18:53:45] <psyprax> I'm interested in storing a large (perhaps 1M by 1M) 2-dimensional matrix (not sparse, but not completely dense either) whose cells will be updated frequently
[18:57:15] <kali> psyprax: maybe you store it per row or column, depending on how dense it is. each document in mongodb must be below 16MB
[18:57:46] <kali> my advice, if you can choose the technology: don't make this your first experience with mongodb, as I fear it will be painful :)
[18:58:53] <psyprax> kali: haha, ok. Thanks! I'll experiment a bit with an (x,y, value) representation, but will probably go with something else as you said
[18:59:15] <kali> psyprax: at least, use short field names
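A rough sketch of the (x, y, value) layout psyprax mentions, with the short field names kali suggests; the collection name and values are hypothetical, and whether to store per cell or per row/column depends on the density:

    // one document per non-empty cell
    db.m.insert({x: 12, y: 40031, v: 0.73})
    db.m.ensureIndex({x: 1, y: 1})
    // frequent in-place cell updates
    db.m.update({x: 12, y: 40031}, {$set: {v: 0.74}})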
[19:18:00] <AugCampos> Is it possible to get all distinct values for a field across all docs in a collection? ex: { name: "zzz", type: ["typea", "typeb"]} , { name: "zzz2", type: ["typea", "typec", "typed"]} and retrieve ["typea", "typeb", "typec", "typed"]?
[19:21:02] <crudson> AugCampos: There is a 'distinct' aggregate function.
[19:24:11] <AugCampos> crudson: the examples in (http://www.mongodb.org/display/DOCS/Aggregation#Aggregation-Distinct) only show scalar value fields, does it work for arrays too?
[19:24:22] <crudson> AugCampos: did you try it? :)
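distinct does flatten array fields, so with AugCampos's example documents in a hypothetical 'things' collection, something like this should return the merged list:

    db.things.distinct("type")
    // expected: ["typea", "typeb", "typec", "typed"]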
[19:38:09] <ckd> i don't have the luxury of high traffic yet, so I haven't encountered that, but I'd be curious to know what caused it for you in case i ever do see it
[20:36:55] <crudson> diegobz: it's an aggregate operation only
[20:52:16] <ckd> JakePee: The upcoming 1.3 release changes how all of that works
[20:54:43] <ckd> JakePee: They, among other things, rewrote the connection stuff… Hannes is doing a talk on it next week: http://www.10gen.com/events/webinar/whats-new-php-driver
[20:56:45] <rshade98> is there a way to configure a replica set from the mongo ruby driver
[21:02:08] <JakePee> ckd: I actually just threw together some tests against it with 100,000 iterations doing a findOne against the same collection. If i create a connection on each iteration, it took 7.68110585213 seconds. If I reuse the same connection, 6.49627995491 seconds.
[21:02:34] <ckd> JakePee: which version of the driver are you using?
[21:02:36] <JakePee> not exactly an exhaustive test, but it shows that there's not a lot of overhead in creating a connection
[21:05:25] <ckd> err, maybe RC1, but don't use it in production
[21:05:33] <ckd> "The new framework no longer has the concept of a connection pool, but instead make sure there is only one connect per node/db/username."
[21:08:37] <ckd> JakePee: hopefully that'll save you some time, the issue is somewhat academic at this point :)
[21:27:55] <JakePee> ckd: thanks, I was more curious if there was any reason to leverage the connection pooling but it would seem to be more of a beneficial exercise for a threaded environment
[21:30:08] <Derick> ckd: btw, beta2 isn't much better... it has other interesting issues
[21:30:36] <JakePee> that is, it seems to have more of an application with asynchronous requests
[21:44:15] <ckd> Derick: anything bad enough to make me cry into my pillow?
[22:45:12] <cjhanks> Hey all -- with the C++ version client drivers; is it possible to set the FD_CLOEXEC flag on the DBClientConnection with out modifying the driver source code?
[22:52:14] <cjhanks> In fact I can't find the tcp code in the source. Any suggestions would be useful.
[22:53:12] <rshade98> Jake I am using the ruby driver
[22:53:31] <rshade98> the only thing I see is a command
[22:54:02] <JakePee> is that the exact command you're running?
[22:54:15] <JakePee> it's 'replSetInitiate' not 'replSetInitiates'
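For reference, the command document is the same whichever driver sends it; here's a mongo shell sketch (the ruby driver's command method should accept the equivalent hash, and the set name and hosts here are hypothetical):

    db.adminCommand({replSetInitiate: {
        _id: "rs0",
        members: [
            {_id: 0, host: "host1:27017"},
            {_id: 1, host: "host2:27017"},
            {_id: 2, host: "host3:27017"}
        ]
    }})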
[22:54:21] <Guest19926> Installed MongoDB on a 32-bit debian machine via 10-gen package. When I start mongo, it drops after about 30 seconds without warning. Before it does, I am able to visit the web admin and see things. Any ideas as to why? Thanks.
[22:54:31] <rshade98> yeah, haha that is a bad typo, lol
[22:55:12] <cjhanks> Guest19926: My experience is the packages are terrible and you're better off just building it yourself.
[22:56:10] <Guest19926> cjhanks: Thank you for that knowledge. I will definitely try self installation.
[23:05:59] <ckd> Guest19926: what do the logs show
[23:15:53] <Guest19926> ckd: Last couple of messages say: connection accepted from 192.168.2.159 (My laptop while on the network) Invalid operation at address: 0x819b503 from thread: conn1
[23:15:53] <Guest19926> ckd: Never warns me about the shut down though.
[23:17:03] <ckd> Guest19926: what processor are you running on?
[23:17:54] <ckd> Guest19926: See if this is relevant to your specific setup: https://jira.mongodb.org/browse/SERVER-7012
[23:18:07] <Guest19926> ckd: Also note, I do get a warning on startup: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
[23:18:22] <Guest19926> ckd: How would I check that on Debian?
[23:19:52] <Guest19926> Thanks, there we go. AMD Athlon(tm) XP 3000+
[23:21:49] <ckd> Guest19926: That processor is from '03, I think it's reasonable to assume you're running into the same issue as reported there
[23:22:40] <ckd> Guest19926: But I know VERY little about this sorta thing
[23:36:19] <cjhanks> ckd: Fedora bunked my system.
[23:37:33] <cjhanks> ckd: That is, the 10gen RPM for Fedora hung the init system and was completely screwed. The Debian package required reconfiguring, though it didn't hang the init.
[23:45:57] <spartango> hi, new to mongo (coming from cassandra)...question about data model: if i have a list of strings that i want to associate with an id... is storing them as keys in a document with no/empty values looked down upon?
[23:52:11] <rossdm> that sounds kind of bogus, why not store them in an array?
[23:54:03] <spartango> its not all that different, i could definitely do that. is there a particular reason to store them in an array?
[23:55:53] <rossdm> well you said that you have a list of strings, which makes me think that they're related in some way and it would be nice to iterate over them. Making them keys in an object with null values sounds like it doesn't represent the data properly. Hard to say more without knowing specifics
[23:57:41] <spartango> they're URLs for parts of a file, accumulated over time. so you can imagine that I'll want to append to that list a bunch of times, but when the parts are there I'll probably grab all the strings at once
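A sketch of the array approach rossdm suggests, using $addToSet (or $push) so part URLs can be appended over time; the collection and field names here are hypothetical:

    // append a part URL as it arrives ($addToSet skips duplicates)
    db.files.update({_id: fileId}, {$addToSet: {parts: "http://example.com/part-07"}})
    // later, grab all the parts in one read
    db.files.findOne({_id: fileId}, {parts: 1})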