[00:19:44] <morenoh149> lost1nfound: I use mongohub but its not web
[16:01:03] <atarihomestar> i’m new to mongo. i have a friends and a users collection. i want to query the users collection getting everybody who isn’t already my friend. that would mean i’d have to somehow also look in the friends collection and figure out who is already my friend. how do i do that?
[16:11:39] <GothAlice> atarihomestar: MongoDB isn't relational, so what you are asking will require multiple queries within your application. See also: http://www.javaworld.com/article/2088406/enterprise-java/how-to-screw-up-your-mongodb-schema-design.html
[16:13:10] <GothAlice> As another note, MongoDB isn't a good fit for storing graphs (i.e. social networks; multi-degree friend-of-friend type stuff will be quite difficult, etc.)
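A minimal sketch of the two-query approach GothAlice describes. The collection and field names (`friends`, `users`, `user_id`, `friend_id`) are assumptions about atarihomestar's schema, not anything stated in the channel; the MongoDB calls are shown in comments, and the `$nin` logic is simulated with plain data so it is checkable without a server.

```python
# Step 1 -- against a real server this would be something like:
#   friend_ids = db.friends.distinct("friend_id", {"user_id": me})
# Here we fake the friends collection with plain dicts (assumed schema).
friends = [
    {"user_id": 1, "friend_id": 2},
    {"user_id": 1, "friend_id": 3},
]
me = 1
friend_ids = [f["friend_id"] for f in friends if f["user_id"] == me]

# Step 2 -- the query document you'd pass to db.users.find():
# everyone who is not already a friend, and not me.
query = {"_id": {"$nin": friend_ids + [me]}}

# Simulate users.find(query) for this one $nin operator:
users = [{"_id": 1}, {"_id": 2}, {"_id": 3}, {"_id": 4}]
not_friends = [u for u in users if u["_id"] not in query["_id"]["$nin"]]
print(not_friends)  # [{'_id': 4}]
```

The key point from the channel stands: there is no server-side join here, so the friend list makes a round trip through the application between the two queries.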
[18:59:00] <Strinnityk> I have a list of documents I've obtained (the documents contain score, kind and name). Is there a way I can get the 3 documents with the highest score for each different name?
[20:42:48] <altamish> Hi all, I am trying to upload an image file using pymongo and Flask. The file won't be over 16MB so I don't need GridFS. So how do I upload it as a document?
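altamish's question also went unanswered here. A file under the 16 MB BSON document limit can live directly in a field as binary data: in PyMongo on Python 3, raw `bytes` are stored as BSON Binary automatically (on Python 2 you'd wrap them in `bson.Binary`). A minimal sketch, with field and collection names made up for illustration; the actual insert is shown as a comment since it needs a live server.

```python
MAX_BSON = 16 * 1024 * 1024  # MongoDB's 16 MB document size limit

def make_image_doc(filename, data):
    """Build a document holding the image bytes; refuse files that
    can't fit in a single BSON document (those need GridFS)."""
    if len(data) >= MAX_BSON:
        raise ValueError("file too large for one document; use GridFS")
    return {"filename": filename, "content_type": "image/png", "data": data}

# In a Flask view you'd read the upload, e.g. request.files["image"].read(),
# then insert with something like:
#   db.images.insert_one(make_image_doc(name, data))
doc = make_image_doc("logo.png", b"\x89PNG...fake bytes...")
print(sorted(doc))  # ['content_type', 'data', 'filename']
```

Note the size check leaves no headroom for the rest of the document's fields; in practice you'd want the payload comfortably under the limit.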
[21:33:07] <xerxes> # Convert from subtype 3 to subtype 4 .. Whats subtype 3 and 4?
[21:33:27] <GothAlice> xerxes: http://bsonspec.org/spec.html < search for "subtype" on this page
[21:33:36] <GothAlice> That applies to how UUIDs are being stored.
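To make GothAlice's pointer concrete: subtype 4 is the "standard" UUID binary subtype, which stores the RFC 4122 big-endian bytes identically in every driver, while subtype 3 is the "legacy" subtype whose byte order varies by driver (the old Java and C# drivers each scrambled the fields differently). A small illustration with the stdlib, using the C#-legacy field reordering as one example of a subtype-3 layout:

```python
import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")

# Subtype 4 ("standard"): the RFC 4122 big-endian bytes, driver-independent.
std = u.bytes

# Subtype 3 ("legacy"): driver-specific. As one example, the C# legacy
# layout stores the first three UUID fields little-endian:
legacy_csharp = std[3::-1] + std[5:3:-1] + std[7:5:-1] + std[8:]

print(std.hex())            # 00112233445566778899aabbccddeeff
print(legacy_csharp.hex())  # 33221100554477668899aabbccddeeff
```

This is why a 3-to-4 conversion exists at all: the bytes on disk differ even though both encode the same UUID.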
[22:12:01] <MacWinner> Hi, my mongo replica set is dumped at midnight every night. At that time I got alerts about high disk I/O on the server, and I also noticed that my queries to mongo were reaching the 30 second timeout. I have a simple PHP script that queries for some activity documents. However, I have PRIMARY_PREFERRED as my read preference, so I didn't think this timeout should occur. Any pointers on why this timeout was happening during the mongodump?
[22:12:07] <MacWinner> Or is it expected behavior?
[22:12:27] <MacWinner> Though the only thing I could come up with was maybe I need to set "secondaryAcceptableLatencyMS" higher
[22:20:03] <RegexNinja47> My database size in mongo reports <32mb, but my mongodb directory itself is around half a gigabyte. What should I do to fix/troubleshoot the issue?
[22:20:25] <RegexNinja47> Also, smallfiles is and has been on, so that's not the issue
[22:36:59] <RegexNinja47> one sec. I'll check the prealloc file size
[22:38:05] <MacWinner> Derick, hi, not sure if you were around when I asked the question above... any idea why read operations would fail on a replica set during a mongodump operation (about 50GB db)? I'm doing the query via a PHP script that has read preference set to PRIMARY_PREFERRED
[22:38:07] <RegexNinja47> Yeah. >90% of the storage is those prealloc files
[22:39:24] <MacWinner> RegexNinja47, i got stuck on this when I first started using mongo too.. would be nice if the mongo startup script echoed something during the initial startup saying it's about to allocate all that space.. I noticed it more because it took like 10 minutes for it to create the file on my 2TB hard drive
[22:39:45] <MacWinner> i'm not sure what the process is to reduce the size of it though
[22:39:52] <RegexNinja47> Noprealloc is already set to true
[22:45:50] <MacWinner> Derick, can't seem to find any relevant log entries.. but probably more because I saw the problem very late at night and I can't remember exactly what day.. was up all night fixing something else.. any particular string you think would be helpful to grep in the log files?
[22:46:09] <RegexNinja47> Both smallfiles and noprealloc are set to true by default on my hosting service, but there are three 128mb prealloc files
[22:46:36] <RegexNinja47> They're set in mongodb.conf
[22:47:51] <RegexNinja47> Should I contact the people who host it and talk to them? I don't want to sound like an idiot
[22:48:21] <RegexNinja47> Although one would assume noprealloc would get rid of prealloc files
[22:52:48] <RegexNinja47> The mongodb log says that they're enabled
[22:53:31] <RegexNinja47> Sat Mar 21 17:59:52.203 [initandlisten] options: { ... noprealloc: "true", ... smallfiles: "true" }
[22:54:29] <RegexNinja47> But there are two 128mb and one 113mb prealloc.# files in my journal directory
[23:02:32] <RegexNinja47> Deleting those prealloc files made mongo default to the other journal format
[23:02:46] <Derick> there is only one journal format?
[23:02:46] <RegexNinja47> I guess it uses prealloc files if they exist
[23:07:05] <MacWinner> Derick, if the mongodump is happening on the same server as the primary, and it's causing a spike in disk I/O, could you see this as possibly causing timeouts at 30 seconds? mongodb seemed to have started responding fine again after the dump was complete.
[23:07:35] <MacWinner> assuming nothing else is happening on the server
[23:12:00] <MacWinner> out of curiosity, does having a delayed slave of, let's say, 1 day serve a purpose as a backup?
[23:12:14] <MacWinner> so basically i have a continuous backup
[23:12:18] <Derick> you can, but you need to make sure your oplog size is that big
[23:13:07] <MacWinner> i'm not clear how i would recover from the delayed node.. would I just run a mongodump on it and then mongoimport back on the primary or new cluster?
[23:13:26] <MacWinner> recover from a failure using the delayed node I mean
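MacWinner's guess is close, with one correction: `mongorestore`, not `mongoimport`, is the counterpart to `mongodump` (`mongoimport` reads JSON/CSV, not BSON dumps). A CLI sketch only; the hostnames, port, and paths are made up, and as Derick notes the delayed member is only useful if the oplog window covers the delay:

```shell
# Dump from the delayed member -- connect to it directly,
# not through the replica set, so reads don't route elsewhere:
mongodump --host delayed.example.com --port 27017 --out /backup/dump

# ...then restore into the primary of the new or repaired cluster.
mongorestore --host primary.example.com --port 27017 /backup/dump
```

The delay buys a window in which an accidental drop or bad write hasn't yet replicated to that member, which is what makes it usable as a rolling backup.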