[02:11:33] <sputnik13> I have a 32bit system running mongodb, and it hit the 2GB mark and now refuses to start... I need to get to at least delete some data and get the thing back up, is there a way to delete objects from a mongodb database file offline?
[02:13:15] <joannac> mongodump from the dbpath, and move to 64bit
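For the offline-recovery route joannac suggests, a 2.6-era sketch (paths are hypothetical): with mongod stopped, `mongodump --dbpath` reads the data files directly, and the dump can then be restored on a 64-bit host.

```shell
# Run with mongod stopped; --dbpath reads the data files directly (2.6-era option).
mongodump --dbpath /var/lib/mongodb --out /backup/dump

# Then, on the 64-bit replacement host:
mongorestore /backup/dump
```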
[02:13:30] <sputnik13> right, not an option right now
[07:15:03] <snowmanas> HI. I have a problem. I have migrated my replicaset to new hardware and did all of the rs.reconfig(cfg) stuff etc.
[07:15:29] <snowmanas> in rs.status() I see all new members present, 1 primary and 2 secondaries, however in sh.status through mongos I see the old ones in the "Shard" section
[07:15:44] <snowmanas> how can I update the config database in order to have the correct replica sets?
[07:19:08] <joannac> you'll have to modify the config database, and then bounce all your members
[07:21:34] <joannac> Migrating bit by bit would've avoided this
[09:23:41] <davidcsi> hello guys, I wonder if you can help me out with a problem I'm having with the C++ driver... I'm executing an update with the upsert flag set to true, the query matches exactly, but the driver is inserting a new document instead of "pushing" into the existing doc. I set profiling to 2 and copied/pasted the query/update into the shell and it works perfectly... any ideas? I posted the question on the mongodb-user list and on stackoverflow (http:/
[09:25:29] <rspijker> davidcsi: the link got cut off :)
[09:25:30] <ernetas> Derick: oh, by the way, we managed to extract all of the data! My colleague wrote an extractor in golang with another mongo driver. Worked like a charm, although it took us a day to scan 300 gigs of disk dumps, verify the data stored in BSONs isn't malformed and then do mongorestore with it on an old backup...
[09:25:58] <Derick> ernetas: wow! Where is the blog post and where is the script? :-)
[10:25:16] <Zelest> Derick, I also did some benchmarks with GridFS vs normal docs for that CDN idea of mine.. and its actually faster with GridFS than using normal docs :o
[10:27:54] <davidcsi> ssarah you need to create a new connection via SSH
[10:28:37] <davidcsi> that is, if you're not on the same box...
[10:42:05] <ssarah> davidcsi: i am on the same box. But what do you mean, a new connection via SSH?
[10:49:22] <davidcsi> ssarah, on the create connection box, you should see a tab called "SSH", you create a tunnel using that, and the actual connection to mongo is via localhost
[11:41:31] <remonvv> I don't understand why people on SO keep claiming embedding collections is the preferred way to manage 1:N relationships in MongoDB.
[11:48:38] <davidcsi> anyone know why on Debian, when I have installed "mongodb-org" (2.6) and I try to install "mongodb-dev", apt wants to remove mongodb-org???
[11:49:06] <Derick> that's a good question. Let me see
[11:49:46] <Derick> davidcsi: it's not from the same repository. It wants to install the 2.4.10dev package from the debian distribution
[11:58:18] <Derick> perhaps you're not using the right index?
[11:59:12] <oleg008> so when not reading the big documents there should be 0 slowdown and additional memory usage ...
[12:21:05] <oleg008> what happens if I have a very big field in a document, f.e. description, but I when I fetch, I don't select it. Will this perform at same speed like there is no description or there is an overhead?
[12:22:37] <Derick> oleg008: the only difference is that those fields are not sent over the wire
[12:22:46] <Derick> a document is always read/brought into memory in full
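A minimal shell illustration of Derick's point (collection and field names are hypothetical): a projection trims what is sent over the wire, but the server still brings the whole document into memory.

```javascript
// Exclude the large field from the results; the full document is still
// read into memory on the server, it just isn't transmitted.
db.articles.find({ author: "oleg" }, { description: 0 })
```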
[12:27:33] <Derick> oleg008: this is called "Swapping" - the OS manages MongoDB's memory consumption - even though MongoDB has all data *mapped* into memory.
[12:27:48] <Derick> not sure how to explain this better...
[12:28:11] <oleg008> so os will take some memory from mongo when needed
[12:34:31] <Derick> duh, davidcsi timed out as I wanted to give an answer
[12:57:31] <rspijker> Derick: it’s not really swapping is it… Paging, sure. But it shouldn’t swap anything?
[12:58:07] <Derick> rspijker: if you run out of memory, sure.
[12:58:19] <Derick> rspijker: used that term as it's often more "windows user" friendly
[13:00:50] <rspijker> well, at some point it would swap, sure… But since mongo just mmaps the data, it will only ever page that out, right? Or are there circumstances where it could write that to swap space as well?
[13:01:44] <Derick> rspijker: it's the same thing, not? :)
[13:13:00] <rasputnik> mongo replication seems to be a bit of a black box - if i have replica members that don't seem to be catching up, where do i start to debug?
[13:22:11] <sec_> Derick: last document always is system. ?
[13:49:36] <rspijker> future28: if you have journalling turned on, that can also use 3GB
[13:49:45] <rspijker> mongo has a very large footprint initially…
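For the 2.6-era MMAPv1 engine, that initial footprint can be reduced with the smallfiles option (a sketch; it trades maximum file size for a larger number of smaller files):

```shell
# Caps data files at 512MB and journal files at 128MB (mmapv1 only).
mongod --dbpath /var/lib/mongodb --smallfiles
```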
[13:54:28] <remonvv> Your opinions please; should a typical ODM mapping library reject types that do not have a 1:1 mapping to a BSON type or should it try to do non-lossy casts where needed. For example, in Java should {short a;} throw an error (16 bit signed integers not supported in BSON) or auto-cast it to int(32)
[13:56:17] <kali> remonvv: i think in java it should throw
[13:56:33] <oleg008> your application should use types which are ok for the db
[13:56:50] <davidcsi> hello guys, something's wrong with mongodb's downloads:
[13:57:10] <kali> remonvv: if you're odming, then you have strong data types, so you'd better go all the way
[13:58:04] <remonvv> kali : That's my opinion but some people here are making the somewhat valid case that where non-lossy auto casting is possible it would help with legacy POJOs and whatnot. I mean there's no actual risk in doing Java short -> BSON int32 and back again.
[13:58:13] <remonvv> Figured i'd poll for opinions here ;)
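The trade-off remonvv describes can be sketched as a mapper policy flag (the function and its behavior are illustrative, not any real ODM's API): strict mode rejects types with no 1:1 BSON counterpart, lenient mode performs the lossless short → int32 widening.

```javascript
// Sketch of the two policies: reject vs. non-lossy widen.
// BSON has no 16-bit integer type, but any short fits exactly in int32,
// so the lenient cast round-trips without loss.
function mapShortToBson(value, { strict = true } = {}) {
  if (!Number.isInteger(value) || value < -32768 || value > 32767) {
    throw new RangeError("not a 16-bit signed integer");
  }
  if (strict) {
    throw new TypeError("short has no 1:1 BSON type mapping");
  }
  return value | 0; // stored as BSON int32; numeric value unchanged
}

console.log(mapShortToBson(12345, { strict: false })); // 12345
```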
[15:04:18] <reencoded> anyway.. yeah... im taking the course on java+mongo development
[15:04:42] <uehtesham90> hello, during aggregation, if i am grouping by multiple fields, and one of the fields does not exist in a document, how will the grouping happen? for e.g. if i group by name, email, nickname and if email does not exist in a document, will it group with email as null?
[15:04:55] <reencoded> and since i frequent freenode im like hanging around here too :p
[15:19:08] <quantum> Hey guys, I have a slightly newbie question. I've been using mysql for ages, and I can view the tables with phpMyAdmin, which is useful for backing them up, and I can also display information on my website by fetching info from the table
[15:20:12] <quantum> I just installed mongodb on my server, how can I do the same things using mongo?
[15:32:37] <rspijker> uehtesham90: that does mean that documents which have that field explicitly set to null will be in the same group as documents that are missing the field
[15:33:50] <uehtesham90> oh really? i didnt know that... but in my collection, there are no fields set to null... is there a way to take care of this situation?
[15:34:09] <uehtesham90> maybe i can use $exists in my $match operator?
[15:35:46] <dukeatcoding> hey guys, how to delete a single file from gridfs when i know the mongoid via mono shell ?
[15:36:07] <dukeatcoding> and even new MongoId doesnt work
[15:36:14] <dukeatcoding> 0 TypeError: Property 'delete' of object testfs.fs.files is not a function
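In the 2.6 shell there is no delete() method on the files collection; removing a GridFS file by hand means deleting its metadata document and its chunks (a sketch, assuming the default fs prefix; the ObjectId is a truncated placeholder):

```javascript
var fileId = ObjectId("53a1c7...");         // the file's known _id (placeholder)
db.fs.files.remove({ _id: fileId });        // the metadata document
db.fs.chunks.remove({ files_id: fileId });  // the file's data chunks
```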
[15:38:44] <rspijker> uehtesham90: you can do that, filter out documents where the field doesn't exist, or filter out documents where it's explicitly set to null
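The missing-versus-null behaviour rspijker describes can be simulated outside the database (plain Node, hypothetical documents): $group resolves an absent field path to null, so the document lands in the same bucket as one with an explicit null.

```javascript
// Three docs: one with an email, one missing the field, one with an explicit null.
const docs = [
  { name: "a", email: "x@example.com" },
  { name: "a" },
  { name: "a", email: null },
];

// Like $group's key resolution: a missing path evaluates to null.
const keyOf = (doc) =>
  JSON.stringify({ name: doc.name ?? null, email: doc.email ?? null });

const groups = new Map();
for (const doc of docs) {
  const key = keyOf(doc);
  groups.set(key, (groups.get(key) ?? 0) + 1);
}

// The missing-field doc and the explicit-null doc share one group.
console.log(groups.size); // 2
console.log(groups.get(JSON.stringify({ name: "a", email: null }))); // 2
```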
[15:46:11] <agenteo> hi, I am trying to run a one off query something like “db.myColl.find({_id: {$in: LIST_OF_IDS}})”. I would like list of ids to come from another query… I understand I can use toArray, but the query .find returns key value pairs. Is there a way to return only values?
[15:46:28] <dukeatcoding> Derick: that was what i was thinking
[15:46:35] <Derick> if you remove them both, then an insert should create them both as well, as it's just two (or more) inserts into multiple collections
[15:46:47] <dukeatcoding> is "fs" not an allowed collection name anymore, since fs.files and fs.chunks are there?
[15:46:52] <Derick> i know that the php driver even creates indexes
[15:47:03] <Derick> dukeatcoding: no, shouldn't matter
[15:47:50] <dukeatcoding> agenteo: never seen only values...
[15:48:54] <agenteo> I am coming from classic SQL; what if you want to run a query like SELECT * FROM table WHERE id IN (ANOTHER QUERY);
[15:49:04] <Derick> agenteo: you need to do two queries
[15:49:21] <Derick> *or* rethink your database schema
[15:49:22] <agenteo> ok and massage the key value output accordingly I guess
[15:49:43] <agenteo> naa it’s a one off query for some one off data migration
[15:49:53] <ssarah> on the example here, http://www.tutorialspoint.com/mongodb/mongodb_insert_document.htm, how do i go about running that code in mondo? i can't copy paste properly....
[15:49:58] <rspijker> agenteo: you can use aggregation framework to massage the data
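The two-query pattern Derick suggests hinges on turning find()'s key/value documents into a plain array of values; in the shell that is toArray() plus map(). A stand-in for the cursor result makes the extraction step runnable here (the collection names from the question are hypothetical):

```javascript
// Stand-in for db.other.find({ active: true }, { _id: 1 }).toArray():
const cursorResult = [{ _id: 101 }, { _id: 102 }, { _id: 103 }];

// Extract just the values from the key/value pairs...
const ids = cursorResult.map((doc) => doc._id);

// ...then feed them to the second query: db.myColl.find({ _id: { $in: ids } })
console.log(ids); // [ 101, 102, 103 ]
```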
[16:31:14] <dukeatcoding> http://docs.mongodb.org/manual/reference/method/db.currentOp/ shows you the currently running ops but you have to "reconstruct" the query itself
[16:31:40] <dukeatcoding> probably you would log it with the app when an error occurs instead of logging all past queries
[16:33:28] <anildigital> dukeatcoding: I want to log all queries while in development mode of my app
[16:33:43] <anildigital> I am surprised google not turning up with any results
[16:33:49] <anildigital> dukeatcoding: I want to log it to a file
[17:02:54] <anildigital> stefandxm: in which database should I set profiling level
[17:03:04] <anildigital> I have set in the one, I want to see queries for
[17:09:25] <Moonjara> Hi! I don't know if i'm in the right place but I'd like some help with basic manipulation in Java. I've a directory with several links as embedded Documents. If i do db.directories.findOne({}) on the command line i get every link in my directory, but in Java the findOne method doesn't get the embedded documents. Does anyone have any clue to help me? Thanks anyway!
[17:11:12] <anildigital> Is it possible to log mongo queries run by node mongo driver?
[17:11:23] <anildigital> Google is not helping at all
[17:27:17] <tejas-manohar> i just installed it via homebrew
[17:37:11] <ssarah> k, i think i grasped the concepts now. What i've been asked to do is to setup an environment where i have 2 shards (each a replica set of 2) on the same machine... i think with arbiters and whatnot
[17:37:19] <ssarah> got any reference i can use for this?
[17:40:26] <ssarah> http://docs.mongodb.org/manual/tutorial/ <- plenty here to read, i guess -_-
[17:45:08] <joshua> You'll need some arbiter processes at least. You can't really have a replica set with 2 members cause if one goes down it won't know what to do
[18:02:39] <ssarah> just one question, if i'm running two shards (a replica set of two each) on the same machine (for staging purposes), am i gonna need more than one mongod?
[18:49:07] <staykov> sorry had a call: i want the results to be sorted by testimonialCount, and then reviewsCount - so the result should be that all the objects that have the same number of testimonials are then sorted by reviews
[19:00:18] <kali> well, that's what i thought... .sort({ reviewsCount: -1, testimonialCount: -1 })
[19:00:36] <kali> to get the most reviewed and testimonialed first
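staykov's requirement, primary key testimonialCount with ties broken by reviewsCount, corresponds to .sort({ testimonialCount: -1, reviewsCount: -1 }): in a sort document the first key is the primary sort key. A plain-JS simulation of that ordering (the sample docs are made up):

```javascript
const docs = [
  { name: "a", testimonialCount: 2, reviewsCount: 5 },
  { name: "b", testimonialCount: 3, reviewsCount: 1 },
  { name: "c", testimonialCount: 2, reviewsCount: 9 },
];

// Descending on testimonialCount first; reviewsCount only breaks ties.
docs.sort(
  (x, y) =>
    y.testimonialCount - x.testimonialCount || y.reviewsCount - x.reviewsCount
);

console.log(docs.map((d) => d.name)); // [ 'b', 'c', 'a' ]
```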
[19:44:27] <CaptainLex> I have a pretty newbish question
[19:45:25] <CaptainLex> Basically, I have a record I'd like to store that contains a bit of atomic information and also references to other structures
[19:45:54] <CaptainLex> (It's the idea of a movie "screening", which includes references to the films being shown, the venue where it's happening, the showtimes, &c)
[19:47:15] <CaptainLex> Is the mongo convention to embed the entire referenced structure in the document (as in, is that how I should design the program), or would I still embed a reference like an ID instead?
[19:47:38] <CaptainLex> And if the latter, does the query language allow me to "piece together" the main structure and suck up all the reference objects?
[19:48:06] <CaptainLex> I don't know much about databases so I wasn't sure what to search for on the documentation
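The two conventions CaptainLex asks about, sketched for the screening example (all field and collection names invented): embedding copies the referenced structures into the document, while referencing stores ids and "pieces together" the result with follow-up queries, since the 2.6 query language has no server-side join.

```javascript
// Option 1: embed - one read gets everything, but venue/film data is
// duplicated across every screening that shares them.
// { _id: ..., venue: { name: "Rialto" }, films: [{ title: "..." }], showtimes: [...] }

// Option 2: reference - store ids, then fetch the pieces separately.
var screening = db.screenings.findOne({ _id: screeningId });
var venue = db.venues.findOne({ _id: screening.venueId });
var films = db.films.find({ _id: { $in: screening.filmIds } }).toArray();
```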
[20:55:21] <tejas-manohar> hey guys i needa drop my db completely because i cant start my node app because of it
[21:43:06] <joshua> tejas-manohar: use database, and then db.dropDatabase(); ( so make real sure you run use on the correct one first)
[21:43:36] <joshua> or you can drop it per collection instead
[22:26:58] <enjawork> I’m trying to figure out optimizing my indexes. if i have an $or query that would hit 2 different indexes, should i make a new index that indexes both fields im hitting in the $or?
[22:27:16] <enjawork> further, if i use a $hint to specify one of those indexes, is that going to ignore the other index?
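On enjawork's question: the planner can satisfy an $or by using a separate index per clause, so a compound index over both fields generally only helps the clause on its prefix field, and hinting a single index forces the whole query onto it. explain() shows the plan chosen for each branch (collection and fields hypothetical):

```javascript
db.events.createIndex({ type: 1 });
db.events.createIndex({ status: 1 });

// Each $or clause can use its own index; inspect the winning plan per branch.
db.events.find({ $or: [{ type: "click" }, { status: "open" }] }).explain()
```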