[04:16:54] <SDr> 14th database engine, here I come.
[04:17:07] <SDr> you will be assimilated. resistance is futile.
[05:09:08] <ianbytchek> greetings. is there a way to get the unique mongo instance id? is that what isSelf command does? alternatively i can use hostname, but there's a chance that it would be the same for two instances…
[05:42:24] <malczu> hi guys, in the docs http://api.mongodb.org/csharp/current/html/M_MongoDB_Driver_IMongoCollection_1_FindOneAndReplaceAsync__1.htm you write that the method FindOneAndReplaceAsync returns "The returned document"
[05:43:13] <malczu> what is "the returned document", the new version? I am getting the old one from before the update and I don't know if this is correct behavior
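In the mongo shell, the equivalent findAndModify returns the pre-update document by default; passing new: true returns the replaced version instead (the 2.x C# driver exposes the same switch as ReturnDocument.After on FindOneAndReplaceOptions). A minimal sketch with hypothetical collection and field names:

    // new: true makes the call return the document as it looks AFTER the replace
    db.things.findAndModify({
        query: { _id: someId },
        update: { name: "replacement" },
        new: true
    })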
[06:41:36] <Jonno_FTW> if copyTo is deprecated, what are we supposed to use?
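One common replacement is an aggregation with $out (2.6+), which writes the pipeline's result into another collection. A sketch with illustrative collection names:

    // copies every document of "source" into "target" (an existing "target" is replaced)
    db.source.aggregate([
        { $match: {} },
        { $out: "target" }
    ])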
[10:28:03] <nalum> Hello all, I'm trying to combine some data from one document into another creating a new field which is constructed as follows: "content.languages." + doc.language: doc.content
[10:28:22] <nalum> Mongo is complaining about the +. Is there a way to do this?
[10:32:32] <nalum> coudenysj: I didn't think that worked on field names, will look at the docs again.
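In the shell, a computed field name has to be built as a JavaScript string before it is handed to the update operator; it cannot be concatenated inline inside the update document. A sketch assuming hypothetical source and target collections:

    // build the dotted key in JS, then use it in $set
    db.source.find().forEach(function(doc) {
        var update = {};
        update["content.languages." + doc.language] = doc.content;
        db.target.update({ _id: doc._id }, { $set: update });
    });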
[10:59:14] <sabrehagen> i'm trying to work out the best way to write a query. i have server logs stored in mongodb, and each log entry has a user property. i want to build a query that will show me users that were active each week for n weeks.
[10:59:43] <sabrehagen> e.g. for n=3, i want to know only users that were active this week, last week, and the week before.
[11:05:43] <sabrehagen> so far my idea is to have something like { $and : [ { $and : [ { user : 'username' }, { _id : { $gt : from1 } }, { _id : { $lt : to1 } } ] }, { same $and object as before but with from2, and to2 } ] }
[11:06:07] <sabrehagen> is that a terrible way to do the query? how mongodb-processing-intensive is that way? should i be using the aggregation framework?
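The aggregation framework can do this in one pass instead of n stacked range clauses. A hedged sketch, assuming each log entry carries a Date field (called timestamp here) alongside user; it collects the distinct week numbers per user and keeps users seen in all 3 (note $week restarts at year boundaries, so the window should not span a new year):

    // threeWeeksAgo is computed by the application
    db.logs.aggregate([
        { $match: { timestamp: { $gte: threeWeeksAgo } } },
        { $group: { _id: "$user", weeks: { $addToSet: { $week: "$timestamp" } } } },
        { $match: { weeks: { $size: 3 } } }   // active in 3 distinct weeks
    ])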
[11:29:51] <ianbytchek> is there a way to get the unique mongo server / instance id? is that what isSelf command does? alternatively i can use hostname, but there's a chance that it would be the same for two instances…
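isSelf appears to be an internal command used for replica set member resolution. For a plain identifier, db.serverStatus() exposes both the host (hostname plus port) and the process id, which together distinguish two instances even on the same machine:

    db.serverStatus().host   // e.g. "myhost:27017" - unique per running instance
    db.serverStatus().pid    // process id of this mongod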
[15:59:36] <pamp> but I can't connect to the instance
[16:00:19] <pamp> Does anyone know anything about this?
[16:02:56] <pamp> Why can't I kill the mongod process? Why can't I access mongod? And what could have crashed the process in the beginning?
[16:03:15] <pamp> I've already looked through all the logs and I don't see any info about this
[16:22:34] <gruntruk> hi, i was hoping someone could help with an aggregation pipeline question. i've set up some contrived example data based on super dumb web traffic data here: http://pastie.org/private/vqaa6qtxicrc5agy5bula and am trying to get sites ranked by # of unique "hits". i've been tinkering with $addToSet in my group operation but can't project that data into the next pipeline step.
[16:23:00] <gruntruk> this is my first foray into mongo so apologies if i'm missing something obvious
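A sketch of that shape, assuming documents like { site: ..., visitor: ... }: $addToSet collects the distinct visitors per site, and the $size operator (2.6+) turns the set into a plain number, so nothing has to be carried as an array into later stages:

    db.traffic.aggregate([
        { $group: { _id: "$site", visitors: { $addToSet: "$visitor" } } },
        { $project: { uniqueHits: { $size: "$visitors" } } },
        { $sort: { uniqueHits: -1 } }
    ])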
[16:23:28] <mylord> how do I push into a subarray in a mongodb record? kind of like insert based on a query
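If the query can pin the parent array element, the positional $ operator lets $push target the nested array. A sketch with hypothetical field names:

    // pushes a comment into the matching post's "comments" array
    db.blogs.update(
        { _id: blogId, "posts.title": "my post" },
        { $push: { "posts.$.comments": { author: "me", text: "hello" } } }
    )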
[17:09:08] <terminal_echo> hey guys, with gridfs...if you are writing a small web app so users can upload files, should these files be uploaded to /tmp and then put with gridfs?
[17:10:21] <cheeser> might be slightly faster response times if you thread the eventual write to gridfs.
[17:10:40] <cheeser> but if you're going to do it all in the response cycle, might as well just write to gridfs directly
[17:12:37] <terminal_echo> so I'm using mongoengine
[17:12:58] <terminal_echo> I guess I'm just confused how gridfs gets its hands on the file
[18:01:57] <tejasmanohar> is there a mongodb query for field: { $not one of these strings: ['hi', 'another', 'random' ] }
[18:02:02] <tejasmanohar> (for that functionality)
[18:03:09] <tejasmanohar> doesn't seem possible to use $nin on strings. do i need to text index this field and use text search regex for this?
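$nin does work on plain string values for exact (non-regex) matching, so no text index is needed:

    // matches documents whose field is none of the listed strings
    db.coll.find({ field: { $nin: ["hi", "another", "random"] } })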
[18:10:18] <jr3> what would be a good way to take in all documents in a collection, sort an array on each document, drop 10 off the end, then push those changes back to the collection
[18:10:26] <jr3> is this a perfect problem for map reduce
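map-reduce writes its output to a separate collection rather than updating documents in place, so a plain shell loop is the more direct tool here. A sketch assuming a numeric array field called scores:

    db.coll.find().forEach(function(doc) {
        doc.scores.sort(function(a, b) { return a - b; });  // numeric ascending sort
        doc.scores = doc.scores.slice(0, -10);              // drop 10 off the end
        db.coll.update({ _id: doc._id }, { $set: { scores: doc.scores } });
    });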
[19:36:14] <diegoaguilar> In a large scale, production application
[19:36:14] <diegoaguilar> what strategy should be followed for a web API in order to "care for mongo operations completion"?
[19:36:14] <diegoaguilar> I mean, with write locks, latency, etc., what approach lets the API "assume" task completion and send the response back to the client right away?
[19:46:31] <mylord> diegoaguilar, the question is there, i think.. to restate, how do i atomically insert to "rounds", get the ObjectID, use that when inserting to "deposits"
[19:48:03] <mylord> you can answer here as well: http://stackoverflow.com/questions/33156339/how-to-atomically-find-latest-round-id-then-use-that-rid-in-an-insert-into-anot
[19:50:43] <diegoaguilar> what do u mean by atomically
[19:59:39] <mylord> i mean, if another process B inserts into "rounds" after A inserted, they then have different <latest> round ids, so A would use the old one, not the latest rid, when inserting into "deposits"
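One way to sidestep that race is to generate the ObjectId client-side, so the id used in "deposits" is known before either insert happens and no "find latest" query is needed at all. A minimal shell sketch (field names beyond "rounds"/"deposits" are illustrative):

    // the id is created locally, so no second writer can interleave between the inserts
    var roundId = new ObjectId();
    db.rounds.insert({ _id: roundId, startedAt: new Date() });
    db.deposits.insert({ roundId: roundId, amount: 100 });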
[20:15:24] <Owner_> how do i delete all mongo databases
[20:17:38] <devdvd> Owner_, something like this http://stackoverflow.com/questions/6376436/mongodb-drop-every-database The only exception you need to make is an if statement inside the for that skips when i == "admin"
[20:18:04] <devdvd> however if you only have 2 or 3 databases
[20:18:13] <Owner_> hmm i deleted the data off the hard disk
[20:18:14] <devdvd> then use cheeser's solution as it's faster :)
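The linked answer boils down to something like this, with the system databases skipped rather than breaking out of the loop:

    // drops every database except the system ones
    db.adminCommand({ listDatabases: 1 }).databases.forEach(function(d) {
        if (d.name !== "admin" && d.name !== "local") {
            db.getSiblingDB(d.name).dropDatabase();
        }
    });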
[20:46:17] <bdiu> If for some reason the secondary servers become unresponsive in a replica set, does the primary continue to function as a standalone server, or does it go into a "read-only mode" of some sort until the set is restored to its expected state?
[20:46:37] <bdiu> is there a page on the docs about that?
[20:53:19] <n0arch> can v2.6 mongo shell connect to a v3.0 query router?
[21:59:41] <pythonmaniacub> hello, i need help please. i get this error when i try to authenticate with mongodb://m101:m101@localhost:27017/m101: http://pastebin.ca/3198384
[22:00:48] <pythonmaniacub> i can access the m101 database from the mongo shell using the m101 user
[22:01:12] <pythonmaniacub> but the app doesn't work
[22:06:14] <Boomtime> what is the mongo shell command-line that you are using to test this with?
[22:07:06] <Boomtime> can you capture the log lines from the server log file containing both attempts to connect?
[22:07:27] <Boomtime> pastebin the server log covering a log in from the shell, and a login attempt from the node driver
[22:09:08] <Boomtime> (if you aren't using node, let me know what you are using)
[22:12:16] <pythonmaniacub> Boomtime: mongodb global log says http://pastebin.ca/3198405
[22:12:40] <pythonmaniacub> Boomtime: sorry, i'm totally new to mongodb
[22:18:14] <fxmulder> so I have 2 sharded replica sets going. I added the second replica set earlier this year and it migrated data from the first replica set to the second. The first replica set seemed to retain the original records it migrated, so we are running cleanupOrphaned on it to clean it out. I would expect the empty space left by the removed orphans to be reused by new entries into
[22:18:15] <fxmulder> the collection, but disk usage keeps climbing
[22:18:31] <fxmulder> any idea why that is? I'm going to run out of disk space before too long
[22:20:54] <Boomtime> @pythonmaniacub: there is no successful authentication in your server log snippet - but i can still guess at what might be happening
[22:21:22] <Boomtime> you are running 3.0.x of the server, but you're using a driver which doesn't understand SCRAM auth, it is old and needs to be updated
[22:21:35] <jr3> so this is interesting, I just truncated a lot of embedded arrays and the collection size didn't reduce at all...
[22:22:13] <jr3> modified 60k documents and removed around 200 elements from each, but collection.stats() reports no data size decrease
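Under MMAPv1, space freed inside documents stays allocated to their records (as padding), so collection.stats() won't show a decrease until the records are rewritten. The compact command is one way to do that rewrite; it blocks the database it runs against, so it is typically run per member, secondaries first:

    // defragments the collection's records and extents in place
    db.runCommand({ compact: "mycollection" })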
[22:22:50] <Boomtime> @fxmulder: free space in collections will only be used for that same database, the extents are already allocated - it will also only be used for appropriately sized documents
[22:23:04] <Boomtime> if your document size has grown, then no, it will need to allocate new extents
[22:23:15] <Boomtime> btw, all this is relevant to MMAP, not to WiredTiger
[22:23:23] <pythonmaniacub> Boomtime: how do i update the driver
[22:25:50] <fxmulder> Boomtime: we only have one database and one collection in use, the documents have a max size but are otherwise all over the board (they hold email content), but the disk usage seems to grow pretty steadily
[22:26:07] <Boomtime> bugger, what server version?
[22:27:37] <Boomtime> @fxmulder: for 1.4 of the driver, it needs to be fairly recent, 1.4.29 is what the docs say you need at least
[22:27:47] <Boomtime> sorry, that was for pythonmaniacub
[22:27:57] <Boomtime> @pythonmaniacub: for 1.4 of the driver, it needs to be fairly recent, 1.4.29 is what the docs say you need at least
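Assuming the Node.js native driver is in use, pinning at least that release would look like:

    npm install mongodb@1.4.29 --save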
[22:28:18] <Boomtime> @fxmulder: are you running mmap or wiredtiger? it sounds like mmap..
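A quick way to check from the shell on a 3.0 server:

    db.serverStatus().storageEngine   // e.g. { name: "mmapv1" } or { name: "wiredTiger" }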
[22:29:03] <Boomtime> @fxmulder: also, could you provide a db.stats() and a db.collection.stats() in a pastebin? i only want to see the stats, so the names can be removed
[22:33:34] <fxmulder> Boomtime: yeah we never switched to the wiredtiger setup
[22:34:04] <fxmulder> Boomtime: here's the stats http://www.nsab.us/public/mongodb
[23:06:19] <Boomtime> @fxmulder: your dbstats show you're running at less than 6% overhead _file size_ vs data size
[23:06:38] <fxmulder> I think the growth is in the moveChunk directory
[23:07:05] <Boomtime> oh crap.. you have a lot of migrations?