PMXBOT Log file Viewer


#mongodb logs for Thursday the 17th of October, 2013

[00:10:36] <zmansiv> if i have "project" documents and each project can have multiple screenshots, would it be better to a) store them as an array of binary data in the project document, b) store them as an array of base64d strings in the project document, or c) use gridfs?
[00:10:56] <zmansiv> i really doubt they will exceed 16mb in size
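[Editor's note: one concrete cost of option (b) is base64 inflation, which this Python sketch measures with a made-up ~300 KB payload standing in for real image bytes.]

```python
import base64

# Stand-in for a ~300 KB screenshot (real data would be PNG/JPEG bytes).
screenshot = b"\x00" * 300_000

raw_size = len(screenshot)                    # option (a): BinData in the doc
b64_size = len(base64.b64encode(screenshot))  # option (b): base64 string

print(raw_size, b64_size)  # 300000 400000 -- base64 costs ~33% extra
```

If the screenshots really stay well under the 16 MB document limit, raw binary in the document avoids that overhead; GridFS (option c) mainly earns its keep for files that exceed the limit or need streaming.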
[02:30:14] <geoffeg> If I have a huge DB, say 500GB, and I run a db.coll.remove({'some-non-indexes-field' : 1}) in mongodb 2.2, will that grab the db write lock and prevent other writes?
[02:38:10] <cheeser> geoffeg: basically
[02:43:21] <TkTech> geoffeg: For our huge (multi terabyte) collections, we gave up using MongoDB for anything other than raw storage.
[02:43:45] <TkTech> geoffeg: We crawl products, including their attributes and other metadata and write them into a sharded MongoDB database
[02:44:16] <TkTech> geoffeg: Then periodically read (from the replicas only) into redshift and do our work there.
[02:57:49] <joannac> cheeser: Surely it would yield periodically?
[02:58:43] <cheeser> joannac: i dunno. but I can ask tomorrow. :D
[02:59:04] <cheeser> anecdotal evidence suggests otherwise, though.
[03:46:11] <joannac> cheeser: My testing says otherwise.
[03:47:18] <joannac> Wait... not testing on 2.2 branch
[03:48:43] <joannac> Okay, well my testing says otherwise on 2.4
[05:05:39] <Steve009> is $eq with the agg framework supported for the ruby driver?
[05:05:56] <Steve009> when i run: { "$match" => {created_year: {"$eq" => 2011}}}, it fails
[05:06:06] <Steve009> but if i run { "$match" => {created_year: {"$gt" => 2011}}}, it works fine
[05:06:22] <Steve009> i get: Database command 'aggregate' failed: (errmsg: 'exception: invalid operator: $eq'; code: '10068'; ok: '0.0'). (Mongo::OperationFailure) with the $eq
[05:14:38] <joannac> The docs say $eq takes 2 values
[05:34:55] <joannac> Actually, hrm.
[05:35:16] <joannac> File a ruby bug?
[05:38:44] <joannac> I would be a bit dubious that such a bug would've gone unnoticed though. Steve009 -- what version of the ruby driver?
[05:39:10] <Steve009> 1.9.2
[05:39:39] <Steve009> i found a few jira posts talking about the need for $eq
[05:39:45] <Steve009> but they are several years old
[05:39:54] <Steve009> thinking before eq existed
[05:41:05] <Steve009> $gt and $eq should work exactly the same according to the docs
[05:41:10] <Steve009> but different comparison
[05:41:21] <Steve009> my thinking is that its a ruby driver issue
[05:41:51] <Steve009> will have to test it with robomongo later to see if it works with straight mongo commands
[05:55:42] <joannac> BTW it doesn't work in shell either. So I would hold off the bug report
[06:00:42] <joannac> db.foo.aggregate({$match: {a : 1}}) will work ?
[06:02:11] <deepender> my collection contain folders only
[06:02:31] <deepender> no, initially everything was designed thinking of one user
[06:02:58] <deepender> now i have to add another user also who will alos have their different folder
[06:03:03] <deepender> also*
[06:03:08] <deepender> how can i do that ?
[06:03:14] <deepender> do i use referencing
[06:03:16] <deepender> ?
[06:03:49] <deepender> sorry different users.
[06:11:07] <Steve009> @joannac any luck?
[06:11:34] <joannac> Steve009: db.foo.aggregate({$match: {a : 1}}) works
[06:11:37] <joannac> try that
[06:11:47] <Steve009> what is "a"
[06:11:52] <Steve009> OH
[06:12:03] <joannac> a is my key, 1 is my value ;)
[06:12:34] <Steve009> ya seemed to work
[06:12:59] <Steve009> so likely a legacy thing kicking around
[06:13:08] <Steve009> mmmm can anyone think of the reason for $eq to exist?
[06:15:27] <Steve009> probably something to update in docs
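[Editor's note: what the thread converged on, sketched as Python dicts with Steve009's field name. On 2.4-era servers `$eq` is not a valid query operator (it was only added later, in MongoDB 3.0), so equality inside `$match` is written as a plain field/value pair, just as in find().]

```python
# Fails on 2.4-era servers with "invalid operator: $eq" -- the $eq
# query operator did not exist yet (it arrived in MongoDB 3.0).
failing = [{"$match": {"created_year": {"$eq": 2011}}}]

# Works: equality in $match is a plain field/value pair.
working = [{"$match": {"created_year": 2011}}]
```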
[08:24:13] <Tiller> hey
[08:25:59] <Tiller> Is it possible from {id: 1, a: "Last", b: ["Old", "Old"]} to make an update request to have {id: 1, a: "Last", b: ["Old", "Old", "Last"]} ? =/
[08:29:04] <kali> no
[08:29:26] <Tiller> You can't use other fields value to make an update?
[08:30:10] <kali> that's right
[08:30:26] <Tiller> huum :/
[08:58:44] <joannac> Tiller: http://pastebin.com/8Fcd4uCn
[09:00:00] <Tiller> Thanks joannac, I saw something like that before :) I'll have to look at the java driver doc :)
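[Editor's note: joannac's pastebin has expired, but the usual 2.x-era workaround is a client-side read-modify-write, since an update document cannot reference another field's value. A Python sketch on a plain dict standing in for the fetched document:]

```python
doc = {"id": 1, "a": "Last", "b": ["Old", "Old"]}

# 1. Read the document; 2. build a $push using the value of `a`
#    client-side; 3. send the update back to the server.
update = {"$push": {"b": doc["a"]}}

# Applying the $push locally shows the intended end state:
doc["b"].append(update["$push"]["b"])
assert doc == {"id": 1, "a": "Last", "b": ["Old", "Old", "Last"]}
```

The race window between the read and the write is the price of this approach; a query predicate on the expected value of `a` can guard against concurrent changes.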
[09:46:09] <DragonBe> what's the preferred setup for running a single mongodb instance? disk, RAM and CPU requirements are our major focus at this time.
[11:53:51] <fel> hello
[11:54:16] <fel> I think there is a problem with education.mongodb.com
[11:55:04] <HashMap> works for me..
[11:55:19] <joannac> What do you think is wrong with it?
[11:55:19] <fel> i follow M101J
[11:55:49] <fel> after a week the course is closed now and M101JS is open
[11:57:05] <HashMap> oh I see.. there are duplicate courses.. https://education.mongodb.com/courses
[11:57:27] <fel> yes M101JS is there twice
[11:57:47] <HashMap> both with different starting dates though
[11:57:49] <fel> one that starts on 21st a new one that started on 14th.
[11:57:52] <joannac> It's in there twice because it's being run twice. Check the start dates
[11:57:57] <HashMap> same with node
[11:58:15] <joannac> I'm guessing someone is updating the new tracks?
[11:58:22] <fel> and M101J is no more accessible
[11:59:52] <fel> I subscribed to both. I am now late on M101JS ;)
[12:00:45] <joannac> fel: when did your M101J course start?
[12:01:24] <fel> It started on October 7th
[12:02:02] <joannac> Oh! I see the problem
[12:07:55] <fel> haha!! both the first M101J (starting on 21st Oct.) and the second M101JS (starting on 14th Oct) have the dead python picture instead of the coffee and nodejs logos
[12:08:09] <fel> they are probably fake courses, damned!
[12:11:34] <joannac> Working on it
[12:13:21] <fel> @joannac : ok, thank you very much.
[12:16:37] <joannac> Okay, I can't fix it, but I pinged some people. Will probably be fixed once the US wakes up.
[12:20:35] <tonni> how do I query for the last element of an array? Something like the first element from the docs: db.inventory.find( { 'memos.0.by': 'shipping' } )
[12:24:34] <tonni> or is it not possible with the dot notation
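[Editor's note: dot notation only accepts non-negative indexes, so there is no direct `memos.-1.by`. If the client knows (or the document stores) the array length, it can build the positional path itself -- a hypothetical helper:]

```python
def last_elem_path(array_field, length, sub_field):
    """Dot notation has no negative indexes; address the last element
    by its absolute position, which requires knowing the array length."""
    return f"{array_field}.{length - 1}.{sub_field}"

query = {last_elem_path("memos", 3, "by"): "shipping"}
assert query == {"memos.2.by": "shipping"}
```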
[14:41:58] <quickdry21> I'm having some weird issues with primaries refusing/closing connections... When I do a rs.status() from a secondary, the primary is shown as healthy, with the last heartbeat received w/in the last minute... I'm getting the following error when trying to connect via mongo shell:
[14:41:59] <quickdry21> Error: DBClientBase::findN: transport error: giordano.gmlapi.com:10002 ns: admin.$cmd query: { whatsmyuri: 1 } at src/mongo/shell/mongo.js:147
[15:38:17] <saml> hey
[15:38:22] <saml> accidentally dropped whole production db
[15:38:31] <saml> is there a way to restore? ops say no backup is there
[15:38:34] <saml> no replicaset
[15:38:40] <n008> i have a collection with a list in it
[15:38:44] <saml> single db.. accidentally dropped whole stuff
[15:38:49] <n008> how do I query if the second item in the list is null ?
[15:39:03] <n008> {'attr.1': null} not working
[15:51:25] <n008> any help anyone?
[15:53:46] <n008> why can't I query for collections using 'attr.1' ?
[15:56:28] <stongo1> I have three servers, each with host names in /etc/hosts pointing to each other. I'm trying to create a replica set with the three machines, but keep getting an error on rs.initiate saying all members are not up
[15:56:54] <stongo1> but if I telnet to each hostname and port, I can verify each is accessible
[15:58:00] <stongo1> do I need to actually have FQDNs working for the replica set servers?
[15:58:43] <saml> how can I recover collection foobar from foobar.0 foobar.ns ?
[15:58:48] <joe_p> stongo1: they all have to be able to reach each other. try doing a telnet to the mongo port from each machine to make sure they are all reachable from one another
[15:59:31] <stongo1> joe_p, I did confirm with telnet all are reachable
[15:59:58] <stongo1> I have authentication enabled on each server, does that make any difference?
[16:00:10] <stongo1> mongo auth
[16:00:22] <stongo1> by db
[16:05:05] <stongo1> here is the config I'm passing to rs.initiate() http://pastebin.com/chqdcsus
[16:06:18] <stongo1> wasn't able to run rs.initiate() without passing a config obj, either
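[Editor's note: stongo1's pastebin has since expired; for reference, a minimal 2.4-era replica set config object has this shape (hostnames here are hypothetical, and each one must resolve the same way from every member, not just from the shell running rs.initiate()):]

```python
# Shape of the object passed to rs.initiate(), shown as a Python dict.
config = {
    "_id": "rs0",  # must match the replSet name each mongod was started with
    "members": [
        {"_id": 0, "host": "mongo1.example.com:27017"},
        {"_id": 1, "host": "mongo2.example.com:27017"},
        {"_id": 2, "host": "mongo3.example.com:27017"},
    ],
}
```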
[16:07:48] <TkTech> saml: By now, likely not. Why were you using it in production with no redundancy or backups? :|
[16:08:55] <saml> TkTech, yah mistake
[16:09:18] <saml> TkTech, why "by now" ?
[16:12:28] <TkTech> There are a few methods to recover a file (assuming it hasn't been 0'd) depending on the filesystem and OS.
[16:13:31] <stongo1> hrm, looks like I had to turn auth off on the mongo shell I was trying to execute rs.initiate() from
[16:14:14] <saml> i started new mongod using the test.0, test.ns, journal/ but test collection in the test db is gone :(
[16:15:21] <stongo1> my admin user has clusterAdmin privs ... that's weird rs.initiate() wouldn't work
[16:32:23] <stongo1> ugh, replica sets were much easier to set up in the mongodb course when each instance was on the same machine :P
[16:35:30] <stongo1> seems like auth is creating the problem ... disabling auth allows me to add the other machines
[16:35:41] <stongo1> disabling auth on all three machines*
[16:36:30] <stongo1> so how does one use auth with replica sets? should the rs.add() string be "username:password@name:port/dbname" ?
[17:12:10] <stongo1> I added a keyFile to mongodb.conf, but now don't have any permissions to do anything when I re-enter mongo shell
[17:12:20] <stongo1> does auth have to be enabled at the same time?
[17:12:42] <stongo1> I wish I could find some clearer instructions/resources on this
[17:13:02] <stongo1> the example of setting up a replica set on mongodb doesn't use any auth, so pretty much useless in the real world
[17:25:38] <Nodex> not true, unless your app puts all the auth on the DB
[17:25:51] <Nodex> and let's be honest, a database is not an auth server
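[Editor's note: to answer stongo1's earlier question directly: with auth enabled, replica set members authenticate to each other with a shared keyFile, and rs.add() still takes a plain "host:port" string with no credentials embedded. A minimal 2.4-era mongodb.conf sketch, with made-up paths:]

```ini
# Same settings -- and an identical keyfile, mode 600 -- on every member.
replSet = rs0
keyFile = /etc/mongodb/keyfile
# In 2.4, setting keyFile implies auth for client connections as well,
# so clients still log in with their username/password.
```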
[17:32:34] <saml> http://www.mongodb.com/presentations/mongodbs-storage-engine-bit-bit
[17:33:02] <saml> Nodex, give me a script that parses mycollection.* files and gives me json documents so that i can import them again
[17:33:13] <saml> i did db.collection.drop() accidentally
[17:39:24] <saml> hrm i don't think it's possible
[17:39:27] <saml> crap
[17:41:58] <cheeser> what files are you trying to parse?
[18:35:49] <saml> how can I replay journal?
[18:35:52] <saml> to reconstruct database
[18:35:56] <saml> cause i dropped it
[18:36:07] <saml> but journal is still there. i can replay, right?
[19:01:03] <redsand> is there a format that mongoimport or mongorestore needs for writing bson objects directly to a file, and then importing them in?
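[Editor's note: the two tools want different formats. mongorestore reads the BSON dump files produced by mongodump, while mongoimport reads newline-delimited JSON (or CSV/TSV): one document per line, no enclosing array. Writing BSON by hand is rarely worth it; generating the JSON form in Python (file and collection names are made up):]

```python
import json

docs = [{"_id": 1, "name": "a"}, {"_id": 2, "name": "b"}]

# mongoimport's default input: one JSON document per line.
ndjson = "\n".join(json.dumps(d) for d in docs)
print(ndjson)
# then e.g.: mongoimport -d test -c things things.json
```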
[19:40:12] <flowr> i got the first time a file saved in mongo ... can i output the content anyway?
[19:48:45] <tripflex> thats a vague question flowr
[19:48:51] <tripflex> did you read the online documents
[20:27:02] <tpayne> can i get the current numerical position of a result set?
[20:27:14] <tpayne> cursor
[20:27:34] <tpayne> i'm using .skip and .limit, but i need to persist the next location
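[Editor's note: the server does not expose a cursor's absolute position, so the client has to track it: the next offset is the skip it sent plus the number of documents it consumed. A trivial sketch (helper name is made up):]

```python
def next_offset(skip, consumed):
    """Next absolute position in the result set: the skip that was
    sent plus the number of documents read so far."""
    return skip + consumed

# After find().skip(100).limit(25) returns a full page of 25 docs:
assert next_offset(100, 25) == 125
```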
[20:50:31] <quickdry21> Is there any way to convert a replica set shard to a stand alone shard?
[20:50:43] <quickdry21> Or disable replication?
[21:03:17] <duncancmt> Hi! After a reboot, I'm getting this error message on my sharded cluster, and I can't access any of my data. "error creating initial database config information :: caused by :: can't find a shard to put new db on" the code is 10185.
[21:03:22] <duncancmt> How do I get my data back?
[21:03:41] <astropirate> you don't
[21:03:49] <duncancmt> what?
[21:04:01] <duncancmt> my data's just gone?!?
[21:04:24] <astropirate> forever and ever
[21:04:28] <astropirate> hope you made backups
[21:04:51] <astropirate> duncancmt, haha just kidding
[21:04:56] <astropirate> i don't know
[21:04:56] <duncancmt> why?! How the hell is that acceptable behavior for a database?!?! It wasn't even an unclean shutdown
[21:04:59] <duncancmt> oh...
[21:04:59] <astropirate> i'm a mongo nub
[21:05:04] <duncancmt> heart attack lessening
[21:05:07] <astropirate> hahahah
[21:05:28] <duncancmt> I have backups, but they'll take a good 6 hours to restore
[21:06:03] <astropirate> wow
[21:14:20] <joannac> duncancmt: it was working okay before the shutdown?
[21:14:28] <joannac> astropirate: that wasn't nice
[21:14:56] <duncancmt> Yeah, it was working just fine before the shutdown
[21:15:06] <joannac> duncancmt: checked all the servers are back up?
[21:15:46] <duncancmt> Yep! All back up!
[21:16:32] <joannac> i gotta run, but pastebin your mongos log
[21:16:46] <duncancmt> ok... it's awful long I'll post as much as possible
[21:16:59] <joannac> just the part post-shutdown
[21:21:26] <duncancmt> Here's the pastebin http://pastebin.com/8xjEbCe8
[21:24:31] <joe_p> duncancmt: are your config servers running - the error log implies they are not - ERROR: config servers not in sync! no config servers reachable
[21:25:21] <joe_p> sorry - that was the top of the log - my bad
[21:27:16] <duncancmt> yeah, the config servers took a bit longer to come up than I expected and I jumped the gun on starting the mongos
[22:01:01] <joannac> Open up a mongo to a mongos and see what's in your config db?
[22:04:20] <defaultro> hey guys, how can we mimic MySQL's limit offset?
[22:07:02] <tripflex> http://www.querymongo.com/
[22:07:17] <tripflex> defaultro: ^
[22:07:37] <defaultro> cool
[22:08:46] <tripflex> yeah helped me out a few times for sure
[22:09:01] <defaultro> very nice, db.myTable.find().limit(100).skip(50);
[22:14:17] <tripflex> haha yeah guess i could have told you that
[22:14:25] <tripflex> but so much cooler with the site :P
[22:14:28] <tripflex> and you will use it again
[22:23:45] <defaultro> yup :)
[22:23:53] <defaultro> thanks a lot
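[Editor's note: for the record, MySQL's `LIMIT 50, 100` means offset 50, row count 100, which maps onto skip/limit; MongoDB applies skip before limit regardless of how the two cursor methods are chained. A small mapping helper (name is made up):]

```python
def mysql_limit_to_mongo(offset, count):
    """MySQL `LIMIT offset, count` -> MongoDB cursor modifiers.
    skip is applied before limit no matter the chaining order."""
    return {"skip": offset, "limit": count}

assert mysql_limit_to_mongo(50, 100) == {"skip": 50, "limit": 100}
```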
[22:29:42] <ProLoser> hallo
[22:30:10] <ProLoser> is it possible to do this?: { $set: { 'commands["key"]': 'value' } }
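[Editor's note: no -- bracket syntax is not valid in update paths; nested fields use dot notation. Building the path dynamically, as a Python sketch (helper name is made up):]

```python
def set_subfield(field, key, value):
    # Update paths use dot notation ("commands.key"),
    # not bracket syntax ('commands["key"]').
    return {"$set": {f"{field}.{key}": value}}

assert set_subfield("commands", "key", "value") == \
       {"$set": {"commands.key": "value"}}
```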
[22:56:17] <disorder> I'm using mongodb on the server to store files
[22:56:17] <disorder> now the server sends the client a file
[22:56:23] <disorder> which happens to be a text file
[22:56:23] <disorder> and I don't know how to read it
[22:56:37] <disorder> it returns an array of chars or kind of
[22:56:50] <disorder> using javascript client and server
[23:18:18] <disorder> solved
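[Editor's note: disorder marked this solved; for completeness, the "array of chars" is just the file's raw bytes coming back from the driver, and decoding them yields the text. The same idea in Python (in Node it would be buffer.toString('utf8')):]

```python
# What the driver hands back is raw bytes; decode to get the text.
stored = bytes([104, 101, 108, 108, 111])  # looks like "an array of chars"
text = stored.decode("utf-8")
assert text == "hello"
```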