#mongodb logs for Wednesday the 7th of October, 2015

[02:08:41] <nixnerd> Hello mongo dbers
[02:09:36] <nixnerd> I don't know anything about databases beyond their conceptual reason for existence. I never really liked sql. But... I have a new project that I'm going to use mongo on but I have a question about file storage.
[02:10:57] <nixnerd> if I'm storing a BUNCH of photos in a db... should I just write the links to said files in a json object or store the actual photo in mongo db?
[02:11:11] <nixnerd> Perhaps a stupid question... sorry.
[02:11:28] <StephenLynx> i do that
[02:11:44] <StephenLynx> i store files using gridfs
[02:12:19] <nixnerd> gridfs huh?
[02:12:35] <nixnerd> Looking up now
[02:13:41] <nixnerd> StephenLynx: This looks sick
[02:13:43] <nixnerd> thanks!
[02:16:20] <nixnerd> Also... I keep hearing a lot of mongo bashing from SQL lovers. Can anyone explain from a nosql point of view when relational dbs make sense and when they don't?
[02:18:16] <StephenLynx> it's not about SQL, but about relational dbs being a good fit.
[02:18:26] <cheeser> if you have highly relational data subject to a lot of joins, an RDBMS might be a better fit.
[02:18:33] <StephenLynx> that.
[02:18:41] <cheeser> it's possible to model your data in mongo to avoid much of that but not always.
[02:18:46] <StephenLynx> also when you need relational integrity
[02:18:58] <nixnerd> I'm not even in a position to evaluate what is the best idea. I have a bunch of pictures I want to serve up to users. That's pretty much it.
[02:19:02] <StephenLynx> and MUST have that integrity validated.
[02:19:10] <nixnerd> But I don't want to commit to a certain schema
[02:19:11] <StephenLynx> yeah, mongo fits that well.
[02:19:22] <nixnerd> That's what I thought.
[02:19:29] <StephenLynx> with gridfs you can easily save the file along with arbitrary metadata.
[02:19:38] <nixnerd> I just love json and I love that I can just know that and be fine pretty much.
[02:19:51] <nixnerd> StephenLynx: So like custom exif data?
[02:19:57] <nixnerd> sort of
[02:19:57] <StephenLynx> no, json.
[02:20:21] <StephenLynx> you just put w/e you want on the metadata field of the file.
[02:20:44] <nixnerd> StephenLynx: I said that wrong or I'm not explaining what I'm thinking properly. But if I want to timestamp each photo and tag it with whatever... I can just add a json object to do the job?
[02:20:53] <StephenLynx> yes
[02:20:59] <nixnerd> sounds sexy
[02:21:01] <nixnerd> I love it
[02:21:20] <StephenLynx> http://pastebin.com/DY6McuRp
[02:21:24] <StephenLynx> example on my project
[02:23:07] <nixnerd> that's pretty sweet.
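
The GridFS approach discussed above can be sketched with the Node.js driver. This is only an illustration, not the code from StephenLynx's pastebin: the database name, the 'photos' bucket, and the metadata fields below are invented, and the GridFSBucket API assumes a newer driver than was current in 2015.

    // Hedged sketch: store a photo in GridFS together with arbitrary metadata.
    const { MongoClient, GridFSBucket } = require('mongodb');
    const fs = require('fs');

    async function savePhoto(path) {
      const client = await MongoClient.connect('mongodb://localhost:27017');
      const bucket = new GridFSBucket(client.db('app'), { bucketName: 'photos' });

      await new Promise((resolve, reject) => {
        fs.createReadStream(path)
          .pipe(bucket.openUploadStream(path, {
            metadata: { uploadedAt: new Date(), tags: ['cat', 'outdoor'] }
          }))
          .on('finish', resolve)   // file chunks and metadata are now in GridFS
          .on('error', reject);
      });
      await client.close();
    }

The metadata ends up on the documents in the photos.files collection, so it can be queried like any other field, and openDownloadStream streams the matching file back out.
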
[03:45:09] <Jonno_FTW> when using find_one_and_update, how do I $set a value based on another field?
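
This question goes unanswered in the log. At the time, a single update could not reference another field's value; since MongoDB 4.2 the update argument may be an aggregation pipeline, which allows exactly that. A minimal shell sketch (findOneAndUpdate is the shell counterpart of PyMongo's find_one_and_update; 'orders', 'price', and 'total' are made-up names):

    // Requires MongoDB 4.2+: a pipeline-style update can $set one field
    // from another field of the same document.
    db.orders.findOneAndUpdate(
      { status: "open" },                                       // placeholder filter
      [ { $set: { total: { $multiply: [ "$price", 1.2 ] } } } ],
      { returnNewDocument: true }
    )

On older servers the usual workaround was to read the document first and issue a second update with the computed value.
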
[05:12:11] <thinrhino> Hello Folks, I have a question on the "eventually consistent" state of the DB.
[05:12:52] <thinrhino> As I understand it, when I update a record, it takes time for the other copies of the record to be updated, so they become 'eventually consistent'
[05:13:42] <thinrhino> But when I create a new record and try to read it, will I get the record back, or might I miss it because the read went to a different server?
[06:16:25] <thinrhino> hello, anybody around who can help me a little in understanding 'eventual consistency'?
[06:29:06] <Mattias> thinrhino: there's a wikipedia page for that
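
For what it's worth, MongoDB reads go to the primary by default, so a freshly written document is visible to an immediately following read on the same connection; replication lag only shows up once reads are routed to secondaries. A small shell sketch with an invented collection name:

    // Default (primary) reads: the insert is visible right away.
    db.events.insert({ name: "signup", at: new Date() })
    db.events.find({ name: "signup" })                 // served by the primary

    // Opting into secondary reads is where 'eventual consistency' bites:
    db.getMongo().setReadPref("secondaryPreferred")
    db.events.find({ name: "signup" })                 // may lag behind the primary
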
[10:40:47] <bakhtiyor> https://groups.google.com/forum/#!topic/mongodb-user/LX0WDNpplqg
[10:43:38] <bakhtiyor> please, everyone, add your suggestions to that post
[13:28:50] <spydon> Is ObjectId unique for each collection or for all collections in a DB?
[13:29:13] <spydon> for each document in a collection*
[13:29:33] <cheeser> ObjectId is just a value... you can put in duplicates all you'd like.
[13:29:56] <spydon> Yeah, but if you leave it out and it's automagically created
[13:30:08] <cheeser> constructing a new ObjectId is reasonably guaranteed to create unique values
[13:30:21] <cheeser> no. _id is automatically generated.
[13:30:31] <spydon> Sorry, that is what I meant
[13:30:49] <spydon> And I guess it uses ObjectId normally?
[13:32:19] <spydon> But _id just creates an ObjectId, or does it do more magic things?
[13:36:35] <cheeser> just creates a new ObjectID
[16:03:25] <deathanchor> The only magic is no two are alike, like snowflakes.
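
To make the snowflake remark concrete: an ObjectId is a 12-byte value built from a timestamp plus machine, process, and counter components, which is why newly constructed ones are effectively unique, while nothing stops you from supplying your own _id, even the same value in different collections. A shell sketch with made-up collection names:

    var id = new ObjectId()
    id.getTimestamp()            // ISODate of when the id was generated

    // _id only has to be unique within a collection; reusing a value
    // across collections is fine.
    db.things.insert({ _id: id, note: "explicit _id" })
    db.other.insert({ _id: id, note: "same value, different collection" })
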
[16:22:57] <ksmtk> hey
[16:23:08] <ksmtk> a quick noob question
[16:23:55] <ksmtk> I need a mongo query that will insert a new item into an array field of a document if it doesn't exist, or update the existing one
[16:24:04] <ksmtk> is it possible?
[16:24:39] <ksmtk> I'm using $addToSet right now but it adds a new item into the array every time (array items are Object type)
[16:29:29] <StephenLynx> the object is somehow different.
[16:29:44] <StephenLynx> try comparing them after turning them into strings.
[16:30:33] <cheeser> field order matters since comparison is done by comparing byte streams
[16:30:34] <StephenLynx> I don't think you will be able to use an upsert in that case.
[16:31:30] <StephenLynx> actually, you might, if you manage to get the order of the fields right.
[16:42:30] <ksmtk> http://stackoverflow.com/questions/32997801/mongodb-query-to-insert-new-document-into-array-field-or-update-existing
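
$addToSet only deduplicates when the embedded object matches byte-for-byte (including field order), so the usual pattern for "update the element if it exists, otherwise add it" is two statements keyed on an identifying field of the element. A hedged shell sketch; 'carts', 'items', 'sku', 'qty', and cartId are all placeholders:

    // 1) Try to update an existing array element matched by its key field.
    var res = db.carts.update(
      { _id: cartId, "items.sku": "A-1" },       // cartId: the document's _id
      { $set: { "items.$.qty": 3 } }
    )

    // 2) Nothing matched means the element isn't there yet: push it.
    if (res.nMatched === 0) {
      db.carts.update(
        { _id: cartId },
        { $push: { items: { sku: "A-1", qty: 3 } } }
      )
    }
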
[16:46:25] <deathanchor> mongo javascript, is there a way to connect to a replset and set read secondaryPreferred?
[16:48:27] <topwobble> deathanchor yes. use rs.slaveOk() once connected to secondary
[16:48:58] <topwobble> You can also use "Model.find(conditions).read('sp')"
[16:49:30] <deathanchor> topwobble: I meant where I provide the mySet/host1:27018,host2:... URL to the connect call in the javascript
[16:50:31] <topwobble> read preferences are typically set on a per-client basis
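
For the original question: the read preference can be set on the shell connection itself rather than via rs.slaveOk(), and drivers accept it as a connection-string option. A sketch reusing the mySet/host1:27018,host2:... form from above (hosts, port for host2, and db name are placeholders):

    // In mongo javascript, connect to the set and set the read preference:
    var conn = new Mongo("mySet/host1:27018,host2:27018")
    conn.setReadPref("secondaryPreferred")
    var db = conn.getDB("mydb")
    db.stuff.find()        // now eligible to be served by a secondary

    // Drivers also accept it in the URI, e.g.
    //   mongodb://host1:27018,host2:27018/mydb?replicaSet=mySet&readPreference=secondaryPreferred
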
[17:43:57] <rocky1138> when I run mongodump -d xyz some of my BSON files are 0 bytes. How can I force a full export of my mongodb database?
[18:11:33] <cheeser> that *does* do a full dump
[18:55:37] <shlant> anyone know if using secondaryPreferred vs secondary would result in "not master" errors?
[18:55:51] <shlant> or is it something to do with slave_ok?
[18:57:21] <cheeser> only if you're trying to write to that host
[19:00:44] <shlant> cheeser: do you know how to prevent writing to secondary? I have secondaryPreferred, but that's for reads, correct?
[19:11:45] <cheeser> secondary writes aren't possible under any circumstances.
[19:30:59] <shlant> cheeser: any idea why I am getting "not master" errors then? might it be a problem with the node.js mongodb connector?
[19:31:33] <cheeser> I haven't seen any code, and I don't do node.js, so it probably wouldn't really help even if I had
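
One common cause of "not master" with the 2015-era Node.js driver is connecting to a single member's address without naming the replica set, so the driver treats a secondary as a standalone and sends writes to it. That is an assumption about the setup here, not a diagnosis. A hedged sketch in the driver-2.x callback style, with hosts, set name, and database name invented:

    var MongoClient = require('mongodb').MongoClient;

    // Listing the members and the replicaSet name lets the driver discover
    // the primary; readPreference only influences where reads go.
    var url = 'mongodb://host1:27017,host2:27017,host3:27017/mydb' +
              '?replicaSet=rs0&readPreference=secondaryPreferred';

    MongoClient.connect(url, function (err, db) {
      if (err) throw err;
      db.collection('things').insertOne({ ok: true }, function (err2) {
        // writes are routed to the primary regardless of readPreference
        db.close();
      });
    });
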
[20:20:22] <stickperson> how can i get back the most recent document by timestamp and only see the value of timestamp? so far i have “db.articles.find().sort({'timestamp': -1}).limit(1)”
[20:25:18] <cheeser> stickperson: use a project on your find()
[20:25:23] <cheeser> projection
[20:38:13] <stickperson> cheeser: “db.articles.find({"timestamp.$": 1}).sort({'timestamp': -1}).limit(1)” ?
[20:39:01] <stickperson> cheeser: i mean “db.articles.find({}, {"timestamp.$": 1}).sort({'timestamp': -1}).limit(1)”, but that doesn’t work
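
The positional "timestamp.$" projection only applies to an array element matched in the query, which is why that attempt fails. The plain inclusion projection cheeser is pointing at would look like this:

    // Return only the timestamp of the newest article.
    db.articles.find({}, { timestamp: 1, _id: 0 })
               .sort({ timestamp: -1 })
               .limit(1)
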
[20:50:46] <devdvd> hi all, so I have a little issue. I'm running mongo 2.4.5 (yes, I know, I'm working on upgrading to 3.0.x). We had a problem the other day and one of my colleagues decided it would be a wonderful idea to fix it by removing all the other members of the replica set manually. Now, in addition to leaving me with a standalone mongo instance, it also segfaults a few seconds after I start it up with the --replSet flag. My question is this. If
[20:50:47] <devdvd> I wipe the records from local.system.replset on the master instance and the slave instances, can I just configure the replica set again from the master just like I did when I first set it up? What about the data that already exists on the slave, should I wipe it? Is there any danger to the data on the master?
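
No one answers in the log. What devdvd describes is roughly the shape of the documented "reset replica-set metadata" procedure, but it is invasive, so the following is only a hedged sketch of the general steps, not advice for this particular 2.4.5 crash: start the member without --replSet, clear the stale config from the local database, restart with --replSet, re-initiate, and let secondaries whose data you don't trust resync from scratch.

    // On a member restarted WITHOUT --replSet:
    use local
    db.system.replset.remove({})       // drop the stale replica-set config

    // Restart mongod WITH --replSet mySet, then on the intended primary:
    rs.initiate()
    rs.add("host2:27017")              // placeholder hostname
    // Secondaries with doubtful data are usually emptied first so they
    // perform a fresh initial sync from the primary.
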
[23:38:17] <sewardrobert> Hello, I have a development Mongo instance using wiredTiger that won't start. How can I blow away or re-initialize the data directory so the Mongo instance will start again?
[23:41:38] <joannac> shut down the mongod, move all the files aside, restart?
[23:41:53] <joannac> assuming it is a problem in the data directory
[23:48:30] <cheeser> are you trying to start it with service scripts? or manually?
[23:52:48] <sewardrobert> Yep. I moved the existing directory out of the way and created a new directory with the correct permissions. Now mongo is happy again. Thanks.