PMXBOT Log file Viewer


#mongodb logs for Monday the 5th of January, 2015

[00:05:09] <Patteh> whats the best way to share a model definition between files?
[00:07:48] <Patteh> between modules*
[00:08:02] <Patteh> i define my models and schemas in one file
[00:08:16] <Patteh> and try to start a restful server api in another
[00:08:28] <Patteh> but the server file cannot see the model definition
[00:08:45] <Patteh> even though i have required it
[00:13:04] <Boomtime> hi Patteh, are you in the right channel? if you asking about using mongodb with a specific library you should mention what the library is, somebody may be able to help you
[00:16:47] <Patteh> mongoose is what i've used to model
[00:17:08] <Patteh> but it is perhaps a more general question about sharing model definitions between modules
[00:18:56] <Patteh> as you cannot define it twice, does anyone have a method for sharing a single definition?
[00:19:02] <Patteh> i cannot make module.exports work
[00:19:26] <Boomtime> ok, your question is about Node.js
[00:22:52] <Boomtime> Patteh, you may have better luck asking in #Node.js, although someone here might know, your question is more likely to be answered there
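For anyone hitting the same problem: Node caches a module the first time it is required, so exporting the compiled model from one file hands every other file the same object. A minimal sketch of the name-based cache mongoose itself uses — plain objects here, no mongoose required, and all names are hypothetical:

```javascript
// Sketch of mongoose's model cache: compiling a model registers it by
// name, and any later call with the same name returns the same object.
const registry = new Map();

function model(name, schema) {
  if (registry.has(name)) return registry.get(name); // already compiled
  const compiled = { name, schema };                 // stand-in for a real model
  registry.set(name, compiled);
  return compiled;
}

// models.js would do:  module.exports = model('User', { name: String });
const User = model('User', { name: String });
// server.js would do:  const User = require('./models');
const sameUser = model('User');

console.log(User === sameUser); // true: one shared definition
```

With real mongoose the same effect comes from `module.exports = mongoose.model('User', schema)` in one file and `require`-ing that file everywhere else.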
[04:36:16] <capleton> This is probably a noob question... but how do i get the property of an element in an array? For example, if i want to return the "status" field of a subdocument... how do i get that?
[04:36:45] <capleton> I can narrow it down to that particular document, but i don't know how to get the value of one of the fields
[04:39:31] <cheeser> "doc.field"
[04:41:54] <capleton> cheeser: is that possible from the mongo prompt?
[04:42:07] <dimon222_> you mean mongo shell?
[04:42:09] <capleton> yeah
[04:42:10] <cheeser> sure
[04:42:12] <dimon222_> guess so
[04:42:27] <cheeser> i'm off to bed. good luck.
[04:43:05] <capleton> i'm doing something wrong then... i have db.database.find({...}).subdocuments.field;
[04:43:08] <capleton> nn cheeser
[04:44:30] <joannac> wrong
[04:44:38] <joannac> on many levels
[04:44:59] <joannac> db.collection.find({query}, {"subdocument.field":1})
[04:45:18] <capleton> i have that part
[04:45:42] <capleton> but i had to do "subdocument.$":1
[04:45:46] <joannac> http://docs.mongodb.org/manual/tutorial/project-fields-from-query-results/
[04:46:04] <joannac> sure
[04:46:46] <capleton> ha, i was looking at that man page earlier
[04:46:57] <capleton> i must be doing something wrong
[04:47:43] <capleton> when i try {"subdocument.field": 1} i get all of those values, as though the filter didn't work....
[04:47:48] <joannac> you should possibly pastebin your query and a sample document, then
[04:47:52] <capleton> *for all of the subdocuments
[04:47:55] <capleton> ok 1 sec
[04:54:52] <capleton> joannac: http://pastebin.com/ypxC5Saq
[04:57:04] <capleton> In case you're wondering why, i'm trying to make sure that the element selected is not active before another mongodb operation writes to it
[04:57:20] <joannac> { "_id" : 100, "dates" : [ { "name" : "New Years", "parent_id" : "", "active" : false, "date" : ISODate("2015-12-30T13:00:00Z"), "createdBy" : "system", "dateEnabled" : 0, "dateModified" : 0 } ] }
[04:57:27] <joannac> that's what I get, which looks right to me
[04:57:56] <capleton> right.. but how do i pull out the "false" part for the "active" field?
[04:58:45] <capleton> or am i going about this all wrong?
[04:59:19] <joannac> oh, just get the whole document and pull that field out
[05:00:40] <capleton> so put it into a var?
[05:01:19] <capleton> or... i guess this is what i don't know how to do... how do I atually pull that field out from the document?
[05:01:27] <joannac> db.baz.findOne({_id: 100,"dates.name": "New Years"},{"dates.$.active": 1}).dates[0].active
[05:02:29] <capleton> find ONEONEONEONEONEON
[05:02:30] <capleton> fml
[05:02:42] <capleton> it works
[05:02:46] <capleton> lol, thanks joannac
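For reference, the projected result is still an ordinary document, so the boolean has to be pulled out with plain property access. A sketch using an object shaped like what that `findOne` call returns (sample values made up to match the pasted document):

```javascript
// What findOne({...}, {"dates.$.active": 1}) hands back: a document whose
// dates array contains only the element matched by the query.
const result = {
  _id: 100,
  dates: [{ active: false }]
};

// The positional projection narrows the array to one element;
// indexing plus property access does the rest.
const isActive = result.dates[0].active;

console.log(isActive); // false
```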
[05:13:53] <capleton> joannac:
[05:14:00] <capleton> i don't think it's working actually
[05:14:58] <capleton> what's findOne's behavior when it comes to a lot of subdocuments?
[05:15:35] <_rht> hello
[05:15:37] <capleton> actually, that can't be the problem because it's looking for a specific name
[05:16:20] <capleton> for some reason, i'm getting the same true/false value for each date name i try per user.. and i can't figure out why
[05:16:39] <capleton> for some users, all dates are returning "true", and for others they are "false"
[05:16:57] <_rht> BsonSerializer.RegisterSerializer(typeof(DateTime), new DateTimeSerializer(DateTimeSerializationOptions.LocalInstance));
[05:16:59] <_rht> hello, i have used following code snippet in my mongodb c# driver code,
[05:17:14] <_rht> to store my date as local date
[05:17:27] <_rht> but it is not taking any effect
[05:17:35] <_rht> can any one help me please
[05:18:06] <morenoh149> _rht: sounds like a c# question
[05:18:16] <_rht> hmm
[05:18:42] <_rht> but i am using mongo db c# driver
[05:19:44] <_rht> same issue as -> http://stackoverflow.com/questions/8063323/how-to-save-date-properly/8064980#8064980
[05:20:09] <_rht> but that solution not working for me
[05:21:13] <morenoh149> _rht: but hardly anyone here would know the c# syntax to help you
[05:22:56] <Boomtime> _rht: what do you mean "to store my date as local date"? 'local' is an entirely client-side condition, all datetimes are stored as UTC at the server
[05:23:15] <Boomtime> your datetime will be preserved as whatever you supply
[05:24:27] <joannac> capleton: ?
[05:24:52] <_rht> Boomtime: i mean i want to store my datetime as what i'm giving to mongo db, but mongodb always convert it to UTC
[05:25:16] <capleton> joannac: i think i figured it out... ultimately it's going to come down to me needing sleep.....
[05:25:37] <capleton> thanks for the help joannac i think i'll revisit this once i have some fresh eyes
[05:25:40] <Boomtime> _rht: which is to say, the value is preserved
[05:26:48] <Boomtime> _rht: what value are you storing? please provide a snippet of code showing the creation/source of the datetime
[05:26:54] <_rht> Boomtime: is there any way to prevent it
[05:28:09] <Boomtime> _rht: you think there is a conversion going on, there is not
[05:28:26] <_rht> ok thanks Boomtime
[05:28:38] <Boomtime> the data is preserved as you gave it, this is why you can't seem to "change the behaviour", to do so would be to damage the data
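The UTC point is easiest to see from the driver side: a datetime is stored as epoch milliseconds, so the instant is preserved exactly and only the textual rendering changes. A small illustration in plain JavaScript (the times are arbitrary examples):

```javascript
// The same instant written with a +05:00 offset and written in UTC:
const local = new Date('2015-01-05T10:00:00+05:00');
const utc   = new Date('2015-01-05T05:00:00Z');

// MongoDB stores the epoch milliseconds, so both are the same value;
// nothing is lost by the server "converting" to UTC.
console.log(local.getTime() === utc.getTime()); // true
```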
[06:05:09] <vagueBrother> hey all, how far can i nest data in a mongo document? are there recommendations?
[06:05:24] <vagueBrother> i’ve been having problems navigating many levels of objects and arrays
[06:05:43] <vagueBrother> when trying to look stuff up in a very nested document
[06:05:44] <vagueBrother> http://plnkr.co/edit/6E7vOFTfcvPYWC73h77g
[06:06:05] <joannac> that would suggest you're nesting too deep
[06:07:53] <vagueBrother> should i just have a collection for shows and a collection with episodes?
[06:08:01] <vagueBrother> what if i need info from both collections at the same time
[06:08:32] <vagueBrother> i want to keep them associated with each other so when i look up an episode, i have all this other great info
[06:12:23] <joannac> that's well and good except you're having trouble finding an episode, no?
[06:14:55] <vagueBrother> well i can’t tell if it’s because i’m nesting too deep or if i suck at querying
[06:15:00] <vagueBrother> i have the episode ID always
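One common alternative to deep nesting is exactly the two-collection split: each episode carries a reference to its show, and "info from both" becomes two cheap lookups by id. A rough sketch with in-memory arrays standing in for collections (all field names invented):

```javascript
// Two flat collections instead of one deeply nested document.
const shows = [
  { _id: 'show1', title: 'Some Show', network: 'ABC' }
];
const episodes = [
  { _id: 'ep42', showId: 'show1', season: 2, title: 'Pilot 2.0' }
];

// Look up by the episode id you always have, then follow the reference.
const ep = episodes.find(e => e._id === 'ep42');
const show = shows.find(s => s._id === ep.showId);

console.log(ep.title, '/', show.title);
```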
[06:49:38] <TheAncientGoat> Anyone know a way to fix broken ObjectID's eg a plain object with only a _str field in a $match query?
[10:18:49] <okanck> hello, im trying to add a shard but I got an error: couldn't connect to new shard socket exception [CONNECT_ERROR] for shard1.example.com
[10:19:04] <okanck> i tried to add the shard with IP but it didn't work
[10:19:09] <okanck> anyone help ?
[10:45:50] <joannac> okanck: you can connect to it?
[10:46:28] <okanck> joannac: thanks i have just tried to connect on query server. but i couldnt reach. i guess i ve some network problems
[10:47:00] <joannac> yes
[11:34:03] <okanck> joannac: i solved the problem. but when i close the ssh windows and try to connect to query server and wanna see the sh.status() it says that printShardingStatus: this db does not have sharding enabled. be sure you are connecting to a mongos from the shell and not to a mongod.
[15:24:52] <Sticky> I just did a 2.4->2.6 upgrade, after the upgrade the users that were previously present now seem missing. http://docs.mongodb.org/manual/release-notes/2.6-upgrade/ seems to claim that users should continue to work after the upgrade. Anyone any idea what could be causing this?
[16:44:56] <eboqu> hey, anyone knows if it is possible to backup a mongodb database situated on a windows azure vm using MMS?
[16:46:07] <cheeser> yep. should work just fine.
[16:47:17] <eboqu> is there specific documentation related to this because I cannot seem to find any update docs on the web
[16:48:42] <cheeser> dunno offhand
[18:25:06] <Streemo> is there something like findRandomOne that doesn't make me fetch the entire array from the cursor?
[18:58:40] <okanck> is there any connection limit on mongo?
[18:59:22] <cheeser> limits of the hardware and/or 20000 iirc
[18:59:38] <cheeser> i.e., you'd have to have a bug to hit it
[19:07:39] <okanck> I've switched to sharded cluster. when I wanna list the data on php, i get error almost all requests, the errors: "Read timed out after reading 0 bytes, waited for 30.000000 seconds" or "Remote server has closed the connection "
[19:16:47] <talbott> hello stripers
[19:18:05] <talbott> quick q
[19:18:25] <talbott> can i use the checkout for a customer to signup to a subscription?
[19:18:47] <cheeser> pretty sure you're in the wrong channel
[19:18:54] <talbott> oh
[19:18:56] <talbott> yah
[19:18:57] <talbott> sorry!
[19:19:08] <cheeser> :D
[19:39:59] <arussel> I have a doc: {a:"a", arr: [{a:1, b:2},{a:"x", b:"y"}]}, how do I find the docs where none of the elements of arr has a==z ?
[19:40:27] <arussel> same but with at least one of the elements of arr has a ==z ?
[19:44:46] <arussel> 1. $not : {$elemMatch: {a:"z"}}
[19:45:13] <arussel> 2. {"arr.a":Z}
[19:45:17] <arussel> is that right ?
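Those two queries do match their descriptions; the semantics are easiest to check against a couple of sample documents. A sketch of what each matcher means, using array methods in place of the query operators (sample docs invented):

```javascript
const docs = [
  { a: 'a', arr: [{ a: 1, b: 2 }, { a: 'x', b: 'y' }] },
  { a: 'b', arr: [{ a: 'z', b: 3 }, { a: 1,  b: 4 }] }
];

// {"arr.a": "z"}  --  at least one element of arr has a == "z"
const atLeastOne = docs.filter(d => d.arr.some(e => e.a === 'z'));

// {arr: {$not: {$elemMatch: {a: "z"}}}}  --  no element has a == "z"
const none = docs.filter(d => !d.arr.some(e => e.a === 'z'));

console.log(atLeastOne.length, none.length); // 1 1
```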
[19:50:35] <Streemo> is there something like findRandomOne that doesn't make me fetch the entire array from the cursor?
[20:35:18] <robo_> Hi - I'm attempting to test a function that connects to a mongodb database.
[20:35:33] <robo_> here is a pastebin: http://pastebin.com/xQStrruS
[20:36:00] <robo_> it keeps failing with 'expected undefined to be an object'
[20:36:16] <robo_> any help is appreciated - thanks!
[21:04:22] <ehershey> robo_: looks like maybe an issue outside of mongo/mongoose
[21:04:24] <ehershey> with eeg stuff
[21:04:26] <ehershey> but hard to tell
[21:04:29] <ehershey> the mongo stuff looks right
[21:08:02] <robo_> cool - thanks ehershey
[21:08:50] <robo_> maybe an issue with chai then -
[21:17:26] <Synt4x`> so I'm using somebody else's DB and it has a very weird format, there is a big list, each element is a single dict with the same key value, and different value pairs... it looks like [ {game : gameID}, {game : gameID}, {game : gameID}, ... ]
[21:17:43] <Synt4x`> I'm trying to see if the gameID I have is currently in that list... what's the best way of doing so?
[21:23:13] <ehershey> db.collection.find({ "listfield.game": yourgameIDHere})
[21:25:52] <Synt4x`> thanks ehershey:
[21:27:26] <ehershey> sure thing
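The dot notation works there because MongoDB applies `"listfield.game"` to every element of the array. The equivalent client-side check, with a made-up document of that shape:

```javascript
// A field shaped like [ {game: gameID}, {game: gameID}, ... ]
const doc = { games: [{ game: 101 }, { game: 202 }, { game: 303 }] };

// find({"games.game": id}) matches when any element's game equals id.
const hasGame = id => doc.games.some(g => g.game === id);

console.log(hasGame(202), hasGame(999)); // true false
```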
[21:36:45] <StephenLynx> hello there. I'm having an issue with mongo and my server service under systemd.
[21:36:52] <StephenLynx> I need my service to start after mongo
[21:37:02] <StephenLynx> but mongo is a sysvinit service, not a systemD
[21:37:18] <StephenLynx> how do I setup my service to start after mongo in this case?
[21:38:44] <mike_edmr> systemd problems...
[21:39:32] <StephenLynx> yeah, yeah. but is what centOS is using
[21:39:59] <StephenLynx> and apart from mongo taking too long to boot so I have to have my server to boot only after it, everything is fine
[21:40:10] <mike_edmr> better question for systemd people than mongo, service dependencies are a systemd issue
[21:40:43] <mike_edmr> imho
[21:40:49] <mike_edmr> but maybe someone else can answer here
[21:40:56] <StephenLynx> indeed. but I wondered if anyone here had experience with that, since mongo has not updated its daemon scripts for systemD distros.
[21:41:06] <buzzalderaan> curious if you could write a simple systemd service script for mongodb then tie your app to that, but i like mike_edmr says, probably a better question for systemd
[21:45:36] <mike_edmr> arch apparently has a script in its mongo package
[21:45:53] <mike_edmr> take a look at the one on this page: https://gist.github.com/mbj/1605894
[21:46:40] <mike_edmr> the key is those wants= after= to get your app to start up after mongo
[21:47:03] <mike_edmr> assuming you can make the app run as a service too
[21:51:24] <StephenLynx> yes, my app already is a service
[21:51:39] <StephenLynx> and thanks for that, It may prove useful as a last resort
[21:52:17] <StephenLynx> if I am unable to make my app a dependency of the sysvinit service, I will remove the mongo sysvinit service and replace by this systemd service
[22:02:18] <StephenLynx> aw yss, got it to work
[22:03:14] <StephenLynx> just had to put After=mongod.service on the Unit block
[22:06:31] <StephenLynx> thanks for the attention
[22:06:32] <StephenLynx> :v
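For the record, the fix was an ordering directive in the app's own unit file. A sketch of what such a unit might look like — the service name and paths are hypothetical, and `mongod.service` is whatever unit name systemd generates for the sysvinit script:

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=My app server
After=mongod.service
Wants=mongod.service

[Service]
ExecStart=/usr/bin/node /srv/myapp/server.js

[Install]
WantedBy=multi-user.target
```

`After=` only orders startup; adding `Wants=` also pulls mongod in when the app is started on its own.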
[22:54:25] <pkaeding> hi all, I'm noticing my logs filling up with messages like `connection accepted from 127.0.0.1:40185 #20593353 (77 connections now open)`, and I want to turn down the verbosity. I think `systemLog.verbosity` is the config parameter I need, but I can't find any docs explaining what is included in each level (or what the default level is)
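In case it helps anyone later: in the YAML config format, `systemLog.verbosity` defaults to 0 and raising it (1-5) only adds debug output, so it won't silence those lines; the per-connection accept/close messages are covered by the quiet option instead. A sketch of the relevant fragment:

```yaml
# mongod.conf fragment
systemLog:
  verbosity: 0   # default informational level; 1-5 add debug detail
  quiet: true    # suppresses "connection accepted/ended" messages
```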
[22:57:30] <AlecTaylor> Hi
[22:57:39] <AlecTaylor> Is Spark the right option as an alternative to MongoDB + Mahout via HDFS? - http://stackoverflow.com/a/27772606/587021
[23:03:40] <saml_> tell me
[23:03:43] <saml_> !transaction
[23:04:32] <saml_> a script runs for 3 minutes every 10 minutes. during 3 minutes, query results are wrong
[23:04:55] <saml_> how can I make the script update about 1000 documents in one transaction?
[23:09:15] <regreddit> saml_, so you need concurrency and atomicity?
[23:09:40] <saml_> maybe
[23:09:54] <saml_> the script updates 1000 documents with two fields: date,size
[23:09:57] <regreddit> you may need to implement a two phase design pattern in your script
[23:10:09] <saml_> many apps query those date,size
[23:10:23] <saml_> so during script is updating, query result is wrong
[23:10:30] <regreddit> and that update takes 3 minutes?
[23:10:35] <saml_> yes
[23:10:50] <regreddit> are a bunch of calculations being done or something?
[23:10:52] <saml_> two phase can be isolated to the script only? don't need to change apps, right?
[23:11:38] <regreddit> well, classic two-phase commits use an interim transaction collection
[23:11:40] <saml_> var mostRecent = db.docs.find({...}).sort({date:-1}).limit(1)[0].date; db.docs.find({..., date: mostRecent})
[23:11:44] <saml_> apps query something like that
[23:11:49] <regreddit> so probaby no app changes
[23:12:03] <saml_> i see. i'll read up on it
[23:12:12] <regreddit> http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/
[23:12:48] <regreddit> what you do is update the transaction collection with the new values, then update the original collection with those calculated values, then delete the transaction values
[23:13:25] <regreddit> that way the original data is never updated until all the results are calculatedd
[23:13:39] <regreddit> which im assuming is the time consuming part
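The staged-write idea can be sketched without MongoDB at all: compute everything into a side area during the slow part, then flip the values into the live documents in one short final step, so readers never observe a half-finished batch. This is a much-simplified sketch of the staging idea (the tutorial's full pattern also records transaction state for crash recovery); all names and values are hypothetical:

```javascript
// Live documents keyed by _id, plus a staging area for pending updates.
const docs = new Map([
  [1, { date: '2014-12-29', size: 10 }],
  [2, { date: '2014-12-29', size: 20 }]
]);
const staging = new Map();

// Phase 1 (the slow three minutes): compute and stage every new value.
// Queries against docs still see the old, consistent data here.
staging.set(1, { date: '2015-01-05', size: 42 });
staging.set(2, { date: '2015-01-05', size: 7 });

// Phase 2 (fast): apply the staged values, then clear the staging area.
for (const [id, fields] of staging) {
  Object.assign(docs.get(id), fields);
}
staging.clear();

console.log(docs.get(1).size, staging.size); // 42 0
```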
[23:59:52] <AlecTaylor> Is Spark the right option as an alternative to MongoDB + Mahout via HDFS? - http://stackoverflow.com/a/27772606/587021