

#mongodb logs for Tuesday the 6th of January, 2015

[00:39:31] <vagueBrother> hey yall, still trying to get an answer to my question about storing sub documents inside documents in a collection. are there any best practices regarding how to do this? it seems simple enough but i’m having trouble figuring out how to query an episode.id in this example http://plnkr.co/edit/6E7vOFTfcvPYWC73h77g
[00:46:43] <vagueBrother> can i query just on the id of the episode object without knowing its parent documents?
[00:51:39] <mordof> vagueBrother: wouldn't you have to do a query for the show, then query the seasons, then episodes? i'm fairly new to mongo
[00:51:54] <mordof> so i don't know if there's another/better way to do it
[00:51:57] <vagueBrother> that’s what i was hoping against
[00:52:20] <vagueBrother> that sort of structure makes more sense to me conceptually, rather than having an episodes collection and a shows collection
[00:52:25] <vagueBrother> but maybe i’m wrong
[00:53:11] <mordof> i know mongoid allows the use of full querying on subdocuments, so it's gotta be possible somehow
[00:53:49] <mordof> vagueBrother: http://stackoverflow.com/questions/20519073/mongodb-embedded-document-querying
[00:54:03] <mordof> embedded documents* searching for that brought up relevant answers
[00:54:08] <Boomtime> vagueBrother: db.test.find({"seasons.episodes.id":3456})
[00:54:19] <vagueBrother> hmm
[00:54:23] <vagueBrother> let me try that
[00:54:53] <vagueBrother> doesn’t work
[00:54:56] <vagueBrother> i think it’s because
[00:54:58] <vagueBrother> seasons is an array
[00:55:03] <vagueBrother> and episodes is an array
[00:55:05] <Boomtime> nope, it works for me
[00:55:08] <vagueBrother> hmm
[00:55:20] <Boomtime> using your document (though i had to fix the objectid since that is invalid)
[00:55:24] <mordof> vagueBrother: they come out as arrays, but if they are actually embedded documents it works
[00:55:29] <vagueBrother> yeah, i just made it up
[00:56:01] <Boomtime> the rest i pasted directly, minus the ... bits (also invalid), then ran the query i pasted here
[00:56:16] <Boomtime> this is an extremely common practice
[00:56:19] <vagueBrother> hmm, not getting it to work, hold on
[00:56:43] <mordof> Boomtime: is there anything special one has to do for creating embedded documents?
[00:56:47] <Boomtime> 11:54:43@test>db.test.find({"seasons.episodes.id":3456})
[00:56:47] <Boomtime> { "_id" : ObjectId("acfb087a8efcb83086bfec03"), "title" : "Star Trek", "id" : 1234, "seasons" : [ { "id" : 2345, "seq" : 1, "episodes" : [ { "id" : 3456, "seq" : 1, "title" : "The Escape" } ] } ] }
[00:56:47] <Boomtime> 11:54:44@test>
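For reference, a minimal sketch of the document shape and dot-notation query demonstrated above, using the made-up ids from the pasted shell output:

    db.test.insert({
        _id: ObjectId("acfb087a8efcb83086bfec03"),
        title: "Star Trek",
        id: 1234,
        seasons: [
            { id: 2345, seq: 1, episodes: [ { id: 3456, seq: 1, title: "The Escape" } ] }
        ]
    })
    // dot notation descends through both arrays, so no positional indexes are needed in the filter
    db.test.find({ "seasons.episodes.id": 3456 })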
[00:57:03] <vagueBrother> yeah like, the “document” is the show
[00:57:08] <vagueBrother> episodes don’t have ids
[00:57:12] <vagueBrother> or rather, like mongo ids
[00:57:18] <vagueBrother> they only have the id that i give them
[00:57:22] <Boomtime> no, the document is the entire thing
[00:57:29] <vagueBrother> ok that’s what i thought
[00:57:36] <mordof> ah... lol
[00:57:44] <Boomtime> if you want something else you'll need to transform it
[00:57:56] <vagueBrother> still not getting this query to work
[00:59:23] <Boomtime> it sounds like you are trying to run before you can walk
[00:59:55] <vagueBrother> i sort of assumed getting stuff out of this nested object would be similar to how you do it in normal js
[01:00:17] <vagueBrother> i already have an episodes collection that works fine when querying by episode id
[01:00:26] <Boomtime> all queries return the document
[01:00:41] <Boomtime> you can filter the results to return only those fields you want
[01:01:10] <vagueBrother> GOT IT
[01:01:30] <vagueBrother> i’m dumb, and in my real data the episode id is called “episodeId” AND it’s a string, not a number, d’oh
[01:01:32] <vagueBrother> thanks Boomtime
[01:01:49] <Boomtime> cheers
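A sketch of the projection Boomtime mentions above (returning only the fields you want); the second argument to find() is the projection document, with field names following the Star Trek example:

    // return only the title and the embedded episodes, drop everything else including _id
    db.test.find(
        { "seasons.episodes.id": 3456 },
        { title: 1, "seasons.episodes": 1, _id: 0 }
    )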
[01:02:25] <vagueBrother> so, having said that
[01:02:51] <vagueBrother> Boomtime: is there a best practice for this sort of thing? would it be better or faster somehow if i had a discrete episodes collection and then a separate shows collection?
[01:03:07] <vagueBrother> i’m sure the answer is usually “it depends”
[01:03:28] <vagueBrother> but i’m trying to get a feel for how best to deal with this sort of data from people who are more experienced with mongo
[01:05:03] <Boomtime> vagueBrother: firstly, we recommend against using multiply nested arrays where you want to query on the items inside the deepest level
[01:06:23] <Boomtime> the source of this recommendation is a limitation in the array positional operator ($) which can only reference the first level array position in a successful match
[01:06:25] <vagueBrother> is that for performance reasons?
[01:06:35] <vagueBrother> or just because it’s a pain in the ass?
[01:06:37] <Boomtime> so not for performance reasons
[01:06:53] <Boomtime> it can become quite difficult to work with such documents
[01:06:57] <vagueBrother> i see
[01:07:19] <vagueBrother> so you might recommend to have an episode object that has an associatedShow id or something?
[01:07:28] <vagueBrother> and then link up the episodes to the shows that way?
[01:07:39] <vagueBrother> i just didn’t want to store repeats of data
[01:07:39] <Boomtime> in your case, i would recommend you ditch the seasons level
[01:07:50] <vagueBrother> and just put that info inside the episode?
[01:07:53] <Boomtime> yes
[01:08:00] <vagueBrother> makes sense
[01:08:25] <Boomtime> you can construct a season still if you need to, but you're more likely interested in the individual episode or the show generally anyway
[01:08:28] <vagueBrother> so i’d just have an array of episodes from ALL seasons
[01:08:31] <vagueBrother> yeah
[01:08:32] <vagueBrother> agreed
[01:08:45] <vagueBrother> maybe with a seasonNum property
[01:08:49] <Boomtime> right
[01:09:01] <vagueBrother> that sounds best, thanks a lot for your advice!
[01:09:09] <Boomtime> no problem
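A sketch of the flattened shape agreed on above, assuming a shows collection; the seasons level is dropped and each episode carries a seasonNum (field values here are just illustrative):

    // one document per show; episodes from all seasons live in a single array
    db.shows.insert({
        _id: 12,            // e.g. the showId used directly as _id (see the discussion below)
        title: "Star Trek",
        episodes: [
            { episodeId: "34", seasonNum: 1, seq: 1, title: "The Escape" },
            { episodeId: "35", seasonNum: 1, seq: 2, title: "The Hunt" }
        ]
    })
    // a single array level keeps queries and the positional operator simple
    db.shows.find({ "episodes.episodeId": "34" })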
[01:09:48] <vagueBrother> one final question and i’ll leave you alone, and if you want to point me to docs, i’m happy to read them
[01:09:59] <vagueBrother> if i want to query for episodes only inside a certain showId
[01:10:13] <vagueBrother> so like i want showId: 12 and episodeId: 34
[01:12:37] <Boomtime> db.test.find({id:12,"seasons.episodes.id":34})
[01:13:11] <vagueBrother> interesting
[01:13:13] <Boomtime> just provide all the matches that you need and they will all be required (by default) for a document to match
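Spelled out, the implicit-AND behaviour just described, using the ids from the example above:

    // every condition in the query document must match the same document
    db.test.find({ id: 12, "seasons.episodes.id": 34 })
    // equivalent explicit form
    db.test.find({ $and: [ { id: 12 }, { "seasons.episodes.id": 34 } ] })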
[01:13:27] <Boomtime> btw, you use _id and id
[01:13:29] <Boomtime> any reason?
[01:13:47] <vagueBrother> in my real data i do not, i just did that for the plunker
[01:13:56] <vagueBrother> they’re all showId, episodeId, etc
[01:14:19] <vagueBrother> so i can just put in a comma separated list of things i want to match and it’ll AND them all together?
[01:14:45] <Boomtime> mongodb requires a _id be present, you should provide a value for it and make use of it if you have something applicable
[01:15:09] <vagueBrother> that was a side question, actually. say i don’t want to use _id and want to instead index on showId?
[01:15:19] <vagueBrother> like that will be my unique value, not _id
[01:15:21] <vagueBrother> can i do that?
[01:15:29] <vagueBrother> and get rid of _id altogether?
[01:15:31] <Boomtime> yes, but it's pointless
[01:15:36] <vagueBrother> why’s that?
[01:15:39] <Boomtime> you cannot get rid of _id
[01:15:41] <vagueBrother> oh
[01:15:49] <Boomtime> mongodb requires a _id be present, you should provide a value for it and make use of it if you have something applicable
[01:15:58] <vagueBrother> can i put in any old value?
[01:16:00] <Boomtime> you have something applicable
[01:16:04] <vagueBrother> like, can i put my showId into _id?
[01:16:05] <Boomtime> yes
[01:16:14] <vagueBrother> but i can’t rename it to showId
[01:16:14] <Boomtime> the requirement is that it be collection-level unique
[01:16:22] <vagueBrother> gotcha
[01:16:22] <Boomtime> no, it must be named _id
[01:16:28] <vagueBrother> okay
[01:16:33] <vagueBrother> hmm, maybe i’ll do it that way instead
[01:16:36] <Boomtime> it will be provided for you if you do not set it
[01:16:45] <vagueBrother> because i obviously don’t want two of the same showId
[01:17:04] <vagueBrother> is the provided _id a derivative of timestamp?
[01:17:09] <vagueBrother> it looks like it might be
[01:17:16] <Boomtime> it has a timestamp in it
[01:17:31] <Boomtime> http://docs.mongodb.org/manual/reference/object-id/
[01:17:31] <vagueBrother> is it encoded in hex or something?
[01:17:41] <Boomtime> that link tells exactly how object-id is generated
[01:17:48] <vagueBrother> cool, looking
[01:17:48] <vagueBrother> thanks
[01:18:06] <vagueBrother> this is perfect
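Two small sketches of the _id points covered above, assuming a shows collection: a caller-supplied _id (here the showId) only has to be unique within the collection, and the shell can read the timestamp embedded in an auto-generated ObjectId:

    // use the showId as _id; the mandatory unique index on _id then guarantees one document per show
    db.shows.insert({ _id: 12, title: "Star Trek" })
    db.shows.insert({ _id: 12, title: "Star Trek" })   // second insert fails with a duplicate key error

    // an auto-generated ObjectId starts with a 4-byte timestamp, readable in the shell
    var oid = ObjectId()
    oid.getTimestamp()                                 // ISODate of when the id was generated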
[01:19:32] <mordof> I've got a 'legacy' 2d index on location: [x, y] fields in my collection. i tried doing a location greater than [3, 3] but it only matches the x portion for the greater than. any way i can get both to match?
[01:20:09] <mordof> or would i need to separate the location into two individual keys and index each individually?
[01:20:10] <Boomtime> what is the query you tried?
[01:20:39] <Boomtime> also, can you provide the index definition? (use db.COLLECTION.getIndexes())
[01:20:43] <mordof> hmm.. i'm using mongoid, so i'm not sure of the exact equivalent :/
[01:20:47] <mordof> ok
[01:21:31] <Boomtime> always try to know the equivalent in shell, it is an invaluable diagnostic aid
[01:21:56] <mordof> mhmm - i'm learning that gradually. i started out using mongoid so that knowledge is a bit further ahead
[01:22:28] <mordof> http://pastie.org/9815362 my indexes
[01:22:55] <mordof> i'll try to find the equiv query
[01:23:43] <Boomtime> documents have a "location" field specified as [longitude,latitude] ? (or [x,y] if you just want dimensionless euclidean distance)
[01:23:57] <mordof> if it's 2dsphere index it's assumed to be long,lat
[01:24:15] <Boomtime> the index you have is 2d, not 2dsphere
[01:24:19] <mordof> indeed
[01:24:23] <mordof> so it's normal euclidean
[01:25:31] <mordof> i suppose you could put lng,lat in 2d index? i'm not sure
[01:25:38] <Boomtime> most people do
[01:25:47] <Boomtime> just because it's easy to source
[01:27:10] <Boomtime> if you want to cheat to get the query that mongoid is running you can increase logging on the server to print it out to the logs, which will emit in shell format
[01:27:34] <Boomtime> but that might be more trouble than just learning to convert whatever mongoid does
[01:27:45] <mordof> mhmm
[01:29:22] <charform> is there any way to apply an index to a subset of documents based on the value of a field rather than the existence of a field? sparse index on documents where field == x?
[01:30:00] <Boomtime> charform: no
[01:30:25] <charform> hmm how might I achieve the following then..
[01:31:10] <charform> i have a users collection that contains users that signed up with their email and users that login via social media... I want a unique index on type: 'localUser', email: 1 but for that to not apply to social users..
[01:31:51] <charform> social users may have an email field that is the same as other social users, ie if they login with google then facebook where the accounts are both attached to the same email
[01:32:44] <charform> I guess I could create a separate collection for localUsers and social login users but other than this difference they have mostly the same fields and it would mean checking for user types before querying for users client side etc
[01:33:15] <charform> seems redundant / messy
[01:33:37] <Boomtime> charform: it sounds like you have an underlying unique condition that you have not fully captured
[01:34:01] <Boomtime> would you expect the same email to appear from two separate facebook logins?
[01:34:35] <charform> no since I think fb enforces unique emails
[01:34:43] <charform> is a sparse 3 field index a decent solution? userType: 1, email: 1, socialId: 1
[01:34:46] <Boomtime> right, so i think you have two choices
[01:34:59] <mordof> Boomtime: db.stars.find({ location: { $gt: [3, 3] } }) is the query i'm running
[01:35:01] <Boomtime> yes, that is one of the options
[01:35:12] <mordof> Boomtime: and I get back an entry with location: [6, 1]
[01:35:30] <Boomtime> mordof: right, because 6 is greater than 3
[01:35:39] <mordof> indeed
[01:35:41] <Boomtime> you are not using the location index
[01:35:57] <mordof> hm? how so?
[01:36:12] <Boomtime> mordof: you are querying the field as though it were a regular array, you should look up the spatial query operators
[01:37:17] <Boomtime> charform: if you create a unique sparse index where one of the fields is only present when the other two fields are combinatorially unique, that should work for your case
[01:39:01] <charform> does mongo short circuit on such indexes? ie after checking userType and email, if they are unique will it immediately go ahead with the write operation?
[01:39:07] <mordof> Boomtime: so this: db.stars.find({ location: { $geoWithin: { $box: [ [ 3, 3 ], [ 7, 8 ] ] } } }) is for sure using the index?
[01:39:43] <Boomtime> mordof: to check if a query is using a particular index, add .explain() to the end
[01:40:33] <mordof> ah..
[01:40:51] <mordof> indexBounds, then a whole bunch of numbers in arrays, lol
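A sketch of the geospatial form of mordof's query, assuming the stars collection and the legacy 2d index on location from the pastie; $geoWithin with $box constrains both coordinates, and explain() shows whether the index is used:

    // legacy 2d index over [x, y] pairs on a flat (euclidean) plane
    db.stars.ensureIndex({ location: "2d" })
    // match points whose x and y both fall inside the box with corners [3, 3] and [7, 8]
    db.stars.find({ location: { $geoWithin: { $box: [ [3, 3], [7, 8] ] } } }).explain()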
[01:41:01] <Boomtime> charform: the index will short-circuit as fast as it can, it doesn't need to check any specifics - the index is effectively a binary search path
[01:41:20] <charform> or what about concatenating email + socialId and using that as _id? for local users it would just be their email; social users would break ties with their socialId, and index on a single field
[01:42:29] <charform> although emails may be quite long I suppose and I would have to store the email again in a separate field
[01:43:13] <Boomtime> charform: that's an interesting idea, it certainly sounds plausible, you have options to test by the sounds of it
[01:43:42] <Boomtime> considering the data replication, consider that an index effectively replicates the data from the document
[01:44:05] <Boomtime> you may find that storage wise, some options are equivalent
[01:44:59] <newmongouser> Hi, does anyone know where ListField is defined. I'm getting error: "NameError: name 'ListField' is not defined"
[01:46:47] <Boomtime> you will need to provide more information, where are you seeing this error? what are you trying to do when it happens? what libraries/environment/versions etc
[01:49:21] <newmongouser> It's python 2.7.6, I'm actually trying to work with Flask, and I'm defining my database schema. I have class Org(db.EmbeddedDocument) and class Category(db.Document), in class Category() I've defined orgs=ListField(db.EmbeddedDocumentField('Charity')) and this is where I get the error
[01:50:24] <Boomtime> you are expecting ListField to be defined somewhere then?
[01:50:34] <Boomtime> is this some sample code you got from somewhere?
[01:53:12] <charform> Boomtime: would the standard ObjectId be more efficient for subsequent lookups though (as opposed to the concatenated email + socialId string)? I also have many documents in different collections that reference these user _id fields
[01:53:32] <Boomtime> it might be slightly
[01:54:27] <Boomtime> objectid is assured unique inside of 12 bytes - that's pretty fast binary divergence compared to virtually any data you're likely to have on hand
[01:54:52] <Boomtime> but if that field is not being used consistently by you then why have it at all?
[01:55:09] <Boomtime> 'fast' only matters if you use it
[01:55:18] <charform> they are appearing to be mostly a wash when running them now, mongoId slightly faster but I have to create the object each time client side from the string my mongoclient returns to the app
[01:56:21] <charform> which field isnt being used consistently? _id?
[01:56:58] <charform> after insert, that is the field I use always to retrieve a user
[01:57:07] <Boomtime> ok, then you use it consistently
[01:57:12] <charform> I just need this index initially for user creation
[01:57:32] <Boomtime> i still doubt the small difference in speed will make any tangible difference in reality
[01:58:28] <charform> yah I suspect I will just go with the sparse 3 field index unless there is a solution that is an order of mag better; the concat seems mostly identical if not slightly worse
[02:00:07] <charform> a separate collection would mean two smaller indexes I guess, smaller b trees to traverse but I doubt that would be of any consequence unless the world's population was using this service
[02:00:34] <Boomtime> sparse index is small, it contains only those documents which have the fields
[02:01:54] <Boomtime> you may consider that your sparse index only protects against the localuser duplicates, it does not prevent duplicates of combinations that should logically be unique (email:facebook for example)
[02:06:46] <charform> true, I was just thinking of the entire index, ie, every user is guaranteed to have at least one of the three fields
[02:07:12] <charform> therefore the unique check runs on the entire collection rather than the one it needs to only
[02:07:34] <charform> although if I put the socialId field first in the index it would short circuit like you say
[02:07:44] <charform> therefore effectively being as efficient as two collections
[02:08:20] <charform> in a lot of cases
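A sketch of the index discussed in this thread, with socialId leading as suggested above; the field names are the ones from the conversation, and, as Boomtime notes, a unique sparse compound index has its own inclusion rules and does not guard every logically unique combination, so it should be tested against real user documents:

    // unique, sparse compound index; key order here follows the short-circuit idea above,
    // but the best order really depends on the queries run against the collection
    db.users.ensureIndex(
        { socialId: 1, userType: 1, email: 1 },
        { unique: true, sparse: true }
    )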
[02:14:24] <newmongouser> I'm using the Tumblelog tutorial on Mongo's website, and I've verified everything up to the point asking for a ListField, but I'm still getting a name error. Is there a known reason for this or should I just reinstall all my extensions?
[02:15:55] <newmongouser> I think I've solved it
[03:28:17] <chetandhembre> Hi I am storing an array of objects in mongodb but when i want to fetch it does not return the result in the proper format,
[03:28:17] <chetandhembre> I am using mongoose
[03:28:45] <chetandhembre> My query is simple
[03:28:45] <chetandhembre> internal.profile.findOne({
[03:28:45] <chetandhembre> apiKey : apiKey
[03:28:45] <chetandhembre> },{
[03:28:45] <chetandhembre> counters : true
[03:28:46] <chetandhembre> , _id : false
[03:28:48] <chetandhembre> }, console.log)
[03:29:01] <chetandhembre> can any one help me with this ?
[03:29:52] <newmongouser> on the mongo site comparing sql to mongo, it says these statements are equivalent: "SELECT user_id, status FROM users" is equal to "db.users.find( { }, { user_id: 1, status: 1, _id: 0 } )", what's the purpose of the empty document in the db.users.find(), it's in some of their queries and not in others
[03:29:57] <cheeser> use pastebin please
[03:30:28] <cheeser> newmongouser: that's the empty where clause
[03:32:00] <newmongouser> Oh OK, is it necessary? If I ran the query without it, would the result be the same?
[03:32:49] <chetandhembre> My query look like this http://pastebin.com/UiMBDWRb
[03:35:38] <newmongouser> Is the empty where clause necessary when i only want to return objects with those fields present?
[03:51:40] <cheeser> newmongouser: you have to specify a query document even if it's empty.
[03:56:56] <newmongouser> On mongo's docs it reads: "Not specifying a query document to the find() is equivalent to specifying an empty query document." Do you think including it is just being explicit? Even from the page (http://docs.mongodb.org/manual/tutorial/query-documents/) it's not entirely clear
[03:57:35] <newmongouser> But in the snippet above - "db.users.find( { }, { user_id: 1, status: 1, _id: 0 } )", isn't the second document the query document?
[04:19:17] <rh1n0> Having trouble. I have a user that can connect to the mongodb but i keep getting an authorization error on a specific query - can you grant privileges to queries?!?
[04:19:39] <rh1n0> oh i bet its a stats privilege - nevermind ;)
[04:34:23] <newmongouser> Do you need an empty where clause on a query or does mongo supply it itself? in the docs it says find() will supply it if not given, but in all of their examples it is present
[04:37:11] <Boomtime> newmongouser: the docs use the shell find() method as the example, which can be called with no parameters and it will assume an empty document
[04:38:00] <Boomtime> note that if you need to pass a value for one of the other parameters to that method (filter or options) then you will have to supply an empty document in the first parameter because, well, javascript
[04:38:46] <Boomtime> in short, it depends on the language and driver you are using - what language and driver are you using?
[04:42:17] <Boomtime> newmongouser: did you see my reply?
[04:42:39] <newmongouser> Boomtime: is it a good habit to use the first blank document in case you invoke other methods later down the road (other than the find() method?
[04:42:46] <newmongouser> yes, sorry i was disconnected for a minute
[04:42:49] <cheeser> not really, no
[04:44:43] <Boomtime> newmongouser: your question also comes with the assumption that you'll be doing a lot of "find all" queries, which is kind of defeating half the reason of having a database - why would you always do this?
[04:46:56] <newmongouser> I won't. I was just going through Mongo's site in order to get more comfortable with best practices.
[04:47:35] <Boomtime> in this regard, i would say do whatever you are comfortable with
[04:52:19] <newmongouser> For my particular use I have roughly 40 'categories' with organizations listed in their respective category, so I'm running something similar to find( {category: 1}) or find({category: 3}) (category indexed)... There won't ever be more than 1500 organizations, so is the find syntax something I shouldn't bother getting more familiar with?
[04:53:03] <cheeser> you're going to need to know the "native" query language regardless of what driver your app uses.
[04:53:10] <cheeser> how else are you ever going to debug?
[04:53:45] <newmongouser> So it does become relevant later?
[04:53:56] <cheeser> it's relevant from day 1
[04:54:07] <newmongouser> I'm sorry. I'm confused.
[04:54:18] <newmongouser> Is sending the blank query document necessary in find()?
[04:54:32] <cheeser> no. that's already been said.
[04:55:01] <newmongouser> When does the blank query document become something I need have concern for?
[04:55:16] <cheeser> when you need to do projections like we covered earlier.
[04:59:19] <newmongouser> Ok, thank you cheeser. I'll read more on the read operations and projections docs. I appreciate your help.
[04:59:51] <cheeser> np
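Summing up the find() discussion above: the shell supplies an empty query document when find() is called with no arguments, but it has to be written explicitly as soon as a projection is passed as the second argument. A sketch using the users example from the SQL comparison page:

    db.users.find()                                        // same as find({}): match every document
    db.users.find({})                                      // explicit empty query document
    db.users.find({}, { user_id: 1, status: 1, _id: 0 })   // the {} placeholder is required here
    db.users.find({ status: "A" })                         // a filtered query needs no placeholder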
[07:06:06] <jimbeam> Is it ok to declare two different schema in two files for the same object? I only want to access a subset of the properties in each file
[07:08:30] <Boomtime> jimbeam: it sounds like your question is regarding a library you are using, mongodb is schemaless - what library are you using?
[07:08:42] <jimbeam> mongoose
[07:09:16] <Boomtime> righto, i have not used mongoose, but maybe somebody else here can help
[07:09:35] <jimbeam> ok thank you!
[08:00:46] <charform> if a field is part of a composite index, does that mean it is also indexed individually or should i create a separate index on it if I plan to query based on this field regularly?
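The general rule behind this question is the index prefix rule: a compound index can support queries on any leading prefix of its keys, but not efficiently on a trailing key alone. A sketch with hypothetical names:

    db.things.ensureIndex({ a: 1, b: 1 })
    db.things.find({ a: 5 })          // can use the compound index (prefix { a: 1 })
    db.things.find({ a: 5, b: 7 })    // can use the full compound index
    db.things.find({ b: 7 })          // cannot use it efficiently; a separate { b: 1 } index is needed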
[10:33:57] <Abluvo> hi, I am trying to install mongodb with a YAML config file which includes the storage.nsSize property but I keep getting parsing errors. Is there anything wrong with this YAML file http://pastebin.com/LAa5uapg?
[10:34:26] <Abluvo> The error I keep getting is: Error parsing INI config file: unrecognized line in 'storage:'
[10:35:22] <arussel> I have foo: [[XXX]], how do I match document for which XXX === "a" ?
[12:35:15] <theRoUS> Derick: happy new year
[12:36:41] <Derick> thanks - you too :-)
[12:44:20] <theRoUS> Derick: i'm mining a ticketing system, which means daily fetches of about 100K documents of at least 2K each from the ticketing system, and putting them into a mongodb.
[12:45:11] <theRoUS> i only want to do insert-new and update-if; is there a recommended best practice for checking to see if a document in mongo is different from one in memory?
[12:45:58] <Derick> I don't know... I guess you could calculate a clever hash - that you also store in the documents in the collection? I think that would be fastest
[12:46:08] <theRoUS> atm, i'm working along the lines of including a shasum field in the db documents, and just doing a find() of the key field and the shasum
[12:46:16] <Derick> :-)
[12:46:36] <theRoUS> and comparing the shasum from the find() with the one calculated for the in-memory data fetched from the ticketing system
[12:46:49] <theRoUS> heh.
[12:46:54] <Derick> right - that's what I was suggesting :D
[12:47:49] <theRoUS> update() has :upsert; i wish insert() had an :update_if (if the insert document already has an _id field)
[12:48:16] <theRoUS> Derick: okey, so it looks like i'm already on the right track
[12:48:23] <theRoUS> Derick: thanks!
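A sketch of the approach described above, with hypothetical names (tickets collection, ticketId as the source key, shasum holding the hash of the fetched document): look up the stored hash and write only when the ticket is new or has changed.

    // doc is a freshly fetched ticket that already carries a computed shasum field
    var existing = db.tickets.findOne({ ticketId: doc.ticketId }, { shasum: 1 });
    if (existing === null || existing.shasum !== doc.shasum) {
        // replace or insert the whole document; upsert covers the insert-new case
        db.tickets.update({ ticketId: doc.ticketId }, doc, { upsert: true });
    }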
[13:35:40] <Naeblis> Hi. How can I count the *total* of all nested arrays for my collection items which match a certain query? Eg: { $match: { foo: bar }} and get a total count of all the result.myArray items
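One common way to get the total Naeblis is asking about, sketched with placeholder names taken from the question (items collection, foo/bar, myArray): match first, then unwind the array and count the unwound elements.

    db.items.aggregate([
        { $match: { foo: "bar" } },                      // restrict to the documents of interest
        { $unwind: "$myArray" },                         // one output document per array element
        { $group: { _id: null, total: { $sum: 1 } } }    // count them all
    ])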
[13:42:48] <leotr|2> hi! I have a problem. I've run out of disk space
[13:43:30] <leotr> i tried to create nfs share and move some of files to that share and then created symlinks
[13:43:58] <leotr> but it didn't help, mongodb shows me error messages because it can't lock file
[13:44:07] <leotr> how can i solve the problem
[13:46:28] <cheeser> iirc, locking files on nfs is basically a non-starter
[13:54:58] <leotr> cheeser: so it's better to use iSCSI or if possible to connect some external drive
[14:00:57] <cheeser> networked drives, in general, are going to have perf problems
[14:16:40] <aliasc> Hi all
[14:17:24] <aliasc> anyone using mongodb on game dev ?
[14:25:07] <Sticky> aliasc: game dev is no different to programming any other application wrt db's
[14:26:00] <aliasc> just asking though. a guy in this channel dropped me a link on a project using mongodb
[14:26:22] <cheeser> it's a rather vague question without an interesting answer
[14:30:07] <aliasc> im using mongodb on web development with nodejs
[14:30:13] <aliasc> and im enjoying it
[14:30:20] <aliasc> its easy and fast
[14:30:34] <aliasc> i used to be mysql guy
[14:31:05] <aliasc> im not new here :)
[14:31:27] <aliasc> im just asking if someone tried mongodb on games
[14:49:03] <SpeakerToMeat> Before I google (this might be stupid), I wanna ask here... is there a "futton" for mongo?
[14:54:21] <cheeser> wth is a "futton"
[14:58:42] <SpeakerToMeat> cheeser: On CouchDB, a built-in query/work UI
[15:13:56] <Sticky> SpeakerToMeat: do you mean a gui for mongo?
[15:14:19] <Sticky> there is not one built in, there are a fair few 3rd party ones
[15:14:44] <SpeakerToMeat> I... I think I wont try mongo for this, for stupid reasons
[15:21:39] <xissburg> Is it possible to constrain the value of a property such as 'gender' to make sure only 'm' or 'f' are inserted?
[15:21:53] <cheeser> no
[15:25:13] <xissburg> kthxbye
[16:04:47] <kexmex> why do i need to delete system.users.metadata.json
[16:04:49] <kexmex> before restoring my db?
[16:05:10] <kexmex> no perms to create indexes on a system table?
[16:48:59] <dman777_alter> how come the documentation does not have cursor.length() method?
[18:59:55] <Snoopy> Hey guys, I had a quick question about the php driver for mongo, if this is the right place to ask it..
[19:00:24] <dman777_alter> might want to ask on php
[19:00:28] <dman777_alter> pretty dead here
[19:00:54] <Snoopy> ok do you know the channel?
[19:01:26] <cheeser> don't confuse dead with quiet.
[19:08:56] <dman777_alter> lol
[19:18:06] <Songbun51> join north korean news
[19:22:37] <cheeser> never!
[19:26:08] <proteneer> in GridFS - if I’m updating an existing file, and a read comes in during the update, does it load the old file?
[19:28:35] <proteneer> or is the behavior driver specific?
[20:06:01] <bhahn> does anyone know whether attempting a failover during a long foreground index creation will cause any problems (mongo 1.8)?
[20:06:24] <Derick> 1.8?
[20:06:40] <bhahn> yeah
[20:06:54] <Derick> possibly?
[20:06:58] <Derick> that's... old
[20:07:56] <bhahn> we just got bitten by https://jira.mongodb.org/browse/SERVER-3067
[20:08:21] <bhahn> so we’re basically close to outage state
[20:09:03] <Derick> that's fixed in 2.4 from what I can see from the issue
[20:09:19] <Derick> you *really* should consider upgrading 1.8 to 2.6 (or at least 2.4)
[20:09:25] <bhahn> absolutely
[20:09:46] <bhahn> trying to find out if there’s anything we can do now to stop the outage
[20:09:50] <Derick> k
[20:10:02] <Derick> I think you should be able to step down the primary node
[20:10:09] <Derick> but, I can not guarantee it won't break
[20:10:39] <bhahn> what does break mean? data corruption? just an error?
[20:14:34] <Derick> an error
[20:14:38] <Derick> do you have a backup?
[20:25:25] <bhahn> not a recent-enough one
[20:28:58] <mng7> hi
[20:29:22] <mng7> db.collection.update({processed: {$exists: false}}, {$set: {processed: false}})
[20:29:46] <mng7> why doesn't that work to set processed to false on a single document where processed doesn't exist?
[20:30:00] <mng7> (i know i should specify multi to make it work for all of them, but let's stick to this for now..)
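For reference, update() without options modifies at most one matching document; the multi-document form mentioned in the parenthetical would look roughly like:

    // set the flag on every document that is still missing it
    db.collection.update(
        { processed: { $exists: false } },
        { $set: { processed: false } },
        { multi: true }
    )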
[20:44:02] <hrrld> Hello, I'm using the node.js driver and I'm calling .each() on a cursor for the first time. How do I signal that I've accumulated as many documents as I want, and that I don't want any more calls to the callback I passed to .each() ?
[20:53:11] <hrrld> Hm. There may not be a way: https://github.com/mongodb/node-mongodb-native/blob/2.0/lib/cursor.js#L420
[22:25:59] <harttho> Hey, I've got a db/collections on shard 1 (and not on shard 2)
[22:26:19] <harttho> one mongos connection allows me to use the database and show collections
[22:26:30] <harttho> and another mongos connection doesn't show the collections
[22:27:39] <harttho> sh.status() shows the primary for the DB is on shard 2, where the data is actually on shard 1
[22:31:36] <harttho> (for both mongos connections on sh.status())
[22:35:02] <harttho> Closest issue I could find is https://jira.mongodb.org/browse/SERVER-12268
[22:37:18] <Boomtime> harttho: did you issue a movePrimary or have you dropped the database and re-created it any point in the past?
[22:37:54] <harttho> have dropped and recreated
[22:47:39] <harttho> running mongostat on 1 mongos shows reads/writes/etc
[22:47:52] <harttho> running mongostat on the other mongos (with missing collections) shows no reads/writes/etc
[22:48:02] <harttho> (Note, the 2nd shard isn't being used yet with proper sharding)
[22:58:07] <harttho> scratch those last 3 statements (irrelevant)
[23:17:02] <harttho> Boomtime: any thoughts? It appears mongos2 is routing to the incorrect shard, and mongos1 is reading from the correct, but empty, shard
[23:17:16] <harttho> sh.status has same result on both
[23:17:57] <Boomtime> "harttho: have dropped and recreated"
[23:18:03] <Boomtime> the database?
[23:18:37] <harttho> Yes
[23:18:43] <Boomtime> https://jira.mongodb.org/browse/SERVER-939
[23:19:01] <Boomtime> it's subtle and that ticket will look baffling at first
[23:19:17] <Boomtime> may be easier to read this one first: https://jira.mongodb.org/browse/SERVER-15213
[23:20:04] <Boomtime> the good news is that you have detected it and can fix your situation very easily now; restart the mongos
[23:20:45] <harttho> Similar to the conclusion we came up with
[23:21:05] <Boomtime> you can avoid a restart of the mongos by issuing a flushRouterConfig if you prefer
[23:21:08] <harttho> Question is, when restarting the mongos, will the data from shard 1 be moved? or will it just start to write on the correct shard
[23:21:20] <Boomtime> it will start to write to the correct shard
[23:21:40] <Boomtime> if you have data now residing on shard 1 then you have a problem
[23:21:44] <harttho> We were going to move data from shard 1 to 2 (new primary), flushRouterConfig, enjoy writing data to existing collections on correct shard
[23:21:58] <harttho> Will this work?
[23:21:59] <Boomtime> you will need to manually access that data, connect to the shard directly and reconcile it manually
[23:22:15] <Boomtime> yes, as you describe
[23:22:40] <harttho> Haha thanks for the help, glad our thoughts line up with yours/the ticket's
[23:24:09] <Boomtime> np, sorry for the mess, it is a nasty effect that one
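For reference, the flushRouterConfig mentioned above is an admin command issued against the mongos whose routing table is stale:

    // refresh the cached routing metadata without restarting the mongos
    db.adminCommand({ flushRouterConfig: 1 })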
[23:58:23] <Boomtime> it is basically impossible to query on that field
[23:58:28] <proteneer> the files field?
[23:58:37] <proteneer> or if I make a shadow field
[23:58:55] <proteneer> like files : {"a.b" : "c", "a" : {"d" : "e"}}