#mongodb logs for Wednesday the 23rd of April, 2014

[00:29:00] <MedicalJaneParis> i have a 3 node replica setup, primary and two secondaries. one of my secondaries went down and my clients (nodejs) are occasionally throwing errors
[00:29:03] <MedicalJaneParis> "No replica set member available for query with ReadPreference undefined and tags undefined"
[00:30:26] <MedicalJaneParis> now that i think about it, it may be ignoring the pref in the URL, since i pass in an object. ill try adding it there
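A minimal sketch of passing the read preference through the connection string, as MedicalJaneParis is about to try (node-mongodb-native; the hostnames, replica set name and database are hypothetical):

```javascript
// Spell out the replica set and read preference in the URI so the driver
// doesn't end up with an undefined ReadPreference.
var MongoClient = require('mongodb').MongoClient;

var url = 'mongodb://host1:27017,host2:27017,host3:27017/mydb' +
          '?replicaSet=rs0&readPreference=primaryPreferred';

MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  db.collection('things').findOne({}, function (err, doc) {
    console.log(err, doc);
  });
});
```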
[02:29:36] <b1nd> anyone know why I am unable to save data to my mongo db?
[02:29:53] <b1nd> It is almost like each new connection has its own database results
[03:24:07] <fresh2dev> if I have 1000 customer databases with different credentials, using the node.js native driver... it seems i have to call .connect 1000 times and can't reuse the same mongoclient or db object. this seems "heavy" as it basically opens 1000 connection pools. does anyone have good resources on this.... I've found this post describing similar to what i'm trying to do: http://stackoverflow.com/questions/15680985/what-is-the-right-way-to-deal-with-mongodb-connections but i can't find any definitive best practice for this sort of thing
[03:26:40] <fresh2dev> i haven't tried this yet but the only sane way i think is to remove the different credentials from the databases and make them all use the same username and password and then i can reuse the same connection pool?
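A rough sketch of the single-pool approach fresh2dev describes (one shared credential, then per-customer database handles from the same client); the option shape and all names are assumptions and vary by driver version:

```javascript
var MongoClient = require('mongodb').MongoClient;

// One connect call, one pool, shared credentials (hypothetical).
MongoClient.connect('mongodb://appUser:appPass@db.example.com:27017/admin',
  { server: { poolSize: 20 } },
  function (err, db) {
    if (err) throw err;
    // db.db(name) hands back another database object that reuses the same
    // underlying socket pool, so no extra .connect() per customer.
    var customerDb = db.db('customer_0001');
    customerDb.collection('orders').findOne({}, function (err, doc) {
      console.log(err, doc);
    });
  });
```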
[04:40:30] <in_deep_thought> right now to connect to my mongolab mongodb, I use this: http://bpaste.net/show/zpGPZFje4vZT8oUaRtV6/
[04:41:11] <in_deep_thought> however I want to use this same production db with my local code. should the same thing work? or do I have to substitute process.env for something?
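A minimal sketch of the usual pattern: read the URI from an environment variable in production and fall back to a local default. The variable name MONGOLAB_URI is an assumption; use whatever the paste actually defines:

```javascript
var MongoClient = require('mongodb').MongoClient;

// In production the env var points at MongoLab; locally it is unset,
// so the fallback URI is used instead.
var uri = process.env.MONGOLAB_URI || 'mongodb://localhost:27017/mydb';

MongoClient.connect(uri, function (err, db) {
  if (err) throw err;
  console.log('connected to', uri);
});
```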
[05:43:57] <fluter> hi,
[05:44:21] <fluter> why was SocketException removed from the C++ driver in 2.6.0?
[08:33:47] <doabue> hi mongodbers, question re: ulimit for open files and MMS, if anyone here is able to help
[08:34:22] <doabue> server has ulimit -n reporting 64000, but running db.hostInfo()['extra'].maxOpenFiles reports 1024
[08:34:37] <doabue> it seems MMS picks up the latter, and is warning that I have a low files limit
[08:35:36] <doabue> i'm wondering how db.hostInfo()['extra'].maxOpenFiles is calculated, and where the discrepancy lies
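One way to narrow down where the discrepancy lies, assuming a Linux host: compare what the server reports with the limit actually applied to the running mongod process, since an interactive shell's ulimit reflects your login session rather than the daemon (which may have been started by an init script with different limits):

```javascript
// In the mongo shell: the open-files limit the server reports.
db.hostInfo().extra.maxOpenFiles

// Compare with the limit applied to the mongod process itself, e.g.:
//   cat /proc/$(pidof mongod)/limits | grep 'open files'
```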
[09:10:59] <dmarkey> If my document has sub-documents, does the 16M limit apply to the size of all the sub documents as well?
[09:29:00] <Nodex> document is limited to 16mb period
[09:29:16] <Nodex> sub documents are just an object inside the document
[09:29:20] <Nodex> parent document*
[09:56:58] <dmarkey> Nodex: ok thanks
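For checking a concrete case, the shell can report the full BSON size of a document, embedded subdocuments included; the collection name below is hypothetical:

```javascript
// Total BSON size in bytes; the 16MB cap applies to this whole figure,
// parent document plus every embedded subdocument.
var doc = db.mycollection.findOne()
Object.bsonsize(doc)   // must stay under 16793600 bytes (16MB)
```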
[10:54:52] <tinco> hey guys I'm trying to transform a document like this: { 'time' => 1, 'kind' => 'myKind', 'occurrences' => 5} to { 'time' => 1, 'myKind' => 5}, is that possible?
[10:55:43] <Nodex> of course, just loop the collection and do it
[10:55:56] <tinco> loop the collection?
[10:55:57] <Nodex> and use $unset to remove old fields
[10:56:11] <Nodex> you know what a loop is right?
[10:56:31] <tinco> oh sorry, I mean that I want to query the db and get that result, not really transform the document in the database
[10:57:03] <Nodex> then no you can't do "SELECT Blah AS foo"
[10:58:06] <tinco> hmm
[10:58:19] <tinco> my query is like this atm: http://pastie.org/private/uwwfejnfjg4y7ycmfbd7nq
[10:58:23] <tinco> it's pretty big :P
[10:58:45] <tinco> perhaps I should just leave it like this
[10:58:53] <tinco> and do the last step on the client side..
[10:59:57] <Nodex> right, you're aggregating, might want to mention that :)
[11:00:40] <tinco> basically I want the end result to be {"slot" => {"begin" => 0, "end" => 5}, "myKind1" => 4, "myKind2" => 3, etc.. }
[11:02:25] <tinco> but this is just fine, I'll skip the last step
[11:02:31] <tinco> making big queries is fun :P
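A small sketch of that client-side last step, reshaping each aggregation result from { time, kind, occurrences } into { time, <kind>: occurrences }; the field names follow tinco's example and are otherwise assumptions:

```javascript
// Collapse { time: 1, kind: 'myKind', occurrences: 5 } into { time: 1, myKind: 5 }.
function reshape(docs) {
  return docs.map(function (doc) {
    var out = { time: doc.time };
    out[doc.kind] = doc.occurrences;
    return out;
  });
}

// e.g. in the 2.6 shell: reshape(db.events.aggregate(pipeline).toArray())
```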
[12:20:28] <Feel> Hello. I'm starting to learn MongoDB and have a question. In MongoDB, two documents in the same collection can have different numbers of fields. For example, in a collection "Users" I can have two documents: 1) with a key "Name", 2) with keys "Name" and "Surname". But how does MongoDB behave if I want to get all names in "Users"? (a document might not have the key "Name")
[12:21:43] <Nodex> eh?
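A minimal sketch of what Feel seems to be asking, assuming a Users collection: project the Name field and, if desired, filter out documents that don't have it at all:

```javascript
// Return only the Name field; documents without it simply come back without
// that key, the query does not error.
db.Users.find({}, { Name: 1, _id: 0 })

// Or skip documents that lack the field entirely:
db.Users.find({ Name: { $exists: true } }, { Name: 1, _id: 0 })
```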
[12:25:18] <jsfrerot1> hi, i'm trying to test a restore for mongodb sharded cluster, and following the documentation i'm missing some command details.
[12:25:45] <jsfrerot1> I'm following this procedure: http://docs.mongodb.org/manual/tutorial/restore-sharded-cluster/#procedure
[12:26:05] <jsfrerot1> and I'm wondering how to do this step: Restore the Config Database on each config server.
[12:26:21] <jsfrerot1> i'm obviously new to mongo, so any help would be appreciated
[12:29:58] <ggoodman> Given documents with the following (partial) schema: { identities: [ { user_id: id_1, service_name: service_1 }, { user_id: id_N, service_name: service_N } ] }
[12:30:49] <ggoodman> I would like a unique index so that no two documents can be created with the same {'identities.user_id': 1, 'identities.service_name': 1} pair.
[12:31:20] <ggoodman> My issue is that I would also like documents that have no identities and this seems to cause the unique index to fail. Anyone know what's going on?
[12:33:42] <ggoodman> Just found: https://jira.mongodb.org/browse/SERVER-3934
[12:43:56] <ggoodman> Seems the index works if made sparse: true
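A shell sketch of the index ggoodman appears to have ended up with: unique on the pair, sparse so documents with no identities at all don't collide on the missing value (collection name hypothetical):

```javascript
db.accounts.ensureIndex(
  { 'identities.user_id': 1, 'identities.service_name': 1 },
  { unique: true, sparse: true }
)
```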
[14:07:23] <locojay> hi when running a query i get error: { "$err" : "BSONElement: bad type 109", "code" : 10320 }
[14:17:39] <locojay> if i traverse the collection with pymongo i get AssertionError: Result batch started from 0, expected 205881
[14:25:10] <locojay> any way i can find this s..ker?
[14:42:15] <locojay> i identified the doc within a range based on some attribute but when i try to remove i get BSONObj size: 1597071153 (0x315F315F) is invalid. Size must be between 0 and 16793600(16MB) First element: mall.jpg: ?type=115
[18:12:29] <foofoobar> Hi. Some architectural question: I have a "Group". This group has "Albums". Each album contains "Photos".
[18:12:44] <foofoobar> Photos may be a huge list of links (e.g. 800 photos)
[18:13:34] <foofoobar> My current structure is something like: Group { albums: [albumId1, albumId2, albumId3]}, Album { photos: [photoId1, photoId2, photoId3]}
[18:14:08] <Derick> I'd store a document per photo:
[18:14:10] <foofoobar> Is this the right approach or should I make one huge Document like Group { albums: [ { photos: [...] }, ...]}
[18:14:57] <Derick> { _id: photoId, albums: [ albumId1, albumId2 ], Group: [ groupId1, groupId1 ] }
[18:15:21] <foofoobar> a photo can only be in one album. and one album can only be in one group
[18:15:37] <foofoobar> but a group has some meta data (name) and an album also has a name
[18:15:49] <Joeskyyy> you'd potentially run into memory limits with putting every photo in an album doc
[18:16:29] <Derick> foofoobar: in that case, store album metadata seperately
[18:16:53] <Derick> { _id: photoId, albums: albumId1, Group: groupId1 }
[18:17:01] <foofoobar> Derick, so this is what I have in my current approach, right?
[18:17:10] <Derick> no, you store lists of lists
[18:17:17] <Derick> and you cause unbound sizes for documents
[18:18:15] <foofoobar> http://hastebin.com/ozosibahuj better?
[18:23:29] <foofoobar> Derick, is that approach better?
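A sketch of the split Derick describes: one small document per photo, with album and group metadata kept in their own collections; all names and ids are hypothetical:

```javascript
db.groups.insert({ _id: 'group1', name: 'My Group' })
db.albums.insert({ _id: 'album1', group: 'group1', name: 'Holiday 2014' })
db.photos.insert({ _id: 'photo1', album: 'album1', group: 'group1',
                   url: 'http://example.com/photo1.jpg' })

// All photos of an album, with no unbounded array growing on the album document.
db.photos.find({ album: 'album1' })
```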
[18:48:00] <arussel> I would like to run some function regularly (fetch some doc, update based on some attributes ...). I'm not sure how/where I should store the function definition
[18:50:05] <arussel> why does the doc state: We do not recommend using server-side stored functions if possible.
[18:55:27] <arussel> or is there a way to send a javascript file to be run by the server ? (I can't store it on the same machine)
[19:00:53] <ranman> arussel: you can run it on an app server? you can send functions to be executed over the shell
[19:00:59] <ranman> and the shell can execute arbitrary javascript files
[19:01:14] <ranman> but yes storing it in a document as a code object is possible but not recommended
[19:01:24] <ranman> mainly because it becomes harder to edit and view
[19:01:37] <ranman> and version control
[19:02:29] <arussel> ranman thanks for the answer, I just need to find out how to do this with my driver.
[19:02:50] <ranman> arussel: which driver?
[19:02:55] <arussel> reactive mongo
[19:06:54] <ranman> arussel: have you checked out https://github.com/mongodb/casbah
[19:07:56] <arussel> yes, I've used it on some previous project
[19:08:54] <arussel> I quite like the non-blocking/asynchronous way of reactive mongo
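A sketch of ranman's suggestion: keep the job in a versioned .js file and have the shell execute it against the server, rather than storing it server-side. The file name, collection and fields are hypothetical:

```javascript
// Run from cron or an app server:
//   mongo myhost:27017/mydb maintenance.js
//
// maintenance.js:
db.jobs.find({ status: 'pending' }).forEach(function (doc) {
  db.jobs.update({ _id: doc._id }, { $set: { status: 'processed' } });
});
```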
[19:16:59] <Zelest> I've installed both mongo and bson_ext via gem and for some reason it refuse to load the bson_ext gem.. any idea? :/
[19:22:42] <Zelest> nvm, solved it.. gem gave me retarded permissions
[20:34:59] <hocza> hi
[20:36:58] <hocza> When I want to use a reference (one to many or many to many), for example: category [{"name": "Test", "_id": ObjectID("535621784d5001f173000001")}], then in my Transactions collection, when I use the ref, how should I store it?
[20:37:12] <hocza> "category_id": ObjectID("535621784d5001f173000001") ?
[20:37:27] <hocza> or just "category_id": "535621784d5001f173000001"
[20:40:05] <skot> store the objectid because it is the correct type, and only 12 bytes
[20:40:11] <tscanausa> hocza: that looks like an antipattern
[20:42:00] <hocza> however that ObjectID() is only visible in a specific mongo db viewer. In anything else it just seems like a simple string.
[20:42:20] <hocza> :\
[20:43:13] <hocza> tscanausa: what makes it anti pattern? :)
[20:44:32] <tscanausa> trying to have one to many or many to many relationships in a transactions collection.
[20:44:56] <hocza> oh well
[20:44:57] <tscanausa> generally when I need "relationships" I store the string of the object id
[20:45:12] <hocza> ohh i see :)
[20:45:58] <hocza> I have a MySQL DB and am porting it to mongoDB, and it just felt a bit strange but it's okay then. :) Couldn't find a very well explained example
[20:46:46] <tscanausa> store it like you would do with foreign keys
[20:47:28] <hocza> basically a transaction contains a: date, resource_id, category_id, amount, Tags[this going to be embedded], ItemName, Status
[20:47:41] <hocza> hm okay. So in mongoose i just should go with
[20:47:48] <hocza> resource_id: String
[20:47:56] <tscanausa> yes
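In mongoose terms, skot's earlier advice (store the reference as an ObjectId rather than a plain string) would look roughly like the following sketch; the model and field names are assumptions based on hocza's description:

```javascript
var mongoose = require('mongoose');

var transactionSchema = new mongoose.Schema({
  date: Date,
  amount: Number,
  itemName: String,
  status: String,
  tags: [String],  // embedded, as hocza plans
  category_id: { type: mongoose.Schema.Types.ObjectId, ref: 'Category' },
  resource_id: { type: mongoose.Schema.Types.ObjectId, ref: 'Resource' }
});

module.exports = mongoose.model('Transaction', transactionSchema);
```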
[20:48:24] <hocza> by the way, i really like this embedded stuff :)
[20:48:35] <tscanausa> you will for now
[20:48:55] <hocza> Previously i had to maintain a finance_transaction_tags for tagging
[20:49:17] <hocza> and for the clients too...
[20:50:23] <hocza> When my boss decided: hmm, what if I could give more addresses to someone? I was like, ohh you've got to be kidding me, I had asked whether you'd ever need to store more than 1 address... now I would go with: address: [{...}]
[20:52:04] <hocza> And this node.js as backend javascript... and frontend javascript programming... Using a single language for a web app... hmmm :) And with mongoDB i can store my objects without converting to mySQL tables... and back...
[20:53:11] <tscanausa> nice to see you are going full koolaid with javascript.
[20:53:52] <hocza> I do not know what koolaid is sorry :(
[20:54:36] <cheeser> http://en.wikipedia.org/wiki/Drinking_the_Kool-Aid
[20:54:48] <cheeser> it was actually flavoraid and not kool-aid
[20:54:49] <tscanausa> thanks cheeser
[20:55:42] <ggoodman> given a node-mongodb-native cursor, can I do both a count and a toArray() without reconstructing the query?
[20:56:14] <tscanausa> var a = thing.toArray()
[20:56:23] <tscanausa> length = a.length
[20:56:33] <hocza> thanks cheeser
[21:20:28] <Efrem> Is there a limit to how many levels of embedded documents you can have? I am looking at storing a tree as a single doc but I feel like that could be dirty/smell.
[21:21:59] <kali> Efrem: it can get difficult to query subdocument of subdocuments of a document
[21:22:23] <kali> Efrem: but if it's just storage you need, no querying, then it should not be a problem
[21:22:53] <Efrem> hmm, yeah that's a good point. The data I'm going to store shouldn't need to query anything that deep. It's just storage of brackets for competitions.
[21:26:35] <kali> Efrem: local update can get tricky too
[21:27:03] <Efrem> yeah I would have to rewrite the entire doc on update, for the most part, wouldn't I kali?
[21:27:41] <kali> probably, yes
[21:28:29] <Efrem> hmm, that gives me info to go on, thx kali
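A shell sketch of the kind of local update kali is warning about: with the whole bracket stored as one nested document, a single node can still be set in place via a dot path, but computing that path gets awkward as the tree deepens; the structure and field names are hypothetical:

```javascript
db.brackets.update(
  { _id: 'bracket42' },
  { $set: { 'rounds.0.matches.1.winner': 'Team A' } }
)
```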
[22:07:25] <test123> anybody here set up a replica set on mongodb?
[22:07:41] <test123> *on aws
[22:10:24] <tscanausa> test123: I have done it in Rackspace… very similar. What's up?
[22:11:45] <test123> rs.initiate is taking a long time
[22:11:55] <test123> should I have set the oplogsize to 100mb?
[22:13:34] <joannac> how much activity are you going to be running on it?
[22:13:53] <joannac> how many nodes, how geographically distributed, etc?
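A hedged sketch of where the oplog size comes into play; the numbers are placeholders:

```javascript
// The oplog size is fixed when mongod starts, e.g.
//   mongod --replSet rs0 --oplogSize 1024    (size in MB)
// rs.initiate() then has to allocate an oplog of that size, so a very large
// oplog on a slow disk is one common reason initiation appears to hang.
rs.initiate()
rs.status()   // watch the member move from STARTUP2 to PRIMARY/SECONDARY
```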
[22:17:32] <dberg2> how can I exclude multiple fields in a query? Something like "ne { a: 10, b: 20 }"
[22:18:14] <alex______> hi guys
[22:18:34] <alex______> i have a problem with PHP driver throwing "No candidate servers found"
[22:20:10] <alex______> $connection = new MongoClient("mongodb://10.88.217.247:27017,10.88.217.247:27018", array('username'=>"{$username}", 'password'=>"{$password}", 'replicaSet' => true, 'readPreference' => 'primaryPreferred'));
[22:25:12] <joannac> alex______: do you have a primary?
[22:25:28] <joannac> dberg2: do you mean exclude, or a not equals query?
[22:28:10] <dberg2> joannac: not equals
[22:28:25] <dberg2> joannac: but not equals on 2 or more key value pairs.
[22:31:03] <joannac> dberg2: db.foo.find({a:{$ne: 1}, b:{$ne: 2}}) ?
[22:32:01] <dberg2> the problem is that I want to exclude when both a and b match.
[22:32:38] <dberg2> this would exclude the entry for instance when a value matches but b does not match.
[22:33:05] <dberg2> in sql it would be something like, where a != 10 AND b != 20
[22:33:06] <dberg2> for example
[22:33:38] <dberg2> if I just use "$ne" this becomes an OR
[22:35:10] <joannac> dberg2: huh?
[22:35:46] <joannac> dberg2: here are 4 documents. which ones should match http://pastebin.com/Dn1YtiYU
[22:36:50] <dberg2> I want to say everything that does not include the pair (a = 1, b = 1)
[22:36:57] <dberg2> this should return the last 3 records
[22:39:08] <jhubert> Hi everyone. I’m trying to use the aggregation framework to get some stats out of a mongo collection. I’ve outlined the problem here: https://gist.github.com/jhubert/ecef5a8f989753a5af55
[22:39:45] <jhubert> Essentially, I’m trying to use both count and distinct to get some total numbers but am concerned about hitting the BSON object limits (or the group by 20k limit)
[22:40:26] <jhubert> My first pass just used direct distinct calls for each number I want, but that seems like a rather inefficient way to do it and I was curious if anyone else had a better suggestion.
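A hedged sketch of one common single-pass shape for "count plus distinct", assuming a userId-style field (the real field names live in the linked gist): group everything, count with $sum, collect distinct values with $addToSet, then turn the set into a number with $size (2.6+). Note that $addToSet accumulates into a single document, so very high cardinality can still run into the BSON size limit jhubert is worried about:

```javascript
db.events.aggregate([
  { $group: {
      _id: null,
      total: { $sum: 1 },
      users: { $addToSet: '$userId' }
  } },
  { $project: { _id: 0, total: 1, distinctUsers: { $size: '$users' } } }
])
```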
[22:40:28] <joannac> dberg2: db.foo.find({$or: [{a:{$ne: 1}}, {b:{$ne:2}}]})
[22:40:50] <brucelee> when i have replica sets, is data only written to the primary?
[22:40:50] <joannac> dberg2: I feel like that could be better optimised though, but I don't have time right now :(
[22:40:57] <joannac> brucelee: yes
[22:41:07] <joannac> then it gets replicated to the secondaries
[22:41:15] <brucelee> as far as IO intensity
[22:41:21] <brucelee> whats the difference between writes
[22:41:21] <joannac> you can use write concern to wait until it gets written to more nodes
[22:41:23] <brucelee> and replication
[22:42:00] <brucelee> cant i just say data is being written to everything then?
[22:42:02] <brucelee> :p
[22:42:16] <dberg2> joannac: ah, there's an $and operator. let me try that.
[22:42:16] <joannac> no, because it's not instant
[22:43:33] <joannac> brucelee: eventually any write will be at all your nodes. But it's not instant, replication is asynchronous
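A shell sketch of the write-concern point: the write itself always goes to the primary, and the concern only controls how long the client waits for it to reach other members; the collection and values are hypothetical:

```javascript
db.things.insert(
  { _id: 1, msg: 'hello' },
  { writeConcern: { w: 'majority', wtimeout: 5000 } }
)
```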
[22:48:11] <dberg2> joannac: yeah, that doesn't work. The problem stems from the fact that "$ne" is a post operator on a field. Ideally it would run the other way around. Like { "$ne": { a: 1, b: 2 }}
[22:48:25] <dberg2> let me try the second form.
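For reference, a hedged sketch of expressing "exclude only when both a and b match" with $nor, which negates the whole sub-expression rather than each field separately:

```javascript
// Excludes documents where a is 1 AND b is 1; a document with a:1, b:2 still
// matches the find, because it fails the {a: 1, b: 1} clause.
db.foo.find({ $nor: [ { a: 1, b: 1 } ] })
```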
[23:00:50] <jhubert> Anyone?
[23:15:44] <Ontological> Hey guys. I'm having a weird issue with GridFS. My files collection is called "files". If I do db.files.files.findOne({'_id': ObjectId("...")}) I am given a result. When I try to do files.get(ObjectId("...")) I am told the file doesn't exist. Any suggestions?
[23:16:08] <Ontological> using pymongo
[23:17:15] <Ontological> Nevermind
[23:17:26] <Ontological> Helps if the python is running on the same mongo server. derpus
[23:23:31] <in_deep_thought> is there a .list method for queries? and if so, can someone help me find the documentation on it?
[23:25:25] <joannac> in_deep_thought: .toArray() ?
[23:25:38] <joannac> (I'm not sure if that's what you're asking for)
[23:29:29] <in_deep_thought> joannac, I see .list() used on queries in several examples I find online. yet when I search docs.mongodb.org for documentation on .list(), I find nothing. What is .list() ?
[23:31:42] <joannac> in_deep_thought: link plz?
[23:31:49] <ap_> If I'm trying to send data from mongodb to a neo4j database, can I use MongoConnector? https://github.com/10gen-labs/mongo-connector
[23:33:12] <in_deep_thought> joannac, go to the querying section in http://www.querydsl.com/static/querydsl/2.1.0/reference/html/ch02s07.html or line 37 of this: https://github.com/madhums/node-express-mongoose-demo/blob/master/app/controllers/articles.js
[23:35:22] <joannac> Oh, neither of those is the mongo shell
[23:35:59] <joannac> if you write your own layer, you can call your functions whatever you want
[23:37:02] <in_deep_thought> joannac, what do you mean? what layers are they using in this case?
[23:37:09] <joannac> One is Node.js
[23:37:20] <joannac> One is Querydsl ?
[23:37:37] <in_deep_thought> so in Node.js, what does .list() do?
[23:37:59] <joannac> No idea. Try it and see