PMXBOT Log file Viewer


#mongodb logs for Tuesday the 30th of April, 2013

[00:04:08] <Raynos> I keep reading about "extents" in the mongodb docs
[00:04:12] <Raynos> like `lastExtentSize`
[00:04:16] <Raynos> What is an extent
[00:43:06] <tjmehta> Is there any way to use mongo's indexes to determine all the values of a particular key across all the documents in a collection?
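What tjmehta describes is essentially what the shell's `db.collection.distinct("key")` helper returns; a minimal sketch of the same logic in plain JavaScript over an already-fetched array of documents (the documents and the `tags` field below are made up for illustration):

```javascript
// Collect every distinct value of a given key across an array of documents,
// in the spirit of db.collection.distinct("tags") in the mongo shell.
function distinctValues(docs, key) {
  const seen = new Set();
  for (const doc of docs) {
    const v = doc[key];
    if (Array.isArray(v)) v.forEach(x => seen.add(x)); // distinct() unwinds arrays
    else if (v !== undefined) seen.add(v);
  }
  return [...seen];
}

const docs = [{ tags: ["a", "b"] }, { tags: ["b", "c"] }, { other: 1 }];
console.log(distinctValues(docs, "tags")); // → [ 'a', 'b', 'c' ]
```

Running `distinct` server-side can use an index on the key; this sketch only shows the shape of the result.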
[00:45:43] <redsand> w2
[01:07:41] <Raynos> Oh I have to set the batchSize manually since it has a horrible default -.-
[01:07:44] <Raynos> got it. great.
[01:30:45] <mikeerickson> Hello gang, first time visitor
[02:05:53] <resting> does anyone know if it's possible to expand an array while exporting with mongoexport?
[02:13:24] <mikeerickson> Guess this channel doesn't have a lot of user activity.
[02:13:44] <jon_r> depends on your perspective
[03:17:00] <spaceinvader> hi, I think I'm having some problems with geoNear: https://gist.github.com/tmenari/5486403. The distance returned is 4km but I think it's more like 2km
[03:28:48] <spaceinvader> ahaha i figured it out
[03:29:02] <spaceinvader> lng/lat are the wrong way around, but swapped in both the data and the query (GeoJSON order), so the error is not huge
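GeoJSON positions are `[longitude, latitude]`, the reverse of the everyday "lat, lng" habit, and even a consistent swap still changes spherical distances. A small haversine sketch (coordinates here are made up) showing the effect spaceinvader hit:

```javascript
// Haversine great-circle distance in metres between two [lng, lat] positions.
function haversine([lng1, lat1], [lng2, lat2]) {
  const R = 6371000, rad = d => d * Math.PI / 180;
  const dLat = rad(lat2 - lat1), dLng = rad(lng2 - lng1);
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

const a = [10.0, 50.0], b = [10.01, 50.01]; // correct GeoJSON [lng, lat]
const swap = p => [p[1], p[0]];

console.log(haversine(a, b));               // true distance
console.log(haversine(swap(a), swap(b)));   // distance with lat/lng swapped differs
```

The longitude term is scaled by cos(latitude), so swapping the axes distorts distances more the further you are from the equator.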
[03:40:55] <dukedave> Is there anything I can do at the collection level to define transformations to apply to a path when it's written and retrieved?
[03:41:54] <dukedave> I'm trying to represent the same polymorphic doc in two ODMs (Mongoose and Mongoid), but they both insist on using their language's class name
[03:42:51] <dukedave> I'd like to write something ODM agnostic to the doc, but return the class names each ODM expects as two attributes
[03:55:32] <jon_r> dukedave I'm fairly certain mongoid allows you to change the field used to reinstantiate the klass; if it's conflicting with mongoose, change it?
[03:59:18] <dukedave> jon_r, ah, well, I've got that far, but it just means that each doc has two klass fields on it, so something like this: { _id: ..., actual: 'data', _mongooseType: 'MyMongooseKlasssName', _mongoidType: 'TheRubyKlassName' }
[03:59:45] <jon_r> yeah, you're probably not going to do any better than that
[03:59:58] <dukedave> I'd like to just store { ... _type: 'KlassName' }, and generate the return types for free
[04:00:03] <dukedave> jon_r, crap :
[04:00:04] <dukedave> :(
[04:00:20] <jon_r> you could do _type: { mongoose: '', mongoid: '' }
[04:00:51] <dukedave> jon_r, ah yeah, and see if I can pass a path to the ODMs
[04:01:23] <dukedave> I looked at monkey patching Mongoid to accept a mapping from string to a Class (and similarly for Mongoose), but they're both pretty hung up on the idea that the String *is* the class name
[04:01:41] <dukedave> (so there's no good place to make the abstraction)
[04:02:34] <dukedave> But.. What I'm thinking I'll do now, is just use separate collections, and generate the collection name from the klass name
[04:07:46] <jon_r> dukedave if you can do that why did you ever use the same collection?
[04:08:23] <dukedave> jon_r, well, my situation is very close to this SO question: http://stackoverflow.com/questions/9172834/choosing-mongodb-collections-structure-for-similar-data-structures
[04:09:04] <dukedave> Except my docs have a timestamp and a lat/lng
[04:10:15] <dukedave> And we'll be querying only on those two attributes, then iterating through the results
[04:12:23] <dukedave> But now I'm doing some quick testing locally and it seems that the (collection size) cost of storing a _type (or two as appears would be required) for each doc makes it significantly less attractive. I'm using nearly as much space storing the _type as the data :|
[07:00:37] <resting> any idea what could be the problem when a php findOne doesn't return results but the mongo shell does using the same conditions?
[07:05:07] <Nodex> can you pastebin your query?
[07:05:31] <Nodex> at a guess I would say it's type casting but without your query I cannot be sure
[07:07:13] <svm_invictvs> Heya
[07:14:23] <Nodex> best to just ask your question
[07:18:22] <qwebirc25680> ello
[07:18:40] <Aorion> i am trying to learn node.js/mongo
[07:18:44] <Aorion> but am having some issues...
[07:19:14] <Aorion> The quick and dirty is that I can connect to the DB, create a collection, and then list collections
[07:19:19] <Aorion> but the collections list is empty
[07:19:27] <Aorion> all of the errors are null
[07:19:29] <Aorion> what can cause this?
[07:19:41] <Aorion> that's the simplest issue I'm seeing
[07:20:02] <Aorion> more are such that I can enter data, on what should be a valid collection, and nothing makes it to the server
[07:20:33] <Aorion> The pastebin is here http://pastebin.com/uRkd5taR
[07:20:39] <Aorion> is this a bug?
[07:22:01] <Aorion> The only thing I can think of is that the data is added to the DB before the connection completes
[07:22:07] <Aorion> but that should return an error, correct?
[07:22:20] <svm_invictvs> I'm looking at creating an auto-incrementing _id field
[07:29:40] <Aorion> last time I checked, I wasn't an idiot, so I feel like if there are no errors reported by the library, and the database echoes that my client connected, and there are no errors in the transactions, that the database should have changed state
[07:29:43] <Aorion> but that is not occurring
[07:29:44] <Aorion> at all
[07:30:18] <Aorion> there are only so many permutations of the code I can try before this gets ridiculous
[07:33:06] <Aorion> lots of people coming and going, is anyone actually in here?
[07:33:24] <Nodex> lol
[07:33:28] <Nodex> have some patience
[07:34:23] <Nodex> check on the shell that the documents have in fact been inserted
[07:34:32] <Aorion> they have not
[07:34:49] <Aorion> that was beyond my initial problem
[07:35:29] <Aorion> my initial problem was that I use the createCollection method, then collections() and collectionNames() and neither of the latter will return anything
[07:35:37] <Aorion> but there is no error from createCollection
[07:35:41] <Aorion> and no error from open
[07:35:49] <Nodex> you dont need to create a collection
[07:36:03] <Aorion> so if we are opened, and we successfully added a collection, why aren't any of the collection methods working?
[07:36:06] <Nodex> collections are created automatically when inserting a document, if they don't exist
[07:36:22] <Aorion> okay. let's pretend I am just verifying that I have access to the database
[07:36:27] <Aorion> which has been proving difficult
[07:37:20] <Nodex> I don't know node.js all that well but perhaps "user_db.users" is wrong maybe?
[07:37:23] <Aorion> in this code ->http://pastebin.com/uRkd5taR I create the object which causes mongod to echo some initandlisten messages. Then I call the init() method which creates a collection and throws some default entries in there
[07:37:34] <Aorion> I've tried it in any case
[07:37:58] <Aorion> db.collections("users") or db.collections("users_db.users")
[07:37:58] <[AD]Turbo> ciao
[07:38:18] <Nodex> that's not how it works on the shell
[07:38:18] <Aorion> the latter was a test after I read on the docs that the names of the db are in the collection name implicitly
[07:38:30] <Aorion> well if it worked the same in the shell I would be fine
[07:38:31] <Nodex> db.COLLECTION_NAME.find(....);
[07:38:37] <Aorion> i can get everything working in the shell perfectly
[07:38:43] <Aorion> but the node interface is not working
[07:38:53] <Nodex> which I am trying to help with
[07:38:54] <Aorion> and I feel like I am doing the same operations
[07:39:00] <Aorion> okay.
[07:39:16] <Nodex> this.db.collection('users'...
[07:39:21] <Nodex> not user_db.users
[07:39:31] <Aorion> to start, I am under the impression that the datatype of "db.test" in the shell is a collection object for the test collection, is this correct?
[07:39:36] <Nodex> from what I gather you've already selected the DB when you "init"
[07:39:44] <Aorion> correct
[07:40:03] <Nodex> so why are you 1. creating a collection with "." in the name
[07:40:11] <Nodex> and 2. trying to query with a "." in the name
[07:40:32] <Nodex> pretty sure "." is not allowed in collection or key names (could be wrong)
[07:40:39] <Aorion> as I said, that was a temporary thing I was trying
[07:40:43] <Aorion> let me find it in the manual
[07:41:28] <Nodex> this.db.collection('users_db.users', function(error, user_coll) { ... console.log(user_coll); } ... <--- what does that log
[07:42:06] <Aorion> nil
[07:42:09] <Aorion> nothing
[07:42:17] <Aorion> and it's just 'users'
[07:43:00] <Aorion> can I synchronize on the database open or does that matter?
[07:43:08] <Aorion> will it retry/timeout?
[07:43:24] <Nodex> http://mongodb.github.io/node-mongodb-native/markdown-docs/collections.html <-- perhaps that will work?
[07:43:39] <Aorion> i had that open in another tab already
[07:43:45] <Nodex> I don't know what "synchronize" refers to sorry
[07:43:56] <Aorion> so the open has a callback method
[07:44:19] <Nodex> http://mongodb.github.io/node-mongodb-native/markdown-docs/insert.html
[07:44:21] <Aorion> so it happens asynchronously, and I am not sure of the operation for createCollections if the database is not yet opened
[07:44:29] <Nodex> the bottom function
[07:45:04] <Nodex> certainly the first part of it is relevant to you if not the findAndModify
[07:45:10] <Aorion> okay. I understand the insert, collections, etc. operation
[07:45:18] <Aorion> i am not confused as to the STATED operation of mongod
[07:45:26] <Aorion> i am confused as to the IMPLICIT operation of mongod
[07:45:29] <Aorion> there is something going on
[07:45:30] <Nodex> I don't know what "STATED" means sorry
[07:45:42] <Aorion> okay
[07:45:44] <Nodex> I don't use big hipster words - I am a mere coder
[07:45:45] <Aorion> here's my problem
[07:46:38] <Aorion> I read the docs, and the docs say, that I should open a connection to the db. The docs also say that I can create a collection with a given name. The docs also say that I can read the collection names using collections() or collectionNames
[07:46:55] <Aorion> the docs also say that errors at any point in that process will be shown by the first parameter of any callback
[07:46:57] <Nodex> perhaps you should do one thing at a time
[07:47:02] <Aorion> that is the stated operation
[07:47:08] <Aorion> i am doing one thing at a time
[07:47:12] <Aorion> i am trying to simply open a connection
[07:47:13] <Nodex> i.e. make sure you can connect, then make sure you can insert
[07:47:30] <Aorion> i know i can connect, I see the echo from mongod
[07:47:31] <Nodex> my advice is to follow the first part of the last function on this page ... http://mongodb.github.io/node-mongodb-native/markdown-docs/insert.html
[07:47:54] <Aorion> look, I appreciate your help, but I don't think you are listening to me
[07:48:01] <Aorion> my problem is not with the docs
[07:48:05] <Aorion> there is something going on
[07:48:08] <Aorion> behind the scenes
[07:48:08] <Nodex> ok good luck :)
[07:48:17] <Aorion> that is not listed in the docs or anything
[07:48:32] <Aorion> very simply, starting at state zero, if I add a collection, I should be able to see that collection
[07:48:33] <Nodex> it works for everyone else but you're somehow the special edge case it doesn't work for
[07:48:35] <Aorion> but I can't
[07:48:41] <Aorion> so there is something wrong with the setup
[07:48:52] <Aorion> that the error values are not catching
[07:49:06] <Aorion> and it does work for me
[07:49:20] <Aorion> i can go into mongo repl and do whatever I want
[07:49:23] <Aorion> it works perfectly
[07:49:48] <Aorion> but if I try to do it through node, the connection gets fucked but it doesn't tell me why, so I am left scratching my head
[07:49:57] <Aorion> which is bad error handling
[08:06:38] <svm_invictvs> January Jones sounds like a pornstar's name.
[08:06:54] <Nodex> lol
[08:07:38] <Zelest> true :P
[08:14:22] <diffuse> if i do a batch insert, do the _id's of each document get returned?
[08:15:20] <donCams> it seems a good idea to store chat logs into mongodb. hmmm
[08:49:11] <Siyfion> hey guys, Nodex you about?
[08:53:34] <ron> Nodex is always here. ALWAYS.
[09:10:02] <Siyfion> ron: except now apparently.. lol
[09:10:30] <ron> Siyfion: he's here. trust me. but... is your question related to mongodb, php, mongodb and php or something completely different?
[09:12:14] <Siyfion> basically I was just wondering what the best / fastest way of copying a load of documents is with a minor change (and change in _id).
[09:12:46] <Siyfion> I'm trying to implement a "clone" function for my product_category model,
[09:13:21] <Siyfion> I need it to create a new product category with the new name specified, then copy all the products in the old category into the new one.
[09:15:40] <Siyfion> ron: Ah I think I've found the answer myself.. basically I just need to query the products to clone, delete their _id's and change their category refs, then insert them all back in again
[09:15:56] <Siyfion> (I'm using mongoose, so that made it slightly harder, but I'll use http://mongoosejs.com/docs/api.html#model_Model.create)
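Siyfion's clone plan (fetch the products, strip their `_id`s, retarget the category ref, reinsert) can be sketched in plain JavaScript; the `category` and `name` field names below are assumptions from the conversation, not his actual schema:

```javascript
// Prepare fetched product docs for re-insertion under a new category:
// drop _id so the driver/server assigns fresh ones, and point the
// category reference at the newly created category.
function cloneIntoCategory(products, newCategoryId) {
  return products.map(p => {
    const { _id, ...rest } = p;              // discard the old _id
    return { ...rest, category: newCategoryId };
  });
}

const fetched = [
  { _id: "a1", name: "Widget", category: "cat-old" },
  { _id: "a2", name: "Gadget", category: "cat-old" },
];
console.log(cloneIntoCategory(fetched, "cat-new"));
// → two docs without _id, both referencing "cat-new"
```

The originals are left untouched; the returned array is what would be handed to a bulk insert (or Mongoose's `Model.create`).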
[09:34:47] <ExxKA> Hey Guys. I can't figure out how to group on the day and hour of a timestamp extracted from the Id. I can do it from a timestamp field I have created, so I guess I am just missing a small step?
[09:45:40] <BadDesign> ExxKA: You have to construct the OID manually, the steps are: 1) get a timestamp (uint64_t) for the day and hour you try to build the query for, 2) get a hex representation of the timestamp, 3) construct the OID by converting the hex representation obtained in the previous step to a string and concatenating 0000000000000000 to it
[09:46:58] <BadDesign> ExxKA: Then you can query for documents in the collection using something like { "_id" : { $gte : { "$oid" : "<oid1>" }, $lte : { "$oid" : "<oid2>" } } }
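BadDesign's steps can be sketched directly: the first 4 bytes of an ObjectId are its creation time in seconds, so rendering a Unix timestamp as 8 hex digits and padding with 16 zeros yields the smallest possible ObjectId for that instant. A sketch (a well-known trick, not driver API):

```javascript
// Build the lowest possible ObjectId hex string for a given Date:
// 4 timestamp bytes (8 hex chars) followed by 8 zeroed bytes,
// usable as a $gte/$lte range boundary on _id.
function oidLowerBound(date) {
  const seconds = Math.floor(date.getTime() / 1000);
  return seconds.toString(16).padStart(8, "0") + "0000000000000000";
}

const oid = oidLowerBound(new Date("2013-04-30T00:00:00Z"));
console.log(oid);        // 24 lowercase hex characters
console.log(oid.length); // 24
```

Two such boundaries (start and end of the hour) bracket every `_id` generated in that window.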
[09:47:57] <ExxKA> BadDesign, I don't think I was clear before..
[09:48:11] <ExxKA> This query works: db.baskets.aggregate([{ $match: { merchant: ObjectId("517a98d82893817876000002") } }, {$group: { _id: { year: { $year: "$created" }, month: { $month: "$created" }, day: { $dayOfMonth: "$created" }, hour: {$hour: "$created"} }, total: {$sum: "$amount"} } }, {$sort:{_id:1}}])
[09:48:37] <ExxKA> But here I am using a made up field called "created"
[09:48:46] <ExxKA> I just want to replace that with _id :)
[09:48:57] <ExxKA> But it is not that simple
[09:52:14] <BadDesign> That first 4 bytes from the ObjectID gives you the moment when the document was inserted in the collection
[09:52:28] <BadDesign> Is this what you're trying to group by?
[09:52:56] <ExxKA> Yes - I want output similar to this:
[09:53:10] <ExxKA> Hmm, let me just put that in a gist
[09:53:11] <ExxKA> 2 sec
[09:53:36] <ExxKA> https://gist.github.com/ExxKA/674fbdc5b7dd395233c3
[09:54:10] <ExxKA> That is what I get with the current query
[09:54:45] <diffuse> if i do a batch insert, do the _id's of each document get returned? I am trying to figure out how to retrieve this with the C driver.
[09:56:27] <BadDesign> diffuse: no you won't get the id's returned, because the id's should be calculated on the client
[10:00:03] <ExxKA> ..
[10:00:25] <ExxKA> BadDesign, did you have a chance to see the gist?
[10:02:36] <BadDesign> ExxKA: the query you're using is not ok after the $group part
[10:03:06] <ExxKA> ? It works...
[10:03:22] <ExxKA> But please, let me know what is wrong with it - it is my first attempt :)
[10:03:29] <BadDesign> ExxKA: you should do what I said above in those steps... and { $group: _id : ObjectId("myconstructedoid") }
[10:03:36] <BadDesign> Sorry I have to leave now
[10:03:41] <BadDesign> talk in an hour
[10:03:59] <ExxKA> Sure
[10:04:08] <ExxKA> Thanks for your help
[10:07:43] <colock> greetings
[10:16:23] <dob_> somebody here?
[10:17:41] <dob_> I am aggregating a collection a using group. I have a field refid and refname. Now I am grouping by refid, but I also want to output refname, but ignore it if it's different and output the first refname found and add it to my group
[10:17:47] <dob_> is that possible?
[10:54:00] <diffuse> how do you get the last insert id of a document with the C driver? Is it possible with mongo_insert or do you have to run mongo_run_command?
[10:55:00] <algernon> you can't do that reliably, unless you generate the id yourself.
[10:55:38] <algernon> (or if the driver does it, and caches it for you, which is not something the C driver does, iirc)
[10:58:04] <diffuse> thats unfortunate
[12:03:51] <tom_2321> hello, the primary node of my mongodb replicaset gets lots of these messages: ... [conn845090] getmore local.oplog.rs query: { ts: .... Is there something wrong with my oploc or the size of my oplog?
[12:04:46] <tom_2321> also these queries have a ten times higher response time than normal queries
[12:09:27] <kali> tom_2321: don't worry about it, it's the replication log
[12:42:55] <csengstock> Hi. Is there a low level access to the index structures from within the mongo shell? For example, i want to get the geohash strings for my records (with an attribute indexed by a 2d index). Or even better, i want to query the number of indexed documents, given a particular geohash string. The aim is to use the quadtree structure to efficiently retrieve document distributions over geographic space.
[12:44:06] <csengstock> of course i mean quadtree-like structure. haven't looked for it in depth, but probably the geohash strings are indexed by a btree?
[13:02:06] <aseba> Morning.. I have a question.. I have a mongodb server running. The disk for that server (ubuntu) got full, so I've attached another volume and rsynced all the /var/lib/mongodb to the new volume. I changed the config to load the new directory and ran it again. Everything seemed to be working fine, but now I get "$err" : "Invalid BSONObj size: -286331154 (0xEEEEEEEE) first element: EOO",
[13:02:15] <aseba> a db.repairDatabase() would fix that?
[13:05:48] <johnny_fg09384> sometimes mongostat shows me that one of my mongodb collections is locked over 100%, e.g. 105.8%. what does that mean?
[13:20:56] <ExxKA> Hey BadDesign. I figured out the problem, using map reduce instead of the aggregation framework. Thanks for your help
[13:29:35] <BadDesign> ExxKA: Did you construct your own ObjectID?
[13:30:14] <ExxKA> BadDesign, nope, they are already in the database
[13:30:44] <ExxKA> I used the .getTimestamp() function on the Id to get a Date object I could extract the values from
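The `.getTimestamp()` helper ExxKA used just reads the ObjectId's leading 4 bytes; a sketch of what it does under the hood:

```javascript
// What ObjectId.getTimestamp() amounts to: the first 8 hex characters
// of an ObjectId are its creation time as seconds since the epoch.
function oidTimestamp(oidHex) {
  return new Date(parseInt(oidHex.slice(0, 8), 16) * 1000);
}

// The merchant id pasted earlier in the channel decodes to an April 2013 Date:
console.log(oidTimestamp("517a98d82893817876000002"));
```

The resulting `Date` then gives year/month/day/hour for grouping, which is what the map-reduce solution relies on.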
[13:32:44] <ehershey> good trick
[13:35:12] <Number6> It's a very handy feature
[13:37:35] <Number6> Also, hi ehershey!
[14:26:03] <Nodex> we've all heard of cloud computing but has anyone seen this new thing called cloud commuting. It's being pioneered by airlines all over the world
[14:28:21] <Nodex> I think it will revolutionise the way we travel around the world
[14:28:32] <Nodex> and interconnect between countries
[14:42:58] <Psi|4ward> Hey there, any German mongo expert here for some (payed) consulting?
[14:43:44] <ron> that's a bit racist.
[14:44:15] <Psi|4ward> why?
[14:46:58] <xenoxaos> he wants consulting, and i'm assuming his native tongue is german...
[14:47:33] <Psi|4ward> the main goal is to sit vis-à-vis at a table
[14:50:16] <Derick> Psi|4ward: 10gen provides consulting: http://www.10gen.com/products/mongodb-consulting
[14:51:38] <Psi|4ward> yea I know, but 450/hour doesn't work for me
[14:52:22] <Psi|4ward> I'm rather looking for someone who can accompany me through my first mongodb project
[14:52:34] <Derick> Psi|4ward: where in germany are you?
[14:52:39] <Psi|4ward> nürnberg
[14:53:42] <Derick> hmm, I was thinking of suggesting to join a MongoDB UG meeting... but not sure if there are any in nürnberg
[14:53:45] <Derick> let me see
[14:54:12] <Psi|4ward> searched already but found none
[14:54:33] <Derick> there is one in Munich and Frankfurt though:
[14:54:41] <Derick> http://www.meetup.com/Frankfurt-Rhine-Main-MongoDB-User-Group/
[14:54:42] <xenoxaos> anyone maintain the source code around here?
[14:54:53] <Derick> http://www.meetup.com/Muenchen-MongoDB-User-Group/
[14:55:23] <Derick> xenoxaos: which source code?
[14:55:29] <xenoxaos> mongodb
[14:55:47] <xenoxaos> fighting getting it to play nice on arm since ~v1.8
[14:56:22] <Derick> hmm
[14:56:29] <Derick> I know the debian/ubuntu team is working on that too
[14:56:39] <xenoxaos> more x86 asm stuff in the newer versions
[14:57:27] <Mortah> hullo. What's the best way to query something that has the same key multiple times? (e.g. the oplog..)
[14:58:55] <Derick> the same key multiple times? can you show an example of that?
[15:00:48] <Mortah> well... no
[15:00:51] <Mortah> I'd like to :D
[15:01:11] <Mortah> I'm trying to debug an issue (with our app) and looking at the oplog to see the exact updates made
[15:01:51] <Mortah> unfortunately, mongo shell shows the update as: "o " : { "$set" : { "_version" : 3 }, "$set" : { "_version" : 3 }, "$set" : { "_version" : 3 } } }
[15:02:22] <Mortah> and, pymongo shows it as {u'$set': {u'sources': [51919L, 47756L]}}
[15:02:55] <Mortah> and I can only think its because $set has been specified on the document multiple times... and then somewhere in the client its kindly mangling it
[15:03:47] <Nodex> $set can only be used once per update
[15:04:08] <Nodex> so I suggest you look at your python code and see how it's being set 3 times
[15:04:09] <Derick> Mortah: that's a bug in MongoDB - which version are you running?
[15:04:18] <Mortah> 2.2.2
[15:04:23] <Derick> let me check that
[15:05:40] <Derick> Mortah: https://jira.mongodb.org/browse/SERVER-1606
[15:05:42] <Mortah> Is there a more raw way to query the oplog?
[15:05:52] <Derick> no
[15:06:42] <Mortah> gah
[15:06:58] <Mortah> cos upgrading won't actually help.. the entries are already there :D
[15:07:40] <Derick> that's true :-/
[15:07:53] <Mortah> it looks to me like the data is there though
[15:08:11] <Mortah> because depending on which way I query it I get one key or the other (guess it depends on internal ordering of the map)
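Mortah's guess is right: JavaScript objects (and most drivers' map types) cannot hold the same key twice, so which copy of a duplicated BSON key survives depends entirely on the parser. A quick demonstration in plain JavaScript:

```javascript
// A JS object literal silently keeps only the last occurrence of a
// duplicated key -- which is why clients can't faithfully display a
// BSON document that (illegally but representably) repeats "$set".
const doc = { bob: 1, bob: 2, bob: 3 };
console.log(doc);                         // { bob: 3 }
console.log(Object.keys(doc).length);     // 1

// JSON parsers behave the same way: last duplicate wins.
console.log(JSON.parse('{"x":1,"x":2}')); // { x: 2 }
```

Other languages pick differently (some keep the first occurrence), which explains why the shell and pymongo showed Mortah different halves of the same oplog entry.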
[15:11:45] <Nodex> anyone know of a neat hipster trick I can apply to a 3 part compound index where I always query at least 2 of the fields (one is static - cid) and the other 2 vary (uid, aid)? I've been racking my brains but it always results in 2 indexes
[15:13:04] <Derick> store uid/aid duplicate in another field - f.e. uaid?
[15:13:40] <Mortah> could use multikeys too
[15:13:41] <Nodex> I always need to query on client + uid/aid
[15:13:51] <Mortah> e.g. blah_field = [uid, aid]
[15:13:57] <Mortah> then index cid, blah_field
[15:14:03] <Derick> right, that's what I meant :)
[15:14:13] <Nodex> yeh, I wanted to avoid that
[15:14:28] <Mortah> :(
[15:14:36] <Nodex> I guess I can't avoid it
[15:14:52] <Mortah> how many documents will have the same cid?
[15:14:59] <Derick> Nodex: text search index on both fields?
[15:15:13] <Nodex> could be a lot of docs, It's a multi tenant CRM
[15:15:15] <Derick> but then you get stemming
[15:15:47] <Nodex> if I mix the aid,uid into an array I can never guarantee order of which is the uid/aid
[15:15:54] <Mortah> prefix?
[15:16:01] <Mortah> 'u1289', 'a1829'
[15:16:03] <Nodex> I mean I can sort the array in php before I send it but it's a little hacky
[15:16:12] <Derick> order can't be guaranteed
[15:16:16] <Nodex> yeh, gonna have to, I wanted to use ObjectId's
[15:16:35] <Nodex> @Derick : if you sort() in php you can sort ObjectId's and guarantee the order
[15:16:40] <Nodex> (as far as I have tested anyway)
[15:16:53] <Derick> sure, but order is not guaranteed to be preserved for fields at least
[15:17:24] <Nodex> so it will have to be an object .. between : {aid:'...',uid:'....'}
[15:17:25] <Derick> i guess that's not an issue for arrays though
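Unlike document fields, BSON arrays do preserve element order, so sorting the two ObjectId hex strings before writing makes the combined multikey field deterministic, which is Nodex's plan. A sketch of that pre-insert step (his code is PHP; shown here in JavaScript, with made-up ids):

```javascript
// Build a deterministic two-element combined key so the same (uid, aid)
// pair always produces the same indexed array value, whichever order the
// arguments arrive in. ObjectId hex strings sort lexicographically.
function combinedKey(uidHex, aidHex) {
  return [uidHex, aidHex].sort();
}

const uid = "517a98d82893817876000002";
const aid = "507f191e810c19729de860ea";
console.log(combinedKey(uid, aid));                 // same result either way round
console.log(
  JSON.stringify(combinedKey(uid, aid)) ===
  JSON.stringify(combinedKey(aid, uid))             // true
);
```

A compound index on `{ cid: 1, combined: 1 }` then serves the cid+uid and cid+aid queries from one index via the multikey entries.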
[15:18:11] <Nodex> my main trouble with sub documents is I put everything in SOLR too and sub docs don't really map back into solr
[15:19:05] <Nodex> Oh well, guess one can't have everything one wants LOL, where would the fun be in that!
[15:19:11] <Derick> :D
[15:20:08] <Mortah> so... if I managed to insert a document into mongo with the same key specified twice {bob: 1, bob:2} I have no way to query for it?
[15:20:09] <Nodex> solr now has awesome json support so it's really close to map mongo-> solr now :D
[15:20:44] <Nodex> Mortah : will that query not fail?
[15:21:12] <Mortah> as I understood it, bson allows docs with the same key multiple times... so could in theory get one into mongodb
[15:21:57] <Derick> Mortah: most languages/drivers however do not support that
[15:22:10] <Derick> Mortah: what do you want to query for?
[15:22:10] <Mortah> which ones do support it? :D
[15:22:17] <Derick> i don't know actually
[15:22:45] <Mortah> :( might just have to take a copy of the oplog then
[15:43:30] <Mortah> ahhhh
[15:43:31] <Mortah> mongosniff
[15:43:32] <Mortah> :D
[15:48:03] <Mortah> Derick: mongosniff rescued me :)
[15:48:16] <Mortah> o2: { _id: ObjectId('517fb43baf6670191daf664c') }, o: { $set: { _version: 3 }, $set: { priority_origin: [ "T" ] }, $set: { updated_date: new Date(1367323738191) } }
[16:03:24] <bartzy> Derick: I need to put a md5 hash in _id (it's unique)
[16:03:57] <Derick> bartzy: sure
[16:04:35] <Derick> what are you asking?
[16:04:44] <bartzy> haha :)
[16:04:51] <bartzy> Should I save it as a hex string, or as an int ?
[16:05:20] <bartzy> I mean - probably as an int it will be smaller and better for indexing ? or maybe it doesn't matter a lot ?
[16:05:40] <bartzy> Because it will be slightly more comfortable for me to use the hex string in _id , and not its number representation.
[16:57:12] <JoeyJoeJo> Is there a way in the php driver to get the number of results in the cursor without iterating over it first?
[17:00:14] <jeffwhelpley> is it best practice to try and keep documents flat or logically group fields into subdocuments when it makes sense. Or does it not matter at all either way?
[17:54:26] <jok> hi@all
[17:54:41] <jok> my question got no answer
[17:55:27] <jok> I run 'mongod --rest' and have a web interface to DBs and tables at the url localhost:28017/dbName/tableName and on localhost:28017/.
[17:55:35] <jok> how can I hide this status page
[17:55:50] <jok> because it is on a production server
[17:56:50] <ranman> jok: you could block port 28017 for non-local machines
[17:57:00] <ranman> via iptables or your firewall
[17:58:02] <jok> ranman, I want to give access for third-party developers to the DBs
[17:58:12] <jok> but hide status info and logs
[17:59:10] <ranman> jok: you probably want to roll your own page, using another rest wrapper.
[18:00:38] <jok> ranman, it can't be done using only mongo instruments?
[18:01:41] <ranman> jok: not that I'm aware of.
[18:13:10] <mfletcher> I'm setting up a QA environment. For mongodb, I've cloned one of our production servers (it's a VM), which is part of a replica set. The QA server is going to be a single node; it won't be a part of the production replica set. To ensure this, all I need to do is make sure that the replSet variable in my mongod.conf is commented out. Is this accurate?
[19:13:15] <dcholth> Hello, anyone around?
[19:14:38] <dcholth> I'm trying to figure out the best way to write an API call that mashes multiple collection objects together...
[19:14:49] <dcholth> If anyone wants to help me bounce some ideas around
[19:33:56] <leifw> dcholth: what's up?
[19:34:25] <dcholth> I think I figured it out... by breaking it down into smaller functions that call a callback, and when all functions finished, it finally sends the response...
[19:34:28] <leifw> I probably can't help much but I can bounce ideas
[19:34:36] <leifw> ok :)
[19:47:33] <dcholth> hmm... hitting a wall here
[19:52:32] <pwelch> hey everyone, I was wondering if someone could clear something up for me with Replicasets
[19:53:21] <pwelch> I understand a new deployment should use replicasets and not master/slave replication. My question is what happens if a secondary gets a write
[19:53:36] <pwelch> does the mongo cluster forward that to the node elected as primary?
[19:53:47] <pwelch> is that something that is handled with the client side software?
[19:59:31] <leifw> pwelch: I know the slave server will protect itself and throw an error, but I don't know how drivers handle that error
[19:59:31] <leifw> s/slave/secondary/
[20:00:29] <pwelch> ok, that's what I'm confused about. The docs mention writes should go to the primary, but what is the common way everyone is doing that? load-balancer? If it changes then the new primary needs to add itself to the load balancer
[20:00:55] <leifw> not sure, sorry
[20:01:41] <pwelch> leifw: thanks anyways. I'll hang around for a bit in case some sees the question.
[20:02:52] <Gargoyle> pwelch, leifw: The driver normally handles that.
[20:03:08] <pwelch> Gargoyle: so my developers need to be designing for this?
[20:03:56] <Gargoyle> Are you writing a client driver for mongo?
[20:04:32] <Gargoyle> pwelch: http://docs.mongodb.org/manual/core/read-preference/
[20:04:59] <pwelch> thanks, that helps I think
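The usual answer to pwelch's load-balancer question: there isn't one. The driver is handed a seed list of replica-set members, discovers which one is currently primary, and routes writes there itself, re-discovering after an election. A connection string of roughly this shape does it (hostnames, port, database, and set name below are placeholders):

```
mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0&readPreference=primary
```

`readPreference` only governs where reads go; writes always target the primary regardless.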
[20:51:00] <wintron_> hey there, any have experience using MongoDB with Vagrant? trying to set up port forwarding between MongoDB on my Vagrant VM and host machine, but can't connect.
[20:52:13] <wintron_> I have bind_ip set to 0.0.0.0 but no joy. any ideas?
[21:07:25] <tphummel> wintron_: a while back i set up a cluster of three vagrant vm's. each with a line like: m.vm.network :hostonly, "192.168.33.10"
[21:07:36] <tphummel> giving each a different ip
[21:08:06] <tphummel> seemed to work well for my local testing, if i remember
[21:10:12] <tphummel> sorry if that isn't exactly what you are asking
[21:10:33] <wintron_> tphummel: hmmm, wonder if should just destroy my vm and startover. can't see why it shouldn't _just work_
[21:14:44] <tphummel> wintron_: that is nice thing about vagrant boxes. destroy and start over.
[21:15:30] <tphummel> wintron_: is it important that the port be forwarded to the host or just to be able to connect?
[21:15:52] <wintron_> just need it to connect, thought port forwarding would be the easiest way to achieve that
[23:36:28] <Sxan> Are there any well-known strategies for performing counting on collections with moderate write loads?
[23:39:44] <federated_life> make sure your working set fits in memory
[23:40:17] <federated_life> send them to the secondaries
[23:44:09] <Sxan> Write to master, count on secondary?
[23:51:48] <federated_life> Sxan: yes
[23:52:02] <federated_life> you'll want secondaries for many reasons, some of which are scaling reads
[23:52:22] <federated_life> depending on your driver, you'll have to set read-preference