[03:17:00] <spaceinvader> hi, I think I'm having some problems with geoNear: https://gist.github.com/tmenari/5486403. The distance returned is 4km but I think it's more like 2km
[03:29:02] <spaceinvader> lng/lat are the wrong way, but in both geojson, so the error is not huge
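spaceinvader's observation (swapped lng/lat in both documents gives a wrong but not wildly wrong distance) can be checked directly with a haversine calculation. This is a sketch with made-up coordinates near London, not the points from the gist; it only demonstrates that swapping the coordinate order in both inputs still changes the computed distance.

```python
import math

def haversine_km(lng1, lat1, lng2, lat2):
    """Great-circle distance in km between two (lng, lat) points.
    GeoJSON order is [longitude, latitude]."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = rlat2 - rlat1
    dlng = math.radians(lng2 - lng1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlng / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

# Two points stored correctly as [lng, lat]:
correct = haversine_km(-0.1278, 51.5074, -0.1000, 51.5200)
# The same coordinates with lng/lat swapped in BOTH inputs:
swapped = haversine_km(51.5074, -0.1278, 51.5200, -0.1000)
print(round(correct, 2), round(swapped, 2))  # the two distances differ
```

Swapping both inputs consistently does not cancel out, because longitude degrees shrink with the cosine of the latitude while latitude degrees do not; that is why the reported distance is off but in the same ballpark.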
[03:40:55] <dukedave> Is there anything I can do at the collection level to define transformations to apply to a path when it's written and retrieved?
[03:41:54] <dukedave> I'm trying to represent the same polymorphic doc in two ODMs (Mongoose and Mongoid), but they both insist on using their language's class name
[03:42:51] <dukedave> I'd like to write something ODM agnostic to the doc, but return the class names each ODM expects as two attributes
[03:55:32] <jon_r> dukedave I'm fairly certain mongoid allows you to change the field used to reinstantiate the klass; if it's conflicting with mongoose, change it?
[03:59:18] <dukedave> jon_r, ah, well, I've got that far, but it just means that each doc has two klass fields on it, so something like this: { _id: ..., actual: 'data', _mongooseType: 'MyMongooseKlasssName', _mongoidType: 'TheRubyKlassName' }
[03:59:45] <jon_r> yeah, you're probably not going to do any better than that
[03:59:58] <dukedave> I'd like to just store { ... _type: 'KlassName' }, and generate the return types for free
[04:00:20] <jon_r> you could do _type: { mongoose: '', mongoid: '' }
[04:00:51] <dukedave> jon_r, ah yeah, and see if I can pass a path to the ODMs
[04:01:23] <dukedave> I looked at monkey patching Mongoid to accept a mapping from string to a Class (and similarly for Mongoose), but they're both pretty hung up on the idea that the String *is* the class name
[04:01:41] <dukedave> (so there's no good place to make the abstraction)
[04:02:34] <dukedave> But.. What I'm thinking I'll do now, is just use separate collections, and generate the collection name from the klass name
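dukedave's fallback (separate collections, one per class) works because both Mongoose and Mongoid let you set the collection name per model, so only the naming convention has to be shared. A minimal sketch of such a convention; the helper name and the pluralization rule are assumptions, not anything either ODM prescribes:

```python
import re

def collection_for(klass_name):
    """Derive a shared collection name from a class name,
    e.g. 'SensorReading' -> 'sensor_readings'. Both ODMs would be
    configured to use this name for their respective model classes."""
    snake = re.sub(r'(?<!^)(?=[A-Z])', '_', klass_name).lower()
    return snake + 's'

print(collection_for('SensorReading'))  # sensor_readings
```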
[04:07:46] <jon_r> dukedave if you can do that why did you ever use the same collection?
[04:08:23] <dukedave> jon_r, well, my situation is very close to this SO question: http://stackoverflow.com/questions/9172834/choosing-mongodb-collections-structure-for-similar-data-structures
[04:09:04] <dukedave> Except my docs have a timestamp and a lat/lng
[04:10:15] <dukedave> And we'll be querying only on those two attributes, then iterating through the results
[04:12:23] <dukedave> But now I'm doing some quick testing locally and it seems that the (collection size) cost of storing a _type (or two as appears would be required) for each doc makes it significantly less attractive. I'm using nearly as much space storing the _type as the data :|
[07:00:37] <resting> any idea what could be the problem when a php findOne doesn't return results but the mongoshell does using the same conditions?
[07:22:01] <Aorion> The only thing I can think of is that the data is added to the DB before the connection completes
[07:22:07] <Aorion> but that should return an error, correct?
[07:22:20] <svm_invictvs> I'm looking at creating an auto-incrementing _id field
[07:29:40] <Aorion> last time I checked, I wasn't an idiot, so I feel like if there are no errors reported by the library, and the database echoes that my client connected, and there are no errors in the transactions, that the database should have changed state
[07:34:49] <Aorion> that was beyond my initial problem
[07:35:29] <Aorion> my initial problem was that I use the createCollection method, then collections() and collectionNames() and neither of the latter will return anything
[07:35:37] <Aorion> but there is no error from createCollection
[07:35:49] <Nodex> you don't need to create a collection
[07:36:03] <Aorion> so if we are opened, and we successfully added a collection, why aren't any of the collection methods working?
[07:36:06] <Nodex> collections are created automatically when inserting a document, if they don't already exist
[07:36:22] <Aorion> okay. let's pretend I am just verifying that I have access to the database
[07:36:27] <Aorion> which has been proving difficult
[07:37:20] <Nodex> I don't know node.js all that well but perhaps "user_db.users" is wrong maybe?
[07:37:23] <Aorion> in this code ->http://pastebin.com/uRkd5taR I create the object which causes mongod to echo some initandlisten messages. Then I call the init() method which creates a collection and throws some default entries in there
[07:39:31] <Aorion> to start, I am under the impression that the datatype of "db.test" in the shell is a collection object for the test collection, is this correct?
[07:39:36] <Nodex> from what I gather you've already selected the DB when you "init"
[07:46:38] <Aorion> I read the docs, and the docs say, that I should open a connection to the db. The docs also say that I can create a collection with a given name. The docs also say that I can read the collection names using collections() or collectionNames()
[07:46:55] <Aorion> the docs also say that errors at any point in that process will be shown by the first parameter of any callback
[07:46:57] <Nodex> perhaps you should do one thing at a time
[07:47:08] <Aorion> i am doing one thing at a time
[07:47:12] <Aorion> i am trying to simply open a connection
[07:47:13] <Nodex> i.e. make sure you can connect, then make sure you can insert
[07:47:30] <Aorion> i know i can connect, I see the echo from mongod
[07:47:31] <Nodex> my advice is to follow the first part of the last function on this page ... http://mongodb.github.io/node-mongodb-native/markdown-docs/insert.html
[07:47:54] <Aorion> look, I appreciate your help, but I don't think you are listening to me
[07:48:01] <Aorion> my problem is not with the docs
[09:10:02] <Siyfion> ron: except now apparently.. lol
[09:10:30] <ron> Siyfion: he's here. trust me. but... is your question related to mongodb, php, mongodb and php or something completely different?
[09:12:14] <Siyfion> basically I was just wondering what the best / fastest way of copying a load of documents is with a minor change (and change in _id).
[09:12:46] <Siyfion> I'm trying to implement a "clone" function for my product_category model,
[09:13:21] <Siyfion> I need it to create a new product category with the new name specified, then copy all the products in the old category into the new one.
[09:15:40] <Siyfion> ron: Ah I think I've found the answer myself.. basically I just need to query the products to clone, delete their _id's and change their category refs, then insert them all back in again
[09:15:56] <Siyfion> (I'm using mongoose, so that made it slightly harder, but I'll use http://mongoosejs.com/docs/api.html#model_Model.create)
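Siyfion's clone recipe (query the documents, drop each `_id`, repoint the category reference, re-insert) can be sketched independently of Mongoose. This is a pure-Python illustration of the document transformation step only; the `category` field name is a guess at his schema, not something stated in the log:

```python
def clone_products(products, new_category_id):
    """Return copies of product docs ready for re-insert: drop _id so the
    driver generates fresh ones, and point each copy at the new category."""
    clones = []
    for doc in products:
        copy = dict(doc)                      # shallow copy; originals untouched
        copy.pop('_id', None)                 # let the driver assign a new _id
        copy['category'] = new_category_id    # hypothetical ref field name
        clones.append(copy)
    return clones

originals = [{'_id': 1, 'name': 'widget', 'category': 'old'},
             {'_id': 2, 'name': 'gadget', 'category': 'old'}]
clones = clone_products(originals, 'new')
print(clones)
```

The cleaned-up list would then go to a bulk insert (`Model.create` in Mongoose, `insert_many` in pymongo).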
[09:34:47] <ExxKA> Hey Guys. I can't figure out how to group on the day and hour of a timestamp extracted from the Id. I can do it from a timestamp field I have created, so I guess I am just missing a small step?
[09:45:40] <BadDesign> ExxKA: You have to construct the OID manually; the steps are: 1) get a timestamp (uint64_t) for the day and hour you are building the query for, 2) get a hex representation of the timestamp, 3) construct the OID by converting the hex representation obtained in the previous step to a string and concatenating 0000000000000000 to it
[09:46:58] <BadDesign> ExxKA: Then you can query for documents in the collection using something like { "_id" : { "$gte" : { "$oid" : "<oid1>" }, "$lte" : { "$oid" : "<oid2>" } } }
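BadDesign's three steps can be sketched concretely. An ObjectId's first 4 bytes (8 hex chars) are a Unix timestamp in seconds, so padding them with 16 zero hex chars yields a synthetic ObjectId usable as a range bound. The example dates are arbitrary:

```python
from datetime import datetime, timezone

def oid_for_timestamp(dt):
    """Build a 24-hex-char ObjectId whose leading 4 bytes encode the given
    UTC time; the remaining 8 bytes are zero, making it a lower bound for
    all ObjectIds generated at or after that instant."""
    seconds = int(dt.replace(tzinfo=timezone.utc).timestamp())
    return format(seconds, '08x') + '0' * 16

lower = oid_for_timestamp(datetime(2013, 4, 30, 9, 0))
upper = oid_for_timestamp(datetime(2013, 4, 30, 10, 0))
print(lower, upper)
# Shell query over that hour's documents:
#   db.coll.find({ _id: { $gte: ObjectId(lower), $lt: ObjectId(upper) } })
```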
[09:47:57] <ExxKA> BadDesign, I don't think I was clear before..
[09:54:10] <ExxKA> That is what I get with the current query
[09:54:45] <diffuse> if i do a batch insert, do the _id's of each document get returned? I am trying to figure out how to retrieve this with the C driver.
[09:56:27] <BadDesign> diffuse: no, you won't get the _ids returned, because the _ids are generated on the client
[10:17:41] <dob_> I am aggregating a collection a using group. I have a field refid and refname. Now I am grouping by refid, but I also want to output refname, but ignore it if it's different and output the first refname found and add it to my group
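dob_'s requirement (group by `refid`, keep the first `refname` seen and ignore differing ones) is what the aggregation framework's `$group` with `$first` does. A sketch using his field names, with a pure-Python model of what the pipeline computes:

```python
# Aggregation pipeline (as the shell would run it): group by refid and
# keep the first refname encountered in each group.
pipeline = [
    {'$group': {'_id': '$refid', 'refname': {'$first': '$refname'}}}
]

# Pure-Python equivalent of what that pipeline computes:
def group_first(docs):
    out = {}
    for d in docs:
        out.setdefault(d['refid'], d['refname'])  # keep first refname seen
    return out

docs = [{'refid': 1, 'refname': 'a'},
        {'refid': 1, 'refname': 'b'},   # differing refname is ignored
        {'refid': 2, 'refname': 'c'}]
print(group_first(docs))  # {1: 'a', 2: 'c'}
```

Note `$first` depends on document order, so in practice a `$sort` stage usually precedes the `$group`.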
[10:54:00] <diffuse> how do you get the last insert id of a document with the C driver? Is it possible with mongo_insert or do you have to run mongo_run_command?
[10:55:00] <algernon> you can't do that reliably, unless you generate the id yourself.
[10:55:38] <algernon> (or if the driver does it, and caches it for you, which is not something the C driver does, iirc)
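algernon's point is that there is no "last insert id" round trip because the client already knows the `_id` it sends. A sketch of client-side id generation; real drivers fill the trailing 8 bytes with machine/pid/counter fields rather than random bytes, so this layout is an assumption for illustration only:

```python
import os
import struct
import time

def new_object_id():
    """Generate a 12-byte (24 hex char) ObjectId-like value client side:
    4-byte big-endian Unix timestamp plus 8 random bytes. The point is
    that the client knows the _id before inserting -- no query needed."""
    return struct.pack('>I', int(time.time())).hex() + os.urandom(8).hex()

doc = {'_id': new_object_id(), 'value': 42}
# after inserting doc, doc['_id'] is already known to the caller
print(doc['_id'])
```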
[12:03:51] <tom_2321> hello, the primary node of my mongodb replicaset gets lots of these messages: ... [conn845090] getmore local.oplog.rs query: { ts: .... Is there something wrong with my oplog or the size of my oplog?
[12:04:46] <tom_2321> also these queries have a response time around ten times higher than normal queries
[12:09:27] <kali> tom_2321: don't worry about it, it's the replication log
[12:42:55] <csengstock> Hi. Is there a low level access to the index structures from within the mongo shell? For example, i want to get the geohash strings for my records (with an attribute indexed by a 2d index). Or even better, i want to query the number of indexed documents, given a particular geohash string. The aim is to use the quadtree structure to efficiently retrieve document distributions over geographic space.
[12:44:06] <csengstock> of course i mean quadtree-like structure. haven't looked for it in depth, but probably the geohash strings are indexed by a btree?
[13:02:06] <aseba> Morning.. I have a question.. I have a mongodb server running. The disk for that server (ubuntu) got full, so I've attached another volume and rsynced all the /var/lib/mongodb to the new volume. I changed the config to load the new directory and ran it again. Everything seemed to be working fine, but now I get "$err" : "Invalid BSONObj size: -286331154 (0xEEEEEEEE) first element: EOO",
[13:02:15] <aseba> a db.repairDatabase() would fix that?
[13:05:48] <johnny_fg09384> sometimes mongostat shows me that one of my mongodb collections is locked over 100%, e.g. 105.8%. what does that mean?
[13:20:56] <ExxKA> Hey BadDesign. I figured out the problem, using map reduce instead of the aggregation framework. Thanks for your help
[13:29:35] <BadDesign> ExxKA: Did you construct your own ObjectID?
[13:30:14] <ExxKA> BadDesign, nope, they are already in the database
[13:30:44] <ExxKA> I used the .getTimestamp() function on the Id to get a Date object I could extract the values from
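The shell's `.getTimestamp()` that ExxKA used simply decodes the leading 4 bytes of the ObjectId. A small equivalent showing where the day/hour group keys come from; the example ObjectId is fabricated from a known timestamp:

```python
from datetime import datetime, timezone

def object_id_timestamp(oid_hex):
    """Equivalent of the shell's ObjectId.getTimestamp(): the first 4 bytes
    (8 hex chars) of an ObjectId are seconds since the Unix epoch."""
    return datetime.fromtimestamp(int(oid_hex[:8], 16), tz=timezone.utc)

ts = object_id_timestamp('517f0980' + '0' * 16)
print(ts, ts.day, ts.hour)  # the day/hour fields are usable as group keys
```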
[14:26:03] <Nodex> we've all heard of cloud computing but has anyone seen this new thing called cloud commuting. It's being pioneered by airlines all over the world
[14:28:21] <Nodex> I think it will revolutionise the way we travel around the world
[14:28:32] <Nodex> and interconnect between countries
[14:42:58] <Psi|4ward> Hey there, any German mongo expert here for some (paid) consulting?
[15:02:22] <Mortah> and, pymongo shows it as {u'$set': {u'sources': [51919L, 47756L]}}
[15:02:55] <Mortah> and I can only think its because $set has been specified on the document multiple times... and then somewhere in the client its kindly mangling it
[15:03:47] <Nodex> $set can only be used once per update
[15:04:08] <Nodex> so I suggest you look at your python code and see how it's being set 3 times
[15:04:09] <Derick> Mortah: that's a bug in MongoDB - which version are you running?
[15:07:53] <Mortah> it looks to me like the data is there though
[15:08:11] <Mortah> because depending on which way I query it I get one key or the other (guess it depends on internal ordering of the map)
[15:11:45] <Nodex> anyone know of a neat hipster trick I can apply to a 3 part compound index where I can always query at least 2 of the fields (one is static - cid) and the other 2 vary - (uid, aid), I've been racking my brains but it always results in 2 indexes
[15:13:04] <Derick> store uid/aid duplicate in another field - f.e. uaid?
[15:20:08] <Mortah> so... if I managed to insert a document into mongo with the same key specified twice {bob: 1, bob:2} I have no way to query for it?
[15:20:09] <Nodex> solr now has awesome json support so it's really easy to map mongo -> solr now :D
[15:20:44] <Nodex> Mortah : will that query not fail?
[15:21:12] <Mortah> as I understood it, bson allows docs with the same key multiple times... so could in theory get one into mongodb
[15:21:57] <Derick> Mortah: most languages/drivers however do not support that
[15:22:10] <Derick> Mortah: what do you want to query for?
[16:04:51] <bartzy> Should I save it as a hex string, or as an int ?
[16:05:20] <bartzy> I mean - probably as an int it will be smaller and better for indexing ? or maybe it doesn't matter a lot ?
[16:05:40] <bartzy> Because it will be slightly more comfortable for me to use the hex string in _id , and not its number representation.
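One way to reconcile bartzy's two preferences is to store the compact integer and render the hex string at the edges. A sketch; the fixed 8-char width is an assumption about his id format, not something stated in the log:

```python
# Round-trip between the hex-string form (comfortable to read) and the
# integer form (smaller to store and index): store the int, display the hex.
def to_storage(hex_id):
    """Parse the hex-string id into the integer actually stored in _id."""
    return int(hex_id, 16)

def to_display(int_id, width=8):
    """Render the stored integer back as a zero-padded hex string."""
    return format(int_id, '0{}x'.format(width))

stored = to_storage('00ff3a7c')
print(stored, to_display(stored))  # 16726652 00ff3a7c
```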
[16:57:12] <JoeyJoeJo> Is there a way in the php driver to get the number of results in the cursor without iterating over it first?
[17:00:14] <jeffwhelpley> is it best practice to try and keep documents flat or logically group fields into subdocuments when it makes sense. Or does it not matter at all either way?
[18:13:10] <mfletcher> Im setting up a QA environment. For mongodb, Ive cloned one of our production servers (its a VM), which is part of a replica set. The QA server is going to be a single node, it won't be a part of the production replica set. To ensure this, all I need to do is make sure that the replSet variable in my mongod.conf is commented out. Is this accurate?
[19:34:25] <dcholth> I think I figured it out... by breaking it down into smaller functions that call a callback, and when all functions finished, it finally sends the response...
[19:34:28] <leifw> I probably can't help much but I can bounce ideas
[19:52:32] <pwelch> hey everyone, I was wondering if someone could clear something up for me with Replicasets
[19:53:21] <pwelch> I understand a new deployment should use replicasets and not master/slave replication. My question is what happens if a secondary gets a write
[19:53:36] <pwelch> does the mongo cluster forward that to the node elected as primary?
[19:53:47] <pwelch> is that something that is handled with the client side software?
[19:59:31] <leifw> pwelch: I know the slave server will protect itself and throw an error, but I don't know how drivers handle that error
[20:00:29] <pwelch> ok, thats what Im confused about. The docs mention writes should go to the primary. but what is the common way everyone is doing that? load-balancer? If it changes then the new primary needs to add itself to the load balancer
[20:51:00] <wintron_> hey there, anyone have experience using MongoDB with Vagrant? trying to set up port forwarding between MongoDB on my Vagrant VM and host machine, but can't connect.
[20:52:13] <wintron_> I have bind_ip set to 0.0.0.0 but no joy. any ideas?
[21:07:25] <tphummel> wintron_: a while back i set up a cluster of three vagrant vm's. each with a line like: m.vm.network :hostonly, "192.168.33.10"