[01:48:20] <GeraldDev> hello. In order to have a pseudo-auto-incrementing ID, I created a field called "my_id" for each record in my collection. It just converts the last 3 bytes of the default ObjectId to an int. I am using the PHP mongo driver. The problem is... if I wait a few hours, it seems the global counter drops back to 40. Anyone know why this might be the case?
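The last 3 bytes of an ObjectId are a per-process counter that each driver process initializes to a random value at startup; since PHP processes are typically short-lived, the counter resets rather than growing monotonically, which would explain the behavior GeraldDev sees. A common alternative is an atomically incremented counter document. A minimal pymongo sketch, with hypothetical database and collection names:

    from pymongo import MongoClient, ReturnDocument

    db = MongoClient()["mydb"]  # hypothetical database

    def next_my_id():
        # Atomically increment and fetch a counter document;
        # upsert=True creates it on first use.
        counter = db.counters.find_one_and_update(
            {"_id": "my_id"},
            {"$inc": {"seq": 1}},
            upsert=True,
            return_document=ReturnDocument.AFTER,
        )
        return counter["seq"]

    db.records.insert_one({"my_id": next_my_id()})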
[03:15:03] <edrocks> how do you use $centerSphere on geojson objects? do you just pass in the coordinates part or would that end up using the legacy coordinate pairs?
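No one answered edrocks here, but $centerSphere is used inside $geoWithin and takes a plain [longitude, latitude] pair plus a radius in radians, and it works against fields storing GeoJSON points; you pass just the coordinate pair, not the whole GeoJSON object. A hedged pymongo sketch, with hypothetical collection and field names:

    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["places"]  # hypothetical names
    coll.create_index([("loc", "2dsphere")])

    # Documents whose GeoJSON "loc" point lies within ~10 miles of the
    # center; $centerSphere takes [lng, lat] plus a radius in radians
    # (distance divided by the Earth's radius, ~3963.2 miles).
    cursor = coll.find({
        "loc": {"$geoWithin": {
            "$centerSphere": [[-73.97, 40.77], 10 / 3963.2]
        }}
    })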
[04:11:47] <SethT> anybody have experience using mongo mapreduce on large datasets to dedupe
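SethT's mapReduce question also went unanswered; one common alternative for deduplication on large collections is an aggregation that groups on the candidate key and collects the _ids of duplicates. A sketch in pymongo, assuming a hypothetical "email" dedupe key:

    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["people"]  # hypothetical names

    # Group on the dedupe key and collect the _ids that share it;
    # any group with count > 1 is a set of duplicates.
    pipeline = [
        {"$group": {"_id": "$email",
                    "ids": {"$push": "$_id"},
                    "count": {"$sum": 1}}},
        {"$match": {"count": {"$gt": 1}}},
    ]
    for group in coll.aggregate(pipeline, allowDiskUse=True):
        keep, *dupes = group["ids"]  # keep one document per key
        coll.delete_many({"_id": {"$in": dupes}})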
[07:06:25] <nukeu666> how do i cancel a command when it's stuck in ... ?
[07:25:36] <kali> nukeu666: press return three times
[09:59:14] <Industrial> I'm trying to find what the cursor option actually does but
[09:59:17] <Industrial> "Optional. Specify a document that contains options that control the creation of the cursor object."
[10:01:03] <Industrial> basically I'm just wondering if I need to configure anything at all there.
[10:56:34] <kali> Industrial: it basically depends on how big a result you're expecting
[10:57:02] <kali> Industrial: if it's significantly smaller than 16MB, no need to bother with the cursor
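For context on kali's point: the cursor option on aggregate makes the server stream results back in batches rather than as one document, which matters once results approach the 16MB single-document limit. In pymongo, aggregate always returns a cursor, and the batch size can be passed through; a minimal sketch with hypothetical names:

    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["orders"]  # hypothetical names

    # pymongo's aggregate always returns a cursor; batchSize only
    # controls how many documents come back per round trip, which
    # matters for large result sets.
    cursor = coll.aggregate([{"$match": {"status": "open"}}], batchSize=100)

In the shell, the option Industrial quoted would be written as db.orders.aggregate(pipeline, {cursor: {batchSize: 100}}).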
[12:13:48] <CJ_> Is this correct? Range operations for paging will only work if I have an indexed field that is sorted in the way that I want to query?
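CJ_ has it roughly right: range-based paging relies on an index over the field you sort on, and you page by remembering the last value seen instead of using skip(). A sketch, assuming a hypothetical events collection paged on _id:

    from pymongo import MongoClient, ASCENDING

    coll = MongoClient()["mydb"]["events"]  # hypothetical names

    def next_page(last_id=None, page_size=20):
        # Range-based paging: remember the last _id seen and ask for
        # everything after it, in the order the index already provides
        # (_id is indexed by default; any other field needs its own index).
        query = {"_id": {"$gt": last_id}} if last_id is not None else {}
        return list(coll.find(query).sort("_id", ASCENDING).limit(page_size))

    first = next_page()
    second = next_page(last_id=first[-1]["_id"]) if first else []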
[13:43:14] <Mikee> Hi - I'm having an exception thrown with the PHP drivers about an overflow. is this the right place to ask about php drivers?
[13:43:18] <Mikee> " MongoCursorException: localhost:27017: Runner error: Overflow sort stage buffered data usage of 33556006 bytes exceeds internal limit of 33554432 bytes"
[13:43:26] <Mikee> I'm not too sure how to solve it, and google isn't being much help
[13:43:53] <Derick> Mikee: that means that the sort job on the server needed more memory than was available for it
[13:44:03] <Derick> it's a server message, not a driver message
[13:44:38] <Mikee> I'm just sorting by a timestamp - and this collection isn't too big yet. What can I do to avoid this happening?
[13:44:50] <Mikee> (there's only about 4500 documents in the table)
[13:44:54] <Derick> make sure you use an index for sorting
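A sketch of Derick's suggestion in pymongo, assuming the sort field is called "timestamp" (the actual field name is an assumption):

    from pymongo import MongoClient, DESCENDING

    coll = MongoClient()["mydb"]["mycoll"]  # hypothetical names

    # With an index on the sort field, the server walks the index in
    # order instead of buffering the whole result in memory, so the
    # 32MB in-memory sort limit Mikee hit no longer applies.
    coll.create_index([("timestamp", DESCENDING)])
    docs = coll.find().sort("timestamp", DESCENDING)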
[14:54:20] <pmercado> so, there are two databases, one for data and one for files (gridfs); gridfs uses two collections: one for the file data and another for metadata
[14:55:55] <Derick> gridfs is not always necessary, only if you use files > 15.9MB or so
[14:56:52] <pmercado> the only way to associate an image with a document is by making a reference to gridfs? I mean... there is no vice-versa reference, because gridfs is only "a container"
[14:58:13] <pmercado> the vice-versa relation as you'd think of it in a relational db
[14:58:17] <Derick> well, I suppose you can store your document in the meta data
[14:59:53] <pmercado> and if a file is about 19MB, will it be inserted anyway using GridFS, or will the engine throw an error and not insert the file?
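GridFS exists precisely because single BSON documents are capped at 16MB: it splits a file into chunks, so a 19MB file stores fine through GridFS, while a plain insert of that much data would be rejected. Derick's metadata suggestion and the large-file case together, sketched with pymongo's gridfs module and hypothetical names:

    import gridfs
    from pymongo import MongoClient

    db = MongoClient()["mydb"]  # hypothetical database
    fs = gridfs.GridFS(db)

    # GridFS splits the file into chunks, so sizes over the 16MB BSON
    # document limit are fine; the metadata field can carry a reference
    # back to the owning document, as Derick suggests.
    with open("photo.jpg", "rb") as f:
        file_id = fs.put(f, filename="photo.jpg",
                         metadata={"owner_doc": "my-doc-id"})  # hypothetical owning _id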
[15:08:30] <pmercado> from http://docs.mongodb.org/manual/reference/mongodb-extended-json/#binary :
[15:08:52] <pmercado> "<t> is the hexadecimal representation of a single byte that indicates the data type." <--- where can I find reference of "single byte that indicates data type"?
[15:09:36] <Derick> http://bsonspec.org/spec.html (search for "subtype")
[16:49:59] <pmercado> An array of bytes (byte[]) as a value will automatically be wrapped as a Binary type. Additionally the Binary class can be used to represent binary objects, which allows to pick a custom type byte.
[16:50:09] <pmercado> it's in the Java driver docs: easy, but hard to find
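The pymongo equivalent of what pmercado found in the Java docs, with hypothetical names (the subtype byte values come from the BSON spec linked above):

    from bson.binary import Binary
    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["blobs"]  # hypothetical names

    # Binary wraps raw bytes and lets you pick the subtype byte:
    # 0x00 is the generic default, 0x80 and above are user defined.
    payload = Binary(b"\xde\xad\xbe\xef", subtype=0x80)
    coll.insert_one({"data": payload})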
[17:11:42] <Fender> hi there, I am in the mongo shell and I need to add one document ("b") to the collection that is exactly like another document ("a") in the same collection, but with one value changed. Can I do this from the mongo shell right away?
[17:12:59] <Fender> something like db.getCollection("C").find({"symbol":"a"})... <and then somehow change "a" to "b" and insert it>
[17:13:34] <Fender> I know there is only one doc with symbol "a", just in case
[17:25:12] <Fender> it's so easy, you just have to find the right post. Thanks for ehm your presence :)
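Fender didn't paste the answer they found, but the usual pattern is: fetch the document, drop its _id so the server assigns a fresh one, change the value, and reinsert. In pymongo terms (the shell version is the same idea with findOne and delete doc._id):

    from pymongo import MongoClient

    coll = MongoClient()["mydb"]["C"]  # collection name from Fender's snippet

    doc = coll.find_one({"symbol": "a"})
    doc.pop("_id")        # let the server assign a fresh ObjectId
    doc["symbol"] = "b"   # the one value to change
    coll.insert_one(doc)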
[18:19:01] <omus> Can anyone help me with a mongoexport problem I'm having?
[18:21:16] <jiffe98> I just had a disk error and now that replica is showing '"errmsg" : "syncThread: 10334 BSONObj size: -1717986919 (0x99999999) is invalid. Size must be between 0 and 16793600(16MB) First element: ...' in rs.status()
[18:21:32] <jiffe98> any way to skip whatever entry is causing this problem?
[21:57:59] <ranman> Guest78521: what do you mean by that? to sort on something else?
[21:58:26] <ranman> Guest78521: you can reinsert the documents in a different order or sort on something else, as far as I'm aware that's your only choice.
[21:59:18] <Guest78521> I would like to get my data in descending order of insertion
[21:59:34] <Guest78521> Now the default is ascending
[22:00:10] <Guest78521> without specifying a sort option in my query
[22:05:21] <Guest78521> Is it possible to set it by default?
[22:11:17] <ranman> Guest78521: I don't know what you mean by that but probably not
[22:11:28] <ranman> Guest78521: just within the shell?
[22:12:04] <Guest78521> Meaning every time I do a query, even if I don't specify a sort(), it will return the result sorted by natural -1
[22:14:38] <Guest78521> I'm using a capped collections
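Capped collections preserve insertion order, so a reverse natural sort returns the newest documents first; consistent with ranman's answer, there is no server-side setting to make that the default, so each query has to ask for it. A pymongo sketch with a hypothetical capped collection:

    from pymongo import MongoClient, DESCENDING

    coll = MongoClient()["mydb"]["log"]  # hypothetical capped collection

    # Capped collections preserve insertion order, so a reverse natural
    # sort returns the most recently inserted documents first; it must
    # be requested explicitly on every query.
    newest_first = coll.find().sort("$natural", DESCENDING)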
[23:29:24] <daidoji> Guest70926: was it sorted when you inserted it?
[23:29:52] <daidoji> also, anyone here have experience with bulk inserts and have thoughts on best workflow for dealing with them?
[23:31:04] <daidoji> basically, I'm using a field as a key that gets updated every once in a while so my bulk insert operation will throw a DuplicateKeyError
[23:31:23] <daidoji> but what I'd like to do is get all those exceptions and save those records in another file in my script for review later
[23:31:50] <daidoji> but the pymongo documentation and mongo docs look like it'll only return one of my errors and not all the errors for a given bulk insert
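For what daidoji describes, pymongo's insert_many with ordered=False keeps going past failures and reports every failed document in BulkWriteError.details["writeErrors"], not just the first (an ordered insert does stop at the first error, which may be the behavior the docs describe). A sketch with hypothetical names:

    from pymongo import MongoClient
    from pymongo.errors import BulkWriteError

    coll = MongoClient()["mydb"]["items"]  # hypothetical names

    docs = [{"_id": 1}, {"_id": 1}, {"_id": 2}]  # second is a duplicate key

    try:
        # ordered=False attempts every document instead of stopping at
        # the first failure, so all duplicates appear in the error details.
        coll.insert_many(docs, ordered=False)
    except BulkWriteError as bwe:
        with open("rejects.log", "a") as out:
            for err in bwe.details["writeErrors"]:
                # err["index"] points back into docs; err["errmsg"] says why.
                out.write(repr(docs[err["index"]]) + "\n")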