[00:02:15] <mchammer> hi! i am new to mongodb and have a question about storing large amounts of data via a nodejs shell script. on the client i have a data structure that looks like this: a 2d array of chunks (100x100), each chunk containing another 2d array of 64 * 16 "tiles", and each tile holds some strings and numbers. what would be the best way to save this in mongodb?
[00:03:47] <mchammer> my (noobish) try was to use mongoose, create some schemas and save it in this exact format into mongo, but it runs out of memory and/or exceeds the maximum BSON size
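The failures mchammer describes are expected: a single BSON document may not exceed the 16 MB limit, and a 100x100 grid of 64x16-tile chunks stored as one document blows past it. A common layout is one document per chunk, keyed by its grid coordinates, so each document stays small. A minimal sketch of the reshaping (field names `x`, `y`, `tiles` are illustrative, not from mchammer's code):

```javascript
// Sketch: emit one document per chunk instead of one giant document for
// the whole grid, so no single document approaches the 16 MB BSON limit.
function chunksToDocuments(grid) {
  const docs = [];
  for (let x = 0; x < grid.length; x++) {
    for (let y = 0; y < grid[x].length; y++) {
      // each document carries its grid coordinates plus its tile payload
      docs.push({ x: x, y: y, tiles: grid[x][y] });
    }
  }
  return docs;
}
```

Each document can then be written individually, e.g. in batches with the Node.js driver's `insertMany`, instead of saving the entire grid in one mongoose save.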
[03:47:17] <miskander> Anyone have any suggestions or documentation as to how to handle data migrations for an RoR app that uses MongoDB. I am dropping a collection and moving the data into another collection.
[05:49:25] <mattbillenstein> of course - I meant to ask why the bare string form stopped working...
[05:49:34] <mattbillenstein> this seems recent to me
[06:07:03] <_aegis_> I never got it working on 2.x
[06:07:13] <_aegis_> don't remember which versions I tried it on
[06:07:28] <_aegis_> it's required in the javascript console too
[06:08:56] <synchrone> Hi all! I was wondering, what happens if in my .NET application I try to use MongoDBFileStream after the IDisposable RequestStartResult object was disposed?
[06:18:32] <mattbillenstein> _aegis_: i see -- I used to do this all the time, so it has a subtle way of breaking things if this actually changed
[09:15:55] <yarco> always get "db object already connecting, open cannot be called multiple times"…when using nodejs with mongodb
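That error means `open()` was invoked a second time on the same Db object. A common pattern is to open once and share the handle everywhere, e.g. by caching the result of the first call. A sketch, with `openFn` as a generic stand-in for the driver's connect/open call:

```javascript
// Sketch: guard against calling open() twice by caching the result of the
// first call. Every caller goes through connectOnce(); only the first
// invocation actually opens the connection.
function makeConnectOnce(openFn) {
  let cached = null;
  return function connectOnce() {
    if (cached === null) {
      cached = openFn();   // open exactly once
    }
    return cached;         // later calls reuse the same handle
  };
}

// usage with a fake opener to illustrate the behaviour
let opens = 0;
const connectOnce = makeConnectOnce(() => { opens++; return { db: "ok" }; });
connectOnce();
connectOnce();
// opens is 1: the second call reused the cached handle
```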
[10:05:53] <salentinux> Guys, is it possible using the php driver to execute custom code that returns array of data? I'm able to get a scalar value but not an array of data. For example not able to access to result of this code execution: http://pastebin.com/r52jv7Gh
[11:52:42] <arussel> is there a way to add multiple document in a single query ?
[11:53:21] <arussel> I get as json an array of doc, I would like to insert it in a collection without having to deserialize it
[11:54:37] <kali> arussel: you'll have to deserialize it anyway, mongodb does not deal with json but bson
[11:55:14] <arussel> is it just about "a":"b" -> a:"b" ?
[11:56:26] <kali> nope, bson is a binary encoding
[11:58:35] <arussel> I can't really deserialize, I don't know what the doc are, but I could request that the sender send valid bson. Is Bson valid Json ?
[12:06:13] <arussel> it is what I usually do. But I also have a use case of receiving collections as json and having to create the collection and inserting the docs.
[12:06:40] <arussel> Then I should be able to run queries on this collection, but I have no idea what the docs are.
[12:08:14] <kali> what's the problem with not knowing what the docs are ?
[12:10:07] <arussel> knowing what the doc is, I can create a MongoRecord and read json, write to bson
[12:11:54] <arussel> but if I can translate from Map[String, Any] to Bson, that should do
[12:12:17] <kali> you want a deserialization, not odm mapping
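kali's point in code: the application only needs `JSON.parse`, because the driver performs the object-to-BSON encoding itself, and since `insertMany` accepts an array, a whole batch of unknown documents can go in one call (the `insertMany` call is shown as a comment, not executed here):

```javascript
// Sketch: arbitrary JSON documents received over the wire only need to be
// parsed into plain JS objects; the driver handles BSON encoding.
const payload = '[{"a": "b"}, {"a": "c", "n": 1}]';
const docs = JSON.parse(payload);   // plain objects, schema unknown
// collection.insertMany(docs)      // one call inserts the whole array,
//                                  // and the driver serializes to BSON
```

So "is BSON valid JSON?" has the answer: no, BSON is a binary wire format you never construct by hand; you hand the driver parsed objects and it encodes them.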
[12:38:27] <igotux> is it possible to log all the requests processed by mongo server to some file ?
[12:39:46] <kali> igotux: this is not exactly what you're asking for, but it may help http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
[12:40:40] <kali> igotux: mongosniff may also help
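kali's profiler suggestion looks roughly like this from the mongo shell (a config fragment against a live server, not runnable standalone; profiling levels are 0 = off, 1 = slow operations only, 2 = all operations):

```javascript
// In the mongo shell: capture every operation on the current database
// into the system.profile collection, then inspect the most recent ones.
db.setProfilingLevel(2)
db.system.profile.find().sort({ ts: -1 }).limit(5)
```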
[13:13:48] <FrankJLhota> I cannot use mongo C driver 0.6 because of a leak, so I tried to upgrade to 0.7, only to discover that it did not support adding custom metadata to grid files.
[13:14:47] <algernon> you can get around that by manually updating the document in the fs.files collection, I suppose. though, that's not too optimal.
[13:15:26] <FrankJLhota> We actually use that feature, so we are seriously contemplating the creation of a 0.6 / 0.7 hybrid to bring that feature back.
[13:16:15] <algernon> fwiw, my unofficial lib supports adding custom meta-data to gridfs files O:)
[13:16:48] <FrankJLhota> Was that a mistake on the part of the mongo C driver team, or did they want to get rid of custom metadata for some reason? Who should I ask?
[13:18:04] <algernon> I'd suspect it was a mistake.
[13:20:02] <FrankJLhota> I think we may go the route of re-inserting the 0.6 implementation of this feature into 0.7. I would be happy to upload the resulting code so that this feature can be restored in the next revision.
[13:22:53] <FrankJLhota> The parameter for custom metadata was removed from the functions gridfile_writer_done() and gridfs_insert_file().
[13:26:27] <algernon> are you sure that was part of any C driver?
[13:26:57] <algernon> I mean... I went through the history of src/gridfs.h, and gridfile_writer_done() only ever had one single parameter
[13:27:21] <Dededede4> kali : do you only need to call ensureIndex once per collection?
[13:29:15] <algernon> FrankJLhota: the v0.6 tag has MONGO_EXPORT int gridfile_writer_done( gridfile *gfile ); in src/gridfs.h
[13:35:28] <FrankJLhota> This is odd; it was part of the 0.6 mongo C driver that we downloaded months ago, which leaves me wondering how custom metadata could ever have been added via the C driver.
[13:37:59] <FrankJLhota> Is the C driver the only one where adding custom metadata is an issue?
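algernon's earlier workaround (manually updating the document in the fs.files collection) would look roughly like this in the mongo shell; the filename and metadata values are illustrative, and this is a sketch of the workaround rather than a supported driver API:

```javascript
// After the grid file is written, attach custom metadata by updating its
// entry in the fs.files collection directly (filename is a placeholder).
db.fs.files.update(
  { filename: "report.dat" },
  { $set: { metadata: { owner: "frank", version: 3 } } }
)
```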
[18:56:04] <rekibnikufesin> and that should sort it for you as well
[19:27:32] <pringlescan> Hello all, I'm trying to decide between SQL and MongoDB for a project. It's a testing platform for schools that tracks metrics for different skills (e.g. multiplication, long division, etc). There are a ton of reporting requirements SQL will be well-suited for… but the kind of rapid changes we'll want to make and the table layout (i.e. test -> questions -> answers -> grades) would quickly create tons of rows. Any input?
[19:29:05] <wereHamster> pringlescan: you'll either have tons of rows or tons of documents.
[19:29:31] <wereHamster> ask yourself how you will query the data, and then pick sql or a document oriented database based on which one is more suitable
[19:30:00] <pringlescan> I don't see scalability being an issue. I just don't want things to be too rigid. I can't anticipate the way we'll query it yet, it needs to be easy to create new reporting types.
[19:30:55] <wereHamster> does the data schema change a lot or only the reporting types?
[19:32:06] <jollyBoy> Would anyone please be able to help me?
[19:32:31] <wereHamster> have you asked your question?
[19:33:25] <jollyBoy> I'm struggling to use the mongorestore to upload my collection on the local machine to mongoLabs
[19:34:13] <jollyBoy> I tried this but no luck: mongorestore -h ds047057.mongolab.com:47057 -d my_DB -c my_Coll -u xxxx -p xxxx /dump/my_database/my_collection
[19:37:04] <pringlescan> wereHamster, initially the schema, but once that settles down, reporting types will be the more frequent change
[19:40:05] <arussel> is this: "foo": /^K/ valid json ?
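To answer arussel's question: no. `/^K/` is a JavaScript regex literal, which the JSON grammar does not allow; MongoDB's extended JSON spells the same query with a `$regex` operator instead. A quick check:

```javascript
// A bare regex literal is rejected by JSON.parse...
let bareRegexIsJson = true;
try {
  JSON.parse('{"foo": /^K/}');
} catch (e) {
  bareRegexIsJson = false;   // SyntaxError: not valid JSON
}

// ...while the extended-JSON $regex form parses fine.
const extended = JSON.parse('{"foo": {"$regex": "^K"}}');
```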
[19:46:27] <zastern_> Is it possible to handle replica set elections in a way that is transparent to my application (excluding the brief downtime)? E.g. not having to have the application try to hit mongo1, then if it fails, hit mongo2, etc. Is a mongos what I need?
[19:47:33] <wereHamster> zastern_: mongos is for sharding. And the drivers should automatically reconnect when a new master is elected
[19:47:49] <wereHamster> the driver may throw an exception or return an error if an election is underway
[19:47:55] <zastern_> wereHamster: but . . . reconnect to what, that's the thing. how does the driver know about my multiple mongo servers
[19:48:04] <wereHamster> your application should handle that exception/error
[19:49:35] <wereHamster> your devs have to handle errors/exceptions anyway
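wereHamster's point, that the driver rather than the application tracks the replica set, is usually expressed through a seed-list connection string: list several members and name the set, and the driver discovers the current primary and follows elections on its own. A minimal sketch of building such a URI (host names, set name, and database are illustrative):

```javascript
// Sketch: drivers discover the primary from a seed list in the connection
// string, so the application never hard-codes "try mongo1, then mongo2".
function replicaSetUri(hosts, replSet, db) {
  return "mongodb://" + hosts.join(",") + "/" + db + "?replicaSet=" + replSet;
}

const uri = replicaSetUri(
  ["mongo1.example.com:27017", "mongo2.example.com:27017"],
  "rs0",
  "mydb"
);
// → "mongodb://mongo1.example.com:27017,mongo2.example.com:27017/mydb?replicaSet=rs0"
```

The application still has to handle the transient error thrown while an election is underway, as noted above, but it never needs to know which member is primary.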
[19:51:00] <jollyBoy> Why does "mongorestore -h ds047057.mongolab.com:47057 -d my_DB -c my_Coll -u xxxx -p xxxx /dump/my_database/my_collection" give an error and not let me import the collection in my dump folder?
[19:51:39] <wereHamster> jollyBoy: we can only guess. And guessing without seeing the error is difficult.
[19:53:16] <jollyBoy> This is the error: boost::filesystem::file_size: No such file or directory: "/dump/my_database/my_collection.bson"
[19:53:47] <wereHamster> mongorestore presumably wants the bson file
[19:53:51] <wereHamster> you're giving it a directory
[19:55:56] <jollyBoy> The bson file is in the dump directory but it says no such file or directory
[19:56:18] <wereHamster> so why don't you give mongorestore the full path to that file?
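The `boost::filesystem::file_size: No such file or directory` error suggests two likely problems with jollyBoy's command: the path points at a directory rather than the .bson file, and the leading slash makes it absolute, while a dump taken in the current directory lives under `./dump`. A sketch of the corrected invocation, keeping the placeholder credentials from the conversation:

```shell
# Point mongorestore at the .bson file itself, using a path relative to
# the directory where the dump was taken (credentials are placeholders).
mongorestore -h ds047057.mongolab.com:47057 -d my_DB -c my_Coll \
  -u xxxx -p xxxx dump/my_database/my_collection.bson
```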
[19:56:38] <manflo> Looking for info on mongodb + glusterfs...
[19:57:07] <manflo> does it make sense to compare glusterfs vs GridFS?
[20:01:50] <jollyBoy> Still doesn't recognise the directory....I'm using the command for import from MongoLabs to do this: "mongorestore -h ds047057.mongolab.com:47057 -d blah -c my_Coll -u <user> -p <password> my_Coll.bson"
[20:02:37] <wereHamster> jollyBoy: and the error is.. ?
[20:04:44] <zastern_> wereHamster: ok, so what if I was sharding across two replica sets, how would you handle exceptions then? since it's behind a mongos the devs don't even really know about it right
[20:05:32] <wereHamster> it depends on the driver how it handles that situation
[20:07:17] <arussel> anyone know of a java lib that can parse a mongo query string into a DBObject ?
[20:08:17] <jollyBoy> So based on the import command given by monogLab: I did "mongorestore -h ds047057.mongolab.com:47057 -d my_DB -c my_Coll -u xxxx -p xxxx /dump/my_database/my_collection". But I get the No such file or directory error
[20:08:29] <jollyBoy> Even if I include the full pathname