[00:08:00] <SQLDarkly> Taking post: http://pastebin.com/CrrDsxNv into consideration. I'm getting the following error: >ArgumentError: wrong number of arguments (0 for 2)< when attempting to >Node.create<. Any help or advice would be appreciated. This is using MongoMapper.
[00:34:33] <Cygnus_X> can anyone help me with a query problem?
[01:19:12] <progolferyo> has anyone here had experience with aggregation and $group in a sharded cluster? I'm running a fairly simple $group command and the results never find any total value greater than 1. here is my code: https://gist.github.com/69de51047a7307b0efad
[01:20:10] <progolferyo> the only thing I can think of is that the group is only grouping per shard and not across the whole cluster
[02:11:57] <progolferyo> does anyone know if aggregation supports running things on secondaries? I'm having trouble getting anything to run on a secondary, even when I do setSlaveOk
[03:12:56] <timah> in what ways am I able to interact with multi-dimensional arrays when using the aggregation framework?
[03:16:48] <timah> I have a field with a multi-dimensional array as its value, similar to this: [ [ 0, 0, 0 ], [ 0, 0, 0 ] ].
[03:19:56] <timah> I'm looking to aggregate these values as such… $sum [0][0] across all documents… $sum [0][1] across all documents… etc.
[03:20:14] <timah> but also $sum [0] across all documents.
[03:24:25] <IAD> timah: try to $unwind : http://www.mongodb.org/display/DOCS/Aggregation+Framework+-+$unwind
[03:25:15] <timah> IAD: thank you… I've looked at $unwind as well, but I'm unable to determine which element I'm dealing with, except for the first or last.
[03:32:32] <timah> is there a way to either a) determine the iteration/element index of the $unwind operation, or b) access array/sub-array elements using dot notation?
[03:32:44] <timah> I know I can do this very easily with map-reduce.
[03:34:53] <timah> it just feels like I'm missing something with regard to the aggregation framework.
[03:48:42] <IAD> timah: can you add an index into nested arrays like { 1 : { 1:0, 2:0, 3:0 }, 2 : {1:0, 2:0, 3:0 } }?
[03:49:52] <timah> well, I could, at the expense of storing unnecessary keys.
[03:55:08] <timah> I mean, as of this very moment the fastest, most efficient method for aggregating this data is to retrieve the date-specific range of data and roll it up in the application context.
[04:00:39] <timah> could I do something like combine $addToSet with $each and $inc to +1 for each $unwind and provide that positional info I'm looking for?
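For reference, the double-$unwind approach IAD points toward can be sketched as a pipeline. The field name `matrix` and the document shape below are hypothetical, and the sketch confirms rather than solves timah's complaint: it yields no element index.

```python
import json

# Hypothetical collection whose documents look like
# {"matrix": [[0, 0, 0], [0, 0, 0]]} (field name "matrix" is made up).
# Unwinding twice flattens the nested arrays into one document per cell;
# the $group then sums every cell across all documents. Nothing in the
# result says WHICH cell a value came from, which is the limitation
# discussed above.
pipeline = [
    {"$unwind": "$matrix"},   # one document per row
    {"$unwind": "$matrix"},   # one document per cell
    {"$group": {"_id": None, "total": {"$sum": "$matrix"}}},
]
print(json.dumps(pipeline))
```

With a driver this would run as something like `db.coll.aggregate(pipeline)`; summing one specific cell such as `[0][1]` remains the case map-reduce handles more directly here.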
[05:13:43] <alex__> hello, how do I disable journaling under Ubuntu (MongoDB installed from the repository)?
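For the Ubuntu package, journaling is controlled from the config file read by the init script; the path below is the stock one, so verify it on your system:

```
# /etc/mongodb.conf — old-style config format used by the Ubuntu package
nojournal = true
```

Restart the service afterwards (`sudo service mongodb restart`). Disabling the journal trades crash safety for disk and startup savings, so it is rarely advisable on production data.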
[07:51:10] <oskie> it is more or less recommended to run mongod (data), mongod (cfg) and mongos, and perhaps an arbiter for a different replica set, on the same machine in sharded configs,
[07:51:31] <oskie> but not two data-bearing mongods on the same server in production environments
[07:54:45] <vr__> I'm currently seeing that my locked db is often reaching 50-60%
[07:54:52] <vr__> while iostat doesn't indicate there is a big problem w/ the disk
[07:55:11] <vr__> that means to me that the database level lock is potentially the problem
[08:56:07] <oskie> vr__: I don't know about the locking issue... 50-60% is a lot. maybe it would be worthwhile to look into what is locking it
[09:00:40] <TeTeT> hi, I'm still struggling with deleting a dbref from an array - http://pastebin.ubuntu.com/1572169/
[09:01:24] <TeTeT> in the meantime I tried just removing an ObjectId, and that worked with: db.arr.update( {'_id': 4}, { $pull: { users: new ObjectId("510634355fc223c57bd919b3") } } )
[09:02:28] <TeTeT> it seems I cannot get the syntax for the DBRef right - I tried, among others, $pull: { "users.$id": new ObjectId("510299093004e51abdd65c9d") } OR $pull: { users: DBRef("User", new ObjectId("510299093004e51abdd65c9d")) }
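For what it's worth, a DBRef is persisted as an ordered subdocument with `$ref`/`$id` keys, and `$pull` matches array elements against that whole subdocument. The sketch below shows the raw shape (the id is the one from TeTeT's paste; the `{"$oid": ...}` form stands in for a real ObjectId):

```python
import json

# A DBRef is stored as {"$ref": <collection>, "$id": <id>} (plus "$db"
# if one was saved). $pull compares array elements against this whole
# subdocument, so the shape, field order, and any "$db" field must equal
# what is actually stored. Fetching one array element with findOne()
# first shows the exact shape to match.
update = {
    "$pull": {
        "users": {"$ref": "User", "$id": {"$oid": "510299093004e51abdd65c9d"}}
    }
}
print(json.dumps(update))
```

From a driver you would build the element with its DBRef type (e.g. PyMongo's `bson.DBRef("User", ObjectId(...))`) rather than the raw `$oid` form shown here.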
[12:24:45] <NodeX> I don't suppose there is any way to split mongo data dirs up on a system, is there? ... for example I want my most frequently accessed DBs on SSD and some less important ones on another drive/partition
[12:48:19] <Derick> so put those on the SSD through a symlink - and set the default data dir to the SATA. If you then create a new DB, it shows up on the SATA automatically.
[12:48:43] <NodeX> that's what I thought - otherwise new DBs would appear on the SSD first
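Derick's symlink setup can be sketched like this. The paths are made up, and mongod has to run with `--directoryperdb` so each database gets its own subdirectory under the dbpath:

```shell
# Sandbox paths standing in for the real mounts.
DATADIR=/tmp/demo-sata/data   # default dbpath, on the SATA drive
SSD=/tmp/demo-ssd             # SSD mount point

mkdir -p "$DATADIR" "$SSD/hotdb"
# The frequently-accessed database lives on the SSD; the symlink makes it
# appear under the normal dbpath, so newly created databases still land
# on the SATA drive by default.
ln -sfn "$SSD/hotdb" "$DATADIR/hotdb"
ls -l "$DATADIR"
```

Stop mongod before moving existing database files onto the SSD and creating the link.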
[13:15:48] <kali> jtomasrl: depends what you mean... it will behave as in find() and match full documents. if you need to extract sub docs, you need to $unwind them first
[13:58:39] <listerine> Can someone help me with this issue? https://jira.mongodb.org/browse/DOCS-577?focusedCommentId=251700#comment-251700
[13:59:38] <listerine> I'm trying to update my mongo through brew since I use OS X; running mongo --version returns the correct version, but "db.runCommand( {buildInfo: 1} )" returns the old version.
[14:00:10] <listerine> I'm guiding myself on these two questions: http://stackoverflow.com/questions/13695851/mongodb-java-driver-no-such-cmd-aggregate and http://stackoverflow.com/questions/8495293/whats-a-clean-way-to-stop-mongod-on-mac-os-x
[14:04:46] <vr__> can we use mongos (mongo sharded) in a replica set?
[14:06:19] <jtomasrl> I'm not sure how to perform an upsert for a nested object. I have a checkins document and I want to update orders of different items: if the item exists, just add an order; if it doesn't, add the item and nest the order. this is what I've got so far, but I don't see it working. https://gist.github.com/4655708
[14:28:42] <JoeyJoeJo> How can I see what a collection's shard key is?
[14:31:52] <joe_p> vr__: if you're running a simple replica set then you do not use mongos. mongos is used only if you have a sharded mongo environment
[14:32:48] <vr__> joe_p: yes, but can I use mongos in a highly-available fashion?
[14:36:03] <vr__> i see. That seems ghetto given how nice mongo is in general about failover
[14:37:22] <joe_p> if you only list a single host to connect to and that host is down, is it supposed to magically figure out where to connect to?
[14:38:04] <vr__> yes. In the same way that if I am using a replicaSet it 'magically' figures out where to connect to.
[14:40:33] <joe_p> vr__: explain how that works? if you connect directly to a single member of a replica set and that member is down, you're no longer connected to mongodb. you have to do the same thing there and have multiple hosts listed in your connection string. I fail to see how mongos and replica set connections are different in that regard
[14:41:08] <vr__> if I connect to one member, it discovers the other members
[14:41:32] <vr__> and connects to those as well. if that mongo dies it will still connect to the other ones
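The symmetry the two are circling: in both cases high availability comes from a seed list of hosts in the connection string. The hostnames below are hypothetical and the parsing helper is just for illustration:

```python
# A replica-set client discovers the other members from any live seed;
# for mongos you list several routers and the driver fails over between
# them. Either way, more than one host goes into the URI.
replset_uri = "mongodb://db1.example.com:27017,db2.example.com:27017/?replicaSet=rs0"
mongos_uri  = "mongodb://mongos1.example.com:27017,mongos2.example.com:27017/"

def seed_hosts(uri):
    """Return the host list embedded in a mongodb:// URI."""
    hostpart = uri[len("mongodb://"):].split("/")[0]
    return hostpart.split(",")

print(seed_hosts(mongos_uri))
```

The remaining difference joe_p notes still holds: a replica-set client can learn about members it was never told about, while a mongos client only knows the routers listed.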
[14:54:36] <DinMamma> I understand it might be difficult to say, but would compacting a collection that is 490 GB be a bad idea when I have 28 GB of free disk space?
[15:16:58] <rquilliet> I mean 15 min for 10M lines
[15:16:59] <DinMamma> rquilliet: A file needs to be seriously long to not be iterable, at least on my machine.
[15:17:14] <DinMamma> Which is not the beefiest of machines.
[15:19:03] <NodeX> 15 mins, wth are you doing in the loop lol?
[15:21:21] <JoeyJoeJo> I have a few databases on 4 shards and I want to wipe one completely out. Is it safe to go to my dbdir and just delete the files (MyDB.1, MyDB.2, etc) manually?
[15:26:23] <jtomasrl> I'm not sure how to perform an upsert for a nested object. I have a checkins document and I want to update orders of different items: if the item exists, just add an order; if it doesn't, add the item and nest the order. this is what I've got so far, but I don't see it working. https://gist.github.com/4655708
[15:29:32] <Zelest> rquilliet, it might be the ugliest code I've ever written.. but it lets you read huge files without loading the entire file into RAM.
[15:29:42] <JoeyJoeJo> kali: I have 220+ million documents and .drop() is taking forever
[15:29:47] <Zelest> rquilliet, instead of echo, simply insert it into mongo.
[15:33:04] <JoeyJoeJo> NodeX: I think I phrased my question poorly. I want to completely remove a database from 4 shards, not a shard. Is that possible?
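For completeness: deleting MyDB.1, MyDB.2 and so on by hand leaves the config servers (and every mongos) believing the database still exists. The supported route is to drop it through mongos, which removes it from all shards:

```
// In the mongo shell, connected to a mongos (not an individual shard):
use MyDB
db.dropDatabase()
```

On 220M+ documents this can still take a while, but unlike a per-collection `.drop()` it also cleans up the sharding metadata.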
[15:33:15] <rquilliet> Zelest: thanks for the doc
[15:43:56] <NodeX> personally I would split the files and fork it with non-safe writes and remove the preg_match - this will give you the biggest performance gain
[15:46:15] <rquilliet> so you would 1/ split the files in chunks
[16:10:08] <JoeyJoeJo> I'm inserting a few million documents using a python script I wrote. It usually inserts at about 5000 docs/sec but for some reason it'll just hang for 200-300 seconds at a time, then continue inserting. How can I track down what is causing inserts to hang?
[16:10:28] <JoeyJoeJo> I'm only running one instance of my script and nothing else is using that DB or collection, so I don't think it's a write lock issue
[16:16:46] <JoeyJoeJo> timah: Holy crap, I wish I could insert half as fast as that. What's your setup like?
[16:17:57] <timah> it's taking a single collection containing 30m+ documents, iterating the cursor (find all), transforming from old document schema to new (including hash to mysql id lookups via apc), and inserting batches of 5000, which equates to being anywhere between 25k-50k/sec.
[16:24:02] <timah> create a plain ol' array before your loop. push your document onto that array when you would usually insert your document into the collection. then check the length of your array and if it's >= your desired batch insert amount then call your .insert(batchArray).
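timah's batching recipe, sketched in Python. The names are made up and `insert` is any callable standing in for `collection.insert(list_of_docs)`:

```python
def batched_insert(docs, insert, batch_size=5000):
    """Push documents onto a plain array and flush it in batches,
    exactly as described above."""
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) >= batch_size:   # batch is full: send it
            insert(batch)
            batch = []
    if batch:                          # flush the final partial batch
        insert(batch)

# Toy usage: capture the batches instead of talking to a real collection.
calls = []
batched_insert(({"i": i} for i in range(12)), calls.append, batch_size=5)
print([len(b) for b in calls])  # → [5, 5, 2]
```

The win over one-at-a-time inserts is fewer round trips to the server; the trade-off is that a failure mid-batch is harder to attribute to a single document.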
[16:25:42] <JoeyJoeJo> That makes sense. I'm going from csv->mongo and I just iterate over each line of the csv. The current csv files I'm inserting are about 4.8M lines
[16:27:15] <JoeyJoeJo> I'm also using python. I wonder if there is any speed difference between the python driver and the php driver?
[16:45:13] <bean> does a db.copyDatabase() read lock?
[17:25:49] <JoeyJoeJo> Is this line from my log file an error? CMD fsync: sync:1 lock:0
[17:26:04] <JoeyJoeJo> Or maybe a better question is, why does it show up so much?
[18:02:55] <theotherguy> any recommendation on a DBaaS for MongoDB?
[18:14:37] <JoeyJoeJo> Is there a quick way to stop a collection? By stop I mean tell mongo to not process any queries and stop whatever ops it's currently doing? I have one collection taking up a huge amount of lock % and screwing over my other collections. I'm trying to do .drop() but it's taking forever.
[20:34:03] <rideh> struggling to import a CSV: it renders fine in Excel, but when I open it with Sublime the rows carry on past a ton of lines. I've tried fixing the encoding and line endings but cannot get it into mongo properly. advice?
[20:57:54] <JoeyJoeJo> How can I remove a document if the value of a field is not an int?
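One way to express "the field is not an int" is the `$type` operator. BSON type 16 is a 32-bit int and 18 a 64-bit int; the field name `count` below is hypothetical:

```python
import json

# Matches documents whose "count" exists but is not a 32-bit int.
# $exists keeps documents that merely lack the field from matching;
# if 64-bit ints (type 18) can occur too, exclude that type as well.
not_int32 = {"count": {"$exists": True, "$not": {"$type": 16}}}
print(json.dumps(not_int32))
```

Something like `db.coll.remove(not_int32)` would then delete the matches; running the same filter through `find()` first is the safe way to check what it catches.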
[21:36:37] <owen1> one of my hosts has an old version of the replica set configuration. how do I force it to reconfigure?
[21:37:04] <owen1> when I try to reconfigure it, it says: exception: member localhost:27017 has a config version >= to the new cfg version; cannot change config
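For the record, that version check can be bypassed with a forced reconfiguration, which is the documented escape hatch for exactly this situation; use it with care, since forcing a reconfig can roll back writes:

```
// In the mongo shell, on a member that can see the current config:
cfg = rs.conf()
rs.reconfig(cfg, { force: true })   // bypasses the version comparison
```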
[22:10:28] <jtomasrl> how can i store an array of objectId
[22:32:07] <timah> so I know how to successfully generate an ObjectId based on a timestamp, but it is missing everything in its definition except for that timestamp.
[22:32:30] <timah> is there a way to generate an ObjectId and have it include the machine id, process id, inc, etc.?
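The classic 12-byte layout can be assembled by hand. This is only a sketch of the format, not a replacement for a driver's `ObjectId`; the md5-of-hostname machine hash mirrors one common driver choice, and real drivers keep a process-wide incrementing counter:

```python
import hashlib, os, socket, struct, time

# Classic 12-byte ObjectId layout:
#   4-byte big-endian timestamp | 3-byte machine hash | 2-byte pid | 3-byte counter
def make_objectid(ts=None, counter=0):
    ts = int(time.time()) if ts is None else int(ts)
    machine = hashlib.md5(socket.gethostname().encode()).digest()[:3]
    pid = struct.pack(">H", os.getpid() % 0xFFFF)
    cnt = struct.pack(">I", counter % 0xFFFFFF)[1:]   # low 3 bytes
    return (struct.pack(">I", ts) + machine + pid + cnt).hex()

print(make_objectid(ts=0x51060000))
```

Because the timestamp occupies the leading bytes, ids built this way still sort and range-query by creation time, while also carrying the machine/pid/counter fields timah is after.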
[23:03:32] <weq> What's the difference between MongoClient.connect and MongoClient.open?
[23:28:47] <kenneth> hey there; does anybody know how to do multiple sets in an array with the following syntax: