[02:34:50] <packagex> I am new to mongodb and need some expert suggestion or recommendation. My question is: what is recommended, 1) a HUGE collection with lots of documents in it, or 2) MULTIPLE collections with the documents spread across them? In both approaches my documents will be similar (saving event logs)
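For context, the two layouts being asked about might look like this in PyMongo (2.x-era calls; all names invented):

    from pymongo import MongoClient

    db = MongoClient().logs

    # Approach 1: one big collection, with an indexed field telling event types apart
    db.events.insert({"event_type": "login", "user": "alice", "ts": 1400000000})
    db.events.create_index("event_type")

    # Approach 2: one collection per event type
    db.login_events.insert({"user": "alice", "ts": 1400000000})
    db.signup_events.insert({"user": "bob", "ts": 1400000001})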
[03:24:56] <ranman> daidoji: http://api.mongodb.org/python/current/examples/bulk.html#ordered-bulk-write-operations 2nd example in
[03:26:12] <ranman> daidoji: or here http://api.mongodb.org/python/current/examples/bulk.html#unordered-bulk-write-operations
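A minimal PyMongo sketch of what those two linked examples show; the collection name is a placeholder:

    from pymongo import MongoClient

    coll = MongoClient().test.collection

    # Ordered: operations run one after another and stop at the first error
    bulk = coll.initialize_ordered_bulk_op()
    bulk.insert({"_id": 1})
    bulk.insert({"_id": 2})
    result = bulk.execute()

    # Unordered: the server may reorder/parallelize operations; all of them
    # are attempted and any errors are reported together at the end
    bulk = coll.initialize_unordered_bulk_op()
    bulk.insert({"_id": 3})
    bulk.insert({"_id": 4})
    result = bulk.execute()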
[06:58:00] <zhaoyulong> hello, can anybody tell me what will happen if the newly elected master in a replset has an earlier local time than the previous master?
[07:28:50] <zhaoyulong> @joannac what I mean is: the newly selected master's local time is behind the previous master's
[07:28:50] <zhaoyulong> @joannac, while the new master was still a slave, it received a row A with oplog timestamp {1000, 9}; then it becomes the new master, and when someone inserts a row, since its clock is behind the previous master's, the new row gets a timestamp like {900, 1}
[07:28:51] <zhaoyulong> so, does the newly inserted row get a smaller timestamp in the oplog?
[07:31:17] <joannac> zhaoyulong: pretty sure all of that is in UTC
[07:31:54] <joannac> unless you mean, your server time is behind
[07:43:38] <king1989> if i run mongodb on virtualization (VMware ESXi 5.5), is it ok?
[07:45:56] <zhaoyulong> @joannac, so the problem does indeed exist. What I want to know is how mongo deals with this issue when syncing between replset members; you know, time can't be exactly the same between servers, differences always exist
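For reference, oplog entries are ordered by a BSON Timestamp, a (seconds, counter) pair assigned by the primary, so ordering does depend on the primary's clock. A sketch for inspecting the newest entry on a replica set member, assuming a local mongod:

    from pymongo import MongoClient

    oplog = MongoClient().local["oplog.rs"]

    # newest oplog entry in insertion order; "ts" is a bson.Timestamp of
    # (seconds since epoch, ordinal within that second)
    last = oplog.find().sort("$natural", -1).limit(1)[0]
    print(last["ts"])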
[12:16:26] <Pet0r> I'm using the latest PECL Mongo driver in PHP (1.5.2) and I'm getting this error trying to connect to my Mongo server - "Failed to connect to: localhost:27017: get_server_flags: got unknown node type"
[12:16:52] <Derick> which mongodb version are you using?
[15:06:31] <jiffe98> the machines are at 64% capacity, the plan was to shard at 70%
[15:12:20] <tscanausa> Yeah, I have a 3-shard cluster with only 50GB on each
[15:12:46] <tscanausa> but I functionally separate my clusters so I have 6 clusters
[15:14:10] <jiffe98> we replaced a drive in one of them which caused an IO problem and now replication is broken
[15:36:15] <agenteo> hi, when I run a find in a mongo client, is there a way to get an unescaped result? I currently see all my " characters escaped. thanks
[15:52:56] <richthegeek> hi, does anyone have experience storing Mongo on a ZFS volume?
[15:53:19] <richthegeek> or any other form of compressed volume, for that matter
[15:53:44] <agenteo> ok I tried to bypass that by exporting the query from the mongo console to a text file, passing a .js file with the following query: db.collection.find({"article_type": "Article", "body": /<table/}, {"body": 1}).sort({"source_id_string": -1}).limit(1)
[15:55:11] <agenteo> the result is a JSON dump of the query object, I think… any idea why I'm seeing that instead of the working result I get in the interactive console?
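The likely cause: the interactive shell prints a cursor's documents automatically, but a .js file passed to mongo does not; the cursor has to be iterated and printed explicitly (for example by appending .forEach(printjson) to the query). A sketch of the same export done from PyMongo instead, with the database name assumed:

    from pymongo import MongoClient

    coll = MongoClient().mydb.collection
    cursor = (coll.find({"article_type": "Article", "body": {"$regex": "<table"}},
                        {"body": 1})
                  .sort("source_id_string", -1)
                  .limit(1))
    for doc in cursor:
        print(doc["body"])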
[16:28:28] <jet> is there an API to get the list of indexes with the mongo-c-driver?
[16:53:35] <unitclick> Hey guys. I'm trying to use mongoose's findByIdAndUpdate to update a document with a complete JSON object, not just a single field as shown in the docs. Is it possible? If so, what's the best practice for doing it?
[17:19:12] <skot> The bulk inserts being referred to in your link are not the new bulk api commands, but the old bulk insert method which only returns a single error and doesn't use the new commands/api.
[17:19:44] <daidoji> ahhh, roger. That makes sense
[18:20:24] <tscanausa> daidoji: pretty sure you need to call it yourself
[18:22:16] <daidoji> tscanausa: well like I was wondering what was kosher
[18:22:21] <daidoji> like I'm loading really big data sets
[18:22:30] <daidoji> like in the terabytes of size
[18:23:03] <daidoji> so is it kosher to blk_handle.insert(record_from_generator) and then call blk_handle.execute() once at the end?
[18:23:33] <daidoji> and will the memory savings of the generator be realized through the driver, or do I have to manage that all myself?
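As far as I know, the bulk API buffers every queued operation on the client until execute() is called, so a single execute() at the end of a terabyte-scale load would hold everything in memory and defeat the generator. A sketch of flushing in fixed-size batches; the batch size and record_generator() are hypothetical:

    from pymongo import MongoClient

    coll = MongoClient().mydb.events   # placeholder names
    BATCH = 10000

    bulk = coll.initialize_unordered_bulk_op()
    pending = 0
    for record in record_generator():
        bulk.insert(record)
        pending += 1
        if pending == BATCH:
            bulk.execute()
            # a bulk object can only be executed once; start a fresh one
            bulk = coll.initialize_unordered_bulk_op()
            pending = 0
    if pending:
        bulk.execute()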
[19:07:13] <ddod> Open question to anyone with opinions: If I were building something like a blogging platform, would it be better to store posts in their own collection as individual records or throw them in an array inside the user records?
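A hedged illustration of the two layouts ddod describes, with invented names and PyMongo 2.x-era calls:

    from pymongo import MongoClient

    db = MongoClient().blog

    # Option A: posts embedded in the user document -- the array grows without bound
    db.users.insert({"_id": 1, "name": "alice",
                     "posts": [{"title": "hello", "body": "..."}]})

    # Option B: posts as their own collection, referencing the author
    db.posts.insert({"author_id": 1, "title": "hello", "body": "..."})
    db.posts.create_index("author_id")   # keeps "all posts by this user" cheap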
[19:13:45] <houms> good day, I installed mongodb from the 10gen repo on CentOS 6. We installed version 2.6.1-2, but for some reason mongod cannot be stopped using 'service mongod stop'
[19:13:54] <houms> it can be started but not stopped
[19:15:18] <houms> the stop function in the init script is http://pastie.org/9182630
[19:16:20] <houms> possibly this line: killproc -p "$PIDFILE" -d 300 /usr/bin/mongod, as the rm -f of subsys/mongod doesn't seem to get run
[19:20:34] <houms> so on stop it does not remove the pid and lock files, it seems
[19:32:14] <BillCriswell> I'm semi-familiar with MySQL, a little less so with Mongo. I'm wondering when it would be a good time to store a field as an array of things. The idea feels awesome, but wouldn't querying based on something like that be very slow?
[19:33:27] <BillCriswell> Like, say a task has many notes; an array of "note" objects can be useful since I rarely have to get the notes outside of the task, but if I ever needed to query "notes", would that be better as its own collection?
[19:33:43] <cheeser> querying isn't the problem. it's document size and growth that'll bite you.
[19:34:23] <BillCriswell> cheeser: So I'm better off thinking about it as like relational for the most part?
[19:35:17] <BillCriswell> separate notes and tasks collections, where notes has a task_id column?
[19:39:47] <BillCriswell> I should maybe even start further back haha
[19:40:21] <BillCriswell> A record being a "document" feels weird to me right now. I'm sure if I read more I'll get it.
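Following cheeser's point: documents are capped at 16 MB, and an array that grows forever forces ever-larger documents, so the relational-style layout BillCriswell sketches is the usual advice. What it might look like, with invented names:

    from pymongo import MongoClient

    db = MongoClient().todo

    db.notes.insert({"task_id": 42, "text": "remember the milk"})
    db.notes.create_index("task_id")

    # all notes for one task: a single indexed query, no join needed
    for note in db.notes.find({"task_id": 42}):
        print(note["text"])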
[20:41:57] <dgarstang> What does this mean when starting mongo? "Error parsing INI config file: unknown option nojournal". There are -ZERO- references via google
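One guess: the parser rejecting the option suggests a mismatch between the old INI-style config and what the binary expects (mongos, for instance, has no journal options). If the goal is disabling journaling on mongod 2.6, the newer YAML config format would spell it:

    storage:
      journal:
        enabled: false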
[20:48:58] <proteneer> anyone deployed 2-member replica sets on EC2 before?
[20:49:16] <proteneer> we're thinking of using r3.large for the replicas, and something cheap like an m3.medium for an arbiter
[21:00:01] <tscanausa> proteneer: if you have the money have a 3rd replica
[21:01:10] <proteneer> is provisioned IOPS critical?
[21:02:13] <proteneer> and each Mongo instance needs its own provisioned IOPS EBS?
[21:02:19] <tscanausa> depends on your use case. for my application timing is everything, so the most critical items are on local SSDs and everything else is on EBS
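For reference, the two-data-node-plus-arbiter layout proteneer describes would be initiated with something like this; hostnames and the set name are placeholders:

    from pymongo import MongoClient

    # run against the member that should seed the set
    MongoClient("node1.example.com").admin.command("replSetInitiate", {
        "_id": "rs0",
        "members": [
            {"_id": 0, "host": "node1.example.com:27017"},
            {"_id": 1, "host": "node2.example.com:27017"},
            {"_id": 2, "host": "arbiter.example.com:27017", "arbiterOnly": True},
        ],
    })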
[22:00:32] <toddwildey> Question about aggregation and $unwind
[22:00:38] <toddwildey> I've got a schema setup as such:
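The schema never made it into the log, so here is a generic illustration of $unwind instead, with hypothetical collection and field names: it turns one document holding an N-element array into N documents, one per element:

    from pymongo import MongoClient

    db = MongoClient().blog

    pipeline = [
        {"$unwind": "$tags"},                                # one doc per array element
        {"$group": {"_id": "$tags", "count": {"$sum": 1}}},  # then count each tag
    ]

    # with the PyMongo 2.x of this log's era, aggregate returns the raw command reply
    reply = db.articles.aggregate(pipeline)
    for doc in reply["result"]:
        print(doc)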
[22:19:25] <Guest36106> hi, i have a question.. i have about 1M documents in my collection, and i'm searching on a key 'statskey', which takes 2 seconds even though i have an index on it {'statskey': 1}
[22:19:31] <Guest36106> is there any way i can make this faster?
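The first diagnostic step would be explain(), to confirm the index is actually chosen and to see how much is scanned; names here are placeholders:

    from pymongo import MongoClient

    coll = MongoClient().mydb.stats

    # in 2.6-era output, "cursor" should read something like "BtreeCursor statskey_1",
    # and "nscanned" should be close to "n"; a large gap between them means the
    # index isn't selective enough for this query
    plan = coll.find({"statskey": "some-value"}).explain()
    print(plan)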