#mongodb logs for Friday the 16th of May, 2014

[00:00:39] <daidoji> anybody?
[00:33:51] <daidoji> hello, anybody around?
[02:34:41] <packagex> Hello
[02:34:50] <packagex> I am new to mongodb and need some expert suggestion or recommendation. My question is: what is recommended, 1) A HUGE collection with lots of documents in it, or 2) MULTIPLE collections with documents? In both approaches, my documents will be similar (saving event logs)
[03:24:56] <ranman> daidoji: http://api.mongodb.org/python/current/examples/bulk.html#ordered-bulk-write-operations 2nd example in there
[03:26:12] <ranman> daidoji: or here http://api.mongodb.org/python/current/examples/bulk.html#unordered-bulk-write-operations
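
For reference, a minimal sketch of the ordered bulk API those docs describe, as it existed in pymongo of that era (database, collection, and documents are hypothetical; newer drivers expose the same idea as Collection.bulk_write):

    from pymongo import MongoClient
    from pymongo.errors import BulkWriteError

    coll = MongoClient().test.bulk_demo            # hypothetical database/collection

    bulk = coll.initialize_ordered_bulk_op()       # ops run in order; stops at the first error
    bulk.insert({'_id': 1})
    bulk.insert({'_id': 2})
    bulk.find({'_id': 1}).update({'$set': {'x': 1}})
    try:
        result = bulk.execute()                    # returns counts plus any write errors
    except BulkWriteError as exc:
        print(exc.details)                         # full per-operation error list
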
[03:39:02] <Sirius> hello
[03:39:33] <Sirius> any one out there for a little mongodb help
[03:39:53] <ranman> Sirius: what's your Q?
[03:40:23] <Sirius> the default data path is said to be data/db
[03:40:51] <Sirius> i have mongo db in C:\mongoDB
[03:41:23] <Sirius> where should i create the folder data\db ?
[03:41:31] <Sirius> pls help
[03:43:26] <Sirius> ok
[03:45:15] <ranman> Sirius: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows/
[03:46:01] <Sirius> @ranman i followed the link; it says there to create the folder data\db
[03:46:12] <Sirius> but where would i create it?
[03:46:36] <ranman> C:\data\db
[03:46:50] <ranman> the preceding slash means the root of the volume
[03:47:04] <ranman> you can also pass in the --dbpath option
[03:47:14] <ranman> to specify where you want it to look for data
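
Putting ranman's two points together, a hedged example of starting mongod with an explicit data directory on Windows (paths are illustrative):

    C:\mongoDB\bin\mongod.exe --dbpath C:\mongoDB\data\db

With no --dbpath, mongod defaults to \data\db on the current volume, i.e. C:\data\db.
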
[03:53:51] <Sirius> thanks a lot ranman
[03:53:57] <Sirius> it solved my prob
[03:54:02] <ranman> NP, GL
[03:54:04] <Sirius> many thanks
[03:54:14] <Sirius> :(p
[06:57:29] <zhaoyulong> hello, can anybody tell me what happens if the newly selected master in a replset has a smaller local time than the previous master?
[06:58:00] <zhaoyulong> hello, can anybody tell me what will happen if the newly selected master in a replset has a smaller local time than the previous master?
[07:15:08] <joannac> I don't know what that means
[07:15:51] <joannac> zhaoyulong: what do you mean by "small local time"
[07:26:48] <king1989> how do i scale my mongodb
[07:26:54] <king1989> please help me
[07:27:38] <joannac> um, going to need more info than that
[07:27:39] <king1989> anyone here?
[07:28:50] <zhaoyulong> @joannac what I mean is: the newly selected master's local time is behind the previous master's
[07:28:50] <zhaoyulong> @joannac, when the new master was a slave, it got a row A with timestamp {1000, 9}; then it becomes the new master, and when someone inserts a row, since its time is behind the previous master's, the newly inserted row gets a timestamp {900, 1}
[07:28:51] <zhaoyulong> so, the newly inserted row has a smaller timestamp in the oplog?
[07:31:17] <joannac> zhaoyulong: pretty sure all of that is in UTC
[07:31:54] <joannac> unless you mean, your server time is behind
[07:31:57] <joannac> in which case, fix it
[07:32:10] <joannac> king1989: okay, without more info: get better hardware and/or shard
[07:32:35] <king1989> hi joannac
[07:32:47] <king1989> i am planning a new mongodb deployment, thinking about the system architecture
[07:33:03] <king1989> should i create a shard cluster from the beginning?
[07:33:36] <joannac> depends how much load you're expecting and whether your hardware can handle it
[07:34:43] <king1989> i intend to create a vm with 24GB ram, 8 vCPU and 500GB HDD (RAID 10) on vmware esxi
[07:35:05] <king1989> will there be a problem when i run mongodb on that?
[07:35:29] <joannac> depends how much load you're expecting
[07:36:51] <king1989> how do i test it, joannac?
[07:37:45] <king1989> and if i run mongodb on virtualization ( vmware esxi 5.5), is it ok?
[07:42:24] <joannac> run whatever production load you expect, and see how it performs
[07:42:35] <joannac> then increase load and see what your bottleneck is
[07:43:28] <king1989> hi
[07:43:38] <king1989> if i run mongodb on virtualization (vmware esxi 5.5), is it ok?
[07:45:56] <zhaoyulong> @joannac, so the problem does indeed exist; what I want to know is how mongo handles this issue when syncing between replset members. you know, time can't be exactly the same between servers; differences always exist
[07:51:50] <king1989> hi all
[07:54:31] <king1989> anyone here
[08:33:23] <king19891> hi all
[08:33:32] <Zelest> o/
[08:33:44] <king19891> mongodb on centos and mongodb on Ubuntu, which is better?
[08:34:01] <king19891> about performance
[08:34:11] <Zelest> both are Linux, equally good/bad imo.
[08:37:54] <king19891> i heard that mongodb runs better on ubuntu than on centos
[08:37:54] <king19891> right?
[08:39:21] <Zelest> http://www.troll.me/images/futurama-fry/not-sure-if-trolling-or-just-stupid.jpg
[09:17:29] <pen> hey
[09:17:30] <pen> anyone here?
[09:17:38] <pen> wonder if it is possible to do bitwise query?
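
For context: at the time of this log MongoDB had no dedicated operator for this, but version 3.2 later added bitwise query operators such as $bitsAllSet. A hedged pymongo sketch (the 'flags' field is hypothetical):

    from pymongo import MongoClient

    coll = MongoClient().test.things               # hypothetical collection
    # requires MongoDB 3.2+; matches docs where bit positions 0 and 2 of 'flags' are both set
    for doc in coll.find({'flags': {'$bitsAllSet': [0, 2]}}):
        print(doc)
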
[10:03:43] <wei2912> hi
[10:04:17] <wei2912> how can i increment a dictionary of numbers using the update query?
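
One way to do this is $inc with dotted keys, which increments several nested counters in a single update; a minimal pymongo sketch (collection and field names are hypothetical):

    from pymongo import MongoClient

    coll = MongoClient().test.counters             # hypothetical collection
    coll.update_one(                               # update() in pymongo 2.x
        {'_id': 'page1'},
        {'$inc': {'counts.views': 1, 'counts.clicks': 2}},
        upsert=True,                               # create the document if missing
    )
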
[10:19:09] <richwestcoast> hello guys
[10:19:33] <richwestcoast> anyone know how i can fix this issue : https://pbs.twimg.com/media/Bnv3FocIIAASEOY.png:large
[10:22:47] <wei2912> richwestcoast, the error spells out the problem quite clearly
[10:22:57] <wei2912> do you have permissions to write to .meteor/local?
[10:23:05] <wei2912> and, what's the filesystem?
[10:23:08] <wei2912> does it support file locking?
[10:30:06] <richwestcoast> wei2912: /dev/simfs simfs 10485760 4428220 6057540 43% /
[10:30:29] <richwestcoast> drwxrwxrwx 3 root root 4096 May 16 05:29 .meteor
[10:30:32] <wei2912> richwestcoast, sry, no idea if simfs supports file locking
[10:30:46] <wei2912> in that case you'd have permissions
[10:31:54] <richwestcoast> ok so it could be file locking issue
[10:32:10] <wei2912> yeh
[10:33:57] <richwestcoast> ok thanks wei2912
[12:16:26] <Pet0r> I'm using the latest PECL Mongo driver in PHP (1.5.2) and I'm getting this error trying to connect to my Mongo server - "Failed to connect to: localhost:27017: get_server_flags: got unknown node type"
[12:16:52] <Derick> which mongodb version are you using?
[12:18:15] <Pet0r> 2.6.1
[12:20:38] <Pet0r> never mind, I'm an idiot, I still have the old server in /etc/hosts on this box
[12:20:43] <Derick> :-)
[12:20:48] <Derick> yeah, 1.8 perhaps?
[12:20:55] <Derick> or a master/slave set?
[12:21:00] <Derick> there were some issues with that
[12:21:12] <Pet0r> it was connecting to the old master, which has now been pulled out of the replica set
[12:21:21] <Derick> ok
[12:21:30] <Pet0r> cheers
[12:50:13] <Kaim> hi
[12:50:50] <Kaim> does an index get used by the remove command?
[12:51:10] <Kaim> if I do sth like db.col.remove({123: "aaa"})
[12:51:17] <Kaim> should I make an index on the 123 field?
[12:52:04] <Derick> you should not have values as key names
[12:52:18] <Derick> but yes, the index is used for finding things, and you're finding something to remove it
[12:55:17] <Kaim> okay thx
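
Derick's point in pymongo terms: a delete has to find matching documents first, so an index on the filtered field is used just as it would be for a query; a minimal sketch (collection and field names are hypothetical):

    from pymongo import MongoClient

    coll = MongoClient().test.events               # hypothetical collection
    coll.create_index('status')                    # index the field used in the filter
    coll.delete_many({'status': 'aaa'})            # the delete's query phase can use the index
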
[14:52:46] <jiffe98> anyone ever done a full sync on a db several tb's in size?
[14:53:18] <jiffe98> last time I tried, it kept starting over and over and never finished; ended up going the rsync route
[14:59:17] <tscanausa> jiffe98: interesting scenario.
[15:05:32] <kali> i would personally shard before this point
[15:05:53] <kali> or at least try :)
[15:06:31] <jiffe98> the machines are at 64% capacity, plan was to shard at 70
[15:12:20] <tscanausa> Ya I have a 3 shard cluster with only 50GB each
[15:12:46] <tscanausa> but I functionally separate my clusters so I have 6 clusters
[15:14:10] <jiffe98> we replaced a drive in one of them which caused an IO problem and now replication is broken
[15:36:15] <agenteo> hi, when I run a find on a mongo client, is there a way to get an unescaped result? I currently see all my “ escaped. thanks
[15:52:56] <richthegeek> hi, does anyone have experience storing Mongo on a ZFS volume?
[15:53:19] <richthegeek> or any other form of compressed volume, for that matter
[15:53:44] <agenteo> ok I bypassed that by trying to export a query from mongo console to text file passing a .js file with the following query: db.collection.find({"article_type": "Article", "body": /<table/}, {"body": 1}).sort({"source_id_string": -1}).limit(1)
[15:55:11] <agenteo> the result looks like a json dump of the cursor object, I think… any idea why I am seeing that instead of the expected result I get in the interactive console?
[15:56:50] <agenteo> nevermind
[15:56:58] <agenteo> had to use hasNext
[16:28:28] <jet> is there an API to get the list of indexes with mongo-c-driver?
[16:53:35] <unitclick> Hey guys. I'm trying to use mongoose's findByIdAndUpdate to update a document with a complete JSON object, not a single parameter as seen in the documents. Is it possible? If so what's best practice for doing it?
[17:14:13] <daidoji> hello #mongodb
[17:14:30] <daidoji> I have a question about BulkWrite for anyone who might be familiar with it
[17:14:54] <daidoji> basically, in the pymongo documentation it seems to say it'll return a list of errors on a BulkWrite operation
[17:15:05] <daidoji> but in the mongo documentation it claims it'll only return the last error
[17:15:21] <daidoji> http://docs.mongodb.org/manual/core/bulk-inserts/
[17:15:32] <daidoji> http://api.mongodb.org/python/current/examples/bulk.html
[17:17:45] <daidoji> anyone know what this discrepancy is about?
[17:18:21] <skot> The shell docs you are reading are for a different thing, read this: http://docs.mongodb.org/manual/reference/method/Bulk/#Bulk
[17:18:46] <daidoji> skot: roger
[17:19:12] <skot> The bulk inserts being referred to in your link are not the new bulk api commands, but the old bulk insert method which only returns a single error and doesn't use the new commands/api.
[17:19:44] <daidoji> ahhh, roger. That makes sense
[17:19:46] <daidoji> it just confused me a bit
[17:19:52] <daidoji> thanks for your help
[17:20:29] <skot> np, the docs are a bit confusing, no question.
[17:21:11] <skot> feel free to click the feedback icon and leave a "this is confusing in the light of the new bulk api/commands"
[17:21:38] <daidoji> roger
[17:21:45] <skot> (esp. since they aren't linked on that page)
[17:41:20] <daidoji> skot: do you use pymongo?
[17:41:50] <daidoji> cause I'm not really sure how to run this code http://api.mongodb.org/python/current/examples/bulk.html
[17:42:14] <daidoji> like its not working for me because a lot of those methods they use here don't look like they're available to my MongoClient
[17:43:53] <daidoji> and it looks different than the API page http://api.mongodb.org/python/current/api/pymongo/bulk.html
[17:53:17] <daidoji> also, if I pass in a generator with insert, is there any way to "execute" every so often automagically?
[17:53:28] <daidoji> or do I have to call execute myself every so many records?
[18:19:31] <daidoji> anybody?
[18:20:24] <tscanausa> daidoji: pretty sure you need to call it yourself
[18:22:16] <daidoji> tscanausa: well like I was wondering what was kosher
[18:22:21] <daidoji> like I'm loading really big data sets
[18:22:30] <daidoji> like in the terabytes of size
[18:23:03] <daidoji> so is it kosher to blk_handle.insert(record_from_generator) and then call blk_handle.execute() once at the end?
[18:23:33] <daidoji> and will the memory savings of the generator be realized through the driver, or do I have to manage that all myself?
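
tscanausa's answer below in sketch form: the bulk API buffers operations client-side until execute(), so for a terabyte-scale generator a common pattern is to flush every N records rather than once at the end. A hedged pymongo 2.7-era sketch (names and batch size are illustrative):

    from pymongo import MongoClient

    coll = MongoClient().test.big_load             # hypothetical collection
    BATCH = 10000                                  # illustrative flush threshold

    def load(records):                             # 'records' can be any generator
        bulk = coll.initialize_unordered_bulk_op()
        pending = 0
        for rec in records:
            bulk.insert(rec)
            pending += 1
            if pending == BATCH:                   # flush periodically to bound memory
                bulk.execute()
                bulk = coll.initialize_unordered_bulk_op()
                pending = 0
        if pending:                                # flush the remainder
            bulk.execute()
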
[19:07:13] <ddod> Open question to anyone with opinions: If I were building something like a blogging platform, would it be better to store posts in their own collection as individual records or throw them in an array inside the user records?
[19:08:09] <cheeser> separate collection
[19:13:45] <houms> good day, I installed mongodb from the 10gen repo on centos 6. we installed version 2.6.1-2, but for some reason mongod cannot be stopped using "service mongod stop"
[19:13:54] <houms> it can be started but not stopped
[19:15:18] <houms> the stop function in the init script is http://pastie.org/9182630
[19:16:20] <houms> possibly this line: killproc -p "$PIDFILE" -d 300 /usr/bin/mongod, as the rm -f of subsys/mongod doesn't seem to get run
[19:20:34] <houms> so on stop it does not remove pid and lock file it seems
[19:20:42] <houms> this is from mongod repo
[19:20:55] <houms> anyone point me in the right direction?
[19:26:44] <houms> it seems killall -15 mongod works, so i am wondering why the init script uses killproc?
[19:26:50] <houms> not sure what killproc even is
[19:27:42] <houms> says it's part of init.d/functions
[19:27:50] <houms> but it does not seem to work
[19:32:14] <BillCriswell> I'm semi familiar with MySQL, a little less with Mongo. I am wondering when would be a good time to store a field as an array of things. The idea feels awesome, but wouldn't querying based on something like that be very slow?
[19:33:27] <BillCriswell> Like, say a task hasMany note; an array of "note" can be useful since I rarely have to get the notes outside of the task, but if I ever needed to query "notes", would that be best as its own collection?
[19:33:43] <cheeser> querying isn't the problem. it's document size and growth that'll bite you.
[19:34:23] <BillCriswell> cheeser: So I'm better off thinking about it as like relational for the most part?
[19:35:17] <BillCriswell> notes, tasks and notes has a task_id column?
[19:35:54] <cheeser> i'd start here: http://docs.mongodb.org/manual/core/data-model-design/
[19:38:25] <BillCriswell> Ok, I'll read up.
[19:39:47] <BillCriswell> I should maybe even start further back haha
[19:40:21] <BillCriswell> A record being a "document" feels weird to me right now. I'm sure if I read more I'll get it.
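
The referencing pattern cheeser is steering toward, as a minimal pymongo sketch: notes live in their own collection and carry a task_id, instead of growing an array inside each task document (all names are hypothetical):

    from pymongo import MongoClient

    db = MongoClient().test                        # hypothetical database
    task_id = db.tasks.insert_one({'title': 'ship it'}).inserted_id
    db.notes.insert_one({'task_id': task_id, 'text': 'first note'})
    db.notes.create_index('task_id')               # makes the lookup below an index scan
    notes = list(db.notes.find({'task_id': task_id}))
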
[20:41:57] <dgarstang> What does this mean when starting mongo? "Error parsing INI config file: unknown option nojournal". There are -ZERO- references via google
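
A plausible cause, stated as an assumption: MongoDB 2.6 introduced a YAML config format, under which the old nojournal option is spelled differently. A hedged sketch of the equivalent YAML:

    # mongod.conf (YAML format, MongoDB 2.6+); disables journaling,
    # the equivalent of the old "nojournal = true" INI option
    storage:
      journal:
        enabled: false
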
[20:48:58] <proteneer> anyone deployed 2x replica sets before on EC2?
[20:49:16] <proteneer> we're thinking of using r2.large for the replicas, and something cheap like an m3.medium for an arbiter
[21:00:01] <tscanausa> proteneer: if you have the money have a 3rd replica
[21:00:19] <proteneer> I see
[21:01:10] <proteneer> is provisioned IOPS critical?
[21:02:13] <proteneer> and each Mongo instance needs its own provisioned IOPS EBS?
[21:02:19] <tscanausa> depends on your use case. in my application timing is everything, so the most critical items are on local ssds; everything else is on ebs
[22:00:32] <toddwildey> Question about aggregation and $unwind
[22:00:38] <toddwildey> I've got a schema setup as such:
[22:01:30] <toddwildey> new Schema({..., array: [{ _id: { type: Schema.ObjectId, ref: 'OtherModel' }, num: { type: Number, default: 0} }], ...});
[22:01:36] <toddwildey> (This is using Mongoose)
[22:02:04] <toddwildey> My goal is to $unwind 'array' to get tuples of (_id, num)
[22:02:26] <toddwildey> Currently, I'm getting two separate documents instead of a tuple though
[22:02:39] <toddwildey> That would be fine, except I have no way of re-attaching those documents together
[22:04:12] <toddwildey> Is there any way to guarantee a tuple? Or do I need to change the way my Schemas work?
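
For what toddwildey describes, $unwind emits one document per array element with the whole element intact, so _id and num stay together as a subdocument; a hedged sketch in pymongo terms, against the Schema above (the collection name is hypothetical, and the same pipeline should work from Mongoose's Model.aggregate):

    from pymongo import MongoClient

    coll = MongoClient().test.models               # hypothetical collection
    pipeline = [
        {'$unwind': '$array'},                     # one output doc per array element
        {'$project': {'ref': '$array._id',         # the (_id, num) pair, kept together
                      'num': '$array.num'}},
    ]
    for doc in coll.aggregate(pipeline):           # aggregate() returns a cursor in pymongo 3.x
        print(doc['ref'], doc['num'])
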
[22:05:43] <proteneer> holy shit digital ocean is like a match made in heavy for hosting mongo
[22:05:47] <proteneer> heaven*
[22:07:43] <toddwildey> Agreed
[22:15:00] <proteneer> except I can't build mongo
[22:15:00] <proteneer> lol
[22:15:20] <proteneer> and i'm not sure if the 2GB ram is enough
[22:18:41] <toddwildey> yum/apt it?
[22:19:09] <proteneer> i need ssl support
[22:19:17] <proteneer> {standard input}: Assembler messages:
[22:19:18] <proteneer> {standard input}:37615: Warning: end of file not at end of a line; newline inserted
[22:19:18] <proteneer> g++: internal compiler error: Killed (program cc1plus)
[22:19:20] <proteneer> what the fuck
[22:19:25] <Guest36106> hi, i have a question.. i have about 1m documents in my collection, and i'm searching on a key 'statskey', which takes 2 seconds even though i have an index on it {'statskey': 1}
[22:19:31] <Guest36106> is there any way i can make this faster?
[22:19:43] <proteneer> get an SSD
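
Beyond hardware, it's worth confirming the index is actually used; a hedged pymongo sketch (the value is illustrative):

    from pymongo import MongoClient

    coll = MongoClient().test.stats                # hypothetical collection
    plan = coll.find({'statskey': 'some-value'}).explain()
    print(plan)                                    # expect an index scan (BtreeCursor / IXSCAN),
                                                   # not a full collection scan (BasicCursor / COLLSCAN)
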
[22:20:05] <swens> proteneer: i'm on mongodb
[22:20:08] <swens> mongohq i mean
[22:20:21] <proteneer> swens, they dont support ssl do they?
[22:20:34] <proteneer> and they're pretty expensive
[22:20:36] <swens> you mean ssd or ssl?
[22:20:41] <proteneer> ssl*
[22:20:48] <proteneer> only mongodirector supports it right now
[22:21:33] <swens> not sure
[22:21:55] <proteneer> we don't have the luxury of having everything in the same datacenter
[22:24:37] <proteneer> how CPU intensive is mongo?
[22:24:40] <proteneer> are 2 cores enough?
[22:24:47] <proteneer> and 4GB of ram?
[22:27:36] <tscanausa> with mongo there is no such thing as too much
[22:27:47] <proteneer> well AFAIK everything is single threaded
[22:27:54] <proteneer> and if I don't do sharding, then multiple CPU threads are not all that useful
[22:28:44] <toddwildey> As long as Mongo still has a global write lock, there is such thing as too much
[22:32:01] <proteneer> in a replicaSet, do I need to issue things like ensureIndex() to all the replicas?
[22:32:05] <proteneer> or can I just execute that on the primary?
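
For the record, issuing it once against the primary is enough: index builds replicate through the oplog to the secondaries. A hedged pymongo sketch (hosts, set name, and key are illustrative):

    from pymongo import MongoClient

    # a replica-set connection routes writes and DDL to the primary
    client = MongoClient('host1,host2,host3', replicaSet='rs0')
    coll = client.test.things                      # hypothetical collection
    coll.create_index('statskey')                  # ensure_index() in drivers of that era;
                                                   # the build replicates to the secondaries
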