#mongodb logs for Tuesday the 8th of October, 2013

[00:48:06] <datafirm> is the 16MB limit on the aggregation pipeline for each document returned as a max size, or the collection response as a whole?
[00:52:18] <Lobosque> Hey guys, I'm new to mongo. I have three collections: users, items, and quotes. items references users and quotes references items. What is the best way to get all quotes for items of a given user?
[01:02:14] <cheeser> if they're not nested you'll have to do 3 queries
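
For reference, cheeser's three-query approach might look like this in the mongo shell (a sketch; the reference fields userId and itemId are assumed names, not from the log):

    // fetch the user, then their items, then all quotes for those items
    var user = db.users.findOne({ name: "some user" });
    var itemIds = db.items.find({ userId: user._id }, { _id: 1 }).toArray()
                          .map(function (i) { return i._id; });
    var quotes = db.quotes.find({ itemId: { $in: itemIds } }).toArray();
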
[01:12:03] <rafaelhbarros> is there an .explain() to mapReduce()?
[01:23:49] <cheeser> rafaelhbarros: no
[01:24:12] <rafaelhbarros> cheeser: I figured it out. I've been facing a bunch of interesting problems with mongo & pymongo this week
[01:46:27] <rafaelhbarros> cheeser: is there a way to cover a mapreduce entirely with indexes?
[02:00:30] <adizam> Using mongoose & nodejs.. mongoose attempts to call 'collStats' quite a bit.. and looking at my mongo logs, I'm seeing: DateTime [conn#] command denied: { collStats: "collection_name" }
[02:00:33] <adizam> a _LOT_
[02:00:48] <adizam> Just wondering if thats normal. Haven't done much performance tuning in mongo
[02:09:16] <cheeser> afaik, mapreduce does a collection scan
[02:26:49] <adizam> I actually figured it out cheeser
[02:27:14] <adizam> It is nodetime.. initiating a new (non admin) connection for every collection I have.. and attempting to run db.collection.stats(), among other things.. does it every X seconds
[02:27:44] <adizam> I'm not sure if map reduce does a collection scan.. and I know I have a few map reduces in my codebase.. so maybe that is another instance.. but I have confirmed this is the case for me at least. Thanks for the feedback tho :)
[03:01:59] <defaultro> hey guys, is it possible to implement somewhat similar to triggers? I want mongodb to react when a field gets updated. What it would need to do is connect to our nodejs server
[03:03:39] <adizam> defaultro: http://stackoverflow.com/questions/9691316/how-to-listen-for-changes-to-a-mongodb-collection
[03:04:17] <defaultro> cool
[04:10:31] <ruphos> I'm running into an issue using the MongoDB perl driver. I'm trying to page through a large number of documents (125M) to move an array field to its own collection. When I set a batch to be much over 100 docs, I get "can't get db response, not connected"
[04:11:08] <ruphos> it looks like an older issue (https://jira.mongodb.org/browse/PERL-196), but I'm using the latest version of the driver so that should be fixed.
[04:11:37] <ruphos> Anyone around to help me muddle through?
[05:51:27] <ctppppp> Hi everyone, I was wondering: when upgrading a mongo replica set from 2.0, do I need to go from 2.0 to 2.2 first, or can I go straight from 2.0 to 2.4 by following the upgrade procedure in the manual?
[06:30:00] <ctppppp> the answer is yes!
[06:39:26] <mark____> http://pastebin.com/uBikAYBj help with the output???
[06:59:56] <johnnode> Hello. I did try to apply this case study (http://docs.mongodb.org/ecosystem/use-cases/storing-comments/#hybrid-schema-design) with this test : http://pastebin.com/rUQheti9. But I got inconsistent data: the results for parent & child don't match. Did I miss some write/update concern in the code? Thanks a lot for your help.
[07:38:27] <defaultro> anyone awake?
[08:28:28] <hello_kitty> can mongodb (or a lightweight likeness) be built into an offline desktop app and packaged/configured in an installer?
[09:14:56] <quattr8> I'm getting really weird behaviour from mongodb :/ I have a session tracking collection where I do findOne's only on the _id (= shard key). Before getting the session data I fetch the site data, also with findOne, and I cache the site data in memcached. With the cache on, the tracking findOne takes about 0.04 seconds to fetch, but when I disable memcached on the site data and force it to come from mongodb, the tracking findOne takes 0.001 seconds and the site findOne is now slow
[09:17:21] <quattr8> how can querying the tracking collection suddenly be fast when querying another collection before?
[09:38:59] <KamZou> Hi, I have the following command : mongodump -d stats -c $CUR_COLLECTION --query '{"_id.d": '$YDAY_TIMESTAMP'}' -o ... <<< It works with integers but not with strings. How can I do this
[09:39:16] <KamZou> if $YDAY_TIMESTAMP is in string format?
[10:49:08] <ixti> hi all
[11:13:19] <Guest22771> list
[12:17:48] <phrearch> hello
[12:18:32] <phrearch> is there some example populate function I can use to, for instance, populate userids with the appropriate user objects from another query? I know about mongoose, but can't use that in my case
[13:16:49] <Kim^J> I need some help with $elemMatch. According to http://docs.mongodb.org/manual/reference/projection/positional/#proj._S_ and http://docs.mongodb.org/manual/reference/method/db.collection.find/#db.collection.find and http://docs.mongodb.org/manual/reference/projection/elemMatch/ I'm using them right. But my result is a bit odd. Instead of selecting the matching array element it selects every array element, but with no values. :S
[13:17:03] <Kim^J> Query+Data https://gist.github.com/hagbarddenstore/6884462
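
For comparison, the documented shape of an $elemMatch projection, which returns only the first matching array element (a generic sketch; the collection and field names are illustrative, not Kim^J's schema):

    // the query matches the document; the projection then trims the array
    // down to the first element satisfying the $elemMatch condition
    db.schools.find(
        { "students.grade": { $gte: 90 } },
        { students: { $elemMatch: { grade: { $gte: 90 } } } }
    )
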
[13:50:22] <birdy111> I am facing an issue with mongodb: during continuous writes, if I drop some older database, the write performance drastically drops.
[13:52:20] <birdy111> We want a maximum of 365 databases to exist at a time... so as soon as the 366th database comes in we want the first one to be dropped.
[13:52:48] <hipsters_> a database a day, what's the use case there?
[13:53:08] <birdy111> we want to retain data for maximum 1 year
[13:54:18] <birdy111> For example today's db is 08-10-13... so the database of 08-10-12 should be removed....
[13:56:00] <Kim^J> birdy111: Uhm?
[13:56:49] <Kim^J> birdy111: If you let each database exist on different drives it shouldn't happen.
[13:57:13] <Kim^J> birdy111: I think the performance drops because it now writes twice the amount of data.
[13:57:26] <Kim^J> Or something, depending on how it removes the database.
[13:59:14] <birdy111> we remove database by issuing "db.dropDatabase()" from mongos... after running "use 08-10-12"
[14:01:54] <birdy111> For drop database, mongodb has to just remove the database file and clean config db, which should finish within a second... according to some experts... But in my case it takes 4-5 minutes on each shard
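
birdy111's daily rotation boils down to something like this from a mongos (a sketch using the date-named databases from the log):

    // keep 365 days: before writing to today's db (08-10-13),
    // drop the database from exactly one year ago
    db.getSiblingDB("08-10-12").dropDatabase()
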
[14:18:37] <talbott> hi all
[14:18:59] <talbott> number6 was kindly helping me with a rep issue yesterday
[14:19:04] <talbott> (im a bit of a newbie)
[14:19:35] <talbott> so i have a primary (2gb of data) and secondary (empty)
[14:19:54] <talbott> i've done an rs.initiate() on the primary
[14:20:02] <talbott> but i stupidly wasn't using key based auth
[14:20:15] <talbott> (actually i'd turned it to no-auth while i set up)
[14:20:23] <talbott> so i just turned auth on and restarted mongo on the primary
[14:20:33] <talbott> and for some reason it thinks it's a secondary :(
[14:20:42] <talbott> and here i am
[14:24:26] <Number6> Are both servers showing up on rs.status()? If so, you may need to add another server (or arbiter) to balance out the votes
[14:28:19] <talbott> oh hey number6!
[14:28:30] <Number6> Hey, talbott
[14:28:40] <talbott> it's weird that when i start the mongo command line on the primary though, that the prompt is :SECONDARY!
[14:30:21] <talbott> switched to db datasiftmongodb
[14:30:21] <talbott> twizoo:SECONDARY>
[14:31:21] <Number6> Ok - does your rs.status() show two nodes? If so, you'll need to add in another one, for a majority to take place
[14:31:36] <talbott> twizoo:SECONDARY> rs.status()
[14:31:36] <talbott> {
[14:31:36] <talbott>     "set" : "twizoo",
[14:31:37] <talbott>     "date" : ISODate("2013-10-08T14:29:49Z"),
[14:31:39] <talbott>     "myState" : 2,
[14:31:42] <talbott>     "members" : [
[14:31:43] <talbott>         {
[14:31:46] <talbott>             "_id" : 0,
[14:31:48] <talbott>             "name" : "twizoo:27017",
[14:31:50] <talbott>             "health" : 1,
[14:31:52] <talbott>             "state" : 2,
[14:31:55] <talbott>             "stateStr" : "SECONDARY",
[14:31:56] <talbott>             "uptime" : 28,
[14:31:58] <talbott>             "optime" : Timestamp(1381147845, 1),
[14:31:59] <talbott>             "optimeDate" : ISODate("2013-10-07T12:10:45Z"),
[14:32:02] <talbott>             "errmsg" : "not electing self, not all members up and we have been up less than 5 minutes",
[14:32:03] <talbott>             "self" : true
[14:32:05] <talbott>         },
[14:32:08] <talbott>         {
[14:32:10] <talbott>             "_id" : 1,
[14:32:11] <talbott>             "name" : "mongo.secondary1.twizoo.com:27017",
[14:32:13] <talbott>             "health" : 1,
[14:32:16] <talbott>             "state" : 6,
[14:32:18] <talbott>             "stateStr" : "UNKNOWN",
[14:32:20] <talbott>             "uptime" : 28,
[14:32:21] <talbott>             "optime" : Timestamp(0, 0),
[14:32:23] <talbott>             "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
[14:32:26] <talbott>             "lastHeartbeat" : ISODate("2013-10-08T14:29:49Z"),
[14:32:28] <talbott>             "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
[14:32:29] <talbott>             "pingMs" : 0,
[14:32:31] <talbott>             "lastHeartbeatMessage" : "still initializing"
[14:32:34] <talbott>         }
[14:32:35] <talbott>     ],
[14:32:37] <talbott>     "ok" : 1
[14:32:39] <talbott> }
[14:32:18] <defaultro> hey guys, I'm following this guy's howto. I tried inserting data but nothing happens on my php page. http://jwage.com/post/30490196727/mongodb-tailable-cursors
[14:32:27] <defaultro> talbott: post it on pastebin dude
[14:32:41] <talbott> ah ok
[14:32:45] <talbott> i was trying to setup admin users etc
[14:32:47] <talbott> but it didn't like it, because it's readonly
[14:32:49] <talbott> ah yes, sorry default
[14:32:50] <defaultro> glad you were kicked out, lol
[14:32:57] <defaultro> you were not kicked out
[14:33:00] <defaultro> :)
[14:33:02] <talbott> yeh sorry!
[14:33:03] <talbott> http://pastebin.com/sjFG0Zqp
[14:33:12] <talbott> and i'm an IRCer of 20 years
[14:33:19] <defaultro> any mongodb master here around?
[14:33:20] <talbott> should know better
[14:33:45] <defaultro> to me, 5 lines is acceptable
[14:33:52] <defaultro> :)
[14:33:53] <Number6> Your cluster is in, essentially, failsafe mode. This kicks in to prevent split-brain scenarios, such as having two primaries.
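
Number6's fix for a two-member set is to add a third vote; an arbiter is the cheapest option since it holds no data (a sketch; the hostname is hypothetical):

    // run from the member that should become primary;
    // the arbiter only votes in elections, it stores nothing
    rs.addArb("arbiter.example.com:27017")
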
[14:36:38] <talbott> yeh
[14:36:57] <talbott> so is it best for me to comment out the replset in the primary's mongo.conf
[14:36:58] <defaultro> btw, when I ran show dbs earlier, I didn't see test. But now, it's showing it. Why?
[14:37:06] <talbott> while i sort out the auth?
[14:37:12] <talbott> and then bring them back up?
[14:40:27] <defaultro> hey guys, how do we create a db? I know how to create the collection but not the db. What is the command?
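
defaultro's question never gets a direct answer in the log: there is no explicit create command; a database (and a collection) springs into existence on the first write. A minimal sketch:

    // switching to a nonexistent database is enough to "use" it;
    // the first insert actually creates it on disk
    use mydb
    db.mycollection.insert({ x: 1 })
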
[14:43:00] <defaultro> how do I verify my collection if it's capped?
[14:45:15] <bmcgee> db.collection.isCapped()
[14:45:20] <bmcgee> defaultro ^^
[14:45:32] <defaultro> cool
[14:45:43] <defaultro> it's true
[14:45:49] <bmcgee> And on a related note, I need some help with tailable cursors, mine aren't behaving as I would expect
[14:45:55] <defaultro> not sure why I can't tail it in PHP
[14:46:04] <defaultro> it's not displaying the new data I inserted
[14:46:14] <bmcgee> defaultro: seems we have the same problem, different languages
[14:46:14] <defaultro> oh, we are working on the same issue
[14:46:19] <defaultro> lol :D
[14:46:22] <bmcgee> ha
[14:46:32] <defaultro> wait, meeting
[14:46:34] <defaultro> brb
[14:46:41] <bmcgee> so what we need now is a third person who can help us both :)
[14:48:45] <bmcgee> I think my issue is down to my key. I know that capped collections maintain the insertion order. If I am using a capped collection as a kind of oplog and want to restart the cursor from a known point, am I able to do something like find({ _id: { $gt: "bla" }}).sort({ $natural: 1
[14:49:13] <bmcgee> soz hit enter early find({ _id: { $gt: "bla" }}).sort({ $natural: 1}).addOption(Bytes.QUERYOPTION_TAILABLE)
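
In mongo-shell terms, bmcgee's restart-from-a-known-point pattern would look roughly like this (a sketch; lastSeenId stands in for the stored _id, and the $gt trick only lines up with insertion order if the _ids themselves increase, e.g. ObjectIds from a single writer):

    // resume a tailable cursor after the last document we processed;
    // capped collections preserve insertion order, hence the $natural sort
    var cursor = db.mylog.find({ _id: { $gt: lastSeenId } })
                         .sort({ $natural: 1 })
                         .addOption(DBQuery.Option.tailable)
                         .addOption(DBQuery.Option.awaitData);
    while (cursor.hasNext()) { printjson(cursor.next()); }
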
[14:53:29] <bmcgee> anyone?
[14:53:41] <defaultro> i'm back
[14:53:51] <defaultro> yes, maybe it's working and we're just forgetting something
[14:54:01] <defaultro> like a config that needs to be enabled
[14:54:53] <defaultro> it worked!!!
[14:54:56] <bmcgee> when i startup i can read everything that's there, just nothing that's added
[14:54:58] <defaultro> i saw the value in my php
[14:55:03] <defaultro> but not sure when it got displayed
[14:55:11] <defaultro> i'll try again
[14:56:33] <defaultro> i'll use perl, it's better for this kind of thing
[14:57:27] <talbott> when auth is on, does a user need a special role to do an rs.status()?
[14:57:30] <talbott> > rs.status()
[14:57:30] <talbott> { "ok" : 0, "errmsg" : "unauthorized" }
[14:57:30] <talbott> >
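
On 2.4-era auth, rs.status() needs a cluster-level role, which talbott's user presumably lacks. A sketch of granting it, using the addUser syntax as it was in 2.4 (username and password are placeholders):

    // run against the admin database
    use admin
    db.addUser({ user: "clusteradmin", pwd: "secret", roles: [ "clusterAdmin" ] })
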
[15:01:27] <remonvv> \o
[15:14:09] <appleboy> is there a way to disable the logging of "build index tmp.mr.records"?
[15:27:35] <bakis> hey guys i'm using mongohq on heroku, is there a way to dump the data without the user credentials? like the heroku username/password etc? or restore the data in a database while skipping that step?
[15:30:59] <appleboy> also, if a map reduce job errors out on insert due to duplicate key, will it still try to enter everything else?
[15:31:12] <appleboy> or does it just fail the rest of the job at that point?
[15:36:53] <chaotic_good> hi
[15:37:05] <chaotic_good> how can I make sure mongo doesn't expand over a given drive size?
[15:37:15] <chaotic_good> say I have 280G /data/mongo drive
[15:41:02] <remonvv> chaotic_good: By making sure you don't store more data than fits on a drive I suppose.
[15:46:20] <chaotic_good> is there a setting that limits mongo?
[15:46:25] <chaotic_good> I have 3 exact same
[15:46:30] <chaotic_good> very basic configs I inherited
[15:46:42] <chaotic_good> so 3 x 280 potential storage
[15:46:49] <chaotic_good> is there a way to say don't flush more than 280G?
[15:46:52] <chaotic_good> or something?
[15:47:08] <cheeser> you can limit the size of a shard but not a bare mongod afaict
[15:49:43] <remonvv> It's rather like asking how you can prevent a bucket from overflowing if you put too much water into it. The cause is you putting too much water in it, so prevent that from happening.
[15:54:01] <chaotic_good> there is no shield to be raised?
[15:54:07] <chaotic_good> no cork?
[15:54:12] <chaotic_good> or finger in the dike?
[15:54:18] <chaotic_good> http://www.youtube.com/watch?v=kt3745NRxpo
[15:54:25] <cheeser> well mongod will complain when it runs out of space
[15:55:45] <chaotic_good> http://docs.mongodb.org/manual/core/capped-collections/ wait a second
[15:55:48] <chaotic_good> whats this???
[15:55:56] <remonvv> Right, but you don't want to hit that point really. If you want to do a write but you can't because you ran out of disk space, you need more disk space or less data. There's no "Oh, we've reached 20GB, I'm going to reject writes from now on" sort of functionality.
[15:56:13] <chaotic_good> the devs had mongo lock up
[15:56:18] <chaotic_good> cuz it hit its 100G limit
[15:56:25] <chaotic_good> and like we couldn't delete stuff
[15:56:31] <chaotic_good> it was locked by running mongo
[15:56:34] <chaotic_good> very embarrassing
[15:56:43] <chaotic_good> so I want to kinda stop that, you know?
[15:56:45] <chaotic_good> :)
[15:56:46] <chaotic_good> now
[15:57:00] <chaotic_good> if they are simply storing files......is the mongo gridFS a better way to go?
[15:57:01] <chaotic_good> or?
[15:57:02] <remonvv> chaotic_good: That's a specific type of collection that is size limited, and thus loses data (FIFO) if it becomes full.
[15:57:14] <chaotic_good> oh?
[15:57:23] <chaotic_good> you mean capped collections?
[15:57:31] <remonvv> Err. Well. Depends who you ask. I wouldn't really store files in MongoDB usually.
[15:57:46] <chaotic_good> redis?
[15:57:48] <chaotic_good> what?
[15:57:48] <remonvv> Yes, that's very specific to capped collections. It's the collection type used for mongo's oplog for example.
[15:57:54] <chaotic_good> hm
[15:57:59] <remonvv> Well no, something that likes to store and serve files.
[15:58:04] <chaotic_good> mogileFS?
[15:58:06] <chaotic_good> ceph?
[15:58:11] <chaotic_good> nfs?
[15:58:13] <chaotic_good> 9p?
[15:58:16] <chaotic_good> openafs?
[15:59:31] <remonvv> Whichever, some sort of storage solution optimized for file storage and serving. In my humble opinion GridFS (and databases in general) aren't exactly the sweetspot for that sort of thing.
[16:09:25] <chaotic_good> hmm
[16:09:31] <chaotic_good> bugger
[16:09:35] <chaotic_good> now IM stuck
[16:09:37] <chaotic_good> ah well
[16:10:00] <defaultro> amazing, tailable works pretty well with Perl :D
[16:10:10] <defaultro> now, I need to work on oplog
[16:10:53] <defaultro> this is the output of the cursor though, how can I use this? HASH(0x7fe9397068b8)
[16:14:02] <talbott> been looking at the addUser command
[16:14:22] <talbott> is there a command you can use just to add a new role to an existing user?
[16:14:28] <talbott> i can't see an updateUser command?
[16:14:31] <quickdry21> When migrating the primary of a replica set to a new machine, does the balancer need to be disabled?
[16:14:48] <ruphos> defaultro: http://search.cpan.org/~friedo/MongoDB-0.702.2/lib/MongoDB/Cursor.pm#METHODS
[16:15:02] <defaultro> tahnks
[16:15:03] <defaultro> thanks
[16:16:55] <ruphos> I'm running into an issue using the MongoDB perl driver. I'm trying to page through a large number of documents (125M) to move an array field to its own collection. When I set a batch to be much over 100 docs, I get "can't get db response, not connected" or "missed the response we wanted, please try again"
[16:17:06] <ruphos> it looks like an older issue (https://jira.mongodb.org/browse/PERL-196), but I'm using the latest version of the driver so that should be fixed.
[16:17:10] <defaultro> i saw that question the other day :)
[16:17:23] <ruphos> I still can't figure it out. :/
[16:17:35] <ruphos> it should be fixed, but it isn't
[16:17:44] <defaultro> wait for someone here
[16:17:51] <defaultro> mongo db god :)
[16:17:59] <defaultro> or could be perl related
[16:18:39] <ruphos> it seems like an issue with the perl driver, or maybe how the master/slaves are set up
[16:18:54] <cheeser> mike just left for lunch :D
[16:21:03] <remonvv> Damnit mike!
[16:25:36] <defaultro> what does capped mean in MongoDB world?
[16:26:34] <remonvv> The only relevant context is capped collections, which are size limited collections with FIFO semantics.
[16:26:59] <remonvv> They are limited in some ways but do offer tailable cursors.
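
Creating one looks like this (a sketch; the collection name and sizes are illustrative):

    // size is a hard byte limit; when it fills, the oldest
    // documents are evicted in insertion (FIFO) order
    db.createCollection("events", { capped: true, size: 100 * 1024 * 1024 })
    db.events.isCapped()   // true
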
[16:32:59] <defaultro> hey folks, how do I tail oplog?
[16:33:30] <remonvv> db.oplog.find().addOption(2)
[16:33:48] <remonvv> that returns the tailable cursor
[16:34:24] <remonvv> There might also be a way to just have mongodb log oplog entries but I've never used or tried that.
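
A detail worth noting: on a replica-set member the oplog actually lives in the local database as oplog.rs, so the tailing query would be more like the following (a sketch; the ns filter is optional and the namespace shown is hypothetical):

    use local
    // addOption(2), as above, sets the same tailable flag by its numeric value
    var cursor = db.oplog.rs.find({ ns: "mydb.my_collection" })
                            .addOption(DBQuery.Option.tailable);
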
[16:47:16] <drag> I have two documents, A and B. B references A through a field. I want to delete A and, when that happens, I want to delete all B that references A as well. Is there a way for me to do it via aggregation framework?
[16:51:25] <remonvv> No.
[16:52:11] <drag> remonvv, is there a way to do something like that without embedding B in A?
[16:52:14] <drag> and not retrieving all B referencing a and deleting them?
[16:53:02] <remonvv> No. Your only option is embedding if it needs to be an atomic removal or execute two deletes if you do not have that requirement.
[16:53:10] <remonvv> One for A, one for all B's references by A.
[16:53:22] <remonvv> referenced*
[16:53:43] <drag> remonvv, thanks
[16:53:52] <remonvv> Of course, if B's can no longer be references once the A is gone you can opt for removing the orphaned B's at a later time.
[16:54:01] <remonvv> referenced*
[16:54:04] <remonvv> damn that D key
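
remonvv's two-delete approach in shell form (a sketch; aId is the _id of the A document being removed, and a_ref is an assumed name for B's reference field):

    // not atomic: a crash between the two removes leaves orphaned B's,
    // which is why cleaning orphans up lazily is offered as the alternative
    db.A.remove({ _id: aId });
    db.B.remove({ a_ref: aId });
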
[16:54:09] <talbott> hey number6 are you around?
[16:54:13] <talbott> your friendly numpty again
[16:58:59] <chaotic_good> mongoDB!!!
[16:59:01] <chaotic_good> oh yeahheaheh!!
[16:59:03] <chaotic_good> ok
[16:59:11] <chaotic_good> so if i have 3 x 280G mongodb servers with basic configs
[16:59:16] <chaotic_good> do I have 1 big block of storage
[16:59:23] <chaotic_good> or 3 replicated blocks of storage?
[16:59:37] <remonvv> May I suggest you simply read the documentation kind sir?
[17:03:46] <talbott> damn flakey internet
[17:03:50] <talbott> connection
[17:03:55] <chaotic_good> docs are kinda wak
[17:25:40] <clarkk> can someone please tell me what the difference is between the _id and the obj.parent[0]._id in this database? http://pastebin.com/raw.php?i=qEFpSH9i
[17:26:00] <clarkk> is there any advantage/disadvantage to having one or the other?
[17:29:40] <cheeser> welll, _id is the id of that document. and the other seems to be more of a foreign key.
[17:30:05] <clarkk> cheeser: yes, but I don't understand the difference in the syntax
[17:30:17] <cheeser> it's just a nested document.
[17:30:56] <clarkk> cheeser: what is the difference between what looks like a simple string for the _id and the ObjectID('xxxxx') of the sub document?
[17:31:40] <clarkk> what is the advantage/disadvantage of each?
[17:32:02] <ruphos> ObjectID is a specific datatype in mongodb
[17:32:31] <ruphos> as opposed to a string / int / etc
[17:33:11] <clarkk> ruphos: the _id is supposedly the originating unique id, that the subdocument references
[17:33:28] <clarkk> ruphos: is there any disadvantage to leaving the _id as a string?
[17:33:49] <redsand> clarkk: it's 2x as big
[17:34:23] <clarkk> ugh
[17:34:31] <ruphos> not sure of the advantages / disadvantages
[17:34:45] <clarkk> well it sounds like storage and performance is the disadvantage
[17:36:01] <ruphos> using objectid is more optimized
[17:36:05] <ruphos> e.g., http://stackoverflow.com/questions/7764781/store-id-as-object-or-string-in-mongodb
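
redsand's "2x as big" is concrete: an ObjectId is a fixed 12-byte BSON type, while the same identifier spelled out as its 24-character hex string stores 24 bytes plus string overhead. A sketch:

    // same identifier, two encodings
    db.things.insert({ _id: ObjectId("525404579a368e6906e98c5f") })   // 12 bytes
    db.things.insert({ _id: "525404579a368e6906e98c5f" })             // 24-byte string
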
[17:36:39] <clarkk> I have tried all day to find something that will allow me to load fixtures (json data) from a file, where subdocuments reference the first-class objects. Does anyone know how I can do this, please?
[17:37:08] <clarkk> I've tried two node packages - pow-fixtures and mongoose-fixtures, but neither do it correctly
[17:41:08] <clarkk> actually, I've just tweaked mongoose-fixtures and now it works! Thank god! :)
[17:41:35] <clarkk> thanks for your advice, ruphos and redsand
[18:04:01] <neeky> at the application level, is having default values for all document fields in my schema a good practice?
[18:04:24] <neeky> even if default is null
[18:08:32] <chaotic_good> yesno
[18:08:59] <chaotic_good> at the app level you should strive to keep blobs in disk and logic in ram, including the string key to the blob location
[18:19:18] <neeky> chaotic_good, "blobs in disk and logic in ram", what does that mean?
[18:25:37] <defaultro> hey guys, if we update(NOT insert) a value in the collection, will oplog see it?
[18:26:11] <chaotic_good> blobs are binary objects
[18:26:16] <chaotic_good> things like pics n videos
[18:26:24] <chaotic_good> logic is your program logic data
[18:26:35] <chaotic_good> such as lists, hashes, red-black trees
[18:26:37] <chaotic_good> etc.
[18:26:45] <chaotic_good> haskell sets
[18:26:47] <chaotic_good> shit like that
[18:27:12] <ish1301> #netbeans
[18:27:17] <ish1301> join #netbeans
[18:27:25] <cheeser> third time's the charm
[18:28:38] <chaotic_good> supercjacafragilisticexpialisdshious
[19:03:19] <squeakytoy> Hey all. Is this a typical situation where a relational database is better? I have a collection of Videos where each Video can have a number of Tags. Here is the killer requirement: I should be able to find Videos by Tags.
[19:04:10] <cheeser> depends on how you want to model tags, but you could just embed the array of tags directly on the video's document. mongodb lets you query against those, too.
[19:04:29] <squeakytoy> oh, by critera right?
[19:04:38] <cheeser> by what now?
[19:04:58] <defaultro> hey guys, how do I replicate mongodb?
[19:05:09] <squeakytoy> Criteria
[19:05:25] <squeakytoy> ok, thanks - ill try
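
cheeser's embedded-tags model in shell form (a sketch; the field names are assumptions):

    db.videos.insert({ title: "cat video", tags: ["cats", "funny"] })
    db.videos.ensureIndex({ tags: 1 })        // multikey index over the array
    db.videos.find({ tags: "funny" })         // matches any element of the array
    db.videos.find({ tags: { $all: ["cats", "funny"] } })   // must carry both tags
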
[19:05:41] <cheeser> defaultro: as in, replica sets?
[19:05:50] <defaultro> yes
[19:06:01] <defaultro> I would like to try this, http://danielweberonline.wordpress.com/tag/tailable-cursors/
[19:06:10] <cheeser> http://docs.mongodb.org/manual/replication/
[19:06:20] <defaultro> I want to tail oplog.rs so I can see updates/deletes and inserts
[19:06:22] <defaultro> k
[19:06:54] <defaultro> do I need another MongoDB or can my existing one be converted to a replicaset?
[19:07:17] <cheeser> i think you can convert one.
[19:07:26] <cheeser> back up your data before trying, of course :)
[19:09:16] <defaultro> ok
[19:09:25] <defaultro> my data isn't really important, it's a new mongodb
[19:09:57] <cheeser> fair enough.
[19:11:09] <defaultro> however, on our existing work setup, data is big. When I make a new replica, do I still need to reconfigure our existing mongodb?
[19:11:50] <defaultro> http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
[19:12:16] <cheeser> you can add new replica set members as you go
[19:13:02] <defaultro> without converting my standalone?
[19:14:03] <defaultro> since I'm on the experimtation part, what would you suggest to me if what I'm doing is tailing the oplog to monitor updates/inserts/deletes? Should I convert to replicaset or bring up another mongodb server?
[19:14:28] <cheeser> your standalone needs to be a replSet member first.
[19:14:35] <cheeser> then you can just rs.add() the new members
[19:15:33] <defaultro> in your first sentence, do you mean I should convert my standalone to a replica set?
[19:20:15] <cheeser> follow that link you posted.
[19:20:18] <defaultro> got this after running rs.initiate(), http://pastebin.com/e1wJ4PHy
[19:22:38] <defaultro> reading this now, http://stackoverflow.com/questions/13603096/starting-over-with-replica-configuration-in-mongodb
[19:24:11] <defaultro> still failed with the same error
[19:25:23] <defaultro> i changed it to localhost and it now works
[19:26:49] <cheeser> great. i just converted a standalone to rs and had a fix for you :)
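
The conversion defaultro ran, per the linked tutorial (a sketch; rs0 is from the log, and the explicit localhost member list mirrors the fix he mentions above):

    // 1. restart the standalone as: mongod --dbpath <path> --replSet rs0
    // 2. then, from the shell on that node:
    rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "localhost:27017" } ] })
    rs.status()    // should show this node coming up as PRIMARY
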
[19:30:07] <defaultro> :D
[19:30:17] <defaultro> now, I'm reading the next instructions
[19:30:24] <defaultro> Expand the Replica Set
[19:30:24] <defaultro> Add additional replica set members by doing the following:
[19:30:46] <defaultro> should this be done on the same machine or should it be done on another mongodb machine?
[19:30:53] <cheeser> on the same machine.
[19:30:57] <defaultro> k
[19:31:11] <defaultro> that's if I want to add another replica? Correct?
[19:31:12] <cheeser> that other instance will need to be brought up using the same --replSet option, though.
[19:31:17] <defaultro> got it
[19:31:23] <cheeser> right. that'll add a secondary to the set
[19:31:35] <defaultro> that means, I won't need it now since my focus is on oplog.rs and tailing it
[19:31:55] <defaultro> i just want to see real time updates/deletes/inserts in oplog
[19:33:24] <cheeser> you don't need a secondary for that.
[19:33:29] <cheeser> you should have an oplog now
[19:34:54] <defaultro> yup, i see it under local
[19:35:15] <dllama> hi
[19:35:18] <defaultro> now, I need to figure out the perl code for tailing it. I have a code for tailing the regular tailable cursor
[19:35:38] <defaultro> hopefully, someone has made it and shared to the web
[19:35:40] <defaultro> :)
[19:36:18] <cheeser> ugh, perl. you're on your own there. :D
[19:36:28] <dllama> hi, i'm using mongoid in a rails4, would this be the wrong room to get some info on eager loading?
[19:36:45] <defaultro> oh, I started this mongodb with --replSet rs0. Should I kill and restart it without this parameter?
[19:36:55] <defaultro> What language are you using cheeser ?
[19:40:47] <ruphos> defaultro: use tailable(1) when you run the query and it should work
[19:41:28] <dllama> guys, anyone? eager loading? https://gist.github.com/mvoloz/3f920a722b233e37f3ea — this definitely does not seem the least bit efficient when fetching records through an association
[19:41:43] <ruphos> e.g., my $cursor = $coll->find()->tailable(1);
[19:42:30] <defaultro> ruphos: I do, this is my code from my first one but it's not querying oplog.rs, --replSet rs0
[19:42:35] <defaultro> oops, lol
[19:42:50] <defaultro> my $cursor = $collection->find->tailable(1);
[19:43:09] <defaultro> and it's working great for the cursor. But how do I tail oplog.rs?
[19:43:41] <defaultro> this is my working code, http://pastebin.com/ZpfJtF7L
[19:45:08] <ruphos> so the cursor works fine, but you're just not getting any results?
[19:45:38] <defaultro> no, I need to point it to oplog.rs rather than my my_collection
[19:45:49] <defaultro> but i don't know the proper format
[19:47:04] <defaultro> ah, http://phpmanuals.net/es/mongocursor.awaitdata.html
[19:47:08] <defaultro> maybe that will help
[19:47:53] <ruphos> I think you just need to use 'rs.oplog' as collection instead of 'my_collection'
[19:48:03] <ruphos> oplog.rs, excuse me
[19:48:15] <defaultro> yup
[19:48:32] <defaultro> do I need to restart mongodb since it's currently running with --replSet rs0?
[19:48:52] <ruphos> no, it should just be a setting used by the perl driver
[19:48:55] <defaultro> i converted my standalone to a replicaset half an hour ago
[19:48:56] <ruphos> I would think, anyway
[19:49:01] <defaultro> k
[19:49:10] <ruphos> though if that doesn't work, give it a try I guess?
[19:49:54] <defaultro> yup
[19:58:18] <defaultro> code is running without errors, I'll update a field
[19:58:33] <defaultro> googling on how to update a row :D
[19:58:46] <ruphos> woo!
[19:59:30] <defaultro> so looks like rows are the documents in mongodb world
[20:00:53] <dllama> so is anyone here familiar with rails?
[20:01:14] <dllama> I can't seem to get an answer anywhere to my eager loading issue, and it's quite frustrating
[20:02:37] <defaultro> do I need to add "_id" : ObjectId("525404579a368e6906e98c5f"), to my update command?
[20:05:20] <defaultro> I got it working! :D
[20:05:56] <defaultro> now, I can install http modules in perl so I can write a code that will connect to NodeJS when there is an update in mongodb :D
[20:18:59] <defaultro> i'm having some minor issues with the field length. The lastname field is not accepting more than 7 letters: failing update: objects in a capped ns cannot grow
[20:19:17] <defaultro> how do I specify the size of the fields?
[20:24:17] <cheeser> why are you using a capped collection?
[20:29:12] <defaultro> to enable tailing
[20:30:10] <defaultro> is it possible to specify the max characters of a field?
[20:30:17] <cheeser> no
[20:39:23] <defaultro> oh
[20:39:42] <defaultro> will oplog still work even if i create a non-capped collection?
[20:39:56] <defaultro> I mean, will i be able to still tail oplog?
[20:39:59] <defaultro> I'll try
[20:44:24] <diegows> is it safe to remove all the files from a DB from the file system? I need a repair, don't have enough space and I don't need one of the DB anymore
[20:58:51] <cheeser> defaultro: the oplog does its thing regardless of what kind of collection you create
[20:58:57] <defaultro> ah cool
[20:59:18] <defaultro> so if I make a non-capped collection, there will be no limit on string length?
[21:00:13] <cheeser> correct
[21:00:24] <cheeser> there isn't a limit in capped collections either. you just have update issues on those.
[21:02:45] <defaultro> the update was really simple
[21:03:13] <defaultro> db.my_collection.update( { "name" : "ron" }, { $set: { name: "ron ron ron ron ron ron ron ron" } } )
[21:03:17] <defaultro> that failed
[21:03:30] <defaultro> error was: failing update: objects in a capped ns cannot grow
[21:03:47] <defaultro> but when I shortened it, it took it
[21:04:15] <davi1015> Can anyone tell me what moveChunk.commit is doing in the changelog? Sometimes I see an hour pass from that point to the point where moveChunk.from shows up (I'm doing a loop of moveChunks manually, so that's actually the time the commit took to run)
[21:40:06] <cordoval> guys
[21:40:53] <cordoval> can i ask a question about mapped superclasses? it's not clear from the docs
[21:41:05] <cordoval> basically i am using no annotations
[21:41:45] <cheeser> this is in ... java? c#?
[21:41:52] <cordoval> php
[21:41:58] <cheeser> oh! i'm out! :D
[21:42:04] <cheeser> seriously, dunno about the php bits.
[21:42:13] <cordoval> someone does here
[21:42:15] <cheeser> anyway, i have to catch a train...
[21:42:17] <cordoval> jmikola: maybe?
[21:58:58] <tavooca> hi, how do I import a dbf into mongo?
[21:59:13] <cordoval> what is dbf?
[21:59:13] <tavooca> script in python?
[21:59:16] <cordoval> no
[21:59:17] <cordoval> php
[21:59:24] <tavooca> vfp
[21:59:48] <tavooca> microsoft visual fox pro database
[21:59:54] <cordoval> oh no
[22:00:59] <tavooca> ok thanks
[22:01:11] <tavooca> cordoval
[22:01:24] <tavooca> script python?
[22:01:25] <cordoval> do you have examples of xml mapping with doctrine
[22:01:26] <cordoval> <field name="id" id="true" />
[22:01:28] <cordoval> is for id
[22:01:36] <tavooca> ok
[22:01:42] <tavooca> xml
[22:01:45] <cordoval> and i now just need to get the mapping lines for when inheriting a superclass mapping
[22:03:18] <tavooca> some example
[22:03:36] <tavooca> script in python
[22:03:57] <tavooca> or javascript
[22:36:58] <dllama> hey guys, i'm getting a "too much data for sort() with no index" error. am not really sure how to fix it. i've tried running db:create_indexes but that didn't solve it
[22:57:18] <cheeser> dllama: what's your query look like?
[22:57:31] <dllama> cheeser, i already got it fixed, thanks though!
[22:57:41] <dllama> i didn't have an index in my model, adding that solved a lot of issues
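
For reference, dllama's fix in shell terms: in-memory sorts were capped (around 32MB in this era), so the sort key needs an index (a sketch; the collection and field names are assumptions):

    // with an index on the sort key, the sort walks the index
    // instead of buffering everything in memory
    db.records.ensureIndex({ created_at: -1 })
    db.records.find().sort({ created_at: -1 }).limit(50)
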