PMXBOT Log file Viewer


#mongodb logs for Tuesday the 3rd of February, 2015

[00:13:10] <joannac> hahuang61: depends what the usecase is
[00:15:18] <hahuang61> joannac: we have 2 db clusters, and trying to cut over on 1 day for a seamless upgrade, so I've been dumping the database on one side to the other, taking deltas every so often, and we'd take smaller and smaller deltas
[00:15:43] <joannac> why not just use replication?
[00:15:47] <hahuang61> joannac: until the actual cutover, but basically this means that we have to shut down one side, refuse all traffic and dump the entire collections that are mutable in our app, restore them, then bring it back
[00:16:08] <hahuang61> joannac: can I set up replication between 2.0.4 and 2.6.7?
[00:16:15] <hahuang61> joannac: like a hidden replica set member?
[00:16:32] <joannac> no, but neither can you mongodump from 2.0.4 and restore to 2.6.7
[00:16:54] <hahuang61> joannac: what? are you sure? Seems to be working fine.
[00:17:10] <Boomtime> it might work
[00:17:30] <hahuang61> we actually did it for our QA database and tried this and it worked just fine, pretty seamless
[00:17:41] <hahuang61> but much easier to control data flow in QA.
[00:17:41] <Boomtime> what works now is not assured to keep working though
[00:18:38] <gozu> go go gadget "custom oplog applier"!
[00:18:42] <hahuang61> oh god
[00:25:36] <joannac> hahuang61: sharded cluster?
[00:26:01] <hahuang61> joannac: yups
[00:26:59] <joannac> dumping through a mongos, restoring through a mongos?
[00:27:08] <joannac> config db regenerated from scratch?
[00:27:20] <hahuang61> yups
[00:27:55] <hahuang61> joannac: wellllllllllll NOT exactly.
[00:28:30] <hahuang61> joannac: dumping from an unsharded mongo that takes a daily dump from a mongos for a mongo cluster. Don't ask me why they're doing this. No idea, they did this before I joined.
[00:28:36] <hahuang61> joannac: but then yes, restoring through a mongos.
[00:29:01] <hahuang61> joannac: the unsharded mongo is basically used for manual queries
[00:29:14] <joannac> okay, i will revise to "not recommended, but if it works, good for you"
[00:29:33] <gozu> hahuang61: aka "reporting instance"
[00:29:54] <hahuang61> gozu: yeah, i guess they did it since you can't attach a replica of all shards.
[00:30:19] <hahuang61> is there a planned release of this upsert on mongorestore?
[00:30:28] <hahuang61> nope "planned but not scheduled"
[00:30:30] <hahuang61> dammit
[00:32:33] <gozu> now that i think about it, i will have to ask the guys running this 12-shard 3-replica various-extras clustah how their reporting thingie (one of the extras) works ... is there some way to tell a mongos to be a read-only frontend for rs-secondaries?
[00:33:25] <hahuang61> I'd love to hear it. Lemme know when you find out :)
[00:33:56] <hahuang61> joannac: so in any case, there's no way huh? Full dump full restore will "work" if it works.
[00:34:15] <hahuang61> mostly it's not a problem, but one of these collections is pretty large.
[00:45:23] <hahuang61> how much slower is mongoexport/mongoimport than mongodump/mongorestore?
[00:45:29] <hahuang61> trying to see if that's an option.
[00:46:27] <gozu> wouldn't expect the difference to matter if you are going through a mongos.
[00:48:17] <Boomtime> hahuang61: note that mongoexport/import transitions via JSON, which cannot express the same type fidelity as the binary format
[00:49:01] <gozu> (which, depending on the "not a schema", may actually be an advantage in this case...)
[00:49:38] <Boomtime> not when your large 64bit ints get rounded via doubles
[00:49:52] <gozu> yeah. that falls under "depends". :)
[00:50:39] <gozu> but going from a "speshul" 2.0.4 setup to 2.6.7 sounds like "depends" is the main theme.
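[Editor's note: Boomtime's warning about type fidelity can be shown in plain JavaScript, where, as in JSON, every number is an IEEE-754 double. The literal below is a made-up value, not one from the log:]

```javascript
// A 64-bit integer that fits in BSON's int64 type but not in a double:
// round-tripping it through JSON text silently rounds it to the nearest
// representable double, which is exactly the hazard Boomtime describes.
const asText = '9007199254740993';          // 2^53 + 1, as it might appear in a JSON dump
const viaJson = JSON.parse(asText);         // parsed into a double
console.log(viaJson === 9007199254740992);  // true: rounded down by 1
console.log(String(viaJson) === asText);    // false: the value changed in transit
```

mongodump/mongorestore avoid this because BSON carries the int64 type natively.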
[00:53:36] <hahuang61> yeah. It's a difficult situation.
[00:53:57] <hahuang61> it's also a difficult situation to be the guy with the most mongo experience when the 2 previous architects have quit
[00:54:18] <hahuang61> AFAIK, they tried upgrading to 2.2 and 2.4 before, in-place
[00:54:52] <hahuang61> and it failed for a few reasons. I don't know what they were, but at one point they blamed the yielding strategy for causing perf issues and rolled back the upgrade.
[01:14:37] <Trindaz> mongoose model runs save() just fine, no errors. debug output shows document had all its properties updated correctly. BUT. the save just never persists in the DB. How can this be?!
[01:15:32] <joannac> Trindaz: checking the right db and collection?
[01:16:17] <Trindaz> yup all definitely correct. I know this because I start by fetching the document from the DB and using console output to verify that it has exactly the right data. Also verified _id fields. It's the right document.
[01:17:32] <Trindaz> lets see what markModified has for me
[01:18:46] <Trindaz> markModified didn't help
[01:18:56] <joannac> http://mongoosejs.com/docs/api.html#model_Model-save
[01:19:13] <joannac> check numberAffected
[01:21:14] <Trindaz> checking ...
[01:23:01] <Trindaz> numberAffected === 0
[01:23:04] <Trindaz> how could this be
[01:26:38] <joannac> Trindaz: sounds like it's not matching the document you want
[01:26:47] <Trindaz> that's what it would seem
[01:27:20] <Trindaz> however I'm 100% certain that the document has an id of 54cc287734072ab454ba1360
[01:27:42] <Trindaz> that's what the _id field is set to when the document is fetched from Mongo to make edits, and that's still the value of _id when the save is attempted
[01:28:00] <joannac> string or objectID?
[01:28:58] <Trindaz> console.log('type of _id', typeof(distribution._id));
[01:29:00] <Trindaz> gives
[01:29:11] <Trindaz> type of _id object
[01:29:27] <Trindaz> maybe that's not enough of a type check?
[01:29:43] <joannac> can you find the document by _id?
[01:30:05] <Trindaz> Running db.distributions.find(ObjectId('54cc287734072ab454ba1360')); in Robomongo gives me the document I want
[01:30:05] <joannac> anyway, this is beyond what I know of mongoose
[01:30:13] <Trindaz> thanks anyway joannac
[01:35:04] <Trindaz> @joannac it was markModified after all FYI because it was a "mixed" field type that I was updating
[04:44:30] <morenoh150> how do I create a multikey index?
[04:44:35] <morenoh150> doc.tags is array
[04:44:56] <morenoh150> do i do db.docs.ensureIndex({tags: ??})
[04:48:27] <Boomtime> morenoh150: db.docs.ensureIndex({tags:1})
[04:49:02] <morenoh150> Boomtime: okay. what would happen if I did -1?
[04:49:12] <Boomtime> same thing in this case
[04:49:33] <Boomtime> the 1 and -1 only really mean something on compound indexes
[04:50:22] <Boomtime> i.e {foo:1} is functionally the same as {foo:-1}, but {foo:1,bar:1} is different from {foo:-1,bar:1}
[04:53:48] <morenoh150> cool
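[Editor's note: Boomtime's point about index direction can be sketched with plain JavaScript comparators (the sample documents are made up): a single-key index can be walked in either direction, so ascending and descending are interchangeable, while flipping one key of a compound index produces an order that is not simply a reversal of the other:]

```javascript
// Build a comparator from an index-style spec such as { foo: 1, bar: -1 }.
const by = (spec) => (a, b) => {
  for (const [key, dir] of Object.entries(spec)) {
    if (a[key] !== b[key]) return (a[key] - b[key]) * dir;
  }
  return 0;
};
const docs = [
  { foo: 1, bar: 2 }, { foo: 1, bar: 1 },
  { foo: 2, bar: 1 }, { foo: 2, bar: 2 },
];
// Single key: {foo: 1} is just {foo: -1} read back-to-front.
const asc  = [...docs].sort(by({ foo: 1 })).map((d) => d.foo);
const desc = [...docs].sort(by({ foo: -1 })).map((d) => d.foo);
console.log(asc.slice().reverse().join() === desc.join()); // true
// Compound: flipping only foo gives an order that is NOT a reversal.
const both  = [...docs].sort(by({ foo: 1,  bar: 1 })).map((d) => `${d.foo},${d.bar}`);
const mixed = [...docs].sort(by({ foo: -1, bar: 1 })).map((d) => `${d.foo},${d.bar}`);
console.log(both.slice().reverse().join() === mixed.join()); // false
```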
[07:36:34] <disappeared> when i use mongorestore to import a bson file into a collection…what does it do if the collection exists already with data?
[07:42:29] <disappeared> appears to append by object id
[07:42:31] <disappeared> so nm
[07:42:34] <Boomtime> http://docs.mongodb.org/manual/reference/program/mongorestore/#behavior
[08:51:31] <bybb> Hi everyone
[08:52:00] <bybb> I'm discovering MMS on my laptop
[08:52:14] <bybb> the automation agent has a problem apparently
[08:52:18] <bybb> port missing from conf file pArgs
[08:52:49] <bybb> I used all the default settings, I don't know what it means
[09:20:31] <joannac> bybb: erm what? what's your mms group name?
[09:54:19] <bybb> joannac: epreuves-certifiees.com
[09:55:58] <bybb> Actually, from the beginning I have Deploying changes in progress
[10:15:32] <bybb> In MMS itself, is there a way to undo everything and start again?
[10:23:55] <oceanx_> hello, I didn't understand if a cursor timeout during a long running aggregation/mapreduce means that the operation is aborted, or if it continues running (did find the latter explanation in a stackoverflow answer) anyone can help?
[10:25:22] <chou> hat-tip to agolden, "Thanks for the email reports. I'd rather have them in private email than not at all."
[10:44:33] <Chulbul> I'm new to mongodb, I have installed mongo db, now whats next , Im reading from here http://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows , how to set data directory and where ?
[10:50:35] <bybb> Chulbul: I guess you should create it on C:
[10:53:22] <Chulbul> bybb, ok
[10:54:39] <bybb> You could create it wherever you want actually but you'll need to run mongod with a parameter (dbpath) specifying the path
[10:56:36] <Chulbul> bybb, ok got it, thanks
[10:57:30] <bybb> MMS is not so simple!
[11:00:32] <Chulbul> bybb, MMS ??
[11:02:15] <bybb> Chulbul talking to myself Mongo Management System I guess
[11:02:23] <Chulbul> command created 3 files and one _tmp folder, local.0, local.ns and mongodb.lock, it seems that Im on right track ?
[11:03:01] <Chulbul> bybb, ok
[11:05:44] <bybb> Chulbul yep, that's it
[11:07:03] <Tug_> In terms of write performance, do you know which one is the best to insert a lot of (x,y,z) between those 2 data models:
[11:07:03] <Tug_> 1) { id1: x, id2: y, v: z }
[11:07:04] <Tug_> 2) { id1: x, v: [{ id2: y, v: z }] }
[11:07:35] <Tug_> So basically, does a $push update perform well compared to an insert operation?
[11:08:24] <Chulbul> bybb, lot more to go :)
[11:10:06] <oceanx_> Tug_: I don't think write performance should be an issue unless you enable write concern > 0
[11:10:44] <oceanx_> i usually prefer to reason in terms of queries I'll need to execute next, to decide the structure of the data I'm inserting in mongodb
[11:10:52] <Tug_> oceanx_: I have write concern > 0
[11:11:33] <oceanx_> http://docs.mongodb.org/manual/core/write-performance/
[11:12:37] <oceanx_> i think there's some overhead from the find() mongo has got to execute before updating
[11:12:50] <oceanx_> while insertion will just append the document
[11:13:54] <Tug_> oceanx_: I see, maybe that's why I went with 1) in the first place. I'm just wondering if 2) could perform as well as it fits my model better
[11:15:07] <oceanx_> Tug_: depending on what you need later you could just use an index on id1
[11:15:35] <oceanx_> and read performances should not change much (if you need to find (y, v,z) based on x)
[11:16:13] <oceanx_> or use x as _id (and mongo will always use an index in that case)
[11:16:43] <Tug_> yes, I'm not too worried about read performance though
[11:16:57] <Tug_> I use mongo to gather data
[11:17:16] <oceanx_> if you need to store more data for (id1: x) which is not changing the best thing could be using a collection (maybe capped) for this data
[11:17:41] <oceanx_> and then another collection with the other data which you are pushing using x as index
[11:18:14] <oceanx_> when you need to find() you can just do two find on the two different collections and client-side join eventually
[11:18:15] <Tug_> so one collection per x ?
[11:18:25] <Tug_> s/x/value of x/
[11:18:52] <oceanx_> nope one collection storing documents which contain {_id:x, info: "data which won't change and you know the max size"}
[11:19:36] <oceanx_> and the second collection {_id: new ObjectId(), id2:x, data:[y,v,z]}
[11:19:52] <oceanx_> if you need more data stored with x otherwise just the second collection :)
[11:20:12] <oceanx_> the first one can be capped eventually which means better read/write performances
[11:20:33] <oceanx_> on the second one you can put an index on id2 if you need to find based on the value of x
[11:20:34] <Tug_> oceanx_: So a capped collection should perform better on insertion, do you have documentation on this?
[11:20:45] <oceanx_> http://docs.mongodb.org/manual/core/capped-collections/
[11:20:56] <oceanx_> yes but it works only if you know a max size the document will take
[11:21:19] <Tug_> I'm not sure I know
[11:21:38] <Tug_> if I say 16MB that's too much ?
[11:23:52] <Tug_> "Capped collections are fixed-size collections" << so it's in terms of number of documents in the collection
[11:25:59] <oceanx_> yes: "capped collection: a fixed-size collection that automatically overwrites its oldest entries when it reaches its maximum size. The MongoDB oplog that is used in replication is a capped collection."
[11:26:10] <Tug_> I see, so it's some sort of stack
[11:26:20] <oceanx_> if you don't need oldish data you can use them
[11:26:24] <oceanx_> for example
[11:26:42] <oceanx_> actually I think it was wrong for your use case, sorry :)
[11:27:05] <Tug_> No I can't use this. My documents have an expires field so they are removed based on the time not the quantity of data
[11:27:54] <Tug_> It looks interesting but it has a very narrow use case I guess
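[Editor's note: the overwrite-oldest behaviour oceanx_ quotes can be modelled with a toy document-count cap; real capped collections cap on total bytes and are created with db.createCollection(name, { capped: true, size: <bytes> }). The class below is an illustrative sketch, not driver API:]

```javascript
// Toy model of a capped collection: inserts append in order, and once the
// cap is reached the oldest entries are discarded to make room. Real capped
// collections cap on total bytes rather than document count.
class CappedLog {
  constructor(maxDocs) {
    this.maxDocs = maxDocs;
    this.entries = [];
  }
  insert(doc) {
    this.entries.push(doc);
    if (this.entries.length > this.maxDocs) this.entries.shift(); // overwrite oldest
  }
}
const log = new CappedLog(3);
[1, 2, 3, 4, 5].forEach((n) => log.insert({ n }));
console.log(log.entries.map((d) => d.n)); // [ 3, 4, 5 ]: entries 1 and 2 were overwritten
```

This also shows why it doesn't fit Tug_'s case: eviction is driven by size, not by an expires field.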
[11:28:49] <oceanx_> then just stick with the second scheme using indexes were needed {_id: new ObjectId(), id2:x, data:[y,v,z]}
[11:30:07] <oceanx_> you will have a chance to client-side join based on x (or server-side aggregate eventually or mapreduce/find().forEach() and can set an expiration timeout on the cursor if you have lot of data)
[11:30:47] <oceanx_> and you just create an index on x to have better performances.
[11:32:33] <Tug_> I won't need to merge on x I think.
[11:32:48] <Tug_> But I usually retrieve data for all x at the same time
[11:33:25] <Tug_> I just don't want to be limited on write performance if I have 100k values of z
[11:34:54] <Tug_> for all x, I meant every document matching x
[11:35:25] <oceanx_> yep so just use the index on id2 (x) and you can find at the same time every document matching
[11:35:37] <oceanx_> the index will greatly help
[11:36:01] <oceanx_> at the cost of some more storage used :)
[11:37:01] <Tug_> yeap so 1)
[11:37:09] <Tug_> it's more flexible anyway
[11:37:31] <Tug_> (I used id1 and id2 but yeah id1 can be _id)
[11:39:12] <Tug_> oceanx_: thanks for your input :)
[11:45:19] <Tug_> according to this article: http://askasya.com/post/largeembeddedarrays $push is more expensive as the document needs to be moved on disk as it grows
[11:45:34] <Tug_> not sure it is up to date though but it makes sense
[11:46:17] <joannac> that's still accurate
[11:46:42] <joannac> doubly so if you want to search on it and hence also need an index
[11:48:22] <Tug_> ok thx :)
[13:01:23] <izx> I am trying the repair the mongodb using the option >> 'mongod -f /etc/mongodb.conf --repair' and I get this error >> pulp_database Cannot repair database pulp_database having size: 6390022144 (bytes) because free disk space is: 4213800960 (bytes). Any help or advice would be appreciated.
[13:04:41] <joannac> izx: well, the message says it all. you don't have enough free disk space to repair
[13:05:00] <joannac> izx: why are you repairing in the first place? reclaim disk space or corruption?
[13:05:29] <izx> joannac, Do i need to have the same amount of free space as the DB size?
[13:05:45] <joannac> yes
[13:05:56] <joannac> just like it says in the documentation ;)
[13:06:05] <izx> joannac, I am not able to start the mongod service as it is throwing >> exception in initAndListen std::exception: boost::filesystem::status: Permission denied: "/var/lib/mongodb/mongod.lock", terminating
[13:06:19] <joannac> erm
[13:06:23] <joannac> that's a permissions issue
[13:06:31] <joannac> check the permissions on that file
[13:07:20] <izx> joannac, permission is already 644
[13:07:33] <joannac> who owns the file?
[13:08:25] <izx> joannac, mongodb owns the file
[13:08:33] <izx> what should be right permission ?
[13:09:10] <joannac> depends what the service runs as
[13:09:53] <joannac> that should be right though
[13:10:10] <joannac> check up the directory tree? do all the directories have the right permission?
[13:11:47] <izx> joannac, Directory has the permission of 755
[13:16:21] <joannac> izx: okay, check the contents of that file
[13:18:16] <izx> joannac, Initially while repairing the DB i got >> Can't specify both --journal and --repair options. I then removed the journaling option from the conf file and then tried. Is that fine?
[13:18:36] <joannac> well, you never told me why you were repairing
[13:19:28] <izx> joannac, Was not able to start the mongod services. Read in a forum removing the mongod.lock and repairing would be the solution. So tried it.
[13:20:07] <joannac> if that's the case why do you still have a mongod.lock file?
[13:20:33] <izx> joannac, I do not have the lock now as I have removed it. I just get the file system space error.
[13:20:41] <izx> joannac, Increasing it now.
[13:20:57] <joannac> ...
[13:21:03] <izx> Just want to make sure, anything else has to be in place before repairing it again.
[13:21:32] <joannac> I still have no idea why you're repairing
[13:21:57] <joannac> or whether repair is the right thing for you to do
[13:22:06] <joannac> so good luck and I hope you have backups
[13:22:08] <bybb_> Hi all
[13:22:44] <bybb_> would you know why I would have the "hosts unreachable" on MMS?
[13:22:58] <joannac> your monitoring agent can't reach the host by the hostname
[13:23:01] <joannac> bybb_: ^^
[13:23:15] <bybb_> yeah I guess :)
[13:23:34] <joannac> bybb_: so fix it?
[13:23:50] <bybb_> but I can ssh the server, but I can't "mongo domain:27000"
[13:25:01] <bybb_> joannac: these are actually VM guests on my laptop
[13:25:23] <bybb_> the agents are all green
[13:25:33] <joannac> bybb_: okay?
[13:26:02] <bybb_> that's the actual status
[13:26:05] <joannac> not sure how that is relevant
[13:26:22] <bybb_> I'm just discovering MMS
[13:26:31] <joannac> figure out why you can't connect to those mongod processes
[13:27:10] <joannac> hostname not resolvable, firewall blocking, ports are wrong... there are many options
[13:27:21] <joannac> figure out which one it is and fix it
[13:27:56] <bybb_> I've been doing this kind of stuff since sunday, kind of depressing
[13:28:06] <bybb_> well at least it helps me to focus
[13:29:08] <bybb_> ah stupid question
[13:29:39] <bybb_> MMS doesn't install any mongodb servers?
[13:29:44] <joannac> ?
[13:29:55] <joannac> you can deploy mongod instances
[13:30:18] <joannac> but I have no idea what you've done
[13:30:49] <joannac> there's a nice wizard that takes you through the whole MMS setup process
[13:30:54] <joannac> did you go through that?
[13:31:07] <bybb_> yeah definitely
[13:31:11] <joannac> okay
[13:31:30] <joannac> so go to your MMS group, click on deployment, then click on "Edit mode" at the top
[13:31:35] <joannac> are your mongods running?
[13:31:36] <bybb_> I just say, if I can't deploy any mongod because hosts are unreachable
[13:31:56] <bybb_> it's normal I can't mongo domain:27000
[13:32:07] <joannac> deploy is not the same as monitoring
[13:32:21] <joannac> also, I have no idea what "ploy is not the same as monitoring
[13:32:28] <joannac> oops
[13:32:34] <joannac> also, I don't know what you mean
[13:32:43] <joannac> you don't expect to be able to connect to your mongod?
[13:33:41] <bybb_> No I need to figure out what's a mongod from MMS point of view
[13:33:48] <bybb_> I just need to read the doc again
[13:34:09] <bybb_> because I didn't deploy anything yet
[13:34:43] <joannac> yeah, go read the docs. then deploy. then worry about monitoring
[13:43:54] <Tomasso> im executing a text search query from ruby driver such as "$search" => "\\\"Thank you information\\\" Thank you information" in order to get either documents with entire phrase or documents with any of the words of the entire phrase
[13:44:13] <Tomasso> i think i have problems with escape characters ...
[13:45:38] <Tomasso> i always get 0 results
[13:52:01] <Tomasso> and doing smth like $search => ("\"\"" + "Thank you information" + "\"\" " + "Thank you information") // Works for phrases but not for individual words
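[Editor's note: assuming the 2.6 $text operator, the quoting Tomasso is after needs only one level of escaping once the string reaches the server, and the either/or goal mostly solves itself: unquoted terms are OR-ed, while a quoted phrase becomes mandatory. Since any document containing the whole phrase also contains the words, searching the bare words already covers both cases. Built in JavaScript here for clarity:]

```javascript
// Text-search strings. Unquoted terms are OR-ed: a document matches if it
// contains ANY of the words, so phrase-containing documents match too.
// A quoted phrase, by contrast, is required, so mixing it with bare words
// does not produce an either/or search.
const phrase = 'Thank you information';
const anyWord = phrase;                 // matches docs with any of the 3 words
const phraseOnly = `"${phrase}"`;       // one escaped quote per side is enough
console.log(phraseOnly);                // "Thank you information"
const query = { $text: { $search: anyWord } }; // e.g. db.coll.find(query)
```

The quadruple-backslash escaping in the Ruby snippet above suggests the quotes were being double-escaped before reaching the server, which would explain the 0 results.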
[14:28:55] <scellow> Hello guys, it's an emergency, im unable to import a database
[14:29:03] <scellow> i have the file dbName.0 and dbName.ns
[14:29:20] <scellow> if i put them in the folder of mongodb i get an error and i can't start mongo
[14:29:33] <scellow> Tue Feb 3 15:24:49.250 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
[14:30:10] <scellow> If someone could help me that'd be ultra nice :)
[15:06:48] <scellow> anyone ?
[15:08:33] <Hilli> scellow: Check the logfile first...
[15:15:03] <scellow> Hilli, it says i dont have the permission to open the file, but i have! https://gist.github.com/anonymous/01aa89a701789e3798e4
[18:11:51] <swapnesh> Hello Friends, I am modeling a schema for database ..but stuck at a point any help plz !! ??
[18:12:49] <swapnesh> let me know if anybody can share his/her thoughts as I am developing an ecommerce site on mongodb
[18:13:58] <swapnesh> I am stuck with managing some common attributes, who have different values on the basis of gender
[18:15:17] <swapnesh> Like regular size Jeans may have different sizes in men..and women categories
[18:15:25] <swapnesh> any help or suggestion ??
[18:18:33] <swapnesh> anyone ??
[18:19:40] <swapnesh> anyone in the group ??
[18:22:16] <swapnesh> ????
[18:27:19] <jpb> Anyone know if there are known issues with downloads-distro.mongodb?
[18:27:51] <jpb> currently seeing 403 + "There is a 10MB limit on downloads" for mongodb-org-tools 2.6.7
[18:29:49] <swapnesh> ????????????/
[18:30:32] <swapnesh> ?????????????
[18:30:33] <StephenLynx> ???????!!!!1?!!?1sd45fdsf :^)
[18:30:40] <swapnesh> anyone ????
[18:30:51] <appledash> Nobody is going to help you if you spam
[18:30:53] <StephenLynx> people usually don't do other people's work just like that.
[18:31:12] <StephenLynx> if the project is open source, post what you got.
[18:31:24] <StephenLynx> if it isn't, then you are literally asking for people to work for you for free.
[18:31:50] <StephenLynx> you are not even asking help with a specific issue.
[18:32:07] <StephenLynx> like "how do I query something to give me a result like this?"
[18:32:29] <StephenLynx> you are just "tell me how I should make this vague task"
[18:33:16] <swapnesh> hi...anyone here ???
[18:33:39] <appledash> No, nobody is here
[18:33:48] <StephenLynx> or you can just ignore what I said and keep acting like an idiot until everyone ignores you or you get banned.
[18:33:49] <appledash> And since nobody is here, you might as well leave.
[18:33:52] <StephenLynx> :^)
[18:34:11] <jpb> .. or perhaps his IRC client's not picking up responses, geez
[18:34:23] <StephenLynx> I think he's just stupid.
[18:34:24] <appledash> Doubtful.
[18:34:29] <appledash> Doubtful @ jpb
[18:34:38] <swapnesh> thats good
[18:34:46] <jpb> *shrug*
[18:35:17] <jpb> anyway, nobody getting 403s from mongo's distro server? must be me. really weird.
[18:35:36] <appledash> jpb: What's the link?
[18:35:53] <swapnesh> following your marks
[18:36:29] <jpb> http://downloads-distro.mongodb.org/repo/ubuntu-upstart/dists/dist/10gen/binary-amd64/mongodb-org-tools_2.6.7_amd64.deb
[18:36:40] <jpb> @appledash
[18:36:42] <appledash> Working fine for me
[18:36:48] <appledash> Just finished the download
[18:37:09] <appledash> And it appears to be a valid package
[18:37:20] <jpb> ty. maybe amtrak's got some draconian traffic management going on here.
[18:38:27] <appledash> If there's some way you can get it over https then that'd probably work
[18:41:49] <jpb> @ appledash mongodb-org-tools looks like the only issue, I'll just skip the installer package and install server + shell directly
[18:42:10] <jpb> so bizarre, though. ¯\_(ツ)_/¯
[18:42:53] <appledash> Indeed
[18:43:33] <swapnesh> Is 10gen also maintaining mongoose...or I need to chk mongoose IRC separately ??
[18:44:09] <StephenLynx> i think 10gen maintains mongoose.
[18:45:01] <swapnesh> okkk
[18:45:49] <kali> 10gen maintains mongoose ? since when ?
[18:46:11] <kali> afaik, 10gen maintains the native node driver, but that's it
[18:47:11] <jpb> nah @ swapnesh it's on #mongoosejs
[18:48:17] <swapnesh> okkk @jpb thx for the info mate .....so is it good to ask queries about data modeling in mongoosejs irc or mongodb ?
[18:48:25] <jpb> some company maintains it
[18:48:45] <jpb> usually questions that are application-specific are not well suited to IRC
[18:49:23] <swapnesh> okkk... learnboost is maintaining mongoosejs
[18:49:34] <jpb> but questions about the ORM would go to mongoosejs
[18:51:30] <swapnesh> actually one problem I am constantly facing with mongo command..I am not sure if its the way mongodb behave..let me discuss with you then
[18:54:38] <swapnesh> I am using Ubuntu ....and every time I boot the system ..I need to specify db path and then run mongo
[18:54:53] <swapnesh> if I directly run mongo command it will always fail
[18:57:24] <swapnesh> even at times I need to rm the mongod.lock file whenever I start my PC
[18:57:56] <swapnesh> I followed everything http://docs.mongodb.org/manual/tutorial/recover-data-following-unexpected-shutdown/
[18:58:56] <swapnesh> from this but still I need to run once instance from root privileges to > mongod --dbpath /var/lib/mongodb
[18:59:05] <swapnesh> and in next terminal > mongo
[18:59:30] <swapnesh> If I close the first terminal ..then mongo stops in the second terminal as well
[18:59:51] <swapnesh> let me know if this is ok...or I am doing something wrong here ??
[19:01:38] <swapnesh> If this is an off topic for here..let me the IRC from where I can get info about this ???
[19:05:45] <kali> swapnesh: you need your system to start mongod: look there: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
[19:19:21] <ldiamond> I'm looking for a graph database solution. Anyone know anything good available? Or will it be easier to just hack one up on a mongodb?
[19:20:36] <ldiamond> I've looked at Cayley but it's too buggy and there's barely any development on it.
[19:59:38] <Nilium> I'm strangely pleased that I haven't actually had any trouble in the process of mongorestore-ing a 23gb collection so far
[21:01:13] <Tyler_> Is it possible to update using a callback? findOneAndUpdate({phone: req.phone}, {start: req.start, code: callback.code}, function(err, callback)});
[21:02:43] <Tyler_> findOneAndUpdate({phone: req.phone}, {start: req.start, code: makeRandomCode(callback.section)}, function(err, callback)});
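[Editor's note: one way to read Tyler_'s question: the code value just needs to exist before findOneAndUpdate runs, so compute it first and pass the result. The sketch below stubs the collection and invents makeRandomCode so it runs without a live MongoDB; with a real driver you would also wrap the fields in $set, as shown:]

```javascript
// Hypothetical helper standing in for Tyler_'s makeRandomCode.
function makeRandomCode(section) {
  return `${section}-${Math.floor(Math.random() * 10000)}`;
}

// Stub collection that records the update it was asked to perform.
const calls = [];
const coll = {
  findOneAndUpdate(filter, update, cb) {
    calls.push({ filter, update });
    cb(null, { value: update.$set });
  },
};

const req = { phone: '555-0100', start: 1422921600, section: 'A' };
const code = makeRandomCode(req.section);   // computed BEFORE the update call
coll.findOneAndUpdate(
  { phone: req.phone },
  { $set: { start: req.start, code } },
  (err, res) => {
    if (err) throw err;
    console.log(res.value.code === code);   // true: the generated code was written
  },
);
```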
[21:50:34] <geri> hi, can mongodb be used as a distributed key-value store?
[21:51:58] <Hilli> scellow: If you are root, you have permission. But I don't think that mongod is running as root, is it?
[21:55:19] <Hilli> geri: I suppose, if you bend it. But there are probably better options. Riak or something like that. MongoDB is more document oriented, so your queries would be something like key: something and you would get value: somevalue out, instead of just gimme(key) => value
[21:55:51] <geri> Hilli: what other options?
[21:55:53] <Hilli> If thats OK, then the distributed bit is easy enough.
[21:56:14] <Hilli> geri: Like riak or cassandra
[21:56:59] <geri> ok, can you cache cassandra somehow in a cache?
[21:57:59] <Hilli> That I don't know
[22:15:38] <jayjo> Is there any issue with exposing my ObjectID variables to the public?
[22:15:44] <jayjo> security issue?
[22:18:40] <Hilli> Can a user guess another ObjectID and delete the object afterwards? If not, I'd say no
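[Editor's note: Hilli's answer can be made concrete. ObjectIds are hard to guess outright but are not secrets: the leading 4 bytes are a creation timestamp and the trailing bytes include a per-process counter, so exposing one leaks when the document was created and makes neighbouring ids partially guessable. Decoding the id Trindaz pasted earlier in this log:]

```javascript
// The first 8 hex characters of an ObjectId are big-endian seconds since the
// Unix epoch. This id comes from Trindaz's session earlier in the log.
const id = '54cc287734072ab454ba1360';
const seconds = parseInt(id.slice(0, 8), 16);
const created = new Date(seconds * 1000);
console.log(seconds);                 // 1422665847
console.log(created.toISOString());   // late January 2015, days before this log
```

If object ownership isn't checked server-side, treat exposed ids as enumerable; knowing one is not authorization to read or delete it.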
[23:04:56] <morenoh151> { slug: params.slug, publishedDate: { $exists: true } }
[23:05:13] <morenoh151> will this return their OR or AND ?
[23:05:18] <morenoh151> in a whereclause
[23:06:21] <joannac> and
[23:07:58] <morenoh151> I don't think so. I'm getting more than one document. And I'm asking for a particular slug
[23:09:53] <morenoh151> err I mean the criteria clause http://docs.mongodb.org/manual/reference/method/db.collection.find/
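[Editor's note: joannac is right that top-level conditions are AND-ed; getting several documents back means more than one document satisfies both conditions (i.e. the slug is not unique in that collection), not that the clauses were OR-ed. A minimal evaluator for just these two condition types, with made-up documents, shows the semantics:]

```javascript
// A document matches only if EVERY field in the criteria object matches:
// the top-level comma is an implicit AND, never an OR.
function matches(doc, criteria) {
  return Object.entries(criteria).every(([field, cond]) => {
    if (cond !== null && typeof cond === 'object' && '$exists' in cond) {
      return (doc[field] !== undefined) === cond.$exists;
    }
    return doc[field] === cond;               // plain equality condition
  });
}
const criteria = { slug: 'hello-world', publishedDate: { $exists: true } };
console.log(matches({ slug: 'hello-world', publishedDate: 123 }, criteria)); // true
console.log(matches({ slug: 'hello-world' }, criteria));                     // false
console.log(matches({ slug: 'other', publishedDate: 123 }, criteria));       // false
```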
[23:59:05] <scellow> arghhh stupid irc client, i can't scroll back to where someone mentioned my name
[23:59:37] <scellow> this chat is spammed by 'x has joined' 'x has quit' 'x is known as y' log message lol