PMXBOT Log file Viewer


#mongodb logs for Monday the 28th of April, 2014

[00:01:30] <Mark_> any way to just get keys in a result set?
[00:01:51] <Mark_> trying to save bw/speed on the wire
[00:02:44] <xissburg> use a projection?
[00:03:56] <xissburg> e.g. db.users.find(/*condition*/, {_id: 1})
[00:04:42] <Mark_> yea
[00:04:51] <Mark_> doesnt filter quite how i want it to though
[00:04:56] <Mark_> so let me just show you what ive got actually
[00:05:11] <Mark_> this is pretty representative of the mongo structure
[00:05:11] <Mark_> http://npa.azurewebsites.net/v1/dump/480888
[00:05:32] <Mark_> and i want a dump interface that just lists all the nxx in the npa (area codes)
[00:05:34] <Mark_> like so:
[00:05:34] <Mark_> http://npa.herokuapp.com/v1/dump/703
[00:05:36] <Mark_> but
[00:05:44] <Mark_> on the wire im getting all the values as well
[00:05:53] <Mark_> im thinking i have to mapreduce, per this stack post
[00:06:01] <Mark_> http://stackoverflow.com/questions/2298870/mongodb-get-names-of-all-keys-in-collection
[00:07:30] <Mark_> i.e. db.npa_nxx_combined.find({},{ _id: -1, 602: 1 })
[00:07:32] <Mark_> gives me a huge list of
[00:07:42] <Mark_> "997" : { "lata" : 666, "ratecenter" : "PHOENIX", "display" : "PHOENIX" }, "999" : { "lata" : 666, "ratecenter" : "PHOENIX", "display" : "PHOENIX" } } }
[00:07:44] <Mark_> i just want the 997
[00:08:38] <Mark_> im still new to mongo (which is why i wrote this app in both C# and php for fun) :P
[00:12:25] <toothrot> how is that a `projection`?
[00:16:27] <toothrot> well, my mistake
[00:16:33] <toothrot> i was confused about terms
[00:17:13] <stephenmac7> Mark_: You could use https://github.com/variety/variety
[00:18:23] <stephenmac7> However, you might be better off with a list rather than an object
[00:19:47] <Mark_> if a phone number is (npa) nxx-yyyy
[00:19:58] <Mark_> my structure is exactly that
[00:20:04] <Mark_> npa -> nxx -> y
[00:20:14] <Mark_> meaning it is ridiculously fast, basically nested memcache
[00:20:25] <Mark_> but i want to keep the transfer on the wire down for the dump scenario
[00:20:31] <stephenmac7> I see.
[00:20:52] <stephenmac7> Yeah, variety or mapreduce would be your best bet
[00:20:53] <Mark_> this use case isnt a problem
[00:20:53] <Mark_> http://npa.herokuapp.com/v1/info/14809993100
[00:21:01] <Mark_> but
[00:21:12] <Mark_> when i get into the 'show me all the exchanges in this area code' use case
[00:21:16] <Mark_> it gets a bit heavy on the wire
[00:21:59] <stephenmac7> You could also generate some type of index for it
[00:22:16] <Mark_> well, i tried that at first, but it got pretty meaty for index sizes and these are all free tiers
[00:22:19] <Mark_> its just a learning experience
[00:22:29] <Mark_> im using mongolab free tier and heroku and azure for app hosting
[00:22:38] <stephenmac7> No, I mean create another collection
[00:22:43] <Mark_> oh with just that data
[00:22:46] <Mark_> yea, i could do that
[00:22:47] <stephenmac7> Yes
[00:22:59] <Mark_> thats what i do for ocn names (carrier id's assigned by the fcc to their actual name)
[00:23:03] <Mark_> as well as area code to name mappings
[00:23:14] <Mark_> this was all originally relational, but i wanted to play with mongo so i broke it down :P
[00:25:30] <Mark_> i wish i knew more about this
[00:25:30] <Mark_> http://docs.mongodb.org/manual/reference/method/cursor.map/
[00:25:35] <Mark_> what exactly is the structure of u
[00:25:43] <Mark_> can i pull a list of keys out of it?
[00:26:07] <stephenmac7> u is a document
[00:26:38] <stephenmac7> Though it seems to me like you're stuffing everything into one document
[00:27:18] <stephenmac7> I would suggest having a collection for area codes and index them
[00:28:05] <Mark_> ive got 4 documents so far
[00:28:12] <Mark_> npa_name, which is a relation of area codes to area names
[00:28:14] <stephenmac7> So, have a document like {"code": 997, "lata": 666, "ratecenter": "PHOENIX", "display": "PHOENIX"} for each area code
[00:28:31] <Mark_> npa_nxx_combined, which is definitely the beefy one, which is npa->nxx->dataaboutthatcombo
[00:28:33] <stephenmac7> Documents are like rows in a relational DB
[00:28:49] <stephenmac7> Don't try to stuff everything into one row
[00:29:11] <Mark_> ah, i think i see what you're saying
[00:29:14] <stephenmac7> Therein lies your problem
[00:29:24] <Mark_> refactor npa_nxx_combined majorly
[00:29:30] <Mark_> and index it heavily
[00:29:37] <stephenmac7> Exactly
[00:29:46] <Mark_> then i can just return all nxx with npa = whatever
[00:29:52] <stephenmac7> Correct
[00:30:19] <stephenmac7> You index documents, not pieces of documents
[00:31:49] <Mark_> while ($stmt->fetch()) {
[00:31:49] <Mark_> $i[] = [ 'npa' => $npa, 'nxx' => $nxx, 'lata' => $lata, 'ratecenter' => $rc, 'display' => $dis ];
[00:31:49] <Mark_> }
[00:31:49] <Mark_> $c->batchInsert($i);
[00:31:50] <Mark_> yea
[00:32:00] <Mark_> now i just need to read up on indices
[00:32:12] <stephenmac7> Sorry, I know nothing about PHP
[00:32:40] <stephenmac7> Mark_: You might do better with an ODM like http://mandango.org/
[00:33:03] <Mark_> well its nothing terribly important
[00:33:08] <Mark_> just a 'lets learn mongo' thing
[00:34:01] <stephenmac7> In some ways, ODMs help you learn the mongo style
[00:37:08] <Mark_> so now im rockin this
[00:37:15] <Mark_> > db.npa_nxx_data.find({ "npa": 602 }, { "_id": 0, "nxx": 1 } );
[00:37:15] <Mark_> { "nxx" : 200 }
[00:37:15] <Mark_> { "nxx" : 201 }
[00:37:15] <Mark_> { "nxx" : 202 }
[00:37:23] <Mark_> which is lighter
[00:37:32] <Mark_> thanks
[00:38:04] <stephenmac7> You're welcome
[00:41:02] <Mark_> npa_nxx_data 169,497 true 21.47 MB
[00:41:04] <Mark_> not too heavy even
[00:41:08] <Mark_> although no indices setup yet
[00:41:25] <Mark_> the old structure was npa_nxx_combined 1 true 11.09 MB
[00:42:48] <Mark_> i just didnt do it this way before because i hated the thought of duplicating the area code so many times
[00:42:59] <Mark_> im still in that rdbms mode of thinking
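The refactor Mark_ and stephenmac7 settle on above can be sketched outside the shell. This is a minimal illustration, assuming the nested input shape matches the fragment pasted at 00:07:42; it flattens one big npa -> nxx -> data document into one document per (npa, nxx) pair, which is what makes the lightweight projection at 00:37:15 possible.

```javascript
// Flatten the single npa -> nxx -> data document into one document per
// (npa, nxx) pair, so a projection like { _id: 0, nxx: 1 } stays small
// on the wire. Input shape follows the pasted fragment; anything else
// here is an assumption for illustration.
function flatten(npaDoc) {
  const out = [];
  for (const npa of Object.keys(npaDoc)) {
    for (const nxx of Object.keys(npaDoc[npa])) {
      out.push(Object.assign({ npa: Number(npa), nxx: Number(nxx) },
                             npaDoc[npa][nxx]));
    }
  }
  return out;
}

const combined = {
  "602": {
    "997": { lata: 666, ratecenter: "PHOENIX", display: "PHOENIX" },
    "999": { lata: 666, ratecenter: "PHOENIX", display: "PHOENIX" }
  }
};

console.log(flatten(combined));
// Each element maps to one document in npa_nxx_data, queryable with
// db.npa_nxx_data.find({ npa: 602 }, { _id: 0, nxx: 1 })
```

The missing index Mark_ mentions at 00:41:08 would then go on the whole documents (per stephenmac7's point at 00:30:19), e.g. `db.npa_nxx_data.ensureIndex({ npa: 1 })` in the 2.6-era shell.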
[02:13:00] <zenmaster> Hi.
[02:13:19] <zenmaster> I was in here posting last night, about how to convert large tab delimited files into json format?
[02:13:30] <zenmaster> Is there an easy way to do this, and does size count?
[02:26:55] <cheeser> did you try this? http://docs.mongodb.org/manual/reference/program/mongoimport/
[02:30:43] <zenmaster> Thanks, I keep making full circle on the knowledge I am gaining in trying to learn mongoDB.
[02:31:07] <zenmaster> I just have 120GB pipe-delimited TXT files. I know I said tab, it really is pipe.
[02:31:18] <zenmaster> Typically, I would break it down into a bunch of smaller files.
[03:25:12] <mandeep> quit
[05:07:52] <NaN> hi there
[05:10:07] <joannac> hi
[05:10:23] <NaN> why does $sudo service mongod start give me an error but $sudo mongod doesn't?
[05:13:22] <NaN> this is the result of systemctl status mongod.service > http://paste.fedoraproject.org/97333/39866190/
[05:14:55] <joannac> mongod logs?
[05:17:58] <NaN> joannac: 2014-04-28T00:17:13.757-0500 ERROR: Cannot write pid file to /var/run/mongodb/mongod.pid: No such file or directory
[05:18:12] <NaN> the server was running and all was OK
[06:04:21] <NaN> seems I'm not the only one with that problem > http://stackoverflow.com/questions/23086655/mongodb-service-will-not-start-after-initial-setup
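The Stack Overflow thread NaN links points at the same cause the log line at 05:17:58 shows: on distros where /var/run is a tmpfs, the /var/run/mongodb directory vanishes on reboot, so mongod cannot write its pid file. A sketch of the usual fix, assuming the path and user/group from the error message and typical packaging; verify both against your own mongod.conf:

```shell
# Recreate the pid directory and hand it to the mongod user.
# (Directory and user names are taken from the error message above;
# they may differ on your distro.)
sudo mkdir -p /var/run/mongodb
sudo chown mongod:mongod /var/run/mongodb
sudo service mongod start
```

To make this survive reboots on a systemd distro, the directory can be declared in a tmpfiles.d entry instead of recreated by hand.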
[07:14:09] <dmitrijs> Hello
[07:18:38] <dmitrijs> Hello, I am going to create two nodes which will share one mongoDB database, the idea is that one node will write data to the DB and second node will read the opLOG, and trigger the 3rd party WS as soon as the new entry appears into the opLOG, how do you think, is it possible to create such architecture and what cons might it bring?
[07:35:09] <fl0w> dmitrijs: I’m no expert. But it sounds bottlenecked. Why not use Mongo’s regular scaling per node?
[07:35:43] <fl0w> dmitrijs: Or rather, can you share specifics relating to the problem you want to solve with that type of solution?
[07:38:44] <dmitrijs> Well, I have to develop an app, which will receive and execute payment transactions. There is a risk, that someone might attempt to hack or disable this application. That
[07:39:06] <dmitrijs> That's the reason why I decided to develop two nodes, which would share one DB
[07:39:45] <dmitrijs> There would be two nodes, one frontend node, which would accept the transactions from iOS, android, web app
[07:40:17] <dmitrijs> And second node, which would read the data, written by frontend node, and execute the transactions using the bank API
[07:41:13] <dmitrijs> In such way, even if frontend node, goes down, the backend node would be still running..
[07:41:23] <dmitrijs> And executing the transactions
[07:42:40] <fl0w> dmitrijs: So how is that different from having a replicated node instead? I’m going to stop here because I feel I will not be helping. You should hang around for better folks!
[07:43:07] <dmitrijs> k, thx, I will think about replicated node
[07:52:34] <kees_> dmitrijs, sounds like you need a messagequeue, not a mongodb perse
[07:53:02] <dmitrijs> Thx, I'll check it out
[07:53:20] <kees_> something like rabbitmq, or activemq
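For reference, the oplog tailing dmitrijs describes is done with a tailable cursor on local.oplog.rs, which only exists when mongod runs as a replica-set member. A sketch in 2.x mongo-shell terms; the namespace and handler name are invented:

```javascript
// In the mongo shell, on a replica-set member:
//   use local
//   var cur = db.oplog.rs.find({ ns: "payments.transactions" })
//                        .addOption(DBQuery.Option.tailable)
//                        .addOption(DBQuery.Option.awaitData);
//   while (cur.hasNext()) { handleEntry(cur.next()); }  // handleEntry: the WS trigger
// The filter itself is plain data: oplog entries carry their namespace in `ns`.
const oplogFilter = { ns: "payments.transactions" };
console.log(JSON.stringify(oplogFilter));
// → {"ns":"payments.transactions"}
```

As kees_ notes, a message queue covers this pattern more directly; the oplog route couples the consumer to MongoDB's internal format.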
[08:34:05] <chowndevil> Hi guys =) Need to get some quick input on a question... Building an application using many collections, I am using redis as a non-persistent cache solution to certain queries performed. As of right now there is no median in the average size of documents in Mongodb, I am wondering though in general without using conditions or modifiers, performing a simple find() query.. Would it be advisable to cache larger results or
[08:34:05] <chowndevil> smaller in terms of Mongodb? Thanks in advance
[08:53:49] <clu3> i've installed mongodb 2.6 as instructed here http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/ and I don't know how to start the server now
[08:54:46] <clu3> oh nevermind i got it
[08:54:50] <Nodex> service mongodb start?
[08:55:18] <clu3> i was trying mongod and it suggested mongodb-server which is apparently not used any more. That confused me
[08:55:33] <clu3> sudo /etc/init.d/mongod restart is the one
[08:57:44] <clu3> actually it is confusing. I tried service mongod start
[08:57:58] <clu3> and when doing a check "ps axu | grep mongo" i see nothing
[08:58:07] <clu3> and no error log at all
[09:01:39] <Nodex> there should be an error log no matter what
[09:01:52] <Nodex> updatedb & locate mongod.log
[09:01:58] <Nodex> &&
[09:04:02] <clu3> i got this start-stop-daemon: unable to stat /usr/bin/mongod (No such file or directory) in /var/log/upstart/mongod.log
[09:04:20] <clu3> but I already installed the mongodb-org meta package
[09:05:12] <Nodex> then it's not been installed
[09:07:26] <clu3> this is getting really weird
[09:08:20] <Nodex> it just so happens I am just about to install mongod on a brand new ubuntu
[09:09:22] <clu3> i installed it in my other brand new ubuntu box and things are fine
[09:09:49] <clu3> with this one, i'm upgrading mongodb. I already apt-get remove'd the old mongo-server and * packages
[09:10:10] <Nodex> things changed between old and new
[09:10:14] <clu3> things went just fine but i just can't start the service
[09:10:15] <Nodex> mongodb now called mongod
[09:10:32] <clu3> yep, but i mean service mongod start should have started
[09:10:50] <clu3> apt-get removing the old packages is like i'm installing from a new box
[09:13:20] <Nodex> not really but ok lol
[09:13:42] <clu3> yeah, can't understand wtf is going on
[09:32:45] <Nodex> with master / slave replication can there be more than one slave?
[09:33:39] <Nodex> keep getting this in my logs http://pastebin.com/ND24A6fd
[09:34:14] <traplin> Friends.findOne({uid: Meteor.user().services.facebook.id}, {_id: 0, name: 1});, i have that code to limit _id and name, but it keeps returning everything no matter what i do
[09:34:54] <thanasisk> anyone knows when mongodb 2.6.1 comes out? due to a bug, we have to run 2.6.1-rc0 in production
[09:36:12] <Nodex> scratch that, it appears that upgrading to 2.6 re-adds the line "bind_ip 127.0.0.1"
[09:36:14] <Nodex> :/
[10:39:12] <thanasisk> 2014-04-17T03:29:10.135+0200 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
[10:39:23] <thanasisk> sorry for color codes - effing OSX :-)
[11:32:46] <vijay_> hi
[11:33:03] <vijay_> #mongodb
[11:36:13] <vijay_> is mongodb 2.6 free to use
[11:36:16] <vijay_> ?
[11:38:28] <nEosAg> yes sir
[11:40:48] <vijay_> can i use mongo-db 2.6 in production for free ?
[11:45:11] <nEosAg> yes sir..absolutely
[12:04:28] <geekvijay> is mongodb 2.6 free to use in production?
[12:05:19] <adulteratedjedi> geekvijay: yup
[12:05:41] <geekvijay> where is the installation process
[12:10:30] <adulteratedjedi> geekvijay: on what operating system?
[12:11:38] <geekvijay> Ubuntu
[12:12:03] <adulteratedjedi> geekvijay: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
[12:12:38] <geekvijay> on that link they have shown how to install mongodb enterprise
[12:13:46] <geekvijay> check it out http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/#install-the-mongodb-packages
[12:18:17] <adulteratedjedi> indeed, mongodb-enterprise is the enterprise package though mongodb-org is what you need .. so just follow instructions
[12:19:50] <geekvijay> ok thanks for help
[12:21:36] <geekvijay> i got confused with wordings
[12:24:37] <adulteratedjedi> it's okay
[13:13:02] <jsfrerot> hi all, is it possible to set the mongodb cluster in read-only mode ? I mean not only the metadata, but the make the data in databases in read only mode.
[13:15:00] <nEosAg> read from secondary in replica set..
[13:15:45] <jsfrerot> humm, in this case, would it be possible to demote the primary and have only secondary nodes in my cluster ?
[13:16:58] <nEosAg> it won't happen in replica set..
[13:17:29] <jsfrerot> what do you mean ?
[13:17:32] <nEosAg> once u demote primary, your earlier secondary will become primary ..provided you have only two members and an arbiter
[13:18:27] <jsfrerot> well what about setting the priority to 0 on all nodes ?
[13:22:02] <nEosAg> that i don't know..u mean in replica set?
[13:22:36] <jsfrerot> yes
[13:26:39] <nEosAg> have u tried sharded config for mongo?
[13:27:46] <rspijker> you can just use authentication
[13:27:56] <rspijker> have a user that only has read access
[13:29:47] <jsfrerot> i'm using sharded configs
[13:30:16] <jsfrerot> and unfortunately, i'm not using authentication, and I can't setup auth on my setup
[13:31:46] <jsfrerot> i'm trying to find a way to do a maintenance without downtime, moving all my data from an old cluster (which I still need to be available in read-only mode) to a new much bigger cluster. I know this is a wicked maintenance...
[13:32:53] <nEosAg> sorry need to go..
[13:32:59] <jsfrerot> so i'm trying to figure out a way to set my old cluster in read only while I reconfigure the new cluster currently replicating the old
[13:34:04] <jsfrerot> reading at the "priority" option, seems that if all my nodes are set to 0, there will be no election
[13:34:20] <jsfrerot> wondering if my primary will step down when requested to step down
[13:34:31] <vegivamp> io io
[13:34:51] <vegivamp> Is it possible to get the workingSet data through the REST interface ?
[13:35:04] <vegivamp> ?workingSet=1 doesn't work :-p
[15:12:51] <cybertoast> do unique indexes affect the count() function? i've got a situation where adding a sparse, unique index causes the count to return zero results, but without this index it works fine.
[15:21:02] <vegivamp> Is it possible to get the workingSet data through the REST interface ?
[16:11:32] <jklb> hollerrrr
[16:11:48] <jklb> Getting an error trying to connect to my db at mongohq
[16:11:50] <jklb> Error: database names cannot contain the character '.'
[16:12:06] <jklb> straight from mongoHQ "mongodb://<user>:<password>@candidate.14.mongolayer.com:10120,candidate.15.mongolayer.com:10120/db_name"
[16:12:38] <jklb> But if I connect like this it works just fine: straight from mongoHQ "mongodb://<user>:<password>@candidate.15.mongolayer.com:10120/db_name"
[17:01:04] <NaN> if I do (shell) var foo = db.foo.find({'_id': 'foo'}), if I print foo it gives me the doc, but if I print it again it's clean, why?
[17:02:02] <skot> foo is not the doc, it is cursor. If you want the doc do this: var foo = db.foo.findOne({'_id': 'foo'})
[17:02:40] <skot> or if you want the results as an array, do this: find(…).toArray()
[17:02:52] <NaN> so the cursor 'autocleans'?
[17:03:14] <skot> it is something you consume and then is empty
[17:03:22] <NaN> suppose I want not only 1 foo but more (using another key), that way I couldn't use findOne
[17:11:17] <skot> You can convert the results to an array, with toArray(), for example.
[17:12:07] <skot> you may want to read the docs about cursors and iteration (basic concept in most languages) to better understand your choices.
[17:12:12] <Dynetrekk> howdy, I know how I can define a basic schema with {name: String, phonenumber: Number} but how can I define that phonenumber is a dict of {'home': 123, 'work': 456}, etc?
[17:12:33] <skot> NaN: http://docs.mongodb.org/manual/tutorial/iterate-a-cursor/
[17:12:59] <NaN> thanks skot
[17:13:15] <skot> http://docs.mongodb.org/manual/core/cursors/
[17:13:20] <skot> np, enjoy.
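skot's point generalizes: a cursor behaves like any one-shot iterator, which is why NaN's second print came back empty. A driver-free sketch of that behavior (illustration only, not the driver API):

```javascript
// A cursor is an iterator over results, not the results themselves.
function makeCursor(docs) {
  let i = 0;
  return {
    hasNext: function () { return i < docs.length; },
    next: function () { return docs[i++]; },
    toArray: function () {            // materializes whatever is left
      const out = [];
      while (this.hasNext()) out.push(this.next());
      return out;
    }
  };
}

const cursor = makeCursor([{ _id: 'foo' }]);
console.log(cursor.toArray()); // first read drains it: [ { _id: 'foo' } ]
console.log(cursor.toArray()); // second read finds nothing: []
```

With the real shell, `db.foo.find(...).toArray()` materializes the results up front so they can be printed or reused any number of times.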
[17:13:38] <Dynetrekk> I'm using mongoose on node.js, if that matters
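Dynetrekk's question goes unanswered in-channel. In Mongoose the schema definition is a plain object, and nested shapes can be written inline; a sketch, assuming mongoose is installed and with the model name invented:

```javascript
// The definition itself is plain data, so nesting works directly.
const contactDefinition = {
  name: String,
  phonenumber: {        // nested paths: phonenumber.home, phonenumber.work
    home: Number,
    work: Number
  }
};

// With mongoose available:
//   const mongoose = require('mongoose');
//   const Contact = mongoose.model('Contact',
//     new mongoose.Schema(contactDefinition));
//   new Contact({ name: 'Ada', phonenumber: { home: 123, work: 456 } });

console.log(Object.keys(contactDefinition.phonenumber)); // [ 'home', 'work' ]
```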
[17:33:17] <unholycrab> the mongomms preferred hostname setting tells the agent to prefer one hostname over another when two are present... when are two hostnames present in a replica set configuration??
[17:35:32] <unholycrab> im trying to figure out how to get mongo-mms-monitoring-agent to point at multiple AWS regions
[17:35:57] <unholycrab> the problem is that it discovers members of replica sets via the rs configuration, which are configured with internal hostnames
[17:36:05] <unholycrab> and i can't reach instances by their internal hostnames across regions
[17:36:10] <unholycrab> and i can't use two monitoring agents...
[17:36:58] <unholycrab> is there a way to manually add replica set secondaries to the monitoring agent? or configure it to translate internal -> external hostnames
[18:51:08] <pure> Hello!
[18:51:43] <pure> How plausible is it to bundle mongod with my application if I want to use MongoDB as a persistence thingy?
[19:07:55] <qrf> Is there a common solution to achieving multi-document transactional semantics in MongoDB?
[19:09:43] <qrf> I suppose you can work with some "dirty" state marker and making copies of stuff until you finally commit the state to a single document hm
[19:09:54] <kali> the closest thing to this is two phase commit: http://cookbook.mongodb.org/patterns/perform-two-phase-commits/
[19:10:45] <kali> (which is by no way mongodb specific)
[19:25:58] <qrf> Thanks, that'll do
[19:54:05] <frodo_baggins> I need some advice on doing text searches.
[19:54:32] <frodo_baggins> I want to be able to search an entire collection for a piece of text.
[20:15:01] <frodo_baggins> so it looks like {$text: {$search: "foo"}} is what I'm looking for.
[20:15:08] <frodo_baggins> nifty.
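One step frodo_baggins doesn't mention: in MongoDB 2.6, $text only works once the collection has a text index. A shell-session sketch, with the collection and field names invented:

```javascript
// In the mongo shell (not node), a text index must exist first:
//   db.articles.ensureIndex({ body: "text" })
// then the collection can be searched:
//   db.articles.find({ $text: { $search: "foo" } })
// The query itself is plain data:
const query = { $text: { $search: "foo" } };
console.log(JSON.stringify(query)); // {"$text":{"$search":"foo"}}
```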
[20:18:15] <scruz> hello. trying to implement a tagging-like system. i want to create a collection for each possible type of tag group (for a book store, think things like publisher and subject are tag groups, then a list of all possible values for the tag group), then each individual book would have a tags property which would be something like {‘publisher’: ‘Apress’, ‘subject’: ‘Databases’}
[20:21:00] <scruz> is that a good way to structure it? i need to later query on the tag groups, for instance, find all Apress books on Web Development published in 2013
[20:25:30] <Joeskyyy> scruz: They actually have almost that exact same example on this page
[20:25:31] <Joeskyyy> http://docs.mongodb.org/manual/tutorial/model-referenced-one-to-many-relationships-between-documents/
[20:25:36] <Joeskyyy> In case it's of any use.
[20:39:25] <frodo_baggins> hmm, I'm trying to figure out what would be the best way to indicate when a function can be called.
[20:41:05] <frodo_baggins> oh, nevermind, looks like I was able to solve this problem with callbacks.
[20:50:38] <frodo_baggins> I'm seeing: "Failed to load c++ bson extension, using pure JS version"
[20:56:00] <mikebronner> got a question: how do I find a user based on a book title, given that the user embedsMany books, using PHP?
[20:57:31] <mikebronner> I've tried $user = User::find('books.title', 'The Great Gatsby'); and similar searches using where, but I always just get an empty collection or NULL returned.
[21:01:55] <AlexZan> hello i am using mongoose and i am running the command in the shell db.user.remove() and i am getting an error: mongodb remove needs a query at src/mongo/shell/collection. Any ideas what im doing wrong?
[21:08:46] <Goopyo> AlexZan: you’re not telling it what to remove
[21:09:53] <Goopyo> if yo uwant to remove all users db.users.remove({})
[21:36:17] <AlexZan> Goopyo, ahhh thank you :D
[21:36:34] <Goopyo> np
[21:42:14] <thearchitectnlog> i need a good example connecting to replicaset using mongodb nodejs native driver
[21:42:19] <thearchitectnlog> any help
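For reference, the node.js native driver of that era connects to a replica set by listing seed members in the URI with a replicaSet option. Host names, database, and set name below are invented:

```javascript
// With the driver available:
//   const MongoClient = require('mongodb').MongoClient;
//   MongoClient.connect(
//     'mongodb://host1:27017,host2:27017/mydb?replicaSet=rs0',
//     function (err, db) { /* use db; the driver tracks failover */ });
// The connection string itself is plain data:
const uri = 'mongodb://host1:27017,host2:27017/mydb?replicaSet=rs0';
console.log(uri);
```

Listing more than one seed host lets the driver discover the rest of the set and follow primary elections.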
[22:29:51] <thewiz_> hello
[22:30:21] <thewiz_> im having trouble connecting to my own database
[22:31:14] <thewiz_> im using mongodb://admin:pass@ip:27017/name-db';
[22:31:35] <thewiz_> but my mongod isnt picking up any connection attempts
[22:31:53] <thewiz_> could this be because of port forwarding?
[22:32:06] <thewiz_> i only have basic web and ssh ports forwarded on my machine
[22:50:37] <thewiz_> nvm yeah that was it
[22:50:40] <thewiz_> hurr me
[23:17:45] <thewiz_> question
[23:18:00] <thewiz_> with everything set up my thing works localhost:3000
[23:18:06] <thewiz_> or naturally ip:3000
[23:18:14] <thewiz_> how can i change that port to be something else
[23:51:46] <thearchitectnlo1> anyone can help i need a good example about replicaset nodejs