PMXBOT Log file Viewer


#mongodb logs for Thursday the 6th of August, 2015

[00:59:56] <crazydip> newbie q: can a WiredTiger collection be in the same database as a GridFS collection(s)?
[01:00:34] <StephenLynx> afaik, WT is the engine.
[01:00:41] <StephenLynx> it would handle all collections in the database.
[01:01:00] <StephenLynx> I suspect the saved data is the same for any engine.
[01:01:03] <StephenLynx> but I am not sure on that
[01:02:53] <cheeser> that's correct.
[01:02:54] <Boomtime> crazydip: these terms mean different things, WiredTiger is a storage engine, it will store whatever it is told to - mongodb uses it to store databases/collections/indexes - gridfs is basically just a very particular schema applied to two chosen collections
[01:03:23] <crazydip> Boomtime: thanks that makes sense... i thought GridFS is like a Schema + Engine together....
[01:05:37] <crazydip> StephenLynx: thanks you as well ;)
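A minimal sketch of the two-collection schema Boomtime describes, in plain Python with made-up values: GridFS keeps file metadata in `fs.files` and the binary pieces in `fs.chunks`, tied back to the file by `files_id` and ordered by `n`.

```python
# Toy model of the GridFS schema: metadata in fs.files, data in fs.chunks.
# Field names mirror the GridFS spec; the documents here are invented.
fs_files = [
    {"_id": "file1", "filename": "photo.png", "length": 7, "chunkSize": 4},
]
fs_chunks = [
    {"files_id": "file1", "n": 0, "data": b"abcd"},
    {"files_id": "file1", "n": 1, "data": b"efg"},
]

def read_file(file_id):
    """Reassemble a file by concatenating its chunks in order of n."""
    chunks = sorted(
        (c for c in fs_chunks if c["files_id"] == file_id),
        key=lambda c: c["n"],
    )
    return b"".join(c["data"] for c in chunks)

print(read_file("file1"))  # b'abcdefg'
```

Since these are ordinary collections under whatever storage engine is configured, they shard and index like any other collection.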
[03:20:28] <acidjazz> hi ewveryone
[03:20:30] <acidjazz> everyone*
[03:20:57] <acidjazz> is it possible to search a document's object keys... similar to a document's array elements?
[03:28:47] <acidjazz> i think the answer is i should change my schema for actual reasonable queries
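The usual schema change for this, sketched in plain Python with invented data: turn dynamic object keys into an array of `{"k": ..., "v": ...}` pairs, so an ordinary query (and a multikey index on the `k` field) can reach what used to be a key name.

```python
def keys_to_array(doc, field):
    """Rewrite {field: {k1: v1, ...}} as {field: [{"k": k1, "v": v1}, ...]}."""
    doc = dict(doc)
    doc[field] = [{"k": k, "v": v} for k, v in doc[field].items()]
    return doc

before = {"_id": 1, "prices": {"usd": 10, "eur": 9}}
after = keys_to_array(before, "prices")

# Now a filter like {"prices.k": "eur"} can match, and "prices.k"
# is indexable; with the original shape the key name "eur" was
# unreachable without knowing it in advance.
assert {"k": "eur", "v": 9} in after["prices"]
```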
[03:31:37] <acidjazz> lol i ordered a super nice pocket flash light
[03:31:38] <acidjazz> nitecore
[03:31:47] <acidjazz> takes 18650 batteries
[03:32:06] <acidjazz> binge watching too many horror movies .. where most of em people are in dark places using flashlights
[03:32:55] <acidjazz> why aren't horror movies sponsored by flashlight companies
[03:33:34] <acidjazz> guaranteed to not randomly flicker so you don't see the demon face for 0.25s
[05:53:37] <crazydip> opening connection to MongoDB via pymongo on localhost has a noticeable delay... my python script has nothing but db = MongoClient(uri) and takes 0.615s while connecting via mongo -u blah db -eval 'quit()' -p pw takes 0.101s - what could i be doing wrong?
[05:54:33] <crazydip> if i comment out the db = MongoClient() line the python script takes 0.08s
[05:55:26] <crazydip> pymongo 3.0.3, mongodb 3.0.5
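One way to narrow down where the 0.6 s goes (a sketch; the URI and labels are placeholders): time the pymongo import and the client construction separately, since `time` on the whole script lumps interpreter startup, imports, and connection together.

```python
import time

def timed(label, fn):
    """Run fn() and report how long it took, returning fn's result."""
    start = time.monotonic()
    result = fn()
    print(f"{label}: {time.monotonic() - start:.3f}s")
    return result

# With pymongo installed and a server running, this would separate the costs:
#   pymongo_mod = timed("import pymongo", lambda: __import__("pymongo"))
#   client = timed("MongoClient", lambda: pymongo_mod.MongoClient("mongodb://localhost"))
#   timed("first command", lambda: client.admin.command("ping"))
# In pymongo 3.x MongoClient connects in the background, so connection and
# auth cost tends to show up on the first command, not the constructor.

timed("noop", lambda: None)
```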
[05:56:26] <leptone> how do i remove an item from my mongodb collection by id?
[05:56:41] <leptone> i am trying db.cafes.remove({"_id": "55c2d144df713adba508c72e"})
[05:56:46] <leptone> but it's not working
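A hedged guess at the usual culprit here, since the stored documents aren't shown: `_id` values generated by the driver are ObjectIds, not strings, so a string filter silently matches nothing. A sketch, with the driver form in comments:

```python
def looks_like_object_id(s):
    """True if s is the 24-character hex string form of an ObjectId."""
    return len(s) == 24 and all(c in "0123456789abcdef" for c in s.lower())

# If _id is an ObjectId, the filter must wrap the hex string:
# in the mongo shell:
#   db.cafes.remove({"_id": ObjectId("55c2d144df713adba508c72e")})
# with pymongo (requires a live server):
#   from bson import ObjectId
#   db.cafes.delete_one({"_id": ObjectId("55c2d144df713adba508c72e")})
# The bare string form only works if _id was actually stored as a string.

assert looks_like_object_id("55c2d144df713adba508c72e")
```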
[08:28:25] <rawl79> hello
[08:29:37] <rawl79> I'm running an aggregation over a large collection (that's frequently updated) using Go. I always get "Exec error: PlanExecutor killed"
[08:29:53] <rawl79> any reason why the aggregation statement is being killed?
[09:43:28] <supersym> I find the documentation on enforce-unique-keys-for-sharded-collections extremely confusing
[09:50:54] <supersym> a. MongoDB does not support creating new unique indexes in sharded collections
[09:50:58] <supersym> b. and will not allow you to shard collections with unique indexes on fields
[09:51:06] <supersym> c. other than the _id field
[09:51:36] <supersym> = so it _is_ possible... but anyway, "If you need to ensure that a field is always unique in a sharded collection, there are three options"
[09:51:47] <supersym> - Use guaranteed unique identifiers.
[09:51:48] <supersym> Universally unique identifiers (i.e. UUID) like the ObjectId are guaranteed to be unique.
[09:53:11] <supersym> So it is possible but only when using UUID on _id as ObjectID?
[09:53:43] <supersym> Terms are used confusingly (or I'm a n00b, more likely) but anyway I assume ObjectID is a data type and _id is the field name which could be UUID
[09:55:24] <supersym> Oh wait ... UUID __like__ the ObjectID... but ObjectID doesn't __have__ to be UUID does it
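supersym's reading is right: an ObjectId is not an RFC 4122 UUID, it is merely UUID-*like* in that it is unique by construction, with no central check. A simplified sketch of that construction (the real layout is a 4-byte timestamp, machine/process-derived bytes, and a 3-byte counter; the random bytes here stand in for the machine/process part):

```python
import itertools
import os
import time

_counter = itertools.count(int.from_bytes(os.urandom(3), "big"))

def fake_object_id():
    """Simplified ObjectId: timestamp + per-process bytes + counter.
    Uniqueness falls out of the construction itself."""
    ts = int(time.time()).to_bytes(4, "big")
    machine = os.urandom(5)  # real ObjectIds derive this once per process
    count = (next(_counter) % 2**24).to_bytes(3, "big")
    return (ts + machine + count).hex()

ids = {fake_object_id() for _ in range(1000)}
assert len(ids) == 1000  # no collisions
```

This is why ObjectId on `_id` satisfies the "guaranteed unique identifiers" option without a unique index: collisions are ruled out by how the value is built, not by the database checking.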
[10:09:13] <djlee> Hi all, does nesting a document field have negative performance impacts (i.e. like it has to drag out all documents and parse them in memory). I want to store a bunch of dates relating to different things inside a "metadata" field on my document for cleanliness, but i need to be able to do things like "where doc.metadata.sometime >= <some time>"
[10:10:17] <djlee> will using a nested document have a noticeable impact on performance vs a top level attribute (i.e. i could store them as metadata_sometime rather than as a nested doc)
[10:42:29] <supersym> djlee: http://stackoverflow.com/a/7976370
[10:42:44] <supersym> in short: No difference due to serialized data read
[10:45:23] <djlee> cheers supersym, managed to find that exact SO post after i asked here :P
[11:31:35] <Lonesoldier728> anyone know best way of limiting docs in mongo http://stackoverflow.com/questions/31854506/mongoose-query-to-keep-only-50-documents
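For the "keep only the latest N documents" question, a capped collection is the usual server-side answer. Its behaviour is that of a fixed-size ring buffer, sketched here with a plain Python `deque`; the pymongo call (collection name invented) is in a comment:

```python
from collections import deque

# A capped collection behaves like a ring buffer: once full, the
# oldest inserts are evicted to make room for new ones.
logs = deque(maxlen=50)
for i in range(120):
    logs.append({"n": i})

assert len(logs) == 50
assert logs[0]["n"] == 70    # oldest surviving document
assert logs[-1]["n"] == 119  # newest

# Server-side equivalent (requires a live server; size is in bytes and
# must be supplied along with max):
#   db.create_collection("recent", capped=True, size=1048576, max=50)
```

This avoids the delete-the-overflow query from the Stack Overflow question entirely, at the cost of capped-collection restrictions (no document growth, no arbitrary deletes).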
[11:34:24] <gemtastic> hello
[12:31:10] <kenalex> hi everyone
[12:31:22] <kenalex> I am pretty new to mongodb
[12:31:34] <gemtastic> hello
[12:32:40] <kenalex> i am wondering if mongodb would fit my use case
[12:35:12] <kenalex> i am building an application that will manage cell sites (mobile network) and their equipment. each cell site has similar and unique attributes. the application is like a catalog of sorts but will also store summary stats on their performance, from data from other systems
[12:36:10] <StephenLynx> I think it would fit just alright.
[12:37:20] <StephenLynx> since it looks like you have a very flexible data model that is barely relational, or not relational at all.
[12:39:00] <kenalex> i attempted modeling the data schema (using visio) and came up with around 18 tables for the whole application, that's when a friend of mine mentioned document databases
[12:40:27] <kenalex> thanks StephenLynx
[12:40:34] <StephenLynx> I just use plain txt to model :v
[12:40:56] <StephenLynx> https://gitlab.com/mrseth/LynxChan/blob/master/doc/Model.txt
[12:41:21] <kenalex> StephenLynx: i do that as well :) i had visio at work which I normally use for network design, so I gave it a try
[13:21:03] <crised> Can you explain to me in simple words what a document store is?
[13:21:13] <crised> I know a bit about relational dbs, key value stores like dynamo db
[13:21:40] <crised> is a document store a db that indexes every element in the schema?
[13:24:00] <crised> could anyone explain this paragraph: "Document-oriented databases are inherently a subclass of the key-value store, another NoSQL database concept. The difference lies in the way the data is processed; in a key-value store the data is considered to be inherently opaque to the database, whereas a document-oriented system relies on internal structure in the document in order to extract metadata that the database engine uses for further optimization"
[13:24:24] <StephenLynx> hm
[13:24:29] <StephenLynx> it is a
[13:24:31] <StephenLynx> object
[13:24:32] <StephenLynx> with fields
[13:24:43] <StephenLynx> and each field holds a value. this value might be another object.
[13:24:54] <StephenLynx> and it doesn't care about the structure of the objects.
[13:24:57] <StephenLynx> it just stores it.
[13:25:10] <crised> StephenLynx: just like java classes
[13:25:19] <crised> it can have fields of other objects?
[13:25:24] <StephenLynx> yes
[13:25:27] <crised> e.g. private Animal dog;
[13:25:43] <crised> StephenLynx: How is this different for a key value store?
[13:25:52] <crised> key value cannot have other objects inside?
[13:25:58] <StephenLynx> I am not familiar with key value storages.
[13:26:05] <StephenLynx> so I can't tell you that.
[13:26:26] <crised> StephenLynx: how does mongo extract the metadata of each document?
[13:26:37] <Folkol> I think that your quote pretty much pin-point the difference. In a key-value store, the database does not understand, or care about, the contents of the value (i.e. opaque values). In a "document store", you are storing a structured document (for example a JSON document), that the database can reason about.
[13:27:06] <crised> Folkol: Please give me an example of how db can reason abou tit?
[13:27:11] <crised> s/tit/it
[13:27:35] <StephenLynx> for example
[13:27:37] <crised> Folkol: in key value, database also knows about key data type, and value data type as well
[13:27:45] <Folkol> For example: "Add an index for the age property" etc.
[13:27:51] <StephenLynx> you can ask it to project given a certain condition.
[13:27:54] <StephenLynx> like
[13:27:59] <Folkol> Or query the document store for documents matching certain criteria.
[13:28:04] <crised> Folkol: key value can also do that, secondary indexes in dynamodb
[13:28:14] <StephenLynx> "project only the objects on this array that match that condition"
[13:28:20] <crised> Folkol: same as dynamodb
[13:28:32] <StephenLynx> or "only return me documents which this field holds an array with this size"
[13:28:36] <crised> see, I don't get the difference between key value, and document
[13:29:28] <Folkol> Yes, but that is not a property of a generic "key-value store". That is a property of DynamoDB. A generic key-value store will store some opaque data for you, under a given key.
[13:29:32] <crised> StephenLynx: mmm, I think Mongo has a lot more functions than a key value db engine like dynamodb, I would guess key value and document store are pretty similar
[13:29:53] <crised> Folkol: oh I see, then dynamodb also has properties of a document store
[13:29:55] <Folkol> Yes
[13:29:57] <StephenLynx> like gridfs
[13:30:00] <crised> Folkol: awesome
[13:30:14] <StephenLynx> gridfs takes a file and splits it into documents
[13:30:21] <StephenLynx> saving metadata in a collection
[13:30:25] <StephenLynx> and the parts in another
[13:30:34] <crised> StephenLynx: ok
[13:30:36] <StephenLynx> and then you can do stuff like apply sharding to the collection
[13:30:40] <crised> Does MongoDB provide transactional support?
[13:30:42] <StephenLynx> no
[13:30:47] <Folkol> (Mind that early versions of DynamoDB only kept a bunch of dumb data for you, IIRC.)
[13:31:05] <crised> Folkol: hows that?
[13:33:05] <Folkol> I might be mistaken (the only NoSQL databases that I work with are Couchbase and Mongo), but I recall that DynamoDB was a simple key/value-store in the beginning, but got support for structured data later on.
[13:33:56] <crised> Folkol: ok
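The distinction Folkol and the quoted paragraph are drawing, as a toy sketch: both stores map keys to values, but only the document store parses the value, so only it can answer queries (or build indexes) on inner fields.

```python
import json

class KVStore:
    """Opaque key-value store: values are just bytes to the engine."""
    def __init__(self):
        self.data = {}
    def put(self, key, blob):
        self.data[key] = blob
    def get(self, key):
        return self.data[key]
    # No way to ask "which values have age > 30" short of fetching
    # and decoding every value client-side.

class DocStore:
    """Document store: the engine understands the value's structure."""
    def __init__(self):
        self.docs = {}
    def put(self, key, doc):
        self.docs[key] = doc
    def find(self, field, predicate):
        return [d for d in self.docs.values()
                if field in d and predicate(d[field])]

kv = KVStore()
kv.put("u1", json.dumps({"name": "ann", "age": 35}).encode())

ds = DocStore()
ds.put("u1", {"name": "ann", "age": 35})
ds.put("u2", {"name": "bob", "age": 20})
print(ds.find("age", lambda a: a > 30))  # [{'name': 'ann', 'age': 35}]
```

Services like DynamoDB blur the line by layering document features (secondary indexes on attributes) over a key-value core, which is exactly why the two categories felt so similar in the discussion above.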
[13:39:16] <deathanchor> crised: http://www.tokutek.com/2013/10/introducing-tokumx-transactions-for-mongodb-applications/
[13:42:09] <crised> deathanchor: thanks
[13:42:26] <StephenLynx> be advised toku makes several compromises
[13:42:34] <StephenLynx> it is not a trivial decision to use it.
[13:42:50] <crised> StephenLynx: yes, I thought the same :)
[13:42:57] <StephenLynx> read its documentation thoroughly before adopting.
[13:43:11] <StephenLynx> it doesn't support unique indexes, AFAIK, so you can have an idea.
[13:43:17] <cheeser> also, they got bought so i'm not entirely sure how viable that product is any more
[14:10:15] <deathanchor> it
[14:14:20] <deathanchor> thorough testing should help you find all your issues
[14:19:30] <troulouliou_div2> hi, is it classic in mongodb to duplicate data? i'm storing tweets and users in mongodb for a test and the tweets have a user_mentions list of mentioned users
[14:19:53] <troulouliou_div2> should i insert it like this or insert separate users and create some kind of relation when retrieving data?
[14:36:09] <StephenLynx> "is it classic in mongodb to duplicate data?" no
[14:37:48] <troulouliou_div2> StephenLynx, what should i do in this case, remove the mentionned_users list and add a new user in the user collection?
[14:38:05] <troulouliou_div2> and just add an id in the mentionned_users ?
[14:38:23] <StephenLynx> oh
[14:38:24] <StephenLynx> hold on
[14:38:31] <StephenLynx> I misunderstood your question
[14:38:44] <StephenLynx> yes, people duplicate data often to make fake relations.
[14:38:57] <cheeser> it's called denormalization
[14:38:58] <StephenLynx> and prevent documents from being too complex
[14:39:22] <troulouliou_div2> cheeser, StephenLynx ha ok i will read about denormalization ; thanks
[14:39:31] <troulouliou_div2> just needed the term :)
[14:39:39] <cheeser> this might help: http://askasya.com/post/socialstatusfeed
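A sketch of what denormalizing the tweet/user relation looks like in practice (field names invented): embed a small, stable snapshot of each mentioned user inside the tweet, and keep the full user document in its own collection, reachable by id.

```python
users = {
    "u1": {"_id": "u1", "handle": "@ann", "bio": "...", "followers": 120},
}

def make_tweet(text, mentioned_ids):
    """Denormalized tweet: embed each mentioned user's handle plus the
    id, so the full user document can still be looked up if needed."""
    return {
        "text": text,
        "mentioned_users": [
            {"_id": uid, "handle": users[uid]["handle"]}
            for uid in mentioned_ids
        ],
    }

tweet = make_tweet("hi @ann", ["u1"])

# Rendering the tweet now needs no second query, and editing a user's
# bio never touches old tweets. The trade-off of denormalization: if
# @ann renames, old tweets keep the stale handle unless updated.
assert tweet["mentioned_users"][0]["handle"] == "@ann"
```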
[14:48:58] <amcsi_work> is it possible to do a compound index in()?
[14:52:02] <amcsi_work> or is it possible at runtime to have results returned with a temporary column name holding the multiple values?
[15:10:52] <ssarah> guys, theoretical question. If i have a replica set with a primary, a secondary, and an arbiter, and the primary and arbiter go down
[15:10:58] <ssarah> how do i make the secondary primary?
[15:11:38] <cheeser> you wouldn't. the replSet can't elect a primary
[15:12:07] <ssarah> so how do i get out of the mess?
[15:12:20] <cheeser> bring at least one of the other nodes up
[15:15:08] <ssarah> what about if i force a change to the rs config with {force: true} and give a lot more priority to the machine that is up?
[15:16:16] <cheeser> you have 1/3 of a replSet up. nothing (sane) will force an election.
[15:16:41] <cheeser> the best you can hope for is to restart that last node without --replSet
[15:16:54] <cheeser> why can't those other nodes be restarted?
[15:27:02] <ssarah> they just can't, their whole machines are down
[15:27:31] <cheeser> so ... bring them back up?
[15:27:43] <ssarah> not on our location
[15:28:01] <cheeser> um. ok...
[15:31:23] <joshua> Well if you really want to go crazy you can disable the replica set and boot it as a single server but when things come back up you might be in trouble.
[15:31:56] <joshua> Data will be out of sync
[15:32:15] <ssarah> that's the same that cheeser suggested, joshua?
[15:32:29] <joshua> Ah sorry, I missed that
[15:32:43] <ssarah> yeh, its kind of weird
[15:33:40] <joshua> It's the kind of thing you have to plan for while they are all still up, since you can't change rs.config on a server you can't reach.
[16:07:55] <djlee> Is there any way to do a non-datatype-specific lookup? Someone has inserted a load of crap into the database and all the integers are cast as strings
[16:08:21] <djlee> is there a flag i can use to make mongo match the value regardless of data type
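As far as I know there is no single "ignore the type" flag. A common workaround (sketched here; field names are placeholders) is to match both representations with `$or`, or better, migrate the strings back to integers once:

```python
def type_insensitive_filter(field, value):
    """Build a filter matching the value as-is or as its string form."""
    return {"$or": [{field: value}, {field: str(value)}]}

print(type_insensitive_filter("qty", 42))
# {'$or': [{'qty': 42}, {'qty': '42'}]}

# One-off migration sketch (pymongo, live server assumed; in the
# 3.0-era $type syntax, 2 is the BSON string type):
#   for doc in coll.find({"qty": {"$type": 2}}):
#       coll.update_one({"_id": doc["_id"]},
#                       {"$set": {"qty": int(doc["qty"])}})
```

The migration is usually worth it: the `$or` form can't use a single index efficiently and has to be threaded through every query.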
[16:11:17] <dddh> kenalex: mongodb is not an RDBMS ;(
[16:14:17] <dddh> no ACID in there ;(
[16:44:14] <crazydip> this just in: python is not c++!
[16:44:21] <crazydip> ;)
[16:46:13] <crazydip> i have an issue where opening connection to MongoDB via pymongo on localhost has a noticeable delay... my python script has nothing but db = MongoClient(uri) and takes 0.615s while connecting via mongo -u blah db -eval 'quit()' -p pw takes 0.101s - what could i be doing wrong?
[16:47:22] <crazydip> the database itself consists of 1 collection with 1 tiny document
[16:47:40] <deathanchor> how are you measuring this?
[16:47:44] <crazydip> time
[16:47:55] <deathanchor> LOL, python is an interpreted language
[16:48:05] <deathanchor> compiled every time you run it
[16:48:25] <crazydip> so what?
[16:48:29] <deathanchor> mongo command is already compiled.
[16:48:45] <crazydip> yeah but much more complicated scripts take less time than that
[16:48:52] <deathanchor> you aren't timing just the connection, you are timing the compile time
[16:49:24] <crazydip> well yes but 0.6s??
[16:49:33] <crazydip> that's insane
[16:49:42] <crazydip> if i comment out that one line db= MongoClient
[16:49:42] <deathanchor> probably .5s just to compile the script
[16:49:51] <crazydip> time shows 0.08s
[16:49:56] <crazydip> you're plain wrong
[16:50:20] <deathanchor> sounds like a pymongo problem, not a db problem.
[16:50:30] <deathanchor> I use pymongo and have no issues.
[16:51:13] <cheeser> well, you're in luck! pymongo 2.9rc0 was just released. maybe it fixes your issue.
[16:51:14] <crazydip> it takes less time for python to "compile" a whole framework on this computer so there's something wrong somewhere.... it's not "normal"
[16:51:58] <crazydip> deathanchor: oh i would assume it's something with pymongo, not mongodb, since timing the mongo client shows it's 6x faster
[16:52:14] <dddh> crazydip: do you mean your problem could be related to dns or something else?
[16:52:15] <deathanchor> how about you precompile your code
[16:52:23] <crazydip> dddh: it's all localhost
[16:52:26] <deathanchor> pymongo -m py_compile script.py
[16:52:31] <deathanchor> er
[16:52:35] <deathanchor> python -m py_compile script.py
[16:52:38] <crazydip> deathanchor: good point
[16:52:47] <deathanchor> then run python script.pyc
[16:53:23] <crazydip> same
[16:53:25] <crazydip> real 0m0.619s
[16:54:26] <crazydip> cheeser: what do you mean 2.9 was just released? i'm using 3.0.3
[16:54:54] <deathanchor> he's talking pymongo module
[16:55:31] <crazydip> right
[16:55:37] <crazydip> i just checked github
[16:55:40] <crazydip> i don't get it
[16:55:52] <crazydip> how is 2.9 newer than 3.0.3?
[16:56:00] <crazydip> different branches i take it
[17:00:36] <cheeser> pymongo?
[17:00:43] <crazydip> yes
[17:01:06] <crazydip> deathanchor: you mentioned you don't see this "delay": what version of pymongo and mongodb are you using?
[17:01:16] <cheeser> ah. i didn't realize it was for an older branch. not a python user. i just saw the email from bernie about the release.
[17:02:12] <deathanchor> no, .5s for startup doesn't affect me
[17:04:00] <crazydip> ok, thanks
[17:40:46] <silasx> Seeing if I can speak here now that I’m registered
[17:43:58] <silasx> Can anyone see my text?
[17:46:39] <silasx> Well I’ll ask the question anyway: I have a replica set with three members, one of them in startup2 mode and the other two are secondary. One of the secondaries (call it 8501) has the highest priority by far … but no election is happening, and neither is becoming primary.
[17:46:51] <silasx> Can I force them to have the election and make one primary?
[17:48:56] <cheeser> you should determine why that one is stuck in STARTUP2
[17:49:33] <silasx> well, it’s not stuck, it was recently re-added to the replica set and is legitimately starting up, it has to re-sync.
[17:49:51] <silasx> But shouldn’t one of the secondaries become primary?
[17:50:46] <cheeser> i would expect one to be primary, yes.
[17:50:52] <cheeser> which one was the high priority?
[17:51:04] <silasx> 8501 — it’s 600
[17:51:18] <silasx> the one called 8500 is in startup2, and its priority is 500
[17:51:34] <cheeser> ok. so 8501 is not stuck?
[17:52:07] <silasx> it’s not stuck in startup2, but it is stuck in secondary
[17:52:16] <silasx> even though it should be becoming primary now, right?
[17:52:21] <silasx> this is mongo 2.4 by the way
[17:52:34] <cheeser> i would expect so, yes. check the logs and see if there's anything there.
[17:52:50] <cheeser> is there data on the node stuck in start up?
[17:54:10] <silasx> well I recently took it down because it was stuck in “recovering” for a week, then (per the manual) deleted its db directory, and then just started it back up
[17:54:23] <cheeser> still stuck?
[17:54:56] <silasx> yes there are still two nodes in secondary and one in startup2
[17:55:02] <silasx> Thanks for helping btw :-)
[17:55:55] <cheeser> try removing the one stuck in startup and see if the election happens.
[17:56:16] <cheeser> no clue as to why that node is stuck in startup?
[17:56:21] <silasx> like remove it from the replica set?
[17:56:35] <cheeser> instead of removing it, restart with more verbosity
[17:56:37] <silasx> well I only reconnected 8500 recently
[17:56:38] <cheeser> e.g. -vvvvv
[17:56:51] <silasx> so it’s reasonable to still be in startup2 right?
[17:57:17] <cheeser> dunno.
[17:57:18] <silasx> a few seconds ago rs.status() gave: "lastHeartbeatMessage" : "initial sync cloning db: <db name>"
[17:57:40] <cheeser> that's what I can't remember: if it stays in startup until the sync is done.
[17:58:12] <silasx> so you mean I should restart the 8500 instance, not just remove it from the replica set
[17:58:15] <silasx> ?
[17:58:26] <cheeser> do this for the new node: http://docs.mongodb.org/manual/tutorial/resync-replica-set-member/
[17:59:21] <silasx> oh yikes I just looked at the logs (I know, should have done earlier …) and found this:
[17:59:22] <silasx> [rsHealthPoll] replset info localhost:8502 thinks that we are down
[17:59:23] <silasx> [rsMgr] not electing self, localhost:8502 would veto with 'I don't think localhost:8501 is electable'
[17:59:24] <silasx> [rsHealthPoll] replset info localhost:8502 thinks that we are down
[18:00:00] <cheeser> that's why i was asking about the logs :D
[18:00:18] <cheeser> meeting time. back in a bit.
[18:03:49] <silasx> k
[18:06:33] <BladeSling> Howdy, I am attempting to optimize a mongo server and one of the indexes that has been created doesn't seem to take priority like I think it should. To me it is fairly obvious which index should be picked, but maybe someone here can explain why it picks the other index. Attached are the index executions for each. http://pastebin.com/WnqPBQFq
[18:12:50] <BladeSling> Here is the data from the query planner. I excluded the many other indexes, but if the order of the rejectedPlans has anything to do with it, the index I want is the first one to appear there. http://pastebin.com/ULPBLyqi
[18:17:08] <bakhtiya> Hey, regarding mongo indexes - I'm not seeing a direct performance increase on a distinct() operator on a key I have defined an index for - are there any optimization techniques around this? maybe an index isn't the solution - how can I optimize the distinct() operator
[18:32:17] <silasx> @cheeser thanks for the help, turned out to be that something was wrong with the autossh tunnel from 8502 to 8501, just had to restart it and 8502 allowed the election and 8501 was restored to its rightful role as primary
[18:32:58] <silasx> *restart the autossh tunnel, I mean
[18:43:59] <jr3> do read replicas act as a load balancer?
[18:49:04] <cheeser> no, they don't. typically.
[18:49:29] <cheeser> replica sets provide durability. sharding provides scale.
[18:54:56] <leptone> does this command change the ordering of my collection or just the order of the output?
[18:54:57] <leptone> db.collection.find().sort( { age: -1 } )
[18:57:58] <cheeser> just the output
[19:06:41] <leptone> how do i rearrange the data cheeser ?
[19:17:13] <cheeser> leptone: say what now?
[19:17:44] <leptone> id like to sort my collection and store the collection in that order
[19:17:58] <leptone> cheeser, ^
[19:18:09] <cheeser> in a new collection or the same?
[19:18:17] <jr3> I dont understand why a db.shutDown() then a mongod fails
[19:18:25] <jr3> there's always some issue
[19:18:29] <jr3> with restarting
[19:18:33] <cheeser> jr3: what error do you get?
[19:18:59] <jr3> exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
[19:19:05] <jr3> is this a permission issue
[19:19:12] <jr3> no mongod is running
[19:19:33] <jr3> also /data/db isn't the correct path
[19:19:38] <jr3> its /var/lib/mongo
[19:19:59] <jr3> and that's set in my mongod.conf so I don't know why it's trying that path
[19:21:53] <cheeser> are you telling mongod to read your .conf file with -f?
[19:24:55] <jr3> no
[19:24:57] <jr3> -f?
[19:25:05] <cheeser> mongod -f mongo.conf
[19:25:14] <cheeser> or wherever your config lives
[19:25:25] <cheeser> if you don't give it a config file, it'll default everything.
[19:25:48] <jr3> so what about when the server it's on reboots?
[19:26:10] <cheeser> you'd use the system appropriate launch scripts.
[19:26:13] <jr3> is there a way to say on reboot use the config as well
[19:26:32] <cheeser> sure. the linux packages set that up for you.
[19:28:28] <jr3> sudo mongod --fork --config /etc/mongod.conf --logpath /var/log/mongodb/mongod.log
[19:28:35] <jr3> that seems to work
[19:29:31] <cheeser> you can put the fork and logpath options in your .conf
[19:29:59] <jr3> they are, so I should just be able to do sudo mongod --config ?
[19:30:30] <cheeser> well, with the path, sure. or just /etc/init.d/mongod start
[19:30:41] <cheeser> but you can configure that to happen at start up, too.
[19:31:56] <leptone> cheeser, in the same collection
[19:32:19] <leptone> i would like to rearrange the docs in the collection but keep them in the same collection
[19:33:37] <cheeser> what's the point?
[20:09:14] <BladeSling> Can anyone explain why this index gets picked instead of the index that covers the query better? Query planner data: http://pastebin.com/ULPBLyq execution data: http://pastebin.com/WnqPBQFq
[20:12:31] <crazydip> jr3: 1) you should not be launching mongod by hand on a server but have it automated by systemd and 2) you should not be using sudo to launch mongod, nor should you launch it via root, but have a mongodb user / group -- again, this would be automated by systemd (you do have to create the user/group once if your distro package does not do it)
[22:20:21] <leptone> when i add in this .sort() method it changes the order my collection prints in, however it's not in the correct (ascending or descending) order
[22:20:24] <leptone> http://pastebin.com/EeX4iSzT
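Without seeing the output, a frequent cause of "sorted, but in the wrong order" (a guess): the field holds strings, and strings compare lexicographically, so "10" lands before "9". A plain Python illustration of the same comparison rule:

```python
ages_as_strings = ["9", "10", "2", "30"]
ages_as_numbers = [9, 10, 2, 30]

print(sorted(ages_as_strings))  # ['10', '2', '30', '9'] -- lexicographic
print(sorted(ages_as_numbers))  # [2, 9, 10, 30] -- numeric

# MongoDB's sort compares string-typed fields character by character
# the same way, so store numbers as numbers if you want numeric
# ordering from .sort({age: -1}).
```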
[22:23:46] <nathesh> Hi
[22:23:54] <nathesh> Am I in?
[22:24:26] <nathesh> okay I have a question in setting up my schema for my MongoDB
[22:24:32] <nathesh> can someone help me with this?
[22:25:20] <nathesh> thank you
[22:46:08] <joannac> leptone: that is thoroughly unhelpful. What's the output?
[22:48:37] <joannac> nathesh: you would do better just to ask your question
[22:48:51] <nathesh> okay cool
[22:49:20] <nathesh> so I have a mobile app and I want to create analytics for it.
[22:50:20] <nathesh> I get user data (the name of the user, what webpage he hit, the number of mins he was logged in, what actions he did)
[22:50:31] <nathesh> how do I store this information in Mongo?
[22:50:40] <nathesh> I know it is vague
[22:50:54] <joannac> that is vague
[22:51:03] <joannac> figure out what kind of information you need to get out
[22:51:09] <nathesh> how can I be clearer?
[22:51:36] <nathesh> 1. the average time spent in our app
[22:51:39] <nathesh> per user
[22:51:44] <nathesh> that is one metric I want to have
[22:51:57] <nathesh> lets take that example
[22:52:41] <nathesh> 1. the average time a user spent in our app 2. The number of users per zipcode
[22:53:26] <nathesh> I get an api call to the backend with the action and user id
[22:54:38] <nathesh> is that more specific?
[23:00:40] <nathesh> @joannac: is there an example how people have done logging on websites for users using Mongo?
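For metrics like nathesh's, the usual shape is one document per session/event plus aggregation at read time. A sketch with invented field names: average minutes per user computed in plain Python, with the equivalent server-side pipeline in a comment.

```python
# One document per app session (hypothetical schema).
sessions = [
    {"user": "u1", "zipcode": "94110", "minutes": 10},
    {"user": "u1", "zipcode": "94110", "minutes": 20},
    {"user": "u2", "zipcode": "10001", "minutes": 5},
]

def avg_minutes_per_user(events):
    """Group events by user and average the minutes field."""
    totals = {}
    for e in events:
        total, n = totals.get(e["user"], (0, 0))
        totals[e["user"]] = (total + e["minutes"], n + 1)
    return {user: total / n for user, (total, n) in totals.items()}

print(avg_minutes_per_user(sessions))  # {'u1': 15.0, 'u2': 5.0}

# Server-side equivalents (pymongo, live server assumed):
#   db.sessions.aggregate([
#       {"$group": {"_id": "$user", "avg_minutes": {"$avg": "$minutes"}}}
#   ])
# Users per zipcode:
#   {"$group": {"_id": "$zipcode", "users": {"$addToSet": "$user"}}}
```

The design choice to note: store raw events and aggregate on demand while volume is small; precompute summary documents only once the aggregations get too slow.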