PMXBOT Log file Viewer


#mongodb logs for Friday the 12th of June, 2015

[02:54:23] <sabrehagen> if no sort option is provided, does mongo db return results sorted by insertion date?
[02:57:36] <cheeser> "natural order"
[02:57:46] <cheeser> which is usually insertion order
[02:59:35] <joannac> sabrehagen: if you don't provide a sort order, there are no guarantees about what order you get things back
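joannac's point can be sketched outside the shell (collection and field names here are hypothetical, not from the log): without an explicit `.sort()`, document order is undefined; with a sort spec it is deterministic.

```javascript
// In the shell this would be:
//   db.events.find()                          // order not guaranteed
//   db.events.find().sort({ insertedAt: 1 })  // deterministic: 1 asc, -1 desc
// The helper below mimics a single-key Mongo sort spec on plain objects.
function sortBySpec(docs, spec) {
  const [field, dir] = Object.entries(spec)[0];
  return [...docs].sort(
    (a, b) => (a[field] < b[field] ? -1 : a[field] > b[field] ? 1 : 0) * dir
  );
}

const docs = [{ insertedAt: 3 }, { insertedAt: 1 }, { insertedAt: 2 }];
const asc = sortBySpec(docs, { insertedAt: 1 }); // smallest insertedAt first
```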
[03:05:54] <rajat_> h
[03:06:11] <rajat_> I need help with an aggregation query: http://pastebin.com/sFuDZjpJ. I am getting this error: "SyntaxError: Unexpected token :". Please help. P.S. i am new to mongodb
[03:06:32] <joannac> rajat_: what have you tried to debug so far?
[03:07:33] <rajat_> i have tried to run each query individually, and it's working, but here it is showing me this error.
[03:07:53] <Boomtime> not mongodb, the problem is it isn't valid JSON
[03:08:05] <Boomtime> "then: cusers:{addToSet:"$userid"}" <-
[03:08:26] <joannac> rajat_: if you remove the entire $project part, does it work?
[03:08:39] <rajat_> let me try
[03:09:11] <rajat_> nope :(
[03:09:49] <joannac> same error or different error?
[03:10:15] <rajat_> same error
[03:10:46] <joannac> rajat_: pastebin what you ran, and what error you got
[03:10:54] <joannac> because I just ran it, and I don't get that error
[03:11:55] <rajat_> ok just a sec
[03:12:32] <rajat_> http://pastebin.com/kuSr6q9B
[03:13:32] <joannac> rajat_: you don't actually understand JSON or javascript. there's not a lot I can do for you here
[03:13:53] <joannac> db.transcripts.aggregate([{$group:{"_id":"$objectid"}}]) is what I ran
[03:14:04] <joannac> I don't even think that part is what you intend
[03:15:32] <rajat_> ok, let me explain: i am trying to merge both http://pastebin.com/jzUN59tt and http://pastebin.com/TVWd0fNg.
[03:15:44] <rajat_> can you tell me what is the best way to do this?
[03:16:07] <Boomtime> rajat_: somebody else wrote that right?
[03:16:16] <rajat_> nope it was me :p
[03:16:26] <rajat_> i wrote that
[03:18:19] <Boomtime> the trouble, rajat_, is that the two later examples you've provided contain valid JSON, whereas the example you provided at first contains quite basic syntax errors.. which i pointed out
[03:18:50] <rajat_> ok Boomtime let me try
[03:19:49] <Boomtime> i feel that you are trying to fly when you haven't learnt to crawl - you should start with simpler use cases
[03:21:40] <rajat_> I have only started learning it. I will try to implement it from the basics again. BTW thanks a lot
[03:22:05] <Boomtime> you should do some stuff with javascript first, particularly JSON
[03:22:31] <Boomtime> make sure you understand the rules of JSON before trying to construct complicated aggregation pipelines
[03:23:06] <Boomtime> the issues you have are not related to mongodb (at the moment) but with the syntax of the language you are using (javascript)
[03:23:51] <rajat_> ok i will go back to the basics again :)
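For reference, a hedged sketch of what a syntactically valid version of the quoted fragment might look like. The field names (`userid`, `objectid`) come from the log; the grouping itself is illustrative, not rajat_'s actual intent. The broken piece Boomtime quoted, `"then: cusers:{addToSet:"$userid"}"`, is not valid syntax; an `$addToSet` accumulator must sit inside a `$group` stage as `key: { $addToSet: ... }`.

```javascript
const pipeline = [
  {
    $group: {
      _id: "$objectid",                // one output document per objectid
      users: { $addToSet: "$userid" }, // distinct userids per group
    },
  },
];

// A pipeline is plain data: if it survives a JSON round-trip, the kind of
// syntax error the channel pointed out is gone.
const roundTripped = JSON.parse(JSON.stringify(pipeline));
// In the shell: db.transcripts.aggregate(pipeline)
```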
[09:49:07] <kas84> hi guys
[09:49:16] <kas84> and girls
[09:49:19] <kas84> :D
[09:50:04] <kas84> how can I set a maximum amount of ram for mongodb?
[09:50:17] <kas84> my OS is killing it when it has no more RAM
[09:50:50] <kas84> I’m using mongo 3.0.3 with wiredTiger storage engine
[09:51:02] <lqez> kas84: http://docs.mongodb.org/manual/faq/fundamentals/#how-do-i-configure-the-cache-size-for-mmapv1
[09:51:20] <lqez> If you're using WT, then set http://docs.mongodb.org/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.cacheSizeGB
[09:51:25] <kas84> I have this setup in my mongod.conf wiredTiger.engineConfig.cacheSizeGB: 1
[09:51:38] <kas84> but it was using 1.2GB when it was killed
[09:52:04] <lqez> are you using a configuration file for it?
[09:52:07] <kas84> yes
[09:52:11] <lqez> then check the path of the conf file
[09:52:54] <kas84> it’s /etc/mongod.conf
[09:52:58] <kas84> as usual
[09:53:00] <kas84> http://hastebin.com/ogisuwotil.sm
[09:53:02] <kas84> here’s my config
[09:53:08] <kas84> I think it should be okay
[09:53:24] <lqez> hmmmm
[09:54:36] <kas84> how can I get the config options from the mongo db console?
[09:54:47] <kas84> so I can know it’s loading that configuration
[09:55:14] <lqez> maybe db.serverStatus() ?
[09:59:50] <kas84> it doesn’t have that info :/
[10:00:27] <lqez> ?!
[10:01:07] <lqez> kas84: http://docs.mongodb.org/manual/reference/server-status/
[10:01:16] <lqez> See wiredTiger section
[10:01:27] <kas84> It’s definitely loading that conf -> ps -ef | grep mongod -> mongod 1889 1 26 09:42 ? 00:04:18 /usr/bin/mongod -f /etc/mongod.conf
[10:01:58] <lqez> if you configured WT as well, then "maximum bytes configured" should be shown
[10:02:37] <kas84> oh, yeah sorry
[10:06:28] <kas84_> lqez: it’s 1GB
[10:06:46] <kas84_> it’s "maximum bytes configured" : 1073741824
[10:06:54] <kas84_> which is 1.07GB
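A quick unit check (the field path is the assumed shape of the 3.0 `db.serverStatus().wiredTiger.cache` output): "maximum bytes configured" is in bytes, and 1073741824 bytes is exactly 1 GiB (2^30). The "1.07GB" reading comes from dividing by 10^9 instead of 1024^3.

```javascript
const maxBytesConfigured = 1073741824;            // value quoted in the log
const gib = maxBytesConfigured / (1024 ** 3);     // binary gigabytes: exactly 1
const decimalGB = maxBytesConfigured / 1e9;       // decimal gigabytes: ~1.07
```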
[10:07:30] <kas84_> when it was killed, the mongod process was 1.2GB
[10:07:57] <lqez> mongod may use more ram than configured, i think
[10:08:16] <lqez> because cacheSizeGB only limits the cache size of wiredTiger usage
[10:09:10] <lqez> but I don't know about the ram 'margin'
[10:09:18] <kas84_> aha
[10:09:29] <kas84_> can I give it 0.8 GB then?
[10:09:34] <lqez> try it :)
[10:10:17] <lqez> Here is another case:
[10:10:30] <lqez> I'm using tokuft and set the cachesize as 32G
[10:10:36] <lqez> but it actually uses 32.3G
[10:10:49] <lqez> and sometimes it uses up to 32.8G
[10:11:15] <lqez> I ran lots of aggregations and other jobs on the same instance, so it uses more than the 32G that I set
[10:17:29] <kas84_> okay, thanks lqez
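Pulling the thread together, a minimal mongod.conf fragment matching what lqez suggested (paths and the dbPath are illustrative; only the cache setting is the point). As discussed above, this caps the WiredTiger cache only, so the total mongod RSS will still run somewhat higher.

```yaml
# /etc/mongod.conf (YAML format, MongoDB 2.6+) — sketch only
storage:
  dbPath: /var/lib/mongodb       # illustrative
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1             # WT cache cap; process RSS will exceed this
```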
[11:20:54] <Hanumaan> getting following error from the server "unauthorized db:tapari lock type:-1 client:156.211.156.175 any solution?
[11:50:25] <joannac> Hanumaan: that doesn't look like a server error message I've seen before
[11:50:31] <joannac> the format is all wrong
[11:50:36] <joannac> what version are you running?
[14:43:18] <jr3> https://gist.github.com/anonymous/3436b47da1592169af9e -- If I comment out this populate I go from 1.5second return to a 300ms return
[14:43:50] <jr3> is the only way to really optimize this to bring the data into the document
[14:52:40] <Hanumaan> joannac, I am using version 1.8.1 ..
[15:22:18] <jr3> anyone familiar with mongodb instances on aws?
[15:30:03] <csd_> Is there any way to create something similar to a tailable cursor for a non-capped collection?
[15:32:50] <saml> can i mount mongodb?
[15:33:08] <saml> want to use normal find, grep, .. after mounting mongodb
[15:34:12] <saml> csd_, what do you want to tail?
[15:34:24] <saml> oplog probably gives all that you want to tail
[15:35:26] <csd_> saml: I have a query that I basically want to create a listener for. It's like: give me the result where attr is smallest. And the listener would then let me know immediately when this changes
[15:35:48] <saml> i don't understand. query = find ?
[15:36:12] <saml> can't you just sort by attr?
[15:36:13] <csd_> yeah sorry i come from the rdbms world
[15:36:48] <saml> db.docs.find(query).sort({attr:1})[0] will be the document with smallest attr
[15:37:18] <saml> so you want to listen for updates
[15:37:26] <csd_> yeah
[15:37:38] <saml> you can tail the oplog and keep a min-heap?
[15:37:57] <csd_> oh i didnt realize oplog was tailable
[15:38:22] <saml> actually, doesn't have to be heap. in your script, you'll store an _id and attr. and it tails oplog. for updates or removes, compare attr.. and update _id and attr accordingly
[15:39:05] <saml> what's updating db?
[15:39:07] <csd_> do you know of any examples of this in practice?
[15:39:15] <saml> it's also okay to put this logic in the app
[15:39:30] <saml> do you have multiple apps updating db?
[15:39:53] <csd_> multiple workers yes
[15:40:19] <saml> what language are you using?
[15:40:32] <saml> http://stackoverflow.com/questions/20871009/how-can-python-observe-changes-to-mongodbs-oplog
[15:41:01] <csd_> the find() is working over a set of tasks that have a time attribute for when they need to be executed and another attribute saying if the task is in process / completed. so it might be that the task with the nearest time isn't ready for execution yet, so youd have to wait . but during that waiting period, a new task with a smaller time could be added, and that would have to be executed first
[15:41:38] <csd_> and so for that id want the listener
[15:57:18] <csd_> saml: i think i see how to make what you linked work. thanks
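The scheme saml describes at 15:38 can be sketched in plain JavaScript (all names hypothetical; a real version would feed `applyEvent()` from a tailable cursor on `local.oplog.rs`): keep the current minimum `(_id, attr)` in the app and update it from oplog-like events instead of re-querying on every change.

```javascript
class MinTracker {
  constructor() {
    this.best = null; // { _id, attr } with the smallest attr seen, or null
  }
  applyEvent(ev) {
    if (ev.op === "insert" || ev.op === "update") {
      if (this.best === null || ev.attr < this.best.attr) {
        this.best = { _id: ev._id, attr: ev.attr }; // new minimum
      } else if (this.best._id === ev._id && ev.attr > this.best.attr) {
        this.best = null; // current minimum grew; re-query to find the new one
      }
    } else if (ev.op === "remove" && this.best !== null && this.best._id === ev._id) {
      this.best = null;   // minimum was removed; re-query to find the new one
    }
    return this.best;
  }
}

const tracker = new MinTracker();
tracker.applyEvent({ op: "insert", _id: "a", attr: 5 });
tracker.applyEvent({ op: "insert", _id: "b", attr: 3 }); // "b" becomes the minimum
```

When `best` drops to null the app falls back to the `find().sort({attr:1})` query from earlier in the conversation; the tracker only avoids re-querying on the common path.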
[17:47:34] <rajat_> can both these queries be merged in the following way: http://pastebin.com/sMWQSbhv and http://pastebin.com/7Hn1A5FP, and my solution is http://pastebin.com/tUzVEreL. please help
[18:11:45] <symbol> I'm a bit surprised by the lack of blog presence mongodb has.
[18:12:44] <StephenLynx> srs people doing srs biznis
[18:12:51] <StephenLynx> blogging isn't productive >:3
[18:13:23] <symbol> haha sure - I'm just tired of seeing the first result on google being that "Why you should never use MongoDB" post
[18:14:09] <StephenLynx> that's when the original FOSS mentality kicks in
[18:14:20] <StephenLynx> FOSS don't need people, people need FOSS.
[18:14:41] <StephenLynx> if one just sees a random blog post and decides without concrete arguments, it's their loss.
[18:14:44] <GothAlice> symbol: Actually, some of the largest names in IT blog heavily about it.
[18:15:20] <GothAlice> https://blog.serverdensity.com/does-everyone-hate-mongodb/ < a good article about other articles, and the SD guys have many other great articles, too.
[18:15:34] <symbol> Oh, great link! Thank you.
[18:15:55] <symbol> I agree, that FOSS mentality is dangerous and I suppose it makes sense that the controversial posts get the higher rankings.
[18:16:13] <GothAlice> They're more heavily linked to, esp. by those wanting to counter the arguments.
[18:16:30] <Spec> foss mentality is ... dangerous
[18:16:32] <GothAlice> Google "teaches the controversy" rather than providing true facts, on most searches. ;)
[18:16:35] <GothAlice> How so?
[18:16:54] <Spec> I don't know, that's a quote I just read from symbol
[18:16:57] <GothAlice> Wait. This is probably a discussion better held in ##argument. ;)
[18:17:01] <Spec> :)
[18:17:18] <symbol> Ah, sorry, not trying to spark anything.
[18:17:25] <StephenLynx> "dangerous" by no means. that is what keeps FOSS mostly clean.
[18:17:49] <symbol> I think I misinterpreted the meaning of FOSS.
[18:17:57] <StephenLynx> free open source software.
[18:18:07] <symbol> Still working on my acronym language :)
[18:18:08] <GothAlice> Free as in speech, not free as in beer.
[18:18:09] <Spec> (which does not imply any particular way a project manages its code base)
[18:19:09] <GothAlice> The vast majority of my FOSS code is actually commercial FOSS. I was paid to work on it.
[18:19:19] <GothAlice> Similarly, one of the engineers I've hired I'm paying to work on FOSS.
[18:19:41] <Spec> GothAlice: same
[18:19:56] <StephenLynx> not to mention commercialized FOSS, like red hat.
[18:20:03] <GothAlice> FOSS is a value-add for business due to the ability for improvements to be contributed at little cost (code review time, basically,) and community recognition.
[18:20:04] <Spec> of course, my "vast majority" of a small subset of code isn't saying much :P
[18:21:24] <symbol> I love hearing about companies that pay devs to work on FOSS (now that I know the correct definition)
[18:22:47] <GothAlice> Our philosophy at work: if it's not business critical, open it up. The secret is not about the bits, it's about how the bits are specifically put together.
[18:24:41] <symbol> That's a solid philosophy.
[18:27:10] <GothAlice> For example: a distributed MongoDB-based RPC system is nifty and all, but open-sourcing it reveals little about the things we're using it for, such as screen scraping or data processing (fully proprietary code).
[18:51:31] <m3t4lukas> hey guys
[18:52:27] <m3t4lukas> is it just robomongo or is something like "db.getCollection('test').find({"array": {"$size" : {"$lt": 8}}})" not possible right now?
[18:53:13] <cheeser> "right now?"
[18:53:29] <m3t4lukas> cheeser: maybe it is to be added in the future :P
[18:53:49] <m3t4lukas> I mean this seems like a pretty basic feature to me
[18:54:50] <cheeser> did you try that query in the shell?
[18:55:10] <m3t4lukas> nope, I will now
[18:55:26] <m3t4lukas> as soon as I find out how to auth in the shell
[18:56:03] <StephenLynx> user:pass@address/db
[18:56:05] <StephenLynx> afaik
[18:57:09] <m3t4lukas> didn't work
[18:57:18] <m3t4lukas> but I found it now :P
[18:57:26] <m3t4lukas> haven't used the shell for ages
[18:57:36] <m3t4lukas> always did C or robomongo
[18:59:09] <m3t4lukas> shell tells me that user is not defined :/
[18:59:37] <GothAlice> m3t4lukas: You might need an --authenticationDatabase argument, if your user is in a different database than the one they have access to.
[19:01:30] <m3t4lukas> the error persists
[19:03:30] <m3t4lukas> on my test server there is a database test with a user "lukas" with the password "lukas"
[19:03:44] <m3t4lukas> it works in C and with robomongo
[19:04:34] <m3t4lukas> okay, now it works, sry :P
[19:04:39] <m3t4lukas> made a mistake
[19:04:45] <m3t4lukas> not used to the shell anymore
[19:05:46] <m3t4lukas> okay I tried "db.getCollection('test').find({"array": {"$size" : {"$lt": 8}}})" now
[19:06:03] <m3t4lukas> no error but also no results where there should be results
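The silent empty result is expected: `$size` only matches an exact array length and does not accept comparison operators like `$lt`, so `{$size: {$lt: 8}}` matches nothing. A common workaround on servers of that era is a positional `$exists` check; the field name `array` is taken from the query in the log.

```javascript
// "array.7" exists iff the array has at least 8 elements (index 7 present),
// so requiring it NOT to exist selects arrays with fewer than 8 elements.
// In the shell: db.getCollection('test').find({ "array.7": { $exists: false } })
const query = { "array.7": { $exists: false } };

// Tiny matcher illustrating the semantics of just this query shape. Note it
// also matches documents where "array" is missing or not an array, exactly
// as the $exists:false form does on the server.
function matchesLtEight(doc) {
  return !(Array.isArray(doc.array) && doc.array.length >= 8);
}
```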
[19:15:30] <coderman1> is tehre a way to turn on timing for a query in the mongo cli?
[19:17:21] <coderman1> also a way for it to tell me if its using the indexes ive created
[19:18:41] <m3t4lukas> coderman1: it is bound to use indexes you create
[19:20:28] <m3t4lukas> if you want to test your index, just write some small code in any language you prefer that a driver exists for, generate a massive amount of documents, and test your queries with and without an index and compare those results
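A hedged note on both of coderman1's questions: in the 3.0-era shell, `cursor.explain("executionStats")` should answer them together, giving per-query timing and showing whether an index was chosen. Collection and field names below are hypothetical.

```javascript
// In the shell:
//   db.users.find({ email: "x@example.com" }).explain("executionStats")
// Paths to look at in the result document (assumed 3.0 explain shape):
const fieldsOfInterest = [
  "queryPlanner.winningPlan.stage",      // "IXSCAN" = index used, "COLLSCAN" = full scan
  "executionStats.executionTimeMillis",  // server-side time for the query
  "executionStats.totalDocsExamined",    // documents scanned; large values suggest a poor index fit
];
```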
[19:28:26] <brotatochip> hey guys, I want to create a database user in mongodb for Icinga to be able to use to monitor the replication status, can anybody tell me what roles I'll need to grant that user?
[19:29:55] <brotatochip> basically I want the user to be able to execute rs.status() on the command line (similar to mysql -e) so I can grep/awk out the relevant info
[19:34:52] <brotatochip> oh actually I think I found it, but feel free to correct me if I'm wrong: db.createUser({user:"icinga",pwd:"<password>",roles:["replSetGetStatus"]})
[19:35:45] <brotatochip> nope
[19:44:33] <StephenLynx> I dont think many here have experience with icinga
[19:44:39] <StephenLynx> I have never heard of it before.
[19:45:59] <brotatochip> StephenLynx: Icinga is just a fork of nagios and entirely irrelevant to what I'm trying to do :-P
[19:46:14] <StephenLynx> never heard about nagios either :v
[19:46:43] <brotatochip> Ah, it's a monitoring server, executes checks and sends out emails if a condition is met or not met
[19:48:04] <brotatochip> I'm trying to create a custom role to grant access to the replSetGetStatus action on the cluster resource
[19:49:49] <coderman1> is there anything comparable to kibana for mongodb?
[19:52:22] <StephenLynx> what is kibana?
[19:54:08] <brotatochip> StephenLynx: Kibana is the gui for the ELK bundle - Elasticsearch, Logstash, and Kibana
[19:57:17] <brotatochip> The documentation for this page seems to be broken http://docs.mongodb.org/v2.6/tutorial/define-roles/
[19:57:32] <preaction> Square was building something like kibana for mongo, i don't remember what it was called though
[19:57:35] <brotatochip> Getting a syntax error when using db.createRole()
[19:58:18] <brotatochip> Specifically this command: db.createRole( { role: “replicaMon", privileges: [ { resource: { cluster: true }, actions: [ “replSetGetStatus" ] } ], roles: [] } )
[19:59:05] <brotatochip> Would anyone be able to tell me how to properly formulate that query because the one in the doc does not work
[20:01:33] <brotatochip> It doesn't like "replicaMon" as a name for a custom role...
[20:10:51] <DeadPixel_> guys
[20:10:52] <DeadPixel_> anyone home?
[20:12:21] <coderman1> yea
[20:12:42] <DeadPixel_> yo coderman1
[20:12:48] <DeadPixel_> could you look at my gist?
[20:13:18] <brotatochip> ^ pickup lines from the future
[20:13:57] <DeadPixel_> oh
[20:13:57] <DeadPixel_> shit
[20:14:05] <DeadPixel_> i'm a shy 17 year old and i just made a pickup line
[20:14:09] <DeadPixel_> but really, look at this, would you?
[20:14:09] <DeadPixel_> https://gist.github.com/MopperigeKat/e376ff4eb476f47eac08
[20:16:33] <StephenLynx> there are no relational integrity checks on mongo, deadpixel.
[20:16:43] <StephenLynx> so the only relation is the one you implement in your application.
[20:16:56] <StephenLynx> there is something called dbref, but it's just syntax sugar.
[20:17:23] <DeadPixel_> yeah thats why i said
[20:17:29] <DeadPixel_> maybe ill use rethinkdb
[20:17:30] <DeadPixel_> it has joins
[20:17:35] <DeadPixel_> but the principle is the same
[20:17:38] <DeadPixel_> and #rethinkdb is not very active
[20:17:42] <DeadPixel_> so I thought i'd give this channel a shot
[20:20:27] <brotatochip> anybody around who can help me understand how to configure a replica set with auth enabled?
[20:20:50] <brotatochip> nvm I can google
[20:24:47] <cheeser> http://docs.mongodb.org/manual/tutorial/deploy-replica-set-with-auth/
[20:31:35] <brotatochip> Yeah cheeser, thanks, following that doc already :-)
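For context, a sketch of the security settings that tutorial adds for an authenticated replica set (file path and set name here are illustrative, not from the log; consult the linked tutorial for the full procedure):

```yaml
# mongod.conf fragment — sketch only
security:
  authorization: enabled
  keyFile: /etc/mongodb-keyfile   # same key file on every member, permissions 600
replication:
  replSetName: rs0                # illustrative replica set name
```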
[20:32:19] <brotatochip> btw I think my syntax errors when attempting to create the custom role were due to weird characters being copied into my terminal from the notes app that I was using for editing the command...
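That matches the command quoted at 19:58, which contains curly "smart quotes" around `replicaMon` and `replSetGetStatus`. The same role document with straight ASCII quotes parses fine:

```javascript
const roleDoc = {
  role: "replicaMon",
  privileges: [
    { resource: { cluster: true }, actions: ["replSetGetStatus"] },
  ],
  roles: [],
};
// In the shell: db.createRole(roleDoc)

// Sanity check: the serialized command must contain no non-ASCII characters
// (curly quotes pasted from a notes app would fail this).
const isAscii = /^[\x00-\x7F]*$/.test(JSON.stringify(roleDoc));
```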
[20:35:31] <coderman1> in collection.stats() what unit is avgObjSize?
[20:36:10] <coderman1> nm its bytes
[20:41:32] <deathanchor> coderman1: unless you say collection.stats(1024)
[20:42:32] <coderman1> word
[20:44:05] <coderman1> is that size on disk? or uncompressed?
[20:44:56] <deathanchor> mongo is always uncompressed
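On deathanchor's two points: `collection.stats()` reports sizes in bytes, and the optional argument is a scale divisor, so `stats(1024)` returns the same figures in KiB. (One hedge on "always uncompressed": WiredTiger compresses data on disk, so `avgObjSize`, which reflects uncompressed document size, can differ from on-disk `storageSize`.)

```javascript
// The scale argument simply divides every size field (illustrative values):
function scaleSize(bytes, factor = 1) {
  return bytes / factor;
}
const avgObjSizeBytes = 2048;                        // as from stats()
const avgObjSizeKiB = scaleSize(avgObjSizeBytes, 1024); // as from stats(1024)
```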
[20:55:26] <daidoji> Hello, when I create an index does it immediately show up in getIndexes or does it wait until its completed its creation?
[20:59:01] <deathanchor> daidoji: background or foreground?
[21:12:32] <brotatochip> cheeser, do you know: if I convert my production standalone mongodb instance to a replica set, add my secondaries, and the initial sync begins, will I be able to bring my application back online, or should I schedule a downtime for however long that sync takes?
[21:13:33] <brotatochip> the oplog is default size, the DB is 25gb with ~9gb of indexes
[21:14:01] <daidoji> deathanchor: background
[21:14:43] <brotatochip> or deathanchor do you know the answer to the question I just asked by chance?
[23:03:20] <d4rklit3> hi
[23:03:22] <d4rklit3> i'm having some trouble mongodumping a remote db.
[23:03:29] <d4rklit3> i can auth into the shell
[23:03:36] <d4rklit3> but i can't seem to run the dump
[23:03:41] <d4rklit3> i get an auth error
[23:04:49] <d4rklit3> nvm
[23:04:54] <d4rklit3> mispelled authenticateionDatabase
[23:05:04] <d4rklit3> but not like that, a different way.