#mongodb logs for Tuesday the 6th of October, 2015

[00:49:31] <Jonno_FTW> I have a question
[00:50:17] <jaitaiwan> Jonno_FTW: good place to ask :)
[00:52:25] <Jonno_FTW> I have a query and I want to group by the results by a field, how do I do that? I just want to know the values of the fields
[00:53:13] <StephenLynx> either with $group on an aggregate
[00:53:23] <StephenLynx> and I think you can group regular cursors, but I am not sure.
[00:54:23] <Jonno_FTW> I tried aggregate but I got an error
[00:55:14] <Jonno_FTW> http://pastebin.ws/503xks
[00:57:43] <StephenLynx> you didn't use aggregate
[00:57:48] <Jonno_FTW> I just want the set of site_no that don't have predictions
[00:58:27] <Jonno_FTW> that was the error I got with aggregate
[00:58:43] <StephenLynx> you
[00:58:45] <StephenLynx> didn't
[00:58:45] <StephenLynx> use
[00:58:47] <StephenLynx> aggregate
[01:03:24] <Jonno_FTW> ok I figured it out
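
A minimal sketch of the $group approach StephenLynx suggests; the collection name (sites) and field names (site_no, prediction) are assumptions for illustration:

    // distinct site_no values among documents that have no prediction field
    db.sites.aggregate([
        { $match: { prediction: { $exists: false } } },  // docs without predictions
        { $group: { _id: "$site_no" } }                  // one result per distinct site_no
    ])
    // db.sites.distinct("site_no", { prediction: { $exists: false } }) is equivalent here
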
[02:46:30] <Jonno_FTW> I have another question
[02:47:10] <joannac> are you going to ask? we're not mind readers
[02:47:25] <Jonno_FTW> I want to change the format of all documents in a collection: I want to turn a list of objects into an object whose keys are a value taken from each object in the list
[02:48:07] <Jonno_FTW> how should i do this?
[02:48:14] <joannac> setting values as keys is a really bad idea, imo
[02:48:45] <joannac> so unless you have a very well reasoned use case, my advice is "don't"
[02:48:49] <Jonno_FTW> well the objects have 2 pairs, sensor and count, I just want an object of {sensor:count...}
[02:49:37] <joannac> is "sensor" the actual field name?
[02:49:45] <Jonno_FTW> yes
[02:49:54] <joannac> oh
[02:50:15] <Jonno_FTW> so in the document I want readings : {sensor1: 1, sensor2:2...}
[02:50:23] <joannac> well, write some code. use forEach, or grab all the docs via your favourite language driver
[02:50:34] <joannac> how many sensors do you have?
[02:50:45] <Jonno_FTW> each one has 24
[02:50:51] <joannac> are you going to need to know how many readings there are in a document?
[02:50:51] <Jonno_FTW> each document that is
[02:50:56] <Jonno_FTW> no
[02:51:00] <joannac> what if one's missing?
[02:51:25] <Jonno_FTW> the count is what the sensor reports
[02:51:28] <joannac> how are you planning to index it?
[02:51:31] <Jonno_FTW> not a count of sensors
[02:51:41] <Jonno_FTW> I'm not going to index the readings field
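
A hedged sketch of the forEach approach joannac suggests, assuming documents shaped like { readings: [ { sensor: "sensor1", count: 1 }, ... ] } in a collection named docs:

    db.docs.find().forEach(function (doc) {
        var readings = {};
        doc.readings.forEach(function (r) {
            readings[r.sensor] = r.count;   // the value of "sensor" becomes the key
        });
        // overwrite the array with the { sensor1: 1, sensor2: 2, ... } object
        db.docs.update({ _id: doc._id }, { $set: { readings: readings } });
    });
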
[08:03:13] <arussel> db.i.insert({a:"a", b:{c:{d:[null]}}})
[08:03:29] <arussel> db.i.update({},{$pull:{"b.c.d":null}})
[08:03:44] <arussel> => error: cannot use the part (b of b.c.d) to traverse the element ({b: []})
[08:04:11] <arussel> when I was expecting the null value to be removed from the array. What am I doing wrong ?
[08:05:48] <arussel> got it, the problem was not coming from the value I've just inserted but from another value
[08:11:28] <arussel> could someone explain: http://pastebin.com/sk5KZd5v
[08:12:10] <arussel> why doesn't an update with {} match all docs, and why isn't null removed from the array?
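
Two things seem to be going on in the paste above: update() modifies only the first matching document unless { multi: true } is passed, and $pull raises the traversal error on any matched document where "b" is an array rather than a subdocument. A sketch reproducing both:

    db.i.insert({ a: "a", b: { c: { d: [null] } } })   // "b" is a subdocument: null gets pulled
    db.i.insert({ a: "b", b: [] })                     // "b" is an array: triggers the traversal error
    db.i.update({}, { $pull: { "b.c.d": null } }, { multi: true })   // multi needed to touch all docs
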
[08:37:25] <lobbin> I have two collections, same documents but different languages, I plan on merging for searching. What is the preferred way, to have one document per language and group them by id or one document per document containing an aggregated array for the search fields?
[09:47:48] <lobbin> On the other hand, $text on an indexed array is perhaps better than $group?
[09:58:26] <Walex> I have what seems a classic issue: MongoDB bloat. An application (Juju) creates and then soon deletes a lot of small "transaction" records. Instead of reusing deleted space, MongoDB allocates more file space continuously. If I stop a secondary in a replSet, delete the database and reload it, it goes from 390GB to 4GB, which seems to confirm that. Any MongoDB config parameters that might be relevant?
[10:06:37] <lobbin> I was under the impression that mongo would re-use the allocated space if the data was deleted.
[10:07:18] <Derick> it certainly does
[10:07:43] <Derick> Walex: which version are you running?
[10:07:53] <Walex> sure, it is supposed to, but it is not happening here, that's why I am wondering about configuration items
[10:08:44] <Walex> The version distributed with Juju is 2.4.9
[10:08:51] <Derick> okay
[10:09:15] <Derick> so, in 2.6 (or 2.8, i forgot), the default allocation method is now "PowerOf2"
[10:09:18] <Derick> in 2.4, it is not
[10:09:30] <Derick> but this is a flag you need to set on an (empty) collection
[10:09:40] <Derick> and should help a lot with fragmentation issues
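
For reference, a sketch of setting that flag on an existing MMAPv1 collection via collMod; "txns" is a placeholder collection name:

    // make new record allocations use power-of-2 sizes (the default from 2.6 on)
    db.runCommand({ collMod: "txns", usePowerOf2Sizes: true })
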
[10:10:29] <Walex> Ahhh, PowerOf2, that sounds good. Buddy-system allocators tend to be resistant to fragmentation.
[10:10:41] <Derick> "buddy systems"
[10:10:47] <Derick> also, 2.4 is old, and no longer supported
[10:11:42] <Walex> Derick: that's what comes with that application in the current "stable" release. Usual issue with LTS releases (the underlying system is Ubuntu LTS 14).
[10:12:38] <Derick> Walex: running old software is IMO irresponsible
[10:13:14] <Walex> Derick: Ubuntu supposedly backports critical fixes, but not performance fixes.
[10:13:35] <Walex> Derick: same as Debian and all the other LTS people. Which sort of helps.
[10:13:51] <Derick> yeah, I've heard that before - but do you really expect non-original devs to understand what they are doing as well - especially with MongoDB?
[10:13:55] <Derick> it's a red herring IMO
[10:15:03] <Walex> Derick: I tend to agree with you on that, but it is a difficult discussion.
[10:15:35] <Walex> the main argument for LTS is that some people (not here BTW) have enormous costs for testing before putting something into production.
[10:16:09] <Walex> I have seen environments where testing a new system release takes months.
[10:16:16] <Derick> Walex: http://docs.mongodb.org/master/tutorial/install-mongodb-on-ubuntu/?_ga=1.186779368.1943177576.1336128512
[10:16:35] <Derick> Walex: "outdated practises" :-)
[10:19:53] <Jack07> test
[10:20:03] <Jack07> hey guys, anybody had difficulties installing mongodb 3.0 on opensuse?
[10:20:27] <Jack07> I followed the official tutorial but no luck
[10:22:04] <Jack07> or anybody experienced installing it on any linux machine?
[10:23:16] <Jack07> the SSL certificate of mongodb seems like it cannot be verified by cURL
[10:26:13] <Walex> Jack07: probably asking in #OpenSUSE is best. Also they tend to have the latest packages of everything in one of their testing repos.
[10:26:55] <Jack07> ok, thanks
[11:29:56] <sid606> Hi to all, I have a question
[11:30:02] <sid606> I have a vagrant box created with https://puphpet.com/ with mongodb 2.6 on nginx and ubuntu 14.04.
[11:30:08] <sid606> Also I have mongodb on my host OS (ubuntu). How do I connect to the vagrant mongodb with robomongo?
[11:30:25] <sid606> thanks
[12:10:27] <Walex> sid606: you use the address and credentials you configured the VM with.
[12:11:17] <sid606> Yes i did this
[12:11:30] <Walex> sid606: then it works.
[12:11:58] <sid606> :)
[12:12:00] <sid606> well no
[12:12:28] <Walex> sid606: I am not sure I understand your question: it seems that you are asking us to tell you how you setup and configured your VM and MongoDB within it.
[12:13:18] <sid606> yes
[12:13:26] <Walex> sid606: have you tried accessing the MongoDB with the usual command line tool?
[12:13:53] <sid606> no
[12:14:21] <Walex> try then with 'mongo -u admin -p ..... address:27017/admin'
[12:15:07] <Walex> with the dots replaced by the password and "address" replaced by the VM address.
[12:15:32] <sid606> do I need to set up port forwarding in vagrant?
[12:15:45] <sid606> because I have mongo on the localhost as well
[12:15:58] <Walex> sid606: that depends on how you setup the VM network access, bridged or whatever else.
[12:16:21] <Walex> sid606: hopefully your virtual machine has its own address...
[12:17:47] <Walex> Derick: BTW, regarding my previous query about the lack of reuse of deleted record space that you helpfully answered: here is an example where a collection has 432 records of around 240 bytes and takes 170GB over 332 extents: http://paste.ubuntu.com/12696552/
[12:19:10] <Walex> sid606: perhaps you first should test access to your MongoDB instance *inside* the VM to be sure.
[12:21:03] <sid606> I can connect inside without problem
[12:21:24] <deathanchor> sid606: are you using bind_ip?
[12:21:46] <deathanchor> if you are you are limited to that list of who can connect
[12:21:56] <sid606> no
[12:22:07] <sid606> the default setings from puphpet
[12:56:48] <Walex> suppose that I have a collection with 432 documents each of around 240 bytes that nevertheless takes 170GB of storage space. If I run 'compact' on it, access will be suspended... for how long? Proportional to the actual space (around 100KB) or to the allocated space (around 170GB)?
[12:58:09] <deathanchor> Walex: however long it takes to run
[12:58:14] <deathanchor> if you want to avoid that there is another way
[12:59:20] <cheeser> how did that collection get to 170G for what is essentially 10k of data?
[12:59:33] <deathanchor> cheeser: happens when cleanup is forgotten
[12:59:42] <cheeser> cleanup?
[12:59:57] <StephenLynx> bad design, lots of very short-lived data.
[13:00:19] <deathanchor> the cleanup script had no monitoring so data piled up over time
[13:00:22] <cheeser> there would've had to have been a *ton* of churn on the data in that collection. i'm guessing mmapv1?
[13:00:55] <deathanchor> is that fixed with WT storage? does it reclaim space back?
[13:01:21] <cheeser> iirc, WT is better about reusing disk space, yes.
[13:02:24] <deathanchor> does it ever shrink files or re-write them to be smaller if data is removed
[13:02:32] <StephenLynx> why not just use a TTL index to clean up the data?
[13:02:44] <StephenLynx> or even use the application itself to clean its own data?
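
A minimal sketch of the TTL suggestion; the collection name (txns), the field name (createdAt), and the 30-day window are assumptions (ensureIndex instead of createIndex on pre-3.0 shells):

    // a background thread removes documents ~30 days after their createdAt value
    db.txns.createIndex(
        { createdAt: 1 },
        { expireAfterSeconds: 60 * 60 * 24 * 30 }
    )
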
[13:02:48] <Walex> cheeser: it is a Juju collection for status transactions. Lots of churn....
[13:02:52] <deathanchor> StephenLynx: devs aren't always the brightest bunch :)
[13:02:55] <cheeser> ah
[13:03:00] <StephenLynx> again, bad design.
[13:03:04] <Walex> yes....
[13:03:45] <deathanchor> Walex: if you have a replset, just promote a secondary to primary and stop the full machine, clear out the data dir and resync.
[13:03:58] <deathanchor> no downtime that way
[13:04:42] <cheeser> except for the election
[13:05:07] <Walex> deathanchor: I have done that a few times, I am then going to set 'usePowerOf2Sizes'. But this is a replSet...
[13:05:08] <deathanchor> if your code can't handle mongo exceptions then you have bigger issues
[13:07:55] <Walex> so my plan is to delete the secondaries and resync, but I have also set 'usePowerOf2Sizes' on the primary. I guess I can just go along and run another round-robin of rebuilds with 'rs.stepDown' on the primary. Ah well.
[13:10:12] <cheeser> i'd probably round robin the compactions
[13:32:03] <Walex> cheeser: yes, just done a 'mongodump' and going on with it.
[13:50:17] <Walex> yes, with powers-of-2 allocations the 170GB collection with 432 records of 240 bytes each now takes 170KB and does not grow madly anymore
[13:52:43] <cheeser> heh
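
For reference, the compact command being discussed; on MMAPv1 it blocks operations on the collection's database while it runs, which is why round-robining it across replica set members avoids downtime. "txns" is a placeholder name:

    db.runCommand({ compact: "txns" })   // rewrites and defragments the collection; run per collection
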
[14:00:10] <repxxl> Hello, please, I have a problem. I'm using ObjectId as the default _id and I really like it, but I need to use some kind of pretty-url hash function, like hashids from http://hashids.org/. Should I keep the ObjectId _id and create an additional normal auto-increment id which would go 1, 2, 3.. etc.?
[14:03:08] <Pinkamena_D> I have a strange question I was looking to get an opinion on. I have a mongodb instance in a company vpn, where no inbound ports are open. I have servers pushing audit data to another database on the outside, but it is designed to have access to this data inside the vpn. Currently we have a cronjob which connects out to the other server to read in new data using mongodump/restore.
[14:03:33] <Pinkamena_D> Is it possible to make use of any of mongo's replication features if only the one server can initiate connection to the other but not the reverse?
[14:27:07] <Jack07> test
[14:27:11] <Jack07> hello
[14:27:19] <Jack07> anybody encounter this? mongod: symbol lookup error: mongod: undefined symbol: FIPS_mode_set
[14:27:24] <Jack07> please help
[14:40:25] <jhertz> Hi, can I use find to do a case insensitive query without using regex?
[14:40:48] <Derick> no
[14:40:55] <Derick> or rather: not yet
[14:41:04] <jhertz> Derick: ok, thanks
[14:46:40] <cheeser> jhertz: when i need that, i keep a duplicate field that's normalized to all lower/upper case then query against that instead.
[14:47:13] <Derick> oh yes
[14:47:44] <Derick> I wrote an article about it too: http://derickrethans.nl/mongodb-collation.html
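
A sketch of cheeser's duplicate-field workaround; the collection (users) and field names are assumptions, and the application has to keep the lowercased copy in sync at write time:

    db.users.insert({ name: "Ada Lovelace", name_lower: "ada lovelace" })
    db.users.createIndex({ name_lower: 1 })         // index the normalized copy
    db.users.find({ name_lower: "ada lovelace" })   // lowercase the input before querying
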
[15:19:38] <Ytoc> yo
[15:21:09] <Ytoc> I've got a bit of a weird question.. How would I go about making a mongo collection.find() for a query like this? http://hastebin.com/himafivara.sm
[15:21:37] <Ytoc> So looking by id of an object inside an array
[15:25:43] <StephenLynx> array:{$all:{ new ObjectID(yourid)}}
[15:25:58] <StephenLynx> wait, thats a little wrong
[15:26:03] <StephenLynx> array:{$all: new ObjectID(yourid)}
[15:26:09] <StephenLynx> damn it
[15:26:17] <StephenLynx> array:{$elemMatch: new ObjectID(yourid)}
[16:23:13] <Ytoc> Stephen, sorry for disappearing. db.getCollection('Event').find({"array._id": "something"}) did what I needed
[16:28:49] <StephenLynx> welp
[16:29:32] <Ytoc> yeah, didn't realize I could do a simple array.objectElement: "something"
[16:29:36] <Ytoc> pretty nifty
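
Both forms side by side, for reference; the ObjectId value is a placeholder, and the queried value's type must match what is actually stored in the array (ObjectId vs plain string):

    db.getCollection('Event').find({ "array._id": ObjectId("560bd4e3e4b0d4d5a2f6c001") })
    // equivalent with $elemMatch (useful when matching several subdocument fields at once):
    db.getCollection('Event').find({ array: { $elemMatch: { _id: ObjectId("560bd4e3e4b0d4d5a2f6c001") } } })
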
[19:22:13] <saif> hello all, I am switching my test server from MS SQL to mongodb. I have installed mongodb on my test setup, but I am now looking for a GUI like Management Studio. Is there any free GUI you guys would recommend?
[19:24:57] <ahihi> I use robomongo, it's alright
[19:31:42] <saif> ahihi, looks good to me. But it looks like this page "http://docs.mongodb.org/ecosystem/tools/administration-interfaces/" is not categorised/prioritised. There are no recommendations for a new user, just raw information.
[19:33:01] <ahihi> guess so. I've never looked at page, robomongo was recommended by a colleague of mine :)
[19:33:07] <ahihi> that page*
[19:40:12] <StephenLynx> I just use the terminal.
[19:40:42] <StephenLynx> with .pretty() :v
[19:52:21] <louie_louiie> hey guys, i am trying to add one large collection to another larger collection and taking out the duplicate '_id's. the goal is to do a server cache thing where the smaller collection holds a 30 day range of data, then merges with the bigger collection every week without duplicate entries. any recommendations?
[20:06:19] <StephenLynx> hm
[20:06:38] <StephenLynx> I would pre-aggregate the data as I input it.
[20:06:49] <StephenLynx> and then add an expiration on the stuff you would delete.
[20:17:45] <louie_louiie> @StephenLynx, I am not sure what you mean by pre-aggregate the data while inputting
[20:17:56] <louie_louiie> like cluster week1, week2, ....?
[20:18:20] <StephenLynx> insert the data in the main collection that will be pruned eventually and do an upsert on the permanent long-term collection.
[20:18:55] <StephenLynx> so you can just put an expiration on the main collection
[20:19:02] <StephenLynx> and not bother manually deleting it.
[20:27:06] <louie_louiie> so what I believe you are saying is: do an update() with upsert=true, then put a TTL date on the main collection
[20:28:35] <StephenLynx> yes
[20:30:37] <louie_louiie> cool. I'll try that. Thanks @StephenLynx
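
A rough sketch of that plan, with assumed collection names (recent, archive) and document shape; each entry is written to the TTL-pruned collection and upserted into the permanent one, so the merge never produces duplicate _ids:

    var entry = { _id: ObjectId(), payload: 42, createdAt: new Date() };
    db.recent.insert(entry);                                          // expires via the TTL index
    db.archive.update({ _id: entry._id }, entry, { upsert: true });   // insert-or-replace, no dupes
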
[21:31:46] <topwobble> im trying to figure out which options were used to make an index so I can re-create the command used to make the index. Unfortunately, im getting an error "index <indexname> already exists with different options" and I can't figure out what is different.
[21:31:58] <topwobble> Here is the output of getIndexes and createIndex: https://gist.github.com/objectiveSee/7c83f9b77f23c42836b2
[21:53:18] <topwobble> specifying `{ "safe" : null }` fixes it. weird
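
One way to chase down a mismatch like that is to diff the stored index spec against the createIndex call; a sketch, with "mycoll" and the index name as placeholders:

    db.mycoll.getIndexes().forEach(printjson)   // shows every option stored with each index
    db.mycoll.dropIndex("indexname")            // then recreate with the options you actually want
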
[21:57:55] <shlant> hi all. I am writing to a replica set from a node.js app with https://docs.strongloop.com/display/public/LB/MongoDB+connector. I am running into issues where some writes return an error of "not master". I have secondaryPreferred for reads, but any idea why some writes are happening on non-master?
[22:04:50] <topwobble> shlant you need to use `secondary` not `secondarypreferred`
[22:05:44] <shlant> topwobble: awesome, I thought it was an easy fix. thanks!
[22:05:57] <topwobble> not sure about that ODM but with mongo shell you need to use rs.secondaryReadOk() (or something, that's not it exactly)
[22:06:15] <shlant> yea I saw that here: http://stackoverflow.com/questions/8990158/mongodb-replicates-and-error-err-not-master-and-slaveok-false-code
[22:06:39] <shlant> but my error is just "not master" instead of "not master and slaveOk=false"
[22:06:43] <shlant> but i'll keep it in mind
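
The shell helper being reached for above is rs.slaveOk() in shells of this era; note it affects reads only, since a secondary rejects writes outright:

    rs.slaveOk()   // per-connection: allow this shell to read from a secondary
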
[22:07:39] <topwobble> im not sure about writes. thats a different permission probably
[22:07:59] <topwobble> its not wise to write to secondary I believe
[22:12:23] <Boomtime> topwobble: you can't write to a secondary, any node which believes itself to be secondary won't let you write to it, there is no permission to allow that
[22:13:08] <topwobble> Boomtime makes sense.
[22:13:12] <topwobble> out of curiosity, what happens if you take the secondary out of the set, write to it, then put it back in the set as secondary? Does that new data get lost?
[22:22:19] <stickperson> when running the cloneCollection command, i get an error that i’m not authorized to execute that command. how can i fix this?