[02:47:10] <joannac> are you going to ask? we're not mind readers
[02:47:25] <Jonno_FTW> I want to change the format of all documents in a collection: I want to turn a list of objects into an object whose keys come from a value in the list's objects
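A minimal mongo shell sketch of one way to do that kind of reshape (the collection name "mycoll" and the field names "items", "key", and "byKey" are made up for illustration):

    // Hypothetical documents look like:
    //   { _id: ..., items: [ { key: "a", val: 1 }, { key: "b", val: 2 } ] }
    // Goal: replace the array with an object keyed by each element's "key" value.
    db.mycoll.find().forEach(function (doc) {
        if (!doc.items) return;                  // skip documents without the array
        var byKey = {};
        doc.items.forEach(function (item) {
            byKey[item.key] = item;              // the element's value becomes the new key
        });
        db.mycoll.update(
            { _id: doc._id },
            { $set: { byKey: byKey }, $unset: { items: "" } }
        );
    });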
[08:03:44] <arussel> => error: cannot use the part (b of b.c.d) to traverse the element ({b: []})
[08:04:11] <arussel> when I was expecting the null value to be removed from the array. What am I doing wrong?
[08:05:48] <arussel> got it, the problem wasn't coming from the value I'd just inserted but from another value
[08:11:28] <arussel> could someone explain: http://pastebin.com/sk5KZd5v
[08:12:10] <arussel> why doesn't an update with {} match all docs, and why isn't null removed from the array?
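Without the pastebin content it is hard to say more about the traversal error, but a likely explanation for the first part is that update() only modifies the first matching document unless multi is set; a sketch, with the path taken from the error above and the collection name made up:

    // An empty filter {} matches every document, but update() still only
    // touches the first match unless multi: true is passed.
    db.mycoll.update(
        {},                               // match all documents
        { $pull: { "b.c.d": null } },     // $pull removes null entries from the array
        { multi: true }                   // apply to every matched document
    );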
[08:37:25] <lobbin> I have two collections, same documents but different languages, which I plan on merging for searching. What is the preferred way: one document per language, grouped by id, or one document per document, containing an aggregated array for the search fields?
[09:47:48] <lobbin> On the other hand, $text on an indexed array is perhaps better than $group?
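For what it's worth, a sketch of the single-document-with-array option: a text index covers every element of an array of strings, so one $text query searches all the language variants at once (collection and field names are made up):

    // One merged document per logical item, all language variants in one array.
    db.items.insert({
        _id: 42,
        searchText: ["the quick brown fox", "le renard brun rapide"]
    });

    // Indexing the array field indexes each element; "none" disables
    // language-specific stemming so mixed languages are matched literally.
    db.items.createIndex({ searchText: "text" }, { default_language: "none" });
    db.items.find({ $text: { $search: "renard" } });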
[09:58:26] <Walex> I have what seems a classic issue: MongoDB bloat. An application (Juju) creates and then soon deletes a lot of small "transaction" records. Instead of reusing deleted space MongoDB allocates more file space continuously. If I stop a secondary in a replSet, delete the database and reload it, it goes from 390GB to 4GB, which seems to confirm that. Any MongoDB config parameters that might be relevant?
[10:06:37] <lobbin> I was under the impression that mongo would re-use the claimed space once the data was deleted.
[10:10:47] <Derick> also, 2.4 is old, and no longer supported
[10:11:42] <Walex> Derick: that's what comes with that application in the current "stable" release. Usual issue with LTS releases (the underlying system is Ubuntu LTS 14).
[10:12:38] <Derick> Walex: running old software is IMO irresponsible
[10:13:14] <Walex> Derick: Ubuntu supposedly backports critical fixes, but not performance fixes.
[10:13:35] <Walex> Derick: same as Debian and all the other LTS people. Which sort of helps.
[10:13:51] <Derick> yeah, I've heard that before - but do you really expect non-original-devs to understand as well what they are doing - especially with MongoDB?
[10:15:03] <Walex> Derick: I tend to agree with you on that, but it is a difficult discussion.
[10:15:35] <Walex> the main argument for LTS is that some people (not here BTW) have enormous costs for testing before putting something into production.
[10:16:09] <Walex> I have seen environments where testing a new system release takes months.
[10:20:03] <Jack07> hey guys, anybody had difficulties installing mongodb 3.0 on opensuse?
[10:20:27] <Jack07> I followed the official tutorial but no luck
[10:22:04] <Jack07> or has anybody experience installing it on any linux machine?
[10:23:16] <Jack07> the SSL certificate of mongodb apparently cannot be verified by curl
[10:26:13] <Walex> Jack07: probably asking in #OpenSUSE is best. Also they tend to have the latest packages of everything in one of their testing repos.
[12:12:28] <Walex> sid606: I am not sure I understand your question: it seems that you are asking us to tell you how you setup and configured your VM and MongoDB within it.
[12:14:21] <Walex> try then with 'mongo -u admin -p ..... address:2710/admin'
[12:14:50] <Walex> try then with 'mongo -u admin -p ..... address:27017/admin'
[12:15:07] <Walex> with the dots replaced by the password and "address" replaced by the VM address.
[12:15:32] <sid606> do I need to set up port forwarding in vagrant?
[12:15:45] <sid606> because I have mongo on the localhost as well
[12:15:58] <Walex> sid606: that depends on how you setup the VM network access, bridged or whatever else.
[12:16:21] <Walex> sid606: hopefully your virtual machine has its own address...
[12:17:47] <Walex> Derick: BTW, as to my previous query about the lack of reuse of deleted record space, which you helpfully answered: here is an example where a collection has 432 records of around 240 bytes each, yet takes 170GB over 332 extents: http://paste.ubuntu.com/12696552/
[12:19:10] <Walex> sid606: perhaps you first should test access to your MongoDB instance *inside* the VM to be sure.
[12:21:03] <sid606> I can connect inside without problem
[12:21:24] <deathanchor> sid606: are you using bind_ip?
[12:21:46] <deathanchor> if you are you are limited to that list of who can connect
[12:22:07] <sid606> the default settings from puphpet
[12:56:48] <Walex> suppose that I have a collection with 432 documents of around 240 bytes each that nevertheless takes 170GB of storage space. If I run 'compact' on it, access will be suspended... For how long? Proportional to the actual space (around 100KB) or to the allocated space (around 170GB)?
[12:58:09] <deathanchor> Walex: however long it takes to run
[12:58:14] <deathanchor> if you want to avoid that there is another way
[12:59:20] <cheeser> how did that collection get to 170G for what is essentially 10k of data?
[12:59:33] <deathanchor> cheeser: happens when cleanup is forgotten
[13:03:45] <deathanchor> Walex: if you have a replset, just promote a secondary to primary and stop the full machine, clear out the data dir and resync.
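Roughly, that procedure looks like the following (the dbpath is a placeholder and the exact service commands depend on the OS):

    // If the bloated member currently happens to be primary, hand off the role first:
    rs.stepDown();

    // Then, on the bloated member, outside the mongo shell:
    //   1. stop mongod
    //   2. clear out its dbpath (e.g. /var/lib/mongodb -- placeholder path)
    //   3. start mongod again
    // The member rejoins the set and performs an initial sync, rebuilding its
    // data files compactly from another member.

    // Watch the resync progress from any member:
    rs.status();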
[13:05:07] <Walex> deathanchor: I have done that a few times, I am then going to set 'usePowerOf2Sizes'. But this is a replSet...
[13:05:08] <deathanchor> if your code can't handle mongo exceptions then you have bigger issues
[13:07:55] <Walex> so my plan is to delete the secondaries and resync, but I have also set 'usePowerOf2Sizes' on the primary. I guess I can just go along and run another round-robin of rebuilds with 'rs.stepDown' on the primary. Ah well.
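For reference, on the 2.x MMAPv1 engine that allocation strategy is switched per collection with collMod; a sketch, with the collection name as a placeholder:

    // Make future allocations use power-of-two record sizes so freed slots can be
    // reused by later inserts (this setting became the default in MongoDB 2.6).
    db.runCommand({ collMod: "txns", usePowerOf2Sizes: true });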
[13:10:12] <cheeser> i'd probably round robin the compactions
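A sketch of compacting one member at a time (database and collection names are placeholders); on MMAPv1 the command blocks operations on that database for the member it runs on, and a secondary typically shows as RECOVERING while it works:

    // Connect directly to the member being compacted (not via the replica set URI).
    var d = db.getSiblingDB("juju");            // placeholder database name
    d.runCommand({ compact: "txns" });          // placeholder collection name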
[13:32:03] <Walex> cheeser: yes, just done a 'mongodump' and going on with it.
[13:50:17] <Walex> yes, with powers-of-2 allocations the 170GB collection with 432 records of 240 bytes each now takes 170KB and does not grow madly anymore
[14:00:10] <repxxl> Hello, please, I have a problem: I'm using ObjectId as the default and I really like it, but I need some kind of pretty URL hash function, like hashids from http://hashids.org/. Should I keep the _id ObjectId and create an additional normal auto-increment id which would go 1, 2, 3, etc.?
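One common pattern for a secondary, sequential id is a counters collection bumped with findAndModify; a sketch (collection and field names are made up), with the small numeric id being what you would feed into hashids for the pretty URL:

    // One counter document per sequence.
    function getNextSequence(name) {
        var ret = db.counters.findAndModify({
            query: { _id: name },
            update: { $inc: { seq: 1 } },
            new: true,
            upsert: true
        });
        return ret.seq;
    }

    // Keep the ObjectId _id and store the sequential id alongside it.
    db.posts.insert({ shortId: getNextSequence("posts"), title: "hello" });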
[14:03:08] <Pinkamena_D> I have a strange question I was looking to get an opinion on. I have a mongodb instance in a company vpn, where no inbound ports are open. I have servers pushing audit data to another database on the outside, but it is designed to have access to this data inside the vpn. Currently we have a cronjob which connects out to the other server to read in new data using mongodump/restore.
[14:03:33] <Pinkamena_D> Is it possible to make use of any of mongo's replication features if only the one server can initiate connection to the other but not the reverse?
[15:21:09] <Ytoc> I've got a bit of a weird question.. How would I go about making a mongo collection.find() for a query like this? http://hastebin.com/himafivara.sm
[15:21:37] <Ytoc> So looking by id of an object inside an array
[15:25:43] <StephenLynx> array:{$all:{ new ObjectID(yourid)}}
[15:25:58] <StephenLynx> wait, that's a little wrong
[15:26:03] <StephenLynx> array:{$all: new ObjectID(yourid)}
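For the record, another way this kind of lookup is commonly written is with dot notation into the array elements, or $elemMatch when matching several fields of the same element (collection and field names are made up; the shell spells it ObjectId rather than the driver's ObjectID):

    // Match documents whose "members" array contains an element with the given _id.
    db.mycoll.find({ "members._id": ObjectId("507f1f77bcf86cd799439011") });

    // Equivalent here, and handier when several fields of one element must match together.
    db.mycoll.find({ members: { $elemMatch: { _id: ObjectId("507f1f77bcf86cd799439011") } } });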
[19:22:13] <saif> hello all, I am switching my test server from MS SQL to MongoDB. I have installed mongodb on my test setup, but I am now looking for a GUI like Management Studio. Is there any free GUI you guys would recommend for me?
[19:31:42] <saif> ahihi, looks good to me, but it looks like this page "http://docs.mongodb.org/ecosystem/tools/administration-interfaces/" is not categorised/prioritised. There are no recommendations for a new user, just raw information.
[19:33:01] <ahihi> guess so. I've never looked at that page; robomongo was recommended by a colleague of mine :)
[19:52:21] <louie_louiie> hey guys, I am trying to add one large collection to another, larger collection while taking out the duplicate '_id's. The goal is to do a server cache thing where the smaller collection holds a 30-day range of data, then merges with the bigger collection every week without duplicate entries. Any recommendations?
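A minimal sketch of one way to fold the 30-day collection into the archive without duplicate _ids (collection names are placeholders): upserting on _id makes the weekly merge idempotent, since documents already present are overwritten in place rather than inserted again.

    db.recent.find().forEach(function (doc) {
        // Keyed on _id: existing archive documents are replaced, new ones inserted.
        db.archive.update({ _id: doc._id }, doc, { upsert: true });
    });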
[21:31:46] <topwobble> I'm trying to figure out which options were used to create an index so I can re-create the command used to make it. Unfortunately, I'm getting an error "index <indexname> already exists with different options" and I can't figure out what is different.
[21:31:58] <topwobble> Here is the output of getIndexes and createIndex: https://gist.github.com/objectiveSee/7c83f9b77f23c42836b2
[21:57:55] <shlant> hi all. I am writing to a replica set from a node.js app with https://docs.strongloop.com/display/public/LB/MongoDB+connector. I am running into issues where some writes return an error of "not master". I have secondaryPreferred for reads, but any idea why some writes are happening on non-master?
[22:04:50] <topwobble> shlant you need to use `secondary` not `secondarypreferred`
[22:05:44] <shlant> topwobble: awesome, I thought it was an easy fix. thanks!
[22:05:57] <topwobble> not sure about that ODM, but with the mongo shell you need to use rs.secondaryReadOk() (or something, that's not it exactly)
[22:06:15] <shlant> yea I saw that here: http://stackoverflow.com/questions/8990158/mongodb-replicates-and-error-err-not-master-and-slaveok-false-code
[22:06:39] <shlant> but my error is just "not master" instead of "not master and slaveOk=false"
[22:07:39] <topwobble> I'm not sure about writes. That's probably a different permission
[22:07:59] <topwobble> it's not wise to write to a secondary, I believe
[22:12:23] <Boomtime> topwobble: you can't write to a secondary, any node which believes itself to be secondary won't let you write to it, there is no permission to allow that
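In case it helps with the original question: with the node driver the usual shape of the fix is to give it the full replica set in the connection string so it can discover the current primary and route writes there; read preference only ever affects reads. A sketch for the 2.x-era node driver (hosts, set name, and collection are placeholders):

    var MongoClient = require("mongodb").MongoClient;

    // Listing the members plus replicaSet lets the driver track the primary;
    // readPreference only steers reads, writes always go to the primary.
    var uri = "mongodb://host1:27017,host2:27017/mydb" +
              "?replicaSet=rs0&readPreference=secondaryPreferred";

    MongoClient.connect(uri, function (err, db) {
        if (err) throw err;
        db.collection("things").insert({ ok: true }, function (err, res) {
            if (err) throw err;               // a write reaching a secondary would error here
            db.close();
        });
    });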
[22:13:12] <topwobble> out of curiosity, what happens if you take the secondary out of the set, write to it, then put it back in the set as a secondary? Does that new data get lost?
[22:22:19] <stickperson> when running the cloneCollection command, I get an error that I'm not authorized to execute that command. How can I fix this?
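No answer followed in the channel, but the usual first steps are to check what roles the connected user actually has and then have an admin grant one that covers the command; a sketch (user and role names are illustrative, and the exact privileges cloneCollection needs depend on the server version):

    // See which user you are authenticated as and what privileges it holds.
    db.runCommand({ connectionStatus: 1, showPrivileges: true });

    // As an administrator, grant a broader role to that user (role shown is
    // only an example; check the docs for what cloneCollection requires).
    db.getSiblingDB("admin").grantRolesToUser(
        "appUser",
        [ { role: "readWriteAnyDatabase", db: "admin" } ]
    );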