PMXBOT Log file Viewer


#mongodb logs for Wednesday the 7th of January, 2015

[00:00:12] <Boomtime> my guess is that Golang driver is not enforcing object check like it should, i thought 2.6.4 checked at the server but apparently not
[00:22:27] <proteneer> Boomtime: can I use periods in a list? like [“a.b”, “b.c”]
[00:22:47] <proteneer> “foo”: [“a.b”, “c.d”]
[00:24:12] <cheeser> what happened when you tried?
[00:24:25] <proteneer> well my Go driver is already pretty bored
[00:24:26] <proteneer> borked*
[00:24:29] <proteneer> so I don’t really trust it
[00:24:36] <cheeser> using mGo?
[00:24:38] <proteneer> yeah
[00:24:46] <proteneer> it’s already letting me insert fields with a period
[00:24:52] <cheeser> there you go
[00:25:38] <proteneer> well
[00:25:39] <proteneer> that’s BAD
[00:25:40] <proteneer> lol
[00:27:13] <cheeser> what?
[00:27:24] <proteneer> you’re not supposed to be able to insert a field with a period
[00:27:38] <cheeser> what makes you think that?
[00:27:54] <cheeser> values can have virtually anything. *keys* can't have periods.
[00:30:16] <Boomtime> proteneer: this is fine: “foo”: [“a.b”, “c.d”]
[00:30:23] <Boomtime> "a.b" here is a value
[00:30:34] <proteneer> yeah I just wanted to make sure
[00:30:36] <proteneer> field == key
[00:30:39] <proteneer> ^ cheeser
[00:30:45] <proteneer> sorry i think we just got confused on terminology
[00:30:52] <proteneer> mgo lets you insert keys with periods
[00:30:57] <cheeser> no. field != key
[00:31:15] <cheeser> field == key/value
[00:31:27] <proteneer> Restrictions on Field Names
[00:31:30] <proteneer> Field names cannot contain dots (i.e. .) or null characters, and they must not start with a dollar sign (i.e. $). See Dollar Sign Operator Escaping for an alternate approach.
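The restriction proteneer quotes applies to keys only, never to values, which is the distinction the whole exchange turns on. A minimal Python sketch of that rule (a hypothetical helper, not part of any driver; real drivers and the server enforce their own checks):

```python
def check_field_names(doc):
    """Recursively verify no key contains a dot or starts with '$'.

    Only keys (field names) are restricted; string *values* like "a.b"
    inside lists or subdocuments are perfectly legal.
    """
    if isinstance(doc, dict):
        for key, value in doc.items():
            if "." in key or key.startswith("$"):
                raise ValueError("illegal field name: %r" % key)
            check_field_names(value)
    elif isinstance(doc, list):
        for item in doc:
            check_field_names(item)

# Values may contain periods -- this document is fine:
check_field_names({"foo": ["a.b", "c.d"]})
```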
[00:31:36] <cheeser> field names. keys.
[00:31:44] <cheeser> nothing about field values
[00:31:52] <proteneer> fine
[00:31:56] <proteneer> point taken.
[02:49:58] <rawl79> hi guys
[02:50:19] <rawl79> quick question guys. how do I install the mongodb 2.8 RCs using yum?
[02:50:44] <rawl79> is that possible? can't see any official "testing" repository for mongodb in /etc/yum.repos.d/mongodb.repo
[02:51:25] <cheeser> i don't think those are published to the yum repos.
[02:52:27] <rawl79> ok, looks like i'm gonna try the tarball binary instead then.
[02:52:55] <cheeser> i did that just yesterday on OS X. no biggie.
[03:50:50] <Meatkey> Hi, I was having some trouble setting up mongodb on ubuntu and I was wondering if anybody could help
[03:54:59] <joannac> Meatkey: you'll need to give more details than that if you want help
[03:57:16] <Meatkey> specifically, it's already been set up. I can use the mongo shell locally but when I try to connect from the outside, I can't seem to get in. I didn't set up a particular port so it should be using 27015.
[03:59:36] <joannac> 27017 is the default port
[03:59:48] <Meatkey> I'm sorry, that was a typo. I am using 27017.
[04:01:15] <joannac> connect locally and run db.serverCmdLineOpts()
[04:01:24] <joannac> and look for bindIp
[04:01:38] <joannac> if it's set to 127.0.0.1, fix it in your conf file
[04:02:01] <joannac> (assuming that you don't actually have trouble connecting to the server because of a firewall or something)
[04:02:22] <Meatkey> does bindIp have to reflect its ip address? I thought it could be localhost or 127.0.0.1 and still be accessible through its external ip
[04:07:36] <Meatkey> thanks joannac, that fixed the issue
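For the record, the fix joannac describes lives in the 2.x-era "ini"-style mongod.conf; a sketch of the relevant lines (the external address below is invented for illustration, and binding 0.0.0.0 is only sensible behind a firewall):

```
# /etc/mongod.conf  (pre-2.6 option format; 2.6+ YAML uses net.bindIp)
# 127.0.0.1 accepts local connections only. List the server's external
# interface as well to allow remote clients (example address hypothetical):
bind_ip = 127.0.0.1,192.168.0.50
port = 27017
```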
[08:25:40] <amitprakash> Hi, is it possible to remove all documents older than a month for all collections?
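There is no single server-side command that ages out documents across all collections; TTL indexes handle it per collection going forward, or, assuming the default ObjectId `_id` (whose first four bytes encode the creation time in seconds since the epoch), you can loop over collections and remove everything below a timestamp-derived cutoff. A Python sketch of building that cutoff (the shell call in the comment is illustrative):

```python
import datetime

def objectid_cutoff(dt):
    """Hex string for a synthetic ObjectId whose embedded timestamp is dt.

    Because the leading 4 bytes of a default ObjectId are the creation
    time, any _id below this value belongs to a document created before dt.
    """
    ts = int(dt.timestamp())
    return format(ts, "08x") + "0" * 16

month_ago = datetime.datetime.now() - datetime.timedelta(days=30)
cutoff = objectid_cutoff(month_ago)
# Then, per collection, in the shell:
#   db[name].remove({_id: {$lt: ObjectId("<cutoff>")}})
```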
[09:48:44] <jerome-> I do MongoHub. I would like to know if it is useful for users to have both aggregation and map/reduce or if there is aggregation UI, map/reduce is useless?
[12:23:27] <durre> I have a document that looks like this: https://gist.github.com/durre/d3ec06cb75f9dbd139b6 ... I wish to update it to replace the versions that matches id: ObjectId("200"). can mongodb do that in one operation or do I have to read back an entire "photos" array, modify it, then store it back to the db?
[12:34:41] <durre> found it
[12:47:06] <RizziCR> hi
[12:47:33] <RizziCR> i use mongodb as filesystem (gridfs)
[12:47:57] <RizziCR> i remove some documents from the files collection
[12:48:37] <RizziCR> how can i "refresh" the chunks collection or remove no more needed file data from the chunk collection ?
[12:49:11] <RizziCR> i remove the file documents with the remove query
[12:49:36] <RizziCR> and with some criteria
[12:52:10] <RizziCR> i try something equals sql state db[chunks].remove({_id:{$not:{$in:db[files].find({},{_id:1})}}) but $in only accept an array
[12:54:34] <RizziCR> ok.. with the right search key on google i think i find a solution
[13:10:05] <RizziCR> problem is solved.. thanks for your attention ;)
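The solution RizziCR alludes to is presumably the standard two-step cleanup: `$in`/`$nin` want a concrete array, not a cursor, so first materialize the surviving file `_id`s, then remove chunks whose `files_id` is not among them (shell: `var ids = db.fs.files.distinct("_id"); db.fs.chunks.remove({files_id: {$nin: ids}})`). Simulated here over plain Python lists, since the logic is the point and no server is to hand:

```python
# Stand-ins for the GridFS fs.files and fs.chunks collections.
files = [{"_id": 1}, {"_id": 2}]                 # metadata left after the remove
chunks = [{"files_id": 1, "n": 0},
          {"files_id": 2, "n": 0},
          {"files_id": 3, "n": 0}]               # files_id 3 is now orphaned

# Step 1: materialize the surviving ids ($in/$nin need an array).
live_ids = [f["_id"] for f in files]

# Step 2: keep only chunks whose parent file still exists.
chunks = [c for c in chunks if c["files_id"] in live_ids]
```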
[14:56:43] <dberry> running a replica set (v2.4.5) and we lost a secondary. We rebuilt the server and started an initial sync. RS.status says it is a secondary but the data directory is 1/10 the primary data dir
[14:57:08] <dberry> how long does the initial sync take and where are the other dbs?
[15:06:16] <kees_> dberry, maybe the primary data dir has a lot of fragmentation
[15:07:25] <dberry> when I connect to the secondary and show dbs, I do not see all of the dbs that are listed on the primary
[15:37:03] <sarthak> hello i am currently working on mongoDB geospatial functions
[15:42:54] <domo> hey, I have a process that creates temporary databases and removes them on the fly for some data processing. is there anything bad about this? e.g., hard drive space allocation by mongo
[15:49:42] <cheeser> not really. dropped databases' space is released back to the OS
[15:50:04] <cheeser> you'd just have a bit of a perf hit for the drop command to finish, potentially.
[15:52:37] <sarthak> hey i am using mongo 2.6 but then also 'or'ing of queries is giving an error for LineIntersection query
[16:16:25] <edrocks> has anyone used redis combined with mongodb for pagination before?
[16:31:06] <dman777_alter> I did a db.copyDatabase("torq_dev", "torq_dev", "10.14.11.121") which finished ok. However, the copied database was about 1 Gb less in size. Why is this?
[16:31:15] <Tyler_> Hey everyone! Any idea why findOne would work but not find()?
[16:31:59] <edrocks> dman777_alter: not 100% here but it may be that it only copies actual data instead of also copying the reserved space
[16:32:32] <edrocks> dman777_alter: since mongodb will reserve say 2gb as soon as you hit 1gb and when you hit 2gb it reserves 4gb until you use 4 and so on
[16:32:52] <edrocks> not 100% on that but it makes sense there's no need to copy 0s
[16:34:17] <cheeser> well, it can anyway. but copying a db will, in essence, defragment your storage on the new disk resulting in less space used.
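edrocks' doubling description matches MMAPv1 file preallocation (data files double from 64 MB per file up to a 2 GB cap), which, together with cheeser's defragmentation point, accounts for the size gap dman777_alter saw. A sketch of the reserved file sizes under that assumption:

```python
def mmapv1_file_sizes(data_bytes):
    """Sequence of data-file sizes MMAPv1 allocates to hold data_bytes:
    64 MB for the first file, doubling per file, capped at 2 GB each."""
    MB = 1024 * 1024
    size, total, out = 64 * MB, 0, []
    while total < data_bytes:
        out.append(size)
        total += size
        size = min(size * 2, 2048 * MB)
    return out

# ~1 GB of actual data still reserves 64+128+256+512+1024 MB (~2 GB)
# on disk -- roughly the kind of gap a fresh copyDatabase avoids.
```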
[16:35:55] <Dances> Hello, if I have documents that have two fields "computername" and "ip" but sometimes ip can be different. How do I get a list of distinct documents with both computername and ip in the result
[16:36:32] <Dances> I know there is $distinct but it only returns a single field.
[16:36:44] <Dances> And i've tried aggregate but can't seem to get exactly what I'm looking for
[16:38:22] <edrocks> what do you mean return them with both computername and ip? If you query for a document it returns the whole thing
[16:38:33] <edrocks> do you want to only return those 2 fields?
[16:39:21] <Dances> edrocks: No, id be happy with the entire document, but im trying to find a list of all documents that have distinct ips for the same computername
[16:39:38] <Dances> but there can be many documents with different computer names
[16:39:40] <edrocks> ook i understand now
[16:39:42] <Dances> yeah
[16:39:55] <edrocks> so you want all the ips for a computername essentially
[16:40:34] <Dances> yes, and i was able to do that with $addToSet, but the problem there is that I lose context on the information pertaining to those documents
[16:40:56] <edrocks> you might need to put either your computername or ip in a seperate doc and link to it instead
[16:41:15] <Dances> ah
[16:42:35] <dman777_alter> edrocks: thanks
[16:42:45] <edrocks> dman777_alter: you're welcome
[16:43:01] <edrocks> Dances: I'd draw a picture they usually help for things like that
[16:43:16] <Dances> edrocks: Yeah, thanks for the help! ill try that
[16:44:47] <Dances> I suppose also I could query everything and get the list of ips using program logic
[16:45:08] <Dances> probably bad for performance though
[16:45:12] <edrocks> yea
[16:45:36] <edrocks> can an ip have more then one computer name?
[16:45:59] <edrocks> o nevermind
[16:46:01] <cheeser> if by computer name you mean DNS entry, yes.
[16:46:08] <Dances> yeah
[16:46:25] <Dances> if say a machine got reassigned at some point
[16:47:26] <Dances> normally i really wouldn't care except this is supposed to be part of an audit functionality so its useful to know this information
[16:47:58] <Dances> hmmm
[16:48:17] <Dances> i think i can refactor this to have the same functionality without this headache..sorry thinking outloud
[16:50:00] <Dances> yes. yes i can.
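For anyone hitting the same wall: the aggregation framework can keep the context that `distinct` and `$addToSet` drop, e.g. a pipeline like `[{$group: {_id: {name: "$computername", ip: "$ip"}, docs: {$push: "$$ROOT"}}}]` (`$$ROOT` needs 2.6+). The grouping logic, simulated in Python over invented sample documents:

```python
from collections import defaultdict

docs = [
    {"computername": "web01", "ip": "10.0.0.5", "seen": "2015-01-01"},
    {"computername": "web01", "ip": "10.0.0.9", "seen": "2015-01-03"},  # reassigned
    {"computername": "db01",  "ip": "10.0.0.7", "seen": "2015-01-02"},
]

# $group by computername, $push-ing whole documents keeps their context.
by_name = defaultdict(list)
for d in docs:
    by_name[d["computername"]].append(d)

# Names seen with more than one distinct ip -- the audit case.
flagged = {name: ds for name, ds in by_name.items()
           if len({d["ip"] for d in ds}) > 1}
```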
[17:26:03] <sijojose> Hi all, how can we handle different types of users in MongoDB....? say for example in a college there can be different users like Professors, Students etc..
[17:52:32] <dman777_alter> when would I use elasticsearch-river-mongodb?
[18:25:49] <drags> I'm setting up sharding, playing around with a test cluster. I have a collection that is both sharded and not sharded:
[18:27:03] <drags> it appears in the config server in the collections collection, and appears on both shards, yet the data is not balancing and the non-primary shard thinks the collection is naturally empty
[18:27:42] <drags> I think it is due to the config server being inaccessible during the initial setup, though that wasn't discovered until a count on the collection had to timeout trying to contact the config server
[18:28:32] <drags> I want to unshard and reshard the collection (test data, does not matter), but it seems like unsharding collections is not easy. Is it easier to just blow away the config server and start over?
[18:30:08] <drags> there is about 8gb of data in there, so not a lack of chunks issue
[18:30:26] <drags> is it possible to view the shard key segmentation?
[18:33:27] <dman777_alter> How would I set up --replSet in the mongod.conf so the daemon starts with --replSet?
[18:36:39] <drags> dman777_alter: drop the -- from the argument:
[18:36:45] <drags> replSet=MyRepl
[18:36:52] <cheeser> dman777_alter: http://docs.mongodb.org/manual/reference/configuration-options/#config-file-format
[18:40:30] <dman777_alter> drags: hmm...I have replSet=rs0 in mongod.conf but rs.status() shows "not running with --replSet"
[18:43:16] <drags> dman777_alter: make sure you restart post that config change, also make sure your init scripts are definitely using /etc/mongod.conf as the config file
[18:44:02] <drags> there should be a '-f /etc/mongod.conf' argument hanging off the mongod process in your process table
[18:47:37] <dman777_alter> drags: ah...ok..thanks. It is using /etc/mongodb.conf.
[18:47:55] <drags> dman777_alter: ran into that problem before :)
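The gotcha drags and dman777_alter hit, spelled out: the option name in the old-style config file is the command-line flag minus its dashes, and it must live in whichever file the init script actually passes via `-f`. A sketch:

```
# /etc/mongodb.conf -- old "ini"-style options: same names as the
# command-line flags, without the leading dashes.
replSet = rs0

# The 2.6+ YAML equivalent would be:
#   replication:
#     replSetName: rs0
```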
[18:48:36] <drags> back to sharding fun for anyone reading along: I just sh.shardCollection'd another collection (now that all mongod/mongos/cfgservers can definitely talk to each other)
[18:48:50] <drags> added to sharding without trouble, but from the view of the mongos instance that collection has 0 records in it
[18:49:05] <drags> data only appears on the shard where it was mongorestore'd to
[18:49:27] <drags> anyone know of more ways to poke at this and why the mongos instance thinks these collections are empty?
[18:56:26] <skot> you must run your programs against mongos, not the individual shards. Talking directly to the shards is problematic and not recommended.
[18:56:52] <drags> skot: I'm just in setup and testing mode here, but as I said, from the view of the mongos instance my sharded collections have 0 documents
[18:57:13] <drags> skot: do I need to mongorestore against the mongos?
[18:57:18] <skot> yes
[18:57:20] <drags> ah
[18:57:35] <skot> Once they are shards, don't write directly to them.
[18:57:51] <skot> You *must* go through the mongos instance, which is why they are there.
[18:59:34] <drags> sorry, I was getting optimistically confused after reading about converting existing RS to sharded clusters
[18:59:46] <drags> will start over and do my restores via mongos, thank you
[19:05:21] <dman777_alter> would it be bad to manually feed rs.initiate(rsconfig); rsconfig = {"_id":"rs1","members":[{"_id":1,"host":"127.0.0.1"}]}; to fix "errmsg" : "couldn't initiate : can't find self in the replset config"?
[19:09:39] <dman777_alter> nm...got it
[19:11:16] <Dances> What is the correct way to store a date from python into mongo? I'm passing a datetime.now() object but it seems that it's getting stored as if it were UTC without actually being converted to UTC
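On Dances' timezone question: PyMongo stores whatever datetime it is given and BSON dates are implicitly UTC, so a naive `datetime.now()` (local wall-clock time) comes back mislabeled as UTC. The usual fix is to hand the driver an aware UTC datetime in the first place; a sketch (the `insert` call in the comment is illustrative and needs a live server):

```python
import datetime

# datetime.now() with no argument is naive local time; stored verbatim,
# it is later read back as if it were UTC. Pass an aware UTC value instead:
now_utc = datetime.datetime.now(datetime.timezone.utc)
assert now_utc.utcoffset() == datetime.timedelta(0)
# collection.insert({"created": now_utc})  # stored and read back consistently
```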
[21:02:10] <julienrbt> ah well, too bad, I've got an Ubuntu Touch to buy
[21:02:45] <julienrbt> maybe it'll drop below the $100 mark :D (then I won't hesitate any more xD)
[21:03:44] <julienrbt> ops
[21:04:09] <julienrbt> sry
[21:38:34] <edrocks_> is there any way to get an array of elements from an array of ids?
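edrocks_'s question maps to a `$in` query, `db.items.find({_id: {$in: ids}})`, with one caveat: results are not returned in the order of the id array, so callers usually re-sort client-side. The logic, simulated in Python over invented documents:

```python
docs = [{"_id": 3, "v": "c"}, {"_id": 1, "v": "a"}, {"_id": 2, "v": "b"}]
ids = [2, 3]

# find({_id: {$in: ids}}): a membership test, result order not guaranteed...
found = [d for d in docs if d["_id"] in ids]

# ...so restore the requested order client-side:
by_id = {d["_id"]: d for d in found}
ordered = [by_id[i] for i in ids if i in by_id]
```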
[22:52:36] <jumpman> hey... i'm struggling to build a fairly complicated component of this project I'm working on because I'm not sure really how I should store data the most efficiently
[22:52:57] <jumpman> a join would be quite useful xD
[22:53:09] <jumpman> http://i.imgur.com/Xq6iODm.png <-- this is a screenshot of the wireframe i'm building off of
[22:53:25] <jumpman> table & filters & everything is 100% done. I just need to build the actual activity log now.
[22:54:07] <jumpman> Originally, (since this project is Meteor (handlebars ish)), I was thinking that something like this: http://pastie.org/9819132
[22:54:32] <jumpman> would be a really succinct way to store the data. The first tier, 'Permissions' is the category. maybe enum of the categories on the right
[22:54:47] <jumpman> but then you see the problem. the links!
[22:55:10] <jumpman> in order to generate a link, I need the (ie) User's name, the User's slug/id, and the rest of the object
[22:56:08] <jumpman> but I can't really do it like I wanted to here, because storing html objects in the db is silly and with links it's especially silly. would produce links that 404, the routing couldn't change, etc
[22:56:49] <jumpman> but I really don't want to search the DB for each row of the table in order to take JUST an ID without the other info and getting it for JUST the links
[23:15:16] <jumpman> maybe i could establish some language and then store the IDs of everyone and a small extra data layer for purely rendering text