PMXBOT Log file Viewer


#mongodb logs for Thursday the 31st of January, 2013

[00:47:22] <tracker1> I'm bumping up against a limit on the number of indexes I can have in my collection... if I only do indexes in one direction, but say the sort is the other direction, will it use the index?
[00:47:48] <tracker1> "too much data for sort() with no index. add an index or specify a smaller limit"
[00:50:45] <tracker1> I already have a compound index on the fields in the {find} and {sort} combined
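
[editor's note] A small shell sketch of the direction issue tracker1 is hitting (collection and field names are made up). An index can be walked in reverse, so a fully inverted sort still uses it; a mixed-direction sort on a compound index does not, which is what triggers the in-memory sort error above:

    db.items.ensureIndex({ category: 1, created: 1 });

    db.items.find().sort({ category: -1, created: -1 }); // full reverse: index is used
    db.items.find().sort({ category: 1, created: -1 });  // mixed directions: index not used,
                                                         // can hit "too much data for sort()"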
[00:58:45] <oinkon> can you index a collection based on fields inside an object??
[01:02:25] <oinkon> oinkon, yes you can, rtfm :)
[01:12:10] <etraveler> Hey guys - any mongoose experts?
[01:18:24] <tracker1> etraveler, you may have better luck in #Node.js
[01:18:48] <etraveler> ya, I am in there too. thanks tracker1
[01:19:15] <tracker1> np.
[02:22:19] <sg> can i evaluate a regex as true/false in a $project?
[02:25:58] <arnpro> Hey guys, I'm trying to use the mongodb cli to test out some commands, but when pasting them into it, the console either stays on "..." or just says "Display all 144 possibilities? (y or n)". I am sure my command is well formed, so I don't know what could be going on. I can paste it if needed
[02:55:30] <sg> anyone?
[03:01:12] <JoeyJoeJo> How can I kill a .remove()?
[04:19:01] <arnpro> is there a CASE WHEN equivalent in mongodb? ... for range queries, for example a 'field' that is greater than or equal to 0 and less than or equal to 10 = 1, greater than or equal to 11 and less than or equal to 20 = 2, and so on...
[04:19:44] <arnpro> How can I nest them $cond-itions in 1 field ?
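
[editor's note] A hedged sketch of what arnpro is asking about: nesting $cond (in its array form) inside $project to bucket a numeric field. Collection and field names are assumptions:

    db.stats.aggregate([
        { $project: {
            bucket: {
                $cond: [ { $lte: ["$field", 10] }, 1,
                         { $cond: [ { $lte: ["$field", 20] }, 2,
                                    3 ] } ]   // nest further $cond's for more ranges
            }
        }}
    ]);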
[04:35:35] <oinkon> how do i sanitize sub-document keys?
[04:37:14] <oinkon> nevermind i shouldn't really be using user input for keys
[05:16:27] <oinkon> would removing dots(.) from a subkey make it safe to use?
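
[editor's note] oinkon's follow-up touches the actual constraints: document keys may not contain '.' and may not begin with '$'. A rough client-side sanitizer sketch (entirely hypothetical, and lossy: "a.b" and "a_b" collide unless you encode the replacement):

    function sanitizeKey(key) {
        // strip the two forbidden constructs in BSON keys
        return key.replace(/\./g, "_").replace(/^\$/, "_");
    }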
[05:34:33] <zaki_> Hey, anyone used GridFS on Amazon S3 ??
[06:00:24] <chovy> anyone use mongohub?
[08:35:32] <HardFu> Hey, this is my first time with aggregation framework
[08:35:46] <HardFu> I have a collection 'page_views'
[08:36:21] <HardFu> it includes full URL, timestamp, and useragent
[08:36:49] <HardFu> timestamp is ISODate
[08:36:56] <HardFu> now I'd like to create hourly reports
[08:37:03] <HardFu> but don't even know how to start :/
[08:38:17] <HardFu> I'm thinking I need to $project the date to split it into hours?
[08:39:14] <HardFu> then group result by those hours?
[08:39:21] <[AD]Turbo> hola
[10:51:18] <HardFu> can I form a date in aggregation process?
[10:51:41] <HardFu> I have a date field, now I'm grouping per hour, is it possible to "extract" the date with hour precision?
[10:53:12] <NodeX> http://docs.mongodb.org/manual/reference/aggregation/#date-operators
[10:59:38] <HardFu> I've read this
[10:59:43] <HardFu> I'm a bit confused :/
[10:59:53] <HardFu> I have a date field
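
[editor's note] A sketch of HardFu's hourly report using the date operators NodeX linked. The 'page_views' collection and 'timestamp' field come from the chat; everything else is assumed. Projecting the date parts down to hour precision and grouping on them:

    db.page_views.aggregate([
        { $project: {
            ymdh: {
                y: { $year: "$timestamp" },
                m: { $month: "$timestamp" },
                d: { $dayOfMonth: "$timestamp" },
                h: { $hour: "$timestamp" }
            }
        }},
        { $group: { _id: "$ymdh", views: { $sum: 1 } } }  // one bucket per hour
    ]);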
[10:59:57] <mortah> hey, I've just brought on another secondary and I'm letting it do a sync over the network. If I do rs.status() it doesn't tell me which member it's syncing from... any ideas where this disappeared? (2.2.2)
[12:57:57] <Guest22427> hello
[12:58:08] <Guest22427> I'm new to MongoDB and I have a question.
[12:58:21] <Guest22427> I'd like to add a single subdocument to my schema, not an array
[12:59:11] <Guest22427> like the StatsSchema in this paste: http://pastie.org/5985682
[12:59:15] <Guest22427> is that possible?
[13:01:49] <kali> Guest22427: this is a mongoose question, not really a mongodb-specific question, so i'm not sure you'll find much help here (not that i know of a better place to point you to, but...)
[13:02:14] <Guest22427> ok, thanks
[13:02:24] <Guest22427> so I take it this is not intrinsically a MongoDB issue
[13:03:18] <kali> yeah, mongodb is more or less schemaless, the schema is a mongoose concept
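
[editor's note] A guess at what Guest22427 is after (the pastie is gone), in mongoose of that era: declare the subdocument as a nested object literal rather than as an array of sub-schemas. Names are made up:

    var mongoose = require('mongoose');
    var Schema = mongoose.Schema;

    var PlayerSchema = new Schema({
        name: String,
        stats: {              // single nested object, not [StatsSchema]
            wins: Number,
            losses: Number
        }
    });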
[13:03:58] <Guest22427> ok, thanks
[13:05:53] <kali> "schemaless" is disturbingly close to something else
[13:49:04] <Zariel> Is it possible to access the whole document when doing aggregations?
[13:51:51] <Zariel> so instead of doing something like, { $addToSet: { fields.. } } can you add the WHOLE document when doing a group?
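
[editor's note] At the time of this log (server 2.2) there was no clean way to do this; later servers (2.6+) added the $$ROOT system variable, which is exactly what Zariel describes. A sketch with assumed names:

    db.coll.aggregate([
        { $group: { _id: "$key", docs: { $addToSet: "$$ROOT" } } }  // whole documents
    ]);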
[14:05:15] <chrisq> is there a way to get the time it took to make a query in mongo shell
[14:06:18] <kali> chrisq: search the doc for profiling
[14:07:18] <chrisq> kali: thanks
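
[editor's note] Two common shell-side options for chrisq's question (a sketch; the threshold and query are arbitrary):

    db.setProfilingLevel(1, 100);                       // profile ops slower than 100 ms
    db.system.profile.find().sort({ ts: -1 }).limit(1); // inspect the latest entry

    db.coll.find({ a: 1 }).explain().millis;            // or read timing off explain()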
[14:38:04] <NodeX> Derick : is MongoCollection::listCollections() broken currently?
[14:38:10] <NodeX> https://gist.github.com/4683248 <---- bad output
[14:38:41] <Derick> what's wrong with it?
[14:39:09] <NodeX> check the gist
[14:39:15] <Derick> I did
[14:39:20] <NodeX> it's meant to list the collections for the db and it doesn't
[14:39:28] <Derick> yes it does
[14:39:32] <Derick> public array MongoDB::listCollections ([ bool $includeSystemCollections = false ] )
[14:39:35] <Derick> Gets a list of all the collections in the database and returns them as an array of MongoCollection objects.
[14:39:37] <nb-ben> could I compose a mongodb cluster of 3 servers total?
[14:39:51] <Derick> nb-ben: yes - but what are you wanting to do?
[14:40:03] <Derick> NodeX: you want http://www.php.net/manual/en/mongodb.getcollectionnames.php perhaps?
[14:40:07] <nb-ben> I just want to have a scalable webhosting environment for my projects
[14:40:08] <NodeX> "Returns an array of MongoCollections. "
[14:40:13] <nb-ben> that has the same data in the database
[14:40:28] <NodeX> quite possibly i do then, my mistake, I assumed this would return the array
[14:40:45] <Derick> NodeX: yes, and it does - it returns an array, but each element is a MongoCollection *object*, not the name
[14:41:06] <Derick> The docs have some typoes, I'll go fix.
[14:42:34] <NodeX> they're a little misleading/confusing, which is why I assumed it would do what "getcollectionnames" does
[14:42:38] <NodeX> :D
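
[editor's note] For anyone following along in the mongo shell rather than the PHP driver, the analogous pair looks like this (a sketch; "users" is a made-up name):

    db.getCollectionNames();     // array of collection name strings
    db.getCollection("users");   // a collection object for one name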
[14:48:36] <nb-ben> could I make an unsharded cluster?
[14:48:43] <nb-ben> or maybe a cluster with a single shard?
[14:49:46] <nb-ben> the only thing I want is that all data is accessible on all nodes
[14:49:47] <NodeX> do you want write or read scalability?
[14:49:58] <nb-ben> mostly read
[14:50:05] <NodeX> you'll probably want replica sets then
[14:50:32] <nb-ben> so it's just distributing all data to all nodes, saving all data on every node locally I assume?
[14:50:39] <NodeX> correct
[14:50:51] <NodeX> master>slave
[14:51:05] <NodeX> or "secondaries" as they're known in MongoDB
[14:51:09] <nb-ben> I see
[14:51:25] <nb-ben> I was really intrigued by the idea of having it all distributed with no master tho :(
[14:51:50] <NodeX> where did you get that idea from?
[14:52:10] <nb-ben> if the master is dead, do the slaves fail?
[14:52:19] <NodeX> no, they re-elect a new master
[14:52:27] <nb-ben> oh, that's fine then
[14:52:42] <NodeX> then it resyncs when the old master gets back online
[14:53:09] <nb-ben> I suppose I could write to slaves too right? not just read from slaves
[14:53:42] <NodeX> no
[14:53:56] <nb-ben> have to do writes to master and master only?
[14:53:57] <NodeX> well you can directly but it won't sync to the rest
[14:54:25] <nb-ben> I see
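
[editor's note] A minimal sketch of the 3-member replica set NodeX is suggesting (hostnames are hypothetical):

    rs.initiate({
        _id: "rs0",
        members: [
            { _id: 0, host: "dc1.example.com:27017" },
            { _id: 1, host: "dc2.example.com:27017" },
            { _id: 2, host: "dc3.example.com:27017" }
        ]
    });
    rs.status();   // writes go to whichever member is currently primary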
[14:59:11] <JoeyJoeJo> I did db.dropDatabase() and it returned "ok", but when I do "show dbs" that database still shows up and says it's using 163GB. How can I get rid of it completely?
[15:17:03] <pringlescan> Mongo is crazy… when I do explain some of my queries return in less than a MS… is that possible?
[15:17:45] <kali> yes.
[15:21:12] <NodeX> when I do it my queries actually happen before I perform them
[15:21:18] <NodeX> now that's efficiency
[15:24:45] <JoeyJoeJo> I have 4 shards, each with 256GB of ram and the mongod process on each of them is taking up 200GB+, and insertion performance is getting pretty bad. Is it normal for this to happen? My db is 227GB and 227M documents
[15:39:31] <jiffe98> if I have a config server drop, can I reconstruct it by copying the data directory from another config server?
[16:36:16] <Derick> probably the most time I've ever spent on a SO question: http://stackoverflow.com/questions/14626653/how-can-two-amazon-ec2-instances-share-the-same-data
[16:58:45] <Black_Phoenix> Is there any way I could import files into GridFS and have the filenames converted to UTF-8 automatically
[17:11:25] <E1ven> I'm having an issue (on 2.2.2) where mongos can't connect, on one of my QA servers; I've tried restarting it, as well as the mongod instances, but it's being stubborn. The other mongos servers can connect. Watching the log, it looks like it's acquiring locks somewhat frequently, but otherwise pretty normal. I've tried running it in -vvvv mode. Is there anything else I can/should do to try to diag?
[17:15:29] <AlecTaylor> hi
[17:15:35] <geekie> Hai ;D
[17:16:31] <AlecTaylor> I am porting a MongoDB database to a relational syntax (for an abstraction layer). Is there an automated MongoDB to MySQL converter?
[17:16:46] <AlecTaylor> (because then I can just convert that programmatically to my relational syntax)
[17:20:30] <AlecTaylor> Mmm, NVM; not that much more code to get through before it's fully reverse engineered by hand
[17:30:08] <Killerguy> hi all
[17:30:16] <Killerguy> I have a problem with a shard on my cluster
[17:30:23] <Killerguy> I removed it and added it again
[17:30:26] <Killerguy> but I have this error :
[17:30:27] <Killerguy> Thu Jan 31 18:28:45 [Balancer] balancer move failed: { errmsg: "exception: field not found, expected type 2", code: 13111, ok: 0.0 } from: shard12 to: shard16
[17:30:36] <Killerguy> wtf? :/
[17:47:10] <arnpro> is there a channel support for mongodb php ?
[17:49:49] <bjori> nah, but feel free to ask here arnpro
[17:51:09] <arnpro> alright, so I'm doing a db.runCommand() with the aggregate function and the pipeline option. However, I need to check if it returned any data and display it. I have been looking for methods of iterating over that set of data, but can't find any
[17:51:35] <arnpro> I have tried foreach, even var_dump(), nothing. So how can I print the docs returned by aggregate() one by one?
[17:54:34] <bjori> http://php.net/mongocollection.aggregate
[17:54:53] <bjori> foreach ($coll->aggregate($pipeline)["result"] as $document) { var_dump($document); } ? :)
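
[editor's note] The shell equivalent of what arnpro describes: the aggregate command's response carries the documents in its "result" array (collection and pipeline here are made up):

    var res = db.runCommand({
        aggregate: "orders",
        pipeline: [ { $match: { status: "A" } } ]
    });
    if (res.ok && res.result.length > 0) {
        res.result.forEach(printjson);   // print the returned docs one by one
    }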
[19:55:24] <nb-ben> In a replica set, say I have 3 nodes on 3 different servers which also serve HTTP at 3 different data centers. Do I need to make a connection to the remote master node when using the two slave servers in order to write information?
[19:56:03] <nb-ben> or can I simply connect to the local mongod instance and it will asynchronously take care of the update?
[20:04:19] <kali> nb-ben: you need to connect to the master
[20:04:54] <kali> nb-ben: if you open the connection with a slave, the driver will also open a connection to the master
[20:05:10] <nb-ben> ah I see
[20:05:32] <nb-ben> so as far as the application layer goes, I can just connect to the local mongod, but behind the scenes a connection will be made to the master
[20:06:02] <nb-ben> or am I off the track
[20:06:16] <kali> yes
[20:07:19] <nb-ben> and as far as reads go, will they be processed completely locally? (given full replication)
[20:07:52] <nb-ben> sounds dumb if they won't :D then there's no point to replication except for backup
[20:14:55] <kali> nb-ben: http://docs.mongodb.org/manual/applications/replication/#read-preference-modes
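
[editor's note] A shell-side sketch of the 2.2-era read-preference behavior kali links to (collection name assumed):

    rs.slaveOk();                          // permit reads on a secondary from the shell
    db.stuff.find().readPref("nearest");   // route this cursor by read preference
    // writes, by contrast, always go to the primary; drivers route them there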
[20:18:00] <nb-ben> kali, you've given me a lot of help and saved me a lot of time. thank you
[20:27:57] <michaeldausmann> hello
[20:40:56] <jiffe98> is there a quick way to perform this kind of operation in mongodb? update test set c = concat('INBOX2.', substr(c, 7, length(c) - 6)) where c like 'INBOX.%';
[20:44:06] <kali> jiffe98: nope
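
[editor's note] The usual workaround is to do the string surgery client-side, one document at a time. A sketch of jiffe98's SQL in shell terms:

    db.test.find({ c: /^INBOX\./ }).forEach(function (doc) {
        db.test.update(
            { _id: doc._id },
            { $set: { c: "INBOX2." + doc.c.substring(6) } }   // "INBOX." is 6 chars
        );
    });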
[21:07:02] <totic> Let's say I have a crawler that caches in mongo all the domains of pages it has visited. What would be the best way to get that as a set?
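
[editor's note] If each page document carries its domain in a field, distinct() gives the set directly (collection and field names are assumptions):

    db.pages.distinct("domain");   // array of unique domains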
[21:11:28] <shadfc> i feel like i'm getting inconsistent results with read preferences. Does this make sense to anyone? http://dpaste.org/OkyOm/
[21:12:27] <shadfc> actually, http://dpaste.org/GAsZg/ shows a little more weirdness
[21:13:35] <kali> looks eventually consistent to me :)
[21:14:54] <kali> shadfc: without knowing what else is happening on the collection, it's hard to know if this is correct or not
[21:15:22] <shadfc> how so? i query without a read preference and get no results (should be querying the primary, i would assume). Then I specify the primary/secondary and get results. Then the real problem is that the first queries give me _ids which the later queries do not find
[21:16:13] <kali> shadfc: are you sure there is not another process altering the collection ?
[21:17:33] <shadfc> kali: not removing those objects. I've been working with the same ones for the last hour without problem. here is a new session i just recorded which shows the queries that do return results at either end of the ones that dont -- http://dpaste.org/UoXzc/
[21:17:56] <kali> shadfc: just to make sure... this is not a capped collection, right ?
[21:18:02] <shadfc> nope
[21:18:06] <shadfc> sharded
[21:21:07] <shadfc> kali: http://dpaste.org/5gHEK/
[21:21:46] <kali> shadfc: your replica sets are in good health ?
[21:22:07] <stefan41> how can i modify an index to be sparse (it is already unique, and now having it not sparse is causing me trouble)
[21:22:24] <kali> stefan41: i think you need to drop it
[21:22:59] <stefan41> can i drop the index, or do i have to drop the whole collection?
[21:23:06] <kali> the index :)
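
[editor's note] What kali describes, as a sketch (index and field names assumed): drop the index, then recreate it with the extra option:

    db.coll.dropIndex({ email: 1 });
    db.coll.ensureIndex({ email: 1 }, { unique: true, sparse: true });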
[21:23:21] <shadfc> kali: yeah. But that last paste shows that the same server gives different results n=0 vs n=2
[21:24:36] <kali> shadfc: try querying the shards themselves, replica per replica
[21:25:07] <kali> well, i mean the three replica of graphdb-b-3
[21:29:01] <shadfc> kali: we're investigating the hypothesis that mongos has incorrect shard locations for some chunks. _id is our shard key
[21:29:20] <kali> shadfc: iiirk
[21:29:37] <shadfc> so the faid/url query hits all shards and gets some results, while the _id query goes to a specific (wrong) server and gets nothing
[21:29:39] <shadfc> yeah
[21:34:25] <shadfc> kali: that is it. apparently mongos thinks those records should belong on the 4th shard
[21:35:09] <kali> :/
[21:35:45] <shadfc> we had some balancer migrations that got killed a while back. it's probable that the objects existed in two places, then got removed from the "correct" one. now when we run queries that hit all shards, we still get results
[21:36:02] <shadfc> guess i'm writing something to clean that data up
[21:44:54] <JoeyJoeJo> How can I kill a .remove() command?
[21:47:45] <JoeyJoeJo> Also, how can I tell what is currently holding the lock?
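
[editor's note] A sketch covering both of JoeyJoeJo's questions with the shell helpers for this (the opid is a placeholder):

    db.currentOp();     // lists in-progress ops, with opids and lock info
    db.killOp(12345);   // kill one operation by the opid reported above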
[22:45:24] <diamonds> what the heck http://pastie.org/pastes/5998821/text
[22:46:02] <diamonds> I put {a:1} in a mongo collection and now it's using almost 200M on disk?
[22:54:52] <IAD> diamonds: it's reserved for future use
[23:03:43] <diamonds> IAD, seems a bit excessive, no?
[23:03:58] <diamonds> reserving 200MB when I write 1 document with 1 key?
[23:04:38] <IAD> diamonds: you can decrease it
[23:05:26] <diamonds> # nssize = <size>
[23:05:31] <diamonds> ^this?
[23:05:48] <diamonds> I just disabled journaling (local learning system)
[23:11:20] <IAD> diamonds: mongodb.conf: smallfiles=true
[23:11:21] <diamonds> ty
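
[editor's note] A mongodb.conf sketch for a local learning setup like diamonds describes (not production advice):

    smallfiles = true    # much smaller initial and preallocated data files
    noprealloc = true    # don't preallocate the next data file ahead of time
    nojournal = true     # diamonds already disabled journaling locally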
[23:59:33] <diamonds> this upsert behavior is a bit shocking...
[23:59:59] <diamonds> "if no document matches the criteria, insert a new document with the fields and values of the update parameter, and, if the update included only update operators, the fields of the query parameter as well."
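
[editor's note] A minimal demonstration of the behavior diamonds is quoting (collection and fields are made up): nothing matches {name: "x"}, and the update uses only operators, so the inserted document gets the query's fields merged in:

    db.things.update({ name: "x" }, { $set: { count: 1 } }, { upsert: true });
    db.things.findOne({ name: "x" });   // { _id: ..., name: "x", count: 1 }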