PMXBOT Log file Viewer


#mongodb logs for Wednesday the 12th of March, 2014

[00:46:40] <hoverbear> How can I atomically push to an array inside an array in Mongo?
[00:51:42] <tomas> hi I'm very new to the nosql world, can anyone please tell me the proper way of referencing a linked document from the document I've queried? For example I've got: http://pastebin.com/ARzsTd7A
[00:52:16] <tomas> how do I get my note author from my note?
[00:55:50] <joannac> hoverbear: do you know the index of the first array?
[00:55:59] <hoverbear> @joannac: I can find out.
[00:56:44] <joannac> hoverbear: then db.coll.update({}, {$push: {"arr.X.arr2": Y}})
[00:56:52] <joannac> where X is the index, and Y is the thing you want to push
[00:57:09] <hoverbear> joannac: Oh ok interesting, I'll try that.
[00:57:28] <joannac> tomas: your note doesn't have a link to the author?
[00:57:52] <joannac> tomas: to get the note from the author document, you would need to do another find with the objectid in your author document
[00:59:46] <tomas> joannac: can I get author from my note?
[01:00:00] <tomas> and thanks for having a look
[01:01:05] <joannac> tomas: I guess in theory, db.author.find({note_id: NOTEID})
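joannac's answer amounts to a manual "join" done in application code: MongoDB has no server-side joins, so you read the stored reference off one document and issue a second find. A minimal in-memory sketch of that pattern (collection contents and field names here are hypothetical):

```javascript
// Two "collections" as plain arrays; the note holds a reference to its author.
var authors = [
  { _id: "a1", name: "Ann" },
  { _id: "a2", name: "Bob" }
];
var note = { _id: "n1", text: "hello", author_id: "a2" };

// The in-memory equivalent of a second query, db.author.find({_id: note.author_id}):
// read the reference off the note, then look it up in the other collection.
function findAuthorForNote(note, authors) {
  return authors.filter(function (a) { return a._id === note.author_id; })[0];
}
```

With a real driver the second step would be another `find()` round-trip, which is why embedding or denormalizing is often preferred when the two documents are always read together.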
[01:01:17] <hoverbear> joannac: I need to interpolate X programmatically, but JS doesn't like logic in keys
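The "logic in keys" problem hoverbear hits is just that (pre-ES2015) JavaScript object literals cannot compute a key, so the update document has to be built imperatively. A sketch using joannac's X/Y placeholders, with `arr` and `arr2` as the nested array names from her example:

```javascript
// Build {$push: {"arr.X.arr2": Y}} with a computed index, since an object
// literal key cannot contain an expression in pre-ES2015 JavaScript.
function buildNestedPush(index, value) {
  var update = { $push: {} };
  update.$push["arr." + index + ".arr2"] = value;
  return update;
}
// buildNestedPush(3, 7) -> { $push: { "arr.3.arr2": 7 } }
```

The resulting object is what you would pass as the second argument to `db.coll.update({}, ...)`.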
[01:04:11] <joannac> hoverbear: I wonder if you can do it with the positional operator instead...
[01:27:43] <hoverbear> joannac: Either that or if I could iterate over to match some value, there's less then 12 of the X
[01:28:32] <tzu_chi> awdd
[01:29:05] <hoverbear> joannac: I'm basically having a data race issue, I need some way of detecting when my values are stale.
[01:29:20] <hoverbear> joannac: I want to use the atomic updates.
[01:29:46] <hoverbear> Instead of get data, edit data, save data.
[03:41:32] <l_s> after importing a csv file using mongoimport, any operations i do with the collection will return a syntax error, even with db.collection.drop(). what could be the problem?
[03:45:13] <l_s> the exact message is SyntaxError: Unexpected token Illegal
[03:53:02] <joannac> operations in the shell?
[03:53:26] <l_s> joannac, yes in mongo shell
[03:54:16] <l_s> i am reading the doc, to see if there is an privilege thing i have to setup
[03:54:58] <joannac> I'd say it's more likely to be something like this http://stackoverflow.com/questions/12719859/syntaxerror-unexpected-token-illegal
[03:58:47] <l_s> joannac, it really looks like it, I have many characters like @ and # in my csv
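One way characters like `@` and `#` bite in the shell is dotted property access: they are not legal in a JavaScript identifier, so `db.some#name` cannot parse (older V8 reported exactly "Unexpected token ILLEGAL" for this). Bracket notation, or `db.getCollection("name")` in the mongo shell, sidesteps it. A plain-JS sketch with a made-up name:

```javascript
// A stand-in object whose key contains characters illegal in an identifier.
var db = { "prices@2014#csv": { count: 3 } };

// db.prices@2014#csv.count   <- SyntaxError in any JS parser
var n = db["prices@2014#csv"].count; // bracket notation works fine
```

This only covers shell syntax; garbage bytes inside a pasted script or imported file can produce the same error message for different reasons.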
[04:07:06] <joannac> did the mongoimport complete successfully?
[04:22:32] <hoverbear> joannac: I can't find any mention of a $pos operator. Is there a way to make it check each item in the array?
[04:30:21] <joannac> hoverbear: a normal search does anyway?
[04:30:30] <hoverbear> joannac: Oh, that might work then.
[04:31:07] <joannac> I'm not sure if that's still true for arrays inside arrays
[04:37:40] <hoverbear> joannac: Normal search works, now I need to push onto the nested array. :S
[05:34:59] <hoverbear> Ugh, "mod on _id is not allowed"
[06:07:44] <Mothership> hey
[06:10:58] <Mothership> is there any place i can download precompiled mongodb for arm architecture? don't have to be latest
[06:14:50] <hoverbear> Mothership: No ARM for Mongo iirc
[06:15:49] <Mothership> :(
[08:06:18] <nosqlfreak> hello
[08:06:27] <Zelest> heya
[08:11:16] <nosqlfreak> how much data can gridfs handle?
[08:12:13] <Zelest> per file? or what do you mean?
[08:15:11] <ron__> hi guys...
[08:15:32] <ron__> I have a weird issue searching based on date in the mongo console and a java program
[08:17:02] <ron__> I'm storing the date in a field named "createdDate" as a String in a collection
[08:17:52] <ron__> when I try to retrieve the records matching the date range (createdDate $gte), it works fine except when the date to check is a Sunday
[08:20:18] <ron__> for example db.record.find({"createdDate":{"$gte":"Fri Feb 14 16:25:36 GMT 2014"}}) ---> this works
[08:20:42] <ron__> db.record.find({"createdDate":{"$gte":"Sun Feb 16 16:25:36 GMT 2014"}}) ---> this does not work
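ron__'s symptom is characteristic of dates stored as strings: string `$gte` compares lexicographically, so "Sun Feb 16 ..." sorts by the letters S-u-n, not by the calendar, and a chronologically later date can compare as smaller. A sketch (dates chosen for illustration):

```javascript
var sunFeb16 = "Sun Feb 16 16:25:36 GMT 2014";
var tueJan07 = "Tue Jan 07 16:25:36 GMT 2014"; // chronologically EARLIER

// Lexicographic comparison says Feb 16 comes first, because "S" < "T":
var lexicographicSaysEarlier = sunFeb16 < tueJan07; // true, which is wrong

// Real Date values (ISODate in the shell), or ISO-8601 strings, compare
// correctly because their text order matches their time order:
var chronological = new Date("2014-02-16") > new Date("2014-01-07"); // true
```

Storing `createdDate` as a proper date type (or at minimum an ISO-8601 string) makes range queries behave on every day of the week.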
[09:16:42] <Diplomat> Hey guys.. somehow my query doesnt work when i have
[09:16:50] <Diplomat> Array ( [project_id] => 12 [site_id] => 10 [click_time] => Array ( [$gt] => 1393970400 [$lt] => 1394661599 ) )
[09:16:53] <Diplomat> but it works when i have
[09:16:58] <Diplomat> Array ( [project_id] => 12 [site_id] => 10 [click_time] => Array ( [$gt] => 1393970400 )
[09:17:17] <Diplomat> Any ideas why it's so?
[09:20:51] <Diplomat> nvm :(
[09:20:58] <Diplomat> i didnt cast it correctly again
[09:34:28] <Nodex> lol
[09:35:13] <Zelest> Diplomat, i blame the chris hansen discussion in the other channel :P
[09:36:33] <Nodex> best to just cast everything in PHP, whether you're using mongodb or not
[09:37:14] <Zelest> === is your friend.
[09:38:24] <Nodex> also in your query, if you're adding an index for it, make sure "click_time" is the last part of the index as it's a range
[09:39:05] <Nodex> and I think you gain some APC performance when casting as it doesn't have to check things all the time
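Nodex and Zelest's PHP casting advice has a direct JavaScript analogue, sketched here since the same bug bites shell queries too: values arriving from a form or query string are strings, and loose equality hides the type mismatch that a typed field like `click_time` will not forgive.

```javascript
// What the application receives from a request is a string, not a number.
var fromInput = "1393970400";

console.log(fromInput == 1393970400);          // true  - loose, coerces types
console.log(fromInput === 1393970400);         // false - different types
console.log(Number(fromInput) === 1393970400); // true  - cast first, then compare
```

A query for `{click_time: "1393970400"}` matches nothing if the stored values are integers, which is exactly the class of bug Diplomat kept hitting.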
[10:09:00] <Diplomat> guys
[10:09:09] <Diplomat> does findOne take the first or the last row?
[10:09:21] <Diplomat> lol yes Zelest
[10:09:36] <Zelest> depends what you sort on
[10:09:47] <Zelest> and depends what you find on
[10:09:47] <Diplomat> click_time is -1
[10:09:51] <Diplomat> :D
[10:09:56] <Diplomat> that's my only index i think
[10:10:08] <Diplomat> i queried like
[10:10:25] <Diplomat> SELECT * FROM visitor_click WHERE project_id=4 AND visitor_id="blabla"
[10:10:56] <Zelest> ...LIMIT 1;
[10:10:59] <Zelest> seeing you use findOne
[10:11:03] <Diplomat> yes
[10:11:20] <Diplomat> http://puu.sh/7soI2.png
[10:11:57] <Zelest> http://www.querymongo.com ;)
[10:14:12] <Nodex> findOne() uses $natural iirc which is disk order
[10:14:36] <Nodex> if you want a guaranteed latest document you must do find({...}).limit(1).sort({_id:-1});
[10:14:51] <Diplomat> yeah it takes the first one
[10:15:03] <Nodex> no it takes disk order
[10:15:33] <Diplomat> well yes, in my case it's the first one
[10:16:10] <Nodex> but it might not always be the one you want
[10:17:14] <Diplomat> yup
[10:17:35] <Nodex> so again, to guarantee it just sort on _id -1 || 1
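What `find({...}).sort({_id:-1}).limit(1)` buys over a bare `findOne()` can be modelled on a plain array: pick the greatest `_id` explicitly instead of trusting `$natural` (disk) order. Numeric `_id`s are used for the sketch; real ObjectIds sort by their embedded timestamp plus tie-breaker bytes.

```javascript
// Return the document with the greatest _id, i.e. the guaranteed-latest one.
function latestById(docs) {
  return docs.slice().sort(function (a, b) {
    return b._id - a._id; // descending, like sort({_id: -1})
  })[0];                  // like limit(1)
}
// latestById([{_id: 1}, {_id: 3}, {_id: 2}]) -> {_id: 3}
```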
[10:29:41] <Diplomat> for some weird reason it doesnt return anything again
[10:30:02] <Diplomat> only MongoCursor Object ( )
[10:39:42] <Nodex> cursor needs to be iterated
[10:41:19] <Nodex> personally I use return iterator_to_array($cursor, true);
[10:51:44] <Diplomat> it worked, thank you :)
[10:53:40] <abishekk92> Hi, one of our secondaries stopped replaying the oplog because its disk space was maxed out. So I added more space to the partition containing the data directory, and db.printSlaveReplicationInfo() on the primary says it was synced 1 sec ago; however, the new data doesn't appear on the secondary. Any idea why this is happening?
[10:54:16] <abishekk92> Should not the secondary replay the oplogs?
[11:06:39] <csrgxtu> hi, i am using Java to query mongodb; how can i get a field value
[11:07:11] <csrgxtu> i mean, when i use the DBCollection find method, it returns a DBCursor object
[11:11:06] <kali> csrgxtu: next()
[11:11:12] <kali> or iterator()
[11:14:42] <csrgxtu> well, i use cursor.next() and the next will return u a DBCursor object
[11:14:57] <csrgxtu> but i want to access the field in this record
[11:15:23] <kali> next() returns a DBObject, not a DBCursor
[11:15:29] <kali> http://api.mongodb.org/java/2.6/com/mongodb/DBCursor.html
[11:16:09] <csrgxtu> kali: oh, sorry for that, next do return a DBObject
[11:16:29] <smargareto> hi
[11:16:51] <csrgxtu> and this object contains Username, Password, how can i retrieve the Username directly
[11:17:14] <smargareto> should I post my question directly here?
[11:17:44] <kali> csrgxtu: DBObject extends BSONObject, so its get("Username"). see http://api.mongodb.org/java/2.6/org/bson/BSONObject.html#get(java.lang.String)
[11:18:04] <kali> smargareto: yes, skip the meta question and jump to the question :)
[11:18:10] <smargareto> ok
[11:18:34] <smargareto> I'm developing a program that inserts data to mongodb (doh!)
[11:19:25] <smargareto> It processes a file that has 1M records, accumulates the data in 6 ways and then inserts (upserts) in mongo
[11:19:38] <smargareto> I use the C++ driver
[11:19:44] <csrgxtu> got it, "DBObject extends BSONObject" solved my problem, thanks man
[11:20:42] <smargareto> the problem I have is that when I insert in only 1 table of the 6, I get good speed (~2s of "pure" db work)
[11:21:13] <smargareto> but when I start adding more tables, the times of the other tables are worse and worse
[11:21:37] <smargareto> I don't know what to look for
[11:22:18] <smargareto> for example, with 2 tables I get:
[11:22:20] <smargareto> WebNavigationUri: 9493ms 414610 items; 0 upserts
[11:22:20] <smargareto> :: 0ms
[11:22:20] <smargareto> Add 4952ms
[11:22:20] <smargareto> Upsert 4540ms / 4166ms
[11:22:20] <smargareto> 3666ms (3666 + 0 + 0 + 0)
[11:22:28] <smargareto> WebAccumulationCategory: 6877ms 261749 items; 0 upserts
[11:22:28] <smargareto> :: 0ms
[11:22:28] <smargareto> Add 4936ms
[11:22:28] <smargareto> Upsert 1940ms / 1781ms
[11:22:28] <smargareto> 1465ms (1465 + 0 + 0 + 0)
[11:23:16] <smargareto> the interesting numbers are the last ones: 3666ms to upsert 414610 records, and 1465ms to upsert 261749
[11:23:36] <smargareto> but when I process the same file with the 6 tables:
[11:23:57] <smargareto> WebNavigationUri: 33046ms 414610 items; 0 upserts
[11:23:57] <smargareto> :: 0ms
[11:23:57] <smargareto> Add 4742ms
[11:23:57] <smargareto> Upsert 28304ms / 27843ms
[11:23:57] <smargareto> 27317ms (27317 + 0 + 0 + 0)
[11:24:41] <smargareto> WebNavigationUri now takes 22s to do the same work
[11:25:01] <smargareto> same file, dropped tables...
[11:25:51] <Nodex> please use a pastebin :)
[11:27:50] <smargareto> ok
[11:30:58] <Gr1> Hi everyone.
[11:31:47] <Gr1> I was setting up mongodb with replication and sharding enabled
[11:32:12] <Gr1> and I got a message that replication should not be enabled on a config server when I try to start the daemon as mongod --configsvr --dbpath /var/lib/mongodb/ --port 27019 --config /etc/mongodb.conf
[11:32:41] <Gr1> Does this mean that the config server would hold only meta data and not the actual db data?
[11:32:44] <smargareto> http://pastebin.com/1Cw0UByu
[11:34:14] <kali> Gr1: yes
[11:35:24] <kali> Gr1: and even if your three config servers "mirror" each other, they are not a replica set, it's a totally different protocol
[11:36:30] <kali> Gr1: so your typical sharding setup should have 3 config servers somewhere, at least one 3-node replica set somewhere else (even if two or more makes more sense), and some mongos somewhere else
[11:36:30] <Gr1> Thank you Kali. I see. So if I am setting up an environment where I need data replication, and sharding, what all services should I run?
[11:36:40] <Gr1> I see
[11:37:03] <kali> Gr1: that said, you don't have to set all of them on different computers
[11:37:29] <Gr1> So config server and replica can remain on the same nodes?
[11:37:47] <kali> same computer, but two mongodb processes
[11:37:57] <kali> with different port, and different dbpath
[11:38:01] <Gr1> Ohh. And different port numbers
[11:38:03] <Gr1> Gotcha
[11:38:07] <Gr1> Thanks a lot kali.
[11:38:16] <Gr1> I was stuck there.
[11:38:21] <Gr1> :)
[11:38:39] <Diplomat> guys any ideas why http://puu.sh/7sru7.png doesnt update my row?
[11:38:46] <Diplomat> it says this Array ( [updatedExisting] => [n] => 0 [connectionId] => 17 [err] => [ok] => 1 )
[11:39:53] <kali> Diplomat: man, is this unreadable. i suggest you try manually your query in the mongodb shell :)
[11:39:56] <Nodex> what is project_id ?
[11:40:13] <Nodex> click_time ? - int's?
[11:40:17] <Diplomat> yes
[11:40:21] <Nodex> then cast them
[11:40:24] <Diplomat> OMG
[11:40:25] <Diplomat> gd
[11:40:27] <Diplomat> LOL
[11:40:31] <Nodex> cast everything
[11:40:39] <kali> ha. php crap. ok :)
[11:40:51] <Nodex> lol
[11:41:23] <Diplomat> stupid PHP thinks everything is string -.-
[11:41:50] <Derick> to me it says you don't understand it well enough yet
[11:42:00] <Nodex> pretty sure I've mentioned 4 times now to cast everything in and out
[11:42:09] <Derick> Diplomat: did you read the URL that I posted yesterday?
[11:42:27] <Diplomat> Derick, I do understand, but I'm not used to casting everything
[11:42:53] <Diplomat> I usually use PDO where I use prepare
[11:43:16] <Derick> Diplomat: can you please read: http://derickrethans.nl/mongodb-type-juggling.html
[11:44:44] <Diplomat> yeah i know everything that's written there, but i just forget :D
[11:46:15] <ajph> is it possible to group by regexp in an aggregation query? for example, $group: { "test": { "$sum": { "$regexp": "^a" } } }
[11:47:50] <kali> ajph: i doubt this $group will produce anything interesting even if it is valid. can you show us an example document and explain what you're trying to do ?
[11:51:44] <ajph> kali: i'm looking to sum all fields that begin with "a" : http://pastie.org/8911174
[11:51:47] <smargareto> The dictionaries that accumulate the data are flushed to mongo each 5000 lines
[11:52:11] <Nodex> ajph : can't you just $match them in the first pipeline?
[11:52:20] <smargareto> and they are processed in the reverse order of the log (WebAccumulationCategory..WebNavigationUri)
[11:53:01] <kali> smargareto: what should start by "a" ? the field name or the value ?
[11:53:17] <ajph> Nodex: hmm, now i see the problem with not posting an exact use-case. i can't, no. let me get a better example
[11:53:22] <ajph> kali: the field name
[11:53:31] <kali> ajph: no, you can't.
[11:54:00] <smargareto> it's like the connection gets tired, since the times of the first table do not degrade
[11:54:24] <kali> smargareto: (sorry for the mismatched message)
[11:54:33] <smargareto> no prob
[11:55:53] <kali> ajph: you should consider changing your schema, actually. if you have to reason on key fields, many different things in mongo will get in the way
[11:56:42] <kali> ajph: consider moving from { key1 : value1, key2 : value2 } to [{ key: key1, value: value1}, { key: key2, value: value2 }]
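kali's reshaping can be sketched as a one-line transform: keys that carry data become explicit `{key, value}` documents, which the aggregation pipeline can then `$unwind`, `$match`, and `$group` over instead of reasoning about field names.

```javascript
// Turn an object whose *keys* carry data into an array of {key, value} docs.
function toKeyValueArray(obj) {
  return Object.keys(obj).map(function (k) {
    return { key: k, value: obj[k] };
  });
}
// toKeyValueArray({ key1: "v1", key2: "v2" })
//   -> [ { key: "key1", value: "v1" }, { key: "key2", value: "v2" } ]
```

After this reshaping, "sum all fields starting with a" becomes an ordinary `$match` on `key` followed by a `$sum` on `value`.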
[11:57:30] <ajph> kali: really i'm looking to sum all fields that have the format test.(.*).clicks where (.*) is unknown. i don't use arrays because i need to upsert.
[11:57:49] <ajph> i suspect i'll need to make sure my (.*) is known somehow
[12:07:17] <kali> ajph: yeah, if you know the values beforehand, you can write the sum by hand
[12:07:31] <kali> ajph: or by code :)
[12:41:06] <Gr1> kali: Hi,
[12:41:56] <Gr1> If I have config server and mongod instances running on the same server with different port, which one should I connect to, to make a query?
[12:42:09] <Gr1> config server or the mongod instance?
[12:42:34] <Derick> neither
[12:42:38] <Derick> you need to talk to mongos
[12:43:32] <Gr1> Derick: Please correct me if I am wrong, but I seems to be able to connect to mongos only when I start the config server. So isn't mongos the config server instance?
[12:43:55] <Derick> no
[12:44:46] <Derick> http://docs.mongodb.org/manual/core/sharding-introduction/
[12:45:03] <Gr1> Ok Thank you Derick :)
[13:03:53] <amitprakash> Hi, is there a good way to combine two update queries in mongo?
[13:04:28] <amitprakash> for example combine two of db[collection].update(filter_criteria, update_dict)
[13:05:37] <amitprakash> the issue is with combining update_dicts, the first update_dict could be {'$addToSet': {'key': value}} and the second update_dict would be {'$addToSet': {'key_alt': value_alt}}
[13:14:35] <amitprakash> Something like db.collection.update({'$and': [filter1, filter2]}, {'$and': [update1, update2]})
[13:21:43] <Nodex> you can use multiple addToSet's in one query as long as they don't change or attempt to change the same value
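amitprakash's merge can be done client-side along Nodex's constraint: combining two update documents is safe as long as they never touch the same operator+field pair. `mergeUpdates` below is a hypothetical helper for illustration, not a driver API:

```javascript
// Merge two MongoDB update documents, e.g. {$addToSet: {...}} shapes.
// Throws if both updates target the same operator+field, since that is
// the case Nodex warns about.
function mergeUpdates(u1, u2) {
  var merged = {};
  [u1, u2].forEach(function (u) {
    Object.keys(u).forEach(function (op) {          // e.g. "$addToSet"
      merged[op] = merged[op] || {};
      Object.keys(u[op]).forEach(function (field) { // e.g. "key", "key_alt"
        if (field in merged[op]) {
          throw new Error("conflicting update on " + op + "." + field);
        }
        merged[op][field] = u[op][field];
      });
    });
  });
  return merged;
}
// mergeUpdates({$addToSet: {key: 1}}, {$addToSet: {key_alt: 2}})
//   -> { $addToSet: { key: 1, key_alt: 2 } }
```

The merged document is then passed to a single `update()` call, with the two filter criteria combined under `$and` as amitprakash suggested.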
[13:29:53] <muraliv> hi. is there a way to hard-code a key in map-reduce emit function? in a composite keys situation.
[13:55:29] <Sidd> Hello
[13:57:38] <Nodex> hi2u2
[13:59:04] <Gr1> Hi
[13:59:18] <Gr1> When I am trying to run a query on my master db, I am seeing
[13:59:19] <amitprakash> Nodex, the point wasn't multiple addToSets, the point was two dicts containing $addToSet and merging them somehow
[13:59:25] <Gr1> error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
[13:59:32] <amitprakash> Decided against it anyway
[13:59:53] <Gr1> eventhough I ran rs.slaveOk() on my 2 secondary slaves
[14:00:04] <Gr1> Is there something that I am missing to run on master?
[14:02:29] <kali> Gr1: you should not have this message when running on the primary
[14:03:48] <Gr1> kali: I am running this on primary.
[14:04:20] <Gr1> Ahh.. It came now. Not sure why
[14:04:36] <Sidd> I am having trouble with connection string, or so I think. I wrote a test application that works with connecting to my mongod instances, but I am trying to test connecting to my Mongos instance. I have sharding and replication set up. Everything appears to be working fine.
[14:04:43] <Gr1> Is that something I should worry about?
[14:04:49] <Gr1> replication lag or something?
[14:05:05] <Sidd> I am using the official C# driver, the message I get is "{"Unable to connect to server nc-dev-api:27018: Configuration system failed to initialize."}"
[14:05:59] <kali> Gr1: it's not supposed to happen. if you see this message, you're just not connected on the primary (or there is a terrible bug, but i doubt it at this stage)
[14:06:53] <Gr1> kali: It just disappeared without any intervention.
[14:07:24] <Gr1> All I did was rs.slaveOk() but this error came a couple of times afterwards. then it disappeared.
[14:07:27] <Gr1> Thanks again Kali
[14:07:35] <Gr1> I will keep posting if this appears again
[14:08:26] <kali> rs.slaveOk() will not change anything if you're talking with a primary
[14:08:49] <Gr1> I did it on secondary node.
[14:10:50] <kali> mmmm i think you're confused. it's a mongodb shell setting and it only affects the current session
[14:13:01] <Gr1> I see.
[14:14:20] <smargareto> I have posted it in stackoverflow: http://stackoverflow.com/questions/22354246/mongo-upsert-performance-c
[14:16:19] <Sidd> If someone can help me with my issue I would appreciate it, I have been unable to find anything with google searching, I am sure Im not asking the right question, but I posted on stack overflow http://stackoverflow.com/questions/22354298/unable-to-connect-to-mongos-in-mongodb
[14:19:24] <eagen> You want to connect to the mongos host and port, not to one of the shards directly.
[14:19:51] <Sidd> that is what I am trying to do
[14:20:12] <Sidd> however i just fixed it....some local data file was corrupt and was causing the problem
[14:20:19] <eagen> Ah ok.
[14:20:24] <Sidd> didn't have anything to do with mongodb
[14:20:43] <Sidd> >.> it was just throwing the error from my application right when I was connecting.
[14:20:48] <Sidd> Thanks though eagen
[14:21:07] <starfly> always the app's fault… ;)
[14:23:09] <Sidd> Haha, I don't know why I didn't look at the inner exception sooner. It was having trouble reading the xml from the app data file. I don't think this is the first time this file has messed up on me either.
[14:23:58] <saml> in casbah, is there replicaset client?
[14:24:05] <saml> something like pymongo http://api.mongodb.org/python/current/api/pymongo/mongo_replica_set_client.html
[14:24:17] <saml> or do I need to specify all mongo hosts in mongo uri?
[14:24:24] <saml> and use normal MongoClient
[14:24:54] <kali> saml: MongoClient is supposed to be the one and unique, i think it's the case for casbah
[14:25:12] <kali> saml: the replica set client should fade out from all the drivers
[14:25:14] <Sidd> I would think you still need to specify all mongo hosts in the mongo uri for the connection string
[14:25:34] <kali> yes, that is true
[14:26:22] <saml> i see.
[14:26:34] <saml> kali, why replica set client should fade out?
[14:27:56] <kali> saml: it's deprecated in favor of MongoClient, and all drivers are supposed to converge as much as possible to the same API
[14:28:29] <saml> kali, so a good practice is to use MongoClient and specify all possible mongo hosts in the mongo uri
[14:29:05] <kali> saml: yes
[14:29:17] <saml> where is deprecation document?
[14:29:31] <saml> we all changed python projects to use MongoReplicaSetClient recently :P
[14:30:06] <saml> http://api.mongodb.org/python/current/api/pymongo/mongo_replica_set_client.html#pymongo.mongo_replica_set_client.MongoReplicaSetClient no mention of deprecation
[14:30:44] <saml> pymongo just uses C driver
[14:31:01] <saml> http://api.mongodb.org/c/current/api/annotated.html i don't know how to find MongoClient here
[14:31:17] <kali> saml: well, i don't know about pymongo specifics. my reference would be that http://derickrethans.nl/mongoclient.html and various blog entries/discussions indicating a wish to make all APIs similar
[14:31:24] <kali> ping Derick ?
[15:13:09] <devdazed> any mongodb folks around, i have a question about mms
[15:13:16] <devdazed> I'd like to bulk remove hosts in the mms console
[15:13:34] <devdazed> i have about 1400 dead hosts in the "Mongos" section and from what i see, i can't remove them
[15:13:39] <devdazed> except one by one
[15:13:50] <devdazed> each having a confirmation before
[15:13:55] <devdazed> that would take days
[15:14:12] <jarjar_> hi guys
[15:15:14] <jarjar_> I'm having some trouble running my script on nodejs with mongoose... keep having this error when I launch it:
[15:15:16] <jarjar_> TypeError: 'undefined' is not a function (evaluating 'inherits(InsertCommand, BaseCommand)') /var/node/suchmaschine/node_modules/mongoose/node_modules/mongodb/lib/mongodb/commands/insert_command.js:38
[15:15:21] <jarjar_> any idea?
[15:23:12] <Derick> kali: pong
[15:23:43] <kali> Derick: you may want to confirm or refute what i was saying about the drivers and the client classes
[15:24:00] <Derick> sorry, no scrollback right now - can you repeat?
[15:24:44] <kali> Derick: http://irclogger.com/.mongodb/2014-03-12#1394634236 starting there
[15:25:18] <Derick> yeah, I think you're right
[15:25:25] <Derick> but we're not quite there yet
[15:25:49] <Derick> 10:30 <saml> pymongo just uses C driver
[15:25:51] <Derick> that's not true
[15:25:55] <Derick> (yet)
[15:26:03] <saml> ah i just assumed
[15:26:12] <Derick> there is a C driver in the making
[15:26:28] <Derick> pymongo has an *optional* C-based addition to support faster bson en/decoding
[15:26:28] <saml> Derick, so should i not use MongoReplicaSetClient?
[15:26:52] <Derick> i think pymongo is still odd - so I don't quite know
[15:45:33] <KamZou> Hi, i've got some strange behaviour in my MongoDB: when i type "show dbs" i get 200GB for a specific database, but when i use this db and run db.mycollection.stats() there's a 100GB+ difference. Any idea please?
[15:48:23] <Nodex> padding?
[15:48:32] <Nodex> and or reserved space
[15:49:05] <kali> KamZou: http://docs.mongodb.org/manual/faq/storage/#why-are-the-files-in-my-data-directory-larger-than-the-data-in-my-database
[15:49:13] <kali> KamZou: you can actually skim the whole page :)
[15:50:47] <KamZou> kali, so the following command: "show dbs" is listing the room used on my disk??
[15:52:46] <kali> yes, show dbs is the disk size
[16:15:45] <KamZou> Thanks kali
[17:16:55] <Wil> Hey, I was hoping someone wouldn't mind taking a few minutes and looking over an aggregation pipeline for inefficiencies I'm sure exist. Trying to optimize a database that shouldn't be so bogged down with the small amount of traffic it's getting.
[17:17:00] <Wil> Pipeline: http://www.pasteall.org/50191
[17:18:35] <Wil> The collection itself is just a time-series database where each document is the address, createdAt time, the values at that time. Index on both the address and the date.
[17:20:13] <Wil> time-series collection *
[17:20:25] <Wil> Not sure how smart it is to actually index a date.
[17:20:55] <Wil> If that's a really dumb thing feel free to call it as it is. :-P
[17:21:21] <Hattersley> date on its own is a bad index
[17:21:26] <Hattersley> date with something else is generally okay
[17:21:32] <Hattersley> oh, hold on
[17:21:37] <Hattersley> I was thinking about shard keys
[17:21:40] <Hattersley> ignore me :D
[17:22:30] <Wil> I'm using mongoose with node, and I marked both as an index, but I don't think it's a joint index.
[17:22:38] <Wil> Soooooo yeah.
[17:23:57] <Wil> This is my first time making use of mongodb. That aggregation pipeline is being used by ~5000 people every 5 minutes to every 24 hours. Depends on the granularity.
[17:24:24] <Wil> But even so, I don't think it's a major bottleneck.
[17:24:47] <Wil> DB is writing to that same collection ~5000/minute.
[17:25:03] <Wil> But, again, I feel like these numbers are extremely low to be causing such a load.
[17:26:58] <Nodex> Wil : can you pastebin your getIndexes() command
[17:27:18] <Wil> Yeah just a minute. :-)
[17:29:51] <Wil> http://pastebin.com/vHXRZ4RD
[17:30:47] <Nodex> can you run an explain() on your match? (just do find({address:....,createdAt:...}).explain())
[17:34:47] <Nodex> also your last index is redundant because the previous one covers it
[17:36:12] <Wil> Nodex: http://pastebin.com/dHfgThDB
[17:36:29] <Wil> 81 yields.
[17:36:30] <Wil> Eesh.
[17:37:06] <Nodex> how many documents does your collection have?
[17:37:18] <Wil> Let's find out...
[17:37:44] <Wil> 25963413
[17:37:56] <Wil> Heh, 25 million.
[17:38:08] <Nodex> what spec is the machine?
[17:38:23] <Wil> m3.large on Amazon AWS/EC2
[17:38:35] <Nodex> that doesn't mean much to me I'm afraid
[17:40:30] <Wil> 7.5gb memory, 1 32gb SSD,
[17:41:02] <Nodex> does your working set fit in RAM?
[17:41:27] <Wil> Not sure...
[17:41:46] <Nodex> you should drop that single address index - the compound covers it
[17:41:53] <Nodex> that will free up some room
[17:44:27] <Wil> Histories is 5.9gb...
[17:44:38] <Wil> I'm assuming size doesn't include totalIndexSize
[17:45:54] <Wil> It must not, because apparently my index size is 7.9gb? o.O
[17:47:16] <Wil> Nodex: Without the single address index, even if queries are run without the createdAt field, does it still make use of the compound index?
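The answer to Wil's question is yes: a compound index is sorted by its leading field first, so a query on `address` alone can still use `{address: 1, createdAt: 1}` (the index-prefix rule), which is why the single-field index is redundant. A sketch with a sorted array standing in for the index (entries hypothetical):

```javascript
// A compound index modelled as entries sorted by [address, createdAt].
var index = [
  ["a1", 10],
  ["a1", 20],
  ["a2", 5],
  ["a3", 7]
];

// A lookup on address alone still benefits: all matching entries sit in one
// contiguous, already-sorted range at the front of the sort order.
var hits = index.filter(function (e) { return e[0] === "a1"; }).length;
```

The reverse does not hold: a query on `createdAt` alone cannot use this index, because `createdAt` values are scattered across all the address groups.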
[18:34:43] <Wil> Nodex: Dropping that index didn't help. :-P
[18:35:10] <Wil> Nodex: Then again, it wouldn't. Not enough ram anyways.
[19:21:06] <Diplomat> guys
[19:21:13] <rkgarcia> hi Diplomat
[19:21:21] <Diplomat> can you please explain to me why mysql fanboys are saying that mongodb is web scale
[19:21:22] <Diplomat> hey man
[19:21:46] <rkgarcia> web scale?
[19:21:55] <rkgarcia> mongodb supports a lot of information
[19:21:57] <Diplomat> http://www.mongodb-is-web-scale.com/ i read this.. and it just sounds like one cocky tech guy talks with a normal dude
[19:22:20] <rkgarcia> D: wait
[19:22:45] <Diplomat> most of the DBAs are just sad and lonely , that's why they talk like bastards -.- lol
[19:25:54] <rkgarcia> hahaha yep
[19:26:20] <rkgarcia> mongodb can write the data and only notify once it's written
[19:26:31] <rkgarcia> and you aren't fucked...
[19:26:52] <simpsond> "..... If you were stupid enough to totally ignore durability just to get benchmarks, I suggest you pipe your data to /dev/null. It will be very fast.", love that quote from the transcript
[19:27:48] <Diplomat> like
[19:27:54] <Diplomat> hmm
[19:39:42] <rob_1234> hello, I have a question: if I have a collection where the _id is a subdocument, { _id: { first_name: "", last_name: "" } }, can I add a field to this subdocument later (as in: db.collection.update( {}, { $set: { '_id.middle_name': '' } }, { multi: true } ); ) ?
[19:40:10] <ron> afaik, _id fields can't be changed.
[19:40:22] <ron> so you'd have to remove and add the object.
[19:41:58] <rob_1234> ok, thanks for the answer
[19:45:17] <jrdn> So, we're using pre-aggregated data and the aggregation framework for our reporting (with pivots).. When we add a lot of data, the browser will crash.. so, with that said, we need pagination.. Does anyone know a good place to start for paginating aggregation framework queries effectively?
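The usual starting point for jrdn's question is appending `$skip` and `$limit` stages to the pipeline per page, after the `$sort` stage. Modelled here on a plain array so the arithmetic is visible:

```javascript
// In-memory equivalent of appending {$skip: page * pageSize} and
// {$limit: pageSize} to an aggregation pipeline. Pages are zero-based.
function paginate(results, page, pageSize) {
  var skip = page * pageSize;
  return results.slice(skip, skip + pageSize);
}
// paginate([1, 2, 3, 4, 5], 1, 2) -> [3, 4]
```

One caveat worth knowing: `$skip` still walks past the skipped results server-side, so deep pages get slower; range-based pagination (filtering on "sort key greater than the last one seen") tends to scale better for large result sets.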
[20:27:30] <samgranger> Hey Mongo guys - I'm fairly new with Mongo, and have a few small questions - anyone here able to answer a few?
[20:28:04] <ron> just ask
[20:29:26] <samgranger> Well, first of all - lets say I have a few records in a table - if I select a random row and delete it straight away after selecting,is there a chance, even a tiny tiny chance, that another request manages to select the same record?
[20:30:22] <samgranger> And if so, is there a different secure way I can accomplish this? I want an option to only allow records to be read once basically
[20:33:36] <samgranger> Might be a weird question but I'm building a web app with certain unique keys in mongo - and let's say someone requests a key simultaneously, I don't want it to be possible that they receive the same one
[20:36:37] <Joeskyyy> only way that could happen would be if you're in a replset and you delete, but you're reading from a secondary
[20:37:28] <LucasTT> i'm getting an error when trying to install the database on a folder
[20:37:32] <LucasTT> (windows)
[20:38:22] <LucasTT> http://prntscr.com/30944d
[20:38:46] <LucasTT> http://prntscr.com/3094ad
[20:39:21] <LucasTT> http://prntscr.com/3094iu
[20:39:22] <samgranger> thanks Joeskyyy
[20:39:39] <LucasTT> anyone knows anything about this error?
[20:39:55] <Joeskyyy> no problem samgranger. You can even avoid that in a replset
[20:40:10] <Joeskyyy> Essentially you can tell an operation to not respond back until it's done replicating
[20:40:27] <Joeskyyy> Then you can avoid it there as well. OF course there's a performance cut there, but take that as you will
[20:40:36] <Joeskyyy> LucasTT: Do you have write permissions on that folder?
[20:41:00] <LucasTT> /data folder?
[20:41:04] <LucasTT> i think so
[20:41:05] <Joeskyyy> Not a windows guy but it looks like a permissions issues
[20:41:19] <LucasTT> should i try to install on a different folder?
[20:42:07] <Joeskyyy> does that folder exist?
[20:42:29] <Joeskyyy> I wonder if the special char in usario has to do with it in all honesty.
[20:43:04] <LucasTT> yes,the folder exist
[20:43:10] <LucasTT> and mongodb even created files inside it
[20:43:28] <LucasTT> it created journal folder and mongod.lock file
[20:44:49] <Joeskyyy> sistema nao pode encontrar o caminho especificado = system ….. something specification? Mind translating
[20:45:33] <LucasTT> sistema nao pode encontrar o caminho especificado = the system couldn't find the specified path
[20:45:47] <Joeskyyy> gotcha, yeah notice how is shows Usu?rio in the path?
[20:46:00] <Joeskyyy> I think the special character is throwing mongo off
[20:46:14] <Joeskyyy> can you create a new user, say "mongo" and try it there?
[20:46:18] <LucasTT> if so,it shouldn't have created those files,or should it?
[20:46:32] <LucasTT> huh, i actually don't really know how to create a new user
[20:46:51] <Joeskyyy> it probably created those, but somehow when the config references the path it's losing the special char
[20:46:56] <LucasTT> i should've done that before, that username always gets me in trouble
[20:47:08] <Joeskyyy> uhhhhh, start > system preferences or something?
[20:47:14] <Joeskyyy> is it windows server, or?
[20:47:25] <LucasTT> windows 7
[20:47:38] <Joeskyyy> http://windows.microsoft.com/en-us/windows/create-user-account#create-user-account=windows-8
[20:47:53] <Joeskyyy> agh that's windows 8
[20:48:09] <Joeskyyy> http://www.howtogeek.com/howto/5261/beginner-geek-add-a-new-user-account-in-windows-7/
[20:49:07] <LucasTT> the weird thing is that my user is called Takalupe
[20:49:24] <Joeskyyy> right
[20:49:30] <Joeskyyy> knew enough spanish to pick that up haha
[20:49:40] <Joeskyyy> close enough in spanish as well at least ;)
[20:49:43] <LucasTT> oh ok :p
[20:49:58] <Joeskyyy> I think maybe usario is the default user that the mongo config is picking up?
[20:50:09] <LucasTT> what do you mean?
[20:50:25] <Joeskyyy> Well, rather, you're specifying it there
[20:50:29] <Joeskyyy> with the --dbpath flag
[20:52:08] <Joeskyyy> Try creating the \data folder in c:\Users\Takalupe\nodetest1\ or something
[20:52:19] <LucasTT> ok
[20:52:20] <Joeskyyy> Then specify that folder in your --dbpath flag
[20:52:23] <LucasTT> wait a second
[20:52:29] <Joeskyyy> sure thing
[20:52:59] <Joeskyyy> gotta jet yo
[20:55:00] <LucasTT> well, at least it worked in the end.
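Joeskyyy's workaround above boils down to putting the data directory under a path with no accented characters and pointing mongod at it explicitly. A sketch, assuming the `c:\Users\Takalupe\nodetest1` folder mentioned in the conversation (the exact path is an assumption):

```shell
# Create a data directory under a path free of special characters
# (the accented "Usuário" in the default path is what trips mongod up).
mkdir C:\Users\Takalupe\nodetest1\data

# Point mongod at it explicitly with --dbpath so it doesn't fall back
# to the problematic default location.
mongod --dbpath C:\Users\Takalupe\nodetest1\data
```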
[21:47:51] <mu> Hello. Can I ask a question about using mongodb here or is this a development channel?
[21:49:02] <Derick> this is an "ask questions about mongodb channel"
[21:49:19] <mu> I have a collection in which we store a bunch of documents with an array of tags... We want to be able to optimize a query for all documents with the same tag, and it looks like it's done through: ensureIndex("tags.Id": 1) and then just querying on that tag. Is that correct or is there more to it?
[21:49:52] <Derick> mu: can you pastebin a document?
[21:50:43] <mu> http://pastebin.com/vkcsEGN6
[21:51:15] <mu> idk if just the Schema suffices
[21:52:16] <mu> in the constructor we just do: @Document = mongoose.model('Document', DocumentSchema)... I'm thinking below that is we add an ensureIndex like I described?
[21:54:42] <Derick> mu: I asked for a document
[21:55:19] <mu> err, so like an example as it exists in a collection?
[21:55:25] <Derick> yes
[21:55:47] <Derick> in general, show it how mongo does things - not with some framework/ODM on top of it
[21:56:40] <mu> Like this? http://pastebin.com/ZhysJSBK
[21:57:31] <mu> And sorry I completely neglected the fact that I'm using mongoose
[21:57:40] <mu> Maybe I should check out a channel for that instead?
[21:57:52] <Derick> right
[21:57:55] <Derick> so
[21:58:10] <Derick> so you have an index on Tag.Id
[21:58:16] <Derick> is that what you search on too?
[21:58:23] <mu> yes
[21:58:25] <Derick> sorry, tags.Is
[21:58:27] <Derick> sorry, tags.Id
[21:58:37] <Derick> yeah, you can easily create an index on that
[21:58:42] <mu> alright how
[21:58:55] <Derick> do you not search for the tag's value?
[21:59:04] <Derick> I think you've already created it:
[21:59:10] <Derick> ensureIndex("tags.Id": 1)
[21:59:38] <Derick> not sure why you need the Id field in tags though
[22:06:01] <mu> I see
[22:06:08] <mu> alright thanks for the assistance
[22:06:16] <Derick> np :-)
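For reference, the index Derick and mu settle on would look like the sketch below. The collection name `documents` is an assumption (mu's actual collection isn't shown), and note that `ensureIndex` takes a key *document* in braces, not the bare `"tags.Id": 1` written above:

```javascript
// Index key and example query for the "tags.Id" lookup discussed above.
// Indexing a field inside an array creates a multikey index: one index
// entry per array element, so matching any tag in the array uses it.
var keys = { "tags.Id": 1 };
var query = { "tags.Id": 42 }; // 42 is a placeholder tag id

// In the mongo shell (collection name assumed):
// db.documents.ensureIndex(keys)   // createIndex in later shell versions
// db.documents.find(query)
```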
[23:02:24] <evilC> Hi all, new to mongo here, and having a little trouble making the conceptual leap to mongo. Any help? My data structure is like this:
[23:02:27] <evilC> {"name":"MyRecord",
[23:02:27] <evilC> "yearChunks":[{"data":"somedata"},
[23:02:27] <evilC> {"data":"somedata"}]}
[23:03:06] <evilC> I am wondering how, in mongodb, to say only update yearchunk 2000
[23:03:30] <evilC> (FYI this is a sparse array)
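One way to target a single `yearChunks` element is the positional `$` operator, which requires something in each element to match on. evilC's snippet doesn't show a year field, so the `year` field, the `records` collection name, and the values below are all assumptions:

```javascript
// Match the element whose (assumed) year field is 2000, then update
// only that element; "$" stands for the first array element matched
// by the query.
var filter = { name: "MyRecord", "yearChunks.year": 2000 };
var update = { $set: { "yearChunks.$.data": "newdata" } };

// In the mongo shell:
// db.records.update(filter, update)

// If yearChunks really is a sparse array indexed by year, the
// numeric-index form addresses the slot directly instead:
var byIndex = { $set: { "yearChunks.2000.data": "newdata" } };
```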
[23:34:39] <unholycrab> is it considered bad form to run the mongo config servers on the same instances as the mongod instances?
[23:34:55] <unholycrab> second, is there any reason not to run more than 3 config servers?
[23:38:40] <synth_> using pymongo i'm able to do pymongo.ASCENDING; is there an equivalent in PHP?
[23:39:30] <unholycrab> the documentation for sharded clusters says "Three config servers. Each config server must be on separate machines."
[23:40:00] <synth_> i understand the return value of pymongo.ASCENDING or DESCENDING will be a -1 or 1; I'm just curious if the php driver has a variable I can call versus using an integer
[23:40:08] <unholycrab> it's a little ambiguous. does it mean at least three? does it mean i can put my config servers on my mongod instances?
[23:40:27] <unholycrab> or is it stating that i need exactly three config servers, each on a dedicated machine
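For context on the passage unholycrab quotes: in the config-server topology of that era, each mongos is started with the config servers listed explicitly. A sketch with hypothetical hostnames:

```shell
# Exactly three config servers; every mongos must list the same three,
# in the same order. Hostnames and port here are placeholders.
mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019
```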
[23:49:47] <ElephantHunter> I'm experiencing an issue with the mongodb interactive tutorial
[23:50:10] <ElephantHunter> The server's responding with "Max number of collections exceeded" every time I attempt to add a collection
[23:50:33] <ElephantHunter> even when I'm running in completely new instances (incognito window) that have no existing collections
[23:51:18] <ElephantHunter> http://try.mongodb.org/
[23:52:05] <ElephantHunter> db.foo.save({a:1})
[23:52:55] <ElephantHunter> Can anybody else confirm?
[23:57:38] <joshua> Interesting. I have never used save before and didn't know it existed
[23:59:26] <joshua> I get "Error: Max number of collections exceeded" so maybe it is broken