PMXBOT Log file Viewer


#mongodb logs for Tuesday the 20th of January, 2015

[01:13:56] <morenoh149> Give every document with a score less than 70 an extra 20 points.
[01:14:14] <morenoh149> whats wrong with this
[01:14:16] <morenoh149> db.scores.update({score: {$lt: 70}}, score: { $set: "score"+20}, {multi: true})
[01:14:30] <fabianhjr> Hey, I think I met someone from MongoDB Linux Distribution team in MHacks. By the offchance he is here: Hi.
[01:16:29] <fabianhjr> The other thing, does someone know of an #elixir-lang adapter for Ecto/Phoenix even if it is experimental? I was only able to find a simple interface to call mongo from with elixir-mongo ( https://github.com/checkiz/elixir-mongo) but it hasn't been updated recently and it does not integrate with Ecto Models.
[01:28:27] <morenoh149> db.scores.update({score:{$lt:70}},{$set:{'score':'score'+20}},{multi:true,upsert:true})
[01:28:36] <morenoh149> whats wrong with this ^
[01:29:11] <Boomtime> heh, well, at least it's valid javascript now, but the server doesn't receive what you think it will
[01:29:15] <morenoh149> Im getting score values of 'score20'
[01:29:20] <Boomtime> correct
[01:29:43] <Boomtime> because the javascript interpreter is running the "score"+20 equation locally
[01:29:44] <morenoh149> how do I get a scores score value in numbers?
[01:30:03] <Boomtime> you don't want $set
[01:30:11] <Boomtime> $set sets a specific value
[01:30:31] <Boomtime> i.e it replaces whatever value is present with the value you supply
[01:30:56] <Boomtime> you probably want $inc
[01:31:03] <Boomtime> http://docs.mongodb.org/manual/reference/operator/update/inc/
[01:31:12] <morenoh149> riight. can I increment by 20?
[01:31:46] <morenoh149> yes
[01:32:30] <morenoh149> Boomtime: thank you
[01:33:56] <Boomtime> morenoh149: remove the upsert:true condition, you cannot possibly mean that
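Putting Boomtime's two fixes together, the update from the question would look like this (a sketch using the collection from the exercise):

```
// "score" + 20 is evaluated by the shell's JavaScript interpreter before
// anything is sent, so $set was writing the literal string "score20".
// $inc performs the addition on the server instead:
db.scores.update(
  { score: { $lt: 70 } },   // every document scoring below 70
  { $inc: { score: 20 } },  // add 20 to the stored numeric value
  { multi: true }           // update all matches; upsert removed as advised
)
```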
[01:40:55] <toter> Hi everybody... The following code is doing absolutely what I want, but it's taking 63 seconds to do it. I know there's a foreach loop inside a foreach loop that needs to be removed, but I can't find a way to make it work. Code: http://viper-7.com/XV6kmX
[01:44:27] <dimon222> probably if you collect all your find operations in bulk ordered collection and send it in bulk - it will be faster
[01:44:43] <dimon222> but i definitely believe that the additional checks for empty results etc are not a good choice for php
[01:44:54] <dimon222> more like python or something
[01:46:14] <dimon222> so i meant this stuff http://docs.mongodb.org/manual/reference/method/js-bulk/
[01:46:23] <dimon222> not sure about implementation in php, but should be somewhere
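For reference, the 2.6 shell Bulk API from that link looks roughly like this (collection and field names are made up here; the PHP driver exposes its own batch-write equivalent):

```
// requires a MongoDB 2.6+ shell; hypothetical collection and fields
var bulk = db.readings.initializeOrderedBulkOp();
bulk.find({ site_no: 46 }).update({ $set: { checked: true } });
bulk.insert({ site_no: 46, vehicle_count: 60 });
bulk.execute();  // one round trip instead of one per operation
```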
[01:54:23] <Jonno_FTW> is there a convenient way to update a nested document?
[01:55:48] <cheeser> $set
[01:57:13] <Jonno_FTW> also, my data looks like this: http://pastebin.ws/4j5033
[01:57:32] <Jonno_FTW> and I want to be able to add more dates and more readings
[01:57:45] <Jonno_FTW> is this the best layout for my data?
[01:59:22] <cheeser> readings.readings is odd. misfire on an update?
[01:59:29] <Jonno_FTW> because I have 2m individual readings that I want to insert
[01:59:43] <cheeser> oh, i doubt you want them all nested like that then.
[02:00:29] <Jonno_FTW> I couldn't think of a better name, how should I organise it then?
[02:01:18] <cheeser> i'd store the site number in each reading. then each site could have billions of readings.
[02:01:44] <Jonno_FTW> seems like I'd be storing a lot of redundant data
[02:01:57] <cheeser> just the site number
[02:02:27] <Jonno_FTW> sounds reasonable
[02:02:37] <Jonno_FTW> if I store date and time in a single field
[02:03:06] <cheeser> documents can only be 16MB large so nesting them all under a single site document can run in to problems.
[02:03:38] <Jonno_FTW> there's many sites, so each site is its own document
[02:07:16] <Jonno_FTW> cheeser: there's probably more than 16mb of data for each site though
[02:07:20] <Jonno_FTW> what should I do?
[02:07:35] <cheeser> each reading gets its own document
[02:08:11] <Jonno_FTW> but then it's no better than the csv I have currently
[02:08:23] <cheeser> ...
[02:08:37] <Jonno_FTW> I have 2TB in csv total to add later
[02:08:57] <cheeser> why would you put 2TB into a csv file?
[02:09:14] <Jonno_FTW> it's not all a single file, and it's how i was given the data
[02:09:25] <cheeser> ah
[02:09:41] <cheeser> well, a database is always going to be better than a csv file. for some value of "better"
[02:10:23] <Jonno_FTW> yeah I get that, I just don't want 2TB stored by reducing the redundancy of the current system
[02:11:29] <cheeser> i'm not quite following that one.
[02:12:07] <Jonno_FTW> part of the reason there's 2tb of data is because there is a heap of redundant data
[02:12:24] <Jonno_FTW> ie, every row has time and date and site
[02:12:54] <Jonno_FTW> in the csv it looks like this: {'rec_date': "'2010-01-29'", 'rec_time': "'09:15:00'", 'site_no': '46', 'data_owner': '1', 'flow_period': '5', 'vehicle_count': '60', 'detector': '8'}
[02:13:31] <cheeser> well, date/time could be one field
[02:15:44] <Jonno_FTW> I'm doing that part
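The layout cheeser is describing, one document per reading with the site number denormalised in and date/time merged into one field, would look something like this (field names borrowed from the CSV sample; the index is only a guess at the likely query pattern):

```
// one document per reading; only site_no repeats across documents
db.readings.insert({
  site_no: 46,
  ts: ISODate("2010-01-29T09:15:00Z"),  // rec_date + rec_time combined
  data_owner: 1,
  flow_period: 5,
  vehicle_count: 60,
  detector: 8
})
// per-site time-range queries would then want an index such as:
// db.readings.ensureIndex({ site_no: 1, ts: 1 })
```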
[02:37:03] <joeyjones> Jonno_FTW: You may also want to check the size of your indexes.
[02:37:25] <Jonno_FTW> I haven't set indexes yet, i'm at the design stage
[02:58:47] <xissburg> If I have a Question collection and it has a pointer to an Approval collection that has the date this question was approved, is it possible to build a query that will return questions sorted by the approval date?
[02:59:58] <xissburg> Something like db.questions.find().sort( { approval.createdAt : -1 } )
[03:08:32] <Boomtime> xissburg: no, you are asking about a join
[03:09:08] <Boomtime> if you want to know the date a question was approved, the question should have that information, otherwise you are doing two queries
[03:09:43] <xissburg> Yeah, I think I will drop the Approval and will add two fields to the Question itself
[03:09:56] <xissburg> I just need the user who approved and date
[03:10:06] <xissburg> will make things simpler
[03:10:07] <Boomtime> that would be the normal approach, embed data that has a direct relationship
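With the approval embedded, the sort xissburg wanted becomes a single query. One detail the earlier snippet misses: a dotted field name must be quoted in the shell. A sketch with hypothetical field names:

```
// question document with the approval data embedded directly:
// { title: "Example question",
//   approval: { approvedBy: "someUser",
//               createdAt: ISODate("2015-01-20T03:00:00Z") } }

// dotted paths need quotes, otherwise the shell throws a syntax error:
db.questions.find().sort({ "approval.createdAt": -1 })
```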
[03:10:41] <xissburg> Yeah, thanks
[03:39:33] <pretodor> #.mongodb
[03:41:18] <pretodor> test
[04:01:21] <Jonno_FTW> can anyone help with my question? https://dba.stackexchange.com/questions/89666/mongo-how-should-i-layout-my-data
[04:35:00] <jimbeam> Hey my mongodb just stopped working? ERROR: dbpath (/data/db) does not exist.
[04:35:14] <jimbeam> Did I lose all my data somehow??
[04:35:27] <cheeser> doesn't sound good...
[04:35:42] <jimbeam> I didn't do anything!
[04:36:00] <jimbeam> My hd is almost full maybe that is why
[04:38:32] <jimbeam> also shouldn't there be logs in var/logs/mongodb? I'm on os x if it makes a difference
[04:53:19] <hmsimha> does anyone know if there's any way to do aggregations from the web interface with compose.io?
[07:08:40] <Arvind> I'm stuck on an issue while saving data to a collection. I am getting this error
[07:08:45] <Arvind> http://pastebin.com/nfbkPmm5
[07:09:08] <Arvind> Can you guys please help to resolve this
[07:13:10] <Boomtime> Arvind: the error appears to be in your own code, have you followed the stack trace?
[07:14:12] <Boomtime> also, your question appears to be pure Node.js - have you asked in #nodejs ?
[07:27:25] <Arvind> Boomtime: No, I haven't asked in #nodejs. My main issue is that I'm referencing another collection as an array in a document field and I'm not passing the reference object. Maybe that is the issue, but I've saved data like this many times before and now I'm getting this error
[09:41:38] <Andre-B> does mongod --repair have any kind of progress output? so I can see how long it will take?
[09:51:09] <aaearon> im using pymongo and trying to pop a value from one of my document's arrays. pop is returning the value from the array but not removing it. am i missing something obvious?
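A likely explanation (guessing at the code, which isn't shown): Python's list.pop() only mutates the local list that the query returned; the stored document is untouched. MongoDB's $pop operator removes the element server-side. Shown here in shell syntax, but pymongo accepts the same update document:

```
// popping the fetched copy changes nothing on the server;
// $pop removes the element from the stored array:
db.items.update(
  { _id: someId },        // hypothetical selector
  { $pop: { tags: 1 } }   // 1 = remove last element, -1 = remove first
)
```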
[10:07:43] <ekristen> morning
[10:07:50] <ekristen> one of my replic’s is crashing
[10:07:54] <ekristen> Tue Jan 20 09:56:45.492 [repl writer worker 1] getFile(): n=16000 — Tue Jan 20 09:56:45.492 [repl writer worker 1] Assertion: 10295:getFile(): bad file number value (corrupt db?): run repair
[10:08:06] <ekristen> I could use some advice on how to fix it
[10:12:37] <Andre-B> ekristen: did you try repairing the db?
[10:13:17] <ekristen> Andre-B: so this is happening on a replica, so I was going to ask if that was the right move or not — well on a secondary at the moment, my primary and my other secondary in the set seem to be ok
[10:25:27] <Andre-B> I have no idea..
[10:25:40] <Andre-B> does mongodump dump users as well? or only database?
[10:35:40] <Andre-B> looks like..
[10:59:46] <ekristen> Andre-B: I cleared the secondary’s data so it could resync
[10:59:56] <ekristen> but it fails to list databases now on initial sync
[11:00:03] <ekristen> so my replicaset is down one member right now
[11:00:23] <ekristen> I’m a little nervous about doing anything else in fear that the primary or other secondary dies
[11:28:41] <joannac_> ekristen: you resynced and now it won't list databases? huh?
[11:28:53] <ekristen> no it won’t re-sync
[11:29:00] <joannac_> what's the error?
[11:29:01] <ekristen> I think there is something wrong on the primary
[11:29:09] <ekristen> and the primary just hasn’t crashed yet
[11:29:12] <joannac_> check the logs
[11:29:19] <ekristen> Tue Jan 20 10:47:20.840 [rsSync] replSet initial sync exception: 10005 listdatabases failed 5 attempts remaining
[11:29:30] <ekristen> the primary logs are fine so are the other secondary
[11:30:03] <ekristen> but the secondary that crashed, I can’t get it to re-sync. I tried deleting the entire dbpath to let it resync from scratch, but it fails with that error message
[11:30:14] <ekristen> there is next to no information available on google for that error message either :/
[11:30:19] <joannac_> more log lines please
[11:30:39] <ekristen> one sec, there isn’t much more but let me pastebin them
[11:30:41] <joannac_> pastebin/gist
[11:31:05] <ekristen> http://pastebin.com/pJeu9j7b
[11:32:21] <joannac_> connect to mongodb08:27017 and run listDatabases?
[11:33:59] <ekristen> welp my primary is now foobar
[11:34:00] <ekristen> shit
[11:34:36] <joannac_> backups?
[11:34:44] <ekristen> yeah I have them
[11:34:58] <ekristen> just going to incur downtime it seems, trying to prevent that if possible
[11:34:59] <joannac_> are you sure there's nothing in the primary logs?
[11:35:09] <joannac_> could be as simple as not enough disk / memory
[11:35:58] <ekristen> joannac_: I think its a ulimit issue
[11:36:18] <joannac_> ...okay, so increase the ulimits and restart the node that was primary
[11:36:42] <ekristen> doing so now
[11:37:30] <ekristen> shit
[11:37:47] <ekristen> Tue Jan 20 11:35:10.049 [initandlisten] exception in initAndListen: 15924 getFile(): bad file number value 16000 (corrupt db?): run repair, terminating
[11:38:02] <ekristen> that is what my secondary crashed with, now my primary crashed with that
[11:40:25] <ekristen> ok how can I force my secondary to take over as primary
[11:41:11] <joannac_> reconfigure with force:true
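The forced reconfig joannac_ means (following the "force a member to be primary" tutorial linked further down) runs on the surviving member; the member index here is hypothetical and depends on the actual config:

```
// run in the shell on the one member that is still up
cfg = rs.conf()
cfg.members = [cfg.members[0]]     // keep only the surviving member
rs.reconfig(cfg, { force: true })  // force: true skips the majority requirement
```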
[11:41:37] <ekristen> so I only have 1 server online
[11:41:41] <ekristen> its telling me I can’t do that
[11:41:45] <joannac_> seems a bit weird though. what version? what OS, etc?
[11:41:50] <ekristen> ubuntu
[11:42:00] <ekristen> 12.04.4
[11:42:39] <joannac_> can't do what?
[11:42:39] <ekristen> I’m not out of inodes, I’ve bumped the nofile and nproc up to 64000, they were at 20000
[11:42:47] <joannac_> also, what *mongodb* version?
[11:42:48] <ekristen> can’t force it to be primary
[11:42:50] <joannac_> vmware?
[11:42:53] <ekristen> 2.4.8 unfortunately
[11:42:55] <ekristen> ec2 aws
[11:43:26] <joannac_> tell me exactly what you're typing
[11:44:03] <ekristen> http://docs.mongodb.org/manual/tutorial/force-member-to-be-primary/ <— I did the first part, but passed {force: true} as an option to rs.reconfig
[11:44:18] <joannac_> and the response is?
[11:44:48] <ekristen> "errmsg" : "exception: need most members up to reconfigure, not ok : mongodb09:27017",
[11:45:06] <joannac_> sigh
[11:45:16] <joannac_> can you just tell me what you're typing?
[11:45:51] <joannac_> or, if you tell me which member is actually up, I can tell you what to type
[11:46:11] <ekristen> oh sorry, ok one sec
[11:47:33] <ekristen> https://gist.github.com/ekristen/84fccd6522641d0c8e2f
[11:48:08] <joannac_> nope
[11:48:34] <joannac_> which host is actually up
[11:48:42] <ekristen> 07
[11:49:12] <joannac_> log into that host and pastebin 'ls -laR /dbpath'
[11:49:22] <joannac_> replace /dbpath with your actual dbpath
[11:49:33] <ekristen> the list is long
[11:50:34] <joannac_> then you have a lot of db files, and maybe I can help you avoid having to restore and resync them :p
[11:50:56] <joannac_> which will take a lot longer than pastebinning some output, no?
[11:52:01] <aaearon> lol
[11:59:01] <arussel> how do you transform a query object to an array ? the goal is to put it in another document
[11:59:41] <arussel> I'm having: can't save a DBQuery object at src/mongo/shell/collection.js:143 when just trying to set it
[12:00:54] <arussel> found it: qo.toArray()
[12:01:22] <arussel> no documentation can beat a well named function
[13:01:10] <kaushikdr> I need some help with nginx-gridfs module. I've followed this https://github.com/mdirolf/nginx-gridfs and then when I start the nginx server it gives 'nginx: [emerg] unknown directive "gridfs" in /etc/nginx/nginx.conf'
[13:52:33] <tusbar> hey, is something wrong with this text index? http://paste.awesom.eu/V6ZQ I never get any result using using { $text { $search: 'foo' } } (mongod 2.6.7)
[13:56:29] <tusbar> (with http://paste.awesom.eu/uJBw as sample data)
[14:17:39] <tusbar> ok nevermind getting results with 4 chars :D
[14:35:14] <hhburs> good morning everyone. does anyone know if it is possible to determine if a collection is using power of 2 allocations?
[14:44:26] <hhburs> i found it, db.collection.stats(), userFlag = 1 -> usePowerOf2Sizes is enabled
[14:50:39] <Bish> hi, how can i make a query where i ask for "give me all objects where field1<field2"
[14:50:59] <Bish> i tried {field:{"gt":"$other_fild"}}
[14:51:09] <cheeser> you can't do that with find()
[14:51:14] <cheeser> aggregation can do that, though.
[14:51:23] <Bish> oh thats where i got that syntax from, okay thank you
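A sketch of the aggregation cheeser means, assuming MongoDB 2.6+ for $$ROOT, with field names taken from Bish's question (find() can't compare two fields of the same document, but $cmp can):

```
// $cmp returns -1, 0 or 1; keep documents where field1 < field2
db.coll.aggregate([
  { $project: { doc: "$$ROOT", order: { $cmp: ["$field1", "$field2"] } } },
  { $match: { order: -1 } }
])
```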
[15:12:50] <winem_> do I understand the docs right that it is a matter of personal preferences if you use db.coll.find({ x : { $gt : "1", $regex : "2" }}) or if you build the query with or?
[15:13:39] <winem_> sorry, talking about $and, not $or...
[15:14:14] <winem_> and the version without $and should be a bit more performant. is this right or is there any situation where you MUST use $and?
[15:19:37] <winem_> oh ok, I see... it looks to be important in case of queries like db.scores.find( { score : { $gt : 50 }, score : { $lt : 60 }}) because the 2nd condition takes priority and "overwrites" the first one... please let me know if I'm wrong
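That reading is right: in a JavaScript object literal the second `score` key overwrites the first before MongoDB ever sees the query, so a range on one field needs either a combined operator document or an explicit $and:

```
db.scores.find({ score: { $gt: 50 }, score: { $lt: 60 } })  // only $lt survives
db.scores.find({ score: { $gt: 50, $lt: 60 } })             // both conditions, one key
db.scores.find({ $and: [ { score: { $gt: 50 } },            // equivalent explicit form
                         { score: { $lt: 60 } } ] })
```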
[15:35:33] <c7hb7e> if i run the following upsert query, which operator is applied first? $setOnInsert or $addToSet? db.myCollection.update({ "$setOnInsert": { "tags": ["public"] }, "$addToSet": { "tags": { "$each": ["tag1", "tag2"] } } })
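As quoted, that update() is missing its query argument and the upsert option. A corrected sketch with a hypothetical selector; note that MongoDB rejects an update where two operators target the same path, so $setOnInsert and $addToSet can't both write `tags` and the ordering question never arises:

```
db.myCollection.update(
  { _id: "doc1" },                                     // hypothetical query; was missing
  {
    $setOnInsert: { createdAt: new Date() },           // applied only when inserting
    $addToSet: { tags: { $each: ["tag1", "tag2"] } }   // must target a different path
  },
  { upsert: true }
)
```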
[17:17:23] <winem_> hi, I have some strange behaviour I can't understand... I guess I'm just confused, because I did partial updates a lot of times before. why does the update in this pastie http://pastie.org/private/cmijtrzi1ymlepmyqytpq have no effect?
[17:17:47] <winem_> the query matches one record, but it's not updated...
[17:19:48] <winem_> guess it will take most of you about a sec to find the error...
[17:21:03] <sekyms> dumb question: What is preferred 'start_time' or startTime
[17:43:28] <winem_> ok, I have no idea why it didn't work.. it works now...
[18:25:41] <Tyler_> Would you guys separate your user document from the billing info?
[18:29:16] <cheeser> probably
[19:06:45] <dman777_alter> does anyone use elasticsearch-river-mongodb? If I wanted to span the shards across 2 servers would this plugin be ok with it?
[19:52:01] <harttho> Is there an upsert equiv for using $inc to set a field that hasn't been set yet?
[19:53:05] <harttho> nevermind: If the field does not exist, $inc creates the field and sets the field to the specified value.
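So a plain $inc with upsert covers both the missing-field and missing-document cases in one call (collection and field names made up):

```
// creates the document if absent, creates the field if absent,
// otherwise increments the existing value
db.counters.update({ _id: "page-hits" }, { $inc: { hits: 20 } }, { upsert: true })
```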
[20:45:57] <kcjd> Hello. Thanks in advance for any help given. I'm attempting to set up mongo, Using osx, downloaded using homebrew. When I run the command mongo --version, it returns $ db version v2.6.7, and does not specify Mongo Shell version.
[20:47:35] <kcjd> I can run mongod, but if I open another terminal window and run mongo it fails with "Unable to lock file: .. Is a mongod instance already running?"
[20:55:38] <cheeser> kcjd: mongod is the server. mongo is the client.
[20:55:43] <cheeser> doh!
[21:34:17] <blizzow> I'm trying to do a mongorestore into a sharded cluster with a couple 150GB bson files and a few 80GB bson files. I'm only getting 1000-2000 inserts/second. I'm running multiple restores against different mongos instances to try and parallelize and speed up the restore, but that's not helping. I added -noobjcheck --noIndexRestore --noOptions and it's still super slow. The mongo replica sets are running with 64GB RAM, 32GB swap, and
[22:12:46] <Tyler_> If you're doing a bunch of database calls with .then, you're still doing nested database calls
[22:12:52] <Tyler_> how do you avoid making 100 nested calls?
[22:45:56] <nnyk_> I would think having 100 nested calls shows a problematic design afaik
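One common fix, sketched generically rather than against any particular driver: return each promise from .then() instead of nesting the next call inside the previous callback, so the chain stays one level deep however many calls follow.

```javascript
// fakeDbCall stands in for any driver call that returns a promise (hypothetical)
function fakeDbCall(value) {
  return Promise.resolve(value + 1);
}

// returning the next promise from .then() keeps the chain flat:
// depth stays constant no matter how many calls are added
function runChain() {
  return fakeDbCall(0)
    .then(fakeDbCall)
    .then(fakeDbCall);
}

runChain().then(function (result) {
  console.log(result); // 3
});
```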