#mongodb logs for Monday the 18th of May, 2015

[04:08:47] <markholmes> hey all, i'm having trouble figuring out where homebrew installed mongo, and how i can start it
[04:12:24] <markholm_> hi, sorry, if someone responded, my internet disconnected
[04:51:17] <joannac> markholmes: `which mongod`
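
For reference, a minimal sketch of locating and starting a Homebrew-installed mongod; the config path is an assumption based on Homebrew's usual layout and may differ per install:

    # locate the binary Homebrew linked onto the PATH
    which mongod
    # show where the package itself was installed
    brew --prefix mongodb
    # run it in the foreground (config path assumed; adjust to your install)
    mongod --config /usr/local/etc/mongod.conf
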
[05:25:28] <svm_invictvs> Heya
[05:25:41] <svm_invictvs> Using GridFS, is it possible to rename a file atomically?
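
The question went unanswered in-channel, but GridFS keeps per-file metadata as a single document in fs.files, and a single-document update is atomic in MongoDB. A minimal mongo-shell sketch (filenames are hypothetical):

    // rename a GridFS file by updating its fs.files metadata document;
    // an update to one document is applied atomically
    db.fs.files.update(
        { filename: "old-name.bin" },
        { $set: { filename: "new-name.bin" } }
    )
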
[07:24:45] <windows7_> hi :)
[07:24:56] <windows7_> can anyone tell me what these errors mean,
[07:25:04] <windows7_> {'$set': {'date_slug': '131126-4' }})
[07:25:08] <windows7_> actually not errors
[07:25:13] <windows7_> its just failing to find a match
[07:25:23] <windows7_> when using db.user.update
[07:25:28] <windows7_> on 2.6.2
[07:26:56] <windows7_> i think i may have failed to create the user in the new database
[07:27:06] <windows7_> after initializing the admin db user
[07:30:36] <windows7_> yes that was the case
[07:30:56] <windows7_> as well as bungling the roles
[07:33:24] <windows7_> how can I add an email to a user?
[07:33:46] <windows7_> i'm getting an error saying: couldn't add user: "email" is not a valid argument to createUser at src/mongo/shell/db.js:1004
[07:35:24] <joannac> windows7_: um, mongodb users don't have an email field
[07:36:27] <joannac> windows7_: if you want to add random other fields, use the customData field http://docs.mongodb.org/manual/reference/method/db.createUser/
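
A minimal sketch of what joannac is pointing at, putting the email into customData when the user is created (all names here are illustrative):

    // per the db.createUser docs joannac linked: arbitrary extra data
    // belongs in customData, not in top-level user fields
    db.getSiblingDB("mydb").createUser({
        user: "appuser",
        pwd: "changeme",
        roles: [ { role: "readWrite", db: "mydb" } ],
        customData: { email: "user@example.com" }
    })
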
[07:42:21] <windows7_> can you shed some light on this statement, joannac: db.user.update({'email': 'guoqiao@gmail.com'}, {'$set': {'date_slug': '131126-4' }})
[07:43:43] <windows7_> basically that statement is in the setup instructions
[07:43:47] <windows7_> but it keeps failing
[07:43:50] <windows7_> and I'm not sure why
[07:44:04] <windows7_> I've eliminated a few steps such as creating an admin db
[07:44:20] <windows7_> and creating a user from within the admin db
[07:44:26] <windows7_> but now im stuck again
[08:06:37] <arussel> can you give anyAction to a db only?
[08:06:42] <arussel> not the whole system
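
No one answered in-channel, but privileges can be scoped to a single database through a user-defined role; a hedged sketch (db and role names are hypothetical):

    // a role granting anyAction on every collection of one db only
    use mydb
    db.createRole({
        role: "mydbAnyAction",
        privileges: [
            { resource: { db: "mydb", collection: "" }, actions: [ "anyAction" ] }
        ],
        roles: []
    })
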
[08:18:07] <luc4> Hello! I'm using mongo on a web service running on openshift. It seems that after a few days of uptime I get something like this: http://pastebin.com/gzhDQjqH. Any idea why this might happen? It seems it fails to connect but... why?
[08:21:58] <arussel> luc4: what does the mongo log say?
[08:27:21] <luc4> arussel: mmh... not sure if I have its log somewhere...
[08:41:08] <sabrehagen> would really appreciate it if somebody could provide some input on my question here: http://stackoverflow.com/questions/30229872/node-js-mongodb-connection-in-master-or-forked-thread
[11:45:39] <MachineMan> I’ve got some problems with mongoexport. Trying to export as .csv but my non-_id columns are empty in the .csv. I’m not getting any errors from the commandline though
[11:45:44] <MachineMan> mongoexport --type=csv -d thesis -c ratings -f "_id, uid, action, date" -o usersexport.csv
[11:46:00] <MachineMan> Any clues?
[11:59:41] <joannac> MachineMan: try getting rid of the spaces
[12:00:13] <MachineMan> perfect
[12:00:14] <MachineMan> that worked
[12:00:15] <MachineMan> thanks
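
For reference, the command that worked; the -f field list must be comma-separated with no spaces:

    mongoexport --type=csv -d thesis -c ratings -f "_id,uid,action,date" -o usersexport.csv
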
[13:18:33] <unseensoul> Hi
[13:18:53] <unseensoul> How can I return documents with the columns in a given order?
[13:20:54] <unseensoul> Apparently, you can't after some googling
[13:20:58] <unseensoul> :\
[13:21:16] <cheeser> you can reorder with the aggregation framework. but typically key order doesn't really matter.
[13:21:25] <cheeser> i'll just put that here in case she comes back.
[13:31:51] <StephenLynx> wanting to order keys from the database is a bad sign.
[13:32:01] <cheeser> agreed
[13:36:19] <deathanchor> the cursor handler should be doing the work, be it a driver, or a javascript function you use in the shell
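
A minimal sketch of cheeser's aggregation suggestion; $project generally emits fields in the order they are listed (collection and field names are hypothetical):

    // fields come back in the order given to $project
    db.user.aggregate([
        { $project: { _id: 0, name: 1, email: 1, created: 1 } }
    ])
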
[15:09:12] <aendrew> Howdy folks! So, I have an array of objects in my document, “document.officers”, each child of which has an element, “name”. I’m wanting to find all documents that match /HERPA.*?DERPA/i, so my query’s like db.documents.find({‘officers.name’: /HERPA.*?DERPA/i}) — but this returns documents where either “HERPA” or “DERPA” is in one of the names, i.e., “officers.name: ‘HERPA LLAMA’” and “officers.name: ‘DUCK DERPA’” both return a match, where I only want matches like “HERPA SOMETHING IN THE MIDDLE HERE DERPA”. Any thoughts?
[15:10:26] <StephenLynx> I can't read half of the characters
[15:15:56] <saml> that's the end of mongodb
[15:16:32] <saml> aendrew, your regex is wrong
[15:16:38] <saml> and don't use regex to find things
[15:17:09] <saml> /HERPA.+DERPA/
[15:17:32] <saml> /^HERPA.+DERPA$/
[15:17:36] <aendrew> Lulz, noted. Any suggestions how I should approach the problem of people with indeterminate middle names?
[15:18:16] <aendrew> saml: Also, adding anchors helped a lot; thanks!
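
Putting saml's fix together with the original query, the anchored version looks like this:

    // anchors force the whole name to match, not just a substring
    db.documents.find({ "officers.name": /^HERPA.+DERPA$/i })
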
[15:32:10] <StephenLynx> heh, the more you know. If you don't close a mongo connection in the node.js driver, the application won't close.
[15:32:28] <StephenLynx> and if you close the connection while a gridstore file is open, it will throw an error.
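
A minimal sketch of the behaviour StephenLynx describes, using the node.js driver API of that era (URL and collection name are hypothetical):

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/mydb', function(err, db) {
        if (err) throw err;
        db.collection('docs').find({}).toArray(function(err, docs) {
            // without this close(), the node process never exits
            db.close();
        });
    });
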
[15:33:08] <windows7_> i need help
[15:33:09] <windows7_> ugh
[15:33:36] <windows7_> i have this statement in my instructions
[15:33:40] <windows7_> db.user.update({'email': 'bucyrustech@hotmail.com'}, {'$set': {'date_slug': '131126-4' }})
[15:33:48] <windows7_> but it keeps failing and I don't know why
[15:33:51] <StephenLynx> from your nick I can tell that you need it. lets see.
[15:34:03] <StephenLynx> what is the error it throws at you?
[15:34:50] <StephenLynx> from the query syntax it seems to be fine.
[15:35:22] <StephenLynx> try removing the single quotes from the keys.
[15:35:31] <saml> windows7_, what do you mean by failing?
[15:35:37] <StephenLynx> email instead of 'email' and $set instead of '$set'
[15:35:52] <StephenLynx> also date_slug instead of 'date_slug'
[15:36:27] <saml> https://www.google.com/search?q=bucyrustech@hotmail.com
[15:38:00] <windows7_> saml: lmao stop doxing me
[15:38:02] <StephenLynx> Ohio?
[15:38:07] <windows7_> yeah
[15:38:07] <StephenLynx> lol
[15:38:13] <StephenLynx> you don't know what doxxing is
[15:38:13] <saml> i'm so stalking you
[15:38:18] <StephenLynx> but maybe I am able to show you.
[15:38:26] <windows7_> saml: its not worth it
[15:38:29] <StephenLynx> midwestech, hm
[15:38:47] <StephenLynx> " rinse_wash_repeat"
[15:38:48] <saml> what's the error though?
[15:39:37] <windows7_> not really an error sorry
[15:39:40] <windows7_> it just wont match
[15:39:41] <windows7_> says this
[15:39:44] <windows7_> WriteResult({ "nMatched" : 0, "nUpserted" : 0, "nModified" : 0 })
[15:39:57] <windows7_> and I dont understand how to add the email in the first place
[15:40:00] <windows7_> I think
[15:40:05] <saml> db.user.find({email: 'bucyrustech@hotmail.com'})
[15:40:09] <StephenLynx> " We hack our bodies with artifacts from the future-present. " wtf lol
[15:40:11] <saml> is there such user ?
[15:40:29] <windows7_> StephenLynx: biohackers, some of the magnets are cool
[15:40:41] <saml> db.user.update({'email': 'bucyrustech@hotmail.com'}, {'$set': {'date_slug': '131126-4' }}, {upsert:true})
[15:41:17] <windows7_> thanks :D
[15:41:29] <windows7_> the upsert made it work
[15:42:16] <StephenLynx> "Blitzkrieg Website Design" eh
[15:42:42] <windows7_> yeah its all gone to shit now
[15:42:46] <windows7_> that business
[15:45:37] <StephenLynx> I could dig that crazy forum to get more stuff.
[15:45:53] <StephenLynx> or create a facebook account.
[15:45:55] <StephenLynx> but nah
[15:46:49] <StephenLynx> "830 S Walnut St, Bucyrus, OH 44820"
[15:47:35] <windows7_> lol
[15:47:42] <windows7_> are you looking in my window?
[15:48:31] <StephenLynx> scott?
[15:48:40] <StephenLynx> maybe john?
[15:48:54] <StephenLynx> I don't think you are kalen
[15:49:34] <windows7_> maybe we should keep it a mystery :P
[15:51:07] <StephenLynx> hm, I think its mr sanders.
[15:51:42] <windows7_> oh youre on gplus
[15:53:33] <StephenLynx> "308 Helen Ave, Crestline, OH 44827" lol why do people put their home addresses on the internet like this?
[15:54:00] <windows7_> because they want swatted?
[15:54:03] <windows7_> idk
[15:54:52] <StephenLynx> nah, I dont think its sander
[15:55:37] <StephenLynx> hm
[15:55:50] <StephenLynx> kalen is currently at bucyrus
[15:56:55] <StephenLynx> kalen seems pretty obsessed with bucyrus
[15:57:08] <windows7_> lmao
[15:59:36] <StephenLynx> do you happen to know what 'moz' is?
[16:00:40] <windows7_> like mozilla
[16:01:22] <StephenLynx> 830 Walnut Street Bucyrus, OH is where kalen lives, apparently
[16:01:42] <StephenLynx> yeah, youre kalen
[16:01:49] <windows7_> you win
[16:01:53] <StephenLynx> :^)
[16:02:01] <windows7_> lol
[16:02:02] <StephenLynx> so yeah, don't go pasting your e-mail around.
[16:02:23] <StephenLynx> if you are that careless around the internet to the point people can find out your home address.
[16:02:43] <windows7_> i would tell you that i believe in the common decency of people in irc
[16:02:49] <windows7_> but i dont want you to prove me wrong :P
[16:03:10] <StephenLynx> I don't find it dignified to fuck up random people.
[16:03:22] <StephenLynx> but again, most people don't care about dignity these days.
[16:03:33] <cheeser> i gave up on that fantasy years ago. http://bit.ly/1dfuKIs
[17:54:23] <shlant> hi all. I am trying to set up a replica set with MMS. It shows on the Processes tab, but has no state. I also deployed users but when I actually check on the host, they don't exist…
[17:54:53] <shlant> I initially set them up with automation agents
[17:58:10] <shlant> ah there we go
[17:58:15] <shlant> the replica set is up
[17:58:55] <shlant> or not… UI says yes, host says no
[17:58:55] <shlant> > rs.status()
[17:58:56] <shlant> { "ok" : 0, "errmsg" : "not running with --replSet" }
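
The error is literal: that mongod was started without --replSet, so it cannot initiate or join a replica set. A minimal sketch of doing it by hand (set name and paths are hypothetical):

    # start the member with a replica set name
    mongod --replSet rs0 --dbpath /data/db
    # then, from a mongo shell connected to it:
    #   rs.initiate()
    #   rs.status()
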
[18:29:09] <proteneer> I have a document whose field is stored using $currentDate timestamp: true
[18:29:21] <proteneer> but I’d like to know the server’s actual time
[18:29:23] <proteneer> from my driver code
[18:29:34] <bfrizzle> hi all, when mongod is configured to run in SSL mode, is it /fixed/ into mutual verification mode? or is there a way to configure mongod to require ssl connections without also requiring client certificates?
[18:30:14] <proteneer> there may be clock skew etc between the server that my DB is hosted on and the actual webserver itself
[18:31:00] <cheeser> use ntp and don't worry about it?
[18:32:24] <StephenLynx> proteneer I always use UTC
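
One way to read the server's own clock from the shell or a driver, independent of client skew, is serverStatus:

    // localTime is the mongod host's clock, not the client's
    db.serverStatus().localTime
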
[18:37:20] <svm_invictvs> If I am writing to a collection, how do I know when that is written and replicated?
[18:37:25] <svm_invictvs> Is that the SAFE write concern?
[18:38:07] <cheeser> http://docs.mongodb.org/manual/core/write-concern/#write-concern-levels
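
A minimal sketch of what the linked page describes, requesting acknowledgement from a majority of the replica set (collection name is hypothetical):

    // returns only once a majority of members have the write
    db.coll.insert(
        { status: "ok" },
        { writeConcern: { w: "majority", wtimeout: 5000 } }
    )
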
[18:40:01] <StephenLynx> cheeser, is it possible to get X documents at once from a cursor in an array?
[18:40:22] <StephenLynx> I would like to get a page, then the next page
[18:40:32] <StephenLynx> without having to perform a second query
[18:42:15] <svm_invictvs> Is the query really that expensive?
[18:42:32] <StephenLynx> no, but they add up.
[18:42:48] <StephenLynx> exponentially.
[18:42:53] <cheeser> sounds like a batch size
[18:43:18] <cheeser> but if you mean between subsequent cursors that can get tricky
[18:43:20] <svm_invictvs> StephenLynx: is this between requests?
[18:43:30] <StephenLynx> no, a single one.
[18:43:48] <cheeser> oh, that's just a batch then.
[18:44:06] <StephenLynx> so if I set a batchSize of X, I will get X documents in toArray, then if I call toArray again I get the next X documents?
[18:45:57] <cheeser> probably depends on the driver, but i think toArray() will pull all the results into the array, not just the current batch
[18:46:33] <StephenLynx> and how do I get a whole batch?
[18:46:47] <StephenLynx> but just the batch?
[18:46:57] <cheeser> i don't think you can really
[18:47:13] <cheeser> i don't know of any driver that exposes that explicitly
[18:47:21] <StephenLynx> uh
[18:47:34] <cheeser> if you know/set the batch size, you can incrementally pull off docs and stop before triggering the getMore
[18:48:10] <StephenLynx> if I call skip twice on a cursor, does it just skip more documents?
[18:49:33] <cheeser> skip() gets applied before the cursor gets created on the server end, iirc
[18:50:00] <cheeser> i don't think you can skip once you start iterating the cursor short of just getting and discarding /n/ docs
[18:50:50] <StephenLynx> so you don't know any means to get a number of documents at once while not pulling them all from the cursor?
[18:51:01] <cheeser> i don't
[18:51:41] <cheeser> at least, neither the C# nor the java driver exposes a method like that. i don't know that any of the others do either.
[18:55:54] <StephenLynx> so I guess I will perform multiple queries with the proper skip then.
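
A minimal sketch of the fallback StephenLynx settles on, one query per page with skip and limit (node.js driver; names are hypothetical):

    // fetch page `n` (0-based) of `size` documents
    function getPage(collection, n, size, callback) {
        collection.find({})
            .sort({ _id: 1 })   // stable order so pages don't overlap
            .skip(n * size)
            .limit(size)
            .toArray(callback);
    }
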
[19:11:16] <flicknick> hi all
[19:15:18] <flicknick> we had an unclean shutdown and I'm trying a repair, but it segfaults straight away. I've searched through the docs but am unable to figure out how to deal with this. Ideas appreciated.
[19:15:43] <flicknick> this is 3.0.3 on yosemite
[19:56:48] <dfosl> from docs:
[19:57:03] <dfosl> After every insert, update, or delete operation, MongoDB must update every index associated with the collection in addition to the data itself.
[19:57:23] <dfosl> does that mean the index is updated even if i update an un-indexed field?
[19:57:51] <cheeser> if the document has to be moved on disk, sure.
[19:58:12] <dfosl> hmmm
[19:59:12] <dfosl> and when does it need to be moved?
[19:59:20] <cheeser> ?
[20:00:00] <dfosl> you wrote that it happens when the document needs to be moved on disk. if i update an integer field from 10 to 2000, that will not make the whole structure larger, which means no moving?
[20:00:09] <dfosl> or you talking about something else ?
[20:00:43] <cheeser> changing a number field cannot make the document larger
[20:00:58] <dfosl> because i am building a write-heavy application, but it's write-heavy only on un-indexed fields, and i have millions of elements inside
[20:01:22] <dfosl> and the docs are saying that even if i update only unindexed fields, the db will rebuild the indexes
[20:01:28] <dfosl> on all fields in the collection
[20:01:30] <cheeser> no, it says it might.
[20:01:37] <cheeser> on the affected indexed fields.
[20:02:16] <dfosl> "MongoDB must update every index associated with the collection " at the end there is "collection" not "field" :P
[20:02:20] <dfosl> that's why i ask
[20:02:49] <dfosl> because it clearly states every associated with the collection
[20:03:30] <dfosl> it didn't make sense to me, that's why i asked here
[20:06:02] <dfosl> cheeser: sorry, but that would be true if it were written only for insert operations; in the documentation it also applies to update operations
[20:06:13] <dfosl> "After every insert, update"
[20:07:17] <deathanchor> dfosl: perhaps if the document had to be moved because it ran out of padding from the update?
[20:07:22] <cheeser> i've already told you everything i know about it. if you find it confusing, file a jira against it.
[20:07:29] <cheeser> deathanchor: i mentioned that.
[20:08:21] <dfosl> then it's a specific case; if most fields are integers or small strings then it doesn't apply in most cases
[20:08:45] <dfosl> and normally you document what applies to most cases..
[20:08:55] <dfosl> anyway thx for help
[20:08:59] <deathanchor> dfosl: why not write a perf test to see how it behaves?
[20:09:46] <cheeser> or file a support ticket for clarification
[20:10:09] <dfosl> i thought i would get clarification here faster and i was right ;)
[20:14:39] <cheeser> did you, really, though?
[20:16:05] <dfosl> yes, my documents will be almost the same size
[20:16:22] <dfosl> so it doesn't apply to my use case
[20:16:27] <dfosl> for specific collection
[20:18:49] <dfosl> last question: without using indexes, on a write-heavy collection, is the best way to retrieve the logs for a specific user just to make a document per user and push the logs onto an array field?
[20:19:27] <dfosl> then just retrieve the whole array for that user, which means less traversal than a single collection with all the logs
[20:20:28] <dfosl> any performance penalties to watch out for in this design?
[20:34:17] <cheeser> documents can only be 16MB so you're likely to run out of space
[20:34:34] <cheeser> you'll almost certainly need to move that document many times even with powerOf2 sizing
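
A common compromise here is the bucket pattern: cap each per-user document so none grows without bound. A hedged sketch (names and the 1000-entry cap are hypothetical):

    // append to the user's current bucket; a full bucket matches nothing,
    // so the upsert starts a fresh one for that user
    db.logbuckets.update(
        { userId: 42, count: { $lt: 1000 } },
        {
            $push: { logs: { msg: "login", at: new Date() } },
            $inc: { count: 1 }
        },
        { upsert: true }
    )
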
[21:08:37] <saml> is there an alias for a database?
[21:08:58] <saml> with atomic rename
[21:54:33] <SpeakerToMeat> Hi
[21:55:45] <SpeakerToMeat> Sorry to ask this here, but I really trust the architectural view of mongoers.... Would you say a database of movies (cinema) would be a better fit for a graph database than a document one, to preserve deduplication/normalization of linked items (like director, to movie(s)), rather than trying to implement rdbms-like relationships between documents?
[21:55:51] <SpeakerToMeat> Or am I wrong?
[22:01:11] <blizzow> Does anyone here know if the reactive mongo driver tries to access the arbiters of a replica set?
[22:17:27] <cheeser> blizzow: why would it? arbiters are non-data bearing members. there's nothing of interest there to an application
[22:20:26] <blizzow> cheeser: I'm not sure, but my application is trying to bang on my arbiter server, and the arbiter is not in any of my application config URIs.
[22:20:36] <joannac> cheeser: to keep an eye on the whole set.
[22:21:01] <joannac> blizzow: that sounds like a bug. one or 2 connections to keep an eye on stuff, sure. lots of connections - bad
[22:21:04] <cheeser> joannac: ... maybe. but an arbiter can never be a primary
[22:21:14] <joannac> cheeser: true, but we still do it :)
[22:21:39] <cheeser> i'm not sure a driver will even try to connect to it...
[22:21:47] <joannac> cheeser: C# driver will
[22:22:08] <joannac> i think most of them will actually, but c# for sure
[22:22:39] <cheeser> i guess if it's part of the seed list, it'll connect to discover the cluster topology
[22:22:49] <cheeser> but it should never "pound" it.
[22:23:15] <joannac> right. agreed.
[22:25:56] <SpeakerToMeat> cheeser: You're an OP here too?
[22:26:03] <SpeakerToMeat> cheeser: Wait... you use Mongo?
[22:26:26] <SpeakerToMeat> cheeser: I thought Java guys only believed in rdbms..... and enterprise beans
[22:27:37] <cheeser> that's ridiculous
[22:41:07] <shlant> should I install the Backup Agent in MMS to the primary or Secondary?
[22:41:36] <cheeser> primary
[22:42:50] <shlant> cheeser: thought so. Thanks!
[22:42:59] <cheeser> np
[22:43:15] <cheeser> secondaries can lag so using them for backups is usually a bad idea.
[22:56:33] <itisit> how do I clean out all mongodb data (dbs, users, replica set, sharding, etc)?
[23:00:37] <dreamdust> I'm seeing huge spikes in query time for finds during periods of very low throughput… like randomly a find will take 14 secs on a collection with less than 500k docs. Indexes are in memory as well.
[23:00:55] <dreamdust> What would be a good place to start debugging? Does this indicate some kind of collection lock?
[23:02:36] <joannac> run explain()
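
For concreteness, the 3.0 shell form that reports index use and timings (collection and filter are hypothetical):

    // executionStats shows the chosen plan, docs examined, and timings
    db.mycoll.find({ user: "someone" }).explain("executionStats")
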
[23:08:48] <itisit> how do I clean out all mongodb data (dbs, users, replica set, sharding, etc)? can I just remove everything from the dbpath rather than run commands to drop databases etc?
[23:23:17] <joannac> itisit: sure, shut down any mongod / mongos processes, and remove all the files
[23:23:28] <joannac> that means ALL your data is gone
[23:38:06] <itisit> joannac: thanks!