PMXBOT Log file Viewer


#mongodb logs for Wednesday the 30th of September, 2015

[03:24:44] <cipher__> How do I specify a password in a mongodb uri if it ends with @?
[03:24:49] <cipher__> @@ isn't working.
[03:27:32] <cipher__> for example mongodb://root:mypassword@@localhost/mydb
[03:30:35] <cipher__> nevermind. I passed the option struct containing the credentials instead
[03:37:43] <preaction> url encoding
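preaction's answer is the standard fix for cipher__'s problem: a literal `@` in the password must be percent-encoded before it goes into the connection string, otherwise the driver treats it as the credentials/host separator. A minimal sketch in plain JavaScript (the password value is hypothetical):

```javascript
// Percent-encode credentials before building a MongoDB connection string.
// A password ending in '@' must become '%40', otherwise the URI parser
// reads the '@' as the start of the host section.
const password = "mypassword@"; // hypothetical password ending in '@'
const encoded = encodeURIComponent(password);
const uri = `mongodb://root:${encoded}@localhost/mydb`;
console.log(uri); // mongodb://root:mypassword%40@localhost/mydb
```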
[03:48:16] <msn> how do i restore a db after enabling authentication? I am trying mongorestore -u <user> -p --drop -d <db> <directory with backup> but it keeps giving me unauthorized even though <user> has readWrite and dbAdmin rights
[03:51:18] <joannac> msn: in the current db and the restored db?
[03:53:07] <msn> I have created <db> and a <user> with those 2 rights in that DB
[03:54:10] <msn> using mongo 2.4
[03:55:18] <msn> joannac: I am not getting your question i think
[03:57:37] <joannac> what are you unauthorised to do?
[03:57:45] <joannac> drop? insert? create?
[03:57:48] <devdvd> Hi all, quick question. Did the way capped collections work change between 2.4.5 and 3.0.6? the reason i ask is because im trying to migrate from 2.4.5 to 3.0.6 (3.0.6 is on a new server) and did a mongodump on 2.4.5. when i go to import into 3.0.6, i get this message: "error creating collection searchCountsByMember: error running create command: exception : specify size:n when capped is true". Here is my searchCountsByMember.metadata.json:
[03:57:48] <devdvd> {"options":{"create":"searchCountsByMember","capped":{"$undefined":true},"size":{"$undefined":true}},"indexes":[{"v":1,"key":{"_id":1},"ns":"foo.searchCountsByMember","name":"_id_"}]}
[03:57:58] <devdvd> Now I see the problem and I think I know how to fix it (i fixed it on another one like this by defining a size). Is that the proper way to go about it or am i missing something?
[03:58:53] <devdvd> Update to this as i posted it earlier: the undefined size and capped variables also exist if i try to do a mongodump with the 2.4 version (i used the 3.0 version initially)
[03:59:13] <msn> joannac: doing a restore and checking
[04:00:55] <msn> this error occurs on start of restore https://paste.debian.net/313879/
[04:02:39] <joannac> msn: check the logs
[04:14:14] <msn> ah well ran into another problem before i could fix first one
[04:14:31] <msn> how do i ROLLBACK after a master returns to a cluster
[04:15:04] <msn> i logged in and am currently on the prompt <cluster>:ROLLBACK>
[04:19:32] <joannac> check the logs. that indicates a rollback occurred
[04:19:48] <joannac> the logs should tell you if rollback is progressing or not
[04:20:58] <msn> joannac: sorry new to mongo just started messing with it
[04:21:55] <msn> joannac: says cannot recover :(
[04:23:50] <joannac> why?
[04:27:57] <msn> this is what i get https://paste.debian.net/313880/
[04:29:09] <joannac> what version is this?
[04:30:53] <msn> 2.4
[04:30:56] <msn> mongo 2.4
[04:31:18] <joannac> 2.4.what?
[04:31:37] <joannac> can you post db.version() and mongod --version ?
[04:32:50] <msn> 2.4.10 let me get that inf
[04:34:56] <joannac> those log lines don't follow the logs in mongodb...
[04:34:57] <msn> TokuMX mongod server v2.0.2-mongodb-2.4.10, using TokuKV rev unknown
[04:35:04] <joannac> ...yeah
[04:35:08] <joannac> you're not running mongodb
[04:35:12] <joannac> go hassle tokumx
[04:35:45] <joannac> I have no idea what they changed
[04:36:09] <msn> k
[05:01:22] <svm_invictvs> Hey
[05:01:59] <svm_invictvs> So I'm reading the BSON spec here and I'm a little confused because it tells me that the first four bytes of a document make up the document's length
[05:02:28] <svm_invictvs> I've made a BSON document using the bson4jackson API and the first byte is 0x1 and then four bytes representing the length. Is that a problem with my bson generator?
[05:08:11] <svm_invictvs> Or is BSON not on topic here?
[05:09:06] <Boomtime> @svm_invictvs: it's an OK topic
[05:09:32] <Boomtime> my reading of the spec suggests it should always start with the document length too, though i haven't actually played with BSON much
[05:29:32] <svm_invictvs> Boomtime: Yeah, that's how I read it.
[05:34:38] <Boomtime> svm_invictvs: does it end with a nul byte? (0x00)
[05:35:19] <Boomtime> i wonder if what you are looking at is an isolated e_list, whose first element is a double, rather than a properly formed document
[05:39:06] <svm_invictvs> Boomtime: I found my problem :-
[05:39:27] <svm_invictvs> Boomtime: Basically I was adding a byte at the beginning by accident.
[05:42:57] <svm_invictvs> Boomtime: But what I do find interesting is that each document, embedded or not, needs to be preceded with the length
[05:54:45] <svm_invictvs> Boomtime: Yeah, I realized I was writing a byte to the stream to indicate what type the document was and then I forgot to skip over that.
[05:55:02] <svm_invictvs> Boomtime: So I'm writing something that actually puts the BSON document in an envelope before sending it
[05:55:08] <Boomtime> ah, welp, that would be a you problem :p
[05:55:22] <svm_invictvs> Well, I just realized the document length is not used all the time
[05:55:26] <Boomtime> right, i take it it's working for you?
[05:55:27] <svm_invictvs> And that it's slow to serialize
[05:55:34] <svm_invictvs> Well not working, but I found the issue
[05:55:42] <Boomtime> heh, you are moving again...
[05:55:49] <Boomtime> that is often the best we can hope for
[05:56:54] <svm_invictvs> Yeah
[05:57:12] <svm_invictvs> I may as well make my protocol serialization-agnostic at this point.
[05:57:30] <svm_invictvs> I maybe want to sniff it in Wireshark and see it in JSON format for debugging, but turn on BSON for a more compact format.
[05:57:44] <svm_invictvs> Boomtime: I'm using BSON as a custom serialization format for a little project I'm writing.
[06:17:45] <Boomtime> svm_invictvs: cool, bson is pretty nice to work with, all the simplicity of JSON, but traversable
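The framing the two were discussing can be sanity-checked without any BSON library: per the spec, a document starts with a 4-byte little-endian int32 giving the total size (including those 4 bytes and the final terminator) and ends with a 0x00 byte. A minimal sketch in plain Node, using the 5-byte encoding of the empty document:

```javascript
// Check BSON framing: leading int32 (little-endian) total length,
// trailing 0x00 terminator.
function checkBsonFraming(buf) {
  const length = buf.readInt32LE(0);
  return length === buf.length && buf[buf.length - 1] === 0x00;
}

// The empty document {} encodes to 5 bytes: length = 5, then the terminator.
const empty = Buffer.from([0x05, 0x00, 0x00, 0x00, 0x00]);
console.log(checkBsonFraming(empty)); // true

// An extra type byte prepended (the bug svm_invictvs found) breaks framing.
const prefixed = Buffer.concat([Buffer.from([0x01]), empty]);
console.log(checkBsonFraming(prefixed)); // false
```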
[06:36:53] <m3t4lukas> hey guys
[06:37:46] <m3t4lukas> What is the best way in java to retrieve an Array as ArrayList from org.bson.Document?
[07:16:28] <fewknow> m3t4lukas: not familiar with java, but i am assuming you can issue a query to mongo and get back the entire json.....if you just want an array you can project in the query
[07:19:29] <m3t4lukas> That's what I've done. That returns the org.bson.Document. Now I want to put the data I retrieved to use. When I do ArrayList<Float> = doc.get("someFloatArray", ArrayList.class); I get a warning: "Type safety: The expression of type ArrayList needs unchecked conversion to conform to ArrayList<Float>"
[07:20:58] <m3t4lukas> and http://mongodb.github.io/mongo-java-driver/3.0/bson/documents/ does not say how to actually use those things. It describes how to put data in, but not how to get it out. A database only makes sense if you can work with retrieved data ;)
[09:55:21] <ragezor> Hello guys I'm having problems searching for regex in mongo shell
[09:55:30] <ragezor> db.coll.find({'field': {$regex: /^{/}})
[09:55:58] <ragezor> as you can see i'm searching for { which confuses mongo shell
[09:56:25] <ragezor> how can I escape this character?
[10:01:21] <coudenysj> ragezor: use a backslash?
[10:02:18] <ragezor> it does not help
[10:02:59] <ragezor> I am not escaping special regex character, I need to escape special mongo shell character
[10:03:24] <ragezor> if i wrap regex in string, regex does not work anymore
[10:07:25] <ragezor> ok, found a solution
[10:07:35] <ragezor> db.coll.find({'field': {$regex: "^{"}})
[10:07:47] <ragezor> use quotes instead of / /
[10:09:04] <kali> ragezor: db.coll.find({'field': /^{/ }) ?
[10:09:33] <kali> ragezor: or db.coll.find({'field': /^\{/ })
[10:10:34] <ragezor> kali: you can paste your examples in mongo shell and see they are not working
[10:11:52] <kali> ragezor: indeed, the shell itself gets confused
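The workaround ragezor found (passing the pattern as a string to $regex instead of as a `/ /` literal) can be checked in plain JavaScript, where the regex engine behaves the same way; it's only the mongo shell's literal parser that chokes on the unescaped `{`:

```javascript
// ragezor's workaround: {$regex: "^{"} passes the pattern as a string.
// In JavaScript both the string form and the escaped literal match a
// value that starts with '{'.
const fromString = new RegExp("^{"); // equivalent of {$regex: "^{"}
const escapedLiteral = /^\{/;        // kali's escaped-literal variant

console.log(fromString.test("{some json}"));     // true
console.log(escapedLiteral.test("{some json}")); // true
console.log(fromString.test("no brace"));        // false
```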
[10:16:35] <skvedula> ..
[11:01:58] <oskie> hello, I read on http://docs.mongodb.org/manual/faq/replica-sets/#how-long-does-replica-set-failover-take that failover takes 10-30s (detection) + 10-30s (election). In the case of a manual stepdown, I guess you will only get the 10-30s delay for election?
[13:16:38] <m3t4lukas> hi, how do I get an Array out of org.bson.Document in java?
[13:18:13] <cheeser> an array of what?
[13:31:31] <m3t4lukas> cheeser: an array of Float
[13:31:58] <m3t4lukas> When I do ArrayList<Float> = doc.get("someFloatArray", ArrayList.class); I get a warning: "Type safety: The expression of type ArrayList needs unchecked conversion to conform to ArrayList<Float>"
[13:35:10] <StephenLynx> shouldn't it be ArrayList<Float> var = doc.get...?
[13:35:20] <StephenLynx> where var is your variable name?
[13:35:57] <m3t4lukas> StephenLynx: yep, just forgot it here in IRC. No syntax highlighting :P
[13:36:11] <m3t4lukas> the variable has a name in the code
[13:36:30] <StephenLynx> did you try casting the result of doc.get to ArrayList<Float>?
[13:36:49] <StephenLynx> (ArrayList<Float>) doc.get...
[13:36:54] <m3t4lukas> java does not allow to cast that way
[13:37:00] <StephenLynx> :v
[13:37:05] <StephenLynx> its been a while since I used java
[13:37:52] <m3t4lukas> oh, it does, but the warning is still the same
[13:38:10] <StephenLynx> welp
[13:40:10] <m3t4lukas> and the exception when I try to use whatever is in the ArrayList is also the same: java.lang.ClassCastException: java.lang.Double cannot be cast to java.lang.Float
[13:40:38] <StephenLynx> are you sure you should be using an arraylist of floats?
[13:42:29] <m3t4lukas> :o it is an array of double...
[13:42:35] <m3t4lukas> in the database
[13:42:43] <m3t4lukas> but I stored a float
[13:49:21] <cheeser> m3t4lukas: that's just a generics warning. you should be fine with that.
[13:51:05] <m3t4lukas> cheeser: okay, I am, however, surprised that when I append an ArrayList<Float> onto a Document and store it into the database that it is being stored as a double
[13:51:37] <cheeser> bson only has the one floating point type.
[13:52:04] <cheeser> oh, and i just saw your CCE line. yeah. for that you'll need a List<Double>
[13:56:32] <m3t4lukas> cheeser: okay. maybe that's worth mentioning in the docs :)
[13:57:05] <m3t4lukas> cheeser: never mind, I just saw that float is not even listed
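The root cause cheeser points out — BSON has only one floating-point type, the 8-byte IEEE 754 double — can be illustrated in plain JavaScript, where `Math.fround` simulates the 32-bit float precision that a `Float` value carries before it is widened and stored:

```javascript
// BSON stores every floating-point value as a 64-bit double, so a
// 32-bit float like 0.1f does not round-trip as "the same" number.
const asDouble = 0.1;               // what BSON stores and returns
const asFloat32 = Math.fround(0.1); // 0.1 rounded to float32 precision
console.log(asDouble === asFloat32); // false: float32 0.1 widens inexactly
```

This is why m3t4lukas's read side must expect doubles (in Java, a List&lt;Double&gt;) even though floats were written.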
[14:22:28] <symbol> I've noticed a strong distaste for Mongoose when using Node.js but what about using Mongorito? I was checking it out earlier and it doesn't seem to get in the way.
[14:22:39] <cheeser> never heard of it.
[14:23:00] <symbol> http://mongorito.com/
[14:23:14] <symbol> Plus side is it's using mostly ES2015
[14:34:14] <m0rpho> hi there, I want to count about 1000 urls per second and then sort this data by occurrence, ideally in realtime. do you guys have any suggestions about what kind of datastructure to choose?
[14:35:59] <m0rpho> I thought about just using update with upsert:true and a counter with $inc
[14:37:18] <symbol> Without knowing the details of your project, that seems like a viable option.
[14:37:49] <m0rpho> but then i wouldn't be able to know which url occurred most of the time in say the last 60 minutes
[14:39:06] <cheeser> why not just insert them every time they occur then aggregate the last 60 minutes?
[14:42:43] <m0rpho> cheeser: thanks for your response! do you think mongodb is fast enough for that? it will be billions of rows. sorry if my question might sound stupid, im pretty new to mongodb
[14:43:17] <cheeser> should be, yeah.
[14:43:32] <cheeser> wouldn't be too hard to prototype
[14:43:51] <m0rpho> ok, i will try it out
[14:44:27] <m0rpho> do i have to set an index or something on the "link" ?
[14:45:08] <cheeser> not necessarily. only if you're querying on that field.
[14:45:26] <m0rpho> sorry, i'm really just starting to dive into mongodb.. so i can make an aggregation like a sql group query?
[14:46:13] <cheeser> http://docs.mongodb.org/manual/aggregation/
[14:46:30] <m0rpho> "Group Documents by a Field and Calculate Count" ah ;)
[14:46:36] <m0rpho> thx
[14:48:08] <m0rpho> one last question: can I combine this grouping with a date filter? i.e. take all urls inserted in the last 60 minutes, group by url, count each group, and sort by highest count?
[14:48:54] <m0rpho> and combining that with a ttl index
[14:49:10] <m0rpho> I think then I would have a perfect trending algorithm
[14:51:26] <m0rpho> and are there wildcard matches for grouping? like regex?
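A sketch of what cheeser suggests: insert one document per hit, then aggregate the last hour. The collection name `hits` and the fields `url`/`ts` are assumptions for illustration. The pipeline is shown as you'd pass it to `aggregate(...)`, and the same grouping is mirrored in plain JavaScript so it can be checked without a server:

```javascript
// Assumed document shape: { url: "...", ts: <Date> } in a "hits" collection.
const since = new Date(Date.now() - 60 * 60 * 1000);
const pipeline = [
  { $match: { ts: { $gte: since } } },           // only the last 60 minutes
  { $group: { _id: "$url", count: { $sum: 1 } } },
  { $sort: { count: -1 } },                       // most frequent first
];

// The same grouping mirrored in plain JavaScript, for illustration only:
function trending(hits, since) {
  const counts = new Map();
  for (const h of hits) {
    if (h.ts >= since) counts.set(h.url, (counts.get(h.url) || 0) + 1);
  }
  return [...counts.entries()]
    .map(([url, count]) => ({ _id: url, count }))
    .sort((a, b) => b.count - a.count);
}

const now = new Date();
const hits = [
  { url: "/a", ts: now }, { url: "/b", ts: now }, { url: "/a", ts: now },
  { url: "/c", ts: new Date(now - 2 * 60 * 60 * 1000) }, // too old, filtered
];
console.log(trending(hits, since)); // /a counted twice, /b once, /c excluded
```

A TTL index on `ts` (as m0rpho proposes) would then expire old hits automatically, keeping the collection bounded.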
[14:51:46] <Bajix> Can aggregate take the average of a variable without grouping? My projected data looks like { _id: 270, count: 24 }, { _id: 271, count: 26 } etc
[14:55:00] <Bajix> nvm, _id: null in $group ;D
[15:56:06] <saml> is there a way to have mongodb log every single query?
[15:57:29] <cheeser> http://docs.mongodb.org/manual/reference/method/db.setProfilingLevel/
[16:01:58] <saml> thanks
[16:06:20] <deathanchor> anyone know how the mms lock% is calculated?
[16:45:58] <rblourenco> Hello all. I'm looking for info on geospatial indexing on MongoDB, more specifically on R*-trees
[16:47:26] <rblourenco> Any suggestions?
[16:47:44] <cheeser> why kind of info?
[16:47:57] <cheeser> mongodb.org has docs on the geo support
[16:50:26] <rblourenco> I saw while reading the documentation that MongoDB has implemented a b-tree index. I would like to know if thermopilae has opensourced their SD-r*Tree index algorithm...
[16:51:18] <rblourenco> It was a presentation from Nicholas Knize from 2012...
[16:51:44] <rblourenco> I've tried to contact him, but still no response.
[16:53:34] <teknopaul> n00b question using nodejs driver, node-mongodb-native/2.0, anyone know where I can find docs on how to create query objects for find()? The docs at http://mongodb.github.io/node-mongodb-native/2.0/api/ all have just find({}) and I can't get anything else to work. MongoDB main site docs seem to be for some other driver.
[16:54:43] <oskie> teknopaul: check mongodb docs on find: http://docs.mongodb.org/manual/reference/method/db.collection.find/
[16:55:36] <teknopaul> is query object the same format?
[16:59:22] <oskie> teknopaul: yes
[17:00:38] <teknopaul> oskie: ok cheers
[17:02:41] <StephenLynx> yeah, any $ operator works pretty much the same in the driver as it does in the shell, teknopaul
[17:02:50] <StephenLynx> only difference I can recall is how aggregate syntax works.
[17:03:03] <StephenLynx> in the driver you have to use an array of objects, while in the terminal you use an object.
[17:08:18] <teknopaul> OK, good to know, thanx
[17:21:17] <cheeser> the shell takes an array, too. it just translates from one to the other.
[17:27:29] <StephenLynx> ah
[17:27:56] <cheeser> (i only know that because i was mucking around the other day and had the exact same questions :D )
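To make oskie's and StephenLynx's point concrete: the query document is the same plain object in the shell and the node driver. A hedged sketch (the collection and field names are hypothetical):

```javascript
// The same query document works in the shell and in the 2.0 node driver:
// Shell:  db.users.find({ age: { $gt: 21 }, name: /^a/i })
// Driver: db.collection('users').find({ age: { $gt: 21 }, name: /^a/i })
//             .toArray(function (err, docs) { /* ... */ });
const query = { age: { $gt: 21 }, name: /^a/i };
console.log(Object.keys(query)); // [ 'age', 'name' ]
```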
[17:55:00] <Bajix> Is there any way to make something like $group: { _id: null, average: { $avg: '$count' } } result in a single object, rather than an array wrapping a single object?
[17:55:13] <Bajix> w/ aggregate framework
[17:55:38] <cheeser> pastebin your output
[17:55:44] <cheeser> or at least some of it.
[17:56:19] <Bajix> [{ average: 3333 }]
[17:56:26] <Bajix> I want { average: 3333 }
[17:56:35] <cheeser> in the shell?
[17:56:44] <Bajix> well in NodeJS
[17:57:16] <Bajix> i can do it outside of the aggregate, it would just make my code messier
[17:58:00] <cheeser> the syntax i'm not 100% sure of but after your pipeline add: , cursor : {}
[17:58:09] <cheeser> https://mongodb.github.io/node-mongodb-native/driver-articles/anintroductionto1_4_and_2_6.html
[18:00:03] <StephenLynx> I don't see why is that a problem.
[18:00:24] <StephenLynx> you can see if you got stuff by checking if the results array has objects and then you just get results[0]
[18:00:49] <cheeser> though if you have a large response a cursor would be better.
[18:01:08] <Bajix> Well, my current design pattern is one in which I have a function that generates middleware from a function that returns an aggregate
[18:01:32] <Bajix> so I'd have to re-write that to add in a transform step
[18:01:51] <StephenLynx> just add a small wrapper.
[18:02:04] <StephenLynx> do that transformation on the aggregate callback
[18:02:13] <StephenLynx> and execute the function from the callback.
[18:02:16] <Bajix> yea -- I was just curious if this was something supported by aggregate
[18:02:20] <Bajix> thanks
[18:02:33] <StephenLynx> I don't think so, but I could be wrong.
[18:02:59] <cheeser> if what is supported?
[18:03:07] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.0/api/Collection.html#~resultCallback
[18:03:56] <StephenLynx> I can't see anything indicating it to return just the first document if there's only one.
[18:04:25] <cheeser> a cursor would do that kind of.
[18:04:45] <StephenLynx> how?
[18:04:54] <Bajix> that one doesn't seem obvious
[18:05:05] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.0/api/Collection.html#aggregate
[18:05:08] <cheeser> it would return a stream of documents rather an array with all of them.
[18:05:09] <Bajix> even if the batch size is 1, that's still an array
[18:05:24] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.0/api/AggregationCursor.html
[18:05:28] <cheeser> well, it'd return more than one but still
[18:05:36] <StephenLynx> yeah, it would solve a different problem.
[18:05:37] <cheeser> agg.next() done
[18:06:00] <Bajix> ahh, that's clever
[18:06:54] <StephenLynx> indeed.
[18:07:08] <Bajix> mmm not supported by mongoose though
[18:07:14] <cheeser> dafuq?
[18:07:18] <StephenLynx> dont use mongoose.
[18:07:19] <cheeser> mongoose sucks!
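The small wrapper StephenLynx suggests is a one-liner: take the array the aggregate callback delivers and hand back just the first document. A sketch (the `[{ average: 3333 }]` shape is the one from Bajix's `$group: { _id: null }` stage):

```javascript
// Unwrap a single-document aggregate result; null when the pipeline
// matched nothing.
function firstResult(results) {
  return results && results.length ? results[0] : null;
}

console.log(firstResult([{ average: 3333 }])); // { average: 3333 }
console.log(firstResult([]));                  // null
```

cheeser's cursor alternative (`agg.next()`) achieves the same thing server-side and avoids materializing a large array when the pipeline can return many documents.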
[18:07:40] <akoustik> not a big deal, but is there a way to do this with a single update(), but without $min, per the comment on line 4? https://gist.github.com/anonymous/4995c816337143092352
[18:08:24] <Bajix> what do you have against mongoose? I mean, other than the incompleteness
[18:08:51] <cheeser> akoustik: why do you have $min there?
[18:09:17] <cheeser> if you want to only create 'firstLogin' on the first upsert, use $setOnInsert
[18:09:50] <akoustik> cheeser: oh, well there ya go. yeah that's what i want. never saw $setOnInsert for some reason. thanks.
[18:10:20] <StephenLynx> it doesn't handle _id right, and is 6x slower
[18:10:27] <akoustik> cheeser: $min was just one of those "meh, it works" things. :\
[18:10:54] <cheeser> akoustik: np
[18:22:41] <akoustik> oops, ya know, i think $setOnInsert might not be exactly what i need. the document will actually be inserted without a value in the firstLogin field.
[18:23:15] <akoustik> so setOnInsert wouldn't apply if i'm understanding right. i guess i can at least test it first.
[18:46:52] <akoustik> yeah, unless there's something like "$setOnNewField" that i'm missing, looks like $min might be the way to go.
[18:47:38] <cheeser> ?
[18:49:46] <akoustik> er, i'll try to be more clear. i could probably be smarter, but right now, "users" might get inserted without a "firstLogin" field. so when the upsert from my gist runs, the document already exists, so $setOnInsert won't apply.
[18:50:17] <cheeser> why would you insert those docs without setting firstLogin?
[18:51:47] <akoustik> cheeser: i'm creating "users" with identifying information, before they log in.
[18:52:15] <cheeser> ah
[18:52:33] <cheeser> you'll have to do two updates, then.
[18:52:49] <akoustik> ok, that's what i was starting to think.
[18:52:56] <cheeser> first matching against { firstLogin : { $exists : false }}
[18:53:28] <cheeser> findAndModify can return the document matched either pre/post update. based on that, you could update lastLogin
[18:53:29] <akoustik> i could have separate collections for users and "user events", i guess.
[18:53:46] <akoustik> good point.
[18:54:02] <cheeser> or go the other way: $set lastLogin and return the modified doc. if firstLogin is missing, do a $set on that field with the same date.
[18:54:36] <akoustik> i'll think about that. but for now, do you see any actual logical error with the way i'm using $min, other than that the comparison will always have the same result?
[18:55:00] <cheeser> i have no idea why $min is there or why it only works with it present.
[18:56:03] <akoustik> haha, well maybe it's a bad hack, but it works because the first value that gets set there is always in the past relative to `new Date()`
[18:57:17] <akoustik> yeah, ya know, your suggestions are definitely easy enough, i'll just change it.
[18:57:19] <cheeser> personally, i'd go with my 2nd fAM suggestion above
[18:57:46] <cheeser> it'd be 2 operations on the first login but only one after
[18:57:55] <akoustik> sometimes i'm lazy enough i would rather defend my bad decisions than fix them.
[18:58:02] <dddh> omg
[18:58:14] <akoustik> that's totally true though. thanks cheeser
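cheeser's second suggestion, simulated in plain JavaScript: always `$set` lastLogin, and backfill firstLogin only when the returned document shows it was missing. The real version would be a findAndModify/findOneAndUpdate against the users collection; the field names follow the discussion, the user shapes are hypothetical:

```javascript
// Simulate: $set lastLogin every time; if the pre-update document had no
// firstLogin, issue a second $set for it with the same date.
function recordLogin(user, now) {
  user.lastLogin = now;            // first operation: always runs
  if (user.firstLogin === undefined) {
    user.firstLogin = now;         // second operation: first login only
  }
  return user;
}

const now = new Date("2015-09-30T18:00:00Z");
const newUser = recordLogin({ name: "alice" }, now);
console.log(newUser.firstLogin === now); // true: backfilled on first login

const returning = recordLogin({ name: "bob", firstLogin: new Date(0) }, now);
console.log(returning.firstLogin.getTime()); // 0: existing value untouched
```

As cheeser notes, this costs two operations on the first login but only one on every login after that.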
[18:58:28] <dddh> what about C100DBA?
[19:00:13] <dddh> who tried professional mongo exams?
[19:00:21] <cheeser> not me
[19:00:41] <akoustik> clearly, i have. :] (jk)
[19:01:23] <dddh> "Since you are a top performer, you are eligible for a 50% discount ($75) on the next exam."
[19:01:27] <dddh> >_>
[19:03:04] <dddh> I thought I should pass M202 because I passed m101p and m102
[19:50:44] <psuriset_> ping
[19:51:05] <psuriset_> I am trying to run mongod with numa interleave on centos 7
[19:52:56] <psuriset_> looks like interleave happens as per /proc/pid/numa_maps
[19:53:09] <psuriset_> But i fail to start mongod service
[19:53:48] <psuriset_> i guess /etc/init.d/mongod need to be updated.
[19:54:21] <psuriset_> Any one around?
[19:55:31] <psuriset_> akoustik, ^