PMXBOT Log file Viewer


#mongodb logs for Wednesday the 13th of August, 2014

[00:00:58] <elux> i cant wait to move this db to postgres
[00:01:39] <cheeser> no time like the present!
[00:04:04] <elux> got that right
[00:04:38] <cheeser> all better now
[00:05:39] <Boomtime> ah.. i see, he didn't get an answer to his issue inside of 6 minutes, fair enough
[00:08:29] <joannac> 6 minutes is pretty generous
[00:08:36] <joannac> I have users who quit qithin 60 seconds
[00:08:40] <joannac> *within
[00:08:54] <cheeser> you're saying they're ... gone in 60 seconds?
[00:08:59] <cheeser> *badump*
[00:09:19] <joannac> cheeser--
[00:09:33] <cheeser> you know you giggled.
[00:59:25] <cygnet> my team is thinking of using mongo for a distributed system of databases
[01:00:10] <cygnet> where each hospital using our system will have their own mongod instance in their on-site servers
[01:00:33] <cygnet> how expensive is it to create a database in mongo?
[01:00:44] <partycoder> expensive in terms of what
[01:05:35] <cygnet> how computationally expensive is it?
[01:06:16] <Boomtime> not, but it pre-allocates a bunch of hard-drive space
[01:06:36] <cygnet> would the database connection be a very slow setup? would it take up many resources on the client/server side?
[01:07:13] <cygnet> so the main concern is the hard drive space availability?
[01:07:15] <cheeser> you *can* pre-allocate. you don't have to.
[01:07:27] <Boomtime> it pre-allocates by default
[01:07:45] <Boomtime> but yes, you can make it not do that
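For the MMAPv1 storage engine discussed here (2.4/2.6 era), preallocation could be dialed down in the classic ini-style mongod.conf; a sketch, assuming the old-format config file (these options do not apply to later storage engines):

```ini
# mongod.conf (2.x ini style) -- reduce up-front disk usage per database
noprealloc = true    # don't preallocate data files (costs some write latency)
smallfiles = true    # start data files at 16MB instead of 64MB, cap at 512MB
```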
[01:08:18] <cheeser> multitenancy is easier with 1 db and then user-specific tags on the data.
[01:09:00] <cygnet> so what exactly would be the difference between creating many shards and creating a separate db per location?
[01:09:33] <cygnet> would that mean that information pertaining to that location is not necessarily stored at said location?
[01:10:26] <cheeser> if they're not going to be on shared infrastructure then clearly 1 db per "site" is better
[01:10:27] <cygnet> ex: all of a hospital's patient information, would it be saved only at that hospital with sharding, or spread in most convenient manner across all shards?
[01:10:48] <joannac> depends how you set it up
[01:11:01] <joannac> I feel like we've gone straight to implementation details
[01:11:16] <joannac> what exactly is the goal?
[01:12:35] <cygnet> to have a network of institutions, working on a multi-instance network, their data being saved only client-side, on their own instance
[01:12:46] <Boomtime> would you want to access patient data from any hospital?
[01:13:04] <Boomtime> i.e. patients from other hospitals
[01:13:20] <cygnet> no
[01:15:45] <Boomtime> ok, so separate instances is looking like a better option.. but there are more considerations
[01:16:05] <Boomtime> have you thought about data resiliency? do you want patient information to be copied real-time to a remote location?
[01:19:03] <Boomtime> sharding is unlikely to be useful in your case. you would only use it if you wanted a cohesively accessible data-set that spanned all hospitals in the network. since you don't want that, you're probably better off with simpler deployments
[01:19:16] <cygnet> not real-time, because of the issue of reliable connectivity in some hospitals. During offline periods, data is stored locally, then transferred to the hospital's instance when connection is reestablished
[01:21:48] <Boomtime> where do you see "the hospital's instance" as being hosted?
[01:22:07] <Boomtime> where do the data bearing machines reside?
[01:24:26] <cygnet> The hospital's instance is hosted in a server located at the hospital
[01:25:04] <joannac> so what does "locally" refer to?
[01:26:14] <cygnet> right now it would be their actual devices
[01:26:45] <cygnet> but the issue we're concerned about is how creating a new instance will affect the rest of the system
[01:27:11] <joannac> well, the idea right now is that each hospital has its own instance
[01:27:12] <cygnet> would it make sense, for example, to create a database per department, or per hospital?
[01:27:28] <joannac> since there's no requirement for different hospitals to share data
[01:27:39] <joannac> do different departments need to share data?
[01:27:43] <cygnet> yes
[01:27:51] <cygnet> occasionally, but yes
[01:28:26] <cygnet> a patient may, for example, be transferred from cardiology to pediatrics,
[01:28:30] <joannac> sure
[01:28:50] <joannac> I'm not seeing the need for multiple databases
[01:29:14] <joannac> maybe a "public" database and a "private" database
[01:29:26] <joannac> public for patients, referrals, triaging,
[01:29:41] <joannac> private for the hospital's records, payroll, research,
[01:30:13] <cygnet> the software would only deal with patient records and treatment details
[01:31:03] <joannac> okay, so I don't really see a need for multiple databases
[01:32:49] <cygnet> multiple databases within a hospital, yes?
[01:34:20] <joannac> well, yes
[01:34:29] <joannac> but your instance is only within a hospital
[01:36:12] <cygnet> so the biggest concern when creating a new hospital's database is ensuring it would have enough hard drive space
[01:39:05] <joannac> HDD and to some extent, RAM
[01:39:35] <joannac> and bandwidth to have many devices connected to it at once
[01:42:20] <cygnet> is it expensive in these regards, or is the expense of RAM and bandwidth negligible?
[01:42:52] <cygnet> actually, more importantly, would mongo use these resources more than another db?
[01:59:07] <joannac> depends on the size of your database and how many clients you have
[02:01:01] <themime> using java (and console), im having a hard time getting a filter on a single day working, can i not just put a year-month-day as a string in there? do i have to compare gte le of the day i want and the later day?
[02:02:20] <cygnet> @joannac: it's highly variable; some hospitals currently have a team of ~50 doctors with ~1600 patients and will grow to ~200 doctors with ~8000 patients, some will be an entire hospital (as many as 500 doctors)
[02:19:29] <kyle_l2> Howdy, folks. I have a simple question. Are write commands sent to a secondary member of a replica automatically routed to the primary? Forgive the potential obviousness of this question; I have little experience with Mongo.
[02:28:42] <lambdta> How can I use mongoimport to import a csv file and select the columns which I want to import?
[02:31:15] <lambdta> also is there a way to specify column types?
[03:05:31] <Boomtime> @kyle_l2: "write commands sent to a secondary member of a replica ..." -> are rejected, you cannot write to a secondary
[03:16:08] <themime> anyone know why { "createdOn" : { "$gte" : "2014-08-12 04:00:00" , "$lt" : "2014-08-13 04:00:00"}} does not match date "2014-08-13 03:09:41"
[03:16:26] <themime> (in java)
[03:17:36] <themime> ug it matches in console but i get no objects from the collection in java
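themime's symptom (matches in the shell but not from the driver) is commonly a type mismatch: BSON strings and BSON dates are distinct types and never compare equal, so a string range like "2014-08-12 04:00:00" only matches documents whose createdOn is itself a string. A minimal sketch of building the day filter with real datetimes; the field name comes from the chat, and the query is shown only as a dict with its semantics mirrored in plain Python:

```python
from datetime import datetime

# Build the day filter with datetime objects, not "YYYY-MM-DD hh:mm:ss" strings.
start = datetime(2014, 8, 12, 4, 0, 0)
end = datetime(2014, 8, 13, 4, 0, 0)
query = {"createdOn": {"$gte": start, "$lt": end}}

# Mirror of the $gte/$lt semantics for a date-typed value:
def in_range(created_on, q=query):
    cond = q["createdOn"]
    return cond["$gte"] <= created_on < cond["$lt"]

print(in_range(datetime(2014, 8, 13, 3, 9, 41)))  # the value from the chat -> True
```

The same fix applies in Java: pass java.util.Date objects into the filter instead of formatted strings.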
[03:18:16] <SahanH> what would be the correct way to group by and select last doc in groups?
[03:18:28] <SahanH> for a messaging app, listing last message from every contact
[03:22:38] <joannac> SahanH: http://docs.mongodb.org/manual/reference/operator/aggregation/last/#grp._S_last
[03:22:54] <SahanH> thanks joannac
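joannac's link points at the $last accumulator. A sketch of the pipeline shape plus a pure-Python reference of what $sort + $group/$last computes; the field names (contact, ts, text) are assumptions, not from the chat:

```python
# pymongo-style pipeline (shape only; field names are hypothetical)
pipeline = [
    {"$sort": {"ts": 1}},
    {"$group": {"_id": "$contact", "lastMessage": {"$last": "$text"}}},
]

# Pure-Python reference of the same semantics:
def last_per_contact(messages):
    out = {}
    for m in sorted(messages, key=lambda m: m["ts"]):
        out[m["contact"]] = m["text"]  # later messages overwrite earlier ones
    return out

msgs = [
    {"contact": "alice", "ts": 1, "text": "hi"},
    {"contact": "bob", "ts": 2, "text": "yo"},
    {"contact": "alice", "ts": 3, "text": "bye"},
]
print(last_per_contact(msgs))  # {'alice': 'bye', 'bob': 'yo'}
```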
[03:42:12] <gabbles> what are the implications of having one database per user on a large system?
[03:42:23] <gabbles> is that against the principles on mongo?
[03:52:20] <themime> gabbles: new to mongo but in general for dbs it depends on what data youre storing and why
[04:04:24] <Boomtime> @gabbels: can you give an index of how users you expect?
[04:04:40] <Boomtime> index? .. bleh.. indication
[04:05:03] <Boomtime> *sigh*, i'll start over
[04:05:15] <Boomtime> @gabbles: can you give an indication of how many users you expect?
[07:02:48] <ezonno> Hi, I am searching for a solution to a problem. I have this data https://gist.github.com/ezonno/11ee956df7705dc2588a I want to make the documents containing the gpsPosition subdocuments of the documents containing the obdEot value, but the ts (timestamp) of the subdocument should be between the parent ts - duration
[07:03:36] <ezonno> Can I achieve this with mapreduce or will it also be possible to do this via the aggregation framework?
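The gist isn't quoted in the log, so the exact shape is unknown; but whatever the mechanism (mapReduce or aggregation), the selection rule ezonno describes looks like a time-window join. A client-side sketch under assumed field names (ts, duration, gpsPositions are guesses):

```python
def attach_positions(events, positions):
    """For each event (assumed fields: ts, duration), collect the gpsPosition
    readings whose ts lies in [event.ts - duration, event.ts]."""
    out = []
    for ev in events:
        lo, hi = ev["ts"] - ev["duration"], ev["ts"]
        out.append(dict(ev, gpsPositions=[p for p in positions if lo <= p["ts"] <= hi]))
    return out

events = [{"_id": 1, "ts": 100, "duration": 30, "obdEot": True}]
positions = [{"ts": 75, "lat": 1.0}, {"ts": 90, "lat": 2.0}, {"ts": 101, "lat": 3.0}]
print(attach_positions(events, positions))  # keeps ts 75 and 90, drops 101
```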
[08:35:44] <jillesme> Hey guys, is there a difference between name: { type: String } and name: String?
[08:59:27] <rspijker> jillesme: there is…
[08:59:35] <jillesme> What is it? rspijker
[09:00:17] <rspijker> in the first case, name is a field containing a document, in the second case name is a field containing a value
[09:00:27] <rspijker> in what context is this exactly?
[09:00:35] <rspijker> because the syntax isn’t entirely correct
[09:14:55] <jillesme> rspijker: well if you look at this article http://blog.modulus.io/getting-started-with-mongoose
[09:15:03] <jillesme> and check movieScheme
[09:15:06] <jillesme> schema*
[09:15:34] <jillesme> he's using name: { type: String } and rating: String
[09:15:42] <jillesme> j #ghost
[09:15:47] <rspijker> o, it’s mongoose
[09:15:51] <jillesme> Yeah!
[09:16:20] <rspijker> I don’t know anything about mongoose
[09:41:57] <beebeeep> hello folks, i have some question about how mongodb stores sharding metadata, here is my code http://pastie.org/9469498
[09:44:57] <beebeeep> can anybody explain what is correct way to find out the chunk which stores particular document?
[09:47:12] <BurtyB> beebeeep, assuming you didn't typo your server names/shards (like I did a while ago).. you can end up with orphan data if a migration fails. iirc there's some javascript to help on the mongod site but last time I just wrote a script so I could check things was on both sides before deleting the orphan
[09:50:21] <beebeeep> BurtyB: looks like you are absolutely right http://pastie.org/9469553 :(
[09:51:19] <BurtyB> beebeeep, it's a pita but on the up side the data wasn't lost :)
[09:51:33] <beebeeep> and looks like i recall the script you mean
[09:57:44] <rspijker> beebeeep: 2.6 has a command for it iirc
[09:57:50] <rspijker> cleanupOrphaned or something like that?
[09:59:27] <beebeeep> rspijker: i have 2.4.9 and i'm terrified by thoughts that someday i will have to update it :)
[10:00:43] <rspijker> I’m on 2.4.10
[10:00:47] <rspijker> I know your pain
[10:01:46] <rspijker> this is probably the script that BurtyB was talking about: https://github.com/mongodb/support-tools/tree/master/orphanage
[12:37:43] <remonvv> yet -> not ever ;)
[12:41:19] <cheeser> don't be so sure. ;)
[12:42:32] <remonvv> I will rage blog if that happens.
[12:43:13] <remonvv> Which is about as effective as standing in a forest and listing my objections, but hey.
[12:44:38] <rspijker> why would you care?
[12:45:15] <rspijker> well… I can see why you would care if embedding would become an “under the hood” reference
[12:46:21] <remonvv> rspijker: Consider the impact it would have on things like the query language, the amount of development resources it would divert from what I consider higher priority issues, the fact that it would basically move MongoDB to RDBMS space.
[12:47:07] <cheeser> embedded docs will never be implicit references
[12:48:20] <rspijker> remonvv: I agree with the diverting of resources part, the rest, not so much. Would depend very heavily on development choices made
[12:48:25] <remonvv> cheeser: Was that one of those "better get used to the idea we're already on it" sort of you never knows?
[12:48:47] <cheeser> it was not :D
[12:48:59] <remonvv> cheeser: You wouldn't tell me if it was ;)
[12:52:54] <cheeser> :D
[12:55:29] <remonvv> rspijker: I recognize it's a highly subjective discussion but catering to the "I could do this in MySQL but not in MongoDB, fixplz" crowd seems like a dodgy route. I don't think there are as many valid arguments for adding proper relational support to something like MongoDB as people assume there are.
[12:57:12] <kali> i could live with join
[12:57:18] <kali> please don't add transactions :)
[12:59:41] <cheeser> why not?
[13:02:47] <kali> i'm more or less convinced it's what made the relational system collapse under its own weight, more than relations themselves
[13:04:03] <cheeser> that's orthogonal to transactions, though
[13:04:09] <kali> yes.
[13:04:24] <kali> well, relations and transactions are
[13:04:44] <remonvv> When you say "transactions" what specifically are you referring to?
[13:05:41] <kali> mmm... distributed fine/coarse-grained locking all over the place in the pretense of achieving ACIDity
[13:05:48] <remonvv> They can't really add RDBMSy transactions to MongoDB.
[13:06:11] <cheeser> distributed transactions would be unlikely in any case.
[13:06:23] <remonvv> Hm, well, they'd have to put various constraints on which entities can participate in a specific transaction or live with the fact that it's mutually exclusive with linear scalability.
[13:06:37] <remonvv> Neither are good ideas.
[13:06:45] <kali> my point
[13:06:45] <cheeser> restrict such things to single shard operations.
[13:06:57] <kali> cheeser: yeah, i would not mind that too much either :)
[13:07:03] <remonvv> cheeser: What would that add?
[13:07:08] <cheeser> because XA sucks...
[13:07:19] <cheeser> remonvv: well, in the non-sharded context, lots of awesome.
[13:07:23] <remonvv> cheeser: As in, practically speaking what could you do with that that you cannot do already with about as much effort.
[13:07:25] <kali> Derick: +1
[13:07:30] <cheeser> in the sharded version, limited awesome.
[13:07:53] <cheeser> Derick: agreed
[13:08:21] <kali> Derick: that, and other smaller optimisation in the... optimiser
[13:08:27] <Derick> oh sure
[13:08:32] <remonvv> cheeser: If you can live with non-sharded MongoDB you can live without transactions in the vast majority of cases.
[13:08:55] <cheeser> i'm not sure i agree but whatever :)
[13:09:15] <remonvv> Nobody ever does :(
[13:09:25] <kali> :)
[14:13:26] <uehtesham90> https://www.irccloud.com/pastebin/gSwvWEkB
[14:13:57] <uehtesham90> can someone tell me whats wrong with this command:
[14:14:00] <uehtesham90> mongo course --eval "var c = db.track.find({'event_type' : {'$regex' : '^/courses/progress'}});while(c.hasNext()) {printjson(c.next())}" >> progress_event.json
[14:14:12] <uehtesham90> its not running the query
[14:23:21] <BurtyB> uehtesham90, the $regex is probably being parsed as a variable inside the ""
[14:30:42] <elux> what kind of index would be appropriate for a query like: db.content.find({ $query: { site_id: ObjectId('000388030000fdf6cab527c9'), $or: [ { src_url: "http://www.inc.com/anita-newton/customer-development-the-biggest-lies-are-the-ones-we-tell-ourselves.html" }, { src_uid: "237624761" } ] }, $orderby: { _id: 1 } }).limit(1)
[14:31:27] <elux> i have indexes {site_id:1, src_url:1}, {site_id:1, src_uid:-1}, {src_url:1}, {src_uid:-1} .. and it doesnt seem to be doing any good
[14:31:38] <uehtesham90> BurtyB: is there a way to resolve that?
[14:31:47] <elux> the mongodb log is showing planSummary: IXSCAN { _id: 1 } ntoskip:0 nscanned:3619716 nscannedObjects:3619716
[14:31:55] <elux> this is mongodb 2.6.4 btw
[14:34:20] <elux> maybe this is a client issue somehow
[14:34:46] <elux> running this in the mongo cli, it looks fine.. and .explain() true shows it using the right indexes
[14:35:02] <elux> of src_uid and src_url .. obviously, since theyre there..
[14:35:14] <BurtyB> uehtesham90, I'd normally switch the quotes around (so '' outside and "" inside)
[14:35:20] <elux> and nscannedObjects of 2 or 4
[14:35:34] <odigem> hi
[14:35:41] <odigem> how to find contains?
[14:36:40] <rspijker> odigem: what?
[14:36:43] <uehtesham90> BurtyB: lol thats weird....its working now....but why is that so? is it a javascript thing?
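It's a shell thing, not a javascript thing: inside double quotes the shell substitutes $regex (an unset variable) with an empty string before mongo --eval ever runs, while single quotes keep it literal. A sketch reproducing BurtyB's diagnosis by invoking sh from Python:

```python
import subprocess

def sh(cmd):
    """Run a command through the POSIX shell and return its stdout."""
    return subprocess.run(["sh", "-c", cmd], capture_output=True, text=True).stdout

double = sh('echo "regex is $regex"')   # shell expands $regex (unset -> empty)
single = sh("echo 'regex is $regex'")   # single quotes keep it literal

print(repr(double))  # '$regex' vanished before the command saw it
print(repr(single))  # '$regex' survives intact
```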
[14:38:11] <rspijker> elux: mongo uses all plans at some point, to determine which is best… so depending on where you;re reading that exactly, it might be deceiving...
[14:38:11] <odigem> rspijker: find contains in field
[14:38:30] <rspijker> odigem: can you give me an example of what you mean?
[14:39:06] <odigem> rspijker: u are kidding?
[14:39:23] <odigem> rspijker: select ... where x like "x"
[14:39:53] <rspijker> odigem: ok, simplest is using a regex
[14:40:00] <rspijker> db.coll.find({“x”:/x/})
[14:40:12] <odigem> what?
[14:40:13] <odigem> decode error
[14:41:21] <odigem> rspijker: ?
[14:41:22] <rspijker> where are you entering it?
[14:41:32] <odigem> db.coll.find({“x”:/x/})
[14:41:36] <odigem> your message
[14:41:42] <rspijker> WHERE are you entering it?
[14:41:57] <odigem> lol i see it in chat
[14:42:11] <rspijker> you say ‘decode error’
[14:42:30] <odigem> wtf
[14:42:36] <rspijker> o, you mean you don’t understand?
[14:42:59] <odigem> O.o
[14:43:16] <odigem> fix your codepage
[14:43:29] <rspijker> ok, I’m done. best of luck
[14:45:05] <odigem> fk
[14:45:12] <odigem> its not insensitive search
[14:45:34] <rspijker> /x/i
[14:45:59] <odigem> cool
[14:46:11] <odigem> thx
[14:46:18] <rspijker> np
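The /x/i answer maps directly onto a case-insensitive regex flag; a sketch of the equivalent matching semantics in Python (in pymongo terms, the compiled pattern itself can be used as the query value, though that part is not shown running against a server here):

```python
import re

# /x/i in the mongo shell corresponds to a case-insensitive pattern.
pat = re.compile("x", re.IGNORECASE)
names = ["Xavier", "alex", "Bob"]
print([n for n in names if pat.search(n)])  # ['Xavier', 'alex']
```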
[14:54:22] <remonvv> I love the friendly ones.
[14:57:58] <Shakyj> can I give mongod an arg to locate its config file? I have it in /etc/mongod.conf but it's not picking it up
[14:59:15] <JT-EC> http://docs.mongodb.org/manual/reference/program/mongod/
[14:59:57] <Shakyj> cheers JT-EC was looking at the install docs
[15:00:44] <elux> hrmm..
[15:12:37] <dmitchell> I put in a bug report (https://jira.mongodb.org/browse/DOCS-3888) on Monday where cursor.count() was returning a non-zero number but the resultset is empty. It's miscategorized b/c it's against 2.2 and that isn't one of the options. idk if anyone will have a chance to verify it, but I thought at a min it may be useful as a "known problem" if you figure out root cause.
[15:21:10] <borkdox> I am getting an uncaughtException in my NodeJS app using mongodb 2.6.4 with 1.4.8 driver. The error originates from the mongodb driver. The error thrown doesn't contain any stacktrace or message, it's just an empty object. I had to use stackup to trace the error.... Here it is: http://pastebin.com/S0TjvkXN Any suggestions regarding this? Submit bug report?
[15:50:03] <uehtesham90> i want to look for a certain field in the database that matches one of a given list of regexes....i cannot use $in with $regex....is there another way to do it?
[15:50:21] <cheeser> $or?
[15:50:25] <uehtesham90> i am using $or with $regex, but then my query becomes huge
[15:50:42] <cheeser> as long as it's <16MB :D
[15:50:52] <uehtesham90> bcuz i am looking for like 5 different regex matches
[15:51:07] <uehtesham90> cheeser: can i do like a full text search with $in?
[15:51:34] <uehtesham90> instead of doing an or of all 5 elements i want to just a list in my pymongo code
[15:51:36] <rspijker> can’t you give regex an array?
[15:51:43] <rspijker> or have I been dreaming that?
[15:51:44] <dmitchell> what about encoding the or in the regex?
[15:51:50] <uehtesham90> can we? i dont think so
[15:51:53] <cheeser> uehtesham90: i dunno
[15:51:58] <uehtesham90> rspijker: i wish we coud
[15:52:01] <uehtesham90> *could
[15:52:59] <uehtesham90> rspijker: ur suggestion gave the following error: Can't canonicalize query: BadValue $regex has to be a string
[15:53:40] <rspijker> uehtesham90: this works for me: db.coll.find({"field":{$regex: [/^9/, /^1/]}})
[15:54:15] <rspijker> I haven’t checked if the result actually is correct or not though… :P
[15:54:38] <uehtesham90> u will most probably get this error: Can't canonicalize query: BadValue $regex has to be a string
[15:54:45] <uehtesham90> bcuz its not a string
[15:55:14] <rspijker> like I said, it works
[15:55:17] <rspijker> I get results
[15:55:26] <rspijker> just don’t know if they actually match those regexes
[15:55:28] <rspijker> let me check
[15:57:02] <rspijker> hmmm, the result is incorrect
[15:57:09] <rspijker> it does run without errors though :/
[15:57:35] <uehtesham90> really?
[15:58:33] <uehtesham90> this is my query: db.tracking.find({event_type : {$regex : ['^/courses/info$','^/courses/progress$']}}).pretty()
[15:58:38] <uehtesham90> can u have a look?
[15:59:09] <rspijker> why are they in quotes?
[15:59:51] <rspijker> o, that;s the default $regex notation
[15:59:59] <rspijker> I’m used to the / … / notation
[16:00:41] <uehtesham90> how can i rewrite it to the /../ notation bcuz i have forward slashes in my regex expression too as u can see
[16:00:52] <uehtesham90> would really appreciate ur help
[16:00:53] <rspijker> yeah, like I said, I can give it an array, but it simply doesn’t work
[16:01:03] <rspijker> it does run, but the results aren't filtered
[16:01:37] <uehtesham90> but in the /../ notation, how to i take care of forward slash in my expression?
[16:01:44] <uehtesham90> do i do '\ /'
[16:05:20] <uehtesham90> the /../ notation is working with the $in operator in find....without the regex operator
[16:05:34] <uehtesham90> i think its doing a full text search
[16:08:09] <rspijker> let me have a look
[16:08:54] <rspijker> uehtesham90: yeah, think \/ should work
[16:19:25] <uehtesham90> but with out the $regex
[16:20:12] <uehtesham90> i dont know if u have ever used pymongo, but do u know how to change a string to raw literal so that i can pass it to my in operator?
[16:20:41] <uehtesham90> as in, if i have ['test/qa', 'test/qb'], i want the raw literal of each string
[16:41:30] <dmitchell> {$regex: /^\/courses\/(info|progress)$/}}
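dmitchell's trick folds the $or of several anchored patterns into one alternation, which sidesteps the "$regex has to be a string" limitation entirely. A sketch of the same pattern in Python's re syntax (the pymongo usage in the comment is the standard way to pass a compiled pattern as a query value):

```python
import re

# One anchored alternation replaces the $or of '^/courses/info$' and '^/courses/progress$'.
pat = re.compile(r"^/courses/(info|progress)$")

for s in ["/courses/info", "/courses/progress", "/courses/about"]:
    print(s, bool(pat.match(s)))

# In pymongo, a compiled pattern can be used as the query value directly:
# db.tracking.find({"event_type": pat})
```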
[16:43:29] <Kenjin> Hello
[16:45:42] <Kenjin> Looking a this setup: http://pastie.org/private/eyc4jsmfaahegqdmzncqvq , how would I accomplish searching for a Directory by name for instance and getting back the directories together with their files and the symbolic links that point to those files?
[16:56:56] <DubLo7> How can I set things up so I can run stored procedures on Mongodb?
[16:57:10] <DubLo7> I get an error trying to run - I can search and find, but not run sprocs.
[16:57:12] <DubLo7> db.eval('addNumber(4,2)');
[16:57:12] <DubLo7> 2014-08-13T12:47:02.653-0400 { "ok" : 0, "errmsg" : "unauthorized" } at src/mongo/shell/db.js:402
[16:58:51] <uehtesham90> dmitchell: thank you so much...thats worked perfectly :D
[17:37:03] <saml> hello web scale specialists
[17:37:21] <saml> i got isGood(doc) that returns boolean
[17:37:40] <saml> i want the number of good documents per month for past 6 months
[17:37:45] <saml> how do i do that?
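Since isGood(doc) is arbitrary client code, it can't run inside an aggregation pipeline; one approach is to stream the documents and count client-side. A sketch with hypothetical fields (score and ts are stand-ins; only isGood comes from saml's question):

```python
from collections import Counter
from datetime import datetime

def is_good(doc):                     # stand-in for saml's isGood(doc)
    return doc.get("score", 0) > 0

def good_per_month(docs):
    """Count good documents per YYYY-MM bucket."""
    counts = Counter()
    for d in docs:
        if is_good(d):
            counts[d["ts"].strftime("%Y-%m")] += 1
    return dict(counts)

docs = [
    {"ts": datetime(2014, 3, 2), "score": 1},
    {"ts": datetime(2014, 3, 9), "score": 0},
    {"ts": datetime(2014, 4, 1), "score": 5},
]
print(good_per_month(docs))  # {'2014-03': 1, '2014-04': 1}
```

In production you would also add a `ts` range filter for the past 6 months to the find() that feeds this loop.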
[18:40:29] <putty174> Anyone know why I get Default constructor cannot handle exception when I try to make a new MongoClient?
[18:41:12] <cheeser> pastebin your code
[18:41:36] <putty174> It's literally one line of code:
[18:41:36] <putty174> MongoClient mongoClient = new MongoClient("localhost");
[18:41:52] <cheeser> context matters, though.
[18:42:03] <cheeser> is that a field declaration?
[18:42:45] <putty174> http://pastebin.com/N6K6WRfQ
[18:43:14] <cheeser> yeah. the MongoClient constructor throws an exception and you have to handle that.
[18:43:19] <wavded> I'm getting this error "BSONObj size: 39925696 (0x26137C0) is invalid. Size must be between 0 and 16793600(16MB) First element: type: "KEEP_MUTATIONS" when performing searches. My googling told me to repair the database. I did that, it's back. I'm running a replica set. Any ideas?
[18:43:34] <wavded> kinda annoying
[18:44:08] <cheeser> what's the actual compiler error?
[18:44:54] <putty174> thanks cheeser
[18:45:22] <cheeser> np
[18:45:51] <kali> wavded: can you reproduce it from the shell ?
[18:53:19] <starlord> Hello.
[18:54:38] <starlord> Is MongoDB appropriate when 99% of the time I'm doing queries, but occasionally (every few minutes) updating some count-keeping records or user-account records?
[18:55:10] <starlord> It's essentially for a web store that has a few products, a concept of a user account, orders, carts, and statistics about how often web-products are downloaded.
[18:56:22] <kali> starlord: i would not recommend a merchant app as a first mongodb attempt
[18:56:43] <starlord> We're currently using Datomic for this, after having moved away from MongoDB.
[18:56:58] <starlord> I'm considering moving back, but with a redesigned DB model.
[18:57:47] <kali> ha. you've already been bitten, and you want more ? :)
[18:58:19] <kali> starlord: it is feasible. but getting it right is difficult
[18:58:24] <wavded> kali: i will try
[18:59:02] <starlord> kali: How so?
[18:59:40] <starlord> kali: We moved to Datomic because I wanted to take advantage of their "for free" caching. But in reality we just need to make fewer redundant DB calls per web request.
[19:01:25] <kali> starlord: well, it depends exactly what data you want to have in mongodb. but dealing with complex "transactions" (as in buying something, marking it paid, marking it available, etc) is tricky, because mongodb does not offer atomic multi document updates
[19:02:12] <kali> starlord: so you have to build on findAndModify and the atomic operators to make it happen. it is exactly what a rdbms does behind the scenes, but in that case, it's your app that will have to do it
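kali's point is that single-document state transitions stay safe if the expected state is part of the query, so a concurrent second attempt finds nothing to modify. A toy in-memory stand-in for that compare-and-set pattern (the order/state fields are illustrative, not from the chat):

```python
def find_and_modify(collection, query, update):
    """Toy, in-memory stand-in for findAndModify: find the first matching
    doc and apply a $set update in one step; return None if nothing matched."""
    for doc in collection:
        if all(doc.get(k) == v for k, v in query.items()):
            doc.update(update["$set"])
            return doc
    return None

orders = [{"_id": 1, "state": "cart"}]

# The transition only fires while the order is still in the expected state:
first = find_and_modify(orders, {"_id": 1, "state": "cart"}, {"$set": {"state": "paid"}})
# A second, "concurrent" attempt finds nothing -- the guard state is gone:
second = find_and_modify(orders, {"_id": 1, "state": "cart"}, {"$set": {"state": "paid"}})
print(first, second)
```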
[19:02:26] <wavded> kali: i just noticed the primary switched to another member of the set that i hadn't repaired the database on. can these issues infect the whole replica set?
[19:03:00] <kali> wavded: start by probing all your replicas and see which ones show the problem
[19:03:24] <kali> wavded: i assume it is reproducible with a find() query ?
[19:05:40] <wavded> kali: yeah, basically i'm doing a "join" finding on one collection and $in by id on another collection, if the ids become too huge could that cause the issue? the id property is indexed
[19:09:49] <kali> wavded: do you have reason to suspect you could get huge ideas ?
[19:09:52] <kali> ids :)
[19:11:32] <wavded> the particular query that was being $in'd had 255925 ids
[19:11:57] <kali> aha :)
[19:12:29] <wavded> but, it seems if i repair the database it works? i'm confused
[19:12:30] <kali> can you try it half and half ?
[19:13:31] <kali> wavded: is it strictly the same query that was broken then working and then broken ? or just queries of the same species ?
[19:13:32] <wavded> that's doable on the application side, not ideal but doable. does mongodb then have a 16mb limit for "queries"?
[19:13:58] <kali> wavded: i think the 16MB limit applies to query too, yeah
[19:14:19] <wavded> although i really doubt that 255925 ids is over 16mb, but maybe
[19:16:47] <wavded> kali: that same query has broken before, but variations on the theme have broken as well
[19:20:24] <kali> unless i'm mistaken, 255k... id is 12 bytes, plus the inefficient array encoding... 1 byte for type, 6+1 for the string, 255925*(12+1+7) that's about 5MB
[19:20:35] <kali> so not 16MB, even if you need to watch this
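kali's back-of-envelope estimate as arithmetic: each ObjectId element in the $in array costs roughly 1 byte (type tag) + ~7 bytes (the array-index key string) + 12 bytes (the ObjectId itself):

```python
n_ids = 255925                 # size of wavded's $in array, from the chat
bytes_per_id = 1 + 7 + 12      # type tag + index key string + ObjectId
total = n_ids * bytes_per_id
print(total, round(total / 2**20, 1))  # ~5.1 million bytes, ~4.9 MiB
```

Roughly 5MB, so the query itself sits well under the 16MB BSON limit even at that size.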
[19:57:20] <ngoyal> in mongodb 2.4 when inserting an array of documents, if an error occurs, is there a way to determine what documents were successfully inserted as a result of that operation?
[19:57:39] <cheeser> not afaik
[19:58:59] <wavded> kali: same query after i repaired works, so donno, all have been repaired (only 3 in the set) so maybe good
[19:59:28] <ngoyal> that is what i gathered as well. and this feels obvious, but maybe im wrong. it's probably more performant to do a single bulk insert rather than issuing a lot of single inserts in parallel correct?
[20:03:18] <pratikbothra> Hey guys, how do you handle updating populated objects in mongodb? Take the use case of likes in post (likes: [{type: Schema.ObjectId,ref: 'User'}])
[20:03:18] <pratikbothra> While retrieving, I use populate and get the user name, link along with their id's. But on updates back to mongodb, I'm facing a lot of trouble as mongodb is expecting an array of id's, while I'm sending it populated objects back. What is the best way to handle this?
[20:05:13] <pratikbothra> I faced a similar problem when users have to join groups as well. The way to handle it is just not clean, and I don't want to resort to hacks at this stage. Any suggestions would be great?
[20:13:55] <joer> If I have a collection within which each document stores an array of its child documents (an array of document ids of other documents), is there a way to fetch an array of all children of a particular document. All children being children of children etc too
[20:19:00] <pratikbothra> joer, can't you just flatten if you don't care which parent the child belongs to?
[20:21:02] <joer> pratikbothra: Sorry not really familiar (pretty new to mongo), could you explain?
[20:22:11] <pratikbothra> I mean a javascript equivalent of the flatten function. Just fetch all the children (includes children of children etc) of the required document, and then flatten it with javascript.
[20:23:34] <pratikbothra> Of course, this won't work, if the child parent relationship is also required. Note: Its late, here and groggy. Ignoring me, might be a good thing too. :-)
[20:24:02] <kali> joer: you need to maintain a list of all parents of every entry
[20:24:16] <joer> Ahh I see
[20:25:24] <joer> Might work out more efficiently to be honest than querying it and constructing it every time. Thanks
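Whether joer materializes the parent lists or fetches documents level by level, the traversal itself is an ordinary graph walk over the stored child-id arrays. A sketch over an in-memory id-to-children mapping (field names are illustrative):

```python
def descendants(children, root):
    """All children, children of children, etc. of `root`, given a mapping
    from document id to its array of child ids (as stored in each document)."""
    out, stack, seen = [], list(children.get(root, [])), set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue               # guard against cycles / duplicate links
        seen.add(node)
        out.append(node)
        stack.extend(children.get(node, []))
    return out

tree = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(sorted(descendants(tree, "a")))  # ['b', 'c', 'd']
```

Against a live collection, each `children.get` would become a batched find() on the ids at the current level.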
[20:27:43] <pratikbothra> I'm too tired, and leaving for the day....Would someone take a quick peek, at my question posted above, and tell me how they handle it?
[20:29:33] <pratikbothra> Retrieving an array of populated objects is great....Saving/Updating it back is a nightmare. Mongodb wants only ids while saving....Grrrr
[20:33:58] <ngoyal> pratikbothra: i dont know of any easy solution other than to map your object array to _ids..
[20:37:06] <pratikbothra> its very dirty :-(
[20:37:06] <pratikbothra> everytime I have to do a post update
[20:37:06] <pratikbothra> I will have to convert the likes to an array of ids
[20:37:06] <pratikbothra> and then repopulate and send the data back
[20:39:47] <pratikbothra> ngoyal: On post update, I do something like this var post = req.post; which is followed with _.extend(post, req.body); ....Now imagine handling post.likes (both likes being added and people removing their likes)
[20:44:00] <ngoyal> yea i hear you
[20:46:51] <jonyfive> hello all
[20:47:11] <jonyfive> is there any way to obtain the current user's username from within .mongorc.js
[20:47:21] <jonyfive> (as in, the current linux user)
[20:58:19] <sdegutis> kali: Thanks.
[21:36:18] <daveops> jonyfive: db.runCommand({connectionStatus: 1}).authInfo.authenticatedUsers[0]['user']
[21:44:27] <jonyfive> daveops: thanks, that just results in: Cannot read property 'user' of undefined
[21:44:38] <jonyfive> is there any way to print an object in .mongorc.js
[21:44:58] <jonyfive> so i can just print the entire object of db.runCommand({connectionStatus: 1})
[21:45:32] <daveops> wrap it in print()
[21:45:38] <daveops> or printjson()
[21:45:41] <jonyfive> ahh thanks
[21:45:47] <daveops> or just log into the shell and run that command
[21:45:53] <jonyfive> also, it looks like that object is empty for me
[21:45:56] <daveops> you have to have authed first for that one-liner to work
[21:46:01] <daveops> or it'll be empty, as you're finding.
[21:46:02] <jonyfive> { "authInfo" : { "authenticatedUsers" : [ ] }, "ok" : 1 }
[21:46:11] <jonyfive> is that because i am not authenticating through mongo?
[21:46:28] <daveops> likely.
[21:46:39] <jonyfive> oh ok so i can't access my actual linux username?
[21:47:04] <jonyfive> like the output of "whoami"
[21:53:54] <ngoyal> what is the release schedule like for 2.6 patches - i.e. is there a regular schedule of updates for 2.6.5, 2.6.6, etc? or is it as needed?
[22:46:00] <daveops> jonyfive: nope
[22:59:28] <salakalaka> how can i import a csv file and ignore a column?
[23:09:46] <joannac> salakalaka: I don't think you can with mongoimport
[23:10:12] <salakalaka> aww
[23:15:25] <salakalaka> joannac: what about running a javascript function on each record before it is inserted?
[23:24:22] <joannac> salakalaka: I'm not sure what you're asking
[23:25:18] <joannac> If you want to write your own import script to do fancy things, feel free
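As joannac suggests, one practical route for salakalaka is preprocessing the CSV to drop the unwanted column before feeding it to mongoimport. A sketch using the standard csv module (the column names here are made up):

```python
import csv
import io

def drop_columns(csv_text, drop):
    """Re-emit a CSV keeping only the columns not listed in `drop`."""
    rows = csv.DictReader(io.StringIO(csv_text))
    keep = [f for f in rows.fieldnames if f not in drop]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=keep, extrasaction="ignore")
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return out.getvalue()

src = "name,age,secret\nana,30,x\nbob,25,y\n"
print(drop_columns(src, {"secret"}))
```

The filtered output can then be piped straight into mongoimport with --type csv --headerline.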