[00:54:31] <jayjo> I'm trying to convert all of my epoch timestamps to isodate in my mongodb database... I have a script like this: https://bpaste.net/show/acb2b8b53adf
[00:54:54] <jayjo> Can I just paste these into the mongo shell, or is there another way to launch a javascript script?
[00:55:32] <jayjo> oops there's an error in there... betterDateStr is now epoch... other than that I think it should work
[01:29:12] <jayjo> Something is wrong in this script, because it just finished executing and not a single document has been altered
[01:30:01] <jayjo> any ideas? It did run for a while, though. Just no output and querying the database now is the same
[02:51:32] <j-robert> hi all, I've been at this for a couple of hours now. finally decided I'm going to throw in the towel, and ask for a bit of direction on this.
[02:51:41] <j-robert> is there any place that I can find some help?
[07:29:52] <fatninja> let's take this example: I have users and posts collections. Posts documents have a "createdAt"
[07:30:11] <fatninja> is there a way to retrieve 5 latest posts for each user in one query ? :D
[07:30:37] <fatninja> meanwhile, I'm studying $group and $aggregate and see if there's a way, but if someone can point me in the right direction it would be great.
[07:54:14] <fatninja> question now is, can I group data and apply sorting not for the aggregation results but actually for the elements I'm planning to group
[08:58:45] <fatninja> solution found to the q above:
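A common shape for "latest N per group" in the aggregation framework is sort, then $push into a per-user array, then $slice. This is a sketch of that pattern; the field names `userId` and `createdAt` are assumptions based on the question above:

```javascript
// Pipeline sketch for "5 latest posts per user".
// Assumes posts documents look like { userId, createdAt, ... }.
const pipeline = [
  // Newest posts first, so each group's array is already ordered.
  { $sort: { createdAt: -1 } },
  // Collect every user's posts into one array.
  { $group: { _id: "$userId", posts: { $push: "$$ROOT" } } },
  // Keep only the first (i.e. newest) five per user.
  { $project: { posts: { $slice: ["$posts", 5] } } }
];
// In the mongo shell: db.posts.aggregate(pipeline)
```

Note that $push of $$ROOT pulls whole documents into one array per user, so this can hit the 16MB document limit for very prolific users.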
[09:36:08] <jokke> is it possible to "floor" an ISODate to a specific portion like minute, hour, day, etc. in an aggregation stage?
[09:37:53] <jokke> i'm trying to aggregate some samples to a lower resolution. I'd need this functionality to make a proper group stage. I know i can extract the individual components of the date and that would let me group the documents. The resulting documents still should have the date field as an ISODate
[09:38:28] <jokke> i'm not sure how/if this is possible
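One way to floor a date without splitting it into components is date arithmetic: subtract the remainder of its epoch milliseconds modulo the interval. A sketch, with the field name `$ts` as an assumption:

```javascript
// Floor a Date to a fixed interval (minute, hour, day) by dropping the
// remainder of its epoch milliseconds. Plain-JS version of the idea:
function floorDate(date, intervalMs) {
  return new Date(date.getTime() - (date.getTime() % intervalMs));
}

// The same idea as an aggregation expression: $subtract two dates to get
// milliseconds since epoch, $mod to find the remainder, then subtract that
// remainder from the date. The result stays a real ISODate, so it can be
// used directly as a $group _id.
const HOUR = 1000 * 60 * 60;
const floorToHour = {
  $subtract: ["$ts", { $mod: [{ $subtract: ["$ts", new Date(0)] }, HOUR] }]
};
// e.g. { $group: { _id: floorToHour, avg: { $avg: "$value" } } }
```

This floors to UTC boundaries; timezone-aware flooring would need an offset added before the $mod.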
[11:36:11] <jayjo> I have this script: https://bpaste.net/show/acb2b8b53adf that I ran trying to adjust my epoch time stamp (a number) into an ISODate() in the database. It didn't work... it ran for a while but nothing happened. Any idea what else I need to do?
[11:48:21] <jayjo> If I'm using initializeUnorderedBulkOp(), how do I reference the object itself in the $set? Is that even possible?
[11:51:09] <jayjo> I'm looking at this example in the docs: https://docs.mongodb.com/manual/reference/method/Bulk.find.update/#example
[11:51:21] <jayjo> Do I do a .foreach() on the bulk.find()?
[12:23:10] <jayjo> I asked a SO question here about the same question in more detail : http://stackoverflow.com/questions/39616232/large-bulk-update-mongodb-foreach-document
[12:33:36] <jayjo> StephenLynx: sorry, I had referenced the question before you got here... I'm trying to figure out how to reference the document when doing an initializeUnorderedBulkOperation()
[12:34:32] <jayjo> now I have var bulk = db.siteEvents.initializeUnorderedBulkOp(); bulk.find( { "_t": { $type : 1 } } ).update( { $set: { <UNSURE HOW TO REFERENCE DOC HERE> } } ); bulk.execute();
[12:34:59] <jayjo> do I need to do a .forEach() after the find and before the update?
[12:35:43] <jayjo> Oh, I'm trying to take a number, which is seconds from the epoch and stored as a number... and convert it to an ISODate() in the database
[12:36:07] <jayjo> currently I have 110M documents
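Since $set needs a different value for every document, a single bulk.find().update() can't compute it server-side; the usual pattern is to iterate the cursor with forEach and queue one update per document. A sketch, with the field name `_t` and collection `siteEvents` taken from the question:

```javascript
// Sketch: convert an epoch-seconds field to a Date via a bulk op.
function epochToDate(seconds) {
  return new Date(seconds * 1000); // epoch seconds -> ISODate (ms precision)
}

// In the mongo shell:
// var bulk = db.siteEvents.initializeUnorderedBulkOp();
// db.siteEvents.find({ _t: { $type: 1 } }).forEach(function (doc) {
//   bulk.find({ _id: doc._id }).update({ $set: { _t: epochToDate(doc._t) } });
// });
// bulk.execute();
```

With 110M documents the queued ops should be flushed in batches rather than all at once (see below in the discussion).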
[12:39:03] <StephenLynx> so you are ok if it takes a little longer.
[12:39:11] <StephenLynx> did you get what I mentioned by asynchronous recursion?
[12:39:55] <jayjo> yea if it takes a long time it is not a big deal to me. I understand the concept, but do you mean just write a standalone program in whatever language to do this?
[12:40:17] <StephenLynx> in my system, I have a db version system.
[12:40:45] <StephenLynx> where the system checks on boot if the db version changed and runs a migration function for each version it has to upgrade to
[12:43:30] <StephenLynx> so if the db is in version 5 and the new version of the software is 7, it will run the functions to upgrade from 5 to 6 and then from 6 to 7
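The boot-time migration idea described above can be sketched as a map of upgrade functions keyed by target version, run in order from the stored version up to the current one. All names here are assumptions, not StephenLynx's actual code:

```javascript
// Run every migration between the stored db version and the current one.
const CURRENT_VERSION = 7;

const migrations = {
  6: function (db) { /* upgrade 5 -> 6, e.g. rewrite timestamp fields */ },
  7: function (db) { /* upgrade 6 -> 7 */ }
};

function migrate(db, storedVersion) {
  const applied = [];
  for (let v = storedVersion + 1; v <= CURRENT_VERSION; v++) {
    migrations[v](db);   // run each upgrade step in order
    applied.push(v);
  }
  return applied;        // which versions were applied, for logging
}
```

After a successful run, the new version number would be written back to the database so the next boot is a no-op.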
[12:46:05] <zylo4747> I am receiving the following error in the log file and it crashed the MongoDB service (2.6.x) - Fatal DBException in logOp(): 16328 document is larger than capped size
[12:53:26] <zylo4747> yeah we have it on the project list but haven't quite gotten there yet
[13:02:25] <warlock> I have been trying to wrap my head around this for hours and still haven't figured it out. I would really like if someone could help me with this. I'm trying to aggregate two collections
[13:02:46] <warlock> basically I have "companies" and "subscribers" - so, people can subscribe to companies and a company can have multiple subscribers to it
[13:03:12] <warlock> https://paste.mrfriday.com/awelubowas.coffee <- how my documents look like in each collection. How can I get all subscribers for all companies?
[13:03:42] <warlock> basically want to select all companies and list each subscriber in the result (meaning, _id and email)
[13:06:33] <warlock> the goal is to get all subscribers for each company that I can later loop through.
[13:09:45] <warlock> none of them make any sense and I'm tired of seeing all the same, nonsense examples :)
[13:10:03] <StephenLynx> and no, mongo is not designed to reference collections.
[13:10:16] <StephenLynx> that's what relational databases are for.
[13:10:18] <warlock> ok StephenLynx, what is your recommendation on this? I think I am simply thinking wrong
[13:10:39] <StephenLynx> you wouldn't get all the companies' clients at the same time.
[13:10:48] <StephenLynx> most of the time it boils down to how you query the data.
[13:11:07] <StephenLynx> get ONE company, get its clients, do a join on application code.
[13:11:25] <warlock> gotcha, then I am thinking wrong after all.
[13:11:39] <StephenLynx> then when you have to get another company, do it again.
[13:11:52] <warlock> just to verify, the structure I have, I suspect it would be fully possible to do a count on subscribers for all companies? eg. top 10 list of companies with most subscribers
[13:12:00] <warlock> this should be do-able with aggregate, perhaps?
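Yes, a count like that is doable with aggregate. A sketch, assuming (as described later in the conversation) that each company document embeds a `subscribers` array of user ids, and has a `name` field:

```javascript
// Top-10 companies by subscriber count: count the embedded array with
// $size, sort descending, limit.
const topCompanies = [
  { $project: { name: 1, count: { $size: "$subscribers" } } },
  { $sort: { count: -1 } },
  { $limit: 10 }
];
// In the mongo shell: db.companies.aggregate(topCompanies)
```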
[13:15:07] <warlock> just to get me completely on the right track as I am still not completely following...
[13:15:17] <warlock> so, the structure I have where I have put all my subscribers in an array.
[13:15:38] <warlock> say that I have the company name (name), how would I go on with getting _all_ subscribers and its information for that given company?
[13:16:25] <StephenLynx> users.find({_id:{$in:{the company's array of users that you queried before}}})
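The two-step "join in application code" pattern above can be sketched with plain arrays standing in for the results of the two queries. Collection and field names are assumptions:

```javascript
// Step 1 (shell): var company = db.companies.findOne({ name: name });
// Step 2 (shell): db.users.find({ _id: { $in: company.subscribers } });
// The app-side join those two steps perform, over in-memory data:
function subscribersFor(company, users) {
  const wanted = new Set(company.subscribers);
  return users.filter(function (u) { return wanted.has(u._id); });
}
```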
[13:20:58] <warlock> StephenLynx: Ok, now I got it.
[13:32:02] <jayjo> So I put a console.log() statement in this script and nothing is logging to the console. I'm just trying to investigate what's happening here... why won't I see any output? https://bpaste.net/show/25f64ca5b121
[13:32:23] <jayjo> I thought I would see a statement logged for each element in the database, because of the foreach()
[14:02:47] <jayjo> maybe having so many writes at once (110M) is why this doesn't work, so I tried to push the writes through after 999 iterations, but I'm still not getting the print statement to even show anything
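One detail that trips people up when batching: a bulk builder can only be executed once, so after each execute() a fresh one has to be initialized. A sketch of the batching logic, with `makeBulk`/`queue` as stand-ins for `initializeUnorderedBulkOp()` and `bulk.find(...).update(...)` (also note the old mongo shell uses `print()`, not `console.log()`):

```javascript
// Flush queued bulk ops every batchSize documents, re-initializing the
// bulk after each execute, and flush the remainder at the end.
function processInBatches(docs, makeBulk, batchSize) {
  let bulk = makeBulk();
  let pending = 0, executed = 0;
  docs.forEach(function (doc) {
    bulk.queue(doc);              // stand-in for bulk.find(...).update(...)
    pending++;
    if (pending === batchSize) {
      bulk.execute();
      executed++;
      bulk = makeBulk();          // a bulk is single-use: make a fresh one
      pending = 0;
    }
  });
  if (pending > 0) { bulk.execute(); executed++; }
  return executed;                // number of execute() calls made
}
```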
[15:58:03] <chris613> silly question, but when adding auth to a mongo URI it's mongodb://user:pass@1.2.3.4:27017,9.8.7.6:27017/ I don't need to add the user:pass@ to each replica's address, right?
[16:30:00] <zylo4747> do you know if it's necessary to insert the last oplog record if you resize the oplog using these instructions or can i just skip that part? https://docs.mongodb.com/v2.6/tutorial/change-oplog-size/
[17:46:24] <kuku1g> hi, does someone know how to change the log level of the mongodb java driver? i've followed several stackoverflow answers and the like but nothing seems to work for me. Every insert creates 2 log outputs (debug level). I want it to stop printing out these messages:
[18:11:19] <lauriereeves> The mongodb-org-server rpm for Redhat 7 crashes Anaconda because the package calls groupadd in its %pre scriptlet, which fails if groupadd hasn't been installed yet.
[18:11:30] <lauriereeves> A simple workaround would be to add "Requires(pre): /usr/sbin/groupadd" to the spec file for that rpm
[18:11:46] <lauriereeves> How could I get that change made?
[18:32:59] <Doyle> Hey. These entries show up in a config server log. Rsync the /data dir from one of the others? [conn54] Assertion: 17322:write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 144 storageSize:5242880 @ 28575 and STORAGE [conn69] couldn't make room for record len: 156 in capped ns local.oplog.$main
[19:37:55] <AlmightyOatmeal> if i want to search for documents that have a field with a list of acceptable values, would i use something like: {"field": {$in: ["val1", "val2", "val3"]}} or is there a better way?
[19:42:53] <Doyle> Anyone have any ideas about these logs on a config server? [conn54] Assertion: 17322:write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 144 storageSize:5242880 @ 28575 and STORAGE [conn69] couldn't make room for record len: 156 in capped ns local.oplog.$main
[19:43:28] <Doyle> The mongod service starts for a few seconds. In that time db.oplog.$main.count() shows a number, as expected, but find() returns nothing, and findOne() issues 'null'
[19:52:55] <speedio> i have two schemes, user and roles.. when i create a reference to the roles scheme, do i need to create a new document for each reference, or can i just have two documents(client,admin) in roles scheme and then reference to their id?
[20:00:05] <synthmeat> speedio: what does a "role" contain? you could possibly consider just setting the role directly in user, with no reference to another document
[20:02:40] <nbettenh> We've gone from 3 config servers to 1, the other two are reported as being out of sync. What is the method for getting all three back in sync?
[20:02:47] <speedio> synthmeat: i have role directly in user now, but i need to have it separated, so i can create new roles with a description..
[20:03:52] <synthmeat> speedio: yeah, you don't need a unique role document per reference in user
[20:07:36] <Doyle> nbettenh, ew. Get on them and do use config, db.runCommand("dbhash"). Compare the three results. Hopefully two match.
[20:07:50] <speedio> synthmeat: ok, im not sure how to reference to it when i create a user, so now i have this in my schema(mongoose) "roles: [{ type: Schema.Types.ObjectId, ref: 'Roles' }]" , can i just insert the object id into the roles value when creating a user? and then do a populate query or what its called?
[20:08:17] <Doyle> nbettenh, before doing anything beyond that, backup your dbpath from all 3 config servers.
[20:10:05] <Doyle> Then look at this doc. https://docs.mongodb.com/v3.0/tutorial/replace-config-server/ (for whatever version you're on.)
[20:10:57] <synthmeat> speedio: to be perfectly honest, i regret using mongoose in the first place, and i'd just use it as [{type: String, _id: false}]
[20:13:49] <speedio> synthmeat: ok, so then i can reference by string instead of id?
[20:16:10] <synthmeat> speedio: possibly you'd still want [{type:ObjectId, _id: false}], if you don't want to end up needing to convert ObjectId from roles to .toString() so comparison doesn't fail with string
[21:17:14] <surge_> If I ctrl+c a compound index build, what is the behavior of mongo supposed to be?
[21:17:27] <surge_> I see the compound index when i run `getIndexes()` but I’m not sure if it ran to completion or not.
[21:17:41] <surge_> I did a `currentOp()` and saw that it got to at least 44% but didn’t check on it until later and then I didn’t see the process anymore, so i’m not sure if it finished or was finally killed.
[21:35:12] <cheeser> i believe the index build will continue. ctrl-c just kills the shell, which is a separate process that's simply waiting on a response from the server
[21:55:58] <Auz> does anyone have experience migrating replica sets up from very old versions (2.2) to the 3 branch? The docs say that it should be compatible and also to upgrade to 2.4, then 2.6 then 3.0 then 3.2...
[22:00:38] <ankurk> How can I check the number of files opened by Mongo while indexing?
[22:03:31] <Auz> cheeser: which is correct, that its compatible or that I have to upgrade via the other versions?
[22:04:59] <Auz> if I join a 3.2 mongo node to a replica set running older versions, what is the failure condition? Will the new secondary give errors? Will it break anything else?
[22:30:29] <cheeser> Auz: you could probably add a new replica set member at 3.2 and just do a rolling upgrade.
[22:30:49] <cheeser> once that new node is done sync'ing, decommission one of the old ones
[22:30:55] <cheeser> repeat until you're all done.
[22:33:40] <cheeser> you have some options for speeding that up
[22:34:18] <cheeser> of course, you could upgrade a secondary through the different versions. that'd dramatically reduce your down time.