[00:10:21] <regreddit> when I inspect the object using something like node inspector, the property FEATURE_ID is surrounded by double quotes in the object explorer, but no other properties are
[00:17:29] <regreddit> that's the object explorer in node debugger -
[00:18:06] <regreddit> so if I query the collection using the node driver, and inspect one of the objects in the returned dataset, FEATURE_ID is wrapped in ""
[00:18:41] <cheeser> looks like node is treating it as a string
[00:18:42] <regreddit> and in mongo shell, nothing is usable that tries to access FEATURE_ID
[00:29:34] <Boomtime> can you try importing your small file (6 records i think?) to a new collection, but run mongoimport with --verbose and capture the output
[00:31:02] <jiffe> [repl writer worker 15] Invariant failure n < 10000 src/mongo/db/storage/mmap_v1/btree/btree_logic.cpp 2075 <== that ring a bell with anyone? two of the three members in a replica set are dying with this and won't start up again
[00:31:03] <regreddit> and will import it with --verbose
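A minimal sketch of that verbose import, with hypothetical database, collection, and file names:

    mongoimport --db test --collection PLACES_TEST --file places_small.json --verbose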
[00:32:52] <regreddit> BUT, db.PLACES.findOne() now returns one record
[00:34:07] <cheeser> and a find with the FEATURE_ID?
[00:34:42] <regreddit> if you have a public key, i'll be glad to give you ssh access to my desktop PC where I'm doing this
[00:42:26] <Boomtime> the error is specifically occurring because some invariants in an index were violated - the index is invalid, the storage is probably corrupt
[00:45:45] <Boomtime> then that is surprising - i'd suggest starting a re-sync on one of them immediately, it will vote while sync'ing and induce the other to become primary again
[00:46:11] <jiffe> I've never been able to get the internal syncing to work I've always had to rsync
[00:46:31] <jiffe> I should say I've never been able to get it to work with this database, it's rather large
[00:52:07] <cornfeedhobo> Boomtime: you have been helpful. before you are done with the day, would you ping me. this problem has me about ready to ditch mongo
[01:07:56] <Sht0> Hi! I have a collection around 200GB in size and mongo queries seem extremely slow; can anyone please tell me if this is a good method to time queries, and how can I troubleshoot further?
[01:07:56] <Sht0> time mongo scrappstore --eval 'db.collection.find({"language":"en", "app":"com.my.appp"}).forEach(printjson)'
[01:12:18] <Boomtime> @Sht0: do you have an index on 'language' and 'app'? an index like: {language:1,app:1} for example
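For reference, a sketch of creating that index and checking whether the query actually uses it, reusing Sht0's names:

    db.collection.createIndex({ language: 1, app: 1 })
    db.collection.find({ language: "en", app: "com.my.appp" }).explain("executionStats")
    // an IXSCAN stage (rather than COLLSCAN) means the index is being used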
[01:15:51] <Sht0> @Boomtime I see this set https://bpaste.net/show/791aad00e5e1, so I think it's set
[01:17:25] <Boomtime> then you're probably doing as well as you can
[01:17:48] <Boomtime> be aware the shell is intended as an admin tool - it won't be the best at running scripts
[01:18:40] <Boomtime> from the look of the outputs you got it's likely you waited mainly for the data to come back
[01:18:45] <Boomtime> how many documents match that query?
[01:19:34] <Boomtime> in a collection of 200GB, unless you have 200GB of ram it probably doesn't fit anything like a decent portion in ram, so if you ask for a lot of documents, then the server has to read from disk a lot
[01:21:23] <Sht0> the final result is around 19MB of data, 23k json lines
[01:22:20] <cheeser> i have a feeling a huge part of that runtime is printing out the documents
[01:23:01] <Boomtime> look at the times, it was idle for 90% of it
[01:23:18] <Boomtime> printing was probably the other 10% to be sure
[01:23:54] <Boomtime> was this run only once? can you try running it a couple times in succession?
[01:27:26] <tsturzl> Having the craziest problems setting up an admin user
[01:27:56] <regreddit> back from reboot and starting dinner, and still having issues with the first column of imported DB
[01:28:18] <regreddit> im going to create some collections manually and see if that makes a difference
[01:28:56] <tsturzl> So I configured SSL, and all is fine and well. Then I go in and try to `rs.initiate()` and it fails, saying I need to be admin. I try to create an admin (on the admin DB), and it fails because it's not master.
[01:29:26] <tsturzl> So I'm scratching my head. I cannot create an admin because I have no master because I have no replset, yet I cannot create one without an admin
[01:31:39] <Sht0> @cheeser I guessed the same about printing; is there any way to output to a file or something similar instead? I tried `time mongo scrappstore --eval 'db.collection.find({"language":"en", "app":"com.my.appp"}).pretty()'` and I don't know if it really does the query or not, as the total time is around 0.1s
[01:32:20] <Sht0> @Boomtime I tried running it several times, doesn't seem to improve much, getting almost same values
[01:32:32] <tsturzl> Why is it so difficult to create a replset with SSL? seems like things change, like I have to have an admin user on each machine all of a sudden.
[01:35:20] <regreddit> so, db.createCollection('foop'); db.foop.insertOne({"FEATURE_ID" : 1397658, "FEATURE_NAME" : "Ester"}); db.foop.find({FEATURE_ID:1397658}); work fine
[01:35:27] <regreddit> heres the pastebin: http://pastebin.com/vEZFSQbV
[01:35:42] <Sht0> @cheeser I'm getting `DBQuery: db.collection -> {"language":"en", "app":"com.my.appp"` as output when running it with pretty() so I think it does the query, but would like to see the actual data if possible by any easy means
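One way to bypass the shell's printer and write raw JSON to a file, sketched against Sht0's query (the output filename is arbitrary):

    time mongo scrappstore --quiet --eval '
        db.collection.find({ language: "en", app: "com.my.appp" })
            .forEach(function (d) { print(JSON.stringify(d)); })
    ' > results.json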
[01:37:16] <Boomtime> tsturzl: are you connecting remotely or running mongo on the localhost where the server is?
[01:43:18] <tsturzl> Boomtime: running on that host
[01:44:00] <tsturzl> Boomtime: I did this before and got it working, but I had to change the config to not have a replset, create an admin, then change it back
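For what it's worth, the usual way out of that chicken-and-egg is the localhost exception: connect from the member itself before any user exists, initiate the set, then create the admin. A rough sketch (certificate paths and credentials are placeholders):

    mongo --ssl --sslCAFile ca.pem --host localhost
    > rs.initiate()
    > use admin
    > db.createUser({ user: "admin", pwd: "secret",
                      roles: [ { role: "root", db: "admin" } ] })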
[01:46:19] <regreddit> i'll be glad to create an account for someone to ssh in to my pc to prove I'm an idiot
[01:52:44] <gardllok> Hey guys, sorry for the probable noob question but...I am working from two connections to the same ( or what I thought the same ) database on localhost default port. One being a direct connection via cli mongo client ( debian system, just typing "mongo" in command prompt bringing up a mongo cli ) the other is from a python script. However with the python script adding documents to a collection I cannot see
[01:52:46] <gardllok> said added documents from the cli connection. I assume I am not "committing" as SQL programming would be, but I cannot find how to do such googling. So I think I am asking the wrong question lol. Any pointers in the right direction?
[01:53:34] <gardllok> This is also my first approach to a nosql environment :/
[01:53:44] <regreddit> there's no commit, only a write concern, but case matters, and the db name case matters
[01:54:06] <regreddit> so on a default db, inserting a document should be visible in about 30 ms
[02:10:52] <regreddit> then do a db.test.find({}) from mongo shell
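A quick shell-side sanity check, since database and collection names are case-sensitive (names here are placeholders):

    use test              // must match the name the script used, including case
    show collections      // the collection should appear once a write has succeeded
    db.test.find({})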
[02:12:19] <coconut> i have an algorithm written in haskell that compares the similarity of two texts. i use it to find the most similar text among a pool of articles. now i've put everything inside mongodb - do i need to reimplement everything with the mongodb query language?
[02:13:20] <gardllok> Ack, I figured it out, you called it...didn't call the right database from python. Not sure why I couldn't see the alteration from mongo cli's show collections. but whatever, it works now....sorry to waste your time :/
[02:14:40] <gardllok> hah, I'll try to ask a better question next time.
[02:15:56] <coconut> will mongodb provide a version for mobile devices local use?
[02:21:41] <kl0neD> Hi, looking for some help. I'm trying to export some data from a table that's populated by some web forms. JSON data looks like this: "Fields" : [ { "_id" : ObjectId("000000000000000000000000"), "FieldFriendlyName" : null, "FieldName" : "email", "FieldValue" : "email@yahoo.com" }
[02:22:19] <kl0neD> Is there any way to get the FieldFriendlyName as a column header in a CSV export?
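mongoexport takes CSV headers from field paths, not from stored values, so the usual approach is to reshape with an aggregation first and then export the flat collection; a sketch with placeholder names:

    db.forms.aggregate([
        { $unwind: "$Fields" },
        { $project: { _id: 0, name: "$Fields.FieldName", value: "$Fields.FieldValue" } },
        { $out: "forms_flat" }
    ])
    // then: mongoexport --db mydb --collection forms_flat --type=csv --fields name,value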
[02:52:49] <regreddit> cheeser, I guess i'll just whip up a script to do the import for me, I'm not going to waste any more time on the issue, it's probably local to my dev box, so no reason for me to lose any brain cells over it
[02:57:12] <cheeser> regreddit: sounds like a plan :)
[05:25:03] <Waheedi> so i can do --prefix for mongodb with scons? and then scons install will get them in the right place?
[09:44:14] <Rumbles> Hi, I set up an arbiter last week, and configured it with log rotation to test I had all the config right. The logs there are rotating as I would expect, so I then took that config and applied it to another db-node at the end of the week. I've checked this morning and although the config is the same, I am getting 2 files created each day: 1 mongodb.log-20160116.gz and mongodb.log.2016-01-16T06-43-30 also created as an
[09:44:14] <Rumbles> empty file, can anyone help me understand why? Further info: http://fpaste.org/311910/
[09:45:24] <Rumbles> I can't see anything in the logs about this happening, but I'm guessing mongo is creating the extra log file as part of the -SIGUSR1 call...
[10:13:49] <Rumbles> anyone got any idea why I'm getting extra log files on log rotation? http://serverfault.com/questions/750434/mongodb-log-rotation-creates-an-extra-empty-log-file-each-time-it-runs
[12:05:51] <Stevko> Hello. I have a question. I want to use mongoexport to export results of the following into csv: “find({"lastModified" : {$gt : new Date((today = new Date()).getFullYear(), today.getMonth()-1 , 1)}})” (everything that has lastModified newer than some calculated date – in this case first day of last month). How to do it? mongoexport does not like my query.
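mongoexport's --query takes extended JSON rather than live JavaScript, so `new Date(...)` can't be evaluated inside it; the usual workaround is to compute the cutoff beforehand and pass it as $date. A sketch with placeholder db/collection/field names and an example cutoff (older mongoexport versions may want a millisecond epoch instead of an ISO string):

    mongoexport --db mydb --collection mycoll --type=csv --fields lastModified,name \
        --query '{ "lastModified": { "$gt": { "$date": "2015-12-01T00:00:00.000Z" } } }' \
        --out export.csv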
[15:08:01] <admiun> hi, does anyone here know if it’s possible to configure wiredTiger cache size in mb instead of gb in the config yaml file?
[15:12:32] <m3t4lukas> admiun: doesn't using decimal places work? Remember to use dot notation
[15:13:46] <admiun> i think i got a parsing error when i tried that. i’ll try again
[15:14:07] <admiun> i did find a workaround on the commandline that seems to work for others -> https://groups.google.com/forum/#!topic/mongodb-user/GLrp-H31YWg
[15:15:18] <m3t4lukas> admiun: remember that you probably don't want to use it that way in production
[15:15:19] <admiun> yup, got this one: ‘Error parsing option "storage.wiredTiger.engineConfig.cacheSizeGB" as int: Bad digit "." while parsing 0.24’
[15:15:43] <m3t4lukas> ah, that's because it's an int
[15:15:45] <admiun> well, don’t really have a choice, it uses too much ram now so it’s either this hack or the OOM killer ;)
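The workaround in that thread boils down to handing the cache size straight to the WiredTiger engine config string, which accepts sub-gigabyte values; a sketch (the size here is an example):

    mongod --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=256M"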
[15:16:26] <admiun> out of curiosity, why is that not suitable for production?
[15:16:35] <m3t4lukas> admiun: if 1GB of RAM is too much usage for you in production you should consider a stronger machine
[15:17:20] <m3t4lukas> admiun: the more cache wiredTiger gets the faster it can answer requests
[15:17:21] <admiun> it’s actually not production, dev and staging machines have both mongo as well as our app server on them, hence the cache size tweaking.
[15:18:05] <admiun> you got to admit it’s a bit weird to be able to only specify the cache size in whole gigabytes ;)
[15:18:29] <tantamount> When doing a projection in aggregation, is it possible to add a field only to the first element in a collection field?
[15:19:10] <m3t4lukas> admiun: if the cache size is way too small it could be that it gets way too slow. What could also cause problems is querying a collection larger than the cache size (or better: pulling data from a collection where the amount of data exceeds the cache size)
[15:19:54] <m3t4lukas> admiun: if that's the case something went wrong with your querying or indexing anyway
[15:20:25] <m3t4lukas> but the more cache you have the better. Especially with a rising number of requests
[15:20:35] <admiun> yeah that will probably not be an issue as these are development servers. and if it is i’ll just use a bigger instance type. but i’d like to first tweak the cache size and then take it from there
[15:20:59] <admiun> it’s not production loads so yeah, there are like 2 clients connected to it at all times ;)
[15:23:33] <admiun> anyway, it’s a bit weird that they chose whole gb as an integer for this field. you can’t raise it to 1.5 either, for example. but i’ll try the commandline workaround then and perhaps file a ticket
[15:23:40] <m3t4lukas> admiun: yeah it's completely okay to limit dev env.
[15:37:53] <cruisibesares> hey mongoers. I have a really old client driver that doesn't support replsets. I need to make our mongo setup able to fail over automatically while the devs work on updating the driver. I'm planning on using haproxy with health checks against the rest endpoint to determine the primary
[15:38:03] <cruisibesares> does anyone have any experience with that kinda thing?
[15:39:56] <cheeser> what driver doesn't support replSets?
[15:44:10] <someone235> Hi, is there a way to clean broken references?
[15:44:11] <Ben_> does anyone have experience with the java async driver for mongoDB?
[15:45:48] <Ben_> I don't know whether that async driver provides the whole featureset of the sync driver
[15:46:12] <Ben_> and whether I should use it or the sync variant
[15:47:25] <NotBobDole> Hi all. Mongorestore has been taking 3 days to restore a backup of 273GB that was across 9 shards to one shard. Anything I can do to make this run faster? Recommended specs, parameters, etc? We're doing the restoration on 1TB of SSD storage in AWS
[15:48:27] <NotBobDole> Doing the restoration of one shard took 30 minutes, but my scripted loop has been going for over 4 days. Shouldn't have taken but 5 hours unless I missed something
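If the restores are running serially, mongorestore (3.0+) can parallelize across and within collections; a sketch with a placeholder dump path, worth trying alongside checking disk throughput:

    mongorestore --numParallelCollections 4 --numInsertionWorkersPerCollection 8 /path/to/dump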
[15:49:16] <cheeser> Ben_: it should be complete, yes.
[15:49:32] <someone235> Hi, when I'm using db inside $where I get this error: "mongodb ReferenceError: db is not defined near"
[16:16:02] <tantamount> Is it possible to run an update query that replaces the value of a field using a value found elsewhere in the document?
[16:16:09] <Ange7> i use mongoexport → 1100 rows exported, and then mongoimport → 600 rows imported. I don't understand why, and i don't know how to debug this.
[16:18:50] <cheeser> tantamount: not really. you can do that in an aggregation pipeline, though.
[16:21:09] <tantamount> cheeser, I know, but I'm not aggregating :)
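A sketch of the aggregation route cheeser mentions: project the replacement value into the target field and write the result back with $out (collection and field names here are hypothetical, and $out replaces the whole target collection):

    db.coll.aggregate([
        { $project: { status: "$desiredStatus", otherField: 1 } },
        { $out: "coll" }
    ])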
[16:31:16] <tzahush3d> Hello, so i am writing a forum application with the following db relations: a Category contains forums, a forum contains threads, and a thread contains posts. What would be the best way of modeling that kind of system?
[17:42:37] <nawadanp> Hi guys. One of our instances has been starting for an hour. Its storage engine is WT, with snappy compression. Logs are stuck on wiredtiger_open config: create,cache_size=80G,session_max=20000,eviction=(threads_max=4),statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0)
[17:42:56] <NotBobDole> Hi all. Mongorestore has been taking 3 days to restore a backup of 273GB that was across 9 shards to one shard. Anything I can do to make this run faster? Recommended specs, parameters, etc? We're doing the restoration on 1TB of SSD storage in AWS
[17:43:03] <nawadanp> The only change is the allocated RAM for the process
[18:07:31] <regreddit> if i create a compound index of say three properties, and only query on one of those properties, is that index any less effective than an index on that property only? will it be used at all?
[18:07:59] <regreddit> should i create overlapping compound AND single property indexes to cover both cases?
[18:08:30] <cheeser> if that one field is the first in the index, you'll probably see some gain.
[18:10:01] <regreddit> so index {a:1,b:1,c:1} is the same as {b:1} when querying on b only?
[18:10:47] <regreddit> i'm trying to figure out if i'm just wasting space creating indexes on compound properties AND individual properties
[18:11:07] <regreddit> since i have a very large collection that gets queried a LOT by several parameters
[18:11:23] <regreddit> (the gnis collection i was having issues importing yesterday)
[18:11:46] <cheeser> no. the index won't be used because b isn't at the front of the index
[18:13:08] <regreddit> so, conversely, is a compound index unnecessary if i also index each property individually, and do multi-property AND single property queries?
[18:14:55] <cheeser> but you might need a b/c index if you query by those fields
[18:15:29] <cheeser> or define your index as b/c/a if you don't query against just a or a/b
[18:16:04] <regreddit> i see - the set of queries against the collection are fixed and known, so that should be easy enough
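For the record, a small sketch of the prefix rule under discussion - an index can serve queries on any leading prefix of its keys:

    db.c.createIndex({ a: 1, b: 1, c: 1 })
    db.c.find({ a: 5 })            // can use the index (prefix {a})
    db.c.find({ a: 5, b: 7 })      // can use the index (prefix {a, b})
    db.c.find({ b: 7 })            // cannot; needs {b: 1} or a b-leading compound index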
[18:34:32] <bros_> I am using the Javascript bindings. Converting an ObjectId to a string is *killing* my loop's performance. Anything I can do to just get the string?
[18:36:10] <cheeser> you're converting the ObjectId to a string ... why? (not a js user so i'm probably missing something)
[18:47:22] <bros_> cheeser: You can't compare two IDs otherwise
[18:49:58] <bros_> I'm profiling a really tight loop that is running 20k times and taking 800ms, so I wonder if this will be any better
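If the goal is just equality checks in that loop, the node driver's ObjectID exposes .equals(), which avoids the string conversion entirely; a minimal sketch with hypothetical variable names:

    // compare two ObjectIds without toString()
    if (docA._id.equals(docB._id)) {
        // ids match
    }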
[18:51:19] <cheeser> seems reasonably fast to me...
[18:51:42] <bros_> Can I ask you a question in regards to caching MongoDB results cheeser?
[18:52:23] <cheeser> sure. i might not have answer.
[18:53:30] <bros_> Do you do any caching/storing of results?
[19:15:23] <bros_> That's what I thought. I'd also be worried that writing 1mb or so in/out to cache might not be the fastest
[19:38:37] <cycliam> Hi all. How can I enable client auth without restarting my server?
[19:40:21] <cheeser> is auth enabled on the server?
[19:44:03] <cycliam> cheeser: I set it to enabled just now in the config file. I'm asking if there's a way to tell the server to request auth every time without having to restart.
[20:30:22] <mvno_subscriber> hi, i'm pretty new to mongodb. i'm having an issue where the json returned by mongodb has values wrapped in NumberInt(). for example {"stuff": NumberInt(2)}. this is really annoying since it's not valid json. how can i just show the number without the wrapper?
[20:31:31] <cheeser> in the shell? i'm not sure you can.
[20:34:04] <mvno_subscriber> can i convert it to float somehow then? i don't mind if it ends with .0
[20:35:38] <GothAlice> The shell is JS, isn't there JSON.stringify available?
[20:36:12] <GothAlice> I.e. what gets printed out in the REPL represents JavaScript objects, not JSON. You'd need to encode it as JSON to get JSON.
[20:38:38] <GothAlice> NumberInt is a special case because JavaScript doesn't do integers. (All numbers are floats, giving 53 bits of integer accuracy.)
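For reference, in the stock mongo shell an int32 comes back as a plain JS number, so serializing a document is roughly (collection name is a placeholder):

    var doc = db.stuff.findOne({}, { _id: 0 })
    print(JSON.stringify(doc))    // e.g. {"stuff":2}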
[20:38:53] <mvno_subscriber> i tried that, but the returned value still contains the NumberInt()s. I'm using MongoChef and intelliShell if that is any help
[20:39:02] <mvno_subscriber> tried using JSON.stringify(), that is
[20:39:11] <GothAlice> Ah, yeah, no, I have no idea about that shell setup.
[20:39:39] <GothAlice> The ODM/DAO I use offers .to_json() on all documents, which does The Right Thing™.
[20:40:30] <mvno_subscriber> i switched to proper shell output and it worked, no need for json.stringify even. awesome!
[20:45:55] <mvno_subscriber> i have another question if you don't mind.. i've spent so many hours on this. i have a date that is a NumberLong (epoch time). i want to convert this to a date, i.e. year, month and date (time must be truncated or 00:00 since i use it for grouping). I can't for the life of me figure out how i do this, although i assume it should be pretty basic. http://pastebin.com/MJTsYv5B.
[21:00:17] <GothAlice> mvno_subscriber: If your "date" is actually a number, then it's not a date, it's a naive UNIX timestamp. This makes it extremely difficult to work with.
[21:01:03] <GothAlice> If it were a date, https://docs.mongodb.org/manual/reference/operator/aggregation-date/ would apply.
[21:01:44] <GothAlice> (Which is what I use to get time-based groups for pre-aggregation.)
[21:03:52] <GothAlice> Luckily, for your use (grouping by dropping hours) you _might_ be able to get away with a modulus subtraction; i.e. remove the "remainder" (in seconds). (value - (value % 86400) would strip the time from the timestamps.)
[21:07:17] <mvno_subscriber> seems like new Date("$field") and getting $year, $month and $day kind of works.. think i have to shave off milliseconds though
[21:07:55] <mvno_subscriber> i tried to $concat them, but since they're ints $concat threw a fit, and according to stackoverflow i have to use $substr to convert them to a string
[21:11:04] <GothAlice> Or, since you're just dropping the entire time aspect of the date+time represented by the timestamp integer, the modulo approach will give you an integer "snapped" to midnight the morning of whatever date/time it represents.
[21:11:18] <GothAlice> No string manipulation. No object construction, and can be entirely done within a projection mongodb-side.
[21:12:07] <GothAlice> (As mentioned: value - (value % 86400))
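Spelled out as a server-side pipeline, the same modulo trick looks roughly like this (collection and field names are placeholders; timestamps assumed to be in seconds):

    db.events.aggregate([
        { $project: { day: { $subtract: [ "$ts", { $mod: [ "$ts", 86400 ] } ] } } },
        { $group: { _id: "$day", count: { $sum: 1 } } }
    ])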
[21:18:09] <GothAlice> UNIX timestamps are just the number of seconds (potentially floating point, thus microseconds) since the UNIX epoch. I _believe_ that's Thursday, January 1st 1970.
[21:18:39] <GothAlice> UNIX timestamps also explicitly state that each day is a regular number of seconds; leap seconds are added by having the same second happen twice. This makes it very math-safe.
[21:19:06] <GothAlice> Though, without a timezone, your timestamps _better_ be in UTC because otherwise leap days can screw absolutely everything up.
[21:19:17] <GothAlice> (Not to mention daylight savings.)
[21:19:42] <GothAlice> UTC has no daylight savings, unlike GMT, so UTC is always the safest way to store dates.
[21:21:15] <mvno_subscriber> yes, we use UTC all over the place in backend. but mongodb is totally new to me and really different from sql. the syntax is also quite different from what i'm used to
[21:21:35] <mvno_subscriber> for some reason, sitting here in the night it starts to sink in (i've been fighting this all day) :)
[21:21:49] <GothAlice> https://docs.mongodb.org/manual/reference/operator/aggregation/mod/#exp._S_mod < the documentation tends to be extremely good, for MongoDB.
[21:22:17] <GothAlice> Sometimes it just takes another set of eyeballs on a problem. Especially if there's sleep deprivation in play. ;)
[21:23:26] <mvno_subscriber> hmm, i tried using new Date({$subtract…}), but it got printed as ISODate("0NaN-NaN-NaNTNaN:NaN:NaNZ")
[21:24:21] <GothAlice> Uh, yeah, that's a MongoDB projection, not something you can apply to JavaScript objects.
[21:24:51] <GothAlice> In JS, it's just the simpler mathematical formulae given earlier.
[21:25:23] <GothAlice> I.e. new Date(originalDate - (originalDate % 86400))
[21:27:51] <mvno_subscriber> i'm a bit confused.. :-(
[21:28:41] <GothAlice> Once you get the results back, you're in your application which I'm assuming is JavaScript, because of the "new Date" references you've made.
[21:28:56] <GothAlice> Projection operations happen in the MongoDB server itself.
[21:29:37] <GothAlice> https://docs.mongodb.org/manual/core/read-operations-introduction/#projections is the documentation for this.
[21:30:11] <GothAlice> So you can choose to do the date rounding/snapping within MongoDB itself at the time you make your initial query, or you can do it after in your JavaScript code.
[21:31:50] <GothAlice> Personally, I always try to make MongoDB do as much of the "heavy lifting" as possible. ;)
[21:32:01] <mvno_subscriber> ok, so i can't use javascript in the query or in the projection. i got a bit lost in the JS talk, then
[21:32:04] <cheeser> it's why we use databases, after all
[21:32:32] <mvno_subscriber> i've used the mongodb documentation extensively, but i often find that there are some basics that have eluded me. takes time to get it..
[21:33:21] <GothAlice> mvno_subscriber: Correct. MongoDB expects, effectively, certain JSON structures for queries and projections. https://docs.mongodb.org/manual/core/crud-introduction/
[21:33:32] <GothAlice> Not actually JavaScript code.
[21:37:12] <mvno_subscriber> i often seem to find methods that aren't documented in docs.mongodb (at least when i search). i stumbled upon $dateToString, but this is not found when i search the doc site. i have to use google for that
[21:38:10] <GothAlice> Some projection operators are only usable with certain types of queries.
[21:39:19] <GothAlice> https://docs.mongodb.org/manual/reference/operator/aggregation/project/#pipe._S_project < many of the fancy / most useful ones are "aggregation pipeline" operators.
[21:39:31] <GothAlice> I.e. $dateToString is found under https://docs.mongodb.org/manual/reference/operator/aggregation-date/
[21:42:49] <mvno_subscriber> well, i found my cure now
[21:43:12] <mvno_subscriber> all that is left is having the output be comma separated json objects in an array - but i guess i can do that with a regexp
[21:43:16] <mvno_subscriber> thanks a ton for all the help
[21:43:24] <mvno_subscriber> you have no idea how much time you have saved me
[21:44:01] <mvno_subscriber> i will paste these results in excel now (please don't hurt me).
[22:01:53] <saml> is there a way to import xml documents to mongodb?
[22:02:03] <saml> generic way to convert xml to mongo?
[22:19:16] <regreddit> saml, i did this, but ended up writing an xsl to convert xml to json
[22:21:13] <GothAlice> I'm sorta building my own method to serialize complex objects to MongoDB: https://gist.github.com/amcgregor/c6ae92325a941ba70568?ts=4
[22:21:23] <GothAlice> That's an example of the resulting XML.
[22:22:54] <regreddit> saml, here's what I used: https://github.com/bramstein/xsltjson/