[00:12:31] <hephaestus_rg> is there a fast way to get the first item added to a mongodb collection? db.articles.first isn't a function
[00:13:22] <hephaestus_rg> articles being the name of the collection i'm concerned with
[00:19:22] <jumpman> hephaestus_rg: what makes this item the 'first' in the collection? first chronologically?
[00:20:08] <jumpman> For standard collections, natural order is not particularly useful: although it is often close to insertion order, it is not guaranteed to be.
[00:20:39] <jumpman> for instance: if a document is updated and no longer fits its currently allocated space, it gets moved, and a new document may then be inserted into the gap that move leaves behind
[00:21:03] <jumpman> If you want a specific order, you must specify one. Except if you use a capped collection, which does preserve insertion order
[00:21:51] <jumpman> unfortunately with capped collections... documents cannot be deleted or moved (resized).
[00:22:31] <jumpman> If your usecase fits a capped collection then that's a fairly easy way to implement it. Otherwise, you'll need to add a timestamp field
[00:23:44] <jumpman> can you use the default ObjectId to order by insertion? i thought they weren't guaranteed to be monotonic? are those the same thing?
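Since driver-generated ObjectIds lead with a 4-byte creation timestamp, sorting on _id approximates insertion order; a minimal shell sketch against the articles collection from the question:

```js
// approximate the first-inserted document: ascending _id ~ insertion order
// (only holds for driver-generated ObjectIds, at one-second resolution)
db.articles.find().sort({ _id: 1 }).limit(1)
```

As the discussion notes, this is only an approximation (ids are generated client-side, so clock skew matters); an explicit timestamp field is the reliable option.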
[03:39:57] <xcesariox> where do i put this "results = Book.collection.map_reduce(map, reduce, out: "vr")" command syntax into? into rails console or mongodb console directly? https://gist.github.com/shaunstanislaus/0f8d87939c0ab01ce5d6
[03:42:49] <cheeser> xcesariox: when you asked that yesterday, i told you it wasn't valid shell syntax
[04:05:50] <xcesariox> cheeser: so how does mapreduce work
[04:06:04] <xcesariox> cheeser: cause the book i am following doesn't tell me where to enter these commands/functions
[04:11:01] <cheeser> looks like a ruby driver thing.
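The gist's snippet is indeed the Ruby driver's map_reduce helper, so it runs in Ruby code (e.g. the rails console), not the mongo shell. The rough shell equivalent, with a hypothetical map function and the collection name assumed to be books:

```js
// mongo shell equivalent of the Ruby call; map/reduce are JS functions
// and the output lands in the "vr" collection
var map = function() { emit(this.author, 1); };                   // hypothetical
var reduce = function(key, values) { return Array.sum(values); };
db.books.mapReduce(map, reduce, { out: "vr" });
```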
[04:32:10] <Freman> "$err" : "BSONObj size: 24852869 (0x17B3985) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: { task_id: \"54add46f370a791400350a68\", failed: false }",
[04:42:14] <joannac> wait, yes. I can't count, sorry. yes, your output document looks like it's too long
[04:44:44] <Freman> have to find another way to prune it huh?
[04:45:14] <Freman> (in future this will be run every 60 seconds, so there should never be that many logs, this is just the first "oh crap we're out of disk space" cleanup)
[07:41:21] <cxz> hi, i'm trying to filter an array in an array, where the inner array has a key which should match one value and another key which should not match. how can i do this?
[07:54:13] <cxz> in that example, i want the first 2 items in the boxes array because it has the 'name' of '1' and an inner name of 'Test' in the 'details' array
[08:03:49] <joannac> Go to your boss / PM and say "we need to change the schema because we cannot query what we need to without scanning every document in the collection"
[08:04:58] <cxz> but i guess we need to scan every document anyway?
[09:07:49] <a|3xx> i am trying to compile something and i am getting the following types of errors, would anybody happen to know what could be wrong? /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/../../../../lib64/libmongoclient.a(sasl_client_session.o): In function `mongo::(anonymous namespace)::_mongoInitializerFunction_SaslClientContext(mongo::InitializerContext*)': (.text+0xbd7): undefined reference to `sasl_errstring'
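The undefined references to sasl_errstring and friends come from Cyrus SASL: this static libmongoclient.a was built with SASL support, so the link line most likely just needs -lsasl2 added after the archive (with static libraries, order on the command line matters). The exact set of extra flags depends on how the driver was built.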
[09:56:58] <Ravenhearty> where can i talk with someone that devs the official mongodb C# driver
[10:46:27] <joannac> Ravenhearty: what's the question?
[10:47:40] <joannac> Ravenhearty: if it's along the lines of "I found a bug / I have a feature request", file a ticket in the CSHARP project at jira.mongodb.org
[11:03:05] <Ravenhearty> well i'd like to ask why the BsonIgnoreExtraElements attribute doesn't work in child classes
[11:25:47] <nyssa> hello everyone. this is kind of a last-resort help call for me, as I am completely stuck with restoring a database after some files got corrupted. after hours of trying to fix it on the go, I finally did a mongodump with repair and then mongorestore to a replicaset - but now any query to one of the databases results in "assertion 13051 tailable cursor requested on non capped collection" - the problem is - the error pops up on a primary in a replicaset, and if
[11:25:49] <nyssa> I move it to a standalone the problem still persists. apparently it has something to do with master-slave replication but this is not the case here and I was wondering if any of you ran into something remotely similar?
[11:53:37] <cxz> joannac: we were able to solve our problem using $and: [{'name': }, {'boxes.details: elemmatch...}]
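Fleshed out with hypothetical values, a query of that shape might look like the following; putting both conditions inside one $elemMatch makes them apply to the same boxes element:

```js
// match docs with a box named "1" whose own details array contains an
// entry named "Test" while some other field must NOT match (values made up)
db.items.find({
  boxes: {
    $elemMatch: {
      name: "1",
      details: { $elemMatch: { name: "Test", type: { $ne: "hidden" } } }
    }
  }
})
```

Note this matches whole documents; returning only the matching array elements still requires aggregation ($unwind + $match), which is where joannac's schema concern comes in.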
[12:40:54] <salty-horse> Hi. I noticed a regression after upgrading from 2.4 to 2.6. I have count() queries on _id prefix (using "_id": /^prefix/) that don't seem to be indexOnly. They did before. now they end up taking hours. Could not find any documentation about it online. has anyone experienced this?
[12:50:51] <StephenLynx> there was a dude with a problem regarding indexes a few days ago
[12:51:07] <StephenLynx> and it ended up being a bug marked to be solved in 2.9 or so
[12:51:19] <StephenLynx> I don't know if it was the same issue that you are experiencing.
[12:55:10] <salty-horse> StephenLynx, I remember there's a problem with using covered indexes in aggregations. do you mean that?
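One way to check whether the 2.6 count is still covered is to explain the equivalent find with the projection limited to _id (collection name hypothetical):

```js
// 2.6 shell: look for "indexOnly: true" and tight "indexBounds" in the output
db.mycoll.find({ _id: /^prefix/ }, { _id: 1 }).explain()
```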
[13:42:12] <kexmex> i signed up to MMS and I got an email from what seems to be a Sales person, asking if I need any help. I asked him a couple of simple questions that I need answered, but instead of getting the answer, he asked me if I looked at enterprise offerings. I replied saying that we are not, yet, after which he just told me to f-off by saying "i advise you to use community resources and local meetups"--
[13:56:02] <StephenLynx> I just console.log anything :v
[13:56:14] <StephenLynx> and then I read on /var/log/messages
[15:10:20] <RoyK> hi all. I have mongodb 2.6.5 on this RHEL7 server, and after a yum update (to kill the ghost bug), mongod just died. This is what the logs say when I try to start it. http://paste.debian.net/142667/
[15:22:23] <bin> I have 3 addresses and i check them in a cycle.. every time the second or third host check blocks longer
[15:23:36] <bin> I mean it blocks longer on the address which is down
[15:23:43] <bin> is there any timeout i can set on the client?
[15:25:29] <FunnyLookinHat> Anyone have a preferred third-party vendor for hosted databases? I'm looking at objectrocket ( from Rackspace ) and compose.io - but suggestions would be nice
[15:27:33] <bin> done,fixed ... default timeout is 10s woah
[16:11:39] <kexmex> so how to make mms setup a replicaset that communicates via ssl?
[16:27:46] <kali> FunnyLookinHat: mongolab and mongohq are popular choices
[16:28:36] <FunnyLookinHat> Ah right - compose.io - I've seen them too
[16:49:56] <lilgiant> hi all of you, short question: is it a good idea to use mongodb to store images (size not bigger than 1M) or is it better to keep the image files in a separate folder?
[16:56:53] <neo_44> lilgiant: s3 bucket with reference in Mongo
[17:00:09] <GothAlice> FunnyLookinHat: Be aware that you pay a premium for "cloud" hosted services. If I were to host my own data at compose.io it'd cost me half a million a month.
[17:02:30] <lilgiant> neo_44: thanks, I think i will also just store a reference to the images in mongodb, although i like the idea of having the data centralized in a db.
[17:04:08] <neo_44> lilgiant: images never belong in a database.....only actionable, query-able data does. All the metadata for an image should be stored...just not the image. This is just a firm opinion that I have.
[17:04:11] <GothAlice> lilgiant: I take both approaches; my main personal dataset is mostly "attachments" in GridFS, but some of my client applications need the images to be more easily web-accessible, so we use S3.
[17:04:20] <StephenLynx> it's better to keep files off the db because you can stream them from disk easily
[17:05:24] <GothAlice> Also, running my own iSCSI RAID arrays and 1U boxen to run MongoDB GridFS on top of is actually less expensive than "cloud file" storage like S3, which tend to run around $1000/TiB/yr: http://cl.ly/image/0t1x2Q2L1u0E
[17:05:47] <blizzow> I have a bunch of large (+80GB, 150mil records) collections I'm trying to mongorestore into a sharded replicated 2.6 cluster. I'm getting a whopping 700 inserts/second. How the heck can I speed this thing up?
[17:07:10] <FunnyLookinHat> How much of that is indexes? ;)
[17:07:43] <GothAlice> Very little; my primary metadata collection explicitly doesn't have indexes on things. There's too much data, not even the indexes would fit in RAM. (It's a write-heavy dataset, and read performance isn't a concern at all.)
[17:08:07] <GothAlice> About 1.2% of the storage size is actually nothing but the field names in each record.
[17:08:31] <FunnyLookinHat> Wow - sounds like a fun project :D
[17:08:39] <GothAlice> "Exocortex" — even has a fun name. :D
[17:11:58] <neo_44> GothAlice: if reading the data isn't a concern...then why are you collecting it in mongo? Why not just push it to S3?
[17:18:54] <lilgiant> one more question: i read a little bit about the mongo-connector. is it a better approach to use it to keep mongo and solr in sync, or is it better to do this at the application level?
[17:19:46] <lilgiant> right now i use mongoose's post-save function
[17:25:12] <GothAlice> neo_44: Because for the cost of S3 I could replace every drive in all three of my arrays every month.
[17:28:37] <GothAlice> neo_44: I also have better service-level reliability than S3, for example, the primary has an uptime of 1048 days… so far. ;)
[17:29:19] <neo_44> doesn't s3 have an uptime forever?
[17:29:29] <neo_44> just have to wait for the data to distribute, right?
[17:30:01] <GothAlice> neo_44: Tell that to the major sites that disappear when AWS has "issues". ;)
[17:31:00] <GothAlice> The last time I used AWS they locked up three zones worth of EBS volumes which caused cascading cross-zone failures despite SLA guarantees that separating your infrastructure across zones would isolate you from issues. Took three days of reverse-engineering InnoDB structures to get our data back. ¬_¬
[17:33:05] <GothAlice> neo_44: We learned from that incident. Now our application DB hosts don't have, use, or need permanent storage at all. We have a "no moving parts" and "automated recovery from armageddon" philosophy for infrastructure. :)
[17:45:43] <igreer_> There shouldn't be any issue with using mongodb to process transactions (ONLY MONGODB), correct? I am new to the language and wanted to ask this simple question before digging too deep into the code.
[17:46:06] <cheeser> depends on what you mean by transactions
[17:46:06] <GothAlice> igreer_: MongoDB itself has no concept of "transactions".
[17:46:15] <cheeser> GothAlice: not across documents anyway. ;)
[17:46:43] <igreer_> But what about two-phase commits?
[17:46:59] <cheeser> GothAlice: they're functionally the same
[17:47:34] <GothAlice> igreer_: Technically possible. Generally I advise people who need transactional safety to use a database that intentionally does transactions. Postgres or Maria. ;)
[18:53:04] <macbroadcast> hello all, i need to create a config file for mongodb on ubuntu , what are the correct file permissions ?
[18:53:34] <neo_44> StephenLynx: there are clear uses for s3,relational, document, key value, and other ways to store data. Right tool for the problem I always say :)
[18:53:53] <neo_44> macbroadcast: do you have a mongodb user?
[18:57:39] <neo_44> I believe you can just chown mongodb:mongodb the config file
[18:58:00] <neo_44> macbroadcast: depends on what your config file is doing
[18:58:03] <macbroadcast> the package i want to install just uses mongodb commands and does not specify a config file, so i guess i need to do it myself
[19:10:35] <theRoUS> any mongoid users/mavens in here?
[21:03:24] <disappeared> don’t know crap about mongo…is there a way i can export a valid json file using the mongoexport utility…i noticed the export is not valid json
[21:41:23] <NyB__> cheeser: yes, I was hoping for something that would point out the differences, esp. for those like me who have been using custom decoders etc.
[22:02:53] <hahuang61> hmmm i've got a 3 node replica set, 2 of the nodes are unhealthy/unreachable, but I can mongo to them... what can I do to fix this situation?
[22:04:28] <joannac> fix in what sense? have a primary again?
[22:06:45] <hahuang61> joannac: yes, and have the other 2 nodes be reachable in the replica set
[22:06:58] <netameta_> How can i refresh a schema? i forgot to add a unique index on one field - and now it won't let me. i assume there is a way to reset the schema?
[22:07:16] <hahuang61> netameta_: there's not a schema.
[22:07:27] <joannac> hahuang61: you've connected to all 3 nodes successfully? and all 3 say "I can't see the other 2"?
[22:07:42] <joannac> netameta_: sounds like you're asking about mongoose?
[22:08:06] <hahuang61> joannac: I can connect to all 3 from all 3, but the rs.status() from each of the 3 say something different. in two of them, the other 2 are unreachable, and in 1, only 1 is unreachable.
[22:08:12] <hahuang61> joannac: kind of a strange situation
[22:10:53] <joannac> hahuang61: log into a server where it thinks 2 are unreachable. connect a mongo shell. type rs.conf() and note what the "host" says
[22:11:09] <joannac> exit the mongo shell, and then "mongo HOST"
[22:11:27] <joannac> where HOST is exactly what the entry in rs.conf() was
[22:12:14] <hahuang61> joannac: which host? There's a host for each of the machines
[22:12:42] <joannac> one of the ones that's not the one you're on :p
[22:13:15] <hahuang61> joannac: you want me to mongo --host `hostname` where hostname is what I got from the conf
[22:13:31] <hahuang61> joannac: yeah, I did all of them from every node to every node
[22:13:35] <hahuang61> joannac: they are all working okay
[22:13:51] <joannac> but rs.status() is saying unreachable?
[22:14:21] <disappeared> I’m looking at a json export of a collection and i see these $date properties. What format are these in? they aren’t in unix time..but it is a long integer of some type?
[22:17:43] <joannac> okay, well I presume you know how to fix that
[22:18:24] <joannac> you'd be surprised how often in support, you tell someone to do something in quite clear terms, and they go and do some other thing :p
[22:18:58] <hahuang61> joannac: I believe it :) I've been on both sides of that :D, sorry I mistyped earlier
[22:19:16] <hahuang61> joannac: and yup, I think I'll just have to reapply puppet across these nodes, and restart the mongods
[22:19:29] <hahuang61> joannac: This happened cuz we were applying the latest ghost-bug patches to those machines and rebooted them
[22:19:39] <hahuang61> something must have changed the init.d scripts
[22:20:29] <joannac> hahuang61: no probs, happy to help. glad it's sorted
[22:24:14] <calmbird> Hi! Do you know any good way to update all array object values? Because db.something.update({}, {$set: {"myArray.$.value": 10}}, {multi: true}) updates only the first array element
[22:25:05] <neo_44> calmbird: you have to specify each element of the array
[22:34:37] <joannac> post on GG or SO and the community can design a better schema for you
[22:35:05] <calmbird> you mean create another collection?
[22:36:46] <calmbird> well, i don't have time for changing the schema now - i received this server from another dev, and i can't find a proper query to update all array elements in one query
[22:37:52] <calmbird> just found one answer that might work: use each on the cursor to get every document, modify it in the server app, then update
[22:38:20] <calmbird> but i don't think it's good for like 100k+ records :P
[22:40:56] <calmbird> btw, mongo should have that feature, otherwise I have to build relations on the server side, which is not good. well, thanks for the help, will try to figure it out
[22:49:15] <joannac> as a general comment, you're the second person who's come in saying "we can't change our schema".
[22:49:54] <joannac> if the schema doesn't work for your use case, you're just delaying the inevitable
[22:51:20] <calmbird> @joannac yes, because everything is working except the weekly/monthly reset. I would have to change modules etc, it would take a whole day, and I don't have that time. maybe I will try what neo_44 said, to change from array to object
[22:52:43] <calmbird> @joannac or just use update(... "myArray.0.weekly": 0), update(... "myArray.1.weekly": 0) etc. I know it's not good, but can't see any other way
[22:53:29] <calmbird> well huge operation once per week, I hope mongo db will handle it :P
[22:54:35] <joannac> 100k records is not huge. index it
[22:55:13] <calmbird> or mby just find().each(function() { modify, update }) I don't know :P Will probably just run another process for that reset.
[22:55:48] <calmbird> @joannac ok I will index it, could you please tell me what you think is better: find().each() or update(... myArray.0.weekly: 0) way?
[22:56:01] <joannac> actually, depending on the ratio of find / modify, maybe don't index it
[22:57:10] <calmbird> well this operation will be done only once per week, so... I can suggest changing the schema, but they want it working now :P small company you know, don't even have time to write good tests :P
[22:59:35] <calmbird> https://jira.mongodb.org/browse/SERVER-1243 there is ticket for that all documents in array, but low priority bleh ;)
[23:11:24] <hahuang61> joannac: finally all fixed. Somehow the puppet manifests regressed.
[23:16:28] <calmbird> joannac: thank you very much for your help, good night