PMXBOT Log file Viewer

Help | Karma | Search:

#mongodb logs for Wednesday the 30th of July, 2014

(Back to #mongodb overview) (Back to channel listing) (Animate logs)
[02:41:07] <phrozensilver> I'm having some trouble with mongoose/mongodb trying to send data to my API; here's the data I want to send and the AngularJS controller - https://gist.github.com/rdallaire/241dd48262ec856620c1
[02:42:56] <phrozensilver> I'm wondering if I'm just doing the syntax wrong
[03:39:07] <ron_frown> ladies
[03:39:10] <ron_frown> question
[03:39:34] <ron_frown> does mongo have any date aggregation stuff built in? So I can say give me stuff that happened this day and pass in the date and it'll chunk off time precision?
[03:44:16] <joannac> I don't know what you mean by "chunk off time precision"
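A hedged sketch of what "chunk off time precision" can look like with the date operators MongoDB does ship ($year/$month/$dayOfMonth in $group). This is built as a plain object, not run against a live server; the collection and field names ("events", "ts") are hypothetical.

```javascript
// Group events by calendar day, discarding the time-of-day precision.
// "ts" is an assumed timestamp field.
const day = new Date("2014-07-30");
const next = new Date("2014-07-31");
const pipeline = [
  // keep only documents from the requested day
  { $match: { ts: { $gte: day, $lt: next } } },
  // truncate each timestamp to its calendar day and count per day
  { $group: {
      _id: { y: { $year: "$ts" }, m: { $month: "$ts" }, d: { $dayOfMonth: "$ts" } },
      count: { $sum: 1 }
  } }
];
// In the shell: db.events.aggregate(pipeline)
```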
[03:57:09] <Puckel_> Hello there. I have a problem with my replicaSet. I have a problem with the first two name servers present in my resolv.conf; the third name server is ok, but I get errors in the mongo logs because it failed to getaddrinfo of my servers. Is DNS lookup not recursive in mongodb?
[04:00:03] <joannac> Puckel_: getent hosts HOSTNAME
[04:00:19] <joannac> what does that return?
[04:12:26] <Puckel_> joannac: thank you for your reply, the problem is due to human error on the firewall, nameservers are again reachable and the problem is solved. thank you again.
[04:14:56] <joannac> no probs
[07:10:21] <greenmang0> I want to create an index on a replica set, following the documented method here - http://docs.mongodb.org/v2.4/tutorial/build-indexes-on-replica-sets/#index-building-replica-sets, after creating indexes on the secondaries, I want to create it on the primary in the background, because I don't want to step it down. The docs say that "background index creation" on the primary will result in "foreground index creation on
[07:10:23] <greenmang0> secondaries", but in my case all secondaries will already have the index, so two questions: 1) will secondaries create the index again? 2) will it cause secondaries to not accept read/write ops?
[07:11:49] <joannac> the replicated index creation command should be a no-op on the secondaries
[07:21:53] <rpcesar> herro. I am working with the mongodb driver, and curious if there are any cons to storing and passing around the collection instance, as in most tests I see people passing around a pure MongoClient and calling .collection(<collection_name>). are there any pitfalls I need to be aware of regarding passing around the Collection returned from .collection() instead?
[07:22:00] <rpcesar> *mongodb nodejs driver
[07:22:47] <rpcesar> in short, i am trying to come up with a good coupling strategy for a "drag and drop" authentication layer.
[07:32:05] <Nodex> can you describe passing round the collection instance?
[07:34:40] <rpcesar> sure
[07:36:36] <rpcesar> in general, one creates the MongoClient, and calls connect. the connect returns a "Database?" instance to the callback (on success). One can then call db.collection('collection name') to get a "Collection?" instance. I am wondering if that result, by itself, is safe to pass around. Basically to initialize a "module" that deals only with a single settable collection.
[07:37:45] <rpcesar> I am trying to make my authentication layer as obviously and explicitly separate as possible, as its intention is to be separated (by box) from what stores, as an example, the email address of the user in raw form.
[07:38:59] <rpcesar> scouring a few unit tests, it seems like most examples I see of this have people passing around the "Database?" instance, and calling .collection() as necessary to pull up the collection instances, even in cases where this seems unnecessary (only dealing with one collection)
[07:40:20] <rpcesar> however, besides running that function call (haven't timed it, presume it's minimal though), it also defeats some of the abstraction, as I want all my "Engines" to be potentially abstracted away from even having a named collection identifier (this way I can hot swap certain components and test against things like redis for certain "tables")
[07:41:08] <rpcesar> hopefully my goals make sense, if something isn't let me know and I will try my best to clarify further
[07:44:16] <Nodex> I don't see why you wouldn't be able to set the collection to a variable and pass it round
[07:44:28] <Nodex> it's only an object afterall
[07:44:57] <rpcesar> right, and I am aware I "can". what im trying to figure out is why no one IS.
[07:45:19] <rpcesar> im wondering if there is some con im glossing over. such as the state being decoupled from the database in some way I don't realize
[07:45:26] <rspijker> Should be fine, indeed. Think the reason you usually pass around DBs might be that it’s very common to have multiple DBs with the same structure. Each having a bunch of collections with the same name… It could get complicated if you pass around collections in that case.
[07:45:37] <rspijker> so, basically, namespacing
[07:45:41] <Nodex> ^^
[07:45:53] <Nodex> application layer problem
[07:45:59] <rpcesar> awesome, so passing this around in my case (attempting to force a specific abstract storage concept) is perfectly fine and dandy
[07:46:16] <rspijker> I can’t see anything wrong with it
[07:46:20] <rpcesar> purrfect
[07:46:24] <rpcesar> thank you very much
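A minimal sketch of the pattern the channel just agreed is fine: an "engine" module that receives a collection-like object rather than a MongoClient/db handle, so the backing store can be hot-swapped (e.g. with a stub for tests). The function and field names are illustrative, not from any real API.

```javascript
// A store that only knows about one injected collection; it never names
// the collection or touches the db handle itself.
function makeUserStore(collection) {
  return {
    findByEmail(email, cb) {
      collection.findOne({ email: email }, cb);
    }
  };
}

// With the real Node driver the wiring would look roughly like:
//   MongoClient.connect(url, function (err, db) {
//     const users = makeUserStore(db.collection('users'));
//   });
```

Because the store depends only on the collection's method surface, a plain object with a `findOne` method is enough to test it.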
[09:05:58] <adrienpa> Hi everyone! I'm trying to plug my MMS monitoring agent into my standalone mongo server running on a dedicated box. I've used the IP in place of the hostname, and when I started the agent, it made 3 little requests to the right IP, then restarted with a new hostname (my machine's uname) instead of the IP. the host I can see on the MMS GUI has also been updated accordingly.
[09:07:12] <adrienpa> I've searched the docs, where they talk about /etc/hosts, or that you can override some regex to use IPs, but I don't think that this is the problem, since I can reach my box from the server running the agent without a problem.
[09:07:31] <adrienpa> has anyone figured this out already ? thanks !
[09:07:50] <rspijker> what is the actual issue?
[09:08:46] <adrienpa> hi rspijker. the issue is that my monitoring agent is now trying to reach 'uname:27017', which doesn't work
[09:09:00] <adrienpa> (where uname is my machine's uname)
[09:10:09] <rspijker> hmmm, yeah, that’s how MMS works
[09:10:26] <rspijker> easiest is to use the preferred hostname setting in MMS
[09:10:30] <rspijker> just enter the IP there
[09:11:08] <adrienpa> ok i'll go try this. thanks.
[09:14:04] <kkkk> hey
[09:14:16] <kkkk> ive encountered an interesting bug with mongodb
[09:14:49] <Nodex> cool
[09:14:50] <kkkk> essentially when i sort by "startDate", i get a broken object:
[09:15:08] <kkkk> { "": "539fe15009b9e58305956d5c", "friends": [] }
[09:15:22] <Nodex> can you pastebin your code?
[09:15:31] <kkkk> im using mongoose on nodejs
[09:15:56] <Nodex> then I would suggest it's a mongoose error and mongoose has added that document like that
[09:15:57] <kkkk> and it seems to be using a cached copy as i can solve it by restarting mongodb
[09:16:11] <Nodex> and or a cached version
[09:16:15] <kkkk> i've restarted the nodejs process but the error still exists
[09:16:27] <kkkk> it was only fixed when mongodb was restarted
[09:16:54] <Nodex> is that "" supposed to be the ObjectId?
[09:17:01] <kkkk> if i sort by "-startDate", i do not have the bug
[09:17:28] <kkkk> if im not wrong, the "" key refers to an element in the friends array
[09:17:58] <Nodex> It sounds like it's a mongoose error. to double check simply execute the same query on the mongodb shell and see if it does the same thing
[09:21:09] <kkkk> okies'
[09:24:33] <kkkk> hey i was trying to find out how to print the raw query from mongoose, do you happen to know? :P
[09:30:26] <kkkk> ahh found that
[09:34:15] <kkkk> hey yup
[09:34:19] <kkkk> i've tried on robomongo
[09:34:35] <kkkk> it seems like robomongo returned a spoilt result too
[09:35:09] <Nodex> can you login to the shell and do it there please
[09:35:19] <kkkk> okies sure
[09:45:13] <kkkk> here:
[09:45:15] <kkkk> > db.appointments.find({ '$and': [ { friends: { '$in': [ ObjectId("539fe15009b9e58305956d5c") ] } }, { '$or': [ { endDate: { '$lte': new Date("Fri, 29 Sep 2017 18:19:34 GMT") }, startDate: { '$gte': new Date("Fri, 28 Mar 2008 12:59:34 GMT") } }, { startDate: { '$lte': new Date("Fri, 29 Sep 2017 18:19:34 GMT") }, endDate: { '$gte': new Date("Fri, 28 Mar 2008 12:59:34 GMT") } }, { endDate: { '$gte': new Date("Fri, 28 Mar 2008 12:59:34 GMT
[09:45:23] <kkkk> oops
[09:45:26] <kkkk> lemme pastebin it
[09:48:47] <kkkk> http://pastebin.com/AhSWjrtL
[09:49:35] <rspijker> kkkk: what the… Duplicate entry '2147483647' for key 'PRIMARY'
[09:49:40] <rspijker> you broke pastebin?
[09:50:51] <kkkk> what does that mean?
[09:51:49] <rspijker> it means there is an issue with pastebin...
[09:52:02] <rspijker> or do you actually get your paste when you open that link?
[09:52:08] <rspijker> (you might have to hard refresh)
[09:53:05] <kkkk> o wow i get that error now
[09:54:39] <kkkk> https://paste.ee/p/dgITN
[09:54:43] <kkkk> hope this works
[09:56:16] <rspijker> so… how do you get that second line?
[09:56:26] <rspijker> because find should return a cursor… not a result
[09:57:17] <kkkk> i think it returns a result? :o
[09:57:49] <rspijker> the shell you are using might do that
[09:58:23] <kkkk> o thats right
[09:58:29] <kkkk> hmm
[09:58:41] <kkkk> im using mongodb shell 2.4.8
[10:00:05] <kkkk> my db is on 2.6.3
[10:00:28] <kkkk> and it happened both to amazon ami as well as ubuntu
[10:01:19] <rspijker> and you’re sure it’s not your data?
[10:01:26] <kkkk> nope..
[10:01:42] <kkkk> the only change in the query is just the sort()
[10:01:42] <rspijker> I don’t know off the top of my head whether those range queries will succeed if, for instance, the startDate and endDate fields don’t exist
[10:01:52] <kkkk> all exist
[10:01:57] <kkkk> and are date objects
[10:02:15] <kkkk> theres also only 1 of that document in my local db :P
[10:02:41] <kkkk> though 1 or not does not matter
[10:03:59] <rspijker> so… if you do a count instead of a find on that query, you get 1?
[10:05:12] <kkkk> yup i get 1
[10:05:36] <kkkk> (just to double check, so i added a .count() at the end of the query )
[10:06:41] <rspijker> what happens when you add a .next() at the end?
[10:07:05] <kkkk> same: { "" : ObjectId("539fe15009b9e58305956d5c") }
[10:07:20] <kkkk> but if i use "-startDate", i get a result
[10:07:43] <rspijker> can you do a .explain on the query and pastebin the result?
[10:08:08] <rspijker> only thing I can think of right now is some kind of sparse index issues :/
[10:09:13] <rspijker> the sort direction could influence the index being used…
[10:09:44] <kkkk> https://paste.ee/p/tixwq
[10:10:44] <kkkk> hey i've got to go for now, will be back later
[10:10:47] <kkkk> many thanks ;)
[10:11:55] <rspijker> ok
[10:55:25] <remonvv> \o
[10:55:38] <remonvv> Just curious but has anyone looked at/benchmarked Aerospike?
[11:14:17] <Nodex> I was looking into it the other day remonvv
[11:14:34] <Nodex> haven't had a chance to benchmark it yet
[11:15:11] <remonvv> Same. It seems very hypey. Big claims no data sort of thing. But I'm still curious to see if it might be better for my usecases.
[11:34:07] <kkkk> @rspijker heres the explain for "-startDate": https://paste.ee/p/krYJo
[11:34:47] <rspijker> kkkk: so, that uses a different index...
[11:35:42] <kkkk> yup looks like it
[11:36:06] <rspijker> is the index on the one that’s causing the problem sparse by any chance?
[11:37:06] <kkkk> the "startDate" query uses friends which is an array of objectid
[11:37:20] <kkkk> the "-startDate" query uses the startDate
[11:37:35] <rspijker> I know what they use, is either of them sparse?
[11:38:43] <kkkk> nope
[11:38:47] <kkkk> both not sparse
[11:39:11] <rspijker> can you pastebin db.coll.getIndexSpecs() ?
[11:39:49] <kkkk> > db.Appointment.getIndexSpecs() [ ]
[11:39:55] <kkkk> the output is [ ]
[11:41:17] <rspijker> should that be capitalized?
[11:42:07] <kkkk> > db.appointment.getIndexSpecs() [ ]
[11:42:22] <kkkk> oops
[11:42:23] <kkkk> got it
[11:42:26] <kkkk> sec
[11:43:17] <uf6667> is there a semaphore for mongodb?
[11:43:27] <uf6667> I need to do a series of operations
[11:43:41] <uf6667> transactions, actually
[11:44:28] <kali> uf6667: findAndModify is a CAS, you can implement a polling mutex based on it
[11:45:01] <kali> uf6667: but there is nothing that looks like a relational transaction in mongodb
[11:45:24] <uf6667> ahhhh ok thanks, do I use a recursive function for it?
[11:45:34] <kkkk> https://paste.ee/p/yWI2b
[11:45:36] <uf6667> or can it keep doing it?
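A sketch of the polling mutex kali describes: findAndModify acts as a compare-and-set, so only one caller can flip `locked: false` to `true`, and everyone else retries (polls) until acquisition succeeds. The "locks" collection, field names, and helper names are all hypothetical, and the call follows the classic `findAndModify({query, update})` shape.

```javascript
// Try to take the lock once; returns true only for the caller whose
// findAndModify matched the unlocked document.
function tryAcquire(locks, lockId, owner) {
  const doc = locks.findAndModify({
    query:  { _id: lockId, locked: false },
    update: { $set: { locked: true, owner: owner, at: new Date() } }
  });
  return doc !== null;          // null => someone else holds the lock
}

// Release by the current owner only, so a stale worker can't unlock
// somebody else's critical section.
function release(locks, lockId, owner) {
  locks.update({ _id: lockId, owner: owner }, { $set: { locked: false } });
}
```

A caller would loop (with a sleep/backoff) on `tryAcquire` until it returns true, do its work, then `release` — which is exactly the "recursive function / keep doing it" question above: poll in a loop, don't recurse unboundedly.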
[11:46:12] <kkkk> hey rspijker i gtg for 2 hours, cya thanks ;)
[14:04:47] <akp> hello, can i use a regex in the the find() call?
[14:05:12] <Ankhers> Yes.
[14:05:26] <Ankhers> http://docs.mongodb.org/manual/reference/operator/query/regex/
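Two equivalent spellings of the regex filter from the docs linked above, built as plain query objects; the collection and field names are made up for illustration.

```javascript
// Operator form, as in the $regex reference:
const q1 = { name: { $regex: "^bob", $options: "i" } };
// Shell / JS-driver shorthand with a regex literal:
const q2 = { name: /^bob/i };
// Usage in the shell: db.users.find(q1)
// Note: an anchored prefix like ^bob can still use an index on "name";
// unanchored patterns force a full scan.
```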
[14:24:39] <tobstarr> hi
[14:24:53] <weebl> hi
[14:29:52] <ernetas> Hey guys.
[14:30:17] <ernetas> We had one of our MongoDB servers FS crash badly while trying to make them a replica set...
[14:30:43] <ernetas> We managed to restore database_name.x files, but how do we know which ones are which?
[14:30:47] <ernetas> Does the order matter?
[14:32:19] <kkkk> @rspijker i managed to fix it but doing a reIndex()
[14:33:32] <rspijker> kkkk: yeah, it did look like it was index-related
[14:33:52] <rspijker> can’t really explain why though
[14:34:13] <rspijker> ernetas: I believe that info is stored in the .ns file
[14:34:22] <rspijker> unfortunately, I’m fairly sure that’s a binary format
[14:43:37] <ernetas> rspijker: so no way to extract that info?
[14:45:21] <rspijker> ernetas: let me get this straight… you have the dbName.x files, but you don’t know which x is which, or what?
[14:45:57] <rspijker> what are you hoping to find out _exactly_?
[14:48:55] <kkkk> yup, thanks ;)
[14:49:44] <ernetas> rspijker: filenames. I don't even have "dbName.x", I just know from size that it's that.
[14:50:28] <rspijker> ernetas: eish…
[14:51:13] <rspijker> well… if you don't have too many you can construct the sequence...
[14:51:19] <rspijker> smallest is 64MB, they then double in size
[14:51:31] <rspijker> but if you have them up until 2GB you will have multiple files of 2GB
[14:52:00] <rspijker> so, if you have more than 6, you’re in trouble
[14:52:09] <ernetas> No, they're 512MB (smallfiles=true). It's 250...
[14:52:29] <rspijker> you have 250 files?
[14:52:32] <ernetas> Yes.
[14:52:51] <rspijker> ok, then as an aside, you probably shouldn't be running with smallfiles…
[14:52:54] <rspijker> is this 1 DB?
[14:52:57] <rspijker> or are there several?
[14:53:21] <ernetas> 1 DB.
[14:53:44] <Derick> there might be a sequence number in the files
[14:54:32] <ernetas> In the beginning?
[14:54:53] <Derick> yeah
[14:56:16] <Derick> seems to be bytes 12-15
[14:56:20] <Derick> (0 indexed)
[14:56:24] <Zelest> Derick, do you have any usecases where any big profile companies uses GridFS in production?
[14:56:32] <rspijker> this might be an interesting, if frightening read ernetas: http://blog.mongohq.com/shipwrecked-a-mongodb-data-recovery-tale/
[14:56:55] <rspijker> if you are sure the data files are intact though and you can get the seq numbers as Derick suggests then definitely go with that :P
[15:00:25] <Derick> edrocks:
[15:00:27] <Derick> ernetas:
[15:00:34] <Derick> sudo od -t x4 -j 12 -N 4 $filename | head -n 1 | cut -d " " -f2
[15:00:40] <Derick> will show you the sequence number
[15:00:51] <Derick> warning: not tested
[15:00:55] <edrocks> i was going to use gridfs in production
[15:00:59] <Derick> well, not beyond experiments
[15:01:09] <Derick> edrocks: with which driver?
[15:01:09] <edrocks> but i just found a different way to do it
[15:01:12] <edrocks> mgo
[15:01:16] <edrocks> im using golang
[15:01:36] <boombaloo> anyone know how to make mongorestore restore data from the dump? I swear I tried like 100 combinations but it doesn't like them
[15:01:52] <Derick> edrocks: what would you want to use it for?
[15:01:58] <Derick> Zelest: video streaming
[15:02:06] <boombaloo> keeps saying : root directory must be a dump of a single database
[15:02:08] <boombaloo> what
[15:02:13] <rspijker> boombaloo: what does your dump look like?
[15:02:40] <boombaloo> rspijker: http://pastebin.com/vL8ZUAYF
[15:02:54] <edrocks> i was going to use it for continuous deployment to handle updates being rolled out to all my servers and have them auto restart/talk to nginx (or something similar that we make) to coordinate when to turn its route on/off
[15:03:13] <Derick> edrocks: rsync it
[15:03:15] <rspijker> boombaloo: mongorestore -d reddots dump/reddots
[15:03:21] <boombaloo> and im trying to restore it like this: ./mongorestore --host REMOTE_HOST --drop -d mongoid dump/mongoid
[15:03:28] <rspijker> that should be fine
[15:03:33] <edrocks> i wanted to be able to restore each version and coordinate with nginx
[15:03:34] <boombaloo> hah...you think ;-)
[15:03:36] <boombaloo> but its not
[15:03:38] <Zelest> Derick, oh
[15:03:45] <Zelest> Derick, any particular users/sites you know of?
[15:03:52] <rspijker> boombaloo: pastebin ls -l dump/mongoid
[15:03:59] <boombaloo> hang on a sec
[15:04:01] <edrocks> but idk if i will continue it, i think i could use rsync with something much less complicated
[15:04:05] <boombaloo> works now
[15:04:06] <Derick> Zelest: i am trying remember
[15:04:09] <edrocks> i really just wanted to try gridfs
[15:04:30] <Derick> edrocks: :D
[15:04:41] <Derick> edrocks: makes most sense with large files
[15:04:45] <Derick> not lots of small files
[15:05:08] <edrocks> i think my binaries were at least 20mb ea
[15:05:36] <Derick> ernetas: any luck?
[15:05:59] <rspijker> boombaloo: it’s my magic touch
[15:06:08] <boombaloo> yeah i think so
[15:06:12] <Zelest> Derick, we consider using it for graphics (a few million files, not very big)
[15:06:14] <boombaloo> have no other explanation for that :-D
[15:06:18] <Zelest> but mostly to keep the nodes synced
[15:07:45] <Derick> Zelest: not sure why I would pick gridfs for small files
[15:07:50] <Derick> just a normal document would work too
[15:08:33] <Zelest> yeah true
[15:39:20] <ernetas> Derick: will try a bit later. Thanks though, that does seem like it may work. However, I have some files which differ in size (are larger) just a little bit. Not sure how that will work (will MongoDB just ignore that part or what). Also, what if some files are missing? Those are just questions that are coming to my mind atm, nothing confirmed yet, will let you know how this goes.
[15:39:41] <Derick> if the files are corrupt, I think, you're screwed
[15:39:51] <Derick> and any missing file makes it all not work either
[15:39:54] <Derick> you need all the data
[15:42:47] <rspijker> check out the link I posted earlier ernetas in case you are missing stuff
[15:43:06] <rspijker> it’s quite a bit of detail, but if your data is important enough you can probably salvage (some of) it that way
[15:49:29] <uf6668> yay, got a spinlock working :DDDDDDDDDD
[15:49:35] <uf6668> transactions a la oracle work now
[15:51:39] <ernetas> rspijker: yeah, thanks for the links, I saw it earlier, but hopefully won't need it.
[15:52:40] <ernetas> Is there a way to tell where the data ends?
[15:54:41] <rspijker> Derick might be able to tell you that, he seems to know a bit more about the structure of the files
[15:54:58] <rspijker> I gtg, good luck man :)
[15:58:01] <ernetas> Thanks.
[15:59:31] <Derick> ernetas: I don't know that, sorry
[16:02:27] <BlakeRG> can anyone point me in the right direction to get the latest single document in a collection based on a date/time?
[16:10:17] <culthero> Anyone around to help me with a broken install of mongodb via Debian? I had attempted to use tokumx and now have to revert back to 2.6.3 of mongo
[16:13:42] <culthero> hm, i had to force it to install version 2.6.3
[18:07:37] <foldedcat> I need to build a feature where a user saves a piece of content for viewing later, what would be the proper schema for saving content to a user? I would need to be able to have a listing of the content where only saved items are highlighted
[18:41:17] <BlakeRG> how do i pull out the last single document from a collection based on a time stamp field?
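The "latest document by timestamp" question above goes unanswered in the log; the usual approach is a descending sort plus `limit(1)` — in the shell: `db.coll.find().sort({ ts: -1 }).limit(1)`, which an index on `{ ts: -1 }` keeps cheap. A sketch of the same selection logic in plain JS, with "ts" as a placeholder field name:

```javascript
// Pick the document with the greatest value in `field`, without mutating
// the input array (slice() copies before sorting).
function latestBy(docs, field) {
  return docs.slice().sort((a, b) => b[field] - a[field])[0];
}
```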
[19:09:20] <obiwahn> build/linux2/normal/mongo/db/repl/repl_coordinator_impl.o: In function `~TopologyCoordinatorImpl': ... git-mongodb/src/mongo/db/repl/topology_coordinator_impl.h:53: undefined reference to ... `vtable for mongo::repl::TopologyCoordinatorImpl'
[19:11:01] <obiwahn> Is Torvalds right and my compiler is broken? When I move the dtor's body to the cpp it works ...
[19:12:41] <obiwahn> or is it the ordering
[19:22:43] <niks2112> Has anyone worked with mongodb MMS back up? I want to know the connection speed between MMS cloud storage backup and AWS. I want to estimate the time to restore data.
[19:54:23] <obiwahn> and is there no checking before you commit to master?
[20:06:50] <q85> Hey all, I have an urgent issue. We have a 3 member replicaset. One of the members is hidden and delayed. The problem I'm seeing: we have mongos instances which are routing queries to the hidden member. All mongo components are 2.4.9. Can anyone point me to why mongos is routing queries to the hidden member?
[20:18:37] <gequeue> Hi All. What's the best way for me to get a document structured like this { "name" : { "first":"bob", "last":"smith" }, "age": 40 } into { "first":"bob", "last":"smith", "age": 40 } ?
[20:21:22] <stefandxm> gequeue, plain guess; use project
[20:26:31] <gequeue> stefandxm: You're the man! I forgot I can use "$name.first" to get to the sub value
[20:26:36] <gequeue> I got what I needed. Thank you.
[20:27:42] <stefandxm> glad it worked out
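The $project stefandxm suggested, written out: promote the subfields of `name` to the top level and drop the nested form. Built here as a plain pipeline object, not run against a server; the collection name is assumed.

```javascript
const pipeline = [
  { $project: {
      _id: 0,                  // drop the id from the output
      first: "$name.first",    // reach into the embedded document
      last:  "$name.last",
      age:   1                 // pass age through unchanged
  } }
];
// db.people.aggregate(pipeline) turns
//   { "name": { "first": "bob", "last": "smith" }, "age": 40 }
// into
//   { "first": "bob", "last": "smith", "age": 40 }
```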
[20:36:46] <ernetas> Hmm... Crap.
[20:37:07] <ernetas> One of our collections somehow was not restored after a full restore with file copy.
[20:37:11] <ernetas> Why could it be?
[20:46:38] <ernetas> Hmm. Wait.
[20:46:58] <ernetas> Does the fs.files collection have to be (is it supposed to be) on all shards or just one?
[20:50:06] <ernetas> Hmm...
[20:50:13] <ernetas> I now have only "database.xx" files.
[20:50:25] <ernetas> And local.xx. Is it possible anyhow to restore the whole database?
[21:21:53] <mikebronner> quick question: how can i define a variable in aggregate query that I can use to construct multiple fields in $project section? From what I can read about $let, I can only use it to create a single field, not actually use the variable across multiple fields
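One common workaround for the $let limitation asked about above: compute the shared value once in an early $project stage, then reference it from several fields in the next stage, since each stage sees the previous stage's output. All field names here are hypothetical, and this is a sketch built as a plain object rather than run against a server.

```javascript
const pipeline = [
  // Stage 1: materialize the shared value as a real field
  { $project: { a: 1, b: 1, total: { $add: ["$a", "$b"] } } },
  // Stage 2: reuse it in as many computed fields as needed
  { $project: {
      half:    { $divide:   ["$total", 2] },
      doubled: { $multiply: ["$total", 2] }
  } }
];
// db.coll.aggregate(pipeline)
```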
[21:40:26] <Es0teric> anyone here?
[21:41:47] <mikebronner> seems not ...
[21:44:07] <Es0teric> mikebronner: well i have a simple question really..
[21:44:19] <mikebronner> im not too good myself, but i can try
[21:44:25] <Es0teric> how do i get a document that contains an array but doesnt have an _id
[21:45:13] <mikebronner> what field are you searching by?
[21:45:23] <mikebronner> have you tried $match
[21:45:32] <Es0teric> $match ?
[21:45:33] <Es0teric> what do you mean?
[21:45:49] <mikebronner> in your query its like a where clause
[21:46:06] <mango_> is MongoDB transaction-safe across collections?
[21:46:34] <mikebronner> mango_: sorry, dont know
[21:46:58] <mango_> according to this: http://blog.mongodb.org/post/7494240825/master-detail-transactions-in-mongodb
[21:47:02] <mango_> I don't think so.
[21:47:26] <mango_> dependent on nested documents.
[21:49:14] <Es0teric> mikebronner: that doesnt work
[21:49:15] <Es0teric> no results
[21:49:29] <mikebronner> hmm, sorry
[21:49:41] <mikebronner> beyond me at this point :)
[21:49:56] <mikebronner> try stackexchange? i've had better luck there than here
[21:51:37] <Es0teric> mikebronner: its weird because i can do a .find() and it will show all the records
[21:53:30] <mikebronner> you use the match inside of the find()
[21:53:48] <mikebronner> http://docs.mongodb.org/manual/reference/operator/aggregation/match/
[21:54:02] <mikebronner> sorry, inside aggregate()
[22:30:00] <baegle> I'm seeing something very strange in one of my documents. This is the property: "internalIds", "externalIds" : { },
[22:30:20] <baegle> it's ONE property, but it's got 2 names? What is this?
[22:40:43] <humongous> Hey guys, I just switched over to a hosted replica set and now my application is slow and disconnecting a bit. is there something obvious I should look at first?