Can regulation on loan-loss provisions for credit risk affect the mortgage market? Evidence from administrative data in Chile


BIS Working Papers

No 780

 

Can regulation on loan-loss provisions for credit risk affect the mortgage market? Evidence from administrative data in Chile

by Mauricio Calani

 

Monetary and Economic Department

April 2019

Paper produced as part of the BIS Consultative Council for the Americas (CCA) research conference on "Microdata and economic research at central banks”, hosted by the Central Bank of Brazil, Rio de Janeiro, 4–5 June 2018

 

JEL classification: G21, R31

Keywords: Loan Loss Provisions, LTV, Screening, Coarsened Exact Matching, Macro-Prudential Policy

This publication is available on the BIS website (www.bis.org).

 

© Bank for International Settlements 2019. All rights reserved. Brief excerpts may be reproduced or translated provided the source is stated.

ISSN 1020-0959 (print)

ISSN 1682-7678 (online)

 

Can regulation on loan-loss provisions for credit risk affect the mortgage market? Evidence from administrative data in Chile

by Mauricio Calani

 

Abstract

How is mortgage credit affected by the introduction of a macro-prudential policy tool? In January 2016 the Chilean banking supervisor raised required loan-loss provisions for mortgage credit risk. We argue that financial institutions responded by tightening their lending standards, enhancing the quality of their portfolios, but also contracting their supply of mortgage credit. We reach this conclusion by developing a stylized imperfect-information model which we use to guide our empirical analysis. We conclude that the loan-to-value (LTV) ratio was 2.8% lower for the mean borrower, and 9.8% lower for the median borrower, because of the regulation. Our paper contributes to the literature on the evaluation of macro-prudential policies, which has mainly exploited cross-country evidence. Our analysis, in turn, narrows down to one particular policy in the mortgage market and dissects its effects by exploiting unique administrative tax data on the census of all real estate transactions in Chilean territory in the period 2012-2016.

JEL classification: G21, R31

Keywords: Loan Loss Provisions, LTV, Screening, Coarsened Exact Matching, Macro-Prudential Policy

1. Introduction

On December 30, 2014, the Chilean banking regulator (Superintendencia de Bancos e Instituciones Financieras, SBIF henceforth) announced that, starting in January 2016, it would enforce new regulation on provisioning against credit risk stemming from the mortgage loan portfolio. Before this regulatory change, banks would use their own models and decide on their own provisions. However, the view of the regulator was that these provisions were insufficient. Starting in January 2016, the SBIF requires banks to effectively raise financial provisioning for each granted loan. More importantly, this requirement varies over the maturity of a loan, and is contingent on the realized delinquency of the borrower and on the borrower's leverage at the moment of entering said delinquency. The chosen measure of borrower leverage is the loan-to-value (LTV) ratio. This new (or rather modified) regulation implies a substantially higher financial cost for banks compared with observed pre-regulation provisions.

Did the new regulation affect the mortgage loan market? And if it did, what aspects did it affect, and through which mechanism exactly? In this paper we address these questions using a two-step analysis. First, we analyze the features of the regulation using an off-the-shelf screening-under-imperfect-information model, adapted to the problem at hand. Equipped with a model, we can learn about the properties of equilibrium under the new regulation, and get a sense of its effects under a wide family of parameters. In particular, we argue that in order to reduce the expected financial cost of the new regulation, banks tried to grant loans only to borrowers who were less likely to enter into delinquency, and who would therefore entail less provisioning ex post. But cherry-picking these borrowers is hard from an ex-ante perspective, so banks had to do so using a noisy signal: the LTV ratio. This model produces an endogenous threshold for the signal (an LTV limit) which we later document in the data. The second step in our analysis is empirical. We use a unique administrative dataset from the Chilean Internal Revenue Service (Servicio de Impuestos Internos, or SII) that records all nationwide real estate transactions from 2002 onwards. In this dataset we observe transactional variables such as the property price, downpayments, and the financial institution involved in the mortgage loan. We also observe characteristics of buyers and sellers, such as income, or whether any party is a firm. Lastly, we observe many features of the real estate itself, such as size, type, and location. This data is unique in its kind; to the best of our knowledge, comparable data has only been gathered and used for Illinois, USA, by Ben-David (2011) to analyze inflated house prices in the years before the international financial crisis. In this paper we analyze the before/after of the new regulation using the coarsened exact matching method of Iacus, King and Porro (2012). We are able to use such a data-hungry method because of the richness of our dataset.

Our main findings are as follows. First, the new regulation had an effect on loan-to-value ratios for new loans: fewer loans, with lower LTV ratios, were granted. We estimate that, because of the regulation, the LTV ratio is 2.8% lower on average. Furthermore, the median borrower is granted a 9.8% lower LTV. We also find that, because of the way the regulation differentiates provisioning below and above an 80% loan-to-value ratio, a large fraction of loans is granted at exactly that LTV. In particular, we calculate that the fraction of loans granted at 80% LTV more than tripled and represented one fourth of all loans in 2016-17. This agglomeration effect is predicted by our stylized model. Finally, we use our model to rationalize why higher financial costs were not passed on to customers via higher mortgage rates. We argue that such an outcome is an equilibrium result of the combination of imperfect information and competition between banks.

The rest of this section is devoted to placing our contribution within the related literature and explaining in detail the exact change in regulation and the data. Section 2 presents the stylized model, Section 3 develops our empirical examination of the data, Section 4 presents the results, and Section 5 concludes.

1.1. Related literature and our contribution

The new regulation on loan-loss provisions for mortgage credit was not introduced explicitly as a macro-prudential tool, though one of its explicit objectives was "to promote active credit risk management" by financial institutions (Pacheco, Pugar and Valdebenito, 2014). Thus, in practice, it relates to the myriad of macro-prudential tools used to deal with excessive credit booms. In particular, under the definition of macro-prudential tools by Cerutti, Claessens and Laeven (2017), provisions are similar to capital requirements, which are considered fully fledged macro-prudential tools. This paper therefore joins the literature evaluating the effect of macro-prudential tools on different aspects of credit markets.

The evaluation of macro-prudential policies is a fast-evolving literature, which has mainly exploited cross-country variation to identify causal effects. This paper makes a contribution by focusing on a single regulatory change to evaluate the effect of a given policy, using a unique administrative dataset on all real estate and mortgage loan transactions in Chile in the period 2012-2017.

In particular, there is robust cross-country evidence on the effects of the introduction of macro-prudential policies. For instance, Crowe et al. (2013), Hott (2015), Cerutti, Dagher and Dell'Ariccia (2017) and IMF (2011) discuss the policy options to deal with real estate booms, and stress the importance of LTV limits for subduing increasing household leverage, preventing negative home equity, and limiting the number of borrowers who access mortgages and fuel real estate booms. Cerutti, Claessens and Laeven (2017) also take a cross-country perspective to study the effectiveness of the macro-prudential policy menu. From their analysis we learn that LTV limits are prominent for the dynamics of mortgage loans, house prices, and overall financial fragility. In turn, Kuttner and Shim (2016) raise the issue of complementarity, and find that LTV and debt-to-income measures, together, are more effective at taming house price booms than each on its own. Lastly, from Qi and Yang (2009) we learn that LTV limits are not only important to prevent default, but that the LTV ratio is the single most important determinant of loss given default. We add to the evidence presented in these papers by exploiting micro data instead of cross-country data, which allows us to single out the causal effect of one particular macro-prudential policy.

More broadly, we contribute to a growing group of papers that use (micro) administrative data to address macro-financial questions. This avenue has proven very rewarding for many strands of the literature, including the analysis of mortgage credit. For instance, Albanesi, De Giorgi and Nosal (2017) use administrative credit file data for the U.S. to examine the evolution of household debt and defaults between 1999 and 2013. They find a new narrative at odds with the usual role assigned to subprime borrowers in the crisis: credit growth between 2001 and 2007, and later mortgage defaults, was concentrated in the prime segment, mostly among real estate investors. A similar strategy is used by Beltratti, Benetton and Gavazza (2017), who use Italian administrative data to evaluate the effects on mortgage credit of the elimination of the prepayment penalty on mortgage loans. Similarly, Ben-David (2011) uses micro data from a county in Illinois to examine the possibility of inflated house prices, and their use by financially constrained households.

1.2. The new regulation on loan-loss provisions in the mortgage market

The change in regulation by the banking authority (SBIF) is a (non-explicit) macro-prudential measure aimed at aligning accounting provisions with the expected loss due to delinquency (credit risk). It was formally announced in December 2014, and entered into force in January 2016. Before this change in regulation, banks decided on their level of provisioning following internal models. Banks were often supervised by the regulator when the latter assessed these provisions as being too low, or notably different from the rest of the system. The change in regulation removes discretionary provision accumulation. Its most important features are:

  1. Timing: Provisions have to be calculated monthly, and not only at origination.
  2. Loans included: All outstanding loans are included, not just new loans.
  3. Size: The required loan loss provision for a delinquent borrower is hefty, and can go as high as 30% of the outstanding loan. Furthermore, a borrower can be re-labeled as non-delinquent only after he has paid all debt in arrears on time for four consecutive months. During this period, the bank needs to keep provisions unchanged.
  4. Contingencies: Provisions are explicit functions of (a) time in delinquency, and (b) the LTV ratio. No formal LTV limit exists, but LTV is important because it interacts with time in delinquency to determine the size of required provisions.


Figure 1: Financial provisions under the new regulation: expected loss (vertical axis, in percentage), according to the loan-to-value ratio (horizontal axis) and days in arrears at the end of the month. Source: SBIF, Chapter B-1 of the "Compendio de Normas Contables".
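
A minimal sketch of how a schedule like the one in Figure 1 can be encoded is given below. The breakpoints and provision rates used here are illustrative placeholders consistent with the qualitative description in the text (provisions are negligible below 80% LTV, exceed 8% after one day in arrears for highly leveraged borrowers, and reach roughly 30% beyond 90 days); the binding values are those of SBIF Chapter B-1, not these.

```python
# Illustrative sketch of a provisioning schedule in the spirit of Figure 1.
# The numbers below are NOT the official SBIF parameters; they only encode the
# qualitative pattern described in the text: provisions are negligible when
# LTV < 80% and rise steeply with days in arrears when LTV >= 80%.

def required_provision(ltv: float, days_in_arrears: int) -> float:
    """Return the required loan-loss provision as a fraction of the loan."""
    if days_in_arrears == 0:
        return 0.0   # non-delinquent loans carry (close to) no provision
    if ltv < 0.8:
        return 0.01  # below the 80% threshold the expected loss is negligible
    # above 80% LTV the provision grows with time in delinquency
    if days_in_arrears <= 30:
        return 0.08  # "more than 8%" after one day in arrears (text)
    if days_in_arrears <= 89:
        return 0.15  # intermediate bucket, purely illustrative
    return 0.30      # "more than 30%" beyond 90 days (text)

# Example: a loan at 85% LTV that has spent 95 days in arrears
print(required_provision(0.85, 95))  # -> 0.3
```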

1.3. Data

This paper exploits novel and unique administrative records from the Internal Revenue Service (IRS) for all real estate transactions in Chile, spanning 2002 to 2016. Every real estate transaction in Chilean territory needs to be filed in the presence of a notary of faith ("Notario de Fe" in Spanish), who later submits all details of the transaction to a centralized property archive called the "Conservador de Bienes Raices". Both the notary and the archive are obliged to inform the IRS using the "Declaration on Alienation and Registration of Real Estate" form (colloquially known as "Form F-2890"). Currently this dataset is used in the computation of the Housing Price Index by the Central Bank of Chile (Banco Central de Chile, 2017). The information contained in the F-2890 form includes the price of the property, mortgage loans, cash downpayments, the name of the lending financial institution, and whether the buyer/seller is a person or a company. It also collects information on the identity of the buyer/seller, though this last piece of information is kept confidential. Combined with the Non-Farming Real Estate Property Cadastre ("Catastro de Propiedades no Agricolas", also collected by the IRS), it is also possible to observe characteristics of the real estate being transacted: in particular, whether it is a residential or commercial property; whether it is a house, an apartment, a parking lot, or a storage facility; and its size and age. We restrict our analysis to residential properties (houses and apartments) with some kind of mortgage financing.

A quick examination of the data portrays one of the main arguments of this paper. There is a substantial difference in the distribution of LTV ratios before and after the introduction of the new regulation. The LTV ratio for the median borrower declined from 88% in 2014 to 80% in 2016. Of course, this decline cannot be directly attributed to the regulation without further examination of other covariates, but it provides a sense of relevance. The only other episode in recent memory in which such a decline was observed coincides with the aftermath of the international financial crisis and the ensuing recession. A different way to approach the same data is to consider the kinks of the regulation. In particular, from Figure 1 we can see that the expected loss, and therefore the provisioning cost, considered under the new regulation depends positively on both the LTV ratio and the number of days in delinquency. This relation is highly non-linear. In particular, the difference in provisioning between a non-delinquent credit and a delinquent one is negligible when the LTV ratio is below 80 percent, but is very large when the LTV ratio is equal to or above 80 percent. The 80 percent threshold represents a discontinuity which will prove key in the analysis.


Figure 2: Fraction of loans granted at different LTV ratios. The green line shows the fraction of loans granted by all banks with loan-to-value ratios greater than 80 percent and lower than or equal to 90 percent. The blue line does the same for loans with loan-to-value ratios greater than 70 percent and lower than or equal to 80 percent. The red bars are the fraction of loans with ratios higher than 79.8 and lower than 80.2 percent. Quarterly averages. Source: own calculations based on data from the IRS.

Figure 2 shows that, after December 2014, the fraction of loans with an LTV ratio lower than or equal to 80% rose steadily (blue line), at the expense of the fraction of loans with an LTV ratio higher than 80% (green line). More importantly, the fraction of loans granted at exactly 80% grew very fast after December 2014, unlike in any previous episode in the recent past.
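
As a rough illustration of how a series like the red bars in Figure 2 can be built from the F-2890 records, the sketch below computes the quarterly share of mortgages granted at (almost) exactly 80% LTV. The column names (`date`, `price`, `loan`) are hypothetical placeholders, not the actual field names of the administrative file.

```python
import pandas as pd

def share_at_80(df: pd.DataFrame) -> pd.Series:
    """Quarterly share of loans granted between 79.8% and 80.2% LTV.

    df is assumed to hold one residential mortgage transaction per row, with
    hypothetical columns: 'date' (datetime), 'price' (property price) and
    'loan' (mortgage amount).
    """
    ltv = df["loan"] / df["price"]
    quarter = df["date"].dt.to_period("Q")
    at_80 = ltv.between(0.798, 0.802)   # the band used for the red bars
    return at_80.groupby(quarter).mean()
```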

In the following section we relate the cost of provisioning, and all the contingencies specified in the regulation, to an endogenous LTV limit. We parameterize the model to gain some insight into the quantitative effects of the new regulation on the variables of interest. In the subsequent section we focus on the empirical counterpart.

2. The New Regulation under the Lens of a Simple Model of Financial Screening

2.1. Benchmark Model Setup

In the previous section we elaborated on how the new regulation adds a non-negligible (expected) cost contingent on two conditions: first, that after a mortgage is granted, the borrower goes into arrears; and second, that said borrower's mortgage debt represents a large fraction of the pledged collateral (a high loan-to-value ratio). Furthermore, for this second condition, the regulation is highly non-linear around the 80% threshold (see Figure 1). At this point it is important to stress that the regulation does not legally impose a cap on the LTV ratio; it only discourages granting new high-LTV loans to low-quality borrowers. If banks were able to perfectly observe borrowers' quality, they would refrain from granting loans to those who will later become costly, or immediately offset this higher cost onto them. However, banks cannot separate high- from low-quality borrowers ex ante. There is an incomplete information problem from the perspective of the lender.

In the rest of this section we assess the problem of the financial intermediary using a benchmark model of imperfect information with screening. We do so because this model allows us to understand why we care about LTV ratios; why the ex-post distribution of LTVs concentrates probability mass at exactly 80% LTV; and why we should expect the pass-through of higher financial costs onto mortgage rates to be very limited. Our small model below builds on the canonical models of imperfect information presented in Stiglitz and Weiss (1981) and Mas-Colell, Whinston and Green (1995), and on some features of the application by Ates and Saffie (2013).

2.1.1. Borrower Heterogeneity

Every period a mass of size one of new borrowers shows up at the bank asking for a loan to purchase a house. These borrowers are indexed by a quality ranking e ∈ [0, 1]: a borrower with ranking e turns out to be an H-type (a good payer, who never enters into arrears) with probability θ(e), and an L-type otherwise.

We will assume throughout that θ(e) = e^ν, with ν > 1. Note that if ν < 1, θ(e) is a concave function of e, which implies that H-type borrowers are relatively more abundant. On the other hand, if ν > 1, H-type borrowers are relatively more scarce, meaning that high probabilities of being a good payer can only be achieved with values of e close to 1. Put differently, ν governs the scarcity of H-type borrowers; and while it is a constant parameter in this model, nothing stops it from being countercyclical.
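
To see why ν governs the scarcity of H-type borrowers, note that if the quality ranking e is uniformly distributed on [0, 1] (an assumption we make explicit here only for illustration), the aggregate share of H-type borrowers in a cohort is

Pr(H) = ∫₀¹ θ(e) de = ∫₀¹ e^ν de = 1 / (1 + ν),

which is decreasing in ν: with ν = 1 half of the cohort are good payers, while with ν = 4 only one in five are.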

2.1.2. The value of lending to ex-post heterogeneous borrowers

Let us elaborate on the value of lending to an H(L)-type borrower from the perspective of the lender. The financial intermediary is assumed to be exactly that: an intermediary who borrows funds at rate r̄_t from a deep-pocketed investor, and lends the proceeds to mortgage borrowers at rate r_t > r̄_t. For simplicity, let us assume that the financial intermediary only lends in the form of perpetuities. We also assume that full default is not a possible event. This assumption buys simplicity, but it also allows us to put the emphasis on the effects of the new regulation, i.e. that the higher cost of lending to an L-type borrower comes from the financial burden of continuously provisioning a fraction of the loan in distress. Recall that H-type borrowers are those who will not enter into arrears, and the value of lending to one of these borrowers is given by the flow of period earnings derived from the lending/funding interest rate spread times the loan size, L_t:

V^H_t = [ (r_t − r̄_t) L_t + V^H_{t+1} ] / (1 + r̄_t)        (1)

Also, note that under the assumption that the mortgage is a perpetuity, the loan amount L remains constant. Hence,

V^H = (r − r̄) L / r̄        (2)

Analogously, the value of lending to an L-type borrower is similar to (1), except that there is a probability δ that the borrower will enter into arrears and trigger the cost of provisioning for a non-negligible period of time before going back into good standing. Hence the value of lending to an L-type borrower includes this cost,

V^L_t = [ (r_t − r̄_t) L_t − δψ L_t + V^L_{t+1} ] / (1 + r̄_t)        (3)

with ψ L_t the associated provision the bank has to make in that contingency. Again, because of the perpetuity assumption, we have that

V^L = [ (r − r̄) − δψ ] L / r̄        (4)

Note that ex ante both types of borrowers are indistinguishable; the borrower's type is only revealed after the loan is granted. Clearly, from the perspective of the lender it is better ex post to have lent to an H-type borrower, and the difference in values is

Δ ≡ V^H − V^L = δψ L / r̄,

which, under the new regulation on provisions for mortgage loans, is positive (Δ > 0). This implies that if the financial intermediary could observe a signal pointing to a higher probability that the borrower will end up being H-type, then it should choose such a borrower over another. In particular, under perfect information the financial intermediary would like to lend to customers with a higher quality ranking e, but it can only observe this statistic up to a noisy signal ẽ. We elaborate next on this information friction.

2.1.3. The Signal

The bank knows that a borrower's ability to honor his commitments is related to many factors, among them financial education, household size, income volatility, the value of pledged collateral, and the total financial burden. The latter two are efficiently summarized in two well-known statistics: the LTV ratio and the debt-service-to-income (DSTI) ratio. For the reasons elaborated above, the LTV ratio is by and large the most reliable signal in this regard. Moreover, while many developed economies extensively use credit scoring to separate high- from low-quality borrowers, that is not the case in Chile. Information on debt in arrears is collected by the banking supervisor, but it is not publicly available to lending institutions in real time. They must rely on their own credit risk analysis, based on information they request from the borrower. In practice, the DSTI and LTV ratios are used to allocate scarce credit funding, with the LTV being the most frequently binding constraint.

Let us assume then that the borrower's quality ranking can be imperfectly observed through the complement of the LTV ratio, ẽ = 1 − LTV: that is, the downpayment, or how much skin the borrower is willing to put in the game. A second interpretation is that (all else equal) higher savings at the moment of the dwelling purchase point towards a higher intertemporal discount factor and a higher propensity to save. Thus, the higher the downpayment, the stronger the signal of the borrower's commitment to honor their obligations. Let ẽ ∈ [0,1] stand for the noisy signal, which is related to the true quality ranking e in the following way,

ẽ = e with probability p;    ẽ = ε (pure noise, independent of e) with probability 1 − p,

where p is the accuracy of the bank's screening technology. If screening works accurately (with probability p), a lower LTV signals a borrower with a higher e, and hence a higher probability θ(e) = e^ν of being H-type. On the other hand, with probability 1 − p the observed signal ẽ is simply noise. Even though the signal is imperfect, as long as p > 0 it is positively correlated with the borrower's true quality ranking; the optimal policy for the financial intermediary is therefore to set a cut-off threshold ē on the realizations of ẽ. This cut-off rule has two effects on credit rationing. First, the extensive margin is affected, as a more restrictive cut-off rule implies fewer acceptable borrowers. Second, the intensive margin is affected, because on average borrowers (including H-type borrowers) are granted smaller loans, creating a trade-off.
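
The short simulation below illustrates the trade-off just described, under assumptions we make explicit and purely for illustration: the quality ranking e and the noise component of the signal are both uniform on [0, 1], and the parameter values are arbitrary rather than taken from the paper's calibration. Raising the cut-off ē improves portfolio quality (the share of H-type borrowers among granted loans) but reduces both the number of loans and the average loan size.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_screening(e_bar, nu=2.0, p=0.7, n=100_000, price=1.0):
    """Screening under cut-off rule e_bar (illustrative parameter values).

    Assumptions: quality e ~ U[0,1]; H-type with probability e**nu; the signal
    equals e with probability p and is uniform noise otherwise; the loan size
    is (1 - signal) * price.
    """
    e = rng.uniform(size=n)                             # true quality ranking
    is_h = rng.uniform(size=n) < e**nu                  # H-type realization
    signal = np.where(rng.uniform(size=n) < p, e, rng.uniform(size=n))
    granted = signal >= e_bar                           # cut-off rule
    loans = (1.0 - signal[granted]) * price
    return granted.mean(), is_h[granted].mean(), loans.mean()

for e_bar in (0.0, 0.1, 0.2, 0.3):
    share, quality, avg_loan = simulate_screening(e_bar)
    print(f"cut-off {e_bar:.1f}: granted {share:.2f}, "
          f"H-share {quality:.2f}, mean loan {avg_loan:.2f}")
```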

2.1.4. The problem of the financial intermediary

Given the definition of the signal ẽ, we can express loans in terms of this signal: L = (1 − ẽ)P. In the same way, the value of lending to an H-type borrower, V^H(ẽ), and to an L-type borrower, V^L(ẽ), can also be written in terms of ẽ. The problem of the financial intermediary is then the following: given prices {r_t, r̄_t, P_t}, the constant probability δ of entering into arrears for L-type borrowers, and the provision ψ required by the regulator (in percentage), the financial intermediary chooses the threshold ē to solve the program

max_{ē} ∫₀¹ 1{ẽ ≥ ē} [ θ(e) V^H(ẽ) + (1 − θ(e)) V^L(ẽ) ] de        (5)

where the indicator function captures the fact that only borrowers with a signal of ē or more are granted loans. We can re-express equation (5) as

E[ 1{ẽ ≥ ē} ( E[θ(e) | ẽ] V^H(ẽ) + (1 − E[θ(e) | ẽ]) V^L(ẽ) ) ].

Taking the first-order condition and working through the algebra, it is possible to solve for the threshold ē in closed form.
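
As an illustration of what such a closed form can look like, suppose (as above, and only for illustration) that e and the noise component of the signal are uniform on [0, 1], so that the probability that an applicant with signal ẽ is H-type equals p ẽ^ν + (1 − p)/(1 + ν). The first-order condition then sets the expected value of the marginal applicant to zero,

p ē^ν + (1 − p)/(1 + ν) = 1 − (r − r̄)/(δψ),    i.e.    ē = [ (1/p) ( 1 − (r − r̄)/(δψ) − (1 − p)/(1 + ν) ) ]^(1/ν),

whenever δψ > r − r̄ and the bracketed term lies in [0, 1] (corner solutions apply otherwise). This expression need not coincide exactly with the paper's equation (6), but it already displays the comparative statics of Proposition 1: ē rises with δψ and with ν, and falls with the spread r − r̄.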

It can then be verified that this threshold is increasing in the cost of lending to an ex-post bad borrower, δψ. This implies that the endogenous LTV limit is lower the costlier it is to have lent to an ex-post L-type borrower, which is exactly the direction the new regulation took. This conclusion, along with two others, is summarized in Proposition 1.

Proposition 1. A loan-to-value limit (ℓ = 1 − ē) is endogenously determined by the introduction of a provisioning cost for the contingent L-type borrower. This limit is

  1. Non-increasing in the expected cost of the provision, δψ.
  2. Non-increasing in the scarcity of good borrowers, governed by parameter ν.
  3. Non-decreasing in the net profitability of each granted loan, as captured by the spread r − r̄ > 0.

Proof. Direct evaluation suffices.

The stylized model above has all the intuition necessary to guide our empirical examination. Notably, in this model the lending interest rate r has been kept constant. That need not be the case. Price discrimination is a well-known strategy for firms to raise profits, but in the following subsection we show that such strategies do not pay off, and that we can abstract from them. Key to this conclusion is the interaction of the imperfect-information setup and competition, both of which are good characterizations of the Chilean mortgage market.

2.2. Alternative Setups

2.2.1. Screening and charging two different interest rates

In the benchmark (and simplest) model we analyzed the case in which the bank charges a single interest rate to all customers once it has decided they should be granted a mortgage loan. It could be argued instead that the ex-ante heterogeneity of applicants implies different probabilities of them turning into L-type borrowers, so that at least a two-interest-rate strategy should be implemented. The extension to more than two rates follows naturally. Consider this small extension to the benchmark problem of the bank: it can deny a loan to customers with a signal ẽ below the cut-off level ē, charge interest rate r^h to borrowers with signal ē ≤ ẽ < z, and charge r^l < r^h to borrowers with signal ẽ ≥ z. The case of a perfect screening technology is sketched in Figure 3.

[Figure 3: The case of a perfect screening technology with two interest rates.]

The problem of the financial intermediary is then the following: given prices {r^h, r^l, r̄, P}, the constant probability δ of entering into arrears for L-type borrowers, and the provision ψ required by the regulator (in percentage), choose {ē, z} to solve the program

max_{ē, z} ∫₀¹ ( 1{ē ≤ ẽ < z} [ θ(e) V^H(ẽ, r^h) + (1 − θ(e)) V^L(ẽ, r^h) ] + 1{ẽ ≥ z} [ θ(e) V^H(ẽ, r^l) + (1 − θ(e)) V^L(ẽ, r^l) ] ) de,

where the first-order condition with respect to z boils down to

∂Π/∂z = θ(z) [ V^H(z, r^h) − V^H(z, r^l) ] + (1 − θ(z)) [ V^L(z, r^h) − V^L(z, r^l) ] > 0.

This last condition captures the fact that, since loans are granted to the right of ē and a financial cost is paid anyhow if the borrower becomes delinquent, regardless of the charged interest rate, it is optimal for the bank to charge the highest possible interest rate. Hence the bank sets z so that no granted borrower is charged the low rate: it effectively charges a single interest rate, namely r^h, and the cut-off rule is the same as in the benchmark case with r = r^h.

2.2.2. The case of a two-price strategy and no screening

A third alternative model would be to simply separate the market and charge two different interest rates to customers with signals below/above a threshold z. This is simply a special case of the previous extension with ē = 0, and hence it delivers the same conclusions for the same reasons.

2.2.3. Including banking competition

In the benchmark model, we outlined the optimal cut-off rule for the screening problem of a bank that takes interest rates as given (see equation 5). We also outlined the case in which a given bank decides to charge two interest rates, and argued that it is optimal for that bank not to pursue such a strategy and to charge the higher of the two rates. However, a third alternative equilibrium may be possible. It could be plausible to have an equilibrium in the mortgage market with two interest rates charged by different banks: a leader bank which charges a low interest rate and a follower bank which charges a higher interest rate. In the remainder of this section we argue that this last outcome, too, will not be an equilibrium.

Consider the following setup. There are (at least) two competing banks (j = 1, 2). For comparability with previous results, let the mass of customers be normalized to two. Banks set interest rates first and, conditional on these decisions, choose their cut-offs ē_j. Suppose we start from an equilibrium with positive profits in which both banks charge the same high interest rate (r^h). Both banks are identical in the eyes of potential borrowers, who therefore randomize which bank to go to first, and the result in equation (6) carries over to both banks. If bank j decides to deviate from this equilibrium and charge r_j = r_{−j} − ε, its profits will differ for two reasons. First, the margin on each granted loan is lower, as can be verified from (5). Second, and more importantly, customers will no longer be randomly assigned between banks. They will go first to the cheapest bank (bank j) and, if rejected, will go to the competitor bank, which charges a higher interest rate for the same mortgage loan. This sequentiality is not only realistic, but allows us to cast the problem as a simple normal-form game, in which we can use the concept of dominant strategies.


Let us first consider the problem from the perspective of bank j = 1, which charges interest rate r^l while its competitor, bank j = 2, charges r^h. Let profits for this bank be denoted by π₁(r^l, r^h), where the first argument denotes the action of the first bank and the second argument the action chosen by its competitor. The optimal cut-off rule for bank 1 is given by equation (6) with r = r^l, that is, V^k(ẽ; r) = V^k(ẽ, r^l), k = H, L. Given that this is the cheaper bank, customers will ask for a loan at bank 1 first and, if rejected, will turn to bank 2. We assume that it is costless for borrowers to apply for mortgage loans at any bank. The fact that bank 1 receives twice as many applications as in the benchmark case has no effect on its choice of ē₁, and simply implies that its profits will also be twice those in the benchmark case. Hence,

π₁(r^l, r^h) = 2 π₁(r^l, r^l),

where π₁(r^l, r^l) denotes profits in the benchmark case, in which both banks charge the same rate and split the market.

Now consider the case of bank j = 2. Its problem is different because a fraction of borrowers (those with ẽ higher than ē₁, defined in equation (10)) already got their mortgage loans at bank 1. The problem of bank 2 is then: given prices {r^h, r̄, P} and the cut-off rule of the competitor bank, ē₁, choose ē₂ in order to solve the program analogous to (5) over the remaining pool of applicants (equation (11)).

Working out the first order condition, we can obtain

[Equation (12): bank 2's optimal cut-off rule ē₂.]

With this result at hand we can compare the payoffs to bank 2 for the two alternative interest rates it can charge, r^l and r^h. With r^l, both banks charge the low interest rate, and we are back in the benchmark case. Alternatively, if the charged rate is r^h, then equilibrium profits π₂(r^l, r^h) are given by plugging (12) into (11). Both cases are depicted in Figure 4 for different values of r^h − r^l.

[Figure 4: Profits of bank 2, π₂(r^l, r^l) and π₂(r^l, r^h), for different values of the spread r^h − r^l.]

From Figure 4 we can distill two insights. For small deviations of r^h from the competitor's interest rate, we have that (a) π₂(r^l, r^h) < π₂(r^l, r^l). For large deviations of r^h from r^l the opposite is true, and we have that (b) π₂(r^l, r^h) > π₂(r^l, r^l). Let us focus first on case (a). If bank 1 chooses r^l, then it is optimal for bank 2 to also choose r^l. If bank 1 chooses r^h, bank 2 can choose an interest rate slightly below r^h and get the whole market for itself, making (almost) twice as much profit as it would had it also chosen r^h. Hence, choosing r^l is a dominant strategy for bank 2. Next, consider case (b). Suppose that bank 2 chose an interest rate r^h very much higher than r^l. Clearly bank 1 then has an incentive to raise r^l to r^l = r^h − ε, with ε → 0. In this way, bank 1 raises its profits and still keeps the whole market to itself. But this implies that r^h − r^l = ε is very small, and we are back to case (a). In summary, it is very hard for bank 2 to set an interest rate that is too high and expect the other bank not to set its own interest rate a little below it and steal the whole market. Given this competitiveness, the optimal action for both banks is to set their rates at a unique level r^l. Thus, we are back to the benchmark case.
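
The undercutting logic above can be summarized in a schematic normal-form game of our own construction. Write π(r) for a bank's profits when both banks charge r and applicants randomize between them, 2π(r^l) for the profits of the cheaper bank when it receives every application first, and π₂(r^l, r^h) for the residual profits of the more expensive bank. The payoffs described in the text can then be arranged as:

                         Bank 2: r^l                        Bank 2: r^h
  Bank 1: r^l    ( π(r^l), π(r^l) )              ( 2π(r^l), π₂(r^l, r^h) )
  Bank 1: r^h    ( π₂(r^l, r^h), 2π(r^l) )       ( π(r^h), π(r^h) )

For small deviations of r^h from r^l, π₂(r^l, r^h) < π(r^l) while 2π(r^l) > π(r^h), so charging r^l is the dominant strategy for both banks, which is case (a) above.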

There are many simplifications in the benchmark model. To start, we have assumed that the demand for mortgage loans is completely inelastic. A negatively sloped demand would further limit the ability of banks to set too-high interest rates. Second, we are assuming that there is no strategic interaction between borrowers and creditors; instead, all bargaining power is assumed to belong to the financial institution. This means that borrowers do the best they can to provide the highest possible downpayment-to-value ratio, and if rejected they simply do not raise it again. A third major simplification of the model is the assumption of a univariate signal. Instead of simply signaling good repayment capacity with a high downpayment, borrowers could present proof of previous debt, add other properties as collateral, document past behavior with the same creditor, and so on. Especially important, banks could also use the debt-service-to-income ratio to assess the probability of a borrower entering delinquency. We have abstracted from this, as discussed in the previous sections. That said, we still want to use the model to understand how scarce credit was allocated after the coming into force of the new regulation on loan-loss provisions for mortgages. The LTV ratio is particularly important to our analysis because of the signaling information it provides, and because of the non-linearity of the regulation in said ratio.

2.3. Calibration of the Model

In this subsection we analyze a calibrated version of the benchmark model. We do so for two reasons: first, because it allows us to understand the ability of the proposed framework to generate effects on key variables that we can later examine empirically; and second, because it makes it easier to analyze the effects of the non-linearity of the loan-loss provision regulation around the 80% LTV threshold for a wide family of plausible parameters.

Our preferred calibration is summarized in Table 1 for all parameters except the provisioning cost ψ, as the new regulation implies substantial variation of this parameter with the leverage of the borrower in delinquency and the time spent in that state. For instance, after the reform the cost of provisioning was increased to more than 8% for highly leveraged borrowers who were delinquent for more than one day, but to more than 30% for delinquency of more than 90 days (see Figure 1). Though we analyze a large support of plausible values of ψ, our model does not distinguish one-day from 90-day delinquency; hence our quantitative conclusions should be understood only as an approximation to guide our empirical analysis.

[Table 1: Benchmark calibration of the model.]

We learn that the way the regulation was implemented makes the 80% threshold very important for a wide set of plausible parameterizations. First, consider the benchmark calibration, with provisioning ψ not contingent on the LTV ratio. Panel (a) of Figure 5 depicts the optimal cut-off rule in equation (6) under different values of the scarcity of high-quality borrowers ν and different values of the provisioning cost ψ. If an applicant with signal ẽ < ē (below any given curve) requests a loan, his application is rejected; if the signal is ẽ ≥ ē, he is granted the loan. This threshold is non-decreasing in the provisioning cost and in the scarcity of good borrowers, as already stated in Proposition 1. Second, let us consider the non-linearity introduced by the regulation in the parameter ψ. We learned from Figure 1 that below the LTV threshold of 80% the provisioning cost is negligible, whereas above that threshold, when a borrower enters delinquency, the provisioning cost can be as high as 30% of total asset value. For a signal 0.2 < ẽ < ē, therefore, the associated provisioning cost on the horizontal axis is not really operative. This is depicted in the right-hand panel of Figure 5, which shows the actual thresholds for mortgage loan approvals. For a wide set of provisioning costs, the 80% LTV limit is the one that matters. This is crucial to understanding the change in the distribution of LTV ratios documented in Section 3. For (almost) all relevant values of the provisioning cost ψ, it is an equilibrium outcome that the distribution of LTV ratios of granted mortgage loans gravitates towards 80%, as this is the binding threshold above which banks reject applicants.
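
The following numerical sketch reproduces the flavor of the right-hand panel of Figure 5. All parameter values are illustrative, not those of Table 1, and the value functions are perpetuity expressions of the form used in Section 2.1.2, with a provisioning cost that is negligible below 80% LTV and equal to ψ above it. The point is only that, for a wide range of ψ, the profit-maximizing cut-off settles at ē = 0.2, i.e. an LTV limit of 80%.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters -- NOT the paper's Table 1 calibration.
r, r_bar = 0.045, 0.035        # lending and funding rates (perpetuities)
delta, p, nu = 0.15, 0.7, 2.0  # prob. of arrears, signal accuracy, scarcity
price, n = 1.0, 500_000

# One simulated cohort of applicants, reused for every candidate cut-off.
e = rng.uniform(size=n)                                   # true quality
is_h = rng.uniform(size=n) < e ** nu                      # H-type realization
signal = np.where(rng.uniform(size=n) < p, e, rng.uniform(size=n))
loan = (1.0 - signal) * price
ltv = 1.0 - signal

def loan_values(psi_high):
    """Perpetuity value of each loan; provisioning only bites at LTV >= 80%."""
    psi = np.where(ltv < 0.8, 0.005, psi_high)
    v_h = (r - r_bar) * loan / r_bar
    v_l = ((r - r_bar) - delta * psi) * loan / r_bar
    return np.where(is_h, v_h, v_l)

grid = np.linspace(0.0, 0.5, 51)
for psi_high in (0.10, 0.20, 0.30):
    v = loan_values(psi_high)
    profits = [v[signal >= eb].sum() / n for eb in grid]
    e_star = grid[int(np.argmax(profits))]
    print(f"psi_high={psi_high:.2f}: optimal cut-off {e_star:.2f} "
          f"(LTV limit {1 - e_star:.0%})")
```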

[Figure 5: Panel (a): optimal cut-off rule ē for different values of ν and of the provisioning cost ψ. Panel (b): operative approval thresholds once the non-linearity of the regulation at 80% LTV is taken into account.]

3. An Empirical Examination of the LTV Distribution

We learned from the model above that the distribution of loan-to-value ratios of mortgages granted after the new regulation came into force would gravitate toward lower values, in order for banks to accommodate the higher cost of ending up with a low-quality loan. Given a wide set of parameter values and the specifics of the regulation, we concluded that the distribution of LTV ratios would concentrate around 80%. This (predicted) swing in the LTV distribution, however, is unconditional on other relevant variables that may have changed simultaneously around the date of implementation of the regulation under study, most obviously economic activity and house price growth, among others. In this section we test the main prediction of the model above: that, controlling for a wide set of potentially relevant variables, the new regulation of financial provisions in the mortgage market led unambiguously to lower LTV ratios, with the discontinuity kink at 80% proving particularly relevant.

3.1. A Matching Exercise

Our argument is based on the conclusions of a matching exercise which exploits the richness of our dataset, using the coarsened exact matching (CEM) algorithm proposed by Iacus, King and Porro (2012). Matching is a widely used method for the evaluation of non-experimental treatments or programmes. The principle behind the method is intuitive: it contrasts the outcomes of programme participants (Y₁) with the outcomes of "comparable" non-participants (Y₀); an extensive summary of the benefits of matching can be found in Heckman, Ichimura and Todd (1998). The main idea is that differences in outcomes between the two groups are attributed to the programme or treatment, provided the groups were indeed "comparable" in every other sense.

The method is powerful, so it is no surprise that applications can be found in the evaluation of an extensive list of policies. For instance, Heckman, Ichimura and Todd (1997), Lechner (2002), Jalan and Ravallion (2003) and Smith and Todd (2005) evaluate the impact of training programmes on earnings; Galiani, Gertler and Schargrodsky (2005) evaluate the impact of the privatization of water services on child mortality in Argentina in the 1990s; Encina (2013) studies the labor market effects of the 2008 pension reform in Chile; and Almus and Czarnitzki (2003) and Moser (2005) study the impact of subsidies and patent laws on research and development, patents, and innovation. The housing and credit markets are no exception. To name a couple among many others, Park (2016) studies mortgage performance for FHA and privately insured home purchases relative to uninsured mortgages, and Field and Torero (2006) study the impact on credit supply of obtaining a property title through a land titling program in Peru.

3.2. Some definitions

The introduction of the regulation on financial provisioning for mortgage loans is an exogenous event from the perspective of a given household's home-buying decision, but it is not entirely experimental. The problem, as with any non-experimental data, is that counterfactuals are unobserved. Ideally, one would be interested in observing the outcome variable for an individual who received the treatment and the outcome for that same individual without the treatment. In our setup, we would like a potential borrower to enter a bank and have a coin toss decide whether the new regulation applies to him, then enter again under the opposite assignment, so that we could compare the outcomes. Clearly this experiment is not available, and we use a matching method to try to uncover two samples that mimic this sort of experiment.

Following Smith and Todd (2005), define a dummy variable D which takes the value of one (D = 1) if the new regulation has come into force (from 2016 onwards), and zero (D = 0) if not (before 2016). Our object of interest is the mean differential effect of the outcome variable (Y, the LTV ratio) for those households subject to the new regulation compared with their counterfactual under no regulation: the mean effect of treatment on the treated for people with covariates X,

TT(X) = E(Y₁ − Y₀ | D = 1, X) = E(Y₁ | D = 1, X) − E(Y₀ | D = 1, X),        (13)

where E(Y₁ | D = 1, X) represents the outcome (LTV) for agents who were affected by the new regulation, and E(Y₀ | D = 1, X) the outcome those same agents would have obtained had they not been affected by the regulation even though it was already in force (the unobserved counterfactual). The first term can be identified directly from home buyers in 2016. The second term, however, is unobservable. As an approximation to the second term, E(Y₀ | D = 0, X) is used, that is, the no-treatment outcome of buyers when the regulation was not active. This approximation carries a potential selection bias:

B(X) = E(Y₀ | D = 1, X) − E(Y₀ | D = 0, X).

The fundamental identification condition for estimating (13) is then conditional mean independence (see Heckman, Ichimura and Todd (1998)),

E(Y₀ | D = 1, X) = E(Y₀ | D = 0, X),

which amounts to saying that conditioning on X eliminates the bias, or that, conditional on X, the agent samples under study are balanced. Exactly balanced data means that controlling further for X is unnecessary, because X is unrelated to the treatment variable. It also means that model dependence is minimized, and the researcher's discretion along with it (Ho et al., 2007).
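
Putting the last three expressions together, the observable difference in mean outcomes decomposes into the object of interest plus the selection bias,

E(Y₁ | D = 1, X) − E(Y₀ | D = 0, X) = TT(X) + B(X),

so that under conditional mean independence B(X) = 0 and the matched difference in means identifies TT(X).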

3.3. Coarsened Exact Matching

The most straightforward (and ideal) matching would be exact matching: emulating a fully blocked experiment in which two agents with the same covariates (X) are matched, and treatment is then randomly applied to one of them. This type of matching not only balances unobserved covariates on average, but balances observed covariates exactly (Ho et al., 2007). Unfortunately, when using several covariates, and when at least one of them is a continuous variable, this approach becomes impractical because finding exact matches becomes unlikely. Other methods of approximate matching rely on finding "close enough" covariates for the control and treated agents, notably Mahalanobis Distance Matching (MDM), the popular Propensity Score Matching (PSM), and Coarsened Exact Matching (CEM).

In this paper we choose CEM over MDM and PSM. We choose not to use PSM because it is the least efficient of the three methods. PSM takes several covariates X and summarizes them into the "propensity score", using only this scalar as a measure of distance between treated and control units (as opposed to using a distance that considers all k dimensions of X). It then prunes any observations that do not get matched. Doing so results in a loss of information, because there is an inherently random component dictating which observations are dropped. Notably, it is not the pruning per se that makes the method less efficient; on the contrary, all matching methods rely on some form of pruning. PSM is less efficient than the alternatives because of the way that pruning is performed. The second method, MDM, emulates a fully blocked experiment by defining a (Euclidean) distance between covariates (X). It then prunes units which are not close enough, and compares the outcome variable on the surviving matches. It clearly deals more satisfactorily with continuous variables and seeks to compare treated and control covariates using a multidimensional notion of distance, thereby avoiding random pruning. The shortcoming of the method is the not-so-obvious way of weighting covariates measured in different units in the Euclidean distance. CEM addresses this point more directly, while keeping all the advantages of MDM.

CEM is an approximation to exact matching. We have already made the point that while exact matching provides perfect balance, it does so at the cost of producing very few matches, in particular when a covariate is a continuous variable. CEM attempts to address this weakness. The idea behind CEM, as presented in Iacus, King and Porro (2012), is to temporarily coarsen each variable into substantively meaningful groups; exact-match on these new data; sort observations into strata; prune any stratum without both treated and control units; and pass on only the original, uncoarsened values of the observations that survive the pruning. The method is more powerful if the coarsening is informed by a meaningful grouping of covariates. There are other attractive properties of the method. First, CEM belongs to the group of monotonic imbalance-reducing methods, which means that the balance between treated and control groups is chosen ex ante (i.e. by means of the coarsening), rather than checked post-estimation as in propensity score matching. Also, CEM meets the congruence principle, which states that the data and analysis spaces should be the same; this is achieved via the pruning of observations whose strata (bins in the coarsening) fail to find a match in the complementary (treated/control) group. Finally, CEM restricts matched data to areas of common support by construction, a requirement that has to be checked post-estimation when using PSM.
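
A minimal sketch of this logic is given below; it is not the cem package used in the paper, and the column names are hypothetical. It coarsens each covariate into bins, exact-matches on the coarsened values, drops strata lacking either treated or control units, and attaches the usual CEM weights so that the retained control units reproduce the treated group's distribution across strata.

```python
import pandas as pd

def cem_match(df: pd.DataFrame, treat: str, coarsenings: dict) -> pd.DataFrame:
    """Coarsened exact matching sketch (not the cem package used in the paper).

    df: one row per unit; treat: name of a 0/1 treatment column;
    coarsenings: {column: bin edges, or None for variables matched exactly}.
    """
    work = df.copy()
    keys = []
    for col, edges in coarsenings.items():
        key = f"_bin_{col}"
        work[key] = work[col] if edges is None else pd.cut(work[col], edges)
        keys.append(key)
    # a stratum is the tuple of coarsened values
    work["_stratum"] = work[keys].astype(str).agg("|".join, axis=1)
    counts = work.groupby("_stratum")[treat].agg(["sum", "count"])
    has_both = (counts["sum"] > 0) & (counts["count"] - counts["sum"] > 0)
    matched = work[work["_stratum"].isin(counts[has_both].index)].copy()
    # CEM weights: treated units get weight 1; control units in stratum s get
    # (treated_s / control_s) * (matched controls / matched treated)
    t_tot = (matched[treat] == 1).sum()
    c_tot = (matched[treat] == 0).sum()
    per_s = matched.groupby("_stratum")[treat].agg(["sum", "count"])
    ratio = per_s["sum"] / (per_s["count"] - per_s["sum"]) * (c_tot / t_tot)
    matched["weight"] = matched["_stratum"].map(ratio).where(matched[treat] == 0, 1.0)
    return matched
```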

4. Results

We explore our data in three complementary sets of experiments: a benchmark case (two alternative exercises), an anticipation case, and a placebo test (two alternative exercises). In the benchmark case we compare individuals who were given credit before and after the regulation came into force in January 2016. While we examined several periods as candidates for the control group, the results are very robust to this choice. Hence, we report the results of using loans granted during the years 2012-14 (and 2013-14) as the control group, and 2016-17 as the treated group. The anticipation exercise uses loans granted in 2015 (after the regulation was announced but not yet enforced) as the treated group, and those granted in 2014 as the control group. Finally, we present two placebo exercises, in which the year 2014 is treated as the treatment group against two alternative control groups: individuals who were given credit in 2013, or alternatively in the years 2012-13.

[Table 2: Summary statistics of the matching across experiments.]

Across all our experiments we have kept the coarsening of variables unchanged, to ensure comparability. In particular, the vector X ∈ R^k includes the following seven dimensions on which we perform the matching: neighborhood ("comuna"); property price in real terms; maturity of the mortgage loan in years; lender institution; size of the property; income of the borrower (up to taxable income brackets); and type of property (apartment/house). Loan maturity is coarsened using the following cutpoints (in years): {15, 20, 25, 30, 35}. Neighborhood, lender institution, income bracket, and property type are not further coarsened. All remaining variables are coarsened automatically using the cem package of King et al. (2010), which in turn uses Scott's method (Scott, 2015). Given the prominent role of pruning in the method, Table 2 reports some summary statistics of the matching. Across all experiments, we can see that one of every four strata contains both control and treated units, and is therefore kept; all other strata contain no observations, or only treated or only control units. However, the method still uses three out of every four units in the control and treated groups, as can be seen from the ratio of matched units to total units in each group. That is, the matching method restricts attention to a common support region in which it uses most of the observation units intensively.
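
For concreteness, the sketch below applies the maturity cutpoints listed above with pandas, and bins a continuous covariate with Scott's normal-reference rule (bin width 3.49 · σ̂ · n^(−1/3), as in Scott, 2015), which is what the automatic coarsening relies on. The data here are simulated placeholders; the actual coarsening in the paper is performed by the cem package.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Loan maturity (years): coarsened with the fixed cutpoints listed in the text.
maturity = pd.Series(rng.integers(5, 41, size=1000))
maturity_bins = pd.cut(maturity, [0, 15, 20, 25, 30, 35, np.inf])

# A continuous covariate (e.g. the real price) binned with Scott's rule.
price = pd.Series(rng.lognormal(mean=8.0, sigma=0.5, size=1000))
width = 3.49 * price.std() * len(price) ** (-1 / 3)
edges = np.arange(price.min(), price.max() + 2 * width, width)
price_bins = pd.cut(price, edges, include_lowest=True)

print(maturity_bins.value_counts().sort_index())
```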

An overall imbalance metric, L₁, is also reported in Table 2. This statistic is a notion of distance between the multidimensional histograms of the treated and control groups (Iacus, King and Porro, 2012). Intuitively, it provides information about how balanced the covariates in the two groups are. Technically, consider a total of s strata (multidimensional boxes) in which the covariates are coarsened and exactly matched, and record the k-dimensional relative frequencies f_i for treated and g_i for control units. The measure of imbalance is half the sum of absolute differences over all s cell values: L₁(f, g) = (1/2) Σ_{i=1}^{s} |f_i − g_i|. If this statistic takes the value of zero we have achieved perfect balance; if it takes the value of one, we have total imbalance. As mentioned by Iacus, King and Porro (2012), this statistic is to matching what R² is to regression analysis.
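
A sketch of the L₁ computation on already-coarsened data follows; the stratum columns are assumed to be the coarsened covariates produced in the matching step, and the column names are hypothetical.

```python
import pandas as pd

def l1_imbalance(df: pd.DataFrame, treat: str, strata_cols: list) -> float:
    """Multivariate L1 imbalance of Iacus, King and Porro (2012):
    half the sum over strata of |rel. freq. treated - rel. freq. control|."""
    treated = df[df[treat] == 1]
    control = df[df[treat] == 0]
    f = treated.groupby(strata_cols).size() / len(treated)
    g = control.groupby(strata_cols).size() / len(control)
    # align the two histograms on the union of strata (missing cells are zero)
    f, g = f.align(g, fill_value=0.0)
    return 0.5 * (f - g).abs().sum()
```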

Next, we compare the results of the three sets of experiments in terms of the variable of interest, the loan-to-value ratio.

4.1. Benchmark results

In columns (1) and (2) of Tables 3 and 4 we report some statistics of the distribution of loan-to-value ratios. Even though the control group in (2) is smaller by one third, the method proves very robust to this exclusion. We can see that borrowers in 2012-14 were granted loans that were, on average, 81.5% of collateral value. During and after 2016, loans granted to a comparable group of borrowers were smaller, averaging 78.8% of collateral value. We attribute the 2.7 percentage point difference to the coming into force of the regulation on provisioning for credit risk in the mortgage market. The picture is clearer if we consider percentiles of the distribution, as in Table 4. From the first two benchmark experiments we learn that the 25th and 75th percentiles hardly changed. However, a large mass of borrowers did move. The median borrower pre-regulation borrowed 89.8% of collateral value; after the regulation was introduced, that number dropped to exactly 80%, as the calibrated model in the previous section anticipated.

4.2. Dealing with anticipation

In the previous baseline exercises, we assumed that a treated household was one that got a mortgage loan after January 1st, 2016, when the regulation was fully enforced, and that a control household was one that got a loan before December 2014, when the regulation was announced. Dropping 2015 is a choice made to keep the exercise as clean as possible, but the downside is that we may be missing a potentially important anticipation effect. In order to evaluate whether this is the case, consider column (3) in Tables 3 and 4. Note that while it is true that the mean LTV ratio is marginally (but statistically significantly) lower after December 2014, other moments of the distribution are unchanged: the 25th and 75th percentiles remain at 80% and 90%, respectively. Also, in contrast to the 9.8% drop in the baseline cases, the anticipation effect for the median borrower is only 0.8% (from 89.8% to 89%).

4.3. Placebo tests

In columns (4) and (5) of Tables 3 and 4 we report the results of two placebo tests. In the first, the control group consists of households who were granted loans in 2013, and the treated group of those who got theirs in 2014 (before the regulation was announced). The second placebo test extends the control period to 2012-13. As in the baseline case, the actual choice of control group (period) is immaterial as long as it lies before the regulation came into force. The first placebo test shows that the "treatment" resulted in a 0.26% hike, and the second in a 0.22% drop, in LTV ratios, as is evident from Table 3. In the same way, the 25th and 75th percentiles are basically unchanged. Furthermore, our placebo tests imply a drop in the median LTV of the treated group of between 0.4% and 0.6%. These numbers are an order of magnitude smaller than the 9.8% reduction in the median LTV in the baseline scenario. Taken together, all these experiments confirm that the new regulation on provisions for credit risk in the mortgage market did affect banks' decisions, leading them to extend smaller loans relative to the pledged collateral.

In order to make our point more explicit, Figure 6 presents the histograms corresponding to specifications (1) and (4). The red dotted lines mark 80% and 90%. Panel (a) shows the baseline exercise. It is clear that after the regulation an important probability mass moved from just below 90% to just below 80%; in particular, the number of loans granted at exactly 80% more than tripled with the new regulation. In contrast, Panel (b) shows a placebo test (2013 vs. 2014). We see that the "treatment" indeed raises the mass of loans at the 80% threshold, but by an order of magnitude less than in the baseline case. This is exactly the same intuition conveyed by Table 4. In Figure 7 we also present the cumulative distribution functions for the same two experiments. In Panel (a) it is clear that the treatment-group LTV CDF is different, both statistically and economically, from the control-group LTV CDF. In contrast, Panel (b) shows how similar the CDFs of the treatment and control groups are in the placebo test. These are only another way of presenting the same information as in Figure 6, but they highlight the effect of the coming into force of the regulation on provisions for credit risk of mortgage loans on the terms and conditions of granted loans.

5. Conclusion and road ahead

In December 2014 the Chilean banking supervisor announced that within a year, supervised institutions would need to effectively raise provisions for the credit risk of mortgage loans, to match expected losses according to a unified criterion. In this paper we have analyzed the effect of this change in regulation on the mortgage credit market. Notably, the new regulation raises required provisions contingent on leverage at the moment of ex-post realized delinquency, and, compared with the previous regulation, the implied financial cost for banks is substantial.

We offer evidence that, as a consequence of the regulation, granted loans were on average lower as a fraction of the value of pledged collateral. We do so by developing a small screening-under-imperfect-information model of borrowers' quality. In this model, the introduction of a higher provisioning cost, contingent on ex-post borrower payment behavior and on the borrower's leverage at the moment of delinquency, affects the ex-ante screening of loan applicants by financial institutions. The LTV ratio is an informative but imperfect signal of borrower quality, and hence financial institutions can use it to screen borrowers. By incorporating the features of the regulation into the model, we are able to generate an endogenous LTV limit, which helps us rationalize a clear bunching of loans in the data that we otherwise could not explain. Equipped with the model, we can examine the data more carefully.

We use novel and unique data from administrative records collected by the Internal Revenue Service. Our data span all transactions of real estate in Chilean territory from 2002 to the present, though we focus our analysis on the years 2012-17. We have access to buyer, seller and real estate characteristics. Using a matching algorithm, we evaluate the effect of the regulation on realized LTV ratios. We conclude that the regulation had a quantitatively relevant effect: banks accommodated it by granting smaller loans as a fraction of pledged collateral. We estimate that, after the regulation came into force, the average granted LTV ratio is 2.8% lower; for the median borrower, it is 9.8% lower. We also document that, because of the calibration of the regulation, a large fraction of loans is granted at exactly 80% LTV. In particular, the fraction of loans granted at exactly 80% has more than tripled and now represents one fourth of all loans. This is precisely the sort of bunching our theoretical model predicts.

This paper has left out other potential sources of information on borrower quality (e.g. credit scores, alternative collateral, past behavior on loans with the same banking institution). We have assumed in our model that, besides all the observable characteristics we detailed in the text, the only other signal a borrower can provide is the size of the downpayment relative to the value of the property. This is clearly an abstraction. A prediction of the model, though, is that if there is a higher cost of ending up with a low-quality borrower, the quality of the portfolio should on average be better after the introduction of the regulation. Unfortunately, evidence on ex-post delinquency rates is not observable just yet. This prediction could be tested in a few more years, when enough time has passed to allow low-quality borrowers to actually enter into arrears.

References

Albanesi, Stefania, Giacomo De Giorgi and Jaromir Nosal. 2017. "Credit Growth and the Financial Crisis: A New Narrative." Technical report, National Bureau of Economic Research.

Almus, Matthias and Dirk Czarnitzki. 2003. "The Effects of Public R&D Subsidies on Firms' Innovation Activities: The Case of Eastern Germany." Journal of Business & Economic Statistics 21(2):226-236.

Ates, Sina T and Felipe Eduardo Saffie. 2013. "Project Heterogeneity and Growth: The Impact of Selection." Unpublished, University of Pennsylvania.

Banco Central de Chile. 2017. "Chapter 2: Local Financial Markets." Informe de Estabilidad Financiera 1.

Beltratti, Andrea, Matteo Benetton and Alessandro Gavazza. 2017. "The Role of Prepayment Penalties in Mortgage Loans." Journal of Banking & Finance.

Ben-David, Itzhak. 2011. "Financial Constraints and Inflated Home Prices during the Real Estate Boom." American Economic Journal: Applied Economics 3(3):55-87.

Cerutti, Eugenio, Jihad Dagher and Giovanni Dell'Ariccia. 2017. "Housing Finance and Real-Estate Booms: A Cross-Country Perspective." Journal of Housing Economics.

Cerutti, Eugenio, Stijn Claessens and Luc Laeven. 2017. "The Use and Effectiveness of Macroprudential Policies: New Evidence." Journal of Financial Stability.

Crowe, Christopher, Giovanni Dell'Ariccia, Deniz Igan and Pau Rabanal. 2013. "How to Deal with Real Estate Booms: Lessons from Country Experiences." Journal of Financial Stability 9(3):300-319.

Encina, Jenny. 2013. "Pension Reform in Chile: A Difference in Difference Matching Estimation." Estudios de Economia 40(1).

Field, Erica and Maximo Torero. 2006. "Do Property Titles Increase Credit Access among the Urban Poor? Evidence from a Nationwide Titling Program." Technical report, Department of Economics, Harvard University, Cambridge, MA.

Galiani, Sebastian, Paul Gertler and Ernesto Schargrodsky. 2005. "Water for Life: The Impact of the Privatization of Water Services on Child Mortality." Journal of Political Economy 113(1):83-120.

Heckman, James J, Hidehiko Ichimura and Petra E Todd. 1997. "Matching as an Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme." The Review of Economic Studies 64(4):605-654.

Heckman, James J, Hidehiko Ichimura and Petra Todd. 1998. "Matching as an Econometric Evaluation Estimator." The Review of Economic Studies 65(2):261-294.

Ho, Daniel E, Kosuke Imai, Gary King and Elizabeth A Stuart. 2007. "Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference." Political Analysis 15(3):199-236.

Hott, Christian. 2015. "A Model of Mortgage Losses and its Applications for Macroprudential Instruments." Journal of Financial Stability 16:183-194. URL: http://www.sciencedirect.com/science/article/pii/S1572308914000576

Iacus, Stefano M, Gary King and Giuseppe Porro. 2012. "Causal Inference without Balance Checking: Coarsened Exact Matching." Political Analysis 20(1):1-24.

IMF. 2011. "Housing Finance and Financial Stability - Back to Basics." Global Financial Stability Report, IMF.

Jalan, Jyotsna and Martin Ravallion. 2003. "Estimating the Benefit Incidence of an Antipoverty Program by Propensity-Score Matching." Journal of Business & Economic Statistics 21(1):19-30.

King, Gary, Matthew Blackwell, Stefano Iacus and Giuseppe Porro. 2010. "cem: Coarsened Exact Matching in Stata." Stata Journal.

Kuttner, Kenneth N and Ilhyock Shim. 2016. "Can Non-Interest Rate Policies Stabilize Housing Markets? Evidence from a Panel of 57 Economies." Journal of Financial Stability 26:31-44.

Lechner, Michael. 2002. "Some Practical Issues in the Evaluation of Heterogeneous Labour Market Programmes by Matching Methods." Journal of the Royal Statistical Society: Series A (Statistics in Society) 165(1):59-82.

Mas-Colell, Andreu, Michael Whinston and Jerry Green. 1995. Microeconomic Theory. Oxford University Press. URL: http://EconPapers.repec.org/RePEc:oxp:obooks:9780195102680

Matus, Jose Miguel. 2015. "Provisiones por Riesgo de Credito de la Banca Nacional: Analisis de los Cambios Normativos, Periodo 1975-2014." Studies in Economic Statistics, Central Bank of Chile.

Moser, Petra. 2005. "How Do Patent Laws Influence Innovation? Evidence from Nineteenth-Century World's Fairs." The American Economic Review 95(4):1214-1236.

Pacheco, David, Carlos Pugar and Eduardo Valdebenito. 2014. "Modelo Estandar de Provisiones para Cartera Hipotecaria Residencial." Technical report, Serie de Estudios Normativos.

Park, Kevin A. 2016. "FHA Loan Performance and Adverse Selection in Mortgage Insurance." Journal of Housing Economics 34:82-97.

Qi, Min and Xiaolong Yang. 2009. "Loss Given Default of High Loan-to-Value Residential Mortgages." Journal of Banking & Finance 33(5):788-799.

Scott, David W. 2015. Multivariate Density Estimation: Theory, Practice, and Visualization. John Wiley & Sons.

Smith, Jeffrey A and Petra E Todd. 2005. "Does Matching Overcome LaLonde's Critique of Nonexperimental Estimators?" Journal of Econometrics 125(1):305-353.

Stiglitz, Joseph E and Andrew Weiss. 1981. "Credit Rationing in Markets with Imperfect Information." The American Economic Review 71(3):393-410.