2021, A. M. Karminsky et al. (eds.), Risk Assessment and Financial Regulation in Emerging Markets’ Banking, Advanced Studies in Emerging Markets Finance
https://doi.org/10.1007/978-3-030-69748-8_6
This chapter proposes an approach that decomposes the RR/LGD model development process into two stages: building the RR/LGD rating model and then calibrating it in a linear form that minimizes residual risk. The residual risk in the recovery of defaulted debts arises from the high uncertainty of the recovery level around its expected value. Such residual risk should be reflected in the capital requirements for unexpected losses in the loan portfolio. This paper considers a simple residual risk model defined by a single parameter. A residual risk metric is proposed for developing an optimal RR/LGD model; for the linear model, this metric yields the final calibration formula for LGD. Residual risk parameters are calculated for RR/LGD models from several open data sources for developed and developing markets. An implied method for updating the RR/LGD model is constructed, with a correction for incomplete recovery through a recovery curve built on the training sets. Based on the recovery curve, a recovery indicator is proposed that is useful for monitoring and collecting payments. The recommendations given are important for validating the parameters of the RR/LGD model.
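The two-stage idea can be sketched in a few lines: a ranking score from the RR/LGD rating model is calibrated to realised LGDs through a linear fit, and the standard deviation of the residuals serves as a simple one-parameter residual-risk metric. This is an illustrative sketch, not the chapter's actual calibration formula; the function name and toy data are assumptions.

```python
# Hypothetical sketch: stage 1 produces a rating-model score per defaulted
# loan; stage 2 calibrates LGD = a + b * score by ordinary least squares.
# The residual standard deviation is used as a one-parameter residual-risk
# metric. Data below is invented for illustration.
from statistics import mean, pstdev

def calibrate_linear_lgd(scores, lgds):
    """Fit lgd ~ a + b * score by least squares; return (a, b, residual risk)."""
    sx, sy = mean(scores), mean(lgds)
    sxx = sum((x - sx) ** 2 for x in scores)
    sxy = sum((x - sx) * (y - sy) for x, y in zip(scores, lgds))
    b = sxy / sxx
    a = sy - b * sx
    residuals = [y - (a + b * x) for x, y in zip(scores, lgds)]
    return a, b, pstdev(residuals)  # residual-risk metric

# Toy training set: rating-model scores and realised LGDs on defaulted loans.
scores = [0.1, 0.3, 0.5, 0.7, 0.9]
lgds   = [0.15, 0.32, 0.48, 0.72, 0.88]
a, b, resid_risk = calibrate_linear_lgd(scores, lgds)
```

A small residual-risk value indicates the linear calibration explains most of the recovery uncertainty; a large value signals residual risk that would need to be carried into the capital requirement.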
Journal of Risk and Financial Management
In a default event, several obligors simultaneously experience financial difficulty in servicing their debt to the point where the entire market can experience a sudden yet significant jump to a credit default. To help protect lenders against a jump-to-default event, regulators require banks to hold capital equivalent to the default risk charge as a buffer against the losses they may incur. The Basel regulatory committee has articulated and set default risk modelling guidelines to improve comparability amongst banks and enable a consistent bank-wide default risk charge estimation. Emerging markets are unique because they usually have illiquid markets and sparse data. Thus, implementing an emerging market default risk model and, at the same time, complying with the regulatory guidelines can be non-trivial. This research presents a framework for modelling the default risk charge in emerging markets in line with the regulatory requirements. The default correlation model inputs are deri...
Procedia Economics and Finance, 2015
This paper deals with methods for estimating credit risk parameters, such as Probability of Default (PD) and Loss Given Default (LGD), from market prices. Precise evaluation of these parameters is important not only for banks calculating their regulatory capital, but also for investors pricing risky bonds and credit derivatives. In this paper, we introduce reduced-form analytical methods for calculating LGD when pricing Credit Default Swaps. Reduced-form credit risk models were introduced as a reaction to the structural approach, chiefly to reduce the informational burden of modelling credit risk. In the reduced-form approach, the market value of a defaulted bond equals the fraction of its exposure at default that is recovered. We use the recovery-of-face-value convention, which Hull & White (2000) extended to coupon bonds in their model.
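In the simplest reduced-form setting this inversion reduces to the so-called "credit triangle": a flat CDS spread s is approximately the hazard rate times LGD, s ≈ λ · LGD, so fixing one parameter implies the other. A minimal sketch, with illustrative numbers rather than the paper's data:

```python
# Credit triangle from reduced-form pricing: spread = hazard * LGD.
# Given a market spread and a convention for one parameter, the other
# follows. The spread, LGD and hazard values here are illustrative.
def implied_hazard(spread, lgd):
    """Hazard rate implied by a flat CDS spread under a fixed LGD."""
    return spread / lgd

def implied_lgd(spread, hazard):
    """LGD implied by a flat CDS spread under a fixed hazard rate."""
    return spread / hazard

s = 0.0120                            # 120 bp CDS spread
lam = implied_hazard(s, lgd=0.60)     # hazard under a 60% LGD convention
lgd = implied_lgd(s, hazard=0.02)     # LGD if the hazard is pinned at 2%
```

The symmetry of the two functions is exactly the identification problem: the spread alone cannot separate λ from LGD without an extra assumption or extra data.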
The Basel II Accord offers banks the opportunity to estimate Loss Given Default (LGD) if they wish to calculate their own value for the capital required to cover credit losses in extreme circumstances. This paper analyzes the various methods of modeling LGD and provides an alternative estimate of LGD using Merton's model for the valuation of assets. Four components are developed in this document: estimation of the minimum value that a financial asset could have, estimation of the loss given default (LGD), development of a practical component, and finally validation of the proposed model. JEL classification numbers: G17, G24, G32
2008
Despite the success of advanced credit portfolio models, many financial institutions still use a variance-covariance approach to portfolio modelling. When setting up such a framework, the parameters must be quantified and a number of assumptions have to be made. Assessing the level of the parameters is beyond the scope of this paper, since they should ultimately pertain to features of the actual dataset. The different assumptions, however, should at least be mutually consistent, and a model with an inconsistent set of parameters is clearly unacceptable. We found that the concept of a stochastic loss given default in conjunction with default correlations can give rise to an inconsistent set of axioms. We propose two consistent methodologies that do not add (too much) complexity to the ...
RIMS Kôkyûroku (数理解析研究所講究録), 2014
In recent years, a number of studies have been devoted to ways of estimating market-implied recovery rates. This quantity is important for risk management as well as for credit derivative pricing. Unlike historical recovery rates, which are backward-looking, market-implied recovery rates subsume information about the market's expectation of future economic conditions. Although this quantity varies significantly over time and is negatively correlated with default rates (see e.g. Altman et al., 2005, Acharya et al., 2007), most studies assume it constant and fix it at 40% for senior unsecured bonds. The reason is that in most credit risk approaches, the default probability and the recovery rate enter the pricing formula of credit derivatives multiplicatively linked, which makes separate identification difficult. This identification problem has been known since Duffie & Singleton (1999). Many techniques have been considered for separately estimating implied default probabilities and recovery rates from credit spreads (see Schläfer, 2011). One technique is to specify a link between implied default and recovery rates; for example, Bakshi et al. (2006) specify recovery rates as a function of default intensity. Another technique is to use credit default swap term structure information. Pan & Singleton (2008) use term structure information on sovereign CDS to estimate constant implied recovery rates of sovereign bonds; they show that default risk and recovery rate can be identified under the assumption of recovery of face value. Christensen (2007) uses CDS data on Ford Motor Co. and concludes that default and recovery risk can be jointly identified from CDS term structure data, assuming both constant and stochastic recovery. Schneider et al. (2011) assume constant implied recovery rates and use CDS on senior unsecured bonds of 278 U.S. corporates.
Doshi (2011) uses senior and subordinate CDS of 46 firms to jointly estimate default intensity and stochastic recovery rates. In this paper, we attempt to use CDS term structure information to decompose market-implied default intensity and recovery rate, and to investigate whether this is really possible. To do so, we first specify a joint model of interest rates, default intensity, and recovery rates, which incorporates the negative correlation between default intensity and recovery rates while trying to preserve tractability. We consider both constant and stochastic recovery models. For the interest rate term structure, we use the modified arbitrage-free Nelson-Siegel model proposed by Sim & Ohnishi (2012). In the stochastic recovery case, a logistic model is assumed for loss given default (LGD), so that the recovery rate is kept between 0 and 1, and the negative correlation between default intensity and recovery rate is captured via an interest rate factor. Using generalized transforms of affine processes, the CDS pricing formula can be obtained explicitly even under the logistic LGD assumption. We then investigate whether default intensity and recovery rate can be jointly estimated, through an empirical estimation and a simulation study.
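The logistic LGD specification mentioned above can be sketched directly: LGD is the logistic transform of a latent factor, which guarantees that the recovery rate RR = 1 − LGD stays strictly inside (0, 1). The function name and latent values below are illustrative assumptions, not the paper's estimated model.

```python
# Sketch of a logistic LGD specification: LGD = 1 / (1 + exp(-latent)).
# Any real-valued latent factor (e.g. one loaded on an interest-rate
# factor) maps into a valid LGD in (0, 1). Values are illustrative.
import math

def logistic_lgd(latent):
    """Map a latent factor to LGD in (0, 1) via the logistic function."""
    return 1.0 / (1.0 + math.exp(-latent))

lgd_mid = logistic_lgd(0.0)  # neutral latent factor gives LGD = 0.5
# Recovery rates RR = 1 - LGD for a range of latent values:
recoveries = [1.0 - logistic_lgd(x) for x in (-3.0, 0.0, 3.0)]
```

Because the transform is monotone, a higher latent factor means higher LGD and lower recovery, which is how a correlation with an interest-rate factor can induce the negative default/recovery link.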
Journal of Banking & Finance, 2012
This article presents a modification of ruin option pricing model to estimate the implied probability of default from stock and option market prices. To test the model, we analyze all global financial firms with traded options in the US and focus on the subprime mortgage crisis period. We compare the performance of the implied probability of default from our model to the expected default frequencies based on the Moody's KMV model and agency credit ratings by constructing cumulative accuracy profiles (CAP) and the receiver operating characteristic (ROC). We find that the probability of default estimates from our model are equal or superior to other credit risk measures studied based on CAP and ROC. In particular, during the subprime crisis our model surpassed credit ratings and matched or exceeded KMV in anticipating the magnitude of the crisis. We have also found some initial evidence that adding off-balance-sheet derivatives exposure improves the performance of the KMV model.
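The CAP/ROC comparison described above boils down to a rank statistic: the area under the ROC curve is the probability that a randomly chosen defaulter receives a higher risk score than a randomly chosen survivor (with ties counted half). A minimal sketch on toy data, not the paper's sample:

```python
# Minimal AUC computation of the kind behind CAP/ROC model comparisons:
# AUC = P(score of a defaulter > score of a non-defaulter), ties = 0.5.
# Scores and labels below are invented for illustration.
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]   # defaulters
    neg = [s for s, y in zip(scores, labels) if y == 0]   # survivors
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # model risk scores
labels = [1,   1,   0,   1,   0,   0]     # 1 = defaulted
auc = roc_auc(scores, labels)
```

An AUC of 0.5 is a random model and 1.0 a perfect ranker, which is what makes the measure suitable for comparing implied-PD estimates, KMV EDFs and agency ratings on a common scale.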
A MAJOR CHALLENGE in developing models that can effectively assess the credit risk of individual obligors is the limited availability of high-frequency objective information to use as model inputs. Most models estimate creditworthiness over a period of one year or more, which often implies the need for several years of historical financial data for each borrower. While reliable and timely financial data can usually be obtained for the largest corporate borrowers, they are difficult to obtain for smaller borrowers, and particularly difficult to obtain for companies in financial distress or default, which are key to the construction of accurate credit risk models. The scarcity of reliable data required for building credit risk models also stems from the highly infrequent nature of default events. In addition to the difficulties associated with developing models, the limited availability of data presents challenges in assessing the accuracy and reliability of credit risk models. In its recent report on credit risk modelling, the Basle Committee on Banking Supervision highlighted the relatively informal nature of the credit model validation approaches at many financial institutions. In particular, the Committee emphasised data sufficiency and model sensitivity analysis as significant challenges to validation. The Committee has identified validation as a key issue in the use of quantitative default models and concluded that "…the area of validation will prove to be a key challenge for banking institutions in the foreseeable future." This article describes several of the techniques that Moody's has found valuable for quantitative default model validation and benchmarking. More precisely, we focus on (a) robust segmentation of data for model validation and testing, and (b) several robust measures of model performance and inter-model comparison that we have found informative and currently use.
These performance measures can be used to complement standard statistical measures. We address the two fundamental issues that arise in validating and determining the accuracy of a credit risk model: what is measured, i.e. the metrics by which model 'goodness' can be defined; and how it is measured, i.e. the framework that ensures that the observed performance can reasonably be expected to represent the behavior of the model in practice. Model accuracy. When used as classification tools, default risk models can err in one of two ways. First, the model can indicate low risk when, in fact, the risk is high. This Type I error corresponds to the assignment of high credit quality to issuers who nevertheless default or come close to defaulting on their obligations. The cost to the investor can be the loss of principal and interest, or a loss in the market value of the obligation. Second, the model can assign a low credit quality when, in fact, the quality is high. Potential losses resulting from this Type II error include the loss of return and origination fees when loans are either turned down or lost through non-competitive bidding. These accuracy and cost scenarios are described schematically in Figures 1 and 2. Unfortunately, minimising one type of error usually comes at the expense of increasing the other. The tradeoff between these errors is a complex and important issue. It is often the case, for example, that a particular model will outperform another under one set of cost assumptions, but be disadvantaged under a different set of assumptions. Since different institutions have different cost and pay-off structures, it is difficult to present a single cost function that is appropriate across all firms. For this reason, here we use cost functions related only to the information content of the models. A validation framework. Performance statistics for credit risk models can be highly sensitive to the data sample used for validation.
To avoid embedding unwanted sample dependency, quantitative models should be developed and validated using some type of out-of-sample, out-of-universe and out-of-time testing approach on panel or cross-sectional data sets. However, even this seemingly rigorous approach can generate false impressions about a model's reliability if done incorrectly. Hold-out testing can easily miss important model problems, particularly when processes vary over time, as credit risk does. In the following section, we describe a validation framework that accounts for variations both across time and across the population of obligors. Validation methodologies for default risk models. The Basle Committee has identified credit model validation as one of the most challenging issues in quantitative credit model development.
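The out-of-time element of such a framework is easy to state concretely: train only on observations up to a cutoff date and test strictly after it, so measured performance reflects how the model would actually have been deployed. A minimal sketch, with invented records:

```python
# Minimal out-of-time split of the kind the validation framework calls
# for: the test set lies strictly after the training cutoff in time.
# Records are illustrative tuples of (year, risk score, default flag).
records = [
    (1996, 0.12, 0), (1997, 0.45, 1), (1998, 0.30, 0),
    (1999, 0.70, 1), (2000, 0.20, 0), (2001, 0.65, 1),
]

def out_of_time_split(rows, cutoff_year):
    """Train on rows up to cutoff_year inclusive, test on later rows."""
    train = [r for r in rows if r[0] <= cutoff_year]
    test  = [r for r in rows if r[0] >  cutoff_year]
    return train, test

train, test = out_of_time_split(records, 1998)
```

An out-of-universe split works the same way but partitions on obligor population (e.g. industry or region) rather than on time.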
Global Journal of Business Research, 2007
Loss Given Default (henceforth the LGD) is the ratio of losses to exposure at default. It includes the loss of principal, the carrying costs of non-performing loans and workout expenses. In light of the management and regulatory advances regarding LGD, this paper addresses the topic of choosing the proper rate to estimate the current value of recoveries. By means of a review of the available literature on LGD, the impacts of different solutions for the discount rate (contractual rate, risk-free rate and single-factor approaches) on the variability of LGD are analyzed and compared. In order to understand the influence of market constraints from both the static and dynamic standpoints, the paper studies the methodologies for the selection of the discount rate. Considering the limitations of the approaches found in both academic and operational literature, the paper proposes a multi-factor model to measure the discount rate based on systemic and specific factors. These factors, in light of the aggregate empirical evidence, can serve as explanations for the variability of LGD.
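The sensitivity of LGD to the discount-rate choice discussed above is mechanical: the same workout cash flows discounted at a higher risk-adjusted rate produce a smaller present value of recoveries and hence a higher LGD. A sketch with made-up exposure, cash flows and rates:

```python
# Illustration of how the discount-rate choice drives workout LGD:
# LGD = 1 - PV(recoveries) / EAD. EAD, cash flows and rates are
# invented numbers, not the paper's data.
def lgd_from_recoveries(ead, cashflows, rate):
    """cashflows: list of (years after default, amount recovered)."""
    pv = sum(c / (1.0 + rate) ** t for t, c in cashflows)
    return 1.0 - pv / ead

ead = 100.0
recoveries = [(1.0, 30.0), (2.0, 40.0)]              # workout recoveries
lgd_rf  = lgd_from_recoveries(ead, recoveries, 0.02)  # risk-free rate
lgd_adj = lgd_from_recoveries(ead, recoveries, 0.12)  # risk-adjusted rate
```

A multi-factor discount rate of the kind the paper proposes would replace the flat `rate` with one built from systemic and exposure-specific components.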
Lately, the credit rating agencies have been the subject of significant criticism for failing to warn the investors of the defaults well in advance. Investors in long-term debt instruments are usually risk averse, buy-and-hold types; and hence, for them, the variability of investment-grade default rates is particularly important since they employ simple investment-grade rating cut-offs in the design of their investment eligibility plan. According to ICRA (Investment Information and Credit Rating Agency of India) and the other credit rating agencies, default means that the company has either already failed in the payment of interest and/or principal as per terms or is expected to fail. The debt rating at no time informs as to how much bond face value holders would recover in the event of a default. The present regulations pertaining to the credit rating agencies preclude the investors and issuers from suing agencies for awarding a particular rating. Hence, credit ratings are of value only as long as they are credible. This paper tests the reliability of ratings assigned by ICRA on the basis of the actual default rate experience on long-term debt across five sectors over a period of seven years, i.e., 1995-2002. The reason for including only long-term debt instruments for the purpose of analysis is that the assigned rating and its movement can be observed only over a long period. Since the credit rating agencies do not publish ratings that are not accepted by the issuers, this study is limited to only those issues that have been accepted and used by the issuers. The default statistics were examined sector-wise, period-wise, and company/institution-wise. Analyses of the background and business, operating performance, management and systems, financial performance, prospects, key issues, and the reasons cited for defaults were undertaken with respect to all the companies. 
Simple metrics like default rates by rating grades and rating prior to default were used to analyse whether low ratings (i.e., speculative-grade ratings) were assigned by ICRA to defaulting credits well in advance of default. Further, an attempt was made to identify whether companies in default had issued other debt instruments that were rated by other credit rating agencies.
SSRN Electronic Journal, 2000
The main objective of this paper is to estimate a statistical model that incorporates information at different levels: collateral, facility, industry, zone and the macro economy to predict the Recovery Rates which will enable the bank to arrive at the Loss Given Default figure that would help to better price and manage credit risk. This estimated LGD can also play a critical role in meeting the Basel II requirements on advanced Internal Rating Based Approach (AIRB).
2019
The quantification of model risk is still in its infancy. This paper provides an operational quantification of this risk for a credit portfolio when the objective is to approximate the average loss. The methodology is easy to implement and does not require the construction of any worst-case model. The capital required to cover model risk depends on three components: an estimated impact of the incorrect model, an evaluated risk of inaccurate estimation of model risk, and the prediction error hedge factor. The approach is illustrated by an application to a portfolio of corporate loans segmented by grades.
Purpose – This paper aims at developing an early warning signal model for predicting corporate default in an emerging market economy like India. At the same time, it also presents methods for directly estimating corporate probability of default (PD) using financial as well as non-financial variables. Design/methodology/approach – Multiple Discriminant Analysis (MDA) is used for developing Z-score models for predicting corporate bond default in India. A logistic regression model is employed to directly estimate the probability of default. Findings – The new Z-score model developed in this paper exhibited not only high classification power on the estimation sample, but also high predictive power in terms of its ability to detect bad firms in the holdout sample. The model clearly outperforms the two contesting models, comprising Altman's original and emerging market sets of ratios respectively, in the Indian context. In the logit analysis, the empirical results reveal that including both financial and non-financial parameters describes default risk more accurately. Originality/value – Using the new Z-score model of this paper, banks as well as investors in an emerging market like India can get early warning signals about a firm's solvency status and might reassess the magnitude of the default premium they require on low-grade securities. The default probability (PD) estimate from the logistic analysis would help banks estimate credit risk capital (CRC) and set corporate pricing on a risk-adjusted return basis.
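An MDA-based Z-score is just a linear discriminant over financial ratios with a cutoff separating likely-solvent from likely-distressed firms. The sketch below uses Altman (1968)-style weights purely for illustration; the paper's re-estimated Indian-market coefficients are not reproduced here.

```python
# Generic Z-score of the Altman type: a weighted sum of financial ratios.
# Weights and firm ratios below are illustrative, not the paper's model.
def z_score(ratios, weights):
    """Linear discriminant score over a tuple of financial ratios."""
    return sum(w * x for w, x in zip(weights, ratios))

weights = (1.2, 1.4, 3.3, 0.6, 1.0)   # Altman (1968)-style weights
# Ratios: working capital/TA, retained earnings/TA, EBIT/TA,
# market equity/liabilities, sales/TA (toy values).
healthy    = (0.25, 0.30, 0.18, 1.10, 1.50)
distressed = (0.02, -0.10, 0.01, 0.20, 0.60)
z_h = z_score(healthy, weights)
z_d = z_score(distressed, weights)
```

In Altman's convention, scores below about 1.81 fall in the distress zone and scores above about 2.99 in the safe zone; the discriminant weights and cutoffs are what MDA re-estimates for a new market.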
Industrija, 2016
In this paper a quantitative PD model is developed according to the Basel Capital Accord standards. The modeling dataset is based on financial statements information from the Republic of Serbia. The goal of the paper is to develop a credit scoring model capable of producing PD estimates with high predictive power on a sample of corporate entities. The modeling is based on 5 years of end-of-year financial statements data of available Serbian corporate entities. The weight of evidence (WOE) approach has been applied to quantitatively transform and prepare the financial ratios. Correlation analysis has been used to shorten the long list of variables and to remove highly interdependent variables from the training and validation datasets. Following best banking practice and the academic literature, the final model is obtained using adjusted stepwise logistic regression. The proposed model and its financial ratio constituents are discussed and benchmarked against examples from the relevant academic literature.
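The WOE transform used above replaces each bin of a financial ratio by the log-odds of goods versus bads in that bin, WOE = ln((goods share)/(bads share)), which linearises the ratio for the logistic regression. A sketch with invented counts:

```python
# Weight-of-evidence transform for a binned financial ratio:
# WOE(bin) = ln( (goods in bin / total goods) / (bads in bin / total bads) ).
# Counts below are illustrative, not the Serbian dataset.
import math

def woe(goods_in_bin, bads_in_bin, total_goods, total_bads):
    return math.log((goods_in_bin / total_goods)
                    / (bads_in_bin / total_bads))

total_goods, total_bads = 900, 100
bins = [(500, 20), (300, 30), (100, 50)]   # (goods, bads) per ratio bin
woes = [woe(g, b, total_goods, total_bads) for g, b in bins]
```

Positive WOE marks a bin with a better-than-average good/bad mix, negative WOE a worse one; the monotone trend across bins is what makes the transformed ratio well-behaved in logistic regression.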
SSRN Electronic Journal, 2000
The paper analyzes a two-factor credit risk model that captures default and recovery rate variation, their mutual correlation, and dependence on various explanatory variables, while at the same time allowing the unexpected credit loss to be computed analytically. We propose and empirically implement an estimation of the model based on aggregate and exposure-level Moody's default and recovery data. The results confirm the existence of significantly positive default and recovery rate correlation. We empirically compare the unexpected loss estimates based on the reduced two-factor model with Monte Carlo simulation results and with the current regulatory formula outputs. The results show a very good performance of the proposed analytical formula, which could feasibly replace the current regulatory formula.
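The "current regulatory formula" benchmark here is the Basel IRB single-factor (Vasicek) unexpected loss: the 99.9% conditional PD is Φ((Φ⁻¹(PD) + √ρ·Φ⁻¹(0.999)) / √(1−ρ)) and UL = LGD · (PD_cond − PD). A self-contained sketch (with a simple bisection inverse of the normal CDF, and illustrative inputs):

```python
# Basel IRB / Vasicek single-factor unexpected loss, as a benchmark of
# the kind the paper compares against. PD, LGD and rho are illustrative.
import math

def phi(x):
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (phi is monotone increasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def unexpected_loss(pd, lgd, rho, q=0.999):
    """UL = LGD * (conditional PD at quantile q - PD)."""
    pd_cond = phi((phi_inv(pd) + math.sqrt(rho) * phi_inv(q))
                  / math.sqrt(1.0 - rho))
    return lgd * (pd_cond - pd)

ul = unexpected_loss(pd=0.01, lgd=0.45, rho=0.12)
```

A two-factor model of the kind the paper proposes additionally makes LGD stochastic and correlated with the default factor, which raises UL relative to this fixed-LGD benchmark.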
SSRN Electronic Journal, 2000
The global financial crisis highlighted the fact that default and recovery rates of multiple borrowers generally deteriorate jointly during economic downturns. The vast majority of the literature, as well as many industry credit-portfolio risk models, ignore this and analyze default probabilities and recoveries in the event of default separately. As a result, the models project losses that are too low in economic downturns such as the recent financial crisis. Nevertheless, alternatives that incorporate the dependence between probabilities of default and recovery rates have been proposed. This paper is the first of its kind to assess the performance of these structurally different approaches. Four banks using different estimation procedures are compared. We use root mean square errors and relative absolute errors to measure the predictive accuracy of each procedure. The results show that models accounting for the correlation of default and recovery do indeed perform better than models ignoring it.
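The two accuracy measures named above are standard and easy to state as plain functions: root mean square error, and relative absolute error (total absolute error relative to that of a naive mean predictor). The loss series below is toy data, not any bank's forecasts:

```python
# RMSE and relative absolute error, the two predictive-accuracy measures
# used in the comparison above. Data is invented for illustration.
from statistics import mean
import math

def rmse(pred, actual):
    return math.sqrt(mean((p - a) ** 2 for p, a in zip(pred, actual)))

def relative_absolute_error(pred, actual):
    """Total |error| relative to a naive predictor that always says mean."""
    naive = mean(actual)
    return (sum(abs(p - a) for p, a in zip(pred, actual))
            / sum(abs(naive - a) for a in actual))

actual = [0.40, 0.55, 0.35, 0.60]   # realised loss rates
pred   = [0.42, 0.50, 0.38, 0.58]   # model forecasts
r = rmse(pred, actual)
rae = relative_absolute_error(pred, actual)
```

A relative absolute error below 1 means the model beats the naive mean predictor, which is the kind of threshold that makes cross-bank comparisons interpretable.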
Journal of Global Economy, 2018
Innovative Approach for Forecasting Corporate Default Risk (Prashanta Kumar Behera): Corporate default risk parameters are dynamic in nature, and understanding how these parameters change over time is a fundamental task for risk management. In this research paper I forecast corporate default rates. I work with historical credit migrations data to construct time series of interest and to visualize default rate dynamics, and I use some of the constructed series together with additional data to fit a forecasting model for corporate default rates and to show some back-testing and stress testing. A linear regression model for corporate default rates is presented, but the tools and concepts described can be u...
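A linear-regression default-rate forecast of the kind described can be sketched in a few lines: regress the default rate on a lagged macro indicator and project one step ahead. The series and the choice of GDP growth as the predictor are illustrative assumptions, not the paper's data:

```python
# Sketch of a linear-regression default-rate forecast: default rate
# regressed on a lagged macro factor, then projected one step ahead.
# Both series are invented for illustration.
def ols_fit(xs, ys):
    """Return (intercept, slope) of y ~ a + b*x by least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

gdp_growth   = [3.1, 2.5, 1.0, -0.5, 2.0]   # lagged macro factor, %
default_rate = [1.2, 1.5, 2.4, 3.5, 1.8]    # corporate default rate, %
a, b = ols_fit(gdp_growth, default_rate)
forecast = a + b * 1.5   # projected default rate if growth is 1.5%
```

Stress testing in this setup means feeding adverse values of the macro factor (e.g. sharply negative growth) through the fitted line; back-testing means refitting on a rolling window and comparing each one-step forecast with the realised rate.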
The United States is a nation of debtors. By the end of 2007, total debt outstanding by households, businesses, state and local governments, and the federal government added up to $31.2 trillion. The domestic financial sector accounted for half of this total, or $15.8 trillion. The size of the debt market is quite large. Indeed, it exceeds both the U.S. GDP in 2007 ($13.8 trillion) and the equity market value of all domestic corporations ($15.5 trillion). The primary risk of all this debt is credit risk, or the risk of default. The current credit crisis demonstrates how shifts in credit spreads and market liquidity can also significantly impact debt values. Although these alternative factors are important for understanding debt markets, we will focus here only on default risk. Investors measure default risk in many different ways, and there have been important recent innovations in this regard. The state of the art in assessing corporate credit risk is based on one of three approaches: 1) the Merton distance-to-default measure, 2) the reduced-form approach, and 3) credit ratings. We will compare and contrast these three approaches, showing that the reduced-form approach is preferred because of its generality, flexibility, and superior forecasting ability. Merton's Distance-to-Default. For more than three decades, a common approach used to measure a firm's default probability has been the so-called distance-to-default. This measure is based on the pioneer
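The distance-to-default measure introduced above is the number of standard deviations by which expected log asset value exceeds the default barrier, DD = (ln(V/D) + (μ − σ²/2)T) / (σ√T), mapped to a default probability via the normal CDF. A sketch with illustrative inputs (in practice V and σ are themselves backed out from equity data):

```python
# Merton distance-to-default: DD = (ln(V/D) + (mu - 0.5*sigma^2)*T)
#                                  / (sigma * sqrt(T)),
# with default probability PD = Phi(-DD). Inputs are illustrative.
import math

def distance_to_default(V, D, mu, sigma, T=1.0):
    """V: asset value, D: default barrier (debt), mu: asset drift."""
    return ((math.log(V / D) + (mu - 0.5 * sigma ** 2) * T)
            / (sigma * math.sqrt(T)))

def merton_pd(V, D, mu, sigma, T=1.0):
    dd = distance_to_default(V, D, mu, sigma, T)
    return 0.5 * (1.0 - math.erf(dd / math.sqrt(2.0)))  # Phi(-dd)

dd = distance_to_default(V=120.0, D=80.0, mu=0.08, sigma=0.25)
pd = merton_pd(V=120.0, D=80.0, mu=0.08, sigma=0.25)
```

The appeal of the measure is that it condenses leverage, asset volatility and drift into one number; its limitation, which motivates the reduced-form comparison above, is its reliance on the Merton structural assumptions.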
This paper offers a joint estimation approach for forecasting probabilities of default and loss rates given default in the presence of selection. The approach accommodates fixed and random risk factors. An empirical analysis identifies bond ratings, borrower characteristics and macroeconomic information as important risk factors. A portfolio-level analysis finds evidence that common risk measurement approaches may underestimate bank capital by up to 17 per cent relative to the presented model.
2003
This paper proposes a simple approach to infer the risk-neutral density of recovery rates implied by the prices of the debt securities of a firm. The proposed approach is independent of modeling default arrival rates and allows for violation of the absolute priority rule (APR). The paper demonstrates that a new statistic, the adjusted relative spread, captures risk-neutral recovery information in debt prices. Interest rates and firm tangible assets are shown to be significant determinants of the price of recovery. An application illustrates the pricing of credit derivatives written on the realized recovery rate. JEL classification: G130, G330