MPRA
Munich Personal RePEc Archive

A CAMEL Rating's Shelf Life

Cole, Rebel A. and Gunther, Jeffery W.

DePaul University; Federal Reserve Bank of Dallas

13 November 1995

Online at [Link]
MPRA Paper No. 24693, posted 30 August 2010
A CAMEL Rating's Shelf Life
Rebel A. Cole
Economist
Board of Governors of the Federal Reserve System
Washington, DC
Jeffery W. Gunther
Senior Economist and Policy Advisor
Federal Reserve Bank of Dallas
Dallas, TX
Abstract:
How quickly do the CAMEL ratings regulators assign to banks during on-site examinations
become "stale"? One measure of the information content of CAMEL ratings is their ability to
discriminate between banks that will fail and those that will survive. To assess the accuracy of
CAMEL ratings in predicting failure, Rebel Cole and Jeffery Gunther use as a benchmark an off-
site monitoring system based on publicly available accounting data. Their findings suggest that,
if a bank has not been examined for more than two quarters, off-site monitoring systems usually
provide a more accurate indication of survivability than its CAMEL rating. The lower predictive
accuracy for CAMEL ratings “older” than two quarters causes the overall accuracy of CAMEL
ratings to fall substantially below that of off-site monitoring systems. The higher predictive
accuracy of off-site systems derives from both their timeliness—an updated off-site rating is
available for every bank in every quarter—and the accuracy of the financial data on which they
are based. Cole and Gunther conclude that off-site monitoring systems should continue to play a
prominent role in the supervisory process, as a complement to on-site examinations.
JEL Classification: G21, G28
Key Words: Bank, Bank Failure, CAMEL, Commercial Bank, Offsite Supervision
How long does a supervisory rating derived from an on-site examination of a bank's
financial condition adequately reflect the bank's financial viability? Insofar as financial
conditions can, and often do, change rapidly, we would not expect a given examination rating to
remain accurate for long periods of time. Yet, during the late 1980s, a tumultuous period for the
banking industry characterized by high failure rates, many banks went for several years between
on-site examinations. The need for more up-to-date examination ratings was recognized by
Congress and codified in the Federal Deposit Insurance Corporation Improvement Act of 1991
(FDICIA), which requires regulators to conduct annual on-site examinations.1 However, even
annual on-site examinations cannot always detect rapid changes in a bank’s financial condition.
The question remains as to how quickly the bank examination ratings commonly known as
"CAMEL" ratings become "stale".
We attempt to answer this question by analyzing the historical relationship between
examination ratings and bank failures. Although failure prediction is not the primary purpose of
the CAMEL rating, its ability to predict failures offers a convenient metric for assessing the
decay of information contained in an examination rating.
To assess the accuracy of examination ratings in predicting failure, we use as a
benchmark an off-site monitoring system based on publicly available accounting data. This
system is similar to one component of the Federal Reserve’s comprehensive Financial
Institutions Monitoring System (FIMS), which the Fed implemented in 1993 to monitor the
condition of banks between examinations.2 If up-to-date examination ratings are an accurate
measure of financial condition, then their ability to predict bank failures should be at least as high
as that of the ratings generated by our off-site monitoring system. In analyzing the predictive
accuracy of examination ratings, we take into account the length of time between on-site
examinations and subsequent failures because we expect recent examinations to be more accurate
in predicting failures than examinations conducted in the relatively distant past.
Our findings suggest that the information content of examination ratings decays rather
quickly. Specifically, the ability of examination ratings to anticipate failures appears to exceed
that of off-site monitoring systems only when the ratings used are based on on-site examinations
conducted no more than two quarters earlier. If a bank has not been examined for more than two
quarters, our findings suggest that off-site monitoring systems can provide a more accurate
indication of survivability. The reduction in predictive accuracy for relatively "old" CAMEL
ratings causes the overall accuracy of CAMEL ratings to fall substantially below that of off-site
monitoring systems.
Off-site monitoring systems are critically dependent upon the accuracy of their
accounting data inputs, however, and the integrity of those data can only be ensured by periodic
on-site examinations. These systems also may have difficulty in identifying emerging problems
unless these problems manifest themselves through the accounting data inputs. On-site
examinations seem more likely to identify such emerging problems and to require that
banks recognize emerging financial difficulties through reserves and charge-offs. Moreover,
systems such as our benchmark require a relatively large number of observed failures for
accurate estimation, and, historically, bank failures have been scarce.3 We therefore conclude that off-site
monitoring systems should continue to play a prominent role in the supervisory process, but only
as a complement to comprehensive on-site examinations. In addition, our results indicate that an
off-site monitoring model, such as the one used to produce our benchmark ratings, would be a
valuable tool for anyone interested in tracking the financial condition of individual banks.
On-site monitoring
The Uniform Financial Institutions Rating System, adopted in 1979, provides federal
bank regulatory agencies with a framework for rating the financial condition and performance of
individual banks. Regulators periodically visit banking offices to evaluate their financial
soundness, to monitor their compliance with laws and regulatory policies, and to assess the
quality of their management and systems of internal control.4
Based on the results of these on-site evaluations, regulators then rate the performance of
individual banks along five key dimensions—capital adequacy, asset quality, management,
earnings, and liquidity—yielding the rating system's acronym, CAMEL. Each of the five areas of
performance is rated on a scale of 1 to 5 as follows: 1—strong performance, 2—satisfactory
performance, 3—performance that is flawed to some degree, 4—marginal performance that is
significantly below average, and 5—unsatisfactory performance that is critically deficient and in
need of immediate remedial action.
Once each of the five areas of performance has been assigned a rating, a composite, or
overall, rating is derived, again on a scale from 1 to 5. The five composite rating levels are
described as follows in the Commercial Bank Examination Manual produced by the Board of
Governors of the Federal Reserve System: 1—an institution that is basically sound in every
respect, 2—an institution that is fundamentally sound but has modest weaknesses, 3—an
institution with financial, operational, or compliance weaknesses that give cause for supervisory
concern, 4—an institution with serious financial weaknesses that could impair future viability,
and 5—an institution with critical financial weaknesses that render the probability of failure
extremely high in the near term.
The frequency of on-site examinations has varied considerably over recent years. Before
FDICIA's adoption, banks often were not subject to annual examinations.5 Because a bank's
financial condition can change appreciably from one quarter to the next, more frequent on-site
examinations provide a more accurate assessment of a bank's current financial condition. And
the earlier regulators can identify a troubled bank, the more quickly they can intervene with
supervisory actions intended to return the bank to financial health or, if necessary, close the bank
so as to minimize losses to the Bank Insurance Fund.6
The benefits of more frequent on-site examinations, however, must be weighed against
the substantial costs of such examinations to both regulators and banks. The perceived trade-off
between the costs and benefits of more frequent examinations has kept Congress from
requiring on-site examinations more often than once a year.7 When banks are subject only
to annual on-site examinations, the task of monitoring individual banks on a more frequent basis
devolves to off-site monitoring systems such as FIMS.
Off-site monitoring
Various off-site monitoring systems have been developed to complement the CAMEL
rating system. While these systems have employed a wide variety of analytical tools, most have
relied on a common source of data—the Report of Condition and Income, or "call report"—
which each bank submits quarterly to its primary regulatory agency. The financial data contained
in this report provide timely information on the performance of individual banks and a strong
foundation for off-site monitoring systems.
It is important to note that one of the primary functions of the on-site examination process
is to ensure that each bank has in place a system of internal control that checks the accuracy and
reliability of its accounting data. Without accurate accounting data, off-site systems cannot
detect banks whose financial condition is deteriorating.
To illustrate the nature and function of off-site monitoring systems, we develop a system
based on key financial ratios derived from the bank call report data. In this system, we use
standard statistical methods to estimate the relationship between the financial ratios measured at
year-end 1985 for all U.S. insured commercial banks and the likelihood of bank failure during
the two-year period from the second quarter of 1986 through the first quarter of 1988.8
We use seven financial indicators, each measured as a percentage of gross assets, to
characterize the financial posture of individual banks. As shown in Table 1, these indicators are
measures of capital adequacy, asset quality, earnings, and liquidity—four of the five components
of the CAMEL rating. Equity capital, which serves as a buffer protecting a bank's solvency
against financial losses, is our measure of capital adequacy; more capital is expected to reduce
the chance of failure. We use three indicators of asset quality—loans past due 90 days or more
and still accruing interest, nonaccrual loans, and other real estate owned (which, for the most
part, consists of foreclosed real estate). Higher values of each indicator should increase the
probability of failure in subsequent years. To measure earnings, we use net income as our
indicator. Higher income generally reflects a lack of financial difficulties and so also would be
expected to reduce the likelihood of failure. Finally, we use two indicators of liquidity—
investment securities and large certificates of deposit ($100,000 or more). Liquid assets, such as
investment securities, enable a bank to respond quickly to unexpected demands for cash and
typically reflect relatively conservative financial strategies, whereas volatile liabilities, such as
large certificates of deposit, often reflect relatively aggressive financial strategies, impose high
interest expenses, and are subject to quick withdrawal. As a result, we expect higher values of
investment securities to reduce the chance of failure, whereas higher values of large certificates
of deposit should increase the probability of failure.
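
As an illustration, the indicators in Table 1 might be computed as follows. This is a minimal sketch only: the column names (gross_assets, equity, and so on) are hypothetical placeholders, not actual call report item codes.

import pandas as pd

def build_indicators(call_report: pd.DataFrame) -> pd.DataFrame:
    # Each indicator is expressed as a percentage of gross assets (Table 1).
    # Column names are hypothetical stand-ins for call report items.
    ga = call_report["gross_assets"]
    return pd.DataFrame({
        "equity_capital": 100 * call_report["equity"] / ga,                # capital adequacy: reduces failure risk
        "past_due_90":    100 * call_report["past_due_90_accruing"] / ga,  # asset quality: increases failure risk
        "nonaccrual":     100 * call_report["nonaccrual_loans"] / ga,      # asset quality: increases failure risk
        "oreo":           100 * call_report["other_real_estate"] / ga,     # asset quality: increases failure risk
        "net_income":     100 * call_report["net_income"] / ga,            # earnings: reduces failure risk
        "securities":     100 * call_report["investment_securities"] / ga, # liquidity: reduces failure risk
        "large_cds":      100 * call_report["large_time_deposits"] / ga,   # liquidity: increases failure risk
    })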
The historical relationship between these financial indicators and failure is estimated
using statistical methods.9 The estimation results indicate that the variables included in the
system are important indicators of bank survivability and that each affects the probability of
failure in the expected fashion. With the estimated relationship in hand, we can now insert into
the system values of the seven financial indicators reported for year-end 1987 to generate
predictions of the probability of failure for individual banks over the two-year period from the
second quarter of 1988 through the first quarter of 1990. This exercise illustrates the manner in
which regulators use off-site monitoring systems in practice. A historical relationship is
estimated between a set of financial indicators and the likelihood of bank failure, which then
provides the basis for generating predictions of future failures. Here, we compare the predicted
probabilities of failure for the period from the second quarter of 1988 through the first quarter of
1990 with the failures that actually occurred, thereby establishing a sense of the system's
predictive accuracy. We can then use the off-site surveillance system to benchmark the ability of
CAMEL ratings to anticipate bank failures.
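
As note 9 explains, the relationship is estimated with a probit model. The sketch below illustrates the two-step exercise just described: estimate on year-end 1985 data against failures from 1986:Q2 through 1988:Q1, then score year-end 1987 data to predict failures over 1988:Q2 through 1990:Q1. The inputs here are synthetic stand-ins (real inputs would be the indicator ratios and a 0/1 failure outcome), and statsmodels is just one library that could be used.

import numpy as np
import pandas as pd
import statsmodels.api as sm

cols = ["equity_capital", "past_due_90", "nonaccrual", "oreo",
        "net_income", "securities", "large_cds"]
rng = np.random.default_rng(0)

# Synthetic stand-ins for the year-end 1985 ratios and the 1986:Q2-1988:Q1
# failure indicator (see note 8 on the one-quarter lag).
ratios_1985 = pd.DataFrame(rng.normal(size=(1000, 7)), columns=cols)
failed = (rng.random(1000) < 0.03).astype(int)

# Step 1: estimate the historical relationship by probit.
model = sm.Probit(failed, sm.add_constant(ratios_1985)).fit(disp=0)

# Step 2: insert year-end 1987 ratios (here another synthetic frame) into
# the estimated relationship to predict failure over 1988:Q2-1990:Q1.
ratios_1987 = pd.DataFrame(rng.normal(size=(1000, 7)), columns=cols)
p_fail = model.predict(sm.add_constant(ratios_1987))  # off-site rating per bank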
The information content of CAMEL ratings
To measure the information content of CAMEL ratings, we test their ability to
discriminate between banks that will fail and banks that will survive.10 Accuracy in predicting
bank failure is an important ingredient of a successful banking supervision program, but it is
important to remember that CAMEL ratings were never intended to measure the probability of
bank failure. Instead, they were designed to serve as a summary measure of financial condition,
not just a measure of catastrophic failure. A CAMEL rating can take on only five discrete
values, making it difficult to discriminate among banks within each rating class. Moreover,
regulators do not expect every bank assigned a rating of "5" to fail. Indeed, one goal of
bank supervision is to intervene and take actions that will return troubled banks to financial
health.
To provide a benchmark for assessing the accuracy of CAMEL ratings in predicting
failure, we use results from the off-site monitoring system presented in the previous section.
Since CAMEL ratings incorporate confidential information from on-site examinations, as well as
public information from the quarterly call reports and other sources, we expect that, in predicting
bank failures, up-to-date CAMEL ratings would be more accurate than the ratings from our off-
site monitoring system.11 Moreover, both CAMEL ratings and off-site ratings should be
significantly more accurate in predicting failure than a naive model that randomly selects a
sample of banks as likely to fail.
Are Timely CAMEL Ratings Informative?
In assessing the predictive accuracy of CAMEL ratings, we take into account the length
of time between on-site examinations and the beginning of our forecast period. Because
CAMEL ratings are assigned on a flow basis as examinations are completed, there are numerous
vintages of CAMEL ratings available at any one time. We expect the accuracy of CAMEL
ratings in predicting failures to be a decreasing function of the length of time between the
assignment of the rating and the beginning of the forecast period.
To test this hypothesis, we assess the accuracy of the CAMEL ratings for individual
banks at year-end 1987 in predicting failures during the two-year period from the second quarter
of 1988 through the first quarter of 1990. Because not all bank examinations are conducted at
the same time, the CAMEL ratings available at year-end 1987 were assigned during a wide span
of time. While many of the ratings were based on examinations conducted during the fourth
quarter of 1987, many others were assigned much earlier and were based on examinations
conducted during the previous year and even earlier. Because the financial condition of
individual banks can change appreciably from quarter to quarter, the CAMEL ratings based on
examinations conducted near the end of 1987 should provide a better indication of future
survivability than those based on examinations conducted a year or more earlier.
To provide an indication of how well recent CAMEL ratings predict failure, we first limit
our sample to ratings assigned “as of” the fourth quarter of 1987.12 Of the 9,880 insured
commercial banks used in this analysis, 2,254 had CAMEL ratings assigned based on financial
data from the fourth quarter.13 We sort the 2,254 banks from worst to best based on their
composite CAMEL ratings. Then, we sort the banks within each of the five possible composite
ratings from worst to best based on the arithmetic average of their five CAMEL component
ratings.14 This is somewhat ad hoc, in that bank examiners do not intend for the component
ratings to be used as a means of ranking banks within rating classes, but some such ranking
procedure is necessary to obtain a metric comparable to our off-site monitoring score. Using the
resulting ranking as our guide, we expect the banks with the worst ratings to be the most likely to
fail during the two-year period from second-quarter 1988 through first-quarter 1990.
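
In code, this worst-to-best ordering is a simple two-key sort. The sketch below assumes a hypothetical data frame holding each bank's composite rating (1 to 5) and its five component ratings; higher numbers indicate worse condition, so the sort is descending.

import pandas as pd

def rank_worst_to_best(camel: pd.DataFrame) -> pd.DataFrame:
    # 'composite' and the component columns 'C', 'A', 'M', 'E', 'L'
    # (each on the 1-to-5 scale) are hypothetical column names.
    components = ["C", "A", "M", "E", "L"]
    camel = camel.assign(component_avg=camel[components].mean(axis=1))
    # Sort descending: worst composite rating first, with ties broken by the
    # equally weighted average of the five component ratings (note 14).
    return camel.sort_values(["composite", "component_avg"], ascending=False)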
Chart 1 shows the accuracy of the CAMEL ratings based on fourth-quarter 1987 financial
data in predicting failures during the subsequent two-year period of interest (April 1988—March
1990).15 The horizontal axis measures the proportion of banks predicted to fail. For example,
the value of 10 on the horizontal axis indicates that the top 10 percent of the sample of banks, as
sorted from the worst to best CAMEL ratings, are predicted to fail. The vertical axis gives, as a
percentage of the total number of banks that actually failed, the number of failed banks correctly
identified as failures. So, for example, when the 10 percent of banks with the worst CAMEL
ratings are predicted to fail, Chart 1 indicates that 89 percent of the failures that actually occurred
are identified successfully. In comparison, the 10 percent of the same sample of banks with the
highest predicted probability of failure, as generated by the off-site monitoring system, includes
87 percent of the failures that actually occurred. Hence, when each system considers the 10
percent of banks most likely to fail, recently assigned CAMEL ratings are slightly more accurate
in identifying failures than are the ratings generated by our off-site monitoring system.
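
Each point on the curves in Chart 1 reflects a simple calculation: order the banks from worst to best (by CAMEL rank or by predicted failure probability), flag the worst x percent, and compute the share of actual failures that fall among the flagged banks. A minimal sketch, with toy inputs:

import numpy as np

def capture_rate(failed_worst_first: np.ndarray, pct: float) -> float:
    # Share of actual failures captured when the worst `pct` percent of
    # banks are predicted to fail. `failed_worst_first` holds 0/1 failure
    # outcomes ordered from worst-rated to best-rated bank.
    n_flagged = int(round(len(failed_worst_first) * pct / 100))
    return 100 * failed_worst_first[:n_flagged].sum() / failed_worst_first.sum()

# Toy example: 3 failures among 10 banks, all within the worst 40 percent.
failed = np.array([1, 1, 0, 1, 0, 0, 0, 0, 0, 0])
print(capture_rate(failed, 40))  # 100.0
print(capture_rate(failed, 20))  # 66.7: two of the three failures

Under the naive model discussed below, which flags banks at random, the expected capture rate simply equals the flagged percentage.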
Overall, the on-site and off-site systems' degrees of accuracy are comparable, as indicated
by the tendency for the two curves in Chart 1 to remain fairly close together. This is somewhat
surprising as we might expect the on-site system to be considerably more accurate when only
recently assigned ratings are used. However, there is an important feedback effect that greatly
benefits the off-site system. During the examination, supervisors require banks that have not
adequately reserved against or charged off losses to do so, and these actions are reflected in key
call report data that are inputs to the off-site monitoring system, e.g., capital, earnings, and asset
quality. Hence, the more recent the examination, the more accurate are the call report data, and
this accuracy improves the performance of the off-site monitoring system in identifying failures.
Both systems perform much better than the expected results of the naive model that
randomly selects potential failures. For example, if 10 percent of the banks are selected at
random and predicted to fail, only 10 percent of the failures would be successfully identified, on
average. The wide margin over this naive benchmark indicates that both recent CAMEL ratings
and off-site ratings are highly informative about the likelihood of bank failure.
How Soon Do CAMEL Ratings Become Stale?
While recently assigned CAMEL ratings provide a good indication of the survival
prospects for individual banks, the speed with which financial conditions can change suggests
that CAMEL ratings assigned in the relatively distant past may not predict future failures as well
as "fresh" CAMEL ratings. To provide an indication of how well relatively dated CAMEL
ratings predict failure, we augment our initial sample of banks rated as of fourth-quarter 1987
with banks rated as of the third quarter of that year. Of the 9,880 insured commercial banks used
in this analysis, 4,529 had CAMEL ratings based on financial data from the third or fourth
quarter. Once again, we sort these individual banks from worst to best based on their composite
CAMEL ratings and average CAMEL component ratings, with the expectation that those with
the worst ratings would be the most likely to fail during the two-year period from second-quarter
1988 through first-quarter 1990.
Chart 2 shows the accuracy of the CAMEL ratings based on data from the third or fourth
quarter of 1987 in predicting failures during the two-year period. Overall, the on-site and off-site
systems' levels of predictive accuracy are again comparable, as indicated by the closeness of the
two curves. When the 10 percent of the banks with the worst ratings are predicted to fail, the
CAMEL ratings capture 88 percent of the failures that actually occurred, while the off-site
monitoring system identifies 87 percent. These findings suggest that, for the time period
examined, no appreciable reduction occurs in the relative ability of CAMEL ratings to anticipate
failures when examinations conducted one quarter earlier are augmented with examinations
conducted two quarters earlier.
A different picture emerges, however, when banks whose most recent examination occurred
three quarters earlier are also included in the analysis. Chart 3 shows the accuracy of the CAMEL
ratings as of the second, third, or fourth quarter of 1987 in predicting failures during the two-year
period from second-quarter 1988 through first-quarter 1990. Of the 9,880 insured commercial
banks used in this analysis, 6,358 had CAMEL ratings based on financial data from the second,
third, or fourth quarter. When the banks with three-quarter-old CAMEL ratings are included in
the analysis, the accuracy of the CAMEL ratings in predicting failures is appreciably less than
that of the ratings (predicted probabilities of failure) generated by the off-site monitoring system.
When the 10 percent of the banks with the worst ratings are predicted to fail, the CAMEL
ratings identify 78 percent of the failures that actually occurred, whereas the off-site ratings
identify 85 percent of the failures.16 Based on these findings, it appears that a substantial
reduction occurs in the relative ability of CAMEL ratings to anticipate failures when
examinations conducted one and two quarters earlier are augmented with examinations
conducted three quarters earlier.17
The reduction in the predictive accuracy of CAMEL ratings continues when banks with
four-quarter-old CAMEL ratings are included in the analysis. Of the 9,880 insured commercial
banks used in this analysis, 7,872 had CAMEL ratings based on financial data from the first
through fourth quarters of 1987. As shown in Chart 4, for this broader sample of banks, the
ratings from the off-site monitoring system are substantially more accurate forecasts of bank
failure than the CAMEL ratings. When the 10 percent of the banks with the worst ratings are
predicted to fail, the CAMEL ratings identify 73 percent of the failures that actually occurred,
whereas the ratings from the off-site monitoring system capture 86 percent of the failures.
Finally, we consider all banks for which CAMEL ratings would have been available at
year-end 1987. Interestingly, of the 9,880 insured commercial banks analyzed, 2,008 had
CAMEL ratings at year-end 1987 based on financial data from 1986 or earlier. When these
2,008 banks are included and the entire sample of 9,880 banks is analyzed, the accuracy of the
off-site monitoring system relative to CAMEL ratings is even higher. When the 10 percent of the
banks with the worst ratings are predicted to fail, the CAMEL ratings identify only 74 percent of
the failures that actually occurred, whereas the ratings from the off-site monitoring system
identify 88 percent, as shown in Chart 5. The reduction in predictive accuracy for relatively old
CAMEL ratings causes the overall accuracy of CAMEL ratings to fall substantially below that of
off-site monitoring systems.18
These results indicate that CAMEL ratings can become stale rather quickly, pointing to
the conclusion that off-site monitoring systems provide regulators with valuable information on
bank survivability over and above the information generated by the examination process. In
practice, output from regulatory off-site monitoring systems is reviewed by supervisory personnel
in conjunction with information obtained during previous on-site examinations and other sources
including the Uniform Bank Performance Report and the Bank Holding Company Performance
Report. These latter reports are analytical tools, produced quarterly by supervisory personnel,
that show the effect of management decisions and economic conditions on a bank's financial
performance and balance sheet composition. The results of this comprehensive off-site analysis
are then used to accelerate the on-site examination of institutions showing financial deterioration;
to identify the areas of most supervisory concern in those institutions already scheduled for
examination; and to allocate the most experienced examiners to troubled institutions.
Conclusion
The findings reported here suggest that the information content of CAMEL ratings decays
rapidly. During the period examined, the ability of CAMEL ratings to anticipate failures is
comparable to or better than that of off-site monitoring systems only when the CAMEL ratings
are based on on-site examinations conducted no more than two quarters prior to the forecast
period. If a bank has not been examined for more than two quarters, then off-site monitoring
systems more accurately indicate survivability. The reduction in predictive accuracy for
relatively old CAMEL ratings causes the overall accuracy of CAMEL ratings to fall substantially
below that of off-site monitoring systems. The higher predictive accuracy of off-site ratings
derives from both their timeliness—an updated off-site rating is available for every bank in every
quarter—and the accuracy of the call report data on which they are based. Of course, these
conclusions are based on the particular period analyzed, and may not generalize to all other
periods.19 Nevertheless, the pattern of CAMEL ratings and bank failures during the recent
period of banking difficulties points to the value of off-site monitoring systems as a supplement
to the supervisory ratings generated from periodic on-site examinations. We conclude that off-
site monitoring systems such as the Federal Reserve’s FIMS should continue to play a prominent
role in the supervisory process.
Notes

1. FDICIA permits banks that are small, well-capitalized, and highly rated to be examined only once every eighteen months.

2. The Federal Reserve uses FIMS not only to track the financial condition of individual banks and banking organizations between on-site examinations but also to direct examination resources. An overview of FIMS is provided by Cole, Cornyn, and Gunther (1995). Putnam (1983) describes the bank surveillance systems used by regulators during the 1970s and early 1980s.

3. To generate accurate forecasts, off-site monitoring systems such as ours typically require that we observe 50 to 100 failures. Yet in 1993 and 1994, there were only 43 and 11 bank failures, respectively. Historically, from the mid-1930s until the early 1980s, no more than 20 bank failures were recorded in any one year. Whether systems based upon the failure experience of the 1980s will be accurate in predicting failures during the 1990s and beyond is open to debate.

4. According to the American Institute of Certified Public Accountants Committee on Working Procedures, “internal control comprises the plan of organization and all of the coordinate methods and measures adopted within a business to safeguard its assets, check the accuracy and reliability of its accounting data, promote operational efficiency, and encourage adherence to prescribed managerial policies.”

5. State-chartered banks regulated by the Federal Reserve generally were subject to annual examinations even before the FDICIA mandate.

6. Gilbert (1993) provides evidence that failing banks examined in their last twelve months of operation imposed lower losses on the Bank Insurance Fund, as a percentage of their assets, than banks that were not examined near the time of failure.

7. It is important to note that “problem banks,” those with composite CAMEL ratings of 4 or 5, are generally subject to an on-site examination twice per year.

8. Failures are identified starting in the second quarter of 1986, rather than the first quarter, to impose a one-quarter lag in the estimated relationship. This is done to approximate real-world conditions, under which edited call report data generally are not available until forty-five to seventy days after the end of each quarter. Consequently, failures occurring during that first quarter are excluded from the analysis. When the estimated relationship is used to predict future bank failures, lags in the reporting of call report data imply a short lag between the call report date and the period over which failures are predicted.

9. Specifically, our off-site monitoring system uses the probit methodology to estimate the historical relationship between the financial indicators and the likelihood of failure. The statistical underpinnings of this methodology are described by Maddala (1983).

10. Berger and Davies (1994) provide a detailed review of the academic literature on the value of the information generated by federal bank examinations. Based on their own results, Berger and Davies conclude that CAMEL downgrades reveal previously private unfavorable information about bank condition.

11. Jones and King (1995) report that on-site examination information improves the ability of risk-based capital ratios derived from call report information to identify banks with a high risk of insolvency. Moreover, call report information often depends on examination results, rather than the other way around, as on-site examinations frequently result in substantial changes to reported financial information. Berger and Davies (1994) provide evidence that the call report acts as a conduit to transmit examination results to the public.

12. There are three primary dates typically associated with an examination: the start date, the end date, and the “as of” date. The “as of” date derives its name from the fact that it is the date of the financial data on which the CAMEL rating is based. We use the “as of” date to match CAMEL ratings with the ratings from our off-site monitoring system, which also are dated based on the date of the financial data used.

13. The number of banks included in our analysis is limited by our access to historical CAMEL rating data. Of the 13,365 U.S. insured commercial banks that meet the other requirements of our study, we are able to obtain year-end 1987 CAMEL ratings for 9,880, or 74 percent. Of these 9,880 banks, 244 failed during the two-year period examined. Also, of the 9,880 banks, 9,740 were rated based on a "full scope" examination, another 134 had ratings associated with "limited scope" examinations, and the remaining six were the subject of "targeted" examinations. The results reported here are qualitatively identical when the analysis is limited to "full scope" examinations.

14. While the equal weighting of the five component ratings is somewhat arbitrary, we also used several alternative schemes to weight the five component ratings for determining ranks within composite CAMEL rating groups. The results are not qualitatively different when alternative weightings are used.

15. We exclude the first quarter of 1988 because examinations based upon December 1987 financial statements would not be finalized until at least some point during the first quarter of 1988.

16. The lower success rate of the CAMEL ratings in identifying failures implies that the CAMEL ratings also mistakenly predict a greater number of surviving banks as failing.

17. This result is consistent with Gilbert and Park (1994), who find that early warning systems often can identify emerging problems at failing banks earlier than on-site examinations.

18. For example, looking separately at the 2,008 banks with CAMEL ratings based on financial data from 1986 or earlier, the 10 percent with the worst CAMEL ratings includes only 59 percent of the subsequent failures, while the 10 percent with the worst off-site ratings includes 95 percent of the subsequent failures. Similarly large differences in predictive accuracy occur for banks examined in the first and second quarters of 1987.

19. We obtained similar results when analyzing bank failures occurring during two-year periods from 1988 to 1992. In later periods, there were too few failures to conduct any meaningful analysis. Only 41 banks failed during all of 1993, and even fewer in 1994. We were prevented from analyzing earlier periods by our inability to obtain CAMEL ratings from those periods.
References
Berger, Allen N., and Sally M. Davies (1994), "The Information Content of Bank Examinations,"
Finance and Economics Discussion Series, no. 20 (Washington, D.C.: Board of
Governors of the Federal Reserve System, July).
Cole, Rebel A., Barbara G. Cornyn, and Jeffery W. Gunther (1995), "FIMS: A New Monitoring
System for Banking Institutions," Federal Reserve Bulletin, January, 1–15.
Gilbert, R. Alton (1993), "Implications of Annual Examinations for the Bank Insurance Fund,"
Federal Reserve Bank of St. Louis Review, 35–52.
———, and Sangkyun Park (1994), "The Value of Early Warning Models in Bank Supervision,"
(Federal Reserve Bank of St. Louis, mimeo).
Jones, David S., and Kathleen Kuester King (1995), "The Implementation of Prompt Corrective
Action: An Assessment," Journal of Banking and Finance 19 (June): 491–510.
Maddala, G. S. (1983), Limited-Dependent and Qualitative Variables in Econometrics
(Cambridge: Cambridge University Press), 22–27.
Putnam, Barron H. (1983), "Early Warning Systems and Financial Analysis in Bank Monitoring:
Concepts of Financial Monitoring," Federal Reserve Bank of Atlanta Economic Review,
November, 6–13.
Table 1
Financial Indicators Used in the Off-Site Surveillance System
Financial indicator*                                   Expected effect on the
                                                       likelihood of bank failure

Capital Adequacy
  Equity capital                                       Reduce
Asset Quality
  Loans past due 90 days or more and still accruing    Increase
  Nonaccrual loans                                     Increase
  Other real estate owned                              Increase
Earnings
  Net income                                           Reduce
Liquidity
  Investment securities                                Reduce
  Large certificates of deposit ($100,000 or more)     Increase
____________________________________________________________________
* Each indicator is measured relative to gross assets.
DATA SOURCE: Report of Condition and Income.