Wednesday, December 1, 2010

Credit Risk Principles

Expected Loss Components


Under Basel II, Expected Loss (EL) equates to the
Probability of Default (PD) times Loss Given Default (LGD)
times Exposure at Default (EAD), or in symbolic form:

EL = PD x LGD x EAD

In turn, we can expand each of these components further.
An FSA definition stipulates mortgage default (D) to be 180
days of arrears as the guideline within retail exposures.
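As an illustrative sketch of the EL calculation above (the parameter values below are invented for illustration, not actual calibrations):

```python
# Minimal sketch of the Basel II Expected Loss calculation,
# EL = PD x LGD x EAD, with illustrative (not real) parameter values.

def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Expected Loss = Probability of Default x Loss Given Default x Exposure at Default."""
    if not (0.0 <= pd <= 1.0 and 0.0 <= lgd <= 1.0):
        raise ValueError("PD and LGD must lie in [0, 1]")
    return pd * lgd * ead

# Example: 2% PD, 20% LGD, GBP 150,000 exposure.
el = expected_loss(pd=0.02, lgd=0.20, ead=150_000)
print(round(el, 2))  # -> 600.0
```

Note that EL scales linearly in each component, so halving any one of PD, LGD or EAD halves the expected loss.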
Using the 3Cs approach for default measures, we can set the
conditional probability of default in Bayesian notation as:

P(D | Mortgage Type) = alpha x P(D | Character) + beta x P(D | Capacity) + gamma x P(D | Collateral)
Mortgage Type can refer to the three main market
segments of Prime, Near Prime and Sub-Prime, but depends
on the Character result to determine which type is finally
applicable. The weighting parameters alpha, beta, and
gamma above (which always sum to one) will determine the
overall influence of the PD measurements for Character,
Capacity and Collateral.

For prime mortgages, one would expect Character to be
paramount. You would also expect to treat Collateral as
merely the security underpinning the loan, whilst Capacity would
have an influence somewhere in between these two
measures. However, under sub-prime mortgages Collateral
is paramount, followed by Capacity in relative influence and
finally Character (as most applicants have had a chequered
credit history simply by qualifying for these types of loans).
We could therefore assume the following initial values for
the relative influence of these PD sub-parts:
As evidence of this view, especially for the sub-prime Collateral
component, we can examine the CML Repossession Risk
Review report. Authors Cunningham and Panell (2007) cite that
"...the adverse credit loans in non-conforming RMBS have
substantially higher arrears rates than prime home-buyer
mortgages and the adverse credit sector accounts for a much
larger share of repossessions than its 5-6% share of new lending
business. In a number of locations in London, the cumulative
repossession rates since issue on non-conforming RMBS portfolios
are around 5%, compared with the overall industry average for
2006 of 0.15%." Therefore, for sub-prime or non-conforming
mortgages, given evidence of the readiness of the sector to
repossess properties, one needs to over-emphasise
the Collateral component more so than the Character aspect,
and Capacity could be set relatively equal regardless of the
mortgage type. Near-prime mortgages would also need to have
their PD relative influences set within the risk extremes of prime
and sub-prime. You could also adjust the component
values of alpha, beta and gamma to reflect the overall risk
appetite control, and perhaps adjust the conditional PD
measures for a more accurate reflection of the
current stage of the housing cycle. By way of practical
application, a typical specialist lender might have the following
segments within their mortgage portfolio: Sub-prime 60%,
Near Prime 20% and Prime 20%.
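The weighted 3Cs blend described above can be sketched in code. The segment weights and component PDs below are illustrative assumptions only, chosen to reflect the relative orderings in the text (Character paramount for prime, Collateral paramount for sub-prime), not actual calibrations:

```python
# Hedged sketch: blending the 3Cs PD components with segment-specific
# weights, where alpha + beta + gamma = 1. All numbers are assumptions.

SEGMENT_WEIGHTS = {
    # (alpha: Character, beta: Capacity, gamma: Collateral)
    "prime":      (0.6, 0.3, 0.1),   # Character paramount
    "near_prime": (0.4, 0.3, 0.3),   # set within the two risk extremes
    "sub_prime":  (0.1, 0.3, 0.6),   # Collateral paramount
}

def blended_pd(segment: str, pd_character: float, pd_capacity: float,
               pd_collateral: float) -> float:
    """Weighted combination of the three conditional PD components."""
    alpha, beta, gamma = SEGMENT_WEIGHTS[segment]
    assert abs(alpha + beta + gamma - 1.0) < 1e-9  # weights must sum to one
    return alpha * pd_character + beta * pd_capacity + gamma * pd_collateral

# Portfolio-level expected PD for the specialist lender mix in the text:
# Sub-prime 60%, Near Prime 20%, Prime 20% (component PDs are made up).
mix = {"sub_prime": 0.60, "near_prime": 0.20, "prime": 0.20}
pds = {seg: blended_pd(seg, 0.03, 0.05, 0.08) for seg in mix}
portfolio_pd = sum(mix[seg] * pds[seg] for seg in mix)
```

With identical component PDs across segments, the sub-prime blend is the highest of the three purely because of its heavier Collateral weighting, which matches the repossession evidence discussed above.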

PD Character Measurement

For the Character component, it is possible to derive a PD measure of credit
character from a credit bureau score or by using an internal
application score (or even a combination of both). For an
initial risk-based pricing approach, the bureau or application
score is required. However, for any on-going risk adjustment
offers (for example, through a customer retention initiative), it
would be better to use a behavioural score and/or a bureau
score to derive the required PD. The use of mortgage bureau
scores has gained in popularity since 2000, with U.S. bureaux
being similar to U.K. ones, except that less information is
recorded. According to Thomas et al. (2002), such scores will
rely upon the following general types of data:

1. Personal information (name; address; former address;
date of birth; name of current and former employers; and
identifiers: such as Social Security number in the U.S.)

2. Public record information (county court judgements;
bankruptcy; individual voluntary arrangements; and electoral roll
data)

3. Credit accounts history (type of account; credit limit;
payments up-to-date; arrears data and balances
outstanding)

4. Inquiries (type of credit grantors and date of inquiry)

5. Aggregated information (percent of houses at a postcode
with CCJs for example)
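The five general data types above could be grouped into a single record structure. The sketch below is illustrative only; the field names are assumptions, not any bureau's actual schema:

```python
# Illustrative record structure for the five general types of bureau data
# listed above. Field names are invented, not a real bureau layout.

from dataclasses import dataclass, field

@dataclass
class BureauRecord:
    # 1. Personal information
    name: str
    address: str
    date_of_birth: str
    # 2. Public record information
    ccjs: int = 0                    # county court judgements on file
    bankrupt: bool = False
    on_electoral_roll: bool = True
    # 3. Credit accounts history
    accounts: list = field(default_factory=list)   # e.g. limit/arrears/balance dicts
    # 4. Inquiries
    inquiries: list = field(default_factory=list)  # (grantor_type, date) tuples
    # 5. Aggregated information
    postcode_ccj_pct: float = 0.0    # percent of houses at the postcode with CCJs

record = BureauRecord(name="A N Other", address="1 High St",
                      date_of_birth="1970-01-01")
```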

Affordability Capacity Measures

Some research exists in the credit risk literature on tests of
affordability measures for predicting credit problems.
Wilkinson and Tingay (2002) report that affordability does
marginally add to the lending decision, using a comparison
of the performance of credit score models (with and without
affordability measures) for personal loans. Although this
research is useful, an affordability test for personal credit
usage is definitely not of the same scale as one for home
mortgages―a sizable difference in both scope and end
outcome. In addition, research by Russell (2005) shows
clever use of a bureau Affordability Index (AI) that
supplements their Delphi bureau score to deliver richer
selection criteria and strategy. This is an example of a top-
down measurement of affordability. Another available external
measure is the over-indebtedness (OI) index provided by the
Callcredit bureau, which helps identify any serial credit card
users or applicants who have very large or insurmountable
debt accumulations. This useful measure of personal debt is
truly global in nature and hence provides additional evidence
of 'true' affordability for the mortgage application in hand.
Since mortgage regulation in October 2004, there has been a
greater onus on lenders to demonstrate responsible lending, to
ensure that the applicant can truly afford the mortgage.
Therefore, it has become a mandatory requirement that each
lender now shows evidence of an individual applicant's ability
to repay. According to Van Dijk and Garga (2006), lenders
active in the sub-prime market are more likely to use an
affordability model than lenders in the prime market. They see
this trend as not that surprising, given that applications within
the sub-prime market are more likely to be of higher risk, thus
warranting a more detailed affordability assessment.

With Capacity PD measures, it is possible to use either an
internal or an external measurement. One form of an internal
measure would be the application of a suitable mortgage
affordability calculator. The main purpose for constructing
an affordability calculator would be to help any lender
better comply with mortgage regulation requirements, including
that of being a responsible lender. Responsible lending
mandates that applicants are genuinely able to service their
loan obligations. Perhaps what is required from the regulators
in the future, though, is an industry-wide standard on what constitutes
true affordability for a loan. For example, a cursory website
examination of several affordability calculators indicates that considerable variation
exists amongst the maximum possible loan, given similar input
variables.
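A minimal affordability calculator of the kind described above might look like the following. The income multiple, stress rate and expenditure assumptions are all illustrative; as noted, real lenders' calculators vary considerably for the same inputs:

```python
# Hedged sketch of an internal mortgage affordability calculator.
# The income multiple, stress rate and 40%-of-net-income servicing rule
# are illustrative assumptions, not any lender's actual policy.

def max_affordable_loan(gross_income: float, monthly_commitments: float,
                        income_multiple: float = 3.5,
                        stress_rate: float = 0.07, term_years: int = 25) -> float:
    """Return the lower of an income-multiple cap and a stressed
    repayment-capacity cap (standard annuity formula at a stressed rate)."""
    cap_by_multiple = income_multiple * gross_income
    # Assume at most 40% of net monthly income (roughly 70% of gross here),
    # less existing commitments, can service the mortgage.
    monthly_capacity = 0.40 * (gross_income * 0.70 / 12) - monthly_commitments
    if monthly_capacity <= 0:
        return 0.0
    r = stress_rate / 12          # monthly stressed rate
    n = term_years * 12           # number of monthly payments
    cap_by_capacity = monthly_capacity * (1 - (1 + r) ** -n) / r
    return min(cap_by_multiple, cap_by_capacity)

loan = max_affordable_loan(gross_income=40_000, monthly_commitments=300)
```

Changing any of the hidden assumptions (net-income ratio, stress rate, term) moves the maximum loan materially, which is precisely the cross-lender variation observed above.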

Credit Character Measures

Credit character measurement tends to be performed by a
combination of credit scoring (credit scoring models and
credit bureau data) plus tailored credit policy rules as part
of the underwriting decision and, of course, fraud tests for
direct rejection. Regardless of which party bears
responsibility for diligent underwriting―whether broker,
packager, lender, rating agency, investors or even a fully
automated 'underwriting system'―measurement of the
credit character of a borrower is generally considered
paramount, especially for prime borrowers. Before the use
of securitisation funding, the credit provider was usually the
lender, who retained the loan risk on their own balance
sheet; with the advent of securitisation, however, the underwriting
decision now involves a degree of coercion from other entities
that profit from loan volumes, and therefore have minimal
involvement with the future performance of those loans.
Such a design structure will inevitably increase the
incentive for third-party intermediary credit brokers to write
new loans, but also reduces their incentive to consider how
these loans will perform over time. In any efficiently
performing market, such problematic practices should self-
correct over time, as originators become more liable for
creating quality books (e.g., 'claw-back' profit arrangements
on bad deals).


The ultimate aim of the credit character measuring process
is to stratify applicants into meaningful segments or risk
grades that will assist in risk-based pricing. During the
application stage for credit, it will be possible to use
combinations of credit bureau scores coupled with any
available application credit scoring, required policy rule
restrictions and fraud testing elimination. If pricing-for-risk
after acceptance (say at set future intervals), then a
behavioural score in conjunction with a bureau score would
be the preferable measures (as the behaviour of the
account will be readily available, giving a stronger prediction
of risk than the application score).

Policy Rules and Loan Design

Policy rules help shape loan applications and eliminate
others. In the sub-prime market, according to Van Dijk and
Garga (2006), lenders with manual processes serve a
significantly larger proportion of applicants. Thus, more
applications will receive manual reviews by sub-prime
lenders using partly automated processes than in the prime
market. They view this as an expected outcome, given that
sub-prime applicants are more likely to have characteristics
that may not be acceptable under automated policy rules,
and so are more likely to require manual assessment.
Therefore, greater reliance on policy rules under partial
manual processing is necessary until further automation of
the process occurs (assuming that the efficiency and
effectiveness benefits from automation will exceed any
increased losses that may result from having less
experienced judgement applied).


Policy rules will tend to focus upon the applicant and the
loan details—that is, the policy rules will form the minimum
criteria that the applicant must satisfy in order to qualify for
the loan. The applicant criteria may cover, for example:


Minimum and maximum age of applicant,


Criteria for unacceptable credit history,


Legal entity of borrower and jurisdiction,


Minimum and maximum loan amounts requested,


Maximum loan-to-value ratios permitted (LTVs),


Maximum income multiples, and


Thresholds (or cut-off points) for any credit score


The lender creates most of these policy rules (from
empirical evidence and regular adjustment), but the
regulator (FSA) could prescribe some of them, or the
securitisation participants could stipulate policy (as could
any institution to which an originator intends to
sell complete or whole loans).
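The applicant criteria above amount to hard filters over the application. A minimal sketch, with thresholds that are purely illustrative assumptions rather than any lender's actual policy, might be:

```python
# Sketch of applicant-level policy rules applied as hard filters,
# following the criteria list above. All thresholds are invented.

def passes_applicant_policy(age: int, loan_amount: float, property_value: float,
                            income: float, credit_score: int):
    """Return (passed, reasons): True with no reasons if every rule is met."""
    failures = []
    if not 18 <= age <= 75:
        failures.append("age outside permitted range")
    if not 25_000 <= loan_amount <= 1_000_000:
        failures.append("loan amount outside permitted range")
    if loan_amount / property_value > 0.90:
        failures.append("LTV above maximum")
    if loan_amount / income > 4.0:
        failures.append("income multiple above maximum")
    if credit_score < 550:
        failures.append("credit score below cut-off")
    return (not failures, failures)

ok, reasons = passes_applicant_policy(age=35, loan_amount=150_000,
                                      property_value=200_000, income=45_000,
                                      credit_score=620)
```

Returning the list of failed rules, rather than a bare reject, supports the manual-review route discussed earlier for sub-prime applicants.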


There will also be policy rules surrounding the property—
these policy rules will form the minimum criteria that the
property must satisfy in order to grant the loan. Property
criteria may include, for example:


Type of property (detached or semi-detached, flat,
bungalow, terraced etc.),


Construction method (or materials used) and
certain building company exclusions,


Date or period of construction,


What constitutes a defective property, and


Locale restrictions or certain postcode exclusions


In general, the lender specifies these criteria, but they may
also reflect the requirements of insurers and securitisation
vehicles, especially for concentration risk issues.


Loan Design


One of the more curious aspects of the U.K. mortgage market,
in general, is the proliferation of mortgage 'products', which in
effect amount to nothing more than simple variations of the
general mortgage contract (for example, applying for a
mortgage with a lower LTV band could mean a
different product is now applicable). This results in
thousands of such 'products' becoming available, even though
the essence of each variation still requires a repayment
schedule at some interest rate for a loan amount borrowed
over a period of time. Marketing departments tend to use the
product variation primarily for dual purposes: 1) to segment
the market for increased penetration of volume and 2) as a
means of applying a rather crude industry-wide standard of
generic creditworthiness. This generic credit classification
scheme (one could refer to it as the 'ABCD' approach) relies
on variations of certain parameters of the applicant's credit
history in terms of:


a) Arrears record (maximum of X missed payments in last
Y months);

b) Bankruptcy/Individual Voluntary Arrangement (IVA) evidence
indicating satisfied/discharged within a set period;

c) County Court Judgements (CCJs – up to £X in last Y
years, or otherwise unlimited); and

d) Defaults (X number of defaults permitted for previous
rent payments or unsecured loans).


On the basis of how any individual fits within the arbitrary
criteria set suggested above, the applicant will thus become
eligible for a 'product' that might be described as 'Very Minor',
'Minor', 'Medium', 'Heavy' or 'Unlimited', for example. Each of
these 'products' will have an arbitrary margin for risk added to
the base funding mechanism (e.g., LIBOR) and for any other
variations selected (e.g., Self-certification or Buy-to-let
purpose). Interestingly, whenever you create an internal
credit-scoring model using all available data, it is usually the
case that none of the above criteria is automatically selected
as being predictive by the modelling tool, although they may
instead be incorporated as part of the generic product group.
Nevertheless, the industry appears to place great faith in
these criteria, and they therefore form an integral part of the
automated product selection systems in use by brokers and
packagers.

It should also be borne in mind that the current design of sub-
prime mortgages provides for early exit fees (via early
redemption penalties), such that the securitisation funding
mechanism expects these additional cash flows as part of the
income for retention by the issuer. Not surprisingly, if
borrower behaviour is not as anticipated, then a funding crisis
can result. If instead a charge were made upfront to
cover the prepayment option, then this loan design would
assist the customer by creating more flexibility (but it may also
prove to be less lucrative overall from the lender's viewpoint).
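The generic 'ABCD' grading described above can be sketched as a simple rule cascade. The band thresholds below are invented for illustration; real product criteria differ by lender:

```python
# Hedged sketch of the 'ABCD' adverse-credit product grading.
# Thresholds are illustrative assumptions only.

def adverse_credit_grade(missed_payments_12m: int, ccj_value: float,
                         has_discharged_iva: bool, defaults: int) -> str:
    """Map arrears, CCJ, IVA and default history to a product grade,
    from the heaviest adverse band down to the lightest."""
    if has_discharged_iva or ccj_value > 10_000 or defaults > 5:
        return "Unlimited"
    if missed_payments_12m >= 4 or ccj_value > 5_000 or defaults > 2:
        return "Heavy"
    if missed_payments_12m >= 2 or ccj_value > 1_000 or defaults > 0:
        return "Medium"
    if missed_payments_12m == 1 or ccj_value > 0:
        return "Minor"
    return "Very Minor"

grade = adverse_credit_grade(missed_payments_12m=2, ccj_value=0,
                             has_discharged_iva=False, defaults=0)
print(grade)  # -> Medium
```

Each grade would then carry its own arbitrary risk margin over the base funding rate, as described above.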

Risk Underwriting Process and the 3Cs

According to Barnes et al. (2007), in explaining how S&P
issues ratings for Residential Mortgage Backed Securities
(RMBS), each of the characteristics of every loan in a
securitised pool has a probability of default and therefore
an ultimate loss. S&P's analysis addresses this via a layered
risk―or multiple characteristics of risk―approach, as in:


1) Loan structure review (checking for adjustable
rate mortgages or income verification details),


2) Borrower credit character assessment (through
the use of FICO credit scores),


3) Assessing the borrower's ability to repay the loan (or
capacity), and


4) Determining the amount of equity (or collateral) a
borrower may have in their home.


These characteristics are combined into a sophisticated
stress simulation test and analysis before any given asset
tranche can be subjectively rated 'AAA' or 'A', for example.
To this list of requirements, S&P adds a crucial aspect,
namely fraud risk control, especially concerning data
integrity measures.


Figure 4 - Reduced Target Market after Filtering Rules


We can illustrate a broad process for making underwriting
decisions as in Figure 4 above. The approach also
relates to a version of the process that Van Dijk and Garga
(2006) illustrate in their CML Mortgage Underwriting Report,
covering the various parts of the underwriting system. In
essence, this crucial underwriting system depicts an
appropriate methodology for blending external and internal
data. The data inputs will not necessarily be of equal
importance across different applicants for final decision-
making purposes, but it is still crucial to ensure the availability
of additional data, if required. The blue coloured box in
Figure 4 reflects external information that helps augment the
internal measures (obtained from the application form and
other sources). To some extent, the policy rules will thus need
to encapsulate this external information in order to allow for
cancellation of the current application if it breaches some pre-
defined parameter (or otherwise to demand a reconfiguration
of terms and conditions in order to obtain final underwriting
approval). Therefore, the process parts of 1) policy rules and
2) fraud checks, together with 3) external data sources, will
effectively act as 'filters' over the target market―reducing the
number of applicants to only those able to pass through these
general policy barriers.
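The filtering effect just described can be sketched as a pipeline in which each stage removes applicants who fail it. The filter rules and the applicant data below are purely illustrative assumptions:

```python
# Sketch of the target-market 'filters': policy rules, fraud checks and
# external-data checks successively reduce the applicant pool.
# Applicant data and pass rules are invented for illustration.

def apply_filters(applicants, filters):
    """Return the applicants that pass every (name, rule) filter, in order."""
    surviving = list(applicants)
    for name, rule in filters:
        surviving = [a for a in surviving if rule(a)]
        print(f"after {name}: {len(surviving)} remain")
    return surviving

applicants = [{"score": s, "fraud_flag": s % 7 == 0, "ltv": 0.5 + (s % 50) / 100}
              for s in range(500, 700, 5)]
filters = [
    ("policy rules",  lambda a: a["ltv"] <= 0.90),
    ("fraud checks",  lambda a: not a["fraud_flag"]),
    ("external data", lambda a: a["score"] >= 580),
]
accepted = apply_filters(applicants, filters)
```

The ordering mirrors the text: the cheap hard filters run first, and only the survivors proceed to the full underwriting assessment.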


We can illustrate a more detailed underwriting design below:


Figure 5 - Overview of the Underwriting Process


Of note is that the PD of an applicant can actually change
during this underwriting process, especially as the deal is
being put together (somewhat akin to how the odds of winning
can rapidly change during live betting for sports events). For
example, applicants who want a bigger, more expensive home
than their previous one will create greater potential for
default, especially at the maximum possible loan, if the
affordability measure proves to be inaccurate, even
though they have good credit character and are very
willing to repay the loan. In addition to internal scoring
approaches, one should also investigate any external measures
of credit character as provided by credit bureaux. These
additional sources of information can help refine the internal
models, or otherwise confirm a decision for applicants that are
not definitively in the good or poor character segments. We
can elaborate further on the main constituent parts of this
underwriting process in the following sections.


Credit Scoring Intro

The famous 1935 Escher lithograph shown in Figure 1 below,
'Hand With Reflecting Sphere', is an intriguing view
that Morgan (1988) uses to illustrate a fundamental
epistemological point about modern Accounting's (futile)
attempt to portray its discipline as a reality construct
(highlighted by the artist Escher viewing his own created
image through a crystal ball). Morgan's argument is that
accountants typically construct reality in a very limited,
enclosed and one-sided view; he therefore debunks the
profession's supposed "objectivity" as a mythical
concept, even arguing that accounting should be
approached as a form of "dialogue", allowing accountants to
construct, "read" and probe situations from a multitude of
viewpoints and perspectives. To put this concept in simpler
terms: the map is never the same as the territory!

I would like to advance the same argument (with perhaps even
greater gusto) concerning the 'art-and-science' of risk
measurement (covering at the very least credit, market and
operational risks). Risk practitioners may like to attain true
measures of financial risk (with subsequent control), but after
observing the series of major financial calamities that have
occurred over the last few decades, you have to conclude that
there has been only limited risk management control evident.
You can see the cyclical nature of these risk-triggering events
in Figure 2 below, indicating a so-called vicious cycle of risk at
work (as per Kupper 1999). It can prove to be quite hard to
break out of this systemic cycle, once initiated, assuming of
course that management even realises that such a process is
going on whilst they are busy peering into their own Escher-
like 'crystal balls' (the modern equivalent would be electronic
dashboard reporting devices).

Historically, one can observe many "scientific" attempts at
forming possible boundary solutions that might help in
breaking the cycle above. More recently, under the Basel II
framework, it becomes possible to use a rather crude internal
ratings-based measure of an obligor's likelihood of experiencing
an expected loss over some arbitrarily defined period. Risk
practitioners will still need to understand these 'pseudo-
scientific' measurement aspects (the Basel II risk framework is
a good example―especially concerning the conflict with the
accounting "view" in the International Financial Reporting
Standards for loss provisioning). However, they should also
realise that any such derived measures can only ever form