Technical Notes: Dissecting P2P Securitizations

A couple of unrelated things popped up in the last few weeks that got the team at Huffle thinking. Firstly, the LendingClub issues over asset quality, which we believe is a specific case and not an industry risk.

The second, which I will discuss here, is regarding the securitization of peer-to-peer loans originated by Funding Circle in the UK.

Tranching

As with all securitizations, Funding Circle's has a pretty obscure name. The transaction is called SBOLT 2016-1, which stands for Small Business Origination Loan Trust. Having structured CLOs earlier in my career, I find the securitization of small business loans interesting not only from a market demand perspective but also an intellectual one. How have unsecured assets been packaged and presented to the rating agencies?

Pricing:

Securitizations are commonly opaque in terms of pricing, often because a discount on issuance may increase the attainable yields or spreads. Luckily, the prospectus was available to me, so I can investigate a little further.

Pricing of CLOs is driven by the attainable credit ratings but also by an assessment of cashflow and scenario analysis. Here we have multiple tranches of increasing risk, with the senior piece getting a BBB rating and a spread of 2.2% over UK Libor.

It is interesting to see how far BBB spreads have tightened in Europe. Locally in Australia, Commonwealth Bank priced the AAA notes of AUD RMBS Medallion 2016-1 at +140bps around the same time as SBOLT. Currency risk aside, I would certainly prefer to take on AAA RMBS risk, and I expect traditional CLO paper might also offer better value and liquidity.

Looking at this transaction against an underlying portfolio yielding 9.57%, it might be better to simply buy the underlying portfolio rather than the tranched transaction. Further, Class E seems to make no sense, particularly as Class D looks to have a higher IRR. Be aware that fee side-letters and other mechanisms may exist to make the transaction more attractive for investors.
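As a back-of-the-envelope illustration of why buying the whole portfolio can look attractive, here is a sketch of the tranche arithmetic. Only the 9.57% portfolio yield and the Libor + 2.2% senior spread come from the deal as quoted above; the Libor level and the Class A size are illustrative assumptions, not the actual SBOLT 2016-1 terms.

```python
# Rough tranche economics: if the senior funds cheaply, the residual yield
# concentrates in the junior notes. Sizes and the Libor level below are
# illustrative assumptions, not the actual SBOLT 2016-1 terms.
portfolio_yield = 0.0957          # prospectus figure quoted above
libor = 0.005                     # assumed UK Libor level
senior_size = 0.72                # assumed Class A share of the structure
senior_coupon = libor + 0.022     # BBB senior at Libor + 2.2%

# Yield left over for the remaining slice of the capital structure,
# ignoring fees, defaults and any discount on issuance.
residual = portfolio_yield - senior_size * senior_coupon
junior_yield = residual / (1 - senior_size)
print(f"residual spread pool: {residual:.2%}")
print(f"implied blended junior yield: {junior_yield:.2%}")
```

The point is simply that whoever holds the whole portfolio keeps all of that residual, while a tranche buyer pays for the structuring in between.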


Tranche Sizing:

Six tranches on a small deal appear to be a little tight: how much loss protection will the Class E offer the Class D in a stressed scenario?

Multiple tranching is possible as the underlying collateral is pretty granular, with over 2,400 loans. For this quick assessment, Classes C to Z are of less interest to me. The Class B has obviously caused some disagreement between Moody's and S&P, as Moody's has given it a lower rating than the Class A.

So how secure is the Class A?

On inspection, the senior note is pretty secure. BBB rated assets have an annual default probability of around 0.2% and the 67% and 72% attachment points for the Class A and Class B fall comfortably inside the stressed scenario distributions required to meet the 99.8% pass rate (1 minus 0.2%).
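To see why a granular pool makes the senior look safe, here is a quick independence-only sketch. The per-loan PD and LGD are assumptions, and ignoring default correlation badly understates tail risk — which is exactly what the rating agencies' stressed scenarios add back in:

```python
# Independence-only sanity check of senior subordination: with roughly 2,400
# equal loans (per the pool description), what is the chance that portfolio
# losses eat through a given subordination level? PD and LGD are assumptions.
from math import sqrt
from statistics import NormalDist

n_loans = 2400
pd_annual = 0.075       # assumed borrower default probability
lgd = 0.85              # assumed loss-given-default for unsecured SME loans

# Expected loss and (binomial) standard deviation of the portfolio loss rate.
el = pd_annual * lgd
sd = lgd * sqrt(pd_annual * (1 - pd_annual) / n_loans)

for subordination in (0.28, 0.33):
    p_breach = 1 - NormalDist(el, sd).cdf(subordination)
    print(f"P(loss > {subordination:.0%}) = {p_breach:.2e}")
```

Under independence the breach probabilities are vanishingly small, which is why the whole game is in the correlation and stress assumptions layered on top.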

A simpler way to recreate the credit rating agency analysis is to adapt the Advanced IRB framework for the given risk profile and asset class. The unexpected loss can be used as an indicator for senior tranching, although the final credit rating agency models are different.

Could we ever see this ever become AAA?

There is always potential to create AAA senior Classes. In this instance, the attachment point would need to be closer to 50%, which would leave the senior tranche too small, and the transaction would struggle to sell. The reason is that the unsecured nature of the underlying assets implies a really high downside loss-given-default (assume 80%-90%) and a probability of default of 5% to 10% (implied backwards from the high interest rate charged). The Advanced IRB framework can show how much you can lose in the downside scenarios needed to attain a AAA.
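As a sketch of that approach, the Basel II Advanced IRB unexpected-loss formula for corporate exposures (maturity adjustment omitted) can be evaluated at the PD and LGD ranges assumed above. This is an approximation of the regulatory formula, not the rating agencies' models:

```python
# Sketch of the Advanced IRB unexpected-loss calculation (Basel II corporate
# formula, maturity adjustment omitted). PD and LGD are the assumptions
# discussed above; this is not the rating agencies' actual model.
from math import exp, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal

def irb_capital(pd_, lgd, confidence=0.999):
    """Unexpected loss per unit of exposure at the given confidence level."""
    # Basel II supervisory asset correlation for corporate exposures.
    w = (1 - exp(-50 * pd_)) / (1 - exp(-50))
    rho = 0.12 * w + 0.24 * (1 - w)
    # Vasicek conditional default probability at the stress quantile.
    stressed_pd = N.cdf((N.inv_cdf(pd_) + sqrt(rho) * N.inv_cdf(confidence))
                        / sqrt(1 - rho))
    return lgd * (stressed_pd - pd_)

pd_, lgd = 0.075, 0.85   # assumed implied PD and downside LGD
print(f"UL at 99.9% confidence:  {irb_capital(pd_, lgd, 0.999):.1%}")
print(f"UL at 99.99% confidence: {irb_capital(pd_, lgd, 0.9999):.1%}")
```

Pushing the confidence level up towards AAA-like standards visibly inflates the loss buffer a senior tranche would need to sit above.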

Different asset classes make creating AAA rated securitizations harder or easier. Secured assets, such as residential mortgages or auto-leases, are much easier, as securitizing them is ultimately about creating additional security rather than funding arbitrage. Unsecured loans are usually the hardest.

Verdict on SBOLT:

The senior tranching appears to make sense versus where the portfolio risk comes out. If we think the securitization mathematics are wrong, we should also assume the entire Basel framework on bank capital is wrong. As such, the transaction, structurally, is pretty well aligned to globally accepted risk frameworks and the securitization should be seen as a valid investment for regulated entities such as banks and insurance companies.

I am less concerned about the lower tranches as they are smaller fractions, speculative and more sensitive to the underlying portfolio for which I don’t have granular data.

How should we view this deal in Australia?

Overall, I am optimistic about the transaction. Wholesale funding is an important piece of peer-to-peer lending, on the basis that not all investors want loan-specific risk, or the same risk that the borrower offers.

Caution should be taken that CLO securitization and subsequent layers of intermediation (such as fixed income portfolio managers, risk processes and rating agencies) add layers of cost that are ultimately worn as higher borrowing costs or reduced returns for investors. Direct lending by hedge fund-owned CLO platforms has been around for over a decade. Can FinTech offer an advantage here?

In some cases they can, if they have a specialized team, but they need to ensure they have strong compliance procedures and the ability to perform the analysis and risk management processes required for risk transformation, which is where LendingClub recently faltered. Unlike vanilla funds without structuring overlays, the underlying collateral in a securitization becomes ever more important to the resulting investment performance, particularly if there is a stressed market event.

As FinTechs evolve from new entrants and upstarts into more established businesses, these are the type of specific processes that are likely to be taken on.

Note: I have tried to simplify this blog so that more people can follow the analysis. Credit rating agency models have a number of mechanisms and methodologies that differ from the Basel II framework.

 


Technical Notes: Behavioral Risk Frameworks

At Huffle we're big fans of investigating behavioural risk models, largely because improving the understanding of behaviour is easier through Big Data gathered from social media than it is for other types of risk.


Behavioural risk can be split into separate areas: internal risk and external risk.

Internal risk is created by employees, where we start to see companies monitoring employee activity to estimate a behavioural risk that may be associated with fraud or processes that lead to poor customer outcomes.

External risk in the form of customer behaviour is also important. Marketing has always investigated customers' purchasing behaviour. In finance we need to look at customer behaviour over the life of transactions, as there is a risk of default, churn, fraud or additional leveraging.

Many parts of behaviour can be understood through statistical modelling – procedures to understand, segment and assign probabilities are well understood and continually investigated. However, non-linear dynamics and bifurcation theory may be a better avenue*.

Why bifurcation theory?

Bifurcation theory looks at system changes as certain assumptions or inputs change. We would look for indications that would adjust one of our expected outcomes, which may include willingness or ability to service debt or likelihood of finding and switching loan provider. The advantage over stochastic modeling is that we can use historical data but also forecast what could lead to changes beyond probability likelihoods. The obvious difficulty is developing a reliable non-linear dynamic model and testing it.
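As a toy illustration of the idea (not our borrower model), the classic logistic map shows how a system's long-run behaviour can change qualitatively as a single input crosses a threshold – the kind of shift bifurcation analysis looks for:

```python
# Toy bifurcation illustration: the logistic map x -> r*x*(1-x).
# Below r = 3 the system settles to a single steady state; past it, the
# long-run behaviour splits into an oscillation between two values.
def long_run_states(r, x0=0.5, burn_in=1000, samples=200):
    """Iterate the logistic map, then collect its distinct long-run values."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    states = set()
    for _ in range(samples):          # record where the system ends up
        x = r * x * (1 - x)
        states.add(round(x, 6))
    return sorted(states)

print("r = 2.8:", long_run_states(2.8))   # one steady state
print("r = 3.2:", long_run_states(3.2))   # period-2 oscillation
```

A small change in `r` produces a qualitatively different regime, which is what we would watch for in a borrower system as rates or competition shift.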

We also note that there are interactions in a system. Banks act as selling agents and create internal shifts in dynamics depending on their willingness to lower margins to gain market share. Ultimately, we would also be able to use these models to manage risk as well as measure it: understanding which reactions lead to optimal outcomes.

Starting Models: How Uber may change traffic jams

One of the easiest behavioral dynamic models is traffic. We’ve all sat on the freeway, stop-starting in a traffic jam only to eventually end up moving without any accident in sight. This is a result of human inefficiencies and behavior**. Traffic flow modeling has been investigated for decades but can be summarized in a few rules:

  1. A person wants to follow the car in front at the speed limit
  2. A person doesn't want to get too close. There is a minimum headroom between cars; if they get too close, they will hit the brakes
  3. Each person has a delay in reacting to the car in front

With these rules, we can start to build interactions between cars, with the simplest example being a single-lane freeway. Cars accelerate to meet the speed limit and brake when they get too close to the car in front.

If we build this (non-linear) system we can start to observe a few things:

  1. The system can converge to where all cars move smoothly at the speed limit
  2. If the first car stops, all cars behind it will eventually stop
  3. If the first car slows down, all cars behind it will eventually meet that speed

Perturbation Theory:

The next aspect is to add a small amount of noise. Here, noise would mean a single car suddenly brakes for a reason, perhaps the driver got too close to the car in front after looking at their Facebook feed on their smartphone.

This would lead to a sudden braking. The interesting aspect to take from here is that subsequent cars, if they were travelling at the speed limit and close enough to the car suddenly braking, will be forced to brake themselves. Furthermore, if they were at the headroom, they would encroach on the safety zone and need to brake more sharply to replenish that zone.

There is a potential knock-on impact that subsequent cars need to brake substantially more if they were all travelling close to the headroom distance behind the car in front and at the speed limit.

Without modeling this, we can understand a potential amplification of the initial noise, and subsequent cars can potentially be forced to stop. At this point, we can create the traffic jam.
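The three rules and the perturbation above can be sketched in a minimal single-lane simulation. All parameters are arbitrary illustrative choices, and the one-step update plays the role of the human reaction delay:

```python
# Minimal single-lane car-following sketch of the rules above: cars aim for
# the speed limit, brake hard inside a minimum headroom, and react with a
# one-step delay. One car braking suddenly can bring followers to a halt.
def simulate(n_cars=20, steps=300, speed_limit=30.0, headroom=10.0,
             spacing=11.0, accel=2.0, brake=6.0, dt=0.1, perturb_at=50):
    pos = [-(i * spacing) for i in range(n_cars)]   # car 0 leads
    vel = [speed_limit] * n_cars
    stopped = set()                                 # cars that ever halt
    for t in range(steps):
        new_vel = vel[:]
        for i in range(1, n_cars):
            gap = pos[i - 1] - pos[i]
            if gap < headroom:                      # too close: hit the brakes
                new_vel[i] = max(0.0, vel[i] - brake)
            elif vel[i] < speed_limit:              # clear road: speed up
                new_vel[i] = min(speed_limit, vel[i] + accel)
        if t == perturb_at:                         # the "Facebook glance":
            new_vel[n_cars // 2] = 0.0              # one car brakes hard once
        vel = new_vel
        for i in range(n_cars):
            pos[i] += vel[i] * dt
            if vel[i] == 0.0:
                stopped.add(i)
    return len(stopped)

print("cars forced to a standstill:", simulate())
print("without the perturbation:", simulate(perturb_at=-1))
```

With no perturbation the flow stays smooth forever; a single braking event propagates backwards and stops several following cars – the traffic jam from noise.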

What factors impact the potential for a traffic jam?

  1. Car density: more cars increase the likelihood of cars travelling closer to the headroom distance as they cram onto the road in rush hour.
  2. Speed: faster motion means the encroachment into headroom can be higher (the delay in human reactions is a fixed time but the distance travelled in that time will be higher)

There are other smaller impacts. However, the above two are controllable on a system wide basis.

What are the risks?

Car crashes are a potential risk if the stopping propagates too quickly. Time delays are also possible as cars also have a delay in returning to the speed limit after they stop, and this can lead to significantly longer delays in traffic.

How do we control them?

Reducing cars is an easy one. This can be done through toll roads or pricing strategies. Ultimately, this is one aspect that makes Uber incredibly interesting as it is adopted more widely. Rush hour will become more expensive to travel.

Speed is another. We often see temporary speed limits. This diminishes the impact of human reaction delay and reduces the likelihood of the traffic jam propagating. The obvious cost here is that journey speeds are below optimal and journey times may be higher on average. But dispersion and risk are reduced.

Moving back to lending

Without detailing too much, the non-linear model becomes more complex as the forces driving it are very different in the world of finance. First of all, if we are building a default model, what defines a default? We could classify it in a similar way to a car stopping in the traffic model, with the speed limit taken as scheduled repayments on a mortgage.

Multiple agents would exist: borrowers wouldn't interact with each other; instead they would react around a set of banks, gravitating towards them as better mortgage offers appear. In this system, a borrower acts like a metallic ball in a large financial pinball machine, bouncing off different lenders.

Managing risk then becomes a similar trick:

The inputs that then cause reactions in the system (similar to the car braking above) can then be changed by:

  1. Adjusting the price of loans by lowering interest rates, making debt easier to service. This can be performed by central banks. This also relates to the velocity of money.
  2. Modify the loans, which can be performed by the lenders.
  3. Restricting the new loans made available by increasing interest rates. Banks would do this if they want to reduce the risk on their books (hope for churn of “risky” customers).

However, we can also identify other ways to manage risk and measure potential changes, particularly around macro-economic noise that may lead to significantly higher defaults or churn, depending on the conditions.

Internal Risks?

The thing to note here is that behavioural operational risk comes in various forms. In an ideal world, a pricing and cost mechanism can be established to manage the risk and cool temperament if it overheats, or to retrospectively adjust if poor behaviour is not picked up in time.

However, it is hard to reclaim salaries following poor behaviour and it is also hard to pay employees fully in deferred shares. In many ways, outsourcing and deferring a payment is easier here.

Another key aspect could be keeping each product with a separate team – or, where we expect things to go, with a separate company (FinTech). If a problem emerges with a particular product, the relationship or business unit can be replaced or removed.

Again, our modelling aims to track what would potentially lead to a change in behavior, most notably when behavior begins to focus on commission rather than customer outcomes.

Once we have achieved this, we may become significantly better at estimating our “through the crisis” revenue streams and understanding risk driven by both borrower and competitor behaviour, enabling us to proactively manage behavioural risks.

*We prefer to move away from stochastic modeling, instead preferring to build non-linear models based more on particle theory. We think this allows us to investigate potential behavioural shifts through bifurcation theory.

**The phenomenon is a sinusoidal compression wave travelling in the opposite direction to traffic flow.

Picture from www.fatvat.co.uk



Musings of a FinTech – Actionable Insights from Social Media

Being able to effectively mine data produced via social media is very topical, with the emergence of companies such as Thinknum providing metrics from social media and other sources to provide new insights into company performance.

However, there is a wider question here of what data can actually be harnessed to provide genuine insight and tangible value to both companies and/or individuals – the classic problem of extracting the signal from the noise.

Thinknum provides company metrics such as Twitter/Facebook followers, employees on LinkedIn and web site traffic which arguably could be useful indicators of a company’s health for investors. A recent FT article provides a good run down of some of the current crop of investor-offerings in this space.

 

In the area of housing, a recently published piece of research from Harvard, Facebook, NYU and the National Bureau of Economic Research provides one such insight using data from Facebook. Entitled "Social Networks and Housing Markets", it looks at how social media influences an individual's perception of the attractiveness of property investment.

The key takeaway from the paper is “Individuals whose friends experienced a 5 percentage points larger house price increase over the previous 24 months (i) are 3.1 percentage points more likely to transition from renting to owning over a two-year period, (ii) buy a 1.7 percent larger house, and (iii) pay 3.3 percent more for a given house. Similarly, when homeowners’ friends experience less positive house price changes, these homeowners are more likely to become renters, and more likely to sell their property at a lower price.”

It’s interesting to see how they combined the data sources for it – the model used Facebook user data along with market research data from Acxiom at its core to build rich demographic data.

One of the key uses of the Facebook friend data was the location of an individual's friends – specifically, whether they are within or outside of the Los Angeles county commuting zone (the surveyed homeowners all resided in LA county). This enabled the researchers to distinguish between local friend influences and biases and those further afield – the assumption being that house price movements experienced by friends outside the commuting zone would have been communicated via social media channels (sec 1.4 p10).

This was supplemented with the relevant housing data, and a 4 question multiple-choice survey for testing the various hypotheses:

 

  1. How informed are you about house prices in your zip code?

[ ] Not at all informed [ ] Somewhat informed [ ] Well informed [ ] Very well informed

  2. How informed are you about house prices where your friends live?

[ ] Not at all informed [ ] Somewhat informed [ ] Well informed [ ] Very well informed

  3. How often do you talk to your friends about whether buying a house is a good investment?

[ ] Never [ ] Rarely [ ] Sometimes [ ] Often

  4. If someone had a large sum of money that they wanted to invest, would you say that relative to other possible financial investments, buying property in your zip code today is:

[ ] A very good investment [ ] A somewhat good investment [ ] Neither good nor bad as an investment [ ] A somewhat bad investment [ ] A very bad investment

 

The ordering of the questions was changed in 35% of the surveys to avoid framing effects in people's responses, which was an interesting point to note (although they didn't find participants were influenced by question ordering in this instance).

This survey and demographic data was then utilised alongside housing transaction data, and a number of regression models were created which supported the conclusions of the paper.

 

Given all of the talk about social media & mining this data, it’s a useful paper to be aware of and illustrates not only a potential use case for harnessing the power of social media to generate insights, but also how complex a task it is to do so.

Bearing in mind that the individuals in the survey were limited to those residing in LA county, and that the measured impact of social media appears to influence individuals by up to ~3% – pretty small in the grand scheme of things* – trying to apply a similar model to something like how social networks influence an individual's mortgage preference would be no small task!

 

*We are very excited overall that new data can not only lead to new ways of analysing risk but potentially be a strong leading indicator, allowing more time to rebalance portfolio risk. However making a judgment call on new data presents higher modelling risk.

 



Musings of a FinTech: The Banking Revolution will be Digitised

Whilst it may not have made the news here in Australia, earlier this month Holvi, a completely digital bank catering to small business owners and entrepreneurs in Finland, was acquired by BBVA. It’s the latest in a string of acquisitions and investments by the Spanish banking giant in digital banks. It started back in early 2014 with a $117 million acquisition of Simple Bank in the US, and now there are a raft of new digital banks in the process of being launched around the world with backing from them.

 

Much like other established industries, the banking system is sitting on a complex patchwork of technology and platforms built upon over many years. This creates an infrastructure that is costly to change and update, and often small, customer-centric enhancements are so costly that they get de-scoped. Some banks have forged ahead with building out a "digital first" experience – in Australia I'm thinking of ING Direct and UBank (powered by NAB) – but because they are still built atop the existing bank frameworks, there are limits to how far they can push the envelope.

 

The UK looks set to benefit the most from this new push into digital banks, with a low barrier to entry (you only need £1 million capital to get your banking licence vs $50 million here in Australia) and a nation that has adopted doing things online faster than most (Brits are some of the most prolific online shoppers in the world). There is now a slew of digital or mobile banks set to launch imminently.

 

Mondo Screens

Image credit: getmondo.co.uk

The best known of these banks is Atom. They started two years ago with a vision to be a mobile bank with a heavy focus on personalisation – one of their early campaigns was to get 1.4 million logos designed, so every member can choose their own. They were the first of these digital banks to get their licence (in 2015) and have received £135 million in funding to date. Mondo is another UK digital bank with a good funding story – they famously raised £1 million in 96 seconds via the crowdfunding platform Crowdcube. Two other names to watch in that market are Starling and Tandem. Brits are soon going to be spoilt for choice with new digital alternatives to the traditional banks.

 

But what about here in Australia? We know conventionally there is a lag in new tech developments reaching these shores, and this, coupled with the aforementioned high barriers to entry, makes it harder to break into this market. But it's not impossible. Whilst at Huffle we are initially looking to introduce new home loan products, there are some logical steps we could take to build this proposition out into a digital bank. Without the constraints of complex legacy systems, and with fresh thinking from founders with experience both within and outside of Financial Services, it's not a stretch to see Huffle making moves to create one of Australia's first truly digital banks… But one step at a time: first we want to shake up the home loan industry with our great new mortgages.


Architecture Evolution

Like most startups, Huffle’s website platform has undergone a number of changes during the past year. The path it has followed is pretty typical, with each platform evolution reflecting the increase in technical investment required to grow from one stage to the next.

I thought it may be useful to share this evolution; given it's such a common sequence of phases, I hope it may save you some of the research you would otherwise have to do yourself.

 

Phase 1 – The Hosted Landing Page

Huffle's initial site contained a single page used as a lead generation tool. There are many platforms that can be used here; WordPress, Wix, Instapage and Unbounce are some of the popular options. Each of these platforms provides an online editor for designing and writing the content you want to display. They also typically provide integrations with 3rd party services for capturing leads, such as MailChimp/Campaign Monitor for e-mail lists and Salesforce/Zoho for CRM.

Our very early site was hosted on Wix, but we preferred the landing page templates on Instapage, so moved across mid-last year.

Once created, you simply point your site DNS record to the hosting provider and away you go with your landing page.

 

Phase 1 - Instapage

 

You’re completely at the mercy of your landing page provider – if they go down, there’s very little recourse you can take, but at least you have a web presence.

 

Phase 2 – Platform as a Service (PaaS)

The landing page was never considered more than a temporary web presence. We were able to import chunks of HTML/CSS/JavaScript into the page via the hosting platform, but we simply couldn't customise the look and feel as much as we wanted to. Additionally, we wanted to throw a database into the mix to start capturing real customer data, so we needed to start building out a proper customer site using a web framework.

The most common choices in this space tend to be dynamically typed languages such as Ruby (Rails), Python (Django, Flask), PHP (Cake) or JavaScript (Node.js, React, AngularJS), as they tend to be good for getting something up and running quickly. You can go with a statically typed language (Go, Java, .NET, Scala, Haskell, …), but they tend not to be as fast for getting something live (unless you're far more comfortable with statically typed languages).

The target deployment infrastructure is pretty straightforward, consisting of web and database servers.

However, getting a nice automated deployment process up and running takes time, plus the underlying servers need to be managed, which is where Platform as a Service (PaaS) solutions come in. We used Heroku, as it provided a ready-made platform for serving up applications in a number of different languages.

It provides a single command to deploy our latest code base out to their platform running on top of Amazon Web Services. Additional web servers (dynos in Heroku speak) can be freely added or removed to scale your site up or down as needs dictate, making it an ideal platform during the early stages of a startup.

Heroku also provides a marketplace for add-ons, making it really straightforward to add additional functionality (sending email, hosting over SSL, application monitoring, …) with minimal effort. You can also make use of tools such as loader.io to easily see how your site performs under moderate loads (hundreds of requests per second), to ensure it can handle those initial bursts of publicity.

 

Phase 2 - Heroku

 

As great as Heroku was for getting our web application up and running quickly, there were some limitations that were frustrating to work with:

  • You cannot jump onto a server to have a dig around – everything is done via the Heroku logs command
  • Heroku runs on top of AWS across a limited number of regions – none of which are in Australia.
  • You cannot run a Heroku application out of multiple regions without duplicating your entire platform, including the database server (which is expensive), in both sites – plus you'll need to find a way to synchronise your databases. This means that when there is a problem in the AWS region your Heroku instance is running in, or in Heroku itself, you have zero options for redundancy unless you duplicate your infrastructure.

Heroku does provide a status page, which is useful, but if site availability is crucial to you these issues are too great to rely on it as a solo hosting platform. This is why we made the move to AWS, which provided us with a greater degree of flexibility in our deployment and management options.

 

Phase 3 – Infrastructure as a Service (IaaS)

In the world of Infrastructure as a Service (IaaS), Amazon Web Services is king. There are a number of other IaaS platforms to choose from; however, given its relative maturity, its being the platform of choice for so many startup success stories, and its Activate Program for startups, it was a no-brainer for us.

Amazon Web Services provides resilience across multiple geographic regions. Within each region there are multiple data centres (availability zones) you can deploy your application across. This flexibility met our needs by providing hosting across multiple physical sites, giving us the resiliency we required for running our main production site.

The up-front investment required to automate the provisioning and deployments of environments is high, requiring investment in:

  • The DevOps toolchains such as Ansible, Chef, Puppet or SaltStack for environment provisioning and ongoing management
  • Creating deployment/release tools, especially if you want to use immutable servers
  • Security – ensuring access points to your environment are minimised and communication between nodes is restricted to the bare essentials

The end result for us looks something like this, with full site redundancy across multiple data centres, located within AWS's Sydney region.

 

Phase 3 - AWS

 

If required, a new copy of this environment could be brought up in a matter of minutes with our DevOps provisioning tools should our AWS region fail; for now it mostly meets our needs and provides us with a great degree of flexibility going forwards.

AWS does provide Platform as a Service capabilities with its Elastic Beanstalk offering; however, we wanted the flexibility to manage our own servers and support non-standard use cases, such as hosting multiple sites over SSL on a single set of infrastructure, which does not play so well with Elastic Beanstalk.

They also provide OpsWorks for managing cloud infrastructure; however, it ties you to Chef, which was less appealing to us compared with some of the other options out there.

 

Footnote – DNS Failover

One of the options we looked at early on was DNS failover, to provide resiliency between different hosting providers should one of them fail. The issue with this approach is that most providers require you to work with IP addresses, which is not feasible if you're using a provider that only gives you a URL to point to.

Amazon's Route 53 DNS record management service provides a failover mechanism that works with CNAME records, which we found was a good fit for our use case.
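As a sketch of what such a record pair looks like, the snippet below builds the ChangeBatch structure that Route 53's change-record API expects, with a health check attached to the primary record so traffic fails over when it goes unhealthy. The domain names and health-check ID are placeholders, not our real setup:

```python
# Sketch of a Route 53 failover CNAME record pair, expressed as the
# ChangeBatch payload the change-record API accepts. Names and the
# health-check ID below are placeholders for illustration only.
def failover_change_batch(name, primary_cname, secondary_cname, health_check_id):
    def record(set_id, role, target, check=None):
        r = {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Failover": role,            # "PRIMARY" or "SECONDARY"
            "TTL": 60,                   # short TTL so failover bites quickly
            "ResourceRecords": [{"Value": target}],
        }
        if check:
            r["HealthCheckId"] = check   # primary fails over when unhealthy
        return r

    return {"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet":
            record("primary", "PRIMARY", primary_cname, health_check_id)},
        {"Action": "UPSERT", "ResourceRecordSet":
            record("secondary", "SECONDARY", secondary_cname)},
    ]}

batch = failover_change_batch("www.example.com.", "primary-host.example.net",
                              "backup-host.example.net", "hc-placeholder")
print(batch["Changes"][0]["ResourceRecordSet"]["Failover"])
```

A payload like this would be submitted against the hosted zone (for instance via boto3's route53 client); the key point is that the targets are CNAMEs, not IP addresses.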


Evolving Architecture

As a startup co-founder and CTO, I have a number of responsibilities for my business. To quote Eric Ries (http://www.startuplessonslearned.com/2008/09/what-does-startup-cto-actually-do.html), one of these is "platform selection and architectural design".

I have invested very heavily in this area since starting my journey with Huffle, and this blog is really to capture my thoughts and decision process made along this journey.

As you’d expect my role is currently very varied – I come from a financial technology background, and right now my role is covering all sorts of things, including:

  • Strategy
  • Dev-ops
  • Model development/implementation
  • Platform architecture
  • Server side and front end development (full-stack appears to be the buzzword doing the rounds these days…)
  • UX/design

My posts will likely dip into all of the above, some will be ad-hoc notes, others more in depth discussing strategic directions we’ve taken along the route.
