Bridges causing troubled waters …

2017 was the “year of bridge financing”. The frequency and magnitude of fund level credit facilities reached an all-time high, and these facilities appear to have become an integral part of fundraising going forward. One can hardly have a conversation with an LP or a GP without this topic coming up.

This issue can be approached from a number of perspectives: the use of fund level credit facilities can bring financial and operational advantages for some actors, while it may be costly to others. While the overall conversation between LPs and GPs about whether, and how much, fund level credit facilities should be used is still ongoing, the PE industry has become aware that their (undisclosed) use has a serious side effect: it distorts the IRR-based performance measurement of some PE funds to such an extent that industry-standard IRR performance benchmark statistics have become unreliable, as they no longer enable investors to identify which funds are really in the top performance quartile of a given peer group.

Why do fund level credit facilities distort IRR statistics?

Fundamentally, the use of fund level credit facilities affects fund performance in two ways. It shortens the fund’s duration, i.e. the average amount of time money is invested, measured as the difference between the capital-weighted average takedown date and the capital-weighted average date of distributions/the NAV. It also somewhat lowers the fund’s MOIC (multiple on invested capital), as these facilities are costly. Taken together, these two effects generally increase annualized fund returns, because the shorter duration more than compensates for the cost of the facility, and this is one of the reasons why (some) people like to see fund level credit facilities in place. Depending on how much and for how long the bridge is used, the effect is an increase in annual returns of maybe 1-2% on average. This may of course be relevant for the quartile classification of one specific fund that sits just at the threshold of another performance quartile, but it would not by itself be a source of concern about the general reliability of IRR performance benchmark statistics.
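
To make these two effects concrete, here is a minimal sketch in Python with purely hypothetical dates and amounts (all figures below are assumptions for illustration), computing the duration exactly as defined above together with the MOIC:

```python
from datetime import date

# Illustrative (hypothetical) cash flows, amounts in millions: capital calls and proceeds including the final NAV.
takedowns     = [(date(2015, 1, 1), 40.0), (date(2016, 1, 1), 60.0)]
distributions = [(date(2019, 1, 1), 80.0), (date(2020, 1, 1), 120.0)]

def weighted_avg_ordinal(flows):
    """Capital-weighted average date of a list of (date, amount) flows, expressed as a day count."""
    total = sum(amount for _, amount in flows)
    return sum(d.toordinal() * amount for d, amount in flows) / total

duration_years = (weighted_avg_ordinal(distributions) - weighted_avg_ordinal(takedowns)) / 365.25
moic = sum(a for _, a in distributions) / sum(a for _, a in takedowns)
print(f"Duration: {duration_years:.2f} years, MOIC: {moic:.2f}x")

# Delaying the takedown dates with a credit facility shortens the duration,
# while the financing cost slightly reduces the distributions and hence the MOIC.
```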

The real problem is a different one and is in fact related to a shortcoming of the IRR that has long been recognized by academics and practitioners alike: simply speaking, the IRR pays too much attention to what happens early in the life of a fund and too little attention to what happens afterwards. In extreme cases, very profitable cash flow patterns early in a fund’s life lead to an IRR that is “locked in” at a very high value and does not decrease even if the fund’s stellar start is followed by many years of poor performance. In the literature, this is called the “reinvestment bias of the IRR”, because the math behind the IRR implicitly assumes that proceeds from the fund’s stellar early months can be reinvested and generate the same stellar returns … an assumption that is not realistic in such cases.
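
A minimal sketch of this locked-in effect, using hypothetical cash flows (the xirr helper is a simple bisection-based dated-IRR solver written for this illustration, not any particular library’s function):

```python
from datetime import date

def xirr(flows, lo=-0.99, hi=10.0):
    """IRR for dated cash flows (negative = paid in, positive = received), found by bisection."""
    t0 = flows[0][0]
    def npv(rate):
        return sum(a / (1.0 + rate) ** ((d - t0).days / 365.0) for d, a in flows)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# Hypothetical fund: a quick 3x exit on the first deal, then years of flat performance.
flows = [
    (date(2010, 1, 1), -50.0),    # first deal
    (date(2010, 10, 1), 150.0),   # 3x exit after nine months
    (date(2011, 1, 1), -50.0),    # remaining capital deployed ...
    (date(2012, 1, 1), -50.0),
    (date(2020, 1, 1), 105.0),    # ... and returned years later with almost no gain
]
print(f"Fund IRR: {xirr(flows):.1%}")   # roughly 120%, despite an overall multiple of only 1.7x
```

Despite returning only about 1.7x the capital over roughly a decade, this hypothetical fund reports an IRR in the region of 120%, because the nine-month 3x exit dominates the calculation.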

Fund level credit facilities make it substantially more likely that we see extreme IRR values due to this bias: they not only shorten the fund’s duration in general, but also compress (often dramatically) the time between the first takedown from LPs and the first capital distribution to LPs. Consider as a simple example an investment of 100M that doubles the money after three years, hence generating an IRR of 26%. If the same investment is made by a fund that uses a 360-day credit facility, then there are only two years between the first takedown from LPs and the first capital distribution to LPs, and (even assuming 4% financing costs p.a.) the IRR jumps to 40%. Even if the fund then makes another deal in the following year that also runs for three years and doubles the money, the un-bridged fund’s IRR of 26% compares to an IRR of 32% for the fund that used the bridge (only once, and only for the first deal). Clearly, a difference of 6 percentage points is likely to put that fund into a different performance quartile.
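
The figures in this example can be reproduced with the xirr helper from the sketch above; the dates below are arbitrary placeholders, and the roughly 4M of financing cost follows the stated assumption of 4% p.a. on 100M for one year:

```python
from datetime import date
# reuses the xirr() helper from the sketch above

# Single deal, un-bridged: LPs fund 100M on day one and receive 200M after three years.
deal_unbridged = [(date(2015, 1, 1), -100.0), (date(2018, 1, 1), 200.0)]
# Same deal behind a 360-day bridge at 4% p.a.: LPs are called roughly a year later
# and the ~4M financing cost comes out of the proceeds.
deal_bridged = [(date(2016, 1, 1), -100.0), (date(2018, 1, 1), 196.0)]

print(f"Deal IRR, un-bridged: {xirr(deal_unbridged):.0%}")   # ~26%
print(f"Deal IRR, bridged:    {xirr(deal_bridged):.0%}")     # ~40%

# Add a second, un-bridged deal one year later that also doubles its money over three years:
second_deal = [(date(2016, 1, 1), -100.0), (date(2019, 1, 1), 200.0)]
print(f"Fund IRR, no bridge:          {xirr(sorted(deal_unbridged + second_deal)):.0%}")  # ~26%
print(f"Fund IRR, bridged first deal: {xirr(sorted(deal_bridged + second_deal)):.0%}")    # ~32%
```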

What is the magnitude of this distortion?

The previous example demonstrates that a fund may well report an IRR based on “bridged CFs” (i.e. based on the use of fund level credit facilities) that puts it into a certain performance quartile, while an otherwise identical fund without any fund level credit facility would report an IRR that puts it into a different quartile. To get a feel for how widespread this effect is, we performed a simulation analysis.

We took actual cash flow data on 95 liquidated US and EU buyout funds, vintages 2000 to 2003 included, from the Preqin cash flow dataset. We then compared performance results for seven different cases:

  1. Original CFs
  2. Simulated cash flows in which takedowns are pushed back by a bridge to a later date, that date being capped at the last actual takedown date (the assumed end of the investment period), for bridges of:
    • 90 days
    • 180 days
    • 360 days
  3. Simulated cash flows in which takedowns are pushed back by a bridge to a later date, that date being capped at the first actual distribution date (assuming the bridge cannot be used once the fund has started to distribute cash to LPs), for bridges of:
    • 90 days
    • 180 days
    • 360 days

For all seven cases, we calculated each fund’s IRR, assessed the change in IRR under the different simulated bridging assumptions, and verified whether that change would also alter the fund’s quartile position. For that last step, we used quartile cut-offs that remained based on the actual cash flows, so that we can assess how the quartiling of ONE fund would have changed if ONLY THAT ONE fund had used a bridge facility while the others did not.
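
A minimal sketch of this simulation logic, under assumptions about the data layout (each fund is a list of dated flows, takedowns negative and distributions positive; the Preqin data itself is of course not reproduced here), reusing the xirr helper from above:

```python
from datetime import timedelta

def shift_takedowns(flows, bridge_days, cap_date):
    """Push each takedown (negative flow) later by the bridge length, capped at cap_date.
    Bridge financing costs are ignored here for brevity; distributions are left untouched."""
    shifted = []
    for d, amount in flows:
        if amount < 0:
            d = min(d + timedelta(days=bridge_days), cap_date)
        shifted.append((d, amount))
    return sorted(shifted)

def simulated_irr(flows, bridge_days, cap_rule):
    """cap_rule: 'last_takedown' (case 2) or 'first_distribution' (case 3)."""
    if cap_rule == "last_takedown":
        cap = max(d for d, a in flows if a < 0)
    else:
        cap = min(d for d, a in flows if a > 0)
    return xirr(shift_takedowns(flows, bridge_days, cap))   # reuses xirr() from above

def quartile(irr, cutoffs):
    """cutoffs = (lower, median, upper) IRR thresholds, kept based on the ORIGINAL cash flows."""
    lower, median, upper = cutoffs
    if irr >= upper:
        return 1   # top quartile
    if irr >= median:
        return 2
    if irr >= lower:
        return 3
    return 4

# For each fund, a change in quartile(simulated_irr(flows, days, rule), cutoffs) relative to
# quartile(xirr(flows), cutoffs) flags the fund as potentially "mis-quartiled" by the bridge.
```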

For the case in which takedowns are pushed back by a bridge capped at the last actual takedown date (the assumed end of the investment period), we see that the average distortion is about 3% IRR for the 90-day bridge and about 7% IRR for the 180-day bridge. For a 360-day bridge, the average distortion is over 1000%, due to an outlier with an extreme case of IRR bias. Even ignoring that outlier, the result is driven by the extreme cases in which the IRR distortion becomes asymmetrically high, i.e. exactly by the mechanism outlined in the previous section. The table below shows not only the average IRR distortion, but also its distribution: from the 5th percentile, i.e. the 5% of the sample with the lowest level of IRR distortion, over the 50th percentile, i.e. the median level of IRR distortion, all the way up to the 95th and 99th percentiles, which show the most extreme 5% and 1% of IRR distortion cases respectively.

If we were to consider (admittedly somewhat arbitrarily) that a distortion of more than 5% is problematic, we see that for a simulated 90-day bridge about 10% of the funds show problematic levels of IRR distortion, while this is the case for over 25% of all funds assuming a 180-day bridge and for more than half the funds assuming a 360-day bridge.

If we operationalize the impact of the credit facility differently, capping the shifted takedown dates at the first actual distribution date, we see distortions of lower magnitude but conceptually the same impact. Even in this case, a 180-day or 360-day bridge still leaves 4.5% or 10% of all funds with IRRs distorted by 5% or more.
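
For reference, once the per-fund distortions (simulated minus original IRR) have been computed, the percentile summaries and the share of funds distorted by more than 5% can be tabulated in a few lines; the array below is synthetic placeholder data, not the results of the study:

```python
import numpy as np

# Synthetic stand-in for the 95 per-fund IRR distortions (bridged IRR minus original IRR).
rng = np.random.default_rng(0)
distortions = rng.lognormal(mean=-3.5, sigma=1.2, size=95)   # placeholder values only

percentiles = {p: np.percentile(distortions, p) for p in (5, 25, 50, 75, 95, 99)}
share_above_5pct = (distortions > 0.05).mean()   # share of funds distorted by more than 5% IRR

print(percentiles)
print(f"Share of funds with distortion > 5%: {share_above_5pct:.0%}")
```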

So let us look at the degree to which the simulated bridge facilities distort the IRR quartiling. Even under the less extreme second scenario (capping at the first distribution date), the magnitude of this potential problem becomes fully visible: 11 of the 95 funds would change quartile status even if we assume “only” a 90-day bridge facility. In other words, almost one fund out of eight is potentially “mis-quartiled” if we have to assume that the group of funds making up the quartile benchmark sample is heterogeneous and opaque in terms of the use of fund level credit facilities.

Who needs to worry about this?

We live in a world in which, for several years now, some funds have been using fund level credit facilities and some have not. Hence, in general, we do not know which of the funds that form the reference group for an IRR vintage-year quartile benchmark use fund level credit facilities and which do not. Even if we knew whether, and on what terms, a given fund we are interested in has been using these facilities, it remains impossible to accurately assign an IRR performance quartile to this fund. This is because the results above imply that the top quartile may to a large extent be made up of funds that show high IRRs only because they used fund level credit facilities … and happened to have a successful investment with a short holding period early in their life. This pushes the threshold for “being top quartile” up to inaccurately high levels and creates problems for LPs and GPs alike. Most obviously, LPs are no longer able to assess, based on the IRR quartile criterion, whether a fund they consider for investment is a top performer. But it also makes it impossible for a GP to demonstrate its true quartile status, even if it were to disclose the terms of its own use of credit lines: as long as the quartile thresholds are derived from a sample of funds that is heterogeneous and opaque with respect to the use of credit lines, whatever quartile threshold is shown may be irrelevant for that specific GP, who had a different kind of credit facility in place.

What can be done about it?

In the long term, one solution would be for all funds to disclose “bridged” and “non-bridged” cash flows to all LPs and all benchmark data providers, as one could then create “bridged” and “non-bridged” performance benchmarks. However, there are reasons to believe this is unlikely to happen in a comprehensive fashion in the near term. And even if it does, one would still have to deal with the fact that the performance statistics of the past few vintage years suffer from this issue.

Fortunately, a powerful albeit imperfect methodological fix exists for this problem. As outlined above, the distortive impact of fund level credit facilities stems from the fact that they trigger the reinvestment bias of the IRR. Several methodological improvements have been proposed to fix this bias. One that is particularly suitable in this case is the “PERACS Rate of Return”, which calculates annualized returns simply as a function of the TVPI and the investment duration:

PERACS Rate of Return = TVPI ^ (1 / Duration) − 1

Accordingly, this performance measure accurately captures the true economic impact of fund level credit facilities, which would alter both the TVPI and the duration. However, unlike IRR, the PERACS Rate of Return is not biased by any extreme cash flows that occur early in a fund’s life.
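
Under the assumption that the PERACS Rate of Return is computed as in the formula above (TVPI to the power of one over duration, minus one), a minimal sketch reusing the helpers from the earlier sketches shows how differently the two measures treat the “locked-in” fund from the reinvestment-bias example:

```python
from datetime import date
# reuses xirr() and weighted_avg_ordinal() from the earlier sketches

def peracs_rate_of_return(flows):
    """Annualized return computed as TVPI ** (1 / duration) - 1, with duration taken as the
    capital-weighted average distribution date minus the capital-weighted average takedown date."""
    takedowns     = [(d, -a) for d, a in flows if a < 0]
    distributions = [(d, a)  for d, a in flows if a > 0]
    tvpi = sum(a for _, a in distributions) / sum(a for _, a in takedowns)
    duration_years = (weighted_avg_ordinal(distributions) - weighted_avg_ordinal(takedowns)) / 365.25
    return tvpi ** (1.0 / duration_years) - 1.0

# The "locked-in" fund from the reinvestment-bias sketch above:
flows = [
    (date(2010, 1, 1), -50.0), (date(2010, 10, 1), 150.0),
    (date(2011, 1, 1), -50.0), (date(2012, 1, 1), -50.0), (date(2020, 1, 1), 105.0),
]
print(f"IRR:                   {xirr(flows):.1%}")                  # roughly 120%, dominated by the early exit
print(f"PERACS Rate of Return: {peracs_rate_of_return(flows):.1%}") # roughly 16%, in line with the 1.7x multiple
```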

If we replicate the previously outlined simulation using the PERACS Rate of Return instead of the IRR, we find the magnitude of the average performance distortion to be much lower (see tables above). Most importantly, under both scenarios the extreme cases of distortion are avoided, as one would expect given the methodological properties of the PERACS Rate of Return. Accordingly, we also find the quartile statistics to be much more robust in PERACS Rate of Return than in IRR: if we calculate quartile thresholds for all funds under our different scenarios based on the PERACS Rate of Return as well, we find only 5 cases out of 95 in which quartiles change based on the simulated use of fund level credit facilities.

In short, there is good news for all LPs and GPs worried that the use of fund level credit facilities distorts annual performance statistics: if they switch from the IRR to the PERACS Rate of Return, they can still rely on performance indicators and benchmark statistics with a good degree of confidence.
