The most in-demand topic on this blog is an Excel semivariance example. I have posted mathematical semivariance formulas before, but now I am providing a description of exactly how to compute semivariance in “vanilla” Excel… no VBA required.
The starting point is column D. Cell D$2 contains the average return over the past 36 months, and the range D31:D66 contains those monthly returns. Thus the contents of D$2 are simply =AVERAGE(D31:D66).
We will now examine each building block of this formula, starting with the innermost IF.
We only want to measure “dips” below the mean return. For every observation that “dips” below the mean we take the square of the dip; otherwise we return zero. This is a vector operation: the IF function returns a vector of values.
Next we divide the resulting vector by the number of observations (months) minus 1. We can simply count the observations with COUNT(D31:D66)-1. [NOTE 1: The minus 1 means we are taking the semivariance of a sample, not a population. NOTE 2: We could just as easily have taken the division “outside” the SUM; the result is the same either way.]
Next is the SUM. The following formula is the monthly semivariance of our returns in column D: {=SUM(IF(D31:D66<D$2,(D31:D66-D$2)^2,0)/(COUNT(D31:D66)-1))}
You’ll notice the added curly braces around this formula. They tell Excel to treat the formula as a vector (array) operation, and they allow this formula to stand alone. The braces are not typed by hand: you apply them to a vector (or matrix) formula by hitting <CTRL><SHIFT><ENTER> rather than just <ENTER>, and <CTRL><SHIFT><ENTER> is required again after every edit.
We now have monthly semivariance. If we wanted annual semivariance we could simply multiply by 12.
Often, however, we ultimately want annual semi-deviation (also called semi-standard deviation) for computing things like Sortino ratios. Going up one more layer in the call stack brings us to the SQRT operation: taking the square root of the annualized (×12) monthly semivariance.
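For readers who prefer code to cell references, the whole walkthrough can be mirrored in a few lines of Python. This is a sketch; the sample returns below are placeholders, not real data:

```python
import math

def semivariance(returns):
    """Sample semivariance: squared "dips" below the mean return,
    summed and divided by n - 1 (sample, not population)."""
    mean = sum(returns) / len(returns)          # plays the role of D$2
    dips = [(r - mean) ** 2 for r in returns if r < mean]
    return sum(dips) / (len(returns) - 1)

def annual_semideviation(monthly_returns):
    """Annualize the monthly semivariance (x12), then take the SQRT."""
    return math.sqrt(12 * semivariance(monthly_returns))

# Placeholder stand-in for the 36 monthly returns in D31:D66
monthly = [0.02, -0.01, 0.03, -0.04, 0.01, 0.00] * 6
print(annual_semideviation(monthly))
```

Unlike the Excel version, no array-formula machinery is needed: the list comprehension plays the role of the braced IF.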
I start with a hypothetical. You are choosing among three portfolios, A, B, and C. If you could know with certainty one of the following annual risk measures for each, which would you choose: variance, semi-variance, or max drawdown?
For me the choice is obvious: max drawdown. Variance and semi-variance are deliberately decoupled from return. In fact, we often say “variance” as shorthand for mean-return variance; similarly, “semi-variance” is shorthand for mean-return semi-variance. For each variance flavor, mean returns (average returns) are subtracted out in the risk formula. The mathematical bifurcation of risk and return is deliberate.
Max drawdown blends return and risk. This is mathematically untidy — max drawdown and return are non-orthogonal. However, the crystal ball of max drawdown allows choosing the “best” portfolio because it puts a floor on loss. Tautologically the annual loss cannot exceed the annual max drawdown.
My revised answer stretches the rules. If all three portfolios have future max drawdowns of less than 5 percent, then I’d like to know the semi-variances.
Of course there are no infallible crystal balls. Such choices are only hypothetical.
Past variance tends to be reasonably predictive of future variance; past semi-variance tends to predict future semi-variance to a similar degree. However, I have not seen data about the relationship between past and future drawdowns.
Research Opportunities Regarding Max Drawdown
It turns out that there are complications unique to max drawdown minimization that are not present with MVO or semi-variance optimization. However, at Sigma1, we have found some intriguing ways around those early obstacles.
That said, there are other interesting observations about max drawdown optimization:
1) Max drawdown only considers the worst drawdown period; all other risk data is ignored.
2) Unlike V or SV optimization, longer historical periods increase the max drawdown percentage.
3) There is a scarcity of evidence on the degree (or lack) of relationship between past and future max drawdowns.
(#1) can possibly be addressed by using hybrid risk measures, such as combined semi-variance and max drawdown measures. (#2) can be addressed by standardizing max drawdowns: a simple standardization would be DDnorm = DD/num_years; another possibility is DDnorm = DD/sqrt(num_years). (#3) requires research: across different time periods, different countries, different market caps, etc.
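To make the normalizations concrete, here is a minimal sketch of max drawdown and both proposed standardizations; the price series is hypothetical:

```python
import math

def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak, worst = prices[0], 0.0
    for p in prices:
        peak = max(peak, p)
        worst = max(worst, (peak - p) / peak)
    return worst

def dd_norm_linear(dd, num_years):
    return dd / num_years               # DDnorm = DD/num_years

def dd_norm_sqrt(dd, num_years):
    return dd / math.sqrt(num_years)    # DDnorm = DD/sqrt(num_years)

prices = [100, 110, 95, 105, 120, 90, 130]   # hypothetical closes
dd = max_drawdown(prices)                    # the 120 -> 90 drop: 0.25
```

The sqrt variant arguably fits better with how volatility itself scales with the square root of time.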
Also note that drawdown has many alternative flavors — cumulative drawdown, weighted cumulative drawdown (WCDD), weighted cumulative drawdown over threshold — just to name three.
The bottom line is that early adopters have embraced semi-variance based optimization and the trend appears to be snowballing. For instance, Morningstar now calculates risk “with an emphasis on downward variation.” I believe that drawdown measures, either stand-alone or hybridized with semi-variance, are the future of post-post-modern portfolio theory.
Bye PMPT. Time for a Better Name! Contemporary Portfolio Theory?
I recommend starting with the acronym first. I propose CPT or CAPT. Either could be pronounced “Capped”. However, CAPT could also be pronounced “Cap T”, as distinct from CAPM (“Cap M”). “C” could stand for either Contemporary or Current, and the “A” for Advanced or Alternative, with the first being a bit pretentious and the latter more diplomatic. I put my two cents behind CAPT, pronounced “Cap T”; you can figure out what you want the letters to represent. What are your two cents? Please leave a comment!
Back to (Contemporary) Risk Measures
I see semi-variance beginning to transition from the early-adopter phase to the early-majority phase. However, my observations may be skewed by the types of interactions Sigma1 Financial invites. I believe that semi-variance optimization will be mainstream in 5 years or less. That is plenty of time for semi-variance optimization companies to flourish. However, we’re also looking for the next next big thing in finance.
Over 50 years of academic financial thinking is based on a kind of financial gravity: the notion that for a relatively diverse investment portfolio, higher risk translates into higher return given a sufficiently long time horizon. Stated simply: “Risk equals reward.” Stated less tersely, “Return for an optimized portfolio is proportional to portfolio risk.”
As I assimilated the CAPM doctrine in grad school, part of my brain rejected some CAPM concepts even as it embraced others. I remember a graph of asset diversification showing that the risk/reward profiles of randomly selected portfolios improved with up to about 30 assets, at which point further improvement was minuscule, only asymptotically approaching an “optimal” risk/reward level. That resonated.
Conversely, strict CAPM thinking implied that a well-diversified portfolio of high-beta stocks would outperform a market-weighted portfolio of stocks over the long term, albeit in a zero-alpha fashion. That concept met with cognitive dissonance.
Now, dear reader, for staying with this post this far, I will reward you with some hard-won insights. After much risk/reward curve fitting on compute-intensive analyses, I found that the best-fit expected-return metric for assets was proportional to the square root of beta. In my analyses I defined an asset’s beta using 36 months of monthly returns relative to the benchmark index. For US assets, my benchmark “index” was mostly VTI total-return data.
Little did I know, at the time, that a brilliant financial maverick had been doing the heavy academic lifting around similar financial ideas. His name is Bob Haugen. I only learned of the work of this kindred spirit upon his passing.
My academic number crunching on data since 1980 suggested a positive, but decreasing, incremental total return with increasing volatility (or increasing beta). Bob Haugen suggested a negative incremental total return for high-volatility assets above an inflection point of volatility.
Mr. Haugen’s lifetime of published research dwarfs my to-date analyses. There is some consolation in the fact that I followed the data to conclusions that had more in common with Mr. Haugen’s than with the Academic Consensus.
An objective analysis of the investment approach of three investing greats will show that they have more in common with Mr. Haugen than with Mr. E.M. Hypothesis (aka Mr. Efficient Markets Hypothesis, not to be confused with “Mr. Market”). Those great investors are 1) Benjamin Graham, 2) Warren Buffett, and 3) Peter Lynch.
CAPM suggests that, with either optimal “risk-free” or leveraged investments, a capital asset line exists, tantamount to a linear risk-reward relationship. This line is set according to a unique tangent point on the efficient-frontier curve of expected volatility versus expected return.
My research at Sigma1 suggests a modified curve with a tangent-point portfolio comprised, generally, of a greater proportion of low-volatility assets than CAPM would indicate. In other words, my back-testing at Sigma1 Financial suggests that a different mix, favoring lower-volatility assets, is optimal; the resulting Sigma1 CAL (capital allocation line) is correspondingly different. Nonetheless, the slope (first derivative) of the Sigma1 efficient frontier is always positive.
Mr. Haugen’s research indicates that, in theory, the efficient-frontier curve begins sloping downward past a critical point as portfolio volatility increases. (Arguably the curve past the critical point ceases to be “efficient”, but parametrically it can still be calculated for academic or theoretical purposes.) An inverted risk/return curve can exist, just as an inverted Treasury yield curve can exist.
Academia routinely deletes the dominated bottom of the parabola-like portion of the complete “efficient frontier” curve (resembling a parabola of the form x = A + B*y^2) for the allocation of two assets (commonly stocks (e.g. SPY) and bonds (e.g. AGG)).
Maybe a more thorough explanation is called for. In the two-asset model the complete “parabola” is a parametric equation where x = Vol(t*A, (1-t)*B) and y = ER(t*A, (1-t)*B) [Vol = volatility, i.e. standard deviation; ER = expected return]. The bottom part of the “parabola” is excluded because it has no potential utility to any rational investor. In the multi-weight model, x = minVol(W) and y = maxER(W), where W is subject to the condition that its weights sum to 1. In the multi-weight, multi-asset model the underside is automatically excluded. However, there is no guarantee that there is no point where dy/dx is negative. In fact, Bob Haugen’s research suggests that negative slopes (dy/dx) are possible, even likely, for many collections of assets.
Time prevents me from following this financial rabbit hole to its end. However I will point out the increasing popularity and short-run success of low-volatility ETFs such as SPLV, USMV, and EEMV. I am invested in them, and so far am pleased with their high returns AND lower volatilities.
NOTE: The part about W is oversimplified for flow of reading. The bulkier explanation: y is stepped from y = ER(W) at minVol(W) up to the maximum expected return of any single asset (the portfolio with Wmax_ER_asset = 1), and for each y we compute x = minVol(W) s.t. ER(W) = y and sum_of_weights(W) = 1. Clear as mud, right? That’s why I wrote it the other way first.
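The two-asset “parabola” is easy to trace numerically. Here is a sketch with made-up inputs (the returns, volatilities, and correlation are invented for illustration), sweeping the parameter t from 0 to 1:

```python
import math

# Hypothetical two-asset inputs (think SPY-like vs. AGG-like)
er_a, er_b = 0.08, 0.03          # expected returns
vol_a, vol_b = 0.18, 0.06        # volatilities (standard deviations)
rho = -0.1                       # correlation between the two assets

points = []
for i in range(101):
    t = i / 100                                  # weight in asset A
    er = t * er_a + (1 - t) * er_b               # y = ER(t*A, (1-t)*B)
    var = ((t * vol_a) ** 2 + ((1 - t) * vol_b) ** 2
           + 2 * rho * t * (1 - t) * vol_a * vol_b)
    points.append((math.sqrt(var), er))          # x = Vol(t*A, (1-t)*B)

# The vertex of the "parabola": points below it in ER at the same or
# higher Vol form the dominated underside that academia deletes.
vertex_vol, vertex_er = min(points)
```

With a low or negative correlation, the minimum-volatility mix has less volatility than either asset alone, which is the vertex the text describes.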
As I continue to explore patterns in beta-client data, I clearly see one common difference. For globally diversified, asset-diversified portfolios of ETFs and mutual funds, portfolios optimized on 36-month monthly modified semivariance and on variance tend to converge to similar results. This is in sharp contrast to stock-based portfolios, where variance-optimized (MVO) and semivariance-optimized (PMPT) portfolios display a significant trade-off between Sharpe and Sortino ratios.
My preliminary conclusion, based on poring over individual optimized portfolios, is that variance and semivariance are closely correlated for portfolios based on sufficiently diversified ETFs. On the other hand, variance-optimized, semivariance-optimized, and hybrid (a blend of variance- and semivariance-optimized) portfolios differ significantly when individual stocks and bonds are analyzed. [“Sufficiently diversified” in this context does not mean diversified per se; it only means relative diversification within a given ETF or set of ETFs/ETNs/mutual funds.]
These preliminary findings suggest that semivariance and variance based optimizations are highly correlated for certain asset classes (and expected returns) while differing for other asset classes (and expected returns). Stock-pickers are more likely to see benefits from semivariance-based optimization than are those who select from relatively-diverse ETFs.
These preliminary findings are causing a shift in the approach taken by Sigma1. Since, so far, Sigma1 beta partners are primarily interested in constructing portfolios based primarily or exclusively around ETFs, ETNs, and mutual funds, our company is focusing more on Sharpe ratios (because they are quicker to optimize for than Sortino ratios).
Because Sigma1 HAL0 portfolio-optimization is tuned to optimize for 3 objectives, this presents an interesting question: “Your investment company wishes to optimize portfolios based on 1) expected return, 2) minimal variance, and 3) <RISK MEASURE 3>?”
Sigma1 is posing questions: What is your third criterion? What is your other risk measure? Answer these questions, and Sigma1 HAL0 software will optimize your portfolio accordingly, showing the trade-offs between Sharpe ratios and your other chosen risk metric.
Sigma1’s 3-objective-optimization is causing a few financial-industry players to ask the question of established optimization engines, “Can you do that?” Sigma1 Software can. Can your current portfolio-optimization software do the same?
In today’s near-zero interest rate economy, the reward versus risk of an investment portfolio can be measured using the Sharpe ratio. Like a batting average, higher numbers are better, and 0.400 is very good.
If portfolio Z has a forward-looking Sharpe ratio of 0.400, and an expected return of 8%, there is a 68% chance its 1-year return will be between -12% and +28%.
The math is surprisingly easy. Because the Sharpe ratio is a return/risk ratio it can be transformed into a risk/return ratio by finding its inverse (using the “1/x” button on a calculator). The inverse of 0.400 is 2.5. The return is 8%, so the “risk” is 2.5 times 8% which is 20%.
For the Sharpe ratio, the downside risk and the upside “risk” are the same. So the downside is 8% -20%, or -12%. The upside risk is 8%+20%, or 28%. Easy!
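The inversion arithmetic above fits in a couple of lines. This sketch assumes a risk-free rate of roughly zero, consistent with the near-zero-rate framing at the top:

```python
def one_sigma_band(sharpe, expected_return):
    """Invert a (forward-looking) Sharpe ratio into a +/- 1-sigma range."""
    sigma = expected_return / sharpe        # 1/Sharpe, times the return
    return expected_return - sigma, expected_return + sigma

low, high = one_sigma_band(0.400, 0.08)     # -> roughly (-0.12, 0.28)
```

For the two-sigma (95%) range, double sigma before adding and subtracting.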
Sharpe Ratios and Risk (more detail)
Where did the “68% chance” come from? The answer is a bit more complicated, but still fairly easy to understand.
It comes from the 3-sigma rule of statistics (also known as the 68-95-99.7 rule). The range of -12% to +28% is one standard deviation from the mean (plus or minus one sigma). The rule also says that 95% of outcomes will fall within two standard deviations. Double the deviation means two times the upside and downside risk, so the 95% confidence range becomes -32% to +48%. Finally, tripling the upside and downside risk means outcomes from -52% to +68% will occur 99.7 percent of the time.
Almost every investor will be pleased with a positive-sigma event, where the return is above 8%. For example, a +1 sigma (+1σ) occurrence yields a +28% return, quite nice.
A downside event is potentially quite troublesome. Even a -1σ event means a 12% loss. A -2σ is a much worse 32% loss.
Ex Ante and Ex Post Sharpe Ratios
Forward-looking (ex ante) Sharpe ratios are predictions made “prior to the event(s)”. They are always positive, because no rational investor would make an investment with a negative expected return. The assumptions baked into an ex ante Sharpe ratio prediction are 1) the expected standard deviation of total return, σ, and 2) the expected future return.
Backward-looking, or after the fact, (ex post) Sharpe ratios can be negative or positive. In fact, assuming “normal distributions of return”, there is a reasonable (but less than 50%) chance of a negative ex post Sharpe ratio.
Sigma1 HAL0 software optimizes for Sharpe ratios by optimizing for return and standard deviation. It also optimizes for semivariance. More “plain English” on that advantage later.
Two mathematical equations have transformed the world of modern finance. The first was CAPM, the second Black-Scholes. CAPM gave a new perspective on portfolio construction. Black-Scholes gave insight into pricing options and other derivatives. There have been many other advancements in the field of financial optimization, such as Fama-French — but CAPM and Black-Scholes-Merton stand out as perhaps the two most influential.
When CAPM (and MPT) were invented, computers existed but were very limited. Though the father of MPT, Harry Markowitz, wanted to use semi-variance, the computers of 1959 were simply inadequate. So Markowitz used variance in his groundbreaking book “Portfolio Selection: Efficient Diversification of Investments”.
Choosing variance over semi-variance made the computations orders of magnitude easier, but they were still very taxing to the computers of 1959. Classic covariance-based optimizations are still reasonably compute-intensive when a large number of assets is considered. Classic optimization of a 2000-asset portfolio starts by creating a covariance matrix with 2,001,000 unique entries (which, mirrored about the shared diagonal, fill out the full 2000×2000 = 4,000,000-entry matrix); that is the easy part. The hard part involves optimizing (minimizing) portfolio variance for a range of expected returns. This is often referred to as computing the efficient frontier.
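To make the sizes concrete, here is a small sketch (with an invented 3-asset covariance matrix) of the quadratic form the optimizer must evaluate over and over, plus the entry-count formula behind the 2000-asset figure:

```python
def portfolio_variance(weights, cov):
    """w' * Sigma * w, with the covariance matrix given as a list of rows."""
    n = len(weights)
    return sum(weights[i] * cov[i][j] * weights[j]
               for i in range(n) for j in range(n))

def unique_entries(n_assets):
    """Unique covariance entries: the diagonal plus one triangle."""
    return n_assets * (n_assets + 1) // 2

cov = [[0.04, 0.01, 0.00],      # hypothetical covariances
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.16]]
w = [0.5, 0.3, 0.2]
var_p = portfolio_variance(w, cov)
print(unique_entries(2000))      # the 2000-asset case
```

The quadratic form itself is cheap; the expense comes from minimizing it under constraints for every point on the frontier.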
The concept of semi-variance (SV) is very similar to the variance used in CAPM. The difference is in the computation. A quick internet search reveals very little about computing a “semi-covariance matrix”. Such a matrix, if it existed in the right form, could allow quick and precise computation of portfolio semi-variance in the same way that a covariance matrix does for portfolio variance. Semi-covariance matrices (SCMs) exist, but none “in the right form.” Each form of SCM has strengths and weaknesses; one of the many problems is that there is no unique canonical form for a given data set, and SCMs of different types capture only an incomplete portion of the information needed for semi-variance optimization.
The beauty of SV is that it measures “downside risk”, exclusively. Variance includes the odd concept of “upside risk” and penalizes investments for it. While not going to the extreme of rewarding upside “risk”, the modified semi-variance formula presented in this blog post simply disregards it.
I’m sure most readers of this blog understand this modified semi-variance formula, but please indulge me while I touch on some of the finer points. First, the 2 may look a bit out of place; it simply normalizes the value of SV relative to variance (V). Second, the “question mark, colon” notation means: if the condition is true, use the squared value in the summation, else use zero. Third, notice I use ri rather than ri - ravg.
The last point above is intentional and is another difference from “mean variance”, or rather “mean semi-variance”. If R is monotonically increasing across all samples (n intervals, n+1 data points), then SV is zero. I have many reasons for this choice. The primary reason is that with ravg, the SV for a steadily descending R would be zero; I don’t want a formula that rewards such performance with 0, the best possible SV score. [Others would substitute T, a usually positive number, as the target return, sometimes called the minimal acceptable return.]
Finally, a word about ri: it is the total return over interval i. Intervals should be as uniform as possible. I tend to avoid daily intervals due to the non-uniformity introduced by weekends and holidays. Weekly (last closing price of the trading week), monthly (last closing price of the month), and quarterly intervals are significantly more uniform in duration.
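Putting the three points together, my reading of the modified formula reduces to a few lines (a sketch, not canonical code):

```python
def modified_semivariance(returns):
    """(2/n) * sum of r_i^2 over intervals where r_i < 0.
    No mean (and no target T) is subtracted, so a monotonically
    rising R scores exactly 0 while a steadily falling R does not."""
    n = len(returns)
    return (2.0 / n) * sum(r * r for r in returns if r < 0)

modified_semivariance([0.01, 0.02, 0.03])     # rising every interval -> 0.0
modified_semivariance([-0.01, -0.01, -0.01])  # steadily falling -> positive
```

Swapping the `r < 0` test for `r < target` recovers the target-return (minimal-acceptable-return) variant mentioned above.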
Big Data and Heuristic Algorithms
Innovations in computing and algorithms are how semi-variance equations will change the world of finance. Common sense is why. I’ll explain why heuristic algorithms like Sigma1’s HALO can quickly find near-optimal SV solutions on a common desktop workstation, and even better solutions when leveraging a data center’s resources. And I’ll explain why SV is vastly superior to variance.
Computing SV for a single portfolio of 100 securities is easy on a modern desktop computer. For example 3-year monthly semi-variance requires 3700 multiply-accumulate operations to compute portfolio return, Rp, followed by a mere 37 subtractions, 36 multiplies (for squaring), and 36 additions (plus multiplying by 2/n). Any modern computer can perform this computation in the blink of an eye.
Now consider building a 100-security portfolio from scratch. Assume the portfolio is long-only and that any of these securities can have a weight between 0.1% and 90% in steps of 0.1%. Each security has 900 possible weightings. I’ll spare you the math: there are roughly 6.385×10^138 permutations. Needless to say, this problem cannot be solved by brute force. Further note that if the portfolio is turned into a long-short portfolio, where negative weights down to -50% are allowed, the search space explodes to close to 10^2000.
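As an aside on where a figure like 6.385×10^138 can come from (the post does not show its work, so this is my reconstruction): if the 100 weights in 0.1% steps must also sum to 100%, the count is the number of ways to split 1,000 tenths-of-a-percent among 100 securities with each getting at least one, i.e. the binomial coefficient C(999, 99):

```python
from math import comb

# Compositions of 1000 units (of 0.1%) into 100 positive parts
n_portfolios = comb(999, 99)
print(f"{float(n_portfolios):.3e}")   # on the order of 10^138
```

The 90% cap excludes only a vanishingly small fraction of these compositions, so it barely changes the count.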
I don’t care how big your data center is: a brute-force solution is never going to work. This is where heuristic algorithms come into play. Heuristic algorithms are a subset of metaheuristics. In essence, heuristic algorithms are algorithms that guide heuristics (or vice versa) to find approximate solutions to a complex problem. I prefer the term heuristic algorithm to describe HALO, because in some cases it is hard to say whether a particular line of code is “algorithmic” or “heuristic”; sometimes the answer is both. For example, semi-variance is computed by an algorithm but is fundamentally a heuristic.
Heuristic Algorithms, HAs, find practical solutions for problems that are too difficult to brute force. They can be configured to look deeper or run faster as desired by the user. Smarter HAs can take advantage of modern computer infrastructure by utilizing multiple threads, multiple cores, and multiple compute servers in parallel. Many, such as HAL0, can provide intermediate solutions as they run far and deep into the solution space.
Let me be blunt — If you’re using Microsoft Excel Solver for portfolio optimization, you’re missing out. Fly me out and let me bring my laptop loaded with HAL0 to crunch your data set — You’ll be glad you did.
Now For the Fun Part: Why switch to Semi-Variance?
Thanks for reading this far! Would you buy insurance that paid you if your house didn’t burn down? Say you pay $500/year and after 10 years, if your house is still standing, you get $6000; otherwise you get $0. Ludicrous, right? Or insurance that only “protects” your house from appreciation? Say it pays 50 cents for every dollar you make when you resell your house, but if you lose money on the resale you get nothing?
In essence that is what you are doing when you buy (or create) a portfolio optimized for variance. Sure, variance analysis seeks to reduce the downs, but it also penalizes the ups (if they are too rapid). Run the numbers on any portfolio and you’ll see that SV ≠ V. All things equal, the portfolios with SV < V are the better bet. (Note that classic_SV ≤ V, because it has a subset of positive numbers added together compared to V).
Let me close with a real-world example. SPLV is an ETF I own. It is based on owning the 100 stocks out of the S&P 500 with the lowest 12-month volatility. It has performed well and been received well by the ETF marketplace, accumulating over $1.5 billion in AUM. A simple variant of SPLV (which could be called PLSV, for PowerShares Low Semi-Variance) would contain the 100 stocks with the least SV. An even better variant would contain the 100 stocks that, in aggregate, produced the lowest-SV portfolio over the preceding 12 months.
HALO has the power to construct such a portfolio. It could solve for which 100 stocks are collectively optimal while preserving their relative market-cap ratios, or it could produce a re-weighted portfolio that further reduces overall semi-variance.
[Even more information on semi-variance (in its many related forms) can be found here.]
Let me take you back to grad school for a few moments, or perhaps your college undergrad. If you’ve studied much finance, you’ve surely studied beta in the context of modern portfolio theory (MPT) and the Capital-Asset Pricing Model (CAPM). If you are a quant like me, you may have been impressed with the elegance of the theory. A theory that explains the value and risk of a security, not in isolation, but in the context of markets and portfolios.
Markowitz’s MPT book, in the late 50’s, must have come as a clarion call to some investment managers. Published ten years prior, Benjamin Graham’s The Intelligent Investor was, perhaps, the most definitive book of its time. Graham’s book described an intelligent portfolio as a roughly 50/50 stock/bond mix, where each stock or bond had been selected to provide a “margin of safety”. Graham provided a value-oriented model for security analysis; Markowitz provided the tools for portfolio analysis. The concept of beta added another dimension to security analysis.
As I explore new frontiers of portfolio modeling and optimization, I like to occasionally survey the history of the evolving landscape of finance. My survey led me to put together a spreadsheet to compute β. Here is the beta-computation spreadsheet. The Excel spreadsheet uses three different methods to compute β, and they produce nearly identical results. I used 3 years of weekly adjusted closing-price data for the computations. R² and α (alpha) are also computed. The “nearly” part of “nearly identical” gives me a bit of pause: is it simply round-off, or are there errors? Please let me know if you see any.
An ancient saying goes “Seek not to follow in the footsteps of men of old; seek what they sought.” The path of “modern” portfolio theory leaves behind many footprints, including β and R-squared. Today, the computation of these numbers is a simple academic exercise. The fact that these numbers represent closed-form solutions (CFS) to some important financial questions has an almost irresistible appeal to many quantitative analysts and finance academics. CFS were just the steps along the path; the goal was building better portfolios.
Markowitz’s tools were mathematics, pencils, paper, a slide rule, and books of financial data. The first handheld digital calculator wasn’t invented until 1967. As someone quipped, “It’s not like he had a Dell computer on his desk.” He used the mathematical tools of statistics developed more than 30 years prior to his birth. A consequence of his environment is Markowitz’s (primary) definition of risk: mean variance. When first learning about mean-variance optimization (MVO), almost every astute learner eventually asks the perplexing question “So upside ‘risk’ counts the same as the risk of loss?” In MPT, the answer is a resounding “Yes!”
The current year is 2012, and most sophisticated investors are still using tools developed during the slide-rule era. The reason the MVO approach to risk feels wrong is because it simply doesn’t match the way clients and investors define risk. Rather than adapt to the clients’ view of risk, most investment advisers, ratings agencies, and money managers ask the client to fill out a “risk tolerance” questionnaire that tries to map investor risk models into a handful of MV boxes.
MPT has been tweaked and incrementally improved by researchers like Sharpe and Fama and French — to name a few. But the mathematically convenient MV definition of risk has lingered like a baseball pitcher’s nagging shoulder injury. Even if this metaphorical “injury” is not career-ending, it can be career-limiting.
There is a better way, though it has a clunky name: Post-Modern Portfolio Theory (PMPT). [Clearly most quants and financial researchers are not good marketers… how about Next-Gen Portfolio Optimization instead?] The heart of PMPT can be summed up as “minimizing downside risk as measured by the standard deviation of negative returns.” A good overview of PMPT appears in this Journal of Financial Planning article. This quote from that article stands out brilliantly:
Markowitz himself said that “downside semi-variance” would build better portfolios than standard deviation. But as Sharpe notes, “in light of the formidable computational problems…he bases his analysis on the variance and standard deviation.”
“Formidable computational problems” of 1959 are much less so today. Financial companies are replete with processing power, data storage, and computer networks. In some cases developing efficient software to use certain PMPT concepts is easy; in other cases it can be extremely challenging. (Please note the emphasis on the word ‘efficient’. A financial algorithm that takes months to complete is unlikely to be of any practical use.) The example Excel spreadsheet could easily be modified to compute a PMPT-inspired beta. [Hint: =IF(C4>0, 0, C4)]
Are you ready to step off the beaten path constructed 50 years ago by wise men with archaic tools? To step onto the hidden path they might have blazed, if armed with powerful computer technology? Click the link to start your journey on the one less traveled by.
I started Sigma1 with $35,000 in seed capital, a Linux workstation and a domain name I acquired in auction for $760. The original plan was to create a revolutionary hedge fund with accredited investors as clients. I started studying for the Series 65 exam and all went well until I started reading about securities laws and various legal case studies. I gradually realized two things:
1) U.S. Securities Law is very restrictive, even for “lightly regulated” hedge funds.
2) The legal start-up costs for a hedge fund were much higher than I anticipated.
The first realization was the most devastating to my plans. The innovative fee structure I wished to use was likely to face serious legal challenges to implement. Without a revolutionary fee structure, more favorable to clients, the Sigma1 Fund would be hard to differentiate from the hundreds of other funds already available.
The second objective of Sigma1 has been to develop proprietary financial software. Until now the Sigma1 Proprietary Trading Fund has been constructed based on research, pencil-and-paper securities analysis and some rudimentary Excel simulations. Some quantitative analysis has been applied, but without the mathematical rigor I prefer. That is about to change.
I recently devised a way to apply techniques developed while studying Electrical Engineering and Finance in grad school. In a nutshell, I will apply evolutionary algorithms to optimize portfolio construction. The same fundamental techniques my electrical engineering colleagues and I used to explore and optimize around the random perturbations inherent in fabricated silicon circuits can be used to optimize portfolios by efficiently exploiting conventional (linear) and unconventional (non-linear) correlations between diverse assets.
I have sequestered myself in a beautiful, tranquil location while on a well-earned sabbatical from work. While evolutionary algorithms will be a significant part of the software suite I will develop, I also intend to incorporate heuristics and machine-learning techniques. Similarly, I intend to use techniques from CAPM, such as efficient frontiers, but only as a first-order guide. Many of the limitations of CAPM (and the Fama-French enhancements thereof) stem from their intrinsic reliance on Gaussian, or “normal-distribution”, statistical models. Such models do not properly capture long-tail events, asymmetrical distributions, or even log-normal distributions. Classic CAPM models even struggle with the geometric mean of expected or past returns, and generally use arithmetic means to preserve the use of linear-systems analysis. Heuristic algorithms and other AI techniques need not use such assumptions as a mathematical crutch. The software I intend to develop should be able to find near-optimal solutions to financial problems that classic statistical methods “solve” only by making grossly inaccurate assumptions about probability distributions.
My intention is to develop one or more software products for fund managers that will aid in portfolio analysis, construction and refinement.