Greener Software is Better Software

[Figure: CPU load correlates with power consumption]

Faster Software is Greener Software

Simply put, when one software product is more efficient than another, it solves the same problem in less time.  And the less time software takes to run, the less power it consumes.

By way of illustration, consider the efficiency of a steamship traveling from New York to San Francisco before and after the Panama Canal was built.  The canal was a technological marvel of its time, cutting the journey from roughly 13,000 miles to 5,000.  The same work was performed with the same “hardware” (the steamer), but in about 30 days rather than 60, and on roughly half the coal.

Faster run time is the most significant and most visible component of green software, but it is not the only factor that matters.  Other factors affecting how much power software consumes include:

  • Cache miss rate
  • Streamlined versus bloated, crufty software
  • Use of best-suited hardware resources
  • Algorithm scalability

Without getting too technical, I’ll briefly touch on each bullet point.  A cache hit occurs when a CPU finds the data it needs in its internal cache memory; a cache miss occurs when the CPU must send an off-chip request to the computer’s RAM to fetch that data.  A cache miss is roughly 100x slower than a cache hit, in part because the data has to travel about 10 cm for a cache miss, versus about 5 mm for a cache hit.  The difference in power consumption between a cache hit and a cache miss can easily be 20x to 100x, or more.
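
Cache behavior is easiest to demonstrate in a low-level language, but even a high-level sketch hints at it.  In this minimal Ruby experiment (my own illustration, not a published benchmark), both loops read the same number of array elements; the second hops around memory, so the CPU’s caches help far less.  Exact ratios vary with the Ruby runtime and the hardware:

    require 'benchmark'

    N      = 1 << 24        # ~16.7 million integers in one flat array
    STRIDE = 4097           # odd stride, coprime with N, so every slot gets visited
    data   = Array.new(N, 1)

    Benchmark.bm(11) do |bm|
      sum = 0
      # Sequential walk: neighboring elements share cache lines, so most
      # accesses are cache hits.
      bm.report('sequential') { N.times { |i| sum += data[i] } }

      sum = 0
      # Scattered walk over the same number of elements: each access lands far
      # from the previous one, so far more requests go all the way out to RAM.
      bm.report('strided')    { N.times { |i| sum += data[(i * STRIDE) % N] } }
    end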

Most software starts out reasonably streamlined.  Later, if the software is popular, there comes a time when enhancement requests and bug reports arrive faster than developers can implement them cleanly.  Consequently, many developers implement quick but inefficient fixes.  Often this behavior is encouraged by managers trying to hit aggressive schedule commitments.  The developers intend to come back and improve the code, but frequently their workload doesn’t permit it.  After a while, developers forget where the “kludges” or hacks are.  Even worse, the initial developers get reassigned to other projects or leave for other jobs.  The new developers are challenged to learn the unfamiliar code while implementing fixes and enhancements — adding their own cruft.  This is how crufty, bloated software emerges: overworked developers focused on schedule over efficiency, plus developer turnover.

Modern CPUs have specialized instructions and hardware for different compute operations.  One example is Intel’s SSE technology, which features a variety of Single-Instruction, Multiple-Data (SIMD) extensions.  For instance, SSE4 (and AVX) can add two 4-number vectors in a single operation, rather than issuing 4 separate ADD instructions.  This reduces CPU instruction traffic and saves both power and time.

Finally, algorithm scalability is increasingly important to modern computing and compute efficiency.  Scalability has many meanings, but I will focus on the ability of software to use multiple compute resources in parallel.  [Also known as parallel computing.]  Unfortunately, most software in use today has limited or no compute-resource scalability, meaning it can use only 1 core of a modern 4-core CPU.  In contrast, linearly-scalable software could run 3x faster by using 3 of the 4 cores at full speed.  Even better, it could run 3x faster on 4 cores running at 75% speed, and consume about 30% less power.  [I’ll spare you the math, but if you are curious this link will get you started.]
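
For the curious, here is a back-of-envelope version of the math as a Ruby sketch.  The power model is a simplifying assumption (dynamic CPU power scaling as frequency to the power α, with α between 2 and 3 depending on how aggressively voltage drops with frequency), not a precise derivation:

    # Energy model for the 4-cores-at-75%-speed claim (a toy model).
    [2.0, 3.0].each do |alpha|
      f       = 0.75                  # each core at 75% clock speed
      cores   = 4
      speedup = cores * f             # 4 * 0.75 = 3x throughput with linear scaling
      power   = cores * f**alpha      # total power relative to 1 core at full speed
      energy  = power / speedup       # energy per job, normalized to the 1-core baseline
      puts format('alpha=%.0f: speedup=%.2fx, relative energy per job=%.2f',
                  alpha, speedup, energy)
    end
    # alpha=2 gives 0.75 (25% less energy); alpha=3 gives 0.56 (44% less).
    # The "about 30% less power" figure above falls inside that range.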

“Distributed Software” is Greener

Distributed computing is technology that allows compute jobs to be farmed out to the “cloud” or to a data center queue.  Rather than leaving desktop workstations idle much of the day, a data center pools a room full of computers and directs compute jobs to the least-busy machines.  Jobs can also be directed to the computers best-suited to a particular compute request.  Intelligent data centers can even put unused computers into a “deep sleep” mode that uses very little power.

I use the term distributed software to mean software that is easily integrated with a job-submission or queuing infrastructure.  [Short for distributed-computing-capable software.]  Clearly, distributed software benefits directly from the efficiencies of a given data center.  Distributed software can also benefit from the ability to run in parallel on multiple machines.  And the more tightly software is coupled to the capabilities and status of the data center, the more efficiently it can adapt to dynamic changes.
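
As a toy illustration of the scheduling idea (real queuing infrastructures such as LSF or Grid Engine are far more sophisticated; the node names and load numbers here are invented):

    Machine = Struct.new(:name, :load, :asleep)

    farm = [
      Machine.new('node01', 0.85, false),
      Machine.new('node02', 0.10, false),
      Machine.new('node03', 0.00, true),    # in "deep sleep", drawing little power
    ]

    def dispatch(farm, job)
      awake  = farm.reject(&:asleep)
      target = awake.min_by(&:load)
      # Wake a sleeping machine only when every awake machine is nearly saturated.
      if target.nil? || target.load > 0.90
        sleeper = farm.find(&:asleep)
        if sleeper
          sleeper.asleep = false
          target = sleeper
        end
      end
      target.load += 0.25                   # crude bookkeeping for the new job
      puts "#{job} -> #{target.name}"
    end

    dispatch(farm, 'overnight-risk-report') # => overnight-risk-report -> node02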

Sigma1 Software is Green

Sigma1 financial software (code-named HAL0) has been designed from the ground up to be lean and green.  First and foremost, HAL0 (named in honor of Arthur C. Clarke’s HAL 9000 — “H-A-L is derived from Heuristic ALgorithmic (computer)”) is architected to scale near-linearly to tens or hundreds of cores, “sockets”, or distributed machines.  Second, the central kernel, or engine, is designed to be as lightweight and streamlined as possible, helping to reduce expensive cache misses.  Third, HAL0 uses heuristic algorithms and other “AI” features to efficiently navigate astronomically large search spaces (10^18 points and larger).  Fourth, HAL0 uses an innovative computation-cache system that allows repeated complex computations to be looked up in the cache rather than recomputed.  In alpha testing, this feature alone accounted for a 3X run-time improvement.  Finally, HAL0 portfolio software incorporates a number of more modest run-time and power-saving features, such as coding vector operations explicitly as vector operations, allowing easier use of SIMD and possibly GPGPU instructions and hardware.
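
The general idea behind the computation cache is memoization: compute an expensive result once, then look it up on every repeat request.  HAL0’s actual implementation is proprietary; in this Ruby sketch, expensive_metric is a made-up stand-in for any costly, repeated computation:

    cache = {}

    expensive_metric = lambda do |weights|
      sleep 0.01                        # pretend this is a complex risk computation
      weights.sum / weights.size.to_f
    end

    def cached(cache, key)
      return cache[key] if cache.key?(key)
      cache[key] = yield                # compute once, store, reuse
    end

    1000.times do
      w = [0.4, 0.4, 0.2]               # candidate portfolios often repeat weightings
      cached(cache, w) { expensive_metric.call(w) }
    end
    # Only the first call pays the 10 ms; the other 999 are instant cache hits.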

Some financial planners still use Microsoft Excel to construct and optimize portfolios.  This is slow and inefficient, to say the least.  Other portfolio software I have read about is an improvement over Excel, but mentions neither scalability nor heuristic algorithms.  It is possible, perhaps likely, that other financial software with some of the capabilities of HAL0 exists.  I suspect, however, that if it does, it is proprietary, in-house software that is not for sale.

A Plea for Better, Greener Software

In closing, I’d like the software community to consider how the efficiency (or inefficiency) of their current software products contributes to worldwide power consumption.  Computer hardware has made tremendous strides in performance-per-watt over the last ten years, and continues to do so.  IT and data-center technology is also becoming more power-efficient.  Unfortunately, most software has been trending in the opposite direction: becoming more bloated and less efficient.  I urge software developers and software managers to consider the impact of the software they are developing.  I challenge you to estimate, perhaps for the first time, how many kilowatt- or megawatt-hours your current software is likely to consume.  Then ask yourself, “How can I reduce that?”

Toss your Financial Slide-rule: Beta Computation, MPT, and PMPT

Let me take you back to grad school for a few moments, or perhaps your undergrad days. If you’ve studied much finance, you’ve surely studied beta in the context of modern portfolio theory (MPT) and the Capital Asset Pricing Model (CAPM). If you are a quant like me, you may have been impressed with the elegance of the theory: a theory that explains the value and risk of a security not in isolation, but in the context of markets and portfolios.

Markowitz’s MPT book, published in the late 1950s, must have come as a clarion call to some investment managers.  Published roughly ten years prior, Benjamin Graham’s The Intelligent Investor was, perhaps, the most definitive investing book of its time.  Graham’s book described an intelligent portfolio as a roughly 50/50 stock/bond mix, where each stock or bond had been selected to provide a “margin of safety”.  Graham provided a value-oriented model for security analysis; Markowitz provided the tools for portfolio analysis.  The concept of beta that later grew out of Markowitz’s framework added another dimension to security analysis.

As I explore new frontiers of portfolio modeling and optimization, I like to occasionally survey the history of the evolving landscape of finance.  My survey led me to put together a spreadsheet to compute β.  Here is the beta-computation spreadsheet.  The Excel spreadsheet uses three different methods to compute β, and they produce nearly identical results.  I used 3 years of weekly adjusted closing-price data for the computations.  R² and α (alpha) are also computed.  The “nearly” part of identical gives me a bit of pause — is it simply round-off, or are there errors?  Please let me know if you see any.
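
For readers who prefer code to spreadsheet formulas, here is the same computation sketched in Ruby.  The return series below are invented for illustration; the spreadsheet uses real weekly returns derived from adjusted closing prices:

    asset  = [0.012, -0.004, 0.021, -0.013, 0.007, 0.015, -0.009, 0.004]
    market = [0.010, -0.002, 0.015, -0.011, 0.005, 0.012, -0.007, 0.003]

    def mean(xs)
      xs.sum / xs.size.to_f
    end

    def cov(xs, ys)
      mx = mean(xs)
      my = mean(ys)
      xs.zip(ys).sum { |x, y| (x - mx) * (y - my) } / (xs.size - 1)
    end

    def stddev(xs)
      Math.sqrt(cov(xs, xs))
    end

    # Method 1: covariance over market variance (Excel: COVAR / VAR)
    beta1 = cov(asset, market) / cov(market, market)

    # Method 2: correlation times the ratio of standard deviations (CORREL, STDEV)
    correl = cov(asset, market) / (stddev(asset) * stddev(market))
    beta2  = correl * stddev(asset) / stddev(market)

    # Method 3: least-squares regression slope (Excel: SLOPE), from raw sums
    n   = market.size
    sx  = market.sum
    sy  = asset.sum
    sxy = market.zip(asset).sum { |x, y| x * y }
    sxx = market.sum { |x| x * x }
    beta3 = (n * sxy - sx * sy) / (n * sxx - sx * sx)

    alpha = mean(asset) - beta1 * mean(market)    # regression intercept
    r_sq  = correl**2
    puts [beta1, beta2, beta3, alpha, r_sq].map { |v| v.round(6) }.inspect

The three betas are algebraically identical, so any daylight between them in the spreadsheet should be floating-point round-off.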

An ancient saying goes “Seek not to follow in the footsteps of men of old; seek what they sought.”   The path of “modern” portfolio theory leaves behind many footprints, including β and R-squared.  Today, the computation of these numbers is a simple academic exercise.  The fact that these numbers represent closed-form solutions (CFS) to some important financial questions has an almost irresistible appeal to many quantitative analysts and finance academics.   CFS were just the steps along the path;  the goal was building better portfolios.

Markowitz’s tools were mathematics, pencil and paper, a slide rule, and books of financial data.  The first handheld digital calculator wasn’t invented until 1967.  As someone quipped, “It’s not like he had a Dell computer on his desk.”  He used the mathematical tools of statistics developed more than 30 years before his birth.  A consequence of his environment is Markowitz’s (primary) definition of risk: mean variance.  When first learning about mean-variance optimization (MVO), almost every astute student eventually asks the perplexing question, “So upside ‘risk’ counts the same as the risk of loss?”  In MPT, the answer is a resounding “Yes!”

The current year is 2012, and most sophisticated investors are still using tools developed during the slide-rule era.  The reason the MVO approach to risk feels wrong is that it simply doesn’t match the way clients and investors define risk.  Rather than adapt to the clients’ view of risk, most investment advisers, ratings agencies, and money managers ask the client to fill out a “risk tolerance” questionnaire that tries to map investor risk models into a handful of MV boxes.

MPT has been tweaked and incrementally improved by researchers like Sharpe and Fama and French — to name a few.  But the mathematically convenient MV definition of risk has lingered like a baseball pitcher’s nagging shoulder injury.  Even if this metaphorical “injury” is not career-ending, it can be career-limiting.

There is a better way, though it has a clunky name: Post-Modern Portfolio Theory (PMPT).  [Clearly most quants and financial researchers are not good marketers… perhaps Next-Gen Portfolio Optimization instead?]  The heart of PMPT can be summed up as “minimizing downside risk as measured by the standard deviation of negative returns.”  There is a good overview of PMPT in this Journal of Financial Planning article.  This quote from that article stands out brilliantly:

Markowitz himself said that “downside semi-variance” would build better portfolios than standard deviation. But as Sharpe notes, “in light of the formidable computational problems…he bases his analysis on the variance and standard deviation.”

The “formidable computational problems” of 1959 are much less formidable today.  Financial companies are replete with processing power, data storage, and computer networks.  In some cases developing efficient software to use certain PMPT concepts is easy; in other cases it can be extremely challenging.  (Please note the emphasis on the word ‘efficient’.  A financial algorithm that takes months to complete is unlikely to be of any practical use.)  The example Excel spreadsheet could easily be modified to compute a PMPT-inspired beta.  [Hint:  =IF(C4>0, 0, C4)]
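
Here is the hint above, expanded into a Ruby sketch of downside deviation.  A target return of 0 is assumed for simplicity; PMPT practitioners often use a minimum acceptable return (MAR) instead.  The returns are invented:

    returns = [0.012, -0.004, 0.021, -0.013, 0.007, -0.009]
    target  = 0.0

    # The Excel hint, row by row: zero out the gains, keep the losses
    downside = returns.map { |r| r > target ? 0.0 : r - target }

    # Downside deviation: root-mean-square of the below-target returns
    downside_deviation = Math.sqrt(downside.sum { |d| d * d } / returns.size)

    puts downside_deviation
    # Swap this in wherever standard deviation appears, and you have a
    # PMPT-flavored beta that penalizes only the returns investors actually fear.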

Are you ready to step off the beaten path constructed 50 years ago by wise men with archaic tools?  To step onto the hidden path they might have blazed, had they been armed with powerful computer technology?  Click the link to start your journey on the one less traveled by.

New Perspectives on Portfolio Optimization

[Figure: risk/reward contours for 100 optimized portfolios]

Building superior investment portfolios is what money managers are paid to do. As a fund manager, I wanted software to help me build superior, positive-alpha portfolios.

Not finding software that did anything like I wanted, I decided to write my own.

When I build or modify a portfolio I start with investment ideas. Ideas like going short BWX (international government debt) and long JNK (US junk bonds). I want some US equity exposure with VTI and some modest buy-write protection through ETB. And I have a few stocks that I believe are likely to outperform the market. What I’d like is portfolio software that will take my list of stocks, ETFs, and other securities and show me the risk/reward tradeoff for a variety of portfolios built from these securities.

Before I get too far ahead of myself, let me explain the graphic above. It uses two measures of risk and a proprietary measure of expected return. The risk measures are 3-year portfolio beta (vs. the S&P 500) and sector diversification. These risk measures are transformed into “utility metrics”, which simply means bigger is better. By maximizing utility, risk is minimized.

The risk utility metrics (or heuristics) are set up as follows. 10 is the absolute best score and 0 the worst. In this graph a beta of 1.0 results in a beta “risk metric” of 10. A beta of infinity would result in a beta risk metric of 0. For this simulation, I don’t care about betas less than 1, though they are not excluded. The sector diversification metric measures how closely any portfolio matches sector market-cap weights in the S&P 500. A perfect match scores a 10. The black “X” surrounded by a white circle denotes such a perfectly balanced portfolio. In fact this portfolio is used to seed the construction of the wide range of investment portfolios depicted in the chart.
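
To make the mapping concrete, here is one simple Ruby function with the stated properties (a beta of 1.0 or lower scores a 10; a beta of infinity scores 0).  The exact curve HAL0 uses is proprietary; this is just an illustrative shape:

    def beta_utility(beta)
      10.0 / [beta, 1.0].max   # betas below 1 are not penalized
    end

    [0.8, 1.0, 1.5, 2.0, 10.0].each do |b|
      puts format('beta %5.2f  ->  metric %5.2f', b, beta_utility(b))
    end
    # beta 0.80 -> 10.00, beta 1.00 -> 10.00, beta 2.00 -> 5.00, beta 10.00 -> 1.00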

One thing is immediately clear. Moving away from the relative safety of the 10/10 corner, expected returns increase, from 7.8% up to 15%. Another observation is that the software doesn’t see much benefit in increased beta (decreased beta metric) unless sector diversification is also decreased.  [This is the software “talking”, not my opinion, per se.]

The contour lines help visualize the risk tradeoffs (trading beta risk for non-diversification risk) for a particular expected rate of return.  The pink 11% return contour looks almost linear — an outcome I find a bit surprising given the non-linear risk-estimation heuristics used in the modeling.

For all that the graphic shows, there is much it does not.  It does not show the composition or weightings of securities used to build the 100 portfolios whose scores appear.  That data appears in reports produced by the portfolio-tuner software.  The riskiest, but highest expected-return portfolios are heavy in financials and, intriguingly, consumer goods.  More centrally-located portfolios, with expected returns in the 11% range, are over-weighted in the basic materials, services (retail), consumer goods, financial, and technology sectors.

Back to the original theme: desirable features of financial software — particularly portfolio-optimization software.  For discussion, let’s assign the codename HAL0 (HAL zero, in homage to HAL 9000) to this portfolio-optimization software.  I don’t want dime-a-dozen stock/ETF screeners, but I do want software I can ask, “HAL0, help me build a complete portfolio by finding securities that optimally complement this 70% core of securities.”  Or, “HAL, let’s create a volatility-optimized portfolio based on this particular list of securities, using my expected rates of return.”  Even, “HAL, forget volatility, standard deviation, etc.; use my measures of risk and return, and build a choice of portfolios tuned and optimized to those heuristics.”

These are things the alpha version of HAL0 can do today (except for understanding English… you have to speak HAL’s language to pose your requests).  The plot you see was generated from data produced in just under 3 hours on an inexpensive desktop running Linux.  That run used 10,000 iterations of the optimization engine.  However, 100 iterations, running in a mere 2 minutes, produce a nearly identical solution space.

HAL0 supports n-dimensional solution spaces (surfaces, frontiers), though I’ve only tested 2-D and 3-D so far.  The fact that visualizing 4-D data would probably involve an animated 3-D video makes me hesitate.  Preserving “granularity” also requires exponential scaling in compute time.  Ten data points provide acceptable granularity for a 2-D optimization, 100 data points for 3-D, and 1000 data points for 4-D.  Under those conditions the 4-D sim would be a bit more than 10x slower than the 3-D sim.  If a granularity of 20 is desired, the 3-D sim slows by 4X, and a 4-D optimization by an additional 8X.  I have considered the idea that a 4-D optimization could be run for a short time, say 10 iterations and/or with low granularity (say 8), after which one of the utility heuristics could be discarded and a 3-D optimization (with higher depth and granularity) could continue from there… nothing in the HAL0 software precludes this.
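
The bookkeeping is easy to verify, assuming the data-point count grows as granularity to the power (dimensions − 1), which matches the 10/100/1000 figures above:

    def points_needed(granularity, dims)
      granularity**(dims - 1)   # 2-D: g points, 3-D: g^2, 4-D: g^3
    end

    [[10, 3], [10, 4], [20, 3], [20, 4]].each do |g, d|
      puts format('granularity %2d, %d-D: %5d points', g, d, points_needed(g, d))
    end
    # 3-D at granularity 20 needs 4x the points of granularity 10 (400 vs 100);
    # 4-D needs a further 8x (8000 vs 1000), matching the figures above.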

HAL0 is software built to build portfolios.  It uses algorithms from software my partner and I developed in grad school to solve engineering problems: algorithms that built upon evolutionary algorithms, AI, machine learning, and heuristics.  HAL0 also incorporates ideas and insights I have had in the intervening 8 years.  Incorporated into its software DNA are features I find extremely important: robustness, scalability, and extensibility.

Today HAL0 can construct portfolios composed of stocks, ETFs, and highly-liquid bonds and commodities.  I have not yet figured out a satisfactory way to include options, futures, or assets such as non-negotiable CDs in the optimization engine.  Nor have I implemented multi-threading or distributed computing, though the software is designed from the ground up to support these scalability features.

HAL0 is in the late alpha-testing phase.  I plan to have a web-based beta-testing model ready by the end of 2012.

Disclaimer:  Do not make adjustments to your investment portfolio without first consulting a registered investment adviser (RIA), CFP or other investment professional.

Financial Software: Heuristics Explained

A Baseball Analogy

Imagine you’re the general manager of a Major League ball club.  Your primary job is to construct (and maintain) a team of players that will win lots of games, while keeping the total player payroll as low as possible.  When considering a hypothetical roster, a baseball GM has two primary objectives in mind:

  1. Total annual payroll (plus any associated “luxury tax”)
  2. Expected season wins (and post-season wins)

These objectives can also be called heuristics — rules of thumb that help find solutions to complex problems.  These heuristics can be turned into numbers (quantified) by creating cost functions or utility functions.  Please don’t let the jargon put you off; we are merely talking a little baseball here.

The cost function for payroll is just that… the total annual salaries for a proposed roster.  It is called a cost function because cost is something we are trying to minimize.  Expected wins is called a utility function, because utility is good, and we want to maximize it.

Now, accurately predicting the number of wins for a hypothetical (or real) roster of players is a real challenge.  Every scout and adviser is going to have his or her own ideas, or heuristics.  Just watch Moneyball to see what I mean.  To turn any given roster into a utility score, a GM could write a proposed roster on a whiteboard and point-blank ask each adviser, “How many wins will this team produce?”  The GM could average these predictions and, boom!, that’s a utility function.  The GM could also hire a computer scientist and a statistician to code up a utility function that scores any proposed roster using a chosen set of stats.

Either way, the GM can now evaluate any proposed roster on two metrics: cost and wins.  These data can be plotted, and patterns quickly emerge.  Some proposed rosters will be both more expensive and less “winning” than others.  Those rosters are said to be dominated, and they can be removed from consideration.  Once all the dominated rosters are eliminated, what remains is a series of dots that form a curve.  Moving up the curve, one finds more-winning but more-expensive rosters.  Moving the other way, payroll costs less, but expected wins decrease.  This curve resembles what financial folks call an efficient frontier — the expected risk/reward tradeoff for an optimized portfolio selected from a basket of securities.
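
For the programmers in the audience, here is a small Ruby sketch of that dominance filter.  The roster names and numbers are invented:

    Roster = Struct.new(:name, :payroll, :wins)   # payroll in $M

    rosters = [
      Roster.new('A',  80, 78),
      Roster.new('B', 120, 95),
      Roster.new('C', 125, 90),   # costs more than B yet wins fewer games
      Roster.new('D',  95, 85),
    ]

    # A roster is dominated if some other roster costs no more, wins at least
    # as much, and is strictly better on at least one of the two.
    def dominated?(r, rosters)
      rosters.any? do |o|
        o.payroll <= r.payroll && o.wins >= r.wins &&
          (o.payroll < r.payroll || o.wins > r.wins)
      end
    end

    frontier = rosters.reject { |r| dominated?(r, rosters) }.sort_by(&:payroll)
    frontier.each { |r| puts "#{r.name}: $#{r.payroll}M -> #{r.wins} wins" }
    # => A, D, and B remain on the curve; C is dominated and drops out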

Back to Portfolio Optimization Software

The baseball analogy above tries to explain mathematical concepts without resorting to math.   OK, I did use a few math words, but no equations!

There are several differences between a baseball roster and an investment portfolio.  Two key differences: 1) you can own multiple shares of a stock or ETF (but only one of any given player); 2) you can trade stocks and ETFs virtually whenever you want.

Nonetheless, the baseball analogy is useful in illustrating what Sigma1 Software will be able to do for fund managers and investors.  Instead of building a baseball roster, you are building an investment portfolio.  In the classic CAPM investing model, the cost function is standard deviation (risk), and the utility function is expected return.  Historical standard deviation is easy to compute, but expected return is much harder to estimate accurately.

Now, if you are an active fund manager, you probably have in-house analysts paid to help you pick stocks (just as GMs have scouts).  But scouting reports from analysts do not a portfolio make… even if your analysts are giving you positive-alpha stock picks. A robust asset-allocation strategy is necessary to build a robust portfolio out of your chosen list of securities.

The Vision for Sigma1 Portfolio Software

A Vision for Financial Professionals

It started with the desire to create software that would allow me to build a better portfolio for my proprietary trading fund: software that could optimize portfolios using heuristics, cost functions, and utility functions of my own choosing.  I wanted to create portfolio software for investment managers that:

  • Allows them to select their own list of securities (or have one chosen dynamically from all investable securities)
  • Takes advantage of one or more “seed portfolios” if desired
  • Allows proprietary heuristics, cost functions, market models, etc. to plug seamlessly into the optimization engine
  • Isn’t limited to linear or Gaussian risk-analysis measures
  • Runs in minutes or hours, not days
  • Is capable of efficiently utilizing distributed and parallel computing resources — Scalability

A Vision for “Retail” Investors

For retail investors, the general investing public, I envision scaled-down versions of the professional portfolio optimization software.  The retail investor software will run as an application on a web server.  A free version will provide portfolio optimization for a small basket of user-chosen securities, perhaps limiting portfolio size to 10.   A paid-subscription plan will offer more features and allow retail users to build larger portfolios.

To keep the software easy to use, a variety of ready-to-use heuristics will be available.  These are likely to include:

  • Standard deviation
  • Historic best-year and worst-year analysis
  • Beta (versus common indices)
  • Diversification measures (e.g. sector, market-cap)
  • Price-to-earnings
  • Proprietary expected-return predictors

Portfolio Software Development: Day 3

Portfolio Software: Plain English

Yesterday I wrote an early version of financial software to help users improve their investment portfolios.  This software has the ability to solve financial problems in a very different way than is taught in graduate-level finance classes.  Rather than relying solely on statistical analysis, Sigma1 software uses techniques from computer science called artificial intelligence, or AI.  (I prefer the term machine intelligence because there is nothing artificial about the intelligent results produced by a solid AI algorithm.  If you doubt this, I challenge you to beat Chessmaster 11 running on your PC… on max difficulty.)

My idea has been to develop a sophisticated program that would allow institutional investors such as fund managers to “plug in” their proprietary valuation models and come up with solid portfolios in minutes or hours, rather than the days or weeks required by brute-force techniques.  As I was working, I realized that smaller “normal” investors could also benefit from a simplified version of Sigma1 software.

Rather than sell this lite version of the portfolio-optimization software, I may provide a free version on a website.  The free version would have limitations on both the number of securities and the “depth” of analysis and reporting.  For example, the user might only be able to enter a maximum of 20 securities in their current (or proposed) starting portfolio.  The free web version would quickly suggest an asset-allocation mix of those securities that is (potentially) safer with the same expected return, or (potentially) equally safe with a higher expected return.

If the free web version is popular enough, Sigma1 may introduce a paid web subscription service that allows a larger portfolio, a wider selection of securities, more detailed reports and even sample portfolios to “blend” with the investor’s favorite tickers.

Even after the free web version is released, I plan to refine the advanced institutional version of the software.  I plan to use it to improve the composition of the Sigma1 proprietary trading fund.  I also intend to develop a world-class product that institutional investors will want to have access to… for a very reasonable price.

At this time I have zero interest in sharing the source code or the specific concepts underlying the current and future Sigma1 software.  Many of these ideas stem from my undergrad engineering and computer science studies.  They developed further during my graduate work in finance and engineering.  The realization that the techniques I developed for engineering, game theory, poker, and number theory apply most directly to portfolio construction and optimization hints at the possibility that I have hit upon one of those rare ideas that strike gold.  Not academic gold; real “gold” with real financial value.

I love academic research and open-source software.  I don’t intend to keep the concepts and code that Sigma1 is developing locked up forever.  If the Sigma1 financial software is financially successful enough, I hope to release pieces of it to the open-source community over time.  (Conversely, if the software does not ultimately find a lucrative market, I will eventually release it too 🙂 )

Portfolio Optimization Software: Tech Speak

Yesterday I wrote the key pieces of an algorithm to build and optimize securities portfolios.  The remaining pieces, heuristics and selection, should be relatively easy to code.  The coding and testing went very quickly: 1) because I’ve written similar optimizers many times before, 2) because I had 2 days to think about it while driving, and 3) because I wrote it in Ruby.

Based on previous experience (and depending on the complexity of the heuristics), run-times should be swift for portfolios of 500 securities or less. In previous research I’ve been able to use distributed computing when the heuristics/analysis dominated run-time.  Generally the optimizer has not been the limiting factor for speed.

I plan to start with relatively simple heuristics to test the portfolio-optimization software.  The first test will likely just compute the (near-optimal) efficient frontier for a basket of securities, plotting the 3-year standard deviation of various portfolios on the frontier versus expected return.  I may even compare the results to efficient frontiers constructed with classic methods using covariance matrices.
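
Here is a rough Ruby sketch of what that first test might look like, using random sampling in place of the real optimizer.  The return series are invented; real runs would use 3 years of weekly data:

    RETURNS = [
      [0.020, -0.010, 0.030,  0.010, -0.020,  0.040],   # security 1
      [0.010,  0.000, 0.010,  0.020,  0.000,  0.010],   # security 2
      [0.050, -0.040, 0.060, -0.030,  0.070, -0.020],   # security 3
    ]

    def mean(xs)
      xs.sum / xs.size.to_f
    end

    def stddev(xs)
      m = mean(xs)
      Math.sqrt(xs.sum { |x| (x - m)**2 } / (xs.size - 1))
    end

    # Score a weighting on (risk, reward) from its blended return series.
    def score(weights)
      periods = RETURNS.first.size
      series  = (0...periods).map do |t|
        weights.each_with_index.sum { |w, i| w * RETURNS[i][t] }
      end
      [stddev(series), mean(series)]
    end

    # Sample random long-only weightings that sum to 1.
    candidates = Array.new(500) do
      raw = Array.new(RETURNS.size) { rand }
      w   = raw.map { |x| x / raw.sum }
      [w, *score(w)]
    end

    # Keep only non-dominated portfolios: no rival is both no riskier and
    # at least as rewarding (strictly better on one of the two).
    frontier = candidates.reject do |_, risk, ret|
      candidates.any? do |_, r2, m2|
        r2 <= risk && m2 >= ret && (r2 < risk || m2 > ret)
      end
    end

    frontier.sort_by { |_, risk, _| risk }.each do |w, risk, ret|
      puts format('sd %.4f  E[r] %.4f  weights %s', risk, ret,
                  w.map { |x| x.round(2) }.inspect)
    end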

Once I create a Ruby prototype I plan to re-code the software in C/C++, both for execution speed and for the relative IP-protection provided by releasing only compiled binary executables.