Why Volatility is the Wrong Measure of Investment Risk

Advisor Perspectives welcomes guest contributions. The views presented here do not necessarily represent those of Advisor Perspectives.

Volatility is the standard measure advisors use to quantify investment risk. It has been useful, but it has limitations: in several important respects, volatility fails to provide an accurate representation of the risk of an investment portfolio.

The popularity of volatility as a measure of investment risk is widely attributed to modern portfolio theory, introduced by Nobel Laureate Harry Markowitz in 1952. Markowitz’s key insight was that an investor should rationally demand higher returns to compensate for the risk of owning more volatile investments. It’s hard to believe, but prior to this, volatility of returns was seldom a consideration when assessing portfolio performance.

As there was no standard convention for how to measure portfolio risk, Markowitz chose volatility as an appropriate metric out of convenience and practicality. The volatility of an investment or portfolio could be measured by a well-known statistical concept: the standard deviation.

Standard deviation offered several advantages that contributed to its adoption and popularity:

  • It is mathematically simple to calculate and compare across individual assets and portfolios.
  • It provides a simple interpretation (the range of returns that can be expected over any given time horizon).
  • It can be easily used to estimate the likelihood of a given loss.
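The last two points can be made concrete with a minimal sketch. The return figures below are illustrative, not real market data, and the loss estimate leans on the normality assumption that the article goes on to question:

```python
import statistics

# Hypothetical annual portfolio returns (illustrative numbers, not real data)
returns = [0.12, -0.05, 0.08, 0.15, -0.02, 0.10, 0.04, -0.08, 0.20, 0.06]

mean = statistics.mean(returns)
vol = statistics.stdev(returns)  # sample standard deviation = "volatility"

# Interpretation: if returns are normally distributed, roughly 68% of
# annual returns should fall within one standard deviation of the mean.
low, high = mean - vol, mean + vol
print(f"Expected 1-sigma range: {low:.1%} to {high:.1%}")

# Likelihood of a loss (a return below 0%), again assuming normality
z = (0.0 - mean) / vol
p_loss = statistics.NormalDist().cdf(z)
print(f"Estimated probability of a losing year: {p_loss:.0%}")
```

This is exactly the convenience Markowitz was after: one number that produces both an expected range and a loss probability with a few lines of arithmetic.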

But standard deviation has several significant disadvantages that can lead to naive risk assessments:

  • It doesn’t address all the major risks that concern a typical investor.
  • It doesn’t mean what most people intuitively think it does.
  • It is based on assumptions that don’t fit the real world.
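One way to see the first two objections at once is with a sketch using synthetic, illustrative numbers: two return streams that have identical standard deviations but opposite investor experiences, because standard deviation penalizes upside surprises exactly as much as downside ones:

```python
import statistics

# Two hypothetical return streams (synthetic numbers, for illustration only)
upside = [0.00, 0.00, 0.00, 0.00, 0.30]    # one big up year
downside = [0.00, 0.00, 0.00, 0.00, -0.30]  # one big down year

vol_up = statistics.stdev(upside)
vol_down = statistics.stdev(downside)
# Standard deviation is blind to direction: both streams are equally "risky"
print(f"Volatility of upside stream:   {vol_up:.1%}")
print(f"Volatility of downside stream: {vol_down:.1%}")

# Yet the outcomes are opposite: cumulative gain versus cumulative loss
growth_up = 1.0
for r in upside:
    growth_up *= 1 + r
growth_down = 1.0
for r in downside:
    growth_down *= 1 + r
print(f"Ending wealth per dollar: {growth_up:.2f} vs {growth_down:.2f}")
```

A risk measure that rates a stream of occasional windfalls and a stream of occasional crashes as identical is not capturing what most investors mean by "risk."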