In Modern Portfolio Theory, risk is defined by beta or volatility — theoretically simple and elegant, but largely flawed and irrelevant in practice. The most dangerous aspect of this definition is that people more often than not mistake low volatility for low risk. Low volatility gives people a false sense of security, encouraging excessive risk taking. Throughout history, we have repeatedly seen long periods of low volatility precede major market crises: the Savings & Loan crisis of the 1980s, LTCM in the 1990s, and the subprime crisis we are currently experiencing. Other popular quantitative models that measure Value at Risk (VaR) also suffer from serious deficiencies because of unrealistic input assumptions. For instance, the variance-covariance (VCV) model assumes that risk factor returns are always normally distributed. Nassim Taleb has argued forcefully that tail risk is far greater than the bell curve can predict. The output is only as good as the input.
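To make the criticism concrete, here is a minimal sketch of the parametric (variance-covariance) VaR calculation for a single risk factor. The function name and the dollar figures are illustrative, not from any particular library; the key point is that the normality assumption is baked into the z-score, which is exactly where fat tails break the model.

```python
from statistics import NormalDist

def parametric_var(portfolio_value, mean_return, volatility, confidence=0.95):
    """One-period variance-covariance VaR for a single risk factor.

    Assumes returns are normally distributed -- the very assumption
    the text criticizes. Under fat-tailed returns, this understates
    the true loss potential.
    """
    z = NormalDist().inv_cdf(confidence)  # ~1.645 at 95% confidence
    return portfolio_value * (z * volatility - mean_return)

# Illustrative: $1,000,000 portfolio, 0% mean daily return, 2% daily volatility
var_95 = parametric_var(1_000_000, 0.0, 0.02)
```

With these inputs the model claims roughly a $32,900 one-day loss is exceeded only 5% of the time; a fat-tailed return distribution would exceed it far more often.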
Warren Buffett considers risk “the possibility of loss or injury from an investment,” an honest but certainly unsatisfactory definition for academics and investment bankers who seek precise measurement of risk. The truth of the matter is that such precision might not exist at all. Rather than defining risk in some uniformly objective fashion, I think risk is subjective to the observer, or bearer, of the risk. In other words, risk is how much a person is willing, and can afford, to lose. If one considers 10% the maximum he is willing to lose on an investment, a 10% drawdown should signal an exit from the investment when possible. This risk management measure is not uncommon among investment practitioners, but the real difficulty is having the discipline to stick to it when losses accumulate. After all, most people suffer from loss aversion: when facing large potential losses, people exhibit risk-seeking rather than risk-averse behavior.
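The drawdown exit rule described above can be sketched in a few lines. This is an illustrative implementation, not a prescription: it measures drawdown from the running peak and reports the first point at which the willingness-to-lose threshold is breached.

```python
def exit_on_drawdown(prices, max_loss=0.10):
    """Return the index of the first price that breaches the
    willingness-to-lose threshold (drawdown from the running peak),
    or None if the threshold is never hit."""
    peak = prices[0]
    for i, price in enumerate(prices):
        peak = max(peak, price)            # track the highest price so far
        if (peak - price) / peak >= max_loss:
            return i                       # exit signal fires here
    return None

# Illustrative price path: peak of 110, then a slide past the 10% line at 98
signal = exit_on_drawdown([100, 110, 105, 98, 95])
```

The mechanical rule is the easy part; as the paragraph notes, the hard part is executing the exit at index 3 instead of holding on in hope of a rebound.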