Quantitative Equity Investing PDF Free Download

Posted: admin on 1/10/2022

Quantitative Equity Investing, Frank J. Fabozzi (2010-01-29): A comprehensive look at the tools and techniques used in quantitative equity management. Some books attempt to extend portfolio theory, but the real issue today relates to the practical implementation of the theory introduced by Harry Markowitz and others who followed.

Also available here is Equity Management: The Art and Science of Modern Quantitative Investing, Second Edition, the classic guide to quantitative investing, expanded and updated for today's increasingly complex markets. From Bruce Jacobs and Ken Levy, two pioneers of quantitative equity management, this go-to guide to stock selection has been…

Free PDF Quantitative Equity Portfolio Management: An Active Approach to Portfolio Construction and Management (McGraw-Hill Library of Investment an

It can be one of your morning reads. Quantitative Equity Portfolio Management: An Active Approach to Portfolio Construction and Management (McGraw-Hill Library of Investment and Finance) is a soft-file book that can be downloaded online. As everyone knows, in this advanced period technology eases many activities, and reading is no exception: the soft file takes only a moment to open and save on your device, so your mornings and other downtime stay free for the book itself.



Delighted reading! That is what we want to say to you who love reading so much. And what about you who claim that reading is only an obligation? Never mind; a reading habit has to begin from some particular motivation, and reading out of obligation is one of them. But what we offer here, the book entitled Quantitative Equity Portfolio Management: An Active Approach to Portfolio Construction and Management (McGraw-Hill Library of Investment and Finance), is not a required book. It is a book you can simply enjoy.

As the saying goes, books are windows onto the world. That does not mean that buying Quantitative Equity Portfolio Management will let you acquire the whole world; it is only a figure of speech. Reading this book will open a person to better thinking, keep them smiling, entertain them, and motivate new knowledge. Every book also has its own way of influencing its readers. Have you recognized why you would read this one?

Well, still confused about how to obtain this book without going outside? Simply connect your computer or device to the Internet and begin downloading: this page will show you the link to the download, and your preferred book will soon be yours. It is much easier to enjoy reading Quantitative Equity Portfolio Management online or by saving the soft file on your device, whoever you are and whatever you do. This book is written for the public, and you can be one of those who delight in reading it.

Investing your spare time in reading Quantitative Equity Portfolio Management can offer a wonderful experience, even if you are simply sitting in your office chair or lying in bed. It will not waste your time; on the contrary, it will make your rest more valuable. It is quite satisfying, at noon, to sit with a cup of coffee or tea and the book on your device or computer display, and to start reading while enjoying the view around you.

Praise for Quantitative Equity Portfolio Management

“A must-have reference for any equity portfolio manager or MBA student, this book is a comprehensive guide to all aspects of equity portfolio management, from factor models to tax management.” ERIC ROSENFELD, Principal & Co-founder of JWM Partners

“This is an ambitious book that both develops the broad range of artillery employed in quantitative equity investment management and provides the reader with a host of relevant practical examples. The book excels in melding theory with practice.” STEPHEN A. ROSS, Franco Modigliani Professor of Financial Economics, Massachusetts Institute of Technology

“The book is very comprehensive in its coverage, detailed in its discussions and written from a practical perspective without sacrificing needed rigor.” DAVID BLITZER, Managing Director and Chairman, Standard & Poor's Index Committee

“Making the transition from the walls of academia to Wall Street has traditionally been a difficult task…This book provides this link in a successful and engaging fashion, giving students of finance a road map for the application of financial theories in a real-world setting.” MARK HOLOWESKO, CEO and Founder, Templeton Capital Advisors

“This text provides an excellent synthesis of a broad range of quantitative portfolio management methods…In addition, there are a number of insightful innovations that extend and improve current techniques.” DAN DIBARTOLOMEO, President and Founder, Northfield Information Services, Inc.

Capitalize on Today's Most Powerful Quantitative Methods to Construct and Manage a High-Performance Equity Portfolio

Quantitative Equity Portfolio Management is a comprehensive guide to the entire process of constructing and managing a high-yield quantitative equity portfolio. This detailed handbook begins with the basic principles of quantitative active management and then clearly outlines how to build an equity portfolio using those powerful concepts.

Financial experts Ludwig Chincarini and Daehwan Kim provide clear explanations of topics ranging from basic models, factors and factor choice, and stock screening and ranking…to fundamental factor models, economic factor models, and forecasting factor premiums and exposures.

Readers will also find step-by-step coverage of portfolio weights…rebalancing and transaction costs…tax management…leverage…market neutral…Bayesian α…performance measurement and attribution…the backtesting process…and portfolio performance.
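To make the screening-and-ranking step described above concrete, here is a minimal sketch (our illustration, not an example from the book) of cross-sectional factor scoring: each factor is standardized into z-scores across the stock universe, the z-scores are summed into a composite score, and stocks are ranked on that score. The tickers and factor values are invented.

```python
import pandas as pd

# Toy cross-section: one row per stock, hypothetical tickers and factor values.
data = pd.DataFrame({
    "ticker":   ["AAA", "BBB", "CCC", "DDD", "EEE"],
    "value":    [0.80, 0.20, 0.50, 0.90, 0.10],   # e.g., book-to-price
    "momentum": [0.10, 0.70, 0.40, 0.30, 0.90],   # e.g., trailing return
}).set_index("ticker")

# Standardize each factor cross-sectionally so the factors are comparable.
z = (data - data.mean()) / data.std(ddof=0)

# Composite score: an equal-weighted sum of factor z-scores (one simple
# choice among many; a real model would estimate the factor weights).
score = z.sum(axis=1)

# Rank the universe; a screen would keep the top names and drop the rest.
print(score.sort_values(ascending=False))
```

In practice the same ranking feeds the weighting and rebalancing steps listed above, with factor weights estimated from a fundamental or economic factor model rather than fixed at equality.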


Filled with proven investment strategies and tools for developing new ones, Quantitative Equity Portfolio Management features:

  • A complete, easy-to-apply methodology for creating an equity portfolio that maximizes returns and minimizes risks
  • The latest techniques for building optimization into a professionally managed portfolio
  • An accompanying CD with a wide range of practical exercises and solutions using actual historical stock data
  • An excellent melding of financial theory with real-world practice
  • A wealth of down-to-earth financial examples and case studies

    Each chapter of this all-in-one portfolio management resource contains an appendix with valuable figures, tables, equations, mathematical solutions, and formulas. In addition, the book as a whole has appendices covering a brief history of financial theory, fundamental models of stock returns, a basic review of mathematical and statistical concepts, an entertaining explanation and quantitative approach to the casino game of craps, and other on-target supplemental materials.

    An essential reference for professional money managers and students taking advanced investment courses, Quantitative Equity Portfolio Management offers a full array of methods for effectively developing high-performance equity portfolios that deliver lucrative returns for clients.

    About the Authors


    Ludwig B. Chincarini, Ph.D., CFA, is a professor of finance at the University of San Francisco and Director of Quantitative Strategies at United States Commodity Funds. He was previously on the academic boards of IndexIQ and Future Advisor, director of research at Rydex Global Advisors (now Guggenheim), the index mutual fund company, and director of research at FOLIOfn, a brokerage firm that pioneered basket trading. He also worked as an analyst and portfolio manager at the Bank for International Settlements and holds a Ph.D. in economics from the Massachusetts Institute of Technology.

    Daehwan Kim, Ph.D., has been a professor of economics at the American University in Bulgaria and a professor of finance at Konkuk University. Previously, he was employed as a financial economist at FOLIOfn, and he worked as a financial journalist, writing regular columns on financial markets for business media in Asia. He holds a Ph.D. in economics from Harvard University.

    • Sales Rank: #96274 in Books
    • Published on: 2006-08-17
    • Original language: English
    • Number of items: 1
    • Dimensions: 10.80" h x 1.60" w x 5.10" l, 2.46 pounds
    • Binding: Hardcover
    • 658 pages


    Most helpful customer reviews

    19 of 20 people found the following review helpful.
    *The* book on factor models
    By Dimitri Shvorob
    Should you buy this book if you already have Grinold and Kahn? Yes, if you are interested in factor models of equity returns: about 200 pages of CK are devoted to them, compared to a few in GK. If you are not, I would still recommend the purchase, but expect CK's stay on your bookshelf to be brief: enough time to read through and spot things not found in GK, such as the discussion of leverage and tax management. Finally, if you do not have Grinold and Kahn, read this book as a warm-up, but do graduate to the GK tome. Alas, this, too, is a book for MBAs: it does not reflect the state of the art, and it shies away from math and statistics.

    28 of 32 people found the following review helpful.
    A practical entry-level quant equity portfolio management book
    By Yin Luo
    This is a very practical entry-level quant equity portfolio management book (more practical and easier than Grinold & Kahn's). It's a good place to start if you are interested in building quant models to manage equity portfolios. The book starts from the beginning (e.g., how to pick factors) and works up to building models to forecast returns and risk and to running optimizations. It doesn't get into enough detail about econometrics, database management, optimization, or programming, all of which are essential to building a robust quant model. The authors mention pooled time-series cross-sectional econometric models but don't go beyond the surface. Overall, it's a good introductory book, but it could be better.

    19 of 21 people found the following review helpful.
    A very readable primer on quantitative equity portfolio management
    By a quant
    In my opinion, this book should be on the bookshelf of every quantitative equity portfolio manager, right next to his Grinold & Kahn. In addition to the basics, it covers many of the more practical aspects of quantitative equity portfolio management compared to Grinold & Kahn, but it is not lacking in academic rigour. It is very readable and accessible for anyone with a professional finance background or interest.


    THE FRANK J. FABOZZI SERIES
QUANTITATIVE EQUITY INVESTING: Techniques and Strategies
FRANK J. FABOZZI, SERGIO M. FOCARDI, PETTER N. KOLM
    Quantitative Equity Investing
The Frank J. Fabozzi Series

  • Fixed Income Securities, Second Edition by Frank J. Fabozzi
  • Focus on Value: A Corporate and Investor Guide to Wealth Creation by James L. Grant and James A. Abate
  • Handbook of Global Fixed Income Calculations by Dragomir Krgin
  • Managing a Corporate Bond Portfolio by Leland E. Crabbe and Frank J. Fabozzi
  • Real Options and Option-Embedded Securities by William T. Moore
  • Capital Budgeting: Theory and Practice by Pamela P. Peterson and Frank J. Fabozzi
  • The Exchange-Traded Funds Manual by Gary L. Gastineau
  • Professional Perspectives on Fixed Income Portfolio Management, Volume 3 edited by Frank J. Fabozzi
  • Investing in Emerging Fixed Income Markets edited by Frank J. Fabozzi and Efstathia Pilarinu
  • Handbook of Alternative Assets by Mark J. P. Anson
  • The Global Money Markets by Frank J. Fabozzi, Steven V. Mann, and Moorad Choudhry
  • The Handbook of Financial Instruments edited by Frank J. Fabozzi
  • Collateralized Debt Obligations: Structures and Analysis by Laurie S. Goodman and Frank J. Fabozzi
  • Interest Rate, Term Structure, and Valuation Modeling edited by Frank J. Fabozzi
  • Investment Performance Measurement by Bruce J. Feibel
  • The Handbook of Equity Style Management edited by T. Daniel Coggin and Frank J. Fabozzi
  • The Theory and Practice of Investment Management edited by Frank J. Fabozzi and Harry M. Markowitz
  • Foundations of Economic Value Added, Second Edition by James L. Grant
  • Financial Management and Analysis, Second Edition by Frank J. Fabozzi and Pamela P. Peterson
  • Measuring and Controlling Interest Rate and Credit Risk, Second Edition by Frank J. Fabozzi, Steven V. Mann, and Moorad Choudhry
  • Professional Perspectives on Fixed Income Portfolio Management, Volume 4 edited by Frank J. Fabozzi
  • The Handbook of European Fixed Income Securities edited by Frank J. Fabozzi and Moorad Choudhry
  • The Handbook of European Structured Financial Products edited by Frank J. Fabozzi and Moorad Choudhry
  • The Mathematics of Financial Modeling and Investment Management by Sergio M. Focardi and Frank J. Fabozzi
  • Short Selling: Strategies, Risks, and Rewards edited by Frank J. Fabozzi
  • The Real Estate Investment Handbook by G. Timothy Haight and Daniel Singer
  • Market Neutral Strategies edited by Bruce I. Jacobs and Kenneth N. Levy
  • Securities Finance: Securities Lending and Repurchase Agreements edited by Frank J. Fabozzi and Steven V. Mann
  • Fat-Tailed and Skewed Asset Return Distributions by Svetlozar T. Rachev, Christian Menn, and Frank J. Fabozzi
  • Financial Modeling of the Equity Market: From CAPM to Cointegration by Frank J. Fabozzi, Sergio M. Focardi, and Petter N. Kolm
  • Advanced Bond Portfolio Management: Best Practices in Modeling and Strategies edited by Frank J. Fabozzi, Lionel Martellini, and Philippe Priaulet
  • Analysis of Financial Statements, Second Edition by Pamela P. Peterson and Frank J. Fabozzi
  • Collateralized Debt Obligations: Structures and Analysis, Second Edition by Douglas J. Lucas, Laurie S. Goodman, and Frank J. Fabozzi
  • Handbook of Alternative Assets, Second Edition by Mark J. P. Anson
  • Introduction to Structured Finance by Frank J. Fabozzi, Henry A. Davis, and Moorad Choudhry
  • Financial Econometrics by Svetlozar T. Rachev, Stefan Mittnik, Frank J. Fabozzi, Sergio M. Focardi, and Teo Jasic
  • Developments in Collateralized Debt Obligations: New Products and Insights by Douglas J. Lucas, Laurie S. Goodman, Frank J. Fabozzi, and Rebecca J. Manning
  • Robust Portfolio Optimization and Management by Frank J. Fabozzi, Petter N. Kolm, Dessislava A. Pachamanova, and Sergio M. Focardi
  • Advanced Stochastic Models, Risk Assessment, and Portfolio Optimization by Svetlozar T. Rachev, Stoyan V. Stoyanov, and Frank J. Fabozzi
  • How to Select Investment Managers and Evaluate Performance by G. Timothy Haight, Stephen O. Morrell, and Glenn E. Ross
  • Bayesian Methods in Finance by Svetlozar T. Rachev, John S. J. Hsu, Biliana S. Bagasheva, and Frank J. Fabozzi
  • Structured Products and Related Credit Derivatives by Brian P. Lancaster, Glenn M. Schultz, and Frank J. Fabozzi
    Quantitative Equity Investing: Techniques and Strategies
    FRANK J. FABOZZI, SERGIO M. FOCARDI, PETTER N. KOLM
    with the assistance of Joseph A. Cerniglia and Dessislava Pachamanova
    John Wiley & Sons, Inc.
    Copyright © 2010 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions. Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993, or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. For more information about Wiley products, visit our web site at www.wiley.com.
    Library of Congress Cataloging-in-Publication Data: Fabozzi, Frank J. Quantitative equity investing : techniques and strategies / Frank J. Fabozzi, Sergio M. Focardi, Petter N. Kolm ; with the assistance of Joseph A. Cerniglia and Dessislava Pachamanova. p. cm. — (The Frank J. Fabozzi series) Includes index. ISBN 978-0-470-26247-4 (cloth) 1. Portfolio management. 2. Investments. I. Focardi, Sergio. II. Kolm, Petter N. III. Title. HG4529.5.F3346 2010 332.63’2042—dc22 2009050962 Printed in the United States of America.
    10 9 8 7 6 5 4 3 2 1
    FJF To my wife Donna, and my children Francesco, Patricia, and Karly
    SMF To my mother and in memory of my father
    PNK To my wife and my daughter, Carmen and Kimberly, and in memory of my father-in-law, John
    Contents

    Preface
    About the Authors

    CHAPTER 1  Introduction
      In Praise of Mathematical Finance
      Studies of the Use of Quantitative Equity Management
      Looking Ahead for Quantitative Equity Investing

    CHAPTER 2  Financial Econometrics I: Linear Regressions
      Historical Notes
      Covariance and Correlation
      Regressions, Linear Regressions, and Projections
      Multivariate Regression
      Quantile Regressions
      Regression Diagnostic
      Robust Estimation of Regressions
      Classification and Regression Trees
      Summary

    CHAPTER 3  Financial Econometrics II: Time Series
      Stochastic Processes
      Time Series
      Stable Vector Autoregressive Processes
      Integrated and Cointegrated Variables
      Estimation of Stable Vector Autoregressive (VAR) Models
      Estimating the Number of Lags
      Autocorrelation and Distributional Properties of Residuals
      Stationary Autoregressive Distributed Lag Models
      Estimation of Nonstationary VAR Models
      Estimation with Canonical Correlations
      Estimation with Principal Component Analysis
      Estimation with the Eigenvalues of the Companion Matrix
      Nonlinear Models in Finance
      Causality
      Summary

    CHAPTER 4  Common Pitfalls in Financial Modeling
      Theory and Engineering
      Engineering and Theoretical Science
      Engineering and Product Design in Finance
      Learning, Theoretical, and Hybrid Approaches to Portfolio Management
      Sample Biases
      The Bias in Averages
      Pitfalls in Choosing from Large Data Sets
      Time Aggregation of Models and Pitfalls in the Selection of Data Frequency
      Model Risk and its Mitigation
      Summary

    CHAPTER 5  Factor Models and Their Estimation
      The Notion of Factors
      Static Factor Models
      Factor Analysis and Principal Components Analysis
      Why Factor Models of Returns
      Approximate Factor Models of Returns
      Dynamic Factor Models
      Summary

    CHAPTER 6  Factor-Based Trading Strategies I: Factor Construction and Analysis
      Factor-Based Trading
      Developing Factor-Based Trading Strategies
      Risk to Trading Strategies
      Desirable Properties of Factors
      Sources for Factors
      Building Factors from Company Characteristics
      Working with Data
      Analysis of Factor Data
      Summary

    CHAPTER 7  Factor-Based Trading Strategies II: Cross-Sectional Models and Trading Strategies
      Cross-Sectional Methods for Evaluation of Factor Premiums
      Factor Models
      Performance Evaluation of Factors
      Model Construction Methodologies for a Factor-Based Trading Strategy
      Backtesting
      Backtesting Our Factor Trading Strategy
      Summary

    CHAPTER 8  Portfolio Optimization: Basic Theory and Practice
      Mean-Variance Analysis: Overview
      Classical Framework for Mean-Variance Optimization
      Mean-Variance Optimization with a Risk-Free Asset
      Portfolio Constraints Commonly Used in Practice
      Estimating the Inputs Used in Mean-Variance Optimization: Expected Return and Risk
      Portfolio Optimization with Other Risk Measures
      Summary

    CHAPTER 9  Portfolio Optimization: Bayesian Techniques and the Black-Litterman Model
      Practical Problems Encountered in Mean-Variance Optimization
      Shrinkage Estimation
      The Black-Litterman Model
      Summary

    CHAPTER 10  Robust Portfolio Optimization
      Robust Mean-Variance Formulations
      Using Robust Mean-Variance Portfolio Optimization in Practice
      Some Practical Remarks on Robust Portfolio Optimization Models
      Summary

    CHAPTER 11  Transaction Costs and Trade Execution
      A Taxonomy of Transaction Costs
      Liquidity and Transaction Costs
      Market Impact Measurements and Empirical Findings
      Forecasting and Modeling Market Impact
      Incorporating Transaction Costs in Asset-Allocation Models
      Integrated Portfolio Management: Beyond Expected Return and Portfolio Risk
      Summary

    CHAPTER 12  Investment Management and Algorithmic Trading
      Market Impact and the Order Book
      Optimal Execution
      Impact Models
      Popular Algorithmic Trading Strategies
      What Is Next?
      Some Comments about the High-Frequency Arms Race
      Summary

    APPENDIX A  Data Descriptions and Factor Definitions
      The MSCI World Index
      One-Month LIBOR
      The Compustat Point-in-Time, IBES Consensus Databases and Factor Definitions

    APPENDIX B  Summary of Well-Known Factors and Their Underlying Economic Rationale

    APPENDIX C  Review of Eigenvalues and Eigenvectors
      The SWEEP Operator

    Index
    Preface
    Quantitative equity portfolio management is a fundamental building block of investment management. The basic principles of investment management were proposed back in the 1950s in the pathbreaking work of Harry Markowitz; for this work, Markowitz was awarded the Nobel Memorial Prize in Economic Sciences in 1990. Markowitz's ideas proved to be very fertile. Entire new research areas originated from them which, with the diffusion of low-cost powerful computers, found important practical applications in several fields of finance. Among the developments that followed Markowitz's original approach we can mention:

  • The development of the CAPM and of general equilibrium asset pricing models.
  • The development of multifactor models.
  • The extension of the investment framework to a dynamic multiperiod environment.
  • The development of statistical tools to extend his framework to fat-tailed distributions.
  • The development of Bayesian techniques to integrate human judgment with results from models.
  • The progressive adoption of optimization and robust optimization techniques.

    Due to these and other theoretical advances, it has progressively become possible to manage investments with computer programs that look for the best risk-return trade-off available in the market. People have always tried to beat the market in the hunt for a free lunch. This began with simple observations and rules of thumb for picking winners; later, the advent of computers brought much more complicated systems and mathematical models within common reach. Today, so-called buy-side quants deploy a wide range of techniques, from econometrics, optimization, and computer science to data mining, machine learning, and artificial intelligence, to trade the equity markets.
    Their strategies may range from intermediate- and long-term strategies, six months to several years out, to so-called ultra-high-frequency or high-frequency strategies at the sub-millisecond level. Modern quantitative techniques have replaced good old-fashioned experience and market insight with the scientific rigor of mathematical and financial theories.

    This book is about quantitative equity portfolio management performed with modern techniques. One of our goals for this book is to present advances in the theory and practice of quantitative equity portfolio management that represent what we might call the "state of the art of advanced equity portfolio management." We cover the most common techniques, tools, and strategies used in quantitative equity portfolio management in the industry today. For many of the advanced topics, we provide the reader with references to the most recent applicable research in the field.

    This book is intended for students, academics, and financial practitioners alike who want an up-to-date treatment of quantitative techniques in equity portfolio management, and who desire to deepen their knowledge of some of the most cutting-edge techniques in this rapidly developing area. The book is written in an almost self-contained fashion, so that little background knowledge in finance is needed. Nonetheless, a basic working knowledge of undergraduate linear algebra and probability theory is useful, especially for the more mathematical topics in this book.

    In Chapter 1 we discuss the role and use of mathematical techniques in finance. In addition to offering theoretical arguments in support of finance as a mathematical science, we discuss the results of three surveys on the diffusion of quantitative methods in the management of equity portfolios. In Chapters 2 and 3, we provide extensive background material on one of the principal tools used in quantitative equity management: financial econometrics. Coverage in Chapter 2 includes modern regression theory, applications of Random Matrix Theory, and robust methods. In Chapter 3, we extend our coverage of financial econometrics to dynamic models of time series, vector autoregressive models, and cointegration analysis. Financial engineering, the many pitfalls of estimation, and methods to control model risk are the subjects of Chapter 4. In Chapter 5, we introduce the modern theory of factor models, including approximate factor models and dynamic factor models. Trading strategies based on factors and factor models are the focus of Chapters 6 and 7. In these chapters we offer a modern view on how to construct factor models based on fundamental factors and how to design and test trading strategies based on them, with a wealth of practical examples of the application of factor models.

    The coverage in Chapters 8, 9, and 10 is on the use of optimization models in quantitative equity management. The basics of portfolio optimization are reviewed in Chapter 8, followed by a discussion of the Bayesian approach to investment management as implemented in the Black-Litterman framework in Chapter 9. In Chapter 10 we discuss robust optimization techniques, which have greatly enhanced the ability to implement portfolio optimization models in practice. The last two chapters of the book cover the important topics of trading costs and trading techniques. In Chapter 11, our focus is on the issues related to trading costs and the implementation of trading strategies from a practical point of view. The modern techniques of algorithmic trading are the subject of the final chapter of the book, Chapter 12.

    There are three appendixes. Appendix A provides a description of the data and factor definitions used in the illustrations and examples in the book. A summary of the factors, their economic rationale, and references that have supported the use of each factor is provided in Appendix B. In Appendix C we provide a review of eigenvalues and eigenvectors.
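    The optimization coverage in Chapters 8 through 10 builds on the Markowitz risk-return trade-off discussed at the start of this preface. As a rough orientation (our sketch, not an example from the book), the unconstrained mean-variance solution is proportional to the inverse covariance matrix times the expected-return vector; the snippet below computes it for invented inputs. Practical implementations add the constraints, robust estimators, and transaction cost terms treated in those chapters.

```python
import numpy as np

# Illustrative inputs (invented, not estimated from data).
mu = np.array([0.06, 0.04, 0.05])        # expected excess returns
Sigma = np.array([[0.04, 0.01, 0.00],    # return covariance matrix
                  [0.01, 0.02, 0.01],
                  [0.00, 0.01, 0.03]])

# Unconstrained mean-variance direction: w proportional to inv(Sigma) @ mu.
raw = np.linalg.solve(Sigma, mu)

# Rescale so the weights sum to one (fully invested, no other constraints).
w = raw / raw.sum()
print(np.round(w, 3))
```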
    TEACHING USING THIS BOOK

    Many of the chapters in this book have been used in courses and workshops on quantitative investment management, econometrics, trading strategies, and algorithmic trading. The topics of the book are appropriate for advanced undergraduate electives on investment management and for graduate students in finance, economics, or the mathematical and physical sciences. For a typical course it is natural to start with Chapters 1–3, 5, and 8, where the quantitative investment management industry, standard econometric techniques, and modern portfolio and asset pricing theory are reviewed. Important practical considerations such as model risk and its mitigation are presented in Chapter 4. Chapters 6 and 7 focus on the development of factor-based trading strategies and provide many practical examples. Chapters 9–12 cover the important topics of Bayesian techniques, robust optimization, and transaction cost modeling, by now standard tools used in quantitative portfolio construction in the financial industry; we recommend that a more advanced course cover these topics in some detail. Student projects can be based on specialized topics such as the development of trading strategies (Chapters 6 and 7), optimal execution, and algorithmic trading (Chapters 11 and 12). The many references in these chapters, and in the rest of the book, provide a good starting point for research.
    ACKNOWLEDGMENTS

    We would like to acknowledge the assistance of several individuals who contributed to this book. Chapters 6 and 7 on trading strategies were coauthored with Joseph A. Cerniglia of Aberdeen Asset Management Inc. Chapter 10 on robust portfolio optimization is coauthored with Dessislava Pachamanova of Babson College. Chapter 12 draws from a chapter by one of the authors and Lee Maclin, adjunct at the Courant Institute of Mathematical Sciences, New York University, that will appear in the Encyclopedia of Quantitative Finance, edited by Rama Cont and to be published by John Wiley & Sons. We also thank Axioma, Inc. for allowing us to use several figures from its white paper series coauthored by Sebastian Ceria and Robert Stubbs. Megan Orem typeset the book and provided editorial assistance; we appreciate her patience and understanding in working through numerous revisions.

    Frank J. Fabozzi
    Sergio M. Focardi
    Petter N. Kolm
    About the Authors
    Frank J. Fabozzi is Professor in the Practice of Finance in the School of Management at Yale University and an Affiliated Professor at the University of Karlsruhe's Institute of Statistics, Econometrics and Mathematical Finance. Prior to joining the Yale faculty, he was a Visiting Professor of Finance in the Sloan School at MIT. Frank is a Fellow of the International Center for Finance at Yale University and serves on the Advisory Council for the Department of Operations Research and Financial Engineering at Princeton University. He is the editor of the Journal of Portfolio Management and a trustee for the BlackRock family of closed-end funds. In 2002, Frank was inducted into the Fixed Income Analysts Society's Hall of Fame, and he is the 2007 recipient of the C. Stewart Sheppard Award given by the CFA Institute. His recently coauthored books published by Wiley include Institutional Investment Management (2009), Finance: Capital Markets, Financial Management and Investment Management (2009), Bayesian Methods in Finance (2008), Advanced Stochastic Models, Risk Assessment, and Portfolio Optimization: The Ideal Risk, Uncertainty, and Performance Measures (2008), Financial Modeling of the Equity Market: From CAPM to Cointegration (2006), Robust Portfolio Optimization and Management (2007), and Financial Econometrics: From Basics to Advanced Modeling Techniques (2007). Frank earned a doctorate in economics from the City University of New York in 1972 and has earned the designations of Chartered Financial Analyst and Certified Public Accountant.

    Sergio Focardi is Professor of Finance at the EDHEC Business School in Nice and the founding partner of the Paris-based consulting firm The Intertek Group. He is a member of the editorial board of the Journal of Portfolio Management. Sergio has authored numerous articles and books on financial modeling and risk management, including the following Wiley books: Financial Econometrics (2007), Financial Modeling of the Equity Market (2006), The Mathematics of Financial Modeling and Investment Management (2004), Risk Management: Framework, Methods and Practice (1998), and Modeling the Markets: New Theories and Techniques (1997). He also authored two monographs published by the CFA Institute: Challenges in Quantitative Equity Management (2008) and Trends in Quantitative Finance (2006). Sergio has been appointed a speaker of the CFA Institute Speaker Retainer Program. His research interests include the econometrics of large equity portfolios and the modeling of regime changes. Sergio holds a degree in Electronic Engineering from the University of Genoa and a PhD in Mathematical Finance and Financial Econometrics from the University of Karlsruhe.

    Petter N. Kolm is the Deputy Director of the Mathematics in Finance Masters Program and Clinical Associate Professor at the Courant Institute of Mathematical Sciences, New York University, and a Founding Partner of the New York-based financial consulting firm the Heimdall Group, LLC. Previously, Petter worked in the Quantitative Strategies Group at Goldman Sachs Asset Management, where his responsibilities included researching and developing new quantitative investment strategies for the group's hedge fund. Petter coauthored the books Financial Modeling of the Equity Market: From CAPM to Cointegration (Wiley, 2006), Trends in Quantitative Finance (CFA Research Institute, 2006), and Robust Portfolio Management and Optimization (Wiley, 2007). His interests include high-frequency finance, algorithmic trading, quantitative trading strategies, financial econometrics, risk management, and optimal portfolio strategies. Petter holds a doctorate in mathematics from Yale University, an M.Phil. in applied mathematics from the Royal Institute of Technology in Stockholm, and an M.S. in mathematics from ETH Zürich. Petter is a member of the editorial board of the Journal of Portfolio Management.
    CHAPTER 1
    Introduction

    An economy can be regarded as a machine that takes labor and natural resources as inputs and outputs products and services. Studying this machine from a physical point of view would be very difficult, because we would have to study the characteristics of, and the interrelationships among, all modern engineering and production processes. Economics takes a bird's-eye view of these processes and attempts to study the dynamics of the economic value associated with the structure of the economy and its inputs and outputs. Economics is by nature a quantitative science, though it is difficult to find simple rules that link economic quantities. In most economies, value is presently obtained through a market process where supply meets demand. Here is where finance and financial markets come into play: they provide the tools to optimize the allocation of resources through time and space and to manage risk. Finance is by nature quantitative like economics, but it is subject to a large level of risk. It is the measurement of risk, and the implementation of decision-making processes based on risk, that makes finance a quantitative science and not simply accounting.

    Equity investing is one of the most fundamental processes of finance: it allocates the savings of households to investments in the productive activities of an economy. This investment process is a fundamental economic enabler: without equity investment it would be very difficult for an economy to properly function and grow. With the diffusion of affordable fast computers and with progress made in understanding financial processes, financial modeling has become a determinant of investment decision-making processes.

    Despite the growing diffusion of financial modeling, objections to its use are often raised. In the second half of the 1990s, there was so much skepticism about quantitative equity investing that David Leinweber, a pioneer in applying advanced techniques borrowed from the world of physics to fund management and author of Nerds on Wall Street,[1] wrote an article entitled "Is quantitative investment dead?"[2] In the article, Leinweber defended quantitative fund management and maintained that in an era of ever faster computers and ever larger databases, quantitative investment was here to stay. The skepticism toward quantitative fund management, provoked by the failure of some high-profile quantitative funds at that time, was related to the fact that investment professionals felt that capturing market inefficiencies could best be done by exercising human judgment.

    [1] David Leinweber, Nerds on Wall Street: Math, Machines, and Wired Markets (Hoboken, NJ: John Wiley & Sons, 2009).
    [2] David Leinweber, "Is Quantitative Investing Dead?" Pensions & Investments, February 8, 1999.

    Despite mainstream academic opinion that held that markets are efficient and unpredictable, the asset manager's job is to capture market inefficiencies and translate them into enhanced returns for clients. At the academic level, the notion of efficient markets has been progressively relaxed. Empirical evidence led to the acceptance of the notion that financial markets are somewhat predictable and that systematic market inefficiencies can be detected; there has been a growing body of evidence that there are market anomalies that can be systematically exploited to earn excess profits after considering risk and transaction costs.[3] In the face of this evidence, Andrew Lo proposed replacing the efficient market hypothesis with the adaptive market hypothesis, in which market inefficiencies appear as the market adapts to changes in a competitive environment.

    [3] For a modern presentation of the status of market efficiency, see M. Hashem Pesaran, "Market Efficiency Today," Working Paper 05.41, 2005 (Institute of Economic Policy Research).

    In this scenario, a quantitative equity investment management process is characterized by the use of computerized rules as the primary source of decisions. In a quantitative process, human intervention is limited to a control function that intervenes only exceptionally to modify decisions made by computers. We can say that a quantitative process is a process that quantifies things. The notion of quantifying things is central to any modern science, including the dismal science of economics. Note that everything related to accounting (balance sheet and income statement data, and even accounting at the national level) is by nature quantitative. So, in a narrow sense, finance has always been quantitative. The novelty is that we are now quantifying things that are not directly observed, such as risk, or things that are not quantitative per se, such as market sentiment, and that we seek simple rules to link these quantities.

    In this book we explain techniques for quantitative equity investing. Our purpose in this chapter is threefold. First, we discuss the relationship between mathematics and equity investing and look at the objections raised; we attempt to show that most objections are misplaced. Second, we discuss the results of three studies, based on surveys and interviews of major market participants, whose objective was to assess the state of quantitative equity portfolio management and its implications for equity portfolio managers. The results of these three studies are helpful in understanding the current state of quantitative equity investing, its trends, challenges, and implementation issues. Third, we discuss the challenges ahead for quantitative equity investing.
    IN PRAISE OF MATHEMATICAL FINANCE

    Is the use of mathematics to describe and predict financial and economic phenomena appropriate? The question was first raised at the end of the nineteenth century when Vilfredo Pareto and Leon Walras made an initial attempt to formalize economics. Since then, financial economic theorists have been divided into two camps: those who believe that economics is a science and can thus be described by mathematics, and those who believe that economic phenomena are intrinsically different from physical phenomena, which can be described by mathematics. In a tribute to Paul Samuelson, Robert Merton wrote:

        Although most would agree that finance, micro investment theory and much of the economics of uncertainty are within the sphere of modern financial economics, the boundaries of this sphere, like those of other specialties, are both permeable and flexible. It is enough to say here that the core of the subject is the study of the individual behavior of households in the intertemporal allocation of their resources in an environment of uncertainty and of the role of economic organizations in facilitating these allocations. It is the complexity of the interaction of time and uncertainty that provides intrinsic excitement to study of the subject, and, indeed, the mathematics of financial economics contains some of the most interesting applications of probability and optimization theory. Yet, for all its seemingly obtrusive mathematical complexity, the research has had a direct and significant influence on practice.[4]

    [4] Robert C. Merton, "Paul Samuelson and Financial Economics," American Economist 50, no. 2 (Fall 2006), pp. 262–300.

    The three principal objections to treating financial economics as a mathematical science that we will discuss are that (1) financial markets are driven by unpredictable unique events and, consequently, attempts to use mathematics to describe and predict financial phenomena are futile; (2) financial phenomena are driven by forces and events that cannot be quantified, though we can use intuition and judgment to form a meaningful financial discourse; and (3) although we can indeed quantify financial phenomena, we cannot predict or even describe financial phenomena with realistic mathematical expressions and/or computational procedures because the laws themselves change continuously.

    A key criticism of the application of mathematics to financial economics concerns the role of uncertainty. As there are unpredictable events with a potentially major impact on the economy, it is claimed that financial economics cannot be formalized as a mathematical methodology with predictive power. In a nutshell, the answer is that black swans exist not only in financial markets but also in the physical sciences, yet no one questions the use of mathematics in the physical sciences because there are major events that we cannot predict. The same should hold true for finance. Mathematics can be used to understand financial markets and help to avoid catastrophic events.[5] However, it is not necessarily true that science and mathematics will enable unlimited profitable speculation. Science will allow one to discriminate between rational predictable systems and highly risky unpredictable systems.

    [5] This is what Nassim Taleb refers to as "black swans" in his critique of financial models in his book The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007).

    There are reasons to believe that financial economic laws must include some fundamental uncertainty. The argument is, on a more general level, the same one used to show that there cannot be arbitrage opportunities in financial markets. Consider that economic agents are intelligent agents who can use scientific knowledge to make forecasts. Were financial economic laws deterministic, agents could make (and act on) deterministic forecasts. But this would imply a perfect consensus between agents, to ensure that there is no contradiction between forecasts and the actions determined by those same forecasts. For example, all investment opportunities should have exactly identical payoffs. Only a perfectly and completely planned economy can be deterministic; any other economy must include an element of uncertainty.

    In finance, the mathematical handling of uncertainty is based on probabilities learned from data. But in finance we have only one sample of small size and cannot run tests. Having only one sample, the only rigorous way to apply statistical models is to invoke ergodicity. An ergodic process is a stationary process where the limit of time averages is equal to time-invariant ensemble averages. Note that in financial modeling it is not necessary that economic quantities themselves form ergodic processes, only that the residuals after modeling form an ergodic process. In practice, we would like the models to extract all meaningful information and leave a sequence of white noise residuals.
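    The requirement that residuals be white noise is testable. As an illustration (our addition, not the book's), the Ljung-Box statistic checks whether the first h autocorrelations of a residual series are jointly zero; under the null hypothesis it is approximately chi-square distributed with h degrees of freedom. The residuals here are simulated.

```python
import numpy as np
from scipy import stats

def ljung_box(resid, lags=10):
    """Ljung-Box Q statistic and p-value for 'no autocorrelation up to lags'."""
    r = np.asarray(resid, dtype=float)
    r = r - r.mean()
    n = len(r)
    denom = r @ r
    # Sample autocorrelations at lags 1..lags.
    acf = np.array([(r[:-k] @ r[k:]) / denom for k in range(1, lags + 1)])
    q = n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, lags + 1)))
    return q, stats.chi2.sf(q, df=lags)

rng = np.random.default_rng(0)
resid = rng.standard_normal(500)     # stand-in for a model's residual series
q, p = ljung_box(resid)
print(f"Q = {q:.2f}, p = {p:.3f}")   # a large p-value: whiteness not rejected
```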
    If we could produce models that generate white noise residuals over extended periods of time, we would interpret uncertainty as probability and probability as relative frequency. However, we cannot produce such models because we do not have a firm theory known a priori. Our models are a combination of theoretical considerations, estimation, and learning; they are adaptive structures that need to be continuously updated and modified. Uncertainty in forecasts is due not only to the probabilistic uncertainty inherent in stochastic models but also to the possibility that the models themselves are misspecified. Model uncertainty cannot be measured with the usual concept of probability because this uncertainty itself is due to unpredictable changes. Ultimately, the case for mathematical financial economics hinges on our ability to create models that maintain their descriptive and predictive power even if there are sudden unpredictable changes in financial markets. It is not the large unpredictable events that are the challenge to mathematical financial economics, but our ability to create models able to recognize these events. This situation is not confined to financial economics. It is now recognized that there are physical systems that are totally unpredictable. These systems can be human artifacts or natural systems. With the development of nonlinear dynamics, it has been demonstrated that we can build artifacts whose behavior is unpredictable. There are examples of unpredictable artifacts of practical importance. Turbulence, for example, is a chaotic phenomenon. The behavior of an airplane can become unpredictable under turbulence. There are many natural phenomena from genetic mutations to tsunami and earthquakes whose development is highly nonlinear and cannot be individually predicted. But we do not reject mathematics in the physical sciences because there are events that cannot be predicted. On the contrary, we use mathematics to understand where we can find regions of dangerous unpredictability. We do not knowingly fly an airplane in extreme turbulence and we refrain from building dangerous structures that exhibit catastrophic behavior. Principles of safe design are part of sound engineering. Financial markets are no exception. Financial markets are designed artifacts: we can make them more or less unpredictable. We can use mathematics to understand the conditions that make financial markets subject to nonlinear behavior with possibly catastrophic consequences. We can improve our knowledge of what variables we need to control in order to avoid entering chaotic regions. It is therefore not reasonable to object that mathematics cannot be used in finance because there are unpredictable events with major consequences. It is true that there are unpredictable financial markets where we cannot use
mathematics except to recognize that these markets are unpredictable. But we can use mathematics to make financial markets safer and more stable.6

Let us now turn to the objection that we cannot use mathematics in finance because the financial discourse is inherently qualitative and cannot be formalized in mathematical expressions. For example, it is objected that qualitative elements such as the quality of management or the culture of a firm are important considerations that cannot be formalized mathematically. A partial acceptance of this point of view has led to the development of techniques to combine human judgment with models. These techniques range from simply counting analysts’ opinions to sophisticated Bayesian methods that incorporate qualitative judgment into mathematical models. These hybrid methodologies link models based on data with human overlays.

Is there any irreducibly judgmental process in finance? Consider that in finance, all data important for decision-making are quantitative or can be expressed in terms of logical relationships. Prices, profits, and losses are quantitative, as are corporate balance-sheet data. Links between companies and markets can be described through logical structures. Starting from these data we can construct theoretical terms such as volatility. Are there hidden elements that cannot be quantified or described logically? Ultimately, in finance, the belief in hidden elements that can be neither quantified nor logically described is related to the fact that economic agents are human agents with a decision-making process. The operational point of view of Samuelson has been replaced by the neoclassical economics view that, apparently, places the emphasis on agents’ decision-making. It is curious that the agent of neoclassical economics is not a realistic human agent but a mathematical optimizer described by a utility function.

Do we need anything that cannot be quantified or expressed in logical terms? At this stage of science, we can say the answer is a qualified no, if we consider markets in the aggregate. Human behavior is predictable in the aggregate and with statistical methods. Interaction between individuals, at least at the level of economic exchange, can be described with logical tools. We have developed many mathematical tools that allow us to describe critical points of aggregation that might lead to those situations of unpredictability described by complex systems theory.

We can conclude that the objection of hidden qualitative variables should be rejected. If we work at the aggregate level and admit uncertainty,
6. A complex systems theorist could object that there is a fundamental uncertainty as regards the decisions that we will make: Will we take the path of building safer financial systems, or will we build increasingly risky financial systems in the hope of realizing a gain?
there is no reason why we have to admit inherently qualitative judgment. In practice, we integrate qualitative judgment with models because (presently) it would be impractical or too costly to model all variables. If we consider modeling individual decision-making, at the present stage of science we have no definitive answer. Whenever financial markets depend on the single decisions of single individuals, we are in the presence of uncertainty that cannot be quantified. However, we have situations of this type in the physical sciences as well, and we do not consider them an obstacle to the development of a mathematical science.

Let us now address a third objection to the use of mathematics in finance. It is sometimes argued that we cannot arrive at mathematical laws in finance because the laws themselves keep on changing. This objection is somewhat true, and addressing it has led to the development of methods specific to financial economics.

First, observe that many physical systems are characterized by changing laws. For example, if we monitor the behavior of complex artifacts such as nuclear reactors, we find that their behavior changes with aging. We can consider these changes as structural breaks. Obviously, one could object that if we had more information we could establish a precise time-invariant law. Still, if the artifact is complex, and especially if we cannot access all its parts, we might experience true structural breaks. For example, if we are monitoring the behavior of a nuclear reactor, we might not be able to inspect it properly. Many natural systems, such as volcanoes, cannot be properly inspected and structurally described. We can only monitor their behavior, trying to find predictive laws. We might find that our laws change abruptly or continuously. We assume that we could identify more complex laws if we had all the requisite information, though, in practice, we do not have this information.

These remarks show that the objection of changing laws is weaker than we might intuitively believe. The real problem is not that the laws of finance change continuously; it is that they are too complex. We do not have enough theoretical knowledge to determine the laws of finance and, if we try to estimate statistical models, we do not have enough data to estimate complex models. Stated differently, the question is not whether we can use mathematics in financial economic theory. The real question is: How much information can we obtain in studying financial markets? Laws and models in finance are highly uncertain. One partial solution is to use adaptive models. Adaptive models are formed by simple models plus rules to change the parameters of the simple models. A typical example is nonlinear state-space models, which are formed by a simple regression plus another process that continuously adapts the model parameters. Other examples are hidden Markov models that might represent prices as formed by sequences of random walks with different parameters.
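To make the idea of an adaptive structure concrete, the sketch below implements one simple variant in Python: a regression whose coefficients follow a random walk and are re-estimated each period with a Kalman filter. It is only an illustration, not a model specified in this book, and the noise variances `q` and `r` are assumed known, whereas in practice they would themselves have to be estimated.

```python
import numpy as np

def adaptive_regression(y, X, q=1e-4, r=1.0):
    """Kalman-filter estimates for y_t = x_t' beta_t + eps_t, where the
    coefficients follow a random walk beta_t = beta_{t-1} + eta_t.
    q (variance of eta_t) and r (variance of eps_t) are assumed known."""
    T, k = X.shape
    beta = np.zeros(k)              # current coefficient estimate
    P = np.eye(k)                   # uncertainty about the coefficients
    path = np.empty((T, k))
    for t in range(T):
        x = X[t]
        P = P + q * np.eye(k)       # coefficients may have drifted since t-1
        S = x @ P @ x + r           # variance of the one-step forecast error
        K = P @ x / S               # Kalman gain
        beta = beta + K * (y[t] - x @ beta)   # correct by the forecast error
        P = P - np.outer(K, x @ P)  # shrink the uncertainty accordingly
        path[t] = beta
    return path
```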
We can therefore conclude that the objection that there is no fixed law in financial economics cannot be settled a priori. Empirically, we find that simple models cannot describe financial markets over long periods of time; if we turn to adaptive modeling, we are left with a high level of residual uncertainty.

Our overall conclusion is twofold. First, we can and indeed should regard mathematical finance as a discipline with methods and mathematics specific to the type of empirical data available in the discipline. Given the state of continuous change in our economies, we cannot force mathematical finance into the same paradigm as classical mathematical physics based on differential equations. Mathematical finance needs adaptive, nonlinear models that are able to adapt in a timely fashion to a changing empirical environment. This is not to say that mathematical finance is equivalent to
where $\Lambda = \operatorname{diag}(f_1, f_2, \ldots, f_N)$, that is, $\Lambda$ equals the diagonal matrix of the fractions of portfolio wealth. The undershoot and overshoot variables need to be as small as possible at the optimal point and are therefore penalized in the objective function, yielding the following optimization problem:

$$\max_z \; z'\Lambda\mu - \lambda\, z'\Lambda\Sigma\Lambda z - \gamma\,(\varepsilon^- + \varepsilon^+)$$

subject to

$$z'\Lambda\iota + \varepsilon^- - \varepsilon^+ = 1, \qquad \iota' = [1, 1, \ldots, 1]$$
$$\varepsilon^- \ge 0, \quad \varepsilon^+ \ge 0$$

where $\lambda$ and $\gamma$ are parameters chosen by the portfolio manager. Normally, the inclusion of round lot constraints in the mean-variance optimization problem produces only a small increase in risk for a prespecified expected return. Furthermore, the portfolios obtained in this manner cannot be obtained by simply rounding the weights from a standard mean-variance optimization to the nearest round lot. In order to represent threshold and cardinality constraints we have to introduce binary (0/1) variables, and for round lots we need integer variables. In effect, the original quadratic program (QP) resulting from the mean-variance formulation becomes a quadratic mixed integer program (QMIP). These combinatorial extensions therefore require more sophisticated and specialized algorithms that often require significant computing time.
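To see why these combinatorial extensions are demanding, consider the toy sketch below, which attacks a hypothetical three-asset round-lot problem by brute-force enumeration rather than by a proper QMIP algorithm; all numbers (the expected returns, covariance matrix, lot fractions, λ, and γ) are invented for illustration. Even here the search space grows exponentially with the number of assets, which is exactly why specialized solvers are needed.

```python
import itertools
import numpy as np

# Hypothetical inputs for a three-asset example.
mu = np.array([0.08, 0.10, 0.12])             # expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])         # covariance matrix
f = np.array([0.05, 0.04, 0.02])               # fraction of wealth per round lot
lam, gamma = 2.0, 5.0                          # manager-chosen parameters

best_val, best_z = -np.inf, None
for z in itertools.product(range(26), repeat=3):   # up to 25 lots per asset
    w = np.array(z) * f                            # implied portfolio weights
    slack = abs(1.0 - w.sum())                     # budget under/overshoot
    val = w @ mu - lam * (w @ Sigma @ w) - gamma * slack
    if val > best_val:
        best_val, best_z = val, z

print("lots:", best_z, "objective:", round(best_val, 4))
```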
ESTIMATING THE INPUTS USED IN MEAN-VARIANCE OPTIMIZATION: EXPECTED RETURN AND RISK

In this section, we discuss the estimation of the inputs required for portfolio asset-allocation models. We focus on the estimation of expected asset returns and their covariances using classical and practically well-proven techniques. Modern techniques using dynamic models and hidden variable models are covered in Fabozzi, Focardi, and Kolm.24

An analyst might proceed in the following way. Observing weekly or monthly returns, he might use the past five years of historical data to
24. See Chapters 14, 15, and 16 in Frank J. Fabozzi, Sergio M. Focardi, and Petter N. Kolm, Financial Modeling of the Equity Market: From CAPM to Cointegration (Hoboken, NJ: John Wiley & Sons, 2006).
estimate the expected return and the covariance matrix by the sample mean and sample covariance matrix. He would then use these as inputs to the mean-variance optimization, along with any ad hoc adjustments to reflect his views on future performance. Unfortunately, this historical approach most often leads to counterintuitive, unstable, or simply wrong portfolios. Statistical estimates are noisy and depend on the quality of the data and the particular statistical techniques used. In general, it is desirable that an estimator of expected return and risk have the following properties:

■ It provides a forward-looking forecast with some predictive power, not just a backward-looking historical summary of past performance.
■ The estimate can be produced at a reasonable computational cost.
■ The technique used does not amplify errors already present in the inputs used in the process of estimation.
■ The forecast should be intuitive; that is, the portfolio manager or the analyst should be able to explain and justify it in a comprehensible manner.
In this section we discuss the properties of the sample mean and covariance estimators as forecasts of expected return and risk. The forecasting power of these estimators is typically poor, and for practical applications, modifications and extensions are necessary. We focus on some of the most common and widely used modifications. We postpone the treatment of Bayesian techniques such as the Black-Litterman model to Chapter 9.
The Sample Mean and Covariance Estimators

Quantitative techniques for forecasting security expected returns and risk most often rely on historical data. Therefore, it is important to keep in mind that we are implicitly assuming that the past can predict the future. It is well known that expected returns exhibit significant time variation (nonstationarity) and that realized returns are strongly influenced by changes in expected returns.25 Consequently, extrapolated historical returns are in general poor forecasts of future returns, or as a typical disclaimer in any investment prospectus states: “Past performance is not an indication of future performance.”
25. See Eugene F. Fama and Kenneth R. French, “The Equity Risk Premium,” Journal of Finance, 57 (2002), pp. 637–659; and Thomas K. Philips, “Why Do Valuation Ratios Forecast Long-Run Equity Returns?” Journal of Portfolio Management, 25 (1999), pp. 39–44.
One problem with basing forecasts on historical performance is that markets and economic conditions change over time. For example, interest rates have varied substantially, all the way from the high double digits to the low interest rate environment of the early 2000s. Other factors that change over time, and that can significantly influence the markets, include the political environment within and across countries, monetary and fiscal policy, consumer confidence, and the business cycles of different industry sectors and regions.

Of course, there are reasons why we can place more faith in statistical estimates obtained from historical data for some assets than for others. Different asset classes have histories of varying length. For example, not only do the United States and European markets have longer histories, but their data also tend to be more accurate. For emerging markets, the situation is quite different: sometimes only a few years of historical data are available. As a consequence, based upon the quality of the inputs, we expect to be able to construct more precise estimates for some asset classes than for others. In practice, if portfolio managers believe that inputs based on the historical performance of an asset class are not a good reflection of the future expected performance of that asset class, they may alter the inputs objectively or subjectively. Obviously, different portfolio managers may hold different beliefs, and their corrections will therefore differ.

We now turn to the estimation of expected return and risk by the sample mean and covariance estimators. Given the historical returns of two securities i and j, $R_{i,t}$ and $R_{j,t}$, where $t = 1, \ldots, T$, the sample means and covariance are given by
$$\bar{R}_i = \frac{1}{T}\sum_{t=1}^{T} R_{i,t}, \qquad \bar{R}_j = \frac{1}{T}\sum_{t=1}^{T} R_{j,t}$$

$$\sigma_{ij} = \frac{1}{T-1}\sum_{t=1}^{T} \left(R_{i,t} - \bar{R}_i\right)\left(R_{j,t} - \bar{R}_j\right)$$
In the case of N securities, the covariance matrix can be expressed directly in matrix form:

$$\Sigma = \frac{1}{T-1}\, XX'$$

where
$$X = \begin{bmatrix} R_{11} & \cdots & R_{1T} \\ \vdots & \ddots & \vdots \\ R_{N1} & \cdots & R_{NT} \end{bmatrix} - \begin{bmatrix} \bar{R}_1 & \cdots & \bar{R}_1 \\ \vdots & \ddots & \vdots \\ \bar{R}_N & \cdots & \bar{R}_N \end{bmatrix}$$

Under the assumption that security returns are independent and identically distributed (i.i.d.), it can be demonstrated that $\Sigma$ is the maximum-likelihood estimator of the population covariance matrix and that this matrix follows a Wishart distribution with $T - 1$ degrees of freedom.26

As mentioned before, the risk-free rate $R_f$ does change significantly over time. Therefore, when using a longer history, it is common that historical security returns are first converted into excess returns, $R_{i,t} - R_{f,t}$, and thereafter the expected return is estimated from

$$\bar{R}_i = R_{f,t} + \frac{1}{T}\sum_{t=1}^{T}\left(R_{i,t} - R_{f,t}\right)$$
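As a minimal numerical sketch of the estimators just defined, assuming a T × N array `returns` of historical returns and a length-T array `rf` of risk-free rates (both hypothetical inputs):

```python
import numpy as np

def sample_estimates(returns, rf):
    """Sample mean, excess-return-based expected return, and sample
    covariance matrix, following the formulas above."""
    mean_ret = returns.mean(axis=0)                # one sample mean per asset
    cov = np.cov(returns, rowvar=False, ddof=1)    # divisor T - 1
    # Current risk-free rate plus the average historical excess return:
    exp_ret = rf[-1] + (returns - rf[:, None]).mean(axis=0)
    return mean_ret, exp_ret, cov
```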
Alternatively, the expected excess returns may be used directly in a mean-variance optimization framework. Unfortunately, for financial return series, the sample mean is a poor estimator of the expected return. The sample mean is the best linear unbiased estimator (BLUE) of the population mean for distributions that are not heavy-tailed. In this case, the sample mean exhibits the important property that an increase in the sample size always improves its performance. However, these results are no longer valid under extreme heavy-tailedness, and caution has to be exercised.27 Furthermore, financial time series are typically not stationary, so the mean is not a good forecast of expected return. Moreover, the resulting estimator has a large estimation error (as measured by the standard error), which significantly influences the mean-variance portfolio allocation process.
26. Suppose $X_1, \ldots, X_N$ are independent and identically distributed random vectors, and that for each $i$ it holds that $X_i \sim N_p(0, V)$; that is, $E(X_i) = 0$, where $0$ is a $p$-dimensional vector, and $\operatorname{Var}(X_i) = E(X_i X_i') = V$, where $V$ is a $p \times p$ matrix. Then the Wishart distribution with $N$ degrees of freedom is the probability distribution of the $p \times p$ random matrix
$$S = \sum_{i=1}^{N} X_i X_i'$$
and we write $S \sim W_p(V, N)$. In the case when $p = 1$ and $V = 1$, this distribution reduces to a chi-square distribution.
27. Rustam Ibragimov, “Efficiency of Linear Estimators under Heavy-Tailedness: Convolutions of α-Symmetric Distributions,” Econometric Theory, 23 (2007), pp. 501–517.
As a consequence:

■ Equally weighted portfolios often outperform mean-variance optimized portfolios.28
■ Mean-variance optimized portfolios are not necessarily well diversified.29
■ Uncertainty of returns tends to have more influence than risk in mean-variance optimization.30
These problems must be addressed from different perspectives. More robust or stable (lower estimation error) estimates of expected return should be used. One approach is to impose more structure on the estimator. Most commonly, practitioners use some form of factor model to produce the expected return forecasts.31 Another possibility is to use Bayesian (such as the Black-Litterman model) or shrinkage estimators.

Mean-variance optimization is very sensitive to its inputs: small changes in expected return inputs often lead to large changes in portfolio weights. To some extent this is mitigated by using better estimators. However, further improvements can be made by taking the estimation errors (whether large or small) into account in the optimization. In a nutshell, the problem is that the mean-variance optimizer does not know that the inputs are statistical estimates and not known with certainty. When we use classical mean-variance optimization, we are implicitly assuming that the inputs are deterministic and known with great accuracy. In other words, bad inputs lead to even worse outputs, or “garbage in, garbage out.” Chapter 10 covers so-called robust portfolio optimization, which addresses these issues.

We now turn to the sample covariance matrix estimator. Several authors (for example, Gemmill;32 Litterman and Winkelmann;33 and
28. J. D. Jobson and Bob M. Korkie, “Putting Markowitz Theory to Work,” Journal of Portfolio Management, 7 (1981), pp. 70–74.
29. Philippe Jorion, “International Portfolio Diversification with Estimation Risk,” Journal of Business, 58 (1985), pp. 259–278.
30. Vijay K. Chopra and William T. Ziemba, “The Effect of Errors in Means, Variances, and Covariances on Optimal Portfolio Choice,” Journal of Portfolio Management, 19 (1993), pp. 6–11.
31. Factor models are covered in Chapter 5.
32. Gordon Gemmill, Options Pricing: An International Perspective (London: McGraw-Hill, 1993).
33. Robert Litterman and Kurt Winkelmann, “Estimating Covariance Matrices,” Risk Management Series, Goldman Sachs, 1998.
Pafka, Potters, and Kondor34) suggest improvements to this estimator using weighted data. The rationale for weighting the data is that the market changes, so it makes sense to give more importance to recent information than to the distant past. If we give the most recent observation a weight of one and preceding observations weights of $d, d^2, d^3, \ldots$, where $d < 1$, then
$$\sigma_{ij} = \frac{\displaystyle\sum_{t=1}^{T} d^{\,T-t}\left(R_{i,t} - \bar{R}_i\right)\left(R_{j,t} - \bar{R}_j\right)}{\displaystyle\sum_{t=1}^{T} d^{\,T-t}} = \frac{1-d}{1-d^{\,T}}\sum_{t=1}^{T} d^{\,T-t}\left(R_{i,t} - \bar{R}_i\right)\left(R_{j,t} - \bar{R}_j\right)$$
We observe that

$$\frac{1-d}{1-d^{\,T}} \approx 1-d$$

when T is large enough. The weighting (decay) parameter d can be estimated by maximum likelihood estimation, or by minimizing the out-of-sample forecasting error.35 Nevertheless, just like the estimator for expected returns, the covariance estimator suffers from estimation errors, especially when the number of historical return observations is small relative to the number of securities.

The sample mean and covariance matrix are poor estimators for anything but i.i.d. time series. In the i.i.d. case, they are the maximum likelihood estimators of the true mean and covariance.36 The sample covariance estimator often performs poorly in practice. For instance, Ledoit and Wolf37 argue against using the sample covariance matrix for portfolio optimization purposes. They stress that the sample covariance matrix contains estimation errors that will very likely perturb and produce poor results in a mean-variance optimization.
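Returning to the exponentially weighted estimator defined above, a minimal sketch is shown below; the decay parameter `d` is simply assumed given here, whereas the text notes it would in practice be estimated by maximum likelihood or by minimizing out-of-sample forecasting error.

```python
import numpy as np

def ewma_cov(returns, d=0.97):
    """Exponentially weighted covariance of a T x N return array,
    giving the most recent observation weight 1 and observation t
    weight d^(T - t), normalized by the sum of the weights."""
    T, _ = returns.shape
    dev = returns - returns.mean(axis=0)       # deviations from the sample mean
    w = d ** np.arange(T - 1, -1, -1)          # weights d^(T-t), t = 1, ..., T
    w = w / w.sum()
    return (dev * w[:, None]).T @ dev          # weighted sum of outer products
```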
34. Szilard Pafka, Marc Potters, and Imre Kondor, “Exponential Weighting and Random-Matrix-Theory-Based Filtering of Financial Covariance Matrices for Portfolio Optimization,” Working Paper, Science & Finance, Capital Fund Management, 2004.
35. See Giorgio De Santis, Robert Litterman, Adrien Vesval, and Kurt Winkelmann, “Covariance Matrix Estimation,” in Robert Litterman (ed.), Modern Investment Management: An Equilibrium Approach (Hoboken, NJ: John Wiley & Sons, 2003), pp. 224–248.
36. See, for example, Fumio Hayashi, Econometrics (Princeton: Princeton University Press, 2000).
37. Olivier Ledoit and Michael Wolf, “Honey, I Shrunk the Sample Covariance Matrix,” Journal of Portfolio Management, 30 (2004), pp. 110–117.
As a substitute, they suggest applying shrinkage techniques to covariance estimation. We discuss these methods later in this chapter.

The sample covariance matrix is a nonparametric (unstructured) estimator. An alternative is to make assumptions about the structure of the covariance matrix during the estimation process. For example, one can include information on the underlying economic variables or factors contributing to the movement of securities. This is the basic idea behind many asset pricing and factor models that we describe in subsequent sections. Such models are intuitive and practical, and are very widely used. It is important to remember, however, that introducing structure into any statistical estimator comes at a price: structured estimators can suffer from specification error, that is, the assumptions made may be too restrictive to forecast reality accurately.

As a solution, Jagannathan and Ma38 proposed using portfolios of covariance matrix estimators. Their idea was to “diversify away” the estimation and specification errors to which all covariance matrix estimators are subject. Portfolios of estimators are typically constructed in a simple fashion: they are equally weighted, and easier to compute than, say, shrinkage estimators. For example, one of the portfolios of estimators suggested by Jagannathan and Ma and by Bengtsson and Holst39 consists of the average of the sample covariance matrix, a single-index matrix, and a matrix containing only the diagonal elements of the sample matrix. The latter matrix is more stable than a full asset-asset covariance matrix: the sample covariance matrix is frequently noninvertible due to noisy data and may in general result in ill-conditioned mean-variance portfolio optimization. The single-index matrix is a covariance matrix estimator obtained by assuming that returns are generated according to Sharpe’s classical single-index factor model.40 Other portfolios of estimators add the matrix of constant correlations (a highly structured covariance matrix that assumes that each pair of assets has the same correlation). Interestingly, a recent study of several portfolio and shrinkage covariance matrix estimators using historical data on stocks traded on the New York Stock Exchange
38. Ravi Jagannathan and Tongshu Ma, “Three Methods for Improving the Precision in Covariance Matrix Estimators,” Manuscript, Kellogg School of Management, Northwestern University, 2000.
39. Christoffer Bengtsson and Jan Holst, “On Portfolio Selection: Improved Covariance Matrix Estimation for Swedish Asset Returns,” Working Paper, Lund University and Lund Institute of Technology.
40. William Sharpe, “A Simplified Model for Portfolio Analysis,” Management Science, 9 (1963), pp. 277–293. Sharpe suggested a single-factor model for returns, where the single factor is a market index. We will discuss factor models later in this chapter.
concluded that while portfolios of estimators and shrinkage estimators of the covariance matrix were indisputably better than the simple sample covariance matrix estimator, there were no statistically significant differences in portfolio performance over time between stock portfolios constructed using simple portfolios of covariance matrix estimators and those constructed using shrinkage estimators, at least for this particular set of data.41 As a general matter, it is always important to test any particular estimator of the covariance matrix on the specific asset classes and data a portfolio manager is dealing with before adopting it for portfolio management purposes.
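A sketch of one equally weighted portfolio of estimators is shown below. It combines the sample covariance matrix, its diagonal, and a constant-correlation matrix; this is in the spirit of the combinations described above, though the exact components differ across the cited studies (the single-index matrix, for instance, would additionally require a market index series).

```python
import numpy as np

def covariance_portfolio(returns):
    """Equally weighted average of three covariance matrix estimators."""
    S = np.cov(returns, rowvar=False, ddof=1)      # sample covariance matrix
    D = np.diag(np.diag(S))                        # diagonal elements only
    vol = np.sqrt(np.diag(S))
    corr = S / np.outer(vol, vol)
    N = S.shape[0]
    rbar = (corr.sum() - N) / (N * (N - 1))        # average off-diagonal correlation
    C = rbar * np.outer(vol, vol)                  # constant-correlation matrix
    np.fill_diagonal(C, np.diag(S))                # keep the sample variances
    return (S + D + C) / 3.0
```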
Further Practical Considerations

In this subsection, we consider some techniques that are important for a more successful implementation of the sample mean and covariance matrix estimators, as well as of the more advanced estimators encountered in practice.

Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation

Financial return series exhibit serial correlation and heteroskedasticity.42 Serial correlation, also referred to as autocorrelation, is the correlation of the return of a security with itself over successive time intervals. The presence of heteroskedasticity means that variances/covariances are not constant but time varying. These two effects introduce biases into the estimated covariance matrix. Fortunately, simple and straightforward techniques are available that almost automatically correct for these biases. Probably the most popular are the approaches of Newey and West,43 and the extension by Andrews,44 often referred to as “Newey-West corrections” in the financial literature.45
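For intuition, the sketch below computes a Newey-West-style estimate of the long-run variance of a single return series using Bartlett weights. It illustrates the correction in its simplest univariate form, not the full covariance-matrix machinery of the cited papers, and the truncation lag is an arbitrary assumption.

```python
import numpy as np

def newey_west_lrv(x, lags=5):
    """Long-run variance of a series x with Bartlett-weighted
    autocovariance corrections up to the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    T = len(x)
    lrv = x @ x / T                              # lag-0 term (sample variance)
    for j in range(1, lags + 1):
        gamma_j = x[j:] @ x[:-j] / T             # lag-j autocovariance
        lrv += 2.0 * (1.0 - j / (lags + 1)) * gamma_j
    return lrv
```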
41. David Disatnik and Simon Bennings, “Shrinking the Covariance Matrix—Simpler is Better,” Journal of Portfolio Management, 33 (2007), pp. 56–63.
42. See John Y. Campbell, Andrew W. Lo, and A. Craig MacKinlay, The Econometrics of Financial Markets (Princeton, NJ: Princeton University Press, 1997).
43. Whitney K. Newey and Kenneth D. West, “A Simple, Positive Semidefinite Heteroskedasticity and Autocorrelation Consistent Covariance Matrix,” Econometrica, 56 (1987), pp. 203–208.
44. Donald W. K. Andrews, “Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation,” Econometrica, 59 (1991), pp. 817–858.
45. However, these techniques can be traced back to work done by Jowett and Hannan in the 1950s. See G. H. Jowett, “The Comparison of Means of Sets of Observations from Sections of Independent Stochastic Series,” Journal of the Royal Statistical Society, Series B, 17 (1955), pp. 208–227; and E. J. Hannan, “The Variance of the Mean of a Stationary Process,” Journal of the Royal Statistical Society, Series B, 19 (1957), pp. 282–285.
Dealing with Missing and Truncated Data

In practice, we have to deal with the fact that no data series are perfect. There will be missing and errant observations, or simply not enough data. If care is not taken, this can lead to poorly estimated models and inferior investment performance. Cleaning data series for practical use is typically tedious but very important work. Some statistical techniques are available for dealing with missing observations, the so-called expectation maximization (EM) algorithm being among the most popular for financial applications.46

Longer daily return data series are often available for well-established companies in developed countries. However, for newer companies, or companies in emerging markets, this is often not the case. Say that we have a portfolio of 10 assets, of which five have a return history of 10 years, while the other five have only been around for three years. We could, for example, truncate the data series, making all of them three years long, and then calculate the sample covariance matrix. But by using the method proposed by Stambaugh,47 we can do better than that. Simplistically speaking, starting from the truncated sample covariance matrix, this technique produces improvements to the covariance matrix that utilize all the available data.

Data Frequency

Merton48 shows that even if expected returns are constant over time, a long history would still be required in order to estimate them accurately. The situation is very different for variances and covariances: under reasonable assumptions, it can be shown that estimates of these quantities can be improved by increasing the sampling frequency. However, not everyone has the luxury of access to high-frequency or tick-by-tick data. An improved estimator of volatility can be achieved by using the daily high, low, opening, and closing prices, along with the transaction volume.49

46. See Roderick J. A. Little and Donald B. Rubin, Statistical Analysis with Missing Data (New York: Wiley-Interscience, 2002); and Joe L. Schafer, Analysis of Incomplete Multivariate Data (Boca Raton, FL: Chapman & Hall/CRC, 1997).
47. For a more detailed description of the technique, see Robert F. Stambaugh, “Analyzing Investments Whose Histories Differ in Length,” Journal of Financial Economics, 45 (1997), pp. 285–331.
48. Robert C. Merton, “On Estimating the Expected Return on the Market: An Exploratory Investigation,” Journal of Financial Economics, 8 (1980), pp. 323–361.
These types of estimators are typically referred to as Garman-Klass estimators. Some guidance can also be gained from the option pricing literature. As suggested by Burghardt and Lane, when historical volatility is calculated for option pricing purposes, the time horizon for sampling should be equal to the time to maturity of the option.50 As Butler and Schachter point out, when historical data are used for volatility forecasting purposes, the bias found in the estimator tends to increase with the sample length.51 However, it can be problematic to use information based on too short a time period: the volatility estimator often becomes highly sensitive to short-term regimes, such as over- and underreaction corrections.
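A sketch of the Garman-Klass estimator in its widely quoted volume-free form is shown below (the original papers also consider transaction volume); the OHLC arrays are hypothetical inputs.

```python
import numpy as np

def garman_klass_vol(open_, high, low, close):
    """Average daily Garman-Klass volatility from arrays of daily
    open, high, low, and close prices."""
    hl = np.log(high / low)                    # intraday range term
    co = np.log(close / open_)                 # open-to-close term
    daily_var = 0.5 * hl**2 - (2.0 * np.log(2.0) - 1.0) * co**2
    return np.sqrt(daily_var.mean())
```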
PORTFOLIO OPTIMIZATION WITH OTHER RISK MEASURES

Generally speaking, the main objective of portfolio selection is the construction of portfolios that maximize expected return at a certain level of risk. It is now well known that asset returns are not normal and, therefore, the mean and the variance alone do not fully describe the characteristics of the joint asset return distribution. Indeed, many risks and undesirable scenarios faced by a portfolio manager cannot be captured solely by the variance of the portfolio. Consequently, especially in cases of significant nonnormality, the classical mean-variance approach will not be a satisfactory portfolio allocation model.

Since about the mid-1990s, considerable thought and innovation in the financial industry have been directed toward creating a better understanding of risk and its measurement, and toward improving the management of risk in financial portfolios. From the statistical point of view, a key innovation is the attention paid to the relationship between the bulk of the risk and the risk in the tails; the latter has become a critical statistical determinant of risk management policies. Changing situations and different portfolios may require alternative and new risk measures. The race to invent the best risk measure for a given situation or portfolio is still ongoing. It is possible that we will never find a completely satisfactory answer
49. See Mark B. Garman and Michael J. Klass, “On the Estimation of Security Price Volatilities from Historical Data,” Journal of Business, 53 (1980), pp. 67–78; and Michael Parkinson, “The Extreme Value Method for Estimating the Variance of the Rate of Return,” Journal of Business, 53 (1980), pp. 61–65.
50. Galen Burghardt and Morton Lane, “How to Tell if Options Are Cheap,” Journal of Portfolio Management, 16 (1990), pp. 72–78.
51. John S. Butler and Barry Schachter, “Unbiased Estimation of the Black-Scholes Formula,” Journal of Financial Economics, 15 (1986), pp. 341–357.
    to the question of which risk measure to use, and the choice to some extent remains an art. We distinguish between two different types of risk measures: (1) dispersion and (2) downside measures. We begin with an overview of the most common dispersion and downside measures.52 We close this section with a derivation of a mean-CVaR portfolio optimization model.
Dispersion Measures

Dispersion measures are measures of uncertainty. Uncertainty, however, does not necessarily quantify risk. Dispersion measures consider both positive and negative deviations from the mean, and treat those deviations as equally risky. In other words, overperformance relative to the mean is penalized as much as underperformance. In this section, we review the most popular and important portfolio dispersion measures: mean standard deviation, mean absolute deviation, and mean absolute moment.

Mean Standard Deviation and the Mean-Variance Approach

For historical reasons, portfolio standard deviation (or portfolio variance) is probably the best-known dispersion measure because of its use in classical portfolio theory (i.e., the mean-variance framework).

Mean Absolute Deviation

Konno53 introduced the mean absolute deviation (MAD) approach in 1988. Rather than using squared deviations as in the mean-variance approach, here the dispersion measure is based on the absolute deviations from the mean; that is, it is defined as

$$\text{MAD}(R_p) = E\left(\left|\sum_{i=1}^{N} w_i R_i - \sum_{i=1}^{N} w_i \mu_i\right|\right)$$

where
52. For a further discussion, see Sergio Ortobelli, Svetlozar T. Rachev, Stoyan Stoyanov, Frank J. Fabozzi, and Almira Biglova, “The Correct Use of Risk Measures in Portfolio Theory,” International Journal of Theoretical and Applied Finance, 8 (2005), pp. 1–27.
53. Hiroshi Konno, “Portfolio Optimization Using L1 Risk Function,” IHSS Report 88-9, Institute of Human and Social Sciences, Tokyo Institute of Technology, 1988. See also Hiroshi Konno, “Piecewise Linear Risk Functions and Portfolio Optimization,” Journal of the Operations Research Society of Japan, 33 (1990), pp. 139–156.
$$R_p = \sum_{i=1}^{N} w_i R_i$$
$R_p$ is the portfolio return, and $R_i$ and $\mu_i$ are the return and the expected return on asset i, respectively. The computation of optimal portfolios under the mean absolute deviation approach is significantly simplified, as the resulting optimization problem is linear and can be solved by standard linear programming routines. We note that it can be shown that, under the assumption that the individual asset returns are multivariate normally distributed,

$$\text{MAD}(R_p) = \sqrt{\frac{2}{\pi}}\,\sigma_p$$
where $\sigma_p$ is the standard deviation of the portfolio.54 That is, when asset returns are normally distributed, the mean absolute deviation and the mean-variance approaches are equivalent.

Mean Absolute Moment

The mean absolute moment (MAM$_q$) of order q is defined by
$$\text{MAM}_q(R_p) = \left(E\left(\left|R_p - E(R_p)\right|^q\right)\right)^{1/q}, \quad q \ge 1$$
    and is a straightforward generalization of the mean-standard deviation (q = 2) and the mean absolute deviation (q = 1) approaches.
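Returning to the linearity of the MAD problem noted above, the sketch below solves a scenario version of it as a linear program: auxiliary variables u_t bound the absolute deviations, the portfolio is long-only and fully invested, and a minimum mean return `target` is imposed. This is a standard LP reformulation of MAD under those stated assumptions, not code from this book.

```python
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(R, target):
    """Minimize portfolio MAD over long-only, fully invested weights,
    requiring a mean return of at least `target`; R is a T x N array of
    scenario returns. Decision vector: [w (N weights), u (T deviations)]."""
    T, N = R.shape
    mu = R.mean(axis=0)
    A = R - mu                                  # scenario deviations from the mean
    c = np.concatenate([np.zeros(N), np.ones(T) / T])   # objective: average u_t
    # u_t >= (r_t - mu)'w and u_t >= -(r_t - mu)'w, i.e. u_t >= |deviation|:
    A_ub = np.vstack([np.hstack([A, -np.eye(T)]),
                      np.hstack([-A, -np.eye(T)])])
    b_ub = np.zeros(2 * T)
    # Mean return constraint mu'w >= target, written as an upper bound:
    A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])[None, :]])
    b_ub = np.append(b_ub, -target)
    A_eq = np.concatenate([np.ones(N), np.zeros(T)])[None, :]   # budget w'1 = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (N + T), method="highs")
    return res.x[:N]
```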
Downside Measures

The objective in downside risk measure portfolio allocation models is the maximization of the probability that the portfolio return is above a certain minimal acceptable level, often also referred to as the benchmark level or disaster level. Despite their theoretical appeal, downside or safety-first risk measures are often computationally more complicated to use in a portfolio context. Downside risk measures of individual securities cannot easily be aggregated into portfolio downside risk measures, as their computation requires knowledge of the entire joint distribution of security returns. Often, one has to
54. Hiroshi Konno and Hiroaki Yamazaki, “Mean-Absolute Deviation Portfolio Optimization Model and its Application to Tokyo Stock Market,” Management Science, 37 (1991), pp. 519–531.
resort to computationally intensive nonparametric estimation, simulation, and optimization techniques. Furthermore, the estimation risk of downside measures is usually greater than for standard mean-variance approaches: in estimating downside risk measures we use only a portion of the original data—maybe even just the tail of the empirical distribution—and hence the estimation error increases.55 Nevertheless, these risk measures are very useful in assessing the risk of securities with asymmetric return distributions, such as call and put options, as well as other derivative contracts. We discuss some of the most common safety-first and downside risk measures: Roy’s safety-first, semivariance, lower partial moment, Value-at-Risk, and conditional Value-at-Risk.

Roy’s Safety-First

Two very important papers on portfolio selection were published in 1952: first, Markowitz’s56 paper on portfolio selection and classical portfolio theory; second, Roy’s57 paper on safety first, which planted the seed for the development of downside risk measures.58 Let us first understand the difference between these two approaches.59

According to classical portfolio theory, an investor constructs a portfolio that represents a trade-off between risk and return; the trade-off and the resulting portfolio allocation depend upon the investor’s utility function. It can be hard, or even impossible, to determine an investor’s actual utility function. Roy argued that an investor, rather than thinking in terms of utility functions, first wants to make sure that a certain amount of the principal is preserved. Thereafter, he decides on some minimal acceptable return that achieves this principal preservation. In essence, the investor chooses his portfolio by solving the following optimization problem:
55. For further discussion of these issues, see Henk Grootveld and Winfried G. Hallerbach, “Variance Versus Downside Risk: Is There Really That Much Difference?” European Journal of Operational Research, 114 (1999), pp. 304–319.
56. Harry M. Markowitz, “Portfolio Selection,” Journal of Finance, 7 (1952), pp. 77–91.
57. Andrew D. Roy, “Safety-First and the Holding of Assets,” Econometrica, 20 (1952), pp. 431–449.
58. See, for example, Vijay S. Bawa, “Optimal Rules for Ordering Uncertain Prospects,” Journal of Financial Economics, 2 (1975), pp. 95–121; and Vijay S. Bawa, “Safety-First Stochastic Dominance and Portfolio Choice,” Journal of Financial and Quantitative Analysis, 13 (1978), pp. 255–271.
59. For a more detailed description of these historical events, we refer the reader to David Nawrocki, “A Brief History of Downside Risk Measures,” Journal of Investing (1999), pp. 9–26.
$$\min_w \; P(R_p \le R_0)$$

subject to

$$w'\iota = 1, \qquad \iota' = [1, 1, \ldots, 1]$$

where P is the probability function and

$$R_p = \sum_{i=1}^{N} w_i R_i$$

is the portfolio return. Most likely, the investor will not know the true probability function. However, by using Tchebycheff’s inequality, we obtain60

$$P(R_p \le R_0) \le \frac{\sigma_p^2}{(\mu_p - R_0)^2}$$

where $\mu_p$ and $\sigma_p^2$ denote the expected return and the variance of the portfolio, respectively. Therefore, not knowing the probability function, the investor will end up solving the approximation

$$\min_w \; \frac{\sigma_p}{\mu_p - R_0}$$

subject to

$$w'\iota = 1, \qquad \iota' = [1, 1, \ldots, 1]$$

We note that if $R_0$ is equal to the risk-free rate, then this optimization problem is equivalent to maximizing the portfolio’s Sharpe ratio.
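As a numerical sketch of the Tchebycheff approximation above, the snippet below minimizes $\sigma_p/(\mu_p - R_0)$ over long-only, fully invested portfolios with a general-purpose solver; it assumes that some feasible portfolio has $\mu_p > R_0$ and simply penalizes the region where that fails.

```python
import numpy as np
from scipy.optimize import minimize

def safety_first(mu, Sigma, R0):
    """Minimize sigma_p / (mu_p - R0) subject to full investment
    and no short sales (Roy's problem in its Tchebycheff form)."""
    N = len(mu)

    def objective(w):
        excess = w @ mu - R0
        if excess <= 1e-9:               # crude penalty outside mu_p > R0
            return 1e6
        return np.sqrt(w @ Sigma @ w) / excess

    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    w0 = np.full(N, 1.0 / N)             # start from equal weights
    res = minimize(objective, w0, bounds=[(0.0, 1.0)] * N,
                   constraints=cons, method="SLSQP")
    return res.x
```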
60. For a random variable x with expected value μ and variance $\sigma_x^2$, Tchebycheff’s inequality states that for any positive real number c,
$$P(|x - \mu| > c) \le \frac{\sigma_x^2}{c^2}$$
Applying Tchebycheff’s inequality (and assuming $\mu_p > R_0$, so that $\mu_p - R_0$ is a positive number), we get
$$P(R_p \le R_0) = P(\mu_p - R_p \ge \mu_p - R_0) \le \frac{\sigma_p^2}{(\mu_p - R_0)^2}$$
Semivariance

In his original book, Markowitz proposed the use of semivariance to correct for the fact that variance penalizes overperformance and underperformance equally.61 When receiving his Nobel Prize in Economic Science, Markowitz stated that “… it can further help evaluate the adequacy of mean and variance, or alternative practical measures, as criteria.” Furthermore, he added: “Perhaps some other measure of portfolio risk will serve in a two parameter analysis. … Semivariance seems more plausible than variance as a measure of risk, since it is concerned only with adverse deviations.”62 The portfolio semivariance is defined as

$$\sigma^2_{p,\min} = E\left(\min\left(\sum_{i=1}^{N} w_i R_i - \sum_{i=1}^{N} w_i \mu_i,\; 0\right)^2\right)$$
where

$$R_p = \sum_{i=1}^{N} w_i R_i$$
$R_p$ is the portfolio return, and $R_i$ and $\mu_i$ are the return and the expected return on asset i, respectively. Jin, Markowitz, and Zhou provide some of the theoretical properties of the mean-semivariance approach in both the single-period and the continuous-time setting.63 A generalization of the semivariance is provided by the lower partial moment risk measure, which we discuss in the next subsection.

Lower Partial Moment

The lower partial moment risk measure provides a natural generalization of the semivariance described previously (see, for example, Bawa64 and Fishburn65). The lower partial moment with power index q and target rate of return $R_0$ is given by
61. Harry Markowitz, Portfolio Selection—Efficient Diversification of Investment (New York: Wiley, 1959).
62. Harry Markowitz, “Foundations of Portfolio Theory,” Journal of Finance, 46 (1991), pp. 469–477.
63. Hanqing Jin, Harry Markowitz, and Xunyu Zhou, “A Note on Semivariance,” Forthcoming in Mathematical Finance.
64. Vijay S. Bawa, “Admissible Portfolio for All Individuals,” Journal of Finance, 31 (1976), pp. 1169–1183.
65. Peter C. Fishburn, “Mean-Risk Analysis with Risk Associated with Below-Target Returns,” American Economic Review, 67 (1977), pp. 116–126.
$$\sigma_{R_p,q,R_0} = \left(E\left(\left|\min(R_p - R_0,\; 0)\right|^q\right)\right)^{1/q}$$

where

$$R_p = \sum_{i=1}^{N} w_i R_i$$
is the portfolio return. The target rate of return $R_0$ is what Roy termed the disaster level.66 We recognize that by setting q = 2 and $R_0$ equal to the expected return, the semivariance is obtained. Fishburn demonstrated that q = 1 represents a risk-neutral investor, whereas 0 < q < 1 and q > 1 correspond to a risk-seeking and a risk-averse investor, respectively.

Value-at-Risk

Probably the best-known downside risk measure is Value-at-Risk (VaR), first developed by JP Morgan and made available through the RiskMetrics™ software in October 1994.67 VaR is related to the percentiles of loss distributions, and measures the predicted maximum loss at a specified probability level (for example, 95%) over a certain time horizon (for example, 10 days). Today VaR is used by most financial institutions to both track and report the market risk exposure of their trading portfolios. Formally, VaR is defined as

$$\text{VaR}_{1-\varepsilon}(R_p) = \min\{R \mid P(-R_p \ge R) \le \varepsilon\}$$

where P denotes the probability function. Typical values for $(1-\varepsilon)$ are 90%, 95%, and 99%.68 Some of the practical and computational issues
66. Roy, “Safety-First and the Holding of Assets.”
67. JP Morgan/Reuters, RiskMetrics™—Technical Document, 4th ed. (New York: Morgan Guaranty Trust Company of New York, 1996). See also http://www.riskmetrics.com.
68. There are several equivalent ways to define VaR mathematically. In this book, we generally use ε to denote small numbers, so the expression
$$\text{VaR}_{1-\varepsilon}(R_p) = \min\{R \mid P(-R_p \ge R) \le \varepsilon\}$$
emphasizes the fact that the $(1-\varepsilon)$-VaR is the value R such that the probability that the possible portfolio loss $(-R_p)$ exceeds R is at most some small number ε such as 1%, 5%, or 10%. For example, the 95% VaR for a portfolio is the value R such that the probability that the possible portfolio loss exceeds R is less than ε = 5%. An alternative, equivalent way to define $(1-\varepsilon)$-VaR is as the value R such that the probability that the maximum portfolio loss $(-R_p)$ is at most R is at least some large number $(1-\varepsilon)$ such as 99%, 95%, or 90%. Mathematically, this can be expressed as
$$\text{VaR}_{1-\varepsilon}(R_p) = \min\{R \mid P(-R_p \le R) \ge 1-\varepsilon\}$$
In some standard references, the parameter α is used to denote the “large” probability in the VaR definition, such as 99%, 95%, or 90%. VaR is then referred to as α-VaR, and is defined as
$$\text{VaR}_{\alpha}(R_p) = \min\{R \mid P(-R_p \le R) \ge \alpha\}$$
Notice that there is no mathematical difference between the VaR definitions that involve α or ε, because α is in fact $(1-\varepsilon)$. In this book, we prefer using ε instead of α in the VaR definition so as to avoid confusion with the term alpha used in asset return estimation.
related to using VaR are discussed in Alexander and Baptista,69 Gaivoronski and Pflug,70 and Mittnik, Rachev, and Schwartz.71 Chow and Kritzman discuss the use of VaR in formulating risk budgets, and provide an intuitive method for converting efficient portfolio allocations into Value-at-Risk assignments.72 In a subsequent article, they discuss some of the problems with the simplest approach to computing the VaR of a portfolio.73 In particular, the common assumption that the portfolio itself is lognormally distributed can be somewhat problematic, especially for portfolios that contain both long and short positions.

VaR also has several undesirable properties as a risk measure.74 First, it is not subadditive, so the risk as measured by the VaR of a portfolio of two funds may be higher than the sum of the risks of the two individual portfolios. In other words, for VaR it does not hold that $\rho(R_1 + R_2) \le \rho(R_1) + \rho(R_2)$ for all returns $R_1$, $R_2$. The subadditivity property is the mathematical description of the diversification effect. It is unreasonable to think that a more diversified portfolio would have higher risk, so nonsubadditive risk measures are undesirable.

69. Gordon J. Alexander and Alexandre M. Baptista, “Economic Implications of Using a Mean-VaR Model for Portfolio Selection: A Comparison with Mean-Variance Analysis,” Journal of Economic Dynamics and Control, 26 (2002), pp. 1159–1193.
70. Alexei A. Gaivoronski and Georg Pflug, “Value-at-Risk in Portfolio Optimization: Properties and Computational Approach,” Journal of Risk, 7 (2005), pp. 1–31.
71. Stefan Mittnik, Svetlotzar Rachev, and Eduardo Schwartz, “Value At-Risk and Asset Allocation with Stable Return Distributions,” Allgemeines Statistisches Archiv, 86 (2003), pp. 53–67.
72. George Chow and Mark Kritzman, “Risk Budgets—Converting Mean-Variance Optimization into VaR Assignments,” Journal of Portfolio Management, 27 (2001), pp. 56–60.
73. George Chow and Mark Kritzman, “Value at Risk for Portfolios with Short Positions,” Journal of Portfolio Management, 28 (2002), pp. 73–81.
74. Hans Rau-Bredow, “Value-at-Risk, Expected Shortfall and Marginal Risk Contribution,” in Giorgio Szegö (ed.), Risk Measures for the 21st Century (Chichester: John Wiley & Sons, 2004), pp. 61–68.
Second, when VaR is calculated from generated scenarios, it is a nonsmooth and nonconvex function of the portfolio holdings. As a consequence, the VaR function has multiple stationary points, making it computationally difficult and time-consuming to find the global optimum in the portfolio allocation optimization.75 Third, VaR does not take into account the magnitude of the losses beyond the VaR value. For example, it is very unlikely that an investor will be indifferent between two portfolios with identical expected return and VaR when the return distribution of one portfolio has a short left tail and the other has a long left tail. These undesirable features motivated the development of Conditional Value-at-Risk, which we discuss next.

Conditional Value-at-Risk

The deficiencies of Value-at-Risk led Artzner et al. to propose a set of desirable properties for a risk measure.76 They called risk measures satisfying these properties coherent risk measures.77 Conditional Value-at-Risk (CVaR) is a coherent risk measure defined by the formula
75. For some possible remedies and fixes to this problem, see Henk Grootveld and Winfried G. Hallerbach, “Upgrading Value-at-Risk from Diagnostic Metric to Decision Variable: A Wise Thing to Do?” in Risk Measures for the 21st Century, pp. 33–50. We will discuss computational issues with portfolio VaR optimization in more detail in Chapters 13 and 19.
76. Philippe Artzner, Freddy Delbaen, Jean-Marc Eber, and David Heath, “Coherent Measures of Risk,” Mathematical Finance, 9 (1999), pp. 203–228.
77. A risk measure ρ is called a coherent measure of risk if it satisfies the following properties:
1. Monotonicity. If X ≥ 0, then ρ(X) ≤ 0.
2. Subadditivity. ρ(X + Y) ≤ ρ(X) + ρ(Y).
3. Positive homogeneity. For any positive real number c, ρ(cX) = cρ(X).
4. Translational invariance. For any real number c, ρ(X + c) = ρ(X) − c.
where X and Y are random variables. In words, these properties can be interpreted as: (1) if there are only positive returns, then the risk should be nonpositive; (2) the risk of a portfolio of two assets should be less than or equal to the sum of the risks of the individual assets; (3) if the portfolio is increased c times, the risk becomes c times larger; and (4) cash or another risk-free asset does not contribute to portfolio risk. Interestingly, standard deviation, a very popular risk measure, is not coherent—it violates the monotonicity property. It does, however, satisfy subadditivity, which is considered one of the most important properties. The four properties required for coherence are actually quite restrictive: taken together, they rule out a number of other popular risk measures as well. For example, semideviation-type risk measures violate the subadditivity condition.
$$\text{CVaR}_{1-\varepsilon}(R_p) = E\left(-R_p \mid -R_p \ge \text{VaR}_{1-\varepsilon}(R_p)\right)$$

Therefore, CVaR measures the expected amount of losses in the tail of the distribution of possible portfolio losses, beyond the portfolio VaR. In the literature, this risk measure is also referred to as expected shortfall,78 expected tail loss (ETL), and tail VaR. As with VaR, the most commonly considered values for $(1-\varepsilon)$ are 90%, 95%, and 99%.
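The scenario-based sketch below estimates several of the downside measures discussed in this section from a vector of simulated or historical portfolio returns. Conventions differ in practice (for example, how the quantile is interpolated and whether the tail includes the VaR point itself), so this is one reasonable set of choices rather than the definition.

```python
import numpy as np

def downside_measures(rp, eps=0.05, r0=0.0, q=2):
    """Historical VaR and CVaR at level 1 - eps, semivariance, and the
    lower partial moment of order q with target r0, for returns rp."""
    rp = np.asarray(rp, dtype=float)
    losses = -rp
    var = np.quantile(losses, 1.0 - eps)          # (1 - eps)-VaR of the losses
    cvar = losses[losses >= var].mean()           # mean loss beyond (and at) VaR
    semivar = np.mean(np.minimum(rp - rp.mean(), 0.0) ** 2)
    lpm = np.mean(np.abs(np.minimum(rp - r0, 0.0)) ** q) ** (1.0 / q)
    return var, cvar, semivar, lpm
```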
Mean-CVaR Optimization

In this subsection we develop a model for mean-CVaR optimization. Here, the objective is to maximize expected return subject to the constraint that portfolio risk, measured by CVaR, is no larger than some given value. Mathematically, we can express this as

$$\max_w \; \mu'w$$
subject to $\text{CVaR}_{1-\varepsilon}(w) \le c_0$, along with any other constraints on w (represented by $w \in C_w$), where μ represents the vector of expected returns and $c_0$ is a constant denoting the required level of risk.

Before we formulate the mean-CVaR optimization problem, we need some useful mathematical properties of the CVaR measure. To this end, let us denote by w the N-dimensional portfolio vector such that each component $w_i$ equals the number of shares held in asset i. Furthermore, we denote by y a random vector describing the uncertain outcomes (also referred to as market variables) of the economy. We let the function f(w, y) (also referred to as the loss function) represent the loss associated with the portfolio vector w. Note that for each w the loss function f(w, y) is a one-dimensional random variable. We let p(y) be the probability associated with scenario y. Now, assuming that all random values are discrete, the probability that the loss function does not exceed a certain value γ is given by the cumulative probability

$$\Psi(w, \gamma) = \sum_{\{y \,\mid\, f(w,y) \le \gamma\}} p(y)$$
78. Strictly speaking, expected shortfall is defined in a different way, but is shown to be equivalent to CVaR (see Carlo Acerbi and Dirk Tasche, “On the Coherence of Expected Shortfall,” Journal of Banking and Finance, 26 (2002), pp. 1487–1503).
Using this cumulative probability, we see that

$$\text{VaR}_{1-\varepsilon}(w) = \min\{\gamma \mid \Psi(w,\gamma) \ge 1-\varepsilon\}$$

Since the CVaR of the losses of portfolio w is the expected value of the losses conditioned on the losses being in excess of VaR, we have

$$\text{CVaR}_{1-\varepsilon}(w) = E\left(f(w,y) \mid f(w,y) > \text{VaR}_{1-\varepsilon}(w)\right) = \frac{\displaystyle\sum_{\{y \mid f(w,y) > \text{VaR}_{1-\varepsilon}(w)\}} p(y)\, f(w,y)}{\displaystyle\sum_{\{y \mid f(w,y) > \text{VaR}_{1-\varepsilon}(w)\}} p(y)}$$
The continuous equivalents of these formulas are

$$\Psi(w, \gamma) = \int_{f(w,y) \le \gamma} p(y)\,dy$$

$$\text{VaR}_{1-\varepsilon}(w) = \min\{\gamma \mid \Psi(w,\gamma) \ge 1-\varepsilon\}$$

$$\text{CVaR}_{1-\varepsilon}(w) = E\left(f(w,y) \mid f(w,y) \ge \text{VaR}_{1-\varepsilon}(w)\right) = \varepsilon^{-1} \int_{f(w,y) \ge \text{VaR}_{1-\varepsilon}(w)} f(w,y)\, p(y)\,dy$$
We note that in the continuous case, $\Psi(w,\gamma) = 1-\varepsilon$ at $\gamma = \text{VaR}_{1-\varepsilon}(w)$, and therefore the denominator

$$\sum_{\{y \mid f(w,y) > \text{VaR}_{1-\varepsilon}(w)\}} p(y)$$

in the discrete version of CVaR becomes ε in the continuous case. Moreover, we see that

$$\text{CVaR}_{1-\varepsilon}(w) = \varepsilon^{-1} \int_{f(w,y) \ge \text{VaR}_{1-\varepsilon}(w)} f(w,y)\, p(y)\,dy \ge \varepsilon^{-1} \int_{f(w,y) \ge \text{VaR}_{1-\varepsilon}(w)} \text{VaR}_{1-\varepsilon}(w)\, p(y)\,dy = \text{VaR}_{1-\varepsilon}(w)$$

because

$$\varepsilon^{-1} \int_{f(w,y) \ge \text{VaR}_{1-\varepsilon}(w)} p(y)\,dy = 1$$
In other words, CVaR is always at least as large as VaR; and, as we mentioned above, CVaR is a coherent risk measure, whereas VaR is not. It can also be shown that CVaR is a convex function of the portfolio holdings and, therefore, that any local minimum is a global minimum. However, working directly with the above formulas turns out to be somewhat tricky in practice, as they involve the VaR function (except for those rare cases when one has an analytical expression for VaR). Fortunately, a simpler approach was discovered by Rockafellar and Uryasev.79 Their idea is that the function

$$F_\varepsilon(w, \xi) = \xi + \varepsilon^{-1} \int_{f(w,y) \ge \xi} \left(f(w,y) - \xi\right) p(y)\,dy$$
can be used instead of CVaR. Specifically, they proved the following three important properties:

Property 1. $F_\varepsilon(w,\xi)$ is a convex and continuously differentiable function in ξ.
Property 2. $\text{VaR}_{1-\varepsilon}(w)$ is a minimizer of $F_\varepsilon(w,\xi)$ with respect to ξ.
Property 3. The minimum value of $F_\varepsilon(w,\xi)$ is $\text{CVaR}_{1-\varepsilon}(w)$.

In particular, we can find the optimal value of $\text{CVaR}_{1-\varepsilon}(w)$ by solving the optimization problem

$$\min_{w,\xi} \; F_\varepsilon(w, \xi)$$
Consequently, if we denote by $(w^*, \xi^*)$ the solution to this optimization problem, then $F_\varepsilon(w^*, \xi^*)$ is the optimal CVaR. In addition, the optimal portfolio is given by $w^*$ and the corresponding VaR is given by $\xi^*$. In other words, in this fashion we can compute the optimal CVaR without first calculating VaR. In practice, the probability density function p(y) is often not available, or is very difficult to estimate. Instead, we might have T different scenarios $Y = \{y_1, \ldots, y_T\}$ that are sampled from the probability distribution or that have been obtained from computer simulations. Evaluating the auxiliary function $F_\varepsilon(w, \xi)$ using the scenarios Y, we obtain
$$F^Y_\varepsilon(w, \xi) = \xi + \varepsilon^{-1} T^{-1} \sum_{i=1}^{T} \max\left(f(w, y_i) - \xi,\; 0\right)$$
79. See Stanislav Uryasev, “Conditional Value-at-Risk: Optimization Algorithms and Applications,” Financial Engineering News, 14 (2000), pp. 1–5; and R. Tyrrell Rockafellar and Stanislav Uryasev, “Optimization of Conditional Value-at-Risk,” Journal of Risk, 2 (2000), pp. 21–41.
Therefore, in this case the optimization problem

$$\min_w \; \text{CVaR}_{1-\varepsilon}(w)$$

takes the form

$$\min_{w,\xi} \; \xi + \varepsilon^{-1} T^{-1} \sum_{i=1}^{T} \max\left(f(w, y_i) - \xi,\; 0\right)$$

Replacing $\max(f(w, y_i) - \xi,\; 0)$ by the auxiliary variables $z_i$ along with appropriate constraints, we obtain the equivalent optimization problem

$$\min_{w,\xi} \; \xi + \varepsilon^{-1} T^{-1} \sum_{i=1}^{T} z_i$$
subject to

$$z_i \ge 0, \quad i = 1, \ldots, T$$
$$z_i \ge f(w, y_i) - \xi, \quad i = 1, \ldots, T$$

along with any other constraints on w, such as no short-selling constraints or any of the constraints we discussed previously in this chapter. Under the assumption that f(w, y) is linear in w,80 the above optimization problem is linear and can therefore be solved very efficiently by standard linear programming techniques.81

The formulation discussed previously can be seen as an extension of the calculation of the global minimum variance (GMV) portfolio, and can be used as an alternative when the underlying asset return distribution is asymmetric and exhibits fat tails. Moreover, the representation of CVaR given by the auxiliary function $F_\varepsilon(w, \xi)$ can be used in the construction of other portfolio optimization problems. For example, the mean-CVaR optimization problem
80. This is typically the case, as the loss function in the discrete case is chosen to be
$$f(w, y) = -\sum_{i=1}^{N} w_i (y_i - x_i)$$
where $x_i$ is the current price of security i.
81. See, for example, Chapter 9 in Frank J. Fabozzi, Sergio M. Focardi, Petter N. Kolm, and Dessislava Pachamanova, Robust Portfolio Optimization and Management (Hoboken, NJ: John Wiley & Sons, 2007) for a discussion of numerical optimization techniques used in portfolio management.
$$\max_{w}\; \mu' w$$

subject to

$$\mathrm{CVaR}_{1-\varepsilon}(w) \le c_0$$

along with any other constraints on w (represented by w ∈ Cw), where μ represents the vector of expected returns and c0 is a constant denoting the required level of risk, would result in the following approximation:

$$\max_{w,\,\xi,\,z}\; \mu' w$$

subject to

$$\xi + \varepsilon^{-1} T^{-1} \sum_{i=1}^{T} z_i \le c_0$$
$$z_i \ge 0, \quad i = 1, \dots, T$$
$$z_i \ge f(w, y_i) - \xi, \quad i = 1, \dots, T$$
$$w \in C_w$$
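Because the scenario formulation above is a linear program, it can be handed directly to an off-the-shelf LP solver. The following minimal Python sketch, assuming SciPy is available, treats the scenarios as simple return vectors so that the loss is f(w, y) = −w′y (a special case of the loss function in footnote 80); the function name, the no-short-selling and budget constraints, and the default parameter values are our own illustrative assumptions, not part of the original formulation.

```python
import numpy as np
from scipy.optimize import linprog

def mean_cvar_portfolio(R, eps=0.05, c0=0.07):
    """Maximize mu'w subject to CVaR_{1-eps}(w) <= c0, using the
    scenario-based LP above. Decision vector x = [w (N), xi (1), z (T)];
    the loss in scenario i is f(w, y_i) = -w'y_i.

    R   : (T, N) array of return scenarios (sampled or simulated)
    eps : tail probability, e.g., 0.05 for 95% CVaR
    c0  : required level of risk (maximum allowed CVaR)
    """
    T, N = R.shape
    mu = R.mean(axis=0)  # here: sample means stand in for the forecasts

    # Objective: maximize mu'w, i.e., minimize -mu'w (xi and z get zero cost)
    c = np.concatenate([-mu, [0.0], np.zeros(T)])

    # z_i >= f(w, y_i) - xi  <=>  -R w - xi - z <= 0   (one row per scenario)
    A_scen = np.hstack([-R, -np.ones((T, 1)), -np.eye(T)])
    # CVaR constraint: xi + (eps*T)^{-1} * sum_i z_i <= c0
    a_cvar = np.concatenate([np.zeros(N), [1.0], np.full(T, 1.0 / (eps * T))])
    A_ub = np.vstack([A_scen, a_cvar[None, :]])
    b_ub = np.concatenate([np.zeros(T), [c0]])

    # Budget constraint: the portfolio weights sum to one
    A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(T)])[None, :]
    b_eq = np.array([1.0])

    # Bounds: w >= 0 (no short selling), xi free, z >= 0
    bounds = [(0, None)] * N + [(None, None)] + [(0, None)] * T

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    w, xi = res.x[:N], res.x[N]
    return w, xi  # optimal weights and the corresponding VaR (Property 2)
```

Note that, by Property 2, the optimal ξ returned by the solver is an estimate of the portfolio's VaR, so the LP delivers both risk numbers in a single optimization.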
To illustrate the mean-CVaR optimization approach, we discuss an example from Palmquist, Uryasev, and Krokhmal.82 They considered two-week returns for all the stocks in the S&P 100 Index over the period July 1, 1997 to July 8, 1999 for scenario generation. Optimal portfolios were constructed by solving the preceding mean-CVaR optimization problem for a two-week horizon for different levels of confidence. In Exhibit 8.3 we see three different mean-CVaR efficient frontiers corresponding to (1 – ε) = 90%, 95%, and 99%. The two-week rate of return is calculated as the ratio of the optimized portfolio value divided by the initial value, and the risk is calculated as the percentage of the initial portfolio value that is allowed to be put at risk. In other words, when the risk is 7% and (1 – ε) is 95%, this means that we allow for no more than a 7% loss of the initial value of the portfolio with a probability of 5%. We observe from the exhibit that as the confidence level (1 – ε) decreases (i.e., as the tail probability ε increases and the CVaR constraint becomes less binding), the rate of return increases.

It can be shown that for a normally distributed loss function, the mean-variance and the mean-CVaR frameworks generate the same efficient frontier. However, when distributions are nonnormal these two approaches are
82 Pavlo Krokhmal, Jonas Palmquist, and Stanislav Uryasev, “Portfolio Optimization with Conditional Value-At-Risk Objective and Constraints,” Journal of Risk, 4 (2002), pp. 11–27.
EXHIBIT 8.3 Efficient Frontiers of Different Mean-CVaR Portfolios
[Figure: rate of return (%) versus risk (%) for confidence levels (1 – ε) = 0.90, 0.95, and 0.99]
Source: Pavlo Krokhmal, Jonas Palmquist, and Stanislav Uryasev, “Portfolio Optimization with Conditional Value-At-Risk Objective and Constraints,” The Journal of Risk 4, no. 2 (2002), p. 21. This copyrighted material is reprinted with permission from Incisive Media Plc, Haymarket House, 28-29 Haymarket, London, SW1Y 4RX, United Kingdom.
significantly different. On the one hand, in the mean-variance approach risk is defined by the variance of the loss distribution, and because the variance incorporates information from both the left and the right tail of the distribution, gains and losses contribute equally to the risk. On the other hand, the mean-CVaR methodology only involves the part of the tail of the distribution that contributes to high losses. In Exhibit 8.4 we can see a comparison between the two approaches for (1 – ε) = 95%. The same data set is used as in the previous illustration. We note that in return/CVaR coordinates, as expected, the mean-CVaR efficient frontier lies above the mean-variance efficient frontier. In this particular example, the two efficient frontiers are close to each other and are similarly shaped. Yet with the inclusion of derivative assets such as options and credit derivatives, this would no longer be the case.83

83 Nicklas Larsen, Helmut Mausser, and Stanislav Uryasev, “Algorithms for Optimization of Value-at-Risk,” in P. Pardalos and V. K. Tsitsiringos (eds.), Financial Engineering, e-commerce and Supply Chain (Boston: Kluwer Academic Publishers, 2002), pp. 129–157.
EXHIBIT 8.4 Comparison of Mean-CVaR95% and Mean-Variance Efficient Portfolios
[Figure: rate of return (%) versus CVaR for the mean-CVaR portfolio and the MV portfolio]
Source: Pavlo Krokhmal, Jonas Palmquist, and Stanislav Uryasev, “Portfolio Optimization with Conditional Value-At-Risk Objective and Constraints,” The Journal of Risk 4, no. 2 (2002), p. 23. This copyrighted material is reprinted with permission from Incisive Media Plc, Haymarket House, 28-29 Haymarket, London, SW1Y 4RX, United Kingdom.
SUMMARY

■ Markowitz quantified the concept of diversification through the statistical notion of covariance between individual securities and the overall standard deviation of a portfolio.
■ The basic assumption behind modern portfolio theory is that an investor's preferences can be represented by a function (utility function) of the expected return and the variance of a portfolio.
■ The basic principle underlying modern portfolio theory is that for a given level of expected return a rational investor would choose the portfolio with minimum variance from among the set of all possible portfolios. We presented three equivalent formulations: (1) the minimum variance formulation; (2) the expected return maximization formulation; and (3) the risk aversion formulation.
■ Minimum variance portfolios are called mean-variance efficient portfolios. The set of all mean-variance efficient portfolios is called the efficient frontier. The efficient frontier with only risky assets has a parabolic shape in the expected return/standard deviation coordinate system.
■ The portfolio on the efficient frontier with the smallest variance is called the global minimum variance portfolio.
■ The mean-variance problem results in an optimization problem referred to as a quadratic program.
■ With the addition of a risk-free asset, the efficient frontier becomes a straight line in the expected return/standard deviation coordinate system. This line is called the capital market line. The tangency point of the efficient frontier with only risky assets and the capital market line is called the tangency portfolio.
■ The market portfolio is the portfolio that consists of all assets available to investors in the same proportion as each security's market value divided by the total market value of all securities. Under certain assumptions it can be shown that the tangency portfolio is the same as the market portfolio.
■ The excess expected return of the market portfolio (the expected return of the market portfolio minus the risk-free rate) divided by the standard deviation of the market portfolio is referred to as the equilibrium market price of risk.
■ The capital market line expresses that the expected return on a portfolio is equal to the risk-free rate plus a portfolio-specific risk premium. The portfolio-specific risk premium is the market price of risk multiplied by the risk (standard deviation) of the portfolio.
■ Some of the most common constraints used in practice are no short-selling constraints, turnover constraints, maximum holding constraints, and tracking error constraints. These constraints can be handled in a straightforward way by the same type of optimization algorithms used for solving the mean-variance problem.
■ Integer constraints or constraints of a combinatorial nature are more difficult to handle and require more specialized optimization algorithms. Some examples of these types of constraints are minimum holding constraints, transaction size constraints, cardinality constraints (number of securities permitted in the portfolio), and round lot constraints.
■ The sample means and covariances of financial return series are easy to calculate, but may exhibit significant estimation errors.
■ Serial correlation or autocorrelation is the correlation of the return of a security with itself over successive time intervals.
■ Heteroskedasticity means that variances/covariances are not constant but change over time.
■ In practical applications, it is important to correct the covariance estimator for serial correlation and heteroskedasticity.
■ The sample covariance estimator can be improved by increasing the sampling frequency. This is not the case for the sample expected return estimator, whose accuracy can only be improved by extending the length of the sample.
■ The mean-variance framework only takes the first two moments, the mean and the variance, into account. When investors have preferences beyond the first two moments, it is desirable to extend the mean-variance framework to include higher moments.
■ Two different types of risk measures can be distinguished: dispersion and downside measures. Dispersion measures are measures of uncertainty. In contrast to downside measures, dispersion measures consider both positive and negative deviations from the mean, and treat those deviations as equally risky.
■ Some common portfolio dispersion approaches are mean-standard deviation, mean-variance, mean absolute deviation, and mean absolute moment.
■ Some common portfolio downside measures are Roy's safety-first, semivariance, lower partial moment, Value-at-Risk, and Conditional Value-at-Risk.
CHAPTER 9

Portfolio Optimization: Bayesian Techniques and the Black-Litterman Model

Investment policies constructed using inferior estimates, such as sample means and sample covariance matrices, typically perform very poorly in practice. Besides introducing spurious changes in portfolio weights each time the portfolio is rebalanced, this undesirable property also results in unnecessary turnover and increased transaction costs. These phenomena are not necessarily a sign that portfolio optimization does not work, but rather that the modern portfolio theory framework is very sensitive to the accuracy of inputs. There are different ways to address this issue. On the estimation side, one can try to produce more robust estimates of the input parameters for the optimization problems. This is most often achieved by using estimators that are less sensitive to outliers, and possibly other sampling errors, such as Bayesian and shrinkage estimators. On the modeling side, one can constrain portfolio weights, use portfolio resampling, or apply robust or stochastic optimization techniques to specify scenarios or ranges of values for parameters estimated from data, thus incorporating uncertainty into the optimization process itself.1

The outline of the chapter is as follows. First, we provide a general overview of some of the common problems encountered in mean-variance optimization before we turn our attention to shrinkage estimators for expected returns and the covariance matrix. Within the context of Bayesian estimation, we focus on the Black-Litterman model. We derive the model using so-called mixed estimation from classical econometrics. Introducing a simple cross-sectional momentum strategy, we then show how we can combine this strategy with market equilibrium using the Black-Litterman model in the mean-variance framework to rebalance the portfolio on a monthly basis.

1 Interestingly, some new results suggest that the two approaches are not necessarily disjoint, and in some cases may lead to the same end result; see Bernd Scherer, “How Different Is Robust Optimization Really?” Journal of Asset Management, 7 (2007), pp. 374–387.
PRACTICAL PROBLEMS ENCOUNTERED IN MEAN-VARIANCE OPTIMIZATION

The simplicity and the intuitive appeal of portfolio construction using modern portfolio theory have attracted significant attention both in academia and in practice. Yet, despite considerable effort, it took many years until portfolio managers started using modern portfolio theory for managing real money. Unfortunately, in real-world applications there are many problems with it, and portfolio optimization is still considered by many practitioners to be difficult to apply. In this section we consider some of the typical problems encountered in mean-variance optimization. In particular, we elaborate on: (1) the sensitivity to estimation error; (2) the effects of uncertainty in the inputs in the optimization process; and (3) the large data requirements necessary for accurately estimating the inputs for the portfolio optimization framework. We start by considering an example illustrating the effect of estimation error.
Example: The True, Estimated, and Actual Efficient Frontiers

Broadie introduced the terms true frontier, estimated frontier, and actual frontier to refer to the efficient frontiers computed using the true expected returns (unobservable), the estimated expected returns, and the true expected returns of the portfolios on the estimated frontier, respectively.2 In this example, we refer to the frontier computed using the true, but unknown, expected returns as the true frontier. Similarly, we refer to the frontier computed using estimates of the expected returns and the true covariance matrix as the estimated frontier. Finally, we define the actual frontier as follows: we take the portfolios on the estimated frontier and then calculate their expected returns using the true expected returns. Since we are using the true covariance matrix, the variance of a portfolio on the estimated frontier is the same as the variance on the actual frontier. From these definitions, we observe that the actual frontier will always lie below the true frontier. The estimated frontier can lie anywhere with respect to the other frontiers. However, if the errors in the expected return estimates have a mean of zero, then the estimated frontier will lie above the true

2 Mark Broadie, “Computing Efficient Frontiers Using Estimated Parameters,” Annals of Operations Research: Special Issue on Financial Engineering 45, nos. 1–4 (December 1993), pp. 21–58.
frontier with extremely high probability, particularly when the investment universe is large. We look at two cases considered by Ceria and Stubbs:3

1. Using the covariance matrix and expected return vector from Idzorek,4 they randomly generate a time series of normally distributed returns and compute the average to use as estimates of expected returns. Using the expected-return estimate calculated in this fashion and the true covariance matrix, they generate an estimated efficient frontier of risk versus expected return where the portfolios were subject to no-shorting constraints and the standard budget constraint that the sum of portfolio weights is one. Similarly, Ceria and Stubbs compute the true efficient frontier using the original covariance matrix and expected return vector. Finally, they construct the actual frontier by computing the expected return and risk of the portfolios on the estimated frontier with the true covariance and expected return values. These three frontiers are illustrated in Exhibit 9.1.

2. Using the same estimate of expected returns, Ceria and Stubbs also generate frontiers of risk versus expected return where the active holdings of the assets are constrained to be within ±3% of the benchmark holding of each asset. These frontiers are illustrated in Exhibit 9.2.

We observe that the estimated frontiers significantly overestimate the expected return for any risk level in both types of frontiers. More importantly, we note that the actual frontier lies far below the true frontier in both cases. This shows that the optimal mean-variance portfolio is not necessarily a good portfolio; that is, it is not mean-variance efficient. Since the true expected return is not observable, we do not know how far the actual expected return may be from the expected return of the mean-variance optimal portfolio, and we end up holding an inferior portfolio.
3 We are grateful to Axioma Inc. for providing us with this example. It previously appeared in Sebastian Ceria and Robert A. Stubbs, “Incorporating Estimation Errors into Portfolio Selection: Robust Portfolio Construction,” Axioma, Inc., 2005.
4 Thomas M. Idzorek, “A Step-By-Step Guide to the Black-Litterman Model: Incorporating User-Specified Confidence Levels,” Research Paper, Ibbotson Associates, Chicago, 2005.
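A small simulation in the spirit of Broadie's experiment makes the gap between the estimated and actual expected returns concrete. The sketch below is a deliberate simplification under our own assumptions: it uses the unconstrained risk-aversion formulation, w* = (1/λ)Σ⁻¹μ̂, rather than the constrained frontiers of Ceria and Stubbs, and the function name and defaults are illustrative.

```python
import numpy as np

def estimated_vs_actual_return(mu_true, sigma, T, lam=2.0, seed=0):
    """Estimate expected returns from T simulated observations, form the
    optimal mean-variance portfolio from those estimates, and compare the
    portfolio's estimated expected return with its actual expected return
    (computed with the true, normally unobservable, means)."""
    rng = np.random.default_rng(seed)
    sample = rng.multivariate_normal(mu_true, sigma, size=T)
    mu_hat = sample.mean(axis=0)              # estimated expected returns

    # Unconstrained risk-aversion formulation: w* = (1/lam) Sigma^{-1} mu_hat
    w = np.linalg.solve(lam * sigma, mu_hat)

    return mu_hat @ w, mu_true @ w            # estimated vs. actual return
```

Averaged over many seeds, the first number systematically exceeds the second, which is exactly the overestimation visible in Exhibits 9.1 and 9.2.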
EXHIBIT 9.1 Markowitz Efficient Frontiers
[Figure: expected return (%) versus risk (%) for the estimated Markowitz frontier, the true frontier, and the actual Markowitz frontier]
Source: Figure 2 in Sebastian Ceria and Robert A. Stubbs, “Incorporating Estimation Errors into Portfolio Selection: Robust Portfolio Construction,” Axioma, Inc., 2005, p. 6. This figure is reprinted with the permission of Axioma, Inc.
EXHIBIT 9.2 Markowitz Benchmark-Relative Efficient Frontiers
[Figure: expected return (%) versus risk (%) for the estimated Markowitz frontier, the true frontier, and the actual Markowitz frontier]
Source: Figure 3 in Sebastian Ceria and Robert A. Stubbs, “Incorporating Estimation Errors into Portfolio Selection: Robust Portfolio Construction,” Axioma, Inc., 2005, p. 7. This figure is reprinted with the permission of Axioma, Inc.
Sensitivity to Estimation Error

In a portfolio optimization context, securities with large expected returns and low standard deviations will be overweighted and, conversely, securities with low expected returns and high standard deviations will be underweighted. Therefore, large estimation errors in expected returns and/or variances/covariances introduce errors in the optimized portfolio weights. For this reason, people often cynically refer to optimizers as error maximizers.

Uncertainty from estimation error in expected returns tends to have more influence than uncertainty in the covariance matrix in a mean-variance optimization.5 The relative importance depends on the investor's risk aversion, but as a general rule of thumb, errors in the expected returns are about ten times more important than errors in the covariance matrix, and errors in the variances are about twice as important as errors in the covariances.6 As the risk tolerance increases, the relative impact of estimation errors in the expected returns becomes even more important. Conversely, as the risk tolerance decreases, the impact of errors in expected returns relative to errors in the covariance matrix becomes smaller. From this simple rule, it follows that the major focus should be on providing good estimates for the expected returns, followed by the variances. In this chapter we discuss shrinkage techniques and the Black-Litterman model in order to mitigate estimation errors.

Constraining Portfolio Weights

Several studies have shown that the inclusion of constraints in the mean-variance optimization problem leads to better out-of-sample performance.7 Practitioners often use no short-selling constraints or upper and lower bounds for each security to avoid overconcentration in a few assets. Gupta and Eichhorn suggest that constraining portfolio weights may also help contain volatility, increase realized efficiency, and decrease downside risk or shortfall probability.8

5 See Michael J. Best and Robert R. Grauer, “The Analytics of Sensitivity Analysis for Mean-Variance Portfolio Problems,” International Review of Financial Analysis, 1 (1992), pp. 17–37; and Michael J. Best and Robert R. Grauer, “On the Sensitivity of Mean-Variance-Efficient Portfolios to Changes in Assets Means: Some Analytical and Computational Results,” Review of Financial Studies, 4 (1991), pp. 315–342.
6 Vijay K. Chopra and William T. Ziemba, “The Effect of Errors in Means, Variances, and Covariances on Optimal Portfolio Choice,” Journal of Portfolio Management, 19 (1993), pp. 6–11; and Jarl G. Kallberg and William T. Ziemba, “Misspecification in Portfolio Selection Problems,” in G. Bamberg and K. Spremann (eds.), Risk and Capital: Lecture Notes in Economics and Mathematical Systems (New York: Springer, 1984).
7 See, for example, Peter A. Frost and James E. Savarino, “For Better Performance: Constrain Portfolio Weights,” Journal of Portfolio Management, 15 (1988), pp. 29–34; Vijay K. Chopra, “Mean-Variance Revisited: Near-Optimal Portfolios and Sensitivity to Input Variations,” Russell Research Commentary, December 1991; and Robert R. Grauer and Frederick C. Shen, “Do Constraints Improve Portfolio Performance?” Journal of Banking and Finance, 24 (2000), pp. 1253–1274.
8 Francis Gupta and David Eichhorn, “Mean-Variance Optimization for Practitioners of Asset Allocation,” Chapter 4 in Frank J. Fabozzi (ed.), Handbook of Portfolio Management (Hoboken, NJ: John Wiley & Sons, 1998).
Jagannathan and Ma provide a theoretical justification for these observations.9 Specifically, they show that no short-selling constraints are equivalent to reducing the estimated asset covariances, whereas upper bounds are equivalent to increasing the corresponding covariances. For example, stocks that have high covariance with other stocks tend to receive negative portfolio weights. Therefore, when their covariance is decreased (which is equivalent to the effect of imposing no short-selling constraints), these negative weights disappear. Similarly, stocks that have low covariances with other stocks tend to get overweighted. Hence, by increasing the corresponding covariances the impact of these overweighted stocks decreases. Furthermore, Monte Carlo experiments performed by Jagannathan and Ma indicate that when no short-selling constraints are imposed, the sample covariance matrix has about the same performance (as measured by the global minimum variance (GMV) portfolio) as a covariance matrix estimator constructed from a factor structure.

Care needs to be taken when imposing constraints for robustness and stability purposes. For example, if the constraints used are too tight, they, and not the forecasts, will completely determine the portfolio allocation. Instead of providing ad hoc upper and lower bounds on each security, one can, as proposed by Bouchaud, Potters, and Aguilar, use so-called diversification indicators that measure the concentration of the portfolio.10 These diversification indicators can be used as constraints in the portfolio construction phase to limit the concentration in individual securities. The authors demonstrate that these indicators are related to the information content of the portfolio in the sense of information theory.11 For example, a very concentrated portfolio corresponds to a large information content (as we would only choose a very concentrated allocation if our information about future price fluctuations is perfect), whereas an equally weighted portfolio would indicate low information content (as we would not put “all the eggs in one basket” if our information about future price fluctuations is poor).

9 Ravi Jagannathan and Tongshu Ma, “Risk Reduction in Large Portfolios: Why Imposing the Wrong Constraints Helps,” Journal of Finance, 58 (2003), pp. 1651–1683.
10 Jean-Philippe Bouchaud, Marc Potters, and Jean-Pierre Aguilar, “Missing Information and Asset Allocation,” Working Paper, Science & Finance, Capital Fund Management, 1997.
11 The relationship to information theory is based upon the premise that the diversification indicators are generalized entropies. See Evaldo M. F. Curado and Constantino Tsallis, “Generalized Statistical Mechanics: Connection with Thermodynamics,” Journal of Physics A: Mathematical and General, 24 (1991), pp. L69–L72.
Importance of Sensitivity Analysis

In practice, in order to minimize dramatic changes due to estimation error, it is advisable to perform sensitivity analysis. For example, starting from an efficient portfolio selected by a mean-variance optimization, one can study the effect of small changes or perturbations to the inputs. If the portfolio calculated from the perturbed inputs differs drastically from the first one, this might indicate a problem. The perturbation can also be performed on a security-by-security basis in order to identify those securities that are the most sensitive (a sketch of such a study follows at the end of this discussion). The objective of this sensitivity analysis is to identify a set of security weights that will be close to efficient under several different sets of plausible inputs.

Issues with Highly Correlated Assets

The inclusion of highly correlated securities is another major cause of instability in the mean-variance optimization framework. For example, high correlation coefficients among common asset classes are one reason why real estate is popular in optimized portfolios. Real estate is one of the few asset classes that has a low correlation with other common asset classes. But real estate in general does not have the liquidity necessary in order to implement these portfolios and may therefore fail to deliver the return promised by the real estate indexes. The problem of high correlations typically becomes worse when the correlation matrix is estimated from historical data. Specifically, when the correlation matrix is estimated over a slightly different period, correlations may change, but the impact on the new portfolio weights may be drastic. In these situations, it may be a good idea to resort to a shrinkage estimator or a factor model to model covariances and correlations.
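Returning to the sensitivity analysis described above, the following sketch shows one way such a perturbation study might be organized. The optimize callable, the multiplicative noise model, and all parameter values are hypothetical choices for illustration; they are not prescribed by the text.

```python
import numpy as np

def perturbation_study(mu, sigma, optimize, n_trials=100, scale=0.1, seed=0):
    """Re-optimize under small random perturbations of the expected returns
    and record, security by security, how far the weights move from the
    baseline allocation. Large entries flag the most sensitive securities.

    optimize : hypothetical callable mapping (mu, sigma) to a weight vector
    scale    : size of the perturbation relative to each expected return
    """
    rng = np.random.default_rng(seed)
    w_base = optimize(mu, sigma)
    max_dev = np.zeros(len(mu))
    for _ in range(n_trials):
        mu_pert = mu * (1.0 + scale * rng.standard_normal(len(mu)))
        w = optimize(mu_pert, sigma)
        max_dev = np.maximum(max_dev, np.abs(w - w_base))
    return max_dev
```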
Incorporating Uncertainty in the Inputs into the Portfolio Allocation Process

In the classical mean-variance optimization problem, the expected returns and the covariance matrix of returns are uncertain and have to be estimated. After the estimation of these quantities, the portfolio optimization problem is solved as a deterministic problem, completely ignoring the uncertainty in the inputs. However, it makes sense for the uncertainty of expected returns and risk to enter into the optimization process, thus creating a more realistic model. Using point estimates of the expected returns and the covariance matrix of returns, and treating them as error-free in portfolio allocation, does not necessarily correspond to prudent investor behavior.
The investor would probably be more comfortable choosing a portfolio that would perform well under a number of different scenarios, thereby also attaining some protection from estimation risk and model risk. Obviously, to have some insurance in the event of less likely but more extreme cases (e.g., scenarios that are highly unlikely under the assumption that returns are normally distributed), the investor must be willing to give up some of the upside that would result under the more likely scenarios. Such an investor seeks a robust portfolio, that is, a portfolio that is assured against some worst-case model misspecification.

The estimation process can be improved through robust statistical techniques such as the shrinkage and Bayesian estimators discussed later in this chapter. However, jointly considering estimation risk and model risk in the financial decision-making process is becoming more important. The estimation process frequently does not deliver a point forecast (that is, one single number), but a full distribution of expected returns. Recent approaches attempt to integrate estimation risk into the mean-variance framework by using the expected return distribution in the optimization. A simple approach is to sample from the return distribution and average the resulting portfolios (Monte Carlo approach).12 However, as a mean-variance problem has to be solved for each draw, this is computationally intensive for larger portfolios. In addition, the averaging does not guarantee that the resulting portfolio weights will satisfy all constraints.

Introduced in the late 1990s by Ben-Tal and Nemirovski13 and El Ghaoui and Lebret,14 the robust optimization framework is computationally more efficient than the Monte Carlo approach. This development in optimization technology allows for efficiently solving the robust version of the mean-variance optimization problem in about the same time as the classical mean-variance optimization problem. The technique explicitly uses the distribution from the estimation process to find a robust portfolio in one single optimization. It thereby incorporates uncertainties of inputs into a

12 See, for example, Richard O. Michaud, Efficient Asset Management: A Practical Guide to Stock Portfolio Optimization and Asset Allocation (Oxford: Oxford University Press, 1998); Philippe Jorion, “Portfolio Optimization in Practice,” Financial Analysts Journal, 48 (1992), pp. 68–74; and Bernd Scherer, “Portfolio Resampling: Review and Critique,” Financial Analysts Journal, 58 (2002), pp. 98–109.
13 Aharon Ben-Tal and Arkadi S. Nemirovski, “Robust Convex Optimization,” Mathematics of Operations Research, 23 (1998), pp. 769–805; and Aharon Ben-Tal and Arkadi S. Nemirovski, “Robust Solutions to Uncertain Linear Programs,” Operations Research Letters, 25 (1999), pp. 1–13.
14 Laurent El Ghaoui and Herve Lebret, “Robust Solutions to Least-Squares Problems with Uncertain Data,” SIAM Journal on Matrix Analysis and Applications, 18 (1997), pp. 1035–1064.
deterministic framework. The classical portfolio optimization formulations such as the mean-variance portfolio selection problem, the maximum Sharpe ratio portfolio problem, and the value-at-risk (VaR) portfolio problem all have robust counterparts that can be solved in roughly the same amount of time as the original problem.15 In Chapter 10 we discuss robust portfolio optimization in more detail.
Large Data Requirements

In classical mean-variance optimization, we need to provide estimates of the expected returns and covariances of all the securities in the investment universe considered. Typically, however, portfolio managers have reliable return forecasts for only a small subset of these assets. This is probably one of the major reasons why the mean-variance framework has not been adopted by practitioners in general. It is simply unreasonable for the portfolio manager to produce good estimates of all the inputs required in classical portfolio theory. We will see later in this chapter that the Black-Litterman model provides a remedy in that it blends any views (this could be a forecast on just one or a few securities, or all of them) the investor might have with the market equilibrium. When no views are present, the resulting Black-Litterman expected returns are just the expected returns consistent with the market equilibrium. Conversely, when the investor has views on some of the assets, the resulting expected returns deviate from market equilibrium.
SHRINKAGE ESTIMATION

It has been well known since Stein's seminal work that biased estimators often yield better parameter estimates than their generally preferred unbiased counterparts.16 In particular, it can be shown that if we consider the problem of estimating the mean of an N-dimensional multivariate normal variable (N > 2), X ∼ N(μ, Σ), with known covariance matrix Σ, the sample mean μ̂ is not the best estimator of the population mean μ in terms of the quadratic loss function

$$L(\mu, \hat{\mu}) = (\mu - \hat{\mu})'\, \Sigma^{-1} (\mu - \hat{\mu})$$

15 See, for example, Donald Goldfarb and Garud Iyengar, “Robust Portfolio Selection Problems,” Mathematics of Operations Research, 28 (2003), pp. 1–38.
16 Charles Stein, “Inadmissibility of the Usual Estimator for the Mean of a Multivariate Normal Distribution,” Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1 (1956), pp. 197–206.
For example, the so-called James-Stein shrinkage estimator

$$\hat{\mu}_{JS} = (1 - w)\hat{\mu} + w \mu_0 \iota$$

has a lower quadratic loss than the sample mean, where

$$w = \min\left(1,\; \frac{N - 2}{T\, (\hat{\mu} - \mu_0 \iota)'\, \Sigma^{-1} (\hat{\mu} - \mu_0 \iota)}\right)$$

and ι = [1,1,…,1]′. Moreover, T is the number of observations, and μ0 is an arbitrary number. The vector μ0ι and the weight w are referred to as the shrinkage target and the shrinkage intensity (or shrinkage factor), respectively. Although there are some choices of μ0 that are better than others, what is surprising about this result is that μ0 could be any number! This fact is referred to as the Stein paradox.

In effect, shrinkage is a form of averaging different estimators. The shrinkage estimator typically consists of three components: (1) an estimator with little or no structure (like the sample mean above); (2) an estimator with a lot of structure (the shrinkage target); and (3) the shrinkage intensity. The shrinkage target is chosen with the following two requirements in mind. First, it should have only a small number of free parameters (robust and with a lot of structure). Second, it should have some of the basic properties in common with the unknown quantity being estimated. The shrinkage intensity can be chosen based on theoretical properties or simply by numerical simulation.

Probably the most well-known shrinkage estimator17 used to estimate expected returns in the financial literature is the one proposed by Jorion,18 where the shrinkage target is given by μgι with

$$\mu_g = \frac{\iota' \Sigma^{-1} \hat{\mu}}{\iota' \Sigma^{-1} \iota}$$

and

$$w = \frac{N + 2}{N + 2 + T\, (\hat{\mu} - \mu_g \iota)'\, \Sigma^{-1} (\hat{\mu} - \mu_g \iota)}$$
We note that μg is the return on the GMV portfolio discussed in Chapter 8.

17 Many similar approaches have been proposed. For example, see Jobson and Korkie, “Putting Markowitz Theory to Work,” and Frost and Savarino, “An Empirical Bayes Approach to Efficient Portfolio Selection.”
18 Philippe Jorion, “Bayes-Stein Estimation for Portfolio Analysis,” Journal of Financial and Quantitative Analysis, 21 (1986), pp. 279–292.
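The Jorion estimator is straightforward to implement from the two formulas above. The following sketch is a direct transcription, assuming the returns are collected in a T × N matrix and Σ is estimated by the sample covariance matrix; the function name is ours.

```python
import numpy as np

def jorion_shrinkage_mean(returns):
    """Shrink the sample mean toward the return of the global minimum
    variance (GMV) portfolio, following Jorion's Bayes-Stein estimator.

    returns : (T, N) array of historical returns
    """
    T, N = returns.shape
    mu_hat = returns.mean(axis=0)
    sigma_inv = np.linalg.inv(np.cov(returns, rowvar=False))
    iota = np.ones(N)

    # Shrinkage target: mu_g = (iota' Sigma^-1 mu_hat) / (iota' Sigma^-1 iota)
    mu_g = (iota @ sigma_inv @ mu_hat) / (iota @ sigma_inv @ iota)

    # Shrinkage intensity: w = (N+2) / (N+2 + T d' Sigma^-1 d), d = mu_hat - mu_g iota
    d = mu_hat - mu_g * iota
    w = (N + 2) / (N + 2 + T * (d @ sigma_inv @ d))

    return (1 - w) * mu_hat + w * mu_g * iota
```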
Several studies document that, for the mean-variance framework: (1) the variability in the portfolio weights from one period to the next decreases; and (2) the out-of-sample risk-adjusted performance improves significantly when using a shrinkage estimator as compared to the sample mean.19

We can also apply the shrinkage technique to covariance matrix estimation. This involves shrinking an unstructured covariance estimator toward a more structured covariance estimator. Typically the structured covariance estimator has only a few degrees of freedom (only a few nonzero eigenvalues), as motivated by Random Matrix Theory (see Chapter 2). For example, as shrinkage targets, Ledoit and Wolf20 suggest using the covariance matrix that follows from the single-factor model developed by Sharpe21 or the constant correlation covariance matrix. In practice the single-factor model and the constant correlation model yield similar results, but the constant correlation model is much easier to implement. In the case of the constant correlation model, the shrinkage estimator for the covariance matrix takes the form

$$\hat{\Sigma}_{LW} = w\, \hat{\Sigma}_{CC} + (1 - w)\, \hat{\Sigma}$$

where Σ̂ is the sample covariance matrix, and Σ̂CC is the sample covariance matrix with constant correlation. The sample covariance matrix with constant correlation is computed as follows. First, we decompose the sample covariance matrix according to

$$\hat{\Sigma} = \Lambda C \Lambda'$$
19 See, for example, Michaud, “The Markowitz Optimization Enigma: Is ‘Optimized’ Optimal?”; Jorion, “Bayesian and CAPM Estimators of the Means: Implications for Portfolio Selection”; and Glen Larsen, Jr. and Bruce Resnick, “Parameter Estimation Techniques, Optimization Frequency, and Portfolio Return Enhancement,” Journal of Portfolio Management, 27 (2001), pp. 27–34.
20 Olivier Ledoit and Michael Wolf, “Improved Estimation of the Covariance Matrix of Stock Returns with an Application to Portfolio Selection,” Journal of Empirical Finance, 10 (2003), pp. 603–621; and Olivier Ledoit and Michael Wolf, “Honey, I Shrunk the Sample Covariance Matrix,” Journal of Portfolio Management, 30 (2004), pp. 110–119.
21 William F. Sharpe, “A Simplified Model for Portfolio Analysis,” Management Science, 9 (1963), pp. 277–293. Elton, Gruber, and Urich proposed the single-factor model for purposes of covariance estimation in 1978. They show that this approach leads to: (1) better forecasts of the covariance matrix; (2) more stable portfolio allocations over time; and (3) more diversified portfolios. They also find that the average correlation coefficient is a good forecast of the future correlation matrix. See Edwin J. Elton, Martin J. Gruber, and Thomas J. Urich, “Are Betas Best?” Journal of Finance, 33 (1978), pp. 1375–1384.
where Λ is a diagonal matrix of the volatilities of returns and C is the sample correlation matrix, that is,

$$C = \begin{bmatrix} 1 & \hat{\rho}_{12} & \cdots & \hat{\rho}_{1N} \\ \hat{\rho}_{21} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \hat{\rho}_{N-1,N} \\ \hat{\rho}_{N1} & \cdots & \hat{\rho}_{N,N-1} & 1 \end{bmatrix}$$

Second, we replace the sample correlation matrix with the constant correlation matrix

$$C_{CC} = \begin{bmatrix} 1 & \hat{\rho} & \cdots & \hat{\rho} \\ \hat{\rho} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \hat{\rho} \\ \hat{\rho} & \cdots & \hat{\rho} & 1 \end{bmatrix}$$

where ρ̂ is the average of all the sample correlations, in other words

$$\hat{\rho} = \frac{2}{(N-1)N} \sum_{i=1}^{N} \sum_{j=i+1}^{N} \hat{\rho}_{ij}$$
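A minimal sketch of the constant correlation construction just described, together with the resulting shrinkage estimator, might look as follows. The shrinkage intensity w is left as an input because its optimal value is given by the somewhat tedious formula discussed next; the function names are ours.

```python
import numpy as np

def constant_correlation_cov(returns):
    """Build the constant correlation target: keep the sample volatilities
    but replace every off-diagonal correlation by their average rho_bar."""
    sample_cov = np.cov(returns, rowvar=False)
    vols = np.sqrt(np.diag(sample_cov))
    corr = sample_cov / np.outer(vols, vols)

    N = corr.shape[0]
    rho_bar = (corr.sum() - N) / (N * (N - 1))  # average off-diagonal correlation

    corr_cc = np.full((N, N), rho_bar)
    np.fill_diagonal(corr_cc, 1.0)
    return np.outer(vols, vols) * corr_cc

def ledoit_wolf_shrunk_cov(returns, w):
    """Sigma_LW = w * Sigma_CC + (1 - w) * Sigma, with intensity w in [0, 1]."""
    sample_cov = np.cov(returns, rowvar=False)
    return w * constant_correlation_cov(returns) + (1.0 - w) * sample_cov
```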
The optimal shrinkage intensity can be shown to be proportional to a constant divided by the length of the history, T.22

22 Although straightforward to implement, the optimal shrinkage intensity, w, is a bit tedious to write down mathematically. Let us denote by r_{i,t} the return on security i during period t, 1 ≤ i ≤ N, 1 ≤ t ≤ T, and define

$$\bar{r}_i = \frac{1}{T}\sum_{t=1}^{T} r_{i,t} \quad\text{and}\quad \hat{\sigma}_{ij} = \frac{1}{T-1}\sum_{t=1}^{T} (r_{i,t} - \bar{r}_i)(r_{j,t} - \bar{r}_j)$$

Then the optimal shrinkage intensity is given by the formula

$$w = \max\left\{0,\; \min\left\{\frac{\hat{\kappa}}{T},\; 1\right\}\right\}, \quad\text{where}\quad \hat{\kappa} = \frac{\hat{\pi} - \hat{c}}{\hat{\gamma}}$$

and the parameters π̂, ĉ, γ̂ are computed as follows. First, π̂ is given by

$$\hat{\pi} = \sum_{i,j=1}^{N} \hat{\pi}_{ij}, \quad\text{where}\quad \hat{\pi}_{ij} = \frac{1}{T}\sum_{t=1}^{T} \Big( (r_{i,t} - \bar{r}_i)(r_{j,t} - \bar{r}_j) - \hat{\sigma}_{ij} \Big)^2$$

Second, ĉ is given by

$$\hat{c} = \sum_{i=1}^{N} \hat{\pi}_{ii} + \sum_{\substack{i,j=1 \\ i \ne j}}^{N} \frac{\hat{\rho}}{2}\left( \sqrt{\hat{\sigma}_{jj}/\hat{\sigma}_{ii}}\; \hat{\vartheta}_{ii,ij} + \sqrt{\hat{\sigma}_{ii}/\hat{\sigma}_{jj}}\; \hat{\vartheta}_{jj,ij} \right)$$

where

$$\hat{\vartheta}_{ii,ij} = \frac{1}{T}\sum_{t=1}^{T} \Big( (r_{i,t} - \bar{r}_i)^2 - \hat{\sigma}_{ii} \Big)\Big( (r_{i,t} - \bar{r}_i)(r_{j,t} - \bar{r}_j) - \hat{\sigma}_{ij} \Big)$$

Finally, γ̂ is given by

$$\hat{\gamma} = \left\lVert C - C_{CC} \right\rVert_F^2$$

where ‖·‖F denotes the Frobenius norm, defined by

$$\lVert A \rVert_F = \sqrt{\sum_{i,j=1}^{N} a_{ij}^2}$$
In their two articles, Ledoit and Wolf compare the empirical out-of-sample performance of their shrinkage covariance matrix estimators with other covariance matrix estimators, such as the sample covariance matrix, a statistical factor model based on the first five principal components, and a factor model based on the 48 industry factors as defined by Fama and French.23 The results indicate that when it comes to computing a GMV portfolio, their shrinkage estimators are superior to the others tested, with the constant correlation shrinkage estimator coming out slightly ahead. Interestingly enough, it turns out that the shrinkage intensity for the single-factor model (the shrinkage intensity for the constant correlation model is not reported) is fairly constant throughout time, with a value around 0.8. This suggests that there is about four times as much estimation error present in the sample covariance matrix as there is bias in the single-factor covariance matrix.

23 Eugene F. Fama and Kenneth R. French, “Industry Costs of Equity,” Journal of Financial Economics, 43 (1997), pp. 153–193.
THE BLACK-LITTERMAN MODEL

In the Black-Litterman model an estimate of future expected returns is based on combining market equilibrium (e.g., the CAPM equilibrium) with an investor's views. As we will see, the Black-Litterman expected return is a shrinkage estimator where market equilibrium is the shrinkage target and the shrinkage intensity is determined by the portfolio manager's confidence in the model inputs. We will make this statement precise later in this section. Such views are expressed as absolute or relative deviations from equilibrium together with confidence levels of the views (as measured by the standard deviations of the views).
The Black-Litterman expected return is calculated as a weighted average of the market equilibrium and the investor's views. The weights depend on (1) the volatility of each asset and its correlations with the other assets, and (2) the degree of confidence in each forecast. The resulting expected return, which is the mean of the posterior distribution, is then used as input in the portfolio optimization process. Portfolio weights computed in this fashion tend to be more intuitive and less sensitive to small changes in the original inputs (i.e., forecasts of market equilibrium, investor's views, and the covariance matrix).

The Black-Litterman model can be interpreted as a Bayesian model. Named after the English mathematician Thomas Bayes, the Bayesian approach is based on the subjective interpretation of probability. A probability distribution is used to represent an investor's belief about the probability that a specific event will actually occur. This probability distribution, called the prior distribution, reflects an investor's knowledge about the probability before any data are observed. After more information is provided (e.g., data observed), the investor's opinions about the probability might change. Bayes' rule is the formula for computing the new probability distribution, called the posterior distribution. The posterior distribution is based on knowledge of the prior probability distribution plus the new data. A posterior distribution of expected return is derived by combining the forecast from the empirical data with a prior distribution.

The ability to incorporate exogenous insight, such as a portfolio manager's judgment, into formal models is important: such insight might be the most valuable input used by the model. The Bayesian framework allows forecasting systems to use such external information sources and subjective interventions (i.e., modification of the model due to judgment) in addition to traditional information sources such as market data and proprietary data. Because portfolio managers might not be willing to give up control to a black box, incorporating exogenous insights into formal models through Bayesian techniques is one way of giving the portfolio manager better control in a quantitative framework. Forecasts are represented through probability distributions that can be modified or adjusted to incorporate other sources of information deemed relevant. The only restriction is that such additional information (i.e., the investor's views) be combined with the existing model through the laws of probability. In effect, incorporating Bayesian views into a model allows one to rationalize subjectivity within a formal, quantitative framework. “[T]he rational investor is a Bayesian,” as Markowitz noted.24

24 See page 57 in Harry M. Markowitz, Mean-Variance Analysis in Portfolio Choice and Capital Markets (Cambridge, MA: Basil Blackwell, 1987).
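To see the mechanics of Bayes' rule in the simplest case relevant here, consider updating a normal prior belief about a single expected return with normally distributed sample evidence. The sketch below is a standard textbook computation under our own illustrative assumptions, not a formula taken from the chapter; the precision-weighted average it produces is exactly the shrinkage structure that reappears in the Black-Litterman expected return.

```python
def normal_posterior(prior_mean, prior_var, sample_mean, sample_var):
    """Posterior of a normal mean with known variances: the posterior mean
    is a precision-weighted average of the prior mean (the 'equilibrium'
    belief) and the sample mean (the data evidence, or a 'view')."""
    prior_prec = 1.0 / prior_var
    data_prec = 1.0 / sample_var
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * sample_mean)
    return post_mean, post_var

# Hypothetical example: equilibrium belief of 5% +/- 2%, evidence of 9% +/- 4%
mean, var = normal_posterior(0.05, 0.02**2, 0.09, 0.04**2)
```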
Derivation of the Black-Litterman Model

The basic feature of the Black-Litterman model that we discuss in this and the following sections is that it combines an investor's views with the market equilibrium. Let us understand what this statement implies. In the classical mean-variance optimization framework an investor is required to provide estimates of the expected returns and covariances of all the securities in the investment universe considered. This is of course a humongous task, given the number of securities available today. Portfolio and investment managers are very unlikely to have a detailed understanding of all the securities, companies, industries, and sectors that they have at their disposal. Typically, most of them have a specific area of expertise that they focus on in order to achieve superior returns. This is probably one of the major reasons why the mean-variance framework has not been adopted among practitioners in general. It is simply unrealistic for the portfolio manager to produce reasonable estimates (besides the additional problems of estimation error) of the inputs required in classical portfolio theory.

Furthermore, many trading strategies used today cannot easily be turned into forecasts of expected returns and covariances. In particular, not all trading strategies produce views on absolute returns; rather, some just provide relative rankings of securities that are predicted to outperform or underperform other securities. For example, considering two stocks, A and B, an absolute view might be “the one-month expected returns on A and B are 1.2% and 1.7%, with standard deviations of 5% and 5.5%, respectively,” while a relative view may be of the form “B will outperform A by half a percent over the next month” or simply “B will outperform A over the next month.” Clearly, it is not an easy task to translate any of these relative views into the inputs required for the modern portfolio theoretical framework. We now walk through and illustrate the usage of the Black-Litterman model in three simple steps.

Step 1: Basic Assumptions and Starting Point

One of the basic assumptions underlying the Black-Litterman model is that the expected return of a security should be consistent with market equilibrium unless the investor has a specific view on the security.25 In other words, an investor who does not have any views on the market should hold the market.26

25 Fischer Black and Robert Litterman, Asset Allocation: Combining Investor Views with Market Equilibrium, Goldman, Sachs & Co., Fixed Income Research, September 1990.
26 A predecessor to the Black-Litterman model is the so-called Treynor-Black model. In this model, an investor's portfolio is shown to consist of two parts: (1) a passive portfolio/positions held purely for the purpose of mimicking the market portfolio, and (2) an active portfolio/positions based on the investor's return/risk expectations. This somewhat simpler model relies on the assumption that returns of all securities are related only through the variation of the market portfolio (Sharpe's Diagonal Model). See Jack L. Treynor and Fischer Black, “How to Use Security Analysis to Improve Portfolio Selection,” Journal of Business, 46 (1973), pp. 66–86.
Our starting point is the CAPM model:

$$E(R_i) - R_f = \beta_i \big(E(R_M) - R_f\big)$$

where E(Ri), E(RM), and Rf are the expected return on security i, the expected return on the market portfolio, and the risk-free rate, respectively. Furthermore,

$$\beta_i = \frac{\mathrm{cov}(R_i, R_M)}{\sigma_M^2}$$

where σ²M is the variance of the market portfolio. Let us denote by wb = (wb1, …, wbN)′ the market capitalization or benchmark weights, so that with an asset universe of N securities27 the return on the market can be written as

$$R_M = \sum_{j=1}^{N} w_{bj} R_j$$

Then by the CAPM, the expected excess return on asset i, Πi = E(Ri) – Rf, becomes

$$\Pi_i = \beta_i \big(E(R_M) - R_f\big) = \frac{\mathrm{cov}(R_i, R_M)}{\sigma_M^2}\big(E(R_M) - R_f\big) = \frac{E(R_M) - R_f}{\sigma_M^2} \sum_{j=1}^{N} \mathrm{cov}(R_i, R_j)\, w_{bj}$$

27 For simplicity, we consider only equity securities. Extending this model to other asset classes such as bonds and currencies is fairly straightforward.
28 Two comments about the following two relationships are of importance: 1. As it may be difficult to accurately estimate expected returns, practitioners use other techniques. One is that of reverse optimization, also referred to as the technique of implied expected returns. The technique simply uses the expression Π = δΣw to calculate the expected return vector given the market price of risk δ, the covariance matrix Σ, and the market capitalization weights w. The technique was first introduced by Sharpe and Fisher; see William F. Sharpe, “Imput-

We can also express this in matrix-vector form as28
$$\Pi = \delta \Sigma w$$

where we define the market price of risk as

$$\delta = \frac{E(R_M) - R_f}{\sigma_M^2}$$

the expected excess return vector

$$\Pi = \begin{bmatrix} \Pi_1 \\ \vdots \\ \Pi_N \end{bmatrix}$$

and the covariance matrix of returns

$$\Sigma = \begin{bmatrix} \mathrm{cov}(R_1, R_1) & \cdots & \mathrm{cov}(R_1, R_N) \\ \vdots & \ddots & \vdots \\ \mathrm{cov}(R_N, R_1) & \cdots & \mathrm{cov}(R_N, R_N) \end{bmatrix}$$

The true expected returns μ of the securities are unknown. However, we assume that our previous equilibrium model serves as a reasonable estimate of the true expected returns in the sense that

$$\Pi = \mu + \varepsilon_\Pi, \quad \varepsilon_\Pi \sim N(0, \tau\Sigma)$$

for some small parameter τ
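Although the derivation continues beyond this excerpt, the reverse-optimization step mentioned in footnote 28 is already fully specified by Π = δΣw and the definition of δ. A minimal sketch, assuming a covariance matrix of simple returns and market-capitalization weights that sum to one, might look as follows (the names are ours):

```python
import numpy as np

def implied_excess_returns(sigma, w_mkt, market_premium):
    """Reverse optimization: back out the equilibrium expected excess
    returns Pi = delta * Sigma * w from the market weights.

    sigma          : (N, N) covariance matrix of returns
    w_mkt          : (N,) market capitalization weights, summing to one
    market_premium : E(R_M) - R_f, the expected excess return of the market
    """
    market_var = w_mkt @ sigma @ w_mkt   # sigma_M^2 implied by the weights
    delta = market_premium / market_var  # market price of risk
    return delta * sigma @ w_mkt
```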