Literature Review

Abstract

In this paper we investigate an efficient implementation of the Monte Carlo EM algorithm based on Quasi-Monte Carlo sampling. The Monte Carlo EM algorithm is a stochastic version of the deterministic EM (Expectation-Maximization) algorithm in which an intractable E-step is replaced by a Monte Carlo approximation. Quasi-Monte Carlo methods produce deterministic sequences of points that can significantly improve the accuracy of Monte Carlo approximations over purely random sampling. One drawback of deterministic Quasi-Monte Carlo methods is that it is generally difficult to determine the magnitude of the approximation error. However, in order to implement the Monte Carlo EM algorithm in an automated way, the ability to measure this error is fundamental. Recent developments in randomized Quasi-Monte Carlo methods can overcome this drawback. We investigate the implementation of an automated, data-driven Monte Carlo EM algorithm based on randomized Quasi-Monte Carlo methods. We apply this algorithm to a geostatistical model of online purchases and find that it can significantly decrease the total simulation effort, thus showing great potential for improving upon the efficiency of the classical Monte Carlo EM algorithm.

Table of Contents

Chapter One: Introduction
Overview
Chapter Two: Literature Review
Introduction
The Black-Scholes Model
Crude Monte Carlo
Pricing European options with Monte Carlo
Numerical example
Jump diffusion
Pricing American options
Chapter Three: Methodology
Numerical example
Improving the Monte Carlo method for pricing American options
Regression function
Using the Chebyshev polynomial
Improvement involving the choice of random numbers
Quasi-Monte Carlo method
Numerical example
Numerical example
Using a faster computing algorithm to find the solution
Chapter Four: Discussion and Conclusion
Improvement involving upper and lower bounds
Proof
Proof

Chapter One: Introduction

Overview

The Expectation-Maximization (EM) algorithm (Dempster et al., 1977) is a popular tool in statistics and many other fields. One limitation to the use of EM, however, is that quite often the E-step of the algorithm involves an analytically intractable, sometimes high-dimensional integral. Hobert (2000), for example, considers a model for which the E-step involves intractable integrals of dimension twenty. The Monte Carlo EM (MCEM) algorithm, proposed by Wei & Tanner (1990), estimates this intractable integral with an empirical average based on simulated data. Typically, the simulated data are obtained by producing random draws from the distribution dictated by EM. By the law of large numbers, this integral estimate can be made arbitrarily accurate by increasing the size of the simulated data. The MCEM algorithm typically requires very high accuracy, especially at the later iterations; Booth & Hobert (1999), for example, report sample sizes of over 66,000 at convergence. This suggests that the overall efficiency of MCEM could be improved by using simulation methods that achieve high accuracy in the integral estimate with smaller sample sizes.
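To make the MCEM idea concrete, the sketch below works through a hypothetical toy example (right-censored exponential lifetimes, not one of the models discussed in this paper): the conditional expectation needed in the E-step is replaced by an average over simulated draws from the conditional distribution of the latent uncensored lifetimes, and the Monte Carlo sample size is increased across iterations. All variable names, starting values, and the sample-size schedule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: exponential lifetimes with rate lam_true,
# right-censored at c. We observe y_i = min(x_i, c) and the indicator
# d_i = 1{x_i <= c}.
lam_true, c, n = 2.0, 0.4, 500
x = rng.exponential(1.0 / lam_true, n)
y, d = np.minimum(x, c), x <= c

lam = 1.0   # starting value for the rate parameter
m = 100     # Monte Carlo sample size for the E-step
for it in range(50):
    # Monte Carlo E-step: for each censored observation, draw m values from
    # the conditional distribution x_i | x_i > c, which by the memoryless
    # property is c + Exponential(lam), and average them.
    draws = c + rng.exponential(1.0 / lam, size=(m, int((~d).sum())))
    e_cens = draws.mean(axis=0)          # estimated E[x_i | x_i > c]
    # M-step: closed-form MLE of an exponential rate given the completed data.
    lam = n / (y[d].sum() + e_cens.sum())
    m = int(m * 1.2)                     # crude increase of the simulation size

print(f"MCEM estimate of the rate: {lam:.3f} (true value {lam_true})")

The same loop structure applies when the E-step integral is high dimensional; the only change is that the draws must come from the (typically much harder to sample) conditional distribution of the missing data given the observations and the current parameter value.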

Quasi-Monte Carlo method

Quasi-Monte Carlo methods can be regarded as a deterministic counterpart to classical Monte Carlo (A. Ibanez, 2004). Suppose we want to evaluate the (analytically intractable) integral

$$\mu = \int_{[0,1]^d} f(u) \, du$$

over the d-dimensional unit cube $[0,1]^d$. Classical Monte Carlo integration randomly selects points $u_1, \dots, u_m \sim \mathrm{Uniform}([0,1]^d)$ and approximates the integral above by the empirical average

$$\hat{\mu}_{MC} = \frac{1}{m} \sum_{i=1}^{m} f(u_i).$$

Quasi-Monte Carlo methods, on the other hand, select the points $u_1, \dots, u_m$ deterministically. Specifically, ...
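As an illustration of the difference, and of why randomization matters, the sketch below (an illustrative example, not taken from this paper) compares crude Monte Carlo with a randomized Quasi-Monte Carlo estimate based on a scrambled Sobol' sequence from SciPy's scipy.stats.qmc module; the integrand, dimension, and sample sizes are arbitrary choices. Because the scrambled sequence is re-randomized across replications, the spread of the replicate estimates gives a data-driven measure of the approximation error, which is exactly the property an automated MCEM implementation needs.

import numpy as np
from scipy.stats import qmc

# Illustrative integrand on the unit cube [0, 1]^d (not from the paper):
# f(u) = prod_j 1.5 * sqrt(u_j), whose exact integral over [0, 1]^d is 1.
d, n = 5, 1024
f = lambda u: np.prod(1.5 * np.sqrt(u), axis=1)

# Crude Monte Carlo: i.i.d. Uniform([0, 1]^d) points.
rng = np.random.default_rng(0)
est_mc = f(rng.random((n, d))).mean()

# Randomized Quasi-Monte Carlo: scrambled Sobol' points. Scrambling keeps the
# low-discrepancy structure while making each estimate unbiased, so independent
# replications can be used to gauge the approximation error.
reps = [f(qmc.Sobol(d, scramble=True, seed=s).random_base2(10)).mean()  # 2^10 = n points
        for s in range(10)]
est_qmc = np.mean(reps)
se_qmc = np.std(reps, ddof=1) / np.sqrt(len(reps))

print(f"crude Monte Carlo estimate: {est_mc:.5f}")
print(f"randomized QMC estimate:    {est_qmc:.5f} +/- {se_qmc:.5f} (exact value 1.0)")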