Quantitative Methods

Introduction

Probability is the branch of pure mathematics that corresponds to statistics in applied mathematics. It began as the study of chance, and especially of games of chance, in seventeenth-century France with Blaise Pascal (1623-1662), who took up a nobleman's question about why he always lost when he made a particular bet (the answer was that he was giving odds of evens on a bet whose true odds were only about 47% in his favour). Pascal went on to develop 'his' famous triangle (which was in fact known in other cultures long before him), which makes calculations of this sort of probability very easy: calculations of the probability distribution of repetitions of a single event, such as the probability of throwing exactly six sixes in twenty-one throws of a die.
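The kind of calculation Pascal's triangle supports can be sketched in Python; `math.comb` gives the triangle's entries, and the dice example from the paragraph above is used as the illustration (the function name `binomial_pmf` is ours, not a standard one):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p: C(n, k) * p^k * (1-p)^(n-k).
    C(n, k) is the entry in row n, position k of Pascal's triangle."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability of throwing exactly six sixes in twenty-one throws of a fair die.
p_six_sixes = binomial_pmf(6, 21, 1 / 6)

# Row 4 of Pascal's triangle is just the coefficients C(4, 0) .. C(4, 4).
row_4 = [comb(4, k) for k in range(5)]  # [1, 4, 6, 4, 1]
```

Summing the function over all k from 0 to n gives 1, which is one quick check that the distribution is well formed.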

The problem with Pascal's work, and that of the probabilists who followed him, was that it rested on empirical assumptions: they fed 'common sense' ideas of probabilities into mathematical models of real situations (for example, the idea that an unbiased coin is equally likely to show heads or tails when tossed), so that what they were doing was not really probability or statistics, but a mixture of the two. This dependence on observation made many nineteenth-century mathematicians reluctant to allow probability a place as a branch of mathematics rather than as a part of physics.

Discussion

There are abstract formalizations of probability and interpretations of probability. The most common abstract formulation involves three axioms for the probability function Pr. First, Pr(x) >= 0 for all x. Second, Pr(x) = 1 if x is necessary. Third, Pr(x v y) = Pr(x) + Pr(y) whenever x and y are mutually exclusive, where 'v' means the inclusive disjunction of logic or union as in set theory. The third axiom, called finite additivity, can be generalized to countable additivity, where the function Pr ranges over infinite disjunctions or unions.

Conditional probability, Pr(x/y), is the quotient Pr(x & y)/Pr(y), defined whenever Pr(y) > 0. An item x is positively statistically (or probabilistically) correlated with an item y if Pr(x/y) is greater than Pr(x/-y), where -y is the negation of the proposition y, the non-occurrence of the event y, or the complement of y in set theory. Conversely, x is negatively statistically (or probabilistically) correlated with y if Pr(x/y) is less than Pr(x/-y). As for probabilistic (or statistical) independence: if Pr(x/y) = Pr(x/-y), then x is statistically (or probabilistically) independent of y. This abstract formulation yields a variety of theorems, e.g. Pr(-x) = 1 - Pr(x). An important probability theorem is the first limit theorem.
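These definitions can be sketched concretely for a finite sample space, here represented as a Python dict mapping outcomes to their probabilities (the helper names `pr` and `cond`, and the fair-die example, are illustrative assumptions, not standard library functions):

```python
# A finite probability model: each outcome mapped to its probability.
# Events are sets of outcomes; Pr of an event is the sum over its outcomes.
space = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}  # a fair die

def pr(event: set) -> float:
    """Pr satisfies the axioms: pr(e) >= 0, pr of the whole space is 1,
    and pr is additive over mutually exclusive (disjoint) events."""
    return sum(p for outcome, p in space.items() if outcome in event)

def cond(x: set, y: set) -> float:
    """Conditional probability Pr(x/y) = Pr(x & y) / Pr(y),
    defined only when Pr(y) > 0."""
    return pr(x & y) / pr(y)

evens = {2, 4, 6}
highs = {4, 5, 6}
not_evens = set(space) - evens   # -x: the complement of evens

# Theorem Pr(-x) = 1 - Pr(x):  pr(not_evens) equals 1 - pr(evens).
# Correlation: evens and highs are positively correlated, since
# cond(evens, highs) > cond(evens, set(space) - highs).
```

Independence falls out the same way: x is independent of y exactly when `cond(x, y) == cond(x, set(space) - y)`.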

There are various interpretations of probability: classical, relative-frequency, propensity, logical, and subjective. The classical interpretation holds that the probability of an event e is equal to a ratio: the number of equi-possibilities (equi-possible or equi-probable events, i.e. events having the same probability) that are ...
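The classical ratio can be sketched in one small function, using exact fractions; the fair-die illustration is an assumed example, not from the original text:

```python
from fractions import Fraction

def classical_probability(favourable: int, total: int) -> Fraction:
    """Classical interpretation: the probability of an event is the
    ratio of favourable equi-possible cases to all equi-possible cases."""
    return Fraction(favourable, total)

# Probability a fair die shows an even number: 3 favourable cases out of 6.
p_even = classical_probability(3, 6)  # Fraction(1, 2)
```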