Outline
 Joint probability, Marginal probability, Conditional probability.

Sum rule, Product rule, Bayes’ Theorem.

Prior probability, Posterior probability, Maximum Likelihood Estimation.

Expectation.
1.Joint probability, Marginal probability, Conditional probability.
We shall assume that there are two random variables $X$ and $Y$, where $X$ can take any of the values $x_i$ for $i = 1, \dots, M$, and $Y$ can take any of the values $y_j$ for $j = 1, \dots, L$.
(Figure 1)
We shall define:
1>. $p(X = x_i)$ denotes a specific probability.
2>. $p(X)$ denotes the probability distribution over $X$.
When $X$ takes the value $x_i$ and $Y$ takes the value $y_j$, this probability is written $p(X = x_i, Y = y_j)$ and is called the joint probability of $X = x_i$ and $Y = y_j$. It is given by the number of points falling in the cell $i,j$ as a fraction of the total number of points, and hence
$$p(X = x_i, Y = y_j) = \frac{n_{ij}}{N} \tag{1.1}$$
Here $n_{ij}$ is the number of points falling in cell $i,j$ and $N$ is the total number of points. Similarly, the probability that $X$ takes the value $x_i$ regardless of $Y$ is denoted $p(X = x_i)$ and is given by the fraction of the total number of points that fall in column $i$, so that
$$p(X = x_i) = \frac{c_i}{N} \tag{1.2}$$
Since the number of instances in column $i$ in Figure 1 is just the sum of the number of instances in each cell of that column, we have $c_i = \sum_{j} n_{ij}$ and therefore, from (1.1) and (1.2), we have:
$$p(X = x_i) = \sum_{j=1}^{L} p(X = x_i, Y = y_j) \tag{1.3}$$
which is the sum rule of probability. In this context, $p(X = x_i)$ is called the marginal probability, because it is obtained by summing out the other variable.
Now consider only those instances for which $X = x_i$. The fraction of such instances for which $Y = y_j$ is denoted $p(Y = y_j \mid X = x_i)$ and is called the conditional probability of $Y = y_j$ given $X = x_i$. It then follows that
$$p(Y = y_j \mid X = x_i) = \frac{n_{ij}}{c_i} \tag{1.4}$$
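As a quick illustration, relations (1.1)–(1.4) can be checked numerically on a small table of counts. The numbers below are made up for the sketch, not taken from Figure 1:

```python
import numpy as np

# Hypothetical counts n_ij for two variables: X indexes the columns
# (i = 0, 1, 2) and Y the rows (j = 0, 1). Values are illustrative only.
n = np.array([[10, 5, 15],
              [20, 30, 20]])             # n[j, i] = points in cell (i, j)
N = n.sum()                              # total number of points

joint = n / N                            # (1.1): p(X=x_i, Y=y_j) = n_ij / N
c = n.sum(axis=0)                        # column totals c_i
p_x = c / N                              # (1.2): p(X=x_i) = c_i / N
cond_y_given_x = n / c                   # (1.4): p(Y=y_j | X=x_i) = n_ij / c_i

# Sum rule (1.3): summing the joint over j recovers the marginal of X.
print(np.allclose(joint.sum(axis=0), p_x))
# Each conditional distribution over Y sums to 1 in every column.
print(cond_y_given_x.sum(axis=0))
```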
2.Sum rule, Product rule, Bayes’ Theorem.
Since $p(X = x_i, Y = y_j) = \frac{n_{ij}}{N} = \frac{n_{ij}}{c_i} \cdot \frac{c_i}{N}$, combining (1.1), (1.2), and (1.4) yields:
$$p(X = x_i, Y = y_j) = p(Y = y_j \mid X = x_i)\, p(X = x_i) \tag{1.5}$$
which is called the product rule of probability.
For the sake of simplicity, we may write $p(X)$ to denote a distribution over the random variable $X$, and $p(x_i)$ to denote the distribution evaluated for the particular value $x_i$. In this compact notation, the marginal and joint probability distributions are written $p(X)$ and $p(X, Y)$, respectively.
With these more compact notations, we rewrite the two prominent rules of probability theory as:
Sum rule: $$p(X) = \sum_{Y} p(X, Y) \tag{1.6}$$
Product rule: $$p(X, Y) = p(Y \mid X)\, p(X) \tag{1.7}$$
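A short sketch verifying that the compact rules (1.6) and (1.7) hold for any normalized joint distribution; the distribution here is randomly generated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random joint distribution p(X, Y) over 4 values of X and 3 values of Y.
p_xy = rng.random((4, 3))
p_xy /= p_xy.sum()                       # normalize: all entries sum to 1

# Sum rule (1.6): marginalize out Y to get p(X).
p_x = p_xy.sum(axis=1)

# Product rule (1.7): factor the joint as p(Y | X) p(X), then rebuild it.
p_y_given_x = p_xy / p_x[:, None]
reconstructed = p_y_given_x * p_x[:, None]

print(np.allclose(reconstructed, p_xy))
```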
These two concise rules form the basis for probabilistic theory that we use throughout this series of posts.
From the product rule, since $p(X, Y) = p(Y, X)$ by symmetry, we obtain $p(Y \mid X)\, p(X) = p(X \mid Y)\, p(Y)$, namely:
$$p(Y \mid X) = \frac{p(X \mid Y)\, p(Y)}{p(X)} \tag{1.8}$$
which is called Bayes' theorem. It plays a central role in pattern recognition and machine learning.
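As a worked sketch of (1.8), the snippet below applies Bayes' theorem to a hypothetical diagnostic-test scenario; all of the probability values are assumptions chosen for illustration:

```python
# Hypothetical numbers: p(Y) is the prior probability of having a condition,
# p(X | Y) the probability of a positive test given the condition.
p_disease = 0.01                   # prior p(Y)
p_pos_given_disease = 0.95         # likelihood p(X | Y)
p_pos_given_healthy = 0.05         # p(X | not Y)

# Sum and product rules give the evidence p(X):
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem (1.8): posterior p(Y | X) = p(X | Y) p(Y) / p(X).
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))
```

Note how a fairly accurate test still yields a small posterior when the prior is small: the evidence term in the denominator is dominated by false positives.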
3.Prior probability, Posterior probability, Maximum Likelihood Estimation.
1>.Prior probability: the probability of an event, reflecting established beliefs about the event before the arrival of new evidence or information. Prior probabilities are the original probabilities of an outcome, which will be updated with new information to create the posterior probability.
2>.Posterior probability: the revised probability of an event occurring after taking new information into account. The posterior probability is normally calculated by updating the prior probability using Bayes' theorem.
example 1:
We shall suppose there are 100 students. Let $X$ denote a student's sex and $Y$ denote what the student wears. $X = g$ means the student is a girl, $X = b$ means a boy; $Y = s$ means the student wears a skirt, $Y = p$ means wearing pants.
Now suppose we saw a student $j$ wearing pants ($Y = p$), and we want the probability that student $j$ is a girl, namely $p(X = g \mid Y = p)$.
According to Bayes' theorem, $p(X = g \mid Y = p) = \dfrac{p(Y = p \mid X = g)\, p(X = g)}{p(Y = p)}$.
Hence, by the sum and product rules, $p(Y = p) = p(Y = p \mid X = g)\, p(X = g) + p(Y = p \mid X = b)\, p(X = b)$.
$p(X = g)$ denotes the probability that a student is a girl irrespective of what she wears,
namely the prior probability. Correspondingly, $p(X = g \mid Y = p)$ is the posterior probability.
$p(Y = p)$ denotes the probability that a student wears pants irrespective of sex,
and $p(Y = p \mid X = g)$ denotes the probability that a girl wears pants.
We can compute these three probabilities by maximum likelihood estimation.
Informally, if we use a subset of the dataset $D$ to estimate these three probabilities, we shall call this a likelihood estimate; if we use the overall dataset $D$, it is the maximum likelihood estimate. Therefore:
$$\hat{p}(X = g) = \frac{\sum_{n=1}^{N} I(x_n = g)}{N}, \qquad \hat{p}(Y = p) = \frac{\sum_{n=1}^{N} I(y_n = p)}{N}, \qquad \hat{p}(Y = p \mid X = g) = \frac{\sum_{n=1}^{N} I(x_n = g,\, y_n = p)}{\sum_{n=1}^{N} I(x_n = g)}$$
where $I(*)$ is the indicator function: $I(*) = 1$ when the condition $*$ is true, and $I(*) = 0$ otherwise.
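A minimal sketch of these counting estimates on a tiny invented dataset (the records and variable names below are hypothetical, not from the original example):

```python
# Each record is (sex, wear) for one student. Python booleans act as the
# indicator function I(*): True counts as 1, False as 0.
data = [("girl", "skirt"), ("girl", "pants"), ("boy", "pants"),
        ("boy", "pants"), ("girl", "pants"), ("boy", "pants")]
N = len(data)

n_girl = sum(sex == "girl" for sex, _ in data)
p_girl = n_girl / N                                          # prior p(X=g)
p_pants = sum(wear == "pants" for _, wear in data) / N       # evidence p(Y=p)
p_pants_given_girl = sum(sex == "girl" and wear == "pants"
                         for sex, wear in data) / n_girl     # p(Y=p | X=g)

# Posterior p(X=g | Y=p) via Bayes' theorem (1.8):
posterior = p_pants_given_girl * p_girl / p_pants
print(posterior)
```

Here 2 of the 5 pants-wearers are girls, so the posterior agrees with the direct count 2/5.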
4.Expectation.
One of the most important operations involving probabilities is that of finding weighted averages of functions. The average value of some function $f(x)$ under a probability distribution $p(x)$ is called the expectation of $f(x)$, denoted by $\mathbb{E}[f]$. For a discrete distribution, it is given by:
$$\mathbb{E}[f] = \sum_{x} p(x) f(x) \tag{1.9}$$
so that the average is weighted by the relative probabilities of the different values of $x$. Furthermore, the conditional expectation is given by:
$$\mathbb{E}_x[f \mid y] = \sum_{x} p(x \mid y) f(x) \tag{1.10}$$
In the case of continuous variables, expectations are expressed in terms of an integration with respect to the corresponding probability density:
$$\mathbb{E}[f] = \int p(x) f(x)\, \mathrm{d}x \tag{1.11}$$
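Equations (1.9) and (1.11) can be sketched numerically; the distribution, the function $f(x) = x^2$, and the integration grid below are illustrative choices:

```python
import numpy as np

# Discrete expectation (1.9): E[f] = sum_x p(x) f(x), with made-up p and f.
x = np.array([0.0, 1.0, 2.0])
p = np.array([0.2, 0.5, 0.3])            # a valid distribution: sums to 1
f = x ** 2
e_discrete = np.sum(p * f)               # 0.2*0 + 0.5*1 + 0.3*4 = 1.7

# Continuous expectation (1.11): E[f] = integral of p(x) f(x) dx, here
# approximated on a fine grid for a standard normal density, where the
# exact value of E[x^2] is the variance, 1.
grid = np.linspace(-8.0, 8.0, 100001)
density = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
e_continuous = np.sum(density * grid ** 2) * (grid[1] - grid[0])

print(e_discrete)
print(round(e_continuous, 3))
```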
References
Bishop, C.M. (2006). Pattern Recognition and Machine Learning. Springer, New York.
Zhou, Z. (2015). 机器学习 (Machine Learning). Tsinghua University Press, Beijing.
Li, H. (2012). 统计学习方法 (Statistical Learning Methods). Tsinghua University Press, Beijing.