Introduction to Bayesian Statistics

Traditionally, introductory statistics courses have been taught from a frequentist perspective. The recent upsurge in the use of Bayesian methods in applied statistical analysis highlights the need to expose students early on to Bayes' theorem, its advantages, and its applications. Based on the author's successful courses, Introduction to Bayesian Statistics introduces statistics from a Bayesian perspective in a way that is understandable to readers with a reasonable mathematics background. It covers most of the same ground found in a typical introductory statistics text, but from a Bayesian perspective, with thorough, clearly explained discussion of each topic.
To assist in the understanding of Bayesian statistics, the book provides readers with exercises (with selected answers), summaries of the main points of each chapter, a calculus refresher, a summary of the use of statistical tables, and R functions and Minitab macros for Bayesian analysis and Monte Carlo simulations (downloadable from the associated Web site).
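As a flavor of what those functions compute, the conjugate beta-binomial update used throughout the early chapters can be sketched in a few lines of base R (a minimal illustration, not the book's own code; the prior parameters and data below are invented):

    # A beta(a, b) prior combined with binomial data gives a
    # beta(a + y, b + n - y) posterior (conjugate updating).
    a <- 1; b <- 1          # uniform beta(1, 1) prior, assumed for illustration
    n <- 20; y <- 13        # hypothetical data: 13 successes in 20 trials
    post_a <- a + y
    post_b <- b + n - y
    post_a / (post_a + post_b)               # posterior mean
    qbeta(c(0.025, 0.975), post_a, post_b)   # 95% credible interval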
Table of Contents
Bayesian Inference for Discrete Random Variables | 95
Bayesian Inference for Binomial Proportion | 129
Summarizing the Posterior Distribution | 136
Comparing Bayesian and Frequentist Inferences for Proportion | 147
Bayesian Inference for Normal Mean | 169
Comparing Bayesian and Frequentist Inferences for Mean | 193
Bayesian Inference for Difference between Means | 209
Bayesian Inference for Simple Linear Regression | 235
Robust Bayesian Methods | 261
A Introduction to Calculus | 275
B Use of Statistical Tables | 295
C Using the Included Minitab Macros | 307
D Using the Included R Functions | 317
E Answers to Selected Exercises | 329
References | 349
Frequently cited passages
Page 281 - The derivative of a constant times a function is the constant times the derivative of the function.
Page 70 - The set of all possible outcomes of a random experiment is called the sample space of the experiment.
Page 86 - ... this turns out to be much more tractable mathematically. As we become acquainted with properties of the variance, which uses the squares of deviations from the mean to measure variability, we shall see that there are two fundamental reasons for using it rather than some other measure: (1) Additivity. The variance of the sum of two independent random variables is the sum of their variances, and even when the two variables are dependent the variability of their sum has a simple formula. (2) Central limit theorem. The limiting behavior of a random variable that is the sum of a large number of independent random variables depends upon the variances of these random variables...
Page 185 - Equations 8-3 and 8-5 (below) are the equivalent of the Bayes theorem for normal prior and sampling distributions. They determine the mean and the standard deviation of the posterior normal distribution. Example Our manufacturer who...
Page 85 - The expected value has a surprising and useful property: the expected value of the sum of two random variables is the sum of the expected values of those variables: E(x + y) = E(x) + E(y), for random variables x and y.
Page 42 - Var(μ̂) = σ²/n. [10] (This latter expression results from the fact that the variance of a sum of independent variables is the sum of the variances, applied to expressions 8 and 9.) We now have an unbiased estimator with a known variance.
Page 70 - The union of two events A and B is the set of outcomes that are included in A or B or both A and B.
Page 70 - The intersection of two events A and B is the set of elementary events favorable to both, and is denoted by A ∩ B or simply by AB.
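The additivity property cited from page 86 is easy to verify by simulation, in the Monte Carlo spirit the book emphasizes (a throwaway sketch; the distributions and sample size are arbitrary choices, not from the book):

    set.seed(1)
    x <- rnorm(1e6, mean = 0, sd = 2)   # Var(x) = 4
    y <- rexp(1e6, rate = 1)            # Var(y) = 1, independent of x
    var(x + y)                          # close to 5 = Var(x) + Var(y)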
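Equations 8-3 and 8-5 cited from page 185 are not reproduced here, but the standard conjugate normal-prior/normal-likelihood update they refer to can be written in R as follows (a sketch of the usual textbook formulas, with hypothetical numbers):

    m0 <- 10; s0 <- 2       # prior mean and standard deviation (hypothetical)
    sigma <- 3              # known sampling standard deviation
    n <- 25; ybar <- 11.5   # hypothetical sample size and sample mean
    post_prec <- 1/s0^2 + n/sigma^2                      # precisions add
    post_mean <- (m0/s0^2 + n*ybar/sigma^2) / post_prec  # precision-weighted average
    post_sd   <- sqrt(1/post_prec)                       # posterior standard deviation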
