Introduction to Bayesian Statistics

John Wiley & Sons, Apr. 26, 2004 – 354 pages
Traditionally, introductory statistics courses have been taught from a frequentist perspective. The recent upsurge in the use of Bayesian methods in applied statistical analysis highlights the need to expose students early on to Bayes' theorem, its advantages, and its applications. Based on the author's successful courses, Introduction to Bayesian Statistics introduces statistics from a Bayesian perspective in a way that is understandable to readers with a reasonable mathematics background.

Covering most of the same ground found in a typical statistics book – but from a Bayesian perspective – Introduction to Bayesian Statistics offers thorough, clearly explained discussions of:

  • Scientific data gathering, including the use of random sampling methods and randomized experiments to make inferences on cause-effect relationships
  • The rules of probability, including joint, marginal, and conditional probability
  • Discrete and continuous random variables
  • Bayesian inferences for means and proportions compared with the corresponding frequentist ones
  • The simple linear regression model analyzed in a Bayesian manner
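As an illustration of the kind of analysis the bullet points describe, here is a minimal conjugate Beta-Binomial update for a proportion. The book itself supplies R functions and Minitab macros for this; the Python sketch below, and its prior and data values, are illustrative only.

```python
# Minimal sketch of Bayesian inference for a binomial proportion using
# the conjugate Beta prior. Prior parameters and data are illustrative,
# not taken from the book.

def beta_binomial_update(a_prior, b_prior, successes, failures):
    """Return the posterior Beta(a, b) parameters after observing the data."""
    return a_prior + successes, b_prior + failures

# Uniform Beta(1, 1) prior; observe 7 successes in 10 trials.
a_post, b_post = beta_binomial_update(1, 1, 7, 3)
posterior_mean = a_post / (a_post + b_post)  # (1 + 7) / (1 + 7 + 1 + 3) = 2/3
print(a_post, b_post, posterior_mean)
```

The posterior is Beta(8, 4): the prior simply accumulates the observed successes and failures, which is what makes the conjugate analysis tractable by hand.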

To assist in the understanding of Bayesian statistics, this introduction provides readers with exercises (with selected answers), summaries of the main points from each chapter, a calculus refresher, a summary on the use of statistical tables, and R functions and Minitab macros for Bayesian analysis and Monte Carlo simulations (downloadable from the associated Web site).



Table of Contents

Introduction to Statistical Science – 1
Scientific Data Gathering – 13
Displaying and Summarizing Data – 29
Logic, Probability, and Uncertainty – 55
Discrete Random Variables – 75
Bayesian Inference for Discrete Random Variables – 95
Bayesian Inference for Binomial Proportion – 129
Summarizing the Posterior Distribution – 136
Comparing Bayesian and Frequentist Inferences for Proportion – 147
Bayesian Inference for Normal Mean – 169
Comparing Bayesian and Frequentist Inferences for Mean – 193
Bayesian Inference for Difference between Means – 209
Bayesian Inference for Simple Linear Regression – 235
Robust Bayesian Methods – 261
A. Introduction to Calculus – 275
B. Use of Statistical Tables – 295
C. Using the Included Minitab Macros – 307
D. Using the Included R Functions – 317
E. Answers to Selected Exercises – 329
References – 349
Copyright

Popular passages

Page 281 - The derivative of a constant times a function is the constant times the derivative of the function.
Page 124 - ... using it rather than some other measure : (1) Additivity. The variance of the sum of two independent random variables is the sum of their variances, and even when the two variables are dependent the variability of their sum has a simple formula. (2) Central limit theorem. The limiting behavior of a random variable that is the sum of a large number of independent random variables depends upon the variances of these random variables. Of course, it isn't just the biggest squared deviation that counts,...
Page 70 - The set of all possible outcomes of a random experiment is called the sample space of the experiment.
Page 86 - ... this turns out to be much more tractable mathematically. As we become acquainted with properties of the variance, which uses the squares of deviations from the mean to measure variability, we shall see that there are two fundamental reasons for using it rather than some other measure : (1) Additivity. The variance of the sum of two independent random variables is the sum of their variances, and even when the two variables are dependent the variability of their sum has a simple formula. (2) Central...
Page 185 - Equations 8-3 and 8-5 (below) are the equivalent of the Bayes theorem for normal prior and sampling distributions. They determine the mean and the standard deviation of the posterior normal distribution. Example Our manufacturer who...
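The book's Equations 8-3 and 8-5 are not reproduced in the excerpt above, but the update it describes — combining a normal prior with a normal sampling distribution — has a standard precision-weighted form. A minimal sketch, with illustrative numbers (not the book's manufacturer example):

```python
# Sketch of the standard normal-prior / normal-likelihood conjugate update:
# precisions (inverse variances) add, and the posterior mean is the
# precision-weighted average of prior mean and sample mean.

def normal_posterior(prior_mean, prior_sd, data_mean, data_sd, n):
    """Posterior mean and sd for a normal mean with known sampling sd."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / data_sd**2           # precision of the sample mean
    post_prec = prior_prec + data_prec   # precisions add
    post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
    return post_mean, post_prec**-0.5

# Illustrative values: prior N(10, 2^2), nine observations with mean 12
# from a sampling distribution with known sd 3.
m, s = normal_posterior(prior_mean=10.0, prior_sd=2.0,
                        data_mean=12.0, data_sd=3.0, n=9)
print(m, s)
```

With these numbers the posterior mean is 11.6, pulled from the prior mean 10 toward the sample mean 12 because the data carry more precision than the prior.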
Page 85 - The expected value has a surprising and useful property: the expected value of the sum of two random variables is the sum of the expected values of those variables: E(x + y) = E(x) + E(y), for random variables x and y.
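The linearity property quoted above is easy to verify numerically: for any paired samples, averaging the sums equals summing the averages, whether or not x and y are dependent. A tiny check with made-up values:

```python
# Empirical check of E(x + y) = E(x) + E(y) on a small sample.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 2.0, 5.0, 7.0]

def mean(v):
    return sum(v) / len(v)

lhs = mean([x + y for x, y in zip(xs, ys)])  # mean of the sums
rhs = mean(xs) + mean(ys)                    # sum of the means
print(lhs, rhs)  # both 6.5
```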
Page 42 - Var(ū) = σ²/n. [10] (This latter expression results from the fact that the variance of a sum of independent variables is the sum of the variances, applied to expressions 8 and 9.) We now have an unbiased estimator with a known variance.
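The variance-of-the-mean identity the quoted passage invokes — the variance of the mean of n independent draws is σ²/n — can be checked with a quick Monte Carlo simulation of the kind the book's downloadable functions support. The values below are illustrative:

```python
# Monte Carlo check that Var(x_bar) = sigma^2 / n for the mean of
# n independent normal draws. Here sigma = 2 and n = 25, so the
# theoretical value is 4 / 25 = 0.16.
import random

random.seed(0)
n, sigma, reps = 25, 2.0, 20_000
means = [sum(random.gauss(0.0, sigma) for _ in range(n)) / n
         for _ in range(reps)]
# True mean is 0, so the average squared sample mean estimates Var(x_bar).
emp_var = sum(m * m for m in means) / reps
print(emp_var, sigma**2 / n)  # empirical value close to 0.16
```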
Page 70 - The union of two events A and B is the set of outcomes that are included in A or B or both A and B.
Page 70 - The intersection of two events A and B is the set of elementary events favorable to both, and is denoted by A ∩ B or simply by AB.


About the author (2004)

WILLIAM M. BOLSTAD, PhD, is a Senior Lecturer in the Department of Statistics at the University of Waikato, New Zealand. He holds degrees from the University of Missouri, Stanford University, and the University of Waikato.
