About the Book
Please note that the content of this book primarily consists of articles available from Wikipedia or other free sources online. Pages: 60.

Chapters: Extreme value theory, Likelihood-ratio test, Histogram, Bernoulli process, Order statistic, Effect size, Regression toward the mean, List of important publications in statistics, Validity, Independent component analysis, Information geometry, V-optimal histograms, Bilinear time-frequency distribution, Loss function, Unit-weighted regression, Normal-gamma distribution, Explained sum of squares, Fieller's theorem, Bayesian information criterion, Matching pursuit, Explained variation, Normal curve equivalent, Cochran's theorem, Memorylessness, SUBCLU, Blocking, Sample mean and sample covariance, Exact test, Blind deconvolution, P-rep, Cointegration, Random effects model, Breusch-Pagan test, Complete-linkage clustering, Randomness tests, Mean square weighted deviation, Page's trend test, Polya urn model, Generalized p-value, Political forecasting, Imputation, Redescending M-estimator, Spatial dependence, Box-Behnken design, Complete spatial randomness, Most probable number, Gaussian process emulator, Varimax rotation, Quasi-maximum likelihood, Vuong's closeness test, Barnard's test, Equiprobable, Wilks' lambda distribution, Data binning, Sinkov statistic, Newman-Keuls method, Qualitative data, Higher-order statistics, Mean signed difference, Davies-Bouldin index, Eigenpoll.

Excerpt: In statistics, an effect size is a measure of the strength of the relationship between two variables in a statistical population, or a sample-based estimate of that quantity. An effect size calculated from data is a descriptive statistic that conveys the estimated magnitude of a relationship without making any statement about whether the apparent relationship in the data reflects a true relationship in the population. In that way, effect sizes complement inferential statistics such as p-values. Among other uses, effect size measures play an ...
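One widely used effect size for comparing the means of two groups is Cohen's d: the difference in sample means divided by the pooled standard deviation. A minimal sketch in Python (the function name and the sample data are illustrative, not from the excerpt):

```python
import math

def cohens_d(sample_a, sample_b):
    """Cohen's d: standardized difference between two sample means,
    scaled by the pooled standard deviation."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Unbiased sample variances (denominator n - 1)
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Example: two small groups whose means differ by 2
group_a = [5.0, 6.0, 7.0, 8.0]
group_b = [3.0, 4.0, 5.0, 6.0]
print(round(cohens_d(group_a, group_b), 3))  # prints 1.549
```

Note that d says nothing about statistical significance; as the excerpt observes, it describes the estimated magnitude of the difference and is reported alongside, not instead of, a p-value.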