About the Book
Please note that the content of this book primarily consists of articles available from Wikipedia or other free sources online. Pages: 90. Chapters: Absorbing Markov chain, Algorithmic composition, Baum-Welch algorithm, Bernoulli scheme, Burst error, Dependability state model, Detailed balance, Discrete phase-type distribution, Dynamics of Markovian particles, Dynamic Markov compression, Entropy rate, Examples of Markov chains, Forward algorithm, Forward-backward algorithm, Gene prediction, GLIMMER, Google matrix, Hidden Markov model, Iterative Viterbi decoding, Kalman filter, Language model, Markovian discrimination, Markov chain geostatistics, Markov chain Monte Carlo, Markov partition, Markov property, Markov switching multifractal, Mark V Shaney, Maximum-entropy Markov model, Models of DNA evolution, Multiple sequence alignment, PageRank, Part-of-speech tagging, Path dependence, Population process, Pop music automation, Quantum Markov chain, Queueing model, Queueing theory, Reinforcement learning, Snakes and Ladders, Soft output Viterbi algorithm, Stochastic matrix, Transiogram, Variable-order Bayesian network, Variable-order Markov model.

Excerpt: The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on any single measurement alone. More formally, the Kalman filter operates recursively on streams of noisy input data to produce a statistically optimal estimate of the underlying system state. The filter is named for Rudolf (Rudy) E. Kalman, one of the primary developers of its theory.

The Kalman filter has numerous applications in technology. A common application is the guidance, navigation and control of vehicles, particularly aircraft and spacecraft. The Kalman filter is also widely applied in time series analysis, in fields such as signal processing and econometrics.

The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight given to estimates with higher certainty. Because of the algorithm's recursive nature, it can run in real time using only the present input measurements and the previously calculated state; no additional past information is required.

From a theoretical standpoint, the main assumption of the Kalman filter is that the underlying system is a linear dynamical system and that all error terms and measurements have a Gaussian distribution (often a multivariate Gaussian distribution). Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The underlying model is a Bayesian model similar to a hidden Markov model, but where the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.
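
The two-step predict/update cycle described in the excerpt can be illustrated with a short sketch. The example below is a minimal one-dimensional Kalman filter under an assumed random-walk state model; the function name kalman_1d, the noise variances q and r, and the simulated measurements are illustrative assumptions, not material from the book or the excerpted article.

```python
# Minimal sketch of the Kalman filter's predict/update cycle for a
# one-dimensional state (e.g. estimating a slowly drifting quantity from
# noisy readings). Model, noise values, and data are illustrative only.

def kalman_1d(measurements, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Run a scalar Kalman filter over a sequence of noisy measurements.

    q  -- process noise variance (how much the true state may drift per step)
    r  -- measurement noise variance
    x0 -- initial state estimate, p0 -- initial estimate variance
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Prediction step: project the state and its uncertainty forward.
        # Under a random-walk model the predicted state is unchanged and
        # its uncertainty grows by the process noise.
        x_pred = x
        p_pred = p + q

        # Update step: blend the prediction with the new measurement z via
        # the Kalman gain k, a weighted average that favors whichever of
        # the prediction or the measurement is more certain.
        k = p_pred / (p_pred + r)
        x = x_pred + k * (z - x_pred)
        p = (1.0 - k) * p_pred

        estimates.append(x)
    return estimates


if __name__ == "__main__":
    import random
    random.seed(0)
    true_value = 1.25
    noisy = [true_value + random.gauss(0.0, 0.1) for _ in range(50)]
    print(kalman_1d(noisy)[-1])  # converges toward the true value ~1.25
```

Note how the loop uses only the current measurement and the previously computed estimate, reflecting the recursive, real-time nature described above.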