000 03751cam a22004218i 4500
001 21440656
003 BD-ChCU
005 20240324102125.0
008 200208s2020    enk      b    001 0 eng  
010 _a 2019053276
020 _a9781108486828
_q(hardback)
020 _z9781108571401
_q(epub)
040 _aLBSOR/DLC
_beng
_erda
_cDLC
042 _apcc
050 0 0 _aQA402.5
_b.L367 2020
082 0 0 _a519.3
_bL351b
_223
100 1 _aLattimore, Tor,
_d1987-
_eauthor.
245 1 0 _aBandit algorithms /
_cTor Lattimore and Csaba Szepesvári.
263 _a2005
264 1 _aCambridge ;
_aNew York, NY :
_bCambridge University Press,
_c2020.
300 _apages cm
336 _atext
_btxt
_2rdacontent
337 _aunmediated
_bn
_2rdamedia
338 _avolume
_bnc
_2rdacarrier
504 _aIncludes bibliographical references and index.
505 0 _aFoundations of probability -- Stochastic processes and Markov chains -- Stochastic bandits -- Concentration of measure -- The explore-then-commit algorithm -- The upper confidence bound algorithm -- The upper confidence bound algorithm: asymptotic optimality -- The upper confidence bound algorithm: minimax optimality -- The upper confidence bound algorithm: Bernoulli noise -- The Exp3 algorithm -- The Exp3-IX algorithm -- Lower bounds: basic ideas -- Foundations of information theory -- Minimax lower bounds -- Instance dependent lower bounds -- High probability lower bounds -- Contextual bandits -- Stochastic linear bandits -- Confidence bounds for least squares estimators -- Optimal design for least squares estimators -- Stochastic linear bandits with finitely many arms -- Stochastic linear bandits with sparsity -- Minimax lower bounds for stochastic linear bandits -- Asymptotic lower bounds for stochastic linear bandits -- Foundations of convex analysis -- Exp3 for adversarial linear bandits -- Follow the regularized leader and mirror descent -- The relation between adversarial and stochastic linear bandits -- Combinatorial bandits -- Non-stationary bandits -- Ranking -- Pure exploration -- Foundations of Bayesian learning -- Bayesian bandits -- Thompson sampling -- Partial monitoring -- Markov decision processes.
520 _a"Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes"--
_cProvided by publisher.
650 0 _aMathematical optimization.
650 0 _aProbabilities.
650 0 _aDecision making
_xMathematical models.
650 0 _aResource allocation
_xMathematical models.
650 0 _aAlgorithms.
700 1 _aSzepesvári, Csaba,
_eauthor.
776 0 8 _iOnline version:
_aLattimore, Tor, 1987-
_tBandit algorithms
_dCambridge ; New York, NY : Cambridge University Press, 2020
_z9781108571401
_w(DLC) 2019053277
906 _a7
_bcbc
_corignew
_d1
_eecip
_f20
_gy-gencatlg
942 _2ddc
_cBK
_n0
999 _c103924
_d103924