LDR 02751cam a2200325 i 4500
001 9944188107626
005 20230927105205.0
008 190114t20192019maua b 001 0 eng d
020 |a9780692196380
020 |a0692196382
035 |a(OCoLC)ocn1081372892
040 |aYDX|beng|erda|cYDX|dSEA|dPUL|dOCLCF|dNLHHG|dPAU|dOCLCO|dSTF|dOCLCO|dUPM|dOCLCO|dIPL|dGUA|dOCLCO|dP4A|dCOO|dOCLCO|dBCD|dGRU|dORZ|dNLMVD|dU9X|dOCLCQ|dBWBCA
050 4 |aQA184.2|b.S77 2019
082 04 |a512.5
090 |aQA184.2|b.S77 2019
100 1 |aStrang, Gilbert,|eauthor.
245 10 |aLinear algebra and learning from data /|cGilbert Strang.
264 1 |aWellesley, MA :|bWellesley-Cambridge Press,|c[2019]
264 4 |c©2019
300 |axiii, 432 pages :|billustrations ;|c25 cm
336 |atext|btxt|2rdacontent
337 |aunmediated|bn|2rdamedia
338 |avolume|bnc|2rdacarrier
504 |aIncludes bibliographical references and indexes.
505 0 |aDeep learning and neural nets -- Preface and acknowledgements -- Part I: Highlights of linear algebra -- Part II: Computations with large matrices -- Part III: Low rank and compressed sensing -- Part IV: Special matrices -- Part V: Probability and statistics -- Part VI: Optimization -- Part VII: Learning from data -- Books on machine learning -- Eigenvalues and singular values : rank one -- Codes and algorithms for numerical linear algebra -- Counting parameters in the basic factorizations -- Index of authors -- Index -- Index of symbols.
520 |aThis is a textbook to help readers understand the steps that lead to deep learning. Linear algebra comes first, especially singular values, least squares, and matrix factorizations. Often the goal is a low-rank approximation A = CR (column-row) to a large matrix of data, to see its most important part. This uses the full array of applied linear algebra, including randomization for very large matrices. Then deep learning creates a large-scale optimization problem for the weights, solved by gradient descent or, better, stochastic gradient descent. Finally, the book develops the architectures of fully connected neural nets and of Convolutional Neural Nets (CNNs) to find patterns in data. Audience: This book is for anyone who wants to learn how data is reduced and interpreted by matrix methods and to understand those methods. Based on the second linear algebra course taught by Professor Strang, whose lectures are widely known, it starts from scratch (the four fundamental subspaces) and is fully accessible without the first text.
650 0 |aAlgebras, Linear|vTextbooks.
650 0 |aMathematical optimization|vTextbooks.
650 0 |aMathematical statistics|vTextbooks.