This is the first comprehensive introduction to computational learning theory. The author's uniform presentation of fundamental results and their applications offers AI researchers a theoretical perspective on the problems they study. The book presents tools for the analysis of probabilistic models of learning, tools that crisply classify what is and is not efficiently learnable. After a general introduction to Valiant's PAC paradigm and the important notion of the Vapnik-Chervonenkis dimension, the author explores specific topics such as finite automata and neural networks. The presentation is intended for a broad audience; the author's ability to motivate and pace discussions for beginners has been praised by reviewers. Each chapter contains numerous examples and exercises, as well as a useful summary of important results. An excellent introduction to the area, suitable either for a first course or as a component in general machine learning and advanced AI courses, it is also an important reference for AI researchers.