Following the success of our previous 'top AI books' reading list, here are ten more great books for you to peruse. Each entry below includes a short summary and a link to where the book can be downloaded for free or purchased. Didn't see our first list of ten books you need to read this year? See it here.
1) Practical Deep Learning for Cloud, Mobile & Edge
Two years of development and 600 pages later, the Practical Deep Learning Book for Cloud, Mobile & Edge has been released, and it stresses one word - 'Practical'. Suitable for those with a programming background, from beginners and hobbyists to veteran data scientists, this book uses Keras & TensorFlow to cover the breadth of how to develop industry AI projects: from using transfer learning to train models in minutes to deploying at scale for millions of requests on the cloud, from mobile apps to robots, and from video game simulations to eventually using reinforcement learning to build miniature self-driving cars. The authors take a somewhat humorous approach to complex issues, making this an approachable and user-friendly read, and feature motivating insights from industry veterans including François Chollet (Keras), Jeremy Howard (Fast.AI), Anima Anandkumar (NVIDIA) and more. Featured by the official Keras website as a learning resource, it's already been used by a student team competing in RoboRace's 200 mph autonomous driving championship.
2) Pattern Recognition and Machine Learning
Whilst this book is over ten years old, it's certainly worthy of a slot on our reading list! Covering a decade of machine learning applications, Christopher Bishop discusses advances in the field, including, but not limited to, Bayesian methods and pattern recognition. Whilst aimed at advanced undergraduates and first-year PhD students, as well as researchers and practitioners, the author notes that no previous knowledge of pattern recognition or machine learning concepts is assumed. See more here.
3) Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2
The recent third edition of this book builds on the original 12 chapters, now with improved explanations based on reader feedback and updates to support the latest versions of NumPy, SciPy, and scikit-learn. The rewrite also adds a new section on Generative Adversarial Networks (GANs). This staggering 770-page book provides comprehensive coverage of ML and DL, touching on all relevant topics with the latest research and papers cited. Read more here.
4) Understanding Machine Learning: From Theory to Algorithms
This book provides a theoretical account of the fundamentals underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Read more on this book here.
5) Deep Reinforcement Learning Hands-On: Apply modern RL methods, with deep Q-networks, value iteration, policy gradients, TRPO, AlphaGo Zero and more
This inclusion, written by Maxim Lapan, evaluates high-profile RL methods, including value iteration, deep Q-networks, policy gradients, TRPO, PPO, DDPG, D4PG, evolution strategies and genetic algorithms. Maxim guides you through methods such as the cross-entropy method and policy gradients before helping you apply them to real-world environments, taking on both the Atari suite of virtual games and family favorites such as Connect 4. Read more here.
6) Neural Networks and Deep Learning: A Textbook
This inclusion focuses primarily on the theory and algorithms of deep learning. Charu Aggarwal explains that a grasp of the theory and algorithms of neural networks is essential to understanding the design concepts behind neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? Find out the answers in Charu's book here.
7) The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind
In this must-read, Minsky argues that emotions are different ways of thinking that our mind uses to increase our intelligence. Minsky's research has led to many advances in artificial intelligence, psychology, physical optics, mathematics, and the theory of computation, many of which he covers in this book, stating that if we can understand the step-by-step processes of the mind, we can build machines that emulate them. Read more here.
8) Big Data: A Revolution That Will Transform How We Live, Work, and Think
"Since Aristotle, we have fought to understand the causes behind everything. But this ideology is fading. In the age of big data, we can crunch an incomprehensible amount of information, providing us with invaluable insights about the what rather than the why." This fascinating book addresses current trends in the field, the dark side of data, future decision making and more. Read more on this great topic here.
9) Elements of Statistical Learning
This book covers a variety of important ideas spanning industries including biology, finance, and marketing within a common conceptual framework. Whilst Trevor Hastie et al.'s approach is statistical, the emphasis is on concepts rather than mathematics. Somewhat of a curveball for the list, this near-800-page book covers everything you need to know about statistics! Read more on the book here.
10) Uses and Risks of Business Chatbots: Guidelines for Purchasers in the Public and Private Sectors
This world-first summary of the evolution of 2D chatbots in websites, back ends of portals and social media apps, and conversationally advanced 3D mixed reality cognitive interfaces serves several purposes. It dissects some of the best-known case studies to emerge from the past two decades of tech giants launching the best chatbot, or the supposedly smartest intelligent virtual assistant. You can read more on Tania Peitzker's coverage of chatbots here.
Have we missed any that we should include next time? Let us know at [email protected]