Author Hall, Patrick.
Alt author Curtis, James.
Pandey, Parul.
Title Machine Learning for High-Risk Applications.
Publisher Québec : O'Reilly Media, Incorporated, 2023.
Edition 1st ed.
Descript 1 online resource (469 pages)
Content text txt
Media computer c
Carrier online resource cr
Contents Cover -- Copyright -- Table of Contents -- Foreword -- Preface -- Who Should Read This Book -- What Readers Will Learn -- Alignment with the NIST AI Risk Management Framework -- Book Outline -- Part I -- Part II -- Part III -- Example Datasets -- Taiwan Credit Data -- Kaggle Chest X-Ray Data -- Conventions Used in This Book -- Online Figures -- Using Code Examples -- O'Reilly Online Learning -- How to Contact Us -- Acknowledgments -- Patrick Hall -- James Curtis -- Parul Pandey -- Part I. Theories and Practical Applications of AI Risk Management -- Chapter 1. Contemporary Machine Learning Risk Management -- A Snapshot of the Legal and Regulatory Landscape -- The Proposed EU AI Act -- US Federal Laws and Regulations -- State and Municipal Laws -- Basic Product Liability -- Federal Trade Commission Enforcement -- Authoritative Best Practices -- AI Incidents -- Cultural Competencies for Machine Learning Risk Management -- Organizational Accountability -- Culture of Effective Challenge -- Diverse and Experienced Teams -- Drinking Our Own Champagne -- Moving Fast and Breaking Things -- Organizational Processes for Machine Learning Risk Management -- Forecasting Failure Modes -- Model Risk Management Processes -- Beyond Model Risk Management -- Case Study: The Rise and Fall of Zillow's iBuying -- Fallout -- Lessons Learned -- Resources -- Chapter 2. Interpretable and Explainable Machine Learning -- Important Ideas for Interpretability and Explainability -- Explainable Models -- Additive Models -- Decision Trees -- An Ecosystem of Explainable Machine Learning Models -- Post Hoc Explanation -- Feature Attribution and Importance -- Surrogate Models -- Plots of Model Performance -- Cluster Profiling -- Stubborn Difficulties of Post Hoc Explanation in Practice -- Pairing Explainable Models and Post Hoc Explanation -- Case Study: Graded by Algorithm.
Resources -- Chapter 3. Debugging Machine Learning Systems for Safety and Performance -- Training -- Reproducibility -- Data Quality -- Model Specification for Real-World Outcomes -- Model Debugging -- Software Testing -- Traditional Model Assessment -- Common Machine Learning Bugs -- Residual Analysis -- Sensitivity Analysis -- Benchmark Models -- Remediation: Fixing Bugs -- Deployment -- Domain Safety -- Model Monitoring -- Case Study: Death by Autonomous Vehicle -- Fallout -- An Unprepared Legal System -- Lessons Learned -- Resources -- Chapter 4. Managing Bias in Machine Learning -- ISO and NIST Definitions for Bias -- Systemic Bias -- Statistical Bias -- Human Biases and Data Science Culture -- Legal Notions of ML Bias in the United States -- Who Tends to Experience Bias from ML Systems -- Harms That People Experience -- Testing for Bias -- Testing Data -- Traditional Approaches: Testing for Equivalent Outcomes -- A New Mindset: Testing for Equivalent Performance Quality -- On the Horizon: Tests for the Broader ML Ecosystem -- Summary Test Plan -- Mitigating Bias -- Technical Factors in Mitigating Bias -- The Scientific Method and Experimental Design -- Bias Mitigation Approaches -- Human Factors in Mitigating Bias -- Case Study: The Bias Bug Bounty -- Resources -- Chapter 5. Security for Machine Learning -- Security Basics -- The Adversarial Mindset -- CIA Triad -- Best Practices for Data Scientists -- Machine Learning Attacks -- Integrity Attacks: Manipulated Machine Learning Outputs -- Confidentiality Attacks: Extracted Information -- General ML Security Concerns -- Countermeasures -- Model Debugging for Security -- Model Monitoring for Security -- Privacy-Enhancing Technologies -- Robust Machine Learning -- General Countermeasures -- Case Study: Real-World Evasion Attacks -- Evasion Attacks -- Lessons Learned -- Resources.
Part II. Putting AI Risk Management into Action -- Chapter 6. Explainable Boosting Machines and Explaining XGBoost -- Concept Refresher: Machine Learning Transparency -- Additivity Versus Interactions -- Steps Toward Causality with Constraints -- Partial Dependence and Individual Conditional Expectation -- Shapley Values -- Model Documentation -- The GAM Family of Explainable Models -- Elastic Net-Penalized GLM with Alpha and Lambda Search -- Generalized Additive Models -- GA2M and Explainable Boosting Machines -- XGBoost with Constraints and Post Hoc Explanation -- Constrained and Unconstrained XGBoost -- Explaining Model Behavior with Partial Dependence and ICE -- Decision Tree Surrogate Models as an Explanation Technique -- Shapley Value Explanations -- Problems with Shapley values -- Better-Informed Model Selection -- Resources -- Chapter 7. Explaining a PyTorch Image Classifier -- Explaining Chest X-Ray Classification -- Concept Refresher: Explainable Models and Post Hoc Explanation Techniques -- Explainable Models Overview -- Occlusion Methods -- Gradient-Based Methods -- Explainable AI for Model Debugging -- Explainable Models -- ProtoPNet and Variants -- Other Explainable Deep Learning Models -- Training and Explaining a PyTorch Image Classifier -- Training Data -- Addressing the Dataset Imbalance Problem -- Data Augmentation and Image Cropping -- Model Training -- Evaluation and Metrics -- Generating Post Hoc Explanations Using Captum -- Evaluating Model Explanations -- The Robustness of Post Hoc Explanations -- Conclusion -- Resources -- Chapter 8. Selecting and Debugging XGBoost Models -- Concept Refresher: Debugging ML -- Model Selection -- Sensitivity Analysis -- Residual Analysis -- Remediation -- Selecting a Better XGBoost Model -- Sensitivity Analysis for XGBoost -- Stress Testing XGBoost -- Stress Testing Methodology.
Altering Data to Simulate Recession Conditions -- Adversarial Example Search -- Residual Analysis for XGBoost -- Analysis and Visualizations of Residuals -- Segmented Error Analysis -- Modeling Residuals -- Remediating the Selected Model -- Overemphasis of PAY_0 -- Miscellaneous Bugs -- Conclusion -- Resources -- Chapter 9. Debugging a PyTorch Image Classifier -- Concept Refresher: Debugging Deep Learning -- Debugging a PyTorch Image Classifier -- Data Quality and Leaks -- Software Testing for Deep Learning -- Sensitivity Analysis for Deep Learning -- Remediation -- Sensitivity Fixes -- Conclusion -- Resources -- Chapter 10. Testing and Remediating Bias with XGBoost -- Concept Refresher: Managing ML Bias -- Model Training -- Evaluating Models for Bias -- Testing Approaches for Groups -- Individual Fairness -- Proxy Bias -- Remediating Bias -- Preprocessing -- In-processing -- Postprocessing -- Model Selection -- Conclusion -- Resources -- Chapter 11. Red-Teaming XGBoost -- Concept Refresher -- CIA Triad -- Attacks -- Countermeasures -- Model Training -- Attacks for Red-Teaming -- Model Extraction Attacks -- Adversarial Example Attacks -- Membership Attacks -- Data Poisoning -- Backdoors -- Conclusion -- Resources -- Part III. Conclusion -- Chapter 12. How to Succeed in High-Risk Machine Learning -- Who Is in the Room? -- Science Versus Engineering -- The Data-Scientific Method -- The Scientific Method -- Evaluation of Published Results and Claims -- Apply External Standards -- Commonsense Risk Mitigation -- Conclusion -- Resources -- Index -- About the Authors -- Colophon.
ISBN 9781098102395 (electronic bk.)