
Explainable and Interpretable Models in Computer Vision and Machine Learning.

By: Escalante, Hugo Jair.
Contributor(s): Escalera, Sergio | Guyon, Isabelle | Baró, Xavier | Güçlütürk, Yağmur | Güçlü, Umut | van Gerven, Marcel.
Material type: Text
Series: eBooks on Demand. The Springer Series on Challenges in Machine Learning Ser.
Publisher: Cham : Springer, 2019
Copyright date: ©2018
Description: 1 online resource (305 pages)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783319981314
Subject(s): Machine learning | Computer vision
Genre/Form: Electronic books
Additional physical formats: Print version: Explainable and Interpretable Models in Computer Vision and Machine Learning
DDC classification: 006.31
LOC classification: QA75.5-76.95
Online resources: Click here to view this ebook.
Contents:
Intro -- Foreword -- Preface -- Acknowledgements -- Contents -- Contributors -- Part I Notions and Concepts on Explainability and Interpretability -- Considerations for Evaluation and Generalization in Interpretable Machine Learning -- 1 Introduction -- 2 Defining Interpretability -- 3 Defining the Interpretability Need -- 4 Evaluation -- 5 Considerations for Generalization -- 6 Conclusion: Recommendations for Researchers -- References -- Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges -- 1 Introduction -- 1.1 The Components of Explainability -- 1.2 Users and Laws -- 1.3 Explanation and DNNs -- 2 Users and Their Concerns -- 2.1 Case Study: Autonomous Driving -- 3 Laws and Regulations -- 4 Explanation -- 5 Explanation Methods -- 5.1 Desirable Properties of Explainers -- 5.2 A Taxonomy for Explanation Methods -- 5.2.1 Rule-Extraction Methods -- 5.2.2 Attribution Methods -- 5.2.3 Intrinsic Methods -- 6 Addressing General Concerns -- 7 Discussion -- References -- Part II Explainability and Interpretability in Machine Learning -- Learning Functional Causal Models with Generative Neural Networks -- 1 Introduction -- 2 Problem Setting -- 2.1 Notations -- 2.2 Assumptions and Properties -- 3 State of the Art -- 3.1 Learning the CPDAG -- 3.1.1 Constraint-Based Methods -- 3.1.2 Score-Based Methods -- 3.1.3 Hybrid Algorithms -- 3.2 Exploiting Asymmetry Between Cause and Effect -- 3.2.1 The Intuition -- 3.2.2 Restriction on the Class of Causal Mechanisms Considered -- 3.2.3 Pairwise Methods -- 3.3 Discussion -- 4 Causal Generative Neural Networks -- 4.1 Modeling Continuous FCMs with Generative Neural Networks -- 4.1.1 Generative Model and Interventions -- 4.2 Model Evaluation -- 4.2.1 Scoring Metric -- 4.2.2 Representational Power of CGNN -- 4.3 Model Optimization -- 4.3.1 Parametric (Weight) Optimization.
4.3.2 Non-parametric (Structure) Optimization -- 4.3.3 Identifiability of CGNN up to Markov Equivalence Classes -- 5 Experiments -- 5.1 Experimental Setting -- 5.2 Learning Bivariate Causal Structures -- 5.2.1 Benchmarks -- 5.2.2 Baseline Approaches -- 5.2.3 Hyper-Parameter Selection -- 5.2.4 Empirical Results -- 5.3 Identifying v-structures -- 5.4 Multivariate Causal Modeling Under Causal Sufficiency Assumption -- 5.4.1 Results on Artificial Graphs with Additive and Multiplicative Noises -- 5.4.2 Result on Biological Data -- 5.4.3 Results on Biological Real-World Data -- 6 Towards Predicting Confounding Effects -- 6.1 Principle -- 6.2 Experimental Validation -- 6.2.1 Benchmarks -- 6.2.2 Baselines -- 6.2.3 Results -- 7 Discussion and Perspectives -- Appendix -- The Maximum Mean Discrepancy (MMD) Statistic -- Proofs -- Table of Scores for the Experiments on Cause-Effect Pairs -- Table of Scores for the Experiments on Graphs -- References -- Learning Interpretable Rules for Multi-Label Classification -- 1 Introduction -- 2 Multi-Label Classification -- 2.1 Problem Definition -- 2.2 Dependencies in Multi-Label Classification -- 2.3 Evaluation of Multi-Label Predictions -- 2.3.1 Bipartition Evaluation Functions -- 2.3.2 Multi-Label Evaluation Functions -- 2.3.3 Aggregation and Averaging -- 3 Multi-Label Rule Learning -- 3.1 Rule Learning -- 3.1.1 Predictive Rule Learning -- 3.1.2 Descriptive Rule Learning -- 3.2 Multi-Label Rules -- 3.3 Challenges for Multi-Label Rule Learning -- 4 Discovery of Multi-Label Rules -- 4.1 Association Rule-Based Algorithms -- 4.2 Choosing Loss-Minimizing Rule Heads -- 4.2.1 Anti-Monotonicity and Decomposability -- 4.2.2 Efficient Generation of Multi-Label Heads -- 5 Learning Predictive Rule-Based Multi-Label Models -- 5.1 Layered Multi-Label Learning -- 5.1.1 Stacked Binary Relevance -- 5.2 Multi-Label Separate-and-Conquer.
5.2.1 A Multi-Label Covering Algorithm -- 6 Case Studies -- 6.1 Case Study 1: Single-Label Head Rules -- 6.1.1 Exemplary Rule Models -- 6.1.2 Visualization of Dependencies -- 6.1.3 Discussion -- 6.2 Case Study 2: Multi-Label Heads -- 6.2.1 Exemplary Rule Models -- 6.2.2 Predictive Performance -- 6.2.3 Computational Cost -- 7 Conclusion -- References -- Structuring Neural Networks for More Explainable Predictions -- 1 Introduction -- 2 Explanation Techniques -- 2.1 Sensitivity Analysis -- 2.2 Deep Taylor Decomposition -- 2.3 Theoretical Limitations -- 3 Convolutional Neural Networks -- 3.1 Experiments -- 4 Recurrent Neural Networks -- 4.1 Experiments -- 5 Conclusion -- References -- Part III Explainability and Interpretability in Computer Vision -- Generating Post-Hoc Rationales of Deep Visual Classification Decisions -- 1 Introduction -- 2 Related Work -- 3 Generating Visual Explanations (GVE) Model -- 3.1 Relevance Loss -- 3.2 Discriminative Loss -- 4 Experimental Setup -- 5 Results -- 5.1 Quantitative Results -- 5.2 Qualitative Results -- 6 Conclusion -- References -- Ensembling Visual Explanations -- 1 Introduction -- 2 Background and Related Work -- 3 Algorithms for Ensembling Visual Explanations -- 3.1 Weighted Average Ensemble Explanation -- 3.2 Penalized Weighted Average Ensemble Explanation -- 3.3 Agreement with N Systems -- 4 Evaluating Explanations -- 4.1 Comparison Metric -- 4.2 Uncovering Metric -- 4.3 Crowd-Sourced Hyper-Parameter Tuning -- 5 Experimental Results and Discussion -- 6 Conclusions and Future Directions -- References -- Explainable Deep Driving by Visualizing Causal Attention -- 1 Introduction -- 2 Related Work -- 2.1 End-to-End Learning for Self-Driving Cars -- 2.2 Visual Explanations -- 3 Attention-Based Explainable Deep Driving Model -- 3.1 Preprocessing -- 3.2 Encoder: Convolutional Feature Extraction.
3.3 Coarse-Grained Decoder: Visual (Spatial) Attention -- 3.4 Fine-Grained Decoder: Causality Test -- 4 Result -- 4.1 Datasets -- 4.2 Training and Evaluation Details -- 4.3 Effect of Choosing Penalty Coefficient λ -- 4.4 Effect of Varying Smoothing Factors -- 4.5 Quantitative Analysis -- 4.6 Effect of Causal Visual Saliencies -- 5 Discussion -- 6 Conclusion -- References -- Part IV Explainability and Interpretability in First Impressions Analysis -- Psychology Meets Machine Learning: Interdisciplinary Perspectives on Algorithmic Job Candidate Screening -- 1 Introduction: Algorithmic Opportunities for Job Candidate Screening -- 1.1 The Need for Explainability -- 1.2 Purpose and Outline of the Chapter -- 2 Common Methodological Focus Areas -- 2.1 Psychology -- 2.1.1 Psychometrics -- 2.1.2 Reliability -- 2.1.3 Validity -- 2.1.4 Experimentation and the Nomological Network -- 2.2 Computer Science and Machine Learning -- 2.2.1 The Abstract Machine Learning Perspective -- 2.2.2 Machine Learning in Applied Domains -- 2.3 Contrasting Focus Areas in Psychology and Machine Learning -- 2.4 Conclusion -- 3 The Personnel Selection Problem -- 3.1 How to Identify Which KSAOs Are Needed? -- 3.2 How to Measure KSAOs? -- 3.3 Dealing with Judgment -- 3.4 What Is Job Performance? -- 3.5 Conclusion -- 4 Use Case: An Explainable Solution for Multimodal Job Candidate Screening -- 4.1 The Chalearn Looking at People Job Candidate Screening Challenge -- 4.2 Dataset -- 4.3 General Framework of a Potential Explainable Solution -- 4.3.1 Chosen Features -- 4.3.2 Regression Model -- 4.3.3 Quantitative Performance -- 4.4 Opportunities for Explanation -- 4.5 Reflection -- 5 Acceptability -- 5.1 Applicants -- 5.2 Hiring Managers -- 6 Recommendations -- 6.1 Better Understanding of Methodology and Evaluation -- 6.1.1 Stronger Focus on Criterion Validity.
6.1.2 Combining Methodological Focus Points -- 6.2 Philosophical and Ethical Awareness -- 6.3 Explicit Decision Support -- 6.4 The Goal of Explanation -- 6.5 Conclusion -- References -- Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions -- 1 Introduction -- 2 Related Work -- 3 Job Candidate Screening Challenge -- 4 Proposed Method -- 4.1 Visual Feature Extraction -- 4.2 Acoustic Features -- 4.3 Classification -- 5 Experimental Results -- 5.1 Experimental Results Using Regression -- 5.2 Experimental Results Using Classification -- 6 Explainability Analysis -- 6.1 The Effect of Ethnicity, Age, and Sex -- 7 Discussion and Conclusions -- References -- On the Inherent Explainability of Pattern Theory-Based Video Event Interpretations -- 1 Introduction -- 2 Explainable Model for Video Interpretation -- 2.1 Symbolic Representation of Concepts -- 2.2 Constructing Contextualization Cues -- 2.3 Expressing Semantic Relationships -- 2.3.1 Bond Compatibility -- 2.3.2 Types -- 2.3.3 Quantification -- 2.4 Constructing Interpretations -- 2.4.1 Probability -- 2.4.2 Inherent Explainability -- 2.5 Inference -- 3 Generating Explanations -- 3.1 Understanding the Overall Interpretation (Q1) -- 3.2 Understanding Provenance of Concepts (Q3) -- 3.3 Handling What-Ifs -- 3.3.1 Alternatives to Grounded Concept Generators -- 3.3.2 Alternative Activity Interpretations -- 3.3.3 Why Not a Given Interpretation? -- 4 Conclusion and Future Work -- References.
Item type: Electronic Book
Current location: UT Tyler Online
Call number: QA75.5-76.95
URL: https://ebookcentral.proquest.com/lib/uttyler/detail.action?docID=5609375
Status: Available
Barcode: EBC5609375

Description based on publisher supplied metadata and other sources.
