
Visual Attributes.

By: Feris, Rogerio Schmidt.
Contributor(s): Lampert, Christoph | Parikh, Devi.
Material type: Text
Series: eBooks on Demand. Advances in Computer Vision and Pattern Recognition
Publisher: Cham : Springer International Publishing, 2017
Copyright date: ©2017
Description: 1 online resource (362 pages)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783319500775
Subject(s): Computer science
Additional physical formats: Print version: Visual Attributes
DDC classification: 006.42
Online resources: Click here to view this ebook.
Contents:
Preface -- Contents -- 1 Introduction to Visual Attributes -- 1.1 Overview of the Chapters -- References -- Part I Attribute-Based Recognition -- 2 An Embarrassingly Simple Approach to Zero-Shot Learning -- 2.1 Introduction -- 2.2 Related Work -- 2.3 Embarrassingly Simple ZSL -- 2.3.1 Regularisation and Loss Function Choices -- 2.4 Risk Bounds -- 2.4.1 Simple ZSL as a Domain Adaptation Problem -- 2.4.2 Risk Bounds for Domain Adaptation -- 2.5 Experiments -- 2.5.1 Synthetic Experiments -- 2.5.2 Real Data Experiments -- 2.6 Discussion -- References -- 3 In the Era of Deep Convolutional Features: Are Attributes Still Useful Privileged Data? -- 3.1 Introduction -- 3.2 Related Work -- 3.3 Learning Using Privileged Information -- 3.3.1 Maximum-Margin Model 1: SVM+ -- 3.3.2 Maximum-Margin Model 2: Margin Transfer -- 3.3.3 How Is Information Being Transferred? -- 3.4 Experiments -- 3.4.1 Object Recognition in Images -- 3.4.2 Recognizing Easy from Hard Images -- 3.5 Conclusion -- References -- 4 Divide, Share, and Conquer: Multi-task Attribute Learning with Selective Sharing -- 4.1 Introduction -- 4.2 Learning Decorrelated Attributes -- 4.2.1 Approach -- 4.2.2 Experiments and Results -- 4.3 Learning Analogous Category-Sensitive Attributes -- 4.3.1 Approach -- 4.3.2 Experiments and Results -- 4.4 Related Work -- 4.4.1 Attributes as Semantic Features -- 4.4.2 Attribute Correlations -- 4.4.3 Differentiating Attributes -- 4.4.4 Multi-task Learning (MTL) -- 4.5 Conclusion -- References -- Part II Relative Attributes and Their Application to Image Search -- 5 Attributes for Image Retrieval -- 5.1 Introduction -- 5.2 Comparative Relevance Feedback Using Attributes -- 5.2.1 Learning to Predict Relative Attributes -- 5.2.2 Relative Attribute Feedback -- 5.2.3 Experimental Validation -- 5.3 Actively Guiding the User's Relevance Feedback.
5.3.1 Attribute Binary Search Trees -- 5.3.2 Predicting the Relevance of an Image -- 5.3.3 Actively Selecting an Informative Comparison -- 5.3.4 Experimental Validation -- 5.4 Accounting for Differing User Perceptions of Attributes -- 5.4.1 Adapting Attributes -- 5.4.2 Experimental Validation -- 5.5 Discovering Attribute Shades of Meaning -- 5.5.1 Collecting Personal Labels and Label Explanations -- 5.5.2 Discovering Schools and Training Per-School Adapted Models -- 5.5.3 Experimental Validation -- 5.6 Discussion and Conclusion -- References -- 6 Fine-Grained Comparisons with Attributes -- 6.1 Introduction -- 6.2 Related Work -- 6.3 Ranking Functions for Relative Attributes -- 6.4 Fine-Grained Visual Comparisons -- 6.4.1 Local Learning for Visual Comparisons -- 6.4.2 Selecting Fine-Grained Neighboring Pairs -- 6.4.3 Fine-Grained Attribute Zappos Dataset -- 6.4.4 Experiments and Results -- 6.4.5 Predicting Useful Neighborhoods -- 6.5 Just Noticeable Differences -- 6.5.1 Local Bayesian Model of Distinguishability -- 6.5.2 Experiments and Results -- 6.6 Discussion -- 6.7 Conclusion -- References -- 7 Localizing and Visualizing Relative Attributes -- 7.1 Introduction -- 7.2 Related Work -- 7.2.1 Visual Attributes -- 7.2.2 Visual Discovery -- 7.3 Approach -- 7.3.1 Initializing Candidate Visual Chains -- 7.3.2 Iteratively Growing Each Visual Chain -- 7.3.3 Ranking and Creating a Chain Ensemble -- 7.4 Results -- 7.4.1 Visualization of Discovered Visual Chains -- 7.4.2 Visualization of Discovered Spatial Extent -- 7.4.3 Relative Attribute Ranking Accuracy -- 7.4.4 Ablation Studies -- 7.4.5 Limitations -- 7.4.6 Application: Attribute Editor -- 7.5 Conclusion -- References -- Part III Describing People Based on Attributes -- 8 Deep Learning Face Attributes for Detection and Alignment -- 8.1 Introduction -- 8.2 Learning to Recognize Face Attributes.
8.2.1 A Large Margin Local Embedding Approach -- 8.3 Face Attributes for Face Localization and Detection -- 8.3.1 A Cascaded Approach for Face Localization and Attribute Recognition -- 8.3.2 From Facial Parts Responses to Face Detection -- 8.4 Face Attributes for Face Alignment -- 8.4.1 Attribute Tasks-Constrained Deep Convolutional Network -- 8.5 Discussion -- References -- 9 Visual Attributes for Fashion Analytics -- 9.1 Motivation and Related Work -- 9.2 Recommendation Systems -- 9.2.1 ``Hi, magic closet, tell me what to wear!'' -- 9.2.2 ``Wow You Are so Beautiful Today!'' -- 9.3 Fine-Grained Clothing Retrieval System -- 9.4 Data Collection -- 9.4.1 Dual Attribute-Aware Ranking Network -- 9.4.2 Clothing Detection -- 9.4.3 Cross-Domain Clothing Retrieval -- 9.4.4 Experiments and Results -- 9.5 Summary -- References -- Part IV Defining a Vocabulary of Attributes -- 10 A Taxonomy of Part and Attribute Discovery Techniques -- 10.1 Introduction -- 10.1.1 Overview -- 10.2 Non-semantic PnAs -- 10.2.1 Attributes as Embeddings -- 10.2.2 Part Discovery Based on Appearance and Geometry -- 10.3 Semantic Language-Based PnAs -- 10.3.1 Expert Defined Attributes -- 10.3.2 Attribute Discovery by Automatically Mining Text -- 10.3.3 Interactive Discovery of Nameable Attributes -- 10.3.4 Expert Defined Parts -- 10.4 Semantic Language-Free PnAs -- 10.4.1 Attribute Discovery from Similarity Comparisons -- 10.4.2 Part Discovery from Correspondence Annotations -- 10.5 Conclusion -- References -- 11 The SUN Attribute Database: Organizing Scenes by Affordances, Materials, and Layout -- 11.1 Attribute-Based Representations of Scenes -- 11.2 Building a Taxonomy of Scene Attributes from Human Descriptions -- 11.3 Building the SUN Attribute Database -- 11.4 Exploring Scenes in Attribute Space -- 11.5 Recognizing Scene Attributes.
11.6 Predicting Scene Categories from Attributes -- 11.6.1 Predictive Power of Attributes -- 11.6.2 Scene Classification -- 11.7 Discussion -- References -- Part V Attributes and Language -- 12 Attributes as Semantic Units Between Natural Language and Visual Recognition -- 12.1 Introduction -- 12.1.1 Challenges for Combining Visual and Linguistic Modalities -- 12.1.2 Overview and Outline -- 12.2 Linguistic Knowledge for Recognition of Novel Categories -- 12.2.1 Semantic Relatedness Mined from Language Resources for Zero-Shot Recognition -- 12.2.2 Propagated Semantic Transfer -- 12.2.3 Composite Activity Recognition with Attributes and Script Data -- 12.3 Image and Video Description Using Compositional Attributes -- 12.3.1 Translating Image and Video Content to Natural Language Descriptions -- 12.3.2 Coherent Multi-sentence Video Description with Variable Level of Detail -- 12.3.3 Describing Movies with an Intermediate Layer of Attributes -- 12.3.4 Describing Novel Object Categories -- 12.4 Grounding Text in Images -- 12.4.1 Unsupervised Grounding -- 12.4.2 Semi-supervised and Fully Supervised Grounding -- 12.4.3 Grounding Results -- 12.5 Visual Question Answering -- 12.6 Conclusions -- References -- 13 Grounding the Meaning of Words with Visual Attributes -- 13.1 Introduction -- 13.2 Background: Models of Word Meaning -- 13.2.1 Distributional Models -- 13.2.2 Models Based on Human-Produced Attributes -- 13.2.3 Grounded Semantic Spaces -- 13.3 Representing Word Meaning with Attributes from Images and Text -- 13.3.1 Visual Attributes from Images -- 13.3.2 Textual Attributes -- 13.4 Visually Grounding Word Meaning with Attributes -- 13.4.1 Multimodal Deep Learning -- 13.4.2 Background -- 13.4.3 Grounded Semantic Representations with Autoencoders -- 13.5 Experiments -- 13.5.1 Experiment 1: Word Similarity.
13.5.2 Experiment 2: Concept Categorisation -- 13.6 Conclusions -- References -- Index.
Item type: Electronic Book
Current location: UT Tyler Online
Call number: QA75.5-76.95
URL: http://ebookcentral.proquest.com/lib/uttyler/detail.action?docID=4828410
Status: Available
Barcode: EBC4828410

Description based on publisher supplied metadata and other sources.
