# Parallel Processing for Artificial Intelligence 1

##### By: Kanal, L. N.

##### Contributor(s): Kitano, H. | Kumar, V. | Suttner, C. B.

- **Material type:** Text
- **Series:** eBooks on Demand; Machine Intelligence and Pattern Recognition
- **Publisher:** Amsterdam : Elsevier Science, 2015
- **Copyright date:** ©1994
- **Description:** 1 online resource (445 pages)
- **Content type:** text
- **Media type:** computer
- **Carrier type:** online resource
- **ISBN:** 9781483295749
- **Subject(s):** Artificial intelligence | Parallel processing (Electronic computers)
- **Genre/Form:** Electronic books
- **Additional physical formats:** Print version: Parallel Processing for Artificial Intelligence 1
- **DDC classification:** 006.3
- **Online resources:** http://ebookcentral.proquest.com/lib/uttyler/detail.action?docID=1877103

| Item type | Current location | Call number | URL | Status | Date due | Barcode |
|---|---|---|---|---|---|---|
| Electronic Book | UT Tyler Online | QA76.58 .P377 1994 | http://ebookcentral.proquest.com/lib/uttyler/detail.action?docID=1877103 | Available | | EBC1877103 |

- Front Cover
- Parallel Processing for Artificial Intelligence 1
- Copyright Page
- Table of Contents
- PREFACE
- EDITORS
- AUTHORS
- PART I: IMAGE PROCESSING
  - Chapter 1. A Perspective on Parallel Processing in Computer Vision and Image Understanding -- 1. Introduction -- 2. Parallelism in Vision Systems -- 3. Representation Based Classification of Vision Computations -- 4. Issues in Data and Computation Partitioning -- 5. Architectural Requirements -- 6. Future Directions -- Acknowledgments -- References
  - Chapter 2. On Supporting Rule-Based Image Interpretation Using a Distributed Memory Multicomputer -- 1. Introduction -- 2. Software and Hardware Strategies for Supporting RBS -- 3. AIMS: A Multi-Sensor Image Interpretation System -- 4. Parallel Implementation -- 5. Discussion -- 6. Conclusion -- References
  - Chapter 3. Parallel Affine Image Warping -- 1. Introduction -- 2. Forward versus Inverse Algorithms in Affine Image Warping -- 3. Other Important Characteristics of Affine Image Warping -- 4. Machines -- 5. Classification of Implementations -- 6. Systolic Methods -- 7. Data Partitioned Methods -- 8. A Scanline Method -- 9. A Sweep-Based Method -- 10. Conclusions -- References
  - Chapter 4. Image Processing on Reconfigurable Meshes with Buses -- Abstract -- 1. Introduction -- 2. Data Manipulation Operations -- 3. Area and Perimeter of Connected Components -- 4. Shrinking and Expanding -- 5. Clustering -- 6. Template Matching -- 7. Conclusions -- 8. References
- PART II: SEMANTIC NETWORKS
  - Chapter 5. Inheritance Operations in Massively Parallel Knowledge Representation -- 1. Massively Parallel Knowledge Representation -- 2. Schubert's Tree Encoding of IS-A Hierarchies -- 3. How to Achieve the Same Effect Without Trees -- 4. Parallelizing the Update Algorithm -- 5. Inheritance Terminology -- 6. Upward-Inductive Inheritance -- 7. Downward Inheritance Algorithm -- 8. Upward-Inductive Inheritance Algorithm -- 9. Experimental Results -- 10. Conclusions -- Acknowledgement -- References
  - Chapter 6. Providing Computationally Effective Knowledge Representation via Massive Parallelism -- 1. Introduction -- 2. Description of PARKA -- 3. Performance -- 4. Future & Related Work -- 5. Conclusion -- 6. Acknowledgments -- References
- PART III: PRODUCTION SYSTEMS
  - Chapter 7. Speeding Up Production Systems: From Concurrent Matching to Parallel Rule Firing -- 1. Introduction -- 2. A Generic Production System Architecture -- 3. State-Saving Algorithms -- 4. Parallel Execution of Rete -- 5. Compile Time Optimization of Rete -- 6. Parallel Rule Firing -- 7. Discussion -- References
  - Chapter 8. Guaranteeing Serializability in Parallel Production Systems -- 1. Execution Models for Production Systems -- 2. The Serialization Problem -- 3. Ishida and Stolfo's Work -- 4. Definitions and Tests -- 5. Solution to the Serialization Problem -- 6. Algorithms to Guarantee Serializability -- 7. Performance Analysis -- 8. Related Work -- 9. Conclusions -- 10. Acknowledgments -- References
- PART IV: MECHANIZATION OF LOGIC
  - Chapter 9. Parallel Automated Theorem Proving -- Abstract -- 1. Introduction -- 2. Classification of Parallelization Approaches -- 3. Partitioning-based Parallel Theorem Provers -- 4. Competition-based Parallel Theorem Provers -- 5. Summary -- Appendix -- References
  - Chapter 10. Massive Parallelism in Inference Systems -- 1. Parallelism in Logic -- 2. Massive Parallelism -- 3. The Potential of Massive Parallelism for Logic -- 4. CHCL: A Connectionist Inference System -- References
  - Chapter 11. Representing Propositional Logic and Searching for Satisfiability in Connectionist Networks -- 1. Introduction -- 2. The Energy Paradigm -- 3. Propositional Logic and Energy Functions -- 4. Experimental Results -- 5. Discussion -- Acknowledgment -- References
- PART V: CONSTRAINT SATISFACTION
  - Chapter 12. Parallel and Distributed Finite Constraint Satisfaction: Complexity, Algorithms and Experiments -- 1. Introduction -- 2. Properties of Constraint Networks -- 3. A Parallel Algorithm and Complexity -- 4. A Distributed Algorithm and Complexity -- 5. A Coarse-Grain Distributed Algorithm -- 6. Experimental Results -- 7. Conclusions -- Acknowledgements -- References
  - Chapter 13. Parallel Algorithms and Architectures for Consistent Labeling -- 1. Introduction -- 2. Consistent Labeling -- 3. Previous Designs -- 4. Implementations on Special Purpose Architectures -- 5. Implementations on General Purpose Parallel Architectures -- 6. Conclusion -- Acknowledgement -- References
- PART VI: OTHER TOPICS
  - Chapter 14. Massively Parallel Parsing Algorithms for Natural Language -- 1. Introduction -- 2. Tree Adjoining Grammar -- 3. The Connection Machine Model CM-2 -- 4. Parsing Sparse TAGs: Parallel Algorithm I -- 5. Parsing Sparse TAGs: Parallel Algorithm II -- 6. Parallel Algorithms for Parsing Dense TAGs -- 7. Conclusions and Future Work -- 8. Appendix -- References
  - Chapter 15. Process Trellis and FGP: Software Architectures for Data Filtering and Mining -- 1. Introduction -- 2. Linda and the Master/Worker Model -- 3. The FGP Machine -- 4. The Process Trellis -- 5. Combining the Trellis and FGP Programs for Real-Time Data Management -- 6. An Integrated Program for Network Monitoring -- 7. Conclusions -- References

Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, and data filtering and mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discusses parallel processing for semantic networks, a widely used means of representing knowledge; methods that enable efficient and flexible processing of semantic networks are expected to be highly useful for building large-scale knowledge-based systems. The third section explores the automatic parallel execution of production systems, which are used extensively in building rule-based expert systems; systems containing large numbers of rules execute slowly and can benefit significantly from automatic parallel execution. The fourth section deals with the exploitation of parallelism for the mechanization of logic. While sequential control aspects pose problems for the parallelization of production systems, logic has a purely declarative interpretation that does not demand a particular evaluation strategy; its very large search spaces therefore offer significant potential for parallelism, particularly in automated theorem proving. The fifth section considers constraint satisfaction, a useful abstraction of a number of important problems in AI and other fields of computer science, and also discusses the technique of consistent labeling as a preprocessing step in the constraint satisfaction problem.

Section VI consists of two articles, each on a different, important topic. The first discusses parallel formulations of the Tree Adjoining Grammar (TAG), a powerful formalism for describing natural languages. The second examines the suitability of a parallel programming paradigm called Linda for solving problems in artificial intelligence. Each of the areas discussed in the book holds many open problems, but it is believed that parallel processing will form a key ingredient in achieving at least partial solutions. It is hoped that the contributions, sourced from experts around the world, will inspire readers to take on these challenging areas of inquiry.
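For readers unfamiliar with the consistent-labeling preprocessing mentioned above, the underlying idea can be sketched with the classic sequential AC-3 arc-consistency algorithm: labels that no neighbouring variable can support are pruned before (or instead of) search. This sketch is illustrative only and is not code from the book; the `ac3` function, the variable names, and the small map-colouring instance are invented for the demonstration.

```python
from collections import deque

def ac3(domains, constraints):
    """Prune variable domains to arc consistency (AC-3).

    domains: dict mapping variable -> set of candidate labels.
    constraints: dict mapping ordered pair (x, y) -> predicate over (label_x, label_y).
    Returns False if some domain is wiped out (no consistent labeling exists).
    """
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove labels of x that no label of y supports.
        removed = {a for a in domains[x]
                   if not any(pred(a, b) for b in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False          # wipe-out: problem is unsatisfiable
            # Re-examine every arc pointing at x, since x's domain shrank.
            queue.extend((z, w) for (z, w) in constraints if w == x)
    return True

# Tiny map-colouring instance: three mutually adjacent regions, but only
# two colours available to A and B and one to C -- unsatisfiable.
doms = {"A": {1, 2}, "B": {1, 2}, "C": {1}}
neq = lambda a, b: a != b
cons = {(x, y): neq for x in doms for y in doms if x != y}
ok = ac3(doms, cons)   # detects the wipe-out without any search
```

The book's Part V chapters study how this kind of constraint propagation, which is inherently local (each arc revision touches only two variables), can be distributed across processors.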

Description based on publisher-supplied metadata and other sources.
