Ben Goertzel with Cassio Pennachin & Nil Geisweiller &
the OpenCog Team
Engineering General Intelligence, Part 2:
The CogPrime Architecture for Integrative, Embodied
AGI
September 19, 2013
EFTA00624128
This book is dedicated by Ben Goertzel to his beloved,
departed grandfather, Leo Zwell - an amazingly
warm-hearted, giving human being who was also a deep
thinker and excellent scientist, who got Ben started on the
path of science. As a careful experimentalist, Leo would
have been properly skeptical of the big hypotheses made
here - but he would have been eager to see them put to the
test!
Preface
Welcome to the second volume of Engineering General Intelligence! This is the second half of
a two-part technical treatise aimed at outlining a practical approach to engineering software
systems with general intelligence at the human level and ultimately beyond.
Our goal here is ambitious, not modest: machines with flexible problem-
solving ability, open-ended learning capability, creativity and, eventually, their own kind of
genius.
Part 1 set the stage, dealing with a variety of general conceptual issues related to the
engineering of advanced AGI, as well as presenting a brief overview of the CogPrime design
for Artificial General Intelligence. Now here in Part 2 we plunge deep into the nitty-gritty, and
describe the multiple aspects of CogPrime with a fairly high degree of detail.
First we describe the CogPrime software architecture and knowledge representation in de-
tail; then we review the "cognitive cycle" via which CogPrime perceives and acts in the world
and reflects on itself. We then turn to various forms of learning: procedural, declarative (e.g.
inference), simulative and integrative. Methods of enabling natural language functionality in
CogPrime are then discussed; and the volume concludes with a chapter summarizing the ar-
gument that CogPrime can lead to human-level (and eventually perhaps greater) AGI, and a
chapter giving a "thought experiment" describing the internal dynamics via which a completed
CogPrime system might solve the problem of obeying the request "Build me something with
blocks that I haven't seen before."
Reading this book without having first read Engineering General Intelligence, Part 1 is not
especially recommended, since the prequel not only provides context for this one, but also defines a
number of specific terms and concepts that are used here without explanation (for example,
Part One has an extensive Glossary). However, the impatient reader who has not mastered
Part 1, or the reader who has finished Part 1 but is tempted to hop through Part 2 nonlinearly,
might wish to first skim the final two chapters, and then return to reading in linear order.
While the majority of the text here was written by the lead author Ben Goertzel, the overall
work and underlying ideas have been very much a team effort, with major input from the sec-
ondary authors Cassio Pennachin and Nil Geisweiller, and large contributions from various other
contributors as well. Many chapters have specifically indicated coauthors; but the contributions
from various collaborating researchers and engineers go far beyond these. The creation of the
AGI approach and design presented here is a process that has occurred over a long period of
time among a community of people; and this book is in fact a quite partial view of the existent
body of knowledge and intuition regarding CogPrime. For example, beyond the ideas presented
here, there is a body of work on the OpenCog wiki site, and then the OpenCog codebase itself.
More extensive introductory remarks may be found in the Preface of Part 1, including a brief
history of the book and acknowledgements to some of those who helped inspire it.
Also, one brief comment from the Preface of Part 1 bears repeating: At several places in this
volume, as in its predecessor, we will refer to the "current" CogPrime implementation (in the
OpenCog framework); in all cases this refers to the OpenCog software system as of late 2013.
We fully realize that this book is not "easy reading", and that the level and nature of
exposition varies somewhat from chapter to chapter. We have done our best to present these
very complex ideas as clearly as we could, given our own time constraints, and the lack of
commonly understood vocabularies for discussing many of the concepts and systems involved.
Our hope is that the length of the book, and the conceptual difficulty of some portions, will
be compensated for by the interest of the ideas we present. For, make no mistake —
for all their technicality and subtlety, we find the ideas presented here incredibly exciting. We
are talking about no less than the creation of machines with intelligence, creativity and genius
equaling and ultimately exceeding that of human beings.
This is, in the end, the kind of book that we (the authors) all hoped to find when we first
entered the AI field: a reasonably detailed description of how to go about creating thinking
machines. The fact that so few treatises of this nature, and so few projects explicitly aimed
at the creation of advanced AGI, exist, is something that has perplexed us since we entered
the field. Rather than just complain about it, we have taken matters into our own hands, and
worked to create a design and a codebase that we believe capable of leading to human-level
AGI and beyond.
We feel tremendously fortunate to live in times when this sort of pursuit can be discussed in
a serious, scientific way.
Online Appendices
Just one more thing before getting started! This book originally had even more chapters than
the ones currently presented in Parts 1 and 2. In order to decrease length and increase fo-
cus, however, a number of chapters dealing with peripheral - yet still relevant and interest-
ing - matters were moved to online appendices. These may be downloaded in a single PDF
file at http://goertzel.org/engineering_general_Intelligence_appendices_
B-I4.pdf. The titles of these appendices are:
• Appendix A: Possible Worlds Semantics and Experiential Semantics
• Appendix B: Steps Toward a Formal Theory of Cognitive Structure and Dynamics
• Appendix C: Emergent Reflexive Mental Structures
• Appendix D: GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation
and Radical Self-Improvement
• Appendix E: Lojban++: A Novel Linguistic Mechanism for Teaching AGI Systems
• Appendix F: PLN and the Brain
• Appendix G: Possible Worlds Semantics and Experiential Semantics
• Appendix H: Propositions About Environments in Which CogPrime Components are Useful
None of these are critical to understanding the key ideas in the book, which is why they were
relegated to online appendices. However, reading them will deepen your understanding of the
conceptual and formal perspectives underlying the CogPrime design.
September 2013
Ben Goertzel
Contents
Section I Architectural and Representational Mechanisms
19 The OpenCog Framework 3
19.1 Introduction 3
19.1.1 Layers of Abstraction in Describing Artificial Minds 3
19.1.2 The OpenCog Framework 4
19.2 The OpenCog Architecture 5
19.2.1 OpenCog and Hardware Models 5
19.2.2 The Key Components of the OpenCog Framework 6
19.3 The AtomSpace 7
19.3.1 The Knowledge Unit: Atoms 7
19.3.2 AtomSpace Requirements and Properties 8
19.3.3 Accessing the Atomspace 9
19.3.4 Persistence 10
19.3.5 Specialized Knowledge Stores 11
19.4 MindAgents: Cognitive Processes 13
19.4.1 A Conceptual View of CogPrime Cognitive Processes 14
19.4.2 Implementation of MindAgents 15
19.4.3 Tasks 16
19.4.4 Scheduling of MindAgents and Tasks in a Unit 16
19.4.5 The Cognitive Cycle 17
19.5 Distributed AtomSpace and Cognitive Dynamics 18
19.5.1 Distributing the AtomSpace 18
19.5.2 Distributed Processing 23
20 Knowledge Representation Using the Atomspace 27
20.1 Introduction 27
20.2 Denoting Atoms 28
20.2.1 Meta-Language 28
20.2.2 Denoting Atoms 30
20.3 Representing Functions and Predicates 35
20.3.1 Execution Links 36
20.3.2 Denoting Schema and Predicate Variables 39
20.3.3 Variable and Combinator Notation 41
20.3.4 Inheritance Between Higher-Order Types 43
20.3.5 Advanced Schema Manipulation 44
21 Representing Procedural Knowledge 49
21.1 Introduction 49
21.2 Representing Programs 50
21.3 Representational Challenges 51
21.4 What Makes a Representation Tractable? 53
21.5 The Combo Language 55
21.6 Normal Forms Postulated to Provide Tractable Representations 55
21.6.1 A Simple Type System 56
21.6.2 Boolean Normal Form 57
21.6.3 Number Normal Form 57
21.6.4 List Normal Form 57
21.6.5 Tuple Normal Form 57
21.6.6 Enum Normal Form 58
21.6.7 Function Normal Form 58
21.6.8 Action Result Normal Form 58
21.7 Program Transformations 59
21.7.1 Reductions 59
21.7.2 Neutral Transformations 60
21.7.3 Non-Neutral Transformations 62
21.8 Interfacing Between Procedural and Declarative Knowledge 63
21.8.1 Programs Manipulating Atoms 63
21.9 Declarative Representation of Procedures 64
Section II The Cognitive Cycle
22 Emotion, Motivation, Attention and Control 67
22.1 Introduction 67
22.2 A Quick Look at Action Selection 68
22.3 Psi in CogPrime 69
22.4 Implementing Emotion Rules atop Psi's Emotional Dynamics 72
22.4.1 Grounding the Logical Structure of Emotions in the Psi Model 73
22.5 Goals and Contexts 73
22.5.1 Goal Atoms 74
22.6 Context Atoms 76
22.7 Ubergoal Dynamics 77
22.7.1 Implicit Ubergoal Pool Modification 77
22.7.2 Explicit Ubergoal Pool Modification 78
22.8 Goal Formation 78
22.9 Goal Fulfillment and Predicate Schematization 79
22.10 Context Formation 79
22.11 Execution Management 80
22.12 Goals and Time 81
23 Attention Allocation 83
23.1 Introduction 83
23.2 Semantics of Short and Long Term Importance 85
23.2.1 The Precise Semantics of STI and LTI 86
23.2.2 STI, STIFund, and Juju 89
23.2.3 Formalizing LTI 89
23.2.4 Applications of LTIburst versus LTIcont 90
23.3 Defining Burst LTI in Terms of STI 91
23.4 Valuing LTI and STI in terms of a Single Currency 92
23.5 Economic Attention Networks 94
23.5.1 Semantics of Hebbian Links 94
23.5.2 Explicit and Implicit Hebbian Relations 95
23.6 Dynamics of STI and LTI Propagation 95
23.6.1 ECAN Update Equations 96
23.6.2 ECAN as Associative Memory 101
23.7 Glocal Economic Attention Networks 101
23.7.1 Experimental Explorations 102
23.8 Long-Term Importance and Forgetting 102
23.9 Attention Allocation via Data Mining on the System Activity Table 103
23.10 Schema Credit Assignment 104
23.11 Interaction between ECANs and other CogPrime Components 106
23.11.1 Use of PLN and Procedure Learning to Help ECAN 106
23.11.2 Use of ECAN to Help Other Cognitive Processes 106
23.12 MindAgent Importance and Scheduling 107
23.13 Information Geometry for Attention Allocation 108
23.13.1 Brief Review of Information Geometry 108
23.13.2 Information-Geometric Learning for Recurrent Networks: Extending
the ANGL Algorithm 109
23.13.3 Information Geometry for Economic Attention Allocation: A Detailed
Example 110
24 Economic Goal and Action Selection 113
24.1 Introduction 113
24.2 Transfer of STI "Requests for Services" Between Goals 114
24.3 Feasibility Structures 116
24.4 Goal Based Schema Selection 116
24.4.1 A Game-Theoretic Approach to Action Selection 117
24.5 SchemaActivation 118
24.6 GoalBasedSchemaLearning 119
25 Integrative Procedure Evaluation 121
25.1 Introduction 121
25.2 Procedure Evaluators 121
25.2.1 Simple Procedure Evaluation 122
25.2.2 Effort Based Procedure Evaluation 122
25.2.3 Procedure Evaluation with Adaptive Evaluation Order 123
25.3 The Procedure Evaluation Process 123
25.3.1 Truth Value Evaluation 124
25.3.2 Schema Execution 125
Section III Perception and Action
26 Perceptual and Motor Hierarchies 129
26.1 Introduction 129
26.2 The Generic Perception Process 130
26.2.1 The ExperienceDB 131
26.3 Interfacing CogPrime with a Virtual Agent 131
26.3.1 Perceiving the Virtual World 132
26.3.2 Acting in the Virtual World 133
26.4 Perceptual Pattern Mining 134
26.4.1 Input Data 134
26.4.2 Transaction Graphs 135
26.4.3 Spatiotemporal Conjunctions 135
26.4.4 The Mining Task 136
26.5 The Perceptual-Motor Hierarchy 136
26.6 Object Recognition from Polygonal Meshes 137
26.6.1 Algorithm Overview 138
26.6.2 Recognizing PersistentPolygonNodes (PPNodes) from PolygonNodes 138
26.6.3 Creating Adjacency Graphs from PPNodes 139
26.6.4 Clustering in the Adjacency Graph 140
26.6.5 Discussion 140
26.7 Interfacing the Atomspace with a Deep Learning Based Perception-Action
Hierarchy 140
26.7.1 Hierarchical Perception Action Networks 141
26.7.2 Declarative Memory 142
26.7.3 Sensory Memory 142
26.7.4 Procedural Memory 142
26.7.5 Episodic Memory 143
26.7.6 Action Selection and Attention Allocation 144
26.8 Multiple Interaction Channels 144
27 Integrating CogPrime with a Compositional Spatiotemporal Deep
Learning Network 147
27.1 Introduction 147
27.2 Integrating CSDLNs with Other AI Frameworks 149
27.3 Semantic CSDLN for Perception Processing 149
27.4 Semantic CSDLN for Motor and Sensorimotor Processing 152
27.5 Connecting the Perceptual and Motoric Hierarchies with a Goal Hierarchy 154
28 Making DeSTIN Representationally Transparent 157
28.1 Introduction 157
28.2 Review of DeSTIN Architecture and Dynamics 158
28.2.1 Beyond Gray-Scale Vision 159
28.3 Uniform DeSTIN 159
28.3.1 Translation-Invariant DeSTIN 160
28.3.2 Mapping States of Translation-Invariant DeSTIN into the Atomspace 161
28.3.3 Scale-Invariant DeSTIN 162
28.3.4 Rotation Invariant DeSTIN 163
28.3.5 Temporal Perception 164
28.4 Interpretation of DeSTIN's Activity 164
28.4.1 DeSTIN's Assumption of Hierarchical Decomposability 165
28.4.2 Distance and Utility 165
28.5 Benefits and Costs of Uniform DeSTIN 166
28.6 Imprecise Probability as a Tool for Linking CogPrime and DeSTIN 167
28.6.1 Visual Attention Focusing 167
28.6.2 Using Imprecise Probabilities to Guide Visual Attention Focusing 168
28.6.3 Sketch of Application to DeSTIN 168
29 Bridging the Symbolic/Subsymbolic Gap 171
29.1 Introduction 171
29.2 Simplified OpenCog Workflow 173
29.3 Integrating DeSTIN and OpenCog 174
29.3.1 Mining Patterns from DeSTIN States 175
29.3.2 Probabilistic Inference on Mined Hypergraphs 176
29.3.3 Insertion of OpenCog-Learned Predicates into DeSTIN's Pattern Library 177
29.4 Multisensory Integration, and Perception-Action Integration 178
29.4.1 Perception-Action Integration 179
29.4.2 Thought-Experiment: Eye-Hand Coordination 181
29.5 A Practical Example: Using Subtree Mining to Bridge the Gap Between
DeSTIN and PLN 182
29.5.1 The Importance of Semantic Feedback 184
29.6 Some Simple Experiments with Letters 184
29.6.1 Mining Subtrees from DeSTIN States Induced via Observing Letterforms 184
29.6.2 Mining Subtrees from DeSTIN States Induced via Observing Letterforms 185
29.7 Conclusion 188
Section IV Procedure Learning
30 Procedure Learning as Program Learning 193
30.1 Introduction 193
30.1.1 Program Learning 193
30.2 Representation-Building 195
30.3 Specification Based Procedure Learning 196
31 Learning Procedures via Imitation, Reinforcement and Correction 197
31.1 Introduction 197
31.2 IRC Learning 197
31.2.1 A Simple Example of Imitation/Reinforcement Learning 198
31.2.2 A Simple Example of Corrective Learning 199
31.3 IRC Learning in the PetBrain 201
31.3.1 Introducing Corrective Learning 203
31.4 Applying A Similar IRC Methodology to Spontaneous Learning 203
32 Procedure Learning via Adaptively Biased Hillclimbing 205
32.1 Introduction 205
32.2 Hillclimbing 206
32.3 Entity and Perception Filters 207
32.3.1 Entity filter 207
32.3.2 Entropy perception filter 207
32.4 Using Action Sequences as Building Blocks 208
32.5 Automatically Parametrizing the Program Size Penalty 208
32.5.1 Definition of the complexity penalty 208
32.5.2 Parameterizing the complexity penalty 209
32.5.3 Definition of the Optimization Problem 210
32.6 Some Simple Experimental Results 211
32.7 Conclusion 214
33 Probabilistic Evolutionary Procedure Learning 215
33.1 Introduction 215
33.1.1 Explicit versus Implicit Evolution in CogPrime 217
33.2 Estimation of Distribution Algorithms 218
33.3 Competent Program Evolution via MOSES 219
33.3.1 Statics 219
33.3.2 Dynamics 222
33.3.3 Architecture 223
33.3.4 Example: Artificial Ant Problem 224
33.3.5 Discussion 229
33.3.6 Conclusion 229
33.4 Integrating Feature Selection Into the Learning Process 230
33.4.1 Machine Learning, Feature Selection and AGI 231
33.4.2 Data- and Feature- Focusable Learning Problems 232
33.4.3 Integrating Feature Selection Into Learning 233
33.4.4 Integrating Feature Selection into MOSES Learning 234
33.4.5 Application to Genomic Data Classification 234
33.5 Supplying Evolutionary Learning with Long-Term Memory 236
33.6 Hierarchical Program Learning 237
33.6.1 Hierarchical Modeling of Composite Procedures in the AtomSpace 238
33.6.2 Identifying Hierarchical Structure In Combo Trees via MetaNodes and
Dimensional Embedding 239
33.7 Fitness Function Estimation via Integrative Intelligence 242
Section V Declarative Learning
34 Probabilistic Logic Networks 247
34.1 Introduction 247
34.2 A Simple Overview of PLN 248
34.2.1 Forward and Backward Chaining 249
34.3 First Order Probabilistic Logic Networks 250
34.3.1 Core FOPLN Relationships 250
34.3.2 PLN Truth Values 251
34.3.3 Auxiliary FOPLN Relationships 251
34.3.4 PLN Rules and Formulas 252
34.3.5 Inference Trails 253
34.4 Higher-Order PLN 254
34.4.1 Reducing HOPLN to FOPLN 255
34.5 Predictive Implication and Attraction 256
34.6 Confidence Decay 257
34.6.1 An Example 258
34.7 Why is PLN a Good Idea? 260
35 Spatiotemporal Inference 263
35.1 Introduction 263
35.2 Related Work on Spatio-temporal Calculi 264
35.3 Uncertainty with Distributional Fuzzy Values 267
35.4 Spatio-temporal Inference in PLN 270
35.5 Examples 272
35.5.1 Spatiotemporal Rules 272
35.5.2 The Laptop is Safe from the Rain 273
35.5.3 Fetching the Toy Inside the Upper Cupboard 273
35.6 An Integrative Approach to Planning 275
36 Adaptive, Integrative Inference Control 277
36.1 Introduction 277
36.2 High-Level Control Mechanisms 277
36.2.1 The Need for Adaptive Inference Control 278
36.3 Inference Control in PLN 279
36.3.1 Representing PLN Rules as GroundedSchemaNodes 279
36.3.2 Recording Executed PLN Inferences in the Atomspace 279
36.3.3 Anatomy of a Single Inference Step 280
36.3.4 Basic Forward and Backward Inference Steps 281
36.3.5 Interaction of Forward and Backward Inference 282
36.3.6 Coordinating Variable Bindings 282
36.3.7 An Example of Problem Decomposition 284
36.3.8 Example of Casting a Variable Assignment Problem as an Optimization
Problem 284
36.3.9 Backward Chaining via Nested Optimization 285
36.4 Combining Backward and Forward Inference Steps with Attention Allocation
to Achieve the Same Effect as Backward Chaining (and Even Smarter Inference
Dynamics) 288
36.4.1 Breakdown into MindAgents 289
36.5 Hebbian Inference Control 289
36.6 Inference Pattern Mining 293
36.7 Evolution As an Inference Control Scheme 293
36.8 Incorporating Other Cognitive Processes into Inference 294
36.9 PLN and Bayes Nets 295
37 Pattern Mining 297
37.1 Introduction 297
37.2 Finding Interesting Patterns via Program Learning 298
37.3 Pattern Mining via Frequent/Surprising Subgraph Mining 299
37.4 Fishgram 300
37.4.1 Example Patterns 300
37.4.2 The Fishgram Algorithm 301
37.4.3 Preprocessing 302
37.4.4 Search Process 303
37.4.5 Comparison to other algorithms 304
38 Speculative Concept Formation 305
38.1 Introduction 305
38.2 Evolutionary Concept Formation 306
38.3 Conceptual Blending 308
38.3.1 Outline of a CogPrime Blending Algorithm 310
38.3.2 Another Example of Blending 311
38.4 Clustering 312
38.5 Concept Formation via Formal Concept Analysis 312
38.5.1 Calculating Membership Degrees of New Concepts 313
38.5.2 Forming New Attributes 313
38.5.3 Iterating the Fuzzy Concept Formation Process 314
Section VI Integrative Learning
39 Dimensional Embedding 319
39.1 Introduction 319
39.2 Link Based Dimensional Embedding 320
39.3 Harel and Koren's Dimensional Embedding Algorithm 322
39.3.1 Step 1: Choosing Pivot Points 322
39.3.2 Step 2: Similarity Estimation 323
39.3.3 Step 3: Embedding 323
39.4 Embedding Based Inference Control 323
39.5 Dimensional Embedding and InheritanceLinks 325
40 Mental Simulation and Episodic Memory 327
40.1 Introduction 327
40.2 Internal Simulations 328
40.3 Episodic Memory 328
41 Integrative Procedure Learning 333
41.1 Introduction 333
41.1.1 The Diverse Technicalities of Procedure Learning in CogPrime 334
41.2 Preliminary Comments on Procedure Map Encapsulation and Expansion 336
41.3 Predicate Schematization 337
41.3.1 A Concrete Example 339
41.4 Concept-Driven Schema and Predicate Creation 340
41.4.1 Concept-Driven Predicate Creation 340
41.4.2 Concept-Driven Schema Creation 341
41.5 Inference-Guided Evolution of Pattern-Embodying Predicates 342
41.5.1 Rewarding Surprising Predicates 342
41.5.2 A More Formal Treatment 344
41.6 PredicateNode Mining 345
41.7 Learning Schema Maps 346
41.7.1 Goal-Directed Schema Evolution 347
41.8 Occam's Razor 349
42 Map Formation 351
42.1 Introduction 351
42.2 Map Encapsulation 353
42.3 Atom and Predicate Activity Tables 355
42.4 Mining the AtomSpace for Maps 356
42.4.1 Frequent Itemset Mining for Map Mining 357
42.4.2 Evolutionary Map Detection 359
42.5 Map Dynamics 359
42.6 Procedure Encapsulation and Expansion 360
42.6.1 Procedure Encapsulation in More Detail 361
42.6.2 Procedure Encapsulation in the Human Brain 361
42.7 Maps and Focused Attention 362
42.8 Recognizing and Creating Self-Referential Structures 363
42.8.1 Encouraging the Recognition of Self-Referential Structures in the
AtomSpace 364
Section VII Communication Between Human and Artificial Minds
43 Communication Between Artificial Minds 369
43.1 Introduction 369
43.2 A Simple Example Using a PsyneseVocabulary Server 371
43.2.1 The Psynese Match Schema 373
43.3 Psynese as a Language 373
43.4 Psynese Mindplexes 374
43.4.1 AGI Mindplexes 375
43.5 Psynese and Natural Language Processing 376
43.5.1 Collective Language Learning 378
44 Natural Language Comprehension 379
44.1 Introduction 379
44.2 Linguistic Atom Types 381
44.3 The Comprehension and Generation Pipelines 382
44.4 Parsing with Link Grammar 383
44.4.1 Link Grammar vs. Phrase Structure Grammar 385
44.5 The RelEx Framework for Natural Language Comprehension 386
44.5.1 RelEx2Frame: Mapping Syntactico-Semantic Relationships into
FrameNet Based Logical Relationships 387
44.5.2 A Priori Probabilities For Rules 389
44.5.3 Exclusions Between Rules 389
44.5.4 Handling Multiple Prepositional Relationships 390
44.5.5 Comparatives and Phantom Nodes 391
44.6 Frame2Atom 392
44.6.1 Examples of Frame2Atom 393
44.6.2 Issues Involving Disambiguation 396
44.7 Syn2Sem: A Semi-Supervised Alternative to RelEx and RelEx2Frame 397
44.8 Mapping Link Parses into Atom Structures 398
44.8.1 Example Training Pair 399
44.9 Making a Training Corpus 399
44.9.1 Leveraging RelEx to Create a Training Corpus 399
44.9.2 Making an Experience Based Training Corpus 399
44.9.3 Unsupervised, Experience Based Corpus Creation 400
44.10 Limiting the Degree of Disambiguation Attempted 400
44.11 Rule Format 401
44.11.1 Example Rule 402
44.12 Rule Learning 402
44.13 Creating a Cyc-Like Database via Text Mining 403
44.14 PROWL Grammar 404
44.14.1 Brief Review of Word Grammar 405
44.14.2 Word Grammar's Logical Network Model 406
44.14.3 Link Grammar Parsing vs Word Grammar Parsing 407
44.14.4 Contextually Guided Greedy Parsing and Generation Using Word Link
Grammar 411
44.15 Aspects of Language Learning 413
44.15.1 Word Sense Creation 413
44.15.2 Feature Structure Learning 414
44.15.3 Transformation and Semantic Mapping Rule Learning 414
44.16Experiential Language Learning 415
44.17Which Path(s) Forward? 416
45 Language Learning via Unsupervised Corpus Analysis 417
45.1 Introduction 417
45.2 Assumed Linguistic Infrastructure 419
45.3 Linguistic Content To Be Learned 421
45.3.1 Deeper Aspects of Comprehension 423
45.4 A Methodology for Unsupervised Language Learning from a Large Corpus 423
45.4.1 A High Level Perspective on Language Learning 424
45.4.2 Learning Syntax 426
45.4.3 Learning Semantics 430
45.5 The Importance of Incremental Learning 434
45.6 Integrating Language Learned via Corpus Analysis into CogPrime's
Experiential Learning 435
46 Natural Language Generation 437
46.1 Introduction 437
46.2 SegSim for Sentence Generation 437
46.2.1 NLGen: Example Results 441
46.3 Experiential Learning of Language Generation 444
46.4 Sem2Syn 445
46.5 Conclusion 445
47 Embodied Language Processing 447
47.1 Introduction 447
47.2 Semiosis 448
47.3 Teaching Gestural Communication 450
47.4 Simple Experiments with Embodiment and Anaphor Resolution 455
47.5 Simple Experiments with Embodiment and Question Answering 456
47.5.1 Preparing/Matching Frames 456
47.5.2 Frames2RelEx 458
47.5.3 Example of the Question Answering Pipeline 458
47.5.4 Example of the PetBrain Language Generation Pipeline 459
47.6 The Prospect of Massively Multiplayer Language Teaching 460
48 Natural Language Dialogue 463
48.1 Introduction 463
48.1.1 Two Phases of Dialogue System Development 464
48.2 Speech Act Theory and its Elaboration 464
48.3 Speech Act Schemata and Triggers 465
48.3.1 Notes Toward Example SpeechActSchema 467
48.4 Probabilistic Mining of Trigger Contexts 471
48.5 Conclusion 473
Section VIII From Here to AGI
49 Summary of Argument for the CogPrime Approach 477
49.1 Introduction 477
49.2 Multi-Memory Systems 477
49.3 Perception, Action and Environment 478
49.4 Developmental Pathways 479
49.5 Knowledge Representation 480
49.6 Cognitive Processes 480
49.6.1 Uncertain Logic for Declarative Knowledge 481
49.6.2 Program Learning for Procedural Knowledge 482
49.6.3 Attention Allocation 483
49.6.4 Internal Simulation and Episodic Knowledge 484
49.6.5 Low-Level Perception and Action 484
49.6.6 Goals 485
49.7 Fulfilling the "Cognitive Equation" 485
49.8 Occam's Razor 486
49.8.1 Mind Geometry 486
49.9 Cognitive Synergy 488
49.9.1 Synergies that Help Inference 488
49.10 Synergies that Help MOSES 489
49.10.1 Synergies that Help Attention Allocation 489
49.10.2 Further Synergies Related to Pattern Mining 489
49.10.3 Synergies Related to Map Formation 490
49.11 Emergent Structures and Dynamics 490
49.