Abstract accepted for poster presentation at AGU 2016.
This book provides students with the modern skills and concepts needed to be able to use a computer expressively in scientific work. The authors take an integrated approach by covering programming, important methods and techniques of scientific computation (graphics, the organization of data, data acquisition, numerical issues, etc.) and the organization of software. Balancing the best of the teach-a-package and teach-a-language approaches, the book teaches general-purpose language skills and concepts, and also takes advantage of existing package-like software so that realistic computations can be performed.
Buy on Project Mosaic Books Buy on Amazon
Abstract
We advocate for a novel connectionist modeling framework as an answer to a set of challenges
to AGI and cognitive science put forth by classical formal systems approaches.
We show how this framework, which we call
Vector Symbolic Architectures, or VSAs, is also the kind of model of mental activity
that we arrive at by taking Ludwig Wittgenstein's critiques of the philosophy of
mind and language seriously. We conclude
by describing how VSA and related architectures provide a compelling
solution to three central problems raised by
Wittgenstein in the Philosophical Investigations regarding
rule-following, aspect-seeing, and the development of a “private” language.
Paper
Software
BibTex
Abstract
We propose a knowledge-representation architecture allowing a robot to learn
arbitrarily complex, hierarchical / symbolic relationships between sensors and
actuators. These relationships are encoded in high-dimensional, low-precision
vectors that are very robust to noise. Low-dimensional (single-bit) sensor values
are projected onto the high-dimensional representation space using low-precision
random weights, and the appropriate actions are then computed using elementwise
vector multiplication in this space. The high-dimensional action representations
are then projected back down to low-dimensional actuator signals via a simple
vector operation like dot product. As a proof-of-concept for our architecture,
we use it to implement a behavior-based controller for a simulated robot with
three sensors (touch sensor, left/right light sensor) and two actuators (wheels).
We conclude by discussing the prospects for deriving such representations automatically.
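The encode/act/decode cycle described above can be sketched in a few lines of NumPy. The names, the dimensionality, and the choice of bipolar vectors are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # illustrative dimensionality

# Hypothetical sensor hypervectors: random bipolar projections of raw inputs
touch_on, light_left = (rng.choice([-1, 1], size=D) for _ in range(2))

# Hypervectors for the actions to associate with each sensor condition
backup, turn_left = (rng.choice([-1, 1], size=D) for _ in range(2))

# "Program" the controller: bind each condition to its action with
# elementwise multiplication (self-inverse for bipolar vectors) and
# superpose the bound pairs by addition
memory = touch_on * backup + light_left * turn_left

# To act: bind the current sensor hypervector with the memory; the result
# is approximately the associated action plus noise
noisy_action = touch_on * memory

def similarity(a, b):
    """Normalized dot product: the readout operation."""
    return (a @ b) / len(a)

print(similarity(noisy_action, backup))     # close to 1: action recovered
print(similarity(noisy_action, turn_left))  # close to 0: other action is noise
```

Because independent high-dimensional random vectors are nearly orthogonal, the unwanted cross-term behaves as low-amplitude noise, which is the source of the robustness claimed above.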
Paper
Software
BibTex
Abstract
The aim of this workshop was to bring together researchers working with a wide range of compositional connectionist models, independent of application domain (e.g. language, logic, analogy, web search), with a focus on what commitments (if any) each model makes to localist or distributed representation. We solicited submissions from both localist and distributed modellers, as well as those whose work bypasses this distinction or challenges its importance. We expected vigorous and exciting debate on this topic, and we were not disappointed.
Specifically, our call for participation encouraged discussion on the following topics:
Abstract
In this position paper we argue that brain-inspired cognitive architectures
must simultaneously be
compatible with the explanation of human cognition and support the human design
of artificial cognitive systems. Most cognitive neuroscience models fail to provide
a basis for implementation because they neglect necessary levels of functional
organisation in jumping directly from physical phenomena to cognitive behaviour.
Of those models that do attempt to include the intervening levels, most either fail
to implement the required cognitive functionality or do not scale adequately. We
argue that these problems of functionality and scaling arise because computational
entities are identified with physical resources such as neurons and synapses. This
issue can be avoided by introducing appropriate virtual machines. We propose a
tool stack that introduces such virtual machines and supports design of cognitive
architectures by simplifying the design task through vertical modularity.
Paper
BibTex
Abstract
We present a biologically grounded approach to syntax in which recursion emerges
as semantic roles are generalized from entities to propositions. Our model uses
a simplified vector representation of spiking neurons to encode semantic
role/filler bindings, which degrades gracefully as more complex representations
are encoded. By employing such representations for predicates, roles, and
fillers, our system offers a plausible account of depth limitations and other
psychological phenomena associated with recursion, which is absent in
traditional grammar-based approaches. We provide an example of how the model
learns a simple grammatical construction. After describing the relationship of
our representational scheme to traditional grammatical categories, we conclude
with a discussion of the possible origins of linguistic universals not explained
by the model.
Paper
BibTex
Abstract
We are concerned with the practical feasibility
of the neural basis of analogical mapping.
All existing connectionist models of analogical
mapping rely to some degree on localist
representation (each concept or relation is
represented by a dedicated unit/neuron). These
localist solutions are implausible because they
need too many units for human-level competence
or require the dynamic re-wiring of networks
on a sub-second time-scale.
Analogical mapping can be formalised as
finding an approximate isomorphism between
graphs representing the source and target conceptual
structures. Connectionist models of
analogical mapping implement continuous
heuristic processes for finding graph isomorphisms.
We present a novel connectionist
mechanism for finding graph isomorphisms
that relies on distributed, high-dimensional
representations of structure and mappings.
Consequently, it avoids both problems: the
number of units does not scale combinatorially
with the number of concepts, and no dynamic
network re-wiring is required.
Paper
Matlab software (use this link instead of the one in the paper)
BibTex
Abstract
We present a fully distributed connectionist architecture supporting
lateral inhibition / winner-takes-all competition. All
items (individuals, relations, and structures) are represented by
high-dimensional distributed vectors, and (multi)sets of items
as the sum of such vectors. The architecture uses a neurally
plausible permutation circuit to support a multiset intersection
operation without decomposing the summed vector into
its constituent items or requiring more hardware for more complex
representations. Iterating this operation produces a vector
in which an initially slightly favored item comes to dominate
the others. This result (1) challenges the view that lateral inhibition
calls for localist representation; and (2) points toward
a neural implementation where more complex representations
do not require more complex hardware.
Keywords: Lateral inhibition; winner-takes-all; connectionism;
distributed representation; Vector Symbolic Architecture
Paper
Matlab software
BibTex
Abstract
We present a series of software tools for the automation of cross section construction
from digital geologic map data and corresponding digital elevation models. Our
approach integrates surface data into a 3D environment and involves three fundamental
toolboxes: 1) A near-surface cross-section projection and preparation toolbox, 2) A kink-
method constructor toolbox, 3) A forward modeling toolbox for fault-related folding.
The programs are written using Matlab, and can be fully automated
or operated in an interactive mode. An example of the utility of these toolboxes is
presented by modeling the northern terminus of the Sequatchie Anticline in eastern
Tennessee, a well-established fault-bend fold with excellent surface map data.
Full abstract and figures
BibTex
Abstract
AI models are often categorized in terms of the connectionist vs. symbolic
distinction. In addition to being descriptively unhelpful, these terms are also
typically conflated with a host of issues that may have nothing to do with the
commitments entailed by a particular model. A more useful distinction among
cognitive representations asks whether they are local or distributed.
Traditional symbol systems (grammar, predicate calculus) use local
representations: a given symbol has no internal content and is located at a
particular address in memory. Although well understood and successful in a
number of domains, traditional representations suffer from brittleness. The
number of possible items to be represented is fixed at some arbitrary hard
limit, and a single corrupt memory location or broken pointer can wreck an
entire structure.
In a distributed representation, on the other hand, each entity is represented
by a pattern of activity distributed over many computing elements, and each
computing element is involved in representing many different entities. Such
representations have a number of properties that make them
attractive for knowledge representation:
they are robust to noise, degrade gracefully, and support graded comparison
through distance metrics. These properties enable fast associative memory and
efficient comparison of entire structures without unpacking the structures into
their component parts.
This article provides an overview of distributed representations, setting the
approach in its historical context. The two essential operations necessary for
building distributed representations of structures — binding and bundling
— are
described. We present example applications of each model, and conclude by
discussing the current state of the art.
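As a concrete illustration, here is one common instantiation of the two operations (random bipolar vectors, elementwise multiplication for binding, addition for bundling); other distributed schemes bind with circular convolution instead, so treat this as a sketch rather than the article's canonical model:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # illustrative dimensionality

def vec():
    """A random bipolar hypervector standing for an atomic symbol."""
    return rng.choice([-1, 1], size=D)

def cos(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

agent, patient, dog, man = vec(), vec(), vec(), vec()

# Binding (elementwise multiply) pairs a role with its filler;
# bundling (addition) superposes the bound pairs into one structure vector.
dog_bites_man = agent * dog + patient * man

# Unbinding with a role recovers a noisy version of its filler ...
noisy_filler = agent * dog_bites_man
print(cos(noisy_filler, dog))  # high: "dog" fills the agent role
print(cos(noisy_filler, man))  # near zero

# ... and whole structures compare directly, without unpacking:
man_bites_dog = agent * man + patient * dog
dog_bites_cat = agent * dog + patient * vec()
print(cos(dog_bites_man, man_bites_dog))  # near zero: roles reversed
print(cos(dog_bites_man, dog_bites_cat))  # middling: shared agent binding
```

The last two comparisons show the graded, holistic similarity the article attributes to distributed representations.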
Order the encyclopedia.
BibTex
Abstract
We provide an overview of Vector Symbolic Architectures (VSA), a class of structured
associative memory models that offers a number of desirable features for artificial
general intelligence. By directly encoding structure using familiar, computationally efficient
algorithms, VSA bypasses many of the problems that have consumed unnecessary effort and
attention in previous connectionist work. Example applications from opposite ends of the AI
spectrum — visual map-seeking circuits and structured analogy processing — attest to the
generality and power of the VSA approach in building new solutions for AI.
Paper
Video
BibTex
Abstract
Semantic roles describe "who did what to whom" and as such are central to
many subfields of AI and cognitive science. Each subfield or application tends
to use its own "flavor" of roles. For analogy processing, logical deduction,
and related tasks, roles are usually specific to each predicate: for
loves there is a LOVER and a
BELOVED, for eats an EATER and an EATEN,
etc. Language modeling, on the other hand, requires more general roles like
AGENT and PATIENT in order to relate form to meaning
in a parsimonious way. Commitment to a particular type of role makes it
difficult to model processes of change, for example the change from
specific to general roles that seems to take place in language learning. The use
of semantic features helps solve this problem, but still limits the nature
and number of changes that can take place. This paper presents a new model
of semantic role change that addresses this problem. The model uses
an existing technique, Holographic Reduced Representation (HRR) for
representing roles and their fillers. Starting with specific roles, the model
learns to generalize roles through exposure to language data. The learning
mechanism is simple and efficient, and its scaling properties are well-understood.
The model is able to learn and exploit new representations without losing the
information from existing ones. We present experimental data illustrating
these principles, and conclude by discussing some implications of the model
for the issues of changing representations as a whole.
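HRR binds a role to its filler by circular convolution and unbinds by circular correlation. A minimal NumPy sketch follows; the FFT implementation, the dimensionality, and the symbol names are our illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4096  # illustrative dimensionality

def hrr():
    """HRR atoms: i.i.d. normal entries with variance 1/D (unit expected norm)."""
    return rng.normal(0, 1 / np.sqrt(D), D)

def bind(a, b):
    """Circular convolution, computed via the FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(a, b):
    """Circular correlation: convolve b with the approximate inverse of a."""
    return bind(np.roll(a[::-1], 1), b)

def cos(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

lover, beloved, john, mary = (hrr() for _ in range(4))

# "John loves Mary": role/filler bindings superposed into one vector
loves = bind(lover, john) + bind(beloved, mary)

# Unbinding with a role recovers a noisy version of its filler
print(cos(unbind(lover, loves), john))  # well above chance
print(cos(unbind(lover, loves), mary))  # near zero
```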
Paper
BibTex
Abstract
Semantic roles describe "who did what to whom" and as such are central to
analogy processing and other cognitive processes. For analogy processing, roles are usually
specific to each predicate: for loves there is a LOVER and a
BELOVED, for eats an EATER and an EATEN,
etc. Language modeling, on the other hand, requires more general roles like
AGENT and PATIENT in order to relate form to meaning
in a parsimonious way. This paper presents a new model
of semantic roles that addresses this dichotomy. The model uses
a distributed representation scheme called Vector Symbolic Architectures (VSA) for
representing roles and their fillers. Starting with specific roles, the model
learns to generalize roles through exposure to language data, through a process
that is itself analogical. The learning
mechanism is simple and efficient, and its scaling properties are well-understood.
The model is able to learn and exploit new representations without losing the
information from existing ones. The contribution of the model to the study
of analogy is thus twofold: it shows how representations needed for analogy processing can
be accommodated within a more general theory of semantic roles, and
suggests how important analogy may be to language learning.
We present experimental data illustrating
these principles, and conclude by discussing some implications for the
relation between analogical processing and language.
Paper
BibTex
Abstract
Language is the most complex of all human activities. Its complexity is both
local and global. Within a given language community, a literally infinite
number of expressions can be produced and understood by language users from
an early age. Even a single idea can be communicated in an immense variety of
ways, each of them expressing a different attitude toward the topic, toward the
listener, toward some third party, etc. Across the many language communities of
the world we find a breathtaking variety of ways of saying the same
thing, to the point where efforts to describe all of human language in terms
of a finite "Universal Grammar" seem hopelessly naïve. To complicate
matters even further, researchers have suggested that "saying the same
thing" may not even be a coherent idea: languages may differ so strongly in
their conceptualizations of time, kinship, and other fundamental concepts that
it makes as much sense to see thought as the product of language as it does to
see language as an expression of thought. In an earlier presentation at this
conference I argued that a comparison of human languages with formal
(computer-programming, mathematical) languages can provide science students
with an entrée to some of this remarkable diversity. In the present paper I
expand on that theme, using examples from a recent undergraduate linguistic
anthropology seminar. Participants will learn how a critical study of
sociological and anthropological linguistic scholarship can inform our efforts
to gain an unbiased view of linguistic — and hence human — diversity.
Paper
BibTex
Abstract
We present a neural-competitive learning model of language evolution in
which several symbol sequences compete to signify a given propositional
meaning. Both symbol sequences and propositional meanings are represented
by high-dimensional vectors of real
numbers. A neural network learns to map between the distributed
representations of the symbol sequences and the distributed representations
of the propositions. Unlike previous neural network models of
language evolution, our model uses a Kohonen Self-Organizing Map with
unsupervised learning, thereby avoiding the computational slowdown and
biological implausibility of back-propagation networks and the lack of
scalability associated with Hebbian-learning networks. After several
evolutionary generations, the network develops systematically regular
mappings between meanings and sequences, of the sort traditionally
associated with symbolic grammars. Because of the potential of neural-like representations for addressing the symbol-grounding problem, this sort of model holds a good deal of promise as a new explanatory mechanism for both language evolution and acquisition.
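The unsupervised Kohonen update at the heart of the model can be sketched minimally; the toy data, map size, and learning schedules below are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: points drawn from two clusters (standing in for "meanings")
data = np.vstack([rng.normal(0.2, 0.05, (100, 2)),
                  rng.normal(0.8, 0.05, (100, 2))])

# A 1-D map of 10 units, each with a 2-D weight vector
n_units = 10
weights = rng.random((n_units, 2))

T = 2000
for t in range(T):
    x = data[rng.integers(len(data))]
    # Best-matching unit: winner by Euclidean distance (no back-propagation)
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Learning rate and neighborhood radius both shrink over time
    lr = 0.5 * (1 - t / T)
    sigma = max(1e-3, 3 * (1 - t / T))
    dist = np.abs(np.arange(n_units) - bmu)
    h = np.exp(-dist**2 / (2 * sigma**2))
    weights += lr * h[:, None] * (x - weights)

# After training, units spread out to cover both clusters
d0 = np.linalg.norm(weights - [0.2, 0.2], axis=1).min()
d1 = np.linalg.norm(weights - [0.8, 0.8], axis=1).min()
print(d0, d1)  # both small
```

The winner-plus-neighborhood rule is what gives the map its topology-preserving, self-organizing character without any error back-propagation.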
Paper
BibTex
Abstract
Matlab is the most popular platform for rapid prototyping and
development of scientific and engineering applications. A typical
university computing lab will have Matlab installed on a set of
networked Linux workstations. With the growing availability of
distributed computing networks, many third-party software libraries
have been developed to support parallel execution of Matlab programs
in such a setting. These libraries typically run on top of a
message-passing library, which can lead to a variety of complications
and difficulties. One alternative, a distributed-computing toolkit
from the makers of Matlab, is prohibitively expensive for many users.
As a third alternative, we present PECON, a very small, easy-to-use
Matlab class library that simplifies the task of parallelizing
existing Matlab programs. PECON exploits Matlab's built-in Java
Virtual Machine to pass data structures between a central client and
several "compute servers" using sockets, thereby avoiding reliance
on lower-level message-passing software or disk I/O. PECON is free,
open-source software that runs "out of the box" without any additional
installation or modification of system parameters. This arrangement
makes it trivial to parallelize and run existing applications in which
time is mainly spent on computing results from small amounts of data.
We show how using PECON for one such application — a genetic
algorithm for evolving cellular automata — leads to linear reduction
in execution time. Finally, we show an application — computing the
Mandelbrot set — in which element-wise matrix computations can be
performed in parallel, resulting in dramatic speedup.
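PECON itself is a Matlab class library, but the element-wise parallelism of the Mandelbrot example can be sketched with Python's standard process pool; the row decomposition and image size here are illustrative assumptions, not PECON code:

```python
import numpy as np
from multiprocessing import Pool

def mandel_row(args):
    """Iteration counts for one image row; rows are embarrassingly parallel."""
    y, width, max_iter = args
    counts = np.zeros(width, dtype=int)
    for i in range(width):
        c = complex(-2.0 + 3.0 * i / width, y)
        z = 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                counts[i] = n  # escaped after n iterations
                break
    return counts  # 0 means the point never escaped (inside the set)

if __name__ == "__main__":
    height, width, max_iter = 64, 64, 50
    rows = [(-1.5 + 3.0 * j / height, width, max_iter) for j in range(height)]
    with Pool() as pool:  # each row goes to whichever worker is free
        image = np.array(pool.map(mandel_row, rows))
    print(image.shape)
```

Because every row is independent and the per-row input is tiny, the speedup scales with the number of workers, mirroring the "small data, heavy compute" setting PECON targets.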
Paper
Software
BibTex
Abstract
We show successful application of a genetic algorithm (GA) to
evolving challenging opponents (agents) in an existing, open-source
first-person-shooter (FPS) video game. Each of an agent's possible
decisions (jump over obstacle, shoot at human) is represented by a
single boolean value, and a set of such values is combined into a
single data structure representing the "DNA" for that agent. At the
end of each "generation" (game), surviving agents are chosen
probabilistically based on their fitness (performance); their DNA is
saved to disk, and they are thereby "reborn" to play against a human
in the next generation. Qualitatively, these agents end up being much
more fun for a human to play against than agents whose difficulty
comes from hard-coded increments or simply increased numbers of opponents.
Quantitatively, we were able to observe counter-intuitive patterns in
the density of certain "genes" in the population, confirming the
validity of the evolutionary approach. Our success also highlights the
value of open-source platforms for the AI community.
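The evolutionary loop described above (boolean DNA, fitness-proportional survival, mutation) can be sketched generically; the fitness function and all parameters below are toy stand-ins, not the game's actual code:

```python
import random

random.seed(4)
DNA_LEN, POP, GENS = 8, 30, 40

def fitness(dna):
    """Toy fitness: agents survive in proportion to how many useful behavior
    bits (e.g. jump-over-obstacle, shoot-at-human) are switched on."""
    return sum(dna) + 1  # +1 so every agent has some chance of selection

def mutate(dna, rate=0.05):
    """Flip each DNA bit independently with a small probability."""
    return [b ^ (random.random() < rate) for b in dna]

pop = [[random.randint(0, 1) for _ in range(DNA_LEN)] for _ in range(POP)]
for _ in range(GENS):
    weights = [fitness(d) for d in pop]
    # Probabilistic survival: fitter DNA is more likely to be "reborn"
    pop = [mutate(random.choices(pop, weights=weights)[0]) for _ in range(POP)]

mean_fitness = sum(fitness(d) for d in pop) / POP
print(mean_fitness)  # rises well above the initial expectation of ~5
```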
Paper
Software
BibTex
Abstract
We demonstrate how a first-person shooter (FPS) video game can be made
more fun and challenging by replacing the hard-wired behavior of
opponents with behaviors evolved via an evolutionary algorithm. Using
the open-source FPS game Cube as a platform, we replaced the agents'
(opponents) hard-wired behavior with binary “DNA” supporting a much
richer variety of agent responses. Survival-of-the-fittest ensured
that only those agents whose DNA allowed them to avoid being killed by
the human player would continue on to the next "generation"
(game). Mutating the DNA of the survivors provided enough variability
in behavior to make the agent's actions unpredictable. Our demo will
show how this approach produces an increasingly challenging level of
play, more fine-tuned to the skills of an individual human player than
the traditional approach using pre-programmed levels of difficulty or
simply adding more opponents.
Paper
Software
BibTex
Abstract
We are developing a radially symmetric octopedal robot using computational intelligence methods. The problem has been partitioned to decrease the complexity of these methods. The first step was to create a working leg. A multidisciplinary approach was taken, with one team from computer science (CS) and one team from electrical and computer engineering (ECE). The CS team developed a software model of the leg; the ECE team designed and built a hardware model. After the models were constructed, code was developed to control the leg using an adaptive neural network for generating a walk-cycle. The CS team wrote a back-propagation procedure to train a feed-forward neural net to perform arbitrary mappings. They then wrote an algorithm producing joint angles from desired foot positions. The algorithm is being used as a benchmark for networks trained on this position-angle mapping. The ECE team constructed a leg using servomotors as actuators, and then wrote a program implementing the inverse kinematics of the leg. Given a foot position in three-space, the inverse equations yield the joint angles resulting in that position. The program shows the feasibility of foot trajectories that can possibly be learned by the neural network.
Keywords: Robotics, legged robots, neural networks, genetic algorithms
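The closed-form inverse kinematics mentioned above can be illustrated for a simplified two-link planar limb using the law of cosines; the actual leg has three joints and moves in three-space, so this is a reduced sketch, not the teams' equations:

```python
import math

def ik_2link(x, y, l1, l2):
    """Joint angles placing the foot of a two-link planar limb at (x, y).

    Uses the law of cosines and returns one of the two solutions.
    Assumes the target is reachable: |l1 - l2| <= dist <= l1 + l2.
    """
    d2 = x * x + y * y
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    knee = math.acos(cos_knee)
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

def fk_2link(hip, knee, l1, l2):
    """Forward kinematics: foot position from joint angles (for checking)."""
    x = l1 * math.cos(hip) + l2 * math.cos(hip + knee)
    y = l1 * math.sin(hip) + l2 * math.sin(hip + knee)
    return x, y

hip, knee = ik_2link(1.2, 0.5, 1.0, 1.0)
print(fk_2link(hip, knee, 1.0, 1.0))  # recovers (1.2, 0.5)
```

Pairing the analytic solution with a forward-kinematics check mirrors the paper's use of the inverse equations as a benchmark for the trained network.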
Paper
BibTex
Abstract
We explore ways in which diversity issues can be incorporated in the
teaching of the so-called "hard" sciences, with specific attention to
our experiences in teaching undergraduate computer science courses.
Our approach uses three perspectives: biographical, sociological, and
anthropological/linguistic. We describe how these approaches may
enliven classroom discussion, help students to re-examine traditional
notions about science, and give diversity a more central role in the
teaching and learning of quantitative disciplines.
Paper
BibTex
Abstract
Compositionality (the ability to combine
constituents recursively) is generally taken to be essential to the
open-ended productivity of perception, cognition, language and other
human capabilities aspired to by AI. Ultimately, these capabilities
are implemented by the neural networks of the brain, yet connectionist
models have had difficulties with compositionality. This symposium
brought together connectionist and non-connectionist researchers to
discuss and debate compositionality and connectionism. The aim of the
symposium was to expose connectionist researchers to the broadest
possible range of conceptions of composition - including those
conceptions that pose the greatest challenge for connectionism - while
simultaneously alerting other AI and cognitive science researchers to
the range of possibilities for connectionist implementation of
composition.
Keywords: Compositionality, connectionism, neural networks,
language, dynamical systems, cognitive science
Order this report from
AAAI Press.
Abstract
This paper presents languages and images as
sharing the fundamental property of self-similarity. The
self-similarity of images, especially those of objects in the natural
world (leaves, clouds, galaxies), has been described by mathematicians
like Mandelbrot, and has been used as the basis for fractal image
compression algorithms by Barnsley and others. Self-similarity in
language appears in the guise of stories within stories, or sentences
within sentences ("I know what I know"), and has been represented in
the form of recursive grammar rules by Chomsky and his
followers. Having observed this common property of language and
images, we present a formal mathematical model for putting together
words and phrases, based on the iterated function system (IFS) method
used in fractal image compression. Building (literally) on
vector-space representations of word meaning from contemporary
cognitive science research, we show how the meaning of phrases and
sentences can likewise be represented as points in a vector space of
arbitrary dimension. As in fractal image compression, the key is to
find a set of (linear or non-linear) transforms that map the vector
space into itself in a useful way. We conclude by describing some
advantages of such continuous-valued representations of meaning, and
potential implications.
Keywords: Self-similarity, fractals, language, grammars,
iterated function systems, recurrent neural networks
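The random-iteration algorithm underlying IFS methods can be sketched with the textbook Sierpinski maps; these affine transforms are the standard image-domain example, not the linguistic transforms proposed in the paper:

```python
import random

random.seed(5)

# Three affine contractions whose joint attractor is the Sierpinski triangle.
# In the paper's analogy, each transform plays the role of a word/phrase
# operator mapping the meaning space into itself.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
maps = [lambda p, c=c: ((p[0] + c[0]) / 2, (p[1] + c[1]) / 2) for c in corners]

# Random iteration ("chaos game"): repeatedly apply a randomly chosen
# transform; the orbit converges onto the attractor from any start point.
point = (random.random(), random.random())
orbit = []
for i in range(5000):
    point = random.choice(maps)(point)
    if i > 100:  # discard the transient before the orbit reaches the attractor
        orbit.append(point)

print(len(orbit))  # 4899 points approximating the attractor
```

Because each map is a contraction, the composite system has a unique attractor; the paper's proposal amounts to choosing the transforms so that points of that space encode phrase and sentence meanings.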
Download this paper as:
Postscript
(bics2004.ps)
Gzipped Postscript
(bics2004.ps.gz)
PDF
(bics2004.pdf)
Abstract
A connectionist parsing model is presented in which traditional formal
computing mechanisms (Finite-State Automaton; Parse Tree) have direct
recurrent neural-network analogues (Sequential Cascaded Net; Fractal RAAM
Decoder). The model is demonstrated on a paradigmatic formal
context-free language and an arithmetic-expression parsing task.
Advantages and current shortcomings of the model are described, and its
contribution to the ongoing debate about the role of connectionism in
language-processing tasks is discussed.
Keywords: Parsing, Connectionism, Neural Networks, Fractals,
Dynamical Systems
Download this paper as:
Postscript
(jcis03.ps)
Gzipped Postscript
(jcis03.ps.gz)
PDF
(jcis03.pdf)
Abstract
The traditional approach to complex problems in science and
engineering is to break down each problem into a set of primitive
building blocks, which are then combined by rules to form structures.
In turn, these structures can be taken apart systematically to recover
the original building blocks that went into them. Connectionist
models of such complex problems (especially in the realm of cognitive
science) have often been criticized for their putative failure to
support this sort of compositionality, systematicity, and
recoverability of components. In this paper we discuss a
connectionist model, Recursive Auto-Associative Memory (RAAM), designed to
deal with these issues. Specifically, we show how an initial approach
to RAAM involving arbitrary building-block representations placed severe
constraints on the scalability of the model. We describe a
re-analysis of the building-block and "rule" components of the model as
merely two aspects of a single underlying nonlinear dynamical system,
allowing the model to represent an unbounded number of well-formed
compositional structures. We conclude by speculating about the insight
that such a "unified" view might contribute to our attempts to understand
and model rule-governed, compositional behavior in a variety of AI domains.
Keywords: Compositionality, Building Blocks, Neural
Networks, Fractals, Connectionism
Download this paper as:
Postscript
(aaai03.ps)
Gzipped Postscript
(aaai03.ps.gz)
PDF
(aaai03.pdf)
Abstract
This thesis attempts to provide an answer to the question "What is
the mathematical basis of cognitive representations?" The answer we
present is a novel connectionist framework called Infinite RAAM. We
show how this framework satisfies the cognitive requirements of
systematicity, compositionality, and scalable representational
capacity, while also exhibiting "natural" properties like
learnability, generalization, and inductive bias.
The contributions of this work are twofold: First, Infinite RAAM shows
how connectionist models can exhibit infinite competence for
interesting cognitive domains like language. Second, our
attractor-based learning algorithm provides a way of learning
structured cognitive representations, with robust decoding and
generalization. Both results come from allowing the dynamics of the
network to devise emergent representations during learning.
An appendix provides Matlab code for the experiments described in the thesis.
Keywords: Neural Networks, Fractals, Connectionism, Language, Grammar.
Download this paper as:
Postscript
(levythesis.ps)
Gzipped Postscript
(levythesis.ps.gz)
PDF
(levythesis.pdf)
Abstract
Unification-based approaches have come to play an important role in
both theoretical and applied modeling of cognitive processes, most
notably natural language. Attempts to model such
processes using neural networks have met with some success, but have
faced serious hurdles caused by the limitations of standard
connectionist coding schemes. As a contribution to this effort, this
paper presents recent work in Infinite RAAM (IRAAM), a new
connectionist unification model. Based on a fusion of recurrent neural
networks with fractal geometry, IRAAM allows us to understand the
behavior of these networks as dynamical systems. Using a logical
programming language as our modeling domain, we show how this
dynamical-systems approach solves many of the problems faced by
earlier connectionist models, supporting unification over arbitrarily
large sets of recursive expressions. We conclude that IRAAM can
provide a principled connectionist substrate for unification in a
variety of cognitive modeling domains.
Keywords: unification, neural networks, fractals, dynamical systems, iterated
function systems, recurrent neural networks, language, grammar, competence.
Download this paper as:
Postscript
(raam-iccm01.ps)
PDF
(raam-iccm01.pdf)
Abstract
Attempts to use neural networks to model recursive symbolic
processes like logic have met with some success, but have
faced serious hurdles caused by the limitations of standard
connectionist coding schemes. As a contribution to this effort, this
paper presents recent work in Infinite RAAM (IRAAM), a new
connectionist unification model based on a fusion of recurrent neural
networks with fractal geometry. Using a logical
programming language as our modeling domain, we show how this
approach solves many of the problems faced by
earlier connectionist models, supporting arbitrarily
large sets of logical expressions.
Keywords: neural networks, fractals, unification, logic,
dynamical systems, iterated function systems, recurrent neural networks.
Download this paper as:
Gzipped Postscript
(raam-ijcnn01.ps.gz)
PDF
(raam-ijcnn01.pdf)
Abstract
This paper presents Infinite RAAM (IRAAM), a new fusion of recurrent
neural networks with fractal geometry, allowing us to understand the
behavior of these networks as dynamical systems. Our recent work with
IRAAMs has shown that they are capable of generating the context-free
(non-regular) language a^n b^n for arbitrary values of n. This
paper expands upon that work, showing that IRAAMs are capable of
generating syntactically ambiguous languages but seem less capable of
generating certain context-free constructions that are absent or
disfavored in natural languages. Together, these demonstrations
support our belief that IRAAMs can provide an explanatorily adequate
connectionist model of grammatical competence in natural language.
Keywords: neural networks, fractals, dynamical systems, iterated function systems,
recurrent neural networks, language, grammar, competence.
Download this paper as:
Postscript
(raam-cogsci00.ps)
Gzipped Postscript
(raam-cogsci00.ps.gz)
PDF
(raam-cogsci00.pdf)
Abstract
With its ability to represent variable sized trees in fixed width
patterns, RAAM is a bridge between connectionist and symbolic
systems. In the past, due to limitations in our understanding, its
development plateaued. By examining RAAM from a dynamical systems
perspective we overcome most of the problems that previously plagued
it. In fact, using a dynamical systems analysis we can now prove that
not only is RAAM capable of generating parts of a context-free
language (a^n b^n) but that it is capable of expressing the whole language.
Keywords: neural networks, fractals, learning rules, gradient descent,
dynamical systems, iterated function systems, recurrent neural networks.
Download this paper as:
Postscript
(raam-ijcnn.ps)
Gzipped Postscript
(raam-ijcnn.ps.gz)
PDF
(raam-ijcnn.pdf)