## Evolving Cellular Automata

A quite different example of automatic programming by genetic algorithms is found in work done by James Crutchfield, Rajarshi Das, Peter Hraber, and myself on evolving cellular automata to perform computations (Mitchell, Hraber, and Crutchfield 1993; Mitchell, Crutchfield, and Hraber 1994a; Crutchfield and Mitchell 1994; Das, Mitchell, and Crutchfield 1994). This project has elements of both problem solving and scientific modeling. One motivation is to understand how natural evolution creates systems in which "emergent computation" takes place—that is, in which the actions of simple components with limited information and communication give rise to coordinated global information processing. Insect colonies, economic systems, the immune system, and the brain have all been cited as examples of systems in which such emergent computation occurs (Forrest 1990; Langton 1992). However, it is not well understood how these natural systems perform computations. Another motivation is to find ways to engineer sophisticated emergent computation in decentralized multi-processor systems, using ideas from how natural decentralized systems compute. Such systems have many of the desirable properties for computer systems mentioned in chapter 1: they are sophisticated, robust, fast, and adaptable information processors. Using ideas from such systems to design new types of parallel computers might yield great progress in computer science.

One of the simplest systems in which emergent computation can be studied is a one-dimensional binary-state cellular automaton (CA)—a one-dimensional lattice of N two-state machines ("cells"), each of which changes its state as a function only of the current states in a local neighborhood. (The well-known "game of Life" (Berlekamp, Conway, and Guy 1982) is an example of a two-dimensional CA.) A one-dimensional CA is illustrated in figure 2.5. The lattice starts out with an initial configuration of cell states (zeros and ones) and this configuration changes in discrete time steps in which all cells are updated simultaneously according to the CA "rule" φ. (Here I use the term "state" to refer to a local state s_i, the value of the single cell at site i. The term "configuration" will refer to the pattern of local states over the entire lattice.)

Figure 2.5: Illustration of a one-dimensional, binary-state, nearest-neighbor (r = 1) cellular automaton with N = 11. Both the lattice and the rule table for updating the lattice are illustrated. The lattice configuration is shown over one time step. The cellular automaton has periodic boundary conditions: the lattice is viewed as a circle, with the leftmost cell the right neighbor of the rightmost cell, and vice versa.

A CA rule φ can be expressed as a lookup table ("rule table") that lists, for each local neighborhood, the update state for the neighborhood's central cell. For a binary-state CA, the update states are referred to as the "output bits" of the rule table. In a one-dimensional CA, a neighborhood consists of a cell and its r ("radius") neighbors on either side. The CA illustrated in figure 2.5 has r = 1. It illustrates the "majority" rule: for each neighborhood of three adjacent cells, the new state is decided by a majority vote among the three cells. The CA illustrated in figure 2.5, like all those I will discuss here, has periodic boundary conditions: s_i = s_{i+N}. In figure 2.5 the lattice configuration is shown iterated over one time step.
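The mechanics described above can be sketched in a few lines of code. This is a minimal illustration (the function names are mine, not from the text) of a rule table and one synchronous update step for a one-dimensional, binary-state CA with periodic boundary conditions, using the r = 1 majority rule from figure 2.5:

```python
def make_majority_rule(r):
    """Build the rule table for the radius-r majority rule: the output bit
    for each (2r+1)-cell neighborhood is the majority vote of its cells."""
    width = 2 * r + 1
    table = {}
    for n in range(2 ** width):
        # Express n as a tuple of bits, the neighborhood pattern.
        bits = tuple((n >> (width - 1 - i)) & 1 for i in range(width))
        table[bits] = 1 if sum(bits) > width // 2 else 0
    return table

def ca_step(config, table, r):
    """Update all cells simultaneously; the lattice is treated as a circle."""
    N = len(config)
    return [table[tuple(config[(i + j) % N] for j in range(-r, r + 1))]
            for i in range(N)]

config = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0]   # an N = 11 lattice
print(ca_step(config, make_majority_rule(1), 1))
```

Note that the new configuration is built in full before replacing the old one, which is what "all cells are updated simultaneously" requires.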

Cellular automata have been studied extensively as mathematical objects, as models of natural systems, and as architectures for fast, reliable parallel computation. (For overviews of CA theory and applications, see Toffoli and Margolus 1987 and Wolfram 1986.) However, the difficulty of understanding the emergent behavior of CAs or of designing CAs to have desired behavior has up to now severely limited their use in science and engineering and for general computation. Our goal is to use GAs as a method for engineering CAs to perform computations.

Typically, a CA performing a computation means that the input to the computation is encoded as an initial configuration, the output is read off the configuration at some later time step, and the intermediate steps that transform the input to the output are taken as the steps in the computation. The "program" emerges from the CA rule being obeyed by each cell. (Note that this use of CAs as computers differs from the impractical though theoretically interesting method of constructing a universal Turing machine in a CA; see Mitchell, Crutchfield, and Hraber 1994b for a comparison of these two approaches.)

The behavior of one-dimensional CAs is often illustrated by a "space-time diagram"—a plot of lattice configurations over a range of time steps, with ones given as black cells and zeros given as white cells and with time increasing down the page. Figure 2.6 shows such a diagram for a binary-state r = 3 CA in which the rule table's output bits were filled in at random. It is shown iterating on a randomly generated initial configuration. Random-looking patterns, such as the one shown, are typical for the vast majority of CAs. To produce CAs that can perform sophisticated parallel computations, the genetic algorithm must evolve CAs in which the actions of the cells are not random-looking but are coordinated with one another so as to produce the desired result. This coordination must, of course, happen in the absence of any central processor or memory directing the coordination.
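A space-time diagram like the one in figure 2.6 is easy to reproduce in text form. The following sketch (assumed, not the authors' code) fills in the output bits of an r = 3 rule table at random, iterates it on a random initial configuration, and prints each configuration as a row, with '#' for state 1 and '.' for state 0:

```python
import random

r, N, steps = 3, 64, 20
random.seed(0)  # fixed seed so the run is reproducible

# The rule table is a list of 2^(2r+1) = 128 random output bits, indexed
# by the neighborhood pattern read as a binary number.
table = [random.randint(0, 1) for _ in range(2 ** (2 * r + 1))]
config = [random.randint(0, 1) for _ in range(N)]

for _ in range(steps):
    print(''.join('#' if s else '.' for s in config))
    config = [table[int(''.join(str(config[(i + j) % N])
                                for j in range(-r, r + 1)), 2)]
              for i in range(N)]
```

Running this produces the random-looking, uncoordinated patterns that the text notes are typical of the vast majority of CAs.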

Figure 2.6: Space-time diagram for a randomly generated r = 3 cellular automaton, iterating on a randomly generated initial configuration. N = 149 sites are shown, with time increasing down the page. Here cells with state 0 are white and cells with state 1 are black. (This and the other space-time diagrams given here were generated using the program "la1d" written by James P. Crutchfield.)

Some early work on evolving CAs with genetic algorithms was done by Norman Packard and his colleagues (Packard 1988; Richards, Meyer, and Packard 1990). John Koza (1992) also applied the GP paradigm to evolve CAs for simple random-number generation.

Our work builds on that of Packard (1988). As a preliminary project, we used a form of the GA to evolve one-dimensional, binary-state r = 3 CAs to perform a density-classification task. The goal is to find a CA that decides whether or not the initial configuration contains a majority of ones (i.e., has high density). If it does, the whole lattice should eventually go to an unchanging configuration of all ones; otherwise it should go to all zeros. More formally, we call this task the ρ_c = 1/2 task. Here ρ denotes the density of ones in a binary-state CA configuration and ρ_c denotes a "critical" or threshold density for classification. Let ρ_0 denote the density of ones in the initial configuration (IC). If ρ_0 > ρ_c, then within M time steps the CA should go to the fixed-point configuration of all ones (i.e., all cells in state 1 for all subsequent t); otherwise, within M time steps it should go to the fixed-point configuration of all zeros. M is a parameter of the task that depends on the lattice size N.
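One way to make the task definition concrete is to test whether a given rule table classifies a given IC correctly. The sketch below (a hedged illustration; the function name and scoring scheme are mine, not necessarily the authors' exact fitness function) runs the CA for at most M steps and checks whether it reaches the fixed point matching the IC's majority:

```python
def classifies_correctly(table, r, ic, M):
    """True if the CA relaxes to the all-ones (resp. all-zeros) fixed point
    within M steps when the IC has a majority of ones (resp. zeros).
    `table` is a list of output bits indexed by the neighborhood pattern
    read as a binary number."""
    N = len(ic)
    target = [1] * N if sum(ic) > N / 2 else [0] * N
    config = list(ic)
    for _ in range(M):
        config = [table[int(''.join(str(config[(i + j) % N])
                                    for j in range(-r, r + 1)), 2)]
                  for i in range(N)]
        if config == target:
            return True   # reached the correct fixed point in time
    return False
```

A natural fitness measure for the GA would then be the fraction of randomly generated ICs a rule classifies correctly, though the exact scoring used in the project is described later in the text.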

It may occur to the reader that the majority rule mentioned above might be a good candidate for solving this task. Figure 2.7 gives space-time diagrams for the r = 3 majority rule (the output bit is decided by a majority vote of the bits in each seven-bit neighborhood) on two ICs, one with an initial density of ones below 1/2 and one with it above 1/2. As can be seen, local neighborhoods with a majority of ones map to regions of all ones, and similarly for zeros, but when an all-ones region and an all-zeros region border each other, there is no way to decide between them, and both persist. Thus, the majority rule does not perform the task.
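The failure mode described above can be checked directly: a lattice with a large block of zeros next to a large block of ones is a fixed point of the r = 3 majority rule, so the CA never reaches the all-ones configuration even when ones are in the majority. A small demonstration (assuming a 20-cell lattice of my choosing):

```python
def majority_step(config, r=3):
    """One step of the radius-r majority rule on a periodic lattice:
    each cell takes the majority vote of its (2r+1)-cell neighborhood."""
    N = len(config)
    return [1 if sum(config[(i + j) % N] for j in range(-r, r + 1)) > r else 0
            for i in range(N)]

# Majority of ones (11 of 20), so the task demands convergence to all ones...
config = [0] * 9 + [1] * 11
print(majority_step(config) == config)  # ...but the configuration never changes
```

Near each block boundary, every cell's seven-cell neighborhood still has a majority agreeing with the cell's own block, so both regions persist indefinitely.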
