The red-neck-cellular automata

I continue revising my work on collective intelligence, and I am linking it to the theory of complex systems. I return to the excellent book ‘What Is a Complex System?’ by James Ladyman and Karoline Wiesner (Yale University Press, 2020, ISBN 978-0-300-25110-4, Kindle Edition). I take and quote their summary list of characteristics that complex systems display, on pages 22 – 23: “[…] which features are necessary and sufficient for which kinds of complexity and complex system. The features are as follows:

1. Numerosity: complex systems involve many interactions among many components.

2. Disorder and diversity: the interactions in a complex system are not coordinated or controlled centrally, and the components may differ.

3. Feedback: the interactions in complex systems are iterated so that there is feedback from previous interactions on a time scale relevant to the system’s emergent dynamics.

4. Non-equilibrium: complex systems are open to the environment and are often driven by something external.

5. Spontaneous order and self-organisation: complex systems exhibit structure and order that arises out of the interactions among their parts.

6. Nonlinearity: complex systems exhibit nonlinear dependence on parameters or external drivers.

7. Robustness: the structure and function of complex systems is stable under relevant perturbations.

8. Nested structure and modularity: there may be multiple scales of structure, clustering and specialisation of function in complex systems.

9. History and memory: complex systems often require a very long history to exist and often store information about history.

10. Adaptive behaviour: complex systems are often able to modify their behaviour depending on the state of the environment and the predictions they make about it”.

As I look at the list, my method of simulating collective intelligence is coherent with it. Still, there is one point which I think I need to dig a bit more into: that whole thing with simple entities inside the complex system. In most of my simulations, I work on interactions between cognitive categories, i.e. between quantitative variables. Interaction between real social entities is most frequently implied rather than empirically nailed down. Still, there is one piece of research which sticks out a bit in that respect, and which I did last year. It is devoted to cities and their role in human civilisation. I wrote quite a few blog updates on the topic, and I have one unpublished paper written thereon, titled ‘The Puzzle of Urban Density And Energy Consumption’. In this case, I made simulations of collective intelligence with my method, thus I studied interactions between variables. Yet, in the phenomenological background of emerging complexity in variables, real people interact in cities: there are real social entities interacting in correlation with the connections between variables. I think the collective intelligence of cities is the piece of research where I have the surest empirical footing, as compared to the others.

There is another thing which I almost inevitably think about. Given the depth and breadth of the complexity theory, such as I start discovering it with and through that ‘What Is a Complex System?’ book by James Ladyman and Karoline Wiesner, I ask myself: what kind of bacon can I bring to that table? Why should anyone bother about my research? What theoretical value added can I supply? A good way of testing it is tackling real problems. I have just signalled my research on cities. The most general hypothesis I am exploring is that cities are factories of new social roles in the same way that the countryside is a factory of food. In the presence of demographic growth, we need more food, and we need new social roles for the new humans coming around. In the absence of such new social roles, those new humans feel alienated, they identify as revolutionaries fighting for the greater good, they identify the incumbent humans as an oppressive patriarchy, and the next thing you know, there is systemic, centralized, government-backed terror. Pardon my French, this is a system of social justice. I did my bit of social justice, in communist Poland.

Anyway, cities make new social roles by making humans interact much more abundantly than they usually do on a farm. More abundant an interaction means more data to process for each human brain, more s**t to figure out, and the next thing you know, you become a craftsman, a businessperson, an artist, or an assassin. Once again, being an assassin in the countryside would not make much sense. Jumping from one roof to another looks dashing only in an urban environment. Just try it on a farm.

Now, an intellectual challenge. How can humans, who essentially don’t know what to do collectively, interact so as to create emergent complexity which, in hindsight, looks as if they had known what to do? An interesting approach, which hopefully allows using some kind of neural network, is the paradigm of the maze. Each individual human is so lost in social reality that the latter appears as a maze, whose layout one does not know. Before I go further, there is one linguistic thing to nail down. I feel stupid using impersonal forms such as ‘one’, or ‘an individual’. I like more concreteness. I am going to start with George the Hero. George the Hero lives in a maze, and I stress it: he lives there. Social reality is like a maze to George, and, logically, George does not even want to get out of that maze, ‘cause that would mean being lonely, with no one around to gauge George’s heroism. George the Hero needs to stay in the maze.

The first thing which George the Hero needs to figure out is the dimensionality of the maze. How many axes can George move along in that social complexity? Good question. George needs to experiment in order to discover that. He makes moves in different social directions. He looks around what different kinds of education he can possibly get. He assesses his occupational options, mostly jobs and business ventures. He asks himself how he can structure his relations with family and friends. Is being an asshole compatible with fulfilling emotional bonds with people around?  

Wherever George the Hero currently is in the maze, there are n neighbouring and available cells around him. In each given place of the social maze, George the Hero has n possible ways to move further, into those n accessible cells in the immediate vicinity, and that is associated with k dimensions of movement. What is k, exactly? Here, I can refer to the theory of cellular automata, which attempts to simulate interactions between really simple, cell-like entities (Bandini, Mauri & Serra 2001[1]; Yu et al. 2021[2]). There is something called the ‘von Neumann neighbourhood’. It corresponds to the assumption that if George the Hero has n neighbouring social cells which he can move into, he can move like ‘left-right-forward-back’. That, in turn, spells k = n/2. If George can move into 4 neighbouring cells, he moves in a 2-dimensional space. Should he be able to move into 6 adjacent cells of the social maze, he has 3 dimensions to move along etc. Trouble starts when George sees an odd number of places to move to, like 5 or 7, on account of these giving half-dimensions, like 5/2 = 2.5, 7/2 = 3.5 etc. Half a dimension means, in practical terms, that George the Hero faces social constraints. There might be cells around, mind you, which technically are there, but there are walls between George and them, and thus, for all practical purposes, the Hero can afford not to give a f**k.

George the Hero does not like to move back. Hardly anyone does. Thus, when George has successfully moved from cell A to cell B, he will probably not like going back to A, just in order to explore another cell adjacent thereto. People behave heuristically. People build up on their previous gains. Once George the Hero has moved from A to B, B becomes his A for the next move. He will choose one among the cells adjacent to B (now A), move there etc. George is a Hero, not a scientist, and therefore he carves a path through the social maze rather than discovers the maze as such. Each cell in the maze contains some rewards and some threats. George can get food and it means getting into a dangerously complex relation with that sabre-tooth tiger. George can earn money and it means giving up some of his personal freedom. George can bond with other people and find existential meaning and it means giving up even more of what he provisionally perceives as his personal freedom.
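George’s heuristic can also be sketched as a walk that never immediately backtracks. The code below is a minimal illustration of my own, under simplifying assumptions (a 2-dimensional unconstrained lattice, uniformly random choices): from each cell, George picks any adjacent cell except the one he just came from, so he carves a path rather than mapping the maze.

```python
import random

# Sketch of George's path-carving heuristic: move to a random adjacent cell,
# but never straight back to the cell you just came from.

def carve_path(start, steps, k=2, seed=42):
    rng = random.Random(seed)
    path = [start]
    previous = None
    for _ in range(steps):
        here = path[-1]
        moves = []
        for axis in range(k):
            for step in (-1, 1):       # von Neumann moves: one step along each axis
                cell = list(here)
                cell[axis] += step
                moves.append(tuple(cell))
        # George builds on previous gains: going back is off the menu.
        options = [m for m in moves if m != previous]
        previous = here
        path.append(rng.choice(options))
    return path

print(carve_path((0, 0), steps=10))
```

A scientist would keep a map of visited cells; George only keeps the last one, which is exactly what makes his knowledge of the maze a path and not a survey.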

The social maze is truly a maze because there are many Georges around. Interestingly, many Georges in English give one Georges in French, and I feel this is the point where I should drop the metaphor of George the Hero. I need to get more precise, and thus I go to a formal concept in the theory of cellular automata, namely that of a d-dimensional cellular automaton, which can be mathematically expressed as A = (Z^d, S, N, S^(n+1) → S). In that automaton A, Z^d stands for the architecture of the maze, thus a lattice of d-tuples of integer numbers. In plain human, Z^d is given by the number of dimensions, possibly constrained, which a human can move along in the social space. Many people carve their paths across the social maze, no one likes going back, and thus the more people are around, and the better they can communicate their respective experiences, the more exhaustive knowledge we have of the surrounding Z^d.

There is a finite set S of states in that social space Z^d, and that finitude is connected to the formally defined neighbourhood of the automaton A, namely the N. Formally, N is a finite ordered subset of Z^d, and, besides the ‘left-right-forward-back’ neighbourhood of von Neumann, there is a more complex one, namely the Moore neighbourhood. In the latter, we can also move diagonally between cells, like to the left and forward, to the right and forward etc. Keeping in mind that neighbourhood means, in practical terms, the number n of cells which we can move into from the social cell we are currently in, the cellular automaton can be rephrased as A = (Z^d, S, n, S^(n+1) → S). The transition S^(n+1) → S, called the local rule of A, makes more sense now. With me being in a given cell of the social maze, and there being n available cells immediately adjacent to mine, that makes n + 1 cells where I can possibly be, and I can technically visit all those cells in a finite number of S^(n+1) combinatorial paths. The transition S^(n+1) → S expresses the way in which I carve my finite set S of states out of the generally available S^(n+1).
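A local rule S^(n+1) → S is easiest to see in the simplest possible automaton: one dimension, where each cell observes itself plus its n = 2 von Neumann neighbours, so n + 1 = 3 observed states collapse into one new state. The sketch below uses Rule 110, a standard elementary cellular-automaton rule, purely as an example of such a mapping; the sequence of rows it prints is precisely a sequence of configurations (global states) of the automaton.

```python
# A 1-dimensional cellular automaton with binary states S = {0, 1}.
# The local rule maps each (left, self, right) triple — that is, n + 1 = 3
# observed states — to one new state. Rule 110 is a standard example.
# For comparison: in d dimensions, the von Neumann neighbourhood has n = 2d
# cells, while the Moore neighbourhood (with diagonals) has n = 3**d - 1.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(row):
    """One global update: apply the local rule to every (left, self, right) triple."""
    padded = [0] + row + [0]           # quiescent boundary cells
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step(row)
    print(row)                         # each printed row is one configuration
```

Each iteration of `step` is one tick of the automaton’s emergent dynamics: many simple cells, one shared local rule, and structure appearing that no single cell planned.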

If I assume that cities are factories of new social roles, the cellular automaton of an urban homo sapiens should be more complex than the red-neck-cellular automaton of a farm folk. It might mean a greater n, thus more cells available for moving from where I am now. It might also mean a more efficient S^(n+1) → S local rule, i.e. a better way to explore all the possible states I can achieve starting from where I am. There is a separate formal concept for that efficiency in the local rule, and it is called the configuration of the cellular automaton, AKA its instantaneous description, AKA its global state, and it refers to the map Z^d → S. Hence, the configuration of my cellular automaton is the way in which the overall social space Z^d maps into the set S of states actually available to me.

Right, if I have my cellular automaton with a configuration map Z^d → S, it is sheer fairness that you have yours too, and your cousin Eleonore has another one for herself, as well. There are many of us in the social space Z^d. We are many x’s in the Z^d. Each x of us has their own configuration map Z^d → S. If we want to get along with each other, our individual cellular automatons need to be mutually coherent enough to have a common, global function of cellular automata, and we know there is such a global function when we can collectively produce a sequence of configurations.

According to my own definition, a social structure is a collectively intelligent structure to the extent that it can experiment with many alternative versions of itself and select the fittest one, whilst staying structurally coherent. Structural coherence, in turn, is the capacity to relax and tighten, in a sequence, behavioural coupling inside the society, so as to allow the emergence and grounding of new behavioural patterns. The theory of cellular automata provides me with some insights in that respect. Collective intelligence means the capacity to experiment with ourselves, right? That means experimenting with our global function Z^d → S, i.e. with the capacity to translate the technically available social space Z^d into a catalogue S of possible states. If we take a random sample of individuals in a society, and study their cellular automatons A, they will display local rules S^(n+1) → S, and these can be expressed as coefficients (S / S^(n+1)), 0 ≤ (S / S^(n+1)) ≤ 1. The latter express the capacity of individual cellular automatons to generate actual states S of being out of the generally available menu of S^(n+1).

In a large population, we can observe the statistical distribution of individual (S / S^(n+1)) coefficients of freedom in making one’s cellular state. The properties of that statistical distribution, e.g. the average (S / S^(n+1)) across the board, are informative about how collectively intelligent the given society is. The greater the average (S / S^(n+1)), the more possible states the given society can generate in the incumbent social structure, and the more it can know about the fittest state possible. That looks like a cellular definition of functional freedom.
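The population-level measurement can be sketched numerically. The snippet below is my own toy construction, not an established model: each individual realises some count of states out of the S^(n+1) combinatorially available ones, the ratio is that individual’s coefficient of freedom, and we look at the mean of a synthetic population of such coefficients.

```python
import random
from statistics import mean

# Toy sketch of the (S / S^(n+1)) coefficient of freedom and its distribution
# across a synthetic population. All numbers here are illustrative assumptions.

def freedom_coefficient(realised_states, available_states):
    """0 ≤ S / S^(n+1) ≤ 1: the share of available states actually generated."""
    return realised_states / available_states

rng = random.Random(0)
available = 2 ** 5                     # e.g. n + 1 = 5 binary cells → 32 possible states
population = [freedom_coefficient(rng.randint(1, available), available)
              for _ in range(1000)]

print(round(mean(population), 3))      # the society-wide average coefficient of freedom
```

In real research, of course, `realised_states` would have to be estimated empirically per individual rather than drawn at random; the point of the sketch is only that the average of those ratios is a single, comparable number per society.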

[1] Bandini, S., Mauri, G., & Serra, R. (2001). Cellular automata: From a theoretical parallel computational model to its application to complex systems. Parallel Computing, 27(5), 539-553.

[2] Yu, J., Hagen-Zanker, A., Santitissadeekorn, N., & Hughes, S. (2021). Calibration of cellular automata urban growth models from urban genesis onwards – a novel application of Markov chain Monte Carlo approximate Bayesian computation. Computers, Environment and Urban Systems, 90, 101689.
