Representative for collective intelligence

I am generalizing from the article which I am currently revising, taking a broader view on the many specific strands of research I am running, mostly in order to move forward with my hypothesis of collective intelligence in human social structures. I want to recapitulate my method – once more – in order to extract and understand its meaning.

I have recently realized a few things about my research. Firstly, I am using the logical structure of an artificial neural network as a simulator more than an optimizer – as digital imagination rather than functional, goal-oriented intelligence – and that seems to be a way of using AI which hardly anyone else in social sciences is pursuing. The big question which I am (re)asking myself is to what extent my simulations are representative of the collective intelligence of human societies.

I start gently, with variables, hence with my phenomenology. I mostly use commonly accessible, published variables, such as those published by the World Bank, the International Monetary Fund, Statista etc. Sometimes, I make my own coefficients out of those commonly accepted metrics, e.g. the coefficient of resident patent applications per 1 million people, the proportion between the density of population in cities and the general density, or the coefficient of fixed capital assets per 1 patent application.
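A minimal sketch of how such coefficients can be derived from raw metrics. The figures below are invented for illustration, not real World Bank data:

```python
# Deriving coefficients from commonly published metrics.
# All numbers here are made up for illustration only.

def patents_per_million(resident_patent_applications: float, population: float) -> float:
    """Resident patent applications per 1 million people."""
    return resident_patent_applications / (population / 1_000_000)

def capital_per_patent(fixed_capital_assets: float, patent_applications: float) -> float:
    """Fixed capital assets per 1 patent application."""
    return fixed_capital_assets / patent_applications

# Illustrative, invented figures for a fictional country-year:
print(patents_per_million(24_000, 38_000_000))   # ≈ 631.6 applications per 1M people
print(capital_per_patent(1.2e12, 24_000))        # 5.0e7 of capital per application
```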

My take on any variables in social sciences is very strongly phenomenological, or even hermeneutic. I follow the line of logic which you can find, for example, in “Phenomenology of Perception” by Maurice Merleau-Ponty (reprint, revised, Routledge, 2013, ISBN 1135718601, 9781135718602). I assume that any of the metrics we have in social sciences is an entanglement of our collective cognition with the actual s**t going on. As the actual s**t going on encompasses our way of forming our collective cognition, any variable used in social sciences is very much like a person’s attempt to look at themselves from a distance. Yes! This is what we use mirrors for! Variables used in social sciences are mirrors. Still, they are mirrors made largely by trial and error, with a little bit of a shaky hand, and each of them shows actual social reality in a slightly distorted manner.

Empirical research in social sciences consists, very largely, in a group of people trying to guess something about themselves on the basis of repeated looks into a set of imperfect mirrors. Those mirrors are imperfect, and yet they serve some purpose. I pass to my second big phenomenological take on social reality, namely that our entangled observations thereof are far from being haphazard. The furtive looks we catch of the phenomenal soup, out there, are purposeful. We pay attention to things which pay off. We define specific variables in social sciences because we know by experience that paying attention to those aspects of social reality brings concrete rewards, whilst not paying attention thereto can hurt, like bad.

Let’s take inflation. Way back in the day, like 300 years ago, no one really used the term inflation, because the monetary system consisted of a multitude of currencies, mixing private and public deeds of various kinds. Entire provinces in European countries could rely on bills of exchange issued by influential merchants and bankers, just to switch to another type of bills 5 years later. Fluctuations in the exchange rates of those multiple currencies very largely cancelled each other out. Each business of respectable size was like a local equivalent of today’s Forex exchange. Inflation was a metric which did not even make sense at the time; any professional of finance would intuitively ask back: ‘Inflation? Like… inflation in which exactly of those 27 currencies I use every day?’.

Standardized monetary systems, which we call ‘fiat money’ today, steadied themselves only in the 19th century. Multiple currencies progressively fused into one, homogenized monetary mass, and mass conveys energy. Inflation is loss of monetary energy, like entropy of the monetary mass. People started paying attention to inflation when it started to matter.

We make our own social reality, and it is fundamentally unobservable to us, which makes sense: it is hard to have an objective, external look at a box when we are inside the box. Living in that box, we have learnt, over time, how to pay attention to the temporarily important properties of the box. We have learnt how to use maths for fine-tuning that selective perception of ours. We learnt, for example, to replace the basic distinction between people doing business and people not doing business at all with finer shades of exactly how much business people are doing in a unit of time-space.

Therefore, a set of empirical variables, e.g. from the World Bank, is a collection of imperfect observations, which represent collectively valuable social outcomes. A set of N socio-economic variables represents N collectively valuable social outcomes, which, in turn, correspond to N collective pursuits – it is a set of collective orientations. Now, my readers have every right to protest: ‘Man, just chill. You are getting carried away by your own ideas. Quantitative variables about society and economy are numbers, right? They are the metrics of something. Measurement is objective and dispassionate. How can you say that objectively gauged metrics are collective orientations?’. Yes, these are all valid objections, and I made up that little imaginary voice of my readers on the basis of reviews that I had for some of my papers.

Once again, then. We measure the things we care about, and we go to great lengths in creating accurate scales and methods of measurement for the things we very much care about. Collective coordination is costly and hard to achieve. If we devote decades of collective work to nailing down the right way of measuring, e.g., the professional activity of people, it probably matters. If it matters, we are collectively after optimizing it. A set of quantitative, socio-economic variables represents a set of collectively pursued orientations.

In the branch of philosophy called ethics, there is a stream of thought labelled ‘contextual ethics’, whose proponents claim that whatever normatively defined values we say we stick to, the real values we stick to are to be deconstructed from our behaviour. Things we are recurrently and systematically after are our contextual ethical values. Yes, the socio-economic variables we can get from your average statistical office are informative about the contextual values of our society.

When I deal with a variable like the % of electricity in the total consumption of energy, I deal with a superimposition of two cognitive perspectives. I observe something that happens in the social reality, and that phenomenon takes the form of a spatially differentiated, complex state of things, which changes over time, i.e. one complex state transitions into another complex state etc. On the other hand, I observe a collective pursuit to optimize that % of electricity in the total consumption of energy.

The process of optimizing a socio-economic metric makes me think once again about the measurement of social phenomena. We observe and measure things which are important to us because they give us some sort of payoff. We can have collective payoffs in three basic ways. We can max out, for one. Case: Gross Domestic Product, access to sanitation. We can keep something as low as possible, for two. Case: murder, tuberculosis. Finally, we can maintain some kind of healthy dynamic balance. Case: inflation, use of smartphones. Now, let’s notice that we don’t really do fine calculations about murder or tuberculosis. Someone is healthy or sick, still alive or already murdered. Transitional states are not of much collective interest. When it comes to outcomes which pay off by the absence of something, we tend to count them digitally, like ‘is there or isn’t there’. On the other hand, those other outcomes, which we max out on or keep in equilibrium, well, that’s another story. We invent and perfect subtle scales of measurement for those phenomena.

That makes me think about a seminal paper titled ‘Selection by consequences’, by the founding father of behaviourism, Burrhus Frederic Skinner. Skinner introduced the distinction between positive and negative reinforcements. He claimed that negative reinforcements are generally stronger in shaping human behaviour, whilst being clumsier as well. We just run away from a tiger, we don’t really try to calibrate the right distance and the right speed of evasion. On the other hand, we tend to calibrate quite finely our reactions to positive reinforcements. We dose our food, we measure exactly the buildings we make, we learn by small successes etc.

If a set of quantitative socio-economic variables is informative about a set of collective orientations (collectively pursued outcomes), one of the ways we can study that set consists in establishing a hierarchy of orientations. Are some of those collective values more important than others? What does ‘more important’ even mean in this context, and how can it be assessed? We can imagine that each among the many collective orientations is an individual pursuing their idiosyncratic path of payoffs from interactions with the external world. By the way, this metaphor is closer to reality than it could appear at first sight. Each human is, in fact, a distinct orientation. Each of us is action. This perspective has been very sharply articulated by Martin Heidegger, in his “Being and Time”.

Hence, each collective orientation can be equated to an individual force, pulling the society in a specific direction. In the presence of many socio-economic variables, I assume the actual social reality is a superimposition of those forces. They can diverge or concur, as they please, I do not make any assumptions about that. Which of those forces pulls the most powerfully?

Here comes my mathematical method, in the form of an artificial neural network. I proceed step by step. What does it mean that we collectively optimize a metric? Mostly, it means making the metric coherent with our other orientations. Human social structures are based on coordination, and coordination happens both between social entities (individuals, cities, states, political parties etc.), and between different collective pursuits. Optimizing a metric representative of a collectively valuable outcome means coordinating with other collectively valuable outcomes. In that perspective, a phenomenon represented (imperfectly) with a socio-economic metric is optimized when it remains in some kind of correlation with other phenomena, represented with other metrics. The way I define correlation in that statement is a broad one: correlation is any concurrence of events displaying a repetitive, functional pattern.

Thus, when I study the force of a given variable as a collective orientation in a society, I take this variable as the hypothetical output of the process of collective orientation, and I simulate that process as the output variable sort of dragging the remaining variables behind it, by the force of functional coherence. With a given set of empirical variables, I make as many mutations thereof as I have variables. Each mutated set represents a process, with one variable as output, and the remaining ones as input. The process consists of as many experiments as there are observational rows in my database. Most socio-economic variables come in rows of the type “country A in year X”.
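The mutation step described above can be sketched as follows. The variable names and figures are invented placeholders, not the actual dataset:

```python
# For a dataset with n variables, build n "mutations": each one treats
# a different variable as the output, and the remaining ones as input.
# Variable names and values below are invented for illustration.

variables = ["gdp_per_capita", "patents_per_1m", "pct_electricity"]

# One observational row = "country A in year X"
rows = [
    {"gdp_per_capita": 45_000, "patents_per_1m": 630, "pct_electricity": 0.21},
    {"gdp_per_capita": 38_000, "patents_per_1m": 410, "pct_electricity": 0.18},
]

def mutate(variables, rows):
    """Yield (output_variable, input_variables, rows) for each mutation."""
    for output in variables:
        inputs = [v for v in variables if v != output]
        yield output, inputs, rows

for output, inputs, data in mutate(variables, rows):
    print(output, "<-", inputs)   # one simulated collective orientation per line
```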

Here, I do a little bit of mathematical cavalry with two different models of swarm intelligence: particle swarm and ant colony (see: Gupta & Srivastava 2020[1]). The model of particle swarm comes from the observation of birds, which keeps me in a state of awe about human symbolic creativity, and it models the way that flocks of birds stay collectively coherent when they fly around in search of food. Each socio-economic variable is a collective orientation, and in practical terms it corresponds to a form of social behaviour. Each such form of social behaviour is a bird, which observes and controls its distance from other birds, i.e. from other forms of social behaviour. Societies experiment with different ways of maintaining internal coherence between different orientations. Each distinct collective orientation observes and controls its distance from other collective orientations. From the perspective of an ant colony, each form of social behaviour is a pheromonal trace which other forms of social behaviour can follow and reinforce, or not give a s**t about, to their pleasure and leisure. Societies experiment with different strengths attributed to particular forms of social behaviour, which mimics an ant colony experimenting with different pheromonal intensities attached to different paths toward food.

Please, notice that both models – particle swarm and ant colony – mention food. Food is the outcome to achieve. Output variables in mutated datasets – which I create out of the empirical one – are the food to acquire. Input variables are the moves and strategies which birds (particles) or ants can perform in order to get food. Experimentation the ants’ way involves weighting each local input (i.e. the input of each variable in each experimental round) with a random weight R, 0 < R < 1. When experimenting the birds’ way, I drop into my model the average Euclidean distance E from the local input to all the other local inputs.

I want to present it all rolled nicely into an equation, and, noblesse oblige, I introduce symbols. The local input of an input variable xi in experimental round tj is represented as xi(tj), whilst the local value of the output variable xo is written as xo(tj). The compound experimental input which the society makes, both the ants’ way and the birds’ way, is written as h(tj), and it spells h(tj) = x1(tj)*R*E[x1(tj-1)] + x2(tj)*R*E[x2(tj-1)] + … + xn(tj)*R*E[xn(tj-1)].
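A minimal numerical sketch of that compound input, with invented toy values. Note that with scalar local inputs, the average Euclidean distance E reduces to an average absolute difference:

```python
import random

def avg_distance_to_others(prev_inputs, i):
    """E[xi(tj-1)]: average Euclidean distance from local input i to all
    the other local inputs of the previous experimental round. With scalar
    inputs, Euclidean distance is just an absolute difference."""
    others = [v for j, v in enumerate(prev_inputs) if j != i]
    return sum(abs(prev_inputs[i] - v) for v in others) / len(others)

def compound_input(x_now, x_prev, rng):
    """h(tj) = x1(tj)*R*E[x1(tj-1)] + ... + xn(tj)*R*E[xn(tj-1)],
    with a fresh random weight R, 0 < R < 1, drawn per term."""
    return sum(
        x * rng.random() * avg_distance_to_others(x_prev, i)
        for i, x in enumerate(x_now)
    )

rng = random.Random(42)  # fixed seed, so the sketch is reproducible
h = compound_input([0.5, 0.8, 0.3], [0.4, 0.7, 0.2], rng)
print(h)  # a single scalar compound input for this experimental round
```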

Up to that point, this is not really a neural network. It mixes things up, but it does not really adapt. I mean… maybe there is a little intelligence? After all, when my variables act like a flock of birds, they observe each other’s position in the previous experimental round, through the E[xi(tj-1)] Euclidean thing. However, I still have no connection, at this point, between the compound experimental input h(tj) and the pursued output xo(tj). I need a connection which would work like an observer, something like a cognitive meta-structure.

Here comes the very basic science of artificial neural networks. There is a function called the hyperbolic tangent, which spells tanh(x) = (e^(2x) – 1)/(e^(2x) + 1), where x can be whatever you want. This function happens to be one of those used in artificial neural networks, as neural activation, i.e. as a way to mediate between a compound input and an expected output. When I have that compound experimental input h(tj) = x1(tj)*R*E[x1(tj-1)] + x2(tj)*R*E[x2(tj-1)] + … + xn(tj)*R*E[xn(tj-1)], I can put it in the place of x in the hyperbolic tangent, and I get tanh[h(tj)] = (e^(2h) – 1)/(e^(2h) + 1). In a neural network, the error in optimization can be calculated, generally, as e = xo(tj) – tanh[h(tj)]. That error can be fed forward into the next experimental round, and then we are talking, ‘cause the compound experimental input morphs into:

>> h(tj) = x1(tj)*R*E[x1(tj-1)]*e(tj-1) + x2(tj)*R*E[x2(tj-1)]*e(tj-1) + … + xn(tj)*R*E[xn(tj-1)]*e(tj-1)

… and that means that each compound experimental input takes into account both the coherence of the input in question (E), and the results of previous attempts to optimize.
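The activation and the error of optimization can be written out directly, with the hyperbolic tangent spelled exactly as above:

```python
import math

def activation(h: float) -> float:
    """Hyperbolic tangent written out as (e^(2h) - 1) / (e^(2h) + 1)."""
    return (math.exp(2.0 * h) - 1.0) / (math.exp(2.0 * h) + 1.0)

def optimization_error(x_output: float, h: float) -> float:
    """e(tj) = xo(tj) - tanh[h(tj)]: the gap between the pursued outcome
    and what the current compound input actually produces."""
    return x_output - activation(h)

print(activation(0.0))                          # 0.0: a null input activates nothing
print(round(optimization_error(0.8, 0.5), 3))   # 0.8 - tanh(0.5) ≈ 0.338
```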

Here, I am a bit stuck. I need to explain how exactly the fact of computing the error of optimization e = xo(tj) – tanh[h(tj)] is representative of collective intelligence.
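In the meantime, the whole experimental process described above – compound input with random weights and Euclidean distances, tanh activation, error fed forward into the next round – can be rolled into one runnable sketch. The data below is an invented toy panel, with one variable designated as output:

```python
import math
import random

def run_experiments(rows, output_idx, rng, seed_error=1.0):
    """Feed the optimization error of round tj-1 into the compound input of tj.

    rows: list of lists, one list of variable values per observational row
          (e.g. "country A in year X"), scaled to comparable magnitudes.
    output_idx: index of the variable treated as the pursued outcome.
    """
    error = seed_error
    prev = rows[0]
    errors = []
    for row in rows[1:]:
        inputs = [v for i, v in enumerate(row) if i != output_idx]
        prev_inputs = [v for i, v in enumerate(prev) if i != output_idx]
        h = 0.0
        for i, x in enumerate(inputs):
            others = [v for j, v in enumerate(prev_inputs) if j != i]
            e_dist = sum(abs(prev_inputs[i] - v) for v in others) / len(others)
            h += x * rng.random() * e_dist * error   # each term weighted by e(tj-1)
        error = row[output_idx] - math.tanh(h)       # e = xo(tj) - tanh[h(tj)]
        errors.append(error)
        prev = row
    return errors

rng = random.Random(0)  # fixed seed for reproducibility
rows = [[0.5, 0.4, 0.7], [0.6, 0.5, 0.6], [0.4, 0.6, 0.8]]
errs = run_experiments(rows, output_idx=0, rng=rng)
print(errs)  # one residual error per experimental round
```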


[1] Gupta, A., & Srivastava, S. (2020). Comparative analysis of ant colony and particle swarm optimization algorithms for distance optimization. Procedia Computer Science, 173, 245-253. https://doi.org/10.1016/j.procs.2020.06.029
