Both needed and impossible

I return to and focus on the issue of behavioural change under the impact of an external stressor, and the use of an artificial neural network to simulate it. I somehow connect to what I wrote in ‘Cross breeding my general observations’, and I want to explore the outcomes to expect from the kind of s**t which is happening right now: climate change, pandemic, rapid technological change with the rise of digital technologies, urbanisation, social unrest… you name it. I want to observe Black Swans and study the way they make their way into our normal (see ‘Black Swans happen all the time’). I intend to dissect situations when exogenous stressors trigger the so-far dormant patterns of behaviour, whilst randomly pushing the incumbent ones out of the system, and, in the process, those initially exogenous stressors become absorbed (AKA endogenized) by the society.

Back to plain human lingo, I assume that we, humans, do stuff. We can do only what we have learnt to do, therefore anything we do is a recurrent pattern of behaviour, which changes constantly in the process of learning. We differ in our individual patterns, and social life can be represented as the projection of a population into a finite set of behavioural patterns, which, further in this development, I will label as ‘social roles’. You probably know what a pool table looks like. Imagine a pretty continuous stream of pool balls, i.e. humans, spilling over an immense pool table with lots of holes in it. Each hole is a social role, and each human ball finally ends up in one of the holes, i.e. endorsing one among a finite number of social roles. Probabilistically, each social role can be described with the probability that the average homo sapiens being around endorses that role.
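Just to make the pool-table metaphor tangible, here is a minimal sketch in Python of a population spilling into a finite set of social roles; the role labels and probabilities are purely illustrative assumptions, not anything estimated from data:

```python
import random

# Illustrative assumptions: three social roles and their probabilities of endorsement.
roles = ["sr1", "sr2", "sr3"]
probabilities = [0.5, 0.3, 0.2]  # must sum to 1

def project_population(n_humans, roles, probabilities, seed=42):
    """Spill n_humans pool balls over the table: each ball ends up in one hole,
    i.e. each human endorses exactly one social role."""
    rng = random.Random(seed)
    endorsements = rng.choices(roles, weights=probabilities, k=n_humans)
    # Empirical frequency of each role approximates its probability of endorsement.
    return {role: endorsements.count(role) / n_humans for role in roles}

observed = project_population(10_000, roles, probabilities)
```

With 10 000 balls, the observed frequencies land close to the assumed probabilities, which is exactly the probabilistic description of social roles I am after.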

Thus, I study a human population projecting itself into a finite set SR = {sr1, sr2, …, srm} of m social roles, coupled with the set PSR = {p(sr1), p(sr2), …, p(srm)} of probabilities that each given social role is being endorsed. Those two coupled sets, i.e. SR and PSR, make a collectively intelligent social structure, able to learn by experimenting with many alternative versions of itself. This, in turn, implies two processes, namely production of and selection from among those alternative versions. Structural intelligence manifests as the capacity to produce and select alternative versions whilst staying coherent laterally and longitudinally. Lateral coherence is observable as functional connection between social roles in the set SR, whilst the longitudinal one is continuity in the structure of the set SR. Out of those two coherences, the lateral one is self-explanatory and assumed a priori: no social role can exist in total abstraction from other social roles, i.e. without any functional connection whatsoever. On the other hand, I assume that longitudinal coherence can be broken, in the sense that under some conditions the set SR can turn into a new set SR’, which will contain very different a repertoire of social roles.

I go into maths. Each social role sri, besides being associated with its probability of endorsement p(sri), is associated with a meta-parameter, i.e. its lateral coherence LC(sri) with m – 1 other social roles in the set SR, and that coherence is defined as the average Euclidean distance between p(sri) and the probabilities p(srj) of other social roles, as in Equation (1) below.

Equation (1):   LC(sri) = (1/(m – 1)) * Σ(j ≠ i) √[(p(sri) – p(srj))²]

Logically, we have one more component of the collectively intelligent social structure, namely the set LCSR = {LC(sr1), LC(sr2), …, LC(srm)} of lateral coherences between social roles.
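Under the definition above, Equation (1) boils down to a few lines of code. Here is a hedged sketch in Python, with purely illustrative probabilities; note that in one dimension the Euclidean distance √[(a – b)²] reduces to the absolute difference |a – b|:

```python
# Illustrative assumption: probabilities of endorsement for m = 3 social roles.
PSR = [0.5, 0.3, 0.2]  # p(sr1), p(sr2), p(sr3)

def lateral_coherence(i, psr):
    """LC(sr_i): the average Euclidean distance between p(sr_i) and the
    probabilities of the m - 1 other social roles, as in Equation (1)."""
    m = len(psr)
    return sum(abs(psr[i] - psr[j]) for j in range(m) if j != i) / (m - 1)

# The set LCSR of lateral coherences, one per social role.
LCSR = [lateral_coherence(i, PSR) for i in range(len(PSR))]
```

For PSR = [0.5, 0.3, 0.2] this gives LCSR ≈ [0.25, 0.15, 0.20].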

The collectively intelligent social structure, manifest as three mutually coupled sets, i.e. SR, PSR, and LCSR, optimizes a vector of social outcomes. In order to keep some sort of methodological purity, I will further designate that vector as a set, namely the set O = {o1, o2, …, ok} of k social outcomes. Still, we keep in mind that in mathematics, the transition from set to vector and back is pretty simple and common-sense-based. A set has the same variables in it as the vector made out of that set, only we can cherry-pick variables from a set, whilst we cannot really do it out of a vector, on the account of them variables being bloody entangled in the vector. When a set turns into a vector, its variables go mousquetaire, Dumas style, and they are one for all and all for one, sort of.

With the above assumptions, a collectively intelligent social structure can be represented as the coupling of four sets: social roles SR, probabilities of endorsement as regards those social roles PSR, lateral coherences between social roles LCSR, and social outcomes O. Further, the compound notation {SR, PSR, LCSR, O} is used to designate such a structure.

Experimental instances happen one by one, and therefore they can be interpreted as consecutive experiments, possible to designate mathematically as units t of time. For the sake of clarity, the current experimental instance of the structure {SR, PSR, LCSR, O} is designated with ‘t’, past instances are referred to as t – l, where ‘l’ stands for temporal lag, and the hypothetical first state of that structure is t0. Any current instance of {SR, PSR, LCSR, O} is notated as {SR(t), PSR(t), LCSR(t),O(t)}.

Consistently with the Interface Theory of Perception (Hoffman et al. 2015[1], Fields et al. 2018[2]), as well as the theory of Black Swans (Taleb 2007[3]; Taleb & Blyth 2011[4]), it is assumed that the structure {SR, PSR, LCSR, O} internalizes exogenous stressors, both positive and negative, transforming them into endogenous constraints, therefore creating an expected vector E(O) of outcomes. Each consecutive instance {SR(t), PSR(t), LCSR(t),O(t)} of the structure {SR, PSR, LCSR, O} learns by pitching its real local outcomes O(t) against their expected local state E[O(t)].

Internalization of exogenous stressors allows studying the whole sequence of states, i.e. from the first instance {SR(t0), PSR(t0), LCSR(t0),O(t0)} to the current one {SR(t), PSR(t), LCSR(t),O(t)}, as a Markov chain of states, which transform into each other through a σ-algebra. The current state {SR(t), PSR(t), LCSR(t),O(t)} and its expected outcomes E[O(t)] contain all the information from past learning, and therefore the local error in adaptation, i.e. e(t) = {E[O(t)] – O(t)}*dO(t), where dO(t) stands for the local derivative (local first moment) of O(t), conveys all that information from past learning. That factorisation of the error in adaptation into a residual difference and a first moment is based on the intuition that collective intelligence is always on the move, and any current instance {SR(t), PSR(t), LCSR(t),O(t)} is just a snapshot of an otherwise constantly changing social structure.
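The local error in adaptation is simple enough to write down directly. A minimal sketch, with illustrative numbers standing in for E[O(t)], O(t) and dO(t):

```python
def adaptation_error(expected, actual, d_actual):
    """e(t) = {E[O(t)] - O(t)} * dO(t): the residual difference between expected
    and actual outcomes, weighted by the local first moment of actual outcomes."""
    return (expected - actual) * d_actual

# Illustrative case: real outcomes overshoot expectations whilst still growing,
# so the residual is negative and the error e(t) comes out negative.
e = adaptation_error(expected=0.8, actual=1.0, d_actual=0.1)
```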

With the assumptions above, {SR(t), PSR(t), LCSR(t),O(t)} = {SR(t-1) + e(t-1), PSR(t-1) + e(t-1), LCSR(t-1) + e(t-1), O(t-1) + e(t-1)} and E[O(t)] = E[O(t-1)] + e(t-1). The logic behind adding the immediately past error to the present state {SR(t), PSR(t), LCSR(t),O(t)} is that collective learning is essentially incremental, not revolutionary. Each consecutive state {SR(t), PSR(t), LCSR(t),O(t)} is a one-mutation neighbour of the immediately preceding state {SR(t-1), PSR(t-1), LCSR(t-1),O(t-1)} rather than its structural modification. Hence, we are talking about arithmetical addition rather than multiplication or division. Of course, it is worth keeping in mind that subtraction is a special case of addition, where one component of the addition carries a negative sign.
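One incremental learning step, i.e. the transition from {SR(t-1), PSR(t-1), LCSR(t-1), O(t-1)} to its one-mutation neighbour, can be sketched as below; the state values and the error are illustrative assumptions:

```python
def learning_step(state, past_error):
    """Produce state(t) by adding the immediately past error e(t-1) to every
    component of state(t-1): incremental, not revolutionary, learning."""
    return {key: [v + past_error for v in values] for key, values in state.items()}

# Illustrative state at t-1 (SR and LCSR omitted for brevity) and error e(t-1).
state_t1 = {"PSR": [0.5, 0.3, 0.2], "O": [1.0, 2.0]}
state_t2 = learning_step(state_t1, past_error=-0.02)
```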

Exogenous stressors act upon human behaviour at two levels: recurrent and incidental. Recurrent exogenous stressors make people reconsider, systematically, their decisions to endorse a given social role, in the sense that those decisions, besides taking into account the past state of the structure {SR(t0), PSR(t0), LCSR(t0),O(t0)}, incorporate randomly distributed, current exogenous information X(t). That random exogenous parcel of information affects all the people likely to endorse the given social role sri, which, in turn, means arithmetical multiplication rather than addition, i.e. PSR(t) = X(t)*[PSR(t-1) + e(t-1)].
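The recurrent-stressor update, PSR(t) = X(t)*[PSR(t-1) + e(t-1)], can be sketched like this; the seed, the error and the initial probabilities are illustrative assumptions, and X(t) is drawn as a simple uniform random number:

```python
import random

def recurrent_update(psr_prev, past_error, rng):
    """PSR(t) = X(t) * [PSR(t-1) + e(t-1)]: the exogenous parcel X(t) multiplies,
    rather than adds to, the incrementally learned probabilities."""
    x_t = rng.random()  # randomly distributed current exogenous information X(t)
    return [x_t * (p + past_error) for p in psr_prev]

rng = random.Random(7)
psr_t = recurrent_update([0.5, 0.3, 0.2], past_error=-0.02, rng=rng)
```

Note that multiplication by the same X(t) rescales all the probabilities whilst preserving their proportions, which is precisely what distinguishes recurrent stress from incremental learning by addition of error.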

Incidental exogenous stress, in this specific development, is very similar to Black Swans (Taleb 2007 op. cit.; Taleb & Blyth 2011 op. cit.), i.e. it consists of short-term, violently disturbing events, likely to drive some social roles extinct or, conversely, to trigger new social roles into existence. Extinction of a social role means that its probability becomes null: p(sri) = 0. The birth of a new social role is more complex. Social roles are based on pre-formed skillsets and socially tested strategies of gaining payoffs from those skillsets. A new social role appears in two phases. In the first phase, the skills necessary to endorse that role progressively form in the members of a given society, yet those skills have not yet played out sufficiently to be endorsed as the social identity of an individual. Just to give an example, the recent and present development of cloud computing as a distinct digital business encourages the formation of skills in trading large datasets at the business level, such as those collected via cookie algorithms. Trade in datasets is real, and the skills required are just as real, yet there is no officially labelled profession of data trader yet. Data trader is something like a dormant social role: the skills are there, in the humans involved, and still there is nothing to endorse officially. A more seasoned social role, which followed a similar trajectory, is that of an electricity broker. As power grids have been evolving towards increasing digitalisation and liquidity in the transmission of power, it became possible to trade daily in power capacity, at first, and then a distinct profession, that of a power broker, emerged together with institutionalized power exchanges.

That first phase of emergence of a new social role creates dormant social roles, i.e. ready-to-use skillsets which need just a small encouragement, in the form of socially recognized economic incentives, to kick into existence. Mathematically, it means that the set SR of social roles entails two subsets: active and dormant. Active social roles display p(sri;t) > 0, and, under the impact of a local, Black-Swan-type event, they can drop to p(sri;t) = 0. Dormant social roles are at p(sri;t) = 0 for now, and can flip to p(sri;t) > 0 in the presence of a Black Swan.
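The active/dormant distinction, together with a Black-Swan-type flip, can be sketched as follows; the role names and probabilities are illustrative assumptions, borrowed from the data trader and electricity broker examples above:

```python
def black_swan(psr, extinguished, awakened, awakened_p):
    """A short, violently disturbing event: one active role goes extinct
    (its probability drops to zero) and one dormant role kicks into existence."""
    new_psr = dict(psr)
    new_psr[extinguished] = 0.0     # active role pushed out of the system
    new_psr[awakened] = awakened_p  # dormant skillset turns into an endorsed role
    return new_psr

# Illustrative assumption: one active role, one dormant role.
psr = {"electricity_broker": 0.4, "data_trader": 0.0}
psr_after = black_swan(psr, extinguished="electricity_broker",
                       awakened="data_trader", awakened_p=0.4)
```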

In the presence of active recurrent stress upon the structure {SR, PSR, LCSR, O}, thus if we assume X(t) > 0, I can present a succinct mathematical example of Black-Swan-type exogenous disturbance, with just two social roles, sr1 and sr2. Before the disturbance, sr1 is active and sr2 is dormant. In other words, p(sr1; t – 1)*X(t – 1) > 0 whilst p(sr2; t – 1)*X(t – 1) = 0. With the component of learning by incremental error in a Markov chain of states, it means [p(sr1; t – 2) + e(t – 2)]*X(t – 1) > 0 and [p(sr2; t – 2) + e(t – 2)]*X(t – 1) = 0, which logically equates to p(sr1; t – 2) > – e(t – 2) and p(sr2; t – 2) = – e(t – 2).

After the disturbance, the situation changes dialectically, namely p(sr1; t – 1)*X(t – 1) = 0 and p(sr2; t – 1)*X(t – 1) > 0, implying that p(sr1; t – 2) = – e(t – 2) and p(sr2; t – 2) > – e(t – 2). As you can probably recall from math classes in high school, there is no way a probability can be negative, and therefore, if I want the expression ‘– e(t – 2)’ to make any sense at all in this context, I need e(t – 2) ≤ 0. As e(t) = {E[O(t)] – O(t)}*dO(t), e(t) ≤ 0 occurs when its two factors have opposite signs, or either of them is zero: either E[O(t)] ≤ O(t) with dO(t) ≥ 0, or E[O(t)] ≥ O(t) with dO(t) ≤ 0.

Therefore, the whole construct of Black-Swan-type exogenous stressors such as presented above seems to hold logically when:

>> the structure {SR, PSR, LCSR, O} yields local real outcomes O(t) greater than or equal to expected outcomes E[O(t)], i.e. either a perfect match between actual outcomes and expected ones (thus perfect adaptation) or an overshoot of actual outcomes beyond expectations, whilst the first moment of those real outcomes is perfectly still (i.e. equal to zero) or positive…

>> …or, conversely, real outcomes O(t) fall short of expected outcomes E[O(t)], whilst the first moment of local real outcomes is perfectly still or negative
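A quick numeric check of the sign of e(t) = {E[O(t)] – O(t)}*dO(t), with purely illustrative numbers, shows the scenarios at play:

```python
def e(expected, actual, d_actual):
    """e(t) = {E[O(t)] - O(t)} * dO(t)."""
    return (expected - actual) * d_actual

overshoot_growing   = e(0.8, 1.0,  0.1)   # O(t) > E[O(t)], dO(t) > 0: e(t) < 0
shortfall_declining = e(1.0, 0.8, -0.1)   # O(t) < E[O(t)], dO(t) < 0: e(t) < 0
shortfall_growing   = e(1.0, 0.8,  0.1)   # O(t) < E[O(t)], dO(t) > 0: e(t) > 0
```

In the first two cases – e(t) is non-negative and can pass for a probability; the third case is where negative probabilities, and thus impossible states, creep in.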

Of course, there is an open possibility of such instances, in the structure {SR, PSR, LCSR, O}, which yield real outcomes falling short of expectations, thus E[O(t)] > O(t), with dO(t) > 0, and hence a positive error e(t) > 0. In these instances, according to the above-deployed logic of collective intelligence, the next experimental round t+1 can yield negative probabilities p(sri) of endorsing specific social roles, thus an impossible state. Can collective intelligence of a human society go into those impossible states? I admit I have no clear answer to that question, and therefore I asked around. I mean, I went to Google Scholar. I found three articles, all of them, interestingly, in the field of physics. In an article by R. P. Feynman, titled ‘Negative probability’, published in 1987 in ‘Quantum Implications: Essays in Honour of David Bohm’ (pages 235-248), I read: ‘[…] conditional probabilities and probabilities of imagined intermediary states may be negative in a calculation of probabilities of physical events or states. If a physical theory for calculating probabilities yields a negative probability for a given situation under certain assumed conditions, we need not conclude the theory is incorrect. Two other possibilities of interpretation exist. One is that the conditions (for example, initial conditions) may not be capable of being realized in the physical world. The other possibility is that the situation for which the probability appears to be negative is not one that can be verified directly. A combination of these two, limitation of verifiability and freedom of initial conditions, may also be a solution to the apparent difficulty’.

This sends me back to my economics and to the concept of economic equilibrium, which assumes that societies can be in a state of economic equilibrium or out of it. In the former case, they can sort of steady themselves, and in the latter… Well, when you have no balance, man, you need to move so as to gain some. If a collectively intelligent social structure yields a negative probability attached to the occurrence of a given social role, it can indicate truly impossible a state, yet with impossibility understood along the lines of quantum physics. It is a state which our society should get the hell out of, ‘cause it is not gonna last, on the account of being impossible. An impossible state is not a state that cannot happen: it is a state which cannot stay in place.

Well, I am having real fun with that thing. I started from an innocent model of collective intelligence, I found myself cornered with negative probabilities, and I guess I found my way out by referring to quantum physics. The provisional moral I draw from this fairy tale is that a collectively intelligent social structure, whose learning and adaptation can be represented as a Markov chain of states, can have two types of states: the possible AKA stable ones, on the one hand, and the impossible AKA transitory ones, on the other hand.

When the structure {SR, PSR, LCSR, O} is in a stable, and therefore possible, state, it yields local real outcomes O(t) greater than or equal to expected outcomes E[O(t)], with the first moment of those outcomes zero or positive; it is perfectly fit to fight for survival, or it overshoots expectations. Another possible state is that of real outcomes O(t) falling short of expectations whilst being perfectly still or negative in their first moment. On the other hand, when the structure {SR, PSR, LCSR, O} yields real outcomes O(t) smaller than expected outcomes E[O(t)], in the presence of a positive local gradient of change in those real outcomes, it is in an impossible, unstable state. That thing from quantum physics fits surprisingly well with a classical economic theory, namely the theory of innovation by Joseph Schumpeter: economic systems transition from one neighbourhood of equilibrium to another, and they transition through states of disequilibrium, which are both needed for social change, and impossible to hold for a long time.

When the structure {SR, PSR, LCSR, O} hits an impossible state, where some social roles happen with negative probabilities, that state is an engine which powers accelerated social change.     

[1] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[2] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213.

[3] Taleb, N. N. (2007). The black swan: The impact of the highly improbable (Vol. 2). Random House.

[4] Taleb, N. N., & Blyth, M. (2011). The black swan of Cairo: How suppressing volatility makes the world less predictable and more dangerous. Foreign Affairs, 33-39.
