The batteries we don’t need anymore

I continue the thread I started to develop in my last update in French, titled ‘De quoi parler à la prochaine réunion de faculté’, i.e. I am using this blog, and the act of writing, to put some order into the almost ritual mess that happens at the beginning of the academic year. New calls for tenders start in the ministerial grant programs, new syllabuses need to be prepared, new classes start. Ordinary stuff, mind you; this is just something about September, as if I were in Vivaldi’s ‘Four Seasons’: the hot, tumultuous Summer slowly folds into the rich, textured, and yet implacably realistic Autumn.

My central idea is to use some of the science which I dove into during the summer holidays as an intellectual tool for putting order in that chaos. That almost new science of mine is mostly based on the theory of complex systems, and my basic claim is that technological change is an emergent phenomenon in complex social systems. We don’t know exactly why our technologies change the way they do. We can trace current technologies back to their most immediate ancestors, and sometimes we can predict their most immediate successors, but that’s about it. Futuristic visions of technologies that could be around 50 years from now are already a kind of traditional entertainment. The concept of technological progress, on the other hand, usually stands on wobbly legs when we try to find a developmental logic in historically known technological change. Yes, electricity allowed the emergence of medical technologies used in hospitals, and that saved a lot of human lives, but there is no way Thomas Edison could have known that. The most spectacular technological achievements of mankind, such as the Egyptian pyramids, the medieval cathedrals, the Dutch windmills of the 16th century, or the automobile, look ambiguous when seen from a historical distance. Yes, they all solved some problems, but they also facilitated the emergence of new ones. The truly unequivocal benefit of those technological leaps, the one which the people who made them could actually experience, was learning how to develop technologies.

The studies I did during the Summer holidays of 2021 focused on four essential mathematical models of emergent technological change: cellular automata, the flock of birds AKA particle swarm, the ants’ nest, and imperfect Markov chains. I start by reviewing the model of cellular automata. At any given moment, social complexity can be divided into a finite number of social entities (agents). They can be individual humans, businesses, NGOs, governments, local markets etc. Each such entity has an immediate freedom of movement, i.e. a finite number of one-step moves. The concept is related to the theory of games and corresponds to what happens in real life. When we do something social, we seldom just rush forwards. Most frequently, we make one step, observe the outcomes, adjust, then we make the next step etc. When all social agents do it, the whole social complexity can be seen as a collection of cells, or pixels. Each such cell (pixel) is a local state of being in society. A social entity can move into an available state, or not, at their pleasure and leisure. All the one-step moves a social entity can make translate into a trajectory it can follow across the social space. Collective outcomes we strive for and achieve can be studied as temporary complex states of those entities following their respective trajectories. The epistemological trick here is that individual moves and their combinations can be known for sure only ex post. All we can do ex ante is to define the possible states, and just wait to see where reality goes.
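
Just to fix that intuition, here is a minimal sketch in Python. Everything in it – the square lattice standing for the social space, the number of agents, the purely random one-step moves – is an assumption of mine for illustration; it only shows how trajectories and collective states can be read off, ex post, from nothing more than one-step moves.

```python
# A minimal sketch of the cellular-automaton intuition above, under toy assumptions:
# the social space is a square grid, each cell is a possible local state, and each agent
# makes one random one-step move per round. Nothing here models a real society.
import random

GRID = 20          # the social space: a 20 x 20 lattice of possible local states
STEPS = 50         # number of experimental rounds
AGENTS = 10        # number of social entities

def one_step(position):
    """Return a randomly chosen adjacent cell: the agent's immediate freedom of movement."""
    x, y = position
    dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)])
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

# Each agent follows a trajectory across the social space; the collective state at any
# moment is simply the set of cells currently occupied.
trajectories = {a: [(random.randrange(GRID), random.randrange(GRID))] for a in range(AGENTS)}
for _ in range(STEPS):
    for a in trajectories:
        trajectories[a].append(one_step(trajectories[a][-1]))

collective_state = {traj[-1] for traj in trajectories.values()}
print(f"{len(collective_state)} distinct local states occupied after {STEPS} rounds")
```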

As we are talking about the possible states of social complexity, I found an interesting mathematical mindf**k in quite an unexpected source, namely in the book titled ‘Aware. The Science and Practice of Presence. The Groundbreaking Meditation Practice’ by Daniel J. Siegel [Penguin Random House LLC, 2018, Identifiers: LCCN 2018016987 (print), LCCN 2018027672 (ebook), ISBN 9780143111788, ISBN 9781101993040 (hardback)]. It is a mathematical way of thinking, apparently taken from quantum physics. Here is the essence of it. Everything that happens does so as a 100% probability of that thing happening. Each phenomenon which takes place is the actualization of that same phenomenon having previously been merely likely to happen.

The actualization of probability can be illustrated by the collision of two vehicles in traffic. When the two vehicles are at a substantial distance from each other, the likelihood of them colliding is zero, for all practical purposes. As they converge towards each other, there comes a point when they become sort of provisionally entangled, e.g. they find themselves heading towards the same crossroads. The probability of collision increases slightly, and yet it is not even the probability of collision; it is just the probability that these two might find themselves in a vicinity conducive to a possible collision. Nothing to write home about yet, like really. It can be seen as a plateau of probability slowly emerging out of the initial soup of all the things which could possibly happen.

As the two cars drive closer and closer to the crossroads in question, the panoply of possible states narrows down. There is a very clear chunk of reality which gains in likelihood, as if it were a mountain range pushing up from the provisional plateau. There comes a point where the two cars (and their drivers) just come onto a collision course, and there is no way around it: this is a peak of 100% probability. Boom! Probability is consumed.

What do those cars have in common with meditation and with the emergence of technological change? As regards meditation, a thought can be viewed as the progressively emerging actualization of something that was just a weak probability. A month ago it was only weakly probable that today I would think what I think; it became much more likely yesterday, as the thoughts of yesterday have an impact on the thoughts of today; and today it all comes to fruition, i.e. to the 100% probability. As regards emergent technological change, the way technology changes today can be viewed as the actualization of something that was highly probable last year, somehow probable 10 years ago, and just part of the amorphous soup of probability 30 years ago. Those trajectories followed by individual agents inside social complexity, as defined in the theory of cellular automata, are entangled together precisely according to that pattern of emergent probabilities. Two businesses coming up with two mutually independent, and yet similar, technologies are like two peak actualizations of 100% probability on a plateau of probable technological change, which, in turn, has been slowly emerging for some time.

The other theories I use explain, and allow me to model mathematically, that entanglement. The theory of particle swarm, pertinent to flocks of birds, assumes that autonomous social agents strive for a certain level of behavioural coupling. We expect some level of predictability from others, and we can cooperate with others when we are satisfactorily predictable in our actions. The striving for social coherence is, therefore, one mechanism of entanglement between individual trajectories of cellular automata. The theory of the ants’ nest focuses on a specific category of communication systems in societies, working like pheromones. Ants organize by marking, reinforcing and following paths across their environment, and their pheromones serve as markers and reinforcement agents for those paths. In human societies, there are social pheromones. Money and financial markets are probably the most obvious example, but scientific publications are another one. The more scientific articles are published on a given topic, the more likely other articles are to be written on the same topic, until the whole thing reaches a point of saturation, when some ants (pardon me, scientists) start thinking about another path to mark with intellectual pheromones.

Cool. I have (OK, we have) complex social states, made of entangled probabilities that something specific happens, and they encompass technology. Those complex states change, i.e. one complex state morphs into another. Now, how the hell can I know, as a researcher, what is happening exactly? As the theory of complex systems suggests, I can never know exactly, for one, and I need to observe, for two. As I don’t know exactly what that thing which I label ‘technological change’ is, it is problematic to set too many normative assumptions as to which specific path that technological change should take. I think this is the biggest point of contention as I apply my theory, such as I have just outlined it, to my main field of empirical research, namely energy economics, and technological change in the energy sector. The more I do that research, the more convinced I am that the so-called ‘energy policies’, ‘climate policies’ etc. are politically driven bullshit based on wishful thinking, with not much of a chance of bringing the positive change we expect. I have that deep feeling that setting a strategy for future innovations in our business/country/world is very much like the Polish expression about ‘sharing the skin of a bear which is still running in the woods’. First, you need to kill the bear; only then can you bicker about who takes what part of the skin. In the case of innovation, long-term strategies consist in predicting what we will do with something when we don’t even know yet what exactly it is going to be.

I am trying to apply this general theory in the grant applications which I am in charge of preparing now, and in my teaching. We have that idea, at the faculty, to apply for funding to study the market of electric vehicles in Europe and in Poland. This is an interesting situation as regards business models. In the US, the market of electric cars is clearly divided among three categories of players. There is Tesla, which is a category and an industry in itself, with its peculiar strategy of extreme vertical integration. Then there are the big, classical car makers, such as Toyota, General Motors etc., with their business models based on a rather short vertical chain of value added inside the business, and a massive supply chain upstream of the house. Finally, there is a rising tide of small start-ups in the making of electric vehicles. I wonder what it could be in Europe. As our European market of electric vehicles is taking off, it is dominated by the incumbent big manufacturers, the old-school ones, with Tesla building a factory in Germany, and progressively building a beachhead in the market. There is some timid movement towards small start-up businesses in the field, but it is really timid. In my home country, Poland, the most significant attempt at starting up an electric vehicle made in Poland is a big consortium of state-controlled companies, running under the name of ‘Electromobility Poland’.

I have that intuition, which I provisionally express as a working hypothesis, namely that business models are an emergent property of technologies which they use. As regards the market of electric vehicles, it means that Tesla’s business model is not an accidental explosion of Elon Musk’s genius mind: it is an emergent characteristic of the technologies involved.

Good. I have some theory taking shape, nice and easy. I let it ripen a bit, and I start sniffing around for facts. What is a business model, in my mind? It is the way of operating the chain of value added, and getting paid for it, in the first place. Then, it is the way of using capital. I have noticed that highly innovative environments force businesses to build up and keep large amounts of cash, arguably to manage the diverse uncertainties emerging as the surrounding technologies morph like hell. In some cases, e.g. in biotech, the right business model for rapid innovation is a money-sucker, with apparently endless pay-ins of additional equity by the shareholders, and yet with a big value in terms of technological novelty created. I can associate that phenomenon of vacuum-cleaning equity with the case of Tesla, which only recently started being profitable, after something like a decade of permanent operational loss. That is all pertinent to fixed costs, thus to the cash we need in order to build up and keep in place the organizational structure required for managing the value chain the way we want to manage it.

I am translating those loose remarks of mine into observable phenomena. Everything I have just mentioned is to be found in the annual financial reports. This is my first source of information. When I want to study business models in the market of electric vehicles, I need to look into financial and corporate reports of businesses active in the market. I need to look into the financial reports of Mercedes Benz, BMW, Renault, PSA, Volkswagen, Fiat, Volvo, and Opel – thus the European automotive makers – and see how it is going, and whether whatever is going on can be correlated with changes in the European market of electric vehicles. Then, it is useful to look into the financial reports of global players present in the European market, e.g. Tesla, Toyota, Honda and whatnot, just to see what changes in them as the European market of electric vehicles is changing.

If my intuition is correct, i.e. if business models are truly an emergent property of the technologies used, engaging in the business of electric vehicles should be correlated with some sort of recurrent pattern in those companies.

Good. This is about the big boys in the playground. Now, I turn toward the small ones, the start-up businesses. As I already said, it is not as if we had a crowd of them in the European industry of electric vehicles. The intuitive axis of research which comes to my mind is to look at start-ups active in the U.S., study their business models, and see if there is any chance of something similar emerging in Europe. Somewhat tangentially to that, I think it would be interesting to check whether the plan of the Polish government regarding ‘Electromobility Poland’, that is the plan to develop it with public and semi-public money, and then sell it to private investors, has any grounds, and under what conditions it can be a workable plan.

Good. I have rummaged a bit in my own mind, time to do the same to other people. I mean, I am passing on to reviewing the literature. I type ‘electric vehicles Europe business model’ into the https://www.sciencedirect.com/ platform, and I look at what’s popping up. Here comes the paper by Pardo-Bosch, F., Pujadas, P., Morton, C., & Cervera, C. (2021). Sustainable deployment of an electric vehicle public charging infrastructure network from a city business model perspective. Sustainable Cities and Society, 71, 102957, https://doi.org/10.1016/j.scs.2021.102957 . The abstract says: ‘The unprecedented growth of global cities together with increased population mobility and a heightened concern regarding climate change and energy independence have increased interest in electric vehicles (EVs) as one means to address these challenges. The development of a public charging infrastructure network is a key element for promoting EVs, and with them reducing greenhouse gas emissions attributable to the operation of conventional cars and improving the local environment through reductions in air pollution. This paper discusses the effectiveness, efficiency, and feasibility of city strategic plans for establishing a public charging infrastructure network to encourage the uptake and use of EVs. A holistic analysis based on the Value Creation Ecosystem (VCE) and the City Model Canvas (CMC) is used to visualise how such plans may offer public value with a long-term and sustainable approach. The charging infrastructure network implementation strategy of two major European cities, Nantes (France) and Hamburg (Germany), are analysed and the results indicate the need to involve a wide range of public and private stakeholders in the metropolitan areas. Additionally, relevant, and fundamental patterns and recommendations are provided, which may help other public managers effectively implement this service and scale-up its use and business model.’

Well, I see there is a lot of work to do, as I read that abstract. I rarely find a paper where I have so much to argue with, just after having read the abstract. First of all, ‘the unprecedented growth of global cities’ thing. Actually, if you care to have a look at the World Bank data on urban land (https://data.worldbank.org/indicator/AG.LND.TOTL.UR.K2 ), as well as that on urban population (https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS ), you will see that urbanization is an ambiguous phenomenon, strongly region-specific. The central thing is that cities become increasingly distinct from the countryside, as types of human settlements. The connection between electric vehicles and cities is partly clear, but just partly. Cities are the most obvious place to start with EVs, because of the relatively short distance to travel between charging points. Still, moving EVs outside the cities, and making them functional in rural areas, is the next big challenge.

Then comes the ‘The development of a public charging infrastructure network is a key element for promoting EVs’ part. As I have studied the thing in Europe, the network of charging stations, as compared to the fleet of EVs in the streets, is so dense that we have something like 12 vehicles per charging station on average, across the European Union. There is no way a private investor can recoup their money when financing a private charging station at that average density. We face a paradox: there are so many publicly funded charging stations, in relation to the car fleet out there, that private investment gets discouraged. I agree that it could be an acceptable transitory state of the market, although it raises the question of whether private charging stations are a viable business in Europe. Tesla has based a large part of its business model in the US precisely on the development of its own charging stations. Is that a viable solution in Europe?

Here comes another general remark, contingent on my hypothesis of business models being emergent from technologies. Automotive technology in general, thus the technology of a vehicle moving by itself, regardless of the method of propulsion (i.e. internal combustion vs electric), is a combination of two component technologies. Said method of propulsion is one of them, and the other is the technology of distributing the power source across space. Electric vehicles can be viewed as cousins to tramways and electric trains, just with a more pronounced taste for independence: instead of drinking electricity from permanent wiring, EVs carry their electricity around with them, in batteries.

As we talk about batteries, here comes another paper in my cursory rummaging across other people’s science: Albertsen, L., Richter, J. L., Peck, P., Dalhammar, C., & Plepys, A. (2021). Circular business models for electric vehicle lithium-ion batteries: An analysis of current practices of vehicle manufacturers and policies in the EU. Resources, Conservation and Recycling, 172, 105658, https://doi.org/10.1016/j.resconrec.2021.105658 . Yes, indeed, the advent of electric vehicles creates a problem to solve, namely what to do with all those batteries. I mean two categories of batteries: those which we need, and hope to acquire easily when the time comes to change them in our vehicles, in the first place, and those we don’t need anymore, and expect someone to take care of swiftly and elegantly.

Representative for collective intelligence

I am generalizing from the article which I am currently revising, and I am taking a broader view of the many specific strands of research I am running, mostly in order to move forward with my hypothesis of collective intelligence in human social structures. I want to recapitulate my method – once more – in order to extract and understand its meaning.

I have recently realized a few things about my research. Firstly, I am using the logical structure of an artificial neural network as a simulator more than an optimizer, as digital imagination rather than functional, goal-oriented intelligence, and that seems to be a way of using AI which hardly anyone else in social sciences is pursuing. The big question which I am (re)asking myself is to what extent my simulations are representative of the collective intelligence of human societies.

I start gently, with variables, hence with my phenomenology. I mostly use the commonly accessible and published variables, such as those published by the World Bank, the International Monetary Fund, STATISTA etc. Sometimes, I make my own coefficients out of those commonly accepted metrics, e.g. the coefficient of resident patent applications per 1 million people, the proportion between the density of population in cities and the general one, or the coefficient of fixed capital assets per 1 patent application.
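
For the sake of concreteness, here is a small, hedged illustration of how such home-made coefficients can be computed. The column names and the inline numbers are hypothetical placeholders of mine, not actual World Bank data.

```python
# A toy illustration of deriving coefficients from published metrics. The tiny inline
# dataset and its column names are invented; real work uses the full World Bank series.
import pandas as pd

data = pd.DataFrame({
    "country": ["A", "B"],
    "resident_patent_applications": [12000, 3500],
    "population": [38_000_000, 9_700_000],
    "fixed_capital_assets_usd": [4.1e11, 0.9e11],
})

# Coefficient of resident patent applications per 1 million people
data["patents_per_1m_people"] = data["resident_patent_applications"] / (data["population"] / 1e6)
# Coefficient of fixed capital assets per 1 patent application
data["capital_per_patent"] = data["fixed_capital_assets_usd"] / data["resident_patent_applications"]
print(data[["country", "patents_per_1m_people", "capital_per_patent"]])
```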

My take on any variable in social sciences is very strongly phenomenological, or even hermeneutic. I follow the line of logic which you can find, for example, in “Phenomenology of Perception” by Maurice Merleau-Ponty (reprint, revised, Routledge, 2013, ISBN 1135718601, 9781135718602). I assume that any of the metrics we have in social sciences is an entanglement of our collective cognition with the actual s**t going on. As the actual s**t going on encompasses our way of forming our collective cognition, any variable used in social sciences is very much like a person’s attempt to look at themselves from a distance. Yes! This is what we use mirrors for! Variables used in social sciences are mirrors. Still, they are mirrors made largely by trial and error, with a little bit of a shaky hand, and each of them shows actual social reality in a slightly distorted manner.

Empirical research in social sciences consists, very largely, in a group of people trying to guess something about themselves on the basis of repeated looks into a set of imperfect mirrors. Those mirrors are imperfect, and yet they serve some purpose. I pass to my second big phenomenological take on social reality, namely that our entangled observations thereof are far from being haphazard. The furtive looks we catch of the phenomenal soup, out there, are purposeful. We pay attention to things which pay off. We define specific variables in social sciences because we know by experience that paying attention to those aspects of social reality brings concrete rewards, whilst not paying attention thereto can hurt, like bad.

Let’s take inflation. Way back in the day, like 300 years ago, no one really used the term inflation, because the monetary system consisted of a multitude of currencies, mixing private and public deeds of various kinds. Entire provinces in European countries could rely on bills of exchange issued by influential merchants and bankers, just to switch to another type of bills 5 years later. Fluctuations in the rates of exchange between those multiple currencies very largely cancelled each other out. Each business of respectable size was like a local equivalent of today’s Forex exchange. Inflation was a metric which did not even make sense at the time, as any finance professional would intuitively ask back: ‘Inflation? Like… inflation in which exactly of those 27 currencies I use every day?’.

Standardized monetary systems, which we call ‘fiat money’ today, steadied themselves only in the 19th century. Multiple currencies progressively fused into one homogenized monetary mass, and mass conveys energy. Inflation is a loss of monetary energy, like entropy of the monetary mass. People started paying attention to inflation when it started to matter.

We make our own social reality, which is fundamentally unobservable to us, and that makes sense, because it is hard to have an objective, external look at a box when we are inside the box. Living in that box, we have learnt, over time, how to pay attention to the temporarily important properties of the box. We have learnt how to use maths for fine-tuning that selective perception of ours. We learnt, for example, to replace the basic distinction between people doing business and people not doing business at all with finer shades of exactly how much business people are doing in a given unit of time-space.

Therefore, a set of empirical variables, e.g. from the World Bank, is a collection of imperfect observations, which represent collectively valuable social outcomes. A set of N socio-economic variables represents N collectively valuable social outcomes, which, in turn, correspond to N collective pursuits – it is a set of collective orientations. Now, my readers have the full right to protest: ‘Man, just chill. You are getting carried away by your own ideas. Quantitative variables about society and economy are numbers, right? They are the metrics of something. Measurement is objective and dispassionate. How can you say that objectively gauged metrics are collective orientations?’. Yes, these are all valid objections, and I made up that little imaginary voice of my readers on the basis of reviews that I have received for some of my papers.

Once again, then. We measure the things we care about, and we go to great lengths in creating accurate scales and methods of measurement for the things we very much care about. Collective coordination is costly and hard to achieve. If we devote decades of collective work to nailing down the right way of measuring, e.g., the professional activity of people, that thing probably matters. If it matters, we are collectively after optimizing it. A set of quantitative, socio-economic variables represents a set of collectively pursued orientations.

In the branch of philosophy called ethics, there is a stream of thought labelled ‘contextual ethics’, whose proponents claim that whatever normatively defined values we say we stick to, the real values we stick to are to be deconstructed from our behaviour. Things we are recurrently and systematically after are our contextual ethical values. Yes, the socio-economic variables we can get from your average statistical office are informative about the contextual values of our society.

When I deal with a variable like the % of electricity in the total consumption of energy, I deal with a superimposition of two cognitive perspectives. I observe something that happens in the social reality, and that phenomenon takes the form of a spatially differentiated, complex state of things, which changes over time, i.e. one complex state transitions into another complex state etc. On the other hand, I observe a collective pursuit to optimize that % of electricity in the total consumption of energy.

The process of optimizing a socio-economic metric makes me think once again about the measurement of social phenomena. We observe and measure things which are important to us because they give us some sort of payoff. We can have collective payoffs in three basic ways. We can max out, for one. Case: Gross Domestic Product, access to sanitation. We can keep something as low as possible, for two. Case: murder, tuberculosis. Finally, we can maintain some kind of healthy dynamic balance. Case: inflation, use of smartphones. Now, let’s notice that we don’t really do fine calculations about murder or tuberculosis. Someone is healthy or sick, still alive or already murdered. Transitional states are not really of much collective interest. When it comes to outcomes which pay off by the absence of something, we tend to count them digitally, like ‘is there or isn’t there’. On the other hand, those other outcomes, which we max out on or keep in equilibrium, well, that’s another story. We invent and perfect subtle scales of measurement for those phenomena. That makes me think about a seminal paper titled ‘Selection by consequences’, by the founding father of behaviourism, Burrhus Frederic Skinner. Skinner introduced the distinction between positive and negative reinforcements. He claimed that negative reinforcements are generally stronger in shaping human behaviour, whilst being clumsier as well. We just run away from a tiger; we don’t really try to calibrate the right distance and the right speed of evasion. On the other hand, we tend to calibrate quite finely our reactions to positive reinforcements. We dose our food, we measure exactly the buildings we make, we learn by small successes etc.

If a set of quantitative socio-economic variables is informative about a set of collective orientations (collectively pursued outcomes), one of the ways we can study that set consists in establishing a hierarchy of orientations. Are some of those collective values more important than others? What does ‘more important’ even mean in this context, and how can it be assessed? We can imagine that each among the many collective orientations is an individual pursuing their idiosyncratic path of payoffs from interactions with the external world. By the way, this metaphor is closer to reality than it might appear at first sight. Each human is, in fact, a distinct orientation. Each of us is action. This perspective has been very sharply articulated by Martin Heidegger, in his “Being and Time”.

Hence, each collective orientation can be equated to an individual force, pulling the society in a specific direction. In the presence of many socio-economic variables, I assume the actual social reality is a superimposition of those forces. They can diverge or concur, as they please, I do not make any assumptions about that. Which of those forces pulls the most powerfully?

Here comes my mathematical method, in the form of an artificial neural network. I proceed step by step. What does it mean that we collectively optimize a metric? Mostly by making it coherent with our other orientations. Human social structures are based on coordination, and coordination happens both between social entities (individuals, cities, states, political parties etc.), and between different collective pursuits. Optimizing a metric representative for a collectively valuable outcome means coordinating with other collectively valuable outcomes. In that perspective, a phenomenon represented (imperfectly) with a socio-economic metric is optimized when it remains in some kind of correlation with other phenomena, represented with other metrics. The way I define correlation in that statement is a broad one: correlation is any concurrence of events displaying a repetitive, functional pattern.

Thus, when I study the force of a given variable as a collective orientation in a society, I take this variable as the hypothetical output of the process of collective orientation, and I simulate that process as the output variable sort of dragging the remaining variables behind it, by the force of functional coherence. With a given set of empirical variables, I make as many mutations thereof as I have variables. Each mutated set represents a process, where one variable acts as output, and the remaining ones as input. The process consists of as many experiments as there are observational rows in my database. Most socio-economic variables come in rows of the type “country A in year X”.
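
To make that mutation step tangible, here is a small sketch. The variable names are hypothetical placeholders of mine; the point is only the mechanics of promoting each variable, in turn, to the role of output.

```python
# A small sketch of the dataset mutation described above: with k variables, I make k
# mutated versions of the same empirical set, each time promoting one variable to the
# role of output and leaving the remaining ones as input. Column names are invented.
import numpy as np
import pandas as pd

empirical = pd.DataFrame(
    np.random.rand(100, 4),        # 100 observational rows of the type "country A in year X"
    columns=["energy_per_capita", "gdp_per_capita", "urban_density_ratio", "patents_per_1m"],
)

mutations = []
for output_variable in empirical.columns:
    mutations.append({
        "output": output_variable,                                        # the collective orientation under study
        "inputs": [c for c in empirical.columns if c != output_variable], # the variables dragged behind it
        "data": empirical,                                                # same observations, different roles
    })

for m in mutations:
    print(f"process oriented on '{m['output']}', driven by {m['inputs']}")
```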

Here, I do a little bit of mathematical cavalry with two different models of swarm intelligence: particle swarm and ants’ colony (see: Gupta & Srivastava 2020[1]). The model of particle swarm comes from the observation of birds, which keeps me in a state of awe about human symbolic creativity, and it models the way flocks of birds stay collectively coherent when they fly around in search of food. Each socio-economic variable is a collective orientation, and in practical terms it corresponds to a form of social behaviour. Each such form of social behaviour is a bird, which observes and controls its distance from other birds, i.e. from other forms of social behaviour. Societies experiment with different ways of maintaining internal coherence between different orientations. Each distinct collective orientation observes and controls its distance from other collective orientations. From the perspective of an ants’ colony, each form of social behaviour is a pheromonal trace which other forms of social behaviour can follow and reinforce, or not give a s**t about, at their pleasure and leisure. Societies experiment with different strengths attributed to particular forms of social behaviour, which mimics an ants’ colony experimenting with different pheromonal intensities attached to different paths toward food.

Please notice that both models – particle swarm and ants’ colony – mention food. Food is the outcome to achieve. Output variables in the mutated datasets – which I create out of the empirical one – are the food to acquire. Input variables are the moves and strategies which birds (particles) or ants can perform in order to get food. Experimentation the ants’ way involves weighting each local input (i.e. the input of each variable in each experimental round) with a random weight R, 0 < R < 1. When experimenting the birds’ way, I drop into my model the average Euclidean distance E from the local input to all the other local inputs.

I want to present it all rolled nicely into an equation, and, as noblesse oblige, I introduce symbols. The local input of an input variable xi in experimental round tj is represented as xi(tj), whilst the local value of the output variable xo is written as xo(tj). The compound experimental input which the society makes, both the ants’ way and the birds’ way, is written as h(tj), and it spells h(tj) = x1(tj)*R*E[x1(tj-1)] + x2(tj)*R*E[x2(tj-1)] + … + xn(tj)*R*E[xn(tj-1)].

Up to that point, this is not really a neural network. It mixes things up, but it does not really adapt. I mean… maybe there is a little intelligence? After all, when my variables act like a flock of birds, they observe each other’s position in the previous experimental round, through the E[xi(tj-1)] Euclidean thing. However, I still have no connection, at this point, between the compound experimental input h(tj) and the pursued output xo(tj). I need a connection which would work like an observer, something like a cognitive meta-structure.

Here comes the very basic science of artificial neural networks. There is a function called the hyperbolic tangent, which spells tanh(x) = (e^(2x) – 1)/(e^(2x) + 1), where x can be whatever you want. This function happens to be one of those used in artificial neural networks as neural activation, i.e. as a way to mediate between a compound input and an expected output. When I have that compound experimental input h(tj) = x1(tj)*R*E[x1(tj-1)] + x2(tj)*R*E[x2(tj-1)] + … + xn(tj)*R*E[xn(tj-1)], I can put it in the place of x in the hyperbolic tangent, and I get tanh[h(tj)] = (e^(2h) – 1)/(e^(2h) + 1). In a neural network, the error in optimization can be calculated, generally, as e = xo(tj) – tanh[h(tj)]. That error can be fed forward into the next experimental round, and then we are talking, ‘cause the compound experimental input morphs into:

>>  input h(tj) = x1(tj)*R*E[x1(tj-1)]*e(tj-1) + x2(tj)*R*E[x2(tj-1)]*e(tj-1) + … + xn(tj)*R*E[xn(tj-1)]*e(tj-1)

… and that means that each compound experimental input takes into account both the coherence of the input in question (E), and the results of previous attempts to optimize.
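
Here is a hedged sketch of that whole experimental loop, written strictly from the formulas above. The toy dataset, the random seed and the starting value of the error are assumptions of mine; the sketch only shows the mechanics of h(tj), the tanh activation, and the error being fed forward.

```python
# A sketch of the experimental loop described in this text: random weights R (the ants' way),
# mean Euclidean distances E from the previous round (the birds' way), tanh activation, and
# the error e(t_{j-1}) fed forward. The toy data and the initial error are my own assumptions.
import numpy as np

rng = np.random.default_rng(42)
data = rng.random((200, 4))          # 200 observational rows, 4 toy variables scaled to [0, 1)
output_col = 0                       # the variable whose turn it is to act as collective orientation
input_cols = [c for c in range(data.shape[1]) if c != output_col]

def mean_euclidean_distance(values):
    """E[x_i(t_{j-1})]: average distance from each local input to all the other local inputs."""
    diffs = np.abs(values[:, None] - values[None, :])
    return diffs.sum(axis=1) / (len(values) - 1)

errors = [1.0]                       # assumed starting error for the first round
for j in range(1, data.shape[0]):
    x_prev = data[j - 1, input_cols]
    x_now = data[j, input_cols]
    R = rng.random(len(input_cols))             # the ants' way: random weights 0 < R < 1
    E = mean_euclidean_distance(x_prev)         # the birds' way: mutual distances from the previous round
    h = np.sum(x_now * R * E * errors[-1])      # compound experimental input h(t_j)
    activation = np.tanh(h)                     # tanh(h) = (e^(2h) - 1)/(e^(2h) + 1)
    errors.append(data[j, output_col] - activation)   # e(t_j) = x_o(t_j) - tanh[h(t_j)]

print("mean absolute error over all rounds:", np.mean(np.abs(errors)))
```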

Here, I am a bit stuck. I need to explain how exactly the fact of computing the error of optimization e = xo(tj) – tanh[h(tj)] is representative of collective intelligence.


[1] Gupta, A., & Srivastava, S. (2020). Comparative analysis of ant colony and particle swarm optimization algorithms for distance optimization. Procedia Computer Science, 173, 245-253. https://doi.org/10.1016/j.procs.2020.06.029

Tax on Bronze

I am trying to combine the line of logic which I developed in the proof-of-concept for the idea I labelled ‘Energy Ponds’ AKA ‘Project Aqueduct’ with my research on collective intelligence in human societies. I am currently doing a serious review of the literature on the theory of complex systems, as it looks like it lives just next door to my own conceptual framework. The general idea is to use the theory of complex systems – within the general realm of which the theory of cellular automata looks the most promising, for the moment – to simulate the emergence and absorption of a new technology in the social structure.

I started to sketch the big lines of that picture in my last update in French, namely in ‘L’automate cellulaire respectable’. I assume that any new technology burgeons inside something like a social cell, i.e. a group of people connected by common goals and interests, together with some kind of institutional vehicle, e.g. a company, a foundation etc. It is interesting to notice that new technologies develop through the multiplication of such social cells rather than through linear growth of just one cell. Up to a point this is just one cell growing, something like the lone wolf of Netflix in the streaming business, and then ideas start breeding and having babies with other people.

I found an interesting quote in the book which is my roadmap through the theory of complex systems, namely in ‘What Is a Complex System?’ by James Ladyman and Karoline Wiesner (Yale University Press 2020, ISBN 978-0-300-25110-4). On page 56 (Kindle Edition), Ladyman and Wiesner write something interesting about collective intelligence in colonies of ants: ‘What determines a colony’s survival is its ability to grow quickly, because individual workers need to bump into other workers often to be stimulated to carry out their tasks, and this will happen only if the colony is large. Army ants, for example, are known for their huge swarm raids in pursuit of prey. With up to 200 000 virtually blind foragers, they form trail systems that are up to 20 metres wide and 100 metres long (Franks et al. 1991). An army of this size harvests prey of 40 grams and more each day. But if a small group of a few hundred ants accidentally gets isolated, it will go round in a circle until the ants die from starvation […]’.

Interesting. Should nascent technologies have an ant-like edge to them, their survival should be linked to their reaching some sort of critical size, which allows the formation of social interactions in an amount which, in turn, can assure proper orientation in all the social cells involved. Well, it looks like nascent technologies really are akin to ant colonies, because this is exactly what happens. When we want to push a technology from its age of early infancy into the phase of development, a critical size of the social network is required. Customers, investors, creditors, business partners… all that lot is necessary, once again in a threshold amount, to give a new technology the salutary kick in the ass, sending it into the orbit of big business.

I like jumping quickly between ideas and readings, with conceptual coherence being an excuse just as frequently as it is a true guidance, and here comes an article on urban growth, by Yu et al. (2021[1]). The authors develop a model of urban growth, based on the empirical data on two British cities: Oxford and Swindon. The general theoretical idea here is that strictly urban areas are surrounded by places which are sort of in two minds about whether they like being city or countryside. These places can be represented as spatial cells, and their local communities are cellular automata which move cautiously, step by step, into alternative states of being more urban or more rural. Each such i-th cellular automaton displays a transition potential Ni, which is a local balance between the benefits of urban agglomeration Ni(U), as opposed to the benefits Ni(N) of conserving scarce non-urban resources. The story wouldn’t be complete without the shit-happens component Ri of randomness, and the whole story can be summarized as: Ni = Ni(U) – Ni(N) + Ri.
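
Here is a toy rendering of that transition potential, just to show the mechanics. The grid, the way I compute the urban benefit Ni(U), the flat rural benefit and the threshold are assumptions of mine, not the calibrated model of Yu et al. (2021).

```python
# A toy rendering of N_i = N_i(U) - N_i(N) + R_i over a small grid of cells. The grid size,
# the benefit functions and the threshold are invented for illustration only.
import numpy as np

rng = np.random.default_rng(7)
size = 20
urban = rng.random((size, size)) < 0.15          # initial map: True = urban cell, False = rural cell

def urban_benefit(grid, i, j):
    """N_i(U): here, simply the share of urban cells in the 3 x 3 neighbourhood."""
    patch = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    return patch.mean()

N_nonurban = 0.3                                  # N_i(N): a flat benefit of staying rural (assumed)
new_urban = urban.copy()
for i in range(size):
    for j in range(size):
        if not urban[i, j]:
            R = rng.normal(0, 0.1)                # the shit-happens component of randomness
            N = urban_benefit(urban, i, j) - N_nonurban + R
            new_urban[i, j] = N > 0               # positive transition potential: the cell turns urban

print("urban cells before:", int(urban.sum()), "after one round:", int(new_urban.sum()))
```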

Yu et al. (2021 op. cit.) add an interesting edge to the basic theory of cellular automata, such as presented e.g. in Bandini, Mauri & Serra (2001[2]), namely the component of different spatial scales. A spatial cell in a peri-urban area can be attracted to many spatial aspects of being definitely urban. The people in such a cell may consider the possible benefits of sharing the same budget for local schools within a perimeter of 5 kilometres, as well as the possible benefits of connecting to a big hospital 20 km away. Starting from there, it looks a bit gravitational. Each urban cell has a power of attraction over non-urban cells, yet that power decays exponentially with physical distance.

I generalize. There are many technologies spreading across the social space, and each of them is like a city. I mean, it does not necessarily have a mayor, but it has dense social interactions inside, and those interactions create something like a gravitational force for external social cells. When a new technology gains new adherents, like new investors, new engineers, new business entities, it becomes sort of seen and known. I see two phases in the development of a nascent technology. Before it gains enough traction in order to exert significant gravitational force on the temporarily non-affiliated social cells, a technology grows through random interactions of the initially involved social cells. If those random interactions exceed a critical threshold, thus if there are enough forager ants in the game, their simple interactions create an emergence, which starts coagulating them into a new industry.

I return to cities and their growth, for a moment. I return to the story which Yu et al. (2021[3]) are telling. In my own story on a similar topic, namely in my draft paper ‘The Puzzle of Urban Density And Energy Consumption’, I noticed an amazing fact: whilst some individual cities grow, others decay or even disappear, and the overall surface of urban areas on Earth seems to be amazingly stationary over many decades. It looks as if the total mass, and hence the total gravitational attraction, of all the cities on Earth were a constant over at least one human generation (20 – 25 years). Is it the same with technologies? I mean, is there some sort of constant total mass that all technologies on Earth have, within the lifespan of one human generation, with specific technologies getting sucked into that mass whilst others drop out and become moons (i.e. cold, dry places with not much to do and hardly any air to breathe)?

What if a new technology spreads like TikTok, i.e. like wildfire? There is science for everything, and there is some science about fires in peri-urban areas as well. That science is based on the same theory of cellular automata. Jiang et al. (2021[4]) present a model where territories prone to wildfires are mapped onto grids of square cells. Each cell presents a potential to catch fire, through its local properties: vegetation, landscape, local climate. The spread of a wildfire from a given cell R0 is always based on the properties of the cells surrounding the fire.
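
A toy fire-spread automaton in that spirit can be sketched in a few lines. The grid, the flammability values and the wrap-around edges are invented for illustration; this is not the heterogeneous model of Jiang et al. (2021).

```python
# A toy fire-spread cellular automaton: a cell can catch fire only if a neighbour is burning,
# with a probability given by its local flammability. The grid wraps around at the edges
# (a simplification of mine); none of the numbers comes from Jiang et al. (2021).
import numpy as np

rng = np.random.default_rng(1)
size, steps = 25, 15
flammability = rng.random((size, size))          # local properties: vegetation, climate, landscape
burning = np.zeros((size, size), dtype=bool)
burning[size // 2, size // 2] = True             # the initial cell R0 where the fire starts

for _ in range(steps):
    # a cell becomes eligible to ignite if at least one of its 4 neighbours is burning
    neighbours = (
        np.roll(burning, 1, axis=0) | np.roll(burning, -1, axis=0) |
        np.roll(burning, 1, axis=1) | np.roll(burning, -1, axis=1)
    )
    ignites = neighbours & (rng.random((size, size)) < flammability)
    burning = burning | ignites

print("cells burnt after", steps, "steps:", int(burning.sum()))
```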

Cirillo, Nardi & Spitoni (2021[5]) present an interesting mathematical study of what happens when, in a population of cellular automata, each local automaton updates itself into a state which is a function of the preceding state in the same cell, as well as of the preceding states in the two neighbouring cells. It means, among other things, that if we add the dimension of time to any finite space Z^d where cellular automata dwell, the immediately future state of a cell is a component of the available neighbourhood for the present state of that cell. Cirillo, Nardi & Spitoni (2021) demonstrate, as well, that if we know the number and the characteristics of the possible states which one cellular automaton can take, like (-1, 0, 1), we can compute the total number of states that automaton can take in a finite number of moves. If we make many such cellular automata move in the same space Z^d, a probabilistic chain of complex states emerges.
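
A quick back-of-the-envelope count, under my own reading of that setup: with 3 possible states, and an automaton updating itself from its own state plus the states of 2 neighbours, the neighbourhood can sit in 3^3 = 27 configurations, and there are 3^27 possible local rules to choose from.

```python
# Counting the combinatorics of that setup: 3 states, a neighbourhood of 3 cells
# (the cell itself plus its two neighbours). These numbers are generic cellular-automata
# arithmetic, not figures taken from Cirillo, Nardi & Spitoni (2021).
states = 3
neighbourhood = 3
configurations = states ** neighbourhood    # 27 possible neighbourhood configurations
local_rules = states ** configurations      # 3^27 = 7625597484987 possible local rules
print(configurations, local_rules)
```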

As I wrote in ‘L’automate cellulaire respectable’, I see a social cell built around a new technology, e.g. ‘Energy Ponds’, moving, in the first place, along two completely clear dimensions: the physical size of installations and the financial size of the balance sheet. Movements along these two axes are subject to influences acting along some foggy, unclear dimensions connected to preferences and behaviour: expected return on investment, expected future value of the firm, risk aversion as opposed to risk affinity etc. That makes me think, somehow, about a theory next door to that of cellular automata, namely the theory of swarms. This is a theory which explains complex changes in complex systems through changes in the strength of correlation between individual movements. According to the swarm theory, a complex set which behaves like a swarm can adapt to external stressors by making the moves of individual members more or less correlated with each other. A swarm in routine action has its members couple their individual behaviour rigidly, like marching in step. A swarm alerted by a new stressor can loosen it a little, and allow individual members some play in their behaviour, like ‘If I do A, you do B or C or D, anyway one out of these three’. A swarm in mayhem loses it completely, and there is no behavioural coupling whatsoever between members.
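
Just to show what that loosening of behavioural coupling can look like, here is a one-parameter toy swarm. The numbers are arbitrary assumptions of mine; the only point is the spectrum from routine, through alerted, to mayhem.

```python
# A one-parameter toy swarm: 'coupling' sets how strongly each member aligns its move with
# the pull toward the swarm's centre, as opposed to following its own random idea.
import numpy as np

rng = np.random.default_rng(3)

def swarm_spread(coupling, members=50, steps=100):
    """Return how dispersed the swarm ends up for a given degree of behavioural coupling."""
    positions = rng.normal(0, 1, members)
    for _ in range(steps):
        collective_move = positions.mean() - positions      # pull toward the swarm's centre
        individual_move = rng.normal(0, 1, members)          # each member's own idea
        positions += coupling * collective_move + (1 - coupling) * individual_move
    return positions.std()

for label, c in [("routine (rigid coupling)", 0.9), ("alerted (loose coupling)", 0.5), ("mayhem (no coupling)", 0.0)]:
    print(f"{label}: dispersion ~ {swarm_spread(c):.2f}")
```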

When it comes to the development and societal absorption of a new technology, the central idea behind the swarm-theoretic approach is that in order to do something new, the social swarm has to shake it off a bit. Social entities need to loosen their mutual behavioural coupling so as to allow some of them to do something other than just ritually respond to the behaviour of others. I found an article which I can use to transition nicely from the theory of cellular automata to the swarm theory: Puzicha & Buchholz (2021[6]). The paper is essentially about the behaviour of robots, namely a swarm of 60 distributed autonomous mobile robots which need to coordinate through a communication network with low reliability and restricted capacity. In other words, sometimes those robots can communicate with each other, and sometimes they can’t. When some robots out of the 60 are having a chat, they can jam the restricted capacity of the network and thus bar the remaining robots from communicating. Incidentally, this is how innovative industries work. When a few companies, let’s say of unicorn calibre, are developing a new technology, they absorb the attention of investors, governments, potential business partners and potential employees. They jam the restricted field of attention available in the markets of, respectively, labour and capital.

Another paper from the same symposium ‘Intelligent Systems’, namely Serov, Voronov & Kozlov (2021[7]), leads in a slightly different direction. Whilst directly derived from the functioning of communication systems, mostly satellite-based ones, the paper suggests a path of learning in a network where the capacity for communication is restricted, and the baseline method of balancing the whole thing is so burdensome for the network that it jams communication even further. You can compare it to a group of people who are all so vocal about the best way to allow each other to speak that they have no time and energy left for speaking their minds and listening to others. I have found another paper, which comes closer to explaining the behaviour of those individual agents when they coordinate only loosely. It is Gupta & Srivastava (2020[8]), who compare two versions of swarm intelligence: particle swarm and ant colony. The former (particle swarm) generalises a problem applicable to birds. Simple, isn’t it? A group of birds will randomly search for food. Birds don’t know where exactly the food is, so they follow the bird which is nearest to the food. The latter emulates the use of pheromones in a colony of ants. Ants selectively spread pheromones as they move around, and they find the right way of moving by following earlier deposits of pheromones. As many ants walk a given path many times, the residual pheromones densify and become even more attractive. Ants find the optimal path by following the maximum pheromone deposition.

Gupta & Srivastava (2020) demonstrate that the model of the ant colony – thus systems endowed with a medium of communication which acts by simple concentration in space and time – is more efficient for quick optimization than the bird-particle model, based solely on observing each other’s moves. From my point of view, i.e. from that of new technologies, those results reach deeper than it might seem at first sight. Financial capital is like a pheromone. One investor-ant drops some financial deeds at a project, and it can hopefully attract further deposits of capital etc. Still, ant colonies need to reach a critical size in order for that whole pheromone business to work. There needs to be a sufficient number of ants per unit of available space in order to create those pheromonal paths. Below the critical size, no path becomes salient enough to create coordination, and the ants starve to death for want of communicating efficiently. Incidentally, the same is true for capital markets. Some 11 years ago, right after the global financial crisis, a fashion emerged for creating small, relatively informal stock markets, called ‘alternative capital markets’. Some of them were created by the operators of big stock markets (e.g. the AIM market organized by the London Stock Exchange), some others were completely independent ventures. Now, a decade after that fashion exploded, the conclusion is similar to the one about ant colonies: for want of reaching a critical size, those alternative capital markets just don’t work as smoothly as the big ones.

All that science I have quoted makes my mind wander, and it starts walking down the path of the hilarious and absurd. I return, just for a moment, to another book: ‘1177 B.C. THE YEAR CIVILIZATION COLLAPSED. REVISED AND UPDATED’ by Eric H. Cline (Turning Points in Ancient History, Princeton University Press, 2021, ISBN 9780691208022). The book gives an in-depth account of the painful, catastrophic end of a whole civilisation, namely that of the Late Bronze Age, in the Mediterranean and the Levant. The interesting thing is that we know that the whole network of empires – Egypt, the Hittites, Mycenae, Ugarit and whatnot – collapsed at approximately the same moment, around 1200 – 1150 B.C., we know they collapsed violently, and yet we don’t know exactly how they collapsed.

Alternative history comes to my mind. I imagine the transition from the Bronze Age to the Iron Age similarly to what we are doing presently. The pharaoh-queen VanhderLeyenh comes up with the idea of iron. Well, she doesn’t; someone she pays does. The idea is so seductive that she comes up, by herself this time, with another one, namely a tax on bronze. ‘C’mon, Mr Brurumph, don’t tell me you can’t transition to iron within the next year. How many appliances in bronze do you have? Five? A shovel, two swords, and two knives. Yes, we checked. What about your rights? We are going through a deep technological change, Mr Brurumph, this is not a moment to talk about rights. Anyway, this is not even the new era yet, and there is no such thing as individual rights. So, Mr Brurumph, a one-year notice for passing from bronze to iron is more than enough. Later, you pay the bronze tax on each bronze appliance we find. Still, there is a workaround. If you officially identify as a non-Bronze person, and you put the corresponding sign over your door, you get a century-long prolongation on that tax’.

Mr Brurumph gets pissed off. Others do too. They feel lost in a hostile social environment. They start figuring s**t out, starting from the first principles of their logic. They become cellular automata. They focus on nailing down the next immediate move to make. Errors are costly. Swarm behaviour forms. Fights break out. Cities get destroyed. Not being liable to pay the tax on bronze becomes a thing. It gains support and gravitational attraction. It becomes tempting to join the wandering hordes of ‘Tax Free People’ who just don’t care and go. The whole idea of iron gets postponed by something like three centuries.


[1] Yu, J., Hagen-Zanker, A., Santitissadeekorn, N., & Hughes, S. (2021). Calibration of cellular automata urban growth models from urban genesis onwards-a novel application of Markov chain Monte Carlo approximate Bayesian computation. Computers, environment and urban systems, 90, 101689. https://doi.org/10.1016/j.compenvurbsys.2021.101689

[2] Bandini, S., Mauri, G., & Serra, R. (2001). Cellular automata: From a theoretical parallel computational model to its application to complex systems. Parallel Computing, 27(5), 539-553. https://doi.org/10.1016/S0167-8191(00)00076-4

[3] Yu, J., Hagen-Zanker, A., Santitissadeekorn, N., & Hughes, S. (2021). Calibration of cellular automata urban growth models from urban genesis onwards-a novel application of Markov chain Monte Carlo approximate Bayesian computation. Computers, environment and urban systems, 90, 101689. https://doi.org/10.1016/j.compenvurbsys.2021.101689

[4] Jiang, W., Wang, F., Fang, L., Zheng, X., Qiao, X., Li, Z., & Meng, Q. (2021). Modelling of wildland-urban interface fire spread with the heterogeneous cellular automata model. Environmental Modelling & Software, 135, 104895. https://doi.org/10.1016/j.envsoft.2020.104895

[5] Cirillo, E. N., Nardi, F. R., & Spitoni, C. (2021). Phase transitions in random mixtures of elementary cellular automata. Physica A: Statistical Mechanics and its Applications, 573, 125942. https://doi.org/10.1016/j.physa.2021.125942

[6] Puzicha, A., & Buchholz, P. (2021). Decentralized model predictive control for autonomous robot swarms with restricted communication skills in unknown environments. Procedia Computer Science, 186, 555-562. https://doi.org/10.1016/j.procs.2021.04.176

[7] Serov, V. A., Voronov, E. M., & Kozlov, D. A. (2021). A neuro-evolutionary synthesis of coordinated stable-effective compromises in hierarchical systems under conflict and uncertainty. Procedia Computer Science, 186, 257-268. https://doi.org/10.1016/j.procs.2021.04.145

[8] Gupta, A., & Srivastava, S. (2020). Comparative analysis of ant colony and particle swarm optimization algorithms for distance optimization. Procedia Computer Science, 173, 245-253. https://doi.org/10.1016/j.procs.2020.06.029

L’automate cellulaire respectable

I am trying to build a junction between two strands of my research: the feasibility study for my ‘Project Aqueduct’ on the one hand, and my more theoretical research on the phenomenon of collective intelligence on the other. Question: how can one predict and anticipate the absorption of a new technology into a social structure? In more concrete terms, how can I predict the absorption of ‘Project Aqueduct’ into the socio-economic environment? To make my life harder – which is always interesting – I am going to try to build the model of that absorption on a theoretical basis which is relatively new to me, namely the theory of cellular automata. In terms of literature, for the moment, I refer to two articles published 20 years apart from each other: Bandini, Mauri & Serra (2001[1]) as well as Yu et al. (2021[2]).

Why this particular theory? Why not, actually? Seriously, the theory of cellular automata tries to explain very complex phenomena – which arise in structures that look genuinely intelligent – from very weak assumptions about the individual behaviour of simple entities inside those structures. Moreover, this theory has already been well translated into the terms of artificial intelligence, and it therefore marries well with my general goal of developing a method of simulating socio-economic change with neural networks.

So there is a group of people who organize themselves, in one way or another, around a new technology. The economic resources and the institutional structure of that group can vary: it can be an incorporated company, a public-private project, a non-governmental organization etc. No matter: it starts as a social microstructure. Notice: a technology exists only when, and insofar as, such a structure exists, if not a bigger and more complex one. A technology exists only when there are people who take care of it.

So there is this group organised around a nascent technology. Everything we know about economic history and the history of technology tells us that if the idea proves promising, other, more or less similar groups will form. I repeat: other groups. When the technology of electric cars finally got a good bite of the market, it did not lead to the monopolistic expansion of Tesla. On the contrary: other entities started to build, independently, on Tesla’s experience. Today, each of the big car makers is living a more or less advanced adventure with electric vehicles, and there is a whole wave of startups created in the same niche. In fact, the technology of the electric vehicle has given a second youth to the business model of the small car company, something that seemed to have been consigned to the dustbin of history.

The absorption of a new technology can thus be represented as the proliferation of cells built around that technology. What for, you may ask. Why invent yet another theoretical model of the development of new technologies? After all, there are already quite a few such models. The theoretical challenge consists in simulating technological change so as to pin down possible Black Swans. The difference between a plain black swan and a Black Swan written with capital letters is that the latter refers to the book by Nassim Nicholas Taleb, ‘The Black Swan. The impact of the highly improbable’, Penguin, 2010. Yes, I know, there is more to it than that. A capitalised Black Swan can also be Tchaikovsky’s Black Swan, hence a woman (Odile) as attractive as she is dangerous through her skill at introducing chaos into a man’s life. I also know that if I arranged a conversation between Tchaikovsky and Carl Gustav Jung, the two gentlemen would probably agree that Odile alias the Black Swan symbolises chaos, in opposition to the fragile order in Siegfried’s life, hence to Odette. Anyway, I am not doing ballet here. I am blogging. That implies a different outfit, as well as a different kind of flexibility. I am also older than Siegfried, by about a generation.

All in all, my own Black Swan is the one borrowed from Nassim Nicholas Taleb, and it is therefore a phenomenon which, while being out of the ordinary and surprising for the people concerned, is nonetheless functionally and logically derived from a sequence of past phenomena. A Black Swan forms around phenomena which, for a while, occur at the tails of the Gaussian curve, hence at the fringe of probability. Black Swans carry danger as well as new opportunities, in doses as varied as Black Swans themselves. The practical interest of pinning down the Black Swans which can emerge from the present situation is therefore to prevent risks of the catastrophic kind on the one hand, and to capture exceptional opportunities very early on the other hand.

And so, without making a big deal of it, I have just enriched the functional description of my method of simulating the collective intelligence of human societies with artificial neural networks. This method can serve to identify in advance possible developments of the Black Swan type: significant, subjectively unexpected, and yet functionally rooted in the present reality.

So there is this new technology, and there are socio-economic cells forming around it. There are distinct species of cells, and each species corresponds to a different technology. Each cell can be represented as a cellular automaton A = (Zd, S, n, Sn+1 -> S), whose explanation begins with Zd, the d-dimensional space where the cells do what they have to do. The cellular automaton knows nothing about that space, just as a trout is not exactly brilliant when it comes to describing a river. A cellular automaton takes S different states, and these states are composed of one-step-at-a-time moves into n adjacent cellular locations. The automaton selects those S different states out of a larger catalogue Sn+1 of all the possible states, and the function Sn+1 -> S, alias the local rule of the automaton A, describes in general terms the quotient of that selection, hence the capacity of the cellular automaton to explore all the possibilities of moving its (cellular) butt just one notch away from its current position.
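Just to make that formal object a bit more tangible, here is a minimal sketch in Python of such an automaton, with d = 2, n = 4 one-step moves in the von Neumann style, and a purely illustrative local rule. The grid size, the scoring toward a hypothetical attractor, and the number of steps are my assumptions for the example, not part of the definition above.

```python
import random

# A minimal sketch of a cellular automaton A = (Zd, S, n, Sn+1 -> S),
# here with d = 2 and n = 4 one-step moves (von Neumann style).
# The local rule below is an illustrative assumption: the automaton scores
# the n + 1 reachable cells (staying put included) and picks the best one.

GRID = 20                                      # the lattice Zd, wrapped into a torus for simplicity
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]     # the n = 4 immediate moves

def neighbours_of(position):
    x, y = position
    return [((x + dx) % GRID, (y + dy) % GRID) for dx, dy in MOVES]

def local_rule(position):
    """Select one state out of the n + 1 reachable ones: a stand-in for Sn+1 -> S."""
    candidates = [position] + neighbours_of(position)
    centre = (GRID // 2, GRID // 2)            # hypothetical attractor in the social space
    return min(candidates, key=lambda c: abs(c[0] - centre[0]) + abs(c[1] - centre[1]))

position = (random.randrange(GRID), random.randrange(GRID))
trajectory = [position]
for t in range(10):                            # the automaton carves a trajectory, one step at a time
    position = local_rule(position)
    trajectory.append(position)

print(trajectory)
```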

Why distinguish these four structural variables in the cellular automaton? Why don’t we assume that the number of possible moves ‘n’ is a constant function of the dimensions offered by the space Zd? Why not assume that the real number of states S equals the possible total Sn+1? Well, because the theory of cellular automata has the ambition to serve some useful purpose, and it strives to simulate reality. So there is a new technology encapsulated in a social cell A. The social space around A is vast, but there may be locked doors in it. Oligopolistic markets, quicker and more enterprising competitors, legal obstacles, and even purely social obstacles. If a company you invite to cooperate in your innovative project fears being exposed to 10 000 enraged tweets from people who dislike your technology, that particular door is closed, even though the dimension where it sits is theoretically accessible.

If I am a perfectly ordinary cellular automaton and I have the possibility of moving into n social locations adjacent to the one where I am now, I start by choosing just one move and seeing what happens. When everything goes satisfactorily, I observe my new immediate environment – the new ‘n’ visible from the cell I have just moved into – I make another move into a location selected from that new ‘n’, and so on. Within an immediate environment ‘n’, I, the average cellular automaton, explore more than one of the n possible locations only when I have just suffered a failure in the previously chosen location and have decided that the best strategy is to go back to square one while reconsidering the available options.

The social cell built around a technology will therefore carve its way through the social space Zd, trying to make successful moves, hence selecting one option from among the ‘n’ possible ones. Yes, failures happen, and so sometimes the social cell will experiment with k > 1 immediate moves. Nevertheless, the situation where k = n is the one where the people working on a new technology have tried, in vain, all the possible options except one last one, and throw themselves headlong into it, which then turns out to be a success. Such situations do happen, I know. I believe Canal+ was an adventure of that type in its early days. Still, when something works, in the launch of a new technology, we just keep going in its wake without looking over our shoulder.

The real number S of states which a cellular automaton takes is therefore largely subject to hysteresis. Each successful move is one less immediate environment to exploit, namely the one left behind us. At the same time, it is a new challenge: making the next successful move at the first try, without lingering in alternative locations. The cellular automaton is thus a traveller more than an explorer. In short, the formulation A = (Zd, S, n, Sn+1 -> S) of a cellular automaton expresses opportunities and constraints at the same time.

My social cell built around ‘Projet Aqueduc’ coexists with social cells built around other technologies. Like any respectable cellular automaton, I look around and I see obvious moves in terms of investment. I can move my social cell along the dimensions of accumulated capital and of the physical scale of the installations. I suppose that the other social cells, centred on other technologies, will do the same: look for capital and for opportunities to grow physically. Excellent! So I can already see two dimensions of Zd: the financial scale and the physical scale. I wonder how to move along them, and I discover other dimensions, more behavioural and cognitive ones: the expected internal return (profit) on investment as well as the external return (growth of enterprise value), the general growth of the market for investment capital etc.

Finding new dimensions is a piece of cake, by the way. Much easier than science-fiction movies make it look. It is enough to ask ourselves what might be hampering our moves, take a good look around, have a few conversations, and there you go! I can discover new dimensions even without access to a high-energy inter-dimensional teleporter. I remember watching on YouTube a series of videos whose creators claimed to know for sure that the Large Hadron Collider (yes, the one in Geneva) has opened a tunnel to hell. I pass over the simplest questions, such as: ‘How do you know it is a tunnel, i.e. a tube with an entrance and an exit? How do you know it leads to hell? Has anyone gone to the other side and asked the locals where it is that they live?’. The really stunning thing is that there are always people who firmly believe you need hundreds of thousands of dollars of investment and years of scientific research to discover a path to hell. That path, each of us has it within arm’s reach. It is enough to stop discovering new dimensions in our existence.

Right, so I am a respectable cellular automaton developing ‘Projet Aqueduc’ out of a cell of enthusiasts and in the presence of other cellular automata. We move, us cellular automata, along two quite clear dimensions of scale – accumulated capital and physical size of the installations – and we know that moving along those two requires an effort along other, less obvious dimensions, which intertwine around the general interest in our idea on the part of extra-cellular people. Our Zd is in fact quite some Zd. Having two clearly visible dimensions and a debatable number of fuzzier ones means that the number ‘n’ of possible moves is just as debatable, and we avoid exploring all its nuances. We jump at the first available location from among ‘n’, which transports us into another ‘n’, and then again, and again.

When all the cellular automata display roughly coherent local rules Sn+1 -> S, it is possible to derive from them an instantaneous description Zd -> S, also known as the configuration of A, or its global state. The number of possible states which my ‘Projet Aqueduc’ can take, in a space full of cellular automata, will depend on the number of possible states of the other cellular automata. These instantaneous descriptions Zd -> S are, as the name indicates, instantaneous, hence temporary and local. They can change. In particular, the number S of possible states of my ‘Projet Aqueduc’ changes as a function of the immediate environment ‘n’ accessible from the current position t. A sequence of positions thus corresponds to a sequence of configurations ct = Zd -> S (t), and this sequence is designated as the behaviour of the cellular automaton A, or its evolution.


[1] Bandini, S., Mauri, G., & Serra, R. (2001). Cellular automata: From a theoretical parallel computational model to its application to complex systems. Parallel Computing, 27(5), 539-553. https://doi.org/10.1016/S0167-8191(00)00076-4

[2] Yu, J., Hagen-Zanker, A., Santitissadeekorn, N., & Hughes, S. (2021). Calibration of cellular automata urban growth models from urban genesis onwards-a novel application of Markov chain Monte Carlo approximate Bayesian computation. Computers, environment and urban systems, 90, 101689. https://doi.org/10.1016/j.compenvurbsys.2021.101689

The red-neck-cellular automata

I continue revising my work on collective intelligence, and I am linking it to the theory of complex systems. I return to the excellent book ‘What Is a Complex System?’ by James Ladyman and Karoline Wiesner (Yale University Press, 2020, ISBN 978-0-300-25110-4, Kindle Edition). I take and quote their summary list of characteristics that complex systems display, on pages 22 – 23: “[…] which features are necessary and sufficient for which kinds of complexity and complex system. The features are as follows:

1. Numerosity: complex systems involve many interactions among many components.

2. Disorder and diversity: the interactions in a complex system are not coordinated or controlled centrally, and the components may differ.

3. Feedback: the interactions in complex systems are iterated so that there is feedback from previous interactions on a time scale relevant to the system’s emergent dynamics.

4. Non-equilibrium: complex systems are open to the environment and are often driven by something external.

5. Spontaneous order and self-organisation: complex systems exhibit structure and order that arises out of the interactions among their parts.

6. Nonlinearity: complex systems exhibit nonlinear dependence on parameters or external drivers.

7. Robustness: the structure and function of complex systems is stable under relevant perturbations.

8. Nested structure and modularity: there may be multiple scales of structure, clustering and specialisation of function in complex systems.

9. History and memory: complex systems often require a very long history to exist and often store information about history.

10. Adaptive behaviour: complex systems are often able to modify their behaviour depending on the state of the environment and the predictions they make about it”.

As I look at the list, my method of simulating collective intelligence is coherent therewith. Still, there is one point which I think I need to dig a bit more into: that whole thing with simple entities inside the complex system. In most of my simulations, I work on interactions between cognitive categories, i.e. between quantitative variables. Interaction between real social entities is most frequently implied rather than empirically nailed down. Still, there is one piece of research which sticks out a bit in that respect, and which I did last year. It is devoted to cities and their role in human civilisation. I wrote quite a few blog updates on the topic, and I have one unpublished paper written thereon, titled ‘The Puzzle of Urban Density And Energy Consumption’. In this case, I made simulations of collective intelligence with my method, thus I studied interactions between variables. Yet, in the phenomenological background of emerging complexity in variables, real people interact in cities: there are real social entities interacting in correlation with the connections between variables. I think the collective intelligence of cities is the piece of research where I have the surest empirical footing, as compared to others.

There is another thing which I almost inevitably think about. Given the depth and breadth of the complexity theory, such as I start discovering it with and through that ‘What Is a Complex System?’ book, by James Ladyman and Karoline Wiesner, I ask myself: what kind of bacon can I bring to that table? Why should anyone bother about my research? What theoretical value added can I supply? A good way of testing it is talking about real problems. I have just signalled my research on cities. The most general hypothesis I am exploring is that cities are factories of new social roles in the same way that the countryside is a factory of food. In the presence of demographic growth, we need more food, and we need new social roles for new humans coming around. In the absence of such new social roles, those new humans feel alienated, they identify as revolutionaries fighting for the greater good, they identify the incumbent humans as oppressive patriarchy, and the next thing you know, there is systemic, centralized, government-backed terror. Pardon my French, this is a system of social justice. Did my bit of social justice, in communist Poland.

Anyway, cities make new social roles by making humans interact much more abundantly than they usually do in a farm. More abundant an interaction means more data to process for each human brain, more s**t to figure out, and the next thing you know, you become a craftsman, a businessperson, an artist, or an assassin. Once again, being an assassin in the countryside would not make much sense. Jumping from one roof to another looks dashing only in an urban environment. Just try it on a farm.

Now, an intellectual challenge. How can humans, who essentially don’t know what to do collectively, interact so as to create emergent complexity which, in hindsight, looks as if they had known what to do? An interesting approach, which hopefully allows using some kind of neural network, is the paradigm of the maze. Each individual human is so lost in social reality that the latter appears as a maze, whose layout one does not know. Before I go further, there is one linguistic thing to nail down. I feel stupid using impersonal forms such as ‘one’, or ‘an individual’. I like more concreteness. I am going to start with George the Hero. George the Hero lives in a maze, and I stress it: he lives there. Social reality is like a maze to George, and, logically, George does not even want to get out of that maze, ‘cause that would mean being lonely, with no one around to gauge George’s heroism. George the Hero needs to stay in the maze.

The first thing which George the Hero needs to figure out is the dimensionality of the maze. How many axes can George move along in that social complexity? Good question. George needs to experiment in order to discover that. He makes moves in different social directions. He looks around to see what different kinds of education he can possibly get. He assesses his occupational options, mostly jobs and business ventures. He asks himself how he can structure his relations with family and friends. Is being an asshole compatible with fulfilling emotional bonds with people around?

Wherever George the Hero currently is in the maze, there are n neighbouring and available cells around him. In each given place of the social maze, George the Hero has n possible ways to move further, into those n accessible cells in the immediate vicinity, and that is associated with k dimensions of movement. What is k, exactly? Here, I can refer to the theory of cellular automata, which attempts to simulate interactions between really simple, cell-like entities (Bandini, Mauri & Serra 2001[1]; Yu et al. 2021[2]). There is something called ‘von Neumann neighbourhood’. It corresponds to the assumption that if George the Hero has n neighbouring social cells which he can move into, he can move like ‘left-right-forward-back’. That, in turn, spells k = n/2. If George can move into 4 neighbouring cells, he moves in a 2-dimensional space. Should he be able to move into 6 adjacent cells of the social maze, he has 3 dimensions to move along etc. Trouble starts when George sees an odd number of places to move to, like 5 or 7, on account of these giving half-dimensions, like 5/2 = 2.5, 7/2 = 3.5 etc. Half a dimension means, in practical terms, that George the Hero faces social constraints. There might be cells around, mind you, which technically are there, but there are walls between George and them, and thus, for all practical purposes, the Hero can afford not to give a f**k.
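A small numerical illustration of that arithmetic, as my own toy sketch assuming the von Neumann convention described above, with the odd-n case treated, as in the text, as a constrained half-dimension:

```python
def dimensions_from_moves(n):
    """Von Neumann convention: each unconstrained dimension contributes two moves (back and forth)."""
    k = n / 2                            # k dimensions of movement
    full_dims, leftover = divmod(n, 2)   # a leftover move is a walled-off, half dimension
    return k, full_dims, bool(leftover)

for n in (4, 5, 6, 7):
    k, full_dims, has_wall = dimensions_from_moves(n)
    note = ", plus one walled-off direction" if has_wall else ""
    print(f"n = {n}: k = {k} -> {full_dims} free dimension(s){note}")
```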

George the Hero does not like to move back. Hardly anyone does. Thus, when George has successfully moved from cell A to cell B, he will probably not like going back to A, just in order to explore another cell adjacent thereto. People behave heuristically. People build up on their previous gains. Once George the Hero has moved from A to B, B becomes his A for the next move. He will choose one among the cells adjacent to B (now A), move there etc. George is a Hero, not a scientist, and therefore he carves a path through the social maze rather than discovers the maze as such. Each cell in the maze contains some rewards and some threats. George can get food and it means getting into a dangerously complex relation with that sabre-tooth tiger. George can earn money and it means giving up some of his personal freedom. George can bond with other people and find existential meaning and it means giving up even more of what he provisionally perceives as his personal freedom.

The social maze is truly a maze because there are many Georges around. Interestingly, many Georges in English give one Georges in French, and I feel this is the point where I should drop the metaphor of George the Hero. I need to get more precise, and thus I go to a formal concept in the theory of cellular automata, namely that of a d-dimensional cellular automaton, which can be mathematically expressed as A = (Zd, S, N, Sn+1 -> S). In that automaton A, Zd stands for the architecture of the maze, thus a lattice of d – tuples of integer numbers. In plain human, Zd is given by the number of dimensions, possibly constrained, which a human can move along in the social space. Many people carve their paths across the social maze, no one likes going back, and thus the more people are around, and the better they can communicate their respective experiences, the more exhaustive knowledge we have of the surrounding Zd.

There is a finite set S of states in that social space Zd, and that finitude is connected to the formally defined neighbourhood of the automaton A, namely the N. Formally, N is a finite ordered subset of Zd, and, besides the ‘left-right-forward-back’ neighbourhood of von Neumann, there is a more complex one, namely the Moore neighbourhood. In the latter, we can move diagonally between cells, like to the left and forward, to the right and forward etc. Keeping in mind that neighbourhood means, in practical terms, the number n of cells which we can move into from the social cell we are currently in, the cellular automaton can be rephrased as A = (Zd, S, n, Sn+1 -> S). The transition Sn+1 -> S, called the local rule of A, makes more sense now. With me being in a given cell of the social maze, and there being n available cells immediately adjacent to mine, that makes n + 1 cells I can possibly be in, and I can technically visit all those cells in a finite number of Sn+1 combinatorial paths. The transition Sn+1 -> S expresses the way in which I carve my finite set S of states out of the generally available Sn+1.
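For readers who like to see the counts, here is a small Python sketch enumerating the two neighbourhoods for a d-dimensional lattice. It only illustrates where the number n of adjacent cells comes from (2d for von Neumann, 3^d – 1 for Moore); the social interpretation is left aside.

```python
from itertools import product

def von_neumann(d):
    """One-step moves along the axes of a d-dimensional lattice: n = 2d."""
    moves = []
    for axis in range(d):
        for step in (-1, 1):
            move = [0] * d
            move[axis] = step
            moves.append(tuple(move))
    return moves

def moore(d):
    """All one-step moves, diagonals included: n = 3^d - 1."""
    return [m for m in product((-1, 0, 1), repeat=d) if any(m)]

for d in (1, 2, 3):
    print(f"d = {d}: von Neumann n = {len(von_neumann(d))}, Moore n = {len(moore(d))}")
```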

If I assume that cities are factories of new social roles, the cellular automaton of an urban homo sapiens should be more complex than the red-neck-cellular automaton in a farm folk. It might mean a greater n, thus more cells available for moving from where I am now. It might also mean a more efficient Sn+1 -> S local rule, i.e. a better way to explore all the possible states I can achieve starting from where I am. There is a separate formal concept for that efficiency in the local rule, and it is called configuration of the cellular automaton AKA its instantaneous description AKA its global state, and it refers to the map Zd -> S. Hence, the configuration of my cellular automaton is the way in which the overall social space Zd maps into the set S of states actually available to me.

Right, if I have my cellular automaton with a configuration map Zd -> S, it is sheer fairness that you have yours too, and your cousin Eleonore has another one for herself, as well. There are many of us in the social space Zd. We are many x’s in the Zd. Each x of us has their own configuration map Zd -> S. If we want to get along with each other, our individual cellular automatons need to be mutually coherent enough to have a common, global function of cellular automata, and we know there is such a global function when we can collectively produce a sequence of configurations.

According to my own definition, a social structure is a collectively intelligent structure to the extent that it can experiment with many alternative versions of itself and select the fittest one, whilst staying structurally coherent. Structural coherence, in turn, is the capacity to relax and tighten, in a sequence, behavioural coupling inside the society, so as to allow the emergence and grounding of new behavioural patterns. The theory of cellular automata provides me some insights in that respect. Collective intelligence means the capacity to experiment with ourselves, right? That means experimenting with our global function Zd -> S, i.e. with the capacity to translate the technically available social space Zd into a catalogue S of possible states. If we take a random sample of individuals in a society, and study their cellular automatons A, they will display local rules Sn+1 -> S, and these can be expressed as coefficients (S / Sn+1), 0 ≤ (S / Sn+1) ≤ 1. The latter express the capacity of individual cellular automatons to generate actual states S of being out of the generally available menu of Sn+1.

In a large population, we can observe the statistical distribution of individual (S / Sn+1) coefficients of freedom in making one’s cellular state. The properties of that statistical distribution, e.g. the average (S / Sn+1) across the board, are informative about how collectively intelligent the given society is. The greater the average (S / Sn+1), the more possible states the given society can generate within the incumbent social structure, and the more it can know about the fittest state possible. That looks like a cellular definition of functional freedom.
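A minimal sketch of that statistical reading, with made-up individual coefficients standing in for a real sample; only the arithmetic is shown, no empirical claim whatsoever.

```python
import statistics

# Hypothetical sample of individual (S / Sn+1) coefficients of freedom,
# i.e. the share of technically reachable states each person actually generates.
coefficients = [0.12, 0.35, 0.28, 0.51, 0.09, 0.44, 0.31, 0.22, 0.61, 0.18]

mean_freedom = statistics.mean(coefficients)     # the average (S / Sn+1) across the board
spread = statistics.pstdev(coefficients)         # dispersion of individual freedom

print(f"average (S / Sn+1): {mean_freedom:.2f}")
print(f"dispersion: {spread:.2f}")
```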


[1] Bandini, S., Mauri, G., & Serra, R. (2001). Cellular automata: From a theoretical parallel computational model to its application to complex systems. Parallel Computing, 27(5), 539-553. https://doi.org/10.1016/S0167-8191(00)00076-4

[2] Yu, J., Hagen-Zanker, A., Santitissadeekorn, N., & Hughes, S. (2021). Calibration of cellular automata urban growth models from urban genesis onwards-a novel application of Markov chain Monte Carlo approximate Bayesian computation. Computers, environment and urban systems, 90, 101689. https://doi.org/10.1016/j.compenvurbsys.2021.101689

The collective of individual humans being any good at being smart

I am working on two topics in parallel, which is sort of normal in my case. As I know myself, instead of asking “Isn’t two too much?”, I should rather say “Just two? Run out of ideas, obviously”. I keep working on a proof-of-concept article for the idea which I provisionally labelled “Energy Ponds” AKA “Project Aqueduct”, on the one hand. See my two latest updates, namely ‘I have proven myself wrong’ and ‘Plusieurs bouquins à la fois, comme d’habitude’, as regards the summary of what I have found out and written down so far. As in most research which I do, I have come to the conclusion that however wonderful the concept appears, the most important thing in my work is the method of checking the feasibility of that concept. I guess I should develop on the method more specifically.

On the other hand, I am returning to my research on collective intelligence. I have just been approached by a publisher, with a kind invitation to submit the proposal for a book on that topic. I am passing in review my research, and the available literature. I am wondering what kind of central thread I should structure the entire book around. Two threads turn up in my mind, as a matter of fact. The first one is the assumption that whatever kind of story I am telling, I am actually telling the story of my own existence. I feel I need to go back to the roots of my interest in the phenomenon of collective intelligence, and those roots are in my meddling with artificial neural networks. At some point, I came to the conclusion that artificial neural networks can be good simulators of the way that human societies figure s**t out. I need to dig again into that idea.

My second thread is the theory of complex systems AKA the theory of complexity. The thing seems to be macheting its way through the jungle of social sciences these last years, and it looks interestingly similar to what I labelled as collective intelligence. I came by the theory of complexity in three books which I am reading now (just three?). The first one is a history book: ‘1177 B.C. The Year Civilisation Collapsed. Revised and Updated’, published by Eric H. Cline with Princeton University Press in 2021[1]. The second book is just a few light years away from the first one. It regards mindfulness. It is ‘Aware. The Science and Practice of Presence. The Groundbreaking Meditation Practice’, published by Daniel J. Siegel with TarcherPerigee in 2018[2]. The third book is already some sort of a classic; it is ‘The Black Swan. The impact of the highly improbable’ by Nassim Nicholas Taleb with Penguin, in 2010.

I think it is Daniel J. Siegel who gives the best general take on the theory of complexity, and I allow myself to quote: ‘One of the fundamental emergent properties of complex systems in this reality of ours is called self-organization. That’s a term you might think someone in psychology or even business might have created—but it is a mathematical term. The form or shape of the unfolding of a complex system is determined by this emergent property of self-organization. This unfolding can be optimized, or it can be constrained. When it’s not optimizing, it moves toward chaos or toward rigidity. When it is optimizing, it moves toward harmony and is flexible, adaptive, coherent, energized, and stable’. (Siegel, Daniel J.. Aware (p. 9). Penguin Publishing Group. Kindle Edition).  

I am combining my scientific experience of using AI as a social simulator with the theory of complex systems. I mean I need to UNDERSTAND, like really. I need to understand my own thinking, in the first place, and then I need to combine it with whatever I can understand from other people’s thinking. It started with a simple artificial neural network, which I used to write my article ‘Energy efficiency as manifestation of collective intelligence in human societies’ (Energy, 191, 116500, https://doi.org/10.1016/j.energy.2019.116500 ). I had a collection of quantitative variables, which I had previously meddled with using classical regression. As regression did not really bring many conclusive results, I had the idea of using an artificial neural network. Of course, today, neural networks are a whole technology and science. The one I used is the equivalent of a spear with a stone tip as compared to a battle drone. Therefore, the really important thing is the fundamental logic of neural networking as compared to regression, in analyzing quantitative data.

When I do regression, I come up with a function, like y = a1*x1 + a2*x2 + …+ b, I trace that function across the cloud of empirical data points I am working with, and I measure the average distance from those points to the line of my function. That average distance is the average (standard) error of estimation with that given function. I repeat the process as many times as necessary to find a function which both makes sense logically and yields the lowest standard error of estimation. The central thing is that I observe all my data at once, as if it was all happening at the same time and as if I was observing it from outside. Here is the thing: I observe it from outside, but when that empirical data was happening, i.e. when the social phenomena expressed in my quantitative variables were taking place, everybody (me included) was inside, not outside.
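For concreteness, here is a minimal Python sketch of that ‘outside’ view, on made-up data: fit y = a1*x1 + a2*x2 + b by ordinary least squares and measure the standard error of estimation as the average distance between the data points and the fitted function (computed here, by assumption, as the root mean square of the residuals).

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up empirical cloud: two explanatory variables and one explained variable.
X = rng.normal(size=(100, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 2.0 + rng.normal(scale=0.5, size=100)

# Ordinary least squares for y = a1*x1 + a2*x2 + b, observed "from outside", all at once.
A = np.column_stack([X, np.ones(len(X))])
coefficients, *_ = np.linalg.lstsq(A, y, rcond=None)

residuals = y - A @ coefficients
standard_error = np.sqrt(np.mean(residuals ** 2))   # average distance to the fitted function

print("a1, a2, b:", np.round(coefficients, 3))
print("standard error of estimation:", round(float(standard_error), 3))
```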

How to express mathematically the fact of being inside the facts measured? One way is to take those empirical occurrences one by one, sort of Denmark in 2005, and then Denmark in 2006, and then Germany in 2005 etc. Being inside the events changes my perspective on what is the error of estimation, as compared to being outside. When I am outside, error means departure from the divine plan, i.e. from the regression function. When I am inside things that are happening, error happens as discrepancy between what I want and expect, on the one hand, and what I actually get, on the other hand. These are two different errors of estimation, measured as departures from two different functions. The regression function is the most accurate (or as accurate as you can get) mathematical explanation of the empirical data points. The function which we use when simulating the state of being inside the events is different: it is a function of adaptation.      

Intelligent adaptation means that we are after something: food, sex, power, a new Ferrari, social justice, 1000 000 followers on Instagram…whatever. There is something we are after, some kind of outcome we try to optimize. When I have a collection of quantitative variables which describe a society, such as energy efficiency, headcount of population, inflation rates, incidence of Ferraris per 1 million people etc., I can make a weak assumption that any of these can express a desired outcome. Here, a digression is due. In science and philosophy, weak assumptions are assumptions which assume very little, and therefore they are bloody hard to discard. On the other hand, strong assumptions assume a lot, and that makes them pretty good targets for discarding criticism. In other words, in science and philosophy, weak assumptions are strong and strong assumptions are weak. Obvious, isn’t it? Anyway, I make that weak assumption that any phenomenon we observe and measure with a numerical scale can be a collectively desired outcome we pursue.

Another assumption I make, a weak one as well, is sort of hidden in the word ‘expresses’. Here, I relate to a whole line of philosophical and scientific heritage, going back to people like Plato, Kant, William James, Maurice Merleau-Ponty, or, quite recently, Michael Keane (1972[3]), as well as Berghout & Verbitskiy (2021[4]). Very nearly everyone who seriously thought (or keeps thinking, on the account of being still alive) about human cognition of reality agrees that we essentially don’t know s**t. We make cognitive constructs in our minds, so as to make at least a little bit of sense of the essentially chaotic reality outside our skin, and we call it empirical observation. Mind you, stuff inside our skin is not much less chaotic, but this is outside the scope of social sciences. As we focus on quantitative variables commonly used in social sciences, the notion of facts becomes really blurred. Have you ever shaken hands with energy efficiency, with Gross Domestic Product or with the mortality rate? Have you touched it? No? Neither have I. These are highly distilled cognitive structures which we use to denote something about the state of society.

Therefore, I assume that quantitative, socio-economic variables express something about the societies observed, and that something is probably important if we collectively keep record of it. If I have n empirical variables, each of them possibly represents collectively important outcomes. As these are distinct variables, I assume that, with all the imperfections and simplification of the corresponding phenomenology, each distinct variable possibly represents a distinct type of collectively important outcome. When I study a human society through the lens of many quantitative variables, I assume they are informative about a set of collectively important social outcomes in that society.

Whilst a regression function explains how many variables are connected when observed ex post and from outside, an adaptation function explains and expresses the way that a society addresses important collective outcomes in a series of trials and errors. Here come two fundamental differences between studying a society with a regression function, as opposed to using an adaptation function. Firstly, for any collection of variables, there is essentially one regression function of the type: y = a1*x1 + a2*x2 + …+ an*xn + b. On the other hand, with a collection of n quantitative variables at hand, there are at least as many functions of adaptation as there are variables. We can hypothesize that each individual variable x is the collective outcome to pursue and optimize, whilst the remaining n – 1 variables are instrumental to that purpose. One remark is important to make now: the variable informative about collective outcomes pursued, that specific x, can be and usually is instrumental to itself. We can make a desired Gross Domestic Product based on the Gross Domestic Product we have now. The same applies to inflation, energy efficiency, share of electric cars in the overall transportation system etc. Therefore, the entire set of n variables can be assumed instrumental to the optimization of one variable x from among them.

Mathematically, it starts with assuming a functional input f(x1, x2, …, xn) which gets pitched against one specific outcome xi. Subtraction comes as the most logical representation of that pitching, and thus we have the mathematical expression ‘xi – f(x1, x2, …, xn)’, which informs about how close the society observed has come to the desired outcome xi. It is technically possible that people just nail it, and xi = f(x1, x2, …, xn), whence xi – f(x1, x2, …, xn) = 0. This is a perfect world, which, however, can be dangerously perfect. We know those societies of apparently perfectly happy people, who live in harmony with nature, even if that harmony means hosting most intestinal parasites of the local ecosystem. One day other people come, with big excavators, monetary systems, structured legal norms, and the bubble bursts, and it hurts.

Thus, on the whole, it might be better to hit xi ≠ f(x1, x2, …, xn), whence xi – f(x1, x2, …, xn) ≠ 0. It helps learning new stuff. The ‘≠ 0’ part means there is an error in adaptation. The functional input f(x1, x2, …, xn) hits above or below the desired xi. As we want to learn, that error in adaptation AKA e = xi – f(x1, x2, …, xn) ≠ 0 makes any practical sense when we utilize it in subsequent rounds of collective trial and error. Sequence means order, and a timeline. We have a sequence {t0, t1, t2, …, tm} of m moments in time. Local adaptation turns into ‘xi(t) – ft(x1, x2, …, xn)’, and the error of adaptation becomes the time-specific e(t) = xi(t) – ft(x1, x2, …, xn) ≠ 0. The clever trick consists in taking e(t0) = xi(t0) – ft0(x1, x2, …, xn) ≠ 0 and combining it somehow with the next functional input ft1(x1, x2, …, xn). Mathematically, if we want to combine two values, we can add them up or multiply them. We keep in mind that division is a special case of multiplication, namely x * (1/z). When I add up two values, I assume they are essentially of the same kind and sort of independent from each other. When, on the other hand, I multiply them, they become entwined so that each of them reproduces the other one. Multiplication ‘x * z’ means that x gets reproduced z times and vice versa. When I have the error of adaptation e(t0) from the last experimental round and I want to combine it with the functional input of adaptation ft1(x1, x2, …, xn) in the next experimental round, that whole reproduction business looks like a strong assumption, with a lot of weak spots on it. I settle for the weak assumption then, and I assume that ft1(x1, x2, …, xn) becomes ft0(x1, x2, …, xn) + e(t0).

The expression ft0(x1, x2, …, xn) + e(t0) makes any functional sense only when and after we have e(t0) = xi(t0) – ft0(x1, x2, …, xn) ≠ 0. Consequently, the next error of adaptation, namely e(t1) = xi(t1) – ft1(x1, x2, …, xn) ≠ 0, can come into being only after its predecessor e(t0) has occurred. We have a chain of m states in the functional input of the society, i.e. {ft0(x1, x2, …, xn) => ft1(x1, x2, …, xn) => … => ftm(x1, x2, …, xn)}, associated with a chain of m desired outcomes {xi(t0) => xi(t1) => … => xi(tm)}, and with a chain of errors in adaptation {e(t0) => e(t1) => …=> e(tm)}. That triad – chain of functional inputs, chain of desired outcomes, and the chain of errors in adaptation – makes for me the closest I can get now to the mathematical expression of the adaptation function. As errors get fed along the chain of states (as I see it, they are being fed forward, but in the algorithmic version, you can backpropagate them), those errors are some sort of dynamic memory in that society, the memory from learning to adapt.
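Here is a minimal sketch of that triad of chains, on made-up numbers, with an assumed, very crude functional input (a weighted sum of the instrumental variables). Adding the previous error to the next round’s raw input is just one possible reading of ft1(x1, x2, …, xn) = ft0(x1, x2, …, xn) + e(t0); the sketch only shows the feed-forward of errors, not any real model of a society.

```python
import numpy as np

rng = np.random.default_rng(1)

m = 12                                           # moments t0 ... tm
n = 4                                            # variables x1 ... xn (made up)
X = rng.normal(loc=1.0, scale=0.2, size=(m, n))  # hypothetical observations
outcome = X[:, 0]                                # the desired outcome xi, here x1 by assumption
weights = rng.normal(scale=0.3, size=n)          # crude, assumed shape of f(x1, ..., xn)

functional_inputs, desired_outcomes, errors = [], [], []
carried_error = 0.0
for t in range(m):
    f_t = float(X[t] @ weights) + carried_error  # functional input, enriched with the previous error
    e_t = float(outcome[t]) - f_t                # error of adaptation e(t) = xi(t) - ft(...)
    functional_inputs.append(f_t)
    desired_outcomes.append(float(outcome[t]))
    errors.append(e_t)
    carried_error = e_t                          # fed forward into the next experimental round

print("chain of errors in adaptation:", [round(e, 3) for e in errors])
```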

Here we can see the epistemological difference between studying a society from outside, and explaining its workings with a regression function, on the one hand, and studying those mechanisms from inside, by simulation with an adaptation function, on the other hand. Adaptation function is the closest I can get, in mathematical form, to what I understand by collective intelligence. As I have been working with that general construct, I progressively zoomed in on another concept, namely that of intelligent structure, which I define as a structure which learns by experimenting with many alternative versions of itself whilst staying structurally coherent, i.e. by maintaining basic coupling between particular components.

I feel like comparing my approach to intelligent structures and their collective intelligence with the concept of complex systems, as discussed in the literature I have just referred to. I returned, therefore, to the book entitled ‘1177 B.C. The Year Civilisation Collapsed. Revised and Updated’, by Eric H. Cline, Princeton University Press, 2021. The theory of complex systems is brought forth in that otherwise very interesting piece in order to help formulate an answer to the following question: “Why did the great empires of the Late Bronze Age, such as Egypt, the Hittites, or the Myceneans, collapse all at approximately the same time, around 1200 – 1150 B.C.?”. The basic assertion which Eric Cline develops on and questions is that the entire patchwork of those empires in the Mediterranean, the Levant and the Middle East was one big complex system, which collapsed on account of having overkilled it slightly in the complexity department.

I am trying to reconstruct the definition of systemic complexity such as Eric Cline uses it in his flow of logic. I start with the following quote: ‘Complexity science or theory is the study of a complex system or systems, with the goal of explaining the phenomena which emerge from a collection of interacting objects’. If we study a society as a complex system, we need to assume two things. There are many interacting objects in it, for one, and their mutual interaction leads to the emergence of some specific phenomena. Sounds cool. I move on, and a few pages later I find the following statement: ‘In one aspect of complexity theory, behavior of those objects is affected by their memories and “feedback” from what has happened in the past. They are able to adapt their strategies, partly on the basis of their knowledge of previous history’. Nice. We are getting closer. Entities inside a complex system accumulate memory, and they learn on that basis. This is sort of next door to the three sequential chains: states, desired outcomes, and errors in adaptation, which I coined up.

Further, I find an assertion that a complex social system is typically “alive”, which means that it evolves in a complicated, nontrivial way, whilst being open to influences from the environment. All that leads the complex system to generate phenomena which can be considered as surprising and extreme. Good. This is the moment to move to the next book: ‘The Black Swan. The impact of the highly improbable’ by Nassim Nicholas Taleb, Penguin, 2010. Here comes a lengthy quote, which I bring here for the sheer pleasure of savouring one more time Nassim Taleb’s delicious style: “[…] say you attribute the success of the nineteenth-century novelist Honoré de Balzac to his superior “realism,” “insights,” “sensitivity,” “treatment of characters,” “ability to keep the reader riveted,” and so on. These may be deemed “superior” qualities that lead to superior performance if, and only if, those who lack what we call talent also lack these qualities. But what if there are dozens of comparable literary masterpieces that happened to perish? And, following my logic, if there are indeed many perished manuscripts with similar attributes, then, I regret to say, your idol Balzac was just the beneficiary of disproportionate luck compared to his peers. Furthermore, you may be committing an injustice to others by favouring him. My point, I will repeat, is not that Balzac is untalented, but that he is less uniquely talented than we think. Just consider the thousands of writers now completely vanished from consciousness: their record does not enter into analyses. We do not see the tons of rejected manuscripts because these writers have never been published. The New Yorker alone rejects close to a hundred manuscripts a day, so imagine the number of geniuses that we will never hear about. In a country like France, where more people write books while, sadly, fewer people read them, respectable literary publishers accept one in ten thousand manuscripts they receive from first-time authors”.

Many people write books, few people read them, and that creates something like a flow of highly risky experiments. That coincides with something like a bottleneck of success, with possibly great positive outcomes (fame, money, posthumous fame, posthumous money for other people etc.), and a low probability of occurrence. A few salient phenomena are produced – the Balzacs – whilst the whole build-up of other writing efforts, by less successful novelists, remains in the backstage of history. That, in turn, somehow rhymes with my intuition that intelligent structures need to produce big outliers, at least from time to time. On the one hand, those outliers can be viewed as big departures from the currently expected outcomes. They are big local errors. Big errors mean a lot of information to learn from. There is an even further-going, conceptual coincidence with the theory and practice of artificial neural networks. A network can be prone to overfitting, which means that it learns too fast, sort of by jumping prematurely to conclusions, before and without having done the required work through local errors in adaptation.

Seen from that angle, the function of adaptation I have come up with has a new shade. The sequential chain of errors appears as necessary for the intelligent structure to be any good. Good. Let’s jump to the third book I quoted with respect to the theory of complex systems: ‘Aware. The Science and Practice of Presence. The Ground-breaking Meditation Practice’, by Daniel J. Siegel, TarcherPerigee, 2018. I return to the idea of self-organisation in complex systems, and the choice between three different states: a) the optimal state of flexibility, adaptability, coherence, energy and stability b) non-optimal rigidity and c) non-optimal chaos.

That conceptual thread concurs interestingly with my draft paper ‘Behavioral absorption of Black Swans: simulation with an artificial neural network’. I found out that with the chain of functional input states {ft0(x1, x2, …, xn) => ft1(x1, x2, …, xn) => … => ftm(x1, x2, …, xn)} being organized in rigorously the same way, different types of desired outcomes lead to different patterns of learning, very similar to the triad which Daniel Siegel refers to. When my neural network does its best to optimize outcomes such as Gross Domestic Product, it quickly comes to rigidity. It makes some errors in the beginning of the learning process, but then it quickly drives the local error asymptotically to zero and is like ‘We nailed it. There is no need to experiment further’. There are other outcomes, such as the terms of trade (the residual fork between the average price of exports and that of imports), or the average number of hours worked per person per year, which yield a curve of local error in the form of a graceful sinusoid, cyclically oscillating between different magnitudes of error. This is the energetic, dynamic balance. Finally, some macroeconomic outcomes, such as the index of consumer prices, can make the same neural network go nuts, and generate an ever-growing curve of local error, as if the poor thing couldn’t learn anything sensible from looking at the prices of apparel and refrigerators. The (most) puzzling thing in all that is that differences in pursued outcomes are the source of discrepancy in the patterns of learning, not the way of learning as such. Some outcomes, when pursued, keep the neural network I made in a state of healthy adaptability, whilst other outcomes make it overfit or go haywire.

When I write about collective intelligence and complex system, it can come as a sensible idea to read (and quote) books which have those concepts explicitly named. Here comes ‘The Knowledge Illusion. Why we never think alone’ by Steven Sloman and Philip Fernbach, RIVERHEAD BOOKS (An imprint of Penguin Random House LLC, Ebook ISBN: 9780399184345, Kindle Edition). In the introduction, titled ‘Ignorance and the Community of Knowledge’, Sloman and Fernbach write: “The human mind is not like a desktop computer, designed to hold reams of information. The mind is a flexible problem solver that evolved to extract only the most useful information to guide decisions in new situations. As a consequence, individuals store very little detailed information about the world in their heads. In that sense, people are like bees and society a beehive: Our intelligence resides not in individual brains but in the collective mind. To function, individuals rely not only on knowledge stored within our skulls but also on knowledge stored elsewhere: in our bodies, in the environment, and especially in other people. When you put it all together, human thought is incredibly impressive. But it is a product of a community, not of any individual alone”. This is a strong statement, which I somehow distance myself from. I think that collective human intelligence can be really workable when individual humans are any good at being smart. Individuals need to have practical freedom of action, based on their capacity to figure s**t out in difficult situations, and the highly fluid ensemble of individual freedoms allows the society to make and experiment with many alternative versions of themselves.

Another book is more of a textbook. It is ‘What Is a Complex System?’ by James Ladyman and Karoline Wiesner, published with Yale University Press (ISBN 978-0-300-25110-4, Kindle Edition). In the introduction (p.15), Ladyman and Wiesner claim: “One of the most fundamental ideas in complexity science is that the interactions of large numbers of entities may give rise to qualitatively new kinds of behaviour different from that displayed by small numbers of them, as Philip Anderson says in his hugely influential paper, ‘more is different’ (1972). When whole systems spontaneously display behaviour that their parts do not, this is called emergence”. In my world, those ‘entities’ are essentially the chained functional input states {ft0(x1, x2, …, xn) => ft1(x1, x2, …, xn) => … => ftm(x1, x2, …, xn)}. My entities are phenomenological – they are cognitive structures which, for want of a better word, we call ‘empirical variables’. If the neural networks I make and use for my research are any good at representing complex systems, emergence is the property of data in the first place. Interactions between those entities are expressed through the function of adaptation, mostly through the chain {e(t0) => e(t1) => …=> e(tm)} of local errors, concurrent with the chain of functional input states.

I think I know what the central point and thread of my book on collective intelligence is, should I (finally) write that book for good. Artificial neural networks can be used as simulators of collective social behaviour and social change. Still, they do not need to be super-performant networks. My point is that with the right intellectual method, even the simplest neural networks, those possible to program into an Excel spreadsheet, can be reliable cognitive tools for social simulation.
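To show what I mean by a neural network simple enough to be programmed into a spreadsheet, here is a minimal, single-neuron sketch in Python. The dataset, the learning rate and the number of epochs are illustrative assumptions; the point is just the loop of prediction, local error, and weight adjustment.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up dataset: four input variables, one optimized outcome.
X = rng.normal(size=(50, 4))
y = X @ np.array([0.4, -0.2, 0.1, 0.3]) + 0.05 * rng.normal(size=50)

weights = rng.normal(scale=0.1, size=4)
learning_rate = 0.05

for epoch in range(200):                 # spreadsheet-grade learning: one neuron, plain delta rule
    for x_row, target in zip(X, y):
        prediction = float(x_row @ weights)
        error = target - prediction      # local error in adaptation
        weights += learning_rate * error * x_row

print("learned weights:", np.round(weights, 3))
```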


[1] Cline, E. H. (2021). 1177 B.C.: The Year Civilization Collapsed, Revised and Updated (Turning Points in Ancient History, 1). Princeton University Press. ISBN 9780691208015 (paperback), ISBN 9780691208022 (ebook). Kindle Edition.

[2] Siegel, D. J. (2018). Aware: The Science and Practice of Presence. TarcherPerigee / Penguin Publishing Group. ISBN 9780143111788, ISBN 9781101993040 (hardback). Kindle Edition.

[3] Keane, M. (1972). Strongly mixing measures. Inventiones mathematicae, 16(4), 309-324. DOI https://doi.org/10.1007/BF01425715

[4] Berghout, S., & Verbitskiy, E. (2021). On regularity of functions of Markov chains. Stochastic Processes and their Applications, Volume 134, April 2021, Pages 29-54, https://doi.org/10.1016/j.spa.2020.12.006

Unintentional, and yet powerful a reductor

As usual, I work on many things at the same time. I mean, not exactly at the same time, just in a tight alternate sequence. I am doing my own science, and I am doing collective science with other people. Right now, I feel like restating and reframing the main lines of my own science, with the intention both to reframe my own research and to be a better scientific partner to other researchers.

Such as I see it now, my own science is mostly methodological, and consists in studying human social structures as collectively intelligent ones. I assume that collectively we have a different type of intelligence from the individual one, and most of what we experience as social life is constant learning through experimentation with alternative versions of our collective way of being together. I use artificial neural networks as simulators of collective intelligence, and my essential process of simulation consists in creating multiple artificial realities and comparing them.

I deliberately use very simple, if not simplistic neural networks, namely those oriented on optimizing just one attribute of theirs, among the many available. I take a dataset, representative for the social structure I study, I take just one variable in the dataset as the optimized output, and I consider the remaining variables as instrumental input. Such a neural network simulates an artificial reality where the social structure studied pursues just one, narrow orientation. I create as many such narrow-minded, artificial societies as I have variables in my dataset. I assess the Euclidean distance between the original empirical dataset, and each of those artificial societies. 
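A schematic sketch of that procedure, heavily simplified: instead of a full neural network, the stand-in ‘single-variable-oriented’ transformation below just replaces the chosen output with its least-squares reconstruction from the other variables. Its only purpose is to show the loop over variables and the Euclidean comparison with the original dataset, on made-up numbers; it is not the actual model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up dataset standing in for the empirical one: rows are observations,
# columns are socio-economic variables.
data = rng.normal(size=(60, 5))
labels = [f"x{i+1}" for i in range(5)]

def artificial_society(dataset, output_index):
    """Crude stand-in for the neural simulation: make the chosen output
    perfectly 'explained' by the remaining, instrumental variables."""
    inputs = np.delete(dataset, output_index, axis=1)
    design = np.column_stack([inputs, np.ones(len(inputs))])
    coeffs, *_ = np.linalg.lstsq(design, dataset[:, output_index], rcond=None)
    clone = dataset.copy()
    clone[:, output_index] = design @ coeffs
    return clone

distances = {}
for i, label in enumerate(labels):
    clone = artificial_society(data, i)
    distances[label] = float(np.linalg.norm(data - clone))   # Euclidean distance to the original

for label, dist in sorted(distances.items(), key=lambda kv: kv[1]):
    print(f"artificial society oriented on {label}: distance {dist:.3f}")
```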

It is just now that I realize what kind of implicit assumptions I make when doing so. I assume the actual social reality, manifested in the empirical dataset I study, is a concurrence of different, single-variable-oriented collective pursuits, which remain in some sort of dynamic interaction with each other. The path of social change we take, at the end of the day, manifests the relative prevalence of some among those narrow-minded pursuits, with others being pushed to the second rank of importance.

As I am pondering those generalities, I reconsider the actual scientific writings that I should hatch. Publish or perish, as they say in my profession. With that general method of collective intelligence being assumed in human societies, I focus more specifically on two empirical topics: the market of energy and the transition away from fossil fuels make one stream of my research, whilst the civilisational role of cities, especially in the context of the COVID-19 pandemic, is another stream of me trying to sound smart in my writing.

For now, I focus on issues connected to energy, and I return to revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, as a resubmission to Applied Energy. According to the guidelines of Applied Energy, I am supposed to structure my paper into the following parts: Introduction, Material and Methods, Theory, Calculations, Results, Discussion, and, as sort of a summary pitch, I need to prepare a cover letter where I shortly introduce the reasons why the editor of Applied Energy should bother about my paper at all. On top of all these formally expressed requirements, there is something I noticed about the general style of articles published in Applied Energy: they all demonstrate and discuss strong, sharp-cutting hypotheses, with a pronounced theoretical edge in them. If I want my paper to be accepted by that journal, I need to give it that special style.

That special style requires two things which, honestly, I am not really accustomed to doing. First of all, it requires, precisely, phrasing out very sharp claims. What I like the most is to show people the material and methods which I work with and sort of provoke a discussion around them. When I have to formulate very sharp claims around that basic empirical stuff, I feel a bit awkward. Still, I understand that many people are willing to discuss only when they are truly pissed by the topic at hand, and sharply cut hypotheses serve to fuel that flame.

Second of all, making sharp claims of my own requires passing in thorough review the claims which other researchers phrase out. It requires doing my homework thoroughly in the review of literature. Once again, not really a fan of it, on my part, but well, life is brutal, as my parents used to teach me and as I have learnt in my own life. In other words, real life starts when I get out of my comfort zone.

The first body of literature I want to refer to in my revised article is the so-called MuSIASEM framework AKA ‘Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism’. In this framework, human societies are assumed to be giant organisms, and transformation of energy is a metabolic function of theirs (e.g. Andreoni 2020[1], Al-Tamimi & Al-Ghamdi 2020[2] or Velasco-Fernández et al. 2020[3]). The MuSIASEM framework is centred around an evolutionary assumption, which I used to find perfectly sound, and which I have come to consider as highly arguable, namely that the best possible state for both a living organism and a human society is that of the highest possible energy efficiency. As regards social structures, energy efficiency is the coefficient of real output per unit of energy consumption, or, in other words, the amount of real output we can produce with 1 kilogram of oil equivalent in energy. My theoretical departure from that assumption started with my own empirical research, published in my article ‘Energy efficiency as manifestation of collective intelligence in human societies’ (Energy, Volume 191, 15 January 2020, 116500, https://doi.org/10.1016/j.energy.2019.116500). As I applied my method of computation with a neural network as simulator of social change, I found out that human societies do not really seem to max out on energy efficiency. Maybe they should, but they don’t. It was the first realization, on my part, that we, humans, orient our collective intelligence on optimizing the social structure as such, and whatever comes out of that in terms of energy efficiency is an unintended by-product rather than a purpose. That general impression has been subsequently reinforced by other empirical findings of mine, precisely those which I introduce in the manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, which I am currently revising for resubmission to Applied Energy.
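
Just to fix the notation I keep referring to (the symbols are mine, the definition is the one given above): the energy-efficiency coefficient is simply the ratio of real output to energy consumed, for instance

```latex
\[
EE \;=\; \frac{Q}{E}
\;=\; \frac{\text{real output (e.g. GDP at constant prices)}}{\text{energy consumed (e.g. kilograms of oil equivalent)}}
\]
```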

In practical terms, it means that when a public policy states that ‘we should maximize our energy efficiency’, it is a declarative goal which human societies do not actually strive for. It is a little as if a public policy imposed the absolute necessity of being nice to each other and punished any deviation from that imperative. People are nice to each other to the extent of current needs in social coordination, period. The absolute imperative of being nice is frequently the correlate of intense rivalry, e.g. as was the case with traditional aristocracy. The French even have an expression, which I find profoundly true, namely ‘trop gentil pour être honnête’, which means ‘too nice to be honest’. My personal experience makes me kick into an alert state when somebody is that sort of intensely nice to me.

Passing from metaphors to the actual subject matter of energy management, it is a known fact that highly innovative technologies are usually truly inefficient. Optimization of efficiency, be it energy efficiency or any other aspect thereof, is actually a late stage in the lifecycle of a technology. Deep technological change is usually marked by a temporary slump in efficiency. Imposing energy efficiency as the chief goal of technology-related policies means systematically privileging and promoting technologies with the highest energy efficiency, thus, by metaphorical comparison to humans, technologies in their forties, past and over the excesses of youth.

The MuSIASEM framework has two other traits which I find arguable, namely the concept of evolutionary purpose, and the imperative of equality between countries in terms of energy efficiency. Researchers who lean towards the MuSIASEM methodology claim that it is an evolutionary purpose of every living organism to maximize energy efficiency, and that therefore human societies have the same evolutionary purpose. It further implies that species displaying marked evolutionary success, i.e. significant growth in headcount (sometimes in mandibulae-count, should the head be not really what we mean it to be), achieve that success by being particularly energy efficient. I even did some reading in life sciences, and that claim is not grounded in any science I could find. It seems that energy efficiency, and any denomination of efficiency, as a matter of fact, are very crude proportions we apply to a complex balance of flows which we still have a lot to learn about. Niebel et al. (2019[4]) phrase it out as follows: ‘The principles governing cellular metabolic operation are poorly understood. Because diverse organisms show similar metabolic flux patterns, we hypothesized that a fundamental thermodynamic constraint might shape cellular metabolism. Here, we develop a constraint-based model for Saccharomyces cerevisiae with a comprehensive description of biochemical thermodynamics including a Gibbs energy balance. Non-linear regression analyses of quantitative metabolome and physiology data reveal the existence of an upper rate limit for cellular Gibbs energy dissipation. By applying this limit in flux balance analyses with growth maximization as the objective function, our model correctly predicts the physiology and intracellular metabolic fluxes for different glucose uptake rates as well as the maximal growth rate. We find that cells arrange their intracellular metabolic fluxes in such a way that, with increasing glucose uptake rates, they can accomplish optimal growth rates but stay below the critical rate limit on Gibbs energy dissipation. Once all possibilities for intracellular flux redistribution are exhausted, cells reach their maximal growth rate. This principle also holds for Escherichia coli and different carbon sources. Our work proposes that metabolic reaction stoichiometry, a limit on the cellular Gibbs energy dissipation rate, and the objective of growth maximization shape metabolism across organisms and conditions’.

I feel like restating the very concept of evolutionary purpose as such. Evolution is a mechanism of change through selection. Selection in itself is largely a random process, based on the principle that whatever works for now can keep working until something else works even better. There is hardly any purpose in that. My take on the thing is that living species strive to maximize their intake of energy from the environment rather than their energy efficiency. I even hatched an article about it (Wasniewski 2017[5]).

Now, I pass to the second postulate of the MuSIASEM methodology, namely the alleged necessity of closing gaps between countries as regards their energy efficiency. Professor Andreoni expresses this view quite vigorously in a recent article (Andreoni 2020[6]). I think this postulate holds neither inside the MuSIASEM framework nor outside of it. As for the purely external perspective, I think I have just laid out the main reasons for discarding the assumption that our civilisation should prioritize energy efficiency above other orientations and values. From the internal perspective of MuSIASEM, i.e. if we assume that energy efficiency is a true priority, we need to give that energy efficiency a boost, right? Now, the last time I checked, the only way we, humans, can get better at whatever we want to get better at is to create positive outliers, i.e. situations when we like really nail it better than in other situations. With a bit of luck, those positive outliers become a workable pattern of doing things. In management science, it is known as the principle of best practices. The only way of having positive outliers is to have a hierarchy of outcomes according to the given criterion. When everybody is at the same level, nobody is an outlier, and there is no way we can give ourselves a boost forward.

Good. Those six paragraphs above pretty much summarize my theoretical stance as regards the MuSIASEM framework in research about energy economics. Please note that I respect that stream of research and the scientists involved in it. I think that representing energy management in human social structures as a metabolism is a great idea: it is one of those metaphors which can be fruitfully turned into a quantitative model. Still, I have my reservations.

I go further. A little more review of literature. Here comes a paper by Halbrügge et al. (2021[7]), titled ‘How did the German and other European electricity systems react to the COVID-19 pandemic?’. It makes an interesting point as regards energy economics: the pandemic has induced a new type of risk, namely short-term fluctuations in local demand for electricity. That, in turn, leads to deeper troughs and higher peaks in both the quantity and the price of energy in the market. More risk requires more liquidity: this is a known principle in business. As regards energy, liquidity can be achieved both through inventories, i.e. by developing storage capacity for energy, and through financial instruments. Halbrügge et al. come to the conclusion that such circumstances in the German market have led to the reinforcement of RES (Renewable Energy Sources). RES installations are typically more dispersed, more local in their reach, and more flexible than large power plants. It is much easier to modulate the output of a windfarm or a solar farm than that of a large fossil-fuel-based installation.

Keeping an eye on the impact of the pandemic upon the market of energy, I pass to the article titled ‘Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results’, by Salisu, Ebuh & Usman (2020[8]). First of all, a few words of general explanation as to what the hell the oil-stock nexus is. This is a phenomenon which I first saw research about around 2017, and which consists in a diversification of financial investment portfolios from pure financial stock into various mixes of stock and oil. Somewhere around 2015, people who used to hold their liquid investments just in financial stock (e.g. as I do currently) started to build investment positions in various types of contracts based on the floating inventory of oil: futures, options and whatnot. When I say ‘floating’, it is quite literal: that inventory of oil actually floats, stored on board of super-tanker ships, sailing gently through international waters, with proper gravitas (i.e. not too fast).

Long story short, crude oil has increasingly been becoming a financial asset, something like a buffer to hedge against risks encountered in other assets. Whilst the paper by Salisu, Ebuh & Usman is quite technical, without much theoretical generalisation, an interesting observation comes out of it, namely that the short-term shocks which the pandemic induced in financial markets had adversely impacted the price of oil more than the prices of stock. That, in turn, could indicate that crude oil was good as a hedging asset only for a certain range of risks, and that, in the presence of price shocks induced by the pandemic, the role of oil could diminish.

Those two papers point at a factor which we had almost forgotten as regards the market of energy, namely the role of short-term shocks. Until recently, i.e. until COVID-19 hit us hard, the textbook business model in the sector of energy had been that of very predictable demand, nearly constant in the long perspective and varying in a sinusoidal manner in the short term. The very disputable concept of LCOE AKA Levelized Cost of Energy, where investment outlays are treated as if they were a current cost, is based on those assumptions. The pandemic has shown a different aspect of energy systems, namely the need for buffering capacity. That, in turn, leads to the issue of adaptability, which, gently but surely, leads further into the realm of adaptive changes, and that, ladies and gentlemen, is my beloved landscape of evolutionary, collectively intelligent change.
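
For the record, the textbook formula of LCOE shows exactly where the trouble sits: investment outlays are discounted and summed together with current costs, as if they were one homogeneous stream. The notation below is mine, not taken from any specific source:

```latex
\[
LCOE \;=\; \frac{\displaystyle\sum_{t=1}^{T} \frac{I_t + O_t + F_t}{(1+r)^t}}{\displaystyle\sum_{t=1}^{T} \frac{E_t}{(1+r)^t}}
\]
```

where $I_t$ stands for investment outlays, $O_t$ for operation and maintenance, $F_t$ for fuel costs, $E_t$ for the energy produced in year $t$, $r$ for the discount rate, and $T$ for the lifetime of the installation.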

Cool. I move forward, and, by the same occasion, I move back. Back to the concept of energy efficiency. Halvorsen & Larsen study the so-called rebound effect as regards energy efficiency (Halvorsen & Larsen 2021[9]). Their paper is interesting for three reasons, the general topic of energy efficiency being the first one. The second one is their methodological focus on phenomena which we cannot observe directly, and which we therefore observe through mediating variables, which is theoretically close to my own method of research. Finally, the phenomenon of rebound effect, namely the fact that, in the presence of temporarily increased energy efficiency, the consumers of energy tend to use more of those locally more energy-efficient goods, is essentially a short-term disturbance being transformed into long-term habits. This is adaptive change.

The model construed by Halvorsen & Larsen is a theoretical delight, just something my internal happy bulldog can bite into. They introduce the general assumption that consumption of energy in households is a build-up of different technologies, which can substitute each other under some conditions, and complement each other under others. Households maximize something called ‘energy services’, i.e. everything they can purposefully derive from energy carriers. Halvorsen & Larsen build and test a model where they derive demand for energy services from a whole range of quite practical variables, and it all sums up to the following: energy efficiency is indirectly derived from the way that social structures work, and it is highly doubtful whether we can purposefully optimize energy efficiency as such.

Now, here comes the question: what are the practical implications of all those different theoretical stances, I mean mine and those of other scientists? What does it change, and does it change anything at all, if policy makers follow the theoretical line of the MuSIASEM framework, or, alternatively, my approach? I am guessing there are differences at the level of both the goals and the real outcomes of energy-oriented policies, and I am trying to wrap my mind around that guess. Such as I see it, the MuSIASEM approach advocates for putting energy efficiency of the whole global economy at the top of any political agenda, as a strategic goal. On the path towards achieving that strategic goal, there seems to be an intermediate one, namely to narrow down significantly two types of discrepancies:

>> firstly, it is about discrepancies between countries in terms of energy efficiency, with a special focus on helping the poorest developing countries in ramping up their efficiency in using energy

>> secondly, there should be a priority to privilege technologies with the highest possible energy efficiency, whilst kicking out those which perform the least efficiently in that respect.    

If I saw a real policy based on those assumptions, I would have a few critical points to make. Firstly, I firmly believe that large human societies just don’t have the institutions to enforce energy efficiency as their chief collective purpose. On the other hand, we have institutions oriented on other goals, which are able to ramp up energy efficiency as instrumental change. One institution, highly informal and yet highly efficient, is there, right in front of our eyes: markets and value chains. Each product and each service contain an input of energy, which manifests as a cost. In the presence of reasonably competitive markets, that cost is under pressure from market prices. Yes, we, humans, are greedy, and we like accumulating profits, and therefore we squeeze our costs. Whenever energy comes into play as a significant cost, we figure out ways of diminishing its consumption per unit of real output. Competitive markets, both domestic and international, thus including free trade, act as an unintentional, and yet powerful, reducer of energy consumption, and, from a different angle, they push us to find cheap sources of energy.


[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[2] Al-Tamimi and Al-Ghamdi (2020). ‘Multiscale integrated analysis of societal and ecosystem metabolism of Qatar’. Energy Reports, 6, 521-527. https://doi.org/10.1016/j.egyr.2019.09.019

[3] Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020), A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562.  https://doi.org/10.1016/j.esr.2020.100562

[4] Niebel, B., Leupold, S. & Heinemann, M. An upper limit on Gibbs energy dissipation governs cellular metabolism. Nat Metab 1, 125–132 (2019). https://doi.org/10.1038/s42255-018-0006-7

[5] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation (August 30, 2017). http://dx.doi.org/10.1453/jest.v4i3.1410

[6] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[7] Halbrügge, S., Schott, P., Weibelzahl, M., Buhl, H. U., Fridgen, G., & Schöpf, M. (2021). How did the German and other European electricity systems react to the COVID-19 pandemic?. Applied Energy, 285, 116370. https://doi.org/10.1016/j.apenergy.2020.116370

[8] Salisu, A. A., Ebuh, G. U., & Usman, N. (2020). Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results. International Review of Economics & Finance, 69, 280-294. https://doi.org/10.1016/j.iref.2020.06.023

[9] Halvorsen, B., & Larsen, B. M. (2021). Identifying drivers for the direct rebound when energy efficiency is unknown. The importance of substitution and scale effects. Energy, 222, 119879. https://doi.org/10.1016/j.energy.2021.119879

DIY algorithms of our own

I return to that interesting interface of science and business, which I touched upon in my before-last update, titled ‘Investment, national security, and psychiatry’, and which means that I return to discussing two research projects I am getting involved in, one in the domain of national security, and another one in psychiatry, both connected by the idea of using artificial neural networks as analytical tools. What I intend to do now is to pass in review some literature, just to get the hang of the current state of science in those fields.

On top of that, I have been asked by my colleagues to step in and take the leadership of a big, multi-thread research project in management science. The multitude of threads has emerged as a circumstantial by-product, partly of the disruption caused by the pandemic, and partly of the excessive partitioning of research funding. As regards the funding of research, Polish universities have, sort of, two financial streams. One consists of big projects, usually team-based, financed by specialized agencies, such as the National Science Centre (https://www.ncn.gov.pl/?language=en) or the National Centre for Research and Development (https://www.gov.pl/web/ncbr-en). Another one is based on relatively small grants, applied for by and granted to individual scientists by their respective universities, which, in turn, receive bulk subventions from the Ministry of Education and Science. Personally, I think that last category, such as it is being allocated and used now, is a bit of a relic. It is some sort of pocket money for the most urgent and current expenses, relatively small in scale and importance, such as the costs of publishing books and articles, the costs of attending conferences etc. This is a financial paradox: we save and allocate money long in advance, in order to have money for essentially incidental expenses – which come at the very end of the scientific pipeline – and we have to make long-term plans for it. It is a case of fundamental mismatch between the intrinsic properties of a cash flow, on the one hand, and the instruments used for managing that cash flow, on the other hand.

Good. This is an introduction to detailed thinking. Once I have those semantic niceties checked out, I cut into the flesh of thinking, and the first piece I intend to cut out is the state of science as regards Territorial Defence Forces and their role amidst the COVID-19 pandemic. I found an interesting article by Tiutiunyk et al. (2018[1]). It is interesting because it gives a detailed methodology for assessing operational readiness in any military unit, territorial defence or other. That corresponds nicely to Hypothesis #2 which I outlined for that project in national security, namely: ‘the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources’. That article by Tiutiunyk et al. (2018) allows entering into details as regards that claim.

Those details start unfolding from the assumption that operational readiness is there when the entity studied possesses the required quantity of efficient technical and human resources. The underlying mathematical concept is quite simple. In the given situation, adequate response requires using m units of resources at k% of capacity during time te. The social entity studied can muster n units of the same resources at l% of capacity during the same time te. The most basic expression of operational readiness is, therefore, a coefficient OR = (n*l)/(m*k). I am trying to find out what specific resources are the key to that readiness. Tiutiunyk et al. (2018) offer a few interesting insights in that respect. They start by noticing the otherwise known fact that resources used in crisis situations are not exactly the same ones we use in the everyday course of life and business, and therefore we tend to hold them for a time longer than their effective lifecycle. We don’t amortize them properly because we don’t really control for their physical wear and obsolescence. One of the core concepts in territorial defence is to counter that negative phenomenon, and to maintain, through comprehensive training and internal control, a required level of capacity.
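
Just to have that coefficient handy for later computations, a trivial sketch of it in Python, with made-up numbers:

```python
def operational_readiness(n_units, capacity_actual, m_units, capacity_required):
    """OR = (n * l) / (m * k): actually mobilisable resources over required ones,
    both expressed as units weighted by the percentage of capacity they deliver
    over the same response time te."""
    return (n_units * capacity_actual) / (m_units * capacity_required)

# Hypothetical example: the situation requires 40 vehicles at 90% capacity,
# whilst the unit studied can muster 30 vehicles at 75% capacity in the same time.
print(operational_readiness(30, 0.75, 40, 0.90))   # ~0.625, i.e. under-readiness
```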

As I continue going through literature, I come across an interesting study by I. Bet-El (2020), titled ‘COVID-19 and the future of security and defence’, published by the European Leadership Network (https://www.europeanleadershipnetwork.org/wp-content/uploads/2020/05/Covid-security-defence-1.pdf). Bet-El introduces an important distinction between threats and risks, and, contiguously, the distinction between security and defence: ‘A threat is a patent, clear danger, while risk is the probability of a latent danger becoming patent; evaluating that probability requires judgement. Within this framework, defence is to be seen as the defeat or deterrence of a patent threat, primarily by military, while security involves taking measures to prevent latent threats from becoming patent and if the measures fail, to do so in such a way that there is time and space to mount an effective defence’. This is deep. I do a lot of research in risk management, especially as I invest in the stock market. When we face a risk factor, our basic behavioural response is hedging or insurance. We hedge by diversifying our exposures to risk, and we insure by sharing the risk with other people. Healthcare systems are a good example of insurance. We have a flow of capital that fuels a manned infrastructure (hospitals, ambulances etc.), and that infrastructure allows each single sick human to share his or her risks with other people. Social distancing is the epidemic equivalent of hedging. When we completely cut, or significantly throttle, social interactions between households, each household becomes sort of separated from the epidemic risk in other households. When one node in a network is shielded from some of the risk occurring in other nodes, this is hedging.

The military is made for responding to threats rather than risks. Military action is a contingency plan, implemented when insurance and hedging have gone to hell. The pandemic has shown that we need more of such buffers, i.e. more social entities able to mobilise quickly into deterring directly an actual threat. Territorial Defence Forces seem to fit the bill. Another piece of literature, from my own, Polish turf, by Gąsiorek & Marek (2020[2]), states straightforwardly that Territorial Defence Forces have proven to be a key actor during the COVID-19 pandemic precisely because they maintain a high degree of actual readiness in their crisis-oriented resources, as compared to other entities in the Polish public sector.

Good. I have a thread, from literature, for the project devoted to national security. The issue of operational readiness seems to be somewhere in the centre, and it translates into the apparently fluid frontier between security and national defence. Speed of mobilisation of the available resources, as well as the actual reliability of those resources, once mobilized, look like the key to understanding the surprisingly significant role of Territorial Defence Forces during the COVID-19 pandemic. It looks like my initial Hypothesis #2, claiming that the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources, is some sort of theoretical core to that whole body of research.

In our team, we plan, and have a provisional green light, to run interviews with the soldiers of Territorial Defence Forces. That basic notion of actually mobilizable resources can help narrow down the methodology to apply in those interviews, by asking specific questions pertinent to that issue. Which specific resources proved to be the most valuable in the actual intervention of TDF in the pandemic? Which resources – if any – proved to be 100% mobilizable on the spot? Which of those resources proved to be much harder to mobilise than had been initially assumed? Can we rate and rank all the human and technical resources of TDF as to their capacity to be mobilised?

Good. I gently close the door of that room in my head, filled with Territorial Defence Forces and the pandemic. I make sure I can open it whenever I want, and I open the door to that other room, where psychiatry dwells. Me and those psychiatrists I am working with can study a sample of medical records as regards patients with psychosis. Verbal elocutions of those patients are an important part of that material, and I make two hypotheses along that tangent:

>> Hypothesis #1: the probability of occurrence in specific grammatical structures A, B, C, in the general grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form.

>> Hypothesis #2: the action of written self-reporting, e.g. via email, from the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as transition from mental state A to mental state B.

I start listening to what smarter people than me have to say on the matter. I start with Worthington et al. (2019[3]), and I learn there is a clinical category: clinical high risk for psychosis (CHR-P), thus a set of subtler (than psychotic) ‘changes in belief, perception, and thought that appear to represent attenuated forms of delusions, hallucinations, and formal thought disorder’. I like going backwards upstream, and I immediately ask myself whether that line of logic can be reverted. If there is clinical high risk for psychosis, the occurrence of those same symptoms in reverse order, from severe to light, could be a path of healing, couldn’t it?

Anyway, according to Worthington et al. (2019), some 25% of people with diagnosed CHR-P transition into fully scaled psychosis. Once again, from the perspective of risk management, 25% of actual occurrence in a risk category is a lot. It means that CHR-P is pretty solid as far as risk assessment goes. I further learn that CHR-P, when represented as a collection of variables (a vector, for friends with a mathematical edge), entails an internal distinction into predictors and converters. Predictors are the earliest possible observables, something like a subtle smell of possible s**t, swirling here and there in the ambient air. Converters are pieces of information that bring progressive confirmation to predictors.

That paper by Worthington et al. (2019) is a review of literature in itself, and allows me to compare different approaches to CHR-P. The most solid ones, in terms of accurately predicting the onset of full-clip psychosis, always incorporate two components: assessment of the patient’s social role, and analysis of verbalized thought. Good. Looks promising. I think the initial hypotheses should be expanded into claims about socialization.

I continue with another paper, by Corcoran and Cecchi (2020[4]). Generally, patients with psychotic disorders display lower semantic coherence than ordinary people do. The flow of meaning in their speech is impeded: they can express less meaning in the same volume of words, as compared to a mentally healthy person. Reduced capacity to deliver meaning manifests as apparent tangentiality in verbal expression. Psychotic patients seem to wander in their elocutions. Reduced complexity of speech, i.e. a relatively low capacity to swing between different levels of abstraction, with a tendency to exaggerate concreteness, is another observable which informs about psychosis. Two big families of diagnostic methods follow that twofold path. Latent Semantic Analysis (LSA) seems to be the name of the game as regards the study of semantic coherence. Its fundamental assumption is that words convey meaning by connecting to other words, which further unfolds into assuming that semantic similarity, or dissimilarity, can be measured with a more or less complex coefficient of joint occurrence, as opposed to disjoint occurrence, inside big corpuses of language.
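
Before getting to the dedicated tools, the logic itself can be illustrated with a toy sketch in Python, using nothing but numpy: LSA boils down to a truncated SVD of a word-by-context co-occurrence matrix, and the semantic coherence of an elocution can then be proxied, crudely, as the average cosine similarity between consecutive words. The corpus below is invented, and this is a didactic sketch, not a clinical instrument.

```python
import numpy as np

# Invented toy corpus: each 'document' is one sentence; in the real project
# these would be patients' elocutions.
corpus = [
    "the doctor asked about my sleep",
    "my sleep got better after the therapy",
    "the radio hides messages about the doctor",
    "messages travel through my sleep into the radio",
]
docs = [d.split() for d in corpus]
vocab = sorted({w for d in docs for w in d})
index = {w: i for i, w in enumerate(vocab)}

# Word-by-document co-occurrence counts: the raw material of LSA.
M = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d:
        M[index[w], j] += 1.0

# Truncated SVD: keep k latent semantic dimensions as word vectors.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
word_vectors = U[:, :k] * S[:k]

def coherence(sentence):
    """Mean cosine similarity between consecutive words: a crude proxy
    for the 'flow of meaning' discussed above."""
    words = [w for w in sentence.split() if w in index]
    sims = []
    for a, b in zip(words[:-1], words[1:]):
        va, vb = word_vectors[index[a]], word_vectors[index[b]]
        sims.append(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))
    return float(np.mean(sims)) if sims else 0.0

for s in corpus:
    print(f"{coherence(s):+.3f}  {s}")
```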

Corcoran and Cecchi (2020) name two main types of digital tools for Latent Semantic Analysis. One is Word2Vec (https://en.wikipedia.org/wiki/Word2vec), and I found a more technical and programmatic approach to it at: https://towardsdatascience.com/a-word2vec-implementation-using-numpy-and-python-d256cf0e5f28 . Another one is GloVe, to which I found three interesting references, at https://nlp.stanford.edu/projects/glove/ , https://github.com/maciejkula/glove-python , and at https://pypi.org/project/glove-py/ .

As regards semantic complexity, two types of analytical tools seem to run the show. One is the part-of-speech (POS) algorithm, where we tag words according to their grammatical function in the sentence: noun, verb, determiner etc. There are already existing digital platforms for implementing that approach, such as the Natural Language Toolkit (http://www.nltk.org/ ). Another angle is that of speech graphs, where words are nodes in the network of discourse, and their connections (e.g. joint occurrence) to other words are edges in that network. Now, the intriguing thing about that last thread is that it seems to have been burgeoning in the late 1990s, and then it sort of faded away. Anyway, I found two references for an algorithmic approach to speech graphs, at https://github.com/guillermodoghel/speechgraph , and at https://www.researchgate.net/publication/224741196_A_general_algorithm_for_word_graph_matrix_decomposition .
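
Both angles are easy to prototype. Below is a sketch, assuming NLTK for the part-of-speech profile and networkx for the speech graph, where consecutive words become connected nodes; the metrics printed at the end (node count, edge count, graph density) are just examples of attributes one could later feed into further analysis. The sample sentence is invented.

```python
import nltk
import networkx as nx

# One-time downloads; exact resource names vary a bit across NLTK versions
# (newer versions use 'punkt_tab' and 'averaged_perceptron_tagger_eng').
for resource in ("punkt", "punkt_tab",
                 "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

elocution = ("I could not sleep because the radio kept sending messages "
             "and the messages were about me and about the radio")

# Part-of-speech profile: the share of each grammatical category in the elocution.
tokens = nltk.word_tokenize(elocution.lower())
tags = [tag for _, tag in nltk.pos_tag(tokens)]
pos_profile = {t: tags.count(t) / len(tags) for t in sorted(set(tags))}
print(pos_profile)

# Speech graph: words are nodes, adjacency in speech makes an edge.
G = nx.Graph()
G.add_edges_from(zip(tokens[:-1], tokens[1:]))
print("nodes:", G.number_of_nodes(),
      "edges:", G.number_of_edges(),
      "density:", round(nx.density(G), 3))
```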

That quick review of literature, as regards natural language as a predictor of psychosis, leads me to an interesting sidestep. Language is culture, right? Low coherence and low complexity in natural language are informative about psychosis, right? Now, I put that argument upside down. What if we, homo (mostly) sapiens, have a natural proclivity to psychosis, with that overblown cortex of ours? What if we had figured out, at some point of our evolutionary path, that language is a collectively intelligent tool which, with its unique coherence and complexity required for efficient communication, keeps us in a state of acceptable sanity, until we go on Twitter, of course.

Returning to the intellectual discipline which I should demonstrate, as a respectable researcher, the above review of literature brings one piece of good news, as regards the project in psychiatry. Initially, in this specific team, we assumed that we necessarily need an external partner, most likely a digital business, with important digital resources in AI, in order to run research on natural language. Now, I realized that we can assume two scenarios: one with big, fat AI from that external partner, and another one, with DIY algorithms of our own. Gives some freedom of movement. Cool.


[1] Tiutiunyk, V. V., Ivanets, H. V., Tolkunov, І. A., & Stetsyuk, E. I. (2018). System approach for readiness assessment units of civil defense to actions at emergency situations. Науковий вісник Національного гірничого університету, (1), 99-105. DOI: 10.29202/nvngu/2018-1/7

[2] Gąsiorek, K., & Marek, A. (2020). Działania wojsk obrony terytorialnej podczas pandemii COVID–19 jako przykład wojskowego wsparcia władz cywilnych i społeczeństwa. Wiedza Obronna. DOI: https://doi.org/10.34752/vs7h-g945

[3] Worthington, M. A., Cao, H., & Cannon, T. D. (2019). Discovery and validation of prediction algorithms for psychosis in youths at clinical high risk. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2019.10.006

[4] Corcoran, C. M., & Cecchi, G. (2020). Using language processing and speech analysis for the identification of psychosis and other disorders. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2020.06.004

Investment, national security, and psychiatry

I need to clear my mind a bit. For the last few weeks, I have been working a lot on revising an article of mine, and I feel I need a little bit of a shake-off. I know by experience that I need a structure to break free from another structure. Yes, I am one of those guys. I like structures. When I feel I lack one, I make one.

The structure which I want to dive into, in order to shake off the thinking about my article, is the thinking about my investment in the stock market. My general strategy in that department is to take the rent, which I collect from an apartment in town, every month, and to invest it in the stock market. Economically, it is a complex process of converting the residential utility of a real asset (apartment) into a flow of cash, thus into a financial asset with quite steady a market value (inflation is still quite low), and then I convert that low-risk financial asset into a differentiated portfolio of other financial assets endowed with higher a risk (stock). I progressively move capital from markets with low risk (residential real estate, money) into a high-risk-high-reward market.

I am playing a game. I make a move (monthly cash investment), and I wait for a change in the stock market. I am wrapping my mind around the observable change, and I make my next move the next month. With each move I make, I gather information. What is that information? Let’s have a look at my portfolio such as it is now. You can see it in the table below:

Stock | Value in EUR | Real return in € | Rate of return I have as of April 6th, 2021, in the morning
CASH & CASH FUND & FTX CASH (EUR) | € 25,82 | € – | € 25,82
ALLEGRO.EU SA | € 48,86 | € (2,82) | -5,78%
ALTIMMUNE INC. – COMM | € 1 147,22 | € 179,65 | 15,66%
APPLE INC. – COMMON ST | € 1 065,87 | € 8,21 | 0,77%
BIONTECH SE | € 1 712,88 | € (149,36) | -8,72%
CUREVAC N.V. | € 711,00 | € (98,05) | -13,79%
DEEPMATTER GROUP PLC | € 8,57 | € (1,99) | -23,26%
FEDEX CORPORATION COMM | € 238,38 | € 33,49 | 14,05%
FIRST SOLAR INC. – CO | € 140,74 | € (11,41) | -8,11%
GRITSTONE ONCOLOGY INC | € 513,55 | € (158,43) | -30,85%
INPOST | € 90,74 | € (17,56) | -19,35%
MODERNA INC. – COMMON | € 879,85 | € (45,75) | -5,20%
NOVAVAX INC. – COMMON STOCK | € 1 200,75 | € 398,53 | 33,19%
NVIDIA CORPORATION – C | € 947,35 | € 42,25 | 4,46%
ONCOLYTICS BIOTCH CM | € 243,50 | € (14,63) | -6,01%
SOLAREDGE TECHNOLOGIES | € 683,13 | € (83,96) | -12,29%
SOLIGENIX INC. COMMON | € 518,37 | € (169,40) | -32,68%
TESLA MOTORS INC. – C | € 4 680,34 | € 902,37 | 19,28%
VITALHUB CORP. | € 136,80 | € (3,50) | -2,56%
WHIRLPOOL CORPORATION | € 197,69 | € 33,11 | 16,75%
TOTAL | € 15 191,41 | € 840,74 | 5,53%

A few words of explanation are due. Whilst I have been actively investing for 13 months, I made this portfolio in November 2020, when I did some major reshuffling. My overall return on the cash invested, over the entire period of 13 months, is 30,64% as of now (April 6th, 2021), which makes 30,64% * (12/13) = 28,3% on an annual basis.
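
Strictly speaking, that is a simple pro-rata scaling; a geometrically compounded annualisation of the same overall return gives a similar figure:

```latex
\[
30{,}64\% \times \tfrac{12}{13} \approx 28{,}3\%
\qquad \text{vs.} \qquad
(1 + 0{,}3064)^{12/13} - 1 \approx 28{,}0\%
\]
```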

The 5,53% of return which I have on this specific portfolio makes roughly 1/6th of the total return I have on all the portfolios I have had over the past 13 months. It is the outcome of my latest experimental round, and this round is very illustrative of the mistake which I know I can make as an investor: panic.

In August and September 2020, I collected some information, I did some thinking, and I made a portfolio of biotech companies involved in the COVID-vaccine story: Pfizer, Biontech, Curevac, Moderna, Novavax, Soligenix. By mid-October 2020, I was literally swimming in ecstasy, as I had returns on these ones of around +50%. Pure madness. Then, big financial sharks, commonly called ‘investment funds’, went hunting for those stocks, and they did what sharks do: they made their target bleed before eating it. They boxed and shorted those stocks in order to make their prices affordably low for long investment positions. At the time, I lost control of my emotions, and when I saw those prices plummet, I sold out everything I had. Almost as soon as I did it, I realized what an idiot I had been. Two weeks later, the same stocks started to rise again. Sharks had had their meal. In response, I did something I still wonder about, whether it was wise or stupid: I bought back into those positions, only at a price higher than what I had sold them for.

Selling out was stupid, for sure. Was buying back in a wise move? I don’t know, like really. My intuition tells me that biotech companies in general have a bright future ahead, and not only in connection with vaccines. I am deeply convinced that the pandemic has already built up, and will keep building up, an interest in biotechnology and medical technologies, especially in highly innovative forms. This is even more probable as we have realized that modern biotechnology is very largely digital technology. This is what is called ‘platforms’ in the biotech lingo. These are digital clouds which combine empirical experimental data with artificial intelligence, and the latter is supposed to experiment virtually with that data. Modern biotechnology consists in creating as many alternative combinations of molecules and lifeforms as we possibly can make and study, and then picking those which offer the best combination of biological outcomes with the probability of achieving said outcomes.

My currently achieved rates of return, in the portfolio I have now, are very illustrative of an old principle in capital investment: I will fail most of the time. Most of my investment decisions will be failures, at least in the short and medium term, because I cannot possibly outsmart the incredibly intelligent collective structure of the stock market. My overall gain, those 5,53% in the case of this specific portfolio, is the outcome of 19 experiments, where I fail in 12 of them, for now, and I am more or less successful in the remaining 7.

The very concept of ‘beating the market’, which some wannabe investment gurus present, is ridiculous. The stock market is made of dozens of thousands of human brains, operating in correlated coupling, and leveraged with increasingly powerful artificial neural networks. When I expect to beat that networked collective intelligence with that individual mind of mine, I am pumping smoke up my ass. On the other hand, what I can do is to do as many different experiments as I can possibly spread my capital between.

It is important to understand that any investment strategy, where I assume that from now on, I will not make any mistakes, is delusional. I made mistakes in the past, and I am likely to make mistakes in the future. What I can do is to make myself more predictable to myself. I can narrow down the type of mistakes I tend to make, and to create the corresponding compensatory moves in my own strategy.

Differentiation of risk is a big principle in my investment philosophy, and yet it is not the only one. Generally, with the exception of maybe 2 or 3 days in a year, I don’t really like quick, daily trade in the stock market. I am more of a financial farmer: I sow, and I wait to see plants growing out of those seeds. I invest in industries rather than individual companies. I look for some kind of strong economic undertow for my investments, and the kind of undertow I specifically look for is high potential for deep technological change. Accessorily, I look for industries which sort of logically follow human needs, e.g. the industry of express deliveries in the times of pandemic. I focus on three main fields of technology: biotech, digital, and energy.

Good. I needed to shake off, and I am doing just that. Thinking and writing about real business decisions helped me to take some perspective. Now, I am gently returning to the realm of science, without completely leaving the realm of business: I am navigating the somewhat troubled and feebly charted waters of money for science. I am currently involved in launching and fundraising for two scientific projects, in two very different fields of science: national security and psychiatry. Yes, I know, they can intersect in more points than we commonly think they can. Still, in canonical scientific terms, these two diverge.

How come I am involved, as researcher, in both national security and psychiatry? Here is the thing: my method of using a simple artificial neural network to simulate social interactions seems to be catching on. Honestly, I think it is catching on because other researchers, when they hear me talking about ‘you know, simulating alternative realities and assessing which one is the closest to the actual reality’ sense in me that peculiar mental state, close to the edge of insanity, but not quite over that edge, just enough to give some nerve and some fun to science.

In the field of national security, I have teamed up with a scientist strongly involved in it, and we are taking on studying the way our Polish forces of Territorial Defence have been acting in, and coping with, the pandemic of COVID-19. First, the context. So far, the pandemic has worked as a magnifying glass for all the f**kery in public governance. We could all see a minister saying ‘A, B and C will happen because we said so’, and right after there was just A happening, with a lot of delay, and then a completely unexpected phenomenon D appeared, with B and C bitching and moaning that they didn’t have the right conditions for happening decently, and therefore would not happen at all. This is the first piece of the context. The second is the official mission and the reputation of our Territorial Defence Forces AKA TDF. This is a branch of our Polish military, created in 2017 by our right-wing government. From the beginning, these guys had the reputation of being a right-wing militia dressed in uniforms and paid with taxpayers’ money. I honestly admit I used to share that view. TDF is something like the National Guard in the US. These are units made of soldiers who serve in the military and have basic military training, but who have normal civilian lives besides. They have civilian jobs, whilst training regularly and being at the ready should the nation call.

The initial idea of TDF emerged after the Russian invasion of the Crimea, when we became acutely aware that military troops in nondescript uniforms, apparently lost, and yet strangely connected to the Russian government, could massively start looking lost by our Eastern border. The initial idea behind TDF was to significantly increase the capacity of the Polish population for mobilising military resources. Switzerland and Finland largely served as models.

When the pandemic hit, our government could barely pretend they control the situation. Hospitals designated as COVID-specific had frequently no resources to carry out that mission. Our government had the idea of mobilising TDF to help with basic stuff: logistics, triage and support in hospitals etc. Once again, the initial reaction of the general public was to put the label of ‘militarisation’ on that decision, and, once again, I was initially thinking this way. Still, some friends of mine, strongly involved as social workers supporting healthcare professionals, started telling me that working with TDF, in local communities, was nothing short of amazing. TDF had the speed, the diligence, and the capacity to keep their s**t together which many public officials lacked. They were just doing their job and helping tremendously.

I started scratching the surface. I did some research, and I found out that TDF was of invaluable help for many local communities, especially outside of big cities. Recently, I accidentally had a conversation about it with M., the scientist whom I am working with on that project. He just confirmed my initial observations.

M. has strong connections with TDF, including their top command. Our common idea is to collect abundant, interview-based data from TDF soldiers mobilised during the pandemic, as regards the way they carried out their respective missions. The purely empirical edge we want to have here is oriented on defining successes and failures, as well as their context and contributing factors. The first layer of our study is supposed to provide the command of TDF with some sort of case-studies-based manual for future interventions. At the theoretical, more scientific level, we intend to check the following hypotheses:      

>> Hypothesis #1: during the pandemic, TDF has changed its role, under the pressure of external events, from the initially assumed, properly spoken territorial defence, to civil defence and assistance to the civilian sector.

>> Hypothesis #2: the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources.

>> Hypothesis #3: collectively intelligent human social structures form mechanisms of reaction to external stressors, and the chief orientation of those mechanisms is to assure proper behavioural coupling between the action of external stressors and the coordinated social reaction. Note: I define behavioural coupling in terms of game theory, i.e. as the objectively existing need for proper pacing in action and reaction.

The basic method of verifying those hypotheses consists, in the first place, in translating the primary empirical material into a matrix of probabilities. There is a finite catalogue of operational procedures that TDF can perform. Some of those procedures are associated with territorial military defence as such, whilst other procedures belong to the realm of civil defence. It is supposed to go like: ‘At the moment T, in the location A, procedure of type Si had a P(T,A, Si) probability of happening’. In that general spirit, Hypothesis #1 can be translated straight into a matrix of probabilities, and phrased out as ‘during the pandemic, the probability of TDF units acting as civil defence was higher than seeing them operate as strict territorial defence’.
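
A sketch of how such a matrix of probabilities could be coded, with purely hypothetical interview data (the periods, regions and procedure labels below are invented, not actual project data):

```python
import pandas as pd

# Hypothetical coded events from interviews: when, where, which type of procedure.
events = pd.DataFrame({
    "period":    ["2020-Q2", "2020-Q2", "2020-Q2", "2020-Q3", "2020-Q3", "2020-Q3"],
    "region":    ["Mazowsze", "Mazowsze", "Podlasie", "Mazowsze", "Podlasie", "Podlasie"],
    "procedure": ["civil", "civil", "territorial", "civil", "civil", "territorial"],
})

# P(T, A, Si): within each (period, region) cell, the share of events of each type.
counts = events.groupby(["period", "region", "procedure"]).size()
probabilities = counts / counts.groupby(level=["period", "region"]).transform("sum")
print(probabilities)

# Hypothesis #1 then translates into comparing two such probabilities:
p = probabilities.groupby(level="procedure").mean()
print("P(civil defence) > P(territorial defence):",
      p.get("civil", 0) > p.get("territorial", 0))
```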

That general probability can be split into local ones, e.g. region-specific ones. On the other hand, I intuitively associate Hypotheses #2 and #3 with the method which I call ‘study of orientation’. I take the matrix of probabilities defined for the purposes of Hypothesis #1, and I put it back to back with a matrix of quantitative data relative to the speed and diligence in action, as regards TDF on the one hand, and other public services on the other hand. It is about the availability of vehicles, the capacity of mobilisation in people etc. In general, it is about the so-called ‘operational readiness’, which you can read more about in, for example, the publications of the RAND Corporation (https://www.rand.org/topics/operational-readiness.html).

Thus, I take the matrix of variables relative to operational readiness observable in the TDF, and I use that matrix as input for a simple neural network, where the aggregate neural activation based on those metrics, e.g. through a hyperbolic tangent, is supposed to approximate a specific probability relative to TDF people endorsing, in their operational procedures, the role of civil defence, against that of military territorial defence. I hypothesise that operational readiness in TDF manifests a collective intelligence at work, doing its best to endorse specific roles and apply specific operational procedures. I make as many such neural networks as there are operational procedures observed for the purposes of Hypothesis #1. Each of these networks is supposed to represent the collective intelligence of TDF attempting to optimize, through its operational readiness, the endorsement and fulfilment of a specific role. In other words, each network represents an orientation.

Each such network transforms the input data it works with. This is what neural networks do: they experiment with many alternative versions of themselves. Each experimental round, in this case, consists in a vector of metrics informative about the operational readiness of TDF, and that vector locally tries to generate an aggregate outcome – its neural activation – as close as possible to the probability of effectively playing a specific role. This is always a failure: the neural activation of operational readiness always falls short of nailing down exactly the probability it attempts to optimize. There is always a local residual error to account for, and the way a neural network (well, my neural network) accounts for errors consists in measuring them and feeding them into the next experimental round. The point is that each such distinct neural network, oriented on optimizing the probability of Territorial Defence Forces endorsing and fulfilling a specific social role, is a transformation of the original, empirical dataset informative about the TDF’s operational readiness.

Thus, in this method, I create as many transformations (AKA alternative versions) of the actual operational readiness in TDF, as there are social roles to endorse and fulfil by TDF. In the next step, I estimate two mathematical attributes of each such transformation: its Euclidean distance from the original empirical dataset, and the distribution of its residual error. The former is informative about similarity between the actual reality of TDF’s operational readiness, on the one hand, and alternative realities, where TDF orient themselves on endorsing and fulfilling just one specific role. The latter shows the process of learning which happens in each such alternative reality.
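
Here is a minimal numpy sketch of one such orientation, under strong simplifying assumptions of mine: one tanh neuron, made-up readiness metrics and probabilities, and a literal reading of ‘feeding the error into the next round’ as adding the local error back to the next round’s input. It ends with the two attributes just mentioned: the Euclidean distance from the original dataset, and the distribution of residual errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up empirical material: rows = observation rounds, columns = readiness metrics
# (e.g. share of vehicles available, share of soldiers mobilisable on the spot, ...).
readiness = rng.uniform(0.3, 0.9, size=(60, 4))
# Made-up probability of TDF acting in one specific role (e.g. civil defence).
target_probability = rng.uniform(0.4, 0.8, size=60)

weights = rng.normal(scale=0.1, size=readiness.shape[1])
transformed = readiness.copy()
errors = []

# One 'orientation': round after round, the aggregate tanh activation of the
# readiness metrics tries to hit the probability of endorsing that one role;
# the residual error of each round is fed back into the next round's input.
for t in range(len(transformed)):
    activation = np.tanh(transformed[t] @ weights)
    error = target_probability[t] - activation
    errors.append(error)
    # the local error nudges both the weights and the next round's metrics
    weights += 0.05 * error * (1.0 - activation**2) * transformed[t]
    if t + 1 < len(transformed):
        transformed[t + 1] = transformed[t + 1] + 0.05 * error

errors = np.array(errors)
euclidean_distance = np.linalg.norm(transformed - readiness)
print("Euclidean distance from the empirical dataset:", round(euclidean_distance, 4))
print("residual error: mean", round(errors.mean(), 4), "std", round(errors.std(), 4))
```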

I make a few methodological hypotheses at this point. Firstly, I expect a few, like 1 to 3, transformations (alternative realities) to fall particularly close to the actual empirical reality, as compared to the others. Particularly close means their Euclidean distances from the original dataset will be at least one order of magnitude smaller than those observable in the remaining transformations. Secondly, I expect those transformations to display a specific pattern of learning, where the residual error swings in a predictable cycle, over a relatively wide amplitude, yet inside that amplitude. This is a cycle where the collective intelligence of Territorial Defence Forces goes like: ‘We optimize, we optimize, it goes well, we narrow down the error, f**k!, we failed, our error increased, and yet we keep trying, we optimize, we optimize, we narrow down the error once again…’ etc. Thirdly, I expect the remaining transformations, namely those much less similar to the actual reality in Euclidean terms, to display different patterns of learning, either completely dishevelled, with the residual error bouncing haphazardly all over the place, or exaggeratedly tight, with the error being narrowed down very quickly and staying small ever since.

That’s the outline of the research which I am engaging in, in the field of national security. My role in this project is that of a methodologist. I am supposed to design the system of interviews with TDF people, the way of formalizing the resulting data, binding it with other sources of information, and finally carrying out the quantitative analysis. I think I can use the experience I already have with using artificial neural networks as simulators of social reality, mostly in defining said reality as a vector of probabilities attached to specific events and behavioural patterns.

As regards psychiatry, I have just started to work with a group of psychiatrists who have abundant professional experience in two specific applications of natural language to diagnosing and treating psychoses. The first one consists in interpreting patients’ elocutions as informative about their likelihood of being psychotic, relapsing into psychosis after therapy, or getting durably better after such therapy. In psychiatry, the durability of therapeutic outcomes is a big thing, as I have already learnt when preparing for this project. The second application is the analysis of patients’ emails. The psychiatrists I am starting to work with use a therapeutic method which engages the patient to maintain contact with the therapist by writing emails. Patients describe, quite freely and casually, their mental state together with their general existential context (job, family, relationships, hobbies etc.). They don’t necessarily discuss those emails in subsequent therapeutic sessions; sometimes they do, sometimes they don’t. The most important therapeutic outcome seems to be derived from the very fact of writing and emailing.

In terms of empirical research, the semantic material we are supposed to work with in that project consists of two big sets of written elocutions: patients’ emails, on the one hand, and transcripts of standardized 5-minute therapeutic interviews, on the other hand. Each elocution is a complex grammatical structure in itself. The semantic material is supposed to be cross-checked with neurological biomarkers in the same patients. The way I intend to use neural networks in this case is slightly different from that national security thing. I am thinking about defining categories, i.e. about networks which guess similarities and classifications out of crude empirical data. For now, I make two working hypotheses:

>> Hypothesis #1: the probability of occurrence in specific grammatical structures A, B, C, in the general grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form.

>> Hypothesis #2: the action of written self-reporting, e.g. via email, from the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as transition from mental state A to mental state B.

The inflatable dartboard made of fine paper

My views on environmentally friendly production and consumption of energy, and especially on public policies in that field, differ radically from what seems to be currently the mainstream of scientific research and writing. I even got kicked out of a scientific conference because of my views. After my paper was accepted, I received a questionnaire to fill in, which was supposed to feed the discussion at the plenary session of that conference. I answered those questions in good faith and sincerely, and: boom! I receive an email which says that my views ‘are not in line with the ideas we want to develop in the scientific community’. You could rightly argue that my views might be so incongruous that kicking me out of that conference was an act of mercy rather than enmity. Good. Let’s pass my views in review.

There is that thing of energy efficiency and climate neutrality. Energy efficiency, i.e. the capacity to derive a maximum of real output out of each unit of energy consumed, can be approached from two different angles: as a stationary value, on the one hand, or as an elasticity, on the other hand. We could say: let’s consume as little energy as we possibly can and be as productive as possible with that frugal base. That’s the stationary view. Yet, we can also say: let’s rock it, like really. Let’s boost our energy consumption so as to get in control of our climate. Let’s pass from roughly 30% of energy generated on the surface of the Earth, which we consume now, to like 60% or 70%. Sheer laws of thermodynamics suggest that if we manage to do that, we can really run the show. This is the summary of what, in my views, is not in line with ‘the ideas we want to develop in the scientific community’.
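Just to be unambiguous about those two angles, here is how I would write them down, with Q standing for aggregate real output and E for the energy consumed (my notation, not a standard taken from the literature):

```latex
% Stationary view: how much real output we squeeze out of each unit of energy
\eta = \frac{Q}{E}

% Elasticity view: by how many percent output grows when energy consumption grows by one percent
\varepsilon = \frac{\partial \ln Q}{\partial \ln E}
```

The stationary view wants to maximize η on a deliberately frugal E, whilst the elasticity view asks how much extra Q each extra percent of E buys us, and is perfectly compatible with boosting E.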

Of course, I can put forth any kind of idiocy and claim it is a valid viewpoint. Politics is full of such episodes. I was born and raised in a communist country. I know something about stupid, suicidal ideas being used as axiology for running a nation. I also think that completely discarding other people’s ‘ideas we want to develop in the scientific community’ and considering those people as pathetically lost would be preposterous on my part. We are all essentially wrong about that complex stuff we call ‘reality’. It is just that some ways of being wrong are more functional than others. I think a truly correct way to review the current literature on energy-related policies is to take its authors’ empirical findings and discuss them under a different interpretation, namely the one sketched in the preceding paragraph.

I like looking at things with precisely that underlying assumption that I don’t know s**t about anything, and that I just make up cognitive stuff which somehow pays off. I like swinging around that Ockham’s razor and cutting out all the strong assumptions, staying just with the weak ones, which do not require much assuming and sit at the borderline between stylized observations and theoretical claims.

My basic academic background is in law (my Master’s degree), and in economics (my PhD). I look at social reality around me through the double lens of those two disciplines, which, when put in stereoscopic view, boil down to having an eye on patterns in human behaviour.

I think I observe that we, humans, are social and want to stay social, and being social means a baseline mutual predictability in our actions. We are very much about maintaining a certain level of coherence in culture, which means a certain level of behavioural coupling. We would rather die than accept the complete dissolution of that coherence. We, humans, we make behavioural coherence: this is our survival strategy, and it allows us to be highly social. Our cultures always develop along the path of differentiation in social roles. We like specializing inside the social group we belong to.

Our proclivity to endorse specific skillsets, which turn into social roles, has the peculiar property of creating local surpluses, and we tend to trade those surpluses. This is how markets form. In economics, there is that old distinction between production and consumption. I believe that one of the first social thinkers who really meant business about it was Jean Baptiste Say, in his “Treatise of Political Economy”. Here >> https://discoversocialsciences.com/wp-content/uploads/2020/03/Say_treatise_political-economy.pdf you have it in the English translation, whilst there >> https://discoversocialsciences.com/wp-content/uploads/2018/04/traite-deconomie-politique-jean-baptiste-say.pdf it is in its elegant French original.

In my perspective, the distinction between production and consumption is instrumental, i.e. it is useful for solving some economic problems, but just some. Saying that I am a consumer is a gross simplification. I am a consumer in some of my actions, but in others I am a producer. As I write this blog, I produce written content. I prefer assuming that production and consumption are two manifestations of the same activity, namely of markets working around the tradable surpluses created as individual homo sapiens endorse specific social roles.

When some scientists bring forth empirically backed claims that our patterns of consumption have the capacity to impact climate (e.g. Bjelle et al. 2021[1]), I say ‘Yes, indeed, and at the end of that specific intellectual avenue we find out that creating some specific, tradable surpluses, ergo the fact of endorsing some specific social roles, has the capacity to impact climate’. Bjelle et al. find out something which, from my point of view, is gobsmacking: whilst the relative prevalence of particular goods in the overall pattern of demand has little effect on the emission of Greenhouse Gases (GHG) at the planetary scale, there are regional discrepancies. In developing countries and in emerging markets, changes in the baskets of goods consumed seem to have a strong impact GHG-wise. In developed economies, on the other hand, however consumers shift their preferences between different goods, the shift seems to be very largely climate-neutral. From there, Bjelle et al. draw conclusions about such issues as environmental taxation. My own take on those results is different. What impacts climate is the social change occurring in developing economies and emerging markets: relatively quick demographic growth combined with quick creation of new social roles, and a big socio-economic difference between urban environments and rural ones.

In the broad theoretical perspective, states of society which we label as classes of socio-economic development are far more than just income brackets. They are truly different patterns of social interactions. I had a glimpse of that when I was comparing data on the consumption of energy per capita (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE ) with the distribution of gross national product per capita (https://data.worldbank.org/indicator/NY.GDP.PCAP.CD ). It looks as if different levels of economic development were different levels of energy in the social system. Each 100 ÷ 300 kilograms of oil equivalent per capita per year seems to be associated with specific institutions in society.
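For the sake of illustration, this is more or less how that comparison can be done, assuming the two World Bank series linked above have been downloaded as CSV files. The file names, the column names, and the 200-kilogram width of the brackets are my placeholders, one arbitrary choice inside that 100 ÷ 300 range.

```python
import pandas as pd

# Assumed local CSV exports of the two World Bank series linked above
energy = pd.read_csv("EG.USE.PCAP.KG.OE.csv")   # assumed columns: country, year, kgoe_per_capita
gdp = pd.read_csv("NY.GDP.PCAP.CD.csv")         # assumed columns: country, year, gdp_per_capita

df = energy.merge(gdp, on=["country", "year"])

# Bin country-years into brackets of 200 kg of oil equivalent per capita per year,
# then look at the GDP per capita typical of each bracket
df["energy_bracket"] = (df["kgoe_per_capita"] // 200) * 200
print(df.groupby("energy_bracket")["gdp_per_capita"].median())
```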

Let’s imagine that climate change goes on. New s**t comes our way, which we need to deal with. We need to learn. We form new skillsets, and we define new social roles. New social roles mean new tradable surpluses, and new markets with new goods in them. We don’t really know what kind of skillsets, markets and goods those will be. An enhanced effort of collective adaptation leads to outcomes impossible to predict in themselves. The question is: can we predict the way those otherwise unpredictable outcomes will take shape?

My fellow scientists seem not to like unpredictable outcomes. Shigetomi et al. (2020[2]) straightforwardly find out empirically that ‘only the very low, low, and very high-income households are likely to achieve a reduction in carbon footprint due to their high level of environmental consciousness. These income brackets include the majority of elderly households who are likely to have higher consciousness about environmental protection and addressing climate change’. In my fairy-tale, it means that only a fringe of society cares about environment and climate, and this is the fringe which does not really move a lot in terms of new social roles. People with low income have low income because their social roles do not allow them to trade significant surpluses, and elderly people with high income do not really shape the labour market.

This is what I infer from those empirical results. Yet, Shigetomi et al. conclude that ‘The Input-Output Analysis Sustainability Evaluation Framework (IOSEF), as proposed in this study, demonstrates how disparity in household consumption causes societal distortion via the supply chain, in terms of consumption distribution, environmental burdens and household preferences. The IOSEF has the potential to be a useful tool to aid in measuring social inequity and burden distribution allocation across time and demographics’.

Guys, like really. Just sit and think for a moment. I even pass over the claim that inequality of income is a social distortion, although I am tempted to say that no known human society has ever been free of that alleged distortion, and therefore we’d better come to terms with it and stop calling it a distortion. What I want is logic. Guys, you have just proven empirically that only low-income people, and elderly high-income people, care about climate and environment. The middle-incomes and the relatively young high-incomes, thus the people who truly run the show of social and technological change, do not care as much as you would like them to. You claim that inequality of income is a distortion, and you want to eliminate it. When you kick inequality out of the social equation, you get rid of the low-income folks, and of the high-income ones. Stands to reason: with enforced equality, everybody is more or less middle-income. Therefore, the majority of society is in a social position where they don’t give a f**k about climate and environment. Besides, when you remove inequality, you remove vertical social mobility along hierarchies, and therefore you give the cold shoulder to a fundamental driver of social change. Still, you want social change, you have just said it.

Guys, the conclusions you derive from your own findings are the political equivalent of an inflatable dartboard made of fine paper. Cheap to make, might look dashing, and doomed to be extremely short-lived as soon as used in practice.   


[1] Bjelle, E. L., Wiebe, K. S., Többen, J., Tisserant, A., Ivanova, D., Vita, G., & Wood, R. (2021). Future changes in consumption: The income effect on greenhouse gas emissions. Energy Economics, 95, 105114. https://doi.org/10.1016/j.eneco.2021.105114

[2] Shigetomi, Y., Chapman, A., Nansai, K., Matsumoto, K. I., & Tohno, S. (2020). Quantifying lifestyle based social equity implications for national sustainable development policy. Environmental Research Letters, 15(8), 084044. https://doi.org/10.1088/1748-9326/ab9142