The possible Black Swans

I am re-digesting, like a cow, some of the intellectual food I figured out recently. I return to the specific strand of my research to be found in the unpublished manuscript ‘The Puzzle of Urban Density And Energy Consumption’, and I want to rummage a bit inside one specific issue, namely the meaning which I can attach to the neural activation function in the quantitative method I use.

Just to give a quick sketch of the landscape, I work through a general hypothesis that our human civilization is based on two factories: the factory of food in the countryside, and the factory of new social roles in cities. The latter produces new social roles by creating demographic anomalies, i.e. by packing humans tightly together, in abnormally high density. Being dense together makes us interact more with each other, which, whilst not always pleasant, stimulates our social brains and makes us figure out new interesting s**t, i.e. new social roles.

I made a metric of population density, a coefficient derived from data available at the World Bank. I took the coefficient of urbanization (World Bank 1[1]) and multiplied it by the headcount of population (World Bank 4[2]). This is how I got the number of people living in cities. I divided it by the surface of urban land (World Bank 2[3]), and I got the density of population in cities, which I further label ‘DU’. Further, I gather that the social difference between cities and the countryside, hence the relative impact of cities as breeding grounds for new social roles, is determined by the difference in the depth of demographic anomalies created by the urban density of population. Therefore, I took the just-calculated coefficient DU and divided it by the general density of population, or ‘DG’ (World Bank 5[4]). This is how I ended up with the coefficient ‘DU/DG’, which, mathematically, denominates the density of urban population in units of general population density.
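The chain of divisions above can be sketched in a few lines of code. This is a minimal illustration with made-up figures, not the World Bank data itself; the function name and the example numbers are mine.

```python
# Hypothetical sketch of the DU and DU/DG computation described above.
# All figures below are illustrative, not actual World Bank data.
def du_dg(urbanization_share, population, urban_land_km2, total_land_km2):
    """Return (DU, DG, DU/DG) from the four World Bank-style inputs."""
    urban_population = urbanization_share * population  # people living in cities
    du = urban_population / urban_land_km2              # density of urban population
    dg = population / total_land_km2                    # general density of population
    return du, dg, du / dg

# Example: 60% urbanization, 50 million people, 10 000 km2 of urban land,
# 500 000 km2 of land in total -> cities 30 times denser than the country at large.
du, dg, ratio = du_dg(0.60, 50_000_000, 10_000, 500_000)
```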

I simulate an artificial reality, where we, humans, optimize the coefficient ‘DU/DG’ as our chief collective orientation. We just want to get it right: enough human density in cities to be creative, and yet enough space for each human to be able to practice mindfulness when taking a #2 in the toilet. We optimize our being dense together in cities on the basis of 7 input characteristics of ours, namely:

Population – this is a typical scale variable. The intuition behind it is that size matters, and that’s why in most socio-economic research, when we really mean business in quantitative terms, we add such variables, pertinent to the size of the social entity studied. Urbanization occurring in a small country, like Belgium (with all my due respect for Belgians), is likely to occur differently from urbanization in India or in the U.S. In this specific case, I assume that a big population, like hundreds of millions of people, has to move more resources around to accommodate people in cities, as compared to a population counted in dozens of millions.  
Urban population absolute – same tune, a scale variable, more specifically pertinent to the headcount of urban populations.   
Gross Domestic Product (GDP, constant 2010 US$) – scale variable, once again, but this time it is about the real output of the economy. In my approach, the GDP is not exactly a measure of the wealth produced, but more of an appraisal of the total productive activity of the humans living around. This is why I use constant prices. That shaves off the price-and-relative-wealth component, and leaves GDP as a metric pertinent to how much tradable surplus humans create in a given place and time.  
Broad money (% of GDP) – this is essentially the inverse of the velocity of money, and it corresponds to another strand in my research. I discovered, and I keep studying, the fact that in the presence of quick technological change, human societies stuff themselves up with abnormally high amounts of cash (or cash equivalents, for that matter). It holds for entire countries as well as for individual businesses. You can find more on that in my article ‘Technological change as a monetary phenomenon’. I guess that when humans make more new social roles in cities, technologies change faster.  
Energy use (kg of oil equivalent per capita) – this is one of the fundamental variables I frequently work with. I guess I included it in this particular piece of research just in case, in order to be able to connect with my research on the market of energy.  
Agricultural land (km2) – the surface of agricultural land available is a logical correlate of urban population. A given number of people in cities need a given amount of food, which, in turn, can be provided by a given surface of agricultural land.            
Cereal yield (kg per hectare) – logically complementary to the surface of agricultural land. Yield per hectare in France is different from what an average hectare can contribute in Nigeria, and that is likely to be correlated with urbanization.  

You can get the raw data I used UNDER THIS LINK. It covers Australia, Brazil, Canada, China, Colombia, France, Gabon, Germany, Ghana, India, Malaysia, Mexico, Mozambique, Namibia, New Zealand, Nigeria, Norway, Poland, Russian Federation, United Kingdom, and the United States. All that lot observed over the window in time stretching from 1961 all the way to 2015.

I make that data into a neural network, which means that I compute h(tj) = x1(tj)*R*E[x1(tj-1)] + x2(tj)*R*E[x2(tj-1)] + … + xn(tj)*R*E[xn(tj-1)], as explained in my update titled ‘Representative for collective intelligence’, with x1, x2, …, x7 being the input variables described above, grouped in 21 social entities (countries), and spread over 2015 – 1961 = 54 years. After the curation of data for empty cells, I have m = 896 experimental rounds in the (alleged) collective intelligence whose presence I guess behind the numbers. I made that lot learn how to squeeze the partly randomized input, controlled for internal coherence, into the mould of the desired output, the coefficient xo = DU/DG. I ran the procedure of learning with 4 different methods of estimating the error of optimization. Firstly, I computed that error the way we do it in basic statistics, namely e1 = xo – h(tj): the mixed-up input is simply subtracted from the expected output. In the background, I assume that the local output xo is an expected value in statistical terms, i.e. the mean of some hypothetical Gaussian distribution, local and specific to that concrete observation. With that approach to error, there is no neural activation as such. It is an autistic neural network, which does not discriminate input according to its strength. It just reacts.
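To fix ideas, one experimental round can be sketched as below. This is a minimal sketch under my own assumptions: I take R to be a fresh random weight in (0, 1) drawn for each term, and E[xi(tj-1)] to be the mean of each input variable as observed up to the previous round; the actual procedure may differ in both respects.

```python
import random

# One experimental round: h(tj) = sum over i of xi(tj) * R * E[xi(tj-1)].
# Assumption: R is a fresh random weight in (0, 1) per term; means_previous
# holds the E[xi(tj-1)] values for the input variables.
def h_of_t(x_now, means_previous, rng=random.random):
    return sum(x * rng() * m for x, m in zip(x_now, means_previous))

def error_basic(xo, h):
    # e1 = xo - h(tj): the "autistic" case, with no neural activation at all
    return xo - h
```

With `rng` pinned to a constant, the round becomes deterministic, which is handy for checking the arithmetic.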

As I want my collective intelligence to be smarter than your average leech, I make three more estimations of errors, with the input h(tj) passing through a neural activation function. I start with the ReLU rectifier, AKA max[0, h(tj)], and, correspondingly, with e2 = xo – ReLU[h(tj)]. Then I warm up, and I use neural activation via the hyperbolic tangent tanh(h) = (e^(2h) – 1) / (e^(2h) + 1), and I compute e3 = xo – tanh[h(tj)]. The hyperbolic tangent is a transcendental function, with no algebraic relation to its input, and that means that neural activation with the hyperbolic tangent creates a projection of input into a separate, non-correlated space of states, like a cultural transformation of cognitive input into symbols, ideologies and whatnot. Fourthly and finally, I use the sigmoid function (AKA logistic function) sig(h) = 1 / (1 + e^(-h)), which can be read as a smoothed likelihood that something happens, i.e. that input h(tj) has full power. The corresponding error is e4 = xo – sig[h(tj)].
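The four ways of computing the error translate directly into code; the definitions of ReLU, the hyperbolic tangent and the sigmoid below are the standard ones.

```python
import math

# The three activation functions named above, plus the four error estimates.
def relu(h):
    return max(0.0, h)

def tanh(h):
    # (e^(2h) - 1) / (e^(2h) + 1), numerically identical to math.tanh
    return (math.exp(2 * h) - 1) / (math.exp(2 * h) + 1)

def sigmoid(h):
    return 1 / (1 + math.exp(-h))

def errors(xo, h):
    return {
        "e1": xo - h,            # no activation
        "e2": xo - relu(h),      # ReLU rectifier
        "e3": xo - tanh(h),      # hyperbolic tangent
        "e4": xo - sigmoid(h),   # sigmoid / logistic
    }
```

For a large positive or negative h(tj) one would call math.tanh directly, to avoid overflow in exp(2h); the explicit formula is kept here only to mirror the text.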

From there, I go my normal way. I create 4 artificial realities out of my source dataset. Each of these realities assumes that humans strive to nail down the right social difference between cities and the countryside, as measured with the DU/DG coefficient. Each of these realities is generated with a different way of appraising how far we are from the desired DU/DG, i.e. with one of the four ways of computing the error: e1, e2, e3, and e4. The expected states of both the source empirical dataset and the sets representative of those 4 alternative realities are given by their respective vectors of mean values, i.e. mean DU/DG, mean population etc. Those vectors of means are provided in Table 1 below. The source dataset shows a mean DU/DG = 41,14, which means that cities in this dataset display, on average across countries, a density of population 41 times greater than the general density of population. The mean empirical population is 149,6 million people, with mean urban population being 67,34 million people. Yes, we have China and India in the lot, and they really pump those scale numbers up.

Table 1 – Vectors of mean values in the source empirical set and in the perceptrons simulating alternative realities, optimizing the coefficient DU/DG

Perceptrons pegged on DU/DG (values in parentheses are negative):

| Variable | Source dataset | error = xo – h | error = xo – ReLU(h) | error = xo – tanh(h) | error = xo – sigmoid(h) |
|---|---|---|---|---|---|
| DU/DG | 41,14 | 36,38 | 4,91 | 61,56 | 324,29 |
| Population | 149 625 587,07 | 125 596 355,00 | (33 435 417,00) | 252 800 741,00 | 1 580 356 431,00 |
| GDP (constant 2010 US$) | 1 320 025 624 972,08 | 1 025 700 000 000,00 | (922 220 000 000,00) | 2 583 780 000 000,00 | 18 844 500 000 000,00 |
| Broad money (% of GDP) | 57,50 | 54,13 | 31,80 | 71,99 | 258,38 |
| Urban population absolute | 67 349 480,42 | 54 311 459,20 | (31 977 590,00) | 123 331 287,00 | 843 649 729,00 |
| Energy use (kg of oil equivalent per capita) | 2 918,69 | 2 769,76 | 1 784,11 | 3 558,16 | 11 786,15 |
| Agricultural land (km2) | 1 227 301,86 | 1 135 064,25 | 524 611,51 | 1 623 345,71 | 6 719 245,69 |
| Cereal yield (kg per hectare) | 3 153,31 | 3 010,54 | 2 065,68 | 3 766,31 | 11 653,77 |

One of the first things which jumps to the eye in Table 1 – at least to my eye – is that one of the alternative realities, namely the one based on the ReLU activation function, is an impossible reality. There are negative populations in this one, and that is not a livable state of things. I don’t know about you, my readers, but I would feel horrible knowing that I am a minus. People can’t be negative by default. By the way, as a function, the ReLU is almost identical to the basic difference e1 = xo – h(tj): the two coincide whenever h(tj) is positive. Yet making an alternative reality with no neural transformation of the quasi-randomized input, thus making it with e1 = xo – h(tj), creates something pretty close to the original empirics, whilst the ReLU-based reality goes impossibly negative.

Another alternative reality which looks sort of sketchy is the one based on neural activation via the sigmoid function. This one transforms the initial mean expected values into multiples of themselves, several times over. Looks like the sigmoid is equivalent, in this case, to powering the collective intelligence of the societies studied with substantial doses of interesting chemicals. That particular reality is sort of a wild dream, like what it would be like to produce almost 4 times more cereal yield per hectare, having more than 4 times more agricultural land, and over 10 times more people in cities. The surface of available land being finite as it is, 4 times more agricultural land and 10 times more people in cities would mean cities tiny in terms of land surface, probably growing in height, both under and above ground, with those cities being 324 times denser with humans than the general landscape. Sounds familiar, a bit like sci-fi movies.

Four different ways of pitching input variables against the expected output of optimal DU/DG coefficient produce four very different alternative realities. Out of these four, one is impossible, one is hilarious, and we stay with two acceptable ones, namely that based on no proper neural activation at all, and the other one using the hyperbolic tangent for assessing the salience of things. Interestingly, errors estimated as e1 = xo – h(tj) are essentially correlated with the input variables, whilst those assessed as e3 = xo – tanh[h(tj)] are essentially uncorrelated. It means that in the former case one can more or less predict how satisfied the neural network will be with the local input, and that prediction can be reliably made a priori. In the latter case, with the hyperbolic tangent, there is no way to know in advance. In this case, neural activation is a true transformation of reality.
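The correlation claim is easy to verify on one's own data: compute the Pearson correlation between each input variable and the corresponding error series. A plain implementation, with illustrative numbers in place of my dataset:

```python
# Pearson correlation coefficient between two series of equal length.
# A value near +/-1 means the error is essentially predictable from the
# input a priori; a value near 0 means it is not.
def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)
```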

Table 2 below provides the formal calculation of the standardized Euclidean distance between all 4 alternative realities and the real world of tears we live in. By standardized Euclidean distance I mean: E = [(mean_X – mean_S)^2]^0,5 / mean_X, i.e. the absolute difference between the two means, divided by the mean value which serves me as benchmark, the empirical one. That division facilitates the subsequent averaging of those variable-specific Euclidean distances into one metric of mathematical similarity between entire vectors of values.
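The formula, spelled out in code, together with the averaging across variables:

```python
# Standardized Euclidean distance for one variable:
# E = sqrt((mean_X - mean_S)^2) / mean_X, i.e. the absolute difference of
# means divided by the empirical (benchmark) mean.
def standardized_distance(mean_empirical, mean_simulated):
    return abs(mean_empirical - mean_simulated) / mean_empirical

# Average across variables -> one metric of similarity between whole vectors.
def average_distance(empirical_means, simulated_means):
    pairs = list(zip(empirical_means, simulated_means))
    return sum(standardized_distance(x, s) for x, s in pairs) / len(pairs)
```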

Table 2 – Vectors of standardized Euclidean distances between the source set X and the perceptrons simulating alternative realities, optimizing the coefficient DU/DG

| Variable | error = xo – h | error = xo – ReLU(h) | error = xo – tanh(h) | error = xo – sigmoid(h) |
|---|---|---|---|---|
| DU/DG | 0,115597874 | 0,88065496 | 0,496346621 | 6,882843342 |
| Population | 0,160595741 | 1,223460557 | 0,68955555 | 9,562073386 |
| GDP (constant 2010 US$) | 0,22296963 | 1,698637953 | 0,957371093 | 13,27585923 |
| Broad money (% of GDP) | 0,058672324 | 0,446981172 | 0,251923403 | 3,493424228 |
| Urban population absolute | 0,193587555 | 1,474800842 | 0,831213637 | 11,52644748 |
| Energy use (kg of oil equivalent per capita) | 0,051026181 | 0,388730845 | 0,219092892 | 3,038163202 |
| Agricultural land (km2) | 0,075154787 | 0,572548914 | 0,322694736 | 4,474810971 |
| Cereal yield (kg per hectare) | 0,045275 | 0,344916834 | 0,194398841 | 2,695730596 |
| Average | 0,115359886 | 0,87884151 | 0,495324596 | 6,868669054 |

Interestingly, whilst the alternative reality based on neural activation through the ReLU function creates impossibly negative populations, its overall Euclidean distance from the source dataset is not as big as one could expect. The impossibility is specific to just some variables.

Now, what does it all have to do with anything? How is that estimation of error representative of collective intelligence in human societies? Good question. I am doing my best to give some kind of answer to it. Quantitative socio-economic variables represent valuable collective outcomes, and are thus informative about alternative orientations in collective action. The process of learning how to nail those valuable outcomes down embodies said orientation in action. Assuming that figuring out the right proportion of demographic anomaly in cities, as measured with DU/DG, is a valuable collective outcome, four collective orientations thereupon have been simulated. One goes a bit haywire (negative populations), and yet it shows a possible state of a society which attempts to sort of smooth out the social difference between cities and the countryside, with DU/DG being ten times lower than in reality. Another one goes fantastical, with huge numbers and a slightly sci-fi-ish shade. The remaining two look like realistic alternatives: one essentially predictable, with e1 = xo – h(tj), and another one essentially unpredictable, with e3 = xo – tanh[h(tj)].

I want my method to serve as a predictive tool for sketching possible scenarios of technological change, in particular as regards the emergence and absorption of radically new technologies. On the other hand, I want my method to be of help when it comes to identifying possible Black Swans, i.e. the rather unlikely and yet profoundly disturbing states of nature. As I look at those 4 alternative realities my perceptron has just made up (it’s not me, it’s him! Well, it…), I can see two Black Swans. The one made with the sigmoid activation function shows a possible direction which, for example, African countries could follow, should they experience rapid demographic growth. This particular Black Swan is a hypothetical situation, where population grows like hell. That automatically puts enormous pressure on agriculture: more people need more food. More agriculture requires more space, and less is left for cities. Still, more people around need more social roles, and we need to ramp up the production thereof in very densely packed urban populations, where the sheer density of human interaction makes our social brains just race for novelty. This particular Black Swan could actually be a historical reconstruction. It could be representative of the type of social change which we know as civilisational revival: the passage from nomadic life to sedentary life, a dozen thousand years ago, or the reconstruction of social tissue after the fall of the Western Roman Empire in Europe, that sort of stuff.

The other Black Swan is made with the ReLU activation function and simulates a society where cities lose their function as factories of new social roles. It is a society in downsizing. It is actually a historical reconstruction, too: this is what must have happened when the Western Roman Empire was collapsing, and before the European civilization bounced back.

Well, well, well, that s**t makes sense… Amazing.


[1] World Bank 1: https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS

[2] World Bank 4: https://data.worldbank.org/indicator/SP.POP.TOTL

[3] World Bank 2: https://data.worldbank.org/indicator/AG.LND.TOTL.UR.K2

[4] World Bank 5: https://data.worldbank.org/indicator/EN.POP.DNST

The core of the reflection

I focus on one particular aspect of the final revision of my article for the « International Journal of Energy Sector Management » – under the title « Climbing the right hill – an evolutionary approach to the European market of electricity » – namely on the relationship between my methodology and that of MuSIASEM, i.e. « Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism ».

More specifically, I refer to three articles which I consider representative of this line of research:

>> Al-Tamimi and Al-Ghamdi (2020). Multiscale integrated analysis of societal and ecosystem metabolism of Qatar. Energy Reports, 6, 521-527. https://doi.org/10.1016/j.egyr.2019.09.019

>> Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

>> Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020). A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562. https://doi.org/10.1016/j.esr.2020.100562

Of these three, I subjectively pick professor Andreoni's work (2020[1]) as the most solid in terms of theory. The basic idea of MuSIASEM is to study the energy efficiency of human societies as a metabolism, i.e. as a complex system which sustains and develops itself through the transformation of energy and material resources.

I try to understand and present the basic logic of MuSIASEM by exploring the advantages which professor Andreoni attributes to the method. Let me quote a passage of the article (2020[2]): « […] the MuSIASEM approach has advantages over the other methodologies used to study the metabolism of societies, such as ‘emergy’, the ecological footprint and input-output analysis […]. By providing integrated descriptions across different levels of analysis, the MuSIASEM approach does not reduce the information to a single quantitative index, and analyses the energy used in relation to concrete socio-economic structures. Moreover, the inclusion of multiple dimensions (such as GDP, human time and energy consumption), combined with different scales of analysis (such as the sectoral and the national level), makes it possible to provide information relevant to processes inside the system, as well as to analyse how external variables (such as economic crisis and resource scarcity) can affect the allocation and use of resources ».

I tell myself that when someone brags about having advantages over anything else, those advantages reflect what that same someone considers the most important aspects of the phenomena in question. Thus, professor Andreoni assumes that MuSIASEM allows the study of something important – the energy efficiency of societies as a metabolism – whilst having the advantage of deconstructing aggregate variables into their components, as well as that of multi-dimensional analysis.

The variables studied thus seem to be the basis of the method. Let's talk about variables, then. In his article, professor Andreoni presents three essential variables:

>> Total human activity, calculated as the product: [population] x [24 hours] x [365 days]

>> Total transformation of energy, calculated as the sum: [final energy consumption] + [internal energy consumption in the energy sector] + [energy lost in transformation]

>> Gross Domestic Product

These three fundamental variables are studied at three different levels of aggregation. The base level is that of national economies, from which we decompose, first of all, into the macroeconomic sectors of households, as opposed to paid activity (businesses plus the public sector). Then, both macroeconomic sectors are disaggregated into agriculture, industry and services.

At each level of aggregation, the three fundamental variables are related to each other so as to calculate two coefficients: energy intensity and the energy metabolism. Energy intensity is calculated as the amount of energy used to produce one euro of Gross Domestic Product, and is thus the inverse of energy efficiency (the latter being calculated as the amount of GDP produced out of one unit of energy). The metabolic coefficient, in turn, is calculated as the amount of energy per hour of human activity.
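Those two MuSIASEM coefficients reduce to two divisions. A minimal sketch, with units carried implicitly by the inputs:

```python
# Energy intensity: energy used per unit of GDP (the inverse of energy
# efficiency). Metabolic rate: energy used per hour of human activity,
# with total human activity = population x 8760 hours per year.
def energy_intensity(energy_used, gdp):
    return energy_used / gdp

def metabolic_rate(energy_used, population, hours_per_year=8760):
    return energy_used / (population * hours_per_year)
```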

I have a few critical remarks about those variables, but before I expand on them, let me quickly contrast them with my own method. Professor Andreoni's variables are transformations of the variables available in publicly accessible databases. Professor Andreoni thus takes a general method of empirical observation – for example the method of calculating final energy consumption – and transforms that general method so as to obtain a different view of the same empirical reality. This transformation tends to aggregate the « common » variables. I, on my side, use a broad range of variables commonly formalized and presented in publicly accessible databases, plus a small zest of coefficients that I calculate myself. In fact, in my research on energy, I use just two original coefficients: the average number of resident patent applications per 1 million inhabitants, on the one hand, and the average amount of fixed business capital per one resident patent application, on the other hand. For the rest, I use common variables. In the article I am finishing for « International Journal of Energy Sector Management », I use the forty-something variables of Penn Tables 9.1 (Feenstra et al. 2015[3]), plus World Bank variables on energy (final consumption, share of renewable sources, share of electricity), plus Eurostat data on electricity prices, plus those two coefficients relative to resident patent applications.

The difference between my method and MuSIASEM is thus visible already at the phenomenological level. I take the generally accepted phenomenology – for example the phenomenology of energy consumption, or that of economic activity – and then I study the relations between the corresponding variables, so as to extract a more complex picture. I already know that in my method, the quantity and diversity of variables is a key factor. My results become truly robust – i.e. coherent across different empirical samples – when I use a rich panoply of variables. At MuSIASEM, on the other hand, they start by building their own phenomenology at the very beginning, and then they reason with it.

There seems to be common ground between my method and that of MuSIASEM: we seem to agree that macroeconomic variables, as publicly accessible, give an imperfect image of a reality which is otherwise much more complex. From there on, however, there is a difference. I assume that if I take many distinct imperfect observations – many different variables, each slightly off the mark – I can reconstruct something about said reality by transforming those imperfect observations with a neural network. I thus assume that I don't know in advance in what exact way those variables are imperfect, and frankly, I don't care. It is as if I were reconstructing a crime (I love detective novels) out of a large number of depositions made by witnesses who, at the moment and in the presence of the crime in question, were either drunk, or drugged, or watching a football match on their phones. I assume that, however unreliable all those witnesses may be, I can interpose and recombine their depositions so as to nail down the scoundrel who killed the old lady. I experiment with different combinations, and I try to see which one is the most coherent. At MuSIASEM, on the other hand, they establish in advance one method of cross-examining the imperfect depositions of inebriated witnesses, and then they apply it coherently across all cases of such testimony.

Up to this point, my method comes furnished with weaker assumptions than MuSIASEM's. As a general rule, I prefer methods with weak assumptions. When I question received ideas, simply by suspending them and checking whether they hold up (under suspension), I have a better chance of finding new and interesting stuff. Now, I allow myself the perverse pleasure of combing through the strong assumptions of MuSIASEM, just to see where I can best stick a pin into them. I start with total human activity, calculated as the product: [population] x [24 hours] x [365 days]. First remark: the product 24 hours times 365 days = 8760 hours is a constant. If I compare two countries with different populations, their respective total human activities will differ solely through their different demographics. The product [24 hours] x [365 days] is thus a redundant decoration from the mathematical point of view. Still, it is an astute redundancy. The product 24 hours times 365 days = 8760 is the multiplier commonly used to transform energy capacity into effectively accessible energy. You take the power of an atomic bomb, in joules, you recalculate it into kilowatts, you multiply it by 24 hours times 365 days and boom: you get the amount of energy accessible to the general population if that bomb exploded continuously all year long. You add, however, 24 extra hours of explosion for leap years.
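The arithmetical remark above can be made in three lines of code: since 8760 is a constant, the ratio of total human activity between any two countries is exactly the ratio of their populations.

```python
HOURS_PER_YEAR = 24 * 365  # = 8760, the constant under discussion

def total_human_activity(population):
    return population * HOURS_PER_YEAR

# Comparing two hypothetical countries: the constant cancels out.
ratio = total_human_activity(60_000_000) / total_human_activity(10_000_000)  # = 6.0
```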

Atomic bomb or not, the product 24 hours times 365 days = 8760 is thus useful when we want to make an elegant connection between demographics and the transformation of energy, which seems judicious in a research method focused precisely on energy. Is the multiplication « population x 8760 hours in a year » relevant as a measure of human activity, then? Hmmyeah… maybe, at a pinch… I mean, if we have populations which are very similar in terms of lifestyle and technology, they can display similar levels of activity per hour, and thus levels of total human activity which differ solely on the basis of their different demographics. Nevertheless, we need really very similar populations. If we take an essential portion of human activity – agricultural output per capita – and compare it between Belgium, Argentina and Botswana, we obtain quite different coefficients of activity.

I think, therefore, that the assumptions which sustain the phenomenological identity total human activity = [population] x [24 hours] x [365 days] are so strong that they become dysfunctional. I thus assume that the MuSIASEM method uses, in fact, the size of the population as a fundamental variable, full stop. I do the same, by the way. I find that demographics play an unjustly secondary role in economic research. I see many researchers use demographic variables as « calibration » or « adjustment factors ». Everything I know about the general theory of complex systems, for example the line of research on cellular automata (Bandini, Mauri & Serra 2001[4]; Yu et al. 2021[5]) or swarm theory (Gupta & Srivastava 2020[6]), suggests that the size of populations, as well as the intensity of their social interactions, are fundamental attributes of every civilization.

I thus find that the phenomenological identity total human activity = [population] x [24 hours] x [365 days] in the MuSIASEM method is a sort of ruse, a bit superfluous, to bring demographics into the core of the reflection on energy efficiency. Consequently, the metabolic coefficient of MuSIASEM, calculated as the amount of energy per hour of human activity, is equivalent to energy consumption per capita. The energy metabolism of a human society is thus defined by energy consumption per capita (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE) together with the energy cost of GDP (https://data.worldbank.org/indicator/EG.USE.COMM.GD.PP.KD). The hyperlinks in parentheses refer to the corresponding World Bank databases. When I look at those two coefficients across the world and do something absolutely simplistic – I rank countries and regions into a hierarchical list – two different stories emerge. The coefficient of energy consumption per capita tells a story of a pure and simple hierarchy of economic and social well-being. The higher that coefficient, the more developed the given country, not only in terms of income per capita, but also in terms of institutional complexity, human rights, technological complexity etc.

When I listen to the story told by the energy cost of GDP (https://data.worldbank.org/indicator/EG.USE.COMM.GD.PP.KD), it is as complicated as a police investigation. Guess, then, what Panama, Sri Lanka, Switzerland, Ireland, Malta and the Dominican Republic have in common. Fascinating, isn't it? Well, those 6 countries lead the planetary race for energy efficiency, since all six of them are able to produce $1000 of GDP with less than 50 kilograms of oil equivalent in energy consumed. To place their feat in a broader geographical context, the United States and Serbia sit more than twice lower in that hierarchy, very close to each other, at 122 kilograms of oil equivalent per $1000 of GDP. By the way, that places both of them close to the planetary average, as well as to the average of the « lower middle income » category of countries.

If I recapitulate my observations on the geography of these two coefficients, different human societies seem to have a very idiosyncratic capacity to optimize the energy cost of GDP at different levels of energy consumption per capita. It is as if there were one way of optimizing energy efficiency when poor, and a different way of optimizing that same efficiency when rich and developed.

We homo sapiens can do really silly things in everyday life, but in the long run we are rather practical, which could explain our current capacity to transform some 30% of the total energy at the surface of the planet. If there is a hierarchy, that hierarchy probably has a role to play. Hard to say which role exactly, but it seems important to have this hierarchical structure of energy efficiency. This is another point where I diverge from the MuSIASEM method. Researchers active in the MuSIASEM niche assume that maximum energy efficiency is an evolutionary imperative of our civilization, and that all countries should aspire to optimize it. Hierarchies of energy efficiency are thus perceived as a dysfunctional historical accident, probably an effect of oppression of the poor by the rich. Of course, one can ask whether the inhabitants of the Dominican Republic are really so much richer than those of the United States, seeing as their energy efficiency is almost three times higher.



Tax on Bronze

I am trying to combine the line of logic which I developed in the proof-of-concept for the idea I labelled ‘Energy Ponds’ AKA ‘Project Aqueduct’ with my research on collective intelligence in human societies. I am currently doing a serious review of the literature on the theory of complex systems, as it looks like just next door to my own conceptual framework. The general idea is to use the theory of complex systems – within the general realm of which the theory of cellular automata looks the most promising, for the moment – to simulate the emergence and absorption of a new technology in the social structure.

I started to sketch the big lines of that picture in my last update in French, namely in ‘L’automate cellulaire respectable’. I assume that any new technology burgeons inside something like a social cell, i.e. a group of people connected by common goals and interests, together with some kind of institutional vehicle, e.g. a company, a foundation etc. It is interesting to notice that new technologies develop through the multiplication of such social cells rather than through linear growth of just one cell. Up to a point this is just one cell growing, something like the lone wolf of Netflix in the streaming business, and then ideas start breeding and having babies with other people.

I found an interesting quote in the book which is my roadmap through the theory of complex systems, namely in ‘What Is a Complex System?’ by James Ladyman and Karoline Wiesner (Yale University Press 2020, ISBN 978-0-300-25110-4). On page 56 (Kindle Edition), Ladyman and Wiesner write something interesting about the collective intelligence in colonies of ants: ‘What determines a colony’s survival is its ability to grow quickly, because individual workers need to bump into other workers often to be stimulated to carry out their tasks, and this will happen only if the colony is large. Army ants, for example, are known for their huge swarm raids in pursuit of prey. With up to 200 000 virtually blind foragers, they form trail systems that are up to 20 metres wide and 100 metres long (Franks et al. 1991). An army of this size harvests prey of 40 grams and more each day. But if a small group of a few hundred ants accidentally gets isolated, it will go round in a circle until the ants die from starvation […]’.

Interesting. Should nascent technologies have an ant-like edge to them, their survival should be linked to reaching some sort of critical size, which allows the formation of social interactions in an amount which, in turn, can assure proper orientation in all the social cells involved. Well, looks like nascent technologies really are akin to ant colonies, because this is exactly what happens. When we want to push a technology from its age of early infancy into the phase of development, a critical size of the social network is required. Customers, investors, creditors, business partners… all that lot is necessary, once again in a threshold amount, to give a new technology the salutary kick in the ass, sending it into the orbit of big business.

I like jumping quickly between ideas and readings, with conceptual coherence being an excuse just as frequently as it is a true guidance, and here comes an article on urban growth, by Yu et al. (2021[1]). The authors develop a model of urban growth, based on empirical data on two British cities: Oxford and Swindon. The general theoretical idea here is that urban areas in the strict sense are surrounded by places which are sort of in two minds whether they like being city or countryside. These places can be represented as spatial cells, and their local communities are cellular automatons which move cautiously, step by step, into alternative states of being more urban or more rural. Each such i-th cellular automaton displays a transition potential Ni, which is a local balance between the benefits of urban agglomeration Ni(U), as opposed to the benefits Ni(N) of conserving scarce non-urban resources. The story wouldn’t be complete without the shit-happens component Ri of randomness, and the whole story can be summarized as: Ni = Ni(U) – Ni(N) + Ri.
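The update rule can be sketched in a few lines of Python. This is a toy illustration of the formula Ni = Ni(U) – Ni(N) + Ri, not the calibrated model of Yu et al.; the cell values, the Gaussian noise, and the zero threshold are my own assumptions.

```python
import random

# Toy sketch of the transition potential N_i = N_i(U) - N_i(N) + R_i.
# Each peri-urban cell balances the benefit of agglomeration n_u against
# the benefit n_n of conserving non-urban resources, plus random noise.
def transition_potential(n_u: float, n_n: float, rng: random.Random) -> float:
    r = rng.gauss(0.0, 0.1)      # the "shit-happens" randomness component R_i
    return n_u - n_n + r

rng = random.Random(42)
# hypothetical cells: (benefit of urbanizing, benefit of staying rural)
cells = [(0.8, 0.3), (0.4, 0.6), (0.55, 0.5)]
states = ["urban" if transition_potential(u, n, rng) > 0 else "rural"
          for u, n in cells]
print(states)
```

With a clear margin in favour of agglomeration, as in the first cell, the noise rarely flips the outcome; cells close to the balance point are the ones where randomness decides.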

Yu et al. (2021 op. cit.) add an interesting edge to the basic theory of cellular automata, such as presented e.g. in Bandini, Mauri & Serra (2001[2]), namely the component of different spatial scales. A spatial cell in a peri-urban area can be attracted to many spatial aspects of being definitely urban. Its inhabitants may consider the possible benefits of sharing the same budget for local schools in a perimeter of 5 kilometres, as well as the possible benefits of connecting to a big hospital 20 km away. Starting from there, it looks a bit gravitational. Each urban cell has a power of attraction for non-urban cells, yet that power decays exponentially with physical distance.
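A possible sketch of that gravitational reading, under my own assumption (not the authors’ formula) that attraction is proportional to an urban amenity’s mass and decays exponentially with distance:

```python
import math

# Illustrative "gravitational" attraction of urban cells on a non-urban
# cell: proportional to mass, decaying exponentially with distance.
# The masses, distances and the 10 km decay scale are all assumptions.
def attraction(mass: float, distance_km: float, decay_km: float = 10.0) -> float:
    return mass * math.exp(-distance_km / decay_km)

# a small school district 5 km away vs a big hospital 20 km away
near_school = attraction(mass=1.0, distance_km=5.0)
far_hospital = attraction(mass=8.0, distance_km=20.0)
print(near_school, far_hospital)
```

With these numbers, the hospital’s larger mass outweighs the distance penalty, which is the multi-scale point: distant but heavy urban amenities can still pull harder than nearby light ones.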

I generalize. There are many technologies spreading across the social space, and each of them is like a city. I mean, it does not necessarily have a mayor, but it has dense social interactions inside, and those interactions create something like a gravitational force for external social cells. When a new technology gains new adherents, like new investors, new engineers, new business entities, it becomes sort of seen and known. I see two phases in the development of a nascent technology. Before it gains enough traction in order to exert significant gravitational force on the temporarily non-affiliated social cells, a technology grows through random interactions of the initially involved social cells. If those random interactions exceed a critical threshold, thus if there are enough forager ants in the game, their simple interactions create an emergence, which starts coagulating them into a new industry.

I return to cities and their growth, for a moment. I return to the story which Yu et al. (2021[3]) are telling. In my own story on a similar topic, namely in my draft paper ‘The Puzzle of Urban Density And Energy Consumption’, I noticed an amazing fact: whilst some cities grow, others decay or even disappear, and the overall surface of urban areas on Earth seems to be amazingly stationary over many decades. It looks as if the total mass, and hence the total gravitational attraction of all the cities on Earth, were constant over at least one human generation (20 – 25 years). Is it the same with technologies? I mean, is there some sort of constant total mass that all technologies on Earth have, within the lifespan of one human generation, with specific technologies getting sucked into that mass whilst others drop out and become moons (i.e. cold, dry places with not much to do and hardly any air to breathe)?

What if a new technology spreads like TikTok, i.e. like a wildfire? There is science for everything, and there is some science about fires in peri-urban areas as well. That science is based on the same theory of cellular automata. Jiang et al. (2021[4]) present a model where territories prone to wildfires are mapped into grids of square cells. Each cell presents a potential to catch fire, through its local properties: vegetation, landscape, local climate. The spread of a wildfire from a given cell R0 is always based on the properties of the cells surrounding the fire.

Cirillo, Nardi & Spitoni (2021[5]) present an interesting mathematical study of what happens when, in a population of cellular automata, each local automaton updates itself into a state which is a function of the preceding state in the same cell, as well as of the preceding states in the two neighbouring cells. It means, among other things, that if we add the dimension of time to any finite space Zd where cellular automata dwell, the immediately future state of a cell is a component of the available neighbourhood for the present state of that cell. Cirillo, Nardi & Spitoni (2021) demonstrate, as well, that if we know the number and the characteristics of the possible states which one cellular automaton can take, like (-1, 0, 1), we can compute the total number of states that automaton can take in a finite number of moves. If we make many such cellular automatons move in the same space Zd, a probabilistic chain of complex states emerges.
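The counting part can be illustrated very simply. This is my own simplification, not the specific result of Cirillo, Nardi & Spitoni: an automaton with k possible states can trace k**m distinct trajectories over m moves.

```python
from itertools import product

# Toy illustration of the combinatorics mentioned above: an automaton
# with state set (-1, 0, 1) has 3**m possible histories over m moves.
states = (-1, 0, 1)
m = 4                                    # number of moves, chosen arbitrarily
trajectories = list(product(states, repeat=m))
print(len(trajectories))                 # 3**4 possible histories
```

Put several such automatons in the same space, with each one’s update depending on its neighbours, and the joint state space multiplies again, which is where the probabilistic chains of complex states come from.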

As I wrote in ‘L’automate cellulaire respectable’, I see a social cell built around a new technology, e.g. ‘Energy Ponds’, moving, in the first place, along two completely clear dimensions: physical size of installations and financial size of the balance sheet. Movements along these two axes are subject to the influence happening along some foggy, unclear dimensions connected to preferences and behaviour: expected return on investment, expected future value of the firm, risk aversion as opposed to risk affinity etc. That makes me think, somehow, about a theory next door to that of cellular automata, namely the theory of swarms. This is a theory which explains complex changes in complex systems through changes in strength of correlation between individual movements. According to the swarm theory, a complex set which behaves like a swarm can adapt to external stressors by making the moves of individual members more or less correlated with each other. A swarm in routine action has its members couple their individual behaviour rigidly, like marching in step. A swarm alerted by a new stressor can loosen it a little, and allow individual members some play in their behaviour, like ‘If I do A, you do B or C or D, anyway one out of these three’. A swarm in mayhem loses it completely and there is no behavioural coupling whatsoever between members.

When it comes to the development and societal absorption of a new technology, the central idea behind the swarm-theoretic approach is that in order to do something new, the social swarm has to shake it off a bit. Social entities need to loosen their mutual behavioural coupling so as to allow some of them to do something else than just ritually respond to the behaviour of others. I found an article which I can use to transition nicely from the theory of cellular automata to the swarm theory: Puzicha & Buchholz (2021[6]). The paper is nominally about the behaviour of robots: a swarm of 60 distributed autonomous mobile robots which need to coordinate through a communication network with low reliability and restricted capacity. In other words, sometimes those robots can communicate with each other, and sometimes they can’t. When some robots out of the 60 are having a chat, they can jam the restricted capacity of the network and thus bar the remaining robots from communicating. Incidentally, this is how innovative industries work. When a few companies, let’s say of unicorn calibre, are developing a new technology, they absorb the attention of investors, governments, potential business partners and potential employees. They jam the restricted field of attention available in the markets of, respectively, labour and capital.

Another paper from the same symposium ‘Intelligent Systems’, namely Serov, Voronov & Kozlov (2021[7]), leads in a slightly different direction. Whilst directly derived from the functioning of communication systems, mostly the satellite-based ones, the paper suggests a path of learning in a network, where the capacity for communication is restricted, and the baseline method of balancing the whole thing is so burdensome for the network that it jams communication even further. You can compare it to a group of people who are all so vocal about the best way to allow each other to speak that they have no time and energy left for speaking their mind and listening to others. I have found another paper, which is closer to explaining the behaviour of those individual agents when they coordinate just sort of. It is Gupta & Srivastava (2020[8]), who compare two versions of swarm intelligence: particle swarm and ant colony. The former (particle swarm) generalises a problem applicable to birds. Simple, isn’t it? A group of birds will randomly search for food. Birds don’t know where exactly the food is, so they follow the bird which is nearest to the food.  The latter emulates the use of pheromones in a colony of ants. Ants selectively spread pheromones as they move around, and they find the right way of moving by following earlier deposits of pheromones. As many ants walk many times a given path, the residual pheromones densify and become even more attractive. Ants find the optimal path by following maximum pheromone deposition.
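The pheromone mechanism can be sketched in a deterministic, mean-field toy version. This is my own simplification, not the algorithm of Gupta & Srivastava (2020): two alternative paths, ant traffic splits proportionally to pheromone, and each path receives a deposit inversely proportional to its length.

```python
# Mean-field sketch of ant-colony reinforcement: each round, a unit of
# ant traffic splits across paths proportionally to current pheromone,
# and each path receives a deposit of (its traffic) / (its length).
# The path lengths and the number of rounds are illustrative assumptions.
def run_colony(n_rounds: int, lengths: dict) -> dict:
    pheromone = {path: 1.0 for path in lengths}
    for _ in range(n_rounds):
        total = sum(pheromone.values())
        for path in pheromone:
            flow = pheromone[path] / total        # share of ants taking this path
            pheromone[path] += flow / lengths[path]  # shorter path, bigger deposit
    return pheromone

result = run_colony(n_rounds=50, lengths={"short": 1.0, "long": 5.0})
print(result)
```

Starting from equal pheromone on both paths, the shorter path earns five times the deposit per unit of traffic, so the positive feedback concentrates pheromone, and traffic, on it: the medium of communication does the optimizing.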

Gupta & Srivastava (2020) demonstrate that the ant-colony model – that is, a system endowed with a medium of communication which acts by simple concentration in space and time – is more efficient for quick optimization than the bird-particle model, based solely on observing each other’s moves. From my point of view, i.e. from that of new technologies, those results reach deeper than might seem at first sight. Financial capital is like a pheromone. One investor-ant drops some financial deeds at a project, which can hopefully attract further deposits of capital etc. Still, ant colonies need to reach a critical size in order for that whole pheromone business to work. There needs to be a sufficient number of ants per unit of available space, in order to create those pheromonal paths. Below the critical size, no path becomes salient enough to create coordination, and ants starve to death for want of communicating efficiently. Incidentally, the same is true for capital markets. Some 11 years ago, right after the global financial crisis, a fashion came to create small, relatively informal stock markets, called ‘alternative capital markets’. Some of them were created by the operators of big stock markets (e.g. the AIM market organized by the London Stock Exchange), some others were completely independent ventures. Now, a decade after that fashion exploded, the conclusion is similar to ant colonies: for want of reaching a critical size, those alternative capital markets just don’t work as smoothly as the big ones.

All that science I have quoted makes my mind wander, and it starts walking down the path of the hilarious and absurd. I return, just for a moment, to another book: ‘1177 B.C. THE YEAR CIVILIZATION COLLAPSED. REVISED AND UPDATED’ by Eric H. Cline (Turning Points in Ancient History, Princeton University Press, 2021, ISBN 9780691208022). The book gives an in-depth account of the painful, catastrophic end of a whole civilisation, namely that of the Late Bronze Age, in the Mediterranean and the Levant. The interesting thing is that we know that whole network of empires – Egypt, Hittites, Mycenae, Ugarit and whatnot – collapsed at approximately the same moment, around 1200 – 1150 B.C.; we know they collapsed violently; and yet we don’t know exactly how they collapsed.

Alternative history comes to my mind. I imagine the transition from the Bronze Age to the Iron Age similarly to what we do presently. The pharaoh-queen VanhderLeyenh comes up with the idea of iron. Well, she doesn’t, someone she pays does. The idea is so seductive that she comes up, by herself this time, with another one, namely a tax on bronze. ‘C’mon, Mr Brurumph, don’t tell me you can’t transition to iron within the next year. How many appliances in bronze do you have? Five? A shovel, two swords, and two knives. Yes, we checked. What about your rights? We are going through a deep technological change, Mr Brurumph, this is not a moment to talk about rights. Anyway, this is not even the new era yet, and there is no such thing as individual rights. So, Mr Brurumph, a one-year notice for passing from bronze to iron is more than enough. Later, you pay the bronze tax on each bronze appliance we find. Still, there is a workaround. If you officially identify as a non-Bronze person, and you put the corresponding sign over your door, you have a century-long prolongation on that tax’.

Mr Brurumph gets pissed off. Others do too. They feel lost in a hostile social environment. They start figuring s**t out, starting from the first principles of their logic. They become cellular automata. They focus on nailing down the next immediate move to make. Errors are costly. Swarm behaviour forms. Fights break out. Cities get destroyed. Not being liable to pay the tax on bronze becomes a thing. It gets support and gravitational attraction. It becomes tempting to join the wandering hordes of ‘Tax Free People’ who just don’t care and go. The whole idea of iron gets postponed like by three centuries.  


[1] Yu, J., Hagen-Zanker, A., Santitissadeekorn, N., & Hughes, S. (2021). Calibration of cellular automata urban growth models from urban genesis onwards-a novel application of Markov chain Monte Carlo approximate Bayesian computation. Computers, environment and urban systems, 90, 101689. https://doi.org/10.1016/j.compenvurbsys.2021.101689

[2] Bandini, S., Mauri, G., & Serra, R. (2001). Cellular automata: From a theoretical parallel computational model to its application to complex systems. Parallel Computing, 27(5), 539-553. https://doi.org/10.1016/S0167-8191(00)00076-4

[3] Yu, J., Hagen-Zanker, A., Santitissadeekorn, N., & Hughes, S. (2021). Calibration of cellular automata urban growth models from urban genesis onwards-a novel application of Markov chain Monte Carlo approximate Bayesian computation. Computers, environment and urban systems, 90, 101689. https://doi.org/10.1016/j.compenvurbsys.2021.101689

[4] Jiang, W., Wang, F., Fang, L., Zheng, X., Qiao, X., Li, Z., & Meng, Q. (2021). Modelling of wildland-urban interface fire spread with the heterogeneous cellular automata model. Environmental Modelling & Software, 135, 104895. https://doi.org/10.1016/j.envsoft.2020.104895

[5] Cirillo, E. N., Nardi, F. R., & Spitoni, C. (2021). Phase transitions in random mixtures of elementary cellular automata. Physica A: Statistical Mechanics and its Applications, 573, 125942. https://doi.org/10.1016/j.physa.2021.125942

[6] Puzicha, A., & Buchholz, P. (2021). Decentralized model predictive control for autonomous robot swarms with restricted communication skills in unknown environments. Procedia Computer Science, 186, 555-562. https://doi.org/10.1016/j.procs.2021.04.176

[7] Serov, V. A., Voronov, E. M., & Kozlov, D. A. (2021). A neuro-evolutionary synthesis of coordinated stable-effective compromises in hierarchical systems under conflict and uncertainty. Procedia Computer Science, 186, 257-268. https://doi.org/10.1016/j.procs.2021.04.145

[8] Gupta, A., & Srivastava, S. (2020). Comparative analysis of ant colony and particle swarm optimization algorithms for distance optimization. Procedia Computer Science, 173, 245-253. https://doi.org/10.1016/j.procs.2020.06.029

The collective of individual humans being any good at being smart

I am working on two topics in parallel, which is sort of normal in my case. As I know myself, instead of asking “Isn’t two too much?”, I should rather say “Just two? Run out of ideas, obviously”. I keep working on a proof-of-concept article for the idea which I provisionally labelled “Energy Ponds” AKA “Project Aqueduct”, on the one hand. See my two latest updates, namely ‘I have proven myself wrong’ and ‘Plusieurs bouquins à la fois, comme d’habitude’, as regards the summary of what I have found out and written down so far. As in most research which I do, I have come to the conclusion that however wonderful the concept appears, the most important thing in my work is the method of checking the feasibility of that concept. I guess I should develop on the method more specifically.

On the other hand, I am returning to my research on collective intelligence. I have just been approached by a publisher, with a kind invitation to submit the proposal for a book on that topic. I am passing in review my research, and the available literature. I am wondering what kind of central thread I should structure the entire book around. Two threads turn up in my mind, as a matter of fact. The first one is the assumption that whatever kind of story I am telling, I am actually telling the story of my own existence. I feel I need to go back to the roots of my interest in the phenomenon of collective intelligence, and those roots are in my meddling with artificial neural networks. At some point, I came to the conclusion that artificial neural networks can be good simulators of the way that human societies figure s**t out. I need to dig again into that idea.

My second thread is the theory of complex systems AKA the theory of complexity. The thing seems to be macheting its way through the jungle of social sciences, these last years, and it looks interestingly similar to what I labelled as collective intelligence. I came by the theory of complexity in three books which I am reading now (just three?). The first one is a history book: ‘1177 B.C. The Year Civilisation Collapsed. Revised and Updated’, published by Eric H. Cline with Princeton University Press in 2021[1]. The second book is just a few light years away from the first one. It regards mindfulness. It is ‘Aware. The Science and Practice of Presence. The Groundbreaking Meditation Practice’, published by Daniel J. Siegel with TarcherPerigee in 2018[2]. The third book is already some sort of a classic; it is ‘The Black Swan. The Impact of the Highly Improbable’ by Nassim Nicholas Taleb with Penguin, in 2010.

I think it is Daniel J. Siegel who gives the best general take on the theory of complexity, and I allow myself to quote: ‘One of the fundamental emergent properties of complex systems in this reality of ours is called self-organization. That’s a term you might think someone in psychology or even business might have created—but it is a mathematical term. The form or shape of the unfolding of a complex system is determined by this emergent property of self-organization. This unfolding can be optimized, or it can be constrained. When it’s not optimizing, it moves toward chaos or toward rigidity. When it is optimizing, it moves toward harmony and is flexible, adaptive, coherent, energized, and stable’. (Siegel, Daniel J.. Aware (p. 9). Penguin Publishing Group. Kindle Edition).  

I am combining my scientific experience of using AI as a social simulator with the theory of complex systems. It means I need to UNDERSTAND, like really. I need to understand my own thinking, in the first place, and then I need to combine it with whatever I can understand from other people’s thinking. It started with a simple artificial neural network, which I used to write my article ‘Energy efficiency as manifestation of collective intelligence in human societies’ (Energy, 191, 116500, https://doi.org/10.1016/j.energy.2019.116500 ). I had a collection of quantitative variables, which I had previously meddled with using classical regression. As regression did not really bring much in the way of conclusive results, I had the idea of using an artificial neural network. Of course, today, neural networks are a whole technology and science. The one I used is the equivalent of a spear with a stone tip as compared to a battle drone. Therefore, the really important thing is the fundamental logic of neural networking as compared to regression, in analyzing quantitative data.

When I do regression, I come up with a function, like y = a1*x1 + a2*x2 + …+ b, I trace that function across the cloud of empirical data points I am working with, and I measure the average distance from those points to the line of my function. That average distance is the average (standard) error of estimation with that given function. I repeat the process as many times as necessary to find a function which both makes sense logically and yields the lowest standard error of estimation. The central thing is that I observe all my data at once, as if it was all happening at the same time and as if I was observing it from outside. Here is the thing: I observe it from outside, but when that empirical data was happening, i.e. when the social phenomena expressed in my quantitative variables were taking place, everybody (me included) was inside, not outside.
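The “observing from outside” logic above can be made concrete with a bare-bones ordinary least squares fit. The data points are invented for illustration; the point is the procedure, not the numbers.

```python
# Fit y = a*x + b by ordinary least squares and measure the standard
# error of estimation, i.e. the average distance between the empirical
# points and the fitted line. Data are made up for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
standard_error = (sum(r * r for r in residuals) / n) ** 0.5
print(round(a, 3), round(b, 3), round(standard_error, 3))
```

Note how the whole dataset is consumed at once: the function is fitted against all the points simultaneously, from outside, which is exactly the perspective being contrasted with the inside view in the next paragraph.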

How to express mathematically the fact of being inside the facts measured? One way is to take those empirical occurrences one by one, sort of Denmark in 2005, and then Denmark in 2006, and then Germany in 2005 etc. Being inside the events changes my perspective on what is the error of estimation, as compared to being outside. When I am outside, error means departure from the divine plan, i.e. from the regression function. When I am inside things that are happening, error happens as discrepancy between what I want and expect, on the one hand, and what I actually get, on the other hand. These are two different errors of estimation, measured as departures from two different functions. The regression function is the most accurate (or as accurate as you can get) mathematical explanation of the empirical data points. The function which we use when simulating the state of being inside the events is different: it is a function of adaptation.      

Intelligent adaptation means that we are after something: food, sex, power, a new Ferrari, social justice, 1000 000 followers on Instagram…whatever. There is something we are after, some kind of outcome we try to optimize. When I have a collection of quantitative variables which describe a society, such as energy efficiency, headcount of population, inflation rates, incidence of Ferraris per 1 million people etc., I can make a weak assumption that any of these can express a desired outcome. Here, a digression is due. In science and philosophy, weak assumptions are assumptions which assume very little, and therefore they are bloody hard to discard. On the other hand, strong assumptions assume a lot, and that makes them pretty good targets for discarding criticism. In other words, in science and philosophy, weak assumptions are strong and strong assumptions are weak. Obvious, isn’t it? Anyway, I make that weak assumption that any phenomenon we observe and measure with a numerical scale can be a collectively desired outcome we pursue.

Another assumption I make, a weak one as well, is sort of hidden in the word ‘expresses’. Here, I relate to a whole line of philosophical and scientific heritage, going back to people like Plato, Kant, William James, Maurice Merleau-Ponty, or, quite recently, Michael Keane (1972[3]), as well as Berghout & Verbitskiy (2021[4]). Very nearly everyone who seriously thought (or keeps thinking, on the account of being still alive) about human cognition of reality agrees that we essentially don’t know s**t. We make cognitive constructs in our minds, so as to make at least a little bit of sense of the essentially chaotic reality outside our skin, and we call it empirical observation. Mind you, stuff inside our skin is not much less chaotic, but this is outside the scope of social sciences. As we focus on quantitative variables commonly used in social sciences, the notion of facts becomes really blurred. Have you ever shaken hands with energy efficiency, with Gross Domestic Product or with the mortality rate? Have you touched it? No? Neither have I. These are highly distilled cognitive structures which we use to denote something about the state of society.

Therefore, I assume that quantitative, socio-economic variables express something about the societies observed, and that something is probably important if we collectively keep record of it. If I have n empirical variables, each of them possibly represents collectively important outcomes. As these are distinct variables, I assume that, with all the imperfections and simplification of the corresponding phenomenology, each distinct variable possibly represents a distinct type of collectively important outcome. When I study a human society through the lens of many quantitative variables, I assume they are informative about a set of collectively important social outcomes in that society.

Whilst a regression function explains how a set of variables hangs together when observed ex post and from outside, an adaptation function explains and expresses the way that a society addresses important collective outcomes in a series of trials and errors. Here come two fundamental differences between studying a society with a regression function, as opposed to using an adaptation function. Firstly, for any collection of variables, there is essentially one regression function of the type: y = a1*x1 + a2*x2 + … + an*xn + b. On the other hand, with a collection of n quantitative variables at hand, there are at least as many functions of adaptation as there are variables. We can hypothesize that each individual variable x is the collective outcome to pursue and optimize, whilst the remaining n – 1 variables are instrumental to that purpose. One remark is important to make now: the variable informative about collective outcomes pursued, that specific x, can be and usually is instrumental to itself. We can make a desired Gross Domestic Product based on the Gross Domestic Product we have now. The same applies to inflation, energy efficiency, share of electric cars in the overall transportation system etc. Therefore, the entire set of n variables can be assumed instrumental to the optimization of one variable x from among them.

Mathematically, it starts with assuming a functional input f(x1, x2, …, xn), which gets pitched against one specific outcome xi. Subtraction is the most logical representation of that pitching, and thus we have the mathematical expression ‘xi – f(x1, x2, …, xn)’, which informs us how close the society observed has come to the desired outcome xi. It is technically possible that people just nail it, and xi = f(x1, x2, …, xn), whence xi – f(x1, x2, …, xn) = 0. This is a perfect world, which, however, can be dangerously perfect. We know those societies of apparently perfectly happy people, who live in harmony with nature, even if that harmony means hosting most intestinal parasites of the local ecosystem. One day other people come, with big excavators, monetary systems, structured legal norms, and the bubble bursts, and it hurts.

Thus, on the whole, it might be better to hit xi ≠ f(x1, x2, …, xn), whence xi – f(x1, x2, …, xn) ≠ 0. It helps learning new stuff. The ‘≠ 0’ part means there is an error in adaptation: the functional input f(x1, x2, …, xn) hits above or below the desired xi. As we want to learn, that error in adaptation, e = xi – f(x1, x2, …, xn) ≠ 0, makes practical sense only when we utilize it in subsequent rounds of collective trial and error. Sequence means order, and a timeline. We have a sequence {t0, t1, t2, …, tm} of m moments in time. Local adaptation turns into ‘xi(t) – ft(x1, x2, …, xn)’, and the error of adaptation becomes the time-specific e(t) = xi(t) – ft(x1, x2, …, xn) ≠ 0. The clever trick consists in taking e(t0) = xi(t0) – ft0(x1, x2, …, xn) ≠ 0 and combining it somehow with the next functional input ft1(x1, x2, …, xn). Mathematically, if we want to combine two values, we can add them up or multiply them; division is a special case of multiplication, namely x * (1/z). When I add up two values, I assume they are essentially of the same kind and more or less independent from each other. When, on the other hand, I multiply them, they become entwined so that each of them reproduces the other one. Multiplication ‘x * z’ means that x gets reproduced z times and vice versa. When I take the error of adaptation e(t0) from the last experimental round and want to combine it with the functional input ft1(x1, x2, …, xn) of the next round, that whole reproduction business looks like a strong assumption, with a lot of weak spots. I settle for the weak assumption, then, and I assume that ft1(x1, x2, …, xn) becomes ft0(x1, x2, …, xn) + e(t0).

The expression ft0(x1, x2, …, xn) + e(t0) makes functional sense only when and after we have e(t0) = xi(t0) – ft0(x1, x2, …, xn) ≠ 0. Consequently, the next error of adaptation, namely e(t1) = xi(t1) – ft1(x1, x2, …, xn) ≠ 0, can come into being only after its predecessor e(t0) has occurred. We have a chain of m states in the functional input of the society, i.e. {ft0(x1, x2, …, xn) => ft1(x1, x2, …, xn) => … => ftm(x1, x2, …, xn)}, associated with a chain of m desired outcomes {xi(t0) => xi(t1) => … => xi(tm)}, and with a chain of errors in adaptation {e(t0) => e(t1) => … => e(tm)}. That triad – chain of functional inputs, chain of desired outcomes, and chain of errors in adaptation – is the closest I can get, for now, to a mathematical expression of the adaptation function. As errors get fed along the chain of states (as I see it, they are fed forward, but in the algorithmic version you can backpropagate them), those errors are a sort of dynamic memory in that society: the memory from learning to adapt.
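The triad of chains can be written down as a short simulation loop. I read the rule ‘ft1 becomes ft0 + e(t0)’ as: each round’s functional input absorbs the previous round’s error. The weighted-sum form of f and the data below are purely illustrative assumptions of mine:

```python
def f(xs, weights):
    """Functional input f(x1, ..., xn): here, an illustrative weighted sum."""
    return sum(w * x for w, x in zip(weights, xs))

# Hypothetical observations of n = 3 variables over m = 4 moments in time;
# the first variable plays the role of the desired outcome xi.
series = [(1.0, 0.5, 2.0), (1.2, 0.6, 1.9), (1.1, 0.7, 2.1), (1.3, 0.6, 2.2)]
weights = (0.3, 0.3, 0.3)

inputs, outcomes, errors = [], [], []   # the three concurrent chains
carry = 0.0                             # error fed forward from the previous round
for xs in series:
    ft = f(xs, weights) + carry         # next functional input absorbs e(t-1)
    et = xs[0] - ft                     # e(t) = xi(t) - ft(x1, ..., xn)
    inputs.append(ft)
    outcomes.append(xs[0])
    errors.append(et)
    carry = et                          # the structure's dynamic memory

print([round(e, 2) for e in errors])    # a chain of nonzero errors
```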

Here we can see the epistemological difference between studying a society from outside, explaining its workings with a regression function, on the one hand, and studying those mechanisms from inside, by simulation with an adaptation function, on the other hand. The adaptation function is the closest I can get, in mathematical form, to what I understand by collective intelligence. As I have been working with that general construct, I have progressively zoomed in on another concept, namely that of intelligent structure, which I define as a structure which learns by experimenting with many alternative versions of itself whilst staying structurally coherent, i.e. by maintaining basic coupling between its components.

I feel like comparing my approach to intelligent structures and their collective intelligence with the concept of complex systems, as discussed in the literature. I returned, therefore, to the book entitled ‘1177 B.C. The Year Civilisation Collapsed. Revised and Updated’, by Eric H. Cline, Princeton University Press, 2021. The theory of complex systems is brought forth in that otherwise very interesting piece in order to help answer the following question: ‘Why did the great empires of the Late Bronze Age, such as Egypt, the Hittites, or the Mycenaeans, all collapse at approximately the same time, around 1200 – 1150 B.C.?’. The basic assertion which Eric Cline develops on, and questions, is that the entire patchwork of those empires in the Mediterranean, the Levant and the Middle East was one big complex system, which collapsed on account of having slightly overdone it in the complexity department.

I am trying to reconstruct the definition of systemic complexity such as Eric Cline uses it in his flow of logic. I start with the following quote: ‘Complexity science or theory is the study of a complex system or systems, with the goal of explaining the phenomena which emerge from a collection of interacting objects’. If we study a society as a complex system, we need to assume two things: there are many interacting objects in it, for one, and their mutual interaction leads to the emergence of some specific phenomena. Sounds cool. I move on, and a few pages later I find the following statement: ‘In one aspect of complexity theory, behavior of those objects is affected by their memories and “feedback” from what has happened in the past. They are able to adapt their strategies, partly on the basis of their knowledge of previous history’. Nice. We are getting closer. Entities inside a complex system accumulate memory, and they learn on that basis. This is sort of next door to the three sequential chains – states, desired outcomes, and errors in adaptation – which I coined.

Further, I find an assertion that a complex social system is typically ‘alive’, which means that it evolves in a complicated, nontrivial way, whilst being open to influences from the environment. All that leads the complex system to generate phenomena which can be considered surprising and extreme. Good. This is the moment to move to the next book: ‘The Black Swan. The Impact of the Highly Improbable’ by Nassim Nicholas Taleb, Penguin, 2010. Here comes a lengthy quote, which I bring here for the sheer pleasure of savouring one more time Nassim Taleb’s delicious style: “[…] say you attribute the success of the nineteenth-century novelist Honoré de Balzac to his superior “realism,” “insights,” “sensitivity,” “treatment of characters,” “ability to keep the reader riveted,” and so on. These may be deemed “superior” qualities that lead to superior performance if, and only if, those who lack what we call talent also lack these qualities. But what if there are dozens of comparable literary masterpieces that happened to perish? And, following my logic, if there are indeed many perished manuscripts with similar attributes, then, I regret to say, your idol Balzac was just the beneficiary of disproportionate luck compared to his peers. Furthermore, you may be committing an injustice to others by favouring him. My point, I will repeat, is not that Balzac is untalented, but that he is less uniquely talented than we think. Just consider the thousands of writers now completely vanished from consciousness: their record does not enter into analyses. We do not see the tons of rejected manuscripts because these writers have never been published. The New Yorker alone rejects close to a hundred manuscripts a day, so imagine the number of geniuses that we will never hear about. In a country like France, where more people write books while, sadly, fewer people read them, respectable literary publishers accept one in ten thousand manuscripts they receive from first-time authors”.

Many people write books, few people read them, and that creates something like a flow of highly risky experiments. That coincides with something like a bottleneck of success, with possibly great positive outcomes (fame, money, posthumous fame, posthumous money for other people etc.), and a low probability of occurrence. A few salient phenomena are produced – the Balzacs – whilst the whole build-up of other writing efforts, by less successful novelists, remains in the backstage of history. That, in turn, somehow rhymes with my intuition that intelligent structures need to produce big outliers, at least from time to time. On the one hand, those outliers can be viewed as big departures from the currently expected outcomes. They are big local errors. Big errors mean a lot of information to learn from. There is an even further-going, conceptual coincidence with the theory and practice of artificial neural networks. A network can be prone to overfitting, which means that it learns too fast, sort of by jumping prematurely to conclusions, before and without having worked through the required amount of local errors in adaptation.

Seen from that angle, the function of adaptation I have come up with takes on a new shade. The sequential chain of errors appears as necessary for the intelligent structure to be any good. Good. Let’s jump to the third book I quoted with respect to the theory of complex systems: ‘Aware. The Science and Practice of Presence. The Groundbreaking Meditation Practice’, by Daniel J. Siegel, TarcherPerigee, 2018. I return to the idea of self-organisation in complex systems, and the choice between three different states: a) the optimal state of flexibility, adaptability, coherence, energy and stability, b) non-optimal rigidity, and c) non-optimal chaos.

That conceptual thread concurs interestingly with my draft paper: ‘Behavioral absorption of Black Swans: simulation with an artificial neural network’. I found out that with the chain of functional input states {ft0(x1, x2, …, xn) => ft1(x1, x2, …, xn) => … => ftm(x1, x2, …, xn)} being organized in rigorously the same way, different types of desired outcomes lead to different patterns of learning, very similar to the triad which Daniel Siegel refers to. When my neural network does its best to optimize outcomes such as Gross Domestic Product, it quickly comes to rigidity. It makes some errors in the beginning of the learning process, but then it quickly drives the local error asymptotically to zero and is like ‘We nailed it. There is no need to experiment further’. There are other outcomes, such as the terms of trade (the residual fork between the average price of exports and that of imports), or the average number of hours worked per person per year, which yield a curve of local error in the form of a graceful sinusoid, cyclically oscillating between different magnitudes of error. This is the energetic, dynamic balance. Finally, some macroeconomic outcomes, such as the index of consumer prices, can make the same neural network go nuts, and generate an ever-growing curve of local error, as if the poor thing couldn’t learn anything sensible from looking at the prices of apparel and refrigerators. The most puzzling thing in all that is that differences in pursued outcomes are the source of the discrepancy in patterns of learning, not the way of learning as such. Some outcomes, when pursued, keep the neural network I made in a state of healthy adaptability, whilst other outcomes make it overfit or go haywire.
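Those three patterns (rigidity, graceful oscillation, divergence) can be caricatured with a one-line error recurrence. This is a toy model of the three error curves, under my own simplifying assumptions, not the neural network from the draft paper:

```python
def error_curve(k, e0=1.0, steps=10):
    """Toy recurrence e(t+1) = k * e(t): the simplest caricature of how the
    local error of adaptation can evolve over consecutive rounds."""
    es, e = [], e0
    for _ in range(steps):
        es.append(e)
        e *= k
    return es

rigidity = error_curve(0.5)       # error driven asymptotically to zero
oscillation = error_curve(-0.95)  # sign flips, magnitude nearly stable
divergence = error_curve(1.3)     # ever-growing local error
print(rigidity[-1], oscillation[-1], divergence[-1])
```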

When I write about collective intelligence and complex systems, it seems a sensible idea to read (and quote) books which have those concepts explicitly named. Here comes ‘The Knowledge Illusion. Why We Never Think Alone’ by Steven Sloman and Philip Fernbach, Riverhead Books (an imprint of Penguin Random House LLC), Ebook ISBN 9780399184345, Kindle Edition. In the introduction, titled ‘Ignorance and the Community of Knowledge’, Sloman and Fernbach write: “The human mind is not like a desktop computer, designed to hold reams of information. The mind is a flexible problem solver that evolved to extract only the most useful information to guide decisions in new situations. As a consequence, individuals store very little detailed information about the world in their heads. In that sense, people are like bees and society a beehive: Our intelligence resides not in individual brains but in the collective mind. To function, individuals rely not only on knowledge stored within our skulls but also on knowledge stored elsewhere: in our bodies, in the environment, and especially in other people. When you put it all together, human thought is incredibly impressive. But it is a product of a community, not of any individual alone”. This is a strong statement, from which I somehow distance myself. I think that collective human intelligence can really work only when individual humans are any good at being smart. Individuals need practical freedom of action, based on their capacity to figure s**t out in difficult situations, and the highly fluid ensemble of individual freedoms allows the society to make and experiment with many alternative versions of itself.

Another book is more of a textbook. It is ‘What Is a Complex System?’ by James Ladyman and Karoline Wiesner, published by Yale University Press (ISBN 978-0-300-25110-4, Kindle Edition). In the introduction (p. 15), Ladyman and Wiesner claim: “One of the most fundamental ideas in complexity science is that the interactions of large numbers of entities may give rise to qualitatively new kinds of behaviour different from that displayed by small numbers of them, as Philip Anderson says in his hugely influential paper, ‘more is different’ (1972). When whole systems spontaneously display behaviour that their parts do not, this is called emergence”. In my world, those ‘entities’ are essentially the chained functional input states {ft0(x1, x2, …, xn) => ft1(x1, x2, …, xn) => … => ftm(x1, x2, …, xn)}. My entities are phenomenological: they are cognitive structures which, for want of a better word, we call ‘empirical variables’. If the neural networks I make and use for my research are any good at representing complex systems, emergence is a property of the data in the first place. Interactions between those entities are expressed through the function of adaptation, mostly through the chain {e(t0) => e(t1) => … => e(tm)} of local errors, concurrent with the chain of functional input states.

I think I know what the central point and thread of my book on collective intelligence is, should I (finally) write that book for good. Artificial neural networks can be used as simulators of collective social behaviour and social change. Still, they do not need to be super-performant networks. My point is that, with the right intellectual method, even the simplest neural networks, those which can be programmed into an Excel spreadsheet, can be reliable cognitive tools for social simulation.
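A network of exactly that ‘Excel-grade’ simplicity, written here in Python for compactness, is a single neuron with a sigmoid activation, trained on the local error. All the numbers below (data, learning rate) are illustrative assumptions, not data from the study:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical standardized data: 3 input variables and a desired outcome per row
rows = [([0.2, 0.7, 0.5], 0.6),
        ([0.9, 0.1, 0.4], 0.4),
        ([0.5, 0.5, 0.8], 0.7)]
weights = [0.1, 0.1, 0.1]
rate = 0.5

for _ in range(2000):                      # rounds of collective trial and error
    for xs, desired in rows:
        out = sigmoid(sum(w * x for w, x in zip(weights, xs)))
        error = desired - out              # local error of adaptation
        delta = rate * error * out * (1.0 - out)
        weights = [w + delta * x for w, x in zip(weights, xs)]

# After learning, the outputs sit close to the desired outcomes
outputs = [sigmoid(sum(w * x for w, x in zip(weights, xs))) for xs, _ in rows]
print([round(o, 2) for o in outputs])      # close to [0.6, 0.4, 0.7]
```

The same mechanics (a weighted sum, a sigmoid, an error fed back into the weights) fit comfortably into a handful of spreadsheet columns, which is the whole point.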


[1] Cline, Eric H. (2021). 1177 B.C.: The Year Civilization Collapsed, Revised and Updated (Turning Points in Ancient History). Princeton University Press, Kindle Edition. ISBN 9780691208015 (paperback), ISBN 9780691208022 (ebook).

[2] Siegel, Daniel J. (2018). Aware: The Science and Practice of Presence. Penguin Publishing Group, Kindle Edition. ISBN 9780143111788, ISBN 9781101993040 (hardback).

[3] Keane, M. (1972). Strongly mixing measures. Inventiones mathematicae, 16(4), 309-324. DOI https://doi.org/10.1007/BF01425715

[4] Berghout, S., & Verbitskiy, E. (2021). On regularity of functions of Markov chains. Stochastic Processes and their Applications, Volume 134, April 2021, Pages 29-54, https://doi.org/10.1016/j.spa.2020.12.006

Several books at once, as usual

I am finishing the first, still somewhat rudimentary version of my article on the feasibility of ‘Projet Aqueduc’: a technological concept in its phase of birth, which I am trying to develop and promote. I think I have done all the basic calculations, and I intend to give a summary account of them in this update. I will present those results in a logical structure which is currently making its breakthrough in the world of science: I start by presenting the basic idea and I associate it with the empirical material I deem relevant, as well as with the method of analysing that material. Only after the methodological description do I review the literature on the salient points of the method and of the basic idea. Those three basic components (introduction, empirical material and method of analysis, literature review) form the basis of what follows, i.e. of the presentation of calculations and their results, and of the final discussion of the whole. It is a form of composition which is replacing a more traditional structure, built around a rigorous transition from theory to the empirical part.

I therefore start by reformulating and reaffirming my basic idea, the very essence of ‘Projet Aqueduc’. The research I have just done has made me change my mind about it. Initially, I saw ‘Projet Aqueduc’ the way you can see it described in an earlier update: ‘Ça semble expérimenter toujours’. Now, I am beginning to appreciate the cognitive and practical value of the method I developed for conducting the feasibility study itself. That method is a creative application (well, I hope so) of Ockham’s razor: I divide my entire concept into specific component technologies, and I use the literature review to assess the degree of uncertainty attached to each of them. I focus the economic feasibility study on what I can say more or less reliably about the relatively most certain technologies, and I assume that those technologies must generate a surplus of financial liquidity sufficient to fund the development of the relatively more uncertain ones.

Within ‘Projet Aqueduc’, what seems best grounded in terms of cost and investment calculations is the hydro-generation technology. It is well documented and well known. Not really much innovation there, either; it seems to run by itself. The technologies of energy storage and of charging electric cars come right after in terms of predictability: things are moving, but in a rather organised way. There are innovations to be expected, but I think I can predict more or less which direction they will come from.

Be that as it may, I simulated hypothetical installations of ‘Projet Aqueduc’ in the mouths of 32 rivers of my country, Poland. I took the official data on flow per second, in cubic metres, and I simulated three levels of adsorption from that flow through the hydraulic rams of ‘Projet Aqueduc’: 5%, 10% and 20%. In parallel, I simulated three possible elevations of the equalising reservoirs: 10 metres, 20 metres and 30 metres. With the 654 millimetres of average annual precipitation in Poland, i.e. with a hydrological replenishment from precipitation of around 201.8 billion cubic metres, those 32 hypothetical installations could re-circulate between 2.5% and 10% of that total. That makes a substantial hydrological impact, whilst the impact on the energy market is not really significant. With water adsorption at its maximum, i.e. 20% of the flow of the rivers in question, and with the elevation of the equalising reservoirs set at 30 metres (the rational maximum given the literature of the subject), the total electrical power of those 32 hypothetical installations would be some 128.9 megawatts, against the 50 gigawatts already installed in the Polish energy system.
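For the record, power figures of that kind can be cross-checked with the standard formula for hydroelectric power, P = ρ·g·Q·H·η. The 75% overall efficiency and the example flow below are my own assumptions for illustration, not numbers from the study:

```python
RHO = 1000.0  # density of water, kg/m3
G = 9.81      # gravitational acceleration, m/s2

def hydro_power_kw(flow_m3s, head_m, efficiency=0.75):
    """Electric power of a hydro turbine in kW: P = rho * g * Q * H * eta.
    The default 75% overall efficiency is an assumed, typical figure."""
    return RHO * G * flow_m3s * head_m * efficiency / 1000.0

# Example: a river carrying 30 m3/s, of which 20% is adsorbed through the
# hydraulic rams, with an equalising reservoir elevated by 30 metres
# (hypothetical numbers, not one of the 32 Polish rivers in the study):
adsorbed_flow = 0.20 * 30.0
print(round(hydro_power_kw(adsorbed_flow, 30.0)))  # about 1324 kW
```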

I wrote, in my previous updates, that ‘Projet Aqueduc’ combines a hydrological impact with an impact on the market of renewable energies. I need to correct that. The production of hydropower is just a means of securing the economic feasibility of the project, and since I am at it, a few more results of calculations. Given the Eurostat data on energy prices, ‘Projet Aqueduc’ seems financially feasible with the average prices recorded in Europe rather than with the minimum prices. At average prices, the operation of hydroelectric turbines and of energy storage installations can release some 90% of gross margin which, in turn, can serve to finance the other technologies of the project (pumping with hydraulic rams, hydrological infrastructure etc.) and to create a net cash surplus. On the other hand, when I simulate energy prices at their empirical minimum, it yields a gross deficit of -18% after the cost of energy and of its storage. Thus, ‘Projet Aqueduc’ is not exactly the ‘cheap renewable energy for everyone’ type. The thing has a chance of working without public funding only when it reaches a market of consumers ready to pay more than the minimum for their electricity.
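Just to make the arithmetic of those two margins tangible: gross margin is (price – cost) / price. The unit cost and the two prices below are purely hypothetical, scaled only so that the same unit cost yields the two margins quoted above; they do not reproduce the Eurostat figures, and the spread between the two prices is exaggerated for illustration:

```python
def gross_margin(price, unit_cost):
    """Gross margin as a share of revenue: (price - cost) / price."""
    return (price - unit_cost) / price

# Hypothetical unit cost normalized to 1; two hypothetical prices:
cost = 1.0
print(round(gross_margin(10.0, cost), 2))         # 0.9, the 90% margin case
print(round(gross_margin(cost / 1.18, cost), 2))  # -0.18, the deficit case
```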

As regards the charging station for electric vehicles, as a marketing niche for the hydropower produced, I simply repeat the conclusions I had already expressed in the update entitled ‘I have proven myself wrong’: it does not look workable. Short of creating a hyper-demanded charging station, with hundreds of charging sessions per month, there simply will not be enough traffic, at least not with the present proportions between the fleet of electric vehicles in Europe and the network of charging stations. On the other hand, there is the alternative idea of mobile charging stations, developed rigorously by Elmeligy et al. (2021[1]), for example. It is a profound change of approach. Instead of building a powerful fast-charging station coupled with a high-performance (and expensive) energy storage unit, one builds a system of mobile batteries of somewhat lower power (200 kW in the cited solution) and moves them across busy car parks in a vehicle specially adapted to that purpose.

Now, I am changing the subject, and completely so. Yesterday, I received an email from an American publishing house, Nova Science Publishers, Inc., with an invitation to propose a book manuscript on the general subject of collective intelligence. Apparently, they have read my article in the journal ‘Energy’, entitled ‘Energy efficiency as manifestation of collective intelligence in human societies’. It is also possible that someone at Nova follows my blog and what I publish on the phenomenon of collective intelligence. Writing a book is different from writing an article. The latter privileges concision and brevity, whilst the former requires an abundant flow of ideas as well as a rich, structured context.

Doing some reading over these last weeks, I realised that my general hypothesis of collective intelligence in human societies, i.e. the hypothesis of collective learning through experimentation with several alternative versions of the same basic social structure, marries well with the hypothesis of complex systems. I found that intersection interesting as I was reading the book entitled ‘1177 B.C. The Year Civilisation Collapsed. Revised and Updated’, published by Eric H. Cline with Princeton University Press in 2021[2]. Studying the possible mechanisms of decomposition of the great empires of the Bronze Age, Eric Cline cites the theory of complex systems. If a whole is made of entities which differ in their complexity (thus if we observe an entity inside an entity, and all that inside yet another entity), the functional connections between those entities can somehow store information and therefore generate spontaneous learning. Quite surprisingly, I found a scientifically serious reference to the theory of complex systems in another book I am currently reading (yes, I have the habit of reading several books at once), namely in ‘Aware. The Science and Practice of Presence. The Groundbreaking Meditation Practice’, published by Daniel J. Siegel with TarcherPerigee in 2018[3]. Daniel J. Siegel develops on the hypothesis that human consciousness is a complex system and, as such, is capable of self-organisation. Let me paraphrase a short passage from the beginning of that book: ‘One of the fundamental emergent properties of complex systems in this reality of ours is called self-organisation. It is a concept you might think was created by someone in psychology or even in business, but it is a mathematical term. The form or contour of the unfolding of a complex system is determined by this emergent property of self-organisation. That unfolding can be optimised, or it can be constrained. When it does not optimise, it moves towards chaos or towards rigidity. When it optimises, it moves towards harmony, being flexible, adaptable, coherent, energetic and stable’.

Interesting: a systematic study of the development and collapse of a civilisation can find the same theoretical basis as the scientific study of meditation, and that basis is the theory of complex systems. The way that theory presents itself resembles a lot my simulations of social and technological change, where I use neural networks as a representation of collective intelligence. I am reflecting on the most general possible way of expressing and encompassing my hypothesis of collective intelligence. I think that the draft entitled ‘Behavioral absorption of Black Swans: simulation with an artificial neural network’, in combination with the theory of imperfect Markov chains (Berghout & Verbitskiy 2021[4]), is perhaps the best starting point. I assume, therefore, that any social reality is a collection of phenomena which we perceive only partially and imperfectly, and which we deem salient when their probability of occurrence exceeds a certain critical level.

Mathematically, intelligible social reality is thus a set of probabilities. I make no a priori assumption as to the formal mutual dependence of those probabilities, but I can assume that we perceive any change of social reality as a passage from one set of probabilities to another, i.e. as a complex chain of states. Here and now, we are in a complex state A, and from there, not everything is possible. Of course, I do not mean that everything is impossible: I simply assume that the complexity of the here and now can transform into other complexities under certain conditions and constraints. The most elementary assumption in that respect is that we consider moving our collective ass only towards complex states which bring us closer to whatever we are pursuing together, whatever the constraints exogenous to our choice. I would even say that in the presence of severe constraints we become particularly attentive to the next complex state we transition towards. A society constantly threatened by food shortage, for example, will be very fussy and, at the same time, very ingenious in its own demographic growth, going as far as the cultural regulation of women’s menstrual cycle.

Well, that will be all in this update. I am off to think and read (several books at once, as usual).


[1] Elmeligy, M. M., Shaaban, M. F., Azab, A., Azzouz, M. A., & Mokhtar, M. (2021). A Mobile Energy Storage Unit Serving Multiple EV Charging Stations. Energies, 14(10), 2969. https://doi.org/10.3390/en14102969

[2] Cline, Eric H. (2021). 1177 B.C.: The Year Civilization Collapsed, Revised and Updated (Turning Points in Ancient History). Princeton University Press, Kindle Edition. ISBN 9780691208015 (paperback), ISBN 9780691208022 (ebook).

[3] Siegel, Daniel J. (2018). Aware: The Science and Practice of Presence. Penguin Publishing Group, Kindle Edition. ISBN 9780143111788, ISBN 9781101993040 (hardback).

[4] Berghout, S., & Verbitskiy, E. (2021). On regularity of functions of Markov chains. Stochastic Processes and their Applications, Volume 134, April 2021, Pages 29-54, https://doi.org/10.1016/j.spa.2020.12.006

I have proven myself wrong

I keep working on a proof-of-concept paper for the idea I baptized ‘Energy Ponds’. You can consult two previous updates, namely ‘We keep going until we observe’ and ‘Ça semble expérimenter toujours’ to keep track of the intellectual drift I am taking. This time, I am focusing on the end of the technological pipeline, namely on the battery-powered charging station for electric cars. First, I want to make myself an idea of the market for charging.

I take the case of France. In December 2020, the country had a total of 119 737 electric vehicles officially registered (matriculated), an increase of 135% as compared to December 2019[1]. That number pertains only to 100% electric ones, with plug-in hybrids left aside for the moment. When plug-in hybrids enter the game, France had, in December 2020, 470 295 vehicles that need or might need the services of charging stations. According to the same source, there were 28 928 charging stations in France at the time, which makes 13 EVs per charging station. That coefficient is presented for 4 other European countries: Norway (23 EVs per charging station), UK (12), Germany (9), and the Netherlands (4).

I look into other sources. According to Reuters[2], there were 250 000 charging stations in Europe by September 2020, as compared to 34 000 in 2014. That means an average increase of 36 000 a year. I find a different estimation with Statista[3]: 2010 – 3 201; 2011 – 7 018; 2012 – 17 498; 2013 – 28 824; 2014 – 40 910; 2015 – 67 064; 2016 – 98 669; 2017 – 136 059; 2018 – 153 841; 2019 – 211 438; 2020 – 285 796.
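As the sources disagree, a quick cross-check of the implied growth rates helps; treating Reuters’ September 2020 figure as a full-year 2020 data point is my simplification:

```python
# Statista estimates of charging stations in Europe, by year
statista = {2010: 3201, 2011: 7018, 2012: 17498, 2013: 28824, 2014: 40910,
            2015: 67064, 2016: 98669, 2017: 136059, 2018: 153841,
            2019: 211438, 2020: 285796}

# Reuters: 34 000 stations in 2014, 250 000 by September 2020
reuters_avg = (250000 - 34000) / (2020 - 2014)
print(reuters_avg)  # 36000.0 a year, as quoted

# The same average computed over the Statista series, 2014 - 2020:
statista_avg = (statista[2020] - statista[2014]) / (2020 - 2014)
print(round(statista_avg))  # roughly 40814 a year: the sources disagree
```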

On the other hand, the European Alternative Fuels Observatory supplies their own data at https://www.eafo.eu/electric-vehicle-charging-infrastructure, as regards the European Union.

Number of EVs per charging station (source: European Alternative Fuels Observatory):

Year   EVs per charging station
2010   14
2011   6
2012   3
2013   4
2014   5
2015   5
2016   5
2017   5
2018   6
2019   7
2020   9

The same EAFO site gives their own estimation as regards the number of charging stations in Europe:

Number of charging stations in Europe (source: European Alternative Fuels Observatory):

Year   High-power recharging points (>22 kW) in EU   Normal charging stations in EU   Total charging stations
2012   257       10 250    10 507
2013   751       17 093    17 844
2014   1 474     24 917    26 391
2015   3 396     44 786    48 182
2016   5 190     70 012    75 202
2017   8 723     97 287    106 010
2018   11 138    107 446   118 584
2019   15 136    148 880   164 016
2020   24 987    199 250   224 237

Two conclusions jump to the eye. Firstly, the count of charging stations is only very approximate. Numbers differ substantially from source to source. I can only guess that one of the reasons for that discrepancy is the distinction between officially issued permits to build charging points, on the one hand, and actually active charging points, on the other hand. In Europe, building charging points for electric vehicles has become sort of a virtue, which governments at all levels like signaling. I guess there is some boasting and chest-puffing in the numbers that individual countries report.

Secondly, high-power stations, charging with direct current at a power above 22 kW, gain in importance. In 2012, that category made up 2,45% of the total charging network in Europe, and by 2020 the share had climbed to 11,14%. This is an important piece of information for the proof of concept which I am building up for my idea of Energy Ponds. The charging station I placed at the end of the pipeline in the concept of Energy Ponds, and which is supposed to earn a living for all the technologies and installations upstream of it, is supposed to be powered from a power storage facility. That means direct current and, most likely, high power.
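Just to make sure I read the table right, those two shares can be recomputed from the EAFO numbers above:

```python
# Share of high-power (>22 kW) points in the total charging network,
# recomputed from the EAFO table above for 2012 and 2020.
high_power = {2012: 257, 2020: 24_987}
total = {2012: 10_507, 2020: 224_237}

for year in (2012, 2020):
    share = high_power[year] / total[year]
    print(f"{year}: {share:.2%}")
```

The computation gives 2,45% for 2012 and 11,14% for 2020, matching the shares quoted above.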

On the whole, the www.eafo.eu site seems somehow more credible than Statista, with all due respect for the latter, and thus I am reporting some data they present on the fleet of EVs in Europe. Here it comes, in a few consecutive tables below:

Passenger EVs in Europe (source: European Alternative Fuels Observatory):

Year | BEV (pure electric) | PHEV (plug-in hybrid) | Total
2008 | 4 155 | – | 4 155
2009 | 4 841 | – | 4 841
2010 | 5 785 | – | 5 785
2011 | 13 395 | 163 | 13 558
2012 | 25 891 | 3 712 | 29 603
2013 | 45 662 | 32 474 | 78 136
2014 | 75 479 | 56 745 | 132 224
2015 | 119 618 | 125 770 | 245 388
2016 | 165 137 | 189 153 | 354 290
2017 | 245 347 | 254 473 | 499 820
2018 | 376 398 | 349 616 | 726 014
2019 | 615 878 | 479 706 | 1 095 584
2020 | 1 125 485 | 967 721 | 2 093 206

Light Commercial EVs in Europe (source: European Alternative Fuels Observatory):

Year | BEV (pure electric) | PHEV (plug-in hybrid) | Total
2008 | 253 | – | 253
2009 | 254 | – | 254
2010 | 309 | – | 309
2011 | 7 669 | – | 7 669
2012 | 9 527 | – | 9 527
2013 | 13 669 | – | 13 669
2014 | 10 049 | – | 10 049
2015 | 28 610 | – | 28 610
2016 | 40 926 | 1 | 40 927
2017 | 52 026 | 1 | 52 027
2018 | 76 286 | 1 | 76 287
2019 | 97 363 | 117 | 97 480
2020 | 120 711 | 1 054 | 121 765

Bus EVs in Europe (source: European Alternative Fuels Observatory):

Year | BEV (pure electric) | PHEV (plug-in hybrid) | Total
2008 | 27 | – | 27
2009 | 12 | – | 12
2010 | 123 | – | 123
2011 | 128 | – | 128
2012 | 286 | – | 286
2013 | 376 | – | 376
2014 | 389 | 40 | 429
2015 | 420 | 145 | 565
2016 | 686 | 304 | 990
2017 | 888 | 445 | 1 333
2018 | 1 608 | 486 | 2 094
2019 | 3 636 | 525 | 4 161
2020 | 5 311 | 550 | 5 861

Truck EVs in Europe (source: European Alternative Fuels Observatory):

Year | BEV (pure electric) | PHEV (plug-in hybrid) | Total
2008 | 5 | – | 5
2009 | 5 | – | 5
2010 | 6 | – | 6
2011 | 7 | – | 7
2012 | 8 | – | 8
2013 | 47 | – | 47
2014 | 58 | – | 58
2015 | 71 | – | 71
2016 | 113 | 39 | 152
2017 | 54 | 40 | 94
2018 | 222 | 40 | 262
2019 | 595 | 38 | 633
2020 | 983 | 29 | 1 012

Structure of EV fleet in Europe as regards the types of vehicles (source: European Alternative Fuels Observatory):

Year | Passenger EV | Light commercial EV | Bus EV | Truck EV
2008 | 93,58% | 5,70% | 0,61% | 0,11%
2009 | 94,70% | 4,97% | 0,23% | 0,10%
2010 | 92,96% | 4,97% | 1,98% | 0,10%
2011 | 63,47% | 35,90% | 0,60% | 0,03%
2012 | 75,09% | 24,17% | 0,73% | 0,02%
2013 | 84,72% | 14,82% | 0,41% | 0,05%
2014 | 92,62% | 7,04% | 0,30% | 0,04%
2015 | 89,35% | 10,42% | 0,21% | 0,03%
2016 | 89,39% | 10,33% | 0,25% | 0,04%
2017 | 90,34% | 9,40% | 0,24% | 0,02%
2018 | 90,23% | 9,48% | 0,26% | 0,03%
2019 | 91,46% | 8,14% | 0,35% | 0,05%
2020 | 94,21% | 5,48% | 0,26% | 0,05%
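The structure table can be cross-checked against the four fleet tables. A minimal verification for the 2020 row:

```python
# Cross-check of the 2020 row of the fleet-structure table against
# the 2020 totals of the four EAFO fleet tables above.
fleet_2020 = {
    "Passenger EV": 2_093_206,
    "Light commercial EV": 121_765,
    "Bus EV": 5_861,
    "Truck EV": 1_012,
}
total = sum(fleet_2020.values())
for category, count in fleet_2020.items():
    print(f"{category}: {count / total:.2%}")
```

That reproduces the 94,21% / 5,48% / 0,26% / 0,05% split reported for 2020.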

Summing it up a bit. The market for electric vehicles in Europe seems to be durably dominated by passenger cars. There is some fleet in the other categories of vehicles, and there is even some growth, but, for the moment, it all looks more like an experiment. Well, maybe electric buses are turning up in a somewhat more systematic way.

The proportion between the fleet of electric vehicles and the infrastructure of charging stations still seems to be in a phase where the latter adjusts to the abundance of the former. Generally, the number of charging stations seems to be growing more slowly than the fleet of EVs. Thus, for my own concept, I assume that the coefficient of 9 EVs per charging station, on average, will hold steady or slightly increase. For the moment, I take 9. I assume that my charging stations will have some 9 habitual customers, plus a fringe of incidental ones.

From there, I think in the following terms. The number of times the average customer charges their car depends on the distance they cover. Apparently, 100 km of driving corresponds to roughly 50 kWh of charge. I did not find detailed statistics on distances covered by electric vehicles as such, but I came across some Eurostat data on distances covered by all passenger vehicles taken together: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Passenger_mobility_statistics#Distance_covered . There is a lot of discrepancy between the 11 European countries studied for that metric, but the average is 12,49 km per day. My average 9 customers would make, in total, some 410,27 charging purchases of 50 kWh each per year. I checked the prices of fast charging with direct current: 2,3 PLN per 1 kWh in Poland[4], €0,22 per 1 kWh in France[5], $0,13 per 1 kWh in the US[6], and £0,25 per 1 kWh in the UK[7]. Once converted to US$, that gives $0,59 in Poland, $0,26 in France, $0,35 in the UK, and, of course, $0,13 in the US. Even at the highest of those prices, namely the Polish one, those 410,27 charging stops bring barely more than $12 000 a year.
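That $12 000 figure can be reproduced with a few lines of arithmetic. A sketch, assuming 9 habitual customers, 12,49 km a day each, the 100 km ≈ 50 kWh equivalence, and the four prices quoted above:

```python
# Back-of-the-envelope annual revenue of one charging station.
customers = 9
km_per_day = 12.49          # average daily distance per driver (Eurostat)
kwh_per_100km = 50.0        # assumed energy equivalence: 100 km of driving = 50 kWh

total_km_year = customers * km_per_day * 365
charges_year = total_km_year / 100.0          # one 50 kWh stop per 100 km driven
kwh_sold_year = charges_year * kwh_per_100km

prices_usd_per_kwh = {"Poland": 0.59, "France": 0.26, "UK": 0.35, "US": 0.13}
for country, price in prices_usd_per_kwh.items():
    print(f"{country}: ${kwh_sold_year * price:,.0f} a year")
```

Even at the Polish price, the highest of the four, the result stays barely above $12 000 a year.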

If I want a station able to fast-charge 2 EVs at the same time, counting 350 kW per charging pile (McKinsey 2018[8]), I need 700 kW in total. Investment in batteries runs at some $600 ÷ $800 per 1 kW (Cole & Frazier 2019[9]; Cole, Frazier, Augustine 2021[10]), thus 700 * ($600 ÷ $800) = $420 000 ÷ $560 000. There is no way that investment pays back with $12 000 a year in revenue, and I haven’t even started talking about paying off the investment in all the remaining infrastructure of Energy Ponds: ram pumps, elevated tanks, semi-artificial wetlands, and hydroelectric turbines.

Now, I reverse my thinking. Investment in the range of $420 000 ÷ $560 000, in the charging station and its batteries, gives a middle-of-the-interval value of $490 000. I found a paper by Zhang et al. (2019[11]) who claim that a charging station has a chance to pay off, as a business, when it sells some 5 000 000 kWh a year. When I put that back-to-back with the [50 kWh / 100 km] coefficient, it gives 10 000 000 km. Divided by the average annual distance covered by European drivers, thus by 4 558,55 km, it gives 2 193,68 customers per year, or some 6 charging stops per day. That seems hardly feasible with 9 customers. If I assume that one customer charges their electric vehicle no more than twice a week, then 6 chargings a day make 6*7 = 42 chargings a week, and therefore 21 customers.
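The reversed reasoning can be written down the same way, using the break-even volume from Zhang et al. (2019) and the average annual distance per driver used above:

```python
# How many habitual customers a station needs to sell the ~5,000,000 kWh/year
# break-even volume reported by Zhang et al. (2019).
breakeven_kwh_year = 5_000_000
kwh_per_100km = 50.0
km_per_driver_year = 4_558.55   # average annual distance per European driver

km_equivalent = breakeven_kwh_year / kwh_per_100km * 100   # driving needed to absorb that energy
customers_needed = km_equivalent / km_per_driver_year

print(f"{km_equivalent:,.0f} km")            # 10,000,000 km
print(f"{customers_needed:,.2f} customers")  # 2,193.68 customers
```

Next to the 9 habitual customers assumed above, this is the gap the whole business model has to bridge.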

I need to stop and think. Essentially, I have proven myself wrong. I had been assuming that putting a charging station for electric vehicles at the end of the internal value chain of Energy Ponds would solve the problem of making money on selling electricity. It turns out it creates even more problems. I need time to wrap my mind around it.


[1] http://www.avere-france.org/Uploads/Documents/161011498173a9d7b7d55aef7bdda9008a7e50cb38-barometre-des-immatriculations-decembre-2020(9).pdf

[2] https://www.reuters.com/article/us-eu-autos-electric-charging-idUSKBN2C023C

[3] https://www.statista.com/statistics/955443/number-of-electric-vehicle-charging-stations-in-europe/

[4] https://elo.city/news/ile-kosztuje-ladowanie-samochodu-elektrycznego

[5] https://particulier.edf.fr/fr/accueil/guide-energie/electricite/cout-recharge-voiture-electrique.html

[6] https://afdc.energy.gov/fuels/electricity_charging_home.html

[7] https://pod-point.com/guides/driver/cost-of-charging-electric-car

[8] McKinsey Center for Future Mobility, How Battery Storage Can Help Charge the Electric-Vehicle Market?, February 2018.

[9] Cole, Wesley, and A. Will Frazier. 2019. Cost Projections for Utility-Scale Battery Storage. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-73222. https://www.nrel.gov/docs/fy19osti/73222.pdf

[10] Cole, Wesley, A. Will Frazier, and Chad Augustine. 2021. Cost Projections for Utility-Scale Battery Storage: 2021 Update. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-79236. https://www.nrel.gov/docs/fy21osti/79236.pdf

[11] Zhang, J., Liu, C., Yuan, R., Li, T., Li, K., Li, B., … & Jiang, Z. (2019). Design scheme for fast charging station for electric vehicles with distributed photovoltaic power generation. Global Energy Interconnection, 2(2), 150-159. https://doi.org/10.1016/j.gloei.2019.07.003

It still seems to be experimenting

I carry on with the idea I had christened “Projet Aqueduc”. I am preparing an article on the subject, of the “proof of concept” type. I am writing it in English, and I told myself it would be a good idea to reformulate in French what I have written so far, so as to change the intellectual angle, limber up a bit and take some distance.

A proof of concept follows a logic similar to any other scientific article, except that instead of exploring and verifying a theoretical hypothesis of the type “things work in the ABCD way, under conditions RTYU”, I explore and verify the hypothesis that a practical concept, like that of “Projet Aqueduc”, has scientific grounds solid enough to make it worth working on and testing in real life. The scientific grounds come in two layers, as it were. The base layer consists in reviewing the literature to see whether someone has already described similar solutions, and the trick there is to explore different perspectives of similarity. Similar does not mean identical, does it? This literature review should yield a logical structure (a model) applicable to empirical research, with variables and constant parameters. Then comes the upper layer of the proof of concept, which consists in conducting the empirical research properly speaking with that model.

For the moment, I am at the base layer. I am thus reviewing the literature relevant to hydrological and hydroelectric solutions, while progressively forming a numerical model of “Projet Aqueduc”. In this update, I start with a brief recapitulation of the concept and then move on to what I have managed to find in the literature. The core concept of “Projet Aqueduc” consists in placing, in the course of a river, pumps which work on the principle of the hydraulic ram and thus use the kinetic energy of the water to pump part of that water out of the riverbed, towards marshland structures whose function is to retain water in the local ecosystem. The hydraulic ram can pump vertically as well as horizontally, and so, before being retained in the marshes, the water passes through a structure similar to an elevated aqueduct (hence the French name of the concept), with flow-equalizing reservoirs, and then descends towards the marshes through hydroelectric turbines. The latter produce energy which is then stored in a storage installation and, from there, sold so as to assure the financial survival of the whole structure. Wind and/or photovoltaic installations can be added in order to optimize the production of energy on the land occupied by the whole structure. You can find a more developed description of the concept in my update entitled “Le Catch 22 dans ce jardin d’Eden”. The feasibility I want to demonstrate is the capacity of this structure to finance itself entirely from sales of electricity, like a regular business, and thus to develop and last without public subsidies. The practical solution I am taking very seriously as a niche for selling electricity is a charging station for electric vehicles.

The basic approach I use in the proof of concept, thus my base model, consists in representing the concept in question as a chain of technologies:

>> TCES – energy storage

>> TCCS – charging station for electric vehicles

>> TCRP – hydraulic ram pumping

>> TCEW – elevated equalizing reservoirs

>> TCCW – water conveyance and siphoning

>> TCWS – artificial equipment of marshland structures

>> TCHE – hydroelectric turbines

>> TCSW – wind and photovoltaic installations

My starting intuition, which I intend to verify in my research through the literature, is that some of these technologies are rather predictable and well calibrated, whilst others are fuzzier and subject to change, hence less predictable. The predictable technologies are a sort of anchor for the whole concept, and the fuzzier ones are the object of experimentation.

I start the literature review with the environmental context, thus with hydrology. Variations in the water table, which is the scientific term for groundwater, seem to be the number one factor behind anomalies in water retention in artificial reservoirs (Neves, Nunes, & Monteiro 2020[1]). On the other hand, even without detailed hydrological modelling, there is substantial empirical evidence that the size of natural and artificial reservoirs in alluvial plains, as well as the density of their placement and the manner of exploiting them, has a major influence on practical access to water in local ecosystems. It seems that the size and density of wooded areas intervenes as an equalizing factor in the environmental influence of reservoirs (Chisola, Van der Laan, & Bristow 2020[2]). Compared to other types of technology, hydrology seems to lag somewhat in pace of innovation, and it also seems that methods of innovation management applied successfully elsewhere can work for hydrology, for example innovation networks or technology incubators (Wehn & Montalvo 2018[3]; Mvulirwenande & Wehn 2020[4]). Rural and agricultural hydrology seems to be more innovative than urban hydrology, by the way (Wong, Rogers & Brown 2020[5]).

What I find rather surprising is the apparent lack of scientific consensus about the quantity of water which human societies need. Every assessment on the subject starts with “a lot, and certainly too much”, and from there on, the “a lot” and the “too much” become rather fuzzy. I have found a single calculation so far, in Hogeboom (2020[6]), who maintains that the average person in developed countries consumes a total of 3800 litres of water a day, but it is a very holistic estimate which includes indirect consumption through goods, services and transport. What is consumed directly, via the tap and the toilet flush, apparently remains a mystery to science, unless science considers the topic too down-to-earth to deal with seriously.

There is an interesting research niche, which some of its representatives call “socio-hydrology”, which studies collective behaviour with respect to water and hydrological systems, and which is based on the empirical observation that said collective behaviours adapt, in a way both profound and pernicious, to the hydrological conditions the society in question lives with (Kumar et al. 2020[7]). It seems that we collectively adapt to increased water consumption through growing productivity in the exploitation of our hydrological resources, and average income per capita seems to be positively correlated with that productivity (Bagstad et al. 2020[8]). It would appear, then, that the accumulation and overlapping of numerous technologies, characteristic of developed countries, contributes to using water in a more and more productive way. In that context, there is interesting research by Mohamed et al. (2020[9]), who advance the thesis that an arid environment is not only a hydrological state but also a way of managing hydrological resources, on the basis of data which remain incomplete with respect to a rapidly changing situation.

There is a question which comes more or less naturally: in the wake of socio-hydrological adaptation, has anyone presented a concept similar to what I present as “Projet Aqueduc”? Well, I have found nothing identical, yet there are interestingly close ideas. In descriptive hydrology there is the concept of a pseudo-reservoir, meaning a structure like marshes or shallow aquifers which does not retain water statically, the way an artificial lake does, but slows down the circulation of water in the river basin enough to modify the hydrological conditions of the ecosystem (Harvey et al. 2009[10]; Phiri et al. 2021[11]). On the other hand, there is a team of Australian researchers who invented a structure they call by the acronym STORES, whose full name is “short-term off-river energy storage” (Lu et al. 2021[12]; Stocks et al. 2021[13]). STORES is a semi-artificial pumped-storage structure: an artificial reservoir is built on top of a natural hillock placed at some distance from the nearest river, and that reservoir receives water pumped artificially from the river. These Australian researchers advance, and give scientific evidence to support, the thesis that with a bit of ingenuity one can make that reservoir work in a closed loop with the river which feeds it, and thus create a water-retention system. STORES seems relatively the closest to my concept of “Projet Aqueduc”, and what is striking is that I invented my idea for the environment of European alluvial plains, whereas STORES was developed for the arid, quasi-desert environment of Australia.
Finally, there is the idea of so-called “rain gardens”, a technology of rainwater retention in the urban environment, in horticultural structures, often placed on the roofs of buildings (Bortolini & Zanin 2019[14], for example).

I can provisionally conclude that everything which touches on hydrology strictly speaking, within the framework of “Projet Aqueduc”, is subject to rather unpredictable change. What I have been able to infer from the literature resembles a soup boiling under a lid. There is potential for technological change, there is environmental and social pressure, but there are not yet any recurrent institutional mechanisms to connect the one to the other. The technologies TCEW (elevated equalizing reservoirs), TCCW (water conveyance and siphoning) and TCWS (artificial equipment of marshland structures) thus showing a fuzzy future, I move on to the TCRP technology of hydraulic ram pumping. I have found two Chinese articles, which follow each other chronologically and seem, moreover, to have been written by the same team of researchers: Guo et al. (2018[15]) and Li et al. (2021[16]). They show the hydraulic ram technology from an interesting angle. On the one hand, the Chinese seem to have given real momentum to innovation in this specific field, at least much more momentum than I have been able to observe in Europe. On the other hand, the estimates of the effective height to which water can be pumped with state-of-the-art hydraulic rams are, respectively, 50 metres in the 2018 article and 30 metres in the 2021 one. Given that the two articles seem to be the fruit of the same project, there appears to have been a fascination followed by a downward correction. Be that as it may, even the more conservative estimate of 30 metres is distinctly better than the 20 metres I had been assuming until now.

This relative elevation attainable with the hydraulic ram technology matters for the next technology in my chain, that of small hydroelectric turbines, the TCHE. The relative elevation of water and the flow per second are the two key parameters which determine the electric power produced (Cai, Ye & Gholinia 2020[17]), and it turns out that in “Projet Aqueduc”, with elevation and flow largely controlled through the hydraulic ram technology, the turbines become a little less dependent on natural conditions.

I have found a marvellously encyclopaedic review of the parameters relevant to small hydroelectric turbines in Hatata, El-Saadawi, & Saad (2019[18]). Electric power is thus calculated as: Power = water density (1000 kg/m3) * gravitational acceleration constant (9,8 m/s2) * net elevation (metres) * Q (flow per second, m3/s).
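Put into code, with illustrative values only (the 30-metre head from the ram-pump literature above, and a purely hypothetical flow of 0,1 m3/s):

```python
# Electric power of a small hydro installation, after Hatata, El-Saadawi & Saad (2019):
# P = rho * g * net head * flow. The 30 m head and 0.1 m3/s flow are illustrative values.
RHO = 1000.0  # water density, kg/m3
G = 9.8       # gravitational acceleration, m/s2

def hydro_power_kw(head_m: float, flow_m3_s: float) -> float:
    """Gross electric power in kW for a given net head and flow per second."""
    return RHO * G * head_m * flow_m3_s / 1000.0

print(round(hydro_power_kw(30.0, 0.1), 1))  # 29.4 kW
```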

The initial investment in such installations is calculated per unit of power, thus on the basis of 1 kilowatt, and breaks down into 6 categories: the construction of the water intake, the power house strictly speaking, the turbines, the generator, the auxiliary equipment, and the transformer together with the outdoor substation. I also tell myself that (given the structure of “Projet Aqueduc”) the investment in the construction of the water intake is in a sense equivalent to the system of hydraulic rams and elevated reservoirs. In any case:

>> the water intake, per 1 kW of power ($): 186,216 * Power^-0,2368 * Elevation^-0,597

>> the power house strictly speaking, per 1 kW of power ($): 1389,16 * Power^-0,2351 * Elevation^-0,0585

>> the turbines, per 1 kW of power ($):

@ the Kaplan turbine: 39398 * Power^-0,58338 * Elevation^-0,113901

@ the Francis turbine: 30462 * Power^-0,560135 * Elevation^-0,127243

@ the radial impulse turbine: 10486,65 * Power^-0,3644725 * Elevation^-0,281735

@ the Pelton turbine: 2 * the radial impulse turbine

>> the generator, per 1 kW of power ($): 1179,86 * Power^-0,1855 * Elevation^-0,2083

>> the auxiliary equipment, per 1 kW of power ($): 612,87 * Power^-0,1892 * Elevation^-0,2118

>> the transformer and the outdoor substation, per 1 kW of power ($): 281 * Power^0,1803 * Elevation^-0,2075

Once the electric power is calculated with the relative-elevation parameter assured by the hydraulic rams, I can calculate the initial investment in hydro-generation as the sum of the positions listed above. Hatata, El-Saadawi, & Saad (2019 op. cit.) also recommend multiplying such a sum by a factor of 1,13 (a factor of the “one never knows” type, then) and assuming that the current costs of annual exploitation will fall between 1% and 6% of the initial investment.
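The whole per-kilowatt cost model can be wrapped into one function. A sketch following the formulas above, with the Kaplan turbine chosen as an example and the 1,13 contingency factor included:

```python
# Initial investment per kW of installed power, after Hatata, El-Saadawi & Saad (2019).
# P: installed power in kW; H: net head (elevation) in metres. Kaplan variant; USD per kW.
def investment_usd_per_kw(P: float, H: float) -> float:
    intake      = 186.216 * P ** -0.2368  * H ** -0.597
    powerhouse  = 1389.16 * P ** -0.2351  * H ** -0.0585
    turbine     = 39398.0 * P ** -0.58338 * H ** -0.113901  # Kaplan turbine
    generator   = 1179.86 * P ** -0.1855  * H ** -0.2083
    auxiliary   = 612.87  * P ** -0.1892  * H ** -0.2118
    transformer = 281.0   * P ** 0.1803   * H ** -0.2075    # incl. outdoor substation
    return (intake + powerhouse + turbine + generator + auxiliary + transformer) * 1.13

# Per-kW cost falls as installed power grows: economies of scale.
print(round(investment_usd_per_kw(100.0, 30.0)))
print(round(investment_usd_per_kw(700.0, 30.0)))
```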

Syahputra & Soesanti (2021[19]) study the case of the Progo river, endowed with a quite modest flow of 6,696 cubic metres per second and situated in Kulon Progo Regency (a special region within Yogyakarta, Indonesia). The system of small hydroelectric turbines there supplies electricity to 962 local households, and creates a surplus of 4 263 951 kWh a year of energy to be resold to external consumers. In another article, Sterl et al. (2020[20]) study the case of Suriname and advance an interesting thesis, namely that the development of installations based on renewable energies creates a phenomenon of energy appetite which grows with the eating, and that such development in one source of energy (wind, for example) stimulates investment in installations based on other sources, thus hydro and photovoltaic.

These relatively recent studies corroborate those from a few years back, such as Vilanova & Balestieri (2014[21]) or Vieira et al. (2015[22]), with the general conclusion that small hydroelectric turbines have reached a degree of technological sophistication sufficient to generate an economically profitable quantity of energy. Moreover, it seems there is much to be gained in this field through optimizing the distribution of power across the different turbines. Back to the most recent publications, I have found quite robust feasibility studies for small hydroelectric turbines, which indicate that, provided one is ready to accept a payback period of some 10 to 11 years on the initial investment, small hydro can be exploited profitably even with a relative elevation below 20 metres (Arthur et al. 2020[23]; Ali et al. 2021[24]).

This is how I arrive at the final portion of the technological chain of “Projet Aqueduc”, thus at energy storage (TCES) together with TCCS, the charging station for electric vehicles. The power to be installed in a charging station seems to lie between 700 and 1000 kilowatts (Zhang et al. 2018[25]; McKinsey 2018[26]). Below 700 kilowatts, the station can become so hard to access for the average consumer, due to queues, that it may lose the trust of local customers. On the other hand, anything above 1000 kilowatts is really useful only at peak hours in dense urban environments. There are concept studies for charging stations where the energy-storage unit is fed from renewable sources (Al Wahedi & Bicer 2020[27]). Zhang et al. (2019[28]) present a ready-made business concept for a charging station located in the urban environment. Apparently, the break-even point lies around 5 100 000 kilowatt hours sold per year.

In terms of storage technology strictly speaking, Li-ion batteries seem to be the baseline solution for now, although a combination with fuel cells or with hydrogen looks promising (Al Wahedi & Bicer 2020 op. cit.; Sharma, Panwar & Tripathi 2020[29]). In general, for the moment, Li-ion batteries show relatively the most sustained pace of innovation (Tomaszewska et al. 2019[30]; de Simone & Piegari 2019[31]; Koohi-Fayegh & Rosen 2020[32]). A recent article by Elmeligy et al. (2021[33]) presents an interesting concept of a mobile storage unit which could move between several charging stations. As for the initial investment required for a charging station, it still seems to be experimenting, but the margin of manoeuvre is narrowing down to somewhere between $600 ÷ $800 per 1 kW of power (Cole & Frazier 2019[34]; Cole, Frazier, Augustine 2021[35]).


[1] Neves, M. C., Nunes, L. M., & Monteiro, J. P. (2020). Evaluation of GRACE data for water resource management in Iberia: a case study of groundwater storage monitoring in the Algarve region. Journal of Hydrology: Regional Studies, 32, 100734. https://doi.org/10.1016/j.ejrh.2020.100734

[2] Chisola, M. N., Van der Laan, M., & Bristow, K. L. (2020). A landscape hydrology approach to inform sustainable water resource management under a changing environment. A case study for the Kaleya River Catchment, Zambia. Journal of Hydrology: Regional Studies, 32, 100762. https://doi.org/10.1016/j.ejrh.2020.100762

[3] Wehn, U., & Montalvo, C. (2018). Exploring the dynamics of water innovation: Foundations for water innovation studies. Journal of Cleaner Production, 171, S1-S19. https://doi.org/10.1016/j.jclepro.2017.10.118

[4] Mvulirwenande, S., & Wehn, U. (2020). Fostering water innovation in Africa through virtual incubation: Insights from the Dutch VIA Water programme. Environmental Science & Policy, 114, 119-127. https://doi.org/10.1016/j.envsci.2020.07.025

[5] Wong, T. H., Rogers, B. C., & Brown, R. R. (2020). Transforming cities through water-sensitive principles and practices. One Earth, 3(4), 436-447. https://doi.org/10.1016/j.oneear.2020.09.012

[6] Hogeboom, R. J. (2020). The Water Footprint Concept and Water’s Grand Environmental Challenges. One earth, 2(3), 218-222. https://doi.org/10.1016/j.oneear.2020.02.010

[7] Kumar, P., Avtar, R., Dasgupta, R., Johnson, B. A., Mukherjee, A., Ahsan, M. N., … & Mishra, B. K. (2020). Socio-hydrology: A key approach for adaptation to water scarcity and achieving human well-being in large riverine islands. Progress in Disaster Science, 8, 100134. https://doi.org/10.1016/j.pdisas.2020.100134

[8] Bagstad, K. J., Ancona, Z. H., Hass, J., Glynn, P. D., Wentland, S., Vardon, M., & Fay, J. (2020). Integrating physical and economic data into experimental water accounts for the United States: Lessons and opportunities. Ecosystem Services, 45, 101182. https://doi.org/10.1016/j.ecoser.2020.101182

[9] Mohamed, M. M., El-Shorbagy, W., Kizhisseri, M. I., Chowdhury, R., & McDonald, A. (2020). Evaluation of policy scenarios for water resources planning and management in an arid region. Journal of Hydrology: Regional Studies, 32, 100758. https://doi.org/10.1016/j.ejrh.2020.100758

[10] Harvey, J.W., Schaffranek, R.W., Noe, G.B., Larsen, L.G., Nowacki, D.J., O’Connor, B.L., 2009. Hydroecological factors governing surface water flow on a low-gradient floodplain. Water Resour. Res. 45, W03421, https://doi.org/10.1029/2008WR007129.

[11] Phiri, W. K., Vanzo, D., Banda, K., Nyirenda, E., & Nyambe, I. A. (2021). A pseudo-reservoir concept in SWAT model for the simulation of an alluvial floodplain in a complex tropical river system. Journal of Hydrology: Regional Studies, 33, 100770. https://doi.org/10.1016/j.ejrh.2020.100770.

[12] Lu, B., Blakers, A., Stocks, M., & Do, T. N. (2021). Low-cost, low-emission 100% renewable electricity in Southeast Asia supported by pumped hydro storage. Energy, 121387. https://doi.org/10.1016/j.energy.2021.121387

[13] Stocks, M., Stocks, R., Lu, B., Cheng, C., & Blakers, A. (2021). Global atlas of closed-loop pumped hydro energy storage. Joule, 5(1), 270-284. https://doi.org/10.1016/j.joule.2020.11.015

[14] Bortolini, L., & Zanin, G. (2019). Reprint of: Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban forestry & urban greening, 37, 74-86. https://doi.org/10.1016/j.ufug.2018.07.003

[15] Guo, X., Li, J., Yang, K., Fu, H., Wang, T., Guo, Y., … & Huang, W. (2018). Optimal design and performance analysis of hydraulic ram pump system. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 232(7), 841-855. https://doi.org/10.1177%2F0957650918756761

[16] Li, J., Yang, K., Guo, X., Huang, W., Wang, T., Guo, Y., & Fu, H. (2021). Structural design and parameter optimization on a waste valve for hydraulic ram pumps. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 235(4), 747–765. https://doi.org/10.1177/0957650920967489

[17] Cai, X., Ye, F., & Gholinia, F. (2020). Application of artificial neural network and Soil and Water Assessment Tools in evaluating power generation of small hydropower stations. Energy Reports, 6, 2106-2118. https://doi.org/10.1016/j.egyr.2020.08.010

[18] Hatata, A. Y., El-Saadawi, M. M., & Saad, S. (2019). A feasibility study of small hydro power for selected locations in Egypt. Energy Strategy Reviews, 24, 300-313. https://doi.org/10.1016/j.esr.2019.04.013

[19] Syahputra, R., & Soesanti, I. (2021). Renewable energy systems based on micro-hydro and solar photovoltaic for rural areas: A case study in Yogyakarta, Indonesia. Energy Reports, 7, 472-490. https://doi.org/10.1016/j.egyr.2021.01.015

[20] Sterl, S., Donk, P., Willems, P., & Thiery, W. (2020). Turbines of the Caribbean: Decarbonising Suriname’s electricity mix through hydro-supported integration of wind power. Renewable and Sustainable Energy Reviews, 134, 110352. https://doi.org/10.1016/j.rser.2020.110352

[21] Vilanova, M. R. N., & Balestieri, J. A. P. (2014). Hydropower recovery in water supply systems: Models and case study. Energy Conversion and Management, 84, 414-426. https://doi.org/10.1016/j.enconman.2014.04.057

[22] Vieira, D. A. G., Guedes, L. S. M., Lisboa, A. C., & Saldanha, R. R. (2015). Formulations for hydroelectric energy production with optimality conditions. Energy Conversion and Management, 89, 781-788. https://doi.org/10.1016/j.enconman.2014.10.048

[23] Arthur, E., Anyemedu, F. O. K., Gyamfi, C., Asantewaa-Tannor, P., Adjei, K. A., Anornu, G. K., & Odai, S. N. (2020). Potential for small hydroPuissance development in the Lower Pra River Basin, Ghana. Journal of Hydrology: Regional Studies, 32, 100757. https://doi.org/10.1016/j.ejrh.2020.100757

[24] Ali, M., Wazir, R., Imran, K., Ullah, K., Janjua, A. K., Ulasyar, A., … & Guerrero, J. M. (2021). Techno-economic assessment and sustainability impact of hybrid energy systems in Gilgit-Baltistan, Pakistan. Energy Reports, 7, 2546-2562. https://doi.org/10.1016/j.egyr.2021.04.036

[25] Zhang, Y., He, Y., Wang, X., Wang, Y., Fang, C., Xue, H., & Fang, C. (2018). Modeling of fast charging station equipped with energy storage. Global Energy Interconnection, 1(2), 145-152. DOI:10.14171/j.2096-5117.gei.2018.02.006

[26] McKinsey Center for Future Mobility, How Battery Storage Can Help Charge the Electric-Vehicle Market?, February 2018,

[27] Al Wahedi, A., & Bicer, Y. (2020). Development of an off-grid electrical vehicle charging station hybridized with renewables including battery cooling system and multiple energy storage units. Energy Reports, 6, 2006-2021. https://doi.org/10.1016/j.egyr.2020.07.022

[28] Zhang, J., Liu, C., Yuan, R., Li, T., Li, K., Li, B., … & Jiang, Z. (2019). Design scheme for fast charging station for electric vehicles with distributed photovoltaic power generation. Global Energy Interconnection, 2(2), 150-159. https://doi.org/10.1016/j.gloei.2019.07.003

[29] Sharma, S., Panwar, A. K., & Tripathi, M. M. (2020). Storage technologies for electric vehicles. Journal of traffic and transportation engineering (english edition), 7(3), 340-361. https://doi.org/10.1016/j.jtte.2020.04.004

[30] Tomaszewska, A., Chu, Z., Feng, X., O’Kane, S., Liu, X., Chen, J., … & Wu, B. (2019). Lithium-ion battery fast charging: A review. ETransportation, 1, 100011. https://doi.org/10.1016/j.etran.2019.100011

[31] De Simone, D., & Piegari, L. (2019). Integration of stationary batteries for fast charge EV charging stations. Energies, 12(24), 4638. https://doi.org/10.3390/en12244638

[32] Koohi-Fayegh, S., & Rosen, M. A. (2020). A review of energy storage types, applications and recent developments. Journal of Energy Storage, 27, 101047. https://doi.org/10.1016/j.est.2019.101047

[33] Elmeligy, M. M., Shaaban, M. F., Azab, A., Azzouz, M. A., & Mokhtar, M. (2021). A Mobile Energy Storage Unit Serving Multiple EV Charging Stations. Energies, 14(10), 2969. https://doi.org/10.3390/en14102969

[34] Cole, Wesley, and A. Will Frazier. 2019. Cost Projections for Utility-Scale Battery Storage.

Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-73222. https://www.nrel.gov/docs/fy19osti/73222.pdf

[35] Cole, Wesley, A. Will Frazier, and Chad Augustine. 2021. Cost Projections for UtilityScale Battery Storage: 2021 Update. Golden, CO: National Renewable Energy

Laboratory. NREL/TP-6A20-79236. https://www.nrel.gov/docs/fy21osti/79236.pdf.


I keep working on a proof-of-concept paper for my idea of ‘Energy Ponds’. In my last two updates, namely in ‘Seasonal lakes’, and in ‘Le Catch 22 dans ce jardin d’Eden’, I sort of refreshed my ideas and set the canvas for painting. Now, I start sketching. What exact concept do I want to prove, and what kind of evidence can possibly confirm (or discard) that concept? The idea I am working on has a few different layers. The most general vision is that of purposefully storing water in spongy structures akin to swamps or wetlands. These can bear various degrees of artificial construction, and can stretch from natural wetlands, through semi-artificial ones, all the way to urban technologies such as rain gardens and sponge cities. The most general proof corresponding to that vision is a review of publicly available research – peer-reviewed papers, preprints, databases etc. – on that general topic.

Against that general landscape, I sketch two more specific concepts: the idea of using ram pumps as a technology of forced water retention, and the possibility of locating those wetland structures in broadly understood Northern Europe, thus my home region. Correspondingly, I need to provide two streams of scientific proof: a review of literature on the technology of ram pumping, on the one hand, and on the actual natural conditions, as well as land management policies, in Europe, on the other hand. I need to consider the environmental impact of creating new wetland-like structures in Northern Europe, as well as the socio-economic impact, and the legal feasibility of conducting such projects.

Next, I sort of build upwards. I hypothesise a complex technology, where ram-pumped water from the river goes into light elevated tanks, and from there, using the principle of the Roman siphon, cascades down into wetlands, and through a series of small hydro-electric turbines. The turbines generate electricity, which is stored and then sold outside.

At that point, I have a technology of water retention coupled with a technology of energy generation and storage. I further advance a second hypothesis that such a complex technology will be economically sustainable based on the corresponding sales of electricity. In other words, I want to figure out a configuration of that technology, which will be suitable for communities which either don’t care at all, or simply cannot afford to care about the positive environmental impact of the solution proposed.

Proof of concept for those two hypotheses is going to be complex. First, I need to pass in review the available technologies for energy storage and energy generation, as well as for the construction of elevated tanks and Roman siphons. I need to take into account various technological mixes, including the incorporation of wind turbines and photovoltaic installations into the whole thing, in order to optimize the output of energy. I will try to look for documented examples of small hydro-generation coupled with wind and solar. Then, I have to comb the literature as regards mathematical models for the optimization of such power systems and put them against my own idea of reverse engineering back from the storage technology. I take the technology of energy storage which seems the most suitable for the local market of energy, and for the hypothetical charging from a mixed hydro-wind-solar generation. I build a control scenario where that storage facility just buys energy at wholesale prices from the power grid and then resells it. Next, I configure the hydro-wind-solar generation so as to make it economically competitive against the supply of energy from the power grid.
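That control scenario can be sketched numerically. Everything below is a placeholder assumption of mine – the prices, the volume, and the round-trip efficiency are illustrative, not data:

```python
# Sketch of the control scenario: a storage facility that simply buys energy
# from the grid at wholesale prices and resells it. All numbers are
# hypothetical placeholders, not market data.

def storage_arbitrage_margin(volume_mwh, buy_price, sell_price,
                             round_trip_efficiency=0.9):
    """Gross margin of a pure buy-store-resell operation over a period."""
    energy_sold = volume_mwh * round_trip_efficiency  # charge/discharge losses
    return energy_sold * sell_price - volume_mwh * buy_price

# A hydro-wind-solar configuration is economically competitive when its margin
# per period beats this benchmark.
benchmark = storage_arbitrage_margin(volume_mwh=1000, buy_price=50.0,
                                     sell_price=80.0)
```

The benchmark is deliberately naive: whatever generation mix I configure has to beat a facility that does nothing but arbitrage the grid.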

Now, I sketch. I keep in mind the levels of conceptualization outlined above, and I quickly move through published science along that logical path, quickly picking a few articles for each topic. I am going to put those nonchalantly collected pieces of science back-to-back and see how, and whether at all, it all makes sense together. I start with Bortolini & Zanin (2019[1]), who study the impact of rain gardens on water management in cities of the Veneto region in Italy. Rain gardens are vegetal structures, set up in the urban environment, with the specific purpose of retaining rainwater. Bortolini & Zanin (2019 op. cit.) use a simplified water balance, where the rain garden absorbs and retains a volume ‘I’ of water (‘I’ stands for infiltration), which is the difference between precipitations, on the one hand, and the sum total of overflowing runoff from the rain garden plus evapotranspiration of water, on the other hand. Soil and plants in the rain garden have a given top capacity to retain water. Green plants typically hold 80–95% of their mass in water, whilst trees hold about 50%. Soil is considered wet when it contains about 25% of water. The rain garden absorbs water from precipitations at a rate determined by hydraulic conductivity, which means the relative ease with which a fluid (usually water) moves through pore spaces or fractures, and which depends on the intrinsic permeability of the material, the degree of saturation, and on the density and viscosity of the fluid.
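As a quick aside, that simplified water balance can be written down in a few lines of code. The numbers below are illustrative placeholders of mine, not values from Bortolini & Zanin:

```python
# A minimal sketch of the simplified water balance described above:
# infiltration I = precipitation - (runoff + evapotranspiration),
# capped by the structure's retention capacity. All inputs are illustrative.

def infiltration(precipitation_mm, runoff_mm, evapotranspiration_mm,
                 capacity_mm):
    """Water (in mm of water column) retained by the rain garden."""
    i = precipitation_mm - (runoff_mm + evapotranspiration_mm)
    return max(0.0, min(i, capacity_mm))

retained = infiltration(precipitation_mm=60.0, runoff_mm=5.0,
                        evapotranspiration_mm=3.0, capacity_mm=40.0)
retention_rate = retained / 60.0  # share of total precipitation retained
```

The cap on capacity is the point: beyond it, extra precipitation just becomes runoff, which is exactly the constraint of hydraulic conductivity discussed next.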

As I look at it, I can see that the actual capacity of water retention in a rain garden can hardly be determined a priori, unless we really have a lot of empirical data from the given location. For a new location of a new rain garden, it is safe to assume that we need an experimental phase when we empirically assess the retentive capacity of the rain garden with different configurations of soil and vegetation used. That leads me to generalizing that any porous structure we use for retaining rainwater, be it something like wetlands, or something like a rain garden in an urban environment, has a natural constraint of hydraulic conductivity, and that constraint determines the percentage of precipitations, and the metric volume thereof, which the given structure can retain.

Bortolini & Zanin (2019 op. cit.) bring forth empirical results which suggest that properly designed rain gardens located on rooftops in a city can absorb from 87% to 93% of the total input of water they receive. Cool. I move on towards the issue of water management in Europe, with a working paper by Fribourg-Blanc, B. (2018[2]), and the most important takeaway from that paper is that we have something called the European Platform for Natural Water Retention Measures AKA http://nwrm.eu , and that thing has both good and bad properties. The good thing about http://nwrm.eu is that it contains loads of data and publications about projects in Natural Water Retention in Europe. The bad thing is that http://nwrm.eu is not a secure website. Another paper, by Tóth et al. (2017[3]), tells me that another analytical tool exists, namely the European Soil Hydraulic Database (EU-SoilHydroGrids ver1.0).

So far, so good. I already know there is data and science for evaluating, with acceptable precision, the optimal structure and the capacity for water retention in porous structures such as rain gardens or wetlands, in the European context. I move to the technology of ram pumps. I grab two papers: Guo et al. (2018[4]), and Li et al. (2021[5]). They show me two important things. Firstly, China seems to be burning rubber in the field of ram pumping technology. Secondly, the greatest uncertainty as for that technology seems to be the actual height to which those ram pumps can elevate water, or, when coupled with hydropower, the hydraulic head which ram pumps can create. Guo et al. (2018 op. cit.) claim that 50 meters of elevation is the maximum which is both feasible and efficient. Li et al. (2021 op. cit.) are sort of vertically more conservative and claim that the whole thing should be kept below 30 meters of elevation. Both are better than 20 meters, which is what I thought was the best one could expect. Greater elevation of water means greater hydraulic head, and more hydropower to be generated. It pays off to review literature.
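To see what those elevation figures imply, here is a back-of-the-envelope sketch using the standard hydropower formula P = ρ·g·Q·H·η. The flow rate and turbine efficiency are my own arbitrary assumptions, not figures from Guo et al. or Li et al.:

```python
# Illustrative hydropower estimate for the two elevation figures discussed:
# 30 m (Li et al.) versus 50 m (Guo et al.) of hydraulic head.
# Flow and efficiency below are assumed, not sourced.

RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def hydro_power_kw(flow_m3_s, head_m, efficiency=0.8):
    """Electric power output in kW for a given flow and hydraulic head."""
    return RHO * G * flow_m3_s * head_m * efficiency / 1000.0

# With a modest assumed flow of 0.1 m^3/s:
p30 = hydro_power_kw(0.1, 30.0)  # roughly 23.5 kW at 30 m of head
p50 = hydro_power_kw(0.1, 50.0)  # roughly 39.2 kW at 50 m of head
```

The linear dependence on head is why those extra 20 or 30 meters of ram-pumped elevation matter so much for the economics of the whole thing.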

Lots of uncertainty as for the actual capacity and efficiency of ram pumping means quick technological change in that domain. This is economically interesting. It means that investing in projects which involve ram pumping means investing in a quickly changing technology. That means both high hopes for an even better technology in the immediate future, and high needs for cash in the balance sheets of the entities involved.

I move to the end-of-the-pipeline technology in my concept, namely to energy storage. I study a paper by Koohi-Fayegh & Rosen (2020[6]), which suggests two things. Firstly, for a standalone installation in renewable energy, whatever combination of small hydropower, photovoltaic and small wind turbines we think of, lithium-ion batteries are always a good idea for power storage. Secondly, when we work with hydrogeneration, thus when we have any hydraulic head to make electricity with, pumped storage comes sort of naturally. That leads me to an idea which looks even crazier than what I have imagined so far: what if we created an elevated garden with a strong capacity for water retention? Ram pumps take water from the river and pump it up onto elevated platforms with rain gardens on them. Those platforms can be optimized as for their absorption of sunlight, and thus as regards their interaction with whatever is underneath them.

I move to small hydro, and I find two papers, namely Couto & Olden (2018[7]), and Lange et al. (2018[8]), which are both interestingly critical as regards small hydropower installations. Lange et al. (2018 op. cit.) claim that the overall environmental impact of small hydro should be closely monitored. Couto & Olden (2018 op. cit.) go further and claim there is a ‘craze’ about small hydro, and that craze has already led to overinvestment in the corresponding installations, which can be damaging both environmentally and economically (overinvestment means financial collapse of many projects). Those critical views in mind, I turn to another paper, by Zhou et al. (2019[9]), who approach the issue as a case for optimization, within a broader framework called the ‘Water-Food-Energy’ Nexus, WFE for closer friends. This paper, just as a few others it cites (Ming et al. 2018[10]; Uen et al. 2018[11]), advocates for using artificial intelligence in order to optimize for WFE.

Zhou et al. (2019 op.cit.) set three hydrological scenarios for empirical research and simulation. The baseline scenario corresponds to an average hydrological year, with average water levels and average precipitations. Next to it are: a dry year and a wet year. The authors assume that the cost of installation in small hydropower is $600 per kW on average.  They simulate the use of two technologies for hydro-electric turbines: Pelton and Vortex. Pelton turbines are optimized paddled wheels, essentially, whilst the Vortex technology consists in creating, precisely, a vortex of water, and that vortex moves a rotor placed in the middle of it.

Zhou et al. (2019 op.cit.) create a multi-objective function to optimize, with the following desired outcomes:

>> Objective 1: maximize the reliability of water supply by minimizing the probability of real water shortage occurring.

>> Objective 2: maximize water storage given the capacity of the reservoir. Note: reservoir is understood hydrologically, as any structure, natural or artificial, able to retain water.

>> Objective 3: maximize the average annual output of small hydro-electric turbines

Those objectives are being achieved under the corresponding sets of constraints. For water supply those constraints all turn around water balance, whilst for energy output it is more about the engineering properties of the technologies taken into account. The three objectives are hierarchized. First, Zhou et al. (2019 op.cit.) perform an optimization regarding Objectives 1 and 2, thus in order to find the optimal hydrological characteristics to meet, and then, on the basis of these, they optimize the technology to put in place, as regards power output.

The general tool for optimization used by Zhou et al. (2019 op.cit.) is a genetic algorithm called NSGA-II, AKA the Non-dominated Sorting Genetic Algorithm II. Apparently, NSGA-II has a long and successful track record in engineering, including water management and energy (see e.g. Chang et al. 2016[12]; Jain & Sachdeva 2017[13]; Assaf & Shabani 2018[14]). I want to stop for a while here and have a good look at this specific algorithm. The logic of NSGA-II starts with creating an initial population of cases/situations/configurations etc. Each case is a combination of observations as regards the objectives to meet, and the actual values observed in constraining variables, e.g. precipitations for water balance or hydraulic head for the output of hydropower. In the conventional lingo of this algorithm, those cases are called chromosomes. Yes, I know, a hydro-electric turbine placed in the context of water management hardly looks like a chromosome, but it is a genetic algorithm, and it just sounds fancy to use that biologically marked vocabulary.

As for me, I like staying close to real life, and therefore I call those cases solutions rather than chromosomes. Anyway, the underlying math is the same. Once I have that initial population of real-life solutions, I calculate two parameters for each of them: their rank as regards the objectives to maximize, and their so-called ‘crowded distance’. Ranking is done with the procedure of fast non-dominated sorting. It is a comparison in pairs, where solution A dominates another solution B if and only if no objective of A is worse than the corresponding objective of B, and at least one objective of A is better than that objective of B. The solution which scores the most wins in such pairwise comparisons is at the top of the ranking, the one with the second-highest number of wins is second, etc. Crowded distance is essentially the same as what I call the coefficient of coherence in my own research: Euclidean distance (or another mathematical distance) is calculated for each pair of solutions. As a result, each solution is associated with k Euclidean distances to the k remaining solutions, which can be reduced to an average distance, i.e. the crowded distance.
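My reading of those two parameters can be sketched in code. Mind you, this follows the simplified description above (counting dominance wins, averaging pairwise Euclidean distances); canonical NSGA-II sorts solutions into Pareto fronts and measures crowding per objective. The population values are made up:

```python
# Sketch of the two per-solution parameters described in the text:
# (1) rank via pairwise dominance wins, (2) 'crowded distance' as the
# average Euclidean distance to all other solutions. Objectives maximized.
import math

def dominates(a, b):
    """a dominates b: no objective worse, at least one strictly better."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def dominance_wins(population):
    """For each solution, count how many peers it dominates."""
    return [sum(dominates(a, b) for b in population if b is not a)
            for a in population]

def crowded_distance(population):
    """Average Euclidean distance from each solution to all the others."""
    n = len(population)
    return [sum(math.dist(a, b) for b in population if b is not a) / (n - 1)
            for a in population]

pop = [(1.0, 2.0), (2.0, 3.0), (0.5, 0.5)]
wins = dominance_wins(pop)  # (2.0, 3.0) dominates both other solutions
```

A high average crowded distance flags a lone wolf; a low one flags a solution with many near-duplicates in the population.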

In the next step, an offspring population is produced from that original population of solutions. It is created by taking relatively the fittest solutions from the initial population, recombining their characteristics in a 50/50 proportion, and endowing them with some capacity for endogenous mutation. Two out of these three genetic functions are de facto controlled. We choose relatively the fittest by establishing some kind of threshold for fitness, as regards the objectives pursued. It can be a required minimum, a quantile (e.g. the third quartile), or an average. In the first case, we arbitrarily impose a scale of fitness on our population, whilst in the latter two the hierarchy of fitness is generated endogenously from the population of solutions observed. Fitness can have shades and grades, by weighing the score in non-dominated sorting, thus the number of wins over other solutions, on the one hand, and the crowded distance on the other hand. In other words, we can go for solutions which have a lot of similar ones in the population (i.e. which have a low average crowded distance), or, conversely, we can privilege lone wolves, with a high average Euclidean distance from anything else on the plate.

The capacity for endogenous mutation means that we can allow variance in all, or in just selected, variables which make each solution. The number of degrees of freedom we allow in each variable dictates the number of mutations that can be created. Once again, discretionary power is given to the analyst: we can choose the genetic traits which can mutate and we can determine their freedom to mutate. In an engineering problem, technological and environmental constraints should normally put a cap on the capacity for mutation. Still, we can think about an algorithm which definitely kicks the lid off the barrel of reality, and which generates mutations in the wildest registers of the variables considered. It is a way to simulate a process where the presence of strong outliers has a strong impact on the whole population.
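The recombination-and-mutation step could look like this. The 50/50 crossover and the cap on the freedom to mutate are the two knobs just discussed; the `freedom` bound and the example parents are arbitrary assumptions of mine:

```python
# Sketch of the offspring step: 50/50 recombination of two parent solutions,
# plus endogenous mutation bounded by a 'freedom' parameter. The analyst
# decides which traits may mutate at all via 'mutable'.
import random

def crossover(parent_a, parent_b):
    """Each trait is taken from either parent with equal probability."""
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

def mutate(solution, freedom=0.1, mutable=None):
    """Perturb the selected traits by up to +/- freedom (relative)."""
    mutable = set(range(len(solution))) if mutable is None else set(mutable)
    return [x + random.uniform(-freedom, freedom) * abs(x) if i in mutable
            else x
            for i, x in enumerate(solution)]

random.seed(42)
child = mutate(crossover([1.0, 2.0, 3.0], [1.5, 2.5, 3.5]), freedom=0.05)
```

Setting `mutable` to a subset of trait indices is the gentle Mother Nature; letting `freedom` grow wild simulates the strong-outlier scenario from the text.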

The same discretionary cap on the freedom to evolve is to be found when we repeat the process. The offspring generation of solutions goes essentially through the same process as the initial one, to produce further offspring: ranking by non-dominated sorting and crowded distance, selection of the fittest, recombination, and endogenous mutation. At the starting point of this process, we can play two alternative versions of Mother Nature. We can be a mean Mother Nature, and shave off from the offspring population all those baby-solutions which do not meet the initial constraints, e.g. zero supply of water in this specific case. On the other hand, we can be a more lenient Mother Nature, allow those strange, dysfunctional mutants to keep going, and see what happens to the whole species after a few rounds of genetic reproduction.

With each generation, we compute an average crowded distance between all the solutions created, i.e. we check how diverse the species is in this generation. As long as diversity grows or remains constant, we assume that the divergence between the solutions generated grows or stays the same. Similarly, we can compute an even more general crowded distance between each pair of generations, and thereby assess how far the current generation has gone from the parent one. We keep going until we observe that the intra-generational crowded distance and the inter-generational one start narrowing down asymptotically to zero. In other words, we consider stopping the evolution when solutions in the game become highly similar to each other and when genetic change stops bringing significant functional change.
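That stopping rule can be sketched as follows. The tolerance is an arbitrary choice of mine, and `math.dist` stands in for whatever metric one prefers:

```python
# Sketch of the stopping rule: evolution halts when both the average
# intra-generational distance and the average distance between successive
# generations shrink below a small tolerance.
import math

def mean_pairwise_distance(population):
    """Average distance between all pairs within one generation."""
    pairs = [(a, b) for i, a in enumerate(population)
             for b in population[i + 1:]]
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def mean_cross_distance(pop_a, pop_b):
    """Average distance between a parent generation and its offspring."""
    return (sum(math.dist(a, b) for a in pop_a for b in pop_b)
            / (len(pop_a) * len(pop_b)))

def converged(parent_gen, child_gen, tol=1e-3):
    """True once solutions are near-identical within and across generations."""
    return (mean_pairwise_distance(child_gen) < tol
            and mean_cross_distance(parent_gen, child_gen) < tol)
```

In other words: stop when the species has flattened out both internally and relative to its parents.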

Cool. When I want to optimize my concept of Energy Ponds, I need to add the objective of a constrained return on investment, based on the sales of electricity. In comparison to Zhou et al. (2019 op.cit.), I need to add a third level of selection. I start with selecting environmentally the solutions which make sense in terms of water management. In the next step, I produce a range of solutions which assure the greatest output of power, in a possible mix with solar and wind. Then I take those and filter them through the NSGA-II procedure as regards their capacity to sustain themselves financially. Mind you, I can shake it off a bit by fusing together those levels of selection. I can simulate extreme cases, when, for example, good economic sustainability becomes an environmental problem. Still, it would be rather theoretical. In Europe, non-compliance with environmental requirements makes a project a non-starter per se: you just can't get the necessary permits if your hydropower project messes with the hydrological constraints legally imposed on the given location.

Cool. It all starts making sense. There is apparently a lot of stir in the technology of making semi-artificial structures for retaining water, such as rain gardens and wetlands. That means a lot of experimentation, and that experimentation can be guided and optimized by testing the fitness of alternative solutions for meeting objectives of water management, power output and economic sustainability. I have some starting data, to produce the initial generation of solutions, and then try to optimize them with an algorithm such as NSGA-II.


[1] Bortolini, L., & Zanin, G. (2019). Reprint of: Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban Forestry & Urban Greening, 37, 74-86. https://doi.org/10.1016/j.ufug.2018.07.003

[2] Fribourg-Blanc, B. (2018, April). Natural Water Retention Measures (NWRM), a tool to manage hydrological issues in Europe?. In EGU General Assembly Conference Abstracts (p. 19043). https://ui.adsabs.harvard.edu/abs/2018EGUGA..2019043F/abstract

[3] Tóth, B., Weynants, M., Pásztor, L., & Hengl, T. (2017). 3D soil hydraulic database of Europe at 250 m resolution. Hydrological Processes, 31(14), 2662-2666. https://onlinelibrary.wiley.com/doi/pdf/10.1002/hyp.11203

[4] Guo, X., Li, J., Yang, K., Fu, H., Wang, T., Guo, Y., … & Huang, W. (2018). Optimal design and performance analysis of hydraulic ram pump system. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 232(7), 841-855. https://doi.org/10.1177%2F0957650918756761

[5] Li, J., Yang, K., Guo, X., Huang, W., Wang, T., Guo, Y., & Fu, H. (2021). Structural design and parameter optimization on a waste valve for hydraulic ram pumps. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 235(4), 747–765. https://doi.org/10.1177/0957650920967489

[6] Koohi-Fayegh, S., & Rosen, M. A. (2020). A review of energy storage types, applications and recent developments. Journal of Energy Storage, 27, 101047. https://doi.org/10.1016/j.est.2019.101047

[7] Couto, T. B., & Olden, J. D. (2018). Global proliferation of small hydropower plants–science and policy. Frontiers in Ecology and the Environment, 16(2), 91-100. https://doi.org/10.1002/fee.1746

[8] Lange, K., Meier, P., Trautwein, C., Schmid, M., Robinson, C. T., Weber, C., & Brodersen, J. (2018). Basin‐scale effects of small hydropower on biodiversity dynamics. Frontiers in Ecology and the Environment, 16(7), 397-404.  https://doi.org/10.1002/fee.1823

[9] Zhou, Y., Chang, L. C., Uen, T. S., Guo, S., Xu, C. Y., & Chang, F. J. (2019). Prospect for small-hydropower installation settled upon optimal water allocation: An action to stimulate synergies of water-food-energy nexus. Applied Energy, 238, 668-682. https://doi.org/10.1016/j.apenergy.2019.01.069

[10] Ming, B., Liu, P., Cheng, L., Zhou, Y., & Wang, X. (2018). Optimal daily generation scheduling of large hydro–photovoltaic hybrid power plants. Energy Conversion and Management, 171, 528-540. https://doi.org/10.1016/j.enconman.2018.06.001

[11] Uen, T. S., Chang, F. J., Zhou, Y., & Tsai, W. P. (2018). Exploring synergistic benefits of Water-Food-Energy Nexus through multi-objective reservoir optimization schemes. Science of the Total Environment, 633, 341-351. https://doi.org/10.1016/j.scitotenv.2018.03.172

[12] Chang, F. J., Wang, Y. C., & Tsai, W. P. (2016). Modelling intelligent water resources allocation for multi-users. Water resources management, 30(4), 1395-1413. https://doi.org/10.1007/s11269-016-1229-6

[13] Jain, V., & Sachdeva, G. (2017). Energy, exergy, economic (3E) analyses and multi-objective optimization of vapor absorption heat transformer using NSGA-II technique. Energy Conversion and Management, 148, 1096-1113. https://doi.org/10.1016/j.enconman.2017.06.055

[14] Assaf, J., & Shabani, B. (2018). Multi-objective sizing optimisation of a solar-thermal system integrated with a solar-hydrogen combined heat and power system, using genetic algorithm. Energy Conversion and Management, 164, 518-532. https://doi.org/10.1016/j.enconman.2018.03.026

Big Black Swans

Oops! Another big break in blogging. Sometimes life happens so fast that thoughts in my head run faster than I can possibly write about them. This is one of those sometimeses. Topics for research and writing abound, projects abound, everything is changing at a pace which proves challenging to a gentleman in his 50s, such as I am. Yes, I am a gentleman: even when I want to murder someone, I go out of my way to stay polite.

I need to do that thing I periodically do, on this blog. I need to use published writing as a method of putting order in the chaos. I start with sketching the contours of chaos and its main components, and then I sequence and compartmentalize.

My chaos is made of the following parts:

>> My research on collective intelligence

>> My research on energy systems, with focus on investment in energy storage

>> My research on the civilisational role of cities, and on the concept of the entire human civilisation, such as we know it today, being a combination of two productive mechanisms: production of food in the countryside, and production of new social roles in the cities.

>> Joint research which I run with a colleague of mine, on the reproduction of human capital

>> The project I once named Energy Ponds, and which I recently renamed ‘Project Aqueduct’, for the purposes of promoting it.

>> The project which I have just started, together with three other researchers, on the role of Territorial Defence Forces during the COVID-19 pandemic

>> An extremely interesting project, which both I and a bunch of psychiatrists from my university have provisionally failed to kickstart, on the analysis of natural language in diagnosing and treating psychoses

>> A concept which recently came to my mind, as I was working on a crowdfunding project: a game as method of behavioural research about complex decisional patterns.

Nice piece of chaos, isn’t it? How do I put order in my chaos? Well, I ask myself, and, of course, I do my best to answer honestly the following questions: What do I want? How will I know I have what I want? How will other people know I have what I want? Why should anyone bother? What is the point? What do I fear? How will I know my fears come true? How will other people know my fears come true? How do I want to sequence my steps? What skills do I need to learn?

I know I tend to be deceitful with myself. As a matter of fact, most of us tend to. We like confirming our ideas rather than challenging them. I think I can partly overcome that subjectivity of mine by interweaving my answers to those questions with references to published scientific research. Another way of staying close to real life with my thinking consists in trying to understand what specific external events have pushed me to engage in the different paths, which, as I walk down all of them at the same time, make my personal chaos.

In 2018, I started using artificial neural networks, just like that, mostly for fun, and in a very simple form. As I observed those things at work, I developed a deep fascination with intelligent structures, and just as deep (i.e. f**king hard to phrase out intelligibly) an intuition that neural networks can be used as simulators of collectively intelligent social structures.

Both of my parents died in 2019, exactly at the same age of 78, having spent the last 20 years of their respective individual lives in complete separation from each other, to the point of not having exchanged a spoken or written word over those last 20 years. That changed my perspective as regards subjectivity. I became acutely aware how subjective I am in my judgement, and how subjective other people most likely are. The pandemic started in early 2020, and, almost at the same moment, I started to invest in the stock market, after a few years of break. I had been learning at an accelerated pace. I had been adapting to the reality of high epidemic risk – something I had almost forgotten since having a devastating scarlatina at the age of 9 – and I had been adapting to a subjectively new form of economic decisions (i.e. those in the stock market). That has been one hell of a ride.

Right now, another piece of experience comes into the game. Until recently, in my house, the attic was mine. The remaining two floors were my wife’s dominion, but the attic was mine. It was painted in joyful, eye-poking colours. There was a lot of yellow and orange. It was mine. Yet, my wife had an eye for that space. Wives do, quite frequently. A fearsome ally came to support her: an interior decorator. Change has just happened. Now, the attic is all dark brown and cream. To me, it looks like the inside of a coffin. Yes, I know what the inside of a coffin looks like: I saw it just before my father’s funeral. That attic has become an alien space for me. I still have a hard time wrapping my mind around how shaken I am by that change. I realize how attached I am to the space around me. If I am so strongly bound to colours and shapes in my vicinity, other people probably feel the same, and that triggers another intuition: we, humans, are either simple dwellers in the surrounding space, or we are architects thereof, and these are two radically different frames of mind.

I am territorial as f**k. I have just clarified it inside my head. Now, it is time to go back to science. In a first step, I am trying to connect those experiences of mine to my hypothesis of collective intelligence. Step by step, I am phrasing it out. We are intelligent about each other. We are intelligent about the physical space around us. We are intelligent about us being subjective, and thus we have invented that thing called language, which allows us to produce a baseline for intersubjective description of the surrounding world.

I am conscious of my subjectivity, and of my strong emotions (that attic, f**k!). Therefore, I want to see my own ideas from other people’s point of view. Some review of literature is what I need. I start with Peeters, M. M., van Diggelen, J., Van Den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human–AI society. AI & SOCIETY, 36(1), 217-238. https://doi.org/10.1007/s00146-020-01005-y . I start with it because I could lay my hands on an open-access version of the full paper, and, as I read it, many bells ring in my head. Among all those bells ringing, the main one refers to the experience I had with otherwise simplistic neural networks, namely the perceptrons one can structure in an Excel spreadsheet. Back in 2018, I was observing the way a truly dumb neural network was working, one made of just 6 equations looped together, and I had that realization: ‘Blast! Those six equations together are actually intelligent’. This is the core of that paper by Peeters et al. The whole story of collective intelligence became a thing in scientific literature when Artificial Intelligence started spreading throughout society, thus following generally the same path as my individual one. Conscious, inquisitive interaction with artificial intelligence seems to awaken an entirely new worldview, where we, humans, can see at work an intelligent fruit of our own intelligence.

I am trying to make one more step, from bewilderment to premises and hypotheses. Peeters et al. name three big intellectual streams: (1) the technology-centric perspective, (2) the human-centric one, and (3) the collective intelligence-centric perspective. The third one sounds familiar, and so I dig into it. The general idea here is that humans can put their individual intelligences into a kind of interaction which is smarter than those individual intelligences taken separately. This hypothesis is a little counterintuitive – if we consider electoral campaigns or Instagram – but it becomes much more plausible when we think about networks of inventors and scientists. Peeters et al. present an interesting extension to that, namely collectively intelligent agglomerations of people and technology. This is exactly what I do in my empirical research when I use a neural network as a simulator, with quantitative data in it. I am one human interacting with one simple piece of AI, and interesting things come out of it.

That paper by Peeters et al. cites a book: Sloman, S. A., & Fernbach, P. (2018). The knowledge illusion: Why we never think alone (Penguin). Before I pass to my first impressions about that book, another sidekick. In 1993, a namesake of one of its authors – the AI researcher Aaron Sloman, not to be confused with Steven Sloman – wrote an introduction to another book, this one a collection of proceedings (conference papers in plain human lingo) grouped under the common title: Prospects for Artificial Intelligence (Hogg & Humphreys 1993[1]). In that introduction, Aaron Sloman claims that using Artificial Intelligence as a simulator of General Intelligence requires a specific approach, which he calls ‘design-based’: we investigate the capabilities and the constraints within which intelligence, understood as a general phenomenon, has to function. Based on those constraints, requirements can be defined, and then we can study how intelligence meets those requirements through its own architecture and mechanisms.

We jump 25 years, from 1993, and this is what Sloman & Fernbach wrote in the introduction to “The knowledge illusion…”: “This story illustrates a fundamental paradox of humankind. The human mind is both genius and pathetic, brilliant and idiotic. People are capable of the most remarkable feats, achievements that defy the gods. We went from discovering the atomic nucleus in 1911 to megaton nuclear weapons in just over forty years. We have mastered fire, created democratic institutions, stood on the moon, and developed genetically modified tomatoes. And yet we are equally capable of the most remarkable demonstrations of hubris and foolhardiness. Each of us is error-prone, sometimes irrational, and often ignorant. It is incredible that humans are capable of building thermonuclear bombs. It is equally incredible that humans do in fact build thermonuclear bombs (and blow them up even when they don’t fully understand how they work). It is incredible that we have developed governance systems and economies that provide the comforts of modern life even though most of us have only a vague sense of how those systems work. And yet human society works amazingly well, at least when we’re not irradiating native populations. How is it that people can simultaneously bowl us over with their ingenuity and disappoint us with their ignorance? How have we mastered so much despite how limited our understanding often is?” (Sloman, Steven; Fernbach, Philip. The Knowledge Illusion (p. 3). Penguin Publishing Group. Kindle Edition)

Those readings have given me a thread, and I am interweaving that thread with my own thoughts. Now, I return to another reading, namely to “The Black Swan” by Nassim Nicholas Taleb, where, on pages xxi – xxii of the introduction, the author writes: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives. Ever since we left the Pleistocene, some ten millennia ago, the effect of these Black Swans has been increasing. It started accelerating during the industrial revolution, as the world started getting more complicated, while ordinary events, the ones we study and discuss and try to predict from reading the newspapers, have become increasingly inconsequential”.

I combine the idea by Sloman & Fernbach – that we, humans, can be really smart collectively through interaction between very limited and subjective individual intelligences – with the concept of Black Swan by Nassim Nicholas Taleb. I reinterpret the Black Swan. This is an event we did not expect to happen, and yet it happened, and it has blown a hole in our self-sufficient certainty that we understand what the f**k is going on. When we do things, we expect certain outcomes. When we don’t get the outcomes we expected, this is a functional error. This is a local instance of chaos. We take that local chaos as learning material, and we try again, and again, and again, in the expectation of bringing order into the general chaos of existence. Our collectively intelligent learning consists, to a large extent, in absorbing those local instances of chaos and turning them into new order.

Nassim Nicholas Taleb claims that our culture tends to mask the occurrence of Black Swan-type outliers as just the manifestation of otherwise recurrent, predictable patterns. We collectively internalize Black Swans. This is what a neural network does. It takes an obvious failure – the local error of optimization – and utilizes it as valuable information in the next experimental round. The network internalizes a series of f**k-ups, because it never exactly hits the output variable home; there is always some discrepancy, at least a tiny one. The fact of being wrong about reality becomes normal. Every neural network I have worked with does the same thing: it starts with some substantial magnitude of error, and then it tends to reduce that error, at least temporarily, i.e. for at least a few consecutive experimental rounds.

This is what a simple neural network – one of those I use in my calculations – does with quantitative variables. It processes data so as to create error, i.e. so as to purposefully create outliers lying outside the realm of the expected. Those neural networks purposefully create Black Swans, those abnormalities which allow us to learn. Now, what is so collective about neural networks? Why do I intuitively associate my experiments with neural networks with collective intelligence rather than with individual intelligence? Well, I work with socio-economic quantitative variables. The lowest level of aggregation I use is the probability of occurrence of a specific event, and even that is really low aggregation for me. Most of my data is like Gross Domestic Product, average hours worked per person per year, average prices of electricity etc. This is essentially collective data, in the sense that no individual intelligence can possibly be observed as to its density of population or its rate of inflation. There needs to be a society in place for those metrics to exist at all.

When I work with that type of data, I assume that many people observe and gauge it, then report and aggregate their observations etc. Many people put a lot of work into making those quantitative variables both available and reliable. I guess it is important, then. When some kind of data is that important collectively, it is very likely to reflect some important aspect of collective reality. When I run that data through a neural network, the latter yields a simulation of collective action and its (always) provisional outcomes.

My neural network (I mean the one on my computer, not the one in my head) takes like 0.05 of local Gross Domestic Product, then 0.4 of average consumption of energy per capita, maybe 0.09 of inflation in consumer prices, plus some other stuff in random proportions, and sums up all those small portions of whatever is important as collectively measured socio-economic outcome. Usually, that sum is designated as ‘h’, the aggregate input. Then, my network takes that h and feeds it into a neural activation function which, in most cases, is the hyperbolic tangent, i.e. tanh(h) = (e^(2h) – 1) / (e^(2h) + 1). When we learn by trial and error, the hypothetical quantity e^(2h) measures the force with which the neural network reacts to a complex stimulation from a set of variables xi. The base e² of that hypothetical reaction is constant and equals e² ≈ 7.389056099, whilst h is a variable parameter, specific to the given phenomenal occurrence. The parameter h is roughly proportional to the number of variables in the source empirical set. The more complex the reality I process with the neural network, i.e. the more variables I split my reality into, the greater the value of h. In other words, the more complexity, the further the expression e^(2h) drives the neuron away from its constant root e². Complexity in variables induces greater swings in the hyperbolic tangent, i.e. greater magnitudes of error, and, consequently, longer strides in the process of learning.
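To make the mechanics above tangible, here is a minimal sketch of the kind of spreadsheet-grade perceptron I mean. The input values, the target, the weights and the learning rate are all hypothetical, purely for illustration; the point is only to show the aggregate input h, the tanh activation written exactly as in the text, and the error being fed into the next experimental round:

```python
import math
import random

def tanh(h):
    # Hyperbolic tangent written as in the text: (e^(2h) - 1) / (e^(2h) + 1)
    e2h = math.exp(2.0 * h)
    return (e2h - 1.0) / (e2h + 1.0)

def experimental_round(inputs, weights, target, learning_rate=0.1):
    # Aggregate input h: weighted sum of the (standardized) variables
    h = sum(w * x for w, x in zip(weights, inputs))
    output = tanh(h)
    error = target - output  # the local miss: the material we learn from
    # Feed the error back into the weights for the next round (simple delta rule)
    new_weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    return output, error, new_weights

# Hypothetical standardized inputs: GDP, energy per capita, inflation
inputs = [0.05, 0.4, 0.09]
target = 0.5              # the collectively measured outcome we optimize
random.seed(42)
weights = [random.uniform(-1.0, 1.0) for _ in inputs]

errors = []
for _ in range(100):
    output, error, weights = experimental_round(inputs, weights, target)
    errors.append(error)
```

The network never hits the target exactly; it just keeps shrinking the miss from one round to the next, which is precisely the internalization of error described above.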

Logically, the more complex social reality I represent with quantitative variables, the bigger Black Swans the neural network produces as it tries to optimize one single variable chosen as the desired output of the neurally activated input.   


[1] Hogg, D., & Humphreys, G. W. (1993). Prospects for Artificial Intelligence: Proceedings of AISB’93, 29 March-2 April 1993, Birmingham, UK (Vol. 17). IOS Press.

The type of riddle I like

Once again, I had quite a break in blogging. I have been spending a lot of time putting together research projects, in a network of many organisations which I am supposed to get working together. I give it a lot of time and personal energy. It drains me a bit, and I like that drain. I like the thrill of putting together a team, agreeing about goals and possible openings. Since 2005, when I stopped running my own business and settled for quite an academic career, I hadn’t experienced that special kind of personal drive. I sincerely believe that every teacher should apply his or her own teaching in their everyday life, just to see if that teaching still corresponds to reality.

This is one of the reasons why I have made it a regular activity of mine to invest in the stock market. I teach economics, and the stock market is very much like the pulse of economics, in all its grades and shades, ranging from hardcore macroeconomic cycles, passing through the microeconomics of specific industries I am currently focusing on with my investment portfolio, and all the way down the path of behavioural economics. I teach management, as well, and putting together new projects in research is the closest I can come, currently, to management science being applied in real life.

Still, besides trying to apply my teaching in real life, I still do science. I do research, and I write about the things I think I have found out, on that research path of mine. I do a lot of research as regards the economics of energy. Currently, I am still revising a paper of mine, titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’. Around the topic of energy economics, I have built a more general method of studying quantitative socio-economic data, with the technical hypothesis that said data manifests collective intelligence in human social structures. It means that whenever I deal with a collection of quantitative socio-economic variables, I study the dataset at hand by assuming that each multivariate record line in the database is the local instance of an otherwise coherent social structure, which experiments with many such specific instances of itself and selects those offering the best adaptation to the current external stressors. Yes, there is a distinct sound of evolutionary method in that approach.

Over the last three months, I have been slowly ruminating my theoretical foundations for the revision of that paper. Now, I am doing what I love doing: I am disrupting the gently predictable flow of theory with some incongruous facts. Yes, facts don’t know how to behave themselves, like really. Here is an interesting fact about energy: between 1999 and 2016, at the planetary scale, there had been more and more new cars produced per each new human being born. This is visualised in the composite picture below. Data about cars comes from https://www.worldometers.info/cars/ , whilst data about the headcount of population comes from the World Bank (https://data.worldbank.org/indicator/SP.POP.TOTL ).
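The coefficient behind that picture is simple division: new cars made in a given year, over that year’s net increment in human headcount. A short sketch below makes the arithmetic explicit; the figures in it are hypothetical stand-ins, not the real worldometers or World Bank data:

```python
def cars_per_new_human(cars_produced, population):
    """New cars made in a year, divided by that year's net increment in headcount.

    cars_produced: {year: number of new cars made that year}
    population:    {year: total population at year end}
    """
    return {
        year: cars_produced[year] / (population[year] - population[year - 1])
        for year in cars_produced
        if year - 1 in population and population[year] != population[year - 1]
    }

# Hypothetical figures, purely for illustration
cars = {2000: 52_000_000, 2001: 55_000_000}
people = {1999: 6_000_000_000, 2000: 6_080_000_000, 2001: 6_155_000_000}
ratio = cars_per_new_human(cars, people)
# e.g. for 2000: 52_000_000 / 80_000_000 = 0.65 new cars per new human
```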

Now, the meaning of all that. I mean, not ALL THAT (i.e. reality and life in general), just all that data about cars and population. Why do we consistently make more and more physical substance of cars per each new human born? Two explanations come to my mind. One is politically correct and nicely environmentalist: we are collectively dumb as f**k and we keep overshooting the output of cars over and above the incremental change in population. The latter, when translated into a rate of growth, tends to settle down (https://data.worldbank.org/indicator/SP.POP.GROW ). Yeah, those capitalists who own car factories just want to make more and more money, and therefore they make more and more cars. Yeah, those corrupt politicians want to conserve jobs in the automotive industry, and they support it. Yeah, f**k them all! Yeah, cars destroy the planet!

I checked. The first door I knocked at was General Motors (https://investor.gm.com/sec-filings ). What I can see is that they actually make more and more operational money by making fewer and fewer cars. Their business used to be overshot in terms of volume, and now they are slowly making sense and money out of making fewer cars. Then I checked with Toyota (https://global.toyota/en/ir/library/sec/ ). These guys look as if they were struggling to maintain their capacity to make approximately the same operational surplus each year, and they seem to be experimenting with the number of cars they need to put out in order to stay in good financial shape. When I say ‘experimenting’, I mean experimenting both upwards and downwards.

As a matter of fact, the only player who seems to be unequivocally making more operational money out of making more cars is Tesla (https://ir.tesla.com/#tab-quarterly-disclosure ). There comes another explanation – much less politically correct, if at all – for there being more cars made per each new human, and it says that we, humans, are collectively intelligent, and we have a good reason for making more and more cars per each new human coming to this realm of tears, and the reason is to store energy in a movable, possibly self-movable form. Yes, each car has a fuel tank or a set of batteries, in the case of them Teslas or other electric f**kers. Each car is a moving reservoir of chemical energy, immediately convertible into kinetic energy, which, in turn, has economic utility. Making more cars with batteries pays off better than making more cars with combustible fuel in their tanks: a new generation of movable reservoirs of chemical energy is replacing an older generation thereof.

Let’s hypothesise that this is precisely the point of coupling each new human born with more and more of a new car being made: the point is more chemical energy convertible into kinetic energy. Do we need to move around more, as time passes? Maybe, although I am a bit doubtful. Technically, with more and more humans being around in a constant space, there are more and more humans per square kilometre, and that incremental growth in the density of population happens mostly in cities. I described that phenomenon in a paper of mine, titled ‘The Puzzle of Urban Density And Energy Consumption’. That means that the space available for travelling, and needing to be covered, per capita, is actually decreasing. Less space to travel in means less need for means of transportation.

Thus, what are we after, collectively? We might be preparing for having to move around more in the future, or for having to restructure the geography of our settlements. That’s possible, although the research I did for that paper about urban density indicates that geographical patterns of urbanization are quite durable. Anyway, those two cases sum up to some kind of zombie apocalypse. On the other hand, the fact that we keep developing the amount of dispersed, temporarily stored energy (in cars) might be a manifestation of us learning how to build and maintain large, dispersed networks of energy reservoirs.

Isn’t it dumb to hypothesise that we go out of our way, as a civilisation, just to learn the best ways of developing what we are developing? Well, take the medieval cathedrals. Them medieval folks would keep building them for decades or even centuries. The Notre Dame cathedral in Paris, France, seems to be the record holder, with a construction period stretching from 1160 to 1245 (Bruzelius 1987[1]). Still, the same people who were so appallingly slow when building a cathedral could accomplish lightning-fast construction of quite complex military fortifications. When building cathedrals, the masters of stone masonry would do something apparently idiotic: they would build, then demolish, and then build again the same portion of the edifice, many times. WTF? Why slow down something we can do quickly? In order to experiment with the process and with the technologies involved, sir. Cathedrals were experimental labs of physics, mathematics and management, long before these scientific disciplines even emerged. Yes, there was the official rationale of getting closer to God, of accomplishing God’s will, and, honestly, it came in handy. There was an entire culture – medieval Christianity – which was learning how to learn by experimentation. The concept of fulfilling God’s will through perseverant pursuit, whilst staying stoic as regards exogenous risks, was an excellent cultural vehicle for that purpose.

We move a few hundred years forward in time, to the 17th century. The cutting edge of technology was then to be found in textiles and garments (Braudel 1992[2]), and the peculiarity of European culture consisted in quickly changing fashions, geographically idiosyncratic and strongly enforced through social peer pressure. The industry of garments and textiles was a giant experimental lab of business and management, developing the division of labour, the management of supply chains, the quick study of subtle shades in customers’ tastes and just as quick adaptation thereto. This is how we, Europeans, prepared for the much later introduction of mechanized industry, which, in turn, gave birth to what we are today: a species controlling something like 30% of all energy on the surface of our planet.

Maybe we are experimenting with dispersed, highly mobile and coordinated networks of small energy reservoirs – the automotive fleet – just for the sake of learning how to develop such networks? Some other facts, which, once again, are impolitely disturbing, come to the fore. I had a look at the data published by the United Nations as regards the total installed capacity of electricity generation (https://unstats.un.org/unsd/energystats/ ). I calculated the average electrical capacity per capita, at the global scale. Turns out that in 2014 the average human on Earth had around 60% more power capacity to tap into, as compared to 1999.

Interesting. It looks even more interesting when taken as the first moment of a process. When I take the annual incremental change in the installed electrical capacity on the planet and divide it by the absolute demographic increment, thus when I go ‘delta capacity / delta population’, that coefficient of elasticity grows like hell. In 2014, it was almost three times higher than in 1999. We, humans, keep developing a denser network of cars, as compared to our population, and, at the same time, we keep increasing the relative power capacity which every human can tap into.
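The ‘delta capacity / delta population’ coefficient, and the other elasticities mentioned in this post, all follow the same incremental recipe, which can be sketched as below. The series in the example are hypothetical, standing in for the UN and World Bank data:

```python
def incremental_elasticity(numerator, denominator):
    # Year-on-year elasticity: delta numerator / delta denominator,
    # computed for each pair of consecutive years in the series.
    years = sorted(numerator)
    return {
        y: (numerator[y] - numerator[y_prev]) / (denominator[y] - denominator[y_prev])
        for y_prev, y in zip(years, years[1:])
        if denominator[y] != denominator[y_prev]
    }

# Hypothetical series: installed capacity (GW) and population (billions)
capacity = {1999: 3300, 2000: 3400, 2001: 3520}
population = {1999: 6.00, 2000: 6.08, 2001: 6.155}
elasticity = incremental_elasticity(capacity, population)
# e.g. for 2000: (3400 - 3300) / (6.08 - 6.00) = 1250 GW per extra billion humans
```

The same function, fed with cars per capita against energy per capita, or capacity per capita against energy per capita, yields the two coefficients discussed further below.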

Someone could say it is because we simply consume more and more energy per capita. Cool, I check with the World Bank: https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE . Yes, we keep increasing our average annual consumption of energy per human being, and yet this is a very gentle increment: barely 18% from 1999 through 2014. Nothing to do with the quick accumulation of generative capacity. We keep densifying a global fleet of cars, and growing a reserve of power capacity. What are we doing it for?

This is a deep question, and I calculated two additional elasticities with the data at hand. Firstly, I denominated the incremental change in the number of new cars per each new human born over the average consumption of energy per capita. In the visual below, this is the coefficient ‘Elasticity of cars per capita to energy per capita’. Between 1999 and 2014, this elasticity passed from 0.49 to 0.79. We keep accumulating something like an overhead of incremental car fleet, as compared to the amount of energy we consume.

Secondly, I formalized the comparison between individual consumption of energy and average power capacity per capita. This is the ‘Elasticity of capacity per capita to energy per capita’ column in the visual below.  Once again, it is a growing trend.   

At the planetary scale, we keep beefing up our collective reserves of energy, and we seriously mean business about dispersing those reserves into networks of small reservoirs, possibly on wheels.

Increased propensity to store is a historically known collective response to anticipated shortage. Do we, the human race, collectively and not quite consciously anticipate a shortage of energy? How could that happen? Our biology should suggest just the opposite. With climate change around, we technically have more energy in the ambient environment, not less. What exact kind of shortage in energy are we collectively anticipating? This is the type of riddle I like.


[1] Bruzelius, C. (1987). The Construction of Notre-Dame in Paris. The Art Bulletin, 69(4), 540-569. https://doi.org/10.1080/00043079.1987.10788458

[2] Braudel, F. (1992). Civilization and capitalism, 15th-18th century, vol. II: The wheels of commerce (Vol. 2). Univ of California Press.