We keep going until we observe

I keep working on a proof-of-concept paper for my idea of ‘Energy Ponds’. In my last two updates, namely in ‘Seasonal lakes’ and in ‘Le Catch 22 dans ce jardin d’Eden’, I sort of refreshed my ideas and set the canvas for painting. Now, I start sketching. What exact concept do I want to prove, and what kind of evidence can possibly confirm (or discard) that concept? The idea I am working on has a few different layers. The most general vision is that of purposefully storing water in spongy structures akin to swamps or wetlands. These can bear various degrees of artificial construction, and can stretch from natural wetlands, through semi-artificial ones, all the way to urban technologies such as rain gardens and sponge cities. The most general proof corresponding to that vision is a review of publicly available research – peer-reviewed papers, preprints, databases etc. – on that general topic.

Against that general landscape, I sketch two more specific concepts: the idea of using ram pumps as a technology of forced water retention, and the possibility of locating those wetland structures in broadly understood Northern Europe, thus in my home region. Correspondingly, I need to provide two streams of scientific proof: a review of literature on the technology of ram pumping, on the one hand, and on the actual natural conditions, as well as land management policies in Europe, on the other hand. I also need to consider the environmental impact of creating new wetland-like structures in Northern Europe, as well as the socio-economic impact and the legal feasibility of conducting such projects.

Next, I sort of build upwards. I hypothesise a complex technology, where ram-pumped water from the river goes into light elevated tanks and, from there, using the principle of the Roman siphon, cascades down into wetlands through a series of small hydro-electric turbines. The turbines generate electricity, which is stored and then sold outside.

At that point, I have a technology of water retention coupled with a technology of energy generation and storage. I further advance a second hypothesis that such a complex technology will be economically sustainable based on the corresponding sales of electricity. In other words, I want to figure out a configuration of that technology, which will be suitable for communities which either don’t care at all, or simply cannot afford to care about the positive environmental impact of the solution proposed.

Proof of concept for those two hypotheses is going to be complex. First, I need to pass in review the available technologies for energy storage and energy generation, as well as for the construction of elevated tanks and Roman siphons. I need to take into account various technological mixes, including the incorporation of wind turbines and photovoltaic installations into the whole thing, in order to optimize the output of energy. I will try to look for documented examples of small hydro-generation coupled with wind and solar. Then, I have to comb the literature for mathematical models for the optimization of such power systems, and put them against my own idea of reverse engineering back from the storage technology. I take the technology of energy storage which seems the most suitable for the local energy market, and for hypothetical charging from mixed hydro-wind-solar generation. I build a control scenario where that storage facility just buys energy at wholesale prices from the power grid and then resells it. Next, I configure the hydro-wind-solar generation so as to make it economically competitive against the supply of energy from the power grid.
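
As a hedge against hand-waving, here is a minimal sketch of that control scenario; all prices, the round-trip efficiency, and the cost of own generation are hypothetical placeholders. The point it makes: the hydro-wind-solar mix is competitive when its cost per MWh beats the wholesale price which the storage facility would otherwise pay.

```python
# Control scenario vs own generation. All numbers are hypothetical
# placeholders, not market data.
ROUND_TRIP_EFFICIENCY = 0.90   # assumed for the storage facility
WHOLESALE_PRICE = 50.0         # EUR/MWh bought from the grid (made up)
RESALE_PRICE = 110.0           # EUR/MWh sold onwards (made up)

def margin_grid_arbitrage(mwh_bought):
    """Control scenario: the storage just buys from the grid and resells."""
    return mwh_bought * ROUND_TRIP_EFFICIENCY * RESALE_PRICE \
         - mwh_bought * WHOLESALE_PRICE

def margin_own_generation(mwh_generated, cost_per_mwh):
    """Same storage, charged from the hydro-wind-solar mix instead."""
    return mwh_generated * ROUND_TRIP_EFFICIENCY * RESALE_PRICE \
         - mwh_generated * cost_per_mwh

print(margin_grid_arbitrage(100.0))         # the benchmark to beat
print(margin_own_generation(100.0, 45.0))   # competitive if cost < wholesale
```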

Now, I sketch. I keep in mind the levels of conceptualization outlined above, and I quickly move through published science along that logical path, quickly picking a few articles for each topic. I am going to put those nonchalantly collected pieces of science back-to-back and see how, and whether at all, it all makes sense together. I start with Bortolini & Zanin (2019[1]), who study the impact of rain gardens on water management in cities of the Veneto region in Italy. Rain gardens are vegetal structures, set up in the urban environment, with the specific purpose of retaining rainwater. Bortolini & Zanin (2019 op. cit.) use a simplified water balance, where the rain garden absorbs and retains a volume ‘I’ of water (‘I’ stands for infiltration), which is the difference between precipitation, on the one hand, and the sum total of overflowing runoff from the rain garden plus evapotranspiration of water, on the other hand. Soil and plants in the rain garden have a given top capacity to retain water. Green plants typically hold 80 – 95% of their mass in water, whilst trees hold about 50%. Soil is considered wet when it contains about 25% of water. The rain garden absorbs water from precipitation at a rate determined by hydraulic conductivity, i.e. the relative ease with which a fluid (usually water) moves through pore spaces or fractures; it depends on the intrinsic permeability of the material, the degree of saturation, and the density and viscosity of the fluid.
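
Just to fix the logic of that balance, here is a minimal sketch in Python; the numbers and the capacity cap are invented for illustration, not taken from the paper:

```python
# Toy version of the simplified water balance in Bortolini & Zanin (2019):
# infiltration I = precipitation - (runoff + evapotranspiration), capped by
# the remaining retention capacity of soil and plants. Numbers are invented.
def infiltration(p_mm, runoff_mm, et_mm, capacity_mm, stored_mm):
    """Rainwater (mm of water column) the rain garden actually retains."""
    i = p_mm - (runoff_mm + et_mm)
    free_capacity = capacity_mm - stored_mm
    return max(0.0, min(i, free_capacity))

print(infiltration(p_mm=30.0, runoff_mm=5.0, et_mm=2.0,
                   capacity_mm=120.0, stored_mm=105.0))  # 15.0: capacity-bound
```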

As I look at it, I can see that the actual capacity of water retention in a rain garden can hardly be determined a priori, unless we have really a lot of empirical data from the given location. For a new rain garden in a new location, it is safe to assume that we need an experimental phase, when we empirically assess the retentive capacity of the rain garden under different configurations of soil and vegetation. That leads me to a generalization: any porous structure we use for retaining rainwater, be it something like wetlands or something like a rain garden in an urban environment, has a natural constraint of hydraulic conductivity, and that constraint determines the percentage of precipitation, and the metric volume thereof, which the given structure can retain.

Bortolini & Zanin (2019 op. cit.) bring forth empirical results which suggest that properly designed rain gardens located on rooftops in a city can absorb from 87% to 93% of the total input of water they receive. Cool. I move on towards the issue of water management in Europe, with a working paper by Fribourg-Blanc, B. (2018[2]). The most important takeaway from that paper is that we have something called the European Platform for Natural Water Retention Measures, AKA http://nwrm.eu, and that thing has both good and bad properties. The good thing about http://nwrm.eu is that it contains loads of data and publications about Natural Water Retention projects in Europe. The bad thing is that http://nwrm.eu is not a secure website. Another paper, by Tóth et al. (2017[3]), tells me that another analytical tool exists, namely the European Soil Hydraulic Database (EU-SoilHydroGrids ver1.0).

So far, so good. I already know there is data and science for evaluating, with acceptable precision, the optimal structure and the capacity for water retention in porous structures such as rain gardens or wetlands, in the European context. I move on to the technology of ram pumps. I grab two papers: Guo et al. (2018[4]) and Li et al. (2021[5]). They show me two important things. Firstly, China seems to be burning rubber in the field of ram pumping technology. Secondly, the greatest uncertainty in that technology seems to be the actual height to which ram pumps can elevate water, or, when coupled with hydropower, the hydraulic head which ram pumps can create. Guo et al. (2018 op. cit.) claim that 50 meters of elevation is the maximum which is both feasible and efficient. Li et al. (2021 op. cit.) are sort of vertically more conservative and claim that the whole thing should be kept below 30 meters of elevation. Both are better than 20 meters, which is what I thought was the best one could expect. Greater elevation of water means greater hydraulic head, and more hydropower to be generated. It pays off to review literature.
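
To get a feel for what those elevation figures imply, here is a back-of-the-envelope sketch based on the classic energy balance of a hydraulic ram (delivery flow × delivery head ≈ efficiency × drive flow × supply head); the drive flow, supply head and efficiency below are hypothetical:

```python
# Back-of-the-envelope hydraulic ram sizing from the classic energy balance
# q * H = eta * Q * h (delivery flow x delivery head = efficiency x drive
# flow x supply head). All input values are hypothetical.
def delivery_flow_lpm(drive_flow_lpm, supply_head_m, delivery_head_m, eta=0.6):
    """Litres per minute delivered at the top of the elevation."""
    return eta * drive_flow_lpm * supply_head_m / delivery_head_m

for head_m in (20.0, 30.0, 50.0):
    print(head_m, delivery_flow_lpm(100.0, 2.0, head_m))
# -> 6.0, 4.0 and 2.4 L/min: greater elevation trades away delivered flow.
```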

Lots of uncertainty as for the actual capacity and efficiency of ram pumping means quick technological change in that domain. This is economically interesting. It means that investing in projects which involve ram pumping means investing in a quickly changing technology. That means both high hopes for an even better technology in the immediate future, and a high need for cash in the balance sheets of the entities involved.

I move to the end-of-the-pipeline technology in my concept, namely to energy storage. I study a paper by Koohi-Fayegh & Rosen (2020[6]), which suggests two things. Firstly, for a standalone installation in renewable energy, whatever combination of small hydropower, photovoltaics and small wind turbines we think of, lithium-ion batteries are always a good idea for power storage. Secondly, when we work with hydro-generation, thus when we have any hydraulic head to make electricity with, pumped storage comes sort of naturally. That leads me to an idea which looks even crazier than what I have imagined so far: what if we created an elevated garden with a strong capacity for water retention? Ram pumps take water from the river and pump it up onto elevated platforms with rain gardens on them. Those platforms can be optimized as regards their absorption of sunlight, and thus as regards their interaction with whatever is underneath them.

I move on to small hydro, and I find two papers, namely Couto & Olden (2018[7]) and Lange et al. (2018[8]), which are both interestingly critical as regards small hydropower installations. Lange et al. (2018 op. cit.) claim that the overall environmental impact of small hydro should be closely monitored. Couto & Olden (2018 op. cit.) go further and claim there is a ‘craze’ about small hydro, and that craze has already led to overinvestment in the corresponding installations, which can be damaging both environmentally and economically (overinvestment means the financial collapse of many projects). With those critical views in mind, I turn to another paper, by Zhou et al. (2019[9]), who approach the issue as a case for optimization, within a broader framework called the ‘Water-Food-Energy’ Nexus, WFE for closer friends. This paper, just as a few others it cites (Ming et al. 2018[10]; Uen et al. 2018[11]), advocates using artificial intelligence in order to optimize for WFE.

Zhou et al. (2019 op. cit.) set three hydrological scenarios for empirical research and simulation. The baseline scenario corresponds to an average hydrological year, with average water levels and average precipitation. Next to it are: a dry year and a wet year. The authors assume that the cost of installation in small hydropower is $600 per kW on average. They simulate the use of two technologies of hydro-electric turbines: Pelton and Vortex. Pelton turbines are essentially optimized paddle wheels, whilst the Vortex technology consists in creating, precisely, a vortex of water, and that vortex moves a rotor placed in the middle of it.

Zhou et al. (2019 op. cit.) create a multi-objective function to optimize, with the following desired outcomes:

>> Objective 1: maximize the reliability of water supply by minimizing the probability of real water shortage occurring.

>> Objective 2: maximize water storage given the capacity of the reservoir. Note: reservoir is understood hydrologically, as any structure, natural or artificial, able to retain water.

>> Objective 3: maximize the average annual output of small hydro-electric turbines.

Those objectives are achieved under corresponding sets of constraints. For water supply, those constraints all turn around water balance, whilst for energy output it is more about the engineering properties of the technologies taken into account. The three objectives are hierarchical. First, Zhou et al. (2019 op. cit.) perform an optimization regarding Objectives 1 and 2, in order to find the optimal hydrological characteristics to meet, and then, on that basis, they optimize the technology to put in place as regards power output.

The general tool for optimization used by Zhou et al. (2019 op. cit.) is a genetic algorithm called NSGA-II, AKA Non-dominated Sorting Genetic Algorithm. Apparently, NSGA-II has a long and successful track record in engineering, including water management and energy (see e.g. Chang et al. 2016[12]; Jain & Sachdeva 2017[13]; Assaf & Shabani 2018[14]). I want to stop for a while here and have a good look at this specific algorithm. The logic of NSGA-II starts with creating an initial population of cases/situations/configurations etc. Each case is a combination of observations as regards the objectives to meet, and of the actual values observed in constraining variables, e.g. precipitation for water balance or hydraulic head for the output of hydropower. In the conventional lingo of this algorithm, those cases are called chromosomes. Yes, I know, a hydro-electric turbine placed in the context of water management hardly looks like a chromosome, but it is a genetic algorithm, and it just sounds fancy to use that biologically marked vocabulary.

As for me, I like staying close to real life, and therefore I call those cases solutions rather than chromosomes. Anyway, the underlying math is the same. Once I have that initial population of real-life solutions, I calculate two parameters for each of them: their rank as regards the objectives to maximize, and their so-called ‘crowded distance’. Ranking is done with the procedure of fast non-dominated sorting. It is a comparison in pairs, where a solution A dominates another solution B if and only if A is not worse than B on any objective, and A is better than B on at least one objective. The solution which scores the most wins in such pairwise comparisons is at the top of the ranking, the one with the second-highest score of wins is second, etc. Crowded distance is essentially the same as what I call the coefficient of coherence in my own research: Euclidean distance (or some other mathematical distance) is calculated for each pair of solutions. As a result, each solution is associated with k Euclidean distances to the k remaining solutions, which can be reduced to an average distance, i.e. the crowded distance.
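
Here is a minimal Python sketch of that ranking step. Mind you, it follows my simplified description above: the crowded distance is computed as the average Euclidean distance to all other solutions, whereas canonical NSGA-II uses per-objective gaps between nearest neighbours; the domination logic itself is the standard one.

```python
import math

# Toy objective vectors: (water-supply reliability, water storage, power
# output), all to be maximized. Values are invented.
def dominates(a, b):
    """A dominates B: not worse on any objective, better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def rank_by_wins(population):
    """Order solutions by their number of pairwise domination wins -
    a simplified stand-in for fast non-dominated sorting."""
    wins = [sum(dominates(a, b) for j, b in enumerate(population) if j != i)
            for i, a in enumerate(population)]
    return sorted(range(len(population)), key=lambda i: wins[i], reverse=True)

def crowded_distance(population):
    """Average Euclidean distance of each solution to all the others
    (the simplified 'crowded distance' described in the text)."""
    return [sum(math.dist(a, b) for j, b in enumerate(population) if j != i)
            / (len(population) - 1)
            for i, a in enumerate(population)]

pop = [(0.90, 120.0, 40.0), (0.80, 150.0, 35.0), (0.95, 100.0, 42.0)]
print(rank_by_wins(pop))       # indices of solutions, most dominant first
print(crowded_distance(pop))   # higher value = more of a lone wolf
```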

In the next step, an offspring population is produced from that original population of solutions. It is created by taking relatively the fittest solutions from the initial population, recombining their characteristics in a 50/50 proportion, and endowing them with some capacity for endogenous mutation. Two out of these three genetic functions are de facto controlled. We choose relatively the fittest by establishing some kind of threshold for fitness as regards the objectives pursued. It can be a required minimum, a quantile (e.g. the third quartile), or an average. In the first case, we arbitrarily impose a scale of fitness on our population, whilst in the latter two the hierarchy of fitness is generated endogenously from the population of solutions observed. Fitness can have shades and grades, by weighting the score in non-dominated sorting, thus the number of wins over other solutions, on the one hand, against the crowded distance on the other hand. In other words, we can go for solutions which have a lot of similar ones in the population (i.e. which have a low average crowded distance), or, conversely, we can privilege lone wolves, with a high average Euclidean distance from anything else on the plate.

The capacity for endogenous mutation means that we can allow variance in all, or in just selected, variables which make up each solution. The number of degrees of freedom we allow in each variable dictates the number of mutations that can be created. Once again, discreet power is given to the analyst: we can choose the genetic traits which can mutate, and we can determine their freedom to mutate. In an engineering problem, technological and environmental constraints should normally put a cap on the capacity for mutation. Still, we can think about an algorithm which definitely kicks the lid off the barrel of reality, and which generates mutations in the wildest registers of the variables considered. It is a way to simulate a process where the presence of strong outliers has a strong impact on the whole population.

The same discreet cap on the freedom to evolve is to be found when we repeat the process. The offspring generation of solutions goes essentially through the same process as the initial one, to produce further offspring: ranking by non-dominated sorting and crowded distance, selection of the fittest, recombination, and endogenous mutation. At the starting point of this process, we can play one of two alternative versions of Mother Nature. We can be a mean Mother Nature, and shave off from the offspring population all those baby-solutions which do not meet the initial constraints, e.g. zero supply of water in this specific case. On the other hand, we can be an even meaner Mother Nature, allow those strange, dysfunctional mutants to keep going, and see what happens to the whole species after a few rounds of genetic reproduction.

With each generation, we compute an average crowded distance between all the solutions created, i.e. we check how diverse the species is in this generation. As long as diversity grows or remains constant, we assume that the divergence between the solutions generated grows or stays the same. Similarly, we can compute an even more general crowded distance between each pair of generations, and thereby assess how far the current generation has gone from the parent one. We keep going until we observe that the intra-generational crowded distance and the inter-generational one start narrowing down asymptotically to zero. In other words, we stop the evolution when solutions in the game become highly similar to each other and when genetic change stops bringing significant functional change.
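
Putting those pieces together, a toy version of the whole evolutionary loop could look as follows: selection of relatively the fittest, 50/50 recombination, endogenous mutation, and a stopping rule based on narrowing distances. It reuses rank_by_wins() and crowded_distance() from the previous sketch; all thresholds and rates are arbitrary.

```python
import math
import random

def evolve(pop, mutation_scale=0.05, max_generations=200, eps=1e-3):
    """Toy version of the loop described above; reuses rank_by_wins() and
    crowded_distance() from the previous sketch."""
    prev_diversity = None
    for generation in range(max_generations):
        order = rank_by_wins(pop)
        parents = [pop[i] for i in order[:max(2, len(pop) // 2)]]  # the relatively fittest
        offspring = []
        while len(offspring) < len(pop):
            a, b = random.sample(parents, 2)
            child = tuple((x + y) / 2.0 for x, y in zip(a, b))   # 50/50 recombination
            child = tuple(x + random.gauss(0.0, mutation_scale * (abs(x) + 1e-9))
                          for x in child)                        # endogenous mutation
            offspring.append(child)
        # intra-generational diversity, and a rough shift away from the parents
        diversity = sum(crowded_distance(offspring)) / len(offspring)
        shift = sum(math.dist(a, b)
                    for a, b in zip(sorted(pop), sorted(offspring))) / len(pop)
        pop = offspring
        if prev_diversity is not None and diversity < eps and shift < eps:
            break  # genetic change no longer brings functional change
        prev_diversity = diversity
    return pop
```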

Cool. When I want to optimize my concept of Energy Ponds, I need to add the objective of a constrained return on investment, based on the corresponding sales of electricity. In comparison to Zhou et al. (2019 op. cit.), I need to add a third level of selection. I start with selecting environmentally the solutions which make sense in terms of water management. In the next step, I produce a range of solutions which assure the greatest output of power, in a possible mix with solar and wind. Then I take those and filter them through the NSGA-II procedure as regards their capacity to sustain themselves financially. Mind you, I can shake it up a bit by fusing together those levels of selection. I can simulate extreme cases, when, for example, good economic sustainability becomes an environmental problem. Still, it would be rather theoretical. In Europe, non-compliance with environmental requirements makes a project a non-starter per se: you just can’t get the necessary permits if your hydropower project messes with the hydrological constraints legally imposed on the given location.
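
In outline, that three-level selection could look like the following sketch, where the three tests are placeholders (hypothetical thresholds) standing in for the actual hydrological, energetic and financial models:

```python
# Three-level selection for Energy Ponds, as described above. The callables
# water_ok, power_output and roi_years are placeholders for real models.
def select_energy_ponds(population, water_ok, power_output, roi_years,
                        max_payback_years=9.0):
    """Level 1: hydrological feasibility; level 2: keep the upper half by
    power output; level 3: financial self-sustainability."""
    level1 = [s for s in population if water_ok(s)]
    level2 = sorted(level1, key=power_output, reverse=True)
    level2 = level2[:max(1, len(level2) // 2)]
    return [s for s in level2 if roi_years(s) <= max_payback_years]
```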

Cool. It all starts making sense. There is apparently a lot of stir in the technology of making semi-artificial structures for retaining water, such as rain gardens and wetlands. That means a lot of experimentation, and that experimentation can be guided and optimized by testing the fitness of alternative solutions for meeting objectives of water management, power output and economic sustainability. I have some starting data to produce the initial generation of solutions, which I can then try to optimize with an algorithm such as NSGA-II.


[1] Bortolini, L., & Zanin, G. (2019). Reprint of: Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban forestry & urban greening, 37, 74-86. https://doi.org/10.1016/j.ufug.2018.07.003

[2] Fribourg-Blanc, B. (2018, April). Natural Water Retention Measures (NWRM), a tool to manage hydrological issues in Europe?. In EGU General Assembly Conference Abstracts (p. 19043). https://ui.adsabs.harvard.edu/abs/2018EGUGA..2019043F/abstract

[3] Tóth, B., Weynants, M., Pásztor, L., & Hengl, T. (2017). 3D soil hydraulic database of Europe at 250 m resolution. Hydrological Processes, 31(14), 2662-2666. https://onlinelibrary.wiley.com/doi/pdf/10.1002/hyp.11203

[4] Guo, X., Li, J., Yang, K., Fu, H., Wang, T., Guo, Y., … & Huang, W. (2018). Optimal design and performance analysis of hydraulic ram pump system. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 232(7), 841-855. https://doi.org/10.1177%2F0957650918756761

[5] Li, J., Yang, K., Guo, X., Huang, W., Wang, T., Guo, Y., & Fu, H. (2021). Structural design and parameter optimization on a waste valve for hydraulic ram pumps. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 235(4), 747–765. https://doi.org/10.1177/0957650920967489

[6] Koohi-Fayegh, S., & Rosen, M. A. (2020). A review of energy storage types, applications and recent developments. Journal of Energy Storage, 27, 101047. https://doi.org/10.1016/j.est.2019.101047

[7] Couto, T. B., & Olden, J. D. (2018). Global proliferation of small hydropower plants–science and policy. Frontiers in Ecology and the Environment, 16(2), 91-100. https://doi.org/10.1002/fee.1746

[8] Lange, K., Meier, P., Trautwein, C., Schmid, M., Robinson, C. T., Weber, C., & Brodersen, J. (2018). Basin‐scale effects of small hydropower on biodiversity dynamics. Frontiers in Ecology and the Environment, 16(7), 397-404.  https://doi.org/10.1002/fee.1823

[9] Zhou, Y., Chang, L. C., Uen, T. S., Guo, S., Xu, C. Y., & Chang, F. J. (2019). Prospect for small-hydropower installation settled upon optimal water allocation: An action to stimulate synergies of water-food-energy nexus. Applied Energy, 238, 668-682. https://doi.org/10.1016/j.apenergy.2019.01.069

[10] Ming, B., Liu, P., Cheng, L., Zhou, Y., & Wang, X. (2018). Optimal daily generation scheduling of large hydro–photovoltaic hybrid power plants. Energy Conversion and Management, 171, 528-540. https://doi.org/10.1016/j.enconman.2018.06.001

[11] Uen, T. S., Chang, F. J., Zhou, Y., & Tsai, W. P. (2018). Exploring synergistic benefits of Water-Food-Energy Nexus through multi-objective reservoir optimization schemes. Science of the Total Environment, 633, 341-351. https://doi.org/10.1016/j.scitotenv.2018.03.172

[12] Chang, F. J., Wang, Y. C., & Tsai, W. P. (2016). Modelling intelligent water resources allocation for multi-users. Water resources management, 30(4), 1395-1413. https://doi.org/10.1007/s11269-016-1229-6

[13] Jain, V., & Sachdeva, G. (2017). Energy, exergy, economic (3E) analyses and multi-objective optimization of vapor absorption heat transformer using NSGA-II technique. Energy Conversion and Management, 148, 1096-1113. https://doi.org/10.1016/j.enconman.2017.06.055

[14] Assaf, J., & Shabani, B. (2018). Multi-objective sizing optimisation of a solar-thermal system integrated with a solar-hydrogen combined heat and power system, using genetic algorithm. Energy Conversion and Management, 164, 518-532. https://doi.org/10.1016/j.enconman.2018.03.026

Le Catch 22 dans ce jardin d’Eden

It has been quite a while since my last update in French on this blog, ‘Discover Social Sciences’. I had not written in French since the spring of 2020. Why am I starting again now? Probably because I need to arrange the ideas in my head. A lot is happening this year, and I discovered, back in 2017, that writing in French helps me put order in the flow of my thoughts.

I focus on a subject which I had already developed in the past and which I am going to present at a conference this Friday. It is the concept which I previously named ‘Étangs énergétiques’ (Energy Ponds) and which I now present as ‘Projet aqueduc’ (Project Aqueduct). I start with a general description of the concept, and then I will review some recent literature on the subject.

Right, the subject. Here it is. It is a technological concept which combines the controlled retention of water in ecosystems located along rivers with the generation of electricity by hydraulic turbines, the whole thing based on marshland structures. From a purely hydrological point of view, a river is a gutter which collects the rainwater falling on the surface of its basin. The riverbed is an inclined valley which connects the lowest points of the terrain in question, and thus rainwater converges from all the points of the river basin towards the mouth of the river.

Sedentary human civilisation is largely based on the fact that river basins have the capacity to retain rainwater for some time before it evaporates or flows into the river. Retention happens at the surface – in the form of lakes, ponds or marshes – and it happens underground, in the form of various aquifer layers and pockets. Underground retention in rocky aquifer pockets is naturally permanent. Water retained in such an aquifer stays there until we draw it out. By contrast, surface retention, as well as retention in the shallower underground aquifer layers, is essentially temporary. Water there is slowed down in its circulation, both in its physical movement towards the lowest points of the local basin (the local river) and in its evaporation into the atmosphere. The very existence of rivers is itself a manifestation of slowed-down circulation. The riverbed cannot evacuate in real time all the water which accumulates in it, and that is how rivers acquire depth: that depth is the measure of the temporary retention of rainwater.

These fundamental mechanisms work differently depending on geological conditions. Here, I focus on the conditions I know from my own environment, namely the ecosystems of the plains and valleys of Northern Europe, roughly speaking north of the Alps. These ecosystems are mostly post-glacial ground moraines, i.e. land literally ploughed, sculpted and carved into uneven relief by glaciers. There are not really many deep aquifer pockets in the bedrock; on the other hand, we have a lot of aquifer layers relatively close to the surface. Consequently, there is not much durable accumulation of water, unlike in Southern Europe and North Africa, where rocky aquifer pockets can retain significant quantities of water for decades, even centuries. The circulation of water in these plain ecosystems is relatively slow – much slower than in the mountains – which favours the presence of wide, not very deep rivers, as well as the formation of marshes.

In these post-glacial plains of Northern Europe, water flows slowly, accumulates little and evaporates quickly. The ideal form of precipitation in these geological conditions is abundant snow in winter – which melts slowly, drop by drop, in spring – together with slow, long rains. The post-glacial moraine absorbs slowly arriving water well, but it is not really made to absorb torrential rains. With climate change, precipitation has changed. There is much less snow in winter and many more violent rains. If we want to keep control over our hydrological system, we need water retention technologies to compensate for temporary variations.

Good, that is the context of my idea, and here is the idea itself. It consists in creating semi-artificial marshland structures in the proximity of rivers and filling them with water pumped from those same rivers. The pumping technology is that of the hydraulic ram: a pump which uses the kinetic energy of flowing water. The general principle is an ancient thing. From what I have read on the subject, the basic principle, in the form of the paddle wheel, was already in use in ancient Rome and remained widely used in European cities until the end of the 18th century. The technology of the hydraulic ram – a pump which uses the said kinetic energy of water in a mechanism similar to the cardiac muscle – fell victim to the vagaries of history. Invented in 1792 by Joseph de Montgolfier (yes, one of the famous balloon brothers, the same who, a few years earlier, together with his brother Étienne, flew the first hot-air balloon), this technology never really had the occasion to show all its advantages. In the 19th century, with the creation of modern water supply systems with running water in the taps, pumping technologies had to offer enough power to assure sufficient pressure at the tap, and that is how electric pumps took over. Nevertheless, when it comes to pumping river water slowly towards artificial marshes, the hydraulic ram is sufficient.

‘Sufficient to do what, exactly?’, one may ask. Here is, then, the rest of my idea. One or more hydraulic rams are immersed in a river. They pump the river’s water towards semi-artificial marshland structures. Those marshes serve to retain rainwater (water which is already flowing in the course of the river). The river water I pump towards the marshes is rainwater which had gravitated, upstream, towards the riverbed. Once in the marshes, that water will anyway end up gravitating back towards the riverbed at some distance upstream. Pumping and retention in the marshes serve to slow down the circulation of water in the local ecosystem. Slowed-down circulation means that more water will accumulate in that ecosystem, like a floating reserve. There will be more water in the underground aquifer layers, hence more water in the local wells and – in the long run – more water in the river itself, since the water in the river is water which has flowed into it from and through the local reservoirs.

Up to this point, the idea presents itself as follows: river => hydraulic ram => marshes => river. I go one step further. Pumping consists in using the kinetic energy of flowing water. Energy is conserved through transformation. The kinetic energy of the flowing river transforms into the kinetic energy of the pump, which in turn transforms into the kinetic energy of the flow towards the marshes.

The surface of the marshes sits above the riverbed, unless they are a polder, in which case there is no need for pumping at all. Once the water is poured into the marshes, they absorb, in their mass, the kinetic energy of the flow, which transforms into the potential energy of elevation. What if we amplified this phenomenon? What if we used the kinetic energy captured by the hydraulic ram in such a way as to minimize its dispersion in the mass of the marshes and to create a maximum of potential energy? Potential energy is proportional to relative elevation. The higher I pump the river water, the more potential energy I recover from the kinetic energy of the pumped flow. The most obvious solution would be a pumped-storage installation, with the retention reservoir placed seriously higher than the river. Although apparently the most obvious one, and carrying some interesting basic principles, this solution has its flaws as regards flexibility and cost.
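
As a sanity check on the orders of magnitude (standard physics; the 20 metres come from the ram pump discussion earlier):

```latex
% Potential energy per cubic metre of water pumped to height h (here h = 20 m):
E = \rho V g h
  = 1000\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2} \times 20\,\mathrm{m}
  \approx 1.96 \times 10^{5}\,\mathrm{J} \approx 0.055\,\mathrm{kWh}
```

Per cubic metre pumped, that is a modest amount of storable energy, which already hints at the economics discussed further down.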

The basic principle to keep in mind is the idea of using the potential energy of water pumped up to a certain elevation as a de facto reservoir of electrical energy. It suffices to place hydro-electric turbines downstream of the water stored at elevation. On the other hand, pumped-storage installations are very costly and very demanding in terms of space. The upper reservoir in pumped-storage installations is supposed to be either a semi-artificial lake or a completely artificial tower reservoir, certainly not a marsh. It is therefore time I explained why I am so attached to this precise hydrological form. Marshes are relatively cheap to create and to maintain, whilst being relatively easy to place near, and to combine with, human habitations. By ‘relatively’ I mean in comparison to pumped storage.

The marsh is a symbolically negative place in our culture. Evil lurks in the marshes. Marshes are unhealthy. My entirely private theory on the subject is that, in the past, human settlements, frequently those which eventually gave birth to cities, were located near marshes. Probably it was because the groundwater level in such places is favourably high. It is easy to dig wells there, to spread irrigation ditches, and small game abounds. Only here is the thing: when homo sapiens abound, they inevitably differentiate into rustic hominids on the one hand and city dwellers on the other hand. That division is a basic mechanism of human civilisation. The countryside produces food; the city produces new social roles, through intense interaction in a densely populated space. One of the fundamental aspects of the city is that it serves as a permanent experimental laboratory for our technologies, through the construction and reconstruction of buildings. Yes, architecture, together with textiles, shipbuilding and war, has always been among the human activities par excellence oriented towards technological innovation.

The city therefore means buildings, and buildings need really firm ground. Marshes become enemies. They have to be drained and durably separated from the natural hydrological circulation which had formed them over millennia. Humans and marshes were thus a natural marriage at the beginning, followed by a conjugal crisis due to the necessity of learning how to make new technology, and now truly new technology makes a conjugal mediation possible in that couple. There is a whole stream of research and architectural innovation concentrated around concepts such as ‘rain gardens’ (Sharma & Malaviya 2021[1]; Li, Liu & Li 2020[2]; Venvik & Boogaard 2020[3]) or ‘sponge cities’ (Ma, Jiang & Swallow 2020[4]; Sun, Cheshmehzangi & Wang 2020[5]). We are in the process of developing technologies which make the cohabitation of cities and marshes not only possible but beneficial both for the environment and for city dwellers.

Question: how can we use the basic principle of pumped storage, i.e. storing the potential energy of water placed at elevation, without building pumped-storage structures, and in the presence of marshland structures at the boundary between city and countryside? Answer: through the construction of relatively small and light towers, with small equalizing reservoirs at the top of each tower. A well-built hydraulic ram makes it possible to elevate water by about 20 metres. One can therefore imagine a network of hydraulic rams installed in the course of a river and connected to small towers of 20 metres each, where each tower is equipped with a downpipe towards the marshes, and the pipe is fitted with small hydro-electric turbines.

The complete idea thus presents itself as follows: river => hydraulic ram => water goes up => light, 20-metre towers with small equalizing reservoirs at the top => water comes down => small hydro-electric turbines => marshes => water accumulates => natural hydrological circulation through the soil => river.
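
Again as a sanity check, the electric power recoverable on the way down follows P = η·ρ·g·Q·H; the flow and efficiency values below are hypothetical:

```python
# Electric power recoverable on the descent from a tower:
# P = eta * rho * g * Q * H. Flow and efficiency values are hypothetical.
RHO, G = 1000.0, 9.81   # water density (kg/m3), gravity (m/s2)

def turbine_power_kw(flow_m3s, head_m, eta=0.75):
    return eta * RHO * G * flow_m3s * head_m / 1000.0

print(turbine_power_kw(flow_m3s=0.01, head_m=20.0))  # ~1.5 kW per 10 L/s of flow
```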

Right, so where is the Catch 22 in this Garden of Eden? In the economics. Good-quality hydraulic rams, as they are manufactured today, are expensive. There are very few solid suppliers of this technology. Most hydraulic rams in use are artisanal contraptions with low power and small flow. The infrastructure of siphoning towers with good-quality hydro-electric turbines costs money, too. If one wants to be serious about electricity, all that gear needs to be fitted with energy storage. The whole infrastructure would generate maintenance costs which I do not even know how to calculate. According to my calculations, the sale of electricity produced in this hydrological circuit could assure a return on investment in no fewer than 8 – 9 years, and even that is calculated with really high electricity prices.

I need to think about it (some more).


[1] Sharma, R., & Malaviya, P. (2021). Management of stormwater pollution using green infrastructure: The role of rain gardens. Wiley Interdisciplinary Reviews: Water, 8(2), e1507. https://doi.org/10.1002/wat2.1507

[2] Li, J., Liu, F., & Li, Y. (2020). Simulation and design optimization of rain gardens via DRAINMOD and response surface methodology. Journal of Hydrology, 585, 124788. https://doi.org/10.1016/j.jhydrol.2020.124788

[3] Venvik, G., & Boogaard, F. C. (2020). Infiltration capacity of rain gardens using full-scale test method: effect of infiltration system on groundwater levels in Bergen, Norway. Land, 9(12), 520. https://doi.org/10.3390/land9120520

[4] Ma, Y., Jiang, Y., & Swallow, S. (2020). China’s sponge city development for urban water resilience and sustainability: A policy discussion. Science of the Total Environment, 729, 139078. https://doi.org/10.1016/j.scitotenv.2020.139078

[5] Sun, J., Cheshmehzangi, A., & Wang, S. (2020). Green infrastructure practice and a sustainability key performance indicators framework for neighbourhood-level construction of sponge city programme. Journal of Environmental Protection, 11(2), 82-109. https://doi.org/10.4236/jep.2020.112007

Big Black Swans

Oops! Another big break in blogging. Sometimes life happens so fast that thoughts in my head run faster than I can possibly write about them. This is one of those sometimeses. Topics for research and writing abound, projects abound, everything is changing at a pace which proves challenging to a gentleman in his 50s, such as I am. Yes, I am a gentleman: even when I want to murder someone, I go out of my way to stay polite.

I need to do that thing I periodically do, on this blog. I need to use published writing as a method of putting order in the chaos. I start with sketching the contours of chaos and its main components, and then I sequence and compartmentalize.

My chaos is made of the following parts:

>> My research on collective intelligence

>> My research on energy systems, with focus on investment in energy storage

>> My research on the civilisational role of cities, and on the concept of the entire human civilisation, such as we know it today, being a combination of two productive mechanisms: production of food in the countryside, and production of new social roles in the cities.

>> Joint research which I run with a colleague of mine, on the reproduction of human capital

>> The project I once named Energy Ponds, and which I recently renamed ‘Project Aqueduct’, for the purposes of promoting it.

>> The project which I have just started, together with three other researchers, on the role of Territorial Defence Forces during the COVID-19 pandemic

>> An extremely interesting project, which both I and a bunch of psychiatrists from my university have provisionally failed to kickstart, on the analysis of natural language in diagnosing and treating psychoses

>> A concept which recently came to my mind, as I was working on a crowdfunding project: a game as a method of behavioural research into complex decisional patterns.

Nice piece of chaos, isn’t it? How do I put order in my chaos? Well, I ask myself, and, of course, I do my best to answer honestly the following questions: What do I want? How will I know I have what I want? How will other people know I have what I want? Why should anyone bother? What is the point? What do I fear? How will I know my fears come true? How will other people know my fears come true? How do I want to sequence my steps? What skills do I need to learn?

I know I tend to be deceitful with myself. As a matter of fact, most of us tend to. We like confirming our ideas rather than challenging them. I think I can partly overcome that subjectivity of mine by interweaving my answers to those questions with references to published scientific research. Another way of staying close to real life with my thinking consists in trying to understand what specific external events have pushed me to engage in the different paths, which, as I walk down all of them at the same time, make my personal chaos.

In 2018, I started using artificial neural networks, just like that, mostly for fun, and in a very simple form. As I observed those things at work, I developed a deep fascination with intelligent structures, and just as deep (i.e. f**king hard to phrase out intelligibly) an intuition that neural networks can be used as simulators of collectively intelligent social structures.

Both of my parents died in 2019, exactly at the same age of 78, having spent the last 20 years of their respective individual lives in complete separation from each other, to the point of not having exchanged a spoken or written word over those last 20 years. That changed my perspective as regards subjectivity. I became acutely aware how subjective I am in my judgement, and how subjective other people most likely are. The pandemic started in early 2020, and, almost at the same moment, I started to invest in the stock market, after a few years of break. I had been learning at an accelerated pace. I had been adapting to the reality of high epidemic risk – something I had almost forgotten since a devastating bout of scarlatina at the age of 9 – and I had been adapting to a subjectively new form of economic decisions (i.e. those in the stock market). That had been one hell of a ride.

Right now, another piece of experience comes into the game. Until recently, in my house, the attic was mine. The remaining two floors were my wife’s dominion, but the attic was mine. It was painted in joyful, eye-poking colours. There was a lot of yellow and orange. It was mine. Yet, my wife had an eye for that space. Wives do, quite frequently. A fearsome ally came to support her: an interior decorator. Change has just happened. Now, the attic is all dark brown and cream. To me, it looks like the inside of a coffin. Yes, I know what the inside of a coffin looks like: I saw it just before my father’s funeral. That attic has become an alien space for me. I still have hard times wrapping my mind around how shaken I am by that change. I realize how attached I am to the space around me. If I am so strongly bound to colours and shapes in my vicinity, other people probably feel the same, and that triggers another intuition: we, humans, are either simple dwellers in the surrounding space, or we are architects thereof, and these are two radically different frames of mind.

I am territorial as f**k. I have just clarified it inside my head. Now, it is time to go back to science. In a first step, I am trying to connect those experiences of mine to my hypothesis of collective intelligence. Step by step, I am phrasing it out. We are intelligent about each other. We are intelligent about the physical space around us. We are intelligent about us being subjective, and thus we have invented that thing called language, which allows us to produce a baseline for intersubjective description of the surrounding world.

I am conscious of my subjectivity, and of my strong emotions (that attic, f**k!). Therefore, I want to see my own ideas from other people’s point of view. Some review of literature is what I need. I start with Peeters, M. M., van Diggelen, J., Van Den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human–AI society. AI & SOCIETY, 36(1), 217-238. https://doi.org/10.1007/s00146-020-01005-y . I start with it because I could lay my hands on an open-access version of the full paper, and, as I read it, many bells ring in my head. Among all those bells ringing, the main one refers to the experience I had with otherwise simplistic neural networks, namely the perceptrons possible to structure in an Excel spreadsheet. Back in 2018, I was observing the way a truly dumb neural network was working, one made of just 6 equations looped together, and I had that realization: ‘Blast! Those six equations together are actually intelligent’. This is the core of that paper by Peeters et al. The whole story of collective intelligence became a thing when Artificial Intelligence started spreading throughout society; thus it is generally the same in scientific literature as it is with me individually. Conscious, inquisitive interaction with artificial intelligence seems to awaken an entirely new worldview, where we, humans, can see at work an intelligent fruit of our own intelligence.

I am trying to make one more step, from bewilderment to premises and hypotheses. Peeters et al. name three big intellectual streams: (1) the technology-centric perspective, (2) the human-centric one, and finally (3) the collective intelligence-centric perspective. The third one sounds familiar, and so I dig into it. The general idea here is that humans can put their individual intelligences into a kind of interaction which is smarter than those individual intelligences themselves. This hypothesis is a little counterintuitive – if we consider electoral campaigns or Instagram – but it becomes much more plausible when we think about networks of inventors and scientists. Peeters et al. present an interesting extension to that, namely collectively intelligent agglomerations of people and technology. This is exactly what I do when I do empirical research and use a neural network as a simulator, with quantitative data in it. I am one human interacting with one simple piece of AI, and interesting things come out of it.

That paper by Peeters et al. cites a book: Sloman, S. A., & Fernbach, P. (2018). The knowledge illusion: Why we never think alone (Penguin). Before I pass to my first impressions about that book, another aside. In 1993, another Sloman – Aaron Sloman, the AI researcher – wrote an introduction to another book, this one being a collection of proceedings (conference papers in plain human lingo) from a conference, grouped under the common title: Prospects for Artificial Intelligence (Hogg & Humphreys 1993[1]). In that introduction, Aaron Sloman claims that using Artificial Intelligence as a simulator of General Intelligence requires a specific approach, which he calls ‘design-based’, where we investigate the capabilities and the constraints within which intelligence, understood as a general phenomenon, has to function. Based on those constraints, requirements can be defined, and, consequently, the way that intelligence is enabled to meet them, through its own architecture and mechanisms.

We jump 25 years, from 1993 to 2018, and this is what Sloman & Fernbach wrote in the introduction to “The knowledge illusion…”: “This story illustrates a fundamental paradox of humankind. The human mind is both genius and pathetic, brilliant and idiotic. People are capable of the most remarkable feats, achievements that defy the gods. We went from discovering the atomic nucleus in 1911 to megaton nuclear weapons in just over forty years. We have mastered fire, created democratic institutions, stood on the moon, and developed genetically modified tomatoes. And yet we are equally capable of the most remarkable demonstrations of hubris and foolhardiness. Each of us is error-prone, sometimes irrational, and often ignorant. It is incredible that humans are capable of building thermonuclear bombs. It is equally incredible that humans do in fact build thermonuclear bombs (and blow them up even when they don’t fully understand how they work). It is incredible that we have developed governance systems and economies that provide the comforts of modern life even though most of us have only a vague sense of how those systems work. And yet human society works amazingly well, at least when we’re not irradiating native populations. How is it that people can simultaneously bowl us over with their ingenuity and disappoint us with their ignorance? How have we mastered so much despite how limited our understanding often is?” (Sloman, Steven; Fernbach, Philip. The Knowledge Illusion (p. 3). Penguin Publishing Group. Kindle Edition)

Those readings have given me a thread, and I am interweaving that thread with my own thoughts. Now, I return to another reading, namely to “The Black Swan” by Nassim Nicholas Taleb, where, on pages xxi – xxii of the introduction, the author writes: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives. Ever since we left the Pleistocene, some ten millennia ago, the effect of these Black Swans has been increasing. It started accelerating during the industrial revolution, as the world started getting more complicated, while ordinary events, the ones we study and discuss and try to predict from reading the newspapers, have become increasingly inconsequential”.

I combine the idea by Sloman & Fernbach, that we, humans, can be really smart collectively through interaction between very limited and subjective individual intelligences, with the concept of the Black Swan by Nassim Nicholas Taleb. I reinterpret the Black Swan. This is an event we did not expect to happen, and yet it happened, and it has blown a hole in our self-sufficient certainty that we understand what the f**k is going on. When we do things, we expect certain outcomes. When we don’t get the outcomes we expected, this is a functional error. This is a local instance of chaos. We take that local chaos as learning material, and we try again, and again, and again, in the expectation of bringing order into the general chaos of existence. Our collectively intelligent learning is made of precisely those repeated, error-prone attempts.

Nassim Nicholas Taleb claims that our culture tends to mask the occurrence of Black Swan-type outliers as just the manifestation of otherwise recurrent, predictable patterns. We collectively internalize Black Swans. This is what a neural network does. It takes an obvious failure – the local error of optimization – and utilizes it as valuable information in the next experimental round. The network internalizes a series of f**k-ups, because it never hits the output variable exactly home; there is always some discrepancy, at least a tiny one. The fact of being wrong about reality becomes normal. Every neural network I have worked with does the same thing: it starts with some substantial magnitude of error, and then it tends to reduce that error, at least temporarily, i.e. for at least a few more experimental rounds.

This is what a simple neural network – one of those I use in my calculations – does with quantitative variables. It processes data so as to create error, i.e. so as to purposefully create outliers located outside the expected range. Those neural networks purposefully create Black Swans, those abnormalities which allow us to learn. Now, what is so collective about neural networks? Why do I intuitively associate my experiments with neural networks with collective intelligence rather than with individual intelligence? Well, I work with socio-economic quantitative variables. The lowest level of aggregation I use is the probability of occurrence of a specific event, and even this is really low aggregation for me. Most of my data is like Gross Domestic Product, average hours worked per person per year, average prices of electricity etc. This is essentially collective data, in the sense that no individual intelligence can possibly be measured for its population density or its rate of inflation. There needs to be a society in place for those metrics to exist at all.

When I work with that type of data, I assume that many people observe and gauge it, then report and aggregate their observations etc. Many people put a lot of work into making those quantitative variables both available and reliable. I guess it is important, then. When some kind of data is that important collectively, it is very likely to reflect some important aspect of collective reality. When I run that data through a neural network, the latter yields a simulation of collective action and its (always) provisional outcomes.

My neural network (I mean the one on my computer, not the one in my head) takes like 0.05 of local Gross Domestic Product, then 0.4 of average consumption of energy per capita, maybe 0.09 of inflation in consumer prices, plus some other stuff in random proportions, and sums up all those small portions of whatever is important as a collectively measured socio-economic outcome. Usually, that sum is designated as ‘h’, the aggregate input. Then, my network takes that h and puts it into a neural activation which, in most cases, is the hyperbolic tangent, tanh(h) = (e^(2h) – 1) / (e^(2h) + 1). When we learn by trial and error, the number e^(2h) measures the force with which the neural network reacts to a complex stimulation from a set of variables xi. The ‘e^2’ part of that reaction is constant and equals e^2 ≈ 7.389056099, whilst h is a variable parameter, specific to the given phenomenal occurrence. The parameter h is roughly proportional to the number of variables in the source empirical set. The more complex the reality I process with the neural network, i.e. the more variables I split my reality into, the greater the value of h. In other words, the more complexity, the more the neuron, based on the expression e^(2h), is driven away from its constant root e^2. Complexity in variables induces greater swings in the hyperbolic tangent, i.e. greater magnitudes of error, and, consequently, longer strides in the process of learning.
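
A minimal numeric rendering of that mechanism, with made-up weights and inputs, just to show the tanh neuron and its error at work:

```python
import math

# A single tanh neuron of the kind described above. Weights and inputs are
# made up; x stands for scaled socio-economic variables.
def neuron(x, weights):
    h = sum(w * xi for w, xi in zip(weights, x))  # aggregate input h
    return math.tanh(h)                           # (e^(2h) - 1) / (e^(2h) + 1)

x = [0.05, 0.40, 0.09]      # e.g. portions of GDP, energy per capita, inflation
weights = [1.0, 1.0, 1.0]
output = neuron(x, weights)
error = 0.7 - output        # discrepancy vs the expected output (0.7 is made up)
print(output, error)        # the error feeds the next experimental round
```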

Logically, the more complex the social reality I represent with quantitative variables, the bigger the Black Swans which the neural network produces as it tries to optimize one single variable chosen as the desired output of the neurally activated input.


[1] Hogg, D., & Humphreys, G. W. (1993). Prospects for Artificial Intelligence: Proceedings of AISB’93, 29 March-2 April 1993, Birmingham, UK (Vol. 17). IOS Press.

When it plays out, it looks nasty

I feel like using my hypothesis of collectively intelligent social structures in other fields than just energy and urbanisation, which is what I have been largely doing so far. This time, I want to make a case for individual freedom as both a factor and a manifestation of collective intelligence. There is a population of humans. Each human has m possible states of being. As soon as two humans interact, each of the m states of being in the first human can interact with each of the m states of being in the other human. It is like an existential geometrical square: those two humans together have m*m = m² collective states of being. Generally, n humans, with m possible states of being in each of them, can produce m^n different states of being together. When n gets substantial, like the 38 million people in my home country, Poland, you can hardly expect all 38 million of us Poles to have the same repertoire of freedom in our behavioural patterns. Some of us will have 3m actually happening states of being, some others will soar into 6m alternative ways of being in the world, whilst still some others will modestly stick to 0.3m. In that large population, the standard m ways of existing will be an expected state, thus an arithmetical average or an expected interval around it.
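
A toy enumeration of that m^n arithmetic, with invented state labels:

```python
from itertools import product

# Toy illustration of the m^n combinatorics: m = 4 hypothetical states of
# being per person, n = 3 people.
states = ["work", "rest", "create", "protest"]   # m = 4, labels invented
n_people = 3

collective_states = list(product(states, repeat=n_people))
print(len(collective_states))   # 4 ** 3 = 64 collective states of being
print(collective_states[0])     # ('work', 'work', 'work')
```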

Collectively intelligent structures learn by experimenting with many alternative states of themselves. Up to a point, the more such alternative states, the more and better we can learn. There is probably a point where ‘the more’ becomes ‘too much to process’, and then we face a fork in the road: either we simply ignore some alternative versions of ourselves and truly learn just from those which we can cover inside our cognitive span, or we try to experiment with everything we can possibly be, and chaos develops. I understand freedom, at the collective level, as the flexibility in shifting between those different states of being. Organized, collective freedom is the ability to explore the sweet spot of transition between order and chaos, and the ability to experiment with as many alternative versions of ourselves as we possibly can. Those collectively defined alternative realities always follow the basic logic of m^n. At the end of the day, there are as many versions of us being together as there are of us, for one, namely the ‘n’ exponent, and as many as there are possible states of being in the average individual among the n, and this is the ‘m’ base.

Degrees of freedom in the average member of society are the foundation of collectively intelligent learning. I guess this is a mathematical argument for individual freedom in legal and political systems. As I think about my whole hypothesis of collectively intelligent social structures, I inevitably ask the question which any social scientist needs to ask: what is the practical usefulness of all that stuff? Social sciences are applied sciences, at the end of the day. However abstract I go in my intellectual peregrinations, my findings and methods need to serve in real life, for designing policies, business strategies, business plans etc. The empirical method I have developed around that whole thing of collective intelligence opens onto two practical applications. Firstly, it allows non-arbitrary testing of various empirical observables as actual social outcomes. In policies and business strategies, and, by the way, in the whole realm of social sciences, there is that curse of arbitrary orientations. ‘People strive to maximize profit’. ‘No, they want to optimize dynamic equilibriums in their social games’. ‘Well, maybe, but we can and should educate people towards social justice and environmentally rational behaviour’ etc. etc. All that chatter abounds in literature which deems itself ‘scientific’, and yet it is 100% metaphysics, with no scientific grounds at all. I think my method allows working around that metaphysical part and testing human populations for the actual outcomes they collectively, objectively pursue. Here comes an interesting question: are our goals collective or individual? The more I think about it, the more I am convinced they are collective. When I ask myself about my own goals, at least those which I phrase out explicitly in my mind, they are all sort of categorical rather than idiosyncratically my own. I pursue the types of goals which many other people pursue in their existence. I just hop on those specific wagons, with my own backpack.

Secondly, my method allows exploring the issue of Black Swans, i.e. outlier events which suddenly become key drivers of social change. The method I have developed allows simulating something like a social chain reaction. An unexpected triggering event happens, and it is unexpected because, from our point of view, it is random. That triggers a collection of events which we could otherwise fathom, but they have been in the refrigerator of history so far. Now, they are triggered into existence, and, at the same time, the overall cohesion of the social structure weakens, at least temporarily. New things start happening, and old things happen sort of more loosely and chaotically than they used to. I have discovered that, depending on the exact orientation assigned a priori to the social structure I study, those social chain reactions can be essentially predictable, completely unpredictable, or, in still another case, we can calm them down exaggeratedly quickly, without really learning from them.

All in all, the method of using a simple neural network as social simulator, which I developed in connection with my hypothesis of collectively intelligent social structures, allows what I perceive as very empiricist a study of social change, much freer of metaphysics than many other methods. Of course, a bit of metaphysics is unavoidable. What we usually call ‘quantitative variables’ in social sciences are always the mathematics of something we think that happens, and we think in terms of our language and culture.

Ooops, pardon my manners, I have gone into philosophy again. Philosophy is nice, but when I stay in this realm longer than what is strictly necessary for feeling like an intellectual, my apish side calls for more ground under my feet. I use this blog for providing a current account of my intellectual journey, and of the actual projects which I am working on. I hope that the paragraphs above are (provisionally) sufficient as regards the intellectual journey, and I can pass to debriefing on my projects.

One of the projects I start working on is a platform for debt-based crowdfunding. This is some sort of comeback to the interest I had in financial schemes for the implementation of small installations in renewable energies. For the less initiated readers, I am quickly going through the basics. You probably know that if your cousin asks you to invest in his or her business, you can do it, on the basis of a private contract of partnership, and, in most countries, you don’t even go to jail afterwards. This is the market of private equity. You can also lend money to your cousin, you can agree on the exact terms of the loan, and this is financing through private debt. The opposite of private is public, and therefore we have public capital markets on the opposite end of the spectrum. Stock markets are the most visible ones, and sort of next to them are the markets of publicly traded debt, where you can buy and sell bonds of all kinds: corporate, municipal, and sovereign.

Between the strictly private and the regulated public, a transitional zone, of many shades and colours, is to be found. Crowdfunding, sometimes called ‘societal funding’ or ‘communitarian funding’, dwells in this zone, precisely. The basic difference between crowdfunding and private finance strictly speaking is the largely aleatory, social-media-type creation of relations between investors and entrepreneurs. Crowdfunding happens essentially via digital platforms, where entrepreneurs auction their ventures and try to attract whoever is interested in them. Those digital platforms are, in themselves, essentially marketing engines. On the other hand, the basic difference between public financial markets and crowdfunding is that the latter does not really allow tradability in financial positions. When I invest my money through crowdfunding, it is much more of a long-term commitment than investment via the stock market. Less liquidity in my financial assets means more exposure to long-term risks, and yet less exposure to short-term volatility in market value.

In my own big picture of social reality, I put the emergence of crowdfunding in the same phenomenological bag as cryptocurrencies, the progressively increasing supply of money in relation to real output in the economy (thus decreasing velocity of money), and increasingly cash-furnished corporate balance sheets. As a civilisation, we are building up a growing base of financial liquidity, and that means we are facing a quickening pace of depreciation in technological assets, and thus we are in the middle of accelerated technological change. Now, a little word is due about the way I understand accelerated technological change. I have encountered quite well-articulated views that technological change is currently disappointingly slow as compared to what we need. Well, maybe, but in strict business terms, when a piece of technology which I purchased last year becomes morally obsolete twice as fast as the one I purchased 5 years ago, because new generations of the same equipment pop up faster and faster, this is accelerated technological change, and, as a businessperson, I need to figure out a strategy to cope with that change.

Here, my own point of view on that phenomenon called ‘financialization’ differs significantly from that of a lot of other researchers. The mainstream doctrine says that increased financialization is a bad thing, it destabilizes the economic system, and it contributes to social inequalities. I think that financialization is the by-product of something else. It is an otherwise rational coping mechanism to smooth and amortize quick social change which, without financialization, could take very nasty forms, like global wars, massive disappearance of human settlements, and much greater damage to the natural environment than what we currently bitch and moan about. Just imagine that somewhere in Europe, 5 million people in a post-industrial spot cannot afford to pay for electricity anymore and they start burning wood and coal in stoves instead. This is what could happen in the presence of quick technological change and in the absence of that horrible financialization.

Crowdfunding is essentially attached to new ideas and new business structures. It is seed capital or early development capital. When I invest my money through crowdfunding, I am opening a long-term position in something essentially young, burgeoning and full of uncertainty. One hundred years ago, mustering capital for such a venture would take an entrepreneur years of patient contacts with potential investors. Now, it can take months or even weeks, and this is the tangible gain of time through the use of digital platforms.   

With that introduction in mind, I get closer to the main thread of that project in crowdfunding, namely to the new regulations thereof, likely to enter into force in Poland this autumn, based on recent regulations of the European Union as a whole. I am passing in review the REGULATION (EU) 2020/1503 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 7 October 2020 on European crowdfunding service providers for business, and amending Regulation (EU) 2017/1129 and Directive (EU) 2019/1937, to be found at https://eur-lex.europa.eu/legal-content/PL/TXT/?uri=CELEX:32020R1503 . As I usually do, I start from the end, more specifically from Annex II, titled SOPHISTICATED INVESTORS FOR THE PURPOSE OF THIS REGULATION.

A sophisticated investor is an investor who possesses the awareness of the risks associated with investing in capital markets and adequate resources to undertake those risks without exposing itself to excessive financial consequences. Investors may be categorised as sophisticated if they meet identification criteria, which, in turn, differ according to the legal personality of the entity. Legal persons (like a bunch of folks in a business partnership) are deemed sophisticated investors if they meet at least one of the following criteria: (a) own funds of at least EUR 100 000; (b) net turnover of at least EUR 2 000 000; (c) balance sheet of at least EUR 1 000 000.

On the other hand, natural persons can call themselves sophisticated investors when they meet at least two of the following criteria (a quick sketch of both tests follows the list):

>> (a) personal gross income of at least EUR 60 000 per fiscal year, or a financial instrument portfolio, defined as including cash deposits and financial assets, that exceeds EUR 100 000;

>> (b) the investor works or has worked in the financial sector for at least one year in a professional position which requires knowledge of the transactions or services envisaged, or the investor has held an executive position for at least 12 months in a legal person considered as sophisticated investor;

>> (c) the investor has carried out transactions of a significant size on the capital markets at an average frequency of 10 per quarter, over the previous four quarters.
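
Translated into code, both Annex II tests are simple threshold rules. Here is a minimal sketch, using the thresholds quoted above; the function names, and the simplification of criterion (b) into a single boolean, are mine:

```python
def legal_person_is_sophisticated(own_funds, net_turnover, balance_sheet):
    # Annex II test for legal persons: at least ONE criterion must hold.
    criteria = [
        own_funds >= 100_000,        # (a) own funds of at least EUR 100 000
        net_turnover >= 2_000_000,   # (b) net turnover of at least EUR 2 000 000
        balance_sheet >= 1_000_000,  # (c) balance sheet of at least EUR 1 000 000
    ]
    return any(criteria)

def natural_person_is_sophisticated(gross_income, portfolio,
                                    worked_in_finance, big_transactions_per_quarter):
    # Annex II test for natural persons: at least TWO criteria must hold.
    criteria = [
        gross_income >= 60_000 or portfolio > 100_000,  # (a)
        worked_in_finance,                              # (b), simplified to a boolean
        big_transactions_per_quarter >= 10,             # (c)
    ]
    return sum(criteria) >= 2

print(legal_person_is_sophisticated(150_000, 500_000, 300_000))       # True: meets (a)
print(natural_person_is_sophisticated(40_000, 120_000, False, 2))     # False: only (a) holds
```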

The whole distinction between ordinary investors and the sophisticated ones lies in the degree of legal protection they are provided with. That distinction essentially taps into an older one, contained in the DIRECTIVE 2014/65/EU OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 15 May 2014 on markets in financial instruments and amending Directive 2002/92/EC and Directive 2011/61/EU (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32014L0065 ). As it happens sometimes, protection actually turns out to be a limitation. Non-sophisticated investors are generally limited in the amounts of money they can invest, and in the repertoire of financial instruments which they can invest in. If one wants not to be treated like a child, they have to make a special, written request to be treated as a sophisticated investor, and the operator of the crowdfunding platform to which that request is addressed can accept or reject it.

The Polish prospective regulations on crowdfunding approach things from a different angle. By the way, they are just prospective regulations, and the only official version I could get my hands on is in Polish. For those who speak the beautiful language of my home country – distinctive, among others, by a record-level density of consonants in one word – I placed the current bill of this regulation in the archives of my blog, just here: https://discoversocialsciences.com/wp-content/uploads/2021/05/Projekt-crowdfunding-.docx . Polish regulators focus mostly on the concept of the ‘key investment information sheet’, which I will allow myself to call KIIS in what follows, and which is present in the European regulations as well. The KIIS should warn prospective investors that the investing environment they have entered entails risks that are covered neither by deposit guarantee schemes, nor by investor compensation schemes. The KIIS should reflect the specific features of lending-based and investment-based crowdfunding. To that end, specific and relevant indicators should be required. The KIIS should also take into account, where available, the specific features and risks associated with project owners, and should focus on material information about the project owners, the investors’ rights and fees, and the type of transferable securities, admitted instruments for crowdfunding purposes and loans offered. The KIIS should be drawn up by the project owners, because the project owners are in the best position to provide the information required to be included therein. However, since it is the crowdfunding service providers that are responsible for providing the KIIS to prospective investors, it is the crowdfunding service providers that should ensure that the KIIS is clear, correct and complete.

The specificity of the Polish regulations as regards the KIIS lies largely in the addressees of that information. In the general European regulations, the KIIS is addressed to prospective and actual investors. In the Polish regulations, it is strongly stressed that crowdfunding operators should communicate all their KIISes to the Financial Supervision Commission (PL: Komisja Nadzoru Finansowego, https://www.knf.gov.pl/en/ ), not later than 7 days before making the same KIIS available to prospective investors. On the other hand, the owner of the project subject to crowdfunding can publish the KIIS on their own platform only after the crowdfunding provider has published it on theirs. We have a sequence of KIISes. The first KIIS goes from the crowdfunding provider to the Financial Supervision Commission, which has at least 7 days to consider it (consider what exactly?). The next KIIS goes from the crowdfunding provider to prospective investors, who also receive the last KIIS from the owner of the crowdfunded project in question.

In a general manner, those Polish regulations give a lot of discretionary prerogatives to the Financial Supervision Commission as regards crowdfunding providers. They can halt a crowdfunding project immediately, and for an essentially indefinite period of time, on the grounds of a simple suspicion. I don’t like it. Someone at the Financial Supervision Commission is the first to know about a crowdfunded project, they can request any information about that project, and they can halt the project whenever they want. That smells bad. That smells of insider trading. That smells of uncontrolled pressure on the owners of crowdfunded projects. Imagine: you start such a project, and then you have a phone call, I mean THE phone call. Someone tells you they know about your crowdfunding campaign, and they would willingly take 60% of your business for 50% of its book value. You refuse, and the next thing you know, your crowdfunding campaign is suspended for an unknown period of time. I know the scheme, I saw it play out, and when it plays out, it looks nasty, believe me. That means people close to the government taking over entire swaths of small business, and precisely the kind of small business which is most exposed to adverse action: the emerging kind.

The type of riddle I like

Once again, I had quite a break in blogging. I spend a lot of time putting together research projects, across a network of many organisations which I am supposed to bring to work together. I give it a lot of time and personal energy. It drains me a bit, and I like that drain. I like the thrill of putting together a team, agreeing about goals and possible openings. Since 2005, when I stopped running my own business and settled for a quieter, academic career, I hadn’t experienced that special kind of personal drive. I sincerely believe that every teacher should apply his or her own teaching in their everyday life, just to see if that teaching still corresponds to reality.

This is one of the reasons why I have made it a regular activity of mine to invest in the stock market. I teach economics, and the stock market is very much like the pulse of economics, in all its grades and shades, ranging from hardcore macroeconomic cycles, passing through the microeconomics of specific industries I am currently focusing on with my investment portfolio, and all the way down the path of behavioural economics. I teach management, as well, and putting together new projects in research is the closest I can come, currently, to management science being applied in real life.

Still, besides trying to apply my teaching in real life, I do science. I do research, and I write about the things I think I have found out on that research path of mine. I do a lot of research as regards the economics of energy. Currently, I am still revising a paper of mine, titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’. Around the topic of energy economics, I have built a more general method of studying quantitative socio-economic data, with the technical hypothesis that said data manifests collective intelligence in human social structures. It means that whenever I deal with a collection of quantitative socio-economic variables, I study the dataset at hand by assuming that each multivariate record line in the database is the local instance of an otherwise coherent social structure, which experiments with many such specific instances of itself and selects those offering the best adaptation to the current external stressors. Yes, there is a distinct sound of evolutionary method in that approach.

Over the last three months, I have been slowly ruminating my theoretical foundations for the revision of that paper. Now, I am doing what I love doing: I am disrupting the gently predictable flow of theory with some incongruous facts. Yes, facts don’t know how to behave themselves, like really. Here is an interesting fact about energy: between 1999 and 2016, at the planetary scale, there had been more and more new cars produced per each new human being born. This is visualised in the composite picture below. Data about cars comes from https://www.worldometers.info/cars/ , whilst data about the headcount of population comes from the World Bank (https://data.worldbank.org/indicator/SP.POP.TOTL ).

Now, the meaning of all that. I mean, not ALL THAT (i.e. reality and life in general), just all that data about cars and population. Why do we consistently make more and more physical substance of cars per each new human born? Two explanations come to my mind. One politically correct and nicely environmentalist: we are collectively dumb as f**k and we keep overshooting the output of cars over and above the incremental change in population. The latter, when translated into a rate of growth, tends to settle down (https://data.worldbank.org/indicator/SP.POP.GROW ). Yeah, those capitalists who own car factories just want to make more and more money, and therefore they make more and more cars. Yeah, those corrupt politicians want to conserve jobs in the automotive industry, and they support it. Yeah, f**k them all! Yeah, cars destroy the planet!

I checked. The first door I knocked at was General Motors (https://investor.gm.com/sec-filings ). What I can see is that they actually make more and more operational money by making fewer and fewer cars. Their business used to be overshot in terms of volume, and now they are slowly making sense, and money, out of making fewer cars. Then I checked with Toyota (https://global.toyota/en/ir/library/sec/ ). These guys look as if they were struggling to maintain their capacity to make approximately the same operational surplus each year, and they seem to be experimenting with the number of cars they need to put out in order to stay in good financial shape. When I say ‘experimenting’, it means experimenting upwards or downwards.

As a matter of fact, the only player who seems to be unequivocally making more operational money out of making more cars is Tesla (https://ir.tesla.com/#tab-quarterly-disclosure ). There comes another explanation – much less politically correct, if at all – for there being more cars made per each new human, and it says that we, humans, are collectively intelligent, and we have a good reason for making more and more cars per each new human coming to this realm of tears, and the reason is to store energy in movable, possibly auto-movable a form. Yes, each car has a fuel tank or a set of batteries, in the case of them Teslas or other electric f**kers. Each car is a moving reservoir of chemical energy, convertible on demand into kinetic energy, which, in turn, has economic utility. Making more cars with batteries pays off better than making more cars with combustible fuel in their tanks: a new generation of movable reservoirs of chemical energy is replacing an older generation thereof.

Let’s hypothesise that this is precisely the point of each new human being coupled with more and more of a new car being made: the point is more chemical energy convertible into kinetic energy. Do we need to move around more, as time passes? Maybe, although I am a bit doubtful. Technically, with more and more humans being around in a constant space, there are more and more humans per square kilometre, and that incremental growth in the density of population happens mostly in cities. I described that phenomenon in a paper of mine, titled ‘The Puzzle of Urban Density And Energy Consumption’. That means that the space available for travelling, and needing to be covered, per individual human being, is actually decreasing. Less space to travel in means less need for means of transportation.

Thus, what are we after, collectively? We might be preparing for having to move around more in the future, or for having to restructure the geography of our settlements. That’s possible, although the research I did for that paper about urban density indicates that geographical patterns of urbanization are quite durable. Anyway, those two cases sum up to some kind of zombie apocalypse. On the other hand, the fact of developing the amount of dispersed, temporarily stored energy (in cars) might be a manifestation of us learning how to build and maintain large, dispersed networks of energy reservoirs.

Isn’t it dumb to hypothesise that we go out of our way, as a civilisation, just to learn the best ways of developing what we are developing? Well, take the medieval cathedrals. Them medieval folks would keep building them for decades or even centuries. The Notre Dame cathedral in Paris, France, seems to be the record holder, with a construction period stretching from 1160 to 1245 (Bruzelius 1987[1]). Still, the same people who were so appallingly slow when building a cathedral could accomplish lightning-fast construction of quite complex military fortifications. When building cathedrals, the masters of stone masonry would do something apparently idiotic: they would build, then demolish, and then build again the same portion of the edifice, many times. WTF? Why slow down something we can do quickly? In order to experiment with the process and with the technologies involved, sir. Cathedrals were experimental labs of physics, mathematics and management, long before these scientific disciplines even emerged. Yes, there was the official rationale of getting closer to God, to accomplish God’s will, and, honestly, it came in handy. There was an entire culture – the medieval Christianity – which was learning how to learn by experimentation. The concept of fulfilling God’s will through perseverant pursuit, whilst being stoic as regards exogenous risks, was excellent a cultural vehicle to that purpose.

We move a few hundred years forward in time, to the 17th century. The cutting edge of technology was to be found in textiles and garments (Braudel 1992[2]), and the peculiarity of the European culture consisted in quickly changing fashions, geographically idiosyncratic and strongly enforced through social peer pressure. The industry of garments and textiles was a giant experimental lab of business and management, developing the division of labour, the management of supply chains, the quick study of subtle shades in customers’ tastes and just as quick adaptation thereto. This is how we, Europeans, prepared for the much later introduction of mechanized industry, which, in turn, gave birth to what we are today: a species controlling something like 30% of all energy on the surface of our planet.

Maybe we are experimenting with dispersed, highly mobile and coordinated networks of small energy reservoirs – the automotive fleet – just for the sake of learning how to develop such networks? Some other facts, which, once again, are impolitely disturbing, come to the fore. I had a look at the data published by the United Nations as regards the total installed capacity of electricity generation (https://unstats.un.org/unsd/energystats/ ). I calculated the average electrical capacity per capita, at the global scale. It turns out that in 2014 the average human capita on Earth had around 60% more power capacity to tap into, as compared to a similarly human capita in 1999.

Interesting. It looks even more interesting when taken as the first moment of a process. When I take the annual incremental change in the installed electrical capacity on the planet and divide it by the absolute demographic increment, thus when I go ‘delta capacity / delta population’, that coefficient of elasticity grows like hell. In 2014, it was almost three times higher than in 1999. We, humans, keep developing a denser network of cars, as compared to our population, and, at the same time, we keep increasing the relative power capacity which every human can tap into.

Someone could say it is because we simply consume more and more energy per capita. Cool, I check with the World Bank: https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE . Yes, we increase our average annual consumption of energy per one human being, and yet this is a very gentle increment: barely 18% from 1999 through 2014. Nothing to do with the quick accumulation of generation capacity. We keep densifying the global fleet of cars, and growing a reserve of power capacity. What are we doing it for?

This is a deep question, and I calculated two additional elasticities with the data at hand. Firstly, I denominated the incremental change in the number of new cars per each new human born over the average consumption of energy per capita. In the visual below, this is the coefficient ‘Elasticity of cars per capita to energy per capita’. Between 1999 and 2014, this elasticity passed from 0.49 to 0.79. We keep accumulating something like an overhead of incremental car fleet, as compared to the amount of energy we consume.

Secondly, I formalized the comparison between individual consumption of energy and average power capacity per capita. This is the ‘Elasticity of capacity per capita to energy per capita’ column in the visual below.  Once again, it is a growing trend.   
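
For transparency, here is a minimal sketch of how I compute elasticities of that kind, with made-up increments standing in for the actual UN and World Bank series:

```python
def elasticity(delta_numerator, delta_denominator):
    # Coefficient of elasticity as a ratio of increments: delta X / delta Y.
    return delta_numerator / delta_denominator

# Hypothetical year-on-year increments (NOT the actual UN / World Bank data):
delta_capacity   = 180.0e9   # added electrical generation capacity, watts
delta_population = 80.0e6    # added humans
delta_cars_pc    = 0.012     # change in new cars per new human born
delta_energy_pc  = 25.0      # change in energy use per capita, kg of oil equivalent

print(elasticity(delta_capacity, delta_population))  # capacity added per new human
print(elasticity(delta_cars_pc, delta_energy_pc))    # cars per capita to energy per capita
```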

At the planetary scale, we keep beefing up our collective reserves of energy, and we seriously mean business about dispersing those reserves into networks of small reservoirs, possibly on wheels.

Increased propensity to store is a historically known collective response to anticipated shortage. Do we, the human race, collectively and not quite consciously anticipate a shortage of energy? How could that happen? Our biology should suggest just the opposite. With climate change around, we technically have more energy in the ambient environment, not less. What exact kind of shortage in energy are we collectively anticipating? This is the type of riddle I like.


[1] Bruzelius, C. (1987). The Construction of Notre-Dame in Paris. The Art Bulletin, 69(4), 540-569. https://doi.org/10.1080/00043079.1987.10788458

[2] Braudel, F. (1992). Civilization and capitalism, 15th-18th century, vol. II: The wheels of commerce (Vol. 2). University of California Press.

Unintentional, and yet powerful a reductor

As usual, I work on many things at the same time. I mean, not exactly at the same time, just in a tight alternating sequence. I am doing my own science, and I am doing collective science with other people. Right now, I feel like restating and reframing the main lines of my own science, with the intention to both reframe my own research, and be a better scientific partner to other researchers.

Such as I see it now, my own science is mostly methodological, and consists in studying human social structures as collectively intelligent ones. I assume that collectively we have a different type of intelligence from the individual one, and most of what we experience as social life is constant learning through experimentation with alternative versions of our collective way of being together. I use artificial neural networks as simulators of collective intelligence, and my essential process of simulation consists in creating multiple artificial realities and comparing them.

I deliberately use very simple, if not simplistic neural networks, namely those oriented on optimizing just one attribute of theirs, among the many available. I take a dataset, representative for the social structure I study, I take just one variable in the dataset as the optimized output, and I consider the remaining variables as instrumental input. Such a neural network simulates an artificial reality where the social structure studied pursues just one, narrow orientation. I create as many such narrow-minded, artificial societies as I have variables in my dataset. I assess the Euclidean distance between the original empirical dataset, and each of those artificial societies. 
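
For illustration, here is a bare-bones sketch of that procedure, run on a toy random dataset. This is a simplified skeleton under my own assumptions (one tanh neuron, plain gradient learning), not a faithful reproduction of the full method:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 5))  # toy stand-in for an empirical socio-economic dataset

def run_artificial_society(data, out_col, epochs=100, lr=0.05):
    """Train a one-neuron network to optimize one variable; return the artificial dataset."""
    X = np.delete(data, out_col, axis=1)   # instrumental input
    y = data[:, out_col]                   # the single optimized output
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        pred = np.tanh(X @ w)              # hyperbolic tangent activation
        error = y - pred
        # Gradient step through tanh: d(tanh(h))/dh = 1 - tanh(h)^2
        w += lr * X.T @ (error * (1 - pred**2)) / len(y)
    artificial = data.copy()
    artificial[:, out_col] = np.tanh(X @ w)  # the network's version of that variable
    return artificial

# One artificial, single-minded society per variable, and its Euclidean
# distance from the original empirical dataset:
for col in range(data.shape[1]):
    artificial = run_artificial_society(data, col)
    print(f"society oriented on variable {col}: "
          f"distance = {np.linalg.norm(data - artificial):.3f}")
```

The society whose artificial version sits closest to the empirical data is, in this logic, the best candidate for the orientation which the real social structure actually pursues.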

It is just now that I realize what kind of implicit assumptions I make when doing so. I assume the actual social reality, manifested in the empirical dataset I study, is a concurrence of different, single-variable-oriented collective pursuits, which remain in some sort of dynamic interaction with each other. The path of social change we take, at the end of the day, manifests the relative prevalence of some among those narrow-minded pursuits, with others being pushed to the second rank of importance.

As I am pondering those generalities, I reconsider the actual scientific writings that I should hatch. Publish or perish, as they say in my profession. With that general method of collective intelligence being assumed in human societies, I focus more specifically on two empirical topics: the market of energy and the transition away from fossil fuels make one stream of my research, whilst the civilisational role of cities, especially in the context of the COVID-19 pandemic, is another stream of me trying to sound smart in my writing.

For now, I focus on issues connected to energy, and I return to revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, as a resubmission to Applied Energy. According to the guidelines of Applied Energy, I am supposed to structure my paper into the following parts: Introduction, Material and Methods, Theory, Calculations, Results, Discussion, and, as sort of a summary pitch, I need to prepare a cover letter where I shortly introduce the reasons why the editor of Applied Energy should bother about my paper at all. On top of all these formally expressed requirements, there is something I noticed about the general style of articles published in Applied Energy: they all demonstrate and discuss strong, sharp-cutting hypotheses, with a pronounced theoretical edge in them. If I want my paper to be accepted by that journal, I need to give it that special style.

That special style requires two things which, honestly, I am not really accustomed to doing. First of all, it requires, precisely, phrasing out very sharp claims. What I like the most is to show people the material and methods which I work with and sort of provoke a discussion around them. When I have to formulate very sharp claims around that basic empirical stuff, I feel a bit awkward. Still, I understand that many people are willing to discuss only when they are truly pissed off by the topic at hand, and sharply cut hypotheses serve to fuel that flame.

Second of all, making sharp claims of my own requires passing in thorough review the claims which other researchers phrase out. It requires doing my homework thoroughly in the review of literature. Once again, not really a fan of it, on my part, but well, life is brutal, as my parents used to teach me and as I have learnt in my own life. In other words, real life starts when I get out of my comfort zone.

The first body of literature I want to refer to in my revised article is the so-called MuSIASEM framework, AKA ‘Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism’. Human societies are assumed to be giant organisms, and transformation of energy is a metabolic function of theirs (e.g. Andreoni 2020[1], Al-Tamimi & Al-Ghamdi 2020[2] or Velasco-Fernández et al. 2020[3]). The MuSIASEM framework is centred around an evolutionary assumption, which I used to find perfectly sound, and which I have come to consider as highly arguable, namely that the best possible state for both a living organism and a human society is that of the highest possible energy efficiency. As regards social structures, energy efficiency is the coefficient of real output per unit of energy consumption, or, in other words, the amount of real output we can produce with 1 kilogram of oil equivalent in energy. My theoretical departure from that assumption started with my own empirical research, published in my article ‘Energy efficiency as manifestation of collective intelligence in human societies’ (Energy, Volume 191, 15 January 2020, 116500, https://doi.org/10.1016/j.energy.2019.116500 ). As I applied my method of computation, with a neural network as simulator of social change, I found out that human societies do not really seem to max out on energy efficiency. Maybe they should, but they don’t. It was the first realization, on my part, that we, humans, orient our collective intelligence on optimizing the social structure as such, and whatever comes out of that in terms of energy efficiency is an unintended by-product rather than a purpose. That general impression has been subsequently reinforced by other empirical findings of mine, precisely those which I introduce in the manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, which I am currently revising for resubmission to Applied Energy.

In practical terms, it means that when a public policy states that ‘we should maximize our energy efficiency’, it is a declarative goal which human societies do not actually strive for. It is a little as if a public policy imposed the absolute necessity of being nice to each other and punished any deviation from that imperative. People are nice to each other to the extent of current needs in social coordination, period. The absolute imperative of being nice is frequently the correlate of intense rivalry, e.g. as it was the case with traditional aristocracy. The French even have an expression, which I find profoundly true, namely ‘trop gentil pour être honnête’, which means ‘too nice to be honest’. My personal experience makes me switch into a state of alert when somebody is that sort of intensely nice to me.

Passing from metaphors to the actual subject matter of energy management, it is a known fact that highly innovative technologies are usually truly inefficient. Optimization of efficiency, be it energy efficiency or any other aspect thereof, is actually a late stage in the lifecycle of a technology. Deep technological change is usually marked by a temporary slump in efficiency. Imposing energy efficiency as the chief goal of technology-related policies means systematically privileging and promoting technologies with the highest energy efficiency, thus, by metaphorical comparison to humans, technologies in their forties, past and over the excesses of youth.

The MuSIASEM framework has two other traits which I find arguable, namely the concept of evolutionary purpose, and the imperative of equality between countries in terms of energy efficiency. Researchers who lean towards and into the MuSIASEM methodology claim that it is an evolutionary purpose of every living organism to maximize energy efficiency, and therefore human societies have the same evolutionary purpose. It further implies that species displaying marked evolutionary success, i.e. significant growth in headcount (sometimes in mandibulae-count, should the head be not really what we mean it to be), achieve that success by being particularly energy efficient. I even went into some reading in life sciences and that claim is not grounded in any science. It seems that energy efficiency, and any denomination of efficiency, as a matter of fact, are very crude proportions we apply to complex a balance of flows which we have to learn a lot about. Niebel et al. (2019[4]) phrase it out as follows: ‘The principles governing cellular metabolic operation are poorly understood. Because diverse organisms show similar metabolic flux patterns, we hypothesized that a fundamental thermodynamic constraint might shape cellular metabolism. Here, we develop a constraint-based model for Saccharomyces cerevisiae with a comprehensive description of biochemical thermodynamics including a Gibbs energy balance. Non-linear regression analyses of quantitative metabolome and physiology data reveal the existence of an upper rate limit for cellular Gibbs energy dissipation. By applying this limit in flux balance analyses with growth maximization as the objective function, our model correctly predicts the physiology and intracellular metabolic fluxes for different glucose uptake rates as well as the maximal growth rate. We find that cells arrange their intracellular metabolic fluxes in such a way that, with increasing glucose uptake rates, they can accomplish optimal growth rates but stay below the critical rate limit on Gibbs energy dissipation. Once all possibilities for intracellular flux redistribution are exhausted, cells reach their maximal growth rate. This principle also holds for Escherichia coli and different carbon sources. Our work proposes that metabolic reaction stoichiometry, a limit on the cellular Gibbs energy dissipation rate, and the objective of growth maximization shape metabolism across organisms and conditions’. 

I feel like restating the very concept of evolutionary purpose as such. Evolution is a mechanism of change through selection. Selection in itself is largely a random process, based on the principle that whatever works for now can keep working until something else works even better. There is hardly any purpose in that. My take on the thing is that living species strive to maximize their intake of energy from environment rather than their energy efficiency. I even hatched an article about it (Wasniewski 2017[5]).

Now, I pass to the second postulate of the MuSIASEM methodology, namely to the alleged necessity of closing gaps between countries as for their energy efficiency. Professor Andreoni expresses this view quite vigorously in a recent article (Andreoni 2020[6]). I think this postulate holds neither inside the MuSIASEM framework, nor outside of it. As for the purely external perspective, I think I have just laid out the main reasons for discarding the assumption that our civilisation should prioritize energy efficiency above other orientations and values. From the internal perspective of MuSIASEM, i.e. if we assume that energy efficiency is a true priority, we need to give that energy efficiency a boost, right? Now, the last time I checked, the only way we, humans, can get better at whatever we want to get better at is to create positive outliers, i.e. situations when we like really nail it better than in other situations. With a bit of luck, those positive outliers become a workable pattern of doing things. In management science, it is known as the principle of best practices. The only way of having positive outliers is to have a hierarchy of outcomes according to the given criterion. When everybody is at the same level, nobody is an outlier, and there is no way we can give ourselves a boost forward.

Good. Those six paragraphs above, they pretty much summarize my theoretical stance as regards the MuSIASEM framework in research about energy economics. Please, note that I respect that stream of research and the scientists involved in it. I think that representing energy management in human social structures as a metabolism is a great idea: it is one of those metaphors which can be fruitfully turned into a quantitative model. Still, I have my reserves.

I go further. A little more review of literature. Here comes a paper by Halbrügge et al. (2021[7]), titled ‘How did the German and other European electricity systems react to the COVID-19 pandemic?’. It makes an interesting point as regards energy economics: the pandemic has induced a new type of risk, namely short-term fluctuations in local demand for electricity. That, in turn, leads to deeper troughs and higher peaks in both the quantity and the price of energy in the market. More risk requires more liquidity: this is a known principle in business. As regards energy, liquidity can be achieved both through inventories, i.e. by developing storage capacity for energy, and through financial instruments. Halbrügge et al. come to the conclusion that such circumstances in the German market have led to the reinforcement of RES (Renewable Energy Sources). RES installations are typically more dispersed, more local in their reach, and more flexible than large power plants. It is much easier to modulate the output of a wind farm or a solar farm, as compared to a large fossil-fuel-based installation.

Keeping an eye on the impact of the pandemic upon the market of energy, I pass to the article titled ‘Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results’, by Salisu, Ebuh & Usman (2020[8]). First of all, a few words of general explanation as for what the hell the oil-stock nexus is. This is a phenomenon, which I first saw researched around 2017, which consists in a diversification of financial investment portfolios from pure financial stock into various mixes of stock and oil. Somewhere around 2015, people who used to hold their liquid investments just in financial stock (e.g. as I do currently) started to build investment positions in various types of contracts based on the floating inventory of oil: futures, options and whatnot. When I say ‘floating’, it is quite literal: that inventory of oil really, actually floats, stored on board of super-tanker ships, sailing gently through international waters, with proper gravitas (i.e. not too fast).

Long story short, crude oil has increasingly been becoming a financial asset, something like a buffer to hedge against risks encountered in other assets. Whilst the paper by Salisu, Ebuh & Usman is quite technical, without much theoretical generalisation, an interesting observation comes out of it, namely that short-term shocks in financial markets during the pandemic adversely impacted the price of oil more than the prices of stock. That, in turn, could indicate that crude oil was good as a hedging asset just for a certain range of risks, and in the presence of price shocks induced by the pandemic, the role of oil could diminish.

Those two papers point at a factor which we had almost forgotten as regards the market of energy, namely the role of short-term shocks. Until recently, i.e. until COVID-19 hit us hard, the textbook business model in the sector of energy had been that of very predictable demand, nearly constant in the long run and varying in a sinusoidal manner in the short term. The very disputable concept of LCOE, AKA Levelized Cost of Energy, where investment outlays are treated as if they were a current cost, is based on those assumptions. The pandemic has shown a different aspect of energy systems, namely the need for buffering capacity. That, in turn, leads to the issue of adaptability, which, gently but surely, leads further into the realm of adaptive changes, and that, ladies and gentlemen, is my beloved landscape of evolutionary, collectively intelligent change.
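
For reference, the textbook LCOE formula spreads investment outlays over discounted lifetime output, which is exactly the assumption of stable, predictable demand that the pandemic has undermined. A minimal sketch with made-up cash flows:

```python
def lcoe(investment, annual_costs, annual_output, rate):
    """Levelized Cost of Energy: discounted lifetime costs / discounted lifetime output."""
    costs = investment + sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs, 1))
    output = sum(e / (1 + rate) ** t for t, e in enumerate(annual_output, 1))
    return costs / output

# Hypothetical 20-year installation: EUR 1m upfront, EUR 20k/year to run,
# 2 GWh (2,000,000 kWh) produced per year, discounted at 5%.
print(lcoe(1_000_000, [20_000] * 20, [2_000_000] * 20, 0.05))  # roughly 0.05 EUR per kWh
```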

Cool. I move forward, and, by the same occasion, I move back. Back to the concept of energy efficiency. Halvorsen & Larsen study the so-called rebound effect as regards energy efficiency (Halvorsen & Larsen 2021[9]). Their paper is interesting for three reasons, the general topic of energy efficiency being the first one. The second one is the methodological focus on phenomena which we cannot observe directly, and therefore observe through mediating variables, which is theoretically close to my own method of research. Finally, there is the phenomenon of the rebound effect itself, namely the fact that, in the presence of temporarily increased energy efficiency, the consumers of energy tend to use more of those locally more energy-efficient goods; a short-term disturbance is thereby transformed into long-term habits. This is adaptive change.
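
A quick arithmetic sketch of the direct rebound effect, with made-up numbers, shows why efficiency gains do not translate one-to-one into energy savings:

```python
# Direct rebound: an efficiency gain lowers the implicit price of an energy service,
# so consumption of that service rises and eats part of the expected savings.
efficiency_gain = 0.20   # energy per unit of service drops by 20%
service_growth  = 0.10   # hypothetical behavioural response: 10% more service consumed

energy_before = 100.0
energy_after = energy_before * (1 - efficiency_gain) * (1 + service_growth)  # 88.0

expected_savings = energy_before * efficiency_gain   # 20.0
actual_savings = energy_before - energy_after        # 12.0
print(actual_savings / expected_savings)             # 0.6 -> 40% of the savings rebound away
```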

The model construed by Halvorsen & Larsen is a theoretical delight, just something my internal happy bulldog can bite into. They introduce the general assumption that consumption of energy in households is a build-up of different technologies, which can substitute each other under some conditions, and complement each other under different conditions. Households maximize something called ‘energy services’, i.e. everything they can purposefully derive from energy carriers. Halvorsen & Larsen build and test a model where they derive demand for energy services from a whole range of quite practical variables, which all sums up to the following: energy efficiency is indirectly derived from the way that social structures work, and it is highly doubtful whether we can purposefully optimize energy efficiency as such.

Now, here comes the question: what are the practical implications of all those different theoretical stances, I mean mine and those of other scientists? What does it change, and does it change anything at all, if policy makers follow the theoretical line of the MuSIASEM framework, or, alternatively, my approach? I am guessing there are differences at the level of both the goals, and the real outcomes of energy-oriented policies, and I am trying to wrap my mind around that guessing. Such as I see it, the MuSIASEM approach advocates for putting energy efficiency of the whole global economy at the top of any political agenda, as a strategic goal. On the path towards achieving that strategic goal, there seems to be an intermediate one, namely to narrow down significantly two types of discrepancies:

>> firstly, it is about discrepancies between countries in terms of energy efficiency, with a special focus on helping the poorest developing countries in ramping up their efficiency in using energy;

>> secondly, there should be a priority to privilege technologies with the highest possible energy efficiency, whilst kicking out those which perform the least efficiently in that respect.    

If I saw a real policy based on those assumptions, I would have a few critical points to make. Firstly, I firmly believe that large human societies just don’t have the institutions to enforce energy efficiency as the chief collective purpose. On the other hand, we have institutions oriented on other goals, which are able to ramp up energy efficiency as instrumental change. One institution, highly informal and yet highly efficient, is there, right in front of our eyes: markets and value chains. Each product and each service contains an input of energy, which manifests as a cost. In the presence of reasonably competitive markets, that cost is under pressure from market prices. Yes, we, humans, are greedy, and we like accumulating profits, and therefore we squeeze our costs. Whenever energy comes into play as significant a cost, we figure out ways of diminishing its consumption per unit of real output. Competitive markets, both domestic and international, thus including free trade, act as an unintentional, and yet powerful a reductor of energy consumption, and, under a different angle, they push us to find cheap sources of energy.


[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[2] Al-Tamimi & Al-Ghamdi (2020). Multiscale integrated analysis of societal and ecosystem metabolism of Qatar. Energy Reports, 6, 521-527. https://doi.org/10.1016/j.egyr.2019.09.019

[3] Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020), A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562.  https://doi.org/10.1016/j.esr.2020.100562

[4] Niebel, B., Leupold, S. & Heinemann, M. An upper limit on Gibbs energy dissipation governs cellular metabolism. Nat Metab 1, 125–132 (2019). https://doi.org/10.1038/s42255-018-0006-7

[5] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation (August 30, 2017). http://dx.doi.org/10.1453/jest.v4i3.1410

[6] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[7] Halbrügge, S., Schott, P., Weibelzahl, M., Buhl, H. U., Fridgen, G., & Schöpf, M. (2021). How did the German and other European electricity systems react to the COVID-19 pandemic?. Applied Energy, 285, 116370. https://doi.org/10.1016/j.apenergy.2020.116370

[8] Salisu, A. A., Ebuh, G. U., & Usman, N. (2020). Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results. International Review of Economics & Finance, 69, 280-294. https://doi.org/10.1016/j.iref.2020.06.023

[9] Halvorsen, B., & Larsen, B. M. (2021). Identifying drivers for the direct rebound when energy efficiency is unknown. The importance of substitution and scale effects. Energy, 222, 119879. https://doi.org/10.1016/j.energy.2021.119879

DIY algorithms of our own

I return to that interesting interface of science and business, which I touched upon in my second-to-last update, titled ‘Investment, national security, and psychiatry’, and which means that I return to discussing two research projects I am starting to be involved in: one in the domain of national security, another one in psychiatry, both connected by the idea of using artificial neural networks as analytical tools. What I intend to do now is to pass in review some literature, just to get the hang of the state of science these days.

On top of that, I have been asked by my colleagues to take over the leadership of a big, multi-thread research project in management science. The multitude of threads has emerged as a circumstantial by-product, partly of the disruption caused by the pandemic, and partly of excessive partitioning in the funding of research. As regards the funding of research, Polish universities have, sort of, two financial streams. One consists of big projects, usually team-based, financed by specialized agencies, such as the National Science Centre (https://www.ncn.gov.pl/?language=en ) or the National Centre for Research and Development (https://www.gov.pl/web/ncbr-en ). Another one is based on relatively small grants, applied for by and granted to individual scientists by their respective universities, which, in turn, receive bulk subventions from the Ministry of Education and Science. Personally, I think that last category, such as it is being allocated and used now, is a bit of a relic. It is some sort of pocket money for the most urgent and current expenses, relatively small in scale and importance, such as the costs of publishing books and articles, the costs of attending conferences etc. This is a financial paradox: we save and allocate money long in advance, in order to have money for essentially incidental expenses – which come at the very end of the scientific pipeline – and we have to make long-term plans for it. It is a case of fundamental mismatch between the intrinsic properties of a cash flow, on the one hand, and the instruments used for managing that cash flow, on the other hand.

Good. This was the introduction to detailed thinking. Once I have those semantic niceties checked out, I cut into the flesh of thinking, and the first piece I intend to cut out is the state of science as regards Territorial Defence Forces and their role amidst the COVID-19 pandemic. I found an interesting article by Tiutiunyk et al. (2018[1]). It is interesting because it gives a detailed methodology for assessing operational readiness in any military unit, territorial defence or other. That corresponds nicely to Hypothesis #2, which I outlined for that project in national security, namely: ‘the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources’. That article by Tiutiunyk et al. (2018) allows entering into details as regards that claim.

Those details start unfolding from the assumption that operational readiness is there when the entity studied possesses the required quantity of efficient technical and human resources. The underlying mathematical concept is quite simple. In the given situation, adequate response requires using m units of resources at k% of capacity during time te. The social entity studied can muster n units of the same resources at l% of capacity during the same time te. The most basic expression of operational readiness is, therefore, the coefficient OR = (n*l)/(m*k). I am trying to find out what specific resources are the key to that readiness. Tiutiunyk et al. (2018) offer a few interesting insights in that respect. They start by noticing the otherwise known fact that resources used in crisis situations are not exactly the same ones we use in the everyday course of life and business, and therefore we tend to hold them for a time longer than their effective lifecycle. We don’t amortize them properly, because we don’t really control for their physical and moral depreciation. One of the core concepts in territorial defence is to counter that negative phenomenon, and to maintain, through comprehensive training and internal control, a required level of capacity.
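
That OR coefficient is straightforward to compute. A minimal sketch, with made-up numbers for a hypothetical crisis scenario:

```python
def operational_readiness(n, l, m, k):
    """OR = (n * l) / (m * k): resources the unit can muster vs. resources required.
    n, m: units of resources available / required; l, k: capacity factors in (0, 1]."""
    return (n * l) / (m * k)

# Hypothetical crisis: 100 units of a resource needed at 90% capacity;
# the unit studied can muster 80 units at 95% capacity.
print(operational_readiness(n=80, l=0.95, m=100, k=0.90))  # ~0.84 -> below full readiness
```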

As I continue going through the literature, I come across an interesting study by I. Bet-El (2020), titled ‘COVID-19 and the future of security and defence’, published by the European Leadership Network (https://www.europeanleadershipnetwork.org/wp-content/uploads/2020/05/Covid-security-defence-1.pdf ). Bet-El introduces an important distinction between threats and risks, and, contiguously, the distinction between security and defence: ‘A threat is a patent, clear danger, while risk is the probability of a latent danger becoming patent; evaluating that probability requires judgement. Within this framework, defence is to be seen as the defeat or deterrence of a patent threat, primarily by military, while security involves taking measures to prevent latent threats from becoming patent and if the measures fail, to do so in such a way that there is time and space to mount an effective defence’. This is deep. I do a lot of research in risk management, especially as I invest in the stock market. When we face a risk factor, our basic behavioural response is hedging or insurance. We hedge by diversifying our exposures to risk, and we insure by sharing the risk with other people. Healthcare systems are a good example of insurance. We have a flow of capital that fuels a manned infrastructure (hospitals, ambulances etc.), and that infrastructure allows each single sick human to share his or her risks with other people. Social distancing is the epidemic equivalent of hedging. When we completely cut, or significantly throttle, social interactions between households, each household becomes sort of separated from the epidemic risk in other households. When one node in a network is shielded from some of the risk occurring in other nodes, this is hedging.

The military is made for responding to threats rather than risks. Military action is a contingency plan, implemented when insurance and hedging have gone to hell. The pandemic has shown that we need more of such buffers, i.e. more social entities able to mobilise quickly into deterring directly an actual threat. Territorial Defence Forces seem to fit the bill. Another piece of literature, from my own Polish turf, by Gąsiorek & Marek (2020[2]), states straightforwardly that Territorial Defence Forces have proven to be a key actor during the COVID-19 pandemic precisely because they maintain a high degree of actual readiness in their crisis-oriented resources, as compared to other entities in the Polish public sector.

Good. I have a thread, from literature, for the project devoted to national security. The issue of operational readiness seems to be somehow in the centre, and it translates into the apparently fluid frontier between security and national defence. Speed of mobilisation of the available resources, as well as the actual reliability of those resources once mobilized, look like the key to understanding the surprisingly significant role of Territorial Defence Forces during the COVID-19 pandemic. Looks like my initial Hypothesis #2, claiming that the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources, is some sort of theoretical core to that whole body of research.

In our team, we plan and have a provisional green light to run interviews with the soldiers of Territorial Defence Forces. That basic notion of actually mobilizable resources can help narrow down the methodology to apply in those interviews, by asking specific questions pertinent to that issue. Which specific resources proved to be the most valuable in the actual intervention of TDF during the pandemic? Which resources – if any – proved to be 100% mobilizable on the spot? Which of those resources proved to be much harder to mobilise than it had been initially assumed? Can we rate and rank all the human and technical resources of TDF for their capacity to be mobilised?

Good. I gently close the door of that room in my head, filled with Territorial Defence Forces and the pandemic. I make sure I can open it whenever I want, and I open the door to that other room, where psychiatry dwells. Together with the psychiatrists I am working with, I can study a sample of medical records as regards patients with psychosis. Verbal elocutions of those patients are an important part of that material, and I make two hypotheses along that tangent:

>> Hypothesis #1: the probability of occurrence in specific grammatical structures A, B, C, in the general grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form.

>> Hypothesis #2: the action of written self-reporting, e.g. via email, from the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as transition from mental state A to mental state B.

I start listening to what smarter people than me have to say on the matter. I start with Worthington et al. (2019[3]), and I learn there is a clinical category: clinical high risk for psychosis (CHR-P), thus a set of subtler (than psychotic) ‘changes in belief, perception, and thought that appear to represent attenuated forms of delusions, hallucinations, and formal thought disorder’. I like going backwards upstream, and I immediately ask myself whether that line of logic can be reversed. If there is clinical high risk for psychosis, the occurrence of those same symptoms in reverse order, from severe to light, could be a path of healing, couldn’t it?

Anyway, according to Worthington et al. (2019), some 25% of people with diagnosed CHR-P transition into fully scaled psychosis. Once again, from the perspective of risk management, 25% of actual occurrence in a risk category is a lot. It means that CHR-P is pretty solid, as risk assessments come. I further learn that CHR-P, when represented as a collection of variables (a vector for friends with a mathematical edge), entails an internal distinction into predictors and converters. Predictors are the earliest possible observables, something like a subtle smell of possible s**t, swirling here and there in the ambient air. Converters are pieces of information which bring progressive confirmation to the predictors.

That paper by Worthington et al. (2019) is a review of literature in itself, and allows me to compare different approaches to CHR-P. The most solid ones, in terms of accurately predicting the onset of full-clip psychosis, always incorporate two components: assessment of the patient’s social role, and analysis of verbalized thought. Good. Looks promising. I think the initial hypotheses should be expanded into claims about socialization.

I continue with another paper, by Corcoran and Cecchi (2020[4]). Generally, patients with psychotic disorders display lower a semantic coherence than ordinary speakers. The flow of meaning in their speech is impeded: they can express less meaning in the same volume of words, as compared to a mentally healthy person. Reduced capacity to deliver meaning manifests as apparent tangentiality in verbal expression: psychotic patients seem to wander in their elocutions. Reduced complexity of speech, i.e. relatively low a capacity to swing between different levels of abstraction, with a tendency to exaggerate concreteness, is another observable which informs about psychosis. Two big families of diagnostic methods follow that twofold path. Latent Semantic Analysis (LSA) seems to be the name of the game as regards the study of semantic coherence. Its fundamental assumption is that words convey meaning by connecting to other words, which further unfolds into assuming that semantic similarity, or dissimilarity, can be measured with a more or less complex coefficient of joint occurrence, as opposed to disjoint occurrence, inside big corpuses of language.

Corcoran and Cecchi (2020) name two main types of digital tools for Latent Semantic Analysis. One is Word2Vec (https://en.wikipedia.org/wiki/Word2vec), and I found a more technical, programmatic approach to it at: https://towardsdatascience.com/a-word2vec-implementation-using-numpy-and-python-d256cf0e5f28 . Another one is GloVe, to which I found three interesting references, at https://nlp.stanford.edu/projects/glove/ , https://github.com/maciejkula/glove-python , and at https://pypi.org/project/glove-py/ .
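For my own notes, here is a minimal sketch of that logic in Python, assuming the gensim library: I train a tiny Word2Vec model and score an elocution’s semantic coherence as the average similarity between consecutive words. The micro-corpus and the coherence measure are my illustrative assumptions, not a validated clinical instrument.

```python
# A minimal sketch, assuming the gensim library (pip install gensim).
# The corpus and the coherence score are illustrative assumptions of mine,
# not a clinical method.
from gensim.models import Word2Vec

corpus = [
    ['the', 'patient', 'described', 'his', 'day', 'at', 'work'],
    ['work', 'was', 'calm', 'and', 'the', 'day', 'went', 'well'],
    ['the', 'patient', 'slept', 'well', 'and', 'felt', 'calm'],
]
model = Word2Vec(sentences=corpus, vector_size=20, window=3, min_count=1, seed=1)

def coherence(tokens, wv):
    """Mean cosine similarity between consecutive in-vocabulary words."""
    pairs = [(a, b) for a, b in zip(tokens, tokens[1:]) if a in wv and b in wv]
    return sum(wv.similarity(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

print(coherence(['the', 'patient', 'slept', 'at', 'work'], model.wv))
```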

As regards semantic complexity, two types of analytical tools seem to run the show. One is the part-of-speech (POS) algorithm, where we tag words according to their grammatical function in the sentence: noun, verb, determiner etc. There are already existing digital platforms for implementing that approach, such as the Natural Language Toolkit (http://www.nltk.org/). Another angle is that of speech graphs, where words are nodes in the network of discourse, and their connections (e.g. joint occurrence) to other words are edges in that network. Now, the intriguing thing about that last thread is that it seems to have been burgeoning in the late 1990s, and then it sort of faded away. Anyway, I found two references for an algorithmic approach to speech graphs, at https://github.com/guillermodoghel/speechgraph , and at https://www.researchgate.net/publication/224741196_A_general_algorithm_for_word_graph_matrix_decomposition .
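Both angles are easy to prototype. Below is a minimal sketch assuming the nltk and networkx packages, with a made-up sentence: it tags parts of speech, then builds a speech graph out of word adjacency and prints a few basic network metrics.

```python
# A minimal sketch of both approaches, assuming the nltk and networkx
# libraries; the sample sentence is made up.
import nltk
import networkx as nx

# one-time downloads: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
text = 'I saw the dog and the dog saw the street'
tokens = nltk.word_tokenize(text)

# part-of-speech tagging: each word gets its grammatical function
print(nltk.pos_tag(tokens))  # e.g. [('I', 'PRP'), ('saw', 'VBD'), ...]

# speech graph: words as nodes, adjacency in the utterance as edges
G = nx.Graph()
G.add_edges_from(zip(tokens, tokens[1:]))
print(G.number_of_nodes(), G.number_of_edges(), nx.density(G))
```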

That quick review of literature, as regards natural language as a predictor of psychosis, leads me to an interesting sidestep. Language is culture, right? Low coherence and low complexity in natural language are informative about psychosis, right? Now, I put that argument upside down. What if we, homo (mostly) sapiens, have a natural proclivity to psychosis, with that overblown cortex of ours? What if we had figured out, at some point of our evolutionary path, that language is a collectively intelligent tool which, with its unique coherence and complexity required for efficient communication, keeps us in a state of acceptable sanity, until we go on Twitter, of course.

Returning to the intellectual discipline which I should demonstrate, as a respectable researcher, the above review of literature brings one piece of good news, as regards the project in psychiatry. Initially, in this specific team, we assumed that we would necessarily need an external partner, most likely a digital business with important digital resources in AI, in order to run research on natural language. Now, I realize that we can assume two scenarios: one with big, fat AI from that external partner, and another one, with DIY algorithms of our own. Gives some freedom of movement. Cool.


[1] Tiutiunyk, V. V., Ivanets, H. V., Tolkunov, І. A., & Stetsyuk, E. I. (2018). System approach for readiness assessment units of civil defense to actions at emergency situations. Науковий вісник Національного гірничого університету (Scientific Bulletin of the National Mining University), (1), 99-105. https://doi.org/10.29202/nvngu/2018-1/7

[2] Gąsiorek, K., & Marek, A. (2020). Działania wojsk obrony terytorialnej podczas pandemii COVID-19 jako przykład wojskowego wsparcia władz cywilnych i społeczeństwa [Activities of the territorial defence forces during the COVID-19 pandemic as an example of military support to civil authorities and society]. Wiedza Obronna. https://doi.org/10.34752/vs7h-g945

[3] Worthington, M. A., Cao, H., & Cannon, T. D. (2019). Discovery and validation of prediction algorithms for psychosis in youths at clinical high risk. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2019.10.006

[4] Corcoran, C. M., & Cecchi, G. (2020). Using language processing and speech analysis for the identification of psychosis and other disorders. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2020.06.004

Investment, national security, and psychiatry

I need to clear my mind a bit. For the last few weeks, I have been working a lot on revising an article of mine, and I feel I need a little bit of a shake-off. I know from experience that I need a structure to break free from another structure. Yes, I am one of those guys. I like structures. When I feel I lack one, I make one.

The structure which I want to dive into, in order to shake off the thinking about my article, is the thinking about my investment in the stock market. My general strategy in that department is to take the rent, which I collect from an apartment in town, every month, and to invest it in the stock market. Economically, it is a complex process of converting the residential utility of a real asset (apartment) into a flow of cash, thus into a financial asset with quite steady a market value (inflation is still quite low), and then I convert that low-risk financial asset into a differentiated portfolio of other financial assets endowed with higher a risk (stock). I progressively move capital from markets with low risk (residential real estate, money) into a high-risk-high-reward market.

I am playing a game. I make a move (monthly cash investment), and I wait for a change in the stock market. I am wrapping my mind around the observable change, and I make my next move the next month. With each move I make, I gather information. What is that information? Let’s have a look at my portfolio such as it is now. You can see it in the table below:

| Stock | Value in EUR | Real return in € | Rate of return (as of April 6th, 2021, morning) |
|---|---|---|---|
| CASH & CASH FUND & FTX CASH (EUR) | € 25,82 | € – | – |
| ALLEGRO.EU SA | € 48,86 | € (2,82) | -5,78% |
| ALTIMMUNE INC. – COMM | € 1 147,22 | € 179,65 | 15,66% |
| APPLE INC. – COMMON ST | € 1 065,87 | € 8,21 | 0,77% |
| BIONTECH SE | € 1 712,88 | € (149,36) | -8,72% |
| CUREVAC N.V. | € 711,00 | € (98,05) | -13,79% |
| DEEPMATTER GROUP PLC | € 8,57 | € (1,99) | -23,26% |
| FEDEX CORPORATION COMM | € 238,38 | € 33,49 | 14,05% |
| FIRST SOLAR INC. – CO | € 140,74 | € (11,41) | -8,11% |
| GRITSTONE ONCOLOGY INC | € 513,55 | € (158,43) | -30,85% |
| INPOST | € 90,74 | € (17,56) | -19,35% |
| MODERNA INC. – COMMON | € 879,85 | € (45,75) | -5,20% |
| NOVAVAX INC. – COMMON STOCK | € 1 200,75 | € 398,53 | 33,19% |
| NVIDIA CORPORATION – C | € 947,35 | € 42,25 | 4,46% |
| ONCOLYTICS BIOTCH CM | € 243,50 | € (14,63) | -6,01% |
| SOLAREDGE TECHNOLOGIES | € 683,13 | € (83,96) | -12,29% |
| SOLIGENIX INC. COMMON | € 518,37 | € (169,40) | -32,68% |
| TESLA MOTORS INC. – C | € 4 680,34 | € 902,37 | 19,28% |
| VITALHUB CORP. | € 136,80 | € (3,50) | -2,56% |
| WHIRLPOOL CORPORATION | € 197,69 | € 33,11 | 16,75% |
| TOTAL | € 15 191,41 | € 840,74 | 5,53% |

A few words of explanation are due. Whilst I have been actively investing for 13 months, I made this portfolio in November 2020, when I did some major reshuffling. My overall return on the cash invested, over the entire period of 13 months, is 30,64% as of now (April 6th, 2021), which makes 30,64% * (12/13) = 28,3% on an annual basis.

The 5,53% of return which I have on this specific portfolio makes roughly 1/6th of the total return I have on all the portfolios I have had over the past 13 months. It is the outcome of my latest experimental round, and this round is very illustrative of the mistake which I know I can make as an investor: panic.

In August and September 2020, I collected some information, I did some thinking, and I made a portfolio of biotech companies involved in the COVID-vaccine story: Pfizer, Biontech, Curevac, Moderna, Novavax, Soligenix. By mid-October 2020, I was literally swimming in ecstasy, as I had returns on these ones like +50%. Pure madness. Then, big financial sharks, commonly called ‘investment funds’, went hunting for those stocks, and they did what sharks do: they made their target bleed before eating it. They boxed and shorted those stocks in order to make their prices affordably low for long investment positions. At the time, I lost control of my emotions, and when I saw those prices plummet, I sold out everything I had. Almost as soon as I did it, I realized what an idiot I had been. Two weeks later, the same stocks started to rise again. Sharks had had their meal. In response, I did something which I still wonder whether it was wise or stupid: I bought back into those positions, only at a price higher than the one I had sold them at.

Selling out was stupid, for sure. Was buying back in a wise move? I don’t know, like really. My intuition tells me that biotech companies in general have a bright future ahead, and not only in connection with vaccines. I am deeply convinced that the pandemic has already built up, and will keep building up, an interest in biotechnology and medical technologies, especially in highly innovative forms. This is even more probable as we have realized that modern biotechnology is very largely digital technology. This is what is called ‘platforms’ in the biotech lingo. These are digital clouds which combine empirical experimental data with artificial intelligence, and the latter is supposed to experiment virtually with that data. Modern biotechnology consists in creating as many alternative combinations of molecules and lifeforms as we possibly can make and study, and then picking those which offer the best combination of biological outcomes with the probability of achieving said outcomes.

My currently achieved rates of return, in the portfolio I have now, are very illustrative of an old principle in capital investment: I will fail most of the time. Most of my investment decisions will be failures, at least in the short and medium term, because I cannot possibly outsmart the incredibly intelligent collective structure of the stock market. My overall gain, those 5,53% in the case of this specific portfolio, is the outcome of 19 experiments, where I fail in 12 of them, for now, and I am more or less successful in the remaining 7.

The very concept of ‘beating the market’, which some wannabe investment gurus present, is ridiculous. The stock market is made of dozens of thousands of human brains, operating in correlated coupling, and leveraged with increasingly powerful artificial neural networks. When I expect to beat that networked collective intelligence with that individual mind of mine, I am pumping smoke up my ass. On the other hand, what I can do is to do as many different experiments as I can possibly spread my capital between.

It is important to understand that any investment strategy, where I assume that from now on, I will not make any mistakes, is delusional. I made mistakes in the past, and I am likely to make mistakes in the future. What I can do is to make myself more predictable to myself. I can narrow down the type of mistakes I tend to make, and to create the corresponding compensatory moves in my own strategy.

Differentiation of risk is a big principle in my investment philosophy, and yet it is not the only one. Generally, with the exception of maybe 2 or 3 days in a year, I don’t really like quick, daily trade in the stock market. I am more of a financial farmer: I sow, and I wait to see plants growing out of those seeds. I invest in industries rather than individual companies. I look for some kind of strong economic undertow for my investments, and the kind of undertow I specifically look for is high potential for deep technological change. Accessorily, I look for industries which sort of logically follow human needs, e.g. the industry of express deliveries in the times of pandemic. I focus on three main fields of technology: biotech, digital, and energy.

Good. I needed to shake off, and I have. Thinking and writing about real business decisions helped me to take some perspective. Now, I am gently returning to the realm of science, without completely leaving the realm of business: I am navigating the somehow troubled and feebly charted waters of money for science. I am currently involved in launching and fundraising for two scientific projects, in two very different fields of science: national security and psychiatry. Yes, I know, they can converge in more points than we commonly think they can. Still, in canonical scientific terms, these two diverge.

How come I am involved, as researcher, in both national security and psychiatry? Here is the thing: my method of using a simple artificial neural network to simulate social interactions seems to be catching on. Honestly, I think it is catching on because other researchers, when they hear me talking about ‘you know, simulating alternative realities and assessing which one is the closest to the actual reality’ sense in me that peculiar mental state, close to the edge of insanity, but not quite over that edge, just enough to give some nerve and some fun to science.

In the field of national security, I teamed up with a scientist strongly involved in it, and we take on studying the way our Polish forces of Territorial Defence have been acting in and coping with the pandemic of COVID-19. First, the context. So far, the pandemic has worked as a magnifying glass for all the f**kery in public governance. We could all see a minister saying ‘A, B and C will happen because we said so’, and right after there was just A happening, with a lot of delay, and then a completely unexpected phenomenal D appeared, with B and C bitching and moaning that they didn’t have the right conditions for happening decently, and therefore they would not happen at all. This is the first piece of the context. The second is the official mission and the reputation of our Territorial Defence Forces AKA TDF. This is a branch of our Polish military, created in 2017 by our right-wing government. From the beginning, these guys had the reputation of being a right-wing militia dressed in uniforms and paid with taxpayers’ money. I honestly admit I used to share that view. TDF is something like the National Guard in the US. These are units made of soldiers who serve in the military and have basic military training, but who have normal civilian lives besides. They have civilian jobs, whilst training regularly and being at the ready should the nation call.

The initial idea of TDF emerged after the Russian invasion of the Crimea, when we became acutely aware that military troops in nondescript uniforms, apparently lost, and yet strangely connected to the Russian government, could massively start looking lost by our Eastern border. The initial idea behind TDF was to significantly increase the capacity of the Polish population for mobilising military resources. Switzerland and Finland largely served as models.

When the pandemic hit, our government could barely pretend they controlled the situation. Hospitals designated as COVID-specific frequently had no resources to carry out that mission. Our government had the idea of mobilising TDF to help with basic stuff: logistics, triage and support in hospitals etc. Once again, the initial reaction of the general public was to put the label of ‘militarisation’ on that decision, and, once again, I was initially thinking this way. Still, some friends of mine, strongly involved as social workers supporting healthcare professionals, started telling me that working with TDF, in local communities, was nothing short of amazing. TDF had the speed, the diligence, and the capacity to keep their s**t together which many public officials lacked. They were just doing their job and helping tremendously.

I started scratching the surface. I did some research, and I found out that TDF was of invaluable help for many local communities, especially outside of big cities. Recently, I accidentally had a conversation about it with M., the scientist whom I am working with on that project. He just confirmed my initial observations.

M. has strong connections with TDF, including their top command. Our common idea is to collect abundant, interview-based data from TDF soldiers mobilised during the pandemic, as regards the way they carried out their respective missions. The purely empirical edge we want to have here is oriented on defining successes and failures, as well as their context and contributing factors. The first layer of our study is supposed to provide the command of TDF with some sort of case-studies-based manual for future interventions. At the theoretical, more scientific level, we intend to check the following hypotheses:      

>> Hypothesis #1: during the pandemic, TDF has changed its role, under the pressure of external events, from the initially assumed, properly spoken territorial defence, to civil defence and assistance to the civilian sector.

>> Hypothesis #2: the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources.

>> Hypothesis #3: collectively intelligent human social structures form mechanisms of reaction to external stressors, and the chief orientation of those mechanisms is to assure proper behavioural coupling between the action of external stressors, and the coordinated social reaction. Note: I define behavioural coupling in terms of game theory, i.e. as the objectively existing need for proper pacing in action and reaction.

The basic method of verifying those hypotheses consists, in the first place, in translating the primary empirical material into a matrix of probabilities. There is a finite catalogue of operational procedures that TDF can perform. Some of those procedures are associated with territorial military defence as such, whilst other procedures belong to the realm of civil defence. It is supposed to go like: ‘At the moment T, in the location A, procedure of type Si had a P(T, A, Si) probability of happening’. In that general spirit, Hypothesis #1 can be translated straight into a matrix of probabilities, and phrased out as ‘during the pandemic, the probability of TDF units acting as civil defence was higher than that of seeing them operate as strict territorial defence’.
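A minimal sketch of how such a matrix of probabilities could be computed, assuming a pandas DataFrame of logged interventions; the column names and records below are made up for illustration:

```python
# A minimal sketch of the probability matrix, assuming a pandas DataFrame of
# logged interventions; the column names and records are made up.
import pandas as pd

log = pd.DataFrame({
    'month':     ['2020-03', '2020-03', '2020-04', '2020-04', '2020-04'],
    'region':    ['A', 'B', 'A', 'A', 'B'],
    'procedure': ['triage_support', 'logistics', 'logistics',
                  'border_patrol', 'triage_support'],
})

# P(T, A, Si): relative frequency of each procedure per (month, region) cell
P = pd.crosstab([log['month'], log['region']], log['procedure'], normalize='index')
print(P)
```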

That general probability can be split into local ones, e.g. region-specific. On the other hand, I intuitively associate Hypotheses #2 and #3 with the method which I call ‘study of orientation’. I take the matrix of probabilities defined for the purposes of Hypothesis #1, and I put it back to back with a matrix of quantitative data relative to the speed and diligence in action, as regards TDF on the one hand, and other public services on the other hand. It is about the availability of vehicles, the capacity of mobilisation in people etc. In general, it is about the so-called ‘operational readiness’, which you can read more about in, for example, the publications of the RAND Corporation (https://www.rand.org/topics/operational-readiness.html).

Thus, I take the matrix of variables relative to operational readiness observable in the TDF, and I use that matrix as input for a simple neural network, where the aggregate neural activation based on those metrics, e.g. through a hyperbolic tangent, is supposed to approximate a specific probability relative to TDF people endorsing, in their operational procedures, the role of civil defence, against that of military territorial defence. I hypothesise that operational readiness in TDF manifests a collective intelligence at work and doing its best to endorse specific roles and applying specific operational procedures. I make as many such neural networks as there are operational procedures observed for the purposes of Hypothesis #1. Each of these networks is supposed to represent the collective intelligence of TDF attempting to optimize, through its operational readiness, the endorsement and fulfilment of a specific role. In other words, each network represents an orientation.

Each such network transforms the input data it works with. This is what neural networks do: they experiment with many alternative versions of themselves. Each experimental round, in this case, consists in a vector of metrics informative about the operational readiness of TDF, and that vector locally tries to generate an aggregate outcome – its neural activation – as close as possible to the probability of effectively playing a specific role. This is always a failure: the neural activation of operational readiness always falls short of nailing down exactly the probability it attempts to optimize. There is always a local residual error to account for, and the way a neural network (well, my neural network) accounts for errors consists in measuring them and feeding them into the next experimental round. The point is that each such distinct neural network, oriented on optimizing the probability of Territorial Defence Forces endorsing and fulfilling a specific social role, is a transformation of the original, empirical dataset informative about the TDF’s operational readiness.

Thus, in this method, I create as many transformations (AKA alternative versions) of the actual operational readiness in TDF, as there are social roles to endorse and fulfil by TDF. In the next step, I estimate two mathematical attributes of each such transformation: its Euclidean distance from the original empirical dataset, and the distribution of its residual error. The former is informative about similarity between the actual reality of TDF’s operational readiness, on the one hand, and alternative realities, where TDF orient themselves on endorsing and fulfilling just one specific role. The latter shows the process of learning which happens in each such alternative reality.
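Since I think better when I can run things, here is a minimal numpy sketch of one such orientation, under my own working assumptions: readiness metrics scaled into [0, 1], one observation per experimental round, random starting coefficients, and the local residual error simply added to the next round’s input. It is a loose sketch of the procedure described above, not a finished implementation.

```python
# A loose numpy sketch of one 'orientation', under my working assumptions:
# metrics scaled to [0, 1], one row per experimental round, and the residual
# error fed straight into the next round's input.
import numpy as np

def orientation(X, target_p, seed=0):
    """X: (rounds, metrics) operational-readiness data; target_p: probability
    of TDF endorsing the given role. Returns the transformed dataset, the
    residual-error series, and the Euclidean distance from the original data."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(size=X.shape[1])          # random starting coefficients
    Xt, errors = X.astype(float), []
    for t in range(1, Xt.shape[0]):
        activation = np.tanh(Xt[t - 1] @ w)   # aggregate neural activation
        e = target_p - activation             # local residual error
        errors.append(e)
        Xt[t] = Xt[t] + e                     # the error feeds the next round
    distance = np.linalg.norm(Xt - X)         # similarity to the actual reality
    return Xt, np.array(errors), distance

# one network per social role, e.g. civil defence vs territorial defence:
# Xt_cd, err_cd, d_cd = orientation(X, target_p=0.8)
```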

I make a few methodological hypotheses at this point. Firstly, I expect a few, like 1 ÷ 3, transformations (alternative realities) to fall particularly close to the actual empirical reality, as compared to the others. Particularly close means that their Euclidean distances from the original dataset will be at least one order of magnitude smaller than those observable in the remaining transformations. Secondly, I expect those transformations to display a specific pattern of learning, where the residual error swings in a predictable cycle, over a relatively wide amplitude, yet inside that amplitude. This is a cycle where the collective intelligence of Territorial Defence Forces goes like: ‘We optimize, we optimize, it goes well, we narrow down the error, f**k!, we failed, our error increased, and yet we keep trying, we optimize, we optimize, we narrow down the error once again…’ etc. Thirdly, I expect the remaining transformations, namely those much less similar to the actual reality in Euclidean terms, to display different patterns of learning: either completely dishevelled, with the residual error bouncing haphazardly all over the place, or exaggeratedly tight, with the error being narrowed down very quickly and staying small ever since.

That’s the outline of research which I am engaging into in the field of national security. My role in this project is that of a methodologist. I am supposed to design the system of interviews with TDF people, the way of formalizing the resulting data, binding it with other sources of information, and finally carrying out the quantitative analysis. I think I can use the experience I already have with using artificial neural networks as simulators of social reality, mostly in defining said reality as a vector of probabilities attached to specific events and behavioural patterns.     

As regards psychiatry, I have just started to work with a group of psychiatrists who have abundant professional experience in two specific applications of natural language in diagnosing and treating psychoses. The first one consists in interpreting patients’ elocutions as informative about their likelihood of being psychotic, relapsing into psychosis after therapy, or getting durably better after such therapy. In psychiatry, the durability of therapeutic outcomes is a big thing, as I have already learnt when preparing for this project. The second application is the analysis of patients’ emails. Those psychiatrists I am starting to work with use a therapeutic method which engages the patient to maintain contact with the therapist by writing emails. Patients describe, quite freely and casually, their mental state together with their general existential context (job, family, relationships, hobbies etc.). They don’t necessarily discuss those emails in subsequent therapeutic sessions; sometimes they do, sometimes they don’t. The most important therapeutic outcome seems to be derived from the very fact of writing and emailing.

In terms of empirical research, the semantic material we are supposed to work with in that project consists of two big sets of written elocutions: patients’ emails, on the one hand, and transcripts of standardized 5-minute therapeutic interviews, on the other hand. Each elocution is a complex grammatical structure in itself. The semantic material is supposed to be cross-checked with neurological biomarkers in the same patients. The way I intend to use neural networks in this case is slightly different from that national security thing. I am thinking about defining categories, i.e. about networks which guess similarities and classification out of crude empirical data. For now, I make two working hypotheses, with a quick illustrative sketch right after them:

>> Hypothesis #1: the probability of occurrence in specific grammatical structures A, B, C, in the general grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form.

>> Hypothesis #2: the action of written self-reporting, e.g. via email, from the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as transition from mental state A to mental state B.
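To make that ‘guessing similarities out of crude data’ more tangible, below is a minimal Python sketch under heavy assumptions of mine: grammatical structures are proxied by part-of-speech frequencies, and a plain k-means clustering stands in for the self-organizing network. The tag families and the three-cluster choice are illustrative, not clinical claims.

```python
# A minimal sketch, assuming nltk and scikit-learn; POS frequencies stand in
# for 'grammatical structures A, B, C', and k-means stands in for a
# self-organizing network. Everything here is illustrative.
import nltk
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

TAG_FAMILIES = ['NN', 'VB', 'JJ', 'PR', 'DT', 'IN']  # hypothetical structures

def grammar_profile(text):
    # one-time: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
    tags = [tag[:2] for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    counts, total = Counter(tags), max(len(tags), 1)
    return [counts[t] / total for t in TAG_FAMILIES]  # probabilities of occurrence

# elocutions = [...]  # anonymised emails and interview transcripts
# X = np.array([grammar_profile(e) for e in elocutions])
# labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```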

The inflatable dartboard made of fine paper

My views on environmentally friendly production and consumption of energy, and especially on public policies in that field, differ radically from what seems to be currently the mainstream of scientific research and writing. I even got kicked out of a scientific conference because of my views. After my paper was accepted, I received a questionnaire to fill, which was supposed to feed the discussion on the plenary session of that conference. I answered those questions in good faith and sincerely, and: boom! I receive an email which says that my views ‘are not in line with the ideas we want to develop in the scientific community’. You could rightly argue that my views might be so incongruous that kicking me out of that conference was an act of mercy rather than enmity. Good. Let’s pass my views in review.

There is that thing of energy efficiency and climate neutrality. Energy efficiency, i.e. the capacity to derive a maximum of real output out of each unit of energy consumed, can be approached from two different angles: as a stationary value, on the one hand, or as an elasticity, on the other hand. We could say: let’s consume as little energy as we possibly can and be as productive as possible with that frugal base. That’s the stationary view. Yet, we can say: let’s rock it, like really. Let’s boost our energy consumption so as to get in control of our climate. Let’s pass from the roughly 30% of energy generated on the surface of the Earth which we consume now, to like 60% or 70%. Sheer laws of thermodynamics suggest that if we manage to do that, we can really run the show. This is the summary of what, in my views, is not in line with ‘the ideas we want to develop in the scientific community’.

Of course, I can put forth any kind of idiocy and claim this is a valid viewpoint. Politics are full of such episodes. I was born and raised in a communist country. I know something about stupid, suicidal ideas being used as axiology for running a nation. I also think that discarding completely other people’s ‘ideas we want to develop in the scientific community’, and considering those people as pathetically lost, would be preposterous on my part. We are all essentially wrong about that complex stuff we call ‘reality’. It is just that some ways of being wrong are more functional than others. I think the truly correct way to review the current literature on energy-related policies is to take its authors’ empirical findings and discuss them under a different interpretation, namely the one sketched in the preceding paragraph.

I like looking at things with precisely that underlying assumption that I don’t know s**t about anything, and I just make up cognitive stuff which somehow pays off. I like swinging that Ockham’s razor around and cutting out all the strong assumptions, staying just with the weak ones, which do not require much assuming and are at the limit of stylized observations and theoretical claims.

My basic academic background is in law (my Master’s degree), and in economics (my PhD). I look at social reality around me through the double lens of those two disciplines, which, when put in stereoscopic view, boil down to having an eye on patterns in human behaviour.

I think I observe that we, humans, are social and want to stay social, and being social means a baseline mutual predictability in our actions. We are very much about maintaining a certain level of coherence in culture, which means a certain level of behavioural coupling. We would rather die than accept the complete dissolution of that coherence. We, humans, we make behavioural coherence: this is our survival strategy, and it allows us to be highly social. Our cultures always develop along the path of differentiation in social roles. We like specializing inside the social group we belong to.

Our proclivity to endorse specific skillsets, which turn into social roles, has the peculiar property of creating local surpluses, and we tend to trade those surpluses. This is how markets form. In economics, there is that old distinction between production and consumption. I believe that one of the first social thinkers who really meant business about it was Jean Baptiste Say, in his “Treatise of Political Economy”. Here >> https://discoversocialsciences.com/wp-content/uploads/2020/03/Say_treatise_political-economy.pdf you have it in the English translation, whilst there >> https://discoversocialsciences.com/wp-content/uploads/2018/04/traite-deconomie-politique-jean-baptiste-say.pdf it is in its elegant French original.

In my perspective, the distinction between production and consumption is instrumental, i.e. it is useful for solving some economic problems, but just some. Saying that I am a consumer is a gross simplification. I am a consumer in some of my actions, but in others I am a producer. As I write this blog, I produce written content. I prefer assuming that production and consumption are two manifestations of the same activity, namely of markets working around tradable surpluses created by homo sapiens as individual homo sapiens endorse specific social roles.

When some scientists bring forth empirically backed claims that our patterns of consumption have the capacity to impact climate (e.g. Bjelle et al. 2021[1]), I say ‘Yes, indeed, and at the end of that specific intellectual avenue we find out that creating some specific, tradable surpluses, ergo the fact of endorsing some specific social roles, has the capacity to impact climate’. Bjelle et al. find out something which from my point of view is gobsmacking: whilst the relative prevalence of particular goods in the overall patterns of demand has little effect on the emission of Greenhouse Gases (GHG) at the planetary scale, there are regional discrepancies. In developing countries and in emerging markets, changes in the baskets of goods consumed seem to have a strong impact, GHG-wise. In developed economies, on the other hand, however the consumers shift their preferences between different goods, it seems to be very largely climate-neutral. From there, Bjelle et al. conclude towards such issues as environmental taxation. My own take on those results is different. What impacts climate is the social change occurring in developing economies and emerging markets, and this is relatively quick demographic growth combined with quick creation of new social roles, and a big socio-economic difference between urban environments and rural ones.

In the broad theoretical perspective, states of society which we label as classes of socio-economic development are far more than just income brackets. They are truly different patterns of social interactions. I had a glimpse of that when I was comparing data on the consumption of energy per capita (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE ) with the distribution of gross national product per capita (https://data.worldbank.org/indicator/NY.GDP.PCAP.CD ). It looks as if different levels of economic development were different levels of energy in the social system. Each 100 ÷ 300 kilograms of oil equivalent per capita per year seem to be associated with specific institutions in society.
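That glimpse is reproducible. Here is a minimal sketch assuming the pandas-datareader package and access to the World Bank API; the indicator codes are the ones quoted above, whilst the country list is my arbitrary pick for illustration.

```python
# A minimal sketch, assuming the pandas-datareader package and a live
# connection to the World Bank API; the country list is an arbitrary pick.
from pandas_datareader import wb

data = wb.download(
    indicator=['EG.USE.PCAP.KG.OE',   # energy use (kg of oil equivalent per capita)
               'NY.GDP.PCAP.CD'],     # GDP per capita (current US$)
    country=['PL', 'DE', 'IN', 'NG'],
    start=2000, end=2014,
)
# energy intensity of income: kgoe per dollar of GDP per capita
data['kgoe_per_usd'] = data['EG.USE.PCAP.KG.OE'] / data['NY.GDP.PCAP.CD']
print(data.groupby(level='country')['kgoe_per_usd'].mean())
```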

Let’s imagine that climate change goes on. New s**t comes our way, which we need to deal with. We need to learn. We form new skillsets, and we define new social roles. New social roles mean new tradable surpluses, and new markets with new goods in it. We don’t really know what kind of skillsets, markets and goods that will be. Enhanced effort of collective adaptation leads to outcomes impossible to predict in themselves. The question is: can we predict the way those otherwise unpredictable outcomes will take shape?         

My fellow scientists seem not to like unpredictable outcomes. Shigetomi et al. (2020[2]) straightforwardly find out empirically that ‘only the very low, low, and very high-income households are likely to achieve a reduction in carbon footprint due to their high level of environmental consciousness. These income brackets include the majority of elderly households who are likely to have higher consciousness about environmental protection and addressing climate change’. In my fairy-tale, it means that only a fringe of society cares about environment and climate, and this is the fringe which does not really move a lot in terms of new social roles. People with low income have low income because their social roles do not allow them to trade significant surpluses, and elderly people with high income do not really shape the labour market.

This is what I infer from those empirical results. Yet, Shigetomi et al. conclude that ‘The Input-Output Analysis Sustainability Evaluation Framework (IOSEF), as proposed in this study, demonstrates how disparity in household consumption causes societal distortion via the supply chain, in terms of consumption distribution, environmental burdens and household preferences. The IOSEF has the potential to be a useful tool to aid in measuring social inequity and burden distribution allocation across time and demographics’.

Guys, like really. Just sit and think for a moment. I even pass over the claim that inequality of income is a social distortion, although I am tempted to say that no known human society has ever been free of that alleged distortion, and therefore we’d better accommodate it and stop calling it a distortion. What I want is logic. Guys, you have just proven empirically that only low-income people, and elderly high-income people, care about climate and environment. The middle-incomes and the relatively young high-incomes, thus the people who truly run the show of social and technological change, do not care as much as you would like them to. You claim that inequality of income is a distortion, and you want to eliminate it. When you kick inequality out of the social equation, you get rid of the low-income folks, and of the high-income ones. Stands to reason: with enforced equality, everybody is more or less middle-income. Therefore, the majority of society is in a social position where they don’t give a f**k about climate and environment. Besides, when you remove inequality, you remove vertical social mobility along hierarchies, and therefore you give a cold shoulder to a fundamental driver of social change. Still, you want social change, you have just said it.

Guys, the conclusions you derive from your own findings are the political equivalent of an inflatable dartboard made of fine paper. Cheap to make, might look dashing, and doomed to be extremely short-lived as soon as used in practice.   


[1] Bjelle, E. L., Wiebe, K. S., Többen, J., Tisserant, A., Ivanova, D., Vita, G., & Wood, R. (2021). Future changes in consumption: The income effect on greenhouse gas emissions. Energy Economics, 95, 105114. https://doi.org/10.1016/j.eneco.2021.105114

[2] Shigetomi, Y., Chapman, A., Nansai, K., Matsumoto, K. I., & Tohno, S. (2020). Quantifying lifestyle based social equity implications for national sustainable development policy. Environmental Research Letters, 15(8), 084044. https://doi.org/10.1088/1748-9326/ab9142

The fine details of theory

I keep digging. I keep revising that manuscript of mine – ‘Climbing the right hill – an evolutionary approach to the European market of electricity’ – in order to resubmit it to Applied Energy. Some of my readers might become slightly fed up with that thread. C’mon, man! How long do you mean to work on that revision? It is just an article! Yes, it is just an article, and I have that thing in me, those three mental characters: the curious ape, the happy bulldog, and the austere monk. The ape is curious, and it almost instinctively reaches for interesting things. My internal bulldog just loves digging out tasty pieces and biting into bones. The austere monk in me observes the intellectual mess, which the ape and the bulldog make together, and then he takes that big Ockham’s razor, from the recesses of his robe, and starts cutting bullshit out. When the three of those start dancing around a topic, it is a long path to follow, believe me.

In this update, I intend to structure the theoretical background of my paper. First, I restate the essential point of my own research, which I need and want to position in relation to other people’s views and research. I claim that energy-related policies, including those with an environmental edge, should assume that whatever we do with energy, as a civilisation, is a by-product of actions purposefully oriented on other types of outcomes. Metaphorically, claiming that a society should take the shift towards renewable energies as its chief goal, and take everything else as instrumental, is like saying that the chief goal of an individual should be to keep their blood sugar firmly at 80,00, whatever happens. What’s the best way of achieving it? Putting yourself in a clinic, under permanent intravenous nutrition, and giving up experimenting with those things people call ‘food’, ‘activity’, ‘important things to do’. Does anyone want to do it? Hardly anyone, I guess. The right level of blood sugar is approximately achieved as the balanced outcome of a proper lifestyle, and can serve as a gauge of whether our actual lifestyle is healthy.

Coming back from my nutritional metaphor to energy-related policies: there is no historical evidence that any human society has ever achieved any important change regarding the production of energy, or its consumption, by explicitly stating ‘From now on, we want better energy management’. The biggest known shifts in our energy base happened as by-products of changes oriented on something else. In Europe, my home continent, we had three big changes. First, way back in the day, starting from the 13th century or so, we harnessed the power of wind and that of water in, respectively, windmills and watermills. That served to provide kinetic energy to grind cereals into flour, which, in turn, served to feed a growing urban population. Windmills and watermills brought with them a silent revolution, which we are still wrapping our minds around. By the end of the 19th century, we started a massive shift towards fossil fuels. Why? Because we expected to drive Ferraris around, one day in the future? Not really. We just went terribly short on wood. People who claim that Europe should recreate its ‘ancestral’ forests deliberately ignore the fact that hardly anyone today knows what those ancestral forests should look like. Virtually all the forests we have today come from the massive replantation which started at the beginning of the 20th century. Yes, we have a bunch of 400-year-old oaks across the continent, but I dare remind that one oak is not exactly a forest.

The civilisational change which I think is going on now, in our human civilisation, is the readjustment of social roles, and of the ways we create new social roles, in the presence of a radical demographic change: an unprecedentedly high headcount of population, accompanied by a just as unprecedentedly low rate of demographic growth. For hundreds of years, our civilisation has been evolving as two concurrent factories: the factory of food in the countryside, and the factory of new social roles in cities. Food comes the best when the headcount of humans immediately around is a low constant, and new social roles burgeon the best when humans interact abundantly, and therefore when they are tightly packed together in a limited space. The basic idea of our civilisation is to put most of the absolute demographic growth into cities and let ourselves invent new ways of being useful to each other, whilst keeping rural land as productive as possible.

That thing had worked for centuries. It had worked for a humanity that had been relatively small in relation to the available space and had been growing quickly into that space. That idea of separating the production of food from the creation of social roles and institutions was adapted precisely to that demographic pattern, vestiges of which you can still find in some developing countries, as well as in emerging markets, with urban populations several dozen times denser than the rural ones, and cities that look like something effervescent. These cities grow bubbles out of themselves, and those bubbles burst just as quickly. My own trip to China showed me how cities can be truly alive, with layers and bubbles inside them. One is tempted to say these cities are something abnormal, as compared to the orderly, demographically balanced urban entities in developed countries. Still, historically, this is what cities are supposed to look like.

Now, something is changing. There are more of us on the planet than there have ever been but, at the same time, we experience an unprecedentedly low rate of demographic growth. Whilst we apparently still manage to keep total urban land on the planet at a constant level (https://data.worldbank.org/indicator/AG.LND.TOTL.UR.K2), we struggle with keeping the surface of agricultural land up to our needs (https://data.worldbank.org/indicator/AG.LND.AGRI.ZS). As in any system tilted out of balance, weird local phenomena start occurring, and the basic metrics pertinent to the production and consumption of energy show an interesting pattern. When I look at the percentage participation of renewable sources in the total consumption of energy (https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS), I see a bumpy cycle which looks like learning through experimentation. When I narrow down to the participation of renewables in the total consumption of electricity (https://data.worldbank.org/indicator/EG.ELC.RNEW.ZS), what I see is a more pronounced trend upwards, with visible past experimentation. The use of nuclear power to generate electricity (https://data.worldbank.org/indicator/EG.ELC.NUCL.ZS) looks like a long-run experiment, which is now in its phase of winding down.

Now, two important trends come into my focus. Energy efficiency, defined as average real output per unit of energy use (https://data.worldbank.org/indicator/EG.GDP.PUSE.KO.PP.KD), shows quite unequivocal a trend upwards. Someone could say ‘Cool, we purposefully make ourselves energy efficient’. Still, when we care to have a look at the coefficient of energy consumed per person per year (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE), a strong trend upwards appears there as well, with some deep bumps in the past. When I put those two trends back to back, I conclude that what we really max out on is the real output of goods and services in our civilisation, and energy efficiency is just a means to that end.

It is a good moment to puncture an intellectual balloon. I can frequently see and hear people argue that maximizing real output, in any social entity or context, is a manifestation of stupid, baseless greed and blindness to the truly important stuff. Still, please consider the following line of logic. We, humans, interact with the natural environment, and interact with each other.  When we interact with each other a lot, in highly dense networks of social relations, we reinforce each other’s learning, and start spinning the wheel of innovation and technological change. Abundant interaction with each other gives us new ideas for interacting with the natural environment.

Cities have peculiar properties. Firstly, by creating new social roles through intense social interaction, they create new products and services, and therefore new markets, connected in chains of value added. This is how the real output of goods and services in a society becomes a complex, multi-layered network of technologies, and this is how social structures become self-propelling businesses. The more complexity in social roles is created, the more products and services emerge, which brings development to a greater number of markets. That, in turn, gives greater a real output, greater income per person, which incentivizes people to create new social roles etc. This is how social complexity creates the phenomenon called economic growth.

The phenomenon of economic growth, thus the quantitative growth in the complex, networked technologies which emerge in relatively dense human settlements, has a few peculiar properties. You can’t see it, you can’t touch it, and yet you can immediately feel when its pace changes. Economic growth is among the most abstract concepts of social sciences, and yet living in a society with real economic growth at 5% per annum is like a different galaxy when compared to living in a place where real economic growth is actually a recession of -5%. The arithmetical difference is just 10 percentage points, sitting on top of an underlying base of 1. Still, lives in those two contexts are completely different. At +5% of real economic growth, starting a new business is generally a sensible idea, provided you have it nailed down with a business plan. At -5% a year, i.e. in recession, the same business plan can be an elaborate way of committing economic and financial suicide. At +5%, political elections are usually won by people who just sell you the standard political bullshit, like ‘I will make your lives better’ claimed by a heavily indebted alcoholic with no real career of their own. At -5%, politics start being haunted by those sinister characters, who look and sound like evil spirits from our dreams and claim they ‘will restore order and social justice’.

The society which we consider today as normal is a society of positive real economic growth. All the institutions we are used to, such as healthcare systems, internal security, public administration, education – all that stuff works at least acceptably smoothly when the complex, networked technologies of our society have a demonstrable capacity to increase their real economic output. That ‘normal’ state of society is closely connected to the factories of social roles which we commonly call ‘cities’. Real economic growth happens when the amount of new social roles – fabricated through intense interactions between densely packed humans – is enough for the new humans coming around. Being professionally active means having a social role solid enough to participate in the redistribution of value added created in complex technological networks. It is both formal science and sort of accumulated wisdom in governance that we’d better have most of the adult, able-bodied people in that state of professional activity. A small fringe of professionally inactive people is somehow healthy a margin of human energy free to be professionally activated, and when I say ‘small’, it is like no more than 5% of the adult population. Anything above becomes both a burden and a disruption to social cohesion. Too big a percentage of people with no clear, working social roles makes it increasingly difficult to make social interactions sufficiently abundant and complex to create enough new social roles for new people. This is why governments of this world attach keen importance to the accurate measurement of the phenomenon quantified as ‘unemployment’.

Those complex networks of technologies in our societies, which have the capacity to create social roles and generate economic growth, do their work properly when we can transact about them, i.e. when we have working markets for the final economic goods produced with those technologies, and for the intermediate economic goods produced for them. It is as if the whole thing worked only when we can buy and sell things. I was born in 1968, in a communist country, namely Poland, and I can tell you that in the absence of markets the whole mechanism just jams, progressively grinding to a halt. Yes, markets are messy and capricious, and transactional prices can easily get out of hand, creating inflation, and yet markets give those little local incentives needed to get the most out of human social roles. In the communist Poland, I remember people doing really strange things, like hoarding massive inventories of refrigerators or women’s underwear, just to create some speculative spin in an ad hoc, semi-legal or completely illegal market. It looks as if people needed to market and transact for real, amidst the theoretically perfectly planned society.

Anyway, economic growth is observable through big sets of transactions in product markets, and those transactions have two attributes: quantities and prices, AKA Q and P. It goes like Q*P = ∑ qi*pi. When I have – well, when we have – that complex network of technologies functionally connected to a factory of social roles for new humans, that thing makes ∑ qi*pi, thus a lot of local transactions with quantities qi, at prices pi. The economic growth I have been so vocal about in the last few paragraphs is the real growth, i.e. growth in the quantity Q = ∑ qi. In the long run, what I am interested in, and my government is interested in, is to reasonably max out on ∆Q = ∆∑ qi. Quantities change slowly and quite predictably, whilst prices tend to change quickly and, mostly in the short term, chaotically. Measuring real economic growth accurately involves kicking the ‘*pi’ component out of the equation and extracting just ∆Q = ∆∑ qi. Question: why bother with the observation of Q*P = ∑ qi*pi when the real thing we need is just ∆Q = ∆∑ qi? Answer: because there is no other way. Complex networks of technologies produce economic growth by creating increasing diversity in social roles, in concurrence with increasing diversity in products and their respective markets. No genius has come up, so far, with a method to add up, directly, the volume of visits in hairdressers’ salons with the volume of electric vehicles made, and all that with the volume of energy consumed.
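Here is what that extraction looks like in the simplest possible arithmetic, sketched in Python. The product names and numbers are made up, and the index used is the classic Laspeyres quantity index, i.e. both years’ quantities valued at base-year prices:

```python
# A minimal sketch of kicking the '*pi' component out: real growth measured
# by valuing both years' quantities at the same base-year prices (a Laspeyres
# quantity index). Product names and numbers are made up for illustration.
p0 = {'haircuts': 20.0, 'EVs': 30000.0, 'energy_MWh': 60.0}   # base-year prices
q0 = {'haircuts': 1000, 'EVs': 10, 'energy_MWh': 500}          # base-year quantities
q1 = {'haircuts': 1050, 'EVs': 12, 'energy_MWh': 510}          # next-year quantities

nominal_base = sum(p0[g] * q0[g] for g in p0)
real_next    = sum(p0[g] * q1[g] for g in p0)    # next year's Q at constant prices
real_growth  = real_next / nominal_base - 1      # ΔQ with the price effect removed
print(f'real growth: {real_growth:.2%}')
```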

I have ventured far from the disciplined logic of revising my paper for resubmission. The logical flow required in this respect by Applied Energy is the following: introduction first, method and material next, theory follows, and calculations come after. The literature which I refer to in my writing needs to have two dimensions: longitudinal and lateral. Laterally, I divide other people’s publications into three basic groups: a) standpoints which I argue with, b) methods and assumptions which I agree with and use to support my own reasoning, and c) viewpoints which go elsewhere, and can be interesting openings into something quite different from what I discuss. Longitudinally, the literature I cite needs, in the first place, to open up on the main points of my paper. This is the ‘Introduction’. Publications which I cite here need to point at the utility of developing the line of research which I pursue. They need to convey strong, general claims which set my landmarks.

The section titled ‘Theory’ is supposed to provide the fine referencing of my method, so as both to support its logic and to open up on the detailed calculations I develop in the following section. The literature which I bring forth here should contain specific developments, both factual and methodological, forming something like a conceptual cobweb. In other words, ‘Introduction’ should be provocative, whilst ‘Theory’ transforms provocation into a structure.

Among the recent literature I am passing in review, two papers come forth as provocative enough for me to discuss them in the introduction of my article: Andreoni 2020[1] and Koponen & Le Net 2021[2]. The first of the two, namely the paper by professor Valeria Andreoni, well in the mainstream of the MuSIASEM methodology (Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism), sets an important line of theoretical debate, namely the arguable imperative to focus energy-related policies, and economic policies in general, on two outcomes: maximizing energy efficiency (i.e. maximizing the amount of real output per unit of energy consumption), and minimizing cross-sectional differences between countries as regards that energy efficiency. Both postulates rest on the assumption that the energy efficiency of national economies corresponds to the metabolic efficiency of living organisms, and that maxing out on both is an objective evolutionary purpose in both cases. My method has the same general foundations as MuSIASEM: I claim that societies can be studied similarly to living organisms.

At that point, I diverge from the MuSIASEM framework: instead of focusing on the metabolism of such organically approached societies, I pay attention to their collective cognitive processes, their collective intelligence. I claim that human societies are collectively intelligent structures, which learn by experimenting with many alternative versions of themselves whilst staying structurally coherent. From that assumption, I derive two further claims. Firstly, if we reduce disparities between countries with respect to any important attribute of theirs, including energy efficiency, we kick a lot of opportunities for future learning out of the game: the ‘many alternative versions’ part of the process is no longer there. Secondly, I claim there is no such thing as an objective evolutionary purpose, be it maximizing energy efficiency or anything else. Evolution has no purpose; it just has the mechanism of selection by replication. Replication of humans is proven to happen most favourably when we collectively learn fast and make a civilisation out of that learning.

Therefore, whilst having no objective evolutionary purpose, human societies do have objective orientations: we collectively attempt to optimize some specific outcomes, namely those which organize our collective learning most efficiently, in a predictable cycle of episodes marked by large errors in adjustment alternating with episodes displaying much smaller errors in that respect.

From that theoretical cleavage between my method and the postulates of the MuSIASEM framework, I derive two practical claims as regards economic policies, especially as regards environmentally friendly energy systems. For one, looking for homogeneity between countries is a road to nowhere. For two, expecting that human societies will purposefully strive to maximize their overall energy efficiency is an unrealistic goal, and therefore a harmful assumption in the presence of the serious challenges connected to climate change. Public policies should explicitly aim for disparity of outcomes in the technological race, and the race should be oriented on outcomes which human societies objectively pursue.

Whilst disagreeing with professor Valeria Andreoni on principles, I find her empirical findings highly interesting. Rapid economic change, especially the kind of change associated with crises, seems to correlate with deepening disparities between countries in terms of energy efficiency. In other words, when large economic systems need to adjust hard and fast, they play their individual games separately as regards energy efficiency. Rapid economic adjustment under constraint is conducive to a large discrepancy of alternative states in what energy efficiency can possibly be, in the context of other socio-economic outcomes, and, therefore, there is more material for learning collectively by experimenting with many alternative versions of ourselves.

Against that theoretical sketch, I place the second paper which I judge worth introducing with: Koponen, K., & Le Net, E. (2021). Towards robust renewable energy investment decisions at the territorial level. Applied Energy, 287, 116552. https://doi.org/10.1016/j.apenergy.2021.116552. I chose this one because it shows a method very similar to mine: the authors build a simulative model in Excel, where they create m = 5000 alternative futures for a networked energy system aiming at optimizing 5 performance metrics. The model is based on actual empirical data for those variables, and the ‘alternative futures’ are, in other words, 5000 alternative states of the same system. Outcomes are gauged with the so-called regret analysis, where the relative performance of a future on a specific metric is measured as its distance to the best value of that metric across all futures: to the general maximum when the metric is something we strive to maximize (e.g. capacity), and to the general minimum when it is something we strive to minimize (e.g. GHG emissions).
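To show how I understand that regret logic, here is a minimal sketch in Python. It is my loose reconstruction, not the authors’ actual model: random numbers stand in for the simulated performance of each of the m = 5000 futures, and the metric names beyond capacity and GHG are my own placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder metrics; 'maximize' flags which direction is desirable.
metrics = ["capacity", "GHG", "cost", "jobs", "land_use"]
maximize = np.array([True, False, False, True, False])

m = 5000  # alternative futures, as in Koponen & Le Net
# Random stand-in for the simulated performance of each future on each metric.
performance = rng.random((m, len(metrics)))

# Best achievable value per metric across all futures:
best = np.where(maximize, performance.max(axis=0), performance.min(axis=0))

# Regret = distance to that best value, signed so that regret is always >= 0.
regret = np.where(maximize, best - performance, performance - best)

# One possible use: rank futures by their worst regret over the five metrics.
worst_regret = regret.max(axis=1)
print("most robust future:", worst_regret.argmin())
```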

I can generalize on the method presented by Koponen and Le Net, and assume that any given state of society can be studied as one among many alternative states of said society, and that the future depends very largely on how this society navigates through the largely uncharted waters of its own alternative states. Navigators need a star in the sky to find their North, and so do societies. Koponen and Le Net simulate civilizational navigation under the constraint of four such stars, namely the cost of CO2, the cost of electricity, the cost of natural gas, and the cost of biomass. I generalize and say that experimentation with alternative versions of our collectively intelligent selves can be oriented on optimizing many alternative Norths, and that the kind of North we will most likely pursue is the kind which allows us to learn efficiently how to go from one alternative future to another.
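If I were to put that generalization in code, it could look like the sketch below: the four ‘stars’ sampled across alternative futures, and two toy ‘Norths’ compared by how much variation – thus how much material for collective learning – each of them leaves across those futures. The scoring rules and cost ranges are entirely my assumptions, not Koponen and Le Net’s.

```python
import numpy as np

rng = np.random.default_rng(7)

m = 5000  # alternative futures
# The four 'stars', sampled over assumed (hypothetical) cost ranges.
costs = {
    "CO2":         rng.uniform(20, 120, m),  # EUR / t, assumed range
    "electricity": rng.uniform(30, 150, m),  # EUR / MWh
    "natural_gas": rng.uniform(10, 60, m),   # EUR / MWh
    "biomass":     rng.uniform(15, 45, m),   # EUR / MWh
}

# Two toy 'Norths': orientations that score each future differently.
# Purely illustrative scoring rules.
score_efficiency = -(costs["electricity"] + costs["natural_gas"])
score_low_carbon = -(costs["CO2"] + 0.5 * costs["biomass"])

# A crude gauge of how much material for learning each North leaves:
# the spread of its scores across the alternative futures.
for name, score in [("efficiency", score_efficiency),
                    ("low_carbon", score_low_carbon)]:
    print(f"{name}: spread across futures = {score.std():.2f}")
```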

Good. This is my ‘Introduction’. It sets the tone for the method I present in the subsequent section, and the method opens up on the fine details of theory.


[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[2] Koponen, K., & Le Net, E. (2021). Towards robust renewable energy investment decisions at the territorial level. Applied Energy, 287, 116552. https://doi.org/10.1016/j.apenergy.2021.116552