We keep going until we observe

I keep working on a proof-of-concept paper for my idea of ‘Energy Ponds’. In my last two updates, namely in ‘Seasonal lakes’, and in ‘Le Catch 22 dans ce jardin d’Eden’, I sort of refreshed my ideas and set the canvas for painting. Now, I start sketching. What exact concept do I want to prove, and what kind of evidence can possibly confirm (or discard) that concept? The idea I am working on has a few different layers. The most general vision is that of purposefully storing water in spongy structures akin to swamps or wetlands. These can bear various degrees of artificial construction, and can stretch from natural wetlands, through semi-artificial ones, all the way to urban technologies such as rain gardens and sponge cities. The most general proof corresponding to that vision is a review of publicly available research – peer-reviewed papers, preprints, databases etc. – on that general topic.

Against that general landscape, I sketch two more specific concepts: the idea of using ram pumps as a technology of forced water retention, and the possibility of locating those wetland structures in broadly understood Northern Europe, which is my home region. Correspondingly, I need to provide two streams of scientific proof: a review of the literature on ram pumping technology, on the one hand, and a review of the actual natural conditions and land management policies in Europe, on the other. I also need to consider the environmental impact of creating new wetland-like structures in Northern Europe, as well as the socio-economic impact and the legal feasibility of conducting such projects.

Next, I sort of build upwards. I hypothesise a complex technology, where ram-pumped water from a river goes into a sort of light elevated tanks, and from there, using the principle of the Roman siphon, cascades down into wetlands through a series of small hydro-electric turbines. The turbines generate electricity, which is stored and then sold outside.

At that point, I have a technology of water retention coupled with a technology of energy generation and storage. I further advance a second hypothesis: that such a complex technology will be economically sustainable based on the corresponding sales of electricity. In other words, I want to figure out a configuration of that technology which will be suitable for communities that either don’t care at all, or simply cannot afford to care, about the positive environmental impact of the solution proposed.

Proof of concept for those two hypotheses is going to be complex. First, I need to pass in review the available technologies for energy storage and energy generation, as well as for the construction of elevated tanks and Roman siphons. I need to take into account various technological mixes, including the incorporation of wind turbines and photovoltaic installations into the whole thing, in order to optimize the output of energy. I will try to look for documented examples of small hydro-generation coupled with wind and solar. Then, I have to comb the literature for mathematical models used to optimize such power systems, and set them against my own idea of reverse engineering backwards from the storage technology. I take the technology of energy storage which seems the most suitable for the local market of energy, and for the hypothetical charging from mixed hydro-wind-solar generation. I build a control scenario where that storage facility just buys energy at wholesale prices from the power grid and then resells it. Next, I configure the hydro-wind-solar generation so as to make it economically competitive against the supply of energy from the power grid.
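
Just to keep myself honest about what that control scenario means in numbers, here is a minimal sketch of the comparison, in Python. Every figure in it – the volume of energy, the prices, the round-trip efficiency, the levelised cost of the hydro-wind-solar mix – is an assumption made up for illustration, not market data.

```python
# Toy comparison of the two scenarios sketched above. All figures are
# placeholders for illustration, not real market data.

def storage_arbitrage_margin(energy_mwh, wholesale_buy, retail_sell, round_trip_eff):
    """Annual margin of a storage facility that only buys from the grid and resells."""
    return energy_mwh * (retail_sell * round_trip_eff - wholesale_buy)

def own_generation_margin(energy_mwh, retail_sell, lcoe_generation, round_trip_eff):
    """Annual margin when the same storage is charged from own hydro-wind-solar generation."""
    return energy_mwh * (retail_sell * round_trip_eff - lcoe_generation)

if __name__ == "__main__":
    energy = 5_000            # MWh pushed through storage per year (assumed)
    buy, sell = 45.0, 110.0   # EUR/MWh, wholesale purchase vs. resale price (assumed)
    eff = 0.85                # round-trip efficiency of the storage (assumed)
    lcoe = 60.0               # levelised cost of the hydro-wind-solar mix, EUR/MWh (assumed)

    control = storage_arbitrage_margin(energy, buy, sell, eff)
    variant = own_generation_margin(energy, sell, lcoe, eff)
    print(f"Control scenario (grid arbitrage): {control:,.0f} EUR/year")
    print(f"Hydro-wind-solar charging:         {variant:,.0f} EUR/year")
    print("Own generation is competitive" if variant >= control else "Grid arbitrage wins")
```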

Now, I sketch. I keep in mind the levels of conceptualization outlined above, and I quickly move through published science along that logical path, quickly picking a few articles for each topic. I am going to put those nonchalantly collected pieces of science back-to-back and see how, and whether at all, it all makes sense together. I start with Bortolini & Zanin (2019[1]), who study the impact of rain gardens on water management in cities of the Veneto region in Italy. Rain gardens are vegetal structures, set up in the urban environment, with the specific purpose of retaining rainwater. Bortolini & Zanin (2019 op. cit.) use a simplified water balance, where the rain garden absorbs and retains a volume ‘I’ of water (‘I’ stands for infiltration), which is the difference between precipitation, on the one hand, and the sum of runoff overflowing from the rain garden plus evapotranspiration, on the other hand. Soil and plants in the rain garden have a given top capacity to retain water. Green plants typically hold 80 – 95% of their mass in water, whilst trees hold about 50%. Soil is considered wet when it contains about 25% of water. The rain garden absorbs water from precipitation at a rate determined by hydraulic conductivity, i.e. the relative ease with which a fluid (usually water) moves through pore spaces or fractures, which depends on the intrinsic permeability of the material, the degree of saturation, and on the density and viscosity of the fluid.
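
That simplified water balance is easy to restate numerically. Below is a minimal sketch of it, assuming a single rain event and placeholder numbers; the cap on absorption stands in for the constraint of hydraulic conductivity discussed above.

```python
# Simplified water balance of a rain garden, as described above:
# infiltration I = precipitation P - (runoff R + evapotranspiration ET),
# capped by what hydraulic conductivity lets the structure absorb.
# All numbers are illustrative placeholders, not data from the paper.

def infiltration(p_mm, runoff_mm, evapo_mm, max_absorption_mm):
    """Water (in mm over the garden's surface) retained during one rain event."""
    i = p_mm - (runoff_mm + evapo_mm)
    return max(0.0, min(i, max_absorption_mm))

retained = infiltration(p_mm=30.0, runoff_mm=2.0, evapo_mm=1.5, max_absorption_mm=40.0)
print(f"Retained: {retained:.1f} mm, i.e. {retained / 30.0:.0%} of precipitation")
```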

As I look at it, I can see that the actual capacity of water retention in a rain garden can hardly be determined a priori, unless we have really a lot of empirical data from the given location. For a new rain garden in a new location, it is safe to assume that we need an experimental phase, when we empirically assess the retentive capacity of the rain garden with different configurations of soil and vegetation. That leads me to the generalization that any porous structure we use for retaining rainwater, be it something like wetlands or something like a rain garden in an urban environment, has a natural constraint of hydraulic conductivity, and that constraint determines the percentage of precipitation, and the corresponding metric volume, which the given structure can retain.

Bortolini & Zanin (2019 op. cit.) bring forth empirical results which suggest that properly designed rain gardens located on rooftops in a city can absorb from 87% to 93% of the total input of water they receive. Cool. I move on towards the issue of water management in Europe, with a working paper by Fribourg-Blanc, B. (2018[2]), and the most important takeaway from that paper is that we have something called the European Platform for Natural Water Retention Measures AKA http://nwrm.eu , and that thing has both good properties and bad properties. The good thing about http://nwrm.eu is that it contains loads of data and publications about projects in Natural Water Retention in Europe. The bad thing is that http://nwrm.eu is not a secure website. Another paper, by Tóth et al. (2017[3]), tells me that another analytical tool exists, namely the European Soil Hydraulic Database (EU-SoilHydroGrids ver1.0).

So far, so good. I already know there is data and science for evaluating, with acceptable precision, the optimal structure and the capacity for water retention in porous structures such as rain gardens or wetlands, in the European context. I move to the technology of ram pumps. I grab two papers: Guo et al. (2018[4]), and Li et al. (2021[5]). They show me two important things. Firstly, China seems to be burning rubber in the field of ram pumping technology. Secondly, the greatest uncertainty in that technology seems to be the actual height to which those ram pumps can elevate water, or, when coupled with hydropower, the hydraulic head which ram pumps can create. Guo et al. (2018 op. cit.) claim that 50 meters of elevation is the maximum which is both feasible and efficient. Li et al. (2021 op. cit.) are sort of vertically more conservative and claim that the whole thing should be kept below 30 meters of elevation. Both are better than 20 meters, which is what I thought was the best one could expect. Greater elevation of water means greater hydraulic head, and more hydropower to be generated. It pays off to review literature.
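
To see why those 20, 30 or 50 meters matter so much, the textbook hydropower relation P = ρ · g · Q · H · η is enough. The sketch below runs it for the three elevations discussed above; the flow rate and turbine efficiency are assumptions of mine, not values from the papers cited.

```python
# Standard hydropower relation P = rho * g * Q * H * eta, used here only to
# illustrate why the achievable elevation (hydraulic head H) matters so much.
# The flow rate and turbine efficiency below are assumptions, not values from the papers.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_kw(flow_m3_s, head_m, efficiency=0.75):
    """Electric power (kW) from a given flow and hydraulic head."""
    return RHO * G * flow_m3_s * head_m * efficiency / 1000.0

for head in (20, 30, 50):   # the three elevations discussed above
    print(f"Head {head} m -> {hydro_power_kw(flow_m3_s=0.05, head_m=head):.1f} kW")
```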

Lots of uncertainty as for the actual capacity and efficiency of ram pumping means quick technological change in that domain. This is economically interesting. It means that investing in projects which involve ram pumping means investing in a quickly changing technology. That means both high hopes for an even better technology in the immediate future, and a high need for cash on the balance sheets of the entities involved.

I move to the end-of-the-pipeline technology in my concept, namely to energy storage. I study a paper by Koohi-Fayegh & Rosen (2020[6]), which suggests two things. Firstly, for a standalone installation in renewable energy, whatever combination of small hydropower, photovoltaic and small wind turbines we think of, lithium-ion batteries are always a good idea for power storage. Secondly, when we work with hydrogeneration, thus when we have any hydraulic head to make electricity with, pumped storage comes sort of naturally. That leads me to an idea which looks even crazier than what I have imagined so far: what if we created an elevated garden with a strong capacity for water retention? Ram pumps take water from the river and pump it up onto elevated platforms with rain gardens on them. Those platforms can be optimized for their absorption of sunlight, and thus for their interaction with whatever is underneath them.

I move to small hydro, and I find two papers, namely Couto & Olden (2018[7]), and Lange et al. (2018[8]), which are both interestingly critical as regards small hydropower installations. Lange et al. (2018 op. cit.) claim that the overall environmental impact of small hydro should be closely monitored. Couto & Olden (2018 op. cit.) go further and claim there is a ‘craze’ about small hydro, and that craze has already led to overinvestment in the corresponding installations, which can be damaging both environmentally and economically (overinvestment means financial collapse of many projects). With those critical views in mind, I turn to another paper, by Zhou et al. (2019[9]), who approach the issue as a case for optimization, within a broader framework called the ‘Water-Food-Energy’ Nexus, WFE for closer friends. This paper, just as a few others it cites (Ming et al. 2018[10]; Uen et al. 2018[11]), advocates for using artificial intelligence in order to optimize for WFE.

Zhou et al. (2019 op.cit.) set three hydrological scenarios for empirical research and simulation. The baseline scenario corresponds to an average hydrological year, with average water levels and average precipitation. Next to it are: a dry year and a wet year. The authors assume that the cost of installation in small hydropower is $600 per kW on average. They simulate the use of two technologies for hydro-electric turbines: Pelton and Vortex. Pelton turbines are optimized paddle wheels, essentially, whilst the Vortex technology consists in creating, precisely, a vortex of water, and that vortex moves a rotor placed in the middle of it.

Zhou et al. (2019 op.cit.) create a multi-objective function to optimize, with the following desired outcomes:

>> Objective 1: maximize the reliability of water supply by minimizing the probability of real water shortage occurring.

>> Objective 2: maximize water storage given the capacity of the reservoir. Note: reservoir is understood hydrologically, as any structure, natural or artificial, able to retain water.

>> Objective 3: maximize the average annual output of small hydro-electric turbines

Those objectives are achieved under the corresponding sets of constraints. For water supply, those constraints all revolve around water balance, whilst for energy output it is more about the engineering properties of the technologies taken into account. The three objectives are hierarchized. First, Zhou et al. (2019 op.cit.) perform an optimization regarding Objectives 1 and 2, in order to find the optimal hydrological characteristics to meet, and then, on the basis of these, they optimize the technology to put in place as regards power output.

The general tool for optimization used by Zhou et al. (2019 op.cit.) is a genetic algorithm called NSGA-II, AKA Non-dominated Sorting Genetic Algorithm. Apparently, NSGA-II has a long and successful track record in engineering, including water management and energy (see e.g. Chang et al. 2016[12]; Jain & Sachdeva 2017[13]; Assaf & Shabani 2018[14]). I want to stop for a while here and have a good look at this specific algorithm. The logic of NSGA-II starts with creating an initial population of cases/situations/configurations etc. Each case is a combination of observations as regards the objectives to meet, and the actual values observed in constraining variables, e.g. precipitation for water balance or hydraulic head for the output of hydropower. In the conventional lingo of this algorithm, those cases are called chromosomes. Yes, I know, a hydro-electric turbine placed in the context of water management hardly looks like a chromosome, but it is a genetic algorithm, and it just sounds fancy to use that biologically marked vocabulary.

As for me, I like staying close to real life, and therefore I call those cases solutions rather than chromosomes. Anyway, the underlying math is the same. Once I have that initial population of real-life solutions, I calculate two parameters for each of them: their rank as regards the objectives to maximize, and their so-called ‘crowded distance’. Ranking is done with the procedure of fast non-dominated sorting. It is a comparison in pairs, where a solution A dominates another solution B if and only if no objective of A is worse than the corresponding objective of B, and at least one objective of A is better than that objective of B. The solution which scores the most wins in such pairwise comparisons sits at the top of the ranking, the one with the second-best score of wins comes second, and so on. Crowding distance is essentially the same as what I call the coefficient of coherence in my own research: a Euclidean distance (or another mathematical distance) is calculated for each pair of solutions. As a result, each solution is associated with k Euclidean distances to the k remaining solutions, which can be reduced to an average distance, i.e. the crowded distance.
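
Here is a toy sketch of those two scores, in Python, following the simplified description above (ranking by the count of pairwise domination wins, and an average pairwise Euclidean distance standing in for the crowded distance), rather than the canonical, front-based NSGA-II procedure. The solutions are random numbers, just to show the mechanics.

```python
# Toy implementation of the two scores described above: (1) rank by the number of
# pairwise domination "wins", (2) an average pairwise Euclidean distance used here
# as the "crowded distance". This follows the simplified description in the text,
# not the canonical front-based NSGA-II procedure.
import numpy as np

rng = np.random.default_rng(42)
objectives = rng.random((20, 3))   # 20 random solutions, 3 objectives to maximize

def dominates(a, b):
    """a dominates b if a is no worse on every objective and better on at least one."""
    return np.all(a >= b) and np.any(a > b)

def domination_wins(obj):
    n = len(obj)
    return np.array([sum(dominates(obj[i], obj[j]) for j in range(n) if j != i)
                     for i in range(n)])

def average_pairwise_distance(obj):
    dists = np.linalg.norm(obj[:, None, :] - obj[None, :, :], axis=-1)
    return dists.sum(axis=1) / (len(obj) - 1)

wins = domination_wins(objectives)
crowding = average_pairwise_distance(objectives)
ranking = np.argsort(-wins)   # most wins first
print("Top 5 solutions by domination wins:", ranking[:5])
print("Their average distances to the rest:", np.round(crowding[ranking[:5]], 3))
```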

In the next step, an offspring population is produced from that original population of solutions. It is created by taking the relatively fittest solutions from the initial population, recombining their characteristics in a 50/50 proportion, and endowing them with some capacity for endogenous mutation. Two out of these three genetic functions are de facto controlled. We choose the relatively fittest by establishing some kind of threshold for fitness, as regards the objectives pursued. It can be a required minimum, a quantile (e.g. the third quartile), or an average. In the first case, we arbitrarily impose a scale of fitness on our population, whilst in the latter two the hierarchy of fitness is generated endogenously from the population of solutions observed. Fitness can have shades and grades, by weighing the score in non-dominated sorting, thus the number of wins over other solutions, on the one hand, and the crowded distance on the other hand. In other words, we can go for solutions which have a lot of similar ones in the population (i.e. which have a low average crowded distance), or, conversely, we can privilege lone wolves, with a high average Euclidean distance from anything else on the plate.

The capacity for endogenous mutation means that we can allow variance in all, or in just selected, variables which make up each solution. The number of degrees of freedom we allow in each variable dictates the number of mutations that can be created. Once again, discretionary power is given to the analyst: we can choose the genetic traits which can mutate and we can determine their freedom to mutate. In an engineering problem, technological and environmental constraints should normally put a cap on the capacity for mutation. Still, we can think about an algorithm which definitely kicks the lid off the barrel of reality, and which generates mutations in the wildest registers of the variables considered. It is a way to simulate a process where the presence of strong outliers has a strong impact on the whole population.
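
A single reproduction step, as described above, can be sketched like this. The fitness threshold (the median), the 50/50 recombination and the scale of mutation are arbitrary assumptions for illustration; the clipping at the end stands in for the technological and environmental cap on mutation.

```python
# Sketch of one reproduction step as described above: keep the fittest solutions
# (here: above-median fitness), recombine random pairs of parents 50/50, and add
# a capped endogenous mutation. The threshold, mutation scale and bounds are
# arbitrary assumptions for illustration.
import numpy as np

rng = np.random.default_rng(7)

def offspring(population, fitness, mutation_scale=0.05, bounds=(0.0, 1.0)):
    """population: (n, k) array of solutions; fitness: (n,) scores to maximize."""
    parents = population[fitness >= np.median(fitness)]          # select the fittest half
    n_children, n_traits = population.shape
    idx_a = rng.integers(0, len(parents), n_children)
    idx_b = rng.integers(0, len(parents), n_children)
    mask = rng.random((n_children, n_traits)) < 0.5              # 50/50 recombination
    children = np.where(mask, parents[idx_a], parents[idx_b])
    children += rng.normal(0.0, mutation_scale, children.shape)  # endogenous mutation
    return np.clip(children, *bounds)                            # engineering cap on mutation

pop = rng.random((20, 4))
fit = pop.sum(axis=1)          # placeholder fitness: sum of traits
next_gen = offspring(pop, fit)
print(next_gen.shape, float(next_gen.min().round(3)), float(next_gen.max().round(3)))
```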

The same discretionary cap on the freedom to evolve is to be found when we repeat the process. The offspring generation of solutions goes essentially through the same process as the initial one, to produce further offspring: ranking by non-dominated sorting and crowded distance, selection of the fittest, recombination, and endogenous mutation. At the starting point of this process, we can play two alternative versions of Mother Nature. We can be a mean Mother Nature, and we shave off from the offspring population all those baby-solutions which do not meet the initial constraints, e.g. zero supply of water in this specific case. On the other hand, we can be an even meaner Mother Nature and allow those strange, dysfunctional mutants to keep going, and see what happens to the whole species after a few rounds of genetic reproduction.

With each generation, we compute an average crowded distance between all the solutions created, i.e. we check how diverse the species is in this generation. As long as diversity grows or remains constant, we assume that the divergence between the solutions generated grows or stays the same. Similarly, we can compute an even more general crowded distance between each pair of generations, and thereby assess how far the current generation has gone from the parent one. We keep going until we observe that the intra-generational crowded distance and the inter-generational one start narrowing down asymptotically to zero. In other words, we consider stopping the evolution when the solutions in the game become highly similar to each other and when genetic change stops bringing significant functional change.
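
The stopping rule just described fits in a few lines. In the sketch below, the intra-generational diversity and the shift between consecutive generations are both measured with plain Euclidean distances, and the tolerance is an assumption of mine.

```python
# Sketch of the stopping rule described above: stop evolving once both the
# intra-generational diversity and the shift between consecutive generations
# narrow down towards zero. The tolerance is an arbitrary assumption.
import numpy as np

rng = np.random.default_rng(3)

def intra_generation_distance(gen):
    """Average pairwise Euclidean distance inside one generation (its diversity)."""
    d = np.linalg.norm(gen[:, None, :] - gen[None, :, :], axis=-1)
    return d.sum() / (len(gen) * (len(gen) - 1))

def inter_generation_distance(gen_a, gen_b):
    """How far the current generation has drifted from the parent one (centroid shift)."""
    return np.linalg.norm(gen_a.mean(axis=0) - gen_b.mean(axis=0))

def has_converged(previous_gen, current_gen, tol=1e-3):
    """True once both diversity and the inter-generational shift fall below tol."""
    return (intra_generation_distance(current_gen) < tol
            and inter_generation_distance(previous_gen, current_gen) < tol)

# Example: two nearly identical, tightly clustered generations are flagged as converged.
parent = rng.random((20, 4)) * 1e-4
child = parent + rng.normal(0.0, 1e-6, parent.shape)
print(has_converged(parent, child))
```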

Cool. When I want to optimize my concept of Energy Ponds, I need to add the objective of a constrained return on investment, based on the sales of electricity. In comparison to Zhou et al. (2019 op.cit.), I need to add a third level of selection. I start with selecting, environmentally, the solutions which make sense in terms of water management. In the next step, I produce a range of solutions which assure the greatest output of power, in a possible mix with solar and wind. Then I take those and filter them through the NSGA-II procedure as regards their capacity to sustain themselves financially. Mind you, I can shake things up a bit by fusing together those levels of selection. I can simulate extreme cases, when, for example, good economic sustainability becomes an environmental problem. Still, it would be rather theoretical. In Europe, non-compliance with environmental requirements makes a project a non-starter per se: you just can’t get the necessary permits if your hydropower project messes with the hydrological constraints legally imposed on the given location.
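
Put as code, that three-level selection is nothing more than a staged filter. In the sketch below, the three evaluators (hydrological sense, power output, return on investment) are hypothetical placeholders that a full model would have to supply.

```python
# Sketch of the three-level selection described above: keep only the configurations
# which make hydrological sense, rank the survivors by power output, and keep those
# which pay for themselves. The three evaluator functions are hypothetical placeholders.

def select_energy_ponds(candidates, water_ok, power_output, annual_roi):
    """candidates: iterable of configurations; the three callables are supplied by the modeller."""
    stage1 = [c for c in candidates if water_ok(c)]        # environmental / hydrological filter
    stage1.sort(key=power_output, reverse=True)
    stage2 = stage1[: max(1, len(stage1) // 2)]            # best half by power output
    stage3 = [c for c in stage2 if annual_roi(c) > 0.0]    # financially self-sustaining
    return stage3
```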

Cool. It all starts making sense. There is apparently a lot of stir in the technology of making semi-artificial structures for retaining water, such as rain gardens and wetlands. That means a lot of experimentation, and that experimentation can be guided and optimized by testing the fitness of alternative solutions for meeting objectives of water management, power output and economic sustainability. I have some starting data to produce the initial generation of solutions, and I can then try to optimize them with an algorithm such as NSGA-II.


[1] Bortolini, L., & Zanin, G. (2019). Reprint of: Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban forestry & urban greening, 37, 74-86. https://doi.org/10.1016/j.ufug.2018.07.003

[2] Fribourg-Blanc, B. (2018, April). Natural Water Retention Measures (NWRM), a tool to manage hydrological issues in Europe?. In EGU General Assembly Conference Abstracts (p. 19043). https://ui.adsabs.harvard.edu/abs/2018EGUGA..2019043F/abstract

[3] Tóth, B., Weynants, M., Pásztor, L., & Hengl, T. (2017). 3D soil hydraulic database of Europe at 250 m resolution. Hydrological Processes, 31(14), 2662-2666. https://onlinelibrary.wiley.com/doi/pdf/10.1002/hyp.11203

[4] Guo, X., Li, J., Yang, K., Fu, H., Wang, T., Guo, Y., … & Huang, W. (2018). Optimal design and performance analysis of hydraulic ram pump system. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 232(7), 841-855. https://doi.org/10.1177%2F0957650918756761

[5] Li, J., Yang, K., Guo, X., Huang, W., Wang, T., Guo, Y., & Fu, H. (2021). Structural design and parameter optimization on a waste valve for hydraulic ram pumps. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 235(4), 747–765. https://doi.org/10.1177/0957650920967489

[6] Koohi-Fayegh, S., & Rosen, M. A. (2020). A review of energy storage types, applications and recent developments. Journal of Energy Storage, 27, 101047. https://doi.org/10.1016/j.est.2019.101047

[7] Couto, T. B., & Olden, J. D. (2018). Global proliferation of small hydropower plants–science and policy. Frontiers in Ecology and the Environment, 16(2), 91-100. https://doi.org/10.1002/fee.1746

[8] Lange, K., Meier, P., Trautwein, C., Schmid, M., Robinson, C. T., Weber, C., & Brodersen, J. (2018). Basin‐scale effects of small hydropower on biodiversity dynamics. Frontiers in Ecology and the Environment, 16(7), 397-404.  https://doi.org/10.1002/fee.1823

[9] Zhou, Y., Chang, L. C., Uen, T. S., Guo, S., Xu, C. Y., & Chang, F. J. (2019). Prospect for small-hydropower installation settled upon optimal water allocation: An action to stimulate synergies of water-food-energy nexus. Applied Energy, 238, 668-682. https://doi.org/10.1016/j.apenergy.2019.01.069

[10] Ming, B., Liu, P., Cheng, L., Zhou, Y., & Wang, X. (2018). Optimal daily generation scheduling of large hydro–photovoltaic hybrid power plants. Energy Conversion and Management, 171, 528-540. https://doi.org/10.1016/j.enconman.2018.06.001

[11] Uen, T. S., Chang, F. J., Zhou, Y., & Tsai, W. P. (2018). Exploring synergistic benefits of Water-Food-Energy Nexus through multi-objective reservoir optimization schemes. Science of the Total Environment, 633, 341-351. https://doi.org/10.1016/j.scitotenv.2018.03.172

[12] Chang, F. J., Wang, Y. C., & Tsai, W. P. (2016). Modelling intelligent water resources allocation for multi-users. Water resources management, 30(4), 1395-1413. https://doi.org/10.1007/s11269-016-1229-6

[13] Jain, V., & Sachdeva, G. (2017). Energy, exergy, economic (3E) analyses and multi-objective optimization of vapor absorption heat transformer using NSGA-II technique. Energy Conversion and Management, 148, 1096-1113. https://doi.org/10.1016/j.enconman.2017.06.055

[14] Assaf, J., & Shabani, B. (2018). Multi-objective sizing optimisation of a solar-thermal system integrated with a solar-hydrogen combined heat and power system, using genetic algorithm. Energy Conversion and Management, 164, 518-532. https://doi.org/10.1016/j.enconman.2018.03.026

Big Black Swans

Oops! Another big break in blogging. Sometimes life happens so fast that thoughts in my head run faster than I can possibly write about them. This is one of those sometimeses. Topics for research and writing abound, projects abound, everything is changing at a pace which proves challenging to a gentleman in his fifties, such as I am. Yes, I am a gentleman: even when I want to murder someone, I go out of my way to stay polite.

I need to do that thing I periodically do, on this blog. I need to use published writing as a method of putting order in the chaos. I start with sketching the contours of chaos and its main components, and then I sequence and compartmentalize.

My chaos is made of the following parts:

>> My research on collective intelligence

>> My research on energy systems, with focus on investment in energy storage

>> My research on the civilisational role of cities, and on the concept of the entire human civilisation, such as we know it today, being a combination of two productive mechanisms: production of food in the countryside, and production of new social roles in the cities.

>> Joint research which I run with a colleague of mine, on the reproduction of human capital

>> The project I once named Energy Ponds, and which I recently renamed ‘Project Aqueduct’, for the purposes of promoting it.

>> The project which I have just started, together with three other researchers, on the role of Territorial Defence Forces during the COVID-19 pandemic

>> An extremely interesting project, which both I and a bunch of psychiatrists from my university have provisionally failed to kickstart, on the analysis of natural language in diagnosing and treating psychoses

>> A concept which recently came to my mind, as I was working on a crowdfunding project: a game as method of behavioural research about complex decisional patterns.

Nice piece of chaos, isn’t it? How do I put order in my chaos? Well, I ask myself, and, of course, I do my best to answer honestly the following questions: What do I want? How will I know I have what I want? How will other people know I have what I want? Why should anyone bother? What is the point? What do I fear? How will I know my fears come true? How will other people know my fears come true? How do I want to sequence my steps? What skills do I need to learn?

I know I tend to be deceitful with myself. As a matter of fact, most of us tend to. We like confirming our ideas rather than challenging them. I think I can partly overcome that subjectivity of mine by interweaving my answers to those questions with references to published scientific research. Another way of staying close to real life with my thinking consists in trying to understand what specific external events have pushed me to engage in the different paths, which, as I walk down all of them at the same time, make my personal chaos.

In 2018, I started using artificial neural networks, just like that, mostly for fun, and in a very simple form. As I observed those things at work, I developed a deep fascination with intelligent structures, and just as deep (i.e. f**king hard to phrase out intelligibly) an intuition that neural networks can be used as simulators of collectively intelligent social structures.

Both of my parents died in 2019, exactly at the same age of 78, having spent the last 20 years of their respective individual lives in complete separation from each other, to the point of not having exchanged a spoken or written word over those last 20 years. That changed my perspective as regards subjectivity. I became acutely aware how subjective I am in my judgement, and how subjective other people are, most likely. The pandemic started in early 2020, and, almost at the same moment, I started to invest in the stock market, after a few years of break. I have been learning at an accelerated pace. I have been adapting to the reality of high epidemic risk – something I had almost forgotten since a devastating scarlatina at the age of 9 – and I have been adapting to a subjectively new form of economic decisions (i.e. those in the stock market). It has been one hell of a ride.

Right now, another piece of experience comes into the game. Until recently, in my house, the attic was mine. The remaining two floors were my wife’s dominion, but the attic was mine. It was painted in joyful, eye-poking colours. There was a lot of yellow and orange. It was mine. Yet, my wife had an eye for that space. Wives do, quite frequently. A fearsome ally came to support her: an interior decorator. Change has just happened. Now, the attic is all dark brown and cream. To me, it looks like the inside of a coffin. Yes, I know what the inside of a coffin looks like: I saw it just before my father’s funeral. That attic has become an alien space for me. I still have a hard time wrapping my mind around how shaken I am by that change. I realize how attached I am to the space around me. If I am so strongly bound to colours and shapes in my vicinity, other people probably feel the same, and that triggers another intuition: we, humans, are either simple dwellers in the surrounding space, or we are architects thereof, and these are two radically different frames of mind.

I am territorial as f**k. I have just clarified it inside my head. Now, it is time to go back to science. In a first step, I am trying to connect those experiences of mine to my hypothesis of collective intelligence. Step by step, I am phrasing it out. We are intelligent about each other. We are intelligent about the physical space around us. We are intelligent about us being subjective, and thus we have invented that thing called language, which allows us to produce a baseline for intersubjective description of the surrounding world.

I am conscious of my subjectivity, and of my strong emotions (that attic, f**k!). Therefore, I want to see my own ideas from other people’s point of view. Some review of literature is what I need. I start with Peeters, M. M., van Diggelen, J., Van Den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human–AI society. AI & SOCIETY, 36(1), 217-238. https://doi.org/10.1007/s00146-020-01005-y . I start with it because I could lay my hands on an open-access version of the full paper, and, as I read it, many bells ring in my head. Among all those bells ringing, the main one refers to the experience I had with otherwise simplistic neural networks, namely the perceptrons possible to structure in an Excel spreadsheet. Back in 2018, I was observing the way a truly dumb neural network was working, one made of just 6 equations looped together, and I had that realization: ‘Blast! Those six equations together are actually intelligent’. This is the core of that paper by Peeters et al. The whole story of collective intelligence became a thing when Artificial Intelligence started spreading throughout society; the scientific literature has thus followed broadly the same path as I have individually. Conscious, inquisitive interaction with artificial intelligence seems to awaken an entirely new worldview, where we, humans, can see at work an intelligent fruit of our own intelligence.

I am trying to make one more step, from bewilderment to premises and hypotheses. In Peeters et al., three big intellectual streams are named: (1) the technology-centric perspective, (2) the human-centric one, and finally (3) the collective intelligence-centric perspective. The third one sounds familiar, and so I dig into it. The general idea here is that humans can put their individual intelligences into a kind of interaction which is smarter than those individual ones. This hypothesis is a little counterintuitive – if we consider electoral campaigns or Instagram – but it becomes much more plausible when we think about networks of inventors and scientists. Peeters et al. present an interesting extension to that, namely collectively intelligent agglomerations of people and technology. This is exactly what I do when I do empirical research and use a neural network as a simulator, with quantitative data in it. I am one human interacting with one simple piece of AI, and interesting things come out of it.

That paper by Peeters et al. cites a book: Sloman, S. A., & Fernbach, P. (2018). The knowledge illusion: Why we never think alone (Penguin). Before I pass to my first impressions about that book, another side note. In 1993, Aaron Sloman wrote an introduction to another book, this one being a collection of proceedings (conference papers in plain human lingo) from a conference, grouped under the common title: Prospects for Artificial Intelligence (Hogg & Humphreys 1993[1]). In that introduction, Aaron Sloman claims that using Artificial Intelligence as a simulator of General Intelligence requires a specific approach, which he calls ‘design-based’, where we investigate the capabilities and the constraints within which intelligence, understood as a general phenomenon, has to function. Based on those constraints, requirements can be defined, and, consequently, the way that intelligence is enabled to meet them, through its own architecture and mechanisms.

We jump 25 years, from 1993, and this is what Sloman and Fernbach wrote in the introduction to “The knowledge illusion…”: “This story illustrates a fundamental paradox of humankind. The human mind is both genius and pathetic, brilliant and idiotic. People are capable of the most remarkable feats, achievements that defy the gods. We went from discovering the atomic nucleus in 1911 to megaton nuclear weapons in just over forty years. We have mastered fire, created democratic institutions, stood on the moon, and developed genetically modified tomatoes. And yet we are equally capable of the most remarkable demonstrations of hubris and foolhardiness. Each of us is error-prone, sometimes irrational, and often ignorant. It is incredible that humans are capable of building thermonuclear bombs. It is equally incredible that humans do in fact build thermonuclear bombs (and blow them up even when they don’t fully understand how they work). It is incredible that we have developed governance systems and economies that provide the comforts of modern life even though most of us have only a vague sense of how those systems work. And yet human society works amazingly well, at least when we’re not irradiating native populations. How is it that people can simultaneously bowl us over with their ingenuity and disappoint us with their ignorance? How have we mastered so much despite how limited our understanding often is?” (Sloman, Steven; Fernbach, Philip. The Knowledge Illusion (p. 3). Penguin Publishing Group. Kindle Edition)

Those readings have given me a thread, and I am interweaving that thread with my own thoughts. Now, I return to another reading, namely to “The Black Swan” by Nassim Nicholas Taleb, where, on pages xxi – xxii of the introduction, the author writes: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives. Ever since we left the Pleistocene, some ten millennia ago, the effect of these Black Swans has been increasing. It started accelerating during the industrial revolution, as the world started getting more complicated, while ordinary events, the ones we study and discuss and try to predict from reading the newspapers, have become increasingly inconsequential”.

I combine the idea by Sloman and Fernbach, that we, humans, can be really smart collectively through interaction between very limited and subjective individual intelligences, with the concept of the Black Swan by Nassim Nicholas Taleb. I reinterpret the Black Swan. This is an event we did not expect to happen, and yet it happened, and it has blown a hole in our self-sufficient certainty that we understand what the f**k is going on. When we do things, we expect certain outcomes. When we don’t get the outcomes we expected, this is a functional error. This is a local instance of chaos. We take that local chaos as learning material, and we try again, and again, and again, in the expectation of bringing order into the general chaos of existence. Our collectively intelligent learning is largely made of such local failures, taken in and processed.

Nassim Nicholas Taleb claims that our culture tends to mask the occurrence of Black Swan-type outliers as just the manifestation of otherwise recurrent, predictable patterns. We collectively internalize Black Swans. This is what a neural network does. It takes an obvious failure – the local error of optimization – and utilizes it as valuable information in the next experimental round. The network internalizes a series of f**k-ups, because it never hits the output variable exactly home; there is always some discrepancy, at least a tiny one. The fact of being wrong about reality becomes normal. Every neural network I have worked with does the same thing: it starts with some substantial magnitude of error, and then it tends to reduce that error, at least temporarily, i.e. for at least a few more experimental rounds.

This is what a simple neural network – one of those I use in my calculations – does with quantitative variables. It processes data so as to create error, i.e. so as to purposefully create outliers located outside the expected. Those neural networks purposefully create Black Swans, those abnormalities which allow us to learn. Now, what is so collective about neural networks? Why do I intuitively associate my experiments with neural networks with collective intelligence rather than with the individual kind? Well, I work with socio-economic quantitative variables. The lowest level of aggregation I use is the probability of occurrence of a specific event, and even that is unusually granular for me. Most of my data is like Gross Domestic Product, average hours worked per person per year, average prices of electricity etc. This is essentially collective data, in the sense that no individual intelligence can possibly be observed as having a density of population or an inflation rate of its own. There needs to be a society in place for those metrics to exist at all.

When I work with that type of data, I assume that many people observe and gauge it, then report and aggregate their observations etc. Many people put a lot of work into making those quantitative variables both available and reliable. I guess it is important, then. When some kind of data is that important collectively, it is very likely to reflect some important aspect of collective reality. When I run that data through a neural network, the latter yields a simulation of collective action and its (always) provisional outcomes.

My neural network (I mean the one on my computer, not the one in my head) takes like 0.05 of local Gross Domestic Product, then 0.4 of average consumption of energy per capita, maybe 0.09 of inflation in consumer prices, plus some other stuff in random proportions, and sums up all those small portions of whatever is important as a collectively measured socio-economic outcome. Usually, that sum is designated as ‘h’, the aggregate input. Then, my network takes that h and puts it into a neural activation which, in most cases, is the hyperbolic tangent, AKA (e^(2h) – 1) / (e^(2h) + 1). When we learn by trial and error, a hypothetical number e^(2h) measures the force with which the neural network reacts to a complex stimulation from a set of variables x_i. The e^2 part of that hypothetical reaction is constant and equals e^2 ≈ 7.389056099, whilst h is a variable parameter, specific to the given phenomenal occurrence. The parameter h is roughly proportional to the number of variables in the source empirical set. The more complex the reality I process with the neural network, i.e. the more variables I split my reality into, the greater the value of h. In other words, the more complexity, the more the neuron, based on the expression e^(2h), is driven away from its constant base e^2. Complexity in variables induces greater swings in the hyperbolic tangent, i.e. greater magnitudes of error, and, consequently, longer strides in the process of learning.
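
For the record, here is a minimal sketch of that single neuron at work: a weighted aggregate input h, the hyperbolic tangent written out exactly as above, and the error of each experimental round fed back into the weights. The data and the ‘true’ outcome are random placeholders, not my actual socio-economic dataset.

```python
# Minimal sketch of the neuron described above: an aggregate input h (weighted sum
# of standardized socio-economic variables), the activation tanh(h) = (e^(2h) - 1) / (e^(2h) + 1),
# and an error term fed back into the weights at each experimental round.
# Data, weights and the "true" outcome are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                              # 5 standardized socio-economic variables
y = np.tanh(X @ np.array([0.05, 0.4, 0.09, 0.2, 0.1]))     # a made-up "true" collective outcome

weights = rng.normal(scale=0.1, size=5)
learning_rate = 0.05
for epoch in range(100):                                   # experimental rounds
    h = X @ weights                                        # aggregate input
    out = (np.exp(2 * h) - 1) / (np.exp(2 * h) + 1)        # hyperbolic tangent, written out explicitly
    error = y - out                                        # the local "Black Swan" of this round
    gradient = X.T @ (error * (1 - out**2)) / len(X)       # derivative of tanh is 1 - tanh^2
    weights += learning_rate * gradient                    # internalize the error
    if epoch % 25 == 0:
        print(f"round {epoch}: mean squared error = {np.mean(error**2):.4f}")
```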

Logically, the more complex social reality I represent with quantitative variables, the bigger Black Swans the neural network produces as it tries to optimize one single variable chosen as the desired output of the neurally activated input.   


[1] Hogg, D., & Humphreys, G. W. (1993). Prospects for Artificial Intelligence: Proceedings of AISB’93, 29 March-2 April 1993, Birmingham, UK (Vol. 17). IOS Press.

The type of riddle I like

Once again, I had quite a break in blogging. I spend a lot of time putting together research projects, in a network of many organisations which I am supposed to bring to work together. I give it a lot of time and personal energy. It drains me a bit, and I like that drain. I like the thrill of putting together a team, agreeing about goals and possible openings. Since 2005, when I stopped running my own business and settled for a quite academic career, I hadn’t experienced that special kind of personal drive. I sincerely believe that every teacher should apply his or her own teaching in their everyday life, just to see if that teaching still corresponds to reality.

This is one of the reasons why I have made it a regular activity of mine to invest in the stock market. I teach economics, and the stock market is very much like the pulse of economics, in all its grades and shades, ranging from hardcore macroeconomic cycles, passing through the microeconomics of specific industries I am currently focusing on with my investment portfolio, and all the way down the path of behavioural economics. I teach management, as well, and putting together new projects in research is the closest I can come, currently, to management science being applied in real life.

Still, besides trying to apply my teaching in real life, I do science. I do research, and I write about the things I think I have found out on that research path of mine. I do a lot of research as regards the economics of energy. Currently, I am still revising a paper of mine, titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’. Around the topic of energy economics, I have built a more general method of studying quantitative socio-economic data, with the technical hypothesis that said data manifests collective intelligence in human social structures. It means that whenever I deal with a collection of quantitative socio-economic variables, I study the dataset at hand by assuming that each multivariate record line in the database is a local instance of an otherwise coherent social structure, which experiments with many such specific instances of itself and selects those offering the best adaptation to the current external stressors. Yes, there is a distinct sound of evolutionary method in that approach.

Over the last three months, I have been slowly ruminating my theoretical foundations for the revision of that paper. Now, I am doing what I love doing: I am disrupting the gently predictable flow of theory with some incongruous facts. Yes, facts don’t know how to behave themselves, like really. Here is an interesting fact about energy: between 1999 and 2016, at the planetary scale, there had been more and more new cars produced per each new human being born. This is visualised in the composite picture below. Data about cars comes from https://www.worldometers.info/cars/ , whilst data about the headcount of population comes from the World Bank (https://data.worldbank.org/indicator/SP.POP.TOTL ).

Now, the meaning of all that. I mean, not ALL THAT (i.e. reality and life in general), just all that data about cars and population. Why do we consistently make more and more physical substance of cars per each new human born? Two explanations come to my mind. One politically correct and nicely environmentalist: we are collectively dumb as f**k and we keep overshooting the output of cars over and above the incremental change in population. The latter, when translated into a rate of growth, tends to settle down (https://data.worldbank.org/indicator/SP.POP.GROW ). Yeah, those capitalists who own car factories just want to make more and more money, and therefore they make more and more cars. Yeah, those corrupt politicians want to conserve jobs in the automotive industry, and they support it. Yeah, f**k them all! Yeah, cars destroy the planet!

I checked. The first door I knocked at was General Motors (https://investor.gm.com/sec-filings ). What I can see is that they actually make more and more operational money by making fewer and fewer cars. Their business used to be overshot in terms of volume, and now they are slowly making sense, and money, out of making fewer cars. Then I checked with Toyota (https://global.toyota/en/ir/library/sec/ ). These guys look as if they were struggling to maintain their capacity to make approximately the same operational surplus each year, and they seem to be experimenting with the number of cars they need to put out in order to stay in good financial shape. When I say ‘experimenting’, it means experimenting upwards or downwards.

As a matter of fact, the only player who seems to be unequivocally making more operational money out of making more cars is Tesla (https://ir.tesla.com/#tab-quarterly-disclosure). There comes another explanation – much less politically correct, if at all – for there being more cars made per each new human, and it says that we, humans, are collectively intelligent, and we have a good reason for making more and more cars per each new human coming to this realm of tears, and the reason is to store energy in a movable, possibly self-moving form. Yes, each car has a fuel tank or a set of batteries, in the case of them Teslas or other electric f**kers. Each car is a moving reservoir of chemical energy, immediately converted into kinetic energy, which, in turn, has economic utility. Making more cars with batteries pays off better than making more cars with combustible fuel in their tanks: a new generation of movable reservoirs of chemical energy is replacing an older generation thereof.

Let’s hypothesise that this is precisely the point of coupling each new human born with more and more newly made cars: the point is more chemical energy convertible into kinetic energy. Do we need to move around more, as time passes? Maybe, although I am a bit doubtful. Technically, with more and more humans being around in a constant space, there are more and more humans per square kilometre, and that incremental growth in the density of population happens mostly in cities. I described that phenomenon in a paper of mine, titled ‘The Puzzle of Urban Density And Energy Consumption’. That means that the space available for travelling, and needing to be covered per capita of each human being, is actually decreasing. Less space to travel in means less need for means of transportation.

Thus, what are we after, collectively? We might be preparing for having to move around more in the future, or for having to restructure the geography of our settlements. That’s possible, although the research I did for that paper about urban density indicates that geographical patterns of urbanization are quite durable. Anyway, those two cases sum up to some kind of zombie apocalypse. On the other hand, the fact of developing the amount of dispersed, temporarily stored energy (in cars) might be a manifestation of us learning how to build and maintain large, dispersed networks of energy reservoirs.

Isn’t it dumb to hypothesise that we go out of our way, as a civilisation, just to learn the best ways of developing what we are developing? Well, take the medieval cathedrals. Them medieval folks would keep building them for decades or even centuries. The Notre Dame cathedral in Paris, France, seems to be the record holder, with a construction period stretching from 1160 to 1245 (Bruzelius 1987[1]). Still, the same people who were so appallingly slow when building a cathedral could accomplish lightning-fast construction of quite complex military fortifications. When building cathedrals, the masters of stone masonry would do something apparently idiotic: they would build, then demolish, and then build again the same portion of the edifice, many times. WTF? Why slow down something we can do quickly? In order to experiment with the process and with the technologies involved, sir. Cathedrals were experimental labs of physics, mathematics and management, long before these scientific disciplines even emerged. Yes, there was the official rationale of getting closer to God, of accomplishing God’s will, and, honestly, it came in handy. There was an entire culture – the medieval Christianity – which was learning how to learn by experimentation. The concept of fulfilling God’s will through perseverant pursuit, whilst being stoic as regards exogenous risks, was an excellent cultural vehicle to that purpose.

We move a few hundred years forward in time, to the 17th century. The cutting edge of technology was to be found in textiles and garments (Braudel 1992[2]), and the peculiarity of the European culture consisted in quickly changing fashions, geographically idiosyncratic and strongly enforced through social peer pressure. The industry of garments and textiles was a giant experimental lab of business and management, developing the division of labour, the management of supply chains, the quick study of subtle shades in customers’ tastes and just as quick an adaptation thereto. This is how we, Europeans, prepared for the much later introduction of mechanized industry, which, in turn, gave birth to what we are today: a species controlling something like 30% of all energy on the surface of our planet.

Maybe we are experimenting with dispersed, highly mobile and coordinated networks of small energy reservoirs – the automotive fleet – just for the sake of learning how to develop such networks? Some other facts, which, once again, are impolitely disturbing, come to the fore. I had a look at the data published by the United Nations as regards the total installed capacity of electricity generation (https://unstats.un.org/unsd/energystats/ ). I calculated the average electrical capacity per capita, at the global scale. Turns out that in 2014 the average human capita on Earth had around 60% more power capacity to tap from, as compared to a similarly average human capita in 1999.

Interesting. It looks even more interesting when taken as the first moment of a process. When I take the annual incremental change in the installed electrical capacity on the planet and divide it by the absolute demographic increment, thus when I go ‘Delta capacity / delta population’, that coefficient of elasticity grows like hell. In 2014, it was almost three times higher than in 1999. We, humans, keep developing a denser network of cars, as compared to our population, and, at the same time, we keep increasing the relative power capacity which every human can tap into.

Someone could say it is because we simply consume more and more energy per capita. Cool, I check with the World Bank: https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE . Yes, we increase our average annual consumption of energy per one human being, and yet this is a very gentle increment: barely 18% from 1999 through 2014. Nothing to do with the quick accumulation of generative capacity. We keep densifying the global fleet of cars, and growing a reserve of power capacity. What are we doing it for?

This is a deep question, and I calculated two additional elasticities with the data at hand. Firstly, I denominated the incremental change in the number of new cars per each new human born over the average consumption of energy per capita. In the visual below, this is the coefficient ‘Elasticity of cars per capita to energy per capita’. Between 1999 and 2014, this elasticity passed from 0.49 to 0.79. We keep accumulating something like an overhead of incremental car fleet, as compared to the amount of energy we consume.

Secondly, I formalized the comparison between the individual consumption of energy and the average power capacity per capita. This is the ‘Elasticity of capacity per capita to energy per capita’ column in the visual below. Once again, it is a growing trend.
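
For anyone who wants to reproduce those coefficients on their own data, here is a bare-bones sketch of how I understand them. The functions only define the ratios discussed above; the yearly series themselves (installed capacity, population, cars produced, births, energy use per capita) would have to come from the UN and World Bank sources linked in the text, and none of them are hard-coded here.

```python
# Sketch of the coefficients discussed above, written as functions over yearly series.
# No real data is embedded here; the example call at the end uses made-up numbers
# only to show the shapes involved.
import numpy as np

def capacity_to_population_elasticity(capacity, population):
    """Year-over-year 'Delta capacity / delta population' coefficient."""
    return np.diff(capacity) / np.diff(population)

def cars_per_new_human(cars_produced, births):
    """New cars made per each new human born, year by year."""
    return np.asarray(cars_produced) / np.asarray(births)

def ratio_to_energy_per_capita(series_per_capita, energy_per_capita):
    """The ratio-style 'elasticity' used above: a per-capita series denominated
    over the average consumption of energy per capita."""
    return np.asarray(series_per_capita) / np.asarray(energy_per_capita)

# Example call with made-up numbers, only to show the shapes involved:
print(capacity_to_population_elasticity([3.0e9, 5.0e9], [6.0e9, 7.2e9]))
```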

At the planetary scale, we keep beefing up our collective reserves of energy, and we seriously mean business about dispersing those reserves into networks of small reservoirs, possibly on wheels.

An increased propensity to store is a historically known collective response to an anticipated shortage. Do we, the human race, collectively and not quite consciously anticipate a shortage of energy? How could that happen? Our biology should suggest just the opposite. With climate change being around, we technically have more energy in the ambient environment, not less. What exact kind of shortage in energy are we collectively anticipating? This is the type of riddle I like.


[1] Bruzelius, C. (1987). The Construction of Notre-Dame in Paris. The Art Bulletin, 69(4), 540-569. https://doi.org/10.1080/00043079.1987.10788458

[2] Braudel, F. (1992). Civilization and capitalism, 15th-18th century, vol. II: The wheels of commerce (Vol. 2). Univ of California Press.

Unintentional, and yet powerful a reductor

As usual, I work on many things at the same time. I mean, not exactly at the same time, just in a tight alternate sequence. I am doing my own science, and I am doing collective science with other people. Right now, I feel like restating and reframing the main lines of my own science, with the intention both to reframe my own research, and to be a better scientific partner to other researchers.

Such as I see it now, my own science is mostly methodological, and consists in studying human social structures as collectively intelligent ones. I assume that collectively we have a different type of intelligence from the individual one, and most of what we experience as social life is constant learning through experimentation with alternative versions of our collective way of being together. I use artificial neural networks as simulators of collective intelligence, and my essential process of simulation consists in creating multiple artificial realities and comparing them.

I deliberately use very simple, if not simplistic, neural networks, namely those oriented on optimizing just one attribute of theirs, among the many available. I take a dataset, representative for the social structure I study, I take just one variable in the dataset as the optimized output, and I consider the remaining variables as instrumental input. Such a neural network simulates an artificial reality where the social structure studied pursues just one, narrow orientation. I create as many such narrow-minded, artificial societies as I have variables in my dataset. I assess the Euclidean distance between the original empirical dataset and each of those artificial societies.
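
Here is a toy sketch of that procedure, assuming a standardized dataset and a single tanh neuron per artificial society; it is a minimal illustration of the logic described above, not the actual code behind my papers.

```python
# Sketch of the procedure described above: take each variable in turn as the
# optimized output, let a single tanh neuron learn it from the remaining variables,
# and measure the Euclidean distance between the resulting "artificial society"
# and the original dataset. Toy random data; not the author's actual code.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(300, 6))   # 300 observations, 6 standardized socio-economic variables

def artificial_society(dataset, output_index, rounds=200, lr=0.05):
    """Return a copy of the dataset where one variable is replaced by the network's
    best attempt at reproducing it from the remaining variables."""
    X = np.delete(dataset, output_index, axis=1)
    y = dataset[:, output_index]
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(rounds):
        out = np.tanh(X @ w)
        err = y - out
        w += lr * X.T @ (err * (1 - out**2)) / len(X)
    clone = dataset.copy()
    clone[:, output_index] = np.tanh(X @ w)   # the single-minded society's version of that variable
    return clone

distances = [np.linalg.norm(artificial_society(data, j) - data) for j in range(data.shape[1])]
closest = int(np.argmin(distances))
print("Variable whose single-minded pursuit stays closest to the real dataset:", closest)
```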

It is just now that I realize what kind of implicit assumptions I make when doing so. I assume the actual social reality, manifested in the empirical dataset I study, is a concurrence of different, single-variable-oriented collective pursuits, which remain in some sort of dynamic interaction with each other. The path of social change we take, at the end of the day, manifests the relative prevalence of some among those narrow-minded pursuits, with others being pushed to the second rank of importance.

As I am pondering those generalities, I reconsider the actual scientific writings that I should hatch. Publish or perish, as they say in my profession. With that general method of collective intelligence being assumed in human societies, I focus more specifically on two empirical topics: the market of energy and the transition away from fossil fuels make one stream of my research, whilst the civilisational role of cities, especially in the context of the COVID-19 pandemic, is another stream of me trying to sound smart in my writing.

For now, I focus on issues connected to energy, and I return to revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, as a resubmission to Applied Energy. According to the guidelines of Applied Energy, I am supposed to structure my paper into the following parts: Introduction, Material and Methods, Theory, Calculations, Results, Discussion, and, as sort of a summary pitch, I need to prepare a cover letter where I shortly introduce the reasons why the editor of Applied Energy should bother about my paper at all. On top of all these formally expressed requirements, there is something I noticed about the general style of articles published in Applied Energy: they all demonstrate and discuss strong, sharp-cutting hypotheses, with a pronounced theoretical edge to them. If I want my paper to be accepted by that journal, I need to give it that special style.

That special style requires two things which, honestly, I am not really accustomed to doing. First of all, it requires, precisely, phrasing out very sharp claims. What I like most is to show people the material and methods I work with and sort of provoke a discussion around them. When I have to formulate very sharp claims around that basic empirical stuff, I feel a bit awkward. Still, I understand that many people are willing to discuss only when they are truly pissed off by the topic at hand, and sharply cut hypotheses serve to fuel that flame.

Second of all, making sharp claims of my own requires passing in thorough review the claims which other researchers phrase out. It requires doing my homework thoroughly in the review of literature. Once again, not really a fan of it, on my part, but well, life is brutal, as my parents used to teach me and as I have learnt in my own life. In other words, real life starts when I get out of my comfort zone.

The first body of literature I want to refer to in my revised article is the so-called MuSIASEM framework, AKA ‘Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism’. Human societies are assumed to be giant organisms, and transformation of energy is a metabolic function of theirs (e.g. Andreoni 2020[1], Al-Tamimi & Al-Ghamdi 2020[2] or Velasco-Fernández et al. 2020[3]). The MuSIASEM framework is centred around an evolutionary assumption, which I used to find perfectly sound, and which I have come to consider as highly arguable, namely that the best possible state for both a living organism and a human society is that of the highest possible energy efficiency. As regards social structures, energy efficiency is the coefficient of real output per unit of energy consumption, or, in other words, the amount of real output we can produce with 1 kilogram of oil equivalent in energy. My theoretical departure from that assumption started with my own empirical research, published in my article ‘Energy efficiency as manifestation of collective intelligence in human societies’ (Energy, Volume 191, 15 January 2020, 116500, https://doi.org/10.1016/j.energy.2019.116500). As I applied my method of computation, with a neural network as simulator of social change, I found out that human societies do not really seem to max out on energy efficiency. Maybe they should, but they don’t. It was the first realization, on my part, that we, humans, orient our collective intelligence on optimizing the social structure as such, and whatever comes out of that in terms of energy efficiency is an unintended by-product rather than a purpose. That general impression has been subsequently reinforced by other empirical findings of mine, precisely those which I introduce in the manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, which I am currently revising for resubmission to Applied Energy.

In practical terms, it means that when a public policy states that ‘we should maximize our energy efficiency’, it is a declarative goal which human societies do not actually strive for. It is a little as if a public policy imposed the absolute necessity of being nice to each other and punished any deviation from that imperative. People are nice to each other to the extent of current needs in social coordination, period. The absolute imperative of being nice is frequently the correlate of intense rivalry, e.g. as was the case with traditional aristocracy. The French even have an expression, which I find profoundly true, namely ‘trop gentil pour être honnête’, which means ‘too nice to be honest’. My personal experience makes me kick into an alert state when somebody is that sort of intensely nice to me.

Passing from metaphors to the actual subject matter of energy management, it is a known fact that highly innovative technologies are usually truly inefficient. Optimization of efficiency, be it energy efficiency or any other aspect thereof, is actually a late stage in the lifecycle of a technology. Deep technological change is usually marked by a temporary slump in efficiency. Imposing energy efficiency as the chief goal of technology-related policies means systematically privileging and promoting technologies with the highest energy efficiency, thus, by metaphorical comparison to humans, technologies in their forties, past and over the excesses of youth.

The MuSIASEM framework has two other traits which I find arguable, namely the concept of evolutionary purpose, and the imperative of equality between countries in terms of energy efficiency. Researchers who lean into the MuSIASEM methodology claim that it is an evolutionary purpose of every living organism to maximize energy efficiency, and that therefore human societies have the same evolutionary purpose. It further implies that species displaying marked evolutionary success, i.e. significant growth in headcount (sometimes in mandibulae-count, should the head be not really what we mean it to be), achieve that success by being particularly energy efficient. I even went into some reading in life sciences, and that claim does not seem to be grounded in solid science. It seems that energy efficiency, and any denomination of efficiency for that matter, are very crude proportions which we apply to a complex balance of flows that we still have a lot to learn about. Niebel et al. (2019[4]) phrase it out as follows: ‘The principles governing cellular metabolic operation are poorly understood. Because diverse organisms show similar metabolic flux patterns, we hypothesized that a fundamental thermodynamic constraint might shape cellular metabolism. Here, we develop a constraint-based model for Saccharomyces cerevisiae with a comprehensive description of biochemical thermodynamics including a Gibbs energy balance. Non-linear regression analyses of quantitative metabolome and physiology data reveal the existence of an upper rate limit for cellular Gibbs energy dissipation. By applying this limit in flux balance analyses with growth maximization as the objective function, our model correctly predicts the physiology and intracellular metabolic fluxes for different glucose uptake rates as well as the maximal growth rate. We find that cells arrange their intracellular metabolic fluxes in such a way that, with increasing glucose uptake rates, they can accomplish optimal growth rates but stay below the critical rate limit on Gibbs energy dissipation. Once all possibilities for intracellular flux redistribution are exhausted, cells reach their maximal growth rate. This principle also holds for Escherichia coli and different carbon sources. Our work proposes that metabolic reaction stoichiometry, a limit on the cellular Gibbs energy dissipation rate, and the objective of growth maximization shape metabolism across organisms and conditions’.

I feel like restating the very concept of evolutionary purpose as such. Evolution is a mechanism of change through selection. Selection in itself is largely a random process, based on the principle that whatever works for now can keep working until something else works even better. There is hardly any purpose in that. My take on the thing is that living species strive to maximize their intake of energy from the environment rather than their energy efficiency. I even hatched an article about it (Wasniewski 2017[5]).

Now, I pass to the second postulate of the MuSIASEM methodology, namely the alleged necessity of closing gaps between countries as for their energy efficiency. Professor Andreoni expresses this view quite vigorously in a recent article (Andreoni 2020[6]). I think this postulate holds neither inside the MuSIASEM framework nor outside of it. As for the purely external perspective, I think I have just laid out the main reasons for discarding the assumption that our civilisation should prioritize energy efficiency above other orientations and values. From the internal perspective of MuSIASEM, i.e. if we assume that energy efficiency is a true priority, we need to give that energy efficiency a boost, right? Now, the last time I checked, the only way we, humans, can get better at whatever we want to get better at is to create positive outliers, i.e. situations when we, like, really nail it better than in other situations. With a bit of luck, those positive outliers become a workable pattern of doing things. In management science, it is known as the principle of best practices. The only way of having positive outliers is to have a hierarchy of outcomes according to the given criterion. When everybody is at the same level, nobody is an outlier, and there is no way we can give ourselves a boost forward.

Good. Those six paragraphs above pretty much summarize my theoretical stance as regards the MuSIASEM framework in research on energy economics. Please note that I respect that stream of research and the scientists involved in it. I think that representing energy management in human social structures as a metabolism is a great idea: it is one of those metaphors which can be fruitfully turned into a quantitative model. Still, I have my reservations.

I go further. A little more review of literature. Here comes a paper by Halbrügge et al. (2021[7]), titled ‘How did the German and other European electricity systems react to the COVID-19 pandemic?’. It makes an interesting point as regards energy economics: the pandemic has induced a new type of risk, namely short-term fluctuations in local demand for electricity. That, in turn, leads to deeper troughs and higher peaks in both the quantity and the price of energy in the market. More risk requires more liquidity: this is a known principle in business. As regards energy, liquidity can be achieved both through inventories, i.e. by developing storage capacity for energy, and through financial instruments. Halbrügge et al. come to the conclusion that such circumstances in the German market have led to the reinforcement of RES (Renewable Energy Sources). RES installations are typically more dispersed, more local in their reach, and more flexible than large power plants. It is much easier to modulate the output of a windfarm or a solar farm, as compared to a large fossil-fuel-based installation.

Keeping an eye on the impact of the pandemic upon the market of energy, I pass to the article titled ‘Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results’, by Salisu, Ebuh & Usman (2020[8]). First of all, a few words of general explanation as to what the hell the oil-stock nexus is. This is a phenomenon, which I first saw research about around 2017, consisting in a diversification of financial investment portfolios from pure financial stock into various mixes of stock and oil. Somewhere around 2015, people who used to hold their liquid investments just in financial stock (e.g. as I do currently) started to build investment positions in various types of contracts based on the floating inventory of oil: futures, options and whatnot. When I say ‘floating’, it is quite literal: that inventory of oil really, actually floats, stored on board super-tanker ships, sailing gently through international waters, with proper gravitas (i.e. not too fast).

Long story short, crude oil has increasingly been becoming a financial asset, something like a buffer to hedge against risks encountered in other assets. Whilst the paper by Salisu, Ebuh & Usman is quite technical, without much theoretical generalisation, an interesting observation comes out of it, namely that short-term shocks in financial markets during the pandemic had adversely impacted the price of oil more than the prices of stock. That, in turn, could indicate that crude oil was good as a hedging asset only for a certain range of risks, and that, in the presence of price shocks induced by the pandemic, the role of oil could diminish.

Those two papers point at a factor which we had almost forgotten as regards the market of energy, namely the role of short-term shocks. Until recently, i.e. until COVID-19 hit us hard, the textbook business model in the sector of energy had been that of very predictable demand, nearly constant in the long run and varying in a sinusoidal manner in the short term. The very disputable concept of LCOE, AKA Levelized Cost of Energy, where investment outlays are treated as if they were a current cost, is based on those assumptions. The pandemic has shown a different aspect of energy systems, namely the need for buffering capacity. That, in turn, leads to the issue of adaptability, which, gently but surely, leads further into the realm of adaptive changes, and that, ladies and gentlemen, is my beloved landscape of evolutionary, collectively intelligent change.
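
To show what I mean by treating investment outlays as if they were a current cost, here is the textbook LCOE formula in a minimal Python sketch, with made-up numbers; it is an illustration of the general concept, not a reproduction of any specific study.

```python
def lcoe(investment: float, annual_opex: float, annual_energy_kwh: float,
         lifetime_years: int, discount_rate: float) -> float:
    """Levelized Cost of Energy = discounted lifetime costs / discounted lifetime energy output."""
    costs = investment + sum(annual_opex / (1 + discount_rate) ** t
                             for t in range(1, lifetime_years + 1))
    energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# Hypothetical small installation, purely for illustration:
print(lcoe(investment=1_000_000, annual_opex=20_000,
           annual_energy_kwh=2_000_000, lifetime_years=20, discount_rate=0.05))
# Note the implicit assumption baked into the formula: constant annual output,
# i.e. perfectly predictable demand, with no room for short-term shocks.
```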

Cool. I move forward, and, by the same occasion, I move back. Back to the concept of energy efficiency. Halvorsen & Larsen study the so-called rebound effect as regards energy efficiency (Halvorsen & Larsen 2021[9]). Their paper is interesting for three reasons, the general topic of energy efficiency being the first one. The second one is the methodological focus on phenomena which we cannot observe directly, and which we therefore observe through mediating variables, which is theoretically close to my own method of research. Finally, the phenomenon of the rebound effect, namely the fact that, in the presence of temporarily increased energy efficiency, the consumers of energy tend to use more of those locally more energy-efficient goods, is essentially a short-term disturbance being transformed into long-term habits. This is adaptive change.
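
For the record, here is the textbook definition of the direct rebound effect, in a minimal Python sketch with hypothetical numbers; this is not the Halvorsen & Larsen model, just the basic arithmetic their discussion builds upon.

```python
def direct_rebound(expected_savings_kwh: float, actual_savings_kwh: float) -> float:
    """Share of the expected energy savings 'eaten up' by increased use of the more efficient good."""
    return (expected_savings_kwh - actual_savings_kwh) / expected_savings_kwh

# Hypothetical numbers: an efficiency gain should save 1000 kWh, but the household
# uses the now-cheaper energy service more, and only 700 kWh are actually saved.
print(direct_rebound(1000, 700))  # 0.3, i.e. a 30% rebound
```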

The model construed by Halvorsen & Larsen is a theoretical delight, just something my internal happy bulldog can bite into. They introduce the general assumption that consumption of energy in households is a build-up of different technologies, which can substitute for each other under some conditions, and be complementary under others. Households maximize something called ‘energy services’, i.e. everything they can purposefully derive from energy carriers. Halvorsen & Larsen build and test a model where they derive demand for energy services from a whole range of quite practical variables, which all sums up to the following: energy efficiency is indirectly derived from the way that social structures work, and it is highly doubtful whether we can purposefully optimize energy efficiency as such.

Now, here comes the question: what are the practical implications of all those different theoretical stances, I mean mine and those of other scientists? What does it change, and does it change anything at all, if policy makers follow the theoretical line of the MuSIASEM framework, or, alternatively, my approach? I am guessing at differences at the level of both the goals and the real outcomes of energy-oriented policies, and I am trying to wrap my mind around that guessing. As I see it, the MuSIASEM approach advocates putting energy efficiency of the whole global economy at the top of any political agenda, as a strategic goal. On the path towards achieving that strategic goal, there seems to be an intermediate one, namely to narrow down significantly two types of discrepancies:

>> firstly, it is about discrepancies between countries in terms of energy efficiency, with a special focus on helping the poorest developing countries in ramping up their efficiency in using energy

>> secondly, there should be a priority to privilege technologies with the highest possible energy efficiency, whilst kicking out those which perform the least efficiently in that respect.    

If I saw a real policy based on those assumptions, I would have a few critical points to make. Firstly, I firmly believe that large human societies just don’t have the institutions to enforce energy efficiency as a chief collective purpose. On the other hand, we do have institutions oriented on other goals, which are able to ramp up energy efficiency as instrumental change. One such institution, highly informal and yet highly efficient, is there, right in front of our eyes: markets and value chains. Each product and each service contains an input of energy, which manifests as a cost. In the presence of reasonably competitive markets, that cost is under pressure from market prices. Yes, we, humans, are greedy, and we like accumulating profits, and therefore we squeeze our costs. Whenever energy comes into play as a significant cost, we figure out ways of diminishing its consumption per unit of real output. Competitive markets, both domestic and international, thus including free trade, act as an unintentional, and yet powerful, reducer of energy consumption, and, under a different angle, they remind us to find cheap sources of energy.


[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[2] Al-Tamimi & Al-Ghamdi (2020). Multiscale integrated analysis of societal and ecosystem metabolism of Qatar. Energy Reports, 6, 521-527. https://doi.org/10.1016/j.egyr.2019.09.019

[3] Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020), A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562.  https://doi.org/10.1016/j.esr.2020.100562

[4] Niebel, B., Leupold, S. & Heinemann, M. An upper limit on Gibbs energy dissipation governs cellular metabolism. Nat Metab 1, 125–132 (2019). https://doi.org/10.1038/s42255-018-0006-7

[5] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation (August 30, 2017). http://dx.doi.org/10.1453/jest.v4i3.1410

[6] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[7] Halbrügge, S., Schott, P., Weibelzahl, M., Buhl, H. U., Fridgen, G., & Schöpf, M. (2021). How did the German and other European electricity systems react to the COVID-19 pandemic?. Applied Energy, 285, 116370. https://doi.org/10.1016/j.apenergy.2020.116370

[8] Salisu, A. A., Ebuh, G. U., & Usman, N. (2020). Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results. International Review of Economics & Finance, 69, 280-294. https://doi.org/10.1016/j.iref.2020.06.023

[9] Halvorsen, B., & Larsen, B. M. (2021). Identifying drivers for the direct rebound when energy efficiency is unknown. The importance of substitution and scale effects. Energy, 222, 119879. https://doi.org/10.1016/j.energy.2021.119879

DIY algorithms of our own

I return to that interesting interface of science and business, which I touched upon in my last-but-one update, titled ‘Investment, national security, and psychiatry’, which means that I return to discussing two research projects I am getting involved in: one in the domain of national security, another one in psychiatry, both connected by the idea of using artificial neural networks as analytical tools. What I intend to do now is to pass in review some literature, just to get the hang of the current state of science in those fields.

On top of that, I have been asked by my colleagues to take over, at short notice, the leadership of a big, multi-thread research project in management science. The multitude of threads has emerged as a circumstantial by-product, partly of the disruption caused by the pandemic, and partly of excessive partitioning in the funding of research. As regards the funding of research, Polish universities have, sort of, two financial streams. One consists of big projects, usually team-based, financed by specialized agencies, such as the National Science Centre (https://www.ncn.gov.pl/?language=en ) or the National Centre for Research and Development (https://www.gov.pl/web/ncbr-en ). Another one is based on relatively small grants, applied for by and granted to individual scientists by their respective universities, which, in turn, receive bulk subventions from the Ministry of Education and Science. Personally, I think that last category, such as it is being allocated and used now, is a bit of a relic. It is some sort of pocket money for the most urgent and current expenses, relatively small in scale and importance, such as the costs of publishing books and articles, the costs of attending conferences etc. This is a financial paradox: we save and allocate money long in advance, in order to have money for essentially incidental expenses – which come at the very end of the scientific pipeline – and we have to make long-term plans for it. It is a case of fundamental mismatch between the intrinsic properties of a cash flow, on the one hand, and the instruments used for managing that cash flow, on the other hand.

Good. This is an introduction to detailed thinking. Once I have those semantic niceties checked out, I cut into the flesh of thinking, and the first piece I intend to cut out is the state of science as regards Territorial Defence Forces (TDF) and their role amidst the COVID-19 pandemic. I found an interesting article by Tiutiunyk et al. (2018[1]). It is interesting because it gives a detailed methodology for assessing operational readiness in any military unit, territorial defence or other. That corresponds nicely to Hypothesis #2 which I outlined for that project in national security, namely: ‘the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources’. The article by Tiutiunyk et al. (2018) allows entering into details as regards that claim.

Those details start unfolding from the assumption that operational readiness is there when the entity studied possesses the required quantity of efficient technical and human resources. The underlying mathematical concept is quite simple. In the given situation, adequate response requires using m units of resources at k% of capacity during time te. The social entity studied can muster n units of the same resources at l% of capacity during the same time te. The most basic expression of operational readiness is, therefore, a coefficient OR = (n*l)/(m*k). I am trying to find out which specific resources are the key to that readiness. Tiutiunyk et al. (2018) offer a few interesting insights in that respect. They start by noticing the otherwise known fact that resources used in crisis situations are not exactly the same as those we use in the everyday course of life and business, and therefore we tend to hold them for longer than their effective lifecycle. We don’t amortize them properly, because we don’t really control for their physical and moral depreciation. One of the core concepts in territorial defence is to counter that negative phenomenon, and to maintain, through comprehensive training and internal control, the required level of capacity.
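
Translating that coefficient into something executable, here is a minimal Python sketch of OR = (n*l)/(m*k); the numbers in the example are hypothetical and serve only to show how the ratio behaves.

```python
def operational_readiness(n_available: float, l_capacity: float,
                          m_required: float, k_capacity: float) -> float:
    """OR = (n * l) / (m * k): resources and capacity actually available,
    over resources and capacity the situation requires (time te is the same on both sides)."""
    return (n_available * l_capacity) / (m_required * k_capacity)

# Hypothetical example: 80 units at 90% capacity available, against 100 units at 100% required.
print(operational_readiness(80, 0.9, 100, 1.0))  # 0.72, i.e. readiness below the required level
```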

As I continue going through literature, I come across an interesting study by I. Bet-El (2020), titled ‘COVID-19 and the future of security and defence’, published by the European Leadership Network (https://www.europeanleadershipnetwork.org/wp-content/uploads/2020/05/Covid-security-defence-1.pdf ). Bet-El introduces an important distinction between threats and risks, and, alongside it, the distinction between security and defence: ‘A threat is a patent, clear danger, while risk is the probability of a latent danger becoming patent; evaluating that probability requires judgement. Within this framework, defence is to be seen as the defeat or deterrence of a patent threat, primarily by military, while security involves taking measures to prevent latent threats from becoming patent and if the measures fail, to do so in such a way that there is time and space to mount an effective defence’. This is deep. I do a lot of research in risk management, especially as I invest in the stock market. When we face a risk factor, our basic behavioural response is hedging or insurance. We hedge by diversifying our exposures to risk, and we insure by sharing the risk with other people. Healthcare systems are a good example of insurance. We have a flow of capital that fuels a manned infrastructure (hospitals, ambulances etc.), and that infrastructure allows each single sick human to share his or her risks with other people. Social distancing is the epidemic equivalent of hedging. When we cut completely, or significantly throttle, social interactions between households, each household becomes sort of separated from the epidemic risk in other households. When one node in a network is shielded from some of the risk occurring in other nodes, that is hedging.

The military is made for responding to threats rather than risks. Military action is a contingency plan, implemented when insurance and hedging have gone to hell. The pandemic has shown that we need more of such buffers, i.e. more social entities able to mobilise quickly into deterring directly an actual threat. Territorial Defence Forces seem to fit the bill. Another piece of literature, from my own, Polish turf, by Gąsiorek & Marek (2020[2]), states straightforwardly that Territorial Defence Forces have proven to be a key actor during the COVID-19 pandemic precisely because they maintain a high degree of actual readiness in their crisis-oriented resources, as compared to other entities in the Polish public sector.

Good. I have a thread, from literature, for the project devoted to national security. The issue of operational readiness seems to be somewhere at the centre, and it translates into the apparently fluid frontier between security and national defence. Speed of mobilisation of the available resources, as well as the actual reliability of those resources, once mobilized, look like the key to understanding the surprisingly significant role of Territorial Defence Forces during the COVID-19 pandemic. It looks like my initial Hypothesis #2, claiming that the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources, is some sort of theoretical core to that whole body of research.

In our team, we plan, and have a provisional green light, to run interviews with the soldiers of Territorial Defence Forces. That basic notion of actually mobilizable resources can help narrow down the methodology to apply in those interviews, by asking specific questions pertinent to that issue. Which specific resources proved to be the most valuable in the actual intervention of TDF in the pandemic? Which resources – if any – proved to be 100% mobilizable on the spot? Which of those resources proved to be much harder to mobilise than initially assumed? Can we rate and rank all the human and technical resources of TDF as to their capacity to be mobilised?

Good. I gently close the door of that room in my head, filled with Territorial Defence Forces and the pandemic. I make sure I can open it whenever I want, and I open the door to that other room, where psychiatry dwells. The psychiatrists I am working with and I can study a sample of medical records regarding patients with psychosis. Verbal elocutions of those patients are an important part of that material, and I make two hypotheses along that tangent:

>> Hypothesis #1: the probability of occurrence in specific grammatical structures A, B, C, in the general grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form.

>> Hypothesis #2: the action of written self-reporting, e.g. via email, from the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as transition from mental state A to mental state B.

I start listening to what people smarter than me have to say on the matter. I start with Worthington et al. (2019[3]), and I learn there is a clinical category: clinical high risk for psychosis (CHR-P), thus a set of subtler (than psychotic) ‘changes in belief, perception, and thought that appear to represent attenuated forms of delusions, hallucinations, and formal thought disorder’. I like going backwards upstream, and I immediately ask myself whether that line of logic can be reversed. If there is clinical high risk for psychosis, the occurrence of those same symptoms in reverse order, from severe to light, could be a path of healing, couldn’t it?

Anyway, according to Worthington et al. (2019), some 25% of people with diagnosed CHR-P transition into full-scale psychosis. Once again, from the perspective of risk management, 25% of actual occurrence in a risk category is a lot. It means that CHR-P is pretty solid as far as risk assessment goes. I further learn that CHR-P, when represented as a collection of variables (a vector, for friends with a mathematical edge), entails an internal distinction into predictors and converters. Predictors are the earliest possible observables, something like a subtle smell of possible s**t, swirling here and there in the ambient air. Converters are pieces of information that bring progressive confirmation to predictors.

That paper by Worthington et al. (2019) is a review of literature in itself, and allows me to compare different approaches to CHR-P. The most solid ones, in terms of accurately predicting the onset of full-blown psychosis, always incorporate two components: assessment of the patient’s social role, and analysis of verbalized thought. Good. Looks promising. I think the initial hypotheses should be expanded into claims about socialization.

I continue with another paper, by Corcoran and Cecchi (2020[4]). Generally, patients with psychotic disorders display lower semantic coherence than average. The flow of meaning in their speech is impeded: they can express less meaning in the same volume of words, as compared to a mentally healthy person. Reduced capacity to deliver meaning manifests as apparent tangentiality in verbal expression. Psychotic patients seem to wander in their elocutions. Reduced complexity of speech, i.e. a relatively low capacity to swing between different levels of abstraction, with a tendency to exaggerate concreteness, is another observable which informs about psychosis. Two big families of diagnostic methods follow that twofold path. Latent Semantic Analysis (LSA) seems to be the name of the game as regards the study of semantic coherence. Its fundamental assumption is that words convey meaning by connecting to other words, which further unfolds into assuming that semantic similarity, or dissimilarity, can be measured through a more or less complex coefficient of joint occurrence, as opposed to disjoint occurrence, inside big corpuses of language.

Corcoran and Cecchi (2020) name two main types of digital tools for Latent Semantic Analysis. One is Word2Vec (https://en.wikipedia.org/wiki/Word2vec), and I found a more technical and programmatic approach to it at: https://towardsdatascience.com/a-word2vec-implementation-using-numpy-and-python-d256cf0e5f28 . Another one is GloVe, to which I found three interesting references, at https://nlp.stanford.edu/projects/glove/ , https://github.com/maciejkula/glove-python , and at https://pypi.org/project/glove-py/ .
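
Just to see what such a DIY approach could look like, here is a minimal sketch in Python, assuming gensim and numpy are available; the tiny corpus and the coherence proxy (cosine similarity between consecutive sentence vectors) are my own illustrative choices, not the clinical pipelines described in the papers cited above.

```python
import numpy as np
from gensim.models import Word2Vec

# A toy corpus; in practice one would train on a large reference corpus, not three sentences.
sentences = [
    "i went to the shop to buy bread".split(),
    "the shop was closed so i came back home".split(),
    "the satellites are whispering through the walls again".split(),
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=200)

def sentence_vector(tokens):
    """Crude sentence embedding: the mean of the word vectors."""
    return np.mean([model.wv[t] for t in tokens], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantic coherence proxy: similarity between each sentence and the next one.
vectors = [sentence_vector(s) for s in sentences]
coherence = [cosine(vectors[i], vectors[i + 1]) for i in range(len(vectors) - 1)]
print(coherence)  # lower values would suggest a more tangential flow of meaning
```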

As regards semantic complexity, two types of analytical tools seem to run the show. One is the part-of-speech (POS) algorithm, where we tag words according to their grammatical function in the sentence: noun, verb, determiner etc. There are already existing digital platforms for implementing that approach, such as the Natural Language Toolkit (http://www.nltk.org/ ). Another angle is that of speech graphs, where words are nodes in the network of discourse, and their connections (e.g. joint occurrence) to other words are edges in that network. Now, the intriguing thing about that last thread is that it seems to have been burgeoning in the late 1990s, and then it sort of faded away. Anyway, I found two references for an algorithmic approach to speech graphs, at https://github.com/guillermodoghel/speechgraph , and at https://www.researchgate.net/publication/224741196_A_general_algorithm_for_word_graph_matrix_decomposition .
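
The same DIY spirit applies to semantic complexity. Below is a minimal sketch, assuming NLTK and networkx are installed, which tags parts of speech and builds a rudimentary speech graph; the text, the choice of metrics, and the edge rule (consecutive words) are purely illustrative, not the published speech-graph methodology.

```python
from collections import Counter
import nltk
import networkx as nx

# NLTK data files; resource names vary slightly across NLTK versions, hence the redundancy.
for resource in ("punkt", "punkt_tab", "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

text = "I went to the shop. The shop was closed. The walls were listening to the shop."

tokens = nltk.word_tokenize(text.lower())
tagged = nltk.pos_tag(tokens)                       # e.g. [('i', 'PRP'), ('went', 'VBD'), ...]

# POS profile: relative frequency of grammatical categories in the elocution.
pos_profile = Counter(tag for _, tag in tagged if tag[0].isalpha())
print(pos_profile.most_common(5))

# Rudimentary speech graph: words are nodes, consecutive co-occurrence makes a directed edge.
words = [w for w, _ in tagged if w.isalpha()]
G = nx.DiGraph()
for a, b in zip(words, words[1:]):
    G.add_edge(a, b)

print("nodes:", G.number_of_nodes())
print("edges:", G.number_of_edges())
print("density:", nx.density(G))                    # generic network measures as illustrative proxies
```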

That quick review of literature, as regards natural language as a predictor of psychosis, leads me to an interesting sidestep. Language is culture, right? Low coherence and low complexity in natural language are informative about psychosis, right? Now, I put that argument upside down. What if we, homo (mostly) sapiens, have a natural proclivity to psychosis, with that overblown cortex of ours? What if we had figured out, at some point of our evolutionary path, that language is a collectively intelligent tool which, with its unique coherence and complexity required for efficient communication, keeps us in a state of acceptable sanity, until we go on Twitter, of course.

Returning to the intellectual discipline which I should demonstrate, as a respectable researcher, the above review of literature brings one piece of good news, as regards the project in psychiatry. Initially, in this specific team, we assumed that we necessarily need an external partner, most likely a digital business, with important digital resources in AI, in order to run research on natural language. Now, I realized that we can assume two scenarios: one with big, fat AI from that external partner, and another one, with DIY algorithms of our own. Gives some freedom of movement. Cool.


[1] Tiutiunyk, V. V., Ivanets, H. V., Tolkunov, І. A., & Stetsyuk, E. I. (2018). System approach for readiness assessment units of civil defense to actions at emergency situations. Науковий вісник Національного гірничого університету, (1), 99-105. DOI: 10.29202/nvngu/2018-1/7

[2] Gąsiorek, K., & Marek, A. (2020). Działania wojsk obrony terytorialnej podczas pandemii COVID–19 jako przykład wojskowego wsparcia władz cywilnych i społeczeństwa [Activities of the Territorial Defence Forces during the COVID-19 pandemic as an example of military support for civil authorities and society]. Wiedza Obronna. DOI: https://doi.org/10.34752/vs7h-g945

[3] Worthington, M. A., Cao, H., & Cannon, T. D. (2019). Discovery and validation of prediction algorithms for psychosis in youths at clinical high risk. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2019.10.006

[4] Corcoran, C. M., & Cecchi, G. (2020). Using language processing and speech analysis for the identification of psychosis and other disorders. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2020.06.004