I have proven myself wrong

I keep working on a proof-of-concept paper for the idea I baptized ‘Energy Ponds’. You can consult two previous updates, namely ‘We keep going until we observe’ and ‘Ça semble expérimenter toujours’, to keep track of the intellectual drift I am taking. This time, I am focusing on the end of the technological pipeline, namely on the battery-powered charging station for electric cars. First, I want to get an idea of the market for charging.

I take the case of France. In December 2020, the country had a total of 119 737 electric vehicles officially registered, a 135% increase compared to December 2019[1]. That number covers only purely electric vehicles, leaving plug-in hybrids aside for the moment. When plug-in hybrids enter the game, France had, in December 2020, 470 295 vehicles that need or might need the services of charging stations. According to the same source, there were 28 928 charging stations in France at the time, which the source translates into 13 EVs per charging station. The same coefficient is given for four other European countries: Norway (23 EVs per charging station), UK (12), Germany (9), and the Netherlands (4).

I look into other sources. According to Reuters[2], there were 250 000 charging stations in Europe by September 2020, as compared to 34 000 in 2014. That means an average increase of about 36 000 a year. I find a different estimate at Statista[3]: 2010 – 3 201; 2011 – 7 018; 2012 – 17 498; 2013 – 28 824; 2014 – 40 910; 2015 – 67 064; 2016 – 98 669; 2017 – 136 059; 2018 – 153 841; 2019 – 211 438; 2020 – 285 796.

On the other hand, the European Alternative Fuels Observatory supplies its own data, as regards the European Union, at https://www.eafo.eu/electric-vehicle-charging-infrastructure.

Number of EVs per charging station (source: European Alternative Fuels Observatory):

Year | EVs per charging station
2010 | 14
2011 | 6
2012 | 3
2013 | 4
2014 | 5
2015 | 5
2016 | 5
2017 | 5
2018 | 6
2019 | 7
2020 | 9

The same EAFO site gives its own estimate of the number of charging stations in Europe:

Number of charging stations in Europe (source: European Alternative Fuels Observatory):

Year | High-power recharging points (more than 22 kW) in the EU | Normal charging stations in the EU | Total charging stations
2012 | 257 | 10 250 | 10 507
2013 | 751 | 17 093 | 17 844
2014 | 1 474 | 24 917 | 26 391
2015 | 3 396 | 44 786 | 48 182
2016 | 5 190 | 70 012 | 75 202
2017 | 8 723 | 97 287 | 106 010
2018 | 11 138 | 107 446 | 118 584
2019 | 15 136 | 148 880 | 164 016
2020 | 24 987 | 199 250 | 224 237

Two conclusions jump out. Firstly, the count of charging stations is only approximate: numbers differ substantially from source to source. I can only guess that one of the reasons for that discrepancy is the distinction between officially issued permits to build charging points, on the one hand, and the actually active charging points, on the other hand. In Europe, building charging points for electric vehicles has become a sort of virtue, which governments at all levels like to signal. I guess there is some boasting and chest-puffing in the numbers those individual countries report.

Secondly, high-power stations, charging with direct current at a power of more than 22 kW, gain in importance. In 2012, that category made up 2.45% of the total charging network in Europe, and in 2020 that share climbed to 11.14%. This is an important piece of information as regards the proof of concept which I am building up for my idea of Energy Ponds. The charging station which I placed at the end of the pipeline in the concept of Energy Ponds, and which is supposed to earn a living for all the technologies and installations upstream of it, is supposed to be powered from a power storage facility. That means direct current, and most likely, high power.

On the whole, the www.eafo.eu site seems somehow more credible than Statista, with all due respect for the latter, and thus I am reporting some data they present on the fleet of EVs in Europe. Here it comes, in a few consecutive tables below:

Passenger EVs in Europe (source: European Alternative Fuels Observatory):

Year | BEV (pure electric) | PHEV (plug-in hybrid) | Total
2008 | 4 155 | – | 4 155
2009 | 4 841 | – | 4 841
2010 | 5 785 | – | 5 785
2011 | 13 395 | 163 | 13 558
2012 | 25 891 | 3 712 | 29 603
2013 | 45 662 | 32 474 | 78 136
2014 | 75 479 | 56 745 | 132 224
2015 | 119 618 | 125 770 | 245 388
2016 | 165 137 | 189 153 | 354 290
2017 | 245 347 | 254 473 | 499 820
2018 | 376 398 | 349 616 | 726 014
2019 | 615 878 | 479 706 | 1 095 584
2020 | 1 125 485 | 967 721 | 2 093 206

Light Commercial EVs in Europe (source: European Alternative Fuels Observatory):

Year | BEV (pure electric) | PHEV (plug-in hybrid) | Total
2008 | 253 | – | 253
2009 | 254 | – | 254
2010 | 309 | – | 309
2011 | 7 669 | – | 7 669
2012 | 9 527 | – | 9 527
2013 | 13 669 | – | 13 669
2014 | 10 049 | – | 10 049
2015 | 28 610 | – | 28 610
2016 | 40 926 | 1 | 40 927
2017 | 52 026 | 1 | 52 027
2018 | 76 286 | 1 | 76 287
2019 | 97 363 | 117 | 97 480
2020 | 120 711 | 1 054 | 121 765

Bus EVs in Europe (source: European Alternative Fuels Observatory):

Year | BEV (pure electric) | PHEV (plug-in hybrid) | Total
2008 | 27 | – | 27
2009 | 12 | – | 12
2010 | 123 | – | 123
2011 | 128 | – | 128
2012 | 286 | – | 286
2013 | 376 | – | 376
2014 | 389 | 40 | 429
2015 | 420 | 145 | 565
2016 | 686 | 304 | 990
2017 | 888 | 445 | 1 333
2018 | 1 608 | 486 | 2 094
2019 | 3 636 | 525 | 4 161
2020 | 5 311 | 550 | 5 861

Truck EVs in Europe (source: European Alternative Fuels Observatory):

Year | BEV (pure electric) | PHEV (plug-in hybrid) | Total
2008 | 5 | – | 5
2009 | 5 | – | 5
2010 | 6 | – | 6
2011 | 7 | – | 7
2012 | 8 | – | 8
2013 | 47 | – | 47
2014 | 58 | – | 58
2015 | 71 | – | 71
2016 | 113 | 39 | 152
2017 | 54 | 40 | 94
2018 | 222 | 40 | 262
2019 | 595 | 38 | 633
2020 | 983 | 29 | 1 012

Structure of EV fleet in Europe as regards the types of vehicles (source: European Alternative Fuels Observatory):

Year | Passenger EV | Light commercial EV | Bus EV | Truck EV
2008 | 93.58% | 5.70% | 0.61% | 0.11%
2009 | 94.70% | 4.97% | 0.23% | 0.10%
2010 | 92.96% | 4.97% | 1.98% | 0.10%
2011 | 63.47% | 35.90% | 0.60% | 0.03%
2012 | 75.09% | 24.17% | 0.73% | 0.02%
2013 | 84.72% | 14.82% | 0.41% | 0.05%
2014 | 92.62% | 7.04% | 0.30% | 0.04%
2015 | 89.35% | 10.42% | 0.21% | 0.03%
2016 | 89.39% | 10.33% | 0.25% | 0.04%
2017 | 90.34% | 9.40% | 0.24% | 0.02%
2018 | 90.23% | 9.48% | 0.26% | 0.03%
2019 | 91.46% | 8.14% | 0.35% | 0.05%
2020 | 94.21% | 5.48% | 0.26% | 0.05%

Summing it up a bit. The market of electric vehicles in Europe seems to be durably dominated by passenger cars. There is some fleet in the other categories of vehicles, and there is even some growth, but, for the moment, it all looks more like an experiment. Well, maybe electric buses are turning up somewhat more systematically.

The proportion between the fleet of electric vehicles and the infrastructure of charging stations still seems to be in a phase where the latter adjusts to the abundance of the former. Generally, the number of charging stations seems to be growing more slowly than the fleet of EVs. Thus, for my own concept, I assume that the coefficient of 9 EVs per charging station, on average, will hold steady or slightly increase. For the moment, I take 9. I assume that my charging stations will have some 9 habitual customers, plus a fringe of incidental ones.

From there, I think in the following terms. The number of times the average customer charges their car depends on the distance they cover. Apparently, there is roughly a 50 kWh per 100 km equivalence. I did not find detailed statistics as regards distances covered by electric vehicles as such; however, I came by some Eurostat data on distances covered by all passenger vehicles taken together: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Passenger_mobility_statistics#Distance_covered . There is a lot of discrepancy between the 11 European countries studied for that metric, but the average is 12.49 km per day. My average 9 customers would make, in total, some 410.27 charging purchases of 50 kWh each per year. I checked the prices of fast charging with direct current: 2.3 PLN per 1 kWh in Poland[4], €0.22 per 1 kWh in France[5], $0.13 per 1 kWh in the US[6], £0.25 per 1 kWh in the UK[7]. Once converted to US$, it gives $0.59 in Poland, $0.26 in France, $0.35 in the UK, and, of course, $0.13 in the US. Even at the highest price, namely that in Poland, those 410.27 charging stops give barely more than $12 000 a year.
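Just to keep that arithmetic in one place, here is a minimal back-of-the-envelope sketch in Python. It only reproduces the reasoning above; the 9 habitual customers, the 50 kWh per 100 km equivalence, the 12.49 km per day and the converted prices are the assumptions already stated, and the variable names are mine.

```python
# Back-of-the-envelope revenue of one charging station, reproducing the arithmetic above.
habitual_customers = 9          # assumed habitual customers per charging station
km_per_day = 12.49              # average daily distance per passenger car (Eurostat)
kwh_per_100km = 50.0            # assumed energy use of an average EV

km_per_year = km_per_day * 365                              # ≈ 4 559 km a year per customer
charges_per_customer = km_per_year / 100.0                  # 50 kWh purchases per customer
charges_total = habitual_customers * charges_per_customer   # ≈ 410 purchases a year

# Fast-charging prices per kWh, converted to USD as in the text
price_usd_per_kwh = {"Poland": 0.59, "France": 0.26, "UK": 0.35, "US": 0.13}

for country, price in price_usd_per_kwh.items():
    revenue = charges_total * kwh_per_100km * price
    print(f"{country}: ~${revenue:,.0f} a year")   # Poland comes out just above $12 000
```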

If I want to have a station able to charge 2 EVs at the same time, fast charging, and counting 350 kW per charging pile (McKinsey 2018[8]), I need 700 kW in total. Investment in batteries runs at some $600 ÷ $800 per 1 kW (Cole & Frazier 2019[9]; Cole, Frazier, Augustine 2021[10]), thus 700 * ($600 ÷ $800) = $420 000 ÷ $560 000. There is no way that investment pays back with $12 000 a year in revenue, and I haven’t even started talking about paying off the investment in all the remaining infrastructure of Energy Ponds: ram pumps, elevated tanks, semi-artificial wetlands, and hydroelectric turbines.

Now, I reverse my thinking. Investment in the range of $420 000 ÷ $560 000 in the charging station and its batteries gives a middle-of-the-interval value of $490 000. I found a paper by Zhang et al. (2019[11]) who claim that a charging station has a chance to pay off, as a business, when it sells some 5 000 000 kWh a year. When I put it back-to-back with the [50 kWh / 100 km] coefficient, it gives 10 000 000 km. Divided by the average annual distance covered by European drivers, thus by 4 558.55 km, it gives 2 193.68 customers per year, or some 6 charging stops per day. That seems hardly feasible with 9 customers. I assume that one customer would charge their electric vehicle no more than twice a week; 6 chargings a day make 6*7 = 42 chargings a week, and therefore 21 customers.
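The same kind of sketch, under the same assumptions, for the investment side and for the breakeven threshold quoted from Zhang et al. (2019) – again, just my own labelling of the figures already given above.

```python
# Investment versus breakeven, reproducing the reasoning above.
station_power_kw = 700                     # two charging piles at 350 kW each
battery_cost_per_kw = (600, 800)           # $ per kW (NREL cost projections)
investment_range = tuple(station_power_kw * c for c in battery_cost_per_kw)
print(investment_range)                    # (420000, 560000), mid-point ≈ $490 000

breakeven_kwh_per_year = 5_000_000         # Zhang et al. (2019) profitability threshold
km_equivalent = breakeven_kwh_per_year / 50.0 * 100.0       # 10 000 000 km a year
km_per_driver_per_year = 12.49 * 365                        # ≈ 4 559 km
print(km_equivalent / km_per_driver_per_year)               # ≈ 2 194 average-driver equivalents a year
```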

I need to stop and think. Essentially, I have proven myself wrong. I had been assuming that putting a charging station for electric vehicles at the end of the internal value chain, in the overall infrastructure of Energy Ponds, would solve the problem of making money on selling electricity. It turns out it creates even more problems. I need time to wrap my mind around it.


[1] http://www.avere-france.org/Uploads/Documents/161011498173a9d7b7d55aef7bdda9008a7e50cb38-barometre-des-immatriculations-decembre-2020(9).pdf

[2] https://www.reuters.com/article/us-eu-autos-electric-charging-idUSKBN2C023C

[3] https://www.statista.com/statistics/955443/number-of-electric-vehicle-charging-stations-in-europe/

[4] https://elo.city/news/ile-kosztuje-ladowanie-samochodu-elektrycznego

[5] https://particulier.edf.fr/fr/accueil/guide-energie/electricite/cout-recharge-voiture-electrique.html

[6] https://afdc.energy.gov/fuels/electricity_charging_home.html

[7] https://pod-point.com/guides/driver/cost-of-charging-electric-car

[8] McKinsey Center for Future Mobility, How Battery Storage Can Help Charge the Electric-Vehicle Market?, February 2018.

[9] Cole, Wesley, and A. Will Frazier. 2019. Cost Projections for Utility-Scale Battery Storage. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-73222. https://www.nrel.gov/docs/fy19osti/73222.pdf

[10] Cole, Wesley, A. Will Frazier, and Chad Augustine. 2021. Cost Projections for Utility-Scale Battery Storage: 2021 Update. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-79236. https://www.nrel.gov/docs/fy21osti/79236.pdf

[11] Zhang, J., Liu, C., Yuan, R., Li, T., Li, K., Li, B., … & Jiang, Z. (2019). Design scheme for fast charging station for electric vehicles with distributed photovoltaic power generation. Global Energy Interconnection, 2(2), 150-159. https://doi.org/10.1016/j.gloei.2019.07.003

Ça semble expérimenter toujours

I keep going with the idea I baptized ‘Projet Aqueduc’. I am preparing an article on that subject, of the proof-of-concept type. I am writing it in English, and I told myself it would be a good idea to reformulate in French what I have written so far, just to change the intellectual angle, stretch my legs a bit and take some distance.

A proof of concept follows a logic similar to any other scientific article, except that instead of exploring and verifying a theoretical hypothesis of the type ‘things work in the ABCD way, under conditions RTYU’, I explore and verify the hypothesis that a practical concept, such as that of ‘Projet Aqueduc’, has scientific foundations solid enough to make it worth working on and testing in real life. The scientific foundations come in two layers, in a way. The base layer consists in reviewing the literature on the subject to see whether someone has already described similar solutions, and the trick there is to explore different perspectives of similarity. Similar does not mean identical, right? That literature review is supposed to provide a logical structure – a model – applicable to empirical research, with variables and constant parameters. Then comes the upper layer of the proof of concept, which consists in conducting empirical research properly speaking with that model.

For the moment, I am at the base layer. I am thus reviewing the literature relevant to hydrological and hydroelectric solutions, whilst progressively forming a numerical model of ‘Projet Aqueduc’. In this update, I start with a brief recapitulation of the concept and then move on to what I have managed to find in the literature. The base concept of ‘Projet Aqueduc’ consists in placing, in the course of a river, pumps which work according to the principle of the hydraulic ram and thus use the kinetic energy of water to pump part of that water out of the riverbed, towards swamp-like structures whose function is to retain water in the local ecosystem. The hydraulic ram can pump vertically as well as horizontally, and so, before being retained in the wetlands, the water passes through a structure similar to an elevated aqueduct (hence the name of the concept in French), with flow-equalizing reservoirs, and then descends towards the wetlands through hydroelectric turbines. The latter produce energy, which is stored in a storage installation and sold from there, so as to assure the financial survival of the whole structure. Wind turbines and/or photovoltaic installations can be added in order to optimize the production of energy on the land occupied by the whole structure. You can find a more elaborate description of the concept in my update entitled ‘Le Catch 22 dans ce jardin d’Eden’. The feasibility I want to demonstrate is the capacity of that structure to finance itself entirely from the sales of electricity, like a regular business, and thus to grow and last without public subsidies. The practical solution I am taking into account very seriously, as the niche for selling electricity, is a charging station for electric vehicles.

The basic approach I use in the proof of concept – thus my base model – consists in representing the concept in question as a chain of technologies:

>> TCES – energy storage

>> TCCS – charging station for electric vehicles

>> TCRP – ram pumping

>> TCEW – elevated equalizing reservoirs

>> TCCW – water conveyance and siphoning

>> TCWS – the artificial equipment of the wetland structures

>> TCHE – hydroelectric turbines

>> TCSW – wind and photovoltaic installations

My starting intuition, which I intend to verify in my research through the literature, is that some of these technologies are rather predictable and well calibrated, whilst others are fuzzier and subject to change, hence less predictable. The predictable technologies are a sort of anchor for the whole concept, and the fuzzier ones are the object of experimentation.

I start the literature review with the environmental context, thus with hydrology. Variations in the water table, which is the scientific term for groundwater, seem to be the number one factor behind anomalies in water retention in artificial reservoirs (Neves, Nunes, & Monteiro 2020[1]). On the other hand, even without detailed hydrological modelling, there is substantial empirical evidence that the size of natural and artificial reservoirs in river plains, as well as the density of their placement and the way they are exploited, have a major influence on the practical access to water in local ecosystems. It seems that the size and density of wooded areas intervene as an equalizing factor in the environmental influence of reservoirs (Chisola, Van der Laan, & Bristow 2020[2]). Compared with other types of technology, hydrology seems to lag somewhat behind in terms of the pace of innovation, and it also seems that innovation-management methods applied successfully elsewhere can work for hydrology, for example innovation networks or technology incubators (Wehn & Montalvo 2018[3]; Mvulirwenande & Wehn 2020[4]). Rural and agricultural hydrology seems to be more innovative than urban hydrology, by the way (Wong, Rogers & Brown 2020[5]).

What I find rather surprising is the apparent lack of scientific consensus about the quantity of water that human societies need. Every assessment on the subject starts with ‘a lot and certainly too much’, and from there, the ‘a lot’ and the ‘too much’ become rather fuzzy. I have found only one calculation so far, in Hogeboom (2020[6]), who maintains that the average person in developed countries consumes 3 800 litres of water per day in total, yet this is a very holistic estimate which includes indirect consumption through goods and services, as well as transport. What is consumed directly, through the tap and the toilet flush, apparently remains a mystery for science, unless science considers the topic too down-to-earth to deal with it seriously.

There is an interesting research niche, which some of its representatives call ‘socio-hydrology’, which studies collective behaviour with respect to water and hydrological systems, and which is based on the empirical observation that said collective behaviours adapt, in a way that is both deep and pernicious, to the hydrological conditions the given society lives with (Kumar et al. 2020[7]). It seems that we adapt collectively to increased water consumption through growing productivity in the exploitation of our hydrological resources, and average income per capita seems to be positively correlated with that productivity (Bagstad et al. 2020[8]). It appears, then, that the accumulation and overlapping of numerous technologies, characteristic of developed countries, contributes to using water in a more and more productive way. In this context, there is interesting research by Mohamed et al. (2020[9]) who advance the thesis that an arid environment is not only a hydrological state but also a way of managing hydrological resources, on the basis of data which are always incomplete with respect to a rapidly changing situation.

A question comes more or less naturally: in the footsteps of socio-hydrological adaptation, has anyone presented a concept similar to what I present as ‘Projet Aqueduc’? Well, I have found nothing identical, yet there are some interestingly close ideas. In descriptive hydrology there is the concept of a pseudo-reservoir, meaning a structure such as wetlands or shallow aquifers which does not retain water statically, like an artificial lake, but slows down the circulation of water in the river basin enough to modify the hydrological conditions in the ecosystem (Harvey et al. 2009[10]; Phiri et al. 2021[11]). On the other hand, there is a team of Australian researchers who invented a structure they call by the acronym STORES, whose full name is ‘short-term off-river energy storage’ (Lu et al. 2021[12]; Stocks et al. 2021[13]). STORES is a semi-artificial pumped-storage structure, where an artificial reservoir is built on top of a natural hillock placed at some distance from the nearest river, and that reservoir receives water pumped artificially from the river. Those Australian researchers advance, and give scientific evidence to support, the thesis that with a bit of ingenuity such a reservoir can be operated in a closed loop with the river which feeds it, and thus create a water-retention system. STORES seems to be relatively the closest to my concept of ‘Projet Aqueduc’, and what is striking is that I invented my idea for the environment of the alluvial plains of Europe, whilst STORES was developed for the arid, quasi-desert environment of Australia. Finally, there is the idea of so-called ‘rain gardens’, which are a technology of rainwater retention in the urban environment, in horticultural structures, often placed on the roofs of buildings (Bortolini & Zanin 2019[14], for example).

I can provisionally conclude that everything which touches hydrology strictly speaking, within the framework of ‘Projet Aqueduc’, is subject to rather unpredictable change. What I have been able to deduce from the literature looks like a soup boiling under the lid. There is potential for technological change, there is environmental and social pressure, but there are no recurrent institutional mechanisms yet to connect one to the other. As the technologies TCEW (elevated equalizing reservoirs), TCCW (water conveyance and siphoning), and TCWS (the artificial equipment of wetland structures) thus show a fuzzy future, I pass to the technology TCRP of ram pumping. I have found two Chinese articles, which follow each other chronologically and seem, by the way, to have been written by the same team of researchers: Guo et al. (2018[15]), and Li et al. (2021[16]). They show the technology of the hydraulic ram from an interesting angle. On the one hand, the Chinese seem to have given real momentum to innovation in this specific field, at least much more momentum than I have been able to observe in Europe. On the other hand, the estimates of the effective height to which water can be pumped with state-of-the-art hydraulic rams are, respectively, 50 metres in the 2018 article and 30 metres in the 2021 one. Given that the two articles seem to be the fruit of the same project, there was something like a fascination followed by a downward correction. Be that as it may, even the more conservative estimate of 30 metres is clearly better than the 20 metres I had been assuming until now.

That relative elevation attainable with the hydraulic ram technology matters for the next technology in my chain, namely the small hydroelectric turbines, the TCHE. The relative elevation of water and the flow per second are the two key parameters which determine the electric power produced (Cai, Ye & Gholinia 2020[17]), and it turns out that in ‘Projet Aqueduc’, with elevation and flow largely controlled through the hydraulic ram technology, the turbines become a bit less dependent on natural conditions.

I have found a wonderfully encyclopaedic review of the parameters relevant to small hydroelectric turbines in Hatata, El-Saadawi, & Saad (2019[18]). Electric power is calculated as: Power = water density (1 000 kg/m³) * gravitational acceleration (9.8 m/s²) * net head (metres) * Q (flow per second, in m³/s).

The initial investment in such installations is calculated per unit of power, thus per 1 kilowatt, and is divided into 6 categories: the construction of the water intake, the powerhouse strictly speaking, the turbines, the generator, the auxiliary equipment, and the transformer together with the outdoor substation. I tell myself, by the way, that – given the structure of ‘Projet Aqueduc’ – the investment in the construction of the water intake is, in a way, equivalent to the system of hydraulic rams and elevated reservoirs. In any case:

>> construction of the water intake, per 1 kW of power ($): 186.216 * P^(-0.2368) * H^(-0.597)

>> the powerhouse strictly speaking, per 1 kW of power ($): 1389.16 * P^(-0.2351) * H^(-0.0585)

>> the turbines, per 1 kW of power ($):

@ Kaplan turbine: 39398 * P^(-0.58338) * H^(-0.113901)

@ Francis turbine: 30462 * P^(-0.560135) * H^(-0.127243)

@ radial impulse turbine: 10486.65 * P^(-0.3644725) * H^(-0.281735)

@ Pelton turbine: 2 * the radial impulse turbine

>> the generator, per 1 kW of power ($): 1179.86 * P^(-0.1855) * H^(-0.2083)

>> auxiliary equipment, per 1 kW of power ($): 612.87 * P^(-0.1892) * H^(-0.2118)

>> the transformer and the outdoor substation, per 1 kW of power ($): 281 * P^(0.1803) * H^(-0.2075)

where P stands for power (in kW) and H for net head (in metres).

Once the electric power is calculated, with the relative elevation secured by the hydraulic rams as one of its parameters, I can calculate the initial investment in hydro-generation as the sum of the items listed above. Hatata, El-Saadawi, & Saad (2019 op. cit.) also recommend multiplying that sum by a factor of 1.13 (a ‘you never know’ factor, so to speak), and assuming that the annual running costs will fall between 1% and 6% of the initial investment.
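Wired together, those formulas make a quick calculator. Below is a minimal sketch in Python; the coefficients and exponents are the ones quoted above from Hatata, El-Saadawi, & Saad (2019 op. cit.), whilst the function names and the 30-metre / 1 m³/s example at the end are purely illustrative assumptions of mine.

```python
# A sketch of the power formula and the per-kW cost items quoted above.
RHO = 1000.0      # water density, kg/m3
G = 9.8           # gravitational acceleration, m/s2

def hydro_power_kw(head_m: float, flow_m3s: float) -> float:
    """Electric power of a small hydro unit, in kW (before efficiency losses)."""
    return RHO * G * head_m * flow_m3s / 1000.0

def investment_per_kw(p_kw: float, h_m: float, turbine: str = "kaplan") -> float:
    """Initial investment per kW ($), as the sum of the cost items listed above."""
    turbines = {
        "kaplan":  39398    * p_kw**-0.58338   * h_m**-0.113901,
        "francis": 30462    * p_kw**-0.560135  * h_m**-0.127243,
        "radial":  10486.65 * p_kw**-0.3644725 * h_m**-0.281735,
    }
    turbines["pelton"] = 2 * turbines["radial"]
    cost = (
        186.216 * p_kw**-0.2368 * h_m**-0.597      # water intake
        + 1389.16 * p_kw**-0.2351 * h_m**-0.0585   # powerhouse
        + turbines[turbine]                        # turbine
        + 1179.86 * p_kw**-0.1855 * h_m**-0.2083   # generator
        + 612.87  * p_kw**-0.1892 * h_m**-0.2118   # auxiliary equipment
        + 281     * p_kw**0.1803  * h_m**-0.2075   # transformer and outdoor substation
    )
    return 1.13 * cost                             # the 'you never know' factor

# Example: 30 m of head (the conservative ram-pump estimate) and 1 m3/s of flow
p = hydro_power_kw(30, 1.0)                        # ≈ 294 kW
print(p, investment_per_kw(p, 30, "kaplan"))       # $ per kW; yearly O&M = 1% to 6% of the total
```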

Syahputra & Soesanti (2021[19]) study the case of the Progo river, endowed with a quite modest flow of 6.696 cubic metres per second and located in the Kulon Progo Regency (in the Special Region of Yogyakarta, Indonesia). The system of small hydroelectric turbines there supplies electricity to 962 local households and creates a surplus of 4 263 951 kWh per year of energy to be resold to external consumers. In another article, Sterl et al. (2020[20]) study the case of Suriname and advance an interesting thesis, namely that the development of installations based on renewable energies creates a phenomenon of appetite for energy which grows with the eating, and that such development in one source of energy – wind, for example – stimulates investment in installations based on other sources, thus hydraulic and photovoltaic.

These relatively recent studies corroborate those of a few years ago, such as Vilanova & Balestieri (2014[21]) or Vieira et al. (2015[22]), with the general conclusion that small hydroelectric turbines have reached a degree of technological sophistication sufficient to release an economically profitable quantity of energy. Besides, it seems there is a lot to gain in this field through optimizing the distribution of power between the different turbines. Back to the most recent publications, I have found quite robust feasibility studies for small hydroelectric turbines, which indicate that – provided one is ready to accept a payback period of about 10 to 11 years on the initial investment – small hydro can be exploited profitably even with a relative elevation below 20 metres (Arthur et al. 2020[23]; Ali et al. 2021[24]).

This is how I arrive at the final portion of the technological chain of ‘Projet Aqueduc’, thus at energy storage (TCES) as well as TCCS, the charging station for electric vehicles. The power to install in a charging station seems to fall between 700 and 1 000 kilowatts (Zhang et al. 2018[25]; McKinsey 2018[26]). Below 700 kilowatts, the station can become so hard to access for the average consumer, because of queues, that it can lose the trust of local customers. On the other hand, anything above 1 000 kilowatts is really useful only at peak hours in dense urban environments. There are concept studies for charging stations where the energy-storage unit is supplied from renewable sources (Al Wahedi & Bicer 2020[27]). Zhang et al. (2019[28]) present a ready-made business concept for a charging station located in the urban environment. Apparently, the profitability threshold is around 5 100 000 kilowatt-hours sold per year.

In terms of storage technology strictly speaking, Li-ion batteries seem to be the baseline solution for now, although a combination with fuel cells or with hydrogen looks promising (Al Wahedi & Bicer 2020 op. cit.; Sharma, Panwar & Tripathi 2020[29]). In general, for the moment, Li-ion batteries show relatively the most sustained pace of innovation (Tomaszewska et al. 2019[30]; de Simone & Piegari 2019[31]; Koohi-Fayegh & Rosen 2020[32]). A recent article by Elmeligy et al. (2021[33]) presents an interesting concept of a mobile storage unit which could move between several charging stations. As for the initial investment required for a charging station, it all still seems to be experimenting, but the margin of manoeuvre is narrowing down to somewhere between $600 ÷ $800 per 1 kW of power (Cole & Frazier 2019[34]; Cole, Frazier, Augustine 2021[35]).


[1] Neves, M. C., Nunes, L. M., & Monteiro, J. P. (2020). Evaluation of GRACE data for water resource management in Iberia: a case study of groundwater storage monitoring in the Algarve region. Journal of Hydrology: Regional Studies, 32, 100734. https://doi.org/10.1016/j.ejrh.2020.100734

[2] Chisola, M. N., Van der Laan, M., & Bristow, K. L. (2020). A landscape hydrology approach to inform sustainable water resource management under a changing environment. A case study for the Kaleya River Catchment, Zambia. Journal of Hydrology: Regional Studies, 32, 100762. https://doi.org/10.1016/j.ejrh.2020.100762

[3] Wehn, U., & Montalvo, C. (2018). Exploring the dynamics of water innovation: Foundations for water innovation studies. Journal of Cleaner Production, 171, S1-S19. https://doi.org/10.1016/j.jclepro.2017.10.118

[4] Mvulirwenande, S., & Wehn, U. (2020). Fostering water innovation in Africa through virtual incubation: Insights from the Dutch VIA Water programme. Environmental Science & Policy, 114, 119-127. https://doi.org/10.1016/j.envsci.2020.07.025

[5] Wong, T. H., Rogers, B. C., & Brown, R. R. (2020). Transforming cities through water-sensitive principles and practices. One Earth, 3(4), 436-447. https://doi.org/10.1016/j.oneear.2020.09.012

[6] Hogeboom, R. J. (2020). The Water Footprint Concept and Water’s Grand Environmental Challenges. One earth, 2(3), 218-222. https://doi.org/10.1016/j.oneear.2020.02.010

[7] Kumar, P., Avtar, R., Dasgupta, R., Johnson, B. A., Mukherjee, A., Ahsan, M. N., … & Mishra, B. K. (2020). Socio-hydrology: A key approach for adaptation to water scarcity and achieving human well-being in large riverine islands. Progress in Disaster Science, 8, 100134. https://doi.org/10.1016/j.pdisas.2020.100134

[8] Bagstad, K. J., Ancona, Z. H., Hass, J., Glynn, P. D., Wentland, S., Vardon, M., & Fay, J. (2020). Integrating physical and economic data into experimental water accounts for the United States: Lessons and opportunities. Ecosystem Services, 45, 101182. https://doi.org/10.1016/j.ecoser.2020.101182

[9] Mohamed, M. M., El-Shorbagy, W., Kizhisseri, M. I., Chowdhury, R., & McDonald, A. (2020). Evaluation of policy scenarios for water resources planning and management in an arid region. Journal of Hydrology: Regional Studies, 32, 100758. https://doi.org/10.1016/j.ejrh.2020.100758

[10] Harvey, J.W., Schaffranek, R.W., Noe, G.B., Larsen, L.G., Nowacki, D.J., O’Connor, B.L., 2009. Hydroecological factors governing surface water flow on a low-gradient floodplain. Water Resour. Res. 45, W03421, https://doi.org/10.1029/2008WR007129.

[11] Phiri, W. K., Vanzo, D., Banda, K., Nyirenda, E., & Nyambe, I. A. (2021). A pseudo-reservoir concept in SWAT model for the simulation of an alluvial floodplain in a complex tropical river system. Journal of Hydrology: Regional Studies, 33, 100770. https://doi.org/10.1016/j.ejrh.2020.100770.

[12] Lu, B., Blakers, A., Stocks, M., & Do, T. N. (2021). Low-cost, low-emission 100% renewable electricity in Southeast Asia supported by pumped hydro storage. Energy, 121387. https://doi.org/10.1016/j.energy.2021.121387

[13] Stocks, M., Stocks, R., Lu, B., Cheng, C., & Blakers, A. (2021). Global atlas of closed-loop pumped hydro energy storage. Joule, 5(1), 270-284. https://doi.org/10.1016/j.joule.2020.11.015

[14] Bortolini, L., & Zanin, G. (2019). Reprint of: Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban forestry & urban greening, 37, 74-86. https://doi.org/10.1016/j.ufug.2018.07.003

[15] Guo, X., Li, J., Yang, K., Fu, H., Wang, T., Guo, Y., … & Huang, W. (2018). Optimal design and performance analysis of hydraulic ram pump system. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 232(7), 841-855. https://doi.org/10.1177%2F0957650918756761

[16] Li, J., Yang, K., Guo, X., Huang, W., Wang, T., Guo, Y., & Fu, H. (2021). Structural design and parameter optimization on a waste valve for hydraulic ram pumps. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 235(4), 747–765. https://doi.org/10.1177/0957650920967489

[17] Cai, X., Ye, F., & Gholinia, F. (2020). Application of artificial neural network and Soil and Water Assessment Tools in evaluating power generation of small hydropower stations. Energy Reports, 6, 2106-2118. https://doi.org/10.1016/j.egyr.2020.08.010

[18] Hatata, A. Y., El-Saadawi, M. M., & Saad, S. (2019). A feasibility study of small hydro power for selected locations in Egypt. Energy Strategy Reviews, 24, 300-313. https://doi.org/10.1016/j.esr.2019.04.013

[19] Syahputra, R., & Soesanti, I. (2021). Renewable energy systems based on micro-hydro and solar photovoltaic for rural areas: A case study in Yogyakarta, Indonesia. Energy Reports, 7, 472-490. https://doi.org/10.1016/j.egyr.2021.01.015

[20] Sterl, S., Donk, P., Willems, P., & Thiery, W. (2020). Turbines of the Caribbean: Decarbonising Suriname’s electricity mix through hydro-supported integration of wind power. Renewable and Sustainable Energy Reviews, 134, 110352. https://doi.org/10.1016/j.rser.2020.110352

[21] Vilanova, M. R. N., & Balestieri, J. A. P. (2014). Hydropower recovery in water supply systems: Models and case study. Energy Conversion and Management, 84, 414-426. https://doi.org/10.1016/j.enconman.2014.04.057

[22] Vieira, D. A. G., Guedes, L. S. M., Lisboa, A. C., & Saldanha, R. R. (2015). Formulations for hydroelectric energy production with optimality conditions. Energy Conversion and Management, 89, 781-788. https://doi.org/10.1016/j.enconman.2014.10.048

[23] Arthur, E., Anyemedu, F. O. K., Gyamfi, C., Asantewaa-Tannor, P., Adjei, K. A., Anornu, G. K., & Odai, S. N. (2020). Potential for small hydropower development in the Lower Pra River Basin, Ghana. Journal of Hydrology: Regional Studies, 32, 100757. https://doi.org/10.1016/j.ejrh.2020.100757

[24] Ali, M., Wazir, R., Imran, K., Ullah, K., Janjua, A. K., Ulasyar, A., … & Guerrero, J. M. (2021). Techno-economic assessment and sustainability impact of hybrid energy systems in Gilgit-Baltistan, Pakistan. Energy Reports, 7, 2546-2562. https://doi.org/10.1016/j.egyr.2021.04.036

[25] Zhang, Y., He, Y., Wang, X., Wang, Y., Fang, C., Xue, H., & Fang, C. (2018). Modeling of fast charging station equipped with energy storage. Global Energy Interconnection, 1(2), 145-152. DOI:10.14171/j.2096-5117.gei.2018.02.006

[26] McKinsey Center for Future Mobility, How Battery Storage Can Help Charge the Electric-Vehicle Market?, February 2018.

[27] Al Wahedi, A., & Bicer, Y. (2020). Development of an off-grid electrical vehicle charging station hybridized with renewables including battery cooling system and multiple energy storage units. Energy Reports, 6, 2006-2021. https://doi.org/10.1016/j.egyr.2020.07.022

[28] Zhang, J., Liu, C., Yuan, R., Li, T., Li, K., Li, B., … & Jiang, Z. (2019). Design scheme for fast charging station for electric vehicles with distributed photovoltaic power generation. Global Energy Interconnection, 2(2), 150-159. https://doi.org/10.1016/j.gloei.2019.07.003

[29] Sharma, S., Panwar, A. K., & Tripathi, M. M. (2020). Storage technologies for electric vehicles. Journal of traffic and transportation engineering (english edition), 7(3), 340-361. https://doi.org/10.1016/j.jtte.2020.04.004

[30] Tomaszewska, A., Chu, Z., Feng, X., O’Kane, S., Liu, X., Chen, J., … & Wu, B. (2019). Lithium-ion battery fast charging: A review. ETransportation, 1, 100011. https://doi.org/10.1016/j.etran.2019.100011

[31] De Simone, D., & Piegari, L. (2019). Integration of stationary batteries for fast charge EV charging stations. Energies, 12(24), 4638. https://doi.org/10.3390/en12244638

[32] Koohi-Fayegh, S., & Rosen, M. A. (2020). A review of energy storage types, applications and recent developments. Journal of Energy Storage, 27, 101047. https://doi.org/10.1016/j.est.2019.101047

[33] Elmeligy, M. M., Shaaban, M. F., Azab, A., Azzouz, M. A., & Mokhtar, M. (2021). A Mobile Energy Storage Unit Serving Multiple EV Charging Stations. Energies, 14(10), 2969. https://doi.org/10.3390/en14102969

[34] Cole, Wesley, and A. Will Frazier. 2019. Cost Projections for Utility-Scale Battery Storage. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-73222. https://www.nrel.gov/docs/fy19osti/73222.pdf

[35] Cole, Wesley, A. Will Frazier, and Chad Augustine. 2021. Cost Projections for Utility-Scale Battery Storage: 2021 Update. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-79236. https://www.nrel.gov/docs/fy21osti/79236.pdf

We keep going until we observe

I keep working on a proof-of-concept paper for my idea of ‘Energy Ponds’. In my last two updates, namely in ‘Seasonal lakes’, and in ‘Le Catch 22 dans ce jardin d’Eden’, I sort of refreshed my ideas and set the canvas for painting. Now, I start sketching. What exact concept do I want to prove, and what kind of evidence can possibly confirm (or discard) that concept? The idea I am working on has a few different layers. The most general vision is that of purposefully storing water in spongy structures akin to swamps or wetlands. These can bear various degrees of artificial construction, and can stretch from natural wetlands, through semi-artificial ones, all the way to urban technologies such as rain gardens and sponge cities. The most general proof corresponding to that vision is a review of publicly available research – peer-reviewed papers, preprints, databases etc. – on that general topic.

Against that general landscape, I sketch two more specific concepts: the idea of using ram pumps as a technology of forced water retention, and the possibility of locating those wetland structures in the broadly spoken Northern Europe, thus my home region. Correspondingly, I need to provide two streams of scientific proof: a review of literature on the technology of ram pumping, on the one hand, and on the actual natural conditions, as well as land management policies in Europe, on the other hand.  I need to consider the environmental impact of creating new wetland-like structures in Northern Europe, as well as the socio-economic impact, and legal feasibility of conducting such projects.

Next, I sort of build upwards. I hypothesise a complex technology, where ram-pumped water from the river goes up into light elevated tanks of sorts, and from there, using the principle of the Roman siphon, it cascades down into wetlands and through a series of small hydro-electric turbines. Turbines generate electricity, which is being stored and then sold outside.

At that point, I have a technology of water retention coupled with a technology of energy generation and storage. I further advance a second hypothesis that such a complex technology will be economically sustainable based on the corresponding sales of electricity. In other words, I want to figure out a configuration of that technology, which will be suitable for communities which either don’t care at all, or simply cannot afford to care about the positive environmental impact of the solution proposed.

Proof of concept for those two hypotheses is going to be complex. First, I need to review the available technologies for energy storage, energy generation, as well as for the construction of elevated tanks and Roman siphons. I need to take into account various technological mixes, including the incorporation of wind turbines and photovoltaic installations into the whole thing, in order to optimize the output of energy. I will try to look for documented examples of small hydro-generation coupled with wind and solar. Then, I have to comb the literature as regards mathematical models for the optimization of such power systems and put them against my own idea of reverse engineering back from the storage technology. I take the technology of energy storage which seems the most suitable for the local market of energy, and for the hypothetical charging from hydro-wind-solar mixed generation. I build a control scenario where that storage facility just buys energy at wholesale prices from the power grid and then resells it. Next, I configure the hydro-wind-solar generation so as to make it economically competitive against the supply of energy from the power grid.

Now, I sketch. I keep in mind the levels of conceptualization outlined above, and I quickly move through published science along that logical path, quickly picking a few articles for each topic. I am going to put those nonchalantly collected pieces of science back-to-back and see how and whether at all it all makes sense together. I start with Bortolini & Zanin (2019[1]), who study the impact of rain gardens on water management in cities of the Veneto region in Italy. Rain gardens are vegetal structures, set up in the urban environment, with the specific purpose to retain rainwater.  Bortolini & Zanin (2019 op. cit.) use a simplified water balance, where the rain garden absorbs and retains a volume ‘I’ of water (‘I’ stands for infiltration), which is the difference between precipitations on the one hand, and the sum total of overflowing runoff from the rain garden plus evapotranspiration of water, on the other hand. Soil and plants in the rain garden have a given top capacity to retain water. Green plants typically hold 80 – 95% of their mass in water, whilst trees hold about 50%. Soil is considered wet when it contains about 25% of water. The rain garden absorbs water from precipitations at a rate determined by hydraulic conductivity, which means the relative ease of a fluid (usually water) to move through pore spaces or fractures, and which depends on the intrinsic permeability of the material, the degree of saturation, and on the density and viscosity of the fluid.
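As a note to self, the water balance described above is simple enough to put into a few lines of code. The sketch below only mirrors the structure of that balance (infiltration as precipitation minus runoff and evapotranspiration, capped by the retentive capacity of soil and plants); all the numbers in it are made up for illustration, not taken from Bortolini & Zanin (2019).

```python
# A toy sketch of the simplified rain-garden water balance described above.
def rain_garden_balance(precip_mm, runoff_mm, evapo_mm, capacity_mm, stored_mm=0.0):
    """Return (water retained in this step, new storage level), both in mm."""
    infiltration = max(precip_mm - (runoff_mm + evapo_mm), 0.0)
    retained = min(infiltration, capacity_mm - stored_mm)   # cannot exceed free capacity
    return retained, stored_mm + retained

retained, stored = rain_garden_balance(precip_mm=40.0, runoff_mm=3.0,
                                       evapo_mm=2.0, capacity_mm=60.0)
print(retained, stored)   # 35.0 mm retained out of 40.0 mm of rain (~87%)
```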

As I look at it, I can see that the actual capacity of water retention in a rain garden can hardly be determined a priori, unless we have really a lot of empirical data from the given location. For a new location of a new rain garden, it is safe to assume that we need an experimental phase in which we empirically assess the retentive capacity of the rain garden with different configurations of soil and vegetation used. That leads me to generalizing that any porous structure we use for retaining rainwater, be it something like wetlands, or something like a rain garden in an urban environment, has a natural constraint of hydraulic conductivity, and that constraint determines the percentage of precipitations, and the metric volume thereof, which the given structure can retain.

Bortolini & Zanin (2019 op. cit.) bring forth empirical results which suggest that properly designed rain gardens located on rooftops in a city can absorb from 87% to 93% of the total input of water they receive. Cool. I move on towards the issue of water management in Europe, with a working paper by Fribourg-Blanc, B. (2018[2]), and the most important takeaway from that paper is that we have something called European Platform for Natural Water Retention Measures AKA http://nwrm.eu , and that thing has both good and bad properties. The good thing about http://nwrm.eu is that it contains loads of data and publications about projects in Natural Water Retention in Europe. The bad thing is that http://nwrm.eu is not a secure website. Another paper, by Tóth et al. (2017[3]), tells me that another analytical tool exists, namely the European Soil Hydraulic Database (EU-SoilHydroGrids ver1.0).

So far, so good. I already know there is data and science for evaluating, with acceptable precision, the optimal structure and the capacity for water retention in porous structures such as rain gardens or wetlands, in the European context. I move to the technology of ram pumps. I grab two papers: Guo et al. (2018[4]), and Li et al. (2021[5]). They show me two important things. Firstly, China seems to be burning the rubber in the field of ram pumping technology. Secondly, the greatest uncertainty as for that technology seems to be the actual height those ram pumps can elevate water at, or, when coupled with hydropower, the hydraulic head which ram pumps can create. Guo et al. (2018 op. cit.) claim that 50 meters of elevation is the maximum which is both feasible and efficient. Li et al. (2021 op. cit.) are sort of vertically more conservative and claim that the whole thing should be kept below 30 meters of elevation. Both are better than 20 meters, which is what I thought was the best one can expect. Greater elevation of water means greater hydraulic head, and more hydropower to be generated. It pays off to review literature.

Lots of uncertainty as for the actual capacity and efficiency of ram pumping means quick technological change in that domain. This is economically interesting. It means that investing in projects which involve ram pumping means investment in quickly changing a technology. That means both high hopes for an even better technology in immediate future, and high needs for cash in the balance sheet of the entities involved.

I move to the end-of-the-pipeline technology in my concept, namely to energy storage. I study a paper by Koohi-Fayegh & Rosen (2020[6]), which suggests two things. Firstly, for a standalone installation in renewable energy, whatever combination of small hydropower, photovoltaic and small wind turbines we think of, lithium-ion batteries are always a good idea for power storage. Secondly, when we work with hydrogeneration, thus when we have any hydraulic head to make electricity with, pumped storage comes sort of naturally. That leads me to an idea which looks even crazier than what I have imagined so far: what if we created an elevated garden with a strong capacity for water retention? Ram pumps take water from the river and pump it up onto elevated platforms with rain gardens on them. Those platforms can be optimized as for their absorption of sunlight, and thus as regards their interaction with whatever is underneath them.

I move to small hydro, and I find two papers, namely Couto & Olden (2018[7]), and Lange et al. (2018[8]), which are both interestingly critical as regards small hydropower installations. Lange et al. (2018 op. cit.) claim that the overall environmental impact of small hydro should be closely monitored. Couto & Olden (2018 op. cit.) go further and claim there is a ‘craze’ about small hydro, and that craze has already led to overinvestment in the corresponding installations, which can be damaging both environmentally and economically (overinvestment means financial collapse of many projects). With those critical views in mind, I turn to another paper, by Zhou et al. (2019[9]), who approach the issue as a case for optimization, within a broader framework called ‘Water-Food-Energy’ Nexus, WFE for closer friends. This paper, just as a few others it cites (Ming et al. 2018[10]; Uen et al. 2018[11]), advocates for using artificial intelligence in order to optimize for WFE.

Zhou et al. (2019 op.cit.) set three hydrological scenarios for empirical research and simulation. The baseline scenario corresponds to an average hydrological year, with average water levels and average precipitations. Next to it are: a dry year and a wet year. The authors assume that the cost of installation in small hydropower is $600 per kW on average.  They simulate the use of two technologies for hydro-electric turbines: Pelton and Vortex. Pelton turbines are optimized paddled wheels, essentially, whilst the Vortex technology consists in creating, precisely, a vortex of water, and that vortex moves a rotor placed in the middle of it.

Zhou et al. (2019 op.cit.) create a multi-objective function to optimize, with the following desired outcomes:

>> Objective 1: maximize the reliability of water supply by minimizing the probability of real water shortage occurring.

>> Objective 2: maximize water storage given the capacity of the reservoir. Note: reservoir is understood hydrologically, as any structure, natural or artificial, able to retain water.

>> Objective 3: maximize the average annual output of small hydro-electric turbines.

Those objectives are being achieved under the corresponding sets of constraints. For water supply those constraints all turn around water balance, whilst for energy output it is more about the engineering properties of the technologies taken into account. The three objectives are hierarchized. First, Zhou et al. (2019 op.cit.) perform an optimization regarding Objectives 1 and 2, thus in order to find the optimal hydrological characteristics to meet, and then, on the basis of these, they optimize the technology to put in place, as regards power output.

The general tool for optimization used by Zhou et al. (2019 op.cit.) is a genetic algorithm called NSGA-II, AKA Non-dominated Sorting Genetic Algorithm. Apparently, NSGA-II has a long and successful track record in engineering, including water management and energy (see e.g. Chang et al. 2016[12]; Jain & Sachdeva 2017[13]; Assaf & Shabani 2018[14]). I want to stop for a while here and have a good look at this specific algorithm. The logic of NSGA-II starts with creating an initial population of cases/situations/configurations etc. Each case is a combination of observations as regards the objectives to meet, and the actual values observed in constraining variables, e.g. precipitations for water balance or hydraulic head for the output of hydropower. In the conventional lingo of this algorithm, those cases are called chromosomes. Yes, I know, a hydro-electric turbine placed in the context of water management hardly looks like a chromosome, but it is a genetic algorithm, and it just sounds fancy to use that biologically marked vocabulary.

As for me, I like staying close to real life, and therefore I call those cases solutions rather than chromosomes. Anyway, the underlying math is the same. Once I have that initial population of real-life solutions, I calculate two parameters for each of them: their rank as regards the objectives to maximize, and their so-called ‘crowding distance’. Ranking is done with the procedure of fast non-dominated sorting. It is a comparison in pairs, where the solution A dominates another solution B if and only if there is no objective of A worse than that objective of B, and there is at least one objective of A better than that objective of B. The solution which scores the most wins in such pairwise comparisons sits at the top of the ranking, the one with the second-highest number of wins comes second, and so on. Crowding distance is essentially the same as what I call coefficient of coherence in my own research: Euclidean distance (or other mathematical distance) is calculated for each pair of solutions. As a result, each solution is associated with k Euclidean distances to the k remaining solutions, which can be reduced to an average distance, i.e. the crowding distance.
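To make that less abstract, here is a small sketch in Python (with NumPy) of those two parameters: the ranking by pairwise domination, and the crowding distance read – as in the paragraph above – as an average Euclidean distance between solutions. It follows my own simplified wording rather than the textbook NSGA-II front-sorting procedure; objectives are assumed to be maximized, and the toy population at the end is invented for illustration.

```python
# Pairwise-domination ranking and average-distance "crowding", as described above.
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if solution a is no worse than b on every objective and better on at least one."""
    return bool(np.all(a >= b) and np.any(a > b))   # objectives assumed to be maximized

def domination_rank(objectives: np.ndarray) -> np.ndarray:
    """Number of pairwise wins of each solution; higher means fitter."""
    n = len(objectives)
    wins = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objectives[i], objectives[j]):
                wins[i] += 1
    return wins

def average_crowding_distance(objectives: np.ndarray) -> np.ndarray:
    """Average Euclidean distance of each solution to all the other solutions."""
    diffs = objectives[:, None, :] - objectives[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=2))
    return dist.sum(axis=1) / (len(objectives) - 1)

# Toy population: columns = objectives (water-supply reliability, storage, power output)
pop = np.array([[0.9, 0.4, 120.0],
                [0.7, 0.6, 150.0],
                [0.9, 0.6, 160.0],
                [0.5, 0.2,  80.0]])
print(domination_rank(pop))               # [1 1 3 0]: the third solution dominates all the others
print(average_crowding_distance(pop))
```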

In the next step, an off-spring population is produced from that original population of solutions. It is created by taking relatively the fittest solutions from the initial population, recombining their characteristics in a 50/50 proportion, and giving them some capacity for endogenous mutation. Two out of these three genetic functions are de facto controlled. We choose relatively the fittest by establishing some kind of threshold for fitness, as regards the objectives pursued. It can be a required minimum, a quantile (e.g. the third quartile), or an average. In the first case, we arbitrarily impose a scale of fitness on our population, whilst in the latter two the hierarchy of fitness is generated endogenously from the population of solutions observed. Fitness can have shades and grades, by weighing the score in non-dominated sorting, thus the number of wins over other solutions, on the one hand, and the crowding distance on the other hand. In other words, we can go for solutions which have a lot of similar ones in the population (i.e. which have a low average crowding distance), or, conversely, we can privilege lone wolves, with a high average Euclidean distance from anything else on the plate.

The capacity for endogenous mutation means that we can allow variance in all or in just the selected variables which make each solution. The number of degrees of freedom we allow in each variable dictates the number of mutations that can be created. Once again, discretionary power is given to the analyst: we can choose the genetic traits which can mutate and we can determine their freedom to mutate. In an engineering problem, technological and environmental constraints should normally put a cap on the capacity for mutation. Still, we can think about an algorithm which definitely kicks the lid off the barrel of reality, and which generates mutations in the wildest registers of variables considered. It is a way to simulate a process in which the presence of strong outliers has a strong impact on the whole population.

The same discretionary cap on the freedom to evolve is to be found when we repeat the process. The offspring generation of solutions goes essentially through the same process as the initial one, to produce further offspring: ranking by non-dominated sorting and crowding distance, selection of the fittest, recombination, and endogenous mutation. At the starting point of this process, we can be one of two alternative versions of Mother Nature. We can be a mean Mother Nature, and we shave off from the offspring population all those baby-solutions which do not meet the initial constraints, e.g. zero supply of water in this specific case. On the other hand, we can be an even meaner Mother Nature and allow those strange, dysfunctional mutants to keep going, just to see what happens to the whole species after a few rounds of genetic reproduction.

With each generation, we compute an average crowding distance between all the solutions created, i.e. we check how diverse the species is in this generation. As long as diversity grows or remains constant, we assume that the divergence between the solutions generated grows or stays the same. Similarly, we can compute an even more general crowding distance between each pair of generations, and thereby assess how far the current generation has gone from the parent one. We keep going until we observe that the intra-generational crowding distance and the inter-generational one start narrowing down asymptotically to zero. In other words, we consider stopping the evolution when solutions in the game become highly similar to each other, and when genetic change stops bringing significant functional change.
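A corresponding sketch of that stopping rule, in Python with NumPy: evolution keeps going as long as the average intra-generational distance, and the distance between consecutive generations, stay meaningfully above zero. The ‘evolve’ step below is a deliberately dumb stand-in (shrink towards the mean plus a bit of noise) and the thresholds are arbitrary assumptions of mine; only the stopping logic mirrors the paragraph above.

```python
# Intra- and inter-generational distances as a stopping criterion for the evolution loop.
import numpy as np

def mean_pairwise_distance(pop: np.ndarray) -> float:
    """Average Euclidean distance over all pairs of solutions in one generation."""
    diffs = pop[:, None, :] - pop[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=2))
    n = len(pop)
    return dist.sum() / (n * (n - 1))      # self-distances are zero, so they drop out

def inter_generation_distance(parents: np.ndarray, offspring: np.ndarray) -> float:
    """Distance between the centroids of two consecutive generations."""
    return float(np.linalg.norm(parents.mean(axis=0) - offspring.mean(axis=0)))

rng = np.random.default_rng(0)
population = rng.uniform(0.0, 1.0, size=(20, 3))   # 20 solutions, 3 decision variables

for generation in range(100):
    # Stand-in for selection + recombination + mutation: shrink towards the current mean
    offspring = 0.5 * (population + population.mean(axis=0)) \
                + rng.normal(0.0, 0.001, size=population.shape)
    intra = mean_pairwise_distance(offspring)
    inter = inter_generation_distance(population, offspring)
    population = offspring
    if intra < 1e-2 and inter < 1e-2:              # both measures have collapsed towards zero
        print(f"stopping at generation {generation}")
        break
```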

Cool. When I want to optimize my concept of Energy Ponds, I need to add the objective of constrained return on investment, based on the sales of electricity. In comparison to Zhou et al. (2019 op.cit.), I need to add a third level of selection. I start with selecting environmentally the solutions which make sense in terms of water management. In the next step, I produce a range of solutions which assure the greatest output of power, in a possible mix with solar and wind. Then I take those and filter them through the NSGA-II procedure as regards their capacity to sustain themselves financially. Mind you, I can shake it off a bit by fusing together those levels of selection. I can simulate extreme cases, when, for example, good economic sustainability becomes an environmental problem. Still, it would be rather theoretical. In Europe, non-compliance with environmental requirements makes a project a non-starter per se: you just cannot get the necessary permits if your hydropower project messes with hydrological constraints legally imposed on the given location.

Cool. It all starts making sense. There is apparently a lot of stir in the technology of making semi-artificial structures for retaining water, such as rain gardens and wetlands. That means a lot of experimentation, and that experimentation can be guided and optimized by testing the fitness of alternative solutions for meeting objectives of water management, power output and economic sustainability. I have some starting data, to produce the initial generation of solutions, and then try to optimize them with an algorithm such as NSGA-II.


[1] Bortolini, L., & Zanin, G. (2019). Reprint of: Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban forestry & urban greening, 37, 74-86. https://doi.org/10.1016/j.ufug.2018.07.003

[2] Fribourg-Blanc, B. (2018, April). Natural Water Retention Measures (NWRM), a tool to manage hydrological issues in Europe?. In EGU General Assembly Conference Abstracts (p. 19043). https://ui.adsabs.harvard.edu/abs/2018EGUGA..2019043F/abstract

[3] Tóth, B., Weynants, M., Pásztor, L., & Hengl, T. (2017). 3D soil hydraulic database of Europe at 250 m resolution. Hydrological Processes, 31(14), 2662-2666. https://onlinelibrary.wiley.com/doi/pdf/10.1002/hyp.11203

[4] Guo, X., Li, J., Yang, K., Fu, H., Wang, T., Guo, Y., … & Huang, W. (2018). Optimal design and performance analysis of hydraulic ram pump system. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 232(7), 841-855. https://doi.org/10.1177%2F0957650918756761

[5] Li, J., Yang, K., Guo, X., Huang, W., Wang, T., Guo, Y., & Fu, H. (2021). Structural design and parameter optimization on a waste valve for hydraulic ram pumps. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 235(4), 747–765. https://doi.org/10.1177/0957650920967489

[6] Koohi-Fayegh, S., & Rosen, M. A. (2020). A review of energy storage types, applications and recent developments. Journal of Energy Storage, 27, 101047. https://doi.org/10.1016/j.est.2019.101047

[7] Couto, T. B., & Olden, J. D. (2018). Global proliferation of small hydropower plants–science and policy. Frontiers in Ecology and the Environment, 16(2), 91-100. https://doi.org/10.1002/fee.1746

[8] Lange, K., Meier, P., Trautwein, C., Schmid, M., Robinson, C. T., Weber, C., & Brodersen, J. (2018). Basin‐scale effects of small hydropower on biodiversity dynamics. Frontiers in Ecology and the Environment, 16(7), 397-404.  https://doi.org/10.1002/fee.1823

[9] Zhou, Y., Chang, L. C., Uen, T. S., Guo, S., Xu, C. Y., & Chang, F. J. (2019). Prospect for small-hydropower installation settled upon optimal water allocation: An action to stimulate synergies of water-food-energy nexus. Applied Energy, 238, 668-682. https://doi.org/10.1016/j.apenergy.2019.01.069

[10] Ming, B., Liu, P., Cheng, L., Zhou, Y., & Wang, X. (2018). Optimal daily generation scheduling of large hydro–photovoltaic hybrid power plants. Energy Conversion and Management, 171, 528-540. https://doi.org/10.1016/j.enconman.2018.06.001

[11] Uen, T. S., Chang, F. J., Zhou, Y., & Tsai, W. P. (2018). Exploring synergistic benefits of Water-Food-Energy Nexus through multi-objective reservoir optimization schemes. Science of the Total Environment, 633, 341-351. https://doi.org/10.1016/j.scitotenv.2018.03.172

[12] Chang, F. J., Wang, Y. C., & Tsai, W. P. (2016). Modelling intelligent water resources allocation for multi-users. Water resources management, 30(4), 1395-1413. https://doi.org/10.1007/s11269-016-1229-6

[13] Jain, V., & Sachdeva, G. (2017). Energy, exergy, economic (3E) analyses and multi-objective optimization of vapor absorption heat transformer using NSGA-II technique. Energy Conversion and Management, 148, 1096-1113. https://doi.org/10.1016/j.enconman.2017.06.055

[14] Assaf, J., & Shabani, B. (2018). Multi-objective sizing optimisation of a solar-thermal system integrated with a solar-hydrogen combined heat and power system, using genetic algorithm. Energy Conversion and Management, 164, 518-532. https://doi.org/10.1016/j.enconman.2018.03.026

Big Black Swans

Oops! Another big break in blogging. Sometimes life happens so fast that thoughts in my head run faster than I can possibly write about them. This is one of those sometimeses. Topics for research and writing abound, projects abound, everything is changing at a pace which proves challenging to a gentleman in his 50ies, such as I am. Yes, I am a gentleman: even when I want to murder someone, I go out of my way to stay polite.

I need to do that thing I periodically do, on this blog. I need to use published writing as a method of putting order in the chaos. I start with sketching the contours of chaos and its main components, and then I sequence and compartmentalize.

My chaos is made of the following parts:

>> My research on collective intelligence

>> My research on energy systems, with focus on investment in energy storage

>> My research on the civilisational role of cities, and on the concept of the entire human civilisation, such as we know it today, being a combination of two productive mechanisms: production of food in the countryside, and production of new social roles in the cities.

>> Joint research which I run with a colleague of mine, on the reproduction of human capital

>> The project I once named Energy Ponds, and which I recently renamed ‘Project Aqueduct’, for the purposes of promoting it.

>> The project which I have just started, together with three other researchers, on the role of Territorial Defence Forces during the COVID-19 pandemic

>> An extremely interesting project, which both I and a bunch of psychiatrists from my university have provisionally failed to kickstart, on the analysis of natural language in diagnosing and treating psychoses

>> A concept which recently came to my mind, as I was working on a crowdfunding project: a game as method of behavioural research about complex decisional patterns.

Nice piece of chaos, isn’t it? How do I put order in my chaos? Well, I ask myself, and, of course, I do my best to answer honestly the following questions: What do I want? How will I know I have what I want? How will other people know I have what I want? Why should anyone bother? What is the point? What do I fear? How will I know my fears come true? How will other people know my fears come true? How do I want to sequence my steps? What skills do I need to learn?

I know I tend to be deceitful with myself. As a matter of fact, most of us tend to. We like confirming our ideas rather than challenging them. I think I can partly overcome that subjectivity of mine by interweaving my answers to those questions with references to published scientific research. Another way of staying close to real life with my thinking consists in trying to understand what specific external events have pushed me to engage in the different paths, which, as I walk down all of them at the same time, make my personal chaos.

In 2018, I started using artificial neural networks, just like that, mostly for fun, and in a very simple form. As I observed those things at work, I developed a deep fascination with intelligent structures, and just as deep (i.e. f**king hard to phrase out intelligibly) an intuition that neural networks can be used as simulators of collectively intelligent social structures.

Both of my parents died in 2019, exactly at the same age of 78, having spent the last 20 years of their respective individual lives in complete separation from each other to the point of not having exchanged a spoken or written word over those last 20 years. That changed my perspective as regards subjectivity. I became acutely aware how subjective I am in my judgement, and how subjective other people are, most likely. Pandemic started in early 2020, and, almost at the same moment, I started to invest in the stock market, after a few years of break. I had been learning at an accelerated pace. I had been adapting to the reality of high epidemic risk – something I almost forgot since I had a devastating scarlatina at the age of 9 – and I had been adapting to a subjectively new form of economic decisions (i.e. those in the stock market). That had been the hell of a ride.

Right now, another piece of experience comes into the game. Until recently, in my house, the attic was mine. The remaining two floors were my wife’s dominion, but the attic was mine. It was painted in joyful, eye-poking colours. There was a lot of yellow and orange. It was mine. Yet, my wife had an eye for that space. Wives do, quite frequently. A fearsome ally came to support her: an interior decorator. Change has just happened. Now, the attic is all dark brown and cream. To me, it looks like the inside of a coffin. Yes, I know what the inside of a coffin looks like: I saw it just before my father’s funeral. That attic has become an alien space for me. I still have a hard time wrapping my mind around how shaken I am by that change. I realize how attached I am to the space around me. If I am so strongly bound to colours and shapes in my vicinity, other people probably feel the same, and that triggers another intuition: we, humans, are either simple dwellers in the surrounding space, or we are architects thereof, and these are two radically different frames of mind.

I am territorial as f**k. I have just clarified it inside my head. Now, it is time to go back to science. In a first step, I am trying to connect those experiences of mine to my hypothesis of collective intelligence. Step by step, I am phrasing it out. We are intelligent about each other. We are intelligent about the physical space around us. We are intelligent about us being subjective, and thus we have invented that thing called language, which allows us to produce a baseline for intersubjective description of the surrounding world.

I am conscious of my subjectivity, and of my strong emotions (that attic, f**k!). Therefore, I want to see my own ideas from other people’s point of view. Some review of literature is what I need. I start with Peeters, M. M., van Diggelen, J., Van Den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human–AI society. AI & SOCIETY, 36(1), 217-238. https://doi.org/10.1007/s00146-020-01005-y . I start with it because I could lay my hands on an open-access version of the full paper, and, as I read it, many bells ring in my head. Among all those bells ringing, the main one refers to the experience I had with otherwise simplistic neural networks, namely the perceptrons possible to structure in an Excel spreadsheet. Back in 2018, I was observing the way a truly dumb neural network was working, one made of just 6 equations looped together, and I had that realization: ‘Blast! Those six equations together are actually intelligent’. This is the core of that paper by Peeters et al. The whole story of collective intelligence became a thing in scientific literature when Artificial Intelligence started spreading throughout society, thus the literature has followed roughly the same path as I did individually. Conscious, inquisitive interaction with artificial intelligence seems to awaken an entirely new worldview, where we, humans, can see at work an intelligent fruit of our own intelligence.
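To show what I mean by a spreadsheet-sized network, here is a hypothetical reconstruction in Python of that kind of dumb-but-intelligent structure: a single neuron, a handful of equations looped together, fed with made-up input data. It is not the exact network from 2018, just the same species.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.random((50, 3))                                               # 50 observations of 3 input variables
y = (0.2 * x[:, 0] + 0.5 * x[:, 1] + 0.3 * x[:, 2]).reshape(-1, 1)   # hypothetical target variable
w = rng.normal(0.0, 0.1, size=(3, 1))                                 # one weight per input variable

for round_ in range(100):                    # experimental rounds
    h = x @ w                                # aggregate input
    out = np.tanh(h)                         # neural activation
    error = y - out                          # local error: how wrong we are this time
    gradient = error * (1.0 - out ** 2)      # error passed back through the activation
    w += 0.1 * x.T @ gradient / len(x)       # weights 'internalize' the error
    # the magnitude of the error usually shrinks from one round to the next: the thing learns
```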

I am trying to make one more step, from bewilderment to premises and hypotheses. In Peeters et al., three big intellectual streams are named: (1) the technology-centric perspective, (2) the human-centric one, and finally (3) the collective-intelligence-centric one. The third one sounds familiar, and so I dig into it. The general idea here is that humans can put their individual intelligences into a kind of interaction which is smarter than those individual ones. This hypothesis is a little counterintuitive – if we consider electoral campaigns or Instagram – but it becomes much more plausible when we think about networks of inventors and scientists. Peeters et al. present an interesting extension to that, namely collectively intelligent agglomerations of people and technology. This is exactly what I do when I do empirical research and use a neural network as simulator, with quantitative data in it. I am one human interacting with one simple piece of AI, and interesting things come out of it.

That paper by Peeters et al. cites a book: Sloman, S., Sloman, S. A., & Fernbach, P. (2018). The knowledge illusion: Why we never think alone (Penguin). Before I pass to my first impressions about that book, another aside. In 1993, one of the authors of that book, Aaron Sloman, wrote an introduction to another book, this one being a collection of proceedings (conference papers in plain human lingo) from a conference, grouped under the common title: Prospects for Artificial Intelligence (Hogg & Humphreys 1993[1]). In that introduction, Aaron Sloman claims that using Artificial Intelligence as a simulator of General Intelligence requires a specific approach, which he calls ‘design-based’, where we investigate the capabilities and the constraints within which intelligence, understood as a general phenomenon, has to function. Based on those constraints, requirements can be defined, and, consequently, the architecture and mechanisms through which intelligence is enabled to meet them.

We jump 25 years, from 1993, and this is what Sloman, Sloman & Fernbach wrote in the introduction to “The knowledge illusion…”: “This story illustrates a fundamental paradox of humankind. The human mind is both genius and pathetic, brilliant and idiotic. People are capable of the most remarkable feats, achievements that defy the gods. We went from discovering the atomic nucleus in 1911 to megaton nuclear weapons in just over forty years. We have mastered fire, created democratic institutions, stood on the moon, and developed genetically modified tomatoes. And yet we are equally capable of the most remarkable demonstrations of hubris and foolhardiness. Each of us is error-prone, sometimes irrational, and often ignorant. It is incredible that humans are capable of building thermonuclear bombs. It is equally incredible that humans do in fact build thermonuclear bombs (and blow them up even when they don’t fully understand how they work). It is incredible that we have developed governance systems and economies that provide the comforts of modern life even though most of us have only a vague sense of how those systems work. And yet human society works amazingly well, at least when we’re not irradiating native populations. How is it that people can simultaneously bowl us over with their ingenuity and disappoint us with their ignorance? How have we mastered so much despite how limited our understanding often is?” (Sloman, Steven; Fernbach, Philip. The Knowledge Illusion (p. 3). Penguin Publishing Group. Kindle Edition)

Those readings have given me a thread, and I am interweaving that thread with my own thoughts. Now, I return to another reading, namely to “The Black Swan” by Nassim Nicholas Taleb, where, on pages xxi – xxii of the introduction, the author writes: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives. Ever since we left the Pleistocene, some ten millennia ago, the effect of these Black Swans has been increasing. It started accelerating during the industrial revolution, as the world started getting more complicated, while ordinary events, the ones we study and discuss and try to predict from reading the newspapers, have become increasingly inconsequential”.

I combine the idea by Sloman, Sloman & Fernbach, that we, humans, can be really smart collectively through interaction between very limited and subjective individual intelligences, with the concept of the Black Swan by Nassim Nicholas Taleb. I reinterpret the Black Swan. This is an event we did not expect to happen, and yet it happened, and it has blown a hole in our self-sufficient certainty that we understand what the f**k is going on. When we do things, we expect certain outcomes. When we don’t get the outcomes we expected, this is a functional error. This is a local instance of chaos. We take that local chaos as learning material, and we try again, and again, and again, in the expectation of bringing order into the general chaos of existence. Our collectively intelligent learning feeds on exactly those local instances of chaos.

Nassim Nicholas Taleb claims that our culture tends to mask the occurrence of Black Swan-type outliers as just the manifestation of otherwise recurrent, predictable patterns. We collectively internalize Black Swans. This is what a neural network does. It takes an obvious failure – the local error of optimization – and utilizes it as valuable information in the next experimental round. The network internalizes a series of f**k-ups, because it never hits the output variable exactly home; there is always some discrepancy, at least a tiny one. The fact of being wrong about reality becomes normal. Every neural network I have worked with does the same thing: it starts with some substantial magnitude of error, and then it tends to reduce that error, at least temporarily, i.e. for at least a few more experimental rounds.

This is what a simple neural network – one of those I use in my calculations – does with quantitative variables. It processes data so as to create error, i.e. so as to purposefully create outliers located outside of what was expected. Those neural networks purposefully create Black Swans, those abnormalities which allow us to learn. Now, what is so collective about neural networks? Why do I intuitively associate my experiments with neural networks with collective intelligence rather than with the individual one? Well, I work with socio-economic quantitative variables. The lowest level of aggregation I use is the probability of occurrence of a specific event, and even that is really low aggregation for me. Most of my data is like Gross Domestic Product, average hours worked per person per year, average prices of electricity etc. This is essentially collective data, in the sense that no individual intelligence has a density of population or a rate of inflation of its own. There needs to be a society in place for those metrics to exist at all.

When I work with that type of data, I assume that many people observe and gauge it, then report and aggregate their observations etc. Many people put a lot of work into making those quantitative variables both available and reliable. I guess it is important, then. When some kind of data is that important collectively, it is very likely to reflect some important aspect of collective reality. When I run that data through a neural network, the latter yields a simulation of collective action and its (always) provisional outcomes.

My neural network (I mean the one on my computer, not the one in my head) takes like 0.05 of local Gross Domestic Product, then 0.4 of average consumption of energy per capita, maybe 0.09 of inflation in consumer prices, plus some other stuff in random proportions, and sums up all those small portions of whatever is important as collectively measured socio-economic outcome. Usually, that sum is designated as ‘h’, the aggregate input. Then, my network takes that h and puts it through a neural activation which, in most cases, is the hyperbolic tangent, i.e. tanh(h) = (e^(2h) – 1) / (e^(2h) + 1). When we learn by trial and error, the quantity e^(2h) can be read as the force with which the neural network reacts to a complex stimulation from a set of variables xi. The ‘e²’ part of that reaction is constant and equals e² ≈ 7.389056099, whilst h is a variable parameter, specific to the given phenomenal occurrence. The parameter h is roughly proportional to the number of variables in the source empirical set. The more complex the reality I process with the neural network, i.e. the more variables I split my reality into, the greater the value of h. In other words, the more complexity, the further the neuron, through the expression e^(2h), is driven away from its constant root e². Complexity in variables induces greater swings in the hyperbolic tangent, i.e. greater magnitudes of error, and, consequently, longer strides in the process of learning.
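For the record, the snippet below writes that activation down the way it appears above, i.e. as (e^(2h) – 1) / (e^(2h) + 1), and illustrates the point about complexity: with hypothetical, positively weighted variables, the aggregate input h tends to grow as more variables are thrown in, and the neuron drifts further away from its constant root.

```python
import numpy as np

def neural_activation(h):
    # The hyperbolic tangent, written the way it appears in the text: (e^(2h) - 1) / (e^(2h) + 1).
    return (np.exp(2.0 * h) - 1.0) / (np.exp(2.0 * h) + 1.0)

rng = np.random.default_rng(5)
for n_variables in (3, 10, 30):
    x = rng.random(n_variables)         # hypothetical standardized socio-economic variables
    weights = rng.random(n_variables)   # e.g. 0.05 of GDP, 0.4 of energy per capita, and so on
    h = weights @ x                     # aggregate input
    print(n_variables, "variables -> h =", round(float(h), 3),
          " activation =", round(float(neural_activation(h)), 3))
```

With mixed-sign weights the growth of h with the number of variables is less clear-cut, so this is an illustration of the tendency described above, not a proof of it.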

Logically, the more complex social reality I represent with quantitative variables, the bigger Black Swans the neural network produces as it tries to optimize one single variable chosen as the desired output of the neurally activated input.   


[1] Hogg, D., & Humphreys, G. W. (1993). Prospects for Artificial Intelligence: Proceedings of AISB’93, 29 March-2 April 1993, Birmingham, UK (Vol. 17). IOS Press.

The type of riddle I like

Once again, I had quite a break in blogging. I spend a lot of time putting together research projects, in a network of many organisations which I am supposed to bring to work together. I give it a lot of time and personal energy. It drains me a bit, and I like that drain. I like the thrill of putting together a team, agreeing about goals and possible openings. Since 2005, when I stopped running my own business and settled for an academic career, I hadn't experienced that special kind of personal drive. I sincerely believe that every teacher should apply his or her own teaching in their own everyday life, just to see if that teaching still corresponds to reality.

This is one of the reasons why I have made it a regular activity of mine to invest in the stock market. I teach economics, and the stock market is very much like the pulse of economics, in all its grades and shades, ranging from hardcore macroeconomic cycles, passing through the microeconomics of specific industries I am currently focusing on with my investment portfolio, and all the way down the path of behavioural economics. I teach management, as well, and putting together new projects in research is the closest I can come, currently, to management science being applied in real life.

Still, besides trying to apply my teaching in real life, I still do science. I do research, and I write about the things I think I have found out, on that research path of mine. I do a lot of research as regards the economics of energy. Currently, I am still revising a paper of mine, titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’. Around the topic of energy economics, I have built more general a method of studying quantitative socio-economic data, with the technical hypothesis that said data manifests collective intelligence in human social structures. It means that whenever I deal with a collection of quantitative socio-economic variables, I study the dataset at hand by assuming that each multivariate record line in the database is the local instance of an otherwise coherent social structure, which experiments with many such specific instances of itself and selects those offering the best adaptation to the current external stressors. Yes, there is a distinct sound of evolutionary method in that approach.

Over the last three months, I have been slowly ruminating my theoretical foundations for the revision of that paper. Now, I am doing what I love doing: I am disrupting the gently predictable flow of theory with some incongruous facts. Yes, facts don’t know how to behave themselves, like really. Here is an interesting fact about energy: between 1999 and 2016, at the planetary scale, there had been more and more new cars produced per each new human being born. This is visualised in the composite picture below. Data about cars comes from https://www.worldometers.info/cars/ , whilst data about the headcount of population comes from the World Bank (https://data.worldbank.org/indicator/SP.POP.TOTL ).

Now, the meaning of all that. I mean, not ALL THAT (i.e. reality and life in general), just all that data about cars and population. Why do we consistently make more and more physical substance of cars per each new human born? Two explanations come to my mind. One politically correct and nicely environmentalist: we are collectively dumb as f**k and we keep overshooting the output of cars over and above the incremental change in population. The latter, when translated into a rate of growth, tends to settle down (https://data.worldbank.org/indicator/SP.POP.GROW ). Yeah, those capitalists who own car factories just want to make more and more money, and therefore they make more and more cars. Yeah, those corrupt politicians want to conserve jobs in the automotive industry, and they support it. Yeah, f**k them all! Yeah, cars destroy the planet!

I checked. The first door I knocked at was General Motors (https://investor.gm.com/sec-filings ). What I can see is that they actually make more and more operational money by making fewer and fewer cars. Their business used to be overshot in terms of volume, and now they are slowly making sense and money out of making fewer cars. Then I checked with Toyota (https://global.toyota/en/ir/library/sec/ ). These guys look as if they were struggling to maintain their capacity to make approximately the same operational surplus each year, and they seem to be experimenting with the number of cars they need to put out in order to stay in good financial shape. When I say ‘experimenting’, it means experimenting upwards or downwards.

As a matter of fact, the only player who seems to be unequivocally making more operational money out of making more cars is Tesla (https://ir.tesla.com/#tab-quarterly-disclosure). There comes another explanation – much less politically correct, if at all – for there being more cars made per each new human, and it says that we, humans, are collectively intelligent, and we have a good reason for making more and more cars per each new human coming to this realm of tears, and the reason is to store energy in a movable, possibly auto-movable form. Yes, each car has a fuel tank or a set of batteries, in the case of them Teslas or other electric f**kers. Each car is a moving reservoir of chemical energy, readily convertible into kinetic energy, which, in turn, has economic utility. Making more cars with batteries pays off better than making more cars with combustible fuel in their tanks: a new generation of movable reservoirs of chemical energy is replacing an older generation thereof.

Let’s hypothesise that this is precisely the point of each new human being coupled with more and more of a new car being made: the point is more chemical energy convertible into kinetic energy. Do we need to move around more, as time passes? Maybe, although I am a bit doubtful. Technically, with more and more humans being around in a constant space, there are more and more humans per square kilometre, and that incremental growth in the density of population happens mostly in cities. I described that phenomenon in a paper of mine, titled ‘The Puzzle of Urban Density And Energy Consumption’. That means that the space available for travelling, and needing to be covered, per capita of each human being, is actually decreasing. Less space to travel in means less need for means of transportation.

Thus, what are we after, collectively? We might be preparing for having to move around more in the future, or for having to restructure the geography of our settlements. That’s possible, although the research I did for that paper about urban density indicates that geographical patterns of urbanization are quite durable. Anyway, those two cases sum up to some kind of zombie apocalypse. On the other hand, the fact of developing the amount of dispersed, temporarily stored energy (in cars) might be a manifestation of us learning how to build and maintain large, dispersed networks of energy reservoirs.

Isn’t it dumb to hypothesise that we go out of our way, as a civilisation, just to learn the best ways of developing what we are developing? Well, take the medieval cathedrals. Them medieval folks would keep building them for decades or even centuries. The Notre Dame cathedral in Paris, France, seems to be the record holder, with a construction period stretching from 1160 to 1245 (Bruzelius 1987[1]). Still, the same people who were so appallingly slow when building a cathedral could accomplish lightning-fast construction of quite complex military fortifications. When building cathedrals, the masters of stone masonry would do something apparently idiotic: they would build, then demolish, and then build again the same portion of the edifice, many times. WTF? Why slow down something we can do quickly? In order to experiment with the process and with the technologies involved, sir. Cathedrals were experimental labs of physics, mathematics and management, long before these scientific disciplines even emerged. Yes, there was the official rationale of getting closer to God, to accomplish God’s will, and, honestly, it came in handy. There was an entire culture – the medieval Christianity – which was learning how to learn by experimentation. The concept of fulfilling God’s will through perseverant pursuit, whilst being stoic as regards exogenous risks, was excellent a cultural vehicle for that purpose.

We move a few hundred years forward in time, to the 17th century. The cutting edge of technology was to be found in textiles and garments (Braudel 1992[2]), and the peculiarity of the European culture consisted in quickly changing fashions, geographically idiosyncratic and strongly enforced through social peer pressure. The industry of garments and textiles was a giant experimental lab of business and management, developing the division of labour, the management of supply chains, quick study of subtle shades in customers’ tastes and just as quick adaptation thereto. This is how we, Europeans, prepared for the much later introduction of mechanized industry, which, in turn, gave birth to what we are today: a species controlling something like 30% of all energy on the surface of our planet.

Maybe we are experimenting with dispersed, highly mobile and coordinated networks of small energy reservoirs – the automotive fleet – just for the sake of learning how to develop such networks? Some other facts, which, once again, are impolitely disturbing, come to the fore. I had a look at the data published by the United Nations as regards the total installed capacity of generation in electricity (https://unstats.un.org/unsd/energystats/ ). I calculated the average electrical capacity per capita, at the global scale. Turns out that in 2014 the average human on Earth had around 60% more power capacity to tap from, as compared to their counterpart in 1999.

Interesting. It looks even more interesting when taken as the first moment of a process. When I take the annual incremental change in the installed electrical capacity on the planet and divide it by the absolute demographic increment, thus when I go ‘Delta capacity / delta population’, that coefficient of elasticity grows like hell. In 2014, it was almost three times more than in 1999. We, humans, keep developing a denser network of cars, as compared to our population, and, at the same time, we keep increasing the relative power capacity which every human can tap into.

Someone could say it is because we simply consume more and more energy per capita. Cool, I check with the World Bank: https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE . Yes, we increase our average annual consumption of energy per one human being, and yet this is a very gentle increment: barely 18% from 1999 through 2014. Nothing to do with the quick accumulation of generative capacity. We keep densifying the global fleet of cars and growing a reserve of power capacity. What are we doing it for?

This is a deep question, and I calculated two additional elasticities with the data at hand. Firstly, I denominated incremental change in the number of new cars per each new human born over the average consumption of energy per capita. In the visual below, this is the coefficient ‘Elasticity of cars per capita to energy per capita’. Between 1999 and 2014, this elasticity had passed from 0,49 to 0,79. We keep accumulating something like an overhead of incremental car fleet, as compared to the amount of energy we consume.

Secondly, I formalized the comparison between individual consumption of energy and average power capacity per capita. This is the ‘Elasticity of capacity per capita to energy per capita’ column in the visual below.  Once again, it is a growing trend.   
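For transparency about the arithmetic behind those three coefficients, here is a small sketch; the input series are placeholder numbers, purely for illustration, not the actual figures from worldometers, the World Bank or the UN statistics quoted above.

```python
import numpy as np

# Placeholder annual series (three consecutive years), purely illustrative.
population = np.array([6.00e9, 6.08e9, 6.16e9])   # global headcount
capacity   = np.array([3.20e9, 3.40e9, 3.65e9])   # installed electrical capacity, kW
cars       = np.array([8.00e8, 8.30e8, 8.65e8])   # global car fleet
energy_pc  = np.array([1600.0, 1630.0, 1665.0])   # energy use per capita, kg of oil equivalent

d = np.diff   # year-on-year increments
elasticity_capacity_to_population   = d(capacity) / d(population)
elasticity_cars_pc_to_energy_pc     = d(cars / population) / d(energy_pc)
elasticity_capacity_pc_to_energy_pc = d(capacity / population) / d(energy_pc)
```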

At the planetary scale, we keep beefing up our collective reserves of energy, and we seriously mean business about dispersing those reserves into networks of small reservoirs, possibly on wheels.

Increased propensity to store is a historically known collective response to anticipated shortage. Do we, the human race, collectively and not quite consciously anticipate a shortage of energy? How could that happen? Our biology should suggest just the opposite. With climate change being around, we technically have more energy in the ambient environment, not less. What exact kind of shortage in energy are we collectively anticipating? This is the type of riddle I like.


[1] Bruzelius, C. (1987). The Construction of Notre-Dame in Paris. The Art Bulletin, 69(4), 540-569. https://doi.org/10.1080/00043079.1987.10788458

[2] Braudel, F. (1992). Civilization and capitalism, 15th-18th century, vol. II: The wheels of commerce (Vol. 2). Univ of California Press.

Unintentional, and yet powerful a reductor

As usual, I work on many things at the same time. I mean, not exactly at the same time, just in a tight alternate sequence. I am doing my own science, and I am doing collective science with other people. Right now, I feel like restating and reframing the main lines of my own science, with the intention both to reframe my own research, and to be a better scientific partner to other researchers.

Such as I see it now, my own science is mostly methodological, and consists in studying human social structures as collectively intelligent ones. I assume that collectively we have a different type of intelligence from the individual one, and most of what we experience as social life is constant learning through experimentation with alternative versions of our collective way of being together. I use artificial neural networks as simulators of collective intelligence, and my essential process of simulation consists in creating multiple artificial realities and comparing them.

I deliberately use very simple, if not simplistic, neural networks, namely those oriented on optimizing just one attribute of theirs, among the many available. I take a dataset, representative of the social structure I study, I take just one variable in the dataset as the optimized output, and I consider the remaining variables as instrumental input. Such a neural network simulates an artificial reality where the social structure studied pursues just one, narrow orientation. I create as many such narrow-minded, artificial societies as I have variables in my dataset. I assess the Euclidean distance between the original empirical dataset and each of those artificial societies.
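A bare-bones version of that procedure could look like the sketch below. The dataset is random noise standing in for real socio-economic data, and the single-neuron perceptron is a stand-in for whatever network one actually uses; the point is the loop: one narrow-minded artificial society per variable, then a Euclidean distance back to the empirical dataset.

```python
import numpy as np

def fit_perceptron(x, y, epochs=200, rate=0.1, seed=0):
    # Bare-bones single-neuron perceptron with tanh activation; returns its fitted values.
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.1, size=(x.shape[1], 1))
    for _ in range(epochs):
        out = np.tanh(x @ w)
        w += rate * x.T @ ((y - out) * (1.0 - out ** 2)) / len(x)
    return np.tanh(x @ w)

rng = np.random.default_rng(6)
data = rng.random((100, 5))          # hypothetical standardized socio-economic dataset
distances = {}
for k in range(data.shape[1]):       # one 'artificial society' per variable
    y = data[:, [k]]                 # variable k plays the optimized output
    x = np.delete(data, k, axis=1)   # the remaining variables play instrumental input
    artificial = data.copy()
    artificial[:, [k]] = fit_perceptron(x, y)
    distances[k] = np.linalg.norm(artificial - data)   # Euclidean distance to the empirical data
closest = min(distances, key=distances.get)  # the orientation the real society resembles the most
```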

It is just now that I realize what kind of implicit assumptions I make when doing so. I assume the actual social reality, manifested in the empirical dataset I study, is a concurrence of different, single-variable-oriented collective pursuits, which remain in some sort of dynamic interaction with each other. The path of social change we take, at the end of the day, manifests the relative prevalence of some among those narrow-minded pursuits, with others being pushed to the second rank of importance.

As I am pondering those generalities, I reconsider the actual scientific writings that I should hatch. Publish or perish, as they say in my profession. With that general method of collective intelligence being assumed in human societies, I focus more specifically on two empirical topics: the market of energy and the transition away from fossil fuels make one stream of my research, whilst the civilisational role of cities, especially in the context of the COVID-19 pandemic, is another stream of me trying to sound smart in my writing.

For now, I focus on issues connected to energy, and I return to revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, as a resubmission to Applied Energy. According to the guidelines of Applied Energy, I am supposed to structure my paper into the following parts: Introduction, Material and Methods, Theory, Calculations, Results, Discussion, and, as sort of a summary pitch, I need to prepare a cover letter where I shortly introduce the reasons why the editor of Applied Energy should bother about my paper at all. On top of all these formally expressed requirements, there is something I noticed about the general style of articles published in Applied Energy: they all demonstrate and discuss strong, sharp-cutting hypotheses, with a pronounced theoretical edge in them. If I want my paper to be accepted by that journal, I need to give it that special style.

That special style requires two things which, honestly, I am not really accustomed to doing. First of all, it requires, precisely, phrasing out very sharp claims. What I like the most is to show people the material and methods which I work with and sort of provoke a discussion around them. When I have to formulate very sharp claims around that basic empirical stuff, I feel a bit awkward. Still, I understand that many people are willing to discuss only when they are truly pissed by the topic at hand, and sharply cut hypotheses serve to fuel that flame.

Second of all, making sharp claims of my own requires passing in thorough review the claims which other researchers phrase out. It requires doing my homework thoroughly in the review-of-literature. Once again, not really a fan of it, on my part, but well, life is brutal, as my parents used to teach me and as I have learnt in my own life. In other words, real life starts when I get out of my comfort zone.

The first body of literature I want to refer to in my revised article is the so-called MuSIASEM framework, AKA Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism. Human societies are assumed to be giant organisms, and transformation of energy is a metabolic function of theirs (e.g. Andreoni 2020[1], Al-Tamimi & Al-Ghamdi 2020[2] or Velasco-Fernández et al. 2020[3]). The MuSIASEM framework is centred around an evolutionary assumption, which I used to find perfectly sound, and which I have come to consider as highly arguable, namely that the best possible state for both a living organism and a human society is that of the highest possible energy efficiency. As regards social structures, energy efficiency is the coefficient of real output per unit of energy consumption, or, in other words, the amount of real output we can produce with 1 kilogram of oil equivalent in energy. My theoretical departure from that assumption started with my own empirical research, published in my article ‘Energy efficiency as manifestation of collective intelligence in human societies’ (Energy, Volume 191, 15 January 2020, 116500, https://doi.org/10.1016/j.energy.2019.116500 ). As I applied my method of computation, with a neural network as simulator of social change, I found out that human societies do not really seem to max out on energy efficiency. Maybe they should, but they don't. It was the first realization, on my part, that we, humans, orient our collective intelligence on optimizing the social structure as such, and whatever comes out of that in terms of energy efficiency is an unintended by-product rather than a purpose. That general impression has been subsequently reinforced by other empirical findings of mine, precisely those which I introduce in the manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, which I am currently revising for resubmission with Applied Energy.

In practical terms, it means that when a public policy states that ‘we should maximize our energy efficiency’, it is a declarative goal which human societies do not actually strive for. It is a little as if a public policy imposed the absolute necessity of being nice to each other and punished any deviation from that imperative. People are nice to each other to the extent of current needs in social coordination, period. The absolute imperative of being nice is frequently the correlate of intense rivalry, e.g. as it was the case with traditional aristocracy. The French even have an expression, which I find profoundly true, namely ‘trop gentil pour être honnête’, which means ‘too nice to be honest’. My personal experience makes me kick into an alert state when somebody is that sort of intensely nice to me.

Passing from metaphors to the actual subject matter of energy management, it is a known fact that highly innovative technologies are usually truly inefficient. Optimization of efficiency, be it energy efficiency or any other aspect thereof, is actually a late stage in the lifecycle of a technology. Deep technological change is usually marked by a temporary slump in efficiency. Imposing energy efficiency as the chief goal of technology-related policies means systematically privileging and promoting technologies with the highest energy efficiency, thus, by metaphorical comparison to humans, technologies in their 40s, past and over the excesses of youth.

The MuSIASEM framework has two other traits which I find arguable, namely the concept of evolutionary purpose, and the imperative of equality between countries in terms of energy efficiency. Researchers who lean towards and into the MuSIASEM methodology claim that it is an evolutionary purpose of every living organism to maximize energy efficiency, and that therefore human societies have the same evolutionary purpose. It further implies that species displaying marked evolutionary success, i.e. significant growth in headcount (sometimes in mandibulae-count, should the head be not really what we mean it to be), achieve that success by being particularly energy efficient. I even did some reading in life sciences, and that claim is not grounded in any science I could find. It seems that energy efficiency, and any denomination of efficiency, as a matter of fact, are very crude proportions we apply to a complex balance of flows about which we still have a lot to learn. Niebel et al. (2019[4]) phrase it out as follows: ‘The principles governing cellular metabolic operation are poorly understood. Because diverse organisms show similar metabolic flux patterns, we hypothesized that a fundamental thermodynamic constraint might shape cellular metabolism. Here, we develop a constraint-based model for Saccharomyces cerevisiae with a comprehensive description of biochemical thermodynamics including a Gibbs energy balance. Non-linear regression analyses of quantitative metabolome and physiology data reveal the existence of an upper rate limit for cellular Gibbs energy dissipation. By applying this limit in flux balance analyses with growth maximization as the objective function, our model correctly predicts the physiology and intracellular metabolic fluxes for different glucose uptake rates as well as the maximal growth rate. We find that cells arrange their intracellular metabolic fluxes in such a way that, with increasing glucose uptake rates, they can accomplish optimal growth rates but stay below the critical rate limit on Gibbs energy dissipation. Once all possibilities for intracellular flux redistribution are exhausted, cells reach their maximal growth rate. This principle also holds for Escherichia coli and different carbon sources. Our work proposes that metabolic reaction stoichiometry, a limit on the cellular Gibbs energy dissipation rate, and the objective of growth maximization shape metabolism across organisms and conditions’.

I feel like restating the very concept of evolutionary purpose as such. Evolution is a mechanism of change through selection. Selection in itself is largely a random process, based on the principle that whatever works for now can keep working until something else works even better. There is hardly any purpose in that. My take on the thing is that living species strive to maximize their intake of energy from the environment rather than their energy efficiency. I even hatched an article about it (Wasniewski 2017[5]).

Now, I pass to the second postulate of the MuSIASEM methodology, namely to the alleged necessity of closing gaps between countries as for their energy efficiency. Professor Andreoni expresses this view quite vigorously in a recent article (Andreoni 2020[6]). I think this postulate holds neither inside the MuSIASEM framework nor outside of it. As for the purely external perspective, I think I have just laid out the main reasons for discarding the assumption that our civilisation should prioritize energy efficiency above other orientations and values. From the internal perspective of MuSIASEM, i.e. if we assume that energy efficiency is a true priority, we need to give that energy efficiency a boost, right? Now, the last time I checked, the only way we, humans, can get better at whatever we want to get better at is to create positive outliers, i.e. situations when we like really nail it better than in other situations. With a bit of luck, those positive outliers become a workable pattern of doing things. In management science, it is known as the principle of best practices. The only way of having positive outliers is to have a hierarchy of outcomes according to the given criterion. When everybody is at the same level, nobody is an outlier, and there is no way we can give ourselves a boost forward.

Good. Those six paragraphs above, they pretty much summarize my theoretical stance as regards the MuSIASEM framework in research about energy economics. Please, note that I respect that stream of research and the scientists involved in it. I think that representing energy management in human social structures as a metabolism is a great idea: it is one of those metaphors which can be fruitfully turned into a quantitative model. Still, I have my reserves.

I go further. A little more review of literature. Here comes a paper by Halbrügge et al. (2021[7]), titled ‘How did the German and other European electricity systems react to the COVID-19 pandemic?’. It makes an interesting point as regards energy economics: the pandemic has induced a new type of risk, namely short-term fluctuations in local demand for electricity. That, in turn, leads to deeper troughs and higher peaks in both the quantity and the price of energy in the market. More risk requires more liquidity: this is a known principle in business. As regards energy, liquidity can be achieved both through inventories, i.e. by developing storage capacity for energy, and through financial instruments. Halbrügge et al. come to the conclusion that such circumstances in the German market have led to the reinforcement of RES (Renewable Energy Sources). RES installations are typically more dispersed, more local in their reach, and more flexible than large power plants. It is much easier to modulate the output of a windfarm or a solar farm, as compared to a large fossil-fuel-based installation.

Keeping an eye on the impact of the pandemic upon the market of energy, I pass to the article titled ‘Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results’, by Salisu, Ebuh & Usman (2020[8]). First of all, a few words of general explanation as for what the hell the oil-stock nexus is. This is a phenomenon, which I first saw research about in 2017, and which consists in a diversification of financial investment portfolios from pure financial stock into various mixes of stock and oil. Somewhere around 2015, people who used to hold their liquid investments just in financial stock (e.g. as I do currently) started to build investment positions in various types of contracts based on the floating inventory of oil: futures, options and whatnot. When I say ‘floating’, it is quite literal: that inventory of oil actually floats, stored on board super-tanker ships, sailing gently through international waters, with proper gravitas (i.e. not too fast).

Long story short, crude oil has been increasingly becoming a financial asset, something like a buffer to hedge against risks encountered in other assets. Whilst the paper by Salisu, Ebuh & Usman is quite technical, without much theoretical generalisation, an interesting observation comes out of it, namely that short-term shocks in financial markets during the pandemic had adversely impacted the price of oil more than the prices of stock. That, in turn, could indicate that crude oil was a good hedging asset just for a certain range of risks, and that, in the presence of price shocks induced by the pandemic, the role of oil could diminish.

Those two papers point at a factor which we almost forgot as regards the market of energy, namely the role of short-term shocks. Until recently, i.e. until COVID-19 hit us hard, the textbook business model in the sector of energy had been that of very predictable demand, nearly constant in the long-perspective and varying in a sinusoidal manner in the short-term. The very disputable concept of LCOE AKA Levelized Cost of Energy, where investment outlays are treated as if they were a current cost, is based on those assumptions. The pandemic has shown a different aspect of energy systems, namely the need for buffering capacity. That, in turn, leads to the issue of adaptability, which, gently but surely leads further into the realm of adaptive changes, and that, ladies and gentlemen, is my beloved landscape of evolutionary, collectively intelligent change.
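Since LCOE comes up here, a minimal sketch of that ‘investment treated as a current cost’ logic, with purely illustrative numbers for a small hydropower installation:

```python
import numpy as np

def lcoe(investment, annual_cost, annual_energy, lifetime_years, discount_rate):
    # Levelized Cost of Energy: discounted lifetime costs divided by discounted lifetime output.
    years = np.arange(1, lifetime_years + 1)
    discount = (1.0 + discount_rate) ** years
    total_cost = investment + np.sum(annual_cost / discount)
    total_energy = np.sum(annual_energy / discount)
    return total_cost / total_energy

cost_per_kwh = lcoe(investment=1_500_000.0,    # upfront outlay
                    annual_cost=30_000.0,      # operation and maintenance per year
                    annual_energy=2_000_000.0, # kWh per year, assumed constant
                    lifetime_years=30,
                    discount_rate=0.05)
```

The formula quietly assumes the constant, predictable output and demand mentioned above; once short-term shocks and the need for buffering capacity enter the picture, that assumption is exactly what becomes disputable.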

Cool. I move forward, and, by the same occasion, I move back. Back to the concept of energy efficiency. Halvorsen & Larsen study the so-called rebound effect as regards energy efficiency (Halvorsen & Larsen 2021[9]). Their paper is interesting for three reasons, the general topic of energy efficiency being the first one. The second one is methodological focus on phenomena which we cannot observe directly, and therefore we observe them through mediating variables, which is theoretically close to my own method of research. Finally, the phenomenon of rebound effect, namely the fact that, in the presence of temporarily increased energy efficiency, the consumers of energy tend to use more of those locally more energy-efficient goods, is essentially a short-term disturbance being transformed into long-term habits. This is adaptive change.

The model construed by Halvorsen & Larsen is a theoretical delight, just something my internal happy bulldog can bite into. They introduce the general assumption that consumption of energy in households is a build-up of different technologies, which can substitute each other under some conditions, and complement each other under others. Households maximize something called ‘energy services’, i.e. everything they can purposefully derive from energy carriers. Halvorsen & Larsen build and test a model where they derive demand for energy services from a whole range of quite practical variables, which all sums up to the following: energy efficiency is indirectly derived from the way that social structures work, and it is highly doubtful whether we can purposefully optimize energy efficiency as such.

Now, here comes the question: what are the practical implications of all those different theoretical stances, I mean mine and those by other scientists? What does it change, and does it change anything at all, if policy makers follow the theoretical line of the MuSIASEM framework, or, alternatively, my approach? I am guessing differences at the level of both the goals, and the real outcomes of energy-oriented policies, and I am trying to wrap my mind around that guessing. Such as I see it, the MuSIASEM approach advocates for putting energy-efficiency of the whole global economy at the top of any political agenda, as a strategic goal. On the path towards achieving that strategic goal, there seems to be an intermediate one, namely that to narrow down significantly two types of discrepancies:

>> firstly, it is about discrepancies between countries in terms of energy efficiency, with a special focus on helping the poorest developing countries in ramping up their efficiency in using energy

>> secondly, there should be a priority to privilege technologies with the highest possible energy efficiency, whilst kicking out those which perform the least efficiently in that respect.    

If I saw a real policy based on those assumptions, I would have a few critical points to make. Firstly, I firmly believe that large human societies just don’t have the institutions to enforce energy efficiency as chief collective purpose. On the other hand, we have institutions oriented on other goals, which are able to ramp up energy efficiency as instrumental change. One institution, highly informal and yet highly efficient, is there, right in front of our eyes: markets and value chains. Each product and each service contain an input of energy, which manifests as a cost. In the presence of reasonably competitive markets, that cost is under pressure from market prices. Yes, we, humans are greedy, and we like accumulating profits, and therefore we squeeze our costs. Whenever energy comes into play as significant a cost, we figure out ways of diminishing its consumption per unit of real output. Competitive markets, both domestic and international, thus including free trade, act as an unintentional, and yet powerful a reductor of energy consumption, and, under a different angle, they remind us to find cheap sources of energy.


[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[2] Al-Tamimi and Al-Ghamdi (2020), ‘Multiscale integrated analysis of societal and ecosystem metabolism of Qatar’ Energy Reports, 6, 521-527, https://doi.org/10.1016/j.egyr.2019.09.019

[3] Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020), A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562.  https://doi.org/10.1016/j.esr.2020.100562

[4] Niebel, B., Leupold, S. & Heinemann, M. An upper limit on Gibbs energy dissipation governs cellular metabolism. Nat Metab 1, 125–132 (2019). https://doi.org/10.1038/s42255-018-0006-7

[5] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation. Energy-Maximizing Adaptation (August 30, 2017). http://dx.doi.org/10.1453/jest.v4i3.1410

[6] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[7] Halbrügge, S., Schott, P., Weibelzahl, M., Buhl, H. U., Fridgen, G., & Schöpf, M. (2021). How did the German and other European electricity systems react to the COVID-19 pandemic?. Applied Energy, 285, 116370. https://doi.org/10.1016/j.apenergy.2020.116370

[8] Salisu, A. A., Ebuh, G. U., & Usman, N. (2020). Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results. International Review of Economics & Finance, 69, 280-294. https://doi.org/10.1016/j.iref.2020.06.023

[9] Halvorsen, B., & Larsen, B. M. (2021). Identifying drivers for the direct rebound when energy efficiency is unknown. The importance of substitution and scale effects. Energy, 222, 119879. https://doi.org/10.1016/j.energy.2021.119879

Alois in the middle

 

I am returning to my syllabuses for the next academic year. I am focusing more specifically on microeconomics. Next year, I am supposed to give lectures in Microeconomics at both the Undergraduate, and the Master’s level. I feel like asking fundamental questions. My fundamental question, as it comes to teaching any curriculum, is the same: what can my students do with it? What is the function and the purpose of microeconomics? Please, notice that I am not asking that frequently stated, rhetorical question ‘What are microeconomics about?’. Well, buddy, microeconomics are about the things you are going to lecture about. Stands to reason. I want to know, and communicate, what is the practical utility, in one’s life, of those things that microeconomics are about.

The basic claim I am focusing on is the following: microeconomics are the accountancy of social structures. They serve exactly the same purpose that any kind of bookkeeping has ever served: to find and exploit patterns in human behaviour, by the means of accurately applied measures. Them ancients, who built those impressive pyramids (who builds a structure without windows and so little free space inside?), very quickly gathered that in order to have one decent pyramid, you need an army of clerks who do the accounting. They used to count stone, people, food, water etc. This is microeconomics, basically.

Thus, microeconomics are what you need if you want to build an ancient pyramid. Now, I am dividing the construction of said ancient pyramid into two stages: Undergraduate, and Master's. An Undergraduate ancient pyramid requires understanding what you need to keep the accounts of if you don't want to be thrown to crocodiles. At the Master's level, you will want to know what the odds are that you find yourself in a social structure where inaccurate accounting, in connection with a pyramid, will have you thrown to crocodiles.

Good, now some literature, and a little turn by my current scientific work on the EneFin concept (see « Which salesman am I? » and « Sans une once d'utopisme » for sort of a current account of that research). I have just read that sort of transitional form of science, between an article and a book, basically a report, by Bleich and Guimaraes (2016[1]). It regards investment in renewable energies, mostly from the strict viewpoint of investment logic. Return on investment, net present value – that kind of thing. As I was making my notes out of that reading, my mind made a jump, and it landed on the cover of the quite-well-known book by Joseph Schumpeter: 'Business Cycles'.

Joseph Schumpeter is an intriguing classic, so to say. Born in 1883, he published 'Business Cycles' in 1939, being 56 years old, after a hell of a ride both for him and for the world, and right at the beginning of another ride (for the world). He was studying economics in Austria, in the early 1900s, when social sciences in general were sort of different from what they are today. They were the living account of a world that was changing at a breath-taking pace. Young Joseph (well, Alois in the middle) Schumpeter witnessed the rise of Marxism, World War I, the dissolution of his homeland, the Austro-Hungarian Empire, and the rise of the German Reich. He moved from academia to banking, and from European banking to American academia.

I deeply believe that whatever kind of story I am telling, whether I am lecturing about economics, discussing a business concept, or chatting about philosophy, at the bottom line I am telling the story of my own existence. I also deeply believe that the same is true for anyone who goes to any lengths in telling a story. We tell stories in order to rationalize that crazy, exciting, unique and deadly something called ‘life’. To me, those ‘Business Cycles’ by Joseph Schumpeter look very much like a rationalized story of quite turbulent a life.

So, here come a few insights I have out of re-reading ‘Business Cycles’ for the n-th time, in the context of research on my EneFin business concept. Any technological change takes place in a chain of value added. Innovation in one tier of the chain needs to overcome the status quo both upstream and downstream of the chain, but once this happens, the whole chain of technologies and goods changes. I wonder how it can apply specifically to EneFin, which is essentially an institutional scheme. In terms of value added, this scheme is situated somewhere between the classical financial markets, and typical social entrepreneurship. It is social to the extent that it creates that quasi-cooperative connexion between the consumers of energy, and its suppliers. Still, as my idea assumes a financial market for those complex contracts « energy + shares in the supplier’s equity », there is a strong capitalist component.

I guess that the resistance this innovation would have to overcome would consist, on one end, in distrust on the part of those hardcore activists of social entrepreneurship, like 'Anything that has anything to do with money is bad!', and, on the other end, in resistance from the classical financial market, namely the willingness to forcibly squeeze the EneFin scheme into some kind of established structure, like the stock market.

The second insight that Joseph has just given me is the following: there is a special type of business model and business action, the entrepreneurial one, centred on innovation rather than on capitalizing on the status quo. This is deep, really. What I could notice, so far, in my research, is that in every industry there are business models which just work, and others which just don't. However innovative you think you are, most of the time either you follow the field-tested patterns or you simply fail. The real, deep technological change starts when this established order gets a wedge stuffed up its ass, and the wedge is, precisely, that entrepreneurial business model. I wonder how entrepreneurial the business model of EneFin is. Is it really as innovative as I think it is?

In the broad theoretical picture, which comes in handy as it comes to science, the incidence of that entrepreneurial business model can be measured and assessed as a probability, and that probability, in turn, is a factor of change. My favourite mathematical approach to structural change is that particular mutation that Paul Krugman[2] made out of the classical production function, as initially formulated by Prof Charles W. Cobb and Prof Paul H. Douglas, in their joint work from 1928[3]. We have some output generated by two factors, one of which changes slowly, whilst the other changes quickly. In other words, we have one quite conservative factor, and another one that takes on the crazy ride of creative destruction.

That second factor is innovation, or, if you want, the entrepreneurial business model. If it is to be powerful, then, mathematically, an incremental change in that innovative factor should bring much greater a result on the side of output than a numerically identical increment in the conservative factor. The classical notation by Cobb and Douglas fits the bill. We have Y = A*F1^a*F2^(1-a), with a > 0,5. Other things being equal, a given relative change in F1 brings more Y than the identical relative change in F2. Now, the big claim by Paul Krugman is that if F1 changes functionally, i.e. if its changes really increase the overall Y, resources will flow from F2 to F1, and a self-reinforcing spiral of change forms: F1 induces faster a change than F2, therefore resources are being transferred to F1, and it induces even more incremental change in F1, which, in turn, makes the Y jump even higher etc.
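Just to see that spiral at work, here is a minimal numerical sketch in Python. Everything in it (the starting endowments, the reallocation rule, the value of a) is an illustrative assumption of mine, not an estimation of anything real:

```python
# Toy simulation of the Krugman-style self-reinforcing spiral on a Cobb-Douglas
# function Y = A * F1**a * F2**(1-a), with a > 0.5. All numbers are made up.
A, a = 1.0, 0.6
F1, F2 = 20.0, 80.0       # start with most resources locked in the conservative factor F2
flow_rate = 0.05          # share of F2 that migrates to F1 per period, when F1 pays better

for t in range(10):
    Y = A * F1 ** a * F2 ** (1 - a)
    mp1 = a * Y / F1              # marginal product of the innovative factor
    mp2 = (1 - a) * Y / F2        # marginal product of the conservative factor
    print(f"t={t}: F1={F1:6.1f}  F2={F2:6.1f}  Y={Y:7.2f}")
    if mp1 > mp2:                 # resources flow towards the factor that pays better
        shift = flow_rate * F2
        F1, F2 = F1 + shift, F2 - shift
```

In this toy run, output keeps climbing as resources migrate towards F1, and the migration stops by itself once the marginal products of the two factors even out.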

I can apply this logic to my scientific approach of the EneFin concept. I assume that introducing the institutional scheme of EneFin can improve the access to electricity in remote, rural locations, in the developing countries, and, consequently, it can contribute to creating whole new markets and social structures. Those local power systems organized along the lines of EneFin are the factor of innovation, the one with the a > 0,5 exponent in the Y = A*F1^a*F2^(1-a) function. The empirical application of this logic requires approximating the value of 'a', somehow. In my research on the fundamental link between population and access to energy, I had those exponents nailed down pretty accurately for many countries in the world. I wonder to what extent I can recycle them intellectually for the purposes of my present research.

As I am thinking on this issue, I will keep talking on something else, and the something else in question is the creation of new markets. I go back to the Venerable Man of microeconomics, the Source of All Wisdom, who used to live with his mother when writing the wisdom which he is so reputed for, today. In other words, I am referring to Adam Smith. Still, just to look original, I will quote his 'Lectures on Justice' first, rather than going directly to his staple book, namely 'An Inquiry into the Nature and Causes of the Wealth of Nations'.

So, in the 'Lectures on Justice', Adam Smith presents his basic considerations about contracts (page 130 and on): « That obligation to performance which arises from contract is founded on the reasonable expectation produced by a promise, which considerably differs from a mere declaration of intention. Though I say I have a mind to do such thing for you, yet on account of some occurrences I do not do it, I am not guilty of breach of promise. A promise is a declaration of your desire that the person for whom you promise should depend on you for the performance of it. Of consequence the promise produces an obligation, and the breach of it is an injury. Breach of contract is naturally the slightest of all injuries, because we naturally depend more on what we possess than what is in the hands of others. A man robbed of five pounds thinks himself much more injured than if he had lost five pounds by a contract ».

People make markets, and markets are made of contracts. A contract implies that two or more people want to do some exchange of value, and they want to perform the exchange without coercion. A contract contains a value that one party engages to transfer on the other party, and, possibly, in the case of mutual contracts, another value will be transferred the other way round. There is one thing about contracts and markets, a paradox as for the role of the state. Private contracts don’t like the government to meddle, but they need the government in order to have any actual force and enforceability. This is one of the central thoughts by another classic, Jean-Jacques Rousseau, in his ‘Social Contract’: if we want enforceable contracts, which can make the intervention of the government superfluous, we need a strong government to back up the enforceability of contracts.

If I want my EneFin scheme to be a game-changer in developing countries, it can work only in countries with relatively well-functioning legal systems. I am thinking about using the metric published by the World Bank, the CPIA property rights and rule-based governance rating.

Still another insight that I have found in Joseph Schumpeter’s ‘Business Cycles’ is that when the entrepreneur, introducing a new technology, struggles against the first inertia of the market, that struggle in itself is a sequence of adaptation, and the strategy(ies) applied in the phases of growth and maturity in the new technology, later on, are the outcome of patterns developed during that early struggle. There is some sort of paradox in that struggle. When the early entrepreneur is progressively building his or her presence in the market, they operate under high uncertainty, and, almost inevitably, do a lot of trial and error, i.e. a lot of adjustments to the initially inaccurate prediction of the future. The developed, more mature version of the newly introduced technology is the outcome of that somehow unique sequence of trials, errors, and adjustments.

Scientifically, that insight means a fundamental uncertainty: once the actual implementation of an entrepreneurial business model, such as EneFin, gets inside that tunnel of learning and struggle, it can take on so many different mutations, and the response of the social environment to those mutations can be so idiosyncratic that we get into really serious economic modelling here.

I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your help with two things that Patreon suggests I ask my patrons about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Bleich, K., & Guimaraes, R. D. (2016). Renewable Infrastructure Investment Handbook: A Guide for Institutional Investors. World Economic Forum, Geneva.

[2] Krugman, P. (1991). Increasing returns and economic geography. Journal of political economy, 99(3), 483-499.

[3] Cobb, C. W., & Douglas, P. H. (1928). A Theory of Production. The American Economic Review, 18(1), Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association, 139–165.

DIY algorithms of our own

I return to that interesting interface of science and business, which I touched upon in my second-to-last update, titled 'Investment, national security, and psychiatry', and which means that I return to discussing two research projects I am starting to be involved in, one in the domain of national security, another one in psychiatry, both connected by the idea of using artificial neural networks as analytical tools. What I intend to do now is to review some literature, just to get the hang of the current state of science.

On top of that, I have been asked by my colleagues to take over, on short notice, the leadership of a big, multi-thread research project in management science. The multitude of threads has emerged as a circumstantial by-product, partly of the disruption caused by the pandemic, and partly of excessive partitioning in the funding of research. As regards the funding of research, Polish universities have sort of two financial streams. One consists of big projects, usually team-based, financed by specialized agencies, such as the National Science Centre (https://www.ncn.gov.pl/?language=en) or the National Centre for Research and Development (https://www.gov.pl/web/ncbr-en). Another one is based on relatively small grants, applied for by and granted to individual scientists by their respective universities, which, in turn, receive bulk subventions from the Ministry of Education and Science. Personally, I think that last category, such as it is being allocated and used now, is a bit of a relic. It is some sort of pocket money for the most urgent and current expenses, relatively small in scale and importance, such as the costs of publishing books and articles, the costs of attending conferences etc. This is a financial paradox: we save and allocate money long in advance, in order to have money for essentially incidental expenses – which come at the very end of the scientific pipeline – and we have to make long-term plans for it. It is a case of fundamental mismatch between the intrinsic properties of a cash flow, on the one hand, and the instruments used for managing that cash flow, on the other hand.

Good. This is an introduction to detailed thinking. Once I have those semantic niceties checked out, I cut into the flesh of thinking, and the first piece I intend to cut out is the state of science as regards Territorial Defence Forces and their role amidst the COVID-19 pandemic. I found an interesting article by Tiutiunyk et al. (2018[1]). It is interesting because it gives a detailed methodology for assessing operational readiness in any military unit, territorial defence or other. That corresponds nicely to Hypothesis #2 which I outlined for that project in national security, namely: 'the actual role played by the TDF during the pandemic was determined by the TDF's actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources'. That article by Tiutiunyk et al. (2018) allows entering into details as regards that claim.

Those details start unfolding from the assumption that operational readiness is there when the entity studied possesses the required quantity of efficient technical and human resources. The underlying mathematical concept is quite simple. In the given situation, an adequate response requires using m units of resources at k% of capacity during time te. The social entity studied can muster n units of the same resources at l% of capacity during the same time te. The most basic expression of operational readiness is, therefore, a coefficient OR = (n*l)/(m*k). I am trying to find out what specific resources are the key to that readiness. Tiutiunyk et al. (2018) offer a few interesting insights in that respect. They start by noticing the otherwise known fact that resources used in crisis situations are not exactly the same as those we use in the everyday course of life and business, and therefore we tend to hold them for a time longer than their effective lifecycle. We don't amortize them properly because we don't really control for their physical and moral depreciation. One of the core concepts in territorial defence is to counter that negative phenomenon, and to maintain, through comprehensive training and internal control, a required level of capacity.
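That coefficient is trivial to compute, but writing it down helps me see what data the interviews will need to yield. A minimal sketch, with purely illustrative numbers of my own:

```python
# Operational readiness as described above: OR = (n*l) / (m*k).
# n units available at l% of capacity, versus m units required at k% of capacity.
def operational_readiness(n, l, m, k):
    return (n * l) / (m * k)

# Illustrative example: the situation requires 40 vehicles at 90% capacity,
# and the unit can actually muster 30 vehicles at 75% capacity.
print(operational_readiness(n=30, l=0.75, m=40, k=0.90))   # 0.625, i.e. under-ready
```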

As I continue going through literature, I come across an interesting study by I. Bet-El (2020), titled: 'COVID-19 and the future of security and defence', published by the European Leadership Network (https://www.europeanleadershipnetwork.org/wp-content/uploads/2020/05/Covid-security-defence-1.pdf). Bet-El introduces an important distinction between threats and risks, and, correspondingly, the distinction between security and defence: 'A threat is a patent, clear danger, while risk is the probability of a latent danger becoming patent; evaluating that probability requires judgement. Within this framework, defence is to be seen as the defeat or deterrence of a patent threat, primarily by military, while security involves taking measures to prevent latent threats from becoming patent and if the measures fail, to do so in such a way that there is time and space to mount an effective defence'. This is deep. I do a lot of research in risk management, especially as I invest in the stock market. When we face a risk factor, our basic behavioural response is hedging or insurance. We hedge by diversifying our exposures to risk, and we insure by sharing the risk with other people. Healthcare systems are a good example of insurance. We have a flow of capital that fuels a manned infrastructure (hospitals, ambulances etc.), and that infrastructure allows each single sick human to share his or her risks with other people. Social distancing is the epidemic equivalent of hedging. When we cut completely, or significantly throttle, social interactions between households, each household is sort of separated from the epidemic risk in other households. When one node in a network is shielded from some of the risk occurring in other nodes, this is hedging.

The military is made for responding to threats rather than risks. Military action is a contingency plan, implemented when insurance and hedging have gone to hell. The pandemic has shown that we need more of such buffers, i.e. more social entities able to mobilise quickly into deterring directly an actual threat. Territorial Defence Forces seem to fit the bill. Another piece of literature, from my own Polish turf, by Gąsiorek & Marek (2020[2]), states straightforwardly that Territorial Defence Forces have proven to be a key actor during the COVID-19 pandemic precisely because they maintain a high degree of actual readiness in their crisis-oriented resources, as compared to other entities in the Polish public sector.

Good. I have a thread, from literature, for the project devoted to national security. The issue of operational readiness seems to be somehow in the centre, and it translates into the apparently fluid frontier between security and national defence. Speed of mobilisation in the available resources, as well as the actual reliability of those resources, once mobilized, look like the key to understanding the surprisingly significant role of Territorial Defence Forces during the COVID-19 pandemic. Looks like my initial hypothesis #2, claiming that the actual role played by the TDF during the pandemic was determined by the TDF's actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources, is some sort of theoretical core to that whole body of research.

In our team, we plan and have a provisional green light to run interviews with the soldiers of Territorial Defence Forces. That basic notion of actually mobilizable resources can help narrow down the methodology to apply in those interviews, by asking specific questions pertinent to that issue. Which specific resources proved to be the most valuable in the actual intervention of TDF in the pandemic? Which resources – if any – proved to be 100% mobilizable on the spot? Which of those resources proved to be much harder to mobilise than initially assumed? Can we rate and rank all the human and technical resources of TDF as regards their capacity to be mobilised?

Good. I gently close the door of that room in my head, filled with Territorial Defence Forces and the pandemic. I make sure I can open it whenever I want, and I open the door to that other room, where psychiatry dwells. The psychiatrists I am working with and I can study a sample of medical records of patients with psychosis. Verbal elocutions of those patients are an important part of that material, and I make two hypotheses along that tangent:

>> Hypothesis #1: the probability of occurrence in specific grammatical structures A, B, C, in the general grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form.

>> Hypothesis #2: the action of written self-reporting, e.g. via email, from the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as transition from mental state A to mental state B.

I start listening to what smarter people than me have to say on the matter. I start with Worthington et al. (2019[3]), and I learn there is a clinical category: clinical high risk for psychosis (CHR-P), thus a set of subtler (than psychotic) 'changes in belief, perception, and thought that appear to represent attenuated forms of delusions, hallucinations, and formal thought disorder'. I like going backwards upstream, and I immediately ask myself whether that line of logic can be reversed. If there is clinical high risk for psychosis, the occurrence of those same symptoms in reverse order, from severe to light, could be a path of healing, couldn't it?

Anyway, according to Worthington et al. (2019), some 25% of people with diagnosed CHR-P transition into fully scaled psychosis. Once again, from the perspective of risk management, 25% of actual occurrence in a risk category is a lot. It means that CHR-P is pretty solid as far as risk assessment goes. I further learn that CHR-P, when represented as a collection of variables (a vector for friends with a mathematical edge), entails an internal distinction into predictors and converters. Predictors are the earliest possible observables, something like a subtle smell of possible s**t, swirling here and there in the ambient air. Converters are pieces of information that bring progressive confirmation to predictors.

That paper by Worthington et al. (2019) is a review of literature in itself, and allows me to compare different approaches to CHR-P. The most solid ones, in terms of accurately predicting the onset of full-clip psychosis, always incorporate two components: assessment of the patient’s social role, and analysis of verbalized thought. Good. Looks promising. I think the initial hypotheses should be expanded into claims about socialization.

I continue with another paper, by Corcoran and Cecchi (2020[4]). Generally, patients with psychotic disorders display lower a semantic coherence than ordinary speakers. The flow of meaning in their speech is impeded: they can express less meaning in the same volume of words, as compared to a mentally healthy person. Reduced capacity to deliver meaning manifests as apparent tangentiality in verbal expression. Psychotic patients seem to wander in their elocutions. Reduced complexity of speech, i.e. relatively low a capacity to swing between different levels of abstraction, with a tendency to exaggerate concreteness, is another observable which informs about psychosis. Two big families of diagnostic methods follow that twofold path. Latent Semantic Analysis (LSA) seems to be the name of the game as regards the study of semantic coherence. Its fundamental assumption is that words convey meaning by connecting to other words, which further unfolds into assuming that semantic similarity, or dissimilarity, can be measured through a more or less complex coefficient of joint occurrence, as opposed to disjoint occurrence, inside big corpuses of language.
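Just to make that operational in my own head, here is a tiny, do-it-yourself sketch in Python, in the LSA spirit: sentences get embedded through TF-IDF plus truncated SVD, and coherence is approximated as the average cosine similarity between consecutive sentences. The sentences are invented by me, and a real study would need a large corpus and a properly trained semantic space:

```python
# A crude proxy for first-order semantic coherence, LSA-style:
# TF-IDF vectors, reduced by truncated SVD, then cosine similarity
# between each sentence and the next one.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "I went to the hospital because my head was aching.",
    "The doctor asked me about my sleep and my appetite.",
    "Sleep is a bridge and the bridge is made of copper wires.",
    "Copper is what the government uses to listen to my thoughts.",
]

tfidf = TfidfVectorizer().fit_transform(sentences)
vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

coherence = np.mean([
    cosine_similarity(vectors[i:i + 1], vectors[i + 1:i + 2])[0, 0]
    for i in range(len(sentences) - 1)
])
print(f"first-order semantic coherence: {coherence:.3f}")
```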

Corcoran and Cecchi (2020) name two main types of digital tools for Latent Semantic Analysis. One is Word2Vec (https://en.wikipedia.org/wiki/Word2vec), and I found a more technical, programmatic approach to it at: https://towardsdatascience.com/a-word2vec-implementation-using-numpy-and-python-d256cf0e5f28 . Another one is GloVe, which I found three interesting references to, at https://nlp.stanford.edu/projects/glove/ , https://github.com/maciejkula/glove-python , and at https://pypi.org/project/glove-py/ .
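For the Word2Vec route, a minimal sketch could look like the one below. I am assuming the gensim library in its 4.x version, and the corpus is, once again, a toy of mine, far too small to yield meaningful vectors:

```python
# Minimal Word2Vec sketch (gensim 4.x API assumed); a real model needs
# a large corpus of transcribed elocutions, not four toy sentences.
from gensim.models import Word2Vec

corpus = [
    ["i", "went", "to", "the", "hospital"],
    ["the", "doctor", "asked", "about", "my", "sleep"],
    ["sleep", "is", "a", "bridge", "of", "copper"],
    ["copper", "wires", "carry", "my", "thoughts"],
]

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)
print(model.wv.similarity("sleep", "doctor"))    # cosine similarity of two word vectors
print(model.wv.most_similar("copper", topn=3))   # nearest neighbours in the embedding
```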

As regards semantic complexity, two types of analytical tools seem to run the show. One is the part-of-speech (POS) algorithm, where we tag words according to their grammatical function in the sentence: noun, verb, determiner etc. There are already existing digital platforms for implementing that approach, such as Natural Language Toolkit (http://www.nltk.org/). Another angle is that of speech graphs, where words are nodes in the network of discourse, and their connections (e.g. joint occurrence) to other words are edges in that network. Now, the intriguing thing about that last thread is that it seems to have been burgeoning in the late 1990s, and then it sort of faded away. Anyway, I found two references for an algorithmic approach to speech graphs, at https://github.com/guillermodoghel/speechgraph , and at https://www.researchgate.net/publication/224741196_A_general_algorithm_for_word_graph_matrix_decomposition .
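Both angles can be prototyped with off-the-shelf tools. Below is a quick sketch of mine: part-of-speech tagging with NLTK, and a rudimentary speech graph with networkx, where each distinct word is a node and each pair of adjacent words is an edge. The text is invented, and the exact names of the NLTK resources to download may differ across versions:

```python
# Part-of-speech tagging plus a rudimentary speech graph.
import nltk
import networkx as nx

nltk.download("punkt", quiet=True)                        # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)   # POS tagger model

text = "Sleep is a bridge and the bridge is made of copper wires."
tokens = nltk.word_tokenize(text.lower())
print(nltk.pos_tag(tokens))            # e.g. [('sleep', 'NN'), ('is', 'VBZ'), ...]

# Speech graph: one node per distinct word, one edge per pair of adjacent words.
G = nx.Graph()
G.add_edges_from(zip(tokens, tokens[1:]))
print(G.number_of_nodes(), G.number_of_edges(), round(nx.density(G), 3))
```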

That quick review of literature, as regards natural language as a predictor of psychosis, leads me to an interesting sidestep. Language is culture, right? Low coherence, and low complexity in natural language are informative about psychosis, right? Now, I put that argument upside down. What if we, homo (mostly) sapiens, have a natural proclivity to psychosis, with that overblown cortex of ours? What if we had figured out, at some point of our evolutionary path, that language is a collectively intelligent tool which, with its unique coherence and complexity required for efficient communication, keeps us in a state of acceptable sanity, until we go on Twitter, of course.

Returning to the intellectual discipline which I should demonstrate, as a respectable researcher, the above review of literature brings one piece of good news, as regards the project in psychiatry. Initially, in this specific team, we assumed that we necessarily need an external partner, most likely a digital business, with important digital resources in AI, in order to run research on natural language. Now, I realized that we can assume two scenarios: one with big, fat AI from that external partner, and another one, with DIY algorithms of our own. Gives some freedom of movement. Cool.


[1] Tiutiunyk, V. V., Ivanets, H. V., Tolkunov, І. A., & Stetsyuk, E. I. (2018). System approach for readiness assessment units of civil defense to actions at emergency situations. Науковий вісник Національного гірничого університету, (1), 99-105. DOI: 10.29202/nvngu/2018-1/7

[2] Gąsiorek, K., & Marek, A. (2020). Działania wojsk obrony terytorialnej podczas pandemii COVID–19 jako przykład wojskowego wsparcia władz cywilnych i społeczeństwa. Wiedza Obronna. DOI: https://doi.org/10.34752/vs7h-g945

[3] Worthington, M. A., Cao, H., & Cannon, T. D. (2019). Discovery and validation of prediction algorithms for psychosis in youths at clinical high risk. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2019.10.006

[4] Corcoran, C. M., & Cecchi, G. (2020). Using language processing and speech analysis for the identification of psychosis and other disorders. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2020.06.004

Investment, national security, and psychiatry

I need to clear my mind a bit. For the last few weeks, I have been working a lot on revising an article of mine, and I feel I need a little bit of a shake-off. I know from experience that I need a structure to break free from another structure. Yes, I am one of those guys. I like structures. When I feel I lack one, I make one.

The structure which I want to dive into, in order to shake off the thinking about my article, is the thinking about my investment in the stock market. My general strategy in that department is to take the rent, which I collect from an apartment in town, every month, and to invest it in the stock market. Economically, it is a complex process of converting the residential utility of a real asset (apartment) into a flow of cash, thus into a financial asset with quite steady a market value (inflation is still quite low), and then I convert that low-risk financial asset into a differentiated portfolio of other financial assets endowed with higher a risk (stock). I progressively move capital from markets with low risk (residential real estate, money) into a high-risk-high-reward market.

I am playing a game. I make a move (monthly cash investment), and I wait for a change in the stock market. I am wrapping my mind around the observable change, and I make my next move the next month. With each move I make, I gather information. What is that information? Let’s have a look at my portfolio such as it is now. You can see it in the table below:

Stock | Value in EUR | Real return in € | Rate of return as of April 6th, 2021, in the morning
CASH & CASH FUND & FTX CASH (EUR) | € 25,82 | € – | € 25,82
ALLEGRO.EU SA | € 48,86 | € (2,82) | -5,78%
ALTIMMUNE INC. – COMM | € 1 147,22 | € 179,65 | 15,66%
APPLE INC. – COMMON ST | € 1 065,87 | € 8,21 | 0,77%
BIONTECH SE | € 1 712,88 | € (149,36) | -8,72%
CUREVAC N.V. | € 711,00 | € (98,05) | -13,79%
DEEPMATTER GROUP PLC | € 8,57 | € (1,99) | -23,26%
FEDEX CORPORATION COMM | € 238,38 | € 33,49 | 14,05%
FIRST SOLAR INC. – CO | € 140,74 | € (11,41) | -8,11%
GRITSTONE ONCOLOGY INC | € 513,55 | € (158,43) | -30,85%
INPOST | € 90,74 | € (17,56) | -19,35%
MODERNA INC. – COMMON | € 879,85 | € (45,75) | -5,20%
NOVAVAX INC. – COMMON STOCK | € 1 200,75 | € 398,53 | 33,19%
NVIDIA CORPORATION – C | € 947,35 | € 42,25 | 4,46%
ONCOLYTICS BIOTCH CM | € 243,50 | € (14,63) | -6,01%
SOLAREDGE TECHNOLOGIES | € 683,13 | € (83,96) | -12,29%
SOLIGENIX INC. COMMON | € 518,37 | € (169,40) | -32,68%
TESLA MOTORS INC. – C | € 4 680,34 | € 902,37 | 19,28%
VITALHUB CORP. | € 136,80 | € (3,50) | -2,56%
WHIRLPOOL CORPORATION | € 197,69 | € 33,11 | 16,75%
TOTAL | € 15 191,41 | € 840,74 | 5,53%

A few words of explanation are due. Whilst I have been actively investing for 13 months, I made this portfolio in November 2020, when I did some major reshuffling. My overall return on the cash invested, over the entire period of 13 months, is 30,64% as of now (April 6th, 2021), which makes 30,64% * (12/13) = 28,3% on an annual basis.

The 5,53% of return which I have on this specific portfolio makes roughly 1/6th of the total return I have on all the portfolios I had over the past 13 months. It is the outcome of my latest experimental round, and this round is very illustrative of the mistake which I know I can make as an investor: panic.

In August and September 2020, I collected some information, I did some thinking, and I made a portfolio of biotech companies involved in the COVID-vaccine story: Pfizer, Biontech, Curevac, Moderna, Novavax, Soligenix. By mid-October 2020, I was literally swimming in ecstasy, as I had returns on these ones like +50%. Pure madness. Then, big financial sharks, commonly called 'investment funds', went hunting for those stocks, and they did what sharks do: they made their target bleed before eating it. They boxed and shorted those stocks in order to make their prices affordably low for long investment positions. At the time, I lost control of my emotions, and when I saw those prices plummet, I sold out everything I had. Almost as soon as I did it, I realized what an idiot I had been. Two weeks later, the same stocks started to rise again. Sharks had had their meal. In response, I did something which I still cannot decide was wise or stupid: I bought back into those positions, only at a price higher than what I sold them for.

Selling out was stupid, for sure. Was buying back in a wise move? I don't know, like really. My intuition tells me that biotech companies in general have a bright future ahead, and not only in connection with vaccines. I am deeply convinced that the pandemic has already built up, and will keep building up, an interest in biotechnology and medical technologies, especially in highly innovative forms. This is even more probable as we realized that modern biotechnology is very largely digital technology. This is what is called 'platforms' in the biotech lingo. These are digital clouds which combine empirical experimental data with artificial intelligence, and the latter is supposed to experiment virtually with that data. Modern biotechnology consists in creating as many alternative combinations of molecules and lifeforms as we possibly can make and study, and then picking those which offer the best combination of biological outcomes with the probability of achieving said outcomes.

My currently achieved rates of return, in the portfolio I have now, are very illustrative of an old principle in capital investment: I will fail most of the time. Most of my investment decisions will be failures, at least in the short and medium term, because I cannot possibly outsmart the incredibly intelligent collective structure of the stock market. My overall gain, those 5,53% in the case of this specific portfolio, is the outcome of 19 experiments, where I fail in 12 of them, for now, and I am more or less successful in the remaining 7.

The very concept of ‘beating the market’, which some wannabe investment gurus present, is ridiculous. The stock market is made of dozens of thousands of human brains, operating in correlated coupling, and leveraged with increasingly powerful artificial neural networks. When I expect to beat that networked collective intelligence with that individual mind of mine, I am pumping smoke up my ass. On the other hand, what I can do is to do as many different experiments as I can possibly spread my capital between.

It is important to understand that any investment strategy, where I assume that from now on, I will not make any mistakes, is delusional. I made mistakes in the past, and I am likely to make mistakes in the future. What I can do is to make myself more predictable to myself. I can narrow down the type of mistakes I tend to make, and to create the corresponding compensatory moves in my own strategy.

Differentiation of risk is a big principle in my investment philosophy, and yet it is not the only one. Generally, with the exception of maybe 2 or 3 days in a year, I don’t really like quick, daily trade in the stock market. I am more of a financial farmer: I sow, and I wait to see plants growing out of those seeds. I invest in industries rather than individual companies. I look for some kind of strong economic undertow for my investments, and the kind of undertow I specifically look for is high potential for deep technological change. Accessorily, I look for industries which sort of logically follow human needs, e.g. the industry of express deliveries in the times of pandemic. I focus on three main fields of technology: biotech, digital, and energy.

Good. I needed to shake off, and I am. Thinking and writing about real business decisions helped me to take some perspective. Now, I am gently returning into the realm of science, without completely leaving the realm of business: I am navigating the somehow troubled and feebly charted waters of money for science. I am currently involved in launching and fundraising for two scientific projects, in two very different fields of science: national security and psychiatry. Yes, I know, they can intersect at more points than we commonly think they can. Still, in canonical scientific terms, these two diverge.

How come I am involved, as researcher, in both national security and psychiatry? Here is the thing: my method of using a simple artificial neural network to simulate social interactions seems to be catching on. Honestly, I think it is catching on because other researchers, when they hear me talking about ‘you know, simulating alternative realities and assessing which one is the closest to the actual reality’ sense in me that peculiar mental state, close to the edge of insanity, but not quite over that edge, just enough to give some nerve and some fun to science.

In the field of national security, I teamed up with a scientist strongly involved in it, and we take on studying the way our Polish forces of Territorial Defence have been acting in and coping with the pandemic of COVID-19. First, the context. So far, the pandemic has worked as a magnifying glass for all the f**kery in public governance. We could all see a minister saying 'A, B and C will happen because we said so', and right after there was just A happening, with a lot of delay, and then a completely unexpected phenomenal D appeared, with B and C bitching and moaning that they don't have the right conditions for happening decently, and therefore they will not happen at all. This is the first piece of the context. The second is the official mission and the reputation of our Territorial Defence Forces AKA TDF. This is a branch of our Polish military, created in 2017 by our right-wing government. From the beginning, these guys had the reputation of being a right-wing militia dressed in uniforms and paid with taxpayers' money. I honestly admit I used to share that view. TDF is something like the National Guard in the US. These are units made of soldiers who serve in the military, and have basic military training, but they have normal civilian lives besides. They have civilian jobs, whilst training regularly and being at the ready should the nation call.

The initial idea of TDF emerged after the Russian invasion of the Crimea, when we became acutely aware that military troops in nondescript uniforms, apparently lost, and yet strangely connected to the Russian government, could massively start looking lost by our Eastern border. The initial idea behind TDF was to significantly increase the capacity of the Polish population for mobilising military resources. Switzerland and Finland largely served as models.

When the pandemic hit, our government could barely pretend they controlled the situation. Hospitals designated as COVID-specific frequently had no resources to carry out that mission. Our government had the idea of mobilising TDF to help with basic stuff: logistics, triage and support in hospitals etc. Once again, the initial reaction of the general public was to put the label of 'militarisation' on that decision, and, once again, I was initially thinking this way. Still, some friends of mine, strongly involved as social workers supporting healthcare professionals, started telling me that working with TDF, in local communities, was nothing short of amazing. TDF had the speed, the diligence, and the capacity to keep their s**t together which many public officials lacked. They were just doing their job and helping tremendously.

I started scratching the surface. I did some research, and I found out that TDF was of invaluable help for many local communities, especially outside of big cities. Recently, I accidentally had a conversation about it with M., the scientist whom I am working with on that project. He just confirmed my initial observations.

M. has strong connections with TDF, including their top command. Our common idea is to collect abundant, interview-based data from TDF soldiers mobilised during the pandemic, as regards the way they carried out their respective missions. The purely empirical edge we want to have here is oriented on defining successes and failures, as well as their context and contributing factors. The first layer of our study is supposed to provide the command of TDF with some sort of case-studies-based manual for future interventions. At the theoretical, more scientific level, we intend to check the following hypotheses:      

>> Hypothesis #1: during the pandemic, TDF has changed its role, under the pressure of external events, from the initially assumed, properly spoken territorial defence, to civil defence and assistance to the civilian sector.

>> Hypothesis #2: the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources.

>> Hypothesis #3: collectively intelligent human social structures form mechanisms of reaction to external stressors, and the chief orientation of those mechanisms is to assure proper behavioural coupling between the action of external stressors, and the coordinated social reaction. Note: I define behavioural coupling in terms of game theory, i.e. as the objectively existing need for proper pacing in action and reaction.

The basic method of verifying those hypotheses consists, in the first place, in translating the primary empirical material into a matrix of probabilities. There is a finite catalogue of operational procedures that TDF can perform. Some of those procedures are associated with territorial military defence as such, whilst other procedures belong to the realm of civil defence. It is supposed to go like: ‘At the moment T, in the location A, procedure of type Si had a P(T,A, Si) probability of happening’. In that general spirit, Hypothesis #1 can be translated straight into a matrix of probabilities, and phrased out as ‘during the pandemic, the probability of TDF units acting as civil defence was higher than seeing them operate as strict territorial defence’.
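A first, crude formalization of that matrix could look like the sketch below, in Python with pandas. Every single label in it (the regions, the procedures, the role tags) is invented by me for illustration; the real categories will only come out of the interviews:

```python
import pandas as pd

# Hypothetical log of TDF actions reconstructed from interviews: one row per
# recorded action, tagged with region, procedure type, and role category.
log = pd.DataFrame({
    "month":     ["2020-04", "2020-04", "2020-05", "2020-05", "2020-05"],
    "region":    ["Mazowieckie", "Podlaskie", "Mazowieckie", "Slaskie", "Podlaskie"],
    "procedure": ["hospital logistics", "border patrol", "triage support",
                  "hospital logistics", "sanitary transport"],
    "role":      ["civil defence", "territorial defence", "civil defence",
                  "civil defence", "civil defence"],
})

# Matrix of empirical probabilities: frequency of each procedure within each region.
P = pd.crosstab(log["region"], log["procedure"], normalize="index")
print(P)

# Hypothesis #1 boiled down to one number: the overall probability of acting as civil defence.
print((log["role"] == "civil defence").mean())
```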

That general probability can be split into local ones, e.g. region-specific. On the other hand, I intuitively associate Hypotheses #2 and #3 with the method which I call 'study of orientation'. I take the matrix of probabilities defined for the purposes of Hypothesis #1, and I put it back to back with a matrix of quantitative data relative to the speed and diligence in action, as regards TDF on the one hand, and other public services on the other hand. It is about the availability of vehicles, capacity of mobilisation in people etc. In general, it is about the so-called 'operational readiness', which you can read more about in, for example, the publications of the RAND Corporation (https://www.rand.org/topics/operational-readiness.html).

Thus, I take the matrix of variables relative to operational readiness observable in the TDF, and I use that matrix as input for a simple neural network, where the aggregate neural activation based on those metrics, e.g. through a hyperbolic tangent, is supposed to approximate a specific probability relative to TDF people endorsing, in their operational procedures, the role of civil defence, against that of military territorial defence. I hypothesise that operational readiness in TDF manifests a collective intelligence at work, doing its best to endorse specific roles and to apply specific operational procedures. I make as many such neural networks as there are operational procedures observed for the purposes of Hypothesis #1. Each of these networks is supposed to represent the collective intelligence of TDF attempting to optimize, through its operational readiness, the endorsement and fulfilment of a specific role. In other words, each network represents an orientation.

Each such network transforms the input data it works with. This is what neural networks do: they experiment with many alternative versions of themselves. Each experimental round, in this case, consists in a vector of metrics informative about the operational readiness of TDF, and that vector locally tries to generate an aggregate outcome – its neural activation – as close as possible to the probability of effectively playing a specific role. This is always a failure: the neural activation of operational readiness always falls short of nailing down exactly the probability it attempts to optimize. There is always a local residual error to account for, and the way a neural network (well, my neural network) accounts for errors consists in measuring them and feeding them into the next experimental round. The point is that each such distinct neural network, oriented on optimizing the probability of Territorial Defence Forces endorsing and fulfilling a specific social role, is a transformation of the original, empirical dataset informative about the TDF's operational readiness.
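Here is a minimal numerical sketch, in Python, of how I picture one such network. The input data is random, standing in for the future interview-based metrics of operational readiness, and I read 'feeding the error into the next round' as a simple delta-rule update of the weights; the actual procedure may well end up different once we have real data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: 12 monthly observations of 4 operational readiness metrics,
# scaled to [0, 1], plus the observed probability of TDF acting as civil defence.
X = rng.uniform(0.2, 1.0, size=(12, 4))
p = rng.uniform(0.5, 0.9, size=12)

w = rng.normal(0.0, 0.1, size=4)   # initial weights
eta = 0.05                         # learning rate
errors = []

for round_ in range(200):                    # experimental rounds
    i = round_ % len(X)
    activation = np.tanh(X[i] @ w)           # aggregate neural activation
    error = p[i] - activation                # local residual error
    w += eta * error * X[i]                  # the error fed into the next round
    errors.append(error)

# Euclidean distance between the transformed dataset (the activations) and the
# target probabilities, as a crude measure of how close this alternative
# reality has come to the observed one.
print(np.linalg.norm(np.tanh(X @ w) - p))
print(np.round(errors[-5:], 4))              # the tail of the learning curve
```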

Thus, in this method, I create as many transformations (AKA alternative versions) of the actual operational readiness in TDF, as there are social roles to endorse and fulfil by TDF. In the next step, I estimate two mathematical attributes of each such transformation: its Euclidean distance from the original empirical dataset, and the distribution of its residual error. The former is informative about similarity between the actual reality of TDF’s operational readiness, on the one hand, and alternative realities, where TDF orient themselves on endorsing and fulfilling just one specific role. The latter shows the process of learning which happens in each such alternative reality.

I make a few methodological hypotheses at this point. Firstly, I expect a few, like 1 ÷ 3 transformations (alternative realities) to fall particularly close to the actual empirical reality, as compared to others. Particularly close means their Euclidean distances from the original dataset will be at least one order of magnitude smaller than those observable in the remaining transformations. Secondly, I expect those transformations to display a specific pattern of learning, where the residual error swings in a predictable cycle, over a relatively wide amplitude, yet inside that amplitude. This is a cycle where the collective intelligence of Territorial Defence Forces goes like: 'We optimize, we optimize, it goes well, we narrow down the error, f**k!, we failed, our error increased, and yet we keep trying, we optimize, we optimize, we narrow down the error once again…' etc. Thirdly, I expect the remaining transformations, namely those much less similar to the actual reality in Euclidean terms, to display different patterns of learning, either completely dishevelled, with the residual error bouncing haphazardly all over the place, or exaggeratedly tight, with the error being narrowed down very quickly and staying small ever since.

That’s the outline of research which I am engaging into in the field of national security. My role in this project is that of a methodologist. I am supposed to design the system of interviews with TDF people, the way of formalizing the resulting data, binding it with other sources of information, and finally carrying out the quantitative analysis. I think I can use the experience I already have with using artificial neural networks as simulators of social reality, mostly in defining said reality as a vector of probabilities attached to specific events and behavioural patterns.     

As regards psychiatry, I have just started to work with a group of psychiatrists who have abundant professional experience in two specific applications of natural language to the diagnosing and treating of psychoses. The first one consists in interpreting patients' elocutions as informative about their likelihood of being psychotic, relapsing into psychosis after therapy, or getting durably better after such therapy. In psychiatry, the durability of therapeutic outcomes is a big thing, as I have already learnt when preparing for this project. The second application is the analysis of patients' emails. Those psychiatrists I am starting to work with use a therapeutic method which engages the patient to maintain contact with the therapist by writing emails. Patients describe, quite freely and casually, their mental state together with their general existential context (job, family, relationships, hobbies etc.). They don't necessarily discuss those emails in subsequent therapeutic sessions; sometimes they do, sometimes they don't. The most important therapeutic outcome seems to be derived from the very fact of writing and emailing.

In terms of empirical research, the semantic material we are supposed to work with in that project are two big sets of written elocutions: patients’ emails, on the one hand, and transcripts of standardized 5-minute therapeutic interviews, on the other hand. Each elocution is a complex grammatical structure in itself. The semantic material is supposed to be cross-checked with neurological biomarkers in the same patients. The way I intend to use neural networks in this case is slightly different from that national security thing. I am thinking about defining categories, i.e. about networks which guess similarities and classification out of crude empirical data. For now, I make two working hypotheses:

>> Hypothesis #1: the probability of occurrence in specific grammatical structures A, B, C, in the general grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form.

>> Hypothesis #2: the action of written self-reporting, e.g. via email, from the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as transition from mental state A to mental state B.

The inflatable dartboard made of fine paper

My views on environmentally friendly production and consumption of energy, and especially on public policies in that field, differ radically from what seems to be currently the mainstream of scientific research and writing. I even got kicked out of a scientific conference because of my views. After my paper was accepted, I received a questionnaire to fill, which was supposed to feed the discussion on the plenary session of that conference. I answered those questions in good faith and sincerely, and: boom! I receive an email which says that my views ‘are not in line with the ideas we want to develop in the scientific community’. You could rightly argue that my views might be so incongruous that kicking me out of that conference was an act of mercy rather than enmity. Good. Let’s pass my views in review.

There is that thing of energy efficiency and climate neutrality. Energy efficiency, i.e. the capacity to derive a maximum of real output out of each unit of energy consumed, can be approached from two different angles: as a stationary value, on the one hand, or an elasticity, on the other hand. We could say: let's consume as little energy as we possibly can and be as productive as possible with that frugal base. That's the stationary view. Yet, we can say: let's rock it, like really. Let's boost our energy consumption so as to get in control of our climate. Let's pass from roughly 30% of energy generated on the surface of the Earth, which we consume now, to like 60% or 70%. Sheer laws of thermodynamics suggest that if we manage to do that, we can really run the show. This is the summary of what, in my views, is not in line with 'the ideas we want to develop in the scientific community'.

Of course, I can put forth any kind of idiocy and claim this is a valid viewpoint. Politics are full of such episodes. I was born and raised in a communist country. I know something about stupid, suicidal ideas being used as axiology for running a nation. I also think that discarding completely other people's 'ideas we want to develop in the scientific community' and considering those people as pathetically lost would be preposterous on my part. We are all essentially wrong about that complex stuff we call 'reality'. It is just that some ways of being wrong are more functional than others. I think the truly correct way to review the current literature on energy-related policies is to take its authors' empirical findings and discuss them under a different interpretation, namely the one sketched in the preceding paragraph.

I like looking at things with precisely that underlying assumption that I don't know s**t about anything, and I just make up cognitive stuff which somehow pays off. I like swinging around that Ockham's razor and cutting out all the strong assumptions, staying just with the weak ones, which do not require much assuming and are at the limit of stylized observations and theoretical claims.

My basic academic background is in law (my Master’s degree), and in economics (my PhD). I look at social reality around me through the double lens of those two disciplines, which, when put in stereoscopic view, boil down to having an eye on patterns in human behaviour.

I think I observe that we, humans, are social and want to stay social, and being social means a baseline mutual predictability in our actions. We are very much about maintaining a certain level of coherence in culture, which means a certain level of behavioural coupling. We would rather die than accept the complete dissolution of that coherence. We, humans, we make behavioural coherence: this is our survival strategy, and it allows us to be highly social. Our cultures always develop along the path of differentiation in social roles. We like specializing inside the social group we belong to.

Our proclivity to endorse specific skillsets, which turn into social roles, has the peculiar property of creating local surpluses, and we tend to trade those surpluses. This is how markets form. In economics, there is that old distinction between production and consumption. I believe that one of the first social thinkers who really meant business about it was Jean Baptiste Say, in his "Treatise of Political Economy". Here >> https://discoversocialsciences.com/wp-content/uploads/2020/03/Say_treatise_political-economy.pdf you have it in the English translation, whilst there >> https://discoversocialsciences.com/wp-content/uploads/2018/04/traite-deconomie-politique-jean-baptiste-say.pdf it is in its elegant French original.

In my perspective, the distinction between production and consumption is instrumental, i.e. it is useful for solving some economic problems, but just some. Saying that I am a consumer is a gross simplification. I am a consumer in some of my actions, but in others I am a producer. As I write this blog, I produce written content. I prefer assuming that production and consumption are two manifestations of the same activity, namely of markets working around tradable surpluses created by homo sapiens as individual homo sapiens endorse specific social roles.

When some scientists bring forth empirically backed claims that our patterns of consumption have the capacity to impact climate (e.g. Bjelle et al. 2021[1]), I say 'Yes, indeed, and at the end of that specific intellectual avenue we find out that creating some specific, tradable surpluses, ergo the fact of endorsing some specific social roles, has the capacity to impact climate'. Bjelle et al. find out something which from my point of view is gobsmacking: whilst the relative prevalence of particular goods in the overall patterns of demand has little effect on the emission of Greenhouse Gases (GHG) at the planetary scale, there are regional discrepancies. In developing countries and in emerging markets, changes in the baskets of goods consumed seem to have a strong impact GHG-wise. On the other hand, in developed economies, however the consumers shift their preferences between different goods, it seems to be very largely climate neutral. From there, Bjelle et al. move towards conclusions about such issues as environmental taxation. My own take on those results is different. What impacts climate is social change occurring in developing economies and emerging markets, and this is relatively quick demographic growth combined with quick creation of new social roles, and a big socio-economic difference between urban environments, and the rural ones.

In the broad theoretical perspective, states of society which we label as classes of socio-economic development are far more than just income brackets. They are truly different patterns of social interactions. I had a glimpse of that when I was comparing data on the consumption of energy per capita (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE) with the distribution of gross national product per capita (https://data.worldbank.org/indicator/NY.GDP.PCAP.CD). It looks as if different levels of economic development were different levels of energy in the social system. Each successive bracket of 100 – 300 kilograms of oil equivalent per capita per year seems to be associated with specific institutions in society.
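If you want to reproduce that quick-and-dirty comparison yourself, below is a minimal sketch in Python. I assume the public World Bank API (v2, JSON format) and the two indicator codes from the URLs quoted above; the reference year 2014 and the 300-kgoe bracket width are purely illustrative choices of mine, not anything prescribed by the data.

```python
import requests
import pandas as pd

def fetch_indicator(code, year=2014):
    """Download one World Bank indicator for all countries for a single year."""
    url = (f"https://api.worldbank.org/v2/country/all/indicator/{code}"
           f"?format=json&date={year}&per_page=500")
    meta, rows = requests.get(url, timeout=30).json()
    # Each record carries the country name and the indicator value (possibly null).
    return pd.Series({r["country"]["value"]: r["value"]
                      for r in rows if r["value"] is not None})

energy = fetch_indicator("EG.USE.PCAP.KG.OE")  # kg of oil equivalent per capita
gdp = fetch_indicator("NY.GDP.PCAP.CD")        # GDP per capita, current US$

# Note: 'country/all' also returns regional aggregates, which add some noise;
# good enough for a rough, exploratory look.
df = pd.concat({"energy_kgoe": energy, "gdp_usd": gdp}, axis=1).dropna()

# Slice the sample into 300-kgoe brackets of energy use per capita and look
# at how GDP per capita is distributed inside each bracket.
df["energy_bracket"] = (df["energy_kgoe"] // 300) * 300
print(df.groupby("energy_bracket")["gdp_usd"].agg(["count", "mean", "std"]))
```

Nothing fancy here: the point is just to see whether countries falling into the same energy-per-capita bracket cluster around similar levels of income, which is the pattern I am claiming to have glimpsed.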

Let’s imagine that climate change goes on. New s**t comes our way, which we need to deal with. We need to learn. We form new skillsets, and we define new social roles. New social roles mean new tradable surpluses, and new markets with new goods in them. We don’t really know what kind of skillsets, markets and goods those will be. Enhanced effort of collective adaptation leads to outcomes impossible to predict in themselves. The question is: can we predict the way those otherwise unpredictable outcomes will take shape?

My fellow scientists seem not to like unpredictable outcomes. Shigetomi et al. (2020[2]) find, quite straightforwardly and empirically, that ‘only the very low, low, and very high-income households are likely to achieve a reduction in carbon footprint due to their high level of environmental consciousness. These income brackets include the majority of elderly households who are likely to have higher consciousness about environmental protection and addressing climate change’. In my fairy-tale, it means that only a fringe of society cares about environment and climate, and this is the fringe which does not really move a lot in terms of new social roles. People with low income have low income because their social roles do not allow them to trade significant surpluses, and elderly people with high income do not really shape the labour market.

This is what I infer from those empirical results. Yet, Shigetomi et al. conclude that ‘The Input-Output Analysis Sustainability Evaluation Framework (IOSEF), as proposed in this study, demonstrates how disparity in household consumption causes societal distortion via the supply chain, in terms of consumption distribution, environmental burdens and household preferences. The IOSEF has the potential to be a useful tool to aid in measuring social inequity and burden distribution allocation across time and demographics’.

Guys, like really. Just sit and think for a moment. I even pass over the claim that inequality of income is a social distortion, although I am tempted to say that no known human society has ever been free of that alleged distortion, and therefore we’d better accommodate it and stop calling it a distortion. What I want is logic. Guys, you have just proven empirically that only low-income people, and elderly high-income people, care about climate and environment. The middle-incomes and the relatively young high-incomes, thus the people who truly run the show of social and technological change, do not care as much as you would like them to. You claim that inequality of income is a distortion, and you want to eliminate it. When you kick inequality out of the social equation, you get rid of the low-income folks, and of the high-income ones. Stands to reason: with enforced equality, everybody is more or less middle-income. Therefore, the majority of society ends up in a social position where they don’t give a f**k about climate and environment. Besides, when you remove inequality, you remove vertical social mobility along hierarchies, and therefore you give the cold shoulder to a fundamental driver of social change. Still, you want social change; you have just said it.

Guys, the conclusions you derive from your own findings are the political equivalent of an inflatable dartboard made of fine paper. Cheap to make, might look dashing, and doomed to be extremely short-lived as soon as used in practice.   


[1] Bjelle, E. L., Wiebe, K. S., Többen, J., Tisserant, A., Ivanova, D., Vita, G., & Wood, R. (2021). Future changes in consumption: The income effect on greenhouse gas emissions. Energy Economics, 95, 105114. https://doi.org/10.1016/j.eneco.2021.105114

[2] Shigetomi, Y., Chapman, A., Nansai, K., Matsumoto, K. I., & Tohno, S. (2020). Quantifying lifestyle based social equity implications for national sustainable development policy. Environmental Research Letters, 15(8), 084044. https://doi.org/10.1088/1748-9326/ab9142