The core of my thinking

I am focusing on one particular aspect of the final revision of my article for the « International Journal of Energy Sector Management » – titled « Climbing the right hill – an evolutionary approach to the European market of electricity » – namely on the relation between my methodology and that of MuSIASEM, i.e. « Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism ».

I refer more specifically to three articles which I consider representative of this research niche:

>> Al-Tamimi and Al-Ghamdi (2020). ‘Multiscale integrated analysis of societal and ecosystem metabolism of Qatar’. Energy Reports, 6, 521-527. https://doi.org/10.1016/j.egyr.2019.09.019

>> Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

>> Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020). A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562. https://doi.org/10.1016/j.esr.2020.100562

Among those three, I subjectively pick the work by professor Andreoni (2020[1]) as the theoretically most solid one. The basic idea of MuSIASEM is to study the energy efficiency of human societies as a metabolism, i.e. as a complex system which sustains and develops itself through the transformation of energy and material resources.

I try to understand and present the basic logic of MuSIASEM by exploring the advantages which professor Andreoni attributes to this method. Let me render a passage of the article (2020[2]) as faithfully as I can: « […] the MuSIASEM approach presents advantages over other methodologies used to study the metabolism of societies, such as ‘emergy’, the ecological footprint and input-output analysis […]. By providing integrated descriptions across different levels of analysis, the MuSIASEM approach does not reduce the information to a single quantitative index and analyses the energy used in relation to concrete socio-economic structures. Moreover, the inclusion of multiple dimensions (such as GDP, human time and energy consumption), in combination with different scales of analysis (such as the sectoral and the national level), makes it possible to provide information relevant to the processes inside the system, as well as to analyse the way external variables (such as economic crisis and resource scarcity) can affect the allocation and use of resources ».

I tell myself that when someone boasts of having advantages over anything else, those advantages reflect what that same someone considers the most important aspects of the phenomena at hand. Thus, professor Andreoni assumes that MuSIASEM allows studying something important – the energy efficiency of societies as a metabolism – while having the advantage of deconstructing aggregate variables into their component variables, as well as that of multi-dimensional analysis.

The variables studied thus seem to be the foundation of the method, so let's talk about variables. In his article, professor Andreoni presents three essential variables:

>> Total human activity, calculated as the product of: [population] x [24 hours] x [365 days]

>> Total energy transformation, calculated as the sum of: [final energy consumption] + [internal energy consumption in the energy sector] + [energy losses in transformation]

>> Gross Domestic Product

These three fundamental variables are studied at three different levels of aggregation. The base level is that of national economies, from which we first decompose into two macroeconomic sectors: households, as opposed to paid activity (businesses plus the public sector). Next, both of these macroeconomic sectors are disaggregated into agriculture, industry and services.

At each level of aggregation, the three fundamental variables are related to one another in order to compute two coefficients: energy intensity and energy metabolism. Energy intensity is calculated as the quantity of energy used to produce one euro of Gross Domestic Product, and it is therefore the inverse of energy efficiency (the latter being calculated as the amount of GDP produced out of one unit of energy). The metabolic coefficient, in turn, is calculated as the quantity of energy per hour of human activity.
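Just to fix ideas, here is a minimal numerical sketch of those three variables and the two derived coefficients, as I read them in Andreoni (2020). All input figures below are hypothetical placeholders of mine, not data from the article.

```python
# Minimal sketch of the three MuSIASEM variables and the two derived coefficients.
# All numbers are hypothetical placeholders, not figures from Andreoni (2020).

population = 38_000_000                       # inhabitants
total_human_activity = population * 24 * 365  # hours of human activity per year

final_energy = 2.6e9              # GJ, final energy consumption
internal_energy_sector = 0.3e9    # GJ, energy used inside the energy sector itself
transformation_losses = 0.5e9     # GJ, losses in energy transformation
total_energy = final_energy + internal_energy_sector + transformation_losses

gdp = 5.0e11                      # euros

energy_intensity = total_energy / gdp                 # energy per euro of GDP
energy_efficiency = 1 / energy_intensity              # GDP per unit of energy
metabolic_rate = total_energy / total_human_activity  # energy per hour of human activity

print(f"Total human activity: {total_human_activity:.3e} h")
print(f"Energy intensity:     {energy_intensity:.3e} GJ/EUR "
      f"(efficiency: {energy_efficiency:.2f} EUR/GJ)")
print(f"Metabolic rate:       {metabolic_rate:.3e} GJ per hour of activity")
```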

I have a few critical remarks about these variables, but before elaborating on them, let me quickly contrast them with my own method. Professor Andreoni's variables are transformations of the variables used in publicly accessible databases. Professor Andreoni thus takes a general method of empirical observation – for instance the method of calculating final energy consumption – and transforms that general method so as to obtain a different view of the same empirical reality. That transformation tends to aggregate "common" variables. I, on my side, use a wide range of variables commonly formalized and presented in publicly accessible databases, plus a small pinch of coefficients which I compute myself. In fact, in my research on energy, I use just two original coefficients, namely the average number of domestic patent applications per 1 million inhabitants, on the one hand, and the average amount of fixed business capital per domestic patent application, on the other. As for the rest, I use common variables. In the article I am finishing for the « International Journal of Energy Sector Management », I use the forty-odd variables of Penn Tables 9.1 (Feenstra et al. 2015[3]), plus World Bank variables on energy (final consumption, share of renewable sources, share of electricity), plus Eurostat data on electricity prices, plus those two coefficients relative to domestic patent applications.

The difference between my method and that of MuSIASEM is thus visible already at the phenomenological level. I take the generally accepted phenomenology – e.g. the phenomenology of energy consumption, or that of economic activity – and then I study the relations between the corresponding variables in order to extract a more complex picture from them. I already know that in my method, the quantity and diversity of variables is a key factor. My results become really robust – i.e. consistent across different empirical samples – when I use a rich panoply of variables. In MuSIASEM, by contrast, they start by constructing their own phenomenology at the very beginning, and then they reason with it.

There seems to be common ground between my method and that of MuSIASEM: we seem to agree that macroeconomic variables, as they are publicly available, give an imperfect image of an otherwise much more complex reality. From there on, however, there is a difference. I assume that if I take many distinct imperfect observations – i.e. many different variables, each slightly beside the reality – I can reconstruct something about said reality by transforming those imperfect observations with a neural network. I thus assume that I do not know in advance in what exact way those variables are imperfect, and I don't care, by the way. It is as if I were reconstructing a crime (I love detective novels) out of a large number of depositions made by witnesses who, at the moment of and in the presence of the crime in question, were either drunk, or drugged, or watching a football match on their phones. I assume that, however unreliable all those witnesses may be, I can interpose and recombine their depositions so as to pin down the scoundrel who killed the old lady. I experiment with different combinations and I try to see which one is the most coherent. In MuSIASEM, by contrast, they establish in advance a method of cross-examining the imperfect depositions of inebriated witnesses, and then they apply it consistently across all cases of such testimony.

Up to that point, my method carries weaker assumptions than MuSIASEM does. As a general rule, I prefer methods with weak assumptions. When I question received ideas, simply by suspending them and checking whether they withstand that suspension, I stand a better chance of finding new and interesting stuff. Now, I allow myself the perverse pleasure of combing through the strong assumptions of MuSIASEM, just to see where I can best stick a pin into them. I start with total human activity, calculated as the product of: [population] x [24 hours] x [365 days]. First remark: the product 24 hours times 365 days = 8760 hours is a constant. If I compare two countries with different populations, their respective total human activities will differ solely through their different demographics. The product [24 hours] x [365 days] is therefore a mathematically redundant decoration. Still, it is a clever redundancy. The product 24 hours times 365 days = 8760 is the multiplication factor commonly used to convert power capacity into effectively accessible energy. You take the power of an atomic bomb, in joules, you convert it into kilowatts, you multiply it by 24 hours times 365 days and boom: you get the amount of energy accessible to the general population if that bomb exploded continuously all year long. You add an extra 24 hours of explosion for leap years, though.
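For the record, that conversion is as simple as it sounds; a two-line sketch, with a purely hypothetical 1 MW of constant capacity:

```python
# The 8760 factor at work: converting a constant power capacity into the energy
# it would deliver over a full (non-leap) year. The 1 MW figure is illustrative.
capacity_kw = 1_000.0
hours_per_year = 24 * 365                     # = 8760
annual_energy_kwh = capacity_kw * hours_per_year
print(f"{annual_energy_kwh:,.0f} kWh per year")   # 8,760,000 kWh
```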

Atomic bomb or not, the product 24 hours times 365 days = 8760 is thus useful when one wants to make an elegant connection between demographics and the transformation of energy, which seems sensible in a research method focused precisely on energy. Is the multiplication "population x 8760 hours in a year" therefore relevant as a measure of human activity? Hmmwell… maybe, at a pinch… I mean, if we have populations that are very similar in terms of lifestyle and technology, they may display similar levels of activity per hour, and hence total levels of human activity that differ only on the basis of their different demographics. Nevertheless, we need really very similar populations. If we take an essential portion of human activity – agricultural output per capita – and compare it across Belgium, Argentina and Botswana, we obtain utterly different coefficients of activity.

I think, therefore, that the assumptions needed to sustain the phenomenological identity total human activity = [population] x [24 hours] x [365 days] are so strong that they become dysfunctional. I thus assume that the MuSIASEM method in fact uses the size of the population as a fundamental variable, full stop. I do the same, by the way. I find demographics playing an unjustly secondary role in economic research. I see many researchers using demographic variables as "calibration" or "adjustment factors". Everything I know about the general theory of complex systems, for instance the research niche on cellular automata (Bandini, Mauri & Serra 2001[4]; Yu et al. 2021[5]) or swarm theory (Gupta & Srivastava 2020[6]), suggests that the size of populations, as well as the intensity of their social interactions, are fundamental attributes of every civilisation.

I find, then, that the phenomenological identity total human activity = [population] x [24 hours] x [365 days] in the MuSIASEM method is a kind of ruse, a somewhat superfluous one, to bring demographics into the core of the reflection on energy efficiency. Consequently, the MuSIASEM metabolic coefficient, calculated as the quantity of energy per hour of human activity, is equivalent to energy consumption per capita. The energy metabolism of a human society is thus defined by energy consumption per capita (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE) together with the energy cost of GDP (https://data.worldbank.org/indicator/EG.USE.COMM.GD.PP.KD). The hyperlinks in parentheses point to the corresponding World Bank databases. When I look at these two coefficients across the world and do something absolutely simplistic – I sort countries and regions into a hierarchical list – two different stories emerge. The coefficient of energy consumption per capita tells a story of a pure and simple hierarchy of economic and social well-being. The higher this coefficient, the more developed the given country, not only in terms of income per capita but also in terms of institutional complexity, human rights, technological complexity and so on.

When I listen to the story told by the energy cost of GDP (https://data.worldbank.org/indicator/EG.USE.COMM.GD.PP.KD), it is as complicated as a police investigation. Guess what Panama, Sri Lanka, Switzerland, Ireland, Malta and the Dominican Republic have in common. Fascinating, isn't it? Well, those 6 countries lead the planetary race for energy efficiency, since all six are able to produce $1000 of GDP with less than 50 kilograms of oil equivalent in energy consumed. To place their feat in a broader geographical context, the United States and Serbia sit more than twice as low in that hierarchy, very close to each other, at 122 kilograms of oil equivalent per $1000 of GDP. That, by the way, places them both close to the planetary average, as well as to the average of countries in the "lower middle income" category.
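For what it is worth, here is how I would do that simplistic ranking in practice. I assume two CSV files exported from the two World Bank indicator pages linked above; the file names and column names are assumptions of mine, to be adapted to the actual download.

```python
# Simplistic ranking of countries on the two World Bank coefficients discussed above.
# Assumes two CSVs exported from the indicator pages EG.USE.PCAP.KG.OE and
# EG.USE.COMM.GD.PP.KD; file and column names are placeholders.
import pandas as pd

per_capita = pd.read_csv("energy_use_per_capita.csv")       # columns: country, value
per_gdp = pd.read_csv("energy_per_1000_usd_gdp.csv")        # columns: country, value

ranking_wellbeing = per_capita.sort_values("value", ascending=False)
ranking_efficiency = per_gdp.sort_values("value", ascending=True)  # lower = more efficient

print(ranking_wellbeing.head(10))    # story #1: hierarchy of economic and social well-being
print(ranking_efficiency.head(10))   # story #2: the puzzling efficiency league table
```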

If I recapitulate my observations on the geography of these two coefficients, different human societies seem to have a very idiosyncratic capacity to optimize the energy cost of GDP at different levels of energy consumption per capita. It is as if there were one way of optimizing energy efficiency when poor, and another way of optimizing the same efficiency when rich and developed.

We, homo sapiens, can do really silly things in our everyday lives, but in the long run we are rather practical, which might explain our current capacity to transform some 30% of the total energy available at the surface of the planet. If there is a hierarchy, that hierarchy probably has a role to play. Hard to say which role exactly, but it seems important to have this hierarchical structure of energy efficiency. This is another point where I diverge from the MuSIASEM method. Researchers active in the MuSIASEM niche assume that maximum energy efficiency is an evolutionary imperative of our civilisation and that all countries should aspire to optimize it. Hierarchies of energy efficiency are thus perceived as a dysfunctional historical accident, probably an effect of oppression of the poor by the rich. Of course, one can ask whether the inhabitants of the Dominican Republic are really so much richer than those of the United States, for them to have an energy efficiency almost three times higher.


[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[2] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[3] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt 

[4] Bandini, S., Mauri, G., & Serra, R. (2001). Cellular automata: From a theoretical parallel computational model to its application to complex systems. Parallel Computing, 27(5), 539-553. https://doi.org/10.1016/S0167-8191(00)00076-4

[5] Yu, J., Hagen-Zanker, A., Santitissadeekorn, N., & Hughes, S. (2021). Calibration of cellular automata urban growth models from urban genesis onwards-a novel application of Markov chain Monte Carlo approximate Bayesian computation. Computers, environment and urban systems, 90, 101689. https://doi.org/10.1016/j.compenvurbsys.2021.101689

[6] Gupta, A., & Srivastava, S. (2020). Comparative analysis of ant colony and particle swarm optimization algorithms for distance optimization. Procedia Computer Science, 173, 245-253. https://doi.org/10.1016/j.procs.2020.06.029

It still seems to be experimenting

I continue with the idea I had baptized "Projet Aqueduc". I am preparing an article on this subject, of the "proof of concept" type. I am writing it in English, and I told myself it would be a good idea to reformulate in French what I have written so far, just to change the intellectual angle, stretch my legs a bit and take some distance.

A proof of concept follows a logic similar to any other scientific article, except that instead of exploring and verifying a theoretical hypothesis of the type "things work the ABCD way, under conditions RTYU", I explore and verify the hypothesis that a practical concept, such as "Projet Aqueduc", has scientific foundations solid enough to make it worth working on and testing in real life. Those scientific foundations come in two layers, so to speak. The base layer consists in reviewing the literature on the subject to see whether someone has already described similar solutions, and there, the trick is to explore different perspectives of similarity. Similar does not mean identical, right? This literature review must yield a logical structure – a model – applicable to empirical research, with variables and constant parameters. Then comes the upper layer of the proof of concept, which consists in conducting empirical research properly speaking with that model.

For the moment, I am at the base layer. I am thus reviewing the literature relevant to hydrological and hydroelectric solutions, while progressively forming a numerical model of "Projet Aqueduc". In this update, I start with a brief recapitulation of the concept and follow up with what I have managed to find in the literature. The basic concept of "Projet Aqueduc" consists in placing, in the course of a river, pumps which work on the principle of the hydraulic ram and which therefore use the kinetic energy of the water to pump part of that water out of the riverbed, towards swamp-like structures whose function is to retain water in the local ecosystem. The hydraulic ram can pump vertically as well as horizontally, and so, before being retained in the swamps, the water passes through a structure similar to an elevated aqueduct (hence the French name of the concept), with flow-equalizing reservoirs, and then descends towards the swamps through hydroelectric turbines. The latter produce energy which is then stored in a storage installation and, from there, sold in order to ensure the financial survival of the whole structure. Wind turbines and/or photovoltaic installations can be added to optimize the production of energy on the land occupied by the entire structure. You can find a more elaborate description of the concept in my update titled « Le Catch 22 dans ce jardin d'Eden ». The feasibility I want to demonstrate is the capacity of this structure to finance itself entirely on the basis of electricity sales, like a regular business, and thus to develop and last without public subsidies. The practical solution I am considering very seriously as a niche for selling that electricity is a charging station for electric vehicles.

The basic approach I use in the proof of concept – thus my base model – consists in representing the concept in question as a chain of technologies:

>> TCES – energy storage

>> TCCS – charging station for electric vehicles

>> TCRP – hydraulic ram pumping

>> TCEW – elevated equalizing reservoirs

>> TCCW – water conveyance and siphoning

>> TCWS – artificial equipment of the swamp-like structures

>> TCHE – hydroelectric turbines

>> TCSW – wind and photovoltaic installations

My starting intuition, which I intend to verify in my research through the literature, is that some of these technologies are rather predictable and well calibrated, while others are fuzzier and subject to change, hence less predictable. The predictable technologies are a sort of anchor for the whole concept, and the fuzzier ones are the object of experimentation.

I start the literature review with the environmental context, hence with hydrology. Variations in the water table, which is the scientific term for groundwater, seem to be the number 1 factor behind anomalies in water retention in artificial reservoirs (Neves, Nunes, & Monteiro 2020[1]). On the other hand, even without detailed hydrological modelling, there is substantial empirical evidence that the size of natural and artificial reservoirs in fluvial plains, as well as the density of their placement and the way they are exploited, have a major influence on practical access to water in local ecosystems. It seems that the size and density of wooded areas intervenes as an equalizing factor in the environmental influence of reservoirs (Chisola, Van der Laan, & Bristow 2020[2]). Compared to other types of technology, hydrology seems to lag somewhat in terms of the pace of innovation, and it also seems that innovation management methods applied successfully elsewhere can work for hydrology, for instance innovation networks or technology incubators (Wehn & Montalvo 2018[3]; Mvulirwenande & Wehn 2020[4]). Rural and agricultural hydrology seems to be more innovative than urban hydrology, by the way (Wong, Rogers & Brown 2020[5]).

What I find rather surprising is the apparent lack of scientific consensus about the quantity of water human societies need. Every assessment on this subject starts with "a lot and certainly too much", and from there on, the "a lot" and the "too much" become rather fuzzy. I have found a single calculation so far, in Hogeboom (2020[6]), who maintains that the average person in developed countries consumes 3800 litres of water per day in total, but that is a very holistic estimate which includes indirect consumption through goods, services and transport. What is consumed directly through the tap and the toilet flush apparently remains a mystery to science, unless science considers the subject too down-to-earth to deal with seriously.

There is an interesting research niche, which some of its representatives call "socio-hydrology", which studies collective behaviour vis-à-vis water and hydrological systems, and which is based on the empirical observation that said collective behaviour adapts, in a way that is both profound and pernicious, to the hydrological conditions the society in question lives with (Kumar et al. 2020[7]). It seems that we collectively adapt to increased water consumption through growing productivity in the exploitation of our hydrological resources, and average income per capita seems to be positively correlated with that productivity (Bagstad et al. 2020[8]). It thus appears that the accumulation and layering of many technologies, characteristic of developed countries, contributes to using water in an ever more productive way. In this context, there is interesting research by Mohamed et al. (2020[9]) advancing the thesis that an arid environment is not only a hydrological state but also a way of managing hydrological resources, on the basis of data that is always incomplete relative to a rapidly changing situation.

A question comes up more or less naturally: in the wake of socio-hydrological adaptation, has anyone presented a concept similar to what I present as "Projet Aqueduc"? Well, I have found nothing identical, yet there are interestingly close ideas. In descriptive hydrology there is the concept of the pseudo-reservoir, meaning a structure such as swamps or shallow aquifers which does not retain water statically, like an artificial lake, but slows down the circulation of water in the river's catchment basin enough to modify the hydrological conditions of the ecosystem (Harvey et al. 2009[10]; Phiri et al. 2021[11]). On the other hand, there is a team of Australian researchers who invented a structure they call by the acronym STORES, whose full name is "short-term off-river energy storage" (Lu et al. 2021[12]; Stocks et al. 2021[13]). STORES is a semi-artificial pumped-storage structure, where an artificial reservoir is built on top of a natural hillock located at some distance from the nearest river, and that reservoir receives water pumped artificially from the river. Those Australian researchers advance, and give scientific evidence to support, the thesis that with a bit of ingenuity such a reservoir can be operated in a closed loop with the river that feeds it, thus creating a water retention system. STORES seems to be relatively the closest to my concept of "Projet Aqueduc", and what is striking is that I had invented my idea for the environment of Europe's alluvial plains, while STORES was developed for the arid, quasi-desert environment of Australia. Finally, there is the idea of so-called "rain gardens", which are a technology for retaining rainwater in the urban environment, in horticultural structures often placed on the roofs of buildings (Bortolini & Zanin 2019[14], for example).

I can provisionally conclude that everything touching strictly on hydrology within "Projet Aqueduc" is subject to rather unpredictable change. What I have been able to infer from the literature resembles a pot of soup boiling under a lid. There is potential for technological change, there is environmental and social pressure, but there are as yet no recurrent institutional mechanisms connecting the one to the other. The technologies TCEW (elevated equalizing reservoirs), TCCW (water conveyance and siphoning) and TCWS (artificial equipment of the swamp-like structures) thus showing a fuzzy future, I move on to the TCRP technology of hydraulic ram pumping. I have found two Chinese articles, which follow one another chronologically and seem, by the way, to have been written by the same team of researchers: Guo et al. (2018[15]) and Li et al. (2021[16]). They show hydraulic ram technology from an interesting angle. On the one hand, the Chinese seem to have given real momentum to innovation in this specific field, at least much more momentum than I have been able to observe in Europe. On the other hand, the estimates of the effective height to which water can be pumped with state-of-the-art hydraulic rams are, respectively, 50 metres in the 2018 article and 30 metres in the 2021 one. Given that the two articles seem to be the fruit of the same project, there appears to have been a fascination followed by a downward correction. Be that as it may, even the more conservative estimate of 30 metres is markedly better than the 20 metres I had been assuming so far.

This relative elevation achievable with hydraulic ram technology matters for the next technology in my chain, namely small hydroelectric turbines, the TCHE. The relative elevation of the water and the flow per second are the two key parameters determining the electric power produced (Cai, Ye & Gholinia 2020[17]), and it turns out that in "Projet Aqueduc", with elevation and flow largely controlled through the hydraulic ram technology, the turbines become somewhat less dependent on natural conditions.

I have found a marvellously encyclopaedic review of the parameters relevant to small hydroelectric turbines in Hatata, El-Saadawi, & Saad (2019[18]). Electric power is thus calculated as: Power = water density (1000 kg/m3) * gravitational acceleration constant (9.8 m/s2) * net head (metres) * Q (flow per second, m3/s).
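As a quick sanity check, here is that formula as a minimal Python function. The 30-metre head in the example is just the conservative ram-pump estimate quoted earlier, and the 1 m3/s flow is an assumption of mine.

```python
# The power formula from Hatata, El-Saadawi & Saad (2019), as quoted above.
def hydro_power_watts(net_head_m: float, flow_m3_s: float,
                      water_density: float = 1000.0, g: float = 9.8) -> float:
    """Electric power (in watts) of a small hydro turbine, before efficiency losses."""
    return water_density * g * net_head_m * flow_m3_s

# Example: a 30-metre head (the conservative ram-pump estimate) and a 1 m3/s flow.
print(hydro_power_watts(30, 1.0) / 1000, "kW")   # = 294 kW
```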

The initial investment in such installations is calculated per unit of power, i.e. on the basis of 1 kilowatt, and is divided into 6 categories: the construction of the water intake, the powerhouse strictly speaking, the turbines, the generator, the auxiliary equipment, and finally the transformer together with the outdoor substation. I tell myself, by the way, that – given the structure of "Projet Aqueduc" – the investment in the construction of the water intake is somehow equivalent to the system of hydraulic rams and elevated reservoirs. In any case:

>> the construction of the water intake, per 1 kW of power ($): 186.216 * Power^(-0.2368) * Head^(-0.597)

>> the powerhouse strictly speaking, per 1 kW of power ($): 1389.16 * Power^(-0.2351) * Head^(-0.0585)

>> the turbines, per 1 kW of power ($):

@ Kaplan turbine: 39398 * Power^(-0.58338) * Head^(-0.113901)

@ Francis turbine: 30462 * Power^(-0.560135) * Head^(-0.127243)

@ radial impulse turbine: 10486.65 * Power^(-0.3644725) * Head^(-0.281735)

@ Pelton turbine: 2 * the radial impulse turbine

>> the generator, per 1 kW of power ($): 1179.86 * Power^(-0.1855) * Head^(-0.2083)

>> the auxiliary equipment, per 1 kW of power ($): 612.87 * Power^(-0.1892) * Head^(-0.2118)

>> the transformer and the outdoor substation, per 1 kW of power ($): 281 * Power^(0.1803) * Head^(-0.2075)

Once the electric power is calculated with the relative head provided by the hydraulic rams, I can calculate the initial investment in hydro-generation as the sum of the positions listed above. Hatata, El-Saadawi, & Saad (2019 op. cit.) also recommend multiplying such a sum by a factor of 1.13 (a "one never knows" factor, so to speak) and assuming that the current annual operating costs will fall between 1% and 6% of the initial investment.
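Here is a minimal sketch of that calculation, using the cost formulas exactly as I transcribed them above. The Kaplan turbine is taken as an example; the 100 kW and 30 m figures are my own placeholders, and the way I roll the per-kW positions into a total is my own reading of the method.

```python
# Sketch of the initial-investment calculation from Hatata, El-Saadawi & Saad (2019),
# with the cost formulas as transcribed above. P is installed power in kW, H is head in m.

def initial_investment_per_kw(P: float, H: float) -> float:
    intake = 186.216 * P**-0.2368 * H**-0.597
    powerhouse = 1389.16 * P**-0.2351 * H**-0.0585
    kaplan_turbine = 39398 * P**-0.58338 * H**-0.113901       # Kaplan chosen as example
    generator = 1179.86 * P**-0.1855 * H**-0.2083
    auxiliary = 612.87 * P**-0.1892 * H**-0.2118
    transformer_substation = 281 * P**0.1803 * H**-0.2075      # exponent kept as transcribed
    return (intake + powerhouse + kaplan_turbine
            + generator + auxiliary + transformer_substation)

P, H = 100.0, 30.0                                   # hypothetical: 100 kW, 30 m of head
capex = initial_investment_per_kw(P, H) * P * 1.13   # the "one never knows" factor
opex_low, opex_high = 0.01 * capex, 0.06 * capex     # 1% to 6% per year
print(f"CAPEX ~ ${capex:,.0f}; yearly O&M between ${opex_low:,.0f} and ${opex_high:,.0f}")
```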

Syahputra & Soesanti (2021[19]) study the case of the Progo river, endowed with a quite modest flow of 6.696 cubic metres per second and located in Kulon Progo Regency (a special region within Yogyakarta, Indonesia). The system of small hydroelectric turbines there supplies electricity to 962 local households, and creates a surplus of 4,263,951 kWh per year of energy to resell to external consumers. In another article, Sterl et al. (2020[20]) study the case of Suriname and advance an interesting thesis, namely that the development of installations based on renewable energies creates a phenomenon of appetite for energy that grows with the eating, and that such development in one energy source – wind, for example – stimulates investment in installations based on other sources, hence hydro and photovoltaics.

These relatively recent studies corroborate those from a few years back, such as Vilanova & Balestieri (2014[21]) or Vieira et al. (2015[22]), with the general conclusion that small hydroelectric turbines have reached a degree of technological sophistication sufficient to yield an economically profitable amount of energy. Besides, there seems to be a lot to gain in this field by optimizing the distribution of power across the different turbines. Back to the most recent publications, I have found quite robust feasibility studies for small hydroelectric turbines, indicating that – provided one is ready to accept a payback period of about 10 to 11 years on the initial investment – small hydro can be exploited profitably even with a relative head below 20 metres (Arthur et al. 2020[23]; Ali et al. 2021[24]).

This is how I arrive at the final portion of the technological chain of "Projet Aqueduc", namely energy storage (TCES) together with TCCS, the charging station for electric vehicles. The power to be installed in a charging station seems to fall between 700 and 1000 kilowatts (Zhang et al. 2018[25]; McKinsey 2018[26]). Below 700 kilowatts, the station can become so hard to access for the average consumer, due to queues, that it may lose the confidence of local customers. Conversely, anything above 1000 kilowatts is really useful only at peak hours in dense urban environments. There are concept studies for charging stations where the energy storage unit is fed from renewable sources (Al Wahedi & Bicer 2020[27]). Zhang et al. (2019[28]) present a ready-made business concept for a charging station located in an urban environment. Apparently, the profitability threshold sits at around 5,100,000 kilowatt hours sold per year.
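A back-of-the-envelope check of that threshold, where the utilisation rate is purely my own assumption, not a figure from Zhang et al. (2019):

```python
# Rough check of the ~5,100,000 kWh/year profitability threshold quoted above.
# The utilisation rate is a hypothetical assumption, not a figure from the paper.
station_power_kw = 1000            # upper end of the 700-1000 kW range
utilisation_rate = 0.60            # assumed share of the 8760 hours actually used
energy_sold_kwh = station_power_kw * 8760 * utilisation_rate
threshold_kwh = 5_100_000
print(f"Energy sold: {energy_sold_kwh:,.0f} kWh/year; "
      f"above threshold: {energy_sold_kwh >= threshold_kwh}")
```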

In terms of storage technology strictly speaking, Li-ion batteries seem to be the baseline solution for now, although a combination with fuel cells or with hydrogen looks promising (Al Wahedi & Bicer 2020 op. cit.; Sharma, Panwar & Tripathi 2020[29]). In general, for the moment, Li-ion batteries show relatively the most sustained pace of innovation (Tomaszewska et al. 2019[30]; de Simone & Piegari 2019[31]; Koohi-Fayegh & Rosen 2020[32]). A recent article by Elmeligy et al. (2021[33]) presents an interesting concept of a mobile storage unit that could travel between several charging stations. As for the initial investment required for a charging station, it still seems to be experimenting, but the margin is narrowing down to somewhere between $600 and $800 per 1 kW of power (Cole & Frazier 2019[34]; Cole, Frazier, Augustine 2021[35]).


[1] Neves, M. C., Nunes, L. M., & Monteiro, J. P. (2020). Evaluation of GRACE data for water resource management in Iberia: a case study of groundwater storage monitoring in the Algarve region. Journal of Hydrology: Regional Studies, 32, 100734. https://doi.org/10.1016/j.ejrh.2020.100734

[2] Chisola, M. N., Van der Laan, M., & Bristow, K. L. (2020). A landscape hydrology approach to inform sustainable water resource management under a changing environment. A case study for the Kaleya River Catchment, Zambia. Journal of Hydrology: Regional Studies, 32, 100762. https://doi.org/10.1016/j.ejrh.2020.100762

[3] Wehn, U., & Montalvo, C. (2018). Exploring the dynamics of water innovation: Foundations for water innovation studies. Journal of Cleaner Production, 171, S1-S19. https://doi.org/10.1016/j.jclepro.2017.10.118

[4] Mvulirwenande, S., & Wehn, U. (2020). Fostering water innovation in Africa through virtual incubation: Insights from the Dutch VIA Water programme. Environmental Science & Policy, 114, 119-127. https://doi.org/10.1016/j.envsci.2020.07.025

[5] Wong, T. H., Rogers, B. C., & Brown, R. R. (2020). Transforming cities through water-sensitive principles and practices. One Earth, 3(4), 436-447. https://doi.org/10.1016/j.oneear.2020.09.012

[6] Hogeboom, R. J. (2020). The Water Footprint Concept and Water’s Grand Environmental Challenges. One earth, 2(3), 218-222. https://doi.org/10.1016/j.oneear.2020.02.010

[7] Kumar, P., Avtar, R., Dasgupta, R., Johnson, B. A., Mukherjee, A., Ahsan, M. N., … & Mishra, B. K. (2020). Socio-hydrology: A key approach for adaptation to water scarcity and achieving human well-being in large riverine islands. Progress in Disaster Science, 8, 100134. https://doi.org/10.1016/j.pdisas.2020.100134

[8] Bagstad, K. J., Ancona, Z. H., Hass, J., Glynn, P. D., Wentland, S., Vardon, M., & Fay, J. (2020). Integrating physical and economic data into experimental water accounts for the United States: Lessons and opportunities. Ecosystem Services, 45, 101182. https://doi.org/10.1016/j.ecoser.2020.101182

[9] Mohamed, M. M., El-Shorbagy, W., Kizhisseri, M. I., Chowdhury, R., & McDonald, A. (2020). Evaluation of policy scenarios for water resources planning and management in an arid region. Journal of Hydrology: Regional Studies, 32, 100758. https://doi.org/10.1016/j.ejrh.2020.100758

[10] Harvey, J.W., Schaffranek, R.W., Noe, G.B., Larsen, L.G., Nowacki, D.J., O’Connor, B.L., 2009. Hydroecological factors governing surface water flow on a low-gradient floodplain. Water Resour. Res. 45, W03421, https://doi.org/10.1029/2008WR007129.

[11] Phiri, W. K., Vanzo, D., Banda, K., Nyirenda, E., & Nyambe, I. A. (2021). A pseudo-reservoir concept in SWAT model for the simulation of an alluvial floodplain in a complex tropical river system. Journal of Hydrology: Regional Studies, 33, 100770. https://doi.org/10.1016/j.ejrh.2020.100770.

[12] Lu, B., Blakers, A., Stocks, M., & Do, T. N. (2021). Low-cost, low-emission 100% renewable electricity in Southeast Asia supported by pumped hydro storage. Energy, 121387. https://doi.org/10.1016/j.energy.2021.121387

[13] Stocks, M., Stocks, R., Lu, B., Cheng, C., & Blakers, A. (2021). Global atlas of closed-loop pumped hydro energy storage. Joule, 5(1), 270-284. https://doi.org/10.1016/j.joule.2020.11.015

[14] Bortolini, L., & Zanin, G. (2019). Reprint of: Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban forestry & urban greening, 37, 74-86. https://doi.org/10.1016/j.ufug.2018.07.003

[15] Guo, X., Li, J., Yang, K., Fu, H., Wang, T., Guo, Y., … & Huang, W. (2018). Optimal design and performance analysis of hydraulic ram pump system. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 232(7), 841-855. https://doi.org/10.1177%2F0957650918756761

[16] Li, J., Yang, K., Guo, X., Huang, W., Wang, T., Guo, Y., & Fu, H. (2021). Structural design and parameter optimization on a waste valve for hydraulic ram pumps. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 235(4), 747–765. https://doi.org/10.1177/0957650920967489

[17] Cai, X., Ye, F., & Gholinia, F. (2020). Application of artificial neural network and Soil and Water Assessment Tools in evaluating power generation of small hydropower stations. Energy Reports, 6, 2106-2118. https://doi.org/10.1016/j.egyr.2020.08.010.

[18] Hatata, A. Y., El-Saadawi, M. M., & Saad, S. (2019). A feasibility study of small hydro power for selected locations in Egypt. Energy Strategy Reviews, 24, 300-313. https://doi.org/10.1016/j.esr.2019.04.013

[19] Syahputra, R., & Soesanti, I. (2021). Renewable energy systems based on micro-hydro and solar photovoltaic for rural areas: A case study in Yogyakarta, Indonesia. Energy Reports, 7, 472-490. https://doi.org/10.1016/j.egyr.2021.01.015

[20] Sterl, S., Donk, P., Willems, P., & Thiery, W. (2020). Turbines of the Caribbean: Decarbonising Suriname’s electricity mix through hydro-supported integration of wind power. Renewable and Sustainable Energy Reviews, 134, 110352. https://doi.org/10.1016/j.rser.2020.110352

[21] Vilanova, M. R. N., & Balestieri, J. A. P. (2014). Hydropower recovery in water supply systems: Models and case study. Energy Conversion and Management, 84, 414-426. https://doi.org/10.1016/j.enconman.2014.04.057

[22] Vieira, D. A. G., Guedes, L. S. M., Lisboa, A. C., & Saldanha, R. R. (2015). Formulations for hydroelectric energy production with optimality conditions. Energy Conversion and Management, 89, 781-788. https://doi.org/10.1016/j.enconman.2014.10.048

[23] Arthur, E., Anyemedu, F. O. K., Gyamfi, C., Asantewaa-Tannor, P., Adjei, K. A., Anornu, G. K., & Odai, S. N. (2020). Potential for small hydropower development in the Lower Pra River Basin, Ghana. Journal of Hydrology: Regional Studies, 32, 100757. https://doi.org/10.1016/j.ejrh.2020.100757

[24] Ali, M., Wazir, R., Imran, K., Ullah, K., Janjua, A. K., Ulasyar, A., … & Guerrero, J. M. (2021). Techno-economic assessment and sustainability impact of hybrid energy systems in Gilgit-Baltistan, Pakistan. Energy Reports, 7, 2546-2562. https://doi.org/10.1016/j.egyr.2021.04.036

[25] Zhang, Y., He, Y., Wang, X., Wang, Y., Fang, C., Xue, H., & Fang, C. (2018). Modeling of fast charging station equipped with energy storage. Global Energy Interconnection, 1(2), 145-152. DOI:10.14171/j.2096-5117.gei.2018.02.006

[26] McKinsey Center for Future Mobility, How Battery Storage Can Help Charge the Electric-Vehicle Market?, February 2018.

[27] Al Wahedi, A., & Bicer, Y. (2020). Development of an off-grid electrical vehicle charging station hybridized with renewables including battery cooling system and multiple energy storage units. Energy Reports, 6, 2006-2021. https://doi.org/10.1016/j.egyr.2020.07.022

[28] Zhang, J., Liu, C., Yuan, R., Li, T., Li, K., Li, B., … & Jiang, Z. (2019). Design scheme for fast charging station for electric vehicles with distributed photovoltaic power generation. Global Energy Interconnection, 2(2), 150-159. https://doi.org/10.1016/j.gloei.2019.07.003

[29] Sharma, S., Panwar, A. K., & Tripathi, M. M. (2020). Storage technologies for electric vehicles. Journal of traffic and transportation engineering (english edition), 7(3), 340-361. https://doi.org/10.1016/j.jtte.2020.04.004

[30] Tomaszewska, A., Chu, Z., Feng, X., O’Kane, S., Liu, X., Chen, J., … & Wu, B. (2019). Lithium-ion battery fast charging: A review. ETransportation, 1, 100011. https://doi.org/10.1016/j.etran.2019.100011

[31] De Simone, D., & Piegari, L. (2019). Integration of stationary batteries for fast charge EV charging stations. Energies, 12(24), 4638. https://doi.org/10.3390/en12244638

[32] Koohi-Fayegh, S., & Rosen, M. A. (2020). A review of energy storage types, applications and recent developments. Journal of Energy Storage, 27, 101047. https://doi.org/10.1016/j.est.2019.101047

[33] Elmeligy, M. M., Shaaban, M. F., Azab, A., Azzouz, M. A., & Mokhtar, M. (2021). A Mobile Energy Storage Unit Serving Multiple EV Charging Stations. Energies, 14(10), 2969. https://doi.org/10.3390/en14102969

[34] Cole, Wesley, and A. Will Frazier. 2019. Cost Projections for Utility-Scale Battery Storage. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-73222. https://www.nrel.gov/docs/fy19osti/73222.pdf

[35] Cole, Wesley, A. Will Frazier, and Chad Augustine. 2021. Cost Projections for Utility-Scale Battery Storage: 2021 Update. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-79236. https://www.nrel.gov/docs/fy21osti/79236.pdf.

Seasonal lakes

Once again, it has been a while since I last blogged. What can I say, I am having a busy summer. Putting order in my own chaos, and, on top of that, putting order in other people’s chaos, is all quite demanding in terms of time and energy. What? Without my trying to put order in chaos, that chaos might take less time and energy? Well, yes, but order looks tidier than chaos.

I am returning to the technological concept which I labelled ‘Energy Ponds’ (or ‘Projet Aqueduc’ in French >> see: Le Catch 22 dans ce jardin d’Eden). You can find a description of that concept under the hyperlinked title provided. I am focusing on refining my repertoire of skills in the scientific validation of technological concepts. I am passing in review some recent literature, and I am trying to find good narrative practices in that domain.

The general background of ‘Energy Ponds’ consists in natural phenomena observable in Europe as climate change progresses, namely: a) a long-term shift in the structure of precipitation, from snow to rain, b) an increasing occurrence of floods and droughts, and c) the spontaneous reemergence of wetlands. All these phenomena have one common denominator: increasingly volatile flow per second in rivers. The essential idea of Energy Ponds is to ‘financialize’ that volatile flow, so to say, i.e. to capture its local surpluses, store them for later, and use the very mechanism of storage itself as a source of economic value.

When water flows downstream, in a river, its retention can be approached as the opportunity for the same water to loop many times over the same specific portion of the collecting basin (of the river). Once such a loop is created, we can extend the average time that a litre of water spends in the whereabouts. Ram pumps, connected to storage structures akin to swamps, can give such an opportunity. A ram pump uses the kinetic energy of flowing water in order to pump some of that flow up and away from its mainstream. Ram pumps allow forcing a process which we know as otherwise natural. Rivers, especially in geological plains, where they flow relatively slowly, tend to build, with time, multiple ramifications. Those branchings can be directly observed at the surface, as meanders, floodplains or seasonal lakes, but many of them are underground, as pockets of groundwater. In this respect, it is useful to keep in mind that mechanically, rivers are the drainpipes of rainwater from their respective basins. Another basic hydrological fact, useful to remember in the context of the Energy Ponds concept, is that strictly speaking retention of rainwater – i.e. a complete halt in its circulation through the collecting basin of the river – is rarely possible, and just as rarely is it a sensible idea to implement. Retention means rather a slowdown of the flow of rainwater through the collecting basin into the river.

One of the ways that water can be slowed down consists in making it loop many times over the same section of the river. Let’s imagine a simple looping sequence: water from the river is being ram-pumped up and away into retentive structures akin to swamps, i.e. moderately deep spongy structures underground, with high capacity for retention, covered with a superficial layer of shallow-rooted vegetation. With time, as the swamp fills with water, the surplus is evacuated back into the river, by a system of canals. Water stored in the swamp will ultimately be evacuated, too, minus evaporation; it will just happen much more slowly, through the intermediary of groundwaters. In order to illustrate the concept mathematically, let’s suppose that we have water in the river flowing at the pace of, e.g., 45 m3 per second. We make it loop once via ram pumps and retentive swamps, and, as a result of that looping, the speed of the flow is sliced by 3. In the long run we slow down the way that the river works as the local drainpipe: we slow it from 45 m3 per second down to [45/3 = 15] m3 per second. As water from the river flows slower overall, it can yield more environmental services: each cubic metre of water has more time to ‘work’ in the ecosystem.
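In code, the whole arithmetic of that paragraph boils down to this (the figures are the illustrative ones used above):

```python
# Looping arithmetic: one pass through ram pumps and retentive swamps slicing
# the effective drainage rate by 3. Figures are the illustrative ones from the text.
river_flow_m3_s = 45.0
slowdown_factor = 3
effective_drainage_m3_s = river_flow_m3_s / slowdown_factor
print(f"{effective_drainage_m3_s:.1f} m3/s effective drainage; "
      f"each m3 spends ~{slowdown_factor}x longer in the local ecosystem")
```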

When I think of it, any human social structure, such as settlements, industries, infrastructures etc., needs to stay in balance with the natural environment. That balance is to be understood broadly, as the capacity to stay, for a satisfactorily long time, within a ‘safety zone’, where the ecosystem simply doesn’t kill us. That view has little to do with the moral concepts of environment-friendliness or sustainability. As a matter of fact, most known human social structures sooner or later fall out of balance with the ecosystem, and this is how civilizations collapse. Thus, here comes the first important assumption: any human social structure is, at some level, an environmental project. The incumbent social structures, which we may consider relatively stable, are environmental projects which have simply held in place long enough to grow social institutions, and those institutions allow further seeking of environmental balance.

I am starting my review of literature with an article by Phiri et al. (2021[1]), where the authors present a model for assessing the way that alluvial floodplains behave. I chose this one because my concept of Energy Ponds is supposed to work precisely in alluvial floodplains, i.e. in places where we have: a) a big river b) a lot of volatility in the amount of water in that river, and, as a consequence, we have (c) an alternation of floods and droughts. Normal stuff where I come from, i.e. in Northern Europe. Phiri et al. use the general model, acronymically called SWAT, which comes from ‘Soil and Water Assessment Tool’ (see also: Arnold et al. 1998[2]; Neitsch et al. 2005[3]), and with that general tool, they study the concept of pseudo-reservoirs in alluvial plains. In short, a pseudo-reservoir is a hydrological structure which works like a reservoir but does not necessarily look like one. In that sense, wetlands in floodplains can work as reservoirs of water, even if from the hydrological point of view they are rather extensions of the main river channel (Harvey et al. 2009[4]).

Analytically, the SWAT model defines the way a reservoir works with the following equation: V = Vstored + Vflowin − Vflowout + Vpcp − Vevap − Vseep. People can rightly argue that it is a good thing to know what the symbols in an equation mean, and therefore: V stands for the volume of water in the reservoir at the end of the day, Vstored corresponds to the amount of water stored at the beginning of the day, Vflowin means the quantity of water entering the reservoir during the day, Vflowout is the volume of water flowing out of the reservoir during the day, Vpcp is the volume of precipitation falling on the water body during the day, Vevap is the volume of water removed from the water body by evaporation during the day, and Vseep is the volume of water lost from the water body by seepage.

It is a good thing to know, as well, what the hell we are supposed to do with such a nice equation in real life, once we have it. Well, the SWAT model even has its fan page (http://www.swatusers.com), and, as Phiri et al. phrase it out, it seems that the best practical use is to control the so-called ‘target release’, i.e. the quantity of water released at a given point in space and time, designated as Vtarg. The target release is mostly used as a control metric for preventing or alleviating floods, and with that purpose in mind, two decision rules are formulated. During the non-flood season, no reservation for flood is needed, and target storage is set at the emergency spillway volume. In other words, in the absence of imminent flood, we can keep the reservoir full. On the other hand, when the flood season is on, the flood control reservation is a function of soil water content: it is set to the maximum, and to 50% of the maximum, for wet and dry grounds respectively. In the context of the V = Vstored + Vflowin − Vflowout + Vpcp − Vevap − Vseep equation, Vtarg is a specific value (or interval of values) in the Vflowout component.
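Just to have something tangible to play with, here is a minimal sketch of that daily balance and of my own reading of the target-release rule. The function names, the season and soil flags, and all the numbers are assumptions of mine for illustration, not figures from Phiri et al.

```python
# Minimal sketch of the SWAT daily reservoir balance and of the target-storage rule
# as I read them in Phiri et al. (2021). All volumes in m3; all figures illustrative.

def reservoir_balance(v_stored, v_flowin, v_flowout, v_pcp, v_evap, v_seep):
    """V at the end of the day: V = Vstored + Vflowin - Vflowout + Vpcp - Vevap - Vseep."""
    return v_stored + v_flowin - v_flowout + v_pcp - v_evap - v_seep

def target_storage(emergency_spillway_volume, max_flood_reservation,
                   flood_season, wet_ground):
    """Vtarg: full reservoir off-season; in flood season, reserve capacity for flood
    (my reading: the reservation is subtracted from the spillway volume)."""
    if not flood_season:
        return emergency_spillway_volume
    reservation = max_flood_reservation if wet_ground else 0.5 * max_flood_reservation
    return emergency_spillway_volume - reservation

v_end = reservoir_balance(v_stored=1.0e6, v_flowin=4.0e5, v_flowout=3.5e5,
                          v_pcp=2.0e4, v_evap=1.5e4, v_seep=1.0e4)
v_targ = target_storage(emergency_spillway_volume=1.2e6, max_flood_reservation=3.0e5,
                        flood_season=True, wet_ground=True)
print(f"V = {v_end:,.0f} m3, Vtarg = {v_targ:,.0f} m3, above target: {v_end > v_targ}")
```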

As I am wrapping my mind around those conditions, I am thinking about the opposite application, i.e. about preventing and alleviating droughts. Drought is recognizable by exceptionally low values in the amount of water stored at the end of the given period, thus in the basic V, in the presence of low precipitation, thus low Vpcp, and high evaporation, which corresponds to high Vevap. More generally, both floods and droughts occur when – or rather after – in a given Vflowin − Vflowout balance, precipitation and evaporation take one turn or another.

I feel like moving those exogenous meteorological factors to one side of the equation, which goes like – Vpcp + Vevap = – V + Vstored + Vflowin − Vflowout − Vseep and doesn’t make much sense, as there are not really many cases of negative precipitation. I need to switch signs, and then it is more presentable, as Vpcp – Vevap = V – Vstored – Vflowin + Vflowout + Vseep. Weeell, almost makes sense. I guess that Vflowin is sort of exogenous, too. The inflow of water into the basin of the river comes from a melting glacier, from another river, from an upstream section of the same river etc. I reframe: Vpcp – Vevap + Vflowin = V – Vstored + Vflowout + Vseep. Now, it makes sense. Precipitation plus the inflow of water through the main channel of the river, minus evaporation, all that stuff creates a residual quantity of water. That residual quantity seeps into the groundwaters (Vseep), flows out (Vflowout), and stays in the reservoir-like structure at the end of the day (V – Vstored).

I am having a look at how Phiri et al. (2021 op. cit.) phrase out their model of the pseudo-reservoir. The output value they peg the whole thing on is Vpsrc, or the quantity of water retained in the pseudo-reservoir at the end of the day. The Vpsrc is modelled for two alternative situations: no flood (V ≤ Vtarg), or flood (V > Vtarg). I interpret drought as a particularly uncomfortable case of the absence of flood.

Whatever. If V ≤ Vtarg, then Vpsrc = Vstored + Vflowin − Vbaseflowout + Vpcp − Vevap − Vseep, where, besides the already known variables, Vbaseflowout stands for the volume of water leaving the pseudo-reservoir during the day as base flow. When, on the other hand, we have a flood, Vpsrc = Vstored + Vflowin − Vbaseflowout − Voverflowout + Vpcp − Vevap − Vseep.

Phiri et al. (2021 op. cit.) argue that once we incorporate the phenomenon of pseudo-reservoirs in the evaluation of possible water discharge from alluvial floodplains, the above-presented equations perform better than the standard SWAT model, i.e. V = Vstored + Vflowin − Vflowout + Vpcp − Vevap − Vseep.
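And here is the pseudo-reservoir variant, in the same spirit as the sketch above; the variable names mirror the equations as transcribed, and the example figures are, again, mine rather than the authors’.

```python
# The two pseudo-reservoir cases from Phiri et al. (2021), as transcribed above:
# base flow only when V <= Vtarg, base flow plus overflow when V > Vtarg.
def v_psrc(v_stored, v_flowin, v_baseflowout, v_pcp, v_evap, v_seep,
           v, v_targ, v_overflowout=0.0):
    if v <= v_targ:   # no flood
        return v_stored + v_flowin - v_baseflowout + v_pcp - v_evap - v_seep
    # flood: the overflow component kicks in
    return v_stored + v_flowin - v_baseflowout - v_overflowout + v_pcp - v_evap - v_seep

# Illustrative call (no-flood case, hypothetical volumes in m3):
print(v_psrc(v_stored=1.0e6, v_flowin=4.0e5, v_baseflowout=2.0e5,
             v_pcp=2.0e4, v_evap=1.5e4, v_seep=1.0e4, v=1.1e6, v_targ=1.2e6))
```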

My principal takeaway from the research by Phiri et al. (2021 op. cit.) is that wetlands matter significantly for the hydrological balance of areas with characteristics of floodplains. My concept of ‘Energy Ponds’ assumes, among other things, storing water in swamp-like structures, including urban and semi-urban ones, such as rain gardens (Sharma & Malaviya 2021[5] ; Li, Liu & Li 2020[6] ; Venvik & Boogaard 2020[7],) or sponge cities (Ma, Jiang & Swallow 2020[8] ; Sun, Cheshmehzangi & Wang 2020[9]).  

Now, I have a few papers which give me a sort of bird’s eye view of the SWAT model as regards the actual predictability of flow and retention in fluvial basins. It turns out that identifying optimal sites for hydropower installations is a very complex task, prone to a lot of error, and only the introduction of digital data such as GIS allows acceptable precision. The problem is to estimate accurately both the flow and the head of the waterway in question at an exact location (Liu et al., 2017[10]; Gollou and Ghadimi 2017[11]; Aghajani & Ghadimi 2018[12]; Yu & Ghadimi 2019[13]; Cai, Ye & Gholinia 2020[14]). My concept of ‘Energy Ponds’ includes hydrogeneration, but makes one of those variables constant, by introducing something like Roman siphons, with a constant head, apparently possible to peg at 20 metres. Hydropower generation seems to be a pseudo-concave function (i.e. it hits quite a broad, concave peak of performance) if the hydraulic head (height differential) is constant, and the associated productivity function is strongly increasing. Analytically, it can be expressed as a polynomial, i.e. as a combination of independent factors with various powers (various impacts) assigned to them (Cordova et al. 2014[15]; Vieira et al. 2015[16]). In other words, introducing that constant head (height) into my technological concept makes the whole thing more amenable to optimization.

Now, I take on a paper which shows how to present a proof of concept properly: Pradhan, A., Marence, M., & Franca, M. J. (2021). The adoption of Seawater Pump Storage Hydropower Systems increases the share of renewable energy production in Small Island Developing States. Renewable Energy, https://doi.org/10.1016/j.renene.2021.05.151 . This paper is quite close to my concept of ‘Energy Ponds’, as it includes the technology of pumped storage, which I am thinking about morphing into something slightly different. As presented by Pradhan, Marence & Franca (2021, op. cit.), the proof of concept is structured in two parts: the general concept is presented, and then a specific location – the island of Curaçao, in this case – is studied as representative for a whole category. The substance of proof is articulated around the following points:

>> the basic diagnosis as for the needs of the local community in terms of energy sources, with the basic question whether a Seawater Pumped Storage Hydropower System is locally suitable as a technology. In this specific case, the main criterion was the possible reduction of dependency on fossil fuels. Assumptions as for the electric power required have been made specifically for the local community.

>> a GIS tool has been tested for choosing the optimal location. GIS stands for Geographic Information System (https://en.wikipedia.org/wiki/Geographic_information_system ). In this specific thread, the proof of concept consisted in checking whether the available GIS data, and the software available for processing it, are sufficient for selecting an optimal location in Curaçao.

At the bottom line, the proof of concept sums up to checking whether the available GIS technology allows calibrating a site for installing the required electrical power in a Seawater Pumped Storage Hydropower System.

That paper by Pradhan, Marence & Franca (2021, op. cit.) presents a few other traits interesting for me. Firstly, the authors prove that combining hydropower with windmills and solar modules is a viable solution, and this is exactly what I thought, only I wasn’t sure. Secondly, the authors consider a very practical issue, namely corrosion, and the materials recommended in order to bypass that problem; their choice is fiberglass. Thirdly, they introduce an important parameter, namely the L/H aka ‘Length to Head’ ratio. This is the proportion between the length of water conductors and the hydraulic head (i.e. the relative denivelation) in the actual installation. Pradhan, Marence & Franca recommend distinguishing two types of installations: those with L/H < 15, on the one hand, and those with 15 ≤ L/H ≤ 25, on the other hand. However accurate that assessment of theirs may be, it is a parameter to consider. In my concept of ‘Energy Ponds’, I assume an artificially created hydraulic head of 20 metres, and thus the conductors leading from elevated tanks to the collecting wetland-type structure should be classified in two types, namely [(L/H < 15) ⇒ (L < 15 × 20 m) ⇒ (L < 300 metres)], on the one hand, and [(15 ≤ L/H ≤ 25) ⇒ (300 metres ≤ L ≤ 500 metres)], on the other hand.
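
A trivial snippet, just to fix those thresholds for my assumed 20-metre head; the cut-off labels are mine:

```python
HEAD = 20.0   # my assumed artificial hydraulic head [m]

def conductor_type(length_m, head_m=HEAD):
    # Classification by the Length-to-Head ratio, after Pradhan, Marence & Franca (2021).
    ratio = length_m / head_m
    if ratio < 15:
        return f"type 1: L/H = {ratio:.1f} < 15, i.e. L < {15 * head_m:.0f} m"
    if ratio <= 25:
        return f"type 2: 15 <= L/H = {ratio:.1f} <= 25, i.e. L <= {25 * head_m:.0f} m"
    return f"L/H = {ratio:.1f} > 25: beyond the two categories considered"

for L in (120, 350, 600):
    print(L, "->", conductor_type(L))
```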

Still, there is bad news for me. According to a report by Botterud, Levin & Koritarov (2014[17]), which Pradhan, Marence & Franca quote as an authoritative source, hydraulic head for pumped storage should be at least 100 metres in order to make the whole thing profitable. My working assumption with ‘Energy Ponds’ is 20 metres, and, obviously, I have to work through it.

I think I have the outline of a structure for writing a decent proof-of-concept article for my ‘Energy Ponds’ concept. I think I should start with something I have already done once, two years ago, namely with compiling data as regards places in Europe, located in fluvial plains, with relatively large volatility in water level and flow. These places will need water retention.

Out of that list, I select locations eligible for creating wetland-type structures for retaining water, either in the form of swamps, or as porous architectural structures. Once that second list is prepared, I assess the local need for electrical power. From there, I reverse-engineer. With a given power of X megawatts, I work back to the storage capacity needed for delivering that power efficiently and cost-effectively. I nail down the storage capacity as such, and I pass in review the available technologies of power storage.

Next, I choose the best storage technology for that specific place, and I estimate the investment outlays necessary for installing it. I calculate the power required in hydroelectric turbines, as well as in adjacent windmills and photovoltaic modules. I check whether the local river can supply the amount of water that fits the bill. I pass in review the literature as regards optimal combinations of those three sources of energy. I calculate the investment outlays needed to install all that stuff, and I add the investment required in ram pumping, elevated tanks, and water conductors.

Then, I do a first approximation of cash flow: cash from sales of electricity in that local installation, minus the possible maintenance costs. After I calculate that gross margin of cash, I compare it to the investment capital I had calculated before, and I try to estimate provisionally the time of return on investment. Once this is done, I add maintenance costs to my sauce. I think that the best way of estimating these is to assume a given lifecycle of complete depreciation of the technology installed, and to count maintenance costs as the corresponding annual amortization.
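
A back-of-the-envelope sketch of that first approximation of payback, folding the maintenance proxy into a single step; every number in the usage line is a hypothetical placeholder, not a result:

```python
def first_cut_payback(capex, annual_mwh_sold, price_per_mwh, depreciation_years=20):
    # Gross cash margin = electricity sales minus maintenance, with maintenance proxied
    # by straight-line amortization over an assumed lifecycle of complete depreciation.
    revenue = annual_mwh_sold * price_per_mwh
    maintenance = capex / depreciation_years
    net_cash = revenue - maintenance
    return float("inf") if net_cash <= 0 else capex / net_cash   # years to recover the outlays

# e.g. EUR 1.5 mln of outlays, 2000 MWh sold per year at EUR 150 per MWh, 20-year lifecycle
print(round(first_cut_payback(1_500_000, 2_000, 150), 1), "years")
```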


[1] Phiri, W. K., Vanzo, D., Banda, K., Nyirenda, E., & Nyambe, I. A. (2021). A pseudo-reservoir concept in SWAT model for the simulation of an alluvial floodplain in a complex tropical river system. Journal of Hydrology: Regional Studies, 33, 100770. https://doi.org/10.1016/j.ejrh.2020.100770.

[2] Arnold, J.G., Srinivasan, R., Muttiah, R.S., Williams, J.R., 1998. Large area hydrological modelling and assessment: Part I. Model development. J. Am. Water Resour. Assoc. 34, 73–89.

[3] Neitsch, S.L., Arnold, J.G., Kiniry, J.R., Williams, J.R., 2005. “Soil and Water Assessment Tool Theoretical Documentation.” Version 2005. Blackland Research Center, Texas.

[4] Harvey, J.W., Schaffranek, R.W., Noe, G.B., Larsen, L.G., Nowacki, D.J., O’Connor, B.L., 2009. Hydroecological factors governing surface water flow on a low-gradient floodplain. Water Resour. Res. 45, W03421, https://doi.org/10.1029/2008WR007129.

[5] Sharma, R., & Malaviya, P. (2021). Management of stormwater pollution using green infrastructure: The role of rain gardens. Wiley Interdisciplinary Reviews: Water, 8(2), e1507. https://doi.org/10.1002/wat2.1507

[6] Li, J., Liu, F., & Li, Y. (2020). Simulation and design optimization of rain gardens via DRAINMOD and response surface methodology. Journal of Hydrology, 585, 124788. https://doi.org/10.1016/j.jhydrol.2020.124788

[7] Venvik, G., & Boogaard, F. C. (2020). Infiltration capacity of rain gardens using full-scale test method: effect of infiltration system on groundwater levels in Bergen, Norway. Land, 9(12), 520. https://doi.org/10.3390/land9120520

[8] Ma, Y., Jiang, Y., & Swallow, S. (2020). China’s sponge city development for urban water resilience and sustainability: A policy discussion. Science of the Total Environment, 729, 139078. https://doi.org/10.1016/j.scitotenv.2020.139078

[9] Sun, J., Cheshmehzangi, A., & Wang, S. (2020). Green infrastructure practice and a sustainability key performance indicators framework for neighbourhood-level construction of sponge city programme. Journal of Environmental Protection, 11(2), 82-109. https://doi.org/10.4236/jep.2020.112007

[10] Liu, Y., Wang, W., & Ghadimi, N. (2017). Electricity load forecasting by an improved forecast engine for building level consumers. Energy, 139, 18–30. https://doi.org/10.1016/j.energy.2017.07.150

[11] Gollou, A. R., & Ghadimi, N. (2017). A new feature selection and hybrid forecast engine for day-ahead price forecasting of electricity markets. Journal of Intelligent & Fuzzy Systems, 32(6), 4031–4045.

[12] Aghajani, G., & Ghadimi, N. (2018). Multi-objective energy management in a micro-grid. Energy Reports, 4, 218–225.

[13] Yu, D., & Ghadimi, N. (2019). Reliability constraint stochastic UC by considering the correlation of random variables with Copula theory. IET Renewable Power Generation, 13(14), 2587–2593.

[14] Cai, X., Ye, F., & Gholinia, F. (2020). Application of artificial neural network and Soil and Water Assessment Tools in evaluating power generation of small hydropower stations. Energy Reports, 6, 2106-2118. https://doi.org/10.1016/j.egyr.2020.08.010

[15] Cordova, M., Finardi, E., Ribas, F., de Matos, V., & Scuzziato, M. (2014). Performance evaluation and energy production optimization in the real-time operation of hydropower plants. Electric Power Systems Research, 116, 201–207. http://dx.doi.org/10.1016/j.epsr.2014.06.012

[16] Vieira, D. A. G., Guedes, L. S. M., Lisboa, A. C., & Saldanha, R. R. (2015). Formulations for hydroelectric energy production with optimality conditions. Energy Conversion and Management, 89, 781-788.

[17] Botterud, A., Levin, T., & Koritarov, V. (2014). Pumped storage hydropower: benefits for grid reliability and integration of variable renewable energy (No. ANL/DIS-14/10). Argonne National Lab.(ANL), Argonne, IL (United States). https://publications.anl.gov/anlpubs/2014/12/106380.pdf

The Catch-22 in this Garden of Eden

It has been quite a while since my last French-language update on this blog, « Discover Social Sciences ». I had not written in French since the spring of 2020. Why am I starting again now? Probably because I need to put the ideas in my head in order. A lot is happening this year, and I discovered, back in 2017, that writing in French helps me bring order to the flow of my thoughts.

I am focusing on a topic I have already developed in the past and which I am going to present at a conference this Friday. It is the concept I previously called « Étangs énergétiques » (‘Energy Ponds’) and which I now present as « Projet aqueduc » (‘Aqueduct Project’). I start with a general description of the concept, and then I will review some recent literature on the subject.

Right, the topic. Here it is. It is a technological concept that combines controlled water retention in ecosystems located along rivers with electricity generation in hydraulic turbines, the whole thing built on wetland-type structures. From a purely hydrological point of view, a river is a gutter that collects the rainwater falling on the surface of its basin. The riverbed is an inclined valley connecting the lowest points of the terrain in question, and so rainwater converges from all points of the river basin towards the river’s mouth.

Sedentary human civilisation is largely based on the fact that river basins have the capacity to retain rainwater for some time before it evaporates or flows into the river. Water is retained at the surface, in the form of lakes, ponds or marshes, and it is retained underground, in the form of various aquifer layers and pockets. Underground retention in rocky aquifer pockets is permanent by nature: water retained there stays until the moment we draw it out. By contrast, surface retention, as well as retention in the shallower aquifer layers, is essentially temporary. Water is merely slowed down in its circulation, both in its physical movement towards the lowest points of the local basin (the nearest river) and in its evaporation towards the atmosphere. The very existence of rivers is also a manifestation of slowed-down circulation: the riverbed cannot evacuate in real time all the water that accumulates in it, and that is why rivers have depth. That depth is a measure of the temporary retention of rainwater.

These fundamental mechanisms work differently depending on geological conditions. For now, I focus on the conditions I know in my own environment, i.e. the ecosystems of the plains and valleys of Northern Europe, roughly north of the Alps. These ecosystems are mostly post-glacial ground moraines, land literally ploughed, sculpted and graded by glaciers. There are not really many deep aquifer pockets in the bedrock; on the other hand, we have many aquifer layers relatively close to the surface. Consequently, there is not much durable accumulation of water, unlike in Southern Europe and North Africa, where rocky aquifer pockets can retain significant quantities of water for decades, even centuries. Water circulation in these plain ecosystems is relatively slow, much slower than in the mountains, which favours the presence of wide, not very deep rivers, as well as the formation of marshes.

In these post-glacial plains of Northern Europe, water flows slowly, accumulates little and evaporates quickly. The ideal form of precipitation under these geological conditions is abundant snow in winter, which melts slowly, drop by drop, in spring, together with slow, long rains. The post-glacial moraine absorbs slowly arriving water well, but it is not really made to absorb torrential rain. With climate change, precipitation has changed: there is much less snow in winter and many more violent downpours. If we want to keep our hydrological system under control, we need water retention technologies to compensate for temporary variations.

Good, that is the context of my idea; here is the idea itself. It consists in creating semi-artificial wetland structures in the vicinity of rivers and filling them with water pumped from those rivers. The pumping technology is the hydraulic ram: a pump that uses the kinetic energy of running water. The general principle is an old thing. From what I have read on the subject, the basic principle, in the form of the paddle wheel, was already in use in ancient Rome and was widely used in European cities until the end of the 18th century. The hydraulic ram itself, a pump that harnesses that kinetic energy of water through a mechanism similar to the heart muscle, fell victim to the hazards of history. Invented in 1792 by Joseph de Montgolfier (yes, one of the famous balloon brothers, the same who, a few years earlier, had flown the first hot-air balloon with his brother Étienne), this technology never really had the opportunity to show all its advantages. In the 19th century, with the creation of modern plumbing systems delivering running water to taps, pumping technologies had to offer enough power to ensure sufficient pressure at the tap, and that is how electric pumps took over. Nevertheless, when it comes to pumping river water slowly towards artificial wetlands, the hydraulic ram is sufficient.

‘Sufficient to do what, exactly?’, one may ask. Here is the rest of my idea. One or more hydraulic rams are immersed in a river. They pump river water towards semi-artificial wetland structures. These wetlands serve to retain rainwater (which is already flowing in the river’s course). The river water I pump towards the wetlands is rainwater that had gravitated, upstream, towards the riverbed. Once in the wetlands, that water will in any case end up gravitating back towards the riverbed at some distance from where it was taken. Pumping and retention in the wetlands serve to slow down the circulation of water in the local ecosystem. Slowed-down circulation means that more water will accumulate in that ecosystem, like a floating reserve. There will be more water in the underground aquifer layers, hence more water in the local wells and, in the long run, more water in the river itself, since the water in the river is water that has flowed into it from and through the local reservoirs.

Up to this point, the idea thus goes as follows: river => hydraulic ram => wetlands => river. I go one step further. Pumping consists in using the kinetic energy of running water. Energy is conserved through transformation: the kinetic energy of the running water turns into the kinetic energy of the pump, which in turn becomes the kinetic energy of the flow towards the wetlands.

The surface of the wetlands lies above the riverbed, unless they are a polder, in which case there is no need for pumping at all. Once the water is discharged into the wetlands, they absorb, in their mass, the kinetic energy of the flow, which turns into potential energy of elevation. What if we amplified that phenomenon? What if we used the kinetic energy captured by the hydraulic ram so as to minimise its dispersion in the mass of the wetlands and to create a maximum of potential energy? Potential energy is proportional to relative elevation: the higher I pump the river water, the more potential energy I recover from the kinetic energy of the pumped flow. The most obvious solution would be a pumped-storage installation, with the retention reservoir placed considerably higher than the river. Although seemingly the most obvious, and carrying some interesting basic principles, this solution has its flaws in terms of flexibility and cost.

The basic principle to keep is the idea of using the potential energy of water pumped up to a certain elevation as a de facto reservoir of electrical energy: it is enough to place hydroelectric turbines downstream of the water stored at height. On the other hand, pumped-storage installations are very costly and very demanding in terms of space. The upper reservoir in pumped-storage installations is supposed to be either a semi-artificial lake or a completely artificial, tower-like reservoir, certainly not a wetland. It is therefore time I explained why I am so attached to this particular hydrological form. Wetlands are relatively cheap to create and maintain, while being relatively easy to place near, and to combine with, human settlements. By ‘relatively’ I mean in comparison with pumped storage.

The marsh is a symbolically negative place in our culture. Evil lurks in the marshes; marshes are unwholesome. My entirely private theory on the matter is that, in the past, human settlements, frequently those that eventually gave birth to cities, were located near marshes, probably because the groundwater level in such places is favourably high. It is easy to dig wells there, to spread irrigation ditches, and small game abounds. The thing is, when homo sapiens abound, they inevitably differentiate into rustic hominids on the one hand and city dwellers on the other. That split is a basic mechanism of human civilisation: the countryside produces food, the city produces new social roles, through intense interaction in a densely populated space. One of the fundamental aspects of the city is that it serves as a permanent experimental laboratory for our technologies, through the construction and reconstruction of buildings. Yes, architecture, together with textiles, shipbuilding and war, has always been among the human activities par excellence oriented towards technological innovation.

The city thus means buildings, and buildings need really firm ground. Marshes become the enemy: they must be drained and durably cut off from the natural hydrological circulation that had shaped them over millennia. Humans and marshes were thus a natural marriage at first, followed by a marital crisis brought on by the necessity of learning new technology, and now genuinely new technology makes a mediation in that couple possible. There is a whole stream of architectural research and innovation concentrated around concepts such as ‘rain gardens’ (Sharma & Malaviya 2021[1]; Li, Liu & Li 2020[2]; Venvik & Boogaard 2020[3]) or ‘sponge cities’ (Ma, Jiang & Swallow 2020[4]; Sun, Cheshmehzangi & Wang 2020[5]). We are developing technologies that make the cohabitation of cities and wetlands not only possible but beneficial for the environment and for city dwellers at the same time.

Question: how do we use the basic principle of pumped storage, i.e. storing the potential energy of water placed at height, without building pumped-storage structures, and in the presence of wetland structures at the boundary between city and countryside? Answer: by building relatively small and light towers, with small equalizing tanks at the top of each tower. A well-built hydraulic ram makes it possible to lift water by roughly 20 metres. One can therefore imagine a network of hydraulic rams installed in the course of a river and connected to small towers of 20 metres each, where each tower is fitted with a downpipe towards the wetlands, and the pipe is fitted with small hydroelectric turbines.

The complete idea thus goes as follows: river => hydraulic ram => water goes up => light 20-metre towers with small equalizing tanks at the top => water goes down => small hydroelectric turbines => wetlands => water accumulates => natural hydrological circulation through the soil => river.
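
To give an order of magnitude for one link in that chain, here is a rough sketch; the ram-pump rule of thumb and all the numbers are ballpark assumptions, not measurements:

```python
RHO, G, HEAD = 1000.0, 9.81, 20.0      # water density [kg/m3], gravity [m/s2], tower height [m]

def ram_pump_delivery(q_drive, h_source, h_lift=HEAD, eff=0.6):
    # Classic rule of thumb for a hydraulic ram: delivered flow is roughly
    # the driving flow times (supply head / delivery head) times an efficiency factor.
    return q_drive * (h_source / h_lift) * eff

def turbine_power_kw(q, head=HEAD, eff=0.8):
    # Power recovered in the small turbine on the way down to the wetland.
    return eff * RHO * G * q * head / 1000.0

q_up = ram_pump_delivery(q_drive=0.5, h_source=1.5)   # 0.5 m3/s driving the ram, 1.5 m of river drop
print(round(q_up, 4), "m3/s lifted;", round(turbine_power_kw(q_up), 1), "kW recovered")
```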

Right, so where is the Catch-22 in this Garden of Eden? In the economics. Good-quality hydraulic rams, as produced today, are expensive, and there are very few solid suppliers of the technology. Most hydraulic rams in use are artisanal contraptions with low power and small throughput. The infrastructure of siphoning towers with good-quality hydroelectric turbines costs money too. If we want to be serious about the electricity side, the whole contraption needs to be equipped with energy storage. The entire infrastructure would require maintenance costs that I do not even know how to calculate. According to my calculations, the sale of electricity produced in this hydrological circuit could yield a return on investment no shorter than 8 to 9 years, and even that is calculated with really high electricity prices.

I need to think about it (more).


[1] Sharma, R., & Malaviya, P. (2021). Management of stormwater pollution using green infrastructure: The role of rain gardens. Wiley Interdisciplinary Reviews: Water, 8(2), e1507. https://doi.org/10.1002/wat2.1507

[2] Li, J., Liu, F., & Li, Y. (2020). Simulation and design optimization of rain gardens via DRAINMOD and response surface methodology. Journal of Hydrology, 585, 124788. https://doi.org/10.1016/j.jhydrol.2020.124788

[3] Venvik, G., & Boogaard, F. C. (2020). Infiltration capacity of rain gardens using full-scale test method: effect of infiltration system on groundwater levels in Bergen, Norway. Land, 9(12), 520. https://doi.org/10.3390/land9120520

[4] Ma, Y., Jiang, Y., & Swallow, S. (2020). China’s sponge city development for urban water resilience and sustainability: A policy discussion. Science of the Total Environment, 729, 139078. https://doi.org/10.1016/j.scitotenv.2020.139078

[5] Sun, J., Cheshmehzangi, A., & Wang, S. (2020). Green infrastructure practice and a sustainability key performance indicators framework for neighbourhood-level construction of sponge city programme. Journal of Environmental Protection, 11(2), 82-109. https://doi.org/10.4236/jep.2020.112007

The type of riddle I like

Once again, I had quite a break in blogging. I spend a lot of time putting together research projects, in a network of many organisations which I am supposed to bring together into collaboration. I give it a lot of time and personal energy. It drains me a bit, and I like that drain. I like the thrill of putting together a team, agreeing about goals and possible openings. Since 2005, when I stopped running my own business and settled for an academic career, I had not experienced that special kind of personal drive. I sincerely believe that every teacher should apply his or her own teaching in their everyday life, just to see if that teaching still corresponds to reality.

This is one of the reasons why I have made it a regular activity of mine to invest in the stock market. I teach economics, and the stock market is very much like the pulse of economics, in all its grades and shades, ranging from hardcore macroeconomic cycles, passing through the microeconomics of specific industries I am currently focusing on with my investment portfolio, and all the way down the path of behavioural economics. I teach management, as well, and putting together new projects in research is the closest I can come, currently, to management science being applied in real life.

Still, besides trying to apply my teaching in real life, I keep doing science. I do research, and I write about the things I think I have found out on that research path of mine. I do a lot of research as regards the economics of energy. Currently, I am still revising a paper of mine, titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’. Around the topic of energy economics, I have built more general a method of studying quantitative socio-economic data, with the technical hypothesis that said data manifests collective intelligence in human social structures. It means that whenever I deal with a collection of quantitative socio-economic variables, I study the dataset at hand by assuming that each multivariate record line in the database is the local instance of an otherwise coherent social structure, which experiments with many such specific instances of itself and selects those offering the best adaptation to the current external stressors. Yes, there is a distinct sound of evolutionary method in that approach.

Over the last three months, I have been slowly ruminating my theoretical foundations for the revision of that paper. Now, I am doing what I love doing: I am disrupting the gently predictable flow of theory with some incongruous facts. Yes, facts don’t know how to behave themselves, like really. Here is an interesting fact about energy: between 1999 and 2016, at the planetary scale, there had been more and more new cars produced per each new human being born. This is visualised in the composite picture below. Data about cars comes from https://www.worldometers.info/cars/ , whilst data about the headcount of population comes from the World Bank (https://data.worldbank.org/indicator/SP.POP.TOTL ).

Now, the meaning of all that. I mean, not ALL THAT (i.e. reality and life in general), just all that data about cars and population. Why do we consistently make more and more physical substance of cars per each new human born? Two explanations come to my mind. One politically correct and nicely environmentalist: we are collectively dumb as f**k and we keep overshooting the output of cars over and above the incremental change in population. The latter, when translated into a rate of growth, tends to settle down (https://data.worldbank.org/indicator/SP.POP.GROW ). Yeah, those capitalists who own car factories just want to make more and more money, and therefore they make more and more cars. Yeah, those corrupt politicians want to conserve jobs in the automotive industry, and they support it. Yeah, f**k them all! Yeah, cars destroy the planet!

I checked. The first door I knocked at was General Motors (https://investor.gm.com/sec-filings ). What I can see is that they actually make more and more operational money by making fewer and fewer cars. Their business used to be overshot in terms of volume, and now they are slowly making sense, and money, out of making fewer cars. Then I checked with Toyota (https://global.toyota/en/ir/library/sec/ ). These guys look as if they were struggling to maintain their capacity to make approximately the same operational surplus each year, and they seem to be experimenting with the number of cars they need to put out in order to stay in good financial shape. When I say ‘experimenting’, it means experimenting upwards or downwards.

As a matter of fact, the only player who seems to be unequivocally making more operational money out of making more cars is Tesla (https://ir.tesla.com/#tab-quarterly-disclosure). There comes another explanation – much less politically correct, if at all – for there being more cars made per each new human, and it says that we, humans, are collectively intelligent, and we have a good reason for making more and more cars per each new human coming to this realm of tears, and the reason is to store energy in a movable, possibly auto-movable a form. Yes, each car has a fuel tank or a set of batteries, in the case of them Teslas or other electric f**kers. Each car is a moving reservoir of chemical energy, readily converted into kinetic energy, which, in turn, has economic utility. Making more cars with batteries pays off better than making more cars with combustible fuel in their tanks: a new generation of movable reservoirs of chemical energy is replacing an older generation thereof.
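
Just to put rough numbers on that ‘moving reservoir’ claim, a quick back-of-the-envelope comparison; the tank size, battery size and conversion efficiencies below are commonly quoted ballpark figures, nothing more:

```python
GASOLINE_KWH_PER_L = 8.9          # approximate lower heating value of gasoline
tank_l, battery_kwh = 50, 75      # an ordinary fuel tank vs a mid-range EV pack

tank_kwh = tank_l * GASOLINE_KWH_PER_L
# Very rough drivetrain efficiencies: combustion ~0.25, electric ~0.9
print(f"fuel tank:    ~{tank_kwh:.0f} kWh stored, ~{tank_kwh * 0.25:.0f} kWh usable as motion")
print(f"battery pack: ~{battery_kwh:.0f} kWh stored, ~{battery_kwh * 0.9:.0f} kWh usable as motion")
```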

Let’s hypothesise that this is precisely the point of each new human being coupled with more and more of a new car being made: the point is more chemical energy convertible into kinetic energy. Do we need to move around more, as time passes? Maybe, although I am a bit doubtful. Technically, with more and more humans being around in a constant space, there are more and more humans per square kilometre, and that incremental growth in the density of population happens mostly in cities. I described that phenomenon in a paper of mine, titled ‘The Puzzle of Urban Density And Energy Consumption’. That means that the space available for travelling, and needing to be covered, per capita of each human being, is actually decreasing. Less space to travel in means less need for means of transportation.

Thus, what are we after, collectively? We might be preparing for having to move around more in the future, or for having to restructure the geography of our settlements. That’s possible, although the research I did for that paper about urban density indicates that geographical patterns of urbanization are quite durable. Anyway, those two cases sum up to some kind of zombie apocalypse. On the other hand, the fact of developing the amount of dispersed, temporarily stored energy (in cars) might be a manifestation of us learning how to build and maintain large, dispersed networks of energy reservoirs.

Isn’t it dumb to hypothesise that we go out of our way, as a civilisation, just to learn the best ways of developing what we are developing? Well, take the medieval cathedrals. Them medieval folks would keep building them for decades or even centuries. The Notre Dame cathedral in Paris, France, seems to be the record holder, with a construction period stretching from 1160 to 1245 (Bruzelius 1987[1]). Still, the same people who were so appallingly slow when building a cathedral could accomplish lightning-fast construction of quite complex military fortifications. When building cathedrals, the masters of stone masonry would do something apparently idiotic: they would build, then demolish, and then build again the same portion of the edifice, many times. WTF? Why slow down something we can do quickly? In order to experiment with the process and with the technologies involved, sir. Cathedrals were experimental labs of physics, mathematics and management, long before these scientific disciplines even emerged. Yes, there was the official rationale of getting closer to God, to accomplish God’s will, and, honestly, it came in handy. There was an entire culture – the medieval Christianity – which was learning how to learn by experimentation. The concept of fulfilling God’s will through perseverant pursuit, whilst being stoic as regards exogenous risks, was excellent a cultural vehicle to that purpose.

We move a few hundred years forward in time, to the 17th century. The cutting edge of technology was to be found in textiles and garments (Braudel 1992[2]), and the peculiarity of the European culture consisted in quickly changing fashions, geographically idiosyncratic and strongly enforced through social peer pressure. The industry of garments and textiles was a giant experimental lab of business and management, developing the division of labour, the management of supply chains, the quick study of subtle shades in customers’ tastes and just as quick adaptation thereto. This is how we, Europeans, prepared for the much later introduction of mechanized industry, which, in turn, gave birth to what we are today: a species controlling something like 30% of all energy on the surface of our planet.

Maybe we are experimenting with dispersed, highly mobile and coordinated networks of small energy reservoirs – the automotive fleet – just for the sake of learning how to develop such networks? Some other facts, which, once again, are impolitely disturbing, come to the fore. I had a look at the data published by the United Nations as regards the total installed capacity of electricity generation (https://unstats.un.org/unsd/energystats/ ). I calculated the average electrical capacity per capita, at the global scale. Turns out that in 2014 the average human capita on Earth had around 60% more power capacity to tap from, as compared to a similar human capita in 1999.

Interesting. It looks even more interesting when taken as the first moment of a process. When I take the annual incremental change in the installed electrical capacity on the planet and divide it by the absolute demographic increment, thus when I go ‘delta capacity / delta population’, that coefficient of elasticity grows like hell. In 2014, it was almost three times higher than in 1999. We, humans, keep developing a denser network of cars, as compared to our population, and, at the same time, we keep increasing the relative power capacity which every human can tap into.

Someone could say it is because we simply consume more and more energy per capita. Cool, I check with the World Bank: https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE . Yes, we increase our average annual consumption of energy per one human being, and yet this is a very gentle increment: barely 18% from 1999 through 2014. Nothing to do with the quick accumulation of generation capacity. We keep densifying the global fleet of cars, and growing a reserve of power capacity. What are we doing it for?

This is a deep question, and I calculated two additional elasticities with the data at hand. Firstly, I denominated the incremental change in the number of new cars per each new human born over the average consumption of energy per capita. In the visual below, this is the coefficient ‘Elasticity of cars per capita to energy per capita’. Between 1999 and 2014, this elasticity passed from 0.49 to 0.79. We keep accumulating something like an overhead of incremental car fleet, as compared to the amount of energy we consume.

Secondly, I formalized the comparison between individual consumption of energy and average power capacity per capita. This is the ‘Elasticity of capacity per capita to energy per capita’ column in the visual below.  Once again, it is a growing trend.   
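
Mechanically, all those coefficients are of the same ‘delta over delta’ kind; here is a minimal sketch of how one can compute them, with made-up mini-series standing in for the real data from worldometers.info, the UN energy statistics and the World Bank:

```python
import numpy as np

def delta_elasticity(numerator, denominator):
    # Incremental change in one aggregate divided by incremental change in another,
    # computed year over year.
    return np.diff(numerator) / np.diff(denominator)

# three consecutive, purely illustrative 'years'
cars        = np.array([7.00e8, 7.30e8, 7.70e8])   # global car fleet
population  = np.array([6.00e9, 6.08e9, 6.16e9])   # global headcount
capacity_mw = np.array([3.30e6, 3.50e6, 3.80e6])   # installed electrical capacity

print("new cars per new human:", delta_elasticity(cars, population))
print("new MW per new human:  ", delta_elasticity(capacity_mw, population))
```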

At the planetary scale, we keep beefing up our collective reserves of energy, and we seriously mean business about dispersing those reserves into networks of small reservoirs, possibly on wheels.

Increased propensity to store is a historically known collective response to anticipated shortage. Do we, the human race, collectively and not quite consciously anticipate a shortage of energy? How could that happen? Our biology should suggest just the opposite. With climate change around, we technically have more energy in the ambient environment, not less. What exact kind of shortage in energy are we collectively anticipating? This is the type of riddle I like.


[1] Bruzelius, C. (1987). The Construction of Notre-Dame in Paris. The Art Bulletin, 69(4), 540-569. https://doi.org/10.1080/00043079.1987.10788458

[2] Braudel, F. (1992). Civilization and capitalism, 15th-18th century, vol. II: The wheels of commerce (Vol. 2). Univ of California Press.

Unintentional, and yet powerful a reductor

As usual, I work on many things at the same time. I mean, not exactly at the same time, just in a tight alternate sequence. I am doing my own science, and I am doing collective science with other people. Right now, I feel like restating and reframing the main lines of my own science, with the intention both to reframe my own research and to be a better scientific partner to other researchers.

Such as I see it now, my own science is mostly methodological, and consists in studying human social structures as collectively intelligent ones. I assume that collectively we have a different type of intelligence from the individual one, and most of what we experience as social life is constant learning through experimentation with alternative versions of our collective way of being together. I use artificial neural networks as simulators of collective intelligence, and my essential process of simulation consists in creating multiple artificial realities and comparing them.

I deliberately use very simple, if not simplistic neural networks, namely those oriented on optimizing just one attribute of theirs, among the many available. I take a dataset, representative for the social structure I study, I take just one variable in the dataset as the optimized output, and I consider the remaining variables as instrumental input. Such a neural network simulates an artificial reality where the social structure studied pursues just one, narrow orientation. I create as many such narrow-minded, artificial societies as I have variables in my dataset. I assess the Euclidean distance between the original empirical dataset, and each of those artificial societies. 
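
Here is a minimal sketch in Python of that procedure, with an arbitrary one-hidden-layer network and plain gradient descent standing in for whatever specific architecture one prefers; it is meant to fix ideas, not to be a definitive implementation:

```python
import numpy as np

def artificial_societies(X, epochs=200, lr=0.01, hidden=8, seed=42):
    # For each variable k in X (rows = observations, columns = variables), train a small
    # one-hidden-layer network to reproduce variable k from the remaining variables,
    # build an 'artificial society' in which column k is replaced by the network's output,
    # and measure the Euclidean (Frobenius) distance between that dataset and the original.
    rng = np.random.default_rng(seed)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)            # standardized variables
    n, m = Xs.shape
    distances = {}
    for k in range(m):
        inputs = np.delete(Xs, k, axis=1)                # everything except variable k
        target = Xs[:, [k]]                              # the single 'collective orientation'
        W1 = rng.normal(0.0, 0.1, (m - 1, hidden))
        W2 = rng.normal(0.0, 0.1, (hidden, 1))
        for _ in range(epochs):                          # plain batch gradient descent
            H = np.tanh(inputs @ W1)
            err = H @ W2 - target
            grad_W2 = (H.T @ err) / n
            grad_W1 = (inputs.T @ ((err @ W2.T) * (1.0 - H ** 2))) / n
            W1 -= lr * grad_W1
            W2 -= lr * grad_W2
        artificial = Xs.copy()
        artificial[:, [k]] = np.tanh(inputs @ W1) @ W2   # the k-th artificial society
        distances[k] = np.linalg.norm(artificial - Xs)   # distance to the empirical dataset
    return distances

# toy usage: 200 observations of 5 socio-economic variables
X = np.random.default_rng(0).normal(size=(200, 5))
print(artificial_societies(X))
```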

It is just now that I realize what kind of implicit assumptions I make when doing so. I assume the actual social reality, manifested in the empirical dataset I study, is a concurrence of different, single-variable-oriented collective pursuits, which remain in some sort of dynamic interaction with each other. The path of social change we take, at the end of the day, manifests the relative prevalence of some among those narrow-minded pursuits, with others being pushed to the second rank of importance.

As I am pondering those generalities, I reconsider the actual scientific writings that I should hatch. Publish or perish, as they say in my profession. With that general method of collective intelligence being assumed in human societies, I focus more specifically on two empirical topics: the market of energy and the transition away from fossil fuels make one stream of my research, whilst the civilisational role of cities, especially in the context of the COVID-19 pandemic, is another stream of me trying to sound smart in my writing.

For now, I focus on issues connected to energy, and I return to revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, as a resubmission to Applied Energy. According to the guidelines of Applied Energy, I am supposed to structure my paper into the following parts: Introduction, Material and Methods, Theory, Calculations, Results, Discussion, and, as sort of a summary pitch, I need to prepare a cover letter where I shortly introduce the reasons why the editor of Applied Energy should bother about my paper at all. On top of all these formally expressed requirements, there is something I noticed about the general style of articles published in Applied Energy: they all demonstrate and discuss strong, sharp-cutting hypotheses, with a pronounced theoretical edge in them. If I want my paper to be accepted by that journal, I need to give it that special style.

That special style requires two things which, honestly, I am not really accustomed to doing. First of all, it requires, precisely, to phrase out very sharp claims. What I like the most is to show people material and methods which I work with and sort of provoke a discussion around it. When I have to formulate very sharp claims around that basic empirical stuff, I feel a bit awkward. Still, I understand that many people are willing to discuss only when they are truly pissed by the topic at hand, and sharply cut hypotheses serve to fuel that flame.

Second of all, making sharp claims of my own requires passing in thorough review the claims which other researchers phrase out. It requires doing my homework thoroughly in the review-of-literature. Once again, not really a fan of it, on my part, but well, life is brutal, as my parents used to teach me and as I have learnt in my own life. In other words, real life starts when I get out of my comfort zone.

The first body of literature I want to refer to in my revised article is the so-called MuSIASEM framework, AKA ‘Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism’. Human societies are assumed to be giant organisms, and transformation of energy is a metabolic function of theirs (e.g. Andreoni 2020[1], Al-Tamimi & Al-Ghamdi 2020[2] or Velasco-Fernández et al. 2020[3]). The MuSIASEM framework is centred around an evolutionary assumption, which I used to find perfectly sound, and which I have come to consider as highly arguable, namely that the best possible state for both a living organism and a human society is that of the highest possible energy efficiency. As regards social structures, energy efficiency is the coefficient of real output per unit of energy consumption, or, in other words, the amount of real output we can produce with 1 kilogram of oil equivalent in energy. My theoretical departure from that assumption started with my own empirical research, published in my article ‘Energy efficiency as manifestation of collective intelligence in human societies’ (Energy, Volume 191, 15 January 2020, 116500, https://doi.org/10.1016/j.energy.2019.116500 ). As I applied my method of computation, with a neural network as simulator of social change, I found out that human societies do not really seem to max out on energy efficiency. Maybe they should, but they don’t. It was the first realization, on my part, that we, humans, orient our collective intelligence on optimizing the social structure as such, and whatever comes out of that in terms of energy efficiency is an unintended by-product rather than a purpose. That general impression has been subsequently reinforced by other empirical findings of mine, precisely those which I introduce in that manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, which I am currently revising for resubmission with Applied Energy.

In practical terms, it means that when a public policy states that ‘we should maximize our energy efficiency’, it is a declarative goal which human societies do not actually strive for. It is a little as if a public policy imposed the absolute necessity of being nice to each other and punished any deviation from that imperative. People are nice to each other to the extent of current needs in social coordination, period. The absolute imperative of being nice is frequently the correlate of intense rivalry, e.g. as was the case with traditional aristocracy. The French even have an expression, which I find profoundly true, namely ‘trop gentil pour être honnête’, which means ‘too nice to be honest’. My personal experience makes me kick into an alert state when somebody is that sort of intensely nice to me.

Passing from metaphors to the actual subject matter of energy management, it is a known fact that highly innovative technologies are usually truly inefficient. Optimization of efficiency, be it energy efficiency or any other aspect thereof, is actually a late stage in the lifecycle of a technology. Deep technological change is usually marked by a temporary slump in efficiency. Imposing energy efficiency as the chief goal of technology-related policies means systematically privileging and promoting technologies with the highest energy efficiency, thus, by metaphorical comparison to humans, technologies in their forties, past and over the excesses of youth.

The MuSIASEM framework has two other traits which I find arguable, namely the concept of evolutionary purpose, and the imperative of equality between countries in terms of energy efficiency. Researchers who lean towards and into the MuSIASEM methodology claim that it is an evolutionary purpose of every living organism to maximize energy efficiency, and therefore human societies have the same evolutionary purpose. It further implies that species displaying marked evolutionary success, i.e. significant growth in headcount (sometimes in mandibulae-count, should the head be not really what we mean it to be), achieve that success by being particularly energy efficient. I even went into some reading in life sciences and that claim is not grounded in any science. It seems that energy efficiency, and any denomination of efficiency, as a matter of fact, are very crude proportions we apply to complex a balance of flows which we have to learn a lot about. Niebel et al. (2019[4]) phrase it out as follows: ‘The principles governing cellular metabolic operation are poorly understood. Because diverse organisms show similar metabolic flux patterns, we hypothesized that a fundamental thermodynamic constraint might shape cellular metabolism. Here, we develop a constraint-based model for Saccharomyces cerevisiae with a comprehensive description of biochemical thermodynamics including a Gibbs energy balance. Non-linear regression analyses of quantitative metabolome and physiology data reveal the existence of an upper rate limit for cellular Gibbs energy dissipation. By applying this limit in flux balance analyses with growth maximization as the objective function, our model correctly predicts the physiology and intracellular metabolic fluxes for different glucose uptake rates as well as the maximal growth rate. We find that cells arrange their intracellular metabolic fluxes in such a way that, with increasing glucose uptake rates, they can accomplish optimal growth rates but stay below the critical rate limit on Gibbs energy dissipation. Once all possibilities for intracellular flux redistribution are exhausted, cells reach their maximal growth rate. This principle also holds for Escherichia coli and different carbon sources. Our work proposes that metabolic reaction stoichiometry, a limit on the cellular Gibbs energy dissipation rate, and the objective of growth maximization shape metabolism across organisms and conditions’. 

I feel like restating the very concept of evolutionary purpose as such. Evolution is a mechanism of change through selection. Selection in itself is largely a random process, based on the principle that whatever works for now can keep working until something else works even better. There is hardly any purpose in that. My take on the thing is that living species strive to maximize their intake of energy from the environment rather than their energy efficiency. I even hatched an article about it (Wasniewski 2017[5]).

Now, I pass to the second postulate of the MuSIASEM methodology, namely to the alleged necessity of closing gaps between countries as for their energy efficiency. Professor Andreoni expresses this view quite vigorously in a recent article (Andreoni 2020[6]). I think this postulate holds neither inside the MuSIASEM framework nor outside of it. As for the purely external perspective, I think I have just laid out the main reasons for discarding the assumption that our civilisation should prioritize energy efficiency above other orientations and values. From the internal perspective of MuSIASEM, i.e. if we assume that energy efficiency is a true priority, we need to give that energy efficiency a boost, right? Now, the last time I checked, the only way we, humans, can get better at whatever we want to get better at is to create positive outliers, i.e. situations when we like really nail it better than in other situations. With a bit of luck, those positive outliers become a workable pattern of doing things. In management science, it is known as the principle of best practices. The only way of having positive outliers is to have a hierarchy of outcomes according to the given criterion. When everybody is at the same level, nobody is an outlier, and there is no way we can give ourselves a boost forward.

Good. Those six paragraphs above, they pretty much summarize my theoretical stance as regards the MuSIASEM framework in research about energy economics. Please, note that I respect that stream of research and the scientists involved in it. I think that representing energy management in human social structures as a metabolism is a great idea: it is one of those metaphors which can be fruitfully turned into a quantitative model. Still, I have my reserves.

I go further. A little more review of literature. Here comes a paper by Halbrügge et al. (2021[7]), titled ‘How did the German and other European electricity systems react to the COVID-19 pandemic?’. It makes an interesting point as regards energy economics: the pandemic has induced a new type of risk, namely short-term fluctuations in local demand for electricity. That, in turn, leads to deeper troughs and higher peaks in both the quantity and the price of energy in the market. More risk requires more liquidity: this is a known principle in business. As regards energy, liquidity can be achieved both through inventories, i.e. by developing storage capacity for energy, and through financial instruments. Halbrügge et al. come to the conclusion that such circumstances in the German market have led to the reinforcement of RES (Renewable Energy Sources). RES installations are typically more dispersed, more local in their reach, and more flexible than large power plants. It is much easier to modulate the output of a windfarm or a solar farm, as compared to a large fossil-fuel-based installation.

Keeping an eye on the impact of the pandemic upon the market of energy, I pass to the article titled ‘Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results’, by Salisu, Ebuh & Usman (2020[8]). First of all, a few words of general explanation as for what the hell the oil-stock nexus is. This is a phenomenon which I first saw research about in 2017, and it consists in a diversification of financial investment portfolios from pure financial stock into various mixes of stock and oil. Somewhere around 2015, people who used to hold their liquid investments just in financial stock (e.g. as I do currently) started to build investment positions in various types of contracts based on the floating inventory of oil: futures, options and whatnot. When I say ‘floating’, it is quite literal: that inventory of oil actually floats, stored on board super-tanker ships, sailing gently through international waters, with proper gravitas (i.e. not too fast).

Long story short, crude oil has increasingly been becoming a financial asset, something like a buffer to hedge against risks encountered in other assets. Whilst the paper by Salisu, Ebuh & Usman is quite technical, without much theoretical generalisation, an interesting observation comes out of it, namely that short-term shocks during the pandemic had adversely impacted the price of oil more than the prices of stock in financial markets. That, in turn, could indicate that crude oil was good as a hedging asset just for a certain range of risks, and that in the presence of price shocks induced by the pandemic, the role of oil could diminish.

Those two papers point at a factor which we had almost forgotten as regards the market of energy, namely the role of short-term shocks. Until recently, i.e. until COVID-19 hit us hard, the textbook business model in the sector of energy had been that of very predictable demand, nearly constant in the long perspective and varying in a sinusoidal manner in the short term. The very disputable concept of LCOE, AKA Levelized Cost of Energy, where investment outlays are treated as if they were a current cost, is based on those assumptions. The pandemic has shown a different aspect of energy systems, namely the need for buffering capacity. That, in turn, leads to the issue of adaptability, which gently but surely leads further into the realm of adaptive changes, and that, ladies and gentlemen, is my beloved landscape of evolutionary, collectively intelligent change.

Cool. I move forward, and, by the same occasion, I move back. Back to the concept of energy efficiency. Halvorsen & Larsen study the so-called rebound effect as regards energy efficiency (Halvorsen & Larsen 2021[9]). Their paper is interesting for three reasons, the general topic of energy efficiency being the first one. The second one is the methodological focus on phenomena which we cannot observe directly, and therefore observe through mediating variables, which is theoretically close to my own method of research. Finally, the phenomenon of the rebound effect, namely the fact that, in the presence of temporarily increased energy efficiency, the consumers of energy tend to use more of those locally more energy-efficient goods, is essentially a short-term disturbance being transformed into long-term habits. This is adaptive change.

The model construed by Halvorsen & Larsen is a theoretical delight, just something my internal happy bulldog can bite into. They introduce the general assumption that consumption of energy in households is a build-up of different technologies, which can substitute for each other under some conditions, and complement each other under others. Households maximize something called ‘energy services’, i.e. everything they can purposefully derive from energy carriers. Halvorsen & Larsen build and test a model where they derive demand for energy services from a whole range of quite practical variables, which all sums up to the following: energy efficiency is indirectly derived from the way that social structures work, and it is highly doubtful whether we can purposefully optimize energy efficiency as such.

Now, here comes the question: what are the practical implications of all those different theoretical stances, I mean mine and those of other scientists? What does it change, and does it change anything at all, if policy makers follow the theoretical line of the MuSIASEM framework, or, alternatively, my approach? I am guessing differences at the level of both the goals and the real outcomes of energy-oriented policies, and I am trying to wrap my mind around that guessing. Such as I see it, the MuSIASEM approach advocates for putting energy efficiency of the whole global economy at the top of any political agenda, as a strategic goal. On the path towards achieving that strategic goal, there seems to be an intermediate one, namely to narrow down significantly two types of discrepancies:

>> firstly, it is about discrepancies between countries in terms of energy efficiency, with a special focus on helping the poorest developing countries in ramping up their efficiency in using energy

>> secondly, there should be a priority to privilege technologies with the highest possible energy efficiency, whilst kicking out those which perform the least efficiently in that respect.    

If I saw a real policy based on those assumptions, I would have a few critical points to make. Firstly, I firmly believe that large human societies just don’t have the institutions to enforce energy efficiency as chief collective purpose. On the other hand, we have institutions oriented on other goals, which are able to ramp up energy efficiency as instrumental change. One institution, highly informal and yet highly efficient, is there, right in front of our eyes: markets and value chains. Each product and each service contains an input of energy, which manifests as a cost. In the presence of reasonably competitive markets, that cost is under pressure from market prices. Yes, we, humans, are greedy, and we like accumulating profits, and therefore we squeeze our costs. Whenever energy comes into play as significant a cost, we figure out ways of diminishing its consumption per unit of real output. Competitive markets, both domestic and international, thus including free trade, act as an unintentional, and yet powerful a reductor of energy consumption, and, under a different angle, they remind us to find cheap sources of energy.


[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[2] Al-Tamimi and Al-Ghamdi (2020). Multiscale integrated analysis of societal and ecosystem metabolism of Qatar. Energy Reports, 6, 521-527. https://doi.org/10.1016/j.egyr.2019.09.019

[3] Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020), A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562.  https://doi.org/10.1016/j.esr.2020.100562

[4] Niebel, B., Leupold, S., & Heinemann, M. (2019). An upper limit on Gibbs energy dissipation governs cellular metabolism. Nature Metabolism, 1, 125–132. https://doi.org/10.1038/s42255-018-0006-7

[5] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation (August 30, 2017). http://dx.doi.org/10.1453/jest.v4i3.1410

[6] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[7] Halbrügge, S., Schott, P., Weibelzahl, M., Buhl, H. U., Fridgen, G., & Schöpf, M. (2021). How did the German and other European electricity systems react to the COVID-19 pandemic?. Applied Energy, 285, 116370. https://doi.org/10.1016/j.apenergy.2020.116370

[8] Salisu, A. A., Ebuh, G. U., & Usman, N. (2020). Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results. International Review of Economics & Finance, 69, 280-294. https://doi.org/10.1016/j.iref.2020.06.023

[9] Halvorsen, B., & Larsen, B. M. (2021). Identifying drivers for the direct rebound when energy efficiency is unknown. The importance of substitution and scale effects. Energy, 222, 119879. https://doi.org/10.1016/j.energy.2021.119879

The inflatable dartboard made of fine paper

My views on environmentally friendly production and consumption of energy, and especially on public policies in that field, differ radically from what seems to be currently the mainstream of scientific research and writing. I even got kicked out of a scientific conference because of my views. After my paper was accepted, I received a questionnaire to fill in, which was supposed to feed the discussion at the plenary session of that conference. I answered those questions in good faith and sincerely, and: boom! I receive an email which says that my views ‘are not in line with the ideas we want to develop in the scientific community’. You could rightly argue that my views might be so incongruous that kicking me out of that conference was an act of mercy rather than enmity. Good. Let’s pass my views in review.

There is that thing of energy efficiency and climate neutrality. Energy efficiency, i.e. the capacity to derive a maximum of real output out of each unit of energy consumed, can be approached from two different angles: as a stationary value, on the one hand, or as an elasticity, on the other hand. We could say: let’s consume as little energy as we possibly can and be as productive as possible with that frugal base. That’s the stationary view. Yet, we can say: let’s rock it, like really. Let’s boost our energy consumption so as to get in control of our climate. Let’s pass from roughly 30% of energy generated on the surface of the Earth, which we consume now, to like 60% or 70%. Sheer laws of thermodynamics suggest that if we manage to do that, we can really run the show. This is the summary of what, in my view, is not in line with ‘the ideas we want to develop in the scientific community’.

Of course, I can put forth any kind of idiocy and claim it is a valid viewpoint. Politics are full of such episodes. I was born and raised in a communist country. I know something about stupid, suicidal ideas being used as axiology for running a nation. I also think that completely discarding other people’s ‘ideas we want to develop in the scientific community’, and considering those people as pathetically lost, would be preposterous on my part. We are all essentially wrong about that complex stuff we call ‘reality’. It is just that some ways of being wrong are more functional than others. I think a truly correct way to review the current literature on energy-related policies is to take its authors’ empirical findings and discuss them under a different interpretation, namely the one sketched in the preceding paragraph.

I like looking at things with precisely that underlying assumption that I don’t know s**t about anything, and I just make up cognitive stuff which somehow pays off. I like swinging around that Ockham’s razor and cutting out all the strong assumptions, staying just with the weak ones, which do not require much assuming and sit at the limit between stylized observations and theoretical claims.

My basic academic background is in law (my Master’s degree), and in economics (my PhD). I look at social reality around me through the double lens of those two disciplines, which, when put in stereoscopic view, boil down to having an eye on patterns in human behaviour.

I think I observe that we, humans, are social and want to stay social, and being social means a baseline mutual predictability in our actions. We are very much about maintaining a certain level of coherence in culture, which means a certain level of behavioural coupling. We would rather die than accept the complete dissolution of that coherence. We, humans, we make behavioural coherence: this is our survival strategy, and it allows us to be highly social. Our cultures always develop along the path of differentiation in social roles. We like specializing inside the social group we belong to.

Our proclivity to endorse specific skillsets, which turn into social roles, has the peculiar property of creating local surpluses, and we tend to trade those surpluses. This is how markets form. In economics, there is that old distinction between production and consumption. I believe that one of the first social thinkers who really meant business about it was Jean Baptiste Say, in his “Treatise of Political Economy”. Here >> https://discoversocialsciences.com/wp-content/uploads/2020/03/Say_treatise_political-economy.pdf you have it in the English translation, whilst there >> https://discoversocialsciences.com/wp-content/uploads/2018/04/traite-deconomie-politique-jean-baptiste-say.pdf it is in its elegant French original.

In my perspective, the distinction between production and consumption is instrumental, i.e. it is useful for solving some economic problems, but just some. Saying that I am a consumer is a gross simplification. I am a consumer in some of my actions, but in others I am a producer. As I write this blog, I produce written content. I prefer assuming that production and consumption are two manifestations of the same activity, namely of markets working around tradable surpluses created by homo sapiens as individual homo sapiens endorse specific social roles.

When some scientists bring forth empirically backed claims that our patterns of consumption have the capacity to impact climate (e.g. Bjelle et al. 2021[1]), I say ‘Yes, indeed, and at the end of that specific intellectual avenue we find out that creating some specific, tradable surpluses, ergo the fact of endorsing some specific social roles, has the capacity to impact climate’. Bjelle et al. find out something which, from my point of view, is gobsmacking: whilst the relative prevalence of particular goods in the overall patterns of demand has little effect on the emission of Greenhouse Gases (GHG) at the planetary scale, there are regional discrepancies. In developing countries and in emerging markets, changes in the baskets of goods consumed seem to have a strong impact GHG-wise. On the other hand, in developed economies, however consumers shift their preferences between different goods, the outcome seems to be very largely climate neutral. From there, Bjelle et al. draw conclusions about such issues as environmental taxation. My own take on those results is different. What impacts climate is social change occurring in developing economies and emerging markets: relatively quick demographic growth combined with quick creation of new social roles, and a big socio-economic difference between urban environments and rural ones.

In the broad theoretical perspective, states of society which we label as classes of socio-economic development are far more than just income brackets. They are truly different patterns of social interactions. I had a glimpse of that when I was comparing data on the consumption of energy per capita (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE ) with the distribution of gross national product per capita (https://data.worldbank.org/indicator/NY.GDP.PCAP.CD ). It looks as if different levels of economic development were different levels of energy in the social system. Each 100 ÷ 300 kilograms of oil equivalent per capita per year seem to be associated with specific institutions in society.

Let’s imagine that climate change goes on. New s**t comes our way, which we need to deal with. We need to learn. We form new skillsets, and we define new social roles. New social roles mean new tradable surpluses, and new markets with new goods in it. We don’t really know what kind of skillsets, markets and goods that will be. Enhanced effort of collective adaptation leads to outcomes impossible to predict in themselves. The question is: can we predict the way those otherwise unpredictable outcomes will take shape?         

My fellow scientists seem not to like unpredictable outcomes. Shigetomi et al. (2020[2]) straightforwardly find out empirically that ‘only the very low, low, and very high-income households are likely to achieve a reduction in carbon footprint due to their high level of environmental consciousness. These income brackets include the majority of elderly households who are likely to have higher consciousness about environmental protection and addressing climate change’. In my fairy-tale, it means that only a fringe of society cares about environment and climate, and this is the fringe which does not really move a lot in terms of new social roles. People with low income have low income because their social roles do not allow them to trade significant surpluses, and elderly people with high income do not really shape the labour market.

This is what I infer from those empirical results. Yet, Shigetomi et al. conclude that ‘The Input-Output Analysis Sustainability Evaluation Framework (IOSEF), as proposed in this study, demonstrates how disparity in household consumption causes societal distortion via the supply chain, in terms of consumption distribution, environmental burdens and household preferences. The IOSEF has the potential to be a useful tool to aid in measuring social inequity and burden distribution allocation across time and demographics’.

Guys, like really. Just sit and think for a moment. I even pass over the claim that inequality of income is a social distortion, although I am tempted to say that no known human society has ever been free of that alleged distortion, and therefore we’d better come to terms with it and stop calling it a distortion. What I want is logic. Guys, you have just proven empirically that only low-income people, and elderly high-income people, care about climate and environment. The middle-incomes and the relatively young high-incomes, thus people who truly run the show of social and technological change, do not care as much as you would like them to. You claim that inequality of income is a distortion, and you want to eliminate it. When you kick inequality out of the social equation, you get rid of the low-income folks, and of the high-income ones. Stands to reason: with enforced equality, everybody is more or less middle-income. Therefore, the majority of society is in a social position where they don’t give a f**k about climate and environment. Besides, when you remove inequality, you remove vertical social mobility along hierarchies, and therefore you give a cold shoulder to a fundamental driver of social change. Still, you want social change, you have just said it.

Guys, the conclusions you derive from your own findings are the political equivalent of an inflatable dartboard made of fine paper. Cheap to make, might look dashing, and doomed to be extremely short-lived as soon as used in practice.   


[1] Bjelle, E. L., Wiebe, K. S., Többen, J., Tisserant, A., Ivanova, D., Vita, G., & Wood, R. (2021). Future changes in consumption: The income effect on greenhouse gas emissions. Energy Economics, 95, 105114. https://doi.org/10.1016/j.eneco.2021.105114

[2] Shigetomi, Y., Chapman, A., Nansai, K., Matsumoto, K. I., & Tohno, S. (2020). Quantifying lifestyle based social equity implications for national sustainable development policy. Environmental Research Letters, 15(8), 084044. https://doi.org/10.1088/1748-9326/ab9142

The fine details of theory

I keep digging. I keep revising that manuscript of mine – ‘Climbing the right hill – an evolutionary approach to the European market of electricity’ – in order to resubmit it to Applied Energy. Some of my readers might become slightly fed up with that thread. C’mon, man! How long do you mean to work on that revision? It is just an article! Yes, it is just an article, and I have that thing in me, those three mental characters: the curious ape, the happy bulldog, and the austere monk. The ape is curious, and it almost instinctively reaches for interesting things. My internal bulldog just loves digging out tasty pieces and biting into bones. The austere monk in me observes the intellectual mess, which the ape and the bulldog make together, and then he takes that big Ockham’s razor, from the recesses of his robe, and starts cutting bullshit out. When the three of those start dancing around a topic, it is a long path to follow, believe me.

In this update, I intend to structure the theoretical background of my paper. First, I restate the essential point of my own research, which I need and want to position in relation to other people’s views and research. I claim that energy-related policies, including those with an environmental edge, should assume that whatever we do with energy, as a civilisation, is a by-product of actions purposefully oriented on other types of outcomes. Metaphorically, claiming that a society should take the shift towards renewable energies as its chief goal, and treat everything else as instrumental, is like saying that the chief goal of an individual should be to keep their blood sugar firmly at 80, whatever happens. What’s the best way of achieving it? Putting yourself in a clinic, under permanent intravenous nutrition, and stopping all experimentation with those things people call ‘food’, ‘activity’, ‘important things to do’. Anyone wants to do it? Hardly anyone, I guess. The right level of blood sugar can be approximately achieved as the balanced outcome of a proper lifestyle, and can serve as a gauge of whether our actual lifestyle is healthy.

Coming back from my nutritional metaphor to energy-related policies, there is no historical evidence that any human society has ever achieved any important change regarding the production of energy or its consumption by explicitly stating ‘From now on, we want better energy management’. The biggest known shifts in our energy base happened as by-products of changes oriented on something else. In Europe, my home continent, we had three big changes. First, way back in the day, like starting from the 13th century, we harnessed the power of wind and that of water in, respectively, windmills and watermills. That served to provide kinetic energy to grind cereals into flour, which, in turn, served to feed a growing urban population. Windmills and watermills brought with them a silent revolution, which we are still wrapping our minds around. By the end of the 19th century, we started a massive shift towards fossil fuels. Why? Because we expected to drive Ferraris around, one day in the future? Not really. We just went terribly short on wood. People who claim that Europe should recreate its ‘ancestral’ forests deliberately ignore the fact that hardly anyone today knows what those ancestral forests should look like. Virtually all the forests we have today come from massive replantation which took place starting from the beginning of the 20th century. Yes, we have a bunch of 400-year-old oaks across the continent, but I dare remind you that one oak is not exactly a forest.

The civilisational change which I think is going on now, in our human civilisation, is the readjustment of social roles, and of the ways we create new social roles, in the presence of a radical demographic change: an unprecedentedly high headcount of population, accompanied by a just as unprecedentedly low rate of demographic growth. For hundreds of years, our civilisation has been evolving as two concurrent factories: the factory of food in the countryside, and the factory of new social roles in cities. Food grows best when the headcount of humans immediately around is a low constant, and new social roles burgeon best when humans interact abundantly, and therefore when they are tightly packed together in a limited space. The basic idea of our civilisation is to put most of the absolute demographic growth into cities and let ourselves invent new ways of being useful to each other, whilst keeping rural land as productive as possible.

That thing had worked for centuries. It had worked for humanity that had been relatively small in relation to available space and had been growing quickly into that space. That idea of separating the production of food from the creation of social roles and institutions was adapted precisely to that demographic pattern, which you can still find vestiges of in some developing countries, as well as in emerging markets, with urban population several dozens of times denser than the rural one, and cities that look like something effervescent. These cities grow bubbles out of themselves, and those bubbles burst just as quickly. My own trip to China showed me how cities can be truly alive, with layers and bubbles inside them. One is tempted to say these cities are something abnormal, as compared to the orderly, demographically balanced urban entities in developed countries. Still, historically, this is what cities are supposed to look like.

Now, something is changing. There are more of us on the planet than there have ever been but, at the same time, we experience an unprecedentedly low rate of demographic growth. Whilst we apparently still manage to keep total urban land on the planet at a constant level (https://data.worldbank.org/indicator/AG.LND.TOTL.UR.K2 ), we struggle with keeping the surface of agricultural land up to our needs (https://data.worldbank.org/indicator/AG.LND.AGRI.ZS ). As in any system tilted out of balance, weird local phenomena start occurring, and the basic metrics pertinent to the production and consumption of energy show an interesting pattern. When I look at the percentage participation of renewable sources in the total consumption of energy (https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS ), I see a bumpy cycle which looks like learning with experimentation. When I narrow down to the participation of renewables in the total consumption of electricity (https://data.worldbank.org/indicator/EG.ELC.RNEW.ZS ), what I see is a more pronounced trend upwards, with visible past experimentation. The use of nuclear power to generate electricity (https://data.worldbank.org/indicator/EG.ELC.NUCL.ZS ) looks like a long-run experiment, which now is in its phase of winding down.

Now, two important trends come into my focus. Energy efficiency, defined as average real output per unit of energy use (https://data.worldbank.org/indicator/EG.GDP.PUSE.KO.PP.KD), shows quite an unequivocal trend upwards. Someone could say ‘Cool, we purposefully make ourselves energy efficient’. Still, when we care to have a look at the coefficient of energy consumed per person per year (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE), a strong trend upwards appears as well, with some deep bumps in the past. When I put those two trends back to back, I conclude that what we really max out on is the real output of goods and services in our civilisation, and energy efficiency is just a means to that end.

It is a good moment to puncture an intellectual balloon. I can frequently see and hear people argue that maximizing real output, in any social entity or context, is a manifestation of stupid, baseless greed and blindness to the truly important stuff. Still, please consider the following line of logic. We, humans, interact with the natural environment, and interact with each other.  When we interact with each other a lot, in highly dense networks of social relations, we reinforce each other’s learning, and start spinning the wheel of innovation and technological change. Abundant interaction with each other gives us new ideas for interacting with the natural environment.

Cities have peculiar properties. Firstly, by creating new social roles through intense social interaction, they create new products and services, and therefore new markets, connected in chains of value added. This is how the real output of goods and services in a society becomes a complex, multi-layered network of technologies, and this is how social structures become self-propelling businesses. The more complexity in social roles is created, the more products and services emerge, which brings development in a greater number of markets. That, in turn, gives a greater real output and greater income per person, which incentivizes the creation of new social roles, etc. This is how social complexity creates the phenomenon called economic growth.

The phenomenon of economic growth, thus the quantitative growth in complex, networked technologies which emerge in relatively dense human settlements, has a few peculiar properties. You can’t see it, you can’t touch it, and yet you can immediately feel when its pace changes. Economic growth is among the most abstract concepts of social sciences, and yet living in a society with real economic growth at 5% per annum is like a different galaxy when compared to living in a place where real economic growth is actually a recession of -5%. The arithmetical difference is just 10 percentage points on an underlying base of 1. Still, lives in those two contexts are completely different. At +5% in real economic growth, starting a new business is generally a sensible idea, provided you have it nailed down with a business plan. At -5% a year, i.e. in recession, the same business plan can be an elaborate way of committing economic and financial suicide. At +5%, political elections are usually won by people who just sell you the standard political bullshit, like ‘I will make your lives better’ claimed by a heavily indebted alcoholic with no real career of their own. At -5%, politics start being haunted by those sinister characters, who look and sound like evil spirits from our dreams and claim they ‘will restore order and social justice’.

The society which we consider today as normal is a society of positive real economic growth. All the institutions we are used to, such as healthcare systems, internal security, public administration, education – all that stuff works at least acceptably smoothly when the complex, networked technologies of our society have a demonstrable capacity to increase their real economic output. That ‘normal’ state of society is closely connected to the factories of social roles which we commonly call ‘cities’. Real economic growth happens when the amount of new social roles – fabricated through intense interactions between densely packed humans – is enough for the new humans coming around. Being professionally active means having a social role solid enough to participate in the redistribution of value added created in complex technological networks. It is both formal science and sort of accumulated wisdom in governance that we’d better have most of the adult, able-bodied people in that state of professional activity. A small fringe of professionally inactive people is a somewhat healthy margin of human energy free to be professionally activated, and when I say ‘small’, it is like no more than 5% of the adult population. Anything above becomes both a burden and a disruption to social cohesion. Too big a percentage of people with no clear, working social roles makes it increasingly difficult to make social interactions sufficiently abundant and complex to create enough new social roles for new people. This is why governments of this world attach keen importance to the accurate measurement of the phenomenon quantified as ‘unemployment’.

Those complex networks of technologies in our societies, which have the capacity to create social roles and generate economic growth, do their work properly when we can transact about them, i.e. when we have working markets for the final economic goods produced with those technologies, and for the intermediate economic goods produced for them. It is as if the whole thing worked only when we can buy and sell things. I was born in 1968, in a communist country, namely Poland, and I can tell you that in the absence of markets the whole mechanism just jams, progressively grinding to a halt. Yes, markets are messy and capricious, and transactional prices can easily get out of hand, creating inflation, and yet markets give those little local incentives needed to get the most out of human social roles. In communist Poland, I remember people doing really strange things, like hoarding massive inventories of refrigerators or women’s underwear, just to create some speculative spin in an ad hoc, semi-legal or completely illegal market. It looks as if people needed to market and transact for real, amidst the theoretically perfectly planned society.

Anyway, economic growth is observable through big sets of transactions in product markets, and those transactions have two attributes: quantities and prices, AKA Q and P. It is like Q*P = ∑ qi*pi. When I have – well, when we have – that complex network of technologies functionally connected to a factory of social roles for new humans, that thing makes ∑ qi*pi, thus a lot of local transactions with quantities qi, at prices pi. The economic growth I have been so vocal about in the last few paragraphs is the real growth, i.e. growth in quantity Q = ∑ qi. In the long run, what I am interested in, and my government is interested in, is to reasonably max out on ∆Q = ∆∑ qi. Quantities change slowly and quite predictably, whilst prices tend to change quickly and, mostly in the short term, chaotically. Measuring real economic growth accurately involves kicking the ‘*pi’ component out of the equation and extracting just ∆Q = ∆∑ qi. Question: why bother with the observation of Q*P = ∑ qi*pi when the real thing we need is just ∆Q = ∆∑ qi? Answer: because there is no other way. Complex networks of technologies produce economic growth by creating increasing diversity in social roles in concurrence with increasing diversity in products and their respective markets. No genius has come up, so far, with a method to add up, directly, the volume of visits in hairdressers’ salons with the volume of electric vehicles made, and all that with the volume of energy consumed.
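Just to make that ‘*pi’-kicking tangible, here is a minimal numerical sketch in Python: a Laspeyres-type quantity index computed on made-up quantities and prices of my own (nothing here comes from official statistics):

```python
import numpy as np

# Hypothetical transactions in three markets over two years: quantities q_i and prices p_i.
q_year1 = np.array([100.0, 40.0, 10.0])   # e.g. haircuts, EVs, MWh of energy (invented)
q_year2 = np.array([104.0, 46.0, 10.5])
p_year1 = np.array([20.0, 30000.0, 90.0])
p_year2 = np.array([22.0, 31000.0, 120.0])

nominal_growth = (q_year2 @ p_year2) / (q_year1 @ p_year1) - 1
# Real growth: freeze prices at year-1 levels, so only the q_i component moves.
real_growth = (q_year2 @ p_year1) / (q_year1 @ p_year1) - 1

print(f"Nominal growth (quantities and prices both moving): {nominal_growth:.2%}")
print(f"Real growth (prices held constant, i.e. just ΔQ):    {real_growth:.2%}")
```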

I have ventured myself far from the disciplined logic of revision in my paper, for resubmitting it. The logical flow required in this respect by Applied Energy is the following: introduction first, method and material next, theory follows, and calculations come after. The literature which I refer to in my writing needs to have two dimensions: longitudinal and lateral. Laterally, I divide other people’s publications into three basic groups: a) standpoints which I argue with b) methods and assumptions which I agree with and use to support my own reasoning, and c) viewpoints which sort of go elsewhere, and can be interesting openings into something even different from what I discuss. Longitudinally, the literature I cite needs, in the first place, to open up on the main points of my paper. This is ‘Introduction’. Publications which I cite here need to point at the utility of developing the line of research which I develop. They need to convey strong, general claims which sort of set my landmarks.

The section titled ‘Theory’ is supposed to provide the fine referencing of my method, so as to both support the logic thereof, and to open up on the detailed calculations I develop in the following section. Literature which I bring forth here should contain specific developments, both factual and methodological, something like a conceptual cobweb. In other words, ‘Introduction’ should be provocative, whilst ‘Theory’ transforms provocation into a structure.

Among the recent literature I am passing in review, two papers come forth as provocative enough for me to discuss them in the introduction of my article: Andreoni 2020[1], and Koponen & Le Net 2021[2]. The first of the two, namely the paper by professor Valeria Andreoni, well in the mainstream of the MuSIASEM methodology (Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism), sets an important line of theoretical debate, namely the arguable imperative to focus energy-related policies, and economic policies in general, on two outcomes: maximizing energy efficiency (i.e. maximizing the amount of real output per unit of energy consumption), and minimizing cross-sectional differences between countries as regards energy efficiency. Both postulates are based on the assumption that the energy efficiency of national economies corresponds to the metabolic efficiency of living organisms, and that maxing out on it is an objective evolutionary purpose in both cases. My method has the same general foundations as MuSIASEM. I claim that societies can be studied similarly to living organisms.

At that point, I diverge from the MuSIASEM framework: instead of focusing on the metabolism of such organically approached societies, I pay attention to their collective cognitive processes, their collective intelligence. I claim that human societies are collectively intelligent structures, which learn by experimenting with many alternative versions of themselves whilst staying structurally coherent. From that assumption, I derive two further claims. Firstly, if we reduce disparities between countries with respect to any important attribute of theirs, including energy efficiency, we kick out of the game a lot of opportunities for future learning: the ‘many alternative versions’ part of the process is no longer there. Secondly, I claim there is no such thing as objective evolutionary purpose, would it be maximizing energy efficiency or anything else. Evolution has no purpose; it just has the mechanism of selection by replication. Replication of humans is proven to happen the most favourably when we collectively learn fast and make a civilisation out of that learning.

Therefore, whilst having no objective evolutionary purpose, human societies have objective orientations: we collectively attempt to optimize some specific outcomes, which have the attribute to organize our collective learning the most efficiently, in a predictable cycle of, respectively, episodes marked with large errors in adjustment, and those displaying much smaller errors in that respect.

From that theoretical cleavage between my method and the postulates of the MuSIASEM framework, I derive two practical claims as regards economic policies, especially as regards environmentally friendly energy systems. Looking for homogeneity between countries is a road to nowhere, for one. Expecting that human societies will purposefully strive to maximize their overall energy efficiency is an unrealistic goal, and therefore a harmful assumption in the presence of serious challenges connected to climate change, for two. Public policies should explicitly aim for disparity of outcomes in the technological race, and the race should be oriented on outcomes which are being objectively pursued by human societies.

Whilst disagreeing with professor Valeria Andreoni on principles, I find her empirical findings highly interesting. Rapid economic change, especially the kind of change associated with crises, seems to correlate with deepening disparities between countries in terms of energy efficiency. In other words, when large economic systems need to adjust hard and fast, they sort of play their individual games separately as regards energy efficiency. Rapid economic adjustment under constraint is conducive to creating a large discrepancy of alternative states in what energy efficiency can possibly be, in the context of other socio-economic outcomes, and, therefore, there is more material for learning collectively by experimenting with many alternative versions of ourselves.

Against that theoretical sketch, I place the second paper which I judge worth introducing with: Koponen, K., & Le Net, E. (2021): Towards robust renewable energy investment decisions at the territorial level. Applied Energy, 287, 116552. https://doi.org/10.1016/j.apenergy.2021.116552 . I chose this one because it shows a method very similar to mine: the authors build a simulative model in Excel, where they create m = 5000 alternative futures for a networked energy system, aiming at optimizing 5 performance metrics. The model was based on actual empirical data for those variables, and the ‘alternative futures’ are, in other words, 5000 alternative states of the same system. Outcomes are gauged with the so-called regret analysis, where the relative performance in a specific outcome is measured as the residual difference between its local value and, respectively, its general minimum or maximum, depending on whether the given metric is something we strive to maximize (e.g. capacity) or to minimize (e.g. GHG).
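To make the regret logic concrete, here is a minimal Python sketch with a handful of invented scenarios and metrics of my own; it only mimics the mechanics of regret analysis, not the actual model or data of Koponen & Le Net:

```python
import numpy as np

# Rows: alternative futures (scenarios); columns: performance metrics.
# Hypothetical values, just to show the mechanics.
scenarios = np.array([
    # capacity_MW, GHG_kt
    [120.0, 45.0],
    [150.0, 60.0],
    [ 90.0, 30.0],
    [140.0, 40.0],
])

capacity = scenarios[:, 0]   # a metric we want to maximize
ghg      = scenarios[:, 1]   # a metric we want to minimize

# Regret = residual distance from the best achievable value across all scenarios.
regret_capacity = capacity.max() - capacity
regret_ghg      = ghg - ghg.min()

# Simple aggregate: normalize each regret by its range, then average.
total_regret = (regret_capacity / regret_capacity.max()
                + regret_ghg / regret_ghg.max()) / 2
print("Least-regret scenario index:", int(total_regret.argmin()))
```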

I can generalize on the method presented by Koponen and Le Net, and assume that any given state of society can be studied as one among many alternative states of said society, and the future depends very largely on how this society will navigate through the largely uncharted waters of itself being in many alternative states. Navigators need a star in the sky, to find their North, and so do societies. Koponen and Le Net simulate civilizational navigation under the constraint of four stars, namely the cost of CO2, the cost of electricity, the cost of natural gas, and the cost of biomass. I generalize and say that experimentation with alternative versions of us being collectively intelligent can be oriented on optimizing many alternative Norths, and the kind of North we will most likely pursue is the kind which allows us to learn efficiently how to go from one alternative future to another.

Good. This is my ‘Introduction’. It sets the tone for the method I present in the subsequent section, and the method opens up on the fine details of theory.


[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[2] Koponen, K., & Le Net, E. (2021). Towards robust renewable energy investment decisions at the territorial level. Applied Energy, 287, 116552. https://doi.org/10.1016/j.apenergy.2021.116552

The traps of evolutionary metaphysics

I think I have moved forward in the process of revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, as a resubmission to Applied Energy. A little digression: as I provide, each time, a link to the original form of that manuscript, my readers can compare the changes I develop in those updates with the initial flow of logic.

I like discussing important things in reverse order. I like starting from what apparently is the end and the bottom line of thinking. From there, I go forward by going back, sort of. In an article, the end is the conclusion, possibly summarized in 5 ÷ 6 bullet points and optionally coming together with a graphical abstract. I conclude this specific piece of research by claiming that energy-oriented policies, e.g. those oriented on developing renewable sources, could gain in efficiency by being: a) national rather than continental or global b) explicitly oriented on optimizing the country’s terms of trade in global supply chains c) just as explicitly oriented on the development of some specific types of jobs whilst purposefully winding down other types thereof.

I give a twofold base for that claim. Firstly, I have that stylized general observation about energy-oriented policies: globally or continentally calibrated policies, such as, for example, the now famous Paris Climate Agreement, work so slowly and with so much friction that they become ineffective for any practical purpose, whilst country-level policies are much more efficient in the sense that one can see a real transition from point A to point B. Secondly, my own research – which I present in this article under revision – brings evidence that national social structures orient themselves on optimizing their terms of trade and their job markets in priority, whilst treating energy-related issues as instrumental. That specific collective orientation seems, in turn, to have its source in the capacity of human social structures to develop a strongly cyclical, predictable pattern of collective learning precisely in relation to the terms of trade and to the job market, whilst collective learning oriented on other measurable variables, inclusive of those pertinent to energy management, is much less predictable.

That general conclusion is based on quantitative results of my empirical research, which brings forth 4 quantitative variables – price index in exports (PL_X), average hours worked per person per year (AVH), the share of labour compensation in Gross National Income (LABSH), and the coefficient of human capital (HC – average years of schooling per person) – out of a total scope of 49 observables, as somehow privileged collective outcomes marked with precisely that recurrent, predictable pattern of learning.

The privileged position of those specific variables, against the others, manifests theoretically as their capacity to produce simulated social realities much more similar to the empirically observable state thereof than simulated realities based on other variables, whilst producing a strongly cyclical series of local residual errors in approximating said empirically observable state.

The method which allowed me to produce those results generates simulated social realities with the use of artificial neural networks. Two types of networks are used to generate two types of simulation. One is a neural network which optimizes a specific empirical variable as its output, whilst using the remaining empirical variables as instrumental input. I call that network the ‘procedure of learning by orientation’. The other type of network uses the same empirical variable as its optimizable output and replaces the vector of other empirical variables with a vector of hypothetical probabilities, corresponding to just as hypothetical social roles, in the presence of a random disturbance factor. I label this network the ‘learning procedure by pacing’.

The procedure of learning by orientation produces as many alternative sets of numerical values as there are variables in the original empirical dataset X used in research. In this case, it was a set made of n = 49 variables, and thus 49 alternative sets Si are created. Each alternative set Si consists of values transformed by the corresponding neural network from the original empirical ones. Both the original dataset X and the n = 49 transformations Si thereof can be characterized, mathematically, with their respective vectors of mean expected values. Euclidean distances between those vectors are informative about the mathematical similarity between the corresponding sets.

Therefore, learning by orientation produces n = 49 simulations Si of the actual social reality represented in the set X, where each such simulation is biased towards optimizing one particular variable ‘i’ from among the n = 49 variables studied, and each such simulation displays a measurable Euclidean similarity to the actual social reality studied. My own experience in applying this specific procedure is that a few simulations Si, namely those pegged on optimizing four variables – price index in exports [Si(PL_X)], average hours worked per person per year [Si(AVH)], the share of labour compensation in Gross National Income [Si(LABSH)], and the coefficient of human capital [Si(HC) – average years of schooling per person] – display a much closer Euclidean distance to the actual reality X than any other simulation. Much closer means closer by orders of magnitude, by the way. The difference is visible.
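Below, a minimal Python sketch of that ‘learning by orientation’ logic, on randomly generated data and with a one-neuron network of my own making; it illustrates the mechanics of pegging each simulation Si on one variable and measuring the Euclidean distance between vectors of mean values, not the exact architecture used in the manuscript:

```python
import numpy as np

rng = np.random.default_rng(42)

def orientation_simulation(X, target_col, epochs=30):
    # One-neuron network pegged on optimizing one variable, with the remaining
    # variables as instrumental input; the residual error is fed forward step by step.
    Xs = X / X.max(axis=0)                       # standardize over maximums
    inputs = np.delete(Xs, target_col, axis=1)
    target = Xs[:, target_col]
    w = rng.uniform(-0.1, 0.1, inputs.shape[1])
    sim = Xs.copy()
    for _ in range(epochs):
        for j in range(Xs.shape[0]):
            h = 1.0 / (1.0 + np.exp(-inputs[j] @ w))   # sigmoid activation
            e = target[j] - h                          # local residual error
            w += 0.5 * e * h * (1 - h) * inputs[j]     # error fed forward into the weights
            sim[j, target_col] = h                     # simulated value of the pegged variable
    return sim

def distance_between_mean_vectors(A, B):
    return np.sqrt(((A.mean(axis=0) - B.mean(axis=0)) ** 2).sum())

# Hypothetical dataset: 200 observations of 6 socio-economic variables.
X = rng.uniform(1.0, 10.0, size=(200, 6))
Xs = X / X.max(axis=0)
for i in range(X.shape[1]):
    d = distance_between_mean_vectors(Xs, orientation_simulation(X, i))
    print(f"Simulation S_{i}: Euclidean distance to the standardized reality X = {d:.4f}")
```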

The procedure of learning by pacing produces n = 49 simulations as well, yet these simulations are not exactly transformations of the original dataset X. In this case, simulated realities are strictly simulated, i.e. they are hypothetical states from the very beginning, and individual variables from the set X serve as the basis for setting a trajectory of transformation for those hypothetical states. Each such hypothetical state is a matrix of probabilities, associated with two sets of social roles: active and dormant. Active social roles are being endorsed by individuals in that hypothetical society and their baseline, initial probabilities are random, non-null values. Dormant social roles are something like a formalized prospect for immediate social change, and their initial probabilities are null.

This specific neural network produces new hypothetical states in two concurrent ways: typical neural activation, and random disturbance. In the logical structure of the network, random disturbance occurs before neural activation, and thus I am describing details of the former in the first place. Random disturbance is a hypothetical variable, separate from probabilities associated with social roles. It is a random value 0 < d < 1, associated with a threshold of significance d*. When d > d*, d becomes something like an additional error, fed forward into the network, i.e. impacting the next experimental round performed in the process of learning.

In the procedure of learning by pacing, neural activation is triggered by aggregating partial probabilities, associated with social roles, and possibly pre-modified by the random disturbance, through the operation of a weighted average of the type ∑ fj(pi, X(i,j), dj, ej-1), where fj is the function of neural activation in the j-th experimental round of learning, pi is the probability associated with the i-th social role, X(i,j) is the random weight of pi in the j-th experimental round, dj stands for the random disturbance specific to that experimental round, and ej-1 is the residual error fed forward from the previous experimental round j-1.

Now, just to be clear: there is a mathematical difference, in that logical structure, between the random disturbance dj and the random weight X(i,j). The former is specific to a given experimental round, but general across all the component probabilities in that round. If you want, dj is like an earthquake, momentarily shaking the entire network, and is supposed to represent the fact that social reality just never behaves as planned. This is the grain of chaos in that mathematical order. On the other hand, X(i,j) is a strictly formal factor in the neural activation function, and its only job is to allow experimenting with data.
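Here is a toy Python rendition of that pacing procedure, with every number (the count of social roles, the threshold d*, the pegged target) invented for illustration; it shows the order of operations – disturbance first, then activation, with the residual error fed forward – rather than the calibration used in the actual research:

```python
import numpy as np

rng = np.random.default_rng(7)

# 8 active social roles (random non-null probabilities) + 4 dormant ones (null).
p = np.concatenate([rng.uniform(0.05, 0.5, 8), np.zeros(4)])
p /= p.sum()

d_star = 0.85     # hypothetical threshold of significance for the disturbance
target = 0.8      # hypothetical value the optimizable output is pegged on
e_prev = 0.0      # residual error fed forward from the previous round

for j in range(50):
    d = rng.uniform(0.0, 1.0)                 # random disturbance of round j
    shock = d if d > d_star else 0.0          # it matters only above the threshold d*
    X_ij = rng.uniform(0.0, 1.0, p.size)      # random weights of round j
    z = np.sum(X_ij * p) + shock + e_prev     # weighted aggregation of probabilities
    h = 1.0 / (1.0 + np.exp(-z))              # neural activation
    e_prev = target - h                       # error fed forward to round j + 1
    p = np.clip(p + 0.1 * e_prev * X_ij, 0.0, None)   # probabilities drift with the error
    p /= p.sum()                              # dormant roles can wake up, active ones can fade

print("Probabilities of social roles after 50 rounds:", np.round(p, 3))
```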

Wrapping it partially up, the whole method I use in this article revolves around the working hypothesis that a given set of empirical data, which I am working with, represents collectively intelligent learning, where human social structures collectively experiment with many alternative versions of themselves and select those versions which offer the most desirable states in a few specific variables. I call these variables ‘collective orientations’ and I further develop that hypothesis by claiming that collective orientations have that privileged position because they allow a specific type of collective learning, strongly cyclical, with large amplitude of residual error.

In both procedures of learning, i.e. in orientation and in pacing, I introduce an additional component, namely that of self-observed internal coherence. The basic idea is that a social structure is a structure because the functional connections between categories of phenomena are partly independent from the exact local content of those categories. People remain in predictable functional connections to their Tesla cars, whatever exact person and exact car we are talking about. In my method, and, as a matter of fact, in any quantitative method, variables are phenomenological categories, whilst the local values of those variables inform about the local content to find in respective categories. My idea is that mathematical distance between values represents temporary coherence between the phenomenological categories behind the corresponding variables. I use the Euclidean distance of the type E = [(a – b)²]^0.5 as the most elementary measure of mathematical distance. The exact calculation I do is the average Euclidean distance that each i-th variable in the set of n variables keeps from each l-th variable among the remaining k = n – 1 variables, in the same experimental round j. Mathematically, it goes like: avgE = { ∑ [(xi – xl)²]^0.5 } / k. When I use avgE as internally generated input in a neural network, I use the information about internal coherence as meta-data in the process of learning.

Of course, someone could ask what the point is of measuring local Euclidean distance between, for example, annual consumption of energy per capita and the average number of hours worked annually per capita, thus between kilograms of oil equivalent and hours. Isn’t it measuring the distance between apples and oranges? Well, yes, it is, and when you run a grocery store, knowing the coherence between your apples and your oranges can come in handy, for one. In a neural network, variables are standardized, usually over their respective maximums, and therefore both apples and oranges are measured on the same scale, for two.
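A compact sketch of that avgE computation, assuming the variables of a given experimental round have already been standardized over their respective maximums (the numbers below are made up):

```python
import numpy as np

def avg_euclidean_coherence(row):
    # Average Euclidean distance that each i-th variable keeps from the k = n - 1
    # remaining variables within one experimental round: the avgE meta-datum.
    n = row.size
    out = np.empty(n)
    for i in range(n):
        others = np.delete(row, i)
        # For single values, [(x_i - x_l)^2]^0.5 is just the absolute difference.
        out[i] = np.sqrt((row[i] - others) ** 2).mean()
    return out

# Hypothetical round: 6 variables standardized over their maximums,
# so apples and oranges sit on the same 0..1 scale.
round_j = np.array([0.82, 0.34, 0.57, 0.91, 0.12, 0.66])
print("avgE per variable in this round:", np.round(avg_euclidean_coherence(round_j), 3))
```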

The method needs to be rooted in theory, which has two levels: general and subject-specific. At the general level, I need acceptably solid theoretical basis for positing the working hypothesis, as phrased out in the preceding paragraph, to any given set of empirical, socio-economic data. Subject-specific theory is supposed to allow interpreting the results of empirical research as conducted according to the above-discussed method.

General theory revolves around four core concepts, namely those of: intelligent structure, chain of states, collective orientation, and social roles as mirroring phenomena for quantitative socio-economic variables. Subject-specific theory, on the other hand, is pertinent to the general issue of energy-related policies, and to their currently most tangible manifestation, i.e., to environmentally friendly sources of energy.

The theoretical concept of intelligent structure, such as I use it in my research, is mostly based on another concept, known from evolutionary biology, namely that of adaptive walk in rugged landscape, combined with the phenomenon of tacit coordination. We, humans, do things together without being fully aware we are doing them together, or even whilst thinking we oppose each other (e.g. Kuroda & Kameda 2019[1]). That tacit coordination gives a society the capacity for social evolutionary tinkering (Jacob 1977[2]), such that the given society displays social change akin to an adaptive walk in rugged landscape (Kauffman & Levin 1987[3]; Kauffman 1993[4]; Nahum et al. 2015[5]).

Each distinct state of the given society (e.g. different countries in the same time or different moments in time as regards the same country) is interpreted as a vector of observable properties, and each empirical instance of that vector is a 1-mutation-neighbour to at least one other instance. All the instances form a space of social entities. In the presence of external stressor, each such mutation (each entity) displays a given fitness to achieve the optimal state, regarding the stressor in question, and therefore the whole set of social entities yields a complex vector of fitness to cope with the stressor.

The assumption of collective intelligence means that each social entity is able to observe itself as well as other entities, so as to produce social adaptation for achieving optimal fitness. Social change is an adaptive walk, i.e. a set of local experiments, observable to each other and able to learn from each other’s observed fitness. The resulting path of social change is by definition uneven, whence the expression ‘adaptive walk in rugged landscape’. There is a strong argument that such adaptive walks occur at a pace proportional to the complexity of social entities involved. The greater the number of characteristics involved, the greater the number of epistatic interactions between them, and the more experiments it takes to have everything more or less aligned for coping with a stressor.
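For readers who want to see what such an adaptive walk looks like in silico, here is a bare-bones, NK-flavoured sketch in Python; every parameter (N traits, K epistatic neighbours, the number of steps) is arbitrary and only illustrates the Kauffman-style mechanism cited above, not a model of any actual society:

```python
import numpy as np

rng = np.random.default_rng(123)

N, K = 12, 3   # N binary traits, each interacting with K neighbours (invented sizes)
# Random fitness contribution of every trait, given its own state and that of K neighbours.
contrib = rng.uniform(0.0, 1.0, size=(N, 2 ** (K + 1)))

def fitness(genome):
    total = 0.0
    for i in range(N):
        neighbours = [genome[(i + d) % N] for d in range(K + 1)]   # trait i plus K neighbours
        index = int("".join(map(str, neighbours)), 2)
        total += contrib[i, index]
    return total / N

genome = rng.integers(0, 2, N)          # one social entity = one vector of traits
for step in range(200):
    i = rng.integers(0, N)              # try a 1-mutation neighbour
    candidate = genome.copy()
    candidate[i] ^= 1
    if fitness(candidate) >= fitness(genome):
        genome = candidate              # keep the experiment if it is at least as fit
print("Fitness after the walk:", round(fitness(genome), 3))
```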

Somehow concurrently to the evolutionary theory, another angle of approach seems interesting, for solidifying theoretical grounds to my method: the swarm theory (e.g. Wood & Thompson 2021[6]; Li et al. 2021[7]). Swarm learning consists in shifting between different levels of behavioural coupling between individuals. When we know for sure we have everything nicely figured out, we coordinate, between individuals, by fixed rituals or by strongly correlated mutual reaction. As we have more and more doubts whether the reality which we think we are so well adapted to is the reality actually out there, we start loosening the bonds of behavioural coupling, passing through weakening correlation, and all the way up to random concurrence. That unbundling of social coordination allows incorporating new behavioural patterns into individual social roles, and then learning how to coordinate as regards that new stuff.   

As the concept of intelligent structure seems to have a decent theoretical base, the next question is: how the hell can I represent it mathematically? I guess that a structure is a set of connections inside a complex state, where complexity is understood as a collection of different variables. I think that the best mathematical construct which fits that bill is that of imperfect Markov chains (e.g. Berghout & Verbitskiy 2021[8]): there is a state of reality Xn = {x1, x2, …, xn}, which we cannot observe directly, whilst there is a set of observables {Yn} such that Yn = π(Xn), the π being a coding map of Xn. We can observe Xn only through the lens of Yn. That quite contemporary theory by Berghout and Verbitskiy sends us back to an older one, namely to the theory of g-measures (e.g. Keane 1972[9]), and all that falls into an even broader category of ergodic theory, which is the theory of what happens to complex systems when they are allowed to run for a long time. Yes, when we wonder what kind of adult our kids will grow up into, this is ergodic theory.
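A tiny illustration of that ‘observing through the lens of Yn’ idea: a hidden three-state chain watched through a deliberately lossy coding map, with both the transition matrix and the map invented by me:

```python
import numpy as np

rng = np.random.default_rng(5)

P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])          # transition probabilities of the hidden chain X_n
pi_map = {0: "low output", 1: "low output", 2: "high output"}   # lossy coding map Y_n = pi(X_n)

x = 0
hidden, observed = [], []
for _ in range(20):
    x = rng.choice(3, p=P[x])            # the hidden state evolves as a Markov chain
    hidden.append(x)
    observed.append(pi_map[x])           # we only ever see pi(X_n), never X_n itself

print(hidden)
print(observed)
```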

The adaptive walk of a human society in the rugged landscape of whatever challenges they face can be represented as a mathematical chain of complex states, and each such state is essentially a matrix: numbers in a structure. In the context of intelligent structures and their adaptive walks, it can be hypothesized that ergodic changes in the long-going, complex stuff about what humans do together happen with a pattern and are far from being random. There is a currently ongoing, conceptual carryover from biology to social sciences, under the general concept of evolutionary trajectory (Turchin et al. 2018[10]; Shafique et al. 2020[11]). That concept of evolutionary trajectory can be combined with the idea that our human culture pays particular attention to phenomena which make valuable outcomes, such as presented, for example, in the Interface Theory of Perception (Hoffman et al. 2015[12], Fields et al. 2018[13]). Those two theories taken together allow hypothesising that, as we collectively learn by experimenting with many alternative versions of our societies, we systematically privilege those particular experiments where specific social outcomes are being optimized. In other words, we can have objectively existing, collective ethical values and collective praxeological goals, without even knowing we pursue them.

The last field of general theory I need to ground in literature is the idea of representing the state of a society as a vector of probabilities associated with social roles. This is probably the wobbliest theoretical boat among all those which I want to have some footing in. Yes, social sciences have developed that strong intuition that humans in society form and endorse social roles, which allows productive coordination. As Max Weber wrote in his book ‘Economy and Society’: “But for the subjective interpretation of action in sociological work these collectivities must be treated as solely the resultants and modes of organization of the particular acts of individual persons, since these alone can be treated as agents in a course of subjectively understandable action”. The same intuition is to be found in Talcott Parsons’ ‘Social system’, e.g. in Chapter VI, titled ‘The Learning of Social Role-Expectations and the Mechanisms of Socialization of Motivation’: “An established state of a social system is a process of complementary interaction of two or more individual actors in which each conforms with the expectations of the other(’s) in such a way that alter’s reactions to ego’s actions are positive sanctions which serve to reinforce his given need-dispositions and thus to fulfill his given expectations. This stabilized or equilibrated interaction process is the fundamental point of reference for all dynamic motivational analysis of social process. […] Every society then has the mechanisms which have been called situational specifications of role-orientations and which operate through secondary identifications and imitation. Through them are learned the specific role-values and symbol-systems of that particular society or sub-system of it, the level of expectations which are to be concretely implemented in action in the actual role”.

Those theoretical foundations laid, the further we go, the more emotions awaken as the concept of social role gets included in scientific research. I have encountered views (e.g. Schneider & Bos 2019[14]) that social roles, whilst being real, are a mechanism of oppression rather than of social development. On the other hand, it can be assumed that in the presence of demographic growth, when each consecutive generation brings a greater number of people than the previous one, we need new social roles. That, in turn, allows developing new technologies, instrumental to performing these roles (e.g. Gil-Hernández et al. 2017[15]).

Now, I pass to the subject-specific theoretical background of my method. I think that the closest cousin to my method, which I can find in recently published literature, is the MuSIASEM framework, where the acronym, deliberately weird, I guess, stands for ‘Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism’. This is a whole stream of research, where human societies are studied as giant organisms, and the ways we, humans, make and use energy are studied as a metabolic function of those giant bodies. The central assumption of the MuSIASEM methodology is that metabolic systems survive and evolve by maxing out on energy efficiency. The best metabolism for an economic system is the most energy-efficient one, which means the greatest possible amount of real output per unit of energy consumption. In terms of practical metrics, we talk about GDP per kg of oil equivalent in energy, or, conversely, about the kilograms of oil equivalent needed to produce one unit (e.g. $1 bln) of GDP. You can consult Andreoni 2020[16], Al-Tamimi & Al-Ghamdi 2020[17] or Velasco-Fernández et al. 2020[18], as some of the most recent examples of MuSIASEM being applied in empirical research.

This approach is strongly evolutionary. It assumes that any given human society can be in many different, achievable states, each state displaying a different energy efficiency. The specific state which yields the most real output per unit of energy consumed is the most efficient metabolism available to that society at the moment, and, logically, should be the short-term evolutionary target. Here, I dare to disagree fundamentally. In nature, there is no such thing as an evolutionary target. Evolution happens by successful replication. The catalogue of living organisms which we have around, today, are those which temporarily are the best at replicating themselves, and not necessarily those endowed with the greatest metabolic efficiency. There are many examples of species which, whilst being wonders of nature in terms of biological efficiency, are either endemic or extinct. Feline predators, such as the jaguar or the mountain lion, are wonderfully efficient in biomechanical terms, which translates into their capacity to use energy efficiently. Yet, their capacity to take over available habitats is not really an evolutionary success.

In biological terms, metabolic processes are a balance of flows rather than an intelligent striving for maximum efficiency. As Niebel et al. (2019[19]) explain it: ‘The principles governing cellular metabolic operation are poorly understood. Because diverse organisms show similar metabolic flux patterns, we hypothesized that a fundamental thermodynamic constraint might shape cellular metabolism. Here, we develop a constraint-based model for Saccharomyces cerevisiae with a comprehensive description of biochemical thermodynamics including a Gibbs energy balance. Non-linear regression analyses of quantitative metabolome and physiology data reveal the existence of an upper rate limit for cellular Gibbs energy dissipation. By applying this limit in flux balance analyses with growth maximization as the objective function, our model correctly predicts the physiology and intracellular metabolic fluxes for different glucose uptake rates as well as the maximal growth rate. We find that cells arrange their intracellular metabolic fluxes in such a way that, with increasing glucose uptake rates, they can accomplish optimal growth rates but stay below the critical rate limit on Gibbs energy dissipation. Once all possibilities for intracellular flux redistribution are exhausted, cells reach their maximal growth rate. This principle also holds for Escherichia coli and different carbon sources. Our work proposes that metabolic reaction stoichiometry, a limit on the cellular Gibbs energy dissipation rate, and the objective of growth maximization shape metabolism across organisms and conditions’.

Therefore, if we translate the principles of biological metabolism into those of economics and energy management, the energy efficiency of any given society is a temporary balance achieved under constraint. Whilst those states of society which clearly favour excessive dissipation of energy are not tolerable in the long run, energy efficiency is a by-product of the striving to survive and replicate, rather than an optimizable target state. Human societies are far from being optimally energy efficient for the simple reason that we have plenty of energy around, and, with the advent of renewable sources, we face even less of a constraint pushing us to optimize energy efficiency.

We, humans, survive and thrive by doing things together. The kind of efficiency that allows maxing out on our own replication is efficiency in coordination. This is why we have all that stuff of social roles, markets, institutions, laws and whatnot. These are our evolutionary orientations, because we can see immediate results thereof in terms of new humans being around. A stable legal system, with a solid centre of political power in the middle of it, is a well-tested way of minimizing human losses due to haphazard violence. Once a society achieves that state, it can even move from place to place, as local resources get depleted.

I think I have just nailed down one of my core theoretical contentions. The originality of my method is that it allows studying social change as collectively intelligent learning, whilst remaining very open as to what this learning is exactly about. My method is essentially evolutionary, whilst avoiding the traps of evolutionary metaphysics, such as hypothetical evolutionary targets. I can present my method and my findings as a constructive theoretical polemic with the MuSIASEM framework.


[1] Kuroda, K., & Kameda, T. (2019). You watch my back, I’ll watch yours: Emergence of collective risk monitoring through tacit coordination in human social foraging. Evolution and Human Behavior, 40(5), 427-435. https://doi.org/10.1016/j.evolhumbehav.2019.05.004

[2] Jacob, F. (1977). Evolution and tinkering. Science, 196(4295), 1161-1166

[3] Kauffman, S., & Levin, S. (1987). Towards a general theory of adaptive walks on rugged landscapes. Journal of theoretical Biology, 128(1), 11-45

[4] Kauffman, S. A. (1993). The origins of order: Self-organization and selection in evolution. Oxford University Press, USA

[5] Nahum, J. R., Godfrey-Smith, P., Harding, B. N., Marcus, J. H., Carlson-Stevermer, J., & Kerr, B. (2015). A tortoise–hare pattern seen in adapting structured and unstructured populations suggests a rugged fitness landscape in bacteria. Proceedings of the National Academy of Sciences, 112(24), 7530-7535, www.pnas.org/cgi/doi/10.1073/pnas.1410631112    

[6] Wood, M. A., & Thompson, C. (2021). Crime prevention, swarm intelligence and stigmergy: Understanding the mechanisms of social media-facilitated community crime prevention. The British Journal of Criminology, 61(2), 414-433.  https://doi.org/10.1093/bjc/azaa065

[7] Li, M., Porter, A. L., Suominen, A., Burmaoglu, S., & Carley, S. (2021). An exploratory perspective to measure the emergence degree for a specific technology based on the philosophy of swarm intelligence. Technological Forecasting and Social Change, 166, 120621. https://doi.org/10.1016/j.techfore.2021.120621

[8] Berghout, S., & Verbitskiy, E. (2021). On regularity of functions of Markov chains. Stochastic Processes and their Applications, Volume 134, April 2021, Pages 29-54, https://doi.org/10.1016/j.spa.2020.12.006

[9] Keane, M. (1972). Strongly mixing g-measures. Inventiones mathematicae, 16(4), 309-324

[10] Turchin, P., Currie, T. E., Whitehouse, H., François, P., Feeney, K., Mullins, D., … & Spencer, C. (2018). Quantitative historical analysis uncovers a single dimension of complexity that structures global variation in human social organization. Proceedings of the National Academy of Sciences, 115(2), E144-E151. https://doi.org/10.1073/pnas.1708800115

[11] Shafique, L., Ihsan, A., & Liu, Q. (2020). Evolutionary trajectory for the emergence of novel coronavirus SARS-CoV-2. Pathogens, 9(3), 240. https://doi.org/10.3390/pathogens9030240

[12] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[13] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213. https://doi.org/10.1016/j.cogsys.2017.10.003

[14] Schneider, M. C., & Bos, A. L. (2019). The application of social role theory to the study of gender in politics. Political Psychology, 40, 173-213. https://doi.org/10.1111/pops.12573

[15] Gil-Hernández, C. J., Marqués-Perales, I., & Fachelli, S. (2017). Intergenerational social mobility in Spain between 1956 and 2011: The role of educational expansion and economic modernisation in a late industrialised country. Research in social stratification and mobility, 51, 14-27. http://dx.doi.org/10.1016/j.rssm.2017.06.002

[16] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304

[17] Al-Tamimi & Al-Ghamdi (2020). Multiscale integrated analysis of societal and ecosystem metabolism of Qatar. Energy Reports, 6, 521-527. https://doi.org/10.1016/j.egyr.2019.09.019

[18] Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020). A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562. https://doi.org/10.1016/j.esr.2020.100562

[19] Niebel, B., Leupold, S., & Heinemann, M. (2019). An upper limit on Gibbs energy dissipation governs cellular metabolism. Nature Metabolism, 1, 125-132. https://doi.org/10.1038/s42255-018-0006-7

The art of pulling the right lever

I dig into the idea of revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, in order to resubmit it to the journal Applied Energy, by somehow fusing it with two other, unpublished pieces of my writing, namely: ‘Behavioural absorption of Black Swans: simulation with an artificial neural network’, and ‘The labour-oriented, collective intelligence of ours: Penn Tables 9.1 seen through the eyes of a neural network’.

I am focusing on one particular aspect of that revision by recombination, namely on comparing the empirical datasets which I used for each piece of research in question. This is an empiricist approach to scientific writing: I assume that points of overlap, as well as possible synergies, boil down, at the end of the day, to overlaps and synergies between the respective empirical bases of my different papers.

In ‘Climbing the right hill […]’, my basic dataset consisted of m = 300 ‘country-year’ observations, in the timeframe from 2008 through 2017, and covering the following countries: Belgium, Bulgaria, Czechia, Denmark, Germany, Estonia, Ireland, Greece, Spain, France, Croatia, Italy, Cyprus, Latvia, Lithuania, Luxembourg, Hungary, Malta, Netherlands, Austria, Poland, Portugal, Romania, Slovenia, Slovakia, Finland, Sweden, United Kingdom, Norway, and Turkey. The scope of variables covered is essentially that of Penn Tables 9.1, plus some variables from other sources, pertinent to the market of electricity, to the energy sector in general, and to technological change, namely:

>> The price fork, in €, between the retail price of electricity paid by households and very small institutional entities, on the one hand, and the prices paid by big institutional consumers, on the other

>> The capital value of that price fork, in € mln, thus the difference in prices multiplied by the quantity of electricity consumed

>> Total consumption of energy in the country (thousands of tonnes of oil equivalent)

>> The percentage share of electricity in the total consumption of energy

>> The percentage share of renewable sources in the total output of electricity

>> The number of resident patent applications per country per year

>> The coefficient of fixed assets per 1 resident patent application

>> The coefficient of resident patent applications per 1 million people

The full set, in Excel format, is accessible via the following link: https://discoversocialsciences.com/wp-content/uploads/2019/11/Database-300-prices-of-electricity-in-context.xlsx . I also used a recombination of that database, made of m = 3000 randomly stacked records from the m = 300 set, just in order to check the influence of the order of ‘country-year’ observations upon the results I obtained.
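A quick, hedged sketch of how such a randomly stacked set can be produced: the file name below is the one from the link above, whilst the exact stacking routine is my assumption about one plausible way of doing it, not a verbatim recipe from the manuscript.

```python
# One plausible way of building the m = 3000 "randomly stacked" set from the
# m = 300 set: stack 10 random permutations of the same records, so that the
# information content stays the same and only the order of observations changes.
import pandas as pd

original = pd.read_excel("Database-300-prices-of-electricity-in-context.xlsx")

stacked = pd.concat(
    [original.sample(frac=1.0, random_state=seed) for seed in range(10)],
    ignore_index=True,
)
assert len(stacked) == 10 * len(original)   # m = 3000
```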

In the two other manuscripts, namely in ‘The behavioural absorption of Black Swans […]’ and in ‘The labour-oriented, collective intelligence of ours […]’, I used one and the same empirical database, made of m = 3006 ‘country-year’ records, all selected from Penn Tables 9.1, with the criterion of selection being the completeness of information. In other words, I kicked out of Penn Tables 9.1 all the rows with empty cells, and what remains is the m = 3006 set.
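That selection rule is easy to reproduce. Here is a minimal sketch, assuming the Penn World Table 9.1 is available locally as an Excel file with a ‘Data’ sheet; the file name, the sheet name and the columns kept are my assumptions.

```python
# Keep only the Penn World Table 9.1 rows with complete information,
# i.e. drop every country-year record that has at least one empty cell.
import pandas as pd

pwt = pd.read_excel("pwt91.xlsx", sheet_name="Data")   # assumed file and sheet names
complete = pwt.dropna()                                # rows with no empty cells
print(len(complete))   # with the columns I kept, this is the m = 3006 set
```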

As I attempt to make some sort of cross-analysis between my results from those three papers, one crossing is obvious. Variables pertinent to the market of labour, i.e. the average number of hours worked per person per year (AVH), the percentage share of labour compensation in the gross national income (LABSH), and the indicator of human capital (HC), informative about the average length of the educational path among professionally active people, seem to play a special role as collectively pursued outcomes. The special role of those three – AVH, LABSH, and HC – seems to be impervious to the presence or absence of the variables I added from other sources in ‘Climbing the right hill […]’. It also seems impervious to the geographical scope and the temporal window of observation.

The most interesting direction for further exploration seems to lie in the crossing of ‘Black Swans […]’ with ‘Climbing the right hill […]’. I take the structure from ‘Black Swans […]’ – namely the model where the optimization of an empirical variable impacts a range of social roles – and I feed into that model the dataset from ‘Climbing the right hill […]’. I observe the patterns of learning occurring in the perceptron, as I take different empirical variables as the optimized output.

Variables which are strong collective orientations – AVH, LABSH, and HC – display a special pattern of learning, different from other variables. Their local residual error (i.e. the arithmetical difference between the value of the neural activation function and the local empirical value at hand) swings with a wide amplitude, yet in a predictable cycle. It is a pattern of learning along the lines of ‘we make a lot of mistakes, then we minimize them, and then we repeat: a lot of mistakes followed by a period of accuracy’. Other variables, run through the same model, display something different: a general tendency towards minimal error, with occasional, pretty random bumps. Not much error, and not much of a visible cycle in learning.
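For the record, the kind of error series I am talking about can be produced with a very simple, single-neuron learner. The sketch below is my own simplified illustration of the mechanics, not the exact code behind either manuscript; `X_scaled` and `y_scaled` stand for standardised input and output data.

```python
# A simplified single sigmoid neuron trained online over standardised data:
# it records the local residual error (activation minus empirical value) at
# each step, i.e. the series whose amplitude and cyclicality I discuss above.
import numpy as np

def residual_series(X: np.ndarray, y: np.ndarray, lr: float = 0.05) -> np.ndarray:
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    errors = np.empty(len(y))
    for i, (x_i, y_i) in enumerate(zip(X, y)):
        out = 1.0 / (1.0 + np.exp(-x_i @ w))    # neural activation
        errors[i] = out - y_i                   # local residual error
        w -= lr * errors[i] * out * (1.0 - out) * x_i
    return errors

# A crude summary of the swing: np.ptp(residual_series(X_scaled, y_scaled)),
# computed separately with each candidate variable as the output y.
```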

The national societies which I study seem to orient themselves on outcomes associated with a wide, yet predictably cyclical amplitude of error, thus with abundant learning in a predictable cycle. There is one more thing. When optimizing variables relative to the market of labour – AVH, LABSH, and HC – the model from ‘Black Swans […]’ shows the relatively highest resilience in the incumbent social roles, i.e. those in place before social disruption starts.

Good. Something takes shape. I am reframing the method and the material I want to introduce in the revised version of ‘Climbing the right hill […]’, for the journal Applied Energy, and I add some results and provisional conclusions.

When I take the empirical material from Penn Tables 9.1, thus when I observe the otherwise bloody chaotic thing called ‘society’ through the lens of quantitative variables pertinent to the broadly understood realm of macroeconomics, that material shows some repetitive, robust properties. When I run it through a learning procedure, expressed in the form of a simple neural network, the learning centred on optimizing variables pertinent to the labour market (AVH, LABSH, HC), as well as on the index of prices in export (PL_X), yields artificial datasets more similar to the original one, in terms of Euclidean similarity, than any artificial dataset optimizing other variables. That phenomenological hierarchy seems to be robust both to modifications of scope and to those of spatial-temporal range. When I add variables pertinent to technological change and to the market of electricity, they obediently take their place in the rank, and don’t step forward. When I extend the geographical scope of observation from Europe to the whole world, and when I extend the window of observation from the initial {2008 ÷ 2017} to the longer {1954 ÷ 2017}, the same still holds.
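The similarity check itself is straightforward. Here is a minimal sketch, where `make_artificial_dataset` stands in for the neural simulation and is a placeholder name of mine, not a function from the manuscript.

```python
# Rank candidate output variables by how closely the artificial dataset,
# produced when that variable is the optimized outcome, sits to the original
# data in the Euclidean sense (smaller distance = greater similarity).
import numpy as np

def euclidean_distance(original: np.ndarray, artificial: np.ndarray) -> float:
    return float(np.linalg.norm(original - artificial))

# ranking = sorted(
#     (euclidean_distance(data, make_artificial_dataset(data, var)), var)
#     for var in variables
# )
# In my results, AVH, LABSH, HC and PL_X come out on top of that ranking.
```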

As I try to explain why it is so, and as I look for an empirical explanation, I build another neural network, where each empirical variable from the original dataset is, in turn, the optimized output, and optimization takes place by experimenting with a vector of probabilities assigned to a set of social roles, plus a random factor of disturbance. The pattern of learning is observed as the distribution of residual errors over the entire experimental sequence of phenomenal instances. In that different perspective, the same variables which seem to be privileged collective outcomes – PL_X, AVH, LABSH, and HC – display a specific pattern of learning: they swing broadly in their error, and yet they swing in a predictable cycle. When my experimental neural network learns on other variables, the pattern is different, with the curve of error being much calmer, less bumpy, and yet much less cyclical.
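To make that second experiment more tangible, here is a minimal sketch of it under my own simplifying assumptions: the number of roles, the learning rate and the scale of the disturbance are all arbitrary choices made for illustration, not parameters taken from the manuscripts.

```python
# The output is one (standardised) empirical variable; the input is a vector of
# probabilities of social roles plus a random disturbance; the observed object
# is the sequence of residual errors over the whole experimental run.
import numpy as np

rng = np.random.default_rng(42)
n_roles, n_steps = 20, 3000

roles = rng.dirichlet(np.ones(n_roles))            # probabilities of social roles, summing to 1
weights = rng.normal(scale=0.1, size=n_roles + 1)  # one weight per role + one for the disturbance
target = 0.6                                       # the standardised empirical variable to optimise
errors = np.empty(n_steps)

for t in range(n_steps):
    disturbance = rng.normal(scale=0.05)           # external random factor
    x = np.append(roles, disturbance)
    out = 1.0 / (1.0 + np.exp(-x @ weights))       # sigmoid activation
    errors[t] = out - target                       # residual error at this phenomenal instance
    weights -= 0.05 * errors[t] * out * (1.0 - out) * x
    # experimentation: nudge the role probabilities and re-normalise them
    roles = np.clip(roles + rng.normal(scale=0.01, size=n_roles), 1e-6, None)
    roles /= roles.sum()

# The distribution of `errors` is what gets compared across variables:
# wide, cyclical swings vs. a calm, nearly flat error curve.
```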

I return to my method and to my theoretical assumptions. I recapitulate. I start by assuming that social reality is essentially chaotic and not directly observable, yet I can make epistemological approximations of that thing and see how they work. In this specific piece of research, I make two such types of approximation, based on different assumptions. On the one hand, I assume that quantitative, commonly measured, socio-economic variables, such as those in Penn Tables 9.1, are partial expressions of change in that otherwise chaotic social reality, and we collect those values because they represent change in the collective outcomes which we value. On the other hand, I assume that social reality can be represented as a collection of social roles, in two distinct categories: the already existing, active social roles, accompanied by temporarily dormant, ready-to-be-triggered roles. Those social roles are observable as relative frequencies of occurrence, thus as the probability that any given individual endorses them.

I further assume that human societies are collectively intelligent structures, which, in turn, means that we collectively learn by experimenting with many alternative versions of ourselves. By the way, I have been wondering whether this is a hypothesis or an assumption, and I settled for assumption, because I do not really bring any direct proof thereof, and yet I make the claim. Anyway, with the assumption of collective intelligence, I can simulate two mutually correlated processes of learning through experimentation. On the one hand, among all the collective outcomes represented with quantitative socio-economic variables, we learn hierarchically, i.e. we optimize some of those outcomes in the first place, whilst treating the other ones as instrumental to that chief goal. On the other hand, we optimize each of those outcomes, represented with quantitative variables, by experimenting with the relative prevalence (i.e. probability of endorsement) of distinct social roles.

That general theoretical perspective is the foundation which I use both to build an empirical method of research, and to substantiate the claim that public policies and business strategies which stimulate a technological race, with a clear premium for winners and a clear penalty for losers, are likely to bring better results, especially in the long run, than policies and strategies aiming at erasing local idiosyncrasies and at creating uniformly distributed outcomes. My point is that the latter, i.e. policies oriented on nullifying local idiosyncrasies, lead either to the absence of idiosyncrasies, and, consequently, to the absence of different versions of ourselves to experiment with and learn from, or they simply prove inefficient, as they try to move the wrong lever in the machine.

Now, looking through another door inside my head, I am presenting below the structure of semestral projects I assign to my students, in the Summer semester 2021, in two different, and yet somehow concurrent courses: International Trade Policy in the major International Relations, and International Management in the major Management. You will see how I teach, and how I get a bit obsessive about digging into the same ideas, over and over again.

The complex project to graduate the International Management course, Summer semester 2021

Our common goal: develop your understanding of the transition from the domestically based business structure to an international one.

Your goal: prepare a developed, well-informed business plan, for the development of a business, from the level of one national market, to the international level. That business plan is your semestral project, which you graduate the course of International Management with.

You can see this course as an opportunity to put together and utilize the partial learning you have from all the individual subject courses you have had so far.

Your deadline is June 25th, 2021. 

Definition – international scale of a business means that it becomes an economically significant choice to branch the operations into or move them completely to foreign markets. In other words, the essential difference between domestic management and international management – at least the difference we will focus on in this course – is that in domestic management the initial place of incorporation determines the strategy, whilst in international management the geographical location of operations and incorporation(s) is determined by strategic choices. 

You work with a business concept of your own, or you take one of the pre-prepared business plans available at the digital platform. These are graduation business plans prepared by students from other groups, in the Winter semester 2020/2021. In other words, you develop either on your own idea, or on someone else’s idea. One of the things you will find out is that different business concepts have different potential, and follow very different paths for going to the international level.

Below, you will find the list of those pre-prepared business plans. They are coupled with links to the archives of my blog, where you can download them from. Still, you can find them as well in the ‘Files’ section of the group ‘International Management’, folder ‘Class materials’.

>> Pizzeria >> https://discoversocialsciences.com/wp-content/uploads/2021/03/Pizzeria-Business-plan.docx

>> Pancake Café >> https://discoversocialsciences.com/wp-content/uploads/2021/03/Pancake-Cafe-Business-Plan.pptx

>> Never Alone >> https://discoversocialsciences.com/wp-content/uploads/2021/03/Never-Alone-business-plan.pdf

>> 3D Virtual Fitting Room >> https://discoversocialsciences.com/wp-content/uploads/2021/03/3D-Virtual-Fitting-Room-Business-Plan.docx

>> ToyBox >> https://discoversocialsciences.com/wp-content/uploads/2021/03/ToyBox-Business-Plan.pdf

>> Chess Manufacturing (semi-finished, interesting to develop from that form) >> https://discoversocialsciences.com/wp-content/uploads/2021/03/Chess-Business-Plan-Semi-Done.docx

>> Second-hand market for luxury goods >> https://discoversocialsciences.com/wp-content/uploads/2021/03/Business-Plan-second-hand-market-for-luxury-fashion.docx

We will abundantly use real-life cases of big, internationally branched businesses as our business models. Some of them are those which you already know from past semesters, whilst others might be new to you:

>> Netflix >> https://ir.netflix.net/ir-overview/profile/default.aspx

>> Tesla >> https://ir.tesla.com/

>> PayPal >> https://investor.pypl.com/home/default.aspx

>> Solar Edge >> https://investors.solaredge.com/investor-overview

>> Novavax >> https://ir.novavax.com/investor-relations

>> Pfizer >> https://investors.pfizer.com/investors-overview/default.aspx

>> Starbucks >> https://investor.starbucks.com/ir-home/default.aspx

>> Amazon >> https://ir.aboutamazon.com/overview/default.aspx

That orientation on real business cases means that the course of International Management is, from your point of view, a course of market research, business planning, and basic empirical science, more than a theoretical course. This is precisely what we are going to be doing in our classes: market research, business planning, and basic empirical science. 

You can benefit from running yourself through my online course of business planning, to be found at https://discoversocialsciences.com/the-course-of-business-planning/ .

The basic structure of the business plan which you will prepare is the following:

  • Section 1: Executive summary. This is a summary of the essentials, developed in further sections of the business plan. Particular focus on why and how going international with that business concept.
  • Section 2: Description of the business concept. How do we create and capture value added in that thing? What kind of value added is that? What are the goods we market? Who are our target customers? What kind of really existing, operational business models, observable in actually operating companies, do we emulate in that business?
  • Section 3: Market research. We focus on collecting and presenting information on our customers, and our competitors.
  • Section 4: Organization. How are we going to structure human work in that business? How many people do we need, and what kind of organizational structure should we make them work in? What is the estimate, total payroll per month and per year, in that organization?
  • Section 5: The strategy for going international. Can we develop an original, proprietary technology, and apply it in different national markets? Can we benefit from the economies of scale, or those of scope, as we go international? Can we optimize and standardize our business concept into a franchise, attractive for smaller partners in foreign markets? << this is the ‘INTERNATIONAL MANAGEMENT’ part of that business plan. Now, you demonstrate your understanding of what international management is.
  • Section 6: The corporate business structure. Do you see that business as one compact business entity, which operates internationally via digital platforms and contracts with external partners, or, conversely, would you rather create a network of affiliated companies in separate national (regional?) markets, all tied to and controlled by one mother company? Develop on those options and justify your choice. 
  • Section 7: The financial plan. Plan of revenues, costs, and of the resulting profit/loss for 3 years ahead. The balance sheet we need to start with, and its prospective changes over the next 3 years. The prospective cash-flow.

Guidelines for the graduation project in International Trade Policy Summer semester 2021

You graduate the course of ‘International Trade Policy’ by preparing a project. Your project will be a business report, the kind you could have to prepare if you were an assistant to the CEO of a big firm, or to a prime minister. You are supposed to prepare a report on the impact of trade on individual businesses and national economies, in a sort of controlled economic experiment, limited in scope and in space. Your goal in the preparation of that project is to develop an active understanding of international trade.

You can access the files provided as additional materials for this assignment in two ways. Below in this document, I provide links to the archives of my blog, ‘Discover social sciences’. On the other hand, all those files are to be found in the ‘Files’ section of the ‘International Trade Policy’ group, in the folder ‘Class Materials’.

Your report will have two sections. In Section A, you study the impact of international trade on a set of businesses. Your business cases encompass real companies, some of which you already know from the course of microeconomics – Tesla, Netflix, Amazon, H&M – as well as new business entities which can emerge as per the business plans introduced below (these are real business plans made by students in other groups in the Winter semester 2020/2021).  

In Section B of your report, imagine that you are the government of, respectively, Poland, Ukraine, and France. Imagine that the businesses from Section A grow in your country. Given the macroeconomic characteristics of your national economy, which types of those businesses are likely to grow the most, and which are not really a good fit? As a country, as those businesses grow, would you see your exports grow, or would it rather be an increase in your imports? How would it affect your overall balance of trade? What would you do as a government, and why?

Additional guidelines and materials for the Section A of your report:

You can make a simplifying assumption that businesses can develop with and through trade along two different, although not exactly exclusive paths:

  • Case A: there is a technology with potential for growth, which can be developed through expanding its target market, with exports or with franchise
  • Case B: the given business can develop significant economies of scale and scope, and trade, i.e. exports and/or imports, is a way to achieve that

You can benefit from studying the model contract of sales in international trade: https://discoversocialsciences.com/wp-content/uploads/2020/02/sale_of_perishables_model_contract.pdf

… as well as studying the so-called Incoterms >> https://discoversocialsciences.com/wp-content/uploads/2020/03/Incoterms.pdf , which are standard conditions of delivery in international trade.

The early business concepts developed by students from other groups, which you are supposed to assess as to their capacity to grow through trade, are:

The investor relations sites of the real, big companies, whose development with trade you are supposed to study as well:

Additional guidelines and materials for the Section B of your report:

The so-called trade profiles of countries, accessible with the World Trade Organization: https://www.wto.org/english/res_e/publications_e/trade_profiles20_e.htm

Example of an international trade agreement, namely that between South Korea and Australia: https://discoversocialsciences.com/wp-content/uploads/2021/03/korea-australia-free-trade-agreement.pdf

Macroeconomic profiles of Poland, Ukraine, and France >> https://discoversocialsciences.com/wp-content/uploads/2021/03/Macroeconomic-Profiles.xlsx