Combinatorial meaning and the cactus

My editorial on YouTube

I am back to blogging, after over two months of pause. This winter semester I am going, probably, for a record workload in terms of classes: 630 hours in total. October and November have been an immersion time, when I had to get into gear for that amount of teaching. I noticed one thing that I haven’t exactly been aware of so far, or maybe not as distinctly as I am now: when I teach, I love freestyling about the topic at hand. Whatever set of nice slides I prepare for a given class, you can bet on me going off the beaten track and into the wilderness of intellectual quest, like by mid-class. I mean, I have nothing against PowerPoint, but at some point it becomes just so limiting… I remember that conference, one year ago, when the projector went dead during my panel (i.e. during the panel when I was supposed to present my research). I remember that mixed, shared feeling of relief and enjoyment in the people present in the room: ‘Good. Finally, no slides. We can like really talk science’.

See? Once again, I am going off track, and that in just one paragraph of writing. You can see what I mean when I refer to me going off track in class. Anyway, I discovered one more thing about myself: freestyling and sailing uncharted intellectual waters has a cost, and this is a very clear and tangible biological cost. After a full day of teaching this way I feel as if my brain was telling me: ‘Look, bro. I know you would like to write a little, but sorry: no way. Them synapses are just tired. You need to give me a break’.

There is a third thing I have discovered about myself: that intense experience of teaching makes me think a lot. I cannot exactly put all this in writing on the spot, for lack of fresh neurotransmitters available; still, all that thinking tends to crystallize over time, and with some patience I can access it later. Later means now, as it seems. I feel that I have crystallized enough and I can start to pull it out into the daylight. The « it » consists, mostly, in a continuous reflection on collective intelligence. How are we (possibly) smart together?

As I have been thinking about it, three events combined and triggered in me a string of more specific questions. I watched another podcast featuring Jordan Peterson, whom I am a big fan of, and who raised the topic of the neurobiological context of meaning. How does our brain make meaning, and how does it find meaning in sensory experience? On the other hand, I have just finished writing the manuscript of an article on the energy-efficiency of national economies, which I have submitted to the ‘Energy Economics’ journal, and which, almost inevitably, made me work with numbers and statistics. As I had been doing that empirical research, I found out something surprising: the most meaningful econometric results came to the surface when I transformed my original data into local coefficients of an exponential progression that hypothetically started in 1989. Long story short, these coefficients are essentially growth rates, which behave in a peculiar way due to their arithmetical structure: they decrease very quickly over time, whatever the underlying raw empirical observation, as if they represented weakening shock waves sent out by an explosion in 1989.

Different transformations of the same data, in that research of mine, produced different statistical meanings. I am still working out a real understanding of what that exactly means, by the way. As I was putting that together with Jordan Peterson’s thoughts on meaning as a biological process, I asked myself: what is the exact meaning of the fact that we, as a scientific community, assign meaning to statistics? How is it connected with collective intelligence?

I think I need to start more or less where Jordan Peterson moves, and ask ‘What is meaning?’. No, not quite. The ontological type, I mean the ‘What?’ type of question, is a mean beast. Something like a hydra: you cut off one head, namely you explain the thing, you think that Bob’s your uncle, and a new head pops up, like out of nowhere, and it bites you, you know where. The ‘How?’ question is a bit more amenable. This one is like one of those husky dogs. Yes, it is semi-wild, and yes, it can bite you, but once you tame it, and teach it to pull that sleigh, it will just pull. So I ask ‘How is meaning?’. How does meaning occur?

There is a particular type of being smart together, which I have been specifically interested in, for like the last two months. It is the game-based way of being collectively intelligent. The theory of games is a well-established basis for studying human behaviour, including that of whole social structures. As I was thinking about it, there is a deep reason for that. Social interactions are, well, interactions. It means that I do something and you do something, and those two somethings are supposed to make sense together. They really do on one condition: my something needs to be somehow conditioned by how your something unfolds, and vice versa. When I do something, I come to a point when it becomes important for me to see your reaction to what I do, and only once I have seen it will I develop my action further.

Hence, I can study collective action (and interaction) as a sequence of moves in a game. I make my move, and I stop moving, for a moment, in order to see your move. You make yours, and it triggers a new move in me, and so the story goes further on in time. We can experience it very vividly in negotiations. With any experience in having serious talks with other people, i.e. in negotiating something, we know that it is pretty counterproductive to keep pushing our point in an unbroken stream of speech. It is much more functional to pace our strategy into separate strings of argumentation, and between them, we wait for what the other person says. I have already given a first theoretical go at the thing in « Couldn’t they have predicted that? ».

This type of social interaction, when we pace our actions into game-like moves, is a way of being smart together. We can come up with new solutions, or with the understanding of new problems – or a new understanding of old problems, as a matter of fact – and we can do it starting from positions of imperfect agreement and imperfect coordination. We try to make (apparently) divergent points, or we pursue (apparently) divergent goals, and still, if we agree to wait for each other’s reaction, we can coordinate and/or agree about those divergences, so as to actually figure out, and do, some useful s**t together.

What is the connection with the results of my quantitative research? Let’s imagine that we play a social game, and each of us makes their move, and then waits for the moves of other players. The state of the game at any given moment can be represented as the outcome of past moves. The state of reality is like a brick wall, made of bricks laid one by one, and the state of that brick wall is the outcome of the past laying of bricks. In the general theory of science, this is called hysteresis. There is a mathematical function reputed to represent that thing quite nicely: the exponential progression. On a timeline, I define equal intervals. To each period of time, I assign a value y(t) = e^(t·a), where ‘t’ is the ordinal of the time period, ‘e’ is a mathematical constant, the base of the natural logarithm, e ≈ 2.71828, and ‘a’ is what we call the exponential coefficient.
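Just to make the arithmetic palpable, here is a minimal numeric sketch of that progression; the coefficient a = 0.03 is picked out of thin air, purely for illustration:

```python
import math

# Minimal sketch of the exponential progression y(t) = e^(t*a)
# over equal time intervals; a = 0.03 is purely illustrative.
a = 0.03
for t in range(1, 6):
    y = math.exp(t * a)
    print(f"t = {t}, y(t) = {y:.4f}")
```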

There is something else to that y = e^(t·a) story. If we think in terms of a broader picture, and assume that time is essentially what we imagine it is, the ‘t’ part can be replaced by any number we imagine. Then Euler’s formula steps in: e^(i·x) = cos x + i·sin x. If you paid attention in math classes, at high school, you might remember that sine and cosine, the two trigonometric functions, have a peculiar property. As they refer to angles, at the end of the day they refer to a full circle of 360°. It means they go in a circle, thus in a cycle, only they go in a perfectly negative correlation: when the sine goes one unit one way, the cosine goes one unit exactly the other way round, etc. We can think about each occurrence we experience – the ‘x’ – as a nexus of two, mutually opposing cycles, and they can be represented as, respectively, the sine and the cosine of that occurrence ‘x’. When I grow in height (well, when I used to), my current height can be represented as the nexus of natural growth (sine), and natural depletion with age (cosine), that sort of thing.
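For whoever wants to poke at that identity numerically, here is a quick check; the value of x below is arbitrary:

```python
import cmath
import math

# Euler's formula: e^(i*x) = cos(x) + i*sin(x); x = 1.25 is an arbitrary example.
x = 1.25
left = cmath.exp(1j * x)
right = complex(math.cos(x), math.sin(x))
print(left, right, abs(left - right) < 1e-12)
```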

Now, let’s suppose that we, as a society, play two different games about energy. One game makes us more energy efficient, ‘cause we know we should (see Settlement by energy – can renewable energies sustain our civilisation?). The other game makes us max out on our intake of energy from the environment (see Technological Change as Intelligent, Energy-Maximizing Adaptation). At any given point in time, the incremental change in our energy efficiency is the local equilibrium between those two games. Thus, if I take the natural logarithm of our energy efficiency at a given point in space-time, i.e. of the coefficient of GDP per kg of oil equivalent in energy consumed, that natural logarithm is the outcome of those two games, or, from a slightly different point of view, it descends from the number of consecutive moves made (the ordinal of the time period we are currently in), and from a local coefficient – the equivalent of ‘i’ in Euler’s formula – which represents the pace of building up the outcomes of past moves in the game.
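Here is a minimal sketch of that decomposition, with made-up numbers standing in for the real data; the efficiency figure and the time ordinal below are purely hypothetical:

```python
import math

# Treat the natural logarithm of energy efficiency (GDP per kg of oil
# equivalent) as the exponent t*a of y = e^(t*a), and recover the local
# coefficient a = ln(efficiency) / t. Values below are hypothetical.
efficiency = 5.8   # GDP per kg of oil equivalent (illustrative)
t = 25             # ordinal of the time period (illustrative)
a_local = math.log(efficiency) / t
print(f"local coefficient a = {a_local:.5f}")
```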

I go back to that ‘meaning’ thing. The consecutive steps ‘t’ in an exponential progression y(t) = e^(t·a) correspond to successive rounds of moves in the games we play. There is a core structure to observe: the length of what I call ‘one move’, meaning a sequence of actions that each person involved in the interaction carries out without pausing and waiting for the reaction observable in other people in the game. When I say ‘length’, it involves a unit of measurement, and here, I am quite open. It can be a length of time, or the number of distinct actions in my sequence. The length of one move in the game determines the pace of the game, and this, in turn, sets the timeframe for the whole game to produce useful results: solutions, understandings, coordinated action etc.

Now, where the hell is any place for ‘meaning’ in all that game stuff? My view is the following: in social games, we sequence our actions into consecutive moves, with some waiting-for-reaction time in between, because we ascribe meaning to those sub-sequences that we define as ‘one move’. The way we process meaning matters for the way we play social games.

I am a scientist (well, I hope), and for me, meaning occurs very largely as I read what other people have figured out. So I stroll down the discursive avenue named ‘neurobiology of meaning’, welcomingly lit by the lampposts of Science Direct. I stop by an article by Lee M. Pierson and Monroe Trout, entitled ‘What is consciousness for?’[1]. The authors formulate a general hypothesis, unfortunately not supported (yet?) with direct empirical check, that consciousness emerged, back in the day, I mean like really back in the day, as cognitive support for volitional movement, and has evolved, since then, into more elaborate applications. Volitional movement is non-automatic, i.e. decisions have to be made in order for the movement to have any point. It requires quick assemblage of data on the current situation, and consciousness, i.e. the awareness of many abstract categories at the same time, could be the solution.

According to that approach, meaning occurs as a process of classification in the neurologically stored data that we need to use virtually simultaneously in order to do something as fundamental as reaching for another can of beer. Classification of data means grouping into sets. You have a random collection of data from sensory experience, like a homogenous cloud of information. You know, the kind you experience after a particularly eventful party. Some stronger experiences stick out: the touch of cold water on your naked skin, someone’s phone number written on your forearm with lipstick etc. A question emerges: should you call this number? It might be your new girlfriend (i.e. the girlfriend whom you don’t consciously remember as your new one but whom you’d better call back if you don’t want your car splashed with acid), or it might be a drug dealer whom you’d better not call back. You need to group the remaining data into functional sets so as to take the right action.

So you group, and the challenge is to make the right grouping. You need to collect the not-quite-clear-in-their-meaning pieces of information (Whose lipstick had that phone number been written with? Can I associate a face with the lipstick? For sure, the right face?). One grouping of data can lead you to a happy life, another one can lead you into deep s**t. It could be handy to sort of quickly test many alternative groupings for their elementary coherence, i.e. hold all that data in front of you, for a moment, and contemplate flexibly many possible connections. Volitional movement is very much about that. You want to run? Good. It would be nice not to run into something that could hurt you, so it would be good to cover a set of sensory data, combining something present (what we see), with something we remember from the past (that thing on the 2 o’clock azimuth stings like hell), and sort of quickly turn and return all that information so as to steer clear of that cactus, as we run.

Thus, as I follow the path set by Pierson and Trout, meaning occurs as the grouping of data in functional categories, and it occurs when we need to do it quickly and sort of under pressure of getting into trouble. I am going onto the level of collective intelligence in human social structures. In those structures, meaning, i.e. the emergence of meaningful distinctions communicable between human beings and possible to formalize in language, would occur as said structures need to figure something out quickly and under uncertainty, and meaning would allow putting together the types of information that are normally compartmentalized and fragmented.

From that perspective, one meaningful move in a game encompasses small pieces of action which we intuitively guess we should immediately group together. Meaningful moves in social games are sequences of actions, which we feel like putting immediately back to back, without pausing and letting the other player do their thing. There is some sort of pressing immediacy in that grouping. We guess we just need to carry out those actions smoothly one after the other, in an unbroken sequence. Wedging an interval of waiting time in between those actions could put our whole strategy at peril, or we just think so.

When I apply this logic to energy efficiency, I think about business strategies regarding innovation in products and technologies. When we launch a new product, or implement a new technology, there are fixed patterns, of sorts, to follow. When you start beta testing a new mobile app, for example, you don’t stop in the middle of testing. You carry out the tests up to their planned schedule. When you start launching a new product (reminder: more products made on the same energy base mean greater energy efficiency), you keep launching until you reach some sort of conclusive outcome, like unequivocal success or failure. Social games we play around energy efficiency could very well be paced by these business-strategy-based moves.

I pick up another article, that by Friedemann Pulvermüller (2013[2]). The main thing I see right from the beginning is that apparently, neurology is progressively dropping the idea of one, clearly localised area in our brain, in charge of semantics, i.e. of associating abstract signs with sensory data. What we are discovering is that semantics engage many areas in our brain into mutual connection. You can find developments on that issue in: Patterson et al. 2007[3], Bookheimer 2002[4], Price 2000[5], and Binder & Desai 2011[6]. As we use words, thus as we pronounce, hear, write or read them, that linguistic process directly engages (i.e. is directly correlated with the activation of) sensory and motor areas of our brain. That engagement follows multiple, yet recurrent patterns. In other words, instead of having one mechanism in charge of meaning, we are handling different ones.

After reviewing a large bundle of research, Pulvermüller proposes four different patterns: referential, combinatorial, emotional-affective, and abstract semantics. Each time, the semantic pattern consists in one particular area of the brain acting as a boss who wants to be debriefed about something from many sources, and starts pulling together many synaptic strings connected to many places in the brain. Five different pieces of cortex come up recurrently as those boss-hubs, hungry for differentiated data, as we process words. They are: the inferior frontal cortex (iFC, so far most commonly associated with the linguistic function), the superior temporal cortex (sTC), the inferior parietal cortex (iPC), the inferior and middle temporal cortex (m/iTC), and finally the anterior temporal cortex (aTC). The inferior frontal cortex (iFC) seems to engage in the processing of words related to action (walk, do etc.). The superior temporal cortex (sTC) looks like it is seriously involved when words related to sounds are being used. The inferior parietal cortex (iPC) activates as words connect to space, and spatio-temporal constructs. The inferior and middle temporal cortex (m/iTC) lights up when we process words connected to animals, tools, persons, colours, shapes, and emotions. That activation is category specific, i.e. inside m/iTC, different Christmas trees start blinking as different categories among those are being named and referred to semantically. The anterior temporal cortex (aTC), interestingly, has not been associated yet with any specific type of semantic connections, and still, when it is damaged, semantic processing in our brain is generally impaired.

All those areas of the brain have other functions, besides that semantic one, and generally speaking, the kind of meaning they process is correlated with the kind of other things they do. The interesting insight, at this point, is the polyvalence of cortical areas that we call ‘temporal’, thus involved in the perception of time. Physicists insist very strongly that time is largely a semantic construct of ours, i.e. time is what we think there is rather than what really is, out there. In physics, what exists is rather a sequential structure of reality (things happen in an order) than what we call time. That review of literature by Pulvermüller indirectly indicates that time is a piece of meaning that we attach to sounds, colours, emotions, animals and people. Sounds come across as logical: they are sequences of acoustic waves. On the other hand, how is our perception of colours, or people, connected to our concept of time? This is a good one to ask, and a tough one to answer. What I would look for is recurrence. We identify persons as distinct ones as we interact with them recurrently. Autistic people frequently have that problem: when you put on a different jacket, they have a hard time accepting you are the same person. Identification of animals or emotions could follow the same logic.

The article discusses another interesting issue: the more abstract the meaning is, the more different regions of the brain it engages. The really abstract ones, like ‘beauty’ or ‘freedom’, are super Christmas-trees: they provoke involvement all over the place. When we do abstraction, in our mind, for example when writing poetry (OK, just good poetry), we engage a substantial part of our brain. This is why we can be lost in our thoughts: those thoughts, when really abstract, are really energy-consuming, and they might require shutting down some other functions.

My personal understanding of the research reviewed by Pulvermüller is that at the neurological level, we process three essential types of meaning. One consists in finding our bearings in reality, thus in identifying things and people around, and in assigning emotions to them. It is something like a mapping function. Then, we need to do things, i.e. to take action, and that seems to be a different semantic function. Finally, we abstract, thus we connect distant parcels of data into something that has no direct counterpart either in the mapped reality or in our actions.

I have an indirect insight, too. We have a neural wiring, right? We generate meaning with that wiring, right? Now, how is adaptation occurring, in that scheme, over time? Do we just adapt the meaning we make to the neural hardware we have, or is there a reciprocal kick, I mean from meaning to wiring? So far, neurological research has demonstrated that physical alteration in specific regions of the brain impacts semantic functions. Can it work the other way round, i.e. can recurrent change in the semantics being processed alter the hardware we have between our ears? For example, as we process a lot of abstract concepts, like ‘taxes’ or ‘interest rate’, can our brains adapt from generation to generation, so as to minimize the gradient of energy expenditure as we shift between levels of abstraction? If we could, we would become more intelligent, i.e. able to handle larger and more differentiated sets of data in a shorter time.

How does all of this translate into collective intelligence? Firstly, there seem to be layers of such intelligence. We can be collectively smart sort of locally – and then we handle those more basic things, like group identity or networks of exchange – and then we can (possibly) become collectively smarter at a more combinatorial level, handling more abstract issues, like multilateral peace treaties or climate change. Moreover, the gradient of energy consumed between the collective understanding of simple and basic things, on the one hand, and the overarching abstract issues, on the other, is a good predictor of the capacity of a given society to survive and thrive.

Once again, I am trying to associate this research in neurophysiology with my game-theoretical approach to energy markets. First of all, I recall the three game theorists co-awarded the economic Nobel prize in 1994, namely John Nash, John (János) Harsanyi, and Reinhard Selten. I start with the last one. Reinhard Selten claimed, and seems to have proven, that social games have a memory, and the presence of such memory is needed in order for us to be able to learn collectively through social games. You know those situations of tough talks, when the other person (or you) keeps bringing forth the same argumentation over and over again? This is an example of a game without much memory, i.e. without much learning. In such a game we repeat the same move, like a fish banging its head against the glass wall of an aquarium. Playing without memory is possible in just some games, e.g. tennis, or poker, if the opponent is not too tough. In other games, like chess, repeating the same move is not really possible. Such games force learning upon us.

Active use of memory requires combinatorial meaning. We need to know what is meaningful, in order to remember it as meaningful, and thus to consider it as valuable data for learning. The more combinatorial meaning is, inside a supposedly intelligent structure, such as our brain, the more energy-consuming that meaning is. Games played with memory and active learning could be more energy-consuming for our collective intelligence than games played without. Maybe that whole thing of electronics and digital technologies, so hungry for energy, is something that we, as a collective human intelligence, have put in place in order to learn more efficiently through our social games?

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful if you suggest two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Pierson, L. M., & Trout, M. (2017). What is consciousness for?. New Ideas in Psychology, 47, 62-71.

[2] Pulvermüller, F. (2013). How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in cognitive sciences, 17(9), 458-470.

[3] Patterson, K. et al. (2007) Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 8, 976–987

[4] Bookheimer, S. (2002). Functional MRI of language: new approaches to understanding the cortical organization of semantic processing. Annu. Rev. Neurosci. 25, 151–188

[5] Price, C.J. (2000) The anatomy of language: contributions from functional neuroimaging. J. Anat. 197, 335–359

[6] Binder, J.R. and Desai, R.H. (2011) The neurobiology of semantic memory. Trends Cogn. Sci. 15, 527–536

Short-term explosive coefficients

My editorial on YouTube

I knew it was coming: for me, the arrival of the winter semester at the university inevitably means a profound change in my rhythm of life and work. Over the whole academic year, i.e. from October through June, more than 70% of my teaching hours take place precisely between October 1st and January 30th. That more-than-70%, this year, means 630 hours of classes over 4 months.

Be that as it may, I intend to keep up at least a slow rhythm of writing on my blog. A slow rhythm is better than no rhythm at all. I know that blogging regularly gives me a sort of additional learning, like a supplementary process of information processing in my brain.

The two months that have just gone by have brought me to a few major realizations. I think the first of them is that teaching really is my element. ‘Well, look at that, the guy has just discovered America,’ you will laugh, ‘He has been a professor for twelve years and he has just discovered that he likes the job. Better late than never.’ Yes, it makes me laugh too, so let me explain. The first two weeks with that super-loaded teaching schedule, at the university, left me a bit irritated (and irritable, by the way). Then, one morning about ten days ago, I realized that the prospect of five hours of classes that particular day actually turned me on. Yes, it turned me on. I felt that extremely pleasant flow of adrenaline and serotonin, the mix of excitement and will to act.

Teaching, then, is what puts my nervous system into competition speed, so to say. Banal and fundamental at the same time. I want to approach the thing scientifically. Objectively, teaching in class is communication, and that is precisely what puts my central nervous system into that state of pleasant excitement. Subjectively, when I think about it, communication in class gives me two major sensations: that of connecting parcels of information very quickly into intelligible sentences, and that of saying those sentences to an audience.

Since I am at connecting parcels of information, I might as well boast about having connected a few. I have just produced an article on the energy market, under the working title ‘Apprehending energy efficiency: what is the cognitive value of hypothetical shocks?’. I give this title as a hyperlink so that you can access the draft I have submitted as a publication proposal to ‘Energy Economics’. Here is a summary of my reasoning. I started by connecting two types of phenomena: all the things I had observed about the energy market over the last two years, on the one hand, with a phenomenon of a completely different nature, namely the fact that the economy of our planet is going through the longest period of uninterrupted economic growth since 1960. At the same time, the energy efficiency of the global economy – measured with the coefficient of GDP per kilogram of oil equivalent in final energy consumption – keeps growing peacefully. I asked myself: is there a link between the two? Is it conceivable that the present calm in the macroeconomic cycle comes from the fact that our species has learned something more about exploiting energy resources?

As I think about it, I have a few intuitions (obsessions?) that keep coming back. Intuition no. 1: the intensity of energy consumption is tied to the general level of socio-economic development, including institutional development (political stability etc.). I have already expressed that one in « Les 2326 kWh de civilisation ». Intuition no. 2: the speed of technological change is more a rhythmic cadence within a cycle than a speed strictly speaking. In other words, technologies change from generation to generation. Every technology has a life cycle, and that life cycle is reflected in its depreciation coefficient. Change in the energy efficiency of an economic system happens at the pace of a technological life cycle. Intuition no. 3: financial markets, including monetary systems, play a role similar to the endocrine system in a living organism. Money, as well as other financial instruments, is like hormones. It transmits information, it catabolizes something and anabolizes something else. If there is anything important going on in terms of innovation, it engages the financial markets.

In science, it is good to take into consideration the opinions of other people, not just my own intuitions. I mean, it is a good habit in other areas of life too. So, after doing what is called a ‘literature review’, I found partial corroboration of my own intuitions. There is an interesting model of energy efficiency at the macroeconomic level, called ‘MuSIASEM’, which approaches the matter precisely as if it were about the metabolism of a living organism (see, for example, Andreoni 2017[1] or Velasco-Fernández et al. 2018[2]).

All in all, I formulated a theoretical model which you can find, in detail, in that draft manuscript. What I would like to discuss and explore here is one particular component of that research, something I discovered somewhat by accident. When I do quantitative research, I like to play a bit with the data and with the ways of transforming it. On the other hand, in economics, when we run econometric tests on time series, one of the most fundamental things to do is to reduce the effects of non-stationarity, as well as those of differences between measurement scales. Hence, a common procedure consists in taking what we call the moment of an observation, instead of the observation itself. The first derivative is a moment, for example. The moment is thus the dynamics of something. For those a bit familiar with economics, coefficients such as the marginal value or the elasticity are moments.
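Just as a textbook reminder, not something specific to my article: the elasticity of Y with respect to X is exactly that kind of moment, a dynamics rather than a level, namely elasticity = (dY/dX)·(X/Y) = d ln(Y) / d ln(X), i.e. the percentage change in Y per one percent of change in X.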

So I wanted to play a bit with the moments of my empirical data. In the meantime, I had almost automatically computed the natural logarithms of my data, just to calm it down a bit and eliminate accidental short-term variations. The natural logarithm is the power to which you have to raise the mathematical constant e ≈ 2.71828 to obtain the given number. That is when I remembered one possible interpretation of natural logarithms and of the constant ‘e’: that of the exponential progression. I can build a mathematical function of the general form y = e^(t·a), where t is the serial number of a specific period of time and a is a parameter. The exponential progression has the reputation of representing particularly well those phenomena that unfold over time like the building of a wall, where each consecutive brick rests on the bricks laid before. This type of development is called ‘hysteresis’ and, in general, it means that the results obtained in the preceding period form a base for what happens in the following period.

Normally, in the textbook version of the exponential progression, the parameter ‘a’ is constant, only I wanted to play with it. I told myself that since I had already computed the natural logarithms of my empirical observations, I might as well assume that each logarithm is the exponent ‘t·a’ of the function y = e^(t·a). I thus have a local ‘a’ for each empirical observation, and that local ‘a’ is a moment (a local dynamics) of that observation. Question: how do I extract the ‘t’ and the ‘a’, separately, from ‘t·a’? The answer was dead simple: however I want. I can create an arbitrary timeline, i.e. assign to each empirical observation a period abscissa as I see fit.

At that point, I told myself that there are two alternative timelines that interest me particularly in this context of research on the energy efficiency of national economies. There is a line of slow, secular change, the cadence of maturing civilisations in a way, and on the other hand there is a line of explosive short-term change. My empirical observations all started in 1990 and ran until 2014. I could therefore simulate two alternative situations. Firstly, I could consider everything that happened between 1990 and 2014 as part of an exponential process initialised a long time ago. A century earlier is a long time, mind you. I could thus take each temporal abscissa between 1990 and 2014 and assign it a special coordinate, equal to ‘year – 1889’. The year 1990 would then be ‘1990 – 1889 = 101’, while 2014 would correspond to ‘2014 – 1889 = 125’, etc.

Secondly, I could assume that my 1990–2014 period represents the consequences of some hypothetical event that had just taken place, for example in 1989. The year 1990 would then have the temporal abscissa t = 1990 – 1989 = 1, and 2014 would be t = 2014 – 1989 = 25. I made both transformations: for each empirical observation I took its natural logarithm and then divided it, respectively, by those two alternative temporal abscissas, one on a line stretching from 1889 onwards and the other initialised in 1989. As I chewed on those results, I noticed something I should have foreseen had I been a mathematician rather than a feral economist who uses maths the way a Neanderthal would use a calculator. When I assume that my story starts in 1990, i.e. that t = 1990 – 1989 = 1 etc., each consecutive ‘t’ is much greater than its predecessor, but that difference shrinks very fast. Look: t(1991) = 1991 – 1989 = 2, which is twice as much as t(1990) = 1990 – 1989 = 1. However, t(1995) = 1995 – 1989 = 6, which is just 20% more than t(1994) = 1994 – 1989 = 5. So if I divide my natural logarithms by those fast-climbing ‘t’ values, my local moments ‘a’ decrease just as fast, and the pace of that decrease slows down just as fast.
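To make the arithmetic tangible, here is a minimal sketch of those two transformations on a made-up efficiency series; the real data in the article are GDP per kg of oil equivalent, 1990–2014, while the numbers below are purely illustrative:

```python
import math

# Two alternative time axes for the same observations: a slow, century-long
# line starting in 1889, and a short, explosive line starting in 1989.
# The 'efficiency' values are invented, only to show how the local
# coefficients a = ln(y) / t behave under each transformation.
years = list(range(1990, 2015))
efficiency = [5.0 + 0.05 * i for i in range(len(years))]  # hypothetical series

for year, y in zip(years[:6], efficiency[:6]):
    ln_y = math.log(y)
    a_long = ln_y / (year - 1889)    # slow, secular timeline
    a_short = ln_y / (year - 1989)   # short-term, shock-wave timeline
    print(year, round(a_long, 4), round(a_short, 4))
```

The short-term coefficients start high and drop quickly, while the long-term ones barely move: exactly the ‘weakening shock wave’ shape described above.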

What kind of real-life phenomenon could such a mathematical progression represent? I told myself that if a deep shock had taken place in 1989 and had sent shock waves of decreasing force into the future, it would look roughly like this. And here comes the really interesting part of the research I have just done. The data transformed into that relatively short shock wave, spreading from 1989 onwards, yield the greatest explanatory power in my model, and when I say ‘greatest’, I mean a coefficient of determination hovering around R² = 0.9 and a statistical significance of p < 0.001.

Once again. I take a model of change in the energy efficiency of national economies. My model, I mean. I test it with three types of transformed data: natural logarithms, just to calm things down; then long-term local exponential coefficients, which start their story in 1889; and finally exponential coefficients that tell an explosive story starting in 1989. The last ones, I mean the short-term explosive ones, tell the most coherent story in terms of explanatory power. Why? What is so exceptional about this particular representation of quantitative data? Honestly, I don't know. All that comes to my mind is that strand of research on innovation and technological change which perceives those phenomena as a series of jolts and sudden turns rather than a continuous line of gradual evolution (see, for example, Vincenti 1994[3], Edgerton 2011[4]).

I told myself that – since I am discussing the mechanism of change in the energy efficiency of national economies, measured in units of GDP per unit of energy consumed – it is interesting to look at the official long-term projections. In recent days, two reports have been widely publicised in this respect: the OECD's and PriceWaterhouse Coopers'. As far as conclusions go, they are both rather optimistic and seem to contradict the alarming prognoses of certain economists who foresee an imminent crisis. What interests me most, however, are the forecasting methodologies used in the two reports. The PriceWaterhouse Coopers one refers to the classic Solow model of 1956[5], while the OECD sails rather in the direction of the Cobb-Douglas production function, transformed into natural logarithms. The difference between the two? The production function assumes a state of macroeconomic equilibrium. In fact, the production function is itself an equilibrium, and that equilibrium serves as a reference point for predicting what is most likely to happen. The Solow model, by contrast, does not necessarily require an equilibrium. Well, an equilibrium doesn't hurt, but it is not absolutely necessary. Come to think of it, the methodology I have just used in my article is closer to Solow's, and thus to the PriceWaterhouse Coopers report, than to the OECD's.
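For the record, the log-transformed Cobb-Douglas production function that this kind of approach leans on looks, in its textbook form with constant returns to scale (a standard formulation, not quoted from the report itself), like this: ln(Y) = ln(A) + α·ln(K) + (1 – α)·ln(L), where Y is output, K is capital, L is labour, A is total factor productivity, and α is the output elasticity of capital.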

I keep delivering good science to you, almost new, just slightly dented in the design process. I remind you that you can download the business plan of the BeFund project (also available in an English version). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself a financial footing in this effort. You can support my research financially, according to your best judgment, through my PayPal account. You can also register as my patron on my Patreon account. If you do so, I will be grateful if you tell me two important things: what kind of reward do you expect in exchange for your patronage, and what stages would you like to see in my work?


[1] Andreoni, V. (2017). Energy Metabolism of 28 World Countries: A Multi-scale Integrated Analysis. Ecological Economics, 142, 56-69

[2] Velasco-Fernández, R., Giampietro, M., & Bukkens, S. G. (2018). Analyzing the energy performance of manufacturing across levels using the end-use matrix. Energy, 161, 559-572

[3] Vincenti, W.G., 1994, The Retractable Airplane Landing Gear and the Northrop “Anomaly”: Variation-Selection and the Shaping of Technology, Technology and Culture, Vol. 35, No. 1 (Jan., 1994), pp. 1-33

[4] Edgerton, D. (2011). Shock of the old: Technology and global history since 1900. Profile books

[5] Solow, R. M. (1956). A contribution to the theory of economic growth. The quarterly journal of economics, 70(1), 65-94.