Once again, I had quite a break in blogging. I spend a lot of time putting together research projects, across a network of many organisations which I am supposed to bring into working together. I give it a lot of time and personal energy. It drains me a bit, and I like that drain. I like the thrill of putting together a team, agreeing about goals and possible openings. Since 2005, when I stopped running my own business and settled for a quiet academic career, I hadn’t experienced that special kind of personal drive. I sincerely believe that every teacher should apply his or her own teaching in their everyday life, just to see if that teaching still corresponds to reality.
This is one of the reasons why I have made it a regular activity of mine to invest in the stock market. I teach economics, and the stock market is very much like the pulse of economics, in all its grades and shades, ranging from hardcore macroeconomic cycles, passing through the microeconomics of specific industries I am currently focusing on with my investment portfolio, and all the way down the path of behavioural economics. I teach management, as well, and putting together new projects in research is the closest I can come, currently, to management science being applied in real life.
Besides trying to apply my teaching in real life, I still do science. I do research, and I write about the things I think I have found out on that research path of mine. I do a lot of research as regards the economics of energy. Currently, I am still revising a paper of mine, titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’. Around the topic of energy economics, I have built more general a method of studying quantitative socio-economic data, with the technical hypothesis that said data manifests collective intelligence in human social structures. It means that whenever I deal with a collection of quantitative socio-economic variables, I study the dataset at hand by assuming that each multivariate record line in the database is the local instance of an otherwise coherent social structure, which experiments with many such specific instances of itself and selects those offering the best adaptation to the current external stressors. Yes, there is a distinct sound of evolutionary method in that approach.
Over the last three months, I have been slowly ruminating on my theoretical foundations for the revision of that paper. Now, I am doing what I love doing: I am disrupting the gently predictable flow of theory with some incongruous facts. Yes, facts don’t know how to behave themselves, like really. Here is an interesting fact about energy: between 1999 and 2016, at the planetary scale, there had been more and more new cars produced per each new human being born. This is visualised in the composite picture below. Data about cars comes from https://www.worldometers.info/cars/ , whilst data about the headcount of population comes from the World Bank (https://data.worldbank.org/indicator/SP.POP.TOTL).
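For transparency, here is a minimal sketch of that calculation in Python; the file names and column names are my assumptions about how one would store the two downloaded series, not a canonical dataset.

```python
import pandas as pd

# Hypothetical inputs: 'cars.csv' with annual car production (worldometers.info),
# 'population.csv' with annual world headcount (World Bank, SP.POP.TOTL),
# both indexed by a 'year' column.
cars = pd.read_csv('cars.csv', index_col='year')        # column: 'cars_produced'
pop = pd.read_csv('population.csv', index_col='year')   # column: 'population'

# Absolute demographic increment: net new humans per year.
new_humans = pop['population'].diff()

# New cars produced per each new human being born, 1999-2016.
cars_per_new_human = (cars['cars_produced'] / new_humans).loc[1999:2016]
print(cars_per_new_human)
```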
Now, the meaning of all that. I mean, not ALL THAT (i.e. reality and life in general), just all that data about cars and population. Why do we consistently make more and more physical substance of cars per each new human born? Two explanations come to my mind. One politically correct and nicely environmentalist: we are collectively dumb as f**k and we keep overshooting the output of cars over and above the incremental change in population. The latter, when translated into a rate of growth, tends to settle down (https://data.worldbank.org/indicator/SP.POP.GROW ). Yeah, those capitalists who own car factories just want to make more and more money, and therefore they make more and more cars. Yeah, those corrupt politicians want to conserve jobs in the automotive industry, and they support it. Yeah, f**k them all! Yeah, cars destroy the planet!
I checked. The first door I knocked at was General Motors (https://investor.gm.com/sec-filings). What I can see is that they actually make more and more operational money by making fewer and fewer cars. Their business used to be overshot in terms of volume, and now they are slowly making sense and money out of making fewer cars. Then I checked with Toyota (https://global.toyota/en/ir/library/sec/). These guys look as if they were struggling to maintain their capacity to make approximately the same operational surplus each year, and they seem to be experimenting with the number of cars they need to put out in order to stay in good financial shape. When I say ‘experimenting’, it means experimenting upwards or downwards.
As a matter of fact, the only player who seems to be unequivocally making more operational money out of making more cars is Tesla (https://ir.tesla.com/#tab-quarterly-disclosure). There comes another explanation – much less politically correct, if at all – for there being more cars made per each new human, and it says that we, humans, are collectively intelligent, and we have a good reason for making more and more cars per each new human coming to this realm of tears, and the reason is to store energy in movable, possibly auto-movable a form. Yes, each car has a fuel tank or a set of batteries, in the case of them Teslas or other electric f**kers. Each car is a moving reservoir of chemical energy, immediately convertible into kinetic energy, which, in turn, has economic utility. Making more cars with batteries pays off better than making more cars with combustible fuel in their tanks: a new generation of movable reservoirs of chemical energy is replacing an older generation thereof.
Let’s hypothesise that this is precisely the point of coupling each new human with more and more of a new car being made: the point is more chemical energy convertible into kinetic energy. Do we need to move around more, as time passes? Maybe, although I am a bit doubtful. Technically, with more and more humans being around in a constant space, there are more and more humans per square kilometre, and that incremental growth in the density of population happens mostly in cities. I described that phenomenon in a paper of mine, titled ‘The Puzzle of Urban Density And Energy Consumption’. That means that the space available for travelling, and needing to be covered, per capita, is actually decreasing. Less space to travel in means less need for means of transportation.
Thus, what are we after, collectively? We might be preparing for having to move around more in the future, or for having to restructure the geography of our settlements. That’s possible, although the research I did for that paper about urban density indicates that geographical patterns of urbanization are quite durable. Anyway, those two cases sum up to some kind of zombie apocalypse. On the other hand, the fact of developing the amount of dispersed, temporarily stored energy (in cars) might be a manifestation of us learning how to build and maintain large, dispersed networks of energy reservoirs.
Isn’t it dumb to hypothesise that we go out of our way, as a civilisation, just to learn the best ways of developing what we are developing? Well, take the medieval cathedrals. Them medieval folks would keep building them for decades or even centuries. The Notre Dame cathedral in Paris, France, seems to be the record holder, with a construction period stretching from 1160 to 1245 (Bruzelius 1987[1]). Still, the same people who were so appallingly slow when building a cathedral could accomplish lightning-fast construction of quite complex military fortifications. When building cathedrals, the masters of stone masonry would do something apparently idiotic: they would build, then demolish, and then build again the same portion of the edifice, many times. WTF? Why slow down something we can do quickly? In order to experiment with the process and with the technologies involved, sir. Cathedrals were experimental labs of physics, mathematics and management, long before these scientific disciplines even emerged. Yes, there was the official rationale of getting closer to God, to accomplish God’s will, and, honestly, it came in handy. There was an entire culture – the medieval Christianity – which was learning how to learn by experimentation. The concept of fulfilling God’s will through perseverant pursuit, whilst being stoic as regards exogenous risks, was excellent a cultural vehicle to that purpose.
We move a few hundred years forward in time, to the 17th century. The cutting edge of technology was to be found in textiles and garments (Braudel 1992[2]), and the peculiarity of the European culture consisted in quickly changing fashions, geographically idiosyncratic and strongly enforced through social peer pressure. The industry of garments and textile was a giant experimental lab of business and management, developing the division of labour, the management of supply chains, quick study of subtle shades in customers’ tastes and just as quick adaptation thereto. This is how we, Europeans, prepared for the much later introduction of mechanized industry, which, in turn, gave birth to what we are today: a species controlling something like 30% of all energy on the surface of our planet.
Maybe we are experimenting with dispersed, highly mobile and coordinated networks of small energy reservoirs – the automotive fleet – just for the sake of learning how to develop such networks? Some other facts, which, once again, are impolitely disturbing, come to the fore. I had a look at the data published by the United Nations as regards the total installed capacity of generation in electricity (https://unstats.un.org/unsd/energystats/). I calculated the average electrical capacity per capita, at the global scale. Turns out that in 2014 the average human capita on Earth had around 60% more power capacity to tap from, as compared to a similar human capita in 1999.
Interesting. It looks even more interesting when taken as the first moment of a process. When I take the annual incremental change in the installed electrical capacity on the planet and divide it by the absolute demographic increment, thus when I go ‘Delta capacity / delta population’, that coefficient of elasticity grows like hell. In 2014, it was almost three times higher than in 1999. We, humans, keep developing denser a network of cars, as compared to our population, and, at the same time, we keep increasing the relative power capacity which every human can tap into.
Someone could say it is because we simply consume more and more energy per capita. Cool, I check with the World Bank: https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE . Yes, we increase our average annual consumption of energy per one human being, and yet this is a very gentle increment: barely 18% from 1999 through 2014. Nothing to do with the quick accumulation of generation capacity. We keep densifying the global fleet of cars, and growing a reserve of power capacity. What are we doing it for?
This is a deep question, and I calculated two additional elasticities with the data at hand. Firstly, I denominated incremental change in the number of new cars per each new human born over the average consumption of energy per capita. In the visual below, this is the coefficient ‘Elasticity of cars per capita to energy per capita’. Between 1999 and 2014, this elasticity had passed from 0,49 to 0,79. We keep accumulating something like an overhead of incremental car fleet, as compared to the amount of energy we consume.
Secondly, I formalized the comparison between individual consumption of energy and average power capacity per capita. This is the ‘Elasticity of capacity per capita to energy per capita’ column in the visual below. Once again, it is a growing trend.
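For the record, here is a sketch of how those coefficients can be computed, under one plausible reading of the definitions above; the merged file ‘energy_panel.csv’ and its column names are my assumptions.

```python
import pandas as pd

# Assumed columns: 'capacity' (installed electrical capacity, UN data),
# 'population' (World Bank SP.POP.TOTL), 'cars_produced' (worldometers.info),
# 'energy_pc' (energy use per capita, World Bank EG.USE.PCAP.KG.OE).
df = pd.read_csv('energy_panel.csv', index_col='year')

# 'Delta capacity / delta population': incremental capacity per new human.
capacity_per_new_human = df['capacity'].diff() / df['population'].diff()

# Elasticity of cars per capita to energy per capita: incremental change in
# new cars per new human, denominated over energy consumption per capita.
cars_per_new_human = df['cars_produced'] / df['population'].diff()
elasticity_cars = cars_per_new_human.diff() / df['energy_pc']

# Elasticity of capacity per capita to energy per capita, read analogously.
capacity_pc = df['capacity'] / df['population']
elasticity_capacity = capacity_pc.diff() / df['energy_pc']

print(pd.DataFrame({'cars': elasticity_cars, 'capacity': elasticity_capacity}))
```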
At the planetary scale, we keep beefing up our collective reserves of energy, and we seriously mean business about dispersing those reserves into networks of small reservoirs, possibly on wheels.
Increased propensity to store is a historically known collective response to anticipated shortage. Do we, the human race, collectively and not quite consciously anticipate a shortage of energy? How could that happen? Our biology should suggest just the opposite. With climate change being around, we technically have more energy in the ambient environment, not less. What exact kind of shortage in energy are we collectively anticipating? This is the type of riddle I like.
As usual, I work on many things at the same time. I mean, not exactly at the same time, just in a tight alternating sequence. I am doing my own science, and I am doing collective science with other people. Right now, I feel like restating and reframing the main lines of my own science, with the intention both to reframe my own research, and to be a better scientific partner to other researchers.
Such as I see it now, my own science is mostly methodological, and consists in studying human social structures as collectively intelligent ones. I assume that collectively we have a different type of intelligence from the individual one, and most of what we experience as social life is constant learning through experimentation with alternative versions of our collective way of being together. I use artificial neural networks as simulators of collective intelligence, and my essential process of simulation consists in creating multiple artificial realities and comparing them.
I deliberately use very simple, if not simplistic, neural networks, namely those oriented on optimizing just one attribute of theirs, among the many available. I take a dataset, representative for the social structure I study, I take just one variable in the dataset as the optimized output, and I consider the remaining variables as instrumental input. Such a neural network simulates an artificial reality where the social structure studied pursues just one, narrow orientation. I create as many such narrow-minded, artificial societies as I have variables in my dataset. Then I assess the Euclidean distance between the original empirical dataset and each of those artificial societies.
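Below is a minimal sketch of that procedure, assuming a purely numeric dataset in a file ‘society.csv’; the network architecture (one small hidden layer via scikit-learn) and the choice to compare datasets through their column means are my own simplifications, not necessarily the exact setup of my published papers.

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

data = pd.read_csv('society.csv').select_dtypes('number')
distances = {}

for output_var in data.columns:
    X = data.drop(columns=output_var).values   # instrumental input
    y = data[output_var].values                # the one optimized output
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X, y)

    # One 'artificial society': the empirical inputs, with the network's learned
    # version of the optimized variable standing in for the empirical one.
    artificial = data.copy()
    artificial[output_var] = net.predict(X)

    # Euclidean distance between the empirical dataset and this artificial
    # reality, taken here between the two vectors of column means.
    distances[output_var] = float(np.linalg.norm(data.mean() - artificial.mean()))

# The artificial society closest to empirical reality points at the orientation
# the social structure most plausibly pursues.
print(sorted(distances.items(), key=lambda kv: kv[1]))
```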
It is just now that I realize what kind of implicit assumptions I make when doing so. I assume the actual social reality, manifested in the empirical dataset I study, is a concurrence of different, single-variable-oriented collective pursuits, which remain in some sort of dynamic interaction with each other. The path of social change we take, at the end of the day, manifests the relative prevalence of some among those narrow-minded pursuits, with others being pushed to the second rank of importance.
As I am pondering those generalities, I reconsider the actual scientific writings that I should hatch. Publish or perish, as they say in my profession. With that general method of collective intelligence being assumed in human societies, I focus more specifically on two empirical topics: the market of energy and the transition away from fossil fuels make one stream of my research, whilst the civilisational role of cities, especially in the context of the COVID-19 pandemic, is another stream of me trying to sound smart in my writing.
For now, I focus on issues connected to energy, and I return to revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, as a resubmission to Applied Energy. According to the guidelines of Applied Energy, I am supposed to structure my paper into the following parts: Introduction, Material and Methods, Theory, Calculations, Results, Discussion, and, as sort of a summary pitch, I need to prepare a cover letter where I shortly introduce the reasons why the editor of Applied Energy should bother about my paper at all. On top of all these formally expressed requirements, there is something I noticed about the general style of articles published in Applied Energy: they all demonstrate and discuss strong, sharp-cutting hypotheses, with a pronounced theoretical edge in them. If I want my paper to be accepted by that journal, I need to give it that special style.
That special style requires two things which, honestly, I am not really accustomed to doing. First of all, it requires, precisely, phrasing out very sharp claims. What I like the most is to show people the material and methods which I work with, and sort of provoke a discussion around them. When I have to formulate very sharp claims around that basic empirical stuff, I feel a bit awkward. Still, I understand that many people are willing to discuss only when they are truly pissed by the topic at hand, and sharply cut hypotheses serve to fuel that flame.
Second of all, making sharp claims of my own requires passing in thorough review the claims which other researchers phrase out. It requires doing my homework thoroughly in the review of literature. Once again, not really a fan of it, on my part, but well, life is brutal, as my parents used to teach me and as I have learnt in my own life. In other words, real life starts when I get out of my comfort zone.
The first body of literature I want to refer to in my revised article is the so-called MuSIASEM framework, AKA ‘Multi-scale Integrated Analysis of Societal and Ecosystem Metabolism’. Human societies are assumed to be giant organisms, and transformation of energy is a metabolic function of theirs (e.g. Andreoni 2020[1], Al-Tamimi & Al-Ghamdi 2020[2] or Velasco-Fernández et al. 2020[3]). The MuSIASEM framework is centred around an evolutionary assumption, which I used to find perfectly sound, and which I have come to consider as highly arguable, namely that the best possible state for both a living organism and a human society is that of the highest possible energy efficiency. As regards social structures, energy efficiency is the coefficient of real output per unit of energy consumption, or, in other words, the amount of real output we can produce with 1 kilogram of oil equivalent in energy. My theoretical departure from that assumption started with my own empirical research, published in my article ‘Energy efficiency as manifestation of collective intelligence in human societies’ (Energy, Volume 191, 15 January 2020, 116500, https://doi.org/10.1016/j.energy.2019.116500). As I applied my method of computation, with a neural network as simulator of social change, I found out that human societies do not really seem to max out on energy efficiency. Maybe they should, but they don’t. It was the first realization, on my part, that we, humans, orient our collective intelligence on optimizing the social structure as such, and whatever comes out of that in terms of energy efficiency is an unintended by-product rather than a purpose. That general impression has been subsequently reinforced by other empirical findings of mine, precisely those which I introduce in the manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, which I am currently revising for resubmission to Applied Energy.
In practical terms, it means that when a public policy states that ‘we should maximize our energy efficiency’, it is a declarative goal which human societies actually do not strive for. It is a little as if a public policy imposed the absolute necessity of being nice to each other and punished any deviation from that imperative. People are nice to each other to the extent of current needs in social coordination, period. The absolute imperative of being nice is frequently the correlate of intense rivalry, e.g. as was the case with traditional aristocracy. The French even have an expression, which I find profoundly true, namely ‘trop gentil pour être honnête’, which means ‘too nice to be honest’. My personal experience makes me kick into an alert state when somebody is that sort of intensely nice to me.
Passing from metaphors to the actual subject matter of energy management, it is a known fact that highly innovative technologies are usually truly inefficient. Optimization of efficiency, be it energy efficiency or any other aspect thereof, is actually a late stage in the lifecycle of a technology. Deep technological change is usually marked by a temporary slump in efficiency. Imposing energy efficiency as the chief goal of technology-related policies means systematically privileging and promoting technologies with the highest energy efficiency, thus, by metaphorical comparison to humans, technologies in their forties, past and over the excesses of youth.
The MuSIASEM framework has two other traits which I find arguable, namely the concept of evolutionary purpose, and the imperative of equality between countries in terms of energy efficiency. Researchers who lean towards the MuSIASEM methodology claim that it is an evolutionary purpose of every living organism to maximize energy efficiency, and that therefore human societies have the same evolutionary purpose. It further implies that species displaying marked evolutionary success, i.e. significant growth in headcount (sometimes in mandibulae-count, should the head be not really what we mean it to be), achieve that success by being particularly energy efficient. I even went into some reading in life sciences, and that claim does not seem grounded in any science. It seems that energy efficiency, and any denomination of efficiency, as a matter of fact, are very crude proportions we apply to complex a balance of flows about which we still have a lot to learn. Niebel et al. (2019[4]) phrase it out as follows: ‘The principles governing cellular metabolic operation are poorly understood. Because diverse organisms show similar metabolic flux patterns, we hypothesized that a fundamental thermodynamic constraint might shape cellular metabolism. Here, we develop a constraint-based model for Saccharomyces cerevisiae with a comprehensive description of biochemical thermodynamics including a Gibbs energy balance. Non-linear regression analyses of quantitative metabolome and physiology data reveal the existence of an upper rate limit for cellular Gibbs energy dissipation. By applying this limit in flux balance analyses with growth maximization as the objective function, our model correctly predicts the physiology and intracellular metabolic fluxes for different glucose uptake rates as well as the maximal growth rate. We find that cells arrange their intracellular metabolic fluxes in such a way that, with increasing glucose uptake rates, they can accomplish optimal growth rates but stay below the critical rate limit on Gibbs energy dissipation. Once all possibilities for intracellular flux redistribution are exhausted, cells reach their maximal growth rate. This principle also holds for Escherichia coli and different carbon sources. Our work proposes that metabolic reaction stoichiometry, a limit on the cellular Gibbs energy dissipation rate, and the objective of growth maximization shape metabolism across organisms and conditions’.
I feel like restating the very concept of evolutionary purpose as such. Evolution is a mechanism of change through selection. Selection in itself is largely a random process, based on the principle that whatever works for now can keep working until something else works even better. There is hardly any purpose in that. My take on the thing is that living species strive to maximize their intake of energy from the environment rather than their energy efficiency. I even hatched an article about it (Wasniewski 2017[5]).
Now, I pass to the second postulate of the MuSIASEM methodology, namely to the alleged necessity of closing gaps between countries as for their energy efficiency. Professor Andreoni expresses this view quite vigorously in a recent article (Andreoni 2020[6]). I think this postulate holds neither inside the MuSIASEM framework, nor outside of it. As for the purely external perspective, I think I have just laid out the main reasons for discarding the assumption that our civilisation should prioritize energy efficiency above other orientations and values. From the internal perspective of MuSIASEM, i.e. if we assume that energy efficiency is a true priority, we need to give that energy efficiency a boost, right? Now, the last time I checked, the only way we, humans, can get better at whatever we want to get better at is to create positive outliers, i.e. situations when we like really nail it better than in other situations. With a bit of luck, those positive outliers become a workable pattern of doing things. In management science, it is known as the principle of best practices. The only way of having positive outliers is to have a hierarchy of outcomes according to the given criterion. When everybody is at the same level, nobody is an outlier, and there is no way we can give ourselves a boost forward.
Good. Those six paragraphs above, they pretty much summarize my theoretical stance as regards the MuSIASEM framework in research about energy economics. Please, note that I respect that stream of research and the scientists involved in it. I think that representing energy management in human social structures as a metabolism is a great idea: it is one of those metaphors which can be fruitfully turned into a quantitative model. Still, I have my reserves.
I go further. A little more review of literature. Here comes a paper by Halbrügge et al. (2021[7]), titled ‘How did the German and other European electricity systems react to the COVID-19 pandemic?’. It makes an interesting point as regards energy economics: the pandemic has induced a new type of risk, namely short-term fluctuations in local demand for electricity. That, in turn, leads to deeper troughs and higher peaks in both the quantity and the price of energy in the market. More risk requires more liquidity: this is a known principle in business. As regards energy, liquidity can be achieved both through inventories, i.e. by developing storage capacity for energy, and through financial instruments. Halbrügge et al. come to the conclusion that such circumstances in the German market have led to the reinforcement of RES (Renewable Energy Sources). RES installations are typically more dispersed, more local in their reach, and more flexible than large power plants. It is much easier to modulate the output of a windfarm or a solar farm, as compared to a large fossil-fuel-based installation.
Keeping an eye on the impact of the pandemic upon the market of energy, I pass to the article titled ‘Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results’, by Salisu, Ebuh & Usman (2020[8]). First of all, a few words of general explanation as for what the hell is the oil-stock nexus. This is a phenomenon which I first saw research about in 2017, and which consists in a diversification of financial investment portfolios from pure financial stock into various mixes of stock and oil. Somewhere around 2015, people who used to hold their liquid investments just in financial stock (e.g. as I do currently) started to build investment positions in various types of contracts based on the floating inventory of oil: futures, options and whatnot. When I say ‘floating’, it is quite literal: that inventory of oil really, actually floats, stored on board of super-tanker ships, sailing gently through international waters, with proper gravitas (i.e. not too fast).
Long story short, crude oil has increasingly been becoming a financial asset, something like a buffer to hedge against risks encountered in other assets. Whilst the paper by Salisu, Ebuh & Usman is quite technical, without much theoretical generalisation, an interesting observation comes out of it, namely that short-term shocks in financial markets during the pandemic had adversely impacted the price of oil more than the prices of stock. That, in turn, could indicate that crude oil was good as a hedging asset just for a certain range of risks, and in the presence of price shocks induced by the pandemic, the role of oil could diminish.
Those two papers point at a factor which we almost forgot as regards the market of energy, namely the role of short-term shocks. Until recently, i.e. until COVID-19 hit us hard, the textbook business model in the sector of energy had been that of very predictable demand, nearly constant in the long perspective and varying in a sinusoidal manner in the short term. The very disputable concept of LCOE, AKA Levelized Cost of Energy, where investment outlays are treated as if they were a current cost, is based on those assumptions. The pandemic has shown a different aspect of energy systems, namely the need for buffering capacity. That, in turn, leads to the issue of adaptability, which, gently but surely, leads further into the realm of adaptive changes, and that, ladies and gentlemen, is my beloved landscape of evolutionary, collectively intelligent change.
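To make the critique concrete, here is a sketch of the standard LCOE arithmetic; all the numbers in the example are illustrative assumptions, not data.

```python
def lcoe(capex, opex_per_year, output_per_year, lifetime_years, discount_rate):
    """Levelized Cost of Energy: discounted lifetime costs / discounted lifetime output."""
    discounted_opex = sum(opex_per_year / (1 + discount_rate) ** t
                          for t in range(1, lifetime_years + 1))
    discounted_output = sum(output_per_year / (1 + discount_rate) ** t
                            for t in range(1, lifetime_years + 1))
    # The investment outlay (capex) enters the same cost stream as current
    # expenses - exactly the assumption questioned above.
    return (capex + discounted_opex) / discounted_output

# Illustrative case: EUR 12m capex, EUR 300k yearly opex, 30 000 MWh a year,
# 25-year life, 6% discount rate -> roughly EUR 41 per MWh.
print(lcoe(12e6, 3e5, 3e4, 25, 0.06))
```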
Cool. I move forward, and, by the same occasion, I move back. Back to the concept of energy efficiency. Halvorsen & Larsen study the so-called rebound effect as regards energy efficiency (Halvorsen & Larsen 2021[9]). Their paper is interesting for three reasons, the general topic of energy efficiency being the first one. The second one is the methodological focus on phenomena which we cannot observe directly and therefore observe through mediating variables, which is theoretically close to my own method of research. Finally, the phenomenon of rebound effect, namely the fact that, in the presence of temporarily increased energy efficiency, the consumers of energy tend to use more of those locally more energy-efficient goods, is essentially a short-term disturbance being transformed into long-term habits. This is adaptive change.
The model constructed by Halvorsen & Larsen is a theoretical delight, just something my internal happy bulldog can bite into. They introduce the general assumption that consumption of energy in households is a build-up of different technologies, which can substitute each other under some conditions, and complement each other under different ones. Households maximize something called ‘energy services’, i.e. everything they can purposefully derive from energy carriers. Halvorsen & Larsen build and test a model where they derive demand for energy services from a whole range of quite practical variables, which all sums up to the following: energy efficiency is indirectly derived from the way that social structures work, and it is highly doubtful whether we can purposefully optimize energy efficiency as such.
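A toy numeric illustration of the rebound effect – not Halvorsen & Larsen’s actual model, just its arithmetic skeleton, with an assumed constant price elasticity of demand for energy services:

```python
def energy_after_gain(base_energy, gain, eps):
    """Energy use after an efficiency gain, with elastic demand for services.

    A gain g means one unit of energy now delivers (1 + g) units of service,
    so the effective price of a unit of service falls by the factor 1/(1 + g);
    demand for services responds with constant price elasticity eps.
    """
    services = (1 / (1 + gain)) ** eps       # relative change in services demanded
    return base_energy * services / (1 + gain)

# With rigid demand (eps = 0), a 20% efficiency gain cuts energy use by ~17%.
print(energy_after_gain(100.0, 0.20, eps=0.0))    # ~83.3
# With eps = -0.8, demand for services rebounds: most of the saving evaporates.
print(energy_after_gain(100.0, 0.20, eps=-0.8))   # ~96.4
```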
Now, here comes the question: what are the practical implications of all those different theoretical stances, I mean mine and those of other scientists? What does it change, and does it change anything at all, if policy makers follow the theoretical line of the MuSIASEM framework, or, alternatively, my approach? I am guessing at differences at the level of both the goals and the real outcomes of energy-oriented policies, and I am trying to wrap my mind around that guessing. Such as I see it, the MuSIASEM approach advocates for putting energy efficiency of the whole global economy at the top of any political agenda, as a strategic goal. On the path towards achieving that strategic goal, there seems to be an intermediate one, namely to narrow down significantly two types of discrepancies:
>> firstly, it is about discrepancies between countries in terms of energy efficiency, with a special focus on helping the poorest developing countries in ramping up their efficiency in using energy
>> secondly, there should be a priority to privilege technologies with the highest possible energy efficiency, whilst kicking out those which perform the least efficiently in that respect.
If I saw a real policy based on those assumptions, I would have a few critical points to make. Firstly, I firmly believe that large human societies just don’t have the institutions to enforce energy efficiency as chief collective purpose. On the other hand, we have institutions oriented on other goals, which are able to ramp up energy efficiency as instrumental change. One institution, highly informal and yet highly efficient, is there, right in front of our eyes: markets and value chains. Each product and each service contains an input of energy, which manifests as a cost. In the presence of reasonably competitive markets, that cost is under pressure from market prices. Yes, we, humans, are greedy, and we like accumulating profits, and therefore we squeeze our costs. Whenever energy comes into play as significant a cost, we figure out ways of diminishing its consumption per unit of real output. Competitive markets, both domestic and international, thus including free trade, act as an unintentional, and yet powerful a reducer of energy consumption, and, under a different angle, they remind us to find cheap sources of energy.
[1] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304
[2] Al-Tamimi & Al-Ghamdi (2020). Multiscale integrated analysis of societal and ecosystem metabolism of Qatar. Energy Reports, 6, 521-527. https://doi.org/10.1016/j.egyr.2019.09.019
[3] Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020), A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns. Energy Strategy Reviews, 32, 100562. https://doi.org/10.1016/j.esr.2020.100562
[4] Niebel, B., Leupold, S. & Heinemann, M. An upper limit on Gibbs energy dissipation governs cellular metabolism. Nat Metab 1, 125–132 (2019). https://doi.org/10.1038/s42255-018-0006-7
[5] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation (August 30, 2017). http://dx.doi.org/10.1453/jest.v4i3.1410
[6] Andreoni, V. (2020). The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis. Energy Policy, 139, 111304. https://doi.org/10.1016/j.enpol.2020.111304
[7] Halbrügge, S., Schott, P., Weibelzahl, M., Buhl, H. U., Fridgen, G., & Schöpf, M. (2021). How did the German and other European electricity systems react to the COVID-19 pandemic?. Applied Energy, 285, 116370. https://doi.org/10.1016/j.apenergy.2020.116370
[8] Salisu, A. A., Ebuh, G. U., & Usman, N. (2020). Revisiting oil-stock nexus during COVID-19 pandemic: Some preliminary results. International Review of Economics & Finance, 69, 280-294. https://doi.org/10.1016/j.iref.2020.06.023
[9] Halvorsen, B., & Larsen, B. M. (2021). Identifying drivers for the direct rebound when energy efficiency is unknown. The importance of substitution and scale effects. Energy, 222, 119879. https://doi.org/10.1016/j.energy.2021.119879
I noticed it has been a month since I posted anything on my blog. Well, been doing things, you know. Been writing, and thinking by the same occasion. I am forming a BIG question in my mind, a question I want to answer: how are we going to respond to climate change? Among all the possible scenarios of such response, which are we the most likely to follow? When I have a look, every now and then, at Greta Thunberg’s astonishingly quick social ascent, I wonder why we are so divided about something apparently so simple. I am very clear: this is not a rhetorical question on my part. Maybe I should claim something like: ‘We just need to get all together, hold our hands and do X, Y, Z…’. Yes, in a perfect world we would do that. Still, in the world we actually live in, we don’t. Does it mean we are collectively stupid, like baseline, and just some enlightened individuals can sometimes see the truly rational path of moving ahead? Might be. Yet, another view is possible. We might be doing apparently dumb things locally, and those apparent local flops could sum up to something quite sensible at the aggregate scale.
There is some science behind that intuition, and some very provisional observations. I finally (and hopefully) nailed down the revision of the article on energy efficiency. I had already started developing on this one in my last update, entitled ‘Knowledge and Skills’, and now, it is done. I have just revised the article, quite deeply, and by the same occasion, I hatched a methodological paper, which I submitted to MethodsX. As I want to develop a broader discussion on these two papers, without repeating their contents, I invite my readers to get acquainted with their PDFs, via the archives of my blog. Thus, by clicking the title Energy Efficiency as Manifestation of Collective Intelligence in Human Societies, you can access the subject matter paper on energy efficiency, and clicking on Neural Networks As Representation of Collective Intelligence will take you to the methodological article.
I think I know how to represent, plausibly, collective intelligence with artificial intelligence. I am showing the essential concept in the picture below. Thus, I start with a set of empirical data, describing a society. Well in the lines of what I have been writing on this blog since early spring this year, I assume that the quantitative variables in my dataset – e.g. GDP per capita, schooling indicators, the probability for an average person to become a mad scientist etc. – collectively describe that society. What is the meaning of those variables? Most of all, they exist and change together. Banal, but true. In other words, all that stuff represents the cumulative outcome of past, collective action and decision-making.
I decided to use the intellectual momentum, and I used the same method with a different dataset, and a different set of social phenomena. I took Penn Tables 9.1 (Feenstra et al. 2015[1]), thus a well-known base of macroeconomic data, and I followed the path sketched in the picture below.
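In code, that path looks roughly like this, assuming Penn World Table 9.1 has been downloaded as ‘pwt91.csv’ from www.ggdc.net/pwt; the variable picks (e.g. ‘labsh’ for the labour share, ‘avh’ for hours worked) follow PWT naming, and the distance routine repeats the simplification from the sketch earlier in this post’s companion piece rather than the exact published setup.

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

pwt = pd.read_csv('pwt91.csv')
# A handful of PWT variables: real GDP, population, hours worked, labour share,
# and the investment share of output.
data = pwt[['rgdpe', 'pop', 'avh', 'labsh', 'csh_i']].dropna()

distances = {}
for output_var in data.columns:
    X = data.drop(columns=output_var).values
    y = data[output_var].values
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    artificial = data.copy()
    artificial[output_var] = net.predict(X)
    distances[output_var] = float(np.linalg.norm(data.mean() - artificial.mean()))

# The variable whose artificial society sits closest to the empirical dataset.
print(min(distances, key=distances.get))
```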
Long story short, I have two big surprises. When I look upon energy efficiency and its determinants, turns out energy efficiency is not really the chief outcome pursued by the 59 societies studied: they care much more about the local, temporary proportions between capital immobilised in fixed assets, and the number of resident patent applications. More specifically, they seem to be principally optimizing the coefficient of fixed assets per 1 patent application. That is quite surprising. It sends me back to my peregrinations through the land of evolutionary theory (see for example: My most fundamental piece of theory).
When I take a look at the collective intelligence (possibly) embodied in Penn Tables 9.1, I can see this particular collective wit aiming at optimizing the share of labour in the proceeds from selling real output in the first place. Then, almost immediately after, comes the average number of hours worked per person per year. You can click on this link and read the full manuscript I have just submitted to the Quarterly Journal of Economics.
Wrapping it (provisionally) up: I did some social science with the assumption of collective intelligence in human societies taken at the level of methodology, and I got truly surprising results. That thing about energy efficiency – i.e. the fact that when in the presence of some capital in fixed assets, and some R&D embodied in patentable inventions, we seem to care about energy efficiency only secondarily – is really mind-blowing. I had already done some research on energy as a factor of social change, and, whilst I have never been really optimistic about our collective capacity to save energy, I assumed that we orient ourselves, collectively, on some kind of energy balance. Apparently, we do only when we have nothing else to pay attention to. On the other hand, the collective focus on macroeconomic variables pertinent to labour, rather than prices and quantities, is just as gob-smacking. All economic education, when you start with Adam Smith and take it from there, assumes that economic equilibriums, i.e. those special states of society when we are sort of in balance among many forces at work, are built around prices and quantities. Still, in the research I have just completed, the only kind of price my neural network can build a plausibly acceptable learning around is the average price level in international trade, i.e. in exports and in imports. As for all the prices which I have been taught, and which I have taught, as cornerstones of economic equilibrium – like prices in consumption or prices in investment – when I peg them as output variables of my perceptron, the incriminated perceptron goes dumb like hell and yields negative economic aggregates. Yes, babe: when I make my neural network pay attention to the price level in investment goods, it comes to the conclusion that the best idea is to have negative national income, and negative population.
Returning to the issue of climate change and our collective response to it, I am trying to connect my essential dots. I have just served some like well-cooked science, and now it is time to bite into some raw one. I am biting into facts which I cannot explain yet, like not at all. Did you know, for example, that there are more and more adult people dying in high-income countries, like per 1000, since 2014? You can consult the data available with the World Bank, as regards the mortality of men and that in women. Infant mortality is generally falling, just as adult mortality in low- and middle-income countries. It is just about adult people in wealthy societies categorized as ‘high income’: there are more and more of them dying per 1000. Well, I should maybe say ‘more of us’, as I am 51, and relatively well-off, thank you. Anyway, all the way up through 2014, adult mortality in high-income countries had been consistently subsiding, reaching its minimum in 2014 at 57,5 per 1000 in women, and 103,8 in men. In 2016, it went up to 60,5 per 1000 in women, and 107,5 in men. It seems counter-intuitive. High-income countries are the place where adults are technically exposed to the least fatal hazards. We have virtually no wars around high income, we have food in abundance, we enjoy reasonably good healthcare systems, so WTF? As regards low-income countries, we could claim that the adults who die are relatively the least fit for survival, but what do you want to be fit for in high-income places? Driving a Mercedes around? Why did it start to revert after 2014?
Intriguingly, high-income countries are also those where the difference in adult mortality between men and women is the most pronounced: in men, it is almost the double of what is observable in women. Once again, it is something counter-intuitive. In low-income countries, men are more exposed to death in battle, or to extreme conditions, like work in mines. Still, in high-income countries, such hazards are remote. Once again, WTF? Someone could say: it is about natural selection, about eliminating the weak genetics. Could be, and yet not quite. Elimination of weak genetics takes place mostly through infant mortality. Once we make it like through the first 5 years of our existence, the riskiest part is over. Adult mortality is mostly about recycling used organic material (i.e. our bodies). Are human societies in high-income countries increasing the pace of that recycling? Why since 2015? Is it more urgent to recycle used men than used women?
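For anyone who wants to check those numbers, here is a sketch of the comparison, assuming the two World Bank series (SP.DYN.AMRT.MA and SP.DYN.AMRT.FE, adult mortality per 1,000 male/female adults) have been downloaded as CSVs and filtered to the high-income aggregate; the file and column names are my assumptions.

```python
import pandas as pd

men = pd.read_csv('amrt_male.csv', index_col='year')['high_income']
women = pd.read_csv('amrt_female.csv', index_col='year')['high_income']

trend = pd.DataFrame({'men': men, 'women': women, 'men_to_women': men / women})
# The trough around 2014 and the rebound after it should show up here.
print(trend.loc[2010:2016])
```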
There is one thing about 2015, precisely connected to climate change. As I browsed some literature about droughts in Europe and their possible impact on agriculture (see for example All hope is not lost: the countryside is still exposed), it turned out that 2015 was precisely the year when we started to sort of officially admit that we have a problem with agricultural droughts on our continent. Even more interestingly, 2014 and 2015 seem to have been the turning point when aggregate damages from floods, in Europe, started to curb down after something like two decades of progressive increase. We swapped one calamity for another one, and starting from then, we started to recycle used adults at more rapid a pace. Of course, most of Europe belongs to the category of high-income countries.
See? That’s what I call raw science about collective intelligence: observation with a lot of questions and a very remote idea as for the method of answering them. Something is apparently happening, maybe we are collectively intelligent in the process, and yet we don’t know how exactly (are we collectively intelligent). It is possible that we are not. Warmer climate is associated with greater prevalence of infectious diseases in adults (Amuakwa-Mensah et al. 2017[1]), for example, and yet it does not explain why greater adult mortality is happening in high-income countries. Intuitively, infections attack where people are poorly shielded against them, thus in countries with frequent incidence of malnutrition and poor sanitation, thus in the low-income ones.
I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggesting two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?
[1] Amuakwa-Mensah, F., Marbuah, G., & Mubanga, M. (2017). Climate variability and infectious diseases nexus: Evidence from Sweden. Infectious Disease Modelling, 2(2), 203-217.

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), ‘The Next Generation of the Penn World Table’, American Economic Review, 105(10), 3150-3182, available for download at www.ggdc.net/pwt
Once again, I break my rhythm. Mind you, it happens a lot this year. Since January, it is all about breaking whatever rhythm I have had so far in my life. I am getting used to unusual, and I think it is a good thing. Now, I am breaking the usual rhythm of my blogging. Normally, I have been alternating updates in English with those in French, like one to one, with a pinchful of writing in my mother tongue, Polish, every now and then. Right now, two urgent tasks require my attention: I need to prepare new syllabuses for English-taught courses in the upcoming academic year, and to revise my draft article on the energy efficiency of national economies.
Before I attend to those tasks, however, a little bit of extended reflection on goals and priorities in my life, somehow in the lines of my last update, « It might be a sign of narcissism ». I have just gotten back from Nice, France, where my son has just started his semester of Erasmus+ exchange, with the Sophia Antipolis University. In my youth, I spent a few years in France, I have been back to France many times since, and man, this time, I just felt the same, very special and very French kind of human energy which I remember from the 1980s. Over the last 20 years or so, the French seemed to have been sleeping inside their comfort zone, but now I can see people who have just woken up, are wondering what the hell they had wasted so much time on, and are taking double strides to gather speed in terms of social change. This is the innovative, brilliant, positively cocky France I love. There is sort of a social pattern in France: when the French get vocal, and possibly violent, in the streets, they are up to something as a nation. The French Revolution in 1789 was an expression of popular discontent, yet what followed was not popular satisfaction: it was a century-long expansion on virtually all plans: political, military, economic, scientific etc. Right now, France is just over the top of the Yellow Vests protest, which one of my French students devoted an essay to (see « Carl Lagerfeld and some guest blogging from Emilien Chalancon, my student »). I wonder who will be the Napoleon Bonaparte of our times.
When entire nations are up to something, it is interesting. Dangerous, too, and yet interesting. Human societies are, as a rule, the most up to something as regards their food and energy base, and so I come to that revision of my article. Here, below, you will find the letter of review I received from the journal Energy after I submitted the initial manuscript, referenced as Ms. Ref. No.: EGY-D-19-00258. The link to my manuscript is to be found in the first paragraph of this update. For those of you who are making their first steps in science, it can be an illustration of what ‘scientific dialogue’ means. Further below, you will find a first sketch of my revision, accounting for the remarks from reviewers.

Thus, here comes the LETTER OF REVIEW (in italic):
Ms. Ref. No.: EGY-D-19-00258
Title: Apprehending energy efficiency: what is the cognitive value of hypothetical shocks? Energy

Dear Dr. Wasniewski,

The review of your paper is now complete, the Reviewers’ reports are below. As you can see, the Reviewers present important points of criticism and a series of recommendations. We kindly ask you to consider all comments and revise the paper accordingly in order to respond fully and in detail to the Reviewers’ recommendations. If this process is completed thoroughly, the paper will be acceptable for a second review.

If you choose to revise your manuscript it will be due into the Editorial Office by the Jun 23, 2019.

Once you have revised the paper accordingly, please submit it together with a detailed description of your response to these comments. Please, also include a separate copy of the revised paper in which you have marked the revisions made.

Please note if a reviewer suggests you to cite specific literature, you should only do so if you feel the literature is relevant and will improve your paper. Otherwise please ignore such suggestions and indicate this fact to the handling editor in your rebuttal.

When submitting your revised paper, we ask that you include the following items:

Manuscript and Figure Source Files (mandatory):
We cannot accommodate PDF manuscript files for production purposes. We also ask that when submitting your revision you follow the journal formatting guidelines. Figures and tables may be embedded within the source file for the submission as long as they are of sufficient resolution for Production. For any figure that cannot be embedded within the source file (such as *.PSD Photoshop files), the original figure needs to be uploaded separately. Refer to the Guide for Authors for additional information. http://www.elsevier.com/journals/energy/0360-5442/guide-for-authors

Highlights (mandatory):
Highlights consist of a short collection of bullet points that convey the core findings of the article and should be submitted in a separate file in the online submission system. Please use ‘Highlights’ in the file name and include 3 to 5 bullet points (maximum 85 characters, including spaces, per bullet point). See the following website for more information.

We invite you to convert your supplementary data (or a part of it) into a Data in Brief article. Data in Brief articles are descriptions of the data and associated metadata which are normally buried in supplementary material. They are actively reviewed, curated, formatted, indexed, given a DOI and freely available to all upon publication. Data in Brief should be uploaded with your revised manuscript directly to Energy. If your Energy research article is accepted, your Data in Brief article will automatically be transferred over to our new, fully Open Access journal, Data in Brief, where it will be editorially reviewed and published as a separate data article upon acceptance. The Open Access fee for Data in Brief is $500.

Then, place all Data in Brief files (whichever supplementary files you would like to include as well as your completed Data in Brief template) into a .zip file and upload this as a Data in Brief item alongside your Energy revised manuscript. Note that only this Data in Brief item will be transferred over to Data in Brief, so ensure all of your relevant Data in Brief documents are zipped into a single file. Also, make sure you change references to supplementary material in your Energy manuscript to reference the Data in Brief article where appropriate. If you have questions, please contact the Data in Brief publisher, Paige Shaklee, at dib@elsevier.com.

In order to give our readers a sense of continuity and since editorial procedure often takes time, we encourage you to update your reference list by conducting an up-to-date literature search as part of your revision.

On your Main Menu page, you will find a folder entitled ‘Submissions Needing Revision’. Your submission record will be presented here.
MethodsX file (optional)
If you have customized (a) research method(s) for the project presented in your Energy article, you are invited to submit this part of your work as a MethodsX article alongside your revised research article. MethodsX is an independent journal that publishes the work you have done to develop research methods to your specific needs or setting. This is an opportunity to get full credit for the time and money you may have spent on developing research methods, and to increase the visibility and impact of your work.

2) Place all MethodsX files (including graphical abstract, figures and other relevant files) into a .zip file and upload this as a ‘Method Details (MethodsX)’ item alongside your revised Energy manuscript. Please ensure all of your relevant MethodsX documents are zipped into a single file.

3) If your Energy research article is accepted, your MethodsX article will automatically be transferred to MethodsX, where it will be reviewed and published as a separate article upon acceptance. MethodsX is a fully Open Access journal, the publication fee is only 520 US$.

Include interactive data visualizations in your publication and let your readers interact and engage more closely with your research. Follow the instructions here: https://www.elsevier.com/authors/author-services/data-visualization to find out about available data visualization options and how to include them with your article.

MethodsX file (optional)
We invite you to submit a method article alongside your research article. This is an opportunity to get full credit for the time and money you have spent on developing research methods, and to increase the visibility and impact of your work. If your research article is accepted, your method article will be automatically transferred over to the open access journal, MethodsX, where it will be editorially reviewed and published as a separate method article upon acceptance. Both articles will be linked on ScienceDirect. Please use the MethodsX template available here when preparing your article: https://www.elsevier.com/MethodsX-template. Open access fees apply.
Reviewers’
comments:
Reviewer
#1: The paper is, at least according to the title of the paper, and attempt to
‘comprehend energy efficiency’ at a macro-level and perhaps in relation to
social structures. This is a potentially a topic of interest to the journal
community. However and as presented, the paper is not ready for publication for
the following reasons:
1.
A long introduction details relationship and ‘depth of emotional entanglement
between energy and social structures’ and concomitant stereotypes, the issue
addressed by numerous authors. What the Introduction does not show is the
summary of the problem which comes out of the review and which is consequently
addressed by the paper: this has to be presented in a clear and articulated way
and strongly linked with the rest of the paper. In simplest approach, the paper
does demonstrate why are stereotypes problematic. In the same context, it
appears that proposed methodology heavily relays on MuSIASEM methodology which
the journal community is not necessarily familiar with and hence has to be
explained, at least to the level used in this paper and to make the paper
sufficiently standalone;
2. The assumptions used in formulating the model have to be justified in terms of what they are and how they affect the understanding of the link/interaction between social structures and the function of energy (generation/use), and also why the assumptions are formulated in the first place. It is also important to explicitly articulate what the proposed model aims to achieve: as presented, this becomes somewhat clear only towards the end of the paper. A more fundamental question is what the difference is between the model presented here and those in other publications by the author: this has to be clearly explained.
3. The presented empirical tests and concomitant results are again detached from reality, for i) the problem is not explicitly formulated, and ii) the real-life interpretation of the results is not clear.
On the practical side, the paper needs:
1. To conform to the style of writing adopted by the journal, including referencing;
2. All figures have to have captions and be referred to by them;
3. The English needs improvement.
Reviewer #2: Please find the attached file.
Reviewer #3: The article has cognitive value. The author has made a deep analysis of the literature. Methodologically, the article does not raise any objections. However, getting acquainted with its content, I wonder why the analysis does not take into account changes in legal provisions. In the countries of the European Union, energy efficiency is one of the pillars of shaping energy policy. Does this variable have no impact on improving energy efficiency? When reading the article, one gets the impression that the author has prepared it for editing in another journal. Its editing is incorrect! Line 13, page 10, error – unwanted semicolon.
Now, A FIRST SKETCH OF MY REVISION. There are the general, structural suggestions from the editors, notably to outline my method of research, and to discuss my data, in separate papers. After that come the critical remarks properly spoken, with a focus on explaining clearly – more clearly than I did in the manuscript – the assumptions of my model, as well as its connections with the MuSIASEM model. I start with my method, and it is an interesting exercise in introspection. I did the empirical research quite a few months ago, and now I need to look at it from a distance, objectively. Doing well at this exercise amounts, by the way, to phrasing my assumptions accurately. I start with my fundamental variable, i.e. the so-called energy efficiency, measured as the value of real output (i.e. the value of goods and services produced) per unit of energy consumed, the latter measured in kilograms of oil equivalent. It goes like: energy efficiency = GDP / energy consumed.
In my mind, that coefficient is actually a coefficient of coefficients, more specifically: GDP / energy consumed = [GDP per capita] / [consumption of energy per capita] = [GDP / population] / [energy consumed / population]. Why so? Well, I assume that when any of us, humans, wants to have a meal, we generally don't put our fingers in the nearest electric socket. We consume energy indirectly, via the local combination of technologies. The same local combination of technologies makes our GDP. Energy efficiency measures two ends of the same technological toolbox: its intake of energy, and its outcomes in terms of goods and services. Changes over time in energy efficiency, as well as its disparity across space, depend on the unfolding of two distinct phenomena: the exact composition of that local basket of technologies, i.e. the overall heap of technologies we have stacked up in our daily life, for one, and the efficiency of individual technologies in the stack, for two. Here, I remember a model I got to know in management science, precisely about how efficiency changes as new technologies supplant the older ones. Apparently, a freshly implemented, new technology is always less productive than the one it is kicking out of business. Only after some time, when people learn how to use that new thing properly, does it start yielding net gains in productivity. At the end of the day, when we change our technologies frequently, there could very well be no gain in productivity at all, as we are constantly going through consecutive phases of learning. Anyway, I see the coefficient of energy efficiency at any given time, in a given place, as the cumulative outcome of past collective decisions as for the repertoire of technologies we use.
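Just to make that identity tangible, here is a minimal sketch in Python. The figures are the average magnitudes quoted later in this post, used purely for illustration:

```python
# A minimal sketch of the decomposition above. The figures are the average
# magnitudes quoted later in this post, used here purely for illustration.

gdp = 1_120_874.23e6          # GDP, in dollars
energy = 270_551_748.43e3     # energy consumed, in kg of oil equivalent
population = 89_965_651

direct = gdp / energy                                    # GDP / energy
decomposed = (gdp / population) / (energy / population)  # [GDP/pop] / [energy/pop]

# Both roads give the same number, ~ $4.14 per kg of oil equivalent:
assert abs(direct - decomposed) < 1e-9
print(f"energy efficiency: ${direct:.2f} per kg of oil equivalent")
```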
That
is the first big assumption I make, and the second one comes from the
factorisation: GDP / energy consumed = [GDP per capita] / [consumption of
energy per capita ] = [GDP / population] / [energy consumed / population ].
I noticed a semi-intuitive, although not really robust correlation between the
two component coefficients. GDP per capita tends to be higher in countries with
better developed institutions, which, in turn, tend to be better developed in
the presence of relatively high a consumption of energy per capita. Mind you,
it is quite visible cross-sectionally, when comparing countries, whilst not
happening that obviously over time. If people in country A consume twice as
much energy per capita as people in country B, those in A are very likely to
have better developed institutions than folks in B. Still, if in either of the two places the consumption of energy per capita grows or falls by 10%, it does not automatically mean a corresponding increase or decrease in institutional development.
Partially wrapping up the above, I can see at least one main assumption in my method: energy efficiency, measured as GDP per kg of oil equivalent in energy consumed, is, in itself, a pretty foggy metric, arguably devoid of intrinsic meaning; it becomes meaningful as an equilibrium of two component coefficients, namely GDP per capita, for one, and energy consumption per capita, for two. Therefore, the very name 'energy efficiency' is problematic. If the vector [GDP; energy consumption] is really a local equilibrium, as I intuitively see it, then we need to keep in mind an old assumption of economic sciences: all equilibriums are efficient; this is basically why they are equilibriums. Further down this avenue of thinking, the coefficient of GDP per kg of oil equivalent shouldn't even be called 'energy efficiency', or, just in order not to fall into pointless semantic bickering, we should take the 'efficiency' part into some sort of intellectual parentheses.
Now, I move to my analytical method. I accept as pretty obvious the fact that, at a given moment in time, different national economies display different coefficients of GDP per kg of oil equivalent consumed. This is coherent with the above-phrased claim that energy efficiency is a local equilibrium rather than a measure of efficiency strictly speaking. What gains in importance, with that intellectual stance, is the study of change over time. In the manuscript paper, I tested a very intuitive analytical method, based on a classical move, namely on using natural logarithms of empirical values rather than the empirical values themselves. Natural logarithms eliminate a lot of non-stationarity and noise in empirical data. A short reminder of what natural logarithms are is due at this point. Any number can be represented as a power of another number, like y = x^z, where 'x' is called the root (or base) of 'y', 'z' is the exponent of the root, and, equivalently, 'z' is the logarithm of 'y' to the base 'x'.
Some roots are special. One of them is the so-called Euler's number, e = 2,718281828459…, the base of the natural logarithm. When we treat e ≈ 2,72 as the root of another number, the corresponding exponent z in y = e^z has interesting properties: it can be further decomposed as z = t*a, where t is the ordinal number of a moment in time, and a is basically a parameter. In a moment, I will explain why I said 'basically'. The function y = e^(t*a) is called the exponential function and proves useful in studying processes marked by important hysteresis, i.e. when each consecutive step in the process depends very strongly on the cumulative outcome of previous steps, like y(t) depends on y(t – k). Compound interest is a classic example: when you save money for years, with annual compounding of interest, each consecutive year builds upon the interest accumulated in preceding years. If we represent the interest rate, classically, as 'r', the function y = e^(t*r) gives a good approximation of how much one unit of savings grows, with annually compounded 'r', over 't' years.
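A two-line check in Python shows how close the two conventions sit; the numbers (r = 5%, t = 20 years) are invented for illustration:

```python
import math

# Sketch: why e^(r*t) approximates annually compounded growth (1+r)^t.
# Illustrative numbers only: r = 5% per year, t = 20 years, 1 unit of capital.
r, t = 0.05, 20

annual = (1 + r) ** t          # discrete, annual compounding
continuous = math.exp(r * t)   # continuous compounding, base e

print(f"(1+r)^t = {annual:.4f}")      # ~2.6533
print(f"e^(r*t) = {continuous:.4f}")  # ~2.7183 - close, slightly higher
```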
Slightly different an approach to the exponential function can be formulated, and this is what I did in the manuscript paper I am revising now, in front of your very eyes. The natural logarithm of energy efficiency measured as GDP per kg of oil equivalent can be considered as a local occurrence of change with strong a component of hysteresis. The equilibrium of today depends on the cumulative outcomes of past equilibriums. In a classic exponential function, I would approach that hysteresis as y(t) = e^(t*a), with a being a constant parameter of the function. Yet, I can assume that 'a' is local instead of being general. In other words, what I did was y(t) = e^(t*a(t)), with a(t) being obviously t-specific, i.e. local. I assume that the process of change in energy efficiency is characterized by local magnitudes of change, the a(t)'s. That a(t), in y(t) = e^(t*a(t)), is slightly akin to the local first derivative, i.e. y'(t). The difference between the local a(t) and y'(t) is that the former is supposed to capture somehow more accurately the hysteretic side of the process under scrutiny.
In typical econometric tests, the usual strategy is to start with the empirical values of my variables, transform them into their natural logarithms or some sort of standardized values (e.g. standardized over their respective means, or their standard deviations), and then run linear regression on those transformed values. Another path of analysis consists in exponential regression, only there is a problem with this one: it is hard to establish a reliable method of transforming the empirical data. Running exponential regression on natural logarithms looks stupid, as natural logarithms are precisely the exponents of the exponential function, whence my intuitive willingness to invent a method sort of in between linear regression and the exponential one.
Once I assume that the local exponential coefficients a(t) in the exponential progression y(t) = e^(t*a(t)) have intrinsic meaning of their own, as local magnitudes of exponential change, an interesting analytical avenue opens up. For each set of empirical values y(t), I can construct a set of transformed values a(t) = ln[y(t)]/t. Now, when you think about it, the actual a(t) depends on how you calculate 't', or, in other words, what calendar you apply. When I start counting time 100 years before the starting year of my empirical data, my a(t) will go like: a(t1) = ln[y(t1)]/101, a(t2) = ln[y(t2)]/102 etc. The denominator 't' will change incrementally slowly. On the other hand, if I assume that the first year of whatever is happening is one year before my empirical time series starts, it is a different ball game. My a(t1) = ln[y(t1)]/1, my a(t2) = ln[y(t2)]/2 etc.; the incremental change in the denominator is much greater in this case. When I set my t0 at 100 years earlier than the first year of my actual data, thus t0 = t1 – 100, the resulting set of a(t) values transformed from the initial y(t) data simulates a secular, slow trend of change. On the other hand, setting t0 at t0 = t1 – 1 makes the resulting set of a(t) values reflect quick change, and the t0 = t1 – 1 moment is like a hypothetical shock, occurring just before the actual empirical data starts to tell its story.
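Here is a hedged sketch, in Python, of that transformation under the two calendars; the y(t) series is invented, standing in for the actual energy-efficiency data:

```python
import math

# A sketch of the transformation a(t) = ln[y(t)] / t under two calendars,
# as described above. The y(t) series is invented for illustration; in the
# paper, y(t) would be energy efficiency, i.e. GDP per kg of oil equivalent.

y = [7.8, 8.0, 8.1, 8.4, 8.6, 8.9]   # hypothetical yearly observations

def local_magnitudes(series, t0_offset):
    """Return a(t) = ln(y)/t, with t counted from t0 = t1 - 1 - t0_offset,
    so that the first observation gets denominator t0_offset + 1."""
    return [math.log(v) / (t0_offset + i) for i, v in enumerate(series, start=1)]

slow = local_magnitudes(y, 100)  # t0 set 100 years back: secular, slow trend
fast = local_magnitudes(y, 0)    # t0 = t1 - 1: hypothetical shock just before the data

print([round(a, 4) for a in slow])  # denominators 101, 102, ... change slowly
print([round(a, 4) for a in fast])  # denominators 1, 2, ... change quickly
```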
Provisionally wrapping it up, my assumptions, and thus my method, consist in studying changes in energy efficiency as a sequence of equilibriums between relative wealth (GDP per capita), on the one hand, and consumption of energy per capita, on the other. The passage between equilibriums is a complex phenomenon, combining long-term trends with short-term ones.
I am introducing a novel angle of approach to an otherwise classic concept of economics, namely that of economic equilibrium. I claim that equilibriums are manifestations of collective intelligence in their host societies. In order to form an economic equilibrium, be it more local and Marshallian, or more general and Walrasian, a society needs institutions that assure collective learning through experimentation: some kind of financial market, enforceable contracts, and institutions of collective bargaining. Small changes in energy efficiency come out of consistent, collective learning through those institutions. Big leaps in energy efficiency appear when the institutions of collective learning undergo substantial structural changes.
I am thinking about enriching the empirical part of my paper by introducing an additional demonstration of collective intelligence: a neural network, working with the same empirical data, with and without the so-called fitness function. I have that intuitive thought – although I don't know yet how to get it across coherently – that neural networks endowed with a fitness function are good at representing collective intelligence in structured societies with relatively well-developed institutions.
I
go towards my syllabuses for the coming academic year. Incidentally, at least
one of the curriculums I am going to teach this fall fits nicely into the line
of research I am pursuing now: collective intelligence and the use of
artificial intelligence. I am developing the thing as an update on my blog, and
I write it directly in English. The course is labelled “Behavioural
Modelling and Content Marketing”. My principal goal is to teach students the
mechanics of behavioural interaction between human beings and digital technologies,
especially in social media, online marketing and content streaming. At my
university, i.e. the Andrzej Frycz-Modrzewski Krakow University (Krakow, Poland),
we have a general drill of splitting the general goal of each course into three
layers of expected didactic outcomes: knowledge, course-specific skills, and
general social skills. The longer I do science and the longer I teach, the less I believe in the point of distinguishing knowledge from skills. Knowledge devoid of any skills attached to it is virtually impossible to check, and virtually useless.
As
I think about it, I imagine many different teachers and many students. Each
teacher follows some didactic goals. How do they match each other? They are
bound to. I mean, the community of teachers, in a university, is a local social
structure. We, teachers, we have different angles of approach to teaching, and,
of course, we teach different subjects. Yet, we all come from more or less the
same cultural background. Here comes a quick glimpse of literature I will be
referring to when lecturing ‘Behavioural Modelling and Content Marketing’:
the article by Molleman and Gachter (2018[1]), entitled
‘Societal background influences social learning in cooperative decision
making’, and another one, by Smaldino (2019[2]), under
the title ‘Social identity and cooperation in cultural evolution’. Molleman and
Gachter start from the well-known assumption that we, humans, largely owe our
evolutionary success to our capacity of social learning and cooperation. They
give the account of an experiment, where Chinese people, assumed to be
collectivist in their ways, are being compared to British people, allegedly
individualist as hell, in a social game based on dilemma and cooperation. Turns
out the cultural background matters: success-based learning is associated with
selfish behaviour and majority-based learning can help foster cooperation.
Smaldino goes down more theoretical a path, arguing that the structure of society shapes the repertoire of social identities available to homo sapiens in a given place at a given moment, whence the puzzle of emergent, ephemeral groups as a
major factor in human cultural evolution. When I decide to form, on Facebook, a
group of people Not-Yet-Abducted-By-Aliens, is it a factor of cultural change,
or rather an outcome thereof?
When I teach anything, what do I really want to achieve, and what does the conscious formulation of those goals have in common with the real outcomes I reach? When I use a scientific repository, like ScienceDirect, that thing learns from me. When I download a bunch of articles on energy, it suggests further readings to me along the same lines. It learns from the keywords I use in my searches, and from the journals I browse. You can even have a look at my recent history of downloads from ScienceDirect and form your own opinion about what I am interested in. Just CLICK HERE; it opens an Excel spreadsheet.
How
can I know I taught anybody anything useful? If a student asks me: ‘Pardon
me, sir, but why the hell should I learn all that stuff you teach? What’s the
point? Why should I bother?’. Right you are, sir or miss, whatever gender
you think you are. The point of learning that stuff… You can think of some
impressive human creation, like the Notre Dame cathedral, the Eiffel Tower, or
that Da Vinci’s painting, Lady with an Ermine. Have you ever wondered how much
work had been put in those things? However big and impressive a cathedral is,
it had been built brick by f***ing brick. Whatever depth of colour we can see
in a painting, it came out of dozens of hours spent on sketching, mixing
paints, trying, cursing, and tearing down the canvas. This course and its
contents are a small brick in the edifice of your existence. One more small
story that makes your individual depth as a person.
There is that thing, at the very heart of behavioural modelling, and of social sciences in general. For lack of a better expression, I call it the Bignetti model. See, for example, Bignetti 2014[3], Bignetti et al. 2017[4], or Bignetti 2018[5] for more reading. Long story short, what professor Bignetti claims is that whatever happens in observable human behaviour, individual or collective, has already happened neurologically beforehand. Whatever we tweet or read, it is rooted in that wiring we have between the ears. The thing is that actually observing how that wiring works is still a bit burdensome. You need a lot of technology, and a controlled environment. Strangely enough, opening one's skull and trying to observe the contents at work doesn't really work. Reverse-engineered, the Bignetti model suggests that behavioural observation, and behavioural modelling, could be a good method to guess how our individual brains work together, i.e. how we are intelligent collectively.
I
go back to the formal structure of the course, more specifically to goals and
expected outcomes. I split: knowledge, skills, social competences. The knowledge,
for one. I expect the students to develop the understanding of the
following concepts: a) behavioural pattern b) social life as a collection of
behavioural patterns observable in human beings c) behavioural patterns
occurring as interactions of humans with digital technologies, especially with
online content and online marketing d) modification of human behaviour as a
response to online content e) the basics of artificial intelligence, like the weak law of large numbers or the logical structure of a neural network. As for the course-specific skills, I expect my students to sharpen their edge in observing behavioural patterns, and changes thereof, in connection with online content.
When it comes to general social competences, I would like my students to
make a few steps forward on two paths: a) handling projects and b) doing
research. It logically implies that assessment in this course should and will
be project-based. Students will be graded on the grounds of complex projects,
covering the definition, observation, and modification of their own behavioural
patterns occurring as interaction with online content.
The
structure of an individual project will cover three main parts:
a) description of the behavioural sequence in question b) description of online
content that allegedly impacts that sequence, and c) the study of behavioural
changes occurring under the influence of online content. The scale of students’
grades is based on two component marks: the completeness of a student’s work,
regarding (a) – (c), and the depth of research the given student has brought up
to support his observations and claims. In Poland, in the academia, we
typically use a grading scale from 2 (fail) all the way up to 5 (very good),
passing through 3, 3+, 4, and 4+. As I see it, each student – or each team of students, as there will be a possibility to prepare the thing in a team of up to 5 people – will receive two component grades, e.g. 3+ (i.e. 3,5) for completeness and 4 for depth of research, and that will give (3,5 + 4)/2 = 3,75 ≈ 4,0.
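A small sketch of that grading rule in Python; the convention that halves round up is my assumption, inferred from the 3,75 ≈ 4,0 example:

```python
import math

# Polish academic grades mapped to numbers; '3+' = 3.5, '4+' = 4.5, etc.
GRADES = {"2": 2.0, "3": 3.0, "3+": 3.5, "4": 4.0, "4+": 4.5, "5": 5.0}

def final_grade(completeness: str, depth: str) -> float:
    """Average the two component marks, rounded to the nearest half-grade,
    with halves rounded up (assumption inferred from the example above)."""
    raw = (GRADES[completeness] + GRADES[depth]) / 2
    return math.floor(raw * 2 + 0.5) / 2

print(final_grade("3+", "4"))   # (3,5 + 4)/2 = 3,75 -> 4.0
```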
Such a project is typical research, whence the necessity to introduce students to the basic techniques of science. That comes as a bit of a paradox, as those students' major is Film and Television Production, thus a thoroughly practical one. Still, science serves practical purposes: this is something I deeply believe and which I would like to teach my students. As I look upon those goals, and the method of assessment, a structure emerges as regards the plan of in-class teaching. At my university, the bulk of in-class interaction with students is normally spread over 15 lectures of 1,5 clock hours each, thus 30 hours in total. In some curriculums it is accompanied by so-called 'workshops' in smaller groups, with each such smaller group attending 7 – 8 sessions of 1,5 hours each. In this case, i.e. in the course of 'Behavioural Modelling and Content Marketing', I have just lectures in my schedule. Still, as I see it, I will need to do practical stuff with my youngsters. This is a good moment to demonstrate a managerial technique I teach in other classes, called 'regressive planning', which consists in taking the final goal I want to achieve, assuming it is the outcome of a sequence of actions, and then reverse-engineering that sequence. Sort of: 'what do I need to do if I want to achieve X at the end of the day?'.
If
I want to have my students hand me good quality projects by the end of the
semester, the last few classes out of the standard 15 should be devoted to
discussing collectively the draft projects. Those drafts should be based on
prior teaching of basic skills and knowledge, whence the necessity to give
those students a toolbox, and provoke in them curiosity to rummage inside. All
in all, it gives me the following, provisional structure of lecturing:
{input = 15 classes} => {output = good quality projects by my students}

{input = 15 classes} <=> {input = [10 classes of preparation >> 5 classes of draft presentations and discussion thereof]}

{input = 15 classes} <=> {input = [5*(1 class of mindfuck to provoke curiosity + 1 class of systematic presentation) + 5*(presentation + questioning and discussion)]}
As
I see from what I have just written, I need to divide the theory accompanying
this curriculum into 5 big chunks. The first of those 5 blocks needs to
address the general frame of the course, i.e. the phenomenon of recurrent interaction
between humans and online content. I think the most important fact to highlight
is that algorithms of online marketing behave like sales people crossed with
very attentive servants, who try to guess one's whims and wants. It is a huge social change: it is, I think, the first time in human history when virtually every human with access to the Internet interacts with a form of intelligence that behaves like a butler, guessing the user's preferences. It is transformational for human behaviour, and in that first block I want to show my students how that transformation can work. The opening, mindfucking class will consist in a behavioural experiment along the lines of good, old role playing in psychology. I will demonstrate to my students how a human would behave if they wanted to emulate the behaviour of neural networks in online marketing. I will ask them questions about what they usually do, and about what they have liked doing during the last few days, and I will guess their preferences on the grounds of their described behaviour. I will tell my students to observe that butler-like behaviour of mine and to pattern me. In the next step, I will ask students to play the same role, just so they get the hang of how a piece of AI works in online marketing. The point of this
first class is to define an expected outcome, like a variable, which neural
networks attempt to achieve, in terms of human behaviour observable through clicking.
The second, theoretical class of that first block will, logically, consist in
explaining the fundamentals of how neural networks work, especially in online
interactions with human users of online content.
I think in the second two-class block I will address the issue of behavioural patterns as such, i.e. what they are, and how we can observe them. I want the mindfuck class in this block to be intellectually provocative, and I think I will use role playing once again. I will ask my students to play roles of their choice, and I will discuss their performance under a specific angle: how do you know that your play is representative of this type of behaviour or person? What specific pieces of behaviour are, in your opinion, informative about the social identity of that role? Do other students agree that the type of behaviour played is representative of this specific type of person? The theoretical class in this block will be devoted to a systematic lecture on the basics of behaviourism. I guess I will serve my students some Skinner, and some Timberlake, namely Skinner's 'Selection by Consequences' (1981[6]), and Timberlake's 'Behaviour Systems and Reinforcement' (1993[7]).
In the third two-class block I will return to interactions with online content. In the mindfuck class, I will make my students meddle with YouTube, and see how the list of suggested videos changes after we search for or click on specific content, e.g. how it will change after clicking 5 videos of documentaries about wildlife, or after searching for videos on race cars. In this class, I want my students to pattern the behaviour of YouTube. The theoretical class of this block will be devoted to the ways those algorithms work. I think I will focus on a hardcore concept of AI, namely the Gaussian mixture. I will explain how crude observations on our clicking and viewing allow an algorithm to categorize us.
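For the curious, here is a hedged sketch of that idea in Python, with scikit-learn's GaussianMixture; the two click-count features and the user profiles are entirely made up for classroom illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# A toy sketch of a Gaussian mixture categorizing users from crude clicking
# data. The features and user profiles are invented for illustration only.

rng = np.random.default_rng(42)

# Hypothetical users: column 0 = clicks on wildlife documentaries per week,
# column 1 = clicks on race-car videos per week.
wildlife_fans = rng.normal(loc=[8.0, 1.0], scale=1.0, size=(50, 2))
car_fans = rng.normal(loc=[1.0, 9.0], scale=1.0, size=(50, 2))
clicks = np.vstack([wildlife_fans, car_fans])

# Fit a two-component mixture: the algorithm recovers latent user categories
# without ever being told who is who.
gmm = GaussianMixture(n_components=2, random_state=0).fit(clicks)
print(gmm.means_)               # two cluster centres, close to (8,1) and (1,9)
print(gmm.predict(clicks[:5]))  # category labels for the first five users
```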
As
we will pass to the fourth two-class block, I will switch to the concept
of collective intelligence, i.e. to how whole societies interact with various
forms of online, interactive neural networks. The class devoted to intellectual
provocation will be discursive. I will make students debate on the following claim:
‘Internet and online content allow our society to learn faster and more
efficiently’. There is, of course, a catch, and it is the definition of
learning fast and efficiently. How do we know we are quick and efficient in our
collective learning? What would slow and inefficient learning look like? How
can we check the role of Internet and online content in our collective
learning? Can we apply the John Stuart Mill’s logical canon to that situation? The
theoretical class in this block will be devoted to the phenomenon of collective
intelligence in itself. I would like to work through like two research papers devoted
to online marketing, e.g. Fink
et al. (2018[8])
and Takeuchi
et al. (2018[9]),
in order to show how online marketing unfolds into phenomena of collective
intelligence and collective learning.
Good, so I come to the fifth two-class block, the last one before the scheduled draft presentations by my students. As the final teaching block, I think it should bring them back to the root idea of those projects, i.e. to the idea of observing one's own behaviour when interacting with online content. The first class of the block, the one supposed to stir curiosity, could consist in two steps of brainstorming and discussion. Students assume the role of online marketers. In the first step, they define one or two typical interactions between human behaviour and the online content they communicate. We use the previously learnt theory to make both the description of behavioural patterns and that of online marketing coherent and state-of-the-art. In the next step, students discuss under what conditions they would behave according to those pre-defined patterns, and what conditions would make them diverge from those patterns and follow different ones. In the theoretical class of this block, I would like to discuss two articles which incite my own curiosity: 'A place for emotions in behavior systems research' by Gordon M. Burghardt (2019[10]), and 'Disequilibrium in behavior analysis: A disequilibrium theory redux' by Jacobs et al. (2019[11]).
I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggestions on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?
[1] Molleman, L., & Gächter, S. (2018). Societal background influences social learning in cooperative decision making. Evolution and Human Behavior, 39(5), 547-555.
[2] Smaldino, P. E. (2019). Social identity and cooperation in cultural evolution. Behavioural Processes, 161, 108-116.
[3] Bignetti, E. (2014). The functional role of free-will illusion in cognition: "The Bignetti Model". Cognitive Systems Research, 31, 45-60.
[4] Bignetti, E., Martuzzi, F., & Tartabini, A. (2017). A Psychophysical Approach to Test: "The Bignetti Model". Psychol Cogn Sci Open J, 3(1), 24-35.
[5] Bignetti, E. (2018). New Insights into "The Bignetti Model" from Classic and Quantum Mechanics Perspectives. Perspective, 4(1), 24.
[6] Skinner, B. F. (1981). Selection by consequences. Science, 213(4507), 501-504.
[7] Timberlake, W. (1993). Behavior systems and reinforcement: An integrative approach. Journal of the Experimental Analysis of Behavior, 60(1), 105-128.
[8] Fink, M., Koller, M., Gartner, J., Floh, A., & Harms, R. (2018). Effective entrepreneurial marketing on Facebook – A longitudinal study. Journal of Business Research.
[9] Takeuchi, H., Masuda, S., Miyamoto, K., & Akihara, S. (2018). Obtaining Exhaustive Answer Set for Q&A-based Inquiry System using Customer Behavior and Service Function Modeling. Procedia Computer Science, 126, 986-995.
[10] Burghardt, G. M. (2019). A place for emotions in behavior systems research. Behavioural Processes.
[11] Jacobs, K. W., Morford, Z. H., & King, J. E. (2019). Disequilibrium in behavior analysis: A disequilibrium theory redux. Behavioural Processes.
I am returning to a long-followed path of research, that on financial solutions for promoting renewable energies, and I am making it into educational content for my course of « Fundamentals of Finance ». I am developing on artificial intelligence as well. I think that artificial intelligence is just made for finance. Financial markets, and the contractual patterns they use, are akin to endocrine systems. They generate signals, more or less complex, and those signals essentially say: 'you lazy f**ks, you need to move and do something, and what that something is supposed to be you can read from between the lines of those financial instruments in circulation'. Anyway, what I am thinking about is to use artificial intelligence for simulating the social change that a financial scheme, i.e. a set of financial instruments, can possibly induce in the ways we produce and use energy. This update is at the frontier of scientific research, business planning, and education strictly speaking. I know that some students can find it hard to follow, but I just want to show real science at work, 100% pure beef.
I took a database which I have already used in my research on the so-called energy efficiency, i.e. on the amount of Gross Domestic Product we can derive on the basis of 1 kilogram of oil equivalent. It is a complex indicator of how efficient a given social system is as regards using energy for making things turn on the economic side. We take the total consumption of energy in a given country, and we convert it into standardized units equivalent to the amount of energy we can have out of one kilogram of natural oil. This standardized consumption of energy becomes the denominator of a coefficient, whose numerator is the Gross Domestic Product. Thus, it goes like 'GDP / Energy consumed'. The greater the value of that coefficient, i.e. the more dollars we derive from one unit of energy, the greater the energy efficiency of our economic system.
Since 2012, the global economy has been going through an unprecedentedly long period of expansion in real output[1]. Whilst the obvious question is 'When will it crash?', it is interesting to investigate the correlates of this phenomenon in the sector of energy. In other terms: are we, as a civilisation, more energy-efficient as we get (temporarily) much more predictable in terms of economic growth? The very roots of this question are to be found in the fundamental mechanics of our civilisation. We, humans, are generally good at transforming energy. There is a body of historical and paleontological evidence that accurate adjustment of energy balance was one of the key factors in the evolutionary success of humans, both at the level of individual organisms and whole communities (Leonard, Robertson 1997[2]; Robson, Wood 2008[3]; Russon 2010[4]).
When we talk about the energy efficiency of human civilisation, it is useful to investigate the way we consume energy. In this article, the question is tackled by observing the pace of growth in energy efficiency, defined as GDP per unit of energy use (https://data.worldbank.org/indicator/EG.GDP.PUSE.KO.PP.KD?view=chart ). The amount of value added we can generate out of a given set of production factors, when using one unit of energy, is an interesting metric. It shows energy efficiency as such and, at the same time, the relative complexity of the technological basket we use. As stressed, for example, by Moreau and Vuille (2018[5]), when studying energy intensity, we need to keep in mind the threefold distinction between: a) direct consumption of energy, b) transport, and c) energy embodied in goods and services.
One of the really deep questions one can ask about the energy intensity of our culture is to what extent it is being shaped by short-term economic fluctuations. Ziaei (2018[6]) proved empirically that observable changes in the energy intensity of the U.S. economy are substantial in response to changes in monetary policy. There is a correlation between the way financial markets work and the consumption of energy. If the relative increase in energy consumption is greater than the pace of economic growth, the GDP created with one unit of energy decreases, and vice versa. There is also a mechanism of reaction of the energy sector to public policies: some public policies have a significant impact on the energy efficiency of the whole economy. Different sectors of the economy respond with different intensity, as for their consumption of energy, to public policies and to changes in financial markets. We can assume that a distinct sector of the economy corresponds to a distinct basket of technologies, and to a distinct institutional setting.
Faisal et al. (2017[7]) found a long-run correlation between the consumption of energy and real output of the economy, studying the case of Belgium. Moreover, the same authors found significant causality from real output to energy consumption, and that causality seems to be uni-directional, without any significant, reciprocal loop.
Energy efficiency of national economies, as measured with the coefficient of GDP per unit of energy (e.g. per kg of oil equivalent), should take into account that any given market is a mix of goods – products and services – which generate aggregate output. Any combination “GDP <> energy use” is a combination of product markets, as well as technologies (Heun et al. 2018[8]).
There is quite a fruitful path of research, which assumes that aggregate use of energy in an economy can be approached in a biological way, as a metabolic process. The MuSIASEM methodological framework seems to be promising in this respect (e.g. Andreoni 2017[9]). This leads to a further question: can changes in the aggregate use of energy be considered as adaptive changes in an organism, or in generations of organisms? In another development regarding the MuSIASEM framework, Velasco-Fernández et al (2018[10]) remind that real output per unit of energy consumption can increase, on a given basis of energy supply, through factors other than technological change towards greater efficiency in energy use. This leads to investigating the very nature of technological change at the aggregate level. Is aggregate technological change made only of engineering improvements at the microeconomic level, or maybe the financial reshuffling of the economic system counts, too, as adaptive technological change?
The MuSIASEM methodology stresses the fact that international trade, and its accompanying financial institutions, allow some countries to externalise industrial production, thus, apparently, to decarbonise their economies. Yet the industrial output they need still takes place, just somewhere else.
From the methodological point of view, the MuSIASEM approach explores the compound nature of energy efficiency measured as GDP per unit of energy consumption. Energy intensity can be understood at least at two distinct levels: aggregate and sectoral. At the aggregate level, all the methodological caveats make the « GDP per kg of oil equivalent » just a comparative metric, devoid of much technological meaning. At the sectoral level, we get closer to technology strictly speaking.
There is empirical evidence that, at the sectoral level, the consumption of energy per unit of aggregate output tends to a) converge across different entities (regions, entrepreneurs etc.), and b) decrease (see for example: Yu et al. 2012[11]).
There is also empirical evidence that general aging of the population is associated with lower energy intensity, whilst urbanization has the opposite effect, i.e. it is positively correlated with energy intensity (Liu et al. 2017[12]).
It is important to understand how, and to what extent, public policies can influence energy efficiency at the macroeconomic scale. These policies can either directly address the issue of the thermodynamic efficiency of the economy, or just aim at offshoring the most energy-intensive activities. Hardt et al. (2018[13]) study, in this respect, the case of the United Kingdom, where each percentage point of growth in real output has been accompanied, in recent years, by a 0,57% reduction in energy consumption per capita.
There are grounds for claiming that increasing the energy efficiency of national economies matters more for combatting climate change than the transition towards renewable energies strictly speaking (Weng, Zhang 2017[14]). Still, other research suggests that the transition towards renewable energies has an indirectly positive impact upon overall energy efficiency: economies that make a relatively quick transition towards renewables seem to associate that shift with better efficiency in using energy for creating real output (Akalpler, Shingil 2017[15]).
One should keep in mind that the energy efficiency of national economies has two layers, namely the efficiency of producing energy in itself, as distinct from the use we make of the so-obtained net energy. This is the concept of Energy Return on Energy Invested (EROI) (see: Odum 1971[16]; Hall 1972[17]). Changes in energy efficiency can occur on both levels, and in this respect the transition towards renewable sources of energy seems to bring more energy efficiency in the first layer, i.e. in the extraction of energy strictly speaking, as compared with fossil fuels. The problematically slow growth in energy efficiency could be coming precisely from the de facto decreasing efficiency of transformation of fossil fuels (Sole et al. 2018[18]).
Technology and social structures are mutually entangled (Mumford 1964[19], MacKenzie 1984[20], Kline and Pinch 1996[21]; David 1990[22], Vincenti 1994[23]; Mahoney 1988[24]; Ceruzzi 2005[25]). An excellent, recent piece of research by Taalbi (2017[26]) attempts a systematic, quantitative investigation of that entanglement.
The data published by the World Bank regarding energy use per capita in kg of oil equivalent (OEPC) (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE ) allows an interesting insight, when combined with structural information provided by the International Energy Agency (https://www.iea.org). As one ranks countries by their energy use per capita, the resulting hierarchy is, at the same time, a hierarchy in broadly understood socio-economic development. Countries displaying less than 200 kg of oil equivalent per capita are, at the same time, barely structured as economies, with little or no industry and transport infrastructure, with quasi-inexistent institutional orders, and with very limited access to electricity at the level of households and small businesses. In the class comprised between 200 kg OEPC and approximately 600 – 650 kg OEPC, one can observe countries displaying progressively more development in their markets and infrastructures, whilst remaining quite imbalanced in their institutional sphere. Past the mark of 650 kg OEPC, stable institutions are observable. Interestingly, the officially recognised threshold of « middle income », as a macroeconomic attribute of whole nations, seems to correspond to a threshold in energy use around 1 500 kg OEPC. The neighbourhood of those 1 500 kg OEPC looks like the transition zone between developing economies and emerging ones: the transition towards really stable markets, accompanied by well-structured industrial networks, as well as truly stable public sectors. Finally, as income per capita starts qualifying a country into the class of « developed economies », that country is most likely to pass another mark of energy consumption, that of 3 000 kg OEPC. This stylized observation of how energy consumption is linked to social structures is partly corroborated by other research, e.g. that regarding social equality in access to energy (see for example: Luan, Chen 2018[27]).
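Purely as a summary of that stylized observation, one can compress the thresholds into a small Python function; the class labels are my shorthand for the narrative above, not an official taxonomy:

```python
# A stylized summary of the OEPC thresholds described above; the labels
# compress the narrative and are not an official classification.

def development_class(oepc_kg: float) -> str:
    """Map energy use per capita (kg of oil equivalent) to the stylized
    development classes sketched in the text."""
    if oepc_kg < 200:
        return "barely structured economy"
    elif oepc_kg < 650:
        return "developing markets, imbalanced institutions"
    elif oepc_kg < 1500:
        return "stable institutions, developing economy"
    elif oepc_kg < 3000:
        return "emerging economy, stable markets"
    else:
        return "developed economy"

for x in (150, 400, 1000, 2000, 3500):
    print(x, "->", development_class(x))
```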
The nexus of energy use per capita, on the one hand, and institutions, on the other hand, has even found a general designation in recent literature: 'energy justice'. A cursory review of that literature demonstrates the depth of emotional entanglement between energy and social structures: it seems to be more about the connection between energy and the self-awareness of societies than about anything else (see for example: Fuller, McCauley 2016[28]; Broto et al. 2018[29]). The difficulty in getting rid of emotionally grounded stereotypes in this path of research might have its roots in the fact that we can hardly understand what energy really is, and attempts at this understanding send us to the very foundations of our understanding of what reality is (Coelho 2009[30]; McKagan et al. 2012[31]; Frontali 2014[32]). Recent research, conducted from the point of view of management science, reveals just as recent an emergence of new, virtually unprecedented, institutional patterns in the sourcing and use of energy. A good example of that institutional change is to be found in the new role of cities as active players in the design and implementation of technologies and infrastructures critical for energy efficiency (see for example: Geels et al. 2016[33]; Heiskanen et al. 2018[34]; Matschoss, Heiskanen 2018[35]).
Changes observable in the global economy, with respect to energy efficiency measured as GDP per unit of energy consumed, are interestingly accompanied by those in the supply of money, urbanization, as well as the shift towards renewable energies. Years 2008 – 2010, which marked, with a deep global recession, the passage towards currently experienced, record-long and record-calm period of economic growth, displayed a few other interesting transitions. In 2008, the supply of broad money in the global economy exceeded, for the first documented time, 100% of the global GDP, and that coefficient of monetization (i.e. the opposite of the velocity of money) has been growing ever since (World Bank 2018[36]). Similarly, the coefficient of urbanization, i.e. the share of urban population in the global total, exceeded 50% in 2008, and has kept growing since (World Bank 2018[37]). Even more intriguingly, the global financial crisis of 2007 – 2009 took place exactly when the global share of renewable energies in the total consumption of energy was hitting a trough, below 17%, and as the global recovery started in 2010, that coefficient started swelling as well, and has been displaying good growth since then[38]. Besides, empirical data indicates that since 2008, the share of aggregate amortization (of fixed assets) in the global GDP has been consistently growing, after having passed the cap of 15% (Feenstra et al. 2015[39]). Some sort of para-organic pattern emerges out of those observations, where energy efficiency of the global economy is being achieved through more intense a pace of technological change, in the presence of money acting as a hormone, catabolizing real output and fixed assets, whilst anabolizing new generations of technologies.
Thus, I have that database, which you can download precisely by clicking this link. One remark: this is an Excel file, and when you click on the link, it downloads without further notice. There is no opening on the screen. In this set, we have 12 variables: i) GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent), ii) fixed assets per 1 resident patent application, iii) share of aggregate depreciation in the GDP (the speed of technological obsolescence), iv) resident patent applications per 1 mln people, v) supply of broad money as % of GDP, vi) energy use per capita (kg of oil equivalent), vii) depth of the food deficit (kilocalories per person per day), viii) renewable energy consumption (% of total final energy consumption), ix) urban population as % of total population, x) GDP (demand side), xi) GDP per capita, and finally xii) population. My general, intuitive idea is to place energy efficiency in a broad socio-economic context, and to see what role in that context is played by financial liquidity. In simpler words, I want to discover how the energy efficiency of our civilization can be modified by a possible change in financial liquidity.
My database is a mix-up of 59 countries and years of observation ranging from 1960 to 2014, 1228 records in total. Each record is the state of things, regarding the above-named variables, in a given year. In quantitative research we call it a data panel. You have bits of information inside and you try to make sense out of it. I like pictures. Thus, I made some. These are the two graphs below. One of them shows the energy efficiency of national economies, the other one focuses on the consumption of energy per capita, and both variables are being shown as a function of supply of broad money as % of GDP. I consider the latter to be a crude measure of financial liquidity in the given place and time. The more money is being supplied per unit of Gross Domestic Product, the more financial liquidity people have as for doing something with them units of GDP. As you can see, the thing goes really all over the place. You can really say: ‘that is a cloud of points’. As it is usually the case with clouds, you can see any pattern in it, except anything mathematically regular. I can see a dung beetle in the first one, and a goose flapping its wings in the second. Many possible connections exist between the basic financial liquidity of the economic system, on the one hand, and the way we use energy, on the other hand.
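For reproducibility, here is a sketch of how such scatter plots can be made in Python; the file name and column labels are assumptions, as the actual Excel layout may differ:

```python
import pandas as pd
import matplotlib.pyplot as plt

# A sketch of producing the two clouds of points described above.
# 'energy_efficiency_panel.xlsx' and the column names are hypothetical.

df = pd.read_excel("energy_efficiency_panel.xlsx")

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
axes[0].scatter(df["money_pct_gdp"], df["gdp_per_kgoe"], s=8)
axes[0].set_xlabel("Supply of broad money, % of GDP")
axes[0].set_ylabel("GDP per kg of oil equivalent")
axes[1].scatter(df["money_pct_gdp"], df["energy_per_capita"], s=8)
axes[1].set_xlabel("Supply of broad money, % of GDP")
axes[1].set_ylabel("Energy use per capita, kg of oil equivalent")
plt.tight_layout()
plt.show()
```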
I am testing my database for general coherence. In the table below, I am showing the arithmetical average of each variable. As you hopefully know, since Abraham de Moivre we tend to assume that the arithmetical average of a large sample of something is the expected value of that something. Thus, the table below shows what we can reasonably expect from the database. We can see a bit of incoherence. Mean energy efficiency is $8,72 per kg of oil equivalent in energy. Good. Now, I check. I take the energy consumption per capita and I multiply it by the number of capitae, thus I go 3 007,28 * 89 965 651 = 270 551 748,43 tons of oil equivalent. This is the amount of energy consumed in one year by the average expected national society of homo sapiens in my database. Now, I divide the average expected GDP in the sample, i.e. $1 120 874,23 mln, by that expected total consumption of energy, and I hit just $1 120 874,23 mln / 270 551 748,43 tons = $4,14 per kilogram.
It is a bit low, given that a few sentences ago the same variable was supposed to be $8,72 per kg. Still, this is just a minor discrepancy as compared to the one in GDP per capita, which is the central measure of wealth in a population. The average calculated straight from the database is $22 285,63. Cool. This is quite a lot, you know. Now, I check. I take the average aggregate GDP per country, i.e. $1 120 874,23 mln, and I divide it by the average headcount of population, i.e. I go $1 120 874 230 000 / 89 965 651 = $12 458,91. What? $12 458,91? But it was supposed to be $22 285,63! Who took those 10 thousand dollars away from me? I mean, $12 458,91 is quite respectable; it is just a bit below my home country, Poland, presently. But still… ten thousand dollars of difference? How is it possible?
It is so embarrassing when numbers are not what we expect them to be. As a matter of fact, they usually aren’t. It is just our good will that makes them look so well fitting to each other. Still, this is what numbers do, when they are well accounted for: they embarrass. As they do so, they force us to think, and to dig meaning out from underneath the numbers. This is what quantitative analysis in social sciences is supposed to do: give us the meaning that we expect when we measure things about our own civilisation.
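One plausible mechanism behind those missing ten thousand dollars is aggregation itself: the average of per-country ratios is not the ratio of the averages, because a small, rich country weighs as much in the mean of ratios as a big, moderately wealthy one. A toy demonstration in Python, with invented numbers, makes the mechanism visible:

```python
# A toy demonstration: the mean of per-country ratios is not the ratio of
# the means. The two 'countries' are invented; only the mechanism matters.

countries = [
    {"gdp": 5.0e11, "pop": 5_000_000},    # small, rich
    {"gdp": 1.0e12, "pop": 150_000_000},  # big, moderately wealthy
]

mean_of_ratios = sum(c["gdp"] / c["pop"] for c in countries) / len(countries)
ratio_of_means = (sum(c["gdp"] for c in countries) / len(countries)) / \
                 (sum(c["pop"] for c in countries) / len(countries))

print(f"average of GDP-per-capita ratios: ${mean_of_ratios:,.0f}")  # ~$53,333
print(f"average GDP / average population: ${ratio_of_means:,.0f}")  # ~$9,677
```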
Table 1 – Average values from the pooled database of N = 1228 country-year observations

| Variable | Average expected value from empirical data, N = 1228 records |
|---|---|
| GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent) | $8,72 |
| Share of aggregate depreciation in the GDP – speed of technological obsolescence | 14% |
| Resident patent applications per 1 mln people – speed of invention | 158,90 |
| Supply of broad money as % of GDP – observed financial liquidity | 74,60% |
| Energy use (kg of oil equivalent per capita) | 3 007,28 kg |
| Depth of the food deficit (kilocalories per person per day) | 26,40 |
| Renewable energy consumption (% of total final energy consumption) | 16,05% |
| Urban population as % of total population | 69,70% |
| GDP (demand side; millions of constant 2011 PPP $) | 1 120 874,23 |
| GDP per capita (constant 2011 PPP $) | $22 285,63 |
| Population | 89 965 651 |
Let’s get back to the point, i.e. to finance. As I explain over and over again to my students, when we say ‘finance’, we almost immediately need to say: ‘balance sheet’. We need to think in terms of a capital account. Those expected average values from the table can help us to reconstruct at least the active side of that representative, expected, average economy in my database. There are three variables which sort of overlap: a) fixed assets per 1 resident patent application b) resident patent applications per 1 mln people and c) population. I divide the nominal headcount of population by 1 000 000, and thus I get population denominated in millions. I multiply the so-denominated population by the coefficient of resident patent applications per 1 mln people, which gives me, for each country and each year of observation, the absolute number of patent applications in the set. In my next step, I take the coefficient of fixed assets per 1 patent application, and I multiply it by the freshly-calculated-still-warm absolute number of patent applications.
Now, just to make it arithmetically transparent: when I do (« Fixed assets » / « Patent applications ») * « Patent applications », I take a fraction and I multiply it by its own denominator. It is de-factorisation. I stay with just the numerator of that initial fraction, thus with the absolute amount of fixed assets. For my representative, average, expected country in the database, I get Fixed Assets = $50 532 175,96 mln.
I do slightly the same with money. I take “Supply of money as % of the GDP”, and I multiply it by the incriminated GDP, which makes Money Supplied = 74,60% * $1 120 874,23 mln = $836 213,98 mln. We have a fragment in the broader balance sheet of our average expected economy: Fixed Assets $50 532 175,96 mln and Monetary Balances $836 213,98 mln. Interesting. How does it unfold over time? Let’s zeee… A bit of rummaging, and I get the contents of Table 2, below. There are two interesting things about that table.
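A sketch of those two computations in Python; since Table 1 above omits the coefficient of fixed assets per patent application, I back it out from the quoted total, which makes this an illustration rather than a replication:

```python
# Sketch of the 'de-factorisation' described above, using the average
# expected values quoted in the text.

population = 89_965_651
patents_per_mln = 158.90
gdp_mln = 1_120_874.23          # GDP, $ mln
money_pct_gdp = 0.7460          # supply of broad money, % of GDP

patents = patents_per_mln * population / 1_000_000   # ~14 296 applications

# The text quotes Fixed Assets = $50 532 175,96 mln; backing out the implied
# coefficient of fixed assets per patent application (not shown in Table 1):
fixed_assets_mln = 50_532_175.96
assets_per_patent = fixed_assets_mln / patents        # ~$3 535 mln per application

# De-factorisation: (assets / patents) * patents returns the absolute stock.
assert abs(assets_per_patent * patents - fixed_assets_mln) < 1e-6 * fixed_assets_mln

# Money supplied; close to the $836 213,98 mln quoted above - the small gap
# comes from rounding the liquidity coefficient to 74,60%.
money_supplied_mln = money_pct_gdp * gdp_mln
print(f"Fixed assets: ${fixed_assets_mln:,.2f} mln")
print(f"Monetary balances: ${money_supplied_mln:,.2f} mln")
```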
Table 2 – Changes over time in the capital account of the average national economy

| Year | Average fixed assets per national economy, $ mln constant 2011 PPP | GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent), in the average national economy | Supply of broad money in average national economy, $ mln constant 2011 PPP | Money to fixed assets |
|---|---|---|---|---|
| 1990 | 2 036 831,928 | 8,08 | 61,526 | 0,0030% |
| 1991 | 1 955 283,198 | 8,198 | 58,654 | 0,0030% |
| 1992 | 2 338 609,511 | 8,001 | 61,407 | 0,0026% |
| 1993 | 2 267 728,024 | 7,857 | 60,162 | 0,0027% |
| 1994 | 2 399 075,082 | 7,992 | 60,945 | 0,0025% |
| 1995 | 2 277 869,991 | 7,556 | 60,079 | 0,0026% |
| 1996 | 2 409 816,67 | 7,784 | 64,268 | 0,0027% |
| 1997 | 2 466 046,108 | 7,707 | 71,853 | 0,0029% |
| 1998 | 2 539 482,259 | 7,76 | 77,44 | 0,0030% |
| 1999 | 2 634 454,042 | 8,085 | 82,987 | 0,0032% |
| 2000 | 2 623 451,217 | 8,422 | 84,558 | 0,0032% |
| 2001 | 2 658 255,842 | 8,266 | 88,335 | 0,0033% |
| 2002 | 2 734 170,979 | 8,416 | 92,739 | 0,0034% |
| 2003 | 2 885 480,779 | 8,473 | 97,477 | 0,0034% |
| 2004 | 3 088 417,325 | 8,638 | 100,914 | 0,0033% |
| 2005 | 3 346 005,071 | 8,877 | 106,836 | 0,0032% |
| 2006 | 3 781 802,623 | 9,106 | 119,617 | 0,0032% |
| 2007 | 4 144 895,314 | 9,506 | 130,494 | 0,0031% |
| 2008 | 4 372 927,883 | 9,57 | 140,04 | 0,0032% |
| 2009 | 5 166 422,174 | 9,656 | 171,191 | 0,0033% |
| 2010 | 5 073 697,622 | 9,62 | 164,804 | 0,0032% |
| 2011 | 5 702 948,813 | 9,983 | 178,381 | 0,0031% |
| 2012 | 6 039 017,049 | 10,112 | 195,487 | 0,0032% |
| 2013 | 6 568 280,779 | 10,368 | 205,159 | 0,0031% |
| 2014 | 5 559 781,782 | 10,755 | 161,435 | 0,0029% |
This is becoming really interesting. Both components in the capital account of the representative, averaged economy had been growing until 2013; then they fell. Energy efficiency has been growing quite consistently as well. The ratio of money to assets, thus a crude measure of financial liquidity in this capital account, remains sort of steady, with a slight oscillation. You can see it in the graph below. I represented all the variables as fixed-base indexes: the value recorded for the year 2000 is 1,00, and any other value is indexed over that one. We do that thing all the time in social sciences, when we want to study apparently incompatible magnitudes. A little test of Pearson correlation, and… Yesss! Energy efficiency is Pearson-correlated with the amount of fixed assets at r = 0,953096394, and with the amount of money supplied at r = 0,947606073. All that in the presence of more or less steady a liquidity.
Provisional conclusion: the more capital we, the average national economy, accumulate, the more energy-efficient we are, and we dynamically adjust so as to keep the liquidity of that capital, at least the strictly monetary liquidity, at a constant level.
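To close the loop, here is a sketch of the indexing and correlation steps in Python; only three of the twenty-five years from Table 2 are typed in, so the printed r values will differ from the full-sample figures quoted above:

```python
import pandas as pd

# A sketch of the last two steps: fixed-base indexes (year 2000 = 1,00) and
# Pearson correlations. 'table2' mirrors Table 2 above; only three years are
# typed in to keep the sketch short - the full 1990-2014 table gives r ~ 0,95.

table2 = pd.DataFrame({
    "year": [2000, 2007, 2014],
    "fixed_assets": [2_623_451.217, 4_144_895.314, 5_559_781.782],
    "gdp_per_kgoe": [8.422, 9.506, 10.755],
    "money": [84.558, 130.494, 161.435],
}).set_index("year")

# Fixed-base indexes: every series equals 1.00 in the base year 2000.
indexed = table2 / table2.loc[2000]

# Pearson correlation of energy efficiency with the two capital aggregates:
print(indexed["gdp_per_kgoe"].corr(indexed["fixed_assets"]))
print(indexed["gdp_per_kgoe"].corr(indexed["money"]))
```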
OK, here is the big picture. The highest demographic growth, in absolute numbers, takes place in Asia and Africa. The biggest migratory flows start from there, as well, and they aim at regions with much less accrual of human mass: North America and Europe. Less human accrual, indeed, and yet much better conditions for each new homo sapiens. In some places on the planet, a huge number of humans is born every year. That huge number means a huge pool of genetic variations around the same genetic tune, namely that of homo sapiens. Those genetic variations leave their homeland for a new and better one, where they bring their genes into a new social environment, which assures them much more safety, and higher odds of prolonging their genetic line.
What is the point of there being more specimens of any species? I mean, is there a logic to increasing the headcount of any population? When I say ‘any’, it ranges from bacteria to us, humans. After having meddled with the most basic algorithm of a neural network (see « Pardon my French, but the thing is really intelligent » and « Ce petit train-train des petits signaux locaux d’inquiétude »), I have some thoughts about what intelligence is. I think that intelligence is a class, i.e. it is a framework structure able to produce many local, alternative instances of itself.
Being intelligent consists, to start with, in creating alternative versions of itself, and creating them purposefully imperfect so as to generate small local errors, whilst using those errors to create still different versions of itself. The process is tricky. There is some sort of fundamental coherence required between the way of creating those alternative instances of oneself, and the way the resulting errors are processed. Failing such coherence, the allegedly intelligent structure can fall into purposeful ignorance, or into panic.
Purposeful ignorance manifests as the incapacity to signal and process the local imperfections in alternative instances of the intelligent structure, although those imperfections actually stand out and wave at you. This is the ‘everything is just fine and there is no way it could be another way’ behavioural pattern. It happens, for example, when the function of processing local errors is too gross – or not sharp enough, if you want – to actually extract meaning from tiny, still observable local errors. The panic mode of an intelligent structure, on the other hand, is that situation when the error-processing function is too sharp for the actually observable errors. Them errors just knock it out of balance, like completely, and the function signals general ‘Error’, or ‘I can’t stand this cognitive dissonance’.
So, what is the point of there being more specimens of any species? The point might be to generate as many specific instances of an intelligent structure – the specific DNA – as possible, so as to generate purposeful (and still largely unpredictable) errors, just to feed those errors into the future instantiations of that structure. In the process of breeding, some path of evolutionary coherence leads to errors that can be handled, and that path unfolds between a state of evolutionary ‘everything is OK, no need to change anything’ (case in point: the mosquito, unchanged for millions of years), and a state of evolutionary ‘what the f**k!?’ (case in point: the common fruit fly, which produces an insane amount of mutations in response to the slightest environmental stressor).
Essentially, all life could be a framework structure, which, back in the day, made a piece of software in artificial intelligence – the genetic code – and ever since that piece of software has been working on minimizing the MSE (mean square error) in predicting the next best version of life, and it has been working by breeding, in a tree-like method of generating variations, indefinitely many instances of the framework structure of life. Question: what happens when, one day, a perfect form of life emerges? Something like TRex – Megalodon – Angelina Jolie – Albert Einstein – Jeff Bezos – [put whatever or whoever you like in the rest of that string]? On the grounds of what I have already learnt about artificial intelligence, such a state of perfection would mean the end of experimentation, thus the end of multiplying instances of the intelligent structure, thus the end of births and deaths, thus the end of life.
Question: if the above is even remotely true, does that overarching structure of life understand how the software it made – the genetic code – works? Not necessarily. That very basic algorithm of a neural network, which I have experimented with a bit, produces local instances of the sigmoid function Ω = 1/(1 + e^(-x)) that come out as Ω = 1, even though 1 + e^(-x) > 1 always holds, so that, mathematically, Ω should stay below 1. Still, the thing does it just sometimes. Why? How? Go figure. That thing accomplishes an apparently absurd task, and it does so just by being sufficiently flexible with its random coefficients. If Life In General is God, that God might not have a clue about how actual life works. God just needs to know how to write an algorithm for making actual life work. I would even say more: if God is any good at being one, he would write an algorithm smarter than himself, just to make things advance.
The hypothesis of life being one, big, intelligent structure gives an interesting insight into what the cost of experimentation is. Each instance of life, i.e. each specimen of each species, needs energy to sustain it. That energy takes many forms: light, warmth, food, Lexus (a form of matter), parties, Armani (another peculiar form of matter) etc. The more instances of life there are, the more energy they need to be there. Even if we take the Armani particle out of the equation, life is still bloody energy-consuming. The available amount of energy puts a limit on the number of experimental instances of the framework, structural life that the platform (Earth) can handle.
Here comes another one about climate change. Climate change means warmer, let’s be honest. Warmer means more energy on the planet. Yes, temperature is our human measurement scale for the aggregate kinetic energy of vibrating particles. More energy is what we need to have more instances of framework life at the same time. Logically, incremental change in total energy on the planet translates into incremental change in the capacity of framework life to experiment with itself. Still, as framework life could be just the God who made that software for artificial intelligence (yes, I am still in the same metaphor), said framework life might be not quite aware of how bumpy the road can be towards the desired minimum in the Mean Square Error. If God is an IT engineer, it could very well be the case.
I had that conversation with my son, who is finishing his IT engineering studies. I told him ‘See, I took that algorithm of a neural network, and I just wrote its iterations out into separate tables of values in Excel, just to see what it does, like iteration after iteration. Interesting, isn’t it? I bet you have done such a thing many times, eh?’. I still remember that heavy look in my son’s eyes: ‘Why the hell should I ever do that?’ he went. ‘There is a logical loop in that algorithm, you see? This loop is supposed to do the job, I mean to iterate until it comes up with something really useful. What is the point of doing manually what the loop is supposed to do for you? It is like hiring a gardener and then doing everything in the garden by yourself, just to see how it goes. It doesn’t make sense!’. ‘But it’s interesting to observe, isn’t it?’ I went, and then I realized I was talking to an alien form of intelligence, there.
Anyway, if God is a framework life who created some software that learns by itself, He could be not quite aware of the tiny little difficulties in the unfolding of the Big Plan. I mean the acidification of oceans, hurricanes and stuff. The framework life could say: ‘Who cares? I want more learning in my algorithm, and it needs more energy to loop on itself, and so it makes those instances of me, pumping more carbon into the atmosphere, so as to have more energy to sustain more instances of me. Stands to reason, man. It is all working smoothly. I don’t understand what you are moaning about’.
Whatever that godly framework life says, I am still interested in studying particular instances of what happens. One of them is my business concept of EneFin. See « Which salesman am I? » for what I think is the last case of me being, like, fully verbal about it. Long story short, the idea consists in crowdfunding capital for small, local operators of power systems based on renewable energies, by selling shares in equity, or units of corporate debt, in bundles with tradable claims on either the present output of energy, or the future one. In simple terms, you buy from that supplier of energy tradable claims on, for example, 2 000 kWh, and you pay the regular market price; still, inside that price, the energy strictly spoken comes with a juicy discount. The rest of the actual amount of money you have paid buys you shares in your supplier’s equity.
The idea in that simplest form is largely based on two simple observations about the energy bills we pay. In most countries (at least in Europe), our energy bills are made of two components: the (slightly) variable value of the energy actually supplied, and a fixed part labelled sometimes as ‘maintenance of the grid’ or similar. Besides, small users (e.g. households) usually pay a much higher unitary price per kWh than large, institutional-scale buyers (factories, office buildings etc.). In my EneFin concept, a local supplier of renewable energy makes a deal with its local customers to sell them electricity at a fair, market price, with participation in equity on top of the electricity.
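Just to make the mechanics tangible, here is a toy recomputation of such a bundle in Python; the €0,25 retail and €0,11 discounted prices per kWh are my own illustrative assumptions, not data:

kwh = 2000               # tradable claims on 2 000 kWh
retail_price = 0.25      # EUR per kWh: hypothetical price a household normally pays
discount_price = 0.11    # EUR per kWh: hypothetical discounted price of the energy itself

total_paid = kwh * retail_price          # the regular market price: 500.00 EUR
energy_part = kwh * discount_price       # the energy with its juicy discount: 220.00 EUR
equity_part = total_paid - energy_part   # 280.00 EUR worth of shares in the supplier

print(f"paid {total_paid:.2f}, energy {energy_part:.2f}, equity {equity_part:.2f}")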
That would be a classical crowdfunding scheme, such as you can find with StartEngine, for example. I want to give it some additional, financial spin. Classical crowdfunding has a weakness: low liquidity. The participatory shares you buy via crowdfunding are usually non-tradable, and they create a quasi-cooperative bond between investors and investees. Where I come from, i.e. in Central Europe, we are quite familiar with cooperatives. At first sight, they look like a form of institutional heaven, compared to those big, ugly, capitalistic corporations. Still, after you have waved away that first mist, cooperatives turn out to be very exposed to embezzlement, and to abuse of managerial power. Besides, they are quite weak when competing for capital against corporate structures. I want to create highly liquid a transactional platform, with those investments being as tradable as possible, and to use financial liquidity both as a shield against managerial excesses, and as a competitive edge for those small ventures.
My idea is to assure liquidity via a FinTech solution similar to that used by Katipult Technology Corp., i.e. to create some kind of virtual currency (note: virtual currency is not absolutely the same as cryptocurrency; cousins, but not twins, so to say). Units of that currency would correspond to those complex contracts « energy plus equity ». First, you create an account with EneFin, i.e. you buy a certain amount of the virtual currency used inside the EneFin platform. I call them ‘tokens’ to simplify. Next, you pick your complex contracts, in the basket of those offered by local providers of energy. You buy those contracts with the tokens you have already acquired. Now, you change your mind. You want to withdraw your capital from supplier A, and move it to supplier H, which you haven’t considered so far. You move your tokens from A to H, even with a mobile app. It means that the transactional platform – the EneFin one – buys from you the corresponding amount of equity of A, and tries to find for you some available equity in H. You can also move your tokens completely out of investment in those suppliers of energy. You can free your money, so to say. Just as simple: you move them out, even with a movement of your thumb on the screen. The EneFin platform buys from you the shares you have moved out of.
You have an even different idea. Instead of investing your tokens into the equity of a provider of energy, you want to lend them. You move your tokens to the field ‘lending’, you study the interest rates offered on the transactional platform, and you close the deal. Now, the corresponding number of tokens represents securitized (thus tradable) corporate debt.
Question: why the hell bother with a virtual currency, possibly a cryptocurrency, instead of just using good old fiat money? At this point, I am reaching to the very roots of the Bitcoin, the grandpa of all cryptocurrencies (or so they say). Question: what amount of money do you need to finance 20 transactions of equal unitary value P? Answer: it depends on how frequently you monetize them. Imagine that the EneFin app offers you an option like ‘Monetize vs. Don’t Monetize’. As long as – with each transaction you do on the platform – you stick to the ‘Don’t Monetize’ option, your transactions remain recorded inside the transactional platform, and so there is recorded movement in tokens, but there is no monetary outcome, i.e. your strictly spoken monetary balance, for example that in €, does not change. It is only when you hit the ‘Monetize’ button in the app that the current bottom line of your transactions inside the platform is converted into « official » money.
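The arithmetic behind that point can be sketched in a few lines of Python; the 50/50 split between buys and sells, and the option of monetizing only the bottom line, are my own toy assumptions:

import random

random.seed(4)
P = 100.0                                               # unitary value of one transaction
trades = [random.choice((+P, -P)) for _ in range(20)]   # 20 buys (+) and sells (-)

fiat_if_each = sum(abs(t) for t in trades)   # monetize every single transaction
fiat_if_netted = abs(sum(trades))            # monetize just the final bottom line

print(fiat_if_each)     # 2000.0 of official money has to move
print(fiat_if_netted)   # only a fraction of that moves

The more netting happens inside the platform before anyone hits ‘Monetize’, the less official money the same volume of transactions requires.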
The virtual currency in the EneFin scheme would serve to allow a high level of liquidity (more transactions in a unit of time), without provoking the exactly corresponding demand for money. What connection with artificial intelligence? I want to study the possible absorption of such a scheme in the market of energy, and in the related financial markets, as a manifestation of collective intelligence. I imagine two intelligent framework structures: one incumbent (the existing markets) and one emerging (the EneFin platform). Both are intelligent structures to the extent that they technically can produce many alternative instances of themselves, and thus intelligently adapt to their environment by testing those instances and utilising the recorded local errors.
In terms of a neural network algorithm, that intelligent adaptation can manifest itself, for example, as optimization of two coefficients: the share of energy traded via EneFin in the total energy supplied in the given market, and the capitalization of EneFin as a share in the total capitalization of the corresponding financial markets. Those two coefficients can be equated to weights in a classical MLP (Multilayer Perceptron) network, and the perceptron could work around them. Of course, the issue can be approached from a classical methodological angle, as a general equilibrium to assess via « normal » econometric modelling. Still, what I want is precisely what I hinted at in « Pardon my French, but the thing is really intelligent » and « Ce petit train-train des petits signaux locaux d’inquiétude »: I want to study the very process of adaptation and modification in those intelligent framework structures. I want to know, for example, how much experimentation those structures need to form something really workable, i.e. an EneFin platform with serious business going on, and, at the same time, that business contributing to the development of renewable energies in the given region of the world. Do those framework structures have enough local resources – mostly capital – for sustaining the number of alternative instances needed for effective learning? What kind of factors can block learning, i.e. drive the framework structure either into deliberate an ignorance of local errors, or into panic?
Here is an example of more exact a theoretical issue. In a typical economic model, things are connected. When I pull on the string ‘capital invested in fixed assets’, I can see a valve labelled ‘Lifecycle of incumbent technologies’ open, and some steam rushes out. When I push the ‘investment in new production capacity’ button, I can see something happening in the ‘Jobs and employment’ department. In other words, variables present in economic systems mutually constrain each other. Just some combinations work; others just don’t. Now, the thing I have already discovered about them Multilayer Perceptrons is that as soon as I add some constraint on the weights assigned to input data, for example when I swap ‘random’ for ‘erandom’, the scope of possible structural functions leading to effective learning dramatically shrinks, and the likelihood of my neural network falling into deliberate ignorance or into panic just swells like hell. What degree of constraint on those economic variables is tolerable in an economic system conceived as a neural network, thus as a framework intelligent structure? A sketch of that effect follows below.
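The ‘erandom’ trick is specific to my earlier experiments, so here is just a generic sketch of the effect, under my own toy assumptions: the same one-layer learner trained twice, once with free weights, once with weights squeezed into a narrow band:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(constrain, epochs=1500, lr=0.05, seed=0):
    # a tiny one-layer learner; the target is realizable with weights (0.2, 0.5, 0.3)
    rng = np.random.default_rng(seed)
    X = rng.random((30, 3))
    y = sigmoid(X @ np.array([0.2, 0.5, 0.3]))[:, None]
    w = rng.random((3, 1))
    for _ in range(epochs):
        out = sigmoid(X @ w)
        w += lr * (X.T @ ((y - out) * out * (1 - out)))   # delta rule
        if constrain:
            w = np.clip(w, 0.45, 0.55)   # the constraint on weights
    return float(np.mean((y - sigmoid(X @ w)) ** 2))

print("free weights:       ", train(constrain=False))
print("constrained weights:", train(constrain=True))

The free version can settle close to the weights that generated the data; the constrained one cannot reach two of them, and its error stays stuck at a higher level – a crude picture of an intelligent structure whose room for experimentation has been cut down.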
There are some general guidelines I can see for building a neural network that simulates those things. Creating local power systems, based on microgrids connected to one or more local sources of renewable energies, can be greatly enhanced with efficient financing schemes. The publicly disclosed financial results of companies operating in those segments – such as Tesla[1], Vivint Solar[2], FirstSolar[3], or 8Point3 Energy Partners[4] – suggest that business models in that domain are only emerging, and are far from being battle-tested. There is still a long way to pave towards well-rounded business practices as regards such local power systems, both profitable economically and sustainable socially.
The basic assumptions of a neural network in that field are essentially behavioural. Firstly, consumption of energy is largely predictable at the level of individual users. The size of a market in energy changes as the number of users changes. The output of energy needed to satisfy those users’ needs, and the corresponding capacity to install, are largely predictable in the long run. Consumers of energy use a basket of energy-consuming technologies. The structure of this basket determines their overall consumption, and is determined, in turn, by long-run social behaviour. Changes over time in that behaviour can be represented as a social game, where consecutive moves consist in purchasing, or disposing of, a given technology. Thus, a game-like process of relatively slow social change generates a relatively predictable output of energy, and a demand thereof. Secondly, the behaviour of investors in any financial market, crowdfunding or other, is comparatively more volatile. Investment decisions are taken, and modified, at a much faster pace than decisions about the basket of technologies used in everyday life.
The financing of relatively small, local power systems, based on renewable energies and connected by local microgrids, implies an interplay of the two above-mentioned patterns, namely the relatively slower transformation in the technological base, and the quicker, more volatile modification of investors’ behaviour in financial markets.
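If I were to sketch the skeleton of a simulation reflecting those two paces, it could look like the few lines below; every number in it is a placeholder assumption of mine, not an estimate:

import random

random.seed(1)
demand_kwh = 10_000.0    # hypothetical yearly energy demand of a local community
investor_share = 0.10    # hypothetical share of local capital parked in the microgrid

for year in range(10):
    # the slow, game-like move: once a year, the basket of technologies shifts a little
    demand_kwh *= 1 + random.uniform(-0.02, 0.03)
    for month in range(12):
        # the fast moves: investors re-allocate every month, and much more nervously
        investor_share = min(max(investor_share + random.uniform(-0.03, 0.03), 0.0), 1.0)
    print(f"year {year}: demand {demand_kwh:,.0f} kWh, investor share {investor_share:.2f}")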
And so I am meddling with neural networks. It had to come. It just had to. I started with me having many ideas to develop at once. Routine stuff with me. Then, the Editor-in-Chief of the ‘Energy Economics’ journal returned the manuscript of an article on the energy-efficiency of national economies, which I had submitted to them, with a general remark that I should work both on the clarity of my hypotheses, and on the scientific spin of my empirical research. In short: Mr Wasniewski, linear models tested with Ordinary Least Squares are a bit oldie, if you catch my drift. Bloody right, Mr Editor-in-Chief. Basically, I agree with your remarks. I need to move out of my cavern, towards the light of progress, and get acquainted with the latest fashion. The latest fashion we are wearing this season is artificial intelligence, machine learning, and neural networks.
It comes handy, to the extent that I obsessively meddle with the issue of collective intelligence, and am dreaming about creating a model of human social structure acting as collective intelligence, sort of a beehive. Whilst the casting for a queen in that hive remains open, and is likely to stay this way for a while, I am digging into the very basics of neural networks. I am looking in the Python department, as I have already got a bit familiar with that environment. I found an article by James Loy, entitled “How to build your own Neural Network from scratch in Python”. The article looks a bit like sourcing from another one, available at the website of ProBytes Software, thus I use both to develop my understanding. I pasted the whole algorithm by James Loy into my Python Shell, made it run with an ‘enter’, and I am waiting for what it is going to produce. In the meantime, I am being verbal about my understanding.
The author declares he wants to do more or less the same thing as I do, namely to understand neural networks. He constructs a simple algorithm for a neural network. It starts with defining the neural network as a class, i.e. as a callable object that acts as a factory for new instances of itself. In the neural network defined as a class, that algorithm starts by calling the constructor method ‘__init__’, which constructs an instance ‘self’ of that class. It goes like ‘def __init__(self, x, y):’. In other words, the class ‘NeuralNetwork’ generates instances ‘self’ of itself, and each instance is essentially made of two variables: input x, and output y. The ‘x’ is declared as the input variable through the ‘self.input = x’ expression. Then, the output of the network is defined in two steps. Yes, the ‘y’ is generally the output, only in a neural network we want the network to predict a value of ‘y’, thus some kind of y^. What we have to do is to define ‘self.y = y’, feed the real x-s and the real y-s into the network, and expect the latter to turn out some y^-s.
Logically, we need to prepare a vessel for holding the y^-s. The vessel is defined as ‘self.output = np.zeros(y.shape)’. The ‘shape’ attribute gives the dimensions of an array as a tuple – a table, for those mildly fond of maths. What are the dimensions of ‘y’ in that ‘y.shape’? They come with the empirical data we feed into the network. The weights, in turn, are defined right after the ‘self.input = x’ has been said: ‘self.weights1 = np.random.rand(self.input.shape[1],4)’ fires off, closely followed by ‘self.weights2 = np.random.rand(4,1)’. All in all, the entire class ‘NeuralNetwork’ is defined in the following form:
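Reassembling the expressions just quoted (after James Loy’s article), it goes:

import numpy as np

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        self.weights1 = np.random.rand(self.input.shape[1], 4)   # input -> hidden layer
        self.weights2 = np.random.rand(4, 1)                     # hidden layer -> output
        self.y = y
        self.output = np.zeros(y.shape)   # the vessel for the predicted y^-s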
The hidden layer of each instance in that neural network is a two-dimensional array, made of one row per observation and four columns (I hope I got it correctly), whilst the output takes the shape of ‘y’. Initially, the output is filled with zeros, so as to make room for something more meaningful. The predicted y^-s are supposed to jump into those empty sockets, held ready by the zeros. The ‘random.rand’ expression, associated with ‘weights’, means that the network is supposed to assign randomly different levels of importance to the different x-s fed into it.
Anyway, the next step is to instruct my snake (i.e. Python) what to do next with that class ‘NeuralNetwork’. It is supposed to do two things: feed data forward, i.e. make those neurons work on predicting the y^-s, and then check itself with an operation called backpropagation of errors. The latter consists in comparing the predicted y^-s with the real y-s, measuring the discrepancy as a loss of information, updating the initial random weights with conclusions from that measurement, and doing it all again, and again, and again, until the error runs down to very low values. The weights applied by the network in order to generate that lowest possible error are the best the network can do in terms of learning.
The feeding forward of predicted y^-s goes on in two steps, or in two layers of neurons, one hidden, and one final. They are defined as:
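Still after the same source, in my reassembly:

    def feedforward(self):
        # hidden layer: weighted input, squashed through the sigmoid (defined further below)
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        # output layer: the same operation applied to the hidden signal
        self.output = sigmoid(np.dot(self.layer1, self.weights2))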
The ‘sigmoid’ part means the sigmoid function, AKA logistic function, expressed as y = 1/(1 + e^(-x)), where, at the end of the day, the y always falls somewhere between 0 and 1, and the ‘x’ is not really the empirical, real ‘x’, but the ‘x’ multiplied by a weight, ranging between 0 and 1 as well. The sigmoid function is good for testing the weights we apply to various types of input x-es. Whatever kind of data you take – populations measured in millions, or consumption of energy per capita, measured in kilograms of oil equivalent – the basic sigmoid function y = 1/(1 + e^(-x)) will always yield a value between 0 and 1. This function essentially normalizes any data.
Now, I want to take differentiated data, like population as headcount, energy consumption in them kilograms of whatever oil equals to, and the supply of money in standardized US dollars. Quite a mix of units and scales of measurement. I label those three as, respectively, x_a, x_b, and x_c. I assign them weights ranging between 0 and 1, so that the sum of weights never exceeds 1. In plain language, it means that for every vector of observations made of x_a, x_b, and x_c, I take a pinchful of x_a, then a zest of x_b, and a spoon of x_c. I make them into x = w_a*x_a + w_b*x_b + w_c*x_c, I give it a minus sign, and I put it as an exponent for the Euler’s constant.
That yields y = 1/(1 + e^(-(w_a*x_a + w_b*x_b + w_c*x_c))). Long, but meaningful to the extent that now, my y is always to be found somewhere between 0 and 1, and I can experiment with various weights for my various shades of x, and look at what it gives in terms of y.
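A quick numerical check of that formula, with made-up magnitudes standing in for population, energy and money:

import math

xa, xb, xc = 38.0, 2490.0, 836213.98   # made-up, raw-scale observations
wa, wb, wc = 0.5, 0.3, 0.2             # weights summing up to 1

x = wa*xa + wb*xb + wc*xc
y = 1 / (1 + math.exp(-x))
print(y)   # 1.0 for all practical purposes

The result lands between 0 and 1 as promised, only it saturates at 1.0, because raw magnitudes like these overwhelm the exponential. This, incidentally, is a foretaste of what happens later in my Excel experiment, where the hidden-layer sigmoids all come out as 1.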
In the algorithm above, the ‘np.dot’ function conveys the idea of weighing our x-s. With two arrays, like the input signal ‘x’ and its weights ‘w’, the ‘np.dot’ function yields their dot product, i.e. the sum of element-wise multiplications, exactly in the x = w_a*x_a + w_b*x_b + w_c*x_c drift.
Thus, the first really smart layer of the network, the hidden one, takes the empirical x-s, weighs them with random weights, and makes a sigmoid of that. The next layer, the output one, takes the sigmoid-calculated values from the hidden layer, and applies the same operation to them.
One more remark about the sigmoid. You can put something else instead of 1 in the numerator. Then, the sigmoid will yield your data normalized over that something. If you have a process that tends towards a level of saturation, e.g. the number of psilocybin parties per month, you can put that level in the numerator. On top of that, you can add parameters to the denominator. In other words, you can replace the 1 + e^(-x) with b + e^(-k*x), where b and k can be whatever seems to make sense for you. With that specific spin, the sigmoid is good for simulating anything that tends towards saturation over time. Depending on the parameters in the denominator, the shape of the corresponding curve will change. Usually, ‘b’ works well when taken as a fraction of the numerator (the saturation level), and the ‘k’ seems to behave meaningfully when comprised between 0 and 1.
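For instance, a saturation-bound process sketched with that parametrized sigmoid; the ceiling of 500 and the parameters b and k are purely illustrative:

import math

saturation = 500.0   # the level the process tends towards (the numerator)
b, k = 1.0, 0.35     # illustrative parameters in the denominator

def saturating(t):
    return saturation / (b + math.exp(-k * t))

for t in range(0, 25, 4):
    print(t, round(saturating(t), 1))   # climbs from 250.0 towards 500.0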
I return to the algorithm. Now, as the network has generated a set of predicted y^-s, it is time to compare them to the actual y-s, and to evaluate how much there is still to learn. We can use any measure of error; still, most frequently, them algorithms go after the simplest ones, like the Euclidean distance between the set of actual y-s and that of predicted y^-s: E = [(y_1 – y^_1)² + (y_2 – y^_2)² + … + (y_n – y^_n)²]^0,5. Divide the sum of those squares by the number of observations before taking the root, and you get, at the same time, the root of the Mean Square Error, i.e. the standard deviation of the predicted y^-s from the actual, empirical y-s.
In this precise algorithm, the author goes down another avenue: he takes the actual differences between observed y-s and predicted y^-s, and then multiplies them by the sigmoid derivative of the predicted y^-s. Then he multiplies, matrix-wise, the transposed signals of the preceding layer by those (y – y^)*(y^)’ terms, with (y^)’ standing for the derivative. It goes like:
    def backprop(self):
        # application of the chain rule to find the derivative of the loss function
        # with respect to weights2 and weights1 (the two d_weights lines are as in James Loy's original)
        d_weights2 = np.dot(self.layer1.T, (2*(self.y - self.output) * sigmoid_derivative(self.output)))
        d_weights1 = np.dot(self.input.T, (np.dot(2*(self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1)))
        # update the weights with the derivative (slope) of the loss function
        self.weights1 += d_weights1
        self.weights2 += d_weights2

def sigmoid(x):
    return 1.0/(1 + np.exp(-x))

def sigmoid_derivative(x):
    # the sigmoid's derivative, expressed through the sigmoid's own output
    return x * (1.0 - x)
I am still trying to wrap my mind around the reasons for taking this specific approach to the backpropagation of errors. The derivative of a sigmoid y = 1/(1 + e^(-x)) is y’ = [1/(1 + e^(-x))]*{1 – [1/(1 + e^(-x))]} and, as any derivative, it measures the slope of change in y. When I do (y_1 – y^_1)*(y^_1)’ + (y_2 – y^_2)*(y^_2)’ + … + (y_n – y^_n)*(y^_n)’, it is as if I were taking some kind of weighted average. That weighted average can be understood in two alternative ways. Either it is the deviation of y^ from y, weighted with the local slopes, or it is a general slope weighted with the local deviations. Now, when I transpose and ‘.dot’-multiply matrices of such terms, I combine those local, slope-weighted deviations across all observations, and then I feed the product into the neural network with the ‘+=’ operator. The latter means that in the next round of calculations, the network can do whatever it wants with those terms. Hmmweeellyyeess, makes some sense. I don’t know what exact sense is that, but it has some mathematical charm.
Now, I try to apply the same logic to the data I am working with in my research. Just to give you an idea, I show some data for just one country: Australia. Why Australia? Honestly, I don’t see why it shouldn’t be. Quite a respectable place. Anyway, here is that table. GDP per unit of energy consumed can be considered as the target output variable y, and the rest are those x-s.
Table 1 – Selected data regarding Australia
| Year | GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent) – y | Share of aggregate amortization in the GDP – X1 | Supply of broad money, % of GDP – X2 | Energy use (tons of oil equivalent per capita) – X3 | Urban population as % of total population – X4 | GDP per capita, ’000 USD – X5 |
|---|---|---|---|---|---|---|
| 1990 | 5,662020744 | 14,46 | 54,146 | 5,062 | 85,4 | 26,768 |
| 1991 | 5,719765048 | 14,806 | 53,369 | 4,928 | 85,4 | 26,496 |
| 1992 | 5,639817305 | 14,865 | 56,208 | 4,959 | 85,566 | 27,234 |
| 1993 | 5,597913126 | 15,277 | 56,61 | 5,148 | 85,748 | 28,082 |
| 1994 | 5,824685357 | 15,62 | 59,227 | 5,09 | 85,928 | 29,295 |
| 1995 | 5,929177604 | 15,895 | 60,519 | 5,129 | 86,106 | 30,489 |
| 1996 | 5,780817973 | 15,431 | 62,734 | 5,394 | 86,283 | 31,566 |
| 1997 | 5,860645225 | 15,259 | 63,981 | 5,47 | 86,504 | 32,709 |
| 1998 | 5,973528571 | 15,352 | 65,591 | 5,554 | 86,727 | 33,789 |
| 1999 | 6,139349354 | 15,086 | 69,539 | 5,61 | 86,947 | 35,139 |
| 2000 | 6,268129418 | 14,5 | 67,72 | 5,644 | 87,165 | 35,35 |
| 2001 | 6,531818805 | 14,041 | 70,382 | 5,447 | 87,378 | 36,297 |
| 2002 | 6,563073754 | 13,609 | 70,518 | 5,57 | 87,541 | 37,047 |
| 2003 | 6,677186947 | 13,398 | 74,818 | 5,569 | 87,695 | 38,302 |
| 2004 | 6,82834791 | 13,582 | 77,495 | 5,598 | 87,849 | 39,134 |
| 2005 | 6,99630318 | 13,737 | 78,556 | 5,564 | 88 | 39,914 |
| 2006 | 6,908872246 | 14,116 | 83,538 | 5,709 | 88,15 | 41,032 |
| 2007 | 6,932137612 | 14,025 | 90,679 | 5,868 | 88,298 | 42,022 |
| 2008 | 6,929395465 | 13,449 | 97,866 | 5,965 | 88,445 | 42,222 |
| 2009 | 7,039061961 | 13,698 | 94,542 | 5,863 | 88,59 | 41,616 |
| 2010 | 7,157467568 | 12,647 | 101,042 | 5,649 | 88,733 | 43,155 |
| 2011 | 7,291989544 | 12,489 | 100,349 | 5,638 | 88,875 | 43,716 |
| 2012 | 7,671605162 | 13,071 | 101,852 | 5,559 | 89,015 | 43,151 |
| 2013 | 7,891026044 | 13,455 | 106,347 | 5,586 | 89,153 | 43,238 |
| 2014 | 8,172929207 | 13,793 | 109,502 | 5,485 | 89,289 | 43,071 |
In his article, James Loy reports the cumulative error over 1500 iterations of training, with just four series of x-s, made of four observations. I do something else. I am interested in how the network works, step by step. I do step-by-step calculations with data from that table, following the algorithm I have just discussed. I do it in Excel, and I observe the way the network behaves. I can see that the hidden layer is really hidden, to the extent that it does not produce much in terms of meaningful information. What really spins is the output layer, thus, in fact, the connection between the hidden layer and the output. In the hidden layer, all the predicted sigmoid y^ are equal to 1, and their derivatives are automatically 0. Still, in the output layer, where the second random distribution of weights overlaps with the first one from the hidden layer, something starts moving. For some years, those output sigmoids demonstrate tiny differences from 1, and their derivatives become very small positive numbers. As a result, tiny, local (y_i – y^_i)*(y^_i)’ expressions are being generated in the output layer, and they modify the initial weights in the next round of training.
I observe the cumulative error (loss) in the first four iterations. In the first one it is 0,003138796, the second round brings 0,000100228, the third round displays 0,0000143, and the fourth one 0,005997739. Looks like an initial reduction of the cumulative error, by one order of magnitude or more at each iteration, and then, in the fourth round, it jumps up to the highest cumulative error of the four. I extend the number of those hand-driven iterations from four to six, and I keep feeding the network with random weights, again and again. A pattern emerges. The cumulative error oscillates. Sometimes the network drives it down, sometimes it swings it up.
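Anyone wanting to watch the loss round after round, without the manual Excel drudgery, can loop the class quoted earlier; a sketch, assuming the NeuralNetwork class, sigmoid() and sigmoid_derivative() from above are in scope, and using the toy data from James Loy’s article:

import numpy as np

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

nn = NeuralNetwork(X, y)
for i in range(6):
    nn.feedforward()
    nn.backprop()
    loss = np.sum((y - nn.output) ** 2)   # cumulative error of the round
    print(f"iteration {i + 1}: loss = {loss:.9f}")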
F**k! Pardon my French, but just six iterations of that algorithm show me that the thing is really intelligent. It generates an error, it drives it down to a lower value, and then, as if it was somehow dysfunctional to jump to conclusions that quickly, it generates a greater error in consecutive steps, as if it was considering more alternative options. I know that data scientists, should they read this, can slap their thighs at that elderly uncle (i.e. me), fascinated with how a neural network behaves. Still, for me, it is science. I take my data, I feed it into a machine that I see for the first time in my life, and I observe intelligent behaviour in something written on less than one page. It experiments with weights attributed to the stimuli I feed into it, and it evaluates its own error.
Now, I understand why that scientist from MIT, Lex Fridman, says that building artificial intelligence brings insights into how the human brain works.
I am back into blogging, after over two months of pause. This winter semester, I am going, probably, for a record workload in terms of classes: 630 hours in total. October and November look like immersion time, when I have had to get into gear for that amount of teaching. I noticed one thing that I haven’t exactly been aware of so far, or maybe not as distinctly as I am now: when I teach, I love freestyling about the topic at hand. Whatever hand of nice slides I prepare for a given class, you can bet on me going off the beaten track and into the wilderness of intellectual quest, like by mid-class. I mean, I have nothing against PowerPoint, but at some point it becomes just so limiting… I remember that conference, one year ago, when the projector went dead during my panel (i.e. during the panel when I was supposed to present my research). I remember that mixed, and shared, feeling of relief and enjoyment in the people present in the room: ‘Good. Finally, no slides. We can, like, really talk science’.
See? Once again, I am going off track, and that in just one paragraph of writing. You can see what I mean when I refer to me going off track in class. Anyway, I discovered one more thing about myself: freestyling and sailing uncharted intellectual waters has a cost, and this is a very clear and tangible biological cost. After a full day of teaching this way I feel as if my brain was telling me: ‘Look, bro. I know you would like to write a little, but sorry: no way. Them synapses are just tired. You need to give me a break’.
There is a third thing I have discovered about myself: that intense experience of teaching makes me think a lot. I cannot exactly put all this in writing on the spot, for want of fresh neurotransmitters available; still, all that thinking tends to crystallize over time, and with some patience I can access it later. Later means now, as it seems. I feel that I have crystallized enough, and I can start to pull it out into the daylight. The « it » consists, mostly, in a continuous reflection on collective intelligence. How are we (possibly) smart together?
As I have been thinking about it, three events combined and triggered in me a string of more specific questions. I watched another podcast featuring Jordan Peterson, whom I am a big fan of, and who raised the topic of the neurobiological context of meaning. How does our brain make meaning, and how does it find meaning in sensory experience? On the other hand, I have just finished writing the manuscript of an article on the energy-efficiency of national economies, which I have submitted to the ‘Energy Economics’ journal, and which, almost inevitably, made me work with numbers and statistics. As I was doing that empirical research, I found out something surprising: the most meaningful econometric results came to the surface when I transformed my original data into local coefficients of an exponential progression that hypothetically started in 1989. Long story short, these coefficients are essentially growth rates, which behave in a peculiar way, due to their arithmetical structure: they decrease very quickly over time, whatever the underlying raw empirical observation, as if they represented weakening shock waves sent by an explosion in 1989.
Different transformations of the same data, in that research of mine, produced different statistical meanings. I am still working up a real understanding of what that exactly means, by the way. As I was putting it together with Jordan Peterson’s thoughts on meaning as a biological process, I asked myself: what is the exact meaning of the fact that we, as a scientific community, assign meaning to statistics? How is it connected with collective intelligence?
I think I need to start more or less where Jordan Peterson moves, and ask ‘What is meaning?’. No, not quite. The ontological type, I mean the ‘What?’ type of question, is a mean beast. Something like a hydra: you cut the head, namely you explain the thing, you think that Bob’s your uncle, and a new head pops up, like out of nowhere, and it bites you, where you know. The ‘How?’ question is a bit more amenable. This one is like one of those husky dogs. Yes, it is semi wild, and yes, it can bite you, but once you tame it, and teach it to pull that sleigh, it will just pull. So I ask ‘How is meaning?’. How does meaning occur?
There is a particular type of being smart together, which I have been specifically interested in, for like the last two months. It is the game-based way of being collectively intelligent. The theory of games is a well-established basis for studying human behaviour, including that of whole social structures. As I was thinking about it, there is a deep reason for that. Social interactions are, well, interactions. It means that I do something and you do something, and those two somethings are supposed to make sense together. They really do, on one condition: my something needs to be somehow conditioned by how your something unfolds, and vice versa. When I do something, I come to a point where it becomes important for me to see your reaction to what I do, and only once I have seen it will I develop my action further.
Hence, I can study collective action (and interaction) as a sequence of moves in a game. I make my move, and I stop moving, for a moment, in order to see your move. You make yours, and it triggers a new move in me, and so the story goes further on in time. We can experience it very vividly in negotiations. With any experience in having serious talks with other people, thus when we negotiate something, we know that it is pretty counter-efficient to keep pushing our point in an unbroken stream of speech. It is much more functional to pace our strategy into separate strings of argumentation, and between them, we wait for what the other person says. I have already given a first theoretical go at the thing in « Couldn’t they have predicted that? ».
This type of social interaction, when we pace our actions into game-like moves, is a way of being smart together. We can come up with new solutions, or with the understanding of new problems – or a new understanding of old problems, as a matter of fact – and we can do it starting from positions of imperfect agreement and imperfect coordination. We try to make (apparently) divergent points, or we pursue (apparently) divergent goals, and still, if we accept to wait for each other’s reaction, we can coordinate and/or agree about those divergences, so as to actually figure out, and do, some useful s**t together.
What connection with the results of my quantitative research? Let’s imagine that we play a social game, and each of us makes their move, and then they wait for the moves of other players. The state of the game at any given moment can be represented as the outcome of past moves. The state of reality is like a brick wall, made of bricks laid one by one, and the state of that brick wall is the outcome of the past laying of bricks. In the general theory of science, it is called hysteresis. There is a mathematical function reputed to represent that thing quite nicely: the exponential progression. On a timeline, I define equal intervals. To each period of time, I assign a value y(t) = e^(t*a), where ‘t’ is the ordinal of the time period, ‘e’ is a mathematical constant, the base of the natural logarithm, e ≈ 2,7183, and ‘a’ is what we call the exponential coefficient.
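Extracting that local coefficient from data is a one-liner: if y(t) = e^(t*a), then a = ln(y)/t. Taking, for illustration, the first six Australian y-s from Table 1, with ‘t’ counted from 1989:

import math

series = [5.662020744, 5.719765048, 5.639817305, 5.597913126, 5.824685357, 5.929177604]
for t, y in enumerate(series, start=1):
    print(1989 + t, round(math.log(y) / t, 4))

The printed coefficients fall steeply from one year to the next, regardless of the nearly flat raw series: precisely the weakening-shock-wave behaviour I mentioned above.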
There is something else to that y(t) = e^(t*a) story. If we think in terms of a broader picture, and assume that time is essentially what we imagine it is, the ‘t’ part can be replaced by any number we imagine. Then, the Euler’s formula steps in: e^(i*x) = cos(x) + i*sin(x). If you paid attention in math classes at high school, you might remember that sine and cosine, the two trigonometric functions, have a peculiar property. As they refer to angles, at the end of the day they refer to a full circle of 360°. It means they go in a circle, thus in a cycle, only shifted in phase with respect to each other: when the sine reaches its extreme, the cosine crosses zero, and the other way round. We can think about each occurrence we experience – the ‘x’ – as a nexus of two, mutually opposing cycles, and they can be represented as, respectively, the sine, and the cosine of that occurrence ‘x’. When I grow in height (well, when I used to), my current height can be represented as the nexus of natural growth (sine), and natural depletion with age (cosine), that sort of thing.
Now, let’s suppose that we, as a society, play two different games about energy. One game makes us more energy efficient, ‘cause we know we should (see Settlement by energy – can renewable energies sustain our civilisation?). The other game makes us max out on our intake of energy from the environment (see Technological Change as Intelligent, Energy-Maximizing Adaptation). At any given point in time, the incremental change in our energy efficiency is the local equilibrium between those two games. Thus, if I take the natural logarithm of our energy efficiency at a given point in space-time, thus the coefficient of GDP per kg of oil equivalent in energy consumed, that natural logarithm is the outcome of those two games, or, from a slightly different point of view, it descends from the number of consecutive moves made (the ordinal of time period we are currently in), and from a local coefficient – the equivalent of ‘i’ in the Euler’s formula – which represents the pace of building up the outcomes of past moves in the game.
I go back to that ‘meaning’ thing. The consecutive steps ‘t’ in an exponential progression y(t) = e^(t*a) correspond to successive rounds of moves in the games we play. There is a core structure to observe: the length of what I call ‘one move’, which means a sequence of actions that each person involved in the interaction carries out without pausing and waiting for the reaction observable in other people in the game. When I say ‘length’, it involves a unit of measurement, and here, I am quite open. It can be a length of time, or the number of distinct actions in my sequence. The length of one move in the game determines the pace of the game, and this, in turn, sets the timeframe for the whole game to produce useful results: solutions, understandings, coordinated action etc.
Now, where the hell is any place for ‘meaning’ in all that game stuff? My view is the following: in social games, we sequence our actions into consecutive moves, with some waiting-for-reaction time in between, because we ascribe meaning to those sub-sequences that we define as ‘one move’. The way we process meaning matters for the way we play social games.
I am a scientist (well, I hope), and for me, meaning occurs very largely as I read what other people have figured out. So I stroll down the discursive avenue named ‘neurobiology of meaning’, welcomingly lit with the lampposts of Science Direct. I am calling by an article by Lee M. Pierson and Monroe Trout, entitled ‘What is consciousness for?’[1]. The authors formulate a general hypothesis, unfortunately not supported (yet?) with direct empirical check, that consciousness had been occurring, back in the day, I mean like really back in the day, as cognitive support of volitional movement, and evolved, since then, into more elaborate applications. Volitional movement is non-automatic, i.e. decisions have to be made in order for the movement to have any point. It requires quick assemblage of data on the current situation, and consciousness, i.e. the awareness of many abstract categories at the same time, could be the solution.
According to that approach, meaning occurs as a process of classification in the neurologically stored data that we need to use virtually simultaneously in order to do something as fundamental as reaching for another can of beer. Classification of data means grouping into sets. You have a random collection of data from sensory experience, like a homogenous cloud of information. You know, the kind you experience after a particularly eventful party. Some stronger experiences stick out: the touch of cold water on your naked skin, someone’s phone number written on your forearm with a lipstick etc. A question emerges: should you call this number? It might be your new girlfriend (i.e. the girlfriend whom you don’t consciously remember as your new one, but whom you’d better call back if you don’t want your car splashed with acid), or it might be a drug dealer whom you’d better not call back. You need to group the remaining data into functional sets so as to take the right action.
So you group, and the challenge is to make the right grouping. You need to collect the not-quite-clear-in-their-meaning pieces of information (Whose lipstick had that phone number been written with? Can I associate a face with the lipstick? For sure, the right face?). One grouping of data can lead you to a happy life, another one can lead you into deep s**t. It could be handy to sort of quickly test many alternative groupings as for their elementary coherence, i.e. hold all that data in front of you, for a moment, and contemplate flexibly many possible connections. Volitional movement is very much about that. You want to run? Good. It would be nice not to run into something that could hurt you, so it would be good to cover a set of sensory data, combining something present (what we see) with something we remember from the past (that thing on the 2 o’clock azimuth stings like hell), and sort of quickly turn and return all that information so as to steer clear of that cactus, as we run.
Thus, as I follow the path set by Pierson and Trout, meaning occurs as the grouping of data in functional categories, and it occurs when we need to do it quickly and sort of under pressure of getting into trouble. I am going onto the level of collective intelligence in human social structures. In those structures, meaning, i.e. the emergence of meaningful distinctions communicable between human beings and possible to formalize in language, would occur as said structures need to figure something out quickly and under uncertainty, and meaning would allow putting together the types of information that are normally compartmentalized and fragmented.
From that perspective, one meaningful move in a game encompasses small pieces of action which we intuitively guess we should immediately group together. Meaningful moves in social games are sequences of actions, which we feel like putting immediately back to back, without pausing and letting the other player do their thing. There is some sort of pressing immediacy in that grouping. We guess we just need to carry out those actions smoothly one after the other, in an unbroken sequence. Wedging an interval of waiting time in between those actions could put our whole strategy at peril, or we just think so.
When I apply this logic to energy efficiency, I think about business strategies regarding innovation in products and technologies. When we launch a new product, or implement a new technology, there is something like fixed patterns to follow. When you start beta testing a new mobile app, for example, you don’t stop in the middle of testing. You carry out the tests up to their planned schedule. When you start launching a new product (reminder: more products made on the same energy base mean greater energy efficiency), you keep launching until you reach some sort of conclusive outcome, like unequivocal success or failure. Social games we play around energy efficiency could very well be paced by this sort of business-strategy-based moves.
I pick up another article, this time by Friedemann Pulvermüller (2013[2]). The main thing I see right from the beginning is that neurology is progressively dropping the idea of one clearly localised area of the brain being in charge of semantics, i.e. of associating abstract signs with sensory data. What we are discovering is that semantics engage many areas of the brain in mutual connection. You can find developments on that issue in: Patterson et al. 2007[3], Bookheimer 2002[4], Price 2000[5], and Binder & Desai 2011[6]. As we use words, thus as we pronounce, hear, write or read them, that linguistic process directly engages (i.e. is directly correlated with the activation of) sensory and motor areas of our brain. That engagement follows multiple, yet recurrent patterns. In other words, instead of having one mechanism in charge of meaning, we operate several different ones.
After reviewing a large bundle of research, Pulvermüller proposes four different patterns: referential, combinatorial, emotional-affective, and abstract semantics. Each time, the semantic pattern consists in one particular area of the brain acting like a boss who wants to be debriefed about something from many sources, and who starts pulling together many synaptic strings connected to many places in the brain. Five different pieces of cortex recur as those boss-hubs, hungry for differentiated data, as we process words: the inferior frontal cortex (iFC, so far most commonly associated with the linguistic function), the superior temporal cortex (sTC), the inferior parietal cortex (iPC), the inferior and middle temporal cortex (m/iTC), and finally the anterior temporal cortex (aTC). The inferior frontal cortex (iFC) seems to engage in the processing of words related to action (walk, do etc.). The superior temporal cortex (sTC) looks seriously involved when words related to sounds are being used. The inferior parietal cortex (iPC) activates as words connect to space and spatio-temporal constructs. The inferior and middle temporal cortex (m/iTC) lights up when we process words connected to animals, tools, persons, colours, shapes, and emotions. That activation is category-specific, i.e. inside the m/iTC, different Christmas trees start blinking as different categories among those are named and referred to semantically. The anterior temporal cortex (aTC), interestingly, has not yet been associated with any specific type of semantic connection, and still, when it is damaged, semantic processing in our brain is generally impaired.
All those areas of the brain have other functions besides the semantic one, and generally speaking, the kind of meaning they process is correlated with the other things they do. The interesting insight, at this point, concerns the polyvalence of the cortical areas we call 'temporal' (strictly speaking, they are named after the temporal bone they sit under, yet they do engage in processing temporal sequence). Physicists insist very strongly that time is largely a semantic construct of ours, i.e. time is what we think there is rather than what really is, out there. In physics, what exists is rather a sequential structure of reality (things happen in an order) than what we call time. That review of literature by Pulvermüller indirectly indicates that time is a piece of meaning which we attach to sounds, colours, emotions, animals and people. Sounds come as logical: they are sequences of acoustic waves. On the other hand, how is our perception of colours, or of people, connected to our concept of time? This is a good question to ask, and a tough one to answer. What I would look for is recurrence. We identify persons as distinct ones as we interact with them recurrently. Autistic people frequently have that problem: when you put on a different jacket, they have a hard time accepting you are the same person. The identification of animals or emotions could follow the same logic.
The article discusses another interesting issue: the more abstract the meaning, the more different regions of the brain it engages. The really abstract ones, like 'beauty' or 'freedom', are super Christmas trees: they provoke involvement all over the place. When we do abstraction in our mind, for example when writing poetry (OK, just good poetry), we engage a substantial part of our brain. This is why we can be lost in our thoughts: those thoughts, when really abstract, are really energy-consuming, and they might require shutting down some other functions.
My personal understanding of the research reviewed by Pulvermüller is that at the neurological level, we process three essential types of meaning. One consists in finding our bearings in reality, thus in identifying the things and people around us, and in assigning emotions to them. It is something like a mapping function. Then, we need to do things, i.e. to take action, and that seems to be a distinct semantic function. Finally, we abstract, thus we connect distant parcels of data into something that has no direct counterpart either in the mapped reality or in our actions.
I have an indirect insight, too. We have a neural wiring, right? We generate meaning with that wiring, right? Now, how does adaptation occur in that scheme, over time? Do we just adapt the meaning we make to the neural hardware we have, or is there a reciprocal kick, I mean from meaning back to wiring? So far, neurological research has demonstrated that physical alteration in specific regions of the brain impacts semantic functions. Can it work the other way round, i.e. can recurrent change in the semantics we process alter the hardware we have between our ears? For example, as we process a lot of abstract concepts, like 'taxes' or 'interest rate', can our brains adapt from generation to generation so as to minimize the gradient of energy expenditure as we shift between levels of abstraction? If they could, we would become more intelligent, i.e. able to handle larger and more differentiated sets of data in a shorter time.
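To show what I mean by that gradient, here is a toy calculation with purely hypothetical numbers of my own: if the energy cost of processing scales with the number of cortical regions engaged (more regions for more abstract semantics, as per Pulvermüller's review), then the gradient is simply the cost gap we cross when shifting between levels.

```python
# Toy model of the 'gradient of energy expenditure' between levels of
# abstraction. The region counts and the unit cost are purely hypothetical.
COST_PER_REGION = 1.0  # hypothetical unit of metabolic cost

regions_engaged = {"concrete": 2, "intermediate": 5, "abstract": 12}

def shift_gradient(level_a: str, level_b: str) -> float:
    """Energy gap crossed when shifting between two levels of abstraction."""
    return abs(regions_engaged[level_a] - regions_engaged[level_b]) * COST_PER_REGION

print(shift_gradient("concrete", "abstract"))  # 10.0 units in this toy model
# Adaptation, in the sense discussed above, would mean flattening that gap
# over generations, e.g. engaging fewer extra regions for 'taxes'.
```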
How does all of this translate into collective intelligence? Firstly, there seem to be layers of such intelligence. We can be collectively smart sort of locally – and then we handle the more basic things, like group identity or networks of exchange – and we can (possibly) become collectively smarter at a more combinatorial level, handling more abstract issues, like multilateral peace treaties or climate change. Moreover, the gradient of energy consumed between the collective understanding of simple, basic things, on the one hand, and of the overarching abstract issues, on the other hand, could be a good predictor of the capacity of a given society to survive and thrive.
Once again, I am trying to associate this research in neurophysiology with my game-theoretical approach to energy markets. First of all, I recall the three theories of games co-awarded the Nobel prize in economics in 1994, namely those by John Nash, John (János) Harsanyi, and Reinhard Selten. I start with the latter. Reinhard Selten claimed, and seems to have proven, that social games have a memory, and the presence of such memory is needed for us to be able to learn collectively through social games. You know those situations of tough talks, when the other person (or you) keeps bringing forth the same argumentation over and over again? This is an example of a game without much memory, i.e. without much learning. In such a game we repeat the same move, like a fish banging its head against the glass wall of an aquarium. Playing without memory is possible in just some games, e.g. tennis, or poker if the opponent is not too tough. In other games, like chess, repeating the same move is not really possible. Such games force learning upon us.
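To illustrate that point in the simplest possible terms, here is a little simulation of mine, not Selten's formalism: a repeated game of matching pennies where one player has no memory and just repeats the same move, whilst the other remembers the history of play and best-responds to it.

```python
# A toy sketch: memory enables learning in repeated games. Hypothetical
# setup: matching pennies, a memoryless player vs a player with memory.
import random

random.seed(7)
ROUNDS = 1000

def memoryless_move() -> str:
    """A player without memory: the fish banging its head against the glass."""
    return "heads"

history: list[str] = []   # the learner's memory of the opponent's past moves
learner_score = 0

for _ in range(ROUNDS):
    opponent = memoryless_move()
    # The learner predicts the opponent's most frequent past move...
    guess = max(set(history), key=history.count) if history else random.choice(["heads", "tails"])
    # ...and plays to match it (in matching pennies, the matcher wins)
    learner_score += 1 if guess == opponent else -1
    history.append(opponent)

print(f"Learner's score after {ROUNDS} rounds: {learner_score}")
# With memory, the learner exploits the repetitive player almost perfectly;
# erase the history every round, and the score collapses towards zero.
```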
Active use of memory requires combinatorial meaning. We need to know what is meaningful in order to remember it as meaningful, and thus to treat it as valuable data for learning. The more combinatorial meaning is, inside a supposedly intelligent structure such as our brain, the more energy-consuming that meaning is. Games played with memory and active learning could thus be more energy-consuming for our collective intelligence than games played without. Maybe that whole thing of electronics and digital technologies, so hungry for energy, is a device that we, the collective human intelligence, have put in place in order to learn more efficiently through our social games?
I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful if you tell me two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?
[1] Pierson, L. M., & Trout, M. (2017). What is consciousness for? New Ideas in Psychology, 47, 62–71.
[2] Pulvermüller, F. (2013). How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in Cognitive Sciences, 17(9), 458–470.
[3] Patterson, K., et al. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8, 976–987.
[4] Bookheimer, S. (2002). Functional MRI of language: new approaches to understanding the cortical organization of semantic processing. Annual Review of Neuroscience, 25, 151–188.
[5] Price, C. J. (2000). The anatomy of language: contributions from functional neuroimaging. Journal of Anatomy, 197, 335–359.
[6] Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. Trends in Cognitive Sciences, 15, 527–536.