It is important to re-assume the meaning

It is Christmas 2020, late in the morning. I am thinking, sort of deeply. Making resolutions for the coming year is a dysfunctional end-of-year tradition: we obviously don’t hold to them long enough to see them bring anything substantial. Yet it is a good thing to pass the whole passing year in review, distinguish my own f**k-ups from my valuable actions, and use it all as learning material for the coming year.

What I have been doing consistently for the past year is learning new stuff: investment in the stock market, distance teaching amidst epidemic restrictions, doing research on collective intelligence in human societies, managing research projects, programming, and training consistently while fasting. Finally, and sort of overarchingly, I have learnt the power of learning by solving specific problems and writing about myself mixing successes and failures as I am learning.

Yes, it is precisely the kind of thing you can expect in what we tend to label as girls’ readings, sort of ‘My dear journal, here is what happened today…’. I keep my dear journal focused mostly on my, broadly speaking, professional development. For me, though, professional development combines with personal development. I discovered that when I want to achieve some kind of professional success, be it academic or business, I need to add a few new arrows to my personal quiver.

Investing in the stock market and training while fasting are, I think, what I have had the most complete cycle of learning with. Strange combination? Indeed, a strange one, with a surprising common denominator: the capacity to control my emotions, to recognize my cognitive limitations, and to acknowledge the payoff from both. Financial decisions should be cold and calculated. Yes, they should, and sometimes they are, but here comes a big discovery of mine: when I start putting my own money into investment positions in the stock market, emotions flare in me so strongly that I experience something like tunnel vision. What looked like perfectly rational inference from numbers, just minutes ago, now suddenly looks like a jungle, with both game and tigers in it. The strongest emotion of all, at least in my case, is the fear of loss, and not the greed for gain. Yes, it goes against a common stereotype, and yet it is true. Moreover, I discovered that properly acknowledged and controlled, the fear of loss is a great emotional driver for good investment decisions, and, as a matter of fact, it is much better an emotional driver than avidity for gain. I know that I am well off when I keep the latter sort of weak and shy, expecting gains rather than longing for them, if you catch my drift.

Here comes the concept of good investment decisions. As this year 2020 comes to an end, my return on cash invested over the course of the year is 30% and a little change. Not bad at all, compared to a bank deposit (+1.5%) or to sovereign bonds (+4.5% max). I am wrapping my mind around the second most fundamental question about my investment decisions this year – after, of course, the question about return on investment – and that second question is ontological: what have my investment decisions actually been? What has been their substance? The most general answer is: tolerable complexity with intuitive hedging and a pinch of greed. Complexity means that I have progressively passed from the otherwise naïve expectation of one perfect hit to a portfolio of investment positions. Thinking intuitively in terms of a portfolio has taught me a just as intuitive approach to hedging my risks. Now, when I open one investment position, I already think about another possible one, either to reinforce my glide on the wave crest I intend to ride, or to compensate for the risks contingent on seeing my ass glide off and down from said wave crest.

That portfolio thinking of mine happens in layers, sort of. I have a portfolio of industries, and that seems to be the basic structuring layer of my decisions. I think I can call myself a mid-term investor. I have learnt to spot and utilise mid-term trends of interest that investors in the stock market attach to particular industries. I noticed there are cyclical fashion seasons in the stock market, in that respect. There is a cyclically recurrent biotech season, due to the pandemic. There is just as cyclical a fashion for digital tech, and another one for renewable energies (photovoltaic, in particular). Inside the digital tech, there are smaller waves of popularity as regards the gaming business, others connected to FinTech etc.

Cyclicality means that prices of stock in those industries grow for some time, ranging, in my experience, from 2 to 13 weeks. Riding those waves means jumping on and off at the right moments. The right moment for jumping on is as early as possible after the trend starts to ascend, and the right moment for jumping off is just as early as possible after the trend shows signs of durable descent.
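Just to give that ‘jump on / jump off’ rule a minimal numeric form: this is not my actual decision method, only an illustrative sketch on a hypothetical price series, where a short moving average crossing above a longer one stands for ‘the trend starts to ascend’, and crossing back below stands for ‘signs of descent’ (the longer window being a crude stand-in for ‘durable’):

```python
# A purely illustrative 'jump on / jump off' sketch on hypothetical prices.
# This is NOT a trading system - just a way to formalize the two moments.
import pandas as pd

prices = pd.Series([100, 101, 103, 106, 110, 113, 112, 111, 108, 105, 102, 100])

short_ma = prices.rolling(window=3).mean()   # fast view of the trend
long_ma = prices.rolling(window=6).mean()    # slower, more 'durable' view

# Jump on when the fast average rises above the slow one; jump off when it falls below.
signal = (short_ma > long_ma).astype(int)    # 1 = hold the position, 0 = stay out
print(signal.tolist())                       # [0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
```

The window lengths here are arbitrary; in real life, choosing them is exactly where the ‘durable part is tricky’ problem hides.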

The ‘durable’ part is tricky, mind you. I saw many episodes, and during some of them I shamefully yielded to short-termist panic, when the trend curved down just for a few days before rocketing up again. Those episodes show well what it means, in practical terms, to face ‘technical factors’. The stock market is like an ocean. There are spots of particular fertility, and big predators tend to flock just there. In the stock market, just as in the ocean, you have bloody big sharks swimming around, and you’d better hold on when they start feeding, ‘cause they feed just as real sharks do: they hit quickly, cause abundant bleeding, and then just wait until their prey bleeds out enough to be defenceless.

When I see, for example, a company like the German BioNTech suddenly losing value in the stock market, whilst the very vaccine they ganged up with Pfizer to make is being distributed across the world, I am like: ‘Wait a minute! Why would the stock price of a super-successful, highly innovative business fall just at the moment when they are starting to consume the economic fruit of their innovation?’. The only explanation is that sharks are hunting. Your typical stock market shark hunts in a disgusting way: by eating, vomiting, and then eating its vomit back with a surplus. It bites a big chunk of a given stock, chews it for a moment, spits it out quickly, which pushes the price down a bit, then eats back its own vomit of stock, with a tiny surplus acquired at the previously down-driven price, and then it repeats. Why wouldn’t it repeat, as long as the thing works?

My personal philosophy, which, unfortunately, sometimes I deviate from when my emotions prevail, is just to sit and wait until those big sharks end their feeding cycle. This is another useful thing to know about big predators in the stock market: they hunt similarly to big predators in nature. They have a feeding cycle. When they have killed and consumed a big prey, they rest, as they are both replete with eating and down on energy. They need to rebuild their capital base.      

My reading of the stock market is that those waves of financial interest in particular industries are based on expectations as for real business cycles going on out there. Of course, in the stock market, there is always the phenomenon of subsidiary interest: I invest in companies which I expect other investors to invest in as well, and, consequently, whose stock price I expect to grow. Still, investors in the stock market are much more oriented towards fundamental business cycles than non-financial people think. When I invest in the stock of a company, and I know for a fact that many other investors think the same, I expect that company to do something constructive with my trust. I want to see those CEOs take bold decisions as for real investment in technological assets. When they really do so, I stay with them, i.e. I hold their stock. This is why I keep holding the stock of Tesla even amidst episodes of wild swings in its price. I simply know Elon Musk will always come up with things which, for him, are business concepts, and which, for common mortals, are science fiction. If, on the other hand, I see those CEOs just sitting and gleaning benefits from trading their preferential shares, I leave.

Here I connect to another thing I started to learn during 2020: managing research projects. At my university, I have been assigned this specific job, and I discovered something which I did not expect: there is more money than ideas out there. There is, actually, plenty of capital available from different sources to finance innovative science. The tricky part is to translate innovative ideas into an intelligible, communicable form, and then into projects able to convince people with money. The ‘translating’ part is surprisingly complex. I can see many sparse, sort of semi-autonomous ideas in different people, and I still struggle with putting those people together into some sort of team or, failing a team, into a network, and with making them mix their respective ideas into one big, articulate concept. I have been reading for years about managing R&D in corporate structures, about how complex and artful it is to manage R&D efficiently, and now I am experiencing it in real life. An interesting aspect of that is the writing of preliminary contracts, the so-called ‘Non-Disclosure Agreements’ AKA NDAs, the signature of which is sort of a trigger for starting serious networking between the different agents of an R&D project.

As I am wrapping my mind around those questions, I meditate over the words written by Joseph Schumpeter in his Business Cycles: “Whenever a new production function has been set up successfully and the trade beholds the new thing done and its major problems solved, it becomes much easier for other people to do the same thing and even to improve upon it. In fact, they are driven to copying it if they can, and some people will do so forthwith. It should be observed that it becomes easier not only to do the same thing, but also to do similar things in similar lines—either subsidiary or competitive ones—while certain innovations, such as the steam engine, directly affect a wide variety of industries. This seems to offer perfectly simple and realistic interpretations of two outstanding facts of observation: First, that innovations do not remain isolated events, and are not evenly distributed in time, but that on the contrary they tend to cluster, to come about in bunches, simply because first some, and then most, firms follow in the wake of successful innovation; second, that innovations are not at any time distributed over the whole economic system at random, but tend to concentrate in certain sectors and their surroundings”. (Business Cycles, Chapter III: How the Economic System Generates Evolution, The Theory of Innovation). In the spring, when the pandemic was deploying its wings for the first time, I had a strong feeling that medicine and biotechnology would be the name of the game in technological change for at least a few years to come. Now, as strange as it seems, I have a vivid confirmation of that in my work at the university. Conceptual balls which I receive, and which I do my best to play out further into the field, come almost exclusively from the faculty of medical sciences. Coincidence? Go figure…

I am developing along two other avenues: my research on cities, and my learning of programming in Python. I have been doing research on cities as manifestations of collective intelligence, and I have been doing it for a while. See, for example, ‘Demographic anomalies – the puzzle of urban density’ or ‘The knowingly healthy people’. As I have been digging down this rabbit hole, I have created a database, which, for working purposes, I call ‘DU_DG’. DU_DG is a coefficient of relative density in population, which I came up with one day and which keeps puzzling me. Just to announce the colour, as we say in Poland when playing cards: ‘DU’ stands for the density of urban population, and ‘DG’ is the general density of population. The ‘DU_DG’ coefficient is the ratio of the two, namely DU/DG, or, in other words, the density of urban population denominated in units of the general density of population. In still other words, if we take the density of population as a fundamental metric of human social structures, the DU_DG coefficient tells how much denser the urban population is, as compared to the mean density, rural settlements included.
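To make that definition concrete, here is the coefficient in its bare form, with made-up numbers (the two densities below are hypothetical, not taken from my database):

```python
# The DU_DG coefficient in its bare form: urban density divided by
# general density of population. Both inputs here are hypothetical.
du = 1_050.0   # density of urban population, people per km2
dg = 123.0     # general density of population, people per km2

du_dg = du / dg
print(round(du_dg, 2))   # ~8.54: urban settlements ~8.5 times denser than the average
```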

I want to work through my DU_DG database in order both to practice my programming skills and to reassess the main axes of my research on the collective intelligence of cities. I open JupyterLab from my Anaconda panel, and I create a new Notebook with Python 3 as its kernel. I prepare my dataset. Just in case, I make two versions: one in Excel, another one in CSV. I replace decimal commas with decimal points; I know by experience that Python has issues with commas. In human lingo, a comma is a short pause for taking like half a breath before we utter the rest of the sentence. From there, we took the comma into maths, as a decimal separator. In Python, as in finance, the decimal separator is the point, and the comma is a separator of values.
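As it happens, pandas can also do the comma-to-point conversion at read time, which spares the manual find-and-replace. A minimal sketch, with a tiny in-memory CSV standing in for my actual file (the semicolon separator is a guess at a typical Polish-locale export):

```python
# Reading a CSV that uses decimal commas: pandas' decimal= parameter
# converts them on the fly. The data and separator here are hypothetical.
import io
import pandas as pd

raw = "Country;Year;DU_DG\nPoland;2018;14,2\nFrance;2018;9,7\n"
df = pd.read_csv(io.StringIO(raw), sep=";", decimal=",")
print(df["DU_DG"].tolist())   # [14.2, 9.7] - commas parsed as decimal points
```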

Anyway, I have that notebook in JupyterLab, and I start by piling up what I think I will need in terms of libraries:

>> import numpy as np

>> import pandas as pd

>> import os

>> import math

I place my database in the root directory of my user profile, which is, by default, the working directory of Anaconda, and I check if my database is visible for Python:

>> os.listdir()

It is there, in both versions, Excel and CSV. I start with reading from Excel:

>> DU_DG_Excel=pd.DataFrame(pd.read_excel('Dataset For Perceptron.xlsx', header=0))

I check with 'DU_DG_Excel.info()'. I get:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1155 entries, 0 to 1154
Data columns (total 10 columns):
 #   Column                                        Non-Null Count  Dtype
---  ------                                        --------------  -----
 0   Country                                       1155 non-null   object
 1   Year                                          1155 non-null   int64
 2   DU_DG                                         1155 non-null   float64
 3   Population                                    1155 non-null   int64
 4   GDP (constant 2010 US$)                       1042 non-null   float64
 5   Broad money (% of GDP)                        1006 non-null   float64
 6   urban population absolute                     1155 non-null   float64
 7   Energy use (kg of oil equivalent per capita)  985 non-null    float64
 8   agricultural land km2                         1124 non-null   float64
 9   Cereal yield (kg per hectare)                 1124 non-null   float64
dtypes: float64(7), int64(2), object(1)
memory usage: 90.4+ KB

Cool. Exactly what I wanted. Now, if I want to use this database as a simulator of collective intelligence in human societies, I need to assume that each separate ‘country <> year’ observation is a distinct local instance of an overarching intelligent structure. My so-far experience with programming opens up on a range of actions that structure is supposed to perform. It is supposed to differentiate itself into the desired outcomes, on the one hand, and, on the other hand, the instrumental epistatic traits manipulated and adjusted in order to achieve those outcomes.
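One simple way to encode that ‘each country <> year observation is a distinct local instance’ assumption in pandas is to index the frame on the (Country, Year) pair. The column names follow the info() printout above; the few rows of data here are made up for illustration:

```python
# Treat each (Country, Year) pair as one local instance of the structure
# by making it the index of the frame. The values below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "Country": ["Poland", "Poland", "France"],
    "Year": [2017, 2018, 2018],
    "DU_DG": [13.9, 14.2, 9.7],
})

instances = df.set_index(["Country", "Year"])
# Each row is now addressable as one local instance:
print(instances.loc[("Poland", 2018), "DU_DG"])   # 14.2
```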

As I pass in review my past research on the topic, a few big manifestations of collective intelligence in cities come to my mind. The creation and development of cities as purposeful demographic anomalies is the first manifestation. This is an otherwise old problem in economics. Basically, people, and the resources they use, should be disposed evenly over the territory those people occupy, and yet they aren’t. Even with a correction taken for physical conditions, such as mountains or deserts, we seem to like forming demographic anomalies on the landmass of Earth. Those anomalies have one obvious outcome, i.e. the delicate balance between urban land and agricultural land, which is a balance between dense agglomerations generating new social roles due to abundant social interactions, on the one hand, and the local food base for the people taking up those roles, on the other. The actual difference between cities and the surrounding countryside, in terms of social density, is very idiosyncratic across the globe and seems to be another aspect of intelligent collective adaptation.

Mankind is becoming more and more urbanized, i.e. a consistently growing percentage of people live in cities (World Bank 1[1]). In 2007 – 2008, the coefficient of urbanization topped 50% and has kept progressing since then. As there are more and more of us, humans, on the planet, we concentrate more and more in urban areas. That process defies preconceived ideas about land use. A commonly used narrative is that cities keep growing out into their once-non-urban surroundings, which is frequently confirmed by anecdotal, local evidence of particular cities effectively sprawling into the neighbouring rural land. Still, as data based on satellite imagery is brought up, and as the total urban land area on Earth is measured as the total surface of peculiar agglomerations of man-made structures and night-time lights, that total area seems to be stationary, or, at least, to have been stationary for the last 30 years (World Bank 2[2]). The geographical distribution of urban land over the entire landmass of Earth does change, yet the total seems to be pretty constant. In parallel, the total surface of agricultural land on Earth has been growing, although at a pace far from steady and predictable (World Bank 3[3]).

There is a theory implied in the above-cited methodology of measuring urban land based on satellite imagery. Cities can be seen as demographic anomalies with a social purpose, just as Fernand Braudel used to state it (Braudel 1985[4]): ‘Towns are like electric transformers. They increase tension, accelerate the rhythm of exchange and constantly recharge human life. […]. Towns, cities, are turning-points, watersheds of human history. […]. The town […] is a demographic anomaly’. The basic theoretical thread of this article consists in viewing cities as complex technologies, for one, and in studying their transformations as a case of technological change. Logically, this is a case of technological change occurring by agglomeration and recombination. Cities can be studied as demographic anomalies with the specific purpose to accommodate a growing population with a just as rapidly expanding catalogue of new social roles, possible to structure into non-violent hierarchies. That path of thinking is present, for example, in the now classical work by Arnold Toynbee (Toynbee 1946[5]), and in the even more classical take by Adam Smith (Smith 1763[6]). Cities can literally work as factories of new social roles, due to intense social interactions. The greater the density of population, the greater the likelihood of both new agglomerations of technologies being built and new, adjacent social roles emerging. A good example of that special urban function is the interaction inside age groups. Historically, cities have allowed much more abundant interactions among young people (under the age of 25) than rural environments have. That, in turn, favours the emergence of social roles based on the typically adolescent, high appetite for risk and immediate rewards (see for example: Steinberg 2008[7]).
Recent developments in neuroscience, on the other hand, allow assuming that abundant social interactions in the urban environment have a deep impact on the neuroplastic change in our brains, and even on the phenotypical expression of human DNA (Ehninger et al. 2008[8]; Bavelier et al. 2010[9]; Day & Sweatt 2011[10]; Sweatt 2013[11]).

At the bottom line of all those theoretical perspectives, cities are quantitatively different from the countryside by their abnormal density of population. Throughout this article, the acronymic symbol [DU/DG] is used to designate the density of urban population denominated in units of (i.e. divided by) the general density of population. It is computed from published data, by combining the above-cited coefficient of urbanization (World Bank 1) with the headcount of population (World Bank 4[12]) and with the surface of urban land (World Bank 2). The general density of population is taken straight from official statistics (World Bank 5[13]).
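The computation just described can be sketched in a few lines. The numbers below are made up, not actual World Bank figures; only the arithmetic follows the definition: urban density DU is urban headcount (urbanization share times population) divided by urban land, and [DU/DG] is DU divided by the general density:

```python
# A sketch of the [DU/DG] computation, with hypothetical inputs.
population = 38_000_000          # headcount of population (World Bank 4)
urbanization = 0.60              # share of people living in cities (World Bank 1)
urban_land_km2 = 22_000          # satellite-based surface of urban land (World Bank 2)
general_density = 123.0          # people per km2, whole territory (World Bank 5)

du = (urbanization * population) / urban_land_km2   # urban density, people per km2
du_dg = du / general_density                        # urban density in units of general density
print(round(du_dg, 2))                              # 8.43 with these made-up numbers
```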

The [DU/DG] coefficient stays in the theoretical perspective of cities as demographic anomalies with a purpose, and it can be considered a measure of the social difference between cities and the countryside. It displays intriguing quantitative properties. Whilst growing steadily over time at the globally aggregate level, from 11.9 in 1961 to 19.3 in 2018, it displays significant disparity across space. Such countries as Mauritania or Somalia display [DU/DG] > 600, whilst the United Kingdom or Switzerland are barely above [DU/DG] = 3. In the 13 smallest national entities in the world, such as Tonga, Puerto Rico or Grenada, [DU/DG] falls below 1. In other words, in those ultra-small national structures, the method of assessing urban space by satellite-imagery-based agglomeration of night-time lights fails utterly. These communities display a peculiar, categorially idiosyncratic spatial pattern of settlement. The cross-sectional variability of [DU/DG] (i.e. its standard deviation across space divided by its cross-sectional mean value) reaches 8.62, and yet some 70% of mankind lives in countries ranging across the 12.84 ≤ [DU/DG] ≤ 23.5 interval.
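That cross-sectional variability is simply a coefficient of variation: the standard deviation across countries divided by the cross-sectional mean. A minimal sketch, on a handful of made-up country values (not the actual dataset), shows how a few extreme outliers inflate it:

```python
# Coefficient of variation of [DU/DG] across a hypothetical cross-section
# of countries: standard deviation divided by the mean.
import numpy as np

du_dg_across_countries = np.array([3.1, 14.2, 19.3, 23.5, 612.0])
cv = du_dg_across_countries.std() / du_dg_across_countries.mean()
print(round(cv, 2))   # well above 1, driven almost entirely by the outlier
```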

Correlations which the [DU/DG] coefficient displays at the globally aggregate level (i.e. at the scale of the whole planet) are even more puzzling. When benchmarked against the global real output in constant units of value (World Bank 6[14]), the time series of aggregate, global [DU/DG] displays a Pearson correlation of r = 0.9967. On the other hand, the same type of Pearson correlation with the relative supply of money to the global economy (World Bank 7[15]) yields r = 0.9761. As the [DU/DG] coefficient is supposed to represent the relative social difference between cities and the countryside, a look at the latter is instructive. The [DU/DG] Pearson-correlates with the global area of agricultural land (World Bank 8[16]) at r = 0.9271, and with the average global yield of cereals, in kg per hectare (World Bank 9[17]), at r = 0.9858. Those strong correlations of the [DU/DG] coefficient with metrics pertinent to the global food base match its correlation with the energy base. When Pearson-correlated with the global average consumption of energy per capita (World Bank 10[18]), [DU/DG] proves significantly covariant, at r = 0.9585. All that kept in mind, it is probably not that much of a surprise to see the global aggregate [DU/DG] Pearson-correlated with the global headcount of population (World Bank 11[19]) at r = 0.9954.
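All of these are plain Pearson coefficients between two annual global time series. A minimal sketch of the computation, on made-up series (np.corrcoef returns the full correlation matrix, so we pick the off-diagonal entry):

```python
# Pearson correlation between two hypothetical global annual series:
# aggregate [DU/DG] and real output in arbitrary constant-dollar units.
import numpy as np

du_dg_global = np.array([11.9, 13.0, 14.8, 16.5, 19.3])
gdp_global = np.array([1.0, 1.4, 2.1, 2.9, 4.0])

r = np.corrcoef(du_dg_global, gdp_global)[0, 1]
print(round(r, 4))   # two co-trending series yield r very close to 1
```

This also hints at why such high r values call for caution: any two series that both trend upward over time will Pearson-correlate strongly, regardless of any causal link.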

It is important to re-assume the meaning of the [DU/DG] coefficient. This is essentially a metric of density in population, and density has abundant ramifications, so to say. The more people live per 1 km2, the more social interactions occur on the same square kilometre. Social interactions mean a lot. They mean learning by civilized rivalry. They mean transactions and markets as well. The greater the density of population, the greater the probability of new skills emerging, which possibly translates into new social roles, new types of business and new technologies. When two types of human settlements coexist, displaying very different densities of population, i.e. type A being many times denser than type B, type A is like a factory of patterns (new social roles and new markets), whilst type B is the supplier of raw resources. The progressively growing global average [DU/DG] means that, at the scale of the human civilization, that polarity of social functions accentuates.

The [DU/DG] coefficient bears strong marks of a statistical stunt. It is based on the truly risky assumption, advanced implicitly through the World Bank’s data, that the total surface of urban land on Earth has remained constant, at least over the last 3 decades. Moreover, denominating the density of urban population in units of the general density of population was purely intuitive on the author’s part, and, as a matter of fact, other meaningful denominators can easily come to one’s mind. Still, with all that wobbly theoretical foundation, the [DU/DG] coefficient seems to inform about a significant, structural aspect of human societies. The Pearson correlations which the global aggregate of that coefficient yields with the fundamental metrics of the global economy are of an almost uncanny strength for social sciences, especially when set against the strong cross-sectional disparity in the [DU/DG].

The relative social difference between cities and the countryside, measurable with the gauge of the [DU/DG] coefficient, seems to be a strongly idiosyncratic adaptative mechanism in human societies, and this mechanism seems to be correlated with quantitative growth in population, real output, production of food, and the consumption of energy. That could be a manifestation of tacit coordination, where a growing human population triggers an increasing pace of emergence in new social roles by stimulating urban density. As regards energy, the global correlation between the increasing [DU/DG] coefficient and the average consumption of energy per capita interestingly connects with a stream of research which postulates intelligent collective adaptation of human societies to the existing energy base, including intelligent spatial re-allocation of energy production and consumption (Leonard, Robertson 1997[20]; Robson, Wood 2008[21]; Russon 2010[22]; Wasniewski 2017[23], 2020[24]; Andreoni 2017[25]; Heun et al. 2018[26]; Velasco-Fernández et al 2018[27]).

It is interesting to investigate how smart human societies are in shaping their idiosyncratic social difference between cities and the countryside. This specific path of research is being pursued, further in this article, through the verification and exploration of the following working hypothesis: ‘The collective intelligence of human societies optimizes social interactions with a view to maximizing the absorption of energy from the environment’.

[1] World Bank 1:

[2] World Bank 2:

[3] World Bank 3:

[4] Braudel, F. (1985). Civilisation and Capitalism 15th and 18th Century–Vol. I: The Structures of Everyday Life, Translated by S. Reynolds, Collins, London, pp. 479 – 482

[5] Royal Institute of International Affairs, Somervell, D. C., & Toynbee, A. (1946). A Study of History. By Arnold J. Toynbee… Abridgement of Volumes I-VI (VII-X.) by DC Somervell. Oxford University Press., Section 3: The Growths of Civilizations, Chapter X.

[6] Smith, A. (1763-1896). Lectures on justice, police, revenue and arms. Delivered in the University of Glasgow in 1763, published by Clarendon Press in 1896, pp. 9 – 20

[7] Steinberg, L. (2008). A social neuroscience perspective on adolescent risk-taking. Developmental review, 28(1), 78-106.

[8] Ehninger, D., Li, W., Fox, K., Stryker, M. P., & Silva, A. J. (2008). Reversing neurodevelopmental disorders in adults. Neuron, 60(6), 950-960.

[9] Bavelier, D., Levi, D. M., Li, R. W., Dan, Y., & Hensch, T. K. (2010). Removing brakes on adult brain plasticity: from molecular to behavioral interventions. Journal of Neuroscience, 30(45), 14964-14971.

[10] Day, J. J., & Sweatt, J. D. (2011). Epigenetic mechanisms in cognition. Neuron, 70(5), 813-829.

[11] Sweatt, J. D. (2013). The emerging field of neuroepigenetics. Neuron, 80(3), 624-632.

[12] World Bank 4:

[13] World Bank 5:

[14] World Bank 6:

[15] World Bank 7:

[16] World Bank 8:

[17] World Bank 9:

[18] World Bank 10:

[19] World Bank 11:

[20] Leonard, W.R., and Robertson, M.L. (1997). Comparative primate energetics and hominoid evolution. Am. J. Phys. Anthropol. 102, 265–281.

[21] Robson, S.L., and Wood, B. (2008). Hominin life history: reconstruction and evolution. J. Anat. 212, 394–425

[22] Russon, A. E. (2010). Life history: the energy-efficient orangutan. Current Biology, 20(22), pp. 981- 983.

[23] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation. Energy-Maximizing Adaptation (August 30, 2017).

[24] Wasniewski, K. (2020). Energy efficiency as manifestation of collective intelligence in human societies. Energy, 191, 116500.

[25] Andreoni, V. (2017). Energy Metabolism of 28 World Countries: A Multi-scale Integrated Analysis. Ecological Economics, 142, 56-69

[26] Heun, M. K., Owen, A., & Brockway, P. E. (2018). A physical supply-use table framework for energy analysis on the energy conversion chain. Applied Energy, 226, 1134-1162

[27] Velasco-Fernández, R., Giampietro, M., & Bukkens, S. G. (2018). Analyzing the energy performance of manufacturing across levels using the end-use matrix. Energy, 161, 559-572


I am considering the idea of making my students – at least some of them – into an innovative task force, in order to develop new technologies and/or new businesses. My essential logic is that I teach social sciences, under various possible angles, and the best way of learning is by trial and error. We learn the most when we experiment with many alternative versions of ourselves and select the version which seems the fittest, regarding the values and goals we pursue. Logically, when I want my students to learn social sciences, like really learn, the first step is to make them experiment with the social roles they currently have and to make many alternative versions thereof. You are 100% student at the starting point, and now you try to figure out what it is like to be 80% student and 20% innovator, or 50% student and 50% innovator, etc. What are your values? Well, as it comes to learning, I advise assuming that the best learning occurs when we get out of our comfort zone but keep the door open for returning there. I believe it can be qualified as a flow state. You should look for situations when you feel a bit awkward, and the whole thing sucks a bit because you feel you do not have all the skills the situation calls for, and still you can see a clear path of passage between your normal comfort zone and that specific state of constructive suck.

Thus, when I experiment with many alternative versions of myself without being afraid of losing my identity, i.e. when I behave like an intelligent structure, the most valuable versions of myself, as far as learning is concerned, are those which push me slightly out of my comfort zone. When you want to learn social sciences, you look for those alternative versions of yourself which are a bit uncomfortably involved in that whole social thing around you. That controlled, uncomfortable involvement makes you learn faster and deeper.

The second important thing I know about learning is that I learn faster and deeper when I write and talk about what I am learning and how I am learning it. I have just experienced that process of accelerated figuring my s**t out as regards investment in the stock market. I started by the end of January 2020 (see ‘Back in the game’ or ‘Fathom the outcomes’) and, with a bit of obsessive self-narration, I went from not really knowing what I was doing, and barely controlling my emotions, to a portfolio of some 20 investment positions, capable of bringing me at least 10% a month in terms of return on capital (see ‘Fire and ice. A real-life business case’).

Thus, consistently getting out of your comfort zone just enough to feel a bit of suck, and then writing about your own experience in that place: that whole thing has a hell of a propulsive power. You can really burn the (existential) rubber, under just one condition: the ‘consistently’ part. Being relentless in making small everyday steps is the third ingredient of that concoction. We learn by forming habits. Daily repetition of experimenting in the zone of gentle suck gets you used to that experimentation, and once you are used to it, well, man, you have the turbo boost on in your existence.

This is precisely what I intend to talk my students into: experimenting outside of their comfort zone, with a bit of uncomfortably stimulating social involvement in the development of an innovative business concept. The type of innovation I am thinking about is some kind of digital technology or digital product, and I want to start the exploration by rummaging a little bit in the investor-relations sites of publicly listed companies, just to see what they are up to and to find some good benchmarks for business modelling. I start with one of the T-Rexes of the industry, namely with Microsoft ( ). As I like going straight for the kill, I dive into the section of SEC filings ( ), and there, a pleasant surprise awaits: they end their fiscal year by the end of June, them people at Microsoft, and thus I have their annual report for the fiscal year 2020 ready and available even before the calendar year 2020 is over. You can download the report from their site or from my archives: .

As I grab my machete and my camera and cut myself a path through that document, I develop a general impression that digital business is moving more and more towards big data and big server power, rather than programming strictly speaking. I allow myself to source directly from that annual report the table from page 39, with segment results. You can see it here below:

Intelligent Cloud, i.e. Microsoft Azure ( ), seems to be the most dynamic segment in their business. In other words, a lot of data combined with a lot of server power, and with artificial neural networks to extract patterns and optimize. If I consider the case of Microsoft as representative for the technological race taking place in the IT industry, cloud computing seems to be the main track in that race.

Before I forget: IBM has just confirmed that intuition of mine. If you go and call by , you can pick up their half-year results ( ) and their latest strategic update ( ). One fact comes out of it: cloud computing at IBM brings the most gross margin and the most growth in business. It goes to the point of IBM splitting their business in two, with cloud computing spinning out of all the rest, as a separate business.

I would suggest that my students think about digital innovations in the domain of cloud computing. Microsoft Azure ( ) and the cloud computing provided by Okta ( ), seen a bit more in focus in their latest annual report ( ), serve me as quick benchmarks. Well, as I think about benchmarks, there are others, more obvious or less obvious, depending on the point of view. YouTube, when you think about it, does cloud computing. It stores data – yes, videos are data – and it adapts the list of videos presented to each user according to the preferences of said user, guessed by algorithms of artificial intelligence. Netflix – same thing: a lot of data, in the form of movies, shows and documentaries, and a lot of server power to support the whole thing.

My internal curious ape has grabbed this interesting object – innovations in the domain of cloud computing – and now my internal happy bulldog starts playing with it, sniffing around and digging holes, haphazardly, in the search for more stuff like that. My internal austere monk watches the ape and the bulldog, holding his razor ready – I mean Ockham’s razor, to cut bullshit out, should such need arise.

What’s cloud computing from the point of view of a team made of an ape and a bulldog? It is essentially a f**king big amount of data, permeated with artificial neural networks, run on and through f**king big servers, consuming a lot of computational power and a lot of energy. As cloud computing is becoming a separate IT business in its own right, I try to decompose it into key factors of value added. The technology of servers as such is one such factor. Energy efficiency, resilience to factors of operational risk, probably fiberoptics as regards connectivity, sheer computational power per 1 cubic meter of space, a negotiably low price of electricity – all those things are sort of related to servers.

Access to big, useful datasets is another component of that business. I see two openings here. Acquiring, now, intellectual property rights to datasets which are cheap today but likely to be expensive tomorrow is certainly important. People tend to say that data has become a commodity, and it is partly true. Still, I see that data is becoming an asset, too. As I look at the financials of Netflix (see, for example, The hopefully crazy semester), thus at cloud computing for entertainment, I realize that cloud-stored (clouded?) data can be both a fixed asset and a circulating one. It all depends on its lifecycle. There is data with a relatively short shelf life, which works as a circulating asset, akin to inventories. It earns money when it flows: some parcels of data flow into my server, some flow out, and I need that flow to stay in the flow of business. There is other data, which holds value for a longer time, similarly to a fixed asset, and yet is subject to depreciation and amortization.

Here is that emerging skillset: data trader. Being a data trader means that you: a) know where to look for interesting datasets b) have business contacts with the people who own them c) can intuitively gauge their market value and shelf life d) can effectively negotiate their acquisition and e) can do the same on the selling side. I think one more specific skill is worth adding: the intuitive ability to associate the data I am trading with the proper algorithms of artificial intelligence, just to blow some life into otherwise soulless databases. One more comes to my mind: the skill to write and enforce contracts which effectively protect the acquired data from infringement and theft.

Cool. There are the servers, and there is the data. Now, we need to market it somehow. The capacity to invent and market digital products based on cloud computing, i.e. on lots of server power combined with lots of data and with agile artificial neural networks, is another aspect of the business model. As I think of it, it comes to my mind that the whole fashion for Blockchain technology and its emergent products – cryptocurrencies and smart contracts – arose when the technology of servers passed a critical threshold, allowing businesses to play with computational power as a fixed asset.

I am very much Schumpeterian, i.e. I am quite convinced that Joseph Schumpeter’s theory of business cycles was, and still is, a bloody deep vision, which states, among other things, that with the advent of new technologies and new assets, some incumbent technologies and assets will inevitably disappear. Before inevitability consumes itself, a transitory period happens, when old assets coexist with the new ones, and choosing the right cocktail thereof is an art and a craft, requiring piles of cash in the bank account, just to keep the business agile and navigable.

Another thing strikes me: the type of emergent programming languages. Python, R, Solidity (with its ‘pragma solidity’ header): all that stuff is primarily about managing data. Twenty years ago, programming was mostly about… well, about programming, i.e. about creating algorithms to make those electronics do what we want. Today, programming is more and more about data management. When we invent new languages for a new type of business, we really mean business, as a collective intelligence.

It had to come. I mean, in me. That mild obsession of mine about collective intelligence just had to poke its head from around the corner. Whatever. Let’s go down that rabbit hole. Collective intelligence consists in an intelligent structure experimenting with many alternative versions of itself whilst staying coherent. The whole business of cloud computing, as it is on the rise and before maturity, consists very largely in experimenting with many alternative versions of claims on data, claims on server power, as well as with many alternative digital products sourced therefrom. Some combinations are fitter than others. What are the criteria of fitness? At the business scale, it would be return on investment, I guess. Still, at the collective level of whole societies, it would be about the capacity to assure high employment and low average workload per person. Yes, Sir Keynes, it still holds.

As I indulge in obsessions, I go to another one of mine: the role of cities in our civilization. In my research, I have noticed strange regularities as regards the density of urban population. When I compute a compound indicator defined as the density of urban population divided by the general density of population, or [DU/DG], that coefficient enters into strange correlations with other socio-economic variables. One of the most important observations I made about it is that the overall DU/DG for the whole planet is consistently growing. There is a growing difference in social density between cities and the countryside. See Demographic anomalies – the puzzle of urban density, from May 14th, 2020, to get an idea of it. I think that we, humans, invented cities as complex technologies which consist in stacking a large number of homo sapiens (for some humans, it is just allegedly sapiens, let’s face it) on a relatively small surface, with a twofold purpose: that of preserving and developing agricultural land as a food base, and that of fabricating new social roles for new humans, through intense social interaction in cities. My question regarding the rise of technologies in cloud computing is whether it is concurrent with growing urban density, or, conversely, whether it is a countering force to that growth. In other words, are those big clouds of data on big servers a by-product of citification, or rather something completely new, possibly able to supplant cities in their role of factories making new social roles?

When I think about cloud computing in terms of collective intelligence, I perceive it as a civilization-wide mechanism which helps make sense of the growing information generated by growing mankind. It is a bit like an internal control system inside a growing company. Cloud computing is essentially a pattern of maintaining internal cohesion inside the civilization. Funny how it plays on words. Clouds form in the atmosphere when the density of water vapour passes a critical threshold. As the density of vaporized water per 1 cubic meter of air grows, other thresholds get passed. The joyful, creamy clouds morph into rain clouds, i.e. clouds able to re-condense water from vapour back to liquid. I think that technologies of cloud computing do precisely that. They collect sparse, vaporized data and condense it into effective action in, and upon, the social environment.

Now comes the funny part. Rain clouds turn into storm clouds when they get really thick, i.e. when wet and warm air – thus air with a lot of water vaporized in it and a lot of kinetic energy in its particles – collides with much colder and drier air. Rain clouds pile up and start polarizing their electric charges. The next thing we know, lightning starts hitting, winds become scary etc. Can a cloud of data pile up to the point of becoming a storm cloud of data, when it enters in contact with a piece of civilisation poor in data and low on energy? Well, this is something I observe with social media and their impact. Any social medium – I mean Twitter, Facebook, Instagram, whatever pleases you – is essentially a computed cloud of data. When it collides with a population poor in data (i.e. poor in connection with real life and the real world), and low on energy (not much of a job, not much adversity confronted, not really a pile of business being done), data polarizes in the cloud. Some of it flows to the upper layers of the cloud, whilst another part, the heavier one, flows down to the bottom layer and starts attracting haphazard discharges of lighter, more sophisticated data from the land underneath. The land underneath is the non-digital realm of social life. The so-polarized cloud of data becomes sort of aggressive and scary. It teaches humans to seek shelter and protection from it.

Metaphors have varying power. This one, namely equating a cloud of data to an atmospheric cloud, seems pretty kickass. It leads me to concluding that cloud computing arises as a new, big digital business because there are good reasons for it to do so. There are more and more of us, humans, on the planet. More and more of us live in cities, in a growing social density, i.e. with more and more social interactions. Those interactions inevitably produce data (e.g. #howcouldtheyhavedoneittome), whence the growing informational wealth of our civilisation, whence the computed clouds of data.

Metaphors have practical power, too, namely that of making me shoot educational videos. I made two of them, sort of in the stride of writing. Here they are, to your pleasure and leisure (in brackets, you have links to YouTube): International Economics #3 The rise of cloud computing [], for one, and Managerial Economics and Economic Policy #4 The growth of cloud computing and what can governments do about it [], for two.

4 units of quantity in technological assets to make one unit of quantity in final goods

My editorial on YouTube

I am writing a book right now, and I am sort of taken up with it, so I blog much less frequently than I planned. Just to keep up with the commitment, which any blogger has sort of imprinted in their mind, to deliver some meaningful content, I am publishing, in this update, the outline of my first chapter. It has become almost a truism that we live in a world of increasingly rapid technological change. When a statement becomes almost a cliché, it is useful to pass it in review, just to be sure that we understand what the statement is about. In the very pragmatic perspective of an entrepreneur, or, as a matter of fact, that of an infrastructural engineer, technological change means that something old needs to be coupled with, or replaced by, something new. When a new technology comes around, it is like a demon: it is essentially an idea, frequently prone to protection through intellectual property rights, and that idea looks for a body to sneak into. Humans are supposed to supply the body, and they can do it in two different ways. They can tell the new idea to coexist with some older ones, i.e. we embody new technologies in equipment and solutions which we couple functionally with older ones. Take any operating system for computers or mobile phones. At the moment of release, the people disseminating it claim it is brand new, but scratch the surface just a little bit and you find 10-year-old algorithms underneath. Yes, they are old, and yes, they still work.

Another way to embody a new technological concept is to make it supplant older ones completely. We do it reluctantly, yet sometimes it really looks like a better idea. Electric cars are a good example of this approach. Initially, the basic idea seems to have consisted in putting electric engines into an otherwise unchanged structure of vehicles propelled by combustion engines. Still, electric propulsion is heavier, as we need to drive those batteries around. Significantly greater weight means the necessity to rethink steering, suspension, structural stability etc., whence the need to design a new structure.   

Whichever way of embodying new technological concepts we choose, our equipment ages. It ages physically and morally, in various proportions. Aging in technologies is called depreciation. Physical depreciation means the physical wear and destruction of a piece of equipment. As it happens – and it happens to anything used frequently, e.g. shoes – we choose between repairing and replacing the destroyed parts. Whatever we do, it requires resources. From the economic point of view, it requires capital. As strange as it could sound, physical depreciation occurs in the world of digital technologies, too. When a large digital system, e.g. that of an airport, is being run, something apparently uncanny happens: some component algorithms of the system just stop working properly, under the burden of too much data, and they need to be replaced sort of on the go, without putting the whole system on hold. Of course, the essential cause of that phenomenon is the disproportion between the computational scale of pre-implementation tests and that of real exploitation. Still, the interesting thing about those on-the-go patches is that they are not fundamentally new, i.e. they do not express any new concept. They are otherwise known, field-tested solutions, and they have to be this way in order to work. Programmers who implement those patches do not invent new digital technologies; they just keep the incumbent ones running. They repair something broken with something working smoothly. Functionally, it is very much like repairing a fleet of vehicles in an express delivery business.

As we take care of the physical depreciation occurring in our incumbent equipment and software, new solutions come to the market, and let’s be honest: they are usually better than what we have at the moment. The technologies we hold become comparatively less and less modern as new ones appear. That phenomenon of aging by obsolescence is called moral depreciation. The proportions in the actual cocktail of physical and moral depreciation depend on the pace of the technological race in the given industry. When a lot of alternative, mutually competing solutions emerge, moral obsolescence accelerates and tends to become the dominant factor of aging in our technological assets. Moral depreciation creates a tension: as we watch the state of the art in our industry progressively moving away from our current technological position, determined by the assets we have, we find ourselves under growing pressure to do something about it. Finally, we come to the point of deciding to invest in something definitely more up to date than what we currently have.

Both layers of depreciation – physical and moral – absorb capital. It seems pertinent to explain how exactly they do so. We need money to pay for the goods and services necessary for repairing and replacing the physically worn parts of our technological basket. We obviously need money to pay for completely new equipment, too. Where does that money come from? Are there any patterns as for its sourcing? The first and most obvious source of money to finance depreciation in our assets is the financial scheme of amortization. In many legal regimes, i.e. in all the developed countries and in a large number of emerging and developing economies, an entity in possession of assets subject to depreciation is allowed to subtract from its income tax base a legally determined financial amount, in order to provide for depreciation.

The legally possible amount of amortization is calculated as a percentage of the book value ascribed to the corresponding assets, and this percentage is based on their assumed useful life. If a machine is supposed to have a useful life of five years, after all is said and done as for its physical and moral depreciation, I can subtract from my tax base 1/5th = 20% of its book value. Question: which exact book value, the initial one or the current one? It depends on the kind of deal an entrepreneur makes with the tax authorities. Three alternative ways are possible: linear, decreasing, and increasing. When I do linear amortization, I take the initial value of the machine, e.g. $200 000, I divide it into 5 equal parts right after the purchase, thus into 5 instalments of $40 000 each, and I subtract those instalments annually from my tax base, starting from the current year. After linear amortization is over, the book value of the machine is exactly zero.
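The linear schedule above is simple enough to sketch in a few lines of Python. A minimal illustration, using the $200 000 machine and the five-year useful life from the example (the function name is mine, for illustration only):

```python
def linear_amortization(initial_value, useful_life_years):
    """Equal annual write-offs; the book value ends at exactly zero."""
    instalment = initial_value / useful_life_years
    return [instalment] * useful_life_years

schedule = linear_amortization(200_000, 5)
print(schedule)       # [40000.0, 40000.0, 40000.0, 40000.0, 40000.0]
print(sum(schedule))  # 200000.0 -> the whole initial value gets amortized
```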

Should I choose decreasing amortization, I take the current value of my machine as the basis for the 20% reduction of my tax base. The first year, the machine is brand new, worth $200 000, and so I amortize 20% * $200 000 = $40 000. The next year, i.e. in the second year of exploitation, I start with my machine being worth $200 000 – $40 000 = (1 – 20%) * $200 000 = $160 000. I repeat the same operation of amortizing 20% of the current book value: $160 000 – 20% * $160 000 = $160 000 – $32 000 = $128 000. I subtracted $32 000 from my tax base in this second year of exploitation (of the machine), and, at the end of the fiscal year, I landed with my machine being worth $128 000 net of amortization. A careful reader will notice that decreasing amortization is, by definition, a non-linear function tending asymptotically towards zero. It is a never-ending story, and a paradox. I assume a useful life of 5 years in my machine; hence I subtract 1/5th = 20% of its current value from my tax base, and yet the process of amortization takes de facto longer than 5 years and has no clear end. After 5 years of amortization, my machine is worth $65 536 net of amortization, and I can keep going. The machine is technically dead as useful technology, but I still have it in my assets.
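The declining-balance arithmetic above can be sketched the same way. In this toy illustration, each year we write off 20% of the current book value, and the $65 536 residue after five years falls out by itself:

```python
def decreasing_amortization(initial_value, rate, years):
    """Declining balance: write off `rate` of the *current* book value each year.
    Returns (annual write-offs, residual book value)."""
    book_value = initial_value
    write_offs = []
    for _ in range(years):
        w = book_value * rate
        write_offs.append(w)
        book_value -= w
    return write_offs, book_value

write_offs, remaining = decreasing_amortization(200_000, 0.20, 5)
print(round(write_offs[0]), round(write_offs[1]))  # 40000 32000
print(round(remaining))                            # 65536 -> never reaches zero
```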

Increasing amortization is based on more elaborate assumptions than the two preceding methods. I assume that my machine will be depreciating over time at an accelerating pace, e.g. 10% of the initial value in the first year, 20% annually over the years 2 – 4, and 30% in the 5th year. The underlying logic is that of progressively diving into the stream of the technological race: the longer I have my technology, the greater the likelihood that someone comes up with something definitely more modern. With the same assumption of $200 000 as the initial investment, that makes me write off my tax base the following amounts: 1st year – $20 000, 2nd ÷ 4th year – $40 000 each, 5th year – $60 000. After 5 years, the net value of my equipment is zero.
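The increasing schedule, with its accelerating rate profile applied to the initial value, can be sketched as follows (a toy illustration; the 10% / 20% / 20% / 20% / 30% profile is the one from the example):

```python
def increasing_amortization(initial_value, rates):
    """Write off a growing fraction of the *initial* value each year.
    The rates must sum to 1, so the book value lands at exactly zero."""
    assert abs(sum(rates) - 1.0) < 1e-9
    return [initial_value * r for r in rates]

schedule = increasing_amortization(200_000, [0.10, 0.20, 0.20, 0.20, 0.30])
print([round(w) for w in schedule])  # [20000, 40000, 40000, 40000, 60000]
```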

The exact way I can amortize my assets depends largely on the legal regime in force – national governments have their little ways in that respect, using the rates of amortization as incentives for certain types of investment whilst discouraging other types – and yet there is quite a lot of financial strategy in amortization, especially in large business structures with ownership separate from management. We can notice that linear amortization gives comparatively greater savings in terms of tax due. Still, as amortization consists in writing an amount off the tax base, we need to have a tax base at all beforehand. When I run a well-established, profitable business, way past its break-even point, tax savings are a sensible idea, and so is linear amortization of my fixed assets. However, when I run a start-up, still deep in the red zone below the break-even point, there is not really any tax base to subtract amortization from. Recording a comparatively greater amortization on operations already running at a loss just deepens the loss, which, at the end of the day, has to be subtracted from the equity of my business, and that doesn’t look good in the eyes of my prospective investors and lenders. Relatively quick, linear amortization is a good strategy for highly profitable operations with access to lots of cash. Increasing amortization could be good for that start-up business, where relatively the greatest margin of operational income turns up some time after day zero of operations.

Interestingly, the least obvious logic comes with decreasing amortization. What is the practical point of amortizing my assets asymptotically down to zero, without ever reaching zero? Good question, especially in the light of a practical fact of life, which the author challenges any reader to test by themselves: most managers and accountants, especially in small and medium-sized enterprises, will intuitively amortize the company’s assets precisely this way, i.e. along the decreasing path. Question: why do people do something apparently illogical? Answer: because there is a logic to it; it is just hard to phrase. What about the logic of accumulating capital? Both linear amortization and increasing amortization lead to the book value of the corresponding assets dropping to zero at some point in time. Writing a lot of value off my assets means that either I subtract the corresponding amount from the passive side of my balance sheet (i.e. I repay some loans or I give away some equity), or I compensate the write-off with new investment. Either I lose cash, or I am in need of more cash. When I am in a tight technological race, and my assets are subject to quick moral depreciation, those sudden drops down to zero can put a lot of financial stress on my balance sheet. When I do something apparently detached from my technological strategy, i.e. when I amortize decreasingly, sudden capital quakes are replaced by a gentle, much more predictable descent. Predictable means e.g. negotiable with banks who lend me money, or with investors buying shares in my equity.

This is an important pattern to notice in commonly encountered behaviour regarding capital goods: most people will intuitively tend to protect the capital base of their organisation, would it be a regular business or a public agency. When choosing between amortizing their assets faster, so as to reflect the real pace of their ageing, or amortizing them slower, thus a bit against the real occurrence of depreciation, most people will choose the latter, as it smoothens the resulting changes in the capital base. We can notice it even in the way most of us manage our strictly private assets. Let’s take the example of an ageing car. When a car reaches the age at which an average household could consider changing it, like 3 – 4 years, only a relatively tiny fraction of the population, probably not more than 16%, will really change it for a new car. The majority (the author of this book included, by the way) will rather patch and repair, and claim that ‘new cars are not as solid as those older ones’. There is a logic to that. A new car is bound to lose around 25% of its market value annually over the first 2 – 3 years of its useful life. An old car, aged 7 years or more, loses around 10% or less per year. In other words, when choosing between shining new things that age quickly and less shining old things that age slowly, only a minority of people will choose the former. The most common behavioural pattern consists in choosing the latter.

When recurrent behavioural patterns deal with important economic phenomena, such as technological change, an economic equilibrium could be poking its head from around the corner. Here comes an alternative way of denominating depreciation and amortization: instead of denominating it as a fraction of the value attributed to assets, we can denominate it over the revenue of our business. Amortization can be seen as the cost of staying in the game. The technological race takes a toll on our current business. The faster our technologies depreciate, the costlier it is to stay in the race. At the end of the day, I have to pay someone or something that helps me keep up with the technological change happening around me, i.e. I have to share, with that someone or something, a fraction of what my customers pay me for the goods and services I offer. When I hold a differentiated basket of technological assets, each ageing at a different pace and starting from a different moment in time, the aggregate capital write-off that corresponds to their amortization is the aggregate cost of keeping up with science.

Denoting K as the book value of assets, with ‘a’ standing for the rate of amortization corresponding to one of the strategies sketched above, P representing the average price of the goods we sell, and Q their quantity, we can sketch the considerations developed above in a more analytical way, as a coefficient labelled A, as in equation (1) below.

A = (K*a)/(P*Q)         (1)

The coefficient A represents the relative burden of the aggregate amortization of all the fixed assets in hand upon the revenues recorded in a set of economic agents. Equation (1) can be further transformed so as to extract quantities at both levels of the fraction. The factors in the denominator of equation (1), i.e. the prices and quantities of goods sold in order to generate revenues, will be further represented as, respectively, PG and QG, whilst the book value of assets subject to amortization will be symbolized as the arithmetical product QK*PK of the market prices PK of assets, and the quantity QK thereof. Additionally, we drive the rate of amortization ‘a’ down to what it really is, i.e. the inverse of an expected lifecycle F, measured in years and ascribed to our assets. Equation (2) below shows an analytical development in this spirit.

A = (1/F)*[(PK*QK)/(PG*QG)]        (2)
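Equations (1) and (2) are easy to check numerically. A quick sketch with made-up numbers – assets worth PK*QK = $500 000, an expected lifecycle of F = 5 years (hence a = 1/F = 20%), and revenues of PG*QG = $1 000 000 – shows that both formulas yield the same coefficient:

```python
K = 500_000.0          # book value of fixed assets, i.e. PK * QK
F = 5                  # expected lifecycle of the assets, in years
a = 1 / F              # rate of amortization
revenue = 1_000_000.0  # PG * QG

A1 = (K * a) / revenue        # equation (1)
A2 = (1 / F) * (K / revenue)  # equation (2)
print(A1, A2)  # 0.1 0.1 -> amortization absorbs 10% of revenue
```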

Before the meaning of equation (2) is explored more in depth, it is worth explaining the little mathematical trick that economists use all the time, and which usually raises doubts in the minds of bystanders. How can anyone talk about an aggregate quantity QG of goods sold, or that of fixed assets, the QK? How can we distil those aggregate quantities out of the facts of life? If anyone in their right mind thinks about the enormous diversity of the goods we trade, and the assets we use, how can we even set a common scale of measurement? Can we add up kilograms of BMW cars with kilograms of food consumed, and use it as denominator for kilograms of robots summed up with kilograms of their operating software?

This is a mathematical trick, yet a useful one. When we think about any set of transactions we make, whether we buy milk or machines for a factory, we can calculate some kind of weighted average price in those transactions. When I spend $1 000 000 on a team of robots, bought at unitary price P(robot), and $500 000 on their software, bought at price P(software), the arithmetical operation P(robot)*[$1 000 000 / ($1 000 000 + $500 000)] + P(software)*[$500 000 / ($1 000 000 + $500 000)] will yield a weighted average price P(robot; software) made up in one third of the price of software, and in two thirds of the price of robots. Mathematically, this operation is called factorisation, and we use it when we suppose the existence of a common, countable factor in a set of otherwise distinct phenomena. Once we suppose the existence of recurrent transactional prices in anything humans do, we can factorise that anything as Price Multiplied By Quantity, or P*Q. Thus, although we cannot really add up kilograms of factories with kilograms of patents, we can factorise their respective prices out of the phenomenon observed and write PK*QK. In this approach, quantity Q is a semi-metaphysical category, something like a metaphor for the overall, real amount of the things we have, make and do.
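That factorisation boils down to a spending-weighted average of transactional prices. A minimal sketch, with hypothetical unit prices of $50 000 per robot and $5 000 per software licence (the dollar weights are the ones from the example; the unit prices are my assumptions):

```python
def weighted_average_price(purchases):
    """purchases: list of (unit_price, total_spent) pairs.
    Returns the spending-weighted average transactional price P."""
    total_spent = sum(spent for _, spent in purchases)
    return sum(price * spent / total_spent for price, spent in purchases)

# two thirds of the weight on robots, one third on software
p = weighted_average_price([(50_000.0, 1_000_000.0),  # P(robot), assumed
                            (5_000.0, 500_000.0)])    # P(software), assumed
print(round(p))  # 35000, i.e. (2/3) * 50 000 + (1/3) * 5 000
```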

Keeping those explanations in mind, let’s have a look at the empirical representation of coefficient A, as computed according to equation (2), on the grounds of data available in Penn Tables 9.1 (Feenstra et al. 2015[1]), and represented graphically in Figure I_1 below. The database known as Penn Tables provides direct information about three big components of equation (2): the basic rate of amortization, the nominal value of fixed assets, and the nominal value of Gross Domestic Product (GDP) for each of the 182 national economies covered. One of the possible ways of thinking about the wealth of a nation is to compute the value of all the final goods and services made by said nation. According to the logic presented in the preceding paragraph, whilst the whole basket of final goods is really diversified, it is possible to nail down a weighted, average transactional price P for all that lot, and, consequently, to factorise the real quantity Q out of it. Hence, the GDP of a country can be seen as a very rough approximation of the value added created by all the businesses on that territory, and changes over time in the GDP as such can be seen as representative of changes in the aggregate revenue of all those businesses.

Figure I_1 introduces two metrics, pertinent to the empirical unfolding of equation (2) over time and across countries. The continuous line shows the arithmetical average of local, national coefficients A across the whole sample of countries. The line with square markers represents the standard deviation of those national coefficients from the average represented by the continuous line. Both metrics are based on the nominal computation of the coefficient A for each year in each given national economy, thus in current prices for each year from 1950 through 2017. Equation (2) gives many possibilities of change in the coefficient A – including changes in the proportion between the price PG of final goods, and the market price PK of fixed assets – and the nominal computation used in Figure I_1 captures that factor as well.            

[Figure I_1_Coefficient of amortization in GDP, nominal, world, trend]

In 1950, the average national coefficient A, calculated as specified above, was equal to 6,7%. In 2017, it climbed to A = 20,8%. In other words, the average entrepreneur in 1950 would pay less than one tenth of their revenues to amortize the depreciation of technological assets, whilst in 2017 it was more than one fifth. This change in proportion can encompass many phenomena. It can reflect the pace of scientific change as such, or just a change in entrepreneurial behaviour as regards the strategies of amortization, explained above. Show business is a good example. Content is an asset for television stations, movie makers or streaming services. Content assets age, and some of them age very quickly. Take the tonight news show on any TV channel. The news of today is much less news tomorrow, and definitely not news at all the next month. If you have a look at annual financial reports of TV broadcasters, such as the American classic of the industry, CBS Corporation[1], you will see insane nominal amounts of amortization in their cash flow statements. Thus, the ascending trend of the average coefficient A, in Figure I_1, could be, at least partly, the result of growth in the amount of content assets held by various entities in show business. It is a good thing to deconstruct that compound phenomenon into its component factors, which is being undertaken further below. Still, before the deconstruction takes place, it is good to have an inquisitive look at the second curve in Figure I_1, the square-marked one, representing the standard deviation of coefficient A across countries.

In common interpretation of empirical numbers, we almost intuitively lean towards average values, as the expected ones in a large set, and yet the standard deviation has a peculiar charm of its own. If we compare the paths followed by the two curves in Figure I_1, we can see them diverge: the average A goes resolutely up whilst the standard deviation in A stays almost stationary in its trend. In the 1950ies or 1960ies, the relative burden of amortization upon the GDP of individual countries was almost twice as disparate as it is today. In other words, back in the day it mattered much more where exactly our technological assets were located. Today, it matters less. National economies seem to be converging in their ways of sourcing current, operational cash flow to provide for the depreciation of incumbent technologies.

Getting back to science, and thus back to empirical facts, let’s have a look at two component phenomena of the trends sketched in Figure I_1: the pace of scientific invention, and the average lifecycle of assets. As for the former, the coefficient of patent applications per 1 mln people, sourced from the World Bank[2], is used as a representative metric. When we invent an original solution to an existing technological problem, and we think we could make some money on it, we have the option of applying for legal protection of our invention, in the form of a patent. Acquiring a patent is essentially a three-step process. Firstly, we file the so-called patent application to the patent office adequate for the given geographical jurisdiction. Then, the patent office publishes our application, calling out for anyone who has grounds for objecting to the issuance of the patent, e.g. someone we used to do research with, hand in hand, but hands parted at some point in time. As a matter of fact, many such disputes arise, which makes patent applications much more numerous than actually granted patents. If you check patent data, granted patents define the currently appropriated territories of intellectual property, whilst patent applications are pretty much informative about the current state of applied science, i.e. about the path this science takes, and about the pressure it puts on business people towards refreshing their technological assets.

Figure I_2 below shows the coefficient of patent applications per 1 mln people in the global economy. The shape of the curve is interestingly similar to that of the average coefficient A, shown in Figure I_1, although it covers a shorter span of time, from 1985 through 2017. At first sight, it seems to make sense: more and more patentable inventions per 1 million humans, on average, put more pressure on replacing old assets with new ones. Yet, the first sight may be misleading. Figure I_3, further below, shows the average lifecycle of fixed assets in the global economy. This particular metric is once again calculated on the grounds of data available in Penn Tables 9.1 (Feenstra et al. 2015 op. cit.). The database, strictly speaking, contains a variable called ‘delta’, which is the basic rate of amortization in fixed assets, i.e. the percentage of their book value commonly written off the income tax base as provision for depreciation. This is factor ‘a’ in equation (1), presented earlier, and it reflects the expected lifecycle of assets. The inverted value ‘1/a’ gives the exact value of that lifecycle in years, i.e. the variable ‘F’ in equation (2). Here comes the big surprise: although the lifecycle ‘F’, computed as an average for all the 182 countries in the database, does display a descending trend, the descent is much gentler, and much more cyclical, than what we could expect after having seen the trend in the nominal burden A of amortization, and in the occurrence of patent applications. Clearly, there is a push from science upon businesses towards shortening the lifecycle of their assets, but businesses do not necessarily yield to that pressure.
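
The lifecycle computation described above is a one-liner per country. A sketch with illustrative ‘delta’ values, not the actual Penn Tables data:

```python
# F = 1/a: inverting the basic amortization rate 'delta' gives the
# expected lifecycle of assets in years. Values below are illustrative.
deltas = {"country A": 0.05, "country B": 0.08, "country C": 0.04}
lifecycles = {c: 1.0 / d for c, d in deltas.items()}    # F per country
average_f = sum(lifecycles.values()) / len(lifecycles)  # sample average of F
# e.g. delta = 0.05 means assets are expected to last 20 years
```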

[Figure I_2_Patent Applications per 1 mln people]

Here comes a riddle. The intuitive assumption that growing scientific input provokes a shorter lifespan in technological assets proves too general. It obviously does not encompass the whole phenomenon of increasingly cash-consuming depreciation in fixed assets. There is something else. After having cast a look at the ‘1/F’ component factor of equation (2), let’s move to the (PK*QK)/(PG*QG) one. Penn Tables 9.1 provide two variables that allow calculating it: the aggregate value of fixed assets in national economies, and the GDP of those economies. Interestingly, those two variables are provided in two versions each: one at constant prices of 2011, the other at current prices. Before the consequences of that dual observation are discussed, let’s recall some basic arithmetic: we can rewrite (PK*QK)/(PG*QG) as (PK/PG)*(QK/QG). The (PK/PG) component fraction corresponds to the proportion between weighted average prices in, respectively, fixed assets (PK), and final goods (PG). The other part, i.e. (QK/QG), stands for the proportion between aggregate quantities of assets and goods. Whilst we refer here to that abstract concept of aggregate quantities, observable only as something mathematically factorized out of something really empirical, there is method to that madness. How big a factory do we need to make 20 000 cars a month? How big a server do we need in order to stream 20 000 hours of films and shows a month? Seen from this angle, the proportion (QK/QG) is much more real. When both the aggregate stock of fixed assets in national economies, and the GDP of those economies, are expressed in current prices, both the (PK/PG) factor and the (QK/QG) factor really change over time. What is observed (analytically) is the full (PK*QK)/(PG*QG) coefficient. Yet, when prices are constant, the (PK/PG) component factor does not actually change over time; what really changes is just the proportion between aggregate quantities of assets and goods.

The factorisation presented above allows another trick at the frontier of arithmetic and economics. The trick consists in creatively using two types of economic aggregates, commonly published in publicly available databases: nominal values as opposed to real values. The former category represents something like P*Q, or price multiplied by quantity. The latter is supposed to have kicked prices out of the equation, i.e. to represent just quantities. With those two types of data we can do something opposite to the procedure presented earlier, which serves to distil real quantities out of nominal values. This time, we have externally provided products ‘price times quantity’, and just quantities. Logically, we can extract prices out of the nominal values. When we have two coefficients given in the Penn Tables 9.1 database – the full (PK*QK)/(PG*QG) (current prices) and the partial (QK/QG) (constant prices) – we can develop the following equation: [(PK*QK)/(PG*QG)] / (QK/QG) = PK/PG. We can take the really observable proportion between the nominal value of fixed assets and that of Gross Domestic Product, and divide it by the proportion between real quantities of, respectively, assets and final goods, in order to calculate the proportion between weighted average prices of assets and goods.
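
With the 2017 values quoted in this chapter, that division looks like this:

```python
# Extracting the price proportion PK/PG by dividing the nominal
# coefficient (current prices) by the real one (constant prices).
# The two inputs are the 2017 figures quoted in this chapter.
nominal_k_over_g = 4.617  # (PK*QK)/(PG*QG) in 2017, current prices
real_k_over_g = 4.027     # QK/QG in 2017, constant prices
pk_over_pg = nominal_k_over_g / real_k_over_g
# comes out at roughly 1.146, the 2017 price proportion
```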

Figure I_4, below, attempts to represent all those three phenomena – the change in nominal values, the change in real quantities, and the change in prices – in one graph. As different magnitudes of empirical values are involved, Figure I_4 introduces another analytical method, namely indexation over a constant denominator. When we want to study temporal trends in values which are either measured with different units or display very different magnitudes, we can choose one point in time as the peg value for each of the variables involved. In the case of Figure I_4, the peg year is 2011, as Penn Tables 9.1 use 2011 as the reference year for constant prices. Aggregate values of capital stock and national GDP, when measured in constant prices, are measured in the prices of the year 2011. For each of the three variables involved – the nominal proportion of capital stock to GDP (PK*QK)/(PG*QG), the real proportion thereof QK/QG, and the proportion between the prices of assets and the prices of goods PK/PG – we take their values in 2011 as denominators for the whole time series. Thus, for example, the nominal proportion of capital stock to GDP in 1990 is the quotient of the actual value in 1990 divided by the value in 2011, etc. As a result, we can study each of the three variables as if the value in 2011 was equal to 1,00.
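
The indexation itself is trivial to sketch. The values below are the non-indexed nominal proportions for 1950, 1985, 2011 and 2017 quoted in this chapter:

```python
# Indexation over a constant denominator: divide every value in the
# series by the value in the peg year 2011, so that 2011 equals 1.00.
series = {1950: 1.905, 1985: 2.197, 2011: 3.868, 2017: 4.617}
peg = series[2011]
indexed = {year: value / peg for year, value in series.items()}
# indexed[2011] is 1.0 by construction; all other years are read
# relative to it, whatever the units of the original series
```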

[Figure I_4 Comparative indexed trends in the proportion between the national capital stock and the GDP]

The indexed trends thus computed are global averages across the database, i.e. averages of national values computed for individual countries. The continuous blue line marked with red triangles represents the nominal proportion between the national stocks of fixed assets, and the respective GDP of each country, or the full (PK*QK)/(PG*QG) coefficient. It has been consistently climbing since 1950, and since the mid-1980ies the slope of that climb seems to have increased. Just to give a glimpse of actual non-indexed values: in 1950 the average (PK*QK)/(PG*QG) coefficient was 1.905, in 1985 it reached 2.197, in the reference year 2011 it went up to 3.868, to end up at 4.617 in 2017. The overall shape of the curve strongly resembles that observed earlier in the coefficient of patent applications per 1 mln people in the global economy, and in another indexed trend to find in Figure I_4, that of the price coefficient PK/PG. Starting from 1985, that latter proportion seems to be following almost perfectly the trend in patentable invention, and its actual, non-indexed values seem to be informative about a deep change in business in connection with technological change. In 1950, the proportion between average weighted prices of fixed assets, and those of final goods, was PK/PG = 0,465, and even in the middle of the 1980ies it kept roughly the same level, PK/PG = 0,45. To put it simply, fixed assets were half as expensive as final goods, per unit of quantity. Yet, since 1990, something has changed: that proportion started to grow, and productive assets started to be more and more valuable in comparison to the market prices of the goods they served to make. In 2017, PK/PG reached 1,146. From a world where technological assets were just tools to make final goods, we moved into a world where technologies are goods in themselves. If we look carefully at digital technologies, nanotechnologies or at biotech, this general observation strongly holds.
A new molecule is both a tool to make something, and a good in itself. It can make a new drug, and it can be a new drug. An algorithm can create value added as such, or it can serve to make another value-creating algorithm.

Against that background of unequivocal change in the prices of technological assets, and in their proportion to the Gross Domestic Product of national economies, we can observe a different trend in the proportion of quantities: QK/QG. Hence, we return to questions such as ‘How big a factory do we need in order to make the amount of final goods we want?’. The answer to that type of question takes the form of something like a long business cycle, with a peak in 1994, at QK/QG = 5,436. The presently observed QK/QG (2017) = 4,027 looks relatively modest and is very similar to the value observed in the 1950ies. Seventy years ago, we used to be a civilization which needed around 4 units of quantity in technological assets to make one unit of quantity in final goods. Then, starting from the mid-1970ies, we started turning into a more and more technology-intensive culture, with more and more units of quantity in assets required to make one unit of quantity in final goods. In the mid-1990ies, that asset-intensity reached its peak, and now it is back at the old level.

[1] last access November 4th, 2019

[2] last access November 4th, 2019

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at

The point of doing manually what the loop is supposed to do

My editorial on YouTube

OK, here is the big picture. The highest demographic growth, in absolute numbers, takes place in Asia and Africa. The biggest migratory flows start from there, as well, and aim at and into regions with much less of human mass in accrual: North America and Europe. Less human accrual, indeed, and yet much better conditions for each new homo sapiens. In some places on the planet, a huge amount of humans is born every year. That huge amount means a huge number of genetic variations around the same genetic tune, namely that of the homo sapiens. Those genetic variations leave their homeland, for a new and better homeland, where they bring their genes into a new social environment, which assures them much more safety, and higher odds of prolonging their genetic line.

What is the point of there being more specimens of any species? I mean, is there a logic to increasing the headcount of any population? When I say ‘any’, it ranges from bacteria to us, humans. After having meddled with the most basic algorithm of a neural network (see « Pardon my French, but the thing is really intelligent » and « Ce petit train-train des petits signaux locaux d’inquiétude »), I have some thoughts about what intelligence is. I think that intelligence is a class, i.e. it is a framework structure able to produce many local, alternative instances of itself.

Being intelligent consists, to start with, in creating alternative versions of oneself, and creating them purposefully imperfect so as to generate small local errors, whilst using those errors to create still different versions of oneself. The process is tricky. There is some sort of fundamental coherence required between the way of creating those alternative instances of oneself, and the way that the resulting errors are being processed. In the absence of such coherence, the allegedly intelligent structure can fall into purposeful ignorance, or into panic.

Purposeful ignorance manifests as the incapacity to signal and process the local imperfections in alternative instances of the intelligent structure, although those imperfections actually stand out and wave at you. This is the ‘everything is just fine and there is no way it could be another way’ behavioural pattern. It happens, for example, when the function of processing local errors is too gross – or not sharp enough, if you want – to actually extract meaning from tiny, still observable local errors. The panic mode of an intelligent structure, on the other hand, is that situation when the error-processing function is too sharp for the actually observable errors. Them errors just knock it out of balance, like completely, and the function signals general ‘Error’, or ‘I can’t stand this cognitive dissonance’.

So, what is the point of there being more specimens of any species? The point might be to generate as many specific instances of an intelligent structure – the specific DNA – as possible, so as to generate purposeful (and still largely unpredictable) errors, just to feed those errors into the future instantiations of that structure. In the process of breeding, some path of evolutionary coherence leads to errors that can be handled, and that path unfolds between a state of evolutionary ‘everything is OK, no need to change anything’ (case mosquito, unchanged for millions of years), and a state of evolutionary ‘what the f**k!?’ (case common fruit fly, which produces insane amount of mutations in response to the slightest environmental stressor).

Essentially, all life could be a framework structure, which, back in the day, made a piece of software in artificial intelligence – the genetic code – and ever since that piece of software has been working on minimizing the MSE (mean square error) in predicting the next best version of life, and it has been working by breeding, in a tree-like method of generating variations,  indefinitely many instances of the framework structure of life. Question: what happens when, one day, a perfect form of life emerges? Something like TRex – Megalodon – Angelina Jolie – Albert Einstein – Jeff Bezos – [put whatever or whoever you like in the rest of that string]? On the grounds of what I have already learnt about artificial intelligence, such a state of perfection would mean the end of experimentation, thus the end of multiplying instances of the intelligent structure, thus the end of births and deaths, thus the end of life.

Question: if the above is even remotely true, does that overarching structure of life understand how the software it made – the genetic code – works? Not necessarily. That very basic algorithm of neural network, which I have experimented with a bit, produces local instances of the sigmoid function Ω = 1/(1 + e^(-x)), such that Ω < 1, which follows from the fact that 1 + e^(-x) > 1 is always true. Still, the thing does it just sometimes. Why? How? Go figure. That thing accomplishes an apparently absurd task, and it does so just by being sufficiently flexible with its random coefficients. If Life In General is God, that God might not have a clue about how actual life works. God just needs to know how to write an algorithm for making actual life work. I would even say more: if God is any good at being one, he would write an algorithm smarter than himself, just to make things advance.
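
For the record, here is the sigmoid in question, sketched in Python. The bound Ω < 1 holds because e^(-x) is strictly positive for every real x, so the denominator always exceeds 1:

```python
import math

# The sigmoid: omega = 1 / (1 + e^(-x)). The denominator always
# exceeds 1, so the output stays strictly between 0 and 1.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

values = [sigmoid(x) for x in (-5.0, -1.0, 0.0, 1.0, 5.0)]
assert all(0.0 < v < 1.0 for v in values)
# the function is increasing and crosses 0.5 at x = 0
```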

The hypothesis of life being one, big, intelligent structure gives an interesting insight into what the cost of experimentation is. Each instance of life, i.e. each specimen of each species, needs energy to sustain it. That energy takes many forms: light, warmth, food, Lexus (a form of matter), parties, Armani (another peculiar form of matter) etc. The more instances of life there are, the more energy they need to be there. Even if we take the Armani particle out of the equation, life is still bloody energy-consuming. The available amount of energy puts a limit to the number of experimental instances of the framework, structural life that the platform (Earth) can handle.

Here comes another one about climate change. Climate change means warmer, let’s be honest. Warmer means more energy on the planet. Yes, temperature is our human measurement scale for the aggregate kinetic energy of vibrating particles. More energy is what we need to have more instances of framework life at the same time. Logically, incremental change in total energy on the planet translates into incremental change in the capacity of framework life to experiment with itself. Still, as framework life could be just the God who made that software for artificial intelligence (yes, I am still in the same metaphor), said framework life could not be quite aware of how bumpy the road could be towards the desired minimum in the Mean Square Error. If God is an IT engineer, it could very well be the case.

I had that conversation with my son, who is finishing his IT engineering studies. I told him: ‘See, I took that algorithm of neural network, and I just wrote its iterations out into separate tables of values in Excel, just to see what it does, like iteration after iteration. Interesting, isn’t it? I bet you have done such a thing many times, eh?’. I still remember that heavy look in my son’s eyes: ‘Why the hell should I ever do that?’ he went. ‘There is a logical loop in that algorithm, you see? This loop is supposed to do the job, I mean to iterate until it comes up with something really useful. What is the point of doing manually what the loop is supposed to do for you? It is like hiring a gardener and then doing everything in the garden by yourself, just to see how it goes. It doesn’t make sense!’. ‘But it’s interesting to observe, isn’t it?’ I went, and then I realized I was talking to an alien form of intelligence, there.

Anyway, if God is a framework life who created some software to learn in itself, it could not be quite aware of the tiny little difficulties in the unfolding of the Big Plan. I mean acidification of oceans, hurricanes and stuff. The framework life could say: ‘Who cares? I want more learning in my algorithm, and it needs more energy to loop on itself, and so it makes those instances of me, pumping more carbon into the atmosphere, so as to have more energy to sustain more instances of me. Stands to reason, man. It is all working smoothly. I don’t understand what you are moaning about’.

Whatever that godly framework life says, I am still interested in studying particular instances of what happens. One of them is my business concept of EneFin. See « Which salesman am I? » for what I think is the last case of me being fully verbal about it. Long story short, the idea consists in crowdfunding capital for small, local operators of power systems based on renewable energies, by selling shares in equity, or units of corporate debt, in bundles with tradable claims on either the present output of energy, or the future one. In simple terms, you buy from that supplier of energy tradable claims on, for example, 2 000 kWh, and you pay the regular market price; still, within that price, the energy strictly speaking comes with a juicy discount. The rest of the actual amount of money you have paid buys you shares in your supplier’s equity.

The idea in that simplest form is largely based on two simple observations about the energy bills we pay. In most countries (at least in Europe), our energy bills are made of two components: the (slightly) variable value of the energy actually supplied, and a fixed part labelled sometimes as ‘maintenance of the grid’ or similar. Besides, small users (e.g. households) usually pay a much higher unitary price per kWh than large, institutional-scale buyers (factories, office buildings etc.). In my EneFin concept, a local supplier of renewable energy makes a deal with its local customers to sell them electricity at a fair, market price, with participations in equity on top of the electricity.

That would be a classical crowdfunding scheme, such as you can find with StartEngine, for example. I want to give it some additional, financial spin. Classical crowdfunding has a weakness: low liquidity. The participatory shares you buy via crowdfunding are usually non-tradable, and they create a quasi-cooperative bond between investors and investees. Where I come from, i.e. in Central Europe, we are quite familiar with cooperatives. At first sight, they look like a form of institutional heaven, compared to those big, ugly, capitalistic corporations. Still, after you have waved away that first mist, cooperatives turn out to be very exposed to embezzlement, and to abuse of managerial power. Besides, they are quite weak when competing for capital against corporate structures. I want to create a highly liquid transactional platform, with those investments being as tradable as possible, and use financial liquidity both as a shield against managerial excesses, and as a competitive edge for those small ventures.

My idea is to assure liquidity via a FinTech solution similar to that used by Katipult Technology Corp., i.e. to create some kind of virtual currency (note: virtual currency is not absolutely the same as cryptocurrency; cousins, but not twins, so to say). Units of the currency would correspond to those complex contracts « energy plus equity ». First, you create an account with EneFin, i.e. you buy a certain amount of the virtual currency used inside the EneFin platform. I call the units ‘tokens’ to simplify. Next, you pick your complex contracts, in the basket of those offered by local providers of energy. You buy those contracts with the tokens you have already acquired. Now, you change your mind. You want to withdraw your capital from supplier A, and move it to supplier H, which you haven’t considered so far. You move your tokens from A to H, even with a mobile app. It means that the transactional platform – the EneFin one – buys from you the corresponding amount of equity of A and tries to find for you some available equity in H. You can also move your tokens completely out of investment in those suppliers of energy. You can free your money, so to say. Just as simple: you move them out, even with a movement of your thumb on the screen. The EneFin platform buys from you the shares you have moved out of.

You have yet another idea. Instead of investing your tokens into the equity of a provider of energy, you want to lend them. You move your tokens to the field ‘lending’, you study the interest rates offered on the transactional platform, and you close the deal. Now, the corresponding number of tokens represents securitized (thus tradable) corporate debt.

Question: why the hell bother with a virtual currency, possibly a cryptocurrency, instead of just using good old fiat money? At this point, I am reaching to the very roots of the Bitcoin, the grandpa of all cryptocurrencies (or so they say). Question: what amount of money do you need to finance 20 transactions of equal unitary value P? Answer: it depends on how frequently you monetize them. Imagine that the EneFin app offers you an option like ‘Monetize vs. Don’t Monetize’. As long as – with each transaction you do on the platform – you stick to the ‘Don’t Monetize’ option, your transactions remain recorded inside the transactional platform, and so there is recorded movement in tokens, but there is no monetary outcome, i.e. your monetary balance strictly speaking, for example that in €, does not change. It is only when you hit the ‘Monetize’ button in the app that the current bottom line of your transactions inside the platform is converted into « official » money.
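
A toy sketch of that ‘Monetize vs. Don’t Monetize’ logic, with all names and numbers being hypothetical illustrations, not the EneFin implementation (which does not exist yet):

```python
# Token movements are recorded inside the platform; fiat money moves
# only when the user hits 'Monetize'. Purely illustrative.
class TokenAccount:
    def __init__(self, tokens):
        self.tokens = tokens        # balance inside the platform
        self.monetized_eur = 0.0    # fiat actually converted out

    def transact(self, delta_tokens):
        # recorded movement in tokens, no monetary outcome yet
        self.tokens += delta_tokens

    def monetize(self, eur_per_token):
        # only now does the internal bottom line become official money
        self.monetized_eur += self.tokens * eur_per_token
        self.tokens = 0

acc = TokenAccount(tokens=100)
acc.transact(-30)                 # buy a contract
acc.transact(+50)                 # sell another one
acc.monetize(eur_per_token=2.0)   # 120 tokens convert to 240 EUR
```

The point of the sketch is the asymmetry: any number of `transact` calls can happen without touching fiat money, which is exactly why the demand for money does not grow with the number of transactions.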

The virtual currency in the EneFin scheme would serve to allow a high level of liquidity (more transactions in a unit of time), without provoking the exactly corresponding demand for money. What connection with artificial intelligence? I want to study the possible absorption of such a scheme in the market of energy, and in the related financial markets, as a manifestation of collective intelligence. I imagine two intelligent framework structures: one incumbent (the existing markets) and one emerging (the EneFin platform). Both are intelligent structures to the extent that they technically can produce many alternative instances of themselves, and thus intelligently adapt to their environment by testing those instances and utilising the recorded local errors.

In terms of an algorithm of neural network, that intelligent adaptation can be manifest, for example, as an optimization in two coefficients: the share of energy traded via EneFin in the total energy supplied in the given market, and the capitalization of EneFin as a share in the total capitalization of the corresponding financial markets. Those two coefficients can be equated to weights in a classical MLP (Multilayer Perceptron) network, and the perceptron network could work around them. Of course, the issue can be approached from a classical methodological angle, as a general equilibrium to assess via « normal » econometric modelling. Still, what I want is precisely what I hinted at in « Pardon my French, but the thing is really intelligent » and « Ce petit train-train des petits signaux locaux d’inquiétude »: I want to study the very process of adaptation and modification in those intelligent framework structures. I want to know, for example, how much experimentation those structures need to form something really workable, i.e. an EneFin platform with serious business going on, and, at the same time, that business contributing to the development of renewable energies in the given region of the world. Do those framework structures have enough local resources – mostly capital – for sustaining the number of alternative instances needed for effective learning? What kind of factors can block learning, i.e. drive the framework structure either into deliberate ignorance of local errors or into panic?

Here is an example of a more exact theoretical issue. In a typical economic model, things are connected. When I pull on the string ‘capital invested in fixed assets’, I can see a valve open, with ‘Lifecycle in incumbent technologies’, and some steam rushes out. When I push the ‘investment in new production capacity’ button, I can see something happening in the ‘Jobs and employment’ department. In other words, variables present in economic systems mutually constrain each other. Just some combinations work, others just don’t. Now, the thing I have already discovered about them Multilayer Perceptrons is that as soon as I add some constraint on the weights assigned to input data, for example when I swap ‘random’ for ‘erandom’, the scope of possible structural functions leading to effective learning dramatically shrinks, and the likelihood of my neural network falling into deliberate ignorance or into panic just swells like hell. What degree of constraint on those economic variables is tolerable in the economic system conceived as a neural network, thus as a framework intelligent structure?

There are some general guidelines I can see for building a neural network that simulates those things. Creating local power systems, based on microgrids connected to one or more local sources of renewable energies, can be greatly enhanced with efficient financing schemes. The publicly disclosed financial results of companies operating in those segments – such as Tesla[1], Vivint Solar[2], FirstSolar[3], or 8Point3 Energy Partners[4] – suggest that business models in that domain are only emerging, and are far from being battle-tested. There is still a long way to pave towards well-rounded business practices as regards such local power systems, both profitable economically and sustainable socially.

The basic assumptions of a neural network in that field are essentially behavioural. Firstly, consumption of energy is greatly predictable at the level of individual users. The size of a market in energy changes as the number of users changes. The output of energy needed to satisfy those users’ needs, and the corresponding capacity to install, are largely predictable in the long run. Consumers of energy use a basket of energy-consuming technologies. The structure of this basket determines their overall consumption, and is determined, in turn, by long-run social behaviour. Changes over time in that behaviour can be represented as a social game, where consecutive moves consist in purchasing, or disposing of, a given technology. Thus, a game-like process of relatively slow social change generates a relatively predictable output of energy, and a demand thereof. Secondly, the behaviour of investors in any financial market, crowdfunding or other, is comparatively more volatile. Investment decisions are being taken, and modified, at a much faster pace than decisions about the basket of technologies used in everyday life.

The financing of relatively small, local power systems, based on renewable energies and connected by local microgrids, implies an interplay of the two above-mentioned patterns, namely the relatively slower transformation in the technological base, and the quicker, more volatile modification of investors’ behaviour in financial markets.
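That interplay of two paces can be mocked up in a few lines – purely synthetic random series, one slow-moving for the technology basket, one volatile for investors’ capital:

```python
import numpy as np

rng = np.random.default_rng(42)
months = 120

# Slow game: the basket of technologies drifts by small, infrequent moves.
tech_basket = 1.0 + np.cumsum(rng.choice([-0.01, 0.0, 0.01], size=months))

# Fast game: investors' capital reallocations, much more volatile.
capital = 1.0 + np.cumsum(rng.normal(0.0, 0.05, size=months))

print(np.std(np.diff(tech_basket)), np.std(np.diff(capital)))
```

The month-to-month volatility of the investors’ series comes out several times greater than that of the technology series – the two timescales a simulation of such financing would have to couple.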

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and become my patron. If you decide so, I will be grateful for suggesting me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] last access December, 18th, 2018

[2] last access December, 18th, 2018

[3] last access December, 18th, 2018

[4] last access December, 18th, 2018

Pardon my French, but the thing is really intelligent

My editorial on You Tube

And so I am meddling with neural networks. It had to come. It just had to. I started with me having many ideas to develop at once. Routine stuff with me. Then, the Editor-in-Chief of the ‘Energy Economics’ journal returned the manuscript of my article on the energy-efficiency of national economies, which I had submitted to them, with a general remark that I should work both on the clarity of my hypotheses, and on the scientific spin of my empirical research. In short, Mr Wasniewski, linear models tested with Ordinary Least Squares are a bit oldie, if you catch my drift. Bloody right, Mr Editor-In-Chief. Basically, I agree with your remarks. I need to move out of my cavern, towards the light of progress, and get acquainted with the latest fashion. The latest fashion we are wearing this season is artificial intelligence, machine learning, and neural networks.

It comes handy, to the extent that I obsessively meddle with the issue of collective intelligence, and am dreaming about creating a model of human social structure acting as collective intelligence, sort of a beehive. Whilst the casting for a queen in that hive remains open, and is likely to stay this way for a while, I am digging into the very basics of neural networks. I am looking in the Python department, as I have already got a bit familiar with that environment. I found an article by James Loy, entitled “How to build your own Neural Network from scratch in Python”. The article looks a bit like sourcing from another one, available at the website of ProBytes Software, thus I use both to develop my understanding. I pasted the whole algorithm by James Loy into my Python Shell, made it run with an ‘enter’, and I am waiting for what it is going to produce. In the meantime, I am being verbal about my understanding.

The author declares he wants to do more or less the same thing that I do, namely to understand neural networks. He constructs a simple algorithm for a neural network. It starts with defining the neural network as a class, i.e. as a callable object that acts as a factory for new instances of itself. In the neural network defined as a class, that algorithm starts with calling the constructor function ‘__init__’, which constructs an instance ‘self’ of that class. It goes like ‘def __init__(self, x, y):’. In other words, the class ‘Neural network’ generates instances ‘self’ of itself, and each instance is essentially made of two variables: input x, and output y. The ‘x’ is declared as input variable through the ‘self.input = x’ expression. Then, the output of the network is defined in two steps. Yes, the ‘y’ is generally the output, only in a neural network, we want the network to predict a value of ‘y’, thus some kind of y^. What we have to do is to define ‘self.y = y’, feed the real x-s and the real y-s into the network, and expect the latter to turn out some y^-s.

Logically, we need to prepare a vessel for holding the y^-s. The vessel is defined as ‘self.output = np.zeros(y.shape)’. The ‘shape’ attribute gives the dimensions of an array as a tuple – a table, for those mildly fond of maths. What are the dimensions of ‘y’ in that ‘y.shape’? They are simply those of the empirical ‘y’ fed into the network. The weights, in turn, are defined right after the ‘self.input = x’ has been said: ‘self.weights1 = np.random.rand(self.input.shape[1],4)’ fires off, closely followed by ‘self.weights2 = np.random.rand(4,1)’. All in all, the entire class of ‘Neural network’ is defined in the following form:

import numpy as np   # needed for the ‘np’ prefix used throughout

class NeuralNetwork:

    def __init__(self, x, y):
        self.input    = x
        self.weights1 = np.random.rand(self.input.shape[1], 4)
        self.weights2 = np.random.rand(4, 1)
        self.y        = y
        self.output   = np.zeros(self.y.shape)

The hidden layer of each instance in that neural network works through a two-dimensional tuple (table) of weights, made of as many rows as there are input variables, and four columns, whilst ‘self.output’ is initialized as a table of the same shape as the actual ‘y’ (I hope I got it correctly). Initially, the output is filled with zeros, so as to make room for something more meaningful. The predicted y^-s are supposed to jump into those empty sockets, held ready by the zeros. The ‘random.rand’ expression, associated with ‘weights’, means that the network is supposed to assign randomly different levels of importance to different x-s fed into it.

Anyway, the next step is to instruct my snake (i.e. Python) what to do next, with that class ‘Neural Network’. It is supposed to do two things: feed data forward, i.e. make those neurons work on predicting the y^-s, and then check itself by an operation called backpropagation of errors. The latter consists in comparing the predicted y^-s with the real y-s, measuring the discrepancy as a loss of information, updating the initial random weights with conclusions from that measurement, and doing it all again, and again, and again, until the error runs down to very low values. The weights applied by the network in order to generate that lowest possible error are the best the network can do in terms of learning.

The feeding forward of predicted y^-s goes on in two steps, or in two layers of neurons, one hidden, and one final. They are defined as:

def feedforward(self):
    self.layer1 = sigmoid(, self.weights1))
    self.output = sigmoid(, self.weights2))

The ‘sigmoid’ part means sigmoid function, AKA logistic function, expressed as y = 1/(1 + e^(-x)), where, at the end of the day, the y always falls somewhere between 0 and 1, and the ‘x’ is not really the empirical, real ‘x’, but the ‘x’ multiplied by a weight, ranging between 0 and 1 as well. The sigmoid function is good for testing the weights we apply to various types of input x-es. Whatever kind of data you take: populations measured in millions, or consumption of energy per capita, measured in kilograms of oil equivalent, the basic sigmoid function y = 1/(1 + e^(-x)) will always yield a value between 0 and 1. This function essentially normalizes any data.
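A quick check of that normalizing property – feed the sigmoid numbers on wildly different scales, and everything lands in the [0, 1] interval (the sample values are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Arbitrary sample values on very different scales:
population_mln = 38.0            # a population, in millions
energy_kgoe = 2490.0             # energy use per capita, kg of oil equivalent
for v in (population_mln, energy_kgoe, -3.5, 0.0):
    print(v, sigmoid(v))         # every result lands between 0 and 1
```

Note that for very large positive inputs the result is numerically indistinguishable from 1, which is why, in practice, raw data gets weighted down before entering the sigmoid.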

Now, I want to take differentiated data, like population as headcount, energy consumption in them kilograms of whatever oil equals to, and the supply of money in standardized US dollars. Quite a mix of units and scales of measurement. I label those three as, respectively, xa, xb, and xc. I assign them weights ranging between 0 and 1, so that the sum of weights never exceeds 1. In plain language it means that for every vector of observations made of xa, xb, and xc I take a pinchful of xa, then a zest of xb, and a spoon of xc. I make them into x = wa*xa + wb*xb + wc*xc, I give it a minus sign and put it as an exponent for Euler’s constant.

That yields y = 1/(1 + e^(-(wa*xa + wb*xb + wc*xc))). Long, but meaningful to the extent that now, my y is always to be found somewhere between 0 and 1, and I can experiment with various weights for my various shades of x, and look what it gives in terms of y.

In the algorithm above, the ‘’ function conveys the idea of weighing our x-s. With two dimensions, like the input signal ‘x’ and its weight ‘w’, the ‘’ function yields a multiplication of those two one-dimensional matrices, exactly in the x = wa*xa + wb*xb + wc*xc drift.
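The same weighing, spelled out numerically, with three made-up observations and weights summing up to 1 – ‘’ reproduces exactly the wa*xa + wb*xb + wc*xc sum:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([38.0, 2.49, 1.2])   # made-up observations: xa, xb, xc
w = np.array([0.2, 0.5, 0.3])     # weights summing up to 1

weighted = np.dot(x, w)           # exactly wa*xa + wb*xb + wc*xc
print(weighted, sigmoid(weighted))
```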

Thus, the first really smart layer of the network, the hidden one, takes the empirical x-s, weighs them with random weights, and makes a sigmoid of that. The next layer, the output one, takes the sigmoid-calculated values from the hidden layer, and applies the same operation to them.

One more remark about the sigmoid. You can put something else instead of 1 in the numerator. Then, the sigmoid will yield your data normalized over that something. If you have a process that tends towards a level of saturation, e.g. number of psilocybin parties per month, you can put that level in the numerator. On the top of that, you can add parameters to the denominator. In other words, you can replace the 1 + e^(-x) with ‘b + e^(-k*x)’, where b and k can be whatever seems to make sense for you. With that specific spin, the sigmoid is good for simulating anything that tends towards saturation over time. Depending on the parameters in the denominator, the shape of the corresponding curve will change. Usually, ‘b’ works well when taken as a fraction of the numerator (the saturation level), and the ‘k’ seems to be behaving meaningfully when comprised between 0 and 1.
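Here is that generalized sigmoid at work, with an assumed saturation level of 40 of those parties a month, and arbitrary b and k:

```python
import numpy as np

def saturating(x, s=40.0, b=1.0, k=0.3):
    # s: assumed saturation level; b, k: arbitrary shape parameters
    return s / (b + np.exp(-k * x))

months = np.arange(0, 36)
curve = saturating(months)
print(curve[0], curve[-1])   # starts at s/(b+1), creeps towards s/b
```

The curve rises monotonically and flattens out near s/b, which is the saturation behaviour described above.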

I return to the algorithm. Now, as the network has generated a set of predicted y^-s, it is time to compare them to the actual y-s, and to evaluate how much is there to learn yet. We can use any measure of error; still, most frequently, them algorithms go after the simplest one, namely the Mean Square Error or, with a square root slapped on top of the summed squares, E = [(y1 – y^1)^2 + (y2 – y^2)^2 + … + (yn – y^n)^2]^0.5. Yes, it is the Euclidean distance between the set of actual y-s and that of predicted y^-s. Yes, it is also, give or take the averaging, the standard deviation of predicted y^-s from the actual distribution of empirical y-s.
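The computation itself, on three made-up pairs of actual and predicted values:

```python
import numpy as np

y_actual = np.array([5.66, 5.72, 5.64])     # observed values (invented for the example)
y_hat = np.array([5.50, 5.80, 5.60])        # predictions

loss = np.sqrt(np.sum((y_actual - y_hat) ** 2))
print(loss)   # the Euclidean distance between the two sets
```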

In this precise algorithm, the author goes down another avenue: he takes the actual differences between observed y-s and predicted y^-s, and then multiplies them by the sigmoid derivative of predicted y^-s. Then he takes the transpose of a uni-dimensional matrix of those (y – y^)*(y^)’ terms, with (y^)’ standing for the derivative. It goes like:

def backprop(self):
    # application of the chain rule to find the derivative of the loss function with respect to weights2 and weights1
    d_weights2 = np.dot(self.layer1.T, 2*(self.y - self.output) * sigmoid_derivative(self.output))
    d_weights1 = np.dot(self.input.T, np.dot(2*(self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1))

    # update the weights with the derivative (slope) of the loss function
    self.weights1 += d_weights1
    self.weights2 += d_weights2

def sigmoid(x):
    return 1.0/(1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1.0 - x)

I am still trying to wrap my mind around the reasons for taking this specific approach to the backpropagation of errors. The derivative of a sigmoid y = 1/(1 + e^(-x)) is y’ = [1/(1 + e^(-x))]*{1 – [1/(1 + e^(-x))]} and, as any derivative, it measures the slope of change in y. When I do (y1 – y^1)*(y^1)’ + (y2 – y^2)*(y^2)’ + … + (yn – y^n)*(y^n)’ it is as if I were taking some kind of weighted average. That weighted average can be understood in two alternative ways. Either it is standard deviation of y^ from y, weighted with the local slopes, or it is a general slope weighted with local deviations. Now, when I take the transpose of a matrix like {(y1 – y^1)*(y^1)’ ; (y2 – y^2)*(y^2)’ ; … (yn – y^n)*(y^n)’}, I flip it from a row into a column – transposing swaps rows and columns, it does not invert the terms – so that the ‘.dot’ product can pair each of those error terms with the corresponding inputs and sum them up. Then, I feed the ‘.dot’ product into the neural network with the ‘+=’ operator. The latter means that in the next round of calculations, the network can do whatever it wants with those terms. Hmmweeellyyeess, makes some sense. I don’t know what exact sense is that, but it has some mathematical charm.
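One thing worth checking numerically: in the algorithm, ‘sigmoid_derivative’ is fed the already-computed sigmoid activation, which is why it reads x*(1 – x). A quick comparison with a finite-difference estimate confirms the formula:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(a):
    # expects the already-computed sigmoid activation a, not the raw x
    return a * (1.0 - a)

x = 0.7
a = sigmoid(x)
numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6   # finite-difference slope
print(sigmoid_derivative(a), numeric)   # the two estimates agree
```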

Now, I try to apply the same logic to the data I am working with in my research. Just to give you an idea, I show some data for just one country: Australia. Why Australia? Honestly, I don’t see why it shouldn’t be. Quite a respectable place. Anyway, here is that table. GDP per unit of energy consumed can be considered as the target output variable y, and the rest are those x-s.

Table 1 – Selected data regarding Australia

Year | y | X1 | X2 | X3 | X4 | X5
y – GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent)
X1 – Share of aggregate amortization in the GDP
X2 – Supply of broad money, % of GDP
X3 – Energy use (tons of oil equivalent per capita)
X4 – Urban population as % of total population
X5 – GDP per capita, ‘000 USD
1990 5,662020744 14,46 54,146 5,062 85,4 26,768
1991 5,719765048 14,806 53,369 4,928 85,4 26,496
1992 5,639817305 14,865 56,208 4,959 85,566 27,234
1993 5,597913126 15,277 56,61 5,148 85,748 28,082
1994 5,824685357 15,62 59,227 5,09 85,928 29,295
1995 5,929177604 15,895 60,519 5,129 86,106 30,489
1996 5,780817973 15,431 62,734 5,394 86,283 31,566
1997 5,860645225 15,259 63,981 5,47 86,504 32,709
1998 5,973528571 15,352 65,591 5,554 86,727 33,789
1999 6,139349354 15,086 69,539 5,61 86,947 35,139
2000 6,268129418 14,5 67,72 5,644 87,165 35,35
2001 6,531818805 14,041 70,382 5,447 87,378 36,297
2002 6,563073754 13,609 70,518 5,57 87,541 37,047
2003 6,677186947 13,398 74,818 5,569 87,695 38,302
2004 6,82834791 13,582 77,495 5,598 87,849 39,134
2005 6,99630318 13,737 78,556 5,564 88 39,914
2006 6,908872246 14,116 83,538 5,709 88,15 41,032
2007 6,932137612 14,025 90,679 5,868 88,298 42,022
2008 6,929395465 13,449 97,866 5,965 88,445 42,222
2009 7,039061961 13,698 94,542 5,863 88,59 41,616
2010 7,157467568 12,647 101,042 5,649 88,733 43,155
2011 7,291989544 12,489 100,349 5,638 88,875 43,716
2012 7,671605162 13,071 101,852 5,559 89,015 43,151
2013 7,891026044 13,455 106,347 5,586 89,153 43,238
2014 8,172929207 13,793 109,502 5,485 89,289 43,071

In his article, James Loy reports the cumulative error over 1500 iterations of training, with just four series of x-s, made of four observations. I do something else. I am interested in how the network works, step by step. I do step-by-step calculations with data from that table, following that algorithm I have just discussed. I do it in Excel, and I observe the way that the network behaves. I can see that the hidden layer is really hidden, to the extent that it does not produce much in terms of meaningful information. What really spins is the output layer, thus, in fact, the connection between the hidden layer and the output. In the hidden layer, all the predicted sigmoid y^ are equal to 1, and their derivatives are automatically 0. Still, in the output layer, the second random distribution of weights overlaps with the first one from the hidden layer, and, for some years, those output sigmoids demonstrate tiny differences from 1, their derivatives becoming very small positive numbers. As a result, tiny, local (yi – y^i)*(y^i)’ expressions are being generated in the output layer, and they modify the initial weights in the next round of training.

I observe the cumulative error (loss) in the first four iterations. In the first one it is 0,003138796, the second round brings 0,000100228, the third round displays 0,0000143, and the fourth one 0,005997739. Looks like an initial reduction of cumulative error, by one order of magnitude at each iteration, and then, in the fourth round, it jumps up to the highest cumulative error of the four. I extend the number of those hand-driven iterations from four to six, and I keep feeding the network with random weights, again and again. A pattern emerges. The cumulative error oscillates. Sometimes the network drives it down, sometimes it swings it up.

F**k! Pardon my French, but just six iterations of that algorithm show me that the thing is really intelligent. It generates an error, it drives it down to a lower value, and then, as if it was somehow dysfunctional to jump to conclusions that quickly, it generates a greater error in consecutive steps, as if it was considering more alternative options. I know that data scientists, should they read this, can slap their thighs at that elderly uncle (i.e. me), fascinated with how a neural network behaves. Still, for me, it is science. I take my data, I feed it into a machine that I see for the first time in my life, and I observe intelligent behaviour in something written on less than one page. It experiments with weights attributed to the stimuli I feed into it, and it evaluates its own error.

Now, I understand why that scientist from MIT, Lex Fridman, says that building artificial intelligence brings insights into how the human brain works.


Combinatorial meaning and the cactus

My editorial on You Tube

I am back into blogging, after over two months of pausing. This winter semester I am going, probably, for a record workload in terms of classes: 630 hours in total. October and November look like an immersion time, when I had to get into gear for that amount of teaching. I noticed one thing that I haven’t exactly been aware of, so far, or maybe not as distinctly as I am now: when I teach, I love freestyling about the topic at hand. Whatever hand of nice slides I prepare for a given class, you can bet on me going off the beaten tracks and into the wilderness of intellectual quest, like by the mid-class. I mean, I have nothing against Power Point, but at some point it becomes just so limiting… I remember that conference, one year ago, when the projector went dead during my panel (i.e. during the panel when I was supposed to present my research). I remember that mixed, and shared feeling of relief and enjoyment in people present in the room: ‘Good. Finally, no slides. We can like really talk science’.

See? Once again, I am going off track, and that in just one paragraph of writing. You can see what I mean when I refer to me going off track in class. Anyway, I discovered one more thing about myself: freestyling and sailing uncharted intellectual waters has a cost, and this is a very clear and tangible biological cost. After a full day of teaching this way I feel as if my brain was telling me: ‘Look, bro. I know you would like to write a little, but sorry: no way. Them synapses are just tired. You need to give me a break’.

There is a third thing I have discovered about myself: that intense experience of teaching makes me think a lot. I cannot exactly put all this in writing on the spot, fault of fresh neurotransmitter available, still all that thinking tends to crystallize over time and with some patience I can access it later. Later means now, as it seems. I feel that I have crystallized enough and I can start to pull it out into the daylight. The « it » consists, mostly, in a continuous reflection on collective intelligence. How are we (possibly) smart together?

As I have been thinking about it, three events combined and triggered in me a string of more specific questions. I watched another podcast featuring Jordan Peterson, whom I am a big fan of, and who raised the topic of the neurobiological context of meaning. How does our brain make meaning, and how does it find meaning in sensory experience? On the other hand, I have just finished writing the manuscript of an article on the energy-efficiency of national economies, which I have submitted to the ‘Energy Economics’ journal, and which, almost inevitably, made me work with numbers and statistics. As I had been doing that empirical research, I found out something surprising: the most meaningful econometric results came to the surface when I transformed my original data into local coefficients of an exponential progression that hypothetically started in 1989. Long story short, these coefficients are essentially growth rates, which behave in a peculiar way, due to their arithmetical structure: they decrease very quickly over time, whatever the source of raw empirical observation, as if they were representing weakening shock waves sent by an explosion in 1989.

Different transformations of the same data, in that research of mine, produced different statistical meanings. I am still working up a real understanding of what it exactly means, by the way. As I was putting that together with Jordan Peterson’s thoughts on meaning as a biological process, I asked myself: what is the exact meaning of the fact that we, as scientific community, assign meaning to statistics? How is it connected with collective intelligence?

I think I need to start more or less where Jordan Peterson moves, and ask ‘What is meaning?’. No, not quite. The ontological type, I mean the ‘What?’ type of question, is a mean beast. Something like a hydra: you cut the head, namely you explain the thing, you think that Bob’s your uncle, and a new head pops up, like out of nowhere, and it bites you, where you know. The ‘How?’ question is a bit more amenable. This one is like one of those husky dogs. Yes, it is semi wild, and yes, it can bite you, but once you tame it, and teach it to pull that sleigh, it will just pull. So I ask ‘How is meaning?’. How does meaning occur?

There is a particular type of being smart together, which I have been specifically interested in, for like the last two months. It is the game-based way of being collectively intelligent. The theory of games is a well-established basis for studying human behaviour, including that of whole social structures. As I was thinking about it, there is a deep reason for that. Social interactions are, well, interactions. It means that I do something and you do something, and those two somethings are supposed to make sense together. They really do on one condition: my something needs to be somehow conditioned by how your something unfolds, and vice versa. When I do something, I come to a point when it becomes important for me to see your reaction to what I do, and only when I have seen it will I further develop on my action.

Hence, I can study collective action (and interaction) as a sequence of moves in a game. I make my move, and I stop moving, for a moment, in order to see your move. You make yours, and it triggers a new move in me, and so the story goes further on in time. We can experience it very vividly in negotiations. With any experience in having serious talks with other people, thus when we negotiate something, we know that it is pretty counter-efficient to keep pushing our point in an unbroken stream of speech. It is much more functional to pace our strategy into separate strings of argumentation, and between them, we wait for what the other person says. I have already given a first theoretical go at the thing in « Couldn’t they have predicted that? ».

This type of social interaction, when we pace our actions into game-like moves, is a way of being smart together. We can come up with new solutions, or with the understanding of new problems – or a new understanding of old problems, as a matter of fact – and we can do it starting from positions of imperfect agreement and imperfect coordination. We try to make (apparently) divergent points, or we pursue (apparently) divergent goals, and still, if we accept to wait for each other’s reaction, we can coordinate and/or agree about those divergences, so as to actually figure out, and do, some useful s**t together.

What connection with the results of my quantitative research? Let’s imagine that we play a social game, and each of us makes their move, and then they wait for the moves of other players. The state of the game at any given moment can be represented as the outcome of past moves. The state of reality is like a brick wall, made of bricks laid one by one, and the state of that brick wall is the outcome of the past laying of bricks. In the general theory of science, it is called hysteresis. There is a mathematical function, reputed to represent that thing quite nicely: the exponential progression. On a timeline, I define equal intervals. To each period of time, I assign a value y(t) = e^(t*a), where ‘t’ is the ordinal of the time period, ‘e’ is a mathematical constant, the base of natural logarithm, e ≈ 2,71828, and ‘a’ is what we call the exponential coefficient.
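The extraction of those local coefficients is a one-liner: given an observed level y in year t (counted from the hypothetical 1989 start), the coefficient solving y = e^(t*a) is a = ln(y)/t. With a few rounded values picked from the Australian table above:

```python
import math

# Rounded efficiency levels from the Australian table, years counted from 1989
efficiency = {1990: 5.66, 2000: 6.27, 2014: 8.17}

for year, level in efficiency.items():
    t = year - 1989
    a = math.log(level) / t      # the local coefficient solving level = e**(t*a)
    print(year, round(a, 4))     # the coefficient shrinks fast as t grows
```

Even though the efficiency level itself grows, the local coefficient collapses as t grows, which is exactly the ‘weakening shock wave’ behaviour described earlier.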

There is something else to that y = e^(t*a) story. If we think like in terms of a broader picture, and assume that time is essentially what we imagine it is, the ‘t’ part can be replaced by any number we imagine. Then, the Euler’s formula steps in: e^(i*x) = cos x + i*sin x. If you paid attention in math classes, at high school, you might remember that sine and cosine, the two trigonometric functions, have a peculiar property. As they refer to angles, at the end of the day they refer to a full circle of 360°. It means they go in a circle, thus in a cycle, only shifted by a quarter of that circle with respect to each other: when the sine is at its peak, the cosine passes through zero, and the other way round. We can think about each occurrence we experience – the ‘x’ – as a nexus of two, mutually offset cycles, and they can be represented as, respectively, the sine, and the cosine of that occurrence ‘x’. When I grow in height (well, when I used to), my current height can be represented as the nexus of natural growth (sine), and natural depletion with age (cosine), that sort of things.
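The formula is easy to verify numerically, and so is the circle property of sine and cosine:

```python
import cmath
import math

x = 1.2
left = cmath.exp(1j * x)                    # e**(i*x)
right = complex(math.cos(x), math.sin(x))   # cos x + i*sin x
print(left, right)                          # the two sides coincide

# sine and cosine trace the same circle, a quarter-turn apart:
print(math.sin(x) ** 2 + math.cos(x) ** 2)
```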

Now, let’s suppose that we, as a society, play two different games about energy. One game makes us more energy efficient, ‘cause we know we should (see Settlement by energy – can renewable energies sustain our civilisation?). The other game makes us max out on our intake of energy from the environment (see Technological Change as Intelligent, Energy-Maximizing Adaptation). At any given point in time, the incremental change in our energy efficiency is the local equilibrium between those two games. Thus, if I take the natural logarithm of our energy efficiency at a given point in space-time, thus the coefficient of GDP per kg of oil equivalent in energy consumed, that natural logarithm is the outcome of those two games, or, from a slightly different point of view, it descends from the number of consecutive moves made (the ordinal of time period we are currently in), and from a local coefficient – the equivalent of ‘i’ in the Euler’s formula – which represents the pace of building up the outcomes of past moves in the game.

I go back to that ‘meaning’ thing. The consecutive steps ‘t’ in an exponential progression y(t) = e^(t*a) correspond to successive rounds of moves in the games we play. There is a core structure to observe: the length of what I call ‘one move’, and which means a sequence of actions that each person involved in the interaction carries out without pausing and waiting for the reaction observable in other people in the game. When I say ‘length’, it involves a unit of measurement, and here, I am quite open. It can be a length of time, or the number of distinct actions in my sequence. The length of one move in the game determines the pace of the game, and this, in turn, sets the timeframe for the whole game to produce useful results: solutions, understandings, coordinated action etc.

Now, where the hell is any place for ‘meaning’ in all that game stuff? My view is the following: in social games, we sequence our actions into consecutive moves, with some waiting-for-reaction time in between, because we ascribe meaning to those sub-sequences that we define as ‘one move’. The way we process meaning matters for the way we play social games.

I am a scientist (well, I hope), and for me, meaning occurs very largely as I read what other people have figured out. So I stroll down the discursive avenue named ‘neurobiology of meaning’, welcomingly lit by the lampposts of Science Direct. I am calling by an article by Lee M. Pierson, and Monroe Trout, entitled ‘What is consciousness for?’[1]. The authors formulate a general hypothesis, unfortunately not supported (yet?) with direct empirical check, that consciousness had been occurring, back in the day, I mean like really back in the day, as cognitive support of volitional movement, and evolved, since then, into more elaborate applications. Volitional movement is non-automatic, i.e. decisions have to be made in order for the movement to have any point. It requires quick assemblage of data on the current situation, and consciousness, i.e. the awareness of many abstract categories at the same time, could be the solution.

According to that approach, meaning occurs as a process of classification in the neurologically stored data that we need to use virtually simultaneously in order to do something as fundamental as reaching for another can of beer. Classification of data means grouping into sets. You have a random collection of data from sensory experience, like a homogenous cloud of information. You know, the kind you experience after a particularly eventful party. Some stronger experiences stick out: the touch of cold water on your naked skin, someone’s phone number written on your forearm with a lipstick etc. A question emerges: should you call this number? It might be your new girlfriend (i.e. the girlfriend whom you don’t consciously remember as your new one, but whom you’d better call back if you don’t want your car splashed with acid), or it might be a drug dealer whom you’d better not call back. You need to group the remaining data in functional sets so as to take the right action.

So you group, and the challenge is to make the right grouping. You need to collect the not-quite-clear-in-their-meaning pieces of information (Whose lipstick had that phone number been written with? Can I associate a face with the lipstick? For sure, the right face?). One grouping of data can lead you to a happy life, another one can lead you into deep s**t. It could be handy to sort of quickly test many alternative groupings as for their elementary coherence, i.e. hold all that data in front of you, for a moment, and contemplate flexibly many possible connections. Volitional movement is very much about that. You want to run? Good. It would be nice not to run into something that could hurt you, so it would be good to cover a set of sensory data, combining something present (what we see), with something we remember from the past (that thing on the 2 o’clock azimuth stings like hell), and sort of quickly turn and return all that information so as to steer clear from that cactus, as we run.

Thus, as I follow the path set by Pierson and Trout, meaning occurs as the grouping of data in functional categories, and it occurs when we need to do it quickly and sort of under pressure of getting into trouble. I am moving on to the level of collective intelligence in human social structures. In those structures, meaning, i.e. the emergence of meaningful distinctions communicable between human beings and possible to formalize in language, would occur as said structures need to figure something out quickly and under uncertainty, and meaning would allow putting together the types of information that are normally compartmentalized and fragmented.

From that perspective, one meaningful move in a game encompasses small pieces of action which we intuitively guess we should immediately group together. Meaningful moves in social games are sequences of actions, which we feel like putting immediately back to back, without pausing and letting the other player do their thing. There is some sort of pressing immediacy in that grouping. We guess we just need to carry out those actions smoothly one after the other, in an unbroken sequence. Wedging an interval of waiting time in between those actions could put our whole strategy at peril, or we just think so.

When I apply this logic to energy efficiency, I think about business strategies regarding innovation in products and technologies. When we launch a new product, or implement a new technology, there are fixed patterns to follow. When you start beta testing a new mobile app, for example, you don’t stop in the middle of testing. You carry out the tests up to their planned schedule. When you start launching a new product (reminder: more products made on the same energy base mean greater energy efficiency), you keep launching until you reach some sort of conclusive outcome, like unequivocal success or failure. Social games we play around energy efficiency could very well be paced by this sort of business-strategy-based moves.

I pick up another article, that by Friedemann Pulvermüller (2013[2]). The main thing I see right from the beginning is that apparently, neurology is progressively dropping the idea of one, clearly localised area in our brain, in charge of semantics, i.e. of associating abstract signs with sensory data. What we are discovering is that semantics engage many areas in our brain into mutual connection. You can find developments on that issue in: Patterson et al. 2007[3], Bookheimer 2002[4], Price 2000[5], and Binder & Desai 2011[6]. As we use words, thus as we pronounce, hear, write or read them, that linguistic process directly engages (i.e. is directly correlated with the activation of) sensory and motor areas of our brain. That engagement follows multiple, yet recurrent patterns. In other words, instead of having one mechanism in charge of meaning, we are handling different ones.

After reviewing a large bundle of research, Pulvermüller proposes four different patterns: referential, combinatorial, emotional-affective, and abstract semantics. Each time, the semantic pattern consists in one particular area of the brain acting as a boss who wants to be debriefed about something from many sources, and starts pulling together many synaptic strings connected to many places in the brain. Five different pieces of cortex come up recurrently as those boss-hubs, hungry for differentiated data, as we process words. They are: the inferior frontal cortex (iFC, so far most commonly associated with the linguistic function), the superior temporal cortex (sTC), the inferior parietal cortex (iPC), the inferior and middle temporal cortex (m/iTC), and finally the anterior temporal cortex (aTC). The inferior frontal cortex (iFC) seems to engage in the processing of words related to action (walk, do etc.). The superior temporal cortex (sTC) looks seriously involved when words related to sounds are being used. The inferior parietal cortex (iPC) activates as words connect to space and spatio-temporal constructs. The inferior and middle temporal cortex (m/iTC) lights up when we process words connected to animals, tools, persons, colours, shapes, and emotions. That activation is category-specific, i.e. inside the m/iTC, different Christmas trees start blinking as different categories among those are being named and referred to semantically. The anterior temporal cortex (aTC), interestingly, has not yet been associated with any specific type of semantic connections, and still, when it is damaged, semantic processing in our brain is generally impaired.

All those areas of the brain have other functions besides the semantic one, and generally speaking, the kind of meaning they process is correlated with the kind of other things they do. The interesting insight, at this point, is the polyvalence of the cortical areas that we call ‘temporal’, thus involved in the perception of time. Physicists insist very strongly that time is largely a semantic construct of ours, i.e. time is what we think there is rather than what really is, out there. In physics, what exists is a sequential structure of reality (things happen in an order) rather than what we call time. That review of literature by Pulvermüller indirectly indicates that time is a piece of meaning that we attach to sounds, colours, emotions, animals and people. Sounds seem logical in that respect: they are sequences of acoustic waves. On the other hand, how is our perception of colours, or people, connected to our concept of time? This is a good one to ask, and a tough one to answer. What I would look for is recurrence. We identify persons as distinct ones as we interact with them recurrently. Autistic people frequently have that problem: when you put on a different jacket, they have a hard time accepting you are the same person. Identification of animals or emotions could follow the same logic.

The article discusses another interesting issue: the more abstract the meaning is, the more different regions of the brain it engages. The really abstract ones, like ‘beauty’ or ‘freedom’, are super Christmas-trees: they provoke involvement all over the place. When we do abstraction, in our mind, for example when writing poetry (OK, just good poetry), we engage a substantial part of our brain. This is why we can be lost in our thoughts: those thoughts, when really abstract, are really energy-consuming, and they might require shutting down some other functions.

My personal understanding of the research reviewed by Pulvermüller is that at the neurological level, we process three essential types of meaning. One consists in finding our bearings in reality, thus in identifying things and people around, and in assigning emotions to them. It is something like a mapping function. Then, we need to do things, i.e. to take action, and that seems to be a different semantic function. Finally, we abstract, thus we connect distant parcels of data into something that has no direct counterpart either in the mapped reality or in our actions.

I have an indirect insight, too. We have a neural wiring, right? We generate meaning with that wiring, right? Now, how is adaptation occurring, in that scheme, over time? Do we just adapt the meaning we make to the neural hardware we have, or is there a reciprocal kick, I mean from meaning to wiring? So far, neurological research has demonstrated that physical alteration in specific regions of the brain impacts semantic functions. Can it work the other way round, i.e. can recurrent change in the semantics being processed alter the hardware we have between our ears? For example, as we process a lot of abstract concepts, like ‘taxes’ or ‘interest rate’, can our brains adapt from generation to generation, so as to minimize the gradient of energy expenditure as we shift between levels of abstraction? If they could, we would become more intelligent, i.e. able to handle larger and more differentiated sets of data in a shorter time.

How does all of this translate into collective intelligence? Firstly, there seem to be layers of such intelligence. We can be collectively smart sort of locally – and then we handle those more basic things, like group identity or networks of exchange – and then we can (possibly) become collectively smarter at a more combinatorial level, handling more abstract issues, like multilateral peace treaties or climate change. Moreover, the gradient of energy consumed between the collective understanding of simple and basic things, on the one hand, and the overarching abstract issues, on the other hand, is a good predictor of the capacity of the given society to survive and thrive.

Once again, I am trying to associate this research in neurophysiology with my game-theoretical approach to energy markets. First of all, I recall the three theories of games co-awarded the economics Nobel Prize in 1994, namely those by John Nash, John (János) Harsanyi, and Reinhard Selten. I start with the latter. Reinhard Selten claimed, and seems to have proven, that social games have a memory, and the presence of such memory is needed in order for us to be able to learn collectively through social games. You know those situations of tough talks, when the other person (or you) keeps bringing forth the same argumentation over and over again? This is an example of a game without much memory, i.e. without much learning. In such a game we repeat the same move, like a fish banging its head against the glass wall of an aquarium. Playing without memory is possible in just some games, e.g. tennis, or poker, if the opponent is not too tough. In other games, like chess, repeating the same move is not really possible. Such games force learning upon us.
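Just to illustrate the difference between playing with and without memory – with a toy simulation of mine, not Selten’s formal model – here is a repeated rock-paper-scissors in which one player repeats the same move forever, like that fish against the glass, while the other simply counters the last move it remembers:

```python
import random

# Toy illustration of games with vs. without memory (not Selten's model).
def memoryless(history):
    return "rock"  # the same move every round, regardless of what happened

def with_memory(history):
    if not history:
        return random.choice(["rock", "paper", "scissors"])
    counters = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
    return counters[history[-1]]  # counter the opponent's last observed move

history = []  # the game's memory: opponent moves observed so far
wins = 0
beats = {"paper": "rock", "scissors": "paper", "rock": "scissors"}
for _ in range(100):
    a = memoryless(history)
    b = with_memory(history)
    history.append(a)
    if beats[b] == a:
        wins += 1

print(wins)  # the learning player wins at least 99 of 100 rounds
```

The asymmetry is the whole point: once the game carries a memory, the player who actually uses it learns, and the one who doesn’t gets exploited.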

Active use of memory requires combinatorial meaning. We need to know what is meaningful, in order to remember it as meaningful, and thus to consider it as valuable data for learning. The more combinatorial meaning is, inside a supposedly intelligent structure, such as our brain, the more energy-consuming that meaning is. Games played with memory and active learning could be more energy-consuming for our collective intelligence than games played without. Maybe that whole thing of electronics and digital technologies, so hungry for energy, is a mechanism that we, the collective human intelligence, have put in place in order to learn more efficiently through our social games?

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your take on two things that Patreon suggests I ask you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Pierson, L. M., & Trout, M. (2017). What is consciousness for?. New Ideas in Psychology, 47, 62-71.

[2] Pulvermüller, F. (2013). How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in cognitive sciences, 17(9), 458-470.

[3] Patterson, K. et al. (2007) Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 8, 976–987

[4] Bookheimer, S. (2002) Functional MRI of language: new approaches to understanding the cortical organization of semantic processing. Annu. Rev. Neurosci. 25, 151–188

[5] Price, C.J. (2000) The anatomy of language: contributions from functional neuroimaging. J. Anat. 197, 335–359

[6] Binder, J.R. and Desai, R.H. (2011) The neurobiology of semantic memory. Trends Cogn. Sci. 15, 527–536

The social brain

My editorial on YouTube

I am thinking about my opening lectures in the coming semester. I am trying to phrase out sort of a baseline philosophy of mine, underlying all or most of what I teach, i.e. microeconomics, management, political systems, international economic relations, and economic policy. Certainly, my most fundamental message to my students is: watch reality in a scientific way. Get the hell above clichés, first impressions and tribal thinking. Reach for the information that most other people don’t, and process it rigorously. You will see that once you really mean it, scientific method is anything but boring. When you really swing that Ockham’s razor with dexterity, and cut out the bullshit, you can come to important existential realizations.

Science starts with observation. Social sciences start with the observation of what people do, and what people do consists very largely in doing something with other people. We are social beings, we do things in recurrent sequences of particular actions, sequences that we have learnt and that we keep on learning. Here I come to an interesting point, namely to what I call the « action and reaction paradigm », which is a slightly simplistic application of the Newtonian principle labelled with the same expression. It goes more or less like: what people do is a reaction to what happens. There is a ‘yes-but’ involved. Yes, people do things in reaction to what happens, but you need to add the component of temporal sequence. People do things in reaction to everything relevant that has happened within their span of memory connected to the particular phenomenon in question.

This is a fundamental distinction. If I say ‘I do what I do in reaction to what is happening now’, my claim is essentially different from saying that ‘I do what I do as a learnt response to all the things which I know to have happened so far and which my brain considers as relevant for the case’. Two examples come to my mind: social conflicts, and technological change. When a social conflict unfolds, be it a war between countries, a civil war, or a sharp rise in political tension, the first, superficial interpretation is that someone has just done something annoying, and the other someone just couldn’t refrain from reacting, and it all ramped up to the point of being out of control. In this approach, patterns of behaviour observable in social conflicts are not really patterns, in the sense that they are not really predictable. There is a strong temptation to label conflictual behaviour as more or less random and chaotic, devoid of rationality.

Still, here, social sciences come with a firm claim: anything we do is a learnt, recurrent pattern of doing things. Actions that we take in a situation of conflict are just as much a learnt, repetitive strategy as any other piece of behaviour. Some could argue: ‘But how is it possible that people who have very seldom been aggressive in the past suddenly develop whole patterns of aggressive behaviour? And in the case of whole social groups? How can they learn to be aggressive if there has been no conflict before?’. Well, this is one of the wonders observable in human culture. Culture is truly like a big virtual server. There are things stored in our culture – and by ‘things’ I mean, precisely, patterns of behaviour – which we could have hardly imagined to be there. We accumulate information over weeks, months, and years, and, all of a sudden, a radical shift in our behaviour occurs. We have a tendency to consider such a brusque shift as insanity, but this is usually not the case. As long as the newly manifested set of actions is coherent around an expected outcome, it is a new, subjectively rational strategy that we have just picked up from the cultural toolbox.

Cultural memory is usually much longer in its backwards reach than individual memory. If the right set of new information is being input into the life of a social group, or of an individual, centuries-old strategies can suddenly pop up. It works like a protocol: ‘OK, we have now enough information accumulated in this file so as to trigger the strategy AAA’. Different cultures have different toolboxes stored in them, and yet, the simple tools of social conflict are almost omnipresent. Wherever any tribe has ever had to fight for its hunting grounds, the corresponding patterns of whacking-the-other-over-the-head-with-that-piece-of-rock are stored in the depths of culture, most fundamentally in language.

Yes, the language we use is a store of information about how to do things. Never looked at it like that? Just think: the words and expressions we use describe something that happens in our brain in response to accumulated sensory experience. Usually we have fewer words at hand than different things to designate. In all the abundance of our experience, just some among its pieces become dignified enough to have their own words. For a word or expression to form as part of a language, generations need to recapitulate their things of life. This is how language becomes an archive of strategies. The information it conveys is like a ZIP file in a computer: it is tightly packed, and requires some kind of semantic crowbar in order to become fully accessible and operational. The crowbar is precisely the currently absorbed experience.

Right, so we can get to fighting each other even without special training, as we have the basic strategies stored in the language we speak. And technological change? How do we innovate? When we shift towards some new technology, do we also use old patterns of behaviour conveyed in our cultural heritage? Let’s see… Here is a little intellectual experiment I like to run with my students, when we talk about innovation and technological change. Look around you. Look at all those things that surround you and which, for want of a better word, you call ‘civilisation’. Which of those things would you change, like improve or replace with something else, possibly better?

Now comes an interesting, stylized fact that I can observe in that experiment. Sometimes, I hold my classes in a big conference room, furnished in a sort of 19th-century style, and equipped with a modern overhead projector attached to the ceiling. When I ask my students whether they would like to innovate with that respectable, sort of traditional furniture, they give me one of those looks, as if I were out of my mind. ‘What? Change these? But this is traditional, this is chic, this is… I don’t know, it has style!’. On the other hand, virtually each student is eager to change the overhead projector for a new(er) one.

Got it? In that experiment, people would rather change things that are already changing at an observably quick pace. The old and steady things are being left out of the scope of innovation. The 100% rational approach to innovation suggests something else: if you want to innovate, start with the oldest stuff, because it seems to be the most in need of some shake-off. Yet, the actual innovation, such as we can observe it in the culture around us, goes the other way round: it focuses on innovating in things which are already being innovated with.

Got it? Most of what we call innovation is based on a millennia-old pattern of behaviour called ‘joining the fun’. We innovate because we join an observable trend towards innovating. Yes, there are some minds, like Edison or Musk, who start innovating apparently from scratch, when there is no passing wagon to jump on. Thus, we have two patterns of innovation: joining a massively observable trend of change, or starting a new trend. The former is clear in its cultural roots. It has always been fun to join parties, festivities and public executions. The latter is more interesting in its apparent obscurity. What is the culturally rooted pattern of doing something completely new?

Easy, man, easy. Let’s do it step by step. When we perceive something as ‘completely new’, it means there are two sets of phenomena: one made of things that look old, and the other looking new. In other words, we experience cognitive dissonance. Certain things look out of date when, after having experienced them as functional, we start experiencing them as no longer up to facing the situation at hand. We experience their dissonance as compared to other things of life. This is called perceived obsolescence.

Anything is perceived as completely new only if there is something obsolete to compare it with. Let’s generalise it mathematically. There are two sets of phenomena, which I can probably define as two strings of data. I say ‘strings’, and not ‘lists’, on account of that data being complex. Well, yes: data about real life is complex. In terms of digital technology, our experience is made of strings (not to be confused with that special type of beachwear).

And so I have those two strings, and I keep using and reusing them. With time, I notice that I need to add new data, from my ongoing experience, to one of the strings, whilst the other one stays the same. With even more time, as my first string of data gets new pieces of information, i.e. new memory, that other string slowly turns from ‘the same’ into ‘old school’, then into ‘retro’, and finally into ‘that old piece of junk’. This is learning by experiencing cognitive dissonance.
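That little dynamics of two strings – one growing with fresh experience, the other frozen – can be sketched in a few lines of toy Python. The items are made up, of course; the point is only the mechanism of perceived obsolescence:

```python
# Toy sketch of perceived obsolescence: two "strings" of experience; only one
# keeps receiving new data, and the growing share of fresh content slowly turns
# the other from 'the same' into 'that old piece of junk'. Items are invented.
current = ["touchscreen", "app store"]   # the string that keeps growing
legacy  = ["rotary dial", "phone book"]  # the string that stays the same

def obsolescence(stale, fresh):
    """Share of total accumulated experience held by the growing string."""
    return len(fresh) / (len(fresh) + len(stale))

for year, novelty in enumerate(["voice assistant", "wearable", "foldable"]):
    current.append(novelty)
    print(year, round(obsolescence(legacy, current), 2))
```

Each new piece of data added to the first string pushes the ratio up, which is exactly the learning-by-cognitive-dissonance described above.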

We have, then, two cultural patterns of technological change. The more commonly practiced one consists in the good old ‘let’s join the fun’ sequence of actions. Willing to do things together with other people is simple, universal, and essentially belongs to the very basis of each culture. The much rarer pattern consists in becoming aware of a cognitive dissonance and figuring out something new. This is interesting. Some cultural patterns are like screwdrivers or duct tape. Sooner or later most people use them. Other models of behaviour, whilst still rooted in our culture, are sort of harder to dig out of that abyssal toolbox. Just some people do it.

I am coming back to that « action and reaction paradigm ». Yes, we act in reaction to what happens, but what happens, happens over time, and the ‘time’ part is vital here. We act in reaction to the information that our brain collects, and when enough information has been collected, it triggers a pre-learnt, culturally rooted pattern of behaviour, and this is our action. In response to basically the same set of data available in the environment, different human beings pull different patterns of action out of the cultural toolbox. This is interesting: how exactly is it happening? I mean, how exactly does this differentiation of response to the environment occur?

There is that article I have just found on Science Direct, by Porcelli et al. (2018[1]). The paper puts together quite a cartload of literature concerning the link between major mental disorders – schizophrenia (SCZ), Alzheimer’s disease (AD) and major depressive disorder (MDD) – and their corresponding impairments in social behaviour. More specifically, the authors focus on the correlation between the so-called social withdrawal (i.e. abnormal passivity in social relations), and the neurological pathways observable in these three mental disorders. One of the theoretical conclusions they draw regards what they call ‘the social brain’. The social brain is a set of neurological pathways recurrently correlated with particular patterns of social behaviour.

Yes, ladies and gentlemen, it means that what is observable outside has its counterpart inside. There is a hypothetical way that human brains can work – a hypothetical set of sequences of synaptic activations observable in our neurons – to make the best of social relations, something like a neurological general equilibrium. I have just coined that term by analogy to general economic equilibrium. Anything outside that sort of perfect model is less efficient in terms of social relations, and so it goes all the way down to pathological behaviour connected with pathological neural pathways. Porcelli et al. go even as far as quantifying the economic value of pathological behaviour grounded in pathological mental impairment. By analogy, there is a hypothetical economic value attached to any recurrent neural pathway.

Going reeeaally far down this speculative avenue: our society could look completely different if we changed the way our brains work.




[1] Porcelli, S., Van Der Wee, N., van der Werff, S., Aghajani, M., Glennon, J. C., van Heukelum, S., … & Posadas, M. (2018). Social brain, social dysfunction and social withdrawal. Neuroscience & Biobehavioral Reviews

Good hypotheses are simple

The thing about doing science is that when you really do it, you do it even when you don’t know you do it. Thinking about reality in truly scientific terms means that you tune yourself on discovery, and when you do that, man, you have released that genie from the bottle (lamp, ring etc.). When you start discovering, and you get the hang of it, you realize that it is fun and liberating for its own sake. To me, doing science is like playing music: I am just having fun with it.

Having fun with science is important. I had a particularly vivid realization of that yesterday, when, due to a chain of circumstances, I had to hold a lecture in macroeconomics in a classroom of anatomy. There was no whiteboard to write on, but there were two skeletons standing in the two corners on my sides, and there were microscopes, of course covered with protective plastic bags. Have you ever tried to teach macroeconomics using a skeleton, and with nothing to write on? As I think about it, a skeleton is excellent for a metaphorical representation of functional connections in a system.

Since the beginning of this calendar year, I have been taking on those serious business plans, and, by the way, I am still doing it. Still, in my current work on two business plans I am preparing in parallel – one for the EneFin project (FinTech in the market of energy), and the other one for the MedUs project (Blockchain in the market of private healthcare) – I recently realized that I am starting to think science. In my last update in French, the one entitled Ça me démange, carrément, I have already nailed down one hypothesis, and some empirical data to check it. The hypothesis goes like: ‘The technology of renewable energies is in its phase of banalisation, i.e. it is increasingly adapting its utilitarian forms to the social structures that are supposed to absorb it, and, reciprocally, those structures adapt to those utilitarian forms so as to absorb them efficiently’.

As hypotheses come, this one is still pretty green, i.e. not ripe yet for rigorous scientific proof, on account of there being too many different ideas in it. Good hypotheses are simple, so that you can give them a shave with Ockham’s razor and cut the bullshit out. Still, a green hypothesis is better than no hypothesis at all. I can farm it and make it ripe, which I have already applied myself to do. In an Excel file, which you can see and download from the archive of my blog, I included the results of quick empirical research. I studied patent applications and patents granted, in the respective fields of wind, hydro, photovoltaic, and solar-thermal energies, in three important patent offices across the world, namely the European Patent Office (‘EP’ in that Excel file), the US Patent & Trademark Office (‘US’), and the patent office of continental China.
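One simple way to put a number on that ‘diversity of patented technologies’ is the Shannon entropy of patent counts across fields. The sketch below uses made-up counts, not the actual figures from my Excel file:

```python
from math import log

# Shannon entropy as a diversity index over patent counts per technology field.
# The counts below are invented for illustration; they are NOT my actual data.
def diversity(counts):
    total = sum(counts)
    shares = [c / total for c in counts if c > 0]
    return -sum(s * log(s) for s in shares)

# counts for wind, hydro, photovoltaic, solar-thermal in two hypothetical years
early  = [900, 50, 30, 20]     # one dominant technology: low diversity
recent = [400, 250, 200, 150]  # a more even spread: higher diversity

print(round(diversity(early), 3), round(diversity(recent), 3))
```

The more evenly the patenting activity spreads across fields, the higher the index, with a maximum of ln(4) when all four fields are equally busy.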

As I had a look at those numbers, yes, indeed, there has been something like a recent surge in the diversity of patented technologies. My intuition about banalisation could be true. Technologies pertaining to the generation of renewable energies start to wrap themselves around the social structures that host them, and said structures do the same with the technologies. Historically, it is a known phenomenon. The motor power of animals (oxen, horses and mules, mostly), wind power, water power, thermal energy from the burning of fossil fuels – all these forms of energy started as novelties, and then grew into human social structures. As I think about it, even the power of human muscles went through that process. At some point in time, human beings discovered that their bodies can perform organized work, i.e. muscular power can be organized into labour.

Discovering that we can work together was really a bit of a discovery. You have probably read or heard about Gobekli Tepe, that huge megalithic enclosure located in Turkey, apparently the oldest known example of temple-sized human architecture. I watched an excellent documentary about the place, on National Geographic. Its point was that, if we put aside all the fantasies about aliens and Atlantians, the huge megalithic structure of Gobekli Tepe had most probably been made by simple, humble hunter-gatherers, who were thus discovering the immense power of organized work, and even invented a religion in order to make the whole business run smoothly. Nothing fancy: they used to cut their deceased ones’ heads off, would clean the skulls and keep them at home, in a prominent place, in order to think themselves into the phenomenon of inter-generational heritage. This is exactly what my great compatriot, Alfred Count Korzybski, wrote about being human: we have that unique capacity to ‘bind time’, or, in other words, to make our history into a heritage with accumulation of skills.

That was precisely an example of what a banalised technology (not to be confused with a ‘banal technology’) can do. My point – and my gut feeling – is that we are, right now, precisely at this Gobekli-Tepe phase with renewable energies. With the progressing diversity in the corresponding technologies, we are transforming our society so that it can work as efficiently as possible with said technologies.

Good, that’s the first piece of science I have come up with as regards renewable technologies. Another piece is connected to what I introduced, about the market of renewable energies in Europe, in my last update in English, namely in At the frontier, with my numbers. In Europe, we are a bit of a bunch of originals, in comparison to the rest of the world. Said rest of the world generally pumps up its consumption of energy per capita, as measured in them kilograms of oil equivalent. We, in Europe, have mostly chosen the path of frugality, and our kilograms of oil per capita tend to shrink consistently. On top of all that, there seems to be a pattern in all that: a functional connection between the overall consumption of energy per capita and the aggregate consumption of renewable energies.

I am going to expose this particular gut feeling of mine in small steps. In Table 1, below, I am introducing two growth rates, compounded between 1990 and 2015: the growth rate in the overall, final consumption of energy per capita, against that in the final consumption of renewable energies. I say ‘against’, as in the graph below the table I visualise those numbers, and the visualisation shows an intriguing regularity. The plot of points takes a form opposite to those frontiers I showed you in At the frontier, with my numbers. This time, my points follow something like a gentle slope, and the further to the right, the gentler that slope becomes. It is visualised even more clearly by the exponential trend line (red dotted line).
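For the record, the growth rates in Table 1 are compound, i.e. annualised over the 25 years between 1990 and 2015. They can be computed like this (the values in the example are illustrative, not taken from the table):

```python
# Compound annual growth rate over the 1990-2015 window used in Table 1.
# The sample values below are made up for illustration, not my actual data.
def compound_growth_rate(value_1990, value_2015, years=2015 - 1990):
    return (value_2015 / value_1990) ** (1 / years) - 1

# e.g. energy use per capita falling from 3500 to 3100 kgoe over 25 years
rate = compound_growth_rate(3500.0, 3100.0)
print(round(rate * 100, 3), "% per year")  # a slightly negative annual rate
```

A shrinking consumption of energy per capita thus shows up as a small negative compound rate, which is exactly the European pattern I described above.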

We, I mean economists, call this type of curve, with its nice convexity, an ‘indifference curve’. Funnily enough, we use indifference curves to study choice. Anyway, there is sort of an intuitive difference between frontiers, on the one hand, and indifference curves, on the other hand. In economics, we assume that frontiers are somehow unstable: they represent a state of things that is doomed to change. A frontier envelops something that either swells or shrinks. An indifference curve, on the other hand, suggests an equilibrium, i.e. each point on that curve is somehow steady and respectable as long as nobody comes to knock it out of balance. Whilst a frontier is like a skin enveloping the body, an indifference curve is more like a spinal cord.

We have an indifference curve, hence a hypothetical equilibrium, between the dynamics of the overall consumption of energy per capita, and those of the aggregate use of renewable energies. I don’t even know what to call it. That’s the thing with freshly observed equilibriums: they look nice, you could just fall in love with them, but if somebody asks what exactly those nice things are, you can have trouble answering. As I am trying to sort it out, I start by assuming that the overall consumption of energy per capita reflects two complex sets. The first set is that of everything we do, divided into three basic fields of activity: a) the goods and services we consume (they contain the energy that served to supply them), b) transport, and c) the strictly spoken household use of energy. The second set, or another way of apprehending essentially the same ensemble of phenomena, is a set of technologies. Our overall consumption of energy depends on the total installed power of the engines and electronic devices we use.

Now, the total consumption of renewable energies depends on the aggregate capacity installed in renewable technologies. In other words, this mysterious equilibrium of mine (if there is any, mind you) would be an equilibrium between two sets of technologies: those generating energy, and those serving to consume it. Honestly, I don’t even know how to phrase it into a decent hypothesis. I need time to wrap my mind around it.

Table 1

Country | Growth rate in the overall, final consumption of energy per capita, 1990 – 2015 | Growth rate in the final consumption of renewable energies, 1990 – 2015
Austria | 17,4% | 80,7%
Switzerland | -18,4% | 48,6%
Czech Republic | -19,5% | 241,0%
Germany | -13,7% | 501,2%
Spain | 11,0% | 104,4%
Estonia | -33,0% | 359,5%
Finland | 4,1% | 101,8%
France | -3,7% | 42,3%
United Kingdom | -23,2% | 1069,6%
Netherlands | -3,7% | 434,9%
Norway | 17,1% | 39,8%
Poland | -8,0% | 336,8%
Portugal | 26,8% | 32,6%

[Figure: growth rates in energy per capita vs. growth rates in total renewable energy consumption, with exponential trend line]
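For what it’s worth, the arithmetic behind those compound growth rates is easy to reproduce. A minimal sketch in Python (my own reading: I assume the table’s percentages are cumulative change over the whole 1990 – 2015 window; the endpoint values below are hypothetical, for illustration only):

```python
# Sketch: compound growth rates as in Table 1, assuming the percentages
# express cumulative change over the whole 1990-2015 window.

def cumulative_growth(v_1990: float, v_2015: float) -> float:
    """Total relative change over the window, e.g. 0.174 for +17,4%."""
    return v_2015 / v_1990 - 1.0

def annualized(cum_growth: float, years: int = 25) -> float:
    """Equivalent compound annual growth rate over `years` years."""
    return (1.0 + cum_growth) ** (1.0 / years) - 1.0

# Hypothetical, indexed endpoint values (UK renewables grew by +1069,6%):
cum = cumulative_growth(1.0, 11.696)
print(f"cumulative: {cum:.1%}, annualized: {annualized(cum):.1%}")
```

The annualized view makes the UK’s spectacular cumulative figure look more mundane: a bit over ten percent per year, compounded.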


I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful for your suggestions on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Smart cities, or rummaging in the waste heap of culture

My editorial

I am trying to put together my four big ideas. I mean, I think they are big. I feel small when I consider them. Anyway, they are: smart cities, FinTech, renewable energies, and collective intelligence. I am putting them together in the framework of a business plan. The business concept I am entertaining, and which, let’s face it, makes a nice piece of entertainment for my internal curious ape, is the following: investing in the development of a smart city, with a strong component of renewable energies supplanting fossil fuels, and financing this development, partly or totally, with FinTech tools, i.e. mostly with something like a cryptocurrency, as well as with a local platform for financial transactions. The whole thing is supposed to have collective intelligence, i.e. over time the efficiency in using resources should increase, on the condition that some institutions of collective life emerge in that smart city. Sounds incredible, doesn’t it? It doesn’t? Right, maybe I should explain it a little bit.

A smart city is defined by the extensive use of digital technologies in order to optimize the local use of resources. Digital technologies age relatively quickly, as compared to the technologies that make up the ‘hard’ urban infrastructure. If, in a piece of urban infrastructure, we have an amount KH of capital invested in the hard infrastructure, and an amount KS invested in smart technologies with a strong digital component, the rate of depreciation D(KH) of the capital invested in the hard infrastructure will be much lower than the rate D(KS) of the capital invested in the smart component:


[D(KS)/ KS] > [D(KH)/ KH]

and the ‘>’ in this case really means business.

The rate of depreciation in any technology depends on the pace at which new technologies come into the game, thus on the pace of research and development. The ‘depends’, here, works in a self-reinforcing loop: the faster my technologies age, the more research I do to replace them with new ones, and so my next technologies age even faster, and so I put metaphorical ginger in the metaphorical ass of my research lab and I come up with even more advanced technologies at even faster a pace, and so the loop spirals up. One day, in the future, as I come back home from work, the technology embodied in my apartment will be one generation more advanced than the one I left there in the morning. I will have a subscription with a technology change company, which, for a monthly lump fee, will assure smooth technological change in my place. Analytically, it means that the residual difference in the rates of depreciation, [D(KS)/KS] – [D(KH)/KH], will widen.
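That widening residual difference can be caricatured in a few lines of Python. This is a toy sketch, not a calibrated model: the rates and the acceleration parameter are hypothetical, and the only point is that a constant hard-infrastructure rate, combined with a smart-tech rate accelerated by the R&D feedback loop, makes the gap [D(KS)/KS] – [D(KH)/KH] grow monotonically.

```python
# Toy sketch of the widening gap [D(KS)/KS] - [D(KH)/KH].
# All numbers are hypothetical illustrations, not estimates.

def depreciation_gap(d_smart=0.20, d_hard=0.04, accel=0.05, years=10):
    """Yearly residual difference between the two depreciation rates.

    d_smart accelerates by `accel` per year (the R&D feedback loop);
    d_hard is held constant; rates are capped at 100% per year."""
    gaps = []
    for _ in range(years):
        gaps.append(d_smart - d_hard)
        d_smart = min(1.0, d_smart * (1.0 + accel))
    return gaps

gaps = depreciation_gap()
print([round(g, 3) for g in gaps])  # a monotonically widening sequence
```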

On the grounds of the research I did in 2017, I can stake three hypotheses as to the development of smart cities. Hypothesis #1 says that the relative infusion of urban infrastructure with advanced and quickly ageing technologies will generate increasing amounts of highly liquid assets, monetary balances included, in the aggregate balance sheets of smart cities (see ‘Financial Equilibrium in the Presence of Technological Change’, Journal of Economics Library, Volume 4 (2), June 20, pp. 160 – 171, and ‘Technological Change as a Monetary Phenomenon’, Economics World, May-June 2018, Vol. 6, No. 3, pp. 203 – 216). This, in turn, means that the smarter the city, the more financial assets it will need, kind of around and at hand, in order to function smoothly as a social structure.

On the other hand, in my hypothesis #2, I claim that the relatively fast pace of technological change associated with smart cities will pump up the use of energy per capita, but the reciprocal push, namely from energy-intensity to innovation-intensity, will be much weaker, and this particular loop is likely to stabilize itself relatively quickly in some sort of energy-innovation standstill (see ‘Technological change as intelligent, energy-maximizing adaptation’, Journal of Economic and Social Thought, Volume 4 September 3). Mind you, I am a bit less definitive on this one than on hypothesis #1. This is something I found to exist, in human civilisation, as a statistically significant correlation. Yet, in the precise case of smart cities, I still have to put my finger on the exact phenomena corresponding to the hypothesis. Intuitively, I can see some kind of social change. The very transformation of an ordinary (i.e. dumb) urban infrastructure into a smart one means, initially, lots of construction and engineering work being done, just to put the new infrastructure in place. That means additional consumption of energy. The advanced technologies embodied in the tissues of smart cities will tend to stay advanced for a consistently shortening amount of time, as they will be replaced, more and more frequently, with consecutive generations of technological youth. All that process will result in the consumption of energy spiralling up in the particular field of technological change itself. Still, my research suggests some kind of standstill, in that particular respect, coming into place quite quickly. I am thinking about our basic triad in energy consumption. If we imagined our total consumption of energy, I mean as a civilisation, as a round cake, one third of that cake would correspond to household consumption, one third to transportation, and the remaining third to overall industrial activity.

With that pattern of technological change which I have just sketched regarding smart cities, the cake would shift somewhat towards industrial activity, especially as said activity should, technically, contribute to energy efficiency in households and in transport. I can roughly assume that the spiral of more energy being consumed in the process of changing for more energy-efficient technologies can find some kind of standstill in the proportions between that particular consumption of energy, on the one hand, and the household & transport use, on the other. I mean, scraping the bottom of the energy barrel just in order to install consecutive generations of smart technologies is the kind of strategy which can quickly turn dumb.

Anyway, the development of smart cities, as I see it, is likely to disrupt the geography of energy consumption in the overall spatial structure of human settlement. Smart cities, although energy-smart, are likely to need, in the long run, more energy to run. Yet, I am focusing on another phenomenon now. Following in the footsteps of Paul Krugman (see Krugman 1991[1]; Krugman 1998[2]), and on the grounds of my own research (see ‘Settlement by energy – Can Renewable Energies Sustain Our Civilisation?’, International Journal of Energy and Environmental Research, Vol. 5, No. 3, pp. 1 – 18), I am formulating hypothesis #3: if the financial loop named in hypothesis #1 and the engineering loop from hypothesis #2 come together, the development of smart cities will create a different geography of human settlement. Places which turn into smart (and continuously smarter) cities will attract people at faster a pace than places with relatively weaker a drive towards getting smarter. Still, that change in the geography of our civilisation will be quite idiosyncratic. My own research (the link above) suggests that countries differ strongly in the relative importance of, respectively, access to food and access to energy in the shaping of social geography. Some of those local idiosyncrasies can come as quite a bit of a surprise. Bulgaria or Estonia, for example, are likely to rebuild their urban tissue on the grounds of local access to energy. People will flock around watermills, solar panels, maybe around cold fusion. On the other hand, in Germany, Iran or Mexico, where my research indicates more importance attached to food, the new geography of smart human settlement is likely to gravitate towards highly efficient farming places.

Now, there is another thing, which I am just putting my finger on, not even enough to call it a hypothesis. Here is the thing: money gets hoarded faster and more easily than fixed assets. We can observe that the growing monetization of the global economy (more money being supplied per unit of real output) is correlated with increasing social inequalities. If, in a smart and ever smarter city, more financial assets are floating around, it is likely to create a steeper social hierarchy. In those smart cities, the distance from the bottom to the top of the local social hierarchy is likely to be greater than in other places. I know, I know, it does not exactly sound politically correct. Smart cities are supposed to be egalitarian, and make us live happily ever after. Still, my internal curious ape is what it is, i.e. a nearly pathologically frantic piece of mental activity in me, and it just can’t help rummaging in the waste heap of culture. And you probably know that thing about waste heaps: people tend to throw things there which they wouldn’t show to friends who drop by.

I am working on making science fun and fruitful, and I intend to make it a business of mine. I am doing my best to stay consistent in documenting my research in a hopefully interesting form. Right now, I am at the stage of crowdfunding. You can consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful for your suggestions on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Krugman, P., 1991, Increasing Returns and Economic Geography, The Journal of Political Economy, Volume 99, Issue 3 (Jun. 1991), pp. 483 – 499

[2] Krugman, P., 1998, What’s New About The New Economic Geography?, Oxford Review of Economic Policy, vol. 14, no. 2, pp. 7 – 17

Anyway, the two equations, or the remaining part of Chapter I

My editorial

And so I continue my novel in short episodes, i.e. I am blogging the ongoing progress in the writing of my book about renewable technologies and technological change. Today, I am updating my blog with the remaining part of the first Chapter, which I started yesterday. Just for those who try to keep up, a little reminder about the notation you are going to encounter below: N stands for population, E represents the non-edible energy that we consume, and F is the intake of food. For the moment, I do not have enough theoretical space in my model to represent other vital things, like dreams, pot, beauty, friendship etc.

Anyway, the two equations, namely N = A*E^µ*F^(1-µ) and N = A*(E/N)^µ*(F/N)^(1-µ), can both be seen as mathematical expressions of two hypotheses which seem perfectly congruent at first sight, and yet can be divergent. Firstly, each of these equations can be translated into the claim that the size of the human population in a given place at a given time depends on the availability of food and non-edible energy in said place and time. In a next step, one is tempted to claim that incremental change in population depends on the incremental change in the availability of food and non-edible energies. Whilst the logical link between the two hypotheses seems rock-solid, the mathematical one is not as obvious, and this is what Charles Cobb and Paul Douglas discovered as they presented their original research in 1928 (Cobb, Douglas 1928[1]). Their method can be summarised as follows. We have temporal series of three variables: the output utility on the left side of the equation, and the two input factors on the right side. The original production function by Cobb and Douglas had the aggregate output of the economy (Gross Domestic Product) on the output side, whilst input was made of investment in productive assets and the amount of labour supplied. We return, now, to the most general equation (1), namely U = A*F1^µ*F2^(1-µ), and we focus on the F1^µ*F2^(1-µ) part, so on the strictly spoken impact of input factors. The temporal series of output U can be expressed as a linear trend with a general slope, just as the modelled series of values obtained through F1^µ*F2^(1-µ). The empirical observation that any reader can make on their own is that the scale factor A can be narrowed down to a value slightly above 1 only if the slope of F1^µ*F2^(1-µ) on the right side is significantly smaller than the slope of U.

This is a peculiar property of that function: the modelled trend of the compound value F1^µ*F2^(1-µ) is always above the trend of U at the beginning of the period studied, and visibly below U by the end of the same period. The factor of scale A is an averaged proportion between reality and the modelled value. It corresponds to a sequence of quotients, which starts with a local A noticeably below 1, then closes on 1 in the central part of the period considered, to rise visibly above 1 by the end of this period. This is what made Charles Cobb and Paul Douglas claim that at the beginning of the historical period they studied the real output of the US economy was below its potential, and that by the end of their window of observation it became overshot. The same property of this function made it a tool for defining general equilibriums rather than local ones. As regards my research on renewable energies, that peculiar property of the compound input of food and energy calculated with E^µ*F^(1-µ) or with (E/N)^µ*(F/N)^(1-µ) means that I can assess, over a definite window in time, whether available food and energy stay in general equilibrium with population. They do so if my general factor of scale A, averaged over that window in time, stays very slightly over 1, with relatively low a variance. Relatively low, for a parameter equal more or less to one, means a variance in A staying around 0,1 or lower. If these mathematical conditions are fulfilled, I can claim that yes, over this definite window in time, population depends on the available food and energy. Still, as my parameter A has been averaged between trends of different slopes, I cannot directly infer that at any given incremental point in time, like from t0 to t1, my N(t1) – N(t0) = A*{[E(t1)^µ*F(t1)^(1-µ)] – [E(t0)^µ*F(t0)^(1-µ)]}. If we take that incremental point of view, the local A will always be different from the general one.
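The procedure just described boils down to computing the sequence of local scale factors and checking their mean and variance. A minimal sketch in Python (the variable names and the toy series are mine; the toy data is generated exactly from the function, so every local A equals 1):

```python
import statistics

def scale_factors(U, F1, F2, mu):
    """Local scale factors A_t = U_t / (F1_t**mu * F2_t**(1-mu))."""
    return [u / (f1**mu * f2**(1.0 - mu)) for u, f1, f2 in zip(U, F1, F2)]

def equilibrium_check(U, F1, F2, mu):
    """Mean and variance of A over the window; equilibrium in the text's
    sense requires mean A slightly above 1, variance around 0.1 or less."""
    A = scale_factors(U, F1, F2, mu)
    return statistics.mean(A), statistics.pvariance(A)

# Toy series generated exactly from the function, for illustration only:
mu = 0.68
F1 = [2.0, 2.1, 2.3]   # e.g. energy per capita
F2 = [3.5, 3.5, 3.6]   # e.g. food intake per capita
U = [f1**mu * f2**(1 - mu) for f1, f2 in zip(F1, F2)]
mean_A, var_A = equilibrium_check(U, F1, F2, mu)
```

On real data, of course, the local A values drift around 1 rather than sitting on it, which is exactly what the mean and variance are there to measure.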

Bearing those theoretical limitations in mind, the author undertook testing the above equations on empirical data, in a compound dataset made of Penn Tables 9.0 (Feenstra et al. 2015[2]), enriched with data published by the World Bank (regarding the consumption of energy and its structure in terms of ‘renewable <> non-renewable’), as well as with data published by the FAO with respect to the overall nutritive intake in particular countries. Data regarding energy, and data pertaining to the intake of food, are limited, in both cases, to the period 1990 – 2014, and the initial temporal extension of Penn Tables 9.0 (from 1950 to 2014) has been truncated accordingly. For the same reason, i.e. the availability of empirical data, the original geographical scope of the sample has been reduced from 188 countries to just 116. Each country has been treated as a local equilibrium, as the initial intuition of the whole research was to find out the role of renewable energies for local populations, as well as local idiosyncrasies regarding that role. Preliminary tests aimed at finding workable combinations of empirical variables. This is another specificity of the Cobb – Douglas production function: in its original spirit, it is supposed to work with absolute quantities observable in real life. These real-life quantities are supposed to fit into the equation without being transformed into logarithms or standardized values. Once again, this is a consequence of the mathematical path chosen, combined with the hypotheses possible to test with that mathematical tool: we are looking for a general equilibrium between aggregates. Of course, an equilibrium between logarithms can be searched for just as well, similarly to an equilibrium between standardized positions, but these are distinct equilibriums.

After preliminary tests, equation N = A*E^µ*F^(1-µ), thus operating with absolute amounts of food and energy, proved not to be workable at all. The resulting scale factors were far below 1, i.e. the modelled compound inputs of food and energy produced modelled populations much overshot above the actual ones. On the other hand, the mutated equation N = A*(E/N)^µ*(F/N)^(1-µ) proved operational. The empirical variables able to yield plausibly robust scale factors A were: final use of energy per capita, in tons of oil equivalent (factor E/N), and alimentary intake of energy per capita, measured annually in mega-calories (thousands of kcal) and averaged over the period studied. Thus, the empirical mutation that produced reasonably robust results was the one where a relatively volatile (i.e. changing every year) consumption of energy is accompanied by a long-term, de facto constant over time, alimentary status of the given national population. In other words, robust results could be obtained under the implicit assumption that alimentary conditions in each population studied change much more slowly than the technological context, which, in turn, determines the consumption of energy per capita. On the left side of the equation, those two explanatory variables were matched with population measured in millions. Wrapping up the results of those preliminary tests, the theoretical tool used for this research had been narrowed down to an empirical situation where, over the period 1990 – 2014, each million of people in a given country in a given year was being tested for sustainability, against the currently available quantity of tons of oil equivalent per capita per year in non-edible energies, as well as against the long-term, annual amount of mega-calories per capita in alimentary intake.
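Such preliminary tests can be mimicked with a simple grid search over µ: keep the candidate whose mean scale factor A lands closest to 1, among candidates satisfying the low-variance condition. A hypothetical sketch of that scan (my own helper, with toy series built from a known exponent):

```python
import statistics

def fit_mu(N, E_pc, F_pc, step=0.01, max_var=0.1):
    """Scan mu over (0, 1); return (distance from 1, mu, mean A, var A)
    for the mu whose mean scale factor A is closest to 1, among the
    candidates whose variance of A stays below max_var; None if none do."""
    best = None
    mu = step
    while mu < 1.0:
        A = [n / (e**mu * f**(1.0 - mu)) for n, e, f in zip(N, E_pc, F_pc)]
        mean_A, var_A = statistics.mean(A), statistics.pvariance(A)
        if var_A <= max_var:
            score = abs(mean_A - 1.0)
            if best is None or score < best[0]:
                best = (score, round(mu, 2), mean_A, var_A)
        mu += step
    return best

# Toy series built with a known exponent mu = 0.68, for illustration only:
E_pc = [3.0, 3.2, 3.5, 3.7]
F_pc = [3.5, 3.5, 3.6, 3.6]
N = [e**0.68 * f**0.32 for e, f in zip(E_pc, F_pc)]
_, mu_hat, mean_A, var_A = fit_mu(N, E_pc, F_pc)
```

On this noiseless toy data the scan recovers µ = 0,68 exactly; on real national series it would instead return the µ that produces the closest neighbourhood of equilibrium.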

The author is well aware that all this theoretical path-clearing could have been truly boring for the reader, but it seemed necessary, as this is the point when real surprises started emerging. I was ambitious and impatient in my research, and thus I immediately jumped to testing equation N = A*(E/N)^µ*(F/N)^(1-µ) with just the renewable energies in the game, after having eliminated all the non-renewable part of final consumption of energy. The initial expectation was to find some plausible local equilibriums, with the scale factor A close to 1 and displaying sufficiently low a variance, in just some local populations. Denmark, Britain, Germany – these were the places where I expected to find those equilibriums. Stable demographics, a well-developed energy base, no official food deficit: this was the type of social environment which I expected to produce that theoretical equilibrium, and yet I expected to find a lot of variance in the local scale factors A. Denmark seemed to behave according to expectations: it yielded an empirical equation N = A*(Renewable energy per capita)^0,68 * (Alimentary intake per capita)^0,32, with 0,32 = 1 – 0,68. The scale factor A hit a surprising robustness: its average value over 1990 – 2014 was 1,008202138, with a variance var(A) = 0,059873591. I quickly tested its Scandinavian neighbours: Norway, Sweden, and Finland. Finland yielded a higher exponent for renewable energy per capita, namely µ = 0,85, but the scale factor A was similarly robust, making 1,065855419 on average and displaying a variance equal to 0,021967408. With Norway, results started puzzling me: µ = 0,95, average A = 1,019025526 with a variance of 0,002937442. Those results would roughly mean that whilst in Denmark the availability of renewable energies has a predominant role in producing a viable general equilibrium in population, in Norway it has a quasi-monopole in shaping the same equilibrium. Cultural clichés started working in my mind at this moment. Norway? That cold country with a low density of population, where people, over centuries, just had to eat a lot in order to survive winters, and the population of this country is almost exclusively in equilibrium with available renewable energies? Sweden marked some kind of return to the expected state of nature: µ = 0,77, average A = 1,012941105 with a variance of 0,003898173. Once again, surprisingly robust, but fitting into some kind of predicted state.

What I could already see at this point was that my model produced robust results, but they were not quite what I expected. If one takes a look at the map of the world, Scandinavia is relatively small a region, with quite similar natural conditions for human settlement across all four countries. Similar climate, similar geology, similar access to wind power and water power, similar social structures as well. Still, my model yielded surprisingly marked, local idiosyncrasies across just this small region, and all those local idiosyncrasies were mathematically solid, regarding the variance observable in their scale factors A. This was just the beginning of my puzzlement. I moved South in my testing, to countries like Germany, France and Britain. Germany: µ = 0,31, average A = 1,008843147 with a variance of 0,0363637. One second, µ = 0,31? But just next door North, in Denmark, µ = 0,68, isn’t it? How is it possible? France yielded a robust equilibrium, with average A = 1,021262046 and its variance at 0,002151713, with µ = 0,38. Britain: µ = 0,3, whilst average A = 1,028817158 and variance in A making 0,017810219. In science, you are generally expected to discover things, but when you discover too much, it causes a sense of discomfort. I had that ‘No, no way, there must be some mistake’ approach to the results I have just presented. The degree of disparity in those nationally observed functions of general equilibrium between population, food, and energy strongly suggested the presence of some purely arithmetical disturbance. Of course, there was that little voice in the back of my head, saying that absolute aggregates (i.e. not the ratios of intensity per capita) did not yield any acceptable equilibrium, and, consequently, there could be something real about the results I obtained, but I had a lot of doubts.

I thought, for a day or two, that the statistics supplied by the World Bank, regarding the share of renewable energies in the overall final consumption of energy, might be somehow inaccurate. It could be something about the mutual compatibility of data collected from national statistical offices. Fortunately, methods of quantitative analysis of economic phenomena supply a reliable way of checking the robustness of both the model and the empirical data I am testing it with. You supplant one empirical variable with another one, possibly similar in its logical meaning, and you retest. This is what I did. I assumed that the gross, final consumption of energy, in tons of oil equivalent per capita, might be more reliable than the estimated shares of renewable sources in that total. Thus, I tested the same equations, for the same set of countries, this time with the total consumption of energy per capita. It is worth quoting the results of that second test for the same countries. Denmark: average scale factor A = 1,007673381, with an observable variance of 0,006893499, and all that in an equation where µ = 0,93. At this point, I felt, once again, as if I were discovering too much at once. Denmark yielded virtually the same scale factor A, and the same variance in A, with two different metrics of energy consumed per capita (total, and just the renewable one), with two different values of the exponent µ. Two different equilibriums with two different bases, each as robust as the other. Logically, it meant the existence of a clear-cut substitution between renewable energies and the non-renewable ones. Why? I will try to explain it with a metaphor. If I manage to stabilize a car, when changing its tyres, with two hydraulic lifters, and then I take away one of the lifters and the car remains stable, it means that the remaining lifter can do the work of the two. This one tool is the substitute for two tools, at a rate of 2 to 1.

In this case, I had the population of Denmark stabilized both on the overall consumption of energy per capita (two lifters), and on just the consumption of renewable energies (one lifter). Total consumption of energy stabilizes the population at µ = 0,93, and renewable energies do the same at µ = 0,68. Logically, renewable energies are substitutes for non-renewables at a rate of substitution equal to 0,93/0,68 = 1,367647059. Each ton of oil equivalent in renewable energies consumed per capita, in Denmark, can do the job of some 1,37 tons of non-renewable energies.

Finland was another source of puzzlement: A = 0,788769669, variance of A equal to 0,002606412, and µ = 0,99. Ascribing to the exponent µ the highest possible value at the second decimal point, i.e. µ = 0,99, I could not get a model population lower than the real one. The model yielded some kind of demographic aggregate much higher than the real population, and the most interesting thing was that this model population seemed correlated with the real one. I could tell by the very low variance in the scale factor A. It meant that Finland, as an environment for human settlement, can perfectly sustain its present headcount with just renewable energies, and if the non-renewables are dropped into the model, the same territory shows a significant, unexploited potential for demographic growth. The rate of substitution between renewable energies and the non-renewable ones, this time, seemed to be 0,99/0,85 = 1,164705882. Norway yielded similar results, with the total consumption of energy per capita on the right side of the equation: A = 0,760631741, variance in A equal to 0,001570101, µ = 0,99, substitution rate 1,042105263. Sweden turned out to be similar to Denmark: A = 1,018026405 with a variance of 0,004626486, µ = 0,91, substitution rate 1,181818182. The four Scandinavian countries seem to form an environment where energy plays a decisive role in stabilizing the local populations, and renewable energies seem able to do the job perfectly. The retesting of Germany, France, and Britain brought interesting results, too. Germany: A = 1,009335161 with a variance of 0,000335601, at µ = 0,48, with a substitution rate of renewables to non-renewables equal to 1,548387097. France: A = 1,019371541, variance of A at 0,001953865, µ = 0,53, substitution at 1,394736842. Finally, Britain: A = 1,028560563 with a variance of 0,006711585, µ = 0,52, substitution rate 1,733333333.

Some kind of pattern seems to emerge: the greater the relative weight of energy in producing general equilibrium in population, the greater the substitution rate between renewable energies and the non-renewable ones.
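The substitution rates quoted in this passage are simple quotients of the two fitted exponents. Collecting the values reported above (a convenience script of mine, not part of the original computation):

```python
# Substitution rate of renewables to non-renewables, as defined in the text:
# the exponent mu fitted on total energy per capita, divided by the
# exponent mu fitted on renewable energy per capita alone.

mu_values = {  # country: (mu with total energy, mu with renewables only)
    "Denmark": (0.93, 0.68),
    "Finland": (0.99, 0.85),
    "Norway":  (0.99, 0.95),
    "Sweden":  (0.91, 0.77),
    "Germany": (0.48, 0.31),
    "France":  (0.53, 0.38),
    "Britain": (0.52, 0.30),
}

substitution = {c: total / renew for c, (total, renew) in mu_values.items()}
for country, rate in sorted(substitution.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {rate:.3f}")
```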

At this point I was pretty certain that I was using a robust model. So many local equilibriums, produced with different empirical variables, were not the result of a mistake. Table 1, in the Appendix to Chapter I, gives the results of testing equation (3), with the above-mentioned empirical variables, in 116 countries. The first numerical column of the table gives the arithmetical average of the scale factor A, calculated over the period studied, i.e. 1990 – 2014. The second column provides the variance of A over the same period (thus the variance between the annual values of A), and the third specifies the value of the parameter µ – the exponent ascribed to energy use per capita – at which the given values of A have been obtained. In other words, the mean A and the variance of A specify how close it has been possible to come, in the case of a given country, to the equilibrium assumed in equation (3), and the value of µ is the one that produces that neighbourhood of equilibrium. The results from Table 1 seem to confirm that equation (3), with these precise empirical variables, is robust in the great majority of cases.

Most countries studied satisfy the conditions stated earlier: variances in the scale factor A are really low, and the average value of A can be brought to just above 1. Still, exceptions abound regarding the theoretical assumption that energy use is the dominant factor shaping the size of the population. In many cases, the value of the exponent µ that allows a neighbourhood of equilibrium is far below µ = 0,5. According to the underlying logic of the model, the magnitude of µ is informative about how strong an impact the differentiation and substitution (between renewable energies and the non-renewable ones) has on the size of the population in a given time and place. In countries with µ > 0,5, population is being built mostly through access to energy, and through substitution between various forms of energy. Conversely, in countries displaying µ < 0,5, access to food, and internal substitution between various forms of food, becomes more important with regard to demographic change. The United States of America comes as one of the big surprises. In this respect, the empirical check brings a lot of idiosyncrasies to the initial lines of the theoretical model.
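That reading rule for µ can be stated as a one-liner. A small sketch, using the exponents for total energy per capita reported earlier in this chapter (the threshold 0,5 is the one named in the text):

```python
def classify(mu: float, threshold: float = 0.5) -> str:
    """Energy-driven equilibrium if mu > threshold, food-driven otherwise."""
    return "energy-driven" if mu > threshold else "food-driven"

# Exponents mu (total energy per capita) as reported earlier in the chapter:
mu_total = {
    "Denmark": 0.93, "Finland": 0.99, "Norway": 0.99, "Sweden": 0.91,
    "Germany": 0.48, "France": 0.53, "Britain": 0.52,
}
groups = {country: classify(mu) for country, mu in mu_total.items()}
```

Note how Germany lands on the food-driven side of the threshold, which squares with the earlier remark about German social geography gravitating towards highly efficient farming places.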

Countries marked with a (!) are exceptions with respect to the magnitude of the scale factor ‘A’. They are: China, India, Cyprus, Estonia, Finland, Gabon, Iceland, Luxembourg, New Zealand, Norway, Slovenia, as well as Trinidad and Tobago. They present a common trait: a satisfactorily low variance in the scale factor ‘A’, in conformity with condition (6), but a mean ‘A’ either unusually high (China A = 1.32, India A = 1.40) or unusually low (e.g. Iceland A = 0.02), whatever the value of the exponent ‘µ’. It could be just a technical limitation of the model: when operating on absolute, non-transformed values, the actual magnitudes of variance on both sides of the equation matter. Motor traffic gives an analogy: if the number of engine-powered vehicles in a country grows spectacularly in the presence of a demographic standstill, variance on the right side of the equation is much greater than on the left side, and this can distort the scale factor. Yet the variances observable in the scale factor ‘A’ for those exceptional cases are quite low, and a more fundamental explanation is possible. Those could be cases where the available amounts of food and energy either cannot really produce as big a population as there actually is (China, India), or, conversely, could produce a much bigger population than the current one (Iceland is the most striking example). From this point of view, the model could be able to identify territories with no room left for further demographic growth, as well as those with comfortable pockets of food and energy to sustain much bigger populations. An interpretation in terms of economic geography is also plausible: these could be situations where official national borders cut through human habitats, as determined by energy and food, rather than encircling them.

Partially wrapping it up, the results in Table 1 demonstrate that equation (3) of the model is both robust and apt to identify local idiosyncrasies. The blade having been sharpened, the next step of the empirical check consisted in replacing the overall consumption of energy per capita with just the consumption of renewable energies, as calculated on the grounds of data published by the World Bank, and in retesting equation (3) on the same countries. Table 2, in the Appendix to Chapter I, shows the results of those 116 tests. The presentational convention is the same (keeping in mind that the values of A and µ now correspond to renewable energy in the equation), and the last column of the table supplies a quotient which, for want of a better expression, is named the ‘rate of substitution between renewable and non-renewable energies’. The meaning of that substitution quotient appears as one studies the values observed in the scale factor ‘A’. In the great majority of countries, save for the exceptions marked with (!), it was possible to define a neighbourhood of equilibrium satisfying equation (3) and condition (6). This time, exceptions are treated as such mostly due to an unusually (and unacceptably) high variance in the scale factor ‘A’. They are countries where deriving population from access to food and renewable energies is a bit dubious, as regards the robustness of prediction with equation (3).

The provisional bottom line is that for most countries it is possible to derive, plausibly, the size of the population in a given place and time both from the overall consumption of energy and from the use of just the renewable energies, in the presence of a relatively constant alimentary intake. Similar national idiosyncrasies appear as in Table 1, but this time another idiosyncrasy pops up: the gap between the µ exponents in the two empirical mutations of equation (3). The µ ascribed to renewable energy per capita is always lower than the µ corresponding to the total use of energy – for the sake of presentational convenience they are further addressed as, respectively, µ(R/N) and µ(E/N) – but the proportion between those two exponents varies greatly between countries. It is useful to go once again through the logic of µ. It is the exponent which has to be ascribed to the consumption of energy per capita in order to produce a neighbourhood of equilibrium in population, in the presence of a relatively constant alimentary regime. For each individual country, both µ(R/N) and µ(E/N) correspond to virtually the same mean and variance in the scale factor ‘A’. If both the total use of energy and just the consumption of renewable energies can produce such a neighbourhood of equilibrium, the quotient µ(E/N)/µ(R/N) reflects the amount of total energy use, in tons of oil equivalent per capita, which can be replaced by one ton of oil equivalent per capita of renewable energies whilst keeping that neighbourhood of equilibrium. Thus, the quotient µ(E/N)/µ(R/N) can be considered a levelled, long-term rate of substitution between renewable and non-renewable energies.
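In computational terms, the substitution rate in the last column of Table 2 is simply the quotient of the two fitted exponents. A minimal sketch (the helper name is mine, not the author's); the figures reproduce the values reported in the tables for Albania and Saudi Arabia:

```python
def substitution_rate(mu_total, mu_renewable):
    # Levelled long-term rate of substitution between renewable and
    # non-renewable energies: mu(E/N) / mu(R/N). Bound to be >= 1,
    # since mu(R/N) never exceeds mu(E/N).
    return mu_total / mu_renewable

# Albania: mu(E/N) = 0.78 (Table 1), mu(R/N) = 0.70 (Table 2)
print(round(substitution_rate(0.78, 0.70), 9))   # 1.114285714
# Saudi Arabia: mu(E/N) = 0.72, mu(R/N) = 0.27
print(round(substitution_rate(0.72, 0.27), 9))   # 2.666666667
```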

One possible objection is to be dealt with at this point. In practically all the countries studied, populations use a mix of energies: renewable plus non-renewable. The amount of renewable energy used per capita is always lower than the total use of energy. Mathematically, the magnitude of µ(R/N) is always smaller than that of µ(E/N). Hence, the quotient µ(E/N)/µ(R/N) is bound to be greater than one, and the resulting substitution ratio could be considered just a mathematical trick. Still, the key point is that both (E/N)^µ and (R/N)^µ can produce a neighbourhood of equilibrium with a robust scale factor. Translating maths into the facts of life, the combined results of Tables 1 and 2 (see Appendix) strongly suggest that renewable energies can reliably produce a general equilibrium in, and sustain, any population on the planet, with a given supply of food. If a factor A is supplied in a relatively smaller amount than a factor B, and, other things held constant, the supply of A can produce the same general equilibrium as the supply of B, then A is a natural substitute of B at a rate greater than one. Thus, µ(E/N)/µ(R/N) > 1 is far more than a mathematical accident: it seems to be a structural property of our human civilisation.

Still, it is interesting how far µ(E/N)/µ(R/N) reaches beyond 1:1 substitution. In this respect, probably the most interesting insight is offered by the exceptions, i.e. the countries marked with (!), where the model fails to supply a fully robust scale factor in either of the two empirical mutations of equation (3). Interestingly, in those cases the rate of substitution is exactly µ(E/N)/µ(R/N) = 1. Populations either too big or too small, relative to their endowment in energy, do not really gain in sustainability by switching to renewables. A µ(E/N)/µ(R/N) > 1 substitution occurs only when the actual population is very close to what can be modelled with equation (3). Two countries – Saudi Arabia and Turkmenistan – offer an interesting insight into the underlying logic of the µ(E/N)/µ(R/N) quotient. They both present µ(E/N)/µ(R/N) > 2. Coherently with the explanation supplied above, this means that substituting renewable energies for non-renewable ones in those two countries could fundamentally change their social structures and sustain much bigger populations. Intriguingly, they are both ‘resource-cursed’ economies, with oil and gas taking so big a chunk of economic activity that there is hardly any room left for anything else.
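The reading grid applied above to the fitted values of A can be condensed into a simple decision rule. The thresholds below are my own illustrative guesses, not values stated in the text, and the function name is hypothetical:

```python
def interpret_equilibrium(mean_A, var_A, var_max=0.1, band=0.1):
    # Hypothetical reading grid for a country's fitted scale factor A.
    # var_max and band are illustrative thresholds, not the author's.
    if var_A > var_max:
        return "no robust equilibrium"                    # e.g. Turkmenistan, Table 2
    if mean_A > 1.0 + band:
        return "population outgrows its food-energy base" # China, India
    if mean_A < 1.0 - band:
        return "room for a much bigger population"        # Iceland, Norway
    return "close to modelled equilibrium"                # the typical case
```

Feeding it the Table 1 figures for China (A = 1.33, variance 0.0028) and Iceland (A = 0.02, variance 0.000026) reproduces the two opposite diagnoses discussed above.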

Most countries on the planet, with the sole exceptions of China and India, seem able to sustain significantly bigger populations than their present ones by shifting to 100% renewable energies. In the two ‘resource-cursed’ cases, namely Saudi Arabia and Turkmenistan, the demographic shift made possible by renewable energies seems nothing less than dramatic. As I was progressively wrapping my mind around it, a fundamental question formed: what exactly am I measuring with that exponent µ? I returned to the source of my inspiration, namely to the model presented by Paul Krugman in 1991 (Krugman 1991 op. cit.). There, whichever of the two factors on the right side of the equation is endowed with the dominant power is, at the same time, the motor force behind the spatial structuring of human settlement. I have, as a matter of fact, three factors in my model: non-edible renewable energy, substitutable for non-edible, non-renewable energy, and the consumption of food per capita. As I contemplate these three factors, a realisation dawns: none of the three can be maximized, or even optimized, directly. When I use more electricity than I did five years earlier, it is not because I plug my fingers more frequently into the electric socket: I shape my consumption of energy through the bundle of technologies that I use. The same goes for the availability of food: with the rare exception of top-level athletes, caloric intake is the by-product of a lifestyle (office clerk vs construction site worker) rather than of a fully conscious, purposeful action. Each of the three factors is absorbed through a set of technologies. Here, some readers may ask: if I grow vegetables in my own garden, isn’t it far-fetched to call that a technology? If we were living in a civilisation which feeds itself exclusively on home-grown vegetables, that would be an exaggeration, I agree. Yet we are a civilisation which has developed a huge range of technologies in industrial farming.
Vegetables grown in my garden are substitutes for foodstuffs supplied from industrially run farms, as well as for industrially processed food. If something is functionally a substitute for a technology, it is a technology, too. The exponents obtained with my model, for particular factors in individual countries, reflect the relative pace of technological change in three fundamental fields of technology, namely:

  a) everything that makes us use non-edible energies, ranging from a refrigerator to a smartphone; here, we are talking mostly about two broad types of technologies, namely engines of all kinds, and electronic devices;
  b) technologies that create choice between renewable and non-renewable sources of energy, thus first and foremost the technologies of generating electricity: windmills, watermills, photovoltaic installations, solar-thermal plants etc. They sit, for the most part, one step earlier in the chain of energy than the technologies mentioned in (a);
  c) technologies connected to the production and consumption of food, composed into a long chain with side-branches, starting from farming, through the processing of food, and ending with packaging, distribution, vending and gastronomy.

As I tested the theoretical equation N = A*(E/N)^µ*(F/N)^(1-µ), most countries yielded a plausible, robust equilibrium between the local (national) headcount and the specific, local mix of technologies grouped in those three categories. A question emerges, as a hypothesis to explore: is it possible that our collective intelligence expresses itself in creating local technological mixes of engines, electronics, power generation, and alimentary technologies which, in turn, allow us to optimize our population? Can technological change be interpreted as an intelligent, energy-maximizing adaptation?

Appendix to Chapter I

Table 1 Parameters of the function:  Population = A * (Energy use per capita[3])^µ * (Food intake per capita[4])^(1-µ)

Country name Average daily intake of food, in kcal per capita Mean scale factor ‘A’ over 1990 – 2014 Variance in the scale factor ‘A’ over 1990 – 2014 The exponent ‘µ’ of the ‘energy per capita’ factor
Albania 2787,5 1,028719088 0,048263309 0,78
Algeria 2962,5 1,00792777 0,003115684 0,5
Angola 1747,5 1,042983003 0,034821077 0,52
Argentina 3085 1,05449632 0,001338937 0,53
Armenia 2087,5 1,027874602 0,083587662 0,8
Australia 3120 1,053845754 0,005038742 0,77
Austria 3685 1,021793945 0,002591508 0,87
Azerbaijan 2465 1,006243759 0,044217939 0,74
Bangladesh 2082,5 1,045244854 0,007102476 0,21
Belarus 3142,5 1,041609177 0,016347323 0,8
Belgium 3655 1,004454515 0,003480147 0,88
Benin 2372,5 1,030339133 0,034533869 0,61
Bolivia (Plurinational State of) 2097,5 1,019990919 0,003429637 0,62
Bosnia and Herzegovina (!) 2862,5 1,037385012 0,214843872 0,81
Botswana 2222,5 1,068786155 0,009163141 0,92
Brazil 2907,5 1,013624942 0,003643215 0,26
Bulgaria 2847,5 1,058220643 0,005405994 0,82
Cameroon 2110 1,021629875 0,051074111 0,5
Canada 3345 1,036202396 0,007687519 0,73
Chile 2785 1,027291576 0,003554446 0,65
China (!) 2832,5 1,328918607 0,002814054 0,01
Colombia 2582,5 1,074031013 0,013875766 0,44
Congo 2222,5 1,078933108 0,024472619 0,71
Costa Rica 2802,5 1,050377494 0,005668136 0,78
Côte d’Ivoire 2460 1,004959783 0,007587564 0,52
Croatia 2655 1,072976483 0,009344081 0,72
Cyprus (!) 3185 0,325015959 0,00212915 0,99
Czech Republic 3192,5 1,004089056 0,002061036 0,84
Denmark 3335 1,007673381 0,006893499 0,93
Dominican Republic 2217,5 1,062919767 0,006550924 0,65
Ecuador 2225 1,072013967 0,00294547 0,6
Egypt 3172,5 1,036345512 0,004306619 0,38
El Salvador 2510 1,013036366 0,004187964 0,7
Estonia (!) 2980 0,329425185 0,001662589 0,99
Ethiopia 1747,5 1,073625398 0,039032523 0,31
Finland (!) 3147,5 0,788769669 0,002606412 0,99
France 3557,5 1,019371541 0,001953865 0,53
Gabon (!) 2622,5 0,961643759 0,016248519 0,99
Georgia 2350 1,044229266 0,059636113 0,76
Germany 3440 1,009335161 0,000335601 0,48
Ghana 2532,5 1,000098029 0,047085907 0,48
Greece 3610 1,063074 0,003756555 0,77
Haiti 1815 1,038427773 0,004246483 0,56
Honduras 2457,5 1,030624938 0,005692923 0,67
Hungary 3440 1,024235523 0,001350114 0,78
Iceland (!) 3150 0,025191922 2,57214E-05 0,99
India (!) 2307,5 1,403800869 0,024395268 0,01
Indonesia 2497,5 1,001768442 0,004578895 0,2
Iran (Islamic Republic of) 3030 1,034945678 0,001105326 0,45
Ireland 3622,5 1,007003095 0,017135706 0,96
Israel 3490 1,008446182 0,013265865 0,87
Italy 3615 1,007727182 0,001245927 0,51
Jamaica 2712,5 1,056188543 0,01979275 0,9
Japan 2875 1,0094237 0,000359135 0,38
Jordan 2820 1,015861129 0,031905756 0,77
Kazakhstan 3135 1,01095925 0,021868381 0,74
Kenya 2010 1,018667155 0,02914075 0,42
Kyrgyzstan 2502,5 1,009443502 0,053751489 0,71
Latvia 3015 1,010440502 0,023191031 0,98
Lebanon 3045 1,036073511 0,054610186 0,85
Lithuania 3152,5 1,008092894 0,025234007 0,96
Luxembourg (!) 3632,5 0,052543325 6,62285E-05 0,99
Malaysia 2855 1,017853322 0,001002682 0,61
Mauritius 2847,5 1,070576731 0,019964794 0,96
Mexico 3165 1,01483014 0,009376118 0,36
Mongolia 2147,5 1,061731985 0,030246541 0,9
Morocco 3095 1,07892333 0,000418636 0,47
Mozambique 1922,5 1,023422366 0,041833717 0,48
Nepal 2250 1,059720031 0,006741455 0,46
Netherlands 2925 1,040887411 0,000689576 0,78
New Zealand (!) 2785 0,913678062 0,003946867 0,99
Nicaragua 2102,5 1,045412214 0,007065561 0,69
Nigeria 2527,5 1,069148598 0,032086946 0,28
Norway (!) 3340 0,760631741 0,001570101 0,99
Pakistan 2275 1,062522698 0,020995863 0,24
Panama 2347,5 1,007449033 0,00243433 0,81
Paraguay 2570 1,07179452 0,021405906 0,73
Peru 2280 1,050166142 0,00327043 0,47
Philippines 2387,5 1,0478458 0,022165841 0,32
Poland 3365 1,004848541 0,000688294 0,56
Portugal 3512,5 1,036215564 0,006604633 0,76
Republic of Korea 3027,5 1,01734341 0,011440406 0,56
Republic of Moldova 2762,5 1,002387234 0,038541243 0,8
Romania 3207,5 1,003204035 0,003181708 0,62
Russian Federation 3032,5 1,050934925 0,001953049 0,38
Saudi Arabia 2980 1,026310231 0,007502008 0,72
Senegal 2187,5 1,05981161 0,021382472 0,54
Serbia and Montenegro 2787,5 1,0392151 0,012416926 0,8
Slovakia 2875 1,011063497 0,002657276 0,92
Slovenia (!) 3042,5 0,583332004 0,003458657 0,99
South Africa 2882,5 1,053438343 0,009139913 0,53
Spain 3322,5 1,061083277 0,004844361 0,56
Sri Lanka 2287,5 1,029495671 0,001531167 0,5
Sudan 2122,5 1,028532781 0,044393335 0,4
Sweden 3072,5 1,018026405 0,004626486 0,91
Switzerland 3385 1,047790357 0,007713383 0,88
Syrian Arab Republic 2970 1,010909679 0,017849377 0,59
Tajikistan 2012,5 1,004745997 0,078394669 0,62
Thailand 2420 1,05305435 0,004200173 0,41
The former Yugoslav Republic of Macedonia 2755 1,064764097 0,003242024 0,95
Togo 2020 1,007094875 0,014424982 0,66
Trinidad and Tobago (!) 2645 0,152994618 0,003781236 0,99
Tunisia 3230 1,053626454 0,001201886 0,66
Turkey 3510 1,02188909 0,001740729 0,43
Turkmenistan 2620 1,003674668 0,024196536 0,96
Ukraine 3040 1,044110717 0,005180992 0,54
United Kingdom 3340 1,028560563 0,006711585 0,52
United Republic of Tanzania 1987,5 1,074441381 0,031503549 0,41
United States of America 3637,5 1,023273537 0,006401009 0,3
Uruguay 2760 1,014226024 0,019409309 0,82
Uzbekistan 2550 1,056807711 0,031469698 0,59
Venezuela (Bolivarian Republic of) 2480 1,048332115 0,012077362 0,6
Viet Nam 2425 1,050131152 0,000866138 0,31
Yemen 2005 1,076332698 0,029772287 0,47
Zambia 1937,5 1,0479534 0,044241343 0,59
Zimbabwe 2035 1,063047787 0,022242317 0,6

Source: author’s own calculations


Table 2 Parameters of the function:  Population = A * (Renewable energy use per capita[5])^µ * (Food intake per capita[6])^(1-µ)

Country name Mean scale factor ‘A’ over 1990 – 2014 Variance in the scale factor ‘A’ over 1990 – 2014 The exponent ‘µ’ of the ‘renewable energy per capita’ factor The rate of substitution between renewable and non-renewable energies[7]
Albania 1,063726823 0,015575246 0,7 1,114285714
Algeria 1,058584384 0,044309122 0,44 1,136363636
Angola 1,044147837 0,063942546 0,49 1,06122449
Argentina 1,039249286 0,005115111 0,39 1,358974359
Armenia 1,082452967 0,023421839 0,59 1,355932203
Australia 1,036777388 0,009700331 0,52 1,480769231
Austria 1,017958672 0,007854467 0,71 1,225352113
Azerbaijan 1,07623299 0,009740098 0,47 1,574468085
Bangladesh 1,088818696 0,017086232 0,2 1,05
Belarus (!) 1,017676486 0,142728478 0,51 1,568627451
Belgium 1,06314732 0,095474709 0,52 1,692307692
Benin (!) 1,045986178 0,101094528 0,58 1,051724138
Bolivia (Plurinational State of) 1,078219551 0,034143037 0,53 1,169811321
Bosnia and Herzegovina 1,077445974 0,084400986 0,66 1,227272727
Botswana 1,022264687 0,056890261 0,79 1,164556962
Brazil 1,066438509 0,005012883 0,24 1,083333333
Bulgaria (!) 1,022253185 0,190476288 0,55 1,490909091
Cameroon 1,040548202 0,059668736 0,5 1
Canada 1,02539319 0,005170473 0,56 1,303571429
Chile 1,006307911 0,001159941 0,55 1,181818182
China 1,347729029 0,003248871 0,01 1
Colombia 1,016164864 0,019413193 0,37 1,189189189
Congo 1,041474959 0,030195913 0,67 1,059701493
Costa Rica 1,008081248 0,01876342 0,68 1,147058824
Côte d’Ivoire 1,013057174 0,009833628 0,5 1,04
Croatia 1,072976483 0,009344081 0,72 1
Cyprus (!) 1,042370253 0,838872562 0,72 1,375
Czech Republic 1,036681212 0,044847525 0,56 1,5
Denmark 1,008202138 0,059873591 0,68 1,367647059
Dominican Republic 1,069124974 0,020305242 0,53 1,226415094
Ecuador 1,008104202 0,025383593 0,47 1,276595745
Egypt 1,03122058 0,016484947 0,28 1,357142857
El Salvador 1,078008598 0,028182822 0,64 1,09375
Estonia (!) 1,062618744 0,418196957 0,88 1,125
Ethiopia 1,01313572 0,036192629 0,3 1,033333333
Finland 1,065855419 0,021967408 0,85 1,164705882
France 1,021262046 0,002151713 0,38 1,394736842
Gabon 1,065944525 0,011751745 0,97 1,020618557
Georgia 1,011709194 0,012808503 0,66 1,151515152
Germany 1,008843147 0,03636378 0,31 1,548387097
Ghana (!) 1,065885579 0,106721005 0,46 1,043478261
Greece 1,033613511 0,009328533 0,55 1,4
Haiti 1,009030442 0,005061414 0,54 1,037037037
Honduras 1,028253048 0,022719417 0,62 1,080645161
Hungary 1,086698434 0,022955955 0,54 1,444444444
Iceland 0,041518305 0,000158837 0,99 1
India 1,414055357 0,025335408 0,01 1
Indonesia 1,003393135 0,008680379 0,18 1,111111111
Iran (Islamic Republic of) 1,06172763 0,011215001 0,26 1,730769231
Ireland 1,075982896 0,02796979 0,61 1,573770492
Israel 1,06421352 0,004086618 0,61 1,426229508
Italy 1,072302127 0,020049639 0,36 1,416666667
Jamaica 1,002749054 0,010620317 0,67 1,343283582
Japan 1,082461225 0,000372112 0,25 1,52
Jordan 1,025652757 0,024889809 0,5 1,54
Kazakhstan 1,078500526 0,007887364 0,44 1,681818182
Kenya 1,039952786 0,031445338 0,41 1,024390244
Kyrgyzstan 1,036451717 0,011487047 0,6 1,183333333
Latvia 1,02535782 0,044807273 0,83 1,180722892
Lebanon 1,050444418 0,053181784 0,6 1,416666667
Lithuania (!) 1,076146779 0,241465686 0,72 1,333333333
Luxembourg (!) 1,080780192 0,197582319 0,93 1,064516129
Malaysia 1,018207799 0,034303031 0,42 1,452380952
Mauritius 1,081652351 0,082673843 0,79 1,215189873
Mexico 1,01253558 0,019098478 0,27 1,333333333
Mongolia 1,073924505 0,017542414 0,6 1,5
Morocco 1,054779512 0,005553697 0,38 1,236842105
Mozambique 1,062086076 0,047101957 0,48 1
Nepal 1,02819587 0,008319264 0,45 1,022222222
Netherlands 1,079123029 0,043322084 0,46 1,695652174
New Zealand 1,046855187 0,004522505 0,83 1,192771084
Nicaragua 1,034941617 0,021798159 0,64 1,078125
Nigeria 1,03609124 0,030236501 0,27 1,037037037
Norway 1,019025526 0,002937442 0,95 1,042105263
Pakistan 1,068995505 0,026598749 0,22 1,090909091
Panama 1,001556162 0,038760767 0,69 1,173913043
Paraguay 1,049861415 0,030603983 0,69 1,057971014
Peru 1,06820116 0,008122931 0,41 1,146341463
Philippines 1,045289953 0,035957042 0,28 1,142857143
Poland 1,035431925 0,035915212 0,39 1,435897436
Portugal 1,044901969 0,003371242 0,62 1,225806452
Republic of Korea 1,06776762 0,017697832 0,31 1,806451613
Republic of Moldova 1,009542233 0,033772795 0,55 1,454545455
Romania 1,011030974 0,079875735 0,47 1,319148936
Russian Federation 1,083901796 0,000876184 0,24 1,583333333
Saudi Arabia 1,099133179 0,080054524 0,27 2,666666667
Senegal 1,019171218 0,032304226 0,49 1,102040816
Serbia and Montenegro 1,042141223 0,00377058 0,63 1,26984127
Slovakia 1,062546838 0,08862799 0,61 1,508196721
Slovenia 1,00512965 0,039266211 0,81 1,222222222
South Africa 1,056957556 0,012656394 0,41 1,292682927
Spain 1,017435095 0,002522983 0,4 1,4
Sri Lanka 1,003117252 0,000607856 0,47 1,063829787
Sudan 1,00209188 0,060026529 0,38 1,052631579
Sweden 1,012941105 0,003898173 0,77 1,181818182
Switzerland 1,07331184 0,000878485 0,69 1,275362319
Syrian Arab Republic 1,048889583 0,03494333 0,38 1,552631579
Tajikistan 1,03533923 0,055646586 0,58 1,068965517
Thailand 1,012034765 0,002131649 0,33 1,242424242
The former Yugoslav Republic of Macedonia (!) 1,021262823 0,379532891 0,72 1,319444444
Togo 1,030339186 0,024874996 0,64 1,03125
Trinidad and Tobago 1,086840331 0,014786844 0,69 1,434782609
Tunisia 1,042654904 0,000806403 0,52 1,269230769
Turkey 1,0821418 0,019688124 0,35 1,228571429
Turkmenistan (!) 1,037854925 0,614587094 0,38 2,526315789
Ukraine 1,022041527 0,026351574 0,31 1,741935484
United Kingdom 1,028817158 0,017810219 0,3 1,733333333
United Republic of Tanzania 1,0319973 0,033120507 0,4 1,025
United States of America 1,001298132 0,001300399 0,19 1,578947368
Uruguay 1,025162405 0,027221297 0,73 1,123287671
Uzbekistan 1,105591195 0,008303345 0,36 1,638888889
Venezuela (Bolivarian Republic of) 1,044353155 0,012830255 0,45 1,333333333
Viet Nam 1,005825608 0,003779368 0,28 1,107142857
Yemen 1,072879389 0,058580323 0,3 1,566666667
Zambia 1,045147143 0,038548336 0,58 1,017241379
Zimbabwe 1,030974989 0,008692551 0,57 1,052631579

Source: author’s own calculations

[1] Charles W. Cobb, Paul H. Douglas, 1928, A Theory of Production, The American Economic Review, Volume 18, Issue 1, Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139 – 165

[2] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at

[3] Current annual use per capita, in tons of oil equivalent

[4] Annual caloric intake in mega-calories (1000 kcal) per capita, averaged over 1990 – 2014.

[5] Current annual use per capita, in tons of oil equivalent

[6] Annual caloric intake in mega-calories (1000 kcal) per capita, averaged over 1990 – 2014.

[7] This is the ratio of two exponents, namely: µ(total energy use per capita) / µ(renewable energy use per capita)