What are the practical outcomes of those hypotheses being true or false?

 

My editorial on YouTube

 

This is one of those moments when I need to reassess what the hell I am doing. Scientifically, I mean. Of course, it is good to reassess things existentially, too, every now and then, but for the moment I am limiting myself to science. Simpler and safer than life in general. Anyway, I have a financial scheme in mind, where local crowdfunding platforms serve to support the development of local suppliers in renewable energies. The scheme is based on the observable difference between prices of electricity for small users (higher), and those reserved for industrial-scale users (lower). I wonder if small consumers would be ready to pay the normal, relatively higher price in exchange for a package made of: a) electricity and b) shares in the equity of their suppliers.

I have a general, methodological hypothesis in mind, which I have been trying to develop over the last 2 years or so: collective intelligence. I hypothesise that collective behaviour observable in markets can be studied as a manifestation of collective intelligence. The purpose is to go beyond optimization and to define, with scientific rigour, the alternative, essentially equiprobable paths of change that a complex market can take. I think such an approach is useful when I am dealing with an economic model with a lot of internal correlation between variables, correlation so strong that those variables basically loop on each other. In such a situation, distinguishing independent variables from dependent ones becomes bloody hard, and methodologically doubtful.

On the grounds of literature, and my own experimentation, I have defined three essential traits of such collective intelligence: a) distinction between structure and instance, b) capacity to accumulate experience, and c) capacity to pass between different levels of freedom in social cohesion. I am using an artificial neural network, a multi-layer perceptron, in order to simulate such collectively intelligent behaviour.

The distinction between structure and instance means that we can devise something, make different instances of that something, each differing by some small details, and experiment with those different instances in order to devise an even better something. When I make a mechanical clock, I am a clockmaker. When I am able to have a critical look at this clock, make many different versions of it – all based on the same structural connections between mechanical parts, but differing from each other by subtle details – and experiment with those multiple versions, I become a meta-clock-maker, i.e. someone who can advise clockmakers on how to make clocks. The capacity to distinguish between structures and their instances is one of the basic skills we need in life. Autistic people have a big problem in that department, as they are mostly on the instance side. To a severely autistic person, me in a blue jacket and me in a brown jacket are two completely different people. Schizophrenic people are on the opposite end of the spectrum. To them, everything is one and the same structure, and they cannot cope with instances. Me in a blue jacket and me in a brown jacket are the same as my neighbour in a yellow jumper, and we are all instances of the same alien monster. I know you might think I am overstating, but my grandmother on my father’s side used to suffer from schizophrenia, and it was precisely that: to her, all strong smells were the manifestation of one and the same volatile poison sprayed in the air by THEM, and every person outside a circle of about 19 people closest to her was a member of THEM. Poor Jadwiga.

In economics, the distinction between structure and instance corresponds to the tension between markets and their underpinning institutions. Markets are fluid and changeable, they are like constant experimenting. Institutions give some gravitas and predictability to that experimenting. Institutions are structures, and markets are ritualized manners of multiplying and testing many alternative instances of those structures.

The capacity to accumulate experience means that as we experiment with different instances of different structures, we can store the information we collect in the process, and use this information in some meaningful way. My great compatriot, Alfred Korzybski, in his general semantics, used to designate it as ‘the capacity to bind time’. The thing is not as obvious as one could think. A Nobel Prize winner, Reinhard Selten, coined the concept of social games with imperfect recall (Harsanyi, Selten 1988[1]). He argued that as we, collective humans, accumulate and generalize experience about what the hell is going on, from time to time we shake out that big folder, and pick the pages endowed with the most meaning. All the remaining stuff, judged less useful at the moment, is somehow archived in culture, so that it basically stays there, but becomes much harder to access and utilise. The capacity to accumulate experience is largely about the way of accumulating experience, and of doing that from-time-to-time archiving. We can observe this basic distinction in everyday life. There are things that we learn sort of incrementally. When I learn to play piano – which I wish I were learning right now, cool stuff – I practice, I practice, I practice and… I accumulate learning from all those practices, and one day I give a concert, in a pub. Still, other things, I learn them sort of haphazardly. Relationships are a good example. I am with someone, one day I am mad at her, the other day I see her as the love of my life, then, again, she really gets on my nerves, and then I think I couldn’t live without her, etc. Bit of a bumpy road, isn’t it? Yes, there is some incremental learning, but you become aware of it after like 25 years of conjoint life. Earlier on, you just need to suck it up and keep going.

There is an interesting theory in economics, labelled « semi-martingale » (see for example: Malkiel, Fama 1970[2]). When we observe changes in stock prices, in a capital market, we tend to say they are random, but they are not. You can test it. If the price were really random, it should fan out according to the pattern of normal distribution. This is what we call a full martingale. Any real price you observe actually swings less broadly than normal distribution would suggest: this is a semi-martingale. Still, anyone with any experience in investment knows that prediction inside the semi-martingale is always burdened with a s**tload of error. When you observe stock prices over a long time, like 2 or 3 years, you can see a sequence of distinct semi-martingales. From September through December it swings inside one semi-martingale, then the Ghost of Past Christmases shakes it badly, people panic, and later it settles into another semi-martingale, slightly shifted from the preceding one, and there it goes, semi-martingaling for another dozen weeks, etc.
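Here is a minimal sketch of that test in Python; the mean-reverting process standing in for an observed price, and all the numbers, are placeholders of mine, not actual market data:

```python
# A toy version of the test described above: a full martingale
# (Gaussian random walk) fans out with time, whilst a price that
# mean-reverts - one possible stand-in for real, observed prices -
# swings less broadly. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(42)
t = 250                                    # about one year of trading days

full_martingale = np.cumsum(rng.normal(0.0, 0.01, size=t))

observed = np.zeros(t)                     # hypothetical observed price path
for i in range(1, t):
    observed[i] = 0.7 * observed[i - 1] + rng.normal(0.0, 0.01)

print("full martingale spread:", full_martingale.std().round(4))
print("observed spread       :", observed.std().round(4))
# a systematically narrower observed spread is the semi-martingale signature
```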

The central theoretical question in this economic theory, and a couple of others, spells: do we learn something durable through local shocks? Does a sequence of economic shocks, of whatever type, make a learning path similar to the incremental learning of piano playing? There are strong arguments in favour of both possible answers. If you get your face punched, over and over again, you must be a really dumb asshole not to learn anything from that. Still, there is that phenomenon called systemic homeostasis: many systems, social structures included, tend to fight for stability when shaken, and they are frequently successful. The memory of shocks and revolutions is frequently erased, and they are assumed to have never existed.

The issue of different levels in social cohesion refers to the so-called swarm theory (Stradner et al 2013[3]). This theory studies collective intelligence by reference to animals, which we know are intelligent just collectively. Bees, ants, hornets: all those beasts, when acting individually, are as dumb as f**k. Still, when they gang up, they develop amazingly complex patterns of action. That’s not all. Those complex patterns of theirs fall into three categories, applicable to human behaviour as well: static coupling, dynamic correlated coupling, and dynamic random coupling.

When we coordinate by static coupling, we always do things together in the same way. These are recurrent rituals, without much room for change. Many legal rules, and the institutions they form the basis of, are examples of static coupling. You want to put some equity-based securities in circulation? Good, you do this, and this, and this. You haven’t done the third this? Sorry, man, but you cannot call it a day yet. When we need to change the structure of what we do, we should somehow loosen that static coupling and try something new. We should dissolve the existing business, which is static coupling, and look to create something new. When we do so, we can sort of stay in touch with our customary business partners, and after some circling and asking around we form a new business structure, involving people we clearly coordinate with. This is dynamic correlated coupling. Finally, we can decide to sail completely uncharted waters, and take our business concept to China, or to New Zealand, and try to work with completely different people. What we do, in such a case, is emit some sort of business signal into the environment, and wait for a response from whoever is interested. This is dynamic random coupling. Attracting random followers to a new YouTube channel is very much an example of the same.

At the level of social cohesion, we can be intelligent in two distinct ways. On the one hand, we can keep the given pattern of collective association at the same level, i.e. one of the three I have just mentioned. We keep it ritualized and static, or somehow loose and dynamically correlated, or, finally, we take care of not ritualizing too much and keep it deliberately at the level of random associations. On the other hand, we can shift between different levels of cohesion. We take some institutions, we start experimenting with making them more flexible, at some point we possibly make them as free as possible, and we gain experience, which, in turn, allows us to create new institutions.

When applying the issue of social cohesion in collective intelligence to economic phenomena, we can use a little trick, to be found, for example, in de Vincenzo et al (2018[4]): we assume that quantitative economic variables, which we normally perceive as just numbers, are manifestations of distinct collective decisions. When I see the price of energy at, let’s say, €0,17 per kilowatt hour, I consider it as the outcome of collective decision-making. At this point, it is useful to remember the fundamentals of intelligence. We perceive our own, individual decisions as outcomes of our independent thinking. We associate them with the fact of wanting something, and being apprehensive regarding something else, etc. Still, neurologically, those decisions are outcomes of some neurons firing in a certain sequence. Same for economic variables, i.e. mostly prices and quantities: they are the fruit of interactions between the members of a community. When I buy apples in the local marketplace, I just buy them for a certain price, and, if they look bad, I just don’t buy. This is not any form of purposeful influence upon the market. Still, when 10 000 people like me do the same, sort of ‘buy when the price is good, don’t when the apple is bruised’, a patterned process emerges. The resulting price of apples is the outcome of that process.

Social cohesion can be viewed as association between collective decisions, not just between individual actions. The resulting methodology is made, roughly speaking, of three steps. Step one: I put all the economic variables in my model over a common denominator (a common scale of measurement). Step two: I calculate the relative cohesion between them with the general concept of a fitness function, which I can express, for example, as the Euclidean distance between local values of the variables in question. Step three: I calculate the average of those Euclidean distances, and I calculate its reciprocal, like « 1/x ». This reciprocal is the direct measure of cohesion between decisions, i.e. the higher the value of this precise « 1/x », the more cohesion between different processes of economic decision-making.
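Here is a minimal sketch of those three steps in Python, on a toy dataset made up for illustration:

```python
# A toy run of the three steps; the dataset is made up for illustration.
import numpy as np

# Step one: common denominator - scale each variable to [0, 1]
raw = np.array([[0.26, 0.48, 0.71],    # observations of three variables,
                [0.30, 0.52, 0.69],    # e.g. a price, a cost, a quantity
                [0.28, 0.50, 0.74]])
scaled = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

# Step two: fitness as Euclidean distance between pairs of variables
n = scaled.shape[1]
distances = [np.linalg.norm(scaled[:, i] - scaled[:, j])
             for i in range(n) for j in range(i + 1, n)]

# Step three: cohesion as the reciprocal of the average distance
cohesion = 1.0 / np.mean(distances)
print("cohesion =", round(cohesion, 4))  # higher value = more cohesion
```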

Now, those of you with a sharp scientific edge could say: “Wait a minute, doc. How do you know we are talking about different processes of decision-making? How do you know that variable X1 comes from a different process than variable X2?”. This is precisely my point. The swarm theory tells me that if I can observe changing cohesion between those variables, I can reasonably hypothesise that their underlying decision-making processes are distinct. If, on the other hand, their mutual Euclidean distance stays the same, I hypothesise that they come from the same process.

Summing up, here is the general drift: I take an economic model and I formulate three hypotheses as to the occurrence of collective intelligence in that model. Hypothesis #1: different variables of the model come from different processes of collective decision-making.

Hypothesis #2: the economic system underlying the model has the capacity to learn as a collective intelligence, i.e. to durably increase or decrease the mutual cohesion between those processes. Hypothesis #3: collective learning in the presence of economic shocks is different from learning in the absence of such shocks.

They look nice, those hypotheses. Now, why the hell should anyone bother? I mean, what are the practical outcomes of those hypotheses being true or false? In my experimental perceptron, I express the presence of economic shocks by using the hyperbolic tangent as the neural function of activation, whilst the absence of shocks (or the presence of countercyclical policies) is expressed with a sigmoid function. Those two yield very different processes of learning. Long story short, the sigmoid learns more, i.e. it accumulates more local errors (thus more experimental material for learning), and it generates a steady trend towards lower cohesion between variables (decisions). The hyperbolic tangent accumulates less experiential material (it learns less), and it is quite random in arriving at any tangible change in cohesion. The collective intelligence I mimicked with that perceptron looks like the kind of intelligence which, when going through shocks, learns only the skill of returning to the initial position after the shock: it does not create any lasting type of change. The latter happens only when my perceptron has a device to absorb and alleviate shocks, i.e. the sigmoid neural function.

When I have my perceptron explicitly feeding back that cohesion between variables (i.e. feeding back the fitness function considered as a local error), it learns less and changes less, but does not necessarily go through fewer shocks. When the perceptron does not care about feeding back the observable distance between variables, there is more learning and more change, but not more shocks. The overall fitness function of my perceptron changes over time, and the ‘over time’ part depends on the kind of neural activation function I use. In the case of the hyperbolic tangent, it is brutal change over a short time, eventually coming back to virtually the same point it started from. In the hyperbolic tangent, the passage between various levels of association, according to the swarm theory, is super quick, but not really productive. In the sigmoid, it is definitely a steady trend of decreasing cohesion.

I want to know what the hell I am doing. I feel I have made a few steps towards that understanding, but getting to know what I am doing proves really hard.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your suggestions on two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Harsanyi, J. C., & Selten, R. (1988). A General Theory of Equilibrium Selection in Games. Cambridge, MA: MIT Press.

[2] Malkiel, B. G., & Fama, E. F. (1970). Efficient capital markets: A review of theory and empirical work. The Journal of Finance, 25(2), 383-417.

[3] Stradner, J., Thenius, R., Zahadat, P., Hamann, H., Crailsheim, K., & Schmickl, T. (2013). Algorithmic requirements for swarm intelligence in differently coupled collective systems. Chaos, Solitons & Fractals, 50, 100-114.

[4] De Vincenzo, I., Massari, G. F., Giannoccaro, I., Carbone, G., & Grigolini, P. (2018). Mimicking the collective intelligence of human groups as an optimization tool for complex problems. Chaos, Solitons & Fractals, 110, 259-266.

How can I possibly learn on that thing I have just become aware I do?

 

My editorial on YouTube

 

I keep working on the application of neural networks to simulate the workings of collective intelligence in humans. I am currently macheting my way through the model proposed by de Vincenzo et al in their article entitled ‘Mimicking the collective intelligence of human groups as an optimization tool for complex problems’ (2018[1]). In the spirit of my own research, I am trying to use optimization tools for a slightly different purpose, that is for simulating the way things are done. It usually means that I relax some assumptions which come along with said optimization tools, and I just watch what happens.

De Vincenzo et al propose a model of artificial intelligence, which combines a classical perceptron, such as the one I have already discussed on this blog (see « More vigilant than sigmoid », for example), with a component of deep learning based on the observable divergences in decisions. In that model, social agents strive to minimize their divergences and to achieve relative consensus. Mathematically, it means that each decision is characterized by a fitness function, i.e. a function of mathematical distance from the other decisions made in the same population.

I take the tensors I have already been working with, namely the input tensor TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR} and the output tensor TO = {QR/N; QNR/N}. Once again, consult « More vigilant than sigmoid » as for the meaning of those variables. In the spirit of the model presented by De Vincenzo et al, I assume that each variable in my tensors is a decision. Thus, for example, PA;R, i.e. the basic price of energy from renewable sources, with which small consumers are charged, is the tangible outcome of a collective decision. Same for the levelized cost of electricity from renewable sources, the LCOER, etc. For each i-th variable xi in TI and TO, I calculate its relative fitness to the overall universe of decisions, as the average of itself and of its Euclidean distances to the other decisions. It looks like:

 

V(xi) = (1/N)*{xi + [(xi – xi;1)²]^0,5 + [(xi – xi;2)²]^0,5 + … + [(xi – xi;K)²]^0,5}

 

…where N is the total number of variables in my tensors, and K = N – 1.

 

In the next step, I can calculate the average of averages, thus summing up all the individual V(xi)’s and dividing that total by N. That average V*(x) = (1/N) * [V(x1) + V(x2) + … + V(xN)] is the measure of aggregate divergence between individual variables considered as decisions.

Now, I imagine two populations: one who actively learns from the observed divergence of decisions, and another one who doesn’t really. The former is represented with a perceptron that feeds back the observable V(xi)’s into consecutive experimental rounds. Still, it is just feeding that V(xi) back into the loop, without any a priori ideas about it. The latter is more or less what it already is: it just yields those V(xi)’s but does not do much about them.

I needed a bit of thinking as to how exactly that feeding back of the fitness function should look. In the algorithm I finally came up with, it looks different for the input variables, on the one hand, and for the output ones, on the other. You might remember, from the reading of « More vigilant than sigmoid », that my perceptron, in its basic version, learns by estimating local errors observed in the last round of experimentation, and then adding those local errors to the values of input variables, just to make them roll once again through the neural activation function (sigmoid or hyperbolic tangent), and see what happens.

As I upgrade my perceptron with the estimation of fitness function V(xi), I ask: who estimates the fitness function? What kind of question is that? Well, a basic one. I have that neural network, right? It is supposed to be intelligent, right? I add a function of intelligence, namely that of estimating the fitness function. Who is doing the estimation: my supposedly intelligent network or some other intelligent entity? If it is an external intelligence, mine, for a start, it just estimates V(xi), sits on its couch, and watches the perceptron struggling through the meanders of attempts to be intelligent. In such a case, the fitness function is like sweat generated by a body. The body sweats but does not have any way of using the sweat produced.

Now, if the V(xi) is to be used for learning, the perceptron is precisely the incumbent intelligent structure supposed to use it. I see two basic ways for the perceptron to do that. First of all, the input neuron of my perceptron can capture the local fitness functions on input variables and add them, as additional information, to the previously used values of input variables. Second of all, the second hidden neuron can add the local fitness functions, observed on output variables, to the exponent of the neural activation function.

I explain. I am a perceptron. I start my adventure with two tensors: input TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR} and output TO = {QR/N; QNR/N}. The initial values I start with are slightly modified in comparison to what was being processed in « More vigilant than sigmoid ». I assume that the initial market of renewable energies – thus most variables of quantity with ‘R’ in subscript – is quasi inexistent. More specifically, QR/N = 0,01 and  QNR/N = 0,99 in output variables, whilst in the input tensor I have capital invested in capacity IR = 0,46 (thus a readiness to go and generate from renewables), and yet the crowdfunding flow K is KR = 0,01 for renewables and KNR = 0,09 for non-renewables. If you want, it is a sector of renewable energies which is sort of ready to fire off but hasn’t done anything yet in that department. All in all, I start with: LCOER = 0,26; LCOENR = 0,48; KR = 0,01; KNR = 0,09; IR = 0,46; INR = 0,99; PA;R = 0,71; PA;NR = 0,46; PB;R = 0,20; PB;NR = 0,37; QR/N = 0,01; and QNR/N = 0,99.

Being a pure perceptron, I am dumb as f**k. I can learn by pure experimentation. I have ambitions, though, to be smarter, thus to add some deep learning to my repertoire. I estimate the relative mutual fitness of my variables according to the V(xi) formula given earlier, as the arithmetical average of each variable separately and of its Euclidean distances to the others. With the initial values as given, I observe: V(LCOER; t0) = 0,302691788; V(LCOENR; t0) = 0,310267104; V(KR; t0) = 0,410347388; V(KNR; t0) = 0,363680721; V(IR; t0) = 0,300647174; V(INR; t0) = 0,652537097; V(PA;R; t0) = 0,441356844; V(PA;NR; t0) = 0,300683099; V(PB;R; t0) = 0,316248176; V(PB;NR; t0) = 0,293252713; V(QR/N; t0) = 0,410347388; and V(QNR/N; t0) = 0,570485945. All that stuff, put together into an overall fitness estimation, gives the average V*(x; t0) = 0,389378787.
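For the record, here is a short Python sketch that reproduces that estimation. Since [(a – b)²]^0,5 is simply |a – b|, each V(xi) is the average of the variable itself and its absolute distances to the eleven others; as the initial values quoted above are rounded, the printed figures match the quoted V(xi; t0)’s only approximately:

```python
# Sketch of the V(xi) estimation; initial values as quoted in the text
# (rounded to two decimals, so outputs match the quoted V's only
# approximately).
import numpy as np

names = ["LCOER", "LCOENR", "KR", "KNR", "IR", "INR",
         "PA;R", "PA;NR", "PB;R", "PB;NR", "QR/N", "QNR/N"]
values = np.array([0.26, 0.48, 0.01, 0.09, 0.46, 0.99,
                   0.71, 0.46, 0.20, 0.37, 0.01, 0.99])
N = len(values)

# np.abs(v - values) includes |v - v| = 0, so the sum already spans
# the K = N - 1 genuine distances
V = np.array([(v + np.abs(v - values).sum()) / N for v in values])

for name, vi in zip(names, V.round(4)):
    print(name, vi)
print("V*(x; t0) =", V.mean().round(4))   # about 0,389, as in the text
```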

I ask myself: what happens to that fitness function as I process information with my two alternative neural functions, the sigmoid or the hyperbolic tangent? I jump to experimental round 1500, thus to t1500, and I watch. With the sigmoid, I have V(LCOER; t1500) = 0,359529289; V(LCOENR; t1500) = 0,367104605; V(KR; t1500) = 0,467184889; V(KNR; t1500) = 0,420518222; V(IR; t1500) = 0,357484675; V(INR; t1500) = 0,709374598; V(PA;R; t1500) = 0,498194345; V(PA;NR; t1500) = 0,3575206; V(PB;R; t1500) = 0,373085677; V(PB;NR; t1500) = 0,350090214; V(QR/N; t1500) = 0,467184889; and V(QNR/N; t1500) = 0,570485945, with average V*(x; t1500) = 0,441479829.

Hmm, interesting. Working my way through intelligent cognition with a sigmoid, after 1500 rounds of experimentation, I have somehow decreased the mutual fitness of decisions I make through individual variables. Those V(xi)’s have changed. Now, let’s see what it gives when I do the same with the hyperbolic tangent: V(LCOER; t1500) = 0,347752478; V(LCOENR; t1500) = 0,317803169; V(KR; t1500) = 0,496752021; V(KNR; t1500) = 0,436752021; V(IR; t1500) = 0,312040791; V(INR; t1500) = 0,575690006; V(PA;R; t1500) = 0,411438698; V(PA;NR; t1500) = 0,312052766; V(PB;R; t1500) = 0,370346458; V(PB;NR; t1500) = 0,319435252; V(QR/N; t1500) = 0,496752021; and V(QNR/N; t1500) = 0,570485945, with average V*(x; t1500) = 0,413941802.

Well, it is becoming more and more interesting. Being a dumb perceptron, I can, nevertheless, create two different states of mutual fitness between my decisions, depending on the kind of neural function I use. I want to have a bird’s eye view of the whole thing. How can a perceptron have a bird’s eye view of anything? Simple: it rents a drone. How can a perceptron rent a drone? Well, how smart do you have to be to rent a drone? Anyway, it gives something like the graph below:

[Graph: the aggregate fitness function V*(x) over consecutive rounds of experimentation, sigmoid vs hyperbolic tangent]

Wow! So this is what I do, as a perceptron, and what I haven’t been aware of so far? Amazing. When I think in sigmoid, I sort of consistently increase the relative distance between my decisions, i.e. I decrease their mutual fitness. The sigmoid, that function which sort of calms down any local disturbance, leads to a decision-making process that is less coherent, more prone to embracing a little chaos. The hyperbolic tangent thinking is different. It occasionally sort of stretches across a broader spectrum of fitness in decisions, but as soon as it does so, it seems afraid of its own actions, and returns to the initial level of V*(x). Please note that, as a perceptron, I am almost alive, and I produce slightly different outcomes in each instance of myself. The point is that in the line corresponding to the hyperbolic tangent, the comb-like pattern of small oscillations can stretch and move from instance to instance. Still, it keeps the general form of a comb.

OK, so this is what I do, and now I ask myself: how can I possibly learn on that thing I have just become aware I do? As a perceptron, endowed with this precise logical structure, I can do one thing with information: I can arithmetically add it to my input. Still, having some ambitions for evolving, I attempt to change my logical structure, and I risk incorporating the observable V(xi) into my neural activation function. Thus, the first thing I do with that new learning is to top the values of input variables with the local fitness functions observed in the previous round of experimenting. I am doing it already with local errors observed in outcome variables, so why not double the dose of learning? Anyway, it goes like: xi(t0) = xi(t-1) + e(xi; t-1) + V(xi; t-1). It looks interesting, but I am still using just a fraction of information about myself, i.e. just that about input variables. Here is where I start being really ambitious. In the equation of the sigmoid function, I change s = 1 / [1 + exp(∑xi*Wi)] into s = 1 / [1 + exp(∑xi*Wi + V(To))], where V(To) stands for the local fitness functions observed in output variables. I do the same by analogy in my version based on the hyperbolic tangent. The th = [exp(2*∑xi*wi) – 1] / [exp(2*∑xi*wi) + 1] turns into th = {exp[2*∑xi*wi + V(To)] – 1} / {exp[2*∑xi*wi + V(To)] + 1}. I do what I know how to do, i.e. adding information from fresh observation, and I apply it to change the structure of my neural function.
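In code, that upgrade can look more or less like the sketch below; the functions simply transcribe the two formulas above, and the inputs, weights and V values are placeholders:

```python
# A sketch of the upgraded neural functions, with V(To) standing for
# the fitness observed on output variables; inputs, weights and
# V-values are placeholders, not the actual experimental data.
import numpy as np

def sigmoid_with_V(x, w, V_To):
    # s = 1 / (1 + exp(sum(xi*Wi) + V(To))), as written in the text
    return 1.0 / (1.0 + np.exp(np.dot(x, w) + V_To))

def tanh_with_V(x, w, V_To):
    # th = (exp(2*sum(xi*wi) + V(To)) - 1) / (exp(2*sum(xi*wi) + V(To)) + 1)
    e = np.exp(2.0 * np.dot(x, w) + V_To)
    return (e - 1.0) / (e + 1.0)

def update_input(x_prev, e_prev, V_prev):
    # xi(t) = xi(t-1) + e(xi; t-1) + V(xi; t-1): local error and local
    # fitness both topped onto the input variables
    return x_prev + e_prev + V_prev
```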

All those ambitious changes in myself, put together, change my pattern of learning as shown in the graph below:

When I think sigmoid, the fact of feeding back my own fitness function does not change much. It makes the learning curve a bit steeper in the early experimental rounds, and makes it asymptotic to a slightly lower threshold in the last rounds, as compared to learning without feedback on V(xi). Yet, it is the same old sigmoid, with just its sleeves ironed. On the other hand, the hyperbolic tangent thinking changes significantly. What used to look like a comb, without feedback, now looks much more aggressive, like a plough on steroids. There is something like a complex cycle of learning on the internal cohesion of decisions made. Generally, feeding back the observable V(xi) increases the finally achieved cohesion in decisions, and, at the same time, it reduces the cumulative error gathered by the perceptron. With that type of feedback, the cumulative error of the sigmoid, which normally hits around 2,2 in this case, falls to like 0,8. With the hyperbolic tangent, cumulative errors which used to be 0,6 – 0,8 without feedback fall to 0,1 – 0,4 with feedback on V(xi).

 

The (provisional) piece of wisdom I can have as my takeaway is twofold. Firstly, whatever I do, a large chunk of perceptual learning leads to a bit less cohesion in my decisions. As I learn by experience, I allow myself more divergence in decisions. Secondly, looping on that divergence, and including it explicitly in my pattern of learning leads to relatively more cohesion at the end of the day. Still, more cohesion has a price – less learning.

 


[1] De Vincenzo, I., Massari, G. F., Giannoccaro, I., Carbone, G., & Grigolini, P. (2018). Mimicking the collective intelligence of human groups as an optimization tool for complex problems. Chaos, Solitons & Fractals, 110, 259-266.

More vigilant than sigmoid

My editorial on YouTube

 

I keep working on the application of neural networks as simulators of collective intelligence. The particular field of research I am diving into is the sector of energy, its shift towards renewable energies, and the financial scheme I invented some time ago, which I called EneFin. As for that last one, you can consult « The essential business concept seems to hold », in order to grasp the outline.

I continue developing the line of research I described in my last update in French: « De la misère, quoi ». There are observable differences in the prices of energy according to the size of the buyer. In many countries – practically in all the countries of Europe – there are two distinct price brackets. One, which I further designate as PB, is reserved for contracts with big consumers of energy (factories, office buildings, etc.) and it is clearly lower. The other, further called PA, is applied to small buyers, mainly households and really small businesses.

As an economist, I have that intuitive thought in the presence of price forks: that differential in prices is some kind of value. If it is value, why not give it some financial spin? I came up with the idea of the EneFin contract. People buy energy, in the amount Q, from a local supplier who sources it from renewables (water, wind, etc.), and they pay the price PA, thus generating a financial flow equal to Q*PA. That flow buys two things: energy priced at PB, and participatory titles in the capital of their supplier, for the differential Q*(PA – PB). I imagine some kind of crowdfunding platform, which could channel the amount of capital K = Q*(PA – PB).
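Just to make the arithmetic palpable, a toy example with made-up numbers for Q, PA and PB:

```python
# Toy numbers for one EneFin contract; Q, PA and PB are illustrative.
Q = 3500           # kWh bought over the year
PA = 0.17          # EUR per kWh, small-user price
PB = 0.11          # EUR per kWh, big-user price

payment = Q * PA               # the household pays 595.0 EUR
energy_at_PB = Q * PB          # energy valued at the big-user price: 385.0 EUR
K = Q * (PA - PB)              # equity component: 210.0 EUR of capital
assert abs(payment - (energy_at_PB + K)) < 1e-9
```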

That K remains in some sort of fluid relationship to I, or capital invested in the productive capacity of energy suppliers. Fluid relationship means that each of those capital balances can date other capital balances, no hard feelings held. As we talk (OK, I talk) about prices of energy and capital invested in capacity, it is worth referring to LCOE, or Levelized Cost Of Electricity. The LCOE is essentially the marginal cost of energy, and a no-go-below limit for energy prices.

I want to simulate the possible process of introducing that general financial concept, namely K = Q*(PA – PB), into the market of energy, in order to promote the development of diversified networks, made of local suppliers in renewable energy.

Here comes my slightly obsessive methodological idea: use artificial intelligence in order to simulate the process. In the classical economic method, I make a model, I take empirical data, I regress some of it on another some of it, and I come up with coefficients of regression, and they tell me how the thing should work if we were living in a perfect world. Artificial intelligence opens a different perspective. I can assume that my model is a logical structure which keeps experimenting with itself, and we don’t know where the hell exactly that experimentation leads. I want to use neural networks in order to represent the exact way that social structures can possibly experiment with that K = Q*(PA – PB) thing. Instead of optimizing, I want to see the way that possible optimization can occur.

I have that simple neural network, which I already referred to in « The point of doing manually what the loop is supposed to do » and which is basically quite dumb, as it does not do any abstraction. Still, it nicely experiments with logical structures. I am sketching its logical structure in the picture below. I distinguish four layers of neurons: input, hidden 1, hidden 2, and output. When I say ‘layers’, it is a bit of grand language. For the moment, I am working with one single neuron in each layer. It is more of a synaptic chain.

Anyway, the input neuron feeds data into the chain. In the first round of experimentation, it feeds the source data in. In consecutive rounds of learning through experimentation, that first neuron assesses and feeds back local errors, measured as discrepancies between the output of the output neuron, and the expected values of output variables. The input neuron is like the first step in a chain of perception, in a nervous system: it receives and notices the raw external information.

The hidden layers – or the hidden neurons in the chain – modify the input data. The first hidden neuron generates quasi-random weights, which the second hidden neuron attributes to the input variables. Just as in a nervous system, the input stimuli are assessed as to their relative importance. In the original algorithm of the perceptron, which I used to design this network, those two functions, i.e. generating the random weights and attributing them to input variables, were fused in one equation. Still, my fundamental intent is to use neural networks to simulate collective intelligence, and I intuitively guess those two functions are somehow distinct. Pondering the importance of things is one action, and using that ponderation for practical purposes is another. It is like scientists debating the way to run a policy, and the government having the actual thing done. These are two separate paths of action.

Whatever. What the second hidden neuron produces is a compound piece of information: the summation of input variables multiplied by random weights. The output neuron transforms this compound data through a neural function. I prepared two versions of this network, with two distinct neural functions: the sigmoid, and the hyperbolic tangent. As I found out, the way they work is very different, just as the results they produce. Once the output neuron generates the transformed data – the neural output – the input neuron measures the discrepancy between the original, expected values of output variables, and the values generated by the output neuron. The exact way of computing that discrepancy is made of two operations: calculating the local derivative of the neural function, and multiplying that derivative by the residual difference ‘original expected output value minus output value generated by the output neuron’. The discrepancy calculated this way is considered a local error, and is fed back into the input neuron as an addition to the value of each input variable.

Before I go into describing the application I made of that perceptron, as regards my idea for a financial scheme, I want to delve into the mechanism of learning triggered through repeated looping of that logical structure. The input neuron measures the arithmetical difference between the expected values of output and the output generated by the network in the preceding round of experimentation, and that difference is multiplied by the local derivative of said output. Derivative functions, in their deepest, Newtonian sense, are magnitudes of change in something else, i.e. in their base function. In the Newtonian perspective, everything that happens can be seen either as change (derivative) in something else, or as an integral (an aggregate that changes its shape) of still something else. When I multiply the local deviation from expected values by the local derivative of the estimated value, I assume this deviation is as important as the local magnitude of change in its estimation. The faster things happen, the more important they are, so to say. My perceptron learns by assessing the magnitude of local changes it induces in its own estimations of reality.
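Here is a minimal sketch of that loop in Python; the dimensions, the expected output value, and the uniform draw of weights are illustrative placeholders, and I use the standard sigmoid, whose local derivative is s*(1 – s):

```python
# Minimal sketch of the perceptron loop described above; sizes, the
# expected output, and the weight distribution are placeholders.
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(0, 1, size=10)    # input tensor, standardized over [0, 1]
expected = 0.95                   # expected value of the output variable

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # compound input in the negative exponent

for _ in range(5000):
    w = rng.uniform(0, 1, size=x.size)   # hidden neuron 1: quasi-random weights
    out = sigmoid(np.dot(x, w))          # hidden neuron 2 + output neuron
    residual = expected - out            # expected minus generated output
    local_error = out * (1.0 - out) * residual   # derivative times residual
    x = x + local_error                  # feed the local error back into inputs

print("inputs after learning:", x.round(4))
```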

I took that general logical structure of the perceptron, and I applied it to my core problem, i.e. the possible adoption of the new financial scheme in the market of energy. Here comes sort of an originality in my approach. The basic way of using neural networks is to give them a substantial set of real data as learning material, make them learn on that data, and then make them optimize a hypothetical set of data. Here you have those 20 old cars, take them to pieces and try to put them back together, observe all the anomalies you have thus created, and then make me a new car on the grounds of that learning. I adopted a different approach. My focus is to study the process of learning in itself. I took just one set of actual input values, exogenous to my perceptron, something like an initial situation. I ran 5000 rounds of learning in the perceptron, on the basis of that initial set of values, and I observed how learning takes place.

My initial set of data is made of two tensors: input TI and output TO.

The thing I am the most focused on is the relative abundance of energy supplied from renewable sources. I express the ‘abundance’ part mathematically as the coefficient of energy consumed per capita, or Q/N. The relative bend towards renewables, or towards the non-renewables is apprehended as the distinction between renewable energy QR/N consumed per capita, and the non-renewable one, the QNR/N, possibly consumed by some other capita. Hence, my output tensor is TO = {QR/N; QNR/N}.

I hypothesise that TO is being generated by input made of prices, costs, and capital outlays. I split my price fork PA – PB (price for the small buyers minus price for the big ones) into renewables and non-renewables, namely into: PA;R, PA;NR, PB;R, and PB;NR. I mirror the distinction in prices with that in the cost of energy, and so I define LCOER and LCOENR. I want to create a financial scheme that generates a crowdfunded stream of capital K, to finance new productive capacities, and I want it to finance renewable energies, so I call it KR. Still, some other people, like my compatriots in Poland, might be so attached to fossils that they would be willing to crowdfund new installations based on non-renewables. Thus, I need to take a KNR into account in the game. When I say capital, and I say LCOE, I sort of feel compelled to say aggregate investment in productive capacity, in renewables, and in non-renewables, and I call it, respectively, IR and INR. All in all, my input tensor spells TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR}.

The next step is scale and measurement. The neural functions I use in my perceptron like having their input standardized. Their tastes in standardization differ a little. The sigmoid likes it nicely spread between 0 and 1, whilst the hyperbolic tangent, the more reckless of the two, tolerates −1 ≤ x ≤ 1. I chose to standardize the input data between 0 and 1, so as to make it fit both. My initial thought was to aim for an energy market with great abundance of renewable energy, and a relatively declining supply of non-renewables. I generally trust my intuition, only I like to leverage it with a bit of chaos, every now and then, and so I ran some pseudo-random strings of values and I chose an output tensor made of TO = {QR/N = 0,95; QNR/N = 0,48}.

That state of output is supposed to be somehow logically connected to the state of input. I imagined a market, where the relative abundance in the consumption of, respectively, renewable energies and non-renewable ones is mostly driven by growing demand for the former, and a declining demand for the latter. Thus, I imagined relatively high a small-user price for renewable energy and a large fork between that PA;R and the PB;R. As for non-renewables, the fork in prices is more restrained (than in the market of renewables), and its top value is relatively lower. The non-renewable power installations are almost fed up with investment INR, whilst the renewables could still do with more capital IR in productive assets. The LCOENR of non-renewables is relatively high, although not very: yes, you need to pay for the fuel itself, but you have economies of scale. As for the LCOER for renewables, it is pretty low, which actually reflects the present situation in the market.

The last part of my input tensor regards the crowdfunded capital K. I assumed two different, initial situations. Firstly, it is virtually no crowdfunding, thus a very low K. Secondly, some crowdfunding is already alive and kicking, and it is sort of slightly above the half of what people expect in the industry.

Once again, I applied those qualitative assumptions to a set of pseudo-random values between 0 and 1. Here comes the result, in the table below.

 

Table 1 – The initial values for learning in the perceptron

| Tensor | Variable | Market with virtually no crowdfunding | Market with significant crowdfunding |
|---|---|---|---|
| Input TI | LCOER | 0,26 | 0,26 |
| | LCOENR | 0,48 | 0,48 |
| | KR | 0,01 | 0,56 |
| | KNR | 0,01 | 0,52 |
| | IR | 0,46 | 0,46 |
| | INR | 0,99 | 0,99 |
| | PA;R | 0,71 | 0,71 |
| | PA;NR | 0,46 | 0,46 |
| | PB;R | 0,20 | 0,20 |
| | PB;NR | 0,37 | 0,37 |
| Output TO | QR/N | 0,95 | 0,95 |
| | QNR/N | 0,48 | 0,48 |

(The KR and KNR rows, flagged in the original table, are where the two scenarios differ.)

 

The way the perceptron works means that it generates and feeds back local errors in each round of experimentation. Logically, over the 5000 rounds of experimentation, each input variable gathers those local errors, like a snowball rolling downhill. I take the values of input variables from the last, i.e. the 5000th round: they have the initial values, from the table above, and, on top of them, there is the cumulative error from the 5000 experiments. How to standardize them, so as to make them comparable with the initial ones? I observe: all those final values carry the same cumulative error, across the whole TI input tensor. I choose a simple method of standardization. As the initial values were standardized over the interval between 0 and 1, I standardize the outcoming values over the interval 0 ≤ x ≤ (1 + cumulative error).
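In numbers, one way to read that re-standardization, which reproduces Table 2 below up to rounding:

```python
# One reading of the re-standardization: the value leaves the loop
# carrying the cumulative error on top of its initial [0, 1] scale,
# and is rescaled over [0, 1 + cumulative error].
initial = 0.26     # LCOER at the start (Table 1)
cum_err = 2.11     # cumulative error, Table 2, Instance 1
standardized = (initial + cum_err) / (1.0 + cum_err)
print(round(standardized, 4))   # ~0.762, against the 0.7617 in Table 2
```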

I observe the unfolding of cumulative error along the path of learning, made of 5000 steps. There is a peculiarity in each of the neural functions used: the sigmoid, and the hyperbolic tangent. The sigmoid learns in a slightly Hitchcockian way. Initially, local errors just rocket up. It is as if that sigmoid was initially yelling: ‘F******k! What a ride!’. Then, the value of errors drops very sharply, down to something akin to a vanishing tremor, and starts hovering lazily over some implicit asymptote. Hyperbolic tangent learns differently. It seems to do all it can to minimize local errors whenever it is possible. Obviously, it is not always possible. Every now and then, that hyperbolic tangent produces an explosively high value of local error, like a sudden earthquake, just to go back into forced calm right after. You can observe those two radically different ways of learning in the two graphs below.

Two ways of learning – the sigmoidal one and the hyper-tangential one – bring interestingly different results, just as the results of learning differ depending on the initial assumptions about the crowdfunded capital K. Tables 2 – 5, further below, list the results I got. A bit of additional explanation will not hurt. For every version of learning, i.e. sigmoid vs hyperbolic tangent, and K = 0,01 vs K ≈ 0,5, I ran 5 instances of 5000 rounds of learning in my perceptron. This is the meaning of the word ‘Instance’ in those tables. One instance is like a tensor of learning: one happening of 5000 consecutive experiments. The values of output variables remain constant all the time: TO = {QR/N = 0,95; QNR/N = 0,48}. The perceptron sweats in order to come up with some interesting combination of input variables, given this precise tensor of output.

 

Table 2 – Outcomes of learning with the sigmoid, no initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning:

| | Instance 1 | Instance 2 | Instance 3 | Instance 4 | Instance 5 |
|---|---|---|---|---|---|
| cumulative error | 2,11 | 2,11 | 2,09 | 2,12 | 2,16 |
| LCOER | 0,7617 | 0,7614 | 0,7678 | 0,7599 | 0,7515 |
| LCOENR | 0,8340 | 0,8337 | 0,8406 | 0,8321 | 0,8228 |
| KR | 0,6820 | 0,6817 | 0,6875 | 0,6804 | 0,6729 |
| KNR | 0,6820 | 0,6817 | 0,6875 | 0,6804 | 0,6729 |
| IR | 0,8266 | 0,8262 | 0,8332 | 0,8246 | 0,8155 |
| INR | 0,9966 | 0,9962 | 1,0045 | 0,9943 | 0,9832 |
| PA;R | 0,9062 | 0,9058 | 0,9134 | 0,9041 | 0,8940 |
| PA;NR | 0,8266 | 0,8263 | 0,8332 | 0,8247 | 0,8155 |
| PB;R | 0,7443 | 0,7440 | 0,7502 | 0,7425 | 0,7343 |
| PB;NR | 0,7981 | 0,7977 | 0,8044 | 0,7962 | 0,7873 |

 

 

Table 3 – Outcomes of learning with the sigmoid, with substantial initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning:

| | Instance 1 | Instance 2 | Instance 3 | Instance 4 | Instance 5 |
|---|---|---|---|---|---|
| cumulative error | 1,98 | 2,01 | 2,07 | 2,03 | 1,96 |
| LCOER | 0,7511 | 0,7536 | 0,7579 | 0,7554 | 0,7494 |
| LCOENR | 0,8267 | 0,8284 | 0,8314 | 0,8296 | 0,8255 |
| KR | 0,8514 | 0,8529 | 0,8555 | 0,8540 | 0,8504 |
| KNR | 0,8380 | 0,8396 | 0,8424 | 0,8407 | 0,8369 |
| IR | 0,8189 | 0,8207 | 0,8238 | 0,8220 | 0,8177 |
| INR | 0,9965 | 0,9965 | 0,9966 | 0,9965 | 0,9965 |
| PA;R | 0,9020 | 0,9030 | 0,9047 | 0,9037 | 0,9014 |
| PA;NR | 0,8189 | 0,8208 | 0,8239 | 0,8220 | 0,8177 |
| PB;R | 0,7329 | 0,7356 | 0,7402 | 0,7375 | 0,7311 |
| PB;NR | 0,7891 | 0,7913 | 0,7949 | 0,7927 | 0,7877 |


Table 4 – Outcomes of learning with the hyperbolic tangent, no initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning:

| | Instance 1 | Instance 2 | Instance 3 | Instance 4 | Instance 5 |
|---|---|---|---|---|---|
| cumulative error | 1,1 | 1,27 | 0,69 | 0,77 | 0,88 |
| LCOER | 0,6470 | 0,6735 | 0,5599 | 0,5805 | 0,6062 |
| LCOENR | 0,7541 | 0,7726 | 0,6934 | 0,7078 | 0,7257 |
| KR | 0,5290 | 0,5644 | 0,4127 | 0,4403 | 0,4746 |
| KNR | 0,5290 | 0,5644 | 0,4127 | 0,4403 | 0,4746 |
| IR | 0,7431 | 0,7624 | 0,6797 | 0,6947 | 0,7134 |
| INR | 0,9950 | 0,9954 | 0,9938 | 0,9941 | 0,9944 |
| PA;R | 0,8611 | 0,8715 | 0,8267 | 0,8349 | 0,8450 |
| PA;NR | 0,7432 | 0,7625 | 0,6798 | 0,6948 | 0,7135 |
| PB;R | 0,6212 | 0,6497 | 0,5277 | 0,5499 | 0,5774 |
| PB;NR | 0,7009 | 0,7234 | 0,6271 | 0,6446 | 0,6663 |

 

 

Table 5 – Outcomes of learning with the hyperbolic tangent, substantial initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning:

| | Instance 1 | Instance 2 | Instance 3 | Instance 4 | Instance 5 |
|---|---|---|---|---|---|
| cumulative error | -0,33 | 0,2 | -0,06 | 0,98 | -0,25 |
| LCOER | -0,1089 | 0,3800 | 0,2100 | 0,6245 | 0,0110 |
| LCOENR | 0,2276 | 0,5681 | 0,4497 | 0,7384 | 0,3111 |
| KR | 0,3381 | 0,6299 | 0,5284 | 0,7758 | 0,4096 |
| KNR | 0,2780 | 0,5963 | 0,4856 | 0,7555 | 0,3560 |
| IR | 0,1930 | 0,5488 | 0,4251 | 0,7267 | 0,2802 |
| INR | 0,9843 | 0,9912 | 0,9888 | 0,9947 | 0,9860 |
| PA;R | 0,5635 | 0,7559 | 0,6890 | 0,8522 | 0,6107 |
| PA;NR | 0,1933 | 0,5489 | 0,4252 | 0,7268 | 0,2804 |
| PB;R | -0,1899 | 0,3347 | 0,1522 | 0,5971 | -0,0613 |
| PB;NR | 0,0604 | 0,4747 | 0,3306 | 0,6818 | 0,1620 |

 

The cumulative error, the first numerical line in each table, is something like memory. It is a numerical expression of how much experience the perceptron has accumulated in the given instance of learning. Generally, the sigmoid neural function accumulates more memory, as compared to the hyper-tangential one. Interesting. The way of processing information affects the amount of experiential data stored in the process. If you use the links I gave earlier, you will see different logical structures in those two functions. The sigmoid generally smoothes out anything it receives as input. It puts the incoming, compound data in the negative exponent of Euler’s constant e ≈ 2,72, and then it puts the resulting value in the denominator, under 1. The sigmoid is like a bumper: it absorbs shocks. The hyperbolic tangent is different. It sort of exposes small discrepancies in input. In human terms, the hyper-tangential function is more vigilant than the sigmoid. As can be observed in this precise case, absorbing shocks leads to more accumulated experience than vigilantly reacting to observable change.
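A quick numeric way to see that ‘bumper vs vigilance’ contrast, using the standard formulas of the two functions:

```python
# Around zero, the sigmoid's slope is 0.25 whilst the hyperbolic
# tangent's is 1.0, so the same small discrepancy in input moves the
# tangent's output about four times more.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # compound input in the negative exponent

shock = 0.1                            # a small discrepancy in input
print("sigmoid reaction:", round(sigmoid(shock) - sigmoid(0.0), 4))  # ~0.025
print("tanh reaction   :", round(np.tanh(shock) - np.tanh(0.0), 4))  # ~0.0997
```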

The difference in cumulative error, observable in the sigmoid-based perceptron vs the one based on hyperbolic tangent, is particularly sharp in the case of a market with substantial initial crowdfunding K. In 3 instances out of 5, in that scenario, the hyper-tangential perceptron yields a negative cumulative error. It can be interpreted as the removal of some memory, implicitly contained in the initial values of input variables. When the initial K is assumed to be 0,01, the difference in accumulated memory, observable between the two neural functions, shrinks significantly. It looks as if K ≥ 0,5 were some kind of disturbance that the vigilant hyperbolic tangent attempts to eliminate. That impression of disturbance created by K ≥ 0,5 is even reinforced as I synthetically compare all the four sets of outcomes, i.e. tables 2 – 5. The case of learning with the hyperbolic tangent, and with substantial initial crowdfunding, looks radically different from everything else. The discrepancy between alternative instances seems to be the greatest in this case, and the incidentally negative values in the input tensor suggest some kind of deep shakeoff. Negative prices and/or negative costs mean that someone external is paying for the ride, probably the taxpayers, in the form of some fiscal stimulation.


Alois in the middle

 

I am returning to my syllabuses for the next academic year. I am focusing more specifically on microeconomics. Next year, I am supposed to give lectures in Microeconomics at both the Undergraduate and the Master’s level. I feel like asking fundamental questions. My fundamental question, as it comes to teaching any curriculum, is the same: what can my students do with it? What is the function and the purpose of microeconomics? Please, notice that I am not asking that frequently stated, rhetorical question ‘What are microeconomics about?’. Well, buddy, microeconomics are about the things you are going to lecture about. Stands to reason. I want to know, and to communicate, the practical utility, in one’s life, of those things that microeconomics are about.

The basic claim I am focusing on is the following: microeconomics are the accountancy of social structures. They serve exactly the same purpose that any kind of bookkeeping has ever served: to find and exploit patterns in human behaviour, by the means of accurately applied measures. Them ancients, who built those impressive pyramids (who builds a structure without windows and so little free space inside?), very quickly gathered that in order to have one decent pyramid, you need an army of clerks who do the accounting. They used to count stone, people, food, water etc. This is microeconomics, basically.

Thus, you can do something with microeconomics if you want to build an ancient pyramid. Now, I am dividing the construction of said ancient pyramid into two stages: Undergraduate, and Master’s. An Undergraduate ancient pyramid requires the understanding of what you need to keep accounts of if you don’t want to be thrown to the crocodiles. At the Master’s level, you will want to know what the odds are that you find yourself in a social structure where inaccurate accounting, in connection with a pyramid, will have you thrown to the crocodiles.

Good, now some literature, and a little detour through my current scientific work on the EneFin concept (see « Which salesman am I? » and « Sans une once d’utopisme » for sort of a current account of that research). I have just read that sort of transitional form of science, between an article and a book, basically a report, by Bleich and Guimaraes 2016[1]. It regards investment in renewable energies, mostly from the strictly understood perspective of investment logic. Return on investment, net present value – that kind of thing. As I was making my notes out of that reading, my mind made a jump, and it landed on the cover of the quite well-known book by Joseph Schumpeter: ‘Business Cycles’.

Joseph Schumpeter is an intriguing classic, so to say. Born in 1883, he published ‘Business Cycles’ in 1939, at 56 years old, after a hell of a ride both for him and for the world, and right at the beginning of another ride (for the world). He was studying economics in Austria in the early 1900s, when social sciences in general were sort of different from their today’s version. They were the living account of a world that was changing at a breath-taking pace. Young Joseph (well, Alois in the middle) Schumpeter witnessed the rise of Marxism, World War I, the dissolution of his homeland, the Austro-Hungarian Empire, and the rise of the German Reich. He moved from academia to banking, and from European banking to American academia.

I deeply believe that whatever kind of story I am telling, whether I am lecturing about economics, discussing a business concept, or chatting about philosophy, at the bottom line I am telling the story of my own existence. I also deeply believe that the same is true for anyone who goes to any lengths in telling a story. We tell stories in order to rationalize that crazy, exciting, unique and deadly something called ‘life’. To me, those ‘Business Cycles’ by Joseph Schumpeter look very much like a rationalized story of quite turbulent a life.

So, here come a few insights I have out of re-reading ‘Business Cycles’ for the n-th time, in the context of research on my EneFin business concept. Any technological change takes place in a chain of value added. Innovation in one tier of the chain needs to overcome the status quo both upstream and downstream of the chain, but once this happens, the whole chain of technologies and goods changes. I wonder how it can apply specifically to EneFin, which is essentially an institutional scheme. In terms of value added, this scheme is situated somewhere between the classical financial markets, and typical social entrepreneurship. It is social to the extent that it creates that quasi-cooperative connexion between the consumers of energy, and its suppliers. Still, as my idea assumes a financial market for those complex contracts « energy + shares in the supplier’s equity », there is a strong capitalist component.

I guess that the resistance this innovation would have to overcome would consist, on the one hand, in distrust on the part of those hardcore activists of social entrepreneurship, like ‘Anything that has anything to do with money is bad!’, and, on the other hand, in resistance from the classical financial market, namely the willingness to forcibly squeeze the EneFin scheme into some kind of established structure, like the stock market.

The second insight that Joseph has just given me is the following: there is a special type of business model and business action, the entrepreneurial one, centred on innovation rather than on capitalizing on the status quo. This is deep, really. What I have noticed, so far, in my research is that in every industry there are business models which just work, and others which just don’t. However innovative you think you are, most of the time either you follow the field-tested patterns or you simply fail. The real, deep technological change starts when this established order gets a wedge stuffed up its ass, and the wedge is, precisely, that entrepreneurial business model. I wonder how entrepreneurial the business model of EneFin is. Is it really as innovative as I think it is?

In the broad theoretical picture, which comes in handy in science, the incidence of that entrepreneurial business model can be measured and assessed as a probability, and that probability, in turn, is a factor of change. My favourite mathematical approach to structural change is that particular mutation that Paul Krugman[2] made out of the classical production function, as initially formulated by Prof Charles W. Cobb and Prof Paul H. Douglas in their joint work from 1928[3]. We have some output generated by two factors, one of which changes slowly, whilst the other changes quickly. In other words, we have one quite conservative factor, and another one that takes on the crazy ride of creative destruction.

That second factor is innovation, or, if you want, the entrepreneurial business model. If it is to be powerful, then, mathematically, an incremental change in that innovative factor should bring a much greater result on the side of output than a numerically identical increment in the conservative factor. The classical notation by Cobb and Douglas fits the bill. We have Y = A*F1^a*F2^(1-a), with a > 0,5. Any change in F1 automatically brings more Y than the identical change in F2. Now, the big claim by Paul Krugman is that if F1 changes functionally, i.e. if its changes really increase the overall Y, resources will flow from F2 to F1, and a self-reinforcing spiral of change forms: F1 induces faster a change than F2, therefore resources are being transferred to F1, which induces even more incremental change in F1, which, in turn, makes the Y jump even higher etc.
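
Below, a minimal numerical sketch of that spiral in Python. The starting values, and the transfer rule (a fixed share of F2 migrating to F1 each period, as long as F1’s marginal product dominates), are my own illustrative assumptions, not Krugman’s exact specification.

```python
# Sketch of the Krugman-style spiral on a Cobb-Douglas function
# Y = A * F1**a * F2**(1 - a), with a > 0.5. The resource-transfer rule
# is an illustrative assumption, not a calibrated model.

A = 1.0        # scale factor
a = 0.6        # output elasticity of the innovative factor, a > 0.5
F1, F2 = 20.0, 80.0    # innovative vs. conservative factor, arbitrary start
transfer_rate = 0.05   # share of F2 migrating to F1 per period (assumption)

def output(f1, f2):
    return A * f1**a * f2**(1 - a)

for t in range(10):
    y = output(F1, F2)
    mp1 = a * y / F1          # marginal product of the innovative factor
    mp2 = (1 - a) * y / F2    # marginal product of the conservative factor
    print(f"t={t}: F1={F1:7.2f} F2={F2:7.2f} Y={y:7.2f} MP1={mp1:.3f} MP2={mp2:.3f}")
    if mp1 > mp2:             # resources flow towards the stronger factor
        moved = transfer_rate * F2
        F1 += moved
        F2 -= moved
```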

I can apply this logic to my scientific approach to the EneFin concept. I assume that introducing the institutional scheme of EneFin can improve the access to electricity in remote, rural locations in developing countries, and, consequently, it can contribute to creating whole new markets and social structures. Those local power systems organized along the lines of EneFin are the factor of innovation, the one with the a > 0,5 exponent in the Y = A*F1^a*F2^(1-a) function. The empirical application of this logic requires approximating the value of ‘a’ somehow. In my research on the fundamental link between population and access to energy, I had those exponents nailed down pretty accurately for many countries in the world. I wonder to what extent I can recycle them intellectually for the purposes of my present research.

As I am thinking on this issue, I will keep talking about something else, and the something else in question is the creation of new markets. I go back to the Venerable Man of microeconomics, the Source of All Wisdom, who used to live with his mother while writing the wisdom he is so reputed for today. In other words, I am referring to Adam Smith. Still, just to look original, I will quote his ‘Lectures on Justice’ first, rather than going directly to his staple book, namely ‘An Inquiry Into The Nature And Causes of The Wealth of Nations’.

So, in the ‘Lectures on Justice’, Adam Smith presents his basic considerations about contracts (page 130 and on): « That obligation to performance which arises from contract is founded on the reasonable expectation produced by a promise, which considerably differs from a mere declaration of intention. Though I say I have a mind to do such thing for you, yet on account of some occurrences I do not do it, I am not guilty of breach of promise. A promise is a declaration of your desire that the person for whom you promise should depend on you for the performance of it. Of consequence the promise produces an obligation, and the breach of it is an injury. Breach of contract is naturally the slightest of all injuries, because we naturally depend more on what we possess than what is in the hands of others. A man robbed of five pounds thinks himself much more injured than if he had lost five pounds by a contract ».

People make markets, and markets are made of contracts. A contract implies that two or more people want to make some exchange of value, and they want to perform the exchange without coercion. A contract contains a value that one party engages to transfer to the other party, and, possibly, in the case of mutual contracts, another value will be transferred the other way round. There is one thing about contracts and markets, a paradox regarding the role of the state. Private contracts don’t like the government to meddle, but they need the government in order to have any actual force and enforceability. This is one of the central thoughts of another classic, Jean-Jacques Rousseau, in his ‘Social Contract’: if we want enforceable contracts, which can make the intervention of the government superfluous, we need a strong government to back up the enforceability of contracts.

If I want my EneFin scheme to be a game-changer in developing countries, it can work only in countries with relatively well-functioning legal systems. I am thinking about using the metric published by the World Bank, the CPIA property rights and rule-based governance rating.

Still another insight that I have found in Joseph Schumpeter’s ‘Business Cycles’ is that when the entrepreneur, introducing a new technology, struggles against the first inertia of the market, that struggle in itself is a sequence of adaptation, and the strategy(ies) applied in the phases of growth and maturity in the new technology, later on, are the outcome of patterns developed during that early struggle. There is some sort of paradox in that struggle. When the early entrepreneur is progressively building his or her presence in the market, they operate under high uncertainty, and, almost inevitably, do a lot of trial and error, i.e. a lot of adjustments to the initially inaccurate prediction of the future. The developed, more mature version of the newly introduced technology is the outcome of that somehow unique sequence of trials, errors, and adjustments.

Scientifically, that insight means a fundamental uncertainty: once the actual implementation of an entrepreneurial business model, such as EneFin, gets inside that tunnel of learning and struggle, it can take on so many different mutations, and the response of the social environment to those mutations can be so idiosyncratic that we get into really serious economic modelling here.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest the two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Bleich, K., & Guimaraes, R. D. (2016). Renewable Infrastructure Investment Handbook: A Guide for Institutional Investors. World Economic Forum, Geneva.

[2] Krugman, P. (1991). Increasing returns and economic geography. Journal of Political Economy, 99(3), 483-499.

[3] Cobb, C. W., & Douglas, P. H. (1928). A Theory of Production. The American Economic Review, 18(1), Supplement: Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association, 139–165.

Which salesman am I?

 

I am working on a specific aspect of the scientific presentation of my EneFin concept, namely on transposing the initial idea – a quasi-cooperative scheme between a local supplier of renewable energies and its local customers, in an essentially urban environment (I was thinking about smart cities) – into the context of poor, rural communities in developing countries. Basically, it was worth approaching the topic from the scientific angle, instead of the purely business-planning one. When I do science, I need to show that I have read what other scientists have written and published on a given topic. So I did, and a few articles gave me this precise idea of expanding the initial concept: Muller et al. 2018[1], Du et al. 2016[2], Wang et al. 2017[3], and Moallemi, Malekpour 2018[4].

I like feeling that the things I do are useful to somebody. I mean, not just interesting, but like really useful. When I write on this blog, I like the thought that some students in social sciences could use the methods presented in their own learning, or that some teachers in social sciences could get inspired. I’m OK with inspiring negatively. If some academic in social sciences, after reading some of my writing, says ‘This Wasniewski guy is one of the dumbest and most annoying people I have ever read anything written by, and I want to prove it by my own research!’, I’m fine with that. This is inspiration, too.

Science is like a screwdriver: the more different contexts you can use your screwdriver in, the more useful it is. This is, by the way, a very scientific approach to economic utility. The more functions a thing can perform, the greater its aggregate utility. So I want those things I write to be useful, and making them functional in more contexts increases their utility. That’s why applying the initial, essentially urban idea of EneFin to the context of alleviating poverty in developing countries is an interesting challenge.

Here is my general method. I imagine a rural community in some remote location, without regular access to electricity at all. All they have are diesel generators. According to Breyer et al. 2010[5], even in the most favourable conditions, the LCOE (Levelized Cost Of Electricity) for energy generated out of diesel is like 0,16 – 0,34 €/kWh. Those most favourable conditions are made of a relatively low price of crude oil, and, last but not least, the virtual absence of transportation costs as regards the diesel oil itself. In other words, that 0,16 – 0,34 €/kWh is essentially relevant for a diesel generator located right by the commercial port where diesel oil is being unloaded from a tanker ship. Still, we are talking about a remote rural location, and that means far from commercial ports. Diesel has to come there by road, mostly. According to a blog post which I found (OK, Google found) at the blog of the Golden Valley Electric Association, that cost per 1 kWh of electricity could even go up to US$ 0,64 = €0,54.

Technological change brings alternatives to that, in the form of renewable energies. Photovoltaic installations come at really a low cost: their LCOE is already gravitating towards €0,05. Onshore wind and small hydro are quite close to that level. Switching from diesel generators to renewables equals the same type of transition that I already mentioned in « Couldn’t they have predicted that? », i.e. from a bracket of relatively high prices of energy, to that of much lower a price (IRENA 2018[6]).

Here comes the big difference between an urban environment in Europe, and a rural community in a developing country. In the former, shifting from higher prices of energy to lower ones means, in the first place, an aggregate saving on energy bills, which can be subsequently spent on other economic utilities. In the latter, lower price of energy means the possibility of doing things those people simply couldn’t afford before: reading at night, powering a computer 24/24, keeping food in a fridge, using electric tools in some small business etc. More social roles define themselves, more businesses start up; more jobs, crafts and professions develop. It is a quantum leap.

Analytically, the initially lonely price of energy from diesel generators, or P_D(t), gets company in the form of energy from renewable sources, priced at P_RE(t). As I have already pointed out, P_D(t) > P_RE(t). The (t) symbol means a moment in time. It is a scientific habit to add moments to categories, like price. Things just need time in order to happen, man. A good price needs to have a (t), if it is to prove its value.

Now, I try to imagine the socio-economic context of P_D(t) > P_RE(t). If just the diesel generators are available, thus if P_D(t) is on its own, a certain consumption of energy occurs. Some people are like 100% on the D (i.e. diesel) energy, and they consume Q_D(t) = Q_E(t) kilowatt hours, where the aggregate Q_E(t) is their total use of energy. Some people are partly on diesel power, and yet, for various reasons (lack of money, lack of permanent physical access to a generator etc.), their Q_D(t) does not cover their Q_E(t) entirely. I write it as Q_D(t) = a*Q_E(t), with 0 < a < 1. Finally, there are people for whom diesel power is completely out of reach, and, temporarily, their Q_E(t) = 0.

In a population of N people, I have, thus, three subsets, made, respectively, of ‘m’ people for whom Q_D(t) = Q_E(t), ‘p’ people for whom Q_D(t) = a*Q_E(t) with 0 < a < 1, and ‘q’ people on the strict Q_E(t) = 0 diet. When renewable energies are being introduced, at a price P_RE(t+1) < P_D(t+1), what happens is a new market, balanced or monopolized at the price P_RE(t+1) and at the aggregate quantity Q_RE(t+1), and people start choosing. As they choose, they actually make that Q_RE(t+1) happen. Among those who were at Q_E(t) = 0, an aggregate b*Q_E(t+1) flocks towards Q_RE(t+1), with 0 < b ≤ 1. In the subset of the Q_D(t) = a*Q_E(t) people, at least (1-a)*Q_E(t+1) goes to P_RE(t+1) and Q_RE(t+1), just as some c*Q_D(t) out of the Q_D(t) = Q_E(t) users, with 0 ≤ c ≤ 1.
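
Here is a minimal sketch of those flows in Python; the population shares and the a, b, c coefficients are arbitrary assumptions, just to watch the aggregate Q_RE(t+1) take shape.

```python
# Sketch of the demand shift when renewables arrive at P_RE(t+1) < P_D(t+1).
# Population shares and the a, b, c coefficients are arbitrary assumptions.

N = 10_000                        # population
m, p, q = 3_000, 4_000, 3_000     # fully on diesel, partially, not at all
QE_per_person = 1_753.0           # potential consumption, kWh/year (Niger-like scale)

a = 0.5   # share of potential consumption covered by diesel in the p group
b = 0.8   # share of the q group's potential demand captured by renewables
c = 0.3   # share of the m group's diesel consumption switching to renewables

# Aggregate flows towards the renewable supply Q_RE(t+1):
from_q = b * q * QE_per_person            # previously unserved people
from_p = (1 - a) * p * QE_per_person      # the uncovered part of partial users
from_m = c * m * QE_per_person            # switchers among full diesel users

QRE = from_q + from_p + from_m
total_potential = N * QE_per_person
print(f"Q_RE(t+1) = {QRE:,.0f} kWh/year "
      f"({QRE / total_potential:.1%} of potential consumption)")
```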

It makes a lot of different Qs. Time to put them sort of coherently together. What sticks its head out through that multitude of Qs is the underlying assumption, which I have just figured out I had made before, that in developing countries there is a significant gap between that sort of full-swing-full-supply consumption of energy, which I call ‘potential consumption’, or Q_E(t), on the one hand, and the real, actual consumption, or Q_A(t), on the other. Intuitively, Q_E(t) > Q_A(t), and I mean way ‘>’.

I like checking my theory against facts. I know, it might look not very scientific, but I can’t help it: I just like reality. I go to the website of the World Bank and I check their data on the average consumption of energy per capita. I try to find a reference level for Q_E(t) > Q_A(t), i.e. I want to find a scale of magnitude in Q_A(t), and from that to infer something about Q_E(t). The last (t) that yields a more or less comprehensive review of Q_A(t) is 2014, and so I settle for Q_A(2014). In t = 2014, the country with the lowest consumption of energy per capita, in kilograms of oil equivalent, was technically South Sudan: Q_A(2014) = 60,73 kg of oil equivalent = 60,73*11,63 kWh = 706,25 kWh. Still, South Sudan started being present in this particular statistic only in 2012. Thus, if I decide to move my (t) back in ‘t’, there is not much moving to do in this case.

Long story short, I take the next least energy-consuming country on the list: Niger. Niger displays Q_A(2014) = 150,73 kg of oil equivalent per person per year = 1 753,04 kWh per person per year. I check the energy profile of Niger with the International Energy Agency. Niger is really a good case here. Their total consumption in 2014 was 2 649 ktoe (kilotons of oil equivalent), of which 2 063 ktoe = 77,9% consists in waste and biofuel burnt directly for residential purposes, without even being transformed into electricity. Speaking of which, electricity strictly speaking makes just 55 ktoe in the final consumption, thus 55/2649 = 2% of the total. The remaining part of the cocktail are oil products – 506 ktoe = 19,1% – mostly made domestically from the prevalently domestic crude oil, and burnt principally in transport (388 ktoe), and then in industry (90 ktoe). Households burn just 20 ktoe of oil products per year.

That strange cocktail of energies reflects in the percentages that Niger displays in the World Bank data regarding the share of renewable energies in the overall consumption of energy, as well as in the generation of electricity. As for the former, Niger is, involuntarily, in the world’s vanguard of renewables, with 78,14% of consumption coming from renewables. Strange? Well, life is strange. Biofuels are a technically renewable source of energy: when you burn the wood and straw that grows around, there will be some new growing around, whence renewability. Still, that biomass in Niger is just being burnt, without transformation of the resulting thermal energy into electric power. As we pass to data on the share of renewables in the output of electricity, Niger is at 0,58%. Not much.

From there, I have many possible paths to follow so as to answer the basic question: ‘What can Niger get out of enriching their energy base with renewables, possibly using an institutional scheme along the lines of the EneFin concept?’. My practical side tells me to look for a benchmark, i.e. another African country where the share of renewable energy in the output of electricity is slightly higher than in Niger, without being lightyears away. Here, a surprise awaits: there are not really a lot of African countries close to Niger’s rank regarding this particular metric. There is South Africa, with 1,39% of their electricity coming from renewable sources. Then, after a long gap, comes Senegal, with 10,43% of electricity from renewables.

I quickly check those two countries with the International Energy Agency. South Africa, in terms of energy, is generally coal- and oil-oriented, and looks like not the best benchmark in the world for what I want to study. They are thick in energy, by the way: Q_A(2014) = 2 695,73 kg of oil equivalent, some 18 times the level of Niger. Very much the same with Senegal: it is like Niger topped with a large oil-based economy, with Q_A(2014) = 272,08 kg of oil equivalent. Sorry, I have to move further up the ranking of African countries in terms of renewables’ share in the output of electricity. Here comes Nigeria, 17,6% of electricity from renewables, and it is like a bigger brother of Niger: 86% of energy comes from the direct burning of biofuels and waste, only those biofuels are like 50 times more than in Niger. Their Q_A(2014) = 763,4 kg of oil equivalent per person per year.

I check Cote d’Ivoire, 23,93% of electricity from renewable sources, and I get the same, biofuels-dominated landscape. Gabon, Tanzania, Angola, Zimbabwe: all of them, whatever their exact metric as for the share of renewables in the output of electricity, have mostly biofuels as renewable sources. Ghana, Q_A(2014) = 335,05, Mozambique, Q_A(2014) = 427,6, and Zambia, Q_A(2014) = 635,5, present slightly different a profile, with a noticeable share of hydro, but they still rely heavily on biofuels.

In general, Africa seems to love biofuels, and to be largely ignoring the solar, the wind, and the hydro. This is a surprise. They have a lot of sunlight and sun heat over there, for one. I started all my research on renewable energies, back in winter 2016, on the inspiration I had from the Ouarzazate-Noor Project in Morocco (see official updates: 2018, 2014, 2011). I imagined that Africa should be developing a huge capacity in renewable sources other than biofuels.

There is that anecdote, to find in textbooks of marketing. Two salesmen of a footwear company are sent to a remote province in a developing country, to research the local market. Everybody around walks barefoot. Salesman A calls his boss and says there are absolutely no market prospects whatsoever, as all the locals walk barefoot. Salesman B makes his call and, using the same premise – no shoes spotted locally at all – concludes there is a huge market to exploit.

Which salesman am I? Being A, I should conclude that schemes like EneFin, in African countries, should serve mostly to develop the usage of biofuels. Still, I am tempted to go B. As the solar, the hydro and the wind power tend to strike by their absence in Africa, this could be precisely the avenue to exploit.

What is there exactly to exploit, in terms of economic gains? The cursory study of African countries with respect to their energy use per capita shows huge disparities. The most notable one is between countries relying mostly on biofuels, on the one hand, and those with more complex energy bases, on the other. The difference in terms of the Q_A(2014) consumption of energy per capita is a multiple, not a percentage margin. Introducing a new source of energy into those economies looks like a huge game-changer.

There is that database I built, last year, out of Penn Tables 9.0, and from stuff published by the World Bank, and that database serves me to do like those big econometric tests. Cool stuff. Works well. Everybody should have one. You can see some examples of how I used it last year, if you care to read « Conversations between the dead and the living (no candles) » or « Core and periphery ». I decided to test my claim, namely that introducing more energy per capita into an economy will contribute to the capita in question having more of average Gross Domestic Product, per capita of course.

I made a simple linear equation with natural logarithms of, respectively, GDP per capita (expenditure side) and energy use per capita. It looks like ln(GDP per capita) = a*ln(Energy per capita) + constant. That’s all. No scale factors, no controlling variables. Just pure, sheer connection between energy and output. A beauty. I am having a first go at the whole sample in my database, with that most basic equation.

Table 1

Explained variable: ln(GDP per capita); N = 5 498; R² = 0,752

Explanatory variable      Coefficient of regression   (Robust) standard error   Significance level (Student’s t test)
ln(Energy per capita)     0,947                       (0,007)                   p < 0,001
Constant                  2,151                       (0,053)                   p < 0,001

Looks promising. When driven down to natural logarithms, variance in the consumption of energy per capita explains like 75% of the variance in GDP per capita. In other words, generally speaking, if any institutional scheme allows enriching the energy base of a country – any country – it is highly likely to go along with higher an aggregate output per capita.
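
For the record, here is a minimal sketch of how such an estimation can be reproduced in Python, assuming the database has been flattened into a two-column CSV; the file and column names are hypothetical, and the actual Penn Tables / World Bank extracts need reshaping first.

```python
# Sketch of the log-log regression ln(GDP per capita) = a*ln(Energy per capita) + b.
# 'energy_gdp.csv', 'gdp_per_capita' and 'energy_per_capita' are hypothetical
# names, just for illustration.

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("energy_gdp.csv").dropna()
y = np.log(df["gdp_per_capita"])
X = sm.add_constant(np.log(df["energy_per_capita"]))

model = sm.OLS(y, X).fit(cov_type="HC1")  # HC1 = robust standard errors
print(model.summary())                    # coefficients, (robust) SE, R²
```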

A (partial) summing up is due. The idea of implementing a contractual scheme like EneFin in developing countries seems to make sense. The gains to expect are actually much higher than those I initially envisaged for this business concept in the urban environments of European countries. If I want to go after a scientific development of this idea, the avenue of developing countries and their rural regions seems definitely promising.



[1] Müller, M. F., Thompson, S. E., & Gadgil, A. J. (2018). Estimating the price (in)elasticity of off-grid electricity demand. Development Engineering, 3, 12-22.

[2] Du, F., Zhang, J., Li, H., Yan, J., Galloway, S., & Lo, K. L. (2016). Modelling the impact of social network on energy savings. Applied Energy, 178, 56-65.

[3] Wang, G., Zhang, Q., Li, H., Li, Y., & Chen, S. (2017). The impact of social network on the adoption of real-time electricity pricing mechanism. Energy Procedia, 142, 3154-3159.

[4] Moallemi, E. A., & Malekpour, S. (2018). A participatory exploratory modelling approach for long-term planning in energy transitions. Energy Research & Social Science, 35, 205-216.

[5] Breyer, C., Gerlach, A., Schäfer, D., & Schmid, J. (2010, December). Fuel-parity: new very large and sustainable market segments for PV systems. In Energy Conference and Exhibition (EnergyCon), 2010 IEEE International (pp. 406-411). IEEE.

[6] IRENA (2018), Renewable Power Generation Costs in 2017, International Renewable Energy Agency, Abu Dhabi, ISBN 978-92-9260-040-2

Something to exploit subsequently

In my last three updates, I’ve been turning around one specific topic, namely the technology of wind turbines with a vertical axis. Three updates ago, in Ma petite turbine éolienne à l’axe vertical, I opened up the topic by studying the case of a particular invention, filed for patenting with the European Patent Office by a group of Slovakian inventors. Just in order to place this one in a broader context, I did some semantic rummaging, with the help of https://patents.google.com. I basically wanted to count how many such inventions had been filed for patenting in different regions of the world. In my research I have been using, for years, the number of patent applications as a metric of aggregate effort in invention, and so I did regarding those wind turbines with vertical axis.

This is when it started to turn weird. Apparently, invention in this specific field follows a stunningly regular trend, and is just as stunningly correlated with the metrics of renewable energies: the share of renewables in the overall output of energy (see Time to come to the ad rem) and the aggregate output of said renewables, in metric tons of oil equivalent (see Je corrèle). When I say ‘stunningly correlated’, I really mean it. In social sciences, coefficients of correlation around r = 0,95 happen in truly rare cases, and when they happen, the first reflex of a serious social scientist is to assume that something is messed up in the source data. This is one of those cases. I am still trying to wrap my mind around the fact that the semantic incidence of some logical constructs in patent applications can coincide so strongly with the fundamental metrics of energy consumption.

In this update, I want to return to that business concept of mine, the EneFin project. I am preparing a business plan for this one. Actually, I have been preparing it for weeks, of which you can find the track in the past posts on this blog. Long story short, EneFin is the concept of a FinTech utility, which would allow the creators of new projects in the field of renewable energies to acquire capital via a scheme combining the sales of futures contracts on the future output of the business with the issuance of equity. You can find more explanation in Traps and loopholes, for example.

I want to study this particular case, that wind turbine described in the patent application no. EP 3 214 303 A1, from the EneFin angle. How can a FinTech scheme like the one I am coming up with work for a business based on this particular invention? I start with figuring out the kind of business structure to build around this invention. Wind turbines with a vertical axis are generally small stuff, distinct from their bulky cousins with a horizontal axis by the fact that they can work in close proximity to human habitat. A wind turbine with a vertical axis is something you can essentially install in your yard, and you will be just fine together, provided there is enough wind in your yard. As for this particular aspect, the quick technological research that I documented in Ma petite turbine éolienne à l’axe vertical showed that the really interesting places for using wind turbines with vertical axis are, for example, the coastal regions of Europe, with average wind speeds like 12 to 13 metres per second. With that amount of Aeol, this particular turbine starts being serious, at more than 1 MW of electrical capacity. Mind you, it doesn’t have to be coastal, that place where you install it. The upper storeys of a skyscraper, hilltops – in general, all the places where you cannot expect your straw hat to hold on your head without a ribbon tied under your chin – are the right places to use that device shaped like a DNA helix.

This particular technology is unlikely to breed power plants in the traditional sense of the term. The whole idea of wind turbines with a vertical axis is to make them apt to being installed in the immediate vicinity of human habitat. You can install them completely scattered, or a bit clustered, for example on the roof of a building. I am wrapping my mind around the practical idea, and I start the wrapping by doing two things: maths and pictures. As for maths, P_W = ½ * Cp * ρ * A * v³ is the general name of the game. ‘P_W’ stands for the electric power of a wind turbine with vertical axis, and said power stands on air, which has a density ρ = 1,225 kg/m³, divided by half, so basically that air is dense, in the equation, at sort of ρ/2 = 0,6125 kg/m³. Whatever speed of wind ‘v’ that air blows at, in this particular equation it blows at the third power of that speed, or v³. That half the density of air, multiplied by the cubic expression of wind speed, is the exogenous force that Mother Nature supplies here and now.

What Mother Nature supplies is being taken on the blades of the turbine, with a working surface of ‘A’, and that surface works with an average efficiency of Cp. That efficiency is technically comprised between 0 and 1, and actually, for this specific type of machine, between 59% and 72% (consult Bhutta et al. 2012[1]), which I average at 65,5%. All in all, with the density of air cut by half and efficiency being what it is, my average wind turbine with vertical axis can take like 40,1% of the arithmetical product ‘working surface of the blades times wind speed to the third power’. Reminder, from school: power first, multiplication next. I mean, don’t raise to the cubic power the product of wind speed and blade surface. Wind speed to the cubic power first, then multiply by the blade surface.
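
Just to have the formula at hand, here is a minimal sketch of it in Python, in strict SI units (density in kg/m³, surface in m², speed in m/s), with the output expressed in watts per square metre of working surface; Cp = 0,655 is the mid-point of the cited 59% – 72% range.

```python
# The power formula P_W = 0.5 * Cp * rho * A * v**3, in strict SI units:
# rho in kg/m3, A in m2, v in m/s, result in watts. Cp = 0.655 is the
# mid-point of the 59% - 72% range cited after Bhutta et al. 2012.

RHO = 1.225  # air density, kg/m3

def vawt_power(v, A=1.0, cp=0.655):
    """Electric power of a vertical-axis wind turbine, in watts."""
    return 0.5 * cp * RHO * A * v**3

# Cube the wind speed first, then multiply by the working surface:
for v in (4.47, 4.6, 5.14, 12.5):  # the wind speeds floating around this post
    print(f"v = {v:5.2f} m/s -> {vawt_power(v):7.1f} W per m2 of working surface")
```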

I pass to pictures now. A picture is mostly a picture of something, even if that something is just in my mind. My first something is a place I like very much: Lisbon, Portugal, and more specifically the district of Belem, a good kick westwards from the Praça do Comércio. It is beautiful, and really windy. Here below, I am giving a graphical idea of how those small wind turbines with vertical axis could be located. Reminder: each of them, according to the prototype in the patent application no. EP 3 214 303 A1, needs like 5 m² of space to work. Let’s make it 20 m², just to allow the wind to pass between those wind turbines.

[Image: the Belem district, Lisbon]

In Lisbon, the average speed of wind is 10 mph, or 4,47 m/s, and that gives an exogenous energy of the wind like 54,72 kilowatts, to take for whoever can take it. That prototype has a real working surface of its blades like A = 1,334 m², which gives, at the end of the day, an electric power of P_W = 47,81 kW. In Portugal, the average consumption of energy at the level of households (so transport and industry excluded) seems to be like 4 214,55 kWh a year per person. I divide it by the 8 760 hours of your basic year (leap years make 8 784), which yields 0,48 kW required per person. My wind turbine could power 99 people in their household needs. If they start using that juice for transport, like charging their electric cars or the batteries of their electric bicycles, that 99 could drop to 50 – 60, probably not less.

Hence, what my mind is wrapping around, right now, is a business that would manage the installation and exploitation of wind turbines with vertical axis for groups of a few dozens of people, so like 20 – 50 households. Good, let’s try to move on: Lyon, France. Not very coastal, as the nearest sea is more than 300 km away, but: a) it is quite windy, due to the specific circulation of air along the valleys of two rivers, the Rhône and the Saône; b) they are reconstructing a whole district, namely the Confluence one, as a smart city; c) I f*****g love the place. Average wind speed over the year: 4,6 m/s, which allows Mother Nature to supply around 52,25 kW to my prototype. The prototype is supposed to serve a population where the average person needs 7 291,18 kWh for household use, whence 63 people served by my prototype, which could drop to like 20 – 30 people if said people power their transportation devices with their household juice.

[Image: Lyon]

Good, last shot: Amsterdam. Never been there, mind you, but they are coastal, statistically speaking quite energy-consuming, and apparently keen on innovation. The average wind speed there is 5,14 m/s, which makes my prototype generate a power of 72,72 kilowatts. With the average Dutch person consuming around 8 369,15 kWh for household use, 76 such average Dutch could use one such turbine.

[Image: Amsterdam]

Maths and pictures made me clarify a business concept, or rather two business concepts. Concept #1 is the simple manufacturing of those wind turbines. Here, EneFin (see Traps and loopholes and the subsequent updates) does not really fit. I remind you that the EneFin concept is based on the observable discrepancy between two categories of final prices for electricity: those for big institutional users (low), and those for households and small businesses (high). Long story short, EneFin takes its appeal from the coincidence of very different prices for the same good (i.e. electricity), and from the juicy margin of value added hidden behind that coincidence. That Concept #1 is essentially industrial, and the value added to expect does not really blow one’s hat off. Neither should we expect any significant price discrepancy between categories of customers. Besides, whilst futures contracts on electricity are already widely practised in the wholesale market, and the EneFin concept just attempts to transfer the idea to the retail market, I haven’t seen much use of futures contracts in the market of typical industrial products.

Concept #2, for exploiting this particular invention, would be complex, combining the engineering of those turbines so as to make the best version for the given location, their installation, and then maintenance and management. The business entity in question would combine manufacturing, management of a value chain, site management, design and engineering, and maintenance. Here, the essentially cooperative version of the EneFin concept would have more space to breathe. We can imagine a site, made of 200 households, which commissions an independent company to engineer a local power system based on wind turbines with vertical axis, and to install, manage, and maintain that facility. Through the price paid for particular components of that complex business scheme, those customers could progressively buy into the business entity itself.

Now, I am following another one of my research routines: I am deconstructing the business model. As truly enlightened a social thinker, I am searching online for the phrase ‘wind turbine investor relations’. To the mildly initiated: publicly listed companies have to maintain a special type of website, called, precisely ‘Investor Relations’, where they publish information about their business cuisine. This is where you can find annual reports, for example. The advantage of following this specific track is the easy access to information I am looking for, like the basic financials. The caveat is that I am browsing through relatively big businesses, big enough to be listed publicly, at least. Hence, I am skipping all the stories of small businesses.

Thus, the data my internal curious ape can find by peeling those ‘investor relations’ bananas is representative of relatively big, somehow established business structures. It can serve to build something like a target vision of what is likely to be created, in a particular field of business, after the early childhood of a project is over. And so I asked dr Google, and, just to make sure, I cross-asked dr Yandex, what they can tell me if I ask around for ‘wind turbine investor relations’. Both yielded more or less the same list of top hits: Nordex, Vestas, Siemens Gamesa, Senvion, LM Wind Power, SkyWolf, and Arise. I collected their annual reports, with the exception of SkyWolf, which, for some reason, does not publish any on their ‘investor relations’ page. I followed this particular suspect home, I asked around who they are hanging out with, and so I came to visiting their page at Nasdaq, and I finally got it. They are at the stage of their IPO (Initial Public Offering), so they are still sort of timid in annual reporting. Still, I could download their preliminary prospectus for that IPO, dated April 20th, 2018.

There is that thing about annual reports and prospectuses: they are both disclosure and public relations. Technically, an annual report should, essentially, be reporting about the things material to the business in question. Still, this type of document is also used for, well… for the show. Reading an annual report is good training at reading between the lines, and, more generally, at figuring out how to figure out when people are lying.

Truth has patterns, and lies have patterns as well, although the patterns of truth are somehow more salient. The truth that I look for in annual reports is mostly in the financials. Here is a first glimpse of these:

Company (year)               Revenues    Net profit (loss)   Assets      Equity      Assets to revenue
Nordex 2017, EUR mln         3 127,40    0,30                2 807,60    919,00      0,90
Vestas 2017, EUR mln         9 953,00    894,00              10 871,00   3 112,00    1,09
Siemens Gamesa 2017, EUR mln 6 538,20    (135,00)            16 467,13   6 449,87    2,52
Senvion 2017, EUR mln        1 889,90    (121,10)            1 808,10    230,10      0,96
LM Group 2016, EUR mln       1 059,00    52,00               1 198,00    445,00      1,13
SkyWolf 2017, USD (!)        49 000      (592 600)           139 730     (673 500)   2,85

As I see it, the business of doing business on installing and managing local power installations can go in truly divergent directions. You can start as SkyWolf is starting, with a ‘debt to assets’ ratio akin to the best (worst?) years of General Motors, or you can have that comfy financial cushion supplied by a big mother ship, as is the case for Siemens Gamesa. One pattern seems to emerge: the ‘assets to revenue’ ratio seems to oscillate around 1,00. In other words, each dollar invoiced to our customers needs to be backed up by one dollar in our balance sheet. Something to exploit subsequently.
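
Here is a small sketch recomputing that last ratio; the figures are the ones hand-copied above from the respective annual reports.

```python
# Recomputing the 'assets to revenue' ratio from the table above
# (EUR millions, except SkyWolf in plain USD).

financials = {
    #  name                  (revenues,  assets)
    "Nordex 2017":           (3_127.40,  2_807.60),
    "Vestas 2017":           (9_953.00, 10_871.00),
    "Siemens Gamesa 2017":   (6_538.20, 16_467.13),
    "Senvion 2017":          (1_889.90,  1_808.10),
    "LM Group 2016":         (1_059.00,  1_198.00),
    "SkyWolf 2017 (USD)":    (49_000.0, 139_730.0),
}

for name, (revenues, assets) in financials.items():
    print(f"{name:22s} assets/revenue = {assets / revenues:.2f}")
```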



[1] Bhutta, M. M. A., Hayat, N., Farooq, A. U., Ali, Z., Jamil, S. R., & Hussain, Z. (2012). Vertical axis wind turbine – A review of various configurations and design techniques. Renewable and Sustainable Energy Reviews, 16, 1926–1939.

Good hypotheses are simple

The thing about doing science is that when you really do it, you do it even when you don’t know you do it. Thinking about reality in truly scientific terms means that you tune yourself to discovery, and when you do that, man, you have released that genie from the bottle (lamp, ring etc.). When you start discovering, and you get the hang of it, you realize that it is fun and liberating for its own sake. To me, doing science is like playing music: I am just having fun with it.

Having fun with science is important. I had a particularly vivid realization of that yesterday, when, due to a chain of circumstances, I had to hold a lecture in macroeconomics in a classroom of anatomy. There was no whiteboard to write on, but there were two skeletons standing in the two corners on my sides, and there were microscopes, of course covered with protective plastic bags. Have you ever tried to teach macroeconomics using a skeleton, and with nothing to write on? As I think about it, a skeleton is excellent for metaphorical a representation of functional connections in a system.

Since the beginning of this calendar year, I have been taking on those serious business plans, and, by the way, I am still doing it. Still, in my current work on two business plans I am preparing in parallel – one for the EneFin project (FinTech in the market of energy), and the other for the MedUs project (Blockchain in the market of private healthcare) – I recently realized that I am starting to think science. In my last update in French, the one entitled Ça me démange, carrément, I have already nailed down one hypothesis, and some empirical data to check it. The hypothesis goes like this: ‘The technology of renewable energies is in its phase of banalisation, i.e. it is increasingly adapting its utilitarian forms to the social structures that are supposed to absorb it, and, reciprocally, those structures adapt to those utilitarian forms so as to absorb them efficiently’.

As hypotheses come, this one is still pretty green, i.e. not ripe yet for rigorous scientific proof, on account of there being too many different ideas in it. Good hypotheses are simple, so that you can give them a shave with Ockham’s razor and cut the bullshit out. Still, a green hypothesis is better than no hypothesis at all. I can farm it and make it ripen, which I have already applied myself to do. In an Excel file, which you can see and download from the archive of my blog, I included the results of quick empirical research I did with the help of https://patents.google.com: I studied patent applications and patents granted in the respective fields of wind, hydro, photovoltaic, and solar-thermal energies, in three important patent offices across the world, namely the European Patent Office (‘EP’ in that Excel file), the US Patent & Trademark Office (‘US’), and in continental China.
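
For whoever wants to redo that rummaging, here is a sketch of the counting step behind such an Excel file: tallying patent documents per publication year from a CSV exported out of a https://patents.google.com search. The file name and the column name (‘publication date’) are my assumptions about that export, not a documented format; adjust them to whatever the actual download contains.

```python
# Counting patent documents per publication year from a CSV export of a
# Google Patents search. File and column names are assumptions -- check
# the actual download before running.

import pandas as pd

df = pd.read_csv("gp-search-vawt.csv", skiprows=1)  # first row is often metadata
df["year"] = pd.to_datetime(df["publication date"], errors="coerce").dt.year
print(df.groupby("year").size())
```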

As I had a look at those numbers, yes, indeed, there has been like a recent surge in the diversity of patented technologies. My intuition about banalisation could be true. Technologies pertaining to the generation of renewable energies start to wrap themselves around social structures around them, and said structures do the same with technologies. Historically, it is a known phenomenon. The motor power of animals (oxen, horses and mules, mostly), wind power, water power, thermal energy from the burning of fossil fuels – all these forms of energy started as novelties, and then grew into human social structures. As I think about it, even the power of human muscles went through that process. At some point in time, human beings discovered that their bodies can perform organized work, i.e. muscular power can be organized into labour.

Discovering that we can work together was really a bit of a discovery. You have probably read or heard about Gobekli Tepe, that huge megalithic enclosure located in Turkey, apparently the oldest known proof of temple-sized human architecture. I watched an excellent documentary about the place, on National Geographic. Its point was that, if we put aside all the fantasies about aliens and Atlanteans, the huge megalithic structure of Gobekli Tepe had most probably been made by simple, humble hunter-gatherers, who were thus discovering the immense power of organized work, and even invented a religion in order to make the whole business run smoothly. Nothing fancy: they used to cut their deceased ones’ heads off, clean the skulls, and keep them at home, in a prominent place, in order to think themselves into the phenomenon of inter-generational heritage. This is exactly what my great compatriot, Alfred Count Korzybski, wrote about being human: we have that unique capacity to ‘bind time’, or, in other words, to make our history into a heritage with accumulation of skills.

That was precisely an example of what a banalised technology (not to be confused with a ‘banal technology’) can do. My point – and my gut feeling – is that we are, right now, precisely at this Gobekli-Tepe phase with renewable energies. With the progressing diversity in the corresponding technologies, we are transforming our society so that it can work as efficiently as possible with said technologies.

Good, that’s the first piece of science I have come up with as regards renewable technologies. Another piece is connected to what I wrote about the market of renewable energies in Europe in my last update in English, namely in At the frontier, with my numbers. In Europe, we are a bit of a bunch of originals, in comparison to the rest of the world. Said rest of the world generally pumps up their consumption of energy per capita, as measured in them kilograms of oil equivalent. We, in Europe, have mostly chosen the path of frugality, and our kilograms of oil per capita tend to shrink consistently. On top of all that, there seems to be a pattern in it: a functional connection between the overall consumption of energy per capita and the aggregate consumption of renewable energies.

I am going to expose this particular gut feeling of mine by small steps. In Table 1, below, I am introducing two growth rates, compound between 1990 and 2015: the growth rate in the overall, final consumption of energy per capita, against that in the final consumption of renewable energies. I say ‘against’, as in the graph below the table I make a visualisation of those numbers, and it shows an intriguing regularity. The plot of points takes a form opposite to those frontiers I showed you in At the frontier, with my numbers. This time, my points follow something like a gentle slope, and the further to the right, the gentler that slope becomes. It is visualised even more clearly with the exponential trend line (red dotted line), and I sketch the fitting of that trend line right after the graph.

We, I mean economists, call this type of curve, with a nice convexity, an ‘indifference curve’. Funnily enough, we use indifference curves to study choice. Anyway, there is sort of an intuitive difference between frontiers, on the one hand, and indifference curves, on the other hand. In economics, we assume that frontiers are somehow unstable: they represent a state of things that is doomed to change. A frontier envelops something that either swells or shrinks. On the other hand, an indifference curve suggests an equilibrium, i.e. each point on that curve is somehow steady and respectable as long as nobody comes to knock it out of balance. Whilst a frontier is like a skin, enveloping the body, an indifference curve is more like a spinal cord.

We have an indifference curve, hence a hypothetical equilibrium, between the dynamics of the overall consumption of energy per capita, and those of the aggregate use of renewable energies. I don’t even know how to call it. That’s the thing with freshly observed equilibria: they look nice, you could just fall in love with them, but if somebody asks what exactly they are, those nice things, you could have trouble answering. As I am trying to sort it out, I start with assuming that the overall consumption of energy per capita reflects two complex sets. The first set is that of everything we do, divided into three basic fields of activity: a) the goods and services we consume (they contain the energy that served to supply them), b) transport, and c) the strictly spoken household use of energy. The second set, or another way of apprehending essentially the same ensemble of phenomena, is a set of technologies. Our overall consumption of energy depends on the total installed power of the engines and electronic devices we use.

Now, the total consumption of renewable energies depends on the aggregate capacity installed in renewable technologies. In other words, this mysterious equilibrium of mine (if there is any, mind you) would be an equilibrium between two sets of technologies: those generating energy, and those serving to consume it. Honestly, I don’t even know how to phrase it into a decent hypothesis. I need time to wrap my mind around it.

Table 1. Compound growth rates, 1990 – 2015

Country            Final consumption of energy per capita    Final consumption of renewable energies
Austria            17,4%                                     80,7%
Switzerland        -18,4%                                    48,6%
Czech Republic     -19,5%                                    241,0%
Germany            -13,7%                                    501,2%
Spain              11,0%                                     104,4%
Estonia            -33,0%                                    359,5%
Finland            4,1%                                      101,8%
France             -3,7%                                     42,3%
United Kingdom     -23,2%                                    1069,6%
Netherlands        -3,7%                                     434,9%
Norway             17,1%                                     39,8%
Poland             -8,0%                                     336,8%
Portugal           26,8%                                     32,6%

[Graph: growth rate in energy consumption per capita vs. growth rate in consumption of renewables, 1990 – 2015, with exponential trend line]
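
For whoever wants to redo that red dotted line, here is a minimal sketch in Python, fitting the exponential form y = A*exp(B*x) to the pairs from Table 1, with growth rates expressed as fractions.

```python
# Fitting the exponential trend y = A * exp(B * x) to the pairs from Table 1
# (17,4% -> 0.174 etc.).

import numpy as np

x = np.array([17.4, -18.4, -19.5, -13.7, 11.0, -33.0, 4.1, -3.7,
              -23.2, -3.7, 17.1, -8.0, 26.8]) / 100    # energy per capita
y = np.array([80.7, 48.6, 241.0, 501.2, 104.4, 359.5, 101.8, 42.3,
              1069.6, 434.9, 39.8, 336.8, 32.6]) / 100  # renewables

# Linearize: ln(y) = ln(A) + B * x, then fit by ordinary least squares.
B, lnA = np.polyfit(x, np.log(y), 1)
print(f"y = {np.exp(lnA):.2f} * exp({B:.2f} * x)")
```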

 


A project name has come to my mind: EneFin

My editorial

And so I have made a choice. In my last update in French (Plus ou moins les facteurs associés) I finally decided what I want this FinTech business plan to be about. I want it to be about FinTech in the market of energy, and that FinTech should serve to promote renewable energies, possibly in the environment of smart cities. One decision drags others behind it, and so it is happening this time. I have an idea for further scientific research. A title has come to my mind: ‘Fiscalization or monetization of energy?’. I mean, what can governments do in the market of energy with their budgets, versus the things that monetary systems can change? I have just connected two more dots in my scientific memory. In my book, entitled Capitalism and Political Power, I presented a curious correlation I had found, namely that between the total amount of political power in a political system, on the one hand, and the amount of capital controlled by said system, on the other hand.

Long story short: the amount of political power can be measured as the number of distinct entities in a political system who can effectively wield a veto against a new policy. The more veto players in the same system, the more political power the system contains. There is even a whole methodology for assessing political systems along that line of logic: it is called the Database of Political Institutions (DPI). I used that database to test a simple intuition: each veto player needs control over some capital in order to be actually able to wield their veto power, and so the more veto players, the more capital they need, in total, to play their vetoes. My intuition turned out to be correct: I found a strong correlation between the metrics used in the DPI and the amount of capital held by the public sector in the form of liquid financial assets, deducible from the gross public debt. You take one official fiscal aggregate, namely gross public debt. Then you take another one, called net public debt. You subtract the latter from the former and Bob’s your uncle: you have the residual difference, i.e. the financial assets possible to interpret as claims on the rest of the world. The amount of those claims in the balance sheet of public debt is strongly and positively correlated with the amount of political power in the veto players of the given system.
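
A minimal sketch of that arithmetic, with hypothetical gross and net debt figures, just to fix the procedure:

```python
# Gross public debt minus net public debt = liquid financial assets held
# by the public sector (claims on the rest of the world). The figures
# below are hypothetical, in % of GDP, purely to show the subtraction.

public_debt = {
    # country      (gross, net)
    "Country A": (105.0,  80.0),
    "Country B": ( 60.0,  55.0),
    "Country C": (230.0, 150.0),
}

for country, (gross, net) in public_debt.items():
    print(f"{country}: liquid financial assets = {gross - net:.1f}% of GDP")
```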

This is just part of the story. I know, it should have been a short story, but what can I do: I am a scientist. I love telling people things I think they don’t know and I think I know. What? That’s the same as gossiping? Nonsense. Gossiping has much broader an audience than science. This is the difference: science means I tell people things I think I know and I think they don’t know and don’t want to listen to. Anyway, my story is a bit longer than just a short story. Both my research and the World Bank’s ‘World Development Report 2017: Governance and the Law’ suggest that governments are sort of shrinking, across the world and over time. There are fewer and fewer real veto players in political systems, more and more facade democracy, and, in economic terms, less and less hold of fiscal policies over the available capital balances in the private sector. Still, in the background, there is another story going on. Monetary systems swell, and I am talking just about the so-called fiat money (i.e. the money blessed by central banks, so that it goes and breeds happily).

So, there is my new thinking. Governments can promote the transition towards renewable energies in two ways: fiscal or monetary. In the fiscal approach, governments take taxes in one hand, subsidies in the other hand, and they can directly meddle inside the energy sector. In the monetary approach, governments basically act so as to make the monetary system as liquid and flexible as possible and then they let the money do the thinking. The scientific work that I am taking on is focused on studying the modalities, opportunities and threats correlated with each of these directions. The business plan I am starting to write is about developing a FinTech project, which, whilst being workable and profitable, will contribute to promoting renewable energies.

By the way, I have just come up with a working name for this project: EneFin.

After the study of three cases – Square Inc., FinTech Group AG and Katipult – my internal curious ape suggests I develop four lines of business in EneFin: trade in purchasing power, organisation of payment services, trade in the equity of energy companies, and, finally, trade in their corporate debt. I am going to study each of these four in terms of its economics, legal regime and technology. Trade in purchasing power is probably the closest to my once-much-honed concept of the Wasun (see, for example, Taking refuge during the reign and my other posts from late spring and summer 2017), and I start with this one. The basic idea is to buy, from the providers of electricity, standardized deeds of purchasing power, like coupons for electricity, at a wholesale price, in order to resell them at a retail price. The most elementary economics of the thing begin with the definition of six sets: power installations, grid operators, output of energy, deeds of purchasing power, resellers of deeds, and consumers of electricity.

The set PR = {pr1, pr2, …, prn} of n power installations encompasses anything able to generate electricity, ranging from household-scale local installations all the way up to big power plants. This set is immediately, functionally, and imperfectly connected to the set GR = {gr1, gr2, …, gro} of o grid operators, i.e. the resellers of energy. Note that the functional connection between the sets PR and GR largely depends on the regulatory regime in force. If the law allows each power installation to sell its power directly, any t-th element in the PR set can become identical with an element in the GR set. As the law imposes limitations on direct sales of electricity, the GR set becomes more rigid in its size and structure.

Both the PR and the GR set are functionally connected to the set of output, or Q, made of m kilowatt hours; Q = {kWh1, kWh2, …, kWhm}. Note two things about m. Firstly, m is a compound value: it is the arithmetical product of a constant number of hours in the year (basically 24*365 = 8760, or 8784 in a leap year), on the one hand, and the total generating capacity available, in kilowatts, on the other hand. Secondly, m is really big, and like all big sets, it gains greatly in overall workability when split into smaller, local subsets. By the way, as I look at that Q, I realize how much fun I will provide my French readers with, when taking on the topic in my updates written in French. Whoever speaks French knows what I am talking about. Still, Q is the sacrosanct symbol of quantity in economics, so let there be fun when fun is possible.

The Q set is, in turn, connected to a set of deeds in purchasing power. I call this set D (I know, not very original, it plainly reproduces the initial of ‘deeds’, but I just want to get on with the thing), and I assume it is composed of l deeds, and so I have D = {d1, d2, …, dl}. Those deeds can have various forms. They can be like purchasing coupons, or they could be fancily made into a cryptocurrency. They can be futures contracts as well, and even options, if you are really keen on financial engineering. Sorting out this aspect is a separate chapter in my work; still, one guiding light shines in the darkness, and this is not a train coming from the opposite direction: FinTech is supposed to minimize transaction costs.

Question: which legal form(s) of purchasing deeds for electricity allow(s) the lowest transaction costs? Options, tokens of cryptocurrency etc.? Answer: the one which combines the lowest uncertainty as for its price with the best shielding against opportunistic behaviour by other market participants, as well as with the greatest liquidity in the assets. I know, this answer looks more like another question, and this is not exactly the way actual answers should look. What do you want, I am a gentleman of a certain age, like half a century, and I advance with due gravitas.

Question: what is the exact relationship between Q and D, thus between their respective sizes m and l? Answer: it depends. It depends on the overall liquidity of the market, i.e. on the capacity of your average kilowatt hour in the Q to be assigned a sibling deed in the D. It is exactly like humans: you can be an only child, you can have one twin sibling, or you can have a lot of brothers, sisters and cousins. A given kilowatt hour in the Q can be an only child, i.e. not have any corresponding, tradable deed in the D. If at least some kilowatt hours in the Q are such lone wolves, this is the case of low, de facto inexistent liquidity in the Q, and l < m. If I ramp it up by one level, and I give each kilowatt hour in the Q one corresponding, twin deed of purchasing power, like one token of a cryptocurrency, in the D, I have l = m and my Q is sort of liquid.

We can ramp it up even further, and give each kilowatt hour in the Q many siblings in the D, like a futures contract, which can be the base security for an option, and both can become tokenized in a Blockchained network of cryptocurrency. On top of that, you can add a tradable insurance as for the available capacity in each given kWh, i.e. insurance claimable in case you don’t actually have your kilowatt hour at the time and place you can expect it with the purchasing deed you hold. Insane? Perhaps, but this is how financial markets have been working for as long as there has been any historical record of how they work. Anyway, in such a case, the Q becomes hyper-liquid, and l(D) is waaay bigger than m(Q) (the triple ‘a’ in ‘waaay’ is an emphatic way to show how big the way is).
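Since the relation between m and l drives the whole design, a tiny toy model may help to fix ideas. It is a sketch under invented labels, not a market design:

```python
# Toy model of the three liquidity regimes: count the deeds in D attached
# to each kilowatt hour in Q. All identifiers below are invented.
kwh_ids = ["kWh1", "kWh2", "kWh3", "kWh4"]

def liquidity(deeds_per_kwh):
    m = len(deeds_per_kwh)                            # size of Q
    l = sum(len(d) for d in deeds_per_kwh.values())   # size of D
    if l < m:
        return f"l = {l} < m = {m}: low, de facto inexistent liquidity"
    if l == m:
        return f"l = {l} = m = {m}: one twin deed per kWh, Q is sort of liquid"
    return f"l = {l} > m = {m}: hyper-liquid Q"

# Only one kWh has a deed: lone wolves everywhere, l < m.
print(liquidity({"kWh1": ["coupon"], "kWh2": [], "kWh3": [], "kWh4": []}))
# One cryptocurrency token per kWh: l = m.
print(liquidity({k: [f"token_{k}"] for k in kwh_ids}))
# Token + future + option per kWh: l > m, the hyper-liquid case.
print(liquidity({k: [f"token_{k}", f"future_{k}", f"option_{k}"] for k in kwh_ids}))
```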

Four sets out of six are laid nicely on the table; there are two more. So, the resellers’ set, or R = {r1, r2, …, rk}, handles the whole purchasing-deeds business. The elements of R move around the elements of D. The R set is there, in my line of thinking, as a formal approach to competition in the business planned for EneFin. My EneFin project would belong to R, and, let’s face it: there are and will be others in the set. As a matter of fact, when I sign a contract for electricity with my local provider (mine is Tauron, one of the big Polish distributors of electricity), the company actually acts, to a large extent, as a reseller of purchasing deeds. They sign a contract of their own with power plants, and they commit to buy a certain amount of kWh (although at this scale, we are rather talking about gigawatt hours), and this commitment is largely of the ‘use-it-and-resell-it-anyway-pay-for-it’ type. The R set largely overlaps with the GR set (that of grid operators). The EneFin business, to the extent that it goes into trading those purchasing deeds for the market of electricity, will enter into both competition and cooperation with the resellers of energy.

Finally, the set of consumers, or end-users of electricity: CN = {cn1, cn2, …, cnz}. Right, now, what about those sets? Why have I just defined them? It is coming back, just wait a minute. I know! I remember now! I want to translate the business concept of EneFin into a functional description of the accompanying digital technology. Thinking in terms of sets helps in that respect. So, out of those six sets, EneFin would operate, most of all, on the D = {d1, d2, …, dl} set of purchasing deeds. That would be the core dataset of the whole system. Each t-th d in the D will have a vector of characteristics, and the most important among them are: the hour of the year (i.e. one of the 8760 in a regular year, or 8784 in a leap year), the point(s) of supply that accept the given deed as payment, and the amount of energy assigned to the deed. I basically thought about making the last one constant and equal to 1 kWh.
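To make that core dataset more tangible, here is a minimal sketch of a single deed as a data structure. The field names and the point-of-supply identifiers are my working assumptions, not a finished specification:

```python
# A minimal sketch of one element of the D set; names are working assumptions.
from dataclasses import dataclass

@dataclass
class PurchasingDeed:
    hour_of_year: int        # 1..8760 in a regular year, 1..8784 in a leap year
    supply_points: set       # IDs of the points of supply accepting this deed
    kwh: float = 1.0         # energy assigned to the deed; held constant at 1 kWh

    def valid_at(self, point_id: str) -> bool:
        """Can this deed be used as payment at the given point of supply?"""
        return point_id in self.supply_points

# A deed for hour 4000 of the year, accepted at two (invented) meter IDs.
d1 = PurchasingDeed(hour_of_year=4000,
                    supply_points={"PL-KRK-000117", "PL-KRK-000118"})
print(d1.valid_at("PL-KRK-000117"))   # True
print(d1.valid_at("PL-WAW-000001"))   # False
```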

Now, a short explanation as for the notion of the ‘point of supply’. Electricity is distributed in a complex network. The part of the network which just channels power to its end users is commonly designated as ‘the grid’. Inside the grid sensu largo, we can distinguish the high-voltage grid of distribution, which connects to local grids of supply, which, in turn, operate in medium and low voltage. The grids of supply attach to the end users via the points of supply. Simplifying the thing a bit, every electricity meter is a point of supply. Each such final point of supply is functionally connected, and mechanically wired, to its nearest converter in the grid. I would like the EneFin purchasing deeds to be valid means of payment at every point of supply in the given national power grid. That would be one of the characteristics of nearly-perfect liquidity in those purchasing deeds. Still, the market of electricity is largely feudal: it is full of imperfectly monopolistic arrangements, which bind the end-users to their distributors by means of fixed-term contracts endowed with very heavy contractual penalties for premature termination. How would those local fiefdoms of the energy market see the purchasing deeds I want my EneFin project to trade? Good question. I don’t know the answer.

I have a general thought to share, sort of a punchline to my today’s update. Last spring and summer, when I was coining the concept of the Wasun, or cryptocurrency attached to the market of renewable energies, I was struggling. I had the impression of banging my head against a brick wall, in intellectual terms. Now, working on that EneFin project seems easy, and now I know why: last year I had been trying to invent something economically perfect, whereas now I am putting together the concept of a financial product, which, in turn, stands a chance of carrying something really sound. This is one of those deep understandings I have developed over the last year: financial markets, FinTech included, are like an endocrine system, where each financial product is like a hormone. The bottom line in finance is to create something that works. As long as it works, it gives a chance to channel human effort into something new and maybe useful.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Anyway, the two equations, or the remaining part of Chapter I

My editorial

And so I continue my novel in short episodes, i.e. I am blogging the ongoing progress in the writing of my book about renewable technologies and technological change. Today, I am updating my blog with the remaining part of the first Chapter, which I started yesterday. Just for those who try to keep up, a little reminder about the notations that you are going to encounter in what follows: N stands for population, E represents the non-edible energy that we consume, and F is the intake of food. For the moment, I do not have enough theoretical space in my model to represent other vital things, like dreams, pot, beauty, friendship etc.

Anyway, the two equations, namely ‘N = A*E^µ*F^(1-µ)’ and ‘N = A*(E/N)^µ*(F/N)^(1-µ)’, can both be seen as mathematical expressions of two hypotheses, which seem perfectly congruent at first sight, and yet can diverge. Firstly, each of these equations can be translated into the claim that the size of human population in a given place at a given time depends on the availability of food and non-edible energy in said place and time. In a next step, one is tempted to claim that incremental change in population depends on the incremental change in the availability of food and non-edible energies. Whilst the logical link between the two hypotheses seems rock-solid, the mathematical one is not as obvious, and this is what Charles Cobb and Paul Douglas discovered as they presented their original research in 1928 (Cobb, Douglas 1928[1]).

Their method can be summarised as follows. We have temporal series of three variables: the output utility on the left side of the equation, and the two input factors on the right side. The original production function by Cobb and Douglas had the aggregate output of the economy (Gross Domestic Product) on the output side, whilst the input was made of investment in productive assets and the amount of labour supplied. We return, now, to the most general equation (1), namely U = A*F1^µ*F2^(1-µ), and we focus on the ‘F1^µ*F2^(1-µ)’ part, i.e. on the strictly spoken impact of input factors. The temporal series of output U can be expressed as a linear trend with a general slope, just as the modelled series of values obtained through ‘F1^µ*F2^(1-µ)’. The empirical observation that any reader can make on their own is that the scale factor A can be narrowed down to a value slightly above 1 only if the slope of ‘F1^µ*F2^(1-µ)’ on the right side is significantly smaller than the slope of U. This is a peculiar property of that function: the modelled trend of the compound value ‘F1^µ*F2^(1-µ)’ is always above the trend of U at the beginning of the period studied, and visibly below U by the end of the same period. The scale factor ‘A’ is an averaged proportion between reality and the modelled value. It corresponds to a sequence of quotients, which starts with a local A noticeably below 1, closes on 1 in the central part of the period considered, and rises visibly above 1 by the end of this period. This is what made Charles Cobb and Paul Douglas claim that at the beginning of the historical period they studied, the real output of the US economy was below its potential, and that by the end of their window of observation it became overshot. The same property of this function made it a tool for defining general equilibriums rather than local ones.

As regards my research on renewable energies, that peculiar property of the compound input of food and energy, calculated with ‘E^µ*F^(1-µ)’ or with ‘(E/N)^µ*(F/N)^(1-µ)’, means that I can assess, over a definite window in time, whether available food and energy stay in general equilibrium with population. They do so if my general scale factor ‘A’, averaged over that window in time, stays very slightly over 1, with relatively low a variance. Relatively low, for a parameter equal more or less to one, means a variance in A staying around 0,1 or lower. If these mathematical conditions are fulfilled, I can claim that yes, over this definite window in time, population depends on the available food and energy. Still, as my parameter A has been averaged between trends of different slopes, I cannot directly infer that at any given incremental point in time, like from t0 to t1, my N(t1) – N(t0) = A*{[E(t1)^µ*F(t1)^(1-µ)] – [E(t0)^µ*F(t0)^(1-µ)]}. If we take that incremental point of view, the local A will always be different from the general one.
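For the numerically-minded reader, the fitting procedure just described can be summarised in a few lines of code. This is a minimal sketch under my own operational assumptions (a simple grid search over µ, with the stability of A as the selection criterion, and synthetic series fabricated on the spot), not the actual computation behind the tables further below:

```python
# Minimal sketch of the fitting procedure: scan candidate values of µ,
# compute the annual scale factors A(t) = U(t) / (F1(t)^µ * F2(t)^(1-µ)),
# and keep the µ that makes A most stable over time; the mean of A then
# tells us how close we come to the general equilibrium at A ~ 1.
import numpy as np

def scan_mu(u, f1, f2, grid=np.arange(0.01, 1.0, 0.01)):
    u, f1, f2 = map(np.asarray, (u, f1, f2))
    best = None
    for mu in grid:
        a = u / (f1 ** mu * f2 ** (1.0 - mu))
        stability = a.var() / a.mean() ** 2    # scale-free measure of stability of A
        if best is None or stability < best[0]:
            best = (stability, mu, a.mean(), a.var())
    _, mu, mean_a, var_a = best
    return mu, mean_a, var_a

# Synthetic demonstration series (25 'years'), fabricated so that a general
# equilibrium exists at µ = 0.7 with A slightly above 1; not real data.
rng = np.random.default_rng(0)
f1 = np.linspace(2.0, 4.0, 25)        # e.g. energy use per capita
f2 = np.full(25, 3.3)                 # e.g. constant alimentary intake
u = 1.01 * f1 ** 0.7 * f2 ** 0.3 * (1 + rng.normal(0, 0.01, 25))

print(scan_mu(u, f1, f2))             # recovers µ near 0.7, mean A near 1.01
```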

Bearing those theoretical limitations in mind, the author undertook testing the above equations on empirical data, in a compound dataset made of Penn Tables 9.0 (Feenstra et al. 2015[2]), enriched with data published by the World Bank (regarding the consumption of energy and its split between renewable and non-renewable sources), as well as with data published by FAO with respect to the overall nutritive intake in particular countries. Data regarding energy, and that pertaining to the intake of food, are limited, in both cases, to the period 1990 – 2014, and the initial temporal extension of Penn Tables 9.0 (from 1950 to 2014) has been truncated accordingly. For the same reason, i.e. the availability of empirical data, the original geographical scope of the sample has been reduced from 188 countries to just 116. Each country has been treated as a local equilibrium, as the initial intuition of the whole research was to find out the role of renewable energies for local populations, as well as local idiosyncrasies regarding that role. Preliminary tests aimed at finding workable combinations of empirical variables. This is another specificity of the Cobb – Douglas production function: in its original spirit, it is supposed to work with absolute quantities observable in real life. These real-life quantities are supposed to fit into the equation without being transformed into logarithms or into standardized values. Once again, this is a consequence of the mathematical path chosen, combined with the hypotheses possible to test with that mathematical tool: we are looking for a general equilibrium between aggregates. Of course, an equilibrium between logarithms can be searched for just as well, similarly to an equilibrium between standardized positions, but these are distinct equilibriums.
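For transparency, the assembly of that compound dataset can be sketched as follows; the file names and column labels below are placeholders of mine, not the actual download formats of Penn Tables, the World Bank, or FAO:

```python
# Hypothetical data-preparation pipeline; file names and columns are placeholders.
import pandas as pd

penn = pd.read_csv("penn_tables_9_0.csv")       # population, among other aggregates
energy = pd.read_csv("wb_energy_use.csv")       # energy use, toe per capita
food = pd.read_csv("fao_food_intake.csv")       # caloric intake, kcal/day per capita

df = (penn.merge(energy, on=["country", "year"])
          .merge(food, on=["country", "year"]))
df = df[df["year"].between(1990, 2014)]         # truncate to the common window

# Long-term alimentary status: annual intake in mega-calories per capita,
# averaged over the whole period for each country.
df["food_mcal"] = df["kcal_per_day"] * 365 / 1000
df["food_mcal"] = df.groupby("country")["food_mcal"].transform("mean")
```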

After preliminary tests, equation ‘N = A*E^µ*F^(1-µ)’, thus operating with absolute amounts of food and energy, proved not to be workable at all. The resulting scale factors were far below 1, i.e. the modelled compound inputs of food and energy produced model populations far above the actual ones. On the other hand, the mutated equation ‘N = A*(E/N)^µ*(F/N)^(1-µ)’ proved operational. The empirical variables able to yield plausibly robust scale factors A were: final use of energy per capita, in tons of oil equivalent (factor E/N), and alimentary intake of energy per capita, measured annually in mega-calories (thousands of kcal) and averaged over the period studied (factor F/N). Thus, the empirical mutation that produced reasonably robust results was the one where a relatively volatile (i.e. changing every year) consumption of energy is accompanied by a long-term, de facto constant over time, alimentary status of the given national population. In other words, robust results could be obtained under the implicit assumption that alimentary conditions in each population studied change much more slowly than the technological context, which, in turn, determines the consumption of energy per capita. On the left side of the equation, those two explanatory variables were matched with population measured in millions. Wrapping up the results of those preliminary tests, the theoretical tool used for this research had been narrowed down to an empirical situation where, over the period 1990 – 2014, each million of people in a given country in a given year was being tested for sustainability, regarding the currently available quantity of tons of oil equivalent per capita per year, in non-edible energies, as well as regarding the long-term, annual amount of mega-calories per capita, in alimentary intake.

The author is well aware that all this theoretical path-clearing could have been truly boring for the reader, but it seemed necessary, as this is the point when real surprises started emerging. I was ambitious and impatient in my research, and thus I immediately jumped to testing equation ‘N = A*(E/N)^µ*(F/N)^(1-µ)’ with just the renewable energies in the game, after having eliminated all the non-renewable part of final consumption of energy. The initial expectation was to find some plausible local equilibriums, with the scale factor A close to 1 and displaying sufficiently low a variance, in just some local populations. Denmark, Britain, Germany – these were the places where I expected to find those equilibriums. Stable demographics, a well-developed energy base, no official food deficit: this was the type of social environment which I expected to produce that theoretical equilibrium, and yet I expected to find a lot of variance in the local scale factors A. Denmark seemed to behave according to expectations: it yielded an empirical equation N = A*(Renewable energy per capita)^0,68*(Alimentary intake per capita)^0,32, the second exponent being 1 – 0,68 = 0,32. The scale factor A hit a surprising robustness: its average value over 1990 – 2014 was 1,008202138, with a variance var(A) = 0,059873591. I quickly tested its Scandinavian neighbours: Norway, Sweden, and Finland. Finland yielded higher an exponent on renewable energy per capita, namely µ = 0,85, but the scale factor A was similarly robust, making 1,065855419 on average and displaying a variance equal to 0,021967408. With Norway, results started puzzling me: µ = 0,95, average A = 1,019025526 with a variance of 0,002937442. Those results would roughly mean that whilst in Denmark the availability of renewable energies has a predominant role in producing a viable general equilibrium in population, in Norway it has a quasi-monopoly on shaping the same equilibrium. Cultural clichés started working in my mind at this moment. Norway? That cold country with low density of population, where people, over centuries, just had to eat a lot in order to survive winters, and the population of this country is almost exclusively in equilibrium with available renewable energies? Sweden marked some kind of return to the expected state of nature: µ = 0,77, average A = 1,012941105 with a variance of 0,003898173. Once again, surprisingly robust, but fitting into some kind of predicted state.

What I could already see at this point was that my model produced robust results, but they were not quite what I expected. If one takes a look at the map of the world, Scandinavia is relatively small a region, with quite similar natural conditions for human settlement across all four countries. Similar climate, similar geology, similar access to wind power and water power, similar social structures as well. Still, my model yielded surprisingly marked, local idiosyncrasies across just this small region, and all those local idiosyncrasies were mathematically solid, regarding the variance observable in their scale factors A. This was just the beginning of my puzzlement. I moved South in my testing, to countries like Germany, France and Britain. Germany: µ = 0,31, average A = 1,008843147 with a variance of 0,0363637. One second, µ = 0,31? But just next door North, in Denmark, µ = 0,68, isn’t it? How is it possible? France yielded a robust equilibrium, with average A = 1,021262046 and its variance at 0,002151713, with µ = 0,38. Britain: µ = 0,3, whilst average A = 1,028817158 and the variance in A making 0,017810219. In science, you are generally expected to discover things, but when you discover too much, it causes a sense of discomfort. I had that ‘No, no way, there must be some mistake’ approach to the results I have just presented. The degree of disparity in those nationally observed functions of general equilibrium between population, food, and energy strongly suggested the presence of some purely arithmetical disturbance. Of course, there was that little voice in the back of my head, saying that absolute aggregates (i.e. not the ratios of intensity per capita) did not yield any acceptable equilibrium, and, consequently, there could be something real about the results I had obtained, but I had a lot of doubts.

I thought, for a day or two, that the statistics supplied by the World Bank, regarding the share of renewable energies in the overall final consumption of energy, might be somehow inaccurate. It could be something about the mutual compatibility of data collected from national statistical offices. Fortunately, methods of quantitative analysis of economic phenomena supply a reliable way of checking the robustness of both the model and the empirical data I am testing it with. You supplant one empirical variable with another one, possibly similar in its logical meaning, and you retest. This is what I did. I assumed that the gross, final consumption of energy, in tons of oil equivalent per capita, might be more reliable than the estimated shares of renewable sources in that total. Thus, I tested the same equations, for the same set of countries, this time with the total consumption of energy per capita. It is worth quoting the results of that second test regarding the same countries. Denmark: average scale factor A = 1,007673381 with an observable variance of 0,006893499, and all that in an equation where µ = 0,93. At this point, I felt, once again, as if I were discovering too much at once. Denmark yielded virtually the same scale factor A, and similarly low a variance in A, with two different metrics of energy consumed per capita (total, and just the renewable one), with two different values in the exponent µ. Two different equilibriums on two different bases, each as robust as the other. Logically, it meant the existence of a clear-cut substitution between renewable energies and the non-renewable ones. Why? I will try to explain it with a metaphor. If I manage to stabilize a car, when changing its tyres, with two hydraulic jacks, and then I take away one of the jacks and the car remains stable, it means that the remaining jack can do the work of the two. This one tool is the substitute of two tools, at a rate of 2 to 1. In this case, I had the population of Denmark stabilized both on the overall consumption of energy per capita (two jacks), and on just the consumption of renewable energies (one jack). Total consumption of energy stabilizes population at µ = 0,93 and renewable energies do the same at µ = 0,68. Logically, renewable energies are substitutes to non-renewables with a rate of substitution equal to 0,93/0,68 = 1,367647059. Each ton of oil equivalent in renewable energies consumed per capita, in Denmark, can do the job of some 1,37 tons of non-renewable energies.

Finland was another source of puzzlement: A = 0,788769669, variance of A equal to 0,002606412, and µ = 0,99. Even ascribing to the exponent µ the highest possible value at the second decimal point, i.e. µ = 0,99, I could not bring the model population down to the real one. The model yielded a demographic aggregate much higher than the real population, and the most interesting thing was that this model population seemed correlated with the real one. I could tell by the very low variance in the scale factor A. It meant that Finland, as an environment for human settlement, can perfectly sustain its present headcount with just renewable energies, and, once the non-renewables are added to the model, the same territory shows a significant, unexploited potential for demographic growth. The rate of substitution between renewable energies and the non-renewable ones, this time, seemed to be 0,99/0,85 = 1,164705882. Norway yielded similar results, with the total consumption of energy per capita on the right side of the equation: A = 0,760631741, variance in A equal to 0,001570101, µ = 0,99, substitution rate 1,042105263. Sweden turned out to be similar to Denmark: A = 1,018026405 with a variance of 0,004626486, µ = 0,91, substitution rate 1,181818182. The four Scandinavian countries seem to form an environment where energy plays a decisive role in stabilizing the local populations, and renewable energies seem able to do the job perfectly. The retesting of Germany, France, and Britain brought interesting results, too. Germany: A = 1,009335161 with a variance of 0,000335601, at µ = 0,48, with a rate of substitution of renewables to non-renewables equal to 1,548387097. France: A = 1,019371541, variance of A at 0,001953865, µ = 0,53, substitution at 1,394736842. Finally, Britain: A = 1,028560563 with a variance of 0,006711585, µ = 0,52, substitution rate 1,733333333. Some kind of pattern seems to emerge: the greater the relative weight of energy in producing general equilibrium in population, the greater the substitution rate between renewable energies and the non-renewable ones.

At this point, I was pretty certain that I was using a robust model. So many local equilibriums, produced with different empirical variables, could not be the result of a mistake. Table 1, in the Appendix to Chapter I, gives the results of testing equation (3), with the above-mentioned empirical variables, in 116 countries. The first numerical column of the table gives the arithmetical average of the scale factor ‘A’, calculated over the period studied, i.e. 1990 – 2014. The second column provides the variance of ‘A’ over the same period of time (thus the variance between the annual values of A), and the third specifies the value of the exponent ‘µ’ – ascribed to energy use per capita – at which the given values in A have been obtained. In other words, the mean A and the variance of A specify how close it has been possible to come, in the case of a given country, to the equilibrium assumed in equation (3), and the value of µ is the one that produces that neighbourhood of equilibrium. The results in Table 1 seem to confirm that equation (3), with these precise empirical variables, is robust in the great majority of cases.
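Given the two sketches above (the scan_mu() function and the hypothetical dataset df), producing the columns of Table 1 boils down to one loop over countries; the column names are, again, my placeholders:

```python
# Assembling the Table 1 columns country by country; this reuses scan_mu()
# and df from the earlier sketches, plus 'import pandas as pd' from above.
rows = []
for country, g in df.groupby("country"):
    mu, mean_a, var_a = scan_mu(g["population"], g["energy_toe"], g["food_mcal"])
    rows.append({"country": country, "mean_A": mean_a,
                 "var_A": var_a, "mu": mu})
table1 = pd.DataFrame(rows).sort_values("country")
```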

Most countries studied satisfy the conditions stated earlier: variances in the scale factor ‘A’ are really low, and the average value of ‘A’ can be brought to just above 1. Still, exceptions abound regarding the theoretical assumption of energy use being the dominant factor that shapes the size of the population. In many cases, the value of the exponent µ that allows a neighbourhood of equilibrium is far below µ = 0,5. According to the underlying logic of the model, the magnitude of µ is informative about how strong an impact the differentiation and substitution (between renewable energies, and the non-renewable ones) have on the size of the population in a given time and place. In countries with µ > 0,5, population is being built mostly through access to energy, and through substitution between various forms of energy. Conversely, in countries displaying µ < 0,5, access to food, and internal substitution between various forms of food, becomes more important regarding demographic change. The United States of America comes as one of those big surprises. In this respect, the empirical check brings a lot of idiosyncrasies to the initial lines of the theoretical model.

Countries marked with a (!) are exceptions with respect to the magnitude of the scale factor ‘A’. They are: China, India, Cyprus, Estonia, Gabon, Iceland, Luxembourg, New Zealand, Norway, Slovenia, as well as Trinidad and Tobago. They present a common trait of satisfactorily low a variance in the scale factor ‘A’, in conformity with condition (6), but a mean ‘A’ either unusually high (China A = 1,32, India A = 1,40), or unusually low (e.g. Iceland A = 0,02), whatever the value of the exponent ‘µ’. It could be just a technical limitation of the model: when operating on absolute, non-transformed values, the actual magnitudes of variance on both sides of the equation matter. Motor traffic is an example: if the number of engine-powered vehicles in a country grows spectacularly, in the presence of a demographic standstill, variance on the right side is much greater than on the left side, and this can affect the scale factor. Yet, the variances observable in the scale factor ‘A’, with respect to those exceptional cases, are quite low, and a fundamental explanation is possible. Those countries could be the cases where the available amounts of food and energy either cannot really produce as big a population as there really is (China, India), or, conversely, could produce much bigger a population than the current one (Iceland is the most striking example). From this point of view, the model could be able to identify territories with no room left for further demographic growth, and those with comfortable pockets of food and energy to sustain much bigger populations. An interpretation in terms of economic geography is also plausible: these could be situations where official, national borders cut through human habitats, as determined by energy and food, rather than encircling them.

Partially wrapping it up: the results in Table 1 demonstrate that equation (3) of the model is both robust and apt to identify local idiosyncrasies. The blade having been sharpened, the next step of the empirical check consisted in replacing the overall consumption of energy per capita with just the consumption of renewable energies, as calculated on the grounds of data published by the World Bank, and in retesting equation (3) on the same countries. Table 2, in the Appendix to Chapter I, shows the results of those 116 tests. The presentational convention is the same (just keep in mind that the values in A and in µ correspond to renewable energy in the equation), and the last column of the table supplies a quotient which, for lack of a better expression, is named ‘rate of substitution between renewable and non-renewable energies’. The meaning of that substitution quotient appears as one studies the values observed in the scale factor ‘A’. In the great majority of countries, save for the exceptions marked with (!), it was possible to define a neighbourhood of equilibrium regarding equation (3) and condition (6). Exceptions are treated as such, this time, mostly due to unusually (and unacceptably) high a variance in the scale factor ‘A’. They are countries where deriving population from access to food and renewable energies is a bit dubious, regarding the robustness of prediction with equation (3).

The provisional bottom line is that for most countries it is possible to derive, plausibly, the size of population in the given place and time from both the overall consumption of energy, and from the use of just the renewable energies, in the presence of relatively constant an alimentary intake. Similar national idiosyncrasies appear as in Table 1, but this time another idiosyncrasy pops up: the gap between the µ exponents in the two empirical mutations of equation (3). The µ ascribed to renewable energy per capita is always lower than the µ corresponding to the total use of energy – for the sake of presentational convenience they are further addressed as, respectively, µ(R/N) and µ(E/N) – but the proportions between those two exponents vary greatly between countries. It is useful to go once again through the logic of µ. It is the exponent which has to be ascribed to the consumption of energy per capita in order to produce a neighbourhood of equilibrium in population, in the presence of relatively constant an alimentary regime. For each individual country, both µ(R/N) and µ(E/N) correspond to virtually the same mean and variance in the scale factor ‘A’. If both the total use of energy, and just the consumption of renewable energies, can produce such a neighbourhood of equilibrium, the quotient ‘µ(E/N)/µ(R/N)’ reflects the amount of total energy use, in tons of oil equivalent per capita, which can be replaced by one ton of oil equivalent per capita in renewable energies, whilst keeping that neighbourhood of equilibrium. Thus, the quotient µ(E/N)/µ(R/N) can be considered as a levelled, long-term rate of substitution between renewable energies and the non-renewable ones.

One possible objection is to be dealt with at this point. In practically all the countries studied, populations use a mix of energies: renewable plus non-renewable. The amount of renewable energies used per capita is always lower than the total use of energy. Mathematically, the magnitude of µ(R/N) is always smaller than that observable in µ(E/N). Hence, the quotient µ(E/N)/µ(R/N) is bound to be greater than one, and the resulting substitution ratio could be considered as just a mathematical trick. Still, the key issue here is that both ‘(E/N)^µ’ and ‘(R/N)^µ’ can produce a neighbourhood of equilibrium with a robust scale factor. Translating maths into the facts of life, the combined results of Tables 1 and 2 (see Appendix) strongly suggest that renewable energies can reliably produce a general equilibrium in, and sustain, any population on the planet, with a given supply of food. If a given factor X is supplied in relatively smaller an amount than a factor Y, and, other things held constant, the supply of X can produce the same general equilibrium as the supply of Y, then X is a natural substitute of Y at a rate greater than one. Thus, µ(E/N)/µ(R/N) > 1 is far more than just a mathematical accident: it seems to be a structural property of our human civilisation.
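The same reasoning can be written down compactly. Below is a short restatement, in LaTeX, of why the quotient works as a substitution rate, under the empirically established assumption that both mutations yield virtually the same robust scale factor A:

```latex
% Both equilibria hold with (virtually) the same robust scale factor A:
%   N = A (E/N)^{\mu_{E/N}} (F/N)^{1-\mu_{E/N}}   and
%   N = A (R/N)^{\mu_{R/N}} (F/N)^{1-\mu_{R/N}} .
% Each ton of oil equivalent acts on the equilibrium through its exponent,
% so the levelled rate at which renewables substitute for total energy use is
\[
\sigma \;=\; \frac{\mu_{E/N}}{\mu_{R/N}} \;\ge\; 1,
\qquad \text{e.g. for Denmark: } \sigma = \frac{0{,}93}{0{,}68} \approx 1{,}37 .
\]
```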

Still, it is interesting how far µ(E/N)/µ(R/N) reaches beyond the 1:1 substitution. In this respect, probably the most interesting insight is offered by the exceptions, i.e. countries marked with (!), where the model fails to supply a 100%-robust scale factor in either of the two empirical mutations performed on equation (3). Interestingly, in those cases the rate of substitution is exactly µ(E/N)/µ(R/N) = 1. Populations either too big or too small, regarding their endowment in energy, do not really have obvious gains in sustainability when switching to renewables. Such a µ(E/N)/µ(R/N) > 1 substitution occurs only when the actual population is very close to what can be modelled with equation (3). Two countries – Saudi Arabia and Turkmenistan – offer an interesting insight into the underlying logic of the µ(E/N)/µ(R/N) quotient. They both present µ(E/N)/µ(R/N) > 2. Coherently with the explanation supplied above, it means that substituting renewable energies for the non-renewable ones, in those two countries, could fundamentally change their social structures and sustain much bigger populations. Intriguingly, they are both ‘resource-cursed’ economies, with oil and gas taking so big a chunk of economic activity that there is hardly any room left for anything else.

Most countries on the planet, with just the exceptions of China and India, seem able to sustain significantly bigger populations than their present ones through shifting to 100% renewable energies. In the two ‘resource-cursed’ cases, namely Saudi Arabia and Turkmenistan, this demographic shift, possible with renewable energies, seems nothing less than dramatic. As I was progressively wrapping my mind around it, a fundamental question formed: what exactly am I measuring with that exponent µ? I returned to the source of my inspiration, namely to the model presented by Paul Krugman in 1991 (Krugman 1991 op. cit.). In that model, whichever of the two factors on the right side of the equation is endowed with the dominant power is, at the same time, the motor force behind the spatial structuring of human settlement. I have, as a matter of fact, three factors in my model: non-edible renewable energy, substitutable with non-edible and non-renewable energy, and the consumption of food per capita. As I contemplate these three factors, a realisation dawns: none of the three can be maximized or even optimized directly. When I use more electricity than I did five years earlier, it is not because I plug my fingers more frequently into the electric socket: I shape my consumption of energy through a bundle of technologies that I use. As for the availability of food, the same occurs: with the rare exception of top-level athletes, the caloric intake is the by-product of a lifestyle (office clerk vs construction site worker) rather than a fully conscious, purposeful action. Each of the three factors is being absorbed through a set of technologies. Here, some readers may ask: if I grow vegetables in my own garden, isn’t it far-fetched to call it a technology? If we were living in a civilisation which feeds itself exclusively with home-grown vegetables, that could be an exaggeration, I agree. Yet, we are a civilisation which has developed a huge range of technologies in industrial farming. Vegetables grown in my garden are substitutes for foodstuffs supplied from industrially run farms, as well as for industrially processed food. If something is functionally a substitute for a technology, it is a technology, too. The exponents obtained, according to my model, for particular factors, in individual countries, reflect the relative pace of technological change in three fundamental fields of technology, namely:

a) everything that makes us use non-edible energies, ranging from a refrigerator to a smartphone; here, we are mostly talking about two broad types of technologies, namely engines of all kinds, and electronic devices;

b) technologies that create choice between the renewable and the non-renewable sources of energy, thus first and foremost the technologies of generating electricity: windmills, watermills, photovoltaic installations, solar-thermal plants etc.; they are, for the most part, one step earlier in the chain of energy than the technologies mentioned in (a);

c) technologies connected to the production and consumption of food, composed into a long chain with side-branches, starting from farming, through the processing of food, and ending with packaging, distribution, vending and gastronomy.

As I tested the theoretical equation N = A*(E/N)^µ*(F/N)^(1-µ), most countries yielded a plausible, robust equilibrium between the local (national) headcount, and the specific, local mix of technologies grouped in those three categories. A question emerges, as a hypothesis to explore: is it possible that our collective intelligence expresses itself in creating local technological mixes of engines, electronics, power generation, and alimentary technologies which, in turn, allow us to optimize our population? Can technological change be interpreted as an intelligent, energy-maximizing adaptation?

Appendix to Chapter I

Table 1 Parameters of the function: Population = (Energy use per capita[3])^µ * (Food intake per capita[4])^(1-µ)

Country name | Average daily intake of food, in kcal per capita | Mean scale factor ‘A’ over 1990 – 2014 | Variance in the scale factor ‘A’ over 1990 – 2014 | The exponent ‘µ’ of the ‘energy per capita’ factor
Albania 2787,5 1,028719088 0,048263309 0,78
Algeria 2962,5 1,00792777 0,003115684 0,5
Angola 1747,5 1,042983003 0,034821077 0,52
Argentina 3085 1,05449632 0,001338937 0,53
Armenia 2087,5 1,027874602 0,083587662 0,8
Australia 3120 1,053845754 0,005038742 0,77
Austria 3685 1,021793945 0,002591508 0,87
Azerbaijan 2465 1,006243759 0,044217939 0,74
Bangladesh 2082,5 1,045244854 0,007102476 0,21
Belarus 3142,5 1,041609177 0,016347323 0,8
Belgium 3655 1,004454515 0,003480147 0,88
Benin 2372,5 1,030339133 0,034533869 0,61
Bolivia (Plurinational State of) 2097,5 1,019990919 0,003429637 0,62
Bosnia and Herzegovina (!) 2862,5 1,037385012 0,214843872 0,81
Botswana 2222,5 1,068786155 0,009163141 0,92
Brazil 2907,5 1,013624942 0,003643215 0,26
Bulgaria 2847,5 1,058220643 0,005405994 0,82
Cameroon 2110 1,021629875 0,051074111 0,5
Canada 3345 1,036202396 0,007687519 0,73
Chile 2785 1,027291576 0,003554446 0,65
China (!) 2832,5 1,328918607 0,002814054 0,01
Colombia 2582,5 1,074031013 0,013875766 0,44
Congo 2222,5 1,078933108 0,024472619 0,71
Costa Rica 2802,5 1,050377494 0,005668136 0,78
Côte d’Ivoire 2460 1,004959783 0,007587564 0,52
Croatia 2655 1,072976483 0,009344081 0,72
Cyprus (!) 3185 0,325015959 0,00212915 0,99
Czech Republic 3192,5 1,004089056 0,002061036 0,84
Denmark 3335 1,007673381 0,006893499 0,93
Dominican Republic 2217,5 1,062919767 0,006550924 0,65
Ecuador 2225 1,072013967 0,00294547 0,6
Egypt 3172,5 1,036345512 0,004306619 0,38
El Salvador 2510 1,013036366 0,004187964 0,7
Estonia (!) 2980 0,329425185 0,001662589 0,99
Ethiopia 1747,5 1,073625398 0,039032523 0,31
Finland (!) 3147,5 0,788769669 0,002606412 0,99
France 3557,5 1,019371541 0,001953865 0,53
Gabon (!) 2622,5 0,961643759 0,016248519 0,99
Georgia 2350 1,044229266 0,059636113 0,76
Germany 3440 1,009335161 0,000335601 0,48
Ghana 2532,5 1,000098029 0,047085907 0,48
Greece 3610 1,063074 0,003756555 0,77
Haiti 1815 1,038427773 0,004246483 0,56
Honduras 2457,5 1,030624938 0,005692923 0,67
Hungary 3440 1,024235523 0,001350114 0,78
Iceland (!) 3150 0,025191922 2,57214E-05 0,99
India (!) 2307,5 1,403800869 0,024395268 0,01
Indonesia 2497,5 1,001768442 0,004578895 0,2
Iran (Islamic Republic of) 3030 1,034945678 0,001105326 0,45
Ireland 3622,5 1,007003095 0,017135706 0,96
Israel 3490 1,008446182 0,013265865 0,87
Italy 3615 1,007727182 0,001245927 0,51
Jamaica 2712,5 1,056188543 0,01979275 0,9
Japan 2875 1,0094237 0,000359135 0,38
Jordan 2820 1,015861129 0,031905756 0,77
Kazakhstan 3135 1,01095925 0,021868381 0,74
Kenya 2010 1,018667155 0,02914075 0,42
Kyrgyzstan 2502,5 1,009443502 0,053751489 0,71
Latvia 3015 1,010440502 0,023191031 0,98
Lebanon 3045 1,036073511 0,054610186 0,85
Lithuania 3152,5 1,008092894 0,025234007 0,96
Luxembourg (!) 3632,5 0,052543325 6,62285E-05 0,99
Malaysia 2855 1,017853322 0,001002682 0,61
Mauritius 2847,5 1,070576731 0,019964794 0,96
Mexico 3165 1,01483014 0,009376118 0,36
Mongolia 2147,5 1,061731985 0,030246541 0,9
Morocco 3095 1,07892333 0,000418636 0,47
Mozambique 1922,5 1,023422366 0,041833717 0,48
Nepal 2250 1,059720031 0,006741455 0,46
Netherlands 2925 1,040887411 0,000689576 0,78
New Zealand (!) 2785 0,913678062 0,003946867 0,99
Nicaragua 2102,5 1,045412214 0,007065561 0,69
Nigeria 2527,5 1,069148598 0,032086946 0,28
Norway (!) 3340 0,760631741 0,001570101 0,99
Pakistan 2275 1,062522698 0,020995863 0,24
Panama 2347,5 1,007449033 0,00243433 0,81
Paraguay 2570 1,07179452 0,021405906 0,73
Peru 2280 1,050166142 0,00327043 0,47
Philippines 2387,5 1,0478458 0,022165841 0,32
Poland 3365 1,004848541 0,000688294 0,56
Portugal 3512,5 1,036215564 0,006604633 0,76
Republic of Korea 3027,5 1,01734341 0,011440406 0,56
Republic of Moldova 2762,5 1,002387234 0,038541243 0,8
Romania 3207,5 1,003204035 0,003181708 0,62
Russian Federation 3032,5 1,050934925 0,001953049 0,38
Saudi Arabia 2980 1,026310231 0,007502008 0,72
Senegal 2187,5 1,05981161 0,021382472 0,54
Serbia and Montenegro 2787,5 1,0392151 0,012416926 0,8
Slovakia 2875 1,011063497 0,002657276 0,92
Slovenia (!) 3042,5 0,583332004 0,003458657 0,99
South Africa 2882,5 1,053438343 0,009139913 0,53
Spain 3322,5 1,061083277 0,004844361 0,56
Sri Lanka 2287,5 1,029495671 0,001531167 0,5
Sudan 2122,5 1,028532781 0,044393335 0,4
Sweden 3072,5 1,018026405 0,004626486 0,91
Switzerland 3385 1,047790357 0,007713383 0,88
Syrian Arab Republic 2970 1,010909679 0,017849377 0,59
Tajikistan 2012,5 1,004745997 0,078394669 0,62
Thailand 2420 1,05305435 0,004200173 0,41
The former Yugoslav Republic of Macedonia 2755 1,064764097 0,003242024 0,95
Togo 2020 1,007094875 0,014424982 0,66
Trinidad and Tobago (!) 2645 0,152994618 0,003781236 0,99
Tunisia 3230 1,053626454 0,001201886 0,66
Turkey 3510 1,02188909 0,001740729 0,43
Turkmenistan 2620 1,003674668 0,024196536 0,96
Ukraine 3040 1,044110717 0,005180992 0,54
United Kingdom 3340 1,028560563 0,006711585 0,52
United Republic of Tanzania 1987,5 1,074441381 0,031503549 0,41
United States of America 3637,5 1,023273537 0,006401009 0,3
Uruguay 2760 1,014226024 0,019409309 0,82
Uzbekistan 2550 1,056807711 0,031469698 0,59
Venezuela (Bolivarian Republic of) 2480 1,048332115 0,012077362 0,6
Viet Nam 2425 1,050131152 0,000866138 0,31
Yemen 2005 1,076332698 0,029772287 0,47
Zambia 1937,5 1,0479534 0,044241343 0,59
Zimbabwe 2035 1,063047787 0,022242317 0,6

Source: author’s own research

 

Table 2 Parameters of the function: Population = (Renewable energy use per capita[5])^µ * (Food intake per capita[6])^(1-µ)

Country name | Mean scale factor ‘A’ over 1990 – 2014 | Variance in the scale factor ‘A’ over 1990 – 2014 | The exponent ‘µ’ of the ‘renewable energy per capita’ factor | The rate of substitution between renewable and non-renewable energies[7]
Albania 1,063726823 0,015575246 0,7 1,114285714
Algeria 1,058584384 0,044309122 0,44 1,136363636
Angola 1,044147837 0,063942546 0,49 1,06122449
Argentina 1,039249286 0,005115111 0,39 1,358974359
Armenia 1,082452967 0,023421839 0,59 1,355932203
Australia 1,036777388 0,009700331 0,52 1,480769231
Austria 1,017958672 0,007854467 0,71 1,225352113
Azerbaijan 1,07623299 0,009740098 0,47 1,574468085
Bangladesh 1,088818696 0,017086232 0,2 1,05
Belarus (!) 1,017676486 0,142728478 0,51 1,568627451
Belgium 1,06314732 0,095474709 0,52 1,692307692
Benin (!) 1,045986178 0,101094528 0,58 1,051724138
Bolivia (Plurinational State of) 1,078219551 0,034143037 0,53 1,169811321
Bosnia and Herzegovina 1,077445974 0,084400986 0,66 1,227272727
Botswana 1,022264687 0,056890261 0,79 1,164556962
Brazil 1,066438509 0,005012883 0,24 1,083333333
Bulgaria (!) 1,022253185 0,190476288 0,55 1,490909091
Cameroon 1,040548202 0,059668736 0,5 1
Canada 1,02539319 0,005170473 0,56 1,303571429
Chile 1,006307911 0,001159941 0,55 1,181818182
China 1,347729029 0,003248871 0,01 1
Colombia 1,016164864 0,019413193 0,37 1,189189189
Congo 1,041474959 0,030195913 0,67 1,059701493
Costa Rica 1,008081248 0,01876342 0,68 1,147058824
Côte d’Ivoire 1,013057174 0,009833628 0,5 1,04
Croatia 1,072976483 0,009344081 0,72 1
Cyprus (!) 1,042370253 0,838872562 0,72 1,375
Czech Republic 1,036681212 0,044847525 0,56 1,5
Denmark 1,008202138 0,059873591 0,68 1,367647059
Dominican Republic 1,069124974 0,020305242 0,53 1,226415094
Ecuador 1,008104202 0,025383593 0,47 1,276595745
Egypt 1,03122058 0,016484947 0,28 1,357142857
El Salvador 1,078008598 0,028182822 0,64 1,09375
Estonia (!) 1,062618744 0,418196957 0,88 1,125
Ethiopia 1,01313572 0,036192629 0,3 1,033333333
Finland 1,065855419 0,021967408 0,85 1,164705882
France 1,021262046 0,002151713 0,38 1,394736842
Gabon 1,065944525 0,011751745 0,97 1,020618557
Georgia 1,011709194 0,012808503 0,66 1,151515152
Germany 1,008843147 0,03636378 0,31 1,548387097
Ghana (!) 1,065885579 0,106721005 0,46 1,043478261
Greece 1,033613511 0,009328533 0,55 1,4
Haiti 1,009030442 0,005061414 0,54 1,037037037
Honduras 1,028253048 0,022719417 0,62 1,080645161
Hungary 1,086698434 0,022955955 0,54 1,444444444
Iceland 0,041518305 0,000158837 0,99 1
India 1,414055357 0,025335408 0,01 1
Indonesia 1,003393135 0,008680379 0,18 1,111111111
Iran (Islamic Republic of) 1,06172763 0,011215001 0,26 1,730769231
Ireland 1,075982896 0,02796979 0,61 1,573770492
Israel 1,06421352 0,004086618 0,61 1,426229508
Italy 1,072302127 0,020049639 0,36 1,416666667
Jamaica 1,002749054 0,010620317 0,67 1,343283582
Japan 1,082461225 0,000372112 0,25 1,52
Jordan 1,025652757 0,024889809 0,5 1,54
Kazakhstan 1,078500526 0,007887364 0,44 1,681818182
Kenya 1,039952786 0,031445338 0,41 1,024390244
Kyrgyzstan 1,036451717 0,011487047 0,6 1,183333333
Latvia 1,02535782 0,044807273 0,83 1,180722892
Lebanon 1,050444418 0,053181784 0,6 1,416666667
Lithuania (!) 1,076146779 0,241465686 0,72 1,333333333
Luxembourg (!) 1,080780192 0,197582319 0,93 1,064516129
Malaysia 1,018207799 0,034303031 0,42 1,452380952
Mauritius 1,081652351 0,082673843 0,79 1,215189873
Mexico 1,01253558 0,019098478 0,27 1,333333333
Mongolia 1,073924505 0,017542414 0,6 1,5
Morocco 1,054779512 0,005553697 0,38 1,236842105
Mozambique 1,062086076 0,047101957 0,48 1
Nepal 1,02819587 0,008319264 0,45 1,022222222
Netherlands 1,079123029 0,043322084 0,46 1,695652174
New Zealand 1,046855187 0,004522505 0,83 1,192771084
Nicaragua 1,034941617 0,021798159 0,64 1,078125
Nigeria 1,03609124 0,030236501 0,27 1,037037037
Norway 1,019025526 0,002937442 0,95 1,042105263
Pakistan 1,068995505 0,026598749 0,22 1,090909091
Panama 1,001556162 0,038760767 0,69 1,173913043
Paraguay 1,049861415 0,030603983 0,69 1,057971014
Peru 1,06820116 0,008122931 0,41 1,146341463
Philippines 1,045289953 0,035957042 0,28 1,142857143
Poland 1,035431925 0,035915212 0,39 1,435897436
Portugal 1,044901969 0,003371242 0,62 1,225806452
Republic of Korea 1,06776762 0,017697832 0,31 1,806451613
Republic of Moldova 1,009542233 0,033772795 0,55 1,454545455
Romania 1,011030974 0,079875735 0,47 1,319148936
Russian Federation 1,083901796 0,000876184 0,24 1,583333333
Saudi Arabia 1,099133179 0,080054524 0,27 2,666666667
Senegal 1,019171218 0,032304226 0,49 1,102040816
Serbia and Montenegro 1,042141223 0,00377058 0,63 1,26984127
Slovakia 1,062546838 0,08862799 0,61 1,508196721
Slovenia 1,00512965 0,039266211 0,81 1,222222222
South Africa 1,056957556 0,012656394 0,41 1,292682927
Spain 1,017435095 0,002522983 0,4 1,4
Sri Lanka 1,003117252 0,000607856 0,47 1,063829787
Sudan 1,00209188 0,060026529 0,38 1,052631579
Sweden 1,012941105 0,003898173 0,77 1,181818182
Switzerland 1,07331184 0,000878485 0,69 1,275362319
Syrian Arab Republic 1,048889583 0,03494333 0,38 1,552631579
Tajikistan 1,03533923 0,055646586 0,58 1,068965517
Thailand 1,012034765 0,002131649 0,33 1,242424242
The former Yugoslav Republic of Macedonia (!) 1,021262823 0,379532891 0,72 1,319444444
Togo 1,030339186 0,024874996 0,64 1,03125
Trinidad and Tobago 1,086840331 0,014786844 0,69 1,434782609
Tunisia 1,042654904 0,000806403 0,52 1,269230769
Turkey 1,0821418 0,019688124 0,35 1,228571429
Turkmenistan (!) 1,037854925 0,614587094 0,38 2,526315789
Ukraine 1,022041527 0,026351574 0,31 1,741935484
United Kingdom 1,028817158 0,017810219 0,3 1,733333333
United Republic of Tanzania 1,0319973 0,033120507 0,4 1,025
United States of America 1,001298132 0,001300399 0,19 1,578947368
Uruguay 1,025162405 0,027221297 0,73 1,123287671
Uzbekistan 1,105591195 0,008303345 0,36 1,638888889
Venezuela (Bolivarian Republic of) 1,044353155 0,012830255 0,45 1,333333333
Viet Nam 1,005825608 0,003779368 0,28 1,107142857
Yemen 1,072879389 0,058580323 0,3 1,566666667
Zambia 1,045147143 0,038548336 0,58 1,017241379
Zimbabwe 1,030974989 0,008692551 0,57 1,052631579

Source: author’s own research

[1] Charles W. Cobb, Paul H. Douglas, 1928, A Theory of Production, The American Economic Review, Volume 18, Issue 1, Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139 – 165

[2] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at www.ggdc.net/pwt

[3] Current annual use per capita, in tons of oil equivalent

[4] Annual caloric intake in mega-calories (1000 kcal) per capita, averaged over 1990 – 2014.

[5] Current annual use per capita, in tons of oil equivalent

[6] Annual caloric intake in mega-calories (1000 kcal) per capita, averaged over 1990 – 2014.

[7] This is the ratio of two exponents, namely: µ(total energy use per capita) / µ(renewable energy per capita)

Something like a potential to exploit

My editorial

I have become quite accidental in my blogging. I mean, I do not have more accidents than I used to; I am just less regular in posting new content. This is academic life: giving lectures just drains you of energy. Not only do you have to talk to people who mostly assume that what you tell them is utterly useless, but you also have to talk meaningfully enough to prove them wrong. On top of that, I am writing that book, and it additionally taxes my poor brain. Still, I can see a light at the end of the tunnel, and this is not a train coming from the opposite direction. It is probably nothing mystical, either. When I was a kid (shortly after the invention of the wheel, before the fall of the Warsaw Pact), there was a literary form called the ‘novel in short episodes’. People wrote novels, but the socialist economy was constantly short of paper, and short of trust as to its proper use. Expecting to get printed in hard cover could be more hazardous an expectation than alien contact. What did get printed were newspapers and magazines, as the government needed some vehicle for its propaganda. The catch in the scheme was that most people didn’t want to pay for being served propaganda. We were astonishingly pragmatic in this respect, as I think of it now. The way to make people buy newspapers was to put inside something more than propaganda. Here, the printless writers and the contentless newspapers could meet and shake hands. Novels were published in short episodes, carefully inserted on the last page of the newspaper, so that the interested reader would be tempted to browse through the account of the Herculean efforts, on the part of the government, to build a better world whilst fighting the devils from the West.

As for me, I am running that blog at https://discoversocialsciences.com and it is now becoming an endangered species in the absence of new, meaningful content being posted regularly. I mean, when you don’t breed, you become an endangered species. On the other hand, I have that book in process, which might very well become the next bestseller, but it might just as well not. Thus, I shake my blog hand with my book hand, and I have decided to post on my blog the content of the book as it is being written. Every update, from now and for the next five weeks or so, will be an account of my wrestling with my inner writer. I have one tiny little problem to solve, though. Over the last months, I used to blog in English and in French, kind of alternately. Now, I am writing my book in English, and the current account of my writing is, logically, in the beautiful language of Shakespeare and Boris Johnson. I haven’t figured out yet how the hell I am going to insert French into the process. Oh, well, I will make it up as I go. The show must go on, anyway.

And so I start.

(Provisional) Introduction (to my book)

This book is the account of the author’s research concerning technological change, especially in the context of the observable shift towards renewable energies. It is an account of puzzlement, too. As I developed my research on innovation, I remember being intrigued by the discrepancy between the reality of technological change at the firm and business level, on the one hand, and the dominant discourse about innovation at the macroeconomic level, on the other. The latter keeps measuring something called ‘technological progress’, with coefficients taken from the Cobb – Douglas production function, whose creators, Prof Charles W. Cobb and Prof Paul H. Douglas, in their joint work from 1928[1], very strongly emphasized that their model is not really made for measuring changes over time. Technological progress so defined, measured with Total Factor Productivity, has not happened at the global scale since the 1970s. At the same time, technological change and innovation keep happening. The human civilisation has reached a stage where virtually any new business needs to be innovative in order to be interesting for investors. Is it really a change? Haven’t we, humans, always been like that: inventive, curious and bold in exploring new paths? The answer is ambiguous. Yes, we are, and have been, an inventive species. Still, for centuries, innovation used to happen at the fringe of society and only then take over the whole of it. This pattern of innovation could still be found in business practices not so long ago, by the end of the 17th century. Since then, innovation, as a pattern of doing business, has progressively passed from the fringe to the centre stage of socio-economic change. Over the last 300 years or so, as a civilisation, we have passed, and keep passing, from being occasionally innovative to being essentially innovators. The question is: what happened in us?

In the author’s opinion, what happened is, first and foremost, unprecedented demographic growth. According to the best historical knowledge we have, there are more humans on this planet right now than there have ever been. More people being around in an otherwise constant space means, inevitably, more human interaction per unit of time and space, and more interaction means faster learning. This is what technological change and innovation seem to be, in the first place: learning. It is learning by experimentation, where each distinct technology is a distinct experiment. What are we experimenting with? First of all, we keep experimenting with the absorption and transformation of energy. As a species, we are champions at acquiring energy from our environment and transforming it. Secondly, we are experimenting with monetary systems. In the 12th and 13th centuries, we harnessed the power of wind and water, and, as if by accident, the first documented use of bills of exchange dates back precisely to this period. When Europe started being really serious about the use of steam power, and about the extraction of coal, standardized monetary systems, based on serially issued bank notes, made their appearance during the late 18th century. At the end of the 19th century, as natural oil and gas entered the scene, their ascent closely coincided with the final developments in the establishment of corporate structures in business. Once again, as if by accident, those developments consisted very largely in standardizing the financial instruments serving to trade shares in the equity of industrial companies. Presently, as we face the growth of electronics, the first technology ever to grow in complexity at an exponential pace, we can observe both an unprecedented supply of money in official currencies – the velocity of money in the global economy has fallen to V < 1, and it becomes problematic to keep calling it a velocity – and nothing less than an explosion of virtual currencies, based on blockchain technology. Interestingly, each of those historical moments, marked by the emergence of both new technologies and new financial patterns, was associated with new political structures as well. The constitutional state as we know it seems to have grown in big leaps, which, in turn, took place at the same historical moments: the 12th – 13th centuries, the 18th century, the 19th century, and right now, as we face something that looks like a shifting paradigm of public governance.

Thus, historically, it is possible to associate these four streams of phenomena: demographic growth, deep technological changes as regards the absorption and use of energy, new patterns of using financial markets, and new types of political structures. Against this background of long duration, the latest developments are quite interesting, too. In 2007 – 2008, the market of renewable energies displayed – and this seems to be a historical precedent since 1992 – a rate of growth superior to that observable in the final consumption of energy as a whole. Something changed, which triggered a much faster quantitative change in the exploitation of renewables. At exactly the same moment, during the years 2007 – 2008, a few other phenomena coincided with this sudden surge in renewable energies. The supply of money in the global economy exceeded the global gross output, for the first time in recorded statistics. Apparently, for the first time in history, one average monetary unit in the global economy finances less than one unit of gross output per year. On the side of demography, the years 2007 – 2008 marked a historical threshold in urbanisation: the urban population of our planet exceeded, for the first time, 50% of the total human headcount. At the same moment, the average food deficit, i.e. the average deficit of kilocalories per day per capita in our civilisation, started to fall sharply below the long-maintained threshold of 131 kcal, and presently we are at a historical minimum of 88.4 kcal. Those years 2007 – 2008, besides being the moment when the global financial crisis erupted, marked a significant turn in many aspects of our collective, global life.
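To make that last claim precise, it helps to write down the textbook equation of exchange; this is the standard quantity-theory identity, not the author’s own construct, and the notation M, V, P, Y is the usual one:

```latex
% Equation of exchange (quantity theory of money):
%   M - money supply (e.g. broad money), V - velocity of money,
%   P - price level, Y - real output, so P*Y is nominal gross output.
\[
  M V = P Y \quad\Longrightarrow\quad V = \frac{P Y}{M}
\]
% Hence M > P*Y (money supply exceeding gross output) is equivalent
% to V < 1: one average monetary unit finances less than one unit
% of gross output per year.
```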

Thus, there is the secular perspective of change, and the recent breakthrough. As a scientist, I mostly ask two questions, namely ‘how?’ and ‘what happens next?’. I am trying to predict future developments, which is the ultimate purpose of any theory. In order to form a reliable prediction, I do my best to understand the mechanics of the predicted change.

Chapter I (or wherever it lands in the final manuscript): The first puzzlement – energy and population

The first scientific puzzlement addressed in this book refers to the author’s most recent research. The research in question was oriented towards explaining the role of renewable energies in the sustenance of our civilisation, and it was very much inspired by a piece of information the author had read in Fernand Braudel’s masterpiece ‘Civilisation and Capitalism’ (Braudel 1981[2]). According to historical accounts based on the official documents of the Habsburg Empire, in the author’s home region, Lesser Poland, known as Austrian Galicia under Habsburg rule, at the end of the eighteenth century there was one water mill, on average, per 382 people. The author’s home town, Krakow, Poland, sustains a population of 800 000, which would correspond to 2094 water mills. Said water mills are conspicuous by their absence. Since I learnt about this little fact, reading Fernand Braudel’s monumental work in the summer of 2015, I have gradually become quasi-obsessed with the ‘what if?’ question: what if today we had those 2094 water mills in my home city? What would our life look like? How different would it be from the world we are actually living in? This gentle obsession crystallized into a general theoretical question: can renewable energies sustain the present human population? This generality found a spur in the statistics pertaining to renewable energies. In 2007 – 2008, the rate of growth in the market of renewable energies changed, and became higher than the rate of growth in the overall, final consumption of energy. This change in trends is observable in the data published by the World Bank regarding the consumption of energy per capita (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE ), and the share of renewable energies in that overall consumption (https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS ). This change of slope was unprecedented since 1990. In 2007 – 2008, something important happened, and still, to the author’s knowledge, there is no research explaining what that something could possibly have been. Some kind of threshold was overcome in the absorption of technologies connected to renewable energies.
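For readers who want to see that change of slope for themselves, here is a minimal sketch in Python. It assumes the two World Bank series linked above have been exported and merged into a single CSV with columns year, energy_pc and renew_share – the file name and column names are illustrative assumptions, not a standard format:

```python
import pandas as pd

# World Bank series EG.USE.PCAP.KG.OE (energy use per capita, kg of oil
# equivalent) and EG.FEC.RNEW.ZS (renewables as % of final consumption),
# assumed pre-merged into one CSV: year, energy_pc, renew_share
df = pd.read_csv("energy_series.csv").sort_values("year").reset_index(drop=True)

# Approximate renewable consumption per capita as share * total consumption
df["renew_pc"] = df["renew_share"] / 100.0 * df["energy_pc"]

# Year-over-year growth rates of both series
df["growth_total"] = df["energy_pc"].pct_change()
df["growth_renew"] = df["renew_pc"].pct_change()

# Years in which renewables grew faster than final consumption as a whole
faster = df.loc[df["growth_renew"] > df["growth_total"],
                ["year", "growth_renew", "growth_total"]]
print(faster)
```

If the claim above holds, the years from 2007 – 2008 onwards should dominate the printout, with renewables persistently outgrowing the total.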

As the author connected those two dots – the historical facts and the recent ones – the theoretical coin started dropping. If we want to understand the importance of renewable energies in our civilisation, we need to understand how renewable energies can sustain local populations. That general intuition connected with the theoretical contribution of the so-called ‘new economic geography’. In 1998, Paul Krugman referred to models which allow construing the spatial structures of the economy as general equilibriums (Krugman 1998[3]). Earlier work by Paul Krugman, dating from 1991 (Krugman 1991[4]), supplied a first coherent theoretical vehicle for the author’s own investigation. The role of renewable energies in any local, human community can be expressed as the aggregate utility derived from said energies. Further reflection led to a simple observation: the most fundamental utility we derive from any form of energy is the simple fact of us being around. The aggregate amount of utility that renewable energies can possibly create is the sustenance of a given headcount in population. In this reasoning, a subtle tension appeared, namely between ‘any form of energy’ and ‘renewable energies’. An equation started to form in the author’s mind. On the left side, the size of the population, thus the most fundamental, aggregate utility that any resource can provide. On the right side, the general construct to follow was the one suggested by Paul Krugman, which deserves some explanation at this point. We divide the whole plethora of human activity, as well as of available resources, into two factors: the principal, differentiating one, and the secondary one, which is being differentiated across space. When we have a human population differentiated into countries, the differentiating factor is the political structure of a country, and the differentiated one is all the rest of human activity. When we walk along a busy commercial street, the factor that creates observable differentiation in space is the institutional separation between distinct businesses, whilst labour, capital, and the available urban space are the differentiated ones. In the original model by Paul Krugman, the final demand for manufactured goods – or rather the spatial pattern of said demand – is the differentiating factor, which sets the geographical frame for the development of agriculture. The fundamental mathematical construct to support this reasoning is as in equation (1):

  • (1)         U = A * F1^µ * F2^(1-µ),        µ < 1

…where ‘U’ stands for the aggregate utility derived from whatever pair of factors F1 and F2 we choose, whilst ‘A’ is the scale factor, or the proportion between aggregate utility, on the one hand, and the product of input factors, on the other hand. This mathematical structure rests on foundations laid 63 years earlier, in the seminal work by Prof Charles W. Cobb and Prof Paul H. Douglas (Cobb, Douglas 1928[5]), which generations of economists have learnt as the Cobb-Douglas production function, and which sheds some foundational light on the author’s own intellectual path in this book. When Charles Cobb and Paul Douglas presented their model, the economic discourse of the time revolved very much around the distinction between nominal economic change and the real one. The beginning of the 20th century, besides being the theatre of World War I, was also a period of truly booming industrial markets, accompanied by significant changes in prices. The market value of any given aggregate of economic goods could swing really wildly, whilst its real value, in terms of utility, remained fairly constant. The intuition behind the research by Charles Cobb and Paul Douglas was precisely to find a way of deriving some kind of equilibrium product, at the macroeconomic scale, out of the observable changes in industrial investment and in the labour market. This general intuition leads to seeking such a balance in this type of equation, one that yields a scale factor slightly above 1. In other words, the product of the input factors, proportioned in the recipe with the help of exponents construed as, respectively, µ < 1 and 1-µ, should yield an aggregate utility slightly higher than the actual one, something like a potential to exploit. In the original function presented by Cobb and Douglas, the scale factor A was equal to 1.01.
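For the record, the standard way of bringing such a function to data – not specific to this book, but useful to keep in mind for what follows – is to take logarithms of both sides, which turns the multiplicative form into a linear one:

```latex
% Log-linearisation of U = A * F1^mu * F2^(1-mu):
\[
  \ln U = \ln A + \mu \ln F_1 + (1-\mu)\ln F_2
\]
% ln(A) becomes the intercept, and mu the slope coefficient,
% of an ordinary least-squares regression on the logged series.
```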

Investigating the role of renewable energies in the sustenance of human populations led the author to experiment with various input variables on the right side of the equation, so as to have the consumption of renewable energies as input no. 1, and something else (we are coming to it) as input no. 2. The exploratory challenge was, firstly, to find the right variables, and then the right exponents to raise them to, in order to obtain a scale factor A slightly above one. The basic path of thinking was that we absorb energy from the environment in two essential forms: food, and everything else, which, whilst non-edible, remains useful. Thus, it has been assumed that any human community derives an aggregate utility, in the form of its own headcount, subsequently represented as ‘N’, out of the use ‘E’ of non-edible energies (e.g. fuel burnt in vehicles or electricity used in household appliances), and out of the absorption of food, further symbolized as ‘F’.

Thus, we have two consumables – energy and food – and one of the theoretical choices to make is to assign them the exponents µ < 1 and 1-µ. According to the fundamental intuitions of Paul Krugman’s model from 1991, there are two paths to follow in order to find the dominant factor in the equation, i.e. the differentiating one, endowed with the exponent µ < 1. The first path is the actual, observable change. Paul Krugman suggested that the factor whose amount of input changes faster than the other one is the differentiator, whilst the one displaying a slower pace of change is the differentiated one. The second path pertains to the internal substitution between various goods (sub-inputs) inside each of the two big input factors. The new economic geography suggests that the capacity of industrial facilities to shape the spatial structure of human settlements comes, to a great extent, from the fact that manufactured goods have, between them, a much neater set of uses and mutual substitution rates than agricultural goods. Both of these road signs pointed at the use of non-edible energies as the main, differentiating factor. Non-edible energies are used through technologies, and these have clearly cut frontiers between them. A gasoline-based combustion engine is something different from a diesel, which, in turn, is fundamentally different from a power plant. The output of one technology can be substituted, to some extent, for the output of another technology, at a relatively predictable rate of substitution. In comparison, foodstuffs have much foggier borderlines between them. Rice is rice, and is part of risotto, as well as of rice cakes, rice pasta etc., and, at the same time, you can feed your chicken with rice, and thus turn the alimentary value of rice into the alimentary value of meat. This intricate scheme of foods combining with each other is made even more complicated by idiosyncratic culinary cultures. One pound of herring trades against one pound of pork meat differently in Alaska and in Lebanon. As for the rate of change, technologies of producing food seem to change at a slower pace than technologies connected to the generation of electricity, or those embodied in combustion engines.

Thus, both paths suggested in the geographic model by Paul Krugman pointed at non-edible energies as the factor to be endowed with the dominant exponent µ < 1, leaving the intake of food with the residual exponent 1-µ. Hence, the next step of research consisted in testing empirically equation (2):

  • (2)         N = A * E^µ * F^(1-µ),        µ < 1; A > 1

At this point, the theoretical model had to detach itself slightly from its Cobb-Douglas-Krugman roots. People cluster around abundance and avoid scarcity. These, in turn, can be understood in two different ways: as the absolute amount of something, like lots of food, or as the amount of something per person. That distinction is particularly important as we consider established human settlements with lots of history under their belt. Whilst early colonists in a virgin territory can be attracted by the perceived, absolute amount of available resources, their distant descendants will care much more about the availability of those resources to particular members of the established community, thus about the amount of resources per inhabitant. This principle pertains to food as well as to non-edible energies. In their early days of exploration, entrepreneurs in the oil & gas industry went wherever they could find oil and gas. As the industry matured, the daily yield from a given exploitation, measured in barrels of oil or in cubic meters of gas, became more important. This reasoning leads to assuming that the quantities of input on the right side of equation (2) are actually intensities per capita in, respectively, energy use and the absorption of food, rather than their absolute volumes. Thus, a mutation of equation (2) is posited, as equation (3):

(3)                        N = A * (E/N)^µ * (F/N)^(1-µ),          µ < 1; A > 1
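A minimal sketch of how equation (3) can be confronted with data, assuming a yearly table with population N, non-edible energy use E, and food intake F; the file name and column names below are illustrative assumptions, not the author’s actual dataset. Taking logarithms and imposing the constraint that the two exponents sum to 1 reduces the estimation to a simple linear regression:

```python
import numpy as np
import pandas as pd

# Illustrative dataset: one row per year, with columns
# N (population headcount), E (non-edible energy consumed),
# F (food energy absorbed)
df = pd.read_csv("population_energy_food.csv")

# Per-capita intensities, as on the right side of equation (3)
e = np.log(df["E"] / df["N"])   # ln(E/N)
f = np.log(df["F"] / df["N"])   # ln(F/N)
n = np.log(df["N"])             # ln(N)

# ln N = ln A + mu*ln(E/N) + (1-mu)*ln(F/N); imposing that the
# exponents sum to 1 and rearranging gives a one-variable regression:
#   ln N - ln(F/N) = ln A + mu * [ln(E/N) - ln(F/N)]
y = n - f
x = e - f

mu, lnA = np.polyfit(x, y, 1)   # slope = mu, intercept = ln(A)
A = np.exp(lnA)

print(f"mu = {mu:.3f}, A = {A:.3f}")
# A 'good' configuration, in the sense of the book: mu < 1,
# with A landing slightly above 1
```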

[1] Cobb, C. W., Douglas, P. H., 1928, A Theory of Production, The American Economic Review, Volume 18, Issue 1, Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139 – 165

[2] Braudel, F., 1981, Civilization and Capitalism, Vol. I: The Structures of Everyday Life, rev. ed., English translation, William Collins Sons & Co, London, and Harper & Row, New York, ISBN 0-00-216303-9, pp. 341 – 358

[3] Krugman, P., 1998, What’s New About The New Economic Geography?, Oxford Review of Economic Policy, vol. 14, no. 2, pp. 7 – 17

[4] Krugman, P., 1991, Increasing Returns and Economic Geography, The Journal of Political Economy, Volume 99, Issue 3 (June 1991), pp. 483 – 499

[5] Cobb, C. W., Douglas, P. H., 1928, A Theory of Production, The American Economic Review, Volume 18, Issue 1, Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139 – 165