The point of doing manually what the loop is supposed to do

My editorial on YouTube

OK, here is the big picture. The highest demographic growth, in absolute numbers, takes place in Asia and Africa. The biggest migratory flows start from there, as well, and aim at regions with much slower demographic accrual: North America and Europe. Less human accrual, indeed, and yet much better conditions for each new Homo sapiens. In some places on the planet, a huge number of humans is born every year. That huge number means a huge number of genetic variations around the same genetic tune, namely that of Homo sapiens. Those genetic variations leave their homeland for a new and better one, where they bring their genes into a new social environment, which assures them much more safety, and higher odds of prolonging their genetic line.

What is the point of there being more specimens of any species? I mean, is there a logic to increasing the headcount of any population? When I say ‘any’, it ranges from bacteria to us, humans. After having meddled with the most basic algorithm of a neural network (see « Pardon my French, but the thing is really intelligent » and « Ce petit train-train des petits signaux locaux d’inquiétude »), I have some thoughts about what intelligence is. I think that intelligence is a class, i.e. it is a framework structure able to produce many local, alternative instances of itself.

Being intelligent consists, to start with, in creating alternative versions of oneself, and creating them purposefully imperfect so as to generate small local errors, whilst using those errors to create still different versions of oneself. The process is tricky. There is some sort of fundamental coherence required between the way of creating those alternative instances of oneself, and the way that resulting errors are being processed. In the absence of such coherence, the allegedly intelligent structure can fall into purposeful ignorance, or into panic.

Purposeful ignorance manifests as the incapacity to signal and process the local imperfections in alternative instances of the intelligent structure, although those imperfections actually stand out and wave at you. This is the ‘everything is just fine and there is no way it could be another way’ behavioural pattern. It happens, for example, when the function of processing local errors is too gross – or not sharp enough, if you want – to actually extract meaning from tiny, still observable local errors. The panic mode of an intelligent structure, on the other hand, is that situation when the error-processing function is too sharp for the actually observable errors. Them errors just knock it out of balance, like completely, and the function signals general ‘Error’, or ‘I can’t stand this cognitive dissonance’.

So, what is the point of there being more specimens of any species? The point might be to generate as many specific instances of an intelligent structure – the specific DNA – as possible, so as to generate purposeful (and still largely unpredictable) errors, just to feed those errors into the future instantiations of that structure. In the process of breeding, some path of evolutionary coherence leads to errors that can be handled, and that path unfolds between a state of evolutionary ‘everything is OK, no need to change anything’ (case mosquito, unchanged for millions of years), and a state of evolutionary ‘what the f**k!?’ (case common fruit fly, which produces an insane amount of mutations in response to the slightest environmental stressor).

Essentially, all life could be a framework structure, which, back in the day, made a piece of software in artificial intelligence – the genetic code – and ever since, that piece of software has been working on minimizing the MSE (mean square error) in predicting the next best version of life, and it has been working by breeding, in a tree-like method of generating variations, indefinitely many instances of the framework structure of life. Question: what happens when, one day, a perfect form of life emerges? Something like T-Rex – Megalodon – Angelina Jolie – Albert Einstein – Jeff Bezos – [put whatever or whoever you like in the rest of that string]? On the grounds of what I have already learnt about artificial intelligence, such a state of perfection would mean the end of experimentation, thus the end of multiplying instances of the intelligent structure, thus the end of births and deaths, thus the end of life.

Question: if the above is even remotely true, does that overarching structure of life understand how the software it made – the genetic code – works? Not necessarily. That very basic algorithm of a neural network, which I have experimented with a bit, produces local instances of the sigmoid function Ω = 1/(1 + e^(-x)) equal to 1, although 1 + e^(-x) > 1 is always true, which should keep Ω below 1. Still, the thing does it just sometimes. Why? How? Go figure. That thing accomplishes an apparently absurd task, and it does so just by being sufficiently flexible with its random coefficients. If Life In General is God, that God might not have a clue about how actual life works. God just needs to know how to write an algorithm for making actual life work. I would even say more: if God is any good at being one, he would write an algorithm smarter than himself, just to make things advance.

The hypothesis of life being one, big, intelligent structure gives an interesting insight into what the cost of experimentation is. Each instance of life, i.e. each specimen of each species, needs energy to sustain it. That energy takes many forms: light, warmth, food, Lexus (a form of matter), parties, Armani (another peculiar form of matter) etc. The more instances of life there are, the more energy they need to be there. Even if we take the Armani particle out of the equation, life is still bloody energy-consuming. The available amount of energy puts a limit on the number of experimental instances of that framework, structural life that the platform (Earth) can handle.

Here comes another one about climate change. Climate change means warmer, let’s be honest. Warmer means more energy on the planet. Yes, temperature is our human measurement scale for the aggregate kinetic energy of vibrating particles. More energy is what we need to have more instances of framework life at the same time. Logically, incremental change in total energy on the planet translates into incremental change in the capacity of framework life to experiment with itself. Still, as framework life could be just the God who made that software for artificial intelligence (yes, I am still in the same metaphor), said framework life could not be quite aware of how bumpy the road could be towards the desired minimum of the mean square error. If God is an IT engineer, it could very well be the case.

I had that conversation with my son, who is finishing his IT engineering studies. I told him ‘See, I took that algorithm of a neural network, and I just wrote its iterations out into separate tables of values in Excel, just to see what it does, like iteration after iteration. Interesting, isn’t it? I bet you have done such a thing many times, eh?’. I still remember that heavy look in my son’s eyes: ‘Why the hell should I ever do that?’ he went. ‘There is a logical loop in that algorithm, you see? This loop is supposed to do the job, I mean to iterate until it comes up with something really useful. What is the point of doing manually what the loop is supposed to do for you? It is like hiring a gardener and then doing everything in the garden by yourself, just to see how it goes. It doesn’t make sense!’. ‘But it’s interesting to observe, isn’t it?’ I went, and then I realized I was talking to an alien form of intelligence, there.

Anyway, if God is a framework life that created some software to do the learning for it, it might not be quite aware of the tiny little difficulties in the unfolding of the Big Plan. I mean acidification of oceans, hurricanes and stuff. The framework life could say: ‘Who cares? I want more learning in my algorithm, and it needs more energy to loop on itself, and so it makes those instances of me, pumping more carbon into the atmosphere, so as to have more energy to sustain more instances of me. Stands to reason, man. It is all working smoothly. I don’t understand what you are moaning about’.

Whatever that godly framework life says, I am still interested in studying particular instances of what happens. One of them is my business concept of EneFin. See « Which salesman am I? » for what I think is the latest case of me being fully verbal about it. Long story short, the idea consists in crowdfunding capital for small, local operators of power systems based on renewable energies, by selling shares in equity, or units of corporate debt, in bundles with tradable claims on either the present output of energy, or the future one. In simple terms, you buy from that supplier of energy tradable claims on, for example, 2 000 kWh, and you pay the regular market price; still, inside that price, you buy the energy strictly speaking with a juicy discount. The rest of the actual amount of money you have paid buys you shares in your supplier’s equity.

The idea in that simplest form is largely based on two simple observations about the energy bills we pay. In most countries (at least in Europe), our energy bills are made of two components: the (slightly) variable value of the energy actually supplied, and a fixed part labelled sometimes as ‘maintenance of the grid’ or similar. Besides, small users (e.g. households) usually pay a much higher unitary price per kWh than large, institutional-scale buyers (factories, office buildings etc.). In my EneFin concept, a local supplier of renewable energy makes a deal with its local customers to sell them electricity at a fair, market price, with participation in equity on top of the electricity.
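
Just to make the arithmetic of that bundle visible, here is a minimal sketch in Python; the prices per kWh are purely hypothetical assumptions of mine, not actual market data:

# hypothetical numbers, just to illustrate how one EneFin contract splits
kwh            = 2000                        # the tradable claim on energy
retail_price   = 0.25                        # EUR per kWh - what a small user normally pays (assumed)
discount_price = 0.15                        # EUR per kWh - the discounted, quasi-wholesale rate (assumed)

total_paid  = kwh * retail_price             # 500 EUR: the regular market price of the bundle
energy_part = kwh * discount_price           # 300 EUR: the energy strictly speaking, with the juicy discount
equity_part = total_paid - energy_part       # 200 EUR: what buys shares in the supplier's equity

print(total_paid, energy_part, equity_part)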

That would be a classical crowdfunding scheme, such as you can find with StartEngine, for example. I want to give it some additional, financial spin. Classical crowdfunding has a weakness: low liquidity. The participatory shares you buy via crowdfunding are usually non-tradable, and they create a quasi-cooperative bond between investors and investees. Where I come from, i.e. in Central Europe, we are quite familiar with cooperatives. At first sight, they look like a form of institutional heaven, compared to those big, ugly, capitalistic corporations. Still, after you have waved away that first mist, cooperatives turn out to be very exposed to embezzlement, and to abuse of managerial power. Besides, they are quite weak when competing for capital against corporate structures. I want to create a highly liquid transactional platform, with those investments being as tradable as possible, and use financial liquidity both as a shield against managerial excesses, and as a competitive edge for those small ventures.

My idea is to assure liquidity via a FinTech solution similar to that used by Katipult Technology Corp., i.e. to create some kind of virtual currency (note: virtual currency is not absolutely the same as cryptocurrency; cousins, but not twins, so to say). Units of that currency would correspond to those complex contracts « energy plus equity ». First, you create an account with EneFin, i.e. you buy a certain amount of the virtual currency used inside the EneFin platform. I call them ‘tokens’ to simplify. Next, you pick your complex contracts, in the basket of those offered by local providers of energy. You buy those contracts with the tokens you have already acquired. Now, you change your mind. You want to withdraw your capital from supplier A, and move it to supplier H, which you haven’t considered so far. You move your tokens from A to H, even with a mobile app. It means that the transactional platform – the EneFin one – buys from you the corresponding amount of equity of A and tries to find for you some available equity in H. You can also move your tokens completely out of investment in those suppliers of energy. You can free your money, so to say. It is just as simple: you move them out, even with a movement of your thumb on the screen. The EneFin platform buys from you the shares you have moved out of.

You have an even different idea. Instead of investing your tokens into the equity of a provider of energy, you want to lend them. You move your tokens to the field ‘lending’, you study the interest rates offered on the transactional platform, and you close the deal. Now, the corresponding number of tokens represents securitized (thus tradable) corporate debt.

Question: why the hell bother with a virtual currency, possibly a cryptocurrency, instead of just using good old fiat money? At this point, I am reaching back to the very roots of Bitcoin, the grandpa of all cryptocurrencies (or so they say). Question: what amount of money do you need to finance 20 transactions of equal unitary value P? Answer: it depends on how frequently you monetize them. Imagine that the EneFin app offers you an option like ‘Monetize vs. Don’t Monetize’. As long as – with each transaction you do on the platform – you stick to the ‘Don’t Monetize’ option, your transactions remain recorded inside the transactional platform, and so there is recorded movement in tokens, but there is no monetary outcome, i.e. your monetary balance strictly speaking, for example that in €, does not change. It is only when you hit the ‘Monetize’ button in the app that the current bottom line of your transactions inside the platform is converted into « official » money.
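
A tiny numerical sketch of that ‘Monetize vs. Don’t Monetize’ logic, with hypothetical numbers of mine (the perfectly alternating pattern of buys and sells is an assumption, chosen just to show the mechanism):

# 20 transactions of equal unitary value P, recorded in tokens inside the platform
P = 1000.0                                   # EUR per transaction (assumed)
trades = [+P, -P] * 10                       # buys and sells alternate: 20 moves in total

money_if_each_trade_is_monetized = sum(abs(t) for t in trades)   # 20 000 EUR of monetary flow
money_if_monetized_once_at_the_end = abs(sum(trades))            # only the netted balance leaves the platform

print(money_if_each_trade_is_monetized)      # 20000.0
print(money_if_monetized_once_at_the_end)    # 0.0 in this perfectly alternating case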

The virtual currency in the EneFin scheme would serve to allow a high level of liquidity (more transactions in a unit of time), without provoking the exactly corresponding demand for money. What connection with artificial intelligence? I want to study the possible absorption of such a scheme in the market of energy, and in the related financial markets, as a manifestation of collective intelligence. I imagine two intelligent framework structures: one incumbent (the existing markets) and one emerging (the EneFin platform). Both are intelligent structures to the extent that they technically can produce many alternative instances of themselves, and thus intelligently adapt to their environment by testing those instances and utilising the recorded local errors.

In terms of an algorithm of a neural network, that intelligent adaptation can manifest itself, for example, as an optimization of two coefficients: the share of energy traded via EneFin in the total energy supplied in the given market, and the capitalization of EneFin as a share in the total capitalization of the corresponding financial markets. Those two coefficients can be equated to weights in a classical MLP (Multilayer Perceptron) network, and the perceptron could work around them. Of course, the issue can be approached from a classical methodological angle, as a general equilibrium to assess via « normal » econometric modelling. Still, what I want is precisely what I hinted at in « Pardon my French, but the thing is really intelligent » and « Ce petit train-train des petits signaux locaux d’inquiétude »: I want to study the very process of adaptation and modification in those intelligent framework structures. I want to know, for example, how much experimentation those structures need to form something really workable, i.e. an EneFin platform with serious business going on, and, at the same time, that business contributing to the development of renewable energies in the given region of the world. Do those framework structures have enough local resources – mostly capital – for sustaining the number of alternative instances needed for effective learning? What kind of factors can block learning, i.e. drive the framework structure either into a deliberate ignorance of local errors, or into panic?

Here is an example of more exact a theoretical issue. In a typical economic model, things are connected. When I pull on the string ‘capital invested in fixed assets’, I can see a valve open, with ‘Lifecycle in incumbent technologies’, and some steam rushes out. When I push the ‘investment in new production capacity’ button, I can see something happening in the ‘Jobs and employment’ department. In other words, variables present in economic systems mutually constrain each other. Just some combinations work, others just don’t. Now, the thing I have already discovered about them Multilayer Perceptrons is that as soon as I add some constraint on the weights assigned to input data, for example when I swap ‘random’ for ‘erandom’, the scope of possible structural functions leading to effective learning dramatically shrinks, and the likelihood of my neural network falling into deliberate ignorance or into panic just swells like hell. What degree of constraint on those economic variables is tolerable in the economic system conceived as a neural network, thus as a framework intelligent structure?
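
Here is a minimal sketch of what such a constraint experiment could look like in Python. It is not the full MLP discussed above, just a single sigmoid neuron trained by gradient descent, once with free weights and once with weights clipped into a narrow band after each update; the band [0, 0.1], the toy data and the learning rate are all my assumptions. The constrained run typically ends with a visibly higher error, which is the ‘deliberate ignorance’ flavour of failure:

import numpy as np

rng = np.random.default_rng(42)

# toy data: 20 observations, 3 input variables, target generated with one negative weight
X = rng.random((20, 3))
z = 0.8*X[:, 0] - 0.5*X[:, 1] + 0.3*X[:, 2]
y = (1.0 / (1.0 + np.exp(-z))).reshape(-1, 1)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train(constrain=False, epochs=500, lr=0.5):
    w = rng.random((3, 1))
    for _ in range(epochs):
        out = sigmoid(X @ w)
        grad = X.T @ ((out - y) * out * (1.0 - out))   # gradient of the squared error
        w -= lr * grad
        if constrain:
            w = np.clip(w, 0.0, 0.1)                   # the constraint on weights
    return float(np.mean((y - sigmoid(X @ w))**2))

print('free weights, final error:       ', train(constrain=False))
print('constrained weights, final error:', train(constrain=True))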

There are some general guidelines I can see for building a neural network that simulates those things. Creating local power systems, based on microgrids connected to one or more local sources of renewable energies, can be greatly enhanced with efficient financing schemes. The publicly disclosed financial results of companies operating in those segments – such as Tesla[1], Vivint Solar[2], FirstSolar[3], or 8Point3 Energy Partners[4] – suggest that business models in that domain are only emerging, and are far from being battle-tested. There is still a long road to pave towards well-rounded business practices for such local power systems, both economically profitable and socially sustainable.

The basic assumptions of a neural network in that field are essentially behavioural. Firstly, consumption of energy is highly predictable at the level of individual users. The size of a market in energy changes as the number of users changes. The output of energy needed to satisfy those users’ needs, and the corresponding capacity to install, are largely predictable in the long run. Consumers of energy use a basket of energy-consuming technologies. The structure of this basket determines their overall consumption, and is determined, in turn, by long-run social behaviour. Changes over time in that behaviour can be represented as a social game, where consecutive moves consist in purchasing, or disposing of, a given technology. Thus, a game-like process of relatively slow social change generates a relatively predictable output of energy, and a demand thereof. Secondly, the behaviour of investors in any financial market, crowdfunding or other, is comparatively more volatile. Investment decisions are taken, and modified, at a much faster pace than decisions about the basket of technologies used in everyday life.

The financing of relatively small, local power systems, based on renewable energies and connected by local microgrids, implies an interplay of the two above-mentioned patterns, namely the relatively slower transformation in the technological base, and the quicker, more volatile modification of investors’ behaviour in financial markets.
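
Just to make that contrast of paces tangible, here is a toy simulation in Python; all the probabilities and volatilities below are assumptions of mine, not calibrated to any data, and the point is only to show a slow, game-like drift in energy demand against a fast, noisy reallocation of capital:

import numpy as np

rng = np.random.default_rng(7)
months = 120

# slow game: the basket of energy-using technologies changes rarely
demand = [100.0]
for m in range(1, months):
    move = rng.choice([-1.0, 0.0, 1.0], p=[0.04, 0.92, 0.04])   # a rare purchase or disposal of a technology
    demand.append(demand[-1] + move)

# fast game: investors reallocate capital every month, with much more noise
capital = [100.0]
for m in range(1, months):
    capital.append(capital[-1] * (1.0 + rng.normal(0.0, 0.05)))

print('month-to-month volatility of energy demand :', round(float(np.std(np.diff(demand))), 3))
print('month-to-month volatility of invested capital:', round(float(np.std(np.diff(capital))), 3))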

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you can suggest me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] http://ir.tesla.com/, last accessed December 18th, 2018

[2] https://investors.vivintsolar.com/company/investors/investors-overview/default.aspx, last accessed December 18th, 2018

[3] http://investor.firstsolar.com/, last accessed December 18th, 2018

[4] http://ir.8point3energypartners.com/, last accessed December 18th, 2018

Pardon my French, but the thing is really intelligent

My editorial on YouTube

And so I am meddling with neural networks. It had to come. It just had to. I started with me having many ideas to develop at once. Routine stuff with me. Then, the Editor-in-Chief of the ‘Energy Economics’ journal returned the manuscript of my article on the energy efficiency of national economies, which I had submitted to them, with a general remark that I should work both on the clarity of my hypotheses, and on the scientific spin of my empirical research. In short, Mr Wasniewski, linear models tested with Ordinary Least Squares are a bit oldie, if you catch my drift. Bloody right, Mr Editor-In-Chief. Basically, I agree with your remarks. I need to move out of my cavern, towards the light of progress, and get acquainted with the latest fashion. The latest fashion we are wearing this season is artificial intelligence, machine learning, and neural networks.

It comes in handy, to the extent that I obsessively meddle with the issue of collective intelligence, and am dreaming about creating a model of human social structure acting as collective intelligence, sort of a beehive. Whilst the casting for a queen in that hive remains open, and is likely to stay this way for a while, I am digging into the very basics of neural networks. I am looking in the Python department, as I have already got a bit familiar with that environment. I found an article by James Loy, entitled “How to build your own Neural Network from scratch in Python”. The article looks a bit like sourcing from another one, available at the website of ProBytes Software, thus I use both to develop my understanding. I pasted the whole algorithm by James Loy into my Python shell, made it run with an ‘enter’, and I am waiting for what it is going to produce. In the meantime, I am being verbal about my understanding.

The author declares he wants to do more or less the same thing as I do, namely to understand neural networks. He constructs a simple algorithm for a neural network. It starts with defining the neural network as a class, i.e. as a callable object that acts as a factory for new instances of itself. In the neural network defined as a class, that algorithm starts with calling the constructor function ‘__init__’, which constructs an instance ‘self’ of that class. It goes like ‘def __init__(self, x, y):’. In other words, the class ‘NeuralNetwork’ generates instances ‘self’ of itself, and each instance is essentially made of two variables: input x, and output y. The ‘x’ is declared as the input variable through the ‘self.input = x’ expression. Then, the output of the network is defined in two steps. Yes, the ‘y’ is generally the output, only in a neural network, we want the network to predict a value of ‘y’, thus some kind of y^. What we have to do is to define ‘self.y = y’, feed the real x-s and the real y-s into the network, and expect the latter to turn out some y^-s.

Logically, we need to prepare a vessel for holding the y^-s. The vessel is defined as ‘self.output = np.zeros(y.shape)’. The ‘shape’ attribute gives a tuple – a table, for those mildly fond of maths – with the dimensions of the array. What are the dimensions of ‘y’ in that ‘y.shape’? They simply mirror the dimensions of the empirical ‘y’ we feed into the network. The weights, in turn, are defined just before. It goes as follows. Right after the ‘self.input = x’ has been said, ‘self.weights1 = np.random.rand(self.input.shape[1],4)’ fires off, closely followed by ‘self.weights2 = np.random.rand(4,1)’. All in all, the entire class ‘NeuralNetwork’ is defined in the following form:

import numpy as np

class NeuralNetwork:

    def __init__(self, x, y):
        self.input    = x                                        # the empirical x-s
        self.weights1 = np.random.rand(self.input.shape[1], 4)   # random weights: input -> hidden layer
        self.weights2 = np.random.rand(4, 1)                     # random weights: hidden layer -> output
        self.y        = y                                        # the empirical y-s
        self.output   = np.zeros(self.y.shape)                   # vessel for the predicted y^-s

The ‘output’ vessel of each instance in that neural network is a table with the same dimensions as the actual ‘y’ (I hope I got it correctly). Initially, it is filled with zeros, so as to make room for something more meaningful. The predicted y^-s are supposed to jump into those empty sockets, held ready by the zeros. The ‘random.rand’ expression, associated with ‘weights’, means that the network is supposed to assign randomly different levels of importance to the different x-s fed into it.

Anyway, the next step is to instruct my snake (i.e. Python) what to do next, with that class ‘NeuralNetwork’. It is supposed to do two things: feed data forward, i.e. make those neurons work on predicting the y^-s, and then check itself by an operation called backpropagation of errors. The latter consists in comparing the predicted y^-s with the real y-s, measuring the discrepancy as a loss of information, updating the initial random weights with conclusions from that measurement, and doing it all again, and again, and again, until the error runs down to very low values. The weights applied by the network in order to generate that lowest possible error are the best the network can do in terms of learning.

The feeding forward of predicted y^-s goes on in two steps, or in two layers of neurons, one hidden, and one final. They are defined as:

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))     # hidden layer
        self.output = sigmoid(np.dot(self.layer1, self.weights2))    # output layer: the predicted y^-s

The ‘sigmoid’ part means the sigmoid function, AKA the logistic function, expressed as y = 1/(1 + e^(-x)), where, at the end of the day, the y always falls somewhere between 0 and 1, and the ‘x’ is not really the empirical, real ‘x’, but the ‘x’ multiplied by a weight, ranging between 0 and 1 as well. The sigmoid function is good for testing the weights we apply to various types of input x-es. Whatever kind of data you take: populations measured in millions, or consumption of energy per capita, measured in kilograms of oil equivalent, the basic sigmoid function y = 1/(1 + e^(-x)) will always yield a value between 0 and 1. This function essentially normalizes any data.

Now, I want to take differentiated data, like population as headcount, energy consumption in them kilograms of whatever oil equals to, and the supply of money in standardized US dollars. Quite a mix of units and scales of measurement. I label those three as, respectively, x_a, x_b, and x_c. I assign them weights ranging between 0 and 1, so that the sum of weights never exceeds 1. In plain language it means that for every vector of observations made of x_a, x_b, and x_c I take a pinchful of x_a, then a zest of x_b, and a spoon of x_c. I make them into x = w_a*x_a + w_b*x_b + w_c*x_c, I give it a minus sign and put it as an exponent for Euler’s constant.

That yields y = 1/(1 + e^-(w_a*x_a + w_b*x_b + w_c*x_c)). Long, but meaningful to the extent that now, my y is always to be found somewhere between 0 and 1, and I can experiment with various weights for my various shades of x, and look at what it gives in terms of y.

In the algorithm above, the ‘np.dot’ function conveys the idea of weighing our x-s. With two one-dimensional arrays, like the input signal ‘x’ and its weights ‘w’, the ‘np.dot’ function yields their dot product, i.e. each x multiplied by its weight and everything summed up, exactly in the x = w_a*x_a + w_b*x_b + w_c*x_c drift.
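
A quick numerical check of that logic, with made-up values of mine for the three x-s (population in millions, energy per capita, supply of money), just to see the ‘np.dot’ thing and the sigmoid at work:

import numpy as np

x = np.array([38.0, 2490.0, 101.0])      # hypothetical x_a, x_b, x_c on wildly different scales
w = np.array([0.2, 0.5, 0.3])            # weights, summing up to 1

manual = w[0]*x[0] + w[1]*x[1] + w[2]*x[2]
dotted = np.dot(w, x)                    # the same weighted sum, done by numpy

y = 1.0 / (1.0 + np.exp(-dotted))
print(manual, dotted, y)
# y stays between 0 and 1 whatever the scale of x; with raw values this big it saturates
# very close to 1 - exactly what shows up later in the hidden layer of the network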

Thus, the first really smart layer of the network, the hidden one, takes the empirical x-s, weighs them with random weights, and makes a sigmoid of that. The next layer, the output one, takes the sigmoid-calculated values from the hidden layer, and applies the same operation to them.

One more remark about the sigmoid. You can put something else instead of 1 in the numerator. Then, the sigmoid will yield your data normalized over that something. If you have a process that tends towards a level of saturation, e.g. the number of psilocybin parties per month, you can put that level in the numerator. On top of that, you can add parameters to the denominator. In other words, you can replace the 1 + e^(-x) with b + e^(-k*x), where b and k can be whatever seems to make sense for you. With that specific spin, the sigmoid is good for simulating anything that tends towards saturation over time. Depending on the parameters in the denominator, the shape of the corresponding curve will change. Usually, ‘b’ works well when taken as a fraction of the numerator (the saturation level), and the ‘k’ seems to behave meaningfully when comprised between 0 and 1.
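
A short sketch of that generalized sigmoid, with parameter names of my own choosing (‘top’, ‘b’ and ‘k’ are labels I made up for the numerator and the two denominator parameters):

import numpy as np

def general_sigmoid(x, top=100.0, b=1.0, k=0.5):
    # a sigmoid with an adjustable ceiling: as x grows, y tends towards top / b
    return top / (b + np.exp(-k * x))

for x in [0, 2, 5, 10, 20]:
    print(x, round(general_sigmoid(x), 2))   # the curve climbs and flattens out at the saturation level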

I return to the algorithm. Now, as the network has generated a set of predicted y^-s, it is time to compare them to the actual y-s, and to evaluate how much there is to learn yet. We can use any measure of error, still, most frequently, them algorithms go after the simplest ones, like the Mean Square Error and its square root: [(y_1 – y^_1)^2 + (y_2 – y^_2)^2 + … + (y_n – y^_n)^2]^0.5. Yes, that square root is the Euclidean distance between the set of actual y-s and that of predicted y^-s. Yes, it is also akin to the standard deviation of the predicted y^-s from the actual distribution of empirical y-s.
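
In Python, that error measure is a one-liner. The four actual y-s below come from the first rows of Table 1 further down; the predicted y^-s are made-up values close to 1, roughly what a saturated output layer spits out:

import numpy as np

y_actual    = np.array([5.662, 5.720, 5.640, 5.598])   # first four actual y-s from Table 1 below
y_predicted = np.array([0.99, 0.98, 0.99, 0.97])       # hypothetical outputs of a saturated network

error = np.sqrt(np.sum((y_actual - y_predicted)**2))   # the Euclidean distance between the two sets
print(error)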

In this precise algorithm, the author goes down another avenue: he takes the actual differences between observed y-s and predicted y^-s, and then multiplies them by the sigmoid derivative of the predicted y^-s. Then he takes the transpose of a uni-dimensional matrix of those (y – y^)*(y^)’, with (y^)’ standing for the derivative. It goes like:

    def backprop(self):
        # application of the chain rule to find the derivative of the loss function with respect to weights2 and weights1
        d_weights2 = np.dot(self.layer1.T, (2*(self.y - self.output) * sigmoid_derivative(self.output)))
        d_weights1 = np.dot(self.input.T, (np.dot(2*(self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1)))

        # update the weights with the derivative (slope) of the loss function
        self.weights1 += d_weights1
        self.weights2 += d_weights2

# helper functions, defined outside the class
def sigmoid(x):
    return 1.0/(1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1.0 - x)

I am still trying to wrap my mind around the reasons for taking this specific approach to the backpropagation of errors. The derivative of a sigmoid y = 1/(1 + e^(-x)) is y’ = [1/(1 + e^(-x))]*{1 – [1/(1 + e^(-x))]} and, as any derivative, it measures the slope of change in y. When I do (y_1 – y^_1)*(y^_1)’ + (y_2 – y^_2)*(y^_2)’ + … + (y_n – y^_n)*(y^_n)’, it is as if I were taking some kind of weighted average. That weighted average can be understood in two alternative ways. Either it is the deviation of y^ from y, weighted with the local slopes, or it is a general slope weighted with local deviations. Now, when I take the transpose of a matrix like {(y_1 – y^_1)*(y^_1)’ ; (y_2 – y^_2)*(y^_2)’ ; … ; (y_n – y^_n)*(y^_n)’}, it is a bit as if I flipped that row of terms into a column, so that it can be matched against another matrix. Then, I make a ‘.dot’ product of those error terms with the transposed inputs (or with the transposed hidden layer), so each of them gets weighed by the signals that produced it, and I feed the result into the neural network with the ‘+=’ operator. The latter means that in the next round of calculations, the network can do whatever it wants with those terms. Hmmweeellyyeess, makes some sense. I don’t know what exact sense is that, but it has some mathematical charm.
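
One thing that helps me see the charm is to check what the sigmoid derivative does to those error terms when the predicted y^ sits close to 1, which is what happens with my raw, unscaled data; a tiny check:

def sigmoid_derivative(x):
    return x * (1.0 - x)        # as in the algorithm: x is already a sigmoid output

for y_hat in [0.5, 0.9, 0.99, 0.999999]:
    print(y_hat, sigmoid_derivative(y_hat))
# the closer y^ gets to 1, the closer the derivative gets to 0,
# so the (y - y^) * (y^)' terms get muted for saturated neurons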

Now, I try to apply the same logic to the data I am working with in my research. Just to give you an idea, I show some data for just one country: Australia. Why Australia? Honestly, I don’t see why it shouldn’t be. Quite a respectable place. Anyway, here is that table. GDP per unit of energy consumed can be considered as the target output variable y, and the rest are those x-s.

Table 1 – Selected data regarding Australia

The variables are: y – GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent); X1 – share of aggregate amortization in the GDP (%); X2 – supply of broad money (% of GDP); X3 – energy use (tons of oil equivalent per capita); X4 – urban population (% of total population); X5 – GDP per capita (’000 USD).

Year | y | X1 | X2 | X3 | X4 | X5
1990 | 5.662020744 | 14.46 | 54.146 | 5.062 | 85.4 | 26.768
1991 | 5.719765048 | 14.806 | 53.369 | 4.928 | 85.4 | 26.496
1992 | 5.639817305 | 14.865 | 56.208 | 4.959 | 85.566 | 27.234
1993 | 5.597913126 | 15.277 | 56.61 | 5.148 | 85.748 | 28.082
1994 | 5.824685357 | 15.62 | 59.227 | 5.09 | 85.928 | 29.295
1995 | 5.929177604 | 15.895 | 60.519 | 5.129 | 86.106 | 30.489
1996 | 5.780817973 | 15.431 | 62.734 | 5.394 | 86.283 | 31.566
1997 | 5.860645225 | 15.259 | 63.981 | 5.47 | 86.504 | 32.709
1998 | 5.973528571 | 15.352 | 65.591 | 5.554 | 86.727 | 33.789
1999 | 6.139349354 | 15.086 | 69.539 | 5.61 | 86.947 | 35.139
2000 | 6.268129418 | 14.5 | 67.72 | 5.644 | 87.165 | 35.35
2001 | 6.531818805 | 14.041 | 70.382 | 5.447 | 87.378 | 36.297
2002 | 6.563073754 | 13.609 | 70.518 | 5.57 | 87.541 | 37.047
2003 | 6.677186947 | 13.398 | 74.818 | 5.569 | 87.695 | 38.302
2004 | 6.82834791 | 13.582 | 77.495 | 5.598 | 87.849 | 39.134
2005 | 6.99630318 | 13.737 | 78.556 | 5.564 | 88 | 39.914
2006 | 6.908872246 | 14.116 | 83.538 | 5.709 | 88.15 | 41.032
2007 | 6.932137612 | 14.025 | 90.679 | 5.868 | 88.298 | 42.022
2008 | 6.929395465 | 13.449 | 97.866 | 5.965 | 88.445 | 42.222
2009 | 7.039061961 | 13.698 | 94.542 | 5.863 | 88.59 | 41.616
2010 | 7.157467568 | 12.647 | 101.042 | 5.649 | 88.733 | 43.155
2011 | 7.291989544 | 12.489 | 100.349 | 5.638 | 88.875 | 43.716
2012 | 7.671605162 | 13.071 | 101.852 | 5.559 | 89.015 | 43.151
2013 | 7.891026044 | 13.455 | 106.347 | 5.586 | 89.153 | 43.238
2014 | 8.172929207 | 13.793 | 109.502 | 5.485 | 89.289 | 43.071

In his article, James Loy reports the cumulative error over 1500 iterations of training, with just four series of x-s, made of four observations. I do something else. I am interested in how the network works, step by step. I do step-by-step calculations with data from that table, following the algorithm I have just discussed. I do it in Excel, and I observe the way the network behaves. I can see that the hidden layer is really hidden, to the extent that it does not produce much in terms of meaningful information. What really spins is the output layer, thus, in fact, the connection between the hidden layer and the output. In the hidden layer, all the predicted sigmoid y^ are equal to 1, and their derivatives are automatically 0. Still, in the output layer, the second random distribution of weights overlaps with the first one from the hidden layer. Then, for some years, those output sigmoids demonstrate tiny differences from 1, and their derivatives become very small positive numbers. As a result, tiny, local (y_i – y^_i)*(y^_i)’ expressions are being generated in the output layer, and they modify the initial weights in the next round of training.

I observe the cumulative error (loss) in the first four iterations. In the first one it is 0.003138796, the second round brings 0.000100228, the third round displays 0.0000143, and the fourth one 0.005997739. It looks like an initial reduction of the cumulative error, by one order of magnitude at each iteration, and then, in the fourth round, it jumps up to the highest cumulative error of the four. I extend the number of those hand-driven iterations from four to six, and I keep feeding the network with random weights, again and again. A pattern emerges. The cumulative error oscillates. Sometimes the network drives it down, sometimes it swings it up.
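
For those who prefer the snake to the spreadsheet, here is a minimal sketch of how those hand-driven iterations could be replayed in Python, using the NeuralNetwork class quoted earlier in this post. The min-max rescaling of the data into the 0-1 range is my own addition (the Excel experiment used raw values), and the exact error figures will differ from run to run because of the random initial weights:

import numpy as np

# columns X1..X5 and y from Table 1, first four years (1990-1993)
X = np.array([[14.46,  54.146, 5.062, 85.4,   26.768],
              [14.806, 53.369, 4.928, 85.4,   26.496],
              [14.865, 56.208, 4.959, 85.566, 27.234],
              [15.277, 56.61,  5.148, 85.748, 28.082]])
y = np.array([[5.662], [5.720], [5.640], [5.598]])

# rescale everything into the 0-1 range, so the sigmoids do not saturate right away
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-9)
y = (y - y.min()) / (y.max() - y.min() + 1e-9)

nn = NeuralNetwork(X, y)                  # the class defined earlier in this post
for i in range(6):                        # six hand-driven iterations, as in the text
    nn.feedforward()
    loss = np.sum((y - nn.output)**2)
    print('iteration', i + 1, 'cumulative error:', round(float(loss), 6))
    nn.backprop()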

F**k! Pardon my French, but just six iterations of that algorithm show me that the thing is really intelligent. It generates an error, it drives it down to a lower value, and then, as if it was somehow dysfunctional to jump to conclusions that quickly, it generates a greater error in consecutive steps, as if it was considering more alternative options. I know that data scientists, should they read this, can slap their thighs at that elderly uncle (i.e. me), fascinated with how a neural network behaves. Still, for me, it is science. I take my data, I feed it into a machine that I see for the first time in my life, and I observe intelligent behaviour in something written on less than one page. It experiments with weights attributed to the stimuli I feed into it, and it evaluates its own error.

Now, I understand why that scientist from MIT, Lex Fridman, says that building artificial intelligence brings insights into how the human brain works.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you can suggest me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Combinatorial meaning and the cactus

My editorial on YouTube

I am back into blogging, after over two months of pausing. This winter semester I am going, probably, for a record workload in terms of classes: 630 hours in total. October and November look like an immersion time, when I had to get into gear for that amount of teaching. I noticed one thing that I hadn’t exactly been aware of so far, or maybe not as distinctly as I am now: when I teach, I love freestyling about the topic at hand. Whatever hand of nice slides I prepare for a given class, you can bet on me going off the beaten track and into the wilderness of intellectual quest, like by mid-class. I mean, I have nothing against PowerPoint, but at some point it becomes just so limiting… I remember that conference, one year ago, when the projector went dead during my panel (i.e. during the panel when I was supposed to present my research). I remember that mixed, and shared, feeling of relief and enjoyment in the people present in the room: ‘Good. Finally, no slides. We can like really talk science’.

See? Once again, I am going off track, and that in just one paragraph of writing. You can see what I mean when I refer to me going off track in class. Anyway, I discovered one more thing about myself: freestyling and sailing uncharted intellectual waters has a cost, and this is a very clear and tangible biological cost. After a full day of teaching this way I feel as if my brain was telling me: ‘Look, bro. I know you would like to write a little, but sorry: no way. Them synapses are just tired. You need to give me a break’.

There is a third thing I have discovered about myself: that intense experience of teaching makes me think a lot. I cannot exactly put all this in writing on the spot, for want of fresh neurotransmitters available, still all that thinking tends to crystallize over time, and with some patience I can access it later. Later means now, as it seems. I feel that I have crystallized enough and I can start to pull it out into the daylight. The « it » consists, mostly, in a continuous reflection on collective intelligence. How are we (possibly) smart together?

As I have been thinking about it, three events combined and triggered in me a string of more specific questions. I watched another podcast featuring Jordan Peterson, whom I am a big fan of, and who raised the topic of the neurobiological context of meaning. How does our brain make meaning, and how does it find meaning in sensory experience? On the other hand, I have just finished writing the manuscript of an article on the energy efficiency of national economies, which I have submitted to the ‘Energy Economics’ journal, and which, almost inevitably, made me work with numbers and statistics. As I had been doing that empirical research, I found out something surprising: the most meaningful econometric results came to the surface when I transformed my original data into local coefficients of an exponential progression that hypothetically started in 1989. Long story short, these coefficients are essentially growth rates, which behave in a peculiar way, due to their arithmetical structure: they decrease very quickly over time, whatever the underlying raw empirical observation, as if they represented weakening shock waves sent by an explosion in 1989.

Different transformations of the same data, in that research of mine, produced different statistical meanings. I am still coining up a real understanding of what it exactly means, by the way. As I was putting that together with Jordan Peterson’s thoughts on meaning as a biological process, I asked myself: what is the exact meaning of the fact that we, as a scientific community, assign meaning to statistics? How is it connected with collective intelligence?

I think I need to start more or less where Jordan Peterson moves, and ask ‘What is meaning?’. No, not quite. The ontological type, I mean the ‘What?’ type of question, is a mean beast. Something like a hydra: you cut the head, namely you explain the thing, you think that Bob’s your uncle, and a new head pops up, like out of nowhere, and it bites you, where you know. The ‘How?’ question is a bit more amenable. This one is like one of those husky dogs. Yes, it is semi wild, and yes, it can bite you, but once you tame it, and teach it to pull that sleigh, it will just pull. So I ask ‘How is meaning?’. How does meaning occur?

There is a particular type of being smart together, which I have been specifically interested in for like the last two months. It is the game-based way of being collectively intelligent. The theory of games is a well-established basis for studying human behaviour, including that of whole social structures. As I was thinking about it, I saw there is a deep reason for that. Social interactions are, well, interactions. It means that I do something and you do something, and those two somethings are supposed to make sense together. They really do on one condition: my something needs to be somehow conditioned by how your something unfolds, and vice versa. When I do something, I come to a point when it becomes important for me to see your reaction to what I do, and only once I have seen it will I develop my action further.

Hence, I can study collective action (and interaction) as a sequence of moves in a game. I make my move, and I stop moving, for a moment, in order to see your move. You make yours, and it triggers a new move in me, and so the story goes further on in time. We can experience it very vividly in negotiations. With any experience in having serious talks with other people, thus when we negotiate something, we know that it is pretty counter-efficient to keep pushing our point in an unbroken stream of speech. It is much more functional to pace our strategy into separate strings of argumentation, and between them, we wait for what the other person says. I have already given a first theoretical go at the thing in « Couldn’t they have predicted that? ».

This type of social interaction, when we pace our actions into game-like moves, is a way of being smart together. We can come up with new solutions, or with the understanding of new problems – or a new understanding of old problems, as a matter of fact – and we can do it starting from positions of imperfect agreement and imperfect coordination. We try to make (apparently) divergent points, or we pursue (apparently) divergent goals, and still, if we accept to wait for each other’s reaction, we can coordinate and/or agree about those divergences, so as to actually figure out, and do, some useful s**t together.

What connection with the results of my quantitative research? Let’s imagine that we play a social game, and each of us makes their move, and then they wait for the moves of other players. The state of the game at any given moment can be represented as the outcome of past moves. The state of reality is like a brick wall, made of bricks laid one by one, and the state of that brick wall is the outcome of the past laying of bricks. In the general theory of science, it is called hysteresis. There is a mathematical function reputed to represent that thing quite nicely: the exponential progression. On a timeline, I define equal intervals. To each period of time, I assign a value y(t) = e^(t*a), where ‘t’ is the ordinal of the time period, ‘e’ is a mathematical constant, the base of the natural logarithm, e ≈ 2.71828, and ‘a’ is what we call the exponential coefficient.
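
To show what I mean by those local coefficients, here is a minimal sketch, using a handful of the energy-efficiency values for Australia from Table 1 in « Pardon my French, but the thing is really intelligent », and assuming the progression hypothetically starts in 1989:

import math

# GDP per kg of oil equivalent for Australia, a few years picked from Table 1
efficiency = {1990: 5.662, 1995: 5.929, 2000: 6.268, 2005: 6.996, 2010: 7.157, 2014: 8.173}

for year, y in efficiency.items():
    t = year - 1989                     # ordinal of the time period, counted from the hypothetical start
    a = math.log(y) / t                 # local coefficient of the progression y(t) = e^(t*a)
    print(year, round(a, 4))
# the local coefficients shrink quickly over time, like weakening shock waves sent from 1989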

There is something else to that y = e^(t*a) story. If we think in terms of a broader picture, and assume that time is essentially what we imagine it is, the ‘t’ part can be replaced by any number we imagine. Then, Euler’s formula steps in: e^(i*x) = cos x + i*sin x. If you paid attention in math classes, at high school, you might remember that sine and cosine, the two trigonometric functions, have a peculiar property. As they refer to angles, at the end of the day they refer to a full circle of 360°. It means they go in a circle, thus in a cycle, only offset against each other: when the sine is at its peak, the cosine passes through zero, and the other way round etc. We can think about each occurrence we experience – the ‘x’ – as a nexus of two, mutually offset cycles, and they can be represented as, respectively, the sine, and the cosine of that occurrence ‘x’. When I grow in height (well, when I used to), my current height can be represented as the nexus of natural growth (sine), and natural depletion with age (cosine), that sort of thing.

Now, let’s suppose that we, as a society, play two different games about energy. One game makes us more energy efficient, ‘cause we know we should (see Settlement by energy – can renewable energies sustain our civilisation?). The other game makes us max out on our intake of energy from the environment (see Technological Change as Intelligent, Energy-Maximizing Adaptation). At any given point in time, the incremental change in our energy efficiency is the local equilibrium between those two games. Thus, if I take the natural logarithm of our energy efficiency at a given point in space-time, thus the coefficient of GDP per kg of oil equivalent in energy consumed, that natural logarithm is the outcome of those two games, or, from a slightly different point of view, it descends from the number of consecutive moves made (the ordinal of the time period we are currently in), and from a local coefficient – the equivalent of ‘i’ in Euler’s formula – which represents the pace of building up the outcomes of past moves in the game.

I go back to that ‘meaning’ thing. The consecutive steps ‘t’ in an exponential progression y(t) = e^(t*a) correspond to successive rounds of moves in the games we play. There is a core structure to observe: the length of what I call ‘one move’, which means a sequence of actions that each person involved in the interaction carries out without pausing and waiting for the reaction observable in other people in the game. When I say ‘length’, it involves a unit of measurement, and here, I am quite open. It can be a length of time, or the number of distinct actions in my sequence. The length of one move in the game determines the pace of the game, and this, in turn, sets the timeframe for the whole game to produce useful results: solutions, understandings, coordinated action etc.

Now, where the hell is any place for ‘meaning’ in all that game stuff? My view is the following: in social games, we sequence our actions into consecutive moves, with some waiting-for-reaction time in between, because we ascribe meaning to those sub-sequences that we define as ‘one move’. The way we process meaning matters for the way we play social games.

I am a scientist (well, I hope), and for me, meaning occurs very largely as I read what other people have figured out. So I stroll down the discursive avenue named ‘neurobiology of meaning’, welcomingly lit by the lampposts of Science Direct. I am calling by an article by Lee M. Pierson and Monroe Trout, entitled ‘What is consciousness for?’[1]. The authors formulate a general hypothesis, unfortunately not supported (yet?) with direct empirical check, that consciousness had been occurring, back in the day, I mean like really back in the day, as cognitive support of volitional movement, and evolved, since then, into more elaborate applications. Volitional movement is non-automatic, i.e. decisions have to be made in order for the movement to have any point. It requires quick assemblage of data on the current situation, and consciousness, i.e. the awareness of many abstract categories at the same time, could be the solution.

According to that approach, meaning occurs as a process of classification in the neurologically stored data that we need to use virtually simultaneously in order to do something as fundamental as reaching for another can of beer. Classification of data means grouping into sets. You have a random collection of data from sensory experience, like a homogenous cloud of information. You know, the kind you experience after a particularly eventful party. Some stronger experiences stick out: the touch of cold water on your naked skin, someone’s phone number written on your forearm with a lipstick etc. A question emerges: should you call this number? It might be your new girlfriend (i.e. the girlfriend whom you don’t consciously remember as your new one but whom you’d better call back if you don’t want your car splashed with acid), or it might be a drug dealer whom you’d better not call back. You need to group the remaining data in functional sets so as to take the right action.

So you group, and the challenge is to make the right grouping. You need to collect the not-quite-clear-in-their-meaning pieces of information (Whose lipstick had that phone number been written with? Can I associate a face with the lipstick? For sure, the right face?). One grouping of data can lead you to a happy life, another one can lead you into deep s**t. It could be handy to sort of quickly test many alternative groupings as for their elementary coherence, i.e. hold all that data in front of you, for a moment, and contemplate flexibly many possible connections. Volitional movement is very much about that. You want to run? Good. It would be nice not to run into something that could hurt you, so it would be good to cover a set of sensory data, combining something present (what we see), with something we remember from the past (that thing on the 2 o’clock azimuth stings like hell), and sort of quickly turn and return all that information so as to steer clear from that cactus, as we run.

Thus, as I follow the path set by Pierson and Trout, meaning occurs as the grouping of data in functional categories, and it occurs when we need to do it quickly and sort of under pressure of getting into trouble. I am going onto the level of collective intelligence in human social structures. In those structures, meaning, i.e. the emergence of meaningful distinctions communicable between human beings and possible to formalize in language, would occur as said structures need to figure something out quickly and under uncertainty, and meaning would allow putting together the types of information that are normally compartmentalized and fragmented.

From that perspective, one meaningful move in a game encompasses small pieces of action which we intuitively guess we should immediately group together. Meaningful moves in social games are sequences of actions, which we feel like putting immediately back to back, without pausing and letting the other player do their thing. There is some sort of pressing immediacy in that grouping. We guess we just need to carry out those actions smoothly one after the other, in an unbroken sequence. Wedging an interval of waiting time in between those actions could put our whole strategy at peril, or we just think so.

When I apply this logic to energy efficiency, I think about business strategies regarding innovation in products and technologies. When we launch a new product, or implement a new technology, there are more or less fixed patterns to follow. When you start beta testing a new mobile app, for example, you don’t stop in the middle of testing. You carry out the tests up to their planned schedule. When you start launching a new product (reminder: more products made on the same energy base mean greater energy efficiency), you keep launching until you reach some sort of conclusive outcome, like unequivocal success or failure. Social games we play around energy efficiency could very well be paced by this sort of business-strategy-based moves.

I pick up another article, that by Friedemann Pulvermüller (2013[2]). The main thing I see right from the beginning is that apparently, neurology is progressively dropping the idea of one, clearly localised area in our brain, in charge of semantics, i.e. of associating abstract signs with sensory data. What we are discovering is that semantics engage many areas in our brain into mutual connection. You can find developments on that issue in: Patterson et al. 2007[3], Bookheimer 2002[4], Price 2000[5], and Binder & Desai 2011[6]. As we use words, thus as we pronounce, hear, write or read them, that linguistic process directly engages (i.e. is directly correlated with the activation of) sensory and motor areas of our brain. That engagement follows multiple, yet recurrent patterns. In other words, instead of having one mechanism in charge of meaning, we are handling different ones.

After reviewing a large bundle of research, Pulvermüller proposes four different patterns: referential, combinatorial, emotional-affective, and abstract semantics. Each time, the semantic pattern consists in one particular area of the brain acting as a boss who wants to be debriefed about something from many sources, and starts pulling together many synaptic strings connected to many places in the brain. Five different pieces of cortex come recurrently as those boss-hubs, hungry for differentiated data, as we process words. They are: inferior frontal cortex (iFC, so far most commonly associated with the linguistic function), superior temporal cortex (sTC), inferior parietal cortex (iPC), inferior and middle temporal cortex (m/iTC), and finally the anterior temporal cortex (aTC). The inferior frontal cortex (iFC) seems to engage in the processing of words related to action (walk, do etc.). The superior temporal cortex (sTC) looks like seriously involved when words related to sounds are being used. The inferior parietal cortex (iPC) activates as words connect to space, and spatio-temporal constructs. The inferior and middle temporal cortex (m/iTC) lights up when we process words connected to animals, tools, persons, colours, shapes, and emotions. That activation is category specific, i.e. inside m/iTC, different Christmas trees start blinking as different categories among those are being named and referred to semantically. The anterior temporal cortex (aTC), interestingly, has not been associated yet with any specific type of semantic connections, and still, when it is damaged, semantic processing in our brain is generally impaired.

All those areas of the brain have other functions, besides the semantic one, and generally speaking, the kind of meaning they process is correlated with the kind of other things they do. The interesting insight, at this point, is the polyvalence of the cortical areas that we call ‘temporal’, thus involved in the perception of time. Physicists insist very strongly that time is largely a semantic construct of ours, i.e. time is what we think there is rather than what really is, out there. In physics, what exists is rather a sequential structure of reality (things happen in an order) than what we call time. That review of literature by Pulvermüller indirectly indicates that time is a piece of meaning that we attach to sounds, colours, emotions, animals and people. Sounds come as logical: they are sequences of acoustic waves. On the other hand, how is our perception of colours, or people, connected to our concept of time? This is a good one to ask, and a tough one to answer. What I would look for is recurrence. We identify persons as distinct ones as we interact with them recurrently. Autistic people frequently have that problem: when you put on a different jacket, they have a hard time accepting you are the same person. Identification of animals or emotions could follow the same logic.

The article discusses another interesting issue: the more abstract the meaning is, the more different regions of the brain it engages. The really abstract ones, like ‘beauty’ or ‘freedom’, are super Christmas trees: they provoke involvement all over the place. When we do abstraction in our mind, for example when writing poetry (OK, just good poetry), we engage a substantial part of our brain. This is why we can be lost in our thoughts: those thoughts, when really abstract, are really energy-consuming, and they might require shutting down some other functions.

My personal understanding of the research reviewed by Pulvermüller is that at the neurological level, we process three essential types of meaning. One consists in finding our bearings in reality, thus in identifying things and people around us, and in assigning emotions to them. It is something like a mapping function. Then, we need to do things, i.e. to take action, and that seems to be a different semantic function. Finally, we abstract, thus we connect distant parcels of data into something that has no direct counterpart either in the mapped reality or in our actions.

I have an indirect insight, too. We have a neural wiring, right? We generate meaning with that wiring, right? Now, how does adaptation occur, in that scheme, over time? Do we just adapt the meaning we make to the neural hardware we have, or is there a reciprocal kick, I mean from meaning to wiring? So far, neurological research has demonstrated that physical alteration in specific regions of the brain impacts semantic functions. Can it work the other way round, i.e. can recurrent change in the semantics we process alter the hardware we have between our ears? For example, as we process a lot of abstract concepts, like ‘taxes’ or ‘interest rate’, can our brains adapt from generation to generation, so as to minimize the gradient of energy expenditure as we shift between levels of abstraction? If we could, we would become more intelligent, i.e. able to handle larger and more differentiated sets of data in a shorter time.

How does all of this translate into collective intelligence? Firstly, there seem to be layers of such intelligence. We can be collectively smart sort of locally – and then we handle those more basic things, like group identity or networks of exchange – and then we can (possibly) become collectively smarter at a more combinatorial level, handling more abstract issues, like multilateral peace treaties or climate change. Moreover, the gradient of energy consumed between the collective understanding of simple and basic things, on the one hand, and the handling of overarching abstract issues, on the other, is a good predictor of the capacity of a given society to survive and thrive.

Once again, I am trying to associate this research in neurophysiology with my game-theoretical approach to energy markets. First of all, I recall the three theories of games, co-awarded the economic Nobel prize in 1994, namely those by: John Nash, John (János) Harsanyi, and Reinhard Selten. I start with the latter. Reinhard Selten claimed, and seems to have proven, that social games have a memory, and the presence of such memory is needed in order for us to be able to learn collectively through social games. You know those situations of tough talks, when the other person (or you) keeps bringing forth the same argumentation over and over again? This is an example of a game without much memory, i.e. without much learning. In such a game we repeat the same move, like a fish banging its head against the glass wall of an aquarium. Playing without memory is possible in just some games, e.g. tennis, or poker, if the opponent is not too tough. In other games, like chess, repeating the same move is not really possible. Such games force learning upon us.

Active use of memory requires combinatorial meaning. We need to know what is meaningful, in order to remember it as meaningful, and thus to consider it as valuable data for learning. The more combinatorial meaning is, inside a supposedly intelligent structure, such as our brain, the more energy-consuming that meaning is. Games played with memory and active learning could be more energy-consuming for our collective intelligence than games played without. Maybe that whole thing of electronics and digital technologies, so hungry for energy, is a way that we, the collective human intelligence, have put in place in order to learn more efficiently through our social games?

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Pierson, L. M., & Trout, M. (2017). What is consciousness for?. New Ideas in Psychology, 47, 62-71.

[2] Pulvermüller, F. (2013). How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in cognitive sciences, 17(9), 458-470.

[3] Patterson, K. et al. (2007) Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 8, 976–987

[4] Bookheimer, S. (2002) Functional MRI of language: new approaches to understanding the cortical organization of semantic processing. Annu. Rev. Neurosci. 25, 151–188

[5] Price, C.J. (2000) The anatomy of language: contributions from functional neuroimaging. J. Anat. 197, 335–359

[6] Binder, J.R. and Desai, R.H. (2011) The neurobiology of semantic memory. Trends Cogn. Sci. 15, 527–536

The social brain

My editorial on You Tube

I am thinking about my opening lectures in the coming semester. I am trying to phrase out sort of a baseline philosophy of mine, underlying all or most of what I teach, i.e. microeconomics, management, political systems, international economic relations, and economic policy. Certainly, my most fundamental message to my students is: watch reality in a scientific way. Get the hell above clichés, first impressions and tribal thinking. Reach for the information that most other people don’t, and process it rigorously. You will see that once you really mean it, scientific method is anything but boring. When you really swing that Ockham’s razor with dexterity, and cut out the bullshit, you can come to important existential realizations.

Science starts with observation. Social sciences start with the observation of what people do, and what people do consists very largely in doing something with other people. We are social beings, we do things in recurrent sequences of particular actions, sequences that we have learnt and that we keep on learning. Here I come to an interesting point, namely to what I call the « action and reaction paradigm », which is a slightly simplistic application of the Newtonian principle labelled with the same expression. It goes more or less like: what people do is a reaction to what happens. There is a ‘yes-but’ involved. Yes, people do things in reaction to what happens, but you need to add the component of temporal sequence. People do things in reaction to everything relevant that has happened within their span of memory connected to the particular phenomenon in question.

This is a fundamental distinction. If I say ‘I do what I do in reaction to what is happening now’, my claim is essentially different from saying that ‘I do what I do as a learnt response to all the things which I know to have happened so far and which my brain considers relevant for the case’. Two examples come to my mind: social conflicts, and technological change. When a social conflict unfolds, be it a war between countries, a civil war, or a sharp rise in political tension, the first, superficial interpretation is that someone has just done something annoying, and the other someone just couldn’t refrain from reacting, and it all ramped up to the point of being out of control. In this approach, patterns of behaviour observable in social conflicts are not really patterns, in the sense that they are not really predictable. There is a strong temptation to label conflictual behaviour as more or less random and chaotic, devoid of rationality.

Still, here, social sciences come with a firm claim: anything we do is a learnt, recurrent pattern of doing things. Actions that we take in a situation of conflict are just as much a learnt, repetitive strategy as any other piece of behaviour. Some could argue: ‘But how is it possible that people who have very seldom been aggressive in the past suddenly develop whole patterns of aggressive behaviour? And in the case of whole social groups? How can they learn being aggressive if there has not been conflict before?’. Well, this is one of the wonders observable in human culture. Culture is truly like a big virtual server. There are things stored in our culture – and by ‘things’ I mean, precisely, patterns of behaviour – which we could have hardly imagined to be there. We accumulate information over weeks, months, and years, and, all of a sudden, a radical shift in our behaviour occurs. We have a tendency to consider such a brusque shift as insanity, but this is usually not the case. As long as the newly manifested set of actions is coherent around an expected outcome, this is a new, subjectively rational strategy that we have just picked up from the cultural toolbox.

Cultural memory is usually much longer in its backwards reach than individual memory. If the right set of new information is being input into the life of a social group, or of an individual, centuries-old strategies can suddenly pop up. It works like a protocol: ‘OK, we have now enough information accumulated in this file so as to trigger the strategy AAA’. Different cultures have different toolboxes stored in them, and yet, the simple tools of social conflict are almost omnipresent. Wherever any tribe has ever had to fight for its hunting grounds, the corresponding patterns of whacking-the-other-over-the-head-with-that-piece-of-rock are stored in the depths of culture, most fundamentally in language.

Yes, the language we use is a store of information about how to do things. Never looked at the thing like that? Just think: the words and expressions we use describe something that happens in our brain in response to accumulated sensory experience. Usually we have fewer words at hand than different things to designate. In all the abundance of our experience, just some among its pieces become dignified enough to have their own words. For a word or expression to form as part of a language, generations need to recapitulate their things of life. This is how language becomes an archive of strategies. The information it conveys is like a ZIP file in a computer: it is tightly packed, and requires some kind of semantic crowbar in order to become fully accessible and operational. The crowbar is precisely the currently absorbed experience.

Right, so we can get to fighting each other even without special training, as we have the basic strategies stored in the language we speak. And technological change? How do we innovate? When we shift towards some new technology, do we also use old patterns of behaviour conveyed in our cultural heritage? Let’s see… Here is a little intellectual experiment I like to run with my students, when we talk about innovation and technological change. Look around you. Look at all those things that surround you and which, fault of a better word, you call ‘civilisation’. Which of those things would you change, like improve or replace with something else, possibly better?

Now comes an interesting, stylized fact that I can observe in that experiment. Sometimes, I hold my classes in a big conference room, furnished in a 19th-century-ish style, and equipped with a modern overhead projector attached to the ceiling. When I ask my students whether they would like to innovate with that respectable, sort of traditional furniture, they give me one of those looks, as if I were out of my mind. ‘What? Change these? But this is traditional, this is chic, this is… I don’t know, it has style!’. On the other hand, virtually each student is eager to change the overhead projector for a new(er) one.

Got it? In that experiment, people would rather change things that are already changing at an observably quick pace. The old and steady things are being left out of the scope of innovation. The 100% rational approach to innovation suggests something else: if you want to innovate, start with the oldest stuff, because it seems to be the most in need of some shake-off. Yet, the actual innovation, such as we can observe it in the culture around us, goes the other way round: it focuses on innovating in things which are already being innovated with.

Got it? Most of what we call innovation is based on a millennia-old pattern of behaviour called ‘joining the fun’. We innovate because we join an observable trend towards innovating. Yes, there are some minds, like Edison or Musk, who start innovating apparently from scratch, when there is no passing wagon to jump on. Thus, we have two patterns of innovation: joining a massively observable trend of change, or starting a new trend. The former is clear in its cultural roots. It has always been fun to join parties, festivities and public executions. The latter is more interesting in its apparent obscurity. What is the culturally rooted pattern of doing something completely new?

Easy, man, easy. Let’s do it step by step. When we perceive something as ‘completely new’, it means there are two sets of phenomena: one made of things that look old, and the other looking new. In other words, we experience cognitive dissonance. Certain things look out of date when, after having experienced them as functional, we start experiencing them as no longer up to facing the situation at hand. We experience their dissonance as compared to other things of life. This is called perceived obsolescence.

Anything is perceived as completely new only if there is something obsolete to compare it with. Let’s generalise it mathematically. There are two sets of phenomena, which I can probably define as two strings of data. I say ‘strings’, and not ‘lists’, on account of that data being complex. Well, yes: data about real life is complex. In terms of digital technology, our experience is made of strings (not to confuse with that special type of beachwear).

And so I have those two strings, and I keep using and reusing them. With time, I notice that I need to add new data, from my ongoing experience, to one of the strings, whilst the other one stays the same. With even more time, as my first string of data gets new pieces of information, i.e. new memory, that other string slowly turns from ‘the same’ into ‘old school’, then into ‘retro’, and finally into ‘that old piece of junk’. This is learning by experiencing cognitive dissonance.

We have, then, two cultural patterns of technological change. The more commonly practiced one consists in the good old ‘let’s join the fun’ sequence of actions. Willing to do things together with other people is simple, universal, and essentially belongs to the very basis of each culture. The much rarer pattern consists in becoming aware of a cognitive dissonance and figuring out something new. This is interesting. Some cultural patterns are like screwdrivers or duct tape. Sooner or later, most people use them. Other models of behaviour, whilst still rooted in our culture, are sort of harder to dig out of that abyssal toolbox. Just some people do it.

I am coming back to that « action and reaction paradigm ». Yes, we act in reaction to what happens, but what happens, happens over time, and the ‘time’ part is vital here. We act in reaction to the information that our brain collects, and when enough information has been collected, it triggers a pre-learnt, culturally rooted pattern of behaviour, and this is our action. In response to basically the same set of data available in the environment, different human beings pull different patterns of action out of the cultural toolbox. This is interesting: how exactly does it happen? I mean, how exactly does this differentiation of response to the environment occur?

There is that article I have just found on Science Direct, by Porcelli et al. (2018[1]). The paper puts together quite a cartload of literature concerning the link between major mental disorders – schizophrenia (SCZ), Alzheimer’s disease (AD) and major depressive disorder (MDD) – and their corresponding impairments in social behaviour. More specifically, the authors focus on the correlation between the so-called social withdrawal (i.e. abnormal passivity in social relations), and the neurological pathways observable in these three mental disorders. One of the theoretical conclusions they draw regards what they call ‘the social brain’. The social brain is a set of neurological pathways recurrently correlated with particular patterns of social behaviour.

Yes, ladies and gentlemen, it means that what is observable outside has its counterpart inside. There is a hypothetical way that human brains can work – a hypothetical set of sequences of synaptic activations observable in our neurons – to make the best of social relations, something like a neurological general equilibrium. I have just coined that term by analogy to general economic equilibrium. Anything outside that sort of perfect model is less efficient in terms of social relations, and so it goes all the way down to pathological behaviour connected with pathological neural pathways. Porcelli et al. go even as far as quantifying the economic value of pathological behaviour grounded in pathological mental impairment. By analogy, there is a hypothetical economic value attached to any recurrent neural pathway.

Going reeeaally far down this speculative avenue: our society could look completely different if we changed the way our brains work.



[1] Porcelli, S., Van Der Wee, N., van der Werff, S., Aghajani, M., Glennon, J. C., van Heukelum, S., … & Posadas, M. (2018). Social brain, social dysfunction and social withdrawal. Neuroscience & Biobehavioral Reviews

Good hypotheses are simple

The thing about doing science is that when you really do it, you do it even when you don’t know you do it. Thinking about reality in truly scientific terms means that you tune yourself on discovery, and when you do that, man, you have released that genie from the bottle (lamp, ring etc.). When you start discovering, and you get the hang of it, you realize that it is fun and liberating for its own sake. To me, doing science is like playing music: I am just having fun with it.

Having fun with science is important. I had a particularly vivid realization of that yesterday, when, due to a chain of circumstances, I had to hold a lecture in macroeconomics in an anatomy classroom. There was no whiteboard to write on, but there were two skeletons standing in the two corners on my sides, and there were microscopes, of course covered with protective plastic bags. Have you ever tried to teach macroeconomics using a skeleton, and with nothing to write on? As I think about it, a skeleton is excellent as a metaphorical representation of functional connections in a system.

Since the beginning of this calendar year, I have been taking on those serious business plans, and, by the way, I am still doing it. Still, in my current work on the two business plans I am preparing in parallel – one for the EneFin project (FinTech in the market of energy), and the other one for the MedUs project (Blockchain in the market of private healthcare) – I recently realized that I am starting to think science. In my last update in French, the one entitled Ça me démange, carrément, I have already nailed down one hypothesis, and some empirical data to check it. The hypothesis goes like: ‘The technology of renewable energies is in its phase of banalisation, i.e. it is increasingly adapting its utilitarian forms to the social structures that are supposed to absorb it, and, reciprocally, those structures adapt to those utilitarian forms so as to absorb them efficiently’.

As hypotheses come, this one is still pretty green, i.e. not ripe yet for rigorous scientific proof, on account of there being too many different ideas in it. Good hypotheses are simple, so that you can give them a shave with Ockham’s razor and cut the bullshit out. Still, a green hypothesis is better than no hypothesis at all. I can farm it and make it ripe, which I have already applied myself to do. In an Excel file you can see and download from the archive of my blog, I included the results of quick empirical research I did with the help of https://patents.google.com: I studied patent applications and patents granted, in the respective fields of wind, hydro, photovoltaic, and solar-thermal energies, in three important patent offices across the world, namely the European Patent Office (‘EP’ in that Excel file), the US Patent & Trademark Office (‘US’), and the patent office of continental China.

As I had a look at those numbers, yes, indeed, there has been a recent surge in the diversity of patented technologies. My intuition about banalisation could be true. Technologies pertaining to the generation of renewable energies start to wrap themselves around the social structures that surround them, and said structures do the same with the technologies. Historically, it is a known phenomenon. The motor power of animals (oxen, horses and mules, mostly), wind power, water power, thermal energy from the burning of fossil fuels – all these forms of energy started as novelties, and then grew into human social structures. As I think about it, even the power of human muscles went through that process. At some point in time, human beings discovered that their bodies can perform organized work, i.e. that muscular power can be organized into labour.

Discovering that we can work together was really a bit of a discovery. You have probably read or heard about Gobekli Tepe, that huge megalithic enclosure located in Turkey, which is, apparently, the oldest proof of temple-sized human architecture. I watched an excellent documentary about the place, on National Geographic. Its point was that, if we put aside all the fantasies about aliens and Atlantians, the huge megalithic structure of Gobekli Tepe had most probably been made by simple, humble hunter-gatherers, who were thus discovering the immense power of organized work, and even invented a religion in order to make the whole business run smoothly. Nothing fancy: they used to cut their deceased ones’ heads off, would clean the skulls and keep them at home, in a prominent place, in order to think themselves into the phenomenon of inter-generational heritage. This is exactly what my great compatriot, Count Alfred Korzybski, wrote about being human: we have that unique capacity to ‘bind time’, or, in other words, to make our history into a heritage with accumulation of skills.

That was precisely an example of what a banalised technology (not to confuse with ‘banal technology’) can do. My point – and my gut feeling – is that we are, right now, precisely at this Gobekli-Tepe-phase with renewable energies. With the progressing diversity in the corresponding technologies, we are transforming our society so that it can work as efficiently as possible with said technologies.

Good, that’s the first piece of science I have come up with as regards renewable technologies. Another piece is connected to what I introduced, about the market of renewable energies in Europe, in my last update in English, namely in At the frontier, with my numbers. In Europe, we are a bit of a bunch of originals, in comparison to the rest of the world. Said rest of the world generally pumps up its consumption of energy per capita, as measured in them kilograms of oil equivalent. We, in Europe, we have mostly chosen the path of frugality, and our kilograms of oil per capita tend to shrink consistently. On top of all that, there seems to be a pattern: a functional connection between the overall consumption of energy per capita and the aggregate consumption of renewable energies.

I am going to expose this particular gut feeling of mine by small steps. In Table 1, below, I am introducing two growth rates, compounded between 1990 and 2015: the growth rate in the overall, final consumption of energy per capita, against that in the final consumption of renewable energies. I say ‘against’, as in the graph below the table I make a visualisation of those numbers, and it shows an intriguing regularity. The plot of points takes a form opposite to those frontiers I showed you in At the frontier, with my numbers. This time, my points follow something like a gentle slope, and the further to the right, the gentler that slope becomes. It is visualised even more clearly with the exponential trend line (red dotted line).

We, I mean economists, call this type of curve, with a nice convexity, an ‘indifference curve’. Funnily enough, we use indifference curves to study choice. Anyway, there is sort of an intuitive difference between frontiers, on the one hand, and indifference curves, on the other hand. In economics, we assume that frontiers are somehow unstable: they represent a state of things that is doomed to change. A frontier envelops something that either swells or shrinks. On the other hand, an indifference curve suggests an equilibrium, i.e. each point on that curve is somehow steady and respectable as long as nobody comes to knock it out of balance. Whilst a frontier is like a skin, enveloping the body, an indifference curve is more like a spinal cord.

We have an indifference curve, hence a hypothetical equilibrium, between the dynamics of the overall consumption of energy per capita, and those of the aggregate use of renewable energies. I don’t even know how to call it. That’s the thing with freshly observed equilibriums: they look nice, you could just fall in love with them, but if somebody asks what exactly they are, those nice things, you could have trouble answering. As I am trying to sort it out, I start with assuming that the overall consumption of energy per capita reflects two complex sets. The first set is that of everything we do, divided into three basic fields of activity: a) the goods and services we consume (they contain the energy that served to supply them); b) transport; and c) the strictly spoken household use of energy. The second set, or another way of apprehending essentially the same ensemble of phenomena, is a set of technologies. Our overall consumption of energy depends on the total installed power of the engines and electronic devices we use.

Now, the total consumption of renewable energies depends on the aggregate capacity installed in renewable technologies. In other words, this mysterious equilibrium of mine (if there is any, mind you) would be an equilibrium between two sets of technologies: those generating energy, and those serving to consume it. Honestly, I don’t even know how to phrase it into a decent hypothesis. I need time to wrap my mind around it.

Table 1

Country | Growth rate in the overall, final consumption of energy per capita, 1990 – 2015 | Growth rate in the final consumption of renewable energies, 1990 – 2015
Austria | 17,4% | 80,7%
Switzerland | -18,4% | 48,6%
Czech Republic | -19,5% | 241,0%
Germany | -13,7% | 501,2%
Spain | 11,0% | 104,4%
Estonia | -33,0% | 359,5%
Finland | 4,1% | 101,8%
France | -3,7% | 42,3%
United Kingdom | -23,2% | 1069,6%
Netherlands | -3,7% | 434,9%
Norway | 17,1% | 39,8%
Poland | -8,0% | 336,8%
Portugal | 26,8% | 32,6%

(Chart: growth rates in overall energy use per capita vs. growth rates in renewable energy consumption, with an exponential trend line)
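For readers who would like to reproduce that picture, here is a minimal sketch in Python, assuming the only inputs are the two compound growth rates per country, exactly as listed in Table 1; the exponential trend is fitted as a simple log-linear regression, which is my own shortcut, not necessarily the exact trend line used in the original chart.

```python
import numpy as np
import matplotlib.pyplot as plt

# Growth rates 1990-2015, copied from Table 1: (overall energy per capita, renewables)
growth = {
    'Austria': (0.174, 0.807), 'Switzerland': (-0.184, 0.486),
    'Czech Republic': (-0.195, 2.410), 'Germany': (-0.137, 5.012),
    'Spain': (0.110, 1.044), 'Estonia': (-0.330, 3.595),
    'Finland': (0.041, 1.018), 'France': (-0.037, 0.423),
    'United Kingdom': (-0.232, 10.696), 'Netherlands': (-0.037, 4.349),
    'Norway': (0.171, 0.398), 'Poland': (-0.080, 3.368),
    'Portugal': (0.268, 0.326),
}

x = np.array([v[0] for v in growth.values()])  # growth in overall energy use per capita
y = np.array([v[1] for v in growth.values()])  # growth in renewable energy consumption

# Exponential trend y = exp(a + b*x), fitted on logarithms (all y values are positive here)
b, a = np.polyfit(x, np.log(y), 1)
xs = np.linspace(x.min(), x.max(), 100)

plt.scatter(x, y)
plt.plot(xs, np.exp(a + b * xs), 'r--', label='exponential trend')
plt.xlabel('Compound growth in final energy use per capita, 1990-2015')
plt.ylabel('Compound growth in renewable energy consumption, 1990-2015')
plt.legend()
plt.show()
```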

 


Smart cities, or rummaging in the waste heap of culture

My editorial

I am trying to put together my four big ideas. I mean, I think they are big. I feel small when I consider them. Anyway, they are: smart cities, FinTech, renewable energies, and collective intelligence. I am putting them together in the framework of a business plan. The business concept I am entertaining, and which, let’s face it, makes a piece of entertainment for my internal curious ape, is the following: investing in the development of a smart city, with a strong component of renewable energies supplanting fossil fuels, and financing this development, partly or totally, with FinTech tools, i.e. mostly with something like a cryptocurrency as well as with a local platform for financial transactions. The whole thing is supposed to have collective intelligence, i.e. the efficiency in using resources should increase over time, on the condition that some institutions of collective life emerge in that smart city. Sounds incredible, doesn’t it? It doesn’t? Right, maybe I should explain it a little bit.

A smart city is defined by the extensive use of digital technologies, in order to optimize the local use of resources. Digital technologies age relatively quickly, as compared to the technologies that make up the ‘hard’ urban infrastructure. If, in a piece of urban infrastructure, we have an amount KH of capital invested in the hard infrastructure, and an amount KS invested in smart technologies with a strong digital component, the rate of depreciation D(KH) of the capital invested in hard infrastructure will be much lower than the rate D(KS) of the capital invested in smart technologies.

Mathematically,

[D(KS)/KS] > [D(KH)/KH]

and the ‘>’ in this case really means business.

The rate of depreciation in any technology depends on the pace at which new technologies come into the game, thus on the pace of research and development. The ‘depends’, here, works in a self-reinforcing loop: the faster my technologies age, the more research I do to replace them with new ones, and so my next technologies age even faster, and so I put metaphorical ginger in the metaphorical ass of my research lab and I come up with even more advanced technologies at an even faster pace, and so the loop spirals up. One day, in the future, as I will be coming back home from work, the technology embodied in my apartment will be one generation more advanced than the one I left there in the morning. I will have a subscription with a technology change company, which, for a monthly lump fee, will assure smooth technological change in my place. Analytically, it means that the residual difference in the rates of depreciation, or [D(KS)/KS] – [D(KH)/KH], will widen.
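To make that widening gap more tangible, here is a purely illustrative sketch in Python; the depreciation rates and the feedback parameter are made-up numbers, chosen only to show the mechanism, not estimates of anything real.

```python
# Purely illustrative, assumed numbers (not estimates):
d_hard = 0.03    # relative depreciation rate of hard infrastructure, D(KH)/KH
d_smart = 0.20   # initial relative depreciation rate of the digital layer, D(KS)/KS
feedback = 0.05  # assumed yearly acceleration of smart-tech ageing (the R&D loop)

for year in range(0, 16, 5):
    # residual difference [D(KS)/KS] - [D(KH)/KH] after 'year' years of the loop
    print(year, round(d_smart * (1 + feedback) ** year - d_hard, 3))
# The printed difference grows year after year: the gap widens.
```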

On the grounds of the research I did in 2017, I can stake three hypotheses as for the development of smart cities. Hypothesis #1 says that the relative infusion of urban infrastructure with advanced and quickly ageing technologies will generate increasing amounts of highly liquid assets, monetary balances included, in the aggregate balance sheets of smart cities (see ‘Financial Equilibrium in the Presence of Technological Change’, Journal of Economics Library, Volume 4 (2), June 20, pp. 160 – 171, and ‘Technological Change as a Monetary Phenomenon’, Economics World, May-June 2018, Vol. 6, No. 3, pp. 203-216). This, in turn, means that the smarter the city, the more financial assets it will need, kind of around and at hand, in order to function smoothly as a social structure.

On the other hand, in my hypothesis #2, I claim that the relatively fast pace of technological change associated with smart cities will pump up the use of energy per capita, but the reciprocal push, namely from energy-intensity to innovation-intensity, will be much weaker, and this particular loop is likely to stabilize itself relatively quickly in some sort of energy-innovation standstill (see ‘Technological change as intelligent, energy-maximizing adaptation’, Journal of Economic and Social Thought, Volume 4 September 3). Mind you, I am a bit less definitive on this one than on hypothesis #1. This is something I found out to exist, in human civilisation, as a statistically significant correlation. Yet, in the precise case of smart cities, I still have to put my finger on the exact phenomena likely corresponding to the hypothesis. Intuitively, I can see some kind of social change. The very transformation of an ordinary (i.e. dumb) urban infrastructure into a smart one means, initially, lots of construction and engineering work being done, just to put the new infrastructure in place. That means additional consumption of energy. The advanced technologies embodied in the tissues of smart cities will tend to stay advanced for a consistently shortening amount of time, as they will be replaced, more and more frequently, with consecutive generations of technological youth. All that process will result in the consumption of energy spiralling up in the particular field of technological change itself. Still, my research suggests some kind of standstill, in that particular respect, coming into place quite quickly. I am thinking about our basic triad in energy consumption. If we imagined our total consumption of energy, I mean as a civilisation, as a round cake, one third of that cake would correspond to household consumption, one third to transportation, and the remaining third to the overall industrial activity. With that pattern of technological change, which I have just sketched regarding smart cities, the cake would shift somewhat more towards industrial activity, especially as said activity should, technically, contribute to energy efficiency in households and in transport. I can roughly assume that the spiral of more energy being consumed in the process of changing for more energy-efficient technologies can find some kind of standstill in the proportions between that particular consumption of energy, on the one hand, and the household & transport use, on the other. I mean, scraping the bottom of the energy barrel just in order to install consecutive generations of smart technologies is the kind of strategy which can quickly turn dumb.

Anyway, the development of smart cities, as I see it, is likely to disrupt the geography of energy consumption in the overall spatial structure of human settlement. Smart cities, although energy-smart, are likely to need, in the long run, more energy to run. Yet, I am focusing on another phenomenon now. Following in the footsteps of Paul Krugman (see Krugman 1991[1]; Krugman 1998[2]), and on the grounds of my own research (see ‘Settlement by energy – Can Renewable Energies Sustain Our Civilisation?’, International Journal of Energy and Environmental Research, Vol. 5, No. 3, pp. 1-18), I am formulating hypothesis #3: if the financial loop named in hypothesis #1, and the engineering loop from hypothesis #2, come together, the development of smart cities will create a different geography of human settlement. Places which turn into smart (and continuously smarter) cities will attract people at a faster pace than places with relatively weaker a drive towards getting smarter. Still, that change in the geography of our civilisation will be quite idiosyncratic. My own research (the link above) suggests that countries differ strongly in the relative importance of, respectively, access to food and access to energy, in the shaping of social geography. Some of those local idiosyncrasies can come as quite a bit of a surprise. Bulgaria or Estonia, for example, are likely to rebuild their urban tissue on the grounds of local access to energy. People will flock around watermills, solar panels, maybe around cold fusion. On the other hand, in Germany, Iran or Mexico, where my research indicates more importance attached to food, the new geography of smart human settlement is likely to gravitate towards highly efficient farming places.

Now, there is another thing, which I am just putting my finger on, not even enough to call it a hypothesis. Here is the thing: money gets hoarded faster and more easily than fixed assets. We can observe that the growing monetization of the global economy (more money being supplied per unit of real output) is correlated with increasing social inequalities. If, in a smart and ever smarter city, more financial assets are floating around, it is likely to create a steeper social hierarchy. In those smart cities, the distance from the bottom to the top of the local social hierarchy is likely to be greater than in other places. I know, I know, it does not exactly sound politically correct. Smart cities are supposed to be egalitarian, and make us live happily ever after. Still, my internal curious ape is what it is, i.e. a nearly pathologically frantic piece of mental activity in me, and it just can’t help rummaging in the waste heap of culture. And you probably know that thing about waste heaps: people tend to throw things there which they wouldn’t show to friends who drop by.

I am working on making science fun and fruitful, and I intend to make it a business of mine. I am doing my best to stay consistent in documenting my research in a hopefully interesting form. Right now, I am at the stage of crowdfunding. You can consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Krugman, P., 1991, Increasing Returns and Economic Geography, The Journal of Political Economy, Volume 99, Issue 3 (Jun. 1991), pp. 483 – 499

[2] Krugman, P., 1998, What’s New About The New Economic Geography?, Oxford Review of Economic Policy, vol. 14, no. 2, pp. 7 – 17

Anyway, the two equations, or the remaining part of Chapter I

My editorial

And so I continue my novel in short episodes, i.e. I am blogging the on-going progress in the writing of my book about renewable technologies and technological change. Today, I am updating my blog with the remaining part of the first Chapter, which I started yesterday. Just for those who try to keep up, a little reminder about notations that you are going to encounter in what follows below: N stands for population, E represents the non-edible energy that we consume, and F is the intake of food. For the moment, I do not have enough theoretical space in my model to represent other vital things, like dreams, pot, beauty, friendship etc.

Anyway, the two equations, namely ‘N = A*E^µ*F^(1-µ)’ and ‘N = A*(E/N)^µ*(F/N)^(1-µ)’, can both be seen as mathematical expressions of two hypotheses, which seem perfectly congruent at first sight, and yet they can be divergent. Firstly, each of these equations can be translated into the claim that the size of human population in a given place at a given time depends on the availability of food and non-edible energy in said place and time. In a next step, one is tempted to claim that incremental change in population depends on the incremental change in the availability of food and non-edible energies. Whilst the logical link between the two hypotheses seems rock-solid, the mathematical one is not as obvious, and this is what Charles Cobb and Paul Douglas discovered as they presented their original research in 1928 (Cobb, Douglas 1928[1]). Their method can be summarised as follows. We have three temporal series of three variables: the output utility on the left side of the equation, and the two input factors on the right side. The original production function by Cobb and Douglas had the aggregate output of the economy (Gross Domestic Product) on the output side, whilst input was made of investment in productive assets and the amount of labour supplied.

We return, now, to the most general equation (1), namely U = A*F1^µ*F2^(1-µ), and we focus on the ‘F1^µ*F2^(1-µ)’ part, thus on the strictly spoken impact of input factors. The temporal series of output U can be expressed as a linear trend with a general slope, just as the modelled series of values obtained through ‘F1^µ*F2^(1-µ)’. The empirical observation that any reader can make on their own is that the scale factor A can be narrowed down to a value slightly above 1 only if the slope of ‘F1^µ*F2^(1-µ)’ on the right side is significantly smaller than the slope of U. This is a peculiar property of that function: the modelled trend of the compound value ‘F1^µ*F2^(1-µ)’ is always above the trend of U at the beginning of the period studied, and visibly below U by the end of the same period. The factor of scale ‘A’ is an averaged proportion between reality and the modelled value. It corresponds to a sequence of quotients, which starts with a local A noticeably below 1, then closes by 1 in the central part of the period considered, to rise visibly above 1 by the end of this period. This is what made Charles Cobb and Paul Douglas claim that at the beginning of the historical period they studied the real output of the US economy was below its potential, and by the end of their window of observation it became overshot. The same property of this function made it a tool for defining general equilibriums rather than local ones.

As regards my research on renewable energies, that peculiar property of the compound input of food and energy calculated with ‘E^µ*F^(1-µ)’ or with ‘(E/N)^µ*(F/N)^(1-µ)’ means that I can assess, over a definite window in time, whether available food and energy stay in general equilibrium with population. They do so if my general factor of scale ‘A’, averaged over that window in time, stays very slightly over 1, with relatively low a variance. Relatively low, for a parameter equal more or less to one, means a variance in A staying around 0,1 or lower. If these mathematical conditions are fulfilled, I can claim that yes, over this definite window in time, population depends on the available food and energy.

Still, as my parameter A has been averaged between trends of different slopes, I cannot directly infer that at any given incremental point in time, like from t0 to t1, my N(t1) – N(t0) = A*{[E(t1)^µ*F(t1)^(1-µ)] – [E(t0)^µ*F(t0)^(1-µ)]}. If we take that incremental point of view, the local A will always be different from the general one.
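For readers who prefer code to prose, here is a minimal sketch in Python of the test described above, assuming we already have annual series of population N, energy E and food F for one country (either as aggregates or, for the mutated equation, as per-capita ratios E/N and F/N); the scoring rule that picks µ is my own shortcut, the substantive conditions being that the mean of A stays slightly above 1 and its variance stays around 0,1 or lower.

```python
import numpy as np

def equilibrium_scan(N, E, F, mus=np.linspace(0.01, 0.99, 99)):
    """Scan the exponent µ and, for each value, compute the annual scale factors
    A_t = N_t / (E_t**µ * F_t**(1-µ)); return the µ whose A-series comes closest
    to the equilibrium conditions (mean A slightly above 1, low variance)."""
    N, E, F = (np.asarray(s, dtype=float) for s in (N, E, F))
    best = None
    for mu in mus:
        A = N / (E ** mu * F ** (1.0 - mu))
        # Heuristic ranking of candidate exponents, not the author's exact criterion:
        score = abs(A.mean() - 1.0) + A.var()
        if best is None or score < best[0]:
            best = (score, mu, A.mean(), A.var())
    _, mu, mean_A, var_A = best
    return mu, mean_A, var_A

# Usage, for the per-capita mutation N = A*(E/N)^µ*(F/N)^(1-µ):
# mu, mean_A, var_A = equilibrium_scan(N, E_per_capita, F_per_capita)
```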

Bearing those theoretical limitations in mind, the author undertook testing the above equations on empirical data, in a compound dataset, made of Penn Tables 9.0 (Feenstra et al. 2015[2]), enriched with data published by the World Bank (regarding the consumption of energy and its structure regarding ‘renewable <> non–renewable’), as well as with data published by FAO with respect to the overall nutritive intake in particular countries. Data regarding energy, and that pertaining to the intake of food, is limited, in both cases, to the period 1990 – 2014, and the initial, temporal extension of Penn Tables 9.0 (from 1950 to 2014) has been truncated accordingly. For the same reasons, i.e. the availability of empirical data, the original, geographical scope of the sample has been reduced from 188 countries to just 116. Each country has been treated as a local equilibrium, as the initial intuition of the whole research was to find out the role of renewable energies for local populations, as well as local idiosyncrasies regarding that role. Preliminary tests aimed at finding workable combinations of empirical variables. This is another specificity of the Cobb – Douglas production function: in its original spirit, it is supposed to work with absolute quantities observable in real life. These real-life quantities are supposed to fit into the equation, without being transformed into logarithms, or into standardized values. Once again, this is a consequence of the mathematical path chosen, combined with the hypotheses possible to test with that mathematical tool: we are looking for a general equilibrium between aggregates. Of course, an equilibrium between logarithms can be searched for just as well, similarly to an equilibrium between standardized positions, but these are distinct equilibriums.

After preliminary tests, equation ‘N = A*E^µ*F^(1-µ)’, thus operating with absolute amounts of food and energy, proved not to be workable at all. The resulting scale factors were far below 1, i.e. the modelled compound inputs of food and energy produced modelled populations much overshot above the actual ones. On the other hand, the mutated equation ‘N = A*(E/N)^µ*(F/N)^(1-µ)’ proved operational. The empirical variables able to yield plausibly robust scale factors A were: final use of energy per capita, in tons of oil equivalent (factor E/N), and alimentary intake of energy per capita, measured annually in mega-calories (thousands of kcal), and averaged over the period studied. Thus, the empirical mutation that produced reasonably robust results was the one where a relatively volatile (i.e. changing every year) consumption of energy is accompanied by a long-term, de facto constant over time, alimentary status of the given national population. In other words, robust results could be obtained with an implicit assumption that alimentary conditions in each population studied change much more slowly than the technological context, which, in turn, determines the consumption of energy per capita. On the left side of the equation, those two explanatory variables were matched with population measured in millions. Wrapping up the results of those preliminary tests, the theoretical tool used for this research had been narrowed down to an empirical situation where, over the period 1990 – 2014, each million of people in a given country in a given year was being tested for sustainability, regarding the currently available quantity of tons of oil equivalent per capita per year, in non-edible energies, as well as regarding the long-term, annual amount of mega-calories per capita, in alimentary intake.

The author is well aware that all this theoretical path-clearing could have been truly boring for the reader, but it seemed necessary, as this is the point when real surprises started emerging. I was ambitious and impatient in my research, and thus I immediately jumped to testing equation ‘N = A*(E/N)^µ*(F/N)^(1-µ)’ with just the renewable energies in the game, after having eliminated all the non-renewable part of final consumption of energy. The initial expectation was to find some plausible local equilibriums, with the scale factor A close to 1 and displaying sufficiently low a variance, in just some local populations. Denmark, Britain, Germany – these were the places where I expected to find those equilibriums. Stable demographics, a well-developed energy base, no official food deficit: this was the type of social environment which I expected to produce that theoretical equilibrium, and yet, I expected to find a lot of variance in the local factors of scale A. Denmark seemed to behave according to expectations: it yielded an empirical equation N = A*(Renewable energy per capita)^0,68*(Alimentary intake per capita)^0,32, with 1 – 0,68 = 0,32. The scale factor A hit a surprising robustness: its average value over 1990 – 2014 was 1,008202138, with a variance var(A) = 0,059873591. I quickly tested its Scandinavian neighbours: Norway, Sweden, and Finland. Finland yielded a higher exponent for renewable energy per capita, namely µ = 0,85, but the scale factor A was similarly robust, making 1,065855419 on average and displaying a variance equal to 0,021967408. With Norway, results started puzzling me: µ = 0,95, average A = 1,019025526 with a variance of 0,002937442. Those results would roughly mean that whilst in Denmark the availability of renewable energies has a predominant role in producing a viable general equilibrium in population, in Norway it has a quasi-monopole in shaping the same equilibrium. Cultural clichés started working at this moment, in my mind. Norway? That cold country with low density of population, where people, over centuries, just had to eat a lot in order to survive winters, and the population of this country is almost exclusively in equilibrium with available renewable energies? Sweden marked some kind of a return to the expected state of nature: µ = 0,77, average A = 1,012941105 with a variance of 0,003898173. Once again, surprisingly robust, but fitting into some kind of predicted state.

What I could already see at this point was that my model produced robust results, but they were not quite what I expected. If one takes a look at the map of the world, Scandinavia is relatively small a region, with quite similar natural conditions for human settlement across all the four countries. Similar climate, similar geology, similar access to wind power and water power, similar social structures as well. Still, my model yielded surprisingly marked, local idiosyncrasies across just this small region, and all those local idiosyncrasies were mathematically solid, regarding the variance observable in their scale factors A. This was just the beginning of my puzzlement. I moved South in my testing, to countries like Germany, France and Britain. Germany: µ = 0,31, average A = 1,008843147 with a variance of 0,0363637. One second, µ = 0,31? But just next door North, in Denmark, µ = 0,68, doesn’t it? How is it possible? France yielded a robust equilibrium, with average A = 1,021262046 and its variance at 0,002151713, with µ = 0,38. Britain: µ = 0,3, whilst average A = 1,028817158 and variance in A making 0,017810219. In science, you are generally expected to discover things, but when you discover too much, it causes a sense of discomfort. I had that ‘No, no way, there must be some mistake’ approach to the results I have just presented. The degree of disparity in those nationally observed functions of general equilibrium between population, food, and energy strongly suggested the presence of some purely arithmetical disturbance. Of course, there was that little voice in the back of my head, saying that absolute aggregates (i.e. not the ratios of intensity per capita) did not yield any acceptable equilibrium, and, consequently, there could be something real about the results I obtained, but I had a lot of doubts.

I thought, for a day or two, that the statistics supplied by the World Bank, regarding the share of renewable energies in the overall final consumption of energy, might be somehow inaccurate. It could be something about the mutual compatibility of data collected from national statistical offices. Fortunately, methods of quantitative analysis of economic phenomena supply a reliable method of checking the robustness of both the model, and the empirical data I am testing it with. You supplant one empirical variable with another one, possibly similar in its logical meaning, and you retest. This is what I did. I assumed that the gross, final consumption of energy, in tons of oil equivalent per capita, might be more reliable than the estimated shares of renewable sources in that total. Thus, I tested the same equations, for the same set of countries, this time with the total consumption of energy per capita. It is worth quoting the results of that second test regarding the same countries. Denmark: average scale factor A = 1,007673381 with an observable variance of 0,006893499, and all that in an equation where µ = 0,93. At this point, I felt, once again, as if I were discovering too much at once. Denmark yielded virtually the same scale factor A, and the same variance in A, with two different metrics of energy consumed per capita (total, and just the renewable one), with two different values of the exponent µ. Two different equilibriums with two different bases, each as robust as the other. Logically, it meant the existence of a clearly cut substitution between renewable energies and the non-renewable ones. Why? I will try to explain it with a metaphor. If I manage to stabilize a car, when changing its tyres, with two hydraulic lifters, and then I take away one of the lifters and the car remains stable, it means that the remaining lifter can do the work of the two. This one tool is the substitute of two tools, at a rate of 2 to 1. In this case, I had the population of Denmark stabilized both on the overall consumption of energy per capita (two lifters), and on just the consumption of renewable energies (one lifter). Total consumption of energy stabilizes population at µ = 0,93 and renewable energies do the same at µ = 0,68. Logically, renewable energies are substitutes for non-renewables with a rate of substitution equal to 0,93/0,68 = 1,367647059. Each ton of oil equivalent in renewable energies consumed per capita, in Denmark, can do the job of some 1,37 tons of non-renewable energies.

Finland was another source of puzzlement: A = 0,788769669, variance of A equal to 0,002606412, and µ = 0,99. Even ascribing to the exponent µ the highest possible value at the second decimal point, i.e. µ = 0,99, I could not get a model population lower than the real one. The model yielded some kind of demographic aggregate much higher than the real population, and the most interesting thing was that this model population seemed correlated with the real one. I could know it by the very low variance in the scale factor A. It meant that Finland, as an environment for human settlement, can perfectly sustain its present headcount with just renewable energies, and if the non-renewables are being dropped into the model, the same territory has a significant, unexploited potential for demographic growth. The rate of substitution between renewable energies and the non-renewable ones, this time, seemed to be 0,99/0,85 = 1,164705882. Norway yielded similar results, with the total consumption of energy per capita on the right side of the equation: A = 0,760631741, variance in A equal to 0,001570101, µ = 0,99, substitution rate 1,042105263. Sweden turned out to be similar to Denmark: A = 1,018026405 with a variance of 0,004626486, µ = 0,91, substitution rate 1,181818182. The four Scandinavian countries seem to form an environment where energy plays a decisive role in stabilizing the local populations, and renewable energies seem to be able to do the job perfectly. The retesting of Germany, France, and Britain brought interesting results, too. Germany: A = 1,009335161 with a variance of 0,000335601, at µ = 0,48, with a substitution rate of renewables to non-renewables equal to 1,548387097. France: A = 1,019371541, variance of A at 0,001953865, µ = 0,53, substitution at 1,394736842. Finally, Britain: A = 1,028560563 with a variance of 0,006711585, µ = 0,52, substitution rate 1,733333333. Some kind of pattern seems to emerge: the greater the relative weight of energy in producing general equilibrium in population, the greater the substitution rate between renewable energies and the non-renewable ones.
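Just to make that arithmetic explicit: the substitution rates quoted above are nothing more than quotients of the two exponents, and the short snippet below recomputes them from the values reported in this chapter.

```python
# Exponents µ reported above: with total energy per capita vs. with renewables only
mu_total     = {'Denmark': 0.93, 'Finland': 0.99, 'Norway': 0.99, 'Sweden': 0.91,
                'Germany': 0.48, 'France': 0.53, 'Britain': 0.52}
mu_renewable = {'Denmark': 0.68, 'Finland': 0.85, 'Norway': 0.95, 'Sweden': 0.77,
                'Germany': 0.31, 'France': 0.38, 'Britain': 0.30}

for country, mu_t in mu_total.items():
    # Rate of substitution of renewables for non-renewables = µ(total) / µ(renewable)
    print(country, round(mu_t / mu_renewable[country], 2))
# e.g. Denmark: 0.93 / 0.68 ≈ 1.37 tons of non-renewables replaced per ton of renewables
```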

At this point I was pretty certain that I was using a robust model. So many local equilibriums, produced with different empirical variables, could not be the result of a mistake. Table 1, in the Appendix to Chapter I, gives the results of testing equation (3), with the above-mentioned empirical variables, in 116 countries. The first numerical column of the table gives the arithmetical average of the scale factor ‘A’, calculated over the period studied, i.e. 1990 – 2014. The second column provides the variance of ‘A’ over the same period of time (thus the variance between the annual values of A), and the third specifies the value of the parameter ‘µ’ – the exponent ascribed to energy use per capita – at which the given values of A have been obtained. In other words, the mean A and the variance of A specify how close to the equilibrium assumed in equation (3) it has been possible to come in the case of a given country, and the value of µ is the one that produces that neighbourhood of equilibrium. The results from Table 1 seem to confirm that equation (3), with these precise empirical variables, is robust in the great majority of cases.

Most countries studied satisfy the conditions stated earlier: variances in the scale factor ‘A’ are really low, and the average value of ‘A’ can be brought just above 1. Still, exceptions abound regarding the theoretical assumption of energy use being the dominant factor that shapes the size of the population. In many cases, the value of the exponent µ that allows a neighbourhood of equilibrium is far below µ = 0,5. According to the underlying logic of the model, the magnitude of µ is informative about how strong an impact the differentiation and substitution (between renewable energies and the non-renewable ones) have on the size of the population in a given time and place. In countries with µ > 0,5, population is being built mostly through access to energy, and through substitution between various forms of energy. Conversely, in countries displaying µ < 0,5, access to food, and internal substitution between various forms of food, becomes more important regarding demographic change. The United States of America comes as one of the big surprises. In this respect, the empirical check brings a lot of idiosyncrasies to the initial lines of the theoretical model.
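To make that fitting procedure a bit more tangible, here is a rough sketch, in Python, of how the exponent µ reported in Table 1 could be located for one country. The selection rule coded below – keep the µ that yields the lowest variance in A while the mean of A stays at or just above 1 – is my assumption for the sake of illustration; the actual implementation of condition (6) may differ in detail.

from statistics import mean, pvariance

def fit_mu(population, energy_per_capita, food_per_capita):
    # Scan the exponent µ from 0.01 to 0.99 and keep the value whose implied
    # scale factors A are the most robust: lowest variance, mean at or just above 1.
    best = None
    for k in range(1, 100):
        mu = k / 100
        A = [n / ((e ** mu) * (f ** (1.0 - mu)))
             for n, e, f in zip(population, energy_per_capita, food_per_capita)]
        m, v = mean(A), pvariance(A)
        if m >= 1.0 and (best is None or v < best[2]):
            best = (mu, m, v)
    return best   # (µ, mean of A, variance of A), or None if no µ brings the mean of A to 1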

Countries marked with a (!) are exceptions with respect to the magnitude of the scale factor ‘A’. They are: China, India, Cyprus, Estonia, Gabon, Iceland, Luxembourg, New Zealand, Norway, Slovenia, as well as Trinidad and Tobago. They present a common trait of satisfactorily low a variance in the scale factor ‘A’, in conformity with condition (6), but a mean ‘A’ either unusually high (China A = 1.32, India A = 1.40), or unusually low (e.g. Iceland A = 0.02), whatever the value of the exponent ‘µ’. It could be just a technical limitation of the model: when operating on absolute, non-transformed values, the actual magnitudes of variance on both sides of the equation matter. Motor traffic is an example: if the number of engine-powered vehicles in a country grows spectacularly, in the presence of a demographic standstill, variance on the right side is much greater than on the left side, and this can affect the scale factor. Yet, the variances observable in the scale factor ‘A’, with respect to those exceptional cases, are quite low, and a fundamental explanation is possible. Those countries could be the cases where the available amounts of food and energy either cannot really produce as big a population as there really is (China, India), or, conversely, could produce a much bigger population than the current one (Iceland is the most striking example). From this point of view, the model could be able to identify territories with no room left for further demographic growth, and those with comfortable pockets of food and energy to sustain much bigger populations. An interpretation in terms of economic geography is also plausible: these could be situations where official, national borders cut through human habitats, as determined by energy and food, rather than enclosing them.

Partially wrapping it up, the results in Table 1 demonstrate that equation (3) of the model is both robust and apt to identify local idiosyncrasies. The blade having been sharpened, the next step of the empirical check consisted in replacing the overall consumption of energy per capita with just the consumption of renewable energies, as calculated on the grounds of data published by the World Bank, and in retesting equation (3) on the same countries. Table 2, in the Appendix to Chapter I, shows the results of those 116 tests. The presentational convention is the same (just keep in mind that values of A and µ now correspond to renewable energy in the equation), and the last column of the table supplies a quotient, which, fault of a better expression, is named ‘rate of substitution between renewable and non-renewable energies’. The meaning of that substitution quotient appears as one studies the values observed in the scale factor ‘A’. In the great majority of countries, save for exceptions marked with (!), it was possible to define a neighbourhood of equilibrium regarding equation (3) and condition (6). Exceptions are treated as such, this time, mostly due to unusually (and unacceptably) high a variance in the scale factor ‘A’. They are countries where deriving population from access to food and renewable energies is a bit dubious, regarding the robustness of prediction with equation (3).

The provisional bottom line is that for most countries, it is possible to derive, plausibly, the size of the population in the given place and time from both the overall consumption of energy, and from the use of just the renewable energies, in the presence of relatively constant an alimentary intake. Similar national idiosyncrasies appear as in Table 1, but this time, another idiosyncrasy pops up: the gap between the µ exponents in the two empirical mutations of equation (3). The µ ascribed to renewable energy per capita is always lower than the µ corresponding to the total use of energy – for the sake of presentational convenience they are further addressed as, respectively, µ(R/N) and µ(E/N) – but the proportions between those two exponents vary greatly between countries. It is useful to go once again through the logic of µ. It is the exponent which has to be ascribed to the consumption of energy per capita in order to produce a neighbourhood of equilibrium in population, in the presence of relatively constant an alimentary regime. For each individual country, both µ(R/N) and µ(E/N) correspond to virtually the same mean and variance in the scale factor ‘A’. If both the total use of energy, and just the consumption of renewable energies, can produce such a neighbourhood of equilibrium, the quotient ‘µ(E/N)/µ(R/N)’ reflects the amount of total energy use, in tons of oil equivalent per capita, which can be replaced by one ton of oil equivalent per capita in renewable energies, whilst keeping that neighbourhood of equilibrium. Thus, the quotient µ(E/N)/µ(R/N) can be considered as a levelled, long-term rate of substitution between renewable energies and the non-renewable ones.
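In plain numbers, the substitution rate is just the quotient of the two fitted exponents. The little snippet below recomputes it for a few of the countries quoted earlier; the exponents are the ones given in the text, and the dictionary layout is mine, purely for illustration.

# (µ(E/N), µ(R/N)) pairs quoted in the text for a few countries
exponents = {
    "Denmark": (0.93, 0.68),
    "Finland": (0.99, 0.85),
    "Norway":  (0.99, 0.95),
    "Sweden":  (0.91, 0.77),
}

for country, (mu_total, mu_renewable) in exponents.items():
    print(country, round(mu_total / mu_renewable, 3))   # e.g. Denmark -> 1.368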

One possible objection is to be dealt with at this point. In practically all the countries studied, populations use a mix of energies: renewable plus non-renewable. The amount of renewable energies used per capita is always lower than the total use of energy. Mathematically, the magnitude of µ(R/N) is always smaller than the one observable in µ(E/N). Hence, the quotient µ(E/N)/µ(R/N) is bound to be greater than one, and the resulting substitution ratio could be considered as just a mathematical trick. Still, the key issue here is that both ‘(E/N)^µ’ and ‘(R/N)^µ’ can produce a neighbourhood of equilibrium with a robust scale factor. Translating maths into the facts of life, the combined results of Tables 1 and 2 (see Appendix) strongly suggest that renewable energies can reliably produce a general equilibrium in, and sustain, any population on the planet, with a given supply of food. If one factor is supplied in a relatively smaller amount than another and, other things held constant, its supply can produce the same general equilibrium as the supply of the other, then the former is a natural substitute of the latter at a rate greater than one. Thus, µ(E/N)/µ(R/N) > 1 is far more than just a mathematical accident: it seems to be a structural property of our human civilisation.

Still, it is interesting how far µ(E/N)/µ(R/N) reaches beyond the 1:1 substitution. In this respect, probably the most interesting insight is offered by the exceptions, i.e. the countries marked with (!), where the model fails to supply a 100%-robust scale factor in either of the two empirical mutations performed on equation (3). Interestingly, in those cases the rate of substitution is exactly µ(E/N)/µ(R/N) = 1. Populations either too big, or too small, regarding their endowment in energy, do not really have obvious gains in sustainability when switching to renewables. Such a µ(E/N)/µ(R/N) > 1 substitution occurs only when the actual population is very close to what can be modelled with equation (3). Two countries – Saudi Arabia and Turkmenistan – offer an interesting insight into the underlying logic of the µ(E/N)/µ(R/N) quotient. They both present µ(E/N)/µ(R/N) > 2. Coherently with the explanation supplied above, it means that substituting renewable energies for the non-renewable ones, in those two countries, could fundamentally change their social structures and sustain much bigger populations. Intriguingly, they are both ‘resource-cursed’ economies, with oil and gas taking so big a chunk of economic activity that there is hardly any room left for anything else.

Most countries on the planet, with the notable exceptions of China and India, seem able to sustain significantly bigger populations than their present ones through shifting to 100% renewable energies. In the two ‘resource-cursed’ cases, namely Saudi Arabia and Turkmenistan, this demographic shift, made possible by renewable energies, seems nothing less than dramatic. As I was progressively wrapping my mind around it, a fundamental question formed: what exactly am I measuring with that exponent µ? I returned to the source of my inspiration, namely to the model presented by Paul Krugman in 1991 (Krugman 1991 op. cit.). Of the two factors on the right side of that equation, the one endowed with the dominant power is, at the same time, the motor force behind the spatial structuring of human settlement. I have, as a matter of fact, three factors in my model: non-edible renewable energy, substitutable to non-edible and non-renewable energy, and the consumption of food per capita. As I contemplate these three factors, a realisation dawns: none of the three can be maximized or even optimized directly. When I use more electricity than I did five years earlier, it is not because I plug my fingers more frequently into the electric socket: I shape my consumption of energy through a bundle of technologies that I use. As for the availability of food, the same occurs: with the rare exception of top-level athletes, the caloric intake is the by-product of a lifestyle (office clerk vs construction site worker) rather than a fully conscious, purposeful action. Each of the three factors is being absorbed through a set of technologies. Here, some readers may ask: if I grow vegetables in my own garden, isn’t it far-fetched to call it a technology? If we were living in a civilisation which feeds itself exclusively with home-grown vegetables, that could be an exaggeration, I agree. Yet, we are a civilisation which has developed a huge range of technologies in industrial farming. Vegetables grown in my garden are substitutes to foodstuffs supplied from industrially run farms, as well as to industrially processed food. If something is functionally a substitute to a technology, it is a technology, too. The exponents obtained, according to my model, for particular factors, in individual countries, reflect the relative pace of technological change in three fundamental fields of technology, namely:

  a) Everything that makes us use non-edible energies, ranging from a refrigerator to a smartphone; here, we are mostly talking about two broad types of technologies, namely engines of all kinds, and electronic devices.
  b) Technologies that create choice between the renewable and the non-renewable sources of energy, thus first and foremost the technologies of generating electricity: windmills, watermills, photovoltaic installations, solar-thermal plants etc. They are, for the most part, one step earlier in the chain of energy than the technologies mentioned in (a).
  c) Technologies connected to the production and consumption of food, composed into a long chain, with side-branches, starting from farming, through the processing of food, and ending with packaging, distribution, vending and gastronomy.

As I tested the theoretical equation N = A*(E/N)^µ*(F/N)^(1-µ), most countries yielded a plausible, robust equilibrium between the local (national) headcount, and the specific, local mix of technologies grouped in those three categories. A question emerges, as a hypothesis to explore: is it possible that our collective intelligence expresses itself in creating such local technological mixes of engines, electronics, power generation, and alimentary technologies as would, in turn, allow us to optimize our population? Can technological change be interpreted as an intelligent, energy-maximizing adaptation?

Appendix to Chapter I

Table 1 Parameters of the function: Population = (Energy use per capita[3])^µ * (Food intake per capita[4])^(1-µ)

Country name | Average daily intake of food, in kcal per capita | Mean scale factor ‘A’ over 1990 – 2014 | Variance in the scale factor ‘A’ over 1990 – 2014 | The exponent ‘µ’ of the ‘energy per capita’ factor
Albania 2787,5 1,028719088 0,048263309 0,78
Algeria 2962,5 1,00792777 0,003115684 0,5
Angola 1747,5 1,042983003 0,034821077 0,52
Argentina 3085 1,05449632 0,001338937 0,53
Armenia 2087,5 1,027874602 0,083587662 0,8
Australia 3120 1,053845754 0,005038742 0,77
Austria 3685 1,021793945 0,002591508 0,87
Azerbaijan 2465 1,006243759 0,044217939 0,74
Bangladesh 2082,5 1,045244854 0,007102476 0,21
Belarus 3142,5 1,041609177 0,016347323 0,8
Belgium 3655 1,004454515 0,003480147 0,88
Benin 2372,5 1,030339133 0,034533869 0,61
Bolivia (Plurinational State of) 2097,5 1,019990919 0,003429637 0,62
Bosnia and Herzegovina (!) 2862,5 1,037385012 0,214843872 0,81
Botswana 2222,5 1,068786155 0,009163141 0,92
Brazil 2907,5 1,013624942 0,003643215 0,26
Bulgaria 2847,5 1,058220643 0,005405994 0,82
Cameroon 2110 1,021629875 0,051074111 0,5
Canada 3345 1,036202396 0,007687519 0,73
Chile 2785 1,027291576 0,003554446 0,65
China (!) 2832,5 1,328918607 0,002814054 0,01
Colombia 2582,5 1,074031013 0,013875766 0,44
Congo 2222,5 1,078933108 0,024472619 0,71
Costa Rica 2802,5 1,050377494 0,005668136 0,78
Côte d’Ivoire 2460 1,004959783 0,007587564 0,52
Croatia 2655 1,072976483 0,009344081 0,72
Cyprus (!) 3185 0,325015959 0,00212915 0,99
Czech Republic 3192,5 1,004089056 0,002061036 0,84
Denmark 3335 1,007673381 0,006893499 0,93
Dominican Republic 2217,5 1,062919767 0,006550924 0,65
Ecuador 2225 1,072013967 0,00294547 0,6
Egypt 3172,5 1,036345512 0,004306619 0,38
El Salvador 2510 1,013036366 0,004187964 0,7
Estonia (!) 2980 0,329425185 0,001662589 0,99
Ethiopia 1747,5 1,073625398 0,039032523 0,31
Finland (!) 3147,5 0,788769669 0,002606412 0,99
France 3557,5 1,019371541 0,001953865 0,53
Gabon (!) 2622,5 0,961643759 0,016248519 0,99
Georgia 2350 1,044229266 0,059636113 0,76
Germany 3440 1,009335161 0,000335601 0,48
Ghana 2532,5 1,000098029 0,047085907 0,48
Greece 3610 1,063074 0,003756555 0,77
Haiti 1815 1,038427773 0,004246483 0,56
Honduras 2457,5 1,030624938 0,005692923 0,67
Hungary 3440 1,024235523 0,001350114 0,78
Iceland (!) 3150 0,025191922 2,57214E-05 0,99
India (!) 2307,5 1,403800869 0,024395268 0,01
Indonesia 2497,5 1,001768442 0,004578895 0,2
Iran (Islamic Republic of) 3030 1,034945678 0,001105326 0,45
Ireland 3622,5 1,007003095 0,017135706 0,96
Israel 3490 1,008446182 0,013265865 0,87
Italy 3615 1,007727182 0,001245927 0,51
Jamaica 2712,5 1,056188543 0,01979275 0,9
Japan 2875 1,0094237 0,000359135 0,38
Jordan 2820 1,015861129 0,031905756 0,77
Kazakhstan 3135 1,01095925 0,021868381 0,74
Kenya 2010 1,018667155 0,02914075 0,42
Kyrgyzstan 2502,5 1,009443502 0,053751489 0,71
Latvia 3015 1,010440502 0,023191031 0,98
Lebanon 3045 1,036073511 0,054610186 0,85
Lithuania 3152,5 1,008092894 0,025234007 0,96
Luxembourg (!) 3632,5 0,052543325 6,62285E-05 0,99
Malaysia 2855 1,017853322 0,001002682 0,61
Mauritius 2847,5 1,070576731 0,019964794 0,96
Mexico 3165 1,01483014 0,009376118 0,36
Mongolia 2147,5 1,061731985 0,030246541 0,9
Morocco 3095 1,07892333 0,000418636 0,47
Mozambique 1922,5 1,023422366 0,041833717 0,48
Nepal 2250 1,059720031 0,006741455 0,46
Netherlands 2925 1,040887411 0,000689576 0,78
New Zealand (!) 2785 0,913678062 0,003946867 0,99
Nicaragua 2102,5 1,045412214 0,007065561 0,69
Nigeria 2527,5 1,069148598 0,032086946 0,28
Norway (!) 3340 0,760631741 0,001570101 0,99
Pakistan 2275 1,062522698 0,020995863 0,24
Panama 2347,5 1,007449033 0,00243433 0,81
Paraguay 2570 1,07179452 0,021405906 0,73
Peru 2280 1,050166142 0,00327043 0,47
Philippines 2387,5 1,0478458 0,022165841 0,32
Poland 3365 1,004848541 0,000688294 0,56
Portugal 3512,5 1,036215564 0,006604633 0,76
Republic of Korea 3027,5 1,01734341 0,011440406 0,56
Republic of Moldova 2762,5 1,002387234 0,038541243 0,8
Romania 3207,5 1,003204035 0,003181708 0,62
Russian Federation 3032,5 1,050934925 0,001953049 0,38
Saudi Arabia 2980 1,026310231 0,007502008 0,72
Senegal 2187,5 1,05981161 0,021382472 0,54
Serbia and Montenegro 2787,5 1,0392151 0,012416926 0,8
Slovakia 2875 1,011063497 0,002657276 0,92
Slovenia (!) 3042,5 0,583332004 0,003458657 0,99
South Africa 2882,5 1,053438343 0,009139913 0,53
Spain 3322,5 1,061083277 0,004844361 0,56
Sri Lanka 2287,5 1,029495671 0,001531167 0,5
Sudan 2122,5 1,028532781 0,044393335 0,4
Sweden 3072,5 1,018026405 0,004626486 0,91
Switzerland 3385 1,047790357 0,007713383 0,88
Syrian Arab Republic 2970 1,010909679 0,017849377 0,59
Tajikistan 2012,5 1,004745997 0,078394669 0,62
Thailand 2420 1,05305435 0,004200173 0,41
The former Yugoslav Republic of Macedonia 2755 1,064764097 0,003242024 0,95
Togo 2020 1,007094875 0,014424982 0,66
Trinidad and Tobago (!) 2645 0,152994618 0,003781236 0,99
Tunisia 3230 1,053626454 0,001201886 0,66
Turkey 3510 1,02188909 0,001740729 0,43
Turkmenistan 2620 1,003674668 0,024196536 0,96
Ukraine 3040 1,044110717 0,005180992 0,54
United Kingdom 3340 1,028560563 0,006711585 0,52
United Republic of Tanzania 1987,5 1,074441381 0,031503549 0,41
United States of America 3637,5 1,023273537 0,006401009 0,3
Uruguay 2760 1,014226024 0,019409309 0,82
Uzbekistan 2550 1,056807711 0,031469698 0,59
Venezuela (Bolivarian Republic of) 2480 1,048332115 0,012077362 0,6
Viet Nam 2425 1,050131152 0,000866138 0,31
Yemen 2005 1,076332698 0,029772287 0,47
Zambia 1937,5 1,0479534 0,044241343 0,59
Zimbabwe 2035 1,063047787 0,022242317 0,6

Source: author’s

 

Table 2 Parameters of the function: Population = (Renewable energy use per capita[5])^µ * (Food intake per capita[6])^(1-µ)

Country name | Mean scale factor ‘A’ over 1990 – 2014 | Variance in the scale factor ‘A’ over 1990 – 2014 | The exponent ‘µ’ of the ‘renewable energy per capita’ factor | The rate of substitution between renewable and non-renewable energies[7]
Albania 1,063726823 0,015575246 0,7 1,114285714
Algeria 1,058584384 0,044309122 0,44 1,136363636
Angola 1,044147837 0,063942546 0,49 1,06122449
Argentina 1,039249286 0,005115111 0,39 1,358974359
Armenia 1,082452967 0,023421839 0,59 1,355932203
Australia 1,036777388 0,009700331 0,52 1,480769231
Austria 1,017958672 0,007854467 0,71 1,225352113
Azerbaijan 1,07623299 0,009740098 0,47 1,574468085
Bangladesh 1,088818696 0,017086232 0,2 1,05
Belarus (!) 1,017676486 0,142728478 0,51 1,568627451
Belgium 1,06314732 0,095474709 0,52 1,692307692
Benin (!) 1,045986178 0,101094528 0,58 1,051724138
Bolivia (Plurinational State of) 1,078219551 0,034143037 0,53 1,169811321
Bosnia and Herzegovina 1,077445974 0,084400986 0,66 1,227272727
Botswana 1,022264687 0,056890261 0,79 1,164556962
Brazil 1,066438509 0,005012883 0,24 1,083333333
Bulgaria (!) 1,022253185 0,190476288 0,55 1,490909091
Cameroon 1,040548202 0,059668736 0,5 1
Canada 1,02539319 0,005170473 0,56 1,303571429
Chile 1,006307911 0,001159941 0,55 1,181818182
China 1,347729029 0,003248871 0,01 1
Colombia 1,016164864 0,019413193 0,37 1,189189189
Congo 1,041474959 0,030195913 0,67 1,059701493
Costa Rica 1,008081248 0,01876342 0,68 1,147058824
Côte d’Ivoire 1,013057174 0,009833628 0,5 1,04
Croatia 1,072976483 0,009344081 0,72 1
Cyprus (!) 1,042370253 0,838872562 0,72 1,375
Czech Republic 1,036681212 0,044847525 0,56 1,5
Denmark 1,008202138 0,059873591 0,68 1,367647059
Dominican Republic 1,069124974 0,020305242 0,53 1,226415094
Ecuador 1,008104202 0,025383593 0,47 1,276595745
Egypt 1,03122058 0,016484947 0,28 1,357142857
El Salvador 1,078008598 0,028182822 0,64 1,09375
Estonia (!) 1,062618744 0,418196957 0,88 1,125
Ethiopia 1,01313572 0,036192629 0,3 1,033333333
Finland 1,065855419 0,021967408 0,85 1,164705882
France 1,021262046 0,002151713 0,38 1,394736842
Gabon 1,065944525 0,011751745 0,97 1,020618557
Georgia 1,011709194 0,012808503 0,66 1,151515152
Germany 1,008843147 0,03636378 0,31 1,548387097
Ghana (!) 1,065885579 0,106721005 0,46 1,043478261
Greece 1,033613511 0,009328533 0,55 1,4
Haiti 1,009030442 0,005061414 0,54 1,037037037
Honduras 1,028253048 0,022719417 0,62 1,080645161
Hungary 1,086698434 0,022955955 0,54 1,444444444
Iceland 0,041518305 0,000158837 0,99 1
India 1,414055357 0,025335408 0,01 1
Indonesia 1,003393135 0,008680379 0,18 1,111111111
Iran (Islamic Republic of) 1,06172763 0,011215001 0,26 1,730769231
Ireland 1,075982896 0,02796979 0,61 1,573770492
Israel 1,06421352 0,004086618 0,61 1,426229508
Italy 1,072302127 0,020049639 0,36 1,416666667
Jamaica 1,002749054 0,010620317 0,67 1,343283582
Japan 1,082461225 0,000372112 0,25 1,52
Jordan 1,025652757 0,024889809 0,5 1,54
Kazakhstan 1,078500526 0,007887364 0,44 1,681818182
Kenya 1,039952786 0,031445338 0,41 1,024390244
Kyrgyzstan 1,036451717 0,011487047 0,6 1,183333333
Latvia 1,02535782 0,044807273 0,83 1,180722892
Lebanon 1,050444418 0,053181784 0,6 1,416666667
Lithuania (!) 1,076146779 0,241465686 0,72 1,333333333
Luxembourg (!) 1,080780192 0,197582319 0,93 1,064516129
Malaysia 1,018207799 0,034303031 0,42 1,452380952
Mauritius 1,081652351 0,082673843 0,79 1,215189873
Mexico 1,01253558 0,019098478 0,27 1,333333333
Mongolia 1,073924505 0,017542414 0,6 1,5
Morocco 1,054779512 0,005553697 0,38 1,236842105
Mozambique 1,062086076 0,047101957 0,48 1
Nepal 1,02819587 0,008319264 0,45 1,022222222
Netherlands 1,079123029 0,043322084 0,46 1,695652174
New Zealand 1,046855187 0,004522505 0,83 1,192771084
Nicaragua 1,034941617 0,021798159 0,64 1,078125
Nigeria 1,03609124 0,030236501 0,27 1,037037037
Norway 1,019025526 0,002937442 0,95 1,042105263
Pakistan 1,068995505 0,026598749 0,22 1,090909091
Panama 1,001556162 0,038760767 0,69 1,173913043
Paraguay 1,049861415 0,030603983 0,69 1,057971014
Peru 1,06820116 0,008122931 0,41 1,146341463
Philippines 1,045289953 0,035957042 0,28 1,142857143
Poland 1,035431925 0,035915212 0,39 1,435897436
Portugal 1,044901969 0,003371242 0,62 1,225806452
Republic of Korea 1,06776762 0,017697832 0,31 1,806451613
Republic of Moldova 1,009542233 0,033772795 0,55 1,454545455
Romania 1,011030974 0,079875735 0,47 1,319148936
Russian Federation 1,083901796 0,000876184 0,24 1,583333333
Saudi Arabia 1,099133179 0,080054524 0,27 2,666666667
Senegal 1,019171218 0,032304226 0,49 1,102040816
Serbia and Montenegro 1,042141223 0,00377058 0,63 1,26984127
Slovakia 1,062546838 0,08862799 0,61 1,508196721
Slovenia 1,00512965 0,039266211 0,81 1,222222222
South Africa 1,056957556 0,012656394 0,41 1,292682927
Spain 1,017435095 0,002522983 0,4 1,4
Sri Lanka 1,003117252 0,000607856 0,47 1,063829787
Sudan 1,00209188 0,060026529 0,38 1,052631579
Sweden 1,012941105 0,003898173 0,77 1,181818182
Switzerland 1,07331184 0,000878485 0,69 1,275362319
Syrian Arab Republic 1,048889583 0,03494333 0,38 1,552631579
Tajikistan 1,03533923 0,055646586 0,58 1,068965517
Thailand 1,012034765 0,002131649 0,33 1,242424242
The former Yugoslav Republic of Macedonia (!) 1,021262823 0,379532891 0,72 1,319444444
Togo 1,030339186 0,024874996 0,64 1,03125
Trinidad and Tobago 1,086840331 0,014786844 0,69 1,434782609
Tunisia 1,042654904 0,000806403 0,52 1,269230769
Turkey 1,0821418 0,019688124 0,35 1,228571429
Turkmenistan (!) 1,037854925 0,614587094 0,38 2,526315789
Ukraine 1,022041527 0,026351574 0,31 1,741935484
United Kingdom 1,028817158 0,017810219 0,3 1,733333333
United Republic of Tanzania 1,0319973 0,033120507 0,4 1,025
United States of America 1,001298132 0,001300399 0,19 1,578947368
Uruguay 1,025162405 0,027221297 0,73 1,123287671
Uzbekistan 1,105591195 0,008303345 0,36 1,638888889
Venezuela (Bolivarian Republic of) 1,044353155 0,012830255 0,45 1,333333333
Viet Nam 1,005825608 0,003779368 0,28 1,107142857
Yemen 1,072879389 0,058580323 0,3 1,566666667
Zambia 1,045147143 0,038548336 0,58 1,017241379
Zimbabwe 1,030974989 0,008692551 0,57 1,052631579

Source: author’s

[1] Charles W. Cobb, Paul H. Douglas, 1928, A Theory of Production, The American Economic Review, Volume 18, Issue 1, Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139 – 165

[2] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at www.ggdc.net/pwt

[3] Current annual use per capita, in tons of oil equivalent

[4] Annual caloric intake in mega-calories (1000 kcal) per capita, averaged over 1990 – 2014.

[5] Current annual use per capita, in tons of oil equivalent

[6] Annual caloric intake in mega-calories (1000 kcal) per capita, averaged over 1990 – 2014.

[7] This is the ratio of two exponents, namely: µ(total energy use per capita) / µ(renewable energy use per capita)

Experimenting with new structures

My editorial

I return to doing my review of literature in order to find accurate distinctions as for historical changes. Besides those articles on the history of technology as such, I found an interesting paper by Bert J.M. de Vries, Detlef P. van Vuuren, and Monique M. Hoogwijk (de Vries et al. 2007[1]). It is interesting because the authors attempted to assess the potential for producing renewable energies for the first half of the 21st century, and they did it in 2007, so exactly at the moment when, according to my calculations, something really changed in the sector of renewable energies and it started developing significantly faster than before. It is interesting to see how other researchers saw the future at a moment which is already the past, and quite a significant bit of past. As usual, I start from the end. The kitchen door I can poke my head through, regarding this paper, is Appendix B, presenting in a concise, tabular form, the so-called four scenarios developed by the authors. So I rummage through those scenarios, and one thing instantaneously jumps to my eye: the absence of the solar-thermal technology. Today, solar-thermal is probably the most powerful technology for producing renewable energy, yet, when this paper was being written, so not later than in 2007, the solar-thermal technology was still in its infancy. It was precisely in 2007 that the World Bank declassified and published the first reports about the big Ouarzazate project in Morocco, and only in 2011 did it become more or less clear how to finance such projects. As I am engaged in research on renewable energies, I will probably mention that solar-thermal technology more than once, and for now I am content with noticing that solar-thermal changed a lot in the way we perceive the future of renewable energies.

Returning to this paper by Bert J.M. de Vries, Detlef P. van Vuuren, and Monique M. Hoogwijk, and to their scenarios, I see four basic technologies being taken into account: wind, solar photovoltaic, biomass electric, and liquid biomass. As a matter of fact, we have another technology, besides the solar-thermal, which strikes by its absence at this precise party: the hydro, in its many shades. Right, probably the authors did not have access to the relevant data. I continue studying this table in Appendix B, and what strikes me as odd is that each technology is described with a different set of quantitative variables. It is easy to see that the authors had a hard time bringing different technologies to a common denominator. As I stop clinging to that Appendix B and browse through the whole paper, I can see a lot of maths but very few numbers. It is hard to say what exactly was the empirical basis for the scenarios presented. On the whole, this paper by de Vries et al. is interesting for some methodological insights, but a little disappointing. I can see nothing that looks like real discovery.

And so I am drifting towards other sources, and I come by an interesting author, David Edgerton, and his book ‘Shock of the old: Technology and global history since 1900’ (Edgerton 2011[2]). I am fishing the most interesting bits out of this study, and I find some insights worth stopping by and thinking about. First of all, David Edgerton points out that we commonly live in an illusion of constant technological progress, i.e. of a process which consistently brings improvement in the conditions of living as technology changes. Edgerton shows, quite convincingly, that technological change is not necessarily to be equated with technological progress. According to his findings, there were just a few periods of real technological progress since 1900: one between 1900 and 1913, followed by another between 1950 and 1973. Save for those short windows in time, the link between technological change and the conditions of living is really ambiguous.

In Section 4 of his book, entitled ‘Maintenance’, David Edgerton brings forth an intuition that I very much share: that the way we deal with both the physical wear and tear of our technology, and with its moral obsolescence, is deeply significant to our overall well-being due to technology. There is that interesting thing I noticed in Chinese cities, this summer: Chinese people do hardly any renovation in buildings. Save for those occasions when a store is being replaced by another and the facade needs some obvious patching, buildings in China seem to live a life similar to that of clothes: when they are used up, they are demolished and something new is built in the same location (they do put a lot of effort into maintaining infrastructure, mind you). In Europe, we are almost obsessively attached to maintaining our buildings in good condition, and we spend a lot on renovation. Two cultures, two completely different roles of maintenance in the construction business. At this point, the research presented by David Edgerton gets strong support in earlier work by the eminent French historian Fernand Braudel (Braudel 1981[3], 1983[4]): real technological revolutions happened, in many fields of human ingenuity, when the technology in place allowed providing for more than simple maintenance. Both authors (Edgerton and Braudel) point to a turning point in the history of agriculture, probably by the end of the 19th century, when the technology of farming made it possible to spend relatively less effort on rebuilding the fertility of the soil.

Another great insight to find with David Edgerton is the importance of killing in technological change. Warfare, hunting, whaling, combatting pathogens, eradicating parasitic species from our agricultural structures – it all means killing, and it apparently has had a tremendous impact on technological change in the 20th century. This dramatic and deep reference to one of our fundamental functions as organisms (yes, my dear vegans, we all kill someone or something, even without knowing we do) makes me think about the elaborate metaphor of struggling civilisations to be found in Arnold Toynbee’s ‘Study of History’[5]. Development of societies is not an easy game: it is struggle, risk, sometimes stagnation. We, in our civilisation, as it is now, at the beginning of the 21st century, have no grounds to assume we are different from those who struggled before us. Arnold Toynbee coined this metaphor of struggle in an even broader context: how can we distinguish between civilisations? His point was that criteria like race, skin colour or even religious beliefs are misleading in classifying cultures and societies. To him, the real distinction pertained to the state of the struggle, so to say: is there a visible breakthrough in the past of the given social group, a kind of dramatic threshold which made those people take a completely different turn? If the answer is ‘yes’, then we could very well have two different civilisations, before and after that pivotal point in time. If not, we have, in fact, the same cultural paradigm, just dressed differently according to the current fashion.

All this review of literature about the history of technology, and about renewable energies, makes me ask once more that fundamental question: what exactly happened in 2007 – 2008, when the market of renewable energies suddenly moved its lazy ass and started growing much faster than before? Hypothesis #1: it was mostly technological a change, possibly connected to the commoditisation of photovoltaic modules, or to the concurrent emergence of the solar-thermal technology. Hypothesis #2, just a shade away from #1, is that we are talking about technological change in the immediate vicinity of renewable sources of energy. Maybe something about electronics, nanotechnology, transport? Here, although the hypothesis is really interesting, I am very much at a loss. I will have to do some book-worming about it. Hypothesis #3: what happened in 2007 – 2008 was mostly social a change. Something shifted in our patterns of being together in society, which made people turn towards renewable energies more than before. That something could pertain to urbanisation, density of population, food deficit, velocity of money and whatnot. I have already done a lot of empirical testing in this respect.

Now, I am shifting my focus slightly, onto my personal experience in innovating, namely in learning Python. As I have been wrestling with Python those last four days, I became aware of a pattern in my behaviour: I start with trying to make the new technology do exactly the same things that the old technology used to do for me, then I progressively become aware that I need to learn new skills in order to get those results (so I have that strange connection: new technology, old expectations regarding the outcomes, new skills to learn), and I also become aware that I need to reshuffle my resources, namely information, so as to squeeze it into new structures. Now, I am advancing one step further and I try to go the other way round: I had a look at what the logical structures of Python look like, and what I am trying to do now is to discover what Python can possibly do for me. In other words, I am progressively passing from using Python as just a harder-to-tame version of Excel towards discovering what it can do for me.

I am playing with one of the first things I found in the ‘Tutorial’ textbook, available at docs.python.org, namely with the structure ‘for n in range(x, y):’, where you then specify what you want Python to do while n stays in the given range. Here below is a simple try on my part:

>>> for x in range(1, 10):
            structure={'x':(x**2, x**0,5)}
            list(structure)

['x']
['x']
['x']
['x']
['x']
['x']
['x']
['x']
['x']

# Honestly, I don’t even know why I formed this precise function. It just came like that. I imagined a structure where a cardinal order is established, from 1 to 10, and each point in the order has two coordinates derived from its position: its square and its square root. It is nice, and it didn’t exactly work as I planned. I got ['x'] repeated nine times. As I put that ‘x’ in single quotes, it has been treated as a string – a piece of content – rather than a variable. Now, I modify my structure slightly: I take off those single quotes. Here is the result:

>>> for x in range(1, 10):
            structure={x:{x**2, x**0,5}}
            list(structure)

[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]

# Better a small step forward than no steps at all, as they say. After having removed those single quotes from around the x, I got a cardinal order in response. Still, the function did not yield the coordinates I expected to get out of it. Now, I replace the command ‘list()’ with the command ‘print()’. So far, I have not quite grasped the distinction between the two, so I decided to experiment in order to acquire some knowledge, and here is what I got:

>>> for x in range(1, 10):
            structure={x:{x**2, x**0,5}}
            print(structure)

{1: {1, 5}}
{2: {1, 4, 5}}
{3: {9, 5, 1}}
{4: {16, 1, 5}}
{5: {25, 5, 1}}
{6: {1, 36, 5}}
{7: {49, 5, 1}}
{8: {64, 1, 5}}
{9: {81, 5, 1}}

# Now, this is most interesting. As I replaced ‘list()’ with ‘print()’, the function finally yielded the coordinates. There is just one tiny little problem: in 8 cases out of 9 – all except the cardinal 1 – I have three coordinates instead of two. Why? Oh, I see. I put ‘0,5’ instead of ‘0.5’. For Python, decimal values are accepted as such only if the decimal separator is literally the point, not the comma. The expression ‘x**0,5’, which I meant to be x to the power 0.5, has been read by Python as x to the power 0, followed by a separate 5. OK, I understand. My mistake. I replace the ‘0,5’ with ‘0.5’, by the book, and I launch it again.

>>> for x in range(1, 10):
            structure={x:{x**2, x**0.5}}
            print(structure)

{1: {1}}
{2: {1.4142135623730951, 4}}
{3: {9, 1.7320508075688772}}
{4: {16, 2.0}}
{5: {25, 2.23606797749979}}
{6: {2.449489742783178, 36}}
{7: {49, 2.6457513110645907}}
{8: {64, 2.8284271247461903}}
{9: {81, 3.0}}

#Finally, it worked. Now, I push my experiment further and I introduce a third coordinate, the natural logarithm of the cardinal position.  

>>> import math   # I turn on the module with mathematical functions
>>> for x in range(1, 10):
            structure={x:{x**2, x**0.5, math.log(x)}}
            print(structure)

{1: {0.0, 1}}
{2: {0.6931471805599453, 1.4142135623730951, 4}}
{3: {9, 1.0986122886681098, 1.7320508075688772}}
{4: {16, 1.3862943611198906, 2.0}}
{5: {25, 2.23606797749979, 1.6094379124341003}}
{6: {1.791759469228055, 2.449489742783178, 36}}
{7: {49, 2.6457513110645907, 1.9459101490553132}}
{8: {64, 2.8284271247461903, 2.0794415416798357}}
{9: {81, 2.1972245773362196, 3.0}}

Good. It seems to work. What I have just succeeded in doing is learning a typical, very simple structure in Python, and this structure does something slightly different from Excel: it generates a sequence of specific structures out of a general, logical structure I specified. I am wrapping up what I have learnt from my learning: it took me four days to start looking actively for new possibilities offered by Python. If I extrapolate to the scale of collective behaviour, we have those four patterns of innovating: a) trying to obtain old outcomes with the new technology; b) making mistakes, seeing my efficiency plummet, and learning new skills; c) rearranging my resources for the new technology; d) experimenting and exploring the new possibilities which come with the new technology. As I refer this generalized account of my individual experience to the literature I quoted a few paragraphs earlier, a question arises: how does a breakthrough occur in these specific patterns of behaviour? I can assume there is a critical amount of learning and adaptation required in the presence of a new technology, which can possibly – referring once more to Arnold Toynbee’s metaphor of struggling civilisations – make a technological transition risky, impossible, or null in its balanced outcomes.
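By the way, now that the decimal point issue is solved, the same structure can be built in one go, with a dictionary comprehension and tuples (which, unlike sets, keep the coordinates in a fixed order). This is just a compact alternative to the step-by-step experiments above, not a piece of the original transcript:

import math

# One dictionary mapping each cardinal position to an ordered tuple of coordinates:
# (square, square root, natural logarithm)
structure = {x: (x**2, x**0.5, math.log(x)) for x in range(1, 10)}

for x, coordinates in structure.items():
    print(x, coordinates)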

[1] de Vries, Bert J.M., van Vuuren, Detlef P., Hoogwijk Monique M., 2007, Renewable energy sources: Their global potential for the first-half of the 21st century at a global level: An integrated approach, Energy Policy, vol. 35 (2007), pp. 2590–2610

[2] Edgerton, D. (2011). Shock of the old: Technology and global history since 1900. Profile books

[3] Braudel, F., 1981, Civilization and Capitalism, Vol. I: The Structures of Everyday Life, rev.ed., English Translation, William Collins Sons & Co London and Harper & Row New York, ISBN 00216303 9

[4] Braudel, F., 1983, Civilisation and Capitalism. Part II: The Wheels of Commerce, trans. Sian Reynolds, Book Club Associates, William Collins Sons & Co,

[5] Toynbee, Arnold J., A Study of History, University Press, 1946, p. 69

The path of thinking, which has brought me to think what I am thinking now

My editorial

I am thinking about the path of research to take from where I am now. A good thing, in the view of defining that path, would be to know exactly where I am now, mind you. I feel like summarising a chunk of my work, approximately the last three weeks, maybe more. As I finished that article about technological change seen as an intelligent, energy-maximizing adaptation, I kind of went back to my idea of local communities being powered at 100% by renewable energies. I wanted to lay some kind of scientific foundations for a business plan that a local community could use to go green at 100%. More or less intuitively, I don’t really know why exactly, I connected this quite practical idea to Bayesian statistics, and I went straight for the kill, so to say, by studying the foundational paper of this whole intellectual stream, the one from 1763 (Bayes, Price 1763[1]). I wanted to connect the idea of local communities based entirely on renewable energies to that of a local cryptocurrency (i.e. based on the Blockchain technology), somehow attached to the local market of energy. As I made this connection, I kind of put back to back the original paper by Thomas Bayes with that by Satoshi Nakamoto, the equally mysterious intellectual father of the Bitcoin. Empirically, I did some testing at the level of national data about the final consumption of energy, and about the primary output of electricity, I mean about the share of renewable energy in these. What I have, out of that empirical testing, is quite a lot of linear models, where I multiple-regress the shares, or the amounts, of renewable energies on a range of socio-economic variables. Those multiple regressions brought some seemingly solid stuff. The share of renewable energies in the primary output of electricity is closely correlated with the overall dynamics in the final consumption of energy: the faster the growth of that total market of energy, the greater the likelihood of shifting the production of electricity towards renewables. As far as dynamics are concerned, the years 2007 – 2008 seem to have marked some kind of threshold: until then, the size of the global market in renewable energies had been growing at a slower pace than the total market of energy, whilst since then, those paces have switched, and the renewables have been growing faster than the whole market. I am still wrapping my mind around that fact. The structure of economic input, understood in terms of the production function, matters as well. Labour-intensive societies seem to be more prone to going green in their energy base than the capital-intensive ones. As I was testing those models, I intuitively used the density of population as a control variable. You know, that variable which is not quite inside the model, but kind of sitting by and supervising. I tested my models in separate quantiles of density in population, and some interesting distinctions came out of it. As I tested the same model in consecutive sextiles of density in population, the model went through a cycle of change, with the most explanatory power, and the most robust correlations, occurring in the presence of the highest density of population.
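For the record, the mechanics of that testing are nothing fancy: an ordinary least-squares regression, run separately within each quantile of the control variable. The sketch below shows the skeleton of one such regression in Python; the variable names and the randomly generated data are placeholders for illustration, not my actual dataset.

import numpy as np

# Hypothetical design: regress the share of renewables in electricity output on a few
# socio-economic variables, within one density-of-population quantile.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    np.ones(n),                   # intercept
    rng.normal(10, 2, n),         # log GDP per capita (placeholder)
    rng.normal(0.6, 0.1, n),      # labour share in economic input (placeholder)
    rng.normal(2.0, 0.5, n),      # growth of final energy consumption, % (placeholder)
])
y = rng.normal(0.2, 0.05, n)      # share of renewables (placeholder)

coefficients, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(coefficients)               # fitted intercept and slopes for this quantile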

I feel like asking myself why I have been doing what I have been doing. I know, for sure, that the ‘why?’ question is abyssal, and a more practical way of answering it consists in hammering it into a ‘how?’. What has been my process? Step 1: I finish an article, and I come to the conclusion that I can discuss technological change in the human civilisation as a process of absorbing as much energy as we can, and of adapting to maximise that absorption through an evolutionary pattern similar to sexual selection. Step 2: I blow some dust off my earlier idea of local communities based on renewable energies. What was the passage from Step 1 to Step 2? What had been crystallising in my brain at the time? Let’s advance step by step. If I think about local communities, I am thinking about a dispersed structure, kind of a network, made of separate and yet interconnected nodes. I was probably trying to translate those big, global paradigms, which I had identified before, into local phenomena, the kind you can experience whilst walking down the street, starting a new small business, or looking for a new job. My thinking about local communities going 100% green in their energy base could be an expression of an even deeper and less articulate thinking about how we, humans, in our social structure, maximize that absorption of energy I wrote about in my last article.

Good, now Step 3: I take on the root theory of Bayesian statistics. What made me take that turn? I remember I started to read that paper by pure curiosity. I like reading the classics, very much because only by reading them do I discover how much bulls*** has been said about their ideas. What attracted my attention, I think, in the original theory by Thomas Bayes, was that vision of a semi-ordered universe, limited by previous events, and the attempt to assess the odds of having a predictable number of successes over quite a small number of trials, a number so small that it defies the large-number logic of expected values, in the style of De Moivre – Laplace. I was visibly thinking about people, in local communities, making their choices, taking a limited number of trials at achieving some outcome, and continuing or giving up, according to said outcomes. I think I was trying, at the time, to grasp the process of maximizing the absorption of energy as a sequence of individual and collective choices, achieved through trial and error, with that trial and error chaining into itself, i.e. creating a process marked by hysteresis.

Step 4: putting the model of the Bitcoin, by Satoshi Nakamoto, back to back with the original logic by Thomas Bayes. The logic used by Satoshi Nakamoto, back in the day, was that of a race, inside a network, between a crook trying to abuse the others, and a chained reaction from the part of ‘honest’ nodes. The questions asked were: how quick does a crook have to be in order to overcome the chained reaction of the network? How big, and how quick on the uptake, does the network have to be in order to fend the crook off? I was visibly thinking about rivalling processes, where rivalry comes down to overtaking and controlling consecutive nodes in a network. What kind of processes could I have had in mind? Well, the most obvious candidates are the processes of absorbing energy: we strive to maximise our absorption of energy, we have the choice between renewable energies and the rest (fossils plus nuclear), and those choices are chained, and they are chained so as to unfold in time at various speeds. I think that when I put Thomas Bayes and Satoshi Nakamoto on the same school bench, the undertow of my thinking was something like: how do the choices we make influence the further choices we make, and how does that chain of choices impact the speed at which the market of renewable energy develops, as compared to the market of other energy sources?

Step 5: empirical tests, those multiple regressions in a big database made of ‘country – year’ observations. Here, at least, I am pretty much at home with my own thinking: I know I habitually represent in my mind those big economic measures, like GDP per capita, or density of population, or the percentage of green energy in my electric socket, as the outcome of complex choices made by simple people, including myself. As I did that regressing, I probably, subconsciously, wanted to understand how some types of economic choices we make impact other types of choices, more specifically those connected to energy. I found some consistent patterns at this stage of research. Choices about the work we do and about professional activity, and about the wages we pay and receive, are significant to the choices about energy. The very basic choice to live in a given place, so to cluster together with other humans, has a word or two to say, as well. The choices we make about consuming energy, and more specifically the choice of consuming more energy than the year before, are very important for the switch towards the renewables. Now, I noticed that turning point, in 2007 – 2008. Following the same logic, 2007 – 2008 must have been the point in time when the aggregate outcomes of individual decisions concerning work, wages, settlement and the consumption of energy summed up into a change observable at the global scale. Those outcomes are likely to come out, in fact, from a long chain of choices, where the Bayesian space of available options has been sequentially changing under the impact of past choices, and where the Bitcoin-like race of rivalling technologies took place.

Step 6: my recent review of literature about the history of technology showed me a dominant path of discussion, namely that of technological determinism, and, kind of on the margin of that, the so-called Moore’s law of exponentially growing complexity in one particular technology: electronics. What did I want to understand by reviewing that literature? I think I wanted some ready-made (well, maybe bespoke) patterns, to dress my empirical findings for posh occasions, such as a conference, an article, or a book. I found out, with surprise, that the same logic of ‘choice >> technology >> social change >> choice etc.’ has been followed by many other authors and that it is, actually, the dominant way of thinking about the history of technology. Right, this is the path of thinking which has brought me to think what I am thinking now. Now, what questions do I want to answer, after this brief recapitulation? First of all, how to determine the Bayesian rectangle of occurrences regarding the possible future of renewable energies, and what is that rectangle actually likely to be? Answering this question means doing something we, economists, are second to none at doing poorly: forecasting. Splendid. Secondly, how does that Bayesian rectangle of limited choice depend on the place a given population lives in, and how does that geographical disparity impact the general scenario for our civilisation as a whole? Thirdly, what kind of social change is likely to follow along?

[1] Bayes, T., Price, R., 1763, An Essay towards Solving a Problem in the Doctrine of Chances. By the Late Rev. Mr. Bayes, F.R.S., Communicated by Mr. Price, in a Letter to John Canton, A.M.F.R.S., Philosophical Transactions (1683-1775), 1763, pp. 370–418

Provisionally saying goodbye from the year 1990

My editorial

I think that yesterday, in my update in French (see ‘Que s’est-il passé aux alentours de 2007-2008 ?’), I finally nailed down that central fact about renewable energies, the kind of fact to base a scientific publication on: around 2007 – 2008, something changed in the proportions between renewable energies and the overall consumption of energy – the former started growing faster than the latter. Still, I have to bring a small correction to what I wrote yesterday: a similar episode of faster growth in the sector of renewables was observable in 1991 – 1992 as well. Interestingly, both breakthrough episodes in the growth of the renewables’ market took place concurrently with major socio-economic shake-ups. In 1991-1992, it was the Big Reshuffling in Central and Eastern Europe, the short financial crisis of 1990, and soon after, another financial hiccup in Asia. In 2007-2008, the prime suspect as for the mechanics of change is, of course, the global financial crisis, although this time, a more interesting landscape emerges: it is precisely during 2007 – 2008 that the global supply of money exceeded 100% of the global GDP, on a durable trend, and the urban population in the world exceeded 50% of the total, once again well launched on a long-term ascending trend. That second variable looks interesting. If you have been following my writing for some time, you have probably noticed I am slightly obsessed with certain social metrics, the kind to use as control variables in a model in order to distinguish between various types of social structures. Those metrics serve me to introduce, kind of via the kitchen door, qualitative distinctions into a quantitative model. The percentage of us, humans, living in towns and cities is kind of correlated with the density of population, but just kind of. In my own database, over n = 9 455 ‘country-year’ observations, those two are Pearson-correlated just at r = 0.240, so nothing to write home about. In a simple, linear regression, the density of population explains just 1,4% of the total variance observed in the urbanisation ratio. Nothing to write home about either. You have countries like Australia or Canada, where most people live in cities – and when I say ‘most’, it is like 80% – and yet their density of population still leaves a lot of empty geography available.

As interesting as this variable of urban population could be, my current focus is slightly different. I am talking about a specific change over time. I am talking history, and history requires a specific point of view. I feel like poking my head over someone else’s shoulder, preferably someone specialized in the history of technology. And so I come by that article, written by Lewis Mumford, about the link between technology and political order (Mumford 1964[1]). Lewis Mumford went as far as claiming that the whole development of human technology, since the first bloody piece of flint had been sharpened, is like a permanent tension between technologies associated with autocratic politics, on the one hand, and those accompanying democratic orders, on the other hand. Whenever a technology requires centralized pooling of resources, or works just fine with centralized control, it naturally flirts with autocracy. Conversely, when a given technology leads to more autonomy in dispersed, local communities, when it mobilizes local resources locally rather than pooling them, then we have a ‘democratic’ technology. Interesting a point of view that we have here, especially regarding renewable sources of energy. You can find out by yourself, with the International Renewable Energy Agency (www.irena.org), that Europe is quite different from other regions of the world with respect to the system of distribution in renewable energies. In Europe, practically any generating facility is connected to the power grid, and there seems to be no off-grid renewable energy whatsoever, whereas on other continents, the off-grid power systems seem to be serious drivers of market development in renewable energies. In Lewis Mumford’s terms, Europe is energetically autocratic, whilst other continents have quite a bit of energy coming out of democratic structures.

I jump in time, twenty years ahead, into 1984, and I find that paper by Donald MacKenzie, a humorous reflection on the mutual links between technology and social orders (MacKenzie 1984[2]). In social sciences, by this corner table where economists, sociologists, and historians tend to drink together, a concept has emerged since more or less Karl Marx’s ‘Capital’: it is something called ‘technological determinism’. It means that technologies and social structures mate and reproduce together. A bridge that allows free passage to buses transporting poor people – to cite one of MacKenzie’s original examples – mates preferably with a social structure open to receiving said poor people travelling by buses, and rejects the advances of an elitist social structure, where poor people just stay on the other side of the river. Whilst Donald MacKenzie didn’t come to any firm conclusions about technological determinism, and rather opened than closed the path of reflection, the idea of technologies breeding with social structures struck that evolutionary chord in my mind. If they mate, they must have mating preferences, which, in turn, creates hierarchies based on how attractive a technology looks to a social structure, or vice versa. An interesting path further down this road is sexual differentiation. When it comes to mating, you basically have male and female, with all the respect due to other instances. The male organism is specialized in packaging and sending its own genetic code in a way that does not (always) kill the recipient. Them viruses haven’t figured it out yet. Maybe they need another couple of billions of years. Anyway, the male organism Federal Expresses its genetic code to the female organism, which, in turn, has that special ability to combine its own genetic code with the parcel received from the male, and to develop (literally) on the idea. In the couple ‘social structure and technology’, the former is more likely to have those female properties than the latter. I mean, social structures can mix technologies, but technologies cannot really mix social structures. Well, well, well, professor MacKenzie, did I read your mind, recently? This is exactly what I came up with in that article, currently in review with ‘The Journal of Economic and Social Thought’. I think I have already said it at some point, but it always warms my heart to know that I am not totally insane in my scientific ways.

Right, so if we have a quick campfire between me, professor Lewis Mumford, and professor Donald MacKenzie, interesting hypotheses can be roasted. That sudden acceleration in the market for renewable energies is likely to be connected to some major, structural, social change. In that change, a new type of female social structure, overflowing with sexual attraction, could be having that embarrassing, yet always enriching experience with many new, male technologies at once, and a new tribe of technological babies could be just colonizing the surroundings. As political systems in the world seem to be going towards simplification, and to be withdrawing from business in the strict sense, the transition towards renewable energies could be, in spite of the European love for big, strong power grids, a technological democratisation.

Good, but I am going to be slightly more prudent with those time jumps, now. Dire consequences could emerge otherwise. So I advance just very gently, four years ahead, to 1988, and I grab that article by Michael S. Mahoney, about the history of computing technology (Mahoney 1988[3]). This is a weird feeling. I am reading an article written in the past, containing a lot of predictions and speculations about something that was the future back then, and which, at the same time, is both past and future for me. One sentence has particularly attracted my attention in this paper (page 14): ‘What would it mean for a microcomputer to play the role of the Model T in determining new social, economic, and political patterns?’. The Model T in question refers, of course, to the Ford Model T, or the simple and affordable version of something revolutionary. Clearly, the computing technology, mating with those attractive, blond social structures around it, has engendered a whole clan of Models T. I am writing with one right now, and a few minutes ago I had a conversation with my wife using another one. Still, as applied to renewable energies, that metaphor of the Model T is truly informative. The change in the dynamics of the market of renewable energies, observable in 2007 – 2008, concurs with the beginning of the end of expensive photovoltaic modules. As I study the annual reports of big companies, like Canadian Solar or First Solar, they all point to 2007 – 2008 as the moment when the prices of solar modules took a deep and prolonged dive.

I make another two year-long strides in my research, I land in 1990, and I find that article by Paul A. David (David 1990[4]). This is another piece of literature in that larger stream, which wanders into the ambiguous and foggy realm of carryovers from technological change into improvements of productivity. In 1990 it was already obvious that technological change and growth of productivity do not necessarily go hand in hand. Professor David tries to provide an explanation by comparing modern technological changes to historical ones. His message is the following: technology and society need time to adapt to each other, and the more complex the technology, and the more it can branch into other technologies, the more time the social structure needs to become familiar with it. Paul A. David states something that initially looks like a paradox: the more revolutionary a technology seems, due to its novelty, the more time we need to swallow it. In fact, there is no such thing as a technological revolution, because the deeper the change, the longer it takes. Professor David’s argument makes me wonder where we are, in terms of slow revolutions, with the technologies of renewable energy. Is our social structure just beginning to wrap itself around those solar modules and windmills? Or is it already a mature relationship?

Right, science is fun, but it’s time for me to attend to my other commitments. Provisionally, I am saying goodbye to you from the year 1990.

[1] Mumford, L., 1964, Authoritarian and Democratic Technics, Technology and Culture, Vol. 5, No. 1 (Winter, 1964), pp. 1-8

[2] MacKenzie, D., 1984, Marx and the Machine, Technology and Culture, Vol. 25, No. 3. (Jul., 1984), pp. 473-502.

[3] Mahoney, M.S., 1988, The History of Computing in the History of Technology, Annals of the History of Computing, Vol. 10 (1988), pp. 113-125

[4] David, P. A., 1990, The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox, The American Economic Review, Vol. 80, No. 2, pp. 355-361