4 units of quantity in technological assets to make one unit of quantity in final goods

My editorial on YouTube

I am writing a book right now, and I am rather taken up with it, so I blog much less frequently than I planned. Just to keep up with the commitment which any blogger has sort of imprinted in their mind, namely to deliver some meaningful content, I am publishing, in this update, the outline of my first chapter.

It has become almost a truism that we live in a world of increasingly rapid technological change. When a statement becomes almost a cliché, it is useful to pass it under review, just to be sure that we understand what the statement is about. From the very pragmatic perspective of an entrepreneur, or, for that matter, of an infrastructure engineer, technological change means that something old needs to be coupled with or replaced by something new. When a new technology comes around, it is like a demon: it is essentially an idea, frequently eligible for protection through intellectual property rights, and that idea looks for a body to sneak into. Humans are supposed to supply the body, and they can do it in two different ways. The first way is to tell the new idea to coexist with some older ones, i.e. to embody the new technology in equipment and solutions coupled functionally with older ones. Take any operating system for computers or mobile phones. At the moment of release, the people disseminating it claim it is brand new, but scratch the surface just a little and you find 10-year-old algorithms underneath. Yes, they are old, and yes, they still work.

The other way to embody a new technological concept is to make it supplant older ones completely. We do it reluctantly, yet sometimes it really looks like the better idea. Electric cars are a good example of this approach. Initially, the basic idea seems to have consisted in putting electric motors into the otherwise unchanged structure of vehicles propelled by combustion engines. Still, electric propulsion is heavier, as we need to drive those batteries around. Significantly greater weight means the necessity to rethink steering, suspension, structural stability etc., whence the need to design a new structure.

Whichever way of embodying new technological concepts we choose, our equipment ages. It ages physically and morally, in varying proportions. Aging in technologies is called depreciation. Physical depreciation means physical wear and destruction in a piece of equipment. As it happens – and it happens to anything used frequently, e.g. shoes – we choose between repairing and replacing the worn-out parts. Whatever we do, it requires resources. From the economic point of view, it requires capital. As strange as it may sound, physical depreciation occurs in the world of digital technologies, too. When a large digital system, e.g. that of an airport, is being run, something apparently uncanny happens: some component algorithms of that system just stop working properly under the burden of too much data, and they need to be replaced sort of on the go, without putting the whole system on hold. Of course, the essential cause of that phenomenon is the disproportion between the computational scale of pre-implementation tests and that of real-life operation. Still, the interesting thing about those on-the-go patches of the system is that they are not fundamentally new, i.e. they do not express any new concept. They are otherwise known, field-tested solutions, and they have to be this way in order to work. Programmers who implement those patches do not invent new digital technologies; they just keep the incumbent ones running. They repair something broken with something that works smoothly. Functionally, it is very much like repairing a fleet of vehicles in an express delivery business.

As we take care of the physical depreciation occurring in our incumbent equipment and software, new solutions come to the market, and let's be honest: they are usually better than what we have at the moment. The technologies we hold become comparatively less and less modern as new ones appear. That phenomenon of aging by obsolescence is called moral depreciation. The proportions in that cocktail of physical and moral depreciation depend on the pace of the technological race in the given industry. When a lot of alternative, mutually competing solutions emerge, moral obsolescence accelerates and tends to become the dominant factor of aging in our technological assets. Moral depreciation creates a tension: as we watch the state of the art in our industry moving progressively away from our current technological position, determined by the assets we hold, we find ourselves under growing pressure to do something about it. Finally, we come to the point of deciding to invest in something definitely more up to date than what we currently have.

Both layers of depreciation – physical and moral – absorb capital. It seems pertinent to explain how exactly they do so. We need money to pay for the goods and services necessary for repairing and replacing the physically worn-out parts of our technological basket. We obviously need money to pay for completely new equipment, too. Where does that money come from? Are there any patterns as to its sourcing? The first and most obvious source of money to finance depreciation in our assets is the financial scheme of amortization. In many legal regimes, i.e. in all developed countries and in a large number of emerging and developing economies, an entity in possession of assets subject to depreciation is allowed to subtract a legally determined amount from its income tax base, in order to provide for depreciation.

The legally allowed amount of amortization is calculated as a percentage of the book value ascribed to the corresponding assets, and this percentage is based on their assumed useful life. If a machine is supposed to have a useful life of five years, once its physical and moral depreciation is accounted for, I can subtract from my tax base 1/5th = 20% of its book value. Question: which exact book value, the initial one or the current one? It depends on the kind of deal an entrepreneur makes with the tax authorities. Three alternative schedules are possible: linear, decreasing, and increasing. When I do linear amortization, I take the initial value of the machine, e.g. $200 000, I divide it into 5 equal parts right after the purchase, thus into 5 instalments of $40 000 each, and I subtract those instalments annually from my tax base, starting from the current year. After linear amortization is over, the book value of the machine is exactly zero.
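A minimal sketch of that linear schedule, in Python, with the $200 000 machine and the 5-year useful life taken as assumed inputs:

```python
# Linear (straight-line) amortization: equal annual write-offs.
# Assumed inputs: initial book value of $200,000 and a useful life of 5 years.
initial_value = 200_000
useful_life = 5
annual_write_off = initial_value / useful_life  # 1/5 = 20% of the initial value

book_value = initial_value
for year in range(1, useful_life + 1):
    book_value -= annual_write_off
    print(f"Year {year}: write-off {annual_write_off:,.0f}, book value {book_value:,.0f}")
# After year 5 the book value is exactly zero.
```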

Should I choose decreasing amortization, I take the current value of my machine as the basis for the 20% reduction of my tax base. The first year, the machine is brand new, worth $200 000, and so I amortize 20% * $200 000 = $40 000. The next year, i.e. in the second year of exploitation, I start with my machine being worth $200 000 – $40 000 = (1 – 20%) * $200 000 = $160 000. I repeat the same operation of amortizing 20% of the current book value, and I do: $160 000 – 20% * $160 000 = $160 000 – $32 000 = $128 000. I subtracted $32 000 from my tax base in this second year of exploitation (of the machine), and, at the end of the fiscal year, I landed with my machine being worth $128 000 net of amortization. A careful reader will notice that decreasing amortization is, by definition, a non-linear function tending asymptotically towards zero. It is a never-ending story, and a paradox. I assume a useful life of 5 years for my machine, hence I subtract 1/5th = 20% of its current value from my tax base, and yet the process of amortization takes de facto longer than 5 years and has no clear end. After 5 years of amortization, my machine is worth $65 536 net of amortization, and I can keep going. The machine is technically dead as useful technology, but I still have it in my assets.
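Running the same assumed numbers through that declining-balance schedule confirms the asymptotic behaviour described above; this is only an illustrative sketch:

```python
# Decreasing (declining-balance) amortization: 20% of the *current* book value each year.
initial_value = 200_000
rate = 0.20

book_value = initial_value
for year in range(1, 6):
    write_off = rate * book_value
    book_value -= write_off
    print(f"Year {year}: write-off {write_off:,.0f}, book value {book_value:,.0f}")
# Year 5 ends with a book value of 65,536 = 200,000 * (1 - 0.20)**5, and the
# schedule never reaches exactly zero, however long it runs.
```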

Increasing amortization is based on more elaborate assumptions than the two preceding methods. I assume that my machine will depreciate over time at an accelerating pace, e.g. 10% of the initial value in the first year, 20% annually over years 2 – 4, and 30% in the 5th year. The underlying logic is that of progressively diving into the stream of the technological race: the longer I have my technology, the greater the likelihood that someone comes up with something definitely more modern. With the same assumption of $200 000 as initial investment, that makes me write off the following amounts from my tax base: 1st year – $20 000, years 2 – 4 – $40 000 each, 5th year – $60 000. After 5 years, the net value of my equipment is zero.
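A quick sketch of that increasing schedule, with the same assumed $200 000 machine, just to check that the write-offs add back up to the initial value:

```python
# Increasing amortization: write-off rates accelerate over the assumed 5-year life.
initial_value = 200_000
rates = [0.10, 0.20, 0.20, 0.20, 0.30]  # 10%, then 20% for years 2-4, then 30%

book_value = initial_value
for year, rate in enumerate(rates, start=1):
    write_off = rate * initial_value  # each rate applies to the initial value
    book_value -= write_off
    print(f"Year {year}: write-off {write_off:,.0f}, book value {book_value:,.0f}")
# The rates sum to 100%, so the book value lands exactly at zero after year 5.
```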

The exact way I can amortize my assets depends largely on the legal regime in force – national governments have their little ways in that respect, using the rates of amortization as incentives for certain types of investment whilst discouraging other types – and yet there is quite a lot of financial strategy in amortization, especially in large business structures with ownership separate from management. We can notice that linear amortization gives comparatively quicker savings in terms of tax due. Still, as amortization consists in writing an amount off the tax base, we need to have a tax base at all in the first place. When I run a well-established, profitable business way past its break-even point, tax savings are a sensible idea, and so is linear amortization of my fixed assets. However, when I run a start-up, still deep in the red zone below the break-even point, there is not really any tax base to subtract amortization from. Recording comparatively greater amortization in operations already running at a loss just deepens the loss, which, at the end of the day, has to be subtracted from the equity of my business, and that doesn't look good in the eyes of my prospective investors and lenders. Relatively quick, linear amortization is a good strategy for highly profitable operations with access to lots of cash. Increasing amortization could be a good fit for that start-up, where relatively the greatest margin of operational income turns up some time after day zero of operations.

Interestingly, the least obvious logic comes with decreasing amortization. What is the practical point of amortizing my assets asymptotically down to zero, without ever reaching zero? Good question, especially in the light of a practical fact of life, which the author challenges any reader to test by themselves: most managers and accountants, especially in small and medium-sized enterprises, will intuitively amortize the company's assets precisely this way, i.e. along the decreasing path. Question: why do people do something apparently illogical? Answer: because there is a logic to it, it is just hard to spell out. What about the logic of accumulating capital? Both linear and increasing amortization lead to a point in time when the book value of the corresponding assets drops to zero. Writing a lot of value off my assets means that either I subtract the corresponding amount from the passive side of my balance sheet (i.e. I repay some loans or I give away some equity), or I compensate the write-off with new investment. Either I lose cash, or I am in need of more cash. When I am in a tight technological race, and my assets are subject to quick moral depreciation, those sudden drops to zero can put a lot of financial stress on my balance sheet. When I do something apparently detached from my technological strategy, i.e. when I amortize decreasingly, sudden capital quakes are replaced by a gentle, much more predictable descent. Predictable means, e.g., negotiable with the banks who lend me money, or with the investors buying shares in my equity.

This is an important pattern to notice in commonly encountered behaviour regarding capital goods: most people will intuitively tend to protect the capital base of their organisation, be it a regular business or a public agency. When choosing between amortizing their assets faster, so as to reflect the real pace of their ageing, or amortizing them slower, thus a bit against the real occurrence of depreciation, most people will choose the latter, as it smoothens the resulting changes in the capital base. We can notice it even in the ways most of us manage our strictly private assets. Let's take the example of an ageing car. When a car reaches the age at which an average household could consider changing it, like 3 – 4 years, only a relatively tiny fraction of the population, probably not more than 16%, will really change it for a new car. The majority (the author of this book included, by the way) will rather patch and repair, and claim that 'new cars are not as solid as those older ones'. There is a logic to that. A new car is bound to lose around 25% of its market value annually over the first 2 – 3 years of its useful life. An old car, aged 7 years or more, loses around 10% or less per year. In other words, when choosing between shining new things that age quickly and less shining old things that age slowly, only a minority of people will choose the former. The most common behavioural pattern consists in choosing the latter.

When recurrent behavioural patterns deal with important economic phenomena, such as technological change, an economic equilibrium could be poking its head from around the corner. Here comes an alternative way of denominating depreciation and amortization: instead of denominating it as a fraction of the value attributed to assets, we can denominate it over the revenue of our business. Amortization can be seen as the cost of staying in the game. The technological race takes a toll on our current business. The faster our technologies depreciate, the costlier it is to stay in the race. At the end of the day, I have to pay someone or something that helps me keep up with the technological change happening around me, i.e. I have to share, with that someone or something, a fraction of what my customers pay me for the goods and services I offer. When I hold a differentiated basket of technological assets, each ageing at a different pace and starting from a different moment in time, the aggregate capital write-off that corresponds to their amortization is the aggregate cost of keeping up with science.

Denoting the book value of assets as K, the rate of amortization (corresponding to one of the strategies sketched above) as a, the average price of the goods we sell as P, and their quantity as Q, we can sketch the considerations developed above in a more analytical way, as a coefficient labelled A, in equation (1) below.

A = (K*a)/(P*Q)         (1)

The coefficient A represents the relative burden of the aggregate amortization of all fixed assets in hand upon the revenues recorded in a set of economic agents. Equation (1) can be further transformed so as to extract quantities in both the numerator and the denominator of the fraction. The factors in the denominator of equation (1), i.e. the prices and quantities of goods sold in order to generate revenues, will be further represented as, respectively, PG and QG, whilst the book value of assets subject to amortization will be symbolized as the arithmetical product PK*QK of the market prices PK of assets and the quantity QK thereof. Additionally, we drive the rate of amortization 'a' down to what it really is, i.e. the inverse of an expected lifecycle F, measured in years and ascribed to our assets. Equation (2) below shows an analytical development in this spirit.

A = (1/F)*[(PK*QK)/(PG*QG)]        (2)
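A tiny numerical check of the two formulations, run on purely assumed, illustrative values:

```python
# Illustrative check that equations (1) and (2) coincide, on assumed numbers.
K = 3_000_000             # book value of fixed assets, i.e. PK * QK
a = 0.20                  # rate of amortization, i.e. 1/F with F = 5 years
P_G, Q_G = 50.0, 40_000   # assumed average price and quantity of final goods sold
revenue = P_G * Q_G       # $2,000,000 of revenue

A_eq1 = (K * a) / revenue          # equation (1)
F = 1 / a
A_eq2 = (1 / F) * (K / revenue)    # equation (2), with PK*QK written as K
print(A_eq1, A_eq2)                # both give 0.3: amortization absorbs 30% of revenue
```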

Before the meaning of equation (2) is explored in more depth, it is worth explaining a little mathematical trick that economists use all the time, and which usually raises doubts in the minds of bystanders. How can anyone talk about an aggregate quantity QG of goods sold, or an aggregate quantity QK of fixed assets? How can we distil those aggregate quantities out of the facts of life? If anyone in their right mind thinks about the enormous diversity of the goods we trade, and of the assets we use, how can we even set a common scale of measurement? Can we add up kilograms of BMW cars with kilograms of food consumed, and use that as a denominator for kilograms of robots summed up with kilograms of their operating software?

This is a mathematical trick, yet a useful one. When we think about any set of transactions we make, whether we buy milk or machines for a factory, we can calculate some kind of weighted average price across those transactions. When I spend $1 000 000 on a team of robots, bought at a unit price P(robot), and $500 000 on their software, bought at a price P(software), the arithmetical operation P(robot)*[$1 000 000 / ($1 000 000 + $500 000)] + P(software)*[$500 000 / ($1 000 000 + $500 000)] will yield a weighted average price P(robot; software) made up in one third of the price of software, and in two thirds of the price of robots. Mathematically, this operation is called factorisation, and we use it when we suppose the existence of a common, countable factor in a set of otherwise distinct phenomena. Once we suppose the existence of recurrent transactional prices in anything humans do, we can factorise that anything as Price Multiplied By Quantity, or P*Q. Thus, although we cannot really add up kilograms of factories with kilograms of patents, we can factorise their respective prices out of the phenomenon observed and write PK*QK. In this approach, quantity Q is a semi-metaphysical category, something like a metaphor for the overall, real amount of the things we have, make and do.
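A quick sketch of that weighted average, using the spending shares as weights; the unit prices themselves are placeholders assumed purely for illustration:

```python
# Weighted average transactional price across two heterogeneous purchases.
# The unit prices below are assumed placeholders; the weights are the spending shares.
spend_robots, spend_software = 1_000_000, 500_000
price_robot, price_software = 250_000, 50_000   # assumed unit prices

total_spend = spend_robots + spend_software
weighted_price = (price_robot * spend_robots / total_spend
                  + price_software * spend_software / total_spend)

# The aggregate "quantity" is then factorised out of the nominal value:
implied_quantity = total_spend / weighted_price
print(weighted_price, implied_quantity)
```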

Keeping those explanations in mind, let's have a look at the empirical behaviour of coefficient A, as computed according to equation (2) on the grounds of data available in Penn Tables 9.1 (Feenstra et al. 2015[1]), and represented graphically in Figure I_1 below. The database known as Penn Tables provides direct information about three big components of equation (2): the basic rate of amortization, the nominal value of fixed assets, and the nominal value of Gross Domestic Product (GDP) for each of the 182 national economies covered. One of the possible ways of thinking about the wealth of a nation is to compute the value of all the final goods and services made by said nation. According to the logic presented in the preceding paragraph, whilst the whole basket of final goods is really diversified, it is possible to nail down a weighted average transactional price P for all that lot, and, consequently, to factorise the real quantity Q out of it. Hence, the GDP of a country can be seen as a very rough approximation of the value added created by all the businesses in that territory, and changes over time in the GDP as such can be seen as representative of changes in the aggregate revenue of all those businesses.

Figure I_1 introduces two metrics, pertinent to the empirical unfolding of equation (2) over time and across countries. The continuous line shows the arithmetical average of local, national coefficients A across the whole sample of countries. The line with square markers represents the standard deviation of those national coefficients from the average represented by the continuous line. Both metrics are based on the nominal computation of the coefficient A for each year in each given national economy, thus in current prices, for each year from 1950 through 2017. Equation (2) allows for many possible sources of change in the coefficient A – including changes in the proportion between the price PG of final goods and the market price PK of fixed assets – and the nominal computation used in Figure I_1 captures that factor as well.
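A sketch of how the two curves in Figure I_1 could be computed with pandas; the file path and the capital-stock and GDP column names are placeholders assumed here for illustration, since only 'delta', the basic rate of amortization, is named explicitly further in the text:

```python
import pandas as pd

# Sketch of the computation behind Figure I_1. Column names other than 'delta'
# and 'year' are assumed placeholders for the nominal capital stock and nominal
# GDP series used in the text.
pwt = pd.read_excel("pwt91.xlsx")  # Penn Tables 9.1, path assumed

pwt["A"] = pwt["delta"] * pwt["capital_nominal"] / pwt["gdp_nominal"]

by_year = pwt.groupby("year")["A"].agg(["mean", "std"])
# 'mean' is the continuous line in Figure I_1, 'std' the square-marked one.
print(by_year.loc[[1950, 2017]])
```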

[Figure I_1_Coefficient of amortization in GDP, nominal, world, trend]


In 1950, the average national coefficient A, calculated as specified above, was equal to 6.7%. In 2017, it climbed to A = 20.8%. In other words, the average entrepreneur in 1950 would pay less than one tenth of their revenues to amortize the depreciation of technological assets, whilst in 2017 it was more than one fifth. This change in proportion can encompass many phenomena. It can reflect the pace of scientific change as such, or just a change in entrepreneurial behaviour as regards the strategies of amortization, explained above. Show business is a good example. Content is an asset for television stations, movie makers or streaming services. Content assets age, and some of them age very quickly. Take the tonight news show on any TV channel. Today's news is much less of a news item tomorrow, and definitely not news at all the next month. If you have a look at the annual financial reports of TV broadcasters, such as the American classic of the industry, CBS Corporation[1], you will see insane nominal amounts of amortization in their cash flow statements. Thus, the ascending trend of the average coefficient A in Figure I_1 could be, at least partly, the result of growth in the amount of content assets held by various entities in show business. It is a good thing to deconstruct that compound phenomenon into its component factors, which is undertaken further below. Still, before the deconstruction takes place, it is good to have an inquisitive look at the second curve in Figure I_1, the square-marked one, representing the standard deviation of coefficient A across countries.

In common interpretation of empirical numbers, we almost intuitively lean towards average values, as the expected ones in a large set, and yet the standard deviation has a peculiar charm of its own. If we compare the paths followed by the two curves in Figure I_1, we can see them diverge: the average A goes resolutely up whilst the standard deviation in A stays almost stationary in its trend. In the 1950s or 1960s, the relative burden of amortization upon the GDP of individual countries was almost twice as disparate as it is today. In other words, back in the day it mattered much more where exactly our technological assets were located. Today, it matters less. National economies seem to be converging in their ways of sourcing current, operational cash flow to provide for the depreciation of incumbent technologies.

Getting back to science, and thus back to empirical facts, let's have a look at two component phenomena of the trends sketched in Figure I_1: the pace of scientific invention, and the average lifecycle of assets. As for the former, the coefficient of patent applications per 1 mln people, sourced from the World Bank[2], is used as a representative metric. When we invent an original solution to an existing technological problem, and we think we could make some money on it, we have the option of applying for legal protection of our invention, in the form of a patent. Acquiring a patent is essentially a three-step process. Firstly, we file the so-called patent application with the patent office competent for the given geographical jurisdiction. Then, the patent office publishes our application, calling out for anyone who has grounds for objecting to the issuance of the patent, e.g. someone we used to do research with, hand in hand, until hands parted at some point in time. As a matter of fact, many such disputes arise, which makes patent applications much more numerous than actually granted patents. If you check patent data, granted patents define the currently appropriated territories of intellectual property, whilst patent applications are pretty much informative about the current state of applied science, i.e. about the path this science takes, and about the pressure it puts on business people towards refreshing their technological assets.

Figure I_2 below shows the coefficient of patent applications per 1 mln people in the global economy. The shape of the curve is interestingly similar to that of the average coefficient A, shown in Figure I_1, although it covers a shorter span of time, from 1985 through 2017. At first sight, it seems to make sense: more and more patentable inventions per 1 million humans, on average, put more pressure on replacing old assets with new ones. Yet, that first sight may be misleading. Figure I_3, further below, shows the average lifecycle of fixed assets in the global economy. This particular metric is once again calculated on the grounds of data available in Penn Tables 9.1 (Feenstra et al. 2015 op. cit.). The database, strictly speaking, contains a variable called 'delta', which is the basic rate of amortization in fixed assets, i.e. the percentage of their book value commonly written off the income tax base as provision for depreciation. This is factor 'a' in equation (1), presented earlier, and it reflects the expected lifecycle of assets. The inverted value '1/a' gives the exact value of that lifecycle in years, i.e. the variable 'F' in equation (2). Here comes the big surprise: although the lifecycle 'F', computed as an average for all the 182 countries in the database, does display a descending trend, the descent is much gentler, and much more cyclical, than what we could expect after having seen the trend in the nominal burden A of amortization, and in the occurrence of patent applications. Clearly, there is a push from science upon businesses towards shortening the lifecycle of their assets, but businesses do not necessarily yield to that pressure.

[Figure I_2_Patent Applications per 1 mln people]


Here comes a riddle. The intuitive assumption that growing scientific input provokes a shorter lifespan in technological assets proves too general. It obviously does not encompass the whole phenomenon of increasingly cash-consuming depreciation in fixed assets. There is something else. After having cast a glance at the '1/F' component factor of equation (2), let's move to the (PK*QK)/(PG*QG) one. Penn Tables 9.1 provide two variables that allow calculating it: the aggregate value of fixed assets in national economies, and the GDP of those economies. Interestingly, each of those two variables is provided in two versions: one at constant prices of 2011, the other at current prices. Before the consequences of that dual observation are discussed, let's recall some basic arithmetic: we can rewrite (PK*QK)/(PG*QG) as (PK/PG)*(QK/QG). The (PK/PG) component fraction corresponds to the proportion between weighted average prices in, respectively, fixed assets (PK) and final goods (PG). The other part, i.e. (QK/QG), stands for the proportion between aggregate quantities of assets and goods. Whilst we refer here to that abstract concept of aggregate quantities, observable only as something mathematically factorised out of something really empirical, there is method to that madness. How big a factory do we need to make 20 000 cars a month? How big a server do we need in order to stream 20 000 hours of films and shows a month? Presented from this angle, the proportion (QK/QG) is much more real. When both the aggregate stock of fixed assets in national economies and the GDP of those economies are expressed in current prices, both the (PK/PG) factor and the (QK/QG) factor really change over time. What is observed (analytically) is the full (PK*QK)/(PG*QG) coefficient. Yet, when prices are constant, the (PK/PG) component factor does not actually change over time; what really changes is just the proportion between aggregate quantities of assets and goods.

The factorisation presented above allows another trick at the frontier of arithmetic and economics. The trick consists in using creatively two types of economic aggregates, commonly published in publicly available databases: nominal values as opposed to real values. The former category represents something like P*Q, or price multiplied by quantity. The latter is supposed to have kicked prices out of the equation, i.e. to represent just quantities. With those two types of data we can do something opposite to the procedure presented earlier, which serves to distil real quantities out of nominal values. This time, we have externally provided products of 'price times quantity', and just quantities. Logically, we can extract prices out of the nominal values. When we have the two coefficients given in the Penn Tables 9.1 database – the full (PK*QK)/(PG*QG) (current prices) and the partial (QK/QG) (constant prices) – we can develop the following equation: [(PK*QK)/(PG*QG)] / (QK/QG) = PK/PG. We can take the really observable proportion between the nominal value of fixed assets and that of Gross Domestic Product, and divide it by the proportion between the real quantities of, respectively, assets and final goods, in order to calculate the proportion between the weighted average prices of assets and goods.
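A sketch of that extraction step, again with assumed, placeholder column names for the current-price and constant-price series in Penn Tables 9.1:

```python
import pandas as pd

# Extracting the price proportion PK/PG from nominal and real aggregates.
# All four data column names are assumed placeholders for the Penn Tables series.
pwt = pd.read_excel("pwt91.xlsx")

nominal_ratio = pwt["capital_nominal"] / pwt["gdp_nominal"]    # (PK*QK)/(PG*QG)
real_ratio = pwt["capital_constant"] / pwt["gdp_constant"]     # QK/QG at 2011 prices

pwt["price_ratio"] = nominal_ratio / real_ratio                # = PK/PG
print(pwt.groupby("year")["price_ratio"].mean().loc[[1950, 1985, 2017]])
```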

Figure I_4, below, attempts to represent all three phenomena – the change in nominal values, the change in real quantities, and the change in prices – in one graph. As different magnitudes of empirical values are involved, Figure I_4 introduces another analytical method, namely indexation over a constant denominator. When we want to study temporal trends in values which are either measured with different units or display very different magnitudes, we can choose one point in time as the peg value for each of the variables involved. In the case of Figure I_4, the peg year is 2011, as Penn Tables 9.1 use 2011 as the reference year for constant prices: aggregate values of capital stock and national GDP, when measured in constant prices, are measured in the prices of the year 2011. For each of the three variables involved – the nominal proportion of capital stock to GDP (PK*QK)/(PG*QG), the real proportion thereof QK/QG, and the proportion between the prices of assets and the prices of goods PK/PG – we take their values in 2011 as denominators for the whole time series. Thus, for example, the indexed nominal proportion of capital stock to GDP in 1990 is the quotient of the actual value in 1990 divided by the value in 2011, etc. As a result, we can study each of the three variables as if the value in 2011 were equal to 1.00.
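A sketch of that indexation, with 2011 as the peg year, as in the text; the data column names remain assumed placeholders, as in the previous sketches:

```python
import pandas as pd

# Indexation over a constant denominator, pegged to the reference year 2011.
pwt = pd.read_excel("pwt91.xlsx")
pwt["nominal_ratio"] = pwt["capital_nominal"] / pwt["gdp_nominal"]
pwt["real_ratio"] = pwt["capital_constant"] / pwt["gdp_constant"]
pwt["price_ratio"] = pwt["nominal_ratio"] / pwt["real_ratio"]

yearly = pwt.groupby("year")[["nominal_ratio", "real_ratio", "price_ratio"]].mean()
indexed = yearly / yearly.loc[2011]   # every series equals 1.00 in the peg year 2011
print(indexed.loc[[1950, 1990, 2011, 2017]])
```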

[Figure I_4 Comparative indexed trends in the proportion between the national capital stock and the GDP]

The indexed trends thus computed are global averages across the database, i.e. averages of the national values computed for individual countries. The continuous blue line marked with red triangles represents the nominal proportion between the national stocks of fixed assets and the respective GDP of each country, or the full (PK*QK)/(PG*QG) coefficient. It has been consistently climbing since 1950, and since the mid-1980s the slope of that climb seems to have increased. Just to give a glimpse of actual, non-indexed values: in 1950 the average (PK*QK)/(PG*QG) coefficient was 1.905, in 1985 it reached 2.197, in the reference year 2011 it went up to 3.868, to end up at 4.617 in 2017. The overall shape of the curve strongly resembles that observed earlier in the coefficient of patent applications per 1 mln people in the global economy, and in another indexed trend to be found in Figure I_4, that of the price coefficient PK/PG. Starting from 1985, that latter proportion seems to be following almost perfectly the trend in patentable invention, and its actual, non-indexed values seem to be informative about a deep change in business in connection with technological change. In 1950, the proportion between the average weighted prices of fixed assets and those of final goods was PK/PG = 0.465, and even in the middle of the 1980s it kept roughly the same level, PK/PG = 0.45. To put it simply, fixed assets were half as expensive as final goods, per unit of quantity. Yet, since 1990, something has changed: that proportion started to grow; productive assets started to be more and more valuable in comparison to the market prices of the goods they served to make. In 2017, PK/PG reached 1.146. From a world where technological assets were just tools to make final goods, we moved into a world where technologies are goods in themselves. If we look carefully at digital technologies, nanotechnologies or biotech, this general observation strongly holds. A new molecule is both a tool to make something, and a good in itself. It can make a new drug, and it can be a new drug. An algorithm can create value added as such, or it can serve to make another value-creating algorithm.

Against that background of unequivocal change in the prices of technological assets, and in their proportion to the Gross Domestic Product of national economies, we can observe a different trend in the proportion of quantities, QK/QG. Here we return to questions such as 'How big a factory do we need in order to make the amount of final goods we want?'. The answer to that type of question takes the form of something like a long business cycle, with a peak in 1994, at QK/QG = 5.436. The presently observed QK/QG (2017) = 4.027 looks relatively modest and is very similar to the value observed in the 1950s. Seventy years ago, we used to be a civilization which needed around 4 units of quantity in technological assets to make one unit of quantity in final goods. Then, starting from the mid-1970s, we started turning into a more and more technology-intensive culture, with more and more units of quantity in assets required to make one unit of quantity in final goods. In the mid-1990s, that asset intensity reached its peak, and now it is back at the old level.

[1] https://investors.cbscorporation.com last access November 4th, 2019

[2] https://data.worldbank.org/indicator/IP.PAT.RESD last access November 4th, 2019

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at www.ggdc.net/pwt

A few more insights about collective intelligence

My editorial on YouTube

I noticed it has been a month since I last posted anything on my blog. Well, I have been doing things, you know. Been writing, and thinking at the same time. I am forming a BIG question in my mind, a question I want to answer: how are we going to respond to climate change? Among all the possible scenarios of such a response, which are we the most likely to follow? When I have a look, every now and then, at Greta Thunberg's astonishingly quick social ascent, I wonder why we are so divided about something apparently so simple. I am very clear: this is not a rhetorical question on my part. Maybe I should claim something like: 'We just need to get together, hold hands and do X, Y, Z…'. Yes, in a perfect world we would do that. Still, in the world we actually live in, we don't. Does it mean we are collectively stupid as a baseline, and only some enlightened individuals can sometimes see the truly rational path ahead? Might be. Yet, another view is possible. We might be doing apparently dumb things locally, and those apparent local flops could sum up to something quite sensible at the aggregate scale.

There is some science behind that intuition, and some very provisional observations. I finally (and hopefully) nailed down the revision of the article on energy efficiency. I had already started developing this in my last update, entitled 'Knowledge and Skills', and now it is done. I have just revised the article quite deeply, and in the process I hatched a methodological paper, which I submitted to MethodsX. As I want to develop a broader discussion of these two papers, without repeating their contents, I invite my readers to get acquainted with their PDFs, via the archives of my blog. Thus, by clicking the title Energy Efficiency as Manifestation of Collective Intelligence in Human Societies, you can access the subject-matter paper on energy efficiency, and clicking on Neural Networks As Representation of Collective Intelligence will take you to the methodological article.

I think I know how to represent, plausibly, collective intelligence with artificial intelligence. I am showing the essential concept in the picture below. Thus, I start with a set of empirical data describing a society. Well in line with what I have been writing on this blog since early spring this year, I focus on the quantitative variables in my dataset, e.g. GDP per capita, schooling indicators, the probability for an average person to become a mad scientist etc. What is the meaning of those variables? Most of all, they exist and change together. Banal, but true. In other words, all that stuff represents the cumulative outcome of past, collective action and decision-making.

I decided to use the intellectual momentum, and I used the same method with a different dataset, and a different set of social phenomena. I took Penn Tables 9.1 (Feenstra et al. 2015[1]), thus a well-known base of macroeconomic data, and I followed the path sketched in the picture below.


Long story short, I have two big surprises. When I look at energy efficiency and its determinants, it turns out energy efficiency is not really the chief outcome pursued by the 59 societies studied: they care much more about the local, temporary proportions between capital immobilised in fixed assets and the number of resident patent applications. More specifically, they seem to be principally optimizing the coefficient of fixed assets per 1 patent application. That is quite surprising. It sends me back to my peregrinations through the land of evolutionary theory (see for example: My most fundamental piece of theory).

When I take a look at the collective intelligence (possibly) embodied in Penn Tables 9.1, I can see this particular collective wit aiming, in the first place, at optimizing the share of labour in the proceeds from selling real output. Then, almost immediately after, comes the average number of hours worked per person per year. You can click on this link and read the full manuscript I have just submitted to the Quarterly Journal of Economics.

Wrapping it (provisionally) up: I did some social science with the assumption of collective intelligence in human societies taken at the level of methodology, and I got truly surprising results. That thing about energy efficiency – i.e. the fact that, when in presence of some capital in fixed assets and some R&D embodied in patentable inventions, we seem to care about energy efficiency only secondarily – is really mind-blowing. I had already done some research on energy as a factor of social change, and, whilst I have never been really optimistic about our collective capacity to save energy, I assumed that we orient ourselves, collectively, on some kind of energy balance. Apparently, we do so only when we have nothing else to pay attention to. On the other hand, the collective focus on macroeconomic variables pertinent to labour, rather than on prices and quantities, is just as gob-smacking. All economic education, when you start with Adam Smith and take it from there, assumes that economic equilibriums, i.e. those special states of society when we are sort of in balance among the many forces at work, are built around prices and quantities. Still, in the research I have just completed, the only kind of price my neural network can build a plausibly acceptable learning around is the average price level in international trade, i.e. in exports and in imports. As for all the prices which I was taught, and which I have taught, to be the cornerstones of economic equilibrium, like prices in consumption or prices in investment: when I peg them as output variables of my perceptron, the incriminated perceptron goes dumb as hell and yields negative economic aggregates. Yes, babe: when I make my neural network pay attention to the price level in investment goods, it comes to the conclusion that the best idea is to have negative national income, and negative population.

Returning to the issue of climate change and our collective response to it, I am trying to connect my essential dots. I have just served some well-cooked science, and now it is time to bite into some raw one. I am biting into facts which I cannot explain yet, like not at all. Did you know, for example, that more and more adult people have been dying in high-income countries, per 1000, since 2014? You can consult the data available with the World Bank, as regards the mortality of men and that of women. Infant mortality is generally falling, just as adult mortality in low- and middle-income countries. It is just about adult people in wealthy societies categorized as 'high income': there are more and more of them dying per 1000. Well, I should maybe say 'more of us', as I am 51, and relatively well-off, thank you. Anyway, all the way up through 2014, adult mortality in high-income countries had been consistently subsiding, reaching its minimum in 2014 at 57.5 per 1000 in women, and 103.8 in men. In 2016, it went up to 60.5 per 1000 in women, and 107.5 in men. It seems counter-intuitive. High-income countries are the place where adults are technically exposed to the fewest fatal hazards. We have virtually no wars in high-income countries, we have food in abundance, we enjoy reasonably good healthcare systems, so WTF? As regards low-income countries, we could claim that the adults who die are relatively the least fit for survival, but what do you want to be fit for in high-income places? Driving a Mercedes around? Why did it start to revert in 2014?

Intriguingly, high-income countries are also those where the difference in adult mortality between men and women is the most pronounced, with mortality in men almost double that observable in women. Once again, it is something counter-intuitive. In low-income countries, men are more exposed to death in battle, or to extreme conditions, like work in mines. Still, in high-income countries, such hazards are remote. Once again, WTF? Someone could say: it is about natural selection, about eliminating weak genetics. Could be, and yet not quite. Elimination of weak genetics takes place mostly through infant mortality. Once we make it through the first 5 years of our existence, the riskiest part is over. Adult mortality is mostly about recycling used organic material (i.e. our bodies). Are human societies in high-income countries increasing the pace of that recycling? Why since 2015? Is it more urgent to recycle used men than used women?

There is one thing about 2015 precisely connected to climate change. As I browsed some literature about droughts in Europe and their possible impact on agriculture (see for example All hope is not lost: the countryside is still exposed), it turned out that 2015 was precisely the year when we started to sort of officially admit that we have a problem with agricultural droughts on our continent. Even more interestingly, 2014 and 2015 seem to have been the turning point when aggregate damages from floods in Europe started to turn down after something like two decades of progressive increase. We swapped one calamity for another, and starting from then, we began to recycle used adults at a more rapid pace. Of course, most of Europe belongs to the category of high-income countries.

See? That's what I call raw science about collective intelligence. Observation with a lot of questions, and only a very remote idea as to the method of answering them. Something is apparently happening, maybe we are collectively intelligent in the process, and yet we don't know exactly how (we are collectively intelligent). It is possible that we are not. A warmer climate is associated with greater prevalence of infectious diseases in adults (Amuakwa-Mensah et al. 2017[1]), for example, and yet it does not explain why greater adult mortality is happening in high-income countries. Intuitively, infections attack where people are poorly shielded against them, thus in countries with frequent incidence of malnutrition and poor sanitation, thus in the low-income ones.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon suggests I should ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Amuakwa-Mensah, F., Marbuah, G., & Mubanga, M. (2017). Climate variability and infectious diseases nexus: Evidence from Sweden. Infectious Disease Modelling, 2(2), 203-217.

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at www.ggdc.net/pwt

Knowledge and skills

My editorial on YouTube

Once again, I break my rhythm. Mind you, it happens a lot this year. Since January, it has been all about breaking whatever rhythm I have had so far in my life. I am getting used to the unusual, and I think it is a good thing. Now, I am breaking the usual rhythm of my blogging. Normally, I have been alternating updates in English with those in French, like one to one, with a pinch of writing in my mother tongue, Polish, every now and then. Right now, two urgent tasks require my attention: I need to prepare new syllabuses for English-taught courses in the upcoming academic year, and to revise my draft article on the energy efficiency of national economies.

Before I attend to those tasks, however, a little bit of extended reflection on goals and priorities in my life, somewhat along the lines of my last update, « It might be a sign of narcissism ». I have just gotten back from Nice, France, where my son has just started his semester of Erasmus+ exchange with the Sophia Antipolis University. In my youth, I spent a few years in France, I have been back many times since, and man, this time, I just felt the same, very special and very French kind of human energy which I remember from the 1980s. Over the last 20 years or so, the French seemed to have been sleeping inside their comfort zone, but now I can see people who have just woken up, are wondering what the hell they had wasted so much time on, and are taking double strides to gather speed in terms of social change. This is the innovative, brilliant, positively cocky France I love. There is sort of a social pattern in France: when the French get vocal, and possibly violent, in the streets, they are up to something as a nation. The French Revolution in 1789 was an expression of popular discontent, yet what followed was not popular satisfaction: it was a century-long expansion on virtually all fronts: political, military, economic, scientific etc. Right now, France is just over the top of the Yellow Vests protest, which one of my French students devoted an essay to (see « Carl Lagerfeld and some guest blogging from Emilien Chalancon, my student »). I wonder who will be the Napoleon Bonaparte of our times.

When entire nations are up to something, it is interesting. Dangerous, too, and yet interesting. Human societies are, as a rule, the most up to something as regards their food and energy base, and so I come to that revision of my article. Here, below, you will find the letter of review I received from the journal "Energy" after I submitted the initial manuscript, referenced as Ms. Ref. No.: EGY-D-19-00258. The link to my manuscript can be found in the first paragraph of this update. For those of you who are making your first steps in science, it can be an illustration of what 'scientific dialogue' means. Further below, you will find a first sketch of my revision, accounting for the remarks from the reviewers.

Thus, here comes the LETTER OF REVIEW (in italic):

Ms. Ref. No.: EGY-D-19-00258

Title: Apprehending energy efficiency: what is the cognitive value of hypothetical shocks? Energy

Dear Dr. Wasniewski,

The review of your paper is now complete, the Reviewers’ reports are below. As you can see, the Reviewers present important points of criticism and a series of recommendations. We kindly ask you to consider all comments and revise the paper accordingly in order to respond fully and in detail to the Reviewers’ recommendations. If this process is completed thoroughly, the paper will be acceptable for a second review.

If you choose to revise your manuscript it will be due into the Editorial Office by the Jun 23, 2019

Once you have revised the paper accordingly, please submit it together with a detailed description of your response to these comments. Please, also include a separate copy of the revised paper in which you have marked the revisions made.

Please note if a reviewer suggests you to cite specific literature, you should only do so if you feel the literature is relevant and will improve your paper. Otherwise please ignore such suggestions and indicate this fact to the handling editor in your rebuttal.

To submit a revision, please go to https://ees.elsevier.com/egy/  and login as an Author.

Your username is: ******

If you need to retrieve password details, please go to: http://ees.elsevier.com/egy/automail_query.asp.

NOTE: Upon submitting your revised manuscript, please upload the source files for your article. For additional details regarding acceptable file formats, please refer to the Guide for Authors at: http://www.elsevier.com/journals/energy/0360-5442/guide-for-authors

When submitting your revised paper, we ask that you include the following items:

Manuscript and Figure Source Files (mandatory):

We cannot accommodate PDF manuscript files for production purposes. We also ask that when submitting your revision you follow the journal formatting guidelines. Figures and tables may be embedded within the source file for the submission as long as they are of sufficient resolution for Production. For any figure that cannot be embedded within the source file (such as *.PSD Photoshop files), the original figure needs to be uploaded separately. Refer to the Guide for Authors for additional information. http://www.elsevier.com/journals/energy/0360-5442/guide-for-authors

Highlights (mandatory):

Highlights consist of a short collection of bullet points that convey the core findings of the article and should be submitted in a separate file in the online submission system. Please use ‘Highlights’ in the file name and include 3 to 5 bullet points (maximum 85 characters, including spaces, per bullet point). See the following website for more information

http://www.elsevier.com/highlights

Data in Brief (optional):

We invite you to convert your supplementary data (or a part of it) into a Data in Brief article. Data in Brief articles are descriptions of the data and associated metadata which are normally buried in supplementary material. They are actively reviewed, curated, formatted, indexed, given a DOI and freely available to all upon publication. Data in Brief should be uploaded with your revised manuscript directly to Energy. If your Energy research article is accepted, your Data in Brief article will automatically be transferred over to our new, fully Open Access journal, Data in Brief, where it will be editorially reviewed and published as a separate data article upon acceptance. The Open Access fee for Data in Brief is $500.

Please just fill in the template found here: http://www.elsevier.com/inca/publications/misc/dib_data%20article%20template_for%20other%20journals.docx

Then, place all Data in Brief files (whichever supplementary files you would like to include as well as your completed Data in Brief template) into a .zip file and upload this as a Data in Brief item alongside your Energy revised manuscript. Note that only this Data in Brief item will be transferred over to Data in Brief, so ensure all of your relevant Data in Brief documents are zipped into a single file. Also, make sure you change references to supplementary material in your Energy manuscript to reference the Data in Brief article where appropriate.

If you have questions, please contact the Data in Brief publisher, Paige Shaklee at dib@elsevier.com

Example Data in Brief can be found here: http://www.sciencedirect.com/science/journal/23523409

***

In order to give our readers a sense of continuity and since editorial procedure often takes time, we encourage you to update your reference list by conducting an up-to-date literature search as part of your revision.

On your Main Menu page, you will find a folder entitled “Submissions Needing Revision”. Your submission record will be presented here.

MethodsX file (optional)

If you have customized (a) research method(s) for the project presented in your Energy article, you are invited to submit this part of your work as MethodsX article alongside your revised research article. MethodsX is an independent journal that publishes the work you have done to develop research methods to your specific needs or setting. This is an opportunity to get full credit for the time and money you may have spent on developing research methods, and to increase the visibility and impact of your work.

How does it work?

1) Fill in the MethodsX article template: https://www.elsevier.com/MethodsX-template

2) Place all MethodsX files (including graphical abstract, figures and other relevant files) into a .zip file and

upload this as a ‘Method Details (MethodsX) ‘ item alongside your revised Energy manuscript. Please ensure all of your relevant MethodsX documents are zipped into a single file.

3) If your Energy research article is accepted, your MethodsX article will automatically be transferred to MethodsX, where it will be reviewed and published as a separate article upon acceptance. MethodsX is a fully Open Access journal, the publication fee is only 520 US$.

Questions? Please contact the MethodsX team at methodsx@elsevier.com. Example MethodsX articles can be found here: http://www.sciencedirect.com/science/journal/22150161

Include interactive data visualizations in your publication and let your readers interact and engage more closely with your research. Follow the instructions here: https://www.elsevier.com/authors/author-services/data- visualization to find out about available data visualization options and how to include them with your article.

MethodsX file (optional)

We invite you to submit a method article alongside your research article. This is an opportunity to get full credit for the time and money you have spent on developing research methods, and to increase the visibility and impact of your work. If your research article is accepted, your method article will be automatically transferred over to the open access journal, MethodsX, where it will be editorially reviewed and published as a separate method article upon acceptance. Both articles will be linked on ScienceDirect. Please use the MethodsX template available here when preparing your article: https://www.elsevier.com/MethodsX-template. Open access fees apply.

Reviewers’ comments:

Reviewer #1: The paper is, at least according to the title of the paper, and attempt to ‘comprehend energy efficiency’ at a macro-level and perhaps in relation to social structures. This is a potentially a topic of interest to the journal community. However and as presented, the paper is not ready for publication for the following reasons:

1. A long introduction details relationship and ‘depth of emotional entanglement between energy and social structures’ and concomitant stereotypes, the issue addressed by numerous authors. What the Introduction does not show is the summary of the problem which comes out of the review and which is consequently addressed by the paper: this has to be presented in a clear and articulated way and strongly linked with the rest of the paper. In simplest approach, the paper does demonstrate why are stereotypes problematic. In the same context, it appears that proposed methodology heavily relays on MuSIASEM methodology which the journal community is not necessarily familiar with and hence has to be explained, at least to the level used in this paper and to make the paper sufficiently standalone;

2. Assumptions used in formulating the model have to be justified in terms what and how they affect understanding of link/interaction between social structures and function of energy (generation/use) and also why are assumptions formulated in the first place. Also, it is important here to explicitly articulate what is aimed to achieve with the proposed model: as presented this somewhat comes clear only towards the end of the paper. More fundamental question is what is the difference between model presented here and in other publications by the author: these have to be clearly explained.

3. The presented empirical tests and concomitant results are again detached from reality, for i) the problem is not explicitly formulated, and ii) the real-life interpretation of the results is not clear.

On the practical side, the paper needs:

1. To conform to style of writing adopted by the journal, including referencing;

2. All figures have to have captions and to be referred to by them;

3. English needs improvement.

Reviewer #2: Please find the attached file.

Reviewer #3: The article has a cognitive value. The author has made a deep analysis of literature. Methodologically, the article does not raise any objections. However, getting acquainted with its content, I wonder why the analysis does not take into account changes in legal provisions. In the countries of the European Union, energy efficiency is one of the pillars of shaping energy policy. Does this variable have no impact on improving energy efficiency?

When reading the article, one gets the impression that the author has prepared it for publication in another journal. Its editing is incorrect! Line 13, page 10, error – an unwanted semicolon.

Now, A FIRST SKETCH OF MY REVISION.

First come the general, structural suggestions from the editors, notably to outline my method of research, and to discuss my data, in separate papers. After that come the critical remarks properly speaking, with a focus on explaining clearly – more clearly than I did in the manuscript – the assumptions of my model, as well as its connections with the MuSIASEM model. I start with my method, and it is an interesting exercise in introspection. I did the empirical research quite a few months ago, and now I need to look at it from a distance, objectively. Doing well at this exercise amounts, by the way, to phrasing my assumptions accurately. I start with my fundamental variable, i.e. the so-called energy efficiency, measured as the value of real output (i.e. the value of goods and services produced) per unit of energy consumed, the latter measured in kilograms of oil equivalent. It is like: energy efficiency = GDP / energy consumed.

In my mind, that coefficient is actually a coefficient of coefficients, more specifically: GDP / energy consumed = [GDP per capita] / [consumption of energy per capita] = [GDP / population] / [energy consumed / population]. Why so? Well, I assume that when any of us, humans, wants to have a meal, we generally don’t put our fingers in the nearest electric socket. We consume energy indirectly, via the local combination of technologies. The same local combination of technologies makes our GDP. Energy efficiency measures two ends of the same technological toolbox: its intake of energy, and its outcomes in terms of goods and services. Changes over time in energy efficiency, as well as its disparity across space, depend on the unfolding of two distinct phenomena: the exact composition of that local basket of technologies – the overall heap of technologies we have stacked up in our daily life – for one, and the efficiency of individual technologies in the stack, for two. Here, I remember a model I got to know in management science, precisely about how efficiency changes when new technologies supplant older ones. Apparently, a freshly implemented, new technology is always less productive than the one it is kicking out of business. Only after some time, when people learn how to use that new thing properly, does it start yielding net gains in productivity. At the end of the day, when we change our technologies frequently, there could very well be no gain in productivity at all, as we are constantly going through consecutive phases of learning. Anyway, I see the coefficient of energy efficiency at any given time, in a given place, as the cumulative outcome of past collective decisions as for the repertoire of technologies we use.
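
Just to make that factorisation tangible, here is a minimal sketch in Python, with entirely made-up national aggregates (the GDP, energy and population figures are hypothetical placeholders, not data), showing that the direct and the factorised computation give the same number:

```python
# Hypothetical national aggregates (placeholders, not real data)
gdp = 500e9            # GDP in constant US$
energy = 100e9         # energy consumed, in kg of oil equivalent
population = 38e6      # headcount

# Direct computation: GDP per kg of oil equivalent
efficiency_direct = gdp / energy

# Factorised computation: (GDP per capita) / (energy consumed per capita)
gdp_per_capita = gdp / population
energy_per_capita = energy / population
efficiency_factorised = gdp_per_capita / energy_per_capita

print(efficiency_direct, efficiency_factorised)  # both print 5.0 $/kgoe
```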

That is the first big assumption I make, and the second one comes from the factorisation: GDP / energy consumed = [GDP per capita] / [consumption of energy per capita] = [GDP / population] / [energy consumed / population]. I noticed a semi-intuitive, although not really robust, correlation between the two component coefficients. GDP per capita tends to be higher in countries with better developed institutions, which, in turn, tend to be better developed in the presence of a relatively high consumption of energy per capita. Mind you, it is quite visible cross-sectionally, when comparing countries, whilst not happening that obviously over time. If people in country A consume twice as much energy per capita as people in country B, those in A are very likely to have better developed institutions than folks in B. Still, if in either of the two places the consumption of energy per capita grows or falls by 10%, it does not automatically mean a corresponding increase or decrease in institutional development.

Wrapping partially up the above, I can see at least one main assumption in my method: energy efficiency, measured as GDP per kg of oil equivalent of energy consumed, is, in itself, a pretty foggy metric, arguably devoid of intrinsic meaning, and it becomes meaningful as an equilibrium of two component coefficients, namely GDP per capita, for one, and energy consumption per capita, for two. Therefore, the very name ‘energy efficiency’ is problematic. If the vector [GDP; energy consumption] is really a local equilibrium, as I intuitively see it, then we need to keep in mind an old assumption of economic sciences: all equilibriums are efficient, and this is basically why they are equilibriums. Further down this avenue of thinking, the coefficient of GDP per kg of oil equivalent shouldn’t even be called ‘energy efficiency’, or, just in order not to fall into pointless semantic bickering, we should put the ‘efficiency’ part into some sort of intellectual parentheses.

Now, I move to my analytical method. I accept as pretty obvious the fact that, at a given moment in time, different national economies display different coefficients of GDP per kg of oil equivalent consumed. This is coherent with the above-phrased claim that energy efficiency is a local equilibrium rather than a measure of efficiency strictly speaking. What gains in importance, with that intellectual stance, is the study of change over time. In the manuscript paper, I tested a very intuitive analytical method, based on a classical move, namely on using natural logarithms of empirical values rather than the empirical values themselves. Natural logarithms eliminate a lot of non-stationarity and noise in empirical data. A short reminder of what natural logarithms are is due at this point. Any number can be represented as a power of another number, like y = x^z, where ‘x’ is called the root of ‘y’, ‘z’ is the exponent of that root, and ‘x’ is also the base of the power.

Some roots are special. One of them is the so-called Euler’s number, e = 2,718281828459…, the base of the natural logarithm. When we treat e ≈ 2,72 as the root of another number, the corresponding exponent z in y = e^z has interesting properties: it can be further decomposed as z = t*a, where t is the ordinal number of a moment in time, and a is basically a parameter. In a moment, I will explain why I said ‘basically’. The function y = e^(t*a) is called the exponential function and proves useful in studying processes marked by important hysteresis, i.e. when each consecutive step in the process depends very strongly on the cumulative outcome of previous steps, like y(t) depends on y(t – k). Compound interest is a classic example: when you save money for years, with annual compounding of interest, each consecutive year builds upon the interest accumulated in preceding years. If we represent the interest rate, classically, as ‘r’, and the initial capital as ‘x’, the function y = x*e^(t*r) gives a good approximation of how much you can save, with annually compounded ‘r’, over ‘t’ years.
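
A quick numerical check of that approximation, with purely hypothetical figures for the initial capital, the interest rate and the number of years:

```python
import math

x = 1000.0   # initial capital, hypothetical
r = 0.05     # annual interest rate, hypothetical
t = 10       # years of saving

discrete = x * (1 + r) ** t        # classical annual compounding
continuous = x * math.exp(r * t)   # continuous, exponential approximation

print(round(discrete, 2), round(continuous, 2))  # ~1628.89 vs ~1648.72
```

The two numbers stay close, which is all the approximation claims.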

A slightly different approach to the exponential function can be formulated, and this is what I did in the manuscript I am revising now, in front of your very eyes. The natural logarithm of energy efficiency, measured as GDP per kg of oil equivalent, can be considered a local occurrence of change with a strong component of hysteresis. The equilibrium of today depends on the cumulative outcomes of past equilibriums. In a classic exponential function, I would approach that hysteresis as y(t) = e^(t*a), with a being a constant parameter of the function. Yet, I can assume that ‘a’ is local instead of being general. In other words, what I did was y(t) = e^(t*a(t)), with a(t) being obviously t-specific, i.e. local. I assume that the process of change in energy efficiency is characterized by local magnitudes of change, the a(t)’s. That a(t) in y(t) = e^(t*a(t)) is slightly akin to the local first derivative, i.e. y’(t). The difference between the local a(t) and y’(t) is that the former is supposed to capture somewhat more accurately the hysteretic side of the process under scrutiny.

In typical econometric tests, the usual strategy is to start with the empirical values of the variables, transform them into their natural logarithms or into some sort of standardized values (e.g. standardized over their respective means, or over their standard deviations), and then run linear regression on those transformed values. Another path of analysis consists in exponential regression, only there is a problem with this one: it is hard to establish a reliable method of transforming the empirical data. Running exponential regression on natural logarithms looks stupid, as natural logarithms are precisely the exponents of the exponential function, whence my intuitive willingness to invent a method sort of in between linear regression and the exponential one.
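
To illustrate the first, classical path, here is a minimal sketch: take the natural logarithms of a hypothetical empirical series and fit a straight line to them with ordinary least squares (the series below is made up for the sake of the example):

```python
import numpy as np

# Hypothetical yearly observations of some variable y(t)
y = np.array([3.1, 3.4, 3.9, 4.4, 5.1, 5.7])
t = np.arange(1, len(y) + 1)

ln_y = np.log(y)                           # natural logarithms of the empirical values
slope, intercept = np.polyfit(t, ln_y, 1)  # ordinary least squares on the logs

print(round(slope, 4), round(intercept, 4))  # the slope reads as a constant exponential rate of change
```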

Once I assume that the local exponential coefficients a(t) in the exponential progression y(t) = e^(t*a(t)) have an intrinsic meaning of their own, as local magnitudes of exponential change, an interesting analytical avenue opens up. For each set of empirical values y(t), I can construct a set of transformed values a(t) = ln[y(t)]/t. Now, when you think about it, the actual a(t) depends on how you count ‘t’, or, in other words, on what calendar you apply. When I start counting time 100 years before the starting year of my empirical data, my a(t) will go like: a(t1) = ln[y(t1)]/101, a(t2) = ln[y(t2)]/102 etc. The denominator ‘t’ will change incrementally slowly. On the other hand, if I assume that the first year of whatever is happening is one year before my empirical time series starts, it is a different ball game: my a(t1) = ln[y(t1)]/1, my a(t2) = ln[y(t2)]/2 etc., and the incremental change in the denominator is much greater. When I set my t0 at 100 years earlier than the first year of my actual data, thus t0 = t1 – 100, the resulting set of a(t) values transformed from the initial y(t) data simulates a secular, slow trend of change. On the other hand, setting t0 at t0 = t1 – 1 makes the resulting set of a(t) values reflect quick change, and the t0 = t1 – 1 moment is like a hypothetical shock, occurring just before the actual empirical data starts to tell its story.
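
Here is a minimal sketch of that transformation on a hypothetical series y(t), mirroring the two calendars described above: denominators 101, 102, … for the slow, secular calendar, and 1, 2, … for the recent-shock calendar:

```python
import numpy as np

# Hypothetical empirical series y(t), e.g. energy efficiency over consecutive years
y = np.array([5.0, 5.2, 5.5, 5.9, 6.4])
k = np.arange(1, len(y) + 1)   # position of each observation in the series: 1, 2, ...

# Calendar 1: time starts 100 years before the data, so t = 101, 102, ...
a_slow = np.log(y) / (k + 100)

# Calendar 2: time starts just before the data, so t = 1, 2, ...
a_fast = np.log(y) / k

print(a_slow.round(4))  # simulates a secular, slow trend of change
print(a_fast.round(4))  # simulates quick change after a hypothetical prior shock
```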

Provisionally wrapping it up, my assumptions, and thus my method, consist in studying changes in energy efficiency as a sequence of equilibriums between relative wealth (GDP per capita), on the one hand, and consumption of energy per capita, on the other. The passage between equilibriums is a complex phenomenon, combining long-term trends with short-term ones.

I am introducing a novel angle of approach to an otherwise classic concept of economics, namely that of economic equilibrium. I claim that equilibriums are manifestations of collective intelligence in their host societies. In order to form an economic equilibrium, be it more local and Marshallian, or more general and Walrasian, a society needs institutions that assure collective learning through experimentation: some kind of financial market, enforceable contracts, and institutions of collective bargaining. Small changes in energy efficiency come out of consistent, collective learning through those institutions. Big leaps in energy efficiency appear when the institutions of collective learning undergo substantial structural changes.

I am thinking about enriching the empirical part of my paper with an additional demonstration of collective intelligence: a neural network working on the same empirical data, with or without a so-called fitness function. I have that intuitive thought – although I don’t know yet how to get it across coherently – that neural networks endowed with a fitness function are good at representing collective intelligence in structured societies with relatively well-developed institutions.

I turn to my syllabuses for the coming academic year. Incidentally, at least one of the curriculums I am going to teach this fall fits nicely into the line of research I am pursuing now: collective intelligence and the use of artificial intelligence. I am developing the thing as an update on my blog, and I write it directly in English. The course is labelled “Behavioural Modelling and Content Marketing”. My principal goal is to teach students the mechanics of behavioural interaction between human beings and digital technologies, especially in social media, online marketing and content streaming. At my university, i.e. the Andrzej Frycz-Modrzewski Krakow University (Krakow, Poland), we have a general drill of splitting the general goal of each course into three layers of expected didactic outcomes: knowledge, course-specific skills, and general social skills. The longer I do science and the longer I teach, the less I believe in the point of distinguishing knowledge from skills. Knowledge devoid of any skills attached to it is virtually impossible to check, and virtually useless.

As I think about it, I imagine many different teachers and many students. Each teacher follows some didactic goals. How do they match each other? They are bound to. I mean, the community of teachers in a university is a local social structure. We, teachers, have different angles of approach to teaching, and, of course, we teach different subjects. Yet, we all come from more or less the same cultural background. Here comes a quick glimpse of the literature I will be referring to when lecturing ‘Behavioural Modelling and Content Marketing’: the article by Molleman and Gächter (2018[1]), entitled ‘Societal background influences social learning in cooperative decision making’, and another one, by Smaldino (2019[2]), under the title ‘Social identity and cooperation in cultural evolution’. Molleman and Gächter start from the well-known assumption that we, humans, largely owe our evolutionary success to our capacity for social learning and cooperation. They give an account of an experiment where Chinese people, assumed to be collectivist in their ways, are compared to British people, allegedly individualist as hell, in a social game based on dilemma and cooperation. It turns out the cultural background matters: success-based learning is associated with selfish behaviour, and majority-based learning can help foster cooperation. Smaldino goes down a more theoretical path, arguing that the structure of society shapes the repertoire of social identities available to homo sapiens in a given place at a given moment, whence the puzzle of emergent, ephemeral groups as a major factor in human cultural evolution. When I decide to form, on Facebook, a group of people Not-Yet-Abducted-By-Aliens, is it a factor of cultural change, or rather an outcome thereof?

When I teach anything, what do I really want to achieve, and what does the conscious formulation of those goals have in common with the real outcomes I reach? When I use a scientific repository, like ScienceDirect, that thing learns from me. When I download a bunch of articles on energy, it suggests further readings to me along the same lines. It learns from the keywords I use in my searches, and from the journals I browse. You can even have a look at my recent history of downloads from ScienceDirect and form your own opinion about what I am interested in. Just CLICK HERE; it opens an Excel spreadsheet.

How can I know I taught anybody anything useful? If a student asks me: ‘Pardon me, sir, but why the hell should I learn all that stuff you teach? What’s the point? Why should I bother?’. Right you are, sir or miss, whatever gender you think you are. The point of learning that stuff… Think of some impressive human creation, like the Notre Dame cathedral, the Eiffel Tower, or that Da Vinci painting, Lady with an Ermine. Have you ever wondered how much work was put into those things? However big and impressive a cathedral is, it was built brick by f***ing brick. Whatever depth of colour we can see in a painting, it came out of dozens of hours spent on sketching, mixing paints, trying, cursing, and tearing down the canvas. This course and its contents are a small brick in the edifice of your existence. One more small story that adds to your individual depth as a person.

There is that thing, at the very heart of behavioural modelling, and of social sciences in general. For lack of a better expression, I call it the Bignetti model. See, for example, Bignetti 2014[3], Bignetti et al. 2017[4], or Bignetti 2018[5] for more reading. Long story short, what professor Bignetti claims is that whatever happens in observable human behaviour, individual or collective, has already happened neurologically beforehand. Whatever we tweet and whatever we read, it is rooted in that wiring we have between the ears. The thing is that actually observing how that wiring works is still a bit burdensome. You need a lot of technology, and a controlled environment. Strangely enough, opening one’s skull and trying to observe the contents at work doesn’t really work. Reverse-engineered, the Bignetti model suggests that behavioural observation, and behavioural modelling, could be a good method to guess how our individual brains work together, i.e. how we are intelligent collectively.

I go back to the formal structure of the course, more specifically to goals and expected outcomes. I split them into knowledge, skills, and social competences. The knowledge, for one. I expect the students to develop an understanding of the following concepts: a) behavioural pattern, b) social life as a collection of behavioural patterns observable in human beings, c) behavioural patterns occurring as interactions of humans with digital technologies, especially with online content and online marketing, d) modification of human behaviour as a response to online content, and e) the basics of artificial intelligence, like the weak law of large numbers or the logical structure of a neural network. As for the course-specific skills, I expect my students to sharpen their edge in observing behavioural patterns, and changes thereof, in connection with online content. When it comes to general social competences, I would like my students to make a few steps forward on two paths: a) handling projects and b) doing research. It logically implies that assessment in this course should and will be project-based. Students will be graded on the grounds of complex projects, covering the definition, observation, and modification of their own behavioural patterns occurring as interaction with online content.

The structure of an individual project will cover three main parts: a) description of the behavioural sequence in question, b) description of the online content that allegedly impacts that sequence, and c) the study of behavioural changes occurring under the influence of that online content. The scale of students’ grades is based on two component marks: the completeness of a student’s work regarding (a) – (c), and the depth of research the given student has brought up to support their observations and claims. In Poland, in academia, we typically use a grading scale from 2 (fail) all the way up to 5 (very good), passing through 3, 3+, 4, and 4+. As I see it, each student – or each team of students, as there will be a possibility to prepare the thing in a team of up to 5 people – will receive two component grades, e.g. 3+ for completeness and 4 for depth of research, and that will give (3,5 + 4)/2 = 3,75 ≈ 4,0.

Such a project is typical research, whence the necessity to introduce students to the basic techniques of science. That comes as a bit of a paradox, as those students’ major is Film and Television Production, thus a thoroughly practical one. Still, science serves practical purposes: this is something I deeply believe and something I would like to teach my students. As I look upon those goals, and the method of assessment, a structure emerges as regards the plan of in-class teaching. At my university, the bulk of in-class interaction with students is normally spread over 15 lectures of 1,5 clock hours each, thus 30 hours in total. In some curriculums it is accompanied by so-called ‘workshops’ in smaller groups, with each such smaller group attending 7 – 8 sessions of 1,5 hours each. In this case, i.e. in the course of ‘Behavioural Modelling and Content Marketing’, I have just lectures in my schedule. Still, as I see it, I will need to do practical stuff with my youngsters. This is a good moment to demonstrate a managerial technique I teach in other classes, called ‘regressive planning’, which consists in taking the final goal I want to achieve, assuming it is the outcome of a sequence of actions, and then reverse-engineering that sequence. Sort of: ‘what do I need to do if I want to achieve X at the end of the day?’.

If I want to have my students hand me good quality projects by the end of the semester, the last few classes out of the standard 15 should be devoted to discussing collectively the draft projects. Those drafts should be based on prior teaching of basic skills and knowledge, whence the necessity to give those students a toolbox, and provoke in them curiosity to rummage inside. All in all, it gives me the following, provisional structure of lecturing:

{input = 15 classes} => {output = good quality projects by my students}

{input = 15 classes} <=> {input = [10 classes of preparation >> 5 classes of draft presentations and discussion thereof]}

{input = 15 classes} <=> {input = [5*(1 class of mindfuck to provoke curiosity + 1 class of systematic presentation) + 5*(presentation + questioning and discussion)]}

As I see from what I have just written, I need to divide the theory accompanying this curriculum into 5 big chunks. The first of those 5 blocks needs to address the general frame of the course, i.e. the phenomenon of recurrent interaction between humans and online content. I think the most important fact to highlight is that the algorithms of online marketing behave like sales people crossed with very attentive servants, who try to guess one’s whims and wants. It is a huge social change: it is, I think, the first time in human history that virtually every human with access to the Internet interacts with a form of intelligence that behaves like a butler, guessing the user’s preferences. It is transformational for human behaviour, and in that first block I want to show my students how that transformation can work. The opening, mindfucking class will consist in a behavioural experiment along the lines of good, old role playing in psychology. I will demonstrate to my students how a human would behave if they wanted to emulate the behaviour of neural networks in online marketing. I will ask them questions about what they usually do, and about what they liked doing during the last few days, and I will guess their preferences on the grounds of their described behaviour. I will tell my students to observe that butler-like behaviour of mine and to pattern me. In the next step, I will ask students to play the same role, just so they get the hang of how a piece of AI works in online marketing. The point of this first class is to define an expected outcome, like a variable, which neural networks attempt to achieve in terms of human behaviour observable through clicking. The second, theoretical class of that first block will, logically, consist in explaining the fundamentals of how neural networks work, especially in online interactions with human users of online content.

I think that in the second two-class block I will address the issue of behavioural patterns as such, i.e. what they are, and how we can observe them. I want the mindfuck class in this block to be intellectually provocative, and I think I will use role playing once again. I will ask my students to play roles of their choice, and I will discuss their performance from a specific angle: how do you know that your play is representative of this type of behaviour or person? What specific pieces of behaviour are, in your opinion, informative about the social identity of that role? Do other students agree that the type of behaviour played is representative of this specific type of person? The theoretical class in this block will be devoted to a systematic lecture on the basics of behaviourism. I guess I will serve my students some Skinner, and some Timberlake, namely Skinner’s ‘Selection by Consequences’ (1981[6]), and Timberlake’s ‘Behavior Systems and Reinforcement’ (1993[7]).

In the third two-class block I will return to interactions with online content. In the mindfuck class, I will make my students meddle with You Tube and see how the list of suggested videos changes after we search for or click on specific content, e.g. how it changes after clicking on 5 documentaries about wildlife, or after searching for videos on race cars. In this class, I want my students to pattern the behaviour of You Tube. The theoretical class of this block will be devoted to the ways those algorithms work. I think I will focus on a hardcore concept of AI, namely the Gaussian mixture. I will explain how crude observations on our clicking and viewing allow an algorithm to categorize us.
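
To give a taste of what that theoretical class could show, here is a minimal, hypothetical sketch of a Gaussian mixture categorizing users from two crude behavioural features (say, clicks on wildlife videos versus clicks on race-car videos); the data is made up, and the use of scikit-learn here is my assumption for the sake of the example, not a claim about how You Tube is actually built:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Hypothetical observations: [clicks on wildlife videos, clicks on race-car videos] per user
wildlife_fans = rng.normal(loc=[8.0, 1.0], scale=1.0, size=(50, 2))
car_fans = rng.normal(loc=[1.0, 9.0], scale=1.0, size=(50, 2))
clicks = np.vstack([wildlife_fans, car_fans])

# Fit a two-component Gaussian mixture and assign each user to a latent category
gmm = GaussianMixture(n_components=2, random_state=0).fit(clicks)
categories = gmm.predict(clicks)            # hard assignment to a category
probabilities = gmm.predict_proba(clicks)   # soft membership: 'how much' of each profile

print(categories[:5])
print(probabilities[:5].round(2))
```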

As we pass to the fourth two-class block, I will switch to the concept of collective intelligence, i.e. to how whole societies interact with various forms of online, interactive neural networks. The class devoted to intellectual provocation will be discursive. I will make students debate the following claim: ‘The Internet and online content allow our society to learn faster and more efficiently’. There is, of course, a catch, and it is the definition of learning fast and efficiently. How do we know we are quick and efficient in our collective learning? What would slow and inefficient learning look like? How can we check the role of the Internet and online content in our collective learning? Can we apply John Stuart Mill’s logical canons to that situation? The theoretical class in this block will be devoted to the phenomenon of collective intelligence in itself. I would like to work through two research papers devoted to online marketing, e.g. Fink et al. (2018[8]) and Takeuchi et al. (2018[9]), in order to show how online marketing unfolds into phenomena of collective intelligence and collective learning.

Good, so I come to the fifth two-class block, the last one before the scheduled draft presentations by my students. It is the last teaching block before they present their projects, and I think it should bring them back to the root idea of these, i.e. to the idea of observing one’s own behaviour when interacting with online content. The first class of the block, the one supposed to stir curiosity, could consist in two steps of brainstorming and discussion. Students assume the role of online marketers. In the first step, they define one or two typical interactions between human behaviour and the online content they communicate. We use the previously learnt theory to make both the description of behavioural patterns and that of online marketing coherent and state-of-the-art. In the next step, students discuss under what conditions they would behave according to those pre-defined patterns, and what conditions would make them diverge from those patterns and follow different ones. In the theoretical class of this block, I would like to discuss two articles which incite my own curiosity: ‘A place for emotions in behavior systems research’ by Gordon M. Burghardt (2019[10]), and ‘Disequilibrium in behavior analysis: A disequilibrium theory redux’ by Jacobs et al. (2019[11]).

I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon asks me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Molleman, L., & Gächter, S. (2018). Societal background influences social learning in cooperative decision making. Evolution and Human Behavior, 39(5), 547-555.

[2] Smaldino, P. E. (2019). Social identity and cooperation in cultural evolution. Behavioural Processes, 161, 108-116.

[3] Bignetti, E. (2014). The functional role of free-will illusion in cognition: “The Bignetti Model”. Cognitive Systems Research, 31, 45-60.

[4] Bignetti, E., Martuzzi, F., & Tartabini, A. (2017). A Psychophysical Approach to Test: “The Bignetti Model”. Psychol Cogn Sci Open J, 3(1), 24-35.

[5] Bignetti, E. (2018). New Insights into “The Bignetti Model” from Classic and Quantum Mechanics Perspectives. Perspective, 4(1), 24.

[6] Skinner, B. F. (1981). Selection by consequences. Science, 213(4507), 501-504.

[7] Timberlake, W. (1993). Behavior systems and reinforcement: An integrative approach. Journal of the Experimental Analysis of Behavior, 60(1), 105-128.

[8] Fink, M., Koller, M., Gartner, J., Floh, A., & Harms, R. (2018). Effective entrepreneurial marketing on Facebook–A longitudinal study. Journal of business research.

[9] Takeuchi, H., Masuda, S., Miyamoto, K., & Akihara, S. (2018). Obtaining Exhaustive Answer Set for Q&A-based Inquiry System using Customer Behavior and Service Function Modeling. Procedia Computer Science, 126, 986-995.

[10] Burghardt, G. M. (2019). A place for emotions in behavior systems research. Behavioural processes.

[11] Jacobs, K. W., Morford, Z. H., & King, J. E. (2019). Disequilibrium in behavior analysis: A disequilibrium theory redux. Behavioural processes.

It might be a sign of narcissism

I am recapitulating once again. Two things are going on in my mind: science strictly speaking, and a technological project. As for science, I am digging around the hypothesis that we, humans, purposefully create institutions for experimenting with new technologies, and that the essential purpose of those institutions is to maximize the absorption of energy from the environment. I am obstinately turning around the possible use of artificial intelligence as a tool for simulating collective intelligence in human societies. As for technology, I am working on my concept of « Energy Ponds ». See my update entitled « The mind-blowing hydro » for the freshest developments on that point. So far, I have come to the conclusion that figuring out a viable financial scheme, which would allow local communities to own local projects and adapt them flexibly to local conditions, is just as important as working out the technological side. Oh, yes, and there is teaching, the third thing occupying my mind. The new academic year starts on October 1st and I am already thinking about the stuff I will be teaching.

I think it is good to be honest about myself, and so I am trying to be: I have a limited capacity for multi-tasking. Even if I do a few different things at the same time, I need those things to be kind of convergent and similar. This is one of those moments when a written recapitulation of what I do serves me to put some order into what I intend to do. Actually, why not use one of the methods I teach my students in management classes? I mean, why not use some scholarly techniques of planning and goal setting?

Good, so I start. What do I want? I want a monograph on the application of artificial intelligence to the study of collective intelligence, with an edge towards practical use in management. I call it ‘Monography AI in CI – Management’. I want the manuscript to be ready by the end of October 2019. I want a monograph on the broader topic of technological change as part of human evolution, with the hypothesis mentioned in the preceding paragraph. This one I give a working title: ‘Monography Technological Change and Human Evolution’. I have no clear deadline for that manuscript. I want 2 – 3 articles on renewable energies and their application, with the same deadline as the first monograph: end of October 2019. I want to promote and develop my idea of “Energy Ponds”, and that of local financial schemes for this type of project. I want to present this idea in at least one article, and in at least one public speech. I want to prepare syllabuses for teaching, centred, precisely, on the concept of collective intelligence, i.e. of social structures and institutions made for experimentation and learning. In practically each of the curriculums I teach, I want to go into the topic of collective learning.

How will I know I have what I want? This is a control question, forcing me to give precise form to my goals. As for monographs and articles, it is all about preparing manuscripts on time. Each monograph should be at least 400 pages, whilst each article should be some 30 pages long in manuscript form. That makes 460 – 490 pages to write (meaningfully, of course!) until the end of October, and at least 400 more pages to write subsequently. Of course, it is not just about hatching manuscripts: I need to have a publisher. As for teaching, I can assume that I am somehow prepared to deliver a given line of logic when I have the syllabus nailed down nicely. Thus, I need to rewrite my syllabuses not later than by September 25th. I can evaluate progress in the promotion of my “Energy Ponds” concept by the feedback I get from the people whom I have informed, or will have informed, about it.

Right, the above is what I want technically and precisely, like in a nice schedule of work. Now, what do I, like, really want? I am 51; with good health and common sense, I have some 24 – 25 productive years ahead. This is roughly the time that has passed since my son’s birth. The boy is not a boy anymore, he is walking his own path, and what looms ahead of me is like my last big journey in life. What do I want to do with those years? I want to feel useful, very certainly. Yes, I think this is one clear thing about what I want: I want to feel useful. How will I know I am useful? Weeell, that’s harder to tell. As I patiently follow the train of my thoughts, I think that I feel useful today when I can see that people around need me. On top of that, I want to be financially important and independent. Wealthy? Yes, but not for comfort as such. Right now, I am employed, and my salary is my main source of income. I perceive myself as dependent on my employer. I want to change that, so as to have substantial income (i.e. income greater than my current spending, thus allowing accumulation) from sources other than a salary. Logically, I need capital to generate that stream of non-wage income. I have some – an apartment for rent – but as I look at it critically, I would need at least 7 times more in order to have the rent-based income I want.

Looks like my initial, spontaneous thought of being useful means, after having scratched the surface, being sufficiently high in the social hierarchy to be financially independent, and able to influence other people. Anyway, as I have a look at my short-term goals, I ask myself how they bridge into my long-term goals. The answer is: they don’t really connect, my short-term goals and the long-term ones. There are a lot of missing pieces. I mean, how does the fact of writing a scientific monograph translate into multiplying by seven my current equity invested in income-generating assets?

Now, I want to think a bit deeper about what I do now, and I want to discover two types of behavioural patterns. Firstly, there is probably something in what I do, which manifests some kind of underlying, long-term ambitions or cravings in my personality. Exploring what I do might be informative as for what I want to achieve in that last big lap of my life. Secondly, in my current activities, I probably have some behavioural patterns, which, when exploited properly, can help me in achieving my long-term goals.

What do I like doing? I like writing and reading about science. I like speaking in public, whether in a classroom or at a conference. Yes, it might be a sign of narcissism, still it can be put to a good purpose. I like travelling, in moderate doses. Looks like I am made for being a science writer and a science speaker. That looks like some sort of intermediate goal, bridging from my short-term, scheduled achievements into the long-term, unscheduled ones. I do write regularly, especially on my blog. I speak regularly in classrooms, as my basic job is that of an academic teacher. What I do haphazardly, and what could bring me closer to achieving my long-term goals, would be to speak in other public contexts more frequently and sort of regularly, and, of course, to make money on it. By the way, as far as science writing and science speaking are concerned, I have a crazy idea: scientific stand-up. I am deeply fascinated with the art of some stand-up comedians: Bill Burr, Gabriel Iglesias, Joe Rogan, Kevin Hart or Dave Chappelle. Getting across deep, philosophical content about the human condition in the form of jokes, and making people laugh when thinking about those things, is an art I admire, and I would like to translate it somehow into the world of science. The problem is that I don’t know how. I have never done any acting in my life, and I have never written or participated in writing any jokes for stand-up comedy. As skillsets come, this is complete terra incognita to me.

Now, I jump to the timeline. I assume having those 24 years or so ahead of me. What then, I mean, when I hopefully reach 75 years of age? Now, I may shock some of my readers, but provisionally I label that moment, 24 years from now, as “the decision whether I should die”. These last years, I have been asking myself how I would like to die. The question might seem stupid: nobody likes dying. Still, I have been asking myself this question. I am going into deep existential ranting, but I think what I think: when I compare my life with some accounts in historical books, there is one striking difference. When I read letters and memoirs of people from the 17th or 18th century, even from the beginnings of the 20th century, those ancestors of ours tended to ask themselves how worthy their life should be and how worthy their death should come. We tend to ask, most of all, how long we will live. When I think about it, that old attitude makes more sense. In the perspective of decades, planning for maxing out on existential value is much more rational than trying to max out on life expectancy as such. I guess we can have much more control over the values we pursue than over the duration of our life. I know that what I am going to say might sound horribly pretentious, but I think I would like to die like a Viking. I mean, not necessarily trying to kill somebody, just dying by choice, whilst still having the strength to do something important, and doing those important things. What I am really afraid of is slow death by instalments, when my flame dies out progressively, leaving me just weaker and weaker every month, whilst burdening other people with taking care of me.

I fix that provisional checkpoint at the age of 75, 24 years from now. An important note before I go further: I have not decided that I will die at the age of 75. I suppose that would be as presumptuous as assuming to live forever. I just give myself a rationally grounded span of 24 years to live with enough energy to achieve something worthy. If I have more, I will just have more. Anyway, how much can I do in 24 years? In order to plan for that, I need to recapitulate how much I have been able to do so far, during an average year. A nicely productive year means 2 – 3 acceptable articles, accompanied by 2 – 3 equally acceptable conference presentations. On top of that, a monograph is conceivable in one year. As for teaching, I can realistically do 600 – 700 hours of public speech in one year. With that, I think I can nail down some 20 valuable meetings in business and science. In 24 years, I can write 24*550 = 13 200 pages, I can deliver 15 600 hours of public speech, and I can negotiate something in 480 meetings or so.

Now, as I talk about value, I can see there is something more far-reaching than what I have just named as my long-term goals. There are values which I want to pursue. I mean, saying that I want to die like a Viking and, at the same time, stating my long-term goals in life in terms of income and capital base: that sounds ridiculous. I know, I know, dying like a Viking, in the times of Vikings, meant very largely pillaging until the last breath. Still, I need values. I think the shortcut to my values is via my dreams. What are they, my dreams? Now, I make a sharp difference between dreams and fantasies. A fantasy is: a) essentially unrealistic, such as riding a flying unicorn, and b) involving just a small, relatively childish part of my personality. On the other hand, a dream – such as contributing to making my home country, Poland, go 100% off fossil fuels – is something that might look impossible to achieve, yet its achievement is a logical extension of my present existence.

What are they, my dreams? Well, I have just named one, i.e. playing a role in changing the energy base of my country. What else do I value? Family, certainly. I want my son to have a good life. I want to feel useful to other people (that was already in my long-term goals, and so I am moving it to the category of dreams and values). Another thing comes to my mind: I want to tell the story of my parents. Apparently banal – lots of people do it, or at least attempt to – and yet nagging as hell. My father died in February, and around the time of the funeral, as I was talking to family and friends, I discovered things about my dad which I had not had the faintest idea of. I started going through old photographs and old letters in a personal album I didn’t even know he still had. Me and my father, we were not very close. There was a lot of bad blood between us. Still, it was my call to take care of him during the last 17 years of his life, and it was my call to care for what we call in Poland ‘his last walk’, namely the one from the funeral chapel to the tomb properly speaking. I suddenly had a flash glimpse of the personal history, the rich, textured biography I had in front of my eyes, visible through old images and old words, all that against the background of the vanishing spark of life I could see in my father’s eyes during his last days.

How will I know those dreams and values are fulfilled in my life? I can measure progress in my work on and around projects connected to new sources of energy. I can measure it by observing the outcomes. When things I work on get done, this is sort of tangible. As for being useful to other people, I go once again down the same avenue: to me, being useful means having an unequivocally positive impact on other people. Impact is important, and thus, in order to have that impact, I need some kind of leadership position. Looking at my personal life and at my dream to see my son having a good life, it comes as the hardest thing to gauge. This seems to be the (apparently) irreducible uncertainty in my perfect plan. Telling my parents’ story: how will I prove to myself I will have told it? A published book? Maybe…  

I sum it up, at least partially. I can reasonably expect to deliver a certain amount of work over the 24 years to come: approximately 13 200 pages of written content, 15 600 hours of public speech, and 450 – 500 meetings, until my next big checkpoint in life, at the age of 75. I would like to focus that work on building a position of leadership, in view of bringing some change to my own country, Poland, mostly in the field of energy. As the first stage is to build a good reputation as a science communicator, the leadership in question is likely to be a rather soft one. In that plan, two things remain highly uncertain. Firstly, how should I behave in order to be as good a human being as I possibly can? Secondly, what is the real importance of that telling-my-parents’-story thing in the whole plan? How important is it for my understanding of how to live those 24 years to come well? What fraction of those 13 200 written pages (or so) should refer to that story?

Now, I move towards collective intelligence, and to possible applications of artificial intelligence to study the collective one. Yes, I am a scientist, and yes, I can use myself as an experimental unit. I can extrapolate my personal experience as the incidence of something in a larger population. The exact path of that incidence can shape the future actions and structures of that population. Good, so now there is someone – anyone, in fact – who comes and says to my face: ‘Look, man, you’re bullshitting yourself and people around you! Your plans look stupid, and if attitudes like yours spread, our civilisation will fall into pieces!’. Fair enough, that could be a valid point. Let’s check. According to the data published by the Central Statistical Office of the Republic of Poland, in 2019 there are n = 453 390 people in Poland aged 51, like me, 230 370 of them being men, and 232 020 women. I assume that attitudes such as my own, expressed in the preceding paragraphs, are one type among many occurring in that population of 51-year-old Polish people. People have different views on life and other things, so to say.

Now, I hypothesise in two opposite directions. In Hypothesis A, I state that just some among those different attitudes make any sense, and that there is a hypothetical distribution of those attitudes in the general population which yields the best social outcomes whilst eliminating, early on, all the nonsense attitudes from the social landscape. In other words, some worldviews are so dysfunctional that they’d better disappear quickly and be supplanted by the more sensible ones. Going even deeper, it means that quantitative distributions of attitudes in the general population fall into two classes: the completely haphazard ones, existential accidents without much ground for staying in existence, on the one hand, and the sensible and functional ones, which can be sustained with benefit to all, on the other hand. In Hypothesis ~A, i.e. the opposite of A, I speculate that the observed diversity in attitudes is a phenomenon in itself and does not really reduce to any hypothetically better distribution. It is the old argument in favour of diversity. Old as it is, it has old mathematical foundations, and, interestingly, it is one of the cornerstones of what we call today Artificial Intelligence.

In Vapnik & Chervonenkis 1971[1], a paper reputed to be kind of seminal for today’s AI, I found a reference to the classical Bernoulli theorem, known also as the weak law of large numbers: the relative frequency of an event A in a sequence of independent trials converges (in probability) to the probability of that event. Please note that roughly the same can be found in the so-called Borel law of large numbers, named after Émile Borel. It is deep maths: each phenomenon bears a given probability of happening, and this probability is sort of sewn into the fabric of reality. The empirically observable frequency of occurrence is always an approximation of this quasi-metaphysical probability. That goes a bit against the way probability is taught at school: it is usually about that coin – or dice – being tossed many times etc., which implies that probability exists at all only as long as there are things actually happening. No happening, no probability. Still, if you think about it, there is a reason why those empirically observable frequencies tend to be recurrent, and the reason is precisely that underlying capacity of the given phenomenon to take place.
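
A minimal simulation of that convergence, with a toy event whose underlying probability is set, by assumption, to 0.3:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3  # the underlying probability sewn into the fabric of this toy reality

for n in (10, 100, 10_000, 1_000_000):
    trials = rng.random(n) < p     # n independent trials of the event
    frequency = trials.mean()      # empirically observable relative frequency
    print(n, round(frequency, 4))  # drifts towards p = 0.3 as n grows
```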

Basic neural networks, the perceptron-type ones, experiment with weights attributed to input variables, in order to find a combination of weights which allows the perceptron to get as close as possible to a target value. You can find descriptions of that procedure in « Thinking Poisson, or ‘WTF are the other folks doing?’ », for example. Now, we can shift our perspective a little bit and assume that what we call ‘weights’ of input variables are probabilities that the phenomenon denoted by the given variable happens at all. A vector of weights attributed to input variables is then a collection of probabilities. Walking down this avenue of thinking leads me precisely to Hypothesis ~A, presented a few paragraphs ago. Attitudes congruous with that very personal confession of mine, developed even more paragraphs ago, have an inherent probability of happening, and the more we experiment, the closer we can get to that probability. If someone tells me to my face that I’m an idiot, I can reply that: a) any worldview has an idiotic side, no worries, and b) my particular idiocy is representative of a class of idiocies which, in turn, the civilisation needs in order to figure out something clever for the next few centuries.
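
Here is a minimal sketch in the spirit of that perceptron-type experimentation: weights attributed to input variables are nudged, step by step, so that the output gets as close as possible to a target value. Reading those weights as probabilities of the underlying phenomena is the interpretive twist discussed above, not a standard convention, and all the numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical input variables (rows = observations) and a target value to approach
x = rng.random((200, 3))
target = 0.6

weights = rng.random(3)   # initial, tentative weights attributed to the input variables
learning_rate = 0.05

for _ in range(2000):
    output = 1.0 / (1.0 + np.exp(-x @ weights))         # sigmoid activation of the weighted inputs
    error = target - output.mean()                      # distance from the target value
    weights += learning_rate * error * x.mean(axis=0)   # nudge the weights to shrink that distance

print(weights.round(3))  # the experimentally found combination of weights
```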



[1] Vapnik, V. N., & Chervonenkis, A. Y. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 16(2), 264-280.

The decision-making bias

My editorial on You Tube

As almost always, I am working on several things at once. Roughly speaking, I am doing some respectable theory, accompanied by something practical. The theory I am trying to shape into a scientific monograph revolves around the general phenomenon of collective intelligence and of technological change at the same time, with a special focus on artificial intelligence. I am summing up my last two years of research, and it yields the sketch of a book I could write on the basis of my research notes published on « Discover Social Sciences ». I have two basic hypotheses. The first assumes that the collective intelligence of human societies manifests itself through the functioning of institutions specifically dedicated to experimenting with new technological solutions. I think that artificial intelligence in particular, and digital technologies in general, represent an acceleration in the creation and functioning of such institutions. In other words, I advance the thesis that the technological and institutional changes of human civilisation converge towards a greater capacity of that civilisation to experiment with itself. In 2017, I did a bit of research on energy using the evolutionary method. Now I have the impression that the evolutionary approach is like an introduction to the application of artificial intelligence in the social sciences. There is something in all that which puts a bit of disorder into the established theory of the social sciences. The latter assumes that the institutions of our societies – laws, customs, political systems etc. – represent first and foremost a legacy of the past, like a sedimentation of behavioural strategies which, before the creation of those institutions, were much more fluid and changeable. Constitutional law would thus be a formalisation of political strategies used in the past, civil law would do the same for contracts between private parties, etc. That is the edifice of the dominant theory, and I want to add a few more bricks to it. I am convinced that some institutions – above all financial markets in the broad sense, and certain political institutions in democratic systems – are in fact meta-institutions, something like female organisms which have the capacity to recombine institutional DNA of diverse provenance and thus give birth to new institutions.

My second hypothesis is the one I already discussed somewhat in an article published in 2017: the technological changes of human civilisation have, as their essential biological function, the maximisation of the absorption of energy from the environment. Why is this important? It is a great intellectual fascination which I have progressively developed, very largely under the influence of Fernand Braudel and his remarkable work entitled « Civilisation et Capitalisme ». As I read and re-read that book several times, I realised that every form of human civilisation is essentially composed of technologies for using the energy accessible in the environment – just like technologies for acquiring food – and that our social structures manifest the way those technologies work. What is more, wind and water – which today we call ‘renewable energies’ and consider an innovation – had formed the basis of what we know today as European civilisation.

I thus have two hypotheses which yield an interesting convergence: as a civilisation, do we develop institutions which serve us to experiment with new solutions in order to maximise our collective absorption of energy? And there, boom, there is that third thing, the project I am conceptualising, for the moment, under the name of « Energy Ponds ». You can consult « La marge opérationnelle de $1 539,60 par an par 1 kilowatt » for my latest progress on the subject. I am working on this concept from two different points of view: the practical and the scientific. On the one hand, I am putting together a renewable energy development project through the Project Navigator, accessible via the page of the International Renewable Energy Agency. The renewable energy in question is, of course, the electricity produced by hydroelectric turbines installed in the infrastructure of Energy Ponds. The Project Navigator is strongly oriented towards the economic, financial and political feasibility of the idea at hand: for a project to be workable, it needs a solid and clear argumentation addressed to the social actors involved. That argumentation must go hand in hand with a clear idea of the appropriation of the project: a well-defined social group, with well-defined decision-makers, must be able to appropriate both the resources necessary for the project and the results obtained. The basic principle is that non-appropriated projects, with fuzzy control over resources and results, are the first to fail.

For the moment, I have two main arguments in favour of my idea. Firstly, even what is happening this summer – heat waves, agricultural drought, local floods – shows that climate change is forcing us to rethink and rebuild our hydrological infrastructure. We would do well to learn some new things about retaining rainwater and using it later. The most serious danger, on the hydrological side, is an ever deeper disruption of the agricultural market in Europe. Secondly, the quantity of rainwater that falls on Europe, if used adequately, that is if we make it pass through hydroelectric turbines, represents an enormous amount of energy. So we have a serious danger on the one hand and possible gains on the other.

Right, so that is a brief summary of the topics I am working on at the moment. Now, I want to use my research journal, as I present it on my blog, to review what I have read and learnt about both things, i.e. the big theoretical book and the practical project. Once again, I am using my blog as a tool for putting things in order. I start with the theory of technological change, collective intelligence and artificial intelligence. I took a little detour through the humanities: psychology, neurophysiology and so on. I could see that, probably under the impulse of recent developments in artificial intelligence, a new syncretic discipline has been born: a general theory of intelligence and knowledge. The so-called Bignetti model is a good example (see, for instance, Bignetti 2014[1]; Bignetti et al. 2017[2]; Bignetti 2018[3]). This model introduces some creative disorder into the distinction between action and knowledge. The latter is defined as a conscious expression of experience, while the term « action » is extended to everything that is, precisely, the experience of an intelligent entity. The Bignetti model is a very general theoretical vehicle that serves to explain the paradox of apparently unconscious action. We, as individuals, as well as we, as collectives, accomplish a whole lot of actions that we are unable to explain rationally. Enrico Bignetti, professor emeritus of biochemistry and molecular biology at the University of Parma, puts forward the thesis that the conscious experience of self, as well as that of individual will and free will, are illusions our brain creates in the first months of our childhood in order to make action more efficient. Those illusions serve to put order in the mass of information that our brain receives and processes.

For my part, I start from a simple assumption, akin to professor Bignetti's line of reasoning: a human society is a collection of individual nervous systems, a collection of brains, so to speak. It is therefore logical that the way society functions is partly determined by the functioning of those individual brains. There is that classical observation, at the border between science and plain common sense, that a substantial part of our nervous system serves almost exclusively to make social relations work, and it goes in the opposite direction too: social relations are what keeps that same substantial part of our nervous system running. It is an intuition that Charles Darwin had already expressed in his book « The Expression of The Emotions In Man And Animals »[4], and which Émile Durkheim treated from a sociological angle in « Les règles de la méthode sociologique ». There is thus a functional connection between what our neurons do and what a ministry does. Question: if the neurons of an individual brain are capable of individual intelligence, what kind of intelligence can we expect from the neurons assembled in the multitude of individual brains of a human society?

I found an interesting line of reasoning in Hassabis et al. (2017[5]). Artificial intelligence makes it possible to create an indefinitely large number of possible solutions, and neurophysiology can be useful in selecting those solutions which either resemble the human nervous system or are completely novel with respect to human neural structure. In this context, it is interesting to ask the ontological question: how does artificial intelligence exist? When a neural network works, i.e. when its algorithm proves its functional utility, does the logical structure of that neural network exist in the same way that ideas exist?

I went a bit further into the study of artificial intelligence algorithms as such. I focused on three types of algorithms which are, in a way, pillars of deep learning: the Gaussian mixture, the random forest, and algorithms for accelerating learning, namely Adam and DQN. I will briefly discuss their basic logic, starting with the Gaussian mixture. Just as with the other two, I copied it from GitHub. More exactly, I took as my example the Gaussian mixture formalized at https://github.com/rushter/MLAlgorithms/blob/master/examples/gaussian_mixture.py .
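
To keep things concrete, here is a minimal sketch of what fitting a Gaussian mixture looks like in practice. I use scikit-learn's GaussianMixture rather than the GitHub implementation linked above, and the data are purely hypothetical, just three made-up clouds of observations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# three hypothetical clouds of observations in two dimensions
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[4, 4], scale=0.7, size=(100, 2)),
    rng.normal(loc=[0, 5], scale=0.6, size=(100, 2)),
])

gmm = GaussianMixture(n_components=3, random_state=42).fit(data)
labels = gmm.predict(data)   # probabilistic categorization of each observation
print(gmm.means_)            # the centres of the three inferred categories
```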

Our culture, starting with our language, is based on categorization. We need to name things, and in order to name them we need to establish a logical link between sets of observable phenomena and the categories that group them. This is how trees become « the trees » and chairs become « the chairs ». We even have a distinct part of our brain responsible for this naming function: the so-called ventral pathway (Mishkin et al. 1983[6]; Grossberg 2017[7]), which is the only part to do it and does nothing else. If we want to think about anything complex, like « what to do? », it is always « what to do with this or that thing? » or « what to do about X? ». Our brain separates the « doing » part from the « this or that thing, X for friends » part. The latter is always processed by the ventral pathway, and everything that remains – the « doing about… » part – is the job of other parts of the brain.

The Gaussian mixture produces categories probabilistically, out of the empirical data fed to the neural network equipped with said mixture. The general method is based on the concept of similarity between phenomena, only approached from a rigorously mathematical angle. Intuitively, when we talk about similarity between several phenomena, we may think of comparing them two by two. Only, this is not necessarily the best idea in terms of efficiency, and besides, it is possible that this intuitive approach does not represent the way our own brain works. We can represent the collective decisions of a human society as a set of simple choices, comparable to the choice a chimpanzee makes between two boxes, depending on the expected presence of a fruit in one of them. Game theory goes in this direction. Nevertheless, the application of artificial intelligence makes an original contribution here. In the article by Garnelo and Shanahan (2019[8]) we can see the results of intelligence tests carried out by a neural network in two alternative logical structures: a relational one, similar to the chimpanzee's choice, and a self-attentive one. The self-attentive structure works like an introspective individual: the neural network observes its own decisions and takes that observation into account when experimenting with new decisions. The neural network thus solves the intelligence test according to two different logics: as a sequence of simple choices, or as a complex process of reasoning. Apparently, according to Garnelo and Shanahan (2019), the complex method works better and the neural network scores more points on the test.
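
For readers who want to see what « self-attention » means mechanically, below is a minimal numpy sketch of the basic single-head self-attention operation: every item in a set attends to all the others, including itself, before producing its output. This is not the exact architecture tested by Garnelo and Shanahan, just the elementary building block, and all the data and weights here are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # each of the n items produces a query, a key and a value;
    # its output is a weighted mix of all values, weights given by query-key affinity
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 8))                  # 6 hypothetical items, 8 features each
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)   # (6, 8)
```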

Let's try to formalize a method of categorizing phenomena that uses this notion of self-attention. Back to the maths. I have a set of raw empirical data which serves as learning material for a neural network. I want to group these data into categories that are as functional as possible given the objective I set. Well then, I should actually set that objective. As I have written many times on this blog, the objective of a neural network is to minimize the gap between a value it computes itself, through its neural activation function, and an arbitrary value fixed by the user. If I want to program an intelligent robot to arrange packages in a warehouse, and I want that robot to learn to use the storage space as efficiently as possible, I make it minimize the gap between the total volume of the stored packages and the geometric volume of the warehouse.
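
Here is a minimal sketch of that basic logic: one tiny network adjusts its weights so that the value it computes itself drifts towards an arbitrary target fixed by the user. The data, the target and the learning rate are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=(100, 3))   # hypothetical input data
target = 0.8                      # arbitrary value fixed by the user
w = rng.normal(size=3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(2000):
    activation = sigmoid(x @ w)
    gap = activation.mean() - target                 # the gap to be minimized
    # gradient of 0.5 * gap**2 with respect to w, via the chain rule
    grad = (activation * (1 - activation))[:, None] * x
    w -= learning_rate * gap * grad.mean(axis=0)

print(round(float(sigmoid(x @ w).mean()), 3))        # should land close to 0.8
```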

I thus formalize the objective to be reached as a vector composed of one or more numerical values. I assume that in order to reach that objective, my neural network has to group the initial data into categories. I stick to the example of the warehousing robot and I assume it has to group 10,000 packages into categories corresponding to storage piles. A storage pile is a set of packages stacked on top of one another, accessible to the robotic forklift from every side. I want the neural network to find an arrangement of packages into storage piles that satisfies the objective of optimal use of storage space. Each of the 10,000 packages will thus ultimately have a vector of coordinates describing its attribution to a given storage pile. Only, the storage piles themselves are not yet defined and positioned. Apparently, we are facing a looped problem: each package has to find its place in a given storage pile, but the storage piles have to be defined in terms of the specific packages they are going to contain. On top of that, there are those silly questions that come to mind. Should all the storage piles be of the same size and mass, or would it be better to differentiate them in that respect?

We can generalize the storage problem. Take a population of 500,000 people in a medium-sized city and simulate the transition of their energy base towards a dispersed network of local nodes made of small wind and water turbines accompanied by arrays of photovoltaic panels. I know that, in the long run, the local nodes of energy supply will adapt to the local populations and vice versa. I want to foresee the possible « vice versas » and find the most efficient among them. I know this means simulating different paths of mutual adaptation. The distribution of energy installations across the spatial structure of the city is a problem similar to the spatial arrangement of a finite number of packages in the finite space of a warehouse. On the maths side, the problem can be expressed as a relation between two sets of numerical values: one set of vectors describing the local sources of renewable energy, and another set of vectors describing the local groupings of the city's inhabitants.

I return to the duality flagged by Garnelo and Shanahan (2019). I can approach the problem of spatial grouping in two different ways. The most simplistic one is pairwise comparison. For each package, I compare its storage in alternative spots inside the warehouse, which leads me to compare the efficiency of alternative groupings of packages into storage piles, and so on. That makes a heap of comparisons, and the problem is that if I find something that definitely works badly, I have to step several moves back in the chain of comparisons and start over. The Gaussian mixture allows me to shorten and simplify that path.

Before discussing the Gaussian mixture method in more detail, I will briefly recall the Gaussian approach in general. We live in a reality where things that seem intuitively plausible happen more frequently than the quasi-fantastic. If I play the lottery, hitting two correct numbers across three draws in three months is more plausible and more probable than hitting the big jackpot of 6 numbers in each draw over the same period. It is a binomial reality, and it behaves in accordance with the de Moivre–Laplace theorem, i.e. like a grapevine: the phenomena of that reality converge to form distinct clusters. At the centre of each cluster we find the relatively most frequent and plausible phenomena, while the more extreme occurrences are to be found in the outer, superficial layers of each cluster. Neurophysiology, in particular adaptive resonance theory, suggests that our brain experiments with several possible partitions of the observed reality into distinct clusters (Grossberg 2017, for instance). As a result of that experimentation, our brain chooses the partition whose structure proves the most functional with regard to the objectives set. Mathematically, this means that a neural network equipped with a Gaussian mixture generates a series of values which are provisionally treated as the expected means of as many local normal distributions, hence as many clusters of phenomena, hence as many possible geographies of wind turbines in a city, hence as many storage piles in that warehouse from a few paragraphs ago. Is the spatial arrangement thus obtained inside the warehouse the one that gives the best use of space? Let's go and see: let's repeat the experiment with several possible series of local means, i.e. with several possible partitions of reality into local normal distributions.
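
Below is a minimal sketch of that « several possible partitions, keep the most functional one » logic, in the warehouse framing. The package coordinates are made up, and I use scikit-learn's GaussianMixture, scoring the candidate partitions with the Bayesian Information Criterion as a stand-in for whatever objective function the warehouse robot would actually optimize.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# hypothetical (x, y) footprint coordinates of 10,000 packages on the warehouse floor
packages = rng.uniform(low=0, high=50, size=(10_000, 2))

candidates = []
for n_piles in (20, 40, 60, 80):
    for seed in range(3):                  # several random partitions per pile count
        gmm = GaussianMixture(n_components=n_piles, random_state=seed).fit(packages)
        candidates.append((gmm.bic(packages), n_piles, gmm))

best_bic, best_n, best_gmm = min(candidates, key=lambda c: c[0])
piles = best_gmm.predict(packages)         # the pile assigned to each package
print(best_n, round(best_bic, 1))
```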

Categorizing the phenomena of reality is a step on the path of intelligent adaptation, a bit like I described it, two years ago, in « Deux lions de montagne, un bison mort et moi ». Artificial intelligence algorithms make it possible to observe not only the way an intelligent structure groups observed phenomena into categories, but also the way it experiments with several alternative decision paths. The random forest is the type of algorithm which uses the same principle – generating several sets of random values and using them as so many alternative visions of reality, in order to choose the most efficient of them all – to simulate different decisional paths. As a practical example of the algorithm, I took the one accessible at https://github.com/rushter/MLAlgorithms/blob/master/examples/random_forest.py. Say I am a security officer in a big airport. I watch thousands of passengers stream past me. I have just received a tip that one person among those thousands is a terrorist. I need to think fast about how to fish them out of that crowd, and most probably I will get the chance to test my intuitions just once. If I fail, the guy will either attack or melt back into the landscape.

Here, the difference between human intelligence and neural networks is very visible. The latter can simulate a high-uncertainty decision – like how to find a terrorist – as a competition between several possible decision paths. Mathematically, a complex decision is a decision tree: if A, then B, and then F imposes itself rather than G, and so on. When I have a set of phenomena described as numerical data digestible for a neural network, I can create an indefinite number of decision trees to connect those phenomena into logical implications. I can test each tree for accuracy, speed of decision, and so on. This is how the random forest algorithm works. Question: how do I know which decision tree works best? I know from experience that even a relatively simple neural network can achieve high accuracy in estimating the outcome variable while swinging through quite outlandish values as regards the input variables. Once again, we seem to be caught in a loop, since evaluating the practical value of a decision tree is a decision tree in itself. The random forest solves this problem by including a control dataset, where the optimization has already been done and we know what a really efficient decision tree can do with it. The random decision trees are kindly asked to process that control data, and we observe which of those trees lands closest to the pre-computed result.
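
A minimal sketch of that workflow, with scikit-learn and synthetic data standing in for the thousands of passengers (one rare class playing the role of the person we are trying to spot). The held-out set plays the role of the control data against which the forest's trees are judged; all the parameters here are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# synthetic, imbalanced data: roughly 2% of observations belong to the rare class
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_train, X_control, y_train, y_control = train_test_split(
    X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

# the control set is data where the right answer is already known:
# the forest is evaluated by how close its trees land to that answer
print(accuracy_score(y_control, forest.predict(X_control)))
```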

I wonder what use these algorithms I have just sketched – the Gaussian mixture and the random forest – might have in the study of the collective intelligence of human societies. Intuitively, I perceive these algorithms as very objective and rational compared with collective human decisions. In real life, we very quickly reach the point where we want so badly to act, under the impulse of a subjectively perceived urgency, that we narrow the range of options considered in our decisions. When collective decisions become political decisions, it gets very delicate to suggest that a given decision tree is not exactly the summit of logic. Real collective decisions seem markedly more biased than those taken with the use of the Gaussian mixture or the random forest. These algorithms can therefore serve to assess decisional bias.

I keep delivering good science, almost brand new, just slightly dented in the design process. As a reminder, you can download the business plan of the BeFund project (also available in an English version). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself a financial base for this effort. You can support my research financially, according to your best judgment, through my PayPal account. You can also register as my patron on my Patreon account. If you do so, I will be grateful if you let me know two important things: what kind of reward do you expect in exchange for your patronage, and what milestones would you like to see in my work? You can contact me through this blog's mailbox: goodscience@discoversocialsciences.com .


[1] Bignetti, E. (2014). The functional role of free-will illusion in cognition:“The Bignetti Model”. Cognitive Systems Research, 31, 45-60.

[2] Bignetti, E., Martuzzi, F., & Tartabini, A. (2017). A Psychophysical Approach to Test:“The Bignetti Model”. Psychol Cogn Sci Open J, 3(1), 24-35.

[3] Bignetti, E. (2018). New Insights into “The Bignetti Model” from Classic and Quantum Mechanics Perspectives. Perspective, 4(1), 24.

[4] Darwin, C., & Prodger, P. (1998). The expression of the emotions in man and animals. Oxford University Press, USA.

[5] Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245-258.

[6] Mishkin, M., Ungerleider, L. G., & Macko, K. A. (1983). Object vision and spatial vision: two cortical pathways. Trends in neurosciences, 6, 414-417.

[7] Grossberg, S. (2017). Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Neural Networks, 87, 38-95.

[8] Garnelo, M., & Shanahan, M. (2019). Reconciling deep learning with symbolic artificial intelligence: representing objects and relations. Current Opinion in Behavioral Sciences, 29, 17-23.

Water that falls fast

I keep my research journal in three languages: English, French and Polish. Snobbery? Sure, probably. There is something more to it, though. I have noticed that when I switch from one language to another, it is as if I processed information in a slightly different way. Seemingly the same, but from a slightly different angle. English is the international language of science. French helps me gain an additional perspective on what I write in English. Polish is my mother tongue. No matter how well I handle other languages, my brain is « coded » in Polish as far as the basic semantic distinctions go. Writing in Polish, on the Discover Social Sciences blog, is for me like taking a bike ride along the Vistula boulevards in my native Krakow and sorting out the thoughts churning in my head.

I am trying to dress in concrete actions an idea that came to my mind recently. So far I have described it in English (e.g. in « The mind-blowing hydro ») and in French (e.g. « La marge opérationnelle de $1 539,60 par an par 1 kilowatt »), and now it is time for the Polish version. The name first. In English I called this idea « Energy Ponds », but the literal Polish name, « Stawy Energetyczne », sounds silly, since « stawy » also means joints, bringing to mind one of those miracle gels for a sore elbow or knee. I will start with a description of the idea and then try to come up with a Polish name. It is about water. European civilization is largely based on the fact that in northern Europe, roughly at the turn of the 10th and 11th centuries AD, we developed an exceptionally efficient system of food production. It relied, and seems to rely still, on the fact that we used to receive most of the annual precipitation as snow in winter, and snow is what it is: it lies all winter and melts in spring. As it lies, it compresses the things underneath, and where there is compression, thermodynamics takes a bow: high pressure works almost like high temperature. So those things under the snow rot slowly; as they rot, it gets warmer still and soil humus forms. In spring the snow melts slowly, the meltwater soaks slowly and deeply into the ground, and water resources accumulate.

Now that has changed. Less and less water falls as snow, more and more falls as violent rain, and more and more evaporates under rising temperatures. As a result, we have floods and droughts, and I have an occasion for a little literature review – a « reviewlet », which is not a proper word, but so what, I can have my own neologism. I came across two interesting articles on flood-related risk in Europe: Alfieri et al. 2015[1] and Paprotny et al. (2018[2]). Interesting, because they show how collective intelligence works in our society. Flood-related risk, measured with the classical actuarial method of « size of losses times the probability of such a situation occurring », is changing in different ways. The frequency of floods, and thus the probability of the damage they cause, is systematically rising, yet for some 20 years the size of local damage has been systematically falling. We Europeans are beginning to adapt to this particular aspect of climate change. It is visible, however, that we are adapting mostly in cities. The more urban infrastructure in one place, the smaller the human and material losses caused by floods. Big cities defend themselves against floods with an effectiveness that drops sharply in rural areas and small towns. Floods claim the most victims precisely in the countryside.

As for droughts and the related hazards, the situation is different. Four articles on the subject – Naumann et al. (2015[3]), Vogt et al. (2018[4]), Stagge et al. (2017[5]) and Lu et al. (2019[6]) – indicate that here in Europe we are only beginning to learn the risk associated with drought. It is worth pointing out one essential difference between a flood and a drought. I know, I get it: in the first case there is too much water, in the second too little. I mean something else. Too much water happens abruptly and spectacularly, bringing losses as visible as a bruise after a blow. A flood is something that, with a bit of ill will, can be watched from a safe place as an interesting news item. A drought, by contrast, unfolds slowly and brings losses which we notice only once they have swollen to really large proportions. We Europeans are only beginning to understand when an unfavourable water balance is a real reason for concern. There are, however, already reasonably hard scientific data on these hazards. It is known that France, Spain and Italy, and also – surprisingly – the United Kingdom, are the most exposed to drought. Here in Poland it is about average.

And here we come to what exactly the drought hazard is. In the nearest and most practical perspective, what threatens us is a destabilization of the food market. If anyone cares to glance at the prices of agricultural products in futures contracts, they will see something like anxiety in the market. Prices are increasingly volatile. We are talking about futures contracts, hence about the prices that buyers of agricultural products expect in the future. The more changeable and divergent those expectations, the more volatile the prices. So what can we objectively expect? Webber et al. (2018[7]) write interestingly on this. Climate change exerts a twofold influence on yields. Hydrological disruptions work on the minus side, yet there is also the so-called carbon-high factor. An increased concentration of carbon in the atmosphere stimulates plant metabolism. We do not know, however, up to what point that stimulation will work favourably, and beyond what threshold it will become an adverse factor. How does it work in practice? Winter wheat, for example, without the carbon-high effect, may yield 9% less in 2050 than today, whereas with the carbon high factored in one can expect a 4% increase in yields. There is no justice in this world, though, and maize seems doomed to a 20% drop in yields by 2050, carbon-fuelled or not.

The same article, however, returns to the uncertainty about what, in practical terms, a drought in Europe really is. A water deficit is one thing, and its functional impact on agriculture is another. Here again, our collective intelligence shows up in an interesting way. Problems with unambiguously forecasting the impact of climate change on agriculture stem from methodological issues. If we want a model that allows unambiguous prediction of this matter, the local prediction errors (residuals) for the individual variables should be mutually uncorrelated, i.e. mutually independent. In this case they are correlated. It seems that our agriculture is adapting to climate change so fast that it inadvertently produces that correlation. According to Toreti et al. (2019[8]), the really rough climatic ride in European agriculture began in 2015. Structural changes are visible too: the loss in wheat yields is being compensated by larger production of other cereals, yet it is accompanied by smaller production of beets and potatoes (Eurostat[9], Schils et al. 2018[10]).

That is the outline of the hazards; time for solutions. If we have less and less water that soaks in slowly, we must learn to catch and hold the water that falls fast. I came across an interesting idea in China, implemented in 30 urban agglomerations: the sponge city. It is a particular type of urban infrastructure, oriented towards absorbing and retaining large amounts of rainwater (Jiang et al. 2018[11]). Interesting solutions there, including a special porous concrete able to store rainwater. One of the 30 Chinese agglomerations covered by this investment programme is Xiamen (Shao et al. 2018[12]), and that is where I found the direct inspiration for my idea: the retained rainwater can be pumped into elevated tanks, from which, as it flows back down, the same rainwater drives hydroelectric turbines.

In Europe, investing in a sponge countryside seems more important than investing in sponge cities. In European realities, cities are far from the megalopolitan sprawl of Chinese ones, and climate change threatens first of all the countryside and its role as the food base for cities. There is a technology which we once – back in the 18th century – used commonly in Europe and which fell into oblivion as engine-driven devices spread: the water pump powered by the energy of water itself. A water wheel immersed in the current of a river drives a pump which pushes water upwards and away from the river. I found a company that supplies this technology in a modern version: Aqysta. There is also the well-known, patented technology of the hydraulic ram pump. Such a pump can push water from the river into elevated tanks, from which it would then flow down – passing through hydroelectric turbines – into a network of ponds and channels retaining the water. My point is precisely that instead of accumulating water in one large retention reservoir, we should rather spread it over a relatively large area of wet meadows and small overflow pools. The water pumped out of the river into the elevated tanks thus ends up, ultimately, in irrigation structures which keep it from flowing further down the riverbed.

Back to the question of a Polish name for this idea. For now I will call it « Energo-Retencja » (Energy Retention). One way or another, it is just a label to make discussing the idea easier. Below, I try to show graphically what I mean.

When high water comes after heavy rains, the river level rises and the flow per second in the river's current increases. The water pump receives more kinetic energy, has more power and pumps water faster into the elevated tank. More water passes through the tank, more flows down from it through the hydroelectric turbines, and these in turn generate more power. More water ends up in the retention ponds and channels, which, by the way, also capture directly the water falling from the sky. The groundwater level rises, further and further away from the riverbed.

When a meteorological drought comes, i.e. when it does not rain much at all, the rain-fed supply to the river's current drops to zero. The river then starts acting like a drain and draws groundwater away from the adjacent land. The river level gradually falls and the flow decreases. The pumps immersed in its current have less energy and pump less water into the elevated tanks. Less water passes through the turbines and less water ends up in the retention structures. A meteorological drought weakens the operation of the whole Energo-Retencja system, yet the more water was previously stored in the ground, the smaller the drop in the river's level and flow. Exactly: the river acts like a drain, so the more water it has to drain from the surrounding land, the more slowly it falls in the absence of rain.

I assume that this system can be optimized for a specific location and its conditions: during high water we store enough groundwater so that a meteorological drought disturbs the river's flow at that spot to a minimal degree, so that the local groundwater level and the locally generated hydroelectricity are as predictable as possible. That is my assumption as a layman in hydrology. I would need the support of someone who really knows this field. One thing is visible, though: from the energy point of view, the system should be equipped with batteries storing electricity. High water builds up reserves which we use when the water is lower.

I am thinking about how building such infrastructure in rural areas could look, and especially about financing it and managing it. My first association is that if the whole thing is to generate electricity, then the market value of that electricity could be the basis for the return on investment. I think like this: the water that falls from the sky in millimetres per year actually falls in litres per square metre. In Poland, 600 millimetres of precipitation fall annually, i.e. 600 litres per square metre, i.e. 187,600 trillion litres in total over all those square metres. With our current water management we retain 28.57% of that precipitation, i.e. 53,600 trillion litres. Incidentally, the capacity to retain water varies a lot across Europe. We are somewhere at the back of the pack. It can be worse, e.g. retention of barely 11% of precipitation in Hungary, but it can also be much better: 42.75% in Germany or 44.79% in Estonia. At the current state of hydrology, the capacity to retain water depends mainly on geology: the champions of retention are the Mediterranean countries lying on a thick, rocky base with plenty of water-bearing pockets inside. Italy retains 72.80% of its precipitation, Greece 67.41%, Croatia 59.86% (FAO AQUASTAT[1]).

So let's assume that new infrastructure for retaining rainwater allowed us, in Poland, to move from the current 28.57% retention to 42%, i.e. almost the German level. That would mean retaining an additional 25,192 trillion litres per year. What value does that water represent on the electricity market? What counts here, first of all, is the flow in litres per second. A year typically has 8,760 hours (8,784 in leap years) times 60 seconds per hour, i.e. 525,600 seconds. The increased retention would thus give us 25,192 trillion litres divided by 525,600 seconds = 47.93 billion litres per second. For that flow to generate electricity, it has to flow. Logical. Water flows from high to low, with gravitational acceleration g = 9.81 m/s². What matters is how far from high to low, i.e. what the difference in levels is, in metres. In my idea, I picture elevated tanks built as water towers 10 metres high (i.e. the bottom of the tank sits at a height of 10 metres). Water turbines have an average energy efficiency of 75%. So I take 47.93 billion litres, multiply by 10 metres, then by 9.81 m/s², and then by 75%. Out comes electric power in watts, specifically 3,526.45 gigawatts. A lot. Especially since the current total capacity of all power plants in Poland is 43.6 gigawatts, of which hydroelectric plants account for 2.3 gigawatts.
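
For transparency, here is the power formula behind that last multiplication, written out as a tiny script. The head and efficiency are the assumptions used throughout this post, and the flow is the figure stated above; nothing here is measured data.

```python
G = 9.81           # gravitational acceleration, m/s^2
HEAD = 10.0        # metres, height of the elevated tanks assumed in the text
EFFICIENCY = 0.75  # average efficiency assumed for small water turbines

def hydro_power_watts(flow_litres_per_second, head_m=HEAD, efficiency=EFFICIENCY):
    # 1 litre of water weighs roughly 1 kg, so litres/s ~ kg/s of mass flow
    return flow_litres_per_second * G * head_m * efficiency

flow = 47.93e9                                  # litres per second, as stated above
print(hydro_power_watts(flow) / 1e9, "GW")      # roughly the 3,526 GW quoted above
```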

Nice. Looks like science fiction. If we managed to create infrastructure allowing us to raise our retention of rainwater to the German level – 42% – and if we passed all that additional water through electric turbines from 10-metre elevated tanks, we would have eighty times more electricity than we currently get from all sources combined. Let me do a slightly more modest calculation: how much more rainwater should we retain and run through hydroelectric turbines to cover our country's entire energy demand? According to World Bank data, the statistical Pole uses annually the energy equivalent of 2,490.2 kg of oil equivalent, i.e. 2,490.2 * 11.63 = 28,961.03 kilowatt-hours. So if I wanted to run everything on electricity (meaning everything I already run on electricity, plus what I now run on gas and petrol), I would need 28,961.03 kilowatt-hours a year divided by 8,760 hours in a year = 3.31 kilowatts of electric power. You too, and he too. They too. There are 38,430,000 of us in total, so we would need 127.05 gigawatts of power. Now I count backwards. I multiply those gigawatts by a billion – to get watts – and by 525,600 seconds in a year, then divide by 98.1 (10 metres times 9.81 metres per second squared) and by the 75% efficiency. Out come 907.6 million cubic metres of water. That is 0.48% of the annual precipitation in Poland. Put differently: if we built infrastructure enabling us to retain 29.06% of precipitation instead of today's 28.57%, and if we passed that additional retained water through 10-metre elevated tanks and then let it down through electric turbines, we would satisfy the country's entire energy demand. All energy, including the energy embedded in the products and services I buy.

I like to knead my ideas. Other people's too, for that matter. I look at the same thing from different points of view. I have read that a typical elevated tank « on legs », i.e. one built in an open field, has a capacity of about 5,000 m³. Those 907,600,000 m³ of water needed to cover our energy needs amount to 181,520 such tanks. If we made them 20 metres high instead of 10, we would need only 454,000,000 m³ of water and slightly over 90,000 such tanks. In other words, the taller the towers of the elevated tanks in « Energo-Retencja », the more electricity can be made from the same amount of water.

Now I knead from another side. I take my electricity bills for two locations: a detached house and a flat in an apartment block. Both are in the same tariff group, G11, except that in the house I have 14 kilowatts of contracted capacity and three-phase power, and in the flat 4 kilowatts and a single phase. When paying for electricity, I pay for two things: for the supplier's (Tauron's) readiness to deliver power, i.e. for distribution, and for the energy actually used. Although both meters are in the same tariff group, the price for 1 kilowatt-hour of energy used differs slightly: 0.24450 zł net per 1 kWh on the single phase in the flat, and 0.24110 zł on the three phases in the house. As for Tauron's readiness to sell me electricity, it declines into many grammatical cases: a variable distribution fee, a fixed distribution fee, and so on. To make it funnier, the variable distribution fee is fixed, at 0.19020 zł per 1 kWh, while the fixed distribution fee is variable and swings between 8.34 zł and 22 zł per billing period. I add up all those fixed-variable components of my invoice and it turns out that in the detached house, with 14 kilowatts of contracted capacity on the meter, I pay 30.20 zł per billing period to maintain 1 kilowatt, whereas in the flat it comes to 25.62 zł per kilowatt.

And when I kneaded my electricity bills together with my « Energo-Retencja » idea, I got a dough out of which I now shape smart contracts on a crowdfunding platform. Let's imagine that instead of paying the electricity supplier for its readiness to deliver me power from a distant power plant, I pay for successive participation units (shares?) in local « Energo-Retencja » infrastructure. A large company – Tauron, PGE, Energa or someone else – invests in creating and starting up such infrastructure in my area. The investment takes the form of a separate, local company. Then the initial investor offers me, and others in the area, the purchase of composite contracts in which we pay for electricity in advance – e.g. I pay upfront for next year's electricity – and at the same time we pay for successive participation units in the local company. In this way the local community gradually takes over capital control of the local company operating the « Energo-Retencja » infrastructure. The contracts are meant to be smart, i.e. possible to assemble flexibly from small building blocks and sold through a crowdfunding platform. If I am interested, I can put together a basket of orders for future electricity and a basket of shares in the company that generates it.
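
To make the idea of such a composite contract a bit more tangible, here is a minimal sketch of how it could be represented as a data structure. Every name, price and quantity in it is a hypothetical placeholder, not a real platform, tariff or offer.

```python
from dataclasses import dataclass

@dataclass
class EnergyRetentionContract:
    prepaid_kwh: float        # electricity bought in advance for the coming year
    price_per_kwh: float      # PLN per kWh (hypothetical)
    participation_units: int  # units of equity in the local operating company
    unit_price: float         # PLN per participation unit (hypothetical)

    def total_payment(self) -> float:
        # the buyer pays in one go for future electricity and for equity
        return (self.prepaid_kwh * self.price_per_kwh
                + self.participation_units * self.unit_price)

contract = EnergyRetentionContract(prepaid_kwh=4000, price_per_kwh=0.45,
                                   participation_units=10, unit_price=50.0)
print(contract.total_payment())
```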

All right, now a glance at how that company would look in concrete terms, starting from the technological and infrastructural side. Back to my electricity bill for the detached house: a household of three people, 14 kilowatts of contracted capacity across three phases. In a three-month billing period we used 1,838 kWh of electricity, which would make 4 * 1,838 = 3,676 kWh per year. With 14 kilowatts of contracted capacity, pushing it flat out, we have at our disposal 14 kW * 8,760 hours = 122,640 kWh per year. Our actual consumption thus corresponds to roughly 3% of the energy theoretically available from that connection; if we used electricity perfectly evenly, we would need only 0.42 kW of power. If we did, that is – but the vacuum cleaner runs only sometimes, the washing machine and the vacuum cleaner together even more rarely, and the electric oven at the same time as both of them hardly ever. The contracted capacity must nevertheless be sufficient for those rare moments when everything runs at once. We have a gas cooker, but if we had an electric hob, it would need at least 7.5 kilowatts. Our boiler is also gas-fired, but if we replaced it with an electric one, at least another 3-4 kW would have to be added.

Our energy systems work exactly this way: we use only a fraction of their available capacity, but we need a buffer for certain occasions. A few paragraphs earlier I wrote that in Poland energy consumption per capita is 28,961.03 kilowatt-hours a year, covering all forms of energy: what is used at home, what is needed for various forms of transport, and finally what I use indirectly when buying a new shirt or a pack of salty sticks. Where is the energy in a shirt? In the process of making and distributing it. Likewise in the salty sticks. In my three-person household we could theoretically use, for our own domestic needs, 122,640 kWh of electricity, i.e. 122,640 kWh / (3 * 28,961.03 kWh) = 1.41 times more energy than three statistical people use in total. Yet we use only 3% of it.

So when I think about a local company operating according to the « Energo-Retencja » concept and wonder about its role for the local community – beyond improving local water management – two possible contractual setups come to mind. The first is classical and corresponds to today's realities. The turbines and batteries of « Energo-Retencja » are connected to the distribution grid, they sell energy into that grid, and the grid operator resells that energy to end users. The energy generated in « Energo-Retencja » is one of many kinds distributed in the grid. Advantages: stable supply for my electric oven, and the possibility of carrying energy over long distances through the distribution grid. Our local hydroelectric turbines can in this way have economic value for energy consumers even hundreds of kilometres away. Disadvantages: the need to add to the price of energy a margin that pays for distribution.

The second possible setup is electricity from « Energo-Retencja » as the main source of power for the local community. In that case, all (or nearly all) of those 14 or however many kilowatts at each individual connection point would come from « Energo-Retencja ». Since the small hydroelectric turbines envisaged in my idea generate power at relatively low voltage, the local distribution network would be small and mostly low-voltage. Advantages: distribution costs close to zero (de facto no room for a distribution company) and a kind of technological bond between the local community and the local « Energo-Retencja » company. Disadvantages: exposure to the risk of a possible failure of such a local system.


[1] http://www.fao.org/aquastat/en/ last accessed 1 August 2019


[1] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[2] Paprotny, D., Sebastian, A., Morales-Nápoles, O., & Jonkman, S. N. (2018). Trends in flood losses in Europe over the past 150 years. Nature communications, 9(1), 1985.

[3] Naumann, G., et al. (2015). Assessment of drought damages and their uncertainties in Europe. Environmental Research Letters, 10, 124013. DOI: https://doi.org/10.1088/1748-9326/10/12/124013

[4] Vogt, J.V., Naumann, G., Masante, D., Spinoni, J., Cammalleri, C., Erian, W., Pischke, F., Pulwarty, R., Barbosa, P., Drought Risk Assessment. A conceptual Framework. EUR 29464 EN, Publications Office of the European Union, Luxembourg, 2018. ISBN 978-92-79-97469-4, doi:10.2760/057223, JRC113937

[5] Stagge, J. H., Kingston, D. G., Tallaksen, L. M., & Hannah, D. M. (2017). Observed drought indices show increasing divergence across Europe. Scientific reports, 7(1), 14045.

[6] Lu, J., Carbone, G. J., & Grego, J. M. (2019). Uncertainty and hotspots in 21st century projections of agricultural drought from CMIP5 models. Scientific reports, 9(1), 4922.

[7] Webber, H., Ewert, F., Olesen, J. E., Müller, C., Fronzek, S., Ruane, A. C., … & Ferrise, R. (2018). Diverging importance of drought stress for maize and winter wheat in Europe. Nature communications, 9(1), 4249.

[8] Toreti, A., Cronie, O., & Zampieri, M. (2019). Concurrent climate extremes in the key wheat producing regions of the world. Scientific reports, 9(1), 5493.

[9] https://ec.europa.eu/eurostat/statistics-explained/index.php/Agricultural_production_-_crops last accessed 14 July 2019

[10] Schils, R., Olesen, J. E., Kersebaum, K. C., Rijk, B., Oberforster, M., Kalyada, V., … & Manolov, I. (2018). Cereal yield gaps across Europe. European journal of agronomy, 101, 109-120.

[11] Jiang, Y., Zevenbergen, C., & Ma, Y. (2018). Urban pluvial flooding and stormwater management: A contemporary review of China’s challenges and “sponge cities” strategy. Environmental science & policy, 80, 132-143.

[12] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

The mind-blowing hydro

My editorial on You Tube

There is that thing about me: I am a strange combination of consistency and ADHD. If you have ever read one of Terry Pratchett’s novels from the ‘Discworld’ series, you probably know the fictional golems: made of clay, with a logical structure – a ‘chem’ – put in their heads, they can work on something endlessly. In my head, there are chems, which just push me to do things over and over and over again. Writing and publishing on this research blog is very much along those lines. I can stop whenever I want, I just don’t want to right now. Yet, when I do a lot about one chem, I start craving another one, something nearby but not quite in the same intellectual location.

Right now, I am working on two big things. Firstly, I feel like drawing a provisional bottom line under those two years of science writing on my blog. Secondly, I want to put together an investment project that would help my city, my country and my continent – thus Krakow, Poland, and Europe – to face one of the big challenges resulting from climate change: water management. Interestingly, I started to work on the latter first, and only then did I begin to phrase out the former. Let me explain. As I work on that project of water management, which I provisionally named « Energy Ponds » (see, for example, « All hope is not lost: the countryside is still exposed »), I use the « Project Navigator », made available courtesy of the International Renewable Energy Agency (IRENA). The logic built into the « Project Navigator » makes me return, over and over again, to one central question: ‘You, Krzysztof Wasniewski, with your science and your personal energy, how are you aligned with that idea of yours? How can you convince other people to put their money and their personal energy into developing your concept?’.

And so I am asking myself: ‘What’s your science, bro? What can you get people interested in, with rational grounds and intelligible evidence?’.

As I think about it, my first basic claim is that we can do it together in a smart way. We can act as a collective intelligence. This statement can be considered a manifestation of the so-called “Bignetti model” in cognitive sciences (Bignetti 2014[1]; Bignetti et al. 2017[2]; Bignetti 2018[3]): for the last two years, I have been progressively centering my work around the topic of collective intelligence, without even being quite aware of it. As I was working on another book of mine, entitled “Capitalism and Political Power”, I came across a puzzling quantitative fact: as a civilization, we have more and more money per unit of real output[4], and, as I reviewed some literature, we seem not to understand why this is happening. Some scholars complain about the allegedly excessive ‘financialization of the economy’ (Krippner 2005[5]; Foster 2007[6]; Stockhammer 2010[7]), yet, besides easy generalizations about ‘greed’ or an ‘unhinged race for profit’, no scientifically coherent explanation of this phenomenon is on offer.

As I was trying to understand this phenomenon, shades of correlations came into my focus. I could see, for example, that the growing amount of money per unit of real output has been accompanied by a growing amount of energy consumed per person per year in the global economy[8]. Do we convert energy into money, or the other way around? How can this be happening? In 2008, the proportion between the global supply of broad money and the global real output passed the magical threshold of 100%. Intriguingly, the same year, the share of urban population in the total human population passed the threshold of 50%[9], and the share of renewable energy in the total final consumption of energy, at the global scale, took off for the first time since 1999 and has kept growing since[10]. I started having that diffuse feeling that, as a civilization, we are really up to something right now, and that money is acting like a social hormone, facilitating change.

We change as we learn, and we learn as we experiment with the things we invent. How can I represent, in a logically coherent way, collective learning through experimentation? When an individual, or a clearly organized group, learns through experimentation, the sequence is pretty straightforward: we phrase out an intelligible definition of the problem to solve, we invent various solutions, we test them, we sum up the results, we select the seemingly best solution among those tested, and we repeat the whole sequence. As I kept digging into the topic of energy, technological change, and the velocity of money, I started formulating the outline of a complex hypothesis: what if we, humans, are collectively intelligent about building, purposefully and semi-consciously, social structures supposed to serve as vessels for future collective experiments?

My second claim is that one of the smartest things we can do about climate change is, besides reducing our carbon footprint, to take proper care of our food and energy base. In Europe, climate change is mostly visible as a complex disruption of our water system, and we can observe it in our local rivers. That’s the thing about Europe: we have built our civilization, on this tiny, mountainous continent, in close connection with rivers. Right, I could call them, scientifically, ‘inland waterways’, but I think that when I say ‘river’, anybody who reads it understands intuitively. Anyway, what we call today ‘the European heritage’ has grown next to EVENLY FLOWING rivers. Once again: evenly flowing. It means that we, Europeans, are used to seeing the neighbouring river as a steady flow. Streams and creeks can overflow after heavy rains, and rivers can swell, but all that stuff has been happening, for centuries, in a very recurrent, predictable way.

Now, with the advent of climate change, we can observe three water-related phenomena. Firstly, as the English saying goes, it never rains but it pours. The steady rhythm and predictable volume of precipitation we are used to in Europe (mostly in the Northern part) progressively gives ground to sudden downpours, interspersed with periods of drought that are hardly predictable in their length. First moral of the fairy tale: if we have less and less of the kind of water that falls from the sky slowly and predictably, we need to learn how to capture and retain the kind of water that falls abruptly, unscheduled. Secondly, just as we have somehow adapted to the new kind of sudden floods, we have a big challenge ahead: droughts are already impacting, directly and indirectly, the food market in Europe, but we don’t have enough science yet to accurately predict either their occurrence or their local impact. Yet there is already one emerging pattern: whatever happens, i.e. floods or droughts, rural populations in Europe suffer more than urban ones (see my review of literature in « All hope is not lost: the countryside is still exposed »). Second moral of the fairy tale: whatever we do about water management in these new conditions in Europe, we need to take care of agriculture first, and thus create new infrastructures to shield farms against floods and droughts, with cities coming next in line.

Thirdly, the most obviously observable manifestation of floods and droughts is variation in the flow of local rivers. By the way, that variation is already impacting the energy sector: when there is too little flow in European rivers, we need to scale down the output of power plants, as they do not have enough water to cool themselves. Rivers are drainpipes of the neighbouring land. A steady flow in a river is closely correlated with a steady level of water in the ground, both in the soil and in the mineral layers underneath. Third moral of the fairy tale: if we figure out workable ways of retaining as much rainfall in the ground as possible, we can prevent all three disasters at the same time, i.e. local floods, droughts, and economically adverse variations in the flow of local rivers.

I keep thinking about that ownership-of-the-project thing I need to cope with when using the « Project Navigator » by IRENA. How to make local communities own, as much as possible, both the resources needed for the project and its outcomes? Here, precisely, I need to use my science, whatever it is. People at IRENA have experience with such projects, which I haven’t. I need to squeeze my brain and extract from it any useful piece of coherent understanding, to replace experience. I am advancing step by step. I intuitively associate ownership with property rights, i.e. with a set of claims on something – things or rights – together with a set of liberties of action regarding those same things or rights. Ownership on the part of a local community means that claims and liberties should be, in a way, pooled, and the best idea that comes to my mind is an investment fund. Here, a word of explanation is due: an investment fund is a general concept, whose actual, institutional embodiment can take the shape of a strictly speaking investment fund, for one, yet other legal forms are possible, such as a trust, a joint stock company, a crowdfunding platform, or even a cryptocurrency operating in a controlled network. The general concept of an investment fund consists in taking a population of investors and making them pool their capital resources over a set of entrepreneurial projects, via the general legal construct of participatory titles: equity-based securities, debt-based ones, insurance, futures contracts, and combinations thereof. Mind you, governments are investment funds too, as regards their capacity to move capital around. They somehow express the interest of their respective populations in a handful of investment projects; they take those populations’ tax money and spread it among said projects. That general concept of an investment fund is a good expression of collective intelligence. As for that social structure for collective experimentation, which I mentioned a few paragraphs ago, an investment fund is an excellent example: it allows spreading resources over a number of ventures considered as local experiments.

Now, I am dicing a few ideas for a financial scheme, based on the general concept of an investment fund, as collectively intelligent as possible, in order to face the new challenges of climate change through new infrastructures for water management. I start by reformulating the basic technological concept. Water-powered water pumps are immersed in the stream of a river. They use the kinetic energy of that stream to pump water up and further away, more specifically into elevated water towers, from which that water falls back to ground level; as it flows down, it powers relatively small hydroelectric turbines, and it ends up in a network of ponds, vegetal complexes and channel-like ditches, all made with the purpose of retaining as much water as possible. Those structures can be connected to others, destined directly to capture rainwater. I have been thinking about two setups, respectively for rural environments and for urban ones. In the rural landscape, those ponds and channels can be profiled so as to collect rainwater from the surface of the ground and conduct it into its deeper layers, through some system of inverted draining. I think it would be possible, under proper geological conditions, to reverse-drain rainwater into deep aquifers, which the neighbouring artesian wells can tap into. In the urban context, I would like to know more about those Chinese technologies used in their Sponge Cities programme (see Jiang et al. 2018[11]).

The research I have done so far suggests that relatively small, local projects work better, for implementing this type of technology, than big, national-scale endeavours. Of course, national investment programmes will be welcome as indirect support, but at the end of the day we need a local community owning a project, possibly through an investment-fund-like institutional arrangement. The economic value conveyed by any kind of participatory title in such a capital structure sums up to the Net Present Value of three cash flows: net proceeds from selling hydroelectricity produced in small water turbines, the reduction of the aggregate flood-related risk, and the reduction of the drought-related risk. I separate risks connected to floods from those associated with droughts, as they are different in nature. In economic and financial terms, floods are mostly a menace to property, whilst droughts materialize as more volatile prices of food and basic agricultural products.

In order to apprehend accurately the Net Present Value of any cash flow, we need to set a horizon in time. Very tentatively, by interpreting data from 2012 presented in a report published by IRENA (the same IRENA), I assume that relatively demanding investors in Europe expect a full return on their investment within 6.5 years, which I round up to 7 years for the sake of simplicity. Now, I go a bit off the beaten tracks, at least those I have beaten so far. I am going to take the total atmospheric precipitation falling on various European countries, which means rainfall + snowfall, and then try to simulate what amount of ‘NPV = hydroelectricity + reduction of risk from floods and droughts’ (7 years) the retention of that water could represent.
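Just to fix ideas, here is a minimal sketch, in Python, of that valuation over the 7-year horizon. The yearly amounts and the discount rate are purely illustrative assumptions, not estimates of any real project.

```python
# A minimal sketch of the valuation logic above: the NPV, over a 7-year horizon,
# of the three cash flows attached to a participatory title. All the yearly
# amounts and the discount rate are illustrative assumptions, not estimates.
def npv(cash_flows: list[float], discount_rate: float) -> float:
    """Net Present Value of a series of end-of-year cash flows."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))

HORIZON_YEARS = 7
discount_rate = 0.08                 # assumed, roughly consistent with a 6.5-year expected payback

yearly_hydro_sales = 40_000.0        # net proceeds from selling hydroelectricity
yearly_flood_risk_cut = 25_000.0     # reduction of the aggregate flood-related risk
yearly_drought_risk_cut = 15_000.0   # reduction of the drought-related risk

yearly_total = yearly_hydro_sales + yearly_flood_risk_cut + yearly_drought_risk_cut
project_value = npv([yearly_total] * HORIZON_YEARS, discount_rate)
print(round(project_value, 2))
```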

Let’s waltz. I take data from FAOSTAT regarding precipitation and water retention. As a matter of fact, I made a query of that data regarding a handful of European countries. You can have a look at the corresponding Excel file UNDER THIS LINK. I rearranged the data from this Excel file a bit, so as to have a better idea of what could happen if those European countries I have on my list, my native Poland included, built infrastructures able to retain 2% of the annual rainfall. The coefficient of 2% is vaguely based on what Shao et al. (2018[12]) give as the target retention coefficient for the city of Xiamen, China, and their Sponge-City-type investment. I used the formulas I had already phrased out in « Sponge Cities », and in « La marge opérationnelle de $1 539,60 par an par 1 kilowatt », to estimate the amount of electricity possible to produce out of those 2% of annual rainfall elevated, according to my idea, into 10-metre-high water towers. On top of all that, I added, for each country, data regarding the already existing capacity to retain water. All those rearranged numbers, you can see them in the Excel file UNDER THIS OTHER LINK (a table would be too big for inserting into this update).
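A minimal sketch of that estimate, for one hypothetical country, looks as follows. The precipitation and surface figures are placeholders, not the FAOSTAT numbers from the Excel file, and conversion losses are ignored, as in my simplified formula.

```python
# A minimal sketch of the estimate described above: the electricity obtainable
# from 2% of a country's annual precipitation, elevated into 10-metre water
# towers and released through small turbines. Precipitation and area figures
# are illustrative placeholders; turbine efficiency is ignored, as in the text.
G = 9.81          # gravitational acceleration, m/s^2
HEAD_M = 10.0     # height of the water towers
RETENTION = 0.02  # the 2% retention coefficient borrowed from Shao et al. (2018)

def annual_energy_gwh(precip_mm_per_year: float, area_km2: float) -> float:
    """Potential energy of the retained water dropped over HEAD_M, in GWh per year."""
    water_m3 = precip_mm_per_year / 1000.0 * area_km2 * 1_000_000.0 * RETENTION
    energy_joules = water_m3 * 1000.0 * G * HEAD_M   # mass in kg, times g, times head
    return energy_joules / 3.6e12                    # joules -> gigawatt hours

# Illustrative order-of-magnitude check for a Poland-sized country
print(round(annual_energy_gwh(precip_mm_per_year=600.0, area_km2=312_700.0), 1))
```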

The first provisional conclusion I have to make is that I need to revise completely my provisional conclusion from « Sponge Cities », where I claimed that hydroelectricity would have no chance to pay for any significant investment in sponge-like structures for retaining water. The calculations I have just run show just the opposite: as soon as we consider whole countries as rain-retaining basins, the hydroelectric power, and the cash flow dormant in that water, are just mind-blowing. I think I will need a night of sleep just to check the accuracy of my calculations.

Disturbing as they are, my calculations have another facet. I compare the postulated 2% retention of annual precipitation with the already existing capacity of these national basins to retain water. That capacity is measured, in that second Excel file, by the ‘Coefficient of retention’, which is the ‘Total internal renewable water resources (IRWR)’ divided by the annual precipitation, both in 10^9 m3/year. My basic observation is that, across European countries, the capacity to retain water is dispersed very similarly to the intensity of precipitation, measured in mm per year. Both coefficients vary in a similar proportion, i.e. their respective standard deviations make around 0.4 of their respective means, across the sample of 37 European countries. When I measure it with the Pearson coefficient of correlation between the intensity of rainfall and the capacity to retain it, it yields r = 0.63. In general, the more water falls from the sky per 1 m2, the greater the percentage of that water which is retained, as it seems. Another provisional conclusion I make is that the capacity to retain water, in a given country, is some kind of response, possibly both natural and man-engineered, to a relatively big amount of water falling from the sky. It looks as if our hydrological structures, in Europe, had been built to do something with water we momentarily have plenty of, possibly even too much of, and which we should save for later.
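For the record, this is the kind of simple check behind those two numbers. The CSV file and its column names are hypothetical stand-ins for the rearranged FAOSTAT spreadsheet.

```python
# A minimal sketch of the two dispersion checks above: the coefficient of
# variation (standard deviation over mean) of rainfall intensity and of the
# retention coefficient, plus the Pearson correlation between them.
import pandas as pd
from scipy import stats

df = pd.read_csv("precipitation_retention_europe.csv")  # hypothetical file, 37 European countries

rainfall = df["precipitation_mm_per_year"]
retention = df["irwr_to_precipitation"]  # IRWR divided by annual precipitation

for name, series in [("rainfall", rainfall), ("retention", retention)]:
    print(name, "coefficient of variation:", round(series.std() / series.mean(), 2))

r, p = stats.pearsonr(rainfall, retention)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```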

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggesting two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Bignetti, E. (2014). The functional role of free-will illusion in cognition:“The Bignetti Model”. Cognitive Systems Research, 31, 45-60.

[2] Bignetti, E., Martuzzi, F., & Tartabini, A. (2017). A Psychophysical Approach to Test:“The Bignetti Model”. Psychol Cogn Sci Open J, 3(1), 24-35.

[3] Bignetti, E. (2018). New Insights into “The Bignetti Model” from Classic and Quantum Mechanics Perspectives. Perspective, 4(1), 24.

[4] https://data.worldbank.org/indicator/FM.LBL.BMNY.GD.ZS last access July 15th, 2019

[5] Krippner, G. R. (2005). The financialization of the American economy. Socio-economic review, 3(2), 173-208.

[6] Foster, J. B. (2007). The financialization of capitalism. Monthly Review, 58(11), 1-12.

[7] Stockhammer, E. (2010). Financialization and the global economy. Political Economy Research Institute Working Paper, 242, 40.

[8] https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE last access July 15th, 2019

[9] https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS last access July 15th, 2019

[10] https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS last access July 15th, 2019

[11] Jiang, Y., Zevenbergen, C., & Ma, Y. (2018). Urban pluvial flooding and stormwater management: A contemporary review of China’s challenges and “sponge cities” strategy. Environmental science & policy, 80, 132-143.

[12] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

La marge opérationnelle de $1 539,60 par an par 1 kilowatt

My editorial on You Tube

So, I am changing my bearings a bit. In « All hope is not lost: the countryside is still exposed » I presented a literature review about the risks connected to floods and droughts in Europe. It appears that these risks are very different from what I thought they were. Which goes to show that it is good not to give in to collective hysteria, and to study patiently the science we have at our disposal. I am therefore coming back, with some nuance, to the claims I made in « Le cycle d’adaptation ». I had written that urban infrastructures in Europe are perfectly adapted to climatic conditions which no longer exist: now I qualify that claim. Yes, European cities need to adapt to climate change, but they are already adapting. On the other hand, the major part of human and material losses due to floods and droughts occurs outside the big cities, in rural areas. Drought hits farmers well before it hits city dwellers. By the time urban residents see water running short in their taps, farmers are already tallying the losses due to harvests smaller than usual.

The Project Navigator, accessible through the website of the International Renewable Energy Agency, made me think about the common goals around which local communities in Europe can organise themselves to develop projects such as my concept of Energy Ponds (« Étangs Énergétiques »). Now, after a literature review, I think that a rational goal is to build water-management infrastructures, to store rainwater as well as to produce and store hydroelectricity, in rural regions, in order to protect agriculture and, indirectly, to protect the hydrological resources of cities.

You can read in « All hope is not lost: the countryside is still exposed » that the scientific literature does not quite agree on the risks connected to drought in Europe. Still, science has its methodological limits: it can state something for sure only if the empirical data are abundant and clear enough to test the hypotheses statistically, as they should be tested. The empirical data we have about droughts in Europe, and about their economic effects, suffer from a perverse side effect of our capacity to adapt. Let me explain. For a truly rigorous statistical proof, the distributions of local errors in the individual variables need to be mutually independent (i.e. no significant correlation between the estimation errors of variable A and those of variable B) and random, hence dispersed at least as widely as the normal distribution suggests. The error in estimating the residual moisture of the soil, for example, should be random and independent from the error in estimating the wheat harvest. Well, if we are to believe Webber et al. (2018[1]), this is not the case: databases which cross meteorological and hydrological data with agricultural data yield significant correlations between estimation errors after linear regression of one variable on the others. Why? My own intuitive explanation is that we, humans, react fast when our food base is threatened. We react so fast, by modifying our agricultural technologies, that we induce correlation between climate and harvest.
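To make that requirement concrete, here is a minimal sketch of the independence check in question: regress both variables on the same controls and test whether their residuals are correlated. The file name and column names are hypothetical; only the logic reflects the argument.

```python
# A minimal sketch of the independence check described above. The dataset,
# column names and file 'climate_agri.csv' are hypothetical; the logic
# (regress, take residuals, test their correlation) is the point.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("climate_agri.csv")  # hypothetical panel: soil moisture, wheat yield, controls

# Residuals of soil moisture regressed on the controls
X = np.column_stack([np.ones(len(df)), df["temperature"], df["precipitation"]])
beta_m, *_ = np.linalg.lstsq(X, df["soil_moisture"], rcond=None)
resid_moisture = df["soil_moisture"] - X @ beta_m

# Residuals of wheat yield regressed on the same controls
beta_y, *_ = np.linalg.lstsq(X, df["wheat_yield"], rcond=None)
resid_yield = df["wheat_yield"] - X @ beta_y

# If the errors were truly independent, this correlation should be indistinguishable from zero
r, p_value = stats.pearsonr(resid_moisture, resid_yield)
print(f"correlation of residuals: r = {r:.2f}, p = {p_value:.3f}")
```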

When scientific rigour fails us, it is a good idea to turn towards more elementary, more anecdotal observation. I review the news from the agricultural market. In my country, Poland, the fruit harvest threatens to be 30% lower than forecast in May[2]. The cereal harvest may drop by anything between 8% and as much as 40% compared to last year, depending on the exact region of the country[3]. In France, according to Europe 1, drought alerts in agriculture have become something normal[4]. I move on to the prices of futures contracts on basic agricultural commodities. Wheat, MATIF contracts, hence the European market, has been agitated this year. The trend of the last few weeks is upwards, as if traders anticipated a supply deficit in Europe. MATIF contracts on maize show pretty much the same tendency. On the other hand, CBOT contracts on wheat, issued by the CME Group and based on the American market, show a more decidedly ascending trend in the long run, although a descending one in the short term. Ah, I have just checked the latest CBOT prices at https://www.barchart.com/futures/quotes/ZW*0/futures-prices: they are climbing this morning. So here is how I pin down the risk that corresponds to drought in Europe: it is the risk of growing volatility in agricultural prices. If I want to approach this risk analytically, I can try to estimate, for example, the market value of a hypothetical financial instrument – such as a futures contract or an option – which pays out when prices stay within the desired interval, and brings losses when prices move outside that interval.
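As a very rough illustration of that valuation idea, here is a minimal Monte Carlo sketch of such a corridor-like instrument, one that pays when the price ends inside a band and loses when it ends outside. Every number in it (price, volatility, band, payouts, rate) is an assumption for illustration, not market data.

```python
# A minimal sketch of pricing the hypothetical corridor-like instrument mentioned
# above, under a simple lognormal assumption for the terminal price. All figures
# are illustrative assumptions; only the structure of the valuation matters here.
import numpy as np

rng = np.random.default_rng(42)

spot = 180.0           # assumed wheat price, EUR per tonne
sigma = 0.25           # assumed annualised volatility
horizon = 1.0          # one year
band = (160.0, 200.0)  # the "desired interval" of prices
pay_inside = 10.0      # payout per tonne if the price ends inside the band
loss_outside = -15.0   # loss per tonne if it ends outside
rate = 0.02            # assumed discount rate

# Simulate terminal prices (lognormal with risk-neutral drift)
z = rng.standard_normal(100_000)
terminal = spot * np.exp((rate - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)

payoff = np.where((terminal >= band[0]) & (terminal <= band[1]), pay_inside, loss_outside)
value = np.exp(-rate * horizon) * payoff.mean()
print(f"estimated value of the corridor instrument: {value:.2f} EUR per tonne")
```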

I generalise the financial approach to my concept of Energy Ponds. I think that the investment which stands a chance of winning the support of social actors is the one whose Net Present Value, over a useful life of the infrastructure of m years, is equal to NPV(m) = sales of hydroelectricity(m) + reduction of flood-related risk(m) + reduction of drought-related risk(m). As for the revenue from selling electricity – let’s call it VE(m) – the calculation goes as follows: VE(m) = capacity in kilowatts * 365 days * 24 hours * market price of electricity = {flow per second in litres (or in kilograms of water, which amounts to the same) * gravitational acceleration g = 9.81 * head in metres / 1000} * 365 days * 24 hours * market price of electricity (see « Sponge Cities »). In my country, Poland, with 1 kilowatt hour bought at a total price of roughly $0.21, 1 kilowatt of generating capacity represents a revenue of: 8,760 hours in the year multiplied by $0.21 per kilowatt hour, which makes $1,839.60 per year.
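The same calculation, written as a minimal Python sketch; the flow and head in the example are assumptions, the formula itself follows the text above.

```python
# A minimal sketch of the revenue formula quoted above: P = flow * g * head / 1000,
# revenue = P * 8760 * price. The example flow and head are illustrative assumptions.
G = 9.81  # gravitational acceleration, m/s^2

def hydro_power_kw(flow_kg_per_s: float, head_m: float) -> float:
    """Electric capacity in kW of a small turbine fed by a given flow and head."""
    return flow_kg_per_s * G * head_m / 1000.0

def annual_revenue_usd(capacity_kw: float, price_per_kwh: float = 0.21) -> float:
    """Annual revenue assuming the turbine runs all 8760 hours of the year."""
    return capacity_kw * 365 * 24 * price_per_kwh

# Example: a 20 kg/s flow falling 10 metres, sold at the Polish retail price
capacity = hydro_power_kw(flow_kg_per_s=20.0, head_m=10.0)   # ~1.96 kW
print(round(capacity, 2), round(annual_revenue_usd(capacity), 2))
print(round(annual_revenue_usd(1.0), 2))  # the $1,839.60 per 1 kW quoted in the text
```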

As far as I could learn from a publication by IRENA, the investment needed in hydro-generation is roughly $1,500 to $3,000 per 1 kilowatt of capacity, at the global scale. This global average covers quite a wide range of investment per kilowatt, depending on the geographical region, the total capacity installed in the given facility, and the head of the corresponding watercourse. For reasons I have not yet studied in detail, the investment required per 1 kilowatt of capacity in installations classified as small varies the most in Europe, in comparison with other regions of the world. Starting from that general threshold of roughly $1,500, the investment required per 1 kilowatt can go as high as $8,000. Go figure why. That maximum ceiling is twice as high as what is reported in any other region of the world.

The natural head of the watercourse where the hydroelectric turbine is installed plays its role. In really flat places, where the only way to get some force into that flow of water is to pump it into elevated reservoirs, the investment for small turbines below 50 kilowatts is about $5,400 per kilowatt, as a global average. It drops quickly as the head goes from nearly zero towards, and above, 25 metres, and then it keeps falling ever more gently.

Apart from the required return on investment, the full cost of one kilowatt hour includes that of maintenance and operational management. According to the same IRENA report, this cost can reach, under rather pessimistic assumptions, some $300 per year per 1 kilowatt of installed capacity. After deducting this cost, the annual flow of revenue from electricity sales turns into a flow of operational margin equal to $1,839.60 – $300 = $1,539.60 per year. A few pages further, still in the same IRENA publication, I find that the levelized cost of energy, ‘LCOE’ for friends, can range in Europe between $0.05 and $0.17 per kilowatt hour. The cost of maintenance and operational management, which is part of the LCOE, is $300 per year per 1 kilowatt of installed capacity, divided by the 8,760 hours in the year, hence roughly $0.03 per kilowatt hour. Consequently, the ‘return on investment’ part of the LCOE can vary between $0.05 – $0.03 = $0.02 and $0.17 – $0.03 = $0.14 per kilowatt hour. I multiply that return on investment by 8,760 hours in the year, to obtain the annual return required on the investment in 1 kilowatt of capacity. That gives an interval between $175.20 and $1,226.40 per year. This gives me two important pieces of information. Firstly, the operational margin of $1,539.60 per year is enough to satisfy even the most demanding financial projections.

Secondly, long story short, as the Anglo-Saxons say, I take the most expensive investment possible, hence on my own continent (Europe), i.e. $8,000, and I divide it by that range of annual returns. It lands between $8,000/$1,226.40 and $8,000/$175.20, that is between 6.5 and 46 years. Well, let’s say that the 46 years are abstract art. In fact, anything beyond 20 years, in investments in energy generation, simply amounts to disregarding the return on investment strictly spoken. What interests me is the lower prong of the fork, namely the 6.52 years. I take this interval of time as a benchmark of the return expected by the most demanding investors. By the way, it is worth recalling something of a paradox here: the faster hydroelectric turbine technologies develop, the shorter the lifecycle of moral ageing in any specific technology, and thus the shorter the time allowed for returning the investment.
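For the sake of checking, here is a minimal sketch of that payback arithmetic; the input figures are the ones quoted from the IRENA report above, with the maintenance cost per kilowatt hour rounded to $0.03 as in the text.

```python
# A minimal sketch of the payback arithmetic above: operational margin, the
# return-on-investment slice of the LCOE, and the resulting payback interval.
HOURS_PER_YEAR = 8760

price_kwh = 0.21             # retail price of electricity, USD/kWh
o_and_m_per_kw_year = 300.0  # operating & maintenance cost, USD per kW per year
lcoe_range = (0.05, 0.17)    # levelized cost of energy in Europe, USD/kWh
capex_per_kw = 8000.0        # most expensive European case, USD per kW

margin_per_kw_year = price_kwh * HOURS_PER_YEAR - o_and_m_per_kw_year     # 1539.60
o_and_m_per_kwh = round(o_and_m_per_kw_year / HOURS_PER_YEAR, 2)          # rounded to 0.03, as in the text

payback_years = []
for lcoe in lcoe_range:
    roi_per_kwh = lcoe - o_and_m_per_kwh          # 0.02 or 0.14 USD/kWh
    annual_return = roi_per_kwh * HOURS_PER_YEAR  # 175.20 or 1226.40 USD/year
    payback_years.append(capex_per_kw / annual_return)

print(round(margin_per_kw_year, 2))                  # 1539.6
print([round(y, 1) for y in sorted(payback_years)])  # [6.5, 45.7]
```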

A partial conclusion I can draw from these calculations, as regards my Energy Ponds project, is that sales of the electricity produced in the hydroelectric turbines included in the planned infrastructure can be a clear motivation for potential investors, provided, however, that the size of the local investment stays in the tens of thousands of dollars rather than in the billions which the Chinese government is spending on the Sponge Cities programme.

I keep delivering good science, almost new, just a bit dented in the design process. I remind you that you can download the business plan of the BeFund project (also available in English). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself a financial base in this effort. You can support my research financially, according to your best judgment, through my PayPal account. You can also register as my patron on my Patreon account. If you do so, I will be grateful for indicating two important things to me: what kind of reward do you expect in exchange for your patronage, and what steps would you like to see in my work? You can contact me through the mailbox of this blog: goodscience@discoversocialsciences.com .


[1] Webber, H., Ewert, F., Olesen, J. E., Müller, C., Fronzek, S., Ruane, A. C., … & Ferrise, R. (2018). Diverging importance of drought stress for maize and winter wheat in Europe. Nature communications, 9(1), 4249.

[2] http://www.portalspozywczy.pl/owoce-warzywa/wiadomosci/zbiory-owocow-w-2019-roku-beda-nawet-o-30-procent-nizsze-niz-zwykle-wideo,173565.html last access July 16th, 2019

[3] http://www.portalspozywczy.pl/zboza/wiadomosci/swietokrzyskie-w-zwiazku-z-susza-zbiory-zboz-moga-byc-nizsze-nawet-o-40-proc,160018.html last access July 16th, 2019

[4] https://www.europe1.fr/societe/secheresse-pour-les-agriculteurs-les-restrictions-deau-sont-devenues-la-routine-3908427 last access July 16th, 2019

All hope is not lost: the countryside is still exposed

My editorial on You Tube

I am focusing on the possible benefits of transforming the urban structures of at least some European cities into sponge-like ones, such as described, for example, by Jiang et al. (2018) as well as in my recent updates on this blog (see Sponge Cities). In parallel to reporting my research on this blog, I am developing a corresponding project with the « Project Navigator », made available by the courtesy of the International Renewable Energy Agency (IRENA). Figuring out my way through the « Project Navigator » made me aware of the importance that social cohesion has in the implementation of such infrastructural projects. Social cohesion means a set of common goals, and an institutional context that allows the appropriation of outcomes. In « Sponge Cities », when studying the case of my hometown, Krakow, Poland, I came to the conclusion that sales of electricity from water turbines incorporated into the infrastructure of a sponge city could hardly pay off for the investment needed. On the other hand, significant reduction of the financially quantifiable risk connected to floods and droughts can be an argument. The flood-related risks especially, in Europe, already amount to billions of euros, and we seem to be just at the beginning of the road (Alfieri et al. 2015[1]). Shielding against such risks can possibly make a sound base for social cohesion, as a common goal. Hence, as I am structuring the complex concept of « Energy Ponds », I start with assessing risks connected to climate change in European cities, and the possible reduction of those risks through sponge-city-type investments.

I start with a comparative review of Alfieri et al. (2015[2]) as regards flood-related risks, on the one hand, and of Naumann et al. (2015[3]) as well as Vogt et al. (2018[4]) regarding drought-related risks, on the other. As a society, in Europe, we seem to be more at home with floods than with droughts. The former are something we kind of know historically, and with the advent of climate change we just acknowledge more trouble in that department, whilst the latter had been, until recently, something that happened essentially to other people on other continents. The very acknowledgement of droughts as a recurrent risk is a challenge.

Risk is a quantity: this is what I teach my students. It is the probability of occurrence multiplied by the magnitude of damage, should the s**t really hit the fan. Why adopt such an approach? Why not assume that risk is just the likelihood of something bad happening? Well, because risk management is practical. There is a point in bothering about risk only if we can do something about it: insure and cover, hedge, prevent etc. The interesting thing about it is that all human societies show a recurrent pattern: as soon as we organise somehow, we create something like a reserve of resources, supposed to provide for risk. We are exposed to a possible famine? Good, we make a reserve of food. We risk being invaded by a foreign nation/tribe/village/alien civilisation? Good, we make an army, i.e. a group of people trained and equipped for actions with no immediate utility, just in case. The nearby river can possibly overflow? Good, we dig and move dirt, stone, wood and whatnot so as to build stopbanks. In each case, we move along the same path: we create a pooled reserve of something, in order to minimize the long-term damage from adverse events.

Now, if we wonder how much food we need to have in stock in case of famine, sooner or later we come to the conclusion that it is the individual need for food multiplied by the number of people likely to be starving. That likelihood is not evenly distributed across the population: some people are more exposed than others. A farmer, with a few pigs and some potatoes in cultivation, is less likely to starve than a stonemason, busy building something and without the time or energy to produce food. Providing for the risk of flood works according to the same scheme: some structures and some people are more likely to suffer than others.

We apprehend flood and drought-related risks in a similar way: those risks amount to a quantity of resources we put aside, in order to provide for the corresponding losses, in various ways. That quantity is the arithmetical product of probability times magnitude of loss.    

Total risk is a complex quantity, resulting from events happening in causal, heterogeneous chains. A river overflows and destroys some property: this is direct damage, the first occurrence in the causal chain. Among the property damaged, there are garbage yards. As water floods them, it washes away, and further into the surrounding civilisation, all kinds of crap, literal crap included. The surrounding civilisation gets contaminated, and decontamination costs money: this is indirect damage, the second tier of the causal chain. Chemical and biological contamination by floodwater causes disruptions in the businesses involved, and those disruptions are costly, too: here goes the third tier of the causal chain, etc.
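To make the quantity tangible, here is a minimal sketch of how such a pooled reserve could be computed: probability times magnitude, summed over the tiers of the causal chain just described. The probabilities and loss figures are purely illustrative, not estimates for any real river.

```python
# A minimal sketch of risk as a quantity: probability of occurrence times
# magnitude of loss, summed over the tiers of the causal chain. The numbers
# below are illustrative assumptions only.
def expected_loss(events: list[tuple[str, float, float]]) -> float:
    """Sum of probability * magnitude over a list of (name, probability, loss) tuples."""
    return sum(p * loss for _, p, loss in events)

flood_chain = [
    ("direct damage to property",       0.02, 5_000_000.0),  # river overflows
    ("decontamination after the flood", 0.02, 1_200_000.0),  # indirect, second tier
    ("business disruption",             0.02,   800_000.0),  # third tier
]

reserve = expected_loss(flood_chain)  # the pooled reserve the community should set aside
print(round(reserve, 2))              # 140000.0
```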

I found some interesting insights, regarding the exposure to flood and drought-related risks in Europe, with Paprotny et al. (2018[5]). Firstly, this piece of research made me realize that floods and droughts do damage in very different ways. Floods are disasters in the most intuitive sense of the term: they are violent, and they physically destroy man-made structures. The magnitude of damage from floods results from two basic variables: the violence and recurrence of floods themselves, on the one hand, and the value of the human structures affected, on the other. In a city, a flood does much more damage because there is much more property to destroy. Out there, in the countryside, damages inflicted by floods change from disaster-type destruction into more lingering, long-term impediments to farming (e.g. contamination of farmed soil), as the density of man-made structures subsides. Droughts work insidiously. There is no spectacular disaster to be afraid of. Adverse outcomes build up progressively, sometimes year after year. Droughts also affect the countryside directly much more than the cities. It is rivers drying out first, and only in a second step cities experiencing disruptions in the supply of water, or of river-dependent electricity. It is farm soil drying out progressively, and farmers suffering some damage due to lower crops or increased costs of irrigation, and only then city dwellers experiencing higher prices for their average carrot or organic cereal bar. Mind you, there is one type of drought-related disaster which can sometimes directly affect our towns and cities: forest fires.

Paprotny et al. (2018) give some detailed insights into the magnitude, type, and geographical distribution of flood-related risks in Europe. Firstly, the ‘where exactly?’. France, Spain, Italy, and Germany are the most affected, with Portugal, England, Scotland, Poland, the Czech Republic, Hungary, and Romania following closely behind. As to the type of floods, France, Spain, and Italy are exposed mostly to flash floods, i.e. too much rain falling and not knowing where to go. Germany and virtually all of Central Europe, my native Poland included, are mostly exposed to river floods. As for the incidence of human fatalities, flash floods are definitely the most dangerous, and their impact seems to be the most serious in the second half of the calendar year, from July on.

Besides, the research by Paprotny et al. (2018) indicates that in Europe, we seem to be already on the path of adaptation to floods. Both the currently observed losses – human and financial – and their 10-year moving average had their peaks between 1960 and 2000. After 2000, Europe seems to have been progressively acquiring the capacity to minimize the adverse impact of floods, and this capacity seems to have developed in cities more than in the countryside. It truly gives a blow to one’s ego to learn that the problem one wants to invent a revolutionary solution to does not really exist. I need to return to that claim I made in the « Project Navigator », namely that European cities are perfectly adapted to a climate that no longer exists. Apparently, I was wrong: European cities seem to be adapting quite well to the adverse effects of climate change. Yet, all hope is not lost. The countryside is still exposed. Now, seriously. Whilst Europe seems to be adapting to a greater occurrence of floods, said occurrence is most likely to increase, as suggested, for example, in the research by Alfieri et al. (2017[6]). That sends us to the issue of limits to adaptation and the cost thereof.

Let’s rummage through more literature. As I study the article by Lu et al. (2019[7]), which compares the relative exposure to future droughts in various regions of the world, I find, first of all, the same uncertainty which I know from Naumann et al. (2015) and Vogt et al. (2018): the economically and socially important drought is a phenomenon we are just starting to understand, and we are still far from understanding it sufficiently to assess the related risks with precision. I know that special look that empirical research has when we don’t really have a clue what we are observing. You can see it in the multitude of analytical takes on the same empirical data. There are different metrics for detecting drought, and Lu et al. (2019) demonstrate that the assessment of drought-related losses heavily depends on the metric used. Once we account for those methodological disparities, some trends emerge. Europe in general seems to be more and more exposed to long-term drought, and this growing exposure seems to be pretty consistent across various scenarios of climate change. Exposure to short-term episodes of drought seems to be growing mostly under the RCP 4.5 and RCP 6.0 climate change scenarios, a little bit less under the RCP 8.5 scenario. In practical terms it means that even if we, as a civilisation, manage to cut down our total carbon emissions, as in the RCP 4.5 climate change scenario, the incidence of drought in Europe will still be increasing. Stagge et al. (2017[8]) point out that exposure to drought in Europe diverges significantly between the Mediterranean South, on the one hand, and the relatively colder North, on the other. The former is definitely exposed to an increasing occurrence of droughts, whilst the latter is likely to experience less frequent episodes. What makes the difference is evapotranspiration (loss of water) rather than precipitation. If we accounted just for the latter, we would actually have more water.

I move towards a more practical approach to drought, this time as an agricultural phenomenon, and I scroll across the article on environmental stress on winter wheat and maize, in Europe, by Webber et al. (2018[9]). Once again, I can see a lot of uncertainty. The authors put it plainly: models that serve to assess the impact of climate change on agriculture violate, by necessity, one of the main principles of statistical hypothesis-testing, namely that error terms are random and independent. In these particular models, error terms are neither random nor mutually independent. This is interesting for me, as I have that (recent) little obsession with applying artificial intelligence – a modest perceptron of my own make – to simulate social change. Non-random and dependent error terms are precisely what a perceptron likes to have for lunch. With that methodological caveat, Webber et al. (2018) claim that regardless of the degree of the so-called CO2 fertilization (i.e. plants being more active due to the presence of more carbon dioxide in the air), maize in Europe seems to be doomed to something like a 20% decline in yield by 2050. Winter wheat seems to be in a different boat. Without the effect of CO2 fertilization, a 9% decline in yield is to be expected, whilst with the plants being sort of restless, and high on carbon, a 4% increase is in view. Toreti et al. (2019[10]) offer a more global take on the concurrence between climate extremes and wheat production. It appears that Europe has been experiencing an increasing incidence of extreme heat events since 1989, and until 2015 this didn’t seem to affect adversely the yield of wheat. Still, from 2015 on, there is a visible drop in the output of wheat. Even stiller, if I may say, less wheat is apparently compensated by more of other cereals (Eurostat[11], Schils et al. 2018[12]), and accompanied by fewer potatoes and beets.

When I first started to develop that concept, which I baptised “Energy Ponds”, I mostly thought about it as a way to store water in rural areas, in swamp-and-meadow-like structures, to prevent droughts. It was only after I read a few articles about the Sponge Cities programme in China that I sort of drifted towards that more urban take on the thing. Maybe I was wrong? Maybe the initial concept of rural, hydrological structures was correct? Mind you, whatever we do in Europe, it always costs less if done in the countryside, especially regarding the acquisition of land.

Even in economics, sometimes we need to face reality, and reality presents itself as a choice between developing “Energy Ponds” in an urban environment, or in a rural one. On the other hand, I am rethinking the idea of electricity generated in water turbines paying off for the investment. In « Sponge Cities », I presented a provisional conclusion that it is a bad idea. Still, I was considering the size of investment that Jiang et al. (2018) talk about in the context of the Chinese Sponge Cities programme. Maybe it is reasonable to downsize the investment a bit, and to make it lean and adaptable to the cash flow it is possible to generate out of selling hydropower.



[1] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[2] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[3] Gustavo Naumann et al. , 2015, Assessment of drought damages and their uncertainties in Europe, Environmental Research Letters, vol. 10, 124013, DOI https://doi.org/10.1088/1748-9326/10/12/124013

[4] Vogt, J.V., Naumann, G., Masante, D., Spinoni, J., Cammalleri, C., Erian, W., Pischke, F., Pulwarty, R., Barbosa, P., Drought Risk Assessment. A conceptual Framework. EUR 29464 EN, Publications Office of the European Union, Luxembourg, 2018. ISBN 978-92-79-97469-4, doi:10.2760/057223, JRC113937

[5] Paprotny, D., Sebastian, A., Morales-Nápoles, O., & Jonkman, S. N. (2018). Trends in flood losses in Europe over the past 150 years. Nature communications, 9(1), 1985.

[6] Alfieri, L., Bisselink, B., Dottori, F., Naumann, G., de Roo, A., Salamon, P., … & Feyen, L. (2017). Global projections of river flood risk in a warmer world. Earth’s Future, 5(2), 171-182.

[7] Lu, J., Carbone, G. J., & Grego, J. M. (2019). Uncertainty and hotspots in 21st century projections of agricultural drought from CMIP5 models. Scientific reports, 9(1), 4922.

[8] Stagge, J. H., Kingston, D. G., Tallaksen, L. M., & Hannah, D. M. (2017). Observed drought indices show increasing divergence across Europe. Scientific reports, 7(1), 14045.

[9] Webber, H., Ewert, F., Olesen, J. E., Müller, C., Fronzek, S., Ruane, A. C., … & Ferrise, R. (2018). Diverging importance of drought stress for maize and winter wheat in Europe. Nature communications, 9(1), 4249.

[10] Toreti, A., Cronie, O., & Zampieri, M. (2019). Concurrent climate extremes in the key wheat producing regions of the world. Scientific reports, 9(1), 5493.

[11] https://ec.europa.eu/eurostat/statistics-explained/index.php/Agricultural_production_-_crops last access July 14th, 2019

[12] Schils, R., Olesen, J. E., Kersebaum, K. C., Rijk, B., Oberforster, M., Kalyada, V., … & Manolov, I. (2018). Cereal yield gaps across Europe. European journal of agronomy, 101, 109-120.

Le cycle d’adaptation

My editorial on You Tube

I keep developing my concept of Energy Ponds (« Étangs Énergétiques »; see « La ville éponge » and « Sponge Cities »). I have decided to use the Project Navigator, accessible through the website of the International Renewable Energy Agency. Creating a project through this functionality involves 6 stages: a) identification, b) strategic analysis, c) evaluation, d) selection, e) pre-development, and f) development proper.

Along this conceptual path, one can use the examples and case studies accessible through the sub-page entitled « Learning Section ». For the moment, I focus on the first phase, that of identification. I list the corresponding questions first, as they are presented in the Project Navigator, and then I try to answer them.

Questions for the project identification phase:

Social groups involved

Who is involved in the project? (central government, local governments and local communities, professional investors etc.)

Who controls the outcomes of the project and the benefits that flow from them?

What external needs have to be satisfied to assure the success of the project?

Which target groups are directly affected by the project?

Who are the ultimate, long-term beneficiaries of the project?

Problem

What is the essential problem which the project aims to solve?

What are its causes?

What are the consequences of the essential problem?

Objectives

What is the desired situation which the project should help to reach?

What are the direct effects of the desired situation?

What are the indirect spin-offs of the desired situation?

What means and methods should be applied to reach the desired situation?

Alternatives

What alternative actions can be considered?

What is the essential strategy of the project?

As I try to answer these questions in order, a salutary disorder creeps in and makes me formulate this general observation: in most European cities, the infrastructures in place for draining rainwater and providing drinking water are adapted, and even very well adapted, to a climate which barely exists anymore. Over the centuries we have learnt, in Europe, where the flood line lies in a given place and what the normal water level in the local river is. We have built drainage systems that were nearly perfect 30 years ago but that are overwhelmed more and more often. Technologically speaking, our urban infrastructures are the solution to problems which are progressively vanishing. What I mean is that there is no real technological alternative to the general concept of the sponge city. European cities are what they are, to a large extent, because over the centuries local communities learned to use the hydrological resources created by a typically temperate climate. The climate is changing and the hydrological conditions are changing too. Urban communities in Europe must invent and implement new infrastructural solutions or they will wither. Am I exaggerating? Go and visit Italy. You see the opulent North and the poor South. Would you believe that 2,200 years ago it was exactly the other way round? In the times of ancient Rome, republic or empire, never mind, the South was the posh neighbourhood and the North was quasi-barbarian land. The external conditions changed and some local communities degenerated.

I think, therefore, that the general direction I want to follow in developing my concept of Energy Ponds is the only direction viable in the long run. The question is how to do it exactly. So here I come to the last question on the identification list, a few paragraphs earlier: what is the essential strategy of the project? I think this strategy must be institutional first and technological second. It must, above all, mobilise several social actors around infrastructural projects. As I see it, the Energy Ponds project involves first and foremost local urban communities in those European cities which lie on floodplains along rivers. Depending on the exact urban structure in place, we can speak of urban communities strictly spoken or of metropolitan communities, but the basic logic remains the same: these cities face a specific aspect of climate change, namely a rhythm of precipitation which is evolving towards more and more violent downpours interspersed with periods of drought. The plains along European rivers are already turning into something typically fluvial, a bit like the Nile valley in Egypt: the natural irrigation of the superficial layers of the soil depends more and more on those violent downpours. However, the water supply infrastructures of these urban communities are, in their vast majority, adapted to the environmental conditions of the past, with quite predictable precipitation, occurring in long cycles, with substantial snowfall in winter and progressive thaws in the last weeks of winter and the first weeks of spring.

The expected results of the project are the following: a) more water retained on the spot after downpours, including more drinking water, hence a lower risk of drought and less damage caused by drought b) a lower risk of flood, a lower cost of ad hoc flood prevention as well as a lower cost of damage caused by floods c) control over the indirect environmental consequences of the terrain turning into a de facto floodplain d) electricity produced on the spot in water turbines which use rainwater.

When I ask myself again the question ‘Who controls these results and who can most plausibly skim the cream off the positive ones?’, the answer is complex but it has a basic logic: it depends on the law in force. In the European legal context as I know it, the results listed above are distributed among several actors. Generally speaking, control over fundamental resources, such as rivers and the infrastructure that accompanies them, or the electricity supply system, lies essentially with national governments, which in turn can delegate that control to third parties. Those third parties are mostly urban communities and large infrastructural companies. In fact, in the European legal context, city dwellers have practically no direct, proprietary control over the fundamental resources and infrastructures on which their quality of life depends. They therefore have no direct control over the possible benefits of the project. They can reap some spin-offs through real estate prices, where they do hold property rights, but in general, as far as control over the results is concerned, I already see a problem to solve. The problem is that whatever we try to transform in the urban infrastructure of European cities, it is hard to pin down who owns the change, given the law in force.

I want to pin down the risks which my concept of Energy Ponds, as well as the Chinese concept of Sponge Cities, aims to prevent or at least reduce: the risks related to floods and droughts which occur in apparently random episodes. I did a little tour of the literature on this subject. I start with droughts. Intuitively, they seem more dangerous to me than floods, insofar as it is, after all, easier to do something with water which is there in overabundance than with water which is not there at all. I start with a research letter by Naumann et al. (2015[1]) and one thing leaps out: we do not know exactly what is going on. The authors, who by the way are experts of the European Commission, openly admit that droughts in Europe really do occur, but they occur in a way we understand only partially. We even have trouble defining what exactly a drought is in the European context. Is the drying of the soil enough to speak of drought? Or do we need a strong negative correlation between said drying and agricultural productivity? However prudent it must be, the diagnosis of drought-related risks in Europe by Naumann et al. allows locating the zones of particularly high risk: France, Spain, Italy, the United Kingdom, Hungary, Romania, Austria and Germany.

It seems that flood-related risks in Europe are mapped and quantified much better than those related to episodes of drought. According to Alfieri et al. (2015[2]), at present the population affected by floods in Europe is around 216,000 people, and the trend points towards an interval between 500,000 and 640,000 people in 2050. On the financial side, the annual damage caused by floods in Europe is roughly €5.3 billion, against something between €20 billion and €40 billion per year to be expected in 2050. When I compare these two pieces of research – one on drought episodes, the other on floods – what leaps out is a disparity in terms of experience. We know quite precisely what a flood can do to us in a given place under specific hydrological conditions. On the other hand, we still know little about what we may suffer in the wake of an episode of drought. When I read the technical report by Vogt et al. (2018[3]) I realise that for us Europeans, drought is still a phenomenon that happens elsewhere, not at home. All the harder it will be for us to adapt when episodes of drought become more frequent.

I am therefore starting to think in terms of an adaptation cycle: a cycle of social change in response to environmental change. I believe the first episode of really massive flooding in my country, Poland, was in 1997. On the other hand, the first drought that really made itself noticed here, through dried-up wells and power plants threatened by problems with cooling their installations, due to exceptionally low water levels in the rivers, seems to have been in 2015. So, 2015 – 1997 = 18 years. Strange. It is almost exactly the cycle I had identified in my research on energy efficiency, and it makes me rethink the use of artificial intelligence in my research. The first thing is the coherent application of the perceptron to interpret the stochastic results of my research on energy efficiency. The second is a generalisation of the first: for a while now I have been wondering how to connect, theoretically, the stochastic methods used in social sciences with the logical structure of a neural network. One of the most obvious examples that comes to my mind now is the definition and use of error. In stochastic analysis we compute a standard error, on the basis of locally observed errors, and then we use this standard error, for example in Student’s t-test. In a neural network, we navigate from one local error to the next, step by step, and this is how our artificial intelligence learns. The third thing is the connection between the functions of a neural network, on the one hand, and two phenomena of collective psychology on the other: forgetting and innovation.

So, energy efficiency. In the draft article I am referring to, I formulated the general hypothesis that the energy efficiency of national economies is significantly correlated with the following variables:

  1. The coefficient of proportion between the aggregate depreciation of fixed assets and GDP; it is a measure of the relative economic importance of replacing old technologies with new ones;
  2. The coefficient of the number of domestic patent applications per 1 million inhabitants; it is a measure of the relative intensity of the emergence of new inventions;
  3. The coefficient of money supply as a percentage of GDP, i.e. the inverse of the good old velocity of money; that one is an old pal of mine: I had already studied it, in connection with (i) and (ii), in a 2017 article; as you may have followed on my blog, I am very attached to the idea of money as a systemic hormone of social structures;
  4. The coefficient of energy consumption per capita;
  5. The percentage of renewable energies in total energy consumption;
  6. The percentage of urban population in the total population;
  7. The coefficient of GDP per capita;

Of course, I could develop a whole line of reflection on the inter-correlations of these explanatory variables themselves. However, I want to focus on an interesting meta-regularity I had discovered. Given that these variables have very different scales of measurement, I started by taking their natural logarithms, and it was on these logarithms that I ran all the econometric tests. Once I had run the basic linear regression on those logarithms, the really robust result told me that the energy efficiency of a country – hence its coefficient of GDP per kilogram of oil equivalent of final energy consumption – depends above all on a negative correlation with energy consumption per capita, and on a positive correlation with GDP per capita. The other variables had regression coefficients lower by an order of magnitude, or their significance ‘p’ in Student’s t-test was rather random. As those two dominant coefficients are both denominated per capita, reducing by the common denominator led me to the conclusion that the coefficient of GDP per unit of energy consumption is significantly correlated with the coefficient of GDP per unit of energy consumption. Not really interesting.
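For readers who want to replicate the exercise, here is a minimal sketch of that baseline regression; the panel file and its column names are hypothetical stand-ins for my actual dataset, only the log-then-OLS procedure comes from the text.

```python
# A minimal sketch of the baseline regression described above: OLS of the natural
# logarithm of energy efficiency on the natural logarithms of the seven explanatory
# variables. The DataFrame and its column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("energy_efficiency_panel.csv")  # hypothetical panel, one row per country-year

explanatory = [
    "depreciation_to_gdp", "patents_per_million", "money_supply_to_gdp",
    "energy_per_capita", "renewables_share", "urban_share", "gdp_per_capita",
]

# Natural logarithms, as in the text, to neutralise the very different scales of measurement
X = sm.add_constant(np.log(df[explanatory]))
y = np.log(df["gdp_per_kg_oil_equivalent"])

model = sm.OLS(y, X).fit()
print(model.summary())  # look at the coefficients and their Student's t significance
```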

That was when I had this bizarre association of ideas: the natural logarithm of a number is the exponent to which the constant e = 2.71828 has to be raised to obtain said number. The constant e = 2.71828, in turn, is the constant parameter of the function of exponential progression, which has the intriguing capacity of reflecting dynamic change with hysteresis, i.e. processes of growth where each consecutive episode builds its local growth on the basis of the preceding episode.

In exponential progression, the exponent of the constant e = 2.71828 is a composite product of an exogenous parameter a and the ordinal number t of the consecutive period of time. It goes like y = e^(a*t). The time coefficient t is measured against a calendar. It depends on the assumption regarding the original moment of the progression: t = tx – t0, where tx is the raw temporal moment, so to say, and t0 is the original moment. All that is deep ontology in itself: the time we are aware of is a projection of an underlying time onto the frame of a conventional calendar.

I used this ontology as a pretext to play a bit with my natural logarithms. Logically, the natural logarithm of a number x can be written as the exponent of the constant e in an exponential progression, hence ln(x) = a*t. As t = tx – t0, the exact formulation of the natural logarithm is therefore ln(x) = a*(tx – t0). Logically, the local value of the exogenous coefficient a depends on the conventional choice of t0. That is when I imagined two alternative histories: one that had started a century earlier, in 1889, towards the end of the second industrial revolution, and another that had started in 1989, after the great political change in Europe and the fall of the Berlin Wall.

I wrote each natural logarithm in my set of empirical data in two alternative formulations: ln(x) = a1*(tx – 1889) or ln(x) = a2*(tx – 1989). Consequently, each empirical value x in my sample acquires two alternative representations: a1(x) = ln(x) / (tx – 1889) and a2(x) = ln(x) / (tx – 1989). The a1 coefficients tell a slow, composed history. My empirical observations start in 1990 and run until 2014; a1(x; 1990) = ln(x)/101 whilst a1(x; 2014) = ln(x)/125. The a2 coefficients, on the other hand, tell a history in the image of a shock wave spreading with decreasing force from its point of origin; a2(x; 1990) = ln(x)/1 whilst a2(x; 2014) = ln(x)/25.
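Here is a minimal sketch of those two transformations, applied to the same hypothetical panel as in the sketch above; only the formulas ln(x)/(tx – 1889) and ln(x)/(tx – 1989) come from the text.

```python
# A minimal sketch of the two alternative transformations a1(x) and a2(x)
# described above, applied to a hypothetical country-year panel.
import numpy as np
import pandas as pd

def a_transform(values: pd.Series, years: pd.Series, t0: int) -> pd.Series:
    """Rewrite ln(x) as a * (t_x - t0) and return the local coefficient a."""
    return np.log(values) / (years - t0)

df = pd.read_csv("energy_efficiency_panel.csv")  # hypothetical panel, observations 1990-2014

for col in ["gdp_per_kg_oil_equivalent", "urban_share", "money_supply_to_gdp"]:
    df[f"a1_{col}"] = a_transform(df[col], df["year"], t0=1889)  # slow, century-long history
    df[f"a2_{col}"] = a_transform(df[col], df["year"], t0=1989)  # fading shock wave since 1989

# The same OLS as before can now be re-run on the a1_* columns, and again on the a2_* ones.
```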

I took the same linear regression – the one I had run on the natural logarithms ln(x) of my data – and ran it again on the transformed sets a1(x) and a2(x). I was thus trying to explain, stochastically, the changes observed in a1(energy efficiency) and a2(energy efficiency) by regression on the a1(x) and a2(x) of the explanatory variables (i) – (vii) listed above. The regression on the peaceful a1 pulls out of the shadow the importance of the correlation between energy efficiency and the percentage of urban population in the total population: the more city dwellers in the total population, the more energy-efficient the country’s economy. When I regress on the a2, the fading shock wave, the correlation between urbanisation and energy efficiency gains strength and another one appears: that with money supply as a percentage of GDP. More cash per unit of GDP, more GDP per kilogram of oil equivalent consumed.

Here, I have a bit of the same doubt as every time I see a new stochastic technique, for example when I compare the results of ordinary least squares regression with the same empirical data processed with methods such as GARCH or ARIMA. Different methods of computation applied to the same initial data give different results: that is normal. Nevertheless, are those different results manifestations of something really different? What comes to my mind is the concept of the Schumpeterian cycle. In his famous book entitled « Business Cycles », the Austrian economist Joseph Alois Schumpeter formulated a thesis that has since settled comfortably into social sciences: that of the cycle of technological change. My research results indicate that changes in energy efficiency form their most coherent correlations with the other variables when I impose an analysis of cycle, with a hypothetical initial moment. How is this cycle linked to individual and collective behaviour, in other words, how can I study it as a phenomenon of collective intelligence?



[1] Gustavo Naumann et al. , 2015, Assessment of drought damages and their uncertainties in Europe, Environmental Research Letters, vol. 10, 124013, DOI https://doi.org/10.1088/1748-9326/10/12/124013

[2] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[3] Vogt, J.V., Naumann, G., Masante, D., Spinoni, J., Cammalleri, C., Erian, W., Pischke, F., Pulwarty, R., Barbosa, P., Drought Risk Assessment. A conceptual Framework. EUR 29464 EN, Publications Office of the European Union, Luxembourg, 2018. ISBN 978-92-79-97469-4, doi:10.2760/057223, JRC113937