The mind-blowing hydro

My editorial on YouTube

There is that thing about me: I am a strange combination of consistency and ADHD. If you have ever read one of Terry Pratchett’s novels from the ‘Discworld’ series, you probably know the golems: creatures made of clay, with a logical structure – a ‘chem’ – put in their heads, so that they can work on something endlessly. In my head, there are chems which just push me to do things over and over and over again. Writing and publishing on this research blog is very much along those lines. I can stop whenever I want, I just don’t want to right now. Yet, when I do a lot about one chem, I start craving another one, nearby but not quite in the same intellectual location.

Right now, I am working on two big things. Firstly, I feel like drawing a provisional bottom line under those two years of science writing on my blog. Secondly, I want to put together an investment project that would help my city, my country and my continent, thus Krakow, Poland, and Europe, to face one of the big challenges resulting from climate change: water management. Interestingly, I started to work on the latter first, and only then did I begin to phrase out the former. Let me explain. As I work on that project of water management, which I have provisionally named « Energy Ponds » (see, for example, « All hope is not lost: the countryside is still exposed »), I use the « Project Navigator », made available by the courtesy of the International Renewable Energy Agency (IRENA). The logic built into the « Project Navigator » makes me return, over and over again, to one central question: ‘You, Krzysztof Wasniewski, with your science and your personal energy, how are you aligned with that idea of yours? How can you convince other people to put their money and their personal energy into developing your concept?’.

And so I am asking myself: ‘What’s your science, bro? What can you get people interested in, with rational grounds and intelligible evidence?’.

As I think about it, my first basic claim is that we can do it together in a smart way. We can act as a collective intelligence. This statement can be considered a manifestation of the so-called “Bignetti model” in cognitive sciences (Bignetti 2014[1]; Bignetti et al. 2017[2]; Bignetti 2018[3]): for the last two years, I have been progressively centering my work around the topic of collective intelligence, without even being quite aware of it. As I was working on another book of mine, entitled “Capitalism and Political Power”, I came across that puzzling quantitative fact: as a civilization, we have more and more money per unit of real output[4], and, as I reviewed some literature, we seem not to understand why that is happening. Some scholars complain about the allegedly excessive ‘financialization of the economy’ (Krippner 2005[5]; Foster 2007[6]; Stockhammer 2010[7]), yet, besides easy generalizations about ‘greed’ or an ‘unhinged race for profit’, no scientifically coherent explanation of this phenomenon is offered.

As I was trying to understand this phenomenon, shades of correlations came into my focus. I could see, for example, that the growing amount of money per unit of real output has been accompanied by a growing amount of energy consumed per person per year in the global economy[8]. Do we convert energy into money, or the other way around? How can it be happening? In 2008, the proportion between the global supply of broad money and the global real output passed the magical threshold of 100%. Intriguingly, the same year, the share of urban population in the total human population passed the threshold of 50%[9], and the share of renewable energy in the total final consumption of energy, at the global scale, took off for the first time since 1999, and has kept growing since[10]. I started having that diffuse feeling that, as a civilization, we are really up to something right now, and money is acting like a social hormone, facilitating change.

We change as we learn, and we learn as we experiment with the things we invent. How can I represent, in a logically coherent way, collective learning through experimentation? When an individual, or a clearly organized group, learns through experimentation, the sequence is pretty straightforward: we phrase out an intelligible definition of the problem to solve, we invent various solutions, we test them, we sum up the results, we select the seemingly best solution among those tested, and we repeat the whole sequence. As I kept digging into the topic of energy, technological change, and the velocity of money, I started formulating the outline of a complex hypothesis: what if we, humans, are collectively intelligent about building, purposefully and semi-consciously, social structures supposed to serve as vessels for future collective experiments?

My second claim is that one of the smartest things we can do about climate change, besides reducing our carbon footprint, is to take proper care of our food and energy base. In Europe, climate change is mostly visible as a complex disruption to our water system, and we can observe it in our local rivers. That’s the thing about Europe: we have built our civilization, on this tiny, mountainous continent, in close connection with rivers. Right, I could call them, scientifically, ‘inland waterways’, but I think that when I say ‘river’, anybody who reads it understands intuitively. Anyway, what we call today ‘the European heritage’ has grown next to EVENLY FLOWING rivers. Once again: evenly flowing. It means that we, Europeans, are used to seeing the neighbouring river as a steady flow. Streams and creeks can overflow after heavy rains, and rivers can swell, but for centuries all that happened in a recurrent, predictable way.

Now, with the advent of climate change, we can observe three water-related phenomena. Firstly, as the English saying goes, it never rains but it pours. The steady rhythm and predictable volume of precipitation we are used to in Europe (mostly in the Northern part) progressively gives way to sudden downpours, interspersed with periods of drought, hardly predictable in their length. First moral of the fairy tale: if we have less and less of the kind of water that falls from the sky slowly and predictably, we need to learn how to capture and retain the kind of water that falls abruptly, unscheduled. Secondly, whilst we have somehow adapted to the new kind of sudden floods, we have a big challenge ahead: droughts are already impacting, directly and indirectly, the food market in Europe, but we don’t have enough science yet to predict accurately either their occurrence or their local impact. Yet, there is already one emerging pattern: whatever happens, i.e. floods or droughts, rural populations in Europe suffer more than the urban ones (see my review of literature in « All hope is not lost: the countryside is still exposed »). Second moral of the fairy tale: whatever we do about water management in these new conditions, in Europe, we need to take care of agriculture first, and thus to create new infrastructures so as to shield farms against floods and droughts, cities coming next in line.

Thirdly, the most obviously observable manifestation of floods and droughts is variation in the flow of local rivers. By the way, that variation is already impacting the energy sector: when we have too little flow in European rivers, we need to scale down the output of power plants, as they do not have enough water to cool themselves. Rivers are drainpipes of the neighbouring land. Steady flow in a river is closely correlated with a steady level of water in the ground, both in the soil and in the mineral layers underneath. Third moral of the fairy tale: if we figure out workable ways of retaining as much rainfall in the ground as possible, we can prevent all three disasters at the same time, i.e. local floods, droughts, and economically adverse variations in the flow of local rivers.

I keep thinking about that ownership-of-the-project thing I need to cope with when using the « Project Navigator » by IRENA. How to make local communities own, as much as possible, both the resources needed for the project and its outcomes? Here, precisely, I need to use my science, whatever it is. People at IRENA have experience in such projects, which I haven’t. I need to squeeze my brain and extract from it any useful piece of coherent understanding, to replace experience. I am advancing step by step. I intuitively associate ownership with property rights, i.e. with a set of claims on something – things or rights – together with a set of liberties of action regarding the same things or rights. Ownership on the part of a local community means that claims and liberties should be sort of pooled, and the best idea that comes to my mind is an investment fund. Here, a word of explanation is due: the investment fund is a general concept, whose actual, institutional embodiment can take the shape of an investment fund strictly speaking, but also of other legal forms, such as a trust, a joint stock company, a crowdfunding platform, or even a cryptocurrency operating in a controlled network. The general concept of an investment fund consists in taking a population of investors and making them pool their capital resources over a set of entrepreneurial projects, via the general legal construct of participatory titles: equity-based securities, debt-based ones, insurance, futures contracts, and combinations thereof. Mind you, governments are investment funds too, as regards their capacity to move capital around. They somehow express the interest of their respective populations in a handful of investment projects, they take those populations’ tax money and spread it among said projects. That general concept of an investment fund is a good expression of collective intelligence. An investment fund is an excellent example of that social structure for collective experimentation which I mentioned a few paragraphs ago. It allows spreading resources over a number of ventures considered as local experiments.

Now, I am dicing a few ideas for a financial scheme, based on the general concept of an investment fund, as collectively intelligent as possible, in order to face the new challenges of climate change through new infrastructures for water management. I start with reformulating the basic technological concept. Water-powered water pumps are immersed in the stream of a river. They use the kinetic energy of that stream to pump water up and further away, more specifically into elevated water towers. From those towers the water falls back to the ground level; as it flows down, it powers relatively small hydroelectric turbines, and it ends up in a network of ponds, vegetal complexes and channel-like ditches, all of it made with the purpose of retaining as much water as possible. Those structures can be connected to others, designed directly to capture rainwater. I was thinking about two setups, for rural and for urban environments respectively. In the rural landscape, those ponds and channels can be profiled so as to collect rainwater from the surface of the ground and conduct it into its deeper layers, through some system of inverted draining. I think it would be possible, under proper geological conditions, to reverse-drain rainwater into deep aquifers, which the neighbouring artesian wells can tap into. In the urban context, I would like to know more about those Chinese technologies used in their Sponge Cities programme (see Jiang et al. 2018[11]).

The research I have done so far suggests that relatively small, local projects work better for implementing this type of technology than big, national-scale endeavours. Of course, national investment programmes will be welcome as indirect support, but at the end of the day, we need a local community owning a project, possibly through an investment-fund-like institutional arrangement. The economic value conveyed by any kind of participatory title in such a capital structure sums up to the Net Present Value of three cash flows: net proceeds from selling hydroelectricity produced in small water turbines, the reduction of the aggregate flood-related risk, and the reduction of the drought-related risk. I separate risks connected to floods from those associated with droughts, as they are different in nature. In economic and financial terms, floods are mostly a menace to property, whilst droughts materialize as more volatile prices of food and basic agricultural products.

In order to apprehend accurately the Net Present Value of any cash flow, we need to set a horizon in time. Very tentatively, by interpreting data from 2012, presented in a report published by IRENA (the same IRENA), I assume that relatively demanding investors in Europe expect a full return on their investment within 6,5 years, which I round up to 7 years for the sake of simplicity. Now, I go a bit off the beaten tracks, at least those I have beaten so far. I am going to take the total atmospheric precipitation falling on various European countries, which means rainfall + snowfall, and then try to simulate what amount of ‘NPV = hydroelectricity + reduction of risk from floods and droughts’ over those 7 years the retention of that water could represent.
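To make that formula tangible, here is a minimal sketch in Python of the ‘NPV = hydroelectricity + reduction of risk from floods and droughts’ estimation over a 7-year horizon. All the yearly cash-flow figures and the 5% discount rate are placeholders of my own, not results of any calculation presented here.

```python
# Minimal sketch: NPV over a 7-year horizon of the three cash flows
# discussed above. All yearly amounts and the discount rate are
# illustrative placeholders, not results of the research.

def npv(yearly_cash_flow, years, discount_rate):
    """Net Present Value of a constant yearly cash flow."""
    return sum(yearly_cash_flow / (1 + discount_rate) ** t
               for t in range(1, years + 1))

horizon = 7          # years, as assumed above
rate = 0.05          # hypothetical discount rate

cash_flows = {
    "hydroelectricity sales": 1_000_000,   # EUR/year, placeholder
    "avoided flood losses":     500_000,   # EUR/year, placeholder
    "avoided drought losses":   300_000,   # EUR/year, placeholder
}

total_npv = sum(npv(cf, horizon, rate) for cf in cash_flows.values())
print(f"NPV over {horizon} years at {rate:.0%}: {total_npv:,.0f}")
```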

Let’s waltz. I take data from FAOSTAT regarding precipitation and water retention. As a matter of fact, I made a query of that data regarding a handful of European countries. You can have a look at the corresponding Excel file UNDER THIS LINK. I rearranged the data from this Excel file a bit, so as to have a better idea of what could happen if those European countries I have on my list, my native Poland included, built infrastructures able to retain 2% of the annual rainfall. The coefficient of 2% is vaguely based on what Shao et al. (2018[12]) give as the target retention coefficient for the city of Xiamen, China, and their Sponge-City-type investment. I used the formulas I had already phrased out in « Sponge Cities », and in « La marge opérationnelle de $1 539,60 par an par 1 kilowatt », to estimate the amount of electricity possible to produce out of those 2% of annual rainfall elevated, according to my idea, into 10-metre-high water towers. On top of all that, I added, for each country, data regarding the already existing capacity to retain water. All those rearranged numbers, you can see them in the Excel file UNDER THIS OTHER LINK (a table would be too big for inserting into this update).
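For transparency, here is, as a Python sketch, roughly the kind of calculation I run in that Excel file: retain 2% of annual precipitation, elevate it into 10-metre water towers, and convert the resulting flow into electricity at the 75,1% turbine efficiency assumed in « Sponge Cities ». The country-level rainfall and area figures in the example are placeholders, not the FAOSTAT data.

```python
# Sketch of the per-country estimation: take annual precipitation,
# retain 2% of it, elevate it to 10-metre water towers, and convert
# the resulting flow into electricity. Input figures are placeholders.

SECONDS_PER_YEAR = 31_536_000
G = 9.81            # gravitational acceleration, m/s^2
EFFICIENCY = 0.751  # assumed average efficiency of a small hydro turbine
HEAD = 10.0         # metres, height of the water towers
RETENTION = 0.02    # share of annual precipitation retained

def annual_kwh(precipitation_mm, area_km2):
    """Electricity obtainable from retaining 2% of annual rainfall."""
    volume_m3 = (precipitation_mm / 1000) * area_km2 * 1_000_000
    retained_m3 = volume_m3 * RETENTION
    flow_kg_s = retained_m3 * 1000 / SECONDS_PER_YEAR  # 1 m3 of water = 1000 kg
    power_kw = flow_kg_s * G * EFFICIENCY * HEAD / 1000
    return power_kw * 8760  # hours in a year

# Example with placeholder figures (roughly Poland-sized country):
print(f"{annual_kwh(precipitation_mm=600, area_km2=312_700):,.0f} kWh/year")
```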

The first provisional conclusion I have to make is that I need to revise completely my conclusion from « Sponge Cities », where I claimed that hydroelectricity would have no chance of paying for any significant investment in sponge-like structures for retaining water. The calculations I have just run show just the opposite: as soon as we consider whole countries as rain-retaining basins, the hydroelectric power, and the cash flow dormant in that water, are just mind-blowing. I think I will need a night of sleep just to check on the accuracy of my calculations.

Unsettling as they are, my calculations bear another facet. I compare the postulated 2% retention of annual precipitation with the already existing capacity of these national basins to retain water. That capacity is measured, in that second Excel file, by the ‘Coefficient of retention’, which divides the ‘Total internal renewable water resources (IRWR)’ by the annual precipitation, both in 10^9 m3/year. My basic observation is that, across European countries, the capacity to retain water is dispersed very similarly to the intensity of precipitation, measured in mm per year. Both coefficients vary in a similar proportion, i.e. their respective standard deviations make around 0,4 of their respective means, across the sample of 37 European countries. When I measure it with the Pearson coefficient of correlation between the intensity of rainfall and the capacity to retain it, it yields r = 0,63. In general, the more water falls from the sky per 1 m2, the greater the percentage of that water that is retained, as it seems. Another provisional conclusion I make is that the capacity to retain water, in a given country, is some kind of response, possibly both natural and man-engineered, to a relatively big amount of water falling from the sky. It looks as if our hydrological structures, in Europe, had been built to do something with water we momentarily have plenty of, possibly even too much of, and which we should save for later.
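For the record, the two statistics I am referring to – the coefficient of variation and the Pearson correlation – can be reproduced with a few lines of Python; the sample numbers below are made-up stand-ins for the FAOSTAT-based data in the Excel file.

```python
# Sketch of the two statistics used above: the coefficient of variation
# (standard deviation over mean) and the Pearson correlation between
# rainfall intensity and the coefficient of retention. The sample data
# are made-up stand-ins for the FAOSTAT-based Excel file.
import statistics  # statistics.correlation requires Python 3.10+

rainfall_mm = [600, 678, 850, 1100, 540, 720]        # placeholder values
retention_coef = [0.30, 0.35, 0.45, 0.60, 0.28, 0.40]

def coef_of_variation(xs):
    return statistics.pstdev(xs) / statistics.mean(xs)

print(f"CV of rainfall:  {coef_of_variation(rainfall_mm):.2f}")
print(f"CV of retention: {coef_of_variation(retention_coef):.2f}")
print(f"Pearson r: {statistics.correlation(rainfall_mm, retention_coef):.2f}")
```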

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest to me two things that Patreon suggests I should ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Bignetti, E. (2014). The functional role of free-will illusion in cognition:“The Bignetti Model”. Cognitive Systems Research, 31, 45-60.

[2] Bignetti, E., Martuzzi, F., & Tartabini, A. (2017). A Psychophysical Approach to Test:“The Bignetti Model”. Psychol Cogn Sci Open J, 3(1), 24-35.

[3] Bignetti, E. (2018). New Insights into “The Bignetti Model” from Classic and Quantum Mechanics Perspectives. Perspective, 4(1), 24.

[4] https://data.worldbank.org/indicator/FM.LBL.BMNY.GD.ZS last access July 15th, 2019

[5] Krippner, G. R. (2005). The financialization of the American economy. Socio-economic review, 3(2), 173-208.

[6] Foster, J. B. (2007). The financialization of capitalism. Monthly Review, 58(11), 1-12.

[7] Stockhammer, E. (2010). Financialization and the global economy. Political Economy Research Institute Working Paper, 242, 40.

[8] https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE last access July 15th, 2019

[9] https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS last access July 15th, 2019

[10] https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS last access July 15th, 2019

[11] Jiang, Y., Zevenbergen, C., & Ma, Y. (2018). Urban pluvial flooding and stormwater management: A contemporary review of China’s challenges and “sponge cities” strategy. Environmental science & policy, 80, 132-143.

[12] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

All hope is not lost: the countryside is still exposed

My editorial on YouTube

I am focusing on the possible benefits of transforming the urban structures of at least some European cities into sponge-like structures, such as described, for example, by Jiang et al. (2018), as well as in my recent updates on this blog (see Sponge Cities). In parallel to reporting my research on this blog, I am developing a corresponding project with the « Project Navigator », made available by the courtesy of the International Renewable Energy Agency (IRENA). Figuring out my way through the « Project Navigator » made me aware of the importance that social cohesion has in the implementation of such infrastructural projects. Social cohesion means a set of common goals, and an institutional context that allows the appropriation of outcomes. In « Sponge Cities », when studying the case of my hometown, Krakow, Poland, I came to the conclusion that sales of electricity from water turbines incorporated into the infrastructure of a sponge city could hardly pay off for the investment needed. On the other hand, a significant reduction of the financially quantifiable risk connected to floods and droughts can be an argument. The flood-related risks especially, in Europe, already amount to billions of euros, and we seem to be just at the beginning of the road (Alfieri et al. 2015[1]). Shielding against such risks can possibly make a sound base for social cohesion, as a common goal. Hence, as I am structuring the complex concept of « Energy Ponds », I start with assessing the risks connected to climate change in European cities, and the possible reduction of those risks through sponge-city-type investments.

I start with a comparative review of Alfieri et al. (2015[2]) as regards flood-related risks, on the one hand, and Naumann et al. (2015[3]) as well as Vogt et al. (2018[4]) regarding the drought-related risks, on the other. As a society, in Europe, we seem to be more at home with floods than with droughts. The former is something we kind of know historically, and with the advent of climate change we just acknowledge more trouble in that department, whilst the latter had been, until recently, something that happened essentially to other people on other continents. The very acknowledgement of droughts as a recurrent risk is a challenge.

Risk is a quantity: this is what I teach my students. It is the probability of occurrence multiplied by the magnitude of damage, should the s**t really hit the fan. Why adopt such an approach? Why not assume that risk is just the likelihood of something bad happening? Well, because risk management is practical. There is only a point in bothering about risk if we can do something about it: insure and cover, hedge, prevent etc. The interesting thing about it is that all human societies show a recurrent pattern: as soon as we organise somehow, we create something like a reserve of resources, supposed to provide for risk. We are exposed to a possible famine? Good, we make a reserve of food. We risk being invaded by a foreign nation/tribe/village/alien civilisation? Good, we make an army, i.e. a group of people, trained and equipped for actions with no immediate utility, just in case. The nearby river can possibly overflow? Good, we dig and move dirt, stone, wood and whatnot so as to build stopbanks. In each case, we move along the same path: we create a pooled reserve of something, in order to minimize the long-term damage from adverse events.

Now, if we wonder how much food we need to have in stock in case of famine, sooner or later we come to the conclusion that it is the individual need for food multiplied by the number of people likely to be starving. That likelihood is not evenly distributed across the population: some people are more exposed than others. A farmer, with a few pigs and some potatoes in cultivation, is less likely to starve than a stonemason, busy building something and having neither the time nor the energy to produce food. Providing for the risk of flood works according to the same scheme: some structures and some people are more likely to suffer than others.

We apprehend flood and drought-related risks in a similar way: those risks amount to a quantity of resources we put aside, in order to provide for the corresponding losses, in various ways. That quantity is the arithmetical product of probability times magnitude of loss.    
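Put as arithmetic, the provision we set aside is the expected loss, i.e. probability times magnitude. A trivial sketch, with made-up numbers:

```python
# Risk as a quantity: probability of occurrence times magnitude of damage.
# Both figures below are made up, purely to show the arithmetic.
flood_probability_per_year = 0.02        # a "1-in-50-years" flood
damage_if_it_happens = 40_000_000        # EUR, placeholder

expected_yearly_loss = flood_probability_per_year * damage_if_it_happens
print(f"Reserve to provision per year: {expected_yearly_loss:,.0f} EUR")
```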

Total risk is a complex quantity, resulting from events happening in causal, heterogeneous chains. A river overflows and destroys some property: this is direct damage, the first occurrence in the causal chain. Among the property damaged, there are garbage yards. As water floods them, it washes away and further into the surrounding civilisation all kinds of crap, properly spoken crap included. The surrounding civilisation gets contaminated, and decontamination costs money: this is indirect damage, the second tier of the causal chain. Chemical and biological contamination by floodwater causes disruptions in the businesses involved, and those disruptions are costly, too: here goes the third tier in the causal chain etc.

I found some interesting insights, regarding the exposure to flood and drought-related risks in Europe, in Paprotny et al. (2018[5]). Firstly, this piece of research made me realize that floods and droughts do damage in very different ways. Floods are disasters in the most intuitive sense of the term: they are violent, and they physically destroy man-made structures. The magnitude of damage from floods results from two basic variables: the violence and recurrence of the floods themselves, on the one hand, and the value of the human structures affected, on the other. In a city, a flood does much more damage because there is much more property to destroy. Out there, in the countryside, damage inflicted by floods changes from disaster-type destruction into more lingering, long-term impediments to farming (e.g. contamination of farmed soil), as the density of man-made structures subsides. Droughts work insidiously. There is no spectacular disaster to be afraid of. Adverse outcomes build up progressively, sometimes year after year. Droughts also affect the countryside much more directly than the cities. It is rivers drying out first, and only in a second step cities experiencing disruptions in the supply of water, or of river-dependent electricity. It is farm soil drying out progressively, and farmers suffering damage due to lower crops or increased costs of irrigation, and only then city dwellers experiencing higher prices for their average carrot or organic cereal bar. Mind you, there is one type of drought-related disaster which can sometimes directly affect our towns and cities: forest fires.

Paprotny et al. (2018) give some detailed insights into the magnitude, type, and geographical distribution of flood-related risks in Europe. Firstly, the ‘where exactly?’. France, Spain, Italy, and Germany are the most affected, with Portugal, England, Scotland, Poland, the Czech Republic, Hungary and Romania following closely behind. As to the type of floods, France, Spain, and Italy are exposed mostly to flash floods, i.e. too much rain falling and not knowing where to go. Germany and virtually all of Central Europe, my native Poland included, are mostly exposed to river floods. As for the incidence of human fatalities, flash floods are definitely the most dangerous, and their impact seems to be the most serious in the second half of the calendar year, from July on.

Besides, the research by Paprotny et al. (2018) indicates that in Europe we seem to be already on the path of adaptation to floods. Both the currently observed losses – human and financial – and their 10-year moving average had their peaks between 1960 and 2000. After 2000, Europe seems to have been progressively acquiring the capacity to minimize the adverse impact of floods, and this capacity seems to have developed in cities more than in the countryside. It truly gives a blow to one’s ego to learn that the problem one wants to invent a revolutionary solution to does not really exist. I need to return to that claim I made in the « Project Navigator », namely that European cities are perfectly adapted to a climate that no longer exists. Apparently, I was wrong: European cities seem to be adapting quite well to the adverse effects of climate change. Yet, all hope is not lost. The countryside is still exposed. Now, seriously. Whilst Europe seems to be adapting to a greater occurrence of floods, said occurrence is most likely to increase, as suggested, for example, in the research by Alfieri et al. (2017[6]). That sends us to the issue of the limits to adaptation and the cost thereof.

Let’s rummage through more literature. As I study the article by Lu et al. (2019[7]), which compares the relative exposure to future droughts in various regions of the world, I find, first of all, the same uncertainty which I know from Naumann et al. (2015) and Vogt et al. (2018): the economically and socially important drought is a phenomenon we are just starting to understand, and we are still far from understanding it sufficiently to assess the related risks with precision. I know that special look that empirical research has when we don’t really have a clue what we are observing. You can see it in the multitude of analytical takes on the same empirical data. There are different metrics for detecting drought, and Lu et al. (2019) demonstrate that the assessment of drought-related losses heavily depends on the metric used. Once we account for those methodological disparities, some trends emerge. Europe in general seems to be more and more exposed to long-term drought, and this growing exposure seems to be pretty consistent across various scenarios of climate change. Exposure to short-term episodes of drought seems to be growing mostly under the RCP 4.5 and RCP 6.0 climate change scenarios, a little bit less under the RCP 8.5 scenario. In practical terms it means that even if we, as a civilisation, manage to cut down our total carbon emissions, as in the RCP 4.5 climate change scenario, the incidence of drought in Europe will still be increasing. Stagge et al. (2017[8]) point out that exposure to drought in Europe diverges significantly between the Mediterranean South, on the one hand, and the relatively colder North, on the other. The former is definitely exposed to an increasing occurrence of droughts, whilst the latter is likely to experience less frequent episodes. What makes the difference is evapotranspiration (loss of water) rather than precipitation. If we accounted just for the latter, we would actually have more water.

I move towards a more practical approach to drought, this time as an agricultural phenomenon, and I scroll across the article on the environmental stress on winter wheat and maize in Europe, by Webber et al. (2018[9]). Once again, I can see a lot of uncertainty. The authors put it plainly: models that serve to assess the impact of climate change on agriculture violate, by necessity, one of the main principles of statistical hypothesis testing, namely that error terms are random and independent. In these precise models, error terms are neither random nor mutually independent. This is interesting for me, as I have that (recent) little obsession with applying artificial intelligence – a modest perceptron of my own make – to simulate social change. Non-random and dependent error terms are precisely what a perceptron likes to have for lunch. With that methodological caveat in place, Webber et al. (2018) claim that regardless of the degree of the so-called CO2 fertilization (i.e. plants being more active due to the presence of more carbon dioxide in the air), maize in Europe seems to be doomed to something like a 20% decline in yield by 2050. Winter wheat seems to be rowing in a different boat. Without the effect of CO2 fertilization, a 9% decline in yield is to be expected, whilst with the plants being sort of restless, and high on carbon, a 4% increase is in view. Toreti et al. (2019[10]) offer a more global take on the concurrence between climate extremes and wheat production. It appears that Europe has been experiencing an increasing incidence of extreme heat events since 1989, and until 2015 it didn’t seem to affect adversely the yield of wheat. Still, from 2015 on, there is a visible drop in the output of wheat. Even stiller, if I may say, less wheat is apparently compensated by more of other cereals (Eurostat[11], Schils et al. 2018[12]), and accompanied by less potatoes and beets.

When I first started to develop on that concept, which I baptised “Energy Ponds”, I mostly thought about it as a way to store water in rural areas, in swamp-and-meadow-like structures, to prevent droughts. It was only after I read a few articles about the Sponge Cities programme in China that I sort of drifted towards that more urban take on the thing. Maybe I was wrong? Maybe the initial concept of rural, hydrological structures was correct? Mind you, whatever we do in Europe, it always costs less if done in the countryside, especially regarding the acquisition of land.

Even in economics, sometimes we need to face reality, and reality presents itself as a choice between developing “Energy Ponds” in an urban environment or in a rural one. On the other hand, I am rethinking the idea of electricity generated in water turbines paying off for the investment. In « Sponge Cities », I presented a provisional conclusion that it is a bad idea. Still, I was then considering the size of investment that Jiang et al. (2018) talk about in the context of the Chinese Sponge-Cities programme. Maybe it is reasonable to downsize the investment a bit, and to make it sort of lean and adaptable to the cash flow possible to generate out of selling hydropower.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest to me two things that Patreon suggests I should ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[2] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[3] Naumann, G., et al. (2015). Assessment of drought damages and their uncertainties in Europe. Environmental Research Letters, 10, 124013. DOI: https://doi.org/10.1088/1748-9326/10/12/124013

[4] Vogt, J.V., Naumann, G., Masante, D., Spinoni, J., Cammalleri, C., Erian, W., Pischke, F., Pulwarty, R., Barbosa, P., Drought Risk Assessment. A conceptual Framework. EUR 29464 EN, Publications Office of the European Union, Luxembourg, 2018. ISBN 978-92-79-97469-4, doi:10.2760/057223, JRC113937

[5] Paprotny, D., Sebastian, A., Morales-Nápoles, O., & Jonkman, S. N. (2018). Trends in flood losses in Europe over the past 150 years. Nature communications, 9(1), 1985.

[6] Alfieri, L., Bisselink, B., Dottori, F., Naumann, G., de Roo, A., Salamon, P., … & Feyen, L. (2017). Global projections of river flood risk in a warmer world. Earth’s Future, 5(2), 171-182.

[7] Lu, J., Carbone, G. J., & Grego, J. M. (2019). Uncertainty and hotspots in 21st century projections of agricultural drought from CMIP5 models. Scientific reports, 9(1), 4922.

[8] Stagge, J. H., Kingston, D. G., Tallaksen, L. M., & Hannah, D. M. (2017). Observed drought indices show increasing divergence across Europe. Scientific reports, 7(1), 14045.

[9] Webber, H., Ewert, F., Olesen, J. E., Müller, C., Fronzek, S., Ruane, A. C., … & Ferrise, R. (2018). Diverging importance of drought stress for maize and winter wheat in Europe. Nature communications, 9(1), 4249.

[10] Toreti, A., Cronie, O., & Zampieri, M. (2019). Concurrent climate extremes in the key wheat producing regions of the world. Scientific reports, 9(1), 5493.

[11] https://ec.europa.eu/eurostat/statistics-explained/index.php/Agricultural_production_-_crops last access July 14th, 2019

[12] Schils, R., Olesen, J. E., Kersebaum, K. C., Rijk, B., Oberforster, M., Kalyada, V., … & Manolov, I. (2018). Cereal yield gaps across Europe. European journal of agronomy, 101, 109-120.

Sponge cities

My editorial on YouTube

I am developing on the same topic I have already highlighted in « Another idea – urban wetlands », i.e. on urban wetlands. By the way, I have found a similar, and interesting, concept in the existing literature: the sponge city. It is being particularly promoted by Chinese authors. I am going for a short review of the literature on this specific topic, and I am starting with correcting a mistake I made in my last update in French, « La ville – éponge », when discussing the article by Shao et al. (2018[1]). I got confused in the conversion of square metres into square kilometres. I forgot that 1 km2 = 10^6 m2, not 10^3. Thus, correcting myself now, I rerun the corresponding calculations. The Chinese city of Xiamen, population 3 500 000, covers an area of 1 865 km2, i.e. 1 865 000 000 m2. Within that, 118 km2 = 118 000 000 m2 are infrastructures of the sponge city, or purposefully arranged urban wetlands. Annual precipitations in Xiamen, according to Climate-Data.org, are 1 131 millimetres per year, thus 1 131 litres, i.e. 1,131 m3, of water per 1 m2. Hence, the entire city of Xiamen receives 1 865 000 000 m2 * 1,131 m3/m2 = 2 109 315 000 m3 of precipitation a year, and the sole area of urban wetlands, those 118 square kilometres, receives 118 000 000 m2 * 1,131 m3/m2 = 133 458 000 m3. The infrastructures of the sponge city in Xiamen have a target capacity of 2% regarding the retention of rainwater, which gives 2 669 160 m3.
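Just to make those corrected conversions easy to check, here is the same arithmetic as a short Python sketch; the inputs are the figures quoted above.

```python
# Re-running the Xiamen arithmetic from above. 1 mm of rainfall equals
# 1 litre per square metre, i.e. 0.001 m3 per square metre.
city_area_m2   = 1_865_000_000   # 1 865 km2
sponge_area_m2 =   118_000_000   # 118 km2 of sponge-city infrastructure
rainfall_mm    = 1_131           # annual precipitation in Xiamen
target_retention = 0.02          # 2% retention target (Shao et al. 2018)

rain_per_m2_m3 = rainfall_mm / 1000
city_rain_m3   = city_area_m2 * rain_per_m2_m3
sponge_rain_m3 = sponge_area_m2 * rain_per_m2_m3
retained_m3    = sponge_rain_m3 * target_retention

print(f"Rain on the whole city:   {city_rain_m3:,.0f} m3/year")
print(f"Rain on the sponge areas: {sponge_rain_m3:,.0f} m3/year")
print(f"Target retention (2%):    {retained_m3:,.0f} m3/year")
```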

Jiang et al. (2018[2]) present a large-scale strategy for the development of sponge cities in China. The first takeaway I notice is the value of investment in sponge-city infrastructures across a total of 30 cities in China. Those 30 cities are supposed to absorb $275,6 billion in the corresponding infrastructural investment, thus an average of $9,19 billion per city. The first on the list is Qian’an, population 300 000, area 3 522 km2, total investment planned I = $5,1 billion. That gives $17 000 per resident, and $1 448 041 per 1 km2 of urban area. The city of Xiamen, whose case is discussed by the previously cited Shao et al. (2018[3]), has already got $3,3 billion in investment, with a target at I = $14,14 billion, thus at $4800 per resident, and $7 721 180 per square kilometre. Generally, the intensity of investment, counted per capita or per unit of surface, is really disparate. This is, by the way, commented on by the authors: they stress the fact that sponge cities are so novel a concept that local experimentation is the norm, not the exception.

Wu et al. (2019[4]) present another case study, from among the cities listed in Jiang et al. (2018), namely the city of Wuhan. Wuhan is probably the biggest project of sponge city in terms of capital invested: $20,04 billion, distributed across 293 detailed initiatives. Started after a catastrophic flood in 2016, the project has also proven its value in protecting the city from floods, and, apparently, it is working. As far as I could understand, the case of Wuhan was the first domino block in the chain, the one that triggered the whole, nation-wide programme of sponge cities.

Shao et al. (2016[5]) present an IT approach to organizing sponge cities, focusing on the issue of data integration. The corresponding empirical field study had apparently been conducted in Fenghuang County, Hunan province. The main engineering challenge consists in integrating geographical data from geographic information systems (GIS) with data pertinent to urban infrastructures, mostly CAD-based, thus graphical. On top of that, spatial data needs to be integrated with attribute data, i.e. with the characteristics of both infrastructural objects and their natural counterparts. All that integrated data is supposed to serve the efficient application of the so-called Low Impact Development (LID) technology. With Fenghuang County, we can see the case of a relatively small area: 30,89 km2, 350 195 inhabitants, with a density of population of 200 people per 1 km2. The integrated data system was based on dividing that area into 417 sub-catchments, thus some 74 077 m2 per catchment.

Good, so this is like a cursory review of literature on the Chinese concept of sponge city. Now, I am trying to combine it with another concept, which I first read about in a history book, namely Civilisation and Capitalism by Fernand Braudel, volume 1: The Structures of Everyday Life[6]: the technology of lifting and pumping water from a river with the help of kinetic energy of waterwheels propelled by the same river. Apparently, back in the day, in cities like Paris, that technology was commonly used to pump river water onto the upper storeys of buildings next to the river, and even to the further-standing buildings. Today, we are used to water supply powered by big pumps located in strategic nodes of large networks, and we are used to seeing waterwheels as hydroelectric turbines. Still, that old concept of using directly the kinetic energy of water seems to pop up again, here and there. Basically, it has been preserved in a slightly different form. Do you know that image in movies, with that windmill in the middle of a desert? What is the point of putting a windmill in the middle of a desert? To pump water from a well. Now, let’s make a little jump from wind power to water power. If we can use the force of wind to pump water from underground, we can use the force of water in a river to pump water from that river.  

In scientific literature, I found just one article making reference to it, namely Yannopoulos et al. (2015[7]). Still, in the less formal areas, I found some more stuff. I found that U.S. patent, from 1951, for a water-wheel-driven brush. I found more modern a technology of the spiral pump, created by a company called PreScouter. Something similar is being proposed by the Dutch company Aqysta. Here are some graphics to give you an idea:


Now, I put together the infrastructure of a sponge city and the technology of pumping water uphill using the energy of the water itself. I have provisionally named the thing « Energy Ponds ». Water wheels power water pumps, which convey water to elevated tanks, like water towers. From the water towers, water falls back down to the ground level, passes through small hydroelectric turbines on its way down, and lands in the infrastructures of a sponge city, where it is stored. Here below, I am trying to make a coherent picture of it. The general concept can be extended, which I present graphically further below: the infrastructure of the sponge city collects excess water from rainfall or floods, and partly conducts it to the local river(s). What prevents the river from overflowing, or limits the degree of overflowing, is precisely the basic concept of Energy Ponds, i.e. those water-powered water pumps that pump water into elevated tanks. The more water flows in the river – in case of flood or an immediate threat thereof – the more power in those pumps, the more flow through the elevated tanks, and the more flow through hydroelectric turbines, hence the more electricity. As long as the whole infrastructure physically withstands the environmental pressure of heavy rainfall and flood waves, it can work and serve.

My next step is to outline the business and financial framework of the « Energy Ponds » concept, taking the data provided by Jiang et al. (2018) about 29 sponge-city projects in China, squeezing as much information as I can from it, and adding the component of hydroelectricity. I transcribed their data into an Excel file, and added some calculations of my own, together with data about demographics and annual rainfall. Here comes the Excel file with data as of July 5th 2019. A pattern emerges. All 29 local clusters of projects display quite an even coefficient of capital invested per 1 km2 of construction area in those projects: it is $320 402 571,51 on average, with quite a low standard deviation, namely $101 484 206,43. Interestingly, that coefficient is not significantly correlated either with the local amount of rainfall per 1 m2 or with the density of population. It looks like quite an autonomous variable, and yet a recurrent proportion.

Another interesting pattern is to be found in the percentage of the total surface, in each of the cities studied, devoted to being filled with sponge-type infrastructure. The average value of that percentage is 0,61%, accompanied by quite big a standard deviation: 0,63%. That gives an overall variability of 1,046. Still, that percentage is correlated with two other variables: annual rainfall, in millimetres per square metre, as well as the density of population, i.e. the average number of people per square kilometre. Measured with the Pearson coefficient of correlation, the former yields r = 0,45, and the latter r = 0,43: not very much, yet respectable, as correlations go.

From underneath those coefficients of correlation, common sense pokes its head. The more rainfall per unit of surface, the more water there is to retain, and thus the more we can gain by installing the sponge-type infrastructure. The more people per unit of surface, the more people can directly benefit from installing that infrastructure, per 1 km2. This one stands to reason, too.

There is an interesting lack of correlations in that lot of data taken from Jiang et al. (2018). The number of local projects, i.e. projects per one city, is virtually not correlated with anything else, and, intriguingly, is negatively correlated, at Pearson r = – 0,44, with the size of local populations. The more people in the city, the fewer local sponge-city projects there are.

By the way, I have some concurrent information on the topic. According to a press release by Voith, this company has recently acquired a contract with the city of Xiamen, one of the sponge-cities, for the supply of large hydroelectric turbines in the technology of pumped storage, i.e. almost exactly the thing I have in mind.

Now, the Chinese programme of sponge cities is a starting point for me to reverse-engineer my own concept of « Energy Ponds ». I assume that four economic aggregates pay off for the corresponding investment: a) the Net Present Value of proceeds from producing electricity in water turbines b) the Net Present Value of savings on losses connected to floods c) the opportunity cost of tap water available from the retained precipitations, and d) the incremental change in the market value of the real estate involved.

There is a city, with N inhabitants, who consume R m3 of water per year, R/N per person per year, and they consume E kWh of energy per year, E/N per person per year. R divided by 8760 hours in a year (R/8760) is the approximate amount of water the local population needs to have in current constant supply. Same for energy: E/8760 is a good approximation of power, in kW, that the local population needs to have standing and offered for immediate use.

The city collects F millimetres of precipitation a year. Note that F mm = F litres per m2, i.e. F/1000 m3 per m2. With a density of population of D people per 1 km2, the average square kilometre has what I call the sponge function: D*(R/N) = f(F*10^3). Each square kilometre collects F*10^3 cubic metres (F*10^6 litres) of precipitation a year, and this amount remains in a recurrent proportion to the aggregate amount of water that the D people living on that square kilometre consume per year.
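A minimal numerical sketch of that sponge function, with placeholder inputs for F, D and R/N (the rainfall figure is roughly the one I use later for Krakow, the consumption figure is purely hypothetical):

```python
# Sketch of the "sponge function" above: how much water an average square
# kilometre receives, versus how much its residents consume per year.
# All inputs are placeholders for a hypothetical city.
F = 678               # annual precipitation, mm (i.e. litres per m2)
D = 2_300             # residents per km2
R_per_person = 40.0   # m3 of water consumed per person per year (placeholder)

precipitation_per_km2_m3 = F * 1_000          # F mm over 10^6 m2 = F*10^3 m3
consumption_per_km2_m3   = D * R_per_person   # D * (R/N)

ratio = consumption_per_km2_m3 / precipitation_per_km2_m3
print(f"Rain collected per km2:  {precipitation_per_km2_m3:,.0f} m3/year")
print(f"Water consumed per km2:  {consumption_per_km2_m3:,.0f} m3/year")
print(f"Consumption / rainfall:  {ratio:.2f}")
```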

The population of N residents spends an aggregate PE*E on energy, and an aggregate PR*R on water, where PE and PR are the respective prices of energy and water. The supply of water and energy happens at levelized costs per unit. The reference math here is the standard calculation of LCOE, or Levelized Cost of Energy, in an interval of time t, measured as LCOE(t) = [IE(t) + ME(t) + UE(t)] / E, where IE is the amount of capital invested in the fixed assets of the corresponding power installations, ME is their necessary cost of current maintenance, and UE is the cost of fuel used to generate energy. Per analogy, the levelized cost of water can be calculated as LCOR(t) = [IR(t) + MR(t) + UR(t)] / R, with the same logic: investment in fixed assets plus cost of current maintenance plus cost of water strictly speaking, all that divided by the quantity of water consumed. Mind you, in the case of water, the UR(t) part could easily be zero, and yet it does not have to be. Imagine a general municipal provider of water, who buys rainwater collected in private, local installations of the sponge type, at UR(t) per cubic metre, that sort of thing.
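Here is the levelized-cost logic as a small Python sketch. The cost and quantity figures are placeholders; the point is only the structure of the calculation.

```python
# Levelized cost per unit, as defined above: (capital invested + maintenance
# + fuel or feedstock) divided by the quantity delivered over the period.
# All figures are placeholders.
def levelized_cost(capital, maintenance, inputs, quantity):
    return (capital + maintenance + inputs) / quantity

lcoe = levelized_cost(capital=5_000_000, maintenance=400_000,
                      inputs=0, quantity=14_000_000)      # kWh; hydro, no fuel
lcor = levelized_cost(capital=2_000_000, maintenance=150_000,
                      inputs=50_000, quantity=1_200_000)  # m3 of water

print(f"LCOE: {lcoe:.3f} per kWh")
print(f"LCOR: {lcor:.3f} per m3")
```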

The supply of water and energy generates gross margins: E(t)*(PE(t) – LCOE(t)) and R(t)*(PR(t) – LCOR(t)). These margins can be rephrased as, respectively, PE(t)*E(t) – IE(t) – ME(t) – UE(t), and PR(t)*R(t) – IR(t) – MR(t) – UR(t). Gross margins are gross cash flows, which finance organisations (jobs) attached to the supply of, respectively, water and energy, and generate some net surplus. Here comes a little difficulty with appraising the net surplus from the supply of water and energy. Long story short: the levelized values of the « LCO-whatever follows » type explicitly incorporate the yield on capital investment. Each unit of output is supposed to yield a return on the investment I. Still, this is not how classical accounting defines a cost. The amounts assigned to costs, both variable and fixed, correspond to current expenditures strictly speaking, i.e. to payments for the current services of people and things, without any residual value sedimenting over time. It is only after I account for those strictly current outlays that I can calculate the current margin, and a fraction of that margin can be considered as the direct yield on my investment. In standard, basic accounting, the return on investment is the net income divided by the capital invested. The net income is calculated as π = Q*P – Q*VC – FC – r*I – T, where Q and P are quantity and price, VC is the variable cost per unit of output Q, FC stands for the fixed costs, r is the price of capital (interest rate) on the capital I, invested in the given business, and T represents taxes. In the same standard accounting, the thus-calculated net income π is then put into the formula of the internal rate of return on investment: IRR = π / I.
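And the classical-accounting angle, again as a sketch with placeholder numbers:

```python
# Classical-accounting sketch of the net income and the return on investment
# defined above: pi = Q*P - Q*VC - FC - r*I - T, then IRR = pi / I.
# All numbers are placeholders.
Q, P = 14_000_000, 0.18     # kWh sold, price per kWh
VC, FC = 0.02, 400_000      # variable cost per kWh, fixed costs per year
I, r = 20_000_000, 0.03     # capital invested, interest on that capital
T = 150_000                 # taxes

pi = Q * P - Q * VC - FC - r * I - T
irr = pi / I
print(f"Net income pi: {pi:,.0f}")
print(f"Return on the capital invested: {irr:.1%}")
```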

When I calculate my margin of profit on the sales of energy or water, I have those two angles of approach. Angle #1 consists in using the levelized cost: the margin generated over that cost, i.e. P – LC (price minus levelized cost), can then be allocated to purposes other than the return on investment. Angle #2 comes from traditional accounting: I calculate my margin without reference to the capital invested, and only then do I use some residual part of that margin as the return on investment. I guess that levelized costs work well in the accounting of infrastructural systems with nicely predictable output. When the quantity demanded, and offered, in the market of energy or water is really recurrent and easy to predict, thus in well-established infrastructures with stable populations around, the LCO method yields accurate estimations of costs and margins. On the other hand, when the infrastructures in question are developing quickly and/or when their host populations change substantially, classical accounting seems more appropriate, with its sharp distinction between current costs and capital outlays.

Anyway, I start modelling the first component of the possible payoff on investment in the infrastructures of « Energy Ponds », i.e. the Net Present Value of proceeds from producing electricity in water turbines. As I generally like staying close to real life (well, most of the time), I will be wrapping my thinking around my hometown, where I still live, i.e. Krakow, Poland; area of the city: 326,8 km2, area of the metropolitan area: 1023,21 km2. As for annual precipitations, data from Climate-Data.org[1] tells me that it is a bit more than the general Polish average of 600 mm a year. Apparently, Krakow receives an annual rainfall of 678 mm, which, when translated into litres received by the whole area, makes a total rainfall on the city of 221 570 400 000 litres, and, when enlarged to the whole metropolitan area, makes 693 736 380 000 litres.

In the generation of electricity from hydro turbines, what counts is the flow, measured in litres per second. The above-calculated total rainfall is now to be divided by 365 days, then by 24 hours, and then by 3600 seconds in an hour. Long story short, you divide the annual rainfall in litres by the constant of 31 536 000 seconds in one year. Mind you, in a leap year, it will be 31 622 400 seconds. This step leads me to an estimated total flow of 7 026 litres per second in the city area, and 21 998 litres per second in the metropolitan area. Question: what amount of electric power can I get with that flow? I am using a formula I found at Renewables First.co.uk[2]: the flow, in kilograms per second, multiplied by the gravitational acceleration g = 9,81 m/s2, multiplied by the average efficiency of a hydro turbine, equal to 75,1%, and further multiplied by the net head – or net difference in height – of the water flow. All that gives electric power in watts. All in all, when you want to calculate the electric power dormant in your local rainfall, take the total amount of said rainfall, in litres falling on the entire place you can possibly collect that rainwater from, divide it by the 31 536 000 seconds in a year to get the flow in kilograms per second, and multiply that flow by 0,0073673 times the head of the water flow in metres. You will get power in kilowatts, with that implied efficiency of 75,1% in your technology.

For the sake of simplicity, I assume that, in those installations of elevated water tanks, the average elevation, thus the head of the subsequent water flow through hydro turbines, will be H = 10 m. That leads me to P = 518 kW available from the annual rainfall on the city of Krakow, when elevated to H = 10 m, and, accordingly, P = 1 621 kW for the rainfall received over the entire metropolitan area.
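Those two power figures can be reproduced with a few lines of Python, under the assumptions stated above (10-metre head, 75,1% efficiency, annual rainfall treated as a constant flow):

```python
# Reproducing the Krakow figures above: annual rainfall in litres, divided
# by the seconds in a year, gives the flow in kg/s; flow * g * efficiency
# * head gives power in watts. The 10 m head and the 75.1% efficiency are
# the assumptions stated in the text.
SECONDS_PER_YEAR = 31_536_000
G, EFFICIENCY, HEAD = 9.81, 0.751, 10.0

def power_kw(annual_rainfall_litres):
    flow_kg_s = annual_rainfall_litres / SECONDS_PER_YEAR  # 1 litre ~ 1 kg
    return flow_kg_s * G * EFFICIENCY * HEAD / 1000

print(f"City of Krakow:    {power_kw(221_570_400_000):,.0f} kW")
print(f"Metropolitan area: {power_kw(693_736_380_000):,.0f} kW")
```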

In the next step, I want to calculate the market value of that electric power, in terms of revenues from its possible sales. I take the power, and I multiply it by the 8 760 hours in a year (8 784 hours in a leap year). I get an amount of electricity for sale equal to E = 4 534 383 kWh from the rainfall received over the city of Krakow strictly speaking, and E = 14 197 142 kWh if we hypothetically collect rainwater from the entire metro area.

Now, the pricing. According to data available at GlobalPetrolPrices.com[3], the average price of electricity in Poland is PE = $0,18 per kWh. Still, when I get, more humbly, to my own electricity bill, and crudely divide the amount billed in Polish zlotys by the amount used in kWh, I get to something like PE = $0,21 per kWh. The discrepancy might be coming from the complexity of that price: it is the actual price per kWh used, plus all sorts of constant charges per kW of power made available. With those prices, the market value of the corresponding revenues from selling electricity from rainfall used smartly would be like $816 189 ≤ Q*PE ≤ $952 220 a year from the city area, and $2 555 485 ≤ Q*PE ≤ $2 981 400 a year from the metropolitan area.

I transform those revenues, even before accounting for any current costs, into a stream spread over 8 years, the average lifecycle of an average investment project. Those 8 years are what is usually expected as the time of full return on investment in those more long-term, infrastructure-like projects. With a technological lifecycle of around 20 years, such projects are supposed to pay for themselves over the first 8 years, the following 12 years bringing a net surplus to investors. Depending on the pricing of electricity, and with a discount rate of r = 5% a year, it gives something like $5 275 203 ≤ NPV(Q*PE ; 8 years) ≤ $6 154 403 for the city area, and $16 516 646 ≤ NPV(Q*PE ; 8 years) ≤ $19 269 421 for the metropolitan area.
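Finally, a sketch that strings the whole chain together – power, energy, revenue at the two electricity prices, and the 8-year NPV at a 5% discount rate – and approximately reproduces the ranges quoted above (small differences come from rounding the power figures):

```python
# Reproducing the revenue and NPV ranges above: power -> kWh per year ->
# revenue at two electricity prices -> NPV over 8 years at a 5% discount
# rate, treated as a constant annuity.
def npv_of_annuity(yearly_revenue, years=8, rate=0.05):
    return sum(yearly_revenue / (1 + rate) ** t for t in range(1, years + 1))

for label, power_kw in [("city", 517.6), ("metro area", 1620.7)]:
    energy_kwh = power_kw * 8760
    for price in (0.18, 0.21):
        revenue = energy_kwh * price
        print(f"{label}, {price} $/kWh: revenue {revenue:,.0f} $/year, "
              f"NPV(8y, 5%) {npv_of_annuity(revenue):,.0f} $")
```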

When I compare that stream of revenue to what is being actually done in the Chinese sponge cities, discussed a few paragraphs earlier, one thing jumps to the eye: even with the most optimistic assumption of capturing 100% of rainwater, so as to make it flow through local hydroelectric turbines, there is no way that selling electricity from those turbines pays off for the entire investment. This is a difference in the orders of magnitude, when we compare investment to revenues from electricity.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest to me two things that Patreon suggests I should ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] https://en.climate-data.org/europe/poland/lesser-poland-voivodeship/krakow-715022/ last access July 7th 2019

[2] https://www.renewablesfirst.co.uk/hydropower/hydropower-learning-centre/how-much-power-could-i-generate-from-a-hydro-turbine/ last access July 7th, 2019

[3] https://www.globalpetrolprices.com/electricity_prices/ last access July 8th 2019

[1] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

[2] Jiang, Y., Zevenbergen, C., & Ma, Y. (2018). Urban pluvial flooding and stormwater management: A contemporary review of China’s challenges and “sponge cities” strategy. Environmental science & policy, 80, 132-143.

[3] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

[4] Wu, H. L., Cheng, W. C., Shen, S. L., Lin, M. Y., & Arulrajah, A. (2019). Variation of hydro-environment during past four decades with underground sponge city planning to control flash floods in Wuhan, China: An overview. Underground Space, article in press

[5] Shao, W., Zhang, H., Liu, J., Yang, G., Chen, X., Yang, Z., & Huang, H. (2016). Data integration and its application in the sponge city construction of China. Procedia Engineering, 154, 779-786.

[6] Braudel, F., & Reynolds, S. (1979). Civilization and capitalism 15th-18th Century, vol. 1, The structures of everyday life. Civilization, 10(25), 50.

[7] Yannopoulos, S., Lyberatos, G., Theodossiou, N., Li, W., Valipour, M., Tamburrino, A., & Angelakis, A. (2015). Evolution of water lifting devices (pumps) over the centuries worldwide. Water, 7(9), 5031-5060.

Another idea – urban wetlands

My editorial on You Tube

I have just come up with an idea. One of those big ones, the kind that pushes you to write a business plan and some scientific stuff as well. Here is the idea: a network of ponds and waterways, made in the close vicinity of a river, being both a reservoir of water – mostly the excess rainwater from big downpours – and a location for a network of small water turbines. The idea comes from a few observations, as well as other ideas, that I have had over the last two years. Firstly, in Central Europe, we have less and less water from the melting snow – as there is almost no snow anymore in winter – and more and more water from sudden, heavy rain. We need to learn how to retain rainwater in the most efficient way. Secondly, as we have local floods due to heavy rains, some sort of spontaneous formation of floodplains happens. Even if there is no visible pond, the ground gets a bit spongy and soaked, flood after flood. We have more and more mosquitoes. If it is happening anyway, let’s use it creatively. This particular point is visualised in the map below, with the example of Central and Southern Europe. Thus, my idea is to purposefully utilise a naturally happening phenomenon, which is itself a component of climate change.

Source: https://www.eea.europa.eu/data-and-maps/figures/floodplain-distribution last access June 20th, 2019

Thirdly, there is some sort of new generation in water turbines: a whole range of small devices, simple and versatile, has come to the market.  You can have a look at what those guys at Blue Freedom are doing. Really interesting. Hydroelectricity can now be approached in an apparently much less capital-intensive way. Thus, the idea I have is to arrange purposefully the floodplains we have in Europe into as energy-efficient and carbon-efficient places as possible. I give the general idea graphically in the picture below.

I am approaching the whole thing from the economic point of view, i.e. I want a piece of floodplain arranged into this particular concept to have more value, financial value included, than the same piece of floodplain just being ignored in its inherent potential. I can see two distinct avenues for developing the concept: that of a generally wild, uninhabited floodplain, like public land, as opposed to an inhabited floodplain, with existing or ongoing construction, residential or other. The latter is precisely what I want to focus on. I want to study, and possibly to develop a business plan for, a human habitat combined with a semi-aquatic ecosystem, i.e. a network of ponds, waterways and water turbines in places where people live and work. Hence, from the geographic point of view, I am focusing on places where the secondary formation of floodplain-type terrain already occurs in towns and cities, or in the immediate vicinity thereof. For more than a century, the growth of urban habitats has been accompanied by the entrenching of waterways in strictly defined, concrete-reinforced beds. I want to go the other way, and let those rivers spill their waters around, into wetlands, in a manner beneficial to human dwelling.

My initial approach to the underlying environmental concept is market based. Can we create urban wetlands, in flood-threatened areas, where the presence of explicitly and purposefully arranged aquatic structures increases the value of property enough to exceed the investment required? I start with the most fundamental landmarks in the environment. I imagine a piece of land in an urban area. It has its present market value, and I want to study its possible value in the future.

I imagine a piece of land located in an urban area with the characteristics of a floodplain, i.e. recurrently threatened by local floods or the secondary effects thereof. At the moment ‘t’, that piece of land has a market value M(t) = S * m(t), being the product of its total surface S, constant over time, and the market price m(t) per unit of surface, changing over time. There are two moments in time: the initial moment t0, and the subsequent moment t1, after the development into urban wetland. Said development requires a stream of investment I(t0 -> t1). I want to study the conditions for M(t1) – M(t0) > I(t0 -> t1). As surface S is constant over time, my problem breaks down into units of surface, whence the aggregate investment I(t0 -> t1) decomposes into I(t0 -> t1) = S * i(t0 -> t1), and the problem can be restated as m(t1) – m(t0) > i(t0 -> t1).
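Just to restate that break-even condition in code – a minimal sketch with purely illustrative numbers, not data:

```python
# The development pays off when the rise in market price per unit of surface
# exceeds the investment per unit of surface spent between t0 and t1.
def wetland_pays_off(m_t0, m_t1, i_unit):
    """m_t0, m_t1: market price per m2 before and after development;
    i_unit: investment per m2 over the development period."""
    return (m_t1 - m_t0) > i_unit

print(wetland_pays_off(m_t0=500.0, m_t1=650.0, i_unit=120.0))   # True, since 150 > 120
```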

I assume the market price m(t) is based on two types of characteristics: those directly measurable as financials, for one, e.g. the average wage a resident can expect from a locally based job, and those more diffuse ones, whose translation into financial variables is subtler, and sometimes pointless. I allow myself to call the latter ones ‘environmental services’. They cover quite a broad range of phenomena, ranging from the access to clean water outside the public water supply system, all the way to subjectively perceived happiness and well-being. All in all, mathematically, I say m(t) = f(x1, x2, …, xk) : the market price of construction land in cities is a function of k variables. Consistently with the above, I assume that f[t1; (x1, x2, …, xk)] – f[t0; (x1, x2, …, xk)] > i(t0 -> t1).    

It is intellectually honest to tackle those characteristics of urban land that make up its market price. There is a useful observation about cities: anything that impacts the value of urban real estate sooner or later translates into rent that people are willing to pay for being able to stay there. Please notice that even when we own a piece of real estate, i.e. when we have property rights to it, we usually pay someone some kind of periodic allowance for being able to execute our property rights fully: the real estate tax, the maintenance fee paid to the management of residential condominiums, the fee for sanitation services (e.g. garbage collection) etc. Any urban piece of land has a rent tag attached. Even those characteristics of a place which pertain mostly to the subjectively experienced pleasure and well-being derived from staying there have a rent-like price attached to them, at the end of the day.

Good. I have made a sketch of the thing. Now, I am going to pass in review some published research, in order to set my landmarks. I start with some literature regarding urban planning, and as soon as I do so, I discover an application for artificial intelligence, a topic of interest for me over those last months. Lyu et al. (2017[1]) present a method for procedural modelling of urban layout, and in their work I can spot something similar to the equations I have just come up with: complex analysis of land-suitability. It starts with dividing the total area of urban land at hand, in a given city, into standard units of surface. Geometrically, they look nice when they are equisized squares. Each unit ‘i’ can be potentially used for many alternative purposes. Lyu et al. distinguish 5 typical uses of urban land: residential, industrial, commercial, official, and open & green. Each such surface unit ‘i’ is endowed with a certain suitability for different purposes, and this suitability is a function of a finite number of factors. Formally, the suitability s(i,k) of land unit i for use k is a weighted average over a vector of factors, where w(k,j) is the weight of factor j for land use k, and r(i,j) is the rating of land unit i on factor j. Below, I am trying to reproduce graphically the general logic of this approach.
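In parallel to the graphical reproduction, a toy numerical sketch may help. This is my own reading of the paragraph above, not the exact procedure of Lyu et al.: the suitability of one unit for one use is a weighted average of its factor ratings.

```python
# Suitability s(i,k) of one land unit i for one land use k, as a weighted average
# of the unit's factor ratings r(i,j) with use-specific weights w(k,j).
def suitability(ratings, weights):
    """ratings and weights are equal-length lists; weights assumed to sum to 1."""
    return sum(w * r for w, r in zip(weights, ratings))

# One land unit, three hypothetical factors (say: flood risk, transport access, greenery),
# and illustrative weights for the 'residential' use:
ratings_unit_i = [0.3, 0.8, 0.6]
weights_residential = [0.5, 0.3, 0.2]
print(suitability(ratings_unit_i, weights_residential))   # 0.51
```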

In a city approached analytically with the general method presented above, Lyu et al. (2017[1]) distribute three layers of urban layout: population, road network, and land use. It starts with an initial state (input state) of population, land use, and available area. In a first step of the procedure, a simulation of highways and arterial transport connections is made. The transportation grid suggests some kind of division of urban space into districts. As far as I understand it, Lyu et al. define districts as functional units with the quantitative dominance of certain land uses, i.e. residential vs. industrial rather than rich folks’ estate vs. losers’ end, sort of.

As a first sketch of district division is made, it allows simulating a first distribution of population in the city, and a first draft of land use. The distribution of population is largely a distribution of density in population, and the corresponding transportation grid is strongly correlated with it. Some modes of urban transport work only above some critical thresholds in the density of population. This is an important point: density of population is a critical variable in social sciences.

Then, some kind of planning freedom can be allowed inside districts, which results in a second draft of spatial distribution in population, where a new type of unit – a neighbourhood – appears. Lyu et al. do not explain in detail the concept of neighbourhood, and yet it is interesting. It suggests the importance of spontaneous settlement vs. that of planned spatial arrangement.

I am strongly attached to that notion of spontaneous settlement. I am firmly convinced that in the long run people live where they want to live, and urban planning can just make that process somewhat smoother and more efficient. Thus comes another article in my review of literature, by Mahmoud & Divigalpitiya (2019[2]). By the way, I have an interesting meta-observation: most recent literature about urban development is based on empirical research in emerging economies and in developing countries, with the U.S. coming next, and Europe lagging far behind. In Europe, we do very little research about our own social structures, whilst them Egyptians or Thais are constantly studying the way they live collectively.

Anyway, back to Mahmoud & Divigalpitiya (2019[3]): the article is interesting from my point of view because its authors study the development of new towns and cities. For me, it is an insight into how radically new urban structures sink into the incumbent spatial distribution of population. The specific background of this particular study is a public policy of the Egyptian government to establish, in a planned manner, new cities some distance away from the Nile, and to do it so as to minimize the encroachment on agricultural land. Thus, we have scarce space to fit people into, with optimal use of land.

As I study that paper by Mahmoud & Divigalpitiya, some kind of extension to my initial idea emerges. Those researchers report that with proper water and energy management, more specifically with the creation of irrigative structures like those which I came up with – networks of ponds and waterways – paired with a network of small hydropower units, it is possible both to accommodate an increase of 90% in the local urban population, and to create 3,75% more agricultural land. Another important finding about those new urban communities in Egypt is that they tend to grow by sprawl rather than by distant settlement. New city dwellers tend to settle close to the incumbent residents, rather than in more remote locations. In simple words: it is bloody hard to create a new city from scratch. Habits and social links are like a tangible expanse of matter, which puts up resistance to distortion.

I switch to another paper based on Egyptian research, namely that by Hatata et al. 2019[4], relative to the use of small hydropower generators. The paper is rich in technicalities, and therefore I make a note to come back to it many times as I go deeper into the details of my concept. For now, I have a few general takeaways. Firstly, it is wise to combine small hydro off grid with small hydro connected to the power grid, and, more generally, small hydro looks like a good complementary source of power next to a regular grid, rather than a 100% autonomous power base. Still, full autonomy is possible, mostly with the technology of the Permanent Magnet Synchronous Generator. Secondly, Hatata et al. present a calculation of economic value in hydropower projects, based on their Net Present Value, which, in turn, is calculated on the grounds of a basic assumption that hydropower installations carry some residual capital value Vr over their entire lifetime, and additionally can generate a current cash flow determined by: a) the revenue Rt from the sales of energy, b) the locally needed investment It, c) the operating cost Ot, and d) the maintenance cost Mt, all that in the presence of a periodic discount rate r.
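My reading of that valuation logic, as a hedged sketch rather than the paper’s exact equation: the yearly cash flow is Rt minus It, Ot and Mt, discounted at r, with the residual value Vr discounted from the end of the lifetime.

```python
# NPV of a small hydro project: discounted yearly cash flows plus discounted residual value.
def small_hydro_npv(cash_flows, residual_value, r=0.05):
    """cash_flows: list of (R_t, I_t, O_t, M_t) tuples for years 1..T;
    residual_value: capital value Vr remaining at the end of year T."""
    npv = 0.0
    for t, (R, I, O, M) in enumerate(cash_flows, start=1):
        npv += (R - I - O - M) / (1 + r) ** t
    T = len(cash_flows)
    return npv + residual_value / (1 + r) ** T

# Purely illustrative numbers:
flows = [(120_000, 0, 15_000, 5_000)] * 10       # ten identical operating years
flows[0] = (120_000, 400_000, 15_000, 5_000)     # up-front investment booked in year 1
print(round(small_hydro_npv(flows, residual_value=80_000)))
```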

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Lyu, X., Han, Q., & de Vries, B. (2017). Procedural modeling of urban layout: population, land use, and road network. Transportation research procedia, 25, 3333-3342.

[2] Mahmoud, H., & Divigalpitiya, P. (2019). Spatiotemporal variation analysis of urban land expansion in the establishment of new communities in Upper Egypt: A case study of New Asyut city. The Egyptian Journal of Remote Sensing and Space Science, 22(1), 59-66.

[3] Mahmoud, H., & Divigalpitiya, P. (2019). Spatiotemporal variation analysis of urban land expansion in the establishment of new communities in Upper Egypt: A case study of New Asyut city. The Egyptian Journal of Remote Sensing and Space Science, 22(1), 59-66.

[4] Hatata, A. Y., El-Saadawi, M. M., & Saad, S. (2019). A feasibility study of small hydro power for selected locations in Egypt. Energy Strategy Reviews, 24, 300-313.


What are the practical outcomes of those hypotheses being true or false?

 

My editorial on You Tube

 

This is one of those moments when I need to reassess what the hell I am doing. Scientifically, I mean. Of course, it is good to reassess things existentially, too, every now and then, but for the moment I am limiting myself to science. Simpler and safer than life in general. Anyway, I have a financial scheme in mind, where local crowdfunding platforms serve to support the development of local suppliers in renewable energies. The scheme is based on the observable difference between prices of electricity for small users (higher), and those reserved for industrial-scale users (lower). I wonder if small consumers would be ready to pay the normal, relatively higher price in exchange for a package made of: a) electricity and b) shares in the equity of its suppliers.

I have a general, methodological hypothesis in mind, which I have been trying to develop over the last 2 years or so: collective intelligence. I hypothesise that collective behaviour observable in markets can be studied as a manifestation of collective intelligence. The purpose is to go beyond optimization and to define, with scientific rigour, the alternative, essentially equiprobable paths of change that a complex market can take. I think such an approach is useful when I am dealing with an economic model with a lot of internal correlation between variables, and that correlation can be so strong that those variables basically loop on each other. In such a situation, distinguishing independent variables from dependent ones becomes bloody hard, and methodologically doubtful.

On the grounds of literature, and my own experimentation, I have defined three essential traits of such collective intelligence: a) distinction between structure and instance b) capacity to accumulate experience, and c) capacity to pass between different levels of freedom in social cohesion. I am using an artificial neural network, a multi-layer perceptron, in order to simulate such collectively intelligent behaviour.

The distinction between structure and instance means that we can devise something, make different instances of that something, each different by some small details, and experiment with those different instances in order to devise an even better something. When I make a mechanical clock, I am a clockmaker. When I am able to have a critical look at this clock, make many different versions of it – all based on the same structural connections between mechanical parts, but differing from each other by subtle details – and experiment with those multiple versions, I become a meta-clock-maker, i.e. someone who can advise clockmakers on how to make clocks. The capacity to distinguish between structures and their instances is one of the basic skills we need in life. Autistic people have a big problem in that department, as they are mostly on the instance side. To a severely autistic person, me in a blue jacket, and me in a brown jacket are two completely different people. Schizophrenic people are on the opposite end of the spectrum. To them, everything is one and the same structure, and they cannot cope with instances. Me in a blue jacket and me in a brown jacket are the same as my neighbour in a yellow jumper, and we all are instances of the same alien monster. I know you think I might be overstating, but my grandmother on the father’s side used to suffer from schizophrenia, and it was precisely that: to her, all strong smells were the manifestation of one and the same volatile poison sprayed in the air by THEM, and every person outside a circle of about 19 people closest to her was a member of THEM. Poor Jadwiga.

In economics, the distinction between structure and instance corresponds to the tension between markets and their underpinning institutions. Markets are fluid and changeable, they are like constant experimenting. Institutions give some gravitas and predictability to that experimenting. Institutions are structures, and markets are ritualized manners of multiplying and testing many alternative instances of those structures.

The capacity to accumulate experience means that as we experiment with different instances of different structures, we can store the information we collect in the process, and use this information in some meaningful way. My great compatriot, Alfred Korzybski, in his general semantics, used to designate it as ‘the capacity to bind time’. The thing is not as obvious as one could think. A Nobel-prize-winning mathematician, Reinhard Selten, coined the concept of social games with imperfect recall (Harsanyi, Selten 1988[1]). He argued that as we, collective humans, accumulate and generalize experience about what the hell is going on, from time to time we shake out that big folder, and pick the pages endowed with the most meaning. All the remaining stuff, judged less useful at the moment, is somehow archived in culture, so that it basically stays there, but becomes much harder to access and utilise. The capacity to accumulate experience means largely the way of accumulating experience, and doing that from-time-to-time archiving. We can observe this basic distinction in everyday life. There are things that we learn sort of incrementally. When I learn to play piano – which I wish I was learning right now, cool stuff – I practice, I practice, I practice and… I accumulate learning from all those practices, and one day I give a concert, in a pub. Still, other things, I learn them sort of haphazardly. Relationships are a good example. I am with someone, one day I am mad at her, the other day I see her as the love of my life, then, again, she really gets on my nerves, and then I think I couldn’t live without her etc. Bit of a bumpy road, isn’t it? Yes, there is some incremental learning, but you become aware of it after like 25 years of conjoint life. Earlier on, you just need to suck it up and keep going.

There is an interesting theory in economics, labelled as « semi – martingale » (see for example: Malkiel, Fama 1970[2]). When we observe changes in stock prices, in a capital market, we tend to say they are random, but they are not. You can test it. If the price were really random, it would fan out according to the pattern of a normal distribution. This is what we call a full martingale. Any real price you observe actually swings less broadly than a normal distribution: this is a semi-martingale. Still, anyone with any experience in investment knows that prediction inside the semi-martingale is always burdened with a s**tload of error. When you observe stock prices over a long time, like 2 or 3 years, you can see a sequence of distinct semi-martingales. From September through December it swings inside one semi-martingale, then the Ghost of Past Christmases shakes it badly, people panic, and later it settles into another semi-martingale, slightly shifted from the preceding one, and here it goes, semi-martingaling for another dozen weeks etc.

The central theoretical question in this economic theory, and a couple of others, spells: do we learn something durable through local shocks? Does a sequence of economic shocks, of whatever type, make a learning path similar to the incremental learning of piano playing? There are strong arguments in favour of both possible answers. If you get your face punched, over and over again, you must be a really dumb asshole not to learn anything from that. Still, there is that phenomenon called systemic homeostasis: many systems, social structures included, tend to fight for stability when shaken, and they are frequently successful. The memory of shocks and revolutions is frequently erased, and they are assumed to have never existed.

The issue of different levels in social cohesion refers to the so-called swarm theory (Stradner et al 2013[3]). This theory studies collective intelligence by reference to animals, which we know are intelligent just collectively. Bees, ants, hornets: all those beasts, when acting individually, are as dumb as f**k. Still, when they gang up, they develop amazingly complex patterns of action. That’s not all. Those complex patterns of theirs fall into three categories, applicable to human behaviour as well: static coupling, dynamic correlated coupling, and dynamic random coupling.

When we coordinate by static coupling, we always do things together in the same way. These are recurrent rituals, without much room for change. Many legal rules, and the institutions they form the basis of, are examples of static coupling. You want to put some equity-based securities in circulation? Good, you do this, and this, and this. You haven’t done the third this? Sorry, man, but you cannot call it a day yet. When we need to change the structure of what we do, we should somehow loosen that static coupling and try something new. We should dissolve the existing business, which is static coupling, and look to create something new. When we do so, we can sort of stay in touch with our customary business partners, and after some circling and asking around we form a new business structure, involving people we clearly coordinate with. This is dynamic correlated coupling. Finally, we can decide to sail completely uncharted waters, and take our business concept to China, or to New Zealand, and try to work with completely different people. What we do, in such a case, is emitting some sort of business signal into the environment, and waiting for any response from whoever is interested. This is dynamic random coupling. Attracting random followers to a new You Tube channel is very much an example of the same.

At the level of social cohesion, we can be intelligent in two distinct ways. On the one hand, we can keep the given pattern of collective association at the same level, i.e. at one of the three I have just mentioned. We keep it ritualized and static, or somehow loose and dynamically correlated, or, finally, we take care not to ritualize too much and keep it deliberately at the level of random associations. On the other hand, we can shift between different levels of cohesion. We take some institutions, we start experimenting with making them more flexible, at some point we possibly make them as free as possible, and we gain experience, which, in turn, allows us to create new institutions.

When applying the issue of social cohesion in collective intelligence to economic phenomena, we can use a little trick, to be found, for example, in de Vincenzo et al (2018[4]): we assume that quantitative economic variables, which we normally perceive as just numbers, are manifestations of distinct collective decisions. When I have the price of energy, let’s say, €0,17 per kilowatt hour, I consider it as the outcome of collective decision-making. At this point, it is useful to remember the fundamentals of intelligence. We perceive our own, individual decisions as outcomes of our independent thinking. We associate them with the fact of wanting something, and being apprehensive about something else etc. Still, neurologically, those decisions are outcomes of some neurons firing in a certain sequence. Same for economic variables, i.e. mostly prices and quantities: they are the fruit of interactions between the members of a community. When I buy apples in the local marketplace, I just buy them for a certain price, and, if they look bad, I just don’t buy. This is not any form of purposeful influence upon the market. Still, when 10 000 people like me do the same, sort of ‘buy when the price is good, don’t when the apple is bruised’, a patterned process emerges. The resulting price of apples is the outcome of that process.

Social cohesion can be viewed as association between collective decisions, not just between individual actions. The resulting methodology is made, roughly speaking, of three steps. Step one: I put all the economic variables in my model over a common denominator (common scale of measurement). Step two: I calculate the relative cohesion between them with the general concept of a fitness function, which I can express, for example, as the Euclidean distance between local values of variables in question. Step three: I calculate the average of those Euclidean distances, and I calculate its reciprocal, like « 1/x ». This reciprocal is the direct measure of cohesion between decisions, i.e. the higher the value of this precise « 1/x », the more cohesion between different processes of economic decision-making.
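Here is a minimal sketch of those three steps in code, under the simplifying assumption that each ‘decision’ is a single standardized number, so the Euclidean distance reduces to an absolute difference:

```python
# Three-step cohesion measure: pairwise distances between standardized variables,
# their average, and the reciprocal of that average as the measure of cohesion.
from itertools import combinations

def cohesion(values):
    """values: local, standardized values of the variables treated as decisions."""
    distances = [abs(a - b) for a, b in combinations(values, 2)]   # step two
    avg_distance = sum(distances) / len(distances)                 # step three, part one
    return 1.0 / avg_distance                                      # step three, part two

# Illustrative, already-standardized values of four 'decisions':
print(round(cohesion([0.2, 0.4, 0.5, 0.9]), 2))   # ~ 2.73: the higher, the more cohesion
```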

Now, those of you with a sharp scientific edge could say: “Wait a minute, doc. How do you know we are talking about different processes of decision making? How do you know that variable X1 comes from a different process than variable X2?”. This is precisely my point. The swarm theory tells me that if I can observe changing cohesion between those variables, I can reasonably hypothesise that their underlying decision-making processes are distinct. If, on the other hand, their mutual Euclidean distance stays the same, I hypothesise that they come from the same process.

Summing up, here is the general drift: I take an economic model and I formulate three hypotheses as for the occurrence of collective intelligence in that model. Hypothesis #1: different variables of the model come from different processes of collective decision-making.

Hypothesis #2: the economic system underlying the model has the capacity to learn as a collective intelligence, i.e. to durably increase or decrease the mutual cohesion between those processes. Hypothesis #3: collective learning in the presence of economic shocks is different from learning in the absence of such shocks.

They look nice, those hypotheses. Now, why the hell should anyone bother? I mean, what are the practical outcomes of those hypotheses being true or false? In my experimental perceptron, I express the presence of economic shocks by using the hyperbolic tangent as the neural function of activation, whilst the absence of shocks (or the presence of countercyclical policies) is expressed with a sigmoid function. Those two yield very different processes of learning. Long story short, the sigmoid learns more, i.e. it accumulates more local errors (thus more experimental material for learning), and it generates a steady trend towards lower cohesion between variables (decisions). The hyperbolic tangent accumulates less experiential material (it learns less), and it is quite random in arriving at any tangible change in cohesion. The collective intelligence I mimicked with that perceptron looks like the kind of intelligence which, when going through shocks, learns only the skill of returning to the initial position after the shock: it does not create any lasting type of change. The latter happens only when my perceptron has a device to absorb and alleviate shocks, i.e. the sigmoid neural function.
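For reference, here are the two activation functions in their standard textbook form (my perceptron’s own variants may differ in the sign conventions of the exponent); reading the tanh as the ‘shock-prone’ mode and the sigmoid as the ‘shock-absorbing’ one is my interpretation, not a general property of those functions.

```python
# Standard sigmoid and hyperbolic tangent derivatives: around zero the tanh reacts
# four times more strongly than the sigmoid, which is one way of seeing why it behaves
# like the more 'shock-prone' mode of learning.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_derivative(x):
    return 1.0 - math.tanh(x) ** 2

print(sigmoid_derivative(0.0), tanh_derivative(0.0))   # 0.25 vs 1.0
```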

When I have my perceptron explicitly feeding back that cohesion between variables (i.e. feeding back the fitness function considered as a local error), it learns less and changes less, but does not necessarily go through fewer shocks. When the perceptron does not care about feeding back the observable distance between variables, there is more learning and more change, but not more shocks. The overall fitness function of my perceptron changes over time. The ‘over time’ depends on the kind of neural activation function I use. In the case of the hyperbolic tangent, it is brutal change over a short time, eventually coming back to virtually the same point it started from. In the hyperbolic tangent, the passage between various levels of association, according to the swarm theory, is super quick, but not really productive. In the sigmoid, it is definitely a steady trend of decreasing cohesion.

I want to know what the hell I am doing. I feel I have made a few steps towards that understanding, but getting to know what I am doing proves really hard.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Harsanyi, J. C., & Selten, R. (1988). A general theory of equilibrium selection in games. MIT Press Books, 1.

[2] Malkiel, B. G., & Fama, E. F. (1970). Efficient capital markets: A review of theory and empirical work. The journal of Finance, 25(2), 383-417.

[3] Stradner, J., Thenius, R., Zahadat, P., Hamann, H., Crailsheim, K., & Schmickl, T. (2013). Algorithmic requirements for swarm intelligence in differently coupled collective systems. Chaos, Solitons & Fractals, 50, 100-114.

[4] De Vincenzo, I., Massari, G. F., Giannoccaro, I., Carbone, G., & Grigolini, P. (2018). Mimicking the collective intelligence of human groups as an optimization tool for complex problems. Chaos, Solitons & Fractals, 110, 259-266.

How can I possibly learn on that thing I have just become aware I do?

 

My editorial on You Tube

 

I keep working on the application of neural networks to simulate the workings of collective intelligence in humans. I am currently macheting my way through the model proposed by de Vincenzo et al in their article entitled ‘Mimicking the collective intelligence of human groups as an optimization tool for complex problems’ (2018[1]). In the spirit of my own research, I am trying to use optimization tools for a slightly different purpose, that is for simulating the way things are done. It usually means that I relax some assumptions which come along with said optimization tools, and I just watch what happens.

Vincenzo et al propose a model of artificial intelligence, which combines a classical perceptron, such as the one I have already discussed on this blog (see « More vigilant than sigmoid », for example) with a component of deep learning based on the observable divergences in decisions. In that model, social agents strive to minimize their divergences and to achieve relative consensus. Mathematically, it means that each decision is characterized by a fitness function, i.e. a function of mathematical distance from other decisions made in the same population.

I take the tensors I have already been working with, namely the input tensor TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR} and the output tensor TO = {QR/N; QNR/N}. Once again, consult « More vigilant than sigmoid » as for the meaning of those variables. In the spirit of the model presented by Vincenzo et al, I assume that each variable in my tensors is a decision. Thus, for example, PA;R, i.e. the basic price of energy from renewable sources, which small consumers are charged with, is the tangible outcome of a collective decision. Same for the levelized cost of electricity from renewable sources, the LCOER, etc. For each i-th variable xi in TI and TO, I calculate its relative fitness to the overall universe of decisions, as the average of itself and of its Euclidean distances to other decisions. It looks like:

 

V(xi) = (1/N)*{xi + [(xi – xi;1)^2]^0,5 + [(xi – xi;2)^2]^0,5 + … + [(xi – xi;K)^2]^0,5}

 

…where N is the total number of variables in my tensors, and K = N – 1.

 

In the next step, I can calculate the average of averages, that is, I sum up all the individual V(xi)’s and divide the total by N. That average V*(x) = (1/N) * [V(x1) + V(x2) + … + V(xN)] is the measure of aggregate divergence between individual variables considered as decisions.
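As a sketch, the V(xi) formula and its average V*(x) translate into code roughly like this (a short, made-up vector stands in for the 12 real variables of TI and TO):

```python
# Fitness V(x_i) of each variable: its own value plus its Euclidean distances to all
# other variables, divided by N; V*(x) is the average of those individual fitnesses.
def fitness(values):
    N = len(values)
    V = []
    for i, xi in enumerate(values):
        distances = [((xi - xj) ** 2) ** 0.5 for j, xj in enumerate(values) if j != i]
        V.append((xi + sum(distances)) / N)
    return V, sum(V) / N

V, V_star = fitness([0.26, 0.48, 0.01, 0.46])    # illustrative standardized values
print([round(v, 3) for v in V], round(V_star, 3))
```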

Now, I imagine two populations: one who actively learns from the observed divergence of decisions, and another one who doesn’t really. The former is represented with a perceptron that feeds back the observable V(xi)’s into consecutive experimental rounds. Still, it is just feeding that V(xi) back into the loop, without any a priori ideas about it. The latter is more or less what it already is: it just yields those V(xi)’s but does not do much about them.

I needed a bit of thinking as for how exactly that feeding back of the fitness function should look. In the algorithm I finally came up with, it looks different for the input variables, on the one hand, and for the output ones, on the other. You might remember, from the reading of « More vigilant than sigmoid », that my perceptron, in its basic version, learns by estimating local errors observed in the last round of experimentation, and then adding those local errors to the values of input variables, just to make them roll once again through the neural activation function (sigmoid or hyperbolic tangent), and see what happens.

As I upgrade my perceptron with the estimation of fitness function V(xi), I ask: who estimates the fitness function? What kind of question is that? Well, a basic one. I have that neural network, right? It is supposed to be intelligent, right? I add a function of intelligence, namely that of estimating the fitness function. Who is doing the estimation: my supposedly intelligent network or some other intelligent entity? If it is an external intelligence, mine, for a start, it just estimates V(xi), sits on its couch, and watches the perceptron struggling through the meanders of attempts to be intelligent. In such a case, the fitness function is like sweat generated by a body. The body sweats but does not have any way of using the sweat produced.

Now, if the V(xi) is to be used for learning, the perceptron is precisely the incumbent intelligent structure supposed to use it. I see two basic ways for the perceptron to do that. First of all, the input neuron of my perceptron can capture the local fitness functions on input variables and add them, as additional information, to the previously used values of input variables. Second of all, the second hidden neuron can add the local fitness functions, observed on output variables, to the exponent of the neural activation function.

I explain. I am a perceptron. I start my adventure with two tensors: input TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR} and output TO = {QR/N; QNR/N}. The initial values I start with are slightly modified in comparison to what was being processed in « More vigilant than sigmoid ». I assume that the initial market of renewable energies – thus most variables of quantity with ‘R’ in subscript – is quasi inexistent. More specifically, QR/N = 0,01 and  QNR/N = 0,99 in output variables, whilst in the input tensor I have capital invested in capacity IR = 0,46 (thus a readiness to go and generate from renewables), and yet the crowdfunding flow K is KR = 0,01 for renewables and KNR = 0,09 for non-renewables. If you want, it is a sector of renewable energies which is sort of ready to fire off but hasn’t done anything yet in that department. All in all, I start with: LCOER = 0,26; LCOENR = 0,48; KR = 0,01; KNR = 0,09; IR = 0,46; INR = 0,99; PA;R = 0,71; PA;NR = 0,46; PB;R = 0,20; PB;NR = 0,37; QR/N = 0,01; and QNR/N = 0,99.

Being a pure perceptron, I am dumb as f**k. I can learn by pure experimentation. I have ambitions, though, to be smarter, thus to add some deep learning to my repertoire. I estimate the relative mutual fitness of my variables according to the V(xi) formula given earlier, as arithmetical average of each variable separately and its Euclidean distance to others. With the initial values as given, I observe: V(LCOER; t0) = 0,302691788; V(LCOENR; t0) = 0,310267104; V(KR; t0) = 0,410347388; V(KNR; t0) = 0,363680721; V(IR ; t0) = 0,300647174; V(INR ; t0) = 0,652537097; V(PA;R ; t0) = 0,441356844 ; V(PA;NR ; t0) = 0,300683099 ; V(PB;R ; t0) = 0,316248176 ; V(PB;NR ; t0) = 0,293252713 ; V(QR/N ; t0) = 0,410347388 ; and V(QNR/N ; t0) = 0,570485945. All that stuff put together into an overall fitness estimation is like average V*(x; t0) = 0,389378787.

I ask myself: what happens to that fitness function as I process information with my two alternative neural functions, the sigmoid or the hyperbolic tangent? I jump to experimental round 1500, thus to t1500, and I watch. With the sigmoid, I have V(LCOER; t1500) = 0,359529289; V(LCOENR; t1500) = 0,367104605; V(KR; t1500) = 0,467184889; V(KNR; t1500) = 0,420518222; V(IR ; t1500) = 0,357484675; V(INR ; t1500) = 0,709374598; V(PA;R ; t1500) = 0,498194345; V(PA;NR ; t1500) = 0,3575206; V(PB;R ; t1500) = 0,373085677; V(PB;NR ; t1500) = 0,350090214; V(QR/N ; t1500) = 0,467184889; and V(QNR/N ; t1500) = 0,570485945, with the average V*(x; t1500) = 0,441479829.

Hmm, interesting. Working my way through intelligent cognition with a sigmoid, after 1500 rounds of experimentation, I have somehow decreased the mutual fitness of decisions I make through individual variables. Those V(xi)’s have changed. Now, let’s see what it gives when I do the same with the hyperbolic tangent: V(LCOER; t1500) =   0,347752478; V(LCOENR; t1500) =  0,317803169; V(KR; t1500) =   0,496752021; V(KNR; t1500) = 0,436752021; V(IR ; t1500) =  0,312040791; V(INR ; t1500) =  0,575690006; V(PA;R ; t1500) =  0,411438698; V(PA;NR ; t1500) =  0,312052766; V(PB;R ; t1500) = 0,370346458; V(PB;NR ; t1500) = 0,319435252; V(QR/N ; t1500) =  0,496752021; and V(QNR/N ; t1500) = 0,570485945, with average V*(x; t1500) =0,413941802.

Well, it is becoming more and more interesting. Being a dumb perceptron, I can, nevertheless, create two different states of mutual fitness between my decisions, depending on the kind of neural function I use. I want to have a bird’s eye view on the whole thing. How can a perceptron have a bird’s eye view of anything? Simple: it rents a drone. How can a perceptron rent a drone? Well, how smart do you have to be to rent a drone? Anyway, it gives something like the graph below:

 

Wow! So this is what I do, as a perceptron, and what I haven’t been aware of so far? Amazing. When I think in sigmoid, I sort of consistently increase the relative distance between my decisions, i.e. I decrease their mutual fitness. The sigmoid, that function which sort of calms down any local disturbance, makes the decision-making process less coherent, more prone to embracing a little chaos. The hyperbolic tangent thinking is different. It occasionally sort of stretches across a broader spectrum of fitness in decisions, but as soon as it does so, it seems to be afraid of its own actions, and returns to the initial level of V*(x). Please note that, as a perceptron, I am almost alive, and I produce slightly different outcomes in each instance of myself. The point is that in the line corresponding to the hyperbolic tangent, the comb-like pattern of small oscillations can stretch and move from instance to instance. Still, it keeps the general form of a comb.

OK, so this is what I do, and now I ask myself: how can I possibly learn on that thing I have just become aware I do? As a perceptron, endowed with this precise logical structure, I can do one thing with information: I can arithmetically add it to my input. Still, having some ambitions for evolving, I attempt to change my logical structure, and I risk incorporating somehow the observable V(xi) into my neural activation function. Thus, the first thing I do with that new learning is to top the values of input variables with the local fitness functions observed in the previous round of experimenting. I am doing it already with the local errors observed in outcome variables, so why not double the dose of learning? Anyway, it goes like: xi(t) = xi(t-1) + e(xi; t-1) + V(xi; t-1). It looks interesting, but I am still using just a fraction of the information about myself, i.e. just that about input variables. Here is where I start being really ambitious. In the equation of the sigmoid function, I change s = 1 / [1 + exp(∑xi*Wi)] into s = 1 / [1 + exp(∑xi*Wi + V(TO))], where V(TO) stands for the local fitness functions observed in output variables. I do the same by analogy in my version based on the hyperbolic tangent. The th = [exp(2*∑xi*wi) – 1] / [exp(2*∑xi*wi) + 1] turns into th = {exp[2*∑xi*wi + V(TO)] – 1} / {exp[2*∑xi*wi + V(TO)] + 1}. I do what I know how to do, i.e. adding information from fresh observation, and I apply it to change the structure of my neural function.
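In code, those two changes look more or less like this – a sketch in my own notation, with illustrative values only:

```python
# (1) Input update: previous value plus local error plus local fitness observed in t-1.
# (2) The fitness of the output tensor, V(TO), added to the exponent of each activation
#     function, with the exponent signs exactly as written above.
import math

def update_input(x_prev, local_error, local_fitness):
    return x_prev + local_error + local_fitness

def sigmoid_with_fitness(weighted_sum, V_TO):
    return 1.0 / (1.0 + math.exp(weighted_sum + V_TO))

def tanh_with_fitness(weighted_sum, V_TO):
    z = math.exp(2.0 * weighted_sum + V_TO)
    return (z - 1.0) / (z + 1.0)

# Illustrative values only:
print(update_input(0.46, 0.03, 0.30))
print(sigmoid_with_fitness(1.2, 0.44), tanh_with_fitness(1.2, 0.44))
```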

All those ambitious changes in myself, put together, change my pattern of learning as shown in the graph below:

When I think sigmoid, the fact of feeding back my own fitness function does not change much. It makes the learning curve a bit steeper in the early experimental rounds, and makes it asymptotic to a slightly lower threshold in the last rounds, as compared to learning without feedback on V(xi). Yet, it is the same old sigmoid, with just its sleeves ironed. On the other hand, the hyperbolic tangent thinking changes significantly. What used to look like a comb, without feedback, now looks much more aggressive, like a plough on steroids. There is something like a complex cycle of learning on the internal cohesion of decisions made. Generally, feeding back the observable V(xi) increases the finally achieved cohesion in decisions, and, at the same time, it reduces the cumulative error gathered by the perceptron. With that type of feedback, the cumulative error of the sigmoid, which normally hits around 2,2 in this case, falls to like 0,8. With the hyperbolic tangent, cumulative errors which used to be 0,6 ÷ 0,8 without feedback fall to 0,1 ÷ 0,4 with feedback on V(xi).

 

The (provisional) piece of wisdom I can have as my takeaway is twofold. Firstly, whatever I do, a large chunk of perceptual learning leads to a bit less cohesion in my decisions. As I learn by experience, I allow myself more divergence in decisions. Secondly, looping on that divergence, and including it explicitly in my pattern of learning leads to relatively more cohesion at the end of the day. Still, more cohesion has a price – less learning.

 

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] De Vincenzo, I., Massari, G. F., Giannoccaro, I., Carbone, G., & Grigolini, P. (2018). Mimicking the collective intelligence of human groups as an optimization tool for complex problems. Chaos, Solitons & Fractals, 110, 259-266.

More vigilant than sigmoid

My editorial on You Tube

 

I keep working on the application of neural networks as simulators of collective intelligence. The particular field of research I am diving into is the sector of energy, its shift towards renewable energies, and the financial scheme I invented some time ago, which I called EneFin. As for that last one, you can consult « The essential business concept seems to hold », in order to grasp the outline.

I continue developing the line of research I described in my last update in French: « De la misère, quoi ». There are observable differences in the prices of energy according to the size of the buyer. In many countries – practically in all the countries of Europe – there are two, distinct price brackets. One, which I further designated as PB, is reserved to contracts with big consumers of energy (factories, office buildings etc.) and it is clearly lower. Another one, further called PA, is applied to small buyers, mainly households and really small businesses.

As an economist, I have that intuitive thought in the presence of price forks: that differential in prices is some kind of value. If it is value, why not give it some financial spin? I came up with the idea of the EneFin contract. People buy energy, in the amount Q, from a local supplier who sources it from renewables (water, wind etc.), and they pay the price PA, thus generating a financial flow equal to Q*PA. That flow buys two things: energy priced at PB, and participatory titles in the capital of their supplier, for the differential Q*(PA – PB). I imagine some kind of crowdfunding platform, which could channel the amount of capital K = Q*(PA – PB).
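The core accounting of that contract is simple enough to sketch in a few lines; the numbers below are illustrative, not market data:

```python
# EneFin split: the household pays P_A for quantity Q, receives energy valued at P_B,
# and the difference becomes crowdfunded capital K in the supplier's equity.
def enefin_split(Q_kwh, P_A, P_B):
    payment = Q_kwh * P_A            # what the small consumer actually pays
    energy_value = Q_kwh * P_B       # the same energy priced as for big industrial users
    K = Q_kwh * (P_A - P_B)          # capital channelled into participatory titles
    return payment, energy_value, K

print(enefin_split(Q_kwh=2500, P_A=0.21, P_B=0.15))   # (525.0, 375.0, 150.0)
```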

That K remains in some sort of fluid relationship to I, or capital invested in the productive capacity of energy suppliers. Fluid relationship means that each of those capital balances can date other capital balances, no hard feelings held. As we talk (OK, I talk) about prices of energy and capital invested in capacity, it is worth referring to LCOE, or Levelized Cost Of Electricity. The LCOE is essentially the marginal cost of energy, and a no-go-below limit for energy prices.

I want to simulate the possible process of introducing that general financial concept, namely K = Q*(PA – PB), into the market of energy, in order to promote the development of diversified networks, made of local suppliers in renewable energy.

Here comes my slightly obsessive methodological idea: use artificial intelligence in order to simulate the process. In the classical economic method, I make a model, I take empirical data, I regress some of it on another some of it, and I come up with coefficients of regression, and they tell me how the thing should work if we were living in a perfect world. Artificial intelligence opens a different perspective. I can assume that my model is a logical structure which keeps experimenting with itself, and we don’t know where the hell that experimentation leads. I want to use neural networks in order to represent the exact way that social structures can possibly experiment with that K = Q*(PA – PB) thing. Instead of optimizing, I want to see the way that possible optimization can occur.

I have that simple neural network, which I already referred to in « The point of doing manually what the loop is supposed to do » and which is basically quite dumb, as it does not do any abstraction. Still, it nicely experiments with logical structures. I am sketching its logical structure in the picture below. I distinguish four layers of neurons: input, hidden 1, hidden 2, and output. When I say ‘layers’, it is a bit of grand language. For the moment, I am working with one single neuron in each layer. It is more of a synaptic chain.

Anyway, the input neuron feeds data into the chain. In the first round of experimentation, it feeds the source data in. In consecutive rounds of learning through experimentation, that first neuron assesses and feeds back local errors, measured as discrepancies between the output of the output neuron, and the expected values of output variables. The input neuron is like the first step in a chain of perception, in a nervous system: it receives and notices the raw external information.

The hidden layers – or the hidden neurons in the chain – modify the input data. The first hidden neuron generates quasi-random weights, which the second hidden neuron attributes to the input variables. Just as in a nervous system, the input stimuli are assessed as for their relative importance. In the original algorithm of the perceptron, which I used to design this network, those two functions, i.e. generating the random weights and attributing them to input variables, were fused in one equation. Still, my fundamental intent is to use neural networks to simulate collective intelligence, and I intuitively guess those two functions are somehow distinct. Pondering the importance of things is one action, and using that ponderation for practical purposes is another. It is like scientists debating the way to run a policy, and the government actually having the thing done. These are two separate paths of action.

Whatever. What the second hidden neuron produces is a compound piece of information: the summation of input variables multiplied by random weights. The output neuron transforms this compound data through a neural function. I prepared two versions of this network, with two distinct neural functions: the sigmoid, and the hyperbolic tangent. As I found out, the way they work is very different, just as the results they produce. Once the output neuron generates the transformed data – the neural output – the input neuron measures the discrepancy between the original, expected values of output variables, and the values generated by the output neuron. The exact way of computing that discrepancy is made of two operations: calculating the local derivative of the neural function, and multiplying that derivative by the residual difference ‘original expected output value minus output value generated by the output neuron’. The discrepancy calculated in this way is considered as a local error, and is being fed back into the input neuron as an addition to the value of each input variable.
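Here is a compact sketch of that synaptic chain, under my reading of the description above: the standard form of the sigmoid for brevity, one expected output value instead of two, and made-up standardized inputs.

```python
# One synaptic chain: quasi-random weights, a weighted sum, one activation, and a local
# error (derivative of the output times the residual) fed back into every input variable.
import random, math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_chain(inputs, expected_output, rounds=5000, seed=1):
    random.seed(seed)
    x = list(inputs)
    for _ in range(rounds):
        weights = [random.random() for _ in x]                    # hidden neuron 1
        weighted_sum = sum(w * v for w, v in zip(weights, x))     # hidden neuron 2
        output = sigmoid(weighted_sum)                            # output neuron
        derivative = output * (1.0 - output)                      # local magnitude of change
        error = derivative * (expected_output - output)           # local error
        x = [v + error for v in x]                                # feedback into the input
    return x

final_inputs = run_chain([0.26, 0.48, 0.01, 0.46], expected_output=0.95, rounds=100)
print([round(v, 3) for v in final_inputs])
```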

Before I go into describing the application I made of that perceptron, as regards my idea for a financial scheme, I want to delve into the mechanism of learning triggered through repeated looping of that logical structure. The input neuron measures the arithmetical difference between the expected values of output and the output generated by the network in the preceding round of experimentation, and that difference is multiplied by the local derivative of said output. Derivative functions, in their deepest, Newtonian sense, are magnitudes of change in something else, i.e. in their base function. In the Newtonian perspective, everything that happens can be seen either as change (derivative) in something else, or as an integral (an aggregate that changes its shape) of still something else. When I multiply the local deviation from expected values by the local derivative of the estimated value, I assume this deviation is as important as the local magnitude of change in its estimation. The faster things happen, the more important they are, so to say. My perceptron learns by assessing the magnitude of local changes it induces in its own estimations of reality.

I took that general logical structure of the perceptron, and I applied it to my core problem, i.e. the possible adoption of the new financial scheme in the market of energy. Here comes a sort of originality in my approach. The basic way of using neural networks is to give them a substantial set of real data as learning material, make them learn on that data, and then make them optimize a hypothetical set of data. Here you have those 20 old cars, take them to pieces and try to put them back together, observe all the anomalies you have thus created, and then make me a new car on the grounds of that learning. I adopted a different approach. My focus is to study the process of learning in itself. I took just one set of actual input values, exogenous to my perceptron, something like an initial situation. I ran 5000 rounds of learning in the perceptron, on the basis of that initial set of values, and I observed how learning takes place.

My initial set of data is made of two tensors: input TI and output TO.

The thing I am the most focused on is the relative abundance of energy supplied from renewable sources. I express the ‘abundance’ part mathematically as the coefficient of energy consumed per capita, or Q/N. The relative bend towards renewables, or towards the non-renewables is apprehended as the distinction between renewable energy QR/N consumed per capita, and the non-renewable one, the QNR/N, possibly consumed by some other capita. Hence, my output tensor is TO = {QR/N; QNR/N}.

I hypothesise that TO is being generated by input made of prices, costs, and capital outlays. I split my price fork PA – PB (the price for the small buyers minus the price for the big ones) into renewables and non-renewables, namely into: PA;R, PA;NR, PB;R, and PB;NR. I mirror the distinction in prices with that in the cost of energy, and so I call them LCOER and LCOENR. I want to create a financial scheme that generates a crowdfunded stream of capital K, to finance new productive capacities, and I want it to finance renewable energies, so I call it KR. Still, some other people, like my compatriots in Poland, might be so attached to fossils that they might be willing to crowdfund new installations based on non-renewables. Thus, I need to take a KNR into account in the game. When I say capital, and I say LCOE, I sort of feel compelled to say aggregate investment in productive capacity, in renewables and in non-renewables, and I call it, respectively, IR and INR. All in all, my input tensor spells TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR}.

The next step is scale and measurement. The neural functions I use in my perceptron like having their input standardized. Their tastes in standardization differ a little. The sigmoid likes its input nicely spread between 0 and 1, whilst the hyperbolic tangent, the more reckless of the two, tolerates -1 ≤ x ≤ 1. I chose to standardize the input data between 0 and 1, so as to make it fit both. My initial thought was to aim for an energy market with a great abundance of renewable energy, and a relatively declining supply of non-renewables. I generally trust my intuition, only I like to leverage it with a bit of chaos, every now and then, and so I ran some pseudo-random strings of values and I chose an output tensor made of TO = {QR/N = 0,95; QNR/N = 0,48}.
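The standardization itself is plain min-max scaling; a one-line sketch with my own naming, nothing specific to my data:

```python
import numpy as np

def standardize_01(values):
    # min-max scaling over [0, 1], so the data fits both the sigmoid and the hyperbolic tangent
    v = np.array(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())
```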

That state of output is supposed to be somehow logically connected to the state of input. I imagined a market where the relative abundance in the consumption of, respectively, renewable and non-renewable energies is mostly driven by growing demand for the former, and declining demand for the latter. Thus, I imagined a relatively high small-user price for renewable energy, and a large fork between that PA;R and the PB;R. As for non-renewables, the fork in prices is more restrained (than in the market of renewables), and its top value is relatively lower. The non-renewable power installations are almost fed up with investment INR, whilst the renewables could still do with more capital IR in productive assets. The LCOENR of non-renewables is relatively high, although not very: yes, you need to pay for the fuel itself, but you have economies of scale. As for the LCOER of renewables, it is pretty low, which actually reflects the present situation in the market.

The last part of my input tensor regards the crowdfunded capital K. I assumed two different initial situations. Firstly, there is virtually no crowdfunding, thus a very low K. Secondly, some crowdfunding is already alive and kicking, at a level slightly above half of what people expect in the industry.

Once again, I applied those qualitative assumptions to a set of pseudo-random values between 0 and 1. Here comes the result, in the table below.

 

Table 1 – The initial values for learning in the perceptron

Tensor      Variable    The Market with virtually no crowdfunding    The Market with significant crowdfunding
Input TI    LCOER       0,26                                          0,26
            LCOENR      0,48                                          0,48
            KR          0,01                <= !! =>                  0,56
            KNR         0,01                                          0,52
            IR          0,46                                          0,46
            INR         0,99                                          0,99
            PA;R        0,71                                          0,71
            PA;NR       0,46                                          0,46
            PB;R        0,20                                          0,20
            PB;NR       0,37                                          0,37
Output TO   QR/N        0,95                                          0,95
            QNR/N       0,48                                          0,48

 

The way the perceptron works means that it generates and feeds back local errors in each round of experimentation. Logically, over the 5000 rounds of experimentation, each input variable gathers those local errors, like a snowball rolling downhill. I take the values of input variables from the last, i.e. the 5000th round: they carry the initial values, from the table above, and, on top of them, the cumulative error from the 5000 experiments. How to standardize them, so as to make them comparable with the initial ones? I observe that all those final values carry the same cumulative error, across the whole TI input tensor. I choose a simple method of standardization: as the initial values were standardized over the interval between 0 and 1, I standardize the outcoming values over the interval 0 ≤ x ≤ (1 + cumulative error).
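Here is a minimal sketch of that bookkeeping, building on the snippet above. The way the scalar error is added to every input variable, and the division by (1 + cumulative error) as the final standardization, are my reading of the procedure rather than a literal transcript of it.

```python
import numpy as np

def run_instance(initial_input, expected_output, rounds=5000, seed=None):
    """One instance of learning: 5000 consecutive experiments on the same initial data.
    Uses learning_round() from the earlier sketch."""
    rng = np.random.default_rng(seed)
    target = np.asarray(expected_output, dtype=float)
    weights = rng.random((len(initial_input), len(target)))
    x = np.asarray(initial_input, dtype=float)
    cumulative_error = 0.0
    for _ in range(rounds):
        _, feedback = learning_round(x, weights, target)
        cumulative_error += feedback      # the snowball of local errors
        x = x + feedback                  # every input variable gathers the same local error
    learnt = x / (1 + cumulative_error)   # standardized over the interval [0, 1 + cumulative error]
    return learnt, cumulative_error
```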

I observe the unfolding of cumulative error along the path of learning, made of 5000 steps. There is a peculiarity in each of the neural functions used: the sigmoid, and the hyperbolic tangent. The sigmoid learns in a slightly Hitchcockian way. Initially, local errors just rocket up. It is as if that sigmoid was initially yelling: ‘F******k! What a ride!’. Then, the value of errors drops very sharply, down to something akin to a vanishing tremor, and starts hovering lazily over some implicit asymptote. Hyperbolic tangent learns differently. It seems to do all it can to minimize local errors whenever it is possible. Obviously, it is not always possible. Every now and then, that hyperbolic tangent produces an explosively high value of local error, like a sudden earthquake, just to go back into forced calm right after. You can observe those two radically different ways of learning in the two graphs below.

Two ways of learning – the sigmoidal one and the hyper-tangential one – bring interestingly different results, and the results differ just as much depending on the initial assumptions about the crowdfunded capital K. Tables 2 – 5, further below, list the results I got. A bit of additional explanation will not hurt. For every version of learning, i.e. sigmoid vs hyperbolic tangent, and K = 0,01 vs K ≈ 0,5, I ran 5 instances of 5000 rounds of learning in my perceptron. This is the meaning of the word ‘Instance’ in those tables. One instance is like a tensor of learning: one happening of 5000 consecutive experiments. The values of output variables remain constant all the time: TO = {QR/N = 0,95; QNR/N = 0,48}. The perceptron sweats in order to come up with some interesting combination of input variables, given this precise tensor of output.
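Just to show how those instances are organized, here is a usage sketch built on the snippets above, with the initial values from Table 1; it illustrates the experimental design, and is not the code that actually produced the tables below.

```python
# Output tensor TO and the 'no crowdfunding' input tensor TI, as in Table 1
TO = [0.95, 0.48]
TI_no_crowdfunding = [0.26, 0.48, 0.01, 0.01, 0.46, 0.99, 0.71, 0.46, 0.20, 0.37]

# Five instances of 5000 rounds each, one row of results per instance
results = [run_instance(TI_no_crowdfunding, TO, rounds=5000, seed=i) for i in range(5)]
for learnt, cumulative_error in results:
    print(round(cumulative_error, 2), [round(v, 4) for v in learnt])
```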

 

Table 2 – Outcomes of learning with the sigmoid, no initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning
Learning with the sigmoid, no initial crowdfunding
Instance 1 Instance 2 Instance 3 Instance 4 Instance 5
cumulative error 2,11 2,11 2,09 2,12 2,16
LCOER 0,7617 0,7614 0,7678 0,7599 0,7515
LCOENR 0,8340 0,8337 0,8406 0,8321 0,8228
KR 0,6820 0,6817 0,6875 0,6804 0,6729
KNR 0,6820 0,6817 0,6875 0,6804 0,6729
IR 0,8266 0,8262 0,8332 0,8246 0,8155
INR 0,9966 0,9962 1,0045 0,9943 0,9832
PA;R 0,9062 0,9058 0,9134 0,9041 0,8940
PA;NR 0,8266 0,8263 0,8332 0,8247 0,8155
PB;R 0,7443 0,7440 0,7502 0,7425 0,7343
PB;NR 0,7981 0,7977 0,8044 0,7962 0,7873

 

 

Table 3 – Outcomes of learning with the sigmoid, with substantial initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning
Learning with the sigmoid, substantial initial crowdfunding
Instance 1 Instance 2 Instance 3 Instance 4 Instance 5
cumulative error 1,98 2,01 2,07 2,03 1,96
LCOER 0,7511 0,7536 0,7579 0,7554 0,7494
LCOENR 0,8267 0,8284 0,8314 0,8296 0,8255
KR 0,8514 0,8529 0,8555 0,8540 0,8504
KNR 0,8380 0,8396 0,8424 0,8407 0,8369
IR 0,8189 0,8207 0,8238 0,8220 0,8177
INR 0,9965 0,9965 0,9966 0,9965 0,9965
PA;R 0,9020 0,9030 0,9047 0,9037 0,9014
PA;NR 0,8189 0,8208 0,8239 0,8220 0,8177
PB;R 0,7329 0,7356 0,7402 0,7375 0,7311
PB;NR 0,7891 0,7913 0,7949 0,7927 0,7877

 

 

 

 

 

Table 4 – Outcomes of learning with the hyperbolic tangent, no initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning
Learning with the hyperbolic tangent, no initial crowdfunding
Instance 1 Instance 2 Instance 3 Instance 4 Instance 5
cumulative error 1,1 1,27 0,69 0,77 0,88
LCOER 0,6470 0,6735 0,5599 0,5805 0,6062
LCOENR 0,7541 0,7726 0,6934 0,7078 0,7257
KR 0,5290 0,5644 0,4127 0,4403 0,4746
KNR 0,5290 0,5644 0,4127 0,4403 0,4746
IR 0,7431 0,7624 0,6797 0,6947 0,7134
INR 0,9950 0,9954 0,9938 0,9941 0,9944
PA;R 0,8611 0,8715 0,8267 0,8349 0,8450
PA;NR 0,7432 0,7625 0,6798 0,6948 0,7135
PB;R 0,6212 0,6497 0,5277 0,5499 0,5774
PB;NR 0,7009 0,7234 0,6271 0,6446 0,6663

 

 

Table 5 – Outcomes of learning with the hyperbolic tangent, substantial initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning
Learning with the hyperbolic tangent, substantial initial crowdfunding
Instance 1 Instance 2 Instance 3 Instance 4 Instance 5
cumulative error -0,33 0,2 -0,06 0,98 -0,25
LCOER (0,1089) 0,3800 0,2100 0,6245 0,0110
LCOENR 0,2276 0,5681 0,4497 0,7384 0,3111
KR 0,3381 0,6299 0,5284 0,7758 0,4096
KNR 0,2780 0,5963 0,4856 0,7555 0,3560
IR 0,1930 0,5488 0,4251 0,7267 0,2802
INR 0,9843 0,9912 0,9888 0,9947 0,9860
PA;R 0,5635 0,7559 0,6890 0,8522 0,6107
PA;NR 0,1933 0,5489 0,4252 0,7268 0,2804
PB;R (0,1899) 0,3347 0,1522 0,5971 (0,0613)
PB;NR 0,0604 0,4747 0,3306 0,6818 0,1620

 

The cumulative error, the first numerical line in each table, is something like memory. It is a numerical expression of how much experience the perceptron has accumulated in the given instance of learning. Generally, the sigmoid neural function accumulates more memory, as compared to the hyper-tangential one. Interesting. The way of processing information affects the amount of experiential data stored in the process. If you use the links I gave earlier, you will see different logical structures in those two functions. The sigmoid generally smooths out anything it receives as input: it puts the incoming, compound data in the negative exponent of Euler’s number e ≈ 2,72, and then it puts the resulting value in the denominator of 1, i.e. σ(x) = 1/(1 + e^(-x)). The sigmoid is like a bumper: it absorbs shocks. The hyperbolic tangent, tanh(x) = (e^x – e^(-x))/(e^x + e^(-x)), is different. It sort of exposes small discrepancies in input. In human terms, the hyper-tangential function is more vigilant than the sigmoid. As can be observed in this precise case, absorbing shocks leads to more accumulated experience than vigilantly reacting to observable change.

The difference in cumulative error, observable between the sigmoid-based perceptron and the one based on the hyperbolic tangent, is particularly sharp in the case of a market with substantial initial crowdfunding K. In 3 instances out of 5, in that scenario, the hyper-tangential perceptron yields a negative cumulative error. It can be interpreted as the removal of some memory, implicitly contained in the initial values of input variables. When the initial K is assumed to be 0,01, the difference in accumulated memory, observable between the two neural functions, shrinks significantly. It looks as if K ≥ 0,5 were some kind of disturbance that the vigilant hyperbolic tangent attempts to eliminate. That impression of disturbance created by K ≥ 0,5 is reinforced as I synthetically compare all four sets of outcomes, i.e. tables 2 – 5. The case of learning with the hyperbolic tangent, and with substantial initial crowdfunding, looks radically different from everything else. The discrepancy between alternative instances seems to be the greatest in this case, and the incidentally negative values in the input tensor suggest some kind of deep shake-off. Negative prices and/or negative costs mean that someone external is paying for the ride, probably the taxpayers, in the form of some fiscal stimulation.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund  (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’ You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and become my patron. If you decide so, I will be grateful for suggesting me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange of supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Alois in the middle

 

I am returning to my syllabuses for the next academic year. I am focusing more specifically on microeconomics. Next year, I am supposed to give lectures in Microeconomics at both the Undergraduate, and the Master’s level. I feel like asking fundamental questions. My fundamental question, as it comes to teaching any curriculum, is the same: what can my students do with it? What is the function and the purpose of microeconomics? Please, notice that I am not asking that frequently stated, rhetorical question ‘What are microeconomics about?’. Well, buddy, microeconomics are about the things you are going to lecture about. Stands to reason. I want to know, and communicate, what is the practical utility, in one’s life, of those things that microeconomics are about.

The basic claim I am focusing on is the following: microeconomics are the accountancy of social structures. They serve exactly the same purpose that any kind of bookkeeping has ever served: to find and exploit patterns in human behaviour, by the means of accurately applied measures. Them ancients, who built those impressive pyramids (who builds a structure without windows and so little free space inside?), very quickly gathered that in order to have one decent pyramid, you need an army of clerks who do the accounting. They used to count stone, people, food, water etc. This is microeconomics, basically.

Thus, you can do something with microeconomics if you want to build an ancient pyramid. Now, I am dividing the construction of said ancient pyramid into two stages: Undergraduate, and Master’s. An Undergraduate ancient pyramid requires understanding what you need to keep accounts of if you don’t want to be thrown to the crocodiles. At the Master’s level, you will want to know the odds that you find yourself in a social structure where inaccurate accounting, in connection with a pyramid, will have you thrown to the crocodiles.

Good, now some literature, and a little detour through my current scientific work on the EneFin concept (see « Which salesman am I? » and « Sans une once d’utopisme » for sort of a current account of that research). I have just read that sort of transitional form of science, between an article and a book, basically a report, by Bleich and Guimaraes 2016[1]. It regards investment in renewable energies, mostly from the strictly spoken view of investment logic: return on investment, net present value – that kind of thing. As I was making my notes out of that reading, my mind made a jump, and it landed on the cover of the quite well-known book by Joseph Schumpeter: ‘Business Cycles’.

Joseph Schumpeter is an intriguing classic, so to say. Born in 1883, he published ‘Business Cycles’ in 1939, being 56 years old, after the hell of a ride both for him and for the world, and right at the beginning of another ride (for the world). He was studying economics in Austria, in the early 1900s, when social sciences in general were sort of different from their present-day version. They were the living account of a world that was changing at a breath-taking pace. Young Joseph (well, Alois in the middle) Schumpeter witnessed the rise of Marxism, World War I, the dissolution of his homeland, the Austro-Hungarian Empire, and the rise of the German Reich. He moved from academia to banking, and from European banking to American academia.

I deeply believe that whatever kind of story I am telling, whether I am lecturing about economics, discussing a business concept, or chatting about philosophy, at the bottom line I am telling the story of my own existence. I also deeply believe that the same is true for anyone who goes to any lengths in telling a story. We tell stories in order to rationalize that crazy, exciting, unique and deadly something called ‘life’. To me, those ‘Business Cycles’ by Joseph Schumpeter look very much like a rationalized story of quite turbulent a life.

So, here come a few insights I have out of re-reading ‘Business Cycles’ for the n-th time, in the context of research on my EneFin business concept. Any technological change takes place in a chain of value added. Innovation in one tier of the chain needs to overcome the status quo both upstream and downstream of the chain, but once this happens, the whole chain of technologies and goods changes. I wonder how it can apply specifically to EneFin, which is essentially an institutional scheme. In terms of value added, this scheme is situated somewhere between the classical financial markets, and typical social entrepreneurship. It is social to the extent that it creates that quasi-cooperative connexion between the consumers of energy, and its suppliers. Still, as my idea assumes a financial market for those complex contracts « energy + shares in the supplier’s equity », there is a strong capitalist component.

I guess that the resistance this innovation would have to overcome would consist, on the one hand, in distrust on the part of those hardcore activists of social entrepreneurship, like ‘Anything that has anything to do with money is bad!’, and, on the other hand, in resistance from the classical financial market, namely the willingness to forcibly squeeze the EneFin scheme into some kind of established structure, like the stock market.

The second insight that Joseph has just given me is the following: there is a special type of business model and business action, the entrepreneurial one, centred on innovation rather than on capitalizing on the status quo. This is deep, really. What I could notice, so far, in my research, is that in every industry there are business models which just work, and others which just don’t. However innovative you think you are, most of the time either you follow the field-tested patterns or you simply fail. The real, deep technological change starts when this established order gets a wedge stuffed up its ass, and the wedge is, precisely, that entrepreneurial business model. I wonder how entrepreneurial the business model of EneFin is. Is it really as innovative as I think it is?

In the broad theoretical picture, which comes in handy as it comes to science, the incidence of that entrepreneurial business model can be measured and assessed as a probability, and that probability, in turn, is a factor of change. My favourite mathematical approach to structural change is that particular mutation that Paul Krugman[2] made out of the classical production function, as initially formulated by Prof Charles W. Cobb and Prof Paul H. Douglas in their joint work from 1928[3]. We have some output generated by two factors, one of which changes slowly, whilst the other changes quickly. In other words, we have one quite conservative factor, and another one that takes on the crazy ride of creative destruction.

That second factor is innovation, or, if you want, the entrepreneurial business model. If it is to be powerful, then, mathematically, an incremental change in that innovative factor should bring much greater a result on the side of output than a numerically identical increment in the conservative factor. The classical notation by Cobb and Douglas fits the bill. We have Y = A*F1^a*F2^(1-a) with a > 0,5. At comparable levels of both factors, a change in F1 brings more Y than the identical change in F2. Now, the big claim by Paul Krugman is that if F1 changes functionally, i.e. if its changes really increase the overall Y, resources will flow from F2 to F1, and a self-reinforcing spiral of change forms: F1 induces faster change than F2, therefore resources are being transferred to F1, and that induces even more incremental change in F1, which, in turn, makes Y jump even higher, etc.
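A quick numeric illustration of that asymmetry, with invented numbers; A, a, F1 and F2 are just placeholders, not estimates of anything.

```python
A, a = 1.0, 0.7              # a > 0,5 favours the innovative factor F1
F1, F2 = 10.0, 10.0          # start from comparable levels of both factors

def output(f1, f2):
    # the Cobb-Douglas production function Y = A * F1^a * F2^(1-a)
    return A * f1 ** a * f2 ** (1 - a)

Y0 = output(F1, F2)
gain_from_F1 = output(F1 + 1, F2) - Y0   # identical increment applied to F1...
gain_from_F2 = output(F1, F2 + 1) - Y0   # ...and to F2
print(gain_from_F1, gain_from_F2)        # the F1 increment yields the larger jump in Y
```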

I can apply this logic to my scientific approach to the EneFin concept. I assume that introducing the institutional scheme of EneFin can improve the access to electricity in remote, rural locations in the developing countries, and, consequently, it can contribute to creating whole new markets and social structures. Those local power systems organized along the lines of EneFin are the factor of innovation, the one with the a > 0,5 exponent in the Y = A*F1^a*F2^(1-a) function. The empirical application of this logic requires approximating the value of ‘a’ somehow. In my research on the fundamental link between population and access to energy, I had those exponents nailed down pretty accurately for many countries in the world. I wonder to what extent I can recycle them intellectually for the purposes of my present research.

As I am thinking on this issue, I will keep talking on something else, and the something else in question is the creation of new markets. I go back to the Venerable Man of microeconomics, the Source of All Wisdom, who used to live with his mother when writing the wisdom which he is so reputed for, today. In other words, I am referring to Adam Smith. Still, just to look original, I will quote his ‘Lectures on Justice’ first, rather than going directly to his staple book, namely ‘The Inquiry Into The Nature And Causes of The Wealth of Nations’.

So, in the ‘Lectures on Justice’, Adam Smith presents his basic considerations about contracts (page 130 and on): « That obligation to performance which arises from contract is founded on the reasonable expectation produced by a promise, which considerably differs from a mere declaration of intention. Though I say I have a mind to do such thing for you, yet on account of some occurrences I do not do it, I am not guilty of breach of promise. A promise is a declaration of your desire that the person for whom you promise should depend on you for the performance of it. Of consequence the promise produces an obligation, and the breach of it is an injury. Breach of contract is naturally the slightest of all injuries, because we naturally depend more on what we possess than what is in the hands of others. A man robbed of five pounds thinks himself much more injured than if he had lost five pounds by a contract ».

People make markets, and markets are made of contracts. A contract implies that two or more people want to do some exchange of value, and they want to perform the exchange without coercion. A contract contains a value that one party engages to transfer to the other party, and, possibly, in the case of mutual contracts, another value will be transferred the other way round. There is one thing about contracts and markets, a paradox regarding the role of the state. Private contracts don’t like the government to meddle, but they need the government in order to have any actual force and enforceability. This is one of the central thoughts of another classic, Jean-Jacques Rousseau, in his ‘Social Contract’: if we want enforceable contracts, which can make the intervention of the government superfluous, we need a strong government to back up the enforceability of contracts.

If I want my EneFin scheme to be a game-changer in developing countries, it can work only in countries with relatively well-functioning legal systems. I am thinking about using the metric published by the World Bank, the CPIA property rights and rule-based governance rating.

Still another insight that I have found in Joseph Schumpeter’s ‘Business Cycles’ is that when the entrepreneur, introducing a new technology, struggles against the first inertia of the market, that struggle in itself is a sequence of adaptation, and the strategy(ies) applied in the phases of growth and maturity in the new technology, later on, are the outcome of patterns developed during that early struggle. There is some sort of paradox in that struggle. When the early entrepreneur is progressively building his or her presence in the market, they operate under high uncertainty, and, almost inevitably, do a lot of trial and error, i.e. a lot of adjustments to the initially inaccurate prediction of the future. The developed, more mature version of the newly introduced technology is the outcome of that somehow unique sequence of trials, errors, and adjustments.

Scientifically, that insight means a fundamental uncertainty: once the actual implementation of an entrepreneurial business model, such as EneFin, gets inside that tunnel of learning and struggle, it can take on so many different mutations, and the response of the social environment to those mutations can be so idiosyncratic that we get into really serious economic modelling here.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund  (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’ You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and become my patron. If you decide so, I will be grateful for suggesting me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange of supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Bleich, K., & Guimaraes, R. D. (2016). Renewable Infrastructure Investment Handbook: A Guide for Institutional Investors. In World Economic Forum, Geneva.

[2] Krugman, P. (1991). Increasing returns and economic geography. Journal of political economy, 99(3), 483-499.

[3] Cobb, C. W., & Douglas, P. H. (1928). A Theory of Production. The American Economic Review, 18(1), Supplement: Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association, 139–165.

Which salesman am I?

 

I am working on a specific aspect of the scientific presentation regarding my EneFin concept, namely on transposing the initial idea – a quasi-cooperative scheme between a local supplier of renewable energies and his local customers, in an essentially urban environment (I was thinking about smart cities) – into the context of poor, rural communities in developing countries. Basically, it was worth approaching the topic from the scientific angle, instead of the purely business-planning one. When I do science, I need to show that I have read what other scientists have written and published on a given topic. So I did, and a few articles have given me this precise idea of expanding the initial concept: Muller et al. 2018[1], Du et al. 2016[2], Wang et al. 2017[3], and Moallemi, Malekpour 2018[4].

I like feeling that the things I do are useful to somebody. I mean, not just interesting, but like really useful. When I write on this blog, I like the thought that some students in social sciences could use the methods presented in their own learning, or that some teachers in social sciences could get inspired. I’m OK with inspiring negatively. If some academic in social sciences, after reading some of my writing, says ‘This Wasniewski guy is one of the dumbest and most annoying people I have ever read anything written by, and I want to prove it by my own research!’, I’m fine with that. This is inspiration, too.

Science is like a screwdriver: the more different contexts you can use your screwdriver in, the more useful it is. This is, by the way, a very scientific approach to economic utility. The more functions a thing can perform, the greater its aggregate utility. So I want those things I write to be useful, and making them functional in more contexts increases their utility. That’s why applying the initial, essentially urban idea of EneFin to the context of alleviating poverty in developing countries is an interesting challenge.

Here is my general method. I imagine a rural community in some remote location, without regular access to electricity at all. All they have are diesel generators. According to Breyer et al. 2010[5], even in the most favourable conditions, the LCOE (Levelized Cost Of Electricity) for energy generated out of diesel is like 0,16 – 0,34 €/kWh. Those most favourable conditions are made of a relatively low price of crude oil, and, last but not least, the virtual absence of transportation costs as regards the diesel oil itself. In other words, that 0,16 – 0,34 €/kWh is essentially relevant for a diesel generator located right by the commercial port where diesel oil is being unloaded from a tanker ship. Still, we are talking about a remote rural location, and that means far from commercial ports. Diesel has to come there by road, mostly. According to a blog post which I found (OK, Google found) at the blog of the Golden Valley Electric Association, that cost per 1 kWh of electricity could even go up to US$ 0,64 = €0,54.

Technological change brings alternatives to that, in the form of renewable energies. Photovoltaic installations come at a really low cost: their LCOE is already gravitating towards €0,05 per kWh. Onshore wind and small hydro are quite close to that level. Switching from diesel generators to renewables equals the same type of transition that I already mentioned in « Couldn’t they have predicted that? », i.e. from a bracket of relatively high prices of energy to one of much lower prices (IRENA 2018[6]).

Here comes the big difference between an urban environment in Europe, and a rural community in a developing country. In the former, shifting from higher prices of energy to lower ones means, in the first place, an aggregate saving on energy bills, which can be subsequently spent on other economic utilities. In the latter, lower price of energy means the possibility of doing things those people simply couldn’t afford before: reading at night, powering a computer 24/24, keeping food in a fridge, using electric tools in some small business etc. More social roles define themselves, more businesses start up; more jobs, crafts and professions develop. It is a quantum leap.

Analytically, the initially lonely price of energy from diesel generators, or PD(t), gets company in the form of energy from renewable sources, PRE(t). As I have already pointed out, PD(t) > PRE(t). The (t) symbol means a moment in time. It is a scientific habit to add moments to categories, like price. Things just need time in order to happen, man. A good price needs to have a (t), if it is to prove its value.

Now, I try to imagine the socio-economic context of PD(t) > PRE(t). If just the diesel generators are available, thus if PD(t) is on its own, a certain consumption of energy occurs. Some people are like 100% on the D (i.e. diesel) energy, and they consume QD(t) = QE(t) kilowatt hours. The aggregate QE(t) is their total use of energy. Some people are to some extent on diesel power, and yet, for various reasons (i.e. lack of money, lack of permanent physical access to a generator etc.), that QD(t) does not cover their QE(t) entirely. I write it as QD(t) = a*QE(t) and 0 < a < 1. Finally, there are people for whom the diesel power is completely out of reach, and, temporarily, their QE(t) = 0.

In a population of N people, I have, thus, three subsets, made, respectively, of ‘m’ people for whom QD(t) = QE(t), ‘p’ people for whom QD(t) = a*QE(t) with 0 < a < 1, and ‘q’ people on the strict QE(t) = 0 diet. When renewable energies are being introduced, at a price PRE(t+1) < PD(t+1), what happens is a new market, balanced or monopolized at the price PRE(t+1) and at the aggregate quantity QRE(t+1), and people start choosing. As they choose, they actually make that QRE(t+1) happen. Among those who were at QE(t) = 0, an aggregate b*QE(t+1) flocks towards QRE(t+1), with 0 < b ≤ 1. In the subset of the QD(t) = a*QE(t) users, at least (1-a)*QE(t+1) goes to PRE(t+1) and QRE(t+1), just as some c*QD(t) out of the QD(t) = QE(t) users, with 0 ≤ c ≤ 1.
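A minimal numeric sketch of how that new aggregate QRE(t+1) can form; all the numbers are invented, only the roles of a, b and c follow the definitions above.

```python
# Hypothetical population: m people fully on diesel, p people partly covered, q people with no power at all
m, p, q = 40, 35, 25
QE_per_person = 1750.0        # potential consumption per person, kWh a year (an invented figure)
a, b, c = 0.6, 0.8, 0.3       # diesel coverage of the p subset, uptake among the q subset, switchers among the m subset

QD_full = m * QE_per_person                    # QD(t) = QE(t) for the m people
QRE_next = (q * b * QE_per_person              # the formerly excluded flock towards renewables
            + p * (1 - a) * QE_per_person      # the uncovered part of the partial users' needs
            + c * QD_full)                     # some fully-covered diesel users switch as well
print(round(QRE_next), "kWh a year demanded at PRE(t+1)")
```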

It makes a lot of different Qs. Time to put them sort of coherently together. What sticks its head through that multitude of Qs is the underlying assumption, which I have just figured out I had made before, that in developing countries there is a significant gap between that sort of full-swing-full-supply consumption of energy, which I can call ‘potential consumption’, or QE(t), on the one hand, and the real, actual consumption, or QA(t). Intuitively, QE(t) > QA(t), I mean way ‘>’.

I like checking my theory with facts. I know, might look not very scientific, but I can’t help it: I just like reality. I go to the website of the World Bank and I check their data on the average consumption of energy per capita. I try to find out a reference level for QE(t) > QA(t), i.e. I want to find a scale of magnitude in QA(t), and from that to infer something about QE(t). The last (t) that yields a more or less comprehensive review of QA(t) is 2014, and so I settle for QA(2014). In t = 2014, the country with the lowest consumption of energy per capita, in kilograms of oil equivalent, was technically South Sudan: QA(2014) = 60,73 kg of oil equivalent = 60,73*11,63 kWh = 706,25 kWh. Still, South Sudan started being present in this particular statistic only in 2012. Thus, if I decide to move my (t) back in ‘t’, there is not much moving to do in this case.

Long story short, I take the next least energy-consuming country on the list: Niger. Niger displays a QA(2014) = 150,73 kg of oil equivalent per person per year = 1753,04 kWh per person per year. I check the energy profile of Niger with the International Energy Agency. Niger is really a good case here. Their total QA(2014) = 2 649 ktoe (kilotons of oil equivalent), where 2 063 ktoe = 77,9% consists in waste and biofuel burnt directly for residential purposes, without even being transformed into electricity. Speaking of the wolf, electricity strictly spoken makes just 55 ktoe in the final consumption, thus 55/2649 = 2% of the total. The remaining part of the cocktail are oil products – 506 ktoe = 19,1% –  mostly made domestically from the prevalently domestic crude oil, and burnt principally in transport (388 ktoe), and then in industry (90 ktoe). Households burn just 20 ktoe of oil products per year.

That strange cocktail of energies reflects in the percentages that Niger displays in the World Bank data regarding the share of renewable energies in the overall consumption of energy, as well as in the generation of electricity. As for the former, Niger is, involuntarily, in the world’s vanguard of renewables, with 78,14% coming from renewables. Strange? Well, life is strange. Biofuels are, technically, a renewable source of energy. When you burn the wood and straw that grows around, there will be some new growing around, whence renewability. Still, that biomass in Niger is being just burnt, without transformation of the resulting thermal energy into electric power. As we pass to data on the share of renewables in the output of electricity, Niger is at 0,58%. Not much.

From there, I have many possible paths to follow so as to answer the basic question: ‘What can Niger get out of enriching their energy base with renewables, possibly using an institutional scheme in the lines of the EneFin concept?’. My practical side tells me to look for a benchmark, i.e. for another country in Africa, where the share of renewable energy in the output of electricity is slightly higher than in Niger, without being lightyears away. Here, surprise awaits: there are not really a lot of African countries close to Niger’s rank, regarding this particular metric. There is South Africa, with 1,39% of their electricity coming from renewable sources. Then, after a long gap, comes Senegal, with 10,43% of electricity from renewables.

I quickly check those two countries with the International Energy Agency. South Africa, in terms of energy, is generally coal and oil-oriented, and looks like not the best benchmark in the world for what I want to study. They are thick in energy, by the way: QA(2014) = 2 695,73 kg of oil equivalent, nearly 18 times the level of Niger. Very much the same with Senegal: it is like Niger topped with a large oil-based economy, and with a QA(2014) = 272,08 kg of oil equivalent. Sorry, I have to move further up the ranking of African countries in terms of renewables’ share in the output of electricity. Here comes Nigeria, 17,6% of electricity from renewables, and it is like a bigger brother of Niger: 86% of energy comes from the direct burning of biofuels and waste, only those biofuels are like 50 times more than in Niger. Their QA(2014) = 763,4 kg of oil equivalent per person per year.

I check Cote d’Ivoire, 23,93% of electricity from renewable sources, and I get the same, biofuels-dominated landscape. Gabon, Tanzania, Angola, Zimbabwe: all of them, whatever their exact metric as for the share of renewables in the output of electricity, have mostly biofuels as renewable sources. Ghana, QA(2014) = 335,05, Mozambique, QA(2014) = 427,6, and Zambia, QA(2014) = 635,5, present a slightly different profile, with a noticeable share of hydro, but they still rely heavily on biofuels.

In general, Africa seems to love biofuel, and to be largely ignoring the solar, the wind, and the hydro. This is a surprise. They have a lot of sunlight and sun heat, over there, for one. I started all my research on renewable energies, back in winter 2016, on the inspiration I had from the Ouarzazate-Noor Project in Morocco (see official updates: 2018, 2014, 2011). I imagined that Africa should be developing a huge capacity in renewable sources other than biofuels.

There is that anecdote, to find in textbooks of marketing. Two salesmen of a footwear company are sent to a remote province in a developing country, to research the local market. Everybody around walks barefoot. Salesman A calls his boss and says there are absolutely no market prospects whatsoever, as all the locals walk barefoot. Salesman B makes his call and, using the same premise – no shoes spotted locally at all – concludes there is a huge market to exploit.

Which salesman am I? Being A, I should conclude that schemes like EneFin, in African countries, should serve mostly to develop the usage of biofuels. Still, I am tempted to go B. As the solar, the hydro and the wind power are conspicuous by their absence in Africa, this could be precisely the avenue to exploit.

What is there exactly to exploit, in terms of economic gains? The cursory study of African countries with respect to their energy use per capita shows huge disparities. The most notable one is between countries relying mostly on biofuels, on the one hand, and those with more complex energy bases, on the other. The difference in terms of the QA(2014) consumption of energy per capita is a multiple, not a percentage margin. Introducing a new source of energy into those economies looks like a huge game-changer.

There is that database I built, last year, out of Penn Tables 9.0, and from stuff published by the World Bank, and that database serves me to do like those big econometric tests. Cool stuff. Works well. Everybody should have one. You can see some examples of how I used it last year, if you care to read « Conversations between the dead and the living (no candles) » or « Core and periphery ». I decided to test my claim, namely that introducing more energy per capita into an economy will contribute to the capita in question having more of average Gross Domestic Product, per capita of course.

I made a simple linear equation with natural logarithms of, respectively, GDP per capita, expenditure side, and energy use per capita. It looks like ln(GDP per capita) = a*ln(Energy per capita) + constant. That’s all. No scale factors, no controlling variables. Just pure, sheer connection between energy and output. A beauty. I am having a first go at the whole sample in my database, with that most basic equation.
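A sketch of how such a regression can be run, assuming the data sits in a pandas DataFrame with columns named gdp_per_capita and energy_per_capita; the column names are mine, not those of the actual database.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def log_log_regression(df: pd.DataFrame):
    """ln(GDP per capita) = a * ln(Energy per capita) + constant, with robust standard errors."""
    y = np.log(df["gdp_per_capita"])
    X = sm.add_constant(np.log(df["energy_per_capita"]))
    return sm.OLS(y, X, missing="drop").fit(cov_type="HC1")  # heteroskedasticity-robust errors

# model = log_log_regression(my_database)   # my_database is a hypothetical DataFrame
# print(model.summary())
```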

Table 1

Explained variable: ln(GDP per capita), N = 5498, R² = 0,752

Explanatory variable       Coefficient of regression    (Robust) Standard Error    Significance (Student's t test)
ln(Energy per capita)      0,947                        (0,007)                    p < 0,001
Constant                   2,151                        (0,053)                    p < 0,001

Looks promising. When both are taken in natural logarithms, variance in the consumption of energy per capita explains like 75% of the variance in GDP per capita. In other words, generally speaking, if any institutional scheme allows enriching the energy base of a country – any country – it is highly likely to go along with higher aggregate output per capita.

A (partial) summing up is due. The idea of implementing a contractual scheme like EneFin in developing countries seems to make sense. The gains to expect are actually much higher than those I initially envisaged for this business concept in the urban environments of European countries. If I want to go after a scientific development of this idea, the avenue of developing countries and their rural regions seems definitely promising.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund  (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’ You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and become my patron. If you decide so, I will be grateful for suggesting me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange of supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Müller, M. F., Thompson, S. E., & Gadgil, A. J. (2018). Estimating the price (in) elasticity of off-grid electricity demand. Development Engineering, 3, 12-22.

[2] Du, F., Zhang, J., Li, H., Yan, J., Galloway, S., & Lo, K. L. (2016). Modelling the impact of social network on energy savings. Applied Energy, 178, 56-65.

[3] Wang, G., Zhang, Q., Li, H., Li, Y., & Chen, S. (2017). The impact of social network on the adoption of real-time electricity pricing mechanism. Energy Procedia, 142, 3154-3159.

[4] Moallemi, E. A., & Malekpour, S. (2018). A participatory exploratory modelling approach for long-term planning in energy transitions. Energy research & social science, 35, 205-216.

[5] Breyer, C., Gerlach, A., Schäfer, D., & Schmid, J. (2010, December). Fuel-parity: new very large and sustainable market segments for PV systems. In Energy Conference and Exhibition (EnergyCon), 2010 IEEE International (pp. 406-411). IEEE.

[6] IRENA (2018), Renewable Power Generation Costs in 2017, International Renewable Energy Agency, Abu Dhabi, ISBN 978-92-9260-040-2

Something to exploit subsequently

In my last three updates, I’ve been turning around one specific topic, namely the technology of wind turbines with vertical axis. Three updates ago, in Ma petite turbine éolienne à l’axe vertical, I opened up the topic by studying the case of a particular invention, filed for patenting with the European Patent Office by a group of Slovakian inventors. Just in order to place this one in a broader context, I did some semantic rummaging, with the help of https://patents.google.com. I basically wanted to count how many such inventions had been filed for patenting in different regions of the world. In my research I have been using, for years, the number of patent applications as a metric of aggregate effort in invention, and so I did regarding those wind turbines with vertical axis.

This is when it started to turn weird. Apparently, invention in this specific field follows a stunningly regular trend, and is just as stunningly correlated with the metrics of renewable energies: the share of renewables in the overall output of energy (see Time to come to the ad rem) and the aggregate output of said renewables, in metric tons of oil equivalent (see Je corrèle). When I say ‘stunningly correlated’, I really mean it. In social sciences, coefficients of correlation around r = 0,95 happen in truly rare cases, and when they happen, the first reflex of a serious social scientist is to assume that something is messed up in the source data. This is one of those cases. I am still trying to wrap my mind around the fact that the semantic incidence of some logical constructs in patent applications can coincide so strongly with the fundamental metrics of energy consumption.

In this update, I want to return to that business concept of mine, the EneFin project. I am preparing a business plan for this one. Actually I have been preparing it for weeks, which you can find the track of in the past posts on this blog. Long story short, EneFin is the concept of a FinTech utility, which would allow the creators of new projects in the field of renewable energies to acquire capital, via a scheme combining the sales of futures contracts, on the future output of the business, with the issuance of equity. You can find more explanation in Traps and loopholes, for example.

I want to study this particular case, that wind turbine described in the patent application no. EP 3 214 303 A1, under the EneFin angle. How can a FinTech scheme like the one I am coming up with work for a business based on this particular invention? I start with figuring out the kind of business structure to build around this invention. Wind turbines with vertical axis are generally small stuff, distinctive from their bulky cousins with horizontal axis by the fact that they can work in close proximity to human habitat. A wind turbine with vertical axis is something you can essentially install in your yard, and the two of you will be just fine together, provided there is enough wind in your yard. As for this particular aspect, the quick technological research that I documented in Ma petite turbine éolienne à l’axe vertical showed that the really interesting places for using wind turbines with vertical axis are, for example, the coastal regions of Europe, with the average wind speed like 12 to 13 metres per second. With that amount of Aeol, this particular turbine starts being serious, at more than 1 MW of electrical capacity. Mind you, it doesn’t have to be coastal, that place where you install it. The upper storeys of a skyscraper, hilltops – in general all the places where you cannot expect your straw hat to hold on your head without a ribbon tied under your chin – are the right place to use that device shaped like a DNA helix.

This particular technology is unlikely to breed power plants in the traditional sense of the term. The whole idea of wind turbines with vertical axis is to make them more apt to being installed in the immediate vicinity of human habitat. You can install them completely scattered or a bit clustered, for example on the roof of a building. I am wrapping my mind around the practical idea, and I start the wrapping by doing two things: maths and pictures. As for maths, PW = ½ * Cp * p * A * v³ is the general name of the game. ‘PW’ stands for the electric power of a wind turbine with vertical axis, and said power stands on air, which has a density p = 1,225 kg/m³, divided by half, so basically that air is dense, in the equation, at sort of p = 0,6125 kg/m³. Whatever speed of wind ‘v’ that air blows at, in this particular equation it blows at the third power of that speed, or v³. That half the density of air, multiplied by the cubic expression of wind speed, is the exogenous force that Mother Nature supplies here and now.

What Mother Nature supplies is being taken on the blades of the turbine, with a working surface of ‘A’, and that surface works with an average efficiency of Cp. That efficiency technically lies between 0 and 1, and actually, for this specific type of machine, between 59% and 72% (consult Bhutta et al. 2012[1]), which I average at 65,5%. All in all, with that density of air cut by half and the efficiency being what it is, my average wind turbine with vertical axis can take like 40,1% of the arithmetical product ‘working surface of the blades times wind speed to the third power’. Reminder, from school: power first, multiplication next. I mean, don’t raise to the cubic power the product of wind speed and blade surface. Wind speed to the cubic power first, then multiply by the blade surface.
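The arithmetic, written down so that anyone can plug in their own local wind; a minimal sketch with my own function names. With SI inputs (m/s, m², kg/m³) the formula returns watts, and the second function simply divides the turbine’s electric power by the average per-person household load.

```python
def wind_power(wind_speed, blade_surface, efficiency=0.655, air_density=1.225):
    # PW = 1/2 * Cp * p * A * v^3; with SI inputs the result comes out in watts
    return 0.5 * efficiency * air_density * blade_surface * wind_speed ** 3

def people_served(turbine_power_kw, annual_kwh_per_person):
    # average household load per person: yearly consumption spread over the 8760 hours of a basic year
    kw_per_person = annual_kwh_per_person / 8760
    return turbine_power_kw / kw_per_person
```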

I pass to pictures, now. A picture is mostly a picture of something, even if that something is just in my mind. My first something is a place I like very much: Lisbon, Portugal, and more specifically the district of Belem, a good kick westwards from the Praca de Comercio. It is beautiful, and really windy. Here below, I am giving a graphical idea of how those small wind turbines with vertical axis could be located. Reminder: each of them, according to the prototype in the patent application no. EP 3 214 303 A1, needs like 5 m² of space to work. Let’s make it 20 m², just to allow the wind to pass between those wind turbines.


In Lisbon, the average speed of wind is 10 mph, or 4,47 m/s, and that gives an exogenous energy of the wind like 54,72 kilowatts, for whoever can take it. That prototype has a real working surface of its blades of A = 1,334 m², which gives, at the end of the day, an electric power of PW = 47,81 kW. In Portugal, the average consumption of energy at the level of households (so transport and industry excluded) seems to be like 4 214,55 kWh a year per person. I divide it by the 8760 hours in your basic year (leap years make 8784 hours), which yields 0,48 kW required per person. My wind turbine could power 99 people in their household needs. If they start using that juice for transport, like charging their electric cars, or the batteries of their electric bicycles, that 99 could drop to 50 – 60, probably not less.

Hence, what my mind is wrapping around, right now, is a business that would manage the installation and exploitation of wind turbines with vertical axis, in groups of a few dozen people, so like 20 – 50 households. Good, let’s try to move on: Lyon, France. Not very coastal, as the nearest sea is more than 300 km away, but: a) it is quite windy, due to the specific circulation of air along the valleys of two rivers, the Rhône and the Saône, b) they are reconstructing a whole district, namely the Confluence one, as a smart city, c) I f*****g love the place. Average wind speed over the year: 4,6 m/s, which allows Mother Nature to supply around 52,25 kW to my prototype. The prototype is supposed to serve a population where the average person needs 7 291,18 kWh for household use, whence 63 people being served by my prototype, which could drop to like 20 – 30 people, if said people power their transportation devices with their household juice.


Good, last shot: Amsterdam. Never been there, mind you, but they are coastal, statistically speaking quite energy consuming, and apparently keen on innovation. The average wind speed there is 5,14 m/s, which makes my prototype generate a power of 72,72 kilowatts. With the average Dutch consuming around 8 369,15 kWh for household use, 76 such average Dutch could use one such turbine.


Maths and pictures made me clarify a business concept, or rather two business concepts. Concept #1 is simple manufacturing of those wind turbines. Here, EneFin (see Traps and loopholes and the subsequent ones) does not really fit. I remind you that the EneFin concept is based on the observable discrepancy between two categories of final prices for electricity: those for big institutional users (low), and those for households and small businesses (high). Long story short, EneFin takes its appeal from the coincidence of very different prices for the same good (i.e. electricity), and from the juicy margin of value added hidden behind that coincidence. That Concept #1 is essentially industrial, and the value added to expect does not really blow one’s hat off. Neither should we expect any significant price discrepancy between categories of customers. Besides, whilst futures contracts on electricity are already widely practiced in the wholesale market, and the EneFin concept just attempts to transfer the idea to the retail market, I haven’t seen much use of futures contracts in the market of typical industrial products.

Concept #2, for exploiting this particular invention, would be complex, combining the engineering of those turbines so as to make the best version for the given location, their installation, then maintenance and management. The business entity in question would combine manufacturing, management of a value chain, site management, design and engineering, and maintenance. Here, that essentially cooperative version of the EneFin concept would have more space to breathe. We can imagine a site, made of 200 households, who commission an independent company to engineer a local power system, based on wind turbines with vertical axis, and to install, manage, and maintain that facility. In the price paid for particular components of that complex business scheme, those customers could progressively buy into that business entity.

Now, I am following another one of my research routines: I am deconstructing the business model. As truly enlightened a social thinker, I am searching online for the phrase ‘wind turbine investor relations’. To the mildly initiated: publicly listed companies have to maintain a special type of website, called, precisely ‘Investor Relations’, where they publish information about their business cuisine. This is where you can find annual reports, for example. The advantage of following this specific track is the easy access to information I am looking for, like the basic financials. The caveat is that I am browsing through relatively big businesses, big enough to be listed publicly, at least. Hence, I am skipping all the stories of small businesses.

Thus, the data my internal curious ape can find by peeling those ‘investor relations’ bananas is representative for relatively big, somehow established business structures. It can serve to build something like a target vision of what is likely to be created, in a particular field of business, after the early childhood of a project is over. And so I asked dr Google, and, just to make sure, I cross-asked dr Yandex, what they can tell me if I ask around for ‘wind turbine investor relations’. Both yielded more or less the same list of top hits: Nordex, Vestas, Siemens Gamesa, Senvion, LM Wind Power, SkyWolf, and Arise. I collected their annual reports, with the exception of SkyWolf, which, for some reason, does not publish any on their ‘investor relations’ page. I followed this particular suspect home, I asked around who they are hanging out with, and so I came to visiting their page at Nasdaq, and I finally got it. They are at the stage of their IPO (Initial Public Offering), so they are still sort of timid in annual reporting. Still, I could download their preliminary prospectus for that IPO, dated April 20th, 2018.

There is that thing about annual reports and prospectuses: they are both disclosure and public relations. Technically, an annual report should, essentially, be reporting about the things material to the business in question. Still, this type of document is also used for, well… for the show. Reading an annual report is good training at reading between the lines, and, more generally, at figuring out how to figure out when people are lying.

Truth has patterns, and lies have patterns as well, although the patterns of truth are somehow more salient. The truth that I look for in annual reports is mostly in the financials. Here is a first glimpse of these:

Company                          Revenues     Net profit (loss)    Assets       Equity       Ratio assets to revenue
Nordex 2017, EUR mlns            3 127,40     0,30                 2 807,60     919,00       0,90
Vestas 2017, EUR mlns            9 953,00     894,00               10 871,00    3 112,00     1,09
Siemens Gamesa 2017, EUR mlns    6 538,20     (135,00)             16 467,13    6 449,87     2,52
Senvion 2017, EUR mlns           1 889,90     (121,10)             1 808,10     230,10       0,96
LM Group 2016, EUR mlns          1 059,00     52,00                1 198,00     445,00       1,13
SkyWolf 2017, USD (!)            49 000       (592 600)            139 730      (673 500)    2,85

As I see it, the business of doing business on installing and managing local power installations can go in truly divergent directions. You can start as SkyWolf is starting, with a ‘debt to assets’ ratio akin to the best (worst?) years of General Motors, or you can have that comfy financial cushion supplied by a big mother ship, as it is the case for Siemens Gamesa. One pattern seems to emerge: the ‘assets to revenue’ ratio seems to oscillate around 1,00. In other words, each dollar invoiced on our customers needs to be backed up by one dollar in our balance sheet. Something to exploit subsequently.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund  (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’ You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and become my patron. If you decide so, I will be grateful for suggesting me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange of supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Bhutta, M. M. A., Hayat, N., Farooq, A. U., Ali, Z., Jamil, S. R., & Hussain, Z. (2012). Vertical axis wind turbine – A review of various configurations and design techniques. Renewable and Sustainable Energy Reviews, 16, 1926–1939.