The mind-blowing hydro

My editorial on YouTube

There is that thing about me: I am a strange combination of consistency and ADHD. If you have ever read one of Terry Pratchett’s novels from the ‘Discworld’ series, you probably know the fictional characters called golems: made of clay, with a logical structure – a ‘chem’ – put in their heads, they can work on something endlessly. In my head, there are chems, which just push me to do things over and over and over again. Writing and publishing on this research blog is very much along those lines. I can stop whenever I want; I just don’t want to right now. Yet, when I do a lot about one chem, I start craving another one, nearby but not quite in the same intellectual location.

Right now, I am working on two big things. Firstly, I feel like drawing a provisional bottom line under those two years of science writing on my blog. Secondly, I want to put together an investment project that would help my city, my country and my continent – thus Krakow, Poland, and Europe – to face one of the big challenges resulting from climate change: water management. Interestingly, I started to work on the latter first, and only then did I begin to phrase out the former. Let me explain. As I work on that project of water management, which I have provisionally named « Energy Ponds » (see, for example, « All hope is not lost: the countryside is still exposed »), I use the « Project Navigator », made available by courtesy of the International Renewable Energy Agency (IRENA). The logic built into the « Project Navigator » makes me return, over and over again, to one central question: ‘You, Krzysztof Wasniewski, with your science and your personal energy, how are you aligned with that idea of yours? How can you convince other people to put their money and their personal energy into building on your concept?’.

And so I am asking myself: ‘What’s your science, bro? What can you get people interested in, with rational grounds and intelligible evidence?’.

As I think about it, my first basic claim is that we can do it together in a smart way. We can act as a collective intelligence. This statement can be considered a manifestation of the so-called “Bignetti model” in cognitive sciences (Bignetti 2014[1]; Bignetti et al. 2017[2]; Bignetti 2018[3]): for the last two years, I have been progressively centering my work around the topic of collective intelligence, without even being quite aware of it. As I was working on another book of mine, entitled “Capitalism and Political Power”, I came across a puzzling quantitative fact: as a civilization, we have more and more money per unit of real output[4], and, judging by the literature I have reviewed, we don’t seem to understand why that is happening. Some scholars complain about the allegedly excessive ‘financialization of the economy’ (Krippner 2005[5]; Foster 2007[6]; Stockhammer 2010[7]), yet, beyond easy generalizations about ‘greed’ or an ‘unhinged race for profit’, no scientifically coherent explanation of this phenomenon is on offer.

As I was trying to understand this phenomenon, shades of correlations came into my focus. I could see, for example, that the growing amount of money per unit of real output has been accompanied by a growing amount of energy consumed per person per year in the global economy[8]. Do we convert energy into money, or the other way around? How can it be happening? In 2008, the proportion between the global supply of broad money and the global real output passed the magical threshold of 100%. Intriguingly, the same year, the share of urban population in the total human population passed the threshold of 50%[9], and the share of renewable energy in the total final consumption of energy, at the global scale, took off for the first time since 1999, and has kept growing since[10]. I started having that diffuse feeling that, as a civilization, we are really up to something, right now, and money is acting like a social hormone, facilitating change.

We change as we learn, and we learn as we experiment with the things we invent. How can I represent, in a logically coherent way, collective learning through experimentation? When an individual, or a clearly organized group, learns through experimentation, the sequence is pretty straightforward: we phrase out an intelligible definition of the problem to solve, we invent various solutions, we test them, we sum up the results, we select the seemingly best solution among those tested, and we repeat the whole sequence. As I kept digging into the topics of energy, technological change, and the velocity of money, I started formulating the outline of a complex hypothesis: what if we, humans, are collectively intelligent about building, purposefully and semi-consciously, social structures supposed to serve as vessels for future collective experiments?

My second claim is that one of the smartest things we can do about climate change is, besides reducing our carbon footprint, to take proper care of our food and energy base. In Europe, climate change is mostly visible as a complex disruption to our water system, and we can observe it in our local rivers. That’s the thing about Europe: we have built our civilization, on this tiny, mountainous continent, in close connection with rivers. Right, I could call them, scientifically, ‘inland waterways’, but I think that when I say ‘river’, anybody who reads it understands intuitively. Anyway, what we call today ‘the European heritage’ has grown next to EVENLY FLOWING rivers. Once again: evenly flowing. It means that we, Europeans, are used to seeing the neighbouring river as a steady flow. Streams and creeks can overflow after heavy rains, and rivers can swell, but for centuries all that stuff happened with predictable recurrence.

Now, with the advent of climate change, we can observe three water-related phenomena. Firstly, as the English saying goes, it never rains but it pours. The steady rhythm and predictable volume of precipitation we are used to in Europe (mostly in its Northern part) is progressively giving ground to sudden downpours, interspersed with periods of drought that are hardly predictable in their length. First moral of the fairy tale: if we have less and less of the kind of water that falls from the sky slowly and predictably, we need to learn how to capture and retain the kind of water that falls abruptly and unscheduled. Secondly, even as we have somehow adapted to the new kind of sudden floods, we have a big challenge ahead: droughts are already impacting, directly and indirectly, the food market in Europe, but we don’t have enough science yet to predict accurately either their occurrence or their local impact. Still, one pattern is already emerging: whatever happens, i.e. floods or droughts, rural populations in Europe suffer more than urban ones (see my review of literature in « All hope is not lost: the countryside is still exposed »). Second moral of the fairy tale: whatever we do about water management in these new conditions, in Europe, we need to take care of agriculture first, and thus to create new infrastructures so as to shield farms against floods and droughts, with cities coming next in line.

Thirdly, the most obviously observable manifestation of floods and droughts is variation in the flow of local rivers. By the way, that variation is already impacting the energy sector: when the flow of European rivers falls too low, we need to scale down the output of power plants, as they do not have enough water to cool themselves. Rivers are drainpipes of the neighbouring land. Steady flow in a river is closely correlated with a steady level of water in the ground, both in the soil and in the mineral layers underneath. Third moral of the fairy tale: if we figure out workable ways of retaining as much rainfall in the ground as possible, we can prevent all three disasters at the same time, i.e. local floods, droughts, and economically adverse variations in the flow of local rivers.

I keep thinking about that ownership-of-the-project thing I need to cope with when using the « Project Navigator » by IRENA. How to make local communities own, as much as possible, both the resources needed for the project, and its outcomes? Here, precisely, I need to use my science, whatever it is. People at IRENA have experience in such projects, which I don’t. I need to squeeze my brain and extract from it any useful piece of coherent understanding, to replace experience. I am advancing step by step. I intuitively associate ownership with property rights, i.e. with a set of claims on something – things or rights – together with a set of liberties of action regarding the same things or rights. Ownership on the part of a local community means that claims and liberties should be somehow pooled, and the best idea that comes to my mind is an investment fund. Here, a word of explanation is due: an investment fund is a general concept, whose actual, institutional embodiment can take the shape of an investment fund in the strict sense, for one, and yet other legal forms are possible, such as a trust, a joint-stock company, a crowdfunding platform, or even a cryptocurrency operating in a controlled network. The general concept of an investment fund consists in taking a population of investors and making them pool their capital resources over a set of entrepreneurial projects, via the general legal construct of participatory titles: equity-based securities, debt-based ones, insurance, futures contracts, and combinations thereof. Mind you, governments are investment funds too, as regards their capacity to move capital around. They somehow express the interest of their respective populations in a handful of investment projects, take those populations’ tax money and spread it among said projects. That general concept of an investment fund is a good expression of collective intelligence.
As for that social structure for collective experimentation, which I mentioned a few paragraphs ago, an investment fund is an excellent example. It allows spreading resources over a number of ventures considered as local experiments.

Now, I am dicing a few ideas for a financial scheme based on the general concept of an investment fund, as collectively intelligent as possible, in order to face the new challenges of climate change through new infrastructures for water management. I start with reformulating the basic technological concept. Water-powered pumps are immersed in the stream of a river. They use the kinetic energy of that stream to pump water up and further away, more specifically into elevated water towers, from which the water falls back to ground level; as it flows down, it powers relatively small hydroelectric turbines, and it ends up in a network of ponds, vegetal complexes and channel-like ditches, all made with the purpose of retaining as much water as possible. Those structures can be connected to others, destined directly to capture rainwater. I have been thinking about two setups, respectively for rural environments and for urban ones. In the rural landscape, those ponds and channels can be profiled so as to collect rainwater from the surface of the ground and conduct it into its deeper layers, through some system of inverted drainage. I think it would be possible, under proper geological conditions, to reverse-drain rainwater into deep aquifers, which the neighbouring artesian wells can tap into. In the urban context, I would like to know more about the Chinese technologies used in their Sponge Cities programme (see Jiang et al. 2018[11]).
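To get an order of magnitude for that technological concept: the electric power recoverable from water descending from a tower is roughly P = ρ · g · Q · h · η, with Q the flow and h the height of the tower. A minimal sketch, where the flow and efficiency figures are my own illustrative assumptions, not engineering data:

```python
# Back-of-envelope: electric power from water falling through a turbine.
# Flow, head, and efficiency figures are illustrative assumptions.
RHO = 1000.0  # density of water, kg/m3
G = 9.81      # gravitational acceleration, m/s2

def hydro_power_watts(flow_m3_s, head_m, efficiency=0.75):
    """Electric power (W) from a flow of water falling through a given head."""
    return RHO * G * flow_m3_s * head_m * efficiency

# Example: 0.5 m3/s descending from a 10-metre tower, at 75% overall efficiency
print(hydro_power_watts(0.5, 10.0))  # roughly 37 kW
```

Nothing more than the potential energy of falling water per unit of time; the interesting part is how many such small turbines a network of towers and ponds could feed.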

The research I have done so far suggests that relatively small, local projects work better for implementing this type of technology than big, national-scale endeavours. Of course, national investment programmes will be welcome as indirect support, but at the end of the day, we need a local community owning a project, possibly through an investment-fund-like institutional arrangement. The economic value conveyed by any kind of participatory title in such a capital structure sums up to the Net Present Value of three cash flows: net proceeds from selling hydroelectricity produced in small water turbines, reduction of the aggregate flood-related risk, and reduction of the drought-related risk. I separate risks connected to floods from those associated with droughts, as they are different in nature. In economic and financial terms, floods are mostly a menace to property, whilst droughts materialize as more volatile prices of food and basic agricultural products.

In order to apprehend accurately the Net Present Value of any cash flow, we need to set a horizon in time. Very tentatively, by interpreting data from 2012, presented in a report published by IRENA (the same IRENA), I assume that relatively demanding investors in Europe expect a full return on their investment within 6.5 years, which I round up to 7 years for the sake of simplicity. Now, I go a bit off the beaten tracks, at least those I have beaten so far. I am going to take the total atmospheric precipitation falling on various European countries, which means rainfall + snowfall, and then try to simulate what amount of ‘NPV = hydroelectricity + reduction of risk from floods and droughts’ (7 years) the retention of that water could represent.
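The NPV arithmetic I have in mind can be sketched as follows; every euro amount and the 8% discount rate are hypothetical placeholders, just to show the structure of the three cash flows over the 7-year horizon:

```python
# Net Present Value of the three cash flows over the 7-year horizon.
# All euro amounts and the 8% discount rate are hypothetical placeholders.
def npv(cash_flows, discount_rate):
    """NPV of annual cash flows received at the end of years 1..n."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

YEARS = 7
hydroelectricity = [120_000] * YEARS  # net proceeds from selling power
flood_risk_cut   = [80_000] * YEARS   # expected flood losses avoided
drought_risk_cut = [60_000] * YEARS   # expected drought losses avoided

total = [h + f + d for h, f, d in
         zip(hydroelectricity, flood_risk_cut, drought_risk_cut)]
print(round(npv(total, 0.08)))  # project value at an 8% discount rate
```

The point of the sketch is the structure, not the numbers: any participatory title in such a scheme is worth the discounted sum of those three streams.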

Let’s waltz. I take data from FAOSTAT regarding precipitation and water retention. As a matter of fact, I made a query of that data regarding a handful of European countries. You can have a look at the corresponding Excel file UNDER THIS LINK. I rearranged the data from this Excel file a bit, so as to have a better idea of what could happen if those European countries I have on my list, my native Poland included, built infrastructures able to retain 2% of the annual rainfall. The coefficient of 2% is loosely based on what Shao et al. (2018[12]) give as the target retention coefficient for the city of Xiamen, China, and its Sponge-City-type investment. I used the formulas I had already phrased out in « Sponge Cities » and in « La marge opérationnelle de $1 539,60 par an par 1 kilowatt » to estimate the amount of electricity possible to produce out of those 2% of annual rainfall elevated, according to my idea, into 10-metre-high water towers. On top of all that, I added, for each country, data regarding the already existing capacity to retain water. All those rearranged numbers can be seen in the Excel file UNDER THIS OTHER LINK (a table would be too big to insert into this update).
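The core of those formulas boils down to the potential energy of the retained water: E = V · ρ · g · h · η. A rough sketch, where the 190 billion m³ of annual precipitation is an illustrative, Poland-sized assumption rather than the exact FAOSTAT figure:

```python
# Potential energy of 2% of annual precipitation run through 10-metre towers.
# The 190e9 m3 precipitation volume is an illustrative, Poland-sized
# assumption, not the exact FAOSTAT figure.
RHO = 1000.0        # kg per m3 of water
G = 9.81            # m/s2
J_PER_KWH = 3.6e6   # joules in one kilowatt-hour

def rainfall_energy_kwh(precip_m3, retention_share=0.02,
                        head_m=10.0, efficiency=0.75):
    """kWh recoverable per year from the retained share of precipitation."""
    mass_kg = precip_m3 * retention_share * RHO
    return mass_kg * G * head_m * efficiency / J_PER_KWH

e = rainfall_energy_kwh(190e9)
print(f"{e / 1e6:.0f} GWh per year")  # tens of GWh at country scale
```

Whatever the exact national figures turn out to be, the calculation scales linearly with the volume of water retained and with the height of the towers.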

The first provisional conclusion I have to make is that I need to completely revise my provisional conclusion from « Sponge Cities », where I claimed that hydroelectricity would stand no chance of paying for any significant investment in sponge-like structures for retaining water. The calculations I have just run show just the opposite: as soon as we consider whole countries as rain-retaining basins, the hydroelectric power, and the cash flow dormant in that water, is just mind-blowing. I think I will need a night of sleep just to check the accuracy of my calculations.

Disturbing as they are, my calculations have another facet. I compare the postulated 2% retention of annual precipitation with the already existing capacity of these national basins to retain water. That capacity is measured, in that second Excel file, by the ‘Coefficient of retention’, which divides the ‘Total internal renewable water resources (IRWR)’ by the annual precipitation, both in 10^9 m3/year. My basic observation is that European countries’ capacity to retain water shows a dispersion very similar to that of the intensity of precipitation, measured in mm per year. Both coefficients vary in a similar proportion, i.e. their respective standard deviations make around 0.4 of their respective means, across the sample of 37 European countries. When I measure it with the Pearson coefficient of correlation between the intensity of rainfall and the capacity to retain it, I get r = 0.63. In general, the more water falls from the sky per 1 m2, the greater the percentage of that water that is retained, as it seems. Another provisional conclusion I make is that the capacity to retain water, in a given country, is some kind of response, possibly both natural and man-engineered, to a relatively big amount of water falling from the sky. It looks as if our hydrological structures, in Europe, had been built to do something with water we momentarily have plenty of, possibly even too much of, and which we should save for later.
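For readers who want to replicate those two measurements on the FAOSTAT data, here is the arithmetic I mean, sketched on a few invented data points standing in for the 37-country sample:

```python
# Dispersion (std dev / mean) and Pearson correlation between rainfall
# intensity and the retention coefficient. The five data points are
# invented placeholders standing in for the 37-country FAOSTAT sample.
from statistics import mean, pstdev

rainfall_mm = [600, 850, 1100, 500, 1400]     # annual precipitation, mm
retention   = [0.30, 0.45, 0.55, 0.25, 0.60]  # IRWR / precipitation

def coeff_of_variation(xs):
    return pstdev(xs) / mean(xs)

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(coeff_of_variation(rainfall_mm))    # dispersion of rainfall
print(pearson_r(rainfall_mm, retention))  # rainfall vs. retention
```

Run on the real sample, the first measure comes out around 0.4 for both variables, and the second yields the r = 0.63 I quote above.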

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful for your thoughts on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Bignetti, E. (2014). The functional role of free-will illusion in cognition: “The Bignetti Model”. Cognitive Systems Research, 31, 45-60.

[2] Bignetti, E., Martuzzi, F., & Tartabini, A. (2017). A Psychophysical Approach to Test: “The Bignetti Model”. Psychol Cogn Sci Open J, 3(1), 24-35.

[3] Bignetti, E. (2018). New Insights into “The Bignetti Model” from Classic and Quantum Mechanics Perspectives. Perspective, 4(1), 24.

[4] https://data.worldbank.org/indicator/FM.LBL.BMNY.GD.ZS last access July 15th, 2019

[5] Krippner, G. R. (2005). The financialization of the American economy. Socio-economic review, 3(2), 173-208.

[6] Foster, J. B. (2007). The financialization of capitalism. Monthly Review, 58(11), 1-12.

[7] Stockhammer, E. (2010). Financialization and the global economy. Political Economy Research Institute Working Paper, 242, 40.

[8] https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE last access July 15th, 2019

[9] https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS last access July 15th, 2019

[10] https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS last access July 15th, 2019

[11] Jiang, Y., Zevenbergen, C., & Ma, Y. (2018). Urban pluvial flooding and stormwater management: A contemporary review of China’s challenges and “sponge cities” strategy. Environmental science & policy, 80, 132-143.

[12] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

All hope is not lost: the countryside is still exposed

My editorial on YouTube

I am focusing on the possible benefits of transforming urban structures of at least some European cities into sponge-like structures, such as described, for example, by Jiang et al. (2018), as well as in my recent updates on this blog (see Sponge Cities). In parallel to reporting my research on this blog, I am developing a corresponding project with the « Project Navigator », made available by courtesy of the International Renewable Energy Agency (IRENA). Figuring out my way through the « Project Navigator » made me aware of the importance of social cohesion in the implementation of such infrastructural projects. Social cohesion means a set of common goals, and an institutional context that allows the appropriation of outcomes. In « Sponge Cities », when studying the case of my hometown, Krakow, Poland, I came to the conclusion that sales of electricity from water turbines incorporated into the infrastructure of a sponge city could hardly pay for the investment needed. On the other hand, significant reduction of the financially quantifiable risk connected to floods and droughts can be an argument. The flood-related risks especially, in Europe, already amount to billions of euros, and we seem to be just at the beginning of the road (Alfieri et al. 2015[1]). Shielding against such risks can possibly make a sound basis for social cohesion, as a common goal. Hence, as I am structuring the complex concept of « Energy Ponds », I start with assessing the risks connected to climate change in European cities, and the possible reduction of those risks through sponge-city-type investments.

I start with a comparative review of Alfieri et al. (2015[2]) as regards flood-related risks, on the one hand, and of Naumann et al. (2015[3]) as well as Vogt et al. (2018[4]) regarding drought-related risks, on the other. As a society, in Europe, we seem to be more at home with floods than with droughts. The former are something we kind of know historically, and with the advent of climate change we just acknowledge more trouble in that department, whilst the latter had been, until recently, something that happens essentially to other people on other continents. The very acknowledgement of droughts as a recurrent risk is a challenge.

Risk is a quantity: this is what I teach my students. It is the probability of occurrence multiplied by the magnitude of damage, should the s**t really hit the fan. Why adopt such an approach? Why not assume that risk is just the likelihood of something bad happening? Well, because risk management is practical. There is only a point in bothering about risk if we can do something about it: insure and cover, hedge, prevent etc. The interesting thing is that all human societies show a recurrent pattern: as soon as we organise somehow, we create something like a reserve of resources, supposed to provide for risk. We are exposed to a possible famine? Good, we make a reserve of food. We risk being invaded by a foreign nation/tribe/village/alien civilisation? Good, we make an army, i.e. a group of people trained and equipped for actions with no immediate utility, just in case. The nearby river can possibly overflow? Good, we dig and move dirt, stone, wood and whatnot, so as to build stopbanks. In each case, we move along the same path: we create a pooled reserve of something, in order to minimize the long-term damage from adverse events.
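That definition of risk as a quantity fits in a few lines of code; the probabilities and damage figures below are invented, purely for illustration:

```python
# Risk as a quantity: probability of occurrence times magnitude of damage.
# The probabilities and damage figures are invented, for illustration only.
def expected_loss(events):
    """events: list of (annual_probability, damage_if_it_happens) pairs."""
    return sum(p * damage for p, damage in events)

flood_events = [
    (0.10, 2_000_000),   # a '10-year' flood, moderate damage to property
    (0.01, 15_000_000),  # a '100-year' flood, rarer but far more destructive
]

# The reserve to put aside is the sum of probability-weighted damages.
reserve = expected_loss(flood_events)
print(round(reserve))  # → 350000
```

That probability-weighted sum is exactly the pooled reserve logic: the granary, the army, the stopbank, each sized to the expected damage it provides for.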

Now, if we wonder how much food we need to have in stock in case of famine, sooner or later we come to the conclusion that it is the individual need for food multiplied by the number of people likely to be starving. That likelihood is not evenly distributed across the population: some people are more exposed than others. A farmer, with a few pigs and some potatoes in cultivation, is less likely to be starving than a stonemason, busy building something and with no time or energy left for producing food. Providing for the risk of flood works according to the same scheme: some structures and some people are more likely to suffer than others.

We apprehend flood- and drought-related risks in a similar way: those risks amount to a quantity of resources we put aside, in order to provide for the corresponding losses, in various ways. That quantity is the arithmetical product of probability times the magnitude of loss.

Total risk is a complex quantity, resulting from events happening in causal, heterogeneous chains. A river overflows and destroys some property: this is direct damage, the first occurrence in the causal chain. Among the property damaged, there are garbage yards. As water floods them, it washes away and further into the surrounding civilisation all kinds of crap, properly spoken crap included. The surrounding civilisation gets contaminated, and decontamination costs money: this is indirect damage, the second tier of the causal chain. Chemical and biological contamination by floodwater causes disruptions in the businesses involved, and those disruptions are costly, too: here goes the third tier in the causal chain etc.

I found some interesting insights regarding the exposure to flood- and drought-related risks in Europe in Paprotny et al. (2018[5]). Firstly, this piece of research made me realize that floods and droughts do damage in very different ways. Floods are disasters in the most intuitive sense of the term: they are violent, and they physically destroy man-made structures. The magnitude of damage from floods results from two basic variables: the violence and recurrence of floods themselves, on the one hand, and the value of the human structures affected, on the other. In a city, a flood does much more damage because there is much more property to destroy. Out there, in the countryside, as the density of man-made structures subsides, the damage inflicted by floods changes from disaster-type destruction into more lingering, long-term impediments to farming (e.g. contamination of farmed soil). Droughts work insidiously. There is no spectacular disaster to be afraid of. Adverse outcomes build up progressively, sometimes even year after year. Droughts affect the countryside much more directly than the cities, too. It is rivers drying out first, and only in a second step cities experiencing disruptions in the supply of water, or of river-dependent electricity. It is farm soil drying out progressively, and farmers suffering damage due to lower crops or increased costs of irrigation, and only then city dwellers experiencing higher prices for their average carrot or organic cereal bar. Mind you, there is one type of drought-related disaster which can sometimes directly affect our towns and cities: forest fires.

Paprotny et al. (2018) give some detailed insights into the magnitude, type, and geographical distribution of flood-related risks in Europe. Firstly, the ‘where exactly?’. France, Spain, Italy, and Germany are the most affected, with Portugal, England, Scotland, Poland, the Czech Republic, Hungary, and Romania following closely behind. As to the type of floods, France, Spain, and Italy are exposed mostly to flash floods, i.e. too much rain falling and not knowing where to go. Germany and virtually all of Central Europe, my native Poland included, are mostly exposed to river floods. As for the incidence of human fatalities, flash floods are definitely the most dangerous, and their impact seems to be the most serious in the second half of the calendar year, from July on.

Besides, the research by Paprotny et al. (2018) indicates that in Europe, we seem to be already on the path of adaptation to floods. Both the currently observed losses – human and financial – and their 10-year moving average had their peaks between 1960 and 2000. After 2000, Europe seems to have been progressively acquiring the capacity to minimize the adverse impact of floods, and this capacity seems to have developed in cities more than in the countryside. It truly gives a blow to one’s ego to learn that the problem one wanted to invent a revolutionary solution to does not really exist. I need to revisit the claim I made in the « Project Navigator », namely that European cities are perfectly adapted to a climate that no longer exists. Apparently, I was wrong: European cities seem to be adapting quite well to the adverse effects of climate change. Yet, all hope is not lost. The countryside is still exposed. Now, seriously. Whilst Europe seems to be adapting to a greater occurrence of floods, said occurrence is most likely to increase, as suggested, for example, in the research by Alfieri et al. (2017[6]). That sends us to the issue of the limits to adaptation, and the cost thereof.

Let’s rummage through more literature. As I study the article by Lu et al. (2019[7]), which compares the relative exposure to future droughts in various regions of the world, I find, first of all, the same uncertainty I know from Naumann et al. (2015) and Vogt et al. (2018): the economically and socially important drought is a phenomenon we are just starting to understand, and we are still far from understanding it sufficiently to assess the related risks with precision. I know that special look that empirical research has when we don’t really have a clue what we are observing. You can see it in the multitude of analytical takes on the same empirical data. There are different metrics for detecting drought, and Lu et al. (2019) demonstrate that the assessment of drought-related losses heavily depends on the metric used. Once we account for those methodological disparities, some trends emerge. Europe in general seems to be more and more exposed to long-term drought, and this growing exposure seems to be pretty consistent across various scenarios of climate change. Exposure to short-term episodes of drought seems to be growing mostly under the RCP 4.5 and RCP 6.0 climate change scenarios, a little less under the RCP 8.5 scenario. In practical terms, it means that even if we, as a civilisation, manage to cut down our total carbon emissions, as in the RCP 4.5 climate change scenario, the incidence of drought in Europe will still be increasing. Stagge et al. (2017[8]) point out that exposure to drought in Europe diverges significantly between the Mediterranean South, on the one hand, and the relatively colder North, on the other. The former is definitely exposed to an increasing occurrence of droughts, whilst the latter is likely to experience less frequent episodes. What makes the difference is evapotranspiration (loss of water) rather than precipitation. If we accounted just for the latter, we would actually have more water.

I move towards a more practical approach to drought, this time as an agricultural phenomenon, and I scroll across the article on the environmental stress on winter wheat and maize in Europe by Webber et al. (2018[9]). Once again, I can see a lot of uncertainty. The authors put it plainly: models that serve to assess the impact of climate change on agriculture violate, by necessity, one of the main principles of statistical hypothesis testing, namely that error terms are random and independent. In these precise models, error terms are not random, and not mutually independent. This is interesting for me, as I have a (recent) little obsession with applying artificial intelligence – a modest perceptron of my own make – to simulate social change. Non-random and dependent error terms are precisely what a perceptron likes to have for lunch. With that methodological bulwark, Webber et al. (2018) claim that regardless of the degree of so-called CO2 fertilization (i.e. plants being more active due to the presence of more carbon dioxide in the air), maize in Europe seems doomed to something like a 20% decline in yield by 2050. Winter wheat seems to be rowing in a different boat. Without the effect of CO2 fertilization, a 9% decline in yield is to be expected, whilst with the plants being sort of restless, and high on carbon, a 4% increase is in view. In Toreti et al. (2019[10]), a more global take on the concurrence between climate extremes and wheat production is to be found. It appears that Europe has been experiencing an increasing incidence of extreme heat events since 1989, and until 2015 this didn’t seem to affect adversely the yield of wheat. Still, from 2015 on, there is a visible drop in the output of wheat. Even stiller, if I may say, less wheat is apparently compensated by more of other cereals (Eurostat[11], Schills et al. 2018[12]), and accompanied by less potatoes and beets.

When I first started to develop that concept, which I baptised “Energy Ponds”, I mostly thought about it as a way to store water in rural areas, in swamp-and-meadow-like structures, to prevent droughts. It was only after I read a few articles about the Sponge Cities programme in China that I sort of drifted towards the more urban take on the thing. Maybe I was wrong? Maybe the initial concept of rural, hydrological structures was correct? Mind you, whatever we do in Europe, it always costs less if done in the countryside, especially as regards the acquisition of land.

Even in economics, sometimes we need to face reality, and reality presents itself as a choice between developing « Energy Ponds » in an urban environment, or in a rural one. On the other hand, I am rethinking the idea of electricity generated in water turbines paying off for the investment. In « Sponge Cities », I presented a provisional conclusion that it is a bad idea. Still, I was considering the size of investment that Jiang et al. (2018) talk about in the context of the Chinese Sponge-Cities programme. Maybe it is reasonable to downsize the investment a bit, and to make it sort of lean and adaptable to the cash flow possible to generate out of selling hydropower.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggesting me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[2] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[3] Naumann, G., et al. (2015). Assessment of drought damages and their uncertainties in Europe. Environmental Research Letters, 10(12), 124013. doi:10.1088/1748-9326/10/12/124013

[4] Vogt, J.V., Naumann, G., Masante, D., Spinoni, J., Cammalleri, C., Erian, W., Pischke, F., Pulwarty, R., Barbosa, P., Drought Risk Assessment. A conceptual Framework. EUR 29464 EN, Publications Office of the European Union, Luxembourg, 2018. ISBN 978-92-79-97469-4, doi:10.2760/057223, JRC113937

[5] Paprotny, D., Sebastian, A., Morales-Nápoles, O., & Jonkman, S. N. (2018). Trends in flood losses in Europe over the past 150 years. Nature communications, 9(1), 1985.

[6] Alfieri, L., Bisselink, B., Dottori, F., Naumann, G., de Roo, A., Salamon, P., … & Feyen, L. (2017). Global projections of river flood risk in a warmer world. Earth’s Future, 5(2), 171-182.

[7] Lu, J., Carbone, G. J., & Grego, J. M. (2019). Uncertainty and hotspots in 21st century projections of agricultural drought from CMIP5 models. Scientific reports, 9(1), 4922.

[8] Stagge, J. H., Kingston, D. G., Tallaksen, L. M., & Hannah, D. M. (2017). Observed drought indices show increasing divergence across Europe. Scientific reports, 7(1), 14045.

[9] Webber, H., Ewert, F., Olesen, J. E., Müller, C., Fronzek, S., Ruane, A. C., … & Ferrise, R. (2018). Diverging importance of drought stress for maize and winter wheat in Europe. Nature communications, 9(1), 4249.

[10] Toreti, A., Cronie, O., & Zampieri, M. (2019). Concurrent climate extremes in the key wheat producing regions of the world. Scientific reports, 9(1), 5493.

[11] https://ec.europa.eu/eurostat/statistics-explained/index.php/Agricultural_production_-_crops last access July 14th, 2019

[12] Schils, R., Olesen, J. E., Kersebaum, K. C., Rijk, B., Oberforster, M., Kalyada, V., … & Manolov, I. (2018). Cereal yield gaps across Europe. European journal of agronomy, 101, 109-120.

Sponge cities


I am developing on the same topic I have already highlighted in « Another idea – urban wetlands », i.e. on urban wetlands. By the way, I have found a similar, and interesting, concept in the existing literature: the sponge city. It is being particularly promoted by Chinese authors. I am going for a short review of the literature on this specific topic, and I am starting with correcting a mistake I made in my last update in French, « La ville – éponge », when discussing the article by Shao et al. (2018[1]). I got confused in the conversion of square metres into square kilometres. I forgot that 1 km2 = 10^6 m2, not 10^3. Thus, correcting myself now, I rerun the corresponding calculations. The Chinese city of Xiamen, population 3 500 000, covers an area of 1 865 km2, i.e. 1 865 000 000 m2. In that, 118 km2 = 118 000 000 m2 are infrastructures of sponge city, or purposefully arranged urban wetlands. Annual precipitation in Xiamen, according to Climate-Data.org, is 1131 millimetres per year, thus 1,131 m3 of water per 1 m2 (1 mm of rainfall on 1 m2 is 1 litre, i.e. 0,001 m3). Hence, the entire city of Xiamen receives 1 865 000 000 m2 * 1,131 m3/m2 = 2 109 315 000 m3 of precipitation a year, and the sole area of urban wetlands, those 118 square kilometres, receives 118 000 000 m2 * 1,131 m3/m2 = 133 458 000 m3. The infrastructures of sponge city in Xiamen have a target capacity of 2% regarding the retention of rain water, which gives 2 669 160 m3.
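To keep those unit conversions honest, here is a quick back-of-the-envelope sketch in Python, using the Xiamen figures quoted above (1 mm of rainfall on 1 m2 is 1 litre, i.e. 0,001 m3):

```python
# Rainwater arithmetic for Xiamen, as discussed above.
area_city_m2 = 1_865_000_000      # 1 865 km2
area_sponge_m2 = 118_000_000      # 118 km2 of sponge-city infrastructure
rain_m = 1131 / 1000              # 1131 mm/year = 1,131 m3 per m2

city_rain_m3 = area_city_m2 * rain_m          # total precipitation over the city
sponge_rain_m3 = area_sponge_m2 * rain_m      # precipitation over the wetlands
target_retention_m3 = 0.02 * sponge_rain_m3   # 2% retention target

print(round(city_rain_m3))         # total m3 received by the city per year
print(round(sponge_rain_m3))       # m3 received by the wetlands per year
print(round(target_retention_m3))  # m3 to be retained at the 2% target
```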

Jiang et al. (2018[2]) present a large scale strategy for the development of sponge cities in China. The first takeaway I notice is the value of investment in sponge city infrastructures across a total of 30 cities in China. Those 30 cities are supposed to absorb $275,6 billion in the corresponding infrastructural investment, thus an average of $9,19 billion per city. The first on the list is Qian’an, population 300 000, area 3 522 km2, total investment planned I = $5,1 billion. That gives $17 000 per resident, and $1 448 041 per 1 km2 of urban area. The city of Xiamen, whose case is discussed by the previously cited Shao et al. (2018[3]), has already got $3,3 billion in investment, with a target at I = $14,14 billion, thus at $4800 per resident, and $7 721 180 per square kilometre. Generally, the intensity of investment, counted per capita or per unit of surface, is really disparate. This is, by the way, commented by the authors: they stress the fact that sponge cities are so novel a concept that local experimentation is the norm, not the exception.
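The per-capita and per-km2 intensities above are simple ratios; a sketch, using the Qian’an figures quoted from Jiang et al. (2018):

```python
# Investment intensity for Qian'an, per the figures quoted above.
investment_usd = 5.1e9    # planned investment I
population = 300_000
area_km2 = 3_522

per_resident = investment_usd / population   # USD per resident
per_km2 = investment_usd / area_km2          # USD per km2 of urban area

print(round(per_resident), round(per_km2))
```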

Wu et al. (2019[4]) present another case study, from among the cities listed in Jiang et al. (2018), namely the city of Wuhan. Wuhan is probably the biggest project of sponge city in terms of capital invested: $20,04 billion, distributed across 293 detailed initiatives. Started after a catastrophic flood in 2016, the project has also proven its value in protecting the city from floods, and, apparently, it is working. As far as I could understand, the case of Wuhan was the first domino block in the chain, the one that triggered the whole, nation-wide programme of sponge cities.

Shao et al. (2016[5]) present an IT approach to organizing sponge-cities, focusing on the issue of data integration. The corresponding empirical field study had been apparently conducted in Fenghuang County, province Hunan. The main engineering challenge consists in integrating geographical data from geographic information systems (GIS) with data pertinent to urban infrastructures, mostly CAD-based, thus graphical. On the top of that, spatial data needs to be integrated with attribute data, i.e. with the characteristics of both infrastructural objects, and their natural counterparts. All that integrated data is supposed to serve efficient application of the so-called Low Impact Development (LID) technology. With the Fenghuang County, we can see the case of a relatively small area: 30,89 km2, 350 195 inhabitants, with a density of population of 200 people per 1 km2. The integrated data system was based on dividing that area into 417 sub-catchments, thus some 74 077 m2 per catchment.         

Good, so this is like a cursory review of literature on the Chinese concept of sponge city. Now, I am trying to combine it with another concept, which I first read about in a history book, namely Civilisation and Capitalism by Fernand Braudel, volume 1: The Structures of Everyday Life[6]: the technology of lifting and pumping water from a river with the help of kinetic energy of waterwheels propelled by the same river. Apparently, back in the day, in cities like Paris, that technology was commonly used to pump river water onto the upper storeys of buildings next to the river, and even to the further-standing buildings. Today, we are used to water supply powered by big pumps located in strategic nodes of large networks, and we are used to seeing waterwheels as hydroelectric turbines. Still, that old concept of using directly the kinetic energy of water seems to pop up again, here and there. Basically, it has been preserved in a slightly different form. Do you know that image in movies, with that windmill in the middle of a desert? What is the point of putting a windmill in the middle of a desert? To pump water from a well. Now, let’s make a little jump from wind power to water power. If we can use the force of wind to pump water from underground, we can use the force of water in a river to pump water from that river.  

In scientific literature, I found just one article making reference to it, namely Yannopoulos et al. (2015[7]). Still, in the less formal areas, I found some more stuff. I found that U.S. patent, from 1951, for a water-wheel-driven brush. I found more modern a technology of the spiral pump, created by a company called PreScouter. Something similar is being proposed by the Dutch company Aqysta. Here are some graphics to give you an idea:


Now, I put together the infrastructure of a sponge city, and the technology of pumping water uphill using the energy of the water. I have provisionally named the thing « Energy Ponds ». Water wheels power water pumps, which convey water to elevated tanks, like water towers. From water towers, water falls back down to the ground level, passes through small hydroelectric turbines on its way down, and lands in the infrastructures of a sponge city, where it is being stored. Here below, I am trying to make a coherent picture of it. The general concept can be extended, which I present graphically further below: infrastructure of the sponge city collects excess water from rainfall or floods, and partly conducts it to the local river(s). What limits the river from overflowing or limits the degree of overflowing is precisely the basic concept of Energy Ponds, i.e. those water-powered water pumps that pump water into elevated tanks. The more water flows in the river – case of flood or immediate threat thereof – the more power in those pumps, the more flow through the elevated tanks, and the more flow through hydroelectric turbines, hence the more electricity. As long as the whole infrastructure physically holds the environmental pressure of heavy rainfall and flood waves, it can work and serve.

My next step is to outline the business and financial framework of the « Energy Ponds » concept, taking the data provided by Jiang et al. (2018) about 29 sponge city projects in China, squeezing as much information as I can from it, and adding the component of hydroelectricity. I transcribed their data into an Excel file, and added some calculations of my own, together with data about demographics and annual rainfall. Here comes the Excel file with data as of July 5th 2019. A pattern emerges. All the 29 local clusters of projects display quite an even coefficient of capital invested per 1 km2 of construction area in those projects: it is $320 402 571,51 on average, with quite a low standard deviation, namely $101 484 206,43. Interestingly, that coefficient is not significantly correlated with either the local amount of rainfall per 1 m2 or the density of population. It looks like quite an autonomous variable, and yet a recurrent proportion.

Another interesting pattern is to be found in the percentage of the total surface, in each of the cities studied, devoted to being filled with the sponge-type infrastructure. The average value of that percentage is 0,61%, accompanied by quite big a standard deviation: 0,63%. It gives an overall variability of 1,046. Still, that percentage is correlated with two other variables: annual rainfall, in millimetres, as well as the density of population, i.e. the average number of people per square kilometre. Measured with the Pearson coefficient of correlation, the former yields r = 0,45, and the latter r = 0,43: not very much, yet respectable, as correlations come.

From underneath those coefficients of correlation, common sense pokes its head. The more rainfall per unit of surface, the more water there is to retain, and thus the more can we gain by installing the sponge-type infrastructure. The more people per unit of surface, the more people can directly benefit from installing that infrastructure, per 1 km2. This one stands to reason, too.

There is an interesting lack of correlations in that lot of data taken from Jiang et al. (2018). The number of local projects, i.e. projects per one city, is virtually uncorrelated with anything else, except, intriguingly, a negative correlation, at Pearson r = – 0,44, with the size of local populations. The more people in the city, the fewer local projects of sponge city there are.
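For readers who want to redo this kind of analysis on their own data, here is how those Pearson coefficients can be computed in plain Python. Mind you, the vectors below are illustrative stand-ins, not the actual figures from Jiang et al. (2018):

```python
from math import sqrt

def pearson_r(x, y):
    # plain-Python Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# ILLUSTRATIVE stand-ins, not the actual figures from Jiang et al. (2018):
pct_sponge_surface = [0.3, 0.5, 0.8, 1.2, 0.2, 0.7]   # % of urban surface
annual_rainfall_mm = [600, 800, 1100, 1300, 550, 900]
density_per_km2 = [900, 1500, 2400, 3100, 700, 1800]

print(round(pearson_r(pct_sponge_surface, annual_rainfall_mm), 2))
print(round(pearson_r(pct_sponge_surface, density_per_km2), 2))
```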

By the way, I have some concurrent information on the topic. According to a press release by Voith, this company has recently acquired a contract with the city of Xiamen, one of the sponge-cities, for the supply of large hydroelectric turbines in the technology of pumped storage, i.e. almost exactly the thing I have in mind.

Now, the Chinese programme of sponge cities is a starting point for me to reverse engineer my own concept of « Energy Ponds ». I assume that four economic aggregates pay off for the corresponding investment: a) the Net Present Value of proceeds from producing electricity in water turbines, b) the Net Present Value of savings on losses connected to floods, c) the opportunity cost of tap water available from the retained precipitations, and d) the incremental change in the market value of the real estate involved.

There is a city, with N inhabitants, who consume R m3 of water per year, R/N per person per year, and they consume E kWh of energy per year, E/N per person per year. R divided by 8760 hours in a year (R/8760) is the approximate amount of water the local population needs to have in current constant supply. Same for energy: E/8760 is a good approximation of power, in kW, that the local population needs to have standing and offered for immediate use.

The city collects F millimetres of precipitation a year. Note that F mm = F litres per 1 m2, i.e. 0,001*F m3/m2. With a density of population of D people per 1 km2, the average square kilometre has what I call the sponge function: D*(R/N) = f(F*1000). Each square kilometre collects F*1000 cubic metres of precipitation a year, and this amount remains in a recurrent proportion to the aggregate amount of water that the D people living on that square kilometre consume per year.
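A sketch of those per-capita and per-km2 identities; all input figures below are hypothetical placeholders, just to show the arithmetic:

```python
# Constant-supply approximations and the per-km2 water balance.
# All figures are HYPOTHETICAL placeholders.
N = 800_000          # residents
R = 40_000_000       # m3 of water consumed per year
E = 2_500_000_000    # kWh of energy consumed per year
F_mm = 650           # annual precipitation in millimetres
D = 2_000            # residents per km2

water_flow_m3_per_h = R / 8760   # constant supply of water needed
power_kW = E / 8760              # standing power the population needs
km2_collects_m3 = F_mm * 1000    # 1 mm over 1 km2 (10^6 m2) = 1000 m3
km2_consumes_m3 = D * (R / N)    # annual consumption per average km2

print(round(water_flow_m3_per_h), round(power_kW),
      km2_collects_m3, round(km2_consumes_m3))
```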

The population of N residents spend an aggregate PE*E on energy, and an aggregate PR*R on water, where PE and PR are the respective prices of energy and water. The supply of water and energy happens at levelized costs per unit. The reference math here is the standard calculation of LCOE, or Levelized Cost of Energy in an interval of time t, measured as LCOE(t) = [IE(t) + ME(t) + UE(t)] / E, where IE is the amount of capital invested in the fixed assets of the corresponding power installations, ME is their necessary cost of current maintenance, and UE is the cost of fuel used to generate energy. Per analogy, the levelized cost of water can be calculated as LCOR(t) = [IR(t) + MR(t) + UR(t)] / R, with the same logic: investment in fixed assets plus cost of current maintenance plus cost of water strictly speaking, all that divided by the quantity of water consumed. Mind you, in the case of water, the UR(t) part could be easily zero, and yet it does not have to be.  Imagine a general municipal provider of water, who buys rainwater collected in private, local installations of the sponge type, at UR(t) per cubic metre, that sort of thing.
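Both LCOE and LCOR share the same shape: (investment + maintenance + input cost) over quantity delivered. A minimal sketch, with purely hypothetical figures:

```python
# Levelized cost per unit, as defined above:
# LCO(t) = [I(t) + M(t) + U(t)] / quantity delivered in t.
def levelized_cost(invested, maintenance, input_cost, quantity):
    return (invested + maintenance + input_cost) / quantity

# Hypothetical figures: an installation delivering 10 GWh a year
lcoe = levelized_cost(invested=4_000_000, maintenance=500_000,
                      input_cost=1_500_000, quantity=10_000_000)
print(lcoe)  # cost per kWh
```

The same function works for water, with `input_cost` set to zero (or not, as in the case of a municipal provider buying rainwater from private sponge-type installations).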

The supply of water and energy generates gross margins: E(t)*(PE(t) – LCOE(t)) and R(t)*(PR(t) – LCOR(t)). These margins are possible to rephrase as, respectively, PE(t)*E(t) – IE(t) – ME(t) – UE(t), and PR(t)*R(t) – IR(t) – MR(t) – UR(t). Gross margins are gross cash flows, which finance organisations (jobs) attached to the supply of, respectively, water and energy, and generate some net surplus. Here comes a little difficulty with appraising the net surplus from the supply of water and energy. Long story short: the levelized values of the « LCO-whatever follows » type explicitly incorporate the yield on capital investment. Each unit of output is supposed to yield a return on the investment I. Still, this is not how classical accounting defines a cost. The amounts assigned to costs, both variable and fixed, correspond to strictly current expenditures, i.e. to payments for the current services of people and things, without any residual value sedimenting over time. It is only after I account for those strictly current outlays that I can calculate the current margin, and a fraction of that margin can be considered as direct yield on my investment. In standard, basic accounting, the return on investment is the net income divided by the capital invested. The net income is calculated as π = Q*P – Q*VC – FC – r*I – T, where Q and P are quantity and price, VC is the variable cost per unit of output Q, FC stands for the fixed costs, r is the price of capital (interest rate) on the capital I invested in the given business, and T represents taxes. Thus calculated, the net income π is then put into the formula of the internal rate of return on investment: IRR = π / I.
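The classical-accounting side of that comparison can be sketched directly from the formula π = Q*P – Q*VC – FC – r*I – T; all the input figures below are hypothetical:

```python
# Classical accounting: net income and internal rate of return,
# per the formula above: pi = Q*P - Q*VC - FC - r*I - T.
# All inputs are HYPOTHETICAL.
def net_income(Q, P, VC, FC, r, I, T):
    return Q * P - Q * VC - FC - r * I - T

def irr(pi, invested):
    return pi / invested

pi = net_income(Q=10_000_000, P=0.20, VC=0.08, FC=400_000,
                r=0.05, I=5_000_000, T=150_000)
print(pi, irr(pi, 5_000_000))
```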

When I calculate my margin of profit on the sales of energy or water, I have those two angles of approach. Angle #1 consists in using the levelized cost, and then the margin generated over that cost, i.e. P – LC (price minus levelized cost) can be accounted for other purposes than the return on investment. Angle #2 comes from traditional accounting: I calculate my margin without reference to the capital invested, and only then I use some residual part of that margin as return on investment. I guess that levelized costs work well in the accounting of infrastructural systems with nicely predictable output. When the quantity demanded, and offered, in the market of energy or water is like really recurrent and easy to predict, thus in well-established infrastructures with stable populations around, the LCO method yields accurate estimations of costs and margins. On the other hand, when the infrastructures in question are developing quickly and/or when their host populations change substantially, classical accounting seems more appropriate, with its sharp distinction between current costs and capital outlays.

Anyway, I start modelling the first component of the possible payoff on investment in the infrastructures of « Energy Ponds », i.e. the Net Present Value of proceeds from producing electricity in water turbines. As I generally like staying close to real life (well, most of the time), I will be wrapping my thinking around my hometown, where I still live, i.e. Krakow, Poland: area of the city: 326,8 km2, area of the metropolitan area: 1023,21 km2. As for annual precipitation, data from Climate-Data.org[1] tells me that it is a bit more than the general Polish average of 600 mm a year. Apparently, Krakow receives an annual rainfall of 678 mm, which, when translated into litres received by the whole area, makes a total rainfall on the city of 221 570 400 000 litres and, when enlarged to the whole metropolitan area, 693 736 380 000 litres.

In the generation of electricity from hydro turbines, what counts is the flow, measured in litres per second. The above-calculated total rainfall is now to be divided by 365 days, then by 24 hours, and then by 3600 seconds in an hour. Long story short, you divide the annual rainfall in litres by the constant of 31 536 000 seconds in one year. Mind you, in leap years, it will be 31 622 400 seconds. This step leads me to an estimated total flow of 7 026 litres per second in the city area, and 21 998 litres per second in the metropolitan area. Question: what amount of electric power can I get with that flow? I am using a formula I found at Renewables First.co.uk[2]: flow per second, in kilograms per second, multiplied by the gravitational acceleration g = 9,81 m/s2, multiplied by the average efficiency of a hydro turbine, equal to 75,1%, further multiplied by the net head – or net difference in height – of the water flow. All that gives electric power in watts. All in all, when you want to calculate the electric power dormant in your local rainfall, take the total amount of said rainfall, in litres falling on the entire place where you can possibly collect that rainwater from, divide it by the 31 536 000 seconds in a year, and multiply the resulting flow by 0,0073673*Head of the waterflow (i.e. by 9,81 * 0,751 / 1000). You will get power in kilowatts, with that implied efficiency of 75,1% in your technology.
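The whole recipe fits in a few lines of Python; the Krakow figures are the ones computed above, and the 10 m head is the assumption introduced just below:

```python
# Electric power dormant in annual rainfall, per the formula above:
# flow (kg/s) * g * turbine efficiency * head, in watts.
SECONDS_PER_YEAR = 31_536_000  # 365 days; 31 622 400 in a leap year
G = 9.81                       # gravitational acceleration, m/s2
EFFICIENCY = 0.751             # average hydro turbine efficiency

def rainfall_power_kw(annual_rain_litres, head_m):
    flow_kg_s = annual_rain_litres / SECONDS_PER_YEAR  # 1 litre = 1 kg
    return flow_kg_s * G * EFFICIENCY * head_m / 1000

# Krakow, city area, with an assumed head of 10 m:
print(round(rainfall_power_kw(221_570_400_000, 10)))  # ~518 kW
```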

For the sake of simplicity, I assume that, in those installations of elevated water tanks, the average elevation, thus the head of the subsequent water flow through hydro turbines, will be H = 10 m. That leads me to P = 518 kW available from the annual rainfall on the city of Krakow, when elevated to H = 10 m, and, accordingly, P = 1 621 kW for the rainfall received over the entire metropolitan area.

In the next step, I want to calculate the market value of that electric power, in terms of revenues from its possible sales. I take the power, and I multiply it by the 8760 hours in a year (8784 hours in a leap year). I get an amount of electricity for sale equal to E = 4 534 383 kWh from the rainfall received over the city of Krakow strictly spoken, and E = 14 197 142 kWh if we hypothetically collect rainwater from the entire metro area.

Now, the pricing. According to data available at GlobalPetrolPrices.com[3], the average price of electricity in Poland is PE = $0,18 per kWh. Still, when I get, more humbly, to my own electricity bill, and I crudely divide the amount billed in Polish zlotys by the amount used in kWh, I get to something like PE = $0,21 per kWh. The discrepancy might be coming from the complexity of that price: it is the actual price per kWh used plus all sorts of constant stuff per kW of power made available. With those prices, the market value of the corresponding revenues from selling electricity from rainfall used smartly would be like $816 189 ≤ Q*PE ≤ $952 220 a year from the city area, and $2 555 485 ≤ Q*PE ≤ $2 981 400 a year from the metropolitan area.

I transform those revenues, even before accounting for any current costs, into a stream, spread over 8 years of average lifecycle in an average investment project. Those 8 years are what is usually expected as the time of full return on investment in those more long-term, infrastructure-like projects. With a technological lifecycle around 20 years, those projects are supposed to pay for themselves over the first 8 years, the following 12 years bringing a net overhead to investors. Depending on the pricing of electricity, and with a discount rate of r = 5% a year, it gives something like $5 275 203 ≤ NPV(Q*PE ; 8 years) ≤ $6 154 403 for the city area, and $16 516 646 ≤ NPV(Q*PE ; 8 years) ≤  $19 269 421 for the metropolitan area.
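The discounting above can be reproduced as a plain annuity: a constant annual revenue, discounted at r = 5% over 8 years. The revenue figures are the ones from the text:

```python
# Net present value of a constant annual revenue stream over 8 years,
# discounted at r = 5% a year (revenue figures from the text).
def npv_constant(annual_revenue, years=8, r=0.05):
    return sum(annual_revenue / (1 + r) ** t for t in range(1, years + 1))

low = npv_constant(816_189)   # city area, at $0,18 per kWh
high = npv_constant(952_220)  # city area, at $0,21 per kWh
print(round(low), round(high))
```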

When I compare that stream of revenue to what is being actually done in the Chinese sponge cities, discussed a few paragraphs earlier, one thing jumps to the eye: even with the most optimistic assumption of capturing 100% of rainwater, so as to make it flow through local hydroelectric turbines, there is no way that selling electricity from those turbines pays off for the entire investment. This is a difference in the orders of magnitude, when we compare investment to revenues from electricity.



[1] https://en.climate-data.org/europe/poland/lesser-poland-voivodeship/krakow-715022/ last access July 7th 2019

[2] https://www.renewablesfirst.co.uk/hydropower/hydropower-learning-centre/how-much-power-could-i-generate-from-a-hydro-turbine/ last access July 7th, 2019

[3] https://www.globalpetrolprices.com/electricity_prices/ last access July 8th 2019

[1] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

[2] Jiang, Y., Zevenbergen, C., & Ma, Y. (2018). Urban pluvial flooding and stormwater management: A contemporary review of China’s challenges and “sponge cities” strategy. Environmental science & policy, 80, 132-143.

[3] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

[4] Wu, H. L., Cheng, W. C., Shen, S. L., Lin, M. Y., & Arulrajah, A. (2019). Variation of hydro-environment during past four decades with underground sponge city planning to control flash floods in Wuhan, China: An overview. Underground Space, article in press

[5] Shao, W., Zhang, H., Liu, J., Yang, G., Chen, X., Yang, Z., & Huang, H. (2016). Data integration and its application in the sponge city construction of China. Procedia Engineering, 154, 779-786.

[6] Braudel, F., & Reynolds, S. (1979). Civilization and capitalism 15th-18th Century, vol. 1, The structures of everyday life. Civilization, 10(25), 50.

[7] Yannopoulos, S., Lyberatos, G., Theodossiou, N., Li, W., Valipour, M., Tamburrino, A., & Angelakis, A. (2015). Evolution of water lifting devices (pumps) over the centuries worldwide. Water, 7(9), 5031-5060.

Another idea – urban wetlands


I have just come up with an idea. One of those big ones, the kind that pushes you to write a business plan and some scientific stuff as well. Here is the idea: a network of ponds and waterways, made in the close vicinity of a river, being both a reservoir of water – mostly the excess rainwater from big downpours – and a location for a network of small water turbines. The idea comes from a few observations, as well as other ideas, that I have had over the last two years. Firstly, in Central Europe, we have less and less water from melting snow – as there is almost no snow anymore in winter – and more and more water from sudden, heavy rain. We need to learn how to retain rainwater in the most efficient way. Secondly, as we have local floods due to heavy rains, some sort of spontaneous formation of floodplains happens. Even if there is no visible pond, the ground gets a bit spongy and soaked, flood after flood. We have more and more mosquitoes. If it is happening anyway, let’s use it creatively. This particular point is visualised in the map below, with the example of Central and Southern Europe. Thus, my idea is to utilise purposefully a naturally happening phenomenon, a component of climate change.

Source: https://www.eea.europa.eu/data-and-maps/figures/floodplain-distribution last access June 20th, 2019

Thirdly, there is some sort of new generation in water turbines: a whole range of small devices, simple and versatile, has come to the market.  You can have a look at what those guys at Blue Freedom are doing. Really interesting. Hydroelectricity can now be approached in an apparently much less capital-intensive way. Thus, the idea I have is to arrange purposefully the floodplains we have in Europe into as energy-efficient and carbon-efficient places as possible. I give the general idea graphically in the picture below.

I am approaching the whole thing from the economics point of view, i.e. I want a piece of floodplain arranged into this particular concept to have more value, financial value included, than the same piece of floodplain just being ignored in its inherent potential. I can see two distinct avenues for developing the concept: that of a generally wild, uninhabited floodplain, like public land, as opposed to an inhabited floodplain, with existing or ongoing construction, residential or other. The latter is precisely what I want to focus on. I want to study, and possibly to develop a business plan for, a human habitat combined with a semi-aquatic ecosystem, i.e. a network of ponds, waterways and water turbines in places where people live and work. Hence, from the geographic point of view, I am focusing on places where the secondary formation of floodplain-type terrain already occurs in towns and cities, or in the immediate vicinity thereof. For more than a century, the growth of urban habitats has been accompanied by the entrenching of waterways in strictly defined, concrete-reinforced beds. I want to go the other way, and let those rivers spill their waters around, into wetlands, in a manner beneficial to human dwelling.

My initial approach to the underlying environmental concept is market based. Can we create urban wetlands, in flood-threatened areas, where the presence of the explicitly and purposefully arranged aquatic structures increases the value of property so as to top the investment required? I start with the most fundamental marks in the environment. I imagine a piece of land in an urban area. It has its present market value, and I want to study its possible value in the future.

I imagine a piece of land located in an urban area with the characteristics of a floodplain, i.e. recurrently threatened by local floods or the secondary effects thereof. At the moment ‘t’, that piece of land has a market value M(t) = S * m(t), being the product of its total surface S, constant over time, and the market price m(t) per unit of surface, changing over time. There are two moments in time, i.e. the initial moment t0, and the subsequent moment t1, after the development into urban wetland. Said development requires a stream of investment I(t0 -> t1). I want to study the conditions for M(t1) – M(t0) > I(t0 -> t1). As the surface S is constant over time, my problem breaks down into units of surface, whence the aggregate investment decomposes into I(t0 -> t1) = S * i(t0 -> t1), and the problem restates as m(t1) – m(t0) > i(t0 -> t1).
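The per-unit break-even condition is trivial to encode, which makes it easy to play with scenarios; the price and investment figures below are hypothetical:

```python
# The break-even condition above, per unit of surface:
# development pays off when m(t1) - m(t0) > i(t0 -> t1).
# All figures HYPOTHETICAL.
def development_pays_off(m_t0, m_t1, i_per_unit):
    return (m_t1 - m_t0) > i_per_unit

# e.g. land appreciating from $180 to $260 per m2, against $50/m2 invested
print(development_pays_off(m_t0=180.0, m_t1=260.0, i_per_unit=50.0))
```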

I assume the market price m(t) is based on two types of characteristics: those directly measurable as financials, for one, e.g. the average wage a resident can expect from a locally based job, and those more diffuse ones, whose translation into financial variables is subtler, and sometimes pointless. I allow myself to call the latter ones ‘environmental services’. They cover quite a broad range of phenomena, ranging from the access to clean water outside the public water supply system, all the way to subjectively perceived happiness and well-being. All in all, mathematically, I say m(t) = f(x1, x2, …, xk) : the market price of construction land in cities is a function of k variables. Consistently with the above, I assume that f[t1; (x1, x2, …, xk)] – f[t0; (x1, x2, …, xk)] > i(t0 -> t1).    

It is intellectually honest to tackle those characteristics of urban land that make up its market price. There is a useful observation about cities: anything that impacts the value of urban real estate sooner or later translates into rent that people are willing to pay for being able to stay there. Please notice that even when we own a piece of real estate, i.e. when we have property rights to it, we usually pay someone some kind of periodic allowance for being able to execute our property rights fully: the real estate tax, the maintenance fee paid to the management of residential condominiums, the fee for sanitation services (e.g. garbage collection) etc. Any urban piece of land has a rent tag attached. Even those characteristics of a place which pertain mostly to the subjectively experienced pleasure and well-being derived from staying there have a rent-like price attached to them, at the end of the day.

Good. I have made a sketch of the thing. Now, I am going to pass in review some published research, in order to set my landmarks. I start with some literature regarding urban planning, and as soon as I do so, I discover an application for artificial intelligence, a topic of interest for me these last months. Lyu et al. (2017[1]) present a method for procedural modelling of urban layout, and in their work I can spot something similar to the equations I have just come up with: complex analysis of land suitability. It starts with dividing the total area of urban land at hand, in a given city, into standard units of surface. Geometrically, they look nice when they are equisized squares. Each unit ‘i’ can potentially be used for many alternative purposes. Lyu et al. distinguish 5 typical uses of urban land: residential, industrial, commercial, official, and open & green. Each such surface unit ‘i’ is endowed with a certain suitability for different purposes, and this suitability is a function of a finite number of factors. Formally, the suitability of land unit i for use k is a weighted average s(i,k) = Σj [w(k,j) * r(i,j)], where w(k,j) is the weight of factor j for land use k, and r(i,j) is the rating of land unit i on factor j. Below, I am trying to reproduce graphically the general logic of this approach.
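That weighted-average logic of land suitability can be sketched in a few lines of Python; the three factors, their weights and ratings below are entirely made up for illustration, not taken from Lyu et al.:

```python
def suitability(weights, ratings):
    """Suitability of one land unit for one land use:
    a weighted average of factor ratings (weights sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * r for w, r in zip(weights, ratings))

# Hypothetical: 3 factors (flood risk, transport access, greenery),
# each rated 0..1 for one land unit, weighted for residential use.
w_residential = [0.5, 0.3, 0.2]
ratings_unit_i = [0.8, 0.6, 0.9]
print(suitability(w_residential, ratings_unit_i))  # ≈ 0.76
```

Repeating this for every unit ‘i’ and every use k gives the full suitability map the procedural model starts from.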

In a city approached analytically with the general method presented above, Lyu et al. (2017[1]) distribute three layers of urban layout: population, road network, and land use. It starts with an initial state (input state) of population, land use, and available area. In a first step of the procedure, a simulation of highways and arterial transport connections is made. The transportation grid suggests some kind of division of urban space into districts. As far as I understand it, Lyu et al. define districts as functional units with the quantitative dominance of certain land uses, i.e. residential vs. industrial rather than rich folks’ estate vs. losers’ end, sort of.

As a first sketch of district division is made, it allows simulating a first distribution of population in the city, and a first draft of land use. The distribution of population is largely a distribution of density in population, and the corresponding transportation grid is strongly correlated with it. Some modes of urban transport work only above some critical thresholds in the density of population. This is an important point: density of population is a critical variable in social sciences.

Then, some kind of planning freedom can be allowed inside districts, which results in a second draft of spatial distribution in population, where a new type of unit – a neighbourhood – appears. Lyu et al. do not explain in detail the concept of neighbourhood, and yet it is interesting. It suggests the importance of spontaneous settlement vs. that of planned spatial arrangement.

I am strongly attached to that notion of spontaneous settlement. I am firmly convinced that in the long run people live where they want to live, and urban planning can just make that process somehow smoother and more efficient. Thus comes another article in my review of literature, by Mahmoud & Divigalpitiya (2019[2]). By the way, I have an interesting meta-observation: most recent literature about urban development is based on empirical research in emerging economies and in developing countries, with the U.S. coming next, and Europe lagging far behind. In Europe, we do very little research about our own social structures, whilst them Egyptians or Thais are constantly studying the way they live collectively.

Anyway, back to Mahmoud & Divigalpitiya (2019[3]), the article is interesting from my point of view because its authors study the development of new towns and cities. For me, it is an insight into how radically new urban structures sink into the incumbent spatial distribution of population. The specific background of this particular study is a public policy of the Egyptian government to establish, in a planned manner, new cities some distance away from the Nile, and to do it so as to minimize the encroachment on agricultural land. Thus, we have scarce space to fit people into, with optimal use of land.

As I study that paper by Mahmoud & Divigalpitiya, some kind of extension to my initial idea emerges. Those researchers report that with proper water and energy management, more specifically with the creation of irrigative structures like those I came up with – networks of ponds and waterways – paired with a network of small hydropower units, it is possible both to accommodate an increase of 90% in local urban population, and to create 3,75% more agricultural land. Another important finding about those new urban communities in Egypt is that they tend to grow by sprawl rather than by distant settlement. New city dwellers tend to settle close to the incumbent residents, rather than in more remote locations. In simple words: it is bloody hard to create a new city from scratch. Habits and social links are like a tangible expanse of matter, which opposes resistance to distortion.

I switch to another paper based on Egyptian research, namely that by Hatata et al. 2019[4], relative to the use of small hydropower generators. The paper is rich in technicalities, and therefore I make a note to come back to it many times as I go deeper into the details of my concept. For now, I have a few general takeaways. Firstly, it is wise to combine small hydro off grid with small hydro connected to the power grid; more generally, small hydro looks like a good complementary source of power next to a regular grid, rather than a 100% autonomous power base. Still, full autonomy is possible, mostly with the technology of the Permanent Magnet Synchronous Generator. Secondly, Hatata et al. present a calculation of economic value in hydropower projects, based on their Net Present Value, which, in turn, rests on the basic assumption that hydropower installations carry some residual capital value Vr over their entire lifetime, and additionally can generate a current cash flow determined by: a) the revenue Rt from the sales of energy, b) the locally needed investment It, c) the operating cost Ot, and d) the maintenance cost Mt, all that in the presence of a periodic discount rate r.
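That valuation logic boils down to a standard Net Present Value with a residual term. Here is my generic reading of it in Python, not Hatata et al.’s exact formula, and with entirely made-up cash flows:

```python
def hydro_npv(revenues, investments, operating, maintenance,
              residual_value, discount_rate):
    """NPV of a small hydropower project: the discounted sum of
    (Rt - It - Ot - Mt) over the lifetime, plus the discounted
    residual capital value Vr at the end of that lifetime."""
    T = len(revenues)
    npv = sum(
        (revenues[t] - investments[t] - operating[t] - maintenance[t])
        / (1 + discount_rate) ** (t + 1)
        for t in range(T)
    )
    return npv + residual_value / (1 + discount_rate) ** T

# Hypothetical 3-year project, discounted at 5% per year.
print(round(hydro_npv(
    revenues=[100.0, 100.0, 100.0],
    investments=[60.0, 0.0, 0.0],
    operating=[10.0, 10.0, 10.0],
    maintenance=[5.0, 5.0, 5.0],
    residual_value=50.0,
    discount_rate=0.05,
), 2))  # ≈ 217.53
```

The interesting economic point is the residual term: even a project with thin operating margins can clear the NPV hurdle if the installation keeps capital value over its whole lifetime.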

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your take on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Lyu, X., Han, Q., & de Vries, B. (2017). Procedural modeling of urban layout: population, land use, and road network. Transportation research procedia, 25, 3333-3342.

[2] Mahmoud, H., & Divigalpitiya, P. (2019). Spatiotemporal variation analysis of urban land expansion in the establishment of new communities in Upper Egypt: A case study of New Asyut city. The Egyptian Journal of Remote Sensing and Space Science, 22(1), 59-66.

[3] Mahmoud, H., & Divigalpitiya, P. (2019). Spatiotemporal variation analysis of urban land expansion in the establishment of new communities in Upper Egypt: A case study of New Asyut city. The Egyptian Journal of Remote Sensing and Space Science, 22(1), 59-66.

[4] Hatata, A. Y., El-Saadawi, M. M., & Saad, S. (2019). A feasibility study of small hydro power for selected locations in Egypt. Energy Strategy Reviews, 24, 300-313.


Sketching quickly alternative states of nature

My editorial on You Tube

I am thinking about a few things, as usual, and, as usual, it is a laborious process. The first one is a big one: what the hell am I doing all this for? I mean, what’s the purpose and the point of applying artificial intelligence to simulating collective intelligence? There is one particular issue that I am entertaining in this regard: the experimental check. A neural network can help me in formulating very precise hypotheses as to how a given social structure can behave. Yet, these are hypotheses. How can I have them checked?

Here is an example. Together with a friend, we are doing some research about the socio-economic development of big cities in Poland, in the perspective of seeing them turning into so-called ‘smart cities’. We came to an interesting set of hypotheses generated by a neural network, but we have a tiny little problem: we propose, in the article, a financial scheme for cities but we don’t quite understand why we propose this exact scheme. I know it sounds idiotic, but well: it is what it is. We have an idea, and we don’t know exactly where that idea came from.

I have already discussed the idea in itself on my blog, in « Locally smart. Case study in finance.» : a local investment fund, created by the local government, to finance local startup businesses. Business means investment, especially at the aggregate scale and in the long run. This is how business works: I invest, and I have (hopefully) a return on my investment. If there is more and more private business popping up in those big Polish cities, and, in the same time, local governments are backing off from investment in fixed assets, let’s make those business people channel capital towards the same type of investment that local governments are withdrawing from. What we need is an institutional scheme where local governments financially fuel local startup businesses, and those businesses implement investment projects.

I am going to try and deconstruct the concept, sort of backwards. I am sketching the landscape, i.e. the piece of empirical research that brought us to formulating the whole idea of an investment fund paired with crowdfunding. Big Polish cities show an interesting pattern of change: local populations, whilst largely stagnating demographically, are becoming more and more entrepreneurial, which is observable as an increasing number of startup businesses per 10 000 inhabitants. On the other hand, local governments (city councils) are spending a consistently decreasing share of their budgets on infrastructural investment. There is more and more business going on per capita, and, at the same time, local councils seem to be slowly backing off from investment in infrastructure. The cities we studied for this phenomenon are: Wroclaw, Lodz, Krakow, Gdansk, Kielce, Poznan, and Warsaw.

More specifically, the concept tested through the neural network consists in selecting, each year, the 5% most promising local startups, and funding each of them with €80 000. The logic behind this concept is that when a phenomenon becomes more and more frequent – and this is the case of startups in big Polish cities – an interesting strategy is to fish out, consistently, the ‘crème de la crème’ from among those frequent occurrences. It is as if we were soccer promoters in a country where more and more young people start playing at a competitive level. A viable strategy consists, in such a case, in selecting, over and over again, the most promising players from the top of the heap and promoting them further.

Thus, in that hypothetical scheme, the local investment fund selects and supports the most promising from amongst the local startups. Mind you, that 5% rate of selection is just an idea. It could be 7% or 3% just as well. A number had to be picked in order to simulate the whole thing with a neural network, which I present further. The 5% rate can be seen as an intuitive transference from the Student’s t significance test in statistics. When you test a correlation for its significance with the t-Student test, you commonly assume that at least 95% of all the observations under scrutiny are covered by that correlation, and you can tolerate 5% of outlying, fringe cases. I suppose this is why we picked, intuitively, that 5% rate of selection among the local startups: 5% sounds just about right to delineate the subset of most original ideas.

Anyway, the basic idea consists in creating a local investment fund controlled by the local government, and this fund would provide a standard capital injection of €80 000 to 5% of most promising local startups. The absolute number STF (i.e. financed startups) those 5% translate into can be calculated as: STF = 5% * (N/10 000) * ST10 000, where N is the population of the given city, and ST10 000 is the coefficient of startup businesses per 10 000 inhabitants. Just to give you an idea what it looks like empirically, I am presenting data for Krakow (KR, my hometown) and Warsaw (WA, Polish capital), in 2008 and 2017, which I designate, respectively, as STF(city_acronym; 2008) and STF(city_acronym; 2017). It goes like:

STF(KR; 2008) = 5% * (754 624/ 10 000) * 200 = 755

STF(KR; 2017) = 5% * (767 348/ 10 000) * 257 = 986

STF(WA; 2008) = 5% * (1709781/ 10 000) * 200 = 1 710

STF(WA; 2017) = 5% * (1764615/ 10 000) * 345 = 3 044   
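The four figures above can be reproduced with a one-liner; the function name is mine, the 5% selection rate is the assumption discussed earlier, and the population and startup coefficients come from the data quoted above:

```python
def financed_startups(population, startups_per_10k, selection_rate=0.05):
    """STF = selection_rate * (N / 10 000) * ST10000,
    rounded to a whole number of startups."""
    return round(selection_rate * (population / 10_000) * startups_per_10k)

print(financed_startups(754_624, 200))    # Krakow 2008 -> 755
print(financed_startups(767_348, 257))    # Krakow 2017 -> 986
print(financed_startups(1_709_781, 200))  # Warsaw 2008 -> 1710
print(financed_startups(1_764_615, 345))  # Warsaw 2017 -> 3044
```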

That glimpse of empirics allows guessing why we applied a neural network to the whole thing: the two core variables, namely population and the coefficient of startups per 10 000 people, can change with a lot of autonomy vis-à-vis each other. In the whole sample that we used for basic stochastic analysis, thus 7 cities from 2008 through 2017, i.e. 70 observations, those two variables are Pearson-correlated at r = 0,6267, which corresponds to r2 ≈ 0,39. There is some significant correlation, and yet some 61% of observable variance in each of those variables doesn’t give a f**k about the variance of the other variable. The covariance of these two seems to be dominated by the variability in population rather than by uncertainty as for the average number of startups per 10 000 people.

What we have is quite predictable a trend of growing propensity to entrepreneurship, combined with a bit of randomness in demographics. Those two can come in various duos, and their duos tend to be actually trios, ‘cause we have that other thing, which I already mentioned: investment outlays of local governments and the share of those outlays in the overall local budgets. Our (my friend’s and mine) intuitive take on that picture was that it is really interesting to know the different ways those Polish cities can go in the future, rather than setting one central model. I mean, the central stochastic model is interesting too. It says, for example, that the natural logarithm of the number of startups per 10 000 inhabitants, whilst being negatively correlated with the share of investment outlays in the local government’s budget, is positively correlated with the absolute amount of those outlays. The more a local government spends on fixed assets, the more startups it can expect per 10 000 inhabitants. That latter variable is subject to some kind of scale effects from the part of the former. Interesting. I like scale effects. They are intriguing. They show phenomena which change in a way akin to what happens when I heat up a pot full of water: the more heat I supply to the water, the more different kinds of stuff can happen. We call it an increase in the number of degrees of freedom.

The stochastically approached degrees of freedom in the coefficient of startups per 10 000 inhabitants, you can see them in Table 1, below. The ‘Ln’ prefix means, of course, natural logarithms. Further below, I return to the topic of collective intelligence in this specific context, and to using artificial intelligence to simulate the thing.

Table 1
Explained variable: Ln(number of startups per 10 000 inhabitants); R2 = 0,608; N = 70

Explanatory variable                                Coefficient of regression   Standard error   Significance level
Ln(investment outlays of the local government)      -0,093                      0,048            p = 0,054
Ln(total budget of the local government)            0,565                       0,083            p < 0,001
Ln(population)                                      -0,328                      0,09             p < 0,001
Constant                                            -0,741                      0,631            p = 0,245

I take the correlations from Table 1, thus the coefficients of regression from the first numerical column, and I check their credentials against the significance level from the last numerical column. As I want to understand them as real, actual things that happen in the cities studied, I recreate the real values. We are talking about coefficients of startups per 10 000 people comprised somewhere between the observable minimum ST10 000 = 140 and the maximum ST10 000 = 345, with a mean at ST10 000 = 223. In terms of natural logarithms, that world folds into something between ln(140) = 4,941642423 and ln(345) = 5,843544417, with the expected mean at ln(223) = 5,407171771. The standard deviation Ω around that mean can be reconstructed from the standard error, which is calculated as s = Ω/√N, and, consequently, Ω = s*√N. In this case, with N = 70, the standard deviation is Ω = 0,631*√70 = 5,279324767.
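That back-of-the-envelope reconstruction is easy to verify numerically:

```python
import math

# Observable range of the startup coefficient, in natural logarithms.
ln_min  = math.log(140)   # ≈ 4.9416
ln_max  = math.log(345)   # ≈ 5.8435
ln_mean = math.log(223)   # ≈ 5.4072

# Standard deviation recovered from the standard error:
# s = Omega / sqrt(N)  =>  Omega = s * sqrt(N)
s, N = 0.631, 70
omega = s * math.sqrt(N)  # ≈ 5.2793
print(ln_min, ln_max, ln_mean, omega)
```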

That regression is interesting to the extent that it leads to an absurd prediction. If the population of a city shrinks asymptotically down to zero, and if, at the same time, the budget of the local government swells up to infinity, the occurrence of entrepreneurial behaviour (number of startups per 10 000 inhabitants) will tend towards infinity as well. There is that nagging question: how the hell can the budget of a local government expand when its tax base – the population – is collapsing? I am an economist and I am supposed to answer questions like that.

Before being an economist, I am a scientist. I ask embarrassing questions, and then I have to invent a way to give an answer. Those stochastic results I have just presented make me think of somehow haphazard a set of correlations. Such correlations can be called dynamic, and this, in turn, makes me think about swarm theory and collective intelligence (see Yang et al. 2013[1] or What are the practical outcomes of those hypotheses being true or false?). A social structure, for example that of a city, can be seen as a community of agents reactive to some systemic factors, similarly to ants or bees being reactive to the pheromones they produce and dump into their social space. Ants and bees are amazingly intelligent collectively, whilst, let’s face it, they are bloody stupid singlehandedly. Ever seen a bee trying to figure things out in the presence of a window? Well, not only can a swarm of bees get that s**t sorted easily, but also, they can invent a way of nesting in and exploiting the whereabouts of the window. The thing is that a bee has its nervous system programmed to behave smartly mostly in social interactions with other bees.

I have already developed on the topic of money and capital being a systemic factor akin to a pheromone (see Technological change as monetary a phenomenon). Now, I am walking down this avenue again. What if city dwellers react, through entrepreneurial behaviour – or the lack thereof – to a certain concentration of budgetary spending from the local government? What if the budgetary money has two chemical hooks on it – one hook observable as ‘current spending’ and the other signalling ‘investment’ – and what if the reaction of inhabitants depends on the kind of hook switched on, in the given million of euros (or rather Polish zlotys, or PLN, as we are talking about Polish cities)?

I am returning, for a moment, to the negative correlation between the headcount of population, on the one hand, and the occurrence of new businesses per 10 000 inhabitants, on the other. Cities – at least those 7 Polish cities that me and my friend did our research on – are finite spaces. Less people in the city means less people per 1 km2, and vice versa. Hence, the occurrence of entrepreneurial behaviour is negatively correlated with the density of population. A behavioural pattern emerges. The residents of big cities in Poland develop entrepreneurial behaviour in response to greater a concentration of current budgetary spending by local governments, and to lower a density of population. On the other hand, greater a density of population, or less money spent as current payments from the local budget, acts as an inhibitor of entrepreneurship. Mind you, greater a density of population means greater a need for infrastructure – yes, those humans tend to crap and charge their smartphones all over the place – whence greater a pressure on local governments to spend money in the form of investment in fixed assets, whence the secondary, weaker negative correlation between entrepreneurial behaviour and investment outlays from local budgets.

This is a general, behavioural hypothesis. Now, the cognitive challenge consists in translating the general idea into as precise empirical hypotheses as possible. What precise states of nature can happen in those cities? This is when artificial intelligence – a neural network – can serve, and this is when I finally understand where that idea of investment fund had come from. A neural network is good at producing plausible combinations of values in a pre-defined set of variables, and this is what we need if we want to formulate precise hypotheses. Still, a neural network is made for learning. If I want the thing to make those hypotheses for me, I need to give it a purpose, i.e. a variable to optimize, and learn as it is optimizing.

In social sciences, entrepreneurial behaviour is assumed to be a good thing. When people recurrently start new businesses, they are in a generally go-getting frame of mind, and this carries over into social activism, into the formation of institutions etc. In an initial outburst of neophyte enthusiasm, I might program my neural network so as to optimize the coefficient of startups per 10 000 inhabitants. There is a catch, though. When I tell a neural network to optimize a variable, it takes the most likely value of that variable, thus, stochastically, its arithmetical average, and it keeps recombining all the other variables so as to have this one nailed down, as close to that most likely value as possible. Therefore, if I want a neural network to imagine relatively high occurrences of entrepreneurial behaviour, I shouldn’t set said behaviour as the outcome variable. I should mix it with others, as an input variable. It is very human, by the way. You brace for achieving a goal, you struggle the s**t out of yourself, and you discover, with negative amazement, that instead of moving forward, you are actually repeating the same existential pattern over and over again. You can set your personal compass, though, on just doing a good job and having fun with it, and then something strange happens. Things get done, and you haven’t even noticed when and how. Goals get nailed down even without being phrased explicitly as goals. And you are having fun with the whole thing, i.e. with life.

Same for artificial intelligence, as it is, as a matter of fact, an artful expression of our own, human intelligence: it produces the most interesting combinations of variables as a by-product of optimizing something boring. Thus, I want my neural network to optimize on something not-necessarily-fascinating and see what it can do in terms of people and their behaviour. Here comes the idea of an investment fund. As I have been racking my brains in the search of place where that idea had come from, I finally understood: an investment fund is both an institutional scheme, and a metaphor. As a metaphor, it allows decomposing an aggregate stream of investment into a set of more or less autonomous projects, and decisions attached thereto. An investment fund is a set of decisions coordinated in a dynamically correlated manner: yes, there are ways and patterns to those decisions, but there is a lot of autonomous figuring-out-the-thing in each individual case.

Thus, if I want to put functionally together those two social phenomena – investment channelled by local governments and entrepreneurial behaviour in local population – an investment fund is a good institutional vessel to that purpose. Local government invests in some assets, and local homo sapiens do the same in the form of startups. What if we mix them together? What if the institutional scheme known as public-private partnership becomes something practiced serially, as a local market for ideas and projects?

When we were designing that financial scheme for local governments, me and my friend had the idea of dropping a bit of crowdfunding into the cooking pot, and, as strange as it could seem, we are a bit confused as to where this idea came from. Why did we think about crowdfunding? If I want to understand how a piece of artificial intelligence simulates collective intelligence in a social structure, I need to understand what kind of logical connections I had projected into the neural network. Crowdfunding is sort of spontaneous. When I have a look at the typical conditions proposed by businesses crowdfunded at Kickstarter or at StartEngine, these are shitty contracts, with all due respect. Having a Master’s in law, when I look at the contracts offered to investors in those schemes, I wouldn’t sign such a contract if I had any room for negotiation. I wouldn’t even sign a contract the way I am supposed to sign it via a crowdfunding platform.

There is quite a strong body of legal and business science supporting the claim that crowdfunding contracts are a serious disruption to the established contractual patterns (Savelyev 2017[2]). Crowdfunding largely rests on the so-called smart contracts, i.e. agreements written and signed as software on Blockchain-based platforms. Those contracts are unusually flexible, as each amendment, would it be general or specific, can be hash-coded into the history of the individual contractual relation. That puts a large part of legal science on its head. The basic intuition of any trained lawyer is that we negotiate the s**t out of ourselves before the signature of the contract, thus before the formulation of general principles, and anything that happens later is just secondary. With smart contracts, we are pretty relaxed when it comes to setting the basic skeleton of the contract. We just put the big bones in, and expect we’re gonna make up the more sophisticated stuff as we go along.

With the abundant usage of smart contracts, crowdfunding platforms have a peculiar legal flexibility. Today you sign up for a discount of 10% on one Flower Turbine, in exchange for £400 in capital crowdfunded via a smart contract. Next week, you learn that you can turn your 10% discount on one turbine into 7% on two turbines if you drop just £100 more into that piggy bank. Already the first step (£400 against the discount of 10%) would be a bit hard to squeeze into classical contractual arrangements for investing into the equity of a business, let alone the subsequent amendment (Armour, Enriques 2018[3]).
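That hash-coding of amendments into a contract’s history can be illustrated with a toy example: each new clause is chained to the previous state by a hash, so the whole contractual history becomes tamper-evident. This is a sketch of the general idea, not any real platform’s API:

```python
import hashlib

def add_amendment(history, amendment_text):
    """Append an amendment, chaining it to the previous state by hash.
    history is a list of (text, digest) pairs."""
    prev_hash = history[-1][1] if history else "genesis"
    digest = hashlib.sha256((prev_hash + amendment_text).encode()).hexdigest()
    history.append((amendment_text, digest))
    return history

contract = []
add_amendment(contract, "GBP 400 for a 10% discount on one Flower Turbine")
add_amendment(contract, "GBP 100 more: 7% discount on two turbines instead")
for text, h in contract:
    print(h[:12], text)
```

Because every digest depends on the previous one, rewriting an early clause invalidates every later hash, which is precisely what makes piling up amendments so cheap and safe on those platforms.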

Yet, with a smart contract on a crowdfunding platform, anything is just a few clicks away, and, as astonishing as it could seem, the whole thing works. The click-based smart contracts are actually enforced and respected. People do sign those contracts, and moreover, when I mentally step out of my academic lawyer’s shoes, I admit being tempted to sign such a contract too. There is a specific behavioural pattern attached to crowdfunding, something like the Russian ‘Davaj, riebiata!’ (‘Давай, ребята!’ in the original spelling). ‘Let’s do it together! Now!’, that sort of thing. It is almost as if I were giving someone the power of attorney to be entrepreneurial on my behalf. If people in big Polish cities found more and more startups per 10 000 residents, it is a more and more recurrent manifestation of entrepreneurial behaviour, and crowdfunding touches the very heart of entrepreneurial behaviour (Agrawal et al. 2014[4]). It is entrepreneurship broken into small, tradable units. The whole concept we invented is generally placed in the European context, and in Europe crowdfunding is way below the popularity it has reached in North America (Rupeika-Apoga & Danovi 2015[5]). As a matter of fact, European entrepreneurs seem to consider crowdfunding as a really secondary source of financing.

Time to sum up a bit all those loose thoughts. Using a neural network to simulate collective behaviour of human societies involves a few deep principles, and a few tricks. When I study a social structure with classical stochastic tools and I encounter strange, apparently paradoxical correlations between phenomena, artificial intelligence may serve. My intuitive guess is that a neural network can help in clarifying what is sometimes called ‘background correlations’ or ‘transitive correlations’: variable A is correlated with variable C through the intermediary of variable B, i.e. A is significantly correlated with B, and B is significantly correlated with C, but the correlation between A and C remains insignificant.
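That ‘transitive correlation’ pattern is easy to simulate with synthetic data; a minimal sketch, with B as the hidden intermediary and a fixed random seed so the run is reproducible:

```python
import random

random.seed(42)

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# B is the intermediary; A and C are both noisy images of B.
B = [random.gauss(0, 1) for _ in range(1000)]
A = [b + random.gauss(0, 1.5) for b in B]
C = [b + random.gauss(0, 1.5) for b in B]

# A-B and B-C come out clearly correlated; A-C is noticeably weaker,
# because A and C are linked only through B.
print(pearson(A, B), pearson(B, C), pearson(A, C))
```

With enough noise on A and C, the A-C correlation can drop below the significance threshold even though the A-B and B-C links stay robust, which is exactly the situation where a neural network can help untangle the structure.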

When I started to use a neural network in my research, I realized how important it is to formulate very precise and complex hypotheses rather than definitive answers. Artificial intelligence allows sketching quickly alternative states of nature, by the gazillion. For a moment, I am leaving the topic of those financial solutions for cities, and I return to my research on energy, more specifically on energy efficiency. In a draft article I wrote last autumn, I started to study the relative impact of the velocity of money, as well as that of the speed of technological change, upon the energy efficiency of national economies. Initially, I approached the thing in nicely and classically stochastic a way. I came up with conclusions of the type: ‘variance in the supply of money makes 7% of the observable variance in energy efficiency, and the correlation is robust’. Good, this is a step forward. Still, in practical terms, what does it give? Does it mean that we need to add money to the system in order to have greater an energy efficiency? Might well be the case, only you don’t add money to the system just like that, ‘cause most of said money is account money on current bank accounts, and the current balances of those accounts reflect the settlement of obligations resulting from complex private contracts. There is no government that could possibly add more complex contracts to the system.

Thus, stochastic results, whilst looking and sounding serious and scientific, have but a remote connection to practical applications. On the other hand, if I take the same empirical data and feed it into a neural network, I get alternative states of nature, and those states are bloody interesting. Artificial intelligence can show me, for example, what happens to energy efficiency if a social system is more or less conservative in its experimenting with itself. In short, artificial intelligence allows super-fast simulation of social experiments, and that simulation is theoretically robust.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon advises me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?



Lean, climbing trends

My editorial on You Tube

Our artificial intelligence: the working title of my research, for now. Volume 1: Energy and technological change. I am doing a little bit of rummaging in available data, just to make sure I keep contact with reality. Here comes a metric: access to electricity in the world, measured as the % of total human population[1]. The trend line looks proudly ascending. In 2016, 87,38% of mankind had at least one electric socket in their place. Ten years earlier, by the end of 2006, it was 81,2%. Optimistic. Looks like something growing almost linearly. Another one: « Electric power transmission and distribution losses »[2]. This one looks different: instead of a clear trend, I observe something shaking and oscillating, with the width of variance narrowing gently down as time passes. By the end of 2014 (the last data point in this dataset), we were globally at 8,25% of electricity lost in transmission. The lowest coefficient of loss occurred in 1998: 7,13%.

I move from distribution to production of electricity, and to its percentage supplied from nuclear power plants[3]. Still another shape, that of a steep bell with surprisingly lean edges. Initially, around 2% of global electricity was supplied by the nuclear. At the peak of fascination, it was 17,6%, and by the end of 2014, we had gone down to 10,6%. The thing seems to be temporarily stable at this level. As I move to water, and to the percentage of electricity derived from the hydro[4], I see another type of change: a deeply serrated, generally descending trend. In 1971, we had 20,2% of our total global electricity from the hydro, and by the end of 2014, we were at 16,24%. In the meantime, it looked like a rollercoaster. Yet, as I have a look at other renewables (i.e. other than hydroelectricity) and their share in the total supply of electricity[5], the corresponding curve looks like a snake trying to figure something out about a vertical wall. Between 1971 and 1988, the share of those other renewables in the total electricity supplied moved from 0,25% to 0,6%. Starting from 1989, the growth has been almost perfectly exponential, reaching 6,77% in 2015.

Just to have a complete picture, I shift slightly, from electricity to energy consumption as a whole, and I check the global share of renewables therein[6]. Surprise! This curve does not behave at all as one would expect after having seen the previously cited share of renewables in electricity. Instead of a snake sniffing a wall, we can see a snake seen from above, or something like a meandering river. This seems to be a cycle over some 25 years (could it be Kondratiev’s?), with a peak around 18% of renewables in the total consumption of energy, and a trough somewhere around 16,9%. Right now, we seem to be close to the peak.

I am having a look at the big, ugly brother of hydro: the oil, gas and coal sources of electricity and their share in the total amount of electricity produced[7]. Here, I observe a different shape of change. Between 1971 and 1986, the fossils dropped their share from 62% to 51,47%. Then it rocketed back up to 62% in 1990. Later, a slowly ascending trend starts, just to reach a peak and oscillate for a while around some 65 to 67% between 2007 and 2011. Since then, the fossils have been dropping again: the short-term trend is descending.

Finally, one of the basic metrics I have been using frequently in my research on energy: the final consumption thereof, per capita, measured in kilograms of oil equivalent[8]. Here, we are back in the world of relatively clear trends. This one is ascending, with some bumps on the way, though. In 1971, we were at 1336,2 koe per person per year. In 2014, it was 1920,655 koe.

Thus, what are all those curves telling me? I can see three clearly different patterns. The first is the ascending trend, observable in the access to electricity, in the consumption of energy per capita, and, since the late 1980s, in the share of electricity derived from renewable sources. The second is a cyclical variation: the share of renewables in the overall consumption of energy, to some extent the relative importance of hydroelectricity, as well as that of the nuclear. Finally, I can observe a descending trend in the relative importance of the nuclear since 1988, as well as in some episodes from the life of hydroelectricity, coal and oil.

On top of that, I can distinguish different patterns in, respectively, the production of energy, on the one hand, and its consumption, on the other. The former seems to change along relatively predictable, long-term paths. The latter looks like a set of parallel, and partly independent, experiments with different sources of energy. We are collectively intelligent: I deeply believe that. I mean, I hope. If bees and ants can be collectively smarter than they are singly, there is some potential in us as well.

Thus, I am progressively designing a collective intelligence which experiments with various sources of energy, just to produce those two relatively lean, climbing trends: more energy per capita and an ever-growing percentage of people with access to electricity. Which combinations of variables can produce a rationally desired energy efficiency? How does the supply of money change as we reach different levels of energy efficiency? Can artificial intelligence make energy policies? Empirical check: take a real energy policy and build a neural network which reflects the logical structure of that policy. Then add a method of learning and see what it produces as a hypothetical outcome.

What is the cognitive value of hypotheses made with a neural network? The answer to this question starts with another question: how do hypotheses made with a neural network differ from any other set of hypotheses? The hypothetical states of nature produced by a neural network reflect the outcomes of logically structured learning. The process of learning should represent real social change and real collective intelligence. There are four important distinctions I have observed so far in this respect: a) awareness of internal cohesion, b) internal competition, c) relative resistance to new information, and d) perceptual selection (different ways of standardizing input data).

The awareness of internal cohesion, in a neural network, is a function that feeds information on the relative cohesion (Euclidean distance) between variables into the consecutive experimental rounds of learning. We assume that each variable used in the neural network reflects a sequence of collective decisions in the corresponding social structure. Cohesion between variables represents the functional connection between sequences of collective decisions. Awareness of internal cohesion, as a logical attribute of a neural network, corresponds to situations when societies are aware of how mutually coherent their different collective decisions are. The lack of logical feedback on internal cohesion represents situations when societies do not have that internal awareness.
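A minimal sketch of that mechanism, under my own assumptions: a one-neuron perceptron which, in the ‘aware’ variant, receives the mean Euclidean distance between its input variables as an extra input in every round of learning. The data, the stand-in target, the scaling and the learning rate are all invented for illustration, not taken from my actual datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))        # three 'sequences of collective decisions'
y = X.mean(axis=1, keepdims=True)     # a stand-in collective outcome

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(aware, rounds=50, lr=0.5):
    # mean Euclidean distance between the variables, scaled to the order of the inputs
    dists = [np.linalg.norm(X[:, i] - X[:, j]) for i in range(3) for j in range(i + 1, 3)]
    cohesion = float(np.mean(dists)) / np.sqrt(len(X))
    n_inputs = X.shape[1] + (1 if aware else 0)
    w = np.zeros((n_inputs, 1))
    errors = []
    for _ in range(rounds):
        # the 'aware' network sees its own cohesion as one more input column
        inp = np.hstack([X, np.full((len(X), 1), cohesion)]) if aware else X
        out = sigmoid(inp @ w)
        err = y - out
        # standard delta-rule update for a sigmoid neuron under mean squared error
        w += lr * inp.T @ (err * out * (1.0 - out)) / len(X)
        errors.append(float(np.mean(err ** 2)))
    return errors

unaware = train(aware=False)
aware_net = train(aware=True)
# both error curves decrease, but the 'aware' network follows its own path of learning
```

The point of the sketch is not the numbers themselves, but the design choice: the feedback on cohesion is just one more signal fed into each consecutive round of learning.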

As I metaphorically look around, I ask myself what awareness I have of important collective decisions in my local society. I can observe patterns in people’s behaviour, for one. Next thing: I can read (very literally) the formalized, official information regarding legal issues. On top of that, I can study (read, mostly) quantitatively formalized information on measurable attributes of the society, such as GDP per capita, supply of money, or emissions of CO2. Finally, I can have that semi-formalized information from what we call “media”, whatever prefix they come with: mainstream media, social media, rebel media, the-only-true-media etc.

As I look back upon my own life and the changes which I have observed on those four levels of social awareness, the fourth one, namely the media, has been, and still is, the biggest game changer. I remember the cultural earthquake in 1990 and later, when, after decades of state-controlled media in communist Poland, we suddenly had a free press and complete freedom of publishing. Man! It was like one of those moments when you step out of a calm, dark alleyway right into the middle of heavy traffic in the street. Information, it just wheezed past.

There is something about media, both those called ‘mainstream’, and the modern platforms like Twitter or You Tube: they adapt to their audience, and the pace of that adaptation is accelerating. With Twitter, it is obvious: when I log into my account, I can see the Tweets only from people and organizations whom I specifically subscribed to observe. With You Tube, on my starting page, I can see the subscribed channels, for one, and a ton of videos suggested by artificial intelligence on the grounds of what I watched in the past. Still, the mainstream media go down the same avenue. When I go to bbc.com, the types of news presented are very largely what the editorial team hopes will max out on clicks per hour, which, in turn, is based on the types of news that totalled the most clicks in the past. The same was true for printed newspapers, 20 years ago: the stuff that got to headlines was the kind of stuff that made sales.

Thus, when I simulate the collective intelligence of a society with a neural network, the function allowing the network to observe its own, internal cohesion seems to be akin to the presence of media platforms. Actually, I have already observed, many times, that adding this specific function to a multi-layer perceptron (a type of neural network) makes that perceptron less cohesive. Looks like a paradox: observing the relative cohesion between its own decisions makes a piece of AI less cohesive. Still, real life confirms that observation. Social media favour the phenomenon known as « echo chamber »: if I want, I can expose myself only to the information that minimizes my cognitive dissonance, and cut myself off from anything that pumps my adrenaline up. On a large scale, this behavioural pattern produces a galaxy of relatively small groups encapsulated in highly distilled, mutually incoherent worldviews. Have you ever wondered what it would be like to use GPS navigation to find your way, in the company of a hardcore flat-Earther?

When I run my perceptron over samples of data regarding the energy efficiency of national economies, including the feedback on the so-called fitness function is largely equivalent to simulating a society with abundant media activity. The absence of such feedback is, on the other hand, like a society without much of a media sector.

Internal competition, in a neural network, is the deep underlying principle for structuring a multi-layer perceptron into separate layers, and for manipulating the number of neurons in each layer. Let’s suppose I have two neural layers in a perceptron: A and B, in this exact order. If I put three neurons in layer A, and one neuron in layer B, the one in B will be able to choose between the 3 signals sent from layer A. Seen from A’s perspective, each neuron in A has to compete against the other two for the attention of the single neuron in B. Choice on one end of a synapse equals competition on the other end.

When I want to introduce choice in a neural network, I need to introduce internal competition as well. If any neuron is to have a choice between processing input A and its rival, input B, there must be at least two distinct neurons – A and B – in a functionally distinct, preceding neural layer. In a collective intelligence, choice requires competition, and there seems to be no way around it. In a real brain, neurons form synaptic sequences, which means that the great majority of our neurons fire because other neurons have fired beforehand. We very largely think because we think, not because something really happens out there. Neurons in charge of the early-stage collection of sensory data compete for the attention of our brain stem, which, in turn, proposes its pre-selected information to the limbic system, and the emotional exultation of the latter incites the cortical areas to think about the whole thing. From there, further cortical activity happens just because other cortical activity has been happening so far.
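The three-against-one setup from above can be sketched in a few lines. The signal values, the weights, and the use of a softmax as the mechanism of competition are my assumptions for illustration, not a description of any particular library or of my actual perceptron.

```python
import numpy as np

def softmax(z):
    # turns raw scores into a competition: the shares sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

signals_from_A = np.array([0.2, 0.9, 0.4])   # outputs of the three A-neurons (invented)
weights_of_B = np.array([1.0, 2.0, 0.5])     # how much B currently 'trusts' each A-neuron

attention = softmax(weights_of_B * signals_from_A)
chosen = int(np.argmax(attention))           # the A-neuron that wins B's attention
```

Choice on B’s end shows up as a division of attention among A’s neurons: whichever weighted signal dominates takes most of the softmax mass, and the others are, for this round, the losers of the competition.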

I propose a quick self-check: think about what you are thinking right now, and ask yourself how much of what you are thinking about is really connected to what is happening around you. Are you thinking a lot about the gradient of temperature close to your skin? No, not really? Really? Are you giving a lot of conscious attention to the chemical composition of the surface you are touching right now with your fingertips? Not really a lot of conscious thinking about this one either? Now, how much conscious attention are you devoting to what [fill in the blank] said about [fill in the blank], yesterday? Quite a lot of attention, isn’t it?

The point is that some ideas die out, in us, quickly and sort of silently, whilst others are tough survivors and keep popping up to the surface of our awareness. Why? How does it happen? What if there is some kind of competition between synaptic paths? Thoughts, or components thereof, that win one stage of the competition pass to the next, where they compete again.           

Internal competition requires complexity. There needs to be something to compete for, a next step in the chain of thinking. A neural network with internal competition reflects a collective intelligence with internal hierarchies that offer rewards. Interestingly, there is research showing that greater complexity gives more optimizing accuracy to a neural network, but only as long as we are talking about really low complexity, like 3 layers of neurons instead of two. As complexity develops further, accuracy decreases noticeably. Complexity is not the best solution for optimization: see Olawoyin and Chen (2018[9]).

Relative resistance to new information corresponds to the way that an intelligent structure deals with cognitive dissonance. In order to have any cognitive dissonance whatsoever, we need at least two pieces of information: one that we have already appropriated as our knowledge, and the new stuff, which could possibly disturb the placid self-satisfaction of the I-already-know-how-things-work. Cognitive dissonance is a potent factor of stress in human beings as individuals, and in whole societies. Galileo would have a few words to say about it. Question: how to represent in a mathematical form the stress connected to cognitive dissonance? My provisional answer is: by division. Cognitive dissonance means that I consider my acquired knowledge as more valuable than new information. If I want to decrease the importance of B in relation to A, I divide B by a factor greater than 1, whilst leaving A as it is. The denominator of new information is supposed to grow over time: I am more resistant to the really new stuff than I am to the already slightly processed information, which was new yesterday. In a more elaborate form, I can use the exponential progression (see The really textbook-textbook exponential growth).
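Here is a toy rendition of that dividing trick, with an exponential denominator. The growth rate of 0.5 and the signal values are purely my assumptions; the only point carried over from the text is that the freshest information gets the largest denominator, whilst acquired knowledge is left as it is.

```python
import numpy as np

acquired_knowledge = 1.0                        # left undivided, as described above

# identical pieces of news arriving over time; index 0 = oldest, index -1 = freshest
new_signals = np.array([0.8, 0.8, 0.8, 0.8])

# denominators > 1, growing exponentially with freshness (growth rate 0.5 is assumed)
resistance = np.exp(0.5 * np.arange(1, len(new_signals) + 1))
weighted = new_signals / resistance
# the freshest signal ends up weighing the least against acquired knowledge
```

Even though all four signals are identical, the freshest one comes out with roughly a fifth of the weight of the oldest, which is exactly the stress of cognitive dissonance rendered as arithmetic.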

I noticed an interesting property of the neural network I use for studying energy efficiency. When I introduce choice, internal competition and hierarchy between neurons, the perceptron gets sort of wild: it produces increasing error instead of decreasing error, so it basically learns how to swing more between possible states, rather than how to narrow its own trial and error down to one recurrent state. When I add a pinch of resistance to new information, i.e. when I purposefully create stress in the presence of cognitive dissonance, the perceptron calms down a bit, and can produce a decreasing error.

Selection of information can occur already at the level of primary perception. I developed on this one in « Thinking Poisson, or ‘WTF are the other folks doing?’ ». Let’s suppose that new science arrives on how to use particular sources of energy. We can imagine two scenarios of reaction to that new science. On the one hand, the society can react in a perfectly flexible way, i.e. each new piece of scientific research gets evaluated for its real utility for energy management, and gets smoothly included into the existing body of technologies. On the other hand, the same society (well, not quite the same, an alternative one) can sharply sort those new pieces of science into ‘useful stuff’ and ‘crap’, with little nuance in between.
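The two scenarios can be sketched as two ways of standardizing the same incoming ‘science’: a smooth sigmoid for the flexible society, a sharp step for the binary one. The sigmoid slope and the 0.5 threshold are my assumptions, picked just to make the contrast visible.

```python
import numpy as np

def flexible(x):
    # graded evaluation: every piece of science gets a nuanced score
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))

def sharp(x):
    # binary evaluation: 'useful stuff' or 'crap', nothing in between
    return np.where(x >= 0.5, 1.0, 0.0)

new_science = np.array([0.30, 0.45, 0.55, 0.70])   # invented raw utility scores
graded = flexible(new_science)
binary = sharp(new_science)
# the graded scale keeps the in-between cases; the binary one flattens them
```

Feed the graded version into a learning loop and the borderline cases still carry information; feed the binary one and that nuance is gone before learning even starts, which is what perceptual selection means here.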

What do we know about collective learning and collective intelligence? Three essential traits come to my mind. Firstly, we make social structures, i.e. recurrent combinations of social relations, and those structures tend to be quite stable. We like having stable social structures. We almost instinctively create rituals, rules of conduct, enforceable contracts etc., thus we make stuff that is supposed to make the existing stuff last. An unstable social structure is prone to wars, coups etc. Our collective intelligence values stability. Still, stability is not the same as perfect conservatism: our societies have imperfect recall. This is the second important trait. Over (long periods of) time, we collectively shake off and replace old rules of social games with new rules, and we do it without disturbing the fundamental social structure. In other words: stable as they are, our social structures have mechanisms of adaptation to new conditions, and yet those mechanisms require forgetting something about our past. OK, not just forgetting something: we collectively forget a shitload of something. Thirdly, there have been many local human civilisations, and each of them eventually collapsed, i.e. their fundamental social structures disintegrated. The civilisations we have made so far had a limited capacity to learn. Sooner or later, they would bump against a challenge which they were unable to adapt to. The mechanism of collective forgetting and shaking off had, in every historically documented case, a limited efficiency.

I intuitively guess that simulating collective intelligence with artificial intelligence is likely to be the most fruitful when we simulate various capacities to learn. I think we can model something like a perfectly adaptable collective intelligence, i.e. one which has no cognitive dissonance and processes information uniformly over time, whilst having a broad range of choice and internal competition. Such a neural network behaves in the opposite way to what we tend to associate with AI: instead of optimizing and narrowing down the margin of error, it creates new alternative states, possibly in a broadening range. This is a collective intelligence with lots of capacity to learn, but little capacity to steady itself as a social structure. From there, I can muzzle the collective intelligence with various types of stabilizing devices, making it progressively more and more structure-making, and less flexible. Down that avenue lies the solver type of artificial intelligence: a neural network that just solves a problem, with one, temporarily optimal solution.



[1] https://data.worldbank.org/indicator/EG.ELC.ACCS.ZS last access May 17th, 2019

[2] https://data.worldbank.org/indicator/EG.ELC.LOSS.ZS?end=2016&start=1990&type=points&view=chart last access May 17th, 2019

[3] https://data.worldbank.org/indicator/EG.ELC.NUCL.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[4] https://data.worldbank.org/indicator/EG.ELC.HYRO.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[5] https://data.worldbank.org/indicator/EG.ELC.RNWX.ZS?type=points last access May 17th, 2019

[6] https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS?type=points last access May 17th, 2019

[7] https://data.worldbank.org/indicator/EG.ELC.FOSL.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[8] https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE?type=points last access May 17th, 2019

[9] Olawoyin, A., & Chen, Y. (2018). Predicting the Future with Artificial Neural Network. Procedia Computer Science, 140, 383-392.

Thinking Poisson, or ‘WTF are the other folks doing?’

My editorial on You Tube

I think I have just put a nice label on all those ideas I have been rummaging in for the last 2 years. The last 4 months, when I have been progressively initiating myself into artificial intelligence, have helped me to put it all in a nice frame. Here is the idea for a book, or rather for THE book, which I have been drafting for some time. « Our artificial intelligence »: this is the general title. The first big chapter, which might very well turn into the first book out of a whole series, will be devoted to energy and technological change. After that, I want to have a go at two other big topics: food and agriculture, then laws and institutions.

I explain. What does « Our artificial intelligence » mean? As I have been working with an initially simple algorithm of a neural network, and progressively developing it, I have understood a few things about the link between what we call, for lack of a better word, artificial intelligence, and the way my own brain works. No, not my brain. It would be an overstatement to say that I fully understand my own brain. My mind, this is the right expression. What I call « mind » is an idealized, i.e. linguistic, description of what happens in my nervous system. As I have been working with a neural network, I have discovered that the artificial intelligence I make, and use, is a mathematical expression of my mind. I project my way of thinking into a set of mathematical expressions, made into an algorithmic sequence. When I run the sequence, I have the impression of dealing with something clever, yet slightly alien: an artificial intelligence. Still, when I stop staring at the thing, and start thinking about it scientifically (you know: initial observation, assumptions, hypotheses, empirical check, new assumptions and new hypotheses etc.), I become aware that the alien thing in front of me is just a projection of my own way of thinking.

This is important about artificial intelligence: this is our own, human intelligence, just seen from outside and projected into electronics. This particular point is an important piece of theory I want to develop in my book. I want to compile research in neurophysiology, especially in the neurophysiology of meaning, language, and social interactions, in order to give scientific clothes to that idea. When we sometimes ask ourselves whether artificial intelligence can eliminate humans, it boils down to asking: ‘Can human intelligence eliminate humans?’. Well, where I come from, i.e. Central Europe, the answer is certainly ‘yes, it can’. As a matter of fact, when I raise my head and look around, the same answer is true for any part of the world. Human intelligence can eliminate humans, and it can do so because it is human, not because it is ‘artificial’.

When I think about the meaning of the word ‘artificial’, it comes from the Latin ‘artificium’, which, in turn, designates something made with skill and demonstrable craft. Artificium means seasoned skills made into something durable so as to express those skills. Artificial intelligence is a crafty piece of work made with one of the big human inventions: mathematics. Artificial intelligence is mathematics at work. Really at work, i.e. not just as another idealization of reality, but as an actual tool. When I study the working of algorithms in neural networks, I have a vision of an architect in Ancient Greece, where the first mathematics we know seems to come from. I have a wall and a roof, and I want them both to hold in balance, so what is the proportion between their respective lengths? I need to learn it by trial and error, as I haven’t any architectural knowledge yet. Although devoid of science, I have common sense, and I make small models of the building I want (have?) to erect, and I test various proportions. Some of those maquettes are more successful than others. I observe, I make my synthesis about the proportions which give the least error, and so I come up with something like the Pythagorean z² = x² + y², something like π ≈ 3,14 etc., or something like the discovery that, for a given angle, the tangent proportion y/x is always the same number, whatever the empirical lengths of y and x.
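That architect’s learning can be simulated in a few lines: we test random roof-to-wall proportions against a fixed ideal slope, and keep whichever maquette shows the smallest error. The target slope of 30° and the range of tested proportions are assumed for the sake of the example; the architect, of course, knows nothing about the tangent function hiding behind the result.

```python
import math
import random

random.seed(7)
ideal_angle = math.radians(30)        # the slope the builder wants (an assumed example)

best_ratio, best_error = None, float("inf")
for _ in range(5000):
    trial = random.uniform(0.1, 2.0)  # a maquette with roof/wall proportion = trial
    error = abs(math.atan(trial) - ideal_angle)   # how far this maquette misses the slope
    if error < best_error:
        best_ratio, best_error = trial, error

# after many maquettes, best_ratio settles very close to tan(30°) ≈ 0.577
```

Five thousand maquettes later, the best surviving proportion is, within a hair’s breadth, the tangent of the desired angle: the trial-and-error loop has rediscovered a trigonometric constant without ever naming it.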

This is exactly what artificial intelligence does. It makes small models of itself, tests the error resulting from comparison between those models and something real, and generalizes the observation of those errors. Really: this is what a piece of face-recognition software does at an airport, or what Google Ads does. This is human intelligence, just unloaded into a mathematical vessel. This is the first discovery I have made about AI. Artificial intelligence is actually our own intelligence. Studying the way AI behaves allows seeing, as under a microscope, the workings of human intelligence.

The second discovery is that when I put a neural network to work with empirical data of social sciences, it produces strange, intriguing patterns, something like neighbourhoods of the actual reality. In my root field of research – namely economics – there is a basic concept that we, economists, use a lot and still wonder what it actually means: equilibrium. It is an old observation that networks of exchange in human societies tend to find balance in some precise proportions, for example proportions between demand, supply, price and quantity, or those between labour and capital.

Half of economic sciences is about explaining the equilibriums we can empirically observe. The other half employs itself at discarding what that first half comes up with. Economic equilibriums are something we know exists, and we constantly try to understand their mechanics, but those states of society remain obscure to a large extent. What we know is that networks of exchange are like machines: some designs just work, some others just don’t. One of the most important arguments in economic sciences is whether a given society can find many alternative equilibriums, i.e. whether it can use its resources optimally at many alternative proportions between economic variables, or, conversely, whether there is just one point of balance in a given place and time. From there on, it is a rabbit hole. What does it mean to ‘use our resources optimally’? Is it when we have the lowest unemployment, or when we have just some healthy amount of unemployment? Theories are welcome.

When trying to make predictions about the future, using the apparatus of what can now be called classical statistics, social sciences always face the same dilemma: rigor vs cognitive depth. The most interesting correlations are usually somewhat wobbly, and the mathematical functions we derive from regression always leave a lot of residual error.

This is when AI can step in. Neural networks can be used as tools for optimization in digital systems. Still, they have another useful property: observing a neural network at work offers an insight into how intelligent structures optimize. If I want to understand how economic equilibriums take shape, I can observe a piece of AI producing many alternative combinations of the relevant variables. Here comes my third fundamental discovery about neural networks: with a few, otherwise quite simple, assumptions built into the algorithm, AI can produce very different mechanisms of learning and, consequently, a broad range of those weird, yet intellectually appealing, alternative states of reality. Here is an example: when I make a neural network observe its own numerical properties, such as its own kernel or its own fitness function, its way of learning changes dramatically. Sounds familiar? When you have a human being perform tasks, and you let them see the MRI of their own brain while performing those tasks, the actual performance changes.

When I want to talk about applying artificial intelligence, it is a good thing to return to the sources of my own experience with AI, and explain how it works. Some sequences of mathematical equations, when run recurrently many times, behave like intelligent entities: they experiment, they make errors, and after many repeated attempts they come up with a logical structure that minimizes the error. I am looking for a good, simple example from real life; a situation which I experienced personally, and which forced me to learn something new. Recently, I went to Marrakech, Morocco, and I had the kind of experience that most European first-timers have there: the Jemaa El Fna market place, its surrounding souks, and its merchants. The experience consists in finding your way out of the maze-like structure of the alleys adjacent to the Jemaa El Fna. You walk down an alley, you turn into another one, then into still another one, and what you notice only after quite a few such turns is that the whole architectural structure doesn’t follow AT ALL the European concept of urban geometry.

Thus, you face the length of an alley. You notice five lateral openings and you see a range of lateral passages. In a European town, most of those lateral passages would lead somewhere. A dead end is an exception, and passages between buildings are passages in the strict sense of the term: from one open space to another open space. At Jemaa El Fna, it's different: most of the lateral ways lead into deep, dead-end niches, with more shops and stalls inside, yet some others open up into other alleys, possibly leading to the main square, or at least to a main street.

You pin down a goal: get back to the main square in less than… what? One full day? Just kidding. Let's peg that goal down at 15 minutes. For want of a good-quality drone, equipped with thermal vision, flying over the whole structure of the souk and guiding you, you need to experiment. You need to test various routes out of the maze and to trace those which fit within the x ≤ 15 minutes limit. If all the possible routes allowed you to get out to the main square in exactly 15 minutes, experimenting would be useless. There is a point in experimenting only if some of the possible routes yield a suboptimal outcome. You are facing a paradox: in order not to make (too many) errors in your future strolls across Jemaa El Fna, you need to make some errors while you learn how to stroll through.
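That learning-by-erring can be put into a toy simulation. Everything below is invented for illustration (the alley graph, the walking times, and the purely random strolling strategy are my own assumptions, not a map of the real souk): the agent wanders at random, and only the repetition of error-prone attempts reveals which routes fit the 15-minute peg.

```python
import random

random.seed(42)

# A made-up souk: junctions connected by alleys, each with a walking time in
# minutes. Dead-end niches only lead back to where you came from.
souk = {
    "start":  [("a", 3), ("dead1", 2)],
    "a":      [("start", 3), ("b", 4), ("dead2", 1)],
    "b":      [("a", 4), ("square", 5), ("dead3", 2)],
    "dead1":  [("start", 2)],
    "dead2":  [("a", 1)],
    "dead3":  [("b", 2)],
    "square": [],
}

def random_stroll(graph, start="start", goal="square", max_steps=30):
    """One experimental attempt: wander at random, return minutes spent or None."""
    node, elapsed = start, 0
    for _ in range(max_steps):
        if node == goal:
            return elapsed
        nxt, cost = random.choice(graph[node])
        elapsed += cost
        node = nxt
    return None  # still lost in the maze

# Experimentation: most strolls are errors, a few meet the x <= 15 minutes peg.
attempts = [random_stroll(souk) for _ in range(1000)]
successes = [t for t in attempts if t is not None and t <= 15]
print(f"{len(successes)} of 1000 strolls made it in 15 minutes or less; "
      f"best time: {min(successes)} min")
```

The suboptimal outcomes are exactly what makes the experiment informative: if every stroll took the same time, the list of successes would teach nothing.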

Now, imagine a fancy app in your smartphone, simulating the possible errors you can make when trying to find your way through the souk. You could watch an imaginary you, on the screen, wandering through the maze of alleys and dead-ends, learning by trial and error to drive the time of passage down to no more than 15 minutes. That would be interesting, wouldn't it? You could see your possible errors from outside, and you could study the way you can possibly learn from them. Of course, you could always say: 'it is not the real me, it is just a digital representation of what I could possibly do'. True. Still, I can guarantee you: whatever you say, however strong the grip you would try to keep on the actual, here-and-now you, you just couldn't help being fascinated.

Is there anything more, beyond fascination, in observing ourselves making many possible future mistakes? Let's think for a moment. I can see, somehow from outside, how a copy of me deals with the things of life. Question: how does the fact of seeing a copy of me trying to find a way through the souk differ from just watching a digital map of said souk, with GPS, such as Google Maps? I tried the latter, and I have two observations. Firstly, in some structures, such as that of maze-like alleys adjacent to Jemaa El Fna, seeing my own position on Google Maps is of very little help. I cannot put my finger on the exact reason, but my impression is that when the environment becomes just too bizarre for my cognitive capacities, having a bird's eye view of it is virtually no good. Secondly, when I use Google Maps with GPS, I learn very little about my route. I just follow directions on the screen, and ultimately, I get out into the main square, but I know that I couldn't reproduce that route without the device. Apparently, there is no way around learning stuff by myself: if I really want to learn how to move through the souk, I need to mess around with different possible routes. A device that allows me to see how exactly I can mess around looks like it has some potential.

Question: how do I know that what I see, in that imaginary app, is a functional copy of me, and how can I assess the accuracy of that copy? This is, very largely, the rabbit hole I have been diving into for the last 5 months or so. The first path to follow is to look at the variables used. Artificial intelligence works with numerical data, i.e. with local instances of abstract variables. The similarity between the real me and the me reproduced as artificial intelligence is to be found in the variables used. In real life, variables are the kinds of things which: a) are correlated with my actions, both as outcomes and as determinants, and b) I care about, yet am not necessarily conscious of caring about.

Here comes another discovery I made on my journey through the realm of artificial intelligence: even if, in the simplest possible case, I just make the equations of my neural network so that they represent what I think is the way I think, and I drop some completely random values of the relevant variables into the first round of experimentation, the neural network produces something disquietingly logical and coherent. In other words, if I am even moderately honest in describing, in the form of equations, my way of apprehending reality, the AI I thus created really processes information in the way I would.
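A minimal sketch of that experience. Everything in the snippet is my own assumption for illustration (a single sigmoid neuron, a toy target rule standing in for a 'way of apprehending reality', a hand-picked learning rate): the network starts with completely random weights, and repeated rounds of error correction settle into a coherent logical structure.

```python
import math, random

random.seed(1)

# A made-up 'way of apprehending reality', encoded as input-target pairs:
# here the rule is simply 'output 1 when both inputs are high'.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Completely random values dropped into the first round of experimentation.
w = [random.uniform(-1, 1) for _ in range(3)]  # two input weights + a bias

def forward(x):
    """One tiny 'neural network': a single sigmoid neuron."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))

# Recurrent rounds of trial and error: each pass nudges the error down.
for _ in range(5000):
    for x, target in data:
        out = forward(x)
        grad = (out - target) * out * (1.0 - out)  # derivative of squared error
        w[0] -= 0.5 * grad * x[0]
        w[1] -= 0.5 * grad * x[1]
        w[2] -= 0.5 * grad

# After many repeated attempts, the outputs line up with the encoded rule.
print([round(forward(x), 2) for x, _ in data])
```

The point is not the particular rule, but that an honest encoding of a way of processing information, plus random starting values, plus repetition, yields something disquietingly coherent.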

Another way of assessing the similarity between a piece of AI and myself is to compare the empirical data we use: I can make a neural network think more or less like me if I feed it with an accurate description of my experience so far. In this respect, I discovered something that looks like a keystone in my intellectual structure: as I feed my neural network with more and more empirical data, the scope of possible ways of learning something meaningful narrows down. When I minimise the amount of empirical data fed into the network, the latter can produce interesting, meaningful results via many alternative sequences of equations. As the volume of real-life information swells, some sequences of equations just naturally drop out of the game: they drive the neural network into a state of structural error, where it stops performing calculations.

At this point, I can see some similarity between AI and quantum physics. Quantum mechanics grew as a methodology because it proved exceptionally accurate in predicting the outcomes of experiments in physics. That accuracy was based on the capacity to formulate very precise hypotheses regarding empirical reality, and to increase the precision of those hypotheses by adding empirical data from past experiments.

Those fundamental observations I made about the workings of artificial intelligence have progressively brought me to use AI in social sciences. An analytical tool has become a topic of research for me. It happens all the time in science, mind you. Geometry, way back in the day, was a thoroughly practical set of tools, which served to make good boats, ships and buildings. With time, geometry became a branch of science in its own right. In my case, it is artificial intelligence. It is a tool, essentially, invented back in the 1960s and 1970s, and developed over the last 20 years, and it serves practical purposes: facial recognition, financial investment etc. Still, as I have been working with a very simple neural network for the last 4 months, and as I have been developing the logical structure of that network, I am discovering a completely new opening in my research in social sciences.

I am mildly obsessed with the topic of collective human intelligence. I have that deeply rooted intuition that collective human behaviour is always functional with regard to some purpose. I perceive social structures such as financial markets or political institutions as something akin to endocrine systems in a body: a complex set of signals with a random component in their distribution, and yet a very coherent outcome. I follow up on that intuition by assuming that we humans are, most fundamentally, collectively intelligent regarding our food and energy base. We shape our social structures according to the quantity and quality of available food and non-edible energy. For quite a while, I was struggling with the methodological issue of precise hypothesis-making. What states of human society can be posited as coherent hypotheses, possible to check or, failing that, to speculate about in an informed way?

The neural network I am experimenting with does precisely this: it produces strange, puzzling, complex states, defined by the quantitative variables I use. As I work with that network, I have come to redefine the concept of artificial intelligence. The movie-based approach to AI is that it is fundamentally non-human. As I think about it, step by step, AI is human: it has been developed on the grounds of human logic. It carries human meaning, and is therefore an expression of human neural wiring. It is just selective in its scope. Natural human intelligence has no other way of comprehending but comprehending IT ALL, i.e. the whole of perceived existence. Artificial intelligence is limited in scope: it works just with the data we assign it to work with. AI can really afford not to give a f**k about something otherwise important. AI is focused in the strict sense of the term.

During that recent stay in Marrakech, Morocco, I observed the people around me and their ways of doing things. As is my habit, I am patterning human behaviour. I am connecting the dots about the ways of using energy (for the moment, I haven't seen any making of energy yet) and food. I am patterning the urban structure around me and the way people live in it.

Superbly kept gardens next to buildings marked by a sense of instability. Human generosity combined with somewhat erratic behaviour in the same humans. Of course, women are fully dressed, from head to toe, but surprisingly enough, men too. With close to 30 degrees Celsius outside, most local dudes are dressed like a Polish guy would dress at 10 degrees Celsius. They dress for the heat as I would dress for noticeable cold. Exquisitely fresh and firm fruit and vegetables are a surprise. After having visited Croatia, on the southern coast of Europe, I would rather expect those tomatoes to be soft and somehow past their prime. Still, they are excellent. Loads of sugar in very nearly everything. Meat is scarce and tough. All that has already been described and explained by many a researcher, wannabe researchers included. I think about those things around me as local instances of a complex logical structure: a collective intelligence able to experiment with itself. I wonder what other, hypothetical forms this collective intelligence could take, both close to the actually observable reality and at some distance from it.

The idea I can see burgeoning in my mind is that I can understand better the actual reality around me if I use some analytical tool to represent slight hypothetical variations in said reality. Human behaviour first. What exactly makes me perceive Moroccans as erratic in their behaviour, and how can I represent it in the form of artificial intelligence? Subjectively perceived erraticism is a perceived dissonance between sequences. I expect a certain sequence to happen in other people’s behaviour. The sequence that really happens is different, and possibly more differentiated than what I expect to happen. When I perceive the behaviour of Moroccans as erratic, does it connect functionally with their ways of making and using food and energy?  

A behavioural sequence is marked by a certain order of actions, and a timing. In a given situation, humans can pick their behaviour from a total basket of Z = {a1, a2, …, az} possible actions. These, in turn, can combine into zPk = z!/(z – k)! = (1*2*…*z) / [1*2*…*(z – k)] possible permutations of k component actions. Each such permutation happens with a certain frequency. The way a human society works can be described as a set of frequencies in the happening of those zPk permutations. Well, that's exactly what a neural network such as mine can do. It operates with values standardized between 0 and 1, and these can be very easily interpreted as frequencies of happening. I have a variable named 'energy consumption per capita'. When I use it in the neural network, I routinely standardize each empirical value over the maximum of this variable in the entire empirical dataset. Still, standardization can carry a bit more of a mathematical twist: a standardized value can also be read as the cumulative probability under the curve of a statistical distribution.
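The arithmetic of that basket of actions is easy to check in code (the z = 5 and k = 3 below are numbers of my own choosing, picked only to make the formula concrete):

```python
from math import factorial

def n_permutations(z, k):
    """Ordered sequences of k distinct actions out of z: z! / (z - k)!"""
    return factorial(z) // factorial(z - k)

# Five possible actions combined into sequences of three component actions:
print(n_permutations(5, 3))  # 5!/2! = 60 distinct behavioural sequences
```

A society's 'way of working' would then be a vector of 60 observed frequencies, one per sequence, each standardized between 0 and 1.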

When I feel like giving such a twist, I can make my neural network stroll down different avenues of intelligence. I can assume that all kinds of things happen, and all those things are sort of densely packed one next to the other, and some of those things are more expected than others, and thus I can standardize my variables under the curve of the normal distribution. Alternatively, I can see each empirical instance of each variable in my database as a rare event in an interval of time, and then I standardize under the curve of the Poisson distribution. A quick check with the database I am using right now brings an important observation: the same empirical data standardized with a Poisson distribution becomes much more disparate than the same data standardized with the normal distribution. When I use Poisson, I lead my neural network to divide empirical data sharply into important stuff on the one hand, and all the rest, not even worth bothering about, on the other.

Let me give an example. Here comes energy consumption per capita in Ecuador (1992) = 629,221 kg of oil equivalent (koe), Slovak Republic (2000) = 3 292,609 koe, and Portugal (2003) = 2 400,766 koe. These are three different states of human society, characterized by a certain level of energy consumption per person per year. They are different. I can choose between three different ways of making sense out of their disparity. I can see them quite simply as ordinals on a scale of magnitude, i.e. I can standardize them as fractions of the greatest energy consumption in the whole sample. When I do so, they become: Ecuador (1992) = 0,066733839, Slovak Republic (2000) = 0,349207223, and Portugal (2003) = 0,254620211.

In an alternative worldview, I can perceive those three different situations as neighbourhoods of an expected average energy consumption, with a standard deviation around that expected value. In other words, I assume that it is normal for countries to differ in their energy consumption per capita, just as it is normal for years of observation to differ in that respect. I am thinking normal distribution, and then my three situations come out as: Ecuador (1992) = 0,118803134, Slovak Republic (2000) = 0,556341893, and Portugal (2003) = 0,381628627.

I can adopt an even more convoluted approach. I can assume that energy consumption in each given country is the outcome of a unique, hardly reproducible process of local adjustment. Each country, with its energy consumption per capita, is a rare event. Seen from this angle, my three empirical states of energy consumed per capita could occur with the probability of the Poisson distribution, estimated with the whole sample of data. With this specific take on the thing, my three empirical values become: Ecuador (1992) = 0, Slovak Republic (2000) = 0,999999851, and Portugal (2003) = 9,4384E-31.
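The three standardizations can be reproduced in a few lines. I do not reproduce the full dataset here, so the sample maximum, mean, and standard deviation below are my own rough assumptions, chosen only to be of the right order of magnitude; the point is the mechanics, i.e. scaling over the maximum versus taking the cumulative probability under a normal or a Poisson curve:

```python
import math

# Assumed sample statistics (placeholders; the real ones come from the dataset)
V_MAX = 9430.0               # assumed maximum energy consumption, koe per capita
MU, SIGMA = 3010.0, 2020.0   # assumed sample mean and standard deviation

values = {
    "Ecuador 1992": 629.221,
    "Slovak Republic 2000": 3292.609,
    "Portugal 2003": 2400.766,
}

def normal_cdf(x, mu, sigma):
    """Cumulative probability under the normal curve."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def poisson_cdf(k, lam):
    """P(X <= k) for Poisson(lam), summed in log space to survive large lam."""
    logs = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(int(k) + 1)]
    m = max(logs)
    return math.exp(m) * sum(math.exp(t - m) for t in logs)

for name, v in values.items():
    print(f"{name}: max-scaled {v / V_MAX:.3f}, "
          f"normal {normal_cdf(v, MU, SIGMA):.3f}, "
          f"Poisson {poisson_cdf(v, MU):.3g}")
```

Even with these made-up parameters, the qualitative contrast holds: scaling over the maximum keeps the three countries on one ordinal scale, the normal curve keeps them as comparable neighbourhoods of the mean, and the Poisson curve crushes them into 'practically impossible' versus 'practically certain'.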

I come back to Morocco. I perceive some behaviours in Moroccans as erratic. I think I tend to think Poisson distribution. I expect some very tightly defined, rare event of behaviour, and when I see none around, I discard everything else as completely not fitting the bill. As I think about it, I guess most of our human intelligence is Poisson-based. We think ‘good vs bad’, ‘edible vs not food’, ‘friend vs foe’ etc.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your thoughts on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Shut up and keep thinking

This time, something went wrong with the uploading of media on the WordPress server, and so I am publishing my video editorial on YouTube only. Click here to see and hear me saying a few introductory words.

I am trying to put some order in all the updates I have written for my research blog. Right now, I am identifying the main strands of my writing. Still, I want to explain why I am doing that sorting of my past thought. I had the idea that, as the academic year is about to start, I could use those past updates as material for teaching. After all, I write this blog in a sort of quasi-didactic style, and a thoughtful compilation of such content can be of help to my students.

Right, so I am disentangling those strands of writing. As for the main ideas, I have been writing mostly about four things: a) the market of renewable energies, b) monetary systems and cryptocurrencies, as well as the FinTech sector, c) political systems, law and institutions, and d) behavioural research. As I review what I wrote along these lines, a few distinct patterns of writing emerge. The first consists of case studies, focused on interpreting the financial statements of selected companies. I went down four distinct avenues with that form of expression: a) companies operating in the market of renewable energies, b) investment funds, c) FinTech companies and, lately, d) film and TV companies. Then, as a different form of my writing, come quantitative studies, where I use large databases to run correlations and linear regressions. Finally, there are whole series of updates which, for want of a better term, I call 'concept development'. They give an account of my personal work on business or scientific concepts, and look very much like daily reports of creative thinking.

Funny, by the way, how I write a lot about behavioural patterns and their importance in social structures, and I have fallen, myself, into recurrent behavioural patterns in my writing. Good, so what I am going to do is to use my readings and findings about behavioural patterns in order to figure out, and make the best possible use of my own behavioural patterns.

How can I use my past writing for educational purposes? I guess that my essential mission, as an educator, consists in communicating an experience in a teachable form, i.e. in a form possible to reproduce, and that reproduction of my experience should be somehow beneficial to other people. Logically, if I want to be an efficient educator in social sciences, what I should do now is to distil some sort of essence from my past experience, and formalize it in a teachable form.

My experience is that of looking for recurrent patterns in the most basic phenomena around me. As I am supposed to be clever as a social scientist, let’s settle for social phenomena. Those three distinct forms of my expression correspond to three distinct experiences: focus on one case, search for quantitative data on a s**tload of cases grouped together, and, finally, progressive coining up of complex ideas. This is what I can communicate, as a teacher.

Yet, another idea germinates in my mind. I am a being in time, and I thrust myself into the time to come, as Martin Heidegger would say (if he were alive). I define my social role very largely as that of a scientist and a teacher, and I wonder what I am thrusting, of myself as a scientist and a teacher, into this time that is about to advance towards me. I was tempted to answer grandly that it is my passion to discover that I project into current existence. Yet, precisely, I noticed it is grand talk, and I want to go to the core of things, to the very flesh of my being in time.

As I take the pomp out of that 'passion to discover' thing, something scientific emerges: a sequence. It all always starts when I see something interesting, and sort of vaguely useful. I intuitively want to know more about that interesting and possibly useful thing, and so I touch it, I explore it, I turn it under different angles, and yes, my initial intuition was right: it is interesting and useful. Years ago, even before having my PhD, I was strongly involved in preparing new material for management training. I was part of a team led by a respectable professor from the University of Warsaw, and we were in scientific charge of training for the middle management of a few Polish banks. At the time, I started to read the financial reports of companies listed in the stock market. I progressively figured out that large, publicly listed companies publish periodical reports which are made of two completely different semantic substances.

In those financial reports, there was the corporate small talk, about 'exciting new opportunities', 'controlled growth', 'value for our shareholders', which, honestly, I find interesting for the sake of its peculiar style, seemingly detached from real life. Yet, there is another semantic substance in those reports: the numbers. Numbers tell a different story. Even if the management of a company do their best to disguise some facts so that they look fancier, the numbers tell the truth. They tell the truth about product markets, about doubtful mergers and acquisitions, about the capacity of a business to accumulate capital etc.

As I started to work seriously on my PhD, and to sort out the microeconomic theories broadly speaking, including those of the new institutional school, I suddenly realised the connection between those theories and the sense that numbers make in those financial reports. I discovered that financial statements, i.e. the bare numbers, backed with some technical, explanatory notes, tend to show the true face of any business. They make for Ockham's razors, which cut out the b*****it and leave only what is really meaningful.

Here comes the underlying, scientifically defined phenomenon. Financial markets have been ever present in human societies. In this respect, I could never recommend enough the monumental work by Fernand Braudel (Braudel 1992a[1]; Braudel 1992b[2]; Braudel 1995[3]). Financial markets have their little ways, and one of them is the charming veil of indefiniteness, put on the facts that laymen should-not-exactly-excite-themselves-about-for-their-own-good. Big business likes to dress into those fancy clothes, made of fancy and foggy language. Still, as soon as numbers have to be published, they start telling the true story. However elusive the management of a company would be in their verbal statements, the financials tell the truth. It is fascinating, how the introduction of precise measurements and accounts, into a realm of social life where plenty of b*****it floats, instantaneously makes things straight and clear.

I know what you may be thinking now, 'cause I used to think the same when I was (much) younger and listened to lectures at the university: here is that guy, who can be elegantly labelled as more than mature, and he gets excited about his own fascinations, financial reports in this case. Still, I invite you to explore the thing. Financial markets are crucial for the current functioning of our civilisation. We need to shift towards renewable energies, we need to figure out how to make more food in sustainable ways, we need to remove plastic from the oceans, we need to go and see if Mars is an interesting place to hang around: we have a lot of challenges to face. Financial markets are crucial to that end, because they can greatly help in mobilising collective effort, and if we want them to work the way they should, we need to ensure that money goes where it is really needed. Bringing clarity and transparency to finance, over and over again, is really important. Being able to cut through the veil of corporate propaganda and go to the core of business is just as important. Careful reading of financial reports matters. It just matters.

So here is how one of my scientific fascinations formed. More or less in the same epoch, i.e. when I was working on my PhD, I started to work seriously with large datasets, mostly regarding innovation. Patents, patent applications, indicators of R&D effort: I started to go really quantitative about that stuff. I still remember that strange feeling, when synthetic measures of those large datasets started to make sense. I would run some correlations, just because you need a lot of correlations in a PhD in economics, and vlam!: things would start to be meaningful. Those of you who work with Big Data probably know that feeling well, but I was experiencing it in the 1990s, when digital technologies were like the grandparents of the current ones, and even things like Panel Data Analysis, an analytical routine today, were seen as the impressionism of economic research.

I had progressively developed a strongly exploratory manner of working with quantitative data. A friend of mine, the same professor whom I used to work for in those management training projects, called it 'the bulldog' approach. He said: 'Krzysztof, when you find some interesting data, you are like one of those anecdotal bulldogs: you bite into it so strongly that sometimes you don't even know how to let go, and you need someone to come with a crowbar and force your jaws open'. Yes, indeed, this is the very thing I have just noticed as I review the past updates in this research blog of mine. What I do with data can be best described as sniffing, rummaging, playing with, digging and biting into – anything but a serious scientific approach.

This is how two of my typical forms of scientific expression – case studies and quantitative studies – formed out of my fascination with the sense coming out of numbers. There is that third form of expression, which I have provisionally labelled 'concept development', and which I developed most recently, over the last 18 months or so, precisely as I started to blog.

I am thinking about the best way to describe my experience in that respect. Here it comes. You have probably experienced those episodes of going outdoors, hiking or running, when you or someone else starts moaning: 'These backpack straps are just killing my shoulders! I am thirsty! I am exhausted! My knees are about to explode!' etc. When I was a kid, I joined the boy scouts, and it was all about hiking. I used to be a fat kid, and that hiking was really killing me, but I liked the company, too, and so I went for it. I used to moan exactly the way I have just portrayed. The team leader would just reply along the lines of 'Just shut up and keep walking! You will adapt!'. Now, I know he was bloody right. There are times in life when we take on something new and challenging, and then it seems just so hard to carry on, and the best way to deal with it is to shut up and carry on. You will adapt.

This is very much what I experienced as regards thinking and writing. When I started to keep this blog, I had a lot of ideas to express (hopefully, I still have), but I was really struggling with giving an intelligible form to those ideas. This is how I discovered the deep truth of that sentence, attributed to Pablo Picasso (although it could be anyone): 'When a stroke of genius comes, it finds me at work'. As strange as it may seem, I experienced, and I am still experiencing, over and over again, the fundamental veracity of that principle. When I start working on an idea, the initial enthusiasm sooner or later yields to some moaning function in my brain: 'F*ck, it is too hard! That thinking about one thing is killing me! And it is sooo complex! I will never sort it out! There is no point!'. Then, hopefully, another part of my brain barks: 'Just shut up, and think, write, repeat! You will adapt'.

And you know what? It works. When, faced with a complex concept to figure out, I just shut up (metaphorically, I mean I stop moaning) and keep thinking and writing, it takes shape. Step by step, I am sketching the contours of what's simmering in the depths of my mind. The process is a bit painful, but rewarding.

Thus, here is the pattern of myself which I am thrusting into the future, as it comes to science and teaching, and which, hopefully, I can teach. People around me, voluntarily or involuntarily, attract my attention to some sort of scientific and/or teaching work I should do. This is important, and I have just realized it: I take on goals and targets that other people somehow suggest. I need that social prod to wake me up. As I take on that work, I almost instinctively start flipping my Ockham's razor between and around my intellectual fingers (some people do it with cards, phones, or even knives, you might have spotted it), and I casually give a shave here and there, and I slice observable reality into layers: there is the foam of common narrative about the thing, and there are those factual anchors I can attach to. Usually they are numbers and, at a deeper philosophical level, they are proportions between things of reality.

As I observe those proportions, I progressively attach them to facts of life, and I start seeing patterns. Those patterns provide me something more or less interesting to say, and so I maintain my intellectual interaction with other people, and sooner or later they attract my attention to another interesting thing to focus on. And so it goes on. And one day, I die. And what will really matter will be made of things that I do but which outlive me. The ethically valuable things.

Good. I return to that metaphor I coined up some 10 weeks ago, that of social sciences used as a social GPS system, i.e. serving to find one's location in the social space, and then figure out a sensible route to follow. My personal experience, the one I have just given an account of, can serve that purpose. My experience tells me that finding my place in the social space always involves interaction with other people. Understanding, and sort of embracing, my social role, i.e. the way I can be really useful to other people, is the equivalent of finding my location on the social map. Another important thing I discovered as I deconstructed my experience: my social role is largely made of goals I pursue, not just of labels and rituals. It is sort of dynamic, it is very much my Heideggerian being-in-time, thrusting myself into my own immediate future.

I feel like getting it across really precisely: that thrusting-myself-into-the-future thing is not just pure phenomenology. It is hard science as well. We are defined by what we do. By ‘we’ I mean both individuals and whole societies. What we do involves something we are trying to achieve, i.e. some ethical values we seek to maximise, and to balance with other values. Understanding my social role means tracing the path I am moving along.

Now, whatever goal I am to achieve, according to my social role, around me I can see the foam of common narrative, and the factual anchors. The practical use of social sciences consists in finding those anchors, and figuring out the way to use them so as to thrive in the social role we have now, or change that role efficiently. Here comes the outcome from another piece of my personal experience: forming a valuable understanding requires just shutting up and thinking, and discovering things. Valuable discovery goes beyond and involves more than just amazement: it is intimately connected to purposeful work on discovering things.




My own zone of proximal development

 

Let’s face it: I am freestyling intellectually. I have those syllabuses to prepare for the next academic year, and so I decided to let my brain crystallize a little bit, subconsciously and undisturbed, around the business plan for the EneFin concept. Since crystallization occurs subconsciously, I can do plenty of other thinking in the meantime, and so I started doing that other thinking, and I am skating happily on the thin ice of fundamental questions concerning my mission as a scientist and a teacher. The ice those questions make is really thin, and if it cracks under my weight, I will dive into the cold depth of imperative necessity for answers.

You probably know that saying about economics, one of my fundamental disciplines besides law: economics is the art of making forecasts that do not hold. Nasty, but largely true. I want to devise a method of teaching social sciences, and possibly a contingent method of research, which can be directly useful to the individual, without said individual having to become the president of something big in order to find real utility in social sciences.

I am starting to form the central principle of my teaching and research: social sciences can be used and developed similarly to geography, i.e. they can be used to find one’s bearings in a complex environment, to trace a route towards valuable and attainable goals, and to plan for a realistic pace in covering this route. A fundamental thought comes to me from the realm of hermeneutic philosophy, which I am really fond of, and it goes as follows: whatever kind of story I am telling, at the bottom line I am telling the story of my own existence. Question (I mean a real question, which I am asking right now, not some fake, rhetorical stuff): this view of social sciences, as a quasi-cartographic pathway towards orienting oneself in the social context, is it the story of my own existence? Answer: hell, yes. As I look back at my adult life, it is indeed a long story of wandering, and I perceive a substantial part of that wandering as having been pretty pointless. I could have done much of the same faster, simpler, and with more ethical value achieved on the way. Mind you, here I am largely sailing the uncharted waters of ‘what could have happened if’. Anyway, what happened stays happened.

OK, this is the what. Now, I want to phrase out the how. Teaching means essentially two things. Firstly, the student gets to know the skills he or she should master. In educational language it is described as the phase of conscious incompetence: the student gets to know what they don’t know and should develop a skill in. Secondly, teaching should lead them through at least a portion of the path from that conscious incompetence to conscious competence, i.e. to the phase of actually having developed those skills they became aware of in the phase of conscious incompetence.

Logically, I assume there is a set of skills that a person – especially a young one – needs to find and pursue their personal route through the expanse of social structure, once they have been dropped, by the helicopter of adolescence and early adulthood, in some remote spot of said structure. My mission is to use social sciences in order to show them the type of skill they’d better develop, and, possibly, to train them at those skills.

My strictly personal experience of learning is strongly derived from the practice of sport, and there is a piece of wisdom that anyone can have as their takeaway from athletic training: it is called ‘mesocycle’. A mesocycle of training is a period of about 3 months, which is the minimum time our body needs to develop a complex and durable response to training. In any type of learning, a mesocycle can be observed. It is the interval of time that our nervous system needs to get all the core processes, involved in a given pattern of behaviour being under development, well aligned and acceptably optimized.

My academic teaching is structured into semesters. In the curriculum of each particular subject, the realistic cycle of my interaction with students is about 4 months, which gives room for one full mesocycle of training, from conscious incompetence towards conscious competence, plus a little extra time for outlining that conscious incompetence. Logically, I need to structure my teaching into 25% of developing the awareness of skills to form, and 75% of training in those skills.

One of the first syllabuses I am supposed to prepare for the next academic year is ‘Introduction to Management’ for the undergraduate major of film and TV production. It is part of those students’ curriculum for the first year, when, essentially, every subject is an introduction to something. I follow the logic I have just outlined. First of all, what is the initial point of social start, in the world of film and TV production? Someone joins a project, most frequently: the production of a movie, an advertising campaign, the creation of a YouTube channel etc. The route to follow from there? The challenge consists in demonstrably proving one’s value in that project in order to be selected for further projects, rather than maxing out on the profits from this single venture. The next level consists in passing from projects to organisation, i.e. in joining or creating a relatively stable organisation, combining networks and hierarchies, which, in turn, can allow the sprouting of new projects.

Such a path of social movement involves skills centred around the following core episodes: a) quickly and efficiently finding one’s place in a project typical for the world of film and TV production, b) starting and managing new projects, c) finding one’s place in networks and hierarchies typical for film and TV production, and d) possibly developing such an organisation.

Defined this way, the introduction to management involves the ability to define social roles and social values peculiar to the given project and/or organisation, as well as elementary skills in teamwork. As I think of it, the most essential competences in dealing with adversity, like getting one’s s**t together under pressure and forming a realistic plan B, could be helpful as well.

Good. Roles and values in a project of film and TV production. What comes to my mind in the first place, as I am thinking of it, is once again the teaching of Hans-Georg Gadamer, the heavyweight champion of hermeneutic philosophy: historically, art at its best has been a fully commercial enterprise, based on business rules. Concepts such as ‘art for the sake of art’ or ‘pure art’ are relatively new – they emerged by the end of the 19th century – and they are the by-product of another emergence, that of the so-called leisure class, made of people rich enough to afford not to worry about their daily subsistence, and, at the same time, not seriously involved in killing someone in order to stay this way.

One of the first social patterns to teach my students regarding the values of film and TV production is something which, for lack of a better word, I call the ‘economic base’. It is a value, in this business, to have a relatively predictable stream of income, which is enough to keep people working on creative projects. The understanding I want my students to form, thus, is precisely this economic base. How much do I need to earn, and how, if I want to keep working on that YT channel long enough to turn it into a business? What kind of job can I do whilst running such a project? How much capital do I need to raise in order to make 50 people work on a movie for 6 months? I think that studying the cases of real businesses in film and TV production, and building simple business plans on the grounds of those cases, can be a good, skill-forming practice.

Once this value is identified, it is important to understand how people are most likely to behave whilst striving to achieve it. In other words, it is about the fundamentals of social competition and cooperation. A simple version of game theory seems the most workable, in terms of teaching tools.

The economic base for creative work makes one important value, still not the only one. Creation itself is another one. Managing creative teams is tricky. You have a bunch of strong personalities, and you want them to stay this way, and yet you want them to reach some kind of compromise. I think that simple role playing in class, paired with collective projects (i.e. projects carried out by teams of students) can be instructive.

I am summing up. I am a big fan of long-term tasks as educational tools. Preparing a simple business plan, specific to this precise industry (i.e. film and TV production), paired with training in teamwork, should do the job. Now, the easy path is just to tell students ‘Listen, guys! You have those projects to complete by the end of the semester. Just get on with it. We will be having those strange gatherings called “lectures”, but you don’t have to pay too much attention to them. Just have those projects done’. I have already experimented with this approach, and my conclusion is that it generally allows the clever ones to prove they are clever, but not much more. It is a pity to watch the less clever students struggling with a task they have to carry out over the length of one semester.

I want to devise some kind of path in my students’ zone of proximal development: a series of measured, feasible lessons, leading to tangible improvement. Each lesson covers 6 steps: i) define the project to carry out, as well as its goals and constraints, make a plan, make a team, and make them work on the thing, ii) purposefully lead to a crisis, iii) draw conclusions from the crisis, iv) define the improvement needed, v) carry out the improvement, and vi) check the results.

As I see my usual schedule over one semester, I can arrange about 5 such sequences of 6 steps, thus 5 big lessons. Now, I am thinking about the kind of core task to carry out in each lesson, so that the task is both representative for film and TV production, and feasible in class. Pitching the concept of a movie is a must, and the concept of a YT platform seems to be a sensible idea as well. That gives me two types of business concepts, and I feel like repeating each of them twice. That makes 4 sequences of training, and leaves one more in reserve. That one more could be, for example, a content store, along the lines of the early Netflix.

Good. One thing to tick off. As I am having a look at it, the same pattern can be transferred, almost as it is, into the curriculum of Principles of Management, which I teach to the 1st year undergraduates in the major of International Relations. In this particular case, the same path is applicable, just the factual scope needs a bit of broadening. Each of those complex, sequenced lessons should be focused on a different type of business. Typical industrial, for one, something in the IT sector, for two, then something really scientific, like biotech, followed by typical service business, and finally something financial.

Now, I jump. It happens all the time in my mind. Something in those synaptic connexions of mine makes them bored with one topic, and willing to embrace the diversity of being. I am asking myself what I can possibly teach to my students, in terms of finding one’s way across the social jungle, on the grounds of the economic theory which either I fully embrace or I have developed by myself. Here come a few ideas.

‘However inventive and original you think you are, you are about as inventive and original as quite a bunch of other people.’ This one comes mostly from my reading of Joseph Schumpeter’s theory of creative destruction and of the neighbourhood of equilibrium. How can it be useful, if you want to do something important, like starting a business or a social action, or taking a job that involves expatriation? Well, look for patterns in what other people do. Someone is bound to have the kind of experience you can learn from.

This is deeper than some people could think. As I work with my students on the general issue of business planning, this particular approach proves really useful. There are many instances of complex business planning – the ‘what if?’ sequences, for example – when emulating some existing businesses is the only sensible approach.

The next one spells: ‘Recurrent bargaining leads to figuring out sensible, workable compromises that minimize waste and that nobody is quite satisfied with’. This principle refers to the theoretical concept of local Marshallian equilibrium, but it is also strongly connected to the theory of games. Frequently, you have the impression of being forced into some kind of local custom or ritual, like the average wage you can expect for a given job, or the average rent you have to pay for your apartment, or the habitual way of settling a dispute. It chafes, and it hurts what you perceive as your own originality, but people around you are strangely attached to this particular way of doing things. This is a local equilibrium.

If you want to understand a given local equilibrium, try and figure out the way this equilibrium is being achieved. Who? What? When? How? Under what conditions does the process work, and in which cases it doesn’t? In other words, if you want to figure out the way to influence and change those uncomfortable rituals around you, you need to find a way of making people bargain and get a compromise around a new ritual.

Comes my own research, now, and the fundamental principles of social path-finding I can phrase out of that research. I begin by stating that population matters, in the most numerical sense. The rate of demographic growth, together with the rate of migration, are probably the most powerful forces of social change we can imagine. Whatever those changing populations do, they adapt to the available supply of food and energy. At the individual level, people express that adaptation by maximizing their personal intake of energy, within socially accepted boundaries, by maintaining a certain portfolio of technologies. The social structures we live in act as regulators of the technological repertoire we have access to, and they change as this repertoire changes.

Practical implications? You want to experience creative social change, with a lot of new types of jobs emerging every year, and a lot of new products? You need a society with vivid demographic growth and a lot of migration going in and/or out. You want security, stability and predictability? You want people around you to be always calm and nice to each other? Then you need a society with slow or null demographic growth, not much of a migration, and plenty of food and energy to tap into. You want to have both, i.e. plenty of creative change, and people being always nice? Sorry, pal, not with this genotype. It just wouldn’t work with humans.


The other cheek of business

My editorial

I am turning towards my educational project. I want to create a step-by-step teaching method, where I guide a student in their learning of social sciences, and this learning happens by doing research in social sciences. I have a choice between imposing some predefined topics for research, or inviting each student to propose their own. The latter seems certainly more exciting. As a teacher, I know what a brainstorm is, and believe me: a dozen dedicated and bright individuals, giving their ideas about what they think it is important to do research about, can completely uproot your (my own?) ideas as to what it is important to do research about. Still, I can hardly imagine myself, individually, handling efficiently all that bloody blissful diversity of ideas. Thus, the first option, namely imposing some predefined topics for research, seems just workable, whilst still being interesting. People can get creative about methods of research, after all, not just about topics for it. Many of the great scientific inventions were actually methodological, and what was really breakthrough about them was the universal applicability of those newly invented methods.

Thus, what I want to put together is a step-by-step path of research, communicable and teachable, regarding my own topics for research. Whilst I still admit the possibility of student-generated topics coming my way, I will consider them as a luxurious delicacy I can indulge in under the condition I can cope with those main topics. Anyway, my research topics for 2018 are:

  1. Smart cities, their emergence, development, and the practical ways of actually doing business there
  2. Fintech, mostly cryptocurrencies, and even more mostly those hybrid structures where cryptocurrencies are combined with “traditional” financial assets
  3. Renewable energies
  4. Social and technological change as a manifestation of collective intelligence

Intuitively, I can wrap (I), (II), and (III) into a fancy parcel, decorated with (IV). The first three items actually coincide in time and space. The fourth one is that kind of decorative cherry you can put on a cake to make it look really scientific.

As I start doing research about anything, hypotheses come in handy. If you investigate a criminal case, assuming that anyone could have done anything anyhow gives you certainly the biggest possible picture, but the picture is blurred. Contours fade and dance in front of your eyes, idiocies pop up, and it is really hard to stay reasonable. On the other hand, if you make some hypotheses as to who did what and how, your investigation gathers both speed and sense. This is what I strongly advocate: make some hypotheses at the starting point of your research. Before I go further with hypothesising on my topics for research, a few preliminary remarks can be useful. First of all, we always hypothesise about anything we experience and think. Yes, I am claiming this very strongly: anything we think is a hypothesis or contains a hypothesis. How come? Well, we always generalise, i.e. we simplify and hope the simplification will hold. We very nearly always have less data than we actually need to make, with absolute certainty, the judgments we make. Actually, everything we pretend to claim with certainty is an approximation.

Thus, we hypothesise intuitively, all the time. Now, I summon the spirit of Milton Friedman from the abyss of pre-Facebook history, and he reminds us of the four basic levels of hypothesising. Level one: regarding any given state of nature, we can formulate an indefinitely great number of hypotheses; in practice, there are infinitely many of them. Level two: just some of those infinitely many hypotheses are checkable at all, with the actual access to data I have. Level three: among all the checkable hypotheses, with the data at hand, there are just some regarding which I can say with reasonable certainty whether they are true or false. Level four: it is much easier to falsify a hypothesis, i.e. to say under what conditions it does not hold at all, than to verify it, i.e. to claim under what conditions it is true. This comes from level one: each hypothesis has cousins that sound almost exactly the same, but just almost, so under given conditions many mutually non-exclusive hypotheses can be true.

Now, some of you could legitimately say: ‘Good, so I need to start with formulating infinitely many hypotheses, then check which of them are checkable, then identify those allowing more or less rigorous scientific proof? Great. It means that at the very start I get entangled for eternity into checking how checkable each of the infinitely many hypotheses I can think of is. Not very promising as for results’. This is legitimate to say, and this is the reason why, in science, we use that tool known as Ockham’s razor. It serves to give a cognitive shave to badly kept realities. In its traditional form it consists in assuming that the most obvious answer is usually the correct one. Still, as you have a closer look at this precise phrasing, you can see a lot of hidden assumptions. It assumes you can distinguish the obvious from the dubious, which, in turn, means that you have already applied the razor beforehand. Bit of a loop. The practical way of wielding that razor is to assume that the most obvious thing is observable reality. I start with finding my bearings in reality. Recently, I gave an example of that: check ‘My individual square of land, 9 meters on 9’. I look around and assess what kinds of phenomena I can intuitively connect, at this stage of research, to the general topic of my research, and which of them I can observe, measure, and communicate intelligibly about. These are my anchors in reality.

I look at those things, I measure them, and I do my best to communicate my observations to other people. This is when Ockham’s razor is put to an ex post test: if the shave has been really neat, other people can easily understand what I am communicating. If I and a bunch of other looneys (oops! sorry, I wanted to say ‘scientists’) can agree on the current reading of the density of population, and not really on the reading of unemployment (‘those people could very well get a job! they are just lazy!’), then the density of population is our Ockham’s razor, and unemployment not really (I love the ‘not really’ expression: it can cover any amount of error and bullshit). This is the right moment for distinguishing the obvious from the dubious, and for formulating my first hypotheses, and then I move backwards along Milton Friedman’s four levels of hypothesising. The empirical application of Ockham’s razor has allowed me to define what I can actually check in real life, and this, in turn, allows distinguishing between two big bags, each with hypotheses inside. One bag contains the verifiable hypotheses, the other one is for the speculative, i.e. non-verifiable ones.

Anyway, I want my students to follow a path of research together with me. My first step is to organize the first step on this path, namely to find the fundamental, empirical bearings as for those four topics: smart cities, Fintech, renewable energies and collective intelligence. The topic of smart cities certainly can find its empirical anchors in the prices of real estate, and in the density of population, as well as in the local rate of demographic growth. When these three dance together – once again, you can check ‘My individual square of land, 9 meters on 9’ – the business of building smart cities suddenly gets some nice, healthy, reddish glow on its cheeks. Businesses have cheeks, didn’t you know? Well, to be quite precise, businesses have other cheeks. The other cheek, in a business, is what you don’t want to expose when you already get hit somewhere else. Yes, you could call it the crown jewels as well, but ‘other cheek’ just sounds more elegant.

As for Fintech, the first and most obvious observation, from my point of view, is diversity. The development of Fintech calls into existence many different frameworks for financial transactions in times and places when and where, just recently, we had just one such framework. Observing Fintech means, in the first place, observing diversity in alternative financial frameworks – such as official currencies, cryptocurrencies, securities, corporations, payment platforms – in the given country or industry. In terms of formal analytical tools, diversity refers to a cross-sectional distribution and its general shape. I start by taking a convenient denominator. The Gross Domestic Product seems a good one, yet you can choose something else, like the aggregate value of intellectual property embodied in selfies posted on Instagram. Once you have chosen your denominator, you measure the outstanding balances, and the current flows, in each of those alternative financial frameworks, in the units of your denominator. You get things like the market capitalization of Ethereum as % of GDP vs. the supply of US dollars as % of the national GDP of the United States, etc.
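The procedure above can be sketched in a few lines of code. All the figures and the list of frameworks below are invented, purely for illustration; in real research they would come from actual statistics:

```python
from math import log

# Hypothetical GDP and outstanding balances, all in the same currency unit
gdp = 500_000
balances = {
    "official_currency": 350_000,
    "cryptocurrencies": 12_000,
    "securities": 220_000,
    "payment_platforms": 45_000,
}

# Step 1: express each financial framework in units of the denominator (% of GDP)
shares_of_gdp = {name: 100 * value / gdp for name, value in balances.items()}

# Step 2: describe the shape of the cross-sectional distribution;
# Shannon entropy is one simple measure of diversity across frameworks
total = sum(balances.values())
weights = [value / total for value in balances.values()]
diversity = -sum(w * log(w) for w in weights if w > 0)

print(shares_of_gdp["official_currency"])  # 70.0, i.e. money supply at 70% of GDP
print(round(diversity, 3))
```

The more evenly the balances spread across frameworks, the higher the entropy; a country with just one financial framework would score zero.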

I pass to renewable energies now. When I think about what is the most obviously observable in renewable energies, it is a dual pattern of development. We can have renewable sources of energy supplanting fossil fuels: this is the case in the developed countries. On the other hand, there are places on Earth where electricity from renewable sources is the first source of electricity ever: those people simply didn’t have juice to power their freezer before that wind farm started up in the whereabouts. This is the pattern observable in the developing countries. In the zone of overlap between those two patterns, we have emerging markets: there is a bit of shifting from fossils to green, and there is another bit of renewables popping up where nothing had dared to pop up in the past. Those patterns are observable as, essentially, two metrics, which can possibly be combined: the final consumption of energy per capita, and the share of renewable sources in the final consumption of energy. Crude as they are, they allow observing a lot, especially when combined with other variables.
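The dual pattern, read off those two metrics, can be illustrated with a toy classifier. The country figures and the thresholds below are my own invented assumptions, just to show the mechanics:

```python
# Invented country-level figures: (final energy per capita in GJ, % renewables)
countries = {
    "developed_A": (160.0, 28.0),
    "developing_B": (18.0, 35.0),
    "emerging_C": (70.0, 15.0),
}

def pattern(energy_per_capita, low=30.0, high=100.0):
    """Crude classification of the dual pattern; thresholds are invented."""
    if energy_per_capita >= high:
        return "substitution"       # developed: renewables supplant fossil fuels
    if energy_per_capita <= low:
        return "first_electricity"  # developing: the first juice ever
    return "mixed"                  # emerging markets: a bit of both

for name, (per_capita, renewable_share) in countries.items():
    print(name, pattern(per_capita), f"{renewable_share}% renewable")
```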

And so I come to collective intelligence. This is seemingly the hardest part. How can I say that any social entity is kind of smart? It is hard to say even in humans. I mean, virtually everybody claims they are smart, and I claim I’m smart, but when it comes to actual choices in real life, I sometimes feel so bloody stupid… Good, I am getting a grip. Anyway, intelligence for me is the capacity to figure out new, useful things on the grounds of memory about old things. There is one aspect of that figuring out which is really intriguing my internal curious ape: the phenomenon called ultra-socialisation, or supersocialisation. I am inspired, as for this one, by the work of a group of historians: see ‘War, space, and the evolution of Old World complex societies’ (Turchin et al. 2013[1]). As a matter of fact, Jean-Jacques Rousseau, in his “Social Contract”, was chasing very much the same rabbit. The general point is that any group of dumb assholes can get social on the level of immediate gains. This is how small, local societies emerge: I am better at running after woolly mammoths, you are better at making spears, which come in handy when the mammoth stops running and starts arguing, and he is better at healing wounds. Together, we can gang up, and each of us can experience immediate benefits of such socialisation. Still, what makes societies, according to Jean-Jacques Rousseau, as well as according to Turchin et al., is the capacity to form institutions of large geographical scope, which require getting over the obsession with immediate gains and provide a long-term, developmental kick. What is observable, then, are precisely those institutions: law, state, money, universally enforceable contracts etc.

Institutions – and this is the really nourishing takeaway from that research by Turchin et al. (2013[2]) – are observable as a genetic code. I can decompose institutions into a finite number of observable characteristics, and each of them can be observed as switched on, or switched off. Complex institutional frameworks can be denoted as sequences of 1’s and 0’s, depending on whether the given characteristic is, respectively, present or absent. Somewhere between Fintech and collective intelligence, I have that metric which I found really meaningful in my research: the share of aggregate depreciation in the GDP. This is the relative burden, imposed on current economic activity, due to the phenomenon of technologies getting old and being replaced by younger ones. When technologies get old, accountants account for that fact by depreciating them, i.e. by writing off the books a fraction of their initial value. All that writing off, done by accountants active in a given place and time, makes aggregate depreciation. When denominated in the units of current output (GDP), it tends to get into interesting correlations (the way variables can socialize) with other phenomena.
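The ‘genetic code’ idea can be made concrete in a few lines. The list of institutional characteristics below is my own illustrative pick, not the actual variable set used by Turchin et al.:

```python
# Illustrative institutional characteristics (switched on = 1, off = 0)
TRAITS = ["written_law", "territorial_state", "money",
          "enforceable_contracts", "professional_bureaucracy"]

def encode(framework):
    """Turn a dict of on/off characteristics into a sequence of 1's and 0's."""
    return "".join("1" if framework.get(trait, False) else "0" for trait in TRAITS)

society_a = {"written_law": True, "territorial_state": True, "money": True,
             "enforceable_contracts": True, "professional_bureaucracy": False}
society_b = {"written_law": True, "money": True,
             "professional_bureaucracy": True}

code_a, code_b = encode(society_a), encode(society_b)

# Hamming distance: how many characteristics differ between the two frameworks
distance = sum(a != b for a, b in zip(code_a, code_b))

print(code_a)    # 11110
print(code_b)    # 10101
print(distance)  # 3
```

Once institutional frameworks are encoded like this, they can be compared, clustered, and tracked over time just like any other observable.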

And so I come up with my observables: density of population, demographic growth, prices of real estate, balances and flows of alternative financial platforms expressed as percentages of GDP, final consumption of energy per capita, share of renewable energies in said final consumption, aggregate depreciation as % of GDP, and the genetic code of institutions. What I can do with those observables is to measure their levels, growth rates, and cross-sectional distributions, and, at a more elaborate level, their correlations, cointegrations, and their memory. The latter can be observed, among other methods, through Gaussian vector autoregression, as well as through geometric Brownian motion. This is the first big part of my educational product. This is what I want to teach my students: collecting that data, observing and analysing it, and finally hypothesising on the grounds of basic observation.
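The first layers of that analytical toolbox (levels, growth rates, correlations) are easy to sketch; the deeper ones (vector autoregression, geometric Brownian motion) call for proper statistical libraries. The tiny five-year panel below is invented, just to show the mechanics:

```python
# Invented five-year series for two of the observables named above
density = [120, 123, 127, 130, 134]           # population per km2
real_estate = [2000, 2100, 2250, 2300, 2450]  # average price per m2

def growth_rates(series):
    """Year-on-year growth rates of a series of levels."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

g_density = growth_rates(density)
g_prices = growth_rates(real_estate)

print([round(g, 4) for g in g_density])
print(round(pearson(g_density, g_prices), 3))
```

Correlating the growth rates, rather than the raw levels, is the usual first precaution against spurious correlation between two series that simply both trend upwards.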

[1] Turchin, P., Currie, T. E., Turner, E. A. L., Gavrilets, S., 2013, War, space, and the evolution of Old World complex societies, Proceedings of the National Academy of Sciences, vol. 110, no. 41, pp. 16384–16389

[2] Turchin, P., Currie, T. E., Turner, E. A. L., Gavrilets, S., 2013, War, space, and the evolution of Old World complex societies, Proceedings of the National Academy of Sciences, vol. 110, no. 41, pp. 16384–16389