Knowledge and skills

My editorial on YouTube

Once again, I break my rhythm. Mind you, it happens a lot this year. Since January, it has been all about breaking whatever rhythm I have had so far in my life. I am getting used to the unusual, and I think it is a good thing. Now, I am breaking the usual rhythm of my blogging. Normally, I alternate updates in English with those in French, roughly one to one, with a pinch of writing in my mother tongue, Polish, every now and then. Right now, two urgent tasks require my attention: I need to prepare new syllabuses for the English-taught courses in the upcoming academic year, and to revise my draft article on the energy efficiency of national economies.

Before I attend to those tasks, however, a little bit of extended reflection on goals and priorities in my life, somewhat along the lines of my last update, « It might be a sign of narcissism ». I have just gotten back from Nice, France, where my son has just started his semester of Erasmus+ exchange at the Sophia Antipolis University. In my youth, I spent a few years in France, and I have been back many times since, and man, this time I felt once again that very special, very French kind of human energy which I remember from the 1980s. Over the last 20 years or so, the French seemed to have been sleeping inside their comfort zone, but now I can see people who have just woken up, are wondering what the hell they wasted so much time on, and are taking double strides to gather speed in terms of social change. This is the innovative, brilliant, positively cocky France I love. There is a sort of social pattern in France: when the French get vocal, and possibly violent, in the streets, they are up to something as a nation. The French Revolution of 1789 was an expression of popular discontent, yet what followed was not popular satisfaction: it was a century-long expansion on virtually all fronts: political, military, economic, scientific etc. Right now, France is just past the peak of the Yellow Vests protest, to which one of my French students devoted an essay (see « Carl Lagerfeld and some guest blogging from Emilien Chalancon, my student »). I wonder who will be the Napoleon Bonaparte of our times.

When entire nations are up to something, it is interesting. Dangerous, too, and yet interesting. Human societies are, as a rule, most intensely up to something as regards their food and energy base, and so I come to that revision of my article. Here, below, you will find the letter of review I received from the journal “Energy” after I submitted the initial manuscript, referenced as Ms. Ref. No.: EGY-D-19-00258. The link to my manuscript is to be found in the first paragraph of this update. For those of you who are taking their first steps in science, it can be an illustration of what ‘scientific dialogue’ means. Further below, you will find a first sketch of my revision, accounting for the remarks from the reviewers.

Thus, here comes the LETTER OF REVIEW (in italics):

Ms. Ref. No.: EGY-D-19-00258

Title: Apprehending energy efficiency: what is the cognitive value of hypothetical shocks?

Journal: Energy

Dear Dr. Wasniewski,

The review of your paper is now complete, the Reviewers’ reports are below. As you can see, the Reviewers present important points of criticism and a series of recommendations. We kindly ask you to consider all comments and revise the paper accordingly in order to respond fully and in detail to the Reviewers’ recommendations. If this process is completed thoroughly, the paper will be acceptable for a second review.

If you choose to revise your manuscript it will be due into the Editorial Office by the Jun 23, 2019

Once you have revised the paper accordingly, please submit it together with a detailed description of your response to these comments. Please, also include a separate copy of the revised paper in which you have marked the revisions made.

Please note if a reviewer suggests you to cite specific literature, you should only do so if you feel the literature is relevant and will improve your paper. Otherwise please ignore such suggestions and indicate this fact to the handling editor in your rebuttal.

To submit a revision, please go to https://ees.elsevier.com/egy/  and login as an Author.

Your username is: ******

If you need to retrieve password details, please go to: http://ees.elsevier.com/egy/automail_query.asp.

NOTE: Upon submitting your revised manuscript, please upload the source files for your article. For additional details regarding acceptable file formats, please refer to the Guide for Authors at: http://www.elsevier.com/journals/energy/0360-5442/guide-for-authors

When submitting your revised paper, we ask that you include the following items:

Manuscript and Figure Source Files (mandatory):

We cannot accommodate PDF manuscript files for production purposes. We also ask that when submitting your revision you follow the journal formatting guidelines. Figures and tables may be embedded within the source file for the submission as long as they are of sufficient resolution for Production. For any figure that cannot be embedded within the source file (such as *.PSD Photoshop files), the original figure needs to be uploaded separately. Refer to the Guide for Authors for additional information. http://www.elsevier.com/journals/energy/0360-5442/guide-for-authors

Highlights (mandatory):

Highlights consist of a short collection of bullet points that convey the core findings of the article and should be submitted in a separate file in the online submission system. Please use ‘Highlights’ in the file name and include 3 to 5 bullet points (maximum 85 characters, including spaces, per bullet point). See the following website for more information

Data in Brief (optional):

We invite you to convert your supplementary data (or a part of it) into a Data in Brief article. Data in Brief articles are descriptions of the data and associated metadata which are normally buried in supplementary material. They are actively reviewed, curated, formatted, indexed, given a DOI and freely available to all upon publication. Data in Brief should be uploaded with your revised manuscript directly to Energy. If your Energy research article is accepted, your Data in Brief article will automatically be transferred over to our new, fully Open Access journal, Data in Brief, where it will be editorially reviewed and published as a separate data article upon acceptance. The Open Access fee for Data in Brief is $500.

Please just fill in the template found here: http://www.elsevier.com/inca/publications/misc/dib_data%20article%20template_for%20other%20journals.docx

Then, place all Data in Brief files (whichever supplementary files you would like to include as well as your completed Data in Brief template) into a .zip file and upload this as a Data in Brief item alongside your Energy revised manuscript. Note that only this Data in Brief item will be transferred over to Data in Brief, so ensure all of your relevant Data in Brief documents are zipped into a single file. Also, make sure you change references to supplementary material in your Energy manuscript to reference the Data in Brief article where appropriate.

If you have questions, please contact the Data in Brief publisher, Paige Shaklee at dib@elsevier.com

Example Data in Brief can be found here: http://www.sciencedirect.com/science/journal/23523409

***

In order to give our readers a sense of continuity and since editorial procedure often takes time, we encourage you to update your reference list by conducting an up-to-date literature search as part of your revision.

On your Main Menu page, you will find a folder entitled “Submissions Needing Revision”. Your submission record will be presented here.

MethodsX file (optional)

If you have customized (a) research method(s) for the project presented in your Energy article, you are invited to submit this part of your work as MethodsX article alongside your revised research article. MethodsX is an independent journal that publishes the work you have done to develop research methods to your specific needs or setting. This is an opportunity to get full credit for the time and money you may have spent on developing research methods, and to increase the visibility and impact of your work.

How does it work?

1) Fill in the MethodsX article template: https://www.elsevier.com/MethodsX-template

2) Place all MethodsX files (including graphical abstract, figures and other relevant files) into a .zip file and

upload this as a ‘Method Details (MethodsX) ‘ item alongside your revised Energy manuscript. Please ensure all of your relevant MethodsX documents are zipped into a single file.

3) If your Energy research article is accepted, your MethodsX article will automatically be transferred to MethodsX, where it will be reviewed and published as a separate article upon acceptance. MethodsX is a fully Open Access journal, the publication fee is only 520 US$.

Questions? Please contact the MethodsX team at methodsx@elsevier.com. Example MethodsX articles can be found here: http://www.sciencedirect.com/science/journal/22150161

Include interactive data visualizations in your publication and let your readers interact and engage more closely with your research. Follow the instructions here: https://www.elsevier.com/authors/author-services/data-visualization to find out about available data visualization options and how to include them with your article.

MethodsX file (optional)

We invite you to submit a method article alongside your research article. This is an opportunity to get full credit for the time and money you have spent on developing research methods, and to increase the visibility and impact of your work. If your research article is accepted, your method article will be automatically transferred over to the open access journal, MethodsX, where it will be editorially reviewed and published as a separate method article upon acceptance. Both articles will be linked on ScienceDirect. Please use the MethodsX template available here when preparing your article: https://www.elsevier.com/MethodsX-template. Open access fees apply.

Reviewers’ comments:

Reviewer #1: The paper is, at least according to the title of the paper, and attempt to ‘comprehend energy efficiency’ at a macro-level and perhaps in relation to social structures. This is a potentially a topic of interest to the journal community. However and as presented, the paper is not ready for publication for the following reasons:

1. A long introduction details relationship and ‘depth of emotional entanglement between energy and social structures’ and concomitant stereotypes, the issue addressed by numerous authors. What the Introduction does not show is the summary of the problem which comes out of the review and which is consequently addressed by the paper: this has to be presented in a clear and articulated way and strongly linked with the rest of the paper. In simplest approach, the paper does demonstrate why are stereotypes problematic. In the same context, it appears that proposed methodology heavily relays on MuSIASEM methodology which the journal community is not necessarily familiar with and hence has to be explained, at least to the level used in this paper and to make the paper sufficiently standalone;

2. Assumptions used in formulating the model have to be justified in terms what and how they affect understanding of link/interaction between social structures and function of energy (generation/use) and also why are assumptions formulated in the first place. Also, it is important here to explicitly articulate what is aimed to achieve with the proposed model: as presented this somewhat comes clear only towards the end of the paper. More fundamental question is what is the difference between model presented here and in other publications by the author: these have to be clearly explained.

3. The presented empirical tests and concomitant results are again detached from reality for i) the problem is not explicitly formulated, and ii) real-life interpretation of results are not clear.

On the practical side, the paper needs:

1. To conform to style of writing adopted by the journal, including referencing;

2. All figures have to have captions and to be referred to by it;

3. English needs improvement.

Reviewer #2: Please find the attached file.

Reviewer #3: The article has a cognitive value. The author has made a deep analysis of literature. Methodologically, the article does not raise any objections. However, getting acquainted with its content, I wonder why the analysis does not take into account changes in legal provisions. In the countries of the European Union, energy efficiency is one of the pillars of shaping energy policy. Does this variable have no impact on improving energy efficiency?

When reading an article, one gets the impression that the author has prepared it for editing in another journal. Editing it is incorrect! Line 13, page 10, error – unwanted semicolon.

Now, A FIRST SKETCH OF MY REVISION.

There are the general, structural suggestions from the editors, notably to outline my method of research, and to discuss my data, in separate papers. After that come the critical remarks proper, with a focus on explaining clearly – more clearly than I did in the manuscript – the assumptions of my model, as well as its connections with the MuSIASEM model. I start with my method, and it is an interesting exercise in introspection. I did the empirical research quite a few months ago, and now I need to look at it from a distance, objectively. Doing well at this exercise amounts, by the way, to phrasing my assumptions accurately. I start with my fundamental variable, the so-called energy efficiency, measured as the value of real output (i.e. the value of goods and services produced) per unit of energy consumed, the latter measured in kilograms of oil equivalent. In short: energy efficiency = GDP / energy consumed.

In my mind, that coefficient is actually a coefficient of coefficients, more specifically: GDP / energy consumed = [GDP per capita] / [consumption of energy per capita] = [GDP / population] / [energy consumed / population]. Why so? Well, I assume that when any of us, humans, wants to have a meal, we generally don’t put our fingers in the nearest electric socket. We consume energy indirectly, via the local combination of technologies. The same local combination of technologies makes our GDP. Energy efficiency measures two ends of the same technological toolbox: its intake of energy, and its outcomes in terms of goods and services. Changes over time in energy efficiency, as well as its disparity across space, depend on the unfolding of two distinct phenomena: the exact composition of that local basket of technologies, like the overall heap of technologies we have stacked up in our daily life, for one, and the efficiency of individual technologies in the stack, for two. Here, I remember a model I got to know in management science, precisely about how efficiency changes as new technologies supplant the older ones. Apparently, a freshly implemented, new technology is always less productive than the one it is kicking out of business. Only after some time, when people learn how to use that new thing properly, does it start yielding net gains in productivity. At the end of the day, when we change our technologies frequently, there could very well be no gain in productivity at all, as we are constantly going through consecutive phases of learning. Anyway, I see the coefficient of energy efficiency at any given time, in a given place, as the cumulative outcome of past collective decisions as for the repertoire of technologies we use.
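The arithmetic of that coefficient of coefficients can be sketched in a few lines of Python. The figures below are entirely made up for illustration; they are not real national accounts:

```python
# Hypothetical figures, for illustration only (not real data).
gdp = 450e9            # real output, in constant USD
energy_kg_oe = 90e9    # energy consumed, in kg of oil equivalent
population = 38e6

efficiency = gdp / energy_kg_oe                  # GDP per kg of oil equivalent
gdp_per_capita = gdp / population
energy_per_capita = energy_kg_oe / population

# The two factorisations coincide: GDP/E = (GDP/pop) / (E/pop)
assert abs(efficiency - gdp_per_capita / energy_per_capita) < 1e-9
print(efficiency)  # USD of output per kg of oil equivalent
```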

That is the first big assumption I make, and the second one comes from the factorisation: GDP / energy consumed = [GDP per capita] / [consumption of energy per capita] = [GDP / population] / [energy consumed / population]. I noticed a semi-intuitive, although not really robust, correlation between the two component coefficients. GDP per capita tends to be higher in countries with better developed institutions, which, in turn, tend to be better developed in the presence of a relatively high consumption of energy per capita. Mind you, it is quite visible cross-sectionally, when comparing countries, whilst not happening that obviously over time. If people in country A consume twice as much energy per capita as people in country B, those in A are very likely to have better developed institutions than folks in B. Still, if in either of the two places the consumption of energy per capita grows or falls by 10%, it does not automatically mean a corresponding increase or decrease in institutional development.

Partially wrapping up the above, I can see at least one main assumption in my method: energy efficiency, measured as GDP per kg of oil equivalent of energy consumed, is, in itself, a pretty foggy metric, arguably devoid of intrinsic meaning, and it becomes meaningful only as an equilibrium of two component coefficients, namely GDP per capita, for one, and energy consumption per capita, for two. Therefore, the very name ‘energy efficiency’ is problematic. If the vector [GDP; energy consumption] is really a local equilibrium, as I intuitively see it, then we need to keep in mind an old assumption of economic sciences: all equilibriums are efficient; this is basically why they are equilibriums. Further down this avenue of thinking, the coefficient of GDP per kg of oil equivalent shouldn’t even be called ‘energy efficiency’, or, just in order not to fall into pointless semantic bickering, we should take the ‘efficiency’ part into some sort of intellectual parentheses.

Now, I move to my analytical method. I accept as pretty obvious the fact that, at a given moment in time, different national economies display different coefficients of GDP per kg of oil equivalent consumed. This is coherent with the above-phrased claim that energy efficiency is a local equilibrium rather than a measure of efficiency strictly speaking. What gains in importance, with that intellectual stance, is the study of change over time. In the manuscript, I tested a very intuitive analytical method, based on a classical move, namely on using natural logarithms of empirical values rather than the empirical values themselves. Natural logarithms eliminate a lot of non-stationarity and noise in empirical data. A short reminder of what natural logarithms are is due at this point. Any number can be represented as a power of another number, like y = x^z, where ‘x’ is the base and ‘z’ is the exponent; equivalently, ‘x’ is the z-th root of ‘y’.

Some bases are special. One of them is the so-called Euler’s number, e = 2.718281828459…, the base of the natural logarithm. When we treat e ≈ 2.72 as the base of another number, the corresponding exponent z in y = e^z has interesting properties: it can be further decomposed as z = t*a, where t is the ordinal number of a moment in time, and a is basically a parameter. In a moment, I will explain why I said ‘basically’. The function y = e^(t*a) is called the exponential function and proves useful in studying processes marked by important hysteresis, i.e. when each consecutive step in the process depends very strongly on the cumulative outcome of previous steps, like y(t) depends on y(t – k). Compound interest is a classic example: when you save money for years, with annual compounding of interest, each consecutive year builds upon the interest accumulated in preceding years. If we represent the interest rate, classically, as ‘r’, the function y = x*e^(t*r) gives a good approximation of how much you can save out of an initial amount ‘x’, with annually compounded ‘r’, over ‘t’ years.
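That approximation can be checked numerically. A minimal sketch, with made-up figures:

```python
import math

principal = 1000.0  # initial savings, x
r = 0.05            # annual interest rate
t = 10              # years

# Annually compounded interest, the exact textbook formula:
exact = principal * (1 + r) ** t
# The exponential approximation y = x * e^(t*r):
approx = principal * math.exp(t * r)

print(round(exact, 2), round(approx, 2))
```

The approximation overshoots slightly, because e^(t*r) corresponds to continuous rather than annual compounding, yet the two figures stay in the same ballpark.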

A slightly different approach to the exponential function can be formulated, and this is what I did in the manuscript I am revising now, in front of your very eyes. The natural logarithm of energy efficiency, measured as GDP per kg of oil equivalent, can be considered a local occurrence of change with a strong component of hysteresis. The equilibrium of today depends on the cumulative outcomes of past equilibriums. In a classic exponential function, I would approach that hysteresis as y(t) = e^(t*a), with a being a constant parameter of the function. Yet, I can assume that ‘a’ is local instead of being general. In other words, what I did was y(t) = e^(t*a(t)), with a(t) being obviously t-specific, i.e. local. I assume that the process of change in energy efficiency is characterized by local magnitudes of change, the a(t)’s. That a(t) in y(t) = e^(t*a(t)) is slightly akin to the local first derivative, i.e. y’(t). The difference between the local a(t) and y’(t) is that the former is supposed to capture somewhat more accurately the hysteretic side of the process under scrutiny.

In typical econometric tests, the usual strategy is to start with the empirical values of the variables, transform them into their natural logarithms or into some sort of standardized values (e.g. standardized over their respective means, or their standard deviations), and then run linear regression on those transformed values. Another path of analysis consists in exponential regression, only there is a problem with this one: it is hard to establish a reliable method of transforming the empirical data. Running exponential regression on natural logarithms makes little sense, as natural logarithms are precisely the exponents of the exponential function, whence my intuitive willingness to invent a method sort of in between linear regression and the exponential one.
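The first path – log-transform, then linear regression – can be sketched with a toy series and ordinary least squares. The numbers are invented, and I use plain Python rather than an econometric package:

```python
import math

# Toy series growing roughly exponentially (illustrative, not real data).
y = [2.0, 2.4, 2.9, 3.5, 4.1, 5.0]
t = list(range(1, len(y) + 1))
ln_y = [math.log(v) for v in y]

# Ordinary least squares of ln(y) on t: ln(y) = b0 + b1*t
n = len(t)
mean_t = sum(t) / n
mean_ln = sum(ln_y) / n
b1 = (sum((ti - mean_t) * (li - mean_ln) for ti, li in zip(t, ln_y))
      / sum((ti - mean_t) ** 2 for ti in t))
b0 = mean_ln - b1 * mean_t

# b1 estimates the average exponential growth rate of the series
print(round(b0, 4), round(b1, 4))
```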

Once I assume that the local exponential coefficients a(t) in the exponential progression y(t) = e^(t*a(t)) have intrinsic meaning of their own, as local magnitudes of exponential change, an interesting analytical avenue opens up. For each set of empirical values y(t), I can construct a set of transformed values a(t) = ln[y(t)]/t. Now, when you think about it, the actual a(t) depends on how you count ‘t’, or, in other words, what calendar you apply. When I start counting time 100 years before the starting year of my empirical data, my a(t) will go like: a(t1) = ln[y(t1)]/101, a(t2) = ln[y(t2)]/102, etc. The denominator ‘t’ will change incrementally slowly. On the other hand, if I assume that the first year of whatever is happening is one year before my empirical time series starts, it is a different ball game. My a(t1) = ln[y(t1)]/1, my a(t2) = ln[y(t2)]/2, etc.; the incremental change in the denominator is much greater in this case. When I set my t0 at 100 years earlier than the first year of my actual data, thus t0 = t1 – 100, the resulting set of a(t) values transformed from the initial y(t) data simulates a secular, slow trend of change. On the other hand, setting t0 at t0 = t1 – 1 makes the resulting set of a(t) values reflect quick change, and the t0 = t1 – 1 moment is like a hypothetical shock, occurring just before the actual empirical data starts to tell its story.
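The two calendars can be sketched in a few lines. The numbers are toy values, and the `local_exponents` helper is my own illustrative construct, not code from the actual paper:

```python
import math

# Toy series of energy efficiency, y(t) (invented values).
y = [5.0, 5.2, 5.5, 5.9, 6.4]

def local_exponents(y, years_before_start):
    """a(t) = ln(y(t)) / t, with t counted from a hypothetical t0 placed
    'years_before_start' years before the first observation."""
    return [math.log(v) / (years_before_start + i)
            for i, v in enumerate(y, start=1)]

secular = local_exponents(y, 100)  # t0 = t1 - 100: denominators 101, 102, ...
shock = local_exponents(y, 0)      # t0 = t1 - 1: denominators 1, 2, 3, ...

# Under the 'shock' calendar the a(t) values fall steeply from one year
# to the next; under the secular calendar they barely move.
print(secular)
print(shock)
```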

Provisionally wrapping it up, my assumptions, and thus my method, consist in studying changes in energy efficiency as a sequence of equilibriums between relative wealth (GDP per capita), on the one hand, and consumption of energy per capita, on the other. The passage between equilibriums is a complex phenomenon, combining long-term trends with short-term ones.

I am introducing a novel angle of approach to an otherwise classic concept of economics, namely that of economic equilibrium. I claim that equilibriums are manifestations of collective intelligence in their host societies. In order to form an economic equilibrium, whether more local and Marshallian, or more general and Walrasian, a society needs institutions that assure collective learning through experimentation. It needs some kind of financial market, enforceable contracts, and institutions of collective bargaining. Small changes in energy efficiency come out of consistent, collective learning through those institutions. Big leaps in energy efficiency appear when the institutions of collective learning undergo substantial structural changes.

I am thinking about enriching the empirical part of my paper by introducing an additional demonstration of collective intelligence: a neural network, working with the same empirical data, with or without a so-called fitness function. I have that intuitive thought – although I don’t know yet how to get it across coherently – that neural networks endowed with a fitness function are good at representing collective intelligence in structured societies with relatively well-developed institutions.

I move on to my syllabuses for the coming academic year. Incidentally, at least one of the curriculums I am going to teach this fall fits nicely into the line of research I am pursuing now: collective intelligence and the use of artificial intelligence. I am developing the thing as an update on my blog, and I write it directly in English. The course is labelled “Behavioural Modelling and Content Marketing”. My principal goal is to teach students the mechanics of behavioural interaction between human beings and digital technologies, especially in social media, online marketing and content streaming. At my university, i.e. the Andrzej Frycz-Modrzewski Krakow University (Krakow, Poland), we have a general drill of splitting the overall goal of each course into three layers of expected didactic outcomes: knowledge, course-specific skills, and general social skills. The longer I do science and the longer I teach, the less I believe in the point of distinguishing knowledge from skills. Knowledge devoid of any skills attached to it is virtually impossible to check, and virtually useless.

As I think about it, I imagine many different teachers and many students. Each teacher follows some didactic goals. How do they match each other? They are bound to. I mean, the community of teachers, in a university, is a local social structure. We, teachers, have different angles of approach to teaching, and, of course, we teach different subjects. Yet, we all come from more or less the same cultural background. Here comes a quick glimpse of the literature I will be referring to when lecturing on ‘Behavioural Modelling and Content Marketing’: the article by Molleman and Gachter (2018[1]), entitled ‘Societal background influences social learning in cooperative decision making’, and another one, by Smaldino (2019[2]), under the title ‘Social identity and cooperation in cultural evolution’. Molleman and Gachter start from the well-known assumption that we, humans, largely owe our evolutionary success to our capacity for social learning and cooperation. They give an account of an experiment where Chinese people, assumed to be collectivist in their ways, are compared to British people, allegedly individualist as hell, in a social game based on dilemma and cooperation. It turns out the cultural background matters: success-based learning is associated with selfish behaviour, and majority-based learning can help foster cooperation. Smaldino goes down a more theoretical path, arguing that the structure of society shapes the repertoire of social identities available to Homo sapiens in a given place at a given moment, whence the puzzle of emergent, ephemeral groups as a major factor in human cultural evolution. When I decide to form, on Facebook, a group of people Not-Yet-Abducted-By-Aliens, is it a factor of cultural change, or rather an outcome thereof?

When I teach anything, what do I really want to achieve, and what does the conscious formulation of those goals have in common with the real outcomes I reach? When I use a scientific repository, like ScienceDirect, that thing learns from me. When I download a bunch of articles on energy, it suggests further readings along the same lines. It learns from the keywords I use in my searches, and from the journals I browse. You can even have a look at my recent history of downloads from ScienceDirect and form your own opinion about what I am interested in. Just CLICK HERE; it opens an Excel spreadsheet.

How can I know I taught anybody anything useful? A student might ask me: ‘Pardon me, sir, but why the hell should I learn all that stuff you teach? What’s the point? Why should I bother?’. Right you are, sir or miss, whatever gender you think you are. The point of learning that stuff… Think of some impressive human creation, like the Notre Dame cathedral, the Eiffel Tower, or that Da Vinci painting, Lady with an Ermine. Have you ever wondered how much work was put into those things? However big and impressive a cathedral is, it was built brick by f***ing brick. Whatever depth of colour we can see in a painting, it came out of dozens of hours spent on sketching, mixing paints, trying, cursing, and tearing down the canvas. This course and its contents are a small brick in the edifice of your existence. One more small story that adds to your individual depth as a person.

There is that thing at the very heart of behavioural modelling, and of social sciences in general. For lack of a better expression, I call it the Bignetti model. See, for example, Bignetti 2014[3], Bignetti et al. 2017[4], or Bignetti 2018[5] for more reading. Long story short, what professor Bignetti claims is that whatever happens in observable human behaviour, individual or collective, has already happened neurologically beforehand. Whatever we tweet or whatever we read, it is rooted in that wiring we have between the ears. The thing is that actually observing how that wiring works is still a bit burdensome. You need a lot of technology, and a controlled environment. Strangely enough, opening one’s skull and trying to observe the contents at work doesn’t really work. Reverse-engineered, the Bignetti model suggests that behavioural observation, and behavioural modelling, could be a good method to guess how our individual brains work together, i.e. how we are intelligent collectively.

I go back to the formal structure of the course, more specifically to goals and expected outcomes. I split: knowledge, skills, social competences. The knowledge, for one. I expect the students to develop an understanding of the following concepts: a) behavioural pattern b) social life as a collection of behavioural patterns observable in human beings c) behavioural patterns occurring as interactions of humans with digital technologies, especially with online content and online marketing d) modification of human behaviour as a response to online content e) the basics of artificial intelligence, like the weak law of large numbers or the logical structure of a neural network. As for the course-specific skills, I expect my students to sharpen their edge in observing behavioural patterns, and changes thereof, in connection with online content. When it comes to general social competences, I would like my students to make a few steps forward on two paths: a) handling projects and b) doing research. It logically implies that assessment in this course should and will be project-based. Students will be graded on the grounds of complex projects, covering the definition, observation, and modification of their own behavioural patterns occurring as interaction with online content.

The structure of an individual project will cover three main parts: a) description of the behavioural sequence in question b) description of the online content that allegedly impacts that sequence, and c) the study of behavioural changes occurring under the influence of online content. The scale of students’ grades is based on two component marks: the completeness of a student’s work, regarding (a) – (c), and the depth of research the given student has brought up to support his observations and claims. In the Polish academia, we typically use a grading scale from 2 (fail) all the way up to 5 (very good), passing through 3, 3+, 4, and 4+. As I see it, each student – or each team of students, as there will be a possibility to prepare the thing in a team of up to 5 people – will receive two component grades, like e.g. 3+ for completeness and 4 for depth of research, and that will give (3.5 + 4)/2 = 3.75 ≈ 4.0.
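The grading arithmetic above can be written down as a tiny helper. This is my own sketch; in particular, the tie-breaking rule – rounding a half-way average upwards, as in 3.75 ≈ 4.0 – is an assumption on my part:

```python
# Polish academic scale: 2 (fail), 3, 3+ (3.5), 4, 4+ (4.5), 5.
SCALE = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0]

def final_grade(completeness, depth):
    """Average the two component marks, then snap to the nearest
    grade on the scale, with ties going upwards."""
    avg = (completeness + depth) / 2
    return min(SCALE, key=lambda g: (abs(g - avg), -g))

print(final_grade(3.5, 4.0))  # (3.5 + 4)/2 = 3.75, rounded up to 4.0
```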

Such a project is typical research, whence the necessity to introduce students to the basic techniques of science. That comes as a bit of a paradox, as those students' major is Film and Television Production, thus a thoroughly practical one. Still, science serves in practical issues: this is something I deeply believe, and which I would like to teach my students. As I look upon those goals, and the method of assessment, a structure emerges as regards the plan of in-class teaching. At my university, the bulk of in-class interaction with students is normally spread over 15 lectures of 1.5 clock hours each, thus 30 hours in total. In some curriculums it is accompanied by so-called 'workshops' in smaller groups, with each such smaller group attending 7 – 8 sessions of 1.5 hours each. In this course, i.e. 'Behavioural Modelling and Content Marketing', I have just lectures in my schedule. Still, as I see it, I will need to do practical stuff with my youngsters. This is a good moment to demonstrate a managerial technique I teach in other classes, called 'regressive planning', which consists in taking the final goal I want to achieve, assuming it is the outcome of a sequence of actions, and then reverse-engineering that sequence. Sort of: 'what do I need to do if I want to achieve X at the end of the day?'.

If I want to have my students hand me good quality projects by the end of the semester, the last few classes out of the standard 15 should be devoted to discussing collectively the draft projects. Those drafts should be based on prior teaching of basic skills and knowledge, whence the necessity to give those students a toolbox, and provoke in them curiosity to rummage inside. All in all, it gives me the following, provisional structure of lecturing:

{input = 15 classes} => {output = good quality projects by my students}

{input = 15 classes} <=> {input = [10 classes of preparation >> 5 classes of draft presentations and discussion thereof]}

{input = 15 classes} <=> {input = [5*(1 class of mindfuck to provoke curiosity + 1 class of systematic presentation) + 5*(presentation + questioning and discussion)]}

As I see from what I have just written, I need to divide the theory accompanying this curriculum into 5 big chunks. The first of those 5 blocks needs to address the general frame of the course, i.e. the phenomenon of recurrent interaction between humans and online content. I think the most important fact to highlight is that algorithms of online marketing behave like salespeople crossed with very attentive servants, who try to guess one's whims and wants. It is a huge social change: it is, I think, the first time in human history when virtually every human with access to the Internet interacts with a form of intelligence that behaves like a butler, guessing the user's preferences. It is transformational for human behaviour, and in that first block I want to show my students how that transformation can work. The opening, mindfucking class will consist in a behavioural experiment along the lines of good, old role playing in psychology. I will demonstrate to my students how a human would behave if they wanted to emulate the behaviour of neural networks in online marketing. I will ask them questions about what they usually do, and about what they liked doing during the last few days, and I will guess their preferences on the grounds of their described behaviour. I will tell my students to observe that butler-like behaviour of mine and to pattern me. In the next step, I will ask students to play the same role, just for them to get the hang of how a piece of AI works in online marketing. The point of this first class is to define an expected outcome, like a variable, which neural networks attempt to achieve, in terms of human behaviour observable through clicking. The second, theoretical class of that first block will, logically, consist in explaining the fundamentals of how neural networks work, especially in online interactions with human users of online content.
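Just to make the butler metaphor tangible, the simplest possible form of such guessing can be put in a few lines of Python: count the user's past clicks per content category and predict the most frequent one. Real marketing algorithms are far more sophisticated, of course; the categories and the function name here are invented purely for illustration:

```python
from collections import Counter

def guess_preference(click_history):
    """Guess, butler-style, what the user wants next: the content
    category they have clicked on most often so far."""
    if not click_history:
        return None  # a butler with no observations has no guess
    counts = Counter(click_history)
    return counts.most_common(1)[0][0]

clicks = ["wildlife", "race cars", "wildlife", "wildlife", "cooking"]
print(guess_preference(clicks))  # "wildlife"
```

The role-playing exercise in class does exactly this, only with a human playing the part of the counter.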

I think in the second two-class block I will address the issue of behavioural patterns as such, i.e. what they are, and how we can observe them. I want the mindfuck class in this block to be intellectually provocative, and I think I will use role playing once again. I will ask my students to play roles of their choice, and I will discuss their performance from a specific angle: how do you know that your play is representative of this type of behaviour or person? What specific pieces of behaviour are, in your opinion, informative about the social identity of that role? Do other students agree that the type of behaviour played is representative of this specific type of person? The theoretical class in this block will be devoted to a systematic lecture on the basics of behaviourism. I guess I will serve my students some Skinner, and some Timberlake, namely Skinner's 'Selection by Consequences' (1981[6]), and Timberlake's 'Behavior Systems and Reinforcement' (1993[7]).

In the third two-class block I will return to interactions with online content. In the mindfuck class, I will make my students meddle with You Tube, and see how the list of suggested videos changes after we search for or click on specific content, e.g. how it will change after clicking on 5 documentaries about wildlife, or after searching for videos on race cars. In this class, I want my students to pattern the behaviour of You Tube. The theoretical class of this block will be devoted to the way those algorithms work. I think I will focus on a hardcore concept of AI, namely the Gaussian mixture. I will explain how crude observations on our clicking and viewing allow an algorithm to categorize us.
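To give a feel for what a Gaussian mixture does with such crude observations, here is a toy, pure-Python sketch: a two-component, one-dimensional mixture fitted with the EM algorithm to invented 'daily viewing time' data, separating 'casual' users from 'binge' users. This illustrates the concept only, certainly not You Tube's actual system:

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at point x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit_two_gaussians(data, iters=50):
    """Tiny EM algorithm for a two-component, 1-D Gaussian mixture."""
    mu = [min(data), max(data)]                  # crude initial means
    spread = (max(data) - min(data)) / 2 or 1.0  # wide initial deviations
    sigma = [spread, spread]
    weight = [0.5, 0.5]
    for _ in range(iters):
        # E-step: how responsible is each component for each observation?
        resp = []
        for x in data:
            p = [weight[k] * gaussian_pdf(x, mu[k], sigma[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate means, deviations and mixture weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-3)  # floor, to avoid collapse
            weight[k] = nk / len(data)
    return mu, sigma, weight

random.seed(42)
# Invented data: viewing minutes of 'casual' (~10 min) and 'binge' (~60 min) users.
data = [random.gauss(10, 2) for _ in range(100)] + [random.gauss(60, 5) for _ in range(100)]
mu, sigma, weight = fit_two_gaussians(data)
print(sorted(mu))  # roughly [10, 60]
```

The algorithm is never told there are two kinds of users; it discovers the two categories from the raw numbers alone, which is precisely the point I want to make in class.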

As we pass to the fourth two-class block, I will switch to the concept of collective intelligence, i.e. to how whole societies interact with various forms of online, interactive neural networks. The class devoted to intellectual provocation will be discursive. I will make students debate the following claim: 'The Internet and online content allow our society to learn faster and more efficiently'. There is, of course, a catch, and it is the definition of learning fast and efficiently. How do we know we are quick and efficient in our collective learning? What would slow and inefficient learning look like? How can we check the role of the Internet and online content in our collective learning? Can we apply John Stuart Mill's logical canons to that situation? The theoretical class in this block will be devoted to the phenomenon of collective intelligence in itself. I would like to work through two research papers devoted to online marketing, e.g. Fink et al. (2018[8]) and Takeuchi et al. (2018[9]), in order to show how online marketing unfolds into phenomena of collective intelligence and collective learning.

Good, so I come to the fifth two-class block, the last one before the scheduled draft presentations by my students. It is the last teaching block before they present their projects, and I think it should bring them back to the root idea of those projects, i.e. to the idea of observing one's own behaviour when interacting with online content. The first class of the block, the one supposed to stir curiosity, could consist in two steps of brainstorming and discussion. Students assume the role of online marketers. In the first step, they define one or two typical interactions between human behaviour and the online content they communicate. We use the previously learnt theory to make both the description of behavioural patterns and that of online marketing coherent and state-of-the-art. In the next step, students discuss under what conditions they would behave according to those pre-defined patterns, and what conditions would make them diverge from those patterns and follow different ones. In the theoretical class of this block, I would like to discuss two articles which incite my own curiosity: 'A place for emotions in behavior systems research' by Gordon M. Burghardt (2019[10]), and 'Disequilibrium in behavior analysis: A disequilibrium theory redux' by Jacobs et al. (2019[11]).

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your suggestions on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Molleman, L., & Gächter, S. (2018). Societal background influences social learning in cooperative decision making. Evolution and Human Behavior, 39(5), 547-555.

[2] Smaldino, P. E. (2019). Social identity and cooperation in cultural evolution. Behavioural Processes. Volume 161, April 2019, Pages 108-116

[3] Bignetti, E. (2014). The functional role of free-will illusion in cognition: "The Bignetti Model". Cognitive Systems Research, 31, 45-60.

[4] Bignetti, E., Martuzzi, F., & Tartabini, A. (2017). A Psychophysical Approach to Test: "The Bignetti Model". Psychol Cogn Sci Open J, 3(1), 24-35.

[5] Bignetti, E. (2018). New Insights into “The Bignetti Model” from Classic and Quantum Mechanics Perspectives. Perspective, 4(1), 24.

[6] Skinner, B. F. (1981). Selection by consequences. Science, 213(4507), 501-504.

[7] Timberlake, W. (1993). Behavior systems and reinforcement: An integrative approach. Journal of the Experimental Analysis of Behavior, 60(1), 105-128.

[8] Fink, M., Koller, M., Gartner, J., Floh, A., & Harms, R. (2018). Effective entrepreneurial marketing on Facebook – A longitudinal study. Journal of Business Research.

[9] Takeuchi, H., Masuda, S., Miyamoto, K., & Akihara, S. (2018). Obtaining Exhaustive Answer Set for Q&A-based Inquiry System using Customer Behavior and Service Function Modeling. Procedia Computer Science, 126, 986-995.

[10] Burghardt, G. M. (2019). A place for emotions in behavior systems research. Behavioural Processes.

[11] Jacobs, K. W., Morford, Z. H., & King, J. E. (2019). Disequilibrium in behavior analysis: A disequilibrium theory redux. Behavioural Processes.

It might be a sign of narcissism

I am recapitulating once again. Two things are going on in my mind: science strictly spoken, and a technological project. As for science, I am digging around the hypothesis that we, humans, purposefully create institutions for experimenting with new technologies, and that the essential purpose of those institutions is to maximize the absorption of energy from the environment. I am obstinately turning around the possible use of artificial intelligence as a tool for simulating collective intelligence in human societies. As for technology, I am working on my concept of « Energy Ponds ». See my update entitled « The mind-blowing hydro » for the freshest developments on that point. So far, I have come to the conclusion that figuring out a viable financial scheme, which would allow local communities to own local projects and adapt them flexibly to local conditions, is just as important as working out the technological side. Oh, yes, and there is teaching, the third thing to occupy my mind. The new academic year starts on October 1st and I am already thinking about the stuff I will be teaching.

I think it is good to be honest about myself, and so I am trying to be: I have a limited capacity for multi-tasking. Even if I do a few different things at the same time, I need those things to be kind of convergent and similar. This is one of those moments when a written recapitulation of what I do serves me to put some order in what I intend to do. Actually, why not use one of the methods I teach my students in management classes? I mean, why not use some scholarly techniques of planning and goal setting?

Good, so I start. What do I want? I want a monograph on the application of artificial intelligence to study collective intelligence, with an edge towards practical use in management. I call it 'Monograph AI in CI – Management'. I want the manuscript to be ready by the end of October 2019. I want a monograph on the broader topic of technological change being part of human evolution, with the hypothesis mentioned in the preceding paragraph. This monograph, I give it a working title: 'Monograph Technological Change and Human Evolution'. I have no clear deadline for that manuscript. I want 2 – 3 articles on renewable energies and their applications. Same deadline as the first monograph: end of October 2019. I want to promote and develop my idea of "Energy Ponds", and that of local financial schemes for this type of project. I want to present this idea in at least one article, and in at least one public speech. I want to prepare syllabuses for teaching, centred, precisely, on the concept of collective intelligence, i.e. of social structures and institutions made for experimentation and learning. In practically each of the curriculums I teach, I want to go into the topic of collective learning.

How will I know I have what I want? This is a control question, forcing me to give precise form to my goals. As for monographs and articles, it is all about preparing manuscripts on time. Each monograph should be at least 400 pages, whilst each article should be some 30 pages long in manuscript form. That makes 460 – 490 pages to write (meaningfully, of course!) by the end of October, and at least 400 other pages to write subsequently. Of course, it is not just about hatching manuscripts: I need to have a publisher. As for teaching, I can assume that I am somehow prepared to deliver a given line of logic when I have a syllabus nailed down nicely. Thus, I need to rewrite my syllabuses no later than September 25th. I can evaluate progress in the promotion of my "Energy Ponds" concept by the feedback from the people whom I have informed, or will have informed, about it.

Right, the above is what I want technically and precisely, like in a nice schedule of work. Now, what do I really want? I am 51; with good health and common sense, I have some 24 – 25 productive years ahead. This is roughly the time that has passed since my son's birth. The boy is not a boy anymore, he is walking his own path, and what looms ahead of me is like my last big journey in life. What do I want to do with those years? I want to feel useful, very certainly. Yes, I think this is one clear thing about what I want: I want to feel useful. How will I know I am useful? Weeell, that's harder to tell. As I patiently follow the train of my thoughts, I think that I feel useful today when I can see that people around need me. On top of that, I want to be financially important and independent. Wealthy? Yes, but not for comfort as such. Right now, I am employed, and my salary is my main source of income. I perceive myself as dependent on my employer. I want to change that, so as to have substantial income (i.e. income greater than my current spending and thus allowing accumulation) from sources other than a salary. Logically, I need capital to generate that stream of non-wage income. I have some – an apartment for rent – but as I look at it critically, I would need at least 7 times more in order to have the rent-based income I want.

Looks like my initial, spontaneous thought of being useful means, after having scratched the surface, being sufficiently high in the social hierarchy to be financially independent, and able to influence other people. Anyway, as I have a look at my short-term goals, I ask myself how they bridge into my long-term goals. The answer is: they don't really connect, my short-term goals and the long-term ones. There are a lot of missing pieces. I mean, how does the fact of writing a scientific monograph translate into multiplying by seven my current equity invested in income-generating assets?

Now, I want to think a bit deeper about what I do now, and I want to discover two types of behavioural patterns. Firstly, there is probably something in what I do, which manifests some kind of underlying, long-term ambitions or cravings in my personality. Exploring what I do might be informative as for what I want to achieve in that last big lap of my life. Secondly, in my current activities, I probably have some behavioural patterns, which, when exploited properly, can help me in achieving my long-term goals.

What do I like doing? I like writing and reading about science. I like speaking in public, whether it is a classroom or a conference. Yes, it might be a sign of narcissism; still, it can be used to a good purpose. I like travelling, in moderate doses. Looks like I am made for being a science writer and a science speaker. That looks like some sort of intermediate goal, bridging from my short-term, scheduled achievements into the long-term, unscheduled ones. I do write regularly, especially on my blog. I speak regularly in classrooms, as my basic job is that of an academic teacher. What I do haphazardly, and what could bring me closer to achieving my long-term goals, would be to speak in other public contexts more frequently and sort of regularly, and, of course, to make money on it. By the way, as far as science writing and science speaking are concerned, I have a crazy idea: scientific stand-up. I am deeply fascinated with the art of some stand-up comedians: Bill Burr, Gabriel Iglesias, Joe Rogan, Kevin Hart or Dave Chappelle. Getting across deep, philosophical content about the human condition in the form of jokes, and making people laugh when thinking about those things, is an art I admire, and I would like to translate it somehow into the world of science. The problem is that I don't know how. I have never done any acting in my life, and I have never written nor participated in writing any jokes for stand-up comedy. As skillsets come, this is complete terra incognita to me.

Now, I jump to the timeline. I assume having those 24 years or so ahead of me. What then, I mean, when I hopefully reach 75 years of age? Now, I may shock some of my readers, but provisionally I label that moment, 24 years from now, as "the decision whether I should die". These last years, I have been asking myself how I would like to die. The question might seem stupid: nobody likes dying. Still, I have been asking myself this question. I am going into deep existential ranting, but I think what I think: when I compare my life with some accounts in historical books, there is one striking difference. When I read letters and memoirs of people from the 17th or 18th century, or even from the beginning of the 20th century, those ancestors of ours tended to ask themselves how worthy their life should be and how worthy their death should come. We tend to ask, most of all, how long we will live. When I think about it, that old attitude makes more sense. In the perspective of decades, planning for maxing out on existential value is much more rational than trying to max out on life expectancy as such. I guess we can have much more control over the values we pursue than over the duration of our life. I know that what I am going to say might sound horribly pretentious, but I think I would like to die like a Viking. I mean, not necessarily trying to kill somebody, just dying by choice, whilst still having the strength to do something important, and doing those important things. What I am really afraid of is slow death by instalments, when my flame dies out progressively, leaving me just weaker and weaker every month, whilst burdening other people with taking care of me.

I fix that provisional checkpoint at the age of 75, 24 years from now. An important note before I go further: I have not decided I will die at the age of 75. I suppose that would be as presumptuous as assuming to live forever. I just give myself a rationally grounded span of 24 years to live with enough energy to achieve something worthy. If I have more, I will just have more. Anyway, how much can I do in 24 years? In order to plan for that, I need to recapitulate how much I have been able to do so far, during an average year. A nicely productive year means 2 – 3 acceptable articles, accompanied by 2 – 3 equally acceptable conference presentations. On top of that, a monograph is conceivable in one year. As for teaching, I can realistically do 600 – 700 hours of public speech in one year. With that, I think I can nail down some 20 valuable meetings in business and science. In 24 years, I can thus write 24*550 = 13 200 pages, I can deliver 24*650 = 15 600 hours of public speech, and I can negotiate something in 480 meetings or so.

Now, as I talk about value, I can see there is something more far-reaching than what I have just named as my long-term goals. There are values which I want to pursue. I mean, saying that I want to die like a Viking and, at the same time, stating my long-term goals in life in terms of income and capital base: that sounds ridiculous. I know, I know: dying like a Viking, in the times of Vikings, meant very largely to pillage until the last breath. Still, I need values. I think the shortcut to my values is via my dreams. What are they, my dreams? Now, I make a sharp difference between dreams and fantasies. A fantasy is: a) essentially unrealistic, such as riding a flying unicorn, and b) involving just a small, relatively childish part of my personality. On the other hand, a dream – such as contributing to making my home country, Poland, go 100% off fossil fuels – is something that might look impossible to achieve, yet its achievement is a logical extension of my present existence.

What are they, my dreams? Well, I have just named one, i.e. playing a role in changing the energy base of my country. What else do I value? Family, certainly. I want my son to have a good life. I want to feel useful to other people (that was already in my long-term goals, and so I am moving it to the category of dreams and values). Another thing comes to my mind: I want to tell the story of my parents. Apparently banal – lots of people do it, or at least attempt to – and yet nagging as hell. My father died in February, and around the time of the funeral, as I was talking to family and friends, I discovered things about my dad which I had not had the faintest idea of. I started going through old photographs and old letters in a personal album I didn't even know he still had. Me and my father, we were not very close. There was a lot of bad blood between us. Still, it was my call to take care of him during the last 17 years of his life, and it was my call to care for what we call in Poland 'his last walk', namely that from the funeral chapel to the tomb properly spoken. I suddenly had a flash glimpse of the personal history, the rich, textured biography I had in front of my eyes, visible through old images and old words, all that against the background of the vanishing spark of life I could see in my father's eyes during his last days.

How will I know those dreams and values are fulfilled in my life? I can measure progress in my work on and around projects connected to new sources of energy. I can measure it by observing the outcomes. When things I work on get done, this is sort of tangible. As for being useful to other people, I go once again down the same avenue: to me, being useful means having an unequivocally positive impact on other people. Impact is important, and thus, in order to have that impact, I need some kind of leadership position. Looking at my personal life and at my dream to see my son having a good life, this comes as the hardest thing to gauge. It seems to be the (apparently) irreducible uncertainty in my perfect plan. Telling my parents' story: how will I prove to myself that I have told it? A published book? Maybe…

I sum it up, at least partially. I can reasonably expect to deliver a certain amount of work over the 24 years to come: approximately 13 200 pages of written content, 15 600 hours of public speech, and 450 – 500 meetings, until my next big checkpoint in life, at the age of 75. I would like to focus that work on building a position of leadership, in view of bringing some change to my own country, Poland, mostly in the field of energy. As the first stage is to build a good reputation as a science communicator, the leadership in question is likely to be rather a soft one. In that plan, two things remain highly uncertain. Firstly, how should I behave in order to be as good a human being as I possibly can? Secondly, what is the real importance of that telling-my-parents'-story thing in the whole plan? How important is it for my understanding of how to live well those 24 years to come? What fraction of those 13 200 written pages (or so) should refer to that story?

Now, I move towards collective intelligence, and to possible applications of artificial intelligence to study the collective one. Yes, I am a scientist, and yes, I can use myself as an experimental unit. I can extrapolate my personal experience as the incidence of something in a larger population. The exact path of that incidence can shape the future actions and structures of that population. Good, so now, there is someone – anyone, in fact – who comes and says to my face: 'Look, man, you're bullshitting yourself and people around you! Your plans look stupid, and if attitudes like yours spread, our civilisation will fall into pieces!'. Fair enough, that could be a valid point. Let's check. According to data published by the Central Statistical Office of the Republic of Poland, in 2019 there are n = 462 390 people in Poland aged 51, like me: 230 370 of them men, and 232 020 women. I assume that attitudes such as my own, expressed in the preceding paragraphs, are one type among many occurring in that population of 51-year-old Poles. People have different views on life and other things, so to say.

Now, I hypothesise in two opposite directions. In Hypothesis A, I state that just some among those different attitudes make any sense, and that there is a hypothetical distribution of those attitudes in the general population which yields the best social outcomes whilst eliminating early on all nonsense attitudes from the social landscape. In other words, some worldviews are so dysfunctional that they had better disappear quickly and be supplanted by the more sensible ones. Going even deeper, it means that quantitative distributions of attitudes in the general population fall into two classes: completely haphazard, existential accidents without much grounds for staying in existence, on the one hand, and the sensible and functional ones, which can be sustained with benefit to all, on the other hand. In Hypothesis ~A, i.e. the opposite of A, I speculate that the observed diversity in attitudes is a phenomenon in itself and does not really reduce to any hypothetically better one. It is the old argument in favour of diversity. Old as it is, it has old mathematical foundations, and, interestingly, it is one of the cornerstones of what we call today Artificial Intelligence.

In Vapnik & Chervonenkis 1971[1], a paper reputed to be kind of seminal for today's AI, I found a reference to the classical Bernoulli theorem, known also as the weak law of large numbers: the relative frequency of an event A in a sequence of independent trials converges (in probability) to the probability of that event. Please note that roughly the same can be found in the so-called Borel's law of large numbers, named after Émile Borel. It is deep maths: each phenomenon bears a given probability of happening, and this probability is sort of sewn into the fabric of reality. The empirically observable frequency of occurrence is always an approximation of this quasi-metaphysical probability. That goes a bit against the way probability is taught at school: it is usually about a coin – or a dice – being tossed many times, etc., which implies that probability exists at all only as long as there are things actually happening. No happening, no probability. Still, if you think about it, there is a reason why those empirically observable frequencies tend to be recurrent, and the reason is precisely that underlying capacity of the given phenomenon to take place.
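The weak law of large numbers is easy to watch at work in a few lines of Python: fix an underlying probability, simulate independent trials, and observe the relative frequency creep towards that probability as the number of trials grows. The probability 0.3 below is invented purely for illustration:

```python
import random

def relative_frequency(p, n, seed=123):
    """Simulate n independent trials of an event with probability p
    and return the observed relative frequency of that event."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.random() < p)
    return hits / n

# The observed frequency approaches the underlying probability as n grows.
for n in (10, 1000, 100000):
    print(n, relative_frequency(0.3, n))
```

For small n the frequency can land anywhere; by n = 100 000 it hugs 0.3 closely, which is exactly the convergence the theorem states.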

Basic neural networks, the perceptron-type ones, experiment with weights attributed to input variables, in order to find a combination of weights which allows the perceptron to get as close as possible to a target value. You can find descriptions of that procedure in « Thinking Poisson, or 'WTF are the other folks doing?' », for example. Now, we can shift our perspective a little bit and assume that what we call 'weights' of input variables are probabilities that the phenomenon denoted by the given variable happens at all. A vector of weights attributed to input variables is a collection of probabilities. Walking down this avenue of thinking leads me precisely to Hypothesis ~A, presented a few paragraphs ago. Attitudes congruous with that very personal confession of mine, developed even more paragraphs ago, have an inherent probability of happening, and the more we experiment, the closer we can get to that probability. If someone tells me to my face that I'm an idiot, I can reply that: a) any worldview has an idiotic side, no worries, and b) my particular idiocy is representative of a class of idiocies, for which, in turn, civilisation needs to figure out something clever for the next few centuries.
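That weight-adjusting procedure can be sketched in a few lines of Python: a single artificial neuron nudging its weights, in proportion to its error, until its output matches a target behaviour. The toy data below – a 'user' who clicks only when both features are present – and the learning rate are my invented illustration, not a description of any real system:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge each weight in
    proportion to the error until outputs match the targets."""
    w = [0.0, 0.0, 0.0]  # two input weights + one bias weight
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] >= 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err  # the bias input is fixed at 1
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] >= 0 else 0

# Toy target: the 'user clicks' (1) only when both features are present.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(samples)
print([predict(w, a, b) for (a, b), _ in samples])  # [0, 0, 0, 1]
```

The learned weights can then be read, in the spirit of the paragraph above, as the neuron's working estimate of how much each input phenomenon 'matters', i.e. how probable it is to drive the outcome.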



[1] Vapnik, V. N., & Chervonenkis, A. Y. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 16(2), 264-280.

The mind-blowing hydro

My editorial on You Tube

There is that thing about me: I am a strange combination of consistency and ADHD. If you have ever read one of Terry Pratchett's novels from the 'Discworld' series, you probably know the imaginary characters of golems: made of clay, with a logical structure – a 'chem' – put in their heads, they can work on something endlessly. In my head, there are chems which just push me to do things over and over and over again. Writing and publishing on this research blog is very much in those lines. I can stop whenever I want, I just don't want to right now. Yet, when I do a lot about one chem, I start craving another one, somewhere nearby but not quite in the same intellectual location.

Right now, I am working on two big things. Firstly, I feel like drawing a provisional bottom line under those two years of science writing on my blog. Secondly, I want to put together an investment project that would help my city, my country and my continent, that is Krakow, Poland, and Europe, to face one of the big challenges resulting from climate change: water management. Interestingly, I started to work on the latter first, and only then did I begin to phrase out the former. Let me explain. As I work on that project of water management, which I have provisionally named « Energy Ponds » (see, for example, « All hope is not lost: the countryside is still exposed »), I use the « Project Navigator », made available by the courtesy of the International Renewable Energy Agency (IRENA). The logic built into the « Project Navigator » makes me return, over and over again, to one central question: ‘You, Krzysztof Wasniewski, with your science and your personal energy, how are you aligned with that idea of yours? How can you convince other people to put their money and their personal energy into developing your concept?’.

And so I am asking myself: ‘What’s your science, bro? What can you get people interested in, with rational grounds and intelligible evidence?’.

As I think about it, my first basic claim is that we can do it together in a smart way. We can act as a collective intelligence. This statement can be considered a manifestation of the so-called “Bignetti model” in cognitive sciences (Bignetti 2014[1]; Bignetti et al. 2017[2]; Bignetti 2018[3]): for the last two years, I have been progressively centering my work around the topic of collective intelligence, without even being quite aware of it. As I was working on another book of mine, entitled “Capitalism and Political Power”, I came across a puzzling quantitative fact: as a civilization, we have more and more money per unit of real output[4], and, as I reviewed some literature, we seem not to understand why that is happening. Some scholars complain about the allegedly excessive ‘financialization of the economy’ (Krippner 2005[5]; Foster 2007[6]; Stockhammer 2010[7]), yet, besides easy generalizations about ‘greed’ or an ‘unhinged race for profit’, no scientifically coherent explanation of this phenomenon is on offer.

As I was trying to understand this phenomenon, shades of correlations came into my focus. I could see, for example, that the growing amount of money per unit of real output has been accompanied by a growing amount of energy consumed per person per year, in the global economy[8]. Do we convert energy into money, or the other way around? How can that be happening? In 2008, the proportion between the global supply of broad money and the global real output passed the magical threshold of 100%. Intriguingly, the same year, the share of urban population in the total human population passed the threshold of 50%[9], and the share of renewable energy in the total final consumption of energy, at the global scale, took off for the first time since 1999, and has kept growing since[10]. I started having that diffuse feeling that, as a civilization, we are really up to something, right now, and money is acting like a social hormone, facilitating change.

We change as we learn, and we learn as we experiment with the things we invent. How can I represent, in a logically coherent way, collective learning through experimentation? When an individual, or a clearly organized group, learns through experimentation, the sequence is pretty straightforward: we phrase out an intelligible definition of the problem to solve, we invent various solutions, we test them, we sum up the results, we select the seemingly best solution among those tested, and we repeat the whole sequence. As I kept digging into the topic of energy, technological change, and the velocity of money, I started formulating the outline of a complex hypothesis: what if we, humans, are collectively intelligent about building, purposefully and semi-consciously, social structures supposed to serve as vessels for future collective experiments?
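The sequence above (define the problem, invent solutions, test them, select the best, repeat) can be reduced to a toy loop; the numeric ‘problem’ of hitting a target value, and the random search over candidates, are purely illustrative assumptions of mine:

```python
import random

def experiment_cycle(target, rounds=200):
    """Learning through experimentation, reduced to a toy:
    the problem is defined as hitting a target value; each round
    invents a candidate solution, tests it, and keeps the best one."""
    best, best_error = None, float("inf")
    for _ in range(rounds):
        candidate = random.uniform(0.0, 10.0)   # invent a solution
        error = abs(target - candidate)          # test it, sum up the result
        if error < best_error:                   # select the seemingly best one
            best, best_error = candidate, error
    return best

solution = experiment_cycle(target=4.2)
```

Each pass through the loop is one experiment; the collective version would simply run many such loops in parallel and pool the results.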

My second claim is that one of the smartest things we can do about climate change is, besides reducing our carbon footprint, to take proper care of our food and energy base. In Europe, climate change is mostly visible as a complex disruption to our water system, and we can observe it in our local rivers. That’s the thing about Europe: we have built our civilization, on this tiny, mountainous continent, in close connection with rivers. Right, I could call them, scientifically, ‘inland waterways’, but I think that when I say ‘river’, anybody who reads it understands intuitively. Anyway, what we call today ‘the European heritage’ has grown next to EVENLY FLOWING rivers. Once again: evenly flowing. It means that we, Europeans, are used to seeing the neighbouring river as a steady flow. Streams and creeks can overflow after heavy rains, and rivers can swell, but all that stuff has been happening, for centuries, very recurrently.

Now, with the advent of climate change, we can observe three water-related phenomena. Firstly, as the English saying goes, it never rains but it pours. The steady rhythm and predictable volume of precipitation we are used to in Europe (mostly in its Northern part) progressively gives way to sudden downpours, interspersed with periods of drought that are hardly predictable in their length. First moral of the fairy tale: if we have less and less of the kind of water that falls from the sky slowly and predictably, we need to learn how to capture and retain the kind of water that falls abruptly, unscheduled. Secondly, just as we have somehow adapted to the new kind of sudden floods, we have a big challenge ahead: droughts are already impacting, directly and indirectly, the food market in Europe, but we don’t yet have enough science to predict accurately either their occurrence or their local impact. Yet, there is already one emerging pattern: whatever happens, i.e. floods or droughts, rural populations in Europe suffer more than urban ones (see my review of literature in « All hope is not lost: the countryside is still exposed »). Second moral of the fairy tale: whatever we do about water management in these new conditions, in Europe, we need to take care of agriculture first, and thus to create new infrastructures so as to shield farms against floods and droughts, with cities coming next in line.

Thirdly, the most obviously observable manifestation of floods and droughts is variation in the flow of local rivers. By the way, that variation is already impacting the energy sector: when there is too little flow in European rivers, we need to scale down the output of power plants, as they do not have enough water to cool themselves. Rivers are drainpipes of the neighbouring land. Steady flow in a river is closely correlated with a steady level of water in the ground, both in the soil and in the mineral layers underneath. Third moral of the fairy tale: if we figure out workable ways of retaining as much rainfall in the ground as possible, we can prevent all three disasters at the same time, i.e. local floods, droughts, and economically adverse variations in the flow of local rivers.

I keep thinking about that ownership-of-the-project thing I need to cope with when using the « Project Navigator » by IRENA. How to make local communities own, as much as possible, both the resources needed for the project and its outcomes? Here, precisely, I need to use my science, whatever it is. People at IRENA have experience with such projects, which I haven’t. I need to squeeze my brain and extract thereof any useful piece of coherent understanding, to replace experience. I am advancing step by step. I intuitively associate ownership with property rights, i.e. with a set of claims on something – things or rights – together with a set of liberties of action regarding the same things or rights. Ownership on the part of a local community means that claims and liberties should be sort of pooled, and the best idea that comes to my mind is an investment fund. Here, a word of explanation is due: an investment fund is a general concept, whose actual, institutional embodiment can take the shape of an investment fund in the strict sense, for one, yet other legal forms are possible as well, such as a trust, a joint stock company, a crowdfunding platform, or even a cryptocurrency operating in a controlled network. The general concept of an investment fund consists in taking a population of investors and making them pool their capital resources over a set of entrepreneurial projects, via the general legal construct of participatory titles: equity-based securities, debt-based ones, insurance, futures contracts, and combinations thereof. Mind you, governments are investment funds too, as regards their capacity to move capital around. They somehow express the interest of their respective populations in a handful of investment projects, they take those populations’ tax money and spread it among said projects. That general concept of an investment fund is a good expression of collective intelligence.
As for that social structure for collective experimentation, which I mentioned a few paragraphs ago, an investment fund is an excellent example. It allows spreading resources over a number of ventures considered as local experiments.

Now, I am dicing a few ideas for a financial scheme, based on the general concept of an investment fund, as collectively intelligent as possible, in order to face the new challenges of climate change through new infrastructures for water management. I start with reformulating the basic technological concept. Water-powered pumps are immersed in the stream of a river. They use the kinetic energy of that stream to pump water up and further away, more specifically into elevated water towers, from which that water falls back to ground level; as it flows down, it powers relatively small hydroelectric turbines, and it ends up in a network of ponds, vegetal complexes and channel-like ditches, all of it made with the purpose of retaining as much water as possible. Those structures can be connected to others, destined directly to capture rainwater. I have been thinking about two setups, respectively for rural environments and for urban ones. In the rural landscape, those ponds and channels can be profiled so as to collect rainwater from the surface of the ground and conduct it into its deeper layers, through some system of inverted draining. I think it would be possible, under proper geological conditions, to reverse-drain rainwater into deep aquifers, which the neighbouring artesian wells can tap into. In the urban context, I would like to know more about those Chinese technologies used in their Sponge Cities programme (see Jiang et al. 2018[11]).

The research I have done so far suggests that relatively small, local projects work better for implementing this type of technology than big, national-scale endeavours. Of course, national investment programmes will be welcome as indirect support, but at the end of the day we need a local community owning a project, possibly through an investment-fund-like institutional arrangement. The economic value conveyed by any kind of participatory title in such a capital structure sums up to the Net Present Value of three cash flows: net proceeds from selling hydroelectricity produced in small water turbines, reduction of the aggregate flood-related risk, and reduction of the drought-related risk. I separate risks connected to floods from those associated with droughts, as they are different in nature. In economic and financial terms, floods are mostly a menace to property, whilst droughts materialize as more volatile prices of food and basic agricultural products.

In order to apprehend accurately the Net Present Value of any cash flow, we need to set a horizon in time. Very tentatively, by interpreting data from 2012, presented in a report published by IRENA (the same IRENA), I assume that relatively demanding investors in Europe expect to have a full return on their investment within 6.5 years, which I round up to 7 years, for the sake of simplicity. Now, I go a bit off the beaten tracks, at least those I have beaten so far. I am going to take the total atmospheric precipitation falling on various European countries, which means rainfall plus snowfall, and then try to simulate what amount of ‘NPV = hydroelectricity + reduction of risk from floods and droughts’, over 7 years, the retention of that water could represent.
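A hedged sketch of that NPV logic, with the 7-year horizon from the paragraph above; the yearly euro amounts of the three cash flows, and the 5% discount rate, are purely hypothetical placeholders of mine:

```python
def npv(cash_flows, discount_rate):
    """Net Present Value of a sequence of annual cash flows, years 1..n."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

YEARS = 7  # the investors' expected payback horizon, rounded up from 6.5

# The three cash flows named in the text; the amounts are invented:
hydro_sales      = [30_000] * YEARS  # net proceeds from selling hydroelectricity
flood_risk_cut   = [15_000] * YEARS  # reduction of the aggregate flood-related risk
drought_risk_cut = [10_000] * YEARS  # reduction of the drought-related risk

annual = [h + f + d for h, f, d in
          zip(hydro_sales, flood_risk_cut, drought_risk_cut)]
project_value = npv(annual, discount_rate=0.05)
```

With constant annual flows, the loop reproduces the standard annuity formula; discounting is what makes the 7-year horizon matter, since a euro of avoided flood damage in year 7 is worth less today than one in year 1.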

Let’s waltz. I take data from FAOSTAT regarding precipitation and water retention. As a matter of fact, I made a query of that data regarding a handful of European countries. You can have a look at the corresponding Excel file UNDER THIS LINK. I rearranged the data from this Excel file a bit, so as to have a better idea of what could happen if those European countries I have on my list, my native Poland included, built infrastructures able to retain 2% of the annual rainfall. The coefficient of 2% is vaguely based on what Shao et al. (2018[12]) give as the target retention coefficient for the city of Xiamen, China, and their Sponge-City-type investment. I used the formulas I had already phrased out in « Sponge Cities », and in « La marge opérationnelle de $1 539,60 par an par 1 kilowatt », to estimate the amount of electricity possible to produce out of those 2% of annual rainfall elevated, according to my idea, into 10-metre-high water towers. On top of all that, I added, for each country, data regarding the already existing capacity to retain water. All those rearranged numbers, you can see them in the Excel file UNDER THIS OTHER LINK (a table would be too big to insert into this update).
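The physics behind that estimate can be reproduced, in rough outline, as the potential energy of water elevated into 10-metre towers. The 75% turbine efficiency and the round figures for Poland’s precipitation below are my own assumptions for illustration, not numbers from the FAOSTAT query or the Excel files linked above:

```python
RHO_WATER = 1000.0  # density of water, kg per m3
G = 9.81            # gravitational acceleration, m per s2

def hydro_energy_kwh(volume_m3, head_m=10.0, efficiency=0.75):
    """Potential energy of a volume of water at a given head,
    converted to kWh; the 75% efficiency is an assumed figure."""
    joules = RHO_WATER * G * head_m * volume_m3 * efficiency
    return joules / 3.6e6  # 1 kWh = 3.6 million joules

# Illustrative round figures for Poland: roughly 600 mm of
# precipitation per year over about 312,000 km2 of territory.
annual_precipitation_m3 = 0.6 * 312_000 * 1e6   # about 1.87e11 m3
retained_m3 = 0.02 * annual_precipitation_m3     # the 2% retention target
energy_kwh = hydro_energy_kwh(retained_m3)
```

Even with these rough inputs the result lands in the tens of gigawatt-hours per year, which gives a sense of why the country-scale numbers look so striking.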

The first provisional conclusion I have to make is that I need to completely revise my earlier conclusion from « Sponge Cities », where I claimed that hydroelectricity would have no chance to pay for any significant investment in sponge-like structures for retaining water. The calculations I have just run show just the opposite: as soon as we consider whole countries as rain-retaining basins, the hydroelectric power, and the cash flow dormant in that water, is just mind-blowing. I think I will need a night of sleep just to check the accuracy of my calculations.

Disturbing as they are, my calculations have another facet. I compare the postulated 2% retention of annual precipitation with the already existing capacity of these national basins to retain water. That capacity is measured, in that second Excel file, by the ‘Coefficient of retention’, which divides the ‘Total internal renewable water resources (IRWR)’ by the annual precipitation, both in 10^9 m3/year. My basic observation is that, across European countries, the capacity to retain water shows a disparity very similar to that of the intensity of precipitation, measured in mm per year. Both coefficients vary in a similar proportion, i.e. their respective standard deviations make around 0.4 of their respective means, across the sample of 37 European countries. When I measure it with the Pearson coefficient of correlation between the intensity of rainfall and the capacity to retain it, it yields r = 0.63. In general, the more water falls from the sky per 1 m2, the greater the percentage of that water retained, as it seems. Another provisional conclusion I make is that the capacity to retain water, in a given country, is some kind of response, possibly both natural and man-engineered, to a relatively big amount of water falling from the sky. It looks as if our hydrological structures, in Europe, had been built to do something with the water we momentarily have plenty of, possibly even too much of, and which we should save for later.
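The two statistics used above, the standard deviation divided by the mean and Pearson’s r, can be computed like this; the five country figures below are toy stand-ins for the FAOSTAT query, not the real data:

```python
from statistics import mean, pstdev

def coeff_of_variation(xs):
    """Standard deviation expressed as a fraction of the mean."""
    return pstdev(xs) / mean(xs)

def pearson_r(xs, ys):
    """Pearson coefficient of correlation between two samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Toy stand-ins for the country data, NOT the real FAOSTAT figures:
rainfall_mm = [600, 850, 1100, 500, 1400]     # intensity of precipitation
retention   = [0.30, 0.42, 0.55, 0.25, 0.60]  # coefficient of retention
cv_rain = coeff_of_variation(rainfall_mm)
r = pearson_r(rainfall_mm, retention)
```

On these invented numbers the coefficient of variation happens to come out near 0.4 as well, but that is a property of the toy data, not evidence for the claim in the text.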



[1] Bignetti, E. (2014). The functional role of free-will illusion in cognition:“The Bignetti Model”. Cognitive Systems Research, 31, 45-60.

[2] Bignetti, E., Martuzzi, F., & Tartabini, A. (2017). A Psychophysical Approach to Test:“The Bignetti Model”. Psychol Cogn Sci Open J, 3(1), 24-35.

[3] Bignetti, E. (2018). New Insights into “The Bignetti Model” from Classic and Quantum Mechanics Perspectives. Perspective, 4(1), 24.

[4] https://data.worldbank.org/indicator/FM.LBL.BMNY.GD.ZS last access July 15th, 2019

[5] Krippner, G. R. (2005). The financialization of the American economy. Socio-economic review, 3(2), 173-208.

[6] Foster, J. B. (2007). The financialization of capitalism. Monthly Review, 58(11), 1-12.

[7] Stockhammer, E. (2010). Financialization and the global economy. Political Economy Research Institute Working Paper, 242, 40.

[8] https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE last access July 15th, 2019

[9] https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS last access July 15th, 2019

[10] https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS last access July 15th, 2019

[11] Jiang, Y., Zevenbergen, C., & Ma, Y. (2018). Urban pluvial flooding and stormwater management: A contemporary review of China’s challenges and “sponge cities” strategy. Environmental science & policy, 80, 132-143.

[12] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

All hope is not lost: the countryside is still exposed

My editorial on YouTube

I am focusing on the possible benefits of transforming urban structures of at least some European cities into sponge-like structures, such as described, for example, by Jiang et al. (2018) as well as in my recent updates on this blog (see Sponge Cities). In parallel to reporting my research on this blog, I am developing a corresponding project with the « Project Navigator », made available by the courtesy of the International Renewable Energy Agency (IRENA). Figuring out my way through the « Project Navigator » made me aware of the importance that social cohesion has in the implementation of such infrastructural projects. Social cohesion means a set of common goals, and an institutional context that allows the appropriation of outcomes. In « Sponge Cities », when studying the case of my hometown, Krakow, Poland, I came to the conclusion that sales of electricity from water turbines incorporated into the infrastructure of a sponge city could hardly pay for the investment needed. On the other hand, a significant reduction of the financially quantifiable risk connected to floods and droughts can be an argument. The flood-related risks especially, in Europe, already amount to billions of euros, and we seem to be just at the beginning of the road (Alfieri et al. 2015[1]). Shielding against such risks can possibly make a sound base for social cohesion, as a common goal. Hence, as I am structuring the complex concept of « Energy Ponds », I start by assessing risks connected to climate change in European cities, and the possible reduction of those risks through sponge-city-type investments.

I start with a comparative review of Alfieri et al. (2015[2]) as regards flood-related risks, on the one hand, and Naumann et al. (2015[3]) as well as Vogt et al. (2018[4]) regarding drought-related risks, on the other. As a society, in Europe, we seem to be more at home with floods than with droughts. The former are something we kind of know historically, and with the advent of climate change we just acknowledge more trouble in that department, whilst the latter had been, until recently, something that happens essentially to other people on other continents. The very acknowledgement of droughts as a recurrent risk is a challenge.

Risk is a quantity: this is what I teach my students. It is the probability of occurrence multiplied by the magnitude of damage, should the s**t really hit the fan. Why adopt such an approach? Why not assume that risk is just the likelihood of something bad happening? Well, because risk management is practical. There is only a point in bothering about risk if we can do something about it: insure and cover, hedge, prevent etc. The interesting thing about it is that all human societies show a recurrent pattern: as soon as we organise somehow, we create something like a reserve of resources, supposed to provide for risk. We are exposed to a possible famine? Good, we make a reserve of food. We risk being invaded by a foreign nation/tribe/village/alien civilisation? Good, we make an army, i.e. a group of people, trained and equipped for actions with no immediate utility, just in case. The nearby river can possibly overflow? Good, we dig and move dirt, stone, wood and whatnot so as to build stopbanks. In each case, we move along the same path: we create a pooled reserve of something, in order to minimize the long-term damage from adverse events.

Now, if we wonder how much food we need to have in stock in case of famine, sooner or later we come to the conclusion that it is the individual need for food multiplied by the number of people likely to be starving. That likelihood is not evenly distributed across the population: some people are more exposed than others. A farmer, with a few pigs and some potatoes in cultivation, is less likely to be starving than a stonemason, busy building something and not having the time or energy to care about producing food. Providing for the risk of flood works according to the same scheme: some structures and some people are more likely to suffer than others.

We apprehend flood- and drought-related risks in a similar way: those risks amount to a quantity of resources we put aside, in order to provide for the corresponding losses, in various ways. That quantity is the arithmetical product of probability times magnitude of loss.

Total risk is a complex quantity, resulting from events happening in causal, heterogeneous chains. A river overflows and destroys some property: this is direct damage, the first occurrence in the causal chain. Among the property damaged, there are garbage yards. As water floods them, it washes away and further into the surrounding civilisation all kinds of crap, properly spoken crap included. The surrounding civilisation gets contaminated, and decontamination costs money: this is indirect damage, the second tier of the causal chain. Chemical and biological contamination by floodwater causes disruptions in the businesses involved, and those disruptions are costly, too: here goes the third tier in the causal chain etc.
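A worked micro-example of risk as a quantity, with the tiers of the causal chain summed into one magnitude of loss; the 1-in-100-years probability and the euro amounts are invented for illustration:

```python
def expected_loss(probability, damages):
    """Risk as a quantity: probability of occurrence multiplied by the
    total magnitude of damage, summed over the tiers of the causal chain."""
    return probability * sum(damages)

# A hypothetical flood scenario for one district (illustrative numbers):
tiers_eur = {
    "direct property damage": 2_000_000,
    "decontamination (second tier)": 400_000,
    "business disruption (third tier)": 350_000,
}
flood_risk_eur = expected_loss(0.01, tiers_eur.values())
```

In this toy scenario the quantity of risk, i.e. the reserve worth setting aside each year, comes out at 1% of the 2.75 million euros of total damage.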

I found some interesting insights regarding the exposure to flood- and drought-related risks in Europe with Paprotny et al. (2018[5]). Firstly, this piece of research made me realize that floods and droughts do damage in very different ways. Floods are disasters in the most intuitive sense of the term: they are violent, and they physically destroy man-made structures. The magnitude of damage from floods results from two basic variables: the violence and recurrence of floods themselves, on the one hand, and the value of the human structures affected, on the other. In a city, a flood does much more damage because there is much more property to destroy. Out there, in the countryside, as the density of man-made structures subsides, the damage inflicted by floods changes from disaster-type destruction into more lingering, long-term impediments to farming (e.g. contamination of farmed soil). Droughts work insidiously. There is no spectacular disaster to be afraid of. Adverse outcomes build up progressively, sometimes even year after year. Droughts, too, affect the countryside much more directly than the cities. It is rivers drying out first, and only in a second step cities experiencing disruptions in the supply of water, or of river-dependent electricity. It is farm soil drying out progressively, and farmers suffering damage due to lower crops or increased costs of irrigation, and only then city dwellers experiencing higher prices for their average carrot or organic cereal bar. Mind you, there is one type of drought-related disaster which sometimes can directly affect our towns and cities: forest fires.

Paprotny et al. (2018) give some detailed insights into the magnitude, type, and geographical distribution of flood-related risks in Europe. Firstly, the ‘where exactly?’. France, Spain, Italy, and Germany are the most affected, with Portugal, England, Scotland, Poland, the Czech Republic, Hungary and Romania following closely behind. As to the type of floods, France, Spain, and Italy are exposed mostly to flash floods, i.e. too much rain falling and not knowing where to go. Germany and virtually all of Central Europe, my native Poland included, are mostly exposed to river floods. As for the incidence of human fatalities, flash floods are definitely the most dangerous, and their impact seems to be the most serious in the second half of the calendar year, from July on.

Besides, the research by Paprotny et al. (2018) indicates that in Europe we seem to be already on the path of adaptation to floods. Both the currently observed losses, human and financial, and their 10-year moving average had their peaks between 1960 and 2000. After 2000, Europe seems to have been progressively acquiring the capacity to minimize the adverse impact of floods, and this capacity seems to have developed in cities more than in the countryside. It truly deals a blow to one’s ego to learn that the problem one wants to invent a revolutionary solution to does not really exist. I need to return to that claim I made in the « Project Navigator », namely that European cities are perfectly adapted to a climate that no longer exists. Apparently, I was wrong: European cities seem to be adapting quite well to the adverse effects of climate change. Yet, all hope is not lost. The countryside is still exposed. Now, seriously. Whilst Europe seems to be adapting to a greater occurrence of floods, said occurrence is most likely to increase, as suggested, for example, in the research by Alfieri et al. (2017[6]). That sends us to the issue of the limits to adaptation, and the cost thereof.

Let’s rummage through more literature. As I study the article by Lu et al. (2019[7]), which compares the relative exposure to future droughts in various regions of the world, I find, first of all, the same uncertainty which I know from Naumann et al. (2015) and Vogt et al. (2018): the economically and socially important drought is a phenomenon we are just starting to understand, and we are still far from understanding it well enough to assess the related risks with precision. I know that special look that empirical research has when we don’t really have a clue what we are observing. You can see it in the multitude of analytical takes on the same empirical data. There are different metrics for detecting drought, and Lu et al. (2019) demonstrate that the assessment of drought-related losses heavily depends on the metric used. Once we account for those methodological disparities, some trends emerge. Europe in general seems to be more and more exposed to long-term drought, and this growing exposure seems to be pretty consistent across various scenarios of climate change. Exposure to short-term episodes of drought seems to be growing mostly under the RCP 4.5 and RCP 6.0 climate change scenarios, a little bit less under the RCP 8.5 scenario. In practical terms it means that even if we, as a civilisation, manage to cut down our total carbon emissions, as in the RCP 4.5 climate change scenario, the incidence of drought in Europe will still be increasing. Stagge et al. (2017[8]) point out that exposure to drought in Europe diverges significantly between the Mediterranean South, on the one hand, and the relatively colder North, on the other. The former is definitely exposed to an increasing occurrence of droughts, whilst the latter is likely to experience less frequent episodes. What makes the difference is evapotranspiration (loss of water) rather than precipitation. If we accounted just for the latter, we would actually have more water.

I move towards a more practical approach to drought, this time as an agricultural phenomenon, and I scroll across the article on the environmental stress on winter wheat and maize, in Europe, by Webber et al. (2018[9]). Once again, I can see a lot of uncertainty. The authors put it plainly: models that serve to assess the impact of climate change on agriculture violate, by necessity, one of the main principles of statistical hypothesis testing, namely that error terms are random and independent. In these particular models, error terms are neither random nor mutually independent. This is interesting for me, as I have that (recent) little obsession with applying artificial intelligence, a modest perceptron of my own make, to simulate social change. Non-random and dependent error terms are precisely what a perceptron likes to have for lunch. With that methodological caveat, Webber et al. (2018) claim that regardless of the degree of so-called CO2 fertilization (i.e. plants being more active due to the presence of more carbon dioxide in the air), maize in Europe seems to be doomed to something like a 20% decline in yield by 2050. Winter wheat seems to be in a different boat. Without the effect of CO2 fertilization, a 9% decline in yield is to be expected, whilst with the plants being sort of restless, and high on carbon, a 4% increase is in view. With Toreti et al. (2019[10]), a more global take on the concurrence between climate extremes and wheat production is to be found. It appears that Europe has been experiencing an increasing incidence of extreme heat events since 1989, and until 2015 it didn’t seem to adversely affect the yield of wheat. Still, from 2015 on, there is a visible drop in the output of wheat. Even stiller, if I may say, less wheat is apparently compensated by more of other cereals (Eurostat[11], Schils et al. 2018[12]), and accompanied by less potatoes and beets.

When I first started to develop on that concept, which I baptised “Energy Ponds”, I mostly thought about it as a way to store water in rural areas, in swamp-and-meadow-like structures, to prevent droughts. It was only after I read a few articles about the Sponge Cities programme in China that I sort of drifted towards that more urban take on the thing. Maybe I was wrong? Maybe the initial concept of rural, hydrological structures was correct? Mind you, whatever we do in Europe, it always costs less if done in the countryside, especially regarding the acquisition of land.

Even in economics, sometimes we need to face reality, and reality presents itself as a choice between developing “Energy Ponds” in an urban environment or in a rural one. On the other hand, I am rethinking the idea of electricity generated in water turbines paying for the investment. In « Sponge Cities », I presented a provisional conclusion that it is a bad idea. Still, I was considering the size of investment that Jiang et al. (2018) talk about in the context of the Chinese Sponge Cities programme. Maybe it is reasonable to downsize the investment a bit, and to make it sort of lean and adaptable to the cash flow it is possible to generate out of selling hydropower.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your feedback on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[2] Alfieri, L., Feyen, L., Dottori, F., & Bianchi, A. (2015). Ensemble flood risk assessment in Europe under high end climate scenarios. Global Environmental Change, 35, 199-212.

[3] Naumann, G., et al. (2015). Assessment of drought damages and their uncertainties in Europe. Environmental Research Letters, 10, 124013, DOI https://doi.org/10.1088/1748-9326/10/12/124013

[4] Vogt, J.V., Naumann, G., Masante, D., Spinoni, J., Cammalleri, C., Erian, W., Pischke, F., Pulwarty, R., Barbosa, P., Drought Risk Assessment. A conceptual Framework. EUR 29464 EN, Publications Office of the European Union, Luxembourg, 2018. ISBN 978-92-79-97469-4, doi:10.2760/057223, JRC113937

[5] Paprotny, D., Sebastian, A., Morales-Nápoles, O., & Jonkman, S. N. (2018). Trends in flood losses in Europe over the past 150 years. Nature communications, 9(1), 1985.

[6] Alfieri, L., Bisselink, B., Dottori, F., Naumann, G., de Roo, A., Salamon, P., … & Feyen, L. (2017). Global projections of river flood risk in a warmer world. Earth’s Future, 5(2), 171-182.

[7] Lu, J., Carbone, G. J., & Grego, J. M. (2019). Uncertainty and hotspots in 21st century projections of agricultural drought from CMIP5 models. Scientific reports, 9(1), 4922.

[8] Stagge, J. H., Kingston, D. G., Tallaksen, L. M., & Hannah, D. M. (2017). Observed drought indices show increasing divergence across Europe. Scientific reports, 7(1), 14045.

[9] Webber, H., Ewert, F., Olesen, J. E., Müller, C., Fronzek, S., Ruane, A. C., … & Ferrise, R. (2018). Diverging importance of drought stress for maize and winter wheat in Europe. Nature communications, 9(1), 4249.

[10] Toreti, A., Cronie, O., & Zampieri, M. (2019). Concurrent climate extremes in the key wheat producing regions of the world. Scientific reports, 9(1), 5493.

[11] https://ec.europa.eu/eurostat/statistics-explained/index.php/Agricultural_production_-_crops last access July 14th, 2019

[12] Schils, R., Olesen, J. E., Kersebaum, K. C., Rijk, B., Oberforster, M., Kalyada, V., … & Manolov, I. (2018). Cereal yield gaps across Europe. European journal of agronomy, 101, 109-120.

Sponge cities


I am developing on the same topic I have already highlighted in « Another idea – urban wetlands », i.e. urban wetlands. By the way, I have found a similar, and interesting, concept in the existing literature: the sponge city. It is being particularly promoted by Chinese authors. I am going for a short review of the literature on this specific topic, and I start by correcting a mistake I made in my last update in French, « La ville – éponge », when discussing the article by Shao et al. (2018[1]). I got confused in the conversion of square metres into square kilometres: I forgot that 1 km2 = 1 000 000 m2, not 1 000. Correcting myself now, I rerun the corresponding calculations. The Chinese city of Xiamen, population 3 500 000, covers an area of 1 865 km2, i.e. 1 865 000 000 m2. Of that, 118 km2 = 118 000 000 m2 are sponge city infrastructures, or purposefully arranged urban wetlands. Annual precipitations in Xiamen, according to Climate-Data.org, are 1131 millimetres per year, thus 1,131 m3 of water per 1 m2. Hence, the entire city of Xiamen receives 1 865 000 000 m2 * 1,131 m3/m2 = 2 109 315 000 m3 of precipitation a year, and the sole area of urban wetlands, those 118 square kilometres, receives 118 000 000 m2 * 1,131 m3/m2 = 133 458 000 m3. The sponge city infrastructures in Xiamen have a target capacity of 2% regarding the retention of rainwater, which gives 2 669 160 m3.
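Minding that 1131 mm of rain equals 1,131 m3 per square metre, the arithmetic can be sketched in a few lines of Python; the Xiamen figures are those quoted from Shao et al. (2018):

```python
# Rainfall retention arithmetic for Xiamen (data after Shao et al. 2018)
city_area_m2 = 1_865_000_000       # 1 865 km2
sponge_area_m2 = 118_000_000       # 118 km2 of sponge-city infrastructure
rainfall_m = 1.131                 # 1131 mm/year = 1.131 m3 per m2

city_precipitation_m3 = city_area_m2 * rainfall_m      # total rain on the city
sponge_precipitation_m3 = sponge_area_m2 * rainfall_m  # rain on the wetlands
retention_target_m3 = 0.02 * sponge_precipitation_m3   # 2% retention target

print(round(city_precipitation_m3))    # 2109315000
print(round(sponge_precipitation_m3))  # 133458000
print(round(retention_target_m3))      # 2669160
```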

Jiang et al. (2018[2]) present a large-scale strategy for the development of sponge cities in China. The first takeaway I notice is the value of investment in sponge city infrastructures across a total of 30 cities in China. Those 30 cities are supposed to absorb $275,6 billion in the corresponding infrastructural investment, thus an average of $9,19 billion per city. The first on the list is Qian’an, population 300 000, area 3 522 km2, total investment planned I = $5,1 billion. That gives $17 000 per resident, and $1 448 041 per 1 km2 of urban area. The city of Xiamen, whose case is discussed by the previously cited Shao et al. (2018[3]), has already received $3,3 billion in investment, with a target of I = $14,14 billion, thus $4800 per resident, and $7 721 180 per square kilometre. Generally, the intensity of investment, counted per capita or per unit of surface, is really disparate. The authors comment on this, by the way: they stress the fact that sponge cities are so novel a concept that local experimentation is the norm, not the exception.
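Those intensity coefficients boil down to two divisions; a quick sketch, using the Qian’an figures quoted above:

```python
def investment_intensity(total_usd, residents, area_km2):
    """Capital intensity of a sponge-city project, per resident and per km2."""
    return total_usd / residents, total_usd / area_km2

# Qian'an: $5.1 billion planned, 300 000 residents, 3 522 km2
per_capita, per_km2 = investment_intensity(5.1e9, 300_000, 3_522)
print(round(per_capita))  # 17000
print(round(per_km2))     # 1448041
```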

Wu et al. (2019[4]) present another case study, from among the cities listed in Jiang et al. (2018): the city of Wuhan. Wuhan is probably the biggest sponge city project in terms of capital invested: $20,04 billion, distributed across 293 detailed initiatives. Started after a catastrophic flood in 2016, the project has since proven its value in protecting the city from floods. As far as I could understand, the case of Wuhan was the first domino block in the chain, the one that triggered the whole, nation-wide programme of sponge cities.

Shao et al. (2016[5]) present an IT approach to organizing sponge cities, focusing on the issue of data integration. The corresponding empirical field study was apparently conducted in Fenghuang County, Hunan province. The main engineering challenge consists in integrating geographical data from geographic information systems (GIS) with data pertinent to urban infrastructures, mostly CAD-based, thus graphical. On top of that, spatial data needs to be integrated with attribute data, i.e. with the characteristics of both infrastructural objects and their natural counterparts. All that integrated data is supposed to serve the efficient application of the so-called Low Impact Development (LID) technology. With Fenghuang County, we see the case of a relatively small area: 30,89 km2, 350 195 inhabitants, with a density of population of 200 people per 1 km2. The integrated data system was based on dividing that area into 417 sub-catchments, thus some 74 077 m2 per catchment.

Good, so this is like a cursory review of literature on the Chinese concept of sponge city. Now, I am trying to combine it with another concept, which I first read about in a history book, namely Civilisation and Capitalism by Fernand Braudel, volume 1: The Structures of Everyday Life[6]: the technology of lifting and pumping water from a river with the help of kinetic energy of waterwheels propelled by the same river. Apparently, back in the day, in cities like Paris, that technology was commonly used to pump river water onto the upper storeys of buildings next to the river, and even to the further-standing buildings. Today, we are used to water supply powered by big pumps located in strategic nodes of large networks, and we are used to seeing waterwheels as hydroelectric turbines. Still, that old concept of using directly the kinetic energy of water seems to pop up again, here and there. Basically, it has been preserved in a slightly different form. Do you know that image in movies, with that windmill in the middle of a desert? What is the point of putting a windmill in the middle of a desert? To pump water from a well. Now, let’s make a little jump from wind power to water power. If we can use the force of wind to pump water from underground, we can use the force of water in a river to pump water from that river.  

In scientific literature, I found just one article making reference to it, namely Yannopoulos et al. (2015[7]). Still, in less formal sources, I found some more stuff. I found a U.S. patent, from 1951, for a water-wheel-driven brush. I found a more modern technology, the spiral pump, which I came across through a company called PreScouter. Something similar is being proposed by the Dutch company Aqysta. Here are some graphics to give you an idea:


Now, I put together the infrastructure of a sponge city, and the technology of pumping water uphill using the energy of the water itself. I have provisionally named the thing « Energy Ponds ». Water wheels power water pumps, which convey water to elevated tanks, like water towers. From the water towers, water falls back down to ground level, passes through small hydroelectric turbines on its way down, and lands in the infrastructures of a sponge city, where it is stored. Here below, I am trying to make a coherent picture of it. The general concept can be extended, which I present graphically further below: the infrastructure of the sponge city collects excess water from rainfall or floods, and partly conducts it to the local river(s). What keeps the river from overflowing, or limits the degree of overflowing, is precisely the basic concept of Energy Ponds, i.e. those water-powered water pumps that pump water into elevated tanks. The more water flows in the river – in case of flood or immediate threat thereof – the more power in those pumps, the more flow through the elevated tanks, and the more flow through hydroelectric turbines, hence the more electricity. As long as the whole infrastructure physically withstands the environmental pressure of heavy rainfall and flood waves, it can work and serve.

My next step is to outline the business and financial framework of the « Energy Ponds » concept, taking the data provided by Jiang et al. (2018) about 29 sponge city projects in China, squeezing as much information as I can from it, and adding the component of hydroelectricity. I transcribed their data into an Excel file, and added some calculations of my own, together with data about demographics and annual rainfall. Here comes the Excel file with data as of July 5th 2019. A pattern emerges. All the 29 local clusters of projects display quite an even coefficient of capital invested per 1 km2 of construction area in those projects: it is $320 402 571,51 on average, with quite a low standard deviation, namely $101 484 206,43. Interestingly, that coefficient is significantly correlated neither with the local amount of rainfall per 1 m2, nor with the density of population. It looks like quite an autonomous variable, and yet a recurrent proportion.

Another interesting pattern is to be found in the percentage of the total surface, in each of the cities studied, devoted to the sponge-type infrastructure. The average value of that percentage is 0,61%, accompanied by quite a big standard deviation: 0,63%. That gives a coefficient of variation of 1,046. Still, that percentage is correlated with two other variables: annual rainfall, in millimetres per square metre, as well as the density of population, i.e. the average number of people per square kilometre. Measured with the Pearson coefficient of correlation, the former yields r = 0,45, and the latter r = 0,43: not very much, yet respectable, as correlations go.

From underneath those coefficients of correlation, common sense pokes its head. The more rainfall per unit of surface, the more water there is to retain, and thus the more we can gain by installing the sponge-type infrastructure. The more people per unit of surface, the more people can directly benefit from installing that infrastructure, per 1 km2. This one stands to reason, too.

There is an interesting lack of correlations in that lot of data taken from Jiang et al. (2018). The number of local projects, i.e. projects per city, is virtually uncorrelated with anything else and, intriguingly, is negatively correlated, at Pearson r = – 0,44, with the size of local populations. The more people in the city, the fewer local sponge city projects there are.

By the way, I have some concurrent information on the topic. According to a press release by Voith, this company has recently acquired a contract with the city of Xiamen, one of the sponge-cities, for the supply of large hydroelectric turbines in the technology of pumped storage, i.e. almost exactly the thing I have in mind.

Now, the Chinese programme of sponge cities is a starting point for me to reverse engineer my own concept of « Energy Ponds ». I assume that four economic aggregates pay for the corresponding investment: a) the Net Present Value of proceeds from producing electricity in water turbines; b) the Net Present Value of savings on losses connected to floods; c) the opportunity cost of tap water made available from the retained precipitation; and d) the incremental change in the market value of the real estate involved.

There is a city, with N inhabitants, who consume R m3 of water per year, R/N per person per year, and E kWh of energy per year, E/N per person per year. R divided by the 8760 hours in a year (R/8760) is the approximate amount of water the local population needs to have in constant supply. Same for energy: E/8760 is a good approximation of the power, in kW, that the local population needs to have standing and available for immediate use.

The city collects F millimetres of precipitation a year. Note that F mm = F litres per m2, i.e. F/1000 m3 per m2. With a density of population of D people per 1 km2, the average square kilometre has what I call the sponge function: D*(R/N) = f(F*1000). Each square kilometre collects F*1000 cubic metres of precipitation a year, and this amount remains in a recurrent proportion to the aggregate amount of water that the D people living on that square kilometre consume per year.

The population of N residents spends an aggregate PE*E on energy, and an aggregate PR*R on water, where PE and PR are the respective prices of energy and water. The supply of water and energy happens at levelized costs per unit. The reference math here is the standard calculation of LCOE, or Levelized Cost of Energy, over an interval of time t, measured as LCOE(t) = [IE(t) + ME(t) + UE(t)] / E, where IE is the amount of capital invested in the fixed assets of the corresponding power installations, ME is their necessary cost of current maintenance, and UE is the cost of fuel used to generate energy. Per analogy, the levelized cost of water can be calculated as LCOR(t) = [IR(t) + MR(t) + UR(t)] / R, with the same logic: investment in fixed assets, plus the cost of current maintenance, plus the cost of water strictly speaking, all divided by the quantity of water consumed. Mind you, in the case of water, the UR(t) part could easily be zero, and yet it does not have to be. Imagine a general municipal provider of water, who buys rainwater collected in private, local installations of the sponge type, at UR(t) per cubic metre, that sort of thing.
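The levelized-cost formula, identical in shape for LCOE and LCOR, can be sketched as follows (all figures hypothetical):

```python
def levelized_cost(investment, maintenance, input_cost, output):
    """LCO(t) = [I(t) + M(t) + U(t)] / output; serves both LCOE and LCOR."""
    return (investment + maintenance + input_cost) / output

# Hypothetical: $2M invested, $150k maintenance, zero fuel/water input cost,
# 10 million units (kWh of energy, or m3 of water) delivered over the interval
print(levelized_cost(2_000_000, 150_000, 0, 10_000_000))  # 0.215 per unit
```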

The supply of water and energy generates gross margins: E(t)*(PE(t) – LCOE(t)) and R(t)*(PR(t) – LCOR(t)). These margins can be rephrased as, respectively, PE(t)*E(t) – IE(t) – ME(t) – UE(t), and PR(t)*R(t) – IR(t) – MR(t) – UR(t). Gross margins are gross cash flows, which finance organisations (jobs) attached to the supply of, respectively, water and energy, and generate some net surplus. Here comes a little difficulty with appraising the net surplus from the supply of water and energy. Long story short: the levelized values of the « LCO-whatever follows » type explicitly incorporate the yield on capital investment. Each unit of output is supposed to yield a return on the investment I. Still, this is not how classical accounting defines a cost. The amounts assigned to costs, both variable and fixed, correspond to strictly current expenditures, i.e. to payments for the current services of people and things, without any residual value sedimenting over time. It is only after I account for those strictly current outlays that I can calculate the current margin, and a fraction of that margin can be considered as direct yield on my investment. In standard, basic accounting, the return on investment is the net income divided by the capital invested. The net income is calculated as π = Q*P – Q*VC – FC – r*I – T, where Q and P are quantity and price, VC is the variable cost per unit of output Q, FC stands for the fixed costs, r is the price of capital (interest rate) on the capital I invested in the given business, and T represents taxes. The thus calculated net income π is then put into the formula of the internal rate of return on investment: IRR = π / I.
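The classical-accounting alternative, π = Q*P – Q*VC – FC – r*I – T with IRR = π / I, in the same sketchy manner (all inputs hypothetical):

```python
def net_income(Q, P, VC, FC, r, I, T):
    """Classical accounting: pi = Q*P - Q*VC - FC - r*I - T."""
    return Q * P - Q * VC - FC - r * I - T

def irr(pi, I):
    """Internal rate of return: net income over capital invested."""
    return pi / I

# Hypothetical supplier: 10M kWh sold at $0.21, variable cost $0.05/kWh,
# $500k fixed costs, 5% interest on $8M invested, $200k taxes
pi = net_income(Q=10_000_000, P=0.21, VC=0.05, FC=500_000,
                r=0.05, I=8_000_000, T=200_000)
print(round(pi))                    # 500000
print(round(irr(pi, 8_000_000), 4))  # 0.0625
```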

When I calculate my margin of profit on the sales of energy or water, I have those two angles of approach. Angle #1 consists in using the levelized cost, and then the margin generated over that cost, i.e. P – LC (price minus levelized cost) can be accounted for other purposes than the return on investment. Angle #2 comes from traditional accounting: I calculate my margin without reference to the capital invested, and only then I use some residual part of that margin as return on investment. I guess that levelized costs work well in the accounting of infrastructural systems with nicely predictable output. When the quantity demanded, and offered, in the market of energy or water is like really recurrent and easy to predict, thus in well-established infrastructures with stable populations around, the LCO method yields accurate estimations of costs and margins. On the other hand, when the infrastructures in question are developing quickly and/or when their host populations change substantially, classical accounting seems more appropriate, with its sharp distinction between current costs and capital outlays.

Anyway, I start modelling the first component of the possible payoff on investment in the infrastructures of « Energy Ponds », i.e. the Net Present Value of proceeds from producing electricity in water turbines. As I generally like staying close to real life (well, most of the time), I will be wrapping my thinking around my hometown, where I still live, i.e. Krakow, Poland; area of the city: 326,8 km2, area of the metropolitan area: 1023,21 km2. As for annual precipitations, data from Climate-Data.org[1] tells me that it is a bit more than the general Polish average of 600 mm a year. Apparently, Krakow receives an annual rainfall of 678 mm which, when translated into litres received by the whole area, makes a total rainfall on the city of 221 570 400 000 litres and, when enlarged to the whole metropolitan area, 693 736 380 000 litres.

In the generation of electricity from hydro turbines, what counts is the flow, measured in litres per second. The above-calculated total rainfall is to be divided by 365 days, then by 24 hours, and then by 3600 seconds in an hour. Long story short, you divide the annual rainfall in litres by the constant of 31 536 000 seconds in one year; mind you, on leap years, it will be 31 622 400 seconds. This step leads me to an estimated total flow of 7 026 litres per second in the city area, and 21 998 litres per second in the metropolitan area. Question: what amount of electric power can I get with that flow? I am using a formula I found at Renewables First.co.uk[2]: flow per second, in kilograms per second, multiplied by the gravitational acceleration g = 9,81 m/s2, multiplied by the average efficiency of a hydro turbine, equal to 75,1%, and further multiplied by the net head – the net difference in height – of the water flow. All that gives electric power in watts. All in all, when you want to calculate the electric power dormant in your local rainfall, take the total amount of said rainfall, in cubic metres falling on the entire place where you can possibly collect that rainwater from, and multiply it by 0,0000002336 * the head of the water flow. You will get power in kilowatts, with that implied efficiency of 75,1%.

For the sake of simplicity, I assume that, in those installations of elevated water tanks, the average elevation, thus the head of the subsequent water flow through hydro turbines, will be H = 10 m. That leads me to P = 518 kW available from the annual rainfall on the city of Krakow, when elevated to H = 10 m, and, accordingly, P = 1 621 kW for the rainfall received over the entire metropolitan area.
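The whole chain from annual rainfall to available power can be condensed into one function; a sketch using the Krakow figures above, with the assumed 75,1% turbine efficiency and a 10 m head:

```python
SECONDS_PER_YEAR = 31_536_000  # 365 days; a leap year has 31 622 400
G = 9.81            # gravitational acceleration, m/s2
EFFICIENCY = 0.751  # assumed average efficiency of a hydro turbine

def rainfall_power_kw(annual_rainfall_litres, head_m):
    """Electric power available from captured rainfall run through turbines."""
    flow_kg_per_s = annual_rainfall_litres / SECONDS_PER_YEAR  # 1 litre ~ 1 kg
    return flow_kg_per_s * G * EFFICIENCY * head_m / 1000      # watts -> kW

print(round(rainfall_power_kw(221_570_400_000, 10)))  # 518 kW, city of Krakow
print(round(rainfall_power_kw(693_736_380_000, 10)))  # 1621 kW, metro area
```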

In the next step, I want to calculate the market value of that electric power, in terms of revenues from its possible sales. I take the power and multiply it by the 8760 hours in a year (8784 hours in a leap year). I get an amount of electricity for sale equal to E = 4 534 383 kWh from the rainfall received over the city of Krakow strictly speaking, and E = 14 197 142 kWh if we hypothetically collect rainwater from the entire metro area.

Now, the pricing. According to data available at GlobalPetrolPrices.com[3], the average price of electricity in Poland is PE = $0,18 per kWh. Still, when I get, more humbly, to my own electricity bill, and I crudely divide the amount billed in Polish zlotys by the amount used in kWh, I get something like PE = $0,21 per kWh. The discrepancy might be coming from the complexity of that price: it is the actual price per kWh used plus all sorts of constant charges per kW of power made available. With those prices, the market value of the corresponding revenues from selling electricity from smartly used rainfall would be like $816 189 ≤ Q*PE ≤ $952 220 a year from the city area, and $2 555 485 ≤ Q*PE ≤ $2 981 400 a year from the metropolitan area.

I transform those revenues, even before accounting for any current costs, into a stream spread over 8 years, the average lifecycle of a typical investment project. Those 8 years are what is usually expected as the time of full return on investment in those more long-term, infrastructure-like projects. With a technological lifecycle of around 20 years, such projects are supposed to pay for themselves over the first 8 years, the following 12 years bringing a net gain to investors. Depending on the pricing of electricity, and with a discount rate of r = 5% a year, that gives something like $5 275 203 ≤ NPV(Q*PE ; 8 years) ≤ $6 154 403 for the city area, and $16 516 646 ≤ NPV(Q*PE ; 8 years) ≤ $19 269 421 for the metropolitan area.
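The NPV step is a plain annuity; a minimal sketch with the city-of-Krakow volume and the two price estimates:

```python
def npv_annuity(annual_cash_flow, rate, years):
    """Present value of a constant annual cash flow, discounted at `rate`."""
    return sum(annual_cash_flow / (1 + rate) ** t for t in range(1, years + 1))

energy_kwh = 4_534_383          # annual electricity from rainfall, city of Krakow
for price_usd in (0.18, 0.21):  # the two electricity price estimates, $/kWh
    # should land close to the $5,3M - $6,2M bracket quoted above
    print(round(npv_annuity(energy_kwh * price_usd, 0.05, 8)))
```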

When I compare that stream of revenue to what is actually being done in the Chinese sponge cities, discussed a few paragraphs earlier, one thing jumps to the eye: even with the most optimistic assumption of capturing 100% of rainwater, so as to make it flow through local hydroelectric turbines, there is no way that selling electricity from those turbines pays for the entire investment. The difference is a matter of orders of magnitude, when we compare investment to revenues from electricity.



[1] https://en.climate-data.org/europe/poland/lesser-poland-voivodeship/krakow-715022/ last access July 7th 2019

[2] https://www.renewablesfirst.co.uk/hydropower/hydropower-learning-centre/how-much-power-could-i-generate-from-a-hydro-turbine/ last access July 7th, 2019

[3] https://www.globalpetrolprices.com/electricity_prices/ last access July 8th 2019

[1] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

[2] Jiang, Y., Zevenbergen, C., & Ma, Y. (2018). Urban pluvial flooding and stormwater management: A contemporary review of China’s challenges and “sponge cities” strategy. Environmental science & policy, 80, 132-143.

[3] Shao, W., Liu, J., Yang, Z., Yang, Z., Yu, Y., & Li, W. (2018). Carbon Reduction Effects of Sponge City Construction: A Case Study of the City of Xiamen. Energy Procedia, 152, 1145-1151.

[4] Wu, H. L., Cheng, W. C., Shen, S. L., Lin, M. Y., & Arulrajah, A. (2019). Variation of hydro-environment during past four decades with underground sponge city planning to control flash floods in Wuhan, China: An overview. Underground Space, article in press

[5] Shao, W., Zhang, H., Liu, J., Yang, G., Chen, X., Yang, Z., & Huang, H. (2016). Data integration and its application in the sponge city construction of China. Procedia Engineering, 154, 779-786.

[6] Braudel, F. (1979). Civilization and Capitalism, 15th-18th Century, Vol. 1: The Structures of Everyday Life (S. Reynolds, Trans.).

[7] Yannopoulos, S., Lyberatos, G., Theodossiou, N., Li, W., Valipour, M., Tamburrino, A., & Angelakis, A. (2015). Evolution of water lifting devices (pumps) over the centuries worldwide. Water, 7(9), 5031-5060.

Another idea – urban wetlands


I have just come up with an idea. One of those big ones, the kind that pushes you to write a business plan and some scientific stuff as well. Here is the idea: a network of ponds and waterways, made in the close vicinity of a river, being both a reservoir of water – mostly the excess rainwater from big downpours – and a location for a network of small water turbines. The idea comes from a few observations, as well as other ideas, that I have had over the last two years. Firstly, in Central Europe, we have less and less water from melting snow – as there is almost no snow anymore in winter – and more and more water from sudden, heavy rain. We need to learn how to retain rainwater in the most efficient way. Secondly, as we have local floods due to heavy rains, some sort of spontaneous formation of floodplains happens. Even if there is no visible pond, the ground gets a bit spongy and soaked, flood after flood. We have more and more mosquitoes. If it is happening anyway, let’s use it creatively. This particular point is visualised in the map below, with the example of Central and Southern Europe. Thus, my idea is to purposefully utilise a naturally occurring phenomenon, a component of climate change.

Source: https://www.eea.europa.eu/data-and-maps/figures/floodplain-distribution last access June 20th, 2019

Thirdly, there is a new generation of water turbines: a whole range of small devices, simple and versatile, has come to the market. You can have a look at what those guys at Blue Freedom are doing. Really interesting. Hydroelectricity can now be approached in an apparently much less capital-intensive way. Thus, the idea I have is to purposefully arrange the floodplains we have in Europe into places as energy-efficient and carbon-efficient as possible. I give the general idea graphically in the picture below.

I am approaching the whole thing from the economic point of view, i.e. I want a piece of floodplain arranged into this particular concept to have more value, financial value included, than the same piece of floodplain just being ignored in its inherent potential. I can see two distinct avenues for developing the concept: that of a generally wild, uninhabited floodplain, like public land, as opposed to an inhabited floodplain, under incumbent or ongoing construction, residential or other. The latter is precisely what I want to focus on. I want to study, and possibly develop a business plan for, a human habitat combined with a semi-aquatic ecosystem, i.e. a network of ponds, waterways and water turbines in places where people live and work. Hence, from the geographic point of view, I am focusing on places where the secondary formation of floodplain-type terrain already occurs in towns and cities, or in their immediate vicinity. For more than a century, the growth of urban habitats has been accompanied by the entrenching of waterways in strictly defined, concrete-reinforced beds. I want to go the other way, and let those rivers spill their waters around, into wetlands, in a manner beneficial to human dwelling.

My initial approach to the underlying environmental concept is market-based. Can we create urban wetlands, in flood-threatened areas, where the presence of explicitly and purposefully arranged aquatic structures increases the value of property by more than the investment required? I start with the most fundamental landmarks in the environment. I imagine a piece of land in an urban area. It has its present market value, and I want to study its possible value in the future.

I imagine a piece of land located in an urban area with the characteristics of a floodplain, i.e. recurrently threatened by local floods or the secondary effects thereof. At the moment ‘t’, that piece of land has a market value M(t) = S * m(t), the product of its total surface S, constant over time, and the market price m(t) per unit of surface, changing over time. There are two moments in time: the initial moment t0, and the subsequent moment t1, after the development into urban wetland. Said development requires a stream of investment I(t0 -> t1). I want to study the conditions for M(t1) – M(t0) > I(t0 -> t1). As the surface S is constant over time, my problem breaks down into units of surface, whence the aggregate investment decomposes into I(t0 -> t1) = S * i(t0 -> t1), and the problem restates as m(t1) – m(t0) > i(t0 -> t1).
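The break-even condition m(t1) – m(t0) > i(t0 -> t1) reduces to a one-line check per unit of surface; the prices below are purely hypothetical:

```python
def wetland_pays_off(m_t0, m_t1, i_per_m2):
    """Does the uplift in market price per m2 exceed the investment per m2?"""
    return (m_t1 - m_t0) > i_per_m2

# Hypothetical: land at $1200/m2 now, $1450/m2 after development
print(wetland_pays_off(m_t0=1_200.0, m_t1=1_450.0, i_per_m2=180.0))  # True
print(wetland_pays_off(m_t0=1_200.0, m_t1=1_450.0, i_per_m2=300.0))  # False
```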

I assume the market price m(t) is based on two types of characteristics: those directly measurable as financials, for one, e.g. the average wage a resident can expect from a locally based job, and those more diffuse ones, whose translation into financial variables is subtler, and sometimes pointless. I allow myself to call the latter ones ‘environmental services’. They cover quite a broad range of phenomena, ranging from the access to clean water outside the public water supply system, all the way to subjectively perceived happiness and well-being. All in all, mathematically, I say m(t) = f(x1, x2, …, xk) : the market price of construction land in cities is a function of k variables. Consistently with the above, I assume that f[t1; (x1, x2, …, xk)] – f[t0; (x1, x2, …, xk)] > i(t0 -> t1).    

It is intellectually honest to tackle those characteristics of urban land that make its market price. There is a useful observation about cities: anything that impacts the value of urban real estate, sooner or later translates into rent that people are willing to pay for being able to stay there. Please, notice that even when we own a piece of real estate, i.e. when we have property rights to it, we usually pay to someone some kind of periodic allowance for being able to execute our property rights fully: the real estate tax, the maintenance fee paid to the management of residential condominiums, the fee for sanitation service (e.g. garbage collection) etc. Any urban piece of land has a rent tag attached. Even those characteristics of a place, which pertain mostly to the subjectively experienced pleasure and well-being derived out of staying there have a rent-like price attached to them, at the end of the day.

Good. I have made a sketch of the thing. Now, I am going to pass in review some published research, in order to set my landmarks. I start with some literature regarding urban planning, and as soon as I do so, I discover an application for artificial intelligence, a topic of interest for me these last months. Lyu et al. (2017[1]) present a method for procedural modelling of urban layout, and in their work, I can spot something similar to the equations I have just come up with: complex analysis of land-suitability. It starts with dividing the total area of urban land at hand, in a given city, into standard units of surface. Geometrically, they look nice when they are equisized squares. Each unit ‘i’ can be potentially used for many alternative purposes. Lyu et al. distinguish 5 typical uses of urban land: residential, industrial, commercial, official, and open & green. Each such surface unit ‘i’ is endowed with a certain suitability for different purposes, and this suitability is a function of a finite number of factors. Formally, the suitability s_ik of land unit i for use k is a weighted average s_ik = Σj w_kj * r_ij, where w_kj is the weight of factor j for land use k, and r_ij is the rating of land unit i on factor j. Below, I am trying to reproduce graphically the general logic of this approach.
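To make sure I understand the mechanics of that suitability score, here is a toy version in Python; the factors, weights and ratings are all invented for illustration, not taken from Lyu et al.:

```python
# Toy land-suitability score: s_ik = sum_j w_kj * r_ij.
# The factor list, the weights and the ratings below are made up.
FACTORS = ["transport_access", "noise_level", "green_cover"]

# w_kj: weight of factor j for land use k; each weight vector sums to 1.
WEIGHTS = {
    "residential": [0.4, 0.3, 0.3],
    "industrial":  [0.7, 0.1, 0.2],
    "commercial":  [0.6, 0.2, 0.2],
    "official":    [0.5, 0.3, 0.2],
    "open_green":  [0.2, 0.2, 0.6],
}

def suitability(ratings, land_use):
    """Weighted average of factor ratings for one land unit and one use."""
    return sum(w * r for w, r in zip(WEIGHTS[land_use], ratings))

# Ratings r_ij of one land unit i on each factor, on a 0..1 scale.
unit_ratings = [0.8, 0.2, 0.6]

scores = {use: suitability(unit_ratings, use) for use in WEIGHTS}
best_use = max(scores, key=scores.get)
print(best_use, round(scores[best_use], 2))   # → industrial 0.7
```

The procedure then assigns each square unit the use it scores highest on, under whatever global constraints the planner imposes.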

In a city approached analytically with the general method presented above, Lyu et al. (2017[1]) distinguish three layers of urban layout: population, road network, and land use. It starts with an initial (input) state of population, land use, and available area. In a first step of the procedure, a simulation of highways and arterial transport connections is made. The transportation grid suggests some kind of division of urban space into districts. As far as I understand it, Lyu et al. define districts as functional units with the quantitative dominance of certain land uses, i.e. residential vs. industrial rather than rich folks’ estate vs. losers’ end, sort of.

As a first sketch of district division is made, it allows simulating a first distribution of population in the city, and a first draft of land use. The distribution of population is largely a distribution of density in population, and the corresponding transportation grid is strongly correlated with it. Some modes of urban transport work only above some critical thresholds in the density of population. This is an important point: density of population is a critical variable in social sciences.

Then, some kind of planning freedom can be allowed inside districts, which results in a second draft of spatial distribution in population, where a new type of unit – a neighbourhood – appears. Lyu et al. do not explain in detail the concept of neighbourhood, and yet it is interesting. It suggests the importance of spontaneous settlement vs. that of planned spatial arrangement.

I am strongly attached to that notion of spontaneous settlement. I am firmly convinced that in the long run people live where they want to live, and urban planning can just make that process somehow smoother and more efficient. Thus comes another article in my review of literature, by Mahmoud & Divigalpitiya (2019[2]). By the way, I have an interesting meta-observation: most recent literature about urban development is based on empirical research in emerging economies and in developing countries, with the U.S. coming next, and Europe lagging far behind. In Europe, we do very little research about our own social structures, whilst them Egyptians or Thais are constantly studying the way they live collectively.

Anyway, back to Mahmoud & Divigalpitiya (2019[2]), the article is interesting from my point of view because its authors study the development of new towns and cities. For me, it is an insight into how radically new urban structures sink into the incumbent spatial distribution of population. The specific background of this particular study is a public policy of the Egyptian government to establish, in a planned manner, new cities some distance away from the Nile, and to do it so as to minimize the encroachment on agricultural land. Thus, we have scarce space to fit people into, with optimal use of land.

As I study that paper by Mahmoud & Divigalpitiya, some kind of extension to my initial idea emerges. Those researchers report that with proper water and energy management, more specifically with the creation of irrigative structures like those which I came up with – networks of ponds and waterways – paired with a network of small hydropower units, it is possible both to accommodate an increase of 90% in local urban population, and to create 3,75% more agricultural land. Another important finding about those new urban communities in Egypt is that they tend to grow by sprawl rather than by distant settlement. New city dwellers tend to settle close to the incumbent residents, rather than in more remote locations. In simple words: it is bloody hard to create a new city from scratch. Habits and social links are like a tangible expanse of matter, which resists distortion.

I switch to another paper based on Egyptian research, namely that by Hatata et al. 2019[4], relative to the use of small hydropower generators. The paper is rich in technicalities, and therefore I make a note to come back to it many times as I go deeper into the details of my concept. For now, I have a few general takeaways. Firstly, it is wise to combine small hydro off grid with small hydro connected to the power grid; more generally, small hydro looks like a good complementary source of power next to a regular grid, rather than a 100% autonomous power base. Still, full autonomy is possible, mostly with the technology of the Permanent Magnet Synchronous Generator. Secondly, Hatata et al. present a calculation of economic value in hydropower projects, based on their Net Present Value, which, in turn, is calculated on the grounds of a basic assumption: hydropower installations carry some residual capital value Vr over their entire lifetime, and additionally can generate a current cash flow determined by: a) the revenue Rt from the sales of energy, b) the locally needed investment It, c) the operating cost Ot, and d) the maintenance cost Mt, all that in the presence of a periodic discount rate r.
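Just to nail the logic down, here is a sketch of that NPV calculation in Python; all the monetary figures are hypothetical, and the exact discounting conventions of Hatata et al. may differ:

```python
# NPV of a small hydropower project, in the spirit of Hatata et al. (2019):
# yearly cash flow = R_t - I_t - O_t - M_t, discounted at rate r, plus the
# discounted residual capital value Vr at the end of the lifetime.
# All figures below are hypothetical.

def hydro_npv(R, I, O, M, Vr, r):
    """NPV = sum_t (R_t - I_t - O_t - M_t) / (1+r)^t  +  Vr / (1+r)^T."""
    T = len(R)
    npv = sum((R[t] - I[t] - O[t] - M[t]) / (1 + r) ** (t + 1) for t in range(T))
    return npv + Vr / (1 + r) ** T

# 5-year toy project: heavy investment in year 1, steady sales afterwards.
R = [0, 120_000, 120_000, 120_000, 120_000]   # revenue from energy sales
I = [400_000, 0, 0, 0, 0]                     # investment outlays
O = [0, 15_000, 15_000, 15_000, 15_000]       # operating costs
M = [0, 5_000, 5_000, 5_000, 5_000]           # maintenance costs

print(round(hydro_npv(R, I, O, M, Vr=150_000, r=0.05), 2))
```

Note how much the result hinges on the residual value Vr: in this toy run, the operating cash flows alone do not recoup the initial investment.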

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Lyu, X., Han, Q., & de Vries, B. (2017). Procedural modeling of urban layout: population, land use, and road network. Transportation research procedia, 25, 3333-3342.

[2] Mahmoud, H., & Divigalpitiya, P. (2019). Spatiotemporal variation analysis of urban land expansion in the establishment of new communities in Upper Egypt: A case study of New Asyut city. The Egyptian Journal of Remote Sensing and Space Science, 22(1), 59-66.


[4] Hatata, A. Y., El-Saadawi, M. M., & Saad, S. (2019). A feasibility study of small hydro power for selected locations in Egypt. Energy Strategy Reviews, 24, 300-313.


Sketching quickly alternative states of nature

My editorial on YouTube

I am thinking about a few things, as usual, and, as usual, it is a laborious process. The first one is a big one: what the hell am I doing all this for? I mean, what’s the purpose and the point of applying artificial intelligence to simulating collective intelligence? There is one particular issue that I am entertaining in this regard: the experimental check. A neural network can help me in formulating very precise hypotheses as to how a given social structure can behave. Yet, these are hypotheses. How can I have them checked?

Here is an example. Together with a friend, we are doing some research about the socio-economic development of big cities in Poland, in the perspective of seeing them turning into so-called ‘smart cities’. We came to an interesting set of hypotheses generated by a neural network, but we have a tiny little problem: we propose, in the article, a financial scheme for cities but we don’t quite understand why we propose this exact scheme. I know it sounds idiotic, but well: it is what it is. We have an idea, and we don’t know exactly where that idea came from.

I have already discussed the idea in itself on my blog, in « Locally smart. Case study in finance.» : a local investment fund, created by the local government, to finance local startup businesses. Business means investment, especially at the aggregate scale and in the long run. This is how business works: I invest, and I have (hopefully) a return on my investment. If there is more and more private business popping up in those big Polish cities, and, at the same time, local governments are backing off from investment in fixed assets, let’s make those business people channel capital towards the same type of investment that local governments are withdrawing from. What we need is an institutional scheme where local governments financially fuel local startup businesses, and those businesses implement investment projects.

I am going to try and deconstruct the concept, sort of backwards. I am sketching the landscape, i.e. the piece of empirical research that brought us to formulating the whole idea of an investment fund paired with crowdfunding. Big Polish cities show an interesting pattern of change: local populations, whilst largely stagnating demographically, are becoming more and more entrepreneurial, which is observable as an increasing number of startup businesses per 10 000 inhabitants. On the other hand, local governments (city councils) are spending a consistently decreasing share of their budgets on infrastructural investment. There is more and more business going on per capita, and, at the same time, local councils seem to be slowly backing off from investment in infrastructure. The cities we studied for this phenomenon are: Wroclaw, Lodz, Krakow, Gdansk, Kielce, Poznan, Warsaw.

More specifically, the concept tested through the neural network consists in selecting, each year, 5% of the most promising local startups, and funding each of them with €80 000. The logic behind this concept is that when a phenomenon becomes more and more frequent – and this is the case of startups in big Polish cities – an interesting strategy is to fish out, consistently, the ‘crème de la crème’ from among those frequent occurrences. It is as if we were soccer promoters in a country where more and more young people start playing at a competitive level. A viable strategy consists, in such a case, in selecting, over and over again, the most promising players from the top of the heap and promoting them further.

Thus, in that hypothetical scheme, the local investment fund selects and supports the most promising from amongst the local startups. Mind you, that 5% rate of selection is just an idea. It could be 7% or 3% just as well. A number had to be picked, in order to simulate the whole thing with a neural network, which I present further. The 5% rate can be seen as an intuitive transference from the Student’s t significance test in statistics. When you test a correlation for its significance with the t-test, you commonly assume that at least 95% of all the observations under scrutiny are covered by that correlation, and you can tolerate a 5% fringe of outliers. I suppose this is why we picked, intuitively, that 5% rate of selection among the local startups: 5% sounds just about right to delineate the subset of most original ideas.

Anyway, the basic idea consists in creating a local investment fund controlled by the local government, and this fund would provide a standard capital injection of €80 000 to 5% of most promising local startups. The absolute number STF (i.e. financed startups) those 5% translate into can be calculated as: STF = 5% * (N/10 000) * ST10 000, where N is the population of the given city, and ST10 000 is the coefficient of startup businesses per 10 000 inhabitants. Just to give you an idea what it looks like empirically, I am presenting data for Krakow (KR, my hometown) and Warsaw (WA, Polish capital), in 2008 and 2017, which I designate, respectively, as STF(city_acronym; 2008) and STF(city_acronym; 2017). It goes like:

STF(KR; 2008) = 5% * (754 624/ 10 000) * 200 = 755

STF(KR; 2017) = 5% * (767 348/ 10 000) * 257 = 986

STF(WA; 2008) = 5% * (1709781/ 10 000) * 200 = 1 710

STF(WA; 2017) = 5% * (1764615/ 10 000) * 345 = 3 044   
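The arithmetic above folds into a few lines of Python, with the population figures and startup coefficients quoted in the text:

```python
# STF = 5% * (N / 10 000) * ST10000, rounded to whole startups.
def financed_startups(population, st_per_10k, rate=0.05):
    """Number of startups the fund finances in a given city and year."""
    return round(rate * (population / 10_000) * st_per_10k)

print(financed_startups(754_624, 200))    # Krakow 2008 -> 755
print(financed_startups(767_348, 257))    # Krakow 2017 -> 986
print(financed_startups(1_709_781, 200))  # Warsaw 2008 -> 1710
print(financed_startups(1_764_615, 345))  # Warsaw 2017 -> 3044
```

The `rate` parameter makes it trivial to re-run the whole thing at 3% or 7% instead of 5%.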

That glimpse of empirics allows guessing why we applied a neural network to that whole thing: the two core variables, namely population and the coefficient of startups per 10 000 people, can change with a lot of autonomy vis-à-vis each other. In the whole sample that we used for basic stochastic analysis, thus 7 cities observed from 2008 through 2017, which equals 70 observations, those two variables are Pearson-correlated at r = 0,6267, i.e. r2 ≈ 0,39. There is some significant correlation, and yet some 61% of the observable variance in each of those variables doesn’t give a f**k about the variance of the other variable. The covariance of these two seems to be dominated by the variability in population rather than by uncertainty as for the average number of startups per 10 000 people.

What we have is quite a predictable trend of growing propensity to entrepreneurship, combined with a bit of randomness in demographics. Those two can come in various duos, and their duos tend to be actually trios, ‘cause we have that other thing, which I already mentioned: investment outlays of local governments and the share of those outlays in the overall local budgets. Our (my friend’s and mine) intuitive take on that picture was that it is really interesting to know the different ways those Polish cities can go in the future, rather than settling on one central model. I mean, the central stochastic model is interesting too. It says, for example, that the natural logarithm of the number of startups per 10 000 inhabitants, whilst being negatively correlated with the share of investment outlays in the local government’s budget, is positively correlated with the absolute amount of those outlays. The more a local government spends on fixed assets, the more startups it can expect per 10 000 inhabitants. That latter variable is subject to some kind of scale effects from the part of the former. Interesting. I like scale effects. They are intriguing. They show phenomena which change in a way akin to what happens when I heat up a pot full of water: the more heat I supply to the water, the more different kinds of stuff can happen. We call it an increase in the number of degrees of freedom.

You can see the stochastically approached degrees of freedom in the coefficient of startups per 10 000 inhabitants in Table 1, below. The ‘Ln’ prefix means, of course, natural logarithm. Further below, I return to the topic of collective intelligence in this specific context, and to using artificial intelligence to simulate the thing.

Table 1

Explained variable: Ln(number of startups per 10 000 inhabitants); R2 = 0,608; N = 70

Explanatory variable                           | Coefficient of regression | Standard error | Significance level
Ln(investment outlays of the local government) | -0,093                    | 0,048          | p = 0,054
Ln(total budget of the local government)       | 0,565                     | 0,083          | p < 0,001
Ln(population)                                 | -0,328                    | 0,090          | p < 0,001
Constant                                       | -0,741                    | 0,631          | p = 0,245

I take the correlations from Table 1, thus the coefficients of regression from the first numerical column, and I check their credentials with the significance levels from the last numerical column. As I want to understand them as real, actual things that happen in the cities studied, I recreate the real values. We are talking about coefficients of startups per 10 000 people, comprised somewhere between the observable minimum ST10 000 = 140 and the maximum ST10 000 = 345, with a mean at ST10 000 = 223. In terms of natural logarithms, that world folds into something between ln(140) = 4,941642423 and ln(345) = 5,843544417, with the expected mean at ln(223) = 5,407171771. Standard deviation Ω from that mean can be reconstructed from the standard error, which is calculated as s = Ω/√N, and, consequently, Ω = s*√N. In this case, with N = 70, standard deviation Ω = 0,631*√70 = 5,279324767.
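For the record, the arithmetic of that paragraph, reproduced in Python:

```python
import math

# The observed range of the startup coefficient, folded into natural
# logarithms, and the standard deviation recovered from the standard error
# via omega = s * sqrt(N).
st_min, st_mean, st_max = 140, 223, 345

print(round(math.log(st_min), 3))    # → 4.942
print(round(math.log(st_mean), 3))   # → 5.407
print(round(math.log(st_max), 3))    # → 5.844

s, N = 0.631, 70                     # standard error of the constant, sample size
omega = s * math.sqrt(N)
print(round(omega, 3))               # → 5.279
```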

That regression is interesting to the extent that it leads to an absurd prediction. If the population of a city shrinks asymptotically down to zero, and if, at the same time, the budget of the local government swells up to infinity, the occurrence of entrepreneurial behaviour (number of startups per 10 000 inhabitants) will tend towards infinity as well. There is that nagging question: how the hell can the budget of a local government expand when its tax base – the population – is collapsing? I am an economist and I am supposed to answer questions like that.

Before being an economist, I am a scientist. I ask embarrassing questions and then I have to invent a way to give an answer. Those stochastic results I have just presented make me think of somehow haphazard a set of correlations. Such correlations can be called dynamic, and this, in turn, makes me think about the swarm theory and collective intelligence (see Yang et al. 2013[1] or What are the practical outcomes of those hypotheses being true or false?). A social structure, for example that of a city, can be seen as a community of agents reactive to some systemic factors, similarly to ants or bees being reactive to pheromones they produce and dump into their social space. Ants and bees are amazingly intelligent collectively, whilst, let’s face it, they are bloody stupid singlehandedly. Ever seen a bee trying to figure things out in the presence of a window? Well, not only can a swarm of bees get that s**t down easily, but also, they can invent a way of nesting in and exploiting the whereabouts of the window. The thing is that a bee has its nervous system programmed to behave smartly mostly in social interactions with other bees.

I have already developed on the topic of money and capital being a systemic factor akin to a pheromone (see Technological change as monetary a phenomenon). Now, I am walking down this avenue again. What if city dwellers react, through entrepreneurial behaviour – or the lack thereof – to a certain concentration of budgetary spending from the local government? What if the budgetary money has two chemical hooks on it – one hook observable as ‘current spending’ and the other signalling ‘investment’ – and what if the reaction of inhabitants depends on the kind of hook switched on, in the given million of euros (or rather Polish zlotys, or PLN, as we are talking about Polish cities)?

I am returning, for a moment, to the negative correlation between the headcount of population, on the one hand, and the occurrence of new businesses per 10 000 inhabitants, on the other hand. Cities – at least those 7 Polish cities that me and my friend did our research on – are finite spaces. Less people in the city means less people per 1 km2, and vice versa. Hence, the occurrence of entrepreneurial behaviour is negatively correlated with the density of population. A behavioural pattern emerges. The residents of big cities in Poland develop entrepreneurial behaviour in response to greater a concentration of current budgetary spending by local governments, and to lower a density of population. On the other hand, greater a density of population, or less money spent as current payments from the local budget, acts as an inhibitor of entrepreneurship. Mind you, greater a density of population means greater a need for infrastructure – yes, those humans tend to crap and charge their smartphones all over the place – whence greater a pressure on the local governments to spend money in the form of investment in fixed assets, whence the secondary, weaker negative correlation between entrepreneurial behaviour and investment outlays from local budgets.

This is a general, behavioural hypothesis. Now, the cognitive challenge consists in translating the general idea into as precise empirical hypotheses as possible. What precise states of nature can happen in those cities? This is when artificial intelligence – a neural network – can serve, and this is when I finally understand where that idea of investment fund had come from. A neural network is good at producing plausible combinations of values in a pre-defined set of variables, and this is what we need if we want to formulate precise hypotheses. Still, a neural network is made for learning. If I want the thing to make those hypotheses for me, I need to give it a purpose, i.e. a variable to optimize, and learn as it is optimizing.

In social sciences, entrepreneurial behaviour is assumed to be a good thing. When people recurrently start new businesses, they are in a generally go-getting frame of mind, and this carries over into social activism, into the formation of institutions etc. In an initial outburst of neophyte enthusiasm, I might program my neural network so as to optimize the coefficient of startups per 10 000 inhabitants. There is a catch, though. When I tell a neural network to optimize a variable, it takes the most likely value of that variable, thus, stochastically, its arithmetical average, and it keeps recombining all the other variables so as to have this one nailed down, as close to that most likely value as possible. Therefore, if I want a neural network to imagine relatively high occurrences of entrepreneurial behaviour, I shouldn’t set said behaviour as the outcome variable. I should mix it with others, as an input variable. It is very human, by the way. You brace for achieving a goal, you struggle the s**t out of yourself, and you discover, with negative amazement, that instead of moving forward, you are actually repeating the same existential pattern over and over again. You can set your personal compass, though, on just doing a good job and having fun with it, and then, something strange happens. Things get done without you even noticing when and how. Goals get nailed down even without being phrased explicitly as goals. And you are having fun with the whole thing, i.e. with life.

Same for artificial intelligence, as it is, as a matter of fact, an artful expression of our own, human intelligence: it produces the most interesting combinations of variables as a by-product of optimizing something boring. Thus, I want my neural network to optimize on something not-necessarily-fascinating and see what it can do in terms of people and their behaviour. Here comes the idea of an investment fund. As I have been racking my brains in the search of place where that idea had come from, I finally understood: an investment fund is both an institutional scheme, and a metaphor. As a metaphor, it allows decomposing an aggregate stream of investment into a set of more or less autonomous projects, and decisions attached thereto. An investment fund is a set of decisions coordinated in a dynamically correlated manner: yes, there are ways and patterns to those decisions, but there is a lot of autonomous figuring-out-the-thing in each individual case.
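Here is a deliberately tiny sketch of that trick in Python: a single linear neuron chases a boring target value, the input variables (the coefficient of startups included) are allowed to drift as it learns, and every intermediate combination of inputs gets recorded as an alternative state of nature. It is a toy with made-up numbers, not the actual network from the article:

```python
import random

random.seed(4)

# Inputs, standardised to 0..1: [investment outlays, total budget,
# population, startups per 10 000 inhabitants]. Values are invented.
x = [0.5, 0.5, 0.5, 0.5]
w = [random.uniform(-0.1, 0.1) for _ in x]
target = 0.7          # the deliberately boring outcome variable
rate = 0.05           # learning rate

states = []           # recorded alternative states of nature
for epoch in range(200):
    y = sum(wi * xi for wi, xi in zip(w, x))    # single linear neuron
    error = target - y
    # learn: adjust the weights towards the target...
    w = [wi + rate * error * xi for wi, xi in zip(w, x)]
    # ...and let the inputs drift a little, i.e. the system experiments
    # with itself, clamped to the 0..1 scale
    x = [min(1.0, max(0.0, xi + rate * error * wi)) for xi, wi in zip(x, w)]
    states.append(list(x))

final_y = sum(wi * xi for wi, xi in zip(w, x))
print(round(final_y, 2))                 # close to the boring target
print([round(v, 2) for v in states[-1]]) # one plausible combination of inputs
```

The interesting output is not `final_y` at all: it is the 200 recorded states, i.e. the combinations of budget, population and entrepreneurship the network found plausible on its way to the target.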

Thus, if I want to put functionally together those two social phenomena – investment channelled by local governments and entrepreneurial behaviour in local population – an investment fund is a good institutional vessel to that purpose. Local government invests in some assets, and local homo sapiens do the same in the form of startups. What if we mix them together? What if the institutional scheme known as public-private partnership becomes something practiced serially, as a local market for ideas and projects?

When we were designing that financial scheme for local governments, me and my friend had the idea of dropping a bit of crowdfunding into the cooking pot, and, as strange as it could seem, we are a bit confused as to where this idea came from. Why did we think about crowdfunding? If I want to understand how a piece of artificial intelligence simulates collective intelligence in a social structure, I need to understand what kind of logical connections I had projected into the neural network. Crowdfunding is sort of spontaneous. When I have a look at the typical conditions proposed by businesses crowdfunded at Kickstarter or at StartEngine, these are shitty contracts, with all due respect. Having a Master’s in law, when I look at the contracts offered to investors in those schemes, I wouldn’t sign such a contract if I had any room for negotiation. I wouldn’t even sign a contract the way I am supposed to sign it via a crowdfunding platform.

There is quite a strong body of legal and business science to claim that crowdfunding contracts are a serious disruption to the established contractual patterns (Savelyev 2017[2]). Crowdfunding largely rests on the so-called smart contracts, i.e. agreements written and signed as software on Blockchain-based platforms. Those contracts are unusually flexible, as each amendment, would it be general or specific, can be hash-coded into the history of the individual contractual relation. That turns a large part of legal science on its head. The basic intuition of any trained lawyer is that we negotiate the s**t out of ourselves before the signature of the contract, thus before the formulation of general principles, and anything that happens later is just secondary. With smart contracts, we are pretty relaxed when it comes to setting the basic skeleton of the contract. We just put the big bones in, and expect we’re gonna make up the more sophisticated stuff as we go along.

With the abundant usage of smart contracts, crowdfunding platforms have a peculiar legal flexibility. Today you sign up for a discount of 10% on one Flower Turbine, in exchange for £400 in capital crowdfunded via a smart contract. Next week, you learn that you can turn your 10% discount on one turbine into 7% on two turbines if you drop just £100 more into that piggy bank. Already the first step (£400 against the discount of 10%) would be a bit hard to squeeze into classical contractual arrangements for investing in the equity of a business, let alone the subsequent amendment (Armour, Enriques 2018[3]).

Yet, with a smart contract on a crowdfunding platform, anything is just a few clicks away, and, as astonishing as it could seem, the whole thing works. The click-based smart contracts are actually enforced and respected. People do sign those contracts, and moreover, when I mentally step out of my academic lawyer’s shoes, I admit being tempted to sign such a contract too. There is a specific behavioural pattern attached to crowdfunding, something like the Russian ‘Davaj, riebiata!’ (‘Давай, ребята!’ in the original spelling). ‘Let’s do it together! Now!’, that sort of thing. It is almost as if I were giving someone the power of attorney to be entrepreneurial on my behalf. If people in big Polish cities found more and more startups per 10 000 residents, it is a more and more recurrent manifestation of entrepreneurial behaviour, and crowdfunding touches the very heart of entrepreneurial behaviour (Agrawal et al. 2014[4]). It is entrepreneurship broken into small, tradable units. The whole concept we invented is generally placed in the European context, and in Europe crowdfunding is way below the popularity it has reached in North America (Rupeika-Apoga, Danovi 2015[5]). As a matter of fact, European entrepreneurs seem to consider crowdfunding as really a secondary source of financing.

Time to sum up a bit all those loose thoughts. Using a neural network to simulate collective behaviour of human societies involves a few deep principles, and a few tricks. When I study a social structure with classical stochastic tools and I encounter strange, apparently paradoxical correlations between phenomena, artificial intelligence may serve. My intuitive guess is that a neural network can help in clarifying what is sometimes called ‘background correlations’ or ‘transitive correlations’: variable A is correlated with variable C through the intermediary of variable B, i.e. A is significantly correlated with B, and B is significantly correlated with C, but the correlation between A and C remains insignificant.
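A quick synthetic illustration of that transitive pattern, with invented data rather than the Polish cities sample: A drives B, B drives C, and the direct A–C correlation comes out visibly weaker than either of the two links:

```python
import random

random.seed(7)

def pearson(u, v):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

n = 70                                       # same sample size as in the study
A = [random.gauss(0, 1) for _ in range(n)]
B = [a + random.gauss(0, 1.0) for a in A]    # A -> B, with noise
C = [b + random.gauss(0, 2.0) for b in B]    # B -> C, with more noise

print(round(pearson(A, B), 2), round(pearson(B, C), 2), round(pearson(A, C), 2))
```

Both links are strong, yet the end-to-end A–C correlation is diluted by two layers of noise; with real, messy social data it can easily slip below significance.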

When I started to use a neural network in my research, I realized how important it is to formulate very precise and complex hypotheses rather than definitive answers. Artificial intelligence allows sketching quickly alternative states of nature, by the gazillion. For a moment, I am leaving the topic of those financial solutions for cities, and I return to my research on energy, more specifically on energy efficiency. In a draft article I wrote last autumn, I started to study the relative impact of the velocity of money, as well as that of the speed of technological change, upon the energy efficiency of national economies. Initially, I approached the thing in a nice, classically stochastic way. I came up with conclusions of the type: ‘variance in the supply of money makes 7% of the observable variance in energy efficiency, and the correlation is robust’. Good, this is a step forward. Still, in practical terms, what does it give? Does it mean that we need to add money to the system in order to have greater an energy efficiency? Might well be the case, only you don’t add money to the system just like that, ‘cause most of said money is account money on current bank accounts, and the current balances of those accounts reflect the settlement of obligations resulting from complex private contracts. There is no government that could possibly add more complex contracts to the system.

Thus, stochastic results, whilst looking and sounding serious and scientific, have but a remote connexion to practical applications. On the other hand, if I take the same empirical data and feed it into a neural network, I get alternative states of nature, and those states are bloody interesting. Artificial intelligence can show me, for example, what happens to energy efficiency if a social system is more or less conservative in its experimenting with itself. In short, artificial intelligence allows super-fast simulation of social experiments, and that simulation is theoretically robust.




Lean, climbing trends

My editorial on You Tube

Our artificial intelligence: the working title of my research, for now. Volume 1: Energy and technological change. I am doing a little bit of rummaging in available data, just to make sure I keep contact with reality. Here comes a metric: access to electricity in the world, measured as the % of total human population[1]. The trend line looks proudly ascending. In 2016, 87,38% of mankind had at least one electric socket in their place. Ten years earlier, by the end of 2006, that share was 81,2%. Optimistic. Looks like something growing almost linearly. Another one: « Electric power transmission and distribution losses »[2]. This one looks different: instead of a clear trend, I observe something shaking and oscillating, with the width of variance narrowing gently down as time passes. By the end of 2014 (the last data point in this dataset), we were globally at 8,25% of electricity lost in transmission. The lowest coefficient of loss occurred in 1998: 7,13%.

I move from distribution to production of electricity, and to its percentage supplied from nuclear power plants[3]. Still another shape, that of a steep bell with surprisingly lean edges. Initially, around 2% of global electricity was supplied by the nuclear. At the peak of fascination, it was 17,6%, and by the end of 2014, we had gone down to 10,6%. The thing seems to be temporarily stable at this level. As I move to water, and to the percentage of electricity derived from the hydro[4], I see another type of change: a deeply serrated, generally descending trend. In 1971, we had 20,2% of our total global electricity from the hydro, and by the end of 2014, we were at 16,24%. In the meantime, it looked like a rollercoaster. Yet, as I have a look at other renewables (i.e. other than hydroelectricity) and their share in the total supply of electricity[5], the corresponding curve looks like a snake trying to figure something out about a vertical wall. Between 1971 and 1988, the share of those other renewables in the total electricity supplied moved from 0,25% to 0,6%. Starting from 1989, it is almost perfectly exponential growth, reaching 6,77% in 2015.

Just to have a complete picture, I shift slightly, from electricity to energy consumption as a whole, and I check the global share of renewables therein[6]. Surprise! This curve does not behave at all as one would expect after having seen the previously cited share of renewables in electricity. Instead of a snake sniffing a wall, we can see a snake viewed from above, or something like a meandering river. This seems to be a cycle of some 25 years (could it be Kondratiev’s?), with a peak around 18% of renewables in the total consumption of energy, and a trough somewhere around 16,9%. Right now, we seem to be close to the peak.

I am having a look at the big, ugly brother of hydro: the oil, gas and coal sources of electricity and their share in the total amount of electricity produced[7]. Here, I observe a different shape of change. Between 1971 and 1986, the fossils dropped their share from 62% to 51,47%. Then, it rocketed back up to 62% in 1990. Later, a slowly ascending trend started, reached a peak, and oscillated for a while around some 65 ÷ 67% between 2007 and 2011. Since then, the fossils have been dropping again: the short-term trend is descending.

Finally, one of the basic metrics I have been using frequently in my research on energy: the final consumption thereof, per capita, measured in kilograms of oil equivalent[8]. Here, we are back in the world of relatively clear trends. This one is ascending, with some bumps on the way, though. In 1971, we were at 1336,2 koe per person per year. In 2014, it was 1920,655 koe.

Thus, what are all those curves telling me? I can see three clearly different patterns. The first is the ascending trend, observable in the access to electricity, in the consumption of energy per capita, and, since the late 1980s, in the share of electricity derived from renewable sources. The second is a cyclical variation: the share of renewables in the overall consumption of energy, to some extent the relative importance of hydroelectricity, as well as that of the nuclear. Finally, I can observe a descending trend in the relative importance of the nuclear since 1988, as well as in some episodes from the life of hydroelectricity, coal and oil.

On top of that, I can distinguish different patterns in, respectively, the production of energy, on the one hand, and its consumption, on the other. The former seems to change along relatively predictable, long-term paths. The latter looks like a set of parallel, and partly independent, experiments with different sources of energy. We are collectively intelligent: I deeply believe that. I mean, I hope. If bees and ants can be smarter collectively than individually, there is some potential in us as well.

Thus, I am progressively designing a collective intelligence which experiments with various sources of energy, just to produce those two relatively lean, climbing trends: more energy per capita, and an ever-growing percentage of people with access to electricity. Which combinations of variables can produce a rationally desired energy efficiency? How does the supply of money change as we reach different levels of energy efficiency? Can artificial intelligence make energy policies? Empirical check: take a real energy policy and build a neural network which reflects the logical structure of that policy. Then add a method of learning and see what it produces as a hypothetical outcome.

What is the cognitive value of hypotheses made with a neural network? The answer to this question starts with another question: how do hypotheses made with a neural network differ from any other set of hypotheses? The hypothetical states of nature produced by a neural network reflect the outcomes of logically structured learning. The process of learning should represent real social change and real collective intelligence. There are four important distinctions I have observed so far in this respect: a) awareness of internal cohesion, b) internal competition, c) relative resistance to new information, and d) perceptual selection (different ways of standardizing input data).

The awareness of internal cohesion, in a neural network, is a function that feeds into the consecutive experimental rounds of learning the information on relative cohesion (Euclidean distance) between variables. We assume that each variable used in the neural network reflects a sequence of collective decisions in the corresponding social structure. Cohesion between variables represents the functional connection between sequences of collective decisions. Awareness of internal cohesion, as a logical attribute of a neural network, corresponds to situations when societies are aware of how mutually coherent their different collective decisions are. The lack of such logical feedback represents situations when societies do not have that internal awareness.
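A minimal sketch of such a cohesion feedback, in Python. The shape of the data, the random weights, and the choice to inject the mean pairwise Euclidean distance as an extra input column are all my own assumptions for illustration, not a canonical recipe:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_euclidean_cohesion(data):
    # Mean pairwise Euclidean distance between variables (columns): a crude
    # measure of how mutually coherent the sequences of decisions are.
    n = data.shape[1]
    dists = [np.linalg.norm(data[:, i] - data[:, j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def forward_with_cohesion(data, weights, cohesion_weight=0.1):
    # Inject the network's awareness of its own cohesion as an extra,
    # constant input column appended to the observed variables.
    cohesion = mean_euclidean_cohesion(data)
    extra = np.full((data.shape[0], 1), cohesion * cohesion_weight)
    augmented = np.hstack([data, extra])
    return sigmoid(augmented @ weights)

rng = np.random.default_rng(0)
data = rng.random((20, 3))      # 20 observations, 3 "collective decision" variables
weights = rng.random((4, 1))    # 3 variables + 1 cohesion input
output = forward_with_cohesion(data, weights)
print(output.shape)             # one activation per observation
```

A network without that awareness would simply skip the extra column; in consecutive rounds of learning, the cohesion value changes as the network's decisions change, which is what makes it a feedback rather than a constant.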

As I metaphorically look around, I ask myself what awareness I have of important collective decisions in my local society. I can observe, and find patterns in, people’s behaviour, for one. Next thing: I can read (very literally) the formalized, official information regarding legal issues. On top of that, I can study (read, mostly) quantitatively formalized information on measurable attributes of the society, such as GDP per capita, supply of money, or emissions of CO2. Finally, I can get semi-formalized information from what we call “media”, whatever prefix they come with: mainstream media, social media, rebel media, the-only-true-media etc.

As I look back upon my own life and the changes which I have observed on those four levels of social awareness, the fourth one, namely the media, has been, and still is, the biggest game changer. I remember the cultural earthquake in 1990 and later, when, after decades of state-controlled media in communist Poland, we suddenly had free press and complete freedom of publishing. Man! It was like one of those moments when you step out of a calm, dark alleyway right into the middle of heavy traffic in the street. Information, it just wheezed past.

There is something about media, both those called ‘mainstream’ and modern platforms like Twitter or You Tube: they adapt to their audience, and the pace of that adaptation is accelerating. With Twitter, it is obvious: when I log into my account, I can see Tweets only from the people and organizations I have specifically subscribed to. With You Tube, on my starting page, I can see my subscribed channels, for one, and a ton of videos suggested by artificial intelligence on the grounds of what I watched in the past. Still, the mainstream media go down the same avenue. When I go to bbc.com, the types of news presented are very largely what the editorial team hopes will max out on clicks per hour, which, in turn, is based on the types of news that totalled the most clicks in the past. The same was true for printed newspapers, 20 years ago: the stuff that got into headlines was the kind of stuff that made sales.

Thus, when I simulate the collective intelligence of a society with a neural network, the function allowing the network to observe its own internal cohesion seems akin to the presence of media platforms. Actually, I have already observed, many times, that adding this specific function to a multi-layer perceptron (a type of neural network) makes that perceptron less cohesive. Looks like a paradox: observing the relative cohesion between its own decisions makes a piece of AI less cohesive. Still, real life confirms that observation. Social media favour the phenomenon known as the « echo chamber »: if I want, I can expose myself only to information that minimizes my cognitive dissonance, and cut myself off from anything that pumps my adrenaline up. On a large scale, this behavioural pattern produces a galaxy of relatively small groups encapsulated in highly distilled, mutually incoherent worldviews. Have you ever wondered what it would be like to use GPS navigation to find your way in the company of a hardcore flat-Earther?

When I run my perceptron over samples of data regarding the energy efficiency of national economies, including the feedback on the so-called fitness function is largely equivalent to simulating a society with abundant media activity. The absence of such feedback is, on the other hand, like a society without much of a media sector.

Internal competition, in a neural network, is the deep underlying principle behind structuring a multi-layer perceptron into separate layers, and manipulating the number of neurons in each layer. Let’s suppose I have two neural layers in a perceptron: A and B, in this exact order. If I put three neurons in layer A, and one neuron in layer B, the one in B will be able to choose between the 3 signals sent from layer A. Seen from the A perspective, each neuron in A has to compete against the two others for the attention of the single neuron in B. Choice on one end of a synapse equals competition on the other end.
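The A-versus-B setup described above can be sketched in a few lines. The random numbers are placeholders, and the whole thing is a toy illustration rather than a trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

x = rng.random(5)            # raw input
w_A = rng.random((5, 3))     # layer A: three competing neurons
w_B = rng.random((3, 1))     # layer B: a single neuron choosing among them

a = sigmoid(x @ w_A)         # three rival signals out of layer A
b = sigmoid(a @ w_B)         # one neuron in B weighs, i.e. chooses among, the three

# The weights in w_B are the "attention" the B neuron pays to each A neuron;
# training would shift that attention, rewarding some A neurons over others.
print(a.shape, b.shape)
```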

When I want to introduce choice in a neural network, I need to introduce internal competition as well. If any neuron is to have a choice between processing input A and its rival, input B, there must be at least two distinct neurons – A and B – in a functionally distinct, preceding neural layer. In a collective intelligence, choice requires competition, and there seems to be no way around it. In a real brain, neurons form synaptic sequences, which means that the great majority of our neurons fire because other neurons have fired beforehand. We very largely think because we think, not because something really happens out there. Neurons in charge of the early-stage collection of sensory data compete for the attention of our brain stem, which, in turn, proposes its pre-selected information to the limbic system, and the emotional exultation of the latter incites the cortical areas to think about the whole thing. From there, further cortical activity happens just because other cortical activity has been happening so far.

I propose a quick self-check: think about what you are thinking right now, and ask yourself how much of what you are thinking about is really connected to what is happening around you. Are you thinking a lot about the gradient of temperature close to your skin? No, not really? Really? Are you giving a lot of conscious attention to the chemical composition of the surface you are touching right now with your fingertips? Not really a lot of conscious thinking about this one either? Now, how much conscious attention are you devoting to what [fill in the blank] said about [fill in the blank], yesterday? Quite a lot of attention, isn’t it?

The point is that some ideas die out, in us, quickly and sort of silently, whilst others are tough survivors and keep popping up to the surface of our awareness. Why? How does it happen? What if there is some kind of competition between synaptic paths? Thoughts, or components thereof, that win one stage of the competition pass to the next, where they compete again.           

Internal competition requires complexity. There needs to be something to compete for: a next step in the chain of thinking. A neural network with internal competition reflects a collective intelligence with internal hierarchies that offer rewards. Interestingly, there is research showing that greater complexity gives more optimizing accuracy to a neural network, but only as long as we are talking about really low complexity, like 3 layers of neurons instead of 2. As complexity develops further, accuracy decreases noticeably. Complexity is not the best solution for optimization: see Olawoyin and Chen (2018[9]).

Relative resistance to new information corresponds to the way an intelligent structure deals with cognitive dissonance. In order to have any cognitive dissonance whatsoever, we need at least two pieces of information: one that we have already appropriated as our knowledge, and the new stuff, which could possibly disturb the placid self-satisfaction of the I-already-know-how-things-work. Cognitive dissonance is a potent factor of stress in human beings as individuals, and in whole societies. Galileo would have a few words to say about it. Question: how to represent, in mathematical form, the stress connected to cognitive dissonance? My provisional answer is: by division. Cognitive dissonance means that I consider my acquired knowledge more valuable than new information. If I want to decrease the importance of B in relation to A, I divide B by a factor greater than 1, whilst leaving A as it is. The denominator of new information is supposed to grow over time: I am more resistant to the really new stuff than I am to the already slightly processed information, which was new yesterday. In a more elaborate form, I can use exponential progression (see The really textbook-textbook exponential growth).
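A minimal numeric sketch of that division. The function name and the exponential form of the denominator are my own assumptions about one possible ‘more elaborate form’:

```python
import math

def resist(new_info, freshness, base=math.e):
    # Divide new information by a denominator that grows with its freshness:
    # the really new stuff gets the biggest denominator, i.e. the strongest
    # resistance, whilst fully digested information passes untouched.
    return new_info / (base ** freshness)

acquired = 1.0                              # old knowledge is left as it is
fully_digested = resist(0.5, freshness=0)   # divided by 1: passes untouched
brand_new = resist(0.5, freshness=2)        # divided by e squared
print(fully_digested, round(brand_new, 4))
```

The same signal, 0,5, arrives almost intact when it is already familiar, and heavily muted when it is brand new, which is exactly the stress-in-the-presence-of-dissonance behaviour described above.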

I noticed an interesting property of the neural network I use for studying energy efficiency. When I introduce choice, internal competition and hierarchy between neurons, the perceptron goes sort of wild: it produces increasing error instead of decreasing error, so it basically learns how to swing more between possible states, rather than how to narrow its own trial and error down to one recurrent state. When I add a pinch of resistance to new information, i.e. when I purposefully create stress in the presence of cognitive dissonance, the perceptron calms down a bit, and can produce a decreasing error.

Selection of information can occur already at the level of primary perception. I developed this one in « Thinking Poisson, or ‘WTF are the other folks doing?’ ». Let’s suppose that new science comes in as to how to use particular sources of energy. We can imagine two scenarios of reaction to that new science. On the one hand, the society can react in a perfectly flexible way, i.e. each new piece of scientific research gets evaluated for its real utility for energy management, and gets smoothly included into the existing body of technologies. On the other hand, the same society (well, not quite the same, an alternative one) can sharply sort those new pieces of science into ‘useful stuff’ and ‘crap’, with little nuance in between.
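Those two modes of perceptual selection can be sketched as two standardization functions: a logistic curve standing in for the perfectly flexible society, and a hard threshold for the ‘useful stuff vs crap’ one. The signal values are invented:

```python
import math

def flexible_perception(signal):
    # Smooth standardization: every new piece of science gets a nuanced,
    # graded utility between 0 and 1.
    return 1.0 / (1.0 + math.exp(-signal))

def sharp_perception(signal, threshold=0.0):
    # Binary standardization: 'useful stuff' or 'crap', nothing in between.
    return 1.0 if signal > threshold else 0.0

# The same stream of scientific signals, perceived by the two societies:
for s in (-1.2, 0.3, 2.5):
    print(round(flexible_perception(s), 3), sharp_perception(s))
```

Feeding one or the other into the input layer of a perceptron is one simple way of simulating those two alternative societies on the same empirical data.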

What do we know about collective learning and collective intelligence? Three essential traits come to my mind. Firstly, we make social structures, i.e. recurrent combinations of social relations, and those structures tend to be quite stable. We like having stable social structures. We almost instinctively create rituals, rules of conduct, enforceable contracts etc.; thus we make stuff that is supposed to make the existing stuff last. An unstable social structure is prone to wars, coups etc. Our collective intelligence values stability. Still, stability is not the same as perfect conservatism: our societies have imperfect recall. This is the second important trait. Over (long periods of) time, we collectively shake off and replace old rules of social games with new rules, and we do it without disturbing the fundamental social structure. In other words: stable as they are, our social structures have mechanisms of adaptation to new conditions, and yet those mechanisms require forgetting something about our past. OK, not just something: we collectively forget a shitload of stuff. Thirdly, there have been many local human civilisations, and each of them eventually collapsed, i.e. their fundamental social structures disintegrated. The civilisations we have made so far had a limited capacity to learn. Sooner or later, they would bump against a challenge which they were unable to adapt to. The mechanism of collective forgetting and shaking off, in every historically documented case, had a limited efficiency.

I intuitively guess that simulating collective intelligence with artificial intelligence is likely to be most fruitful when we simulate various capacities to learn. I think we can model something like a perfectly adaptable collective intelligence, i.e. one which has no cognitive dissonance and processes information uniformly over time, whilst having a broad range of choice and internal competition. Such a neural network behaves in the opposite way to what we tend to associate with AI: instead of optimizing and narrowing down the margin of error, it creates new alternative states, possibly in a broadening range. This is a collective intelligence with lots of capacity to learn, but little capacity to steady itself as a social structure. From there, I can muzzle the collective intelligence with various types of stabilizing devices, making it progressively more structure-making and less flexible. Down that avenue lies the solver type of artificial intelligence: a neural network that just solves a problem, with one, temporarily optimal solution.



[1] https://data.worldbank.org/indicator/EG.ELC.ACCS.ZS last access May 17th, 2019

[2] https://data.worldbank.org/indicator/EG.ELC.LOSS.ZS?end=2016&start=1990&type=points&view=chart last access May 17th, 2019

[3] https://data.worldbank.org/indicator/EG.ELC.NUCL.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[4] https://data.worldbank.org/indicator/EG.ELC.HYRO.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[5] https://data.worldbank.org/indicator/EG.ELC.RNWX.ZS?type=points last access May 17th, 2019

[6] https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS?type=points last access May 17th, 2019

[7] https://data.worldbank.org/indicator/EG.ELC.FOSL.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[8] https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE?type=points last access May 17th, 2019

[9] Olawoyin, A., & Chen, Y. (2018). Predicting the Future with Artificial Neural Network. Procedia Computer Science, 140, 383-392.

Thinking Poisson, or ‘WTF are the other folks doing?’

My editorial on You Tube

I think I have just put a nice label on all those ideas I have been rummaging through for the last 2 years. The last 4 months, when I have been progressively initiating myself into artificial intelligence, have helped me put it all in a nice frame. Here is the idea for a book, or rather for THE book, which I have been drafting for some time. « Our artificial intelligence »: this is the general title. The first big chapter, which might very well turn into the first book of a whole series, will be devoted to energy and technological change. After that, I want to have a go at two other big topics: food and agriculture, then laws and institutions.

Let me explain. What does « Our artificial intelligence » mean? As I have been working with an initially simple algorithm of a neural network, and progressively developing it, I have understood a few things about the link between what we call, for lack of a better word, artificial intelligence, and the way my own brain works. No, not my brain. It would be an overstatement to say that I fully understand my own brain. My mind, that is the right expression. What I call « mind » is an idealized, i.e. linguistic, description of what happens in my nervous system. As I have been working with a neural network, I have discovered that the artificial intelligence I make, and use, is a mathematical expression of my mind. I project my way of thinking into a set of mathematical expressions, made into an algorithmic sequence. When I run the sequence, I have the impression of dealing with something clever, yet slightly alien: an artificial intelligence. Still, when I stop staring at the thing, and start thinking about it scientifically (you know: initial observation, assumptions, hypotheses, empirical check, new assumptions and new hypotheses etc.), I become aware that the alien thing in front of me is just a projection of my own way of thinking.

This is important about artificial intelligence: this is our own, human intelligence, just seen from outside and projected into electronics. This particular point is an important piece of theory I want to develop in my book. I want to compile research in neurophysiology, especially in the neurophysiology of meaning, language, and social interactions, in order to give scientific clothes to that idea. When we sometimes ask ourselves whether artificial intelligence can eliminate humans, it boils down to asking: ‘Can human intelligence eliminate humans?’. Well, where I come from, i.e. Central Europe, the answer is certainly ‘yes, it can’. As a matter of fact, when I raise my head and look around, the same answer is true for any part of the world. Human intelligence can eliminate humans, and it can do so because it is human, not because it is ‘artificial’.

When I think about the meaning of the word ‘artificial’, it comes from the Latin ‘artificium’, which designates something made with skill and demonstrable craft. Artificium means seasoned skills made into something durable so as to express those skills. Artificial intelligence is a crafty piece of work made with one of the big human inventions: mathematics. Artificial intelligence is mathematics at work. Really at work, i.e. not just as another idealization of reality, but as an actual tool. When I study the working of algorithms in neural networks, I have a vision of an architect in Ancient Greece, where the first mathematics we know of seem to come from. I have a wall and a roof, and I want them both to hold in balance, so what is the proportion between their respective lengths? I need to learn it by trial and error, as I don’t have any architectural knowledge yet. Although devoid of science, I have common sense, and I make small models of the building I want (have?) to erect, and I test various proportions. Some of those maquettes are more successful than others. I observe, I make my synthesis about the proportions which give the least error, and so I come up with something like the Pythagorean z² = x² + y², something like π = 3,14 etc., or the discovery that, for a given angle, the tangent proportion y/x always makes the same number, whatever the empirical lengths of y and x.

This is exactly what artificial intelligence does. It makes small models of itself, tests the error resulting from comparing those models with something real, and generalizes the observation of those errors. Really: this is what a face-recognition piece of software does at an airport, or what Google Ads does. This is human intelligence, just unloaded into a mathematical vessel. This is the first discovery I have made about AI: artificial intelligence is actually our own intelligence. Studying the way AI behaves allows seeing, as under a microscope, the workings of human intelligence.
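A hedged sketch of that trial-and-error loop, with the Greek architect’s tangent as the thing to learn. The true ratio, the sample lengths and the learning rate are all invented for the illustration, not taken from any actual model of mine:

```python
import random

# Learning a fixed proportion by trial and error, the way the imaginary Greek
# architect would: guess the ratio, measure the error on a few small models,
# adjust, repeat.
random.seed(42)
true_ratio = 0.75                              # the tangent we do not know yet
samples = [(x, true_ratio * x) for x in (2.0, 3.0, 5.0)]

guess, rate = 0.0, 0.01
for _ in range(200):
    x, y = random.choice(samples)
    error = guess * x - y                      # compare the model with reality
    guess -= rate * error * x                  # adjust in proportion to the error

print(round(guess, 3))                         # settles on the true proportion
```

After a couple of hundred maquettes, the guess sits at the hidden ratio, without the learner ever having been told what that ratio was.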

The second discovery is that when I put a neural network to work with empirical data of social sciences, it produces strange, intriguing patterns, something like neighbourhoods of the actual reality. In my root field of research – namely economics – there is a basic concept that we, economists, use a lot and still wonder what it actually means: equilibrium. It is an old observation that networks of exchange in human societies tend to find balance in some precise proportions, for example proportions between demand, supply, price and quantity, or those between labour and capital.

Half of economic sciences is about explaining the equilibriums we can empirically observe. The other half employs itself at discarding what the first half comes up with. Economic equilibriums are something we know exist; we constantly try to understand their mechanics, but those states of society remain obscure to a large extent. What we know is that networks of exchange are like machines: some designs just work, some others just don’t. One of the most important arguments in economic sciences is whether a given society can find many alternative equilibriums, i.e. whether it can use its resources optimally at many alternative proportions between economic variables, or whether, conversely, there is just one point of balance in a given place and time. From there on, it is a rabbit hole. What does ‘using our resources optimally’ mean? Is it when we have the lowest unemployment, or when we have just some healthy amount of unemployment? Theories are welcome.

When trying to make predictions about the future, using the apparatus of what can now be called classical statistics, social sciences always face the same dilemma: rigour versus cognitive depth. The most interesting correlations are usually somewhat wobbly, and the mathematical functions we derive from regression always leave a lot of residual error.

This is when AI can step in. Neural networks can be used as tools for optimization in digital systems. Still, they have another useful property: observing a neural network at work allows an insight into how intelligent structures optimize. If I want to understand how economic equilibriums take shape, I can observe a piece of AI producing many alternative combinations of the relevant variables. Here comes my third fundamental discovery about neural networks: with a few, otherwise quite simple, assumptions built into the algorithm, AI can produce very different mechanisms of learning, and, consequently, a broad range of those weird yet intellectually appealing alternative states of reality. Here is an example: when I make a neural network observe its own numerical properties, such as its own kernel or its own fitness function, its way of learning changes dramatically. Sounds familiar? When you make a human being perform tasks, and you allow them to see the MRI of their own brain while performing those tasks, the actual performance changes.

When I want to talk about applying artificial intelligence, it is a good thing to return to the sources of my own experience with AI, and to explain how it works. Some sequences of mathematical equations, when run recurrently many times, behave like intelligent entities: they experiment, they make errors, and after many repeated attempts they come up with a logical structure that minimizes the error. I am looking for a good, simple example from real life; a situation which I experienced personally, and which forced me to learn something new. Recently, I went to Marrakech, Morocco, and I had the kind of experience most European first-timers have there: the Jemaa El Fna market place, its surrounding souks, and its merchants. The experience consists in finding your way out of the maze-like structure of the alleys adjacent to the Jemaa El Fna. You walk down an alley, you turn into another one, then into still another one, and what you notice only after quite a few such turns is that the whole architectural structure does not follow AT ALL the European concept of urban geometry.

Thus, you face the length of an alley. You notice five lateral openings and you see a range of lateral passages. In a European town, most of those lateral passages would lead somewhere. A dead end is an exception, and passages between buildings are passages in the strict sense of the term: from one open space to another open space. At Jemaa El Fna, it’s different: most of the lateral ways lead into deep, dead-end niches, with more shops and stalls inside, yet some others open up into other alleys, possibly leading to the main square, or at least to a main street.

You pin down a goal: get back to the main square in less than… what? One full day? Just kidding. Let’s peg that goal down at 15 minutes. For want of a good-quality drone, equipped with thermovision, flying over the whole structure of the souk and guiding you, you need to experiment. You need to test various routes out of the maze and trace those which allow the x ≤ 15 minutes time. If all the possible routes allowed you to get out to the main square in exactly 15 minutes, experimenting would be useless. There is a point in experimenting only if some of the possible routes yield a suboptimal outcome. You are facing a paradox: in order not to make (too many) errors in your future strolls across Jemaa El Fna, you need to make some errors while learning how to stroll through.
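The experiment can be sketched as a toy simulation: a few routes with unknown average passage times, mostly exploited for the quickest one found so far, with occasional deliberate errors to keep learning. All the numbers, route names included, are invented:

```python
import random

random.seed(7)
avg_minutes = {"A": 22.0, "B": 14.0, "C": 17.0}   # hidden from the learner

def walk(route):
    # One noisy passage through the souk: the measured time varies
    # around the route's true average.
    return avg_minutes[route] + random.uniform(-2.0, 2.0)

# Start by trying each route once, then mostly repeat the quickest one,
# with a 10% chance of a deliberate detour (an error made on purpose).
observed = {route: [walk(route)] for route in avg_minutes}
for _ in range(200):
    if random.random() < 0.1:
        route = random.choice(list(avg_minutes))
    else:
        route = min(observed, key=lambda r: sum(observed[r]) / len(observed[r]))
    observed[route].append(walk(route))

best = min(observed, key=lambda r: sum(observed[r]) / len(observed[r]))
print(best)   # the route that gets us out in under 15 minutes
```

The purposeful errors are the whole point: without the occasional detour, the learner could get stuck forever on the first route that happened to look decent.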

Now, imagine a fancy app in your smartphone, simulating the possible errors you can make when trying to find your way through the souk. You could watch an imaginary you, on the screen, wandering through the maze of alleys and dead-ends, learning by trial and error to drive the time of passage down to no more than 15 minutes. That would be interesting, wouldn't it? You could see your possible errors from outside, and you could study the way you can possibly learn from them. Of course, you could always say: 'it is not the real me, it is just a digital representation of what I could possibly do'. True. Still, I can guarantee you: whatever you say, however strong the grip you try to keep on the actual, here-and-now you, you just couldn't help being fascinated.

Is there anything more, beyond fascination, in observing ourselves making many possible future mistakes? Let's think for a moment. I can see, somehow from outside, how a copy of me deals with the things of life. Question: how does the fact of seeing a copy of me trying to find a way through the souk differ from just watching a digital map of said souk, with GPS, such as Google Maps? I tried the latter, and I have two observations. Firstly, in some structures, such as that of the maze-like alleys adjacent to Jemaa El Fna, seeing my own position on Google Maps is of very little help. I cannot put my finger on the exact reason, but my impression is that when the environment becomes just too bizarre for my cognitive capacities, having a bird's eye view of it is virtually no good. Secondly, when I use Google Maps with GPS, I learn very little about my route. I just follow directions on the screen, and ultimately, I get out into the main square, but I know that I couldn't reproduce that route without the device. Apparently, there is no way around learning stuff by myself: if I really want to learn how to move through the souk, I need to mess around with different possible routes. A device that allows me to see how exactly I can mess around looks like it has some potential.

Question: how do I know that what I see, in that imaginary app, is a functional copy of me, and how can I assess the accuracy of that copy? This is, very largely, the rabbit hole I have been diving into for the last 5 months or so. The first path to follow is to look at the variables used. Artificial intelligence works with numerical data, i.e. with local instances of abstract variables. Similarity between the real me, and the me reproduced as artificial intelligence, is to be found in the variables used. In real life, variables are the kinds of things which: a) are correlated with my actions, both as outcomes and as determinants, and b) I care about, and yet I am not bound to be conscious of caring about.

Here comes another discovery I made on my journey through the realm of artificial intelligence: even if, in the simplest possible case, I just make the equations of my neural network so that they represent what I think is the way I think, and I drop some completely random values of the relevant variables into the first round of experimentation, the neural network produces something disquietingly logical and coherent. In other words, if I am even moderately honest in describing, in the form of equations, my way of apprehending reality, the AI I thus create really processes information the way I would.

Another way of assessing the similarity between a piece of AI and myself is to compare the empirical data we use: I can make a neural network think more or less like me if I feed it with an accurate description of my experience so far. In this respect, I discovered something that looks like a keystone in my intellectual structure: as I feed my neural network with more and more empirical data, the scope of possible ways of learning something meaningful narrows down. When I minimise the amount of empirical data fed into the network, the latter can produce interesting, meaningful results via many alternative sequences of equations. As the volume of real-life information swells, some sequences of equations just naturally drop out of the game: they drive the neural network into a state of structural error, where it stops performing calculations.

At this point, I can see some similarity between AI and quantum physics. Quantum mechanics grew as a methodology because it proved to be exceptionally accurate in predicting the outcomes of experiments in physics. That accuracy was based on the capacity to formulate very precise hypotheses regarding empirical reality, and to increase the precision of those hypotheses through the addition of empirical data from past experiments.

Those fundamental observations I made about the workings of artificial intelligence have progressively brought me to use AI in social sciences. An analytical tool has become a topic of research for me. Happens all the time in science, mind you. Geometry, way back in the day, was a thoroughly practical set of tools, which served to make good boats, ships and buildings. With time, geometry has become a branch of science in its own right. In my case, it is artificial intelligence. It is a tool, essentially, invented back in the 1960s and 1970s, and developed over the last 20 years, and it serves practical purposes: facial recognition, financial investment etc. Still, as I have been working with a very simple neural network for the last 4 months, and as I have been developing the logical structure of that network, I am discovering a completely new opening in my research in social sciences.

I am mildly obsessed with the topic of collective human intelligence. I have that deeply rooted intuition that collective human behaviour is always functional with regard to some purpose. I perceive social structures such as financial markets or political institutions as something akin to endocrine systems in a body: a complex set of signals with a random component in their distribution, and yet a very coherent outcome. I follow up on that intuition by assuming that we, humans, are most fundamentally, collectively intelligent regarding our food and energy base. We shape our social structures according to the quantity and quality of available food and non-edible energy. For quite a while, I was struggling with the methodological issue of precise hypothesis-making. What states of human society can be posited as coherent hypotheses, possible to check or, failing that, to speculate about in an informed way?

The neural network I am experimenting with does precisely this: it produces strange, puzzling, complex states, defined by the quantitative variables I use. As I work with that network, I have come to redefine the concept of artificial intelligence. The movie-based view of AI is that it is fundamentally non-human. As I think about it, step by step, AI is human: it has been developed on the grounds of human logic. It is human meaning, and therefore an expression of human neural wiring. It is just selective in its scope. Natural human intelligence has no other way of comprehending but comprehending IT ALL, i.e. the whole of perceived existence. Artificial intelligence is limited in scope: it works just with the data we assign it to work with. AI can really afford not to give a f**k about something otherwise important. AI is focused in the strict sense of the term.

During that recent stay in Marrakech, Morocco, I was observing people around me and their ways of doing things. As is my habit, I am patterning human behaviour. I am connecting the dots about the ways of using energy (for the moment I haven't seen any making of energy yet) and food. I am patterning the urban structure around me and the way people live in it.

Superbly kept gardens and buildings marked by a sense of instability. Human generosity combined with somewhat erratic behaviour in the same humans. Of course, women are fully dressed, from head to toe, but surprisingly enough, men too. With close to 30 degrees Celsius outside, most local dudes are dressed like a Polish guy would dress at 10 degrees Celsius. They dress for the heat as I would dress for noticeable cold. Exquisitely fresh and firm fruit and vegetables are a surprise. After having visited Croatia, on the southern coast of Europe, I would rather expect those tomatoes to be soft and somehow past due. Still, they are excellent. Loads of sugar in very nearly everything. Meat is scarce and tough. All that has already been described and explained by many a researcher, wannabe researchers included. I think about those things around me as local instances of a complex logical structure: a collective intelligence able to experiment with itself. I wonder what other, hypothetical forms this collective intelligence could take, close to the actually observable reality, as well as at some distance from it.

The idea I can see burgeoning in my mind is that I can understand better the actual reality around me if I use some analytical tool to represent slight hypothetical variations in said reality. Human behaviour first. What exactly makes me perceive Moroccans as erratic in their behaviour, and how can I represent it in the form of artificial intelligence? Subjectively perceived erraticism is a perceived dissonance between sequences. I expect a certain sequence to happen in other people’s behaviour. The sequence that really happens is different, and possibly more differentiated than what I expect to happen. When I perceive the behaviour of Moroccans as erratic, does it connect functionally with their ways of making and using food and energy?  

A behavioural sequence is marked by a certain order of actions, and a timing. In a given situation, humans can pick their behaviour from a total basket of Z = {a1, a2, …, az} possible actions. These, in turn, can combine into zPk = z!/(z – k)! = (1*2*…*z) / [1*2*…*(z – k)] possible permutations of k component actions. Each such permutation happens with a certain frequency. The way a human society works can be described as a set of frequencies in the happening of those zPk permutations. Well, that's exactly what a neural network such as mine can do. It operates with values standardized between 0 and 1, and these can be very easily interpreted as frequencies of happening. I have a variable named 'energy consumption per capita'. When I use it in the neural network, I routinely standardize each empirical value over the maximum of this variable in the entire empirical dataset. Still, standardization can carry a bit more of a mathematical twist: a standardized value can be seen as the cumulative probability under the curve of a statistical distribution.
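The counting and the frequency reading above can be checked directly. A minimal sketch, with a made-up basket of z = 4 actions (the labels are placeholders, not observed behaviours):

```python
from itertools import permutations
from math import factorial

# A toy basket of possible actions (labels are placeholders).
Z = ["greet", "haggle", "buy", "walk_away"]
k = 2

# Number of ordered sequences of k actions out of z: zPk = z!/(z-k)!
z = len(Z)
count = factorial(z) // factorial(z - k)
sequences = list(permutations(Z, k))
assert len(sequences) == count  # 4!/2! = 12

# A society can then be described by the frequency of each sequence;
# here we just assign uniform frequencies as an illustration.
freq = {seq: 1 / count for seq in sequences}
print(count, sum(freq.values()))
```

The frequencies sum to 1, which is what lets a network treat values standardized between 0 and 1 as frequencies of happening.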

When I feel like giving it such a twist, I can make my neural network stroll down different avenues of intelligence. I can assume that all kinds of things happen, and all those things are sort of densely packed one next to the other, and some of those things are more expected than others, and thus I can standardize my variables under the curve of the normal distribution. Alternatively, I can see each empirical instance of each variable in my database as a rare event in an interval of time, and then I standardize under the curve of the Poisson distribution. A quick check with the database I am using right now brings an important observation: the same empirical data standardized with a Poisson distribution becomes much more disparate than the same data standardized with the normal distribution. When I use Poisson, I lead my neural network to divide empirical data sharply into important stuff on the one hand, and all the rest, not even worth bothering about, on the other.

Let me give an example. Here comes energy consumption per capita in Ecuador (1992) = 629.221 kg of oil equivalent (koe), the Slovak Republic (2000) = 3,292.609 koe, and Portugal (2003) = 2,400.766 koe. These are three different states of human society, each characterized by a certain level of energy consumption per person per year. I can choose between three different ways of making sense of their disparity. I can see them quite simply as ordinals on a scale of magnitude, i.e. I can standardize them as fractions of the greatest energy consumption in the whole sample. When I do so, they become: Ecuador (1992) = 0.066733839, Slovak Republic (2000) = 0.349207223, and Portugal (2003) = 0.254620211.

In an alternative worldview, I can perceive those three different situations as neighbourhoods of an expected average energy consumption, in the presence of an average, standard deviation from that expected value. In other words, I assume that it is normal that countries differ in their energy consumption per capita, just as it is normal that years of observation differ in that respect. I am thinking normal distribution, and then my three situations come out as: Ecuador (1992) = 0.118803134, Slovak Republic (2000) = 0.556341893, and Portugal (2003) = 0.381628627.

I can adopt an even more convoluted approach. I can assume that energy consumption in each given country is the outcome of a unique, hardly reproducible process of local adjustment. Each country, with its energy consumption per capita, is a rare event. Seen from this angle, my three empirical states of energy consumed per capita could occur with the probability of the Poisson distribution, estimated with the whole sample of data. With this specific take on the thing, my three empirical values become: Ecuador (1992) = 0, Slovak Republic (2000) = 0.999999851, and Portugal (2003) = 9.4384E-31.
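The three worldviews can be put side by side in code. I cannot reproduce the exact figures above without the full dataset behind them, so the sample maximum, mean, standard deviation and Poisson intensity below are assumed stand-ins, chosen only to make the contrast between the three standardizations visible.

```python
from math import erf, exp, lgamma, log, sqrt

# Energy use per capita, in koe (from the text).
values = {"Ecuador 1992": 629.221,
          "Slovak Republic 2000": 3292.609,
          "Portugal 2003": 2400.766}

# Illustrative sample parameters -- assumed stand-ins, not the
# parameters of the author's actual dataset.
SAMPLE_MAX = 9429.0          # assumed maximum of the sample, in koe
MU, SIGMA = 3000.0, 2000.0   # assumed sample mean and std deviation
LAM = 3000.0                 # assumed Poisson intensity

def std_max(x):
    """Ordinal view: fraction of the greatest value in the sample."""
    return x / SAMPLE_MAX

def std_normal(x):
    """Normal view: cumulative probability of observing a value <= x."""
    return 0.5 * (1.0 + erf((x - MU) / (SIGMA * sqrt(2.0))))

def std_poisson(x):
    """Rare-event view: Poisson CDF, with terms computed in log-space
    to avoid overflowing factorials."""
    return sum(exp(i * log(LAM) - LAM - lgamma(i + 1))
               for i in range(int(x) + 1))

for name, x in values.items():
    print(f"{name}: max={std_max(x):.3f} "
          f"normal={std_normal(x):.3f} poisson={std_poisson(x):.3g}")
```

Even with these stand-in parameters, the max and normal views keep the three countries comparable, while the Poisson view pushes Ecuador and Portugal towards zero and the Slovak Republic towards one: the sharp 'important vs not worth bothering about' split described above.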

I come back to Morocco. I perceive some behaviours in Moroccans as erratic. I think I tend to think Poisson distribution. I expect some very tightly defined, rare event of behaviour, and when I see none around, I discard everything else as completely not fitting the bill. As I think about it, I guess most of our human intelligence is Poisson-based. We think 'good vs bad', 'edible vs inedible', 'friend vs foe' etc.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, let me remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful if you suggest two things that Patreon asks me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Shut up and keep thinking

This time, something went wrong with the uploading of media to the WordPress server, and so I am publishing my video editorial on YouTube only. Click here to see and hear me saying a few introductory words.

I am trying to put some order into all the updates I have written for my research blog. Right now, I am identifying the main strands of my writing. Still, I want to explain why I am doing that sorting of my past thought. I had the idea that, as the academic year is about to start, I could use those past updates as material for teaching. After all, I write this blog in sort of a quasi-didactic style, and a thoughtful compilation of such content can be of help to my students.

Right, so I am disentangling those strands of writing. As for the main ideas, I have been writing mostly about four things: a) the market of renewable energies, b) monetary systems and cryptocurrencies, as well as the FinTech sector, c) political systems, law and institutions, and d) behavioural research. As I review what I wrote along these four lines, a few distinct patterns of writing emerge. The first is the case study, focused on interpreting the financial statements of selected companies. I went down four distinct avenues with that form of expression: a) companies operating in the market of renewable energies, b) investment funds, c) FinTech companies and, lately, d) film and TV companies. Then, as a different form of my writing, come quantitative studies, where I use large databases to run correlations and linear regressions. Finally, there are whole series of updates which, for lack of a better term, I call 'concept development'. They give an account of my personal work on business or scientific concepts, and look very much like daily reports of creative thinking.

Funny, by the way, how I write a lot about behavioural patterns and their importance in social structures, and I have fallen, myself, into recurrent behavioural patterns in my writing. Good, so what I am going to do is to use my readings and findings about behavioural patterns in order to figure out, and make the best possible use of my own behavioural patterns.

How can I use my past writing for educational purposes? I guess that my essential mission, as an educator, consists in communicating an experience in a teachable form, i.e. in a form possible to reproduce, and that reproduction of my experience should be somehow beneficial to other people. Logically, if I want to be an efficient educator in social sciences, what I should do now is distil some sort of essence from my past experience, and formalize it in a teachable form.

My experience is that of looking for recurrent patterns in the most basic phenomena around me. As I am supposed to be clever as a social scientist, let’s settle for social phenomena. Those three distinct forms of my expression correspond to three distinct experiences: focus on one case, search for quantitative data on a s**tload of cases grouped together, and, finally, progressive coining up of complex ideas. This is what I can communicate, as a teacher.

Yet another idea germinates in my mind. I am a being in time, and I thrust myself into the time to come, as Martin Heidegger would say (if he were alive). I define my social role very largely as that of a scientist and a teacher, and I wonder what I am thrusting, of myself as a scientist and a teacher, into this time that is about to advance towards me. I was tempted to answer grandly that it is my passion to discover that I project into current existence. Yet, precisely, I noticed it is grand talk, and I want to go to the core of things, to the flesh of my being in time.

As I take off the pompous, out of that 'passion to discover' thing, something scientific emerges: a sequence. It always starts when I see something interesting, and sort of vaguely useful. I intuitively want to know more about that interesting and possibly useful thing, and so I touch, I explore, I turn it under different angles, and yes, my initial intuition was right: it is interesting and useful. Years ago, even before having my PhD, I was strongly involved in preparing new material for management training. I was part of a team led by a respectable professor from the University of Warsaw, and we were in scientific charge of training for the middle management of a few Polish banks. At the time, I started to read the financial reports of companies listed on the stock market. I progressively figured out that large, publicly listed companies publish periodical reports which seem to be made of two completely different semantic substances.

In those financial reports, there is the corporate small talk, about 'exciting new opportunities', 'controlled growth', 'value for our shareholders', which, honestly, I find interesting for the sake of its peculiar style, seemingly detached from real life. Yet, there is another semantic substance in those reports: the numbers. Numbers tell a different story. Even if the management of a company do their best to disguise some facts so that they look fancier, the numbers tell the truth. They tell the truth about product markets, about doubtful mergers and acquisitions, about the capacity of a business to accumulate capital etc.

As I started to work seriously on my PhD, and to sort out the broadly understood microeconomic theories, including those of the new institutional school, I suddenly realised the connection between those theories and the sense that numbers make in those financial reports. I discovered that financial statements, i.e. the bare numbers, backed with some technical, explanatory notes, tend to show the true face of any business. They are like Ockham's razors, which cut out the b*****it and leave only the really meaningful.

Here comes the underlying, scientifically defined phenomenon. Financial markets have been ever present in human societies. In this respect, I could never recommend enough the monumental work by Fernand Braudel (Braudel 1992a[1]; Braudel 1992b[2]; Braudel 1995[3]). Financial markets have their little ways, and one of them is the charming veil of indefiniteness, put on the facts that laymen should-not-exactly-excite-themselves-about-for-their-own-good. Big business likes to dress in those fancy clothes, made of foggy language. Still, as soon as numbers have to be published, they start telling the true story. However elusive the management of a company may be in their verbal statements, the financials tell the truth. It is fascinating how the introduction of precise measurements and accounts, into a realm of social life where plenty of b*****it floats, instantaneously makes things straight and clear.

I know what you may be thinking now, 'cause I used to think the same when I was (much) younger and listened to lectures at the university: here is that guy, who can be elegantly labelled as more than mature, and he gets excited about his own fascinations, financial reports in this case. Still, I invite you to explore the thing. Financial markets are crucial to the current functioning of our civilisation. We need to shift towards renewable energies, we need to figure out how to make more food in sustainable ways, we need to remove plastic from the oceans, we need to go and see if Mars is an interesting place to hang around: we have a lot of challenges to face. Financial markets are crucial to that end, because they can greatly help in mobilising collective effort, and if we want them to work the way they should, we need to make sure that money goes where it is really needed. Bringing clarity and transparency to finance, over and over again, is really important. Being able to cut through the veil of corporate propaganda and go to the core of business is just as important. Careful reading of financial reports matters. It just matters.

So here is how one of my scientific fascinations formed. More or less at the same epoch, i.e. when I was working on my PhD, I started to work seriously with large datasets, mostly regarding innovation. Patents, patent applications, indicators of R&D effort: I started to go really quantitative about that stuff. I still remember that strange feeling, when synthetic measures of those large datasets started to make sense. I would run some correlations, just because you need a lot of correlations in a PhD in economics, and vlam!: things would start to be meaningful. Those of you who work with Big Data probably know that feeling well, but I was experiencing it in the 1990s, when digital technologies were like the grandparents of the current ones, and even things like Panel Data Analysis, an analytical routine today, were seen as the impressionism of economic research.

I had progressively developed a strongly exploratory manner of working with quantitative data. A friend of mine, the same professor whom I used to work for in those management training projects, called it 'the bulldog approach'. He said: 'Krzysztof, when you find some interesting data, you are like one of those anecdotal bulldogs: you bite into it so strongly that sometimes you don't even know how to let go, and you need someone to come with a crowbar and force your jaws open'. Yes, indeed, this is the very same thing that I have just noticed as I review the past updates in this research blog of mine. What I do with data can be best described as sniffing, rummaging, playing with, digging and biting into – anything but a serious scientific approach.

This is how two of my typical forms of scientific expression – case studies and quantitative studies – formed out of my fascination with the sense coming out of numbers. There is that third form of expression, which I have provisionally labelled ‘concept forming’, and which I developed the most recently, like over the last 18 months, precisely as I started to blog.

I am thinking about the best way to describe my experience in that respect. Here it comes. You have probably experienced those episodes of going outdoors, hiking or running, when you or someone else starts moaning: 'These backpack straps are just killing my shoulders! I am thirsty! I am exhausted! My knees are about to explode!' etc. When I was a kid, I joined the boy scouts, and it was all about hiking. I used to be a fat kid, and that hiking was really killing me, but I liked the company, too, and so I went for it. I used to moan exactly the way I have just portrayed. The team leader would just reply along the lines of 'Just shut up and keep walking! You will adapt!'. Now, I know he was bloody right. There are times in life when we take on something new and challenging, and then it seems just so hard to carry on, and the best way to deal with it is to shut up and carry on. You will adapt.

This is very much what I experienced as regards thinking and writing. When I started to keep this blog, I had a lot of ideas to express (hopefully, I still have), but I was really struggling with giving an intelligible form to those ideas. This is how I discovered the deep truth of that sentence, attributed to Pablo Picasso (although it could be anyone): 'When a stroke of genius comes, it finds me at work'. As strange as it could seem, I experienced, and I am still experiencing, over and over again, the fundamental veracity of that principle. When I start working on an idea, the initial enthusiasm sooner or later yields to some moaning function in my brain: 'F*ck, it is too hard! That thinking about one thing is killing me! And it is sooo complex! I will never sort it out! There is no point!'. Then, hopefully, another part of my brain barks: 'Just shut up, and think, write, repeat! You will adapt'.

And you know what? It works. When, in the presence of a complex concept to figure out I just shut up (metaphorically, I mean I stop moaning), and keep thinking and writing, it takes shape. Step by step, I am sketching the contours of what’s simmering in the depths of my mind. The process is a bit painful, but rewarding.

Thus, here is the pattern of myself which I am thrusting into the future, as it comes to science and teaching, and which, hopefully, I can teach. People around me, voluntarily or involuntarily, attract my attention to some sort of scientific and/or teaching work I should do. This is important, and I have just realized it: I take on goals and targets that other people somehow suggest. I need that social prod to wake me up. As I take on that work, I almost instinctively start flipping my Ockham's razor between and around my intellectual fingers (some people do it with cards, phones, or even knives, you might have spotted it), and I casually give a shave here and there, and I slice observable reality into layers: there is the foam of common narrative about the thing, and there are those factual anchors I can attach to. Usually they are numbers, and, at a deeper philosophical level, they are proportions between things of reality.

As I observe those proportions, I progressively attach them to facts of life, and I start seeing patterns. Those patterns provide me with something more or less interesting to say, and so I maintain my intellectual interaction with other people, and sooner or later they attract my attention to another interesting thing to focus on. And so it goes on. And one day, I die. And what will really matter will be made of the things that I do which outlive me. The ethically valuable things.

Good. I return to that metaphor I coined up some 10 weeks ago, that of social sciences used as a social GPS system, i.e. serving to find one's location in the social space, and then figure out a sensible route to follow. My personal experience, the one I have just given an account of, can serve that purpose. My experience tells me that finding my place in the social space always involves interaction with other people. Understanding, and sort of embracing, my social role, i.e. the way I can be really useful to other people, is the equivalent of finding my location on the social map. Another important thing I discovered as I deconstructed my experience: my social role is largely made of goals I pursue, not just of labels and rituals. It is dynamic; it is very much my Heideggerian being-in-time, thrusting myself into my own immediate future.

I feel like getting it across really precisely: that thrusting-myself-into-the-future thing is not just pure phenomenology. It is hard science as well. We are defined by what we do. By ‘we’ I mean both individuals and whole societies. What we do involves something we are trying to achieve, i.e. some ethical values we seek to maximise, and to balance with other values. Understanding my social role means tracing the path I am moving along.

Now, whatever goal I am to achieve, according to my social role, around me I can see the foam of common narrative, and the factual anchors. The practical use of social sciences consists in finding those anchors, and figuring out the way to use them so as to thrive in the social role we have now, or change that role efficiently. Here comes the outcome from another piece of my personal experience: forming a valuable understanding requires just shutting up and thinking, and discovering things. Valuable discovery goes beyond and involves more than just amazement: it is intimately connected to purposeful work on discovering things.



[1] Braudel, F. (1992). Civilization and Capitalism, 15th–18th Century, Vol. I: The Structure of Everyday Life. University of California Press.

[2] Braudel, F. (1992). Civilization and Capitalism, 15th–18th Century, Vol. II: The Wheels of Commerce. University of California Press.

[3] Braudel, F. (1995). A History of Civilizations. New York: Penguin Books.