As it is ripe, I can harvest

I keep revising my manuscript titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, in order to resubmit it to the journal Applied Energy. In my last update, titled ‘Still some juice in facts’, I used the technique of reverse reading to break the manuscript down into a chain of ideas. Now, I start reviewing the most recent literature associated with those ideas. I start with Rosales-Asensio et al. (2020)[1], i.e. with ‘Decision-making tools for sustainable planning and conceptual framework for the energy–water–food nexus’. The paper comes within a broader stream of literature, which I already mentioned in the first version of my manuscript, namely the so-called MuSIASEM framework, where energy management in national economies is viewed as a metabolic function, and socio-economic systems in general are equated to metabolic structures. Energy, water, food, and land are considered in this paper as sectors of the economic system, i.e. as chains of markets where economic goods are exchanged. We know that energy, water and food are interconnected, and all three are connected to the way our human social structures work. Yet, in the study of those connections, we have been building theoretical models of growing complexity, hardly workable at all when applied to actual policies. Rosales-Asensio et al. propose a method to simplify theoretical models in order to make them functional in decision-making. Water, land, and food can be included in economic planning as soon as we explicitly treat them as valuable assets. Here, the approach by Rosales-Asensio et al. goes interestingly against the current of something that can be labelled ‘popular environmentalism’. Whilst the latter treats those natural (or semi-natural, in the case of the food base) resources as invaluable, and therefore impossible to put a price tag on, Rosales-Asensio et al. argue that it is much more workable, policy-wise, to do exactly the opposite, i.e. to give explicit prices and book values to those resources. The connection between energy, water, food, and the economy is modelled as a transformation of matrices, thus as something akin to a Markov chain of states.
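The matrix idea can be sketched in a few lines. The numbers below are purely illustrative, not taken from Rosales-Asensio et al.: a state vector of asset values for energy, water, food, and land, updated by a column-stochastic transition matrix, which is what makes the process resemble a Markov chain of states.

```python
import numpy as np

# Hypothetical illustration: resource accounts (energy, water, food, land)
# expressed as a state vector of value shares, updated by a transition
# matrix -- the Markov-chain-like transformation the paragraph alludes to.
labels = ["energy", "water", "food", "land"]
state = np.array([0.40, 0.25, 0.20, 0.15])  # shares of total asset value

# Each column says how one sector's value redistributes in the next period
# (columns sum to 1, as in a column-stochastic Markov matrix).
transition = np.array([
    [0.90, 0.05, 0.05, 0.02],
    [0.04, 0.85, 0.05, 0.03],
    [0.04, 0.07, 0.85, 0.05],
    [0.02, 0.03, 0.05, 0.90],
])

next_state = transition @ state
```

Because the matrix is column-stochastic, total value is conserved from one state to the next; only its distribution across the four sectors shifts.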

The next article I pass in review is that by Al-Tamimi and Al-Ghamdi (2020), titled ‘Multiscale integrated analysis of societal and ecosystem metabolism of Qatar’ (Energy Reports, 6, 521-527). This paper presents interesting findings, namely that energy consumption in Qatar, between 2006 and 2015, grew at a faster rate than GDP over the same period, whilst energy consumption per capita and energy intensity grew at approximately the same rate. That could suggest some kind of trade-off between productivity and the energy intensity of an economy. Interestingly, the fall in productivity was accompanied by increased economic activity of Qatar’s population: the growth of the professionally active population, and hence of the labour market, was faster than the overall demographic growth.

In still another paper, titled ‘The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis’ (Energy Policy, 139, 111304, 2020), Professor Valeria Andreoni develops a line of research where rapid economic change, even crisis-like change, contributes to reducing the energy intensity of national economies. Still, some kind of blueprint for energy-efficient technological change needs to be in place at the level of national policies. Energy-efficient technological change might be easier than we think, and yet, apparently, it needs some sort of accompanying economic change as its trigger. Energy efficiency seems to be correlated with competitive technological development in national economies. Financial constraints can hamper those positive changes. Cross-sectional (i.e. inter-country) gaps in energy efficiency are essentially bad for sustainable development. Public policies should aim at closing those gaps, by integrating the market of energy within the EU.

Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020), in the article titled ‘A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns’ (Energy Strategy Reviews, 32, 100562), bring empirical results somewhat similar to mine, although obtained with a different method. The number of hours worked per person per year is mentioned in this paper as an important variable of the MuSIASEM framework for China. There is, for example, a comparison of energy metabolized in the sector of paid work, as compared to the household sector. It is found that the aggregate amount of human work used in a given sector of the economy is closely correlated with the aggregate energy metabolized by that sector. The economic development of China, and its pattern of societal metabolism in using energy, displays an increase in the capitalization of all sectors, together with a reduction of human activity (paid work) in all of them except services. At the same time, the amount of human work per unit of real output seems to be negatively correlated with the capital intensity (or capital endowment) of particular sectors in the economy. Energy efficiency seems to be driven by decreasing work intensity and increasing capital intensity.

I found another similarity to my own research, although under a different angle, in the article by Koponen, K., & Le Net, E. (2021), ‘Towards robust renewable energy investment decisions at the territorial level’ (Applied Energy, 287, 116552). The authors build a simulation model in Excel, where they create m = 5000 alternative futures for a networked energy system, aiming at optimizing 5 performance metrics, namely: the LCOE cost of electricity, the GHG metric (greenhouse gas emissions) for the climate, the density of PM2.5 and PM10 particles in the ambient air as a metric of health, the capacity of power generation as a technological benchmark, and the number of jobs as a social outcome. That complex vector of outcomes is simulated as dependent on a vector of cost uncertainties, more specifically: the cost of CO2, the cost of electricity, the cost of natural gas, and the cost of biomass. The model is based on actual empirical data for those variables, and the ‘alternative futures’ are, in other words, 5000 alternative states of the same system. Outcomes are gauged with so-called regret analysis, where the relative performance on a specific outcome is measured as the residual difference between its local value and, respectively, the general minimum or maximum, depending on whether the given metric is something we strive to maximize (e.g. capacity) or to minimize (e.g. GHG). Regret analysis is very similar to the estimation of residual local error.
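The regret analysis described above can be sketched roughly as follows. This is my own toy reconstruction with made-up numbers, not the authors’ actual model: for each metric, a scenario’s regret is its distance from the best value achieved across all m scenarios, and zero regret marks the best outcome.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy reconstruction: m alternative futures, each scored on metrics we
# either minimize (e.g. LCOE, GHG) or maximize (e.g. capacity, jobs).
# All numbers are invented for illustration.
m = 5000
metrics = {
    "LCOE":     (rng.uniform(40, 120, m), "min"),
    "GHG":      (rng.uniform(100, 900, m), "min"),
    "capacity": (rng.uniform(1, 10, m),   "max"),
    "jobs":     (rng.uniform(50, 500, m), "max"),
}

# Regret of scenario i on a metric = distance from the best value achieved
# across all m scenarios (0 means no regret, i.e. the best outcome).
regret = {}
for name, (values, direction) in metrics.items():
    best = values.min() if direction == "min" else values.max()
    regret[name] = np.abs(values - best)

# One common way to use this: rank scenarios by their worst normalized
# regret across metrics, and pick the minimax-regret scenario.
stacked = np.vstack([r / r.max() for r in regret.values()])
worst_case = stacked.max(axis=0)
best_scenario = int(worst_case.argmin())
```

The minimax step is one standard reading of regret analysis; the paper may aggregate regrets differently.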

That short review of literature has the merit of showing me that I am not completely off the mark with the method and the findings which I initially presented to the editor of Applied Energy in that manuscript: ‘Climbing the right hill – an evolutionary approach to the European market of electricity’. The idea of understanding the mechanism of change in social structures, including the market of energy, by studying many alternative versions of said structure, seems to be catching on in the literature. I am progressively wrapping my mind around the fact that in my manuscript, the method is more important than the findings. The real value for money of my article seems to reside in the extent to which I can demonstrate the reproducibility and robustness of that method.

Thus, probably for the umpteenth time, I am rephrasing the fundamentals of my approach, and I am trying to fit it into the structure which Applied Energy recommends for articles submitted to their attention. I should open up with an ‘Introduction’, where I sketch the purpose of the paper, as well as the main points of the theoretical background my paper stems from, although without entering into a detailed study thereof. Then, I should develop ‘Material and Methods’, with the main focus on making my method as reproducible as possible. Next come, respectively, ‘Theory’ and ‘Calculation’, elaborating on the theoretical foundations of my research as pitched against the literature, and on the detailed computational procedures I used. I guess that I need to distinguish, at this specific point, between the literature pertinent to the substance of my research (Theory), and that oriented on the method of working with empirical data (Calculation).

Those four initial sections – Introduction, Material and Methods, Theory, Calculation – open the topic up, and then comes the time to give it closure, with, respectively: ‘Results’, ‘Discussion’, and, optionally, a separate ‘Conclusion’. On top of that logical flow, I need to add sections pertinent to ‘Data availability’, ‘Glossary’, and ‘Appendices’. As I step further back from the core and substance of my manuscript, and deeper into peripheral information, I need to address three succinct ways of presenting my research: Highlights, a Graphical Abstract, and a structured cover letter. Highlights are 5 – 6 bullet points, 1 – 2 lines each, a sort of abstract translated into a corporate slide presentation. The Graphical Abstract is a challenge – I need to present complex ideas in pictographic form – and an interesting one. The structured cover letter should address the following points:

>> what is the novelty of this work?

>> is the paper appealing to a popular or scientific audience?

>> why does the author think the paper is important, and why should the journal publish it?

>> has the article been checked by an expert native speaker?

>> is the author available as a reviewer?

Now, I ask myself fundamental questions. Why should anyone bother about the substance and the method of the research I present in my article? I have noticed, both in public policies and in business strategies, a tendency to formulate completely unrealistic plans, and then to complain about other people not being smart enough to carry those plans through to a happy ending. It is very visible in everything related to environmental policies and environmentally friendly strategies in business. Environmental activism consumes itself, very largely, in bashing everyone around for not being diligent enough in saving the planet.

To me, it looks very similar to what I did many times as a person: unrealistic plans, obvious failure which anyone sensible could have predicted, frustration, resentment, practical inefficiency. I did it many times, and, obviously, whole societies are perfectly able to do it collectively. Action is key to success. A good plan is a plan which utilizes and reinforces the skills and capacities I already have, turns those skills into recurrent patterns of action, something like one good thing done per day, whilst clearly defining the skills I need to learn in order to be even more complete and more efficient in what I do. A good public policy, just as a good business strategy, should work in the same way.

When we talk about energy efficiency, or about the transition towards renewable energies, what is our action? Like really, what is the most fundamental thing we do together? Do we purposefully increase energy efficiency, in the first place? Do we deliberately transition to renewables? Yes, and no. Yes, at the end of the day we get those outcomes, and no, what we do on a daily basis is something else. We work. We do business. We study in order to get a job, or to start a business. We live our lives, from day to day, and small outcomes of that daily activity pile up, producing big cumulative change.   

Instead of discussing what we do completely wrong, and thus need to change, it is a good direction to discover what we do well, consistently and with visible learning. That line of action can be reinforced and amplified, with good results. My review of literature so far suggests that research concerning energy and the energy transition is progressively changing direction: from the tendency towards growing complexity and depth of study, dominant until recently, towards a translation of those complex, in-depth findings into relatively simple decision-making tools for policies and business strategies.

Here comes my method. I think it is important to create an analytical background for policies and business strategies, where we take commonly available empirical data at the macro scale, and use this data to discover the essential, recurrently pursued collective outcomes of a society, in the context of specific social goals. My point and purpose is to nail down a reproducible, relatively simple method of discovering what whole societies are really after. Once again, I think about something simple, which anyone can perform on their computer, with access to the Internet. Nothing of that fancy stuff of social engineering, with personal data collected from unaware folks on Facebook. I want the equivalent of a screwdriver in positive, acceptably fair social engineering.

How do I think I can make a social screwdriver? I start with defining a collective goal we think we should pursue. In the specific case of my research on energy, it is the transition to renewable sources. I nail down my observation of achievement, regarding that goal, with a simple metric, such as the percentage of renewables in total energy consumed, or in total electricity produced. I place that metric in the context of other socio-economic variables, such as GDP per capita, average hours worked per person per year, etc. At this point, I make an important assumption as regards the meaning of all the variables I use. I assume that if a lot of humans go to great lengths in measuring something and reporting those measurements, it must be important stuff. I know, it sounds simplistic, yet it is fundamental. I assume that quantitative variables used in social sciences represent important aspects of social life, which we do our best to observe and understand. Importance translates as a significant connection to the outcomes of our actions.

Quantitative variables which we use in social sciences represent collectively acknowledged outcomes of our collective action. They inform about something we consistently care about, as a society, and, at the same time, something we recurrently produce, as a society. An array of quantitative socio-economic variables represents an imperfect, and yet consistently construed representation of complex social reality.

We essentially care about change. Both individual human nervous systems, and whole cultures, are incredibly accommodative. When we stay in a really strange state long enough to develop adaptive habits, that strange state becomes normal. We pay attention to things that change, whence a further hypothesis of mine that quantitative socio-economic variables, even if arithmetically they are local stationary states, serve us to apprehend gradients of change, at the level of collective, communicable cognition.

If the many different variables I study serve to represent, imperfectly but consistently, the process of change in social reality, they might zoom in on the right thing with various degrees of accuracy. Some of them reflect better the kind of change that is really important for us, collectively, whilst others are only approximately accurate in representing those collectively pursued outcomes. An important assumption pops its head from between the lines of my writing: the bridge between pursued outcomes and important change. We pay attention to change, and some types of change are more important to us than others. Those particularly important changes are, I think, the outcomes we are after. We pay the most attention, both individually and collectively, to phenomena which bring us payoffs, or, conversely, which seriously hamper such payoffs. This is, once again on my path of research, a salute to the Interface Theory of Perception (Hoffman et al. 2015[2]; Fields et al. 2018[3]).

Now, the question is: how to extract orientations, i.e. objectively pursued collective outcomes, from that array of apparently important, structured observations of what is happening to our society? One possible method consists in observing trends and variance over time; this is largely what I had done up to a point, and what I always do now, with a fresh dataset, as a way of data mining. In this approach, I generally assume that a combination of relatively strong variance with strong correlation to the remaining metrics makes a particular variable likely to be the driving undertow of the whole social reality represented by the dataset at hand.
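That screening step can be prototyped in a few lines. The dataset below is synthetic, the column names are illustrative only, and the scoring rule is just one plausible way of combining variance with average correlation; a sketch, not my exact procedure.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a macro dataset: one trending 'undertow' series
# drives most columns; one column is pure noise.
rng = np.random.default_rng(0)
driver = rng.normal(0, 1, 200).cumsum()
data = pd.DataFrame({
    "gdp_per_capita":   driver + rng.normal(0, 0.5, 200),
    "hours_worked":     -0.8 * driver + rng.normal(0, 0.5, 200),
    "share_renewables": rng.normal(0, 1, 200),   # uncorrelated noise
    "price_level":      0.6 * driver + rng.normal(0, 0.5, 200),
})

# Mean absolute correlation of each variable with all the others.
corr = data.corr().abs()
np.fill_diagonal(corr.values, np.nan)
avg_corr = corr.mean()

# A rough, scale-free measure of each variable's variance.
dispersion = data.std() / data.abs().mean()

# Combine the two signals; the top-scoring column is the candidate
# 'driving undertow' of the dataset.
score = avg_corr * dispersion.rank(pct=True)
candidate = score.idxmax()
```

Any monotone combination of the two rankings would do here; the product of average correlation and a percentile rank of dispersion is just a simple, readable choice.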

Still, there is another method, which I focus on in my research, and which consists in treating the empirical dataset as a complex and imperfect representation of the way that collectively intelligent social structures learn by experimenting with many alternative versions of themselves. That general hypothesis leads to building supervised, purposefully biased experiments with that data. Each experiment consists in running the dataset through a specifically skewed neural network – a perceptron – where one variable from the dataset is the output which the perceptron strives to optimize, and the remaining variables make the complex input instrumental to that end. Therefore, each such experiment simulates an artificial situation when one variable is the desired and collectively pursued outcome, with other variables representing gradients of change subservient to that chief value.
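A minimal stand-in for one such experiment might look like the following. The perceptron here is deliberately simple (one layer, sigmoid activation, delta-rule updates) and the data is random; it illustrates the logic of pegging the network on one output variable, not the exact architecture used in the manuscript.

```python
import numpy as np

def run_experiment(X, output_col, epochs=200, lr=0.05, seed=1):
    """Train a toy one-layer perceptron to reproduce one column of the
    dataset X from all the other columns, and return the transformed
    dataset: inputs as fed in, the chosen output replaced by the
    network's own predictions."""
    rng = np.random.default_rng(seed)
    Xs = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # scale to [0, 1]
    inputs = np.delete(Xs, output_col, axis=1)
    target = Xs[:, output_col]
    w = rng.normal(0.0, 0.1, inputs.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-(inputs @ w + b)))   # sigmoid activation
        err = target - pred
        grad = err * pred * (1.0 - pred)                 # delta rule
        w += lr * inputs.T @ grad / len(target)
        b += lr * grad.mean()
    transformed = Xs.copy()
    transformed[:, output_col] = pred
    return transformed

# One experiment per variable: each simulates a society optimizing that
# variable as its collectively pursued outcome.
X = np.random.default_rng(0).uniform(0.0, 10.0, (60, 4))
experiments = [run_experiment(X, k) for k in range(X.shape[1])]
```

The point is the experimental design, not the network: the same structure is reused for every variable, and only the choice of output changes between experiments.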

When I run such experiments with any dataset, I create as many transformed datasets as there are variables in the game. Both for the original dataset and for the transformed ones, I can calculate the mean value of each variable, thus constructing a vector of mean expected values; according to classical statistics, such a vector is representative of the expected state of the dataset in question. I end up with both the original dataset and the transformed ones being tied to their corresponding vectors of mean expected values. It is easy to estimate the Euclidean distance between those vectors, and thus to assess the relative mathematical resemblance between the underlying datasets. Here comes something I discovered more than assumed: those Euclidean distances are very disparate, and some of them are one or two orders of magnitude smaller than all the rest. In other words, some among all the supervised experiments yield a simulated state of social reality much more similar to the original, empirical one than all the other experiments do. This is the methodological discovery which underpins my whole research in this article, and which emerged as pure coincidence, when I was working on a revised version of another paper, titled ‘Energy efficiency as manifestation of collective intelligence in human societies’, which I published with the journal ‘Energy’.
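The distance comparison itself is straightforward. In this sketch, the ‘transformed datasets’ are synthetic stand-ins (the original plus noise of varying amplitude), so the first one is constructed to sit much closer to the empirical state than the rest, mimicking the disparity of Euclidean distances described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the original empirical dataset: 100 observations, 5 variables.
original = rng.uniform(0, 1, (100, 5))

# Stand-ins for the single-variable-optimizing transformations: the first
# is nearly identical to the original, the others are noisier copies.
transformed = [original + rng.normal(0, s, original.shape)
               for s in (0.001, 0.3, 0.5, 0.4, 0.6)]

# Euclidean distance between each transformed dataset's vector of column
# means and the original's.
mu0 = original.mean(axis=0)
distances = np.array([np.linalg.norm(t.mean(axis=0) - mu0)
                      for t in transformed])

# The experiments closest to the empirical state point at the most
# plausible collectively pursued outcomes.
ranking = distances.argsort()
```

With real data, the spread of these distances is the finding itself: one or two experiments sit an order of magnitude closer than the rest.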

My guess from there was – and still is – that those supervised experiments have disparate capacity to represent the social reality I study with the given dataset. Experiments which yield mathematical transformations the most similar to the original set of empirical numbers are probably the most representative. Once again, the mathematical structure of the perceptron used in all those experiments is rigorously the same; what makes the difference is the focus on one particular variable as the output to optimize. In other words, some among the variables studied represent much more plausible collective outputs than others.

I feel a bit lost in my own thinking. Good. It means I have generated enough loose thoughts to put some order in them. It would be much worse if I didn’t have thoughts to put order in. Productive chaos is better than sterile emptiness. Anyway, the reproducible method I want to present and validate in my article ‘Climbing the right hill – an evolutionary approach to the European market of electricity’ aims at discovering the collectively pursued social outcomes, which, in turn, are assumed to be the key drivers of social change. The path to that discovery leads through the hypothesis that such outcomes are equivalent to a specific gradient of change, which we collectively pay particular attention to in the complex social reality, imperfectly represented with an array of quantitative socio-economic variables. The methodological discovery which I bring forth in that reproducible method is that when any dataset of quantitative socio-economic variables is transformed, with a perceptron, into as many single-variable-optimizing transformations as there are variables in the set, 1 to 3 of those transformations are mathematically much more similar to the original set of observations than all the other transformed sets. Consequently, in this method, one can expect to find 1 to 3 variables which represent – much more plausibly than others – the possible orientations, i.e. the collectively pursued outcomes, of the society studied with the given empirical dataset.

Ouff! I have finally spat it out. It took some time. The idea needed to ripen, intellectually. As it is ripe, I can harvest.

[1] Rosales-Asensio, E., de la Puente-Gil, Á., García-Moya, F. J., Blanes-Peiró, J., & de Simón-Martín, M. (2020). Decision-making tools for sustainable planning and conceptual framework for the energy–water–food nexus. Energy Reports, 6, 4-15.

[2] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[3] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213.

Still some juice in facts

I am working on improving my manuscript titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, after it received an amicable rejection from the journal Applied Energy, and, at the same time, I am working on other stuff. As usual. Some of that other stuff is a completely new method of teaching in the summer semester, sort of a gentle revolution, with glorious prospects ahead, and without guillotines (well, not always).

As for the manuscript, I intend to work in three phases. I restate and reformulate the main lines of the article; this is phase one. I pass in review the freshest literature in energy economics, as well as in the applications of artificial neural networks therein; this is phase two. Finally, in phase three, I plan to position my method and my findings vis-à-vis that latest research.

I start phase one. When I want to understand what I wrote about a year ago, it is very nearly as if I were trying to understand what someone else wrote. Yes, I function like that. I have pretty good long-term memory, and it is because I learnt to detach emotions from old stuff. I sort of archive my old thoughts in order to make room for the always slightly disquieting waterfall of new thoughts. I need to dig and unearth my past meaning. I use the technique of reverse reading to achieve that. I read written content from its end back upstream to its beginning, and I go back upstream at two levels of structure: the whole piece of text, and individual sentences. In practical terms, when I work with that manuscript of mine, I take the last paragraph of the conclusion, and I actively rewrite it backwards, word-wise (I keep proper names unchanged). See for yourself.

This is the original last paragraph: ‘What if economic systems, inclusive of their technological change, optimized themselves so as to satisfy a certain workstyle? The thought seems incongruous, and yet Adam Smith noticed that division of labour, hence the way we work, shapes the way we structure our society. Can we hypothesise that technological change we are witnessing is, most of all, a collectively intelligent adaptation in the view of making a growing mass of humans work in ways they collectively like working? That would revert the Marxist logic, still, the report by World Bank, cited in the beginning of the article, allows such an intellectual adventure. On the path to clarify the concept, it is useful to define the meaning of collective intelligence’.

Now, I write it backwards: ‘Intelligence collective of meaning the define to useful is it concept the clarify to path the on adventure intellectual an such allows article the of beginning the in cited World Bank by report the still logic Marxist the revert that would that. Working like collectively they ways in work humans of mass growing a making view of the in adaptation intelligent collectively a all of most is witnessing are we change technological that hypothesise we can? Society our structure we way the shapes work we way the hence labour of division that noticed Adam Smith yet and incongruous seems thought the workstyle certain a satisfy to as so themselves optimized change technological their of inclusive systems economic if what?

Strange? Certainly, it is strange, as it is information with its pants on its head, and this is precisely why it is informative. The paper is about the market of energy, and my last paragraph of conclusions is about the market of labour, and its connection to the market of energy.

I go further upstream in my writing. The before-last paragraph of conclusions goes like: ‘Since David Ricardo, all the way through the works of Karl Marx, John Maynard Keynes, and those of Kuznets, economic sciences seem to be treating the labour market as easily transformable in response to an otherwise exogenous technological change. It is the assumption that technological change brings greater productivity, and technology has the capacity to bend social structures. In this view, work means executing instructions coming from the management of business structures. In other words, human labour is supposed to be subservient and executive in relation to technological change. Still, the interaction between technology and society seems to be mutual, rather than unidirectional (Mumford 1964, McKenzie 1984, Kline and Pinch 1996; David 1990, Vincenti 1994). The relation between technological change and the labour market can be restated in the opposite direction. There is a body of literature which perceives society as an organism, and social change is seen as complex metabolic adaptation of that organism. This channel of research is applied, for example, in order to apprehend energy efficiency of national economies. The so-called MuSIASEM model is an example of that approach, claiming that complex economic and technological change, including transformations in the labour market, can be seen as a collectively intelligent change towards optimal use of energy (see for example: Andreoni 2017 op. cit.; Velasco-Fernández et al 2018 op. cit.). Work can be seen as fundamental human activity, crucial for the management of energy in human societies. The amount of work we perform creates the need for a certain caloric intake, in the form of food, which, in turn, shapes the economic system around, so as to produce that food. This is a looped adaptation, as, in the long run, the system supposed to feed humans at work relies on this very work’.

Here is what comes from reverted writing of mine: ‘Work very this on relies work at humans feed to supposed system the run long the on as adaptation looped a is this food that produce to around system economic the shapes turn in which food of form the in intake caloric certain a for need the creates perform we work of amount the societies human in energy of management the for crucial activity human fundamental as seen be can work. Energy of use optimal towards change intelligent collectively a as seen be can market labour the in transformations including change technological and economic complex that claiming approach that of example an is model MuSIASEM called so the economies national of efficiency energy apprehend to order in example for applied is research of channel this. Organism that of adaptation metabolic complex as seen is change social and organism an as society perceives which literature of body a is there. Direction opposite the in restated be can market labour the and change technological between relation the. Unidirectional than rather mutual be to seems society and technology between interaction the still. Change technological to relation in executive and subservient be to supposed is labour human words other in. Structures social bend to capacity the has technology and productivity a greater brings change technological that assumption the is it. Change technological exogenous otherwise an to response in transformable easily as market labour the treating ne to seem sciences economic Kuznets of those and Keynes […], Marks […] of works the through way the all Ricardo […]’.

Good. I speed up. I am going back upstream through the consecutive paragraphs of my manuscript. The chain of 35 ideas which I write here below corresponds to the reverted logical structure (i.e. from the end back upstream to the beginning) of my manuscript. Here I go. The ideas listed below have numbers corresponding to their place in the manuscript: the higher the number, the later in the text the given idea is phrased for the first time.

>> Idea 35: The market of labour, i.e. the way we organize for working, determines the way we use energy.

>> Idea 34: The way we work shapes technological change more than vice versa. Technologies and workstyles interact.

>> Idea 33: The labour market offsets the loss of jobs in some sectors by the creation of jobs in other sectors, and thus the labour market accommodates the emergent technological change.

>> Idea 32: The basket of technologies we use determines the ways we use energy. Work in itself is human effort, and that effort is functionally connected to the energy base of our society.

>> Idea 31: Digital technologies seem to have a special function in mediating the connection between technological change and the labour market.

>> Idea 30: The number of hours worked per person per year (AVH), the share of labour in the GNI (LABSH), and the indicator of human capital (HC) seem to make an axis of social change, both as input and as output of the collectively intelligent structure.

>> Idea 29: The price index in exports (PL_X) comes as the chief collective goal pursued, and the share of public expenditures in the Gross National Income (CSH_G) appears as the main epistatic driver in that pursuit.

>> Idea 28: The methodological novelty of the article consists in using the capacity of a neural network to produce many variations of itself, and thus to perform evolutionary adaptive walk in rugged landscape.

>> Idea 27: The here-presented methodology assumes: a) tacit coordination b) evolutionary adaptive walk in rugged landscape c) collective intelligence d) observable socio-economic variables are manifestations of the past, coordinated decisions.

>> Idea 26: Variance observable in the average Euclidean distances that each variable has with the remaining 48 ones reflects the capacity of each variable to enter into epistatic interactions with other variables, as the social system studied climbs different hills, i.e. pursues different outcomes to optimize.

>> Idea 25: Coherence: across 48 sets Si out of the 49 generated with the neural network, variances in Euclidean distances between variables are quite even. Only one set Si yields different variances, namely the one pegged on the coefficient of patent applications per 1 million people.

>> Idea 24: the order of phenomenal occurrences in the set X does not have a significant influence on the outcomes of learning.

>> Idea 23: Results of multiple linear regression of natural logarithms of the variables observed are compared to the application of an artificial neural network to the same dataset – to pass in review and to rework – lots of meaning there.

>> Idea 22: the phenomena assumed to be a disturbance, i.e. the discrepancy in retail prices of electricity, as well as the resulting aggregate cash flow, are strongly correlated with many other variables in the dataset. Perhaps the most puzzling is their significant correlation with the absolute number of resident patent applications, and with its coefficient per million inhabitants. Apparently, the more patent applications in the system, the deeper that market imperfection.

>> Idea 21: Another puzzling correlation of these variables is the negative one with the variable AVH, or the number of hours worked per person per year. The more an average person works per year, in the given country and year, the less likely this local market is to display harmful differences in the retail prices of electricity for households.

>> Idea 20: On the other hand, variables which we wish to see as systemic – the share of electricity in energy consumption and the share of renewables in the output of electricity – have surprisingly few significant correlations in the dataset studied, just as if they were exogenous stressors with little foothold in the market as of yet.

>> Idea 18: None of the four key variables regarding the European market of energy: a) the price fork in the retail market of electricity (€) b) the capital value of cash flow resulting from that price fork (€ mln) c) the share of electricity in energy consumption (%) and d) the share of renewables in electricity output (%) seems to have been generated by a ‘regular’ Gaussian process: they all produce definitely too many outliers for a Gaussian process to be the case.

>> Idea 18: other variables in the dataset, the ‘regulars’ such as GDP or price levels, seem to be distributed quite close to normal, and Gaussian processes can be assumed to work in the background. This is a typical context for evolutionary adaptive walk in rugged landscape. An otherwise stable socio-economic environment gets disturbed by changes in the energy base of the society living in the whereabouts. As new stressors (e.g. the need to switch to electricity, from the direct combustion of fossil fuels) come into the game, some ‘mutant’ social entities stick out of the lot and stimulate an adaptive walk uphill.

>> Idea 17: The formal test of Euclidean distances, according to equation (1), yields a hierarchy of alternative sets Si, as for their similarity to the source empirical set X of m= 300 observations. This hierarchy represents the relative importance of variables, which each corresponding set Si is pegged on.

>> Idea 16: The comparative set XR has been created by stacking 10 pseudo-random permutations of the original set X into one database. Each permutation consists in sorting the records of the original set X according to a pseudo-random index variable. The resulting set covers m = 3000 phenomenal occurrences.
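The construction of that comparative set XR can be sketched in a few lines of Python. The shape of X below (m = 300 observations of 49 variables) follows the manuscript, but the data itself is an invented stand-in for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the empirical set X: m = 300 observations
# (e.g. 30 countries x 10 years) of 49 socio-economic variables.
X = rng.normal(size=(300, 49))

# Each permutation sorts the records of X by a pseudo-random index
# variable, i.e. shuffles the rows; 10 such permutations get stacked
# into one comparative database XR.
XR = np.vstack([rng.permutation(X, axis=0) for _ in range(10)])

print(XR.shape)  # (3000, 49): m = 3000 phenomenal occurrences
```

Each stacked block contains exactly the same records as X, only in a different order, so XR disturbs the sequence of phenomenal occurrences without touching their content.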

>> Idea 15: The underlying assumption as regards the collective intelligence of that set is that each country learns separately over the time frame of observation (2008 – 2017), and once one country develops some learning, that experience is being taken and reframed by the next country etc. 

>> Idea 14: we have a market of energy with goals to meet, regarding the local energy mix, and with a significant disturbance in the form of market imperfections

>> Idea 13: special focus on two variables, which the author perceives as crucial for tackling climate change: a) the share of renewable energy in the total output of electricity, and b) the share of electricity in the total consumption of energy.

>> Idea 12: A test for robustness, possible to apply together with this method, is based on a category of algorithms called ‘random forest’.

>> Idea 11: The vector of variances in the xi-specific fitness function V[xi(pj)] across the n sets Si has another methodological role to play: it can serve to assess the interpretative robustness of the whole complex model. If, across neural networks oriented on different outcome variables, the given input variable xi displays a pretty uniform variance in its fitness function V[xi(pj)], the collective intelligence represented in equations (2) – (5) performs its adaptive walk in rugged landscape coherently across all the different hills considered to walk up. Conversely, should all or most variables xi, across different sets Si, display noticeably disparate variances in V[xi(pj)], the network represents a collective intelligence which adapts in a clearly different manner to each specific outcome (i.e. output variable).
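As a minimal numerical sketch of that interpretative-robustness check, with invented numbers standing in for the variances V[xi(pj)]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for the variances V[xi(pj)] of each input variable's
# fitness function: rows = the n sets Si, columns = the input variables xi.
V = rng.uniform(0.8, 1.2, size=(49, 49))

# For each variable xi: how dispersed is V[xi(pj)] across the different
# hills (sets Si) the network climbs? A coefficient of variation per column:
dispersion = V.std(axis=0) / V.mean(axis=0)

# Uniformly low dispersion -> the collective intelligence walks coherently
# up all the hills considered; high dispersion -> it adapts in a clearly
# different manner to each specific outcome variable.
print(dispersion.round(3))
```

The choice of the coefficient of variation as the dispersion measure is my assumption for the sketch; any scale-free measure of spread would play the same role.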

>> Idea 10: the mathematical model for this research is composed of 5 main equations, which, at the same time, make the logical structure of the artificial neural network used for treating empirical data. That structure entails: a) a measure of mathematical similarity between numerical representations of collectively intelligent structure b) the expected state of intelligent structure reverse engineered from the behaviour of the neural network c) neural activation and the error of observation, the latter being material for learning by measurable failure, for the collectively intelligent structure d) transformation of multi-variate empirical data into one number fed into the neural activation function e) a measure of internal coherence in the collectively intelligent structure

>> Idea 9: the more complexity, the more is the hyperbolic tangent, based on the expression e2h, driven away from its constant root e2. Complexity in variables induces greater swings in the hyperbolic tangent, i.e. greater magnitudes of error, and, consequently, longer strides in the process of learning.
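The mechanics of that amplification can be shown in a toy calculation. The sketch below just writes the hyperbolic tangent through e^(2h) and shows that larger aggregate signals h produce larger activations, hence larger errors:

```python
import math

def tanh_via_e2h(h):
    # Hyperbolic tangent expressed through e^(2h):
    # tanh(h) = (e^(2h) - 1) / (e^(2h) + 1)
    e2h = math.exp(2.0 * h)
    return (e2h - 1.0) / (e2h + 1.0)

# More complexity in the input variables -> larger aggregate signal h ->
# larger swings of tanh(h), i.e. greater magnitudes of error and longer
# strides in the process of learning.
for h in (0.1, 0.5, 2.0):
    print(f"h = {h:>4}  tanh(h) = {tanh_via_e2h(h):.4f}")
```

The example values of h are arbitrary; the point is only the monotonic growth of the activation, and thus of the error signal, with the magnitude of h.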

>> Idea 8: Each congruent set Si is produced with the same logical structure of the neural network, i.e. with the same procedure of estimating the value of output variable, valuing the error of estimation, and feeding the error forward into consecutive experimental rounds. This, in turn, represents a hypothetical state of nature, where the social system represented with the set X is oriented on optimizing the given variable xi, which the corresponding set Si is pegged on as its output.

>> Idea 7: complex entities can internalize an external stressor as they perform their adaptive walk. Therefore, observable variance in each variable xi in the set X can be considered as manifestation of such internalization. In other words, observable change in each separate variable can result from the adaptation of social entities observed to some kind of ‘survival imperative’.

>> Idea 6: hypothesis that collectively intelligent adaptation in human societies, regarding the ways of generating and using energy, is instrumental to the optimization of other social traits.    

>> Idea 5: Adaptive walks in rugged landscape consist in overcoming environmental challenges in a process comparable to climbing a hill: it is both an effort and a learning, where each step sets a finite range of possibilities for the next step.

>> Idea 4: the MuSIASEM methodological framework – aggregate use of energy in an economy can be studied as a metabolic process

>> Idea 3: human societies are collectively intelligent about the ways of generating and using energy: each social entity (country, city, region etc.) displays a set of characteristics in that respect

>> Idea 2: adaptive walk of a collective intelligence happens in a very rugged landscape, and the ruggedness of that landscape comes from the complexity of human societies

>> Idea 1: Collective intelligence occurs even in animals as simple neurologically as bees, or even as the Toxo parasite. Collective intelligence means shifting between different levels of coordination.

As I look at that thing, namely at what I wrote something like one year ago, I have a doubly comforting feeling. The article seems to make sense from the end to the beginning, and from the beginning to the end. Both logical streams seem coherent and interesting, whilst being slightly different in their intellectual melody. This is the first comfortable feeling. The second is that I still have some meaning, and, therefore, some possible truth, to unearth out of my empirical findings, and this is always a good thing. In science, the view of empirical findings squeezed out of their last bit of meaning and yet still standing as something potentially significant is one of the saddest perspectives one can have. Here, there is still some juice in facts. Good.

Shut up and keep thinking

This time, something went wrong with the uploading of media on the WordPress server, and so I am publishing my video editorial on YouTube only. Click here to see and hear me saying a few introductory words.

I am trying to put some order in all the updates I have written for my research blog. Right now, I am identifying the main strands of my writing. Still, I want to explain why I am doing that sorting of my past thought. I had the idea that, as the academic year is about to start, I could use those past updates as material for teaching. After all, I am writing this blog in sort of a quasi-didactic style, and a thoughtful compilation of such content can be of help for my students.

Right, so I am disentangling those strands of writing. As for the main ideas, I have been writing mostly about four things: a) the market of renewable energies b) monetary systems and cryptocurrencies, as well as the FinTech sector, c) political systems, law and institutions, and d) behavioural research. As I am reviewing what I wrote along these four lines, a few distinct patterns of writing emerge. The first are case studies, focused on interpreting the financial statements of selected companies. I went into four distinct avenues with that form of expression: a) companies operating in the market of renewable energies b) investment funds c) FinTech companies and, lately, d) film and TV companies. Then, as a different form of my writing, come quantitative studies, where I use large databases to run correlations and linear regressions. Finally, there are whole series of updates, which, for lack of a better term, I call ‘concept development’. They give account of my personal work on business or scientific concepts, and look very much like daily reports of creative thinking.

Funny, by the way, how I write a lot about behavioural patterns and their importance in social structures, and I have fallen, myself, into recurrent behavioural patterns in my writing. Good, so what I am going to do is to use my readings and findings about behavioural patterns in order to figure out, and make the best possible use of my own behavioural patterns.

How can I use my past writing for educational purposes? I guess that my essential mission, as an educator, consists in communicating an experience in a teachable form, i.e. in a form possible to reproduce, and that reproduction of my experience should be somehow beneficial to other people. Logically, if I want to be an efficient educator in social sciences, what I should do now, is to distillate some sort of essence from my past experience, and formalize it in a teachable form.

My experience is that of looking for recurrent patterns in the most basic phenomena around me. As I am supposed to be clever as a social scientist, let’s settle for social phenomena. Those three distinct forms of my expression correspond to three distinct experiences: focus on one case, search for quantitative data on a s**tload of cases grouped together, and, finally, progressive coining up of complex ideas. This is what I can communicate, as a teacher.

Yet, another idea germinates in my mind. I am a being in time, and I thrust myself into the time to come, as Martin Heidegger would say (if he were alive). I define my social role very largely as that of a scientist and a teacher, and I wonder what I am thrusting, of myself as a scientist and a teacher, into this time that is about to advance towards me. I was tempted to answer grandly that it is my passion to discover that I project into current existence. Yet, precisely, I noticed it is grand talk, and I want to go to the core of things, like to the flesh of my being in time.

As I take off the pompous, out of that ‘passion to discover’ thing, something scientific emerges: a sequence. It all always starts when I see something interesting, and sort of vaguely useful. I intuitively want to know more about that interesting and possibly useful thing, and so I touch, I explore, I turn it under different angles, and yes, my initial intuition was right: it is interesting and useful. Years ago, even before having my PhD, I was strongly involved in preparing new material for management training. I was part of a team led by a respectable professor from the University of Warsaw, and we were in scientific charge of training for the middle management of a few Polish banks. At the time, I started to read financial reports of companies listed in the stock market. I progressively figured out that large, publicly listed companies publish periodical reports, which are like made of two completely different semantic substances.

In those financial reports, there was the corporate small talk, about ‘exciting new opportunities’, ‘controlled growth’, ‘value for our shareholders’, which, honestly, I find interesting for the sake of its peculiar style, seemingly detached from real life. Yet, there is another semantic substance in those reports: the numbers. Numbers tell a different story. Even if the management of a company do their best to disguise some facts so that they look fancier, the numbers tell the truth. They tell the truth about product markets, about doubtful mergers and acquisitions, about the capacity of a business to accumulate capital etc.

As I started to work seriously on my PhD, and I started to sort out the broadly understood microeconomic theories, including those of the new institutional school, I suddenly realised the connection between those theories and the sense that numbers make in those financial reports. I discovered that financial statements, i.e. the bare numbers, backed with some technical, explanatory notes, tend to show the true face of any business. They are like Ockham’s razors, which cut out the b*****it and leave only the really meaningful.

Here comes the underlying, scientifically defined phenomenon. Financial markets have been ever present in human societies. In this respect, I could never recommend enough the monumental work by Fernand Braudel (Braudel 1992a[1]; Braudel 1992b[2]; Braudel 1995[3]). Financial markets have their little ways, and one of them is the charming veil of indefiniteness, put on the facts that laymen should-not-exactly-excite-themselves-about-for-their-own-good. Big business likes to dress into those fancy clothes, made of fancy and foggy language. Still, as soon as numbers have to be published, they start telling the true story. However elusive the management of a company would be in their verbal statements, the financials tell the truth. It is fascinating, how the introduction of precise measurements and accounts, into a realm of social life where plenty of b*****it floats, instantaneously makes things straight and clear.

I know what you can think now, ‘cause I used to think the same when I was (much) younger and listened to lectures at the university: here is that guy, who can be elegantly labelled as more than mature, and he gets excited about his own fascinations, financial reports in the occurrence. Still, I invite you to explore the thing. Financial markets are crucial for the current functioning of our civilisation. We need to shift towards renewable energies, we need to figure out how to make more food in sustainable ways, we need to remove plastic from the oceans, we need to go and see if Mars is an interesting place to hang around: we have a lot of challenges to face. Financial markets are crucial to that end, because they can greatly help in mobilising collective effort, and if we want them to work the way they should work, we need to assure that money goes where it is really needed. Bringing clarity and transparency to finance, over and over again, is really important. Being able to cut through the veil of corporate propaganda and go to the core of business is just as important. Careful reading of financial reports matters. It just matters.

So here is how one of my scientific fascinations formed. More or less at the same epoch, i.e. when I was working on my PhD, I started to work seriously with large datasets, mostly regarding innovation. Patents, patent applications, indicators of R&D effort: I started to go really quantitative about that stuff. I still remember that strange feeling, when synthetic measures of those large datasets started to make sense. I would run some correlations, just because you need a lot of correlations in a PhD in economics, and vlam!: things would start to be meaningful. Those of you who work with Big Data probably know that feeling well, but I was experiencing it in the 1990s, when digital technologies were like the grand-parents of the current ones, and even things like Panel Data Analysis, an analytical routine today, were seen as the impressionism of economic research.

I had progressively developed a strongly exploratory manner of working with quantitative data. A friend of mine, the same professor whom I used to work for in those management training projects, called it ‘the bulldog’ approach. He said: ‘Krzysztof, when you find some interesting data, you are like one of those anecdotal bulldogs: you bite into it so strongly, that sometimes you don’t even know how to let go, and you need someone who comes with a crowbar and forces your jaws open’. Yes, indeed, this is the very same thing that I have just noticed as I am reviewing the past updates in that research blog of mine. What I do with data can be best described as sniffing, rummaging, playing with, digging and biting into – anything but a serious scientific approach.

This is how two of my typical forms of scientific expression – case studies and quantitative studies – formed out of my fascination with the sense coming out of numbers. There is that third form of expression, which I have provisionally labelled ‘concept forming’, and which I developed the most recently, like over the last 18 months, precisely as I started to blog.

I am thinking about the best way to describe my experience in that respect. Here it comes. You have probably experienced those episodes of going outdoors, hiking or running, and then you or someone else starts moaning: ‘These backpack straps are just killing my shoulders! I am thirsty! I am exhausted! My knees are about to explode!’ etc. When I was a kid, I joined the boy scouts, and it was all about hiking. I used to be a fat kid, and that hiking was really killing me, but I liked company, too, and so I went for it. I used to moan exactly the way I have just portrayed. The team leader would just reply along the lines of ‘Just shut up and keep walking! You will adapt!’. Now, I know he was bloody right. There are times in life, when we take on something new and challenging, and then it seems just so hard to carry on, and the best way to deal with it is to shut up and carry on. You will adapt.

This is very much what I experienced as regards thinking and writing. When I started to keep this blog, I had a lot of ideas to express (hopefully, I still have), but I was really struggling with giving an intelligible form to those ideas. This is how I discovered the deep truth of that sentence, attributed to Pablo Picasso (although it could be anyone): ‘When a stroke of genius comes, it finds me at work’. As strange as it could seem, I experienced, and I am still experiencing, over and over again, the fundamental veracity of that principle. When I start working on an idea, the initial enthusiasm sooner or later yields to some moaning function in my brain: ‘F*ck, it is too hard! That thinking about one thing is killing me! And it is sooo complex! I will never sort it out! There is no point!’. Then, hopefully, another part of my brain barks: ‘Just shut up, and think, write, repeat! You will adapt’.

And you know what? It works. When, in the presence of a complex concept to figure out I just shut up (metaphorically, I mean I stop moaning), and keep thinking and writing, it takes shape. Step by step, I am sketching the contours of what’s simmering in the depths of my mind. The process is a bit painful, but rewarding.

Thus, here is the pattern of myself, which I am thrusting into the future, as it comes to science and teaching, and which, hopefully, I can teach. People around me, voluntarily or involuntarily, attract my attention to some sort of scientific and/or teaching work I should do. This is important, and I have just realized it: I take on goals and targets that other people somehow suggest. I need that social prod to wake me up. As I take on that work, I almost instinctively start flipping my Ockham’s razor between and around my intellectual fingers (some people do it with cards, phones, or even knives, you might have spotted it), and I casually give a shave here and there, and I slice observable reality into layers: there is the foam of common narrative about the thing, and there are those factual anchors I can attach to. Usually they are numbers, and, at a deeper philosophical level, they are proportions between things of reality.

As I observe those proportions, I progressively attach them to facts of life, and I start seeing patterns. Those patterns provide me something more or less interesting to say, and so I maintain my intellectual interaction with other people, and sooner or later they attract my attention to another interesting thing to focus on. And so it goes on. And one day, I die. And what will really matter will be made of things that I do but which outlive me. The ethically valuable things.

Good. I return to that metaphor I coined up like 10 weeks ago, that of social sciences used as a social GPS system, i.e. serving to find one’s location in the social space, and then figure out a sensible route to follow. My personal experience, the one I have just given the account of, can serve to that purpose. My experience tells me that finding my place in the social space always involves interaction with other people. Understanding, and sort of embracing my social role, i.e. the way I can be really useful to other people, is the equivalent of finding my location on the social map. Another important thing I discovered as I deconstructed my experience: my social role is largely made of goals I pursue, not just of labels and rituals. It is sort of dynamic, it is very much my Heideggerian being-in-time, thrusting myself into my own immediate future.

I feel like getting it across really precisely: that thrusting-myself-into-the-future thing is not just pure phenomenology. It is hard science as well. We are defined by what we do. By ‘we’ I mean both individuals and whole societies. What we do involves something we are trying to achieve, i.e. some ethical values we seek to maximise, and to balance with other values. Understanding my social role means tracing the path I am moving along.

Now, whatever goal I am to achieve, according to my social role, around me I can see the foam of common narrative, and the factual anchors. The practical use of social sciences consists in finding those anchors, and figuring out the way to use them so as to thrive in the social role we have now, or change that role efficiently. Here comes the outcome from another piece of my personal experience: forming a valuable understanding requires just shutting up and thinking, and discovering things. Valuable discovery goes beyond and involves more than just amazement: it is intimately connected to purposeful work on discovering things.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest me two things, which Patreon suggests me to ask you about. Firstly, what kind of reward would you expect in exchange of supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Support this blog


[1] Braudel, F. (1992). Civilization and capitalism, 15th-18th Century, Vol. I: The structure of everyday life (Vol. 1). Univ of California Press.

[2] Braudel, F. (1992). Civilization and capitalism, 15th-18th century, vol. II: The wheels of commerce (Vol. 2). Univ of California Press.

[3] Braudel, F. (1995). A history of civilizations (p. 178). New York: Penguin Books.

Couldn’t they have predicted that?


I am following up, smoothly, on my last update in French, namely « Quel rapport avec l’incertitude comportementale ? », and so I give a little prod to the path of scientific research possible to develop around my work on the EneFin project. I am focusing on the practical assumptions of the project itself, and I am translating them into science. There is one notable difference between business planning and science (I mean, there is more than one, but this one is the one I feel like moaning about a little). In business, when you make bold assumptions, you can make people cautious or enthusiastic, it depends, really. In science, bold assumptions are what Agatha Christie’s characters used to describe as ‘a particularly laborious way to commit suicide’. Bold assumptions, in a scientific publication, are like feebly guarded doors to a vault full of gold: it is just a matter of time before someone tries and breaks in.

Thus, I am translating my bold assumptions from the business plan into weak scientific assumptions. A weak scientific assumption is the one which does not really assume a lot. The less is being assumed, the weaker is the assumption, hence the stronger it is against criticism. Sounds crazy, but what do you want, that’s science.

Anyway, the whole idea of EneFin came to my mind as I was comparing two types of end-user, retail prices in the European market of electricity: those offered to small users, like households, and those, noticeably lower, reserved for the big, institutional customers. EneFin is a scheme which allows small users of electricity to get it at the actual low price for big customers, and, at the same time, still allows the suppliers to benefit from the surplus between the small-customer prices and the big-customer ones, only in the form of equity, not sales as such.

There are markets, thus, where, at a moment t, there is a difference between the price of electricity for small users PESU(t), and that for the big ones: PEBU(t). Formally, I express it as PESU(t) > PEBU(t) or as PESU(t) – PEBU(t) > 0. The difference PESU(t) – PEBU(t) > 0 comes from different levels of consumption (Q) per user, thus from QSU(t) < QBU(t).

Now, I further (weakly) assume that the difference PESU(t) – PEBU(t) > 0 can make a behavioural incentive for the small users to migrate towards suppliers who minimize that gap. Alternatively, a supplier can offer additional economic utility ‘U’ to the user, on top of the energy supplied. I imagine two different markets of energy, with two different games being played. In the market A, small users migrate between suppliers so as to minimize the differential in prices, and the desired outcome of the game is min[PESU(t) – PEBU(t)]. In the market B, a similar type of migration occurs, just with a different prize in view, namely maximizing the additional economic utility offered by the suppliers of energy to compensate the PESU(t) – PEBU(t) gap. In other words, in the market B, the desired outcome of the game is max{U = f[PESU(t) – PEBU(t)]}. That utility can consist, for example, of claims on the equity of the suppliers, just as in my EneFin concept. Still, we experience the same type of scheme from the part of our usual suppliers. As an example, I can give the contractual scheme that my current supplier, the Polish company Tauron, uses to secure the loyalty of customers. The thing is called ‘Professional 24’ and is a 24/24 emergency service for everything that touches electrical and/or mechanical maintenance in the user’s house. If my dishwasher breaks down, I have the option to call ‘Professional 24’ and they will fix the thing at the cost of spare parts, no labour compensation. All I have to do in order to benefit from that wonder scheme is to sign a fixed-term contract for 2 years. In other words, I pay that high price for small users, and thus I pay a really juicy surplus over the price paid by big users, and in exchange Tauron gives me the opportunity to use those maintenance services at no cost of labour.
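A toy calculation makes the stakes of the two games concrete. All numbers below are invented for illustration (expressed in euro cents per kWh, to keep the arithmetic exact), not taken from the manuscript:

```python
# Invented example prices, in euro cents per kWh.
P_ESU = 24   # retail price for small users, PESU(t)
P_EBU = 16   # price for big institutional users, PEBU(t)

gap = P_ESU - P_EBU          # the price fork: PESU(t) - PEBU(t) > 0
assert gap > 0

Q_SU = 2500                  # assumed yearly consumption of one small user, kWh

# Market A: small users migrate so as to minimise the gap itself.
# Market B: suppliers compensate the gap with additional utility U, e.g.
# claims on their equity, worth at most the surplus paid over the
# big-user price.
U_max = gap * Q_SU / 100     # yearly surplus per small user, in EUR
print(U_max)                 # 200.0 EUR potentially convertible into equity
```

Under these made-up numbers, one household generates a yearly surplus of 200 EUR over the big-user price: that surplus is the pie which the min[PESU(t) – PEBU(t)] game shrinks, and which the max{U = f[PESU(t) – PEBU(t)]} game converts into something else, equity claims included.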

Now, I assume that both markets, namely A and B, and their corresponding games can overlap in the same physical market. Thus, there are two games being played in parallel, the min[PESU(t) – PEBU(t)] and the max{U = f[PESU(t) – PEBU(t)]} one. Both are being played by a population of users NU, and that of suppliers MS. A third game, that of status quo, is hanging around as well. The general theoretical questions I ask are the following: « Under what conditions can each of the three games – the min[PESU(t) – PEBU(t)], the max{U = f[PESU(t) – PEBU(t)]}, or the status quo – prevail in the market, and what can be the long-term implications of such prevalence? Should the max{U = f[PESU(t) – PEBU(t)]} game prevail, under what conditions can the U = f[PESU(t) – PEBU(t)] utility find its expression in claims on the equity of suppliers? ».

The next step consists in translating those general questions into hypotheses, which, in turn, should have two basic attributes. In the first place, a good hypothesis is simple and coherent enough to enable rational classification of all the observable phenomena into two subsets: those conform to the hypothesis, on the one hand, and those which make that hypothesis sound false, on the other hand. Secondly, a good hypothesis is empirically verifiable, i.e. I can devise and apply a rational method of empirical research to check the veracity of what I have hypothesised.

Intuitively, I turn towards one of the most fundamental economic concepts, i.e. that of equilibrium. I hypothesise that there is an equilibrium point, where the outcomes of the min[PESU(t) – PEBU(t)] game are equal to those of the max{U = f[PESU(t) – PEBU(t)]} game, thus min[PESU(t) – PEBU(t)] = max{U = f[PESU(t) – PEBU(t)]}. This is my hypothesis #1. Postulating the existence of an equilibrium is sort of handy, as it gives all the freedom to explore the neighbourhood of that equilibrium.

My second hypothesis goes a bit more in depth of the avenue I am following in my EneFin concept. I assume that the max{U = f[PESU(t) – PEBU(t)]} game makes sort of a framework, within which distinct subgames emerge, oriented on different kinds of that utility ‘U’, like U = {U1, U2, …, Uk}. In that set of utilities, one item is made of claims on the equity of suppliers. I call it Ueq. From there, I can hypothesise in two directions. One way is to postulate a hierarchy inside the U = {U1, U2, …, Uk} set, with Ueq maxing out in that hierarchy, so that Ueq = max{U = f[PESU(t) – PEBU(t)]}.

The other way is to open up, once again, with the concept of equilibrium, and postulate that although, basically, we have U1 ≠ U2 ≠ … ≠ Uk in the U = {U1, U2, …, Uk} set, there is a set of equilibriums, where Ueq = Ui. Going into the equilibrium department, instead of the hierarchy one, is just simpler. I can take the function of Ueq, as an equation, put it at equality with any other function, and, as long as those identities are solvable at all, Bob’s my uncle, essentially. In that sense, equilibrium, or the absence thereof, is almost self-explanatory. On the other hand, hierarchies need a structuring function, more complex than that of an equilibrium. A structuring function is a set of conditions expressed as inequalities, and I need to nail down quite specifically the conditions for those inequalities being real inequalities. Seen from this perspective, the hypothesis with equilibrium is sort of conducive towards the one with hierarchy.

My mind makes a leap, now, towards that thing of political systems. Playing a game means winning or losing, and one of the biggest prizes to win or lose is a country, i.e. the controlling package of political power in said country. I teach a few curriculums which involve the understanding of political systems, and I did quite a bit of research in that field. Anyway, the hot topic I want to refer to is Brexit, and more exactly the policy paper entitled ‘The future relationship between the United Kingdom and the European Union’, issued by Her Majesty’s Department for Exiting the European Union. The leap I am doing, from that model of the energy market towards Brexit, I am doing it with some method. I started developing on the theory of games, and politics are probably one of the most obvious and straightforward applications thereof.

My students frequently ask me questions like: ‘Why does this stupid government do things this way? Couldn’t they be more rational?’. The first thing I try to get across as I attempt to answer those questions is that in public governance some strategies just work, and some others just don’t, with little margin of manoeuvre in between. Policies are like complex patterns of behaviour, manifest in complex, intelligent entities called ‘political systems’. In the case of Brexit, the initial game played by Her Majesty’s government was akin to the strategy used by the government of the United States. The United States are signatories to multilateral, international agreements like the GATT (General Agreement on Tariffs and Trade), or the NAFTA. Still, the dominant institutional contrivance that the US Federal Government uses to design their international economic relations is the bilateral agreement. The logic is simple: in any bilateral agreement, the US are the dominant party to the contract, and they can dictate the conditions. In multilateral schemes, they can be outvoted, and you don’t like being outvoted when you know you have a bigger button than anyone else.

Before I go further, there is an important distinction to grasp, namely that between an international agreement, and a treaty. An agreement is essentially made by executives – usually ministers or Prime Ministers – who sign it on behalf of their respective governments. Parliaments do not need to ratify signed agreements; nor do such agreements require a referendum. Agreements remain essentially executive acts, and, as such, they are flexible. Countries can easily back off from such schemes. The easiest way to do it sort of respectably is to vote no confidence in a Prime Minister, and to label what they had done as a series of mistakes. Treaties, on the other hand, are ratified. Parliaments, presidents, monarchs, and, in the case of the European Union, whole nations voting in referendums, give their final fiat to the signature of an executive. It is bloody hard to pull out of a treaty, as it essentially requires walking back on your tracks, i.e. reverting the whole sequence of ratifying decisions.

The Britons seem to have bet on a similar horse. They decided to pull out of the European Union – a multilateral treaty, bloody limiting and clumsy to renegotiate – and to govern their economic relations with other countries with a set of bilateral agreements. Each of those bilateral agreements was supposed to be sort of tailored for the specific economic relations between Britain and the given country. Being an agreement, and not a treaty, each such understanding was supposed to be much more manœuvrable than a multilateral treaty.

Mind you, the game was worth playing, as I see it, and still there was a risk. The prize to win was a lot of local business deals, impossible or very hard to achieve under the common rules of the European Union. The big hurdle to jump over, on the way, was the specific geopolitical structure of the EU. If you want to replace your multilateral relations with a set of countries by a range of bilateral relations, you need to look at the hierarchy of the whole tribe. In the European village, we have two big blokes: France and Germany. Negotiating bilateral agreements must have started with them, and there was clearly no point in going and knocking on other doors as long as these two bilateral schemes were not nailed down and secured.

Without entering into highly speculative details, one thing is sure and certain: this particular step in the Perfect Plan simply didn’t work. Neither France nor Germany expressed any will to play one-on-one with the British government. Instead, they quickly secured beachheads in that sort of international political void being created by Britain pulling out of the EU, and worked towards brutally pushing the Britons against the wall. As a result, today, we have that strange architecture expressed in the ‘Future relationship…’ policy paper, where Britain enters an agreement with the whole of EU, without being a member of the EU anymore.

In theoretical terms, this political episode demonstrates an important trait in games: they are made of successive moves. When you play chess, or any other game with sequenced moves, you have that little voice in your head saying: ‘This is just a game. In real life, no sensible person would wait until their opponent makes a move’. Weelll, yes and no. It is true that in real life we play few games with gentleman’s rules in place. Most real games involve a fair dose of sucker punches, coming from the least expected directions. Still, there is that thing: even if you firmly intend to be the meanest dog in the pack, you need to adapt accordingly, and in order to adapt, you need to observe other players and figure them out. You just need to leave them that little window in time, during which you will be watching them and learning from their actions. This is why in theoretical games we frequently assume sequential moves. It has nothing to do with being fair and honest; it is much more about having to learn by observation.

This is what comes to mind when somebody studies the Brexit policy finally adopted by the government of Her Majesty. ‘Couldn’t they have predicted that…[put whatever between those parentheses]?’. No, they couldn’t. When you want to know the move of another player, you have to make your move in order to force them to make theirs. Once you have made that move, it can be too late to back off. This is probably the biggest difference between mainstream economics and the theory of games. The former assumes the existence of equilibriums, which we can sort of come close to, and adjust, in a series of essentially reversible actions. The theory of games assumes, on the other hand, that most of our actions bring irreversible consequences, as what we do makes other people do things.

After that short, distracting excursion into Brexit, I come back to my scientific development on the market of energy. The political distraction allowed me to define something important in any kind of game: a single move. In those three games, which I imagine being played in parallel in the market of energy – the min[PESU(t) – PEBU(t)] game, the max{U = f[PESU(t) – PEBU(t)]} game, and the conservation of status quo – a single move can be defined as what people usually do when dealing with a supplier of energy. My intuition wanders around what I do, actually, and what I do is sign, every two years, a contract with my supplier for another fixed term of two years. Better that than nothing. I assume that one move, in my energy games, consists in negotiating and signing a fixed-term, two-year contract.

As I define one move in this manner, I intuitively feel like including quantities in the formal expression of those games. Thus, I transform the min[PESU(t) – PEBU(t)] game into min{QSU(t)*[PESU(t) – PEBU(t)]}, and the max{U = f[PESU(t) – PEBU(t)]} game into max{U = f{QSU(t)*[PESU(t) – PEBU(t)]}}. I just remind you that QSU(t) is the typical consumption of energy per one small user, like one household. An explanation seems due. Why have I made that match between ‘one move <=> one 2-year contract’ and the inclusion of quantity consumed into my equations? A contract for 2 years is a mutual promise of supplying-consuming a certain amount of energy.

That QSU(t) can be provisionally identified with the average, individual consumption of energy over one year. Hence, an individual move – a contract for two years – amounts to committing to [PESU(t+1)*QSU(t+1)] + [PESU(t+2)*QSU(t+2)]. Such a formal expression allows further rewriting of my two games, namely I have:

Game A: min{QSU(t+1)*[PESU(t+1) – PEBU(t+1)] + QSU(t+2)*[PESU(t+2) – PEBU(t+2)]}

Game B: max{U = f{QSU(t+1)*[PESU(t+1) – PEBU(t+1)] + QSU(t+2)*[PESU(t+2) – PEBU(t+2)]}}

With this formulation, my two games are very nearly identical. They both contain an identical aggregate, calculated between the ‘{ }’ braces. As I put it a few paragraphs ago, I want to explore my model by testing a hypothetical equilibrium point between those two games. This, in turn, amounts to searching for a function which, for a given range of Q and P, can yield a maximal utility out of {QSU(t+1)*[PESU(t+1) – PEBU(t+1)] + QSU(t+2)*[PESU(t+2) – PEBU(t+2)]}, and, at the same time, has at least one intersection point with a function that minimizes the same aggregate.
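The near-identity of the two games can be made concrete with a toy computation. The sketch below, with entirely invented contract numbers and an assumed concave shape for the utility f, computes the common aggregate for a handful of hypothetical two-year contracts, and checks whether the minimizing game and the maximizing game pick the same contract:

```python
# A minimal sketch of Games A and B over a few hypothetical two-year contracts.
# All numbers, and the shape of the utility f, are invented for illustration.

def contract_aggregate(q, pes, peb):
    # QSU(t)*[PESU(t) - PEBU(t)], summed over the two contract years t+1 and t+2
    return sum(qt * (ps - pb) for qt, ps, pb in zip(q, pes, peb))

# candidate contracts: (quantities, supplier prices, buyer prices) per year
contracts = {
    "A": ((2.4, 2.5), (0.18, 0.19), (0.15, 0.15)),
    "B": ((2.5, 2.5), (0.16, 0.16), (0.15, 0.16)),
    "C": ((2.6, 2.7), (0.20, 0.21), (0.14, 0.14)),
}

aggregates = {name: contract_aggregate(*c) for name, c in contracts.items()}

# Game A: minimise the aggregate
game_a_choice = min(aggregates, key=aggregates.get)

# Game B: maximise a utility f of the same aggregate; here f is an assumed
# concave shape, peaking at some interior value of the aggregate
f = lambda x: -(x - 0.1) ** 2
game_b_choice = max(aggregates, key=lambda n: f(aggregates[n]))

print(game_a_choice, game_b_choice)  # equilibrium if the two choices coincide
```

In this particular toy run the two games pick different contracts; my hypothesis #1 amounts to claiming that, somewhere in the space of Q and P, the two choices coincide.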

As I think about it, I need to include transaction costs in the model. I mean, moves in those games consist in signing contracts. A contract implies uncertainty, probability of opportunistic behaviour, and commitment of assets to a specific purpose. In other words, it implies transaction costs, as in: Williamson 1973[1]. I need to wrap my mind around it.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your suggestions on two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Williamson, O. E. (1973). Markets and hierarchies: some elementary considerations. The American economic review, 63(2), 316-325.



Any given piece of my behaviour (yours too, by the way)

My editorial

Those last weeks I have been very much involved with designing experimental environments. I want to develop a business plan for investing in smart cities, and a good business plan could do with an understanding of how human behaviour can change in various experimental conditions. Thus, how are people likely to behave when living in a smart city? How is their behaviour going to be different from that of people living in a classical, non-smart (dumb?) city? Behaviour is what we do in response to stimuli from our environment. OK, so now I begin by defining what I do. When I take on defining something apparently so bloody complex that my head is turning at the very thought of defining it, the first thing I do is to structure what I do, i.e. I distinguish pieces in the whole.

What kind of categories can I distinguish in what I do? One of the first distinctions I can come up with is by the degree of recurrence. There are things I do so regularly that I don’t always even notice I do them. When repeating those actions, I practically fly on automatic pilot. Walking is one of them. I breathe quite systematically as well, as I think about it. I drive my car almost every day, along mostly repetitive itineraries. As I sail onto the waters of the incidental, and the shore of mindless recurrence progressively vanishes behind me, I cross that ring of islands, like the rings of Saturn, where big things happen just sometimes, yet they happen in many, periodically spaced sometimes. These are summer holidays, Christmases, or wedding anniversaries. As I sail past the reef of those reassuringly festive occasions, I enter the hardly charted waters of whatever happens next: these are things that I subjectively perceive as absolutely uncertain, and which still happen with a logic I cannot grasp as it unfolds.

So, here we are with one distinction inside our behaviour: the steady stream of routine, decorated with the regular patches of periodical, ritualized, big events, and all that occasionally visited by the hurricanes of the uncertain. When people live in an urban environment (in any environment, as a matter of fact), their living is composed of three behavioural types: routines, cyclical actions, and reactions to what they perceive as unpredictable. If living in smart cities is about to change our urban lives, it should have some kind of impact upon our behaviour, thus on routines, cyclical events and emergencies. How can it happen and how can we experiment with it?

Another intuitive distinction about our behaviour is that between freedom and constraint. I perceive some of my actions as taken and done out of my sheer free will, whilst I see some others as done under significant constraint. I know that the very concept of free will is arguable, yet I have decided to rely on my intuition, and this is not a good moment to back off. Thus, I rely and I distinguish. There are actions which I perceive as undertaken and carried out freely. Trying to be logical, now, I interpret that feeling of freedom as my own experience of choice. In some situations, I am experiencing quite a broad repertoire of alternative paths to take in my action. A lot of alternatives means that I don’t have enough information to figure out one best way of doing things, and I am entertaining myself with my own feeling of uncertainty. Freedom is connected to uncertainty, but not just to uncertainty. If I can do things in many alternative ways, it means nobody tells me to do those things in one, precise way. There are no ready-made recipes for the situation, or relevant social norms, in my local culture. On the other hand, my highly constrained behaviour corresponds to situations tightly regulated by social norms.

When I have two different distinctions, I can make a third one, two-dimensional, this time. In the most obvious and the least elaborate form it is a table, as shown below:

                      Free behaviour                 Constrained behaviour
                      (no detailed social norms)     (normatively regulated)
Routine behaviour     Modality #1                    Modality #2
Cyclical behaviour    Modality #3                    Modality #4
Emergency behaviour   Modality #5                    Modality #6

A normal, fully sane person would leave that table as it is, but I am a scientist, and I have inside me that curious ape, that happy bulldog, and the austere monk. I just need some maths to have something to rummage in. I just have to convert my table into a manifold, with those two nice axes. Maybe I could even trace an indifference curve in it, who knows? Anyway, I need to convert modalities into numbers. The kind of numbers I see here are probabilities. The head of the table, namely the distinction between freedom and constraint, can be translated into the probability that any given piece of my behaviour (yours too, by the way) is regulated by an unequivocal social norm. It is more fun than you think, as a matter of fact, as we have lots of situations where many social norms are involved and they are kind of conflicting. I am driving, in order to pick my kid up from school, and suddenly I drive over a dog. I should stop and give emergency care to the dog, but then I will not pick up my child from school on time. Of course, at the end of the day, we can convert all such dilemmas into the Hamletic “to be or not to be”, which really narrows down the scope of available options. Still, real life is complicated.

Anyway, I am passing now to scaling the side of my table numerically, as a probability, and I am bumping against a problem: if I translate the recurrence of anything into a probability, it would be the probability of it happening in a definite period of time. Thus, it would be a binomial distribution of probability. I take my period of time, like one month, for example, and I just stuff each occurrence in my behaviour into one of two bags: “yes, it happens at least once in one month” or “no, it doesn’t”. The binomial distribution is fascinating for studying the issue of structural stability (see Fringe phenomena, which happen just sometimes), but in a numerical manifold it gives just two discrete classes, which is not much of a numerical approach, really. I have to figure out something else, and that something else is simply recurrence, understood as the cycle of happening: every day, every three days, every millennium etc.
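Both framings can be written down in a few lines. The sketch below, with an invented recurrence cycle of 10 days, puts the binomial, two-bags framing next to the plain cycle-of-happening framing:

```python
# Two framings of recurrence, with invented numbers:
# (1) the binomial, yes/no-in-a-month framing, and (2) plain recurrence as a cycle.

from math import comb

def p_at_least_once(p_daily, days=30):
    # binomial framing: probability of at least one occurrence in a month
    return 1 - (1 - p_daily) ** days

def binomial_pmf(k, n, p):
    # probability of exactly k occurrences in n days
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# an action recurring, on average, every 10 days -> daily probability of 1/10
cycle = 10
p_daily = 1 / cycle

print(round(p_at_least_once(p_daily), 3))       # the "yes, at least once" bag
print(round(binomial_pmf(3, 30, p_daily), 3))   # exactly 3 times in a month
```

The first number is the two-bags view (it lands close to certainty for anything with a short cycle), while the cycle itself stays a continuous quantity, which is what the graph below the table actually needs.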

And so I come up with that nice behavioural graph in PDF, available from the library of my blog. See? Didn’t I tell you I would make an indifference curve? This is the red one in the graph. It is convex, with its tails nicely, asymptotically gliding along the axes of reference, so it is bound to be an indifference curve, or an isoquant. The only problem is that I haven’t figured out, yet, what kind of constant quantity it measures. It will come to me, no worries. Still, for the moment, what comes is the idea that on the two tails of this curve I have somehow opposite patterns of behaviour, mostly as regards their modifiability. On the bottom right tail, where those ritualized routines dwell, I can modify human behaviour simply by modifying one simple rule, or just a few of them. From now on, I tell those people (or myself) to do things in way B, instead of way A, and Bob’s my uncle: with any luck, and with a little help from Mr Selten and Mr Hammerstein (1994[1]), those people (or I) will soon forget that the rule has ever been changed. On the opposite, upper left tail of that curve, I have things happening really just sometimes, and virtually no rules to regulate human behaviour. How the hell can I modify behavioural patterns in these whereabouts? Honestly, nothing sensible comes to my mind.

Smart cities mean lots of digital technologies. I have just watched a short video, featuring a robot (well, a pair of automated arms fixed to the structure of a bar), which can prepare hundreds of meals, like a professional cook, imitating the movements of a human. Looks a bit scary, I can tell you, but this is what a smart city can look like: some repetitive jobs done by robots. Besides robots, what can we have in a smart city, in terms of smart technologies? GPS tracking, real-time functional optimization (sounds complicated, but this is what you have, for example, in those escalators which suddenly speed up when you step onto them), personal identification, quick interpersonal communication, and the Internet of things (an escalator can send emails to a cooling pump, which, in turn, can make friends, via social media, among the local smart energy grids). These technologies can take the functional form of: robots (something moving), mobile apps in a phone, in a pair of glasses etc. (something that makes people and things move), and infrastructure (something that definitely shouldn’t move). In their smart form, these things can optimize energy, and learn. We have come to call the latter capacity Artificial Intelligence. I think that it is precisely the learning part that can affect our lives the most, in a smart city. We, humans, are kind of historically used to learning faster than our environment. We are proudly accustomed to figuring out things about things before those things change. In a smart city, we have things figuring out things about us, and at an accelerating pace.

In one of my previous updates (see Smart cities, or rummaging in the waste heap of culture) I made those four hypotheses about smart cities. Good, now I can reappraise those four hypotheses in terms of human behaviour. We behave the way we behave because we have learnt to do so. In a smart city, we will be behaving in the presence of technologies which can possibly learn faster and better than us. Now, keeping in mind that table and that graph, above, how can the coexistence with something possibly smarter than us modify our patterns of behaviour? Following the logic which I have just unfolded, modification of behaviour can start in the bottom right area of my graph, or with Modality #2 in the tabular form, and then it could possibly move kind of along the red curve in the graph. Thus, what I previously wrote about new patterns observable in handling money, in consuming energy, in rearranging the geography of our habitat, and finally in shaping our social hierarchies, means that smart cities, and their inherently intelligent technologies, can impact our behaviour first and foremost by creating and enforcing new rules for highly recurrent, ritualized actions in our life.


[1] Hammerstein, P., & Selten, R. (1994). Game theory and evolutionary biology. Handbook of game theory with economic applications, 2, 929-993.

The path of thinking, which has brought me to think what I am thinking now

My editorial

I am thinking about the path of research to take from where I am now. A good thing in view of defining that path would be to know exactly where I am now, mind you. I feel like summarising a chunk of my work, approximately the three last weeks, maybe more. As I finished that article about technological change seen as an intelligent, energy-maximizing adaptation, I kind of went back to my idea of local communities being powered at 100% by renewable energies. I wanted to lay some scientific foundations for a business plan that a local community could use to go green at 100%. More or less intuitively, I don’t really know why exactly, I connected this quite practical idea to Bayesian statistics, and I went straight for the kill, so to say, by studying the foundational paper of this whole intellectual stream, the one from 1763 (Bayes, Price 1763[1]). I wanted to connect the idea of local communities based entirely on renewable energies to that of a local cryptocurrency (i.e. based on the Blockchain technology), somehow attached to the local market of energy. As I made this connection, I kind of put back to back the original paper by Thomas Bayes with that by Satoshi Nakamoto, the equally mysterious intellectual father of the Bitcoin. Empirically, I did some testing at the level of national data about the final consumption of energy, and about the primary output of electricity, I mean about the share of renewable energy in these. What I have, out of that empirical testing, is quite a lot of linear models, where I multiple-regress the shares, or the amounts, of renewable energies on a range of socio-economic variables. Those multiple regressions brought some seemingly solid stuff.
The share of renewable energies in the primary output of electricity is closely correlated with the overall dynamics in the final consumption of energy: the faster the growth of that total market of energy, the greater the likelihood of shifting the production of electricity towards renewables. As far as dynamics are concerned, the years 2007 – 2008 seem to have marked some kind of threshold: until then, the global market in renewable energies had been growing at a slower pace than the total market of energy, whilst since then the paces have switched, and the renewables have started to grow faster than the whole market. I am still wrapping my mind around that fact. The structure of economic input, understood in terms of the production function, matters as well. Labour-intensive societies seem to be more prone to going green in their energy base than capital-intensive ones. As I was testing those models, I intuitively used the density of population as a control variable. You know, that variable which is not quite inside the model, but kind of sitting by and supervising. I tested my models in separate quantiles of density in population, and some interesting distinctions came out of it. As I tested the same model in consecutive sextiles of density in population, the model went through a cycle of change, with the most explanatory power, and the most robust correlations, occurring in the presence of the highest density in population.
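The testing scheme I describe here can be sketched in miniature. The code below uses entirely invented observations, just one regressor, and two density groups standing in for the six sextiles; it only shows the mechanics of fitting the same model separately inside each density group and comparing the coefficients:

```python
# Toy sketch of the density-controlled regressions described above.
# All data points and variable names are invented for illustration.

def ols_slope_intercept(x, y):
    # ordinary least squares for y = a + b*x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# observations: (density, labour_intensity, renewable_share) -- invented numbers
obs = [
    (50, 0.30, 0.10), (60, 0.35, 0.12), (80, 0.40, 0.15),
    (300, 0.30, 0.12), (350, 0.45, 0.22), (400, 0.50, 0.25),
]

# split into two density groups (stand-ins for the sextiles used in the text)
low = [(li, rs) for d, li, rs in obs if d < 200]
high = [(li, rs) for d, li, rs in obs if d >= 200]

for name, group in (("low density", low), ("high density", high)):
    a, b = ols_slope_intercept([g[0] for g in group], [g[1] for g in group])
    print(name, "intercept:", round(a, 3), "slope:", round(b, 3))
```

In the real exercise the model has several regressors and six density sextiles, but the logic is the same: the density variable never enters the equation itself; it just decides which observations each fit gets to see.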

I feel like asking myself why I have been doing what I have been doing. I know, for sure, that the ‘why?’ question is abyssal, and a more practical way of answering it consists in hammering it into a ‘how?’. What has been my process? Step 1: I finish an article, and I come to the conclusion that I can discuss technological change in the human civilisation as a process of absorbing as much energy as we can, and of adapting to maximise that absorption through an evolutionary pattern similar to sexual selection. Step 2: I blow some dust off my earlier idea of local communities based on renewable energies. What was the passage from Step 1 to Step 2? What had been crystallising in my brain at the time? Let’s advance step by step. If I think about local communities, I am thinking about a dispersed structure, kind of a network, made of separate and yet interconnected nodes. I was probably trying to translate those big, global paradigms, which I had identified before, into local phenomena, the kind you can experience whilst walking down the street, starting a new small business, or looking for a new job. My thinking about local communities going 100% green in their energy base could be an expression of even deeper and less articulate thinking about how we, humans, in our social structure, maximize that absorption of energy I wrote about in my last article.

Good, now Step 3: I take on the root theory of Bayesian statistics. What made me take that turn? I remember I started to read that paper by pure curiosity. I like reading the classics, very much because only by reading them do I discover how much bulls*** has been said about their ideas. What attracted my attention, I think, in the original theory by Thomas Bayes, was that vision of a semi-ordered universe, limited by previous events, and the attempt to assess the odds of having a predictable number of successes over quite a small number of trials, a number so small that it defies the logic of expected values in big numbers, in the style of De Moivre and Laplace. I was visibly thinking about people, in local communities, making their choices, taking a limited number of trials at achieving some outcome, and continuing or giving up, according to said outcomes. I think I was trying, at the time, to grasp the process of maximizing the absorption of energy as a sequence of individual and collective choices, achieved through trial and error, with that trial and error chaining into itself, i.e. creating a process marked by hysteresis.
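Bayes’ original question can be restated in a few lines of code: after a small number of trials, what are the odds that the underlying chance of success sits between two given bounds? With a uniform prior, the posterior is the Beta(k+1, n−k+1) distribution; the sketch below integrates it numerically, with invented trial counts:

```python
# The original Bayesian question, in modern dress: P(lo < p < hi | k successes
# in n trials), with a uniform prior over p. Trial counts are invented.

from math import comb

def posterior_prob(k, n, lo, hi, steps=100000):
    """Probability that the true chance of success lies in (lo, hi)."""
    norm = (n + 1) * comb(n, k)      # 1 / B(k+1, n-k+1), the Beta normalizer
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        p = lo + (i + 0.5) * h       # midpoint rule for the integral
        total += p ** k * (1 - p) ** (n - k)
    return norm * total * h

# e.g. 7 successes in 10 trials: the odds that the true chance exceeds one half
print(round(posterior_prob(7, 10, 0.5, 1.0), 3))
```

This is exactly the small-numbers regime mentioned above: ten trials tell you something, but the posterior interval stays wide, which is the whole point of working below the reach of De Moivre and Laplace.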

Step 4: putting the model of the Bitcoin, by Satoshi Nakamoto, back to back with the original logic of Thomas Bayes. The logic used by Satoshi Nakamoto, back in the day, was that of a race, inside a network, between a crook trying to abuse the others, and a chained reaction from the part of ‘honest’ nodes. The questions asked were: how quick does a crook have to be in order to overcome the chained reaction of the network? How big, and how quick on the uptake, does the network have to be in order to fend the crook off? I was visibly thinking about rivalling processes, where rivalry sums up to overtaking and controlling some kind of consecutive nodes in a network. What kind of processes could I have had in mind? Well, the most obvious choice are the processes of absorbing energy: we strive to maximise our absorption of energy, we have the choice between renewable energies and the rest (fossils plus nuclear), and those choices are chained, and they are chained so as to unfold in time at various speeds. I think that when I put Thomas Bayes and Satoshi Nakamoto on the same school bench, the undertow of my thinking was something like: how do the choices we make influence further choices we make, and how does that chain of choices impact the speed at which the market of renewable energy develops, as compared to the market of other energy sources?
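Nakamoto’s race can be computed directly. The function below follows the calculation from the Bitcoin paper: the probability that an attacker controlling a share q of the network’s computing power ever catches up from z blocks behind.

```python
# The attacker's catch-up probability, following the calculation in the
# Bitcoin paper: Poisson-weighted gambler's-ruin outcomes.

from math import exp, factorial

def attacker_success(q, z):
    """Probability an attacker with power share q catches up from z blocks behind."""
    p = 1.0 - q                      # honest nodes' share of computing power
    lam = z * (q / p)                # expected attacker progress while z honest blocks arrive
    total = 1.0
    for k in range(z + 1):
        poisson = lam ** k * exp(-lam) / factorial(k)
        total -= poisson * (1 - (q / p) ** (z - k))
    return total

# a 10% attacker fades quickly as the honest chain grows
for z in (0, 1, 2, 5):
    print(z, round(attacker_success(0.1, z), 6))
```

The shape of the answer is what matters for my analogy: each extra node the ‘honest’ chain secures makes the rival process exponentially less likely to ever take over, which is exactly the kind of chained, hysteresis-marked race I suspect between rivalling energy technologies.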

Step 5: empirical tests, those multiple regressions in a big database made of ‘country – year’ observations. Here, at least, I am pretty much at home with my own thinking: I know I habitually represent in my mind those big economic measures, like GDP per capita, or density of population, or the percentage of green energy in my electric socket, as the outcome of complex choices made by simple people, including myself. As I did that regressing, I probably, subconsciously, wanted to understand how some types of economic choices we make impact other types of choices, more specifically those connected to energy. I found some consistent patterns at this stage of research. Choices about the work we do and about professional activity, and about the wages we pay and receive, are significant to the choices about energy. The very basic choice to live in a given place, and so to cluster together with other humans, has a word or two to say, as well. The choices we make about consuming energy, and more specifically the choice of consuming more energy than the year before, are very important for the switch towards the renewables. Now, I noticed that turning point, in 2007 – 2008. Following the same logic, 2007 – 2008 must have been the point in time where the aggregate outcomes of individual decisions concerning work, wages, settlement and the consumption of energy summed up into a change observable at the global scale. Those outcomes are likely to come out, in fact, from a long chain of choices, where the Bayesian space of available options has been sequentially changing under the impact of past choices, and where the Bitcoin-like race of rivalling technologies took place.

Step 6: my recent review of literature about the history of technology showed me a dominant path of discussion, namely that of technological determinism, and, kind of on the margin of that, the so-called Moore’s law of exponentially growing complexity in one particular technology: electronics. What did I want to understand by reviewing that literature? I think I wanted some ready-made (well, maybe bespoke) patterns, to dress my empirical findings up for posh occasions, such as a conference, an article, or a book. I found out, with surprise, that the same logic of ‘choice >> technology >> social change >> choice etc.’ has been followed by many other authors, and that it is, actually, the dominant way of thinking about the history of technology. Right, this is the path of thinking which has brought me to think what I am thinking now. Now, what questions do I want to answer, after this brief recapitulation? First of all, how to determine the Bayesian rectangle of occurrences regarding the possible future of renewable energies, and what is that rectangle actually likely to be? Answering this question means doing something we, economists, are second to none at doing poorly: forecasting. Splendid. Secondly, how does that Bayesian rectangle of limited choice depend on the place a given population lives in, and how does that geographical disparity impact the general scenario for our civilisation as a whole? Thirdly, what kind of social change is likely to follow along?

[1] Bayes, T., & Price, R. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683–1775), 53, 370–418.