As it is ripe, I can harvest

I keep revising my manuscript titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, in order to resubmit it to the journal Applied Energy. In my last update, titled ‘Still some juice in facts’, I used the technique of reverse reading to break the manuscript down into a chain of ideas. Now, I start reviewing the most recent literature associated with those ideas. I start with Rosales-Asensio et al. (2020)[1], i.e. with ‘Decision-making tools for sustainable planning and conceptual framework for the energy–water–food nexus’. The paper belongs to a broader stream of literature which I already mentioned in the first version of my manuscript, namely the so-called MuSIASEM framework, where energy management in national economies is viewed as a metabolic function, and socio-economic systems in general are equated to metabolic structures. Energy, water, food, and land are considered in this paper as sectors of the economic system, i.e. as chains of markets where economic goods are exchanged. We know that energy, water and food are interconnected, and that all three are connected to the way our human social structures work. Yet, in the study of those connections, we have been drifting into ever more complex theoretical models, hardly workable at all when applied to actual policies. Rosales-Asensio et al. propose a method to simplify theoretical models in order to make them functional in decision-making. Water, land, and food can be included in economic planning as soon as we explicitly treat them as valuable assets. Here, the approach by Rosales-Asensio et al. goes interestingly against the current of something that can be labelled ‘popular environmentalism’. Whilst the latter treats those natural (or semi-natural, in the case of the food base) resources as invaluable, and therefore impossible to put a price tag on, Rosales-Asensio et al. argue that it is much more workable, policy-wise, to do exactly the opposite, i.e. to give explicit prices and book values to those resources. The connection between energy, water, food, and the economy is modelled as a transformation of matrices, thus as something akin to a Markov chain of states.

The next article I pass in review is the one by Al-Tamimi and Al-Ghamdi (2020), titled ‘Multiscale integrated analysis of societal and ecosystem metabolism of Qatar’ (Energy Reports, 6, 521-527, https://doi.org/10.1016/j.egyr.2019.09.019). This paper presents interesting findings, namely that energy consumption in Qatar, between 2006 and 2015, grew at a faster rate than GDP over the same period, whilst energy consumption per capita and energy intensity grew at approximately the same rate. That could suggest some kind of trade-off between productivity and the energy intensity of an economy. Interestingly, the fall in productivity was accompanied by increased economic activity of Qatar’s population, i.e. the growth of the professionally active population, and thence of the labour market, was faster than the overall demographic growth.

In still another paper, titled ‘The energy metabolism of countries: Energy efficiency and use in the period that followed the global financial crisis’ (Energy Policy, 139, 111304, 2020, https://doi.org/10.1016/j.enpol.2020.111304), Professor Valeria Andreoni develops a line of research where rapid economic change, even crisis-like change, contributes to reducing the energy intensity of national economies. Still, some kind of blueprint for energy-efficient technological change needs to be in place at the level of national policies. Energy-efficient technological change might be easier than we think, and yet, apparently, it needs some sort of accompanying economic change as its trigger. Energy efficiency seems to be correlated with competitive technological development in national economies. Financial constraints can hamper those positive changes. Cross-sectional (i.e. inter-country) gaps in energy efficiency are essentially detrimental to sustainable development. Public policies should aim at closing those gaps, by integrating the market of energy within the EU.

Velasco-Fernández, R., Pérez-Sánchez, L., Chen, L., & Giampietro, M. (2020), in the article titled ‘A becoming China and the assisted maturity of the EU: Assessing the factors determining their energy metabolic patterns’ (Energy Strategy Reviews, 32, 100562, https://doi.org/10.1016/j.esr.2020.100562), bring empirical results somewhat similar to mine, although obtained with a different method. The number of hours worked per person per year is mentioned in this paper as an important variable of the MuSIASEM framework for China. There is, for example, a comparison of the energy metabolized in the sector of paid work with that metabolized in the household sector. It is found that the aggregate amount of human work used in a given sector of the economy is closely correlated with the aggregate energy metabolized by that sector. The economic development of China, and its pattern of societal metabolism in using energy, displays an increase in the level of capitalization of all sectors, with a reduction of human activity (paid work) in all of them except services. At the same time, the amount of human work per unit of real output seems to be negatively correlated with the capital intensity (or capital endowment) of particular sectors of the economy. Energy efficiency seems to be driven by decreasing work intensity and increasing capital intensity.

I found another similarity to my own research, although from a different angle, in the article by Koponen, K., & Le Net, E. (2021), ‘Towards robust renewable energy investment decisions at the territorial level’ (Applied Energy, 287, 116552, https://doi.org/10.1016/j.apenergy.2021.116552). The authors build a simulation model in Excel, where they create m = 5000 alternative futures for a networked energy system, aiming at optimizing 5 performance metrics, namely: the LCOE cost of electricity, the GHG metric (greenhouse gas emissions) for the climate, the density of PM2.5 and PM10 particles in the ambient air as a metric of health, the capacity of power generation as a technological benchmark, and the number of jobs as a social outcome. That complex vector of outcomes has been simulated as dependent on a vector of uncertainty as regards costs, more specifically: the cost of CO2, the cost of electricity, the cost of natural gas, and the cost of biomass. The model was based on actual empirical data for those variables, and the ‘alternative futures’ are, in other words, 5000 alternative states of the same system. Outcomes are gauged with so-called regret analysis, where the relative performance on a specific outcome is measured as the residual difference between its local value and, respectively, its general minimum or maximum, depending on whether the given metric is something we strive to maximize (e.g. capacity) or to minimize (e.g. GHG). The regret analysis is very similar to the estimation of residual local error.
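Just to fix the idea of regret analysis for myself, here is a minimal sketch in Python of how such a ranking of alternative futures could be computed. The numbers and metric names are made up for illustration; only the logic – distance to the best value achieved anywhere in the simulated set, minimum for costs and maximum for benefits – follows the description above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the m = 5000 simulated futures: each entry is one
# performance metric observed across all the alternative states of the system.
m = 5000
futures = {
    "lcoe_eur_mwh": rng.normal(60, 15, m),      # cost of electricity - minimize
    "ghg_kt_co2eq": rng.normal(800, 200, m),    # emissions - minimize
    "pm_ug_m3": rng.normal(12, 4, m),           # particulate matter - minimize
    "capacity_mw": rng.normal(500, 120, m),     # generation capacity - maximize
    "jobs": rng.normal(1500, 400, m),           # employment - maximize
}
maximize = {"capacity_mw", "jobs"}

# Regret of each simulated future on each metric: distance to the best value
# achieved anywhere in the set (minimum for costs, maximum for benefits).
regret = {}
for name, values in futures.items():
    best = values.max() if name in maximize else values.min()
    regret[name] = np.abs(values - best)

# A simple aggregate: normalize each regret column to [0, 1], average across
# metrics, and pick the future with the smallest overall regret.
stacked = np.column_stack([r / r.max() for r in regret.values()])
overall = stacked.mean(axis=1)
print("least-regret future:", overall.argmin(), "overall regret:", overall.min())
```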

That short review of literature has the merit of showing me that I am not completely off the mark with the method and the findings which I initially presented to the editor of Applied Energy in that manuscript: ‘Climbing the right hill – an evolutionary approach to the European market of electricity’. The idea of understanding the mechanism of change in social structures, including the market of energy, by studying many alternative versions of said structure, seems to be catching on in the literature. I am progressively wrapping my mind around the fact that in my manuscript, the method is more important than the findings. The real value for money of my article seems to reside in the extent to which I can demonstrate the reproducibility and robustness of that method.

Thus, probably for the umpteenth time, I am rephrasing the fundamentals of my approach, and I am trying to fit it into the structure which Applied Energy recommends for articles submitted to their attention. I should open with an ‘Introduction’, where I sketch the purpose of the paper, as well as the main points of the theoretical background my paper stems from, although without entering into a detailed study thereof. Then, I should develop ‘Material and Methods’, with the main focus on making my method as reproducible as possible. Next come, respectively, ‘Theory’ and ‘Calculation’, thus elaborating on the theoretical foundations of my research as pitched against the literature, and on the detailed computational procedures I used. I guess that I need to distinguish, at this specific point, between the literature pertinent to the substance of my research (Theory), and that oriented on the method of working with empirical data (Calculation).

Those four initial sections – Introduction, Material and Methods, Theory, Calculation – open the topic up, and then comes the time to give it a closure, with, respectively: ‘Results’, ‘Discussion’, and, optionally, a separate ‘Conclusion’. On top of that logical flow, I need to add sections pertinent to ‘Data availability’, ‘Glossary’, and ‘Appendices’. As I move further from the core substance of my manuscript, and deeper into peripheral information, I need to address three succinct ways of presenting my research: Highlights, a Graphical Abstract, and a structured cover letter. Highlights are 5 – 6 bullet points, 1 – 2 lines each, a sort of abstract translated into a corporate presentation on slides. The Graphical Abstract is a challenge – I need to present complex ideas in pictographic form – and it is an interesting challenge. The structured cover letter should address the following points:

>> what is the novelty of this work?

>> is the paper appealing to a popular or scientific audience?

>> why the author thinks the paper is important and why the journal should publish it?

>> has the article been checked by an expert native speaker?

>> is the author available as reviewer?

Now, I ask myself fundamental questions. Why should anyone bother about the substance and the method of the research I present in my article? I have noticed, both in public policies and in business strategies, a tendency to formulate completely unrealistic plans, and then to complain about other people not being smart enough to carry those plans out and up to a happy ending. It is very visible in everything related to environmental policies and environmentally friendly strategies in business. Environmental activism consumes itself, very largely, in bashing everyone around for not being diligent enough in saving the planet.

To me, it looks very similar to what I did many times as a person: unrealistic plans, obvious failure which anyone sensible could have predicted, frustration, resentment, practical inefficiency. I did it many times, and, obviously, whole societies are perfectly able to do it collectively. Action is key to success. A good plan is a plan which utilizes and reinforces the skills and capacities I already have, turns those skills into recurrent patterns of action, something like one good thing done per day, whilst clearly defining the skills I need to learn in order to be even more complete and more efficient in what I do. A good public policy, just as a good business strategy, should work in the same way.

When we talk about energy efficiency, or about the transition towards renewable energies, what is our action? Like really, what is the most fundamental thing we do together? Do we purposefully increase energy efficiency, in the first place? Do we deliberately transition to renewables? Yes, and no. Yes, at the end of the day we get those outcomes, and no, what we do on a daily basis is something else. We work. We do business. We study in order to get a job, or to start a business. We live our lives, from day to day, and small outcomes of that daily activity pile up, producing big cumulative change.   

Instead of discussing what we do completely wrong, and thus need to change, it is a better direction to discover what we do well, consistently and with visible learning. That line of action can be reinforced and amplified, with good results. The literature reviewed so far suggests that research concerning energy and the energy transition is progressively changing direction: from the tendency towards growing complexity and depth in study, dominant until recently, towards a translation of those complex, in-depth findings into relatively simple decision-making tools for policies and business strategies.

Here comes my method. I think it is important to create an analytical background for policies and business strategies, where we take commonly available empirical data at the macro scale, and use this data to discover the essential, recurrently pursued collective outcomes of a society, in the context of specific social goals. My point and purpose is to nail down a reproducible, relatively simple method of discovering what whole societies are really after. Once again, I think about something simple, which anyone can perform on their computer with access to the Internet. None of that fancy social engineering with personal data collected from unaware folks on Facebook. I want the equivalent of a screwdriver in positive, acceptably fair social engineering.

How do I think I can make a social screwdriver? I start with defining a collective goal we think we should pursue. In the specific case of my research on energy, it is the transition to renewable sources. I nail down my observation of achievement regarding that goal with a simple metric, e.g. the percentage of renewables in total energy consumed (https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS) or in total electricity produced (https://data.worldbank.org/indicator/EG.ELC.RNEW.ZS). I place that metric in the context of other socio-economic variables, such as GDP per capita, average hours worked per person per year, etc. At this point, I make an important assumption as regards the meaning of all the variables I use. I assume that if a lot of humans go to great lengths in measuring something and reporting those measurements, it must be important stuff. I know, it sounds simplistic, yet it is fundamental. I assume that quantitative variables used in social sciences represent important aspects of social life, which we do our best to observe and understand. Importance translates as a significant connection to the outcomes of our actions.
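In practice, that data-assembly step can be as banal as a few lines of pandas. The values below are dummies, not actual indicator readings; the point is simply one record per country and year, with the goal metric sitting next to its socio-economic context.

```python
import pandas as pd

# Dummy extracts standing in for the World Bank indicators linked above and for
# the accompanying socio-economic variables (values are made up for illustration).
renewables = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "year": [2014, 2015, 2014, 2015],
    "renew_share_pct": [11.5, 11.8, 13.8, 14.2],
})
context = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "year": [2014, 2015, 2014, 2015],
    "gdp_per_capita": [24300, 25300, 46500, 47100],
    "avg_hours_worked": [2015, 2010, 1371, 1368],
})

# One record per country-year: the goal metric next to its context variables.
dataset = renewables.merge(context, on=["country", "year"], how="inner")
print(dataset)
```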

Quantitative variables which we use in social sciences represent collectively acknowledged outcomes of our collective action. They inform about something we consistently care about, as a society, and, at the same time, something we recurrently produce, as a society. An array of quantitative socio-economic variables represents an imperfect, and yet consistently construed representation of complex social reality.

We essentially care about change. Both individual human nervous systems, and whole cultures, are incredibly accommodative. When we stay in a really strange state long enough to develop adaptive habits, that strange state becomes normal. We pay attention to things that change, whence a further hypothesis of mine that quantitative socio-economic variables, even if arithmetically they are local stationary states, serve us to apprehend gradients of change, at the level of collective, communicable cognition.

If the many different variables I study serve to represent, imperfectly but consistently, the process of change in social reality, they might zoom in on the right thing with various degrees of accuracy. Some of them reflect better the kind of change that is really important for us, collectively, whilst some others are just approximately accurate in representing those collectively pursued outcomes. An important assumption pops its head from between the lines of my writing: the bridging between pursued outcomes and important change. We pay attention to change, and some types of change are more important to us than others. Those particularly important changes are, I think, the outcomes we are after. We pay the most attention, both individually and collectively, to phenomena which bring us payoffs, or, conversely, which seriously hamper such payoffs. This is, once again on my path of research, a salute to the Interface Theory of Perception (Hoffman et al. 2015[2]; Fields et al. 2018[3]).

Now, the question is: how to extract orientations, i.e. objectively pursued collective outcomes, from that array of apparently important, structured observations of what is happening to our society? One possible method consists in observing trends and variance over time, and this is what I had largely done up to a point, and what I always do now with a fresh dataset, as a way of data mining. In this approach, I generally assume that a combination of relatively strong variance with strong correlation to the remaining metrics makes a particular variable likely to be the driving undertow of the whole social reality represented by the dataset at hand.
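A rough sketch of that data-mining pass could look like this in Python. The scoring rule is my own crude shorthand, not a canonical statistic: rank variables by their relative variance and by the average strength of their correlation with everything else, and look at what floats to the top.

```python
import numpy as np
import pandas as pd

def rank_driver_candidates(df: pd.DataFrame) -> pd.DataFrame:
    """Score each variable by combining its relative variance with the average
    strength of its correlation to the remaining variables."""
    scaled = (df - df.mean()) / df.std(ddof=0)           # put variables on one scale
    rel_var = df.std(ddof=0) / df.mean().abs()           # coefficient of variation
    corr = scaled.corr().abs()
    mean_corr = (corr.sum() - 1.0) / (len(df.columns) - 1)   # drop self-correlation
    score = rel_var.rank() + mean_corr.rank()             # crude combined ranking
    return pd.DataFrame({"rel_variance": rel_var,
                         "mean_abs_corr": mean_corr,
                         "score": score}).sort_values("score", ascending=False)

# Example with synthetic data standing in for an empirical dataset:
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.normal(100, 20, (300, 5)),
                    columns=["gdp_pc", "avh", "renew_share", "price_level", "hc"])
print(rank_driver_candidates(demo).head())
```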

Still, there is another method, which I focus on in my research, and which consists in treating the empirical dataset as a complex and imperfect representation of the way that collectively intelligent social structures learn by experimenting with many alternative versions of themselves. That general hypothesis leads to building supervised, purposefully biased experiments with that data. Each experiment consists in running the dataset through a specifically skewed neural network – a perceptron – where one variable from the dataset is the output which the perceptron strives to optimize, and the remaining variables make up the complex input instrumental to that end. Therefore, each such experiment simulates an artificial situation where one variable is the desired and collectively pursued outcome, with the other variables representing gradients of change subservient to that chief value.
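For the sake of illustration, here is a minimal Python sketch of one such pegged experiment. It is an assumed, simplified reading of the procedure, not the exact code behind the manuscript: a single tanh neuron estimates the chosen output variable from the remaining ones, and the error of each observation is both used to update the weights and fed forward into the data, which is what produces a transformed copy of the dataset.

```python
import numpy as np

def run_pegged_experiment(X: np.ndarray, output_col: int, epochs: int = 30,
                          lr: float = 0.1, seed: int = 0) -> np.ndarray:
    """One supervised experiment: a single-neuron perceptron treats column
    `output_col` as the outcome to optimize and the remaining columns as input.
    The observation-level error is fed forward by nudging the inputs of the next
    round, so the experiment returns a transformed copy of the dataset.
    (A minimal sketch of the idea, not the manuscript's exact procedure.)"""
    rng = np.random.default_rng(seed)
    # scale everything to [0, 1] so one activation function fits all variables
    lo, hi = X.min(axis=0), X.max(axis=0)
    data = (X - lo) / np.where(hi - lo == 0, 1, hi - lo)
    inputs = np.delete(data, output_col, axis=1)
    target = data[:, output_col]
    w = rng.normal(0, 0.1, inputs.shape[1])
    for _ in range(epochs):
        for i in range(len(inputs)):
            est = np.tanh(inputs[i] @ w)              # neural activation
            err = target[i] - est                     # error of observation
            w += lr * err * (1 - est**2) * inputs[i]  # gradient-style weight update
            inputs[i] += lr * err                     # feed the error forward into the data
    # reinsert the optimized variable as the network's final estimates
    return np.insert(inputs, output_col, np.tanh(inputs @ w), axis=1)
```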

When I run such experiments with any dataset, I create as many transformed datasets as there are variables in the game. Both for the original dataset and for the transformed ones, I can calculate the mean value of each variable, thus construing a vector of mean expected values; according to classical statistics, such a vector is representative of the expected state of the dataset in question. I end up with both the original dataset and the transformed ones being tied to their corresponding vectors of mean expected values. It is easy to estimate the Euclidean distance between those vectors, and thus to assess the relative mathematical resemblance between the underlying datasets. Here comes something I discovered rather than assumed: those Euclidean distances are very disparate, and some of them are one or two orders of magnitude smaller than all the rest. In other words, some among all the supervised experiments done yield a simulated state of social reality much more similar to the original, empirical one than all the other experiments. This is the methodological discovery which underpins my whole research in this article, and which emerged as pure coincidence, when I was working on a revised version of another paper, titled ‘Energy efficiency as manifestation of collective intelligence in human societies’, which I published with the journal ‘Energy’ (https://doi.org/10.1016/j.energy.2019.116500).
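Continuing the sketch, the comparison of vectors of mean expected values reduces to a few lines. Again, this is an illustrative assumption rather than the manuscript's exact computation; it reuses run_pegged_experiment from the previous snippet and compares everything on data normalized to the 0 – 1 interval.

```python
import numpy as np

def expected_state(dataset: np.ndarray) -> np.ndarray:
    """Vector of mean expected values: one mean per variable."""
    return dataset.mean(axis=0)

def rank_experiments(X: np.ndarray, epochs: int = 30) -> list:
    """Run one pegged experiment per variable (run_pegged_experiment above),
    then rank the experiments by the Euclidean distance between the mean-value
    vector of the transformed set and that of the (normalized) original set."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    scaled = (X - lo) / np.where(hi - lo == 0, 1, hi - lo)
    base = expected_state(scaled)
    distances = []
    for col in range(X.shape[1]):
        transformed = run_pegged_experiment(X, output_col=col, epochs=epochs)
        d = float(np.linalg.norm(expected_state(transformed) - base))
        distances.append((col, d))
    # Shortest distances first: typically a small handful of experiments stand
    # out as much closer to the empirical set than all the others.
    return sorted(distances, key=lambda pair: pair[1])
```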

My guess from there was – and still is – that those supervised experiments have disparate capacities to represent the social reality I study with the given dataset. Experiments which yield mathematical transformations most similar to the original set of empirical numbers are probably the most representative. Once again, the mathematical structure of the perceptron used in all those experiments is rigorously the same, and what makes the difference is the focus on one particular variable as the output to optimize. In other words, some among the variables studied represent much more plausible collective outputs than others.

I feel a bit lost in my own thinking. Good. It means I have generated enough loose thoughts to put some order in them. It would be much worse if I didn’t have thoughts to put order in. Productive chaos is better than sterile emptiness. Anyway, the reproducible method I want to present and validate in my article ‘Climbing the right hill – an evolutionary approach to the European market of electricity’ aims at discovering the collectively pursued social outcomes, which, in turn, are assumed to be the key drivers of social change. The path to that discovery leads through the hypothesis that such outcomes are equivalent to a specific gradient of change, which we collectively pay particular attention to in the complex social reality, imperfectly represented with an array of quantitative socio-economic variables. The methodological discovery which I bring forth in that reproducible method is that when any dataset of quantitative socio-economic variables is transformed, with a perceptron, into as many single-variable-optimizing transformations as there are variables in the set, 1 to 3 of those transformations are mathematically much more similar to the original set of observations than all the other transformed sets. Consequently, in this method, one can expect to find 1 to 3 variables which represent – much more plausibly than others – the possible orientations, i.e. the collectively pursued outcomes of the society I study with the given empirical dataset.

Ouff! I have finally spat it out. It took some time. The idea needed to ripen, intellectually. As it is ripe, I can harvest.


[1] Rosales-Asensio, E., de la Puente-Gil, Á., García-Moya, F. J., Blanes-Peiró, J., & de Simón-Martín, M. (2020). Decision-making tools for sustainable planning and conceptual framework for the energy–water–food nexus. Energy Reports, 6, 4-15. https://doi.org/10.1016/j.egyr.2020.08.020

[2] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic Bulletin & Review, 22(6), 1480-1506.

[3] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213. https://doi.org/10.1016/j.cogsys.2017.10.003

Still some juice in facts

I am working on improving my manuscript titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, after it received an amicable rejection from the journal Applied Energy, and, at the same time, I am working on other stuff. As usual. Some of that other stuff is a completely new method of teaching in the summer semester, a sort of gentle revolution, with glorious prospects ahead, and without guillotines (well, not always).

As for the manuscript, I intend to work in three phases. I restate and reformulate the main lines of the article, and this is phase one. I pass in review the freshest literature in energy economics, as well as in the applications of artificial neural networks therein, and this is phase two. Finally, in phase three, I plan to position my method and my findings vis-à-vis that latest research.

I start phase one. When I want to understand what I wrote about a year ago, it is very nearly as if I were trying to understand what someone else wrote. Yes, I function like that. I have pretty good long-term memory, and it is because I learnt to detach emotions from old stuff. I sort of archive my old thoughts in order to make room for the always slightly disquieting waterfall of new thoughts. I need to dig and unearth my past meaning. I use the technique of reverse reading to achieve that. I read written content from its end back upstream to its beginning, and I go back upstream at two levels of structure: the whole piece of text, and individual sentences. In practical terms, when I work with that manuscript of mine, I take the last paragraph of the conclusion, and I actively rewrite it backwards word-wise (I keep proper names unchanged). See for yourself.

This is the original last paragraph: ‘What if economic systems, inclusive of their technological change, optimized themselves so as to satisfy a certain workstyle? The thought seems incongruous, and yet Adam Smith noticed that division of labour, hence the way we work, shapes the way we structure our society. Can we hypothesise that technological change we are witnessing is, most of all, a collectively intelligent adaptation in the view of making a growing mass of humans work in ways they collectively like working? That would revert the Marxist logic, still, the report by World Bank, cited in the beginning of the article, allows such an intellectual adventure. On the path to clarify the concept, it is useful to define the meaning of collective intelligence’.

Now, I write it backwards: ‘Intelligence collective of meaning the define to useful is it concept the clarify to path the on adventure intellectual an such allows article the of beginning the in cited World Bank by report the still logic Marxist the revert that would that. Working like collectively they ways in work humans of mass growing a making view of the in adaptation intelligent collectively a all of most is witnessing are we change technological that hypothesise we can? Society our structure we way the shapes work we way the hence labour of division that noticed Adam Smith yet and incongruous seems thought the workstyle certain a satisfy to as so themselves optimized change technological their of inclusive systems economic if what?

Strange? Certainly, it is strange, as it is information with its pants on its head, and this is precisely why it is informative. The paper is about the market of energy, and my last paragraph of conclusions is about the market of labour, and its connection to the market of energy.

I go further upstream in my writing. The before-last paragraph of conclusions goes like: ‘Since David Ricardo, all the way through the works of Karl Marks, John Maynard Keynes, and those of Kuznets, economic sciences seem to be treating the labour market as easily transformable in response to an otherwise exogenous technological change. It is the assumption that technological change brings greater a productivity, and technology has the capacity to bend social structures. In this view, work means executing instructions coming from the management of business structures. In other words, human labour is supposed to be subservient and executive in relation to technological change. Still, the interaction between technology and society seems to be mutual, rather than unidirectional (Mumford 1964, McKenzie 1984, Kline and Pinch 1996; David 1990, Vincenti 1994). The relation between technological change and the labour market can be restated in the opposite direction. There is a body of literature, which perceives society as an organism, and social change is seen as complex metabolic adaptation of that organism. This channel of research is applied, for example, in order to apprehend energy efficiency of national economies. The so-called MuSIASEM model is an example of that approach, claiming that complex economic and technological change, including transformations in the labour market, can be seen as a collectively intelligent change towards optimal use of energy (see for example: Andreoni 2017 op. cit.; Velasco-Fernández et al 2018 op. cit.). Work can be seen as fundamental human activity, crucial for the management of energy in human societies. The amount of work we perform creates the need for a certain caloric intake, in the form of food, which, in turn, shapes the economic system around, so as to produce that food. This is a looped adaptation, as, on the long run, the system supposed to feed humans at work relies on this very work’.

Here is what comes from reverted writing of mine: ‘Work very this on relies work at humans feed to supposed system the run long the on as adaptation looped a is this food that produce to around system economic the shapes turn in which food of form the in intake caloric certain a for need the creates perform we work of amount the societies human in energy of management the for crucial activity human fundamental as seen be can work. Energy of use optimal towards change intelligent collectively a as seen be can market labour the in transformations including change technological and economic complex that claiming approach that of example an is model MuSIASEM called so the economies national of efficiency energy apprehend to order in example for applied is research of channel this. Organism that of adaptation metabolic complex as seen is change social and organism an as society perceives which literature of body a is there. Direction opposite the in restated be can market labour the and change technological between relation the. Unidirectional than rather mutual be to seems society and technology between interaction the still. Change technological to relation in executive and subservient be to supposed is labour human words other in. Structures social bend to capacity the has technology and productivity a greater brings change technological that assumption the is it. Change technological exogenous otherwise an to response in transformable easily as market labour the treating ne to seem sciences economic Kuznets of those and Keynes […], Marks […] of works the through way the all Ricardo […]’.

Good. I speed up. I am going back upstream through consecutive paragraphs of my manuscript. The chain of 35 ideas which I write here below corresponds to the reverted logical structure (i.e. from the end backstream to the beginning) of my manuscript. Here I go. Ideas listed below have numbers corresponding to their place in the manuscript. The higher the number, the later in the text the given idea is phrased out for the first time.

>> Idea 35: The market of labour, i.e. the way we organize for working, determines the way we use energy.

>> Idea 34: The way we work shapes technological change more than vice versa. Technologies and workstyles interact

>> Idea 33: The labour market offsets the loss of jobs in some sectors by the creation of jobs in other sectors, and thus the labour market accommodates the emergent technological change.

>> Idea 32: The basket of technologies we use determines the ways we use energy. Work in itself is human effort, and that effort is functionally connected to the energy base of our society.

>> Idea 31: Digital technologies seem to have a special function in mediating the connection between technological change and the labour market

>> Idea 30: the number of hours worked per person per year (AVH), the share of labour in the GNI (LABSH), and the indicator of human capital (HC) seem to make an axis of social change, both as input and as output of the collectively intelligent structure.

>> Idea 29: The price index in exports (PL_X) comes as the chief collective goal pursued, and the share of public expenditures in the Gross National Income (CSH_G) appears as the main epistatic driver in that pursuit.

>> Idea 28: The methodological novelty of the article consists in using the capacity of a neural network to produce many variations of itself, and thus to perform evolutionary adaptive walk in rugged landscape.

>> Idea 27: The here-presented methodology assumes: a) tacit coordination b) evolutionary adaptive walk in rugged landscape c) collective intelligence d) observable socio-economic variables are manifestations of the past, coordinated decisions.

>> Idea 26: Variance observable in the average Euclidean distances that each variable has with the remaining 48 ones reflects the capacity of each variable to enter into epistatic interactions with other variables, as the social system studied climbs different hills, i.e. pursues different outcomes to optimize.

>> Idea 25: Coherence: across 48 sets Si out of the 49 generated with the neural network, variances in Euclidean distances between variables are quite even. Only one set Si yields different variances, namely the one pegged on the coefficient of patent applications per 1 million people.

>> Idea 24: the order of phenomenal occurrences in the set X does not have a significant influence on the outcomes of learning.

>> Idea 23: results of multiple linear regression on natural logarithms of the variables observed are compared with the results of an artificial neural network applied to the same dataset – to pass in review and to rework – lots of meaning there.

>> Idea 22: the phenomena assumed to be a disturbance, i.e. the discrepancy in retail prices of electricity, as well as the resulting aggregate cash flow, are strongly correlated with many other variables in the dataset. Perhaps the most puzzling is their significant correlation with the absolute number of resident patent applications, and with its coefficient denominated per million of inhabitants. Apparently, the more patent applications in the system, the deeper is that market imperfection.

>> Idea 21: Another puzzling correlation of these variables is the negative one with the variable AVH, or the number of hours worked per person per year. The more an average person works per year, in the given country and year, the less likely this local market is to display harmful differences in the retail prices of electricity for households.

>> Idea 20: On the other hand, variables which we wish to see as systemic – the share of electricity in energy consumption and the share of renewables in the output of electricity – have surprisingly few significant correlations in the dataset studied, just as if they were exogenous stressors with little foothold in the market as of yet.

>> Idea 19: None of the four key variables regarding the European market of energy – a) the price fork in the retail market of electricity (€), b) the capital value of cash flow resulting from that price fork (€ mln), c) the share of electricity in energy consumption (%), and d) the share of renewables in electricity output (%) – seems to have been generated by a ‘regular’ Gaussian process: they all produce definitely too many outliers for a Gaussian process to be the case.

>> Idea 18: other variables in the dataset, the ‘regulars’ such as GDP or price levels, seem to be distributed quite close to normal, and Gaussian processes can be assumed to work in the background. This is a typical context for evolutionary adaptive walk in rugged landscape. An otherwise stable socio-economic environment gets disturbed by changes in the energy base of the society living in the whereabouts. As new stressors (e.g. the need to switch to electricity, from the direct combustion of fossil fuels) come into the game, some ‘mutant’ social entities stick out of the lot and stimulate an adaptive walk uphill.

>> Idea 17: The formal test of Euclidean distances, according to equation (1), yields a hierarchy of alternative sets Si, as for their similarity to the source empirical set X of m = 300 observations. This hierarchy represents the relative importance of the variables which each corresponding set Si is pegged on.

>> Idea 16: The comparative set XR has been created as a sequence of 10 stacked, pseudo-random permutations of the original set X, assembled into one database (see the sketch after this list). Each permutation consists in sorting the records of the original set X according to a pseudo-random index variable. The resulting set covers m = 3000 phenomenal occurrences.

>> Idea 15: The underlying assumption as regards the collective intelligence of that set is that each country learns separately over the time frame of observation (2008 – 2017), and once one country develops some learning, that experience is taken and reframed by the next country, etc.

>> Idea 14: we have a market of energy with goals to meet, regarding the local energy mix, and with a significant disturbance in the form of market imperfections

>> Idea 13: special focus on two variables, which the author perceives as crucial for tackling climate change: a) the share of renewable energy in the total output of electricity, and b) the share of electricity in the total consumption of energy.

>> Idea 12: A test for robustness, possible to apply together with this method, is based on a category of algorithms called ‘random forest’.

>> Idea 11: The vector of variances in the xi-specific fitness function V[xi(pj)] across the n sets Si has another methodological role to play: it can serve to assess the interpretative robustness of the whole complex model. If, across neural networks oriented on different outcome variables, the given input variable xi displays a pretty uniform variance in its fitness function V[xi(pj)], the collective intelligence represented in equations (2) – (5) performs its adaptive walk in rugged landscape coherently across all the different hills considered to walk up. Conversely, should all or most variables xi, across different sets Si, display noticeably disparate variances in V[xi(pj)], the network represents a collective intelligence which adapts in a clearly different manner to each specific outcome (i.e. output variable).

>> Idea 10: the mathematical model for this research is composed of 5 main equations, which, at the same time, make up the logical structure of the artificial neural network used for treating empirical data. That structure entails: a) a measure of mathematical similarity between numerical representations of a collectively intelligent structure, b) the expected state of the intelligent structure reverse-engineered from the behaviour of the neural network, c) neural activation and the error of observation, the latter being material for learning by measurable failure for the collectively intelligent structure, d) transformation of multivariate empirical data into one number fed into the neural activation function, and e) a measure of internal coherence in the collectively intelligent structure.

>> Idea 9: the more complexity, the more the hyperbolic tangent, based on the expression e^(2h), is driven away from its constant root e^2. Complexity in variables induces greater swings in the hyperbolic tangent, i.e. greater magnitudes of error, and, consequently, longer strides in the process of learning.

>> Idea 8: Each congruent set Si is produced with the same logical structure of the neural network, i.e. with the same procedure of estimating the value of output variable, valuing the error of estimation, and feeding the error forward into consecutive experimental rounds. This, in turn, represents a hypothetical state of nature, where the social system represented with the set X is oriented on optimizing the given variable xi, which the corresponding set Si is pegged on as its output.

>> Idea 7: complex entities can internalize an external stressor as they perform their adaptive walk. Therefore, observable variance in each variable xi in the set X can be considered as manifestation of such internalization. In other words, observable change in each separate variable can result from the adaptation of social entities observed to some kind of ‘survival imperative’.

>> Idea 6: hypothesis that collectively intelligent adaptation in human societies, regarding the ways of generating and using energy, is instrumental to the optimization of other social traits.    

>> Idea 5: Adaptive walks in rugged landscape consist in overcoming environmental challenges in a process comparable to climbing a hill: it is both an effort and a learning, where each step sets a finite range of possibilities for the next step.

>> Idea 4: the MuSIASEM methodological framework – aggregate use of energy in an economy can be studied as a metabolic process

>> Idea 3: human societies are collectively intelligent about the ways of generating and using energy: each social entity (country, city, region etc.) displays a set of characteristics in that respect

>> Idea 2: adaptive walk of a collective intelligence happens in a very rugged landscape, and the ruggedness of that landscape comes from the complexity of human societies

>> Idea 1: Collective intelligence occurs even in animals as simple neurologically as bees, or even as the Toxo parasite. Collective intelligence means shifting between different levels of coordination.
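The sketch announced at Idea 16, i.e. one way the comparative set XR of stacked pseudo-random permutations could be built (hypothetical code, assuming the empirical set X sits in a pandas DataFrame):

```python
import numpy as np
import pandas as pd

def stacked_permutations(X: pd.DataFrame, n_perm: int = 10, seed: int = 0) -> pd.DataFrame:
    """Comparative set XR: the original records stacked n_perm times, each copy
    sorted by a fresh pseudo-random index, i.e. a pseudo-random permutation."""
    rng = np.random.default_rng(seed)
    blocks = []
    for _ in range(n_perm):
        idx = rng.permutation(len(X))              # pseudo-random index variable
        blocks.append(X.iloc[idx].reset_index(drop=True))
    return pd.concat(blocks, ignore_index=True)    # m = n_perm * len(X) occurrences
```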

As I look at that thing, namely at what I wrote something like a year ago, I have a doubly comforting feeling. The article seems to make sense from the end to the beginning, and from the beginning to the end. Both logical streams seem coherent and interesting, whilst being slightly different in their intellectual melody. This is the first comfortable feeling. The second is that I still have some meaning, and, therefore, some possible truth, to unearth out of my empirical findings, and this is always a good thing. In science, the view of empirical findings squeezed out of their last bit of meaning and yet still standing as something potentially significant is one of the saddest perspectives one can have. Here, there is still some juice in facts. Good.

I needed that

It’s been quite a few days without me writing and posting anything new on my blog. This is one of those strange moments, when many different strands of action emerge, none is truly preponderant over the others, and I feel like having to walk down many divergent paths all at once. As such an exercise can end up in serious injuries, the smart way to go is to make those divergent paths converge at some point.

As usual in such situations of slight chaos in my head, I use the method of questions to put some order in it. Let’s do it. What do I want? I want to develop my theoretical concept of collectively intelligent social structure into a workable, communicable, and reproducible methodology of research. I want to use that methodology as the intellectual core of a big research and development project. The development part would be some kind of digital tool which, using an otherwise very simple version of an artificial neural network, can run the diagnosis of a society (e.g. a city), regarding: a) the collective outcomes pursued by the collective intelligence of that society, and b) the patterns of collective learning, and more specifically the phenomena which are likely to knock that society out of balance, as opposed to those which make it stabilize.

As I am writing these words, I intuitively guess that my investment in the stock market, the way I consistently do it, is successfully based on the hypothesis of collective intelligence in the stock market, and in the industries which I invest in. As I consistently oscillate around 50% of annual return on the cash invested in the stock market, that hypothesis of collective intelligence seems to be workable. When I think about my recipe for success, it strangely resembles the findings of my scientific research. In a paper published with the journal ‘Energy’, titled ‘Energy efficiency as manifestation of collective intelligence in human societies’, I found out that the coefficient of fixed assets per one patentable invention is a key variable that societies optimize, and prioritize over energy efficiency. When I look at my investment portfolio, and at what seems to work in it, it is precisely about some kind of balance between innovation and assets. When that sweet spot is there, the company’s stock brings me a nice return.

I want to develop my concept of collectively intelligent social structure into a method of teaching social sciences, and to interweave that teaching into the canonical subjects I teach: microeconomics, macroeconomics, international trade etc. I wonder how I can use that concept e.g. in business planning or in the analysis of contracts and legal acts.

What am I afraid of? What can possibly go wrong with my plans? Good question. My fears are essentially those of publicly acknowledged failure on my part. I am shit scared of being labelled a loser, but also of being seen as someone who fails to take on any challenge at all. There is another deep fear in me, and this is a strange fear, as it is interwoven with hope: it is both the fear and the hope of deep change in my existence, like changing my professional occupation for a radically new one, or moving to live in another place, that kind of thing. It looks like I dread two types of suffering: that coming from socially recognized failure in building my position in the social hierarchy, and that coming from existential change. Yet, my apprehension vis-à-vis those two types of suffering is different. Socially recognized failure is something I simply want to avoid. Existential change is that strange case of love and hate, a bit like my practice of the Wim Hof method. As I think of it, overcoming the fear of change can lead me to discovering new, wonderful things in my life, and this is what I want.

As I connect the dots I have just written down, it turns out that what I really need to do is to utilise my research on collective intelligence as a platform for deep existential change. What specific kind of change would both scare me and thrill me in the best possible combination? What kinds of change can I take into account at all? A change of job inside the same occupation, i.e. inside academia, for one. A further-reaching change of occupation, thus going outside academia, is the next level of professional change. The slightly fanciful move in that department would be to transform my investment in the stock market into a small investment fund for innovative projects, like a start-up fund. Moving to another place – a different city or a different country – is another option. A change of environment can be enormously stimulating, I know it by experience. Besides, my home country, Poland, is progressively turning into a mix of a catholic version of Iran, i.e. a religious state, with what I remember from the times of communism. A big part of the Polish population seems to be delighted with the process, and I am not delighted at all. I intuitively feel that compulsive thinking about how much ours is what we have means heading towards a disaster, and we just serve ourselves a lot of tranquilizing pills to kill the otherwise quite legitimate fear. It is all becoming both scary and suffocating, and I feel like getting out of the swamp before I sink too deep. Still, I know that a geographical move has to be backed with realistic assumptions as for my social role: job, family etc. I am the kind of big, steady animal, like a moose, and it is both physical and existential. Jumping from one rooftop to another, parkour-style, is something I like watching but completely suck at. I need a path and a structure to achieve change.

I am exploring my deeply hidden drivers, and I am trying to be honest with myself and my readers. Which of those existential moves looks the most tempting to me? I think that a progressive transition, or, I should rather say, expansion, out of academia is the most thrilling to me. I want it to be a progressive expansion, with a path of progress and learning. What do I need to learn in that process? In order to answer that question, I need to define my endgame, i.e. the target state I am working up to. In other words, how will I know I have what I want? I know I have a method when it has been intersubjectively validated, either by publication or by practical use in a collective research project. How will other people know I have what I want? How will other people know I have a valid method? They need to buy into its logic, and acknowledge it as fit for publication or for application in a collective research project.

Here comes a fortunate coincidence, which has just knocked me out of philosophizing and closer to actual life. A scientific journal, Applied Energy, has just rejected my manuscript titled ‘Climbing the right hill – an evolutionary approach to the European market of electricity’ – positively, so to say – and I am sort of happy about it. Why be happy about rejection? Well, in the world of science, there are two types of rejection: the ‘f**k you, man!’ type, and the maybe-if-you-improve-and-develop type. With that specific manuscript, I have already knocked at the doors of many scientific journals, and each time I received the former type of rejection letter. This time, with Applied Energy, it is the latter type. The editorial letter I have just received states: ‘While your submission is of interest to Applied Energy, your manuscript does not meet the following criteria, we are returning the manuscript to you before the review:

*Lack of scientific originality/novelty:

The novelty/originality shall be justified by highlighting that the manuscript contains sufficient contributions to the new body of knowledge. The knowledge gap needs to be clearly addressed in Introduction.

*Literature survey is not sufficient to present the most updated R&D status for further justification of the originality of the manuscript. You should carry out a thorough literature survey of papers published in a range of top energy journals in the last three/four years so as to fully appreciate the latest findings and key challenges relating to the topic addressed in your manuscript and to allow you to more clearly present your contributions to the pool of existing knowledge. In the case the subject is really novel and few or no specific references are found, the novelty of the subject, the methodology used and the similarity to other older or newer subjects should be explicitly addressed.

At this time, your submission will be rejected from Applied Energy but please feel free to re-submit to the journal once the aforementioned comments have been addressed’.

The journal Applied Energy is top of the food chain as far as journals about energy economics go. Such a nice and polite rejection from them is an invitation to dialogue. At last! I really needed that.

I am preparing teaching material for the next semester, and I am interweaving that stream of work with my research on collective intelligence in human societies. I drop by some published science, just to chat with Berghout, S., & Verbitskiy, E. (2021), ‘On regularity of functions of Markov chains’, Stochastic Processes and their Applications, https://doi.org/10.1016/j.spa.2020.12.006. There is a state of reality Xn = {x1, x2, …, xn}, which we cannot observe directly; {Xn} slips easily out of our observational capacity. Thus, instead of chasing ghosts, we nail down a set of observables {Yn} such that Yn = π(Xn), the π being a coding map of Xn, so that we can observe {Xn} through the lens of {Yn}.
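To make that setup tangible for students, a toy simulation is enough: a small, made-up Markov chain {Xn} that we pretend not to see, and a coding map π that lumps hidden states together into the observable {Yn}. The transition matrix below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state Markov chain {Xn}: hidden states and their transition matrix.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
n = 200
x = np.zeros(n, dtype=int)
for t in range(1, n):
    x[t] = rng.choice(3, p=P[x[t - 1]])

# Coding map pi: we never see the hidden state, only the coarser observable Yn = pi(Xn).
# Here pi lumps states 0 and 1 together, a deliberately lossy lens on {Xn}.
pi = np.array([0, 0, 1])
y = pi[x]

print("hidden states:", x[:15])
print("observables:  ", y[:15])
```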

These are the basic assumptions expressed in the paper by Berghout & Verbitskiy, and this is an important building block in my research and in my teaching. If I want to teach my hypothesis of collective intelligence to undergraduate students, I need to make it simple, and to show immediate benefits of using an analytical method based on it. I want to focus, for a moment, on the latter component, thus on practical applications. The hypothesis of collective intelligence implies that human societies are intelligent structures, and they learn new stuff by experimenting with many alternative versions of themselves. That capacity of learning by experimenting with ourselves, whilst staying structurally coherent, is precisely the gain from being collectively intelligent. Here, I go a bit far with my next claim: I think we can enhance our capacity of collective learning if we accurately grasp and communicate the exact way we learn collectively, i.e. the exact way we experiment with many alternative versions of us doing things together. That hypothesis comes from an observation about myself, and about some other people I know: when I narrate to myself the way I learn something, my learning speeds up. What if we, humans being together, can speed up the process of our collective learning by narrating to ourselves the exact way we learn?

Here, I stress the ‘exact way’ part. We have culture, which has recently been turning into outrage culture, with a lot of moralizing and little action. Here, I allow myself to quote one of my students. The guy comes from Rwanda, Africa, and in the class of management, when we were discussing different business concepts my students come up with, he gave the example of an actual business model which apparently grows like hell in Rwanda and in Africa as a whole. You buy a small fleet of electric cars, like 5 – 10, you rent them out, you assure full technical support to your clients, and you build a charging station for those cars, powered by a solar farm just next door. Investment goes into five types of assets: land, the solar farm with full equipment (big storage batteries included), the charging station, the electric cars, and the equipment for their maintenance. You sell rental hours, additional maintenance services, and energy from the charging station. Simple, clean, workable, just the way I like it.

When I heard that story from my student, I had one of those ‘F**k!’ realizations. In Europe, and I think in North America as well, when we want to do something for the planet and the climate, we start by bashing each other about how bad we are at it and how necessary it is to turn vegan, then we burn thousands of tons of fuel to gather in one place and do a big march for the planet, then we do a strike for climate, and finally we claim that the government should do something about the climate, and, by the way, it would be a good thing if Jeff Bezos gave away some of his wealth. In Rwanda, when those people realize they should take care of the climate and the planet, they develop businesses which do. I think their way is somehow more promising.

I come back to the exact way we learn collectively. There is the Greta-Thunberg-way of caring about the planet, and there is the Rwandan way. Both exist, both are different experimental versions of ourselves, and both get reinforced by communication. One march for the planet, properly covered by the media, incites further marches for the planet, and, in the same way, disseminating that business model – involving a small fleet of electric vehicles, charging stations and solar farms – is likely to speed up its development. Narrating to ourselves the ways we develop new technologies can speed up their development.

The exact way we learn collectively is made, in the first place, of the specific, alternative versions of the social structure. When I want to know the exact way we learn collectively, I need to look at the alternative versions (of our collective) which we are experimenting with, thus at the actual degrees of freedom we have in that experimentation. Those alternative versions are described in terms of observables Yn = π(Xn), which, in turn, are our best epistemological take on the otherwise unobservable reality {Xn}, through the coding map π.

I can see something promising here, I mean in that notion of actual experimental versions of ourselves. My scientific discipline, i.e. social sciences with a strong edge of economics and management, is plagued by claims that things ‘should be done’ in a given way just because it worked locally. Recently, I witnessed a heated debate between some acquaintances of mine, on Facebook, as to which economic model is better: the American one or the Scandinavian one. You know, the thing about education, healthcare, economic equality and stuff. As I was observing the ball of thoughts being played between those people, I had the impression of watching an argument without common ground. One camp argued that because something works in Sweden or Finland, it should be applied everywhere, whilst their opponents claimed exactly the same about the American economic model. In the middle of that, I was watching the protagonists flexing their respective intellects, and I couldn’t help thinking about my own research on economic models. I found empirical evidence that economic systems, across the board, aim at optimizing the average number of hours worked per person per year, and the amount of education one needs to get into the job market. All the rest is apparently instrumental.

F**k! I got distracted once again. I am supposed to show practical applications of my hypothesis regarding collective intelligence. Here comes an idea for a research project, with some potential for acquiring a research grant, which is as practical an application as there can be in science. In my update titled ‘Out-of-the-lab monsters’, I hypothesised that economic recovery after the COVID-19 pandemic will be somewhat slower than we expect, and certainly very different in terms of business models and institutions. The pandemic has triggered accelerated change as regards the use of digital technologies, the prevalence of biotechnology as a business, and the social roles that people can take on. Therefore, it would be a good thing to know which specific direction that change is going to take.

My idea is to take a large sample of business entities listed in public stock markets, which disclose their activity via the mechanism of investor relations, and to study their publicly disclosed information in order to discover the exact direction they take in their business models. I am formulating the following hypothesis: in the economic conditions peculiar to the COVID-19 pandemic, business entities build up their reserves of cash and cash-equivalent securities in order to reinforce their strategic flexibility as regards technological change.

Out-of-the-lab monsters

That period, end of January, beginning of February, is usually a moment of reassessment for me. This might be associated with my job – I am a scientist and an academic teacher – and right now, it is the turn of semesters in my country, Poland. I need to have some plan of teaching for the next semester, and, with the pandemic still around, I need to record some new video material for the courses of the Summer semester: Macroeconomics, International Trade, and International Management.

That being said, I think that formulating my current research on collective intelligence in terms of teachable material could help me to phrase out those thoughts of mine coherently and intelligibly enough to advance with the writing of my book on the same topic. I feel like translating a few distinct pieces of scientific research into teaching. The theoretical science of Markov chains is the first one. The empirically observed rise of two technologically advanced industries, namely biotechnology and electric vehicles, comes as the second big thing. Thirdly, I want to develop on the general empirical observation that money tends to flow towards those new technologies even if they struggle to wrap themselves into operationally profitable business models. Next comes a whole set of empirical observations which I made à propos of the role of cities in our civilization. Finally, the way we collectively behave amidst the pandemic is, of course, the most obvious piece of empirical science I need to connect to in my teaching.

In discussing those pieces of science in a teachable form, I feel like using the method I have been progressively forming in my research over the last 2 years or so. I use simple artificial neural networks as simulators of collectively intelligent behaviour. I have singled out a few epistemological regularities I feel like using in my teaching. Large datasets of socio-economic variables seem to have privileged orientations: they sort of wrap themselves around some specific variables rather than others. When disturbed with a random exogenous factor, the same datasets display different ways of learning, depending, precisely, on the exact variable I make them wrap themselves around. One and the same dataset, annoyingly disturbed by the buzz of a random disturbance, displays consistent learning when oriented on some variables, and goes haywire when oriented on others.
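To make this less abstract for my students, here is a toy sketch of that procedure, in Python. The dataset is random and the network is a bare-bones perceptron, so none of this is my actual research code; the point is only to show what ‘orienting a dataset on a variable’ and ‘disturbing it with random noise’ can look like in practice.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: 300 observations of 5 standardized socio-economic variables.
# (Synthetic placeholder numbers; in real research these would be actual data.)
X_all = rng.normal(size=(300, 5))

def learning_path(data, oriented_on, noise=0.1, epochs=50):
    """Train a tiny tanh perceptron to reproduce one 'orientation' variable
    from the remaining ones, under a random exogenous disturbance.
    Returns the mean squared error recorded at each epoch."""
    y = data[:, oriented_on]
    X = np.delete(data, oriented_on, axis=1)
    w = rng.normal(scale=0.1, size=X.shape[1])
    errors = []
    for _ in range(epochs):
        X_disturbed = X + rng.normal(scale=noise, size=X.shape)  # the exogenous buzz
        h = np.tanh(X_disturbed @ w)                 # neural activation
        grad = (y - h) * (1.0 - h ** 2)              # error times tanh derivative
        w += 0.01 * X_disturbed.T @ grad / len(y)    # one small learning step
        errors.append(np.mean((y - h) ** 2))
    return errors

# Same dataset, two different orientations: the error paths differ.
for k in (0, 3):
    path = learning_path(X_all, oriented_on=k)
    print(f"oriented on variable {k}: start MSE {path[0]:.3f}, end MSE {path[-1]:.3f}")
```

With real socio-economic data, the interesting part is precisely that the error path stays consistent for some orientations and goes haywire for others.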

On the top of all that, I want to use in my teaching the experience I have collected when investing in the stock market. This is mostly auto-narrative experience, about my own behaviour and my own reactions when sailing in my tiny boat across the big ocean, filled with sharks, of the stock market.

What exactly do I want to teach my students? I mean, I know the labels: Macroeconomics, International Trade, International Management. These are cool labels. Yet, what do I want to teach in terms of real skills and understanding? I think that my core message is that science is f**king amazing, and when we combine scientific thinking with good, old-fashioned perseverance and grit, great things emerge. My students are young people, and having been their age, back in the day, I know that entering adulthood and developing personal independence is a lot about pretending, and a lot about finding one’s place in a fluid, essentially chaotic reality. That place is called a social role. I think I can deliver valuable teaching as for how to use the basic tools of social sciences in order to make ourselves good, functional social roles.

Concurrently to that purpose, I have another one, about mathematics. I can see in many of my students the same kind of almost visceral, and yet visibly acquired, abhorrence of mathematics which I used to have in my mind. I think this is one of the failures in our educational system: early at school, we start learning mathematics as multiplication tables, which quite thoroughly kills the understanding that mathematics is a language. It is a language which speaks about the structure of reality, just a bit less convivially than spoken languages do. That language proves bloody useful when talking about tough and controversial subjects, such as ways of starting a new business from scratch (hence engaging people’s equity into something fundamentally risky), ways of getting out of an economic crisis, or ways of solving a political conflict.

I think I can teach my students to perceive their existence as if they were travelling engineers in the small patch of social reality around them, particularly engineers of their own social role. Look around you, across the surrounding social landscape. Find your bearings and figure out your coordinates on those bearings. Formulate a strategy: set your goals, assess your risks, make the best-case scenario and the worst-case scenario. What is your action? What can you do every day in order to implement that strategy? Therefore, what repetitive patterns of behaviour should you develop and become skilful at, in order to perform your action with the best possible outcomes? Let’s be clear: it is not about being world champion in anything (although it wouldn’t hurt), it is about being constructively optimistic, with a toolbox close at hand.

What do I really know about macroeconomics, international trade, and international management? This is a fundamental question. Most of what I know, I know from the observation of secondary sources. Periodical financial reports of the companies, coupled with their stock prices, and with general economic reports, such as the World Economic Outlook, published by the International Monetary Fund, are my basic sources of information about what’s up in business and economics. What I know in those fields is descriptive knowledge.    

Where do I start? We, humans, form collectively intelligent structures which learn by experimenting with many alternative versions of themselves. Those versions are built around a fundamental balance between two institutional orders: the institutions of agriculture, which serve as a factory of food, and the institutions of cities, whose function consists in creating and sustaining social roles, whilst speeding up technological change. We collectively experiment with ourselves by creating demographic anomalies: abnormally dense populations in cities, next door to abnormally dispersed populations in the countryside. I think this is the fundamental distinction between the populations of hunters-gatherers, and the populations of settlers. Hunters-gatherers live in just one social density, whilst settlers live in two of them: the high urban density coexisting with low rural density.

I can put it in a different way. We, humans, interact with the natural environment, and interact with each other.  When we interact with each other a lot, in highly dense networks of social relations, we reinforce each other’s learning, and start spinning the wheel of innovation and technological change. Abundant interaction with each other gives us new ideas for interacting with the natural environment.

Cities have peculiar properties. Firstly, by creating new social roles through intense social interaction, they create new products and services, and therefore new markets, connected in chains of value added. This is how the real output of goods and services in a society becomes a complex, multi-layered network of technologies, and this is how social structures become self-propelling businesses. The more complexity in social roles is created, the more products and services emerge, which brings development in a greater number of markets. That, in turn, yields greater real output and greater income per person, which incentivizes the creation of new social roles, etc. This is how social complexity creates the phenomenon called economic growth.

The phenomenon of economic growth, thus the quantitative growth in complex, networked technologies which emerge in relatively dense human settlements, has a few peculiar properties. You can’t see it, you can’t touch it, and yet you can immediately feel when its pace changes. Economic growth is among the most abstract concepts of social sciences, and yet living in a society with real economic growth at 5% per annum is like a different galaxy when compared to living in a place where real economic growth is actually a recession of -5%. The arithmetical difference is just 10 percentage points on an underlying base of 1. Still, lives in those two contexts are completely different. At +5% in real economic growth, starting a new business is generally a sensible idea, provided you have it nailed down with a business plan. At -5% a year, i.e. in recession, the same business plan can be an elaborate way of committing economic and financial suicide. At +5%, political elections are usually won by people who just sell you the standard political bullshit, like ‘I will make your lives better’ claimed by a heavily indebted alcoholic with no real career of their own. At -5%, politics start being haunted by those sinister characters, who look and sound like evil spirits from our dreams and claim they ‘will restore order and social justice’.

The society which we consider today as normal is a society of positive real economic growth. All the institutions we are used to, such as healthcare systems, internal security, public administration, education – all that stuff works at least acceptably smoothly when the complex, networked technologies of our society have demonstrable capacity to increase their real economic output. That ‘normal’ state of society is closely connected to the factories of social roles which we commonly call ‘cities’. Real economic growth happens when the amount of new social roles – fabricated through intense interactions between densely packed humans – is enough for the new humans coming around. Being professionally active means having a social role solid enough to participate in the redistribution of value added created in complex technological networks. It is both formal science and sort of accumulated wisdom in governance that we’d better have most of the adult, able-bodied people in that state of professional activity. A small fringe of professionally inactive people is somehow a healthy margin of human energy, free to be professionally activated, and when I say ‘small’, it is like no more than 5% of the adult population. Anything above becomes both a burden and a disruption to social cohesion. Too big a percentage of people with no clear, working social roles makes it increasingly difficult to make social interactions sufficiently abundant and complex to create enough new social roles for new people. This is why governments of this world attach keen importance to the accurate measurement of the phenomenon quantified as ‘unemployment’.

Those complex networks of technologies in our societies, which have the capacity to create social roles and generate economic growth, do their work properly when we can transact about them, i.e. when we have working markets for the final economic goods produced with those technologies, and for the intermediate economic goods produced for them. It is as if the whole thing worked when we can buy and sell things. I was born in 1968, in a communist country, namely Poland, and I can tell you that in the absence of markets the whole mechanism just jams, progressively to a halt. Yes, markets are messy and capricious, and transactional prices can easily get out of hand, creating inflation, and yet markets give those little local incentives needed to get the most out of human social roles. In the communist Poland, I remember people doing really strange things, like hoarding massive inventories of refrigerators or women’s underwear, just to create some speculative spin in an ad hoc, semi-legal or completely illegal market. It looks as if people needed to market and transact for real, amidst the theoretically perfectly planned society.

Anyway, economic growth is observable through big sets of transactions in product markets, and those transactions have two attributes: quantities and prices, AKA Q and P. It is like Q*P = ∑ q_i*p_i. When I have – well, when we have – that complex network of technologies functionally connected to a factory of social roles for new humans, that thing makes ∑ q_i*p_i, thus a lot of local transactions with quantities q_i, at prices p_i. The economic growth I have been so vocal about in the last few paragraphs is the real growth, i.e. in quantity Q = ∑ q_i. In the long run, what I am interested in, and my government is interested in, is to reasonably max out on ∆Q = ∆∑ q_i. Quantities change slowly and quite predictably, whilst prices tend to change quickly and, mostly in the short term, chaotically. Measuring real economic growth accurately involves kicking the ‘*p_i’ component out of the equation and extracting just ∆Q = ∆∑ q_i. Question: why bother with the observation of Q*P = ∑ q_i*p_i when the real thing we need is just ∆Q = ∆∑ q_i? Answer: because there is no other way. Complex networks of technologies produce economic growth by creating increasing diversity in social roles in concurrence with increasing diversity in products and their respective markets. No genius has come up, so far, with a method to add up, directly, the volume of visits in hairdressers’ salons with the volume of electric vehicles made, and all that with the volume of energy consumed.
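For the students, a minimal numerical sketch of that extraction, in Python, with invented quantities and prices for two products over two years; the trick of valuing new quantities at old prices is the standard constant-price (deflator) approach, not anything specific to my own method.

```python
# Two products, two years: quantities q and prices p (illustrative numbers only).
year1 = {"haircuts": (100, 20.0), "vehicles": (10, 30000.0)}
year2 = {"haircuts": (105, 22.0), "vehicles": (12, 31000.0)}

nominal1 = sum(q * p for q, p in year1.values())
nominal2 = sum(q * p for q, p in year2.values())

# Kick the '*p_i' component out: value year-2 quantities at year-1 prices.
real2_at_base_prices = sum(year2[k][0] * year1[k][1] for k in year1)

nominal_growth = nominal2 / nominal1 - 1
real_growth = real2_at_base_prices / nominal1 - 1   # the 'delta Q' at constant prices

print(f"nominal growth: {nominal_growth:.1%}")
print(f"real growth:    {real_growth:.1%}")
```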

Cities trade. Initially, they trade with the surrounding farms, out in the countryside, but, with time, the zone of trade relations tends to extend, and, interestingly enough, its extent is roughly proportional to the relative weight of the given city’s real output in the overall economic activity of the whole region. It is as if cities were developing some sort of gravitational field around them. The bigger the city as compared to other cities in the vicinity, the greater share of overall trade it takes, both in terms of exports and imports. Countries with many big cities trade a lot with other countries.     

There is an interesting relationship between exports and imports. Do I, as a person, import anything? Sure, I import plenty of goods. This software I am writing in is an imported good, to start with. Bananas which I ate for breakfast are imported. I drive a Honda, another imported good. My washing machine is a Samsung, my dish washer is a Siemens, and my phone and computer both come from Apple. I am a walking micro-hub of imports. Do I export anything? Almost nothing. One could argue that I export intellectual content with my blog. Still, as I am not being paid (yet) for my blog, it is rather voluntary cultural communication than exports. Well, there is one thing that creates a flow of export and import in me: my investment in the stock market. The money which I invested in the stock market is mostly placed in US-based companies, a few German and Dutch, and just a tiny bit is invested in Poland. Why? Because there is nothing happening in the Polish stock market, really. Boring. Anyway, I sort of export capital.

Cities and countries import a whole diversified basket of goods, but they usually export just a few, which they are really good at making and marketing. There is something like structural asymmetry between exports and imports. As soon as economic sciences started to burgeon, even before they were called economics and were still designated as ‘political economy’, social thinkers were trying to explain that phenomenon. Probably the best known is the explanation by David Ricardo, namely the notion of comparative advantage AKA productive specialization. There are exceptions, called ‘super exporters’, e.g. China or South Korea. These are countries which successfully export virtually any manufactured good, mostly due to low labour costs. However we label that phenomenon, here it is: whilst the global map of imports looks like a very tight web, the map of exports is more like a few huge fountains of goods, pouring their output across the world. Practically every known imported good has its specialized big exporters. Thus, if my students ask me what international trade is, I am more and more prone to answer that trade is a structural pattern of the human civilization, where some places on Earth become super-efficient at making and marketing specific goods, and, consequently, the whole planetary civilization is like a team of people, with clearly assigned roles.

What is international management in that context? What is the difference between international management and domestic management, actually? From what I can see, for example in the companies whose stock I invest my savings in, there is a special phase in the development of a business. It is when you have developed a product or service which you start marketing successfully at the international scale, thus you are exporting it, and there comes a moment when branching abroad with your organisational structure looks like a good idea. Mind you, there are plenty of businesses which, whilst growing nicely and exporting a lot, remain firmly domestic. If I run a diamond mine in Botswana – to take one of the most incongruous examples that come to my mind – I mine those diamonds in order to export them. There is no point in mining diamonds in Botswana just to keep those diamonds in Botswana. Export is the name of the game, here. Still, do I need to branch out internationally? My diamonds go to Paris, but is it a sensible idea to open a branch office in Paris? Not necessarily, rents for office space are killers over there. Still, when I run a manufacturing business in Ukraine, and I make equipment for power grids, e.g. electric transformers, and I export that equipment across Europe and to the US, it could be a good idea to branch out. More specifically, it becomes a good idea when the value of my sales to a given country makes it profitable to be closer to the end user. Closer means two things. I can clone my original manufacturing technology in the target market, thus instead of making those transformers in Ukraine and shipping them to Texas, I can make them in Texas. On the other hand, closer means more direct human interaction, like customer support.

Good. I got carried away a bit. I need to return to the things I want to teach my students, i.e. to skills I want to develop in them when teaching those three courses: Macroeconomics, International Trade, and International Management. Here is my take on the thing. These three courses represent three levels of work with quantitative data. Doing Macroeconomics in real life means reading actively macroeconomic reports and data, for the purposes of private business or those of public policy. It means being able to interpret changes in real output, inflation, unemployment, as well as in financial markets.

Doing International Trade for real might go two different ways: either you work in international trade, i.e. you do the technicalities of export and import, on the one hand, or you work about and around international trade, namely you need to nail down some kind of business plan or policy strongly related to export and import. That latter aspect involves working with data much more than the former, which, in turn, is more about documents, procedures and negotiation. I am much more at home with data analysis, contracts, and business planning than with the very technicalities of international trade. My teaching of international trade will go in that direction.

As for International Management, my only real experience is that of advising, doing market research and business planning for people who are about to decide about branching out abroad with their business. This is the only real experience I can communicate to my students.

I want to combine that general drift of my teaching with more specific a take on the current social reality, i.e. that of pandemic, economic recession and plans for recovery, and technological change combined with a modification of established business models. That last phenomenon, namely new technologies coming to the game and forcing a change in business structures is the main kind of understanding I want to provide my students with, as regards current events. Digital technologies, biotechnologies, and complex power systems increasingly reliant on both renewable energies and batteries of all kinds, are the thread of change. On and around that thread, cash is being hoarded, in unusually big cash-oriented corporate balance sheets. Cash is king, and science is the queen, so to say, in those newly developing business models. That’s logical: deep and quick technological change creates substantial risks, and increased financial liquidity is a normal response thereto.

Whatever will be happening over the months and years to come, in terms of economic recovery after the epidemic recession, will be happening through and in businesses which hoard important amounts of cash, and constantly look for the most competitive digital technologies. When governments say ‘We want to support the bouncing back of our domestic businesses’, those governments have to keep in mind that before investing in new property, plant, equipment, and in new intangible intellectual property, those businesses will be bouncing back by accumulating cash. This time, economic recovery will probably be very much non-Keynesian. Instead of unfreezing cash balances and investing them in new productive assets, microeconomic recovery of local business structures will involve them juicing themselves with cash. I think this is to take or to leave, as the French say. Bitching and moaning about ‘those capitalists who just hoard money with no regard for jobs and social gain’ seems as pointless as an inflatable dartboard.

Those cash-rich balance sheets are going to translate into strategies oriented on flexibility and adaptability more than anything else. Business entities are naturally flexible, and they are so because they have the capacity to build, purposefully, a zone of proximal development around their daily routines. It is a zone of manageable risks, made of projects which the given business entity can jump into on demand, almost instantaneously. I think that businesses across the globe will be developing such zones of proximal development around themselves: zones of readiness for action rather than action itself. There is another aspect to that. I intuitively feel that we are entering a period of increasingly quick technological change. If you just think about the transformation of manufacturing processes and supply chains in the pharmaceutical industry, so as to supply the entire global population with vaccines, you can understand the magnitude of change. Technologies need to break even just as business models do. In a business model, breaking even means learning how to finance the fixed costs with the gross margin created and captured when transacting with customers and suppliers. In a technology, breaking even means driving the occurrence of flukes and mistakes, unavoidable in large-scale applications, down to an acceptable level. This, in turn, means that the aggregate cost of said flukes and mistakes, which enters into the fixed costs of the business structure, is low enough to be covered by the gross margin generated from the technological process itself.
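A back-of-the-envelope sketch of that technological break-even, with invented numbers: the gross margin has to cover the ordinary fixed costs plus the aggregate cost of flukes, so driving the fluke rate down is what tips the balance.

```python
# Illustrative, invented numbers: a technology breaks even when the gross margin
# it generates covers its fixed costs, including the aggregate cost of flukes.
units_sold = 10_000
margin_per_unit = 40.0          # gross margin captured per unit of output
fixed_costs = 250_000.0         # plant, staff, licences...

def total_cost_of_flukes(fluke_rate, cost_per_fluke=120.0):
    # Expected number of flukes times the cost of fixing each one.
    return units_sold * fluke_rate * cost_per_fluke

for fluke_rate in (0.20, 0.10, 0.05):
    gross_margin = units_sold * margin_per_unit
    total_fixed = fixed_costs + total_cost_of_flukes(fluke_rate)
    verdict = "breaking even" if gross_margin >= total_fixed else "still burning cash"
    print(f"fluke rate {fluke_rate:.0%}: margin {gross_margin:,.0f} "
          f"vs fixed costs {total_fixed:,.0f} -> {verdict}")
```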

That technological breaking even applies to the digital world just as it applies to industrial processes. If you use MS Teams, just as I and many other people do, you probably know that polite enquiry which Teams addresses to you after each video call or meeting: ‘What was the call quality?’. This is because that quality is really poor, with everybody using online connections much more than before the pandemic (much worse than with Zoom, for example), and Microsoft is working on it, as far as I know. Working on something means putting additional effort and expense into that thing, thus temporarily pumping up the fixed costs.

Now, suppose that you are starting up with a new technology, and you brace for the period of breaking even with it. You will need to build up a cushion of cash to finance the costs of flukes and mistakes, as well as the cost of adapting and streamlining your technology as the scale of application grows (hopefully).

We live in a period when a lot of science breaks free out of experimental labs much earlier and faster than it was intended to. Vaccines against COVID-19 are the best example. You probably know those sci fi movies, where some kind of strange experimental creature, claimed to be a super-specimen of a new super-species, and yet strangely ill-adapted to function in the normal world, breaks out of a lab. It wreaks havoc, it causes people to panic, and it unavoidably attracts the attention of an evil businessperson who wants to turn it into a weapon or into a slave. This is, metaphorically, what is happening now and what will keep happening for quite a while. Of course, the Sars-Cov-2 virus could very well be such an out-of-the-lab monster, still I think about all the technologies we deploy in response, vaccines included. They are such out-of-the-lab monsters as well. We have, and we will keep having, a lot of out-of-the-lab monsters running around, which, in turn, requires a lot of evil businesspeople to step in and deploy their demoniac plots.

All that means that the years to come are likely to be about bracing, adapting and transforming, much more than about riding a rising wave crest of economic growth. Recovery will be slower than the most optimistic scenarios imply. We need to adapt to a world of fence-sitting business strategies, with a lot of preparation and build-up in capacity, rather than direct economic bounce-back. When preparing a business plan, we need to prepare for investors asking questions like ‘How quickly and how specifically can you adapt if the competitor A implements the technology X faster than predicted? How much cash do we need to shield against that risk? How do we hedge? How do we insure?’, rather than questions of the type ‘How quickly will I have my money back?’. In such an environment, substantial operational surplus in business is a rarity. Profits are much more likely to be speculative, based on trading corporate stock and other financial instruments, maybe on trading surpluses of inventories.

The right side of the disruption

I am swivelling my intellectual crosshairs around, as there is a lot going on, in the world. Well, there is usually a lot going on, in the world, and I think it is just the focus of my personal attention that changes its scope. Sometimes, I pay attention just to the stuff immediately in front of me, whilst at other times I go wide and broad in my perspective.

My research on collective intelligence, and on the application of artificial neural networks as simulators thereof has brought me recently to studying outlier cases. I am an economist, and I do business in the stock market, and therefore it comes as sort of logical that I am interested in business outliers. I hold some stock of the two so-far winners of the vaccine race: Moderna (https://investors.modernatx.com/ ) and BionTech (https://investors.biontech.de/investors-media ), the vaccine companies. I am interested in the otherwise classical, Schumpeterian questions: to what extent are their respective business models predictors of their so-far success in the vaccine contest, and, seen from the opposite perspective, to what extent is that whole technological race of vaccines predictive of the business models which its contenders adopt?

I like approaching business models with the attitude of a mean detective. I assume that people usually lie, and it starts with lying to themselves, and that, consequently, those nicely rounded statements in annual reports about ‘efficient strategies’ and ‘ambitious goals’ are always bullshit to some extent. In the same spirit, I assume that I am prone to lying to myself. All in all, I like falling back onto hard numbers, in the first place. When I want to figure out someone’s business model with a minimum of preconceived ideas, I start with their balance sheet, to see their capital base and the way they finance it, just to continue with their cash-flow. The latter helps my understanding of how they make money, at the end of the day, or how they fail to make any.

I take two points in time: the end of 2019, thus the starting blocks of the vaccine race, and then the latest reported period, namely the 3rd quarter of 2020. Landscape #1: end of 2019. BionTech sports $885 388 000 in total assets, whilst Moderna has $1 589 422 000. Here, a pretty amazing detail pops up. I do a routine check of the proportion between fixed assets and total assets. It is about seeing what percentage of the company’s capital base is immobilized, and thus supposed to bring steady capital returns, as opposed to the current assets, fluid, quick to exchange and made for greasing the current working of the business. When I measure that coefficient ‘fixed assets divided by total assets’, it comes out as 29,8% for BionTech, and 29% for Moderna. Coincidence? There is a lot of coincidence in those two companies. When I switch to Landscape #2: end of September 2020, it is pretty much the same. You can see it in the two tables below:
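The routine check itself is a one-liner. Below is a minimal sketch of it; the total assets are the figures quoted above, whilst the fixed-asset amounts are placeholders consistent with the 29-30% ratios, not the exact line items from the two reports.

```python
# Total assets as quoted above; fixed-asset amounts are illustrative placeholders
# consistent with the ~29-30% ratios, not exact report line items.
balance_sheets = {
    "BionTech, end 2019": {"total_assets": 885_388_000, "fixed_assets": 263_800_000},
    "Moderna, end 2019":  {"total_assets": 1_589_422_000, "fixed_assets": 460_900_000},
}

for name, bs in balance_sheets.items():
    immobilization = bs["fixed_assets"] / bs["total_assets"]
    print(f"{name}: fixed / total assets = {immobilization:.1%}")
```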

As you look at those numbers, they sort of collide with the common image of biotech companies in sci fi movies. In movies, we can see huge labs, like 10 storeys underground, with caged animals inside etc. In real life, biotech is cash, most of all. Biotech companies are like big wallets, camped next to some useful science. Direct investment in biotech means very largely depositing one’s cash on the bank account run by the biotech company.

After studying the active side of those two balance sheets, i.e. in BionTech and in Moderna, I shift my focus to the passive side. I want to know how exactly people put cash in those businesses. I can see that most of it comes in the form of additional paid-in equity, which is an interesting thing for publicly listed companies. In the case of Moderna, the bulk of that addition to equity comes as a mechanism called ‘vesting of restricted common stock’. Although it is not specified in their financial report how exactly that vesting takes place, the generic category corresponds to operations where people close to the company, employees or close collaborators, anyway in a closed private circle, buy stock of the company in a restricted issuance. With BionTech, it is slightly different. Most of the proceeds from the public issuance of common stock are considered as reserve capital, distinct from share capital, and on top of that they seem to be running, similarly to Moderna, transactions of vesting restricted stock. Another important source of financing in both companies are short-term liabilities, mostly deferred transactional payments. Still, I have an intuitive impression of being surrounded by maybies (you know: ‘maybe I am correct, unless I am wrong’), and thus I decided to broaden my view. I take all the 7 biotech companies I currently have in my investment portfolio, which are, besides BionTech and Moderna, five others: Soligenix (http://ir.soligenix.com/ ), Altimmune (http://ir.altimmune.com/investors ), Novavax (https://ir.novavax.com/ ) and VBI Vaccines (https://www.vbivaccines.com/investors/ ). In the two tables below, I am trying to summarize my essential observations about those seven business models.

Despite significant differences in the size of their respective capital base, all the seven businesses hold most of their capital in the highly liquid financial form: cash or tradable financial securities. Their main source of financing is definitely the additional paid-in equity. Now, some readers could ask: how the hell is it possible for the additional paid-in equity to make more than the value of assets, like 193%? When a business accumulates a lot of operational losses, they have to be subtracted from the incumbent equity. Additions to equity serve as a compensation of those losses. It seems to be a routine business practice in biotech.
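A stylized illustration of that arithmetic, with invented round numbers: the paid-in equity can exceed total assets precisely because accumulated losses have consumed a big part of it.

```python
# Stylized, invented numbers: how paid-in equity can exceed total assets.
additional_paid_in_equity = 193.0   # cumulative cash injected by shareholders
accumulated_deficit = -110.0        # cumulative operational losses over the years
other_financing = 17.0              # e.g. short-term liabilities

total_equity = additional_paid_in_equity + accumulated_deficit   # = 83
total_assets = total_equity + other_financing                     # = 100

print(f"paid-in equity / total assets = {additional_paid_in_equity / total_assets:.0%}")
# -> 193%: the losses have eaten most of the cash, yet the balance sheet still balances.
```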

Now, I am going to go slightly conspiracy-theoretical. Not much, just an inch. When I see businesses such as Soligenix, where cumulative losses, and the resulting additions to equity, amount to over a dozen times the value of assets, I am suspicious. I believe in the power of science, but I also believe that, facing a choice between using my equity to compensate so big a loss, on the one hand, and using it to invest into something less catastrophic financially, on the other hand, I will choose the latter. My point is that cases such as Soligenix smell of scam. There must be some non-reported financial interests in that business. Something is going on behind the stage, there.

In my previous update, titled ‘An odd vector in a comfortably Apple world’, I studied the cases of Tesla and Apple in order to understand better the phenomenon of outlier events in technological change. The short glance I had at those COVID-vaccine-involved biotechs gives me some more insight. Biotech companies are heavily scientific. This is scientific research shaped into a business structure. Most of the biotech business looks like an ever-lasting debut, long before breaking even. In textbooks of microeconomics and management, we can read that being able to run the business at a profit is a basic condition of calling it a business. In biotech, it is different. Biotechs are the true outliers, nascent at the very juncture of cutting-edge science and business strictly speaking. This is how outliers emerge: there is some cool science. I mean, really cool, the kind likely to change the face of the world. Those mRNA biotechnologies are likely to do so. The COVID vaccine is the first big attempt to transform those mRNA therapies from experimental ones into massively distributed and highly standardized medicine. If this stuff works on a big scale, it is a new perspective. It allows fixing people, literally, instead of just curing diseases.

Anyway, there is that cool science, and it somehow attracts large amounts of cash. Here, a little digression from the theory of finance is due. Money and other liquid financial instruments can be seen as risk-absorbing bumpers. People accumulate large monetary balances in times and places when and where they perceive a lot of manageable risk, i.e. where they perceive something likely to disrupt the incumbent business, and they want to be on the right side of the disruption.

An odd vector in a comfortably Apple world

Work pays. Writing about my work helps me learn new things. I am coining up, step by step, the logical structure of my book on collective intelligence. Those last days, I realized the point of using an artificial neural network as a simulator of collective behaviour. There is a difference between studying the properties of a social structure, on the one hand, and simulating its collective behaviour, on the other hand. When I study the partial properties of something, I make samples and put them under a microscope. This is what most quantitative methods in social sciences do: they sample and zoom in. This is cool, don’t get me wrong. That method has made the body of science we have today, and, historically, this is a hell of a body of science. Yet, there is a difference between, for example, a study of clusters in a society, and a simulation of the way those clusters form. There is a difference between identifying auto-regressive cycles in the time series of a variable, and understanding how those cycles happen in real life, with respect to collective human behaviour (see ‘An overhead of individuals’). Autoregression translated into human behaviour means that what we actually accomplish today is somehow derived from and functionally connected to the outcomes of our actions some time ago. Please, notice: not to the outcomes of the actions which immediately preceded the current one, but to the outcomes generated with a lag in the past. Go figure how we, humans, can pick a specific, lagged occurrence in the past and make it a factor in what we do today. Intriguing, isn’t it?

The sample-and-put-under-the-microscope method is essentially based on classical statistics, thus on the assumption that the mean expected value of a variable, or of a vector, is the expected state of the corresponding phenomenon. Here we enter the tricky, and yet interesting, realm of questions such as ‘What do you mean by expected state? Expected by whom?’. First and most of all, we have no idea what other people expect. We can, at best, nail down our own expectations to the point of making them intelligible to ourselves and communicable to other people, and we do our best to understand what other people say they expect. Yet, all hope is not lost. Whilst we can hardly have any clue as for what other people expect, we can learn how they learn. The process of learning is much more objectively observable than expectations.

Here comes the subtle and yet fundamental distinction between expectations and judgments. Both regard the same domain – the reality we live in – but they are different in nature. Judgment is functional. I make myself an opinion about reality because I need it to navigate through said reality. Judgment is an approximation of truth. Emotions play their role in my judgments, certainly, but they are easy to check. When my judgment is correct, i.e. when my emotions give it the functionally right shade, I make the right decisions and I am satisfied with the outcomes. When my judgment is too emotional, or emotional the wrong way, I simply screw it, at the end of the day, and I am among the first people to know it.

On the other hand, when I expect something, it is much more imbibed with emotions. I expect things which are highly satisfactory, or, conversely, which raise my apprehension, ranging from disgust to fear. Expectations are so emotional that we even have a coping mechanism of ex-post rationalization. Something clearly unexpected happens to us and we reduce our cognitive dissonance by persuading ourselves post factum that ‘I expected it to happen, really. I just wasn’t sure’.

I think there is a fundamental difference between applying a given quantitative method to the way that society works, on the one hand, and attributing the logic of this method to the collective behaviour of people in that society, on the other hand. I will try to make my point more explicit by taking on one single business case: Tesla (https://ir.tesla.com/ ). Why Tesla? For two reasons. I invested significant money of mine in their stock, for one, and when I have my money invested in something, I like updating my understanding as for how the thing works. Tesla seems to me something like a unique phenomenon, an industry in itself. This is a perfect Black Swan, along the lines of Nassim Nicholas Taleb’s ‘The black swan. The impact of the highly improbable’ (2010, Penguin Books, ISBN 9780812973815). Ten years ago, Elon Musk, the founder of Tesla, was considered as just a harmless freak. Two years ago, many people could see him on the verge of tears, publicly explaining to shareholders why Tesla kept losing cash. Still, if I had invested in the stock of Tesla 4 years ago, today I would have like seven times the money. Tesla is an outlier which turned into a game changer. Today, they are one of the rare business entities who managed to increase their cash flow over the year 2020. No analyst would have predicted that. As a matter of fact, even I considered Tesla as an extremely risky investment. Risky means burdened with a lot of uncertainty, which, in turn, hinges on a lot of money engaged.

When a business thrives amidst a general crisis, just as Tesla has been thriving amidst the COVID-19 pandemic, I assume there is a special adaptive mechanism at work, and I want to understand that mechanism. My first, intuitive association of ideas goes to the classic book by Joseph Schumpeter, namely ‘Business Cycles’. Tesla is everything Schumpeter mentioned (almost 100 years ago!) as attributes of breakthrough innovation: new type of plant, new type of entrepreneur, processes rethought starting from first principles, complete outlier as compared to the industrial sector of origin.

What does Tesla have to do with collective intelligence? Saying that Tesla’s success is a pleasant surprise to its shareholders, and that it is essentially sheer luck, would be too easy and simplistic. At the end of September 2020, Tesla had $45,7 billion in total assets. Interestingly, only 0,44% is the so-called ‘Goodwill’, which is the financial chemtrail left after a big company acquires smaller ones and which has nothing to do with good intentions. According to my best knowledge, those assets of $45,7 billion have been accumulated mostly through the good, old-fashioned organic growth, i.e. the growth of the capital base in correlation with the growth of operations. That type of growth requires the concurrence of many factors: the development of a product market, paired with the development of a technological base, and all that associated with a stream of financial capital.    

This is more than luck or accident. The organic growth of Tesla has been concurring with a mounting tide of start-up businesses in the domain of electric vehicles. It coincides closely with a significant acceleration in the launching of electric vehicles by the big, established companies of the automotive sector, such as VW Group, PSA or Renault. When a Black Swan deeply modifies an entire industry, it means that an outlier has provoked adaptive change in the social structure. That kept in mind, an interesting question surfaces in my mind: why is there only one Tesla business, as for now? Why aren’t there more business entities like them? Why does this specific outlier remain an outlier? I know there are many start-ups in the industry of electric vehicles, but none of them even remotely approaches the kind and the size of business structure that Tesla displays. How does an apparently unique social phenomenon remain unique, whilst having proven to be a successful experiment?

I am intuitively comparing Tesla to its grandfather in uniqueness, namely to Apple Inc. (https://investor.apple.com/investor-relations/default.aspx ). Apple used to be that outlier which Tesla is today, and, over time, it has become sort of banalized, business-wise. How can I say that? Let’s have a look at the financials of both companies. Since 2015, Tesla has been growing like hell, in terms of assets and revenues. However, they started to make real money just recently, in 2020, amidst the pandemic. Their cash-flow is record-high for the nine months of 2020. Apple is the opposite case. If you look at their financials over the last few years, they seem to be shrinking assets-wise, and sort of floating at the same level in terms of revenues. Tesla makes cash mostly by tax write-offs, through amortization and stock-based compensations for their employees. Apple makes cash in the good, old-fashioned way, by generating net income after tax. At the bottom line of all cash-flows, Tesla accumulates cash in their balance sheet, whilst Apple apparently gives it away and seems to prefer the accumulation of marketable financial securities. Tesla seems to be wild and up to something, Apple not really. Apple is respectable.

Uniqueness manifests in two attributes: distance from the mean expected state of the social system, on the one hand, and probability of happening, on the other hand. Outliers display low probability and noticeable distance from the mean expected state of social stuff. Importance of outlier phenomena can be apprehended similarly to risk factors: probability times magnitude of change or difference. With low probability, outliers are important by their magnitude.

Mathematically, I can express the emergence of any new phenomenon in two ways: as the activation of a dormant phenomenon, or as recombination of the already active ones. I can go p(x; t0) = 0 and p(x; t1) > 0, where p(x; t) is the probability of the phenomenon x. It used to be zero before, it is greater than zero now, although not much greater, we are talking about outliers, noblesse oblige. That’s the activation of something dormant. I have a phenomenon nicely defined, as ‘x’, and I sort of know the contour of it, it just has not been happening recently at all. Suddenly, it starts happening. I remember having read a theory of innovation, many years ago, which stated that each blueprint of a technology sort of hides and conveys in itself a set of potential improvements and modifications, like a cloud of dormant changes attached to it and logically inferable from the blueprint itself.

When, on the other hand, new things emerge as the result of recombination in something already existing, I can express it as p(x) = p(y)^a * p(z)^b. Phenomena ‘y’ and ‘z’ are the big fat incumbents of the neighbourhood. When they do something specific together (yes, they do what you think they do), phenomenon ‘x’ comes into existence, and its probability of happening p(x) is a combination of powers ascribed to the respective probabilities. I can go fancier, and use one of them neural activation functions, such as the hyperbolic tangent. Many existing phenomena – sort of Z = {p(z_1), p(z_2), …, p(z_n)} – combine in a more or less haphazard way (frequently, it is the best way of all ways), meaning that the vector Z of their probabilities has a date with a structurally identical vector of random significances W = {s_1, s_2, …, s_n}, 0 < s_i < 1. They date and they produce a weighted sum h = ∑ p(z_i)*s_i, and that weighted sum gets sucked into the vortex of reality via tanh(h) = (e^(2h) – 1)/(e^(2h) + 1). Why via tanh? First of all, why not? Second of all, tanh is a structure in itself. At h = 1, it is essentially (e^2 – 1)/(e^2 + 1) = 6,389056099 / 8,389056099 = 0,761594156, a structure which has the courtesy of accepting h in, and producing something new.

Progress in the development of artificial neural networks leads to the discovery of those peculiar structures – the activation functions – which have the capacity to suck in a number of individual phenomena with their respective probabilities, and produce something new, like a by-phenomenon. In the article available at https://missinglink.ai/guides/neural-network-concepts/7-types-neural-network-activation-functions-right/ I read about a newly discovered activation function called ‘Swish’. With that weighted sum h = ∑ p(z_i)*s_i, Swish(h) = h/(1 + e^(-h)). We have a pre-existing structure 1/(1 + e) = 0,268941421, which, combined with h in a complex way (as a direct factor of multiplication and as exponent in the denominator), produces something surprisingly meaningful.
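A minimal numerical sketch of that ‘dating’ between probabilities and random significances, with both activation functions; the probabilities below are invented, and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Probabilities of already-active phenomena, and their random 'significances'.
p_z = np.array([0.3, 0.8, 0.5, 0.1, 0.6])   # illustrative values
s = rng.uniform(0.0, 1.0, size=p_z.size)    # 0 < s_i < 1

h = np.sum(p_z * s)                         # the weighted sum h = sum(p(z_i) * s_i)

tanh_out = (np.exp(2 * h) - 1) / (np.exp(2 * h) + 1)   # same value as np.tanh(h)
swish_out = h / (1 + np.exp(-h))                        # the 'Swish' activation

print(f"h = {h:.3f}, tanh(h) = {tanh_out:.3f}, swish(h) = {swish_out:.3f}")
```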

Writing those words, I have suddenly noticed a meaning of activation functions which I have been leaving aside, so far. Values produced by activation functions are aggregations of the input which we feed into the neural network. Yet, under a different angle, activation functions produce a single new probability, usually very close to 1, which can be understood as something new happening right now, almost for sure, and deriving its existence from many individual phenomena happening now as well. I need to wrap my mind around it. It is interesting.    

Now, I study the mathematical take on the magnitude of the outlier ‘x’, which makes its impact on everything around, and makes it into a Black Swan. I guess x has some properties. I mean not real estate, just attributes. It has a vector of attributes R = {r_1, r_2, …, r_m}, and, if I want to present ‘x’ as an outlier in mathematical terms, those attributes should be the same for all the comparable phenomena in the same domain. That R = {r_1, r_2, …, r_m} is a manifold, in which every observable phenomenon is mapped into m coordinates. If I take any two phenomena, like z and y, each has its vector of attributes, i.e. R(z) and R(y). Each such pair can estimate their mutual closeness by going Euclidean[R(z), R(y)] = {∑ [r_i(z) – r_i(y)]^2}^0,5 / m. We remember that m is the number of attributes in that universe.

Phenomena used to sit at a comfortably predictable Euclidean distance from each other, and, all of a sudden, x pops out, and shows a bloody big Euclidean distance from any other phenomenon. Its vector R(x) is odd. This is how a Tesla turns up, as an odd vector in a comfortably Apple world.
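A small sketch of how such an odd vector jumps out of the crowd, with made-up attribute vectors; the names are just labels for the illustration, not actual measurements of those companies.

```python
import numpy as np

# Made-up attribute vectors R for a handful of phenomena; m = 4 attributes each.
R = {
    "Apple":   np.array([0.9, 0.2, 0.5, 0.4]),
    "VW":      np.array([0.8, 0.3, 0.6, 0.5]),
    "Renault": np.array([0.7, 0.3, 0.5, 0.4]),
    "Tesla":   np.array([0.1, 0.9, 0.1, 0.9]),   # the odd vector
}
m = 4

def distance(a, b):
    # Square root of the sum of squared differences, scaled by the number of attributes.
    return np.sqrt(np.sum((a - b) ** 2)) / m

for name, vec in R.items():
    others = [distance(vec, v) for k, v in R.items() if k != name]
    print(f"{name}: mean distance to the others = {np.mean(others):.3f}")
```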

An overhead of individuals

I think I have found out, when writing my last update (‘Cultural classes’) another piece of the puzzle which I need to assemble in order to finish writing my book on collective intelligence. I think I have nailed down the general scientific interest of the book, i.e. the reason why my fellow scientists should even bother to have a look at it. That reason is the possibility to have deep insight into various quantitative models used in social sciences, with a particular emphasis on the predictive power of those models in the presence of exogenous stressors, and, digging further, the representativeness of those models as simulators of social reality.

Let’s have a look at one quantitative model, just one picked at random (well, almost at random): autoregressive conditional heteroscedasticity AKA ARCH (https://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity ). It goes as follows. I have a process, i.e. a time series of a quantitative variable. I compute the mean expected value in that time series, which, in plain human, means the arithmetical average of all the observations in that series. In even plainer human, the one we speak after having watched a lot of YouTube, it means that we sum up the values of all the consecutive observations in that time series and we divide the so-obtained total by the number of observations.

Mean expected values have that timid charm of not existing, i.e. when I compute the mean expected value in my time series, none of the observations will be exactly equal to it. Each observation t will return a residual error ε_t. The ARCH approach assumes that ε_t is the product of two factors, namely of the time-dependent standard deviation σ_t, and a factor of white noise z_t. Long story short, we have ε_t = σ_t*z_t.

The time-dependent standard deviation shares the common characteristics of all standard deviations, namely it is the square root of the time-dependent variance: σ_t = [(σ_t)^2]^(1/2). That time-dependent variance is computed, in the ARCH(q) specification, as a constant plus a weighted sum of the q most recent squared errors: (σ_t)^2 = α_0 + α_1*(ε_(t-1))^2 + … + α_q*(ε_(t-q))^2.
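To see the kind of dented, bumpy trajectory this machinery produces, here is a minimal simulation of an ARCH(1) process, with arbitrary illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

# ARCH(1): sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2, and eps_t = sigma_t * z_t
alpha0, alpha1 = 0.2, 0.7        # arbitrary illustrative coefficients (alpha1 < 1)
T = 500
eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = alpha0 / (1 - alpha1)   # start at the unconditional variance

for t in range(1, T):
    sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2
    z_t = rng.standard_normal()          # the white-noise factor
    eps[t] = np.sqrt(sigma2[t]) * z_t    # calm stretches alternating with bursts

print(f"sample variance of eps: {eps.var():.3f} "
      f"(unconditional variance: {alpha0 / (1 - alpha1):.3f})")
```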

Against that general methodological background, many variations arise, especially as regards the mean expected value which everything else is wrapped around. It can be a constant value, i.e. computed for the entire time-series once and for all. We can allow the time series to extend, and then each extension leads to the recalculation of the mean expected value, including the new observation(s). We can make the mean expected value a moving average over a specific window in time.

Before I dig further into the underlying assumptions of ARCH, one reminder begs for being reminded: I am talking about social sciences, and about the application of ARCH to all kinds of crazy stuff that we, humans, do collectively. All the equations and conditions phrased out above apply to collective human behaviour. The next step in the understanding of ARCH, in the specific context of social sciences, is that ARCH has a point only when the measurable attributes of our collective human behaviour really oscillate and change. When I have, for example, a trend in the price of something, and that trend is essentially smooth, without much of a dentition jumping to the eye, ARCH is pretty much pointless. On the other hand, that analytical approach – where each observation in the real measurable process which I observe is par excellence a deviation from the expected state – gains in cognitive value as the process in question becomes increasingly dented and bumpy.

A brief commentary on the very name of the method might be interesting. The term ‘heteroskedasticity’ means that the dispersion of real observations around the mean expected value is not constant: the variance changes over time, with calm stretches alternating with turbulent ones, and that changing dispersion can, over time, translate into a drift. Let’s simulate the way it happens. Before I even start going down this rabbit hole, another assumption is worth deconstructing. If I deem a phenomenon to be describable as white noise, AKA z_t, I assume there is no pattern in the occurrence thereof: successive values are uncorrelated, with zero mean and constant variance. It is the ‘Who knows?’ state of reality in its purest form.

White noise is at the very basis of the way we experience reality. This is pure chaos. We make distinctions in this chaos; we group phenomena, and we assess the probability of each newly observed phenomenon falling into one of the groups. Our essential cognition of reality assumes that in any given pound of chaos, there are a few ounces of order, and a few residual ounces of chaos. Then we have the ‘Wait a minute!’ moment and we further decompose the residual ounces of chaos into some order and even more residual a chaos. From there, we can go ad infinitum, sequestrating streams of regularity and order out of the essentially chaotic flow of reality. I would argue that the book of Genesis in the Old Testament is a poetic, metaphorical account of the way that human mind cuts layers of intelligible order out of the primordial chaos.

Seen from a slightly different angle, it means that white noise z_t can be interpreted as an error in itself, because it is essentially a departure from the nicely predictable process ε_t = σ_t, i.e. where residual departure from the mean expected value is equal to the mean expected departure from the mean expected value. Being a residual error, z_t can be factorized into z_t = σ'_t*z'_t, and, once again, that factorization can go all the way down to the limits of observability as regards the phenomena studied.

At this point, I am going to put the whole reasoning on its head, as regards white noise. It is because I know and use a lot the same concept, just under a different name, namely that of mean-reverted value. I use mean-reversion a lot in my investment decisions in the stock market, with a very simple logic: when I am deciding to buy or sell a given stock, my purely technical concern is to know how far away the current price is from its moving average. When I do this calculation for many different stocks, priced differently, I need a common denominator, and I use the standard deviation in price for that purpose. In other words, I compute as follows: mean-reverted price = (current price – mean expected price) / standard deviation in price.

If you have a closer look at this coefficient of mean-reverted price, its numerator is an error, because it is the deviation from the mean expected value. I divide that error by the standard deviation, and, logically, what I get is error divided by standard deviation, therefore the white noise component z_t of the equation ε_t = σ_t*z_t. This is perfectly fine mathematically, only my experience with that coefficient tells me it is anything but white noise. When I want to grasp very sharply and accurately the way the price of a given stock reacts to its economic environment, I use precisely the mean-reverted coefficient of price. As soon as I recalculate the time series of a price into its mean-reverted form, patterns emerge, sharp and distinct. In other words, the allegedly white-noise-based factor in the stock price is much more patterned than the original price used for its calculation.

The same procedure which I call ‘mean-reversion’ is, by the way, a valid procedure to standardize empirical data. You take each empirical observation, you subtract from it the mean expected value of the corresponding variable, you divide the residual difference by its standard deviation, and Bob’s your uncle. You have your data standardized.
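Here is a sketch of that mean-reverted metric on a made-up price series, using a rolling window in place of my actual trading spreadsheet; the +2 / -2 thresholds are just an illustrative choice, not a trading recommendation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# A made-up daily price series (random walk), standing in for a real stock price.
price = pd.Series(100 + rng.standard_normal(250).cumsum())

window = 30   # arbitrary moving-average window, in trading days
moving_mean = price.rolling(window).mean()
moving_std = price.rolling(window).std()

# Mean-reverted price = (current price - mean expected price) / standard deviation.
mean_reverted = (price - moving_mean) / moving_std

print(mean_reverted.tail())
print("spikes above +2 (candidate sell signals):", int((mean_reverted > 2).sum()))
print("troughs below -2 (candidate buy signals):", int((mean_reverted < -2).sum()))
```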

Summing up that little rant of mine, I understand the spirit of the ARCH method. If I want to extract some kind of autoregression in time-series, I can test the hypothesis that standard deviation is time-dependent. Do I need, for that purpose, to assume the existence of strong white noise in the time series? I would say cautiously: maybe, although I do not see the immediate necessity for it. Is the equation εt = σtzt the right way to grasp the distinction into the stochastic component and the random one, in the time series? Honestly: I don’t think so. Where is the catch? I think it is in the definition and utilization of error, which, further, leads to the definition and utilization of the expected state.

In order to make my point clearer, I am going to quote two short passages from pages xxviii-xxix in Nassim Nicholas Taleb’s book ‘The Black Swan’. Here it goes. ‘There are two possible ways to approach phenomena. The first is to rule out the extraordinary and focus on the “normal.” The examiner leaves aside “outliers” and studies ordinary cases. The second approach is to consider that in order to understand a phenomenon, one needs first to consider the extremes—particularly if, like the Black Swan, they carry an extraordinary cumulative effect. […] Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the “normal,” particularly with “bell curve” methods of inference that tell you close to nothing’.

When I use mean-reversion to study stock prices, for my investment decisions, I go very much in the spirit of Nassim Taleb. I am most of all interested in the outlying values of the metric (current price – mean expected price) / standard deviation in price, which, once again, the proponents of the ARCH method interpret as white noise. When that metric spikes up, it is a good moment to sell, whilst when it is in a deep trough, it might be the right moment to buy. I have one more interesting observation about those mean-reverted prices of stock: when they change their direction from ascending to descending and vice versa, it is always a sharp change, like a spike, never a gentle recurving. Outliers always produce sharp change. Exactly as Nassim Taleb claims. In order to understand better what I am talking about, you can have a look at one of the analytical graphs I used for my investment decisions, precisely with mean-reverted prices and transactional volumes, as regards Ethereum: https://discoversocialsciences.com/wp-content/uploads/2020/04/Slide5-Ethereum-MR.png .

In a manuscript that I wrote and which I am still struggling to polish enough for making it publishable (https://discoversocialsciences.com/wp-content/uploads/2021/01/Black-Swans-article.pdf ), I have identified three different modes of collective learning. In most of the cases I studied empirically, societies learn cyclically, i.e. first they produce big errors in adjustment, then they narrow their error down, which means they figure s**t out, and in a next phase the error increases again, just to decrease once again in the next cycle of learning. This is cyclical adjustment. In some cases, societies (national economies, to be exact) adjust in a pretty continuous process of diminishing error. They make big errors initially, and they reduce their error of adjustment in a visible trend of nailing down workable patterns. Finally, in some cases, national economies can go haywire and increase their error continuously instead of decreasing it or cycling on it.

I am reconnecting to my method of quantitative analysis, based on simulating with a simple neural network. As I did that little excursion into the realm of autoregressive conditional heteroscedasticity, I realized that most of the quantitative methods used today start from studying one single variable, and then increase the scope of analysis by including many variables in the dataset, whilst each variable keeps being the essential monad of observation. For me, the complex local state of the society studied is that monad of observation and empirical study. By default, I group all the variables together, as distinct, and yet fundamentally correlated manifestations of the same existential stuff happening here and now. What I study is a chain of here-and-now states of reality rather than a bundle of different variables.    

I realize that whilst it is almost axiomatic, in typical quantitative analysis, to phrase out the null hypothesis as the absence of correlation between variables, I don’t even think about it. For me, all the empirical variables which we, humans, measure and report in our statistical data are mutually correlated one way or another, because they all talk about us doing things together. In phenomenological terms, is it reasonable to assume that what we do in order to produce real output, i.e. our Gross Domestic Product, is uncorrelated with what we do with the prices of productive assets? Probably not.

There is a fundamental difference between discovering and studying individual properties of a social system, such as heteroskedastic autoregression in a variable, on the one hand, and studying the way this social system changes and learns as a collective, on the other hand. The two imply two different definitions of the expected state. In most quantitative methods, the expected state is the mean value of one single variable. In my approach, it is always a vector of expected values.

I think I start nailing down, at last, the core scientific idea I want to convey in my book about collective intelligence. Studying human societies as instances of collective intelligence, or, if you want, as collectively intelligent structure, means studying chains of complex states. The Markov chain of states, and the concept of state space, are the key mathematical notions here.
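A minimal sketch of what I mean by a chain of complex states, in Python: each observation is a whole vector of variables, and what I follow is the trajectory from one such vector to the next. The variable names and numbers below are hypothetical placeholders, not my actual dataset.

import numpy as np

# Hypothetical state space: each row is the 'here-and-now' state of one society
# in one year, described jointly by several variables rather than variable by variable.
variables = ['energy_efficiency', 'labour_participation', 'capital_per_patent', 'gdp_per_capita']
states = np.array([
    [7.1, 0.61, 120.0, 31000.0],
    [7.3, 0.62, 118.0, 31800.0],
    [7.2, 0.64, 121.0, 32500.0],
])

# The chain of states: Euclidean distance between consecutive state vectors,
# i.e. how much the whole complex state moves from one period to the next.
steps = np.linalg.norm(np.diff(states, axis=0), axis=1)
print(steps)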

I have used that method, so far, to study four distinct fields of empirical research: a) the way we collectively approach energy management in our societies, b) the orientation of national economies on the optimization of specific macroeconomic variables, c) the way we collectively manage the balance between urban land, urban density of population, and agricultural production, and d) the way we collectively learn in the presence of random disturbances. The main findings I can phrase out start with the general observation that, in a chain of complex social states, we collectively tend to lean towards some specific aspects of our social reality. For lack of a better word, I equate those aspects to the quantitative variables I find them represented by, although it is something to dig into. We tend to optimize the way we work, in the first place, and the way we sell our work. Concerns such as return on investment or real output come as secondary. That makes sense. At the large scale, the way we work is important for the way we use energy and the way we collectively learn. Surprisingly, variables commonly associated with energy management, such as energy efficiency, or the exact composition of energy sources, are secondary.

The second big finding is the one I already mentioned above, from the manuscript I am still struggling to polish enough for making it publishable (https://discoversocialsciences.com/wp-content/uploads/2021/01/Black-Swans-article.pdf ): societies display three distinct modes of collective learning. Most of the cases I studied empirically learn cyclically, with the error of adjustment first growing large, then narrowing down, then swelling and narrowing again in consecutive cycles. Some national economies adjust in a pretty continuous process of diminishing error, nailing down workable patterns in a visible trend. Finally, some national economies go haywire and increase their error continuously instead of decreasing it or cycling on it.

The third big finding is about the fundamental logic of social change, or so I perceive it. We seem to be balancing, over decades, the proportions between urban land and agricultural land so as to balance the production of food with the production of new social roles for new humans. The countryside is the factory of food, and cities are factories of new social roles. I think I can make a strong, counterintuitive claim that social unrest, such as what is currently going on in the United States, for example, erupts when the capacity to produce food in the countryside grows much faster than the capacity to produce new social roles in the cities. When our food systems can sustain more people than our collective learning can provide social roles for, we have an overhead of individuals whose most essential physical subsistence is provided for, and yet they have nothing sensible to do in the collectively intelligent structure of the society.

Cultural classes

Some of my readers asked me to explain how to get in control of their own emotions when starting their adventure as small investors in the stock market. The purely psychological side of self-control is something I leave to people smarter than me in that respect. What I do to have more control is the Wim Hof method (https://www.wimhofmethod.com/ ) and it works. You are welcome to try. I described my experience in that matter in the update titled ‘Something even more basic’. Still, there is another thing, namely to start with a strategy of investment clever enough to allow emotional self-control. The strongest emotion I have been experiencing on my otherwise quite successful path of investment is the fear of loss. Yes, there are occasional bubbles of greed, but they are more like childish expectations to get the biggest toy in the neighbourhood. They are bubbles, which burst quickly and inconsequentially. The fear of loss, on the other hand, is there to stay.

This is what I advise doing. I mean, this is what I didn’t do at the very beginning, and for failing to do it I made some big mistakes in my decisions. Only after some time (around 2 months) did I figure out the mental framework I am going to present. Start by picking a market. I started with a dual portfolio, like 50% in the Polish stock market, and 50% in the big foreign ones, such as the US, Germany, France etc. Define the industries you want to invest in, like biotech, IT, renewable energies. Whatever: pick something. Study the stock prices in those industries. Pay particular attention to the observed losses, i.e. the observed magnitude of depreciation in those stocks. Figure out the average possible loss, and the maximum one. Now you have an idea of how much you can lose, in percentage terms. Quantitative techniques such as mean-reversion or extrapolation of past changes can help. You can consult my update titled ‘What is my take on these four: Bitcoin, Ethereum, Steem, and Golem?’ to see the general drift.
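As for figuring out the average possible loss and the maximum one, here is a minimal sketch in Python. I use peak-to-trough depreciation as one possible way of measuring observed losses, and the price series below is made up purely for illustration.

import numpy as np

def observed_losses(prices):
    # Peak-to-trough depreciation at each point in time, in percent.
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)
    drawdowns = (prices - running_peak) / running_peak * 100.0
    return drawdowns

prices = [100, 104, 99, 97, 103, 95, 98, 110, 106, 101]   # made-up example series
dd = observed_losses(prices)
print('average observed loss: %.1f%%' % dd[dd < 0].mean())
print('maximum observed loss: %.1f%%' % dd.min())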

The next step is to accept the occurrence of losses. You need to acknowledge very openly the following: you will lose money on some of your investment positions, inevitably. This is why you build a portfolio of many investment positions. All investors lose money on parts of their portfolio. The trick is to balance losses with even greater gains. You will be experimenting, and some of those experiments will be successful, whilst others will be failures. When you learn investment, you fail a lot. The losses you incur when learning, are the cost of your learning.

My price of learning was around €600, and then I bounced back and compensated it with a large surplus. If I take those €600 and compare it to the cost of taking an investment course online, e.g. with Coursera, I think I made a good deal.

Never invest all your money in the stock market. My method is to take some 30% of my monthly income and invest it, month after month, patiently and rhythmically, by instalments. For you, it can be 10% or 50%, which depends on what exactly your personal budget looks like. Invest just the amount you feel you can afford exposing to losses. Nail down this amount honestly. My experience is that big gains in the stock market are always the outcome of many consecutive steps, with experimentation and the cumulative learning derived therefrom.

General remark: you are much calmer when you know what you’re doing. Look at the fundamental trends and factors. Look beyond stock prices. Try to understand what is happening in the real business you are buying and selling the stock of. That gives perspective and allows more rational decisions.  

That would be it, as regards investment. You are welcome to ask questions. Now, I shift my topic radically. I return to the painful and laborious process of writing my book about collective intelligence. I feel like shaking things off a bit. I feel I need a kick in the ass. With the pandemic around and social contacts scarce, I need to be the one who kicks my own ass.

I am running myself through a series of typical questions asked by a publisher. Those questions fall in two broad categories: interest for me, as compared to interest for readers. I start with the external point of view: why should anyone bother to read what I am going to write? I guess that I will have two groups of readers: social scientists on the one hand, and plain folks on the other hand. The latter might very well have a deeper insight than the former, only the former like being addressed with reverence. I know something about it: I am a scientist.

Now comes the harsh truth: I don’t know why other people should bother about my writing. Honestly. I don’t know. I have been sort of carried away and in the stream of my own blogging and research, and that question comes as alien to the line of logic I have been developing for months. I need to look at my own writing and thinking from outside, so as to adopt something like a fake observer’s perspective. I have to ask myself what is really interesting in my writing.

I think it is going to be a case of assembling a coherent whole out of sparse pieces. I guess I can enumerate, once again, the main points of interest I find in my research on collective intelligence and investigate whether at all and under what conditions the same points are likely to be interesting for other people.

Here I go. There are two, sort of primary and foundational points. For one, I started my whole research on collective intelligence when I experienced the neophyte’s fascination with Artificial Intelligence, i.e. when I discovered that some specific sequences of equations can really figure stuff out just by experimenting with themselves. I did both some review of literature, and some empirical testing of my own, and I discovered that artificial neural networks can be and are used as more advanced counterparts to classical quantitative models. In social sciences, quantitative models are about the things that human societies do. If an artificial form of intelligence can be representative for what happens in societies, I can hypothesise that said societies are forms of intelligence, too, just collective forms.

I am trying to remember what triggered in me that ‘Aha!’ moment, when I started seriously hypothesising about collective intelligence. I think it was when I was casually listening to an online lecture on AI, streamed from the Massachusetts Institute of Technology. It was about programming AI in robots, in order to make them able to learn. I remember one ‘Aha!’ sentence: ‘With a given set of empirical data supplied for training, robots become more proficient at completing some specific tasks rather than others’. At the time, I was working on an article for the journal ‘Energy’. I was struggling. I had an empirical dataset on energy efficiency in selected countries (i.e. on the average amount of real output per unit of energy consumption), combined with some other variables. After weeks and weeks of data mining, I had a gut feeling that some important meaning is hidden in that data, only I wasn’t able to put my finger precisely on it.

That MIT-coined sentence on robots triggered that crazy question in me. What if I return to the old and apparently obsolete claim of the utilitarian school in social sciences, and assume that all those societies I have empirical data about are something like one big organism, with different variables being just different measurable manifestations of its activity?

Why was that question crazy? Utilitarianism is always contentious, as it is frequently used to claim that small local injustice can be justified by bringing a greater common good for the whole society. Many scholars have advocated for that claim, and probably even more of them have advocated against. I am essentially against. Injustice is injustice, whatever greater good you bring about to justify it. Besides, being born and raised in a communist country, I am viscerally vigilant to people who wield the argument of ‘greater good’.

Yet, the fundamental assumptions of utilitarianism can be used under a different angle. Social systems are essentially collective, and energy systems in a society are just as collective. There is any point at all in talking about the energy efficiency of a society only when we are talking about the entire, intricate system of using energy. About 30% of the energy that we use is used in transport, and transport, by definition, goes from one person to another. Stands to reason, doesn’t it?

Studying my dataset as a complex manifestation of activity in a big complex organism begs for the basic question: what do organisms do, like in their daily life? They adapt, I thought. They constantly adjust to their environment. I mean, they do if they want to survive. If I settle for studying my dataset as informative about a complex social organism, what does this organism adapt to? It could be adapting to a gazillion of factors, including some invisible cosmic radiation (the visible one is called ‘sunlight’). Still, keeping in mind that sentence about robots, adaptation can be considered as actual optimization of some specific traits. In my dataset, I have a range of variables. Each variable can be hypothetically considered as informative about a task, which the collective social robot strives to excel at.

From there, it was relatively simple. At the time (some 16 months ago), I was already familiar with the logical structure of a perceptron, i.e. a very basic form of artificial neural network. I didn’t know – and I still don’t – how to program effectively the algorithm of a perceptron, but I knew how to make a perceptron in Excel. In a perceptron, I take one variable from my dataset as output, the remaining ones are instrumental as input, and I make my perceptron minimize the error on estimating the output. With that simple strategy in mind, I can make as many alternative perceptrons out of my dataset as I have variables in the latter, and it was exactly what I did with my data on energy efficiency. Out of sheer curiosity, I wanted to check how similar were the datasets transformed by the perceptron to the source empirical data. I computed Euclidean distances between the vectors of expected mean values, in all the datasets I had. I expected something foggy and pretty random, and once again, life went against my expectations. What I found was a clear pattern. The perceptron pegged on optimizing the coefficient of fixed capital assets per one domestic patent application was much more similar to the source dataset than any other transformation.
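The original Excel file is not reproduced here, but the general procedure can be sketched in Python. This is a minimal reconstruction under my own simplifying assumptions (a single sigmoid neuron, a random hypothetical dataset standing in for my real data on energy efficiency), not the exact computation I ran.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def perceptron_transform(data, output_col, epochs=200, lr=0.05, seed=0):
    # Train a single-neuron perceptron to estimate one column from the others,
    # then return the vector of mean values of the transformed dataset.
    rng = np.random.default_rng(seed)
    X = np.delete(data, output_col, axis=1)
    y = data[:, output_col]
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        estimate = sigmoid(X @ w)
        error = y - estimate
        w += lr * X.T @ (error * estimate * (1.0 - estimate))  # simple delta rule
    transformed = data.copy()
    transformed[:, output_col] = sigmoid(X @ w)
    return transformed.mean(axis=0)

rng = np.random.default_rng(1)
data = rng.random((60, 5))                    # hypothetical standardized dataset
source_means = data.mean(axis=0)

# One alternative perceptron per variable; compare each transformed dataset to the
# source through the Euclidean distance between the vectors of mean values.
for col in range(data.shape[1]):
    distance = np.linalg.norm(perceptron_transform(data, col) - source_means)
    print('variable %d as output -> distance %.4f' % (col, distance))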

In other words, I created an intelligent computation, and I made it optimize different variables in my dataset, and it turned out that, when optimizing that specific variable, i.e. the coefficient of fixed capital assets per one domestic patent application, that computation was the most faithful representation of the real empirical data.

This is when I started wrapping my mind around the idea that artificial neural networks can be more than just tools for optimizing quantitative models; they can be simulators of social reality. If that intuition of mine is true, societies can be studied as forms of intelligence, and, as they are, precisely, societies, we are talking about collective intelligence.

Much to my surprise, I am discovering similar a perspective in Steven Pinker’s book ‘How The Mind Works’ (W. W. Norton & Company, New York London, Copyright 1997 by Steven Pinker, ISBN 0-393-04535-8). Professor Steven Pinker uses a perceptron as a representation of human mind, and it seems to be a bloody accurate representation.

That makes me come back to the interest that readers could have in my book about collective intelligence, and I cannot help referring to still another book of another author: Nassim Nicholas Taleb’s ‘The black swan. The impact of the highly improbable’ (2010, Penguin Books, ISBN 9780812973815). Speaking from an abundant experience of quantitative assessment of risk, Nassim Taleb criticizes most quantitative models used in finance and economics as pretty much useless in making reliable predictions. Those quantitative models are good solvers, and they are good at capturing correlations, but they suck at predicting things based on those correlations, he says.

My experience of investment in the stock market tells me that those mid-term waves of stock prices, which I so much like riding, are the product of dissonance rather than correlation. When a specific industry or a specific company suddenly starts behaving in an unexpected way, e.g. in the context of the pandemic, investors really pay attention. Correlations are boring. In the stock market, you make good money when you spot a Black Swan, not another white one. Here comes a nuance. I think that black swans happen unexpectedly from the point of view of quantitative predictions, yet they don’t come out of nowhere. There is always a process that leads to the emergence of a Black Swan. The trick is to spot it in time.

F**k, I need to focus. The interest of my book for the readers. Right. I think I can use the concept of collective intelligence as a pretext to discuss the logic of using quantitative models in social sciences in general. More specifically, I want to study the relation between correlations and orientations. I am going to use an example in order to make my point a bit more explicit, hopefully. In my preceding update, titled ‘Cool discovery’, I did my best, using my neophytic and modest skills in programming, to translate the method of negotiation proposed in Chris Voss’s book ‘Never Split the Difference’ into a Python algorithm. Surprisingly for myself, I found two alternative ways of doing it: as a loop, on the one hand, and as a class, on the other hand. They differ greatly.

Now, I simulate a situation when all social life is a collection of negotiations between people who try to settle, over and over again, contentious issues arising from us being human and together. I assume that we are a collective intelligence of people who learn by negotiated interactions, i.e. by civilized management of conflictual issues. We form social games, and each game involves negotiations. It can be represented as a lot of these [screenshot of the negotiation programmed as a Python class, not reproduced here] …

… and a lot of those [screenshot of the negotiation programmed as a Python loop, not reproduced here].

In other words, we collectively negotiate by creating cultural classes – logical structures connecting names to facts – and inside those classes we ritualise looping behaviours.

Cool discovery

Writing about me learning something helps me to control emotions involved into the very process of learning. It is like learning on the top of learning. I want to practice programming, in Python, the learning process of an intelligent structure on the basis of negotiation techniques presented in Chris Voss’s book ‘Never Split the Difference’. It could be hard to translate a book into an algorithm, I know. I like hard stuff, and I am having a go at something even harder: translating two different books into one algorithm. A summary, and an explanation, are due. Chris Voss develops, in the last chapter of his book, a strategy of negotiation based on the concept of Black Swan, as defined by Nassim Nicholas Taleb in his book ‘The black swan. The impact of the highly improbable’ (I am talking about the revised edition from 2010, published with Penguin Books, ISBN 9780812973815).

Generally, Chris Voss takes a very practical drift in his method of negotiation. By ‘practical’, I mean that he presents techniques which he developed and tested in hostage negotiations at the FBI, where he used to be the chief international hostage negotiator. He seems to attach particular importance to all the techniques which allow unearthing the non-obvious in negotiations: hidden emotions, ethical values, and contextual factors with strong impact on the actual negotiation. His method is an unusual mix of a rigorous cognitive approach with a very emotion-targeting thread. His reference to Black Swans, thus to what we don’t know we don’t know, is an extreme version of that approach. It consists in using literally all our cognitive tools to uncover events and factors in the game which we didn’t even initially know were in the game.

Translating a book into an algorithm, especially for a newbie in programming such as I am, is hard. Still, in the case of ‘Never Split the Difference’, it is a bit easier because of the very game-theoretic nature of the method presented. Chris Voss attaches a lot of importance to taking our time in negotiations, and to making our counterpart make a move rather than overwhelming them with our moves. All that is close to my own perspective and makes the method easier to translate into a functional sequence where each consecutive phase depends on the preceding phase.

Anyway, I assume that a negotiation is an intelligent structure, i.e. it is an otherwise coherent and relatively durable structure which learns by experimenting with many alternative versions of itself. That implies a lot. Firstly, it implies that the interaction between negotiating parties is far from being casual and accidental: it is a structure, it has coherence, and it is supposed to last by recurrence. Secondly, negotiations are supposed to be learning much more than bargaining and confrontation. Yes, it is a confrontation of interests and viewpoints, nevertheless the endgame is learning. Thirdly, an intelligent structure experiments with many alternative versions of itself and learns by assessing the fitness of those versions in coping with a vector of external stressors. Therefore, negotiating in an intelligent structure means that, consciously or unconsciously, we, mutual counterparts in negotiation, experiment together with many alternative ways of settling our differences, and we are essentially constructive in that process.

Do those assumptions hold? I guess I can somehow verify them by making first steps into programming a negotiation.  I already know two ways of representing an intelligent structure as an algorithm: in the form of a loop (primitive, tried it, does not fully work, yet has some interesting basic properties), or in the form of a class, i.e. a complex logical structure which connects names to numbers.

When represented as a loop, a negotiation is a range of recurrent steps, where the same action is performed a given number of times. Looping means that a negotiation can be divided into a finite number of essentially identical steps, and the endgame is the cumulative output of those steps. With that in mind, I can see that a loop is not truly intelligent a structure. Intelligent learning requires more than just repetition: we need consistent assessment and dissemination of new knowledge. Mind you, many negotiations can play out as ritualized loops, and this is when they are the least productive. Under the condition of unearthing Black Swans hidden in the contentious context of the negotiation, the whole thing can play out as an intelligent structure. Still, many loop-like negotiations which recurrently happen in a social structure, can together form an intelligent structure. Looks like intelligent structures are fractal: there are intelligent structures inside intelligent structures etc. Intelligent social structures can contain chains of ritualized, looped negotiations, which are intelligent structures in themselves.   

Whatever. I program. When I try to sift the essential phenomenological categories out of Chris Voss’s book ‘Never Split the Difference’, I get to the following list of techniques he recommends:

>> Mirroring – I build emotional rapport by just repeating the last three words of each big claim phrased out by my counterpart.

>> Labelling – I further build emotional rapport by carefully and impersonally naming emotions and aspirations in my counterpart.

>> Open-ended questions – I clarify claims and disarm emotional bottlenecks by asking calibrated open questions such as ‘How can we do X, Y, Z?’ or ‘What do we mean by…?’ etc.

>> Claims – I state either what I want or what I want my counterpart to think I want.

Those four techniques can be used in various shades and combinations to accomplish typical partial outcomes in negotiation, namely: a) opportunities for your counterpart to say openly ‘No’ b) agreement in principle c) guarantee of implementation d) Black Swans, i.e. unexpected attributes of the situation which turn the negotiation in a completely different, favourable direction.

I practice phrasing it out as a class in Python. Here is what I came up with and which my JupyterLab compiler swallows nicely without yielding any errors:
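The class itself went into the original update as a screenshot, which is not reproduced here. Purely as a hypothetical sketch – my guess at what such a class could look like, not the code I actually pasted – it could run along these lines, with the four techniques as methods and the four partial outcomes as attributes:

class Negotiation:
    """A negotiation as a logical structure connecting names to facts."""

    def __init__(self, counterpart, contentious_issue):
        self.counterpart = counterpart
        self.contentious_issue = contentious_issue
        # Partial outcomes accumulated along the way
        self.open_noes = []          # opportunities for the counterpart to say 'No'
        self.agreements = []         # agreements in principle
        self.guarantees = []         # guarantees of implementation
        self.black_swans = []        # unexpected, game-changing attributes of the situation

    def mirror(self, claim):
        # Repeat the last three words of the counterpart's claim.
        return ' '.join(claim.split()[-3:]) + '?'

    def label(self, emotion):
        # Impersonally name an emotion or aspiration.
        return 'It seems like ' + emotion + '.'

    def open_question(self, topic):
        # Ask a calibrated, open-ended question.
        return 'How can we ' + topic + '?'

    def claim(self, statement):
        # State what I want, or what I want my counterpart to think I want.
        return statement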

Mind you, I don’t know how exactly it works, algorithmically. I am a complete newbie to programming classes in Python, and my first goal is to have the grammar right, and thus not to have to deal with those annoying, salmon-pink-shaded messages of error.

Before I go further into programming negotiation as a class, I feel like I need to go back to my primitive skills, i.e. to programming loops, in order to understand the mechanics of the class I have just created. Each ‘self’ in the class is a category able to have many experimental versions of itself. I try the following structure:
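The structure I tried also went in as a screenshot; what follows is only a hypothetical reconstruction of that kind of loop, written so that it reproduces exactly the error I describe right below – the lists get appended from a dataset which is never defined.

# Hypothetical reconstruction: each category from the class, looped over many
# experimental versions of itself, appended from a dataset that does not exist yet.
mirrors = []
labels = []
open_questions = []
claims = []

for i in range(len(dataset)):        # NameError: name 'dataset' is not defined
    mirrors.append(dataset[i]['mirror'])
    labels.append(dataset[i]['label'])
    open_questions.append(dataset[i]['open_question'])
    claims.append(dataset[i]['claim'])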

As you can see, I received an error of non-definition: I have not defined the dataset which I want to use for appending my lists. Such a dataset would contain linguistic strings, essentially. Thus, the type of datasets I am operating with here are sets of linguistic strings, i.e. sets of objects. An intelligent structure representative of negotiation is an algorithm for processing natural language. Cool discovery.

Money being just money for the sake of it

I have been doing that research on the role of cities in our human civilization, and I remember the moment of first inspiration to go down this particular rabbit hole. It was the beginning of March 2020, when the first epidemic lockdown was imposed in my home country, Poland. I was cycling through the streets of Krakow, my city, from home to the campus of my university. I remember being floored at how dead – yes, literally dead – the city looked. That was the moment when I started perceiving cities as something almost alive. I started wondering how the pandemic would affect the mechanics of those quasi-living, urban organisms.

Here is one aspect I want to discuss: restaurants. Most restaurants in Krakow have turned into takeouts. In the past, each restaurant had the catering part of the business, but it was mostly for special events, like conferences, weddings and whatnot. Catering was sort of a wholesale segment in the restaurant business, and the retail was, well, the table, the napkin, the waiter, that type of story. That retail part was supposed to be the main one. Catering was an addition to that basic business model, which entailed a few characteristic traits. When your essential business process takes place in a restaurant room with tables and guests sitting at them, the place is just as essential. The location, the size, the look, the relative accessibility: it all played a fundamental role. The rent for the place was among the most important fixed costs of a restaurant. When setting up business, one of the most important questions – and risk factors – was: “Will I be able to attract enough customers to this place, and to ramp up prices high enough so as to pay the rent for the place and still have satisfactory profit?”. It was like a functional loop: a better place (location, look) meant more select a clientele and higher prices, which in turn required paying a higher rent etc.

As I was travelling to other countries, and across my own country, I noticed many times that the attributes of the restaurant as a physical place were partly a substitute for the quality of food. I know a lot of places where the customers used to pretend that the food was excellent just because said food was so strange that it just didn’t do to say it was crappy in taste. Those people pretended they enjoyed the food because the place was awesome. The awesomeness of the place, in turn, was largely based on the fact that many people enjoyed coming there; it was trendy, stylish, it was a good thing to show up there from time to time, just to show you have something to show to others. That was another loop in the business model of restaurants: the peculiar, idiosyncratic, gravitational field between places and customers.

In that business model, quite substantial expenses, i.e.  the rent, and the money spent on decorating and equipping the space for customers were essentially sunk costs. The most important financial outlays you made to make the place competitive did not translate into any capital value in your assets. The only way to do such translation was to buy the place instead of renting it. Advantageous, long-term lease was another option. In some cities, e.g. the big French ones, such as Paris, Lyon or Marseille, the market of places suitable for running restaurants, both legally and physically, used to be a special segment in the market of real estate, with its own special contracts, barriers to entry etc.   

As restaurants turn into takeouts, amidst epidemic restrictions, their business model changes. Food counts in the first place, and the place counts only to the extent of accessibility for takeout. Even if I order food from a very fancy restaurant, I pay for food, not for fanciness. When consumed at home, with the glittering reputation of the restaurant taken away from it, food suddenly tastes different. I consume it much more with my palate and much less with my ideas of what is trendy. Preparation and delivery of food becomes the essential business process. I think it facilitates new entries into the market of gastronomy. Yes, I know, restaurants are going bankrupt, and my take on it is that places are going bankrupt, but people stay. Chefs and cooks are still there. Human capital, until recently being 50/50 important – together with the real estate aspect of the business – becomes definitely the greatest asset of the restaurant sector as it focuses on takeout. Broadly understood cooking skills, including the ability to purchase ingredients of good quality, become paramount. Equipping a business-scale kitchen is not really rocket science, and, what is just as important, there is a market for second-hand equipment of that kind. The equipment of a kitchen, in a takeout-oriented restaurant, is much more of an asset than the decoration of a dining room. The rent you pay, or the market price of the whole place in the real-estate market, is much lower, too, as compared to classical restaurants.

What restaurant owners face amidst the pandemic is the necessity to switch quickly, and on very short notice of 1 – 2 weeks, between their classical business model based on a classy place to receive customers, and the takeout business model, focused on the quality of food and the promptness of delivery. It is a zone of uncertainty more than a durable change, and this zone is associated with different cash flows and different assets. That, in turn, means measurable risk. Risk, in big amounts, is essentially an amount much more than a likelihood. We talk about risk, in economics and in finance, when we are actually sure that some adverse events will happen, and we even know the total amount of adversity to deal with; we just don’t know where exactly that adversity will hit and who exactly will have to deal with it.

There are two basic ways of responding to measurable risk: hedging and insurance. I can face risk by having some aces up my sleeve, i.e. by having some alternative assets, sort of fall-back ones, which assure me slightly softer a landing, should the s**t which I hedge against really happen. When I am at risk in my in-situ restaurant business, I can hedge towards my takeout business. With time, I can discover that I am so good at the logistics of delivery that it pays off to hedge towards a marketing platform for takeouts rather than one takeout business. There is an old saying that you shouldn’t put all your eggs in the same basket, and hedging is the perfect illustration thereof. I hedge in business by putting my resources in many different baskets.

On the other hand, I can face risk by sharing it with other people. I can make a business partnership with a few other folks. When I don’t really care who exactly those folks are, I can make a joint-stock company with tradable shares of participation in equity. I can issue derivative financial instruments pegged on the value of the assets which I perceive as risky. When I lend money to a business perceived as risky, I can demand that it be secured with tradable notes AKA bills of exchange. All that is insurance, i.e. a scheme where I give away part of my cash flow in exchange for the guarantee that other people will share with me the burden of damage, should my risks materialize. The type of contract designated expressis verbis as ‘insurance’ is one among many forms of insurance: I pay an insurance premium in exchange for the insurer’s guarantee to cover my damages. Restaurant owners can insure their epidemic-based risk by sharing it with someone else. With whom and against what kind of premium on risk? Good question. I can see something like a shade of that. During the pandemic, marketing platforms for gastronomy, such as Uber Eats, swell like balloons. These might be the insurers of the restaurant business. They capitalize on the customer base for takeout. As a matter of fact, they can almost own that customer base.

A group of my students, all from France, as if by accident, had an interesting business concept: a platform for ordering food from specific chefs. A list of well-credentialed chefs is available on the website. Each of them recommends a few flagship recipes of theirs. The customer picks the specific chef and their specific culinary chef d’oeuvre. One more click, and the customer has that chef d’oeuvre delivered to their doorstep. Interesting development. Pas si con que ça – not as dumb as it sounds – as the French say.

Businesspeople have been using both hedging and insurance for centuries, to face various risks. When used systematically, those two schemes create two characteristic types of capitalistic institutions: financial markets and pooled funds. Spreading my capitalistic eggs across many baskets means that, over time, we need a way to switch quickly among baskets. Tradable financial instruments serve to that purpose, and money is probably the most liquid and versatile among them. Yet, it is the least profitable one: flexibility and adaptability is the only gain that one can derive from holding large monetary balances. No interest rate, no participation in profits of any kind, no speculative gain on the market value. Just adaptability. Sometimes, just being adaptable is enough to forego other gains. In the presence of significant need for hedging risks, businesses hold abnormally large amounts of cash money.

When people insure a lot – and we keep in mind the general meaning of insurance as described above – they tend to create large pooled funds of liquid financial assets, which stand at the ready to repair any breach in the hull of the market. Once again, we return to money and financial markets. Whilst abundant use of hedging as strategy for facing risk leads to hoarding money at the individual level, systematic application of insurance-type contracts favours pooling funds in joint ventures. Hedging and insurance sort of balance each other.

Those pieces of the puzzle sort of fall together into a pattern. As I have been doing my investments in the stock market, all over 2020, financial markets seem to be puffy with liquid capital, and that capital seems to be avid for some productive application. It is as if money itself were saying: ‘C’mon, guys. I know I’m liquid, and I can compensate risk, but I am more than that. Me being liquid and versatile makes me easily convertible into productive assets, so please, start converting. I’m bored with being just me, I mean with money being just money for the sake of it’.