The art of pulling the right lever

I dig into the idea of revising my manuscript ‘Climbing the right hill – an evolutionary approach to the European market of electricity’, in order to resubmit it to the journal Applied Energy, by somehow fusing it with two other, unpublished pieces of my writing, namely ‘Behavioural absorption of Black Swans: simulation with an artificial neural network’ and ‘The labour-oriented, collective intelligence of ours: Penn Tables 9.1 seen through the eyes of a neural network’.

I am focusing on one particular aspect of that revision by recombination, namely on comparing the empirical datasets which I used for each piece of research in question. This is an empiricist approach to scientific writing: I assume that points of overlap, as well as possible synergies, rest, at the end of the day, on the overlap and synergies between the respective empirical bases of my different papers.

In ‘Climbing the right hill […]’, my basic dataset consisted of m = 300 ‘country-year’ observations, in the timeframe from 2008 through 2017, covering the following countries: Belgium, Bulgaria, Czechia, Denmark, Germany, Estonia, Ireland, Greece, Spain, France, Croatia, Italy, Cyprus, Latvia, Lithuania, Luxembourg, Hungary, Malta, Netherlands, Austria, Poland, Portugal, Romania, Slovenia, Slovakia, Finland, Sweden, United Kingdom, Norway, and Turkey. The scope of variables covered is essentially that of Penn Tables 9.1, plus some variables from other sources, pertinent to the market of electricity, to the energy sector in general, and to technological change, namely:

>> The price fork, in €, between the retail price of electricity, paid by households and really small institutional entities, on the one hand, and the prices paid by big institutional consumers, on the other hand

>> The capital value of that price fork, in € mln, thus the difference in prices multiplied by the quantity of electricity consumed

>> Total consumption of energy in the country (thousands of tonnes of oil equivalent)

>> The percentage share of electricity in the total consumption of energy

>> The percentage share of renewable sources in the total output of electricity

>> The number of resident patent applications per country per year

>> The coefficient of fixed assets per 1 resident patent application

>> The coefficient of resident patent applications per 1 million people
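The two coefficients closing this list are simple ratios; a minimal pandas sketch (with illustrative column names, not the dataset’s actual headers) would be:

```python
import pandas as pd

# Hypothetical mini-frame; column names are illustrative, not the original dataset's.
df = pd.DataFrame({
    "fixed_assets_eur_mln": [200.0, 900.0],
    "resident_patent_applications": [50, 300],
    "population": [5_000_000, 30_000_000],
})

# Coefficient of fixed assets per 1 resident patent application
df["assets_per_patent"] = df["fixed_assets_eur_mln"] / df["resident_patent_applications"]

# Coefficient of resident patent applications per 1 million people
df["patents_per_million"] = df["resident_patent_applications"] / (df["population"] / 1e6)
```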

The full set, in Excel format, is accessible via the following link: . I also used a recombination of that database, made of m = 3000 randomly stacked records from the m = 300 set, in order to check the influence of the order of ‘country-year’ observations upon the results I obtained.

In the two other manuscripts, namely in ‘The behavioural absorption of Black Swans […]’ and in ‘The labour-oriented, collective intelligence of ours […]’, I used one and the same empirical database, made of m = 3006 ‘country-year’ records, all selected from Penn Tables 9.1, with the criterion of selection being the fullness of information. In other words, I kicked out of Penn Tables 9.1 all the rows with empty cells, and what remains is the m = 3006 set.
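That ‘fullness of information’ filter is a one-liner in pandas; the tiny frame below merely stands in for Penn Tables 9.1, and its column names are illustrative:

```python
import numpy as np
import pandas as pd

# Toy stand-in for Penn World Table 9.1; in practice one would read the real file,
# e.g. pwt = pd.read_excel("pwt91.xlsx") (file name assumed here).
pwt = pd.DataFrame({
    "country": ["A", "B", "C"],
    "year": [2008, 2009, 2010],
    "avh": [1700.0, np.nan, 1650.0],
    "labsh": [0.55, 0.60, np.nan],
})

# Keep only the rows with full information: drop any row containing an empty cell.
complete = pwt.dropna(how="any").reset_index(drop=True)
```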

As I attempt to make some sort of cross-analysis between my results from those three papers, one crossing is obvious. Variables pertinent to the labour market, i.e. the average number of hours worked per person per year (AVH), the percentage of labour compensation in gross national income (LABSH), and the indicator of human capital (HC), informative about the average length of the educational path of professionally active people, seem to play a special role as collectively pursued outcomes. The special role of those three – AVH, LABSH, and HC – seems to be impervious to the presence or absence of the variables I added from other sources in ‘Climbing the right hill […]’. It also seems impervious to the geographical scope and the temporal window of observation.

The most interesting direction for further exploration seems to be the crossing of ‘Black Swans […]’ with ‘Climbing the right hill […]’. I take the structure from ‘Black Swans […]’ – namely the model where the optimization of an empirical variable impacts a range of social roles – and I put in that model the dataset from ‘Climbing the right hill […]’. I observe the patterns of learning occurring in the perceptron as I take different empirical variables.

Variables which are strong collective orientations – AVH, LABSH, and HC – display a special pattern of learning, different from other variables. Their local residual error (i.e. the arithmetical difference between the value of the neural activation function and the local empirical value at hand) swings with a wide amplitude, yet in a predictable cycle. It is a pattern of learning along the lines of ‘we make a lot of mistakes, then we minimize them, and then we repeat: a lot of mistakes followed by a period of accuracy’. Other variables, run through the same model, display something different: a general tendency to minimal error, with occasional, pretty random bumps. Not much error, and not much of a visible cycle in learning.

The national societies which I study seem to orient themselves on outcomes which associate with a strong and predictably cyclical amplitude of error, that is, with abundant learning in a predictable cycle. There is one more thing. When optimizing variables relative to the labour market – AVH, LABSH, and HC – the model from ‘Black Swans […]’ shows relatively the highest resilience in the incumbent social roles, i.e. those in place before social disruption starts.

Good. Something takes shape. I am reframing the method and the material I want to introduce in the revised version of ‘Climbing the right hill […]’, for the journal Applied Energy, and I add some results and provisional conclusions.

When I take the empirical material from Penn Tables 9.1, thus when I observe the otherwise bloody chaotic thing called ‘society’ through the lens of quantitative variables pertinent to the broadly spoken realm of macroeconomics, that material shows some repetitive, robust properties. When I run it through a learning procedure, expressed in the form of a simple neural network, the learning centred on optimizing variables pertinent to the labour market (AVH, LABSH, HC), as well as on the index of prices in export (PL_X), yields artificial datasets more similar to the original one, in terms of Euclidean similarity, than any other such artificial dataset optimizing other variables. That phenomenological hierarchy seems to be robust both to modifications of scope and to those of spatial-temporal range. When I add variables pertinent to technological change and to the market of electricity, they obediently take their place in the rank, and don’t step forward. When I extend the geographical scope of observation from Europe to the whole world, and when I extend the window of observation from the initial {2008 ÷ 2017} to the longer {1954 ÷ 2017}, the same still holds.
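The Euclidean similarity between an artificial dataset and the original can be sketched as follows; treating the datasets as flattened vectors is my simplifying assumption for illustration, not necessarily the exact metric of the paper:

```python
import numpy as np

def euclidean_distance(a, b):
    """Euclidean distance between two equally-shaped datasets, flattened to vectors."""
    return float(np.linalg.norm(np.asarray(a).ravel() - np.asarray(b).ravel()))

original = np.array([[1.0, 2.0], [3.0, 4.0]])
clone_close = original + 0.1   # an artificial dataset close to the original
clone_far = original + 2.0     # an artificial dataset further away

d_close = euclidean_distance(original, clone_close)
d_far = euclidean_distance(original, clone_far)
# The artificial dataset with the smaller distance ranks as more similar to the original.
```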

As I try to explain why it is so, and to find an empirical explanation, I make another neural network, where each empirical variable from the original dataset is the optimized output, and optimization takes place by experimenting with a vector of probabilities assigned to a set of social roles, plus a random factor of disturbance. The pattern of learning is observed as the distribution of residual errors over the entire experimental sequence of phenomenal instances. In that different perspective, the same variables which seem to be privileged collective outcomes – PL_X, AVH, LABSH, and HC – display a specific pattern of learning: they swing broadly in their error, and yet they swing in a predictable cycle. When my experimental neural network learns on other variables, the pattern is different, with the curve of error being much calmer, less bumpy, and yet much less cyclical.
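A minimal sketch of that second network might look as follows; the number of roles, the sigmoid activation, the delta-rule update, the learning rate, and the Dirichlet draw of role probabilities are all my illustrative assumptions, not the exact specification of the model:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: 5 "social roles", their probabilities as input,
# one empirical variable (scaled into (0, 1)) as the optimized output.
n_roles, n_steps = 5, 200
weights = rng.normal(0.0, 0.1, n_roles)   # initial weights of the social roles
target = 0.7                              # stand-in for the local empirical value
errors = []

for step in range(n_steps):
    roles = rng.dirichlet(np.ones(n_roles))  # probabilities of role endorsement, summing to 1
    disturbance = rng.normal(0.0, 0.01)      # random factor of disturbance
    output = sigmoid(roles @ weights) + disturbance
    residual = target - output               # local residual error
    weights += 0.5 * residual * roles        # simple delta-rule update
    errors.append(residual)

errors = np.array(errors)  # the learning pattern: distribution of residuals over the sequence
```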

I return to my method and to my theoretical assumptions. I recapitulate. I start by assuming that social reality is essentially chaotic and unobservable directly, yet I can make epistemological approximations of that thing and see how they work. In this specific piece of research, I make two such types of approximation, based on different assumptions. On the one hand, I assume that quantitative, commonly measured, socio-economic variables, such as those in Penn Tables 9.1, are partial expressions of change in that otherwise chaotic social reality, and we collect those values because they represent change in the collective outcomes which we value. On the other hand, I assume that social reality can be represented as a collection of social roles, in two distinct categories: the already existing, active social roles, accompanied by temporarily dormant, ready-to-be-triggered roles. Those social roles are observable as relative frequencies of occurrence, thus as the probability that any given individual endorses them.

I further assume that human societies are collectively intelligent structures, which, in turn, means that we collectively learn by experimenting with many alternative versions of ourselves. By the way, I have been wondering whether this is a hypothesis or an assumption, and I settled for assumption, because I do not really bring any direct proof thereof, and yet I make the claim. Anyway, with the assumption of collective intelligence, I can simulate two mutually correlated processes of learning through experimentation. On the one hand, among all the collective outcomes represented with quantitative socio-economic variables, we learn hierarchically, i.e. we optimize some of those outcomes in the first place, whilst treating the other ones as instrumental to that chief goal. On the other hand, we optimize each of those outcomes, represented with quantitative variables, by experimenting with the relative prevalence (i.e. probability of endorsement) in distinct social roles.

That general theoretical perspective is the foundation which I use both to build an empirical method of research, and to substantiate the claim that public policies and business strategies which stimulate technological race, with a clear premium for winners and a clear penalty for losers, are likely to bring better results, especially in the long run, than policies and strategies aiming at erasing local idiosyncrasies and at creating uniformly distributed outcomes. My point is that the latter, i.e. policies oriented on nullifying local idiosyncrasies, lead either to the absence of idiosyncrasies, and, consequently, to the absence of different versions of ourselves to experiment with and learn from, or they simply prove inefficient, as they try to move the wrong lever in the machine.

Now, looking through another door inside my head, I am presenting below the structure of semestral projects I assign to my students, in the Summer semester 2021, in two different, and yet somehow concurrent courses: International Trade Policy in the major International Relations, and International Management in the major Management. You will see how I teach, and how I get a bit obsessive about digging into the same ideas, over and over again.

The complex project to graduate the International Management course, Summer semester 2021

Our common goal: develop your understanding of the transition from the domestically based business structure to an international one.

Your goal: prepare a developed, well-informed business plan, for the development of a business, from the level of one national market, to the international level. That business plan is your semestral project, which you graduate the course of International Management with.

You can see this course as an opportunity to put together and utilize the partial learning you have from all the individual subject courses you have had so far.

Your deadline is June 25th, 2021. 

Definition – international scale of a business means that it becomes an economically significant choice to branch the operations into or move them completely to foreign markets. In other words, the essential difference between domestic management and international management – at least the difference we will focus on in this course – is that in domestic management the initial place of incorporation determines the strategy, whilst in international management the geographical location of operations and incorporation(s) is determined by strategic choices. 

You work with a business concept of your own, or you take one of the pre-prepared business plans available at the digital platform. These are graduation business plans prepared by students from other groups, in the Winter semester 2020/2021. In other words, you develop either on your own idea, or on someone else’s idea. One of the things you will find out is that different business concepts have different potential, and follow very different paths for going to the international level.

Below, you will find the list of those pre-prepared business plans. They are coupled with links to the archives of my blog, where you can download them from. Still, you can find them as well in the ‘Files’ section of the group ‘International Management’, folder ‘Class materials’.

>> Pizzeria >>

>> Pancake Café >>

>> Never Alone >>

>> 3D Virtual Fitting Room >>

>> ToyBox >>

>> Chess Manufacturing (semi-finished, interesting to develop from that form) >>

>> Second-hand market for luxury goods >>

We will abundantly use real-life cases of big, internationally branched businesses as our business models. Some of them are those which you already know from past semesters, whilst others might be new to you:

>> Netflix >>

>> Tesla >>

>> PayPal >>

>> Solar Edge >>

>> Novavax >>

>> Pfizer >>

>> Starbucks >>

>> Amazon >>

That orientation on real business cases means that the course of International Management is, from your point of view, a course of market research, business planning, and basic empirical science, more than a theoretical course. This is precisely what we are going to be doing in our classes: market research, business planning, and basic empirical science. 

You can benefit from running yourself through my online course of business planning, to be found at .

The basic structure of the business plan which you will prepare is the following:

  • Section 1: Executive summary. This is a summary of the essentials, developed in further sections of the business plan. Particular focus on why and how going international with that business concept.
  • Section 2: Description of the business concept. How do we create and capture value added in that thing? What kind of value added is that? What are the goods we market? Who are our target customers? What kind of really existing, operational business models, observable in actually operational companies, do we emulate in that business?
  • Section 3: Market research. We focus on collecting and presenting information on our customers, and our competitors.
  • Section 4: Organization. How are we going to structure human work in that business? How many people do we need, and what kind of organizational structure should we make them work in? What is the estimate, total payroll per month and per year, in that organization?
  • Section 5: The strategy for going international. Can we develop an original, proprietary technology, and apply it in different national markets? Can we benefit from the economies of scale, or those of scope, as we go international? Can we optimize and standardize our business concept into a franchise, attractive for smaller partners in foreign markets? << this is the ‘INTERNATIONAL MANAGEMENT’ part of that business plan. Now, you demonstrate your understanding of what international management is.
  • Section 6: The corporate business structure. Do you see that business as one compact business entity, which operates internationally via digital platforms and contracts with external partners, or, conversely, would you rather create a network of affiliated companies in separate national (regional?) markets, all tied to and controlled by one mother company? Develop on those options and justify your choice. 
  • Section 7: The financial plan. Plan of revenues, costs, and of the resulting profit/loss for 3 years ahead. The balance sheet we need to start with, and its prospective changes over the next 3 years. The prospective cash-flow.

Guidelines for the graduation project in International Trade Policy, Summer semester 2021

You graduate the course of ‘International Trade Policy’ by preparing a project. Your project will be a business report, the kind you could have to prepare if you were an assistant to the CEO of a big firm, or to a prime minister. You are supposed to prepare a report on the impact of trade on individual businesses and national economies, in a sort of controlled economic experiment, limited in scope and in space. Your goal in the preparation of that project is to develop an active understanding of international trade.

You can access the files provided as additional materials for this assignment in two ways. Below in this document, I provide links to the archives of my blog, ‘Discover social sciences’. On the other hand, all those files are to be found in the ‘Files’ section of the ‘International Trade Policy’ group, in the folder ‘Class Materials’.

Your report will have two sections. In Section A, you study the impact of international trade on a set of businesses. Your business cases encompass real companies, some of which you already know from the course of microeconomics – Tesla, Netflix, Amazon, H&M – as well as new business entities which can emerge as per the business plans introduced below (these are real business plans made by students in other groups in the Winter semester 2020/2021).  

In Section B of your report, imagine that you are the government of, respectively, Poland, Ukraine, and France. Imagine that businesses from Section A grow in your country. Given the macroeconomic characteristics of your national economy, which types of those businesses are likely to grow the most, and which are not really fit? As those businesses grow, would you, as a country, see your exports grow, or would it rather be an increase in your imports? How would it affect your overall balance of trade? What would you do as a government, and why?

Additional guidelines and materials for the Section A of your report:

You can make a simplifying assumption that businesses can develop with and through trade along two different, although not exactly exclusive paths:

  • Case A: there is a technology with potential for growth, which can be developed by expanding its target market, through exports or through franchising
  • Case B: the given business can develop significant economies of scale and scope, and trade, i.e. exports and/or imports, is a way to achieve that

You can benefit from studying the model contract of sales in international trade:

… as well as studying the so-called Incoterms >>, which are standard conditions of delivery in international trade.

The early business concepts developed by students from other groups, which you are supposed to assess as for their capacity to grow through trade, are:

The investor relations sites of the real, big companies, whose development with trade you are supposed to study as well:

Additional guidelines and materials for the Section B of your report:

The so-called trade profiles of countries, accessible with the World Trade Organization:

Example of an international trade agreement, namely that between South Korea and Australia:

Macroeconomic profiles of Poland, Ukraine, and France >>

We haven’t nailed down all our equations yet

As I keep digging into the topic of collective intelligence, and my research thereon with the use of artificial neural networks, I am making a list of key empirical findings that pave my way down this particular rabbit hole. I am reinterpreting them with the new understanding I have from translating my mathematical model of an artificial neural network into an algorithm. I am learning to program in Python, which comes in handy given that I want to use AI. How could I have made and used artificial neural networks without programming, just using Excel? You see, that’s Laplace and his hypothesis that mathematics represent the structure of reality.

An artificial neural network is a sequence of equations which interact, in a loop, with a domain of data. Just as any of us, humans, essentially. We just haven’t nailed down all of our own equations yet. What I could do, and have done, with Excel was to understand the structure of those equations and their order. This is a logical structure, and as long as I don’t give it any domain of data to feed on, it stays put.

When I feed data into that structure, it starts working. Now, with any set of empirical socio-economic variables I have worked with so far, there are always one or two among them which stand out from the others as output. Generally, my neural network works differently according to the output variable I make it optimize. Yes, it is the output variable, supposedly the desired outcome to optimize, and not the input variables treated as instrumental in that view, which makes the greatest difference in the results produced by the network.

That seems counterintuitive, and yet this is like the most fundamental common denominator of everything I have found out so far: the way that a simple neural network simulates the collective intelligence of human societies seems to be conditioned most of all by the variables pre-set as the output of the adaptation process, not by the input ones. Is it a sensible conclusion regarding collective intelligence in real life, or is it just a property of the data? In other words, is it social science or data science? This is precisely one of the questions which I want to answer by learning programming.
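That swapping of the output slot can be sketched in a few lines; the one-layer sigmoid perceptron below, the toy data, and the final-epoch error measure are all simplifying assumptions for illustration, not the full pipeline from my papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_perceptron(X, y, epochs=100, lr=0.1):
    """One-layer sigmoid perceptron; returns the mean absolute error of the last epoch."""
    w = np.zeros(X.shape[1])
    err = y  # placeholder before the first epoch
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-(X @ w)))
        err = y - out
        w += lr * (X.T @ err) / len(y)
    return float(np.mean(np.abs(err)))

# Toy data: 4 "variables", squashed into (0, 1) so each can serve as output.
data = 1.0 / (1.0 + np.exp(-rng.normal(size=(50, 4))))

results = {}
for j in range(data.shape[1]):
    X = np.delete(data, j, axis=1)  # the other variables as input
    y = data[:, j]                  # variable j as the optimized output
    results[j] = train_perceptron(X, y)
# results now holds one error figure per choice of output variable, ready for comparison.
```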

If it is a pattern of collective human intelligence, that would mean we are driven by the orientations we pursue much more than by the actual perception of reality. What we are after would be a more important differentiating factor of our actions than what we perceive and experience as reality. This is strangely congruent with the Interface Theory of Perception (Hoffman et al. 2015[1], Fields et al. 2018[2]).

As is some kind of habit with me, in the second part of this update I give the account of my learning of programming and data science in Python. This time, I wanted to work with hard cases of CSV import, i.e. troublesome files. I want to practice data cleansing. I have downloaded the ‘World Economic Outlook October 2020’ database from the website . Already when downloading, I could notice that the announced format is ‘TAB delimited’, not ‘Comma Separated’. It downloads as an Excel file.

To start with, I used the AnyConv website to do the conversion. In parallel, I tested two other ways:

  1. opening in Excel, and then saving as CSV
  2. opening with Excel, converting to *.TXT, importing into Wizard for MacOS (statistical package), and then exporting as CSV.

What I can see right off the bat are different sizes of the same data, technically saved in the same format. The AnyConv-generated CSV is 12.3 MB, the one converted through Excel is 9.6 MB, and the last one, filtered through Excel to TXT, then through Wizard to CSV, makes 10.1 MB. Intriguing.
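Those sizes can also be read programmatically; since the three converted files live only on my disk, the snippet below sizes a temporary stand-in file instead:

```python
import os
import tempfile

# A temporary stand-in file keeps this snippet self-contained; in practice one would
# call os.path.getsize() on the three converted CSV files directly.
with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False, newline="") as f:
    f.write("a;b;c\n1;2;3\n")
    path = f.name

size_bytes = os.path.getsize(path)
size_mb = size_bytes / (1024 * 1024)  # same unit as the MB figures above
os.remove(path)
```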

I open JupyterLab online, and I create a Python 3-based Notebook titled ‘Practice 27_11_2020_part2’.

I prepare the Notebook by importing Numpy, Pandas, Matplotlib and OS. I do:

>> import numpy as np

>> import pandas as pd

>> import matplotlib.pyplot as plt

>> import os

I upload the AnyConv version of the CSV. I make sure to have the name of the file right by doing:

>> os.listdir()

…and I do:

>> WEO1=pd.DataFrame(pd.read_csv('AnyConv__WEOOct2020all.csv'))


/srv/conda/envs/notebook/lib/python3.7/site-packages/IPython/core/ DtypeWarning: Columns (83,85,87,89,91,93,95,98,99,102,103,106,107,110,111,114,115,118,119,122,123,126,127,130,131,134,135,138,139,142,143,146,147,150,151,154,155,158) have mixed types. Specify dtype option on import or set low_memory=False.

  interactivity=interactivity, compiler=compiler, result=result)

As I have been told, I add the “low_memory=False” option to the command, and I retype:

>> WEO1=pd.DataFrame(pd.read_csv('AnyConv__WEOOct2020all.csv', low_memory=False))

Result: the file is apparently imported successfully. I investigate the structure.

>> WEO1.describe()

Result: I know I have 8 rows (there should be much more, over 200), and 32 columns. Something is wrong.
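A side note which may explain those 8 rows: ‘*.describe()’ returns a table of summary statistics whose index is always the eight entries count, mean, std, min, 25%, 50%, 75% and max, so its row count says nothing about the number of observations; ‘*.shape’ does:

```python
import numpy as np
import pandas as pd

# A toy frame with 250 observations, to contrast describe() with shape.
df = pd.DataFrame({"x": np.arange(250.0), "y": np.arange(250.0) * 2})

summary = df.describe()
print(summary.shape)  # (8, 2): eight statistic rows, not eight data rows
print(df.shape)       # (250, 2): the actual number of observations
```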

I upload the Excel-converted CSV.

>> WEO2=pd.DataFrame(pd.read_csv('WEOOct2020all_Excel.csv'))

Result: Parser error

I retry, with the parameter sep=';' (usually works with Excel)

>> WEO2=pd.DataFrame(pd.read_csv('WEOOct2020all_Excel.csv',sep=';'))

Result: import successful. Let’s check the shape of the data

>> WEO2.describe()

Result: Pandas can see just the last column. I make sure.

>> WEO2.columns


Index(['WEO Country Code', 'ISO', 'WEO Subject Code', 'Country',
       'Subject Descriptor', 'Subject Notes', 'Units', 'Scale',
       'Country/Series-specific Notes', '1980', '1981', '1982', '1983', '1984',
       '1985', '1986', '1987', '1988', '1989', '1990', '1991', '1992', '1993',
       '1994', '1995', '1996', '1997', '1998', '1999', '2000', '2001', '2002',
       '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011',
       '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020',
       '2021', '2022', '2023', '2024', '2025', 'Estimates Start After'],
      dtype='object')


I will try to import the same file with a different 'sep' parameter, this time as sep='\t'

>> WEO3=pd.DataFrame(pd.read_csv('WEOOct2020all_Excel.csv',sep='\t'))

Result: import apparently successful. I check the shape of my data.

>> WEO3.describe()

Result: apparently, this time, no column is distinguished.

When I type:

>> WEO3.columns

…I get

Index(['WEO Country Code;ISO;WEO Subject Code;Country;Subject Descriptor;Subject Notes;Units;Scale;Country/Series-specific Notes;1980;1981;1982;1983;1984;1985;1986;1987;1988;1989;1990;1991;1992;1993;1994;1995;1996;1997;1998;1999;2000;2001;2002;2003;2004;2005;2006;2007;2008;2009;2010;2011;2012;2013;2014;2015;2016;2017;2018;2019;2020;2021;2022;2023;2024;2025;Estimates Start After'], dtype='object')

Now, I test with the 3rd file, the one converted through Wizard.

>> WEO4=pd.DataFrame(pd.read_csv('WEOOct2020all_Wizard.csv'))

Result: import successful.

I check the shape.

>> WEO4.describe()

Result: still just 8 rows. Something is wrong.

I do another experiment. I take the original *.XLS, and I save it as a regular Excel *.XLSX, and then I save this one as CSV.

>> WEO5=pd.DataFrame(pd.read_csv('WEOOct2020all_XLSX.csv'))

Result: parser error

I will retry with two options for the separator: sep=';' and sep='\t'. Ledzeee…

>> WEO5=pd.DataFrame(pd.read_csv('WEOOct2020all_XLSX.csv',sep=';'))

Import successful. "WEO5.describe()" yields just one column.

>> WEO6=pd.DataFrame(pd.read_csv('WEOOct2020all_XLSX.csv',sep='\t'))

…yields a successful import, yet each row comes as one long string, without separation into columns.

I check WEO5 and WEO6 with "*.index" and "*.shape".

"WEO5.index" yields "RangeIndex(start=0, stop=8777, step=1)"

"WEO6.index" yields "RangeIndex(start=0, stop=8777, step=1)"

"WEO5.shape" gives "(8777, 56)"

"WEO6.shape" gives "(8777, 1)"

Depending on the separator given as parameter in the “pd.read_csv” command, I get 56 columns or just 1 column, yet the “*.describe()” command cannot make sense of them.
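Rather than guessing, the separator can be detected from a sample of the file with the standard library's csv.Sniffer; the two sample lines below merely mimic the WEO layout:

```python
import csv
import io

import pandas as pd

# Two lines mimicking the semicolon-separated WEO export (values are made up).
sample = "WEO Country Code;ISO;WEO Subject Code\n512;AFG;NGDP_R\n"

# Let the Sniffer pick the delimiter out of the plausible candidates.
dialect = csv.Sniffer().sniff(sample, delimiters=";,\t")
print(dialect.delimiter)  # ';'

# Feed the detected separator straight into pandas:
df = pd.read_csv(io.StringIO(sample), sep=dialect.delimiter)
```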

I try the "*.describe" command, without the parentheses, as distinct from the "*.describe()" one.

I can see that structures are clearly different.

I try another trick, namely to assume separator ';' and TAB delimiter.

>> WEO7=pd.DataFrame(pd.read_csv('WEOOct2020all_XLSX.csv',sep=';',delimiter='\t'))

Result: WEO7.shape yields 8777 rows in just one column.

Maybe 'header=0'? Same thing.
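One detail worth knowing here: in pandas, 'delimiter' is merely an alias for 'sep', so passing both does not mean ‘split on either of them’; a single, correct 'sep' is enough. A minimal in-memory demonstration:

```python
import io

import pandas as pd

sample = "a;b;c\n1;2;3\n4;5;6\n"

# One separator parameter is enough; 'delimiter' would just alias 'sep'.
df = pd.read_csv(io.StringIO(sample), sep=";")
print(df.shape)  # (2, 3)
```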

The provisional moral of the fairy tale is that ‘Data cleansing’ means very largely making sense of the exact shape and syntax of CSV files. Depending on the parametrisation of separators and delimiters, different Data Frames are obtained.

[1] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[2] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213.

I re-run my executable script

I am thinking (again) about the phenomenon of collective intelligence, this time in terms of the behavioural reinforcement that we give to each other, and the role that cities and intelligent digital clouds can play in delivering such reinforcement. As is usually the case with science, there is a basic question to ask: ‘What’s the point of all the fuss with that nice theory of yours, Mr Wasniewski? Any good for anything?’.

Good question. My tentative answer is that studying human societies as collectively intelligent structures is a phenomenology which allows some major methodological developments that are, I think, missing from other methodologies in social sciences. First of all, it allows a completely clean slate at the starting point of research, as regards ethics and moral orientations, whilst it almost inevitably leads to defining ethical values through empirical research. This was my first big ‘Oh, f**k!’ with that method: I realized that ethical values can be reliably studied as objectively pursued outcomes at the collective level, and that study can be robustly backed with maths and empirics.

I have that thing with my science, and, as a matter of fact, with other people’s science too: I am an empiricist. I like prodding my assumptions and making them lose some fat, so that they become lighter. I like having as much of a clean slate at the starting point of my research as possible. I believe that one single assumption, namely that human social structures are collectively intelligent structures, almost automatically transforms all the other assumptions into hypotheses to investigate. Still, I need to go, very carefully, through that one single Mother Of All Assumptions, i.e. about us, humans as a society, being a collectively intelligent structure, in order to nail down, and possibly kick out, any logical shortcut.

Intelligent structures learn by producing many alternative versions of themselves and testing those versions for fitness in coping with a vector of constraints. There are three claims hidden in this single claim: learning, production of different versions, and testing for fitness. Do human social structures learn, like at all? Well, we have that thing called culture, and culture changes. There is observable change in lifestyles, aesthetic tastes, fashions, institutions and technologies. This is learning. Cool. One down, two still standing.

Do human social structures produce many different versions of themselves? Here, we enter the subtleties of distinction between different versions of a structure, on the one hand, and different structures, on the other hand. A structure remains the same, and just makes different versions of itself, as long as it stays structurally coherent. When it loses structural coherence, it turns into a different structure. How can I know that a structure keeps its s**t together, i.e. that it stays internally coherent? That’s a tough question, and I know by experience that in the presence of tough questions, it is essential to keep it simple. One of the simplest facts about any structure is that it is made of parts. As long as all the initial parts are still there, I can assume they hold together somehow. In other words, as long as whatever I observe about social reality can be represented as the same complex set, with the same components inside, I can assume this is one and the same structure just making copies of itself. Still, this question remains a tough one, especially as any intelligent structure should be smart enough to morph into another intelligent structure when the time is right.

The time is right when the old structure is no longer able to cope with the vector of constraints, and so I arrive at the third component question: how can I know there is adaptation to constraints? How can I know there are constraints for assessing fitness? In a very broad sense, I can see constraints when I see error, and correction thereof, in someone’s behaviour. In other words, when I can see someone sort of making two steps forward and one step back, correcting their course etc., this is a sign of adaptation to constraints. Unconstrained change is linear or exponential, whilst constrained change always shows signs of bumping against some kind of wall. Here comes a caveat as regards using artificial neural networks as simulators of collective human intelligence: they are any good only when they have constraints, and, consequently, when they make errors. An artificial neural network is no good at simulating unconstrained change. When I explore the possibility of simulating collective human intelligence with artificial neural networks, the exercise has marks of a pleonasm. I can use AI as simulator only when the simulation involves constrained adaptation.
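To keep that claim honest, here is a minimal, purely illustrative sketch (toy numbers, a single linear neuron, all of it my own assumption rather than anything from my actual research) of why a neural network needs a constraint in order to learn at all: the update rule feeds on error, and error exists only relative to a target.

```python
import numpy as np

# Minimal illustration: a linear neuron learns only because a constraint
# (a target it keeps missing) generates an error to correct. The data and
# the 'true' weights are arbitrary toy assumptions.
rng = np.random.default_rng(0)
X = rng.random((20, 3))                    # 20 observations, 3 features
target = X @ np.array([0.2, 0.5, 0.3])     # the vector of constraints

w = np.zeros(3)
for _ in range(200):
    error = X @ w - target                 # bumping against the wall
    w -= 0.1 * X.T @ error / len(X)        # two steps forward, one step back

# With the constraint in place, w drifts toward the structure of the target.
# If error were identically zero (no constraint), w would never move at all.
```

Strip the target away, and the gradient is zero everywhere: unconstrained change is exactly what this kind of machine cannot simulate.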

F**k! I have gone philosophical in those paragraphs. I can feel a part of my mind gently disconnecting from real life, and this is time to do something in order to stay close to said real life. Here is a topic, which I can treat as teaching material for my students, and, at the same time, make those general concepts bounce a bit around, inside my head, just to see what happens. I make the following claim: ‘Markets are manifestations of collective intelligence in human societies’. In science, this is a working hypothesis. It is called ‘working’ because it is not proven yet, and thus it has to earn its own living, so to say. This is why it has to work.

I pass in review the same bullet points: learning, for one, production of many alternative versions in a structure as opposed to creating new structures, for two, and the presence of constraints as the third component. Do markets manifest collective learning? Ledzzeee… Markets display fashions and trends. Markets adapt to lifestyles, and vice versa. Markets are very largely connected to technological change and facilitate the occurrence thereof. Yes, they learn.

How can I say whether a market stays the same structure and just experiments with many alternative versions thereof, or, conversely, whether it turns into another structure? It is time to go back to the fundamental concepts of microeconomics, and assess (once more), what makes a market structure. A market structure is the mechanism of setting transactional prices. When I don’t know s**t about said mechanism, I just observe prices and I can see two alternative pictures. Picture one is that of very similar prices, sort of clustered in the same, narrow interval. This is a market with equilibrium price, which translates into a local market equilibrium. Picture two shows noticeably disparate prices in what I initially perceived as the same category of goods. There is no equilibrium price in that case, and speaking more broadly, there is no local equilibrium in that market.
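As a teaching aside, the two pictures can be told apart with a very crude statistic: the coefficient of variation of observed prices. This is a hypothetical sketch, with an arbitrary 10% threshold of my own invention, not an actual test of market structure.

```python
import statistics

def looks_like_equilibrium(prices, threshold=0.10):
    """Return True when prices cluster tightly around their mean,
    i.e. when the coefficient of variation stays under the threshold."""
    mean = statistics.mean(prices)
    cv = statistics.pstdev(prices) / mean  # coefficient of variation
    return cv < threshold

clustered = [99.0, 100.0, 101.0, 100.5, 99.5]  # picture one: equilibrium price
dispersed = [60.0, 95.0, 140.0, 80.0, 120.0]   # picture two: no local equilibrium
```

Picture one passes the test, picture two does not; the interesting cases, as I argue below, are markets that pass the price test without having the textbook institutional characteristics.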

Markets with local equilibriums are assumed to be perfectly competitive or very close thereto. They are supposed to serve for transacting in goods so similar that customers perceive them as identical, and technologies used for producing those goods don’t differ sufficiently to create any kind of competitive advantage (homogeneity of supply), for one. Markets with local equilibriums require the customers to be so similar to each other in their tastes and purchasing patterns that, on the whole, they can be assumed identical (homogeneity of demand), for two. Customers are supposed to be perfectly informed about all the deals available in the market (perfect information). Oh, yes, the last one: no barriers to entry or exit. A perfectly competitive market is supposed to offer virtually no minimum investment required for suppliers to enter the game, and no sunk costs in the case of exit.  

Here is that thing: many markets present the alignment of prices typical for a state of local equilibrium, and yet their institutional characteristics – such as technologies, the diversity of goods offered, capital requirements and whatnot – do not match the textbook description of a perfectly competitive market. In other words, many markets form local equilibriums, thus they display equilibrium prices, without having the required institutional characteristics for that, at least in theory. In still other words, they manifest the alignment of prices typical for one type of market structure, whilst all the other characteristics are typical for another type of market structure.

Therefore, the completely justified ‘What the hell…?’ question arises. What is a market structure, at the end of the day? What is a structure, in general?

I go down another avenue now. Some time ago, I signalled on my blog that I am learning programming in Python, or, as I should rather say, I make one more attempt at nailing it down. Programming teaches me a lot about the basic logic of what I do, including that whole theory of collective intelligence. Anyway, I started to keep a programming log, and here below, I paste the current entry, from November 27th, 2020.

 Tasks to practice:

  1. reading a well-structured CSV,
  2. plotting,
  3. saving and retrieving a Jupyter Notebook in JupyterLab.

I am practicing with Penn World Tables 9.1. I take the version without empty cells, and I transform it into CSV.

I create a new notebook on JupyterLab. I name it ‘Practice November 27th 2020’.

  • Path: demo/Practice November 27th 2020.ipynb

I upload the CSV version of Penn Tables 9.1 with no empty cells.

Shareable link:

Path: demo/PWT 9_1 no empty cells.csv

Download path:

I import libraries:

import numpy as np

import pandas as pd

import matplotlib.pyplot as plt

import os

I check my directory:

>> os.getcwd()

result: '/home/jovyan/demo'

>> os.listdir()

result: ['Practice November 27th 2020.ipynb', 'PWT 9_1 no empty cells.csv']

>> PWT9_1=pd.DataFrame(pd.read_csv('PWT 9_1 no empty cells.csv',header=0))

  File "<ipython-input-5-32375ff59964>", line 1
    PWT9_1=pd.DataFrame(pd.read_csv('PWT 9_1 no empty cells.csv',header=0))

SyntaxError: invalid character in identifier

# Note from hindsight: this error usually points at a non-ASCII character in the code itself, e.g. curly quotation marks pasted from a text editor, rather than at the file name.

I rename the file on Jupyter into 'PWT 9w1 no empty cells.csv'.

>> os.listdir()

result: ['Practice November 27th 2020.ipynb', 'PWT 9w1 no empty cells.csv']

>> PWT9w1=pd.DataFrame(pd.read_csv('PWT 9w1 no empty cells.csv',header=0))

Result: imported successfully

>> PWT9w1.describe()

Result: descriptive statistics

# I want to list columns (variables) in my file

>> PWT9w1.columns


Index(['country', 'year', 'rgdpe', 'rgdpo', 'pop', 'emp', 'emp / pop', 'avh',
       'hc', 'ccon', 'cda', 'cgdpe', 'cgdpo', 'cn', 'ck', 'ctfp', 'cwtfp',
       'rgdpna', 'rconna', 'rdana', 'rnna', 'rkna', 'rtfpna', 'rwtfpna',
       'labsh', 'irr', 'delta', 'xr', 'pl_con', 'pl_da', 'pl_gdpo', 'csh_c',
       'csh_i', 'csh_g', 'csh_x', 'csh_m', 'csh_r', 'pl_c', 'pl_i', 'pl_g',
       'pl_x', 'pl_m', 'pl_n', 'pl_k'],
      dtype='object')


>> PWT9w1.columns()

TypeError                                 Traceback (most recent call last)
<ipython-input-11-38dfd3da71de> in <module>
----> 1 PWT9w1.columns()

TypeError: 'Index' object is not callable

# I try plotting

>> plt.plot(df.index, df['rnna'])


I get a long list of rows like: ‘<matplotlib.lines.Line2D at 0x7fc59d899c10>’, and a plot which is visibly not OK (looks like a fan).

# I want to separate one column from PWT9w1 as a separate series, and then plot it. Maybe it is going to work.

>> RNNA=pd.DataFrame(PWT9w1['rnna'])

Result: apparently successful.

# I try to plot RNNA

>> RNNA.plot()


<matplotlib.axes._subplots.AxesSubplot at 0x7fc55e7b9e10> + a basic graph. Good.

# I try to extract a few single series from PWT9w1 and to plot them. Let’s go for AVH, PL_I and CWTFP.

>> AVH=pd.DataFrame(PWT9w1['avh'])

>> PL_I=pd.DataFrame(PWT9w1['pl_i'])

>> CWTFP=pd.DataFrame(PWT9w1['cwtfp'])

>> AVH.plot()

>> PL_I.plot()

>> CWTFP.plot()


It worked. I have basic plots.

# It is 8:20 a.m. I go to make myself a coffee. I will quit JupyterLab for a moment. I saved today's notebook on the server, and I will see how I can open it. Just in case, I make a PDF copy, and a Python copy on my disk.

I cannot save into PDF. An error occurs. I will have to sort it out. I made an *.ipynb copy on my disk.

demo/Practice November 27th 2020.ipynb

# It is 8:40 a.m. I am logging back into JupyterLab. I am trying to open today's notebook from its path. Does not seem to work. I am uploading my *.ipynb copy. This worked. I know now: I upload the *.ipynb script from my own location and then just double-click on it. I needed to re-upload my CSV file 'PWT 9w1 no empty cells.csv'.

# I check if my re-uploaded CSV file is fully accessible. I discover that I need to re-create the whole algorithm. In other words: when I upload on JupyterLab a *.ipynb script from my disk, I need to re-run all the operations. My first idea is to re-run each executable cell in the uploaded script. That worked. Question: how to automatise it? Probably by making a Python script all in one piece, uploading my CSV data source first, and then running the whole script.
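The 'all in one piece' idea could look more or less like this. The file name and the choice of series follow today's log; the function name, the headless backend and the output file name are my own assumptions, a sketch rather than tested production code.

```python
import os
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, so the script also runs without a display
import matplotlib.pyplot as plt

CSV_NAME = "PWT 9w1 no empty cells.csv"

def run_session(csv_path):
    """Re-run the whole session in one go: load the CSV, plot three series,
    save the figure. Nothing has to be re-executed cell by cell."""
    pwt = pd.read_csv(csv_path, header=0)
    fig, axes = plt.subplots(3, 1, figsize=(8, 10))
    for ax, col in zip(axes, ["avh", "pl_i", "cwtfp"]):
        pwt[col].plot(ax=ax, title=col)
    fig.tight_layout()
    fig.savefig("practice_plots.png")
    return pwt

if os.path.exists(CSV_NAME):
    df = run_session(CSV_NAME)
```

Upload the CSV first, then run the script top to bottom, and the whole notebook session is reproduced in one step.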

I like being a mad scientist

I like being a mad scientist. Am I a mad scientist? A tiny bit, yes, ‘cause I do research on things just because I feel like it. Mind you, the mad scientist I like being happens to be practical. Those rabbit holes I dive into prove to have interesting outcomes in real life.

I feel like writing, and therefore thinking in an articulate way, about two things I do in parallel: science and investment. I have just realized these two realms of activity tend to merge and overlap in me. When I do science, I tend to think like an investor, or a gardener. I invest my personal energy in ideas which I think have potential for growth. On the other hand, I invest in the stock market with a strong dose of curiosity. Those companies, and the investment positions I can open therein, are like animals which I observe, try to figure out how not to get killed by them, or by predators that hunt them, and I try to domesticate those beasts.

The scientific thing I am working on is the application of artificial intelligence to studying collective intelligence in human societies. The thing I am working on sort of at the crest between science and investment is fundraising for scientific projects (my new job at the university).

The project aims at defining theoretical and empirical fundamentals for using intelligent digital clouds, i.e. large datasets combined with artificial neural networks, in the field of remote digital diagnostics and remote digital care, in medical sciences and medical engineering. That general purpose translates into science strictly speaking, and into the prospective development of medical technologies.

There is observable growth in the percentage of population using various forms of digital remote diagnostics and healthcare. Yet, that growth is very uneven across different social groups, which suggests an early, pre-popular stage of development in those technologies (Mahajan et al. 2020[i]). Other research confirms that supposition, as judging by the very disparate results obtained with those technologies, in terms of diagnostic and therapeutic effectiveness (Cheng et al. 2020[ii]; Wong et al. 2020[iii]). There are known solutions where an intelligent digital cloud allows transforming the patient’s place of stay (home, apartment) into the local substitute of a hospital bed, which opens interesting possibilities as regards medical care for patients with significantly reduced mobility, e.g. geriatric patients (Ben Hassen et al. 2020[iv]). Already around 2015, creative applications of medical imagery appeared, where the camera of a person’s smartphone served for early detection of skin cancer (Bliznuks et al. 2017[v]). The connection between distance diagnostics and the acquisition and processing of images comes as one of the most interesting and challenging innovations to make in the here-discussed field of technology (Marwan et al. 2018[vi]). The experience of the COVID-19 pandemic has already shown the potential of digital intelligent clouds in assisting national healthcare systems, especially in optimising and providing flexibility to the use of resources, both material and human (Alashhab et al. 2020[vii]). Yet, the same pandemic experience has shown the depth of social disparities as regards actual access to digital technologies supported by intelligent clouds (Whitelaw et al. 2020[viii]). Intelligent digital clouds enter into learning-generative interactions with the professionals of healthcare.
There is observable behavioural modification, for example, in students of healthcare who train with such technologies from the very beginning of their education (Brown Wilson et al. 2020[ix]). That phenomenon of behavioural change requires rethinking from scratch, with the development of each individual technology, the ethical and legal issues relative to interactions between users, on the one hand, and system operators, on the other hand (Gooding 2019[x]).

Against that general background, the present project focuses on studying the phenomenon of tacit coordination among the users of digital technologies in remote medical diagnostics and remote medical care. Tacit coordination is essential as regards the well-founded application of intelligent digital clouds to support and enhance these technologies. Intelligent digital clouds are intelligent structures, i.e. they learn by producing many alternative versions of themselves and testing those versions for fitness in coping with a vector of external constraints. It is important to explore the extent to which, and the way in which, populations of users behave similarly, i.e. as collectively intelligent structures. The deep theoretical meaning of that exploration is the extent to which the intelligent structure of a digital cloud really maps and represents the collectively intelligent structure of the users’ population.

The scientific method used in the project explores the main working hypothesis that populations of actual and/or prospective patients, in their own health-related behaviour, and in their relations with the healthcare systems, are collectively intelligent structures, with tacit coordination. In practical terms, that hypothesis means that any intelligent digital cloud in the domain of remote medical care should assume collectively intelligent, thus more than just individual, behavioural change on the part of users. Collectively intelligent behavioural change in a population, marked by tacit coordination, is a long-term, evolutionary process of adaptive walk in a rugged landscape (Kauffman & Levin 1987[xi]; Nahum et al. 2015[xii]). Therefore, it is something deeper and more durable than fashions and styles. It is the deep, underlying mechanism of social change accompanying the use of digital intelligent clouds in medical engineering.

The scientific method used in this project aims at exploring and checking the above-stated working hypothesis by creating a large and differentiated dataset of health-related data, and processing that dataset in an intelligent digital cloud, in two distinct phases. The first phase consists in processing a first sample of data with a relatively simple artificial neural network, in order to discover its underlying orientations and its mechanisms of collective learning. The second phase allows an intelligent digital cloud to respond adaptively to users’ behaviour, i.e. to produce intelligent interaction with them. The first phase serves to understand the process of adaptation observable in the second phase. Both phases are explained in more detail below.

The tests of, respectively, orientation and mode of learning, in the first phase of empirical research, aim at defining the vector of collectively pursued social outcomes in the population studied. The initially collected empirical dataset is transformed, with the use of an artificial neural network, into as many representations as there are variables in the set, with each representation being oriented on a different variable as its output (with the remaining ones considered as instrumental input). Each such transformation of the initial set can be tested for its mathematical similarity therewith (e.g. for Euclidean distance between the vectors of expected mean values). Transformations displaying relatively the greatest similarity to the source dataset are assumed to be the most representative of the collectively intelligent structure in the population studied, and, consequently, their output variables can be assumed to represent collectively pursued social outcomes in that collective intelligence (see, for example: Wasniewski 2020[xiii]). Modes of learning in that dataset can be discovered by creating a shadow vector of probabilities (representing, for example, a finite set of social roles endorsed with given probabilities by members of the population), and a shadow process that introduces random disturbance, akin to the theory of Black Swans (Taleb 2007[xiv]; Taleb & Blyth 2011[xv]). The so-created shadow structure is subsequently transformed with an artificial neural network into as many alternative versions as there are variables in the source empirical dataset, each version taking a different variable from the set as its pre-set output. Three different modes of learning can be observed, and assigned to particular variables: a) cyclical adjustment without a clear end-state, b) finite optimisation with a defined end-state, and c) structural disintegration with growing amplitude of oscillation around central states.
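By way of a hypothetical illustration only, with random numbers standing in for real data and a single neuron standing in for a full network, the orientation test can be coded along these lines. Sizes, the learning rate and the number of epochs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
X_full = rng.random((300, 5))  # stand-in for 300 'country-year' observations, 5 variables

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def transform(data, output_col, epochs=50, lr=0.05):
    """Treat output_col as the pre-set output, the remaining columns as
    instrumental input; the fitted values replace that column, producing
    one alternative 'clone' of the dataset."""
    X = np.delete(data, output_col, axis=1)
    y = data[:, output_col]
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        pred = sigmoid(X @ w)
        grad = X.T @ ((pred - y) * pred * (1.0 - pred)) / len(y)
        w -= lr * grad
    clone = data.copy()
    clone[:, output_col] = sigmoid(X @ w)
    return clone

# Euclidean distance between the vectors of mean values: clone vs. source.
source_means = X_full.mean(axis=0)
distances = {
    col: float(np.linalg.norm(transform(X_full, col).mean(axis=0) - source_means))
    for col in range(X_full.shape[1])
}
# The clone closest to the source points at the candidate 'collectively
# pursued outcome' among the variables.
best = min(distances, key=distances.get)
```

With the real dataset in place of `X_full`, and a proper multi-layer network in place of the single neuron, `best` is the variable whose transformation is the most representative of the collectively intelligent structure.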

The above-summarised first phase of research involves the use of two basic digital tools, i.e. an online functionality to collect empirical data from and about patients, and an artificial neural network to process it. There comes an important aspect of that first phase of research, i.e. the actual collectability of the corresponding data, and the capacity to process it. It can be assumed that comprehensive medical care involves the collection of both strictly health-related data (e.g. blood pressure, blood sugar etc.), and peripheral data of various kinds (environmental, behavioural). The complexity of data collected in that phase can be additionally enhanced by including imagery such as pictures taken with smartphones (e.g. skin, facial symmetry etc.). In that respect, the first phase of research aims at testing the actual possibility and reliability of collecting various types of data. Phenomena such as outliers or fake data can be detected then.

Once the first phase is finished and expressed in the form of theoretical conclusions, the second phase of research is triggered. An intelligent digital cloud is created, with the capacity of intelligent adaptation to users’ behaviour. A very basic example of such adaptation are behavioural reinforcements. The cloud can generate simple messages of praise for health-functional behaviour (positive reinforcements), or, conversely, warning messages in the case of health-dysfunctional behaviour (negative reinforcements). More elaborate forms of intelligent adaptation are possible to implement, e.g. a Twitter-like reinforcement to create trending information, or a TikTok-like reinforcement to stay in the loop of communication in the cloud. This phase aims specifically at defining the actually workable scope and strength of possible behavioural reinforcements which a digital functionality in the domain of healthcare could use vis-à-vis its end users. Legal and ethical implications thereof are studied as one of the theoretical outcomes of that second phase.

I feel like generalizing a bit my last few updates, and to develop on the general hypothesis of collectively intelligent, human social structures. In order to consider any social structure as manifestation of collective intelligence, I need to place intelligence in a specific empirical context. I need an otherwise exogenous environment, which the social structure has to adapt to. Empirical study of collective intelligence, such as I have been doing it, and, as a matter of fact, the only one I know how to do, consists in studying adaptive effort in human social structures. 

[i] Shiwani Mahajan, Yuan Lu, Erica S. Spatz, Khurram Nasir, Harlan M. Krumholz, Trends and Predictors of Use of Digital Health Technology in the United States, The American Journal of Medicine, 2020, ISSN 0002-9343

[ii] Lei Cheng, Mingxia Duan, Xiaorong Mao, Youhong Ge, Yanqing Wang, Haiying Huang, The effect of digital health technologies on managing symptoms across pediatric cancer continuum: A systematic review, International Journal of Nursing Sciences, 2020, ISSN 2352-0132

[iii] Charlene A. Wong, Farrah Madanay, Elizabeth M. Ozer, Sion K. Harris, Megan Moore, Samuel O. Master, Megan Moreno, Elissa R. Weitzman, Digital Health Technology to Enhance Adolescent and Young Adult Clinical Preventive Services: Affordances and Challenges, Journal of Adolescent Health, Volume 67, Issue 2, Supplement, 2020, Pages S24-S33, ISSN 1054-139X

[iv] Hassen, H. B., Ayari, N., & Hamdi, B. (2020). A home hospitalization system based on the Internet of things, Fog computing and cloud computing. Informatics in Medicine Unlocked, 100368

[v] Bliznuks, D., Bolocko, K., Sisojevs, A., & Ayub, K. (2017). Towards the Scalable Cloud Platform for Non-Invasive Skin Cancer Diagnostics. Procedia Computer Science, 104, 468-476

[vi] Marwan, M., Kartit, A., & Ouahmane, H. (2018). Security enhancement in healthcare cloud using machine learning. Procedia Computer Science, 127, 388-397.

[vii] Alashhab, Z. R., Anbar, M., Singh, M. M., Leau, Y. B., Al-Sai, Z. A., & Alhayja’a, S. A. (2020). Impact of Coronavirus Pandemic Crisis on Technologies and Cloud Computing Applications. Journal of Electronic Science and Technology, 100059.

[viii] Whitelaw, S., Mamas, M. A., Topol, E., & Van Spall, H. G. (2020). Applications of digital technology in COVID-19 pandemic planning and response. The Lancet Digital Health.

[ix] Christine Brown Wilson, Christine Slade, Wai Yee Amy Wong, Ann Peacock, Health care students’ experience of using digital technology in patient care: A scoping review of the literature, Nurse Education Today, Volume 95, 2020, 104580, ISSN 0260-6917

[x] Piers Gooding, Mapping the rise of digital mental health technologies: Emerging issues for law and society, International Journal of Law and Psychiatry, Volume 67, 2019, 101498, ISSN 0160-2527

[xi] Kauffman, S., & Levin, S. (1987). Towards a general theory of adaptive walks on rugged landscapes. Journal of theoretical Biology, 128(1), 11-45

[xii] Nahum, J. R., Godfrey-Smith, P., Harding, B. N., Marcus, J. H., Carlson-Stevermer, J., & Kerr, B. (2015). A tortoise–hare pattern seen in adapting structured and unstructured populations suggests a rugged fitness landscape in bacteria. Proceedings of the National Academy of Sciences, 112(24), 7530-7535

[xiii] Wasniewski, K. (2020). Energy efficiency as manifestation of collective intelligence in human societies. Energy, 191, 116500.

[xiv] Taleb, N. N. (2007). The black swan: The impact of the highly improbable (Vol. 2). Random House

[xv] Taleb, N. N., & Blyth, M. (2011). The black swan of Cairo: How suppressing volatility makes the world less predictable and more dangerous. Foreign Affairs, 33-39

Checkpoint for business

I am changing the path of my writing, ‘cause real life knocks at my door, and it goes ‘Hey, scientist, you economist, right? Good, ‘cause there is some good stuff, I mean, ideas for business. That’s economics, right? Just sort of real stuff, OK?’. Sure. I can go with real things, but first, I explain. At my university, I have recently taken on the job of coordinating research projects and finding some financing for them. One of the first things I did, right after November 1st, was to send around a reminder that we had 12 days left to apply, with the Ministry of Science and Higher Education, for relatively small grants, in a call titled ‘Students make innovation’. Honestly, I was expecting to have 1–2 applications max in response. Yet, life can make surprises. I received 7 innovative ideas in response, and 5 of them look like good material for business concepts and for serious development. I am taking on giving them a first prod, in terms of business planning. Interestingly, those ideas are all related to medical technologies, thus something I have been both investing a lot in, during 2020, and thinking a lot about, as a possible path of substantial technological change.

I am progressively wrapping my mind around ideas and projects formulated by those students, and, walking down the same intellectual avenue, I am making sense of making money on and around science. I am fully appreciating the value of real-life experience. I have been doing research and writing about technological change for years. Until recently, I had that strange sort of complex logical oxymoron in my mind, where I had the impression of both understanding technological change, and missing a fundamental aspect of it. Now, I think I start to understand that missing part: it is the microeconomic mechanism of innovation.

I have collected those 5 ideas from ambitious students at the Faculty of Medicine of my university:

>> Idea 1: An AI-based app, with a chatbot, which facilitates early diagnosis of cardio-vascular diseases

>> Idea 2: Similar thing, i.e. a mobile app, but oriented on early diagnosis and monitoring of urinary incontinence in women.

>> Idea 3: Technology for early diagnosis of Parkinson’s disease, through the observation of speech and motor disturbance.

>> Idea 4: Intelligent cloud to store, study and possibly find something smart about two types of data: basic health data (blood-work etc.), and environmental factors (pollution, climate etc.).

>> Idea 5: Something similar to Idea 4, i.e. an intelligent cloud with medical edge, but oriented on storing and studying data from large cohorts of patients infected with Sars-Cov-2. 

As I look at those 5 ideas, a surprisingly simple and basic association of ideas comes to my mind: hierarchy of interest and the role of overarching technologies. It is something I have never thought seriously about: when we face many alternative ideas for new technologies, almost intuitively we hierarchize them. Some of them seem more interesting, others less so. I am trying to dig out of my own mind the criteria I use, and here they are: I hierarchize by the expected lifecycle of the technology, and by the breadth of the technological platform involved. In other words, I like big, solid, durable stuff. I am intuitively looking for innovations which offer a relatively long lifecycle in the corresponding technology, and the technology involved is sort of two-level, with a broad base and many specific applicational developments built upon that base.

Why do I take this specific approach? One step further down into my mind, I discover the willingness to have some sort of broad base of business and scientific points of attachment when I start business planning. I want some kind of horizon to choose my exact target on. The common technological base among those 5 ideas is some kind of intelligent digital cloud, where artificial intelligence learns on the data that flows in. The common scientific base is the collection of health-related data, including behavioural aspects (e.g. sleep, diet, exercise, stress management).

The financial context which I am operating in is complex. It is made of public financial grants for strictly speaking scientific research, other public financing for projects more oriented on research and development in consortiums made of universities and business entities, still a different stream of financing for business entities alone, and finally private capital to look for once the technology is ripe enough for being marketed.

I am operating from an academic position. Intuitively, I guess that the more valuable science academic people bring to their common table with businesspeople and government people, the better position those academics will have in any future joint ventures. Hence, we should max out on useful, functional science to back those ideas. I am trying to understand what that science should consist in. An intelligent digital cloud can yield mind-blowing findings. I know that for a fact from my own research. Yet, what I know too is that I need very fundamental science, something at the frontier of logic, philosophy, mathematics, and of the phenomenology pertinent to the scientific research at hand, in order to understand and use meaningfully whatever the intelligent digital cloud spits back out, after being fed with data. I have already gone once through that process of understanding, as I have been working on the application of artificial neural networks to the simulation of collective intelligence in human societies. I had to coin up a theory of intelligent structure, applicable to the problem at hand. I believe that any application of intelligent digital cloud requires assuming that whatever we investigate with that cloud is an intelligent structure, i.e. a structure which learns by producing many alternative versions of itself, and testing them for their fitness to optimize a given desired outcome.  

With those medical ideas, I (we?) need to figure out what the intelligent structure in action is, how it can possibly produce many alternative versions of itself, and how those alternative thingies can be tested for fitness. What we have in a medically edged digital cloud is data about a population of people. The desired outcome we look for is health, quite simply. I said ‘simply’? No, it was a mistake. It is health, in all complexity. Those apps our students want to develop are supposed to pull someone out of the crowd, someone with early symptoms which they do not identify as relevant. In a next step, some kind of dialogue is proposed to such a person, sort of let’s dig a bit more into those symptoms, let’s try something simple to treat them etc. The vector of health in that population is made, roughly speaking, of three sub-vectors: preventive health (e.g. exercise, sleep, stop eating crap food), effectiveness of early medical intervention (e.g. c’mon men, if you are 30 and can’t have erection, you are bound to concoct some cardio-vascular s**t), and finally effectiveness of advanced medicine, applied when the former two haven’t worked.

I can see at least one salient, scientific hurdle to jump over: that outcome vector of health. In my own research, I found out that artificial neural networks can give empirical evidence as for what outcomes we are really actually after, as a collectively intelligent structure. That’s my first big idea as regards those digital medical solutions: we collect medical and behavioural data in the cloud, we assume that data represents experimental learning of a collectively intelligent social structure, and we make the cloud discover the phenomena (variables) which the structure actually optimizes.

My own experience with that method is that societies which I studied optimize outcomes which look almost too simplistic in the fancy realm of social sciences, such as the average number of hours worked per person per year, the average amount of human capital per person, measured as years of education before entering the job market, or the price index in exports, thus the average price which countries sell their exports at. In general, societies which I studied tend to optimize structural proportions, measurable as coefficients along the lines of ‘amount of thingy one divided by the amount of thingy two’.

Checkpoint for business. Supposing that our research team, at the Andrzej Frycz – Modrzewski Krakow University, comes up with robust empirical results of that type, i.e. when we take a million random humans and their broadly spoken health, and we assume they are collectively intelligent (I mean, beyond Facebook), then their collectively shared experimental learning of the stuff called ‘life’ makes them optimize health-related behavioural patterns A, B, and C. How can those findings be used in the form of marketable digital technologies? If I know the behavioural patterns someone tries to optimize, I can break those patterns down into small components and figure out a way to influence behaviour through them. It is a common technique in marketing. If I know someone’s lifestyle, and the values that come with it, I can artfully include into that pattern the technology I am marketing. In this specific case, it could be done ethically and for a good purpose, for a change. In that context, my mind keeps returning to that barely marked trend of rising mortality in adult males in high-income countries, since 2016. WTF? We’ll live, we’ll see.

The understanding of how collective human intelligence goes after health could be, therefore, the kind of scientific bacon our university could bring to the table when starting serious consortial projects with business partners, for the development of intelligent digital technologies in healthcare. Let’s move one step forward. As I have been using artificial neural networks in my research on what I call, and maybe overstate as, collective human intelligence, I have been running those experiments where I take a handful of behavioural patterns, I assign them probabilities of happening (sort of how many folks out of 10 000 will endorse those patterns), and I treat those probabilities as instrumental input in the optimization of pre-defined social outcomes. Before I forget: I add random disturbance to that form of learning, in the lines of the Black Swan theory (Taleb 2007[1]; Taleb & Blyth 2011[2]).
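A toy version of one such experimental run could look as follows. The outcome function, the weights, and the Cauchy-distributed shocks are my illustrative assumptions here, chosen just to show the mechanics of probabilities-as-input plus random disturbance, not the actual specification of my experiments:

```python
import numpy as np

def collective_learning(desired, weights, rounds=500, swan_prob=0.05, lr=0.05, seed=1):
    """One experimental run: probabilities of endorsing a handful of behavioural
    patterns are nudged, round after round, so that a simple outcome function
    approaches its desired value; with probability swan_prob per round, a
    fat-tailed (Cauchy) Black-Swan shock knocks the probabilities sideways."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(size=len(weights))          # endorsement probabilities, per pattern
    errors = []
    for _ in range(rounds):
        outcome = np.tanh(weights @ p)          # outcome produced by the current mix
        err = desired - outcome
        p += lr * err * weights * (1.0 - outcome**2)   # gradient-style collective nudge
        if rng.uniform() < swan_prob:
            p += 0.02 * rng.standard_cauchy(len(p))    # rare, occasionally huge shock
        p = np.clip(p, 0.0, 1.0)                # probabilities stay in [0, 1]
        errors.append(abs(err))
    return np.array(errors)

weights = np.array([0.8, 0.3, 0.5])   # how strongly each pattern moves the outcome
calm = collective_learning(0.6, weights, swan_prob=0.0)    # no Black Swans
shaken = collective_learning(0.6, weights, swan_prob=0.05) # s**t happens
```

Comparing the error series of the calm run against the shaken one is precisely the kind of contrast those experiments are about: how much of the learning survives the random disturbance.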

I nailed down three patterns of collective learning in the presence of randomly happening s**t: recurrent, optimizing, and panic mode. The recurrent pattern of collective learning, which I tentatively expect to be the most powerful, is essentially a cycle with recurrent amplitude of error. We face a challenge, we go astray, we run around like headless chickens for a while, and then we figure s**t out, we progressively settle for solutions, and then the cycle repeats. It is like everlasting learning, without any clear endgame. The optimizing pattern is something I observed when making my collective intelligence optimize something like the headcount of population, or the GDP. There is a clear phase of ‘WTF!’ (error in optimization goes haywire), which, passing through a somewhat milder ‘WTH?’, ends up in a calm phase of ‘what works?’, with very little residual error.

The panic mode is different from the other two. There is no visible learning in the strict sense of the term, i.e. no visible narrowing down of error in what the network estimates as its desired outcome. On the contrary, that type of network consistently goes into headless chicken mode, and it becomes more and more headless with each consecutive hundred experimental rounds, so to say. It happens when I make my network go after some very specific socio-economic outcomes, like the price index in capital goods (i.e. fixed assets) or Total Factor Productivity.
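Telling the three modes apart can be done, crudely, from the error trajectory alone. Here is a minimal sketch; the 1.5 and 0.5 thresholds are arbitrary assumptions of mine, not calibrated on the actual experiments:

```python
import numpy as np

def classify_learning(errors, window=100):
    """Label a run by comparing mean absolute error early vs. late in the run."""
    early = float(np.mean(np.abs(errors[:window])))
    late = float(np.mean(np.abs(errors[-window:])))
    if late > 1.5 * early:
        return "panic"        # error keeps blowing up: headless-chicken mode
    if late < 0.5 * early:
        return "optimizing"   # 'WTF!' through 'WTH?' down to a calm 'what works?'
    return "recurrent"        # amplitude of error keeps cycling, no endgame

# Three synthetic error trajectories, one per mode.
t = np.linspace(0.0, 1.0, 500)
settling = np.exp(-5.0 * t)                  # error dying down
exploding = np.exp(3.0 * t)                  # error going haywire
cycling = 1.0 + np.sin(20.0 * np.pi * t)     # error amplitude going round in circles
```

Nothing more than a rule of thumb, but it is enough to sort a batch of experimental runs into the three buckets automatically.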

Checkpoint for business, once again. That particular thing, about Black Swans randomly disturbing people in their endorsement of behavioural patterns: what business value does it have in a digital cloud? I suppose there are fields of applied medical sciences, for example epidemiology, or the management of healthcare systems, where it pays to know in advance which aspects of our health-related behaviour are the most prone to deep destabilization in the presence of exogenous stressors (e.g. an epidemic, or the president of our country trending on TikTok). It could also pay off to know which collectively pursued outcomes act as stabilizers. If another pandemic breaks out, for example, which social activities and social roles should keep going, at all costs, on the one hand, and which ones can be safely shut down, as they will go haywire anyway?

[1] Taleb, N. N. (2007). The black swan: The impact of the highly improbable (Vol. 2). Random House.

[2] Taleb, N. N., & Blyth, M. (2011). The black swan of Cairo: How suppressing volatility makes the world less predictable and more dangerous. Foreign Affairs, 33-39.

Quite abundant a walk of life

My editorial

I have just finished writing an article about the link between energy and human settlement. You could have noticed that I have been kind of absent from scientific blogging for a few days. I had my classes starting, at the university, and this was the first reason, but the second one was precisely that article. On Wednesday, I started doing some calculations, well in the lines of my latest research (you can look up ‘Core and periphery’). Nothing very serious, just some casual dabbling with numbers. You know, when you are an economist, you start having cold turkey symptoms when you are parted from an Excel spreadsheet. From time to time, you just need to do some calculations, and so I was doing when, suddenly, those numbers started making sense. It is a peculiar feeling when numbers start making sense, because usually, you just kind of feel that sense but you don’t exactly know what it actually is. That was exactly my case, on Wednesday. I started playing with the parameters of that general equilibrium, with population size on the left side of the equation, and energy use, as well as food intake, on the other side. All of a sudden, that theoretical equilibrium started yielding real, robust, local equilibria in individual countries.

Then, something just fired off in my mind. My internal happy bulldog, you know, that little beast who just loves biting into big, juicy loaves of data, really bit in. My internal ape, that curious and slightly impolite part of me, went to force the bulldog’s jaws open, but it got fascinated. My internal austere monk, that really-frontal-cortex guy inside of me, who walks around with the Ockham’s razor ready to slash into bullshit, had to settle the matter. He said: ‘Good, folks, as you are, we need to hatch an article, and we do it now’. You don’t discuss with a guy who has a big razor, and so all of me wrote this article. Literally all of me.
It was the first time, since I was 22 (bloody long ago), that I spent a night awake, writing. The result, for the moment in the pre-editorial form, is entitled ‘Settlement by energy – can renewable energies sustain our civilisation?’  and you can read it just by clicking this link.

Anyway, now I am in a post-article frame of mind, which means I need to shake it off a bit. What I usually do in terms of shaking off is having conversations with dead people. No, I don’t need candles. One of my favourite and not-quite-alive-anymore interlocutors is Jacques Savary, a merchant and public officer, who, in 1675, two years after both the real and the fictional d’Artagnan had been dead, published, with the privilege of the King, and through the industrious efforts of the publishing house run by Louis Billaine, located at the Second Pillar of the Grand Salle of the Palace, at Grand Cesar, a book entitled, originally, ‘Le Parfait Négociant ou Instruction Générale Pour Ce Qui Regarde Le Commerce’. In English, that would be ‘The Perfect Merchant or General Instructions as Regards Commerce’. And so I am summoning Master Savary from the after world of social sciences, and we start chatting about what he wrote regarding manufactures (Book II, Chapters XLV and XLVI). First, a light stroke of brush to paint the general landscape. Back in the days, in the second half of the 17th century, manufactures meant mostly textile and garments. There was some industrial activity in other goods (glass, tapestry), but the bulk of industry was about cloth, in many forms. People at the time were really inventive when it came to new types of cloth: they experimented with mixing cotton, wool and silk, in various proportions, and they experimented with dyeing (I mean, they experimented with dying, as well, but we do it all the time), and they had fashions. Anyway, textile and garment was THE industry.

As Master Savary starts his exposition about manufactures, he opens up with a warning: manufactures can lead you to ruin. Interesting opening for an instruction. The question is why? Or rather, how? I mean, how could a manufacturing business lead to ruin? Well, back in the day, in the 17th century, in Europe, manufacturing activities used to be quite separated institutionally from the circulation of big money. Really big business was being done mostly in trade, and large-scale manufacturing was seen as kind of odd. In trade, merchants of the time devised various legal tools to speed up the circulation of capital. Bills of exchange, maritime insurance, tax farming – provided you knew just the right people, it all allowed a really smooth flow of money, even in the presence of many-year-long maritime commercial trips. In manufacturing, many of those clever tricks didn’t work, or at least didn’t work yet. They had to wait, those people, some 200 years before manufacturing would become really smooth a way of circulating capital. Anyway, putting money in manufacturing meant that you could not recover it as quickly as you could in trade. Basically, when you invested in manufactures, you were much more dependent on the actual marketability of your actual products than you were in trade. Thus, many merchants, Master Savary obviously included, perceived manufacturing as terribly risky.

What did he recommend in the presence of such dire risk? First of all, he advised to distinguish between three strategies. One, imitate a foreign manufacture. Two, invent something new and set up a new manufacture. Three, invest in ‘an already established Manufacture, whose merchandise has an ordinary course in the Kingdom as well as in foreign Countries, by the general consent of all the people who had recognized its goodness, in the use of fabric which have been manufactured there’. I tried to translate literally the phrasing of the last strategy, in order to highlight the key points of the corresponding business plan. An established manufacture meant, first of all, the one with ‘an ordinary course in the Kingdom as well as in foreign Countries’. Ordinary course meant a predictable final selling price. As a matter of fact, this is my problem with that translation. Master Savary originally used the French expression: ‘cours ordinaire’, which, in English, becomes ambiguous. First, it can mean ‘ordinary course’, i.e. something like an established channel of distribution. Still, it can also mean ‘ordinary rate of exchange’. Why ‘rate of exchange’? We are some 150 years before the development of modern, standardized monetary systems. We are even some 100 years before the appearance of paper money. There were coins, and there was a s***load of other things you could exchange your goods against. At Master Savary’s time, many things were currencies. In business, you traded your goods against various types of coins, you accepted bills of exchange instead of coins, you traded against gold and silver in ingots, as well, and finally, you did barter. Some young, rich, and spoilt marquis had lost some of his estates by playing cards, he signed some papers, and here you are, with the guy who wants to buy your entire stock of woollen garments and who wants to pay you precisely with those papers signed by the young marquis.
If you were doing really big business, none of your goods had one price: instead, they all had complex exchange rates against other valuables. Trading goods with what Master Savary originally called ‘cours ordinaire’ meant that the goods in question were kind of predictable as for their exchange rate against anything else in that economic jungle of the late 17th century.

What worked on the selling side had to work on the supply side as well. You had to buy your raw materials, your transport, your labour etc. at complex exchange rates, and not at those nice, tame, clearly cut prices in one definite currency. Making the right match between exchange rates achieved when purchasing things, and those practiced at the end of the value chain, was an art, and frequently a pain in your ass. In other words, business in the 17th century was very much like what we would have now if our banking and monetary systems collapsed. Yes, baby, them bankers are mean and abjectly rich, but they keep that wheel spinning smoothly, and you don’t have to deal with Somalian pirates in order to buy from them some drugs, which you are going to exchange against natural oil in Yemen, which, in turn, you will use to back some bills of exchange, which will allow you to buy cotton for your factory.

Now, let’s return to what Master Savary had to say about those three strategies for manufacturing. As he discusses the first one – imitating a foreign factory – he recommends five wise things to do. One, check if you can achieve exactly the same quality of fabric as those bloody foreigners do. If you cannot, there is no point in starting imitation. Two, make sure you can acquire your raw materials, in the necessary bracket of quality, in the place where you locate your manufacture. Three, make sure the place where you locate your operations will allow you to practice prices competitive with those of the foreign goods you are imitating. Four, create for yourself conditions for experimenting with your product and your business. Launch some kind of test missiles in many directions, present your fabrics to many potential customers. In other words, take your time, bite your ambition, suck ass and make your way into the market step by step. Five, arrange for acquiring the same tools, and even the same people, that work in those foreign manufactures. Today, we would say: acquire the technology, both the formal, and the informal one.

As he passes to discussing the second strategy, namely inventing something new, Master Savary recommends even more prudence, and, at the same time, he pulls open a bit the veil of discretion regarding his own life, and confesses that he, in person, had invented three new fabrics during his business career: a thick woollen ribbon made of camel wool, a thick drugget for making simple, coarse, work clothes, and finally a ribbon made of woven gold and silver. Interesting. Here is a guy, who started his professional life as a merchant, then he went into commercial arbitrage for some time, then he went into the service of a rich aristocrat (see ‘Comes the time, comes the calm duke’), then he entered into a panel of experts commissioned by Louis XIV, the Sun King, to prepare new business law, and in the meantime he invented decorative ribbons for rich people, as well as coarse fabrics for poor people. Quite abundant a walk of life. As I am reading the account of his textile inventions, he seems to be the most attached to, and the most vocal about, that last one, the gold and silver ribbon. He insists that nobody before him had ever succeeded in weaving gold and silver into something wearable. He describes in detail all the technological nuances, like for example preventing the chipping off of the very thinly pulled, thread-sized golden wire. He concludes: ‘I have given my own example, in order to make those young people, who want to invent new Manufactures, understand they should take their precautions, not to engage imprudently and not to let themselves being carried away by the profits they will make on their first fabrics, and to have a great number of them fabricated, before being certain they will be pleasant to the public, as well as for their beauty as for quality; for it is really dangerous, and they will risk their fortune at it’.