Boots on the ground

I continue the fundamental cleaning in my head, as the year 2020 draws to its end. What do I want? Firstly, I want to exploit and develop my hypothesis of collective intelligence in human societies, and I want to develop my programming skills in Python. Secondly, I want to develop my skills and my position as a facilitator and manager of research projects at the frontier between the academic world and that of business. How will I know I have what I want? If I actually program a workable (and working) intelligent structure, able to uncover and reconstruct the collective intelligence of a social structure out of available empirical data – namely to uncover and reconstruct the chief collective outcomes that structure is after, and its patterns of reaction to random exogenous disturbances – that would be an almost tangible outcome for me, telling me I have made a significant step. When I see that I have repetitive, predictable patterns of facilitating the start of joint research projects in consortiums of scientific entities and business ones, then I will know I have nailed down something in terms of project management. If I can start something like an investment fund for innovative technologies, then I will definitely know I am on the right track.

The intelligent structure I want to program is essentially an artificial neural network, possibly instrumented with additional functions, such as data collection, data cleansing etc. I know I want to understand very specifically what my neural network does. I want to understand every step it takes. To that purpose, I need to figure out a workable algorithm of my own, where I understand every line of code. It can be sub-optimally slow and limited in its real computational power, yet I need it. On the other hand, the Internet is more and more equipped with platforms and libraries in the form of digital clouds, such as IBM Watson or TensorFlow, which provide optimized processes to build complex pieces of AI. I already know that being truly proficient in Data Science entails skills pertinent to using those cloud-based tools. My bottom line is that if I want to program an intelligent structure communicable and appealing to other people, I need to program it at two levels: as my own prototypic code, and as a procedure of using cloud-based platforms to replicate it.

At the juncture of those two how-will-I-know pieces of evidence, an idea emerges, a crazy one. What if I can program an intelligent structure which uncovers and reconstructs one or more alternative business models out of the available empirical data? Interesting. The empirical data I work the most with, as regards business models, is the data provided in the annual reports of publicly listed companies. Secondarily, data about financial markets sort of connects. My own experience as a small investor supplies me with an existential basis to back this external data, and that experience suggests defining a business model as a portfolio of assets combined with broadly speaking behavioural patterns, both in the people active inside the business model, i.e. running it and employed with it, and in the people connected to that model from outside, as customers, suppliers, investors etc.

How will other people know I have what I want? The intelligent structure I will have programmed has to work across different individual environments, which is an elegant way of saying it should work on different computers. Logically, I can say I have clearly demonstrated to other people that I achieved what I wanted with that thing of collective intelligence when said other people are willing to try my algorithm, and successful at it. Here comes the point of willingness in other people. I think it is something like an existential thing across the board. When we want other people to try and do something, and they don’t, we are pissed. When other people want us to try and do something, and we don’t, we are pissed, and they are pissed. As regards my hypothesis of collective intelligence, I have already experienced that sort of intellectual barrier, when my articles get reviewed. Reviewers write that my hypothesis is interesting, yet not articulate and not grounded enough. Honestly, I can’t blame them. My feeling is that it is even hard to say that I have that hypothesis of collective intelligence. It is rather as if that hypothesis were having me as its voice and speech. Crazy, I know, only this is how I feel about the thing, and I know by experience that good science (and good things, in general) turns up when I am honest with myself.

My point is that I feel I need to write a book about that concept of collective intelligence, in order to give a full explanation of my hypothesis. My observations about cities and their role in the human civilization make up, for the moment, one of the most tangible topics I can attach the theoretical hypothesis to. Writing that book about cities, together with programming an intelligent structure, takes a different shade now. It becomes a complex account of how we can deconstruct something – our own collective intelligence – which we know is there and yet, as we are inside that thing, we have a hard time describing it.

That book about cities, abundantly referring to my hypothesis of collective intelligence, could be one of the ways to convince other people to at least try what I propose. Thus, once again, I restate what I understand by intelligent structure. It is a structure which learns new patterns by experimenting with many alternative versions of itself, whilst staying internally coherent. I return to my ‘DU_DG’ database about cities (see ‘It is important to re-assume the meaning’) and I am re-assuming the concept of alternative versions, in an intelligent structure.

I have a dataset structured into n variables and m empirical observations. In my DU_DG database, as in many other economic datasets, distinct observations are defined as the state of a country in a given year. As I look at the dataset (metaphorically, it has content and meaning, but it does not have any physical shape save for the one my software supplies it with), and as I look at my thoughts (metaphorically, once again), I realize I have been subconsciously distinguishing two levels of defining an intelligent structure in that dataset, and, correspondingly, two ways of defining alternative versions thereof. At the first level, the entire dataset is supposed to be an intelligent structure and alternative versions thereof consist in alternative dichotomies of the type ‘one variable as output, i.e. as the desired outcome to optimize, and the remaining ones as instrumental input’. At this level of structural intelligence – by which term I understand the way of being in an intelligent structure – alternative versions are alternative orientations, and there are as many of them as there are variables.
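
Just to make that idea of orientations tangible, here is a minimal sketch in Python of how I would enumerate them for a dataset; the data frame and the column values below are hypothetical placeholders, not my actual database:

import pandas as pd

# Hypothetical dataset: m observations (e.g. country-year) described with n variables
data = pd.DataFrame({
    'DU_DG': [11.9, 14.2, 19.3],
    'Population': [3.0e9, 5.3e9, 7.6e9],
    'Energy use per capita': [1200.0, 1600.0, 1900.0]
})

# One orientation = one variable picked as the output, the remaining ones as instrumental input
orientations = []
for output_variable in data.columns:
    input_variables = [v for v in data.columns if v != output_variable]
    orientations.append({'output': output_variable, 'input': input_variables})

# There are as many alternative orientations as there are variables
for o in orientations:
    print(o['output'], '<=', o['input'])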

Distinction into variables is largely, although not entirely, epistemic, and not ontological. The headcount of urban population is not a fundamentally different phenomenon from the surface of agricultural land. Yes, the units of measurement are different, i.e. people vs. square kilometres, but, ontologically, it is largely the same existential stuff, possible to describe as people living somewhere in large numbers and being successful at it. Historically, social scientists and governments alike have come to the conclusion, though, that these two metrics have a different meaning, and thus it comes in handy to distinguish them as semantic vessels to collect and convey information. The distinction of alternative orientations in an intelligent structure, supposedly represented in a dataset, is arbitrary and cognitive more than ontological. It depends on the number of variables we have. If I add variables to the dataset, e.g. by computing coefficients between the incumbent variables, I can create new orientations for the intelligent structure, i.e. new alternative versions to experiment with.

The point which comes to my mind is that the complexity of an intelligent structure, at that first level, depends on the complexity of my observational apparatus. The more different variables I can distinguish, and measure as regards a given society, the more complexity I can give to the allegedly existing, collectively intelligent structure of that society.

Whichever combination ‘output variable vs. input variables’ I am experimenting with, there comes the second level of defining intelligent structures, i.e. that of defining them as separate countries. They are sort of local intelligent structures, and, at the same time, they are alternative experimental versions of the overarching intelligent structure to be found in the vector of variables. Each such local intelligent structure, with a flag, a national anthem, and a government, produces many alternative versions of itself in consecutive years covered by the window of observation I have in my dataset.

I can see a subtle distinction here. A country produces alternative versions of itself, in different years of its existence, sort of objectively and without giving a f**k about my epistemological distinctions. It just exists and tries to be good at it. Experimenting comes as natural in the flow of time. This is unguided learning. On the other hand, I produce different orientations of the entire dataset. This is guided learning. Now, I understand the importance of the degree of supervision in artificial neural networks.

I can see an important lesson for me, here. If I want to program intelligent structures ‘able to uncover and reconstruct the collective intelligence of a social structure out of available empirical data – namely to uncover and reconstruct the chief collective outcomes that structure is after, and its patterns of reaction to random exogenous disturbances’, I need to distinguish those two levels of learning in the first place, namely the unguided flow of existential states from the guided structuring into variables and orientations. When I have an empirical dataset and I want to program an intelligent structure able to deconstruct the collective intelligence represented in that dataset, I need to define accurately the basic ontological units, i.e. the fundamentally existing things, then I define alternative states of those things, and finally I define alternative orientations.

Now, I am contrasting. I pass from those abstract thoughts on intelligent structures to a quick review of my so-far learning to program those structures in Python. Below, I present that review as a quick list of the separate files I created in JupyterLab, together with a quick characterization of the problems I am trying to solve in each of those files, as well as of the solutions found and not found.

>> Practice Dec 11 2020.ipynb.

In this file, I work with the IMF database WEOOct2020 (https://www.imf.org/en/Publications/WEO/weo-database/2020/October ). I practiced reading complex datasets with an artificially flattened structure. It is a table in which index columns are used to add dimensions to an otherwise two-dimensional format. I practiced the ‘read_excel’ and ‘read_csv’ commands. On the whole, it seems that converting an Excel file to CSV and then reading the CSV in Python is a better method than reading the Excel file directly. Problems solved: a) cleansing the dataset of not-a-number components and successful conversion of initially ‘object’ columns into the desired ‘float64’ format, b) setting descriptive indexes on the data frame, c) listing unique labels from a descriptive index, d) inserting new columns into the data frame, e) adding (compounding) the contents of two existing, descriptive index columns into a third index column. Failures: i) reading data from an XML file, ii) reading data from the SDMX format, iii) transposing my data frame so as to put index values of economic variables as column names and years as index values in a column.
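
For my own future reference, the solved points a) – e) boil down to a handful of pandas calls. A minimal sketch follows; the file name and the exact column names are my assumptions about the WEOOct2020 layout rather than a tested transcript of my notebook:

import pandas as pd

WEO=pd.DataFrame(pd.read_csv('WEOOct2020all.csv', header=0, sep=';'))

# a) cleansing not-a-number content and converting an 'object' column into 'float64'
WEO['1980']=pd.to_numeric(WEO['1980'], errors='coerce')

# b) setting a descriptive index on the data frame
WEO=WEO.set_index('Country')

# c) listing unique labels from a descriptive index column
unique_subjects=pd.unique(WEO['Subject Descriptor'])

# d) inserting a new column into the data frame
WEO.insert(0, 'New column', 0)

# e) compounding two descriptive index columns into a third one
WEO['Compound index']=WEO['Subject Descriptor']+' | '+WEO['Units']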

>> Practice Dec 8 2020.ipynb.

In this file, I worked with a favourite dataset of mine, the Penn Tables 9.1. (https://www.rug.nl/ggdc/productivity/pwt/?lang=en ). I described my work with it in two earlier updates, namely ‘Two loops, one inside the other’, and ‘Mathematical distance’. I succeeded in creating an intelligent structure from that dataset. I failed at properly formatting the output of that structure and thus at comparing the cognitive value of different orientations I made it simulate.   

>> Practice with Mortality.ipynb.

I created this file as a first practice before working with the above-mentioned WEOOct2020 database. I took one dataset from the website of the World Bank, namely that pertinent to the coefficient of adult male mortality (https://data.worldbank.org/indicator/SP.DYN.AMRT.MA ). I practiced reading data from CSV files, and I unsuccessfully tried to stack the dataset, i.e. to transform columns corresponding to different years of observation into rows indexed with labels corresponding to years.   
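
For the record, the operation I failed at looks, in principle, like a job for pandas’ melt() function. A sketch of what I think it should look like, with the file name and the id columns being my assumptions about the World Bank file layout rather than tested code:

import pandas as pd

mortality=pd.DataFrame(pd.read_csv('API_SP.DYN.AMRT.MA.csv', skiprows=4))

# Year columns become rows, indexed with a 'Year' label
mortality_stacked=mortality.melt(
    id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'],
    var_name='Year',
    value_name='Adult male mortality')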

>> Practice DU_DG.ipynb.

In this file, I am practicing with my own dataset pertinent to the density of urban population and its correlates. The dataset is already structured in Excel. I start practicing the coding of the same intelligent structure I made with Penn Tables, supposed to study the orientation of the societies in question. Same problems and same failures as with Penn Tables 9.1: for the moment, I cannot nail down the way to get output data in structures that allow full comparability. My columns tend to wander across the output data frames. In other words, the vectors of mean expected values produced by the code I made have a slightly (just slightly, and sufficiently to be annoying) different structure from the original dataset. I don’t know why, yet, and I don’t know how to fix it.
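
One hedge against those wandering columns, which I still need to test, would be to force every output frame back into the column order of the source data frame, along the lines of:

# 'source_data' stands for the original DU_DG data frame, and 'output_frame' for one of the
# frames produced by my code; both names are placeholders
output_frame=output_frame.reindex(columns=source_data.columns)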

On the other hand, in that same file, I have been messing around a bit with algorithms based on the ‘scikit’ library for Python. Nice graphs, and functions which I still need to understand.
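
Just to pin down the most basic pattern of that library, here is a generic sketch of the kind of thing I have been doing; the column names come from my DU_DG dataset, the rest is illustrative rather than the exact code from my file:

from sklearn.linear_model import LinearRegression

# 'DU_DG_data' stands for the data frame read from my DU_DG file
# One hypothetical orientation: DU_DG as the output, two other variables as the input
X=DU_DG_data[['Population', 'urban population absolute']]
y=DU_DG_data['DU_DG']

model=LinearRegression()
model.fit(X, y)
print(model.coef_, model.intercept_, model.score(X, y))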

>> Practice SEC Financials.ipynb.

Here, I work with data published by the US Securities and Exchange Commission, regarding the financials of individual companies listed in the US stock market (https://www.sec.gov/dera/data/financial-statement-data-sets.html ). The challenge here consists in translating data originally supplied in *.TXT files into numerical data frames in Python. The problem which I managed to solve so far (this is the most recent piece of my programming) is the most elementary translation of TXT data into a Pandas data frame, using the ‘open()’ command, and the ‘f.readlines()’ one. Another small victory here is to read data from a sub-directory inside the working directory of JupyterLab, i.e. inside the root directory of my user profile. I used two methods of reading TXT data. Both sort of worked. First, I used the following sequence:

>> with open('2020q3/num.txt') as f:

            numbers=f.readlines()

>> Numbers=pd.DataFrame(numbers)

… which, when checked with the ‘Numbers.info()’ command, yields:

<class 'pandas.core.frame.DataFrame'>

RangeIndex: 2351641 entries, 0 to 2351640

Data columns (total 1 columns):

 #   Column  Dtype

---  ------  -----

 0   0       object

dtypes: object(1)

memory usage: 17.9+ MB

In other words, that sequence did not split the string of column names into separate columns, and the ‘Numbers’ data frame contains one column, in which every row is a long string structured with the ‘\’ separators. I tried to be smart with it. I did:

>> Numbers.to_csv('Num2') # I converted the Pandas data frame into a CSV file

>> Num3=pd.DataFrame(pd.read_csv('Num2', sep=';')) # …and I tried to read back from CSV, experimenting with different separators. None of it worked. With the ‘sep=’ argument in the command, I kept getting a parsing error, in the lines of ‘ParserError: Error tokenizing data. C error: Expected 1 fields in line 3952, saw 10’. When I didn’t use the ‘sep=’ argument, the command did not yield an error, yet it yielded the same long column of structured strings instead of many data columns.

Thus, I gave up a bit, and I used Excel to open the TXT file, and to save a copy of it in the CSV format. Then, I just created a data frame from the CSV dataset, through the NUM_from_CSV=pd.DataFrame(pd.read_csv('SEC_NUM.csv', sep=';')) command, which, checked with the ‘NUM_from_CSV.info()’ command, yields:

<class 'pandas.core.frame.DataFrame'>

RangeIndex: 1048575 entries, 0 to 1048574

Data columns (total 9 columns):

 #   Column    Non-Null Count    Dtype 

---  ------    --------------    -----

 0   adsh      1048575 non-null  object

 1   tag       1048575 non-null  object

 2   version   1048575 non-null  object

 3   coreg     30131 non-null    object

 4   ddate     1048575 non-null  int64 

 5   qtrs      1048575 non-null  int64 

 6   uom       1048575 non-null  object

 7   value     1034174 non-null  float64

 8   footnote  1564 non-null     object

dtypes: float64(1), int64(2), object(6)

memory usage: 72.0+ MB

The ‘tag’ column in this data frame contains the names of financial variables ascribed to companies identified with their ‘adsh’ codes. I experience the same challenge, and, so far, the same failure as with the WEOOct2020 database from the IMF, namely translating the different values in a descriptive index into a dictionary, and then, in the next step, flipping the database so as to make those different index categories into separate columns (variables).
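
That flipping looks like a job for pandas’ pivoting. Here is a minimal sketch of what I mean, with two assumptions of mine flagged in the comments: that the raw ‘num.txt’ file is tab-delimited (which, if true, would also spare the Excel detour), and that averaging duplicate entries is an acceptable simplification:

import pandas as pd

# Assumption: the SEC TXT files use tabs as separators
NUM=pd.DataFrame(pd.read_csv('2020q3/num.txt', sep='\t'))

# Flipping the 'tag' categories into separate columns, one row per company ('adsh');
# aggfunc='mean' is a crude way of handling duplicate (adsh, tag) pairs
NUM_wide=NUM.pivot_table(index='adsh', columns='tag', values='value', aggfunc='mean')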

As I have passed in review that programming of mine, I have become aware that reading and properly structuring different formats of data is the sensory apparatus of the intelligent structure I want to program. Operations of data cleansing and data formatting are the fundamental skills I need to develop in programming. Contrary to what I expected a few weeks ago, when I was taking on programming in Python, elaborate mathematical constructs are simpler to code than I thought they would be. What might be harder, mind you, is to program them so as to optimize computational efficiency with large datasets. Still, the very basic, boots-on-the-ground structuring of data seems to be the name of the game for programming intelligent structures.

It is important to re-assume the meaning

It is Christmas 2020, late in the morning. I am thinking, sort of deeply. It is a dysfunctional tradition to make, by the end of the year, resolutions for the coming year. Resolutions which we obviously don’t hold to long enough to see them bring anything substantial. Yet, it is a good thing to pass in review the whole passing year, distinguish my own f**k-ups from my valuable actions, and use it as learning material for the incoming year.

What I have been doing consistently for the past year is learning new stuff: investment in the stock market, distance teaching amidst epidemic restrictions, doing research on collective intelligence in human societies, managing research projects, programming, and training consistently while fasting. Finally, and sort of overarchingly, I have learnt the power of learning by solving specific problems and writing about myself mixing successes and failures as I am learning.

Yes, it is precisely the kind of journal you can expect in what we tend to label as girls’ reading, sort of ‘My dear journal, here is what happened today…’. I keep my dear journal focused mostly on my professional development, broadly speaking. Professional development combines with personal development, for me, though. I discovered that when I want to achieve some kind of professional success, would it be academic or business, I need to add a few new arrows to my personal quiver.

Investing in the stock market and training while fasting are, I think, what I have had the most complete cycle of learning with. Strange combination? Indeed, a strange one, with a surprising common denominator: the capacity to control my emotions, to recognize my cognitive limitations, and to acknowledge the payoff from both. Financial decisions should be cold and calculated. Yes, they should, and sometimes they are, but here comes a big discovery of mine: when I start putting my own money into investment positions in the stock market, emotions flare in me so strongly that I experience something like tunnel vision. What looked like perfectly rational inference from numbers, just minutes ago, now suddenly looks like a jungle, with both game and tigers in it. The strongest emotion of all, at least in my case, is the fear of loss, and not the greed for gain. Yes, it goes against a common stereotype, and yet it is true. Moreover, I discovered that properly acknowledged and controlled, the fear of loss is a great emotional driver for good investment decisions, and, as a matter of fact, it is much better an emotional driver than avidity for gain. I know that I am well off when I keep the latter sort of weak and shy, expecting gains rather than longing for them, if you catch my drift.

Here comes the concept of good investment decisions. As this year 2020 comes to an end, my return on cash invested over the course of the year is 30% and change. Not bad at all, compared to a bank deposit (+1.5%) or to sovereign bonds (+4.5% max). I am wrapping my mind around the second most fundamental question about my investment decisions this year – after, of course, the question about return on investment – and that second question is ontological: what have my investment decisions actually been? What has been their substance? The most general answer is: tolerable complexity with intuitive hedging and a pinch of greed. Complexity means that I have progressively passed from the otherwise naïve expectation of one perfect hit to a portfolio of investment positions. Thinking intuitively in terms of a portfolio has taught me a similarly intuitive approach to hedging my risks. Now, when I open one investment position, I already think about another possible one, either to reinforce my glide on the wave crest I intend to ride, or to compensate the risks contingent to seeing my ass gliding off and down from said wave crest.

That portfolio thinking of mine happens in layers, sort of. I have a portfolio of industries, and that seems to be the basic structuring layer of my decisions. I think I can call myself a mid-term investor. I have learnt to spot and utilise mid-term trends of interest that investors in the stock market attach to particular industries. I noticed there are cyclical fashion seasons in the stock market, in that respect. There is a cyclically recurrent biotech season, due to the pandemic. There is just as cyclical a fashion for digital tech, and another one for renewable energies (photovoltaic, in particular). Inside the digital tech, there are smaller waves of popularity as regards the gaming business, others connected to FinTech etc.

Cyclicality means that prices of stock in those industries grow for some time, ranging, by my experience, from 2 to 13 weeks. Riding those waves means jumping on and off at the right moment. The right moment for jumping on is as early as possible after the trend starts to ascend, and the right moment for jumping off is just as early as possible after it shows signs of durable descent.

The ‘durable’ part is tricky, mind you. I saw many episodes when the trend curves down just for a few days before rocketing up again, and during some of them I shamefully yielded to short-termist panic. Those episodes show well what it means, in practical terms, to face ‘technical factors’. The stock market is like an ocean. There are spots of particular fertility, and big predators tend to flock just there. In the stock market, just as in the ocean, you have bloody big sharks swimming around, and you’d better hold on when they start feeding, ‘cause they feed just as real sharks do: they hit quickly, cause abundant bleeding, and then just wait until their prey bleeds out enough to be defenceless.

When I see, for example, a company like the German BioNTech (https://investors.biontech.de/investors-media) suddenly losing value in the stock market, whilst the very vaccine they ganged up with Pfizer to make is being distributed across the world, I am like: ‘Wait a minute! Why would the stock price of a super-successful, highly innovative business fall just at the moment when they are starting to consume the economic fruit of their innovation?’. The only explanation is that sharks are hunting. Your typical stock market shark hunts in a disgusting way, by eating, vomiting and then eating its vomit back with a surplus. It bites a big chunk of a given stock, chews it for a moment, spits it out quickly – which pushes the price down a bit – then eats back its own vomit of stock, with a tiny surplus acquired at the previously down-driven price, and then it repeats. Why wouldn’t it repeat, as long as the thing works?

My personal philosophy, which, unfortunately, sometimes I deviate from when my emotions prevail, is just to sit and wait until those big sharks end their feeding cycle. This is another useful thing to know about big predators in the stock market: they hunt similarly to big predators in nature. They have a feeding cycle. When they have killed and consumed a big prey, they rest, as they are both replete with eating and down on energy. They need to rebuild their capital base.      

My reading of the stock market is that those waves of financial interest in particular industries are based on expectations as for real business cycles going on out there. Of course, in the stock market, there is always the phenomenon of subsidiary interest: I invest in companies which I expect other investors to invest in as well, and, consequently, whose stock price I expect to grow. Still, investors in the stock market are much more oriented on fundamental business cycles than non-financial people think. When I invest in the stock of a company, and I know for a fact that many other investors think the same, I expect that company to do something constructive with my trust. I want to see those CEOs take bold decisions as for real investment in technological assets. When they really do so, I stay with them, i.e. I hold that stock. This is why I keep holding the stock of Tesla even amidst episodes of wild swings in its price. I simply know Elon Musk will always come up with something which, for him, is a business concept, and, for the common of mortals, is science fiction. If, on the other hand, I see those CEOs just sitting and gleaning benefits from trading their preferential shares, I leave.

Here I connect to another thing I started to learn during 2020: managing research projects. At my university, I have been assigned this specific job, and I discovered something which I did not expect: there is more money than ideas out there. There is, actually, plenty of capital available from different sources to finance innovative science. The tricky part is to translate innovative ideas into an intelligible, communicable form, and then into projects able to convince people with money. The ‘translating’ part is surprisingly complex. I can see many sparse, sort of semi-autonomous ideas in different people, and I still struggle with putting those people together, into some sort of team, or, failing a team, into a network, and making them mix their respective ideas into one, big, articulate concept. I have been reading for years about managing R&D in corporate structures, about how complex and artful it is to manage R&D efficiently, and now I am experiencing it in real life. An interesting aspect of that is the writing of preliminary contracts, the so-called ‘Non-Disclosure Agreements’ AKA NDAs, the signature of which is sort of a trigger for starting serious networking between different agents of an R&D project.

As I am wrapping my mind around those questions, I meditate over the words written by Joseph Schumpeter, in his Business Cycles: “Whenever a new production function has been set up successfully and the trade beholds the new thing done and its major problems solved, it becomes much easier for other people to do the same thing and even to improve upon it. In fact, they are driven to copying it if they can, and some people will do so forthwith. It should be observed that it becomes easier not only to do the same thing, but also to do similar things in similar lines—either subsidiary or competitive ones—while certain innovations, such as the steam engine, directly affect a wide variety of industries. This seems to offer perfectly simple and realistic interpretations of two outstanding facts of observation : First, that innovations do not remain isolated events, and are not evenly distributed in time, but that on the contrary they tend to cluster, to come about in bunches, simply because first some, and then most, firms follow in the wake of successful innovation ; second, that innovations are not at any time distributed over the whole economic system at random, but tend to concentrate in certain sectors and their surroundings”. (Business Cycles, Chapter III HOW THE ECONOMIC SYSTEM GENERATES EVOLUTION, The Theory of Innovation). In the Spring, when the pandemic was deploying its wings for the first time, I had a strong feeling that medicine and biotechnology will be the name of the game in technological change for at least a few years to come. Now, as strange as it seems, I have a vivid confirmation of that in my work at the university. Conceptual balls which I receive and which I do my best to play out further in the field come almost exclusively from the faculty of medical sciences. Coincidence? Go figure…

I am developing along two other avenues: my research on cities and my learning of programming in Python. I have been doing research on cities as manifestations of collective intelligence, and I have been doing it for a while. See, for example, ‘Demographic anomalies – the puzzle of urban density’ or ‘The knowingly healthy people’. As I have been digging down this rabbit hole, I have created a database, which, for working purposes, I call ‘DU_DG’. DU_DG is a coefficient of relative density in population, which I came up with some day and which keeps puzzling me. Just to announce the colour, as we say in Poland when playing cards, ‘DU’ stands for the density of urban population, and ‘DG’ is the general density of population. The ‘DU_DG’ coefficient is a ratio of these two, namely DU/DG, or, in other words, it is the density of urban population denominated in the units of general density of population. In still other words, if we take the density of population as a fundamental metric of human social structures, the DU_DG coefficient tells how much denser urban population is, as compared to the mean density, rural settlements included.

I want to rework through my DU_DG database in order both to practice my programming skills, and to reassess the main axes of research on the collective intelligence of cities. I open JupyterLab from my Anaconda panel, and I create a new Notebook with Python 3 as its kernel. I prepare my dataset. Just in case, I make two versions: one in Excel, another one in CSV. I replace decimal commas with decimal points; I know by experience that Python has issues with commas. In human lingo, a comma is a short pause for taking like half a breath before we continue uttering the rest of the sentence. From there, we take the comma into maths, as a decimal separator. In Python, as in finance, we talk about the decimal point as such, i.e. as a point. The comma is a separator.
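
A side note to my future self: pandas can reportedly handle the European decimal notation directly at reading time, which would spare me the search-and-replace. Something along the lines of the call below, once pandas is imported; the CSV file name is hypothetical:

DU_DG_CSV=pd.DataFrame(pd.read_csv('Dataset For Perceptron.csv', sep=';', decimal=','))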

Anyway, I have that notebook in JupyterLab, and I start by piling up what I think I will need in terms of libraries:

>> import numpy as np

>> import pandas as pd

>> import os

>> import math

I place my database in the root directory of my user profile, which is, by default, the working directory of Anaconda, and I check if my database is visible for Python:

>> os.listdir()

It is there, in both versions, Excel and CSV. I start with reading from Excel:

>> DU_DG_Excel=pd.DataFrame(pd.read_excel('Dataset For Perceptron.xlsx', header=0))

I check with ‘DU_DG_Excel.info()’. I get:

<class 'pandas.core.frame.DataFrame'>

RangeIndex: 1155 entries, 0 to 1154

Data columns (total 10 columns):

 #   Column                                                                Non-Null Count  Dtype 

---  ------                                                                      --------------  -----

 0   Country                                                                1155 non-null   object

 1   Year                                                                      1155 non-null   int64 

 2   DU_DG                                                                1155 non-null   float64

 3   Population                                                           1155 non-null   int64 

 4   GDP (constant 2010 US$)                                  1042 non-null   float64

 5   Broad money (% of GDP)                                  1006 non-null   float64

 6   urban population absolute                                 1155 non-null   float64

 7   Energy use (kg of oil equivalent per capita)    985 non-null    float64

 8   agricultural land km2                                        1124 non-null   float64

 9   Cereal yield (kg per hectare)                                         1124 non-null   float64

dtypes: float64(7), int64(2), object(1)

memory usage: 90.4+ KB  

Cool. Exactly what I wanted. Now, if I want to use this database as a simulator of collective intelligence in human societies, I need to assume that each separate ‘country <> year’ observation is a distinct local instance of an overarching intelligent structure. My so-far experience with programming opens up on a range of actions that structure is supposed to perform. It is supposed to differentiate itself into the desired outcomes, on the one hand, and, on the other hand, the instrumental epistatic traits manipulated and adjusted in order to achieve those outcomes.

As I pass in review my past research on the topic, a few big manifestations of collective intelligence in cities come to my mind. Creation and development of cities as purposeful demographic anomalies is the first manifestation. This is an otherwise old problem in economics. Basically, people and the resources they use should be disposed evenly over the territory those people occupy, and yet they aren’t. Even with a correction taken for physical conditions, such as mountains or deserts, we tend to like forming demographic anomalies on the landmass of Earth. Those anomalies have one obvious outcome, i.e. the delicate balance between urban land and agricultural land, which is a balance between dense agglomerations generating new social roles due to abundant social interactions, on the one hand, and the local food base for people endorsing those roles, on the other hand. The actual difference between cities and the surrounding countryside, in terms of social density, is very idiosyncratic across the globe and seems to be another aspect of intelligent collective adaptation.

Mankind is becoming more and more urbanized, i.e. a consistently growing percentage of people live in cities (World Bank 1[1]). In 2007 – 2008, the coefficient of urbanization topped 50% and it has kept progressing since then. As there is more and more of us, humans, on the planet, we concentrate more and more in urban areas. That process defies preconceived ideas about land use. A commonly used narrative is that cities keep growing out into their once-non-urban surroundings, which is frequently confirmed by anecdotal, local evidence of particular cities effectively sprawling into the neighbouring rural land. Still, as data based on satellite imagery is brought up, and as total urban land area on Earth is measured as the total surface of peculiar agglomerations of man-made structures and night-time lights, that total area seems to be stationary, or, at least, to have been stationary for the last 30 years (World Bank 2[2]). The geographical distribution of urban land over the entire land mass of Earth does change, yet the total seems to be pretty constant. In parallel, the total surface of agricultural land on Earth has been growing, although at a pace far from steady and predictable (World Bank 3[3]).

There is a theory implied in the above-cited methodology of measuring urban land based on satellite imagery. Cities can be seen as demographic anomalies with a social purpose, just as Fernand Braudel used to state it (Braudel 1985[4]): ‘Towns are like electric transformers. They increase tension, accelerate the rhythm of exchange and constantly recharge human life. […]. Towns, cities, are turning-points, watersheds of human history. […]. The town […] is a demographic anomaly’. The basic theoretical thread of this article consists in viewing cities as complex technologies, for one, and in studying their transformations as a case of technological change. Logically, this is a case of technological change occurring by agglomeration and recombination. Cities can be studied as demographic anomalies with the specific purpose to accommodate a growing population with just as expanding a catalogue of new social roles, possible to structure into non-violent hierarchies. That path of thinking is present, for example, in the now classical work by Arnold Toynbee (Toynbee 1946[5]), and in the even more classical take by Adam Smith (Smith 1763[6]). Cities can literally work as factories of new social roles due to intense social interactions. The greater the density of population, the greater the likelihood of both new agglomerations of technologies being built, and new, adjacent social roles emerging. A good example of that special urban function is the interaction inside age groups. Historically, cities have allowed much more abundant interactions among young people (under the age of 25) than rural environments have. That, in turn, favours the emergence of social roles based on the typically adolescent, high appetite for risk and immediate rewards (see for example: Steinberg 2008[7]). Recent developments in neuroscience, on the other hand, allow assuming that abundant social interactions in the urban environment have a deep impact on the neuroplastic change in our brains, and even on the phenotypical expression of human DNA (Ehninger et al. 2008[8]; Bavelier et al. 2010[9]; Day & Sweatt 2011[10]; Sweatt 2013[11]).

At the bottom line of all those theoretical perspectives, cities are quantitatively different from the countryside by their abnormal density of population. Throughout this article, the acronymic symbol [DU/DG] is used to designate the density of urban population denominated in the units of (divided by) the general density of population. It is computed on the grounds of data published by the World Bank, by combining the above-cited coefficient of urbanization (World Bank 1) with the headcount of population (World Bank 4[12]), as well as with the surface of urban land (World Bank 2). The general density of population is taken straight from official statistics (World Bank 5[13]).
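
In terms of plain arithmetic, the construction of [DU/DG] can be sketched as below; the numbers are rough, illustrative orders of magnitude for a recent year, not the exact figures behind the computations reported further on:

# Rough, illustrative global figures
urbanization_rate=0.55        # share of urban population in total population (World Bank 1)
total_population=7.6e9        # headcount of population (World Bank 4)
urban_land_km2=3.6e6          # surface of urban land, assumed constant (World Bank 2)
general_density=60.0          # general density of population, people per km2 (World Bank 5)

DU=(urbanization_rate*total_population)/urban_land_km2    # density of urban population
DU_DG=DU/general_density                                  # urban density in units of general density
print(DU_DG)                                              # lands in the vicinity of the global value reported below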

The [DU/DG] coefficient stays in the theoretical perspective of cities as demographic anomalies with a purpose, and it can be considered as a measure of social difference between cities and the countryside. It displays intriguing quantitative properties. Whilst growing steadily over time at the globally aggregate level, from 11.9 in 1961 to 19.3 in 2018, it displays significant disparity across space. Such countries as Mauritania or Somalia display a [DU/DG] > 600, whilst the United Kingdom or Switzerland are barely above [DU/DG] = 3. In the 13 smallest national entities in the world, such as Tonga, Puerto Rico or Grenada, [DU/DG] falls below 1. In other words, in those ultra-small national structures, the method of assessing urban space by satellite-imagery-based agglomeration of night-time lights fails utterly. These communities display a peculiar, categorially idiosyncratic spatial pattern of settlement. The cross-sectional variability of [DU/DG] (i.e. its standard deviation across space divided by its cross-sectional mean value) reaches 8.62, and yet some 70% of mankind lives in countries ranging across the 12.84 ≤ [DU/DG] ≤ 23.5 interval.

Correlations which the [DU/DG] coefficient displays at the globally aggregate level (i.e. at the scale of the whole planet) are even more puzzling. When benchmarked against the global real output in constant units of value (World Bank 6[14]), the time series of aggregate, global [DU/DG] displays a Pearson correlation of r = 0.9967. On the other hand, the same type of Pearson correlation with the relative supply of money to the global economy (World Bank 7[15]) yields r = 0.9761. As the [DU/DG] coefficient is supposed to represent the relative social difference between cities and the countryside, a look at the latter is beneficial. The [DU/DG] Pearson-correlates with the global area of agricultural land (World Bank 8[16]) at r = 0.9271, and with the average, global yield of cereals, in kgs per hectare (World Bank 9[17]), at r = 0.9858. Those strong correlations of the [DU/DG] coefficient with metrics pertinent to the global food base match its correlation with the energy base. When Pearson-correlated with the global average consumption of energy per capita (World Bank 10[18]), [DU/DG] proves significantly covariant, at r = 0.9585. All that kept in mind, it is probably not that much of a surprise to see the global aggregate [DU/DG] Pearson-correlated with the global headcount of population (World Bank 11[19]) at r = 0.9954.

It is important to re-assume the meaning of the [DU/DG] coefficient. This is essentially a metric of density in population, and density has abundant ramifications, so to say. The more people live per 1 km2, the more social interactions occur on the same square kilometre. Social interactions mean a lot. They mean learning by civilized rivalry. They mean transactions and markets as well. The greater the density of population, the greater the probability of new skills emerging, which possibly translates into new social roles, new types of business and new technologies. When two types of human settlements coexist, displaying very different densities of population, i.e. type A being many times denser than type B, type A is like a factory of patterns (new social roles and new markets), whilst type B is the supplier of raw resources. The progressively growing global average [DU/DG] means that, at the scale of the human civilization, that polarity of social functions accentuates.

The [DU/DG] coefficient bears strong marks of a statistical stunt. It is based on the truly risky assumption, advanced implicitly through the World Bank’s data, that the total surface of urban land on Earth has remained constant, at least over the last 3 decades. Moreover, denominating the density of urban population in units of general density of population was purely intuitive on the author’s part, and, as a matter of fact, other meaningful denominators can easily come to one’s mind. Still, with all that wobbly theoretical foundation, the [DU/DG] coefficient seems to inform about a significant, structural aspect of human societies. The Pearson correlations, which the global aggregate of that coefficient yields with the fundamental metrics of the global economy, are of an almost uncanny strength in social sciences, especially with respect to the strong cross-sectional disparity in the [DU/DG].

The relative social difference between cities and the countryside, measurable with the gauge of the [DU/DG] coefficient, seems to be a strongly idiosyncratic adaptative mechanism in human societies, and this mechanism seems to be correlated with quantitative growth in population, real output, production of food, and the consumption of energy. That could be a manifestation of tacit coordination, where a growing human population triggers an increasing pace of emergence in new social roles by stimulating urban density. As regards energy, the global correlation between the increasing [DU/DG] coefficient and the average consumption of energy per capita interestingly connects with a stream of research which postulates intelligent collective adaptation of human societies to the existing energy base, including intelligent spatial re-allocation of energy production and consumption (Leonard, Robertson 1997[20]; Robson, Wood 2008[21]; Russon 2010[22]; Wasniewski 2017[23], 2020[24]; Andreoni 2017[25]; Heun et al. 2018[26]; Velasco-Fernández et al 2018[27]).

It is interesting to investigate how smart human societies are in shaping their idiosyncratic social difference between cities and the countryside. This specific path of research is being pursued, further in this article, through the verification and exploration of the following working hypothesis: ‘The collective intelligence of human societies optimizes social interactions in the view of maximizing the absorption of energy from the environment’.


[1] World Bank 1: https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS

[2] World Bank 2: https://data.worldbank.org/indicator/AG.LND.TOTL.UR.K2

[3] World Bank 3:  https://data.worldbank.org/indicator/AG.LND.AGRI.K2

[4] Braudel, F. (1985). Civilisation and Capitalism 15th and 18th Century–Vol. I: The Structures of Everyday Life, Translated by S. Reynolds, Collins, London, pp. 479 – 482

[5] Royal Institute of International Affairs, Somervell, D. C., & Toynbee, A. (1946). A Study of History. By Arnold J. Toynbee… Abridgement of Volumes I-VI (VII-X.) by DC Somervell. Oxford University Press., Section 3: The Growths of Civilizations, Chapter X.

[6] Smith, A. (1763-1896). Lectures on justice, police, revenue and arms. Delivered in the University of Glasgow in 1763, published by Clarendon Press in 1896, pp. 9 – 20

[7] Steinberg, L. (2008). A social neuroscience perspective on adolescent risk-taking. Developmental review, 28(1), 78-106. https://dx.doi.org/10.1016%2Fj.dr.2007.08.002

[8] Ehninger, D., Li, W., Fox, K., Stryker, M. P., & Silva, A. J. (2008). Reversing neurodevelopmental disorders in adults. Neuron, 60(6), 950-960. https://doi.org/10.1016/j.neuron.2008.12.007

[9] Bavelier, D., Levi, D. M., Li, R. W., Dan, Y., & Hensch, T. K. (2010). Removing brakes on adult brain plasticity: from molecular to behavioral interventions. Journal of Neuroscience, 30(45), 14964-14971. https://www.jneurosci.org/content/jneuro/30/45/14964.full.pdf

[10] Day, J. J., & Sweatt, J. D. (2011). Epigenetic mechanisms in cognition. Neuron, 70(5), 813-829. https://doi.org/10.1016/j.neuron.2011.05.019

[11] Sweatt, J. D. (2013). The emerging field of neuroepigenetics. Neuron, 80(3), 624-632. https://doi.org/10.1016/j.neuron.2013.10.023

[12] World Bank 4: https://data.worldbank.org/indicator/SP.POP.TOTL

[13] World Bank 5: https://data.worldbank.org/indicator/EN.POP.DNST

[14] World Bank 6: https://data.worldbank.org/indicator/NY.GDP.MKTP.KD

[15] World Bank 7: https://data.worldbank.org/indicator/FM.LBL.BMNY.GD.ZS

[16] World Bank 8: https://data.worldbank.org/indicator/AG.LND.AGRI.K2

[17] World Bank 9: https://data.worldbank.org/indicator/AG.YLD.CREL.KG

[18] World Bank 10: https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE

[19] World Bank 11: https://data.worldbank.org/indicator/SP.POP.TOTL

[20] Leonard, W.R., and Robertson, M.L. (1997). Comparative primate energetics and hominoid evolution. Am. J. Phys. Anthropol. 102, 265–281.

[21] Robson, S.L., and Wood, B. (2008). Hominin life history: reconstruction and evolution. J. Anat. 212, 394–425

[22] Russon, A. E. (2010). Life history: the energy-efficient orangutan. Current Biology, 20(22), pp. 981- 983.

[23] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation. Energy-Maximizing Adaptation (August 30, 2017).

[24] Wasniewski, K. (2020). Energy efficiency as manifestation of collective intelligence in human societies. Energy, 191, 116500.

[25] Andreoni, V. (2017). Energy Metabolism of 28 World Countries: A Multi-scale Integrated Analysis. Ecological Economics, 142, 56-69

[26] Heun, M. K., Owen, A., & Brockway, P. E. (2018). A physical supply-use table framework for energy analysis on the energy conversion chain. Applied Energy, 226, 1134-1162

[27] Velasco-Fernández, R., Giampietro, M., & Bukkens, S. G. (2018). Analyzing the energy performance of manufacturing across levels using the end-use matrix. Energy, 161, 559-572

Once again, I don’t really know what I am doing, and I love the feeling

I am digressing a bit in my learning of programming in Python, and I come back to a task which I have kept failing at so far, namely reading data out of the World Economic Outlook database, published by the International Monetary Fund (https://www.imf.org/en/Publications/WEO/weo-database/2020/October ). This is good training in data cleansing. When I click on that link, I can choose two different formats: TAB delimited values or SDMX. The former downloads as an Excel file, essentially. Frankly, I feel not up to treating the latter: it is a dynamic format, essentially based on an XML tree. Still to learn, for me. This is one of those cases when I prefer staying in the cave. Daylight can wait. I stick to Excel. I download it, I open it in Excel and I preliminarily cleanse the spreadsheet of the most salient stuff, such as e.g. title rows in the heading above the column labels.

Preliminary cleansing done, I copy the Excel workbook to the working directory of my Anaconda, which is, by default, the root directory of my user profile. I create a new notebook in JupyterLab, and I start by importing whatever I think can be useful:

>> import numpy as np

>> import pandas as pd

>> import os

>> import math    

I check the presence of the Excel file with the ‘os.listdir()’ command, and I go:

>> WEO=pd.DataFrame(pd.read_excel('WEOOct2020all.xlsx', sheet_name='WEOOct2020all', header=0))

Seems cool. The kernel has swallowed the command. Just in case, I check with ‘WEO.describe()’, and I get:

       Estimates Start After
count            7585.000000
mean             2015.186421
std                80.240679
min                 0.000000
25%              2018.000000
50%              2019.000000
75%              2019.000000
max              2020.000000

WTF? ‘Estimates start after’ is the last column of a two-dimensional table in Excel, and this column gives the year up to which the database provides actual empirics, and after which it is just projections. Besides this one, the database contains numerical columns corresponding to years, starting from 1980. When I go ‘WEO.columns’, I get:

Index([              'Country',    'Subject Descriptor',

                       'Units',                 'Scale',

                          1980,                    1981,

                          1982,                    1983,

                          1984,                    1985,

                          1986,                    1987,

                          1988,                    1989,

                          1990,                    1991,

                          1992,                    1993,

                          1994,                    1995,

                          1996,                    1997,

                          1998,                    1999,

                          2000,                    2001,

                          2002,                    2003,

                          2004,                    2005,

                          2006,                    2007,

                          2008,                    2009,

                          2010,                    2011,

                          2012,                    2013,

                          2014,                    2015,

                          2016,                    2017,

                          2018,                    2019,

                          2020,                    2021,

                          2022,                    2023,

                          2024,                    2025,

       'Estimates Start After'],

      dtype='object')

Aha! These columns are there, only Python sees them as non-numerical and does not compute any stats from them. As we say in Poland, I am trying to get my man from another angle. I open the source XLSX file in Excel and I save a copy thereof in the CSV format, in the working directory of my Anaconda. I remember that when saved out of an XLSX file, CSVs tend to have the semicolon as separator, instead of the comma. To everyone their ways, mind you. Thus, I go:

>> WEO2=pd.DataFrame(pd.read_csv('WEOOct2020all.csv', header=0, sep=';'))

When I check with ‘WEO2.info()’, I get:

<class 'pandas.core.frame.DataFrame'>

RangeIndex: 8775 entries, 0 to 8774

Data columns (total 51 columns):

 #   Column                 Non-Null Count  Dtype 

---  ------                 --------------  -----

 0   Country                8775 non-null   object

 1   Subject Descriptor     8775 non-null   object

 2   Units                  8775 non-null   object

 3   Scale                  3900 non-null   object

 4   1980                   3879 non-null   object

 5   1981                   4008 non-null   object

 6   1982                   4049 non-null   object

 7   1983                   4091 non-null   object

 8   1984                   4116 non-null   object

 9   1985                   4192 non-null   object

 10  1986                   4228 non-null   object

 11  1987                   4249 non-null   object

 12  1988                   4338 non-null   object

 13  1989                   4399 non-null   object

 14  1990                   4888 non-null   object

 15  1991                   5045 non-null   object

 16  1992                   5428 non-null   object

 17  1993                   5621 non-null   object

 18  1994                   5748 non-null   object

 19  1995                   6104 non-null   object

 20  1996                   6247 non-null   object

 21  1997                   6412 non-null   object

 22  1998                   6584 non-null   object

 23  1999                   6662 non-null   object

 24  2000                   7071 non-null   object

 25  2001                   7193 non-null   object

 26  2002                   7289 non-null   object

 27  2003                   7323 non-null   object

 28  2004                   7391 non-null   object

 29  2005                   7428 non-null   object

 30  2006                   7433 non-null   object

 31  2007                   7441 non-null   object

 32  2008                   7452 non-null   object

 33  2009                   7472 non-null   object

 34  2010                   7475 non-null   object

 35  2011                   7477 non-null   object

 36  2012                   7484 non-null   object

 37  2013                   7493 non-null   object

 38  2014                   7523 non-null   object

 39  2015                   7545 non-null   object

 40  2016                   7547 non-null   object

 41  2017                   7551 non-null   object

 42  2018                   7547 non-null   object

 43  2019                   7539 non-null   object

 44  2020                   7501 non-null   object

 45  2021                   7449 non-null   object

 46  2022                   7389 non-null   object

 47  2023                   7371 non-null   object

 48  2024                   7371 non-null   object

 49  2025                   7371 non-null   object

 50  Estimates Start After  7585 non-null   float64

dtypes: float64(1), object(50)

memory usage: 3.4+ MB

There is some progress, still it is not the entire progress I expected. I still don’t have numerical data, in the ‘float64’ type, where I expect to have it. I dig a bit and I see the source of the problem. In the WEO database there are plenty of empty cells, especially before the year 2000. They correspond to missing data, quite simply. In the source XLSX file, they are either just empty, or filled with something that looks like a double hyphen: ‘--’. Python shows the contents of these cells as ‘NaN’, which stands for ‘Not a Number’. That double hyphen is the most annoying of the two, as Excel does not see it in the ‘Replace’ command. I need to use Python. I do two phases of cleansing:

>> WEO3=WEO2.replace(np.nan, '', regex=True)

>> WEO4=WEO3.replace('--', '', regex=True)

 I check with ‘WEO4.info()’ aaaaand… Bingo! Columns from ‘1980’ to ‘2025’ are of the type ‘float64’.
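
An alternative route, which I note here to test later rather than as something I have actually run, would be to declare at reading time what counts as missing, and then to coerce the year columns into numbers:

WEO_alt=pd.DataFrame(pd.read_csv('WEOOct2020all.csv', header=0, sep=';', na_values=['--', '']))

year_columns=[str(year) for year in range(1980, 2026)]
WEO_alt[year_columns]=WEO_alt[year_columns].apply(pd.to_numeric, errors='coerce')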

The WEO database is made of several variables stacked one underneath the other in consecutive rows. You have one country, and for that country you have variables such as GDP, fiscal balance and whatnot. Essentially, it is a database presented in the two-dimensional format with multiple indexes, embedded one inside the other. The complexity of indexes replaces the multitude of dimensions in the actual data. I start intuitively, with creating lists of column labels corresponding, respectively, to numerical data, and to index descriptors:

>> Numerical_Data=['1980', '1981',
       '1982', '1983', '1984', '1985', '1986', '1987', '1988', '1989', '1990',
       '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999',
       '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008',
       '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017',
       '2018', '2019', '2020', '2021', '2022', '2023', '2024', '2025']

>>  Index_descriptors=[‘Country’, ‘Subject Descriptor’, ‘Units’, ‘Scale’,’Estimates Start After’]
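
A small aside, for my own future reference: instead of typing all those year labels by hand, the same list can most likely be generated in one line. A minimal sketch, assuming the column headers are the plain strings ‘1980’ through ‘2025’:

>> Numerical_Data=[str(year) for year in range(1980,2026)] # range(1980,2026) stops at 2025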

Now, I mess around a bit with those dictionaries and with indexing that big dataset. In a moment, you will understand why I do so. I go:

>> Subject_Descriptors=pd.unique(WEO4['Subject Descriptor']) # I extract the array of unique index labels in the column 'Subject Descriptor'. I get:

>> array([‘Gross domestic product, constant prices’,

       ‘Gross domestic product, current prices’,

       ‘Gross domestic product, deflator’,

       ‘Gross domestic product per capita, constant prices’,

       ‘Gross domestic product per capita, current prices’,

       ‘Output gap in percent of potential GDP’,

       ‘Gross domestic product based on purchasing-power-parity (PPP) share of world total’,

       ‘Implied PPP conversion rate’, ‘Total investment’,

       ‘Gross national savings’, ‘Inflation, average consumer prices’,

       ‘Inflation, end of period consumer prices’,

       ‘Six-month London interbank offered rate (LIBOR)’,

       ‘Volume of imports of goods and services’,

       ‘Volume of Imports of goods’,

       ‘Volume of exports of goods and services’,

       ‘Volume of exports of goods’, ‘Unemployment rate’, ‘Employment’,

       ‘Population’, ‘General government revenue’,

       ‘General government total expenditure’,

       ‘General government net lending/borrowing’,

       ‘General government structural balance’,

       ‘General government primary net lending/borrowing’,

       ‘General government net debt’, ‘General government gross debt’,

       ‘Gross domestic product corresponding to fiscal year, current prices’,

       ‘Current account balance’], dtype=object)

In other words, each country is characterized in the WEOOct2020 database with the above characteristics. I need to group and extract data so as to have those variables separated. The kind of transformation which I want to nail down is to transpose those variables with years. In the source version of WEOOct2020, years are separate columns that cut across three basic indexes: countries, for one, the above presented subject descriptors, for two, and finally the indexes of units and scale. The latter is important to the extent that most macroeconomic aggregates are presented either as absolute amounts or as percentages of the country’s GDP. Probably you remember from math classes at school, and those of physics and chemistry too, actually, that confusing units of measurement is a cardinal sin in science. What I want to do is to flip the thing on its side. I want each country to be associated with a series of index labels corresponding to years, and variables associated with proper units of measurement being the columns of the dataset.

In other words, right now, years are the main quantitative categories of the WEOOct2020 data frame, and categorical variables are index labels, or phenomenological units of observation. I want these two to change places, as it essentially should be: categorical variables should become phenomenological categories, and years should gracefully step down to the status of observational units.

As I don’t know what to do, I reach for what I know how to do, i.e. for creating some sort of dictionaries out of index labels. What I did for subject descriptors, I now do for units:

>> Units=pd.unique(WEO4['Units'])

…which yields:

array([‘National currency’, ‘Percent change’, ‘U.S. dollars’,

       ‘Purchasing power parity; international dollars’, ‘Index’,

       ‘Purchasing power parity; 2017 international dollar’,

       ‘Percent of potential GDP’, ‘Percent’,

       ‘National currency per current international dollar’,

       ‘Percent of GDP’, ‘Percent of total labor force’, ‘Persons’],

      dtype=object)

and…

>> Countries=pd.unique(WEO4['Country'])

…which yields:

array([‘Afghanistan’, ‘Albania’, ‘Algeria’, ‘Angola’,

       ‘Antigua and Barbuda’, ‘Argentina’, ‘Armenia’, ‘Aruba’,

       ‘Australia’, ‘Austria’, ‘Azerbaijan’, ‘The Bahamas’, ‘Bahrain’,

       ‘Bangladesh’, ‘Barbados’, ‘Belarus’, ‘Belgium’, ‘Belize’, ‘Benin’,

       ‘Bhutan’, ‘Bolivia’, ‘Bosnia and Herzegovina’, ‘Botswana’,

       ‘Brazil’, ‘Brunei Darussalam’, ‘Bulgaria’, ‘Burkina Faso’,

       ‘Burundi’, ‘Cabo Verde’, ‘Cambodia’, ‘Cameroon’, ‘Canada’,

       ‘Central African Republic’, ‘Chad’, ‘Chile’, ‘China’, ‘Colombia’,

       ‘Comoros’, ‘Democratic Republic of the Congo’, ‘Republic of Congo’,

       ‘Costa Rica’, “Côte d’Ivoire”, ‘Croatia’, ‘Cyprus’,

       ‘Czech Republic’, ‘Denmark’, ‘Djibouti’, ‘Dominica’,

       ‘Dominican Republic’, ‘Ecuador’, ‘Egypt’, ‘El Salvador’,

       ‘Equatorial Guinea’, ‘Eritrea’, ‘Estonia’, ‘Eswatini’, ‘Ethiopia’,

       ‘Fiji’, ‘Finland’, ‘France’, ‘Gabon’, ‘The Gambia’, ‘Georgia’,

       ‘Germany’, ‘Ghana’, ‘Greece’, ‘Grenada’, ‘Guatemala’, ‘Guinea’,

       ‘Guinea-Bissau’, ‘Guyana’, ‘Haiti’, ‘Honduras’, ‘Hong Kong SAR’,

       ‘Hungary’, ‘Iceland’, ‘India’, ‘Indonesia’,

       ‘Islamic Republic of Iran’, ‘Iraq’, ‘Ireland’, ‘Israel’, ‘Italy’,

       ‘Jamaica’, ‘Japan’, ‘Jordan’, ‘Kazakhstan’, ‘Kenya’, ‘Kiribati’,

       ‘Korea’, ‘Kosovo’, ‘Kuwait’, ‘Kyrgyz Republic’, ‘Lao P.D.R.’,

       ‘Latvia’, ‘Lebanon’, ‘Lesotho’, ‘Liberia’, ‘Libya’, ‘Lithuania’,

       ‘Luxembourg’, ‘Macao SAR’, ‘Madagascar’, ‘Malawi’, ‘Malaysia’,

       ‘Maldives’, ‘Mali’, ‘Malta’, ‘Marshall Islands’, ‘Mauritania’,

       ‘Mauritius’, ‘Mexico’, ‘Micronesia’, ‘Moldova’, ‘Mongolia’,

       ‘Montenegro’, ‘Morocco’, ‘Mozambique’, ‘Myanmar’, ‘Namibia’,

       ‘Nauru’, ‘Nepal’, ‘Netherlands’, ‘New Zealand’, ‘Nicaragua’,

       ‘Niger’, ‘Nigeria’, ‘North Macedonia’, ‘Norway’, ‘Oman’,

       ‘Pakistan’, ‘Palau’, ‘Panama’, ‘Papua New Guinea’, ‘Paraguay’,

       ‘Peru’, ‘Philippines’, ‘Poland’, ‘Portugal’, ‘Puerto Rico’,

       ‘Qatar’, ‘Romania’, ‘Russia’, ‘Rwanda’, ‘Samoa’, ‘San Marino’,

       ‘São Tomé and Príncipe’, ‘Saudi Arabia’, ‘Senegal’, ‘Serbia’,

       ‘Seychelles’, ‘Sierra Leone’, ‘Singapore’, ‘Slovak Republic’,

       ‘Slovenia’, ‘Solomon Islands’, ‘Somalia’, ‘South Africa’,

       ‘South Sudan’, ‘Spain’, ‘Sri Lanka’, ‘St. Kitts and Nevis’,

       ‘St. Lucia’, ‘St. Vincent and the Grenadines’, ‘Sudan’, ‘Suriname’,

       ‘Sweden’, ‘Switzerland’, ‘Syria’, ‘Taiwan Province of China’,

       ‘Tajikistan’, ‘Tanzania’, ‘Thailand’, ‘Timor-Leste’, ‘Togo’,

       ‘Tonga’, ‘Trinidad and Tobago’, ‘Tunisia’, ‘Turkey’,

       ‘Turkmenistan’, ‘Tuvalu’, ‘Uganda’, ‘Ukraine’,

       ‘United Arab Emirates’, ‘United Kingdom’, ‘United States’,

       ‘Uruguay’, ‘Uzbekistan’, ‘Vanuatu’, ‘Venezuela’, ‘Vietnam’,

       ‘West Bank and Gaza’, ‘Yemen’, ‘Zambia’, ‘Zimbabwe’], dtype=object)

Once again, I don’t really know what I am doing. I just intuitively look for some sort of landmarks in that landscape of data. By the way, this is what we all do when we don’t know what to do: we look for reliable ways to partition observable reality into categories.

Now, I want to make sure that Python has the same views as me as for what index descriptors are in that dataset. I go:

>> pd.MultiIndex.from_frame(WEO4[Index_descriptors])

… and I get:

MultiIndex([(‘Afghanistan’, …),

            (‘Afghanistan’, …),

            (‘Afghanistan’, …),

            (‘Afghanistan’, …),

            (‘Afghanistan’, …),

            (‘Afghanistan’, …),

            (‘Afghanistan’, …),

            (‘Afghanistan’, …),

            (‘Afghanistan’, …),

            (‘Afghanistan’, …),

            …

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …),

            (   ‘Zimbabwe’, …)],

           names=[‘Country’, ‘Subject Descriptor’, ‘Units’, ‘Scale’, ‘Estimates Start After’], length=8775)

Seems OK.

Now, I need to fuse somehow the index of Subject Descriptor with the Index of Units, so as to have categories ready for flipping. I keep sort of feeling my way forward, rather than seeing it clearly. Love it, actually. I create an empty data series to contain the merged indexes of ‘Subject Descriptor’ and ‘Units’:

>> Variable=pd.Series('object') # The 'object' part means that I want to have words in that data series

Now, I append and I check:

>> WEO4.append(Variable,ignore_index=True)

Aaaand… it doesn’t work. When I check ‘WEO4.info()’, I get the list of columns I had before, without the ‘Variable’. In other words, Python acknowledged that I want to append that column, and it sort of appended it, but just sort of. There is that thing I have already learnt with Python: there is a huge difference between having sort-of-expected output, on the one hand, and having it 100%, on the other hand. The former is bloody frustrating.
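
For the record, my best guess as to why it did not stick: in Pandas, ‘append’ returns a new data frame instead of modifying the one it is called on, and it appends rows rather than columns, so whatever it produced simply evaporated, as I never assigned it to anything. A minimal sketch of what would probably have done the job, namely plain column assignment:

>> WEO4['Variable']='' # direct assignment creates the new, empty column in place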

I try another trick, the ‘df.insert’ command. I do:

>> WEO4.insert(1,'Variable',' ')

I check with ‘WEO4.info()’ aaaaand….this time, it sort of worked. I get the new column ‘Variable’, yes, but all my numerical columns, the ones with year headers, have been turned back into the ‘object’ format. I f**king love programming. I do:

>> for i in range(0,len(WEO4.columns)):

    WEO4.iloc[:,i]=pd.to_numeric(WEO4.iloc[:,i], errors='ignore')

… and I check with ‘WEO4.info()’ once again. Victory: numerical is back to numerical.
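
A possibly shorter route, which I note down for later as a sketch rather than a tested recipe: applying the conversion only to the year columns, using the ‘Numerical_Data’ list defined earlier.

>> WEO4[Numerical_Data]=WEO4[Numerical_Data].apply(pd.to_numeric, errors='coerce') # converts just the year columns; anything unparseable becomes NaN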

Now, I am looking for a method to sort of concatenate smartly the contents of two incumbent columns, namely ‘Subject Descriptor’ and ‘Units’, into the new vessel, i.e. the column ‘Variable’. I found the simplest possible method, which is straightforward addition:

>> WEO4["Variable"]=WEO4["Subject Descriptor"]+WEO4["Units"]

I typed it, I executed it, and, as strange as it seems, Python seems to be OK with that. Well, after all, Python is a language, and languages have that thing: they add words to each other. It is called ‘making sentences’. Cool. I check by creating an array of unique values in the index labels of ‘Variable’:

>> Variables=pd.unique(WEO4['Variable'])

I check by just typing:

>> Variables

… and running it as a command. I get:

array([‘Gross domestic product, constant pricesNational currency’,

       ‘Gross domestic product, constant pricesPercent change’,

       ‘Gross domestic product, current pricesNational currency’,

       ‘Gross domestic product, current pricesU.S. dollars’,

       ‘Gross domestic product, current pricesPurchasing power parity; international dollars’,

       ‘Gross domestic product, deflatorIndex’,

       ‘Gross domestic product per capita, constant pricesNational currency’,

       ‘Gross domestic product per capita, constant pricesPurchasing power parity; 2017 international dollar’,

       ‘Gross domestic product per capita, current pricesNational currency’,

       ‘Gross domestic product per capita, current pricesU.S. dollars’,

       ‘Gross domestic product per capita, current pricesPurchasing power parity; international dollars’,

       ‘Output gap in percent of potential GDPPercent of potential GDP’,

       ‘Gross domestic product based on purchasing-power-parity (PPP) share of world totalPercent’,

       ‘Implied PPP conversion rateNational currency per current international dollar’,

       ‘Total investmentPercent of GDP’,

       ‘Gross national savingsPercent of GDP’,

       ‘Inflation, average consumer pricesIndex’,

       ‘Inflation, average consumer pricesPercent change’,

       ‘Inflation, end of period consumer pricesIndex’,

       ‘Inflation, end of period consumer pricesPercent change’,

       ‘Six-month London interbank offered rate (LIBOR)Percent’,

       ‘Volume of imports of goods and servicesPercent change’,

       ‘Volume of Imports of goodsPercent change’,

       ‘Volume of exports of goods and servicesPercent change’,

       ‘Volume of exports of goodsPercent change’,

       ‘Unemployment ratePercent of total labor force’,

       ‘EmploymentPersons’, ‘PopulationPersons’,

       ‘General government revenueNational currency’,

       ‘General government revenuePercent of GDP’,

       ‘General government total expenditureNational currency’,

       ‘General government total expenditurePercent of GDP’,

       ‘General government net lending/borrowingNational currency’,

       ‘General government net lending/borrowingPercent of GDP’,

       ‘General government structural balanceNational currency’,

       ‘General government structural balancePercent of potential GDP’,

       ‘General government primary net lending/borrowingNational currency’,

       ‘General government primary net lending/borrowingPercent of GDP’,

       ‘General government net debtNational currency’,

       ‘General government net debtPercent of GDP’,

       ‘General government gross debtNational currency’,

       ‘General government gross debtPercent of GDP’,

       ‘Gross domestic product corresponding to fiscal year, current pricesNational currency’,

       ‘Current account balanceU.S. dollars’,

       ‘Current account balancePercent of GDP’], dtype=object)

Cool. It seems to have worked.
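
For future reference, here is a minimal sketch of the flip I described earlier, i.e. making years the observational units and the fused variables the columns. I have not run it on the full WEOOct2020 dataset yet, so I treat it as a sketch; ‘WEO_long’ and ‘WEO_flipped’ are names I make up here, and I keep only the ‘Country’ and ‘Variable’ identifiers for simplicity:

>> WEO_long=WEO4.melt(id_vars=['Country','Variable'], value_vars=Numerical_Data, var_name='Year', value_name='Value') # the year columns collapse into one long 'Year' column

>> WEO_flipped=WEO_long.pivot_table(index=['Country','Year'], columns='Variable', values='Value') # each (country, year) pair becomes a row, each fused variable a column

By the way, if I were to redo the fusion of ‘Subject Descriptor’ with ‘Units’, I would probably add a separator for readability, e.g. WEO4['Variable']=WEO4['Subject Descriptor']+' - '+WEO4['Units'].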

Mathematical distance

I continue learning Python as regards data analysis. I have a few thoughts on what I have already learnt, and a new challenge: to repeat the same exercise with another source of data, namely the World Economic Outlook database, published by the International Monetary Fund (https://www.imf.org/en/Publications/WEO/weo-database/2020/October ). My purpose is to use that data in the same way as I used the data from Penn Tables 9.1 (see ‘Two loops, one inside the other’, for example), namely to run it through a digital intelligent structure consisting of a double algorithmic loop.

First things first, I need to do what I promised to do in ‘Two loops, one inside the other’, that is to test the cognitive value of the algorithm I presented there. By the way, as I keep working with algorithms known as ‘artificial intelligence’, I am more and more convinced that the term ‘artificial neural networks’ is not really appropriate. I think that talking about artificial intelligent structures is much closer to reality. Giving the name of ‘neurons’ to particular fragments of the algorithm reflects some of the properties of biological neurons, I get it. Yet, the neurons of a digital technology are the micro-transistors in the CPU or in the GPU. Yes, micro-transistors do what neurons do in our brain: they fire conditionally and so they produce signals. Algorithms of AI can be run on any computer with proper software. AI is software, not hardware.

Yes, I know I’m ranting. This is how I am gathering intellectual speed for my writing. Learning to program in Python has led me to a few realizations about the digital intelligent structures I am working with, as simulators of collective intelligence in human societies. Algorithms are different from equations in the sense that algorithms do things, whilst equations represent things. When I want an algorithm to do the things represented with equations, I need functional coherence between commands. A command needs data to work on, and it is a good thing if I can utilize the data it puts out. A chain of commands is functional, when earlier commands give accurate input to later commands, and when the final output of the last command can be properly stored and utilized. On the other hand, equations don’t need data to work, because equations don’t work. They just are.

I can say my equations are fine when they make sense logically. On the other hand, I can be sure my algorithm works the way it is supposed to work only when I can empirically prove its functionality by testing it. Hence, I need a method of testing it, and I need to be sure the method is itself robust. Now I understand why the hell all the tutorials on programming in Python which I could find put that ‘print(output)’ command at the end of each algorithm. Whatever the output is, printing it, i.e. displaying it on the screen, is the most elementary method of checking whether that output is what I expect it to be. By the way, I have made my own little discovery about the usefulness of the ‘print()’ command. In looping algorithms, which, by nature, are prone to looping forever if the range of iterations is not properly defined, I put ‘print(‘Finished’)’ at the very end of the code. When I see ‘Finished’ printed in the line below, I can be sure the thing has done the work it was supposed to do.

Good, I was supposed to write about the testing of my algorithm. How do I test? I start by taking small pieces of the algorithm and checking the kind of output they give. By doing that, I modified the algorithm from ‘Two loops, one inside the other’, into the form you can see below:

That’s the preliminary part: importing libraries and data for analysis >>

In [1]: import numpy as np

   …: import pandas as pd

   …: import os

   …: import math

In [2]: PWT=pd.DataFrame(pd.read_csv('PWT 9_1 no empty cells.csv',header=0)) # 'PWT 9_1 no empty cells.csv' is a CSV version of the database I made with non-empty observations from the Penn Tables 9.1 database

Now, I extract the purely numerical columns, into another data frame, which I label ‘PWT_Numerical’

In [3]: Variables=['rgdpe', 'rgdpo', 'pop', 'emp', 'emp / pop', 'avh',

   …:        'hc', 'ccon', 'cda', 'cgdpe', 'cgdpo', 'cn', 'ck', 'ctfp', 'cwtfp',

   …:        'rgdpna', 'rconna', 'rdana', 'rnna', 'rkna', 'rtfpna', 'rwtfpna',

   …:        'labsh', 'irr', 'delta', 'xr', 'pl_con', 'pl_da', 'pl_gdpo', 'csh_c',

   …:        'csh_i', 'csh_g', 'csh_x', 'csh_m', 'csh_r', 'pl_c', 'pl_i', 'pl_g',

   …:        'pl_x', 'pl_m', 'pl_n', 'pl_k']

In [4]: PWT_Numerical=pd.DataFrame(PWT[Variables])

My next step is to practice with creating dictionaries out of column names in my data frame

In [5]: Names_Output_Data=[]

   …: for i in range(42):

   …:     Name_Output_Data=PWT_Numerical.iloc[:,i].name

   …:     Names_Output_Data.append(Name_Output_Data)

I start coding the intelligent structure. I start by defining empty lists, to store data which the intelligent structure produces.

In [6]: ER=[]

   …: Transformed=[]

   …: MEANS=[]

   …: EUC=[]

I define an important numerical array in NumPy: the vector of mean expected values in the variables of PWT_Numerical.

   …: Source_means=np.array(PWT_Numerical.mean())

I open the big external loop of my intelligent structure. This loop is supposed to produce as many alternative intelligent structures as there are variables in my PWT_Numerical data frame.

   …: for i in range(42):

   …:     Name_Output_Data=PWT_Numerical.iloc[:,i].name

   …:     Names_Output_Data.append(Name_Output_Data)

   …:     Output=pd.DataFrame(PWT_Numerical.iloc[:,i]) # I make an output data frame

   …:     Mean=Output.mean()

   …:     MEANS.append(Mean) # I store the expected mean of each output variable in a separate list.

   …:     Input=pd.DataFrame(PWT_Numerical.drop(Output,axis=1)) # I make an input data frame, coupled with output

   …:     Input_STD=pd.DataFrame(Input/Input.max(axis=0)) # I standardize input data over the respective maximum of each variable

   …:     Input_Means=pd.DataFrame(Input.mean()) # I prepare two data frames sort of for later: one with the vector of means…

   …:     Input_Max=pd.DataFrame(Input.max(axis=0)) #… and another one with the vector of maximums

Now, I put in motion the intelligent structure strictly speaking: a simple perceptron, which…

   …:     for j in range(10): # … is short, for testing purposes, just 10 rows in the source data

   …:         Input_STD_randomized=np.array(Input_STD.iloc[j])*np.random.rand(41) #… sprays the standardized input data with random weights

   …:         Input_STD_summed=Input_STD_randomized.sum(axis=0) # … and then sums up sort of ∑(input variable *random weight).

   …:         T=math.tanh(Input_STD_summed) # …computes the hyperbolic tangent of summed randomized input. This is neural activation.

   …:         D=1-(T**2) # …computes the local first derivative of that hyperbolic tangent

   …:         E=(Output.iloc[j]-T)*D # … computes the local error of estimating the value of output variable, with input data neural-activated with the function of hyperbolic tangent

   …:         E_vector=np.array(np.repeat(E,41)) # I spread the local error into a vector to feed forward

   …:         Next_row_with_error=Input_STD.iloc[j+1]+E_vector # I feed the error forward

   …:         Next_row_DESTD=Next_row_with_error*Input.max(axis=0) # I destandardize

   …:         ER.append(E) # I store local errors in the list ER

   …:         ERROR=pd.DataFrame(ER) # I make a data frame out of the list ER

   …:         Transformed.append(Next_row_with_error) # I store the input values transformed by the perceptron (through the forward feed of error), in the list Transformed

   …:     TR=pd.DataFrame(Transformed) # I turn the Transformed list into a data frame

   …:     MEAN_TR=pd.DataFrame(TR.mean()) # I compute the mean values of transformed input and store them in a data frame. They are still mean values of standardized data.

   …:     MEAN_TR_DESTD=pd.DataFrame(MEAN_TR*Input_Max) # I destandardise

   …: MEANS_DF=pd.DataFrame(MEANS)

   …: print(MEANS)

   …: print('Finished')

The general problem which I encounter with that algorithm is essentially that of reading correctly and utilizing the output, or, at least, this is how I understand that problem. First, let me restate the general hypothesis which I want to test and explore with the algorithm presented above. Here it comes: for a given set of phenomena, informative about the state of a human social structure, and observable as a dataset of empirical values in numerical variables, there is a subset of variables which inform about the ethical and functional orientation of that human social structure; that orientation manifests itself as the relatively least significant transformation which the original dataset needs to undergo in order to minimize the error in estimating the orientation-informative variable as output, when the remaining variables are used as input.

When the empirical dataset in question is being used as the training set for an artificial neural network of the perceptron type, i.e. a network which tests for the optimal values in the input variables so as to minimize the error in estimating the output variable, such neural testing transforms the original dataset into a specific version thereof. I want to know how far away from the original empirical dataset the specific transformation, oriented on a specific output, goes. I measure that mathematical distance as the Euclidean distance between the vector of mean expected values in the transformed dataset, and the same vector in the original one.
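
Just to have that metric spelled out: for the vector of means m = (m1, …, mn) in the original dataset, and the vector of means m’ = (m’1, …, m’n) in the transformed one, the Euclidean distance is D = √(∑(mi – m’i)²). In NumPy, with both vectors stored as arrays, it is one line:

>> D=np.linalg.norm(Source_means-Transformed_means) # 'Transformed_means' is a placeholder name for whatever array holds the means of the transformed dataset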

Therefore, I need two data frames in Pandas, or two arrays in NumPy, one containing the mean expected values of the original input data, the other storing the mean expected values of the transformed dataset. Here is where my problems start with the algorithm presented above. The ‘TR’ data frame has a different shape and structure than the ‘Input’ data frame from which, technically, it is derived. The ‘Input’ data frame has 41 columns, and ‘TR’ has 42 columns. Besides, one column from ‘Input’, namely ‘rgdpe’, AKA expenditure-side real GDP, moves from being the first column in ‘Input’ to being the last column in ‘TR’. For the moment, I have no clue what’s going on at that level. I even checked the algorithm with a debugger, available with the integrated development environment called Spyder (https://www.spyder-ide.org ). Technically, as far as the grammar of Python is concerned, the algorithm is OK. Still, it produces different vectors of mean expected values in transformed data than I expect. I don’t even know where to start looking for a solution.
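
One hedged guess, which I note down here so as to check it later: in the code above, the lists ‘ER’ and ‘Transformed’ are created once, before the big external loop, and they are never emptied inside it. If that is the problem, then rows produced under different ‘Input <> Output’ configurations pile up in the same ‘Transformed’ list, and since each configuration drops a different output column, the ‘TR’ data frame built from that list ends up with the union of all column labels, i.e. 42 of them, with ‘rgdpe’ pushed towards the end because it is missing from the very first rows appended. If the guess is right, the fix would be to reset those containers at the top of each pass of the big loop, something like:

   …: for i in range(42):

   …:     ER=[] # reset the error container for this particular 'Input <> Output' configuration

   …:     Transformed=[] # reset the container of transformed rows as well

   …:     # … and the rest of the big loop as before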

There is one more thing I want to include in this algorithm, which I have already been doing in Excel. At each row of transformed data, thus at each ‘Next_row_with_error’, I want to add a mirroring row of mean Euclidean distance from each individual variable to all the remaining ones. It is a measure of internal coherence in the process of learning through trial and error, and I already know, having learnt it by trial and error, that including that specific metric, and feeding it forward together with the error, changes a lot in the way a perceptron learns.    
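
I do not have the Python version of that coherence metric yet, so below is just a first sketch of how I think it could be computed for a single row of standardised data. The function name is mine, and the exact formula is an assumption based on what I have been doing in Excel, i.e. for each variable, the mean distance to all the remaining variables of the same row:

>> def mean_mutual_distance(row):

    >> row=np.asarray(row, dtype=float)

    >> pairwise=np.abs(row[:,None]-row[None,:]) # distance between each pair of variables, taken one dimension at a time

    >> return pairwise.sum(axis=1)/(len(row)-1) # for each variable, the mean distance to all the remaining ones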

Two loops, one inside the other

I am developing my skills in programming by attacking the general construct of Markov chains and state space. My theory on the bridging between collective intelligence in human societies and artificial neural networks as simulators thereof is that both are intelligent structures. I assume that they learn by producing many alternative versions of themselves whilst staying structurally coherent, and they pitch each such version against a desired output, just to see how fit that particular take on existence is, regarding the requirements in place.  

Mathematically, that learning-by-doing is a Markov chain of states, i.e. a sequence of complex states, described by a handful of variables, such that each consecutive state in the sequence is a modification of the preceding state, through a logically coherent σ-algebra. My so-far findings suggest that orienting the intelligent structure on specific outcomes, out of all those available, is crucial for the path of learning that structure takes. In other words, the general hypothesis I am sniffing around and digging into is that the way an intelligent structure learns is principally determined by the desired outcomes which the structure is after, more than by the exact basket of inputs it uses. Stands to reason, for a neural network: the thing optimises inputs so as to make it fit to the outcome it seeks to get as close to as possible.  

As I am taking a real taste for stepping out of my cavern, I have installed Anaconda on my computer, from https://www.anaconda.com/products/individual/download-success . When I use Anaconda, I use the same JupyterLab online functionality which I have been using so far, with one difference. Anaconda allows me to create a user account with JupyterLab, and to have all my work stored on that account. Probably, there are some storage limits, yet the thing is practical.

Anyway, I want to program in Python, just as I do it in Excel, intelligent structures able to emulate the collective intelligence of human societies. A basic finding of mine, in the so-far research, is that intelligent structures alter their behaviour significantly depending on the outcome they pursue. The initial landscape I start operating in is akin to a junkyard of information. I go to the website of the World Bank, for example, I mean the one with freely available data, AKA https://data.worldbank.org , and I start rummaging. Quality of life, size of economies, headcount of populations… What else? Oh, yes, there are things about education, energy consumption and whatnot. All that stuff just piled up nicely, each item easy to retrieve, and yet, how does it all make sense together? My take on the thing is that there is stuff going on, like all the time and everywhere. We are part of that ongoing stuff, actually. Out of that stream of happening, we perceptually single out phenomenological cuts, and we isolate those specific cuts because we are able to measure them with some kind of gauge. Data-driven observation of ourselves in the world is closely connected to our science of measuring and counting stuff. Have you noticed that a basic metric, i.e. how many of us there are around, can take a denominator of one – when we count the population of a city – or a denominator of 10 000, when we are interested in the incidence of criminality?

Each quantitative variable I can observe, and download the dataset of, from https://data.worldbank.org comes out of that complex process of collective cognition, resembling a huge bunch of psychos walking around with rulers and abacuses, trying to measure everything they perceive. I use data as phenomenological description of both the reality those psychos (me included) live in, and the way they measure that reality. I want to check which among those quantitative variables are particularly suitable to represent the things we are really after, our collectively desired outcomes. The method I use to do it consists in producing as many variations of the original dataset as I have variables. Each variation of the original dataset has one variable singled out as output, and the remaining ones are input. I run each such variation through a simple neural network – the simpler, the better – where standardised, randomly weighed and neurally activated input gets compared with the pre-set output. I measure the mean expected values of all the variables in such a transformation, i.e. when I run it through 3000 experimental rounds, I measure those means over the same 3000 rounds. I compute the Euclidean distance between each such vector of means and its cousin computed for the original dataset. I assume that, with rigorously the same logical structure of the neural network, those variations differ from each other just by the output variable they are pegged on. When I say ‘pegged’, by the way, I mean that the output variable is not subject to random weighing, i.e. it is not being experimented with. It comes exogenously, and is taken as it is.

I noticed that each time I do that procedure, with whatever set of variables I take, one or two among them, when taken as output, produce variations much closer to the original dataset than the others, in terms of Euclidean distance. It looks as if the neural network, when pegged on those particular variables, emulated a process of adaptation particularly similar to what is represented by the original empirical data.

Now, I want to learn how to program, in Python, the production of alternative ‘input <> output’  couplings out of a source dataset. I already know the general drill for producing just one such coupling. Once I have my dataset read out of a CSV file into a Data Frame in Python Pandas, I start with creating a dictionary of all the numerical columns:

>> dict_numerical = ['numerical_column1', 'numerical_column2', …, 'numerical_column_n']

A simple way of doing that, with large data frames, is to type in Python:

>> df.columns

… and it yields a string of labels in quotation marks ‘’, separated with commas. I just copy that lot, without the non-numerical columns, into the square brackets of dict_numerical = […], and Bob’s my uncle.
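
There seems to be a shortcut for that, which I have spotted in the Pandas documentation and note here as a sketch, since I have not fully tested it on my own files yet:

>> dict_numerical=df.select_dtypes(include='number').columns.tolist() # keeps only the columns Pandas already recognises as numerical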

Then I make a strictly numerical version of my database, by:

>> df_numerical = pd.DataFrame(df[dict_numerical])

By the way, each time I produce a new data frame, I check its structure with commands ‘df.info()‘ and ‘df.describe()’. At my neophytic level of programming, I want to make sure that what I have in a strictly numerical database is strictly numerical data, i.e. the ‘float64’ type. Here, one hint: when you convert your data from an original Excel file, pay attention to having your decimal point as a point, i.e. as ‘0.0’, not as a comma. With a comma, the Pandas reader tends to interpret such data by default as ‘object’. Annoying. 
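
For that particular annoyance, there is a dedicated parameter in the Pandas reader, which I note down as a hedge for the next time I import a comma-decimal file (‘my_file.csv’ is just a placeholder name):

>> df=pd.DataFrame(pd.read_csv('my_file.csv', decimal=',')) # 'decimal' tells Pandas what the decimal separator is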

Once I have that numerical data frame in place, I make another dictionary of the type:

>> dict_for_Input_pegged_on_X_as_output = ['numerical_input_column1', 'numerical_input_column2', …, 'numerical_input_column_k']

… where k = n -1, of course, and the 1 corresponds to the variable X, supposed to be the output one. 

I use that dictionary to split df_numerical:

>> df_output_X = df_numerical['numerical_column_X']

>> df_input_for_X = df_numerical[dict_for_Input_pegged_on_X_as_output]

I would like to automatise the process. It means I need a loop. I am looping over the range of numerical columns in df_numerical. Let’s dance. I start routinely, in my Anaconda-Jupyter Lab-powered notebook. By the way, I noticed an interesting practical feature of Jupyter Lab. When you start it directly from its website https://jupyter.org , the notebook you can use has somewhat limited functionality as compared to the notebook you can create when accessing Jupyter Lab from the Anaconda app on your computer. In the latter case you can create an account with Jupyter Lab, with a very useful functionality of mirroring the content of your cloud account on your hard drive. I know, I know: we use the cloud so as not to collect rubbish on our own disk. Still, Python files are small, they take little space, and I discovered that this mirroring stuff is really useful.

I open up with importing the libraries I think I will need:

>> import numpy as np

>> import pandas as pd

>> import math

>> import os

As I am learning new stuff, I prefer taking known stuff as my data. Once again, I use a dataset which I made out of Penn Tables 9.1., by kicking out all the rows with empty cells [see: Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, www.ggdc.net/pwt ].

I already have that dataset in my working directory. By the way, when you install Anaconda on a MacBook, its working directory is by default the root directory of the user’s profile. For the moment, I keep it that way. Anyway, I have that dataset and I read it into a Pandas data frame:

>> PWT=pd.DataFrame(pd.read_csv('PWT 9_1 no empty cells.csv',header=0))

I create my first dictionaries. I type:

>> PWT.columns

… which yields:

Index([‘country’, ‘year’, ‘rgdpe’, ‘rgdpo’, ‘pop’, ’emp’, ’emp / pop’, ‘avh’,

       ‘hc’, ‘ccon’, ‘cda’, ‘cgdpe’, ‘cgdpo’, ‘cn’, ‘ck’, ‘ctfp’, ‘cwtfp’,

       ‘rgdpna’, ‘rconna’, ‘rdana’, ‘rnna’, ‘rkna’, ‘rtfpna’, ‘rwtfpna’,

       ‘labsh’, ‘irr’, ‘delta’, ‘xr’, ‘pl_con’, ‘pl_da’, ‘pl_gdpo’, ‘csh_c’,

       ‘csh_i’, ‘csh_g’, ‘csh_x’, ‘csh_m’, ‘csh_r’, ‘pl_c’, ‘pl_i’, ‘pl_g’,

       ‘pl_x’, ‘pl_m’, ‘pl_n’, ‘pl_k’],

      dtype=’object’)

…and I create the dictionary of quantitative variables:

>> Variables=['rgdpe', 'rgdpo', 'pop', 'emp', 'emp / pop', 'avh',

       'hc', 'ccon', 'cda', 'cgdpe', 'cgdpo', 'cn', 'ck', 'ctfp', 'cwtfp',

       'rgdpna', 'rconna', 'rdana', 'rnna', 'rkna', 'rtfpna', 'rwtfpna',

       'labsh', 'irr', 'delta', 'xr', 'pl_con', 'pl_da', 'pl_gdpo', 'csh_c',

       'csh_i', 'csh_g', 'csh_x', 'csh_m', 'csh_r', 'pl_c', 'pl_i', 'pl_g',

       'pl_x', 'pl_m', 'pl_n', 'pl_k']

The ‘Variables’ dictionary serves me to mutate the ‘PWT’ dataframe into its close cousin, obsessed with numbers, namely into ‘PWT_Numerical’:

>> PWT_Numerical = pd.DataFrame(PWT[Variables])

I quickly check the PWT_Numerical’s driving licence, by typing ‘PWT_Numerical.info()’ and  ‘PWT_Numerical.shape’. All is well, data is in the ‘float64’ format, there are 42 columns and 3006 rows, the guy is cleared to go.

Once I have that nailed down, I mess around a bit with creating names for my cloned datasets. I practice with the following loop:

>> for i in range(42):

    print("Input_for_"+PWT_Numerical.iloc[:,i].name)

It yields a list of names for input databases in various ‘input <> output’ configurations of my experiment with the PWT 9.1 dataset. The ‘print’ command gives a string of 42 names: Input_for_rgdpe, Input_for_rgdpo, Input_for_pop etc. 

In my next step, I want to make that outcome durable. The ‘print’ command just prints the output of the loop, it does not store it in any logical structure. The output is gone as soon as it is printed. I create a loop that makes a dictionary, this time with names of output data frames:

>> Names_Output_Data=[] # Here, I create an empty dictionary (strictly speaking, a Python list; I keep calling these repositories ‘dictionaries’)

>> for i in range(42): # I design the loop

    >> Name_Output_Data=PWT_Numerical.iloc[:,i].name # I create a mechanism for generating strings to fill the dictionary up.

    >> Names_Output_Data.append(Name_Output_Data) # This is the mechanism of appending the dictionary with names generated in the previous command

I check the result by typing the name of the dictionary – ‘Names_Output_Data’ – and executing (Shift + Enter in Jupyter Lab). It yields a full dictionary, filled with column names from PWT_Numerical

Now, I pass to designing my Markov chain of states, i.e. to making an intelligent structure which produces many alternative versions of itself and tests them for fitness to meet a pre-defined desired outcome. In my neophyte’s logic, I see it as two loops, one inside the other.

The big, external loop is the one which clones the initial ‘PWT_Numerical’ into pairs of data frames of the style: ’Input variables’ plus ‘Output variable’. I make as many such cloned pairs as there are numerical variables in PWT_Numerical, i.e. 42. Thus, my loop opens up as ‘for i in range(42):’. Inside each iteration of that loop, there is an internal loop of passing the input variables  through a very simple perceptron, assessing the error in estimating the output variable, and then feeding the error forward. Now, I will present below the entire code for those two loops, and then discuss what works, what doesn’t, and what I have no idea how to check whether it works or not. The code is grammatically correct in Python, i.e. it does not yield any error message when put to execution (Shift + Enter in JupyterLab, by the way).  After I present the entire code, I will discuss, further below, its particular parts. Anyway, here it is:

>> List_of_Output_DB=[]

>> Names_Output_Data=[]

>> MEANS=[]

>> Source_means=np.array(PWT_Numerical.mean())

>> EUC=[]

>> for i in range(42):

    >> Name_Output_Data=PWT_Numerical.iloc[:,i].name

    >> Names_Output_Data.append(Name_Output_Data)

    >> Output=pd.DataFrame(PWT_Numerical.iloc[:,i])

    >> Mean=Output.mean()

    >> MEANS.append(Mean)

    >> Input=pd.DataFrame(PWT_Numerical.drop(Output,axis=1))

    >> Input_STD=pd.DataFrame(Input/Input.max(axis=0))

    >> ER=[]

    >> Transformed=[]

    >> for j in range(30):

        >> Input_STD_randomized=Input_STD.iloc[j]*np.random.rand(41)

        >> Input_STD_summed=Input_STD_randomized.sum(axis=0)

        >> T=math.tanh(Input_STD_summed)

        >> D=1-(T**2)

        >> E=(Output.iloc[j]-T)*D

        >> E_vector=np.array(np.repeat(E,41))

        >> Next_row_with_error=Input_STD.iloc[j+1]+E_vector

        >> Next_row_DESTD=Next_row_with_error*Input.max(axis=0)

        >> ER.append(E)

        >> ERROR=pd.DataFrame(ER)

        >> Transformed.append(Next_row_DESTD)

    >> CLONE=pd.DataFrame(Transformed).mean()

    >> frames=[CLONE,MEANS[i]]

    >> CLONE_Means=np.array(pd.concat(frames))

    >> Euclidean=np.linalg.norm(Source_means-CLONE_Means)

    >> EUC.append(Euclidean)

>> print('Finished')

Here is a shareable link to my Python file with that code inside: http://localhost:8880/lab/tree/Practice%20Dec%208%202020.ipynb  . I hope it works. 

I start explaining this code casually, from its end. This is a little trick I discovered as regards looping on datasets. Looping takes time and energy. In my struggles to learn Python, I have already managed to make a loop which kept looping forever. All I did was to call the loop as ‘for i in range PWT.index:’, without putting any ‘break’ command at the end. Yes, the index of a data frame is a finite number, yet it is also a sequence. When you don’t break explicitly the looping over that sequence, it will loop over and over again. 

Anyway, the trick. I put the command ‘print(‘Finished’)’ at the very end of the code, after all the loops. When the thing is done with being an intelligent structure at work, it simply prints ‘Finished’ in the next line. Among other things, it allows me to count the time it needs to deal with a given amount of data. As you might have already noticed, whilst I have a dataset with index = 3005 rows, I made the internal loop of the code go over just 30 rows: ‘for j in range (30)’. The code took some 10 seconds in total to create 42 big loops (‘for i in range (42)’), and then to loop over 30 rows of data inside each of them. It gives 42*30 = 1260 experimental rounds in those 10 seconds, thus something like 0.0079 seconds per round. If I took the full dataset of 3005 rows, it would be like 42*3000*0.0079 ≈ 1000 seconds, i.e. about 16.7 minutes. Satanic. I like it.
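
By the way, a minimal way of actually measuring that time, instead of eyeballing it, could be the standard ‘time’ module. Just a sketch, not yet wired into the loops above:

>> import time

>> start=time.perf_counter()

>> # … the two loops go here …

>> print('Finished in', time.perf_counter()-start, 'seconds')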

Before opening each level of looping, I create empty lists. You can see:

>> List_of_Output_DB=[]

>>Names_Output_Data=[]

>>MEANS=[]

>> Source_means=np.array(PWT_Numerical.mean())

>> EUC=[]

… before I open the external loop, and…

  >> ER=[]

>> Transformed=[]

… before the internal loop.

I noticed that I create those empty lists in a loop, essentially. This is more than just a play on words. When I code a loop, I have output of the loop. The loop does something, and as it does, I discover I want to store that particular outcome in some kind of repository vessel, and I go back to the lines of code before the loop opens and I add an empty list, just in case. I come up with a smart name for the list, e.g. MEANS, which stands for the mean values of numerical variables, such as they are after being transformed by the perceptron. Mathematically, it is the most basic representation of the expected state in a particular transformation of the source dataset ‘PWT’.

I code it as ‘MEANS=[]’, and, once I have done that, I add a mechanism of updating the list, inside the loop. This, in turn, goes in two steps. First, I code the variable which should be stored in that list. In the case of ‘MEANS’, as this list is created before I open the big loop of 42 ‘input <> output’ mutations, I append it in that loop. Logically, it must be appended with the mean expected values of output variables in each instance of the big loop. I code it in the big loop, and before opening the internal loop, as:

>> Output=pd.DataFrame(PWT_Numerical.iloc[:,i])  # Here, I define the data frame for the output variable in this specific instance of the big loop   

>> Mean=Output.mean() # Now, I define the function to generate values, which later append the ‘MEANS’ list

    >> MEANS.append(Mean) # Finally, I append the ‘MEANS’ list with values generated in the previous line of the code. 

It is a good thing for me to write about the things I do. I have just noticed that I use two different methods of storing partial outcomes of my loops. The first one is the one I have just presented. The second one is visible in the part of code presented below, included in the internal loop ‘for j in range(number of rows experimented with)’, range(30) in the occurrence tested. 

In this situation, I need to store in some kind of repository the values of input variables transformed by the neural network, i.e. with local error from each experimental round fed forward to the next experimental round. I need to store the way my data looks under each possible orientation of the intelligent structure I assume it represents. I denote that data under the general name ‘Transformed’, and, before opening the internal loop, just at the end of the big external loop, I define an empty list: ‘Transformed=[]’, which is supposed to contain those values I want.

In other words, when I structure the big external loop, I go like: 

# Step 1: for each variable in the dataset, i.e. ‘for i in range(number of variables)’, split the overall dataset into this variable as the output, in a separate data frame, and all the other variables grouped separately as input. These are the lines of code:

>> Output=pd.DataFrame(PWT_Numerical.iloc[:,i])  # I define the output variable 

[…]    

>> Input=pd.DataFrame(PWT_Numerical.drop(Output,axis=1)) # I drop the output from the entire dataset and I group the remaining columns as ‘Input’

# Step 2: I standardise the input data by denominating it over the respective maximums for each variable:    

>> Input_STD=pd.DataFrame(Input/Input.max(axis=0))

# Step 3: I define, at the end of the big external loop, containers for data which I want to store from each round of the big loop:

>> ER=[] # This is the list of local errors generated by the perceptron when working with each ‘Input <> Output’ configuration

    >> Transformed=[] # That’s the container for input data transformed by the perceptron 

# Step 4: I open the internal loop, with ‘for j in range(number of rows to experiment with)’, and I start by coding the computational procedure of the perceptron:

>> Input_STD_randomized=Input_STD.iloc[j]*np.random.rand(41) # I weigh each empirical, standardised value in this specific row with a random weight

        >> Input_STD_summed=Input_STD_randomized.sum(axis=0) # I sum the randomised values from that specific row of input. This line of code together with the preceding one are equivalent to the mathematical structure ‘∑x*random’.

        >> T=math.tanh(Input_STD_summed) # I compute the hyperbolic tangent of summed, randomised input data

        >> D=1-(T**2) # I compute the local first derivative of the hyperbolic tangent

        >> E=(Output.iloc[j]-T)*D # I compute the error, as: (Expected Output minus Hyperbolic Tangent of Randomised Input) times local derivative of the Hyperbolic Tangent

        >> E_vector=np.array(np.repeat(E,41)) # I create a NumPy array, with the error repeated as many times as there are input variables.

>> Next_row_with_error=Input_STD.iloc[j+1]+E_vector # I feed the error forward. In the next experimental row ‘j+1’, error from row ‘j’ is added to the value of each standardised input variable. This is probably the most elementary representation of learning: I include into my input for the next go the knowledge about what I f**ked up in the previous go. This line creates the transformed input data I want to store later on. 

# Step 5: I collect and store information about the things my perceptron did to input data in the given j-th round of the internal loop:

>> Next_row_DESTD=Next_row_with_error*Input.max(axis=0) # I destandardise the data transformed by the perceptron. It is like translating the work of the perceptron, which operates on standardised values, back into the measurement scale proper to each variable. In a sense, I deneuralise that data. 

        >> ER.append(E) # I collect and store error in the ER list

        >> ERROR=pd.DataFrame(ER) # I transform the ER list into a data frame, which I name ‘ERROR’. I do it a few times with different data, and, quite honestly, I do it intuitively. I already know that data frames defined in Pandas are somehow handier to do statistics with than lists defined in the basic code of Python. Just as honestly: I know too little yet about programming to know whether this turn of code makes sense at all.

        >> Transformed.append(Next_row_DESTD) # I collect and store the destandardized, transformed input data in the ‘Transformed’ list.

# Step 6: I step out of both loops, and I start putting some order in the data I generated and collected. Stepping out of both loops means that in my code, the lines presented below have no indent. They all start at the left margin, just as the definition of the big external loop.

       >> CLONE=pd.DataFrame(Transformed).mean() # I transform the ‘Transformed’ list into a data frame. Same procedure as two lines of code earlier, only now, I know why I do it. I intend to put together the mean values of destandardised input with the mean value of output, and I am going to do it by concatenation of data frames. 

    >> frames=[CLONE,MEANS[i]] # I define the data frames for concatenation. I put together mean values in the input variables, generated in this specific, i-th round of the big external loop, with the mean value of the output variable corresponding to the same i-th round. You can notice that in the full code, such as I presented it earlier in this update, at this verse of code I move back by one indent. In other words, this definition is already outside of the internal loop, and still inside the big external loop. 

    >> CLONE_Means=np.array(pd.concat(frames)) # I concatenate the data I defined in the previous line. 

    >> Euclidean=np.linalg.norm(Source_means-CLONE_Means) # Something I need for my science. I estimate the mathematical similarity between the source data set ‘PWT_Numerical’, and the data set created by the perceptron, in the given i-th round of the big external loop. I do it by computing the Euclidean distance between the respective vectors of mean expected values in this specific pair of datasets, i.e. the pair ‘source vs i-th clone’.

    >> EUC.append(Euclidean) # I collect and store information generated in the ‘Euclidean’ line. I store it in the EUC list, which I opened as empty before starting the big external loop. 

One step out of the cavern

I have made one step further in my learning of programming. I have finally learnt at least one method of standardising numerical values in a dataset. In a moment, I will show exactly what method I nailed down. First, I want to share a thought of a more general nature. I learn programming in order to enrich my research on the application of artificial intelligence to simulating collective intelligence in human societies. I have already discovered the importance of libraries, i.e. ready-made pieces of code, possible to call with a simple command, and short-cutting across many verses of code which I would otherwise have to write laboriously. I mean libraries such as NumPy, Pandas, Math etc. It is very similar to human consciousness. Using pre-constructed cognitive structures, i.e. language and culture, is a turbo boost for whatever we do of the things that humans are supposed to do when being a civilisation.

Anyway, I kept working with the dataset which I had already mentioned in my earlier updates, namely a version of Penn Tables 9.1., cleaned of all the rows with empty cells [see: Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, www.ggdc.net/pwt ]. Thus I started by creating an online notebook at JupyterLab (https://jupyter.org/try), with Python 3 as its kernel. Then I imported what I needed from Python in terms of ready-cooked culture, i.e. I went:

>> import numpy as np

>> import pandas as pd

>> import os

I uploaded the ‘PWT 9_1 no empty cells.csv’ file from my computer, and, just in case, I checked its presence in the working directory, with >> os.listdir(). I read the contents of the file into a Pandas Data Frame, which spells: PWT = pd.DataFrame(pd.read_csv('PWT 9_1 no empty cells.csv')). Worked.

In my next step, as I planned to mess up a bit with the columns of that dataset, I typed: PWT.columns. The thing nicely gave me back a list of columns, i.e. literally a list of labels in quotation marks [‘’]. I used that list to create a dictionary of columns with numerical values, and therefore the most interesting to me. I went:

>> Variables=['rgdpe', 'rgdpo', 'pop', 'emp', 'emp / pop', 'avh',

       'hc', 'ccon', 'cda', 'cgdpe', 'cgdpo', 'cn', 'ck', 'ctfp', 'cwtfp',

       'rgdpna', 'rconna', 'rdana', 'rnna', 'rkna', 'rtfpna', 'rwtfpna',

       'labsh', 'irr', 'delta', 'xr', 'pl_con', 'pl_da', 'pl_gdpo', 'csh_c',

       'csh_i', 'csh_g', 'csh_x', 'csh_m', 'csh_r', 'pl_c', 'pl_i', 'pl_g',

       'pl_x', 'pl_m', 'pl_n', 'pl_k']

The ‘Variables’ dictionary served me to make a purely numerical mutation of my dataset, namely: PWTVar=pd.DataFrame(PWT[Variables]).  

I generated the fixed components of standardisation in my data, i.e. maximums, means, and standard deviations across columns in PWTVar. It looked like this: 

>> Maximums=PWTVar.max(axis=0)

>> Means=PWTVar.mean(axis=0)

>> Deviations=PWTVar.std(axis=0)

The ‘axis=0’ part means that I want to generate those values across columns, not rows. Once that was done, I made my two standardisations of data from PWTVar, namely: a) standardisation over maximums, as s(x) = x/max(x), and b) standardisation by mean-reversion, where s(x) = [x – avg(x)]/std(x). I did it as:

>> Standardized=pd.DataFrame(PWTVar/Maximums)

>> MR=pd.DataFrame((PWTVar-Means)/Deviations)

I used here a built-in behaviour of Python Pandas, i.e. the fact that it automatically operates on data frames as matrices. When, for example, I subtract ‘Means’ from ‘PWTVar’, the one-row matrix of ‘Means’ gets subtracted from each among the 3005 rows of ‘PWTVar’, etc. I checked those two data frames with commands such as ‘df.describe()’, ‘df.shape’, and ‘df.info()’, just to make sure they are what I think they are. They are, indeed.
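
One more sanity check which I find reassuring with the mean-reverted version: after that transformation, each column should have a mean practically equal to 0 and a standard deviation practically equal to 1, which is easy to eyeball:

>> MR.mean(axis=0) # should be practically zero for every column

>> MR.std(axis=0) # should be practically one for every column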

Standardisation allowed me to step out of my cavern, in terms of programming artificial neural networks. The next step I took was to split my numerical dataset PWTVar into one output variable, on the one hand, and all the other variables grouped as input. As output, I took a variable which, as I have already found out in my research, is extremely important in social change seen through the lens of Penn Tables 9.1. This is ‘avh’ AKA the average number of hours worked per person per year. I did:  

>> Output_AVH=pd.DataFrame(PWTVar['avh'])

>> Input_dict=['rgdpe', 'rgdpo', 'pop', 'emp', 'emp / pop', 'hc', 'ccon', 'cda',

        'cgdpe', 'cgdpo', 'cn', 'ck', 'ctfp', 'cwtfp', 'rgdpna', 'rconna',

        'rdana', 'rnna', 'rkna', 'rtfpna', 'rwtfpna', 'labsh', 'irr', 'delta',

        'xr', 'pl_con', 'pl_da', 'pl_gdpo', 'csh_c', 'csh_i', 'csh_g', 'csh_x',

        'csh_m', 'csh_r', 'pl_c', 'pl_i', 'pl_g', 'pl_x', 'pl_m', 'pl_n',

        'pl_k']

# As you can see, 'avh' is absent from the 'Input_dict' dictionary

>> Input = pd.DataFrame(PWT[Input_dict])

The last thing that worked, in this episode of my learning, was to multiply the ‘Input’ dataset by a matrix of random float values generated with NumPy:

>> Randomized_input=pd.DataFrame(Input*np.random.rand(3006,41)) 

## Gives an entire Data Frame of randomized values
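
The logical next step, the same one taken in the looped versions of this code presented earlier in this document, would be to sum those randomised inputs row-wise and to pass the sums through the hyperbolic tangent, i.e. through the neural activation. A sketch:

>> Signal=np.tanh(Randomized_input.sum(axis=1)) # ∑(input*random weight) for each row, then neural activation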