Money being just money for the sake of it

I have been doing that research on the role of cities in our human civilization, and I remember the moment of first inspiration to go down this particular rabbit hole. It was the beginning of March, 2020, when the first epidemic lockdown has been imposed in my home country, Poland. I was cycling through streets of Krakow, my city, from home to the campus of my university. I remember being floored at how dead – yes, literally dead – the city looked. That was the moment when I started perceiving cities as something almost alive. I started wondering how will pandemic affect the mechanics of those quasi-living, urban organisms.

Here is one aspect I want to discuss: restaurants. Most restaurants in Krakow turn into takeouts. In the past, each restaurant had the catering part of the business, but it was mostly for special events, like conferences, weddings and whatnot. Catering was sort of a wholesale segment in the restaurant business, and the retail was, well, the table, the napkin, the waiter, that type of story. That retail part was supposed to be the main one. Catering was an addition to that basic business model, which entailed a few characteristic traits. When your essential business process takes place in a restaurant room with tables and guests sitting at them, the place is just as essential. The location, the size, the look, the relative accessibility: it all played a fundamental role. The rent for the place was among the most important fixed costs of a restaurant. When setting up business, one of the most important questions – and risk factors – was: “Will I be able to attract sufficiently profuse customers to this place, and to ramp up prices sufficiently high to as to pay the rent for the place and still have satisfactory profit?”. It was like a functional loop: a better place (location, look) meant more select a clientele and higher prices, which required to pay a high rent etc.

As I was travelling to other countries, and across my own country, I noticed many times that the attributes of the restaurant as physical place were partly substitute to the quality of food. I know a lot of places where the customers used to pretend that the food is excellent just because said food was so strange that it just didn’t do to say it is crappy in taste. Those people pretended they enjoy the food because the place was awesome. Awesomeness of the place, in turn, was largely based on the fact that many people enjoyed coming there, it was trendy, stylish, it was a good thing to show up there from time to time, just to show I have something to show to others. That was another loop in the business model of restaurants: the peculiar, idiosyncratic, gravitational field between places and customers.

In that business model, quite substantial expenses, i.e.  the rent, and the money spent on decorating and equipping the space for customers were essentially sunk costs. The most important financial outlays you made to make the place competitive did not translate into any capital value in your assets. The only way to do such translation was to buy the place instead of renting it. Advantageous, long-term lease was another option. In some cities, e.g. the big French ones, such as Paris, Lyon or Marseille, the market of places suitable for running restaurants, both legally and physically, used to be a special segment in the market of real estate, with its own special contracts, barriers to entry etc.   

As restaurants turn into takeouts, amidst epidemic restrictions, their business model changes. Food counts in the first place, and the place counts only to the extent of accessibility for takeout. Even if I order food from a very fancy restaurant, I pay for food, not for fanciness. When consumed at home, with the glittering reputation of the restaurant taken away from it, food suddenly tastes differently. I consume it much more with my palate and much less with my ideas of what is trendy. Preparation and delivery of food becomes the essential business process. I think it facilitates new entries into the market of gastronomy. Yes, I know, restaurants are going bankrupt, and my take on it is that places are going bankrupt, but people stay. Chefs and cooks are still there. Human capital, until recently being 50/50 important – together with the real estate aspect of the business – becomes definitely the greatest asset of the restaurants’ sector as they focus on takeout. The broadly spoken cooking skills, including the ability to purchase ingredients of good quality, become primordial. Equipping a business-scale kitchen is not really rocket science, and, what is just as important, there is a market for second-hand equipment of that kind. The equipment of a kitchen, in a takeout-oriented restaurant, is much more of an asset than the decoration of a dining room. The rent you pay, or the market price of the whole place in the real-estate market are much lower, too, as compared to classical restaurants.

What restaurant owners face amidst the pandemic is the necessity to switch quickly, and on a very short notice of 1 – 2 weeks, between their classical business model based on a classy place to receive customers, and the takeout business model, focused on the quality of food and the promptness of delivery. It is a zone of uncertainty more than a durable change, and this zone is

associated with different cash flows and different assets. That, in turn, means measurable risk. Risk in big amounts is an amount, essentially, much more than a likelihood. We talk about risk, in economics and in finance, when we are actually sure that some adverse events will happen, and we even know what is going to be the total amount of adversity to deal with; we just don’t know where exactly that adversity will hit and who exactly will have to deal with it.

There are two basic ways of responding to measurable risk: hedging and insurance. I can face risk by having some aces up my sleeve, i.e. by having some alternative assets, sort of fall-back ones, which assure me slightly softer a landing, should the s**t which I hedge against really happen. When I am at risk in my in-situ restaurant business, I can hedge towards my takeout business. With time, I can discover that I am so good at the logistics of delivery that it pays off to hedge towards a marketing platform for takeouts rather than one takeout business. There is an old saying that you shouldn’t put all your eggs in the same basket, and hedging is the perfect illustration thereof. I hedge in business by putting my resources in many different baskets.

On the other hand, I can face risk by sharing it with other people. I can make a business partnership with a few other folks. When I don’t really care who exactly those folks are, I can make a joint-stock company with tradable shares of participation in equity. I can issue derivative financial instruments pegged on the value of the assets which I perceive as risky. When I lend money to a business perceived as risky, I can demand it to be secured with tradable notes AKA bills of exchange. All that is insurance, i.e. a scheme where I give away part of my cash flow in exchange of the guarantee that other people will share with me the burden of damage, if I come to consume my risks. The type of contract designated expressis verbis as ‘insurance’ is one among many forms of insurance: I pay an insurance premium in exchange o the insurer’s guarantee to cover my damages. Restaurant owners can insure their epidemic-based risk by sharing it with someone else. With whom and against what kind of premium on risk? Good question. I can see like a shade of that. During the pandemic, marketing platforms for gastronomy, such as Uber Eats, swell like balloons. These might be the insurers of the restaurant business. They capitalize on the customer base for takeout. As a matter of fact, they can almost own that customer base.

A group of my students, all from France, as if by accident, had an interesting business concept: a platform for ordering food from specific chefs. A list of well-credentialed chefs is available on the website. Each of them recommends a few flagship recipes of theirs. The customer picks the specific chef and their specific culinary chef d’oeuvre. One more click, and the customer has that chef d’oeuvre delivered on their doorstep. Interesting development. Pas si con que ça, as the French say.     

Businesspeople have been using both hedging and insurance for centuries, to face various risks. When used systematically, those two schemes create two characteristic types of capitalistic institutions: financial markets and pooled funds. Spreading my capitalistic eggs across many baskets means that, over time, we need a way to switch quickly among baskets. Tradable financial instruments serve to that purpose, and money is probably the most liquid and versatile among them. Yet, it is the least profitable one: flexibility and adaptability is the only gain that one can derive from holding large monetary balances. No interest rate, no participation in profits of any kind, no speculative gain on the market value. Just adaptability. Sometimes, just being adaptable is enough to forego other gains. In the presence of significant need for hedging risks, businesses hold abnormally large amounts of cash money.

When people insure a lot – and we keep in mind the general meaning of insurance as described above – they tend to create large pooled funds of liquid financial assets, which stand at the ready to repair any breach in the hull of the market. Once again, we return to money and financial markets. Whilst abundant use of hedging as strategy for facing risk leads to hoarding money at the individual level, systematic application of insurance-type contracts favours pooling funds in joint ventures. Hedging and insurance sort of balance each other.

Those pieces of the puzzle sort of fall together into a pattern. As I have been doing my investment in the stock market, all over 2020, financial markets seems to be puffy with liquid capital, and that capital seems to be avid of some productive application. It is as if money itself was saying: ‘C’mon, guys. I know I’m liquid, and I can compensate risk, but I am more than that. Me being liquid and versatile makes me easily convertible into productive assets, so please, start converting. I’m bored with being just me, I mean with money being just money for the sake of it’.

Boots on the ground

I continue the fundamental cleaning in my head, as the year 2020 touches to its end. What do I want? Firstly,I want to exploit and develop on my hypothesis of collective intelligence in human societies, and I want to develop my programming skills in Python. Secondly, I want to develop my skills and my position as a facilitator and manager of research projects at the frontier of the academic world and that of business.  How will I know I have what I want? If I actually program a workable (and working) intelligent structure, able to uncover and reconstruct the collective intelligence of a social structure out of available empirical data – namely to uncover and reconstruct the chief collective outcomes that structure is after, and its patterns of reaction to random exogenous disturbances – that would be an almost tangible outcome for me, telling me I have made a significant step. When I see that I have repetitive, predictable patterns of facilitating the start of joint research projects in consortiums of scientific entities and business ones, then I know I have nailed down something in terms of project management. If I can start something like an investment fund for innovative technologies, then I definitely know I am on the right track.

As I want to program an intelligent structure, it is essentially an artificial neural network, possibly instrumented with additional functions, such as data collection, data cleansing etc. I know I want to understand very specifically what my neural network does. I want to understand every step in takes. To that purpose, I need to figure out a workable algorithm of my own, where I understand every line of code. It can be sub-optimally slow and limited in its real computational power, yet I need it. On the other hand, Internet is more and more equipped with platforms and libraries in the form of digital clouds, such as IBM Watson, or Tensorflow, which provide optimized processes to build complex pieces of AI. I already know that being truly proficient in Data Science entails skills pertinent to using those cloud-based tools. My bottom line is that if I want to program an intelligent structure communicable and appealing to other people, I need to program it at two levels: as my own prototypic code, and as a procedure of using cloud-based platforms to replicate it.             

At the juncture of those two how-will-I-know pieces of evidence, an idea emerges, a crazy one. What if I can program an intelligent structure which uncovers and reconstructs one or more alternative business models out of the available empirical data? Interesting. The empirical data I work the most with, as regards business models, is the data provided in the annual reports of publicly listed companies. Secondarily, data about financial markets sort of connects. My own experience as small investor supplies me with existential basis to back this external data, and that experience suggests me to define a business model as a portfolio of assets combined with broadly spoken behavioural patterns both in people active inside the business model, thus running it and employed with it, and in people connected to that model from outside, as customers, suppliers, investors etc.

How will other people know I have what I want? The intelligent structure I will have programmed has to work across different individual environments, which is an elegant way of saying it should work on different computers. Logically, I can say I have clearly demonstrated to other people that I achieved what I wanted with that thing of collective intelligence when said other people will be willing to and successful at trying my algorithm. Here comes the point of willingness in other people. I think it is something like an existential thing across the board. When we want other people to try and do something, and they don’t, we are pissed. When other people want us to try and do something, and we don’t, we are pissed, and they are pissed. As regards my hypothesis of collective intelligence, I have already experienced that sort of intellectual barrier, when my articles get reviewed. Reviewers write that my hypothesis is interesting, yet not articulate and not grounded enough. Honestly, I can’t blame them. My feeling is that it is even hard to say that I have that hypothesis of collective intelligence. It is rather as if that hypothesis was having me as its voice and speech. Crazy, I know, only this is how I feel about the thing, and I know by experience that good science (and good things, in general) turn up when I am honest with myself.

My point is that I feel I need to write a book about that concept of collective intelligence, in order to give a full explanation of my hypothesis. My observations about cities and their role in the human civilization make, for the moment, one of the most tangible topics I can attach the theoretical hypothesis to. Writing that book about cities, together with programming an intelligent structure, takes a different shade, now. It becomes a complex account of how we can deconstruct something – our own collective intelligence – which we know is there and yet, as we are inside that thing, we have hard times to describe it.

That book about cities, abundantly referring to my hypothesis of collective intelligence, could be one of the ways to convince other people to at least try what I propose. Thus, once again, I restate what I understand by intelligent structure. It is a structure which learns new patterns by experimenting with many alternative versions of itself, whilst staying internally coherent. I return to my ‘DU_DG’ database about cities (see ‘It is important to re-assume the meaning’) and I am re-assuming the concept of alternative versions, in an intelligent structure.

I have a dataset structured into n variables and m empirical observations. In my DU_DG database, as in many other economic datasets, distinct observations are defined as the state of a country in a given year. As I look at the dataset (metaphorically, it has content and meaning, but it does not have any physical shape save for the one my software supplies it with), and as I look at my thoughts (metaphorically, once again), I realize I have been subconsciously distinguishing two levels of defining an intelligent structure in that dataset, and, correspondingly, two ways of defining alternative versions thereof. At the first level, the entire dataset is supposed to be an intelligent structure and alternative versions thereof consist in alternative dichotomies of the type ‘one variable as output, i.e. as the desired outcome to optimize, and the remaining ones as instrumental input’. At this level of structural intelligence – by which term I understand the way of being in an intelligent structure – alternative versions are alternative orientations, and there are as many of them as there are variables.

Distinction into variables is largely, although not entirely, epistemic, and not ontological. The headcount of urban population is not fundamentally different phenomenon from the surface of agricultural land. Yes, the units of measurement are different, i.e. people vs. square kilometres, but, ontologically, it is largely the same existential stuff, possible to describe as people living somewhere in large numbers and being successful at it. Historically, social scientists and governments alike have come to the conclusion, though, that these two metrics have different a meaning, and thus it comes handy to distinguish them as semantic vessels to collect and convey information. The distinction of alternative orientations in an intelligent structure, supposedly represented in a dataset, is arbitrary and cognitive more than ontological. It depends on the number of variables we have. If I add variables to the dataset, e.g. by computing coefficients between the incumbent variables, I can create new orientations for the intelligent structure, i.e. new alternative versions to experiment with.

The point which comes to my mind is that the complexity of an intelligent structure, at that first level, depends on the complexity of my observational apparatus. The more different variables I can distinguish, and measure as regards a given society, the more complexity I can give to the allegedly existing, collectively intelligent structure of that society.

Whichever combination ‘output variable vs. input variables’ I am experimenting with, there comes the second level of defining intelligent structures, i.e. that of defining them as separate countries. They are sort of local intelligent structures, and, at the same time, they are alternative experimental versions of the overarching intelligent structure to be found in the vector of variables. Each such local intelligent structure, with a flag, a national anthem, and a government, produces many alternative versions of itself in consecutive years covered by the window of observation I have in my dataset.

I can see a subtle distinction here. A country produces alternative versions of itself, in different years of its existence, sort of objectively and without giving a f**k about my epistemological distinctions. It just exists and tries to be good at it. Experimenting comes as natural in the flow of time. This is unguided learning. On the other hand, I produce different orientations of the entire dataset. This is guided learning. Now, I understand the importance of the degree of supervision in artificial neural networks.

I can see an important lesson for me, here. If I want to program intelligent structures ‘able to uncover and reconstruct the collective intelligence of a social structure out of available empirical data – namely to uncover and reconstruct the chief collective outcomes that structure is after, and its patterns of reaction to random exogenous disturbances’, I need to distinguish those two levels of learning in the first place, namely the unguided flow of existential states from the guided structuring into variables and orientations. When I have an empirical dataset and I want to program an intelligent structure able to deconstruct the collective intelligence represented in that dataset, I need to define accurately the basic ontological units, i.e. the fundamentally existing things, then I define alternative states of those things, and finally I define alternative orientations.

Now, I am contrasting. I pass from those abstract thoughts on intelligent structures to a quick review of my so-far learning to program those structures in Python. Below, I present that review as a quick list of separate files I created in JupyterLab, together with a quick characteristic of problems I am trying to solve in each of those files, as well as of the solutions found and not found.

>> Practice Dec 11 2020.iypnb.

In this file, I work with IMF database WEOOct2020 (https://www.imf.org/en/Publications/WEO/weo-database/2020/October ).  I practiced reading complex datasets, with an artificially flattened structure. It is a table, in which index columns are used to add dimensions to an otherwise two-dimensional format. I practiced the ‘read_excel’ and ‘read_csv’ commands. On the whole, it seems that converting an Excel to CSV and then reading CSV in Python is a better method than reading excel. Problems solved: a) cleansing the dataset of not-a-number components and successful conversion of initially ‘object’ columns into the desired ‘float64’ format b) setting descriptive indexes to the data frame c) listing unique labels from a descriptive index d) inserting new columns into the data frame e) adding (compounding) the contents of two existing, descriptive index columns into a third index column. Failures: i) reading data from XML file ii) reading data from SDMX format iii) transposing my data frame so as to put index values of economic variables as column names and years as index values in a column.

>> Practice Dec 8 2020.iypnb.

In this file, I worked with a favourite dataset of mine, the Penn Tables 9.1. (https://www.rug.nl/ggdc/productivity/pwt/?lang=en ). I described my work with it in two earlier updates, namely ‘Two loops, one inside the other’, and ‘Mathematical distance’. I succeeded in creating an intelligent structure from that dataset. I failed at properly formatting the output of that structure and thus at comparing the cognitive value of different orientations I made it simulate.   

>> Practice with Mortality.iypnb.

I created this file as a first practice before working with the above-mentioned WEOOct2020 database. I took one dataset from the website of the World Bank, namely that pertinent to the coefficient of adult male mortality (https://data.worldbank.org/indicator/SP.DYN.AMRT.MA ). I practiced reading data from CSV files, and I unsuccessfully tried to stack the dataset, i.e. to transform columns corresponding to different years of observation into rows indexed with labels corresponding to years.   

>> Practice DU_DG.iypnb.

In this file, I am practicing with my own dataset pertinent to the density of urban population and its correlates. The dataset is already structured in Excel. I start practicing the coding of the same intelligent structure I made with Penn Tables, supposed to study orientation of the societies studied. Same problems and same failures as with Penn Tables 9.1.: for the moment, I cannot nail down the way to get output data in structures that allow full comparability. My columns tend to wander across the output data frames. In other words, the vectors of mean expected values produced by the code I made have slightly (just slightly, and sufficiently to be annoying) different a structure from the original dataset. I don’t know why, yet, and I don’t know how to fix it.  

On the other hand, in that same file, I have been messing around a bit with algorithms based on the ‘scikit’ library for Python. Nice graphs, and functions which I still need to understand.

>> Practice SEC Financials.iypnb.

Here, I work with data published by the US Securities and Exchange Commission, regarding the financials of individual companies listed in the US stock market (https://www.sec.gov/dera/data/financial-statement-data-sets.html ). The challenge here consists in translating data originally supplied in *.TXT files into numerical data frames in Python. The problem with I managed to solve, so far (this is the most recent piece of my programming), is the most elementary translation of TXT data into a Pandas data frame, using the ‘open()’ command, and the ‘f.readlines()’ one. Another small victory here is to read data from a sub-directory inside the working directory of JupyterLab, i.e. inside the root directory of my user profile. I used two methods of reading TXT data. Both worked sort of. First, I used the following sequence:

>> with open(‘2020q3/num.txt’) as f:

            numbers=f.readlines()

>> Numbers=pd.DataFrame(numbers)

… which, when checked with the ‘Numbers.info()’ command, yields:

<class ‘pandas.core.frame.DataFrame’>

RangeIndex: 2351641 entries, 0 to 2351640

Data columns (total 1 columns):

 #   Column  Dtype

—  ——  —–

 0   0       object

dtypes: object(1)

memory usage: 17.9+ MB

In other words, that sequence did not split the string of column names into separate columns, and the ‘Numbers’ data frame contains one column, in which every row is a long string structured with the ‘\’ separators. I tried to be smart with it. I did:

>> Numbers.to_csv(‘Num2’) # I converted the Pandas data frame into a CSV file

>> Num3=pd.DataFrame(pd.read_csv(‘Num2′,sep=’;’)) # …and I tried to read back from CSV, experimenting with different separators. None of it worked. With the ‘sep=’ argument in the command, I kept getting an error of parsing, in the lines of ‘ParserError: Error tokenizing data. C error: Expected 1 fields in line 3952, saw 10’. When I didn’t use the ‘sep=’ argument, the command did not yield error, yet it yielded the same long column of structured strings instead of many data columns.  

Thus, I gave up a bit, and I used Excel to open the TXT file, and to save a copy of it in the CSV format. Then, I just created a data frame from the CSV dataset, through the ‘NUM_from_CSV=pd.DataFrame(pd.read_csv(‘SEC_NUM.csv’, sep=’;’))’ command, which, checked with the ‘NUM_from_CSV.info()’ command, yields:

<class ‘pandas.core.frame.DataFrame’>

RangeIndex: 1048575 entries, 0 to 1048574

Data columns (total 9 columns):

 #   Column    Non-Null Count    Dtype 

—  ——    ————–    —– 

 0   adsh      1048575 non-null  object

 1   tag       1048575 non-null  object

 2   version   1048575 non-null  object

 3   coreg     30131 non-null    object

 4   ddate     1048575 non-null  int64 

 5   qtrs      1048575 non-null  int64 

 6   uom       1048575 non-null  object

 7   value     1034174 non-null  float64

 8   footnote  1564 non-null     object

dtypes: float64(1), int64(2), object(6)

memory usage: 72.0+ MB

The ‘tag’ column in this data frame contains the names of financial variables ascribed to companies identified with their ‘adsh’ codes. I experience the same challenge, and, so far, the same failure as with the WEOOct2020 database from IMF, namely translating different values in a descriptive index into a dictionary, and then, in the next step, to flip the database so as to make those different index categories into separate columns (variables).   

As I have passed in review that programming of mine, I have become aware that reading and properly structuring different formats of data is the sensory apparatus of the intelligent structure I want to program.  Operations of data cleansing and data formatting are the fundamental skills I need to develop in programming. Contrarily to what I expected a few weeks ago, when I was taking on programming in Python, elaborate mathematical constructs are simpler to code than I thought they would be. What might be harder, mind you, is to program them so as to optimize computational efficiency with large datasets. Still, the very basic, boots-on-the-ground structuring of data seems to be the name of the game for programming intelligent structures.

It is important to re-assume the meaning

It is Christmas 2020, late in the morning. I am thinking, sort of deeply. It is a dysfunctional tradition to make, by the end of the year, resolutions for the coming year. Resolutions which we obviously don’t hold to long enough to see them bring anything substantial. Yet, it is a good thing to pass in review the whole passing year, distinguish my own f**k-ups from my valuable actions, and use it as learning material for the incoming year.

What I have been doing consistently for the past year is learning new stuff: investment in the stock market, distance teaching amidst epidemic restrictions, doing research on collective intelligence in human societies, managing research projects, programming, and training consistently while fasting. Finally, and sort of overarchingly, I have learnt the power of learning by solving specific problems and writing about myself mixing successes and failures as I am learning.

Yes, it is precisely the kind you can expect in what we tend to label as girls’ readings, sort of ‘My dear journal, here is what happened today…’. I keep my dear journal focused mostly on my broadly speaking professional development. Professional development combines with personal development, for me, though. I discovered that when I want to achieve some kind of professional success, would it be academic, or business, I need to add a few new arrows to my personal quiver.    

Investing in the stock market and training while fasting are, I think, what I have had the most complete cycle of learning with. Strange combination? Indeed, a strange one, with a surprising common denominator: the capacity to control my emotions, to recognize my cognitive limitations, and to acknowledge the payoff from both. Financial decisions should be cold and calculated. Yes, they should, and sometimes they are, but here comes a big discovery of mine: when I start putting my own money into investment positions in the stock market, emotions flare in me so strongly that I experience something like tunnel vision. What looked like perfectly rational inference from numbers, just minutes ago, now suddenly looks like a jungle, with both game and tigers in it. The strongest emotion of all, at least in my case, is the fear of loss, and not the greed for gain. Yes, it goes against a common stereotype, and yet it is true. Moreover, I discovered that properly acknowledged and controlled, the fear of loss is a great emotional driver for good investment decisions, and, as a matter of fact, it is much better an emotional driver than avidity for gain. I know that I am well off when I keep the latter sort of weak and shy, expecting gains rather than longing for them, if you catch my drift.

Here comes the concept of good investment decisions. As this year 2020 comes to an end, my return on cash invested over the course of the year is 30% with a little change. Not bad at all, compared to a bank deposit (+1,5%) or to sovereign bonds (+4,5% max). I am wrapping my mind around the second most fundamental question about my investment decisions this year – after, of course, of the question about return on investment – and that second question is ontological: what my investment decisions actually have been? What has been their substance? The most general answer is tolerable complexity with intuitive hedging and a pinch of greed. Complexity means that I have progressively passed from the otherwise naïve expectation of one perfect hit to a portfolio of investment positions. Thinking intuitively in terms of portfolio has taught me just as intuitive approach to hedging my risks. Now, when I open one investment position, I already think about another possible one, either to reinforce my glide on the wave crest I intend to ride, or to compensate the risks contingent to seeing my ass gliding off and down from said wave crest.

That portfolio thinking of mine happens in layers, sort of. I have a portfolio of industries, and that seems to be the basic structuring layer of my decisions. I think I can call myself a mid-term investor. I have learnt to spot and utilise mid-term trends of interest that investors in the stock market attach to particular industries. I noticed there are cyclical fashion seasons in the stock market, in that respect. There is a cyclically recurrent biotech season, due to the pandemic. There is just as cyclical a fashion for digital tech, and another one for renewable energies (photovoltaic, in particular). Inside the digital tech, there are smaller waves of popularity as regards the gaming business, others connected to FinTech etc.

Cyclicality means that prices of stock in those industries grow for some time, ranging, by my experience, from 2 to 13 weeks. Riding those waves means jumping on and off at the right moment. The right moment for jumping on is as early as possible after the trend starts to ascend, and jump just as early as possible after it shows signs of durable descent.

The ‘durable’ part is tricky, mind you. I saw many episodes, and during some of them I shamefully yielded to short-termist panic, when the trend curbs down just for a few days before rocketing up again. Those episodes show well what it means in practical terms to face ‘technical factors’. The stock market is like an ocean. There are spots of particular fertility, and big predators tend to flock just there. In the stock market, just as in the ocean, you have bloody big sharks swimming around, and you’d better hold on when they start feeding, ‘cause they feed just as real sharks do: they hit quickly, cause abundant bleeding, and then just wait until their pray bleeds out enough to be defenceless.

When I see, for example, a company like the German Biontech (https://investors.biontech.de/investors-media) suddenly losing value in the stock market, whilst the very vaccine they ganged up with Pfizer to make is being distributed across the world, I am like: ‘Wait a minute! Why the stock price of a super-successful, highly innovative business would fall just at the moment when they are starting to consume the economic fruit of their innovation?’. The only explanation is that sharks are hunting. Your typical stock market shark hunts in a disgusting way, by eating, vomiting and then eating their vomit back with a surplus. It bites a big chunk of a given stock, chews it for a moment, spits it out quickly – which pushes the price down a bit – then eats back its own vomit of stock, with a tiny surplus acquired at the previously down-driven price, and then it repeats. Why wouldn’t it repeat, as long as the thing works?

My personal philosophy, which, unfortunately, sometimes I deviate from when my emotions prevail, is just to sit and wait until those big sharks end their feeding cycle. This is another useful thing to know about big predators in the stock market: they hunt similarly to big predators in nature. They have a feeding cycle. When they have killed and consumed a big prey, they rest, as they are both replete with eating and down on energy. They need to rebuild their capital base.      

My reading of the stock market is that those waves of financial interest in particular industries are based on expectations as for real business cycles going on out there. Of course, in the stock market, there is always the phenomenon of subsidiary interest: I invest in companies which I expect other investors to invest to, as well, and, consequently, whose stock price I expect to grow. Still, investors in the stock market are much more oriented on fundamental business cycles than non-financial people think. When I invest in the stock of a company, and I know for a fact that many other investors think the same, I expect that company to do something constructive with my trust. I want to see those CEOs take bold decisions as for real investment in technological assets. When they really do so, I stay with them, i.e. I hold that stock. This is why I keep holding the stock of Tesla even amidst episodes of while swings in its price. I simply know Elon Musk will always come up with something which, for him, are business concepts, and for the common of mortals are science-fiction. If, on the other hand, I see those CEOs just sitting and gleaming benefits from trading their preferential shares, I leave.

Here I connect to another thing I started to learn during 2020: managing research projects. At my university, I have been assigned this specific job, and I discovered something which I did not expect: there is more money than ideas, out there. There is, actually, plenty of capital available from different sources, to finance innovative science. The tricky part is to translate innovative ideas into an intelligible, communicable form, and then into projects able to convince people with money. The ‘translating’ part is surprisingly complex. I can see many sparse, sort of semi-autonomous ideas in different people, and I still struggle with putting those people together, into some sort of team, or, fault of a team, into a network, and make them mix their respective ideas into one, big, articulate concept. I have been reading for years about managing R&D in corporate structures, about how complex and artful it is to manage R&D efficiently, and now, I am experiencing it in real life. An interesting aspect of that is the writing of preliminary contracts, the so-called ‘Non-Disclosure Agreements’ AKA NDAs, the signature of which is sort of a trigger for starting serious networking between different agents of an R&D project.

As I am wrapping my mind around those questions, I meditate over the words written by Joseph Schumpeter, in his Business Cycles: “Whenever a new production function has been set up successfully and the trade beholds the new thing done and its major problems solved, it becomes much easier for other people to do the same thing and even to improve upon it. In fact, they are driven to copying it if they can, and some people will do so forthwith. It should be observed that it becomes easier not only to do the same thing, but also to do similar things in similar lines—either subsidiary or competitive ones—while certain innovations, such as the steam engine, directly affect a wide variety of industries. This seems to offer perfectly simple and realistic interpretations of two outstanding facts of observation : First, that innovations do not remain isolated events, and are not evenly distributed in time, but that on the contrary they tend to cluster, to come about in bunches, simply because first some, and then most, firms follow in the wake of successful innovation ; second, that innovations are not at any time distributed over the whole economic system at random, but tend to concentrate in certain sectors and their surroundings”. (Business Cycles, Chapter III HOW THE ECONOMIC SYSTEM GENERATES EVOLUTION, The Theory of Innovation). In the Spring, when the pandemic was deploying its wings for the first time, I had a strong feeling that medicine and biotechnology will be the name of the game in technological change for at least a few years to come. Now, as strange as it seems, I have a vivid confirmation of that in my work at the university. Conceptual balls which I receive and which I do my best to play out further in the field come almost exclusively from the faculty of medical sciences. Coincidence? Go figure…

I am developing along two other avenues: my research on cities and my learning of programming in Python. I have been doing research on cities as manifestations of collective intelligence, and I have been doing it for a while. See, for example, ‘Demographic anomalies – the puzzle of urban density’ or ‘The knowingly healthy people’. As I have been digging down this rabbit hole, I have created a database, which, for working purposes, I call ‘DU_DG’. DU_DG is a coefficient of relative density in population, which I came by with some day and which keeps puzzling me.  Just to announce the colour, as we say in Poland when playing cards, ‘DU’ stands for the density of urban population, and ‘DG’ is the general density of population. The ‘DU_DG’ coefficient is a ratio of these two, namely it is DU/DG, or, in other words, this is the density of urban population denominated in the units of general density in population. In still other words, if we take the density of population as a fundamental metric of human social structures, the DU_DG coefficient tells how much denser urban population is, as compared to the mean density, rural settlements included.

I want to rework through my DU_DG database in order both to practice my programming skills, and to reassess the main axes of research on the collective intelligence of cities. I open JupyterLab from my Anaconda panel, and I create a new Notebook with Python 3 as its kernel. I prepare my dataset. Just in case, I make two versions: one in Excel, another one in CSV. I replace decimal comas with decimal points; I know by experience that Python has issues with comas. In human lingo, a coma is a short pause for taking like half a breath before we continue uttering the rest of the sentence. From there, we take the coma into maths, as decimal separator. In Python, as in finance, we talk about decimal point as such, i.e. as a point. The coma is a separator.

Anyway, I have that notebook in JupyterLab, and I start by piling up what I think I will need in terms of libraries:

>> import numpy as np

>> import pandas as pd

>> import os

>> import math

I place my database in the root directory of my user profile, which is, by default, the working directory of Anaconda, and I check if my database is visible for Python:

>> os.listdir()

It is there, in both versions, Excel and CSV. I start with reading from Excel:

>> DU_DG_Excel=pd.DataFrame(pd.read_excel(‘Dataset For Perceptron.xlsx’, header=0))

I check with ‘DU_DG_Excel.info()’. I get:

<class ‘pandas.core.frame.DataFrame’>

RangeIndex: 1155 entries, 0 to 1154

Data columns (total 10 columns):

 #   Column                                                                Non-Null Count  Dtype 

—  ——                                                                      ————–  —– 

 0   Country                                                                1155 non-null   object

 1   Year                                                                      1155 non-null   int64 

 2   DU_DG                                                                1155 non-null   float64

 3   Population                                                           1155 non-null   int64 

 4   GDP (constant 2010 US$)                                  1042 non-null   float64

 5   Broad money (% of GDP)                                  1006 non-null   float64

 6   urban population absolute                                 1155 non-null   float64

 7   Energy use (kg of oil equivalent per capita)    985 non-null    float64

 8   agricultural land km2                                        1124 non-null   float64

 9   Cereal yield (kg per hectare)                                         1124 non-null   float64

dtypes: float64(7), int64(2), object(1)

memory usage: 90.4+ KB  

Cool. Exactly what I wanted. Now, if I want to use this database as a simulator of collective intelligence in human societies, I need to assume that each separate ‘country <> year’ observation is a distinct local instance of an overarching intelligent structure. My so-far experience with programming opens up on a range of actions that structure is supposed to perform. It is supposed to differentiate itself into the desired outcomes, on the one hand, and the instrumental epistatic traits manipulated and adjusted in order to achieve those outcomes.

As I pass in review my past research on the topic, a few big manifestations of collective intelligence in cities come to my mind. Creation and development of cities as purposeful demographic anomalies is the first manifestation. This is an otherwise old problem in economics. Basically, people and resources they use should be disposed evenly over the territory those people occupy, and yet they aren’t. Even with a correction taken for physical conditions, such as mountains or deserts, we tend to like forming demographic anomalies on the landmass of Earth. Those anomalies have one obvious outcome, i.e. the delicate balance between urban land and agricultural land, which is a balance between dense agglomerations generating new social roles due to abundant social interactions, on the one hand, and the local food base for people endorsing those roles. The actual difference between cities and the surrounding countryside, in terms of social density, is very idiosyncratic across the globe and seems to be another aspect of intelligent collective adaptation.

Mankind is becoming more and more urbanized, i.e. a consistently growing percentage of people live in cities (World Bank 1[1]). In 2007 – 2008, the coefficient of urbanization topped 50% and keeps progressing since then. As there is more and more of us, humans, on the planet, we concentrate more and more in urban areas. That process defies preconceived ideas about land use. A commonly used narrative is that cities keep growing out into their once-non-urban surroundings, which is frequently confirmed by anecdotal, local evidence of particular cities effectively sprawling into the neighbouring rural land. Still, as data based on satellite imagery is brought up, and as total urban land area on Earth is measured as the total surface of peculiar agglomerations of man-made structures and night-time lights, that total area seems to be stationary, or, at least, to have been stationary for the last 30 years (World Bank 2[2]). The geographical distribution of urban land over the entire land mass of Earth does change, yet the total seems to be pretty constant. In parallel, the total surface of agricultural land on Earth has been growing, although at a pace far from steady and predictable (World Bank 3[3]).

There is a theory implied in the above-cited methodology of measuring urban land based on satellite imagery. Cities can be seen as demographic anomalies with a social purpose, just as Fernand Braudel used to state it (Braudel 1985[4]) : ‘Towns are like electric transformers. They increase tension, accelerate the rhythm of exchange and constantly recharge human life. […]. Towns, cities, are turning-points, watersheds of human history. […]. The town […] is a demographic anomaly’. The basic theoretical thread of this article consists in viewing cities as complex technologies, for one, and in studying their transformations as a case of technological change. Logically, this is a case of technological change occurring by agglomeration and recombination. Cities can be studied as demographic anomalies with the specific purpose to accommodate a growing population with just as expanding a catalogue of new social roles, possible to structure into non-violent hierarchies. That path of thinking is present, for example, in the now classical work by Arnold Toynbee (Toynbee 1946[5]), and in the even more classical take by Adam Smith (Smith 1763[6]). Cities can literally work as factories of new social roles due to intense social interactions. The greater the density of population, the greater the likelihood of both new agglomerations of technologies being built, and new, adjacent social roles emerging. A good example of that special urban function is the interaction inside age groups. Historically, cities have allowed much more abundant interactions among young people (under the age of 25), that rural environments have. That, in turn, favours the emergence of social roles based on the typically adolescent, high appetite for risk and immediate rewards (see for example: Steinberg 2008[7]). Recent developments in neuroscience, on the other hand, allow assuming that abundant social interactions in the urban environment have a deep impact on the neuroplastic change in our brains, and even on the phenotypical expression of human DNA (Ehninger et al. 2008[8]; Bavelier et al. 2010[9]; Day & Sweatt 2011[10]; Sweatt 2013[11])

At the bottom line of all those theoretical perspectives, cities are quantitatively different from the countryside by their abnormal density of population. Throughout this article, the acronymic symbol [DU/DG] is used to designate the density of urban population denominated in the units of (divided by) general density of population, and is computed on the grounds of data published by combining the above cited coefficient of urbanization (World Bank 1) with the headcount of population (World Bank 4[12]), as well as with the surface of urban land (World Bank 2). The general density of population is taken straight from official statistics (World Bank 5[13]). 

The [DU/DG] coefficient stays in the theoretical perspective of cities as demographic anomalies with a purpose, and it can be considered as a measure of social difference between cities and the countryside. It displays intriguing quantitative properties. Whilst growing steadily over time at the globally aggregate level, from 11,9 in 1961 to 19,3 in 2018, it displays significant disparity across space. Such countries as Mauritania or Somalia display a [DU/DG] > 600, whilst United Kingdom or Switzerland are barely above [DU/DG] = 3. In the 13 smallest national entities in the world, such as Tonga, Puerto Rico or Grenada, [DU/DG] falls below 1. In other words, in those ultra-small national structures, the method of assessing urban space by satellite-imagery-based agglomeration of night-time lights fails utterly. These communities display peculiar, categorially idiosyncratic a spatial pattern of settlement. The cross-sectional variability of [DU/DG] (i.e. its standard deviation across space divided by its cross-sectional mean value) reaches 8.62, and yet some 70% of mankind lives in countries ranging across the 12,84 ≤ [DU/DG] ≤ 23,5 interval.

Correlations which the [DU/DG] coefficient displays at the globally aggregate level (i.e. at the scale of the whole planet) are even more puzzling. When benchmarked against the global real output in constant units of value (World Bank 6[14]), the time series of aggregate, global  [DU/DG] displays a Pearson correlation of r = 0,9967. On the other hand, the same type of Pearson correlation with the relative supply of money to the global economy (World Bank 7[15]) yields r = 0,9761. As the [DU/DG] coefficient is supposed to represent the relative social difference between cities and the countryside, a look at the latter is beneficial. The [DU/DG] Pearson-correlates with the global area of agricultural land (World Bank 8[16]) at r = 0,9271, and with the average, global yield of cereals, in kgs per hectare (World Bank 9[17]), at r = 0,9858. That strong correlations of the [DU/DG] coefficient with metrics pertinent to the global food base match its correlation with the energy base. When Pearson-correlated with the global average consumption of energy per capita (World Bank 10[18]), [DU/DG] proves significantly covariant, at r = 0,9585. All that kept in mind, it is probably not that much of a surprise to see the global aggregate [DU/DG] Pearson correlated with the global headcount of population (World Bank 11[19]) at r = 0,9954.    

It is important to re-assume the meaning of the [DU/DG] coefficient. This is essentially a metric of density in population, and density has abundant ramifications, so to say. The more people live per 1 km2, the more social interactions occur on the same square kilometre. Social interactions mean a lot. They mean learning by civilized rivalry. They mean transactions and markets as well. The greater the density of population, the greater the probability of new skills emerging, which possibly translates into new social roles, new types of business and new technologies. When two types of human settlements coexist, displaying very different densities of population, i.e. type A being many times denser than type B, type A is like a factory of patterns (new social roles and new markets), whilst type B is the supplier of raw resources. The progressively growing global average [DU/DG] means that, at the scale of the human civilization, that polarity of social functions accentuates.

The [DU/DG] coefficient bears strong marks of a statistical stunt. It is based on truly risky the assumption, advanced implicitly by through the World Bank’s data, that total surface of urban land on Earth has remained constant, at least over the last 3 decades. Moreover, denominating the density of urban population in units of general density of population was purely intuitive from the author’s part, and, as a matter of fact, other meaningful denominators can easily come to one’s mind. Still, with all that wobbly theoretical foundation, the [DU/DG] coefficient seems to inform about a significant, structural aspect of human societies. The Pearson correlations, which the global aggregate of that coefficient yields with the fundamental metrics of the global economy, are of an almost uncanny strength in social sciences, especially with respect to the strong cross-sectional disparity in the [DU/DG].

The relative social difference between cities and the countryside, measurable with the gauge of the [DU/DG] coefficient, seems to be a strongly idiosyncratic adaptative mechanism in human societies, and this mechanism seems to be correlated with quantitative growth in population, real output, production of food, and the consumption of energy. That could be a manifestation of tacit coordination, where a growing human population triggers an increasing pace of emergence in new social roles by stimulating urban density. As regards energy, the global correlation between the increasing [DU/DG] coefficient and the average consumption of energy per capita interestingly connects with a stream of research which postulates intelligent collective adaptation of human societies to the existing energy base, including intelligent spatial re-allocation of energy production and consumption (Leonard, Robertson 1997[20]; Robson, Wood 2008[21]; Russon 2010[22]; Wasniewski 2017[23], 2020[24]; Andreoni 2017[25]; Heun et al. 2018[26]; Velasco-Fernández et al 2018[27]).

It is interesting to investigate how smart are human societies in shaping their idiosyncratic social difference between cities and the countryside. This specific path of research is being pursued, further in this article, through the verification and exploration of the following working hypothesis: ‘The collective intelligence of human societies optimizes social interactions in the view of maximizing the absorption of energy from the environment’.  


[1] World Bank 1: https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS

[2] World Bank 2: https://data.worldbank.org/indicator/AG.LND.TOTL.UR.K2

[3] World Bank 3:  https://data.worldbank.org/indicator/AG.LND.AGRI.K2

[4] Braudel, F. (1985). Civilisation and Capitalism 15th and 18th Century–Vol. I: The Structures of Everyday Life, Translated by S. Reynolds, Collins, London, pp. 479 – 482

[5] Royal Institute of International Affairs, Somervell, D. C., & Toynbee, A. (1946). A Study of History. By Arnold J. Toynbee… Abridgement of Volumes I-VI (VII-X.) by DC Somervell. Oxford University Press., Section 3: The Growths of Civilizations, Chapter X.

[6] Smith, A. (1763-1896). Lectures on justice, police, revenue and arms. Delivered in the University of Glasgow in 1763, published by Clarendon Press in 1896, pp. 9 – 20

[7] Steinberg, L. (2008). A social neuroscience perspective on adolescent risk-taking. Developmental review, 28(1), 78-106. https://dx.doi.org/10.1016%2Fj.dr.2007.08.002

[8] Ehninger, D., Li, W., Fox, K., Stryker, M. P., & Silva, A. J. (2008). Reversing neurodevelopmental disorders in adults. Neuron, 60(6), 950-960. https://doi.org/10.1016/j.neuron.2008.12.007

[9] Bavelier, D., Levi, D. M., Li, R. W., Dan, Y., & Hensch, T. K. (2010). Removing brakes on adult brain plasticity: from molecular to behavioral interventions. Journal of Neuroscience, 30(45), 14964-14971. https://www.jneurosci.org/content/jneuro/30/45/14964.full.pdf

[10] Day, J. J., & Sweatt, J. D. (2011). Epigenetic mechanisms in cognition. Neuron, 70(5), 813-829. https://doi.org/10.1016/j.neuron.2011.05.019

[11] Sweatt, J. D. (2013). The emerging field of neuroepigenetics. Neuron, 80(3), 624-632. https://doi.org/10.1016/j.neuron.2013.10.023

[12] World Bank 4: https://data.worldbank.org/indicator/SP.POP.TOTL

[13] World Bank 5: https://data.worldbank.org/indicator/EN.POP.DNST

[14] World Bank 6: https://data.worldbank.org/indicator/NY.GDP.MKTP.KD

[15] World Bank 7: https://data.worldbank.org/indicator/FM.LBL.BMNY.GD.ZS

[16] World Bank 8: https://data.worldbank.org/indicator/AG.LND.AGRI.K2

[17] World Bank 9: https://data.worldbank.org/indicator/AG.YLD.CREL.KG

[18] World Bank 10: https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE

[19] World Bank 11: https://data.worldbank.org/indicator/SP.POP.TOTL

[20] Leonard, W.R., and Robertson, M.L. (1997). Comparative primate energetics and hominoid evolution. Am. J. Phys. Anthropol. 102, 265–281.

[21] Robson, S.L., and Wood, B. (2008). Hominin life history: reconstruction and evolution. J. Anat. 212, 394–425

[22] Russon, A. E. (2010). Life history: the energy-efficient orangutan. Current Biology, 20(22), pp. 981- 983.

[23] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation. Energy-Maximizing Adaptation (August 30, 2017).

[24] Wasniewski, K. (2020). Energy efficiency as manifestation of collective intelligence in human societies. Energy, 191, 116500.

[25] Andreoni, V. (2017). Energy Metabolism of 28 World Countries: A Multi-scale Integrated Analysis. Ecological Economics, 142, 56-69

[26] Heun, M. K., Owen, A., & Brockway, P. E. (2018). A physical supply-use table framework for energy analysis on the energy conversion chain. Applied Energy, 226, 1134-1162

[27] Velasco-Fernández, R., Giampietro, M., & Bukkens, S. G. (2018). Analyzing the energy performance of manufacturing across levels using the end-use matrix. Energy, 161, 559-572

Mathematical distance

I continue learning Python as regards data analysis. I have a few thoughts on what I have already learnt, and a new challenge, namely to repeat the same thing with another source of data, namely the World Economic Outlook database, published by the International Monetary Fund (https://www.imf.org/en/Publications/WEO/weo-database/2020/October ). My purpose is to use that data in the same way as I used that from Penn Tables 9.1 (see ‘Two loops, one inside the other’, for example), namely to run it through a digital intelligent structure consisting of a double algorithmic loop.

First things first, I need to do what I promised to do in Two loops, one inside the other, that is to test the cognitive value of the algorithm I presented there. By the way, as I keep working with algorithms known as ‘artificial intelligence’, I am more and more convinced that the term ‘artificial neural networks’ is not really appropriate. I think that talking about artificial intelligent structure is much closer to reality. Giving the name of ‘neurons’ to particular fragments of the algorithm reflects the properties of some of those neurons, I get it. Yet, the neurons of a digital technology are the micro-transistors in the CPU or in the GPU. Yes, micro-transistors do what neurons do in our brain: they fire conditionally and so they produce neural signals. Algorithms of AI can be run on any computer with proper software. AI is software, not hardware.

Yes, I know I’m ranting. This is how I am gathering intellectual speed for my writing. Learning to program in Python has led me to a few realizations about the digital intelligent structures I am working with, as simulators of collective intelligence in human societies. Algorithms are different from equations in the sense that algorithms do things, whilst equations represent things. When I want an algorithm to do the things represented with equations, I need functional coherence between commands. A command needs data to work on, and it is a good thing if I can utilize the data it puts out. A chain of commands is functional, when earlier commands give accurate input to later commands, and when the final output of the last command can be properly stored and utilized. On the other hand, equations don’t need data to work, because equations don’t work. They just are.

I can say my equations are fine when they make sense logically. On the other hand, I can be sure my algorithm works the way it is supposed to work, when I can empirically prove its functionality by testing it. Hence, I need a method of testing it and I need to be sure the method in itself is robust. Now, I understand why the hell in all the tutorials which I could find as regards programming in Python there is that ‘print(output)’ command at the end of each algorithm. Whatever the output is, printing it, i.e. displaying it on the screen, is the most elementary method of checking whether that output is what I expect it to be. By the way, I have made my own little discovery about the usefulness of the ‘print()’ command. In looping algorithms, which, by nature, are prone to looping forever if the range of iterations is not properly defined, I put ‘print(‘Finished’)’ at the very end of the code. When I see ‘Finished’ printed in the line below, I can be sure the thing has done the work it was supposed to do.

Good, I was supposed to write about the testing of my algorithm. How do I test? I start by taking small pieces of the algorithm and checking the kind of output they give. By doing that, I modified the algorithm from ‘Two loops, one inside the other’, into the form you can see below:

That’s the preliminary part: importing libraries and data for analysis >>

In [1]: import numpy as np

   …: import pandas as pd

   …: import os

   …: import math

In [2]: PWT=pd.DataFrame(pd.read_csv(‘PWT 9_1 no empty cells.csv’,header=0)) # PWT 9_1 no empty cells.csv is a CSV version of the database I made with non-empty observations in the Penn Tables 9.1 database

Now, I extract the purely numerical columns, into another data frame, which I label ‘PWT_Numerical’

In [3]: Variables=[‘rgdpe’, ‘rgdpo’, ‘pop’, ’emp’, ’emp / pop’, ‘avh’,

   …:        ‘hc’, ‘ccon’, ‘cda’, ‘cgdpe’, ‘cgdpo’, ‘cn’, ‘ck’, ‘ctfp’, ‘cwtfp’,

   …:        ‘rgdpna’, ‘rconna’, ‘rdana’, ‘rnna’, ‘rkna’, ‘rtfpna’, ‘rwtfpna’,

   …:        ‘labsh’, ‘irr’, ‘delta’, ‘xr’, ‘pl_con’, ‘pl_da’, ‘pl_gdpo’, ‘csh_c’,

   …:        ‘csh_i’, ‘csh_g’, ‘csh_x’, ‘csh_m’, ‘csh_r’, ‘pl_c’, ‘pl_i’, ‘pl_g’,

   …:        ‘pl_x’, ‘pl_m’, ‘pl_n’, ‘pl_k’]

In [4]: PWT_Numerical=pd.DataFrame(PWT[Variables])

My next step is to practice with creating dictionaries out of column names in my data frame

In [5]: Names_Output_Data=[]

   …: for i in range(42):

   …:     Name_Output_Data=PWT_Numerical.iloc[:,i].name

   …:     Names_Output_Data.append(Name_Output_Data)

I start coding the intelligent structure. I start by defining empty lists, to store data which the intelligent structure produces.

In [6]: ER=[]

   …: Transformed=[]

   …: MEANS=[]

   …: EUC=[]

I define an important numerical array in NumPy: the vector of mean expected values in the variables of PWT_Numerical.

   …: Source_means=np.array(PWT_Numerical.mean())

I open the big external loop of my intelligent structure. This loop is supposed to produce as many alternative intelligent structures as there are variables in my PWT_Numerical data frame.

   …: for i in range(42):

   …:     Name_Output_Data=PWT_Numerical.iloc[:,i].name

   …:     Names_Output_Data.append(Name_Output_Data)

   …:     Output=pd.DataFrame(PWT_Numerical.iloc[:,i]) # I make an output data frame

   …:     Mean=Output.mean()

   …:     MEANS.append(Mean) # I store the expected mean of each output variable in a separate list.

   …:     Input=pd.DataFrame(PWT_Numerical.drop(Output,axis=1)) # I make an input data frame, coupled with output

   …:     Input_STD=pd.DataFrame(Input/Input.max(axis=0)) # I standardize input data over the respective maximum of each variable

   …:     Input_Means=pd.DataFrame(Input.mean()) # I prepare two data frames sort of for later: one with the vector of means…

   …:     Input_Max=pd.DataFrame(Input.max(axis=0)) #… and another one with the vector of maximums

Now, I put in motion the intelligent structure strictly speaking: a simple perceptron, which…

   …:     for j in range(10): # … is short, for testing purposes, just 10 rows in the source data

   …:         Input_STD_randomized=np.array(Input_STD.iloc[j])*np.random.rand(41) #… sprays the standardized input data with random weights

   …:         Input_STD_summed=Input_STD_randomized.sum(axis=0) # … and then sums up sort of ∑(input variable *random weight).

   …:         T=math.tanh(Input_STD_summed) # …computes the hyperbolic tangent of summed randomized input. This is neural activation.

   …:         D=1-(T**2) # …computes the local first derivative of that hyperbolic tangent

   …:         E=(Output.iloc[j]-T)*D # … computes the local error of estimating the value of output variable, with input data neural-activated with the function of hyperbolic tangent

   …:         E_vector=np.array(np.repeat(E,41)) # I spread the local error into a vector to feed forward

   …:         Next_row_with_error=Input_STD.iloc[j+1]+E_vector # I feed the error forward

   …:         Next_row_DESTD=Next_row_with_error*Input.max(axis=0) # I destandardize

   …:         ER.append(E) # I store local errors in the list ER

   …:         ERROR=pd.DataFrame(ER) # I make a data frame out of the list ER

   …:         Transformed.append(Next_row_with_error) # I store the input values transformed by the perceptron (through the forward feed of error), in the list Transformed

   …:     TR=pd.DataFrame(Transformed) # I turn the Transformed list into a data frame

   …:     MEAN_TR=pd.DataFrame(TR.mean()) # I compute the mean values of transformed input and store them in a data frame. They are still mean values of standardized data.

   …:     MEAN_TR_DESTD=pd.DataFrame(MEAN_TR*Input_Max) # I destandardise

   …: MEANS_DF=pd.DataFrame(MEANS)

   …: print(MEANS)

   …: print(‘Finished’)

The general problem which I encounter with that algorithm is essentially that of reading correctly and utilizing the output, or, at least, this is how I understand that problem. First, I remind the general hypothesis which I want to test and explore with that algorithm presented above. Here it comes: for a given set of phenomena, informative about the state of a human social structure, and observable as a dataset of empirical values in numerical variables, there is a subset of variables which inform about the ethical and functional orientation of that human social structure; orientation manifests as relatively the least significant transformation, which the original dataset needs to undergo in order to minimize error in estimating the orientation-informative variable as output, when the remaining variables are used as input.

When the empirical dataset in question is being used as training set for an artificial neural network of the perceptron type, i.e. a network which tests for the optimal values in the input variables, for minimizing the error in estimating the output variable, such neural testing transforms the original dataset into a specific version thereof. I want to know how far away  from the original empirical dataset  does the specific transformation, oriented on a specific output, go. I measure that mathematical distance as the Euclidean distance between the vector of mean expected values in the transformed dataset, and the original one.

Therefore, I need two data frames in Pandas, or two arrays in NumPy, one containing the mean expected values of the original input data, the other storing mean expected values of the transformed dataset. Here is where my problems start, with the algorithm presented above. The ‘TR’ data frame has a different shape and structure than the ‘Input’ data frame, from which, technically, it is derived.  The Input data frame has 41 columns, and the TR has 42 columns. Besides, one column from ‘Input’, the ‘rgdpe’, AKA real GDP on the final consumption side, moves from being the first column in ‘Input’ to being the last ‘column’ in ‘TR’. For the moment, I have no clue what’s going on at that level. I even checked the algorithm with a debugger, available with the integrated development environment called Spyder (https://www.spyder-ide.org ). Technically, as far as the grammar of Python is concerned, the algorithm is OK. Still, it produces different than expected vectors of mean expected values in transformed data. I don’t even know where to start looking for a solution.    

There is one more thing I want to include in this algorithm, which I have already been doing in Excel. At each row of transformed data, thus at each ‘Next_row_with_error’, I want to add a mirroring row of mean Euclidean distance from each individual variable to all the remaining ones. It is a measure of internal coherence in the process of learning through trial and error, and I already know, having learnt it by trial and error, that including that specific metric, and feeding it forward together with the error, changes a lot in the way a perceptron learns.    

Two loops, one inside the other

I am developing my skills in programming by attacking the general construct of Markov chains and state space. My theory on the bridging between collective intelligence in human societies and artificial neural networks as simulators thereof is that both are intelligent structures. I assume that they learn by producing many alternative versions of themselves whilst staying structurally coherent, and they pitch each such version against a desired output, just to see how fit that particular take on existence is, regarding the requirements in place.  

Mathematically, that learning-by-doing is a Markov chain of states, i.e. a sequence of complex states, described by a handful of variables, such that each consecutive state in the sequence is a modification of the preceding state, through a logically coherent σ-algebra. My so-far findings suggest that orienting the intelligent structure on specific outcomes, out of all those available, is crucial for the path of learning that structure takes. In other words, the general hypothesis I am sniffing around and digging into is that the way an intelligent structure learns is principally determined by the desired outcomes which the structure is after, more than by the exact basket of inputs it uses. Stands to reason, for a neural network: the thing optimises inputs so as to make it fit to the outcome it seeks to get as close to as possible.  

As I am taking real taste in stepping out of my cavern, I have installed Anaconda on my computer, from https://www.anaconda.com/products/individual/download-success . When I use Anaconda, I use the same JupyterLab online functionality which I have been using so far, with one difference. Anaconda allows me to create a user account with JupyterLab, and to have all my work stored on that account. Probably, there are some storage limits, yet the thing is practical. 

Anyway, I want to program in Python, just as I do it in Excel, intelligent structures able to emulate the collective intelligence of human societies. A basic finding of mine, in the so-far research, is that intelligent structures alter their behaviour significantly depending on the outcome they pursue. The initial landscape I start operating in is akin a junkyard of information. I go to the website of World Bank, for example, I mean the one with freely available data, AKA https://data.worldbank.org , and I start rummaging. Quality of life, size of economies, headcount of populations… What else? Oh, yes, there are things about education, energy consumption and whatnot. All that stuff just piled up nicely, each item easy to retrieve, and yet, how does it all make sense together? My take on the thing is that there is stuff going on, like all the time and everywhere. We are part of that ongoing stuff, actually. Out of that stream of happening, we perceptually single out phenomenological cuts , and we isolate those specific cuts because we are able to measure them with some kind of gauge. Data-driven observation of ourselves in the world is closely connected to our science of measuring and counting stuff. Have you noticed that a basic metric, i.e. how many of us is there around, can take a denominator of one – when we count the population of a city – or a denominator of 10 000, when we are interested in the incidence of criminality. 

Each quantitative variable I can observe and download the dataset of from https://data.worldbank.org  comes out of that complex process of collective cognition, resembling a huge bunch of psychos walking around with rulers and abacuses, trying to measure everything they perceive. I use data as phenomenological description of both the reality those psychos (me included) live in, and the way they measure that reality. I want to check which among those quantitative variables are particularly suitable to represent the things we are really after, our collectively desired outcomes. The method I use to do it consists in producing as many variations of the original dataset as I have variables. Each variation of the original dataset has one variable singled out as output, and the remaining ones are input. I run such variation through a simple neural network – the simpler, the better – where standardised, randomly weighed and neurally activated input gets compared with the pre-set output. I measure the mean expected values of all the variables in such a transformation, i.e. when I run it through 3000 experimental rounds, I measure those means over the same 3000 rounds. I compute the Euclidean distance between each such vector of means and its cousin computed for the original dataset. I assume that, with rigorously the same logical structure of the neural network, those variations differ from each other just by the output variable they are pegged on. When I say ‘pegged’, by the way, I mean that the output variable is not subject to random weighing, i.e. it is not being experimented with. It comes exogenously, and is taken as it is. 

I noticed that each time I do that procedure, with whatever set of variables I take, one or two among them, when taken as output ones, produce variations much closer to the original dataset that other, in terms of Euclidean distance. It looks as if the neural network, when pegged on those particular variables, emulated a process of adaptation particularly similar to what is represented by the original empirical data. 

Now, I want to learn how to program, in Python, the production of alternative ‘input <> output’  couplings out of a source dataset. I already know the general drill for producing just one such coupling. Once I have my dataset read out of a CSV file into a Data Frame in Python Pandas, I start with creating a dictionary of all the numerical columns:

>> dict_numerical = [‘numerical_column1’, ‘numerical_column2’, …, ‘numerical column_n’]

A simple way of doing that, with large data frames, is to type in Python:

>> df.columns

… and it yields a string of labels in quotation marks ‘’, separated with commas. I just copy that lot , without the non-numerical columns, into the square brackets of dict_numerical = […], and Bob’s my uncle. 

Then I make a strictly numerical version of my database, by:

>> df_numerical = pd.DataFrame(df[dict_numerical])

By the way, each time I produce a new data frame, I check its structure with commands ‘df.info()‘ and ‘df.describe()’. At my neophytic level of programming, I want to make sure that what I have in a strictly numerical database is strictly numerical data, i.e. the ‘float64’ type. Here, one hint: when you convert your data from an original Excel file, pay attention to having your decimal point as a point, i.e. as ‘0.0’, not as a comma. With a comma, the Pandas reader tends to interpret such data by default as ‘object’. Annoying. 

Once I have that numerical data frame in place, I make another dictionary of the type:

>> dict_for_Input_pegged_on_X_as_output = [‘numerical_input_column1’, ‘numerical_input_column2’, …, ‘numerical_input_column_k’]

… where k = n -1, of course, and the 1 corresponds to the variable X, supposed to be the output one. 

I use that dictionary to split df_numerical:

>> df_output_X = df_numerical[‘numerical_column_X’]

>> df_input_for_X = df_numerical[dict_for_Input_pegged_on_X_as_output]     

I would like to automatise the process. It means I need a loop. I am looping over a range of numerical columns df_numerical. Let’s dance. I start routinely, in my Anaconda-Jupyter Lab-powered notebook. By the way, I noticed an interesting practical feature of Jupyter Lab. When you start it directly from its website https://jupyter.org , the notebook you can use has somehow limited functionality as compared to the notebook you can create when accessing Jupyter Lab from the Anaconda app on your computer. In the latter case you can create an account with Jupyter Lab, with a very useful functionality of mirroring the content of your cloud account on your hard drive. I know, I know: we use the cloud so as not to collect rubbish on our own disk. Still, Python files are small, they take little space, and I discovered that this mirroring stuff is really useful. 

I open up with importing the libraries I think I will need:

>> import numpy as np

>> import pandas as pd

>> import math

>> import os

As I am learning new stuff, I prefer taking known stuff as my data. Once again, I use a dataset which I made out of Penn Tables 9.1., by kicking out all the rows with empty cells [see: Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, www.ggdc.net/pwt ].

I already have that dataset in my working directory. By the way, when you install Anaconda on a MacBook, its working directory is by default the root directory of the user’s profile. For the moment, I keep ip that way. Anyway, I have that dataset and I read it into a Pandas dataframe:

>> PWT=pd.DataFrame(pd.read_csv(‘PWT 9_1 no empty cells.csv’,header=0))

I create my first dictionaries. I type:

>> PWT.columns

… which yields:

Index([‘country’, ‘year’, ‘rgdpe’, ‘rgdpo’, ‘pop’, ’emp’, ’emp / pop’, ‘avh’,

       ‘hc’, ‘ccon’, ‘cda’, ‘cgdpe’, ‘cgdpo’, ‘cn’, ‘ck’, ‘ctfp’, ‘cwtfp’,

       ‘rgdpna’, ‘rconna’, ‘rdana’, ‘rnna’, ‘rkna’, ‘rtfpna’, ‘rwtfpna’,

       ‘labsh’, ‘irr’, ‘delta’, ‘xr’, ‘pl_con’, ‘pl_da’, ‘pl_gdpo’, ‘csh_c’,

       ‘csh_i’, ‘csh_g’, ‘csh_x’, ‘csh_m’, ‘csh_r’, ‘pl_c’, ‘pl_i’, ‘pl_g’,

       ‘pl_x’, ‘pl_m’, ‘pl_n’, ‘pl_k’],

      dtype=’object’)

…and I create the dictionary of quantitative variables:

>> Variables=[‘rgdpe’, ‘rgdpo’, ‘pop’, ’emp’, ’emp / pop’, ‘avh’,

       ‘hc’, ‘ccon’, ‘cda’, ‘cgdpe’, ‘cgdpo’, ‘cn’, ‘ck’, ‘ctfp’, ‘cwtfp’,

       ‘rgdpna’, ‘rconna’, ‘rdana’, ‘rnna’, ‘rkna’, ‘rtfpna’, ‘rwtfpna’,

       ‘labsh’, ‘irr’, ‘delta’, ‘xr’, ‘pl_con’, ‘pl_da’, ‘pl_gdpo’, ‘csh_c’,

       ‘csh_i’, ‘csh_g’, ‘csh_x’, ‘csh_m’, ‘csh_r’, ‘pl_c’, ‘pl_i’, ‘pl_g’,

       ‘pl_x’, ‘pl_m’, ‘pl_n’, ‘pl_k’]

The ‘Variables’ dictionary serves me to mutate the ‘PWT’ dataframe into its close cousin, obsessed with numbers, namely into ‘PWT_Numerical’:

>> PWT_Numerical = pd.DataFrame(PWT[Variables])

I quickly check the PWT_Numerical’s driving licence, by typing ‘PWT_Numerical.info()’ and  ‘PWT_Numerical.shape’. All is well, data is in the ‘float64’ format, there are 42 columns and 3006 rows, the guy is cleared to go.

Once I have that nailed down, I mess around a bit with creating names for my cloned datasets. I practice with the following loop:

>> for i in range(42):

    print(“Input_for_”+PWT_Numerical.iloc[:,i].name) 

It yields a list of names for input databases in various ‘input <> output’ configurations of my experiment with the PWT 9.1 dataset. The ‘print’ command gives a string of 42 names: Input_for_rgdpe, Input_for_rgdpo, Input_for_pop etc. 

In my next step, I want to make that outcome durable. The ‘print’ command just prints the output of the loop, it does not store it in any logical structure. The output is gone as soon as it is printed. I create a loop that makes a dictionary, this time with names of output data frames:

>> Names_Output_Data=[] # Here, I create an empty dictionary

>> for i in range(42): # I design the loop

    >> Name_Output_Data=PWT_Numerical.iloc[:,i].name # I create a mechanism for generating strings to fill the dictionary up. 

    >> Names_Output_Data.append(Name_Output_Data) # This is the mechanism of appending   the dictionary with names generated in the previous command 

I check the result by typing the name of the dictionary – ‘Names_Output_Data’ – and executing (Shift + Enter in Jupyter Lab). It yields a full dictionary, filled with column names from PWT_Numerical

Now,  pass to designing my Markov chain of states, i.e. into making an intelligent structure, which produces many alternative versions of itself and tests them for fitness to meet a pre-defined desired outcome. In my neophyte’s logic, I see it as two loops, one inside the other. 

The big, external loop is the one which clones the initial ‘PWT_Numerical’ into pairs of data frames of the style: ’Input variables’ plus ‘Output variable’. I make as many such cloned pairs as there are numerical variables in PWT_Numerical, i.e. 42. Thus, my loop opens up as ‘for i in range(42):’. Inside each iteration of that loop, there is an internal loop of passing the input variables  through a very simple perceptron, assessing the error in estimating the output variable, and then feeding the error forward. Now, I will present below the entire code for those two loops, and then discuss what works, what doesn’t, and what I have no idea how to check whether it works or not. The code is grammatically correct in Python, i.e. it does not yield any error message when put to execution (Shift + Enter in JupyterLab, by the way).  After I present the entire code, I will discuss, further below, its particular parts. Anyway, here it is:

>> List_of_Output_DB=[]

>>Names_Output_Data=[]

>>MEANS=[]

>> Source_means=np.array(PWT_Numerical.mean())

>> EUC=[]

>>for i in range(42):

    >> Name_Output_Data=PWT_Numerical.iloc[:,i].name

    >> Names_Output_Data.append(Name_Output_Data)

    >> Output=pd.DataFrame(PWT_Numerical.iloc[:,i])    

    >> Mean=Output.mean()

   >> MEANS.append(Mean)

    >> Input=pd.DataFrame(PWT_Numerical.drop(Output,axis=1)) 

   >> Input_STD=pd.DataFrame(Input/Input.max(axis=0))

    >> ER=[]

    >> Transformed=[]

      >> for j in range(30):        

>> Input_STD_randomized=Input.iloc[j]*np.random.rand(41)

        >> Input_STD_summed=Input_STD_randomized.sum(axis=0)

        >> T=math.tanh(Input_STD_summed)

        >> D=1-(T**2)

        >> E=(Output.iloc[j]-T)*D

        >> E_vector=np.array(np.repeat(E,41))        

>> Next_row_with_error=Input_STD.iloc[j+1]+E_vector

>> Next_row_DESTD=Next_row_with_error*Input.max(axis=0)

        >> ER.append(E)

        >> ERROR=pd.DataFrame(ER)

        >> Transformed.append(Next_row_DESTD)

        >> CLONE=pd.DataFrame(Transformed).mean()

>> frames=[CLONE,MEANS[i]]

>> CLONE_Means=np.array(pd.concat(frames))

>> Euclidean=np.linalg.norm(Source_means-CLONE_Means)

>> EUC.append(Euclidean)

>> print(‘Finished’)   

Here is a shareable link to my Python file with that code inside: http://localhost:8880/lab/tree/Practice%20Dec%208%202020.ipynb  . I hope it works. 

I start explaining this code casually, from its end. This is a little trick I discovered as regards looping on datasets. Looping takes time and energy. In my struggles to learn Python, I have already managed to make a loop which kept looping forever. All I did was to call the loop as ‘for i in range PWT.index:’, without putting any ‘break’ command at the end. Yes, the index of a data frame is a finite number, yet it is also a sequence. When you don’t break explicitly the looping over that sequence, it will loop over and over again. 

Anyway, the trick. I put the command ‘print(‘Finished’)’ at the very end of the code, after all the loops. When the thing is done with being an intelligent structure at work, it simply prints ‘Finished’ in the next line. Among other things, it allows me to count the time it needs to deal with a given amount of data. As you might have already noticed, whilst I have a dataset with index = 3005 rows, I made the internal loop of the code to go just over 30 rows: ‘for j in range (30)’. The code took some 4 seconds in total to create 42 big loops (‘for i in range (42)’) , and then to loop over 30 rows of data inside each of them. It gives like 42*30 = 1260 experimental rounds in 10 seconds, thus something like 0,0079 seconds per one round. If I took the full dataset of 3005 rows, it would be like 42*3000*0,0079 = 1000 seconds, i.e. 16,6666 minutes. Satanic. I like it. 

Before opening each level of looping, I create empty lists. You can see:

>> List_of_Output_DB=[]

>>Names_Output_Data=[]

>>MEANS=[]

>> Source_means=np.array(PWT_Numerical.mean())

>> EUC=[]

… before I open the external loop, and…

  >> ER=[]

>> Transformed=[]

… before the internal loop.

I noticed that I create those empty lists in a loop, essentially. This is more than just a play on words. When I code a loop, I have output of the loop. The loop does something, and as it does, I discover I want to store that particular outcome in some kind of repository vessel, and I go back to the lines of code before the loop opens and I add an empty list, just in case. I come up with a smart name for the   list, e.g. MEANS, which stands for the mean values of numerical variables, such as they are after being transformed by the perceptron. Mathematically, it is the most basic representation of expected state in a particular transformation of the source dataset “PWT’. 

I code it as ‘MEANS=[]’, and, once I have done that, I add a mechanism of updating a list, inside the loop. This, in turn, goes in two steps. First, I code the variable which should be stored in that list. In the case of ‘MEANS’, as this list is created before I open the big loop of 42 ‘input <> output’ mutations, I append it in that loop. Logically, is must be appended with the mean expected values of output variables in each instance of the big loop. I code it in the big loop, and before opening the internal loop, as:

>> Output=pd.DataFrame(PWT_Numerical.iloc[:,i])  # Here, I define the data frame for the output variable in this specific instance of the big loop   

>> Mean=Output.mean() # Now, I define the function to generate values, which later append the ‘MEANS’ list

    >> MEANS.append(Mean) # Finally, I append the ‘MEANS’ list with values generated in the previous line of the code. 

It is a good thing for me to write about the things I do. I have just noticed that I use two different methods of storing partial outcomes of my loops. The first one is the one I have just presented. The second one is visible in the part of code presented below, included in the internal loop ‘for j in range(number of rows experimented with)’, range(30) in the occurrence tested. 

In this situation, I need to store in some kind of repository the values of input variables transformed by the neural network, i.e. with local error from each experimental round fed forward to the next experimental round. I need to store the way my data looks under each possible orientation of the intelligent structure I assume it represents. I denote that data under the general name ‘Transformed’, and, before opening the internal loop, just at the end of the big external loop, I define an empty list: ‘Transformed=[]’, which is supposed to contain those values I want.

In other words, when I structure the big external loop, I go like: 

# Step 1: for each variables in the dataset, i.e. ‘for i in range(number of variables)’, split the overall dataset into into this variable as the output, in a separate data frame, and all the other variables grouped separately as input. These are the lines of code:

>> Output=pd.DataFrame(PWT_Numerical.iloc[:,i])  # I define the output variable 

[…]    

>> Input=pd.DataFrame(PWT_Numerical.drop(Output,axis=1)) # I drop the output from the entire dataset and I group the remaining columns as ‘Input’

# Step 2: I standardise the input data by denominating it over the respective maximums for each variable:    

>> Input_STD=pd.DataFrame(Input/Input.max(axis=0))

# Step 3: I define, at the end of the big external loop, containers for data which I want to store from each round of the big loop:

>> ER=[] # This is the list of local errors generated by the perceptron when working with each ‘Input <> Output’ configuration

    >> Transformed=[] # That’s the container for input data transformed by the perceptron 

# Step 4: I open the internal loop, with ‘for j in range(number of rows to experiment with)’, and I start by coding the computational procedure of the perceptron:

>> Input_STD_randomized=Input.iloc[j]*np.random.rand(41) # I weigh each empirical, standardised value in this specific row with a random weight

        >> Input_STD_summed=Input_STD_randomized.sum(axis=0) # I sum the randomised values from that specific row of input. This line of code together with the preceding one are equivalent to the mathematical structure ‘∑x*random’.

        >> T=math.tanh(Input_STD_summed) # I compute the hyperbolic tangent of summed, randomised input data

        >> D=1-(T**2) # I compute the local first derivative of the hyperbolic tangent

        >> E=(Output.iloc[j]-T)*D # I compute the error, as: (Expected Output minus Hyperbolic Tangent of Randomised Input) times local derivative of the Hyperbolic Tangent

        >> E_vector=np.array(np.repeat(E,41)) # I create a NumPy array, with the error repeated as many times as there are input variables.

>> Next_row_with_error=Input_STD.iloc[j+1]+E_vector # I feed the error forward. In the next experimental row ‘j+1’, error from row ‘j’ is added to the value of each standardised input variable. This is probably the most elementary representation of learning: I include into my input for the next go the knowledge about what I f**ked up in the previous go. This line creates the transformed input data I want to store later on. 

# Step 5: I collect and store information about the things my perceptron did to input data in the given j-th round of the internal loop:

>> Next_row_DESTD=Next_row_with_error*Input.max(axis=0) # I destandardise the data transformed by the perceptron. It is like translating the work of the perceptron, which operates on standardised values, back into the measurement scale proper to each variable. In a sense, I deneuralise that data. 

        >> ER.append(E) # I collect and store error in the ER list

        >> ERROR=pd.DataFrame(ER) #I transform the ER list into a data frame, which I name ‘ERROR’. I do it a few times with different data, and, quite honestly, I do it intuitively. I already know that data frames defines in Pandas are somehow handier to do statistics with than lists defined in the basic code of Python. Just as honestly: I know too little yet about programming to know whether this turn of code makes sense at all.   

        >> Transformed.append(Next_row_DESTD) # I collect and store the destandardized, transformed input data in the ‘Transformed’ list.

# Step 6: I step out of both loops, and I start putting some order in the data I generated and collected. Stepping out of both loops means that in my code, the lines presented below have no indent. They all start at the left margin, just as the definition of the big external loop.

       >> CLONE=pd.DataFrame(Transformed).mean() # I transform the ‘Transformed’ list into a data frame. Same procedure as two lines of code earlier, only now, I know why I do it. I intend to put together the mean values of destandardised input with the mean value of output, and I am going to do it by concatenation of data frames. 

    >> frames=[CLONE,MEANS[i]] # I define the data frames for concatenation. I put together mean values in the input variables, generated in this specific, i-th round of the big external loop, with the mean value of the output variable corresponding to the same i-th round. You can notice that in the full code, such as I presented it earlier in this update, at this verse of code I move back by one indent. In other words, this definition is already outside of the internal loop, and still inside the big external loop. 

    >> CLONE_Means=np.array(pd.concat(frames)) # I concatenate the data I defined in the previous line. 

    >> Euclidean=np.linalg.norm(Source_means-CLONE_Means) # Something I need for my science. I estimate the mathematical similarity between the source data set ‘PWT_Numerical’, and the data set created by the perceptron, in the given i-th round of the big external loop. I do it by computing the Euclidean distance between the respective vectors of mean expected values in this specific pair of datasets, i.e. the pair ‘source vs i-th clone’.

    >> EUC.append(Euclidean) # I collect and store information generated in the ‘Euclidean’ line. I store it in the EUC list, which I opened as empty before starting the big external loop. 

One step out of the cavern

I have made one step further in my learning of programming. I finally have learn’t at least one method of standardising numerical values in a dataset. In a moment, I will show what exact method did I nail down. First, I want to share a thought of more general nature. I learn programming in order to enrich my research on the application of artificial intelligence for simulating collective intelligence in human societies. I have already discovered the importance of libraries, i.e. ready-made pieces of code, possible to call with a simple command, and short-cutting across many verses of code which I would have to write laboriously. I mean libraries such as NumPy, Pandas, Math etc. It is very similar to human consciousness. Using pre-constructed cognitive structures, i.e. using language and culture is a turbo boost for whatever we do of things that humans are supposed to do when being a civilisation.  

Anyway, I kept working with the dataset which I had already mentioned in my earlier updates, namely a version of Penn Tables 9.1., cleaned of all the rows with empty cells [see: Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, www.ggdc.net/pwt ]. Thus I started by creating an online notebook at JupyterLab (https://jupyter.org/try), with Python 3 as its kernel. Then I imported what I needed from Python in terms of ready-cooked culture, i.e. I went:

>> import numpy as np

>> import pandas as pd

>> import os

I uploaded the ‘PWT 9_1 no empty cells.csv’ file from my computer, and, just in case, I checked its presence in the working directory, with >> os.listdir(). I read the contents of the file into a Pandas Data Frame, which spells: PWT = pd.DataFrame(pd.read_csv(‘PWT 9_1 no empty cells.csv’)). Worked.  

In my next step, as I planned to mess up a bit with the columns of that dataset, I typed: PWT.columns. The thing nicely gave me back a list of columns, i.e. literally a list of labels in quotation marks [‘’]. I used that list to create a dictionary of columns with numerical values, and therefore the most interesting to me. I went:

>> Variables=[‘rgdpe’, ‘rgdpo’, ‘pop’, ’emp’, ’emp / pop’, ‘avh’,

       ‘hc’, ‘ccon’, ‘cda’, ‘cgdpe’, ‘cgdpo’, ‘cn’, ‘ck’, ‘ctfp’, ‘cwtfp’,

       ‘rgdpna’, ‘rconna’, ‘rdana’, ‘rnna’, ‘rkna’, ‘rtfpna’, ‘rwtfpna’,

       ‘labsh’, ‘irr’, ‘delta’, ‘xr’, ‘pl_con’, ‘pl_da’, ‘pl_gdpo’, ‘csh_c’,

       ‘csh_i’, ‘csh_g’, ‘csh_x’, ‘csh_m’, ‘csh_r’, ‘pl_c’, ‘pl_i’, ‘pl_g’,

       ‘pl_x’, ‘pl_m’, ‘pl_n’, ‘pl_k’]

The ‘Variables’ dictionary served me to make a purely numerical mutation of my dataset, namely: PWTVar=pd.DataFrame(PWT[Variables]).  

I generated the fixed components of standardisation in my data, i.e. maximums, means, and standard deviations across columns in PWTVar. It looked like this: 

>> Maximums=PWTVar.max(axis=0)

>> Means=PWTVar.mean(axis=0)

>> Deviations=PWTVar.std(axis=0)

The ‘axis=0’ part means that I want to generate those values across columns, not rows. Once that done, I made my two standardisations of data from PWTVar, namely: a) standardisation over maximums, like s(x) = x/max(x) and b) standardisation by mean-reversion, where s(x) = [x – avg(x)]/std(x)]. I did it as:

>> Standardized=pd.DataFrame(PWTVar/Maximums)

>> MR=pd.DataFrame((PWTVar-Means)/Deviations)

I used here the in-built function of Python Pandas, i.e. the fact that they automatically operate data frames as matrices. When, for example, I subtract ‘Means’ from ‘PWTVar’, the one-row matrix of ‘Means’ gets subtracted from each among the 3005 rows of ‘PWTVar’ etc. I checked those two data frames with commands such as ‘df.describe()’, ’df.shape’, and df.info(), just to make sure they are what I think they are. They are, indeed. 

Standardisation allowed me to step out of my cavern, in terms of programming artificial neural networks. The next step I took was to split my numerical dataset PWTVar into one output variable, on the one hand, and all the other variables grouped as input. As output, I took a variable which, as I have already found out in my research, is extremely important in social change seen through the lens of Penn Tables 9.1. This is ‘avh’ AKA the average number of hours worked per person per year. I did:  

>> Output_AVH=pd.DataFrame(PWTVar[‘avh’])

>> Input_dict=[‘rgdpe’, ‘rgdpo’, ‘pop’, ’emp’, ’emp / pop’, ‘hc’, ‘ccon’, ‘cda’,

        ‘cgdpe’, ‘cgdpo’, ‘cn’, ‘ck’, ‘ctfp’, ‘cwtfp’, ‘rgdpna’, ‘rconna’,

        ‘rdana’, ‘rnna’, ‘rkna’, ‘rtfpna’, ‘rwtfpna’, ‘labsh’, ‘irr’, ‘delta’,

        ‘xr’, ‘pl_con’, ‘pl_da’, ‘pl_gdpo’, ‘csh_c’, ‘csh_i’, ‘csh_g’, ‘csh_x’,

        ‘csh_m’, ‘csh_r’, ‘pl_c’, ‘pl_i’, ‘pl_g’, ‘pl_x’, ‘pl_m’, ‘pl_n’,

        ‘pl_k’] 

#As you can see, ‘avh’ is absent from the ‘Input-dict’ dictionary 

>> Input = pd.DataFrame(PWT[Input_dict])

The last thing that worked, in this episode of my learning, was to multiply the ‘Input’ dataset by a matrix of random float values generated with NumPy:

>> Randomized_input=pd.DataFrame(Input*np.random.rand(3006,41)) 

## Gives an entire Data Frame of randomized values

We haven’t nailed down all our equations yet

As I keep digging into the topic of collective intelligence, and my research thereon with the use of artificial neural networks, I am making a list of key empirical findings that pave my way down this particular rabbit hole. I am reinterpreting them with the new understandings I have from translating my mathematical model of artificial neural network into an algorithm. I am learning to program in Python, which comes sort of handy given I want to use AI. How could I have made and used artificial neural networks without programming, just using Excel? You see, that’s Laplace and his hypothesis that mathematics represent the structure of reality (https://discoversocialsciences.com/wp-content/uploads/2020/10/Laplace-A-Philosophical-Essay-on-Probabilities.pdf ).

An artificial neural network is a sequence of equations which interact, in a loop, with a domain of data. Just as any of us, humans, essentially. We just haven’t nailed down all of our own equations yet. What I can do and have done with Excel was to understand the structure of those equations and their order. This is a logical structure, and as long as I don’t give it any domain of data to feed on, is stays put.

When I feed data into that structure, it starts working. Now, with any set of empirical socio-economic variables I have worked with, so far, there is always 1 – 2 among them which are different from others as output. Generally, my neural network works differently according to the output variable I make it optimize. Yes, it is the output variable, supposedly being the desired outcome to optimize, and not the input variables treated as instrumental in that view, which makes the greatest difference in the results produced by the network.

That seems counterintuitive, and yet this is like the most fundamental common denominator of everything I have found out so far: the way that a simple neural network simulates the collective intelligence of human societies seems to be conditioned most of all by the variables pre-set as the output of the adaptation process, not by the input ones. Is it a sensible conclusion regarding collective intelligence in real life, or is it just a property of the data? In other words, is it social science or data science? This is precisely one of the questions which I want to answer by learning programming.

If it is a pattern of collective human intelligence, that would mean we are driven by the orientations pursued much more than by the actual perception of reality. What we are after would be more important a differentiating factor of your actions than what we perceive and experience as reality. Strangely congruent with the Interface Theory of Perception (Hoffman et al. 2015[1], Fields et al. 2018[2]). 

As it is some kind of habit in me, in the second part of this update I put the account of my learning how to program and to Data Science in Python. This time, I wanted to work with hard cases of CSV import, like trouble files. I want to practice data cleansing. I have downloaded the ‘World Economic Outlook October 2020’ database from the website https://www.imf.org/en/Publications/WEO/weo-database/2020/October/download-entire-database . Already when downloading, I could notice that the announced format is ‘TAB delimited’, not ‘Comma Separated’. It downloads as Excel.

To start with, I used the https://anyconv.com/tab-to-csv-converter/ website to do the conversion. In parallel, I tested two other ways:

  1. opening in Excel, and then saving as CSV
  2. opening with Excel, converting to *.TXT, importing into Wizard for MacOS (statistical package), and then exporting as CSV.

What I can see like right off the bat are different sizes in the same data, technically saved in the same format. The AnyConv-generated CSV is 12,3 MB, the one converted through Excel is 9,6 MB, and the last one, filtered through Excel to TXT, then to Wizard and to CSV makes 10,1 MB. Intriguing.

I open JupyterLab online, and I create a Python 3-based Notebook titled ‘Practice 27_11_2020_part2’.

I prepare the Notebook by importing Numpy, Pandas, Matplotlib and OS. I do:

>> import numpy as np

      import pandas as pd

      import matplotlib.pyplot as plt

      import os

I upload the AnyConv version of the CSV. I make sure to have the name of the file right by doing:

>> os.listdir()


…and I do:

>> WEO1=pd.DataFrame(pd.read_csv(‘AnyConv__WEOOct2020all.csv’))

Result:

/srv/conda/envs/notebook/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3072: DtypeWarning: Columns (83,85,87,89,91,93,95,98,99,102,103,106,107,110,111,114,115,118,119,122,123,126,127,130,131,134,135,138,139,142,143,146,147,150,151,154,155,158) have mixed types. Specify dtype option on import or set low_memory=False.

  interactivity=interactivity, compiler=compiler, result=result)

As I have been told, I add the “low_memory=False” option to the command, and I retype:

>> WEO1=pd.DataFrame(pd.read_csv(‘AnyConv__WEOOct2020all.csv’, low_memory=False))

Result: the file is apparently imported successfully. I investigate the structure.

>> WEO1.describe()

Result: I know I have 8 rows (there should be much more, over 200), and 32 columns. Something is wrong.

I upload the Excel-converted CSV.

>> WEO2=pd.DataFrame(pd.read_csv(‘WEOOct2020all_Excel.csv’))

Result: Parser error

I retry, with parameter sep=‘;’ (usually works with Excel)

>> WEO2=pd.DataFrame(pd.read_csv(‘WEOOct2020all_Excel.csv’,sep=’;’))

Result: import successful. Let’s check the shape of the data

>> WEO2.describe()

Result: Pandas can see just the last column. I make sure.

>> WEO2.columns

Result:

Index([‘WEO Country Code’, ‘ISO’, ‘WEO Subject Code’, ‘Country’,

       ‘Subject Descriptor’, ‘Subject Notes’, ‘Units’, ‘Scale’,

       ‘Country/Series-specific Notes’, ‘1980’, ‘1981’, ‘1982’, ‘1983’, ‘1984’,

       ‘1985’, ‘1986’, ‘1987’, ‘1988’, ‘1989’, ‘1990’, ‘1991’, ‘1992’, ‘1993’,

       ‘1994’, ‘1995’, ‘1996’, ‘1997’, ‘1998’, ‘1999’, ‘2000’, ‘2001’, ‘2002’,

       ‘2003’, ‘2004’, ‘2005’, ‘2006’, ‘2007’, ‘2008’, ‘2009’, ‘2010’, ‘2011’,

       ‘2012’, ‘2013’, ‘2014’, ‘2015’, ‘2016’, ‘2017’, ‘2018’, ‘2019’, ‘2020’,

       ‘2021’, ‘2022’, ‘2023’, ‘2024’, ‘2025’, ‘Estimates Start After’],

      dtype=’object’)

I will try to import the same file with a different ‘sep’ parameter, this time as sep=‘\t’

>> WEO3=pd.DataFrame(pd.read_csv(‘WEOOct2020all_Excel.csv’,sep=’\t’))

Result: import apparently successful. I check the shape of my data.

>> WEO3.describe()

Result: apparently, this time, no column is distinguished.

When I type:

>> WEO3.columns

…I get

Index([‘WEO Country Code;ISO;WEO Subject Code;Country;Subject Descriptor;Subject Notes;Units;Scale;Country/Series-specific Notes;1980;1981;1982;1983;1984;1985;1986;1987;1988;1989;1990;1991;1992;1993;1994;1995;1996;1997;1998;1999;2000;2001;2002;2003;2004;2005;2006;2007;2008;2009;2010;2011;2012;2013;2014;2015;2016;2017;2018;2019;2020;2021;2022;2023;2024;2025;Estimates Start After’], dtype=’object’)

Now, I test with the 3rd file, the one converted through Wizard.

>> WEO4=pd.DataFrame(pd.read_csv(‘WEOOct2020all_Wizard.csv’))

Result: import successful.

I check the shape.

>> WEO4.describe()

Result: still just 8 rows. Something is wrong.

I do another experiment. I take the original*.XLS from imf.org, and I save it as regular Excel *.XLSX, and then I save this one as CSV.

>> WEO5=pd.DataFrame(pd.read_csv(‘WEOOct2020all_XLSX.csv’))

Result: parser error

I will retry with two options as for the separator: sep=‘;’ and sep=‘\t’. Ledzeee…

>> WEO5=pd.DataFrame(pd.read_csv(‘WEOOct2020all_XLSX.csv’,sep=’;’))

Import successful. “WEO5.describe()” yields just one column.

>> WEO6=pd.DataFrame(pd.read_csv(‘WEOOct2020all_XLSX.csv’,sep=’\t’))

yields successful import, yet all the data is just one long row, without separation into columns.

I check WEO5 and WEO6 with “*.index”, and “*.shape”. 

“WEO5.index” yields “RangeIndex(start=0, stop=8777, step=1)”

“WEO6.index” yields “RangeIndex(start=0, stop=8777, step=1)

“WEO5.shape” gives “(8777, 56)”

“WEO6.shape” gives “(8777, 1)”

Depending on the separator given as parameter in the “pd.read_csv” command, I get 56 columns or just 1 column, yet the “*.describe()” command cannot make sense of them.

I try the *.describe” command, thus more specific than the “*.describe()” one.

I can see that structures are clearly different.

I try another trick, namely to assume separator ‘;’ and TAB delimiter.

>> WEO7=pd.DataFrame(pd.read_csv(‘WEOOct2020all_XLSX.csv’,sep=’;’,delimiter=’\t’))

Result: WEO7.shape yields 8777 rows in just one column.

Maybe ‘header=0’? Same thing.

The provisional moral of the fairy tale is that ‘Data cleansing’ means very largely making sense of the exact shape and syntax of CSV files. Depending on the parametrisation of separators and delimiters, different Data Frames are obtained.


[1] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[2] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213. https://doi.org/10.1016/j.cogsys.2017.10.003

I re-run my executable script

I am thinking (again) about the phenomenon of collective intelligence, this time in terms of behavioural reinforcement that we give to each other, and the role that cities and intelligent digital clouds can play in delivering such reinforcement. As it is usually the case with science, there is a basic question to ask: ‘What’s the point of all the fuss with that nice theory of yours, Mr Wasniewski? Any good for anything?’.

Good question. My tentative answer is that studying human societies as collectively intelligent structures is a phenomenology, which allows some major methodological developments, which, I think, are missing from other methodologies in social sciences. First of all, it allows a completely clean slate at the starting point of research, as regards ethics and moral orientations, whilst it almost inevitably leads to defining ethical values through empirical research. This was my first big ‘Oh, f**k!’ with that method: I realized that ethical values can be reliably studied as objectively pursued outcomes at the collective level, and that study can be robustly backed with maths and empirics.

I have that thing with my science, and, as a matter of fact, with other people’s science too: I am an empiricist. I like prodding my assumptions and make them lose some fat, so as they become lighter. I like having as much of a clean slate at the starting point of my research as possible. I believe that one single assumption, namely that human social structures are collectively intelligent structures, almost automatically transforms all the other assumptions into hypotheses to investigate. Still, I need to go, very carefully, through that one single Mother Of All Assumptions, i.e. about us, humans as a society, being collectively intelligent a structure, in order to nail down, and possibly kick out any logical shortcut.

Intelligent structures learn by producing many alternative versions of themselves and testing those versions for fitness in coping with a vector of constraints. There are three claims hidden in this single claim: learning, production of different versions, and testing for fitness. Do human social structures learn, like at all? Well, we have that thing called culture, and culture changes. There is observable change in lifestyles, aesthetic tastes, fashions, institutions and technologies. This is learning. Cool. One down, two still standing.

Do human social structures produce many different versions of themselves? Here, we enter the subtleties of distinction between different versions of a structure, on the one hand, and different structures, on the other hand. A structure remains the same, and just makes different versions of itself, as long as it stays structurally coherent. When it loses structural coherence, it turns into a different structure. How can I know that a structure keeps its s**t together, i.e. it stays internally coherent? That’s a tough question, and I know by experience that in the presence of tough questions, it is essential to keep it simple. One of the simplest facts about any structure is that it is made of parts. As long as all the initial parts are still there, I can assume they hold together somehow. In other words, as long as whatever I observe about social reality can be represented as the same complex set, with the same components inside, I can assume this is one and the same structure just making copies of itself. Still, this question remains a tough one, especially that any intelligent structure should be smart enough to morph into another intelligent structure when the time is right.      

The time is right when the old structure is no longer able to cope with the vector of constraints, and so I arrive to the third component question: how can I know there is adaptation to constraints? How can I know there are constraints for assessing fitness? In a very broad sense, I can see constraints when I see error, and correction thereof, in someone’s behaviour. In other words, when I can see someone sort of making two steps forward and one step back, correcting their course etc., this is a sign of adaptation to constraints. Unconstrained change is linear or exponential, whilst constrained change always shows signs of bumping against some kind of wall. Here comes a caveat as regards using artificial neural networks as simulators of collective human intelligence: they are any good only when they have constraints, and, consequently, when they make errors. An artificial neural network is no good at simulating unconstrained change. When I explore the possibility of simulating collective human intelligence with artificial neural networks, it has marks of a pleonasm. I can use AI as simulator only when the simulation involves constrained adaptation.

F**k! I have gone philosophical in those paragraphs. I can feel a part of my mind gently disconnecting from real life, and this is time to do something in order to stay close to said real life. Here is a topic, which I can treat as teaching material for my students, and, in the same time, make those general concepts bounce a bit around, inside my head, just to see what happens. I make the following claim: ‘Markets are manifestations of collective intelligence in human societies’. In science, this is a working hypothesis. It is called ‘working’ because it is not proven yet, and thus it has to earn its own living, so to say. This is why it has to work.

I pass in review the same bullet points: learning, for one, production of many alternative versions in a structure as opposed to creating new structures, for two, and the presence of constraints as the third component. Do markets manifest collective learning? Ledzzeee… Markets display fashions and trends. Markets adapt to lifestyles, and vice versa. Markets are very largely connected to technological change and facilitate the occurrence thereof. Yes, they learn.

How can I say whether a market stays the same structure and just experiments with many alternative versions thereof, or, conversely, whether it turns into another structure? It is time to go back to the fundamental concepts of microeconomics, and assess (once more), what makes a market structure. A market structure is the mechanism of setting transactional prices. When I don’t know s**t about said mechanism, I just observe prices and I can see two alternative pictures. Picture one is that of very similar prices, sort of clustered in the same, narrow interval. This is a market with equilibrium price, which translates into a local market equilibrium. Picture two shows noticeably disparate prices in what I initially perceived as the same category of goods. There is no equilibrium price in that case, and speaking more broadly, there is no local equilibrium in that market.

Markets with local equilibriums are assumed to be perfectly competitive or very close thereto. They are supposed to serve for transacting in goods so similar that customers perceive them as identical, and technologies used for producing those goods don’t differ sufficiently to create any kind of competitive advantage (homogeneity of supply), for one. Markets with local equilibriums require the customers to be so similar to each other in their tastes and purchasing patterns that, on the whole, they can be assumed identical (homogeneity of demand), for two. Customers are supposed to be perfectly informed about all the deals available in the market (perfect information). Oh, yes, the last one: no barriers to entry or exit. A perfectly competitive market is supposed to offer virtually no minimum investment required for suppliers to enter the game, and no sunk costs in the case of exit.  

Here is that thing: many markets present the alignment of prices typical for a state of local equilibrium, and yet their institutional characteristics – such as technologies, the diversity of goods offered, capital requirements and whatnot – do not match the textbook description of a perfectly competitive market. In other words, many markets form local equilibriums, thus they display equilibrium prices, without having the required institutional characteristics for that, at least in theory. In still other words, they manifest the alignment of prices typical for one type of market structure, whilst all the other characteristics are typical for another type of market structure.

Therefore, the completely justified ‘What the hell…?’question arises. What is a market structure, at the end of the day? What is a structure, in general?

I go down another avenue now. Some time ago, I signalled on my blog that I am learning programming in Python, or, as I should rather say, I make one more attempt at nailing it down. Programming teaches me a lot about the basic logic of what I do, including that whole theory of collective intelligence. Anyway, I started to keep a programming log, and here below, I paste the current entry, from November 27th, 2020.

 Tasks to practice:

  1. reading well structured CSV,
  2. plotting
  3. saving and retrieving a Jupyter Notebook in JupyterLab

I am practicing with Penn World Tables 9.1. I take the version without empty cells, and I transform it into CSV.

I create a new notebook on JupyterLab. I name it ‘Practice November 27th 2020’.

  • Path: demo/Practice November 27th 2020.ipynb

I upload the CSV version of Penn Tables 9.1 with no empty cells.

Shareable link: https://hub.gke2.mybinder.org/user/jupyterlab-jupyterlab-demo-zbo0hr9b/lab/tree/demo/PWT%209_1%20no%20empty%20cells.csv

Path: demo/PWT 9_1 no empty cells.csv

Download path: https://hub.gke2.mybinder.org/user/jupyterlab-jupyterlab-demo-zbo0hr9b/files/demo/PWT%209_1%20no%20empty%20cells.csv?_xsrf=2%7C2ce78815%7C547592bc83c83fd951870ab01113e7eb%7C1605464585

I code libraries:

import numpy as np

import pandas as pd

import matplotlib.pyplot as plt

import os

I check my directory:

>> os.getcwd()

result: ‘/home/jovyan/demo’

>> os.listdir()

result:

[‘jupyterlab.md’,

 ‘TCGA_Data’,

 ‘Lorenz.ipynb’,

 ‘lorenz.py’,

 ‘notebooks’,

 ‘data’,

 ‘jupyterlab-slides.pdf’,

 ‘markdown_python.md’,

 ‘big.csv’,

 ‘Practice November 27th 2020.ipynb’,

 ‘.ipynb_checkpoints’,

 ‘Untitled.ipynb’,

 ‘PWT 9_1 no empty cells.csv’]

>> PWT9_1=pd.DataFrame(pd.read_csv(‘PWT 9_1 no empty cells.csv’,header=0))

Result:

  File “<ipython-input-5-32375ff59964>”, line 1

    PWT9_1=pd.DataFrame(pd.read_csv(‘PWT 9_1 no empty cells.csv’,header=0))

                                       ^

SyntaxError: invalid character in identifier

>> I rename the file on Jupyter, into ‘PWT 9w1 no empty cells.csv’.

>> os.listdir()

Result:

[‘jupyterlab.md’,

 ‘TCGA_Data’,

 ‘Lorenz.ipynb’,

 ‘lorenz.py’,

 ‘notebooks’,

 ‘data’,

 ‘jupyterlab-slides.pdf’,

 ‘markdown_python.md’,

 ‘big.csv’,

 ‘Practice November 27th 2020.ipynb’,

 ‘.ipynb_checkpoints’,

 ‘Untitled.ipynb’,

 ‘PWT 9w1 no empty cells.csv’]

>> PWT9w1=pd.DataFrame(pd.read_csv(‘PWT 9w1 no empty cells.csv’,header=0))

Result: imported successfully

>> PWT9w1.describe()

Result: descriptive statistics

# I want to list columns (variables) in my file

>> PWT9w1.columns

Result:

Index([‘country’, ‘year’, ‘rgdpe’, ‘rgdpo’, ‘pop’, ’emp’, ’emp / pop’, ‘avh’,

       ‘hc’, ‘ccon’, ‘cda’, ‘cgdpe’, ‘cgdpo’, ‘cn’, ‘ck’, ‘ctfp’, ‘cwtfp’,

       ‘rgdpna’, ‘rconna’, ‘rdana’, ‘rnna’, ‘rkna’, ‘rtfpna’, ‘rwtfpna’,

       ‘labsh’, ‘irr’, ‘delta’, ‘xr’, ‘pl_con’, ‘pl_da’, ‘pl_gdpo’, ‘csh_c’,

       ‘csh_i’, ‘csh_g’, ‘csh_x’, ‘csh_m’, ‘csh_r’, ‘pl_c’, ‘pl_i’, ‘pl_g’,

       ‘pl_x’, ‘pl_m’, ‘pl_n’, ‘pl_k’],

      dtype=’object’)

>> PWT9w1.columns()

Result:

TypeError                                 Traceback (most recent call last)

<ipython-input-11-38dfd3da71de> in <module>

—-> 1 PWT9w1.columns()

TypeError: ‘Index’ object is not callable

# I try plotting

>> plt.plot(df.index, df[‘rnna’])

Result:

I get a long list of rows like: ‘<matplotlib.lines.Line2D at 0x7fc59d899c10>’, and a plot which is visibly not OK (looks like a fan).

# I want to separate one column from PWT9w1 as a separate series, and then plot it. Maybe it is going to work.

>> RNNA=pd.DataFrame(PWT9w1[‘rnna’])

Result: apparently successful.

# I try to plot RNNA

>> RNNA.plot()

Result:

<matplotlib.axes._subplots.AxesSubplot at 0x7fc55e7b9e10> + a basic graph. Good.

# I try to extract a few single series from PWT9w1 and to plot them. Let’s go for AVH, PL_I and CWTFP.

>> AVH=pd.DataFrame(PWT9w1[‘avh’])

>> PL_I=pd.DataFrame(PWT9w1[‘pl_i’])

>> CWTFP=pd.DataFrame(PWT9w1[‘cwtfp’])

>> AVH.plot()

>> PL_I.plot()

>> CWTFP.plot()

Result:

It worked. I have basic plots.

# It is 8:20 a.m. I go to make myself a coffee. I will quit JupyterLab for a moment. I saved my today’s notebook on server, and I will see how I can open it. Just in case, I make a PDF copy, and a Python copy on my disk.

I cannot do saving into PDF. An error occurs. I will have to sort it out. I made an *.ipynb copy on my disk.

demo/Practice November 27th 2020.ipynb

# It is 8:40 a.m. I am logging back into JupyterLab. I am trying to open my today’s notebook from path. Does not seem to work. I am uploading my *.ipynb copy. This worked. I know now: I upload the *.ipynb script from my own location and then just double click on it. I needed to re-upload my CSV file ‘PWT 9w1 no empty cells.csv’.

# I check if my re-uploaded CSV file is fully accessible. I discover that I need to re-create the whole algorithm. In other words: when I upload on JupyterLab a *.ipynb script from my disk, I need to re-run all the operations. My first idea is to re-run each executable cell in the uploaded script. That worked. Question: how to automatise it? Probably by making a Python script all in one piece, uploading my CSV data source first, and then run the whole script.

I like being a mad scientist

I like being a mad scientist. Am I a mad scientist? A tiny bit, yes, ‘cause I do research on things just because I feel like. Mind you, me being that mad scientist I like being happens to be practical. Those rabbit holes I dive into prove to have interesting outcomes in real life.

I feel like writing, and therefore thinking in an articulate way, about two things I do in parallel: science and investment. I have just realized these two realms of activity tend to merge and overlap in me. When I do science, I tend to think like an investor, or a gardener. I invest my personal energy in ideas which I think have potential for growth. On the other hand, I invest in the stock market with a strong dose of curiosity. Those companies, and the investment positions I can open therein, are like animals which I observe, try to figure out how not to get killed by them, or by predators that hunt them, and I try to domesticate those beasts.

The scientific thing I am working on is the application of artificial intelligence to studying collective intelligence in human societies. The thing I am working on sort of at the crest between science and investment is fundraising for scientific projects (my new job at the university).

The project aims at defining theoretical and empirical fundamentals for using intelligent digital clouds, i.e. large datasets combined with artificial neural networks, in the field of remote digital diagnostics and remote digital care, in medical sciences and medical engineering. That general purpose translates into science strictly speaking, and into the prospective development of medical technologies.

There is observable growth in the percentage of population using various forms of digital remote diagnostics and healthcare. Yet, that growth is very uneven across different social groups, which suggests an early, pre-popular stage of development in those technologies (Mahajan et al. 2020[i]). Other research confirms that supposition, as judging by the very disparate results obtained with those technologies, in terms of diagnostic and therapeutic effectiveness (Cheng et al. 2020[ii]; Wong et al. 2020[iii]). There are known solutions where intelligent digital cloud allows transforming the patient’s place of stay (home, apartment) into the local substitute of a hospital bed, which opens interesting possibilities as regards medical care for patients with significantly reduced mobility, e.g. geriatric patients (Ben Hassen et al. 2020[iv]). Already around 2015, creative applications of medical imagery appeared, where the camera of a person’s smartphone served for early detection of skin cancer (Bliznuks et al. 2017[v]). The connection between distance diagnostics with the acquisition and processing of image comes as one of the most interesting and challenging innovations to make in the here-discussed field of technology (Marwan et al. 2018[vi]). The experience of COVID-19 pandemic has already showed the potential of digital intelligent clouds in assisting national healthcare systems, especially in optimising and providing flexibility to the use of resources, both material and human (Alashhab et al. 2020[vii]). Yet, the same pandemic experience has shown the depth of social disparities as regards real actual access to digital technologies supported by intelligent clouds (Whitelaw et al. 2020[viii]). Intelligent digital clouds enter into learning-generative interactions with the professionals of healthcare. There is observable behavioural modification, for example, in students of healthcare who train with such technologies from the very beginning of their education (Brown Wilson et al. 2020[ix]). That phenomenon of behavioural change requires rethinking from scratch, with the development of each individual technology, the ethical and legal issues relative to interactions between users, on the one hand, and system operators, on the other hand (Godding 2019[x]).

Against that general background, the present project focuses on studying the phenomenon of tacit coordination among the users of digital technologies in remote medical diagnostics and remote medical care. Tacit coordination is essential as regards the well-founded application of intelligent digital cloud to support and enhance these technologies. Intelligent digital clouds are intelligent structures, i.e. they learn by producing many alternative versions of themselves and testing those versions for fitness in coping with a vector of external constraints. It is important to explore the extent and way that populations of users behave similarly, i.e. as collectively intelligent structures. The deep theoretical meaning of that exploration is the extent to which the intelligent structure of a digital cloud really maps and represents the collectively intelligent structure of the users’ population.

The scientific method used in the project explores the main working hypothesis that populations of actual and/or prospective patients, in their own health-related behaviour, and in their relations with the healthcare systems, are collectively intelligent structures, with tacit coordination. In practical terms, that hypothesis means that any intelligent digital cloud in the domain of remote medical care should assume collectively intelligent, thus more than just individual, behavioural change on the part of users. Collectively intelligent behavioural change in a population, marked by tacit coordination, is a long-term, evolutionary process of adaptive walk in rugged landscape (Kauffman & Levin 1987[xi]; Nahum et al. 2015[xii]). Therefore, it is something deeper and more durable that fashions and styles. It is the deep, underlying mechanism of social change accompanying the use of digital intelligent clouds in medical engineering.

The scientific method used in this project aims at exploring and checking the above-stated working hypothesis by creating a large and differentiated dataset of health-related data, and processing that dataset in an intelligent digital cloud, in two distinct phases. The first phase consists in processing a first sample of data with a relatively simple, artificial neural network, in order to discover its underlying orientations and its mechanisms of collective learning. The second phase allows an intelligent digital cloud to respond adaptively to users behaviour, i.e to produce intelligent interaction with them. The first phase serves to understand the process of adaptation observable in the second phase. Both phases are explained more in detail below.

The tests of, respectively, orientation and mode of learning, in the first phase of empirical research aim at defining the vector of collectively pursued social outcomes in the population studied. The initially collected empirical dataset is transformed, with the use of an artificial neural network, into as many representations as there are variables in the set, with each representation being oriented on a different variable as its output (with the remaining ones considered as instrumental input). Each such transformation of the initial set can be tested for its mathematical similarity therewith (e.g. for Euclidean distance between the vectors of expected mean values). Transformations displaying relatively the greatest similarity to the source dataset are assumed to be the most representative for the collectively intelligent structure in the population studied, and, consequently, their output variables can be assumed to represent collectively pursued social outcomes in that collective intelligence (see, for example: Wasniewski 2020[xiii]). Modes of learning in that dataset can be discovered by creating a shadow vector of probabilities (representing, for example, a finite set of social roles endorsed with given probabilities by members of the population), and a shadow process that introduces random disturbance, akin to the theory of Black Swans (Taleb 2007[xiv]; Taleb & Blyth 2011[xv]). The so-created shadow structure is subsequently transformed with an artificial neural network in as many alternative versions as there are variables in the source empirical dataset, each version taking a different variable from the set as its pre-set output. Three different modes of learning can be observed, and assigned to particular variables: a) cyclical adjustment without clear end-state b) finite optimisation with defined end-state and c) structural disintegration with growing amplitude of oscillation around central states.

The above-summarised first phase of research involves the use of two basic digital tools, i.e. an online functionality to collect empirical data from and about patients, and an artificial neural network to process it. There comes an important aspect of that first phase in research, i.e. the actual collectability and capacity to process the corresponding data. It can be assumed that comprehensive medical care involves the collection of both strictly health-related data (e.g. blood pressure, blood sugar etc.), and peripheral data of various kinds (environmental, behavioural). The complexity of data collected in that phase can be additionally enhanced by including imagery such as pictures taken with smartphones (e.g. skin, facial symmetry etc.). In that respect, the first phase of research aims at testing the actual possibility and reliability of collection in various types of data. Phenomena such as outliers of fake data can be detected then.

Once the first phase is finished and expressed in the form of theoretical conclusions, the second phase of research is triggered. An intelligent digital cloud is created, with the capacity of intelligent adaptation to users’ behaviour. A very basic example of such adaptation are behavioural reinforcements. The cloud can generate simple messages of praise for health-functional behaviour (positive reinforcements), or, conversely, warning messages in the case of health-dysfunctional behaviour (negative reinforcements). More elaborate form of intelligent adaptation are possible to implement, e.g. a Twitter-like reinforcement to create trending information, or a Tik-Tok-like reinforcement to stay in the loop of communication in the cloud. This phase aims specifically at defining the actually workable scope and strength of possible behavioural reinforcements which a digital functionality in the domain of healthcare could use vis a vis its end users. Legal and ethical implications thereof are studied as one of the theoretical outcomes of that second phase.

I feel like generalizing a bit my last few updates, and to develop on the general hypothesis of collectively intelligent, human social structures. In order to consider any social structure as manifestation of collective intelligence, I need to place intelligence in a specific empirical context. I need an otherwise exogenous environment, which the social structure has to adapt to. Empirical study of collective intelligence, such as I have been doing it, and, as a matter of fact, the only one I know how to do, consists in studying adaptive effort in human social structures. 


[i] Shiwani Mahajan, Yuan Lu, Erica S. Spatz, Khurram Nasir, Harlan M. Krumholz, Trends and Predictors of Use of Digital Health Technology in the United States, The American Journal of Medicine, 2020, ISSN 0002-9343, https://doi.org/10.1016/j.amjmed.2020.06.033 (http://www.sciencedirect.com/science/article/pii/S0002934320306173  )

[ii] Lei Cheng, Mingxia Duan, Xiaorong Mao, Youhong Ge, Yanqing Wang, Haiying Huang, The effect of digital health technologies on managing symptoms across pediatric cancer continuum: A systematic review, International Journal of Nursing Sciences, 2020, ISSN 2352-0132, https://doi.org/10.1016/j.ijnss.2020.10.002 , (http://www.sciencedirect.com/science/article/pii/S2352013220301630 )

[iii] Charlene A. Wong, Farrah Madanay, Elizabeth M. Ozer, Sion K. Harris, Megan Moore, Samuel O. Master, Megan Moreno, Elissa R. Weitzman, Digital Health Technology to Enhance Adolescent and Young Adult Clinical Preventive Services: Affordances and Challenges, Journal of Adolescent Health, Volume 67, Issue 2, Supplement, 2020, Pages S24-S33, ISSN 1054-139X, https://doi.org/10.1016/j.jadohealth.2019.10.018 , (http://www.sciencedirect.com/science/article/pii/S1054139X19308675 )

[iv] Hassen, H. B., Ayari, N., & Hamdi, B. (2020). A home hospitalization system based on the Internet of things, Fog computing and cloud computing. Informatics in Medicine Unlocked, 100368, https://doi.org/10.1016/j.imu.2020.100368

[v] Bliznuks, D., Bolocko, K., Sisojevs, A., & Ayub, K. (2017). Towards the Scalable Cloud Platform for Non-Invasive Skin Cancer Diagnostics. Procedia Computer Science, 104, 468-476

[vi] Marwan, M., Kartit, A., & Ouahmane, H. (2018). Security enhancement in healthcare cloud using machine learning. Procedia Computer Science, 127, 388-397.

[vii] Alashhab, Z. R., Anbar, M., Singh, M. M., Leau, Y. B., Al-Sai, Z. A., & Alhayja’a, S. A. (2020). Impact of Coronavirus Pandemic Crisis on Technologies and Cloud Computing Applications. Journal of Electronic Science and Technology, 100059. https://doi.org/10.1016/j.jnlest.2020.100059

[viii] Whitelaw, S., Mamas, M. A., Topol, E., & Van Spall, H. G. (2020). Applications of digital technology in COVID-19 pandemic planning and response. The Lancet Digital Health. https://doi.org/10.1016/S2589-7500(20)30142-4

[ix] Christine Brown Wilson, Christine Slade, Wai Yee Amy Wong, Ann Peacock, Health care students experience of using digital technology in patient care: A scoping review of the literature, Nurse Education Today, Volume 95, 2020, 104580, ISSN 0260-6917, https://doi.org/10.1016/j.nedt.2020.104580 ,(http://www.sciencedirect.com/science/article/pii/S0260691720314301 )

[x] Piers Gooding, Mapping the rise of digital mental health technologies: Emerging issues for law and society, International Journal of Law and Psychiatry, Volume 67, 2019, 101498, ISSN 0160-2527, https://doi.org/10.1016/j.ijlp.2019.101498 , (http://www.sciencedirect.com/science/article/pii/S0160252719300950 )

[xi] Kauffman, S., & Levin, S. (1987). Towards a general theory of adaptive walks on rugged landscapes. Journal of theoretical Biology, 128(1), 11-45

[xii] Nahum, J. R., Godfrey-Smith, P., Harding, B. N., Marcus, J. H., Carlson-Stevermer, J., & Kerr, B. (2015). A tortoise–hare pattern seen in adapting structured and unstructured populations suggests a rugged fitness landscape in bacteria. Proceedings of the National Academy of Sciences, 112(24), 7530-7535, www.pnas.org/cgi/doi/10.1073/pnas.1410631112 

[xiii] Wasniewski, K. (2020). Energy efficiency as manifestation of collective intelligence in human societies. Energy, 191, 116500. https://doi.org/10.1016/j.energy.2019.116500

[xiv] Taleb, N. N. (2007). The black swan: The impact of the highly improbable (Vol. 2). Random house

[xv] Taleb, N. N., & Blyth, M. (2011). The black swan of Cairo: How suppressing volatility makes the world less predictable and more dangerous. Foreign Affairs, 33-39

Checkpoint for business

I am changing the path of my writing, ‘cause real life knocks at my door, and it goes ‘Hey, scientist, you economist, right? Good, ‘cause there is some good stuff, I mean, ideas for business. That’s economics, right? Just sort of real stuff, OK?’. Sure. I can go with real things, but first, I explain. At my university, I have recently taken on the job of coordinating research projects and finding some financing for them. One of the first things I did, right after November 1st, was to send around a reminder that we had 12 days left to apply, with the Ministry of Science and Higher Education, for relatively small grants, in a call titled ‘Students make innovation’. Honestly, I was expecting to have 1 – 2 applications max, in response. Yet, life can make surprises. There are 7 innovative ideas in terms of feedback, and 5 of them look like good material for business concepts and for serious development. I am taking on giving them a first prod, in terms of business planning. Interestingly, those ideas are all related to medical technologies, thus something I have been both investing a lot in, during 2020, and thinking a lot about, as a possible path of substantial technological change.

I am progressively wrapping my mind up around ideas and projects formulated by those students, and, walking down the same intellectual avenue, I am making sense of making money on and around science. I am fully appreciating the value of real-life experience. I have been doing research and writing about technological change for years. Until recently, I had that strange sort of complex logical oxymoron in my mind, where I had the impression of both understanding technological change, and missing a fundamental aspect of it. Now, I think I start to understand that missing part: it is the microeconomic mechanism of innovation.

I have collected those 5 ideas from ambitious students at Faculty of Medicine, in my university:

>> Idea 1: An AI-based app, with a chatbot, which facilitates early diagnosis of cardio-vascular diseases

>> Idea 2: Similar thing, i.e. a mobile app, but oriented on early diagnosis and monitoring of urinary incontinence in women.

>> Idea 3: Technology for early diagnosis of Parkinson’s disease, through the observation of speech and motor disturbance.

>> Idea 4: Intelligent cloud to store, study and possibly find something smart about two types of data: basic health data (blood-work etc.), and environmental factors (pollution, climate etc.).

>> Idea 5: Something similar to Idea 4, i.e. an intelligent cloud with medical edge, but oriented on storing and studying data from large cohorts of patients infected with Sars-Cov-2. 

As I look at those 5 ideas, surprisingly simple and basic association of ideas comes to my mind: hierarchy of interest and the role of overarching technologies. It is something I have never thought seriously about: when we face many alternative ideas for new technologies, almost intuitively we hierarchize them. Some of them seem more interesting, some others are less. I am trying to dig out of my own mind the criteria I use, and here they are: I hierarchize with the expected lifecycle of technology, and the breadth of the technological platform involved. In other words, I like big, solid, durable stuff. I am intuitively looking for innovations which offer a relatively long lifecycle in the corresponding technology, and the technology involved is sort of two-level, with a broad base and many specific applicational developments built upon that base.  

Why do I take this specific approach? One step further down into my mind, I discover the willingness to have some sort of broad base of business and scientific points of attachment when I start business planning. I want some kind of horizon to choose my exact target on. The common technological base among those 5 ideas is some kind of intelligent digital cloud, with artificial intelligence learns on the data that flows in. The common scientific base is the collection of health-related data, including behavioural aspects (e.g. sleep, diet, exercise, stress management).

The financial context which I am operating in is complex. It is made of public financial grants for strictly speaking scientific research, other public financing for projects more oriented on research and development in consortiums made of universities and business entities, still a different stream of financing for business entities alone, and finally private capital to look for once the technology is ripe enough for being marketed.

I am operating from an academic position. Intuitively, I guess that the more valuable science academic people bring to their common table with businesspeople and government people, the better position those academics will have in any future joint ventures. Hence, we should max out on useful, functional science to back those ideas. I am trying to understand what that science should consist in. An intelligent digital cloud can yield mind-blowing findings. I know that for a fact from my own research. Yet, what I know too is that I need very fundamental science, something at the frontier of logic, philosophy, mathematics, and of the phenomenology pertinent to the scientific research at hand, in order to understand and use meaningfully whatever the intelligent digital cloud spits back out, after being fed with data. I have already gone once through that process of understanding, as I have been working on the application of artificial neural networks to the simulation of collective intelligence in human societies. I had to coin up a theory of intelligent structure, applicable to the problem at hand. I believe that any application of intelligent digital cloud requires assuming that whatever we investigate with that cloud is an intelligent structure, i.e. a structure which learns by producing many alternative versions of itself, and testing them for their fitness to optimize a given desired outcome.  

With those medical ideas, I (we?) need to figure out what the intelligent structure in action is, how can it possibly produce many alternative versions of itself, and how those alternative thingies can be tested for fitness. What we have in a medically edged digital cloud is data about a population of people. The desired outcome we look for is health, quite simply. I said ‘simply’? No, it was a mistake. It is health, in all complexity. Those apps our students want to develop are supposed to pull someone out of the crowd, someone with early symptoms which they do not identify as relevant. In a next step, some kind of dialogue is proposed to such a person, sort of let’s dig a bit more into those symptoms, let’s try something simple to treat them etc. The vector of health in that population is made, roughly speaking, of three sub-vectors: preventive health (e.g. exercise, sleep, stop eating crap food), effectiveness of early medical intervention (e.g. c’mon men, if you are 30 and can’t have erection, you are bound to concoct some cardio-vascular s**t), and finally effectiveness of advanced medicine, applied when the former two haven’t worked.  

I can see at least one salient, scientific hurdle to jump over: that outcome vector of health. In my own research, I found out that artificial neural networks can give empirical evidence as for what outcomes we are really actually after, as collectively intelligent a structure. That’s my first big idea as regards those digital medical solutions: we collect medical and behavioural data in the cloud, we assume that data represents experimental learning of a collectively intelligent social structure, and we make the cloud discover the phenomena (variables) which the structure actually optimizes.

My own experience with that method is that societies which I studied optimize outcomes which look almost too simplistic in the fancy realm of social sciences, such as the average number of hours worked per person per year, the average amount of human capital per person, measured as years of education before entering the job market, or price index in exports, thus the average price which countries sell their exports at. In general, societies which I studied tend to optimize structural proportions, measurables as coefficients in the lines of ‘amount of thingy one divided by the amount of thingy two’.  

Checkpoint for business. Supposing that our research team, at the Andrzej Frycz – Modrzewski Krakow University, comes up with robust empirical results of that type, i.e. when we take a million of random humans and their broadly spoken health, and we assume they are collectively intelligent (I mean, beyond Facebook), then their collectively shared experimental learning of the stuff called ‘life’ makes them optimize health-related behavioural patterns A, B, and C. How can those findings be used in the form of marketable digital technologies? If I know the behavioural patterns someone tries to optimize, I can break those patterns down into small components and figure out a way to utilize the way to influence behaviour. It is a common technique in marketing. If I know someone’s lifestyle, and the values that come with it, I can artfully include into that pattern the technology I am marketing. In this specific case, it could be done ethically and for a good purpose, for a change.  In that context, my mind keeps returning to that barely marked trend of rising mortality in adult males in high-income countries, since 2016 (https://data.worldbank.org/indicator/SP.DYN.AMRT.MA). WTF? We’ll live, we’ll see.

The understanding of how collective human intelligence goes after health could be, therefore, the kind of scientific bacon our university could bring to the table when starting serious consortial projects with business partners, for the development of intelligent digital technologies in healthcare. Let’s move one step forward. As I have been using artificial neural network in my research on what I call, and maybe overstate as collective human intelligence, I have been running those experiments where I take a handful of behavioural patterns, I assign them probabilities of happening (sort of how many folks out of 10 000 will endorse those patterns), and I treat those probabilities as instrumental input in the optimization of pre-defined social outcomes. I was going to forget: I add random disturbance to that form of learning, in the lines of the Black Swan theory (Taleb 2007[1]; Taleb & Blyth 2011[2]).

I nailed down three patterns of collective learning in the presence of randomly happening s**t: recurrent, optimizing, and panic mode. The recurrent pattern of collective learning, which I tentatively expect to be the most powerful, is essentially a cycle with recurrent amplitude of error. We face a challenge, we go astray, we run around like headless chickens for a while, and then we figure s**t out, we progressively settle for solutions, and then the cycle repeats. It is like everlasting learning, without any clear endgame. The optimizing pattern is something I observed when making my collective intelligence optimize something like the headcount of population, or the GDP. There is a clear phase of ‘WTF!’(error in optimization goes haywire), which, passing through a somehow milder ‘WTH?’, ends up in a calm phase of ‘what works?’, with very little residual error.

The panic mode is different from the other two. There is no visible learning in the strict sense of the term, i.e. no visible narrowing down of error in what the network estimates as its desired outcome. On the contrary, that type of network consistently goes into the headless chicken mode, and it is becoming more and more headless with each consecutive hundred of experimental rounds, so to say. It happens when I make my network go after some very specific socio-economic outcomes, like price index in capital goods (i.e. fixed assets) or Total Factor Productivity.

Checkpoint for business, once again. That particular thing, about Black Swans randomly disturbing people in their endorsing of behavioural patterns, what business value does it have in a digital cloud? I suppose there are fields of applied medical sciences, for example epidemiology, or the management of healthcare systems, where it pays to know in advance which aspects of our health-related behaviour are the most prone to deep destabilization in the presence of exogenous stressors (e.g. epidemic, or the president of our country trending on Tik Tok). It could also pay off to know, which collectively pursued outcomes act as stabilizers. If another pandemic breaks out, for example, which social activities and social roles should keep going, at all price, on the one hand, and which ones can be safely shut down, as they will go haywire anyway?      


[1] Taleb, N. N. (2007). The black swan: The impact of the highly improbable (Vol. 2). Random house.

[2] Taleb, N. N., & Blyth, M. (2011). The black swan of Cairo: How suppressing volatility makes the world less predictable and more dangerous. Foreign Affairs, 33-39.