Ugly little cherubs

I am working on my long-term investment strategy, and I keep using Warren Buffett’s tenets of investment (Hagstrom, Robert G., The Warren Buffett Way, p. 98, Wiley, Kindle Edition).

At the same time, one of my strategic goals is coming true, progressively: other people reach out to me and ask whether I would agree to advise them on their investment in the stock market. People see my results, sometimes I talk to them about my investment philosophy, and it seems to catch on.

This is both a blessing and a challenge. My dream, 2 years ago, when I was coming back to the business of regular investing in the stock market, was to create, with time, something like a small investment fund specialized in funding highly innovative, promising start-ups. It looks like that dream is progressively becoming reality. Reality requires realistic and intelligible strategies. I need to phrase my own investment experience in a manner which is both understandable and convincing to other people.

As I think about it, I want to articulate my strategy along three logical paths. Firstly, what is the logic in my current portfolio? Why am I holding the investment positions I am holding? Why in these proportions? How have I come to have that particular portfolio? If I can verbally explain my investment process so far, I will know what kind of strategy I have been following up to now. This is the first step, and the next one is to formulate a strategy for the future. In one of my recent updates (Tesla first in line), I briefly introduced my portfolio, such as it was on December 2nd, 2021. Since then, I did some thinking, most of all in reference to the investment philosophy of Warren Buffett, and I made some moves. I came to the conclusion that my portfolio was spread across a bit too many stocks, and the whole was somehow baroque. By ‘baroque’ I mean that type of structure where we can have a horribly ugly little cherub, accompanied by just as ugly a little shepherd, but the whole looks nice due to the presence of a massive golden rim, woven around the ugliness.

I formed an idea of which positions were the ugly cherubs in my portfolio from December 2nd, and I kicked them out of the picture. In the list below, these entities are marked in slashed bold italic:

>> Tesla (https://ir.tesla.com/#tab-quarterly-disclosure),

>> Allegro.eu SA (https://about.allegro.eu/ir-home),

>> Alten (https://www.alten.com/investors/),

>> Altimmune Inc (https://ir.altimmune.com/),

>> Apple Inc (https://investor.apple.com/investor-relations/default.aspx),

>> CureVac NV (https://www.curevac.com/en/investor-relations/overview/),

>> Deepmatter Group PLC (https://www.deepmatter.io/investors/),

>> FedEx Corp (https://investors.fedex.com/home/default.aspx),

>> First Solar Inc (https://investor.firstsolar.com/home/default.aspx),

>> Inpost SA (https://www.inpost.eu/investors),

>> Intellia Therapeutics Inc (https://ir.intelliatx.com/),

>> Lucid Group Inc (https://ir.lucidmotors.com/),

>> Mercator Medical SA (https://en.mercatormedical.eu/investors/),

>> Nucor Corp (https://www.nucor.com/investors/),

>> Oncolytics Biotech Inc (https://ir.oncolyticsbiotech.com/),

>> Solaredge Technologies Inc (https://investors.solaredge.com/),

>> Soligenix Inc (https://ir.soligenix.com/),

>> Vitalhub Corp (https://www.vitalhub.com/investors),

>> Whirlpool Corp (https://investors.whirlpoolcorp.com/home/default.aspx),

>> Biogened (https://biogened.com/),

>> Biomaxima (https://www.biomaxima.com/325-investor-relations.html),

>> CyfrPolsat (https://grupapolsatplus.pl/en/investor-relations),

>> Emtasia (https://elemental-asia.biz/en/),

>> Forposta (http://www.forposta.eu/relacje_inwestorskie/dzialalnosc_i_historia.html),

>> Gameops (http://www.gameops.pl/en/about-us/),

>> HMInvest (https://grupainwest.pl/relacje),

>> Ifirma (https://www.ifirma.pl/dla-inwestorow),

>> Moderncom (http://moderncommercesa.com/wpmccom/en/dla-inwestorow/),

>> PolimexMS (https://www.polimex-mostostal.pl/en/reports/raporty-okresowe),

>> Selvita (https://selvita.com/investors-media/),

>> Swissmed (https://swissmed.com.pl/?menu_id=8)

Why did I put those specific investment positions into the bag labelled ‘ugly little cherubs in the picture’? Here comes a cognitive clash between the investment philosophy I used to follow, and the one I have been absorbing as I study in depth that of Warren Buffett and Berkshire Hathaway. Before, I was using a purely probabilistic approach, according to which the stock market is so unpredictable that my likelihood of failure, on any individual investment, is greater than the likelihood of success, and, therefore, the more I spread my portfolio between different stocks, the less exposed I am to the risk of a complete fuck-up. As I studied the investment philosophy of Warren Buffett, I had great behavioural insights as regards my own decisions. Diversifying one’s portfolio is cool, yet it can lead to careless individual choices. If my portfolio is really diversified, each individual position weighs so little that I am tempted to overlook its important features. At the end of the day, I might land with a bag full of potatoes instead of a chest full of gems.

I decided to kick out the superfluous. What did I put in this category? The superfluous investment positions which I kicked out shared some common characteristics, which I reconstructed from the history of the corresponding ‘buy’ orders. Firstly, they were comparatively small positions, hundreds of euros at best. This is one of the lessons from Warren Buffett: small investments matter little, and they are probably going to stay that way. There is no point in collecting stocks which don’t matter to me. They give us a false sense of security, which is detrimental to the focus on capital gains.

Secondly, I realized that I had bought those ugly little cherubs by affinity to something else, not for their own sake. Two of them, FedEx and Allegro, are in the business of express delivery. I made a ton of money on their stock, just as on the stock of Deutsche Post, during the trough of the pandemic, when retail distribution went mostly into the ‘online order >> express delivery’ pipeline. That was back then; then I sold out, and later I thought ‘why not try the same hack again?’. The ‘why not…?’ question was easy to answer, actually: because times change, and the commodity markets have adapted to the pandemic. FedEx and Allegro have returned to what they used to be: solid businesses without much charm for me.

Four others – Soligenix, Altimmune, CureVac and Oncolytics Biotech – are biotechnological companies. Once again: I made a ton of money in 2020 on biotech companies, because of the pandemic. Now, emotions in the market have settled, and biotech companies are back to what they used to be, namely interesting investments endowed with high risk, high potential reward, and a bottomless capacity for burning cash. Those companies are what Tesla used to be a decade ago. I kept serious positions in a few other biotech businesses: Intellia Therapeutics, Biogened, Biomaxima, and Selvita. I want to keep a few such undug gems in my portfolio, yet too much would be too much.

Thirdly, I had a loss on all of those ugly little cherubs I have just kicked out of my portfolio. Summing up, these were small positions, casually opened without much strategic thinking, and they were bringing me a loss. I could have waited for a profit, but I preferred to sell them off and concentrate my capital on the really promising stocks, which I nailed down using the method of intrinsic value. I realized that my portfolio was what it was, one week ago, before I started strategizing consciously, because I had a hard time finding a balance between two different motivations: running away from the danger of massive loss, on the one hand, and focusing on investments with true potential for long-term gains, on the other.

Let me focus more specifically on the concept of intrinsic value. As Warren Buffett uses it, intrinsic value is based on what he calls ‘owner earnings’ from a business, capitalized over a window in time corresponding to the risk-free yield on sovereign bonds. The financial statement used for calculating intrinsic value is the cash-flow statement of the company in question, plus external data as regards the average annual yield on sovereign bonds. The basic formula goes: owner’s earnings = net income after tax + amortization charges – capital expenditures. Once that is nailed down, I divide those owner’s earnings by the interest rate on long-term sovereign bonds. For my positions in the US stock market, I use the long-term yield on US federal bonds, i.e. 1,35% a year. As regards my portfolio in the Polish stock market, I use the 3,242% long-term yield on Polish sovereign bonds.
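For readers who like to check such arithmetic, here is a minimal Python sketch of that formula. The functions are generic; the example figures are Tesla’s 2020 numbers discussed just below.

```python
# A minimal sketch of the owner-earnings / intrinsic-value method
# described above. The 1,35% discount rate is the long-term US Treasury
# yield used throughout this post.

def owner_earnings(net_income: float, amortization: float, capex: float) -> float:
    """Owner earnings = net income after tax + D&A - capital expenditures."""
    return net_income + amortization - capex

def intrinsic_value(earnings: float, risk_free_rate: float) -> float:
    """Capitalize owner earnings at the long-term sovereign bond yield."""
    return earnings / risk_free_rate

oe = owner_earnings(net_income=862.0, amortization=2_322.0, capex=3_132.0)  # $ mln
print(oe)                                      # 52.0
print(round(intrinsic_value(oe, 0.0135), 2))   # 3851.85 ($ mln)
```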

I have calculated that intrinsic value for a few of my investments (I mean those I kept in my portfolio), on the basis of their financial results for 2020, and compared it to their market capitalisation. Then, additionally, I did the same calculation based on their published (yet unaudited) cash flows for Q3 2021. Here are the results I got for Tesla. Net income 2020 $862,00 mln plus amortization charges 2020 $2 322,00 mln minus capital expenditures 2020 $3 132,00 mln equals owner’s earnings 2020 $52,00 mln. Divided by 1,35%, that gives an intrinsic value of $3 851,85 mln. Market capitalization on December 6th, 2021: $1 019 000,00 mln. The intrinsic value is more than two orders of magnitude smaller than the market capitalisation. Looks risky.

Let’s see the Q3 2021 unaudited cash flows. Here, I extrapolate the numbers for the nine months of 2021 over the whole year 2021: I multiply them by 4/3. Extrapolated net income for 2021 $4 401,33 mln plus extrapolated amortization charges $2 750,67 mln minus extrapolated capital expenditures $7 936,00 mln equals extrapolated owner’s earnings of –$784,00 mln, i.e. a negative number. Following Buffett’s remark that, over time, capital expenditures tend to equal amortization charges, I fall back on extrapolated net income as a proxy for owner’s earnings: $4 401,33 mln divided by 1,35% gives an extrapolated intrinsic value of $326 024,69 mln. It is much closer to market capitalization, yet still much below it as for now. A lot of risk in that biggest investment position of mine. We live and we learn, as they say.
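The 4/3 annualisation trick, and the fallback on net income when owner’s earnings turn negative, can be sketched the same way; the figures are Tesla’s nine-month 2021 numbers quoted above.

```python
# A minimal sketch of the Q1-Q3 annualisation used above, including the
# fallback to net income when extrapolated owner earnings come out negative.

def annualise(nine_month_value: float) -> float:
    """Extrapolate nine months of results over a full fiscal year."""
    return nine_month_value * 4.0 / 3.0

net_income_9m = 3_301.0     # $ mln, Q1-Q3 2021
amortisation_9m = 2_063.0   # $ mln
capex_9m = 5_952.0          # $ mln

owner_earnings = annualise(net_income_9m + amortisation_9m - capex_9m)
print(round(owner_earnings, 2))     # -784.0 -> negative, so fall back

proxy = annualise(net_income_9m)    # net income as proxy for owner earnings
print(round(proxy / 0.0135, 2))     # 326024.69 ($ mln intrinsic value)
```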

Another stock: Apple. With the economic size of a medium-sized country, Apple seems solid. Let’s walk it through the computational path of intrinsic value. There is an important methodological remark to formulate about this one. In Apple’s cash-flow statement for 2020-2021 (Apple Inc. ends its fiscal year at the end of September of the calendar year), under the category of ‘Investing activities’, most of the business pertains to buying and selling financial assets. It goes like this:

Investing activities, in millions of USD (figures in parentheses are cash outflows):

>> Purchases of marketable securities: (109 558)

>> Proceeds from maturities of marketable securities: 59 023

>> Proceeds from sales of marketable securities: 47 460

>> Payments for acquisition of property, plant and equipment: (11 085)

>> Payments made in connection with business acquisitions, net: (33)

>> Purchases of non-marketable securities: (131)

>> Proceeds from non-marketable securities: 387

>> Bottom line: cash generated by/(used in) investing activities: (14 545)

Now, when I look at the thing through the lens of Warren Buffett’s investment tenets, anything that happens with and through financial securities is retention of cash in the business. It just depends on what exact form we want to keep that cash in. Transactions grouped under the heading of ‘Purchases of marketable securities (109 558)’, for example, are not capital expenditures. They do not lead to exchanging cash money for productive technology. In all that list of investing activities, only two categories, namely ‘Payments for acquisition of property, plant and equipment (11 085)’ and ‘Payments made in connection with business acquisitions, net (33)’, are capital expenditures sensu stricto. All the other categories, although placed in the account of investing activities, are labelled as such just because they pertain to transactions on assets. From Warren Buffett’s point of view, they all mean retained cash.

Therefore, when I calculate owner’s earnings for Apple, based on their latest annual cash-flow, I go like:

>> Net Income $94 680 mln + Depreciation and Amortization $11 284 mln + Purchases of marketable securities $109 558 mln + Proceeds from maturities of marketable securities $59 023 mln + Proceeds from sales of marketable securities $47 460 mln – Payments for acquisition of property, plant and equipment $11 085 mln – Payments made in connection with business acquisitions, net $33 mln + Purchases of non-marketable securities $131 mln + Proceeds from non-marketable securities $387 mln = Owner’s earnings $311 405 mln.

I divide that number by the 1,35% annual yield on long-term US Treasury bonds, and I get an intrinsic value of $23 067 037 mln, against a market capitalisation floating around $2 600 000 mln, which gives a huge excess of the former over the latter. Good investment.
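Here is a minimal sketch of that reclassification. The sign convention (transactions on securities counted as retained cash, only PP&E and business acquisitions counted as capital expenditures) follows the reasoning above, and the figures are Apple’s FY2021 cash-flow items.

```python
# A minimal sketch of the Buffett-style reclassification argued above:
# securities transactions are treated as retained cash; only PP&E and
# business acquisitions are subtracted as capital expenditures.

cash_flow = {
    "net_income": 94_680.0,
    "depreciation_amortization": 11_284.0,
    "purchases_marketable": 109_558.0,      # retained cash, not capex
    "maturities_marketable": 59_023.0,
    "sales_marketable": 47_460.0,
    "capex_ppe": -11_085.0,                 # capital expenditure sensu stricto
    "business_acquisitions": -33.0,         # capital expenditure sensu stricto
    "purchases_non_marketable": 131.0,
    "proceeds_non_marketable": 387.0,
}

owner_earnings = sum(cash_flow.values())
print(round(owner_earnings, 2))            # 311405.0 ($ mln)
print(round(owner_earnings / 0.0135, 2))   # ~23067037.04 ($ mln intrinsic value)
```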

I pass to another one of my investments, First Solar Inc. (https://investor.firstsolar.com/financials/sec-filings/default.aspx). Same thing: investing activities consist mostly of moves on financial assets. It looks like:

>> Net income (loss) $398,35 mln

>> Depreciation, amortization and accretion $232,93 mln

>> Impairments and net losses on disposal of long-lived assets $35,81 mln

… and then come the Cash flows from investing activities:

>> Purchases of property, plant and equipment ($416,64 mln)

>> Purchases of marketable securities and restricted marketable securities ($901,92 mln)

>> Proceeds from sales and maturities of marketable securities and restricted marketable securities $1 192,83 mln

>> Other investing activities ($5,5 mln)

… and therefore, from the perspective of owner’s earnings, the net cash used in investing activities is not, as stated officially, minus $131,23 mln. Net of transactions on financial assets, with those transactions counted as retained cash, the investing flow is: – $416,64 mln + $901,92 mln + $1 192,83 mln – $5,5 mln = $1 672,61 mln. Combined with the aforementioned net income, amortization and fiscally compensated impairments on long-lived assets, it makes owner’s earnings of $2 339,7 mln, and an intrinsic value of $173 311,11 mln, against some $10 450 mln in market capitalization. Once again, good and solid in terms of Warren Buffett’s margin of safety.

I am starting to use the method of intrinsic value across my investments, and it gives interesting results. It allows me to distinguish, with a precise gauge, between high-risk and low-risk investments.
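The gauge itself can be sketched as a trivial screening loop. The figures below are just the ones computed in this post, and the threshold (intrinsic value above market capitalisation) is my reading of Buffett’s margin of safety.

```python
# A hedged sketch of the screening gauge described above: compare
# Buffett-style intrinsic value to market capitalisation per position.

positions = {
    # name: (intrinsic value $ mln, market cap $ mln), as computed in this post
    "Tesla (FY2020)": (3_851.85, 1_019_000.0),
    "Apple (FY2021)": (23_067_037.0, 2_600_000.0),
    "First Solar (FY2020)": (173_311.11, 10_450.0),
}

for name, (iv, cap) in positions.items():
    ratio = iv / cap
    verdict = "margin of safety" if ratio > 1 else "looks risky"
    print(f"{name}: intrinsic/cap = {ratio:,.2f} -> {verdict}")
```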

I need to watch out for intrinsic value

Here I am, back on my blog, and I am focusing on one thing: my stock-market investment strategy. I want to optimise that strategy, and to that end I refer to Warren Buffett and his investment philosophy, such as you and I can find it in the annual reports of Berkshire Hathaway Inc. (https://www.berkshirehathaway.com/reports.html). I take Warren Buffett’s principles and apply them comparatively to two companies: Tesla (https://ir.tesla.com/#tab-quarterly-disclosure), the largest position in my investment portfolio, on the one hand, and Selvita (https://selvita.com/investors-media/), a Polish biotech company in whose stock I am starting to build a serious investment, on the other.

With Tesla, I have already started a Warren Buffett-style analysis (Tesla first in line), and now I continue in a comparative way. It is a trick that works: when I want to understand something complex, I can compare that complex thing with another complex thing. Comparison is a fundamental cognitive strategy. It lets me perceive the differences and similarities between complex phenomena (everything is complex, in fact), and thus grasp what cognitive science calls ‘salience’.

The kind of salience I focus on now consists of the distinctive (and therefore salient and important) traits of these two companies – Tesla (https://ir.tesla.com/#tab-quarterly-disclosure) and Selvita (https://selvita.com/investors-media/) – with respect to the conceptual pillars of Warren Buffett’s strategy, which are:

>> the business as such: is the business model simple and understandable? Does the company have a consistent operating history, as well as favourable long-term prospects?

>> management: does the management of the company seem rational? Do the managers seem to act in the best interest of shareholders? Do the managers resist external fads and institutional pressures?

>> finance: what is the return on equity? What are the aggregate earnings for shareholders? What is the profit margin on the company’s products? How well does the retention of cash match the growth in market value?

>> the stock market: what is the economic value of the company in question? How does it square with its market value?

I focus on the concordance between the economic value of Tesla and Selvita, respectively, and their market value. In Warren Buffett’s model strategy, the economic value of a company equals the predictable stream of cash flow from operations, discounted at a risk-free rate of return on investment. To give a practical example of that basic method, I quote and translate back a passage from ‘The Warren Buffett Way’ by Robert G. Hagstrom, more precisely from pages 136-137 of the Kindle edition, where Buffett’s method is demonstrated on his 1973 purchase of the Washington Post: ‘We begin by calculating owner earnings for the fiscal year: net income of $13.3 million plus depreciation and amortization of $3.7 million minus capital expenditures of $6.6 million yields owner earnings of $10.4 million. If we divide these earnings by the long-term yield on US federal Treasury bonds (6.81%), the value of the Washington Post reaches $150 million […]. Buffett says that, over time, a newspaper’s capital expenditures will equal its depreciation and amortization charges, and therefore net income should be a good estimate of owner earnings.’

I apply the same reasoning to my two particular cases, Tesla and Selvita. I go to https://ir.tesla.com/#tab-quarterly-disclosure and select the annual report for 2020, i.e. https://www.sec.gov/Archives/edgar/data/1318605/000156459021004599/tsla-10k_20201231.htm. I go straight to the statement of cash flows: https://www.sec.gov/Archives/edgar/data/1318605/000156459021004599/tsla-10k_20201231.htm#Consolidated_Statements_of_Cash_Flows.

Tesla’s net income for 2020 was $862 million, the amortization charge amounted to $2 322 million, and capital expenditures came to $3 132 million. I obtain owner’s earnings of $52 million. I use two rates of return as benchmarks: the yield on Polish Treasury sovereign bonds, i.e. 3,242% (since I invest from Poland), and the yield on long-term US federal Treasury bonds (1,35%), since my return on Tesla is driven by Tesla’s intrinsic value as estimated by the financial market where Tesla is listed, namely the United States.

After dividing Tesla’s owner’s earnings for 2020 by those two alternative rates, I obtain an intrinsic-value range between $1 603,95 million and $3 851,85 million. Tesla’s market capitalisation currently stands at $1 019 billion, and quite recently, on November 4th, it reached $1 248,43 billion.

I repeat the same exercise – i.e. based on the financial results for the year 2020 – for Selvita. I take their 2020 annual financial report (https://selvita.com/wp-content/uploads/2021/03/Selvita-Group-Consolidated-Financial-Statements-2020.pdf), and on page 8 I find the statement of cash flows. Net income was PLN19 921 919, with amortization charges of PLN13 525 722.

As regards capital expenditures, things get trickier. I find an investment in tangible and intangible assets with a total value of PLN15 003 636, as well as an ‘acquisition of other financial assets’ amounting to PLN10 152 560. In Warren Buffett’s logic, investment strictly speaking, i.e. the part deductible from owner’s earnings, is investment in productive assets pertaining to operations. The acquisition of financial assets is a placement of cash, not an investment in operating assets.

I think, therefore, that I can calculate Selvita’s owner’s earnings in two different ways. The first variant is ‘net income plus amortization minus investment in tangible and intangible assets’; in the second variant, I treat the acquisition of financial assets as an additional flow of retained cash and add it to the result of the first calculation. I thus obtain Buffett-style owner’s earnings in a range between PLN18 444 005 and PLN28 596 565. I divide by the yield on Polish Treasury sovereign bonds (3,242%) and obtain a corresponding intrinsic-value range between PLN568 908 235 and PLN882 065 545. Selvita’s latest market capitalisation is PLN1 478 million, with a 12-month maximum of PLN2 894,7 million, recorded on July 5th, 2021.
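A minimal sketch of those two variants, with the PLN figures quoted above:

```python
# The two Selvita variants described above: financial-asset acquisitions
# either excluded, or counted back in as retained cash. Figures are from
# the 2020 annual report cited in the text.

net_income = 19_921_919.0
amortization = 13_525_722.0
capex_productive = 15_003_636.0
financial_assets_acquired = 10_152_560.0
pl_bond_yield = 0.03242

variant_1 = net_income + amortization - capex_productive
variant_2 = variant_1 + financial_assets_acquired   # financial assets as retained cash

for oe in (variant_1, variant_2):
    print(f"owner earnings PLN{oe:,.0f} -> intrinsic value PLN{oe / pl_bond_yield:,.0f}")
# owner earnings PLN18,444,005 -> intrinsic value PLN568,908,236
# owner earnings PLN28,596,565 -> intrinsic value PLN882,065,546
```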

My provisional conclusion is that, on the basis of the audited financial results for 2020, Tesla’s market capitalisation stands far above its Buffett-style intrinsic value, which spells a lot of risk in that position, and Selvita, too, seems somewhat inflated on the stock exchange; I need to stay on my guard. Now I pass to an extension of the preceding exercise with the MRQ, or ‘Most Recent Quarter’, method, i.e. with the two companies’ unaudited financial results for the third quarter of 2021. I do something very crude, which is nevertheless frequently used in financial analysis: I extrapolate the results of three quarters of the fiscal year by multiplying them by 4/3. Yes, it is simplistic, and it yields just a very provisional estimate of what the audited annual results for 2021 might be. Nevertheless, this method allows simulating the state of mind of other investors who – just like me – use the Buffett-style intrinsic-value method.

I start with Tesla, once again (https://www.sec.gov/Archives/edgar/data/1318605/000095017021002253/tsla-20210930.htm#consolidated_statements_of_cash_flows). Net income of $3 301 million plus an amortization charge of $2 063 million minus capital expenditures of $5 952 million gives… damn… –$588 million. Embarrassing, isn’t it? Negative owner’s earnings mean negative intrinsic value. An elegant dodge: ‘Buffett says that, over time, a newspaper’s capital expenditures will equal its depreciation and amortization charges, and therefore net income should be a good estimate of owner earnings.’ Well, Tesla is almost like a newspaper, right? Except it has nothing to do with one. It is not even the same fundamental type of economic good. Anyway, let’s try the equivalence ‘net income = owner’s earnings’. I extrapolate the net income for the nine months of 2021 over the twelve months of the fiscal year, which gives $4 401,33 million. I divide by the long-term yield on US federal Treasury bonds (1,35%) and I get $326 024,69 million.

I am beginning to understand the mad dance around Tesla stock. You look at the net income, and it looks breathtaking (in a good way). You glance at the capitalisable investment expenditures, and you start asking yourself questions. If Warren Buffett’s very simple calculation leaves that much room for doubt, no wonder many small investors let themselves be played by the big, shrewd investment funds.

I pass to Selvita: https://selvita.com/wp-content/uploads/2021/11/Selvita-Group-Consolidated-Financial-Statements-Q3-2021.pdf. Net income over the nine months of 2021 comes to PLN8 844 005, amortization charges over the same period amount to PLN17 764 894, the acquisition of productive assets is PLN9 655 884, and that of financial assets is PLN3 172 566. I sum up in the two alternative ways described earlier, extrapolate over twelve months, divide by the yield on Polish Treasury bonds (3,242%), and obtain an intrinsic value between PLN660 936 257 and PLN784 623 041. That is still far below Selvita’s market capitalisation. I need to watch out.

Tesla first in line

Once again, a big gap in my blogging. What do you want – it happens when the academic year kicks in. As it kicks in, I need to divide my attention between scientific research and writing, on the one hand, and my teaching on the other hand.

I feel like taking a few steps back, namely back to the roots of my observations. As a scientist, I observe two essential types of phenomena: technological change, and, contiguously to that, the emergence of previously unexpected states of reality. Well, I guess we all observe the latter; we just sometimes don’t pay attention. Let me narrow it down a bit. When it comes to technological change, I am just bewildered by the amounts of cash that businesses have started holding, across the board, amidst an accelerating technological race. Twenty years ago, any teacher of economics would tell their students: ‘Guys, cash is the least productive asset of all. Keep just enough cash to face the most immediate expenses. All the rest, invest it in something that makes sense’. Today, when I talk to my students, I tell them: ‘Guys, with the crazy speed of technological change we are observing, cash is king, like really. The greater the reserves of cash you hold, the more flexible you stay in your strategy’.

Those abnormally big amounts of cash that businesses have tended to hold these last years have two dimensions in terms of research. On the one hand, it is economics and finance; on the other hand, it is management. For quite some time, digital transformation has been about the only thing worth writing about in management science, but this, namely the crazy accumulation of cash balances in corporate balance sheets, is definitely something worth writing about. Still, there is amazingly little published research on the general topic of cash flow and cash management in business, just as there is very little on financial liquidity in business. The latter topic is developed almost exclusively in the context of banks, mostly central ones. Maybe it is all that craze about abominable capitalism and the general claim that money is evil. I don’t know.

Anyway, it is interesting. Money, when handled at the microeconomic level, tells the hell of a story about our behaviour, our values, our mutual trust, and our emotions. Money held in corporate balance sheets tells the hell of a story about decision making. Let me explain. Please, consider the amount of money you carry around with you: the contents of your wallet (credit cards included), plus whatever you have available instantly on your phone. Done? Visualised? Good. Now, ask yourself what percentage of all those immediately available monetary balances you use during one average day. Done? Analysed? Good. In my case, it would be like 0,5%. Yes, 0,5%. I have done that intellectual exercise with my students, many times. They usually hit no more than 10%, and they are gobsmacked. Their first reaction is WOKEish: ‘So I don’t really need all that money, right? Money is pointless, right?’. Not quite, my dear students. You need all that money; you just need it in a way which you don’t immediately notice.

There is a model in the theory of complex systems, called the ant colony (see for example: Chaouch, Driss & Ghedira 2017[1]; Asghari & Azadi 2017[2]; Emdadi et al. 2019[3]; Gupta & Srivastava 2020[4]; Di Caprio et al. 2021[5]). Yes, Di Caprio. Not the Di Caprio you intuitively think about, though. Ants communicate with pheromones. They drop pheromones somewhere they sort of know (how?) is going to be a signal for other ants. Each ant drops sort of a standard parcel of pheromones. Nothing to write home about, really, and yet enough to attract the attention of another ant, which could drop its individual pheromonal parcel in the same location. With any luck, other ants will discover those chemical traces and validate them with their individual dumps of pheromones, and this is how the colony of ants maps its territories, mostly to find and exploit sources of food. It is interesting to find out that, in order for all that chemical dance to work, there needs to be a minimum number of ants on the job. If there are not enough ants per square meter of territory, they just don’t find each other’s chemical imprints and have no chance to grab hold of the resources available. Yes, they all die prematurely. Money in human societies could be the equivalent of a pheromone. We need to spread it in order to carry out complex systemic changes. Interestingly, each of us humans is essentially blind to those complex changes: we just cannot quickly wrap our minds around the technical details of something apparently as simple as the manufacturing chain of a gardening rake (do you know where exactly, and in what specific amounts, all the ingredients of steel come from? I don’t).
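For the curious, here is a toy simulation of that mechanism. It is my own illustrative sketch, not a reproduction of any of the cited algorithms; the densities and parameters are arbitrary.

```python
# A toy sketch of the pheromone mechanism described above: ants deposit a
# standard parcel of pheromone, deposits evaporate over time, and trails
# only consolidate when ant density is high enough.
import random

def run_colony(n_ants: int, n_sites: int = 100, steps: int = 500,
               deposit: float = 1.0, evaporation: float = 0.05) -> float:
    pheromone = [0.0] * n_sites
    ants = [random.randrange(n_sites) for _ in range(n_ants)]
    for _ in range(steps):
        for i, pos in enumerate(ants):
            neighbours = [(pos - 1) % n_sites, pos, (pos + 1) % n_sites]
            # each ant follows the strongest local trail, ties broken at random
            best = max(neighbours, key=lambda s: (pheromone[s], random.random()))
            ants[i] = best
            pheromone[best] += deposit
        pheromone = [p * (1.0 - evaporation) for p in pheromone]
    return max(pheromone)   # strength of the strongest trail

random.seed(42)
for density in (2, 20, 200):    # ants per 100 sites
    print(density, round(run_colony(density), 2))
# Sparse colonies leave only faint, scattered traces; dense ones build
# strong, self-reinforcing trails.
```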

All that talk about money made me think about my investments in the stock market. I feel like doing things the Warren Buffett way: going through the periodical financial reports of each company in my portfolio, and just passing in review what they do and what they are up to. By the way, talking about the Warren Buffett way, I recommend my readers go to the source: go to https://www.berkshirehathaway.com/ first, and then to https://www.berkshirehathaway.com/2020ar/2020ar.pdf as well as to https://www.berkshirehathaway.com/qtrly/3rdqtr21.pdf. For now, I focus on studying my own portfolio according to the so-called ‘12 immutable tenets by Warren Buffett’, such as I allow myself to quote them:

>> Business Tenets: Is the business simple and understandable? Does the business have a consistent operating history? Does the business have favourable long-term prospects?

>> Management Tenets: Is management rational? Is management candid with its shareholders? Does management resist the institutional imperative?

>> Financial Tenets: Focus on return on equity, not earnings per share. Calculate “owner earnings.” Look for companies with high profit margins. For every dollar retained, make sure the company has created at least one dollar of market value.

>> Market Tenets: What is the value of the business? Can the business be purchased at a significant discount to its value?

(Hagstrom, Robert G., The Warren Buffett Way, p. 98, Wiley, Kindle Edition)

Anyway, here is my current portfolio:

>> Tesla (https://ir.tesla.com/#tab-quarterly-disclosure),

>> Allegro.eu SA (https://about.allegro.eu/ir-home),

>> Alten (https://www.alten.com/investors/),

>> Altimmune Inc (https://ir.altimmune.com/),

>> Apple Inc (https://investor.apple.com/investor-relations/default.aspx),

>> CureVac NV (https://www.curevac.com/en/investor-relations/overview/),

>> Deepmatter Group PLC (https://www.deepmatter.io/investors/),

>> FedEx Corp (https://investors.fedex.com/home/default.aspx),

>> First Solar Inc (https://investor.firstsolar.com/home/default.aspx),

>> Inpost SA (https://www.inpost.eu/investors),

>> Intellia Therapeutics Inc (https://ir.intelliatx.com/),

>> Lucid Group Inc (https://ir.lucidmotors.com/),

>> Mercator Medical SA (https://en.mercatormedical.eu/investors/),

>> Nucor Corp (https://www.nucor.com/investors/),

>> Oncolytics Biotech Inc (https://ir.oncolyticsbiotech.com/),

>> Solaredge Technologies Inc (https://investors.solaredge.com/),

>> Soligenix Inc (https://ir.soligenix.com/),

>> Vitalhub Corp (https://www.vitalhub.com/investors),

>> Whirlpool Corp (https://investors.whirlpoolcorp.com/home/default.aspx),

>> Biogened (https://biogened.com/),

>> Biomaxima (https://www.biomaxima.com/325-investor-relations.html),

>> CyfrPolsat (https://grupapolsatplus.pl/en/investor-relations),

>> Emtasia (https://elemental-asia.biz/en/),

>> Forposta (http://www.forposta.eu/relacje_inwestorskie/dzialalnosc_i_historia.html),

>> Gameops (http://www.gameops.pl/en/about-us/),

>> HMInvest (https://grupainwest.pl/relacje),

>> Ifirma (https://www.ifirma.pl/dla-inwestorow),

>> Moderncom (http://moderncommercesa.com/wpmccom/en/dla-inwestorow/),

>> PolimexMS (https://www.polimex-mostostal.pl/en/reports/raporty-okresowe),

>> Selvita (https://selvita.com/investors-media/),

>> Swissmed (https://swissmed.com.pl/?menu_id=8)

Studying that whole portfolio of mine through the lens of Warren Buffett’s tenets looks like quite a piece of work, really. Good. I like working. Besides, as I have been reading Warren Buffett’s annual reports at https://www.berkshirehathaway.com/, I have realized that I need a real strategy for investment. So far, I have developed a few efficient hacks, such as the habit of keeping my s**t together when other people panic or get euphoric. Still, hacks are not the same as strategy.

I feel like adding my own general principles to Warren Buffett’s tenets. Principle #1: whatever I think I am doing, my essential strategy consists in running away from what I perceive as danger. Thus, what am I afraid of, in my investment? What subjective fears and objective risk factors shape my actions as an investor? Once I understand that, I will know more about my own actions and decisions. Principle #2: the best strategy I can think of is a game with nature, where each move serves to learn something new about the rules of the game, and each move should be both decisive and leave me with a margin of safety. What am I learning as I make my moves? What are my typical moves, actually?

Let’s rock. Tesla (https://ir.tesla.com/#tab-quarterly-disclosure), comes first in line, as it is the biggest single asset in my portfolio. I start my digging with their quarterly financial report for Q3 2021 (https://www.sec.gov/Archives/edgar/data/1318605/000095017021002253/tsla-20210930.htm ), and I fish out their Consolidated Balance Sheets (in millions, except per share data, unaudited: https://www.sec.gov/Archives/edgar/data/1318605/000095017021002253/tsla-20210930.htm#consolidated_balance_sheets ).

Now, I assume that if I can understand why and how numbers change in the financial statements of a business, I can understand the business itself. The first change I can spot in that balance sheet is property, plant and equipment, net passing from $12 747 million to $17 298 million in 12 months. What exactly has happened? Here comes Note 7 – Property, Plant and Equipment, Net, in that quarterly report, and it starts with a specification of fixed assets comprised in that category. Good. What really increased in this category of assets is construction in progress, and here comes the descriptive explanation pertinent thereto: “Construction in progress is primarily comprised of construction of Gigafactory Berlin and Gigafactory Texas, expansion of Gigafactory Shanghai and equipment and tooling related to the manufacturing of our products. We are currently constructing Gigafactory Berlin under conditional permits in anticipation of being granted final permits. Completed assets are transferred to their respective asset classes, and depreciation begins when an asset is ready for its intended use. Interest on outstanding debt is capitalized during periods of significant capital asset construction and amortized over the useful lives of the related assets. During the three and nine months ended September 30, 2021, we capitalized $14 million and $52 million, respectively, of interest. During the three and nine months ended September 30, 2020, we capitalized $13 million and $33 million, respectively, of interest.

Depreciation expense during the three and nine months ended September 30, 2021 was $495 million and $1.38 billion, respectively. Depreciation expense during the three and nine months ended September 30, 2020 was $403 million and $1.13 billion, respectively. Gross property, plant and equipment under finance leases as of September 30, 2021 and December 31, 2020 was $2.60 billion and $2.28 billion, respectively, with accumulated depreciation of $1.11 billion and $816 million, respectively.

Panasonic has partnered with us on Gigafactory Nevada with investments in the production equipment that it uses to manufacture and supply us with battery cells. Under our arrangement with Panasonic, we plan to purchase the full output from their production equipment at negotiated prices. As the terms of the arrangement convey a finance lease under ASC 842, Leases, we account for their production equipment as leased assets when production commences. We account for each lease and any non-lease components associated with that lease as a single lease component for all asset classes, except production equipment classes embedded in supply agreements. This results in us recording the cost of their production equipment within Property, plant and equipment, net, on the consolidated balance sheets with a corresponding liability recorded to debt and finance leases. Depreciation on Panasonic production equipment is computed using the units-of-production method whereby capitalized costs are amortized over the total estimated productive life of the respective assets. As of September 30, 2021 and December 31, 2020, we had cumulatively capitalized costs of $1.89 billion and $1.77 billion, respectively, on the consolidated balance sheets in relation to the production equipment under our Panasonic arrangement.”

Good. I can try to wrap my mind around the contents of Note 7. Tesla is expanding its manufacturing base, including a Gigafactory in my beloved Europe. Expansion of the manufacturing capacity means significant, quantitative growth of the business. According to Warren Buffett’s philosophy: “The question of where to allocate earnings is linked to where that company is in its life cycle. As a company moves through its economic life cycle, its growth rates, sales, earnings, and cash flows change dramatically. In the development stage, a company loses money as it develops products and establishes markets. During the next stage, rapid growth, the company is profitable but growing so fast that it cannot support the growth; often it must not only retain all of its earnings but also borrow money or issue equity to finance growth” (Hagstrom, Robert G., The Warren Buffett Way, p. 104, Wiley, Kindle Edition). Tesla looks like it is in the phase of rapid growth. They have finally nailed down how to generate profits (yes, they have!), and they are expanding capacity-wise. They are likely to retain earnings and to be in need of cash, and that attracts my attention to another passage in Note 7: “Interest on outstanding debt is capitalized during periods of significant capital asset construction and amortized over the useful lives of the related assets”. If I understand correctly, the interest accrued while borrowed money finances the construction of productive assets is not charged immediately against earnings: it is added to the carrying cost of the asset under construction, and it flows into the income statement, through amortization, only after the corresponding asset starts working and paying its bills. That means, in turn, that lenders are being patient and confident with Tesla: they keep financing a heavy construction phase, assuming that their unconditional claims on Tesla’s future cash flows (this is one of the possible ways to define outstanding debt) are secure.
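To make those mechanics tangible, here is a hedged sketch of interest capitalisation. The loan size, rate, and useful life are hypothetical, purely for illustration; they are not Tesla’s figures.

```python
# A hypothetical illustration of the interest-capitalisation mechanics
# discussed above: interest accrued during construction is added to the
# asset's carrying cost instead of being expensed, then depreciated
# (straight-line here) over the useful life. Simple interest, for brevity.

borrowed = 1_000.0          # $ mln construction loan (hypothetical)
rate = 0.04                 # annual interest rate (hypothetical)
construction_years = 2
useful_life = 10            # years of straight-line depreciation

capitalised_interest = borrowed * rate * construction_years   # 80.0
asset_cost = borrowed + capitalised_interest                  # 1080.0

annual_depreciation = asset_cost / useful_life
print(round(capitalised_interest, 1))   # 80.0 joins the asset, not the P&L
print(round(annual_depreciation, 1))    # 108.0 charged per year of operation
```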

Good. Now, I am having a look at Tesla’s Consolidated Statements of Operations (in millions, except per share data, unaudited: https://www.sec.gov/Archives/edgar/data/1318605/000095017021002253/tsla-20210930.htm#consolidated_statements_of_operations ). It is time to have a look at Warren Buffett’s Business Tenets as regards Tesla. Is the business simple and understandable? Yes, I think I can understand it. Does the business have a consistent operating history? No: operational results changed in 2020, and they keep changing. Tesla is passing from the stage of development (which took it a decade) to the stage of rapid growth. Does the business have favourable long-term prospects? Yes, it seems so. The market of electric vehicles is booming (EV-Volumes[6]; IEA[7]).

Is Tesla’s management rational? Well, that’s another ball game. To develop in my next update.


[1] Chaouch, I., Driss, O. B., & Ghedira, K. (2017). A modified ant colony optimization algorithm for the distributed job shop scheduling problem. Procedia computer science, 112, 296-305. https://doi.org/10.1016/j.procs.2017.08.267

[2] Asghari, S., & Azadi, K. (2017). A reliable path between target users and clients in social networks using an inverted ant colony optimization algorithm. Karbala International Journal of Modern Science, 3(3), 143-152. http://dx.doi.org/10.1016/j.kijoms.2017.05.004

[3] Emdadi, A., Moughari, F. A., Meybodi, F. Y., & Eslahchi, C. (2019). A novel algorithm for parameter estimation of Hidden Markov Model inspired by Ant Colony Optimization. Heliyon, 5(3), e01299. https://doi.org/10.1016/j.heliyon.2019.e01299

[4] Gupta, A., & Srivastava, S. (2020). Comparative analysis of ant colony and particle swarm optimization algorithms for distance optimization. Procedia Computer Science, 173, 245-253. https://doi.org/10.1016/j.procs.2020.06.029

[5] Di Caprio, D., Ebrahimnejad, A., Alrezaamiri, H., & Santos-Arteaga, F. J. (2021). A novel ant colony algorithm for solving shortest path problems with fuzzy arc weights. Alexandria Engineering Journal. https://doi.org/10.1016/j.aej.2021.08.058

[6] https://www.ev-volumes.com/

[7] https://www.iea.org/reports/global-ev-outlook-2021/trends-and-developments-in-electric-vehicle-markets

DIY algorithms of our own

I return to that interesting interface of science and business which I touched upon in my penultimate update, titled ‘Investment, national security, and psychiatry’, which means that I return to discussing two research projects I am starting to be involved in: one in the domain of national security, another in psychiatry, both connected by the idea of using artificial neural networks as analytical tools. What I intend to do now is pass in review some literature, just to get the hang of the current state of science.

On top of that, I have been asked by my colleagues to take over, at short notice, the leadership of a big, multi-thread research project in management science. The multitude of threads has emerged as a circumstantial by-product partly of the disruption caused by the pandemic, and partly of excessive partitioning in the funding of research. As regards the funding of research, Polish universities have, roughly speaking, two financial streams. One consists of big projects, usually team-based, financed by specialized agencies, such as the National Science Centre (https://www.ncn.gov.pl/?language=en ) or the National Centre for Research and Development (https://www.gov.pl/web/ncbr-en ). The other is based on relatively small grants, applied for by and granted to individual scientists by their respective universities, which, in turn, receive bulk subventions from the Ministry of Education and Science. Personally, I think that last category, such as it is being allocated and used now, is a bit of a relic. It is some sort of pocket money for the most urgent and current expenses, relatively small in scale and importance, such as the costs of publishing books and articles, the costs of attending conferences etc. This is a financial paradox: we save and allocate money long in advance in order to cover essentially incidental expenses – which come at the very end of the scientific pipeline – and we have to make long-term plans for it. It is a case of fundamental mismatch between the intrinsic properties of a cash flow, on the one hand, and the instruments used for managing that cash flow, on the other.

Good. That was the introduction to detailed thinking. Once I have those semantic niceties checked out, I cut into the flesh of thinking, and the first piece I intend to cut out is the state of science as regards Territorial Defence Forces and their role amidst the COVID-19 pandemic. I found an interesting article by Tiutiunyk et al. (2018[1]). It is interesting because it gives a detailed methodology for assessing operational readiness in any military unit, territorial defence or other. That corresponds nicely to Hypothesis #2, which I outlined for that project in national security, namely: ‘the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources’. The article by Tiutiunyk et al. (2018) allows me to go into the details of that claim.

Those details start unfolding from the assumption that operational readiness is there when the entity studied possesses the required quantity of efficient technical and human resources. The underlying mathematical concept is quite simple. In the given situation, an adequate response requires using m units of resources at k% of capacity during time te. The social entity studied can muster n units of the same resources at l% of capacity during the same time te. The most basic expression of operational readiness is, therefore, a coefficient OR = (n*l)/(m*k). I am trying to find out what specific resources are the key to that readiness. Tiutiunyk et al. (2018) offer a few interesting insights in that respect. They start by noticing the otherwise known fact that resources used in crisis situations are not exactly the same as those we use in the everyday course of life and business, and therefore we tend to hold them for a time longer than their effective lifecycle. We don’t amortize them properly, because we don’t really control for their physical and moral depreciation. One of the core ideas in territorial defence is to counter that negative phenomenon, and to maintain, through comprehensive training and internal control, the required level of capacity.
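A minimal sketch of that coefficient, with hypothetical figures:

```python
# The operational-readiness coefficient described above: OR = (n*l)/(m*k),
# i.e. musterable resources against required resources over the same time
# window te. The figures below are hypothetical, for illustration only.

def operational_readiness(n: float, l: float, m: float, k: float) -> float:
    """n units available at l% capacity vs m units required at k% capacity."""
    return (n * l) / (m * k)

# e.g. a unit that can muster 80 vehicles at 70% capacity against a
# requirement of 100 vehicles at 90% capacity
print(round(operational_readiness(n=80, l=0.70, m=100, k=0.90), 3))  # 0.622
```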

As I continue going through the literature, I come across an interesting study by I. Bet-El (2020), titled ‘COVID-19 and the future of security and defence’, published by the European Leadership Network (https://www.europeanleadershipnetwork.org/wp-content/uploads/2020/05/Covid-security-defence-1.pdf ). Bet-El introduces an important distinction between threats and risks, and, contiguously, the distinction between security and defence: ‘A threat is a patent, clear danger, while risk is the probability of a latent danger becoming patent; evaluating that probability requires judgement. Within this framework, defence is to be seen as the defeat or deterrence of a patent threat, primarily by military, while security involves taking measures to prevent latent threats from becoming patent and if the measures fail, to do so in such a way that there is time and space to mount an effective defence’. This is deep. I do a lot of research in risk management, especially as I invest in the stock market. When we face a risk factor, our basic behavioural response is hedging or insurance. We hedge by diversifying our exposures to risk, and we insure by sharing the risk with other people. Healthcare systems are a good example of insurance. We have a flow of capital that fuels a manned infrastructure (hospitals, ambulances etc.), and that infrastructure allows each single sick human to share his or her risks with other people. Social distancing is the epidemic equivalent of hedging. When we completely cut, or significantly throttle, social interactions between households, each household is sort of shielded from the epidemic risk in other households. When one node in a network is shielded from some of the risk occurring in other nodes, this is hedging.

The military is made for responding to threats rather than risks. Military action is a contingency plan, implemented when insurance and hedging have gone to hell. The pandemic has shown that we need more of such buffers, i.e. more social entities able to mobilise quickly and deter an actual threat directly. Territorial Defence Forces seem to fit the bill. Another piece of literature, from my own Polish turf, by Gąsiorek & Marek (2020[2]), states straightforwardly that Territorial Defence Forces have proven to be a key actor during the COVID-19 pandemic precisely because they maintain a high degree of actual readiness in their crisis-oriented resources, as compared to other entities in the Polish public sector.

Good. I have a thread, from the literature, for the project devoted to national security. The issue of operational readiness seems to sit somewhere at the centre, and it translates into the apparently fluid frontier between security and national defence. The speed of mobilisation of the available resources, as well as the actual reliability of those resources once mobilized, look like the key to understanding the surprisingly significant role of Territorial Defence Forces during the COVID-19 pandemic. It looks like my initial Hypothesis #2, claiming that the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources, is some sort of theoretical core to that whole body of research.

In our team, we plan, and have a provisional green light, to run interviews with the soldiers of Territorial Defence Forces. That basic notion of actually mobilizable resources can help narrow down the methodology for those interviews, by asking specific questions pertinent to that issue. Which specific resources proved to be the most valuable in the actual intervention of the TDF in the pandemic? Which resources – if any – proved to be 100% mobilizable on the spot? Which of those resources proved much harder to mobilise than initially assumed? Can we rate and rank all the human and technical resources of the TDF as regards their capacity to be mobilised?

Good. I gently close the door of that room in my head filled with Territorial Defence Forces and the pandemic. I make sure I can open it whenever I want, and I open the door to that other room, where psychiatry dwells. The psychiatrists I am working with and I can study a sample of medical records of patients with psychosis. Verbal elocutions of those patients are an important part of that material, and I make two hypotheses along that tangent:

>> Hypothesis #1: the probability of occurrence of specific grammatical structures A, B, C in the general grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form.

>> Hypothesis #2: the action of written self-reporting, e.g. via email, from the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as transition from mental state A to mental state B.

I start listening to what smarter people than me have to say on the matter. I start with Worthington et al. (2019[3]), and I learn there is a clinical category, clinical high risk for psychosis (CHR-P): a set of changes subtler than psychosis proper, namely ‘changes in belief, perception, and thought that appear to represent attenuated forms of delusions, hallucinations, and formal thought disorder’. I like going backwards, upstream, and I immediately ask myself whether that line of logic can be reverted. If there is clinical high risk for psychosis, the occurrence of those same symptoms in reverse order, from severe to light, could be a path of healing, couldn’t it?

Anyway, according to Worthington et al. (2019), some 25% of people with diagnosed CHR-P transition into fully scaled psychosis. Once again, from the perspective of risk management, 25% of actual occurrence in a risk category is a lot. It means that CHR-P is pretty solid as far as risk assessment goes. I further learn that CHR-P, when represented as a collection of variables (a vector, for friends with a mathematical edge), entails an internal distinction into predictors and converters. Predictors are the earliest possible observables, something like a subtle smell of possible s**t, swirling here and there in the ambient air. Converters are pieces of information that bring progressive confirmation to the predictors.

That paper by Worthington et al. (2019) is a review of literature in itself, and allows me to compare different approaches to CHR-P. The most solid ones, in terms of accurately predicting the onset of full-clip psychosis, always incorporate two components: assessment of the patient’s social role, and analysis of verbalized thought. Good. Looks promising. I think the initial hypotheses should be expanded into claims about socialization.

I continue with another paper, by Corcoran and Cecchi (2020[4]). Generally, patients with psychotic disorders display lower semantic coherence than ordinary people. The flow of meaning in their speech is impeded: they can express less meaning in the same volume of words, as compared to a mentally healthy person. Reduced capacity to deliver meaning manifests as apparent tangentiality in verbal expression: psychotic patients seem to wander in their elocutions. Reduced complexity of speech, i.e. a relatively low capacity to swing between different levels of abstraction, with a tendency toward exaggerated concreteness, is another observable which informs about psychosis. Two big families of diagnostic methods follow that twofold path. Latent Semantic Analysis (LSA) seems to be the name of the game as regards the study of semantic coherence. Its fundamental assumption is that words convey meaning by connecting to other words, which further unfolds into measuring semantic similarity, or dissimilarity, with a more or less complex coefficient of joint occurrence, as opposed to disjoint occurrence, inside big corpora of language.

Corcoran and Cecchi (2020) name two main types of digital tools for Latent Semantic Analysis. One is Word2Vec (https://en.wikipedia.org/wiki/Word2vec), and I found a more technical and programmatic approach to it at https://towardsdatascience.com/a-word2vec-implementation-using-numpy-and-python-d256cf0e5f28. Another one is GloVe, to which I found three interesting references, at https://nlp.stanford.edu/projects/glove/, https://github.com/maciejkula/glove-python, and https://pypi.org/project/glove-py/.
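As a simple proxy for what those tools measure – not LSA proper – semantic coherence can be sketched as the mean cosine similarity between consecutive word embeddings. The toy random ‘embeddings’ below stand in for a real Word2Vec or GloVe model loaded from the resources linked above.

```python
# A minimal sketch of a coherence measure in the spirit of the tools
# named above: average cosine similarity between consecutive word vectors.
# The random embeddings are placeholders for a trained model.
import numpy as np

rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in
              "the flow of meaning in speech can wander far".split()}

def coherence(words: list[str]) -> float:
    """Average cosine similarity between consecutive word vectors."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    sims = [float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
            for a, b in zip(vecs, vecs[1:])]
    return float(np.mean(sims)) if sims else 0.0

print(round(coherence("the flow of meaning in speech".split()), 3))
# Lower scores over a corpus of elocutions would flag reduced coherence.
```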

As regards semantic complexity, two types of analytical tools seem to run the show. One is the part-of-speech (POS) algorithm, where we tag words according to their grammatical function in the sentence: noun, verb, determiner etc. There are already existing digital platforms for implementing that approach, such as the Natural Language Toolkit (http://www.nltk.org/ ). Another angle is that of speech graphs, where words are nodes in the network of discourse, and their connections (e.g. joint occurrence) to other words are edges in that network. Now, the intriguing thing about that last thread is that it seems to have been burgeoning in the late 1990s, and then it sort of faded away. Anyway, I found two references for an algorithmic approach to speech graphs, at https://github.com/guillermodoghel/speechgraph, and at https://www.researchgate.net/publication/224741196_A_general_algorithm_for_word_graph_matrix_decomposition.
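A hedged sketch combining those two angles – NLTK’s POS tagger and a word-adjacency speech graph built with networkx; the sentence is a made-up example, and the graph measures suggested at the end are my own gloss, not taken from the cited papers.

```python
# POS tagging plus a simple speech graph: words are nodes, consecutive
# co-occurrence gives the edges. One-off data downloads are needed:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk
import networkx as nx

text = "meaning connects words and words connect meaning"

tokens = nltk.word_tokenize(text)
print(nltk.pos_tag(tokens))   # e.g. [('meaning', 'NN'), ('connects', 'VBZ'), ...]

# speech graph: consecutive words become edges in the discourse network
graph = nx.Graph()
graph.add_edges_from(zip(tokens, tokens[1:]))
print(graph.number_of_nodes(), graph.number_of_edges())
# Graph measures (density, loops, connected components) could then serve
# as observables of speech complexity.
```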

That quick review of literature, as regards natural language as a predictor of psychosis, leads me to an interesting sidestep. Language is culture, right? Low coherence and low complexity in natural language are informative about psychosis, right? Now, I put that argument upside down. What if we, homo (mostly) sapiens, have a natural proclivity to psychosis, with that overblown cortex of ours? What if we had figured out, at some point of our evolutionary path, that language is a collectively intelligent tool which, with its unique coherence and complexity required for efficient communication, keeps us in a state of acceptable sanity, until we go on Twitter, of course.

Returning to the intellectual discipline which I should demonstrate, as a respectable researcher, the above review of literature brings one piece of good news, as regards the project in psychiatry. Initially, in this specific team, we assumed that we necessarily need an external partner, most likely a digital business, with important digital resources in AI, in order to run research on natural language. Now, I realized that we can assume two scenarios: one with big, fat AI from that external partner, and another one, with DIY algorithms of our own. Gives some freedom of movement. Cool.


[1] Tiutiunyk, V. V., Ivanets, H. V., Tolkunov, І. A., & Stetsyuk, E. I. (2018). System approach for readiness assessment units of civil defense to actions at emergency situations. Науковий вісник Національного гірничого університету, (1), 99-105. https://doi.org/10.29202/nvngu/2018-1/7

[2] Gąsiorek, K., & Marek, A. (2020). Działania wojsk obrony terytorialnej podczas pandemii COVID–19 jako przykład wojskowego wsparcia władz cywilnych i społeczeństwa [Operations of the territorial defence forces during the COVID–19 pandemic as an example of military support for civil authorities and society]. Wiedza Obronna. https://doi.org/10.34752/vs7h-g945

[3] Worthington, M. A., Cao, H., & Cannon, T. D. (2019). Discovery and validation of prediction algorithms for psychosis in youths at clinical high risk. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2019.10.006

[4] Corcoran, C. M., & Cecchi, G. (2020). Using language processing and speech analysis for the identification of psychosis and other disorders. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. https://doi.org/10.1016/j.bpsc.2020.06.004

An enlightened grandfather

I am going personal in my writing, or at least in the piece of writing which follows. It is because I am going through an important change in my life. I mean, I am going through another important change in my life, and, as it is just one more twist among many, I already know a few things about change. I know that when I write about it, I can handle it better than when I just try to shelve it somewhere in my head and write about something else. The something else I technically should be writing about is science, and here comes the next issue: science and life. I believe science is useful, and it is useful when I have the courage to implement it in real life. I am a social scientist. Changes in my life are social changes in microscale: it is all about me being connected in a certain way to other people. I can wrap my mind around my existential changes both honestly, as a person, and scientifically, as that peculiar mix of a curious ape, a happy bulldog living in the moment, and an austere monk equipped with an Ockham’s razor to cut bullshit out.

The change I am going through is about me and my son. Junior, age (almost) 25, has just left Poland for Nice, France, to start a new job. In Poland, he was living with us, his parents. High time to leave, you would say. Yes, you’re right. I think the same. Still, here is the story. Every story needs proper characters, and thus I am going to name my son. His name is Mikołaj, or Nicolas, from the point of view of non-Slavic folks. Mikołaj used to study computer science and live with us, his parents, until Summer 2019. Steam was building up. As you can easily guess, Mikołaj was 23 in 2019. When a guy in his early twenties lives with his parents, friction starts. His nervous system is already calibrated on social expansion, sex, procreation and generally on thrusting himself into life head-first. None of these things matches with a guy living with his parents. In Summer 2019, Mikołaj left home for one year, on an Erasmus+ academic exchange to Nice, France, where, by a strange chain of coincidences, as well as by a lot of his own wit and grit, he completed and graduated from a separate Master’s program at the Sophia Antipolis University. A nice prospect for a professional career in France was sketching itself, with a job at the same company where Mikołaj had been doing his internship.

When Mikołaj was away, we spent hundreds of hours on the phone. I swear, it was him more than me. We just got that vibe on the phone which we could seldom hit when talking face to face. I had been having those distance conversations with a guy who was turning, at turbo speed, from an over-age teenager into an adult. I was talking to a guy who had learnt to cook, who was keeping his apartment clean and tidy, who was open to talk about his mistakes and calmly pointed out mine. It was cool.

The pandemic changed a lot. The nice professional prospect faded away, as the company in question is specialized in IT services for hotels, airports and airlines, which is not really a tail wind right now. Mikołaj came back home on October 1st, 2020, with the purpose of completing the Polish Master’s program – which he had initially started the whole Erasmus adventure with – and another purpose of finding a job. His plan was to wrap it all up – graduation and job seeking – in about 3 months. As plans like to do, this one went sideways, and what was supposed to take three months took a bit more than six. During those 6 months which Mikołaj spent with us, in Poland, both we and he had the impression of having gone back in time, in a weirdly painful and unpleasant way. Mikołaj went back to being the overgrown teenager he had been before leaving for the Erasmus exchange. Our cohabitation was a bit tense. Still, things can change for the better. Around Christmas, they started to. As I was coaching and supporting Mikołaj with his job seeking, we sort of started working together, as if it were a project we would run as a team. It was cool.

Yesterday, on April 10th, 2021, early in the morning, Mikołaj left again, to start a job he found, once again in Nice, France. Splitting up was both painful and liberating. My wife and I experienced – and still are experiencing – the syndrome of the empty nest which, interestingly, rhymes with emptiness. This is precisely what I am trying to wrap my mind around in order to produce some useful, almost new wisdom. When Mikołaj called us from Nice yesterday evening, he said openly he experienced the same. Still, things can change for the better. When I heard Mikołaj’s voice on the phone, yesterday, I knew he was an adult again, and happy again. ‘I wonder what I will cook for lunch tomorrow’, he said. In his voice, he had that peculiar vibe I know from his last stay in Nice. That ‘I am lost as f**k and happy as f**k, and I am kicking ass’ vibe. It was cool.

I am still in the process of realizing that my son is happier and stronger when away from me than he used to be when close to me. It is painful, liberating, and I think it is necessary. Here comes the science. These last years, I have been doing almost obsessive research about the social role of social roles. What is my social role, after I have realized that from now on, being a father to my son is going to be a whole lot different? First of all, I think that my social role is partly given by external circumstances, and partly created by myself as I respond to those external stimuli. I have the freedom of shaping some part of my social role. Which part exactly? As I look at it from inside, I guess the only way to know is to try, over and over again. I am trying, over and over again, to be the best possible version of myself. Doesn’t everybody try the same, by the way? Based on my own life experience, I can cautiously say: ‘No’. Not everybody, or at least not always. I know I haven’t always tried to be the best version of myself. I know I am trying now because I know it has paid off over the last 6 years or so. This is the window in time when I really started to work purposefully on being the best human I can, and I can tell you, there was a lot to do. I was 46 at the time (now, I am 53).

A bit late for starting personal development, you could say. Well, yes and no. Yes, it is late. Still, there is science behind it. During the reproductive age bracket, i.e. roughly between the age of 20 and that of 45÷50, young men are driven mostly by their sexual instinct, because that instinct is overwhelming and we have the capacity to translate it into elaborate patterns of social behaviour. Long story short, between 20 and 50, we build a position in the social hierarchy. This is how the sexual instinct civilizes itself. In their late 40s, most males start experiencing a noticeable decline in their levels of testosterone, and the strength of the sexual drive follows in step. All the motivation based on it starts crumbling down, too. This is what we call the mid-life crisis, or, in Polish, the Faun’s afternoon.

I remember a conversation I had with a data scientist specialized in Artificial Intelligence. She told me there are AI-based simulations of the human genome, which demonstrate that said genome is programmed to work until we are 50. Anything after that is culture-based. Our culture takes a lot of pains to raise and educate young humans during the first two decades of their lives. Someone has to take care of that secondary socialization, and the most logical way of assigning that role is to take someone who is post-reproductive as a person. This is how grandparents are made by culture.

As I am meditating about the best possible version of myself, right now, this is precisely what comes to my mind. I can be and I think I want to be an enlightened grandfather. There is a bit of a problem, here, ‘cause my son has no kids for the moment. It is hard to be an actual grandfather in the absence of grandchildren, and this is why I said I want to be an enlightened one. I mean that I take the essence of the social role that a grandfather plays in society, and I try to coin it up into a mission statement.  

A good grandfather should provide wisdom. It means I need to have wisdom, and I need to communicate it intelligibly. How do I know I have wisdom? I think there are two components to that. I need to be aware of and accountable for my own mistakes, and I need to work through my personal story with as much objectivity as I can, for one. How can I be objective about myself? Here is a little trick. As I live and make mistakes, I learn to observe other people’s response to my own f**k-ups. I learn there is a different, external perspective on my own actions, and with a bit of effort I can reconstruct that external perspective. I can make good, almost new wisdom about myself by combining thorough introspection of my personal experience with that intersubjective reading of my actions.

There is more to wisdom than just my personal story. I need to collect information about my cultural surroundings, and aggregate it into intelligible a narrative, and I need to do it in the same spirit of critical observation, with curiosity, love and cold objectivity, all in one. I need to be like a local server in a digital network, with enough content stored on my hard drive, and enough efficiency in retrieving that content, to be a valuable node in the system.

A good grandfather should support others and accept to act backstage. This is what I have been experiencing since I got that job of fundraising and coordination of research projects in my home university. I take surprisingly great a pleasure in supporting other people’s work and research. I remember that 10 years ago I would approach things differently. I would take care, most of all, of putting myself at the centre and the top of collective projects. Now, I take pleasure in outcomes more than in my own position within those outcomes.

Now, by antithesis, what shouldn’t a good grandfather be? I think the kind of big existential mistake I could make now would be to become a burden for other people, especially for my son. How can I make such a mistake? It is simple; I observed it in my own father. I convince myself that all good things in life are over for me, because the falling level of testosterone leaves a gaping hole in my emotional structure. I stop taking care of myself, I let myself sink into depression and cynicism, and Bob’s my uncle: I have become a burden for others. Really simple. Don’t try it at home, even under the supervision of qualified professionals.

That brings me to still another positive aspect of being a good grandfather: grit. For me, grit is something that has the chance to supplant fear and anger, under favourable circumstances. When I was young, and even when I was a mature adult, I did not really know how to fight and stand up against existential adversities. It is mostly by observing other people – who developed that skill – that I progressively learnt some of it. Grit is the emotional counterpart of resilience, and I think that conscious, purposeful resilience requires the perspective of time. I need to know, by experience, how that really crazy kind of s**t unfolds over years, in human existence, in order to develop cognitive and emotional structures for coping with it.  

Summing up, an enlightened grandfather is a critical teller of his own story, a good source of knowledge about the general story of his culture, and a supportive mentor for other people. An enlightened grandfather takes care of his body and his mind, to stay healthy, strong and happy as long as possible. An enlightened grandfather keeps himself sharp enough to help those youngsters keep their s**t together when things go south. This is my personal mission statement. This is who I want to be over the 20 – 25 years to come, which is what I reasonably expect to be the time I have left for being any good in this world. What next? Well, next, it is time to say goodbye.

Cultural classes

Some of my readers asked me to explain how to get in control of one’s own emotions when starting their adventure as small investors in the stock market. The purely psychological side of self-control is something I leave to people smarter than me in that respect. What I do to have more control is the Wim Hof method (https://www.wimhofmethod.com/ ) and it works. You are welcome to try. I described my experience in that matter in the update titled ‘Something even more basic’. Still, there is another thing, namely, to start with a strategy of investment clever enough to allow emotional self-control. The strongest emotion I have been experiencing on my otherwise quite successful path of investment is the fear of loss. Yes, there are occasional bubbles of greed, but they are more like childish expectations to get the biggest toy in the neighbourhood. They are bubbles, which burst quickly and inconsequentially. The fear of loss is there to stay, on the other hand.    

This is what I advise to do. I mean, this is what I didn’t do at the very beginning, and for lack of doing it I made some big mistakes in my decisions. Only after some time (around 2 months) did I figure out the mental framework I am going to present. Start by picking a market. I started with a dual portfolio, like 50% in the Polish stock market, and 50% in the big foreign ones, such as US, Germany, France etc. Define the industries you want to invest in, like biotech, IT, renewable energies. Whatever: pick something. Study the stock prices in those industries. Pay particular attention to the observed losses, i.e. the observed magnitude of depreciation in those stocks. Figure out the average possible loss, and the maximum one. Now, you have an idea of how much you can lose, in percentage terms. Quantitative techniques such as mean-reversion or extrapolation of past changes can help; see the sketch below. You can consult my update titled ‘What is my take on these four: Bitcoin, Ethereum, Steem, and Golem?’ to see the general drift.
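For those who like to see the arithmetic, here is a minimal sketch of that study of losses, in Python with pandas. The file ‘prices.csv’ and its ‘Close’ column are hypothetical; any series of historical closing prices will do:

import pandas as pd

# Hypothetical input: a CSV of historical closing prices for one stock.
prices = pd.read_csv('prices.csv', parse_dates=['Date'], index_col='Date')['Close']

# Drawdown: percentage distance from the running historical peak.
running_peak = prices.cummax()
drawdown = (prices - running_peak) / running_peak

print('Average drawdown:', round(drawdown.mean() * 100, 1), '%')
print('Maximum drawdown:', round(drawdown.min() * 100, 1), '%')

# A crude mean-reversion check: how far does the price wander from its
# 60-day moving average before coming back?
moving_average = prices.rolling(60).mean()
deviation = (prices - moving_average) / moving_average
print('Mean deviation from the 60-day average:', round(deviation.mean() * 100, 1), '%')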

The next step is to accept the occurrence of losses. You need to acknowledge very openly the following: you will lose money on some of your investment positions, inevitably. This is why you build a portfolio of many investment positions. All investors lose money on parts of their portfolio. The trick is to balance losses with even greater gains. You will be experimenting, and some of those experiments will be successful, whilst others will be failures. When you learn investment, you fail a lot. The losses you incur when learning are the cost of your learning.

My price of learning was around €600, and then I bounced back and compensated it with a large surplus. If I take those €600 and compare it to the cost of taking an investment course online, e.g. with Coursera, I think I made a good deal.

Never invest all your money in the stock market. My method is to take some 30% of my monthly income and invest it, month after month, patiently and rhythmically, by instalments. For you, it can be 10% or 50%, which depends on what exactly your personal budget looks like. Invest just the amount you feel you can afford exposing to losses. Nail down this amount honestly. My experience is that big gains in the stock market are always the outcome of many consecutive steps, with experimentation and the cumulative learning derived therefrom.

General remark: you are much calmer when you know what you’re doing. Look at the fundamental trends and factors. Look beyond stock prices. Try to understand what is happening in the real business you are buying and selling the stock of. That gives perspective and allows more rational decisions.  

That would be it, as regards investment. You are welcome to ask questions. Now, I shift my topic radically. I return to the painful and laborious process of writing my book about collective intelligence. I feel like shaking things off a bit. I feel I need a kick in the ass. With the pandemic around, and few social contacts to be had, I need to be the one who kicks my own ass.

I am running myself through a series of typical questions asked by a publisher. Those questions fall in two broad categories: interest for me, as compared to interest for readers. I start with the external point of view: why should anyone bother to read what I am going to write? I guess that I will have two groups of readers: social scientists on the one hand, and plain folks on the other hand. The latter might very well have a deeper insight than the former, only the former like being addressed with reverence. I know something about it: I am a scientist.

Now comes the harsh truth: I don’t know why other people should bother about my writing. Honestly. I don’t know. I have been sort of carried away and in the stream of my own blogging and research, and that question comes as alien to the line of logic I have been developing for months. I need to look at my own writing and thinking from outside, so as to adopt something like a fake observer’s perspective. I have to ask myself what is really interesting in my writing.

I think it is going to be a case of assembling a coherent whole out of sparse pieces. I guess I can enumerate, once again, the main points of interest I find in my research on collective intelligence and investigate whether at all and under what conditions the same points are likely to be interesting for other people.

Here I go. There are two, sort of primary and foundational points. For one, I started my whole research on collective intelligence when I experienced the neophyte’s fascination with Artificial Intelligence, i.e. when I discovered that some specific sequences of equations can really figure stuff out just by experimenting with themselves. I did both some review of literature, and some empirical testing of my own, and I discovered that artificial neural networks can be and are used as more advanced counterparts to classical quantitative models. In social sciences, quantitative models are about the things that human societies do. If an artificial form of intelligence can be representative for what happens in societies, I can hypothesise that said societies are forms of intelligence, too, just collective forms.

I am trying to remember what triggered in me that ‘Aha!’ moment, when I started seriously hypothesising about collective intelligence. I think it was when I was casually listening to an online lecture on AI, streamed from the Massachusetts Institute of Technology. It was about programming AI in robots, in order to make them able to learn. I remember one ‘Aha!’ sentence: ‘With a given set of empirical data supplied for training, robots become more proficient at completing some specific tasks rather than others’. At the time, I was working on an article for the journal ‘Energy’. I was struggling. I had an empirical dataset on energy efficiency in selected countries (i.e. on the average amount of real output per unit of energy consumption), combined with some other variables. After weeks and weeks of data mining, I had a gut feeling that some important meaning is hidden in that data, only I wasn’t able to put my finger precisely on it.

That MIT-coined sentence on robots triggered that crazy question in me. What if I return to the old and apparently obsolete claim of the utilitarian school in social sciences, and assume that all those societies I have empirical data about are something like one big organism, with different variables being just different measurable manifestations of its activity?

Why was that question crazy? Utilitarianism is always contentious, as it is frequently used to claim that small local injustice can be justified by bringing a greater common good for the whole society. Many scholars have advocated for that claim, and probably even more of them have advocated against. I am essentially against. Injustice is injustice, whatever greater good you bring about to justify it. Besides, being born and raised in a communist country, I am viscerally vigilant to people who wield the argument of ‘greater good’.

Yet, the fundamental assumptions of utilitarianism can be used from a different angle. Social systems are essentially collective, and energy systems in a society are just as collective. There is any point at all in talking about the energy efficiency of a society only when we are talking about the entire, intricate system of using energy. About 30% of the energy that we use is used in transport, and transport happens between people, from one person to another. Stands to reason, doesn’t it?

Studying my dataset as a complex manifestation of activity in a big complex organism begs for the basic question: what do organisms do, like in their daily life? They adapt, I thought. They constantly adjust to their environment. I mean, they do if they want to survive. If I settle for studying my dataset as informative about a complex social organism, what does this organism adapt to? It could be adapting to a gazillion of factors, including some invisible cosmic radiation (the visible one is called ‘sunlight’). Still, keeping in mind that sentence about robots, adaptation can be considered as actual optimization of some specific traits. In my dataset, I have a range of variables. Each variable can be hypothetically considered as informative about a task, which the collective social robot strives to excel at.

From there, it was relatively simple. At the time (some 16 months ago), I was already familiar with the logical structure of a perceptron, i.e. a very basic form of artificial neural network. I didn’t know – and I still don’t – how to program effectively the algorithm of a perceptron, but I knew how to make a perceptron in Excel. In a perceptron, I take one variable from my dataset as output, the remaining ones are instrumental as input, and I make my perceptron minimize the error on estimating the output. With that simple strategy in mind, I can make as many alternative perceptrons out of my dataset as I have variables in the latter, and that is exactly what I did with my data on energy efficiency. Out of sheer curiosity, I wanted to check how similar the datasets transformed by the perceptron were to the source empirical data. I computed Euclidean distances between the vectors of expected mean values, in all the datasets I had. I expected something foggy and pretty random, and once again, life went against my expectations. What I found was a clear pattern. The perceptron pegged on optimizing the coefficient of fixed capital assets per one domestic patent application was much more similar to the source dataset than any other transformation.

In other words, I created an intelligent computation, I made it optimize different variables in my dataset, and it turned out that, when optimizing that specific variable, i.e. the coefficient of fixed capital assets per one domestic patent application, that computation was the most faithful representation of the real empirical data.
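Here is a sketch of that experiment, reconstructed in NumPy rather than Excel, on a hypothetical stand-in dataset; take it as an illustration of the procedure, not as the original spreadsheet:

import numpy as np

rng = np.random.default_rng(1)
data = rng.random((200, 6))  # hypothetical stand-in for the empirical dataset

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def perceptron_transform(data, target_col, epochs=500, lr=0.1):
    # One variable is the output; the remaining ones are the input.
    X = np.delete(data, target_col, axis=1)
    y = data[:, target_col:target_col + 1]
    w = rng.random((X.shape[1], 1))
    for _ in range(epochs):
        out = sigmoid(X @ w)
        error = y - out
        w += lr * X.T @ (error * out * (1 - out))  # gradient step
    transformed = data.copy()
    transformed[:, target_col] = sigmoid(X @ w).ravel()
    return transformed

# Euclidean distance between mean values of the source dataset and of
# each transformed dataset: the variable whose optimization yields the
# smallest distance is the one the collective 'robot' seems to optimize.
source_means = data.mean(axis=0)
for col in range(data.shape[1]):
    distance = np.linalg.norm(perceptron_transform(data, col).mean(axis=0) - source_means)
    print('Variable', col, 'as output: distance', round(float(distance), 4))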

This is when I started wrapping my mind around the idea that artificial neural networks can be more than just tools for optimizing quantitative models; they can be simulators of social reality. If that intuition of mine is true, societies can be studied as forms of intelligence, and, as they are, precisely, societies, we are talking about collective intelligence.

Much to my surprise, I am discovering similar a perspective in Steven Pinker’s book ‘How The Mind Works’ (W. W. Norton & Company, New York London, Copyright 1997 by Steven Pinker, ISBN 0-393-04535-8). Professor Steven Pinker uses a perceptron as a representation of human mind, and it seems to be a bloody accurate representation.

That makes me come back to the interest that readers could have in my book about collective intelligence, and I cannot help referring to still another book by another author: Nassim Nicholas Taleb’s ‘The black swan. The impact of the highly improbable’ (2010, Penguin Books, ISBN 9780812973815). Speaking from an abundant experience of quantitative assessment of risk, Nassim Taleb criticizes most quantitative models used in finance and economics as pretty much useless in making reliable predictions. Those quantitative models are good solvers, and they are good at capturing correlations, but they suck at predicting things based on those correlations, he says.

My experience of investment in the stock market tells me that those mid-term waves of stock prices, which I so much like riding, are the product of dissonance rather than correlation. When a specific industry or a specific company suddenly starts behaving in an unexpected way, e.g. in the context of the pandemic, investors really pay attention. Correlations are boring. In the stock market, you make good money when you spot a Black Swan, not another white one. Here comes a nuance. I think that black swans happen unexpectedly from the point of view of quantitative predictions, yet they don’t come out of nowhere. There is always a process that leads to the emergence of a Black Swan. The trick is to spot it in time.

F**k, I need to focus. The interest of my book for the readers. Right. I think I can use the concept of collective intelligence as a pretext to discuss the logic of using quantitative models in social sciences in general. More specifically, I want to study the relation between correlations and orientations. I am going to use an example in order to make my point a bit more explicit, hopefully. In my preceding update, titled ‘Cool discovery’, I did my best, using my neophytic and modest skills in programming, to translate the method of negotiation proposed in Chris Voss’s book ‘Never Split the Difference’ into a Python algorithm. Surprisingly for myself, I found two alternative ways of doing it: as a loop, on the one hand, and as a class, on the other hand. They differ greatly.

Now, I simulate a situation when all social life is a collection of negotiations between people who try to settle, over and over again, contentious issues arising from us being human and together. I assume that we are a collective intelligence of people who learn by negotiated interactions, i.e. by civilized management of conflictual issues. We form social games, and each game involves negotiations. It can be represented as a lot of these >>
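– for instance, a minimal sketch of a class, where all the names are illustrative:

class SocialGame:
    # A 'cultural class': a logical structure connecting names to facts.
    def __init__(self, parties, contentious_issue):
        self.parties = parties
        self.contentious_issue = contentious_issue
        self.settlements = []  # what the game has learnt so far

    def negotiate(self, proposal):
        # Each negotiation appends a tentative settlement.
        self.settlements.append(proposal)
        return self.settlements[-1]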

… and a lot of those >>
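– for instance, a ritualised loop running inside such a class (again, purely illustrative):

game = SocialGame(parties=['me', 'you'], contentious_issue='price')
for proposal in ['100', '90', '95']:
    game.negotiate(proposal)
print(game.settlements)  # the cumulative output of the looped steps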

In other words, we collectively negotiate by creating cultural classes – logical structures connecting names to facts – and inside those classes we ritualise looping behaviours.

Cool discovery

Writing about me learning something helps me control the emotions involved in the very process of learning. It is like learning on top of learning. I want to practice programming, in Python, the learning process of an intelligent structure on the basis of negotiation techniques presented in Chris Voss’s book ‘Never Split the Difference’. It could be hard to translate a book into an algorithm, I know. I like hard stuff, and I am having a go at something even harder: translating two different books into one algorithm. A summary, and an explanation, are due. Chris Voss develops, in the last chapter of his book, a strategy of negotiation based on the concept of the Black Swan, as defined by Nassim Nicholas Taleb in his book ‘The black swan. The impact of the highly improbable’ (I am talking about the revised edition from 2010, published with Penguin Books, ISBN 9780812973815).

Generally, Chris Voss takes a very practical drift in his method of negotiation. By ‘practical’, I mean that he presents techniques which he developed and tested in hostage negotiations at the FBI, where he used to be the chief international hostage negotiator. He seems to attach particular importance to all the techniques which allow unearthing the non-obvious in negotiations: hidden emotions, ethical values, and contextual factors with strong impact on the actual negotiation. His method is an unusual mix of a rigorous cognitive approach with a very emotion-targeting thread. His reference to Black Swans, thus to what we don’t know we don’t know, is an extreme version of that approach. It consists in using literally all our cognitive tools to uncover events and factors in the game which we didn’t even initially know were in the game.

Translating a book into an algorithm, especially for a newbie of programming such as I am, is hard. Still, in the case of ‘Never Split the Difference’, it is a bit easier because of the very game-theoretic nature of the method presented. Chris Voss attaches a lot of importance to taking our time in negotiations, and to making our counterpart make a move rather than overwhelming them with our moves. All that is close to my own perspective and makes the method easier to translate into a functional sequence, where each consecutive phase depends on the preceding phase.

Anyway, I assume that a negotiation is an intelligent structure, i.e. it is an otherwise coherent and relatively durable structure which learns by experimenting with many alternative versions of itself. That implies a lot. Firstly, it implies that the interaction between negotiating parties is far from being casual and accidental: it is a structure, it has coherence, and it is supposed to last by recurrence. Secondly, negotiations are supposed to be learning much more than bargaining and confrontation. Yes, it is a confrontation of interests and viewpoints, nevertheless the endgame is learning. Thirdly, an intelligent structure experiments with many alternative versions of itself and learns by assessing the fitness of those versions in coping with a vector of external stressors. Therefore, negotiating in an intelligent structure means that, consciously or unconsciously, we, mutual counterparts in negotiation, experiment together with many alternative ways of settling our differences, and we are essentially constructive in that process.

Do those assumptions hold? I guess I can somehow verify them by making first steps into programming a negotiation.  I already know two ways of representing an intelligent structure as an algorithm: in the form of a loop (primitive, tried it, does not fully work, yet has some interesting basic properties), or in the form of a class, i.e. a complex logical structure which connects names to numbers.

When represented as a loop, a negotiation is a range of recurrent steps, where the same action is performed a given number of times. Looping means that a negotiation can be divided into a finite number of essentially identical steps, and the endgame is the cumulative output of those steps. With that in mind, I can see that a loop is not truly intelligent a structure. Intelligent learning requires more than just repetition: we need consistent assessment and dissemination of new knowledge. Mind you, many negotiations can play out as ritualized loops, and this is when they are the least productive. Under the condition of unearthing Black Swans hidden in the contentious context of the negotiation, the whole thing can play out as an intelligent structure. Still, many loop-like negotiations which recurrently happen in a social structure, can together form an intelligent structure. Looks like intelligent structures are fractal: there are intelligent structures inside intelligent structures etc. Intelligent social structures can contain chains of ritualized, looped negotiations, which are intelligent structures in themselves.   

Whatever. I program. When I try to sift the essential phenomenological categories out of Chris Voss’s book ‘Never Split the Difference’, I get to the following list of techniques he recommends:

>> Mirroring – I build emotional rapport by just repeating the last three words of each big claim phrased out by my counterpart.

>> Labelling – I further build emotional rapport by carefully and impersonally naming emotions and aspirations in my counterpart.

>> Open-ended questions – I clarify claims and disarm emotional bottlenecks by asking calibrated open questions such as ‘How can we do X, Y, Z?’ or ‘What do we mean by…?’ etc.

>> Claims – I state either what I want, or what I want my counterpart to think I want.

Those four techniques can be used in various shades and combinations to accomplish typical partial outcomes in negotiation, namely: a) opportunities for your counterpart to say openly ‘No’ b) agreement in principle c) guarantee of implementation d) Black Swans, i.e. unexpected attributes of the situation which turn the negotiation in a completely different, favourable direction.

I practice phrasing it out as a class in Python. Here is what I came up with, and which my JupyterLab kernel swallows nicely without yielding any errors:
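Give or take the exact identifiers – the names below are illustrative, not the verbatim code – it runs along these lines: the four techniques become methods, and each ‘self’ collects many versions of itself:

class Negotiation:
    def __init__(self):
        self.mirrors = []         # repeated last words of big claims
        self.labels = []          # named emotions and aspirations
        self.open_questions = []  # calibrated 'How...' / 'What...' questions
        self.claims = []          # what I state I want

    def mirroring(self, last_three_words):
        self.mirrors.append(last_three_words)

    def labelling(self, named_emotion):
        self.labels.append(named_emotion)

    def open_ended_question(self, question):
        self.open_questions.append(question)

    def claim(self, statement):
        self.claims.append(statement)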

Mind you, I don’t know how exactly it works, algorithmically. I am a complete newbie to programming classes in Python, and my first goal is to have the grammar right, and thus not to have to deal with those annoying, salmon-pink-shaded messages of error.

Before I go further into programming negotiation as a class, I feel like I need to go back to my primitive skills, i.e. to programming loops, in order to understand the mechanics of the class I have just created. Each ‘self’ in the class is a category able to have many experimental versions of itself. I try the following structure:
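Something like this (again, the names are illustrative):

negotiation = Negotiation()
for i in range(10):
    # 'dataset' is never defined anywhere above, hence the error below.
    negotiation.mirroring(dataset[i])  # NameError: name 'dataset' is not defined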

As you can see, I received an error of non-definition: a NameError. I have not defined the dataset which I want to use for appending my lists. Such a dataset would contain linguistic strings, essentially. Thus, the datasets I am operating with here are sets of linguistic strings, thus sets of objects. An intelligent structure representative of negotiation is an algorithm for processing natural language. Cool discovery.

I got it all wrong

I like doing research on and writing about collective intelligence in human societies. I am even learning to program in Python in order to know how to code collective intelligence in the form of an artificial neural network. I decided to take on my own intelligence as an interesting diversion from the main course. I hope I can assume I am an intelligent structure. Whilst staying essentially coherent, i.e. whilst remembering who I am, I can experiment a bit with many different versions of myself. Of course, a substantial part of the existential Me works like a shuttle train, going back and forth on the rails of my habits. Still, I can learn heuristically on my own experience. Heuristic learning means that as I learn something, I gain new knowledge about how much more I can learn about and along the lines of the same something.

I want to put into Python code the experience of heuristic, existential learning which I exemplified in the update titled ‘Something even more basic’. It starts with experience, which happens through intentional action on my part. I define a vector of action, i.e. a vector of behavioural patterns, associated with the percentage of my total time they take. That percentage can be understood, alternatively, as the probability that any given minute in the day is devoted to that specific action. Some of those patterns are active, and some are dormant, with the possibility of being triggered into existence. Anyway, it is something like A = {a1, a2, …, an}. Now, in terms of coding in Python, is that vector of action a NumPy array, or is it a Pandas data frame? In terms of pure algorithmic technique, it is a trade-off between computational speed, with a NumPy array, and programmatic versatility, in the case of a Pandas data frame. Here are a few points of view expressed, as regards this specific matter, by people smarter than me:

>> https://www.geeksforgeeks.org/difference-between-pandas-vs-numpy/

>> https://towardsdatascience.com/performance-of-numpy-and-pandas-comparison-7b3e0bea69bb

>> https://vitalflux.com/pandas-dataframe-vs-numpy-array-what-to-use/

In terms of algorithmic theory, these are two different, cognitive phenomena. A NumPy array is a structured collection of numbers, whilst a Pandas data frame is a structure combining many types of data, e.g. string objects with numbers. How does it translate into my own experience? I think that, essentially, my action is a data frame. I take purposeful action to learn something when I have a logical frame to put it in, i.e. when I have words to label what I do. That leads me to starting at even more elementary a level, namely that of a dictionary as regards my actions.

Anyway, I create a notebook with JupyterLab, and I start like a hamster, with stuffing my cheeks with libraries:

>> import numpy as np

>> import pandas as pd

>> import os

>> import math     

Then, I make a first list of labels – my working dictionary of actions:

>> Types_of_action=['Action 1','Action 2','Action 3','Action 4','Action 5']

A part of my brain says, at this point: ‘Wait a minute, bro. Before you put labels on the things that you do, you need to be doing things. Humans label stuff that happens, essentially. Yes, of course, later on, we can make them into metaphors and abstract concepts but, fundamentally, descriptive language comes after experience’. Well, dear part of my brain, this is a valid argument. Things unfold into a paradox, just as I like it. I need raw experience, primal to any logical structuring. How to express it in Python? I can go like:

>> Raw_experience=np.random.rand(np.random.randint(1)) # This is meant to be a NumPy array of random decimal values, with the number of those values being random as well. Note, though, that np.random.randint(1) draws from the range [0, 1), so it always returns 0.

I check. I type ‘Raw_experience’ and run it. Python answers:

>> array([], dtype=float64) # I have just made a paradox: a totally empty array of numbers – no surprise, since randint(1) yielded 0 – i.e. with no numbers in it, and yet those inexistent numbers have a type, namely ‘float64’.

I try something less raw and more cooked, like:

>> Raw_experience_50=np.random.rand(50) # I assume a priori there are 50 distinct units of raw experience

>> Raw_experience_50 # yields…

>> array([0.73209089, 0.94390333, 0.44267215, 0.92111994, 0.4098961 ,
       0.22435079, 0.61447481, 0.21183481, 0.10223352, 0.04481922,
       0.01418667, 0.65747087, 0.22180559, 0.6158434 , 0.82275393,
       0.22446375, 0.31331992, 0.64459349, 0.90762324, 0.65626915,
       0.41462473, 0.35278516, 0.13978946, 0.79563848, 0.41794509,
       0.12931173, 0.37012284, 0.37117378, 0.30989358, 0.26912215,
       0.7404481 , 0.61690128, 0.41023962, 0.9405769 , 0.86930885,
       0.84279381, 0.91174751, 0.04715724, 0.35663278, 0.75116884,
       0.78188546, 0.30712707, 0.00615981, 0.93404037, 0.82483854,
       0.99342718, 0.74814767, 0.49888401, 0.93164796, 0.87413073])

This is a short lesson in empiricism. When I try to code raw, completely unstructured experience, I obtain an empty expanse. I return to that interesting conversation with a part of my brain. Dear part of my brain, you were right to point out that experience comes before language, and yet, without language, i.e. without any logical structuring of reality, I don’t know s**t about experience, and I cannot intelligibly describe it. I need to go for a compromise. I make that experience as raw as possible by making it happen at random, and, at the same time, I need to give it some frame, like the number of times those random things are supposed to happen to me.

I defined a dictionary with 5 types of action in it. Thus, I define a random path of happening as an array made of 5 categories (columns), and 50 rows of experience: Raw_experience_for_action=np.random.rand(50,5).

I acknowledge the cognitive loop I am in, made of raw experience and requiring some language to put order in all that stuff. I make a data frame:

>> Frame_of_action = pd.DataFrame(Raw_experience_for_action, columns = [Types_of_action]) # One remark is due, just in case: I put spaces around ‘=’ only to make the command more readable. Another remark: wrapping Types_of_action in extra square brackets makes pandas build a MultiIndex, which is why the column names print below as tuples like (‘Action 1’,); passing columns = Types_of_action would give flat column names.

I check with ‘Frame_of_action.info()’ and I get:

 
>> <class 'pandas.core.frame.DataFrame'>
RangeIndex: 50 entries, 0 to 49
Data columns (total 5 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   (Action 1,)  50 non-null     float64
 1   (Action 2,)  50 non-null     float64
 2   (Action 3,)  50 non-null     float64
 3   (Action 4,)  50 non-null     float64
 4   (Action 5,)  50 non-null     float64
dtypes: float64(5)
memory usage: 2.1 KB

Once I have that basic frame of action, what is my next step? I need to learn from that experience. The frame of action is supposed to give me knowledge. What is knowledge coming from action? That type of knowledge is called ‘outcomes’. My action brings an outcome, and I evaluate it. Now, in the baseline algorithm of an artificial neural network, evaluation of outcomes happens by pitching them against a predefined benchmark, something like an expected outcome. As I am doing my best to be an intelligent structure, there is that aspect too, of course. Yet, there is something else, which I want to deconstruct, understand, and reconstruct as Python code. There is discovery and exploration, thus something that I perceive as entirely new a piece of experience. I don’t have any benchmark I can consciously pitch that experience against.

I can perceive my fresh experiential knowledge in two different ways: as a new piece of data, or as an error, i.e. as deviation from the expected state of reality. Both mathematically, and algorithmically, it is a difference. Mathematically, any number, thus any piece of data, is the result of an operation. If I note down, in the algorithm of my heuristic learning, my new knowledge as literally new, anyway it needs to come from some kind of mathematical operation: addition, subtraction, multiplication, or division.

As I think about myself learning new stuff, there is a phase, in the beginning, when I have some outcomes, and yet I don’t have any precise idea what those outcomes are, exactly. This is something that happens in coincidence (I don’t even know, yet, if this is a functional correlation) with the actions I do.

As I think about all that stuff, I try to make a loop of learning between action and outcomes, and as I am doing it, I realize I got it all wrong. For the last few weeks, I have been assuming that an intelligent structure can and should be coded as a loop (see, for example, ‘Two loops, one inside the other’). Still, as I am trying to code the process of my own heuristic learning, I realize that an algorithmic loop has fundamental flaws in that respect. Essentially, each experimental round – where I pitch the outcomes of my actions against a pre-defined expected outcome – is a separate loop, as I have to feed forward the resulting error. With many experimental rounds, like thousands, making a separate loop for each of them is algorithmically clumsy. I know it even at my neophytic stage of advancement in Python.

When I don’t know what to do, I can ask around. I can ask people smarter than me. And so I ask:    

>> https://hackernoon.com/building-a-feedforward-neural-network-from-scratch-in-python-d3526457156b

>> https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6

After rummaging a bit in the content available under those links, I realize that intelligent structures can be represented algorithmically as classes (https://docs.python.org/3/tutorial/classes.html ), and it is more functional a way than representing them as loops. From the second of the above-mentioned links, I took an example of an algorithm, which I allow myself to reproduce below. Discussing this algorithm will help me wrap my own mind around it and develop new understandings.
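To the best of my reading, the class presented there goes as follows; I spell out the sigmoid helpers so that the snippet stands on its own:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # The derivative, expressed in terms of the sigmoid's own output.
    return x * (1.0 - x)

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        self.weights1 = np.random.rand(self.input.shape[1], 4)
        self.weights2 = np.random.rand(4, 1)
        self.y = y
        self.output = np.zeros(self.y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        # Chain rule: the error is propagated back through both layers.
        d_weights2 = np.dot(self.layer1.T,
                            2 * (self.y - self.output) * sigmoid_derivative(self.output))
        d_weights1 = np.dot(self.input.T,
                            np.dot(2 * (self.y - self.output) * sigmoid_derivative(self.output),
                                   self.weights2.T) * sigmoid_derivative(self.layer1))
        self.weights1 += d_weights1
        self.weights2 += d_weights2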

(Source: https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6)

A neural network is a class, i.e. a type of object which allows creating many different instances of itself. Inside the class, types of instances are defined, using selves: ‘self.input’, ‘self.output’ etc. Selves are composed into distinct functions, introduced with the keyword ‘def’. Among the three functions defined inside the class ‘NeuralNetwork’, one is particularly interesting, namely ‘__init__’. As I rummage through online resources, it turns out that ‘__init__’ serves to create objects inside a class, and then to make selves of those objects.

I am trying to dissect the use of ‘__init__’ in this specific algorithm. It is introduced with three parameters: self, x, and y. I don’t quite get the corresponding logic. I am trying with something simpler: an algorithm I found at https://www.tutorialspoint.com/What-is-difference-between-self-and-init-methods-in-python-Class :
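As far as I can reconstruct it, the example given there is of this kind (a sketch, not a verbatim copy):

class Rectangle:
    def __init__(self, length, breadth, unit_cost=0):
        self.length = length
        self.breadth = breadth
        self.unit_cost = unit_cost

    def get_area(self):
        return self.length * self.breadth

    def calculate_cost(self):
        # Cost of the whole rectangle, given a cost per unit of area.
        return self.get_area() * self.unit_cost

r = Rectangle(160, 120, unit_cost=2000)
print(r.calculate_cost())  # 38400000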

I think I start to understand. Inside the ‘__init__’ function, I need to signal there are different instances – selves – of the class I create. Then, I add the variables I intend to use. In other words, each specific self of the class ‘Rectangle’ has three dimensions: length, breadth, and unit cost.

I am trying to apply this logic to my initial problem, i.e. my own heuristic learning, with the bold assumption that I am an intelligent structure. I go:

>> class MyLearning:
>>     def __init__(self, action, outcomes, new_knowledge = 0):
>>         self.action = action
>>         self.outcomes = outcomes
>>         self.new_knowledge = new_knowledge
>>     def learning(self):
>>         return self.action - self.outcomes

When I run this code, there is no error message from Python, which is encouraging for a newbie such as I am. Mind you, I have truly vague an idea of what I have just coded. I know it is syntactically correct Python.

Something even more basic

It had to happen. I have been seeing it coming, essentially; only I didn’t want to face it. The nature of truth. Whatever kind of intellectual adventure we engage in, i.e. whatever kind of game we start playing with coherent understanding of reality in the centre of it, that s**t just has to come our way. We ask ourselves: ‘What is true?’.

My present take on truth is largely based on something even more basic than intellectual pursuit as such. Since December 2016, I have been practicing the Wim Hof method (https://www.wimhofmethod.com/ ). In short, this is a combination of breathing exercises with purposeful exposure to cold. You can check the original recipe on the website I have just provided, and I want to give an account of my own experience, such as I practice it now. Can I say that I practice an experience? Yes, in this case this is definitely the right way to put it. My practice of the Wim Hof method consists, precisely, in me exposing myself, over and over again, consistently, to a special experience.

Anyway, every evening, I practice breathing exercises, followed by a cold shower. I do the breathing in two phases. In the first phase, I do the classical Wim Hof pattern: 30 – 40 deep, powerful breaths, when I work with my respiratory muscles as energetically as I can, inhale through the nose, exhale through pinched mouth (so as to feel a light resistance in my lips and cheeks, as if my mouth were a bellows with limited flow capacity), and then, after I exhale for the 30th ÷ 40th time, I pass into apnoea, i.e. I stop breathing. For how long? Here comes the first component of experiential truth: I have no idea how long. I don’t measure my phase of apnoea with a timer, I measure it with my proprioception.

In the beginning, that is 3 – 4 years ago, I used to follow religiously the recommendation of Wim Hof himself, namely to stay at the limit of my comfort zone. In other words, beginners should stay in apnoea just long enough to start feeling uncomfortable, and then they should inhale. When I feel something like muscular panic inside, e.g. my throat muscles going into a sort of spasm, I inhale, deeply, and I hold for like 10 seconds. Then, I repeat the whole cycle as many times as I feel like. Now, after 4 years of practice, I know that my comfort zone can stretch. Now, when I pass into apnoea, the spasm of my throat is one of the first proprioceptive feelings I experience, not the last. I stop breathing, my throat muscles contract in something that feels like habitual panic, and then something in me says: ‘Wait a minute. Like really, wait. That feeling of muscular panic in the throat muscles, it is interesting. Wait, explore it, discover. Please’. Thus, I discover. I discover that my throat muscles spasm in a cycle of contraction and relaxation. I discover that once I set out to discover that feeling, I start discovering layers of calm. Each short cycle of spasmodic panic in my throat induces in me something like a deep reach into my body, with emotions such as fear fading away a bit. I experience something like spots of pleasurable tingling and warmth, across my body, mostly in the big muscular groups: the legs, the abs, the back. There comes a moment when the next spasm in my respiratory muscles drives me to inhale. This specific moment moves in time as I practice. My brain seems to be doing, at every daily practice of mine, something like accounting work: ‘How much of my subjectively experienced safety from suffocation am I willing to give away, when rewarded with still another piece of that strange experience, when I feel as if I were suffocating, and, as strange as it seems, I sort of like the feeling?’.

I repeat the cycle ’30 – 40 powerful breaths, apnoea, then inhale and hold for a few seconds’ a few times, usually 3 to 5 times, and this is my phase one. After, I pass into my phase two, which consists in doing as many power breaths as I can until I start feeling fatigue in my respiratory muscles. Usually, it is something like 250 breaths, sometimes I go beyond 300. After the last breath, I pass in apnoea, and, whilst staying in that state, I do 20 push-ups. After the last push-up, I breathe in, and hold for a long moment.  

I repeat two times the whole big cycle of phase one followed by phase two. Why two times? Why those two phases inside each cycle? See? That’s the big question. I don’t know. Something in me calibrates my practice into that specific protocol. That something in me is semi-conscious. I know I am following some sort of discovery into my sensory experience. When I am in phase one, I am playing with my fear of suffocation. Playing with that fear is bloody interesting. In phase two, let’s face it, I get high as f**k on oxygen. Really. Being oxygen-high, my brain starts behaving differently. It starts racing. Thoughts turn up in my consciousness like fireworks. Many of those thoughts are flashback memories. For a short moment, my brain dives head-first into a moment of my past. I suddenly re-experience a situation, inclusive of emotions and sensory feelings. This is a transporting experience. I realize, over and over again, that what I remember is full of ambiguity. There are many possible versions of memories about the same occurrence in the past. I feel great with that. When I started to go through this specific experience, when high on oxygen, I realized that my own memorized past is a mine of information. Among other things, it opens me up onto the realization of how subjective my view is of events that happened to me. I realized how many very different viewpoints I can hold as regards the same occurrence.

Here comes another interesting thing which I experience when being high on oxygen amidst those hyper-long sequences of power breathing. Just as I rework my memories, I rework my intellectual take on present events. Ideas suddenly recombine into something new and interesting. Heavily emotional judgments about recent or ongoing tense situations suddenly get nuanced, and I joyfully indulge in accepting what a d**k I have been and how many alternative options I have.

After the full practice of two cycles, each composed of two phases in breathing exercises, I go under a cold shower. Nothing big, something like 30 seconds. Enough to experience another interesting distinction. When I pour cold water on my skin, the first, spontaneous label I can put on that experience is ‘cold’. Once again, something in me says ‘Wait a minute! There is no way in biology you can go into hypothermia in 30 seconds under a cold shower. What you experience is not cold, it is something else. It is the fear of cold. It is a flood of sensory warnings’. There is a follow-up experience which essentially confirms that intuition. Sometimes, when I feel like going really hard on that cold shower, like 2 minutes, I experience a delayed feeling of cold. One or two hours later, when I am warm and comfy, suddenly I start shivering and experiencing once again the fear of cold. Now, there is even no sensory stimulation whatsoever. It is something like a vegetative memory of the cold which I have experienced shortly before. My brain seems to be so overwhelmed with that information that it needs to rework through it. It is something like a vegetative night-dream. Funny.

All of that happens during each daily practice of the Wim Hof method. Those internal experiences vanish shortly after I stop the session. There is another layer, namely that of change which I observe in me as I practice over a long time. I have realized, and I keep realizing the immense diversity of ways I can experience my life. I keep taking and re-taking a step back from my current emotions and viewpoints. I sort of ride the wave crest of all the emotional and cognitive stuff that goes on inside of me. Hilarious and liberating an experience. Over the time spent practicing the Wim Hof method, I have learnt to empty my consciousness. It is real. For years, I have been dutifully listening to psychologists who claim that no one can really clear their conscious mind of any thought whatsoever. Here comes the deep revelation of existential truth through experience. My dear psychologists, you are wrong. I can and do experience, frequently and wilfully, a frame of mind when my consciousness is really like calm water. Nothing, like really. Not a single conscious thought. On the other hand, I know that I can very productively combine that state of non-thinking with a wild, no-holds-barred ride on thinking, when I just let it go internally and let thoughts flow through my consciousness. I know by experience that when I go in that sort of meditation, alternating the limpid state of consciousness with the crazy rollercoaster of running thoughts, my subconscious gets a kick, and I just get problem-solving done.

The nature of truth that I want to be the provisional bottom line, under my personal account of practicing the Wim Hof method, is existential. I know, people have already said it. Jean-Paul Sartre, Martin Heidegger. Existentialists. They claimed that truth is existential, that it comes essentially and primordially from experience. I know. Still, reading about the depth of existential truth is one thing, and experiencing it by myself is a completely different ball game. Existential truth has limits, and those limits are set, precisely, by the scope of experience we have lived. Here comes a painful, and yet quite enlightening, experience of mine, as regards the limits of existential truth. Yesterday, i.e. on January 1st, 2021, I was listening to one of my favourite podcasts, namely the ‘Full Auto Friday’ by Andy Stumpf (https://youtu.be/Svw0gxOj_to ). A fan of the podcast asked for discussing his case, namely that of a young guy, age 18, whose father is alcoholic, elaborately destroys himself and people around him, and makes his young son face a deep dilemma: ‘To what extent should I sacrifice myself in order to help my father?’.

I know the story, in a slightly different shade. My relations with my father (he died in 2019) had never been exemplary, for many reasons. His f**k-ups and my f**k-ups summed up and elevated each other to a power, and, long story short, my dad fell into alcoholism around the age I am now, i.e. in his fifties, in the 1990s. He would drink himself into death and destruction, literally. He destroyed his liver, went through long-lasting portal hypertension (https://en.wikipedia.org/wiki/Portal_hypertension ), destroyed his professional life, destroyed his relationships, including, very largely, his relationship with me. At the time, I was mostly watching the show from outside. My father and I lived in different cities and did not really share much of our existence. Just to pre-empt your questions: the role and position of my late mum in all that is a different story. She was living her life in France.

My position of bystander came brutally to an end in 2002, when my dad found himself at the bottom of the abyss. He stopped drinking, because in Polish hospitals we have something like semi-legal forced detox, and this is one of those rare times when something just semi-legal comes as deliverance. Yet his having stopped drinking was about the only bright spot in his existence. The moment he was discharged from hospital, he was homeless. In the state he was in, that meant death within weeks. I was his only hope. At age 34, I had to decide whether I would take care of a freshly detoxed alcoholic, who had not really been paying much attention to me and who happened to be my biological father. I decided I would take care of him, and so I did. Long story short, the next 17 years were rough. Both the existential status of my father and his health, including his personality, were irremediably damaged by alcoholism. Yes, this is the sad truth: even fully rehabbed alcoholics have only a chance of regaining normal life, and that chance is truly far from certainty. Those 17 years, until my dad’s death, were like a very long agony, pointless and painful.

And here I was, yesterday, listening to Andy Stumpf’s podcast, and to him instructing his young listener that he should ‘draw a line in the sand, and hold that line, and not allow his alcoholic father to destroy another life’. I was listening to that, I fully acknowledged the wisdom of those words, and I realized that I had done exactly the opposite. No line in the sand, just a crazy dive, head-first, into what, so far, had been the darkest experience of my life. Did I make the right choice? I don’t know, really. No clue. Even when I run that situation through the meditative technique I described a few paragraphs earlier, and I have done it many times, I still have no clue. These are my personal limits of existential truth. I was facing a loop. To any young man, his father is the main role model for ethical behaviour. My father’s behaviour had been anything but a moral compass. Trying to figure out the right thing to do, in that situation, was like being locked in a box and trying to open it with a key lying outside the box.

I guess any person in a similar situation faces the same limits. This is one of those situations when I really need cold, scientific, objectivized truth. Some life choices are so complex that existential truth is not enough. Yet, with many other things, existential truth just works. What is my existential truth, though, sort of generally and across the board? I think it is a very functional type of truth, namely that of gradual, step-by-step achievement. In my life, existential truth serves me to calibrate myself to achieve what I want to achieve.

Boots on the ground

I continue the fundamental cleaning in my head as the year 2020 draws to its end. What do I want? Firstly, I want to exploit and develop my hypothesis of collective intelligence in human societies, and I want to develop my programming skills in Python. Secondly, I want to develop my skills and my position as a facilitator and manager of research projects at the frontier between the academic world and that of business. How will I know I have what I want? If I actually program a workable (and working) intelligent structure, able to uncover and reconstruct the collective intelligence of a social structure out of available empirical data – namely to uncover and reconstruct the chief collective outcomes that structure is after, and its patterns of reaction to random exogenous disturbances – that would be an almost tangible outcome for me, telling me I have made a significant step. When I see that I have repetitive, predictable patterns of facilitating the start of joint research projects in consortiums of scientific and business entities, then I will know I have nailed down something in terms of project management. If I can start something like an investment fund for innovative technologies, then I will definitely know I am on the right track.

As I want to program an intelligent structure, it is essentially an artificial neural network, possibly instrumented with additional functions, such as data collection, data cleansing etc. I know I want to understand very specifically what my neural network does. I want to understand every step it takes. To that purpose, I need to figure out a workable algorithm of my own, where I understand every line of code. It can be sub-optimally slow and limited in its real computational power, yet I need it. On the other hand, the Internet is more and more equipped with cloud-based platforms and libraries, such as IBM Watson or TensorFlow, which provide optimized processes to build complex pieces of AI. I already know that being truly proficient in Data Science entails skills pertinent to using those cloud-based tools. My bottom line is that if I want to program an intelligent structure communicable and appealing to other people, I need to program it at two levels: as my own prototypic code, and as a procedure of using cloud-based platforms to replicate it.

At the juncture of those two how-will-I-know pieces of evidence, an idea emerges, a crazy one. What if I can program an intelligent structure which uncovers and reconstructs one or more alternative business models out of the available empirical data? Interesting. The empirical data I work the most with, as regards business models, is the data provided in the annual reports of publicly listed companies. Secondarily, data about financial markets sort of connects to it. My own experience as a small investor supplies me with an existential basis to back this external data, and that experience suggests I define a business model as a portfolio of assets combined with broadly understood behavioural patterns, both in the people active inside the business model, thus running it and employed with it, and in the people connected to that model from outside, as customers, suppliers, investors etc.

How will other people know I have what I want? The intelligent structure I will have programmed has to work across different individual environments, which is an elegant way of saying it should work on different computers. Logically, I can say I have clearly demonstrated to other people that I achieved what I wanted with that thing of collective intelligence when said other people are willing to try my algorithm, and successful at it. Here comes the point of willingness in other people. I think it is something like an existential thing across the board. When we want other people to try and do something, and they don’t, we are pissed. When other people want us to try and do something, and we don’t, we are pissed, and they are pissed. As regards my hypothesis of collective intelligence, I have already experienced that sort of intellectual barrier when my articles get reviewed. Reviewers write that my hypothesis is interesting, yet not articulate and not grounded enough. Honestly, I can’t blame them. My feeling is that it is even hard to say that I have that hypothesis of collective intelligence. It is rather as if that hypothesis had me as its voice and speech. Crazy, I know, only this is how I feel about the thing, and I know by experience that good science (and good things in general) turns up when I am honest with myself.

My point is that I feel I need to write a book about that concept of collective intelligence, in order to give a full explanation of my hypothesis. My observations about cities and their role in human civilization make, for the moment, one of the most tangible topics I can attach the theoretical hypothesis to. Writing that book about cities, together with programming an intelligent structure, takes on a different shade now. It becomes a complex account of how we can deconstruct something – our own collective intelligence – which we know is there and yet, being inside that thing, we have a hard time describing it.

That book about cities, abundantly referring to my hypothesis of collective intelligence, could be one of the ways to convince other people to at least try what I propose. Thus, once again, I restate what I understand by intelligent structure. It is a structure which learns new patterns by experimenting with many alternative versions of itself, whilst staying internally coherent. I return to my ‘DU_DG’ database about cities (see ‘It is important to re-assume the meaning’) and I am re-assuming the concept of alternative versions, in an intelligent structure.

I have a dataset structured into n variables and m empirical observations. In my DU_DG database, as in many other economic datasets, distinct observations are defined as the state of a country in a given year. As I look at the dataset (metaphorically, it has content and meaning, but it does not have any physical shape save for the one my software supplies it with), and as I look at my thoughts (metaphorically, once again), I realize I have been subconsciously distinguishing two levels of defining an intelligent structure in that dataset, and, correspondingly, two ways of defining alternative versions thereof. At the first level, the entire dataset is supposed to be an intelligent structure and alternative versions thereof consist in alternative dichotomies of the type ‘one variable as output, i.e. as the desired outcome to optimize, and the remaining ones as instrumental input’. At this level of structural intelligence – by which term I understand the way of being in an intelligent structure – alternative versions are alternative orientations, and there are as many of them as there are variables.
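Just to make that idea tangible, here is a minimal sketch, in Python with Pandas, of how I understand the enumeration of alternative orientations; the variable names are hypothetical, they are not the actual columns of my DU_DG database:

>> import pandas as pd

>> data=pd.DataFrame({'urban_density':[120.3,125.1,98.7,101.2], 'agri_land':[450.0,448.2,610.5,608.9], 'gdp_per_capita':[41000,42500,29000,30100]}) # hypothetical dataset: m observations on n variables

>> orientations=[{'output':var, 'inputs':[v for v in data.columns if v!=var]} for var in data.columns] # one orientation = one variable as output, the remaining ones as instrumental input

>> for o in orientations: print('orientation on', o['output'], 'instrumented with', o['inputs']) # as many orientations as there are variables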

Distinction into variables is largely, although not entirely, epistemic rather than ontological. The headcount of urban population is not a fundamentally different phenomenon from the surface of agricultural land. Yes, the units of measurement are different, i.e. people vs. square kilometres, but, ontologically, it is largely the same existential stuff, possible to describe as people living somewhere in large numbers and being successful at it. Historically, though, social scientists and governments alike have come to the conclusion that these two metrics carry different meanings, and thus it comes in handy to distinguish them as semantic vessels to collect and convey information. The distinction of alternative orientations in an intelligent structure, supposedly represented in a dataset, is arbitrary and cognitive more than ontological. It depends on the number of variables we have. If I add variables to the dataset, e.g. by computing coefficients between the incumbent variables, I can create new orientations for the intelligent structure, i.e. new alternative versions to experiment with.

The point which comes to my mind is that the complexity of an intelligent structure, at that first level, depends on the complexity of my observational apparatus. The more different variables I can distinguish, and measure as regards a given society, the more complexity I can give to the allegedly existing, collectively intelligent structure of that society.

Whichever combination ‘output variable vs. input variables’ I am experimenting with, there comes the second level of defining intelligent structures, i.e. that of defining them as separate countries. They are sort of local intelligent structures, and, at the same time, they are alternative experimental versions of the overarching intelligent structure to be found in the vector of variables. Each such local intelligent structure, with a flag, a national anthem, and a government, produces many alternative versions of itself in consecutive years covered by the window of observation I have in my dataset.

I can see a subtle distinction here. A country produces alternative versions of itself, in different years of its existence, sort of objectively and without giving a f**k about my epistemological distinctions. It just exists and tries to be good at it. Experimenting comes as natural in the flow of time. This is unguided learning. On the other hand, I produce different orientations of the entire dataset. This is guided learning. Now, I understand the importance of the degree of supervision in artificial neural networks.

I can see an important lesson for me, here. If I want to program intelligent structures ‘able to uncover and reconstruct the collective intelligence of a social structure out of available empirical data – namely to uncover and reconstruct the chief collective outcomes that structure is after, and its patterns of reaction to random exogenous disturbances’, I need to distinguish those two levels of learning in the first place, namely the unguided flow of existential states from the guided structuring into variables and orientations. When I have an empirical dataset and I want to program an intelligent structure able to deconstruct the collective intelligence represented in that dataset, I need to define accurately the basic ontological units, i.e. the fundamentally existing things, then I define alternative states of those things, and finally I define alternative orientations.
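A quick sketch of that two-level distinction, with a hypothetical panel dataset; the country codes and column names are my own illustration, not the actual DU_DG structure:

>> import pandas as pd

>> panel=pd.DataFrame({'country':['PL','PL','FR','FR'], 'year':[2018,2019,2018,2019], 'urban_density':[123.0,124.5,118.2,119.0], 'energy_use':[2710.0,2695.5,3780.1,3741.8]}) # each row = the state of a country in a given year

>> local_structures={name:group for name,group in panel.groupby('country')} # level 1, unguided: each country is a local intelligent structure, producing versions of itself year after year

>> variables=[c for c in panel.columns if c not in ('country','year')]

>> orientations={var:[v for v in variables if v!=var] for var in variables} # level 2, guided: the orientations I impose on the whole dataset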

Now, for a contrast, I pass from those abstract thoughts on intelligent structures to a quick review of my learning, so far, to program those structures in Python. Below, I present that review as a quick list of the separate files I created in JupyterLab, together with a quick characteristic of the problems I am trying to solve in each of those files, as well as of the solutions found and not found.

>> Practice Dec 11 2020.ipynb.

In this file, I work with the IMF database WEOOct2020 (https://www.imf.org/en/Publications/WEO/weo-database/2020/October ). I practiced reading complex datasets with an artificially flattened structure. It is a table in which index columns are used to add dimensions to an otherwise two-dimensional format. I practiced the ‘read_excel’ and ‘read_csv’ commands. On the whole, it seems that converting an Excel file to CSV, and then reading the CSV in Python, is a better method than reading the Excel directly. Problems solved: a) cleansing the dataset of not-a-number components, with successful conversion of initially ‘object’ columns into the desired ‘float64’ format b) setting descriptive indexes on the data frame c) listing unique labels from a descriptive index d) inserting new columns into the data frame e) adding (compounding) the contents of two existing, descriptive index columns into a third index column. Failures: i) reading data from an XML file ii) reading data from the SDMX format iii) transposing my data frame so as to put index values of economic variables as column names and years as index values in a column.
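As regards failure iii), and for the record, here is a minimal sketch of the cleansing-and-transposing routine as I currently understand it should go; the file name and the column labels are placeholders, not the actual structure of the WEO file:

>> import pandas as pd

>> weo=pd.read_csv('WEOOct2020.csv', sep=';', na_values=['n/a','--']) # placeholder file name; 'na_values' cleanses not-a-number components at the source

>> year_cols=[c for c in weo.columns if c.isdigit()] # columns labelled '1980', '1981', etc.

>> for c in year_cols: weo[c]=pd.to_numeric(weo[c].astype(str).str.replace(',',''), errors='coerce') # conversion of 'object' columns into 'float64'

>> weo['Country_Subject']=weo['Country']+'_'+weo['Subject Descriptor'] # compounding two descriptive columns into one index column

>> weo=weo.set_index('Country_Subject')

>> flipped=weo[year_cols].T # the transposition I failed at: variables as columns, years as index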

>> Practice Dec 8 2020.ipynb.

In this file, I worked with a favourite dataset of mine, the Penn Tables 9.1 (https://www.rug.nl/ggdc/productivity/pwt/?lang=en ). I described my work with it in two earlier updates, namely ‘Two loops, one inside the other’ and ‘Mathematical distance’. I succeeded in creating an intelligent structure from that dataset. I failed at properly formatting the output of that structure, and thus at comparing the cognitive value of the different orientations I made it simulate.

>> Practice with Mortality.ipynb.

I created this file as a first practice before working with the above-mentioned WEOOct2020 database. I took one dataset from the website of the World Bank, namely that pertinent to the coefficient of adult male mortality (https://data.worldbank.org/indicator/SP.DYN.AMRT.MA ). I practiced reading data from CSV files, and I unsuccessfully tried to stack the dataset, i.e. to transform columns corresponding to different years of observation into rows indexed with labels corresponding to years.   
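As far as I can tell, that stacking is a job for the ‘melt’ command in Pandas; a sketch, where I assume the standard layout of World Bank indicator files, i.e. four rows of metadata on top and years as columns:

>> import pandas as pd

>> mort=pd.read_csv('API_SP.DYN.AMRT.MA.csv', skiprows=4) # placeholder file name; 'skiprows=4' jumps over the metadata header

>> year_cols=[c for c in mort.columns if c.isdigit()] # columns '1960', '1961', ...

>> long=mort.melt(id_vars=['Country Name','Country Code'], value_vars=year_cols, var_name='Year', value_name='MortalityMale') # wide to long: one row per country-year

>> long['Year']=long['Year'].astype(int)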

>> Practice DU_DG.ipynb.

In this file, I am practicing with my own dataset pertinent to the density of urban population and its correlates. The dataset is already structured in Excel. I started practicing the coding of the same intelligent structure I made with Penn Tables, supposed to study the orientation of the societies under scrutiny. Same problems and same failures as with Penn Tables 9.1: for the moment, I cannot nail down the way to get output data in structures that allow full comparability. My columns tend to wander across the output data frames. In other words, the vectors of mean expected values produced by the code I made have a slightly (just slightly, and sufficiently to be annoying) different structure from the original dataset. I don’t know why yet, and I don’t know how to fix it.
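One way to pin those wandering columns down, I suppose, is to impose the original column order explicitly on every output frame, e.g. with the ‘reindex’ command; a sketch with hypothetical names:

>> import pandas as pd

>> original_columns=['urban_density','agri_land','gdp_per_capita'] # the column order of the source dataset

>> means={'gdp_per_capita':30500.0, 'urban_density':111.3, 'agri_land':529.4} # hypothetical output of one experiment: mean expected values per variable

>> out=pd.DataFrame([means]).reindex(columns=original_columns) # forcing the original order keeps outputs of different orientations comparable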

On the other hand, in that same file, I have been messing around a bit with algorithms based on the ‘scikit’ library for Python. Nice graphs, and functions which I still need to understand.
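For the record, the kind of thing I mean by messing around with ‘scikit’ (scikit-learn, strictly speaking); a minimal, hypothetical example, not my actual code:

>> import pandas as pd

>> from sklearn.preprocessing import StandardScaler

>> from sklearn.decomposition import PCA

>> df=pd.DataFrame({'urban_density':[120.3,125.1,98.7,101.2], 'agri_land':[450.0,448.2,610.5,608.9], 'gdp_per_capita':[41000,42500,29000,30100]}) # hypothetical slice of the DU_DG data

>> components=PCA(n_components=2).fit_transform(StandardScaler().fit_transform(df)) # standardize, then project onto two principal components: the kind of thing that yields nice graphs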

>> Practice SEC Financials.ipynb.

Here, I work with data published by the US Securities and Exchange Commission, regarding the financials of individual companies listed in the US stock market (https://www.sec.gov/dera/data/financial-statement-data-sets.html ). The challenge here consists in translating data originally supplied in *.TXT files into numerical data frames in Python. The problem I have managed to solve so far (this is the most recent piece of my programming) is the most elementary translation of TXT data into a Pandas data frame, using the ‘open()’ command and the ‘f.readlines()’ one. Another small victory here is reading data from a sub-directory inside the working directory of JupyterLab, i.e. inside the root directory of my user profile. I used two methods of reading the TXT data. Both sort of worked. First, I used the following sequence:

>> with open('2020q3/num.txt') as f:

            numbers=f.readlines()

>> Numbers=pd.DataFrame(numbers)

… which, when checked with the ‘Numbers.info()’ command, yields:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2351641 entries, 0 to 2351640
Data columns (total 1 columns):
 #   Column  Dtype
---  ------  -----
 0   0       object
dtypes: object(1)
memory usage: 17.9+ MB

In other words, that sequence did not split the string of column names into separate columns, and the ‘Numbers’ data frame contains one column, in which every row is a long string structured with ‘\t’ (tab) separators. I tried to be smart with it. I did:

>> Numbers.to_csv('Num2') # I converted the Pandas data frame into a CSV file

>> Num3=pd.DataFrame(pd.read_csv('Num2', sep=';')) # …and I tried to read back from the CSV, experimenting with different separators. None of it worked. With the ‘sep=’ argument in the command, I kept getting a parsing error along the lines of ‘ParserError: Error tokenizing data. C error: Expected 1 fields in line 3952, saw 10’. When I didn’t use the ‘sep=’ argument, the command did not yield an error, yet it yielded the same long column of structured strings instead of many data columns.

Thus, I gave up a bit, and I used Excel to open the TXT file and to save a copy of it in the CSV format. Then, I just created a data frame from the CSV dataset, through the NUM_from_CSV=pd.DataFrame(pd.read_csv('SEC_NUM.csv', sep=';')) command, which, checked with the ‘NUM_from_CSV.info()’ command, yields:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1048575 entries, 0 to 1048574
Data columns (total 9 columns):
 #   Column    Non-Null Count    Dtype
---  ------    --------------    -----
 0   adsh      1048575 non-null  object
 1   tag       1048575 non-null  object
 2   version   1048575 non-null  object
 3   coreg     30131 non-null    object
 4   ddate     1048575 non-null  int64
 5   qtrs      1048575 non-null  int64
 6   uom       1048575 non-null  object
 7   value     1034174 non-null  float64
 8   footnote  1564 non-null     object
dtypes: float64(1), int64(2), object(6)
memory usage: 72.0+ MB
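A side remark: 1,048,575 rows is exactly the maximum an Excel sheet can hold (1,048,576 minus the header row), which strongly suggests the Excel conversion silently truncated the original 2,351,641 rows. If, as I suspect from those long structured strings, the SEC TXT files are simply tab-delimited, reading them directly should avoid both the detour and the truncation:

>> import pandas as pd

>> num=pd.read_csv('2020q3/num.txt', sep='\t', low_memory=False) # assumption: the file is tab-delimited; 'low_memory=False' avoids mixed-type guessing on a file this size

>> num.info()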

The ‘tag’ column in the NUM_from_CSV data frame contains the names of financial variables ascribed to companies identified with their ‘adsh’ codes. I experience the same challenge, and, so far, the same failure as with the WEOOct2020 database from the IMF, namely translating the different values of a descriptive index into a dictionary, and then, in the next step, flipping the database so as to turn those different index categories into separate columns (variables).
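As for flipping those index categories into separate columns, I suspect it is a textbook case for the ‘pivot_table’ command, taking the frame I have just read and averaging whenever a company reports the same tag more than once:

>> wide=NUM_from_CSV.pivot_table(index='adsh', columns='tag', values='value', aggfunc='mean') # one row per company, one column per financial variable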

As I pass that programming of mine in review, I become aware that reading and properly structuring different formats of data is the sensory apparatus of the intelligent structure I want to program. Operations of data cleansing and data formatting are the fundamental skills I need to develop in programming. Contrary to what I expected a few weeks ago, when I was taking on programming in Python, elaborate mathematical constructs are simpler to code than I thought they would be. What might be harder, mind you, is to program them so as to optimize computational efficiency with large datasets. Still, the very basic, boots-on-the-ground structuring of data seems to be the name of the game for programming intelligent structures.