The right side of the disruption

I am swivelling my intellectual crosshairs around, as there is a lot going on in the world. Well, there is usually a lot going on in the world, and I think it is just the focus of my personal attention that changes its scope. Sometimes I pay attention just to the stuff immediately in front of me, whilst at other times I go wide and broad in my perspective.

My research on collective intelligence, and on the application of artificial neural networks as simulators thereof, has recently brought me to studying outlier cases. I am an economist, and I do business in the stock market, and therefore it comes as sort of logical that I am interested in business outliers. I hold some stock of the two so-far winners of the vaccine race: Moderna ( ) and BioNTech ( ). I am interested in the otherwise classical, Schumpeterian questions: to what extent are their respective business models predictors of their so-far success in the vaccine contest, and, seen from the opposite perspective, to what extent is that whole technological race of vaccines predictive of the business models which its contenders adopt?

I like approaching business models with the attitude of a mean detective. I assume that people usually lie, and it starts with lying to themselves, and that, consequently, those nicely rounded statements in annual reports about ‘efficient strategies’ and ‘ambitious goals’ are always bullshit to some extent. In the same spirit, I assume that I am prone to lying to myself. All in all, I like falling back onto hard numbers, in the first place. When I want to figure out someone’s business model with a minimum of preconceived ideas, I start with their balance sheet, to see their capital base and the way they finance it, and I continue with their cash flow. The latter helps me understand how they make money, at the end of the day, or how they fail to make any.

I take two points in time: the end of 2019, thus the starting blocks of the vaccine race, and then the latest reported period, namely the 3rd quarter of 2020. Landscape #1: end of 2019. BioNTech sports $885 388 000 in total assets, whilst Moderna has $1 589 422 000. Here, a pretty amazing detail pops up. I do a routine check of the proportion between fixed assets and total assets. It shows what percentage of the company’s capital base is immobilized, and thus supposed to bring steady capital returns, as opposed to the current assets: fluid, quick to exchange, and made for greasing the current working of the business. When I measure that coefficient ‘fixed assets divided by total assets’, it comes out as 29.8% for BioNTech, and 29% for Moderna. Coincidence? There is a lot of coincidence in those two companies. When I switch to Landscape #2: end of September 2020, it is pretty much the same. You can see it in the two tables below:
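As a quick sanity check, this is the whole computation in a few lines of Python. The fixed-asset figures here are hypothetical, back-computed from the coefficients quoted above rather than copied from the actual reports:

```python
# A minimal sketch of the 'fixed assets / total assets' check.
# The fixed-asset values below are HYPOTHETICAL, implied by the
# quoted ratios, not taken from the companies' filings.

def immobilisation_ratio(fixed_assets: float, total_assets: float) -> float:
    """Share of the capital base that is immobilized."""
    return fixed_assets / total_assets

biontech_total = 885_388_000
moderna_total = 1_589_422_000

# hypothetical fixed-asset values implied by the quoted coefficients
biontech_fixed = 0.298 * biontech_total
moderna_fixed = 0.290 * moderna_total

print(round(immobilisation_ratio(biontech_fixed, biontech_total), 3))  # 0.298
print(round(immobilisation_ratio(moderna_fixed, moderna_total), 3))   # 0.29
```

Nothing deep here; the point is that the check is one division per company, cheap enough to run routinely on every balance sheet you read.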

As you look at those numbers, they sort of collide with the common image of biotech companies in sci-fi movies. In movies, we see huge labs, like 10 storeys underground, with caged animals inside, etc. In real life, biotech is cash, most of all. Biotech companies are like big wallets camped next to some useful science. Direct investment in biotech very largely means depositing one’s cash on the bank account run by the biotech company.

After studying the active side of those two balance sheets, i.e. in BioNTech and in Moderna, I shift my focus to the passive side. I want to know how exactly people put cash in those businesses. I can see that most of it comes in the form of additional paid-in equity, which is an interesting thing for publicly listed companies. In the case of Moderna, the bulk of that addition to equity comes through a mechanism called ‘vesting of restricted common stock’. Although their financial report does not specify how exactly that vesting takes place, the generic category corresponds to operations where people close to the company – employees or close collaborators, anyway a closed private circle – buy stock of the company in a restricted issuance. With BioNTech, it is slightly different. Most of the proceeds from the public issuance of common stock are booked as reserve capital, distinct from share capital, and on top of that they seem to be running, similarly to Moderna, transactions of vesting restricted stock. Another important source of financing in both companies is short-term liabilities, mostly deferred transactional payments. Still, I have an intuitive impression of being surrounded by maybies (you know: ‘maybe I am correct, unless I am wrong’), and thus I decided to broaden my view. I take all seven biotech companies I currently have in my investment portfolio, which are, besides BioNTech and Moderna, five others, among them Soligenix ( ), Altimmune ( ), Novavax ( ), and VBI Vaccines ( ). In the two tables below, I try to summarize my essential observations about those seven business models.

Despite significant differences in the size of their respective capital bases, all seven businesses hold most of their capital in highly liquid financial form: cash or tradable financial securities. Their main source of financing is definitely the additional paid-in equity. Now, some readers could ask: how the hell is it possible for the additional paid-in equity to amount to more than the value of assets, like 193%? When a business accumulates a lot of operational losses, those have to be subtracted from the incumbent equity. Additions to equity serve to compensate for those losses. It seems to be a routine business practice in biotech.
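A toy numerical example, with entirely made-up figures, shows how paid-in capital can exceed total assets once accumulated losses eat into equity:

```python
# Toy balance sheet (made-up numbers, not from any actual report)
# showing how additional paid-in capital can exceed total assets
# when accumulated losses are large.

paid_in_capital = 1_930.0       # cumulative cash put in by shareholders
accumulated_deficit = -1_000.0  # cumulative operational losses
other_liabilities = 70.0

equity = paid_in_capital + accumulated_deficit  # 930.0 left after losses
total_assets = equity + other_liabilities       # 1000.0

ratio = paid_in_capital / total_assets
print(f"{ratio:.0%}")  # 193%
```

So a ‘193% of assets’ figure does not mean the books are broken; it means shareholders have poured in nearly twice the current asset base over the company’s lifetime, and roughly half of it has already been burnt in operations.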

Now, I am going to go slightly conspiracy-theoretical. Not much, just an inch. When I see businesses such as Soligenix, where cumulative losses, and the resulting additions to equity, amount to ten-odd times the value of assets, I am suspicious. I believe in the power of science, but I also believe that, facing a choice between using my equity to compensate so big a loss, on the one hand, and investing it into something less catastrophic financially, on the other hand, I would choose the latter. My point is that cases such as Soligenix smell of scam. There must be some non-reported financial interests in that business. Something is going on behind the stage, there.

In my previous update, titled ‘An odd vector in a comfortably Apple world’, I studied the cases of Tesla and Apple in order to understand better the phenomenon of outlier events in technological change. The short glance I had at those COVID-vaccine-involved biotechs gives me some more insight. Biotech companies are heavily scientific. This is scientific research shaped into a business structure. Most of the biotech business looks like an ever-lasting debut, long before breaking even. In textbooks of microeconomics and management, we can read that being able to run the business at a profit is a basic condition of calling it a business. In biotech, it is different. Biotechs are the true outliers, nascent at the very juncture of cutting-edge science and business strictly speaking. This is how outliers emerge: there is some cool science. I mean, really cool, the kind likely to change the face of the world. Those mRNA biotechnologies are likely to do so. The COVID vaccine is the first big attempt to transform mRNA therapies from experimental ones into massively distributed and highly standardized medicine. If this stuff works on a big scale, it is a new perspective. It allows fixing people, literally, instead of just curing diseases.

Anyway, there is that cool science, and it somehow attracts large amounts of cash. Here, a little digression from the theory of finance is due. Money and other liquid financial instruments can be seen as risk-absorbing bumpers. People accumulate large monetary balances in times and places when and where they perceive a lot of manageable risk, i.e. where they perceive something likely to disrupt the incumbent business, and they want to be on the right side of the disruption.

Cultural classes

Some of my readers asked me to explain how to get in control of one’s emotions when starting one’s adventure as a small investor in the stock market. The purely psychological side of self-control is something I leave to people smarter than me in that respect. What I do to have more control is the Wim Hof method ( ), and it works. You are welcome to try. I described my experience in that matter in the update titled ‘Something even more basic’. Still, there is another thing, namely, to start with a strategy of investment clever enough to allow emotional self-control. The strongest emotion I have been experiencing on my otherwise quite successful path of investment is the fear of loss. Yes, there are occasional bubbles of greed, but they are more like childish expectations to get the biggest toy in the neighbourhood. They are bubbles, which burst quickly and inconsequentially. The fear of loss, on the other hand, is there to stay.

This is what I advise doing. I mean, this is what I didn’t do at the very beginning, and for lack of it I made some big mistakes in my decisions. Only after some time (around two months) did I figure out the mental framework I am going to present. Start by picking a market. I started with a dual portfolio: like 50% in the Polish stock market, and 50% in the big foreign ones, such as the US, Germany, France, etc. Define the industries you want to invest in, like biotech, IT, renewable energies. Whatever: pick something. Study the stock prices in those industries. Pay particular attention to the observed losses, i.e. the observed magnitude of depreciation in those stocks. Figure out the average possible loss, and the maximum one. Now you have an idea of how much you can lose, in percentage terms. Quantitative techniques such as mean-reversion or extrapolation of past changes can help. You can consult my update titled ‘What is my take on these four: Bitcoin, Ethereum, Steem, and Golem?’ to see the general drift.
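The loss-measuring step can be sketched in a few lines of Python, run here on a made-up price series; you would feed it real quotes from the industry you study:

```python
# Sketch of the 'figure out the average and maximum possible loss' step.
# The price series below is invented for illustration.

def drawdowns(prices):
    """Percentage decline of each price from the running peak so far."""
    peak = prices[0]
    out = []
    for p in prices:
        peak = max(peak, p)
        out.append((p - peak) / peak)
    return out

prices = [100, 104, 98, 91, 95, 110, 102, 99, 118]
dd = drawdowns(prices)

max_loss = min(dd)  # deepest observed drawdown
negatives = [d for d in dd if d < 0]
avg_loss = sum(negatives) / max(1, len(negatives))

print(f"max loss: {max_loss:.1%}, average loss: {avg_loss:.1%}")
# max loss: -12.5%, average loss: -8.8%
```

The numbers themselves matter less than the habit: before buying anything, you know the typical and the worst historical dip in that corner of the market, so a 9% slide does not feel like the end of the world.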

The next step is to accept the occurrence of losses. You need to acknowledge very openly the following: you will lose money on some of your investment positions, inevitably. This is why you build a portfolio of many investment positions. All investors lose money on parts of their portfolio. The trick is to balance losses with even greater gains. You will be experimenting, and some of those experiments will be successful, whilst others will be failures. When you learn investment, you fail a lot. The losses you incur when learning, are the cost of your learning.

My price of learning was around €600, and then I bounced back and compensated it with a large surplus. If I take those €600 and compare it to the cost of taking an investment course online, e.g. with Coursera, I think I made a good deal.

Never invest all your money in the stock market. My method is to take some 30% of my monthly income and invest it, month after month, patiently and rhythmically, by instalments. For you, it can be 10% or 50%, which depends on what exactly your personal budget looks like. Invest just the amount you feel you can afford exposing to losses. Nail down this amount honestly. My experience is that big gains in the stock market are always the outcome of many consecutive steps, with experimentation and the cumulative learning derived therefrom.
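The instalment scheme itself is trivial arithmetic; here is a sketch with a hypothetical income, just to make the rhythm explicit:

```python
# Rhythmical investing by instalments: a fixed share of a HYPOTHETICAL
# monthly income goes into the portfolio each month, never the whole budget.

monthly_income = 3_000   # hypothetical, in your currency of choice
invest_percent = 30      # pick your own: 10, 30, 50...

per_month = monthly_income * invest_percent // 100
months = 12
total_invested = per_month * months

print(per_month, total_invested)  # 900 10800
```

The point of the rhythm is that your exposure grows in small, affordable steps, and every step is also a round of learning before the next one.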

General remark: you are much calmer when you know what you’re doing. Look at the fundamental trends and factors. Look beyond stock prices. Try to understand what is happening in the real business you are buying and selling the stock of. That gives perspective and allows more rational decisions.  

That would be it, as regards investment. You are welcome to ask questions. Now, I shift my topic radically. I return to the painful and laborious process of writing my book about collective intelligence. I feel like shaking things off a bit. I feel I need a kick in the ass. With the pandemic around and social contact scarce, I need to be the one who kicks my own ass.

I am running myself through a series of typical questions asked by a publisher. Those questions fall into two broad categories: interest for me, as compared to interest for readers. I start with the external point of view: why should anyone bother to read what I am going to write? I guess that I will have two groups of readers: social scientists on the one hand, and plain folks on the other hand. The latter might very well have a deeper insight than the former, only the former like being addressed with reverence. I know something about it: I am a scientist.

Now comes the harsh truth: I don’t know why other people should bother about my writing. Honestly. I don’t know. I have been sort of carried away and in the stream of my own blogging and research, and that question comes as alien to the line of logic I have been developing for months. I need to look at my own writing and thinking from outside, so as to adopt something like a fake observer’s perspective. I have to ask myself what is really interesting in my writing.

I think it is going to be a case of assembling a coherent whole out of sparse pieces. I guess I can enumerate, once again, the main points of interest I find in my research on collective intelligence and investigate whether at all and under what conditions the same points are likely to be interesting for other people.

Here I go. There are two, sort of primary and foundational, points. For one, I started my whole research on collective intelligence when I experienced the neophyte’s fascination with Artificial Intelligence, i.e. when I discovered that some specific sequences of equations can really figure stuff out just by experimenting with themselves. I did both a review of literature and some empirical testing of my own, and I discovered that artificial neural networks can be, and are, used as more advanced counterparts to classical quantitative models. In social sciences, quantitative models are about the things that human societies do. If an artificial form of intelligence can be representative of what happens in societies, I can hypothesise that said societies are forms of intelligence too, just collective forms.

I am trying to remember what triggered in me that ‘Aha!’ moment, when I started seriously hypothesising about collective intelligence. I think it was when I was casually listening to an online lecture on AI, streamed from the Massachusetts Institute of Technology. It was about programming AI in robots, in order to make them able to learn. I remember one ‘Aha!’ sentence: ‘With a given set of empirical data supplied for training, robots become more proficient at completing some specific tasks rather than others’. At the time, I was working on an article for the journal ‘Energy’. I was struggling. I had an empirical dataset on energy efficiency in selected countries (i.e. on the average amount of real output per unit of energy consumption), combined with some other variables. After weeks and weeks of data mining, I had a gut feeling that some important meaning is hidden in that data, only I wasn’t able to put my finger precisely on it.

That MIT-coined sentence on robots triggered that crazy question in me. What if I return to the old and apparently obsolete claim of the utilitarian school in social sciences, and assume that all those societies I have empirical data about are something like one big organism, with different variables being just different measurable manifestations of its activity?

Why was that question crazy? Utilitarianism is always contentious, as it is frequently used to claim that small local injustice can be justified by bringing a greater common good for the whole society. Many scholars have advocated for that claim, and probably even more of them have advocated against. I am essentially against. Injustice is injustice, whatever greater good you bring about to justify it. Besides, being born and raised in a communist country, I am viscerally vigilant to people who wield the argument of ‘greater good’.

Yet, the fundamental assumptions of utilitarianism can be used under a different angle. Social systems are essentially collective, and energy systems in a society are just as collective. Talking about the energy efficiency of a society makes any sense at all only when we talk about the entire, intricate system of using energy. About 30% of the energy that we use is used in transport, and transport is something we do among one another, from one person to another. Stands to reason, doesn’t it?

Studying my dataset as a complex manifestation of activity in a big complex organism begs the basic question: what do organisms do, like in their daily life? They adapt, I thought. They constantly adjust to their environment. I mean, they do if they want to survive. If I settle for studying my dataset as informative about a complex social organism, what does this organism adapt to? It could be adapting to a gazillion factors, including some invisible cosmic radiation (the visible one is called ‘sunlight’). Still, keeping in mind that sentence about robots, adaptation can be considered as the actual optimization of some specific traits. In my dataset, I have a range of variables. Each variable can hypothetically be considered as informative about a task which the collective social robot strives to excel at.

From there, it was relatively simple. At the time (some 16 months ago), I was already familiar with the logical structure of a perceptron, i.e. a very basic form of artificial neural network. I didn’t know – and I still don’t – how to program effectively the algorithm of a perceptron, but I knew how to make a perceptron in Excel. In a perceptron, I take one variable from my dataset as output, the remaining ones are instrumental as input, and I make my perceptron minimize the error on estimating the output. With that simple strategy in mind, I can make as many alternative perceptrons out of my dataset as I have variables in the latter, and that is exactly what I did with my data on energy efficiency. Out of sheer curiosity, I wanted to check how similar the datasets transformed by the perceptron were to the source empirical data. I computed Euclidean distances between the vectors of expected mean values, in all the datasets I had. I expected something foggy and pretty random, and once again, life went against my expectations. What I found was a clear pattern. The perceptron pegged on optimizing the coefficient of fixed capital assets per one domestic patent application was much more similar to the source dataset than any other transformation.
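For readers who prefer code to Excel, here is a minimal, pure-Python sketch of that experiment: each variable takes its turn as the output of a one-neuron perceptron trained on the remaining variables, and the fitted values are then compared, by Euclidean distance, with the empirical ones. The tiny dataset is invented for illustration; the real study used country-level data on energy efficiency:

```python
# Minimal sketch of the 'one perceptron per variable' experiment.
# The dataset below is MADE UP and normalised to [0, 1] for illustration.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_perceptron(inputs, target, epochs=2000, lr=0.5):
    """One sigmoid neuron trained by stochastic gradient descent;
    returns its fitted values for each observation."""
    n = len(inputs[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, target):
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            grad = (y - t) * y * (1 - y)   # gradient of squared error
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in inputs]

# Rows = observations (e.g. countries), columns = variables.
data = [
    [0.2, 0.7, 0.1],
    [0.4, 0.6, 0.3],
    [0.6, 0.4, 0.5],
    [0.8, 0.3, 0.9],
]

n_vars = len(data[0])
for out_idx in range(n_vars):
    target = [row[out_idx] for row in data]
    inputs = [[v for j, v in enumerate(row) if j != out_idx] for row in data]
    fitted = train_perceptron(inputs, target)
    # Euclidean distance between fitted and empirical values of the output
    dist = math.sqrt(sum((f - t) ** 2 for f, t in zip(fitted, target)))
    print(f"variable {out_idx} as output -> distance {dist:.3f}")
```

The variable whose perceptron lands closest to the empirical data is, on this reading, the trait the ‘collective social robot’ is best at optimizing; in my study that turned out to be fixed capital assets per domestic patent application.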

In other words, I created an intelligent computation, and I made it optimize different variables in my dataset, and it turned out that, when optimizing that specific variable, i.e. the coefficient of fixed capital assets per one domestic patent application, that computation was the most faithful representation of the real empirical data.

This is when I started wrapping my mind around the idea that artificial neural networks can be more than just tools for optimizing quantitative models; they can be simulators of social reality. If that intuition of mine is true, societies can be studied as forms of intelligence, and, as they are, precisely, societies, we are talking about collective intelligence.

Much to my surprise, I am discovering a similar perspective in Steven Pinker’s book ‘How The Mind Works’ (W. W. Norton & Company, New York, London, Copyright 1997 by Steven Pinker, ISBN 0-393-04535-8). Professor Steven Pinker uses a perceptron as a representation of the human mind, and it seems to be a bloody accurate representation.

That makes me come back to the interest that readers could have in my book about collective intelligence, and I cannot help referring to still another book of another author: Nassim Nicholas Taleb’s ‘The Black Swan. The Impact of the Highly Improbable’ (2010, Penguin Books, ISBN 9780812973815). Speaking from an abundant experience of quantitative assessment of risk, Nassim Taleb criticizes most quantitative models used in finance and economics as pretty much useless in making reliable predictions. Those quantitative models are good solvers, and they are good at capturing correlations, but they suck at predicting things based on those correlations, he says.

My experience of investment in the stock market tells me that those mid-term waves of stock prices, which I so much like riding, are the product of dissonance rather than correlation. When a specific industry or a specific company suddenly starts behaving in an unexpected way, e.g. in the context of the pandemic, investors really pay attention. Correlations are boring. In the stock market, you make good money when you spot a Black Swan, not another white one. Here comes a nuance. I think that black swans happen unexpectedly from the point of view of quantitative predictions, yet they don’t come out of nowhere. There is always a process that leads to the emergence of a Black Swan. The trick is to spot it in time.

F**k, I need to focus. The interest of my book for the readers. Right. I think I can use the concept of collective intelligence as a pretext to discuss the logic of using quantitative models in social sciences in general. More specifically, I want to study the relation between correlations and orientations. I am going to use an example in order to make my point a bit more explicit, hopefully. In my preceding update, titled ‘Cool discovery’, I did my best, using my neophyte’s modest skills in programming, to translate the method of negotiation proposed in Chris Voss’s book ‘Never Split the Difference’ into a Python algorithm. Surprisingly for myself, I found two alternative ways of doing it: as a loop, on the one hand, and as a class, on the other hand. They differ greatly.

Now, I simulate a situation when all social life is a collection of negotiations between people who try to settle, over and over again, contentious issues arising from us being human and together. I assume that we are a collective intelligence of people who learn by negotiated interactions, i.e. by civilized management of conflictual issues. We form social games, and each game involves negotiations. It can be represented as a lot of these >>

… and a lot of those >>

In other words, we collectively negotiate by creating cultural classes – logical structures connecting names to facts – and inside those classes we ritualise looping behaviours.
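A hypothetical sketch of that idea in Python (not the actual code from the ‘Cool discovery’ update): a ‘cultural class’ connects names to facts, and a loop inside it ritualises repeated negotiation rounds.

```python
# Hypothetical illustration: a 'cultural class' binds names (issues) to
# facts (current positions), and a loop ritualises negotiation rounds.
# Names, methods, and numbers here are invented for the example.

class Negotiation:
    """A cultural class: issues mapped to currently agreed positions."""

    def __init__(self, issues):
        self.positions = dict(issues)   # name -> current fact

    def round(self, issue, concession):
        """One ritualised round: move a position by a concession."""
        self.positions[issue] += concession
        return self.positions[issue]

    def negotiate(self, issue, target, step, max_rounds=100):
        """Loop rounds until the position reaches the target."""
        rounds = 0
        while self.positions[issue] < target and rounds < max_rounds:
            self.round(issue, step)
            rounds += 1
        return rounds

deal = Negotiation({"price": 80.0})
rounds = deal.negotiate("price", target=100.0, step=5.0)
print(rounds, deal.positions["price"])  # 4 100.0
```

The class is the named logical structure; the `while` loop inside it is the ritualised, repeated behaviour. Social life, on this view, is many such classes instantiated over and over.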

Money being just money for the sake of it

I have been doing that research on the role of cities in our human civilization, and I remember the moment of first inspiration to go down this particular rabbit hole. It was the beginning of March 2020, when the first epidemic lockdown was imposed in my home country, Poland. I was cycling through the streets of Krakow, my city, from home to the campus of my university. I remember being floored at how dead – yes, literally dead – the city looked. That was the moment when I started perceiving cities as something almost alive. I started wondering how the pandemic would affect the mechanics of those quasi-living urban organisms.

Here is one aspect I want to discuss: restaurants. Most restaurants in Krakow have turned into takeouts. In the past, each restaurant had a catering part of the business, but it was mostly for special events, like conferences, weddings and whatnot. Catering was sort of a wholesale segment of the restaurant business, and the retail was, well, the table, the napkin, the waiter, that type of story. That retail part was supposed to be the main one. Catering was an addition to that basic business model, which entailed a few characteristic traits. When your essential business process takes place in a restaurant room with tables and guests sitting at them, the place is just as essential. The location, the size, the look, the relative accessibility: it all played a fundamental role. The rent for the place was among the most important fixed costs of a restaurant. When setting up the business, one of the most important questions – and risk factors – was: “Will I be able to attract a sufficiently profuse clientele to this place, and to ramp up prices sufficiently high so as to pay the rent for the place and still have a satisfactory profit?”. It was like a functional loop: a better place (location, look) meant a more select clientele and higher prices, which were needed to pay the high rent, etc.

As I travelled to other countries, and across my own country, I noticed many times that the attributes of the restaurant as a physical place were partly a substitute for the quality of food. I know a lot of places where the customers used to pretend that the food was excellent just because said food was so strange that it just didn’t do to call it crappy in taste. Those people pretended they enjoyed the food because the place was awesome. Awesomeness of the place, in turn, was largely based on the fact that many people enjoyed coming there; it was trendy, stylish, it was a good thing to show up there from time to time, just to show others you had something to show. That was another loop in the business model of restaurants: the peculiar, idiosyncratic, gravitational field between places and customers.

In that business model, quite substantial expenses, i.e. the rent, and the money spent on decorating and equipping the space for customers, were essentially sunk costs. The most important financial outlays you made to make the place competitive did not translate into any capital value in your assets. The only way to do such a translation was to buy the place instead of renting it. An advantageous, long-term lease was another option. In some cities, e.g. the big French ones, such as Paris, Lyon or Marseille, the market of places suitable for running restaurants, both legally and physically, used to be a special segment of the market of real estate, with its own special contracts, barriers to entry, etc.

As restaurants turn into takeouts, amidst epidemic restrictions, their business model changes. Food counts in the first place, and the place counts only to the extent of accessibility for takeout. Even if I order food from a very fancy restaurant, I pay for food, not for fanciness. When consumed at home, with the glittering reputation of the restaurant taken away from it, food suddenly tastes different. I consume it much more with my palate and much less with my ideas of what is trendy. Preparation and delivery of food becomes the essential business process. I think it facilitates new entries into the market of gastronomy. Yes, I know, restaurants are going bankrupt, and my take on it is that places are going bankrupt, but people stay. Chefs and cooks are still there. Human capital, until recently sharing importance 50/50 with the real-estate aspect of the business, definitely becomes the greatest asset of the restaurant sector as it focuses on takeout. Cooking skills, broadly speaking, including the ability to purchase ingredients of good quality, become primordial. Equipping a business-scale kitchen is not really rocket science, and, just as importantly, there is a market for second-hand equipment of that kind. The equipment of a kitchen, in a takeout-oriented restaurant, is much more of an asset than the decoration of a dining room. The rent you pay, or the market price of the whole place in the real-estate market, is much lower, too, as compared to classical restaurants.

What restaurant owners face amidst the pandemic is the necessity to switch quickly, on a very short notice of 1 – 2 weeks, between their classical business model, based on a classy place to receive customers, and the takeout business model, focused on the quality of food and the promptness of delivery. It is a zone of uncertainty more than a durable change, and this zone is associated with different cash flows and different assets. That, in turn, means measurable risk. Risk, in economics and in finance, is essentially an amount much more than a likelihood. We talk about risk when we are actually sure that some adverse events will happen, and we even know what the total amount of adversity to deal with is going to be; we just don’t know where exactly that adversity will hit and who exactly will have to deal with it.

There are two basic ways of responding to measurable risk: hedging and insurance. I can face risk by having some aces up my sleeve, i.e. by having some alternative assets, sort of fall-back ones, which assure me slightly softer a landing, should the s**t which I hedge against really happen. When I am at risk in my in-situ restaurant business, I can hedge towards my takeout business. With time, I can discover that I am so good at the logistics of delivery that it pays off to hedge towards a marketing platform for takeouts rather than one takeout business. There is an old saying that you shouldn’t put all your eggs in the same basket, and hedging is the perfect illustration thereof. I hedge in business by putting my resources in many different baskets.

On the other hand, I can face risk by sharing it with other people. I can make a business partnership with a few other folks. When I don’t really care who exactly those folks are, I can make a joint-stock company with tradable shares of participation in equity. I can issue derivative financial instruments pegged to the value of the assets which I perceive as risky. When I lend money to a business perceived as risky, I can demand that it be secured with tradable notes, AKA bills of exchange. All that is insurance, i.e. a scheme where I give away part of my cash flow in exchange for the guarantee that other people will share with me the burden of damage, if I come to consume my risks. The type of contract designated expressis verbis as ‘insurance’ is just one among many forms of insurance: I pay an insurance premium in exchange for the insurer’s guarantee to cover my damages. Restaurant owners can insure their epidemic-based risk by sharing it with someone else. With whom, and against what kind of premium on risk? Good question. I can see, like, a shade of the answer. During the pandemic, marketing platforms for gastronomy, such as Uber Eats, swell like balloons. These might be the insurers of the restaurant business. They capitalize on the customer base for takeout. As a matter of fact, they can almost own that customer base.

A group of my students, all from France, as if by accident, had an interesting business concept: a platform for ordering food from specific chefs. A list of well-credentialed chefs is available on the website. Each of them recommends a few flagship recipes of theirs. The customer picks the specific chef and their specific culinary chef-d’oeuvre. One more click, and the customer has that chef-d’oeuvre delivered to their doorstep. Interesting development. Pas si con que ça (‘not as dumb as it sounds’), as the French say.

Businesspeople have been using both hedging and insurance for centuries, to face various risks. When used systematically, those two schemes create two characteristic types of capitalistic institutions: financial markets and pooled funds. Spreading my capitalistic eggs across many baskets means that, over time, I need a way to switch quickly among baskets. Tradable financial instruments serve that purpose, and money is probably the most liquid and versatile among them. Yet, it is the least profitable one: flexibility and adaptability is the only gain that one can derive from holding large monetary balances. No interest rate, no participation in profits of any kind, no speculative gain on market value. Just adaptability. Sometimes, adaptability alone is reason enough to forego other gains. In the presence of a significant need for hedging risks, businesses hold abnormally large amounts of cash.

When people insure a lot – and we keep in mind the general meaning of insurance as described above – they tend to create large pooled funds of liquid financial assets, which stand at the ready to repair any breach in the hull of the market. Once again, we return to money and financial markets. Whilst abundant use of hedging as strategy for facing risk leads to hoarding money at the individual level, systematic application of insurance-type contracts favours pooling funds in joint ventures. Hedging and insurance sort of balance each other.

Those pieces of the puzzle sort of fall together into a pattern. As I have been doing my investment in the stock market, all over 2020, financial markets have seemed puffy with liquid capital, and that capital seems avid for some productive application. It is as if money itself was saying: ‘C’mon, guys. I know I’m liquid, and I can compensate risk, but I am more than that. Me being liquid and versatile makes me easily convertible into productive assets, so please, start converting. I’m bored with being just me, I mean with money being just money for the sake of it’.

It is important to restate the meaning

It is Christmas 2020, late in the morning. I am thinking, sort of deeply. It is a dysfunctional tradition to make, by the end of the year, resolutions for the coming year. Resolutions which we obviously don’t stick to long enough to see them bring anything substantial. Yet, it is a good thing to pass in review the whole passing year, distinguish my own f**k-ups from my valuable actions, and use it all as learning material for the coming year.

What I have been doing consistently for the past year is learning new stuff: investment in the stock market, distance teaching amidst epidemic restrictions, doing research on collective intelligence in human societies, managing research projects, programming, and training consistently while fasting. Finally, and sort of overarchingly, I have learnt the power of learning by solving specific problems and writing about myself mixing successes and failures as I am learning.

Yes, it is precisely the kind you can expect in what we tend to label as girls’ readings, sort of ‘My dear journal, here is what happened today…’. I keep my dear journal focused mostly on my, broadly speaking, professional development. Professional development combines with personal development, for me, though. I discovered that when I want to achieve some kind of professional success, be it academic or business, I need to add a few new arrows to my personal quiver.

Investing in the stock market and training while fasting are, I think, what I have had the most complete cycle of learning with. Strange combination? Indeed, a strange one, with a surprising common denominator: the capacity to control my emotions, to recognize my cognitive limitations, and to acknowledge the payoff from both. Financial decisions should be cold and calculated. Yes, they should, and sometimes they are, but here comes a big discovery of mine: when I start putting my own money into investment positions in the stock market, emotions flare in me so strongly that I experience something like tunnel vision. What looked like perfectly rational inference from numbers, just minutes ago, now suddenly looks like a jungle, with both game and tigers in it. The strongest emotion of all, at least in my case, is the fear of loss, and not the greed for gain. Yes, it goes against a common stereotype, and yet it is true. Moreover, I discovered that properly acknowledged and controlled, the fear of loss is a great emotional driver for good investment decisions, and, as a matter of fact, it is much better an emotional driver than avidity for gain. I know that I am well off when I keep the latter sort of weak and shy, expecting gains rather than longing for them, if you catch my drift.

Here comes the concept of good investment decisions. As this year 2020 comes to an end, my return on cash invested over the course of the year is 30% and some change. Not bad at all, compared to a bank deposit (+1,5%) or to sovereign bonds (+4,5% max). I am wrapping my mind around the second most fundamental question about my investment decisions this year – after, of course, the question about return on investment – and that second question is ontological: what have my investment decisions actually been? What has been their substance? The most general answer is: tolerable complexity with intuitive hedging and a pinch of greed. Complexity means that I have progressively passed from the otherwise naïve expectation of one perfect hit to a portfolio of investment positions. Thinking intuitively in terms of a portfolio has taught me an equally intuitive approach to hedging my risks. Now, when I open one investment position, I already think about another possible one, either to reinforce my glide on the wave crest I intend to ride, or to compensate the risks contingent to seeing my ass gliding off and down from said wave crest.

That portfolio thinking of mine happens in layers, sort of. I have a portfolio of industries, and that seems to be the basic structuring layer of my decisions. I think I can call myself a mid-term investor. I have learnt to spot and utilise mid-term trends of interest that investors in the stock market attach to particular industries. I noticed there are cyclical fashion seasons in the stock market, in that respect. There is a cyclically recurrent biotech season, due to the pandemic. There is just as cyclical a fashion for digital tech, and another one for renewable energies (photovoltaic, in particular). Inside the digital tech, there are smaller waves of popularity as regards the gaming business, others connected to FinTech etc.

Cyclicality means that prices of stock in those industries grow for some time, ranging, by my experience, from 2 to 13 weeks. Riding those waves means jumping on and off at the right moment. The right moment for jumping on comes as early as possible after the trend starts to ascend, and the right moment for jumping off comes just as early as possible after the trend shows signs of durable descent.

The ‘durable’ part is tricky, mind you. I saw many episodes, and during some of them I shamefully yielded to short-termist panic, when the trend dips just for a few days before rocketing up again. Those episodes show well what it means, in practical terms, to face ‘technical factors’. The stock market is like an ocean. There are spots of particular fertility, and big predators tend to flock just there. In the stock market, just as in the ocean, you have bloody big sharks swimming around, and you’d better hold on when they start feeding, ‘cause they feed just as real sharks do: they hit quickly, cause abundant bleeding, and then just wait until their prey bleeds out enough to be defenceless.

When I see, for example, a company like the German Biontech suddenly losing value in the stock market, whilst the very vaccine they ganged up with Pfizer to make is being distributed across the world, I am like: ‘Wait a minute! Why would the stock price of a super-successful, highly innovative business fall just at the moment when they are starting to consume the economic fruit of their innovation?’. The only explanation is that sharks are hunting. Your typical stock market shark hunts in a disgusting way, by eating, vomiting and then eating their vomit back with a surplus. It bites a big chunk of a given stock, chews it for a moment, spits it out quickly – which pushes the price down a bit – then eats back its own vomit of stock, with a tiny surplus acquired at the previously down-driven price, and then it repeats. Why wouldn’t it repeat, as long as the thing works?

My personal philosophy, which, unfortunately, sometimes I deviate from when my emotions prevail, is just to sit and wait until those big sharks end their feeding cycle. This is another useful thing to know about big predators in the stock market: they hunt similarly to big predators in nature. They have a feeding cycle. When they have killed and consumed a big prey, they rest, as they are both replete with eating and down on energy. They need to rebuild their capital base.      

My reading of the stock market is that those waves of financial interest in particular industries are based on expectations as for real business cycles going on out there. Of course, in the stock market, there is always the phenomenon of subsidiary interest: I invest in companies which I expect other investors to invest in, as well, and, consequently, whose stock price I expect to grow. Still, investors in the stock market are much more oriented on fundamental business cycles than non-financial people think. When I invest in the stock of a company, and I know for a fact that many other investors think the same, I expect that company to do something constructive with my trust. I want to see those CEOs take bold decisions as for real investment in technological assets. When they really do so, I stay with them, i.e. I hold that stock. This is why I keep holding the stock of Tesla even amidst episodes of wild swings in its price. I simply know Elon Musk will always come up with something which, for him, are business concepts, and, for common mortals, are science fiction. If, on the other hand, I see those CEOs just sitting and gleaning benefits from trading their preferential shares, I leave.

Here I connect to another thing I started to learn during 2020: managing research projects. At my university, I have been assigned this specific job, and I discovered something which I did not expect: there is more money than ideas, out there. There is, actually, plenty of capital available from different sources, to finance innovative science. The tricky part is to translate innovative ideas into an intelligible, communicable form, and then into projects able to convince people with money. The ‘translating’ part is surprisingly complex. I can see many sparse, sort of semi-autonomous ideas in different people, and I still struggle with putting those people together, into some sort of team, or, failing a team, into a network, and making them mix their respective ideas into one, big, articulate concept. I have been reading for years about managing R&D in corporate structures, about how complex and artful it is to manage R&D efficiently, and now, I am experiencing it in real life. An interesting aspect of that is the writing of preliminary contracts, the so-called ‘Non-Disclosure Agreements’ AKA NDAs, the signature of which is sort of a trigger for starting serious networking between different agents of an R&D project.

As I am wrapping my mind around those questions, I meditate over the words written by Joseph Schumpeter, in his Business Cycles: “Whenever a new production function has been set up successfully and the trade beholds the new thing done and its major problems solved, it becomes much easier for other people to do the same thing and even to improve upon it. In fact, they are driven to copying it if they can, and some people will do so forthwith. It should be observed that it becomes easier not only to do the same thing, but also to do similar things in similar lines—either subsidiary or competitive ones—while certain innovations, such as the steam engine, directly affect a wide variety of industries. This seems to offer perfectly simple and realistic interpretations of two outstanding facts of observation: First, that innovations do not remain isolated events, and are not evenly distributed in time, but that on the contrary they tend to cluster, to come about in bunches, simply because first some, and then most, firms follow in the wake of successful innovation; second, that innovations are not at any time distributed over the whole economic system at random, but tend to concentrate in certain sectors and their surroundings”. (Business Cycles, Chapter III HOW THE ECONOMIC SYSTEM GENERATES EVOLUTION, The Theory of Innovation). In the Spring, when the pandemic was deploying its wings for the first time, I had a strong feeling that medicine and biotechnology would be the name of the game in technological change for at least a few years to come. Now, as strange as it seems, I have a vivid confirmation of that in my work at the university. Conceptual balls which I receive, and which I do my best to play out further in the field, come almost exclusively from the faculty of medical sciences. Coincidence? Go figure…

I am developing along two other avenues: my research on cities and my learning of programming in Python. I have been doing research on cities as manifestations of collective intelligence, and I have been doing it for a while. See, for example, ‘Demographic anomalies – the puzzle of urban density’ or ‘The knowingly healthy people’. As I have been digging down this rabbit hole, I have created a database, which, for working purposes, I call ‘DU_DG’. DU_DG is a coefficient of relative density in population, which I came up with some day and which keeps puzzling me. Just to announce the colour, as we say in Poland when playing cards: ‘DU’ stands for the density of urban population, and ‘DG’ is the general density of population. The ‘DU_DG’ coefficient is a ratio of these two, namely DU/DG, or, in other words, it is the density of urban population denominated in the units of general density in population. In still other words, if we take the density of population as a fundamental metric of human social structures, the DU_DG coefficient tells how much denser urban population is, as compared to the mean density, rural settlements included.

I want to rework through my DU_DG database in order both to practice my programming skills, and to reassess the main axes of research on the collective intelligence of cities. I open JupyterLab from my Anaconda panel, and I create a new Notebook with Python 3 as its kernel. I prepare my dataset. Just in case, I make two versions: one in Excel, another one in CSV. I replace decimal commas with decimal points; I know by experience that Python has issues with commas. In human lingo, a comma is a short pause for taking like half a breath before we continue uttering the rest of the sentence. From there, we took the comma into maths, as a decimal separator. In Python, as in finance, the decimal separator is the point, and the comma is a separator of items.
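As a side note, should someone want to skip the find-and-replace, pandas can digest decimal commas directly at the reading stage. A minimal sketch, with a made-up two-row sample standing in for my CSV file (the column names and values are invented for illustration):

```python
import io

import pandas as pd

# A made-up two-row sample in the European format described above:
# semicolon-separated fields, decimal commas instead of decimal points.
raw = "Country;Year;DU_DG\nPoland;2018;19,3\nFrance;2018;12,84\n"

# The 'decimal' argument tells pandas to treat the comma as a decimal
# separator, which spares us the find-and-replace in the source file.
df = pd.read_csv(io.StringIO(raw), sep=";", decimal=",")

print(df["DU_DG"].dtype)  # float64 - the commas were parsed as decimals
```

With a real file, the same call would look like pd.read_csv('my_file.csv', sep=';', decimal=','), with the separator adjusted to whatever the file actually uses.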

Anyway, I have that notebook in JupyterLab, and I start by piling up what I think I will need in terms of libraries:

>> import numpy as np

>> import pandas as pd

>> import os

>> import math

I place my database in the root directory of my user profile, which is, by default, the working directory of Anaconda, and I check if my database is visible for Python:

>> os.listdir()

It is there, in both versions, Excel and CSV. I start with reading from Excel:

>> DU_DG_Excel = pd.DataFrame(pd.read_excel('Dataset For Perceptron.xlsx', header=0))

I check with ‘DU_DG_Excel.info()’. I get:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1155 entries, 0 to 1154
Data columns (total 10 columns):
 #   Column                                        Non-Null Count  Dtype
---  ------                                        --------------  -----
 0   Country                                       1155 non-null   object
 1   Year                                          1155 non-null   int64
 2   DU_DG                                         1155 non-null   float64
 3   Population                                    1155 non-null   int64
 4   GDP (constant 2010 US$)                       1042 non-null   float64
 5   Broad money (% of GDP)                        1006 non-null   float64
 6   urban population absolute                     1155 non-null   float64
 7   Energy use (kg of oil equivalent per capita)  985 non-null    float64
 8   agricultural land km2                         1124 non-null   float64
 9   Cereal yield (kg per hectare)                 1124 non-null   float64
dtypes: float64(7), int64(2), object(1)
memory usage: 90.4+ KB

Cool. Exactly what I wanted. Now, if I want to use this database as a simulator of collective intelligence in human societies, I need to assume that each separate ‘country <> year’ observation is a distinct local instance of an overarching intelligent structure. My so-far experience with programming opens up on a range of actions that structure is supposed to perform. It is supposed to differentiate itself into the desired outcomes, on the one hand, and, on the other, the instrumental epistatic traits manipulated and adjusted in order to achieve those outcomes.
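In code, that differentiation boils down to splitting each ‘country <> year’ record into one outcome column and the remaining columns of manipulable traits. A minimal sketch, on a made-up miniature of the dataset; picking DU_DG as the desired outcome is my arbitrary choice here, just for illustration:

```python
import pandas as pd

# A made-up miniature of the dataset: each row is one 'country <> year'
# observation, i.e. one local instance of the intelligent structure.
data = pd.DataFrame({
    "Country": ["Poland", "Poland", "France", "France"],
    "Year": [2017, 2018, 2017, 2018],
    "DU_DG": [19.0, 19.3, 12.5, 12.84],
    "Population": [38_000_000, 37_900_000, 66_900_000, 67_000_000],
})

# The structure differentiates itself into the desired outcome...
outcome = data["DU_DG"]

# ...and the instrumental traits adjusted in order to achieve it.
traits = data.drop(columns=["Country", "Year", "DU_DG"])

print(outcome.shape, traits.shape)  # (4,) (4, 1)
```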

As I pass in review my past research on the topic, a few big manifestations of collective intelligence in cities come to my mind. Creation and development of cities as purposeful demographic anomalies is the first manifestation. This is an otherwise old problem in economics. Basically, people, and the resources they use, should be spread evenly over the territory those people occupy, and yet they aren’t. Even with a correction taken for physical conditions, such as mountains or deserts, we tend to form demographic anomalies on the landmass of Earth. Those anomalies have one obvious outcome, i.e. the delicate balance between urban land and agricultural land, which is a balance between dense agglomerations generating new social roles due to abundant social interactions, on the one hand, and the local food base for people endorsing those roles, on the other. The actual difference between cities and the surrounding countryside, in terms of social density, is very idiosyncratic across the globe and seems to be another aspect of intelligent collective adaptation.

Mankind is becoming more and more urbanized, i.e. a consistently growing percentage of people live in cities (World Bank 1[1]). In 2007 – 2008, the coefficient of urbanization topped 50% and has kept progressing since then. As there are more and more of us, humans, on the planet, we concentrate more and more in urban areas. That process defies preconceived ideas about land use. A commonly used narrative is that cities keep growing out into their once-non-urban surroundings, which is frequently confirmed by anecdotal, local evidence of particular cities effectively sprawling into the neighbouring rural land. Still, as data based on satellite imagery is brought up, and as total urban land area on Earth is measured as the total surface of peculiar agglomerations of man-made structures and night-time lights, that total area seems to be stationary, or, at least, to have been stationary for the last 30 years (World Bank 2[2]). The geographical distribution of urban land over the entire land mass of Earth does change, yet the total seems to be pretty constant. In parallel, the total surface of agricultural land on Earth has been growing, although at a pace far from steady and predictable (World Bank 3[3]).

There is a theory implied in the above-cited methodology of measuring urban land based on satellite imagery. Cities can be seen as demographic anomalies with a social purpose, just as Fernand Braudel used to state it (Braudel 1985[4]): ‘Towns are like electric transformers. They increase tension, accelerate the rhythm of exchange and constantly recharge human life. […]. Towns, cities, are turning-points, watersheds of human history. […]. The town […] is a demographic anomaly’. The basic theoretical thread of this article consists in viewing cities as complex technologies, for one, and in studying their transformations as a case of technological change. Logically, this is a case of technological change occurring by agglomeration and recombination. Cities can be studied as demographic anomalies with the specific purpose to accommodate a growing population with just as expanding a catalogue of new social roles, possible to structure into non-violent hierarchies. That path of thinking is present, for example, in the now classical work by Arnold Toynbee (Toynbee 1946[5]), and in the even more classical take by Adam Smith (Smith 1763[6]). Cities can literally work as factories of new social roles due to intense social interactions. The greater the density of population, the greater the likelihood of both new agglomerations of technologies being built, and new, adjacent social roles emerging. A good example of that special urban function is the interaction inside age groups. Historically, cities have allowed much more abundant interactions among young people (under the age of 25) than rural environments have. That, in turn, favours the emergence of social roles based on the typically adolescent, high appetite for risk and immediate rewards (see for example: Steinberg 2008[7]).
Recent developments in neuroscience, on the other hand, allow assuming that abundant social interactions in the urban environment have a deep impact on the neuroplastic change in our brains, and even on the phenotypical expression of human DNA (Ehninger et al. 2008[8]; Bavelier et al. 2010[9]; Day & Sweatt 2011[10]; Sweatt 2013[11]).

At the bottom line of all those theoretical perspectives, cities are quantitatively different from the countryside by their abnormal density of population. Throughout this article, the acronymic symbol [DU/DG] is used to designate the density of urban population denominated in the units of (divided by) the general density of population, and it is computed by combining the above-cited coefficient of urbanization (World Bank 1) with the headcount of population (World Bank 4[12]), as well as with the surface of urban land (World Bank 2). The general density of population is taken straight from official statistics (World Bank 5[13]).
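The arithmetic behind that combination can be sketched in a few lines. All four input values below are placeholders I made up; they merely stand in for the World Bank series named above:

```python
# A back-of-envelope sketch of the [DU/DG] arithmetic described above.
# All four inputs are invented placeholders for the World Bank series.
population = 38_000_000      # total headcount (World Bank 4)
urbanization = 0.60          # share of population living in cities (World Bank 1)
urban_land_km2 = 22_000.0    # satellite-based urban land area (World Bank 2)
total_land_km2 = 306_000.0   # total land area behind the general density (World Bank 5)

DU = (population * urbanization) / urban_land_km2  # density of urban population
DG = population / total_land_km2                   # general density of population
DU_DG = DU / DG                                    # urban density in units of general density

print(round(DU_DG, 2))  # 8.35
```

Note that the headcount of population cancels out: [DU/DG] reduces to the coefficient of urbanization multiplied by the ratio of total land to urban land.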

The [DU/DG] coefficient stays in the theoretical perspective of cities as demographic anomalies with a purpose, and it can be considered as a measure of social difference between cities and the countryside. It displays intriguing quantitative properties. Whilst growing steadily over time at the globally aggregate level, from 11,9 in 1961 to 19,3 in 2018, it displays significant disparity across space. Such countries as Mauritania or Somalia display a [DU/DG] > 600, whilst the United Kingdom or Switzerland are barely above [DU/DG] = 3. In the 13 smallest national entities in the world, such as Tonga, Puerto Rico or Grenada, [DU/DG] falls below 1. In other words, in those ultra-small national structures, the method of assessing urban space by satellite-imagery-based agglomeration of night-time lights fails utterly. These communities display a peculiar, categorically idiosyncratic spatial pattern of settlement. The cross-sectional variability of [DU/DG] (i.e. its standard deviation across space divided by its cross-sectional mean value) reaches 8,62, and yet some 70% of mankind lives in countries ranging across the 12,84 ≤ [DU/DG] ≤ 23,5 interval.
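The cross-sectional variability mentioned above is simply a coefficient of variation. A sketch of the computation, on invented country values (the real dataset yields about 8,62):

```python
import numpy as np

# Invented [DU/DG] values for a handful of countries, one year.
du_dg_across_countries = np.array([600.0, 3.0, 1.0, 19.3, 12.84, 23.5])

# Standard deviation across space divided by the cross-sectional mean.
variability = du_dg_across_countries.std() / du_dg_across_countries.mean()

print(round(variability, 2))
```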

Correlations which the [DU/DG] coefficient displays at the globally aggregate level (i.e. at the scale of the whole planet) are even more puzzling. When benchmarked against the global real output in constant units of value (World Bank 6[14]), the time series of aggregate, global [DU/DG] displays a Pearson correlation of r = 0,9967. On the other hand, the same type of Pearson correlation with the relative supply of money to the global economy (World Bank 7[15]) yields r = 0,9761. As the [DU/DG] coefficient is supposed to represent the relative social difference between cities and the countryside, a look at the latter is beneficial. The [DU/DG] Pearson-correlates with the global area of agricultural land (World Bank 8[16]) at r = 0,9271, and with the average, global yield of cereals, in kgs per hectare (World Bank 9[17]), at r = 0,9858. Those strong correlations of the [DU/DG] coefficient with metrics pertinent to the global food base match its correlation with the energy base. When Pearson-correlated with the global average consumption of energy per capita (World Bank 10[18]), [DU/DG] proves significantly covariant, at r = 0,9585. All that kept in mind, it is probably not that much of a surprise to see the global aggregate [DU/DG] Pearson-correlated with the global headcount of population (World Bank 11[19]) at r = 0,9954.
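Technically, each of those r values is a plain Pearson correlation between two aggregate time series. A sketch with numpy, on two short, invented series standing in for the global [DU/DG] and the global real output:

```python
import numpy as np

# Two short, invented time series: both trend upward, so the Pearson
# correlation comes out strongly positive, as with the real aggregates.
du_dg_global = np.array([11.9, 13.0, 14.5, 16.2, 17.8, 19.3])
real_output = np.array([11.0, 18.0, 27.0, 38.0, 52.0, 67.0])  # arbitrary units

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
# element is the Pearson r between the two series.
r = np.corrcoef(du_dg_global, real_output)[0, 1]

print(round(r, 4))
```

With the full dataset loaded as above, something like DU_DG_Excel.corr() would yield the whole matrix of pairwise Pearson correlations in one go.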

It is important to restate the meaning of the [DU/DG] coefficient. This is essentially a metric of density in population, and density has abundant ramifications, so to say. The more people live per 1 km2, the more social interactions occur on the same square kilometre. Social interactions mean a lot. They mean learning by civilized rivalry. They mean transactions and markets as well. The greater the density of population, the greater the probability of new skills emerging, which possibly translates into new social roles, new types of business and new technologies. When two types of human settlements coexist, displaying very different densities of population, i.e. type A being many times denser than type B, type A is like a factory of patterns (new social roles and new markets), whilst type B is the supplier of raw resources. The progressively growing global average [DU/DG] means that, at the scale of the human civilization, this polarity of social functions becomes more accentuated.

The [DU/DG] coefficient bears strong marks of a statistical stunt. It is based on the truly risky assumption, advanced implicitly through the World Bank’s data, that the total surface of urban land on Earth has remained constant, at least over the last 3 decades. Moreover, denominating the density of urban population in units of general density of population was purely intuitive on the author’s part, and, as a matter of fact, other meaningful denominators can easily come to one’s mind. Still, with all that wobbly theoretical foundation, the [DU/DG] coefficient seems to inform about a significant, structural aspect of human societies. The Pearson correlations, which the global aggregate of that coefficient yields with the fundamental metrics of the global economy, are of an almost uncanny strength for social sciences, especially with respect to the strong cross-sectional disparity in the [DU/DG].

The relative social difference between cities and the countryside, measurable with the gauge of the [DU/DG] coefficient, seems to be a strongly idiosyncratic adaptative mechanism in human societies, and this mechanism seems to be correlated with quantitative growth in population, real output, production of food, and the consumption of energy. That could be a manifestation of tacit coordination, where a growing human population triggers an increasing pace of emergence in new social roles by stimulating urban density. As regards energy, the global correlation between the increasing [DU/DG] coefficient and the average consumption of energy per capita interestingly connects with a stream of research which postulates intelligent collective adaptation of human societies to the existing energy base, including intelligent spatial re-allocation of energy production and consumption (Leonard, Robertson 1997[20]; Robson, Wood 2008[21]; Russon 2010[22]; Wasniewski 2017[23], 2020[24]; Andreoni 2017[25]; Heun et al. 2018[26]; Velasco-Fernández et al 2018[27]).

It is interesting to investigate how smart human societies are in shaping their idiosyncratic social difference between cities and the countryside. This specific path of research is being pursued, further in this article, through the verification and exploration of the following working hypothesis: ‘The collective intelligence of human societies optimizes social interactions in the view of maximizing the absorption of energy from the environment’.

[1] World Bank 1:

[2] World Bank 2:

[3] World Bank 3:

[4] Braudel, F. (1985). Civilisation and Capitalism 15th and 18th Century–Vol. I: The Structures of Everyday Life, Translated by S. Reynolds, Collins, London, pp. 479 – 482

[5] Royal Institute of International Affairs, Somervell, D. C., & Toynbee, A. (1946). A Study of History. By Arnold J. Toynbee… Abridgement of Volumes I-VI (VII-X.) by DC Somervell. Oxford University Press., Section 3: The Growths of Civilizations, Chapter X.

[6] Smith, A. (1763-1896). Lectures on justice, police, revenue and arms. Delivered in the University of Glasgow in 1763, published by Clarendon Press in 1896, pp. 9 – 20

[7] Steinberg, L. (2008). A social neuroscience perspective on adolescent risk-taking. Developmental review, 28(1), 78-106.

[8] Ehninger, D., Li, W., Fox, K., Stryker, M. P., & Silva, A. J. (2008). Reversing neurodevelopmental disorders in adults. Neuron, 60(6), 950-960.

[9] Bavelier, D., Levi, D. M., Li, R. W., Dan, Y., & Hensch, T. K. (2010). Removing brakes on adult brain plasticity: from molecular to behavioral interventions. Journal of Neuroscience, 30(45), 14964-14971.

[10] Day, J. J., & Sweatt, J. D. (2011). Epigenetic mechanisms in cognition. Neuron, 70(5), 813-829.

[11] Sweatt, J. D. (2013). The emerging field of neuroepigenetics. Neuron, 80(3), 624-632.

[12] World Bank 4:

[13] World Bank 5:

[14] World Bank 6:

[15] World Bank 7:

[16] World Bank 8:

[17] World Bank 9:

[18] World Bank 10:

[19] World Bank 11:

[20] Leonard, W.R., and Robertson, M.L. (1997). Comparative primate energetics and hominoid evolution. Am. J. Phys. Anthropol. 102, 265–281.

[21] Robson, S.L., and Wood, B. (2008). Hominin life history: reconstruction and evolution. J. Anat. 212, 394–425

[22] Russon, A. E. (2010). Life history: the energy-efficient orangutan. Current Biology, 20(22), pp. 981- 983.

[23] Waśniewski, K. (2017). Technological change as intelligent, energy-maximizing adaptation. Energy-Maximizing Adaptation (August 30, 2017).

[24] Wasniewski, K. (2020). Energy efficiency as manifestation of collective intelligence in human societies. Energy, 191, 116500.

[25] Andreoni, V. (2017). Energy Metabolism of 28 World Countries: A Multi-scale Integrated Analysis. Ecological Economics, 142, 56-69

[26] Heun, M. K., Owen, A., & Brockway, P. E. (2018). A physical supply-use table framework for energy analysis on the energy conversion chain. Applied Energy, 226, 1134-1162

[27] Velasco-Fernández, R., Giampietro, M., & Bukkens, S. G. (2018). Analyzing the energy performance of manufacturing across levels using the end-use matrix. Energy, 161, 559-572

Checkpoint for business

I am changing the path of my writing, ‘cause real life knocks at my door, and it goes ‘Hey, scientist, you economist, right? Good, ‘cause there is some good stuff, I mean, ideas for business. That’s economics, right? Just sort of real stuff, OK?’. Sure. I can go with real things, but first, I explain. At my university, I have recently taken on the job of coordinating research projects and finding some financing for them. One of the first things I did, right after November 1st, was to send around a reminder that we had 12 days left to apply, with the Ministry of Science and Higher Education, for relatively small grants, in a call titled ‘Students make innovation’. Honestly, I was expecting to have 1 – 2 applications max, in response. Yet, life can make surprises. The feedback came as 7 innovative ideas, and 5 of them look like good material for business concepts and for serious development. I am taking on giving them a first prod, in terms of business planning. Interestingly, those ideas are all related to medical technologies, thus something I have been both investing a lot in, during 2020, and thinking a lot about, as a possible path of substantial technological change.

I am progressively wrapping my mind around the ideas and projects formulated by those students, and, walking down the same intellectual avenue, I am making sense of making money on and around science. I am fully appreciating the value of real-life experience. I have been doing research and writing about technological change for years. Until recently, I had that strange sort of logical oxymoron in my mind, where I had the impression of both understanding technological change, and missing a fundamental aspect of it. Now, I think I start to understand that missing part: it is the microeconomic mechanism of innovation.

I have collected those 5 ideas from ambitious students at Faculty of Medicine, in my university:

>> Idea 1: An AI-based app, with a chatbot, which facilitates early diagnosis of cardio-vascular diseases

>> Idea 2: Similar thing, i.e. a mobile app, but oriented on early diagnosis and monitoring of urinary incontinence in women.

>> Idea 3: Technology for early diagnosis of Parkinson’s disease, through the observation of speech and motor disturbance.

>> Idea 4: Intelligent cloud to store, study and possibly find something smart about two types of data: basic health data (blood-work etc.), and environmental factors (pollution, climate etc.).

>> Idea 5: Something similar to Idea 4, i.e. an intelligent cloud with medical edge, but oriented on storing and studying data from large cohorts of patients infected with Sars-Cov-2. 

As I look at those 5 ideas, a surprisingly simple and basic association of ideas comes to my mind: hierarchy of interest, and the role of overarching technologies. It is something I have never thought seriously about: when we face many alternative ideas for new technologies, we hierarchize them almost intuitively. Some of them seem more interesting, others less so. I am trying to dig out of my own mind the criteria I use, and here they are: I hierarchize by the expected lifecycle of the technology, and by the breadth of the technological platform involved. In other words, I like big, solid, durable stuff. I am intuitively looking for innovations which offer a relatively long lifecycle in the corresponding technology, where the technology involved is sort of two-level, with a broad base and many specific applicational developments built upon that base.

Why do I take this specific approach? One step further down into my mind, I discover the willingness to have some sort of broad base of business and scientific points of attachment when I start business planning. I want some kind of horizon to choose my exact target on. The common technological base among those 5 ideas is some kind of intelligent digital cloud, with artificial intelligence that learns on the data flowing in. The common scientific base is the collection of health-related data, including behavioural aspects (e.g. sleep, diet, exercise, stress management).

The financial context which I am operating in is complex. It is made of public grants for strictly scientific research, other public financing for projects more oriented on research and development, in consortiums made of universities and business entities, still a different stream of financing for business entities alone, and finally private capital to look for once the technology is ripe enough for being marketed.

I am operating from an academic position. Intuitively, I guess that the more valuable science academic people bring to their common table with businesspeople and government people, the better position those academics will have in any future joint ventures. Hence, we should max out on useful, functional science to back those ideas. I am trying to understand what that science should consist in. An intelligent digital cloud can yield mind-blowing findings. I know that for a fact from my own research. Yet, what I know too is that I need very fundamental science, something at the frontier of logic, philosophy, mathematics, and of the phenomenology pertinent to the scientific research at hand, in order to understand and use meaningfully whatever the intelligent digital cloud spits back out, after being fed with data. I have already gone once through that process of understanding, as I have been working on the application of artificial neural networks to the simulation of collective intelligence in human societies. I had to coin up a theory of intelligent structure, applicable to the problem at hand. I believe that any application of intelligent digital cloud requires assuming that whatever we investigate with that cloud is an intelligent structure, i.e. a structure which learns by producing many alternative versions of itself, and testing them for their fitness to optimize a given desired outcome.  
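That notion of an intelligent structure can be sketched in a few lines of code. Everything below is my own illustrative assumption, not the actual research code: each ‘version’ of the structure is compressed to a single number, fitness is simply closeness to a desired outcome, and the structure learns by keeping and perturbing its fittest versions.

```python
import random

def learn(desired_outcome, n_versions=50, n_rounds=200, seed=42):
    """An intelligent structure: it produces many alternative versions of
    itself and tests them for fitness against a desired outcome."""
    rng = random.Random(seed)
    # each 'version' is just one number here; in real research it would be
    # a whole vector of behavioural / structural variables
    versions = [rng.uniform(-10, 10) for _ in range(n_versions)]
    for _ in range(n_rounds):
        # fitness test: the closer to the desired outcome, the better
        versions.sort(key=lambda v: abs(v - desired_outcome))
        survivors = versions[: n_versions // 2]
        # the structure reproduces its fittest versions, with random perturbation
        versions = survivors + [v + rng.gauss(0, 0.5) for v in survivors]
    return min(versions, key=lambda v: abs(v - desired_outcome))

best = learn(desired_outcome=3.14)
```

After a couple of hundred experimental rounds, the surviving versions cluster tightly around the desired outcome, which is the whole point of the exercise.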

With those medical ideas, I (we?) need to figure out what the intelligent structure in action is, how it can possibly produce many alternative versions of itself, and how those alternative thingies can be tested for fitness. What we have in a medically edged digital cloud is data about a population of people. The desired outcome we look for is health, quite simply. I said ‘simply’? No, it was a mistake. It is health, in all its complexity. Those apps our students want to develop are supposed to pull someone out of the crowd, someone with early symptoms which they do not identify as relevant. In the next step, some kind of dialogue is proposed to such a person, sort of let’s dig a bit more into those symptoms, let’s try something simple to treat them etc. The vector of health in that population is made, roughly speaking, of three sub-vectors: preventive health (e.g. exercise, sleep, stop eating crap food), effectiveness of early medical intervention (e.g. c’mon men, if you are 30 and can’t have an erection, you are bound to concoct some cardio-vascular s**t), and finally effectiveness of advanced medicine, applied when the former two haven’t worked.

I can see at least one salient scientific hurdle to jump over: that outcome vector of health. In my own research, I found out that artificial neural networks can give empirical evidence as to what outcomes we are really after, as a collectively intelligent structure. That’s my first big idea as regards those digital medical solutions: we collect medical and behavioural data in the cloud, we assume that data represents the experimental learning of a collectively intelligent social structure, and we make the cloud discover the phenomena (variables) which the structure actually optimizes.
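Here is one possible shape such a discovery procedure could take, sketched under my own simplifying assumptions: a plain one-neuron linear fit stands in for the neural network, the variable names and the data are invented, and the working rule is that the candidate outcome which the rest of the data predicts with the lowest residual error is, plausibly, the one the structure actually optimizes.

```python
import random

def residual_error(xs, ys, epochs=1000, lr=0.1):
    """Fit y ~ w*x + b by plain gradient descent; return the mean squared error.
    A one-neuron, linear stand-in for the neural network."""
    w, b, n = 0.0, 0.0, len(ys)
    for _ in range(epochs):
        grad_w = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w, b = w - lr * grad_w, b - lr * grad_b
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / n

rng = random.Random(0)
driver = [rng.uniform(0, 1) for _ in range(200)]  # the rest of the data, compressed to one variable
candidates = {
    "hours_worked": [2 * x + rng.gauss(0, 0.05) for x in driver],  # tightly coupled to the driver
    "price_index":  [rng.uniform(0, 1) for _ in driver],           # pure noise, no coupling
}
# the candidate outcome which the data predicts best is, plausibly,
# the one the collectively intelligent structure actually optimizes
errors = {name: residual_error(driver, ys) for name, ys in candidates.items()}
optimized = min(errors, key=errors.get)
```

In this toy setup, ‘hours_worked’ comes out as the optimized variable, simply because it is the one functionally coupled to the rest of the dataset.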

My own experience with that method is that the societies which I studied optimize outcomes which look almost too simplistic in the fancy realm of social sciences, such as the average number of hours worked per person per year, the average amount of human capital per person, measured as years of education before entering the job market, or the price index in exports, thus the average price at which countries sell their exports. In general, the societies which I studied tend to optimize structural proportions, measurable as coefficients in the lines of ‘amount of thingy one divided by amount of thingy two’.

Checkpoint for business. Supposing that our research team, at the Andrzej Frycz – Modrzewski Krakow University, comes up with robust empirical results of that type, i.e. when we take a million random humans and their broadly spoken health, and we assume they are collectively intelligent (I mean, beyond Facebook), then their collectively shared experimental learning of the stuff called ‘life’ makes them optimize health-related behavioural patterns A, B, and C. How can those findings be used in the form of marketable digital technologies? If I know the behavioural patterns someone tries to optimize, I can break those patterns down into small components and figure out a way to influence behaviour. It is a common technique in marketing. If I know someone’s lifestyle, and the values that come with it, I can artfully include into that pattern the technology I am marketing. In this specific case, it could be done ethically and for a good purpose, for a change. In that context, my mind keeps returning to that barely marked trend of rising mortality in adult males in high-income countries, since 2016. WTF? We’ll live, we’ll see.

The understanding of how collective human intelligence goes after health could be, therefore, the kind of scientific bacon our university could bring to the table when starting serious consortial projects with business partners, for the development of intelligent digital technologies in healthcare. Let’s move one step forward. As I have been using artificial neural networks in my research on what I call, and maybe overstate as, collective human intelligence, I have been running those experiments where I take a handful of behavioural patterns, I assign them probabilities of happening (sort of how many folks out of 10 000 will endorse those patterns), and I treat those probabilities as instrumental input in the optimization of pre-defined social outcomes. I almost forgot: I add random disturbance to that form of learning, in the lines of the Black Swan theory (Taleb 2007[1]; Taleb & Blyth 2011[2]).
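That disturbance mechanism can be sketched quickly. Everything here, parameter values included, is my own illustrative assumption rather than the actual experimental setup: most rounds bring only small Gaussian noise, and once in a while a single behavioural pattern gets a Black-Swan-sized kick.

```python
import random

def disturb(probabilities, rng, p_swan=0.02, swan_scale=0.5, noise_scale=0.01):
    """One round of disturbance: small Gaussian noise most of the time and,
    rarely, a Black-Swan-sized shock to one randomly chosen pattern."""
    probs = [p + rng.gauss(0, noise_scale) for p in probabilities]
    if rng.random() < p_swan:                    # the rare, deep shock
        i = rng.randrange(len(probs))
        probs[i] += rng.choice([-1, 1]) * swan_scale
    probs = [max(p, 0.0) for p in probs]         # shares stay non-negative...
    total = sum(probs)
    return [p / total for p in probs]            # ...and sum to 1

rng = random.Random(7)
patterns = [0.5, 0.3, 0.2]   # shares of folks endorsing each behavioural pattern
for _ in range(100):
    patterns = disturb(patterns, rng)
```

The renormalization step matters: patterns compete for endorsement, so a Black Swan hitting one of them reshuffles the shares of all the others.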

I nailed down three patterns of collective learning in the presence of randomly happening s**t: recurrent, optimizing, and panic mode. The recurrent pattern of collective learning, which I tentatively expect to be the most powerful, is essentially a cycle with recurrent amplitude of error. We face a challenge, we go astray, we run around like headless chickens for a while, and then we figure s**t out, we progressively settle for solutions, and then the cycle repeats. It is like everlasting learning, without any clear endgame. The optimizing pattern is something I observed when making my collective intelligence optimize something like the headcount of population, or the GDP. There is a clear phase of ‘WTF!’ (error in optimization goes haywire), which, passing through a somewhat milder ‘WTH?’, ends up in a calm phase of ‘what works?’, with very little residual error.

The panic mode is different from the other two. There is no visible learning in the strict sense of the term, i.e. no visible narrowing down of error in what the network estimates as its desired outcome. On the contrary, that type of network consistently goes into the headless chicken mode, and it becomes more and more headless with each consecutive hundred experimental rounds, so to say. It happens when I make my network go after some very specific socio-economic outcomes, like the price index in capital goods (i.e. fixed assets), or Total Factor Productivity.
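A crude, numerical way of telling the optimizing mode from the panic mode is to compare the average error amplitude at the end of a simulation with that at the beginning. The trajectories below are stylized curves of my own invention, purely for illustration, not actual network output:

```python
def amplitude_trend(errors, window=100):
    """Average error amplitude in the last window divided by that in the first:
    below 1 suggests the optimizing mode, above 1 suggests the panic mode."""
    head = sum(abs(e) for e in errors[:window]) / window
    tail = sum(abs(e) for e in errors[-window:]) / window
    return tail / head

# two stylized error trajectories, purely for illustration
optimizing = [1.0 / (1 + 0.01 * t) for t in range(1000)]  # error narrows down
panic      = [0.1 * (1 + 0.01 * t) for t in range(1000)]  # error keeps widening
```

The recurrent mode would sit somewhere in between, with the ratio hovering around 1 over consecutive cycles.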

Checkpoint for business, once again. That particular thing, about Black Swans randomly disturbing people in their endorsing of behavioural patterns, what business value does it have in a digital cloud? I suppose there are fields of applied medical sciences, for example epidemiology, or the management of healthcare systems, where it pays to know in advance which aspects of our health-related behaviour are the most prone to deep destabilization in the presence of exogenous stressors (e.g. an epidemic, or the president of our country trending on TikTok). It could also pay off to know which collectively pursued outcomes act as stabilizers. If another pandemic breaks out, for example, which social activities and social roles should keep going, at all costs, on the one hand, and which ones can be safely shut down, as they will go haywire anyway, on the other hand?

[1] Taleb, N. N. (2007). The black swan: The impact of the highly improbable (Vol. 2). Random house.

[2] Taleb, N. N., & Blyth, M. (2011). The black swan of Cairo: How suppressing volatility makes the world less predictable and more dangerous. Foreign Affairs, 33-39.

Time for a revolution

I am rethinking the economics of technological change, especially in the context of cloud computing and its spectacular rise, as, essentially, a new and distinct segment of digital business. As I am teaching microeconomics, this semester, I am connecting mostly to that level of technological change. I want to dive a bit more into the business level of cloud computing, and thus I pass in review the annual reports of heavyweights in the IT industry: Alphabet, Microsoft and IBM.

First of all, a didactic reminder is due. When I want to study a business which is publicly listed in a stock market, I approach that business from its investor-relations side, and more specifically through its investor-relations site. Each company listed in the stock market runs such a site, dedicated to showing, with some reluctance to full transparency, mind you, the way the business works. Thus, in my review, I call by the investor-relations sites of, respectively, Alphabet (you know, the mothership of Google), Microsoft, and them Ibemians.

I start with the Mother of All Clouds, i.e. with Google and its mother company, namely Alphabet. Keep in mind: the GDP of Poland, my home country, is roughly $590 billion, and the gross margin which Alphabet generated in 2019 was $89 857 million, thus 15% of the Polish GDP. That’s the size of business we are talking about, and I am talking about that business precisely for that reason. There is a school in economic sciences, called new institutionalism. Roughly speaking, those guys study the question of why big corporate structures exist at all. The answer is that corporations are a social contrivance which allows internalizing a market inside an organization. You can understand the general drift of that scientific school if you study a foundational paper by O.D. Hart (Hart 1988[1]). Long story short, when a corporate structure grows as big as Alphabet, I can assume its internal structure is somehow representative for the digital industry as a whole. You could say: but them Google people, they don’t make hardware. No, they don’t, and yet they massively invest in hardware, mostly in servers. Their activity translates into a lot of IT hardware.

Anyway, I assume that the business structure of Alphabet is informative about the general structure and the drift of the digital business globally. In the two tables below, I show the structure of their revenues. For the non-economic people: revenue is the value of sales, or, in analytical terms, Price multiplied by Quantity.     

Semi-annual revenue of Alphabet Inc.

The next step is to understand specifically the meaning of categories defined as ‘Segments’, and the general business drift. The latter is strongly rooted in what the Google tribe cherishes as ‘Moonshots’, and which means technological change seen as revolution rather than evolution. Their business develops by technological leaps, smoothed by exogenous economic conditions. Those exogenous conditions translate into the Alphabet’s business mostly as advertising. In the subsection titled ‘How we make money’, you can read it explicitly. By the way, under the mysterious categories of ‘Google other’ and ‘Other Bets revenues’, Alphabet understands, respectively:

>> Google other: Google Play, including sales of apps and in-app purchases, as well as digital content sold in the Google Play store; hardware, including Google Nest home products, Pixelbooks, Pixel phones and other devices; YouTube non-advertising, including YouTube Premium and YouTube TV subscriptions and other services;

>> Other Bet revenues are, in the Google corporate jargon, young and risky businesses, slightly off the main Googly track; right now, they cover the sales of Access internet, TV services, Verily licensing, and R&D services.

Against that background, Google Cloud, which most of us are not really familiar with, as it is a business-to-business functionality, shows interesting growth. Still, it is to keep in mind that Google is cloud: ‘Google was a company built in the cloud. We continue to invest in infrastructure, security, data management, analytics and AI’ (page 7 of the 10-K annual report for 2019). YouTube ads, which show a breath-taking ascent in the company’s revenue, base their efficiency and attractiveness on artificial intelligence operating in a huge cloud of data regarding the viewers’ activity on YouTube.

Now, I want to have a look at Alphabet from other financial angles. Their balance sheet, i.e. their capital account, comes next in line. In the two tables below, I present that balance sheet one side at a time, and I start with the active side, i.e. with assets. I use the principle that if I know what kind of assets a company invests money in, I can guess a lot about the way their business works. When I look at Alphabet’s assets, the biggest single category is that of ‘Marketable securities’, closely followed by ‘Property and Equipment’. They are like a big factory with a big portfolio of financial securities, and the portfolio is noticeably bigger than the factory. This is a pattern which I have recently observed in a lot of tech companies. They hold huge reserves of liquid financial assets, probably in order to max out on their flexibility. You never know when exactly you will face both the opportunity and the necessity to invest in the next technological moonshot. Accounts receivable and goodwill come in second place, as regards the value in distinct groups of assets. A bit of explanation is due as for that latter category. Goodwill might suggest someone had good intentions. Weeell, sort of. When you are a big company and you buy a smaller company, and you obviously overpay for the control over that company, i.e. you pay over the market price of its stock, the surplus you have overpaid you call ‘Goodwill’. It means that this really expensive purchase is, at the same time, very promising, and there is likely to be plenty of future profits. When? In the future, stands to reason.
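The accounting arithmetic behind goodwill fits in a few lines. The figures below are purely hypothetical, just to show the mechanics of where that balance-sheet category comes from:

```python
# Hypothetical acquisition, purely to illustrate where 'Goodwill' comes from
price_paid           = 1_000   # $ million paid for control over the target company
fair_value_of_assets =   700   # $ million of identifiable assets acquired
liabilities_assumed  =   100   # $ million of the target's debts taken over

net_identifiable_assets = fair_value_of_assets - liabilities_assumed
goodwill = price_paid - net_identifiable_assets   # the overpayment, booked as an asset
```

In this toy case, $400 million of the purchase price lands on the buyer’s balance sheet as goodwill, i.e. as a promise of future profits rather than anything tangible.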

Now, I call by the passive side of Alphabet’s balance sheet, i.e. by their liabilities and equity, shown schematically in the next table below. The biggest single category here, i.e. the biggest distinct stream of financial capital fuelling this specific corporate machine, is made of ‘Retained Earnings’, and stock equity comes in second place. Those two categories taken together made 73% of Alphabet’s total capital base by the end of 2019. Still, by the end of 2018, that share was 77%. Whilst Alphabet retains a lot of its net profit, something like 50%, there is a subtle shift in their financing. They seem to be moving from an equity-based model of financing towards a more liability-based one. It happens by baby steps, yet it happens. Some accrued compensations and benefits (i.e. money which Alphabet should pay to their employees, yet they don’t, because…), some accrued revenue share… all those little movements indicate a change in their way of accumulating and using capital.
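The proportion I track here is a one-liner. The balance-sheet figures below are hypothetical round numbers of my own, not Alphabet’s actual ones; they just happen to reproduce a 73%-ish equity share:

```python
# Hypothetical passive side of a balance sheet, $ million, round numbers of mine
retained_earnings = 150_000
stock_equity      =  50_000
liabilities       =  75_000

total_capital_base = retained_earnings + stock_equity + liabilities
equity_share = (retained_earnings + stock_equity) / total_capital_base  # ~0.73 here
```

Watching that share drift downwards, year to year, is precisely how one catches the shift from equity-based towards liability-based financing.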

The next two tables below give a bird’s eye view of Alphabet in terms of trends in their financials. They have a steady profitability (i.e. capacity to make money out of current business), their capacity to bring return on equity and assets steadily grows, and they shift gently from equity-based finance towards more complex a capital base, with more long-term liabilities. My general conclusion is that Alphabet is up to something, like really. They claim they constantly do revolution, but my gut feeling is that they are poising themselves for a really big revolution, business-wise, coming shortly. Those reserves of liquid financial assets, that accumulation of liabilities… All that stuff is typical in businesses coiling for a big leap.  There is another thing, closely correlated with this one. In their annual report, Alphabet claims that they mostly make money on advertising. In a narrow, operational sense, it might be true. Yet, when I have a look at their cash-flow, it looks different. What they have cash from, first and most of all, are maturities and sales of financial securities, and this one comes as way a dominant, single source of cash, hands down. They make money on financial operations in the stock market, in somehow plainer a human lingo. Then, in the second place, come two operational inflows of cash: amortization of fixed assets, and tax benefits resulting from the payment of stock-based compensations. Alphabet makes real money on financial operations and tax benefits. They might be a cloud in their operations, but in their cash-flows they are a good, old-fashioned financial scheme.  

Now, I compare with Microsoft. In a recent update, titled ‘#howcouldtheyhavedoneittome’, I discussed the emerging position of cloud computing in the overall business of Microsoft. Here, I focus on their general financials, with a special emphasis on their balance sheet and their cash-flow. I show a detailed view of both in the two tables that follow. Capital-wise, Microsoft follows slightly different a pattern as compared to Alphabet, although some common denominators appear. On the active side, i.e. as regards the ways of employing capital, Microsoft seems to be even more oriented on liquid financial assets than Alphabet. Cash, its equivalents, and short-term investments are, by far, the biggest single category of assets in Microsoft. The capital they have in property and equipment is far lower, and, interestingly, almost equal to goodwill. In other words, when Microsoft acquires productive assets, it seems to be like 50/50 between their own ones, on the one hand, and those located in acquired companies, on the other hand. As for the sources of capital, Microsoft is clearly more debt-based, especially long-term-debt-based, than Alphabet, whilst retaining comparatively lower a proportion of their net income. It looks as if Alphabet were only discovering, by now, the charms of a capital structure which Microsoft seems to have discovered quite a while ago. As for cash-flows, both giants are very similar. In Microsoft, as in Alphabet, the main single source of cash is the monetization of financial securities, through maturity or by sales, with operational tax write-offs coming in second place. Both giants seem to be financially bored, so to say. Operations run their way, people are interested in the company’s stock, from time to time a smaller company gets swallowed, and it goes repeatedly, year by year. Boring. Time for a revolution.

Edit: as I was ruminating my thoughts after having written this update, I recorded a quick video ( ) on the economics of technological change, where I connect my observations about Alphabet and Microsoft with a classic, namely with the theory of innovation by Joseph Schumpeter.

[1] Hart, O. D. (1988). Incomplete Contracts and the Theory of the Firm. Journal of Law, Economics, & Organization, 4(1), 119-139.

Neighbourhoods of Cineworld

As I write about cities and their social function, I want to mess around a bit with a business model known as Real Estate Investment Trust, or REIT. You can consult my video on REITs in general, namely the one titled ‘Urban Economics and City Management #2 Case study of REIT: Urban Edge and Atrium’ [ ]. I study there the cases of two such trusts, namely Urban Edge (U.S.) and Atrium (Central Europe).

I am pursuing the idea of investment as fundamental social activity. I intuitively guess that cities will be developing along the lines of what we will be collectively investing in. By investment I mean a compound process which loops between two specific activities: the accumulation of resources, and the allocation thereof. Since the dawn of human civilization, we have been putting things in reserve. First, it was food. Then, we discovered that putting some of our current resources into building durable architectural structures paid off: warmer in winter, cooler in summer, plenty of room for storing food, some protection against anyone or anything willing to take that food from us etc. Yes, architectural construction is investment. I put my resources – capital, labour, natural resources – into something that will pay me back in the future, over a prolonged period of time.

Investment is an interesting component of our collective intelligence. Our society changes in directions and at paces very much determined by the things we willingly invest in. We organize those things according to the principle of delayed gratification: controlled deprivation today, oriented on having some durable outcomes in the future. I deliberately use the term ‘things’, so general and plain. We invest in railroads, and we invest in feeling safe from natural disasters. We invest in businesses, and we invest in the expectation of having the most luxurious car/house/dress/holiday in the entire neighbourhood. We invest in collections of physical things, and we invest in ideas.

We have governments and political systems because we have that pattern in our collective intelligence. Governments are in place because and as long as they have legitimation, i.e. because and as long as at least some part of the population accepts being governed, without being coerced into obedience. People give legitimation to governments because they accept sacrificing some of the presently available resources (taxes) and freedoms (compliance with the law) in order to have delayed gratification in the form of security, territorial stability, enforceable contracts etc.

Thus, we go in the direction we invest into. That direction is set by the exact kind of delayed gratification we expect to have in the future, and by the exact type of resources and freedoms we give away today in order to have that delayed thing. Cities evolve exactly according to that pattern. Cities look the way they do today because at some point in the past, citizens (yes, the term ‘citizen’ comes from the status of being officially acknowledged and accepted as a permanent resident of a city) collectively invested in a given type of urban structures. It is important to understand the way I use words such as ‘collective’ and ‘collectively’. People do things collectively even when they say they completely disagree about doing those things together. This is called ‘tacit coordination’. Let’s consider an example. We disagree, in a city, about the way of organizing a piece of urban space. Some people want to build residential structures there, essentially made for rent. Some others want to see a green space in exactly the same spot, like a park. What you can see emerging out of that disagreement, in the long run, is a patchwork of residential buildings and green spaces, all over the neighbourhood.

Disagreement is a pattern of tacit coordination, thus a pattern of collective intelligence. We disagree about things which we judge important. Openly expressed disagreement is, in the first place, tacit agreement as for what we really care for (object of disagreement) and who really cares for it (protagonists of disagreement). In my personal experience, if a collective, e.g. a business organization, follows a strategy with unanimous enthusiasm, without any voices of dissent, I am like ‘Ooooh, f**k! That thing is heading towards the edge of the cliff…’.

Good. We invest, i.e. we are collectively intelligent about what kind of present satisfaction we sacrifice for the sake of future delayed gratification. The most important investments we collectively make are subject to disagreement, which is more or less ritualized with legal norms and/or political institutions. Here comes an interesting case, disquietingly connected to real life. Cineworld, a chain of cinema theatres, has just announced that ‘In response to an increasingly challenging theatrical landscape and sustained key market closures due to the COVID-19 pandemic, Cineworld confirms that it will be temporarily suspending operations at all of its 536 Regal theatres in the US and its 127 Cineworld and Picturehouse theatres in the UK from Thursday, 8 October 2020’. That provokes a question: what will happen to those theatres as physical places? Will the pandemic force a rethinking and reengineering of their functions in the surrounding urban space, and of the way they should be managed? Is that closure of cinema theatres a durable, irreversible change, or is it just temporary?

You can see the entire map of Cineworld’s cinemas under this link: ( ). A bit of digital zoom, and you can make yourself an opinion about the Cineworld cinemas located in London under the brand of ‘PictureHouse’. Look at the Clapham PictureHouse ( ) and at its location: 76 Venn St, Clapham Town, London SW4 0AT, United Kingdom. The neighbourhood looks more or less like that:

What can be done there? What will the locals collectively invest in? What will be the key features of that investment which they will be disagreeing about? These are low buildings; the neighbourhood looks like a combination of residential structures and small utility ones. Whatever that cinema theatre can be turned into, it will have to make sense for the immediate neighbourhood, like 5 kilometres around.

I turn that cursory reflection on the closure of Cineworld’s theatres into three pieces of teaching: as a case of Urban Development sensu stricto, for one, then as a case of Economic Policy, for two, and finally as a case of International Economics, for three, because as cinemas close, folks are bound to spend more time in front of their private screens, and that means growth in the global market of digital entertainment.

New, complete course of Business Planning

I have just finished putting together a complete course of Business Planning. You can find the link on the sidebar. In a series of video lectures combined with Power Point presentations, you will go through all the basic skills of business planning: pitching and modelling your business concept, market research and its translation into financials, assessment of the optimal capital base, and thorough reflection on the soft side of the business plan, i.e. your goals, your risks, your people etc.

Click, dive into, dig through and enjoy.

Cautiously bon-vivant

I keep developing on a few topics in parallel, with a special focus on two of them. Lessons in economics and management which I can derive for my students, out of my personal experience as a small investor in the stock market, for one, and a broader, scientific work on the civilizational role of cities and our human collective intelligence, for two.

I like starting with the observation of real life, and I like ending with it as well. What I see around gives me the initial incentive to do research, and makes the last pitch for testing my findings and intuitions. In my personal experience as an investor, I have simply confirmed an initial intuition that giving a written, consistent and public account thereof helps me nail down efficient investment strategies. As regards cities and collective intelligence, the first part of that topic comes from observing changes in urban life since COVID-19 broke out, and the second part is just a generalized, though mild, intellectual obsession, which I started developing once I observed the way artificial neural networks work.

In this update, I want to develop on two specific points, connected to those two paths of research and writing. As far as my investment is concerned, I am seriously entertaining the idea of broadening my investment portfolio in the sector of renewable energies, more specifically in photovoltaics. I can notice a rush on the solar business in the U.S., and I am thinking about investing in some of those shares. I already hold, and have made a nice profit on, the stock of First Solar ( ) as well as that of SMA Solar ( ). Currently, I am observing three other companies: Vivint Solar ( ), Canadian Solar ( ), and SolarEdge Technologies ( ). Below, I am placing the graphs of stock price over the last year, as regards those solar businesses. There is something like a common trend in those stock prices. March and April 2020 were a moment of a brief jump upwards, which subsequently turned into a shy lie-down, and since the beginning of August 2020 another journey into the realm of investors’ keen interest seems to be on the way.

Before you have a look at the graphs, here is a summary table with selected financials, approached as relative gradients of change, or d(x).

Change from 01/01/2020 to 31/08/2020

Company          d(market cap)   d(assets)   d(operational cash-flow)
First Solar      +23,9%          -6%         deeper negative: – $80 million
SMA Solar        +27,5%          -10%        deeper negative: – €40 million
Vivint Solar     +362%           +11%        deeper negative: – $9 million
SolarEdge        +98%            0           + $50 million
Canadian Solar   +41%            +4%         + $90 million
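
For clarity, the d(x) gradients in the table above are simple relative changes between the two dates. A one-liner, with hypothetical market caps of my own invention:

```python
def d(x_start, x_end):
    """Relative gradient of change between two dates: (end - start) / start."""
    return (x_end - x_start) / x_start

# hypothetical market caps, $ million, invented just to show the mechanics
gradient = d(5_000, 6_195)
print(f"{gradient:+.1%}")   # a First Solar-sized move
```

The same formula applies to assets and to operational cash-flow, except that cash-flow can cross zero, which is why I describe those cells in words rather than percentages.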

There are two fundamental traits of business models which I am having a close look at. Firstly, it is the correlation between changes in market capitalization and changes in assets. I am checking whether the solar businesses I want to invest in have their capital base functionally connected to the financial market. Looks a bit wobbly, as for now. Secondly, I look at current operational efficiency, measured with operational cash-flow. Here, I can see there is still a lot to do. Here is the link to the YouTube video where I develop that whole topic: Business models in renewable energies #3 Solar business and investment opportunities [Renew BM 3 2020-09-06 09-20-30 ; ].

Those business models seem to be in a phase of slow stabilization. The industry as a whole seems to be slowly figuring out the right way of running the PV show, yet the truly efficient scheme is still to be nailed down. Investment in those companies is based on reasonable trust in the growth of their market, and in the positive impact of technological innovation. Question: is it a good move to invest now? Answer: it is risky, but acceptably rational. Once those business models become really efficient, the industry will be in, or close to, the phase of maturity, which, in turn, does not really allow expecting abnormally high returns on investment.

This is a very 'financial', hands-off approach to business models. In this case, the business models of those photovoltaic companies matter to me just to the extent of being fundamentally predictable. I don't want to run a solar business; I just want elementary understanding of what's going on, business-wise, to make my investment better grounded. Looking from inside a business, such an approach is informative about the way a business model should 'speak' to investors.

At the end of the day, I think I am most likely to invest in SolarEdge. It seems to have all the LEGO blocks in place for a good opening: good cash flow, although a bit sluggish when it comes to real investment.

As regards COVID-19 and cities, I am formulating the following hypothesis: COVID-19 has awakened some deeply rooted cultural patterns, which date back to times of high epidemic risk, long before vaccines, sanitation and widespread basic healthcare. Those patterns involve less spatial mobility in the population, and social interactions within relatively steady social circles of knowingly healthy people. As a result, the overall frequency of social interactions in cities is likely to decrease, and, as a contingent result, the formation of new social roles is likely to slow down. Then, either digital technologies take over the function of direct social interactions, and new social roles will shape themselves via your average smartphone, with all the apps it is blessed (haunted?) with, or the formation of new social roles will slow down in general. In the latter case, we could have a hard time keeping up our pace of technological change. Here is the link to the YouTube video which summarizes what is written below: Urban Economics and City Management #4 COVID and social mobility in cities [ Cities 4 2020-09-06 09-43-06 ; ].

I want to gain some insight into the epidemiological angle of that claim, and I am passing in review some recent literature. I start with: Gatto, M., Bertuzzo, E., Mari, L., Miccoli, S., Carraro, L., Casagrandi, R., & Rinaldo, A. (2020). Spread and dynamics of the COVID-19 epidemic in Italy: Effects of emergency containment measures. Proceedings of the National Academy of Sciences, 117(19), 10484-10491 ( ). As is usually the case, my internal curious ape starts paying attention to details which could come across as secondary to other people, and my internal happy bulldog follows along and bites deep into those details. The little detail in this specific paper is a parameter: the number of people quarantined as a percentage of those positively diagnosed with Sars-Cov-2. In the model developed by Gatto et al., that parameter is kept constant at 40%, which is, apparently, the average level empirically observed in Italy during the Spring 2020 outbreak. Quarantine is strict isolation between carriers and (supposedly) non-carriers of the virus. Quarantine can be placed on the same scale as basic social distancing: it is just stricter, and, in quantitative terms, it drives the likelihood of infectious social interaction much lower. Gatto et al. insist that testing effort and quarantining are essential components of collective defence against the epidemic. I generalize: testing and quarantine are patterns of collective behaviour. I check whether people around me are carriers or not, and then I split them into two categories: those whom I strongly suspect to host and transmit Sars-Cov-2, and all the rest. I define two patterns of social interaction with those two groups: very restrictive with the former, and cautiously bon vivant with the others (still, no hugging). As the technologies of testing inevitably diffuse across the social landscape, that structured pattern is likely to spread as well.
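To get a feel for why that single 40% parameter matters so much, here is a toy discrete-time SIR-type model with a quarantine compartment. To be clear: this is my own illustrative sketch, not the actual model of Gatto et al.; the only borrowed ingredient is the idea of a fixed quarantined share q among detected infections, and all parameter values are made up.

```python
# Toy SIR model with quarantine (illustrative, NOT Gatto et al.'s model).
# A share q of newly infected people is detected and quarantined, and
# quarantined people do not transmit the virus any further.

def epidemic_size(q, beta=0.3, gamma=0.1, days=365, n=1_000_000, i0=100):
    """Cumulative infections when a share q of new cases is quarantined."""
    s, i, quar = n - i0, float(i0), 0.0
    for _ in range(days):
        new_inf = beta * s * i / n        # only non-quarantined transmit
        s -= new_inf
        i += (1 - q) * new_inf - gamma * i
        quar += q * new_inf - gamma * quar
    return n - s                          # everyone who ever got infected

free_run = epidemic_size(q=0.0)           # no quarantine at all
italy_like = epidemic_size(q=0.4)         # the 40% level from the paper
print(f"no quarantine:   {free_run:,.0f} infections")
print(f"40% quarantined: {italy_like:,.0f} infections")
```

With these made-up parameters, quarantining 40% of detected cases cuts the effective reproduction of the virus by the same 40%, which translates into a visibly smaller epidemic, without stopping it altogether.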

Now, I pay a short intellectual visit to Jiang, P., Fu, X., Van Fan, Y., Klemeš, J. J., Chen, P., Ma, S., & Zhang, W. (2020). Spatial-temporal potential exposure risk analytics and urban sustainability impacts related to COVID-19 mitigation: A perspective from car mobility behaviour. Journal of Cleaner Production, 123673. Their methodology is based on correlating the spatial mobility of cars in residential areas of Singapore with the risk of infection with COVID-19. A 44,3% – 55,4% decrease in the spatial mobility of cars is correlated with a 72% decrease in the risk of social transmission of the virus. I intuitively translate that into geometrical patterns: lower mobility in cars means a shorter average radius of travel by the means of available urban transportation. In the presence of epidemic risk, people move across a smaller average territory.
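That geometrical intuition can be put in numbers. The sketch below is my own back-of-envelope reading, not Jiang et al.'s method, and the radius values are hypothetical: if the average travel radius r shrinks, the territory a person covers scales with r squared, so a moderate cut in mobility shrinks the potential contact territory disproportionately.

```python
# Back-of-envelope geometry (my own reading, not Jiang et al.'s method):
# covered territory scales with the square of the average travel radius.
import math

area = lambda r: math.pi * r ** 2

r_before = 10.0              # km, hypothetical average travel radius
r_after = 0.5 * r_before     # radius roughly halved, echoing the ~44-55% drop
shrink = 1 - area(r_after) / area(r_before)
print(f"covered territory shrinks by {shrink:.0%}")   # prints 75%
```

Under these toy assumptions, halving the radius shrinks the covered territory by 75%, a number that happens to sit close to the reported 72% drop in transmission risk; I flag that as a numerical coincidence of the sketch, not a causal claim.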

In another paper (or rather in a commented dataset), namely in Pepe, E., Bajardi, P., Gauvin, L., Privitera, F., Lake, B., Cattuto, C., & Tizzoni, M. (2020). COVID-19 outbreak response, a dataset to assess mobility changes in Italy following national lockdown. Scientific data, 7(1), 1-7, I find an enlarged catalogue of metrics pertinent to spatial mobility. That paper, in turn, led me to the functionality run by Google: . I went through all of it a bit cursorily, and I noticed two things. First of all, countries are strongly idiosyncratic in their social response to the pandemic. Still, and second of all, there are common denominators across those idiosyncrasies, and the most visible one is cyclicality. Each society seems to have been experimenting with the amount of spatial mobility it can afford and sustain in the presence of epidemic risk. There is a cycle of experimentation, around 3 – 4 weeks long. Experimentation means learning, and learning usually leads to durable behavioural change. In other words, we (I mean, homo sapiens) are currently learning, with the pandemic, new ways of being together, and those ways are likely to incrust themselves into our social structures.
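How would one spot such a 3 – 4 week cycle in a daily mobility index? A plain autocorrelation scan will do. The sketch below runs on a synthetic series with a built-in 28-day cycle plus small deterministic "noise"; the real Google mobility data would simply replace the synthetic list.

```python
# Toy sketch: detecting a ~4-week cycle in a daily mobility index via
# autocorrelation. The series here is synthetic (28-day sine + small
# pseudo-noise); real mobility data would take its place.
import math

days = 180
period = 28   # hypothetical ~4-week experimentation cycle
series = [math.sin(2 * math.pi * t / period)
          + 0.1 * ((t * 37) % 11 - 5) / 5      # small deterministic noise
          for t in range(days)]

def autocorr(x, lag):
    """Autocorrelation of series x at a given lag."""
    n = len(x) - lag
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t + lag] - m) for t in range(n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# Scan lags from 2 to 6 weeks; the strongest lag estimates the cycle length.
best_lag = max(range(14, 43), key=lambda lag: autocorr(series, lag))
print(f"estimated cycle length: {best_lag} days")   # lands near 28
```

The same scan on actual country-level mobility indices is what would let one check whether the apparent 3 – 4 week rhythm is robust or just eyeballing.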

The article by Kraemer, M. U., Yang, C. H., Gutierrez, B., Wu, C. H., Klein, B., Pigott, D. M., … & Brownstein, J. S. (2020). The effect of human mobility and control measures on the COVID-19 epidemic in China. Science, 368(6490), 493-497 ( ) shows that without any restrictions in place, the spatial distribution of COVID-19 cases is strongly correlated with the spatial mobility of people. With restrictions in place, that correlation can be curbed, yet it is impossible to drive it down to zero. In plain human, it means that even lockdowns as stringent as we could see in China cannot reduce spatial mobility to a level which would completely prevent the spread of the virus.

By the way, in Gao, S., Rao, J., Kang, Y., Liang, Y., & Kruse, J. (2020). Mapping county-level mobility pattern changes in the United States in response to COVID-19. SIGSPATIAL Special, 12(1), 16-26 ( ), I read that the whole idea of tracking spatial mobility with people's personal smartphones largely backfired, because the GPS receivers installed in the average phone have around 20 metres of horizontal error, on average, and are easily blurred when people gather in one place. Still, whilst the idea went down the drain as regards individual tracking of mobility, smartphone data seems to be reliable for observing entire clusters of people, and the way those clusters flow across space. On that, you can consult Jia, J. S., Lu, X., Yuan, Y., Xu, G., Jia, J., & Christakis, N. A. (2020). Population flow drives spatio-temporal distribution of COVID-19 in China. Nature, 1-5 ( ).

Bonaccorsi, G., Pierri, F., Cinelli, M., Flori, A., Galeazzi, A., Porcelli, F., … & Pammolli, F. (2020). Economic and social consequences of human mobility restrictions under COVID-19. Proceedings of the National Academy of Sciences, 117(27), 15530-15535 ( ) show an interesting economic aspect of the pandemic. Restrictions on mobility deal the strongest economic blow to the poorest people, and to local communities marked by the greatest relative economic inequalities. Restrictions imposed by governments are one thing; self-imposed limitations on spatial mobility are another. If my intuition is correct, namely that we will spontaneously modify, and generally limit, our social interactions in order to protect ourselves from COVID-19, those changes are likely to be fastest and deepest in high-income, low-inequality communities. As income decreases and inequality rises, those adaptive behavioural modifications are likely to weaken.

As I draw a provisional bottom line under that handful of scientific papers, my initial hypothesis seems to hold. We do modify, as a species, our social patterns, towards more encapsulated social circles. There is a process of learning taking place, and there is no mistake about it. That process of learning involves a downwards recalibration of the average territory of activity, and a smart selection of the people whom we hang out with, based on what we know about the epidemic risk they convey. It is a process of learning by trial and error, and it is locally idiosyncratic. Idiosyncrasies seem to be somehow correlated with differences in wealth: income and accumulated capital visibly give local communities an additional edge in that adaptive learning. In the long run, economic resilience seems to be a key factor in successful adaptation to epidemic risk.

To wrap up, here is an educational piece on Business models in the Media Industry #4 The gaming business [ Media BM 4 2020-09-02 10-42-44 ; ]. I study the case of CD Projekt ( ), a Polish gaming company, known mostly for 'The Witcher' games and currently working on the next title, Cyberpunk 2077, with Keanu Reeves lending his face to one of the characters. I discover a strange business model, which obviously has a hard time connecting with the creative process at the operational level. As strange as it might seem, the main investment activity, for the moment, consists in terminating and initiating cash bank deposits (!), and one of the most important operational activities is to push further into the future the moment of officially charging customers with some economically due receivables. On top of all that, those revenues deferred into the future are officially written in the balance sheet as short-term liabilities, which CD Projekt owes to… whom exactly?
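For readers puzzled by that last accounting twist, here is a generic sketch of deferred-revenue bookkeeping. It is not CD Projekt's actual books, just the standard mechanism: cash received before a product ships is booked as a liability, because what the company owes its customers is delivery, not money.

```python
# Minimal sketch of deferred-revenue accounting (generic illustration,
# not CD Projekt's actual ledger). Pre-order cash sits as a liability
# until the product is delivered, then converts into revenue.
ledger = {"cash": 0, "deferred_revenue": 0, "revenue": 0}

def take_preorder(amount):
    """Customer pays now; the company now owes them delivery."""
    ledger["cash"] += amount
    ledger["deferred_revenue"] += amount   # short-term liability

def ship_product():
    """Delivery extinguishes the liability and recognizes revenue."""
    ledger["revenue"] += ledger["deferred_revenue"]
    ledger["deferred_revenue"] = 0

take_preorder(60)
take_preorder(60)
ship_product()
print(ledger)   # {'cash': 120, 'deferred_revenue': 0, 'revenue': 120}
```

On this reading, the answer to "owes to whom?" would be: to the customers who have already paid, until the game actually ships.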