#howcouldtheyhavedoneittome

I am considering the idea of making my students – at least some of them – into an innovative task force in order to develop new technologies and/or new businesses. My essential logic is that I teach social sciences, from various possible angles, and the best way of learning is by trial and error. We learn the most when we experiment with many alternative versions of ourselves and select the version which seems the fittest, regarding the values and goals we pursue. Logically, when I want my students to learn social sciences, like really learn, the first step is to make them experiment with the social roles they currently have and make many alternative versions thereof. You are 100% student at the starting point, and now you try to figure out what it is like to be 80% student and 20% innovator, or 50% student and 50% innovator, etc. What are your values? Well, when it comes to learning, I advise assuming that the best learning occurs when we get out of our comfort zone but keep the door open for returning there. I believe it can be qualified as a flow state. You should look for situations where you feel a bit awkward, and the whole thing sucks a bit because you feel you do not have all the skills the situation calls for, and still you see a clear path of passage between your normal comfort zone and that specific state of constructive suck.

Thus, when I experiment with many alternative versions of myself, without being afraid of losing my identity – thus when I behave like an intelligent structure – the most valuable versions of myself, as far as learning goes, are those which push me slightly out of my comfort zone. When you want to learn social sciences, you look for those alternative versions of yourself which are a bit uncomfortably involved in that whole social thing around you. That controlled, uncomfortable involvement makes you learn faster and deeper.

The second important thing I know about learning is that I learn faster and deeper when I write and talk about what I am learning and how I am learning. I have just experienced that process of accelerated figuring my s**t out as regards investment in the stock market. I started by the end of January 2020 (see Back in the game or Fathom the outcomes) and, with a bit of obsessive self-narration, I went from not really knowing what I was doing and barely controlling my emotions to a portfolio of some 20 investment positions, capable of bringing me at least 10% a month in terms of return on capital (see Fire and ice. A real-life business case).

Thus, consistently getting out of your comfort zone just enough to feel a bit of suck, and then writing about your own experience in that place – that whole thing has a hell of a propulsive power. You can really burn the (existential) rubber, under just one condition: the ‘consistently’ part. Being relentless in making small everyday steps is the third ingredient of that concoction. We learn by forming habits. Daily repetition of experimenting in the zone of gentle suck gets you used to that experimentation, and once you are used to it, well, man, you have the turbo boost on, in your existence.

This is precisely what I intend to talk my students into: experimenting outside of their comfort zone, with a bit of uncomfortably stimulating social involvement in the development of an innovative business concept. The type of innovation I am thinking about is some kind of digital technology or digital product, and I want to start exploration by rummaging a little bit in the investor-relations sites of publicly listed companies, just to see what they are up to and to find some good benchmarks for business modelling. I start with one of the T-Rexes of the industry, namely with Microsoft (https://www.microsoft.com/en-us/investor ). As I like going straight for the kill, I dive into the section of SEC filings (https://www.microsoft.com/en-us/Investor/sec-filings.aspx ), and there, a pleasant surprise awaits: they end their fiscal year by the end of June, them people at Microsoft, and thus I have their annual report for the fiscal year 2020 ready and available even before the calendar year 2020 is over. You can download the report from their site or from my archives: https://discoversocialsciences.com/wp-content/uploads/2020/10/Microsoft-_FY20Q4_10K.docx .

As I grab my machete and my camera and cut myself a path through that document, I develop a general impression that digital business is going more and more towards big data and big server power, rather than programming strictly speaking. I allow myself to source directly from that annual report the table from page 39, with segment results. You can see it here below:

Intelligent Cloud, i.e. Microsoft Azure (https://azure.microsoft.com/en-us/ ), seems to be the most dynamic segment in their business. In other words, a lot of data combined with a lot of server power, and with artificial neural networks to extract patterns and optimize. If I consider the case of Microsoft as representative for the technological race taking place in the IT industry, cloud computing seems to be the main track in that race.

Before I forget: IBM has just confirmed that intuition of mine. If you call by https://www.ibm.com/investor , you can pick up their half-year results (https://www.ibm.com/investor/att/pdf/IBM-2Q20-Earnings-Press-Release.pdf ) and their latest strategic update (https://www.ibm.com/investor/att/pdf/IBM-Strategic-Update-2020-Press-Release.pdf ). One fact comes out of it: cloud computing at IBM brings the most gross margin and the most growth in business. It goes to the point of IBM splitting their business in two, with cloud computing spinning out of all the rest as a separate business.

I would suggest that my students think about digital innovations in the domain of cloud computing. Microsoft Azure (https://azure.microsoft.com/en-us/ ) and the cloud computing provided by Okta (https://investor.okta.com/ ), seen a bit more in focus in their latest annual report (https://discoversocialsciences.com/wp-content/uploads/2020/10/Okta-10K-2019.pdf ), serve me as quick benchmarks. Well, as I think about benchmarks, there are others, more obvious or less obvious, depending on the point of view. YouTube, when you think about it, does cloud computing. It stores data – yes, videos are data – and it adapts the list of videos presented to each user according to the preferences of said user, guessed by algorithms of artificial intelligence. Netflix – same thing: a lot of data, in the form of movies, shows and documentaries, and a lot of server power to support the whole thing.

My internal curious ape has grabbed this interesting object – innovations in the domain of cloud computing – and now my internal happy bulldog starts playing with it, sniffing around and digging holes, haphazardly, in search of more stuff like that. My internal austere monk watches the ape and the bulldog, holding his razor ready – I mean Ockham’s razor, to cut bullshit out, should such need arise.

What’s cloud computing from the point of view of a team made of an ape and a bulldog? It is essentially a f**king big amount of data, permeated with artificial neural networks, run on and through f**king big servers, consuming a lot of computational power and a lot of energy. As cloud computing is becoming a separate IT business in its own right, I try to decompose it into key factors of value added. The technology of servers as such is one such factor. Energy efficiency, resilience to factors of operational risk, probably fibre optics as regards connectivity, sheer computational power per 1 cubic metre of space, a negotiably low price of electricity – all those things are sort of related to servers.

Access to big, useful datasets is another component of that business. I see two openings here. Acquiring intellectual property rights now to datasets which are cheap today, but likely to be expensive tomorrow, is certainly important. People tend to say that data has become a commodity, and it is partly true. Still, I see that data is becoming an asset, too. As I look at the financials of Netflix (see, for example, The hopefully crazy semester), thus at cloud computing for entertainment, I realize that cloud-stored (clouded?) data can be both a fixed asset and a circulating one. It all depends on its lifecycle. There is data with a relatively short shelf life, which works as a circulating asset, akin to inventories. It earns money when it flows: some parcels of data flow into my server, some flow out, and I need that flow to stay in the flow of business. There is other data which holds value for a longer time, similarly to a fixed asset, and yet is subject to depreciation and amortization.
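That fixed-asset logic can be sketched in a few lines of accounting arithmetic. A minimal illustration, assuming straight-line amortization and purely hypothetical figures (the cost and the shelf life are mine, not anything from the filings cited above):

```python
def yearly_amortization(acquisition_cost, shelf_life_years):
    """Straight-line amortization of a dataset treated as a fixed asset:
    its book value is written down evenly over its estimated shelf life."""
    return acquisition_cost / shelf_life_years

def book_value(acquisition_cost, shelf_life_years, years_elapsed):
    """Remaining book value after a given number of years, floored at zero."""
    remaining = acquisition_cost - yearly_amortization(acquisition_cost, shelf_life_years) * years_elapsed
    return max(remaining, 0.0)

# Hypothetical dataset bought for 120,000, expected to hold value for 4 years.
print(yearly_amortization(120_000, 4))   # 30000.0 per year
print(book_value(120_000, 4, 2))         # 60000.0 left on the books after 2 years
```

Short-shelf-life data, by contrast, would never sit on the books like this: it would flow through the accounts as inventory does.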

Here is that emerging skillset: data trader. Being a data trader means that you: a) know where to look for interesting datasets, b) have business contacts with the people who own them, c) can intuitively gauge their market value and shelf life, d) can effectively negotiate their acquisition, and e) can do the same on the selling side. I think there is one more specific skill to add: the intuitive ability to associate the data I am trading with the proper algorithms of artificial intelligence, just to blow some life into otherwise soulless databases. One more comes to my mind: the skill to write and enforce contracts which effectively protect the acquired data from infringement and theft.

Cool. There are the servers, and there is the data. Now, we need to market it somehow. The capacity to invent and market digital products based on cloud computing, i.e. on lots of server power combined with lots of data and with agile artificial neural networks, is another aspect of the business model. As I think of it, it comes to my mind that the whole fashion for Blockchain technology and its emergent products – cryptocurrencies and smart contracts – arose when the technology of servers passed a critical threshold, allowing us to play with computational power as a fixed asset.

I am very much Schumpeterian, i.e. I am quite convinced that Joseph Schumpeter’s theory of business cycles was and still is a bloody deep vision, which states, among other things, that with the advent of new technologies and new assets, some incumbent technologies and assets will inevitably disappear. Before inevitability consumes itself, a transitory period happens, when old assets coexist with the new ones, and choosing the right cocktail thereof is an art and a craft, requiring piles of cash in the bank account, just to keep the business agile and navigable.

Another thing strikes me: the type of programming languages that are emerging. Python, R, Solidity: all that stuff is primarily about managing data. Twenty years ago, programming was mostly about… well, about programming, i.e. about creating algorithms to make those electronics do what we want. Today, programming is more and more about data management. When we invent new languages for a new type of business, we really mean business, as a collective intelligence.

It had to come. I mean, in me. That mild obsession of mine about collective intelligence just had to poke its head from around the corner. Whatever. Let’s go down that rabbit hole. Collective intelligence consists in an intelligent structure experimenting with many alternative versions of itself whilst staying coherent. The whole business of cloud computing, as it rises and before it matures, consists very largely in experimenting with many alternative versions of claims on data and claims on server power, as well as with many alternative digital products sourced therefrom. Some combinations are fitter than others. What are the criteria of fitness? At the business scale, it would be return on investment, I guess. Still, at the collective level of whole societies, it would be about the capacity to assure high employment and a low average workload per person. Yes, Sir Keynes, it still holds.

As I indulge in obsessions, I go to another one of mine: the role of cities in our civilization. In my research, I have noticed strange regularities as regards the density of urban population. When I compute a compound indicator defined as the density of urban population divided by the general density of population, or [DU/DG], that coefficient enters into strange correlations with other socio-economic variables. One of the most important observations I made about it is that the overall DU/DG for the whole planet is consistently growing. There is a growing difference in social density between cities and the countryside. See Demographic anomalies – the puzzle of urban density, from May 14th, 2020, in order to get an idea of it. I think that we, humans, invented cities as complex technologies which consist in stacking a large number of homo sapiens (for some humans, it is just allegedly sapiens, let’s face it) on a relatively small surface, with a twofold purpose: that of preserving and developing agricultural land as a food base, and that of fabricating new social roles for new humans, through intense social interaction in cities. My question regarding the rise of technologies in cloud computing is whether it is concurrent with growing urban density, or whether, conversely, it is a countering force to that growth. In other words, are those big clouds of data on big servers a by-product of citification, or is it rather something completely new, possibly able to supplant cities in their role of factories making new social roles?
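The [DU/DG] coefficient is simple to compute, and a numerical sketch makes the definition concrete. The figures below are purely hypothetical, just to show the arithmetic:

```python
def urban_density_coefficient(urban_pop, urban_area_km2, total_pop, total_area_km2):
    """Compute [DU/DG]: the density of urban population (DU)
    divided by the general density of population (DG)."""
    du = urban_pop / urban_area_km2    # urban density, people per km2
    dg = total_pop / total_area_km2    # general density, people per km2
    return du / dg

# Hypothetical country: 30 million people living in cities covering 10,000 km2,
# out of 50 million people in total on 300,000 km2 of territory.
print(round(urban_density_coefficient(30e6, 1e4, 50e6, 3e5), 1))  # 18.0
```

A rising [DU/DG] over time would then mean exactly what the paragraph above says: cities getting socially denser relative to the countryside.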

When I think about cloud computing in terms of collective intelligence, I perceive it as a civilization-wide mechanism which helps make sense of the growing amount of information generated by a growing mankind. It is a bit like an internal control system inside a growing company. Cloud computing is essentially a pattern of maintaining internal cohesion inside the civilization. Funny how it plays on words. Clouds form in the atmosphere when the density of water vapour passes a critical threshold. As the density of vaporized water per 1 cubic metre of air grows, other thresholds get passed. The joyful, creamy clouds morph into rain clouds, i.e. clouds able to re-condense water from vapour back to liquid. I think that technologies of cloud computing do precisely that. They collect sparse, vaporized data and condense it into effective action in and upon the social environment.

Now comes the funny part. Rain clouds turn into storm clouds when they get really thick, i.e. when wet and warm air – thus air with a lot of water vaporized in it and a lot of kinetic energy in its particles – collides with much colder and drier air. Rain clouds pile up and start polarizing their electric charges. The next thing we know, lightning starts hitting, winds become scary, etc. Can a cloud of data pile up to the point of becoming a storm cloud of data, when it enters in contact with a piece of civilisation poor in data and low on energy? Well, this is something I observe with social media and their impact. Any social medium – Twitter, Facebook, Instagram, whatever you please – is, essentially, a computed cloud of data. When it collides with a population poor in data (i.e. poor in connection with real life and the real world), and low on energy (not much of a job, not much adversity confronted, not really a pile of business being done), data polarizes in the cloud. Some of it flows to the upper layers of the cloud, whilst another part, the heavier one, flows down to the bottom layer and starts attracting haphazard discharges of lighter, more sophisticated data from the land underneath. The land underneath is the non-digital realm of social life. The so-polarized cloud of data becomes sort of aggressive and scary. It teaches humans to seek shelter and protection from it.

Metaphors have various power. This one, namely equating a cloud of data to an atmospheric cloud, seems pretty kickass. It leads me to concluding that cloud computing arises as a new, big digital business because there are good reasons for it to do so. There is more and more of us, humans, on the planet. More and more of us live in cities, in a growing social density, i.e. with more and more social interactions. Those interactions inevitably produce data (e.g. #howcouldtheyhavedoneittome), whence growing information wealth of our civilisation, whence the computed clouds of data.

Metaphors have practical power, too, namely that of making me shoot educational videos. I made two of them, sort of in the stride of writing. Here they are, to your pleasure and leisure (in brackets, you have links to YouTube): International Economics #3 The rise of cloud computing [https://youtu.be/FerCBcsGyq0], for one, and Managerial Economics and Economic Policy #4 The growth of cloud computing and what can governments do about it [https://youtu.be/J-T4QQDEdlU], for two.

A flow I can ride, rather than a storm I should fear

My editorial on YouTube

I am in an intellectually playful frame of mind, and I decide to play with Keynes and probability. It has been like 4 weeks that I have been messing around with the theory of probability, and yesterday my students told me they have a problem with Keynes. I mean, not with Sir John Maynard Keynes as a person, but more sort of with what he wrote. I decided to connect those two dots. Before John Maynard Keynes wrote his ‘General Theory of Employment, Interest, and Money’, published in 1936, he wrote a few other books, and among them was ‘A Treatise on Probability’ (1921).

I am deeply convinced that mathematics expresses our cognitive take on that otherwise little-known, chaotic stuff we call reality, for lack of a better label. I am going to compare John Maynard Keynes’s approaches to, respectively, probability and economics, so as to find connections. I start with the beginning of Chapter I, entitled ‘The Meaning of Probability’, in Keynes’s Treatise on Probability:

Part of our knowledge we obtain direct; and part by argument. The Theory of Probability is concerned with that part which we obtain by argument, and it treats of the different degrees in which the results so obtained are conclusive or inconclusive. In most branches of academic logic, such as the theory of the syllogism or the geometry of ideal space, all the arguments aim at demonstrative certainty. They claim to be conclusive. But many other arguments are rational and claim some weight without pretending to be certain. In Metaphysics, in Science, and in Conduct, most of the arguments, upon which we habitually base our rational beliefs, are admitted to be inconclusive in a greater or less degree. Thus for a philosophical treatment of these branches of knowledge, the study of probability is required. […] The Theory of Probability is logical, therefore, because it is concerned with the degree of belief which it is rational to entertain in given conditions, and not merely with the actual beliefs of particular individuals, which may or may not be rational. Given the body of direct knowledge which constitutes our ultimate premises, this theory tells us what further rational beliefs, certain or probable, can be derived by valid argument from our direct knowledge. This involves purely logical relations between the propositions which embody our direct knowledge and the propositions about which we seek indirect knowledge. […] Writers on Probability have generally dealt with what they term the “happening” of “events.” In the problems which they first studied this did not involve much departure from common usage. But these expressions are now used in a way which is vague and ambiguous; and it will be more than a verbal improvement to discuss the truth and the probability of propositions instead of the occurrence and the probability of events’.

See? Something interesting. I think most of us connect the concept of probability to that experiment which we used to perform in high school: toss a coin 100 times, see how many times you get tails, how many occurrences of heads you had, etc. Tossing a coin is empirical: we make very few assumptions and we just observe. How is it possible, then, for anybody to even hypothesise that probability is a science of propositions rather than hard facts?

Now, here is the thing with John Maynard Keynes (and I address this passage to all those of my students who struggle with understanding what the hell John Maynard meant): John Maynard Keynes had a unique ability to sell his ideas, and his ideas came from his experience. Whatever general principles you can read in Keynes’s writings, and however irrefutable he suggests these principles are, John Maynard tells us the same kind of story that everybody tells: the story of his own existence. He just tells it in so elegantly sleek a way that most people feel disarmed and conquered. Yet, convincing is not the same as true. Even the most persuasive theorists – and John Maynard Keynes could persuade the s**t out of most common mortals – can be wrong. How can they be wrong? Well, when I fail to own my own story, i.e. when I am just too afraid of looking the chaos of life straight in the eyes (which is elegantly called ‘cognitive bias’), then I tell just the nice little story which I would like to hear, in order to calm down my own fear.

Let’s try to understand John Maynard Keynes’s story of existence, which leads to seeing probabilities as a type of logic rather than data. I browse through his ‘Treatise on Probability’. I’m patient. I know he will give himself away sooner or later. Everybody does. Well, let’s say that according to my experience of conversations with dead people via their writings, each of them ends up by telling me, through his very writing, what kind of existential story made him tell the elegantly packaged theoretical story in the title of the book. Gotcha’, Sir Keynes! Part I – Fundamental Ideas – Chapter III, ‘The Measurement of Probabilities’, page 22 in the PDF I am linking to: ‘If we pass from the opinions of theorists to the experience of practical men, it might perhaps be held that a presumption in favour of the numerical valuation of all probabilities can be based on the practice of underwriters and the willingness of Lloyd’s to insure against practically any risk. Underwriters are actually willing, it might be urged, to name a numerical measure in every case, and to back their opinion with money. But this practice shows no more than that many probabilities are greater or less than some numerical measure, not that they themselves are numerically definite. It is sufficient for the underwriter if the premium he names exceeds the probable risk. But, apart from this, I doubt whether in extreme cases the process of thought, through which he goes before naming a premium, is wholly rational and determinate; or that two equally intelligent brokers acting on the same evidence would always arrive at the same result. In the case, for instance, of insurances effected before a Budget, the figures quoted must be partly arbitrary. There is in them an element of caprice, and the broker’s state of mind, when he quotes a figure, is like a bookmaker’s when he names odds. 
Whilst he may be able to make sure of a profit, on the principles of the bookmaker, yet the individual figures that make up the book are, within certain limits, arbitrary. He may be almost certain, that is to say, that there will not be new taxes on more than one of the articles tea, sugar, and whisky; there may be an opinion abroad, reasonable or unreasonable, that the likelihood is in the order—whisky, tea, sugar; and he may, therefore be able to effect insurances for equal amounts in each at 30 per cent, 40 per cent, and 45 per cent. He has thus made sure of a profit of 15 per cent, however absurd and arbitrary his quotations may be’.  
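The bookmaker’s arithmetic in that passage is easy to verify. Assuming the broker insures an equal amount (say, 100) against a new tax on each of the three articles, and that at most one of the three taxes actually happens (as Keynes stipulates), a quick sketch shows where the guaranteed 15 per cent comes from:

```python
# Keynes's example: premiums quoted, in per cent of the insured amount.
premiums = {"whisky": 30, "tea": 40, "sugar": 45}
stake = 100  # equal amount insured against a new tax on each article

# Premiums are collected up front on all three contracts.
collected = sum(premiums.values())  # 30 + 40 + 45 = 115

# At most one of the three taxes will be introduced,
# so at most one full stake is ever paid out.
guaranteed_profit = collected - stake  # 115 - 100 = 15

print(guaranteed_profit)  # 15, i.e. 15 per cent of one stake
```

The point of the passage survives the arithmetic intact: the profit is certain even though each individual quotation is, as Keynes puts it, partly arbitrary.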

See? Told you he’s got a REAL story to tell, Sir Keynes. You just need to follow him home and see whom he’s hanging out with. He is actually hanging out with financial brokers and insurers. He observes them and concludes there is no way of predicting the exact probability of the complex occurrences they essentially bet money on. There is some deeply intuitive mental process taking place in their minds, which makes them guess correctly whether insuring a ship full of cotton, for reimbursable damages worth X amount of money, is worth an insurance premium of Y money.

The story that John Maynard Keynes tells through his ‘Treatise on Probability’ is the story of the wild, exuberant capitalism of the early 1920s, right after World War I and after the epidemic of Spanish flu. It was a frame of mind that pushed people to run towards a mirage of wealth, and they would run towards it so frantically because they wanted to run away from memories of horrible things. Sometimes we assume that whatever can possibly catch us from behind is so frightening that whatever we can run towards is worth running towards. In such a world, probability is a hasty evaluation of odds, with no time left for elaborate calculations. There are so many opportunities to catch, and so much fear to run away from, that I don’t waste my time thinking about what an event actually is. It is just the ‘have I placed my bets right?’ thing. I think I understand it, as I recently experienced very much the same (see A day of trade. Learning short positions).

The very same existential story, just more seasoned and marinated in the oils of older age, can be seen in John Maynard Keynes’s ‘General Theory of Employment, Interest, and Money’. I read the ‘Preface’, dated December 13th, 1935, where the last paragraph says: ‘The composition of this book has been for the author a long struggle of escape, and so must the reading of it be for most readers if the author’s assault upon them is to be successful,—a struggle of escape from habitual modes of thought and expression. The ideas which are here expressed so laboriously are extremely simple and should be obvious. The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify, for those brought up as most of us have been, into every corner of our minds’. The same line of logic is present in country-specific prefaces that follow, i.e. to national translations of ‘General Theory’ published in Germany, France, and Japan.

In 1935, John Maynard Keynes had lived the exuberance of the 1920s and the sobering cruelty of the 1930s. He felt like telling a completely new story, yet the established theory, that of classical economics, would resist. How can you overcome resistance of that type? One of the strategies we can use is to take the old concepts and just present them in a new way, and I think this is very largely what John Maynard Keynes did. He took the well-known ideas, such as aggregate output, average wage etc., and made a desperate effort to reframe them. In the preface to the French edition of ‘General Theory’, there is a passage which, I believe, sums up some 50%, if not more, of all the general theorizing to be found in this book. It goes: ‘I believe that economics everywhere up to recent times has been dominated, much more than has been understood, by the doctrines associated with the name of J.-B. Say. It is true that his ‘law of markets’ has been long abandoned by most economists; but they have not extricated themselves from his basic assumptions and particularly from his fallacy that demand is created by supply. Say was implicitly assuming that the economic system was always operating up to its full capacity, so that a new activity was always in substitution for, and never in addition to, some other activity. Nearly all subsequent economic theory has depended on, in the sense that it has required, this same assumption. Yet a theory so based is clearly incompetent to tackle the problems of unemployment and of the trade cycle. Perhaps I can best express to French readers what I claim for this book by saying that in the theory of production it is a final break-away from the doctrines of J.- B. Say and that in the theory of interest it is a return to the doctrines of Montesquieu’.

Good. Sir Keynes assumes that it is a delicate thing to keep the economic system in balance. Why? Well, Sir Keynes knows it because he had lived it. That preface to the French edition of ‘General Theory’ is dated February 20th, 1939. We are all the way through the Great Depression, Hitler has already overtaken Austria and Czechoslovakia, and the United States are in the New Deal. Things don’t balance themselves by themselves, it is true. Yet, against this general assumption of equilibrium-is-something-precarious, the development which follows, in ‘General Theory’ goes exactly in the opposite direction. John Maynard Keynes builds a perfect world of equations, where Savings equal Investment, Investment equals Amortization, and generally things are equal to many other things. Having claimed the precarity of economic equilibrium, Sir Keynes paints one in bright pink.

I think that Keynes tried to express radically new ideas with old concepts, whence the confusion. He wanted to communicate the clearly underrated power of change vs that of homeostasis, yet he kept thinking in terms of, precisely, homeostasis between absolute aggregates, e.g. the sum of all proceeds anyone can have from a given amount of business is equal to the value conveyed by the same amount of business (this is my own, completely unauthorized summary of the principle which Keynes called ‘effective demand’).

The ‘General Theory of Employment, Interest, and Money’ was somehow competing for the interest of readers with another theory, phrased out practically at the same moment, namely the theory of business cycles by Joseph Alois Schumpeter. I perceive the difference between the respective takes by Keynes and Schumpeter, on the general turbulence of existence, in the acknowledgment of chaos and complexity. Keynes says: ‘Look, folks. This, I mean that whole stuff around, is bloody uncertain and volatile. Still, the good news is that I can wrap it up, just for you, in an elegant theory with nice equations, and then you will have a very ordered picture of chaos’. Joseph Alois Schumpeter retorts: ‘Not quite. What we perceive as chaos is simply complex change, too complex for being grasped once and for all. There is a cycle of change, and we are part of the cycle. We are in the cycle, not the other way around (i.e. cycle is not in us). What we can understand, and even exploit, is the change in itself’.

Where do I stand in all that? I am definitely more Schumpeterian than Keynesian. I prefer dishevelled reality to any nicely ordered and essentially false picture thereof. Yes, existence is change, and any impression of permanence is temporary. My recent intellectual wrestling with stochastic processes (see We really don’t see small change) showed me that even when I use quite elaborate analytical tools, such as mean-reversion, I keep stumbling upon my purely subjective partition of perceivable reality into the normal order, and the alarming chaos (see The kind of puzzle that Karl Friedrich was after).

A vision of a game comes to my mind. This is me vs the universe. Looks familiar? Right you are. That’s exactly the kind of game each of us plays throughout time. I make a move, and I wait for the universe to make its own. I have a problem: I don’t really know what kind of phenomenon I can count as a move made by the universe. I need to guess: has the universe already made its move, in that game with me, or not yet? If I answer ‘yes’, I react. I assume that what has just happened is informative about the way my existence works. If, on the other hand, I guess that the universe has not yet figured out any plausible way to put me in check, I wait and observe. Which is better, day after day: assuming that the universe made its move, or sitting and waiting? I can very strongly feel this dilemma in my learning of investment in the stock market. Something happened. Prices have changed. Should I react immediately, or should I wait?

I provisionally claim that it depends. The universe moves at an uneven speed. By ‘provisionally’ I mean I claim it until I die, and then someone else will take on claiming the same, just as provisionally. Yet, all that existential instability acknowledged, there are rhythms I can follow. As regards my investment, I discovered that the most sensible rhythm to follow beats on the passive side of my investment portfolio. Every month, I collect the rent from an apartment, downtown, and I invest that rent in the stock market. I discovered that when I orchestrate my own thinking into that monthly rhythm of inflow in equity, it sort of works nicely. I collect the rent around the 5th day of each month, and for like one week beforehand, I do my homework about the market. When the rent comes, I have a scenario in mind, usually with a few question marks, i.e. with uncertainty to deal with. I play my investment game for 1 – 3 days, with occasional adjustments, and this is my move. Then I let the universe (the stock market in this case) make its own move over the next 3 – 4 weeks, and I repeat the same cycle over and over again.

I make a short move, and I let the universe make a long move. Is it a sensible strategy? From my point of view, there are two reasons for answering ‘yes’ to that question. First of all, it works in purely financial terms. I have learnt to wait patiently for an abnormally good opportunity to make profits. When I go too fast, like every day is a decision day, I usually get entangled in a game of my own illusions, and I lose money on transactions which I don’t quite understand. When I take my time, pace myself, and define a precise window for going hunting, usually something appears in that window, and I can make good money. Second of all, it is something I have sort of learnt generally and existentially: chaos is there, and I am there, and a good way to be alongside the chaos is to find a rhythm. When I follow my beat, chaos becomes a flow I can ride, rather than a storm I should fear.