Different paths

I keep digging in the business models of hydrogen-oriented companies, more specifically five of them: 

>> Fuel Cell Energy https://investor.fce.com/Investors/default.aspx

>> Plug Power https://www.ir.plugpower.com/overview/default.aspx

>> Green Hydrogen Systems https://investor.greenhydrogen.dk/

>> Nel Hydrogen https://nelhydrogen.com/investor-relations/

>> Next Hydrogen (previously BioHEP Technologies Ltd.) https://nexthydrogen.com/investor-relations/why-invest/

I am studying their current reports. This is the type of report which listed companies publish when something special happens, something which goes beyond the normal course of everyday business and can affect shareholders. I have already started with Fuel Cell Energy and their current report from July 12th, 2022 (https://d18rn0p25nwr6d.cloudfront.net/CIK-0000886128/b866ae77-6f4a-421e-bedd-906cb92850d7.pdf ), where they disclose a deal with a group of financial institutions: Jefferies LLC, B. Riley Securities, Inc., Barclays Capital Inc., BMO Capital Markets Corp., BofA Securities, Inc., Canaccord Genuity LLC, Citigroup Global Markets Inc., J.P. Morgan Securities LLC and Loop Capital Markets LLC. Strange kind of deal, I should add. Those nine financial firms are supposed to either buy, or intermediate in selling to third parties, parcels of 95 000 000 shares in the equity of Fuel Cell Energy. The tricky part is that the face value of those shares is supposed to be $0,0001 per share, just as is the case with the 837 000 000 ordinary shares already outstanding, whilst the market value of Fuel Cell Energy’s shares is currently above $4,00 per share, i.e. tens of thousands of times the nominal value, which implies a substantial injection of new capital.
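To get a feel for the scale of that operation, here is a quick back-of-the-envelope sketch in Python; the $4,00 market price is a rough working number taken from the paragraph above, not a figure from the report itself:

```python
# Back-of-the-envelope scale check for the FuelCell Energy share sale agreement.
# Figures: 95 000 000 new shares, nominal (par) value $0.0001, market price assumed ~ $4.00.

shares_offered = 95_000_000
par_value = 0.0001          # $ per share, as stated in the report
market_price = 4.00         # $ per share, rough market level at the time (assumption)

aggregate_par = shares_offered * par_value        # $9 500 of nominal value
aggregate_market = shares_offered * market_price  # $380 000 000 at market price

print(f"Aggregate nominal value: ${aggregate_par:,.0f}")
print(f"Potential proceeds at market price: ${aggregate_market:,.0f}")
print(f"Market price / par value: {market_price / par_value:,.0f}x")
```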

It looks as if the part of equity in Fuel Cell Energy which is free floating in the stock market – quite a tiny part of their share capital – was becoming subject to quick financial gambling. I don’t like it. Whatever. Let’s go further, i.e. to the next current report of Fuel Cell Energy, that from July 7th, 2022 (https://d18rn0p25nwr6d.cloudfront.net/CIK-0000886128/77053fbf-f22a-4288-b702-6b82a039f588.pdf ). It brings updates on two projects:

>> The Toyota Project: a 2,3 megawatt trigeneration platform for Toyota at the Port of Long Beach, California.

>> The Groton Project: a 7.4 MW platform at the U.S. Navy Submarine Base in Groton, Connecticut.

Going further back in time, I browse through the current report from June 9th, 2022 (https://d18rn0p25nwr6d.cloudfront.net/CIK-0000886128/9f4b19f0-0a11-4d27-acd2-f0881fdefbc3.pdf ). It is the official version of a press release regarding financial and operational results of Fuel Cell Energy by the end of the 1st quarter 2022. As I am reading through it, I find data about other projects:

>> Joint Development Agreement with ExxonMobil, related to carbon capture and generation, which includes the 7,4 MW LIPA Yaphank fuel cell project

>>  a carbon capture project with Canadian Natural Resources Limited

>> a program with the U.S. Department of Energy regarding solid oxide. I suppose that ‘solid oxide’ stands for solid oxide fuel cells, which use a solid ceramic electrolyte; the fuel is oxidized on one side of that ceramic membrane and produces energy in the process.

I pass to the current reports of Plug Power (https://www.ir.plugpower.com/financials/sec-filings/default.aspx ). Interesting things start when I go back to the current report from June 23rd, 2022 (https://d18rn0p25nwr6d.cloudfront.net/CIK-0001093691/36efa8c2-a675-451b-a41f-308221f5e612.pdf ). This is a summary presentation of what looks like the company’s strategy. Apparently, Plug Power plans to have 13 plants producing green hydrogen running in the United States by 2025, with a total expected yield of 500 tons per day. In a more immediate perspective, the company plans to locate 5 new plants in the U.S. over 2022 (total capacity of 70 tons per day) and 2023 (200 tons per day). Further, I read that what I thought was a hydrogen-focused company has, in fact, a broader spectrum of operations: eFuel and methanol, ammonia, vehicle refueling, blending and heating, oil refining, and the storage of renewable energy.

As part of its strategy, Plug Power announces the acquisition of companies supposed to bring additional technological competences: Frames Group (https://www.frames-group.com/ ) with power transmission systems and technology for building electrolyzers, ACT (Applied Cryo Technologies: https://www.appliedcryotech.com/ ) for cryogenics, and Joule (https://www.jouleprocess.com/about ) for the liquefaction of hydrogen. My immediate remark as regards those acquisitions, still warm from the intellectual oven, is that Plug Power is acquiring a broad technological base rather than a specialized one. Officially, those acquisitions serve to enhance Plug Power’s capacity as regards the deployment of hydrogen-focused technologies. Yet, as I rummage through the websites of those acquired companies, their technological competences go far beyond hydrogen.

Sort of adjacent to that current report is a piece of news, still on Plug Power’s investor-relations site, from June 8th, 2022. It regards the deployment of a project in Europe, more specifically in the Port of Antwerp-Bruges (https://www.ir.plugpower.com/press-releases/news-details/2022/Plug-to-Build-Large-Scale-Green-Hydrogen-Generation-Plant-in-Europe-at-Port-of-Antwerp-Bruges/default.aspx ). This is supposed to be something labelled as a ‘Gigafactory’.

A little bit earlier this year, on my birthday, May 9th, Plug Power published a current report (https://d18rn0p25nwr6d.cloudfront.net/CIK-0001093691/203fd9c3-5302-4fa1-9edd-32fe4905689c.pdf ) coupled with a quarterly financial report (https://d18rn0p25nwr6d.cloudfront.net/CIK-0001093691/c7ad880f-71ff-4b58-8265-bd9791d98740.pdf ). Apparently, in the 1st quarter 2022, they had revenues 96% higher than 1Q 2021. Nice. There are interesting operational goals signaled in that current report. Plug Power plans to reduce services costs on a per unit basis by 30% in the 12 months following the report, thus until the end of the 1st quarter 2023. The exact quote is: ‘Plug remains focused on delivering on our previously announced target to reduce services costs on a per unit basis by 30% in the next 12 months, and 45% by the end of 2023. We are pleased to report that we have begun to see meaningful improvement in service margins on fuel cell systems and related infrastructure with a positive 30% increase in first quarter of 2022 versus the fourth quarter of 2021. The service margin improvement is a direct result of the enhanced technology GenDrive units that were delivered in 2021 which reduce service costs by 50%. The performance of these enhanced units demonstrates that the products are robust, and we expect these products will help support our long-term business needs. We believe service margins are tracking in the right direction with potential to break even by year end’.

When a business purposefully and effectively works on optimizing its profit margins, and the corresponding costs, it is a step forward in the lifecycle of the technologies used. It marks the passage from early development towards late development; in other words, it is the phase when the company starts getting control of the small economic details of its technology.

I switch to the next company on my list, namely to Green Hydrogen Systems (Denmark, https://investor.greenhydrogen.dk/ ). They do not follow the SEC classification of reports, and, in order to get an update on their current developments, I go to their ‘Announcements & News’ section (https://investor.greenhydrogen.dk/announcements-and-news/default.aspx ). On July 18th, 2022, Green Hydrogen Systems held an extraordinary General Meeting of shareholders. They amended their Articles of Association as regards the Board of Directors, and the new version is: ‘The board of directors consists of no less than four and no more than nine members, all of whom must be elected by the general meeting. Members of the board of directors must resign at the next annual general meeting, but members of the board of directors may be eligible for re-election’. At the same extraordinary General Meeting, three new directors were elected to the Board, on top of the six already there.

To the extent that I know the Scandinavian ways of corporate governance, the appointment of new directors to the Board usually comes with new business ties for the company. Those people are supposed to be something like intermediaries between the company and external entities (research units? other companies? NGOs?). That change in the Board of Directors at Green Hydrogen Systems suggests a broadening of their network. That intuition is somewhat confirmed by an earlier announcement, from June 13th (https://investor.greenhydrogen.dk/announcements-and-news/news-details/2022/072022-Green-Hydrogen-Systems-announces-changes-to-the-Board-of-Directors-and-provides-product-status-update/default.aspx ). The three new members of the Board come, respectively, from: Vestas Wind Systems, Siemens Energy, and Sonnedix (https://www.sonnedix.com/ ).

Still earlier this year, on April 12th, Green Hydrogen Systems announced ‘design complications in its HyProvide® A-Series platform’, and said complications are expected to adversely affect financial performance in 2022 (https://investor.greenhydrogen.dk/announcements-and-news/news-details/2022/Green-Hydrogen-Systems-announces-technical-design-complications-in-its-HyProvide-A-Series-platform/default.aspx ). When I think about it, design normally comes before implementation, and therefore before any financial performance based thereon. When ‘design complications’ are serious enough for the company to disclose them and announce a possible negative impact on the financial side of the house, it means serious mistakes were made years earlier, when that design was being conceptualized. I say ‘years’ because I notice the trademark symbol ‘®’ by the name of the technology. That means there had been time to: a) figure out the design, and b) register it as a trademark. That suggests at least 2 years, maybe more.

I quickly sum up my provisional conclusions from browsing the current reports of Fuel Cell Energy, Plug Power, and Green Hydrogen Systems. I can see three different courses of events as regards the business models of those companies. At Fuel Cell Energy, broadly understood marketing, including financial marketing, seems to be the name of the game. Both the technology and the equity of Fuel Cell Energy seem to be merchandise for trading. My educated guess is that the management of Fuel Cell Energy is trying to attract more financial investors to the game, and to close more technological deals, of the joint-venture type, at the operational level. It further suggests an attempt at broadening the business network of the company, whilst keeping strategic ownership in the hands of the initial founders. As for Plug Power, the development I see is largely quantitative. They are broadening their technological base, including the acquisition of strategically important assets, expanding their revenues, and ramping up their operational margins. This is a textbook type of industrial development. Finally, Green Hydrogen Systems still seems to be in the phase of early development, with serious adjustments needed to both the technology owned and the team that runs it.

Those hydrogen-oriented companies seem to be following different paths and to be at different stages in the lifecycle of their technological base.

Ugly little cherubs

I am working on my long-term investment strategy, and I keep using Warren Buffett’s tenets of investment (Hagstrom, Robert G., The Warren Buffett Way, p. 98, Wiley, Kindle Edition).

At the same time, one of my strategic goals is coming true, progressively: other people reach out to me and ask whether I would agree to advise them on their investment in the stock market. People see my results, sometimes I talk to them about my investment philosophy, and it seems to catch on.

This is both a blessing and a challenge. My dream, 2 years ago, when I was coming back to the business of regular investing in the stock market, was to create, with time, something like a small investment fund specialized in funding highly innovative, promising start-ups. It looks like that dream is progressively becoming reality. Reality requires realistic and intelligible strategies. I need to phrase my own experience with investment in a manner which is both understandable and convincing to other people.

As I am thinking about it, I want to articulate my strategy along three logical paths. Firstly, what is the logic of my current portfolio? Why am I holding the investment positions I am holding? Why in these proportions? How have I come to have that particular portfolio? If I can verbally explain the process of my investing so far, I will know what kind of strategy I have been following up to now. This is the first step, and the next one is to formulate a strategy for the future. In one of my recent updates (Tesla first in line), I briefly introduced my portfolio, such as it was on December 2nd, 2021. Since then, I did some thinking, most of all in reference to the investment philosophy of Warren Buffett, and I made some moves. I came to the conclusion that my portfolio was spread across a bit too many stocks, and the whole was somehow baroque. By ‘baroque’ I mean the type of structure where we can have a horribly ugly little cherub, accompanied by just as ugly a little shepherd, but the whole looks nice due to the presence of a massive golden rim woven around the ugliness.

I formed an idea of which positions were the ugly cherubs in my portfolio from December 2nd, and I kicked them out of the picture. In the list below, these entities are marked in slashed bold italic:

>> Tesla (https://ir.tesla.com/#tab-quarterly-disclosure),

>> Allegro.eu SA (https://about.allegro.eu/ir-home ),

>> Alten (https://www.alten.com/investors/ ),

>> Altimmune Inc (https://ir.altimmune.com/ ),

>> Apple Inc (https://investor.apple.com/investor-relations/default.aspx ),

>> CureVac NV (https://www.curevac.com/en/investor-relations/overview/ ),

>> Deepmatter Group PLC (https://www.deepmatter.io/investors/ ), 

>> FedEx Corp (https://investors.fedex.com/home/default.aspx ),

>> First Solar Inc (https://investor.firstsolar.com/home/default.aspx )

>> Inpost SA (https://www.inpost.eu/investors )

>> Intellia Therapeutics Inc (https://ir.intelliatx.com/ )

>> Lucid Group Inc (https://ir.lucidmotors.com/ )

>> Mercator Medical SA (https://en.mercatormedical.eu/investors/ )

>> Nucor Corp (https://www.nucor.com/investors/ )

>> Oncolytics Biotech Inc (https://ir.oncolyticsbiotech.com/ )

>> Solaredge Technologies Inc (https://investors.solaredge.com/ )

>> Soligenix Inc (https://ir.soligenix.com/ )

>> Vitalhub Corp (https://www.vitalhub.com/investors )

>> Whirlpool Corp (https://investors.whirlpoolcorp.com/home/default.aspx )

>> Biogened (https://biogened.com/ )

>> Biomaxima (https://www.biomaxima.com/325-investor-relations.html )

>> CyfrPolsat (https://grupapolsatplus.pl/en/investor-relations )

>> Emtasia (https://elemental-asia.biz/en/ )

>> Forposta (http://www.forposta.eu/relacje_inwestorskie/dzialalnosc_i_historia.html )

>> Gameops (http://www.gameops.pl/en/about-us/ )

>> HMInvest (https://grupainwest.pl/relacje )

>> Ifirma (https://www.ifirma.pl/dla-inwestorow )

>> Moderncom (http://moderncommercesa.com/wpmccom/en/dla-inwestorow/ )

>> PolimexMS (https://www.polimex-mostostal.pl/en/reports/raporty-okresowe )

>> Selvita (https://selvita.com/investors-media/ )

>> Swissmed (https://swissmed.com.pl/?menu_id=8 )  

Why did I put those specific investment positions into the bag labelled ‘ugly little cherubs in the picture’? Here comes a cognitive clash between the investment philosophy I used to have and that of Warren Buffett and Berkshire Hathaway, which I have been studying in depth. Before, I was using a purely probabilistic approach, according to which the stock market is so unpredictable that my likelihood of failure, on any individual investment, is greater than the likelihood of success, and, therefore, the more I spread my portfolio between different stocks, the less exposed I am to the risk of a complete fuck-up. As I studied the investment philosophy of Warren Buffett, I had great behavioural insights as regards my decisions. Diversifying one’s portfolio is cool, yet it can lead to careless individual choices. If my portfolio is really diversified, each individual position weighs so little that I am tempted to overlook its important features. At the end of the day, I might land with a bag full of potatoes instead of a chest full of gems.

I decided to kick out the superfluous. What did I put in this category? The superfluous investment positions which I kicked out shared some common characteristics, which I reconstructed from the history of the corresponding ‘buy’ orders. Firstly, these were comparatively small positions, hundreds of euros at best. This is one of the lessons from Warren Buffett. Small investments matter little, and they are probably going to stay this way. There is no point in collecting stocks which don’t matter to me. They give us a false sense of security, which is detrimental to the focus on capital gains.

Secondly, I realized that I had bought those ugly little cherubs by affinity to something else, not for their own sake. Two of them, FedEx and Allegro, are in the business of express delivery. I made a ton of money on their stock, just as on the stock of Deutsche Post, during the trough of the pandemic, when retail distribution went mostly into the ‘online order >> express delivery’ pipeline. It was back then, and then I sold out, and then I thought ‘why not try the same hack again?’. The ‘why not…?’ question was easy to answer, actually: because times change, and the commodity markets have adapted to the pandemic. FedEx and Allegro have returned to what they used to be: solid businesses without much charm for me.

Four others – Soligenix, Altimmune, CureVac and Oncolytics Biotech – are biotechnological companies. Once again: I made a ton of money in 2020 on biotech companies, because of the pandemic. Now, emotions in the market have settled, and biotech companies are back to what they used to be, namely interesting investments endowed with high risk, high potential reward, and a bottomless capacity for burning cash. Those companies are what Tesla used to be a decade ago. I kept serious positions in a few other biotech businesses: Intellia Therapeutics, Biogened, Biomaxima, and Selvita. I want to keep a few such undug gems in my portfolio, yet too much would be too much.

Thirdly, I had a loss on all of those ugly little cherubs I have just kicked out of my portfolio. Summing up, these were small positions, casually opened without much strategic thinking, and they were bringing me a loss. I could have waited for a profit, but I preferred to sell them out and to concentrate my capital on the really promising stocks, which I nailed down using the method of intrinsic value. I realized that my portfolio was what it was, one week ago, before I started strategizing consciously, because I had a hard time finding balance between two different motivations: running away from the danger of massive loss, on the one hand, and focusing on investments with a true potential for long-term gains, on the other.

I focus more specifically on the concept of intrinsic value. As Warren Buffett uses it, intrinsic value is based on what he calls ‘owner’s earnings’ from a business. Those owner’s earnings are capitalized at the risk-free yield on sovereign bonds, i.e. treated as a perpetual stream discounted at that rate. The financial statement used for calculating intrinsic value is the cash-flow statement of the company in question, plus external data as regards the average annual yield on sovereign bonds. The basic formula to calculate owner’s earnings goes like: net income after tax + amortization charges – capital expenditures. Once that is nailed down, I divide those owner’s earnings by the interest rate on long-term sovereign bonds. For my positions in the US stock market, I use the long-term yield on US federal bonds, i.e. 1,35% a year. As regards my portfolio in the Polish stock market, I use the 3,42% yield on long-term Polish sovereign bonds.
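Here is that computation as a minimal Python sketch; the function names are just my own working labels, not anything from Hagstrom’s book:

```python
def owners_earnings(net_income, amortization, capex):
    """Owner's earnings, simplified: net income after tax + amortization - capital expenditures."""
    return net_income + amortization - capex

def intrinsic_value(oe, bond_yield):
    """Capitalize owner's earnings as a perpetuity at the long-term sovereign bond yield (e.g. 0.0135)."""
    return oe / bond_yield
```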

I have calculated that intrinsic value for a few of my investments (I mean those I kept in my portfolio), on the basis of their financial results for 2020, and compared it to their market capitalisation. Then, additionally, I did the same calculation based on their published (yet unaudited) cash-flow for Q3 2021. Here are the results I had for Tesla. Net income 2020 $862,00 mln plus amortization charges 2020 $2 322,00 mln minus capital expenditures 2020 $3 132,00 mln equals owner’s earnings 2020 $52,00 mln. Divided by 1,35%, that gives an intrinsic value of $3 851,85 mln. Market capitalization on December 6th, 2021: $1 019 000,00 mln. The intrinsic value is more than two orders of magnitude smaller than the market capitalisation. Looks risky.
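Plugging the Tesla 2020 figures quoted above into that formula reproduces the numbers in this paragraph (all amounts in millions of dollars):

```python
# Tesla, fiscal year 2020, figures in $ mln, as quoted above.
oe_2020 = 862.0 + 2_322.0 - 3_132.0      # owner's earnings: 52.0
iv_2020 = oe_2020 / 0.0135               # intrinsic value: about 3 851.85
print(oe_2020, round(iv_2020, 2))
print(round(1_019_000 / iv_2020))        # market cap is roughly 265 times that intrinsic value
```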

Let’s see the Q3 2021 unaudited cash-flows. Here, I extrapolate the numbers for the 9 months of 2021 over the whole year 2021: I multiply them by 4/3. Extrapolated net income for 2021 is $4 401,33 mln, extrapolated amortization charges are $2 750,67 mln, and extrapolated capital expenditures are $7 936,00 mln, which gives extrapolated owner’s earnings of $4 401,33 mln + $2 750,67 mln – $7 936,00 mln = –$784,00 mln. Capitalized at 1,35%, that is a negative intrinsic value: on this metric, the heavy capital expenditures of 2021 eat up more than the whole of net income plus amortization. Still a lot of risk in that biggest investment position of mine. We live and we learn, as they say.

Another stock: Apple. With the economic size of a medium-sized country, Apple seems solid. Let’s walk it through the computational path of intrinsic value. There is an important methodological remark to formulate for this one. In the cash-flow statement of Apple for 2020-2021 (Apple Inc. ends its fiscal year at the end of September in the calendar year), under the category of ‘Investing activities’, most of the business pertains to buying and selling financial assets. It goes like:

Investing activities, in millions of USD:

>> Purchases of marketable securities (109 558)

>> Proceeds from maturities of marketable securities: 59 023

>> Proceeds from sales of marketable securities: 47 460

>> Payments for acquisition of property, plant and equipment (11 085)

>> Payments made in connection with business acquisitions, net (33)

>> Purchases of non-marketable securities (131)

>> Proceeds from non-marketable securities: 387

>> Bottom line: Cash generated by/(used in) investing activities (14 545)

Now, when I look at the thing through the lens of Warren Buffett’s investment tenets, anything that happens with and through financial securities is retention of cash in the business. It just depends on what exact form we want to keep that cash in. Transactions grouped under the heading of ‘Purchases of marketable securities (109 558)’, for example, are not capital expenditures. They do not lead to exchanging cash money against productive technology. In all that list of investing activities, only two categories, namely ‘Payments for acquisition of property, plant and equipment (11 085)’ and ‘Payments made in connection with business acquisitions, net (33)’, are capital expenditures sensu stricto. All the other categories, although placed in the account of investing activities, are labelled as such just because they pertain to transactions on assets. From Warren Buffett’s point of view they all mean retained cash.

Therefore, when I calculate owner’s earnings for Apple, based on their latest annual cash-flow, I go like:

>> Net Income $94 680 mln + Depreciation and Amortization $11 284 mln + Purchases of marketable securities $109 558 mln + Proceeds from maturities of marketable securities $59 023 mln + Proceeds from sales of marketable securities $47 460 mln – Payments for acquisition of property, plant and equipment $11 085 mln – Payments made in connection with business acquisitions, net $33 mln + Purchases of non-marketable securities $131 mln + Proceeds from non-marketable securities $387 mln = Owner’s earnings $311 405 mln.

I divide that number by the 1,35% annual yield of long-term Treasury bonds in the US, and I get an intrinsic value of $23 067 037 mln, against a market capitalisation floating around $2 600 000 mln, which leaves a huge margin of the former over the latter. Good investment.
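For transparency, here is the same arithmetic as a Python sketch, with transactions on marketable and non-marketable securities treated as retained cash, exactly as described above (all figures in millions of dollars):

```python
# Apple, fiscal year 2020-2021, figures in $ mln, per the cash-flow statement quoted above.
net_income = 94_680
depreciation_amortization = 11_284

# Under the adjustment used here, only these two items count as capital expenditures:
capex = 11_085 + 33   # property, plant and equipment + business acquisitions

# Everything else in 'investing activities' is treated as cash retained in the business:
retained_as_securities = 109_558 + 59_023 + 47_460 + 131 + 387

owners_earnings_apple = (net_income + depreciation_amortization
                         + retained_as_securities - capex)
print(owners_earnings_apple)            # 311 405
print(owners_earnings_apple / 0.0135)   # about 23 067 037
```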

I pass to another one of my investments, First Solar Inc. (https://investor.firstsolar.com/financials/sec-filings/default.aspx ). Same thing: investment activities consist most of all in moves pertinent to financial assets. It looks like:

>> Net income (loss) $398,35 mln

>> Depreciation, amortization and accretion $232,93 mln

>> Impairments and net losses on disposal of long-lived assets $35,81 mln

… and then come the Cash flows from investing activities:

 >> Purchases of property, plant and equipment ($416,64 mln)

>> Purchases of marketable securities and restricted marketable securities ($901,92 mln)

>> Proceeds from sales and maturities of marketable securities and restricted marketable securities $1 192,83 mln

>> Other investing activities ($5,5 mln)

… and therefore, from the perspective of owner’s earnings, the relevant investing flow is not, as stated officially, minus $131,23 mln of net cash used in investing activities. Net of transactions on financial assets, which I treat as retained cash, it is: – $416,64 mln + $901,92 mln + $1 192,83 mln – $5,5 mln = $1 672,61 mln. Combined with the aforementioned net income, amortization and fiscally compensated impairments on long-lived assets, it makes owner’s earnings of $2 339,7 mln, and an intrinsic value of $173 311,11 mln, against some $10 450 mln in market capitalization. Once again, good and solid in terms of Warren Buffett’s margin of safety.
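And the same sketch applied to the First Solar figures quoted above (all figures in millions of dollars):

```python
# First Solar, figures in $ mln, per the cash-flow items quoted above.
net_income = 398.35
depreciation_amortization_accretion = 232.93
impairments = 35.81

capex = 416.64 + 5.5                          # property, plant and equipment + other investing activities
retained_as_securities = 901.92 + 1_192.83    # purchases + proceeds of marketable securities, treated as retained cash

owners_earnings_fslr = (net_income + depreciation_amortization_accretion + impairments
                        + retained_as_securities - capex)
print(round(owners_earnings_fslr, 2))         # about 2 339.70
print(round(owners_earnings_fslr / 0.0135))   # about 173 311
```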

I start using the method of intrinsic value for my investments, and it gives interesting results. It allows me to distinguish, with a precise gauge, between high-risk investments and the low-risk ones.

Tesla first in line

Once again, a big gap in my blogging. What do you want – it happens when the academic year kicks in. As it kicks in, I need to divide my attention between scientific research and writing, on the one hand, and my teaching on the other hand.

I feel like taking a few steps back, namely back to the roots of my observation. I observe two essential types of phenomena, as a scientist: technological change, and, contiguously to that, the emergence of previously unexpected states of reality. Well, I guess we all observe the latter, we just sometimes don’t pay attention. I narrow it down a bit. When it comes to technological change, I am just bewildered with the amounts of cash that businesses have started holding, across the board, amidst an accelerating technological race. Twenty years ago, any teacher of economics would tell their students: ‘Guys, cash is the least productive asset of all. Keep just the sufficient cash to face the most immediate expenses. All the rest, invest it in something that makes sense’. Today, when I talk to my students, I tell them: ‘Guys, with the crazy speed of technological change we are observing, cash is king, like really. The greater reserves of cash you hold, the more flexible you stay in your strategy’.

Those abnormally big amounts of cash that businesses have tended to hold these last years have two dimensions in terms of research. On the one hand, it is economics and finance, and yet, on the other hand, it is management. For quite some time, digital transformation has been about the only thing worth writing about in management science, but this, namely the crazy accumulation of cash balances in corporate balance sheets, is definitely something worth writing about too. Still, there is amazingly little published research on the general topic of cash flow and cash management in business, just as there is very little on financial liquidity in business. The latter topic is developed almost exclusively in the context of banks, mostly the central ones. Maybe it is all that craze about abominable capitalism and the general claim that money is evil. I don’t know.

Anyway, it is interesting. Money, when handled at the microeconomic level, tells one hell of a story about our behaviour, our values, our mutual trust, and our emotions. Money held in corporate balance sheets tells one hell of a story about decision making. I explain. Please, consider the amount of money you carry around with you, like the contents of your wallet (credit cards included) plus whatever you have available instantly on your phone. Done? Visualised? Good. Now, ask yourself what percentage of all those immediately available monetary balances you use during one average day. Done? Analysed? Good. In my case, it would be like 0,5%. Yes, 0,5%. I did that intellectual exercise with my students, many times. They usually hit no more than 10%, and they are gobsmacked. Their first reaction is WOKEish: ‘So I don’t really need all that money, right. Money is pointless, right?’. Not quite, my dear students. You need all that money; you just need it in a way which you don’t immediately notice.

There is a model in the theory of complex systems, called the ant colony (see for example: Chaouch, Driss & Ghedira 2017[1]; Asghari & Azadi 2017[2]; Emdadi et al. 2019[3]; Gupta & Srivastava 2020[4]; Di Caprio et al. 2021[5]). Yes, Di Caprio. Not the Di Caprio you intuitively think about, though. Ants communicate with pheromones. They drop pheromones somewhere they sort of know (how?) is going to be a signal for other ants. Each ant drops sort of a standard parcel of pheromones. Nothing to write home about, really, and yet enough to attract the attention of another ant which could drop its individual pheromonal parcel in the same location. With any luck, other ants will discover those chemical traces and validate them with their individual dumps of pheromones, and this is how the colony of ants maps its territories, mostly to find and exploit sources of food. It is interesting to find out that in order for all that chemical dance to work, there needs to be a minimum number of ants on the job. If there are not enough ants per square meter of territory, they just don’t find each other’s chemical imprints and have no chance to grab hold of the resources available. Yes, they all die prematurely. Money in human societies could be the equivalent of a pheromone. We need to spread it in order to carry out complex systemic changes. Interestingly, each of us, humans, is essentially blind to those complex changes: we just cannot quickly wrap our minds around the technical details of something apparently as simple as the manufacturing chain of a gardening rake (do you know where exactly and in what specific amounts all the ingredients of steel come from? I don’t).

All that talk about money made me think about my investments in the stock market. I feel like doing things the Warren Buffett way: going through the periodical financial reports of each company in my portfolio, and just passing in review what they do and what they are up to. By the way, talking about the Warren Buffett way, I recommend my readers go to the source: go to https://www.berkshirehathaway.com/ first, and then to  https://www.berkshirehathaway.com/2020ar/2020ar.pdf as well as to https://www.berkshirehathaway.com/qtrly/3rdqtr21.pdf . For now, I focus on studying my own portfolio according to the so-called ‘12 immutable tenets’ by Warren Buffett, which I allow myself to quote:

>> Business Tenets: Is the business simple and understandable? Does the business have a consistent operating history? Does the business have favourable long-term prospects?

>> Management Tenets: Is management rational? Is management candid with its shareholders? Does management resist the institutional imperative?

>> Financial Tenets: Focus on return on equity, not earnings per share. Calculate “owner earnings.” Look for companies with high profit margins. For every dollar retained, make sure the company has created at least one dollar of market value.

>> Market Tenets: What is the value of the business? Can the business be purchased at a significant discount to its value?

(Hagstrom, Robert G., The Warren Buffett Way, p. 98, Wiley, Kindle Edition)

Anyway, here is my current portfolio:

>> Tesla (https://ir.tesla.com/#tab-quarterly-disclosure),

>> Allegro.eu SA (https://about.allegro.eu/ir-home ),

>> Alten (https://www.alten.com/investors/ ),

>> Altimmune Inc (https://ir.altimmune.com/ ),

>> Apple Inc (https://investor.apple.com/investor-relations/default.aspx ),

>> CureVac NV (https://www.curevac.com/en/investor-relations/overview/ ),

>> Deepmatter Group PLC (https://www.deepmatter.io/investors/ ), 

>> FedEx Corp (https://investors.fedex.com/home/default.aspx ),

>> First Solar Inc (https://investor.firstsolar.com/home/default.aspx )

>> Inpost SA (https://www.inpost.eu/investors )

>> Intellia Therapeutics Inc (https://ir.intelliatx.com/ )

>> Lucid Group Inc (https://ir.lucidmotors.com/ )

>> Mercator Medical SA (https://en.mercatormedical.eu/investors/ )

>> Nucor Corp (https://www.nucor.com/investors/ )

>> Oncolytics Biotech Inc (https://ir.oncolyticsbiotech.com/ )

>> Solaredge Technologies Inc (https://investors.solaredge.com/ )

>> Soligenix Inc (https://ir.soligenix.com/ )

>> Vitalhub Corp (https://www.vitalhub.com/investors )

>> Whirlpool Corp (https://investors.whirlpoolcorp.com/home/default.aspx )

>> Biogened (https://biogened.com/ )

>> Biomaxima (https://www.biomaxima.com/325-investor-relations.html )

>> CyfrPolsat (https://grupapolsatplus.pl/en/investor-relations )

>> Emtasia (https://elemental-asia.biz/en/ )

>> Forposta (http://www.forposta.eu/relacje_inwestorskie/dzialalnosc_i_historia.html )

>> Gameops (http://www.gameops.pl/en/about-us/ )

>> HMInvest (https://grupainwest.pl/relacje )

>> Ifirma (https://www.ifirma.pl/dla-inwestorow )

>> Moderncom (http://moderncommercesa.com/wpmccom/en/dla-inwestorow/ )

>> PolimexMS (https://www.polimex-mostostal.pl/en/reports/raporty-okresowe )

>> Selvita (https://selvita.com/investors-media/ )

>> Swissmed (https://swissmed.com.pl/?menu_id=8 )   

Studying that whole portfolio of mine through the lens of Warren Buffett’s tenets looks like a piece of work, really. Good. I like working. Besides, as I have been reading Warren Buffett’s annual reports at https://www.berkshirehathaway.com/ , I realized that I need a real strategy for investment. So far, I have developed a few efficient hacks, such as, for example, the habit of keeping my s**t together when other people panic or get euphoric. Still, hacks are not the same as strategy.

I feel like adding my own general principles to Warren Buffett’s tenets. Principle #1: whatever I think I am doing, my essential strategy consists in running away from what I perceive as danger. Thus, what am I afraid of in my investments? What subjective fears and objective risk factors shape my actions as an investor? Once I understand that, I will know more about my own actions and decisions. Principle #2: the best strategy I can think of is a game with nature, where each move serves to learn something new about the rules of the game, and each move should be both decisive and leave me with a margin of safety. What am I learning as I make my moves? What are my typical moves, actually?

Let’s rock. Tesla (https://ir.tesla.com/#tab-quarterly-disclosure), comes first in line, as it is the biggest single asset in my portfolio. I start my digging with their quarterly financial report for Q3 2021 (https://www.sec.gov/Archives/edgar/data/1318605/000095017021002253/tsla-20210930.htm ), and I fish out their Consolidated Balance Sheets (in millions, except per share data, unaudited: https://www.sec.gov/Archives/edgar/data/1318605/000095017021002253/tsla-20210930.htm#consolidated_balance_sheets ).

Now, I assume that if I can understand why and how numbers change in the financial statements of a business, I can understand the business itself. The first change I can spot in that balance sheet is property, plant and equipment, net passing from $12 747 million to $17 298 million in 12 months. What exactly has happened? Here comes Note 7 – Property, Plant and Equipment, Net, in that quarterly report, and it starts with a specification of fixed assets comprised in that category. Good. What really increased in this category of assets is construction in progress, and here comes the descriptive explanation pertinent thereto: “Construction in progress is primarily comprised of construction of Gigafactory Berlin and Gigafactory Texas, expansion of Gigafactory Shanghai and equipment and tooling related to the manufacturing of our products. We are currently constructing Gigafactory Berlin under conditional permits in anticipation of being granted final permits. Completed assets are transferred to their respective asset classes, and depreciation begins when an asset is ready for its intended use. Interest on outstanding debt is capitalized during periods of significant capital asset construction and amortized over the useful lives of the related assets. During the three and nine months ended September 30, 2021, we capitalized $14 million and $52 million, respectively, of interest. During the three and nine months ended September 30, 2020, we capitalized $13 million and $33 million, respectively, of interest.

Depreciation expense during the three and nine months ended September 30, 2021 was $495 million and $1.38 billion, respectively. Depreciation expense during the three and nine months ended September 30, 2020 was $403 million and $1.13 billion, respectively. Gross property, plant and equipment under finance leases as of September 30, 2021 and December 31, 2020 was $2.60 billion and $2.28 billion, respectively, with accumulated depreciation of $1.11 billion and $816 million, respectively.

Panasonic has partnered with us on Gigafactory Nevada with investments in the production equipment that it uses to manufacture and supply us with battery cells. Under our arrangement with Panasonic, we plan to purchase the full output from their production equipment at negotiated prices. As the terms of the arrangement convey a finance lease under ASC 842, Leases, we account for their production equipment as leased assets when production commences. We account for each lease and any non-lease components associated with that lease as a single lease component for all asset classes, except production equipment classes embedded in supply agreements. This results in us recording the cost of their production equipment within Property, plant and equipment, net, on the consolidated balance sheets with a corresponding liability recorded to debt and finance leases. Depreciation on Panasonic production equipment is computed using the units-of-production method whereby capitalized costs are amortized over the total estimated productive life of the respective assets. As of September 30, 2021 and December 31, 2020, we had cumulatively capitalized costs of $1.89 billion and $1.77 billion, respectively, on the consolidated balance sheets in relation to the production equipment under our Panasonic arrangement.”

Good. I can try to wrap my mind around the contents of Note 7. Tesla is expanding its manufacturing base, including a Gigafactory in my beloved Europe. Expansion of the manufacturing capacity means significant, quantitative growth of the business. According to Warren Buffett’s philosophy: “The question of where to allocate earnings is linked to where that company is in its life cycle. As a company moves through its economic life cycle, its growth rates, sales, earnings, and cash flows change dramatically. In the development stage, a company loses money as it develops products and establishes markets. During the next stage, rapid growth, the company is profitable but growing so fast that it cannot support the growth; often it must not only retain all of its earnings but also borrow money or issue equity to finance growth” (Hagstrom, Robert G., The Warren Buffett Way, p. 104, Wiley, Kindle Edition). Tesla looks like they are in the phase of rapid growth. They have finally nailed down how to generate profits (yes, they have!), and they are expanding capacity-wise. They are likely to retain earnings and to be in need of cash, and that attracts my attention to another passage in Note 7: “Interest on outstanding debt is capitalized during periods of significant capital asset construction and amortized over the useful lives of the related assets”. If I understand correctly, the accounting treatment consists in not expensing the interest due on outstanding debt while the borrowed money is being used to finance the construction of productive assets; instead, that interest is added to the cost of the asset and charged to earnings, through depreciation, only after the corresponding asset starts working and paying its bills. That, in turn, tells me that lenders are being patient and confident with Tesla: they keep financing long construction periods, and they assume their unconditional claims on Tesla’s future cash flows (this is one of the possible ways to define outstanding debt) are secure.
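Just to visualize that mechanism, here is a toy illustration with made-up numbers (not Tesla’s): interest incurred during construction is added to the asset’s cost and only hits earnings later, through depreciation:

```python
# Hypothetical example: $100 mln borrowed at 5% to build an asset over 2 years,
# then the asset is used for 10 years. Interest incurred during construction is
# capitalized (added to the asset's cost) and depreciated, not expensed immediately.

loan = 100.0            # $ mln, hypothetical
rate = 0.05             # annual interest rate, hypothetical
construction_years = 2
useful_life = 10        # years of depreciation once the asset is in service

capitalized_interest = loan * rate * construction_years   # 10.0 $ mln added to the asset's cost
asset_cost = loan + capitalized_interest                  # 110.0 $ mln on the balance sheet
annual_depreciation = asset_cost / useful_life            # 11.0 $ mln hits earnings each year of use

print(capitalized_interest, asset_cost, annual_depreciation)
```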

Good. Now, I am having a look at Tesla’s Consolidated Statements of Operations (in millions, except per share data, unaudited: https://www.sec.gov/Archives/edgar/data/1318605/000095017021002253/tsla-20210930.htm#consolidated_statements_of_operations ). It is time to have a look at Warren Buffett’s Business Tenets as regards Tesla. Is the business simple and understandable? Yes, I think I can understand it. Does the business have a consistent operating history? No, operational results changed in 2020 and they keep changing. Tesla is passing from the stage of development (which took them a decade) to the stage of rapid growth. Does the business have favourable long-term prospects? Yes, they seem to have good prospects. The market of electric vehicles is booming (EV-Volumes[6]; IEA[7]).

Is Tesla’s management rational? Well, that’s another ball game. To develop in my next update.


[1] Chaouch, I., Driss, O. B., & Ghedira, K. (2017). A modified ant colony optimization algorithm for the distributed job shop scheduling problem. Procedia computer science, 112, 296-305. https://doi.org/10.1016/j.procs.2017.08.267

[2] Asghari, S., & Azadi, K. (2017). A reliable path between target users and clients in social networks using an inverted ant colony optimization algorithm. Karbala International Journal of Modern Science, 3(3), 143-152. http://dx.doi.org/10.1016/j.kijoms.2017.05.004

[3] Emdadi, A., Moughari, F. A., Meybodi, F. Y., & Eslahchi, C. (2019). A novel algorithm for parameter estimation of Hidden Markov Model inspired by Ant Colony Optimization. Heliyon, 5(3), e01299. https://doi.org/10.1016/j.heliyon.2019.e01299

[4] Gupta, A., & Srivastava, S. (2020). Comparative analysis of ant colony and particle swarm optimization algorithms for distance optimization. Procedia Computer Science, 173, 245-253. https://doi.org/10.1016/j.procs.2020.06.029

[5] Di Caprio, D., Ebrahimnejad, A., Alrezaamiri, H., & Santos-Arteaga, F. J. (2021). A novel ant colony algorithm for solving shortest path problems with fuzzy arc weights. Alexandria Engineering Journal. https://doi.org/10.1016/j.aej.2021.08.058

[6] https://www.ev-volumes.com/

[7] https://www.iea.org/reports/global-ev-outlook-2021/trends-and-developments-in-electric-vehicle-markets

I have proven myself wrong

I keep working on a proof-of-concept paper for the idea I baptized ‘Energy Ponds’. You can consult two previous updates, namely ‘We keep going until we observe’ and ‘Ça semble expérimenter toujours’, to keep track of the intellectual drift I am taking. This time, I am focusing on the end of the technological pipeline, namely on the battery-powered charging station for electric cars. First, I want to form an idea of the market for charging.

I take the case of France. In December 2020, the country had a total of 119 737 electric vehicles officially registered, which was +135% as compared to December 2019[1]. That number pertains only to fully electric vehicles, with plug-in hybrids left aside for the moment. With plug-in hybrids included, France had, in December 2020, 470 295 vehicles that need or might need the services of charging stations. According to the same source, there were 28 928 charging stations in France at the time, which makes 13 EVs per charging station. That coefficient is presented for 4 other European countries: Norway (23 EVs per charging station), UK (12), Germany (9), and the Netherlands (4).

I look up other sources. According to Reuters[2], there were 250 000 charging stations in Europe by September 2020, as compared to 34 000 in 2014. That means an average increase of 36 000 a year. I find a different estimation from Statista[3]: 2010 – 3 201; 2011 – 7 018; 2012 – 17 498; 2013 – 28 824; 2014 – 40 910; 2015 – 67 064; 2016 – 98 669; 2017 – 136 059; 2018 – 153 841; 2019 – 211 438; 2020 – 285 796.

On the other hand, the European Alternative Fuels Observatory supplies its own data, as regards the European Union, at https://www.eafo.eu/electric-vehicle-charging-infrastructure.

Number of EVs per charging station (source: European Alternative Fuels Observatory):

Year: EVs per charging station
2010: 14
2011: 6
2012: 3
2013: 4
2014: 5
2015: 5
2016: 5
2017: 5
2018: 6
2019: 7
2020: 9

The same EAFO site gives their own estimation as regards the number of charging stations in Europe:

Number of charging stations in Europe (source: European Alternative Fuels Observatory):

Year: high-power recharging points (more than 22 kW) in EU | normal charging stations in EU | total charging stations
2012: 257 | 10 250 | 10 507
2013: 751 | 17 093 | 17 844
2014: 1 474 | 24 917 | 26 391
2015: 3 396 | 44 786 | 48 182
2016: 5 190 | 70 012 | 75 202
2017: 8 723 | 97 287 | 106 010
2018: 11 138 | 107 446 | 118 584
2019: 15 136 | 148 880 | 164 016
2020: 24 987 | 199 250 | 224 237

Two conclusions jump to the eye. Firstly, there is just a very approximate count of charging stations. Numbers differ substantially from source to source. I can just guess that one of the reasons for that discrepancy is the distinction between officially issued permits to build charging points, on the one hand, and the actually active charging points, on the other hand. In Europe, building charging points for electric vehicles has become sort of a virtue, which governments at all levels like signaling. I guess there is some boasting and chest-puffing in the numbers those individual countries report.  

Secondly, high-power stations, charging with direct current at a power of at least 22 kW, gain in importance. In 2012, that category made 2,45% of the total charging network in Europe, and in 2020 that share climbed to 11,14%. This is an important piece of information as regards the proof-of-concept which I am building up for my idea of Energy Ponds. The charging station I placed at the end of the pipeline in the concept of Energy Ponds, which is supposed to earn a living for all the technologies and installations upstream of it, will be powered from a power storage facility. That means direct current, and most likely, high power.
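Just to make those two percentages explicit, here they are recomputed from the EAFO table above:

```python
# Share of high-power (> 22 kW) recharging points in the EU charging network,
# recomputed from the EAFO table quoted above.
share_2012 = 257 / 10_507        # about 0.0245, i.e. 2.45%
share_2020 = 24_987 / 224_237    # about 0.1114, i.e. 11.14%
print(f"{share_2012:.2%}", f"{share_2020:.2%}")
```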

On the whole, the www.eafo.eu site seems somewhat more credible than Statista, with all due respect for the latter, and thus I am reporting some data they present on the fleet of EVs in Europe. Here it comes, in a few consecutive tables below:

Passenger EVs in Europe (source: European Alternative Fuels Observatory):

Year: BEV (pure electric) | PHEV (plug-in hybrid) | total
2008: 4 155 | – | 4 155
2009: 4 841 | – | 4 841
2010: 5 785 | – | 5 785
2011: 13 395 | 163 | 13 558
2012: 25 891 | 3 712 | 29 603
2013: 45 662 | 32 474 | 78 136
2014: 75 479 | 56 745 | 132 224
2015: 119 618 | 125 770 | 245 388
2016: 165 137 | 189 153 | 354 290
2017: 245 347 | 254 473 | 499 820
2018: 376 398 | 349 616 | 726 014
2019: 615 878 | 479 706 | 1 095 584
2020: 1 125 485 | 967 721 | 2 093 206

Light Commercial EVs in Europe (source: European Alternative Fuels Observatory):

Year: BEV (pure electric) | PHEV (plug-in hybrid) | total
2008: 253 | – | 253
2009: 254 | – | 254
2010: 309 | – | 309
2011: 7 669 | – | 7 669
2012: 9 527 | – | 9 527
2013: 13 669 | – | 13 669
2014: 10 049 | – | 10 049
2015: 28 610 | – | 28 610
2016: 40 926 | 1 | 40 927
2017: 52 026 | 1 | 52 027
2018: 76 286 | 1 | 76 287
2019: 97 363 | 117 | 97 480
2020: 120 711 | 1 054 | 121 765

Bus EVs in Europe (source: European Alternative Fuels Observatory):

Year: BEV (pure electric) | PHEV (plug-in hybrid) | total
2008: 27 | – | 27
2009: 12 | – | 12
2010: 123 | – | 123
2011: 128 | – | 128
2012: 286 | – | 286
2013: 376 | – | 376
2014: 389 | 40 | 429
2015: 420 | 145 | 565
2016: 686 | 304 | 990
2017: 888 | 445 | 1 333
2018: 1 608 | 486 | 2 094
2019: 3 636 | 525 | 4 161
2020: 5 311 | 550 | 5 861

Truck EVs in Europe (source: European Alternative Fuels Observatory):

Year: BEV (pure electric) | PHEV (plug-in hybrid) | total
2008: 5 | – | 5
2009: 5 | – | 5
2010: 6 | – | 6
2011: 7 | – | 7
2012: 8 | – | 8
2013: 47 | – | 47
2014: 58 | – | 58
2015: 71 | – | 71
2016: 113 | 39 | 152
2017: 54 | 40 | 94
2018: 222 | 40 | 262
2019: 595 | 38 | 633
2020: 983 | 29 | 1 012

Structure of EV fleet in Europe as regards the types of vehicles (source: European Alternative Fuels Observatory):

Year: passenger EV | light commercial EV | bus EV | truck EV
2008: 93,58% | 5,70% | 0,61% | 0,11%
2009: 94,70% | 4,97% | 0,23% | 0,10%
2010: 92,96% | 4,97% | 1,98% | 0,10%
2011: 63,47% | 35,90% | 0,60% | 0,03%
2012: 75,09% | 24,17% | 0,73% | 0,02%
2013: 84,72% | 14,82% | 0,41% | 0,05%
2014: 92,62% | 7,04% | 0,30% | 0,04%
2015: 89,35% | 10,42% | 0,21% | 0,03%
2016: 89,39% | 10,33% | 0,25% | 0,04%
2017: 90,34% | 9,40% | 0,24% | 0,02%
2018: 90,23% | 9,48% | 0,26% | 0,03%
2019: 91,46% | 8,14% | 0,35% | 0,05%
2020: 94,21% | 5,48% | 0,26% | 0,05%

Summing it up a bit. The market of electric vehicles in Europe seems to be durably dominated by passenger cars. There is some fleet in the other categories of vehicles, and there is even some increase, but, for the moment, it all looks more like an experiment. Well, maybe electric buses are turning up somewhat more systematically.

The proportion between the fleet of electric vehicles and the infrastructure of charging stations still seems to be in a phase where the latter adjusts to the abundance of the former. Generally, the number of charging stations seems to be growing more slowly than the fleet of EVs. Thus, for my own concept, I assume that the coefficient of 9 EVs per charging station, on average, will stand still or slightly increase. For the moment, I take 9. I assume that my charging stations will have some 9 habitual customers each, plus a fringe of incidental ones.

From there, I think in the following terms. The number of times the average customer charges their car depends on the distance they cover. Apparently, there is something like a 100 km ≈ 50 kWh equivalence. I did not find detailed statistics on distances covered by electric vehicles as such, however I came across some Eurostat data on distances covered by all passenger vehicles taken together: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Passenger_mobility_statistics#Distance_covered . There is a lot of discrepancy between the 11 European countries studied for that metric, but the average is 12,49 km per day. My 9 average customers would make, in total, an average of 410,27 charging purchases of 50 kWh each per year. I checked the prices of fast charging with direct current: 2,3 PLN per 1 kWh in Poland[4], €0,22 per 1 kWh in France[5], $0,13 per 1 kWh in the US[6], £0,25 per 1 kWh in the UK[7]. Once converted to US$, it gives $0,59 in Poland, $0,26 in France, $0,35 in the UK, and, of course, $0,13 in the US. Even at the highest price, namely that in Poland, those 410,27 charging stops give barely more than $12 000 a year.
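Here is that revenue estimate as a quick sketch, assuming my 9 habitual customers, the 50 kWh per 100 km equivalence, and the Polish fast-charging price (the highest of the four quoted):

```python
# Rough annual revenue of one charging station with 9 habitual customers.
customers = 9
km_per_day = 12.49                 # average daily distance per passenger car (Eurostat)
kwh_per_100km = 50.0               # working equivalence assumed above
price_per_kwh_usd = 0.59           # Polish fast-charging price converted to USD

annual_km = km_per_day * 365 * customers            # about 41 000 km
annual_kwh = annual_km / 100.0 * kwh_per_100km      # about 20 500 kWh
charging_stops = annual_kwh / 50.0                  # about 410 stops of 50 kWh each
annual_revenue = annual_kwh * price_per_kwh_usd     # barely above $12 000

print(round(charging_stops, 2), round(annual_revenue))
```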

If I want to have a station able to charge 2 EVs at the same time, fast charging, and counting 350 kW per charging pile (McKinsey 2018[8]), I need 700 kW in total. Investment in batteries is like $600 ÷ $800 per 1 kW (Cole & Frazier 2019[9]; Cole, Frazier, Augustine 2021[10]), thus 700 * ($600 ÷ $800) = $420 000 ÷ $560 000. There is no way that investment pays back with $12 000 a year in revenue, and I haven’t even started talking about paying off the investment in all the remaining infrastructure of Energy Ponds: ram pumps, elevated tanks, semi-artificial wetlands, and hydroelectric turbines.

Now, I revert my thinking. Investment in the range of $420 000 ÷ $560 000, in the charging station and its batteries, gives a middle-of-the-interval value of $490 000. I found a paper by Zhang et al. (2019[11]) who claim that a charging station has a chance to pay off, as a business, when it sells some 5 000 000 kWh a year. At 50 kWh per charging session, that means some 100 000 sessions a year, i.e. roughly 274 sessions a day. Put back-to-back with the [50 kWh / 100 km] coefficient, it also corresponds to 10 000 000 km of driving a year which, divided by the average annual distance covered by European drivers, thus by 4 558,55 km, gives the equivalent of 2 193,68 average drivers. Even if I assume that one customer charges their electric vehicle as often as twice a week, thus some 104 times a year, covering 100 000 sessions would take nearly 1 000 regular customers. That is nowhere near my assumption of 9 habitual customers per station.
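The same break-even check as a quick sketch, using the 5 000 000 kWh threshold from Zhang et al. and the working assumptions above:

```python
# Break-even check for the charging station, under the assumptions used above.
breakeven_kwh_per_year = 5_000_000      # threshold reported by Zhang et al.
kwh_per_session = 50.0
avg_annual_km_per_driver = 4_558.55     # roughly 12.49 km/day * 365
sessions_per_customer_year = 2 * 52     # assumption: two charging stops a week

sessions_needed = breakeven_kwh_per_year / kwh_per_session           # 100 000 sessions a year
sessions_per_day = sessions_needed / 365                             # about 274 a day
driver_equivalents = (sessions_needed * 100 / avg_annual_km_per_driver)  # about 2 194 average drivers
customers_needed = sessions_needed / sessions_per_customer_year       # close to 1 000 regular customers

print(round(sessions_per_day), round(driver_equivalents), round(customers_needed))
```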

I need to stop and think. Essentially, I have proven myself wrong. I had been assuming that putting a charging station for electric vehicles at the end of the internal value chain in the overall infrastructure of Energy Ponds will solve the problem of making money on selling electricity. Turns out it makes even more problems. I need time to wrap my mind around it.


[1] http://www.avere-france.org/Uploads/Documents/161011498173a9d7b7d55aef7bdda9008a7e50cb38-barometre-des-immatriculations-decembre-2020(9).pdf

[2] https://www.reuters.com/article/us-eu-autos-electric-charging-idUSKBN2C023C

[3] https://www.statista.com/statistics/955443/number-of-electric-vehicle-charging-stations-in-europe/

[4] https://elo.city/news/ile-kosztuje-ladowanie-samochodu-elektrycznego

[5] https://particulier.edf.fr/fr/accueil/guide-energie/electricite/cout-recharge-voiture-electrique.html

[6] https://afdc.energy.gov/fuels/electricity_charging_home.html

[7] https://pod-point.com/guides/driver/cost-of-charging-electric-car

[8] McKinsey Center for Future Mobility, How Battery Storage Can Help Charge the Electric-Vehicle Market?, February 2018.

[9] Cole, Wesley, and A. Will Frazier. 2019. Cost Projections for Utility-Scale Battery Storage. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-73222. https://www.nrel.gov/docs/fy19osti/73222.pdf

[10] Cole, Wesley, A. Will Frazier, and Chad Augustine. 2021. Cost Projections for Utility-Scale Battery Storage: 2021 Update. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-79236. https://www.nrel.gov/docs/fy21osti/79236.pdf.

[11] Zhang, J., Liu, C., Yuan, R., Li, T., Li, K., Li, B., … & Jiang, Z. (2019). Design scheme for fast charging station for electric vehicles with distributed photovoltaic power generation. Global Energy Interconnection, 2(2), 150-159. https://doi.org/10.1016/j.gloei.2019.07.003

Ça semble expérimenter toujours

I continue with the idea I baptized ‘Projet Aqueduc’. I am preparing an article on this subject, of the proof-of-concept type. I am writing it in English, and I told myself it would be a good idea to reformulate in French what I have written so far, just to change the intellectual angle, stretch my legs a bit and take some distance.

A proof of concept follows a logic similar to any other scientific article, except that instead of exploring and verifying a theoretical hypothesis of the type ‘things work in the ABCD way, under conditions RTYU’, I explore and verify the hypothesis that a practical concept, such as that of ‘Projet Aqueduc’, has scientific foundations solid enough to make it worth working on and testing in real life. The scientific foundations come in two layers, so to speak. The base layer consists in reviewing the literature on the subject to see whether someone has already described similar solutions, and there, the trick is to explore different perspectives of similarity. Similar does not mean identical, does it? This literature review should yield a logical structure – a model – applicable to empirical research, with variables and constant parameters. Then comes the upper layer of the proof of concept, which consists in conducting the empirical research proper with that model.

For the moment, I am at the base layer. I am therefore reviewing the literature relevant to hydrological and hydroelectric solutions, while progressively forming a numerical model of ‘Projet Aqueduc’. In this update, I start with a brief recap of the concept and then move on to what I have managed to find in the literature. The basic concept of ‘Projet Aqueduc’ consists in placing, in the course of a river, pumps which work on the principle of the hydraulic ram and which therefore use the kinetic energy of the water to pump part of that water out of the riverbed, towards marshland structures whose function is to retain water in the local ecosystem. The hydraulic ram can pump vertically as well as horizontally, so before being retained in the marshes, the water passes through a structure similar to an elevated aqueduct (hence the French name of the concept), with flow-equalizing reservoirs, and then it descends towards the marshes through hydroelectric turbines. The latter produce energy which is then stored in a storage installation and, from there, sold in order to ensure the financial survival of the whole structure. Wind and/or photovoltaic installations can be added to optimize energy production on the land occupied by the whole structure. You can find a more elaborate description of the concept in my update entitled ‘Le Catch 22 dans ce jardin d’Eden’. The feasibility I want to demonstrate is the capacity of this structure to finance itself entirely from electricity sales, like a regular business, and thus to develop and endure without public subsidies. The practical solution I am considering very seriously as the sales outlet for electricity is a charging station for electric vehicles.

The basic approach I use in the proof of concept – thus my basic model – consists in representing the concept in question as a chain of technologies:

>> TCES – energy storage

>> TCCS – charging station for electric vehicles

>> TCRP – hydraulic ram pumping

>> TCEW – elevated equalizing tanks

>> TCCW – water conveyance and siphoning

>> TCWS – artificial equipment of the swamp-like structures

>> TCHE – hydroelectric turbines

>> TCSW – wind and photovoltaic installations

My initial intuition, which I intend to verify in my research through the literature, is that some of these technologies are rather predictable and well calibrated, while others are fuzzier and subject to change, thus less predictable. The predictable technologies are a sort of anchor for the whole concept, and the fuzzier ones are the object of experimentation.

I start the literature review with the environmental context, thus with hydrology. Variations in the water table, which is a scientific term for groundwater, seem to be the number one factor behind anomalies in water retention in artificial reservoirs (Neves, Nunes, & Monteiro 2020[1]). On the other hand, even without detailed hydrological modelling, there is substantial empirical evidence that the size of natural and artificial reservoirs in fluvial plains, as well as the density of their placement and the way they are exploited, have a major influence on practical access to water in local ecosystems. The size and density of wooded areas seem to act as an equalizing factor in the environmental influence of reservoirs (Chisola, Van der Laan, & Bristow 2020[2]). Compared to other types of technology, hydrology seems to lag a bit in terms of pace of innovation, and it also seems that innovation-management methods applied successfully elsewhere can work for hydrology, for example innovation networks or technology incubators (Wehn & Montalvo 2018[3]; Mvulirwenande & Wehn 2020[4]). Rural and agricultural hydrology seems to be more innovative than urban hydrology, by the way (Wong, Rogers & Brown 2020[5]).

What I find rather surprising is the apparent lack of scientific consensus about the quantity of water human societies need. Every assessment of the subject starts with ‘a lot and certainly too much’, and from there on, the ‘a lot’ and the ‘too much’ become rather fuzzy. I have found just one calculation so far, in Hogeboom (2020[6]), who maintains that the average person in developed countries consumes a total of 3800 litres of water per day, but this is a very holistic estimate, which includes indirect consumption through goods and services as well as transport. What is consumed directly, through the tap and the toilet flush, apparently remains a mystery to science, unless science considers the subject too down-to-earth to deal with it seriously.

There is an interesting research niche, which some of its representatives call ‘socio-hydrology’, which studies collective behaviour with respect to water and hydrological systems, and which is based on the empirical observation that said collective behaviour adapts, in a way both profound and pernicious, to the hydrological conditions the society in question lives with (Kumar et al. 2020[7]). It seems that we collectively adapt to increased water consumption through growing productivity in the exploitation of our hydrological resources, and average income per capita seems to be positively correlated with that productivity (Bagstad et al. 2020[8]). It appears, then, that the accumulation and superposition of numerous technologies, characteristic of developed countries, contributes to using water in an ever more productive way. In this context, there is interesting research by Mohamed et al. (2020[9]), which advances the thesis that an arid environment is not only a hydrological state but also a way of managing hydrological resources, on the basis of data which is always incomplete relative to a rapidly changing situation.

There is a question that comes more or less naturally: in the wake of socio-hydrological adaptation, has anyone presented a concept similar to what I present as ‘Projet Aqueduc’? Well, I have found nothing identical, yet there are some interestingly close ideas. In descriptive hydrology there is the concept of a pseudo-reservoir, meaning a structure, such as swamps or shallow aquifers, which does not retain water statically, like an artificial lake, but which slows down the circulation of water in the river’s collecting basin enough to modify the hydrological conditions in the ecosystem (Harvey et al. 2009[10]; Phiri et al. 2021[11]). On the other hand, there is a team of Australian researchers who invented a structure they call by the acronym STORES, whose full name is ‘short-term off-river energy storage’ (Lu et al. 2021[12]; Stocks et al. 2021[13]). STORES is a semi-artificial pumped-storage structure, where an artificial reservoir is built on top of a natural hillock located at some distance from the nearest river, and that reservoir receives water pumped artificially from the river. These Australian researchers advance, and give scientific evidence to support, the thesis that with a bit of ingenuity such a reservoir can be made to work in a closed loop with the river that feeds it, and thus to create a water-retention system. STORES seems to be relatively the closest to my concept of ‘Projet Aqueduc’, and what is striking is that I invented my idea for the environment of the alluvial plains of Europe, while STORES was developed for the arid, quasi-desert environment of Australia. Finally, there is the idea of so-called ‘rain gardens’, which are a technology for retaining rainwater in the urban environment, in horticultural structures, often placed on the roofs of buildings (Bortolini & Zanin 2019[14], for example).

I can provisionally conclude that everything related to hydrology strictly speaking, within the frame of ‘Projet Aqueduc’, is subject to rather unpredictable change. What I have been able to deduce from the literature looks like a soup boiling under a lid. There is potential for technological change, there is environmental and social pressure, but there are no recurrent institutional mechanisms yet to connect one to the other. The technologies TCEW (elevated equalizing tanks), TCCW (water conveyance and siphoning), and TCWS (artificial equipment of swamp-like structures) thus showing a fuzzy future, I move on to the TCRP technology of hydraulic ram pumping. I found two Chinese articles, which follow each other chronologically and which seem, moreover, to have been written by the same team of researchers: Guo et al. (2018[15]) and Li et al. (2021[16]). They show hydraulic ram technology from an interesting angle. On the one hand, the Chinese seem to have given real momentum to innovation in this specific field, at least much more momentum than I have been able to observe in Europe. On the other hand, the estimates of the effective height to which water can be pumped with state-of-the-art hydraulic rams are 50 metres in the 2018 article and 30 metres in the 2021 one, respectively. Given that the two articles seem to be the fruit of the same project, there was something like a fascination followed by a downward correction. Be that as it may, even the more conservative estimate of 30 metres is clearly better than the 20 metres I had been assuming until now.

The relative elevation achievable with hydraulic ram technology matters for the next technology in my chain, that of small hydroelectric turbines, the TCHE. The relative elevation of the water and the flow per second are the two key parameters that determine the electric power produced (Cai, Ye & Gholinia 2020[17]), and it turns out that in ‘Projet Aqueduc’, with elevation and flow largely controlled through hydraulic ram technology, the turbines become somewhat less dependent on natural conditions.

I found a marvellously encyclopaedic review of the parameters relevant to small hydroelectric turbines in Hatata, El-Saadawi, & Saad (2019[18]). Electric power is thus calculated as: Power = water density (1000 kg/m3) * gravitational acceleration constant (9,8 m/s2) * net head (metres) * Q (flow per second, m3/s).
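
To make that formula operational for the rest of my calculations, here is a minimal Python sketch of it. It is just the equation above rewritten as a function; the optional efficiency factor is my own addition (set to 1,0 by default, so the function reproduces the formula exactly as quoted), and the example numbers are purely illustrative.

```python
WATER_DENSITY = 1000.0  # kg/m3
GRAVITY = 9.8           # m/s2, as in the formula quoted above

def hydro_power_watts(net_head_m: float, flow_m3s: float, efficiency: float = 1.0) -> float:
    """Electric power of a small hydro turbine, in watts:
    Power = density * g * net head * flow, optionally scaled by an
    overall efficiency factor (my own addition; 1.0 reproduces the raw formula)."""
    return efficiency * WATER_DENSITY * GRAVITY * net_head_m * flow_m3s

# Illustration: a 20-metre head and 0.5 m3/s of flow give 98 000 W, i.e. 98 kW.
print(hydro_power_watts(20.0, 0.5) / 1000.0)  # -> 98.0
```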

The initial investment in such installations is calculated per unit of power, thus on the basis of 1 kilowatt, and is divided into 6 categories: construction of the water intake, the powerhouse strictly speaking, the turbines, the generator, the auxiliary equipment, and finally the transformer together with the outdoor substation. I also tell myself that – given the structure of ‘Projet Aqueduc’ – the investment in the construction of the water intake is in a way equivalent to the system of hydraulic rams and elevated tanks. In any case:

>> construction of the water intake, per 1 kW of power ($): 186,216 * Power^(-0,2368) * Head^(-0,597)

>> the powerhouse strictly speaking, per 1 kW of power ($): 1389,16 * Power^(-0,2351) * Head^(-0,0585)

>> the turbines, per 1 kW of power ($):

@ the Kaplan turbine: 39398 * Power^(-0,58338) * Head^(-0,113901)

@ the Francis turbine: 30462 * Power^(-0,560135) * Head^(-0,127243)

@ the radial impulse turbine: 10486,65 * Power^(-0,3644725) * Head^(-0,281735)

@ the Pelton turbine: 2 * the radial impulse turbine

>> the generator, per 1 kW of power ($): 1179,86 * Power^(-0,1855) * Head^(-0,2083)

>> the auxiliary equipment, per 1 kW of power ($): 612,87 * Power^(-0,1892) * Head^(-0,2118)

>> the transformer and the outdoor substation, per 1 kW of power ($): 281 * Power^(0,1803) * Head^(-0,2075)

Once the electric power is calculated, with the relative-head parameter assured by the hydraulic rams, I can calculate the initial investment in hydro-generation as the sum of the items mentioned above. Hatata, El-Saadawi, & Saad (2019 op. cit.) also recommend multiplying such a sum by a factor of 1,13 (a ‘you never know’ kind of factor, then), and assuming that current annual operating costs will fall somewhere between 1% and 6% of the initial investment.
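
To keep track of those cost functions while building the numerical model, here is a minimal Python sketch of them. The coefficients and exponents are transcribed from the list above, with the decimal commas read as decimal points; the interpretation of the arguments as installed power in kW and head in metres, as well as the names and structure of the functions, are my own assumptions.

```python
def cost_per_kw(power_kw: float, head_m: float, turbine: str = "kaplan") -> dict:
    """Initial investment per 1 kW of installed power, in USD, item by item,
    following the cost functions quoted above."""
    p, h = power_kw, head_m
    turbine_cost = {
        "kaplan": 39398 * p**-0.58338 * h**-0.113901,
        "francis": 30462 * p**-0.560135 * h**-0.127243,
        "radial_impulse": 10486.65 * p**-0.3644725 * h**-0.281735,
    }
    turbine_cost["pelton"] = 2 * turbine_cost["radial_impulse"]
    return {
        "water_intake": 186.216 * p**-0.2368 * h**-0.597,
        "powerhouse": 1389.16 * p**-0.2351 * h**-0.0585,
        "turbine": turbine_cost[turbine],
        "generator": 1179.86 * p**-0.1855 * h**-0.2083,
        "auxiliary_equipment": 612.87 * p**-0.1892 * h**-0.2118,
        "transformer_and_substation": 281 * p**0.1803 * h**-0.2075,
    }

def total_initial_investment(power_kw: float, head_m: float, turbine: str = "kaplan") -> float:
    """Sum of all per-kW cost items, scaled by installed power and by the 1.13 contingency factor."""
    return 1.13 * sum(cost_per_kw(power_kw, head_m, turbine).values()) * power_kw

# Illustration: a 100 kW installation with a 20-metre head and a Kaplan turbine.
capex = total_initial_investment(100, 20)
print(round(capex), "USD; annual O&M between", round(0.01 * capex), "and", round(0.06 * capex), "USD")
```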

Syahputra & Soesanti (2021[19]) study the case of the Progo river, endowed with a quite modest flow of 6,696 cubic metres per second, and located in Kulon Progo Regency (a special region within Yogyakarta, Indonesia). The system of small hydroelectric turbines there supplies electricity to 962 local households and creates a surplus of 4 263 951 kWh per year to be resold to external consumers. In another article, Sterl et al. (2020[20]) study the case of Suriname and advance an interesting thesis, namely that the development of installations based on renewable energies creates a phenomenon of appetite for energy which grows with the eating, and that such development in one source of energy – wind, for example – stimulates investment in installations based on other sources, thus hydro and photovoltaic.

These relatively recent studies corroborate those from a few years back, such as Vilanova & Balestieri (2014[21]) or Vieira et al. (2015[22]), with the general conclusion that small hydroelectric turbines have reached a degree of technological sophistication sufficient to generate an economically profitable quantity of energy. Besides, it seems that there is much to be gained in this field through optimizing the distribution of power between the different turbines. Back to the most recent publications, I have found quite robust feasibility studies for small hydroelectric turbines, which indicate that – provided one is prepared to accept a payback of around 10 to 11 years on the initial investment – small hydro can be exploited profitably even with a relative head below 20 metres (Arthur et al. 2020[23]; Ali et al. 2021[24]).

This is how I arrive at the final portion of the technological chain of ‘Projet Aqueduc’, namely energy storage (TCES) and TCCS, the charging station for electric vehicles. The power to install in a charging station seems to fall between 700 and 1000 kilowatts (Zhang et al. 2018[25]; McKinsey 2018[26]). Below 700 kilowatts, the station can become so hard to access for the average consumer, due to queues, that it can lose the trust of local customers. Conversely, anything above 1000 kilowatts is really useful only at peak hours in dense urban environments. There are concept studies for charging stations where the energy storage unit is fed from renewable sources (Al Wahedi & Bicer 2020[27]). Zhang et al. (2019[28]) present a ready-made business concept for a charging station located in an urban environment. Apparently, the profitability threshold is somewhere around 5 100 000 kilowatt hours sold per year.

In terms of storage technology strictly speaking, Li-ion batteries seem to be the base solution for now, although a combination with fuel cells or with hydrogen looks promising (Al Wahedi & Bicer 2020 op. cit.; Sharma, Panwar & Tripathi 2020[29]). In general, for the moment, Li-ion batteries show relatively the most sustained pace of innovation (Tomaszewska et al. 2019[30]; de Simone & Piegari 2019[31]; Koohi-Fayegh & Rosen 2020[32]). A recent article by Elmeligy et al. (2021[33]) presents an interesting concept of a mobile storage unit which could move between several charging stations. As for the initial investment required for a charging station, it still seems to be experimenting, but the margin is narrowing down to somewhere between $600 and $800 per 1 kW of power (Cole & Frazier 2019[34]; Cole, Frazier, Augustine 2021[35]).
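
A quick back-of-the-envelope check, in Python, of how those orders of magnitude hold together. The 5 100 000 kWh per year threshold and the $600 to $800 per kW range come from the sources quoted above; the choice of a 1000 kW station and the idea of expressing the threshold as a utilisation rate are my own illustrative assumptions.

```python
HOURS_PER_YEAR = 8760

def breakeven_utilisation(threshold_kwh_per_year: float, station_power_kw: float) -> float:
    """Share of the year the station would have to run at full power
    to sell the break-even volume of energy."""
    return threshold_kwh_per_year / (station_power_kw * HOURS_PER_YEAR)

def storage_capex_range(station_power_kw: float, low: float = 600.0, high: float = 800.0) -> tuple:
    """Rough range of initial investment, in USD, at $600 to $800 per kW of power."""
    return (station_power_kw * low, station_power_kw * high)

# Illustration with a 1000 kW station, the upper end of the 700-1000 kW range:
print(breakeven_utilisation(5_100_000, 1000))  # ~0.58, i.e. roughly 58% of the hours in a year at full power
print(storage_capex_range(1000))               # (600000.0, 800000.0)
```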


[1] Neves, M. C., Nunes, L. M., & Monteiro, J. P. (2020). Evaluation of GRACE data for water resource management in Iberia: a case study of groundwater storage monitoring in the Algarve region. Journal of Hydrology: Regional Studies, 32, 100734. https://doi.org/10.1016/j.ejrh.2020.100734

[2] Chisola, M. N., Van der Laan, M., & Bristow, K. L. (2020). A landscape hydrology approach to inform sustainable water resource management under a changing environment. A case study for the Kaleya River Catchment, Zambia. Journal of Hydrology: Regional Studies, 32, 100762. https://doi.org/10.1016/j.ejrh.2020.100762

[3] Wehn, U., & Montalvo, C. (2018). Exploring the dynamics of water innovation: Foundations for water innovation studies. Journal of Cleaner Production, 171, S1-S19. https://doi.org/10.1016/j.jclepro.2017.10.118

[4] Mvulirwenande, S., & Wehn, U. (2020). Fostering water innovation in Africa through virtual incubation: Insights from the Dutch VIA Water programme. Environmental Science & Policy, 114, 119-127. https://doi.org/10.1016/j.envsci.2020.07.025

[5] Wong, T. H., Rogers, B. C., & Brown, R. R. (2020). Transforming cities through water-sensitive principles and practices. One Earth, 3(4), 436-447. https://doi.org/10.1016/j.oneear.2020.09.012

[6] Hogeboom, R. J. (2020). The Water Footprint Concept and Water’s Grand Environmental Challenges. One earth, 2(3), 218-222. https://doi.org/10.1016/j.oneear.2020.02.010

[7] Kumar, P., Avtar, R., Dasgupta, R., Johnson, B. A., Mukherjee, A., Ahsan, M. N., … & Mishra, B. K. (2020). Socio-hydrology: A key approach for adaptation to water scarcity and achieving human well-being in large riverine islands. Progress in Disaster Science, 8, 100134. https://doi.org/10.1016/j.pdisas.2020.100134

[8] Bagstad, K. J., Ancona, Z. H., Hass, J., Glynn, P. D., Wentland, S., Vardon, M., & Fay, J. (2020). Integrating physical and economic data into experimental water accounts for the United States: Lessons and opportunities. Ecosystem Services, 45, 101182. https://doi.org/10.1016/j.ecoser.2020.101182

[9] Mohamed, M. M., El-Shorbagy, W., Kizhisseri, M. I., Chowdhury, R., & McDonald, A. (2020). Evaluation of policy scenarios for water resources planning and management in an arid region. Journal of Hydrology: Regional Studies, 32, 100758. https://doi.org/10.1016/j.ejrh.2020.100758

[10] Harvey, J.W., Schaffranek, R.W., Noe, G.B., Larsen, L.G., Nowacki, D.J., O’Connor, B.L., 2009. Hydroecological factors governing surface water flow on a low-gradient floodplain. Water Resour. Res. 45, W03421, https://doi.org/10.1029/2008WR007129.

[11] Phiri, W. K., Vanzo, D., Banda, K., Nyirenda, E., & Nyambe, I. A. (2021). A pseudo-reservoir concept in SWAT model for the simulation of an alluvial floodplain in a complex tropical river system. Journal of Hydrology: Regional Studies, 33, 100770. https://doi.org/10.1016/j.ejrh.2020.100770.

[12] Lu, B., Blakers, A., Stocks, M., & Do, T. N. (2021). Low-cost, low-emission 100% renewable electricity in Southeast Asia supported by pumped hydro storage. Energy, 121387. https://doi.org/10.1016/j.energy.2021.121387

[13] Stocks, M., Stocks, R., Lu, B., Cheng, C., & Blakers, A. (2021). Global atlas of closed-loop pumped hydro energy storage. Joule, 5(1), 270-284. https://doi.org/10.1016/j.joule.2020.11.015

[14] Bortolini, L., & Zanin, G. (2019). Reprint of: Hydrological behaviour of rain gardens and plant suitability: A study in the Veneto plain (north-eastern Italy) conditions. Urban forestry & urban greening, 37, 74-86. https://doi.org/10.1016/j.ufug.2018.07.003

[15] Guo, X., Li, J., Yang, K., Fu, H., Wang, T., Guo, Y., … & Huang, W. (2018). Optimal design and performance analysis of hydraulic ram pump system. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 232(7), 841-855. https://doi.org/10.1177%2F0957650918756761

[16] Li, J., Yang, K., Guo, X., Huang, W., Wang, T., Guo, Y., & Fu, H. (2021). Structural design and parameter optimization on a waste valve for hydraulic ram pumps. Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, 235(4), 747–765. https://doi.org/10.1177/0957650920967489

[17] Cai, X., Ye, F., & Gholinia, F. (2020). Application of artificial neural network and Soil and Water Assessment Tools in evaluating power generation of small hydropower stations. Energy Reports, 6, 2106-2118. https://doi.org/10.1016/j.egyr.2020.08.010.

[18] Hatata, A. Y., El-Saadawi, M. M., & Saad, S. (2019). A feasibility study of small hydro power for selected locations in Egypt. Energy Strategy Reviews, 24, 300-313. https://doi.org/10.1016/j.esr.2019.04.013

[19] Syahputra, R., & Soesanti, I. (2021). Renewable energy systems based on micro-hydro and solar photovoltaic for rural areas: A case study in Yogyakarta, Indonesia. Energy Reports, 7, 472-490. https://doi.org/10.1016/j.egyr.2021.01.015

[20] Sterl, S., Donk, P., Willems, P., & Thiery, W. (2020). Turbines of the Caribbean: Decarbonising Suriname’s electricity mix through hydro-supported integration of wind power. Renewable and Sustainable Energy Reviews, 134, 110352. https://doi.org/10.1016/j.rser.2020.110352

[21] Vilanova, M. R. N., & Balestieri, J. A. P. (2014). Hydropower recovery in water supply systems: Models and case study. Energy Conversion and Management, 84, 414-426. https://doi.org/10.1016/j.enconman.2014.04.057

[22] Vieira, D. A. G., Guedes, L. S. M., Lisboa, A. C., & Saldanha, R. R. (2015). Formulations for hydroelectric energy production with optimality conditions. Energy Conversion and Management, 89, 781-788. https://doi.org/10.1016/j.enconman.2014.10.048

[23] Arthur, E., Anyemedu, F. O. K., Gyamfi, C., Asantewaa-Tannor, P., Adjei, K. A., Anornu, G. K., & Odai, S. N. (2020). Potential for small hydropower development in the Lower Pra River Basin, Ghana. Journal of Hydrology: Regional Studies, 32, 100757. https://doi.org/10.1016/j.ejrh.2020.100757

[24] Ali, M., Wazir, R., Imran, K., Ullah, K., Janjua, A. K., Ulasyar, A., … & Guerrero, J. M. (2021). Techno-economic assessment and sustainability impact of hybrid energy systems in Gilgit-Baltistan, Pakistan. Energy Reports, 7, 2546-2562. https://doi.org/10.1016/j.egyr.2021.04.036

[25] Zhang, Y., He, Y., Wang, X., Wang, Y., Fang, C., Xue, H., & Fang, C. (2018). Modeling of fast charging station equipped with energy storage. Global Energy Interconnection, 1(2), 145-152. DOI:10.14171/j.2096-5117.gei.2018.02.006

[26] McKinsey Center for Future Mobility, How Battery Storage Can Help Charge the Electric-Vehicle Market?, February 2018,

[27] Al Wahedi, A., & Bicer, Y. (2020). Development of an off-grid electrical vehicle charging station hybridized with renewables including battery cooling system and multiple energy storage units. Energy Reports, 6, 2006-2021. https://doi.org/10.1016/j.egyr.2020.07.022

[28] Zhang, J., Liu, C., Yuan, R., Li, T., Li, K., Li, B., … & Jiang, Z. (2019). Design scheme for fast charging station for electric vehicles with distributed photovoltaic power generation. Global Energy Interconnection, 2(2), 150-159. https://doi.org/10.1016/j.gloei.2019.07.003

[29] Sharma, S., Panwar, A. K., & Tripathi, M. M. (2020). Storage technologies for electric vehicles. Journal of traffic and transportation engineering (english edition), 7(3), 340-361. https://doi.org/10.1016/j.jtte.2020.04.004

[30] Tomaszewska, A., Chu, Z., Feng, X., O’Kane, S., Liu, X., Chen, J., … & Wu, B. (2019). Lithium-ion battery fast charging: A review. ETransportation, 1, 100011. https://doi.org/10.1016/j.etran.2019.100011

[31] De Simone, D., & Piegari, L. (2019). Integration of stationary batteries for fast charge EV charging stations. Energies, 12(24), 4638. https://doi.org/10.3390/en12244638

[32] Koohi-Fayegh, S., & Rosen, M. A. (2020). A review of energy storage types, applications and recent developments. Journal of Energy Storage, 27, 101047. https://doi.org/10.1016/j.est.2019.101047

[33] Elmeligy, M. M., Shaaban, M. F., Azab, A., Azzouz, M. A., & Mokhtar, M. (2021). A Mobile Energy Storage Unit Serving Multiple EV Charging Stations. Energies, 14(10), 2969. https://doi.org/10.3390/en14102969

[34] Cole, Wesley, and A. Will Frazier. 2019. Cost Projections for Utility-Scale Battery Storage. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-73222. https://www.nrel.gov/docs/fy19osti/73222.pdf

[35] Cole, Wesley, A. Will Frazier, and Chad Augustine. 2021. Cost Projections for Utility-Scale Battery Storage: 2021 Update. Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-79236. https://www.nrel.gov/docs/fy21osti/79236.pdf.

Seasonal lakes

Once again, it’s been a while since I last blogged. What do you want, I am having a busy summer. Putting order in my own chaos, and, on top of that, putting order in other people’s chaos: all of this is quite demanding in terms of time and energy. What? Without trying to put order in chaos, that chaos might take less time and energy? Well, yes, but order looks tidier than chaos.

I am returning to the technological concept which I labelled ‘Energy Ponds’ (or ‘projet Aqueduc’ in French >> see: Le Catch 22 dans ce jardin d’Eden). You can find a description of that concept under the hyperlinked title provided. I am focusing on refining my repertoire of skills in scientific validation of technological concepts. I am passing in review some recent literature, and I am trying to find good narrative practices in that domain.

The general background of ‘Energy Ponds’ consists in natural phenomena observable in Europe as climate change progresses, namely: a) a long-term shift in the structure of precipitation, from snow to rain, b) the increasing occurrence of floods and droughts, c) the spontaneous re-emergence of wetlands. All these phenomena have one common denominator: increasingly volatile flow per second in rivers. The essential idea of Energy Ponds is to ‘financialize’ that volatile flow, so to say, i.e. to capture its local surpluses, store them for later, and use the very mechanism of storage itself as a source of economic value.

When water flows downstream, in a river, its retention can be approached as the opportunity for the same water to loop many times over the same specific portion of the collecting basin (of the river). Once such a loop is created, we can extend the average time that a liter of water spends in the whereabouts. Ram pumps, connected to storage structures akin to swamps, can create such an opportunity. A ram pump uses the kinetic energy of flowing water in order to pump some of that flow up and away from its mainstream. Ram pumps allow forcing a process which we otherwise know as natural. Rivers, especially in geological plains, where they flow relatively slowly, tend to build, with time, multiple ramifications. Those branchings can be directly observable at the surface, as meanders, floodplains or seasonal lakes, but much of that branching is underground, as pockets of groundwater. In this respect, it is useful to keep in mind that, mechanically, rivers are the drainpipes of rainwater from their respective basins. Another basic hydrological fact, useful to remember in the context of the Energy Ponds concept, is that, strictly speaking, retention of rainwater – i.e. a complete halt in its circulation through the collecting basin of the river – is rarely possible, and it is just as rarely a sensible idea to implement. Retention means rather a slowdown of the flow of rainwater through the collecting basin into the river.

One of the ways that water can be slowed down consists in making it loop many times over the same section of the river. Let’s imagine a simple looping sequence: water from the river is being ram-pumped up and away into retentive structures akin to swamps, i.e. moderately deep, spongy structures underground, with high capacity for retention, covered with a superficial layer of shallow-rooted vegetation. With time, as the swamp fills with water, the surplus is evacuated back into the river, by a system of canals. Water stored in the swamp will ultimately be evacuated, too, minus evaporation; it will just happen much more slowly, by the intermediary of groundwaters. In order to illustrate the concept mathematically, let’s suppose that we have water in the river flowing at a pace of, e.g., 43 m3 per second. We make it loop via ram pumps and retentive swamps, and suppose that, as a result of that looping, the speed of the flow is sliced by 3. In the long run we slow down the way that the river works as the local drainpipe: we slow it from 43 m3 per second down to [43/3 = 14,33…] m3 per second. As water from the river flows more slowly overall, it can yield more environmental services: each cubic meter of water has more time to ‘work’ in the ecosystem.
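
A trivially small Python sketch of that arithmetic, generalized to any slowdown factor (the factor of 3 above is just an example):

```python
def effective_drain_rate(river_flow_m3s: float, slowdown_factor: float) -> float:
    """Average pace at which the river keeps working as the local drainpipe,
    once water loops through ram pumps and retentive swamps."""
    return river_flow_m3s / slowdown_factor

print(effective_drain_rate(43.0, 3.0))  # -> 14.33..., as in the example above
```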

When I think of it, any human social structure, such as settlements, industries, infrastructures etc., needs to stay in balance with the natural environment. That balance is to be understood broadly, as the capacity to stay, for a satisfactorily long time, within a ‘safety zone’, where the ecosystem simply doesn’t kill us. That view has little to do with the moral concepts of environment-friendliness or sustainability. As a matter of fact, most known human social structures sooner or later fall out of balance with the ecosystem, and this is how civilizations collapse. Thus, here comes the first important assumption: any human social structure is, at some level, an environmental project. The incumbent social structures, possible to consider as relatively stable, are environmental projects which have simply held in place long enough to grow social institutions, and those institutions allow further seeking of environmental balance.

I am starting my review of literature with an article by Phiri et al. (2021[1]), where the authors present a model for assessing the way that alluvial floodplains behave. I chose this one because my concept of Energy Ponds is supposed to work precisely in alluvial floodplains, i.e. in places where we have: a) a big river b) a lot of volatility in the amount of water in that river, and, as a consequence, we have (c) an alternation of floods and droughts. Normal stuff where I come from, i.e. in Northern Europe. Phiri et al. use the general model, acronymically called SWAT, which comes from ‘Soil and Water Assessment Tool’ (see also: Arnold et al. 1998[2]; Neitsch et al. 2005[3]), and with that general tool, they study the concept of pseudo-reservoirs in alluvial plains. In short, a pseudo-reservoir is a hydrological structure which works like a reservoir but does not necessarily look like one. In that sense, wetlands in floodplains can work as reservoirs of water, even if from the hydrological point of view they are rather extensions of the main river channel (Harvey et al. 2009[4]).

Analytically, the SWAT model defines the way a reservoir works with the following equation: V = Vstored + Vflowin − Vflowout + Vpcp − Vevap − Vseep . People can rightly argue that it is a good thing to know what the symbols in an equation mean, and therefore: V stands for the volume of water in the reservoir at the end of the day, Vstored corresponds to the amount of water stored at the beginning of the day, Vflowin means the quantity of water entering the reservoir during the day, Vflowout is the volume of water flowing out of the reservoir during the day, Vpcp is the volume of precipitation falling on the water body during the day, Vevap is the volume of water removed from the water body by evaporation during the day, and Vseep is the volume of water lost from the water body by seepage.
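
A minimal Python rendering of that daily water balance, with variable names mirroring the symbols above; it is nothing more than the equation rewritten as a function.

```python
def reservoir_volume_end_of_day(v_stored: float, v_flowin: float, v_flowout: float,
                                v_pcp: float, v_evap: float, v_seep: float) -> float:
    """Daily water balance of a reservoir in the SWAT model:
    V = Vstored + Vflowin - Vflowout + Vpcp - Vevap - Vseep (all volumes in the same unit, e.g. m3)."""
    return v_stored + v_flowin - v_flowout + v_pcp - v_evap - v_seep
```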

It is a good thing to know, as well, once we have a nice equation, what the hell we are supposed to do with it in real life. Well, the SWAT model even has its fan page (http://www.swatusers.com ), and, as Phiri et al. phrase it, it seems that the best practical use is to control the so-called ‘target release’, i.e. the quantity of water released at a given point in space and time, designated as Vtarg. The target release is mostly used as a control metric for preventing or alleviating floods, and with that purpose in mind, two decision rules are formulated. During the non-flood season, no reservation for flood is needed, and target storage is set at the emergency spillway volume. In other words, in the absence of imminent flood, we can keep the reservoir full. On the other hand, when the flood season is on, the flood-control reservation is a function of soil water content. It is set to the maximum, and to 50% of the maximum, for wet and dry grounds respectively. In the context of the V = Vstored + Vflowin − Vflowout + Vpcp − Vevap − Vseep equation, Vtarg is a specific value (or interval of values) in the Vflowout component.
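
Here is how I read that decision rule, as a hedged sketch in Python. The exact SWAT formulation is more detailed than this; the code only mimics the verbal description above, and the treatment of the ‘reservation’ as a volume kept free below the emergency spillway volume is my own interpretation.

```python
def target_storage(flood_season: bool, emergency_spillway_volume: float,
                   max_flood_reservation: float, wet_ground: bool) -> float:
    """Target storage Vtarg, as described above (a sketch, not the exact SWAT equations).

    Outside the flood season, no reservation for flood is needed, so the target
    is the emergency spillway volume (the reservoir can stay full). During the
    flood season, a flood-control reservation is kept free: the full reservation
    for wet grounds, half of it for dry grounds.
    """
    if not flood_season:
        return emergency_spillway_volume
    reservation = max_flood_reservation if wet_ground else 0.5 * max_flood_reservation
    return emergency_spillway_volume - reservation
```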

As I am wrapping my mind around those conditions, I am thinking about the opposite application, i.e. about preventing and alleviating droughts. Drought is recognizable by exceptionally low values in the amount of water stored at the end of the given period, thus in the basic V, in the presence of low precipitation, thus low Vpcp, and high evaporation, which corresponds to high Vevap. More generally, both floods and droughts occur when – or rather after – precipitation and evaporation tip a given Vflowin − Vflowout balance one way or the other.

I feel like moving those exogenous meteorological factors to one side of the equation, which goes like – Vpcp + Vevap = – V + Vstored + Vflowin − Vflowout − Vseep and doesn’t make much sense, as there are not really many cases of negative precipitation. I need to switch signs, and then it is more presentable, as Vpcp – Vevap = V – Vstored – Vflowin + Vflowout + Vseep . Weeell, almost makes sense. I guess that Vflowin is sort of exogenous, too. The inflow of water into the basin of the river comes from a melting glacier, from another river, from an upstream section of the same river etc. I reframe: Vpcp – Vevap + Vflowin = V – Vstored + Vflowout + Vseep . Now, it makes sense. Precipitation plus the inflow of water through the main channel of the river, minus evaporation, all that stuff creates a residual quantity of water. That residual quantity seeps into the groundwaters (Vseep), flows out (Vflowout), and stays in the reservoir-like structure at the end of the day (V – Vstored).

I am having a look at how Phiri et al. (2021 op. cit.) phrase out their model of pseudo-reservoir. The output value they peg the whole thing on is Vpsrc, or the quantity of water retained in the pseudo-reservoir at the end of the day. The Vpsrc is modelled for two alternative situations: no flood (V ≤ Vtarg), or flood (V > Vtarg). I interpret drought as particularly uncomfortable a case of the absence of flood.

Whatever. If V ≤ Vtarg , then Vpsrc = Vstored + Vflowin − Vbaseflowout + Vpcp − Vevap − Vseep , where, besides the already known variables, Vbaseflowout stands for the volume of water leaving the PSRC during the day as base flow. When, on the other hand, we have a flood, Vpsrc = Vstored + Vflowin − Vbaseflowout − Voverflowout + Vpcp − Vevap − Vseep , with Voverflowout standing for the volume of water leaving the PSRC during the day as overflow.
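
And a sketch of the pseudo-reservoir balance, again with variable names mirroring the symbols; the no-flood case is simply the flood case with zero overflow, which is how I collapse the two equations into one function.

```python
def pseudo_reservoir_volume(v_stored: float, v_flowin: float, v_baseflowout: float,
                            v_pcp: float, v_evap: float, v_seep: float,
                            v_overflowout: float = 0.0) -> float:
    """Vpsrc at the end of the day, as in the equations above.

    With v_overflowout = 0 (no flood, V <= Vtarg) this reduces to the first
    equation; with a positive overflow it is the flood-case equation."""
    return v_stored + v_flowin - v_baseflowout - v_overflowout + v_pcp - v_evap - v_seep
```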

Phiri et al. (2021 op. cit.) argue that once we incorporate the phenomenon of pseudo-reservoirs in the evaluation of possible water discharge from alluvial floodplains, the above-presented equations perform better than the standard SWAT model, i.e. V = Vstored + Vflowin − Vflowout + Vpcp − Vevap − Vseep .

My principal takeaway from the research by Phiri et al. (2021 op. cit.) is that wetlands matter significantly for the hydrological balance of areas with characteristics of floodplains. My concept of ‘Energy Ponds’ assumes, among other things, storing water in swamp-like structures, including urban and semi-urban ones, such as rain gardens (Sharma & Malaviya 2021[5]; Li, Liu & Li 2020[6]; Venvik & Boogaard 2020[7]) or sponge cities (Ma, Jiang & Swallow 2020[8]; Sun, Cheshmehzangi & Wang 2020[9]).

Now, I have a few papers which allow me to have sort of a bird’s eye view of the SWAT model as regards the actual predictability of flow and retention in fluvial basins. It turns out that identifying optimal sites for hydropower installations is a very complex task, prone to a lot of error, and only the introduction of digital data such as GIS allows acceptable precision. The problem is to estimate accurately both the flow and the head of the waterway in question at an exact location (Liu et al., 2017[10]; Gollou and Ghadimi 2017[11]; Aghajani & Ghadimi 2018[12]; Yu & Ghadimi 2019[13]; Cai, Ye & Gholinia 2020[14]). My concept of ‘Energy Ponds’ includes hydrogeneration, but makes one of those variables constant, by introducing something like Roman siphons, with a constant head, apparently possible to peg at 20 metres. Hydropower generation seems to be a pseudo-concave function (i.e. it hits quite a broad, concave peak of performance) if the hydraulic head (height differential) is constant, and the associated productivity function is strongly increasing. Analytically, it can be expressed as a polynomial, i.e. as a combination of independent factors with various powers (various impacts) assigned to them (Cordova et al. 2014[15]; Vieira et al. 2015[16]). In other words, introducing that constant head (height) into my technological concept makes the whole thing more amenable to optimization.

Now, I take on a paper which shows how to present a proof of concept properly: Pradhan, A., Marence, M., & Franca, M. J. (2021). The adoption of Seawater Pump Storage Hydropower Systems increases the share of renewable energy production in Small Island Developing States. Renewable Energy, https://doi.org/10.1016/j.renene.2021.05.151 . This paper is quite close to my concept of ‘Energy Ponds’, as it includes the technology of pumped storage, which I think about morphing and changing into something slightly different. As presented by Pradhan, Marence & Franca (2021, op. cit.), the proof of concept is structured in two parts: the general concept is presented, and then a specific location is studied – the island of Curaçao, in this case – as representative of a whole category. The substance of proof is articulated around the following points:

>> the basic diagnosis of the needs of the local community in terms of energy sources, with the basic question whether a Seawater Pumped Storage Hydropower System is locally suitable as a technology. In this specific case, the main criterion was the possible reduction of dependency on fossil fuels. Assumptions about the electric power required were made specifically for the local community.

>> a GIS tool has been tested for choosing the optimal location. GIS stands for Geographic Information System (https://en.wikipedia.org/wiki/Geographic_information_system ). In this specific thread, the proof of concept consisted in checking whether the available GIS data, and the software available for processing it, are sufficient for selecting an optimal location in Curaçao.

At the bottom line, the proof of concept sums up to checking whether the available GIS technology allows calibrating a site for installing the required electrical power in a Seawater Pumped Storage Hydropower System.

That paper by Pradhan, Marence & Franca (2021, op. cit.) presents a few other interesting traits for me. Firstly, the authors prove that combining hydropower with windmills and solar modules is a viable solution, and this is exactly what I thought, only I wasn’t sure. Secondly, the authors consider a very practical issue: corrosion, and the materials recommended in order to bypass that problem. Their choice is fiberglass. Thirdly, they introduce an important parameter, namely the L/H aka ‘Length to Head’ ratio. This is the proportion between the length of water conductors and the hydraulic head (i.e. the relative denivelation) in the actual installation. Pradhan, Marence & Franca recommend distinguishing two types of installations: those with L/H < 15, on the one hand, and those with 15 ≤ L/H ≤ 25, on the other. However accurate that assessment of theirs may be, it is a parameter to consider. In my concept of ‘Energy Ponds’, I assume an artificially created hydraulic head of 20 metres, and thus the conductors leading from elevated tanks to the collecting wetland-type structure should be classified in two types, namely [(L/H < 15) ⇒ (L < 15*20) ⇒ (L < 300 metres)], on the one hand, and [(15 ≤ L/H ≤ 25) ⇒ (300 metres ≤ L ≤ 500 metres)], on the other hand.
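
The little classification I have just done by hand can be written down as a function; the 20-metre head is my working assumption for ‘Energy Ponds’, and the two L/H brackets are those recommended by Pradhan, Marence & Franca.

```python
def conductor_type(length_m: float, head_m: float = 20.0) -> str:
    """Classify a water conductor by its Length-to-Head (L/H) ratio."""
    ratio = length_m / head_m
    if ratio < 15:
        return "type 1: L/H < 15"
    if ratio <= 25:
        return "type 2: 15 <= L/H <= 25"
    return "outside the recommended L/H range"

print(conductor_type(250))  # with H = 20 m, L = 250 m -> type 1
print(conductor_type(400))  # with H = 20 m, L = 400 m -> type 2
```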

Still, there is bad news for me. According to a report by Botterud, Levin & Koritarov (2014[17]), which Pradhan, Marence & Franca quote as an authoritative source, hydraulic head for pumped storage should be at least 100 metres in order to make the whole thing profitable. My working assumption with ‘Energy Ponds’ is 20 metres, and, obviously, I have to work through it.

I think I have the outline of a structure for writing a decent proof-of-concept article for my ‘Energy Ponds’ concept. I think I should start with something I have already done once, two years ago, namely with compiling data as regards places in Europe, located in fluvial plains, with relatively large volatility in water level and flow. These places will need water retention.

Out of that list, I select locations eligible for creating wetland-type structures for retaining water, either in the form of swamps, or as porous architectural structures. Once that second list is prepared, I assess the local need for electrical power. From there, I reverse engineer. With a given power of X megawatts, I work back to the storage capacity needed for delivering that power efficiently and cost-effectively. I nail down the storage capacity as such, and I pass in review the available technologies of power storage.

Next, I choose the best storage technology for that specific place, and I estimate the investment outlays necessary for installing it. I calculate the hydropower required in hydroelectric turbines, as well as in adjacent windmills and photovoltaic modules. I check whether the local river can supply the amount of water that fits the bill. I pass in review the literature as regards optimal combinations of those three sources of energy. I calculate the investment outlays needed to install all that stuff, and I add the investment required in ram pumping, elevated tanks, and water conductors.

Then, I do a first approximation of cash flow: cash from sales of electricity, in that local installation, minus the possible maintenance costs. After I calculate that gross margin of cash, I compare it to the investment capital I had calculated before, and I try to estimate provisionally the time of return on investment. Once this is done, I add maintenance costs to my sauce. I think that the best way of estimating these is to assume a given lifecycle of complete depreciation in the technology installed, and to count maintenance costs as the corresponding annual amortization.
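
The provisional financial logic of that last step can be sketched as follows; every number in the illustration is a placeholder, to be replaced with the location-specific figures from the earlier steps, and the treatment of maintenance as annual amortization over an assumed lifecycle follows the reasoning above.

```python
def provisional_payback_years(investment_capital: float,
                              annual_electricity_sales: float,
                              lifecycle_years: float) -> float:
    """First approximation of the time of return on investment.

    Maintenance is counted as the annual amortization of the installed
    technology over its assumed lifecycle of complete depreciation."""
    annual_maintenance = investment_capital / lifecycle_years
    gross_cash_margin = annual_electricity_sales - annual_maintenance
    if gross_cash_margin <= 0:
        return float("inf")  # the installation never pays itself back
    return investment_capital / gross_cash_margin

# Purely illustrative: $2 000 000 of capital, $300 000 of yearly electricity sales, 25-year lifecycle.
print(provisional_payback_years(2_000_000, 300_000, 25))  # -> about 9.1 years
```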


[1] Phiri, W. K., Vanzo, D., Banda, K., Nyirenda, E., & Nyambe, I. A. (2021). A pseudo-reservoir concept in SWAT model for the simulation of an alluvial floodplain in a complex tropical river system. Journal of Hydrology: Regional Studies, 33, 100770. https://doi.org/10.1016/j.ejrh.2020.100770.

[2] Arnold, J.G., Srinivasan, R., Muttiah, R.S., Williams, J.R., 1998. Large area hydrological modelling and assessment: Part I. Model development. J. Am. Water Resour. Assoc. 34, 73–89.

[3] Neitsch, S.L., Arnold, J.G., Kiniry, J.R., Williams, J.R., 2005. “Soil and Water Assessment Tool Theoretical Documentation.” Version 2005. Blackland Research Center, Texas.

[4] Harvey, J.W., Schaffranek, R.W., Noe, G.B., Larsen, L.G., Nowacki, D.J., O’Connor, B.L., 2009. Hydroecological factors governing surface water flow on a low-gradient floodplain. Water Resour. Res. 45, W03421, https://doi.org/10.1029/2008WR007129.

[5] Sharma, R., & Malaviya, P. (2021). Management of stormwater pollution using green infrastructure: The role of rain gardens. Wiley Interdisciplinary Reviews: Water, 8(2), e1507. https://doi.org/10.1002/wat2.1507

[6] Li, J., Liu, F., & Li, Y. (2020). Simulation and design optimization of rain gardens via DRAINMOD and response surface methodology. Journal of Hydrology, 585, 124788. https://doi.org/10.1016/j.jhydrol.2020.124788

[7] Venvik, G., & Boogaard, F. C. (2020). Infiltration capacity of rain gardens using full-scale test method: effect of infiltration system on groundwater levels in Bergen, Norway. Land, 9(12), 520. https://doi.org/10.3390/land9120520

[8] Ma, Y., Jiang, Y., & Swallow, S. (2020). China’s sponge city development for urban water resilience and sustainability: A policy discussion. Science of the Total Environment, 729, 139078. https://doi.org/10.1016/j.scitotenv.2020.139078

[9] Sun, J., Cheshmehzangi, A., & Wang, S. (2020). Green infrastructure practice and a sustainability key performance indicators framework for neighbourhood-level construction of sponge city programme. Journal of Environmental Protection, 11(2), 82-109. https://doi.org/10.4236/jep.2020.112007

[10] Liu, Yan, Wang, Wei, Ghadimi, Noradin, 2017. Electricity load forecasting by an improved forecast engine for building level consumers. Energy 139, 18–30. https://doi.org/10.1016/j.energy.2017.07.150

[11] Gollou, Abbas Rahimi, Ghadimi, Noradin, 2017. A new feature selection and hybrid forecast engine for day-ahead price forecasting of electricity markets. J. Intell. Fuzzy Systems 32 (6), 4031–4045.

[12] Aghajani, Gholamreza, Ghadimi, Noradin, 2018. Multi-objective energy management in a micro-grid. Energy Rep. 4, 218–225.

[13] Yu, Dongmin, Ghadimi, Noradin, 2019. Reliability constraint stochastic UC by considering the correlation of random variables with Copula theory. IET Renew. Power Gener. 13 (14), 2587–2593.

[14] Cai, X., Ye, F., & Gholinia, F. (2020). Application of artificial neural network and Soil and Water Assessment Tools in evaluating power generation of small hydropower stations. Energy Reports, 6, 2106-2118. https://doi.org/10.1016/j.egyr.2020.08.010.

[15] Cordova M, Finardi E, Ribas F, de Matos V, Scuzziato M. Performance evaluation and energy production optimization in the real-time operation of hydropower plants. Electr Pow Syst Res 2014;116:201–7. http://dx.doi.org/10.1016/j.epsr.2014.06.012

[16] Vieira, D. A. G., Guedes, L. S. M., Lisboa, A. C., & Saldanha, R. R. (2015). Formulations for hydroelectric energy production with optimality conditions. Energy Conversion and Management, 89, 781-788.

[17] Botterud, A., Levin, T., & Koritarov, V. (2014). Pumped storage hydropower: benefits for grid reliability and integration of variable renewable energy (No. ANL/DIS-14/10). Argonne National Lab.(ANL), Argonne, IL (United States). https://publications.anl.gov/anlpubs/2014/12/106380.pdf

Investment, national security, and psychiatry

I need to clear my mind a bit. For the last few weeks, I have been working a lot on revising an article of mine, and I feel I need a little bit of a shake-off. I know by experience that I need a structure to break free from another structure. Yes, I am one of those guys. I like structures. When I feel I lack one, I make one.

The structure which I want to dive into, in order to shake off the thinking about my article, is the thinking about my investment in the stock market. My general strategy in that department is to take the rent, which I collect from an apartment in town, every month, and to invest it in the stock market. Economically, it is a complex process of converting the residential utility of a real asset (apartment) into a flow of cash, thus into a financial asset with quite steady a market value (inflation is still quite low), and then I convert that low-risk financial asset into a differentiated portfolio of other financial assets endowed with higher a risk (stock). I progressively move capital from markets with low risk (residential real estate, money) into a high-risk-high-reward market.

I am playing a game. I make a move (monthly cash investment), and I wait for a change in the stock market. I am wrapping my mind around the observable change, and I make my next move the next month. With each move I make, I gather information. What is that information? Let’s have a look at my portfolio such as it is now. You can see it in the table below:

Stock | Value in EUR | Real return in € | Rate of return I have as of April 6th, 2021, in the morning
CASH & CASH FUND & FTX CASH (EUR) | € 25,82 | – | –
ALLEGRO.EU SA | € 48,86 | € (2,82) | -5,78%
ALTIMMUNE INC. – COMM | € 1 147,22 | € 179,65 | 15,66%
APPLE INC. – COMMON ST | € 1 065,87 | € 8,21 | 0,77%
BIONTECH SE | € 1 712,88 | € (149,36) | -8,72%
CUREVAC N.V. | € 711,00 | € (98,05) | -13,79%
DEEPMATTER GROUP PLC | € 8,57 | € (1,99) | -23,26%
FEDEX CORPORATION COMM | € 238,38 | € 33,49 | 14,05%
FIRST SOLAR INC. – CO | € 140,74 | € (11,41) | -8,11%
GRITSTONE ONCOLOGY INC | € 513,55 | € (158,43) | -30,85%
INPOST | € 90,74 | € (17,56) | -19,35%
MODERNA INC. – COMMON | € 879,85 | € (45,75) | -5,20%
NOVAVAX INC. – COMMON STOCK | € 1 200,75 | € 398,53 | 33,19%
NVIDIA CORPORATION – C | € 947,35 | € 42,25 | 4,46%
ONCOLYTICS BIOTCH CM | € 243,50 | € (14,63) | -6,01%
SOLAREDGE TECHNOLOGIES | € 683,13 | € (83,96) | -12,29%
SOLIGENIX INC. COMMON | € 518,37 | € (169,40) | -32,68%
TESLA MOTORS INC. – C | € 4 680,34 | € 902,37 | 19,28%
VITALHUB CORP. | € 136,80 | € (3,50) | -2,56%
WHIRLPOOL CORPORATION | € 197,69 | € 33,11 | 16,75%
Total | € 15 191,41 | € 840,74 | 5,53%

A few words of explanation are due. Whilst I have been actively investing for 13 months, I made this portfolio in November 2020, when I did some major reshuffling. My overall return on the cash invested, over the entire period of 13 months, is 30,64% as of now (April 6th, 2021), which makes 30,64% * (12/13) = 28,3% on an annual basis.

The 5,53% of return which I have on this specific portfolio makes roughly 1/6th of the total return I have on all the portfolios I have had over the past 13 months. It is the outcome of my latest experimental round, and this round is very illustrative of the mistake which I know I can make as an investor: panic.

In August and September 2020, I collected some information, I did some thinking, and I made a portfolio of biotech companies involved in the COVID-vaccine story: Pfizer, Biontech, Curevac, Moderna, Novavax, Soligenix. By mid-October 2020, I was literally swimming in ecstasy, as I had returns on these ones like +50%. Pure madness. Then, big financial sharks, commonly called ‘investment funds’, went hunting for those stocks, and they did what sharks do: they made their target bleed before eating it. They boxed and shorted those stocks in order to make their prices affordably low for long investment positions. At the time, I lost control of my emotions, and when I saw those prices plummet, I sold out everything I had. Almost as soon as I did it, I realized what an idiot I had been. Two weeks later, the same stocks started to rise again. Sharks had had their meal. In response, I did something I still wonder about, whether it was wise or stupid: I bought back into those positions, only at a price higher than what I had sold them for.

Selling out was stupid, for sure. Was buying back in a wise move? I don’t know, like really. My intuition tells me that biotech companies in general have a bright future ahead, and not only in connection with vaccines. I am deeply convinced that the pandemic has already built up, and will keep building up, an interest in biotechnology and medical technologies, especially in highly innovative forms. This is even more probable as we have realized that modern biotechnology is very largely digital technology. This is what is called ‘platforms’ in the biotech lingo. These are digital clouds which combine empirical experimental data with artificial intelligence, and the latter is supposed to experiment virtually with that data. Modern biotechnology consists in creating as many alternative combinations of molecules and lifeforms as we possibly can make and study, and then picking those which offer the best combination of biological outcomes with the probability of achieving said outcomes.

My currently achieved rates of return, in the portfolio I have now, are very illustrative of an old principle in capital investment: I will fail most of the times. Most of my investment decisions will be failures, at least in the short and medium term, because I cannot possibly outsmart the incredibly intelligent collective structure of the stock market. My overall gain, those 5,53% in the case of this specific portfolio, is the outcome of 19 experiments, where I fail in 12 of them, for now, and I am more or less successful in the remaining 7.

The very concept of ‘beating the market’, which some wannabe investment gurus present, is ridiculous. The stock market is made of dozens of thousands of human brains, operating in correlated coupling, and leveraged with increasingly powerful artificial neural networks. When I expect to beat that networked collective intelligence with that individual mind of mine, I am pumping smoke up my ass. On the other hand, what I can do is to do as many different experiments as I can possibly spread my capital between.

It is important to understand that any investment strategy, where I assume that from now on, I will not make any mistakes, is delusional. I made mistakes in the past, and I am likely to make mistakes in the future. What I can do is to make myself more predictable to myself. I can narrow down the type of mistakes I tend to make, and to create the corresponding compensatory moves in my own strategy.

Differentiation of risk is a big principle in my investment philosophy, and yet it is not the only one. Generally, with the exception of maybe 2 or 3 days in a year, I don’t really like quick, daily trade in the stock market. I am more of a financial farmer: I sow, and I wait to see plants growing out of those seeds. I invest in industries rather than individual companies. I look for some kind of strong economic undertow for my investments, and the kind of undertow I specifically look for is high potential for deep technological change. Accessorily, I look for industries which sort of logically follow human needs, e.g. the industry of express deliveries in the times of pandemic. I focus on three main fields of technology: biotech, digital, and energy.

Good. I needed to shake off, and I am. Thinking and writing about real business decisions helped me to take some perspective. Now, I am gently returning into the realm of science, without completely leaving the realm of business: I am navigating the somewhat troubled and feebly charted waters of money for science. I am currently involved in launching and fundraising for two scientific projects, in two very different fields of science: national security and psychiatry. Yes, I know, they can intersect in more points than we commonly think they can. Still, in canonical scientific terms, these two diverge.

How come I am involved, as researcher, in both national security and psychiatry? Here is the thing: my method of using a simple artificial neural network to simulate social interactions seems to be catching on. Honestly, I think it is catching on because other researchers, when they hear me talking about ‘you know, simulating alternative realities and assessing which one is the closest to the actual reality’ sense in me that peculiar mental state, close to the edge of insanity, but not quite over that edge, just enough to give some nerve and some fun to science.

In the field of national security, I teamed up with a scientist strongly involved in it, and we take on studying the way our Polish forces of Territorial Defence have been acting in and coping with the pandemic of COVID-19. First, the context. So far, the pandemic has worked as a magnifying glass for all the f**kery in public governance. We could all see a minister saying ‘A, B and C will happen because we said so’, and right after there was just A happening, with a lot of delay, and then a completely unexpected phenomenal D appeared, with B and C bitching and moaning that they didn’t have the right conditions for happening decently, and therefore they would not happen at all. This is the first piece of the context. The second is the official mission and the reputation of our Territorial Defence Forces, AKA TDF. This is a branch of our Polish military, created in 2017 by our right-wing government. From the beginning, these guys had the reputation of being a right-wing militia dressed in uniforms and paid with taxpayers’ money. I honestly admit I used to share that view. TDF is something like the National Guard in the US. These are units made of soldiers who serve in the military and have basic military training, but who have normal civilian lives besides. They have civilian jobs, whilst training regularly and being at the ready should the nation call.

The initial idea of TDF emerged after the Russian invasion of the Crimea, when we became acutely aware that military troops in nondescript uniforms, apparently lost, and yet strangely connected to the Russian government, could massively start looking lost by our Eastern border. The initial idea behind TDF was to significantly increase the capacity of the Polish population for mobilising military resources. Switzerland and Finland largely served as models.

When the pandemic hit, our government could barely pretend they controlled the situation. Hospitals designated as COVID-specific frequently had no resources to carry out that mission. Our government had the idea of mobilising TDF to help with basic stuff: logistics, triage and support in hospitals etc. Once again, the initial reaction of the general public was to put the label of ‘militarisation’ on that decision, and, once again, I was initially thinking this way. Still, some friends of mine, strongly involved as social workers supporting healthcare professionals, started telling me that working with TDF, in local communities, was nothing short of amazing. TDF had the speed, the diligence, and the capacity to keep their s**t together which many public officials lacked. They were just doing their job and helping tremendously.

I started scratching the surface. I did some research, and I found out that TDF was of invaluable help for many local communities, especially outside of big cities. Recently, I accidentally had a conversation about it with M., the scientist whom I am working with on that project. He just confirmed my initial observations.

M. has strong connections with TDF, including their top command. Our common idea is to collect abundant, interview-based data from TDF soldiers mobilised during the pandemic, as regards the way they carried out their respective missions. The purely empirical edge we want to have here is oriented on defining successes and failures, as well as their context and contributing factors. The first layer of our study is supposed to provide the command of TDF with some sort of case-study-based manual for future interventions. At the theoretical, more scientific level, we intend to check the following hypotheses:

>> Hypothesis #1: during the pandemic, TDF has changed its role, under the pressure of external events, from the initially assumed, properly spoken territorial defence, to civil defence and assistance to the civilian sector.

>> Hypothesis #2: the actual role played by the TDF during the pandemic was determined by the TDF’s actual capacity of reaction, i.e. speed and diligence in the mobilisation of human and material resources.

>> Hypothesis #3: collectively intelligent human social structures form mechanisms of reaction to external stressors, and the chief orientation of those mechanisms is to assure proper behavioural coupling between the action of external stressors and the coordinated social reaction. Note: I define behavioural coupling in terms of game theory, i.e. as the objectively existing need for proper pacing in action and reaction.

The basic method of verifying those hypotheses consists, in the first place, in translating the primary empirical material into a matrix of probabilities. There is a finite catalogue of operational procedures that TDF can perform. Some of those procedures are associated with territorial military defence as such, whilst other procedures belong to the realm of civil defence. It is supposed to go like: ‘At the moment T, in the location A, procedure of type Si had a P(T, A, Si) probability of happening’. In that general spirit, Hypothesis #1 can be translated straight into a matrix of probabilities, and phrased out as ‘during the pandemic, the probability of TDF units acting as civil defence was higher than the probability of seeing them operate as strict territorial defence’.
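
Just to make that matrix of probabilities tangible, here is a minimal sketch in Python. The procedure labels, regions and counts are purely illustrative assumptions, not actual TDF data.

import pandas as pd

# Illustrative sketch of the matrix P(T, A, Si): rows are (month, region) pairs,
# columns are types of operational procedures. All labels and counts are assumptions.
observations = pd.DataFrame({
    "month":     ["2020-03", "2020-03", "2020-04"],
    "region":    ["Mazowieckie", "Malopolskie", "Mazowieckie"],
    "procedure": ["hospital_logistics", "border_patrol", "triage_support"],
    "count":     [42, 7, 31],
})

# Probability of each procedure type within its (month, region) cell
totals = observations.groupby(["month", "region"])["count"].transform("sum")
observations["probability"] = observations["count"] / totals

# Hypothesis #1 then boils down to comparing two aggregate probabilities
civil = {"hospital_logistics", "triage_support"}
p_civil = observations.loc[observations["procedure"].isin(civil), "count"].sum() / observations["count"].sum()
print(f"P(civil defence) = {p_civil:.2f} vs P(territorial defence) = {1 - p_civil:.2f}")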

That general probability can be split into local ones, e.g. region-specific. On the other hand, I intuitively associate Hypotheses #2 and #3 with the method which I call ‘study of orientation’. I take the matrix of probabilities defined for the purposes of Hypothesis #1, and I put it back to back with a matrix of quantitative data relative to speed and diligence in action, as regards TDF on the one hand, and other public services on the other hand. It is about the availability of vehicles, the capacity to mobilise people, etc. In general, it is about the so-called ‘operational readiness’, about which you can read more in, for example, the publications of the RAND Corporation (https://www.rand.org/topics/operational-readiness.html).

Thus, I take the matrix of variables relative to operational readiness observable in the TDF, and I use that matrix as input for a simple neural network, where the aggregate neural activation based on those metrics, e.g. through a hyperbolic tangent, is supposed to approximate a specific probability: that of TDF people endorsing, in their operational procedures, the role of civil defence rather than that of military territorial defence. I hypothesise that operational readiness in TDF manifests a collective intelligence at work, doing its best to endorse specific roles and apply specific operational procedures. I make as many such neural networks as there are operational procedures observed for the purposes of Hypothesis #1. Each of these networks is supposed to represent the collective intelligence of TDF attempting to optimize, through its operational readiness, the endorsement and fulfilment of a specific role. In other words, each network represents an orientation.

Each such network transforms the input data it works with. This is what neural networks do: they experiment with many alternative versions of themselves. Each experimental round, in this case, consists in a vector of metrics informative about the operational readiness of TDF, and that vector locally tries to generate an aggregate outcome – its neural activation – as close as possible to the probability of effectively playing a specific role. This is always a failure: the neural activation of operational readiness always falls short of nailing down exactly the probability it attempts to optimize. There is always a local residual error to account for, and the way a neural network (well, my neural network) accounts for errors consists in measuring them and feeding them into the next experimental round. The point is that each such distinct neural network, oriented on optimizing the probability of the Territorial Defence Forces endorsing and fulfilling a specific social role, is a transformation of the original, empirical dataset informative about the TDF’s operational readiness.
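
Here is a minimal sketch, in Python, of what one such orientation network could look like. The weight update and the exact rule for feeding the residual error back into the next round are simplifying assumptions for illustration, not a definitive implementation of the method.

import numpy as np

def orient_network(X, p_role, rounds=200, lr=0.05):
    # X: matrix of operational-readiness metrics (observations x variables)
    # p_role: probability of TDF endorsing one specific role, per observation
    n_obs, n_var = X.shape
    w = np.random.uniform(-0.1, 0.1, n_var)
    X_work = X.astype(float).copy()           # this working copy becomes the 'transformation'
    errors = []
    for _ in range(rounds):
        for i in range(n_obs):
            activation = np.tanh(X_work[i] @ w)   # aggregate neural activation
            e = p_role[i] - activation            # local residual error
            w += lr * e * X_work[i]               # adjust the weights
            X_work[i] += lr * e                   # feed the error into the next experimental round
            errors.append(e)
    return X_work, np.array(errors)

The returned X_work is the transformed dataset – one alternative reality oriented on one role – and the series of errors is the trace of its learning.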

Thus, in this method, I create as many transformations (AKA alternative versions) of the actual operational readiness in TDF, as there are social roles to endorse and fulfil by TDF. In the next step, I estimate two mathematical attributes of each such transformation: its Euclidean distance from the original empirical dataset, and the distribution of its residual error. The former is informative about similarity between the actual reality of TDF’s operational readiness, on the one hand, and alternative realities, where TDF orient themselves on endorsing and fulfilling just one specific role. The latter shows the process of learning which happens in each such alternative reality.
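
Putting those two attributes together, still as an illustrative sketch: it reuses the orient_network function from the snippet above, and the synthetic numbers below just stand in for the real data.

import numpy as np

# Synthetic stand-ins: 50 observations of 4 readiness metrics, and assumed
# probabilities of two roles (illustrative numbers only).
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(50, 4))
role_probabilities = {
    "civil_defence":       rng.uniform(0.6, 0.9, size=50),
    "territorial_defence": rng.uniform(0.1, 0.4, size=50),
}

results = {}
for role, p in role_probabilities.items():
    X_alt, err = orient_network(X, p)             # sketch defined in the previous snippet
    results[role] = {
        "euclidean_distance": float(np.linalg.norm(X_alt - X)),
        "residual_error_std": float(np.std(err)),
    }

# The transformation closest to the original dataset (smallest Euclidean distance)
# points to the role the collective intelligence of TDF seems actually oriented on.
closest = min(results, key=lambda r: results[r]["euclidean_distance"])
print(closest, results[closest])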

I make a few methodological hypotheses at this point. Firstly, I expect a few, like 1 – 3, transformations (alternative realities) to fall particularly close to the actual empirical reality, as compared to the others. ‘Particularly close’ means their Euclidean distances from the original dataset will be at least one order of magnitude smaller than those observable in the remaining transformations. Secondly, I expect those transformations to display a specific pattern of learning, where the residual error swings in a predictable cycle, over a relatively wide amplitude, yet staying inside that amplitude. This is a cycle where the collective intelligence of the Territorial Defence Forces goes like: ‘We optimize, we optimize, it goes well, we narrow down the error, f**k!, we failed, our error increased, and yet we keep trying, we optimize, we optimize, we narrow down the error once again…’ etc. Thirdly, I expect the remaining transformations, namely those much less similar to the actual reality in Euclidean terms, to display different patterns of learning, either completely dishevelled, with the residual error bouncing haphazardly all over the place, or exaggeratedly tight, with the error being narrowed down very quickly and staying small ever since.

That’s the outline of research which I am engaging into in the field of national security. My role in this project is that of a methodologist. I am supposed to design the system of interviews with TDF people, the way of formalizing the resulting data, binding it with other sources of information, and finally carrying out the quantitative analysis. I think I can use the experience I already have with using artificial neural networks as simulators of social reality, mostly in defining said reality as a vector of probabilities attached to specific events and behavioural patterns.     

As regards psychiatry, I have just started to work with a group of psychiatrists who have abundant professional experience in two specific applications of natural language in diagnosing and treating psychoses. The first one consists in interpreting patients’ elocutions as informative about their likelihood of being psychotic, relapsing into psychosis after therapy, or getting durably better after such therapy. In psychiatry, the durability of therapeutic outcomes is a big thing, as I have already learnt when preparing for this project. The second application is the analysis of patients’ emails. The psychiatrists I am starting to work with use a therapeutic method which engages the patient to maintain contact with the therapist by writing emails. Patients describe, quite freely and casually, their mental state together with their general existential context (job, family, relationships, hobbies etc.). They don’t necessarily discuss those emails in subsequent therapeutic sessions; sometimes they do, sometimes they don’t. The most important therapeutic outcome seems to be derived from the very fact of writing and emailing.

In terms of empirical research, the semantic material we are supposed to work with in that project consists of two big sets of written elocutions: patients’ emails, on the one hand, and transcripts of standardized 5-minute therapeutic interviews, on the other hand. Each elocution is a complex grammatical structure in itself. The semantic material is supposed to be cross-checked with neurological biomarkers in the same patients. The way I intend to use neural networks in this case is slightly different from that national security thing. I am thinking about defining categories, i.e. about networks which guess similarities and classifications out of raw empirical data. For now, I make two working hypotheses:

>> Hypothesis #1: the probability of occurrence of specific grammatical structures A, B, C in the overall grammatical structure of a patient’s elocutions, both written and spoken, is informative about the patient’s mental state, including the likelihood of psychosis and its specific form (a minimal formalization is sketched in code after this list).

>> Hypothesis #2: the action of written self-reporting, e.g. via email, on the part of a psychotic patient, allows post-clinical treatment of psychosis, with results observable as a transition from mental state A to mental state B.
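
As announced above, here is a minimal formalization of Hypothesis #1: counting the probability of occurrence of specific grammatical structures in one patient's elocutions. The tags below are hypothetical category labels; in the actual study they would come from a proper grammatical parser.

from collections import Counter

# Hypothetical grammatical tags extracted from one patient's elocutions
elocution_tags = [
    "first_person_sg", "negation", "passive", "first_person_sg",
    "negation", "first_person_sg", "conditional",
]
counts = Counter(elocution_tags)
total = sum(counts.values())
probabilities = {structure: n / total for structure, n in counts.items()}
# e.g. {'first_person_sg': 0.43, 'negation': 0.29, ...} – the vector that would be
# cross-checked against diagnoses and neurological biomarkers.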

An overhead of individuals

I think I have found, when writing my last update (‘Cultural classes’), another piece of the puzzle which I need to assemble in order to finish writing my book on collective intelligence. I think I have nailed down the general scientific interest of the book, i.e. the reason why my fellow scientists should even bother to have a look at it. That reason is the possibility to have deep insight into various quantitative models used in social sciences, with a particular emphasis on the predictive power of those models in the presence of exogenous stressors, and, digging further, the representativeness of those models as simulators of social reality.

Let’s have a look at one quantitative model, just one picked at random (well, almost at random): autoregressive conditional heteroskedasticity AKA ARCH (https://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity ). It goes as follows. I have a process, i.e. a time series of a quantitative variable. I compute the mean expected value in that time series, which, in plain human, means the arithmetical average of all the observations in that series. In even plainer human, the one we speak after having watched a lot of YouTube, it means that we sum up the values of all the consecutive observations in that time series and we divide the so-obtained total by the number of observations.

Mean expected values have that timid charm of not existing, i.e. when I compute the mean expected value in my time series, none of the observations will be exactly equal to it. Each observation t will return a residual error εt. The ARCH approach assumes that εt is the product of two factors, namely the time-dependent standard deviation σt, and a factor of white noise zt. Long story short, we have εt = σt·zt.

The time-dependent standard deviation shares the common characteristics of all standard deviations, namely it is the square root of the time-dependent variance: σt = (σt²)^1/2. In the classical ARCH(q) specification, that time-dependent variance is computed as a linear combination of past squared errors: σt² = α0 + α1·εt−1² + … + αq·εt−q², with α0 > 0 and αi ≥ 0.
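
To make those mechanics tangible, here is a minimal simulation of an ARCH(1) process in Python; the parameter values are arbitrary, purely for illustration.

import numpy as np

# Minimal simulation of ARCH(1): eps_t = sigma_t * z_t, with
# sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2. Parameter values are arbitrary.
rng = np.random.default_rng(0)
alpha0, alpha1, T = 0.2, 0.6, 1000
eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = alpha0 / (1 - alpha1)        # unconditional variance as a starting point
for t in range(1, T):
    sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2
    z = rng.standard_normal()            # the white-noise factor z_t
    eps[t] = np.sqrt(sigma2[t]) * z
# Volatility clustering shows up as runs of large |eps| followed by calmer stretches.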

Against that general methodological background, many variations arise, especially as regards the mean expected value around which everything else is wrapped. It can be a constant value, i.e. computed for the entire time series once and for all. We can allow the time series to extend, and then each extension leads to the recalculation of the mean expected value, including the new observation(s). We can make the mean expected value a moving average over a specific window in time.
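
In code, those three variations on the mean expected value could look like this; the synthetic series and the 30-observation window are arbitrary choices for illustration.

import numpy as np
import pandas as pd

# A synthetic, price-like series standing in for the real time series
x = pd.Series(np.cumsum(np.random.default_rng(1).normal(size=500)))

constant_mean  = x.mean()                     # computed once for the entire series
expanding_mean = x.expanding().mean()         # recalculated with every new observation
moving_mean    = x.rolling(window=30).mean()  # moving average over a 30-observation window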

Before I dig further into the underlying assumptions of ARCH, one reminder begs for being reminded: I am talking about social sciences, and about the application of ARCH to all kinds of crazy stuff that we, humans, do collectively. All the equations and conditions phrased out above apply to collective human behaviour. The next step in understanding ARCH, in the specific context of social sciences, is that ARCH has a point only when the measurable attributes of our collective human behaviour really oscillate and change. When I have, for example, a trend in the price of something, and that trend is essentially smooth, without much of a dentition jumping to the eye, ARCH is pretty much pointless. On the other hand, that analytical approach – where each observation in the real, measurable process which I observe is par excellence a deviation from the expected state – gains in cognitive value as the process in question becomes increasingly dented and bumpy.

A brief commentary on the very name of the method might be interesting. The term ‘heteroskedasticity’ means that the dispersion of real observations around the mean expected value is not constant: the variance itself changes over time, with clusters of turbulence alternating with clusters of calm. Let’s simulate the way it happens. Before I even start going down this rabbit hole, another assumption is worth deconstructing. If I deem a phenomenon to be describable as white noise, AKA zt, I assume there is no pattern in its occurrence: it is serially uncorrelated, with zero mean and constant variance. It is the ‘Who knows?’ state of reality in its purest form.

White noise is at the very basis of the way we experience reality. This is pure chaos. We make distinctions in this chaos; we group phenomena, and we assess the probability of each newly observed phenomenon falling into one of the groups. Our essential cognition of reality assumes that in any given pound of chaos, there are a few ounces of order, and a few residual ounces of chaos. Then we have the ‘Wait a minute!’ moment and we further decompose the residual ounces of chaos into some order and even more residual a chaos. From there, we can go ad infinitum, sequestrating streams of regularity and order out of the essentially chaotic flow of reality. I would argue that the book of Genesis in the Old Testament is a poetic, metaphorical account of the way that human mind cuts layers of intelligible order out of the primordial chaos.

Seen from a slightly different angle, it means that white noise zt can be interpreted as an error in itself, because it is essentially a departure from the nicely predictable process εt = σt, i.e. where residual departure from the mean expected value is equal to the mean expected departure from the mean expected value. Being a residual error, zt can be factorized into zt = σ’t*z’t , and, once again, that factorization can go all the way down to the limits of observability as regards the phenomena studied.     

At this point, I am going to put the whole reasoning on its head, as regards white noise. It is because I know, and use a lot, the same concept under a different name, namely that of mean-reverted value. I use mean-reversion a lot in my investment decisions in the stock market, with a very simple logic: when I am deciding to buy or sell a given stock, my purely technical concern is to know how far away the current price is from its moving average. When I do this calculation for many different stocks, priced differently, I need a common denominator, and I use the standard deviation in price for that purpose. In other words, I compute as follows: mean-reverted price = (current price – mean expected price) / standard deviation in price.

If you have a closer look at this coefficient of mean-reverted price, its numerator is an error, because it is the deviation from the mean expected value. I divide that error by the standard deviation, and, logically, what I get is error divided by standard deviation, therefore the white-noise component zt of the equation εt = σt·zt. This is perfectly fine mathematically, only my experience with that coefficient tells me it is anything but white noise. When I want to grasp very sharply and accurately the way the price of a given stock reacts to its economic environment, I use precisely the mean-reverted coefficient of price. As soon as I recalculate the time series of a price into its mean-reverted form, patterns emerge, sharp and distinct. In other words, the allegedly white-noise-based factor in the stock price is much more patterned than the original price used for its calculation.

The same procedure which I call ‘mean-reversion’ is, by the way, a valid procedure to standardize empirical data. You take each empirical observation, you subtract from it the mean expected value of the corresponding variable, you divide the residual difference by its standard deviation, and Bob’s your uncle. You have your data standardized.
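
A minimal sketch of both procedures, assuming a pandas Series of prices; the 30-observation window is an arbitrary choice for illustration.

import numpy as np
import pandas as pd

def mean_reverted(price: pd.Series, window: int = 30) -> pd.Series:
    # Mean-reverted price: (current price - moving average) / standard deviation in price
    mu = price.rolling(window).mean()
    sigma = price.rolling(window).std()
    return (price - mu) / sigma

def standardize(x: pd.Series) -> pd.Series:
    # The same arithmetic with the global mean and standard deviation: plain standardization
    return (x - x.mean()) / x.std()

# Example on a synthetic price path
price = pd.Series(100 + np.cumsum(np.random.default_rng(2).normal(size=300)))
signal = mean_reverted(price)
# Spikes of 'signal' well above zero suggest selling; deep troughs suggest buying.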

Summing up that little rant of mine, I understand the spirit of the ARCH method. If I want to extract some kind of autoregression in a time series, I can test the hypothesis that the standard deviation is time-dependent. Do I need, for that purpose, to assume the existence of strong white noise in the time series? I would say cautiously: maybe, although I do not see the immediate necessity for it. Is the equation εt = σt·zt the right way to grasp the distinction between the structured component and the purely random one in the time series? Honestly: I don’t think so. Where is the catch? I think it is in the definition and utilization of error, which, further, leads to the definition and utilization of the expected state.

In order to make my point clearer, I am going to quote two short passages from pages xxviii-xxix of Nassim Nicholas Taleb’s book ‘The Black Swan’. Here it goes. ‘There are two possible ways to approach phenomena. The first is to rule out the extraordinary and focus on the “normal.” The examiner leaves aside “outliers” and studies ordinary cases. The second approach is to consider that in order to understand a phenomenon, one needs first to consider the extremes—particularly if, like the Black Swan, they carry an extraordinary cumulative effect. […] Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the “normal,” particularly with “bell curve” methods of inference that tell you close to nothing’.

When I use mean-reversion to study stock prices, for my investment decisions, I go very much in the spirit of Nassim Taleb. I am most of all interested in the outlying values of the metric (current price – mean expected price) / standard deviation in price, which, once again, the proponents of the ARCH method interpret as white noise. When that metric spikes up, it is a good moment to sell, whilst when it is in a deep trough, it might be the right moment to buy. I have one more interesting observation about those mean-reverted prices of stock: when they change their direction from ascending to descending and vice versa, it is always a sharp change, like a spike, never a gentle curve. Outliers always produce sharp change. Exactly as Nassim Taleb claims. In order to understand better what I am talking about, you can have a look at one of the analytical graphs I used for my investment decisions, precisely with mean-reverted prices and transactional volumes, as regards Ethereum: https://discoversocialsciences.com/wp-content/uploads/2020/04/Slide5-Ethereum-MR.png .

In a manuscript that I wrote and which I am still struggling to polish enough for making it publishable (https://discoversocialsciences.com/wp-content/uploads/2021/01/Black-Swans-article.pdf ), I have identified three different modes of collective learning. In most of the cases I studied empirically, societies learn cyclically, i.e. first they produce big errors in adjustment, then they narrow their error down, which means they figure s**t out, and in a next phase the error increases again, just to decrease once again in the next cycle of learning. This is cyclical adjustment. In some cases, societies (national economies, to be exact) adjust in a pretty continuous process of diminishing error. They make big errors initially, and they reduce their error of adjustment in a visible trend of nailing down workable patterns. Finally, in some cases, national economies can go haywire and increase their error continuously instead of decreasing it or cycling on it.

I am reconnecting to my method of quantitative analysis, based on simulating with a simple neural network. As I did that little excursion into the realm of autoregressive conditional heteroscedasticity, I realized that most of the quantitative methods used today start from studying one single variable, and then increase the scope of analysis by including many variables in the dataset, whilst each variable keeps being the essential monad of observation. For me, the complex local state of the society studied is that monad of observation and empirical study. By default, I group all the variables together, as distinct, and yet fundamentally correlated manifestations of the same existential stuff happening here and now. What I study is a chain of here-and-now states of reality rather than a bundle of different variables.    

I realize that whilst it is almost axiomatic, in typical quantitative analysis, to phrase out the null hypothesis as the absence of correlation between variables, I don’t even think about it. For me, all the empirical variables which we, humans, measure and report in our statistical data are mutually correlated one way or another, because they all talk about us doing things together. In phenomenological terms, is it reasonable to assume that what we do in order to produce real output, i.e. our Gross Domestic Product, is uncorrelated with what we do with the prices of productive assets? Probably not.

There is a fundamental difference between discovering and studying individual properties of a social system, such as heteroskedastic autoregression in a variable, on the one hand, and studying the way this social system changes and learns as a collective. It means two different definitions of expected state. In most quantitative methods, the expected state is the mean value of one single variable. In my approach, it is always a vector of expected values.

I think I start nailing down, at last, the core scientific idea I want to convey in my book about collective intelligence. Studying human societies as instances of collective intelligence, or, if you want, as collectively intelligent structure, means studying chains of complex states. The Markov chain of states, and the concept of state space, are the key mathematical notions here.

I have used that method, so far, to study four distinct fields of empirical research: a) the way we collectively approach energy management in our societies, b) the orientation of national economies on the optimization of specific macroeconomic variables, c) the way we collectively manage the balance between urban land, urban density of population, and agricultural production, and d) the way we collectively learn in the presence of random disturbances. The main findings I can phrase out start with the general observation that in a chain of complex social states, we collectively tend to lean towards some specific aspects of our social reality. For lack of a better word, I equate those aspects to the quantitative variables I find them represented by, although it is something to dig into. We tend to optimize the way we work, in the first place, and the way we sell our work. Concerns such as return on investment or real output come as secondary. That makes sense. At the large scale, the way we work is important for the way we use energy, and for the way we collectively learn. Surprisingly, variables commonly associated with energy management, such as energy efficiency, or the exact composition of energy sources, are secondary.

The second big finding is the one I have just recalled a few paragraphs above, from that manuscript which I am still struggling to polish enough to make it publishable (https://discoversocialsciences.com/wp-content/uploads/2021/01/Black-Swans-article.pdf): societies display three distinct modes of collective learning – cyclical adjustment, where errors narrow down and swell back in recurrent cycles; continuous adjustment, where the error of adjustment keeps diminishing in a visible trend of nailing down workable patterns; and the haywire mode, where the error keeps increasing instead of decreasing or cycling.

The third big finding is about the fundamental logic of social change, or so I perceive it. We seem to be balancing, over decades, the proportions between urban land and agricultural land so as to balance the production of food with the production of new social roles for new humans. The countryside is the factory of food, and cities are factories of new social roles. I think I can make a strong, counterintuitive claim that social unrest, such as what is currently going on in the United States, for example, erupts when the capacity to produce food in the countryside grows much faster than the capacity to produce new social roles in the cities. When our food systems can sustain more people than our collective learning can provide social roles for, we have an overhead of individuals whose most essential physical subsistence is provided for, and yet who have nothing sensible to do in the collectively intelligent structure of the society.

Cultural classes

Some of my readers asked me to explain how to get in control of one’s own emotions when starting their adventure as small investors in the stock market. The purely psychological side of self-control is something I leave to people smarter than me in that respect. What I do to have more control is the Wim Hof method (https://www.wimhofmethod.com/ ) and it works. You are welcome to try. I described my experience in that matter in the update titled ‘Something even more basic’. Still, there is another thing, namely, to start with a strategy of investment clever enough to allow emotional self-control. The strongest emotion I have been experiencing on my otherwise quite successful path of investment is the fear of loss. Yes, there are occasional bubbles of greed, but they are more like childish expectations to get the biggest toy in the neighbourhood. They are bubbles, which burst quickly and inconsequentially. The fear of loss is there to stay, on the other hand.    

This is what I advise to do. I mean, this is what I didn’t do at the very beginning, and for lack of doing it I made some big mistakes in my decisions. Only after some time (around 2 months) did I figure out the mental framework I am going to present. Start by picking a market. I started with a dual portfolio, like 50% in the Polish stock market, and 50% in the big foreign ones, such as the US, Germany, France etc. Define the industries you want to invest in, like biotech, IT, renewable energies. Whatever: pick something. Study the stock prices in those industries. Pay particular attention to the observed losses, i.e. the observed magnitude of depreciation in those stocks. Figure out the average possible loss, and the maximum one. Now, you have an idea of how much you can lose in percentage terms. Quantitative techniques such as mean-reversion or extrapolation of past changes can help. You can consult my update titled ‘What is my take on these four: Bitcoin, Ethereum, Steem, and Golem?’ to see the general drift.
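
One simple way of sizing the average and the maximum observed loss, sketched in Python on a synthetic price series; it is an illustration, not a full risk model.

import numpy as np
import pandas as pd

def observed_losses(price: pd.Series) -> dict:
    # Average and maximum peak-to-trough loss (drawdown) observed in a price series
    running_peak = price.cummax()
    drawdown = (price - running_peak) / running_peak   # zero or negative
    return {
        "average_loss": float(drawdown[drawdown < 0].mean()),
        "maximum_loss": float(drawdown.min()),
    }

# Example on a synthetic price path
price = pd.Series(50 + np.cumsum(np.random.default_rng(3).normal(size=250)))
print(observed_losses(price))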

The next step is to accept the occurrence of losses. You need to acknowledge very openly the following: you will lose money on some of your investment positions, inevitably. This is why you build a portfolio of many investment positions. All investors lose money on parts of their portfolio. The trick is to balance losses with even greater gains. You will be experimenting, and some of those experiments will be successful, whilst others will be failures. When you learn investment, you fail a lot. The losses you incur when learning, are the cost of your learning.

My price of learning was around €600, and then I bounced back and compensated it with a large surplus. If I take those €600 and compare it to the cost of taking an investment course online, e.g. with Coursera, I think I made a good deal.

Never invest all your money in the stock market. My method is to take some 30% of my monthly income and invest it, month after month, patiently and rhythmically, by instalments. For you, it can be 10% or 50%, which depends on what exactly your personal budget looks like. Invest just the amount you feel you can afford exposing to losses. Nail down this amount honestly. My experience is that big gains in the stock market are always the outcome of many consecutive steps, with experimentation and the cumulative learning derived therefrom.

General remark: you are much calmer when you know what you’re doing. Look at the fundamental trends and factors. Look beyond stock prices. Try to understand what is happening in the real business you are buying and selling the stock of. That gives perspective and allows more rational decisions.  

That would be it, as regards investment. You are welcome to ask questions. Now, I shift my topic radically. I return to the painful and laborious process of writing my book about collective intelligence. I feel like shaking things off a bit. I feel I need a kick in the ass. The pandemic being around and social contacts being scarce, I need to be the one who kicks my own ass.

I am running myself through a series of typical questions asked by a publisher. Those questions fall in two broad categories: interest for me, as compared to interest for readers. I start with the external point of view: why should anyone bother to read what I am going to write? I guess that I will have two groups of readers: social scientists on the one hand, and plain folks on the other hand. The latter might very well have a deeper insight than the former, only the former like being addressed with reverence. I know something about it: I am a scientist.

Now comes the harsh truth: I don’t know why other people should bother about my writing. Honestly. I don’t know. I have been sort of carried away in the stream of my own blogging and research, and that question comes as alien to the line of logic I have been developing for months. I need to look at my own writing and thinking from outside, so as to adopt something like a fake observer’s perspective. I have to ask myself what is really interesting in my writing.

I think it is going to be a case of assembling a coherent whole out of sparse pieces. I guess I can enumerate, once again, the main points of interest I find in my research on collective intelligence and investigate whether at all and under what conditions the same points are likely to be interesting for other people.

Here I go. There are two, sort of primary and foundational, points. For one, I started my whole research on collective intelligence when I experienced the neophyte’s fascination with Artificial Intelligence, i.e. when I discovered that some specific sequences of equations can really figure stuff out just by experimenting with themselves. I did both some review of literature and some empirical testing of my own, and I discovered that artificial neural networks can be and are used as more advanced counterparts to classical quantitative models. In social sciences, quantitative models are about the things that human societies do. If an artificial form of intelligence can be representative of what happens in societies, I can hypothesise that said societies are forms of intelligence, too, just collective forms.

I am trying to remember what triggered in me that ‘Aha!’ moment, when I started seriously hypothesising about collective intelligence. I think it was when I was casually listening to an online lecture on AI, streamed from the Massachusetts Institute of Technology. It was about programming AI in robots, in order to make them able to learn. I remember one ‘Aha!’ sentence: ‘With a given set of empirical data supplied for training, robots become more proficient at completing some specific tasks rather than others’. At the time, I was working on an article for the journal ‘Energy’. I was struggling. I had an empirical dataset on energy efficiency in selected countries (i.e. on the average amount of real output per unit of energy consumption), combined with some other variables. After weeks and weeks of data mining, I had a gut feeling that some important meaning was hidden in that data, only I wasn’t able to put my finger precisely on it.

That MIT-coined sentence on robots triggered that crazy question in me. What if I return to the old and apparently obsolete claim of the utilitarian school in social sciences, and assume that all those societies I have empirical data about are something like one big organism, with different variables being just different measurable manifestations of its activity?

Why was that question crazy? Utilitarianism is always contentious, as it is frequently used to claim that small local injustice can be justified by bringing a greater common good for the whole society. Many scholars have advocated for that claim, and probably even more of them have advocated against. I am essentially against. Injustice is injustice, whatever greater good you bring about to justify it. Besides, being born and raised in a communist country, I am viscerally vigilant to people who wield the argument of ‘greater good’.

Yet, the fundamental assumptions of utilitarianism can be used from a different angle. Social systems are essentially collective, and energy systems in a society are just as collective. Talking about the energy efficiency of a society makes sense only when we are talking about the entire, intricate system of using energy. About 30% of the energy that we use is used in transport, and transport is, by definition, something that happens between people and places. Stands to reason, doesn’t it?

Studying my dataset as a complex manifestation of activity in a big complex organism begs for the basic question: what do organisms do, like in their daily life? They adapt, I thought. They constantly adjust to their environment. I mean, they do if they want to survive. If I settle for studying my dataset as informative about a complex social organism, what does this organism adapt to? It could be adapting to a gazillion of factors, including some invisible cosmic radiation (the visible one is called ‘sunlight’). Still, keeping in mind that sentence about robots, adaptation can be considered as actual optimization of some specific traits. In my dataset, I have a range of variables. Each variable can be hypothetically considered as informative about a task, which the collective social robot strives to excel at.

From there, it was relatively simple. At the time (some 16 months ago), I was already familiar with the logical structure of a perceptron, i.e. a very basic form of artificial neural network. I didn’t know – and I still don’t – how to program effectively the algorithm of a perceptron, but I knew how to make a perceptron in Excel. In a perceptron, I take one variable from my dataset as output, the remaining ones are instrumental as input, and I make my perceptron minimize the error on estimating the output. With that simple strategy in mind, I can make as many alternative perceptrons out of my dataset as I have variables in the latter, and that is exactly what I did with my data on energy efficiency. Out of sheer curiosity, I wanted to check how similar the datasets transformed by the perceptron were to the source empirical data. I computed Euclidean distances between the vectors of expected mean values, in all the datasets I had. I expected something foggy and pretty random, and once again, life went against my expectations. What I found was a clear pattern. The perceptron pegged on optimizing the coefficient of fixed capital assets per one domestic patent application was much more similar to the source dataset than any other transformation.

In other words, I created an intelligent computation, I made it optimize different variables in my dataset, and it turned out that, when optimizing that specific variable, i.e. the coefficient of fixed capital assets per one domestic patent application, that computation was the most faithful representation of the real empirical data.
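
The procedure, sketched in Python on synthetic data; the error-feedback rule and the parameters are simplifying assumptions, in the same spirit as the orientation-network sketch earlier in this text.

import numpy as np

def perceptron_transform(data, target_col, rounds=500, lr=0.01):
    # One perceptron per output variable: the target column is estimated from the
    # remaining ones, and the residual error is fed back into the working dataset.
    work = data.astype(float).copy()
    n_obs, n_var = work.shape
    inputs = [j for j in range(n_var) if j != target_col]
    w = np.random.uniform(-0.1, 0.1, len(inputs))
    for _ in range(rounds):
        for i in range(n_obs):
            a = np.tanh(work[i, inputs] @ w)
            e = work[i, target_col] - a
            w += lr * e * work[i, inputs]
            work[i, inputs] += lr * e
    return work

# Compare each transformation to the source data through the vectors of mean values
rng = np.random.default_rng(7)
data = rng.uniform(0, 1, size=(60, 5))      # synthetic stand-in for the empirical dataset
source_means = data.mean(axis=0)
distances = {
    col: float(np.linalg.norm(perceptron_transform(data, col).mean(axis=0) - source_means))
    for col in range(data.shape[1])
}
# The output variable whose transformation sits closest to the source data is the one
# the simulated collective intelligence appears to be optimizing.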

This is when I started wrapping my mind around the idea that artificial neural networks can be more than just tools for optimizing quantitative models; they can be simulators of social reality. If that intuition of mine is true, societies can be studied as forms of intelligence, and, as they are, precisely, societies, we are talking about collective intelligence.

Much to my surprise, I am discovering a similar perspective in Steven Pinker’s book ‘How The Mind Works’ (W. W. Norton & Company, New York, London, Copyright 1997 by Steven Pinker, ISBN 0-393-04535-8). Professor Steven Pinker uses a perceptron as a representation of the human mind, and it seems to be a bloody accurate representation.

That makes me come back to the interest that readers could have in my book about collective intelligence, and I cannot help referring to still another book by another author: Nassim Nicholas Taleb’s ‘The Black Swan: The Impact of the Highly Improbable’ (2010, Penguin Books, ISBN 9780812973815). Speaking from an abundant experience of quantitative assessment of risk, Nassim Taleb criticizes most quantitative models used in finance and economics as pretty much useless in making reliable predictions. Those quantitative models are good solvers, and they are good at capturing correlations, but they suck at predicting things based on those correlations, he says.

My experience of investment in the stock market tells me that those mid-term waves of stock prices, which I so much like riding, are the product of dissonance rather than correlation. When a specific industry or a specific company suddenly starts behaving in an unexpected way, e.g. in the context of the pandemic, investors really pay attention. Correlations are boring. In the stock market, you make good money when you spot a Black Swan, not another white one. Here comes a nuance. I think that black swans happen unexpectedly from the point of view of quantitative predictions, yet they don’t come out of nowhere. There is always a process that leads to the emergence of a Black Swan. The trick is to spot it in time.

F**k, I need to focus. The interest of my book for the readers. Right. I think I can use the concept of collective intelligence as a pretext to discuss the logic of using quantitative models in social sciences in general. More specifically, I want to study the relation between correlations and orientations. I am going to use an example in order to make my point a bit more explicit, hopefully. In my preceding update, titled ‘Cool discovery’, I did my best, using my neophytic and modest skills in programming, to translate the method of negotiation proposed in Chris Voss’s book ‘Never Split the Difference’ into a Python algorithm. Surprisingly for myself, I found two alternative ways of doing it: as a loop, on the one hand, and as a class, on the other hand. They differ greatly.

Now, I simulate a situation when all social life is a collection of negotiations between people who try to settle, over and over again, contentious issues arising from us being human and together. I assume that we are a collective intelligence of people who learn by negotiated interactions, i.e. by civilized management of conflictual issues. We form social games, and each game involves negotiations. It can be represented as a lot of loops, like these >>
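
Since the original screenshots are not reproduced here, a minimal stand-in for the loop version; the tactics, the issue and the rule for deciding when the issue is settled are illustrative assumptions, not my original code.

tactics = ["mirroring", "labelling", "calibrated_question", "tactical_empathy"]
issue = {"label": "who cleans the common space", "tension": 0.9}

for round_no, tactic in enumerate(tactics, start=1):
    issue["tension"] *= 0.7                  # each round is assumed to defuse some tension
    print(f"round {round_no}: {tactic} -> tension {issue['tension']:.2f}")
    if issue["tension"] < 0.4:               # arbitrary threshold for 'settled'
        print("issue settled")
        break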

… and a lot of classes, like those >>
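
Again as an illustrative stand-in, with attribute names and the settling rule assumed rather than quoted from my original script, the class version could look like this:

class Negotiation:
    def __init__(self, issue: str, tension: float = 0.9, threshold: float = 0.4):
        self.issue = issue
        self.tension = tension
        self.threshold = threshold
        self.history = []

    def apply(self, tactic: str) -> None:
        self.tension *= 0.7                  # the same assumed defusing effect as in the loop
        self.history.append((tactic, round(self.tension, 2)))

    def settled(self) -> bool:
        return self.tension < self.threshold

negotiation = Negotiation("who cleans the common space")
for tactic in ["mirroring", "labelling", "calibrated_question"]:
    negotiation.apply(tactic)
    if negotiation.settled():
        break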

In other words, we collectively negotiate by creating cultural classes – logical structures connecting names to facts – and inside those classes we ritualise looping behaviours.

Money being just money for the sake of it

I have been doing that research on the role of cities in our human civilization, and I remember the moment of first inspiration to go down this particular rabbit hole. It was the beginning of March 2020, when the first epidemic lockdown was imposed in my home country, Poland. I was cycling through the streets of Krakow, my city, from home to the campus of my university. I remember being floored at how dead – yes, literally dead – the city looked. That was the moment when I started perceiving cities as something almost alive. I started wondering how the pandemic would affect the mechanics of those quasi-living, urban organisms.

Here is one aspect I want to discuss: restaurants. Most restaurants in Krakow have turned into takeouts. In the past, each restaurant had a catering part of the business, but it was mostly for special events, like conferences, weddings and whatnot. Catering was sort of a wholesale segment in the restaurant business, and the retail was, well, the table, the napkin, the waiter, that type of story. That retail part was supposed to be the main one. Catering was an addition to that basic business model, which entailed a few characteristic traits. When your essential business process takes place in a restaurant room with tables and guests sitting at them, the place is just as essential. The location, the size, the look, the relative accessibility: it all played a fundamental role. The rent for the place was among the most important fixed costs of a restaurant. When setting up the business, one of the most important questions – and risk factors – was: “Will I be able to attract a sufficiently profuse clientele to this place, and to ramp up prices high enough to pay the rent for the place and still have a satisfactory profit?”. It was like a functional loop: a better place (location, look) meant a more select clientele and higher prices, which were needed to pay the high rent, etc.

As I was travelling to other countries, and across my own country, I noticed many times that the attributes of the restaurant as a physical place were partly a substitute for the quality of food. I know a lot of places where the customers used to pretend that the food was excellent just because said food was so strange that it just didn’t do to say it was crappy in taste. Those people pretended they enjoyed the food because the place was awesome. The awesomeness of the place, in turn, was largely based on the fact that many people enjoyed coming there, it was trendy, stylish, it was a good thing to show up there from time to time, just to show you have something to show to others. That was another loop in the business model of restaurants: the peculiar, idiosyncratic, gravitational field between places and customers.

In that business model, quite substantial expenses, i.e. the rent and the money spent on decorating and equipping the space for customers, were essentially sunk costs. The most important financial outlays you made to make the place competitive did not translate into any capital value in your assets. The only way to make such a translation was to buy the place instead of renting it. An advantageous, long-term lease was another option. In some cities, e.g. the big French ones, such as Paris, Lyon or Marseille, the market of places suitable for running restaurants, both legally and physically, used to be a special segment in the market of real estate, with its own special contracts, barriers to entry etc.

As restaurants turn into takeouts, amidst epidemic restrictions, their business model changes. Food counts in the first place, and the place counts only to the extent of accessibility for takeout. Even if I order food from a very fancy restaurant, I pay for food, not for fanciness. When consumed at home, with the glittering reputation of the restaurant taken away from it, food suddenly tastes different. I consume it much more with my palate and much less with my ideas of what is trendy. Preparation and delivery of food becomes the essential business process. I think it facilitates new entries into the market of gastronomy. Yes, I know, restaurants are going bankrupt, and my take on it is that places are going bankrupt, but people stay. Chefs and cooks are still there. Human capital, until recently 50/50 in importance together with the real estate aspect of the business, becomes definitely the greatest asset of the restaurant sector as it focuses on takeout. Broadly understood cooking skills, including the ability to purchase ingredients of good quality, become paramount. Equipping a business-scale kitchen is not really rocket science, and, what is just as important, there is a market for second-hand equipment of that kind. The equipment of a kitchen, in a takeout-oriented restaurant, is much more of an asset than the decoration of a dining room. The rent you pay, or the market price of the whole place in the real-estate market, are much lower, too, as compared to classical restaurants.

What restaurant owners face amidst the pandemic is the necessity to switch quickly, on a very short notice of 1 – 2 weeks, between their classical business model, based on a classy place to receive customers, and the takeout business model, focused on the quality of food and the promptness of delivery. It is a zone of uncertainty more than a durable change, and this zone is associated with different cash flows and different assets. That, in turn, means measurable risk. Risk, in economics and in finance, is essentially an amount much more than a likelihood. We talk about risk when we are actually sure that some adverse events will happen, and we even know what the total amount of adversity to deal with is going to be; we just don’t know where exactly that adversity will hit and who exactly will have to deal with it.

There are two basic ways of responding to measurable risk: hedging and insurance. I can face risk by having some aces up my sleeve, i.e. by having some alternative assets, sort of fall-back ones, which assure me a slightly softer landing, should the s**t which I hedge against really happen. When I am at risk in my in-situ restaurant business, I can hedge towards my takeout business. With time, I can discover that I am so good at the logistics of delivery that it pays off to hedge towards a marketing platform for takeouts rather than one takeout business. There is an old saying that you shouldn’t put all your eggs in the same basket, and hedging is the perfect illustration thereof. I hedge in business by putting my resources in many different baskets.

On the other hand, I can face risk by sharing it with other people. I can make a business partnership with a few other folks. When I don’t really care who exactly those folks are, I can make a joint-stock company with tradable shares of participation in equity. I can issue derivative financial instruments pegged on the value of the assets which I perceive as risky. When I lend money to a business perceived as risky, I can demand that it be secured with tradable notes AKA bills of exchange. All that is insurance, i.e. a scheme where I give away part of my cash flow in exchange for the guarantee that other people will share with me the burden of damage, if I come to consume my risks. The type of contract designated expressis verbis as ‘insurance’ is just one among many forms of insurance: I pay an insurance premium in exchange for the insurer’s guarantee to cover my damages. Restaurant owners can insure their epidemic-based risk by sharing it with someone else. With whom and against what kind of premium on risk? Good question. I can see like a shade of that. During the pandemic, marketing platforms for gastronomy, such as Uber Eats, swell like balloons. These might be the insurers of the restaurant business. They capitalize on the customer base for takeout. As a matter of fact, they can almost own that customer base.

A group of my students, all from France, as if by accident, had an interesting business concept: a platform for ordering food from specific chefs. A list of well-credentialed chefs is available on the website. Each of them recommends a few flagship recipes of theirs. The customer picks the specific chef and their specific culinary chef d’oeuvre. One more click, and the customer has that chef d’oeuvre delivered on their doorstep. Interesting development. Pas si con que ça, as the French say.     

Businesspeople have been using both hedging and insurance for centuries, to face various risks. When used systematically, those two schemes create two characteristic types of capitalistic institutions: financial markets and pooled funds. Spreading my capitalistic eggs across many baskets means that, over time, we need a way to switch quickly among baskets. Tradable financial instruments serve that purpose, and money is probably the most liquid and versatile among them. Yet, it is the least profitable one: flexibility and adaptability is the only gain that one can derive from holding large monetary balances. No interest rate, no participation in profits of any kind, no speculative gain on the market value. Just adaptability. Sometimes, just being adaptable is enough of a reason to forgo other gains. In the presence of a significant need for hedging risks, businesses hold abnormally large amounts of cash money.

When people insure a lot – and we keep in mind the general meaning of insurance as described above – they tend to create large pooled funds of liquid financial assets, which stand at the ready to repair any breach in the hull of the market. Once again, we return to money and financial markets. Whilst abundant use of hedging as strategy for facing risk leads to hoarding money at the individual level, systematic application of insurance-type contracts favours pooling funds in joint ventures. Hedging and insurance sort of balance each other.

Those pieces of the puzzle sort of fall together into a pattern. As I have been doing my investment in the stock market, all over 2020, financial markets seem to be puffy with liquid capital, and that capital seems to be avid for some productive application. It is as if money itself was saying: ‘C’mon, guys. I know I’m liquid, and I can compensate risk, but I am more than that. Me being liquid and versatile makes me easily convertible into productive assets, so please, start converting. I’m bored with being just me, I mean with money being just money for the sake of it’.