The painful occurrence of sometimes. An educational piece about insurance and financial risk.


My editorial on YouTube


Things happen, sort of, sometimes. You are never sure. Take a universe. Technically, there are so many conditions to meet if you want to have a decent universe that it seems a real blessing we have one. You need them electric charges, for example. We call them negative and positive, for lack of a better description, but the fact is that in reality we have two kinds of elementary particles, A and B, I mean protons and electrons, and each A repels any other A but is irresistibly attracted to any B. Same for B's. Imagine that 50% of B's start behaving like A's, i.e. they are attracted by other B's and repel A's. You would have 50% of matter devoid of atoms, as you need A and B to behave properly, i.e. to cross-mate A+B, and avoid any A+A or B+B indecency, in order to have an atom.

Kind of stressful. You could think about insurance. An insurance contract stipulates that the Insured pays the Insurer a Premium, and in exchange the Insurer guarantees to pay the Insured damages in case a specific unpleasant event happens in the future. We insure our cars against physical accident and theft, and the same goes for our houses and apartments. You can insure yourself when you travel, if you are afraid of medical bills in case something happens to your health on the road.

I learnt, with great surprise, when reading Fernand Braudel's magnificent "Civilisation and Capitalism", that insurance was the core business of the first really big financial institutions in Europe. Yes, we used to do banking. Yes, we did all that circulation in debt-based securities. Still, it was all sort of featherweight business. Apparently, the real heavyweights of finance appeared with the emergence of maritime insurance. When small, local bankers started offering the owners of commercial ships those new contracts, guaranteeing to pay for their damages in case there were any, in exchange for relatively small insurance premiums, it was, apparently, like the release of a new iPhone in the world of gadget-lovers: a craze. By offering such contracts to captains and ship owners, those local bankers rapidly swelled to the size of really big financial institutions.

Financial instruments always have an underlying behavioural pattern. Financial instruments are what they are because we, humans, do what we do. One of the things we do is selective individuation. There is a hurricane coming your way. You board your windows, you tie down your garden furniture, you lock yourself in your house, and you check the insurance of your house. You brace against the calamity as an individual. That hurricane is going to do damage. We know it. We know it is going to do damage to the whole neighbourhood. Technically, the entire local community is threatened. Still, we prepare as individuals.

As I check the insurance of my house, I can read that in exchange for a premium, which I have already paid, I can possibly receive coverage of my damages. Do I really want things to take such a turn as would make those damages material? With rare exceptions, not really. Yes, I have that insurance, but no, I don't want to use it. I just want to have it, and I want to avoid whatever event might make those damages payable.

I imagine other people in a similar position. All bracing for an imminent hurricane, all holding their individual insurance policies, and all sincerely expecting not to suffer any damage covered by those contracts.

This is selective individuation as applied to the foresight of future events. I know some bad s**t is heading my way, I know it is going to hit all around, and I just expect it is not going to hit me. As it is bound to hit all around, there is bound to be some aggregate damage. The hurricane is bound to destroy property for an amount of $5,000,000. There are 500,000 people under threat. Let's say that 24% of them think about insuring their property. How will an insurer approach the situation?

First of all, there are bound to be those $5,000,000 of damages. Seen from a financial point of view, it is a future, certain capital expenditure. I stress it very strongly: certain. What is just a probability at the individual level becomes a certainty at a given level of aggregation. What is the "given level"? Let's suppose there is a 1% likelihood that I step on a banana skin when walking down the street. With 100 of me, the expected number of slips becomes 1% × 100 = 1: at that scale, one occurrence is sure as taxes.
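That aggregation logic fits in a few lines of Python. The 1% and the 100 people are the banana-skin numbers from the text; the Monte Carlo part is just an illustrative check of my own:

```python
import random

p = 0.01   # individual probability of the mishap
n = 100    # population size

# At the aggregate level, the expected number of occurrences is:
expected_occurrences = p * n   # ~1 occurrence, practically certain

# Monte Carlo check: the aggregate count is far more predictable
# than any single individual's fate.
random.seed(42)
trials = 10_000
avg = sum(
    sum(random.random() < p for _ in range(n)) for _ in range(trials)
) / trials
print(expected_occurrences, round(avg, 2))
```

The point of the simulation is that whilst no individual knows whether they will slip, the average count across many hypothetical hurricanes (or streets) hovers tightly around the expected value.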

You have (I hope) already studied my lectures on equity-based securities, on debt-based ones, and on monetary systems. Thus, you already know three manners of securing capital for a future, certain outlay. Way #1: create an entity, endowed with equity in some assets, and then split that equity into tradable shares, which you sell to third parties. This way is good for securing capital for somewhat risky ventures, with a lot of open questions as to the strategy to adopt. Way #2: borrow, possibly through the issuance of promissory notes (oldie), or bonds, if you really mean business. This path is to follow when you can reasonably expect some cash flows in the future, with relatively low risk. Way #3: get hold of some highly liquid assets, somebody else's bonds, for example, and then create a system of tokens for payment, backed with the value of those highly liquid assets. This manner of securing capital is good for large communities, endowed with a pool of recurrent needs, and recurrent, yet diffuse, fears about the future.

With insurance, we walk down a fourth avenue. There are some future capital outlays that will compensate a clear, measurable, future loss that we know is bound to happen at a certain level of social aggregation. This aggregate loss decomposes into a set of individual s**t happening to individual people, in a heterogeneous manner. It is important, once again: what you can predict quite accurately is the aggregate amount of trouble, but it is much harder to predict individual occurrences inside this aggregate. What you need is a floating mass of capital, ready to rush into whatever individual situation needs compensating. We call this type of capital a pooled fund. Insurance is sort of the opposite of equity or debt. With the latter two, we expect something positive to happen. With the former, we know something bad is going to occur.

According to the basic logic of finance, you look for people who will put money into this pooled fund. Let's take those 500,000 people threatened by a hurricane and the resulting aggregate loss of $5,000,000. Let's say that 24% of them think about insuring their property, which makes 24% × 500,000 = 120,000. In order to secure the $5,000,000 we need, the basic scheme is to make those people contribute an average of $5,000,000 / 120,000 = $41.67 of insurance premium each. Now, if you take a really simplistic path of thinking, you will say: wait, $5,000,000 divided by 500,000 folks exposed makes $10 per capita, which is clearly less than the $41.67 of insurance premium to pay. Where is my gain? A rightful question, indeed. Tangible gains appear where the possible individual loss is clearly greater than the insurance premium to pay. Those $5,000,000 of aggregate loss in property are not made up as $10 times 500,000 people. They are rather made up as a 0.005% likelihood, for each of those people, of incurring an individual loss of $200,000 in property. That makes 0.005% × 500,000 (remember the banana skin?) = 25. Thus, we have 25 people who will certainly lose property in that hurricane. We just don't know which 25 out of the general population of 500,000 they will be. If you are likely, just likely, to lose $200,000, will you not pay $41.67 of insurance premium? Sounds more reasonable, doesn't it?

You are not quite following my path of thinking? Calm down, I don't always do either. Still, this time, I can explain. There are 500,000 people, right? There is a hurricane coming and, according to all the science we have, it is going to hit badly 0.005% of that population, i.e. 25 households, and the individual loss will be $200,000 on average. That makes 25 × $200,000 = $5,000,000. In the total population of 500,000, some people acknowledge this logic, some others not really. Those who do number 120,000. Each of them is aware they could be among the 25 really harmed. They want to sign an insurance contract. Their contracts taken together need to secure the $5,000,000. Thus, each of them has to contribute $41.67 of insurance premium.
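The whole premium arithmetic above condenses into a short sketch, using exactly the figures from the text:

```python
population = 500_000
p_loss = 0.00005               # a 0.005% chance of being badly hit
avg_individual_loss = 200_000  # dollars

expected_hits = p_loss * population                    # ~25 households
aggregate_loss = expected_hits * avg_individual_loss   # ~$5,000,000

insured = round(0.24 * population)   # 120,000 people sign contracts
premium = aggregate_loss / insured   # ~$41.67 each
print(round(expected_hits), round(aggregate_loss), insured, round(premium, 2))
```

Note how the premium depends only on the aggregate loss and the number of contributors, not on who, individually, will be hit.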

At this very basic level, the whole business of insurance is sort of a balance between the real, actual risk we are exposed to, and our behavioural take on that risk. Insurance works in populations where the subset of people who really need capital to compensate damages is much smaller than the population of those who are keen to contribute to the pooled fund.

Interestingly, people are not really keen on insuring things that happen quite frequently. There is a high likelihood, although lower than absolute certainty, that someone will stick an old chewing gum on a bus seat, and then you sit on that chewing gum and have your brand-new woollen pants badly stained. Will you insure against it? Probably not. Not exactly the kind of catastrophe one should insure against. There is a certain type of risk we insure against. It needs to be spectacular and measurable and, at the same time, sufficiently uncertain to give us a false sense of security. That kind of trouble is certainly not going to happen to me; still, just in case, I buy that insurance.

We can behaviourally narrow the catalogue of insurable risks by comparing insurance to hedging, which is an alternative way to shield against risk. When I hedge against a risk, I need to know what amount of capital, in some assets of mine, is exposed to possible loss. When I know that, I acquire other assets, devoid of the same risk, for a similar amount of capital. I have that house, worth $200,000, in a zone exposed to hurricanes. I face the risk of seeing my house destroyed. I buy sovereign bonds of the Federal Republic of Germany, for another $200,000. Rock solid, these ones. They will hold value for years, and can even bring me some profits. My portfolio of German bonds hedges the risk of having my house destroyed by a hurricane.

Thus, here is my choice for shielding my $200,000 house against hurricane-related risks. Option #1: I hedge with equity in valuable assets worth $200,000 or so. Option #2: I insure, i.e. I buy a conditional claim on an insurer, for the price of $41.67. Hedging looks sort of more solid, and indeed it is. Yet you need a lot of capital to hedge efficiently. For every penny exposed to a definite risk, you need to hedge with another penny free of that risk. Every penny doubles, sort of. Besides, the assets you hedge with can have their own intrinsic risk, or, if they don't, like German sovereign bonds, you need to pay a juicy price for acquiring them. Insurance is cheaper than hedging.
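Putting the capital requirements of the two options side by side, with the same figures as above (the comparison is purely illustrative):

```python
house_value = 200_000        # capital exposed to the hurricane risk
hedge_capital = house_value  # hedging: every exposed penny doubled
insurance_premium = 41.67    # the conditional claim on the insurer

capital_ratio = hedge_capital / insurance_premium
print(round(capital_ratio))  # hedging ties up thousands of times more capital
```

The ratio is why, in this story, insurance is the instrument of those without spare capital: the premium buys conditional protection at a tiny fraction of the hedge's cost.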

My intuitive perception of the financial market tells me that if somebody has enough capital to hedge efficiently against major risks, and there are assets in view to hedge with, people will hedge rather than insure. They insure when they either have no big capital reserves at all, have run out of such reserves with the hedging they have already done, or have no assets to hedge with. When I run a pharmaceutical company and am launching a new drug at high risk, I will probably hedge with another factory that makes plain, low-risk aspirin. That makes sense. It is a sensible allocation of my capital. On the other hand, when I have a fleet of company cars worth $5,000,000, I would rather insure than hedge.

This is what people do in the presence of risk: they insure, and they hedge. They create pooled capital funds for insurance, and they make differentiated portfolios of investments for hedging. Once again: this is what people do, like really. This is how financial markets work, and this is a big reason why they exist.

As I talk about how it works, let's have a closer look. It is finance and it is business, so what we need is a balance sheet. When, as an insurer, I collect $5,000,000 in insurance premiums to cover $5,000,000 of future damages, I have a potential liability. Here it becomes a little tricky. Those damages have not taken place yet, and I do not need to pay them now. I am not liable yet to the people I signed those insurance contracts with. Still, the hurricane is going to hit, it is going to destroy property for $5,000,000, and then I will be liable. All in all, the accounting principles made specifically for the insurance business impose upon insurers an obligation to book the insured value as a liability.

Now, a big thing. I mean, really big. Economically, most of what we call the 'public sector' or 'political organisations' are pooled funds. The constitutional state can be seen as a huge pooled fund. We pay our taxes into it, and in exchange we receive security, healthcare, infrastructure, bullshit, the enlightened guidance of our leaders, etc. Just some of us actually get to experience that payoff and, strangely enough, we don't always want to. Yes, we all want to drive on good roads, but we don't want to be in a situation where the assistance of a police officer is needed. Most of us want to have nothing to do with prisons, which are also part of what this pooled fund finances.

There is a pattern in the way pooled funds work. That pattern boils down to the residual difference between the insurance premiums actually collected and the actual damages to be paid. A successful insurer manages to market his contracts in sufficient volume to have a surplus of premiums collected over the damages to be paid. The part of the premiums collected that is supposed to pay for damages is the technical reserve of the pooled fund. The residual surplus of premiums collected, over the technical reserve for damages, is de facto an investment fund, which brings a financial yield.
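Schematically, with hypothetical figures (the $6,000,000 of premiums and the 4% yield are assumptions of mine, not numbers from the text):

```python
premiums_collected = 6_000_000   # hypothetical total premiums
expected_damages = 5_000_000     # damages bound to be paid

# The part earmarked for damages is the technical reserve;
# whatever exceeds it is, de facto, an investment fund.
technical_reserve = expected_damages
investment_fund = premiums_collected - technical_reserve

assumed_yield = 0.04   # hypothetical 4% annual financial yield
financial_income = investment_fund * assumed_yield
print(investment_fund, financial_income)
```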

Most types of insurance are based on the scarcity of occurrence across space. Hurricanes do damage to just some of us, but many are willing to insure against them.

There is a special type of insurance, usually observable in those special contracts called 'life insurance'. Life insurance contracts are about death and severe illness rather than about life. When you think about it, those contracts insure a type of event that is certainly going to happen. In 'ordinary' insurance we pool funds for events scarce across space: we don't know where the hurricane is going to hit. In life insurance, we pool funds for financing occurrences 100% bound to happen to everyone; we just don't know when.

I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project BeFund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your suggesting me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Brace yourselves, dear Austrians


My editorial on YouTube


I am reflecting on what intelligence is, and I reflect on it as I gain experience with a very simple form of artificial intelligence: a multi-layer perceptron. You will say: "Again? Haven't you bored yourself to death with that thing?" Well, no. Not at all. I sense, intuitively, that I am only beginning this intellectual journey.

So, what am I doing? I start by taking a set of empirical data. Each empirical variable is a distinct stimulus phenomenon. I assign them random weighting coefficients and, on the basis of these weighted values, I compute, through a neural activation function, the value of an outcome variable. I compare this value, by subtraction, with an expected reference value. I multiply the difference by the local derivative of the neural activation function and treat the result as a local error. I thus produce a momentary variation of a certain logical structure and compare that variation to a reference state. The specificity of each local variation lies in the unique mix of weighting coefficients assigned to the stimulus phenomena.
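One such experimental variation could be sketched in code as follows. This is a minimal sketch: the sigmoid activation and the sample numbers are my assumptions, not the author's exact setup.

```python
import random
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def one_variation(inputs, expected):
    # random weighting coefficients for the stimulus phenomena
    weights = [random.uniform(0, 1) for _ in inputs]
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    # local error: the difference times the local derivative
    # of the activation function
    local_error = (expected - output) * output * (1.0 - output)
    return output, local_error

random.seed(0)
out, err = one_variation([0.2, 0.5, 0.1], expected=0.8)
print(round(out, 3), round(err, 3))
```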

At this stage, I am experimenting with perception proper. That is what perception is. I am confronted with a certain number of phenomena of reality. An important trait of a perceptron is that it is me, its user, hence an External Intelligence, who determines the nature of those phenomena. It is me who draws the distinction between variable A and variable B, and so on. In real perception, phenomena get defined through experimentation. I walk through that big heterogeneous thing and try to find the best way of not constantly bumping into those smaller things. In the end, it seems appropriate to distinguish the big thing as 'forest' and the small things as 'trees', and to set precise rules for telling one tree from another, as well as for telling apart the free spaces between them.

The perceptron I use is therefore a predetermined phenomenological structure. I have just said that it is me, its user, who predetermines it, but that is not entirely true. My input variables correspond to the quantitative data I can assemble. In the logical structure of my perceptron, I reproduce that of publicly accessible databases, such as that of the World Bank.

What I experiment with, through the perceptron, is therefore a phenomenological structure predetermined by those databases. If the phenomenological structure is given as exogenous, what is left to experiment with? Let's go step by step. The random weighting coefficients my perceptron assigns to the input variables reflect the relative, temporary importance of the phenomena those variables represent. The whole, i.e. the perceptron, is supposed to do something, to produce a change. The importance of the phenomena thus reflects what they do within that whole. Does a given phenomenon play an important role, or not quite? Let's go and see. Let's assign that phenomenon roles of varying importance and observe the final result, i.e. the outcome variable.

Here, it is worth recalling something at the frontier between maths and real life. The values of input variables represent the temporary state of the processes that produce those variables. My height in centimetres reflects the process of my body's growth, preceded by the process of mixing my genetic material across the generations of my ancestors. Energy consumption per capita reflects the temporary result of everything that has happened so far and had any influence whatsoever on said energy consumption. When the perceptron experiments with the importance of the respective roles of the input variables, it experiments with the possible influence of different sequences of events.

The learning of a perceptron should lead to a state of minimised local error, where the combination of the respective influences of the input variables makes possible an asymptotic approach of the neural activation function towards the expected value of the outcome variable. This learning outcome is a sort of optimal perception relative to what I want. Sir desires an energy efficiency of the national economy close to $10 (constant 2011) per kilogram of oil equivalent consumed? Nothing easier, replies the perceptron. Sir takes a population bigger by 3 million inhabitants, thus by more than 8%, with a zest more in terms of energy consumption per capita, although said capita will consume a bit less Gross Domestic Product per year, and voilà! Sir has his (almost) $10 per kilogram of oil equivalent. Would Sir desire anything else?

The perceptron thus simulates perception, but in fact it does something else: it optimises the functional connection between the variables. This type of neural network should rather be called an 'experitron' or something of the sort, since the essence of its workings is experimentation tending towards minimal error. How does it optimise? Through the mechanism of backpropagation. The perceptron computes the local error of estimation and then, i.e. in the next round of experimentation, it adds that error to the values of the input variables. In the next round of experimentation, the value of each input variable subject to random weighting will be its previous value plus the error recorded in the previous round.
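The loop of "error fed back into the inputs" can be sketched as follows. The 5000 rounds mirror the text; the sigmoid activation and the sample data are my assumptions:

```python
import random
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

random.seed(1)
inputs, expected = [0.2, 0.5, 0.1], 0.8

outputs = []
for _ in range(5000):
    weights = [random.uniform(0, 1) for _ in inputs]
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    error = (expected - output) * output * (1.0 - output)
    # backpropagation, in this scheme: one and the same local error
    # is added to every input variable for the next round
    inputs = [x + error for x in inputs]
    outputs.append(output)

avg_late = sum(outputs[-500:]) / 500   # hovers near the expected value
print(round(avg_late, 2))
```

Because the weights are redrawn at random every round, the output keeps fluctuating; what the error-feedback stabilises is the neighbourhood the output fluctuates in.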

One and the same error is added to the quantitative estimate of each distinct phenomenon of reality. Adding the error is vaguely equivalent to the memory of an intelligent entity. The error is a piece of information, and I want to store it so that it serves my learning. In the basic version of my perceptron, each phenomenon of reality, i.e. each input variable, absorbs this information in exactly the same way. It is a memory focused on failures, and there lies the big difference with respect to human intelligence. The latter likes learning from successes. A successful attempt immediately gratifies our nervous system, and that works distinctly better than learning through failures.

Question: how to build a neural network that learns from successes rather than from failures? A failure is relatively easy to represent mathematically: it is an error of estimation. A success means I tried to advance in some direction and it worked. Say I have a variable dear to my heart, such as the energy efficiency of the national economy, i.e. those dollars of Gross Domestic Product obtained with 1 kilogram of oil equivalent. I hope to advance by X dollars but, thanks to my undeniable genius, as well as with the help of divine Providence, I have advanced by Y dollars and Y > X. Then the difference Y – X is the measure of my success. Up to this point, it looks identical to what the classical perceptron does: it subtracts.

Question: how to use the difference of a subtraction as something to amplify as a success, instead of minimising it as the measure of a failure? Answer: you need a neural activation function that amplifies one particular type of deviation, the positive one. The first idea that comes to my mind is to put into the perceptron a logical formula of the type "if Y – X > 0, then do A, but if Y – X ≤ 0, then do B". It looks like child's play at first sight. Only, if it has to repeat 5000 times, as I usually do with this perceptron, it slooows dooown terribly. I think of something simpler: what if I immediately computed the exponential value of the local error? If Y – X > 0, the exponential value will be distinctly higher than for Y – X ≤ 0. I test it with Austria and the data on its energy efficiency. No, it doesn't work: I obtain an outcome variable rigorously equal to the expected value after barely 30 rounds of experimentation, which is practically the speed of light in perceptron learning, but the input variables take ridiculously high values. Look: there would have to be 42 million of those Austrians instead of 8 million. Unthinkable.

Fine. Like it or not, I will have to go for that formula "if A, then B, otherwise go and get…". Brace yourselves, dear Austrians. I observe and I reason. What you really accomplished between 1990 and 2014 was to pass from $9.67 of GDP per kilogram of oil equivalent to $11.78, thus a difference of $2.11. If my perceptron returns a positive error greater than that accomplishment, I let it backpropagate unhindered. On the other hand, everything that is not a success is a failure, so any error below that reference threshold, including a negative error, I divide by two before backpropagating it. I amplify successes and I reduce the memorised impact of failures. The result? Everything becomes smaller. Energy efficiency after the 5000 experimental rounds is barely higher than in 1990, at $9.93; the population shrinks to less than 3 million; energy consumption and GDP per capita are cut by 4.
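The asymmetric feedback rule just described can be sketched as a tiny function. The threshold value used in the example call is purely illustrative; in the text it is Austria's real 1990–2014 gain in GDP per kilogram of oil equivalent:

```python
def feedback(error, success_threshold):
    # A positive error above the reference accomplishment counts as
    # a success and backpropagates intact; everything else counts as
    # a failure and is halved before backpropagation.
    if error > success_threshold:
        return error
    return error / 2.0

threshold = 1.0  # illustrative value only
print(feedback(3.0, threshold), feedback(0.5, threshold), feedback(-0.6, threshold))
```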

Well then! When I spoke of learning through successes, I expected something different. Fine. I put a muzzle on my perceptron: while letting it learn from successes, as in the case just cited, I add the component of learning on the mutual cohesion between variables. See "Ensuite, mon perceptron réfléchit" or "Joseph et le perceptron" to learn more about this particular trait. It refers to the collective intelligence of ants. Not very ambitious, but it can work. The result? Very similar to the one obtained with that particular function (learning on cohesion) combined with basic backpropagation, i.e. towards minimal failure. I need to think it over.

I keep delivering good, almost new science, just slightly dented in the process of conception. I remind you that you can download the business plan of the BeFund project (also available in English). You can also download my book "Capitalism and Political Power". I want to use crowdfunding to give myself a financial base for this effort. You can support my research financially, according to your best judgment, through my PayPal account. You can also register as my patron on my Patreon account. If you do so, I will be grateful for indicating two important things to me: what kind of reward do you expect in exchange for your patronage, and what stages would you like to see in my work?

That Important Commission: on the evaluation regulation


My editorial on YouTube


After months of a break, I return to writing in Polish on my blog. As usual, I write in order to put my thoughts in order. I got involved in a project of putting into practice the so-called evaluation regulation, i.e. the Regulation of the Minister of Science and Higher Education of February 22nd, 2019, on the evaluation of the quality of scientific activity. If you click the link, you will download from my blog's archive the version I am currently working with. I hope, by the way, that this version will not undergo any significant changes. In any case, occasional references to the Law 2.0, i.e. the so-called Constitution for Science, come in handy as well. My work with this little marvel of legislation consists in proposing a logical structure for an IT tool which, in turn, will make it possible to register scientific output at my home university, the Andrzej Frycz Modrzewski Krakow University (Krakowska Akademia im. Andrzeja Frycza Modrzewskiego).

In this post, I am trying to do something that outrages my wife. No, not what you think. I am trying to translate the content of the regulation into a logical structure. My wife is a lawyer and believes that such exercises saw off the branch the whole profession is sitting on. What would happen if people suddenly understood the content of regulations in an intuitive, practical way? Scary, right? Well, that is exactly what I am doing. I venture into the land of dread.

I have found that legal acts are best read from the end. This time it worked, too. Annex no. 2 to the regulation sets out the "Algorithm for comparing the scores granted to an evaluated entity conducting scientific activity within a given scientific or artistic discipline for meeting the basic evaluation criteria with the reference values for those criteria". A long name. Commands respect. Let's see what is inside. Inside, it says that when an evaluated entity X receives some score Oi(X) according to the i-th evaluation criterion, we compare it with a reference value Oi(R). We compare in two steps. First we look at which is greater, then we subtract so as to avoid negative results. Negative results are troublesome. You have to flip the sign or take the absolute value. In any case, if Oi(X) ≥ Oi(R), then we compute ∆O = Oi(X) – Oi(R). If it is the other way round, that is, when Oi(X) is smaller than Oi(R), then ∆O = Oi(R) – Oi(X).

We have our delta ∆O, and that delta has its own reference value G, called the threshold of full exceedance and set by the Commission. Which Commission? Well, an important one, surely. If Oi(X) ≥ Oi(R) and at the same time ∆O = Oi(X) – Oi(R) ≥ G, then the result of comparing the evaluated entity with the reference value for the i-th evaluation criterion is Pi(X,R) = 1, and that is the best possible news. If Oi(X) ≥ Oi(R) but ∆O = Oi(X) – Oi(R) < G, then we are sorry, brother, but your Pi(X,R) = ∆O/G. It can get even sorrier: when Oi(X) < Oi(R), we compute ∆O = Oi(R) – Oi(X), and if on top of that ∆O ≥ G, then Pi(X,R) = -1. A swamp. The swamp can be a bit shallower, though. If Oi(X) < Oi(R) and ∆O = Oi(R) – Oi(X) < G, then Pi(X,R) = -∆O/G.
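The whole Annex no. 2 comparison algorithm condenses into a few lines, transcribing the four cases directly:

```python
def P(o_x, o_r, g):
    """Comparison score Pi(X,R) for one evaluation criterion:
    o_x is the entity's score Oi(X), o_r the reference value Oi(R),
    g the full-exceedance threshold G set by the Commission."""
    delta = abs(o_x - o_r)
    if o_x >= o_r:
        return 1.0 if delta >= g else delta / g
    return -1.0 if delta >= g else -delta / g

print(P(10, 5, 3), P(7, 5, 3), P(2, 5, 3), P(4, 5, 3))
```

The sample calls walk through all four branches: full exceedance, partial exceedance, full shortfall, partial shortfall.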

In other words, the evaluation result according to the i-th criterion falls within the interval -1 ≤ Pi(X,R) ≤ 1. Each such partial evaluation result goes into the Mother of All Formulas, namely:


V(X,R) = W1*P1(X,R) + W2*P2(X,R) + W3*P3(X,R).


So the i-th evaluation criterion can be criterion 1, 2 or 3? Could have said so right away, instead of complicating things with that "i". Criterion P1 is the scientific level of the activity conducted, criterion P2 covers the financial effects of scientific research or development work, while criterion P3 denotes the impact of scientific activity on the functioning of society and the economy. I will say more about the content of these criteria in a moment. For now, a formal point: W1, W2 and W3 are the weights of criteria P1 ÷ P3, and these weights differ depending on the field of science, as in the table below.



Weights W for the evaluation criteria:
P1 – scientific level of the activity conducted (weight W1): humanities, social sciences and theology – 70; natural sciences, medical and health sciences – 60; engineering, technical and agricultural sciences – 50; artistic disciplines – 80
P2 – financial effects of scientific research or development work (weight W2): humanities, social sciences and theology – 10; natural sciences, medical and health sciences – 20; engineering, technical and agricultural sciences – 35; artistic disciplines – 0
P3 – impact of scientific activity on the functioning of society and the economy (weight W3): humanities, social sciences and theology – 20; natural sciences, medical and health sciences – 20; engineering, technical and agricultural sciences – 15; artistic disciplines – 20
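The Mother of All Formulas, with the weights from the table, could be transcribed as follows (a sketch; the dictionary keys are my own shorthand for the four fields):

```python
# Weights (W1, W2, W3) per field of science, from the regulation's table
WEIGHTS = {
    "humanities_social_theology": (70, 10, 20),
    "natural_medical_health": (60, 20, 20),
    "engineering_agricultural": (50, 35, 15),
    "artistic": (80, 0, 20),
}

def V(field, p1, p2, p3):
    # V(X,R) = W1*P1(X,R) + W2*P2(X,R) + W3*P3(X,R)
    w1, w2, w3 = WEIGHTS[field]
    return w1 * p1 + w2 * p2 + w3 * p3

print(V("humanities_social_theology", 1.0, 1.0, 1.0))  # best possible case
print(V("artistic", 0.5, -1.0, 0.0))
```

Note that for artistic disciplines the P2 weight is zero, so the financial-effects criterion simply drops out of V there.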



Now, about criteria P1 ÷ P3. The scientific level of the activity, i.e. P1, is defined by §8 of the regulation and is assessed on the basis of: scientific monographs, the editing of such monographs or chapters therein, scientific articles, patents for inventions, protection rights for utility models, and plant breeders' exclusive rights to plant varieties. The vast majority of partial scores granted for particular scientific achievements are fixed point values.

The scientific level of activity (P1) has another important trait: it is predominantly assessed on the basis of the individual scientific achievements of the particular persons who meet the conditions set out in §11 of the regulation. That is the case for articles, monographs, their editing and chapters therein. In the case of intellectual property, i.e. patents, rights to utility models and to plant varieties, we take into account only the intellectual property vested in the evaluated entity. If I publish an article, its value translates into a score for my home university. However, if I hold a patent, its scientific value does not transfer to my university. The university itself must hold the patent in order to use it in the evaluation.

A quick review of the point values set out in §12 ÷ §19 of the regulation shows that criterion P1 gets pumped up above all with publications: articles and monographs. Patents can be valuable, but a university would need a truly huge number of them for their combined score to be comparable with what can be earned by quietly writing. If the evaluated entity has, say, 30 research staff, they can rack up as many as 6000 P1 points a year. The same 30 staff could, over a year, manage 1 or 2 patents, i.e. a maximum of 200 points, if these are patents granted under the European procedure.

Artistic disciplines form a somewhat separate category in the P1 assessment. The scoring of artistic achievements set out in §20 of the regulation refers to Annex 1, entitled "Types of artistic achievements taken into account in the assessment of the artistic level of scientific activity in the field of artistic creation, and the number of points awarded for them". Frankly, this is not my field. I can only form a general view. If I see in that annex 200 points for an outstanding achievement in directing a television production, while at the same time 200 points can be earned for a solid monograph on film directing, my intuition tells me that the latter 200 points are much easier to earn than the former.

The humanities, social sciences and theology also get distinct treatment under criterion P1, specifically for monographs published in this area. Firstly, full authorship of a monograph in this area, if it was published by a Proper Publisher designated by the Minister under art. 267(2)(2) of the Constitution for Science act, can earn as many as 300 points. In other areas of science the maximum is 200 apiece. Secondly, there is the notion of a monograph with a so-called expert assessment. If someone does something with a scientific monograph in the humanities, social sciences or theology (writes it, edits it, or knocks out a little chapter), but the monograph appears with a publisher lacking the Ministry's Anointment, one can then apply to the Chairman of the Commission (that important Commission) to ask the Minister (you know, the Minister) to appoint a panel of experts who will assess the monograph. Then there are more points.

Generally, with criterion P1 it is like in the picture below.


Right, now P2, i.e. the financial effects of scientific research or development work. This is covered by §22 of the evaluation regulation. Paragraph 22... Interesting... Has anyone read that novel? But, ad rem. The most favourable scoring, namely 4 evaluation points for every 12 500 zł spent on financing scientific research and development work, applies when several conditions are met. Firstly, the project must be financed by the European Research Council, on a competitive basis. Secondly, the project must be carried out by a group of entities that includes the evaluated entity, where the leader is or was an entity from outside the higher education and science system. Meaning some company, or some government. Thirdly, the 12 500 zł counts only within the tasks performed in such a group by the evaluated entity. In short: we make contact with a corporation with ambitions in new technologies, we run a joint research and development project financed by the European Research Council, and we take care not to be, by any chance, the leader of that project. Fourthly, the project should concern the social sciences, humanities or theology. So: I persuade a big company into a joint project in which we will definitely not be the leader, and at the same time we will be looking for proof of the existence of God. Oh, and there must be concrete development work, e.g. a God-presence detector. We persuade the European Research Council to announce a call for such projects and we obtain funding under that call. From there it is all downhill: 1 000 000 zł for such a thing and we have 320 evaluation points.

The worst option is to run a research project in sciences other than the humanities, social sciences and theology, either on one's own (without a consortium) or in a group of entities led by the evaluated entity or by another entity belonging to the higher education and science system. Then we get 1 evaluation point for every 50 000 zł spent on the project. One million złoty yields a mere 20 points.
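The gap between the two extremes can be sanity-checked in a couple of lines of Python. The rates are the ones quoted above; everything else is arithmetic, and the helper function is mine.

```python
# P2 scoring: best case (ERC-funded consortium project with a non-academic
# leader, in humanities/social sciences/theology) pays 4 points per
# 12 500 zł; worst case (solo project outside those fields) pays 1 point
# per 50 000 zł.

def p2_points(spent_zl: float, points_per_tranche: float, tranche_zl: float) -> float:
    return points_per_tranche * (spent_zl / tranche_zl)

best = p2_points(1_000_000, 4, 12_500)   # → 320.0 points
worst = p2_points(1_000_000, 1, 50_000)  # → 20.0 points
print(best, worst)
```

A 16-fold difference for spending exactly the same million złoty.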

And finally the third evaluation criterion, P3, "the impact of scientific activity on the functioning of society and the economy", i.e. §23 of the evaluation regulation. The evaluated entity may present to the Commission so-called impact descriptions, i.e. accounts of what good has happened under the influence of the research conducted and/or popularised by that entity. Between 2 and 5 impact descriptions may be presented, depending on the number of persons counted into the so-called number N. I will write more about the number N later; for now, briefly: according to §7 of the evaluation regulation, the number N is the number of employees of the evaluated entity who conduct scientific activity in a given scientific or artistic discipline and who have filed the two declarations described, respectively, in art. 343(7) and art. 265(5) of the act.

A typical university faculty usually has no more than 100 persons in the number N, so it can present two impact descriptions. In the most optimistic configurations, if an impact of international scope and significance can be demonstrated, each is worth 100 points, i.e. 200 in total. However, if we consult the POL-on database, we get 164 000 employees in 656 entities, i.e. an average of 250 persons per entity. Such a statistically average entity can, under the regulation, submit 4 descriptions of impact on the functioning of society and the economy. Each of these descriptions can receive from 20 to 100 points. That gives a total range from 80 to 400 points.

Summa summarum, two strategies emerge for a university aiming at the highest possible scientific category under the regulation: one oriented towards scientific publications, and one oriented towards cooperation with business in developing new technologies, financed with public funds awarded through competitive calls. For example, the 2019 budget of the National Centre for Research and Development, under the Smart Growth Operational Programme, is 3.09 billion zł. Depending on how intelligently research units use this total budget, it is a "market" of 61 800 ÷ 988 800 P2 points to be shared among those interested. According to the POL-on system, there are 396 universities and 260 research units in Poland, 656 players in total. These P2 points thus average out at anywhere from 94 points a year per entity up to slightly over 1500 points a year. As I said, it depends on the strategy of acquiring research funds.

Let's now compare this with the P1 points, for the quality of scientific activity. The POL-on database contains 1 million publications, and the database has been running since 2011. That gives an average of 125 000 publications a year, which works out at about 190 publications per entity. In the optimistic variant of highly scored publications, one can count an average of 65 points per publication. With somewhat smaller successes it would be an average of 20 P1 points apiece. As a result, the annual point output falls, on average, in the range from 3800 to 12 350 points.

I dug around a bit in the POL-on database in search of numbers. The numbers serve me for mind-reading, I mean reading the mind of the Commission (the important one) at the moment of the next evaluation. I ventured something like a model of the evaluation, i.e. an attempt to imagine how the numbers from POL-on and the numbers from the evaluation regulation may work in practice. The two tables below present the results of this simulation. I computed the probable range of the point output a typical evaluated entity can produce in a year and multiplied it by 3 years. Then I divided that point output by the average number N, which in my calculations comes out at N = 250. I fed these quantities into the equation V(X,R) = W1*P1(X,R) + W2*P2(X,R) + W3*P3(X,R) and computed something like a confidence interval. I assumed that the reference value R will be somewhere in the middle of that interval, and that the excellence threshold G will be computed as half of that interval. And out came what you can see below.
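To make my assumption about R and G explicit, here is the two-line computation behind it, in Python. The interval bounds used in the example are my own estimates for the engineering and technical sciences; the function name is mine.

```python
# R is assumed to be the midpoint of the probable score interval,
# and G half of the interval's width. The bounds below are my estimates
# for engineering and technical sciences.

def reference_and_threshold(lower: float, upper: float) -> tuple[float, float]:
    r = (lower + upper) / 2   # probable reference value R: interval midpoint
    g = (upper - lower) / 2   # probable excellence threshold G: half-width
    return r, g

r, g = reference_and_threshold(2_333.88, 8_112.00)
print(r, g)  # → R ≈ 5222.94, G ≈ 2889.06
```

Running it for the other fields' bounds reproduces the reference values in the second table below.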



Evaluation criterion | Average possible point output | Per 1 person in the number N
P1 | 11 400 ÷ 37 050 | 45.6 ÷ 148.2
P2 | 282 ÷ 4 500 | 1.128 ÷ 18
P3 | 240 ÷ 1 200 | 0.96 ÷ 4.8




| Humanities, social sciences and theology | Exact and natural sciences, medical and health sciences | Engineering and technical sciences, and agricultural sciences | Artistic disciplines
Lower bound of the reference interval | 3 222.48 | 2 777.76 | 2 333.88 | 3 667.20
Upper bound of the reference interval | 10 650.00 | 9 348.00 | 8 112.00 | 11 952.00
Probable reference value R | 6 936.24 | 6 062.88 | 5 222.94 | 7 809.60
Probable excellence threshold G | 3 713.76 | 3 285.12 | 2 889.06 | 4 142.40



More and more money just in case. Educational about money and monetary systems


My editorial on YouTube


Here comes the next, hopefully educational piece in Fundamentals of Finance. This time it is about money. Money strictly speaking. This is probably one of the hardest. Money is all around us, whether we have it or not. How to explain something so pervasive? I think the best way is to stick to facts, in the first place. I take my wallet. What’s inside? There is some cash, there is a debit card, and two credit cards. Oh, yes, and there is that payment app, SkyCash, on my phone. All that, i.e. cash + credit cards + debit card + payment app, is the money I am walking around with.

How to explain things which seem really hard to explain? One possible way is to ask THOSE questions. I mean those stupid, out-of-place questions. One such question is just knocking at the door of my consciousness. Are all these forms of money in my wallet just different forms of essentially the same thing, or are they rather essentially different things which just take a similar form? I mean, if this is all money, why is there not just one form of money? Why are there many forms? Why don't I use just cash, or just a payment app? See? If anyone was in any doubt as to whether I can ask a really stupid question, here is the answer. Yes, I can.

Now, I need the really hard answer, I mean the answer to that stupid question. I observe things and try to figure something out. I observe my credit card, for example. What is that? It is a technology that allows me to tap into a credit account that a bank has allowed me. Which means that the bank studied me, and compared me to a bunch of other people, and they decided that I have a certain borrowing capacity, i.e. I am able to generate a sufficient stream of income over time to pay back a certain amount of credit. When I use a credit card, I use my future income. If this is a technology, there must have been need for its massive use. We usually make technologies for things that happen recurrently. Banks recurrently assess the amount of credit they can extend to non-bank people, and they take care of securing some kind of technology to do so. Here comes an important distinction in plastic, namely that between a credit card and a debit card. A debit card is a technology that allows me to tap into my own current account, which is different from my credit card account. I trust the bank with recording a certain type of transactions I make. These transactions are transfers to and from my current account. The bank is my bookkeeper, and, as far as a current account strictly speaking is concerned, it is a smart bookkeeper. I cannot make more transfers from my current account than I receive onto it. It is bookkeeping with a safety valve. Banks recurrently keep the record of financial transactions that people enter into, they take care of preventing a negative balance on those transactions, and the temporary bottom line of such transactions is the current balance on the same people's current accounts.


Good, now comes cash money. Those notes and coins I have in my wallet are any good for payment because a special bank, the Central Bank of my country, printed and minted them, put them in circulation, and guarantees their nominal (face) value. Guaranteeing means that the Central Bank can be held liable for the total nominal value of all the notes and coins in circulation. This means, in turn, that the Central Bank needs to hold assets of similar liquidity, just to balance the value of cash guaranteed. When I use cash, I indirectly use a fraction of those liquid assets held by the Central Bank. What kind of assets have a liquidity similar to money? Well, money, of course. The Central Bank can extend credit to commercial banks, and thus hold claims on the money those banks hold. The Central Bank can also buy the cash money guaranteed by other central banks, mostly the reliable ones. We have another behavioural pattern: governments form central banks, those central banks hold some highly liquid assets, and they use those highly liquid assets to back a certain amount of cash they put in circulation.

Now, there is that beast called « FinTech » and all them Payment Apps we can use, like Apple Wallet. I can use a payment app in two ways: I can connect a credit card to it, or I can directly hold a monetary balance in it. Either way, I need to register an account and give it some liquidity. When I pay through a connection with my credit card, the Payment App is just an extension of the same technology as the one in the card. On the other hand, when I hold a monetary balance with a payment app, that balance is a claim of mine on the operator of the app. That means the operator has a liability to me, and they need to hold liquid assets to balance that liability. By the way, when a bank holds my current account, the temporary balance on that account is also my claim on the bank, and the bank needs to hold some highly liquid assets to balance my current balance with them. Here comes an even more general behavioural pattern. Some institutions, called financial institutions, like commercial banks, central banks, and operators of FinTech utilities, are good at assessing the future liquidity of other agents, and they hold highly liquid assets that allow them to be liable to third parties for holding, and keeping operational, specific accounts of liabilities: current accounts and cash in circulation.

Those highly liquid assets held by financial institutions need to be similar in their transactional pattern to the liabilities served. They need to be various forms of money. A bank can extend me a credit card because another bank extends them an even bigger credit line. A central bank can maintain cash in circulation because it can trust the value of other currencies in circulation. Looks like a loop? Well, yes, 'cause it is a loop. Monetary systems are made of trusted agents, trusted precisely for their capacity to maintain a reliable balance between what they owe and what they have claims on. Historically, financial institutions emerged as agents who always pay their debts.


Good, this is what them financial institutions do about money. What do I do about money? I hold it and I spend it. When I think about it, I hold much more than I spend. Even if I count just my current wallet, i.e. all those forms of liquidity I walk around with, it is much more than I need for my current expenses. Why do I hold something I don't immediately need? Perhaps because I think I might need it. There is some sort of uncertainty ahead of me, and I more or less consciously assume that holding more money than I immediately need can help me face its contingencies. Those contingencies might be positive or negative. I might have to pay for sudden medical care, or I might be willing to enter into some sudden business deals. Some of the money I hold corresponds to a quantity of goods and services I am going to purchase immediately, and another part of my money is there just to assure that I can buy more if I need to.

When I focus on the money I hold just in case, I can see another distinction. I just walk around with some extra money, and I hold a different balance of extra money in the form of savings, i.e. I have it stored somewhere, and I assume I don’t spend it now. When I use money to meet uncertainty, the latter is scalable and differentiated. There are future expenditures, usually in a more distant future, which I attempt to provide for by saving. There are others, sort of more diffuse and seemingly more immediate, which I just hold some money for in my current wallet. We use money to meet uncertainty and risk, and we adapt our use of money to our perception of that uncertainty and risk.

Let's see how Polish people use money. To that end, I use the statistics available with the National Bank of Poland as well as those published by the World Bank. You can see a synthetic picture in the two graphs below. In the first one, you can see the so-called broad money (all the money we hold) in relation to GDP, the Gross Domestic Product. The GDP is supposed to represent the real amount of goods and services supplied in the country over 1 year. Incidentally, the way we compute GDP implies that it reflects the real amount of all final goods and services purchased over one year. Hence, the proportion between money supplied and GDP is that between the money we hold and the things we buy. You can see, in the graph, that in Poland (it is much the same all around the world, by the way) we tend to hold more and more money in relation to the things we buy. Conclusion: we hold more and more money just in case.

In the second graph below, you can see the structure of broad money supplied in Poland, split into the so-called monetary aggregates: cash in circulation, current account money, and term deposits in money. You can see current account money gently taking over the system, with the cash money receding, and deposits sort of receding as well, still holding a larger position in the system. It looks as if we were adapting our way of using money to a more and more intense perception of diffuse, hardly predictable risks.


I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book 'Capitalism and Political Power'. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful if you suggest two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Then, my perceptron thinks it over


My editorial on YouTube


I am going to the bottom of things in my current research. As you may know, I am in the process of filling a big gap in my scientific education: I am learning the rudiments of artificial intelligence (see "Joseph et le perceptron", for example). I think I have already understood a few things, above all the general principle of the neural network called a "perceptron". At the very bottom of modern science there is this general observation: things occur as momentary, local instances of underlying structures. Say I am sitting here at my computer, writing, and it suddenly occurs to me that I could demolish the entire universe and rebuild it anew. I must do it instantly, in a fraction of a second. After all, the total absence of a universe can pose problems. So I demolish and rebuild my entire universe. I do it many times in a row, say a thousand times. Question: after each reconstruction of the universe, am I the same Krzysztof Wasniewski I was in the previous universe? Well, everything I know of science tells me: almost. In each consecutive instance of the universe I am almost the same me. Small details differ. This is precisely what a perceptron-type neural network does. It creates many possible instances of the same logical structure, and each particular instance is original in its small details.

Question: what would a perceptron do in my place, i.e. one that discovered it is an instance among many others, temporarily present in a momentary universe? That depends on its activation function. If my perceptron works with the sigmoid function, it accumulates experience and produces a more and more recurrent series of instances of itself, while loosening the hinges of its structure a little. It becomes both more flexible and more repetitive. If, on the other hand, the activation function is the hyperbolic tangent, the perceptron becomes very touchy and at the same time rigid. It produces instances of itself in sudden jolts, it swings far from its initial state, but afterwards it returns very close to that initial state, and in the long run it does not really learn much.

As I experiment with the perceptron, I have discovered two types of learning: the rational and systematic sigmoid on the one hand, and the violent and emotional hyperbolic tangent on the other. These two paths of processing experience have their mathematical expression, and right now I am trying to understand the link between these mathematical structures and economics. The more research and teaching I do in the social sciences, the more convinced I am that the behavioural approach is the future of economics. Economic models reflect and generalise our understanding of human behaviour. My observations about the perceptron translate into a long-standing debate in economics: should the state intervene to soften quantitative shocks in markets, or should it abstain from intervention and let markets learn through shocks?

Applying the two neural functions, the sigmoid and the hyperbolic tangent, lets me reconstruct these two alternatives. The sigmoid is perfect for simulating a market protected from shocks. The logical structure of the sigmoid function makes it behave like a bumper: it absorbs any excess of stimulation and returns a smoothed, domesticated result. The hyperbolic tangent, by contrast, is hyper-reactive in its logical structure and seems a good representation of a market stripped of its safety belts.

So I take the financial model I am developing for the energy market (the solution that allows small, local suppliers of renewable energy to build a financial base while creating durable bonds with their customers; see "More vigilant than sigmoid", for example) and I use the perceptron to experiment. Here there is a methodological question I had so far hesitated to ask directly and which I must answer one way or another: what exactly does the perceptron represent? What type of experimental environment can be associated with the way the perceptron learns? My idea is to draw a parallel with collective intelligence, i.e. to defend the assumption that the perceptron represents the learning which really takes place in a social structure. Question: how to give this assumption intellectual grounding? The most obvious path goes through swarm theory, which I have already discussed in "Joseph et le perceptron" as well as in "Si je permets plus d'extravagance". If the perceptron is able to vary its internal adaptation function, i.e. to pass between different degrees of association between its variables, then this perceptron can be an acceptable representation of collective intelligence.

That much is done. I have already checked. My perceptron produces variation in its adaptation function and, what's more, it produces it in different ways depending on the activation function used, be it the sigmoid or the hyperbolic tangent. From the standpoint of swarm theory, my perceptron is collectively intelligent.

This said and done, I have already discovered that the sigmoid (i.e. the market assisted by anticyclical institutions) learns more (it accumulates more experimental memory) and produces a financial model richer in terms of finance and investment. It produces higher energy prices, too. By contrast, the hyperbolic tangent (the market without protection) learns less (accumulates less experimental memory) and produces a final market with less financial capital on offer and less investment, while keeping energy prices at a relatively lower level. Moreover, the hyperbolic tangent produces more risk, in the sense that it yields more disparity between individual experimental instances than the sigmoid does.

Right, a moment of reflection: what is the point? I mean: who can make practical use of what I am discovering? Let's think: I am doing this whole bazaar of research with the help of a neural network in order to explore the behaviour of a social structure in the face of a new financial scheme in the market of renewable energies. Logically, this science should serve to introduce and develop such a scheme. In finance, science serves to predict the prices and quantities of financial instruments in circulation, as well as to predict the aggregate value of the market as a whole. I have already experimented with the values that the capitalisation "K" of crowdfunding for renewable energies can take, and I have discovered that this capitalisation can vary in the presence of a constant price fork "PQ – PB". To know what these symbols mean, the best thing is to consult "De la misère, quoi". I can therefore simulate the possible states of the market, i.e. the proportion between the capitalisation K and energy prices, with two alternative scenarios in mind: a market protected against cyclical variation, or a market fully exposed to such variations. Next, the experimentation I have conducted so far has allowed me to pin down the possible variations of energy consumption, as well as of gross investment in energy generation capacity, once again with or without cyclical variations.

So I have three variables (capitalisation of crowdfunding, gross investment, and quantity of energy consumed) whose variation I can simulate with my perceptron. It is therefore something similar to the equations of quantum physics: an analytical tool that allows the formulation of very precise hypotheses about the future state of the market.

Two ideas come to my mind. Firstly, I can take the data I had already used in the past to develop my EneFin concept (see, for example, "Je recalcule ça en épisodes de chargement des smartphones"), i.e. the profiles of individual European countries, and see where exactly my perceptron can go from these starting positions. Will there be convergence? Will there be a recurrent pattern of change?

Secondly, I can (and want to) return to the research I did in the autumn of 2018 on the energy efficiency of national economies. At the time, I had discovered that when I transform the raw empirical data so as to simulate the occurrence of a sudden shock in the 1980s, followed by the absorption of that shock, this particular model has distinctly more explanatory power than an approach based on extracting long-term trends. See "Coefficients explosifs court-terme" to learn more on this subject. It may be that the empirical sample I used tells a complex story (agitated, hyperbolic-tangent-style change combined with something more poised, sigmoid-style) and that the hyper-tangential story is the stronger of the two. Then the possible scenario to build with the sigmoid function could represent something like a much calmer, more negotiated collective learning, with anticyclical mechanisms.

This second thing, energy efficiency, inspired me. No sooner said than done. Each national economy shows a different energy efficiency, so I assume that each country is a distinct intelligent swarm. I take two: Poland, my home country, and Austria, which is almost next door, and I adore their capital, Vienna. I add a third country, the Philippines, for sharp contrast. I take the data I had already used in that draft article and I feed it into a perceptron. I treat energy efficiency as such, i.e. the coefficient "Q/E" of GDP per 1 kg of oil equivalent of final energy consumption, as my output variable. The tensor of input variables consists of: i) the coefficient "K/PA" of fixed capital per one domestic patent application ii) the coefficient "A/Q" of aggregate depreciation divided by GDP iii) the coefficient "PA/N" of the number of domestic patent applications per 1000 inhabitants iv) the coefficient "E/N" of energy consumption per capita v) the coefficient "U/N" of urban population as a fraction of the country's total population vi) the coefficient "Q/N" of GDP per capita vii) the share "R/E" of renewable energies in total energy consumption and finally viii) two scale variables, "Q" for aggregate GDP and "N" for population.

Each variable contains 25 annual observations, from 1990 to 2014, among which I identify the maximum value and then use it as the basis of standardisation. A perceptron only feels truly at ease in a standardised universe. The standardised "Q/E" for year ti is thus equal to Q/E(ti)/max(Q/E) over Q/E(1990) ≤ x ≤ Q/E(2014), and so on for each variable. I take the 25 standardised values of each variable and give them to my perceptron as learning material. The perceptron starts by assigning to each standardised value a random weight W(x), and for each observation it computes ∑W(x)*x, i.e. the sum of products "weight times local standardised value".

The ∑W(x)*x is then used in the neural activation function: the sigmoid y = 1 / (1 + e^(-∑W(x)*x)) or the hyperbolic tangent y = (e^(2*∑W(x)*x) – 1) / (e^(2*∑W(x)*x) + 1). For each observation period I also compute the local adaptation function, which starts with computing the Euclidean distances between a given variable and the others. Euclidean distance? Well, I take the value of one variable and that of another, for example Q/E(1995) and U/N(1995). Of course, we are talking only about standardised values. So I compute Q/E(1995) – U/N(1995) and, to make sure there is no compromising negativity, I take the square root of the square of the arithmetic difference, i.e. {[Q/E(1995) – U/N(1995)]^2}^0.5. With M variables in total, and 1 ≤ j ≤ M – 1, for each of them I compute V(xi) = {xi + ∑[(xi – xj)^2]^0.5}/M, and this V(xi) is precisely the adaptation function.

I use V(xi) in a third version of the perceptron, the modified sigmoid

y = 1 / (1 + e^(-∑V(x)*W(x)*x))

thus a form of intelligence which assumes that the more an input variable stands apart from the other input variables, the more important it is in my perceptron’s perception. It is the old story of telling a tree from a grizzly bear in a forest. A bear stands out sharply against a forest background, and it is better to pay particular attention to it. The perceptron based on the modified sigmoid contains a tiny bit of deep learning: it learns its own internal cohesion (the Euclidean distance between variables).
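The three activation functions, the plain sigmoid, the hyperbolic tangent, and the fitness-weighted modified sigmoid, can be sketched like this (a bare-bones illustration of the formulas above, not the full experimental setup):

```python
import math

def sigmoid(h):
    # y = 1 / (1 + e^(-h)), where h = sum of W(x)*x
    return 1.0 / (1.0 + math.exp(-h))

def tanh(h):
    # y = (e^(2h) - 1) / (e^(2h) + 1); this naive form overflows for
    # large |h|, which is fine for standardised inputs in (0, 1]
    return (math.exp(2 * h) - 1.0) / (math.exp(2 * h) + 1.0)

def modified_sigmoid(xs, ws, vs):
    # y = 1 / (1 + e^(-sum of V(x)*W(x)*x)):
    # each input is weighted by its fitness V(x) on top of its weight W(x)
    h = sum(v * w * x for v, w, x in zip(vs, ws, xs))
    return sigmoid(h)
```

With all fitness values set to 1, the modified sigmoid collapses back into the plain sigmoid, which makes the comparison between the three versions clean.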

Whatever the version of the perceptron, it produces a partly random result and then compares it to the expected value of the output variable. The comparison is complex. First of all, the perceptron computes Er(Q/E; t) = y(t) – Q/E(t), and it notices, of course, that it got it wrong, i.e. that Er(Q/E; t) = y(t) – Q/E(t) ≠ 0. That’s normal, with all those random weights. Then, my perceptron reflects on the importance of that local error, and as reflection is not exactly my perceptron’s strong suit, it prefers to take refuge in maths and to compute the local derivative y’(t). All in all, the perceptron takes the local error, multiplies it by the local derivative of the neural activation function, and this compound value y’(t–1)*[y(t–1) – Q/E(t–1)] is then added to the value of each input variable in the next round of experimentation.

The first 25 rounds of experimentation are based on the real data for the years 1990 through 2014, and thus they are learning material. The subsequent rounds, on the other hand, are pure learning: the perceptron generates its input variables as the values of the preceding round plus y’(t–1)*[y(t–1) – Q/E(t–1)].
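A minimal Python sketch of that feedback loop is below. The data, the number of inputs, and one design choice are illustrative assumptions, not the exact experimental setup: in particular, the text leaves open what the perceptron compares its output to once the empirical years run out, and in this sketch I assume it keeps aiming at the last observed value of Q/E.

```python
import math
import random

def sigmoid(h):
    # numerically stable sigmoid: y = 1 / (1 + e^(-h))
    if h >= 0:
        return 1.0 / (1.0 + math.exp(-h))
    z = math.exp(h)
    return z / (1.0 + z)

random.seed(4)

# Made-up standardised series: 25 "real" years of 3 inputs and of the output Q/E
real_inputs = [[random.random() for _ in range(3)] for _ in range(25)]
real_qe = [random.uniform(0.5, 1.0) for _ in range(25)]

weights = [random.random() for _ in range(3)]
feedback = 0.0          # y'(t-1) * [y(t-1) - Q/E(t-1)] from the previous round
xs = real_inputs[0]

for t in range(5000):
    if t < 25:                      # learning material: real data plus feedback
        xs = [x + feedback for x in real_inputs[t]]
        target = real_qe[t]
    else:                           # pure learning: previous inputs plus feedback
        xs = [x + feedback for x in xs]
        target = real_qe[-1]        # assumption: keep comparing to the last real Q/E
    y = sigmoid(sum(w * x for w, x in zip(weights, xs)))
    error = y - target              # Er(Q/E; t)
    feedback = y * (1.0 - y) * error  # local derivative of the sigmoid times the error
```

Note that the derivative of the sigmoid can be expressed through its own output, y*(1 - y), which is why the feedback term needs nothing but the previous round’s result.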

Right, let’s apply this. I iterate my perceptron over 5000 rounds of experimentation. First thing: is this thing intelligent at all? On the grounds of swarm theory, I assume that if the general cohesion of the data, measured as 1/V(x), i.e. as the inverse of the fitness function, changes significantly over the 5000 trials, there is hope for intelligence. Below, I present the cohesion graphs for Poland and the Philippines. Looks promising. The sigmoid dissociates in a systematic way: the blue line on the graphs goes resolutely down. The modified sigmoid starts by plunging towards a level of cohesion much lower than the initial value, and then it dances and oscillates. The hyperbolic tangent takes good care not to look too intelligent, yet it produces a bit of additional chaos. All three versions of the perceptron seem (formally) intelligent.

The interesting thing is that the change in cohesion differs strongly from one country to another. I don’t know what to make of it yet. Now, the results of the 5000 experimental rounds. There are regularities common to all three countries studied. The sigmoid and the hyperbolic tangent produce a state with energy efficiency equal (or almost equal) to that of 2014, i.e. of the last year of the empirical data, and all the other variables strongly on the rise: more population, more GDP, more innovation, more renewable energies etc. The modified sigmoid also adds to those variables, but on the other hand it produces an energy efficiency clearly lower than that of 2014, roughly equal to that of 2008 - 2009.

I am trying to understand. Do these results mean that there is a possible path of social evolution where we become sort of more creative and chaotic, i.e. where the cohesion between our economic decisions decreases, and at the same time we become bigger and more intense in terms of consumption of energy and goods?

I keep delivering good science to you, almost brand new, just a bit dented in the process of conception. I remind you that you can download the business plan of the BeFund project (also available in an English version). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself a financial base for this effort. You can support my research financially, according to your best judgment, via my PayPal account. You can also register as my patron on my Patreon page. If you do so, I will be grateful for indicating two important things to me: what kind of reward do you expect in exchange for the patronage, and what phases would you like to see in my work?

Unconditional claim, remember? Educational about debt-based securities



Here comes another piece of educational content regarding the fundamentals of finance, namely a short presentation of debt-based securities. As I discuss that topic below, I will compare those financial instruments to equity-based securities, which I already discussed in « Finding the right spot in that flow: educational about equity-based securities ».

In short, debt-based securities are financial instruments which transform a big chunk of debt, thus a big obligatory contract, into a set of small, tradable pieces, giving that debt more liquidity.

In order to understand how debt-based securities work in finance, it is a good thing to put a few clichés on their head and make them hold that stance. First of all, we normally associate debt with a relation of power: the CREDITOR, or the person who lends to somebody else, has a dominant position over the DEBTOR, who borrows. Whilst sometimes true, it is true just sometimes, and it is just one point of view. Debt can be considered as a way of transferring capital from entity A to entity B. Entity A has more cash than they currently need, whilst B has less. Entity A can transfer the excess of cash to B, only they need a contractual base to do it in a civilized way. In my last educational piece, regarding equity-based securities, I presented a way of transferring capital in exchange for a conditional claim on B’s assets, and for a corresponding decisional power: that would be investing in B’s equity. Another way is to acquire an unconditional claim on B’s future cash flows, and this is debt. Historically, both ways have been used and developed into specific financial instruments.

Anyway, the essential concept of debt-based securities is to transform one big, obligatory claim of one entity onto another entity into many small pieces, each expressed as a tradable deed (document). How the hell is it possible to transform a debt – thus future money that is not there yet – into securities? Here come two important, general concepts of finance: liquidity, and security. Liquidity, in financial terms, is something that we spontaneously associate with being able to pay whatever we need to pay in the immediate. The boss of a company can say they have financial liquidity when they have enough cash in their balance sheet to pay the bills currently on the desk. If some of those bills cannot be paid (not enough cash), the boss can say ‘Sorry, not enough liquidity’.

You can generalize from there: liquidity is the capacity to enter into new economic transactions, and to fulfil obligations resulting from such transactions. In markets that we, humans, put in place, there is a peculiar phenomenon to notice: we swing between various levels of required liquidity. In some periods, people in that market will be like immersed in routine. They will repeat the same transactions over and over again, in recurrent amounts. It is like an average Kowalski (the Polish equivalent of the English average Smith, or the French average Dupont) paying their electricity bills. Your electricity bill comes in the form of a six-month plan of instalments. Each month you will have to pay the same, fixed amount, which results from the last reading of your electricity meter. That amount is most likely to be similar to the amounts from previous six-month periods, unless you have just decided to grow some marijuana and you need extra electricity for those greenhouse lamps. If you manage to keep your head above water, in day-to-day financial terms, you have probably incorporated those payments for electricity into your monthly budget, more or less consciously. You don’t need extra liquidity to meet those obligations. This is the state of a market when it runs on routine transactions.

Still, there are times when a lot of new business is to be done. New technologies are elbowing their way into our industry, or a new trade agreement has been signed with another country, or the government had the excellent idea of forcing every entity in the market to equip themselves with that absolutely-necessary-thingy-which-absolutely-incidentally-is-being-marketed-by-the-minister’s-cousin. When we need to enter into new transactions, or when we just need to be ready for entering them, we need a reserve of liquidity, i.e. we need additional capacity to transact. Our market has entered into a period of heightened need for liquidity.

When I lend someone a substantial amount of money in a period of low need for liquidity, I can just sit and wait until they pay me back. No hurry. On the other hand, when I lend during a period of increased need for liquidity, my approach is different: I want to recoup my capital as soon as possible. My debtor, i.e. the person whom I have lent to, cannot pay me back immediately. If they could, they would not need to borrow from me. Stands to reason. What I can do is to express that lending-borrowing transaction as an exchange of securities against money.

You can find an accurate description of that link between actual business, its required liquidity, and all the lending business in: Adam Smith – “An Inquiry Into The Nature And Causes Of The Wealth of Nations”, Book II: Of The Nature, Accumulation, and Employment of Stock, Chapter IV: Of Stock Lent At Interest: “Almost all loans at interest are made in money, either of paper, or of gold and silver; but what the borrower really wants, and what the lender readily supplies him with, is not the money, but the money’s worth, or the goods which it can purchase. If he wants it as a stock for immediate consumption, it is those goods only which he can place in that stock. If he wants it as a capital for employing industry, it is from those goods only that the industrious can be furnished with the tools, materials, and maintenance necessary for carrying on their work. By means of the loan, the lender, as it were, assigns to the borrower his right to a certain portion of the annual produce of the land and labour of the country, to be employed as the borrower pleases.”

Here, we come to the concept of financial security. Anything in the future is subject to uncertainty and risk. We don’t know how exactly things are going to happen. This generates risk. Future events can meet my expectations, or they can do me harm. If I can sort of divide both my expectations, and the possible harm, into small pieces, and make each such small piece sort of independent from other pieces, I create a state of dispersed expectations, and dispersed harm. This is the fundamental idea of a security. How can I create mutual autonomy between small pieces of my future luck or lack thereof? By allowing people to trade those pieces independently from each other.

It is time to explain how the hell can we give more liquidity to debt by transforming it into securities. First things first, let’s see the typical ways of doing it: a note, and a bond. A note, AKA promissory note, or bill of exchange, in its most basic appearance is a written, unconditional promise to pay a certain amount of money to whoever presents the note on a given date. You can see it in the graphic below.

Now, those of you, who, hopefully, paid attention in the course of microeconomics, might ask: “Whaaait a minute, doc! Where is the interest on that loan? You told us: there ain’t free money…”. Indeed, there ain’t. Notes were invented long ago. The oldest ones we have in European museums date back to the 12th century A.D. Still, given what we know about the ways of doing business in the past, they had been used even further back. As you might know, it was frequently forbidden by the law to lend money at interest. It was called usury, it was considered at least as a misdemeanour, if not a crime, and you could even be hanged for that. In the world of Islamic Finance, lending at interest is forbidden even today.

One of the ways to bypass the ban on interest-based lending is to calculate how much money that precise interest will make on that precise loan. I lend €9 000 at 12%, for one year, and it makes €9 000 * 12% = €1 080. So I lend €9 000, for one year, and I make my debtor liable for €10 080. Interest? Who’s talking about interest? It is ordinary discount!

Discount is the difference between the nominal value of a financial instrument (AKA face value), and its actual price in exchange, thus the amount of money you can have in exchange of that instrument.
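The arithmetic of hiding interest inside a discount, using the numbers from the example above, can be sketched as:

```python
# Hiding pre-computed interest in the face value of a note:
# lend 9 000 EUR at 12% for one year -> face value of 10 080 EUR, "no interest".
principal = 9_000.00
rate = 0.12

interest = principal * rate          # 1 080 EUR of pre-computed interest
face_value = principal + interest    # 10 080 EUR written on the note

# Seen from the other side, the same deal expressed as a discount:
# the lender pays 9 000 EUR for a paper with a face value of 10 080 EUR.
discount = face_value - principal
discount_rate = discount / face_value   # discount as a share of the face value
```

Note the asymmetry of the two rates: the same €1 080 is 12% of the principal but only about 10.7% of the face value, which is why interest rates and discount rates on the same deal never look identical.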

A few years ago, I found that same pattern in an innocently-looking contract, which was underpinning a loan that me and my wife were taking for 50% of a new car. The person who negotiated the deal at the car dealer’s announced joyfully: ‘This is a zero-interest loan. No interest!’. Great news, isn’t it? Still, as I was going through the contract, I found that we have to pay, at the signature, a ‘contractual fee’. The fee was strangely precise, I mean there were grosze (Polish equivalent of cents) after the decimal point. I did my maths: that ‘contractual fee’ was exactly and rigorously equal to the interest we would have to pay on that loan, should it be officially interest-bearing at ordinary, market rates.

The usage of discount instead of interest points at an important correlate of notes, and of debt-based securities in general: risk. That scheme with pre-calculated interest included in the face value of the note works only when I can reliably predict when exactly the debtor will pay back (buy the note back). Moreover, as the discount is supposed to reflect pre-calculated interest, it also reflects that part of the interest rate which accounts for credit risk.

There are 1000 borrowers, who borrow from a nondescript number of lenders. Each loan bears a principal (i.e. nominal amount) of €3 000, which makes a total market of €3 000 000 lent and borrowed. Out of those 1000, a certain number is bound to default on paying back. Let it be 4%. It makes 4% * 1000 * €3 000 = €120 000, which, spread over the whole population of borrowers, makes €120 000 / 1000 = €120, or €120/€3 000 = 4%. Looks like a logical loop, and for a good reason: you cannot escape it. In a large set of people, some will default on their obligations. This is a fact. Their collective default is an aggregate financial risk – credit risk – which has to be absorbed by the market, somehow. The simplest way to absorb it is to make each borrower pay a small part of it. When I take a loan, in a bank, the interest rate I pay always reflects the credit risk in the whole population of borrowers. When I issue a note, the discount I have to give to my lender will always include the financial risk that recurrently happens in the given market.
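That logical loop, aggregate losses spread back over all borrowers, is short enough to write down directly (the numbers are the ones from the paragraph above):

```python
# Spreading aggregate credit risk over all borrowers:
borrowers = 1000
principal = 3_000          # EUR per loan
default_rate = 0.04        # 4% of borrowers never pay back

market_size = borrowers * principal                    # 3 000 000 EUR lent in total
expected_loss = default_rate * borrowers * principal   # 120 000 EUR that vanishes
loss_per_borrower = expected_loss / borrowers          # 120 EUR charged to each
risk_premium = loss_per_borrower / principal           # back to 4%: the loop closes
```

The last line is the point: whatever route you take through the numbers, the risk premium each borrower pays converges back to the collective default rate.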

The discount rate is a price of debt, just as the interest rate is. Both can be used, and the prevalence of one or the other depends on the market. Whenever debt gets massively securitized, i.e. transformed into tradable securities, discount becomes somehow handier and smoother to use. Another quote from the invaluable Adam Smith sheds some light on this issue (Adam Smith – “An Inquiry Into The Nature And Causes Of The Wealth of Nations”, Book II: Of The Nature, Accumulation, and Employment of Stock, Chapter IV: Of Stock Lent At Interest): “As the quantity of stock to be lent at interest increases, the interest, or the price which must be paid for the use of that stock, necessarily diminishes, not only from those general causes which make the market price of things commonly diminish as their quantity increases, but from other causes which are peculiar to this particular case. As capitals increase in any country, the profits which can be made by employing them necessarily diminish. It becomes gradually more and more difficult to find within the country a profitable method of employing any new capital. There arises, in consequence, a competition between different capitals, the owner of one endeavouring to get possession of that employment which is occupied by another; but, upon most occasions, he can hope to justle that other out of this employment by no other means but by dealing upon more reasonable terms.”

The presence of financial risk, and the necessity to account for it whilst maintaining proper liquidity in the market, brought two financial inventions: endorsement, and routed notes. Notes used to be (and still are) issued for a relatively short time, usually not longer than 1 year. If the lender needs to have their money back before the due date of the note, they can do something called endorsement: they can present that note as their own to a third party, who will advance them money in exchange. Presenting a note as my own means making myself liable for up to 100% of the original, i.e. signing the note, with a date. You can find an example in the graphic below.

Endorsement used to be a normal way of assuring liquidity in the market financed with notes. Endorsers’ signatures made a chain of liability, ordered by dates. The same scheme is used today in cryptocurrencies, as the chain of hash-linked digital signatures. Another solution was to put in the system someone super-reliable, like a banker. Such a trusted payer, who, on their part, had tons of reserve money to provide liquidity, made the whole game calmer and less risky, and thus the price of credit (the discount rate) was lower. The way of putting a banker in the game was to write them in the note as the entity liable for payment. Such a note was designated as a routed one, or as a draft. Below, I am presenting an example.
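The analogy with a chain of hash-linked signatures can be sketched like this. This is a toy illustration, not an actual cryptocurrency protocol, and the endorsers, dates and note text are made up; the point is only that each endorsement commits to the whole chain before it:

```python
import hashlib

def endorse(previous_hash, endorser, date):
    """Append one endorsement: hash the previous link together
    with the new endorser's name and the date of signature."""
    record = f"{previous_hash}|{endorser}|{date}"
    return hashlib.sha256(record.encode()).hexdigest()

# The note itself is the first link of the chain
chain = [hashlib.sha256(b"note: pay 10 000 EUR on 1701-06-30").hexdigest()]

# Each successive endorser signs on top of everything that came before
for endorser, date in [("Mr Abbott", "1701-01-15"),
                       ("Mrs Costello", "1701-03-02")]:
    chain.append(endorse(chain[-1], endorser, date))

# Tampering with an earlier endorsement changes every later hash
tampered = endorse(chain[0], "Mr Abbott", "1701-01-16")
```

Because every link hashes the one before it, forging a date or a name anywhere in the middle breaks the match with all subsequent links, which is exactly what ordered, dated signatures achieved on paper.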

As banks entered the game of securitized debt, it opened the gates of hell, i.e. the way to paper money. Adam Smith was very apprehensive about it (Adam Smith – “Wealth of Nations”, Book II: Of The Nature, Accumulation, and Employment of Stock, Chapter II: Of Money, Considered As A Particular Branch Of The General Stock Of The Society, Or Of The Expense Of Maintaining The National Capital”): “The trader A in Edinburgh, we shall suppose, draws a bill upon B in London, payable two months after date. In reality B in London owes nothing to A in Edinburgh; but he agrees to accept of A’s bill, upon condition, that before the term of payment he shall redraw upon A in Edinburgh for the same sum, together with the interest and a commission, another bill, payable likewise two months after date. B accordingly, before the expiration of the first two months, redraws this bill upon A in Edinburgh; who, again before the expiration of the second two months, draws a second bill upon B in London, payable likewise two months after date; and before the expiration of the third two months, B in London redraws upon A in Edinburgh another bill payable also two months after date. This practice has sometimes gone on, not only for several months, but for several years together, the bill always returning upon A in Edinburgh with the accumulated interest and commission of all the former bills. The interest was five per cent. in the year, and the commission was never less than one half per cent. on each draught. This commission being repeated more than six times in the year, whatever money A might raise by this expedient might necessarily have cost him something more than eight per cent. in the year and sometimes a great deal more, when either the price of the commission happened to rise, or when he was obliged to pay compound interest upon the interest and commission of former bills. This practice was called raising money by circulation.”

Notes were quick to issue, but a bit clumsy when it came to financing really big ventures, like governments. When you are a king, and you need cash for waging war on another king, issuing a few notes can be tricky. Same in the corporate sector. When we are talking about really big money, making the debt tradable is just one part, and another part is to make it nicely spread over the landscape. This is how bonds came into being, as financial instruments. The idea of bonds was to make the market of debt a bit steadier across space and over time. Notes worked well for short-term borrowing, but long-term projects, which required financing for 5 or 6 years, encountered a problem of price, i.e. of discount rate. If I issue a note to back a loan for 5 years, the receiver of the note, i.e. the lender, knows they will have to wait really long to see their money back, and they will expect a correspondingly deep discount for it. Below, in the graphic, you have the idea explained sort of in capital letters.

The first thing is the face value. The note presented earlier proudly displayed €10 000 of face value. The bond is just €100. You divide €10 000 into 100 separate bonds, each tradable independently, and you have something like a moving, living mass of things, flowing, coming and going. Yep, babe. Liquidity, liquidity, and once again liquidity. A lot of small debts flow much more smoothly than one big one.

The next thing is the interest. You can see it here designated as “5%, annuity”, with the word ‘coupon’ added. If we have the interest rate written explicitly, it means the whole thing was invented when lending at interest had become a normal thing, probably in the late 1700s. The term ‘annuity’ means that every year, those 5% are being paid to the holder of the bond, like a fixed annual income. This is where the word ‘coupon’ comes from. Back in the day, when bonds were paper documents (they are not anymore), they had detachable strips, as in a cinema ticket, one strip per year. When the issuer of the bond paid annuities to the holders, those strips were cut off.

The maturity date of the bond is the moment when the issuer is supposed to buy it back. It is a general convention that bonds are issued for many years. This is when the manner of counting and compounding the interest plays a role, and this is when we need to recall one fundamental thing: bonds are made for big borrowers. Anyone can make a note, and many different anyones can make it circulate, by endorsement or otherwise. Only big entities can issue bonds, and because they are big, bonds are usually considered as safe placements, endowed with low risk. Low risk means a low price of debt. When I can convince many small lenders that I, the big borrower, am rock solid in my future solvency, I can play on that interest rate. When I guarantee an annuity, it can be lower than the interest paid only at the very end of maturity, i.e. in 2022 as regards this case. When all around us loans are given at 10% or 12%, an annuity backed with the authority of a big institution can be just 5%, and no one bothers.
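The gap between a 5% annuity and a 12% loan compounding until maturity is easy to put in numbers. The sketch below assumes a €100 face value and a 5-year maturity, consistent with the examples above; the 5-year term and the single end-of-term payment on the loan side are my illustrative assumptions:

```python
# Total cost of borrowing 100 EUR over 5 years, two ways:
face = 100.0
years = 5

# Bond: a 5% coupon (annuity) paid every year, face value bought back at maturity
coupon_rate = 0.05
bond_cost = coupon_rate * face * years            # total coupons paid out

# Plain loan at market rates: interest compounds until one payment at the end
loan_rate = 0.12
loan_cost = face * ((1 + loan_rate) ** years - 1)  # total interest at maturity

cheaper = "bond" if bond_cost < loan_cost else "loan"
```

Over five years the bond issuer pays €25 of coupons, whilst the borrower at 12% compound owes roughly €76 of interest: that spread is the privilege of being a borrower everyone trusts.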

Over time, bonds have dominated the market of debt. They are more flexible, and thus assure more liquidity. They offer interesting possibilities as regards risk management and discount. When big entities issue bonds, it is a possibility for other big entities to invest large amounts of capital at a fixed, guaranteed rate of return, i.e. at the interest rate. Think about it: you have an investment the size of a big, incorporated business, and yet you have a risk-free return. Unconditional claim, remember? Hence, over time, what professional investors started doing was building a portfolio of investments, with equity-based securities for high yield and high risk, plain lending contracts for moderate yield (high interest rate) and moderate risk, and, finally, bonds for low yield and low risk. Creating a highly liquid market of debt, by putting a lot of bonds into circulation, was like creating a safe harbour for investors. Whatever crazy s**t they were after, they could compensate the resulting risk through the inclusion of bonds in their portfolios.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project BeFund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggesting me two things that Patreon suggests I suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

What are the practical outcomes of those hypotheses being true or false?




This is one of those moments when I need to reassess what the hell I am doing. Scientifically, I mean. Of course, it is good to reassess things existentially, too, every now and then, but for the moment I am limiting myself to science. Simpler and safer than life in general. Anyway, I have a financial scheme in mind, where local crowdfunding platforms serve to support the development of local suppliers in renewable energies. The scheme is based on the observable difference between prices of electricity for small users (higher), and those reserved to industrial-scale users (lower). I wonder if small consumers would be ready to pay the normal, relatively higher price in exchange for a package made of: a) electricity and b) shares in the equity of its suppliers.

I have a general, methodological hypothesis in mind, which I have been trying to develop over the last 2 years or so: collective intelligence. I hypothesise that collective behaviour observable in markets can be studied as a manifestation of collective intelligence. The purpose is to go beyond optimization and to define, with scientific rigour, what are the alternative, essentially equiprobable paths of change that a complex market can take. I think such an approach is useful when I am dealing with an economic model with a lot of internal correlation between variables, and that correlation can be so strong that it turns into those variables basically looping on each other. In such a situation, distinguishing independent variables from the dependent ones becomes bloody hard, and methodologically doubtful.

On the grounds of literature, and my own experimentation, I have defined three essential traits of such collective intelligence: a) distinction between structure and instance b) capacity to accumulate experience, and c) capacity to pass between different levels of freedom in social cohesion. I am using an artificial neural network, a multi-layer perceptron, in order to simulate such collectively intelligent behaviour.

The distinction between structure and instance means that we can devise something, make different instances of that something, each different by some small details, and experiment with those different instances in order to devise an even better something. When I make a mechanical clock, I am a clockmaker. When I am able to have a critical look at this clock, make many different versions of it – all based on the same structural connections between mechanical parts, but differing from each other by subtle details – and experiment with those multiple versions, I become a meta-clock-maker, i.e. someone who can advise clockmakers on how to make clocks. The capacity to distinguish between structures and their instances is one of the basic skills we need in life. Autistic people have a big problem in that department, as they are mostly on the instance side. To a severely autistic person, me in a blue jacket, and me in a brown jacket are two completely different people. Schizophrenic people are on the opposite end of the spectrum. To them, everything is one and the same structure, and they cannot cope with instances. Me in a blue jacket and me in a brown jacket are the same as my neighbour in a yellow jumper, and we all are instances of the same alien monster. I know you think I might be overstating, but my grandmother on the father’s side used to suffer from schizophrenia, and it was precisely that: to her, all strong smells were the manifestation of one and the same volatile poison sprayed in the air by THEM, and every person outside a circle of about 19 people closest to her was a member of THEM. Poor Jadwiga.

In economics, the distinction between structure and instance corresponds to the tension between markets and their underpinning institutions. Markets are fluid and changeable, they are like constant experimenting. Institutions give some gravitas and predictability to that experimenting. Institutions are structures, and markets are ritualized manners of multiplying and testing many alternative instances of those structures.

The capacity to accumulate experience means that as we experiment with different instances of different structures, we can store the information we collect in the process, and use this information in some meaningful way. My great compatriot, Alfred Korzybski, in his general semantics, used to designate it as ‘the capacity to bind time’. The thing is not as obvious as one could think. A Nobel-prized mathematician, Reinhard Selten, coined the concept of social games with imperfect recall (Harsanyi, Selten 1988[1]). He argued that as we, collective humans, accumulate and generalize experience about what the hell is going on, from time to time we shake off that big folder, and pick the pages endowed with the most meaning. All the remaining stuff, judged less useful at the moment, is somehow archived in culture, so that it basically stays there, but becomes much harder to access and utilise. The capacity to accumulate experience means largely the way of accumulating experience, and doing that from-time-to-time archiving. We can observe this basic distinction in everyday life. There are things that we learn sort of incrementally. When I learn to play piano – which I wish I were learning right now, cool stuff – I practice, I practice, I practice and… I accumulate learning from all those practices, and one day I give a concert, in a pub. Still, other things, I learn them sort of haphazardly. Relationships are a good example. I am with someone, one day I am mad at her, the other day I see her as the love of my life, then, again, she really gets on my nerves, and then I think I couldn’t live without her etc. Bit of a bumpy road, isn’t it? Yes, there is some incremental learning, but you become aware of it after like 25 years of conjoint life. Earlier on, you just need to suck ass and keep going.

There is an interesting theory in economics, labelled « semi-martingale » (see for example: Malkiel, Fama 1970[2]). When we observe changes in stock prices, in a capital market, we tend to say they are random, but they are not. You can test it. If the price were really random, it should fan out according to the pattern of normal distribution. This is what we call a full martingale. Any real price you observe actually swings less broadly than normal distribution: this is a semi-martingale. Still, anyone with any experience in investment knows that prediction inside the semi-martingale is always burdened with a s**tload of error. When you observe stock prices over a long time, like 2 or 3 years, you can see a sequence of distinct semi-martingales. From September through December it swings inside one semi-martingale, then the Ghost of Past Christmases shakes it badly, people panic, and later it settles into another semi-martingale, slightly shifted from the preceding one, and there it goes, semi-martingaling for another dozen weeks etc.
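One way to see the difference between the full fan-out and the damped swings described above is a small simulation. This is a toy sketch, not a formal treatment of semi-martingales: I model the full martingale as a pure Gaussian random walk, and the damped price as the same kind of increments with a pull back toward the mean, so that the path swings less broadly:

```python
import random
import statistics

random.seed(7)
n = 1000

# Full martingale: price changes are pure Gaussian increments, so the
# path fans out ever more broadly over time
full = [0.0]
for _ in range(n):
    full.append(full[-1] + random.gauss(0, 1))

# Damped walk, in the loose "semi-martingale" sense of the text: the same
# kind of increments, but each step is pulled back toward the mean,
# which keeps the swings narrower than the normal-distribution fan-out
semi = [0.0]
pull = 0.1
for _ in range(n):
    semi.append(semi[-1] + random.gauss(0, 1) - pull * semi[-1])
```

Comparing the dispersion of the two paths shows the damped walk confined to a band a few units wide, whilst the free walk drifts an order of magnitude further from its starting point.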

The central theoretical question in this economic theory, and a couple of others, is: do we learn anything durable through local shocks? Does a sequence of economic shocks, of whatever type, make a learning path similar to the incremental learning of piano playing? There are strong arguments in favour of both possible answers. If you get your face punched, over and over again, you must be a really dumb asshole not to learn anything from that. Still, there is that phenomenon called systemic homeostasis: many systems, social structures included, tend to fight for stability when shaken, and they are frequently successful. The memory of shocks and revolutions is frequently erased, and they are assumed to have never existed.

The issue of different levels of social cohesion refers to the so-called swarm theory (Stradner et al. 2013[3]). This theory studies collective intelligence by reference to animals, which, as far as we know, are intelligent only collectively. Bees, ants, hornets: all those beasts, when acting individually, are as dumb as f**k. Still, when they gang up, they develop amazingly complex patterns of action. That’s not all. Those complex patterns of theirs fall into three categories, applicable to human behaviour as well: static coupling, dynamic correlated coupling, and dynamic random coupling.

When we coordinate by static coupling, we always do things together in the same way. These are recurrent rituals, without much room for change. Many legal rules, and the institutions they form the basis of, are examples of static coupling. You want to put some equity-based securities in circulation? Good, you do this, and this, and this. You haven’t done the third this? Sorry, man, but you cannot call it a day yet. When we need to change the structure of what we do, we should somehow loosen that static coupling and try something new. We dissolve the existing business, which is static coupling, and look to create something new. When we do so, we can sort of stay in touch with our customary business partners, and after some circling and asking around we form a new business structure, involving people we clearly coordinate with. This is dynamic correlated coupling. Finally, we can decide to sail completely uncharted waters, and take our business concept to China, or to New Zealand, and try to work with completely different people. What we do, in such a case, is emit some sort of business signal into the environment, and wait for a response from whoever is interested. This is dynamic random coupling. Attracting random followers to a new You Tube channel is very much an example of the same.

At the level of social cohesion, we can be intelligent in two distinct ways. On the one hand, we can keep the given pattern of collective association at the same level, i.e. at one of the three I have just mentioned. We keep it ritualized and static, or somehow loose and dynamically correlated, or, finally, we take care not to ritualize too much and keep it deliberately at the level of random associations. On the other hand, we can shift between different levels of cohesion. We take some institutions, we start experimenting with making them more flexible, at some point we possibly make them as free as possible, and we gain experience, which, in turn, allows us to create new institutions.

When applying the issue of social cohesion in collective intelligence to economic phenomena, we can use a little trick, to be found, for example, in de Vincenzo et al. (2018[4]): we assume that quantitative economic variables, which we normally perceive as just numbers, are manifestations of distinct collective decisions. When I have the price of energy, let’s say €0,17 per kilowatt hour, I consider it as the outcome of collective decision-making. At this point, it is useful to remember the fundamentals of intelligence. We perceive our own, individual decisions as outcomes of our independent thinking. We associate them with the fact of wanting something, and being apprehensive about something else etc. Still, neurologically, those decisions are outcomes of some neurons firing in a certain sequence. Same for economic variables, i.e. mostly prices and quantities: they are the fruit of interactions between the members of a community. When I buy apples in the local marketplace, I just buy them for a certain price, and, if they look bad, I just don’t buy. This is not any form of purposeful influence upon the market. Still, when 10 000 people like me do the same, sort of ‘buy when the price is good, don’t when the apple is bruised’, a patterned process emerges. The resulting price of apples is the outcome of that process.

Social cohesion can be viewed as association between collective decisions, not just between individual actions. The resulting methodology is made, roughly speaking, of three steps. Step one: I put all the economic variables in my model over a common denominator (a common scale of measurement). Step two: I calculate the relative cohesion between them with the general concept of a fitness function, which I can express, for example, as the Euclidean distance between the local values of the variables in question. Step three: I calculate the average of those Euclidean distances, and I take its reciprocal, like « 1/x ». This reciprocal is the direct measure of cohesion between decisions, i.e. the higher the value of this precise « 1/x », the more cohesion between the different processes of economic decision-making.
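The three steps can be sketched in a few lines of Python. The variable names and the toy numbers below are mine, purely for illustration; any real application would plug in actual economic series:

```python
import math

def normalize(series):
    """Step one: put a variable over a common denominator (its max becomes 1)."""
    m = max(abs(x) for x in series)
    return [x / m for x in series]

def cohesion(variables):
    """Steps two and three: average pairwise Euclidean distance between
    the normalized variables, then its reciprocal « 1/x ».
    The higher the result, the more cohesion between the decision processes."""
    normed = [normalize(v) for v in variables]
    distances = []
    for i in range(len(normed)):
        for j in range(i + 1, len(normed)):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(normed[i], normed[j])))
            distances.append(d)
    return 1.0 / (sum(distances) / len(distances))

# Toy 'economic variables' observed over 4 periods (invented numbers)
price = [0.15, 0.16, 0.17, 0.17]
quantity = [900, 950, 1000, 980]
wage = [10.0, 10.2, 10.5, 10.4]
print(cohesion([price, quantity, wage]))
```

Two variables that move in step land close together once normalized, so their distance is small and the « 1/x » measure of cohesion is high; variables moving in opposite directions yield a low value.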

Now, those of you with a sharp scientific edge could say: “Wait a minute, doc. How do you know we are talking about different processes of decision-making? How do you know that variable X1 comes from a different process than variable X2?”. This is precisely my point. The swarm theory tells me that if I can observe the cohesion between those variables changing, I can reasonably hypothesise that their underlying decision-making processes are distinct. If, on the other hand, their mutual Euclidean distance stays the same, I hypothesise that they come from the same process.

Summing up, here is the general drift: I take an economic model and I formulate three hypotheses as for the occurrence of collective intelligence in that model. Hypothesis #1: different variables of the model come from different processes of collective decision-making. Hypothesis #2: the economic system underlying the model has the capacity to learn as a collective intelligence, i.e. to durably increase or decrease the mutual cohesion between those processes. Hypothesis #3: collective learning in the presence of economic shocks is different from learning in the absence of such shocks.

They look nice, those hypotheses. Now, why the hell should anyone bother? I mean, what are the practical outcomes of those hypotheses being true or false? In my experimental perceptron, I express the presence of economic shocks by using the hyperbolic tangent as the neural activation function, whilst the absence of shocks (or the presence of countercyclical policies) is expressed with a sigmoid function. Those two yield very different processes of learning. Long story short, the sigmoid learns more, i.e. it accumulates more local error (thus more experiential material for learning), and it generates a steady trend towards a lower cohesion between variables (decisions). The hyperbolic tangent accumulates less experiential material (it learns less), and it is quite random in arriving at any tangible change in cohesion. The collective intelligence I mimicked with that perceptron looks like the kind of intelligence which, when going through shocks, learns only the skill of returning to the initial position after the shock: it does not create any lasting type of change. The latter happens only when my perceptron has a device to absorb and alleviate shocks, i.e. the sigmoid neural function.
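Just to give a rough idea of the mechanics, here is a minimal, single-neuron sketch – not my actual research perceptron, and the data below is just invented toy numbers. It shows how swapping the activation function between sigmoid and hyperbolic tangent changes the stream of local errors that the network accumulates as experiential material; which of the two accumulates more depends on the data and parameters:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_perceptron(activation, data, target, rounds=200, seed=1):
    """Single neuron: weighted sum of inputs -> activation -> output.
    The local error (target - output) is fed back into the weights,
    and its absolute value is accumulated as 'experiential material'."""
    rng = random.Random(seed)
    weights = [rng.uniform(-0.1, 0.1) for _ in data[0]]
    accumulated_error = 0.0
    for k in range(rounds):
        row = data[k % len(data)]
        h = sum(w * x for w, x in zip(weights, row))
        out = activation(h)
        err = target - out
        accumulated_error += abs(err)
        # simple delta-rule feedback of the local error
        weights = [w + 0.05 * err * x for w, x in zip(weights, row)]
    return accumulated_error

# Hypothetical standardized 'economic variables' as input rows
data = [[0.2, 0.5, 0.9], [0.3, 0.4, 0.8], [0.25, 0.6, 0.85]]
target = 0.5

err_sigmoid = run_perceptron(sigmoid, data, target)
err_tanh = run_perceptron(math.tanh, data, target)
print(err_sigmoid, err_tanh)
```

In the full experiment, the comparison is run on real economic variables, and it is the accumulated error, together with the trajectory of cohesion, that distinguishes the two regimes.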

When I have my perceptron explicitly feeding back that cohesion between variables (i.e. feeding back the fitness function considered as a local error), it learns less and changes less, but does not necessarily go through fewer shocks. When the perceptron does not care about feeding back the observable distance between variables, there is more learning and more change, but not more shocks. The overall fitness function of my perceptron changes over time. The ‘over time’ part depends on the kind of neural activation function I use. In the case of the hyperbolic tangent, it is brutal change over a short time, eventually coming back to virtually the same point it started from. With the hyperbolic tangent, the passage between the various levels of association, in terms of the swarm theory, is super quick, but not really productive. With the sigmoid, it is definitely a steady trend of decreasing cohesion.

I want to know what the hell I am doing. I feel I have made a few steps towards that understanding, but getting to know what I am doing proves really hard.

I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful if you suggest two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Harsanyi, J. C., & Selten, R. (1988). A general theory of equilibrium selection in games. MIT Press Books, 1.

[2] Malkiel, B. G., & Fama, E. F. (1970). Efficient capital markets: A review of theory and empirical work. The Journal of Finance, 25(2), 383-417.

[3] Stradner, J., Thenius, R., Zahadat, P., Hamann, H., Crailsheim, K., & Schmickl, T. (2013). Algorithmic requirements for swarm intelligence in differently coupled collective systems. Chaos, Solitons & Fractals, 50, 100-114.

[4] De Vincenzo, I., Massari, G. F., Giannoccaro, I., Carbone, G., & Grigolini, P. (2018). Mimicking the collective intelligence of human groups as an optimization tool for complex problems. Chaos, Solitons & Fractals, 110, 259-266.

Finding the right spot in that flow: educational about equity-based securities


My editorial on You Tube


I am returning to educational content, and more specifically to finance. Incidentally, it is quite connected to my current research – crowdfunding in the market of renewable energies – and I feel like returning to the roots of financial theory. In this update, I am taking on a classical topic in finance: equity-based securities.

First things first, a short revision of what equity is. We have things, and we can have them in two ways: we can sort of have them, or have them actually. When I have something, like a house worth $1 mln, and, at the same time, I owe somebody $1,2 mln, what is really mine, at the end of the day, is $1 mln – $1,2 mln = – $0,2 mln, i.e. net debt. As a matter of fact, I have no equity in this house. I just sort of have it. In the opposite case, when the house is worth $1,2 mln and my debt is just $1 mln, I really have $1,2 mln – $1 mln = $0,2 mln in equity.

There is a pattern in doing business: when we do a lot of it, we most frequently do it in a relatively closed circle of recurrent business partners. Developing durable business relations is even taught in business studies as one of the fundamental skills. When we recurrently do business with the same people, we have claims on each other. Some people owe me something, I owe something to others. The capital account, which we call « balance sheet », expresses the balance between those two types of claims: those of other people on me, against my claims on other people. The art of doing business consists very largely in having more claims on others than others have on us. That “more” is precisely our equity.

When we do business, people expect us to have and maintain positive equity in it. A business person is expected to have that basic skill of keeping a positive balance between claims they have on other people, and the claims that other people have on them.

There are two types of business people, and, correspondingly, two types of strategies regarding equity in business. Type A is mono-business. We do one business, and have one equity. Type B is multi-business. Type B is a bit ADHDish: those are people who would like to participate in oil drilling, manufacturing of solar modules, space travel to Mars, launching a new smartphone, and growing some marijuana, all at the same or nearly the same time. It is a fact of life that the wealthiest people in any social group are to be found in the second category. There is a recurrent pattern to climbing the ladder of social hierarchy: being restless, or at least open, in the pursuit of different business opportunities, rather than being consistent in pursuing just one. If you think about it, it is something more general: being open to many opportunities in life offers a special path of personal development. Yes, consistency and perseverance matter, but they matter even more when we can be open to novelty, and consistent at the same time.

We tend to do things together. This is how we survived, over millennia, all kinds of s**t: famine, epidemics, them sabretooth tigers and whatnot. Same for business: over time, we have developed institutions for doing business together.

When we do something again and again, we figure out a way of optimizing the doing of that something. In business law, we (i.e. homo sapiens) have therefore invented institutions for both type A and type B. You want to do the same business for a long time, and to do it together with other people, type A just like you? You will look for something like a limited liability partnership. If, on the other hand, you are rather the restless B type, you will need something like a joint stock company, and you will need equity-based securities.

The essential idea of an equity-based security is… well, there is more than one idea inside. This is a good example of what finance is: we invent something akin to a social screwdriver, i.e. a tool which unfolds its many utilities as it is being used. Hence, I start with the initial idea rather than with the essential one, and the initial one is to do business with, or between, those B-type people: restless, open-minded, constantly rearranging their horizon of new ventures. Such people need a predictable way to swing between different businesses and/or to build a complex portfolio thereof.

Thus, we have the basic deal presented graphically above: we set up a company, we endow it with an equity of €3 000 000, we divide that equity into 10 000 shares of €300 each, and we distribute those shares among some initial group of shareholders. Question: why should anyone bother to be our shareholder, i.e. to pay those €300 for one share? What do they get in exchange? Well, each shareholder who pays €300 receives in exchange one share, nominally worth €300, a bundle of intangible rights, and the opportunity to trade that share in the so-called « stock market », i.e. the market of shares. Let’s discuss these one by one.

Apparently the most unequivocal thing, i.e. the share in itself, nominally worth €300, is, in itself, the least valuable part. It is important to know: the fact of holding shares in an incorporated company does not give the shareholder any pre-defined, unconditional claim on the company. This is the big difference between a share and a corporate bond. The fact of holding one €300 share does not entitle you to a payback of €300 from the company. You have decided to invest in our equity, bro? That’s great, but investment means risk. There is no refund possible. Well, almost no refund. There are contracts called « buyback schemes », which I discuss further on.

The intangible rights attached to an equity-based security (share) fall into two categories: voting power on the one hand, and conditional claims on assets on the other hand.

Joint stock companies have official, decision-making bodies: the General Assembly, the Board of Directors, the Executive Management, and they can have additional committees, defined by the statute of the company. As a shareholder, I can directly exercise my voting power at the General Assembly of Shareholders. Normally, one share means one vote. There are privileged shares, with more than one vote attached to them. These are usually reserved for the founders of a company. There can also be shares with reduced voting power, for when the company wants to reward someone with its own shares, but does not want to give them influence over the course of the business.

The General Assembly is the corporate equivalent of Parliament. It is the source of all decisional power in the company. The General Assembly appoints the Board of Directors, and, depending on the exact phrasing of the company’s statute, has various competences in appointing the Executive Management. The Board of Directors directs, i.e. it makes the strategic, long-term decisions, whilst the Executive Management is for current matters. Now, long story short: the voting power attached to equity-based securities in a company is only worth anything if it is decisive in the appointment of Directors. This is what much of corporate law boils down to. If my shares give me direct leverage upon who will sit on the Board of Directors, then I really have voting power.

Sometimes, when holding a small parcel of shares in a company, you can be approached by nice people, who will offer you money (not much, really) in exchange for granting them the power of attorney in the General Assembly, i.e. the right to vote there in your name. In corporate language this is called proxy power, and those people, after having collected a lot of such small, individual powers of attorney, can run so-called proxy votes. Believe it or not, proxy powers are sort of tradable, too. If you have accumulated enough proxy power in the General Assembly of a company, you, in turn, might be approached by even nicer people, who will offer you (even more) money in exchange for having that conglomerate proxy voting power of yours on their side when appointing a good friend of theirs to the Board of Directors.

Here you have a glimpse of what equity-based securities are in essence: they are tradable, abstract building blocks of an incorporated business structure. Knowing that, let’s have a look at the conditional claims on assets that come with a corporate share. Say the company makes some net profit at the end of the year, even happens to have free cash corresponding to that profit, and the General Assembly decides to have 50% of the net profit paid to shareholders as dividend. Still, voting in a company is based on majority, and, as I already said, a majority is there when it can back someone to become a member of the Board of Directors. In practical terms this means that decisions about the dividend are taken by a majority in the Board of Directors, who, in turn, represent a majority in the General Assembly.

The claim on dividend that you can have, as a shareholder, is conditional on: a) the fact of the company having any profit after tax, b) the company having any free cash in the balance sheet, corresponding to that profit after tax, and c) the majority of voting power in the General Assembly backing the idea of paying a dividend to shareholders. Summing up, the dividend is your conditional claim on the liquid assets of the company. Why do I say it is a conditional claim on assets, and not on net profit? Well, profit is a result. It is an abstract value. What is really there, to distribute, is some cash. That cash can come from many sources. It is just its arithmetical value that must correspond to a voted percentage of net profit after tax. Your dividend might be actually paid with cash that comes from the selling of some used equipment, previously owned by the company.
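Expressed as a toy calculation, with invented numbers (only the €3 000 000 / 10 000-share company from the example above is taken from the text), the conditional nature of the dividend claim looks like this:

```python
def dividend_per_share(net_profit_after_tax, free_cash, payout_ratio, shares):
    """Conditional claim on liquid assets: a dividend needs
    (a) profit after tax, (b) free cash to actually pay it with,
    (c) a voted payout ratio; the cash cap is condition (b)."""
    if net_profit_after_tax <= 0:
        return 0.0                      # condition (a) fails: no dividend
    voted = payout_ratio * net_profit_after_tax
    payable = min(voted, free_cash)     # condition (b): cash constraint
    return payable / shares

# The company from the example: 10 000 shares; suppose €400 000 net profit,
# €250 000 in free cash, and a 50% payout voted by the majority
print(dividend_per_share(400_000, 250_000, 0.5, 10_000))  # → 20.0 per share
```

Note how condition (b) bites on its own: with only €150 000 of free cash, the same 50% vote would yield €15 per share, regardless of the arithmetic of profit.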

Another typical case of conditional claim on assets is that of liquidation and dissolution. When business goes really bad, the company might be forced to sell off its fixed assets in order to pay its debts. When really a lot of debt is there to pay, the shareholders of the company might decide to sell out everything, and to dissolve the incorporation. In such a case, should any assets be left at the moment of dissolution, free of other claims, the proceeds from their sale can be distributed among the incumbent shareholders.

Right, but voting, giving or receiving proxy power, claiming the dividend or the proceeds from dissolution, it is all about staying in a company, and we were talking about the utility of equity-based securities for those B-type capitalists, who would rather trade their shares than hold them. These people can use the stock market.

It is a historical fact that whenever and wherever it became a common practice to incorporate business in the form of companies, and to issue equity-based securities corresponding to shares, a market for those securities arose. Military legions in Ancient Rome were incorporated businesses, which would issue (something akin to) equity-based securities, and there were special places, called ‘counters’, where those securities would be traded. This is a peculiar pattern in human civilisation: when we practice some kind of repetitive deals, whose structure can be standardized, we tend to single out some claims out of those contracts, and turn those claims into tradable financial instruments. We call them ‘financial instruments’, because they are traded as goods, whilst not having any intrinsic utility, besides the fact of representing some claims.

Probably the first modern stock exchange in Europe was founded in Angers, France, sometime in the 15th century. At the time, there were (virtually) no incorporated companies. Still, there was another type of equity. Goods used to be transported slowly. A cargo of wheat could take weeks to sail from port A to port B, and then to be transported inland by barges or carts pulled by oxen. If you were the restless type of capitalist, you could eat your fingernails out of restlessness whilst waiting for your money, invested in that wheat, to come back to you. Thus, merchants invented securities, which represented an abstract arithmetical fraction of the market value ascribed to such a stock of wheat. They were called different names, and usually fell under the general category of warrants, i.e. securities that give the right to pick up something from somewhere. Those warrants were massively traded in that stock exchange in Angers, and in other similar places, like Cadiz, in Spain. Thus, I bought a stock of wheat in Poland (excellent quality and good price), and I had it shipped (horribly slowly) to Italy, and as soon as I had that stock, I made a series of warrants on it, like one warrant per 100 pounds of wheat, and I started trading those warrants.

By the way, this is where the name ‘stock market’ comes from. The word ‘stock’ initially meant, and still means, a large quantity of some tradable goods. Places, such as Angers or Cadiz, where warrants on such goods were being traded, were commonly called ‘stock markets’. When you think of it, those warrants on corn, cotton, wool, wine etc. were equity-based securities. As long as the issuer of the warrants had any equity in that stock, i.e. as long as their debt did not exceed the value of that stock, said value was equity, and warrants on those goods were securities backed with equity.

That little historical sketch gives an idea of what finance is. This is a set of institutionalized, behavioural patterns and rituals, which allow faster reaction to changing conditions, by creating something like a social hormone: symbols subject to exchange, and markets of those symbols.

Here comes an important behavioural pattern, observable in the capital market. There are companies, which are recommended by analysts and brokers as ‘dividend companies’ or ‘dividend stock’. It is recommended to hold their stock for a long time, as a long-term investment. The fact of recommending them comes from another fact: in these companies, a substantial percentage of shares stays, for years, in the hands of the same people. This is how they can have their dividend. We can observe relatively low liquidity in their stock. Here is a typical loop, peculiar for financial markets. Some people like holding the stock of some companies for a long time. That creates little liquidity in that stock, and, indirectly, little variation in the market price of that stock. Little variation in price means that whatever you can expect to gain on that stock, you will not really make those gains overnight. Thus, you hold. As you hold, and as other people do the same, there is little liquidity on that stock, and little variation in its price, and analysts recommend it as ‘dividend stock’. And so the loop spins.

I generalize. You have some equity-based securities, whose market value comes mostly from the fact that we have a market for them. People do something specific about those securities, and their behavioural pattern creates a pattern in prices and quantities of trade in that stock. Other people watch those prices and quantities, and conclude that the best thing to do regarding those securities is to clone the behavioural pattern, which made those prices and quantities. The financial market works as a market for strategies. Prices and quantities become signals as for what strategy is recommended.

On the other hand, there are shares just made for being traded. Holding them for more than two weeks seems like preventing a race horse from having a run on the track. People buy and sell them quickly, there is a lot of turnover and liquidity, we are having fun with trade, and the price swings madly. Other people are having a look at the market, and they conclude that with those swings in price, they should buy and sell that stock really quickly. Another loop spins. The stock market gives two types of signals, for two distinct strategies. And thus, two types of capitalists are in the game: the calm and consistent A type, and the restless B type. The financial market and the behavioural patterns observable in business people mutually reinforce and sharpen each other.

Sort of in the shade of those ‘big’ strategies, there is another one. We have ambitions, but we have no capital. We convince other people to finance the equity of a company, where we become Directors or Executive Management. With time, we attribute to ourselves so-called ‘management packages’, i.e. parcels of the company’s stock, paid to us as additional compensation. We reasonably assume that the value of those management packages is defined by the price we can sell this stock at. The best price is the price we make: this is one of the basic lessons in the course of macroeconomics. Hence, we make a price for our stock. As the Board of Directors, we officially decide to buy some stock back from shareholders, at a price which accidentally hits the market maximums, or even higher. The company buys some stock from its own shareholders. That stock is usually specified: just some stock is being bought back, in what we call a buyback scheme. Accidentally, that ‘just some stock’ is the stock contained in the management packages we hold as Directors. Pure coincidence.

In some legal orders, an incorporated company cannot hold its own stock, and the shares purchased back must be nullified and terminated. Thus, the company makes some shares, issues them, gives them to selected people, who later vote to sell them back to the company, with a juicy surplus, and ultimately those shares disappear. In other countries, the shares acquired back by the company pass into the category of ‘treasury shares’, i.e. they become assets, without voting power or claim on dividend. This is the Dark Side of the stock market. When there is a lot of hormones flowing, you can have a position of power just by finding the right spot in that flow. Brains know it better than anyone else.

Now, some macroeconomics, thus the bird’s eye view. The bird is lazy, and it prefers having a look at the website of the World Bank, where it picks two metrics: a) Gross Capital Formation as % of GDP, and b) Stocks Traded as % of GDP. The former measures the value of new fixed assets that pop up in the economic system, the latter estimates the value of all corporate stock traded in capital markets. Both are denominated in units of real output, i.e. as % of GDP, and both have a line labelled ‘World’, i.e. the value estimated for the whole planet taken as an economic system. Here comes a table, and a graph. The latter shows the liquidity of capital formation, measured as the value of stock traded divided by the gross value of fixed capital formed. Some sort of ascending cycle emerges, just as if we, humans, were experimenting with more and more financial liquidity in new fixed assets, and as if, from time to time, we had to back off a bit on that liquidity.


Year   Gross capital formation (% of GDP), World   Stocks traded, total value (% of GDP), World
1984   25,4%   17,7%
1985   25,4%   23,7%
1986   25,1%   32,4%
1987   25,4%   46,8%
1988   26,2%   38,1%
1989   26,6%   44,5%
1990   26,0%   31,9%
1991   25,4%   24,1%
1992   25,2%   22,5%
1993   25,0%   30,7%
1994   25,0%   34,0%
1995   24,8%   34,1%
1996   24,7%   41,2%
1997   24,7%   58,9%
1998   24,5%   73,1%
1999   24,1%   103,5%
2000   24,5%   145,7%
2001   24,0%   104,8%
2002   23,4%   82,8%
2003   23,9%   76,0%
2004   24,7%   83,8%
2005   25,0%   99,8%
2006   25,4%   118,5%
2007   25,8%   161,9%
2008   25,6%   140,3%
2009   23,4%   117,3%
2010   24,2%   112,5%
2011   24,5%   104,8%
2012   24,3%   82,4%
2013   24,2%   87,7%
2014   24,4%   101,2%
2015   24,2%   163,4%
2016   23,8%   124,5%
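The liquidity of capital formation that the graph shows can be recomputed directly from the table; here is a sketch over a handful of rows copied from it:

```python
# (year, gross capital formation % of GDP, stocks traded % of GDP),
# a few rows copied from the World Bank table above
rows = [
    (1984, 25.4, 17.7),
    (1990, 26.0, 31.9),
    (2000, 24.5, 145.7),
    (2007, 25.8, 161.9),
    (2012, 24.3, 82.4),
    (2015, 24.2, 163.4),
]

for year, capital_formation, stocks_traded in rows:
    # liquidity of capital formation: value of stock traded
    # per unit of gross fixed capital formed
    liquidity = stocks_traded / capital_formation
    print(year, round(liquidity, 2))
```

The ratio climbs from well under 1 in 1984 to above 6 around the peaks of 2000, 2007 and 2015, with the partial retreats in between: the ascending cycle I was talking about.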


Joseph and the perceptron


My editorial on You Tube


So my idea of applying artificial intelligence to simulate collective decisions – and more specifically the possible implementation of a participative financial scheme at the local scale – is taking shape. I have just done some work on the idea of mutual cohesion between collective decisions, very much in the spirit of the articles I cited in « Si je permets plus d’extravagance ». I have included the cohesion component in the perceptron that I have been describing on this blog and applying in my research for about 2 months now. It is starting to yield interesting results: a perceptron that takes into account the relative cohesion of its own variables acquires a certain nobility of deep learning. You could read the first results of this approach in « How can I possibly learn on that thing I have just become aware I do? ».

Since that last update, I have moved ahead a bit with the application of this idea of cohesion, and of swarm theory, in my own research. I have noticed a clear difference in the cohesion generated by the perceptron, depending on the way that cohesion is observed. I can adopt two simulation strategies as regards the role of mutual cohesion between variables. Strategy no. 1: I calculate the mutual cohesion between the decisions represented by the variables, and I stop at observation. The perceptron does not use the fitness function as a parameter in the learning process: the neural network does not know what degree of cohesion it yields. I thus add an extra dimension to my observation of what the network does, but I do not change the logical structure of the network. Strategy no. 2: I include the local values of the fitness function – that is, of the measure of cohesion between variables – as a parameter used by the perceptron. The cohesion measured in experimental round ‘k – 1’ is used as data in round ‘k’. Cohesion between variables modifies the logical structure of the network recurrently. I thereby create a component of deep learning: the perceptron, initially oriented towards pure experimentation, starts taking into account the internal cohesion between its own decisions.
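In code, the contrast between the two strategies might look like the sketch below – a toy reconstruction in Python, under my own assumptions: the `fitness`, `sigmoid` and error definitions here are simplified stand-ins for what the actual perceptron does, not the code behind this blog.

```python
import math

def fitness(x):
    # V(x_i): average of the variable itself and of its distances to the others
    n = len(x)
    return [(xi + sum(abs(xi - xj) for xj in x)) / n for xi in x]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def run(x0, weights, target=0.5, rounds=100, feed_back_cohesion=False):
    """Strategy no. 1: cohesion is only observed. Strategy no. 2: it feeds back."""
    x = list(x0)
    v_trace = []
    for _ in range(rounds):
        v = fitness(x)                      # fitness observed in round k-1...
        v_trace.append(sum(v) / len(v))
        out = sigmoid(sum(xi * wi for xi, wi in zip(x, weights)))
        e = target - out                    # toy local error on the outcome
        if feed_back_cohesion:              # ...used as data in round k (no. 2)
            x = [xi + e + vi for xi, vi in zip(x, v)]
        else:                               # no. 1: the network never sees it
            x = [xi + e for xi in x]
    return v_trace
```

Running both variants from the same starting point shows the structural difference: under strategy no. 1 the mutual distances between variables stay put (the same error is added to every input), while under strategy no. 2 the fitness function itself reshapes the inputs round after round.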

This taking into account is indirect. The cohesion of variables observed in round ‘k – 1’ is added, as additional information, to the values of those variables used in round ‘k’. Consequently, this measure of cohesion modifies the local error generated by the network, and thus influences the learning process. The mechanism of learning on the basis of cohesion between variables is therefore identical to this perceptron’s basic mechanism: the fitness function, whose value is inversely proportional to the cohesion between variables, is one more source of error. The perceptron tends to minimize local error, so it should, logically, minimize the fitness function as well, and thereby maximize cohesion. That seems logical at first sight.

Well, here is my perceptron surprising me, and it surprises me in different ways depending on the activation function it uses to learn. When learning happens through the sigmoid function, the perceptron always yields less cohesion at the end of the 5000 rounds of experimentation than it did at the start. The sigmoid seems to systematically inflate the Euclidean distance between its variables, whichever learning strategy it follows, no. 1 or no. 2. With no. 2 (cohesion explicitly taken into account), the perceptron generates a significantly smaller cumulative error, and greater cohesion at the end. Less cumulative error means less learning: a perceptron learns through the analysis of its own errors. At the same time, the sigmoid which must actively observe its own cohesion becomes a bit less stable. The curve of cumulative error – the essential learning curve – becomes a bit more jagged, with momentary convulsions.

By contrast, when my perceptron strives to be intelligent through the hyperbolic tangent, it does not really change the fundamental cohesion between variables. It behaves in a way typical of the hyperbolic tangent – it panics locally without much changing in the long run – but at the end of the day the overall cohesion between variables differs little or not at all from the starting position. While the sigmoid seems to learn something about its cohesion – and that something seems to boil down to saying that cohesion really needs reducing – the hyperbolic tangent seems almost unable to learn anything significant. Moreover, when the hyperbolic tangent explicitly takes its own cohesion into account, its cumulative error becomes somewhat smaller, but above all its behaviour becomes much more random, in several dimensions. The curve of local error acquires much more amplitude, and at the same time the cumulative error after 5000 rounds of experimentation varies more from one instance of 5000 to another. The cohesion curve is just as agitated, but at the end of the day there is not much change in terms of cohesion.

I have already had, several times, this basic intuition about these two neural activation functions: they represent fundamentally different paths of learning. The sigmoid is like an engineer locked inside an armoured capsule. It absorbs and dampens local shocks while gently steering them in a well-defined direction. The hyperbolic tangent, for its part, behaves like a neurotic chimpanzee: it screams out loud at the slightest dissonance, but it does not draw many conclusions. I am tempted to say that the sigmoid is like intellect, and the hyperbolic tangent represents emotions. A weighed, rational reaction on the one hand; a quick, panicky reaction on the other.

I am trying to find a graphical representation for all this, something that would be both synthetic and relevant to what I have already presented in my previous updates. I want to show you the way the perceptron learns under different conditions. I start with the image. Below, you will find 3 graphs which describe how my perceptron learns under different conditions. Further down, after the graphs, I develop a discussion.


Right, let me discuss. First of all, the variables. I have four of them in these graphs. The first, marked as a blue line with discrete red marks, is the accumulated error generated with the sigmoid. One remark: this time, when I say ‘accumulated error’, it really is accumulated. It is the sum of all the local errors, accumulated over the consecutive rounds of experimentation. It is thus like ∑e(k), where ‘e(k)’ is the local error – i.e. the deviation from initial values – observed on the outcome variables in the k-th round of experimentation. The orange line gives the same thing, the accumulated error, only with the hyperbolic tangent function.

Accumulated error, for me, is the most direct measure of learning understood in purely quantitative terms. The more accumulated error, the more experience my perceptron has gained. Whatever the scenario shown in the graphs, the perceptron accumulates learning differently depending on the neural function. The sigmoid accumulates experience unambiguously and systematically. With the hyperbolic tangent, it is different. When I watch that jagged curve, I get the intuitive impression of learning in fits and starts. I also see something else, which I even find hard to name precisely. If the curve of accumulated error – of experience encountered – drops abruptly, what does that mean? What comes to my mind is the idea of contrary experiences producing contradictory learning. One day, I am very happy with the collaboration with those people across the street (across the ocean, across the ideology, etc.), and I am full of conclusions like ‘agreement is better than discord’, etc. The next day, when I try to reproduce that positive experience, suddenly there is sand in the gears. I cannot find a common language with those guys, they are hopeless, working together seems to lead nowhere. Each day, I face the equiprobable possibility of swinging into one extreme or the other.

I don’t know about you, but I recognize this kind of learning. It is daily bread, in fact. Unless we have a progressive method of learning something – the sigmoid, that is – we frequently learn like this, i.e. by developing contradictory behavioural patterns.

Next, I introduce a measure of cohesion between the perceptron’s variables, as the inverse of the fitness function, i.e. as ‘1/V(x)’. I decided to use this inverse, instead of the fitness function strictly speaking, for the sake of a clearer explanation. The fitness function strictly speaking is the Euclidean distance between the local values of my perceptron’s variables. Interpreted as such, the fitness function is thus the opposite of cohesion. I refer to swarm theory, as I discussed it in « Si je permets plus d’extravagance ». When the curve of ‘1/V(x)’ goes down, it means less cohesion: the swarm loosens its internal associations. When ‘1/V(x)’ goes up, a new rigidity creeps into the associations of that same swarm.

Question: can I legitimately consider my two tensors – structured collections of numerical variables – as a social swarm? I think I can regard them as the complex outcome of such a swarm’s activity: multiple decisions, associated with one another in changing ways, can be seen as the manifestation of collective learning.

With this assumption, I see, once again, two ways of learning through the two different neural functions. The sigmoid always produces progressively decreasing cohesion. A social swarm which follows this behavioural model learns progressively (it progressively accumulates coherent experience), and as it learns, it loosens its internal cohesion in a controlled way. A swarm which behaves more like the hyperbolic tangent does something different: it oscillates between different levels of cohesion, as if it were testing what happens when you allow yourself more freedom to experiment.

Right, those are my impressions after putting my perceptron to work under different conditions. Now, can I find logical connections between what my perceptron does and economic theory? I must remind myself, again and again, that the perceptron, along with the whole collective-intelligence apparatus, serves me to predict the possible absorption of my financial concept in the market of renewable energies.

Observation of the perceptron suggests that the market is likely to react to this new idea in two different ways: agitation on the one hand, and progressive change on the other. In fact, in terms of economic theory, I see an association with Joseph Schumpeter’s theory of economic cycles. Joseph assumed that technological change combines with social change along two distinct, parallel paths: creative destruction, which often hurts at the moment it happens, forces the structure of the economic system to change progressively.

I keep delivering good science to you, almost brand new, just slightly dented in the design process. Let me remind you that you can download the business plan of the BeFund project (also available in English). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself financial footing in this effort. You can support my research financially, at your best judgment, through my PayPal account. You can also register as my patron on my Patreon page. If you do so, I will be grateful if you point out two important things to me: what kind of reward do you expect in exchange for your patronage, and what stages would you like to see in my work?

How can I possibly learn on that thing I have just become aware I do?


My editorial on YouTube


I keep working on the application of neural networks to simulate the workings of collective intelligence in humans. I am currently macheting my way through the model proposed by de Vincenzo et al in their article entitled ‘Mimicking the collective intelligence of human groups as an optimization tool for complex problems’ (2018[1]). In the spirit of my own research, I am trying to use optimization tools for a slightly different purpose, that is for simulating the way things are done. It usually means that I relax some assumptions which come along with said optimization tools, and I just watch what happens.

Vincenzo et al propose a model of artificial intelligence, which combines a classical perceptron, such as the one I have already discussed on this blog (see « More vigilant than sigmoid », for example) with a component of deep learning based on the observable divergences in decisions. In that model, social agents strive to minimize their divergences and to achieve relative consensus. Mathematically, it means that each decision is characterized by a fitness function, i.e. a function of mathematical distance from other decisions made in the same population.

I take the tensors I have already been working with, namely the input tensor TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR} and the output tensor TO = {QR/N; QNR/N}. Once again, consult « More vigilant than sigmoid » for the meaning of those variables. In the spirit of the model presented by Vincenzo et al, I assume that each variable in my tensors is a decision. Thus, for example, PA;R, i.e. the basic price of energy from renewable sources, with which small consumers are charged, is the tangible outcome of a collective decision. Same for the levelized cost of electricity from renewable sources, the LCOER, etc. For each i-th variable xi in TI and TO, I calculate its relative fitness to the overall universe of decisions, as the average of itself and of its Euclidean distances to other decisions. It looks like:


V(xi) = (1/N) * {xi + [(xi – xi;1)²]^0,5 + [(xi – xi;2)²]^0,5 + … + [(xi – xi;K)²]^0,5}


…where N is the total number of variables in my tensors, and K = N – 1.


In a next step, I can calculate the average of averages: I sum up all the individual V(xi)’s and divide that total by N. That average V*(x) = (1/N) * [V(x1) + V(x2) + … + V(xN)] is the measure of aggregate divergence between individual variables considered as decisions.
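Transcribed into Python – my notation, and note that for single numbers the Euclidean distance [(xi – xj)²]^0,5 reduces to the absolute difference:

```python
def v_local(x, i):
    # V(x_i): average of x_i itself and of its distances to the other decisions
    n = len(x)
    return (x[i] + sum(abs(x[i] - x[j]) for j in range(n) if j != i)) / n

def v_aggregate(x):
    # V*(x): average of the individual V(x_i)'s, i.e. aggregate divergence
    return sum(v_local(x, i) for i in range(len(x))) / len(x)
```

As a quick sanity check, for two maximally distant decisions on the [0, 1] scale, v_aggregate([0.0, 1.0]) gives 0,75.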

Now, I imagine two populations: one who actively learns from the observed divergence of decisions, and another one who doesn’t really. The former is represented with a perceptron that feeds back the observable V(xi)’s into consecutive experimental rounds. Still, it is just feeding that V(xi) back into the loop, without any a priori ideas about it. The latter is more or less what it already is: it just yields those V(xi)’s but does not do much about them.

I needed a bit of thinking about how exactly that feeding back of the fitness function should look. In the algorithm I finally came up with, it looks different for the input variables, on the one hand, and for the output ones, on the other. You might remember, from reading « More vigilant than sigmoid », that my perceptron, in its basic version, learns by estimating the local errors observed in the last round of experimentation, and then adding those local errors to the values of input variables, just to make them roll once again through the neural activation function (sigmoid or hyperbolic tangent), and see what happens.
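That basic mechanism can be sketched as follows – again a toy version under my own assumptions (one output, local error defined against a fixed target), not the exact code behind « More vigilant than sigmoid »:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def experimental_round(x, weights, target):
    """One round: activate, estimate the local error on the outcome,
    then add that error to every input variable for the next round."""
    out = sigmoid(sum(xi * wi for xi, wi in zip(x, weights)))
    error = target - out
    return [xi + error for xi in x], error
```

Iterating experimental_round a few thousand times, and summing the absolute local errors along the way, yields the kind of cumulative error curve the perceptron’s learning is measured with.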

As I upgrade my perceptron with the estimation of fitness function V(xi), I ask: who estimates the fitness function? What kind of question is that? Well, a basic one. I have that neural network, right? It is supposed to be intelligent, right? I add a function of intelligence, namely that of estimating the fitness function. Who is doing the estimation: my supposedly intelligent network or some other intelligent entity? If it is an external intelligence, mine, for a start, it just estimates V(xi), sits on its couch, and watches the perceptron struggling through the meanders of attempts to be intelligent. In such a case, the fitness function is like sweat generated by a body. The body sweats but does not have any way of using the sweat produced.

Now, if the V(xi) is to be used for learning, the perceptron is precisely the incumbent intelligent structure supposed to use it. I see two basic ways for the perceptron to do that. First of all, the input neuron of my perceptron can capture the local fitness functions on input variables and add them, as additional information, to the previously used values of input variables. Second of all, the second hidden neuron can add the local fitness functions, observed on output variables, to the exponent of the neural activation function.

I explain. I am a perceptron. I start my adventure with two tensors: input TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR} and output TO = {QR/N; QNR/N}. The initial values I start with are slightly modified in comparison to what was being processed in « More vigilant than sigmoid ». I assume that the initial market of renewable energies – thus most variables of quantity with ‘R’ in subscript – is quasi inexistent. More specifically, QR/N = 0,01 and  QNR/N = 0,99 in output variables, whilst in the input tensor I have capital invested in capacity IR = 0,46 (thus a readiness to go and generate from renewables), and yet the crowdfunding flow K is KR = 0,01 for renewables and KNR = 0,09 for non-renewables. If you want, it is a sector of renewable energies which is sort of ready to fire off but hasn’t done anything yet in that department. All in all, I start with: LCOER = 0,26; LCOENR = 0,48; KR = 0,01; KNR = 0,09; IR = 0,46; INR = 0,99; PA;R = 0,71; PA;NR = 0,46; PB;R = 0,20; PB;NR = 0,37; QR/N = 0,01; and QNR/N = 0,99.

Being a pure perceptron, I am dumb as f**k. I can learn by pure experimentation. I have ambitions, though, to be smarter, thus to add some deep learning to my repertoire. I estimate the relative mutual fitness of my variables according to the V(xi) formula given earlier, as arithmetical average of each variable separately and its Euclidean distance to others. With the initial values as given, I observe: V(LCOER; t0) = 0,302691788; V(LCOENR; t0) = 0,310267104; V(KR; t0) = 0,410347388; V(KNR; t0) = 0,363680721; V(IR ; t0) = 0,300647174; V(INR ; t0) = 0,652537097; V(PA;R ; t0) = 0,441356844 ; V(PA;NR ; t0) = 0,300683099 ; V(PB;R ; t0) = 0,316248176 ; V(PB;NR ; t0) = 0,293252713 ; V(QR/N ; t0) = 0,410347388 ; and V(QNR/N ; t0) = 0,570485945. All that stuff put together into an overall fitness estimation is like average V*(x; t0) = 0,389378787.

I ask myself: what happens to that fitness function as I process information with my two alternative neural functions, the sigmoid or the hyperbolic tangent? I jump to experimental round 1500, thus to t1500, and I watch. With the sigmoid, I have V(LCOER; t1500) = 0,359529289; V(LCOENR; t1500) = 0,367104605; V(KR; t1500) = 0,467184889; V(KNR; t1500) = 0,420518222; V(IR ; t1500) = 0,357484675; V(INR ; t1500) = 0,709374598; V(PA;R ; t1500) = 0,498194345; V(PA;NR ; t1500) = 0,3575206; V(PB;R ; t1500) = 0,373085677; V(PB;NR ; t1500) = 0,350090214; V(QR/N ; t1500) = 0,467184889; and V(QNR/N ; t1500) = 0,570485945, with average V*(x; t1500) = 0,441479829.

Hmm, interesting. Working my way through intelligent cognition with a sigmoid, after 1500 rounds of experimentation, I have somehow decreased the mutual fitness of decisions I make through individual variables. Those V(xi)’s have changed. Now, let’s see what it gives when I do the same with the hyperbolic tangent: V(LCOER; t1500) =   0,347752478; V(LCOENR; t1500) =  0,317803169; V(KR; t1500) =   0,496752021; V(KNR; t1500) = 0,436752021; V(IR ; t1500) =  0,312040791; V(INR ; t1500) =  0,575690006; V(PA;R ; t1500) =  0,411438698; V(PA;NR ; t1500) =  0,312052766; V(PB;R ; t1500) = 0,370346458; V(PB;NR ; t1500) = 0,319435252; V(QR/N ; t1500) =  0,496752021; and V(QNR/N ; t1500) = 0,570485945, with average V*(x; t1500) =0,413941802.

Well, it is becoming more and more interesting. Being a dumb perceptron, I can, nevertheless, create two different states of mutual fitness between my decisions, depending on the kind of neural function I use. I want to have a bird’s eye view on the whole thing. How can a perceptron have a bird’s eye view of anything? Simple: it rents a drone. How can a perceptron rent a drone? Well, how smart do you have to be to rent a drone? Anyway, it gives something like the graph below:


Wow! So this is what I do, as a perceptron, and what I haven’t been aware of so far? Amazing. When I think in sigmoid, I sort of consistently increase the relative distance between my decisions, i.e. I decrease their mutual fitness. The sigmoid, that function which sort of calms down any local disturbance, leads to a decision-making process that is less coherent, more prone to embracing a little chaos. The hyperbolic tangent thinking is different. It occasionally sort of stretches across a broader spectrum of fitness in decisions, but as soon as it does so, it seems afraid of its own actions, and returns to the initial level of V*(x). Please note that, as a perceptron, I am almost alive, and I produce slightly different outcomes in each instance of myself. The point is that in the line corresponding to the hyperbolic tangent, the comb-like pattern of small oscillations can stretch and move from instance to instance. Still, it keeps the general form of a comb.

OK, so this is what I do, and now I ask myself: how can I possibly learn on that thing I have just become aware I do? As a perceptron, endowed with this precise logical structure, I can do one thing with information: I can arithmetically add it to my input. Still, having some ambitions to evolve, I attempt to change my logical structure, and I venture into somehow incorporating the observable V(xi) into my neural activation function. Thus, the first thing I do with that new learning is to top up the values of input variables with the local fitness functions observed in the previous round of experimenting. I am already doing it with the local errors observed in outcome variables, so why not double the dose of learning? Anyway, it goes like: xi(t0) = xi(t-1) + e(xi; t-1) + V(xi; t-1). It looks interesting, but I am still using just a fraction of the information about myself, i.e. just that about input variables. Here is where I start being really ambitious. In the equation of the sigmoid function, I change s = 1 / [1 + exp(∑xi*wi)] into s = 1 / [1 + exp(∑xi*wi + V(To))], where V(To) stands for the local fitness functions observed in output variables. I do the same, by analogy, in my version based on the hyperbolic tangent. The th = [exp(2*∑xi*wi) – 1] / [exp(2*∑xi*wi) + 1] turns into th = {exp[2*∑xi*wi + V(To)] – 1} / {exp[2*∑xi*wi + V(To)] + 1}. I do what I know how to do, i.e. add information from fresh observation, and I apply it to change the structure of my neural function.
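A quick transcription of these two modified functions into Python – kept term by term as the formulas above are written, including the sign convention in the sigmoid’s exponent, with v_out standing for V(To):

```python
import math

def sigmoid_with_fitness(x, w, v_out):
    # s = 1 / [1 + exp(sum(xi*wi) + V(To))]
    return 1.0 / (1.0 + math.exp(sum(xi * wi for xi, wi in zip(x, w)) + v_out))

def tanh_with_fitness(x, w, v_out):
    # th = {exp[2*sum(xi*wi) + V(To)] - 1} / {exp[2*sum(xi*wi) + V(To)] + 1}
    e = math.exp(2.0 * sum(xi * wi for xi, wi in zip(x, w)) + v_out)
    return (e - 1.0) / (e + 1.0)
```

With v_out = 0, the second function reduces to the plain hyperbolic tangent of ∑xi*wi, which is a handy sanity check on the transcription.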

All those ambitious changes in myself, put together, change my pattern of learning as shown in the graph below:

When I think sigmoid, the fact of feeding back my own fitness function does not change much. It makes the learning curve a bit steeper in the early experimental rounds, and makes it asymptotic to a slightly lower threshold in the last rounds, as compared to learning without feedback on V(xi). Yet, it is the same old sigmoid, just with its sleeves ironed. On the other hand, the hyperbolic tangent thinking changes significantly. What used to look like a comb, without feedback, now looks much more aggressive, like a plough on steroids. There is something like a complex cycle of learning on the internal cohesion of decisions made. Generally, feeding back the observable V(xi) increases the finally achieved cohesion in decisions and, at the same time, reduces the cumulative error gathered by the perceptron. With that type of feedback, the cumulative error of the sigmoid, which normally hits around 2,2 in this case, falls to around 0,8. With the hyperbolic tangent, cumulative errors which used to be 0,6 ÷ 0,8 without feedback fall to 0,1 ÷ 0,4 with feedback on V(xi).


The (provisional) piece of wisdom I can have as my takeaway is twofold. Firstly, whatever I do, a large chunk of perceptual learning leads to a bit less cohesion in my decisions. As I learn by experience, I allow myself more divergence in decisions. Secondly, looping on that divergence, and including it explicitly in my pattern of learning leads to relatively more cohesion at the end of the day. Still, more cohesion has a price – less learning.


I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. Since we are talking business plans, let me remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful if you suggest two things that Patreon suggests I should ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] De Vincenzo, I., Massari, G. F., Giannoccaro, I., Carbone, G., & Grigolini, P. (2018). Mimicking the collective intelligence of human groups as an optimization tool for complex problems. Chaos, Solitons & Fractals, 110, 259-266.