Black Swans happen all the time


I continue with the topic of Artificial Intelligence used as a tool to study collective intelligence in human social structures. In scientific dissertations, the first question to answer right off the bat is: ‘Why should anyone bother?’. What is the point of adding one more conceptual sub-repertoire, i.e. that of collective intelligence, to the already abundant toolbox of social sciences? I can give two answers. Firstly, and most importantly, we simply can do it. We have Artificial Intelligence, and artificial neural networks are already used in social sciences as tools for optimizing models. From there, it is just one more step to use the same networks as tools for simulation: they can show how, specifically, a given intelligent adaptation develops. This first part of the answer leads to the second one, namely to the scientific value added of such an approach. My essential goal is to explore the meaning, the power, and the value of collective intelligent adaptation as such, and artificial neural networks seem to be useful instruments for that purpose.

We live and we learn. We learn in two different ways: by experimental trial and error, on the one hand, and by cultural recombination of knowledge, on the other. The latter means more than just the transmission of formalized cultural content: we can learn collectively as we communicate to each other what we know and as we recombine those individual pieces of knowledge. Quite a few times already, I have crossed intellectual paths with the ‘Black Swan Theory’ of Nassim Nicholas Taleb, and its central claim that we collectively tend to silence information about sudden, unexpected events which escape the rules of normality – the Black Swans – and yet our social structures are very significantly, maybe even predominantly, shaped by those unusual events. This is very close to my own standpoint. I claim that we, humans, need to find a balance between chaos and order in our existence. Most of our culture is order, though, and this is pertinent to social sciences as well. Still, it is really interesting to see – and possibly experiment with – the way our culture deals with the unpredictable and extraordinary kind of s**t, sort of when history is really happening out there.

I have already had a go at something like a black swan, using a neural network, which I described in ‘The perfectly dumb, smart social structure’. The thing I discovered when experimenting with that piece of AI is that black swans are black just superficially, sort of. At the deepest, mathematical level of reality, roughly at the same pub where Pierre Simon Laplace plays his usual poker game, the unexpectedness of events is a property of human cognition, not of reality as such. The relatively new Interface Theory of Perception (Hoffman et al. 2015[1]; Fields et al. 2018[2]; see also ‘I followed my suspects home’) supplies interesting insights in this respect. States of the world are what they are, quite simply. No single state of the world is more expected than others, per se. We expect something to happen, or we don’t although we should. My interpretation of Nassim Nicholas Taleb’s theory is that Black Swans appear when we have a collective tendency to over-smooth a given chunk of our experience, and we collectively commit not to give a f**k about some strange outliers which should jump to the eye, but which we cognitively arrange so that they don’t really. Cognitively, Black Swans are qualia rather than phenomena as such.

Another little piece of knowledge I feel like contributing to the Black Swan theory is that the collective intelligence of human societies – or culture, quite simply – is compound and heterogeneous. What is unexpected to some people is perfectly normal to others. This is how professional traders make money in financial markets: they are good at spotting recurrence in phenomena which look like perfect Black Swans to non-initiated market players.

In the branch of philosophy called ‘praxeology’, there is a principle which states that the shortest path to a goal is the most efficient one, which is supposed to reflect the basics of Newtonian physics: the shortest path consumes the least amount of energy. Still, just as Newtonian physics is being questioned by its modern cousins, such as quantum physics, that classical approach of praxeology is being questioned by modern social sciences. I was born in communist Poland, in 1968, and I spent the first 13 years of my life there. I know by heart the Marxist logic of the shortest path. You want people to be equal? Force them to be equal. You want to use resources in the most efficient way? Good, make a centralized, country-wide plan for all kinds of business, and you know what, make it five years long. The shortest, most efficient path, right? Right, there was only one thing: it didn’t work. Today, we have a concept to explain why: hyper-coordination. When a big organization focuses on implementing one, ‘perfect’ plan, people tend to neglect many opportunities to experiment with little things, side paths to the main thread of the plan. Such neglect has a high price, for a large number of what initially look like haphazard disturbances are valuable innovations. Once put aside, those ideas seldom come back, and they turn into lost opportunities. In economic theory, lost opportunities have a metric attached: it is called opportunity cost. Lots of lost opportunities mean a whole stockpile of opportunity cost, which, in turn, takes its revenge later on, in the form of money we don’t earn on the technologies we haven’t implemented. Translated into present-day challenges, lost ideas can kick our ass as lost chances to tackle a pandemic, or to adapt to climate change.

The shortest path to a goal is efficient under the condition that we know the goal. In long-range strategies, we frequently don’t, and then adaptive change is the name of the game. Here come artificial neural networks, once again. At first sight, if we assume learning by trial and error, with no clear idea of where exactly we are heading, we tend to infer that we know nothing at all. Still, observing neural networks with their sleeves rolled up, doing computational work, teaches an important lesson: learning by trial and error follows clear patterns and pathways, and so does adaptive change. Learning means putting order into the inherent chaos of reality. Probably the most essential principle of that order is that error is information, and, if it is to be used for learning, it needs to be memorized, remembered, and processed.
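To make that principle concrete, here is a minimal sketch – my own toy example, not any particular network from my research – of a single-neuron network learning by trial and error. The error at each step is explicitly memorized, and then processed into the next adjustment of behaviour:

```python
# Toy illustration: 'error is information', and learning means memorizing
# and processing it. All numbers here are arbitrary, for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # 100 observations, 3 input variables
true_w = np.array([0.5, -1.0, 2.0])     # the hidden pattern in 'reality'
y = X @ true_w

w = np.zeros(3)                          # initial guess: we know nothing
error_memory = []                        # the network's record of its own errors

for epoch in range(200):
    y_hat = X @ w                        # current expectation
    error = y - y_hat                    # gap between reality and expectation
    error_memory.append(np.mean(error ** 2))
    w += 0.01 * X.T @ error / len(X)     # process the error into new behaviour

# The memorized error shrinks steadily: trial and error follows a clear pathway.
assert error_memory[-1] < error_memory[0]
```

The point of keeping `error_memory` around is exactly the lesson above: the network does not just react to error, it accumulates a record of it, and that record traces a clear, ordered pathway of adaptation.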

Building a method of adaptive learning is just as valuable as, and complementary to, preparing a plan with clearly cut goals. Goals are cognitive constructs which we make to put some order into the chaos of reality. These constructs are valuable tools for guiding our actions, yet they are in a loop with our experience. We stumble upon Black Swans more frequently than we think; we just learn how to incorporate them into our cognition. I have experienced, in my investment strategy, the value and the power of consistent, relentless reformulation and re-description of both my strategic goals and my experience.

How does our culture store information about events which we could label as errors? If I want to answer that question, I need to ask and answer another one: how do we collectively know that we have made a collective error, which can possibly be used as material for collective learning? I stress very strongly the different grammatical forms of the word ‘collective’. A single person can know something by storing information, residual from sensory experience, in the synapses of the brain. An event can be labelled as an error, in the brain, when it yields an outcome that does not conform to the desired (expected) one. Of course, at this point, a whole field of scientific research emerges, namely that of cognitive sciences. Still, we have research techniques to study that stuff. On the other hand, a collective has no single brain acting as a distinct information-processing unit. A collective cannot know things in the same way an individual does.

Recognition of error is a combination of panic in front of chaos, on the one hand, and objective measurement of the gap between reality and expected outcomes, on the other. Let’s illustrate it with an example. As I am writing these words, it is July 12th, 2020, and it is electoral day: we are having, in Poland, the second-round ballot in presidential elections. As second rounds normally play out, there are just two candidates, the first two past the post in the first-round ballot. Judging by the polls, and by the arithmetic of transfer from the first round, it is going to be a close shave. In a country of about 18 million voters, and with an expected electoral attendance above 50%, the next 5 years of presidency are likely to be decided by around 0.5% of the votes cast, roughly 40–50 thousand people. Whatever the outcome of the ballot, there will be roughly 50% of the population claiming that our country is on the right track, and another 50% or so pulling their hair out and screaming that we are heading towards a precipice. Is there any error to make collectively in this specific situation? If so, who will know, and how, whether the error really occurred, what its magnitude was, and how to process the corresponding information?
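The back-of-an-envelope arithmetic behind that ‘40–50 thousand people’ can be spelled out. The figures below are the rough ones quoted above, not official electoral statistics, and the attendance of 52% is my own placeholder for ‘slightly over 50%’:

```python
# Rough estimate of how many individual votes decide the second-round ballot.
voters = 18_000_000        # approximate number of eligible voters
attendance = 0.52          # assumed electoral attendance, slightly over 50%
decisive_share = 0.005     # a winning margin of about 0.5% of votes cast

votes_cast = voters * attendance
decisive_votes = votes_cast * decisive_share
print(round(decisive_votes))   # prints 46800
```

In other words, under these assumptions, a group about the size of a mid-sized town settles five years of presidency.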

Observation of neural networks at work provides some insights in that respect. First of all, in order to assess error, we need a gap between the desired outcome and the state of reality such as it is. We can collectively assume that something went wrong only if we have a collective take on what the perfect state of things would be. What if the desired outcome is an internally conflicted duality, as is the case in the Polish presidential elections of 2020? Still, the collectively desired outcome could be something other than just the victory of one candidate. Maybe the electoral attendance? Maybe the fact of having elections at all? Whatever it is that we are collectively after, we learn by making errors at nailing down that specific value.

Thus, what are we collectively after? Once again, what is the point of discovering anything with respect to presidential elections? Politics is functional when it helps unite people, and yet some of the most efficient political strategies are those which use division rather than unity. Divide et impera, isn’t it? How do we build social cooperation at the ground level, when the higher echelons of the political system love playing the poker of social dissent? Understanding ourselves seems to be the key.

Once again, neural networks suggest two alternative pathways for discovering what we are collectively after, depending on the amount of data we have regarding our own social structure. If we have acceptably abundant and reliable data, we can approach the thing straightforwardly, and test each of the variables we have as the possible output variable of a neural network supposed to represent the way our society works. Variables which, when pegged as the output of the network, allow it to produce datasets very similar to the original one are probably informative about the real values pursued by the given society. This is the approach I have already discussed a few times on my blog. You can find a scientific example of its application in my paper on energy efficiency.
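For illustration, here is a deliberately simplified sketch of that procedure on synthetic data. The dataset, the tiny one-layer network, and all the coefficients are my own assumptions for the sake of the example, not the actual model from the energy-efficiency paper. We peg each variable, in turn, as the output, train on the remaining ones, and score how closely the dataset produced by the network matches the original:

```python
# Synthetic data: variable 3 is (noisily) 'pursued' by the other three,
# so pegging it as the output should reproduce the dataset best.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))                       # 200 observations, 4 variables
data[:, 3] = data[:, :3] @ [0.3, 0.5, 0.2] + 0.1 * rng.normal(size=200)

def fit_with_output(dataset, out_col, epochs=1000, lr=0.05):
    """Peg out_col as the output; return minus the mean squared gap
    between the dataset the network produces and the original column."""
    X = np.delete(dataset, out_col, axis=1)
    y = dataset[:, out_col]
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        error = y - X @ w                              # gap: reality vs expectation
        w += lr * X.T @ error / len(X)                 # learn from the error
    produced = X @ w
    return -np.mean((produced - y) ** 2)               # higher = closer to original

scores = {col: fit_with_output(data, col) for col in range(4)}
best = max(scores, key=scores.get)                     # candidate 'real value'
print(best)                                            # prints 3
```

The variable that wins this contest is, in the logic above, the best candidate for what the simulated ‘society’ is really optimizing.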

There is another interesting way of approaching the same issue, and this one is much more empiricist, as it forces us to discover more from scratch. We start with the simple observation that things change. When they change a lot, and we can measure the change on some kind of quantitative scale, we call it variance. There is a special angle of approach to variance when we observe it over time. Observable behavioural change – or variance at different levels of behavioural patterns – includes a component of propagated error. How? Let’s break it down.

When I change my behaviour in a non-aleatory way, i.e. when my behavioural change makes at least some sense, anyone can safely assume that I made the change for a reason. I changed my behaviour because my experience tells me that I should. I recognized something I f**ked up, or some kind of frustration with the outcomes of my actions, and I changed. I have somehow incorporated information about a past error into my present behaviour, whence the logical equivalence: Variance in behaviour = Residual behaviour + Adaptive change after the recognition of error + Aleatory component.
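That equivalence can be illustrated with a toy simulation, entirely my own construction with arbitrary coefficients: an agent whose behaviour, each period, is the sum of a residual habit, an adaptive correction driven by the previously recognized error, and aleatory noise:

```python
# Simulated behaviour = residual habit + adaptive change after recognized
# error + aleatory component. Coefficients are arbitrary, for illustration.
import numpy as np

rng = np.random.default_rng(2)
T = 1000
target = 1.0                       # the outcome the agent is implicitly after

behaviour = np.zeros(T)
habit = 0.2                        # residual behaviour: what we keep doing anyway
b = 0.0
for t in range(T):
    error = target - b             # recognition of last period's error
    adaptive = 0.5 * error         # adaptive change after recognizing the error
    aleatory = 0.1 * rng.normal()  # aleatory component
    b = habit + adaptive + aleatory
    behaviour[t] = b

# Behaviour settles near the habit-plus-adaptation equilibrium,
# (0.2 + 0.5) / 1.5 ≈ 0.47, while its variance mixes adaptation and noise.
print(round(behaviour[-100:].mean(), 2), round(behaviour[-100:].var(), 3))
```

Observed variance in such a series is never pure randomness: part of it is the propagated error being worked through, which is exactly the decomposition claimed above.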

Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, write and manage individually. If you enjoy the content I create, you can choose to support my work, with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT. What you will contribute to is almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly rounded style.

At the bottom of the sidebar on the main page, you can access the archives of this blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support of my research and writing.

‘Discover Social Sciences’ is a continuous endeavour, made mostly of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will mostly be supporting my continuous stream of writing and online publishing. As you read through the stream of my updates, you can see that I usually write 1–3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, ‘Capitalism And Political Power’. Normally, it is available from the publisher, the Scholar publishing house. You can also download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page.


[1] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic Bulletin & Review, 22(6), 1480–1506.

[2] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186–213.