
MY EDITORIAL ON YOUTUBE

I am developing one particular thread in my research, somewhat congruent with my research on the role of cities, namely the phenomenon of collective intelligence and the prospects for using artificial intelligence to study human social structures. I am going both for good teaching material and for valuable scientific insight.

In social sciences, we face a somewhat embarrassing question, which is nevertheless a fundamental one: how should we interpret quantitative data about societies? Simple but puzzling: are those numbers a meaningful representation of collectively pursued desired outcomes, or should we view them as a largely random, temporary representation of something going on at a deeper, essentially unobserved level?

I guess I can use artificial neural networks to try and solve that puzzle, at least to some extent, starting with empirics or, in plain human, with the facts I have observed so far. My most general observation, pertinent to every single instance of my meddling with artificial neural networks, is that they are intelligent structures. I ground this general claim in two specific observations. Firstly, a neural network can experiment with itself and come up with meaningful outcomes of experimentation whilst keeping structural stability. In other words, an artificial neural network can change a part of itself whilst staying the same in its logical frame. Secondly, when I make an artificial neural network observe its own internal coherence, that observation changes the behaviour of the network. For me, that capacity for meaningful and functional introspection is an important sign of intelligence.
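To make those two observations a bit more tangible, here is a minimal sketch in Python, on made-up data, of what I mean: a single-layer perceptron that keeps experimenting with its own weights whilst its logical frame (the activation function and the layout) stays the same, and that feeds a simple measure of its own internal coherence back in as an extra input. It is a toy illustration under my own assumptions, not a full model.

```python
# A toy illustration: a perceptron that experiments with its own weights while its
# logical frame stays fixed, and that observes its own internal coherence.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical data: 5 input variables standardized between 0 and 1, one output.
X = rng.random((200, 5))
y = rng.random((200, 1))

# One extra input column reserved for the network's self-observation of coherence.
X_aug = np.hstack([X, np.zeros((200, 1))])
weights = rng.normal(0.0, 0.1, size=(6, 1))

coherence = 0.0          # how much the weights moved in the last epoch
for epoch in range(100):
    X_aug[:, -1] = coherence              # introspection: the network "sees" its own coherence
    output = sigmoid(X_aug @ weights)
    error = y - output
    gradient = X_aug.T @ (error * output * (1.0 - output))
    old_weights = weights.copy()
    weights = weights + 0.01 * gradient   # experimentation: part of the structure changes...
    coherence = float(np.linalg.norm(weights - old_weights))  # ...and the change is observed
    # The logical frame (sigmoid activation, 6 inputs, 1 output) never changes.

print("final mean absolute error:", float(np.abs(error).mean()))
print("final coherence signal:", coherence)
```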

From this intellectual standpoint, where artificial neural networks are assumed to be intelligent structures, I pass to the question of what kind of intelligence those networks can possibly represent. At this point I assume that human social structures are intelligent, too, as they can experiment with themselves (to some extent) whilst keeping structural stability, and they can functionally observe their own internal coherence and learn therefrom. Those two intelligent properties of human social structures are what we commonly call culture.

As I put those two intelligences – that of artificial neural networks and that of human social structures – back to back, I arrive at a new definition of culture. Instead of defining culture as a structured collection of symbolic representations, I define it as collective intelligence of human societies, which, depending on its exact local characteristics, endows those societies with a given flexibility and capacity to change, through a given capacity for collective experimentation.      

Once again, these are my empirical observations, the most general ones regarding the topic at hand. Empirically, I can observe that both artificial neural networks and human social structures can experiment with themselves with a view to optimizing something, whilst maintaining structural stability, and yet that capacity to experiment with oneself has limits. Both a neural network and a human society can either stop experimenting or go haywire when experimentation leads to excessively low internal coherence of the system. Thence the idea of using artificial neural networks to represent the way that human social structures experiment with themselves, i.e. the way we are collectively intelligent.

When we think about our civilisation, we intuitively ask what the endgame is, seen from the present moment. Where are we going? That's a delicate question and, according to historians such as Arnold Toynbee, essentially a pointless one. Civilisations develop, degenerate and supplant each other in multi-secular cycles of apparently some 2500 – 3500 years each. If I ask the question 'How can our civilisation survive, e.g. how can we survive climate change?', the most rationally grounded answer is: 'Our civilisation will almost certainly fade away and die out, and then a new civilisation will emerge, and climate change could be as good an excuse as anything else for that transition'. Cruel and fatalistic? Weelll, not necessarily. Think and ask yourself: would you like to stay the same forever? Probably not. The only way to change is to get out of our comfort zone, and the same is true for civilisations. The death of civilisations is different from extinction: when a civilisation dies, its culture transforms radically, i.e. its intelligent structure changes, yet the human population essentially survives.

Social sciences are sciences because they focus on the 'how?' more than on the 'why?'. The 'why?' implies there is a reason for everything, thus some kind of ultimate goal. The 'how?' dispenses with those considerations. The personal future of each individual human is almost entirely connected to the 'how?' of civilisational change and virtually completely disconnected from the 'why?'. Civilisations change at the pace of centuries, and this is a slow pace. Even a person who lives for 100 years can see only a glimpse of human history. Yes, our individual existences are incredibly rich in personal experience, and we can use that existential wealth to make our own lives better, and to give a touch of betterment to the lives of incoming humans (i.e. our kids), and yet our personal change is very different from civilisational change. I will even go as far as claiming that individual human existence, with all its twists and turns, usually takes place inside one single cultural pattern, therefore inside a given civilisation. There are just a few human generations in the history of mankind whose individual existences happened at the overlap between a receding civilisation and an emerging one.

On the night of July 6th, 2020, I had a strange dream, which I believe could be important in the teaching of social sciences. I dreamt of being pursued by some nondescript 'them', in a slightly gangster fashion. I knew they had guns. I procured a gun for myself by breaking its previous owner's neck by surprise. Yes, it is shocking, but it was just the beginning. I was running away from those people who wanted to get me. I was running through something like an urban neighbourhood, a bit like Venice, Italy, with a lot of canals all over the place. As I was running, I was pushing people into those canals, just to clear my way and keep running. I shot a few people dead when they tried to get hold of me. All the time, I was experiencing intense, nagging fear. I woke up from that dream shortly after midnight, and that intense fear was still resonating in me. After a few minutes of being awake, and whilst still awake, I experienced another intense frame of mind, like a realization: me in that dream, doing horrible things whilst running away from people whom I thought could hurt me, was a metaphor for quite a long window of my existence so far. Many a time I would just rush forward and do things I am still ashamed of today, and, when I meditate on it, I was doing it out of that irrational fear that other people could do me harm when they sort of catch on. When this realization popped into my mind, I immediately calmed down, and it was deep serenity, as if a lot of my deeply hidden fears had suddenly evaporated.

Fear is a learnt response to environmental factors. Recently, I have been discovering, and keep discovering, something new about fear: its fundamentally irrational nature. All my early life, I was taught that when I am afraid of something, I probably have good reasons to be. Still, over the last 3 years, I have been practicing intermittent fasting (combined with a largely paleo-like diet), just to get out of a pre-diabetic state. Month after month, I extended that window of fasting, and now I am at around 17 – 18 hours out of 24. A little more than a month ago, I decided to jump over another hurdle, i.e. that of fasted training. I started doing my strength training whilst fasting, early in the morning. The first few times, my body was literally shaking with fear. My muscles were screaming: 'Noo! We don't want effort without food!'. Still, I gently pushed myself, taking good care to stay in my zone of proximal development, and already after a few days, everything changed. My body started craving those fasted workouts, as if I was experiencing some strange energy inside of me. Something that initially had looked like a deeply organic and hence 100% justified fear turned out to be another piece of deeply ingrained bullshit, which I removed safely and fruitfully.

My generalisation of that personal experience is a broad question: how much of that deeply ingrained bullshit, i.e. completely irrational and yet very strong beliefs, do we carry inside our bodies, like literally inside our bodies? How many memories, good and bad, do we have stored in our muscles, in our sub-cortical neural circuitry, in our guts and endocrine glands? It is fascinating to discover what we can change in our existence when we remove those useless protocols.

So far, I have used artificial neural networks in two meaningful ways, i.e. meaningful from the point of view of what I know about social sciences. Firstly, it is generally useful to discover what we, humans, are after. I can take a dataset of common socio-economic stats and test each of them as the desired outcome of an artificial neural network. Those stats have a strange property: some of them come across as much more likely desired outcomes than others. A neural network oriented on optimizing those 'special' ones is much more similar to the original data than networks pegged on other variables. Secondly, it is useful to predict human behaviour. I figured out a trick to make such predictions: I define patterns of behaviour (social roles or parts thereof), and I make a neural network simulate the probability that each of those patterns happens.

One avenue consists in discovering a hierarchy of importance in a set of socio-economic variables, i.e. in common stats available from external sources. In this specific approach, I treat empirical datasets of those stats as manifestations of the corresponding state spaces. I assume that the empirical dataset at hand describes one possible state among many. Let me illustrate it with an example: I take a big dataset such as Penn Tables. I assume that the set of observations yielded by the 160-ish countries in the database, observed since 1964, is like a complex scenario. It is one scenario among many possible ones. This specific scenario has played out the way it has due to a complex occurrence of events. Yet other scenarios are possible.

To put it simply, datasets made of those typical stats have a strange property, which can be demonstrated by using a neural network: some variables seem to reflect social outcomes of particular interest to the society observed. A neural network pegged on those specific variables as outputs produces very little residual error and, consequently, stays very similar to the original dataset, as compared to networks pegged on other variables therein.

Under this angle of approach, I ascribe an ontological interpretation to the stats I work with: I assume that each distinct socio-economic variable informs about a distinct phenomenon. Mind you, it is just one possible interpretation. Another one, almost the opposite, claims that all the socio-economic stats we commonly use are essentially facets (or dimensions) of the same big, compound phenomenon called the social existence of humans. Long story short, when I ascribe ontological autonomy to different socio-economic stats, I can use a neural network to establish two hierarchies among those variables: one hierarchy of value as desired social outcomes, and another of the epistatic role played by individual variables in the process of achieving those outcomes. In other words, I can assess what the given society is after, and what the key leverages are that get moved so as to achieve the outcome pursued.
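A minimal sketch of that procedure, in Python on a made-up stand-in for a dataset like Penn Tables (the column names and the simple perceptron are purely illustrative assumptions, not my actual working code), could look as follows: peg a network on each variable in turn as the output, and rank the variables by the residual error they leave behind.

```python
# Peg a simple sigmoid perceptron on each variable in turn and rank variables
# by residual error: the lowest-error variables behave like plausible desired outcomes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_error(X, y, epochs=200, lr=0.05):
    """Train a one-layer sigmoid perceptron with y as the pegged output; return mean abs error."""
    w = rng.normal(0.0, 0.1, size=(X.shape[1], 1))
    for _ in range(epochs):
        out = sigmoid(X @ w)
        err = y - out
        w += lr * X.T @ (err * out * (1.0 - out))
    return float(np.abs(err).mean())

# Hypothetical stand-in for a Penn-Tables-like dataset.
data = pd.DataFrame(rng.random((300, 4)),
                    columns=["gdp_per_capita", "health_exp_per_capita",
                             "years_of_schooling", "energy_per_capita"])
# Standardize every variable between 0 and 1 (1 = local maximum).
std = data / data.max()

errors = {}
for target in std.columns:
    y = std[[target]].to_numpy()
    X = std.drop(columns=target).to_numpy()
    errors[target] = residual_error(X, y)

# Variables with the smallest residual error come across as likely desired outcomes.
for name, err in sorted(errors.items(), key=lambda kv: kv[1]):
    print(f"{name}: residual error = {err:.4f}")
```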

Another promising avenue of research, which I started exploring quite recently, is that of using an artificial neural network as a complex set of probabilities. Those among you, my readers, who are at least mildly familiar with the mechanics of artificial neural networks, know that a neural network needs empirical data to be transformed in a specific way, called standardization. The most common way of standardizing consists in translating whatever numbers I have at the start into a scale of relative size between 0 and 1, where 1 corresponds to the local maximum. I thought that such a strict decimal fraction comprised between 0 and 1 can spell 'probability', i.e. the probability of something happening. This line of logic applies to just some among the indefinitely many datasets we can make. If I have a dataset made of variables such as, for example, GDP per capita, healthcare expenditures per capita, and the average age at which a person ends their formal education, it cannot really be considered in terms of probability. If there is any healthcare system in place, there are always some healthcare expenditures per capita, and their standardized value cannot really be interpreted as the probability of healthcare spending taking place. Still, I can approach the same thing from a different angle. Average healthcare spending per capita can be decomposed across a finite number of distinct social entities, e.g. individuals, local communities etc., and each of those social entities can be associated with a probability of using any healthcare at all during a given period of time.
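Here is a tiny numerical illustration of that point, with made-up numbers: the standardization I describe (division by the local maximum), and the re-reading of an aggregate as a probability once it is decomposed into distinct social entities.

```python
# Standardization by the local maximum, and re-reading an aggregate as a probability.
import numpy as np

health_exp_per_capita = np.array([820.0, 1150.0, 2300.0, 4100.0, 6500.0])

# Standardization as described: relative size between 0 and 1, where 1 = local maximum.
standardized = health_exp_per_capita / health_exp_per_capita.max()
print(standardized)

# The standardized spending itself is not a probability: wherever a healthcare system
# exists, spending is positive. But the same aggregate can be decomposed: out of N people,
# what fraction used any healthcare at all during the period? That fraction is a probability.
n_people = 1000
used_healthcare = 640          # hypothetical count of people who used any care at all
p_use = used_healthcare / n_people
print("probability of using any healthcare:", p_use)
```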

That other approach to using neural networks, i.e. as sets of probabilities, has a special edge to it. I can simulate things happening or not, and I can introduce a disturbing factor which kicks certain pre-defined events into existence or out of it. I have observed that once a phenomenon becomes probable, it is not really possible to kick it out of the system, yet it can yield to newly emerging phenomena. In other words, my empirical observation is that once a given structure of reality is in place, with distinct phenomena happening in it, that structure remains essentially there, and it doesn't fade even if the probabilities attached to those phenomena are random. On the other hand, when I allow a new structure, i.e. another set of distinct phenomena, to come into existence with random probabilities, that new structure will slowly take over a part of the space previously occupied just by the initially incumbent, 'old' set of phenomena. All in all, when I treat the standardized numerical values which an artificial neural network normally feeds on as probabilities of happening rather than magnitudes of something existing anyway, I can simulate the unfolding of entirely new structures. It is a structure generating other structures.
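A minimal sketch of that kind of simulation, with made-up numbers and a purely illustrative setup of my own, could look like this: a set of phenomena encoded as probabilities between 0 and 1, pushed through a sigmoid layer epoch after epoch, with a disturbance factor that randomly kicks individual phenomena up or down, and a second structure that enters later with random probabilities and shares the space of occurrence.

```python
# Phenomena as probabilities, a random disturbance factor, and a new structure
# entering halfway through the simulation to share the space of occurrence.
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_old, n_new, epochs = 6, 4, 300
probs = np.concatenate([rng.random(n_old), np.zeros(n_new)])   # incumbent phenomena only
W = rng.normal(0.0, 0.5, size=(n_old + n_new, n_old + n_new))  # fixed mixing weights

for t in range(epochs):
    disturbance = rng.normal(0.0, 0.1, size=probs.shape)       # the disturbing factor
    probs = np.clip(sigmoid(W @ probs) + disturbance, 0.0, 1.0)
    if t < 100:
        probs[n_old:] = 0.0                  # the new structure does not exist yet...
    elif t == 100:
        probs[n_old:] = rng.random(n_new)    # ...until it enters with random probabilities
    probs /= probs.sum()                     # all phenomena share one space of occurrence

print("share of the space held by the old structure:", float(probs[:n_old].sum()))
print("share of the space held by the new structure:", float(probs[n_old:].sum()))
```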

I am trying to reverse engineer that phenomenon. Why do I use numerical values standardized between 0 and 1 in my neural network at all? Because this is the interval of values that the neural activation function needs. I mean, there are some functions, such as the hyperbolic tangent, which can work with input variables standardized between –1 and 1, yet if I want my data to be fully digestible for any neural activation function, I'd better standardize it between 0 and 1. Logically, I infer that the mathematical functions useful for simulating neural activation are adapted to deal with sets of probabilities (range between 0 and 1) rather than sets of local magnitudes.
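A quick numerical check of that last point: the logistic (sigmoid) function maps any input into the interval (0, 1), whilst the hyperbolic tangent maps it into (–1, 1), which is why data standardized between 0 and 1 is digestible for both.

```python
# The output ranges of the two activation functions mentioned above.
import numpy as np

x = np.linspace(-5.0, 5.0, 11)
sigmoid = 1.0 / (1.0 + np.exp(-x))
tanh = np.tanh(x)

print("sigmoid range:", float(sigmoid.min()), "to", float(sigmoid.max()))   # stays inside (0, 1)
print("tanh range:   ", float(tanh.min()), "to", float(tanh.max()))         # stays inside (-1, 1)
```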

Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, individually write and manage. If you enjoy the content I create, you can choose to support my work, with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT.  What you will contribute to will be almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly rounded style.

At the bottom of the sidebar on the main page, you can access the archives of this blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support of my research and writing.

‘Discover Social Sciences’ is a continuous endeavour and is mostly made of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will be mostly supporting my continuous stream of writing and online publishing. As you read through the stream of my updates on https://discoversocialsciences.com , you can see that I usually write 1 – 3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, 'Capitalism And Political Power'. Normally, it is available from the publisher, the Scholar publishing house (https://scholar.com.pl/en/economics/1703-capitalism-and-political-power.html?search_query=Wasniewski&results=2 ). Via https://discoversocialsciences.com , you can download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page on https://discoversocialsciences.com .

You might be interested in the Virtual Summer Camps as well. These free, half-day summer camps will be week-long, with enrichment-based classes in subjects like foreign languages, chess, theatre, coding, Minecraft, how to be a detective, photography and more. These live, interactive classes will be taught by expert instructors vetted through Varsity Tutors' platform, with 200 camps already scheduled for the summer. https://www.varsitytutors.com/virtual-summer-camps
