I followed my suspects home

MY EDITORIAL ON YOUTUBE

I am putting together the different threads of thinking which I have developed over the last weeks. I am trying to make sense of the concept of collective intelligence, and to make my science keep up with the surrounding reality. If you have followed my latest updates, you know I yielded to the temptation of taking a stance regarding current events (see Dear Fanatics and We suck our knowledge about happening into some kind of patterned structure). I have been struggling, and I keep struggling, with balancing science against current observation. Someone could say: ‘But, prof, isn’t science based on observation?’. Yes, indeed it is, but science is like a big lorry: it has a wide turning radius, because it needs to incorporate current observation into a conceptual network which takes a step back from that observation. I am committed to science, and, at the same time, I am trying to take as tight a turn around current events as possible. Tires screech on the tarmac, synapses flare… It is exciting and almost painful, all at the same time. I think we need it, too: that combination of involvement and distance.

Once again, I restate my reasons for being concerned about the current events. First of all, I am a human being, and when I see and feel the social structure rattling around me, I become a bit jumpy. Second of all, as the Black Lives Matter protests spill from the United States to Europe, I can see the worst European demons awakening. Fanatic leftist anarchists, firmly believing they will change the world for the better by destroying it, pave the way for right-wing fanatics, who, in turn, firmly believe that order is all we need, and that order tastes best when served with a pinch of concentration camps. When I saw some videos published by activists from the so-called CHAZ, or Capitol Hill Autonomous Zone, in Seattle, Washington, U.S., I had a déjà vu. ‘F**k!’ I thought, ‘This is exactly what I remember about life in a communist country’. A bunch of thugs with guns control the borders and decide who can get in and out. Behind them, a handful of grandstanding, useful idiots with big signs: ‘Make big business pay’, ‘We want restorative justice’ etc. In the background, thousands of residents – people who simply want to live their lives the best they can – are trapped in that s**t. This is what I remember from communist Poland: thugs with deadly force using enthusiastic idiots to control local resources.

Let me be clear: I know people in Poland who use the expression ‘black apes’ to designate black people. I am sorry when I hear things like that. That one hurts too. Still, people who say stupid s**t can be talked into changing their minds. On the other hand, people who set fire to other people’s property and handle weapons are much harder to engage in a creative exchange of viewpoints. I think it is a good thing to pump the brakes before we come to the edge of the cliff. If we go over that edge, it will dramatically slow down positive social change instead of speeding it up.

Thirdly, events are going the way I was slightly afraid they would when the pandemic started and lockdowns were being imposed. The sense of danger, combined with the inevitable economic downturn, makes a perfect tinderbox for random explosions. This is social change experienced from the uncomfortable side. When I develop my research about cities and their role in our civilisation, I frequently refer to the concept of collective intelligence. I refer to cities as a social contrivance, supposed to work as creators of new social roles, moderators of territorial conflicts, and markets for agricultural goods. If they are supposed to work, you might ask, why don’t they? Why are they overcrowded, polluted, infested with crime and whatnot?

You probably know that you can hurt yourself with a screwdriver. It is certainly not the screwdriver’s fault, and it is not even always your fault. S**t happens, quite simply. When a lot of people do a lot of things, s**t happens recurrently. It is called risk. At the aggregate scale, risk is a tangible quantity of damage, not just a likelihood of damage taking place. This is why we store food for later and buy insurance policies. Dense human settlements mean lots of humans doing things very frequently in space and time, and that means more risk. We create social structures, and those structures work. This is how we survived. Those structures always have some flaws, and when we see them, we try to make some change.

My point is that collective intelligence means a collective capacity to figure stuff out when we are at a loss as to what to do next. It does not mean coming up with perfect solutions. It means advancing one more step along a path without knowing exactly where it leads. Scientifically, the concept is called an adaptive walk in a rugged landscape. There is a specific theoretical shade to it, namely that of conscious representation.

Accidents happen, and another one has just happened. I stumbled upon a video on YouTube, entitled ‘This Scientist Proves Why Our Reality Is False | Donald Hoffman on Conversations with Tom’ ( https://youtu.be/UJukJiNEl4o ), and I went after the man, i.e. after prof. Hoffman. Yes, guys, this is what I like doing. When I find someone with interesting ideas, I tend to sort of follow them home. One of my friends calls it ‘the bulldog state of mind’. Anyway, I went down this specific rabbit hole, and I found two articles: Hoffman et al. 2015[1] and Fields et al. 2018[2]. I owe professor Hoffman thanks for giving me hope that I am not mad when I use neural networks to represent collective intelligence. I owe him and his collaborators for giving some theoretical polish to my own work. I am like Molière’s bourgeois turning into a gentleman: I suddenly realize what kind of prose I have been speaking about that topic. That prose is built around the concept of Markov chains, i.e. sequential states of reality where each consecutive state is the result of just the previous state, without exogenous corrections. The neural network I use is a Markovian kernel, i.e. a matrix (= a big table with numbers in it, to be simple) that transforms one Markov space into another.
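Just to show what I mean by that matrix, here is a minimal numerical sketch of my own – the states and the probabilities are made up for illustration: a stochastic matrix whose rows sum to 1, pushing a distribution over states one step forward, with nothing exogenous allowed in.

```python
import numpy as np

# A Markovian kernel in its simplest clothes: a stochastic matrix K
# whose rows sum to 1. Each consecutive state distribution results
# from just the previous one, with no exogenous correction.
K = np.array([
    [0.7, 0.2, 0.1],   # transition probabilities out of state 0
    [0.3, 0.4, 0.3],   # ... out of state 1
    [0.1, 0.3, 0.6],   # ... out of state 2
])

state = np.array([1.0, 0.0, 0.0])  # start with certainty in state 0
for t in range(5):
    state = state @ K              # one Markovian step
    print(t + 1, state.round(4))
```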

As we talk about spaces, I feel like calling up two other mathematical concepts, important for understanding the concept of Conscious Agent Networks (yes, the acronym is CAN), as developed by professor Hoffman. These concepts are: measurable space and σ-algebra. If I take a set X of any phenomenal occurrences – chickens, airplanes, people, diamonds, numbers and whatnot – I can define subsets inside of it by cherry-picking some elements. The family of all the subsets of X which I agree to measure – a family that contains X itself and the empty set, and that is closed under taking complements and countable unions – makes the σ-algebra of the set X. The set X together with its σ-algebra is a measurable space.
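For a finite set, those closure properties can be checked by brute force. Here is a toy illustration of my own (not from Hoffman’s papers): the power set of a small X is a σ-algebra, the trivial family {∅, X} is one too, and a family missing a complement fails the test.

```python
from itertools import chain, combinations

X = frozenset({'chicken', 'airplane', 'diamond'})

def powerset(s):
    """All subsets of s - the largest sigma-algebra on a finite set."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_sigma_algebra(family, X):
    """Check the defining closure properties (finite case)."""
    family = set(family)
    if X not in family or frozenset() not in family:
        return False
    closed_under_complement = all(X - A in family for A in family)
    closed_under_union = all(A | B in family for A in family for B in family)
    return closed_under_complement and closed_under_union

print(is_sigma_algebra(powerset(X), X))                               # True
print(is_sigma_algebra([frozenset(), X], X))                          # True
print(is_sigma_algebra([frozenset(), frozenset({'chicken'}), X], X))  # False
```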

Fields et al. 2018 represent conscious existence in the world as a relation between three essential, measurable spaces: states of the world, or W, conscious experiences thereof, or X, and actions, designated as G. Each of these is a measurable space because it is a set of phenomena accompanied by its σ-algebra. States of the world are a set, and this set can be recombined through its specific σ-algebra. The same holds for experiences and actions. Conscious existence consists in consciously experiencing states of the world and taking actions on the grounds of that experience.

That brings up an interesting consequence: conscious existence can be represented as a mathematical manifold of 7 dimensions. Why 7? It is simple. States of the world W, for one. Experiences X, two. Actions G, three. Perception is a combination of experiences with states of the world, right? Therefore, perception P is a Markovian kernel – think of a bundle of probabilistic strings – attaching those two together, and it can be represented as P: W × X → X. That makes four dimensions. We go further. Decisions are a transformation of experiences into actions, or D: X × G → G. Yes, this is another Markovian kernel, and it is the fifth dimension of conscious existence. The sixth one is the one that some people don’t like, i.e. the consequences of actions, thus a Markovian kernel that transforms actions into further states of the world, and it spells A: G × W → W. All that happy family of phenomenological dimensions, i.e. W, X, G, P, D, A, needs another, seventh dimension to have any existence at all: they need time t. In the theory presented by Fields et al. 2018, a Conscious Agent (CA) is precisely a 7-dimensional combination of W, X, G, P, D, A, and t.
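To fix ideas, here is a hedged computational sketch of that 7-tuple, with toy sizes and random kernels of my own invention – the flattened indexing of the product spaces is my shortcut, not the paper’s formalism.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_kernel(n_in, n_out):
    """A random Markovian kernel: each row is a probability distribution."""
    m = rng.random((n_in, n_out))
    return m / m.sum(axis=1, keepdims=True)

nW, nX, nG = 4, 3, 2   # toy sizes for the world, experience, action spaces

P = random_kernel(nW * nX, nX)   # perception  P: W x X -> X
D = random_kernel(nX * nG, nG)   # decision    D: X x G -> G
A = random_kernel(nG * nW, nW)   # action      A: G x W -> W

w, x, g = 0, 0, 0                # initial world state, experience, action
for t in range(5):               # time t, the seventh dimension
    x = rng.choice(nX, p=P[w * nX + x])  # perceive: world + old experience -> new experience
    g = rng.choice(nG, p=D[x * nG + g])  # decide: experience + old action -> new action
    w = rng.choice(nW, p=A[g * nW + w])  # act: action + old world -> new world state
    print(f"t={t}: w={w}, x={x}, g={g}")
```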

That paper by Fields et al. 2018 made me understand that representing collective intelligence with neural networks involves deep theoretical assumptions about perception and consciousness. Neural networks are mathematical structures. In simpler words, they are combinations of symmetrical equations, asymmetrical inequalities, and logical propositions linking them (such as ‘if… then…’). Those mathematical structures are divided into output variables and input variables. A combination of inputs is supposed to stay in a given relation – equality, or inequality in one direction or the other – to a pre-defined output. The output variable is precisely the tricky thing. The theoretical stream represented by Fields et al. 2018, as well as by He et al. 2015[3], Hoffman et al. 2015[4], and Hoffman 2016[5], calls itself the ‘Interface Theory of Perception’ (ITP) and assumes that the output of perception and consciousness consists in payoffs from the environment. In other words, perception and consciousness are fitness functions, and organisms responsive only to fitness systematically outcompete those responsive to a veridical representation of reality, i.e. to truth about reality. In still other words, ITP stipulates that we live in a Matrix that we make by ourselves: we peg our attention on phenomena that give us payoffs and don’t give a s**t about all the rest.
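The intuition behind that ‘fitness beats truth’ claim can be caricatured in a few lines of code. This is my own toy simulation, not the evolutionary game from Hoffman’s papers: when the payoff is non-monotonic in the resource, an agent who perceives only payoffs reliably beats an agent who perceives true quantities and greedily takes the larger one.

```python
import numpy as np

rng = np.random.default_rng(0)

def payoff(r):
    """Non-monotonic fitness: too little or too much of a resource is bad."""
    return np.exp(-((r - 50.0) ** 2) / (2 * 15.0 ** 2))

truth_total, fitness_total = 0.0, 0.0
for _ in range(10_000):
    a, b = rng.uniform(0, 100, size=2)  # two patches, two resource quantities
    # The 'veridical' agent sees true quantities and takes the larger patch.
    truth_total += payoff(max(a, b))
    # The 'interface' agent sees only payoffs and takes the better-paying patch.
    fitness_total += max(payoff(a), payoff(b))

print(f"veridical strategy:     {truth_total:.1f}")
print(f"fitness-tuned strategy: {fitness_total:.1f}")  # consistently higher
```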

Apparently, there is an important body of science which vigorously opposes the Interface Theory of Perception (see e.g. Trivers 2011[6]; Pizlo et al. 2014[7]), claiming that human perception is fundamentally veridical, i.e. oriented towards discovering the truth about reality.

In the middle of that theoretical clash, my question is: can I represent intelligent structures as Markov chains without endorsing the assumptions of ITP? In other words, can I assume that collective intelligence is a sequence of states, observable as sets of quantitative variables, where each such state is solely the outcome of the preceding state? I think it is possible, and, as I explore this particular question, I decided to connect it with a review I am preparing right now, of a manuscript entitled ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’, submitted for publication in the International Journal of Energy Sector Management (ISSN 1750-6220). As I am just a reviewer of this paper, I don’t think I should disseminate its contents on my blog, and therefore I will break with my basic habit of providing linked access to the sources I quote. I will just be discussing the paper in this update, with the hope of adding, by means of the review, as much scientific value to the initial manuscript as possible.

I refer to that paper because it uses a neural network, namely a Levenberg–Marquardt backpropagation network, for validating a model of interactions between economy, energy, and environment. I want to start my review from this point, namely from the essential logic of that neural network and its application to the problem studied in the paper I am reviewing. The usage of neural networks in social sciences is becoming a fashion, and I like going through the basic assumptions of this method once again, as it comes in handy in connection with the Interface Theory of Perception, which I have just passed through cursorily.
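For readers who have never met that particular animal: Levenberg–Marquardt is, at its heart, a least-squares optimizer which blends gradient descent with the Gauss–Newton method. Here is a hedged sketch of that logic on a toy curve-fitting problem of my own; in a ‘trainlm’-style network, the parameters would be the network’s weights instead of my two p’s.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy data: y = 2.5 * exp(0.8 * t) plus noise.
t = np.linspace(0, 4, 50)
rng = np.random.default_rng(1)
y_obs = 2.5 * np.exp(0.8 * t) + rng.normal(0.0, 1.0, t.size)

def residuals(p):
    """Local residual errors: model expectations minus reality."""
    return p[0] * np.exp(p[1] * t) - y_obs

# method='lm' is SciPy's Levenberg-Marquardt implementation.
fit = least_squares(residuals, x0=[1.0, 0.1], method='lm')
print(fit.x)  # parameters that minimise the sum of squared residuals
```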

The manuscript ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’ explores the general hypothesis that relations between energy, economy and the environment form a self-regulating, complex system. In other words, this paper assumes that human populations can somehow self-regulate, although not necessarily in a perfect manner, as regards the balance between economic activity, consumption of energy, and environmental interactions.

Neural networks can be used in social sciences in two essential ways. First of all, we can assume that ‘IT IS INTELLIGENT’, whatever ‘IT’ is or means. A neural network is then supposed to represent the way IT IS INTELLIGENT. Second of all, we can use neural networks instead of classical stochastic models so as to find the best-fitting values for the parameters ascribed to some variables. The difference between a stochastic method and a neural network, as regards nailing those parameters down, lies in the way of reading and utilizing residual errors. We have ideas, right? As long as we keep them nicely inside our heads, those ideas look just great. Still, when we externalize those ideas, i.e. when we try and see how that stuff works in real life, then it usually hurts, at least a little. It hurts because reality is a bitch and does not want to bend to our expectations. When it hurts, the local interaction of our grand ideas with reality generates a gap. Mathematically, that gap ‘ideal expectations – reality’ is a local residual error.

Essentially, mathematical sciences consist in finding such logical, recurrent patterns in our thinking as generate as little residual error as possible when confronted with reality. The Pythagorean theorem c² = a² + b², the number π (yes, we read it ‘pie’) etc. – all that stuff consists of formalized ideas which hold in confrontation with reality, i.e. they generate very little error or no error at all. The classical way of nailing down those logical structures, i.e. the classical way of doing mathematics, consists in making a provisional estimation of what real life should look like according to our provisional math, then assessing all the local residual errors which inevitably appear as soon as we confront said real life, and, in a long sequence of consecutive steps, progressively modifying our initial math so that it fits reality well. We take all the errors we can find at once, and we calibrate our mathematical structure so as to minimize all those errors at the same time.

That was the classical approach. Mathematicians whom we read about in history books were dudes who would spend a lifetime nailing down one single equation. With the emergence of simple digital tools for statistics, it has become a lot easier. With software like SPSS or Stata, you can essentially create your own equations, and, provided that you have relevant empirical data, you can quickly check their accuracy. The problem with that approach, which is already being labelled as ‘classical stochastic’, is that if an equation you come up with proves statistically inaccurate, i.e. it generates a lot of error, you sort of have to guess what other equation could fit better. That classic statistical software speeds up the testing, but not really the formulation of equations as such.

With the advent of artificial intelligence, things have changed even further. Each time you fire up a neural network, that thing essentially nails down new math. A neural network learns: it does the same thing that great mathematical minds used to do. Each time a neural network makes an error, it learns on that single error and improves, producing a slightly different equation, and so forth, until the error becomes negligible. I have noticed there is a recent fashion to use neural networks as tools for validating mathematical models, just as classical stochastic methods would be used, e.g. Ordinary Least Squares. Generally, that approach has some kind of bad methodological smell to me. A neural network can process the same empirical data that an Ordinary Least Squares test processes, and the neural network can yield the same type of parameters as the OLS test, and yet the way those values are obtained is completely different. A neural network is intelligent, whilst an Ordinary Least Squares test (or any other statistical test) is not. What a neural network yields comes out of a process very similar to thinking. The result of a test is just a number.
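The contrast is easy to stage on fake data. In the sketch below (entirely my own numbers), OLS gets the slope and the intercept in one closed-form shot, while a one-neuron ‘network’ crawls to nearly the same pair of values, one residual error at a time.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 5.0 + rng.normal(0.0, 1.0, x.size)

# Ordinary Least Squares: one closed-form computation, no learning involved.
X = np.column_stack([x, np.ones_like(x)])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# A one-neuron 'network': reach the same parameters by iterating on errors,
# nudging the weights a little after every pass over the data.
w, b, lr = 0.0, 0.0, 0.01
for epoch in range(20_000):
    err = (w * x + b) - y          # local residual errors
    w -= lr * (err * x).mean()     # gradient step on the slope
    b -= lr * err.mean()           # gradient step on the intercept

print(beta_ols)   # roughly [3.0, 5.0]
print([w, b])     # nearly the same numbers, reached by learning
```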

If someone says: ‘this neural network has validated my model’, I am always like: ‘Weeelll, I guess what this network has just done was to invent its own model, which you don’t really understand, on the basis of your model’. My point is that a neural network can optimize very nearly anything, yet the better a network optimizes, the more prone it is to overfitting, i.e. to being overly efficient at justifying a set of numbers which does not correspond to the true structure of the problem.

Validation of the model, and the use of a neural network to that purpose, leads me to the model itself, such as it is presented in the manuscript. This is a complex logical structure and, as this blog is supposed to serve the popularization of science, I am going to stop and study at length both the model and its connection with the neural network. First of all, the authors of that ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’ manuscript are anonymous to me. This is called blind review. Just in case I had some connection to them. Still, man, like really: if you want to conspire, do it well. Those authors technically remain anonymous, but right at the beginning of their paper they introduce a model which, in order to be fully understood, requires referring to another paper, which the same authors quote: Zhao, L., & Otoo, C. O. A. (2019). Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System. Complexity, 2019, https://doi.org/10.1155/2019/3941920 .

As I go through that referenced paper, I discover largely the same line of logic. Guys, if you want to remain anonymous, don’t send around your Instagram profile. I am pretty sure that ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’ is very largely a build-up on the paper they quote, i.e. ‘Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System’. It is the same method of validation, i.e. a Levenberg–Marquardt backpropagation network, with virtually the same empirical data, and an almost identical model.

Good. I followed my suspects home, I know who they hang out with (i.e. with themselves), and now I can go back to their statement. Both papers, i.e. the one I am reviewing and the one which serves as its baseline, follow the same line of logic. The authors build a model of relations between economy, energy, and environment, with three main dependent variables: the volume x(t) of pollution emitted, the value y(t) of Gross Domestic Product, and environmental quality z(t).

By the way, I can see that the authors need to get a bit more at home with macroeconomics. In their original writing, they use the expression ‘level of economic growth (GDP)’. As regards Gross Domestic Product, you have either level or growth. Level means aggregate GDP, and growth means percentage change over time, like [GDP(t1) – GDP(t0)] / GDP(t0). As I try to figure out what exactly those authors mean by ‘level of economic growth (GDP)’, I go through the empirical data they introduce as regards China and its economy. Under the heading y(t), i.e. the one I’m after, they present standardized values which start at y(2000) = 1,1085 in the year 2000, and reach y(2017) = 9,2297 in 2017. Whatever the authors have in mind, aggregate GDP or its rate of growth, that thing changed by 9,2297/1,1085 = 8,32 times between 2000 and 2017.

I go and check with the World Bank. The aggregate GDP of China, measured in constant 2010 US$, made $2 232 billion in 2000, and $10 131,9 billion in 2017. This is a change by 4,54 times, thus much less than the standardized change in y(t) that the authors present. I check with the rate of real growth in GDP. In 2000, Chinese economic growth was 8,5%, and in 2017 it was 6,8%, which gives a change by (6,8/8,5) = 0,8 times and is, once again, far from the standardized 8,32 times provided by the authors. I checked with two other possible measures of GDP: in current US$, and in current international $ PPP. The latter indicator provides values of gross domestic product (GDP) expressed in current international dollars, converted by the purchasing power parity (PPP) conversion factor. The former of the two yields a 10,02 times growth in GDP, in China, from 2000 to 2017. The latter gives 5,31 times growth.
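For transparency, here is that little cross-check in code, using the figures quoted above (hand-copied World Bank values, so treat them as approximations):

```python
# Cross-checking the scale of change in y(t), China, 2000 vs 2017.
y_2000, y_2017 = 1.1085, 9.2297          # authors' standardized y(t)
gdp_2000, gdp_2017 = 2_232.0, 10_131.9   # constant 2010 US$, billions
growth_2000, growth_2017 = 8.5, 6.8      # real GDP growth, percent

print(round(y_2017 / y_2000, 2))     # 8.33 - the authors' change in y(t), up to rounding
print(round(gdp_2017 / gdp_2000, 2))  # 4.54 - change in aggregate real GDP
print(round(growth_2017 / growth_2000, 2))  # 0.8 - change in the growth rate
```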

Good. I conclude that the authors used some kind of nominal GDP in their data, calculated with internal inflation in the Chinese economy. That could be a serious drawback as regards the model they develop. This is supposed to be research on the mutual balance between economy, ecosystems, and energy. In this context, economy should be measured in terms of real output, thus after having shaved off inflation. Using nominal GDP is a methodological mistake.

What the hell, I go further into the model. This is a model based on differentials, thus on local gradients of change. The (allegedly) anonymous authors of the ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’ manuscript refer their model, without giving much of an explanation, to that presented in: Zhao, L., & Otoo, C. O. A. (2019). Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System. Complexity, 2019. I need to cross-reference those two models in order to make sense of it.

The chronologically earlier model in Zhao, L., & Otoo, C. O. A. (2019). Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System. Complexity, 2019 operates within something that the authors call an ‘economic cycle’ (to be dug into seriously, ‘cause, man, the theory of economic cycles is like a separate planet in the galaxy of social sciences), and introduces four essential, independent variables computed as peak values inside the economic cycle. They are:

>>> F stands for peak value in the volume of pollution,

>>> E represents the peak of GDP (once again, the authors write about ‘economic growth’, yet there is no way it could possibly be economic growth; it has to be aggregate GDP),

>>> H stands for the ‘peak value of the impact of economic growth y(t) on environmental quality z(t)’, and, finally,

>>> P is the maximum volume of pollution that the ecosystem can absorb.

With those independent peak values in the system, the baseline model focuses on computing first-order derivatives over time of, respectively, x(t), y(t) and z(t). In other words, what the authors are after is change over time, noted as, respectively, d[x(t)]/d(t), d[y(t)]/d(t), and d[z(t)]/d(t).

The formal notation of the model is given in a triad of equations:  

d[x(t)]/d(t) = a1*x*[1 – (x/F)] + a2*y*[1 – (y/E)] – a3*z

d[y(t)]/d(t) = -b1*x – b2*y – b3*z

d[z(t)]/d(t) = -c1*x + c2*y*[1 – (y/H)] + c3*z*[(x/P) – 1]
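To make those three lines move, here is a hedged numerical sketch: I integrate the triad with made-up coefficients and peaks. The a, b, c parameters and the F, E, H, P values below are placeholders of mine, not the calibrated values from either paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters, made up for illustration only.
a1, a2, a3 = 0.05, 0.03, 0.02
b1, b2, b3 = 0.01, 0.02, 0.01
c1, c2, c3 = 0.02, 0.03, 0.01
F, E, H, P = 10.0, 12.0, 8.0, 15.0

def system(t, s):
    x, y, z = s  # pollution, GDP, environmental quality
    dx = a1 * x * (1 - x / F) + a2 * y * (1 - y / E) - a3 * z
    dy = -b1 * x - b2 * y - b3 * z
    dz = -c1 * x + c2 * y * (1 - y / H) + c3 * z * (x / P - 1)
    return [dx, dy, dz]

sol = solve_ivp(system, t_span=(0, 50), y0=[1.0, 1.0, 1.0])
print(sol.y[:, -1])  # terminal values of x(t), y(t), z(t)
```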

Good. This is the baseline model presented in: Zhao, L., & Otoo, C. O. A. (2019). Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System. Complexity, 2019. I am going to comment on it, and then I present the extension to that model, which the paper under review, i.e. ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’ introduces as theoretical value added.

Thus, I comment. The model generally assumes two things. Firstly, gradients of change in pollution x(t), real output y(t), and environmental quality z(t) are sums of fractions taken out of stationary states. It is like saying: the pace at which this child will grow will be a fraction of its bodyweight, plus a fraction of the difference between its current height and its tallest relative’s height, etc. This is a computational trick more than solid theory. In statistics, when we study empirical data pertinent to economics or finance, we frequently have things like non-stationarity (i.e. a trend of change or a cycle of change) in some variables, very different scales of measurement, etc. One way out of that is to do regression on the natural logarithms of the data (logarithms flatten out whatever needs to be flattened), or on first derivatives over time (i.e. growth rates). It usually works, i.e. logarithms or first derivatives of the original data yield better accuracy in linear regression than the original data itself. Still, it is a computational trick which can help validate a theory, not a theory as such. To my knowledge, there is no theory to postulate that the gradient of change in the volume of pollution d[x(t)]/d(t) is a sum of fractions resulting from the current economic output or the peak possible pollution in the economic cycle. Even if we assume that relations between energy, economy and environment in a human society are a complex, self-organizing system, that system is supposed to work through interaction, not through the addition of growth rates.
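That flattening trick, in miniature (my own fake series, for illustration): an exponential trend is non-stationary, but its natural logarithm is a straight line, and a plain linear regression on the logs recovers the underlying growth rate cleanly.

```python
import numpy as np

t = np.arange(50, dtype=float)
series = 100.0 * np.exp(0.03 * t)   # a trending, non-stationary variable

# OLS on the logarithms: log(series) = slope * t + intercept.
X = np.column_stack([t, np.ones_like(t)])
slope, intercept = np.linalg.lstsq(X, np.log(series), rcond=None)[0]
print(slope)  # ~0.03, the growth rate hidden in the trend

# The other flattening route: first derivatives as growth rates.
growth_rates = np.diff(series) / series[:-1]
print(growth_rates[:3])  # ~0.0305 each period
```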

I need to wrap my mind a bit more around those equations, and here comes another assumption I can see in that model. It assumes that the pace of change in output, pollution and environmental quality depends on intra-cyclical peaks in those variables. You know, those F, E, H and P peaks which I mentioned earlier. Somehow, I don’t follow this logic. The peak of any process depends on the cumulative rates of change, rather than the other way around. Besides, if I assume any kind of attractor in a stochastic process, it would rather be the mean-reverting level, and not really the local maximum.

I can see that reviewing that manuscript will be tons of fun, intellectually. I like it. For the time being, I am posting those uncombed thoughts of mine on my blog, and I keep thinking.

Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, individually write and manage. If you enjoy the content I create, you can choose to support my work, with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT.  What you will contribute to will be almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly rounded style.

At the bottom of the sidebar on the main page, you can access the archives of this blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support of my research and writing.

‘Discover Social Sciences’ is a continuous endeavour and is mostly made of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will be mostly supporting my continuous stream of writing and online publishing. As you read through the stream of my updates on https://discoversocialsciences.com , you can see that I usually write 1 – 3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, ‘Capitalism And Political Power’. Normally, it is available with the publisher, the Scholar publishing house (https://scholar.com.pl/en/economics/1703-capitalism-and-political-power.html?search_query=Wasniewski&results=2 ). Via https://discoversocialsciences.com , you can download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page on https://discoversocialsciences.com .

You might be interested in Virtual Summer Camps as well. These are free, week-long, half-day summer camps with enrichment-based classes in subjects like foreign languages, chess, theater, coding, Minecraft, how to be a detective, photography and more. These live, interactive classes are taught by expert instructors vetted through Varsity Tutors’ platform, and 200 camps are already scheduled for the summer: https://www.varsitytutors.com/virtual-summer-camps .


[1] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[2] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213. https://doi.org/10.1016/j.cogsys.2017.10.003

[3] He, X., Feldman, J., & Singh, M. (2015). Structure from motion without projective consistency. Journal of Vision, 15, 725.

[4] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[5] Hoffman, D. D. (2016). The interface theory of perception. Current Directions in Psychological Science, 25, 157–161

[6] Trivers, R. L. (2011). The folly of fools. New York: Basic Books

[7] Pizlo, Z., Li, Y., Sawada, T., & Steinman, R. M. (2014). Making a machine that sees like us. New York: Oxford University Press.
