Friday, October 27, 2017

Class v. Identity

The class v. identity debate (Jacobin Magazine) has two aspects: factual and narrative. The factual aspect is the reality of disadvantage and discrimination that women and certain ethnic groups face - and this is indeed a fact. The narrative aspect is the manner in which this fact is conveyed in public discourse.
The factual aspect of this debate can hardly be contested, as discrimination in employment opportunities, access to housing, or due process is indeed difficult to deny. The narrative aspect forms a spectrum ranging from mostly descriptive to mostly emotive. An example of a mostly descriptive narrative is a labor market study that uses quantitative methods to identify the effect of sex or ethnicity on wage disparity. An example of a mostly emotive narrative is the talk about "patriarchy," "white supremacy," or "white privilege."
The actual bone of contention in the class v. identity debate is the narrative aspect, although there is a widespread misconception that the debate is about facts. I sincerely doubt that anyone who bona fide engages in this debate disputes the facts, which, as I said, are difficult to deny. As I see it, the actual debate centers on the mostly emotive narratives, which the "identity people" love and the "class people" hate.
Stated differently, the debate is mostly about cultural tastes, not facts, and as the old saying goes, de gustibus non est disputandum - there is no disputing taste. I can certainly understand the emotional appeal of narratives that fly in the face of the official ideology of democracy, equal opportunity, and meritocracy. Crying "the emperor has no clothes" can have a profoundly cathartic effect on those who do the crying.
There is, however, one aspect of these cultural tastes and preferences that can be debated - their effect on the political struggle to diminish, if not altogether eliminate, discrimination in the economy and society. As a matter of fact, discrimination persists mostly because it is very useful to the oligarchy: it enables the few to control the many through divide-and-conquer tactics. Therefore, the success of the struggle against discrimination critically depends on forming a broad coalition of various social groups against well-organized and well-financed corporate and political oligarchies.
The narratives of discrimination can be judged by the effect they have on that divide-and-conquer tactic. It is easy to see that the talk about "patriarchy," "white supremacy," or "white privilege" plays right into the hands of the oligarchy because it amplifies divisions among groups that should be united. That talk may sound like music to ivory tower academics who love to wallow in guilt, but it adds insult to injury for those who are chronically unemployed or underemployed, work shitty jobs at the whim of their bosses, cannot afford health care, experience the effects of substance abuse, and also happen to be white or male.
The talk about "patriarchy," "white supremacy," or "white privilege" is very unlikely to win any supporters from the white working or lower middle classes. In fact, it is likely to alienate them from left-leaning politics and push them further toward those who, like Trump, appear to "call things as they are." If the left is to win those people, it needs to come up with its own "call things as they are" narrative of discrimination and injustice - a narrative that may not be as appealing to cultural studies academics as wallowing in guilt, but that appeals to the proles whose support is indispensable in the political struggle for a better society.

Monday, October 16, 2017

Latter days magic

Magical thinking is not limited to prehistoric or medieval societies. It is alive and well in modern, supposedly "rational" societies. The main difference between prehistoric and modern magic is the kind of magical objects involved, which in turn is affected by the state of knowledge and technology. In prehistoric times there were witches, demons, flying objects, and the like; today there are foreign agents and conspiracies. What all of them have in common is that they are outside the realm of rational knowledge and by that virtue "expand" rational knowledge by going where rational knowledge cannot.
Looking for explanations is the quintessence of human cognition. If rational knowledge cannot supply a satisfactory explanation, people will turn to magic, especially in situations that cause them anxiety or fear. In this respect, people have not changed since prehistoric times. What has changed is the type of magic and the magical objects involved in these "explanations."
Witches and supernatural flying objects could do the trick when the then-existing state of knowledge could not convincingly disprove their supposed "effects" on human affairs. However, when knowledge progressed to the point that it could decisively show that witchcraft and sorcery do not exist, new magical objects immune to rational refutation had to be found. As a result, we have two kinds of magical phenomena today. One is the conspiracy theories of various kinds that live in folklore and on the internet. The other is institutional rites and beliefs - from "risk management" programs to the corporate organization itself.
The belief in "Russia meddling in the US 'democracy'" is a magic ritual that combines elements of both forms of magical thinking - nutjob conspiracism and pseudo-rational corporate ritualism. It is widely accepted because it serves a useful function - it provides a convenient and comforting explanation of a discomforting reality that is messy and difficult to explain, at least in simplistic terms. That messy reality is the sorry state of US "democracy." People find it discomforting to discover that what they learned in civics lessons does not apply to the political reality they see - so they need a convenient explanation.
Invariably in such situations, outsiders and foreigners are the first to be blamed for the "corruption" of the political order at home. In Soviet Russia and Maoist China, imperialist agents were to blame. In North Korea and the Islamic world, it is America, the Great Satan. In America, by contrast, it is Russia. Blaming Russia has an advantage for politically correct liberals, because it allows them to maintain the pretense that they are not "racist" and do not blame people of a different skin color or ethnicity. That pretty much explains the popularity of the myth of Russia meddling in US affairs.

Tuesday, October 3, 2017

Mass killing as a theatrical event

In a fashion as predictable as a bowel movement, the corporate media will offer two partisan explanations of the recent Las Vegas mass shooting - Stephen Paddock was deranged (R), or Stephen Paddock had easy access to deadly weapons (D). Both may be technically true, yet neither explains much. There are many people who are deranged and millions who have weapons, yet the great majority of them do not commit mass murder.
An alternative view, which in my view provides a better explanation, is that such events are in fact theatrical performances staged by the people who commit them. Their purpose is to communicate a message to the public in a manner that the author finds aesthetically appealing. The main difference between these massacres and, say, Hollywood "action" or "horror" movies is that the extras in Hollywood movies leave the scene alive after the shooting - both literal and figurative - is done.
This explanation of criminal acts was proposed by Jack Katz in his 1988 book "Seductions of Crime." Katz assembles impressive secondary empirical material to demonstrate that what motivates many perps is not material rewards but the sensual experience of the crime itself. "The adolescent shoplifter, for example, savors not the trinket she steals but the challenge of sneaking it past the clerk, the thrill of having outsmarted the system. With stickup men, as with members of adolescent street gangs, it is the transcendent joy of dominating an adversary. With the impassioned killer, it is to vent the rage triggered by humiliation or by a threat to some cherished moral value."
Aesthetics and the rules of drama play an important role in Katz's explanation. They draw the perp, so to speak, into the logic of action suggested by the dramatic rules of the genre, the choice of scenery, the choice of props, and the course of action. Different types of drama may appeal to different cultures and different types of individuals. The drama of bringing down an airliner full of infidels from the sky may appeal to people from cultures infused with stories of eternal battles between believers and infidels. Machine-gunning people attending a concert is more aesthetically appealing in a culture infused with images of a single individual - a rock star or a business executive - controlling the masses from a high stage, or with Hollywood images of a super-hero shooting down villains by the dozen to the applause of the audience.
The main point here is that the dramas of "martyrdom," the "super-star," or the "super-hero" are not only culture-specific but have wide aesthetic appeal to many people from a particular culture. Many people in those cultures vicariously re-enact such dramas, either in their daydreams, in theaters, or in staged re-enactment events. Few, however, go a step or two further and take the step alluded to by Franz Kafka in the short story "In the Penal Colony" - an execution by writing the sentence into the flesh of the condemned.
My guess is that the people who take that extra step and actually arrange a mass killing are those who have what is considered "leadership qualities" in a particular culture. These individuals imagine themselves as leaders or even creators and are not satisfied by mere consumption of the popular narratives of their cultures. They aspire to create narratives of their own, ones that would leave others in awe. Many such leaders and creators eventually end up in mental institutions; some succeed as super-stars of business (the Jeff Bezoses and Steve Jobses) or entertainment; but a few become mass murderers like Mr. Paddock.
This explains why an individual who has no rational reason to kill strangers in great numbers does so. His behavior is no different from what he reads about in the stories of his culture about martyrs, businessmen, and other super-stars. What separates him from the millions of others consuming the same stories is gumption and leadership - the willingness and determination to implement the dream.

What still needs an explanation is the incidence of mass shootings, that is, why they occur with greater frequency in certain times and places than in others.  Here, it is useful to consider the insights of the sociologist Mark Granovetter, who focused on the social context in which individual decisions are made.  In this line of thought, individual decisions are affected, to a significant degree, by social precedent.  If a person sees other people engage in a particular behavior, they are more likely to engage in that behavior than if nobody around them engaged in it.  This creates a positive feedback loop, also known as path dependence in institutional economics, in which each repetition of a particular type of behavior makes it more likely for that behavior to occur in the future.
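For readers who want to see the mechanics, here is a minimal sketch of a Granovetter-style threshold model. The population size and threshold values are illustrative assumptions, loosely modeled on the riot example usually associated with his 1978 "Threshold Models of Collective Behavior," not a reproduction of his analysis:

```python
def cascade(thresholds):
    """Each agent acts once the number of people already acting
    reaches that agent's personal threshold."""
    ts = sorted(thresholds)
    acting = 0
    while acting < len(ts) and ts[acting] <= acting:
        acting += 1  # the next most easily swayed agent joins in
    return acting

# 100 agents with thresholds 0, 1, 2, ..., 99: a full cascade.
print(cascade(range(100)))                    # -> 100
# Swap the single threshold-1 agent for a threshold-2 agent:
# the instigator acts alone and the cascade never starts.
print(cascade([0, 2] + list(range(2, 100))))  # -> 1
```

The point of the toy example is that tiny shifts in individual thresholds - the "social precedent" each person requires - can make the difference between no violence and an epidemic of it.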

Based on this reasoning, Malcolm Gladwell offered a compelling explanation of the increased incidence of mass school shootings in the US.  In this explanation, each school shooting lowers the threshold of resistance to such actions, thus making it more likely for another person to commit a similar act.  That is, as mass shootings start occurring, they influence more people to commit such acts - people who would not have committed them in the absence of prior acts of this nature.  This explains why such acts are more frequent in the US after the Columbine school shooting than in other countries, or, for that matter, than in the US at an earlier time.

To sum it up, social context increases the incidence of mass killings in two ways.  First, it provides cultural narratives that sanction and legitimize mass killings as a means of redressing real or imaginary grievances.  These narratives appeal to a wide range of individuals, but most of those individuals would not decide to act on them on their own.  There are powerful social norms that prohibit the killing of fellow humans, especially on a mass scale, and these norms effectively prevent most people from acting on even the most appealing killing fantasies.  But if those norms are eroded, they stop acting as a deterrent, which increases the likelihood that someone will act out these violent fantasies.  What causes this gradual norm erosion is a positive feedback loop, known as path dependence, in which an initial occurrence of a social behavior makes subsequent occurrences more likely, and as the incidence of that behavior increases, so does the likelihood that it will be repeated further.

This is an uneasy diagnosis of a serious social disease, because it recognizes the root cause of a problem that cannot be easily solved, at least in a democratic society.  Breaking the positive feedback loop that creates mass violence requires state interventions that would make civil libertarians cringe, while the prospect of these measures' success is far from certain.  Facing a serious problem that does not have an easy solution is not something people can typically cope with, cognitively or emotionally.  That is why most people prefer alternative diagnoses, ones that offer the prospect of a solution, even if an illusory one - such as better gun control or a better mental health care system.

Sunday, September 10, 2017

Bus Ride in Baltimore

Some years ago I was riding a bus through one of the most impoverished areas in Baltimore, just north of the Johns Hopkins hospital - for the most part empty shells of row houses, debris all over the street, etc. In front of one of the few still inhabited houses, someone had planted sunflowers in an empty tree-well in what used to be the sidewalk. The contrast between the bright "happy" faces of the sunflowers and the urban decay that surrounded them was truly striking. Clearly, someone had tried to make this dump a decent place to live - a valiant effort indeed, against all odds. It reminded me of the picture of a lone man standing in front of a column of tanks in Tiananmen Square that "made the news." Except that the corporate media are more interested in showing the failures of communism than those of capitalism, so there was nobody there to document this person's struggle against capitalist land use policies.
Some time later, Hopkins bought those few blocks of urban wasteland and razed them in what was supposed to be redevelopment. There was no place for the sunflowers in those plans, so they went out with the demolition debris, I guess. Then came the great recession of 2007 - a fine point of the casino-capitalism roller-coaster economy indeed - and the redevelopment plans were razed too. Today there are only empty lots waiting for a money grubber to "redevelop" this land.
What makes me sad about this situation is not that it happened. After all, this is the normal working of capitalism - the market, the business model, and the race to the bottom it perpetuates. I do not expect anything else from the psychopaths elevated to "leadership" positions. What I do regret, however, is that I did not have a camera on that bus ride so I could document what I saw. After all, a picture is worth a thousand words.

Friday, April 14, 2017

Back to the neo-feudal future

The fundamental error of Marxism is not its understanding of capitalism – this was spot on – but its understanding of history, which was borrowed from Hegel.  In this understanding, history is viewed as more or less linear progress, i.e. ‘higher’, more ‘advanced’ stages following the ‘lower’, more ‘primitive’ ones.  This conception of linearity is well embedded in Western thought, from Aristotle to Thomas Aquinas to Hegel to Marx to Teilhard de Chardin, but I am digressing.  The fundamental error in this linear conception of history is conflating changes in technology with changes in human relations: since changes in technology are more or less linear, so must be the changes in human relations.

This Western conception of history, however, is not the only one possible.  An alternative conception, embedded in Eastern thought, is cyclical.  The cyclical conception is more appropriate for understanding human relations.  Unlike technology, human relations move in cycles, or rather oscillate between two yin-and-yang-like states: the concentration and the diffusion of power.  Changes in technology often undermine concentrated power, starting the diffusion semi-cycle, but at some point they enable the concentration of power, triggering the concentration semi-cycle.

The industrial revolution undermined the power concentrated in the hands of the landed nobility and diffused it among the nascent bourgeoisie.  Marx observed the advanced waning stage of this power concentration semi-cycle and, interpreting it within the linear conception of history prevalent in Western thought, concluded that this tendency would continue on a more or less straight line leading to the total diffusion of power among the working class.

Had Marx been born Indian or Chinese, however, he might have applied a cyclical conception of history and viewed 19th-century capitalism as a power diffusion semi-cycle, to be followed by a power concentration semi-cycle.  Karl Polanyi adopted such a cyclical understanding of history, but again I am digressing.  This cyclical interpretation, in my opinion, would be more accurate than the linear one.  The nadir of power concentration occurred circa the 1960s-1970s, with the peak of the welfare state in the West and the decolonization of the Global South.  This nadir was followed by the waxing stage of power concentration, triggered by advances in information technology and the “neoliberalism” in social relations that followed.  These technological advances simply provided a tool for the elites to concentrate power in their own hands.

We still seem to be in this waxing stage of power concentration.  That is to say, we are moving back, so to speak, to an earlier stage in human history when power was even more concentrated.  Alas, “neoliberalism” is a misnomer.  We are not going back to the “liberal” stage of development, in which nascent industrialists and merchants challenged the concentrated power of the nobility.  At least not yet.  We are heading toward the “feudal” stage, in which power was concentrated in the hands of the nobility and their vassals, so the proper term is not “neoliberalism” but “neofeudalism.”
Stated differently, we are approaching a stage in which the main social division is not that between public and private, or between this or that ethnicity, but between a small group of nobility that, like their medieval predecessors, combine both public and private spheres in their own personae; a somewhat larger group of their vassals – the military, the police, the academia, and the media – who are instrumental in projecting the power of the nobility; and a vast number of peons who have little power, little wealth, and few if any rights.  This is the foreseeable future of the “new economy,” and it looks pretty much like the old dark ages.


If the cyclical conception of history is right, this zenith of concentrated oligarchic power will not last forever.  Given the fast pace of technological change, it will end sooner rather than later, triggered by some material change rendering information technology increasingly useless in maintaining concentrated power.  It could be a new technology, an environmental catastrophe, a global war, or something totally unforeseen.  But one thing is almost certain – it will get worse before it starts getting better.  We will see an even greater concentration of power before we see its diffusion.  The only relevant question, in my mind, is not whether this happens, but how fast.  Marx hoped for a radical change during his lifetime.  It is not unrealistic to hope for the same today, especially since the pace of technological change seems considerably faster today than during Marx’s lifetime.

Thursday, March 9, 2017

Cooking science with mathematics

Mathematics is to science what seasoning is to cooking. It is impossible to imagine modern cooking without spices. However, and this is an extremely important qualification, not all spices lead to good cooking. Some combinations of spices may ruin an otherwise good dish, and other combinations may even cover up the taste of a foul, putrefying, or poisonous dish.

The application of mathematics to science follows the same rule. While modern science would not exist without mathematics, some applications of mathematics either ruin what would otherwise be good science or cover up trivial claims or outright quackery.  IQ testing and much of neoclassical economics are examples of the latter. Unfortunately, the fraudulent use of mathematics has spread to other social sciences as well.  This is evidenced, among other things, by the fact that most scientists cannot replicate results reported by their peers.

There are many reasons behind this trend, but I would like to focus on two of them: (i) the sudden availability of exotic spices in the hands of chefs, and (ii) the symbiotic relationship between chefs and the restaurateurs who employ them.

A. The availability of exotic spices.

As we all know, the quest for spices was the main reason behind the early European explorations. Spices had to be imported to medieval Europe from far-away lands through trade routes fraught with many dangers.  This made them rare and very expensive. As a result, they were available only to the select few who could afford them.  However, improvements in transportation and warfare technologies made these exotic spices more and more available, which gave birth to fine European cuisine.  The next push came with globalization, which spread formerly “exotic” cuisines – Chinese, Indian, Middle Eastern, etc. – all over the world, creating the new type of cooking known as “fast food.”  Every shopping mall in the US and Europe has “ethnic” (Chinese, Japanese, Vietnamese, Mediterranean, etc.) eateries offering hastily prepared, simple, and rather crappy dishes whose tastes are virtually indistinguishable from one another.

The same process occurred in science.  Medieval science – limited for the most part to theology – used logic and sophistry as its main methods.  However, the development and application of mathematics gave birth to modern physics, astronomy, and other empirical sciences that dethroned theology.  The next push came from the development of computers, which made mathematical calculations easy and effortless.  Today, the availability of cheap computer programs for numerical data manipulation has made these tools available to anyone involved in any kind of scientific inquiry, including the disciplines called “idiographic sciences” in the continental European tradition, which are concerned with various manifestations of human behavior (history, anthropology, psychology, sociology, economics, etc.).  The distinctive characteristic of the idiographic sciences, which distinguished them from the nomothetic sciences, was their focus on understanding unique human phenomena rather than general rules of nature, which in turn called for qualitative observations rather than quantitative methods.

The availability of cheap computers and computer programs fundamentally altered not only the direction of research in the idiographic sciences, but also the kinds of data being collected. Since qualitative data are more difficult to process with computer software, their collection often takes a back seat to quantitative – or rather pseudo-quantitative – data collected by opinion surveys.  They are pseudo-quantitative because they use numerical scales representing intensity (e.g. strongly agree, somewhat agree, neither agree nor disagree, etc.), but they cannot be processed as “real” numbers.

For “real” numbers, such as 1, 2, 3, 4, etc., we can say that the difference between 1 and 2 is the same as that between 3 and 4, and that 2 is twice as big as 1 just as 4 is twice as big as 2.  However, when those numbers are used as mere symbols representing multiple choices in an opinion survey, they cease to be “real” numbers.  They can be replaced with the letters a, b, c, d, etc., or even with pictograms representing the different choices cooked up by the survey designers. The reason they are not “real” numbers but pictograms is that we cannot say that the distance between choice a and choice b (e.g. strongly agree and moderately agree) is the same as that between b and c (moderately agree and neither agree nor disagree).
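A trivial sketch makes the point concrete. The response set and both codings below are invented for illustration: any order-preserving relabeling of the categories is exactly as defensible as the conventional 1-5 coding, yet it changes the “average opinion,” while frequency counts are unaffected:

```python
from collections import Counter

# Five hypothetical survey answers, coded two different but equally
# order-preserving ways; neither coding is more "correct" than the other.
responses = ["strongly agree", "agree", "neutral", "agree", "strongly agree"]
coding_a = {"strongly agree": 5, "agree": 4, "neutral": 3}
coding_b = {"strongly agree": 9, "agree": 4, "neutral": 3}

mean_a = sum(coding_a[r] for r in responses) / len(responses)
mean_b = sum(coding_b[r] for r in responses) / len(responses)
print(mean_a, mean_b)  # 4.2 vs 5.8: the "average opinion" shifts with the labels

# Frequency counts, by contrast, survive any relabeling of the categories:
print(Counter(responses))  # Counter({'strongly agree': 2, 'agree': 2, 'neutral': 1})
```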

Research shows that subjective perceptions of quantities differ from their numerical properties.  For example, a 5-percentage-point change in probability is perceived differently depending on the overall probability of an outcome (i.e. whether it is 10%, 50%, or 90%).  When it comes to opinions and perceptions, that level of subjectivity is even higher.  For example, if I only “moderately agree” with an opinion on, say, capital punishment, it may not take much to persuade me to become an agnostic (neither agree nor disagree).  However, if I have a strong feeling (strongly agree or strongly disagree), it typically takes much more to move me in the “moderate agreement/disagreement” direction.

There are other cognitive biases distorting these measurements as well.  If I ask respondents to measure, say, the length of sticks using a measuring tape calibrated in centimeters or inches, most of the reported results will be reasonably accurate, even if some respondents are not familiar with the particular scale of measurement.  This is so because the measurement requires the simple application of a well-calibrated tool that does not change when applied to the object being measured.  That is, measuring tapes are typically made of steel and do not expand or shrink to “fit” the particular object being measured.  If the measuring tape were made of rubber, however, the measurements it produced would be rubbish, because the length of the tape would change each time it was applied to the object.

Using subjective scales of the strongly/moderately agree or disagree type is like using a rubber measuring tape.  The scale itself changes each time the measurement is applied.  If I am currently in pain, my perception of it is shaped by my current experience, so I will report my pain as severe on a rating scale.  However, if I am asked to evaluate a past pain experience on the same scale, I will report it as less severe, because humans have a tendency to forget and minimize past negative experiences.  Likewise, if I am asked to record my emotional state or my views on a policy proposal, my rating will be affected by two things – how the question is worded, and what I am told or asked to do before being asked the question.  If the previous line of inquiry evoked negative experiences, my answers will differ from those given if the previous line of inquiry evoked positive experiences (the so-called “anchoring bias”).

It is therefore clear that answers solicited using such measurement scales are nothing more than subjective opinions that will likely change if the circumstances under which the measurement is taken change.  Assigning numbers to these measurements creates the illusion that they represent numerical quantities, similar to measurements of physical objects taken with a steel measuring tape.  In reality, it is like assigning numbers to the shapes of clouds in the sky.  Transforming these shapes into numbers does not change the idiographic nature of these observations into a nomothetic science leading to general laws, because such laws simply do not exist in this particular case.  While each cloud was certainly caused by objective and measurable atmospheric conditions, its particular shape is idiosyncratic and impossible to predict from those measurable atmospheric conditions.

More conscientious researchers may refrain from treating such subjective responses like “real” numbers and limit their analysis to reporting frequency counts of broad categories of responses (e.g. how many people agreed and how many disagreed), but the availability of cheap data processing software makes such analysis look “pedestrian,” and pressure is applied to use more “advanced” techniques.  I am speaking from experience here.  Some time ago, an anonymous peer reviewer of a paper of mine that used frequency-based contingency tables showing distributions of opinions collected in a survey called this technique “pedestrian” and suggested one based on regression.  In other words, let’s treat them as “real” numbers.  This advice reminds me of the old economist joke – he could not find a can opener on an uninhabited island, so he assumed he had one.

The problem is not limited to assumptions about the quantitative properties of the data; it extends to the kind of research that gains dominance in the social sciences with the advent of cheap computational tools.  This new research paradigm favors questions that can be answered with numerical or quasi-numerical data, because such data are easy to collect and process.  Hence the proliferation of various opinion surveys.  The idiocy of this approach lies not only in the misinterpretation of numerical data but, more importantly, in the intellectual laziness it fosters.  Researchers abandon the difficult intellectual task of trying to understand how people think and under what conditions, in favor of giving them simplistic multiple-choice tests involving prefabricated opinion statements, because such tests are easy to score and process by computer.  If this is not the proverbial drunkard’s search, I do not know what is.

B. The symbiotic relationship between chefs and restaurateurs

Imagine, if you will, a renowned chef coming to the restaurateur who employs her and saying, “I tested several recipes, but they do not meet the high standards of this fine dining establishment.”  “So what are we going to serve our guests tonight?” asks the restaurateur.  “I do not have anything ready yet,” answers the chef, “but with more testing I should be able to come up with a good recipe in a week or two.”  It is likely that this chef would be looking for another job after this conversation.

The same applies to scientific knowledge.  As Bruno Latour (“Science in Action”) aptly observed, the production of science differs from the presentation of scientific facts after they have been produced, that is, accepted as facts by the scientific community.  Whereas the presentation of agreed-upon facts is subject to only one requirement – truth (i.e. concordance with reality) – the production of science is a much messier process.  For one thing, it involves a search in the dark, before “the truth” has been discovered.  All that is known during this process is a set of various, often conflicting claims posed by people representing different schools of thought as to what they believe is factually true.  Testing these claims requires enormous resources: teams of researchers and support personnel, laboratories, expensive instruments, and channels of communication with the scientific community.  The procurement of these enormous resources requires the involvement of a much larger number of people than the scientists who do the actual research.  It requires institutions – universities and research laboratories – run by an army of administrative staff.  It involves designers and manufacturers of specialized equipment without which research would not be possible.  It also requires funding, which in turn is procured by linking the prospective results of scientific inquiries to the interests of the people who control financial resources, such as government officials, corporate bosses, or investors.

All those people – the administrators, the government officials, the corporate bosses, and the investors – want to see the results of their efforts and investments.  In that sense, they act like the restaurateur in our story – they expect their chefs to produce meals of a certain quality, and they will not settle for a chef who tells them that she tested several recipes and found none of them satisfactory.  Yet when we look at science as a set of agreed-upon facts, all of that messy production process disappears from view, and we are served the equivalent of a meal on a silver platter that needs to pass only our taste test.

This is why the actual production of science, like cooking, is very different from tasting the already prepared product.  In an idealized world, chefs and scientists look for recipes high and low, test them, and select only those that pass the rigorous test – the ultimate consistency between the object (a dish or a natural phenomenon) and the human perception of it (excellent flavor or truth).  This is how scientific research appears to Karl Popper – as a long series of attempts to falsify different scientific hypotheses to see which ones withstand the test.  This may be a logical way to proceed if one’s goal is to find THE ultimate truth in some unspecified but likely long period of time, but it is a waste of effort if one’s goal is to find a truth that is good enough to pass some rudimentary plausibility test and satisfy the stakeholders in the scientific production process – the administrators, the government bureaucrats, the funders, the investors and, last but not least, the researchers themselves, whose academic careers and salaries depend on producing tangible results.  Falsifying hypotheses may produce results in the long run, but the stakeholders, like our restaurateur, need to put food on the table now, not in some distant future.  They will not settle for the response “we have eliminated some hypotheses, but we have nothing positive yet.”  People providing such responses would soon be looking for other jobs, whether they are chefs or scientists.

Here is where the ubiquity of computers and statistical software comes in handy – they can produce minimally satisfactory results in a rather short time and with relatively little effort.  They can do so for two reasons.  First, their use requires numbers, which in turn leads to the substitution of qualitative phenomena with something that looks like numbers but really is not.  I covered this process in section A of this essay.  Second, it substitutes statistical correlations for causal models, and statistical significance tests for inductive methods of reasoning.

We all learn in Research Methods 101 that correlation is not causation, and there is no need to repeat this rather hackneyed truth further.  Suffice it to say that a correlation may – but does not have to – imply a causal connection, so finding a correlation is a useful first step in an inquiry into what causes the phenomenon we want to explain.  Unfortunately, for many research projects, especially in the social and behavioral sciences, this is the only step.  These researchers often shout Eureka as soon as they discover a meaningful correlation.  And how do they know whether the correlation is meaningful?  By looking at the result of the statistical significance test, the p-value, which by convention has to be lower than 0.05 (or 5%) to be called “significant.”  However, yelling Eureka after finding a “significant” statistical correlation is like coming to a five-star restaurant and being served macaroni and cheese out of a can.  It is barely edible, all right, but hardly worth the five-star restaurant price.  Here is why.

The p-value simply means this: if the two variables were actually uncorrelated in the population, and we were to repeatedly draw representative samples of the same size as the one at hand, a correlation as large as the one we are looking at would show up in only 5 (or fewer, depending on the actual p-value) of 100 such trials.  In other words, there is only a 5% (or smaller) chance of seeing a correlation this large purely by the luck of the sampling draw.  This is all there is to it.  Does this mean that there is a cause-effect relation behind this correlation?

The answer is that we really have no way of knowing with only this information at hand.  Imagine the following situation.  I want to know whether it rained at night while I was asleep, so I look at the grass in my front yard.  If it is dry, I can safely conclude that it did not rain.  But what if it is wet?  Can I conclude that it rained?  I can, but I could be dead wrong, because rain is not the only thing that makes grass wet.  It could have been my automatic sprinkler, or it could have been mist on a chilly night.  The same reasoning applies to statistical significance testing.  What we are really testing is whether the observed correlation is an artifact of the statistical procedure – the random selection of samples whose composition varies slightly due to chance.  The p-value tells us the probability of observing such a correlation if the null hypothesis (i.e. the hypothesis of no correlation) were true.  If that probability is less than 5%, we conclude, by convention, that the null hypothesis is false.  The grass is dry, therefore it did not rain that night.  But rejecting the null hypothesis is not a sufficient reason to claim that the variables in question are genuinely related, just as wet grass is not a sufficient reason to conclude that it rained.
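The sampling story can be made concrete with a short simulation.  The sample size, number of trials, and the “observed” correlation of 0.36 are all hypothetical; 0.36 is roughly the conventional p = .05 cutoff for a sample of 30:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)
n, trials, observed_r = 30, 10_000, 0.36  # hypothetical numbers for illustration
hits = 0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]  # truly unrelated variables
    if abs(statistics.correlation(xs, ys)) >= observed_r:
        hits += 1

# Fraction of "dry grass" worlds that still produce wet-looking grass:
print(hits / trials)  # comes out near 0.05
```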

What is more, statistical correlations change depending on the presence of other variables in our model.  Suppose, for example, that when I look at the prices of cars in different countries, I find that in the US cars cost on average $25k or more, while in Global South countries cars cost in the vicinity of $15k.  I also discover that there are far more cars in the US than in the Global South.  If I entered these findings into a computer, I would learn that there is a positive correlation between the price of cars and the number of cars sold, which would make most economists scratch their heads, as this contradicts everything they know about prices and the size of demand.  These economists would tell me that I need to consider other factors that affect the prices of and demand for cars, such as earnings, the cost of other goods, the availability of roads, etc.  If, following the economists’ advice, I enter these factors into my model, the value of the initially observed correlation will almost certainly change, and it may even become negative, as the economists expected.  However, if I believe that my initial findings were correct and that the correlation between prices and the quantity of cars is actually positive due to the monopolization of transportation by car companies, I will measure the availability of alternative means of transportation and enter it into my model in the hope that the relationship between price and the number of cars sold moves back into positive territory.
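This sign flip is easy to reproduce.  The sketch below uses made-up numbers loosely modeled on the car example: within each region, higher prices go with fewer cars sold, but pooling the rich and poor regions yields a strongly positive price-quantity correlation (a Simpson’s-paradox-style reversal):

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)
data = []
# (mean price in $k, mean yearly sales) for a US-like and a Global-South-like region
for base_price, base_qty in [(25, 800), (15, 200)]:
    for _ in range(50):
        price = base_price + random.uniform(-3, 3)
        # within a region, a $1k price increase costs about 10 sales
        qty = base_qty - 10 * (price - base_price) + random.uniform(-20, 20)
        data.append((price, qty))

prices, qtys = zip(*data)
print(round(statistics.correlation(prices, qtys), 2))  # pooled: strongly positive

for region in (data[:50], data[50:]):
    p, q = zip(*region)
    print(round(statistics.correlation(p, q), 2))      # within each region: negative
```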

If this looks like a wild goose chase, there is certainly some truth in it.  Most statistical models are highly dependent on the analytic procedures used to construct them.  These include the nature and quality of the data; the data cleaning and preparation procedures; the type and number of variables in the model; and, in more complex models such as factor analysis, researcher-supplied assumptions about the nature of the expected outcome that are necessary for the computerized analysis to run.  As Paul Ormerod (“The Death of Economics”) argued, complex econometric models can produce a virtually infinite number of solutions, depending on the more or less arbitrary assumptions that analysts build into them.  A similar conclusion was reached by Stephen Jay Gould (“The Mismeasure of Man”) in regard to complex psychometric models.

So if these complex statistical tools indeed put researchers on a wild goose chase, why is there such strong pressure in the research community to use them?  The answer is that, unlike in the wild, where the goose typically gets away after leading her pursuer sufficiently far astray, in the academic research world the pursuers almost always get something from the chase.  It may not be the goose they were after, but a few feathers, an insect or two, or a mouse if they are lucky.  In other words, they will find the statistically significant correlations that computers invariably produce if run long enough, which they can then claim as their big catch in research papers and use as bait to attract further research grants and support.  There is a tangible product of the chase after all, which makes everyone happy: the chef, the restaurateur and his guests, the researchers, the university administrators, the government bureaucrats, and the funders and investors.
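The phrase “if run long enough” deserves emphasis.  A short sketch (all numbers invented) shows how a modest pile of completely random survey variables reliably yields “significant” correlations, simply because many pairs are tested:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(42)
n, k = 30, 40  # 30 "respondents", 40 mutually unrelated variables
variables = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

critical_r = 0.361  # approximate |r| cutoff for p < .05 at n = 30
catches = []
for i in range(k):
    for j in range(i + 1, k):
        r = statistics.correlation(variables[i], variables[j])
        if abs(r) >= critical_r:
            catches.append((i, j, round(r, 2)))

total_pairs = k * (k - 1) // 2
print(f"{len(catches)} of {total_pairs} pairs look 'significant' by pure chance")
# Expect roughly 5% of 780 pairs, i.e. a few dozen publishable "findings".
```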

None of this would be possible without computer tools capable of cranking out correlations in a matter of seconds from the garbage that passes for quantitative data.  This has transformed science away from what Popper claimed it to be – the falsification of scientific hypotheses.  Indeed, there is little economic value in such a pursuit, just as there is little value in a chef deconstructing the food served by other restaurants.  What pays off is spending most of one’s energy on cranking out food for thought that the administrators and paying clients will buy.  It is little wonder that so few of these results are replicable, since their construction involves a fair amount of chance and data preparation.


We have achieved a lofty goal.  As the proliferation of computers and quantitative analytic tools reaches unprecedented proportions, social science research resembles, more and more closely, an elaborate chase after a wild goose in a thicket of numbers that pass for data.  On some great and glorious day, social scientists will reach the object of their physics envy and turn their discipline into what theology was in the Middle Ages: an impressive, logically coherent intellectual edifice whose empirical relevance and predictive power are on a par with those of a chimp randomly pushing computer buttons.

Friday, January 27, 2017

Everyone for oneself

Jeder für sich und Gott gegen alle – Everyone for Oneself and God Against All – is the title of a Werner Herzog film based on the true story of a man raised in total isolation from any human contact, whom the citizens of a German town tried to integrate into their community.  It may as well be a metaphoric expression of the capitalist rationality destroying what the German sociologists Ferdinand Tönnies and Max Weber called Gemeinschaft – a human community based on feelings of togetherness and on mutual bonds – and replacing it with Gesellschaft – a group sustained by the pursuit of its members’ individual interests and goals.  The loss of Gemeinschaft is – I believe – the main reason behind the recent growth of populist nationalism, which has left liberal pundits flabbergasted and grasping for answers.  In fact, this change from Gemeinschaft to Gesellschaft has gone largely unnoticed by English-speaking pundits, as the English language does not even have words for these two different kinds of human societies.

As an Eastern European immigrant to the United States, I had an opportunity to experience this change directly.  The key element of Eastern European socialism – routinely missed by its detractors and supporters alike – was the marriage of modernization and internationalism with Gemeinschaft.  The official state ideology, or religion as some would argue, was “socialist” economic development.  Like any other economic development, the “socialist” variety aimed to bring industrialization, modern technology, and a rationally managed economy to the nation, and to integrate the nation into a broader network of international cooperation.  But unlike its “capitalist” counterpart, Eastern European socialism achieved this modernization and internationalization without the “capitalist” transition from Gemeinschaft to Gesellschaft.  Despite the massive migration from the countryside to the cities triggered by industrialization, urban communities maintained the essential features of rural communities based on physical proximity, feelings of togetherness, and mutual bonds.  This type of community was, in fact, carefully cultivated by the socialist state, from the state promotion of folk culture to the official declarations of solidarity at both the local and international levels.

However, one of the key elements in the preservation of Gemeinschaft in Eastern Europe was the physical organization of living space, specifically physical proximity.  I lived in a fairly large city, yet I did not have to leave my neighborhood to obtain life necessities, from food shopping to education.  I started walking to school by myself in the first grade.  In the third grade I was sent on shopping errands to buy food in the neighborhood stores.  In the fourth grade I started exploring more remote parts of the city – all by myself – because I could take advantage of good public transportation.  In the seventh grade I ran away from home for the first time – again taking advantage of the availability of public transit.

My childhood ability to move around the city was facilitated not only by physical proximity and public transit but also by social bonds and feelings of togetherness.  It was taken for granted that I, or for that matter any child, would be helped by strangers if needed, and certainly not taken advantage of.  Neighbors watching other neighbors’ kids playing in the community yards was the norm.  When kids misbehaved in school or even engaged in petty crime, teachers would rather work with the parents to address the issue than notify the authorities.

This was not limited to children.  As a college student, I would often hop on a train on the spur of the moment, without any money, and go to another town, counting on the fact that others would help me.  In one instance, my girlfriend and I traveled some 750 kilometers to Dresden in Germany with only enough money to buy a one-way ticket.  While in Dresden, we managed to get completely plastered in a local pub thanks to the hospitality of the local residents, and then some good Samaritan gave us a ride back to the border town – some 120 kilometers, which is “far away” in Europe – from where we hitchhiked the remaining 600 kilometers home.

All that ended when I immigrated to the United States at the age of 27.  When I first landed, fresh off the boat, in Grand Rapids, MI, my first thought was that we had been shipped off to some underdeveloped hinterland.  Not only was the center of the city hollowed out by deindustrialization – something almost unimaginable in Europe – but its sheer sprawl made it almost impossible to move around, at least without a car.  The automobile was a mixed blessing: while it allowed me to move around the city more efficiently, it also isolated me from other people.  Unlike in Eastern Europe, where travel was almost invariably an opportunity for socializing, people in the United States travel alone.

I soon discovered that Grand Rapids was the norm rather than the exception in the United States.  The cities had been hollowed out and ravaged by deindustrialization long before President Clinton signed NAFTA.  The situation was even worse in small towns.  In my travels through central Pennsylvania I saw many such towns transformed into literally dying communities – all but abandoned by young people looking for economic opportunities elsewhere and inhabited mostly by old people who did not have enough energy and resources to leave.  The ultimate case is Centralia, PA, totally abandoned and razed to the ground after an uncontrollable underground fire broke out in its coal deposits.


Gemeinschaft died in the United States before I got off the boat here – or rather, it was killed by capitalist economic development.  It is not the loss of economic opportunity alone that matters, but the loss of community based on close proximity, feelings of togetherness, and mutual bonds.  This process continued in the 1990s in large cities, where local, typically ethnic, communities were displaced by gentrification.  In fact, gentrification is almost a perfect example of Gemeinschaft giving way to Gesellschaft – townhomes razed to make room for luxury condos, neighborhood stores displaced by national chains, local residents displaced by yuppies ready to move out as soon as a new economic opportunity emerges elsewhere.

The recent influx of immigrants to the United States and Europe did not cause this slow death of Gemeinschaft.  In fact, immigrants likely injected some fresh lifeblood into the dying Gemeinschaft, as they brought a sense of Gemeinschaft from their old countries.  However, the influx of immigrants provided a lightning rod for the popular anger resulting from the prior loss of Gemeinschaft caused by capitalist development.  It is much easier to blame a visible scapegoat than invisible systemic forces for one’s misfortunes.  It is easier still to blame scapegoats instead of systemic forces when the ruling oligarchy and the media it controls encourage scapegoating.

This loss of Gemeinschaft and the feeling of despair it produces were totally missed by the liberal and leftist pundits and commentators.  The former tried to explain the mood by anti-immigrant bigotry, the latter by the lack of economic opportunity.  Both missed it by a mile – although it can be said in their defense that it is easy to miss what you cannot even name in your native language.  In a sense, they acted like the good citizens in Werner Herzog’s film, who tried to transfer the rational elements of their culture to the “feral” man, yet failed to instill the sense of human bond and community of which he was deprived in his childhood.  The nationalist right, on the other hand, sensed the mood and its root causes correctly, and managed to channel that mood into its own nationalistic narratives and ideologies.

It is difficult to blame people for going along with these nationalistic narratives.  If you are dying of a disease that the doctors cannot even properly diagnose, you will listen to anyone who offers you a treatment, even if that treatment is nothing but snake oil.  Calling these people bigots and deplorables is totally counterproductive, as it will push them even further into the embrace of charlatans.

If the Left is to offer a viable alternative to nationalism, it must return to its roots of socialist development that marries the economy and Gemeinschaft.  While the Left typically talks about economic plight and the need to reorganize the economy, it does not have much to say about restoring the Gemeinschaft ravaged by capitalist development.  Yet it is the promise of Gemeinschaft – even if a fake one – that draws people to nationalism.  The Left must simply offer a better alternative in this respect to become relevant again.  Multi-culti hipster colonies in gentrified urban spaces will not do, because they are Gesellschaft – groups held together by the individual pursuit of self-interest – not Gemeinschaft.


I am not suggesting a return to a romanticized version of what Karl Marx called the idiocy of rural life, or to some romanticized version of the kibbutz.  In fact, I do not miss the provincialism of Eastern European life that motivated me to emigrate to the United States in the first place.  What I have in mind is an international (“global” is a dirty word today) network of organic communities rooted in specific geographic locations – organized for human interaction rather than automobile movement, built around close geographic proximity and plenty of public spaces, and composed of people bound together by common interests, activities, and pursuits, yet at the same time tied to other similar communities in the region, the nation, and internationally.  Communities that make the people who live in them feel safe and unafraid to interact with strangers.  Communities that offer refuge from threats and danger without closing up and “circling the wagons.”  The promise of such Gemeinschaft communities – which capitalism is simply unable to deliver – must feature prominently in the Left’s message, even if the exact blueprints for them are yet to be drawn.  The alternative is a descent into nationalism and eventually fascism.