Thomas Sowell - Social Justice Fallacies - Basic Books (2023)
Other Books by Thomas Sowell
Basic Economics
A Conflict of Visions
Copyright © 2023 by Thomas Sowell
Book cover design copyrighted by Thomas Sowell
Hachette Book Group supports the right to free expression and the value of
copyright. The purpose of copyright is to encourage writers and artists to
produce the creative works that enrich our culture.
Basic Books
Hachette Book Group
1290 Avenue of the Americas, New York, NY 10104
www.basicbooks.com
The publisher is not responsible for websites (or their content) that are not
owned by the publisher.
Library of Congress Control Number: 2023941574
“You’re entitled to your own opinion, but
you’re not entitled to your own facts.”
Daniel Patrick Moynihan1
Chapter 1
RECIPROCAL INEQUALITIES
While group equalities in the same endeavors are by no means common,
what is common are reciprocal inequalities among groups in different
endeavors. The equality among different groups of human beings—
presupposed by those who regard disparities in outcomes as evidence or
proof of discriminatory bias— might well be true as regards innate
potentialities. But people are not hired or paid for their innate potentialities.
They are hired, paid, admitted to colleges or accepted into other desired
positions on the basis of their developed capabilities relevant to the
particular endeavor. In these terms, reciprocal inequalities might suggest
equal potentialities, without providing any basis for expecting equal
outcomes.
Even groups lagging in many kinds of achievement tend nevertheless to
have some particular endeavors where they do not merely hold their own
but excel. Groups with weak educational backgrounds, for example, may lag
in the many endeavors for which such a background is essential, and yet
such generally lagging groups have often excelled in other endeavors where
personal talent and dedication are key factors. Sports and
entertainment have long been among such endeavors with high
achievements for such American groups rising out of poverty as the Irish,
blacks and Southern whites.23
While group equality— in either incomes or capabilities— is hard to
find, it is also hard to find any ethnic or other large social group that has no
endeavor in which it is above average.
Reciprocal inequalities abound— even when equality does not. As we
have seen, different ethnic groups dominate different American sports. One
consequence of this is that the degree of inequality of group representation
in American sports as a whole is not as severe as in each individual sport. A
similar principle applies, for similar reasons, in other endeavors, because of
reciprocal inequalities.
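To make the aggregation point concrete, here is a minimal sketch in Python with invented numbers (the groups, sports, and counts are hypothetical, not taken from the book's sources). When each group dominates a different sport, representation within each individual sport is very uneven, yet the two groups come out even in the two sports combined:

# Hypothetical illustration: reciprocal inequalities in individual endeavors
# can largely offset one another in the aggregate.
rosters = {
    "sport_a": {"group_1": 80, "group_2": 20},  # group 1 dominates sport A
    "sport_b": {"group_1": 20, "group_2": 80},  # group 2 dominates sport B
}

def shares(counts):
    """Return each group's share of the players counted in `counts`."""
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

for sport, counts in rosters.items():
    print(sport, shares(counts))        # each sport taken alone: an 80/20 split

combined = {}
for counts in rosters.values():
    for group, n in counts.items():
        combined[group] = combined.get(group, 0) + n
print("all sports", shares(combined))   # both sports together: a 50/50 split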
If one looks at wealthy, historic individuals in commerce and industry,
for example, one could find Jews far more widely represented among
historic leaders in retailing, finance and garment production and sales than
in the steel industry, automobile production or coal mining. In the
professions as well, groups that have similar representation in the
professions as a whole can have very different representations in particular
professions, such as engineering, medicine or the law. Asian American
professionals are not necessarily concentrated in the same professions as
Irish American professionals.
Because of reciprocal inequalities, the more narrowly defined the
endeavor, the less likely are different groups to be comparably represented.
Yet crusaders for social justice often decry uneven representation of groups
in an individual company, as evidence or proof of employer discrimination
in that particular company.
When different peoples evolve differently in very different settings and
conditions, they can develop different talents that create reciprocal
inequalities of achievements in a wide range of endeavors, without
necessarily creating equality, or even comparability, in any of those
endeavors. Such reciprocal inequalities lend no support to theories of either
genetic determinism or discriminatory biases as automatic explanations of
inequalities.
Many assumptions and phrases in the social justice literature are
repeated endlessly, without any empirical test. When women are
statistically “under-represented” in Silicon Valley, for example, some
people automatically assume that to be due to sex discrimination by Silicon
Valley employers. It so happens that the work done in Silicon Valley is
based on an application of engineering skills, including computer software
engineering— and American women receive less than 30 percent of the
degrees in engineering, whether at the college level or the postgraduate
level.24
When American men receive less than 20 percent of the undergraduate
degrees in education, and only 22 percent and 32 percent of master’s
degrees and doctoral degrees, respectively, in the same subject,25 is it
surprising that men are under-represented among school teachers and
women are under-represented in engineering occupations?
Comparing the statistical representation of women and men in either of
these occupations is like comparing apples and oranges, when their
educational specializations are so different. These educational specialization
decisions were usually made individually, years before either the women or
the men reached an employer to begin a professional career.
A more general question arises when the incomes of women as a whole
are compared to the incomes of men as a whole. This leaves out many
specific differences in the life patterns of women and men.26 One of the
most basic of these differences is that women are full-time, year-round
workers significantly less often than men. U.S. Census Bureau data show
that, in 2019, there were 15 million more male, full-time, year-round
workers than female, full-time, year-round workers.27 The work patterns of
women include more part-time work, and some whole years when many
women are out of the labor force entirely, often due to staying home to take
care of young children.28
When these and other differences in work patterns are taken into
account, male-female differences in income shrink drastically, and in some
cases reverse.29 As far back as 1971, single women in their thirties who had
worked continuously since leaving school were earning slightly more than
men of the same description.30
When there are statistical differences in the representation of various
ethnic groups, different patterns within these groups themselves are
likewise often overlooked. A typical example of equating differences in
demographic representation with employer discrimination was a headline in
a San Francisco newspaper:31
ORIGINS OF INEQUALITIES
The question whether different social groups have equal or unequal
capabilities in various endeavors is very different from the question whether
racial or sexual differences create inherently different mental potential
determined by genes. The genetic determinism assumption that reigned
supreme among American intellectuals of the Progressive era in the early
twentieth century is an irrelevant issue in this context, though it will be
dealt with in Chapter 2, and has been dealt with more extensively
elsewhere.39
If we assume, for the sake of argument, that every social group— or
even every individual— has equal mental potential at the moment of
conception, that would still not be enough to guarantee even equal “native
intelligence” at birth, much less equally developed capabilities after
growing up in unequal circumstances and/or being culturally oriented
toward different goals in different fields.
Episodic Factors
In addition to on-going differences among peoples, there have also been
unpredictable episodic events— such as wars, famines, and epidemics—
that can disrupt the development path of particular peoples. The outcomes
of military conflicts can be a matter of chances that are incalculable— and
yet able to determine the fate of whole societies or nations for subsequent
generations or centuries.
Had Napoleon won the battle of Waterloo, instead of his enemy the
Duke of Wellington, the history of peoples and nations across the continent
of Europe could have been very different. Wellington himself said
afterwards that the outcome of that battle was “the nearest run thing you
ever saw in your life.”88 It could have gone either way. Had the earlier
battle against invading Islamic forces at Tours in 732 or at the siege of
Vienna in 1529 gone the other way, Europe would be culturally a very
different place today.
As things turned out, Europe has been far from being a culturally,
economically or otherwise homogeneous civilization, with its peoples
having the same quantity and kind of human capital across the continent.
Instead, the languages of Western Europe acquired written versions
centuries before the languages of Eastern Europe.89 This had major
implications for the education of the peoples in these two regions, who had
little chance to be equal in endeavors requiring the kinds of knowledge and
skills taught from books in schools and colleges.
This was not simply an inequality confined to the past, for the evolution
to the present began from very different pasts in different places and times.
Eastern Europe has been poorer and less industrially developed than
Western Europe for centuries,90 and the homicide rate in Eastern Europe
has been some multiple of the homicide rate in Western Europe for
centuries.91
Nor was this east-west divide the only source of national inequalities
within Europe. At the beginning of the twentieth century, “when only 3
percent of the population of Great Britain was illiterate, the figure for Italy
was 48 percent, for Spain 56 percent, for Portugal 78 percent.”92 There
were similar disparities in 1900 within the Habsburg Empire, where the rate
of illiteracy ranged from 3 percent in Bohemia to 73 percent in Dalmatia.93
Massive scholarly studies have found great differences in both
technological development and in the number of leading figures in the arts
and sciences in different parts of Europe.94
It was much the same story in Africa, where in 1957 only 11 percent of
the children attending secondary school in Nigeria were from the northern
part of the country, where a majority of the population lived.95 Someone
born in northern Nigeria had nowhere near the same chances as someone
born in southern Nigeria— a fact reflected in the different economic
success of tribes from these different regions of the country.96
Both in Europe and in Nigeria, different circumstances led different
groups to different levels of literacy and of school attendance. In
Europe— in centuries past, when people were far poorer— some groups
working in agriculture had little need for literacy, but often had great need
for the work of children, in order to keep families adequately fed. In such
circumstances, children’s education was often sacrificed, depriving them of
even second-hand knowledge of a wider world.
In other parts of the world as well, innumerable factors influenced the
development of innumerable peoples. It would be an incredible coincidence
if all these factors affected all these peoples the same way during the many
thousands of years in the past. What is also very unlikely, over vast
expanses of time, is that the very same peoples would have been the highest
achievers throughout many thousands of years. Just within a fraction of
those millennia for which there has been recorded history, the peoples who
have been in the forefront of human achievements have changed
dramatically.
For centuries, China was far more technologically advanced than any
European nation— having cast iron a thousand years before the
Europeans.97 The Chinese also had mechanical printing on paper, during
centuries when Europeans were still writing by hand on costlier materials.98
Educating most Europeans with costly individual manuscripts, rather than
mass-produced books, was not an economically viable prospect. Only after
Europeans developed mechanical printing themselves was it feasible for
them to educate more than a small fraction of their populations. And only
after all the languages of different European peoples developed written
versions was an equal education, and the development of equal human
capital, even theoretically possible.
Differences in human capital— including honesty and languages, as well
as occupational skills and industrial and commercial talents— have been
common between nations and within nations. There was no way that people
on the short end of these circumstantial disparities had “equal chances” of
developing their capabilities, even in a society with equal opportunity, in
the sense of open competition for all, and equal standards applied to all.
We might agree that “equal chances for all” would be desirable. But that
in no way guarantees that we have either the knowledge or the power
required to make that goal attainable, without ruinous sacrifices of other
desirable goals, ranging from freedom to survival.
Do we want the mixture of students who are going to be trained to do
advanced medical research to be representative of the demographic make-
up of the population as a whole— or do we want whatever students, from
whatever background, who have track records demonstrating a mastery of
medical science that gives them the highest probability of finding cures for
cancer, Alzheimer’s and other devastating diseases? Endeavors have
purposes. Is indulging ideological visions more important than ending
cancer and Alzheimer’s?
Do you want airlines to have pilots chosen for demographic
representation of various groups, or would you prefer to fly on planes
whose pilots were chosen for their mastery of all the complex things that
increase your chances of arriving safely at your destination? Once we
recognize the many factors that can create different developed capabilities,
“equal chances for all” becomes very different in its consequences from
“equal opportunity.” And consequences matter— or should matter— more
than some attractive or fashionable theory.
More fundamentally, do we want a society in which some babies are
born into the world as heirs of pre-packaged grievances against other babies
born the same day— blighting both their lives— or do we want to at least
leave them the option to work things out better in their lives than we have in
ours?
Chapter 2
RACIAL FALLACIES
GENETIC DETERMINISM
In the early decades of the twentieth century, when Progressivism was a
major new force among American intellectuals and in politics, one of
Progressivism’s central tenets was genetic determinism— the belief that
less successful races were genetically inferior.
Later, in the closing decades of the twentieth century, Progressives with
similar views on such other issues as the role of government, environmental
protection and legal philosophy, now took an opposite view on racial issues.
Less successful races were now seen as being automatically victims of
racism, as they had once been considered automatically inferior. The
conclusions were different, but the way evidence was used, and the way
contrary views and contrary evidence were disregarded, were very similar.
Both sets of Progressives expressed utter certainty in their conclusions—
on this and other subjects— and dismissed critics as uninformed at best, and
confused or dishonest at worst.50
While Progressivism was an American movement, similar views and
attitudes existed under other names on the other side of the Atlantic. There
too, the prevailing views on race were opposite at the beginning of the
twentieth century from what they became at the end of that century, and on
into our own times.
Early Progressivism
Genetic determinism did not begin with the Progressives. In earlier
times, many people considered themselves born inherently superior to other
people, without requiring either the reality or the pretense of scientific
evidence.
Some considered themselves superior as a class or a race, or because of
royal blood, or whatever. In Britain, Sir Francis Galton (1822–1911) wrote
a book titled Hereditary Genius, arguing from the fact that many outstanding
achievements were concentrated in particular families that ability was
inherited. That conclusion might have carried more weight as evidence if
other families had had comparable opportunities, but such a requirement could
hardly have been met then, and it is not certain how often it can be met now.
A major piece of empirical evidence became available when soldiers in
the U.S. Army were given mental tests during the First World War. Mental
test scores from a sample of more than 100,000 of these tests showed that
black soldiers as a whole scored lower than white soldiers as a whole on
those tests. That was treated as irrefutable evidence that genetic
determinism was a proven fact.51 But an internal breakdown of the mental
test score data showed that black soldiers from Ohio, Illinois, New York
and Pennsylvania scored higher on the Army mental tests than white
soldiers from Georgia, Arkansas, Kentucky and Mississippi.52
If the reason for the over-all test score differences between the races
were genetic, these regional differences would be hard to explain, since
people’s genes do not change when they cross a state line. But some states
do have better schools than others.
Even a moderately well-informed person in that era could hardly avoid
knowing that other things were not equal between the races in the South, as
Southern politicians of that era loudly proclaimed their determination to
keep things unequal. This went beyond an unwillingness to spend equally
on black and white schools. As far back as the end of the Civil War, when
thousands of white volunteers from the North went into the South to teach
the children of newly freed slaves, these teachers— mostly young women—
were not only ostracized by Southern whites, but were even harassed and
threatened.53
This was an era when many Southern whites did not want blacks to be
educated, and the education policies of Southern state governments
reflected that.54 When wealthy white philanthropists such as John D.
Rockefeller, Andrew Carnegie and Julius Rosenwald sent money to help
create schools for black children in the South,55 the state of Georgia passed
a law taxing donations to schools made by people of a race different from
that of the students in those schools.56
The most fundamental problem with the conclusions reached by the
genetic determinists of that era— and the opposite conclusions reached by
Progressives of a later era— was in the way they used empirical evidence.
Progressives in each era began with a preconception, and ended their
examination of evidence when they found data which seemed to fit their
preconception. Such a procedure may be enough to supply talking points.
But, if the goal is to find the truth, the search must continue, in order to see
if there are other data that conflict with the initial belief.
People with opposing views are often eager to supply opposing
evidence, so the difficulty is not in finding such evidence. The difficulty is
in whether such evidence will be examined. For example, were there other
groups of whites— besides soldiers from certain states during the First
World War— who scored as low on mental tests as blacks, or lower than
blacks, in the twentieth century? It turns out that there were. These would
include whites living in some American mountain and foothill
communities.57
There have also been white people living in the Hebrides islands off
Scotland,58 and white people living in canal boat communities in Britain,
with IQ test scores similar to those of black Americans.59 What these
particular whites have all had in common was isolation, whether
geographic isolation or social isolation. Such social isolation from the larger
society has also long been common among black Americans.
Although blacks in the U.S. Army during the First World War scored
marginally lower than members of various recently arrived European
immigrant groups, other blacks, living in Northern communities, often
scored as high as, or marginally higher than, these same immigrant groups
on mental tests. These immigrants included Italian American children in
a 1923 survey of IQs.60 Similar results were found in a 1926 survey of IQ
results for Slovaks, Greeks, Spaniards and Portuguese in the United
States.61 During this era, most European immigrants settled outside the
South, and blacks outside the South had higher average IQs than blacks
living in the South.62
Whites living in isolated mountain and foothill communities are an
especially striking group, as regards poverty and isolation from both the
outside world and from similar communities in the same mountains and
foothills. We have seen how strikingly lower the incomes of such people
have been in Appalachian counties in the twenty-first century.63 Back in
1929, the IQs of children in Blue Ridge Mountain areas were studied, and
can be compared to the IQs of blacks, which averaged 85 nationally. The
average IQs of these white children in Blue Ridge Mountain communities
ranged from a high of 83.9 to a low of 61.2, varying with which particular
IQ test was used.64
White children in East Tennessee mountain schools in 1930 had an
average IQ of 82.4. As with black children with similar IQs, these white
mountain children had higher IQs when young— 94.68 at age six, declining
to 73.50 at age sixteen.65 A decade later, in 1940, after many improvements
in both the local environment and in the schools, children in the same
communities— and apparently from many of the same families66— had an
average IQ of 92.22. Now their average IQ at age six was 102.56, and this
declined to 80.00 at age sixteen.67
Clearly, these lower than average IQs were not due to race, but— before
1940— they were at least as far below the national average IQ of 100 as
were the IQs of black children. These results seem consistent with what
geographer Ellen Churchill Semple said, back in 1911, that human
advancement “slackens its pace” in the foothills and “comes to a halt” in the
mountains.68 Other studies of life in isolated mountain and foothill
communities around the world show similar patterns of both poverty and
lagging human development.69
Later years would bring additional evidence incompatible with genetic
determinism. A 1976 study showed that black orphans raised by white
families had significantly higher average IQs than other black children, and
IQs slightly above the national average.70 It so happens that one of the first
notable black scientists— George Washington Carver, in the early twentieth
century— was an orphan raised by a white family.71
Genetic determinism in the early twentieth century was by no means
simply an issue about black and white Americans. The belief that blacks
were genetically inferior was already so widely accepted that most of the
genetic determinism literature of that era focused on arguing that people
from Eastern Europe and Southern Europe were genetically inferior to
people from Western Europe and Northern Europe. This was a major issue
in that era, because large-scale emigration from Europe had changed in its
origins from predominantly Western Europe and Northern Europe in earlier
times to predominantly Eastern Europe and Southern Europe, beginning in
the last two decades of the nineteenth century.
Among the massive new wave of immigrants were Eastern European
Jews. A leading mental test authority in that era, Carl Brigham— creator of
the Scholastic Aptitude Test (SAT)— said that the Army mental test results
tended to “disprove the popular belief that the Jew is highly intelligent.”72
Another mental test authority, H.H. Goddard, who tested children of these
Eastern European and Southern European immigrants at the Ellis Island
immigrant receiving facility, declared that “These people cannot deal with
abstractions.”73
Prominent economist of that era Francis A. Walker described
immigrants from Eastern Europe and Southern Europe as “beaten men from
beaten races”74— a “foul and stagnant pool of population in Europe,”
originating in places where “no breath of intellectual life has stirred for
ages.”75
Professor Edward A. Ross, an official of the American Economic
Association and President of the American Sociological Association, coined
the term “race suicide” to describe the prospect of a demographic
replacement over time of the Western Europeans and Northern Europeans
as the majority of the American population by Eastern Europeans and
Southern Europeans, because both these latter groups had a higher
birthrate.76 He called these new immigrants “oxlike men,” and descendants
of backward peoples, whose very physical appearance “proclaims
inferiority of type.”77
Professor Ross lamented an “unanticipated result” of widespread access
to medical advances— namely, “the brightening of the survival prospect of
the ignorant, the stupid, the careless and the very poor.”78
Ross was the author of more than two dozen books, with large sales.79
The introduction to one of his books included a letter of fulsome praise
from Theodore Roosevelt.80 Among Professor Ross’ academic colleagues
was Roscoe Pound, who later became dean of the Harvard law school.
Professor Pound credited Professor Ross with setting him “in the path the
world is moving in.”81 This sense of mission, and a history-is-on-our-side
assumption, marked Roscoe Pound’s influential writings over a long career,
as he promoted judicial activism to free government from Constitutional
restrictions, leaving judges with a more expansive role to play in promoting
Progressive social policies.82
The people who led the crusade for genetic determinism in the early
twentieth century were not ill-educated, lower-class people. They included
some of the most intellectually prominent people of that era, on both sides
of the Atlantic.
These included the founders of such scholarly organizations as the
American Economic Association83 and the American Sociological
Association,84 a president of Stanford University and a president of MIT,85
as well as renowned professors at leading universities across the United
States.86 In England, John Maynard Keynes was one of the founders of the
eugenics society at Cambridge University.87 Most of these intellectuals
were on the political left in both countries.88 But there were also some
conservatives, including Winston Churchill and Neville Chamberlain.89
There were hundreds of courses on eugenics in colleges and universities
across the United States,90 just as there are similarly ideological courses on
college and university campuses across the country today, promoting very
different ideologies as regards race, but with a very similar sense of
mission, and a very similar intolerance toward those who do not share their
ideology or their mission.
“Eugenics” was a term coined by Sir Francis Galton, to describe an
agenda to reduce or prevent the survival of people considered genetically
inferior. He said, “there exists a sentiment, for the most part quite
unreasonable, against the gradual extinction of an inferior race.”91 Professor
Richard T. Ely, one of the founders of the American Economic Association,
said of the people he considered genetically inferior: “We must give to the
most hopeless classes left behind in our social progress custodial care with
the highest possible development and with segregation of sexes and
confinement to prevent reproduction.”92
Other contemporary academics of great distinction expressed very
similar views. Professor Irving Fisher of Yale, the leading American
monetary economist of his day, advocated the prevention of the “breeding
of the worst” by “isolation in public institutions and in some cases by
surgical operation.”93 Professor Henry Rogers Seager, of Columbia
University, likewise said that “we must courageously cut off lines of
heredity that have been proved to be undesirable,” even if that requires
“isolation or sterilization.”94
Prominent Harvard professor of economics Frank Taussig said of a
variety of people he considered inferior, that if it were not feasible to
“chloroform them once and for all,” then “at least they can be segregated,
shut up in refuges and asylums, and prevented from propagating their
kind.”95
The casual ease with which leading scholars of their time could advocate
imprisoning for life people who had committed no crime, and depriving
them of a normal life, is a painfully sobering reminder of what can happen
when an idea or a vision becomes a heady dogma that overwhelms all other
considerations. A widely read book of that era, The Passing of the Great
Race by Madison Grant, declared that “race lies at the base of all the
manifestation of modern society”96 and deplored “a sentimental belief in
the sanctity of human life,” when that is used “to prevent both the
elimination of defective infants and the sterilization of such adults as are
themselves of no value to the community.”97
This book was translated into other languages, including German, and
Hitler called it his “Bible.”98
The early twentieth-century Progressives were by no means Nazis. They
took pride in advocating a wide range of policies for social betterment, very
similar to the kinds of policies that would be advocated by other
Progressives in the later years of the twentieth century, and on into our own
times.
Prominent economist Richard T. Ely, for example, rejected free-market
economics because he saw government power as something to be applied
“to the amelioration of the conditions under which people live or work.” Far
from seeing government power as a threat to freedom, he said, “regulation
by the power of the state of these industrial and other social relations
existing among men is a condition of freedom.”99 He favored “public
ownership” of municipal utilities, highways and railroads— and declared
that “labor unions should be legally encouraged in their efforts for shorter
hours and higher wages” and that “inheritance and income taxes should be
generally extended.”100 Eugenics was to him just another social benefit he
wanted provided by government.
Professor Ely was clearly a man of the left, and has been called “the
father of institutional economics”101— a branch of economics long noted
for its opposition to free-market economics. One of Ely’s students— John
R. Commons— became a leading institutional economist at the University
of Wisconsin. Professor Commons rejected free-market competition
because “competition has no respect for the superior races,” so that “the
race with lowest necessities displaces others.”102
Among Ely’s other students was the iconic Progressive President of the
United States, Woodrow Wilson.103 President Wilson too saw some races as
inferior. He approved of the annexation of Puerto Rico by President
William McKinley before him, saying of those annexed, “they are children
and we are men in these deep matters of government and justice.”104
Wilson’s own administration segregated black employees of federal
agencies in Washington,105 and he showed the movie “The Birth of a
Nation”— which glorified the Ku Klux Klan— in the White House to invited
guests.106
Like other Progressives of his time and later times, Woodrow Wilson
saw no dangers to freedom in an expansion of government power—
whether through the creation of new federal agencies like the Federal Trade
Commission and the Federal Reserve System during his own
administration,107 or through the appointment of federal judges who would
“interpret” the Constitution so as to loosen what President Wilson regarded
as excessive restriction on the powers of government.108
In his book The New Freedom, Woodrow Wilson arbitrarily defined
government benefits as a new form of freedom,109 thereby verbally
finessing aside concerns about expanding powers of government being a
threat to people’s freedom. This redefinition of freedom has persisted
among various later advocates of expanding welfare state powers, on into
the twenty-first century.110
Among other prominent scholars of the early Progressive era who were
clearly on the political left, along with advocating eugenics, was the already
mentioned Professor Edward A. Ross, who was regarded as one of the
founders of the profession of sociology in the United States. Professor Ross
referred to “us liberals” as people who speak up “for public interests against
powerful selfish private interests,” and denounced those who disagreed with
his views as unworthy “kept” spokesmen for special interests, a “mercenary
corps” as contrasted with “us champions of the social welfare.”111
In their own minds, at least, these early twentieth-century Progressives
were advocating social justice— and Roscoe Pound used that specific
phrase.112 There is no need to question Ross’ sincerity, as he questioned
others’ sincerity. People can be very sincere when presupposing their own
superiority.
Madison Grant, whose book Hitler called his “Bible,” was likewise a
staunch Progressive of the early twentieth century. While not an academic
scholar, neither was he an ignorant redneck. He was from a wealthy family
in New York, and he was educated at Yale and the Columbia University law
school. He was an activist in Progressive causes, such as conservation,
preserving endangered species, municipal reform and the creation of
national parks.113 He was welcomed into an exclusive social club
established by Theodore Roosevelt,114 and during the 1920s he exchanged
friendly letters with Franklin D. Roosevelt, addressing him in these letters
as “My dear Frank,” while FDR reciprocated by addressing him as “My
dear Madison.”115
In short, the Progressives of the early twentieth century shared more
than a name with Progressives of a later era, extending on into our own
times. While these different generations of Progressives reached opposite
conclusions on the reasons for racial differences in economic and social
outcomes, they shared very similar views on the role of government in
general and judges in particular. They also had similar practices in dealing
with empirical evidence. Both remained largely impervious to evidence or
conclusions contrary to their own beliefs.
In addressing one of the central issues in early twentieth-century
America— the massive increase in immigration from Eastern Europe and
Southern Europe that began in the 1880s— the Progressives went beyond
claiming that the current generation of immigrants was less productive or
less advanced than the previous generations from Western Europe and
Northern Europe. The Progressives’ claim was that Eastern Europeans and
Southern Europeans were inherently, genetically— and therefore
permanently— inferior, whether in the past or the future.
Ironically, the Western civilization that all these Europeans shared
originated, thousands of years earlier, in Southern Europe— specifically in
ancient Greece, located in the eastern Mediterranean. The very words that
genetic determinists wrote were written in letters created in Southern
Europe by the Romans. In those ancient times, it was the Southern
Europeans who were more advanced. In the ancient days of the Roman
Empire, Cicero warned his fellow Romans not to buy British slaves,
because they were so hard to teach.116 It is difficult to see how it could have
been otherwise, when someone from an illiterate tribal people in ancient
Britain was brought in bondage to a highly complex and sophisticated
civilization like that in ancient Rome.
As for the claim that Southern European and Eastern European
immigrant children tested at Ellis Island “cannot deal with abstractions,”117
that can hardly be taken as proof of a genetic inability of people from these
regions to deal with abstractions. The ancient Greeks did not simply learn
mathematics. They were among the creators of mathematics— Euclid in
geometry and Pythagoras with the theorem that underlies trigonometry.
Nor need we believe that there was some biological superiority of the
ancient Greeks in southeastern Europe. A series of geographic treatises on
the history of Europe’s socioeconomic development by Professor N.J.G.
Pounds offered a very different explanation of why the earliest
developments of Western civilization began where they did:
Most of the significant advances in man’s material culture, like
agriculture and the smelting of metals, had been made in the Middle
East and had entered Europe through the Balkan peninsula. From
here they had been diffused northwestward to central Europe and
then to western.118
Later Progressivism
In the later decades of the twentieth century, and on into the twenty-first
century, latter-day Progressives substituted racial discrimination for genes
as the automatic explanation of group differences in economic and social
outcomes. Mental tests— once exalted as an embodiment of “science,”
supposedly proving genetic determinism— were now automatically
dismissed as biased, when SAT and ACT college admissions tests produced
results that conflicted with the new social justice agenda of imposed
demographic representation of various social groups in various institutions
and endeavors.
In this new Progressive era, statistical disparities between blacks and
whites, in any endeavor, have usually been sufficient to produce a
conclusion that racial discrimination was the reason. Often there are also
statistical data on Asian Americans in these same endeavors. But these
Asian American data are almost invariably omitted, not only by the media,
but even by academic scholars in elite universities. Such data would often
present a serious challenge to the conclusions reached by latter-day
Progressives.
In the job market, for example, it has often been said that blacks are “the
last hired and the first fired,” when there are downturns in the economy.
Black employees may in fact be terminated during an economic downturn,
sooner or to a greater extent than white employees. But data also show that
white employees are often let go before Asian American employees.138 Can
this be attributed to racial discrimination against whites, by employers who
were usually white themselves? Are we to accept statistical data as evidence
when these data fit existing preconceptions, but not accept such data when
they go counter to those same preconceptions?
Or are we to be spared such problems by those who simply omit facts
that go against their vision or agenda?
One of the major factors in the housing boom and bust, which produced
an economic crisis in the United States, early in the twenty-first century,
was a widespread belief that there was rampant racial discrimination by
banks and other lending institutions against blacks applying for mortgage
loans. Various statistics from a number of sources showed that, although
most black and white applicants for conventional mortgage loans were
approved, black applicants were turned down at a higher rate than white
applicants for the same loans. What was almost universally omitted were
statistical data showing that whites were turned down for those same loans
more often than Asian Americans.139
Nor was there any great mystery as to why this was so. The average
credit rating of whites was higher than the average credit rating of blacks—
and the average credit rating of Asian Americans was higher than the
average credit rating of whites.140 Nor was this the only economically
relevant difference.141
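A small sketch in Python may help show the mechanism at work; the cutoff and the score lists below are invented for illustration, not drawn from the studies cited here. Applying one uniform credit-score threshold to groups whose score distributions differ produces different denial rates, without any group-based decision being made at any point:

# Hypothetical illustration: one uniform approval standard, applied to groups
# with different credit-score mixes, yields different denial rates.
CUTOFF = 650  # invented minimum score required for approval

applicant_scores = {
    "group_x": [590, 630, 660, 700, 720],  # lower average score
    "group_y": [640, 670, 690, 730, 760],  # higher average score
}

for group, scores in applicant_scores.items():
    denied = sum(score < CUTOFF for score in scores)
    print(group, "denial rate:", f"{denied / len(scores):.0%}")
# group_x denial rate: 40%
# group_y denial rate: 20%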
Nevertheless, there were outraged demands in the media, in academia
and in politics that the government should “do something” about racial
discrimination by banks and other mortgage lenders. The government
responded by doing many things. The net result was that it forced mortgage
lenders to lower their lending standards.142 This made mortgage loans so
risky that many people, including the author of this book, warned that the
housing market could “collapse like a house of cards.”143 When it did, the
whole economy collapsed.144 Low-income blacks were among those who
suffered.
The same question can be raised about mortgage approval patterns as the
question about hiring and firing in the job market. Were predominantly
white mortgage lenders discriminating against white applicants? If that
seems highly unlikely, it is also unlikely that black-owned banks were
discriminating against black mortgage loan applicants. Yet black applicants
for mortgage loans were turned down at an even higher rate by a black-
owned bank.145
It has been much the same story with student discipline in the public
schools. Statistics show that black males have been disciplined for
misbehavior more often than white males. Because of the prevailing
preconception that the behavior of different groups themselves cannot be
different, this automatically became another example of racial
discrimination— and literally a federal issue. A joint declaration from the
U.S. Department of Education and the U.S. Department of Justice warned
public school officials that they wanted what they characterized as a racially
discriminatory pattern ended.146
Statistical data from a landmark study of American education— No
Excuses: Closing the Racial Gap in Learning by Abigail Thernstrom and
Stephan Thernstrom— showed that black students were disciplined two-
and-a-half times as often as white students, who were disciplined twice as
often as Asian students.147 Were the predominantly white teachers biased
against white students? Nor was the disciplining of black students
correlated with whether the teachers involved were black or white.148
Although we may analyze all these statistics by race, that does not
necessarily mean that the employers, lenders or teachers made their
decisions on the basis of race. If black, white and Asian employees had
different distributions of jobs, or were distributed differently at different
levels in the same occupations, then decisions as to which kinds of jobs—
or job performances— were expendable during an economic downturn
could result in the racial disparities seen.
Banking officials who decided whose mortgage applications to accept or
reject are unlikely to have actually seen the applicants themselves. These
applicants would more likely be interviewed by lower-level bank
employees. These employees would then pass the income and other data—
including individual credit ratings— on to higher officials, who would then
either approve or disapprove the applications. In the public schools,
teachers would obviously see the students whose misbehavior they
reported, but the fact that black and white teachers made similar reports
suggests that race was not likely to be the key factor in this case either.
Perhaps the point in American history when there was the widest
consensus on racial issues, across racial lines, was the occasion of the
historic speech by Martin Luther King at the Lincoln Memorial in 1963.
That was when he said that his dream was of a world where people “will
not be judged by the color of their skin but by the content of their
character.”149 His message was equal opportunity for individuals, regardless
of race. But that agenda, and the wide consensus behind it, began eroding in
the years that followed. The goal changed from equal opportunity for
individuals, regardless of race, to equal outcomes for groups, whether these
groups were defined by race, sex or otherwise.
What now rose to dominance was the social justice agenda, which
included equalized outcomes in the present and reparations for the past.
This new agenda drew on history, or on myths presented as history, as well
as assertions presented as facts— the latter in a spirit reminiscent of the
certitude and heedlessness of evidence in the genetic determinism era.
Chapter 3
REDISTRIBUTION OF WEALTH
Politically attractive as confiscation and redistribution of the wealth of
“the rich” might seem, the extent to which it can actually be carried out in
practice depends on the extent to which “the rich” are conceived as being
like inert pieces on a chessboard. To the extent that “the rich” can foresee
and react to redistributive policies, the actual consequences can be very
different from what was intended.
In an absolute monarchy or a totalitarian dictatorship, a mass
confiscation of wealth can be suddenly imposed without warning on the
“millionaires and billionaires” so often cited as targets of confiscation. But,
in a country with a democratically elected government, confiscatory
taxation or other forms of confiscation must first be publicly proposed, and
then develop sufficient political support over time among the voters, before
being actually imposed by law. Unless “millionaires and billionaires” are
oblivious to all this, they are likely to know about the impending
confiscation and redistribution before it happens. Nor can we
assume that they will simply wait passively to be sheared like sheep.
Among the more obvious options available to “the rich”— when they
are forewarned of large-scale confiscations of their wealth— are (1)
investing their wealth in tax-exempt securities, (2) sending their wealth
beyond the taxing jurisdiction, or (3) moving themselves personally beyond
the taxing jurisdiction.
In the United States, the taxing jurisdiction can be a city, a state or the
federal government. The various ways of sheltering wealth from taxation
may have some costs to “the rich” and, where their wealth is embodied in
immovable assets such as steel mills or chains of stores, there may be little
they can do to escape confiscation of these particular forms of wealth. But,
for liquid assets in today’s globalized economies around the world, vast
sums of money can be transferred electronically from country to country,
with the click of a computer mouse.
This means that what the actual consequences of raising tax rates on “the
rich” in a given jurisdiction will be is a factual question. The outcome is not
necessarily predictable, and the potential consequences may or may not
make the planned confiscation feasible. Raising the tax rate X percent does
not guarantee that the tax revenue will also rise X percent— or will even
rise at all. When we turn from theories and rhetoric to the facts of history,
we can put both the explicit and the implicit assumptions of the social
justice vision to the test.
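As a rough arithmetic sketch of why the revenue outcome is an empirical question rather than a matter of simple proportion (all figures below are invented for illustration, not actual tax data), consider in Python what happens when a higher rate causes part of the taxable base to be sheltered or moved beyond the taxing jurisdiction:

# Hypothetical illustration: revenue is the tax rate times the base that
# remains taxable, and that base can shrink when the rate rises.
def revenue(rate, original_base, fraction_still_taxable):
    """Tax collected after some of the base is sheltered or moved away."""
    return rate * original_base * fraction_still_taxable

before = revenue(rate=0.40, original_base=100.0, fraction_still_taxable=1.0)  # 40.0
after = revenue(rate=0.70, original_base=100.0, fraction_still_taxable=0.5)   # 35.0
print(before, after)  # the higher rate collects less revenue in this example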
History
Back in the eighteenth century, Britain’s imposition of a new tax on its
American colonies played a major role in setting off a chain of events that
led ultimately to those colonies declaring their independence, and becoming
the United States of America. Edmund Burke pointed out at the time, in the
British Parliament: “Your scheme yields no revenue; it yields nothing but
discontent, disorder, disobedience…”3
Americans were not just inert pieces on the great chessboard of the
British Empire. American independence deprived Britain not only of
revenue from the new taxes it imposed, but also of revenue from the other
taxes it had already been collecting from the American colonies. This was
by no means the only time when an increase
in the official rate of taxation led to a reduction in the tax revenues actually
collected.
The people in Africa were not inert chess pieces, any more than people
in Europe or America.
Many studies of many forms of price controls, in countries around the
world, have revealed very similar patterns.21 This has led some people to
ask: “Why don’t politicians learn from their mistakes?” Politicians do learn.
They learn what is politically effective, and what they do is not a mistake
politically, despite how disastrous such policies may turn out to be for the
country. What can be a mistake, however, is to assume that particular ideals
— including social justice— can be something that society can just
“arrange,” through government, without considering the particular patterns
of incentives and constraints inherent in the institution of government.
Chapter 4
KNOWLEDGE FALLACIES
For many social issues, the most important decision is who makes the
decision. Both social justice advocates and their critics might agree that
many consequential social decisions are best made by those who have the
most relevant knowledge. But they have radically different assumptions as
to who in fact has the most knowledge.
That is partly because they have radically different conceptions of what
is defined as knowledge. Such differences of opinion as to what constitutes
knowledge go back for centuries.1
Consequential Knowledge
As an example of consequential knowledge— knowledge affecting
decisions with meaningful consequences in people’s lives— the officers in
charge of the Titanic no doubt had much complex knowledge about the
intricacies of ships and navigation on the seas. But the most consequential
knowledge on a particular night was the mundane knowledge of the
location of particular icebergs, because collision with an iceberg is what
damaged and sank the Titanic.
Although mundane information and special kinds of information have
both been called knowledge by some, they are not commensurable, but are
very distinct. Moreover, the presumably higher knowledge does not
automatically encompass the more mundane knowledge. Each can be
consequential in particular circumstances. This means that the distribution
of consequential knowledge in a given society can be very different,
depending on what kind of knowledge is involved.
As another example of the role of mundane but consequential
knowledge, when people migrate from one country to another, they seldom
migrate randomly from all parts of the country they leave or settle randomly
in all parts of the country they go to. Various kinds of mundane knowledge
— information of a sort not taught in schools or colleges— can play major
roles in the migration decisions of millions of human beings.
Two provinces in mid-nineteenth-century Spain, containing just 6
percent of the Spanish population, supplied 67 percent of the Spanish
immigrants to Argentina. Moreover, when these immigrants arrived in
Argentina, they lived clustered together in particular neighborhoods in
Buenos Aires.2 Similarly, during the last quarter of the nineteenth century,
nearly 90 percent of the Italian immigrants to Australia came from an area
in Italy containing just 10 percent of that country’s population.3 Yet
immigration to Australia remained substantial, over the years, from the
same isolated places in Italy where most of these emigrants originated. By
1939, there were more people from some Italian villages living in Australia
than remained behind in those same villages back in Italy.4
Immigrants in general tend to go to some very specific place in the
destination country, where people from their home country— people known
to them personally, and trusted— have already settled before. Such people
can provide newcomers with very specific information about the particular
places where these earlier immigrants live. This has been highly valuable
knowledge about such basic things as where to get a job and where to find
an affordable place to live, along with numerous other mundane but
consequential matters in a country whose people and way of life are new to
the immigrants.
Where this kind of knowledge happened to be available to people in
particular places in Spain or Italy, people from those particular places had
high rates of emigration, while many other places in these same countries
that lacked such personal connections could have very few people
emigrating. Contrary to implicit assumptions of random behavior by some
social theorists, people did not emigrate randomly from Spain in general
to Argentina in general, or from Italy in general to Australia in general.
It was much the same story with Germans immigrating to the United
States. One study found some villages “practically transplanted from
Germany to rural Missouri.”5 There was a similar pattern among German
immigrants to urban places in America. Frankfort, Kentucky, was founded
by people from Frankfurt, Germany, and Grand Island, Nebraska, was
founded by Schleswig-Holsteiners.6 Of all the people who emigrated from
China to the United States in more than half a century prior to World War I,
60 percent came from Toishan, just one of 98 counties in one province in
southern China.7
Such patterns have been the rule, not the exception, among other
immigrants to other countries, including the Lebanese settling in Colombia8
and Jewish immigrants from Eastern Europe settling in particular parts of
New York’s Lower East Side slum neighborhood.9
These patterns of very specific ties to very specific places— based on
very specific mundane but consequential knowledge of particular people in
those places— extended into the social life of immigrants after their arrival
and settlement. Most of the marriages that took place in nineteenth-century
New York’s Irish neighborhoods were marriages between people from the
same county in Ireland.10 It was much the same story in the Australian city
of Griffith. In the years from 1920 to 1933, 90 percent of the Italian men
who had emigrated from Venice, and gotten married in Australia, married
Italian women who had also emigrated from Venice.11 People sort
themselves out, based on very specific information.
Such patterns have been so widely observed that they have been given a
name— “chain migration”— for the chain of personal connections
involved. This is consequential knowledge, valued for its practical
applications, rather than because of its intellectual challenge or elegance. It
is a highly specific kind of knowledge, about highly specific people and
places. This kind of knowledge is unlikely to be known by surrogate
decision-makers, such as economic central planners or policy experts, who
may have far more of the kinds of knowledge taught in schools and
colleges. But, no matter how much of this latter kind of knowledge may be
regarded as higher knowledge, it does not necessarily encompass— much
less supersede— what is regarded as lower knowledge.
How much knowledge there is in a given society, and how it is
distributed, depends crucially on how knowledge is conceived and defined.
When a social justice advocate like Professor John Rawls of Harvard
referred to how “society” should “arrange” certain outcomes,12 he was
clearly referring to collective decisions of a kind that a government makes,
using knowledge available to surrogate decision-makers, more so than the
kind of knowledge known and used by individuals in the population at
large, when making their own decisions about their own lives. As an old
saying expressed it: “A fool can put on his coat better than a wise man can
do it for him.”13
Whatever the desirability of the goals sought by social justice advocates,
the feasibility of achieving those goals through surrogate decision-makers
depends on the distribution of relevant and consequential knowledge.
It also depends on the nature, purpose and reliability of the political
process through which governments act. The history of many twentieth-
century fervent crusades for idealistic goals is a painful record of how often
the granting of great powers to governments, in pursuit of those goals, led
instead to totalitarian dictatorships. The bitter theme of “the Revolution
betrayed” goes back at least as far as the French Revolution in the
eighteenth century.
At the opposite pole from the position attributed to Benjamin Jowett,
twentieth-century Nobel Prize-winning economist F.A. Hayek’s conception
of knowledge would encompass both the carpenter’s information and the
physicist’s information— and extend far beyond both. This put him in
direct opposition to various systems of surrogate decision-making in the
twentieth century, including the social justice vision.
To Hayek, consequential knowledge included not only articulated
information, but also unarticulated information, embodied in behavioral
responses to known realities. Examples might include something as simple
— and consequential— as putting warm clothing on children before taking
them out in cold weather, or moving your car over to the side of a road,
when you hear the siren of an emergency vehicle wanting to pass. As Hayek
put it:
Not all knowledge in this sense is part of our intellect, nor is our
intellect the whole of our knowledge. Our habits and skills, our
emotional attitudes, our tools, and our institutions— all are in this
sense adaptations to past experience which have grown up by
selective elimination of less suitable conduct. They are as much an
indispensable foundation of successful action as is our conscious
knowledge.14
Opposite Visions
Although F.A. Hayek was a landmark figure in the development of an
understanding of the crucial role of the distribution of knowledge in
determining which kinds of policies and institutions were likely to produce
what kinds of results, there were others before him whose analyses had
similar implications, and others after him— notably Milton Friedman—
who applied Hayek’s analysis in their own work.
An opposite vision of knowledge and its distribution has likewise had a
very long pedigree behind its opposite conclusions— namely, that
consequential knowledge is concentrated in intellectually more advanced
people. The question of what constitutes knowledge was among the things
addressed in a two-volume 1793 treatise titled Enquiry Concerning
Political Justice by William Godwin.19
Godwin’s conception of knowledge was very much like that prevalent in
today’s writings on social justice. Indeed, the word “political” in the title of
his book was used in a sense common at that time, referring to the polity or
governmental structure of a society. The word was used in a similar sense at
that time in the expression “political economy”— meaning what we call
“economics” today— the economic analysis of a society or polity, as
distinguished from economic analysis of decisions in a home, business or
other individual institution within a society or polity.
To Godwin, explicitly articulated reason was the source of knowledge
and understanding. In this way, “just views of society” in the minds of “the
liberally educated and reflecting members” of society will enable them to
be “to the people guides and instructors.”20 Here the assumption of superior
knowledge and understanding did not lead to casting an intellectual elite in
the role of surrogate decision-makers as part of a government, but rather as
influencers of the public, who in turn were expected to influence the
government.
A similar role for the intellectual elite appeared later in the nineteenth-
century writings of John Stuart Mill. Although Mill saw the population at
large as having more knowledge than the government,21 he also saw the
population as needing the guidance of elite intellectuals. As he said in On
Liberty, democracy can rise above mediocrity, only where “the sovereign
Many have let themselves be guided (which in their best times they always
have done) by the counsels and influence of a more highly gifted and
instructed One or Few.”22
Mill depicted these intellectual elites— “the best and wisest,”23 the
“thinking minds,”24 “the most cultivated intellects in the country,”25 “those
who have been in advance of society in thought and feeling”26— as “the
salt of the earth; without them, human life would become a stagnant
pool.”27 He called on the universities to “send forth into society a
succession of minds, not the creatures of their age, but capable of being its
improvers and regenerators.”28
Ironically, this presumed indispensability of intellectuals for human
progress was asserted at a time and in a place— nineteenth-century Britain
— where an industrial revolution was taking place in Mill’s own lifetime
that would change whole patterns of life in many nations around the world.
Moreover, this industrial revolution was led by men with practical
experience in industry, rather than intellectual or scientific education.
Among Americans as well, even revolutionary industrial giants like
Thomas Edison and Henry Ford had very little formal schooling,29 and the
first airplane to lift off the ground with a human being on board was
invented by two bicycle mechanics— the Wright brothers— who never
finished high school.30
Nevertheless, John Stuart Mill’s vision of the indispensable role of
intellectuals in human progress has been one shared by many intellectuals
over the centuries. These have included intellectuals leading crusades for
more economic equality, based ironically on assumptions of their own
superiority. Rousseau said in the eighteenth century that he considered it
“the best and most natural arrangement for the wisest to govern the
multitude.”31 Variations on this theme have marked such movements
against economic inequality as Marxism, Fabian socialism, Progressivism
and social justice activism.
Rousseau, despite his emphasis on society being guided by “the general
will,” left the interpretation of that will to elites. He likened the masses of
the people to “a stupid, pusillanimous invalid.”32 Others on the eighteenth-
century left, such as William Godwin and the Marquis de Condorcet,
expressed similar contempt for the masses.33 In the nineteenth century, Karl
Marx said, “The working class is revolutionary or it is nothing.”34 In other
words, millions of fellow human beings mattered only if they carried out
the Marxian vision.
Fabian socialist pioneer George Bernard Shaw regarded the working
class as being among the “detestable” people who “have no right to live.”
He added: “I should despair if I did not know that they will all die presently,
and that there is no need on earth why they should be replaced by people
like themselves.”35
In our own times, prominent legal scholar Professor Ronald Dworkin of
Oxford University declared that “a more equal society is a better society
even if its citizens prefer inequality.”36 French feminist pioneer Simone de
Beauvoir likewise said, “No woman should be authorized to stay at home to
raise her children. Society should be totally different. Women should not
have that choice, precisely because if there is such a choice, too many
women will make that one.”37 In a similar vein, consumer activist Ralph
Nader said that “the consumer must be protected at times from his own
indiscretion and vanity.”38
We have already seen how similar attitudes led genetic determinists in
the early twentieth century to casually advocate imprisoning people who
had committed no crime, and denying them a normal life, on the basis of
unsubstantiated beliefs that were then in vogue in intellectual circles.
Given the conception of knowledge prevalent among many elite
intellectuals, and the distribution of such knowledge implied by that
conception, it is hardly surprising that they reach the kinds of conclusions
that they do. Indeed, to make the opposite assumption— that one’s own
great achievements and competence are confined to a narrow band, out of
the vast spectrum of human concerns— could be a major impediment to
promoting social crusades that preempt the decisions of others, who are
supposedly to be the beneficiaries of such crusades as the quest for social
justice.
F.A. Hayek regarded the assumptions of crusading intellectuals as The
Fatal Conceit— the title of his book on the subject. Although he was a
landmark figure in opposition to the presumed superiority of intellectuals as
guides or surrogate decision-makers for other people, he was not alone in
his opposition to the idea of a presumed concentration of consequential
knowledge in intellectual elites.
Professor Milton Friedman, another Nobel Prize-winning economist, noted how
that honor can lead to assumptions of omnicompetence, by both the public
and the recipient:
Employment Issues
One of the prominent early Progressives to call for elite preemptions of
other people’s decisions was Walter E. Weyl, who graduated from college at
age 19, went on to earn a Ph.D., and had a career as an academic and a
journalist. He was clearly one of the intellectual elites, and he devoted his
talents to crusading for a “socialized democracy,” in which employees
would be protected from the “great interstate corporations,”46 among other
hazards and restrictions. For example:
Clearly, Walter E. Weyl saw the employer as taking away this woman’s
liberty and people like himself as wanting to restore it to her— even though
it was the employer who offered her an option and surrogates like Weyl
who wanted to take away her option. For intellectual elites who see
society’s consequential knowledge concentrated in people like themselves,
this might make sense. But people who see consequential knowledge
widely diffused among the population at large could reach the opposite
conclusion— already mentioned— that “A fool can put on his coat better
than a wise man can do it for him.” Or her.
Minimum wage laws are another example of intellectual elites and
social justice advocates acting as surrogate decision-makers, preempting the
decisions of both employers and employees. As noted in Chapter 3, the
unemployment rate among black 16-year-old and 17-year-old males was
under 10 percent in 1948, when inflation had rendered the minimum wage
law ineffective. But, after a series of minimum wage increases, beginning in
1950, restored that law’s effectiveness, the unemployment rate of black
males in this age bracket rose, and never fell below 20 percent for more
than three consecutive decades, in the years from 1958 to 1994.48
In some of those years, their unemployment rate was over 40 percent.
Moreover, during those years, the virtually identical unemployment rates
for black and white teenage males that had existed in 1948, when the minimum
wage law was ineffective, gave way to a racial gap. Black teenage male
unemployment rates were now often twice as high as the unemployment
rate for white teenage males.49 In 2009— ironically, the first year of the
Obama administration— the annual unemployment rate of black teenage
males as a whole was 52 percent.50
In other words, half of all black teenage males looking for jobs could not
find any, because surrogate decision-makers made it illegal for them to take
jobs at wages that employers were willing to pay, but which third-party
surrogates disliked. Preempting their options left black teenage males the
choice of doing without pay in legal occupations or making money from
illegal activities, such as selling drugs— an activity with dangers from both
the law and rival gangs. But even if unemployed black teenage males just
hung around idle on the streets, no community of any race is made better
off with many adolescent males hanging around with nothing useful to do.
None of these facts has made the slightest impression on many people
advocating higher minimum wage rates. This is another example of
situations in which “friends” and “defenders” of the less fortunate are
oblivious to the harm they are doing. New York Times columnist Nicholas
Kristof, for example, depicted people who oppose minimum wage laws as
people with “hostility” to “raising the minimum wage to keep up with
inflation” because of their “mean-spiritedness” or “at best, a lack of
empathy toward those struggling.”51
There is no need to attribute malign intentions to Nicholas Kristof. Fact-
free moralizing is a common pattern among social justice advocates. But
the fundamental problem is an institutional problem, when laws allow third-
party surrogates to preempt other people’s decisions and pay no price for
being wrong, no matter how high the price paid by others, whom they are
supposedly helping.
Anyone seriously interested in facts about the effects of minimum wage
laws on employment can find such facts in innumerable examples from
countries around the world, and in different periods of history.52 Most
modern, industrial countries have minimum wage laws, but some do not, so
their unemployment levels can be compared to the unemployment levels in
other countries.
It was news in 2003 when The Economist magazine reported that
Switzerland’s unemployment rate “neared a five-year high of 3.9% in
February.”53 Switzerland had no minimum wage law. The city-state of
Singapore has also been without a minimum wage law, and its
unemployment rate has been as low as 2.1 percent in 2013.54 Back in 1991,
when Hong Kong was still a British colony, it too had no minimum wage
law, and its unemployment rate was under 2 percent.55 The last American
administration without a national minimum wage law was the Coolidge
administration in the 1920s. In President Coolidge’s last four years in
office, the annual unemployment rate ranged from a high of 4.2 percent to a
low of 1.8 percent.56
While some social justice advocates may think of minimum wage laws
as a way to help low-income people, many special-interest groups in
countries around the world— perhaps more experienced and informed
about their own economic interests— have deliberately advocated
minimum wage laws for the express purpose of pricing some low-income
people out of the labor market. At one time, the groups targeted for
exclusion included Japanese immigrant workers in Canada57 and African
workers in South Africa under apartheid,58 among others.59
Payday Loans
Similar presumptions have led to many local social justice crusades to
outlaw so-called “payday loans” in low-income neighborhoods. These are
usually short-term loans of small amounts of money, charging something
like $15 per hundred dollars lent for perhaps a few weeks.60 Low-income
people, facing some unexpected financial emergency, often turn to such
loans because banks are unlikely to lend to them, and the money they need
to deal with some emergency must be paid before their next check is due—
whether that is a paycheck from some job, or a check from welfare or some
other source.
Perhaps an old car has broken down, and needs immediate repairs, if
that is the only way someone can get to work from where they live. Or a
family member might have suddenly gotten sick, and needs some expensive
medicine right away. In any event, the borrowers need money they don’t
have, and they need it right now. Paying $15 to borrow $100 until the end
of the month may be one of the very few options available. But that could
work out mathematically to an annual interest rate of several hundred
percent— and social justice advocates consider that “exploitation.”
Accordingly, payday loans have been denounced from the editorial pages of
the New York Times61 to many other venues for social justice activism.62
By the same kind of reasoning as that denouncing payday loan interest
rates as being several hundred percent on an annual basis, renting a hotel
room for $100 a night is paying $36,500 rent annually, which seems
exorbitant for renting a room. But of course most people are very unlikely
to rent a hotel room for a year at that price. Nor is there any guarantee to the
hotel management that every room in a hotel will be rented every night,
even though hotel employees have to be paid every payday, regardless of
how many rooms are rented or not rented.
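For readers who want to see the arithmetic behind these figures, here is a minimal sketch, in Python, assuming a two-week loan term and simple (non-compounding) annualization; the term and the method of annualizing are illustrative assumptions, not details taken from the discussion above.

# Minimal sketch of the annualization arithmetic, under assumed terms:
# a $15 fee per $100 borrowed, a two-week loan, simple annualization.
fee = 15.0            # fee per loan, in dollars
principal = 100.0     # amount borrowed, in dollars
term_weeks = 2        # assumed loan term

periods_per_year = 52 / term_weeks                   # 26 two-week periods per year
annualized_rate = (fee / principal) * periods_per_year * 100
print(f"Annualized rate: {annualized_rate:.0f}%")    # 390% -- "several hundred percent"

# The hotel-room analogy applies the same annualizing logic to a nightly rate:
nightly_rate = 100                                   # dollars per night
print(f"Room rent annualized: ${nightly_rate * 365:,}")  # $36,500

The point of the analogy, of course, is that such annualized figures describe a price that almost no one actually pays over a full year.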
Nevertheless, based on reasoning about annual interest rates, some states
have imposed interest rate caps, which have often been enough to shut
down most payday loan businesses. Among the other flaws in the social
justice crusaders’ reasoning is that the $15 is not all interest, as economists
define interest. That sum also covers the cost of processing the loan and
covers the inevitable risks of losses from any kind of lending, as well as
covering such common business expenses as employees’ salaries, rent, etc.,
that other businesses have.
Such costs are a higher percentage of all costs when a small amount of
money is borrowed. It does not cost a bank a hundred times as much to
process a loan of $10,000 as it costs a payday loan business to process a
loan of $100.
In short, the real interest rate— net of other costs— is unlikely to be
anything resembling the alarming interest rate numbers that are thrown
around recklessly, in order to justify preempting the decisions of low-
income people faced with a financial emergency. But, nevertheless,
intellectual elites and social justice crusaders can go away feeling good
about themselves, after depriving poor people of one of their very few
options for dealing with a financial emergency.
To someone directly involved, it may be worth much more than $15 to
avoid losing a day’s pay or to spare a sick family member needless
suffering. But it may never occur to crusading intellectual elites that
ordinary people may have far more consequential knowledge about their
own circumstances than distant surrogates have.
As for “exploitation,” it is not always easy to know what some people
mean specifically when they use that word, other than as an expression of
their disapproval. But if we take “exploitation” in this context to mean that
people who own payday loan businesses receive a higher rate of return on
their business investment than is necessary to compensate them for being in
this particular business, then the complete shutdown of many payday loan
businesses, in the wake of legislation reducing their “interest” charges,
suggests the opposite. Why would anyone completely give up a business
that still earns them as much of a return on their investment as other
businesses receive?
In the particular cases where legislated limits on what is called “interest”
force payday loan businesses to go out of business, social justice reformers
may go away feeling good about having ended “exploitation” of the poor,
when they have in fact simply denied the poor one of their very few options
in an emergency, by preventing the businesses supplying that option from
earning a rate of return common in other businesses.
Housing Decisions
Even such basic individual decisions as where to live— in what kind of
housing and in what kind of neighborhood— have been preempted by
surrogate decision-makers.
For more than a century, social reformers have used the power of
government to force low-income people to abandon the homes in which
they have chosen to live, and move to places the reformers consider better.
These policies have gone by a variety of names, such as “slum clearance,”
“urban renewal” or whatever other names happened to be in vogue
politically at various times.
Some of the housing that the poorest people lived in, especially back in
the early twentieth century, was truly awful. A survey in 1908 showed that
about half of the families who lived on New York’s Lower East Side had
three or four people sleeping per room, and nearly 25 percent of these
families had five or more people sleeping per room.63 Individual home
bathtubs were very rare in such places at that time. An indoor faucet or
toilet, to be shared by many tenants, was a recent improvement, and they
were by no means universal. There were still thousands of outdoor toilets in
the backyards, which could be something of a challenge in the winter.
Surrogate decision-makers did not merely advise the tenants to leave,
nor did the government provide places to which they could move. Instead,
government officials ordered the slums torn down, and used the police to
evict tenants who did not want to leave. During these and later times,
surrogate decision-makers simply assumed that their own knowledge and
understanding were superior to those of the low-income people they had
forced out of the tenements. Later, after better housing was built as
replacements, the surrogates could feel vindicated.
Even if both the housing that the evicted tenants moved into
immediately and the new housing that was built to replace the slums were
better, the slum tenants already had the former option before they were
evicted— and their choice, when they had one, was to stay where they
were, in order to save some much-needed money, rather than pay higher
rent. Often the better housing that was built as replacements was also more
expensive.
Among the poorest of the European immigrants at that time were
Eastern European Jews. Their men often began working as peddlers on the
streets, while the women and children worked at home— for long hours at
low piecework pay, on consignments of clothing production in the slum
apartments where they lived. They were often trying to save up enough
money to be able to eventually open up some small shop or grocery store, in
hopes of being able to earn a better living that way, or at least not having
their men be peddlers working outdoors on the streets in all kinds of
weather.
Many of these Jewish immigrants had family members back in Eastern
Europe, where they were being attacked by anti-Semitic mobs. The money
being saved was also used to pay the fares of those family members who
desperately needed to escape. During these years, most of the Jewish
immigrants from Eastern Europe who came to America had their fares paid
by family members already living in America,64 even though many Jews at
that time were still poor and living in slums.
Other immigrant groups, living in slums in nineteenth-century and early
twentieth-century America, had similarly urgent situations to deal with.
Italian immigrants, who were overwhelmingly men, often had families back
in the poorer southern regions of Italy, to whom they sent money they
earned in the United States. These immigrants often slept many men to a
room, in order to save money. Observers who noticed that they seemed to
be physically smaller than other men— something not said of Italian men in
America in later generations— may not have known that these men
skimped even on food, in order to save up money with which to either
return to Italy in a few years to rejoin their families, or to send money to
their families to come join them in America.
In an earlier generation, Irish immigrants lived in some of the worst
slums in America— usually in families, while other family members still
remained in Ireland, where a crop failure in the 1840s created a devastating
famine. Like the Jewish immigrants from Eastern Europe in
later years, the Irish living in America sent money back to their family
members in Ireland, so that they could immigrate to America, with their
fares prepaid.65
These and other urgent reasons for needing to save money were part of
the consequential knowledge keenly felt by family members living in the
slums, but less likely to be known by surrogate decision-makers with great
confidence in their own supposedly superior knowledge and understanding.
Early Progressive-era writer Walter E. Weyl said that “a tenement house
law increases the liberty of tenement dwellers.”66 The resistance of slum
dwellers who had to be forced out by police suggests that they saw things
differently.
Children
An even deeper penetration into the lives of other people has been
preempting parents’ role in raising their own children.
The decision as to when and how parents want their children to be
informed and advised about sex was simply preempted by surrogates who
introduced “sex education” into the public schools in the 1960s. Like so
many other social crusades by intellectual elites, the “sex education” agenda
was presented politically as an urgent response to an existing “crisis.” In
this case, the problems to be solved were said to include unwanted
pregnancies among teenage girls and venereal diseases among both sexes.
A Planned Parenthood representative, for example, testified before a
Congressional subcommittee on the need for such programs “to assist our
young people in reducing the incidence of out-of-wedlock births and early
marriage necessitated by pregnancy.”67 Similar views, as regards both
venereal diseases and unwanted pregnancies, were echoed in many elite
intellectual circles, and questioners or critics were depicted as ignorant or
worse.68
What were the actual facts, as of the time of this “crisis,” supposedly in
urgent need of a “solution” by preempting the role of parents? Venereal
diseases had been declining for years. The rate of infection for gonorrhea
declined every year from 1950 through 1958, and the rate of syphilis
infection was, by 1960, less than half of what it had been in 1950.69 The
pregnancy rate among teenage females had declined for more than a
decade.70
As for the facts about what happened after “sex education” was widely
introduced into public schools, the rate of teenage gonorrhea tripled
between 1956 and 1975.71 The rate of infection for syphilis continued to
decline, but its rate of decline from 1961 on was nowhere near as steep as
its sharp rate of decline in earlier years.72
During the 1970s, the pregnancy rate among females from 15 years old
to 19 years old rose from approximately 68 per thousand in 1970 to
approximately 96 per thousand by 1980.73 Data for birth rates per thousand
females in this same age group differ numerically— because of abortions
and miscarriages— but the pattern over the years was similar.
Beginning in the years before sex education was introduced into the
public schools on a large scale in the 1960s, the birth rate among unmarried
females, aged 15 to 19, was 12.6 per thousand in 1950, 15.3 in 1960, 22.4
in 1970 and 27.6 in 1980. At the end of the century in 1999, it was 40.4 per
thousand.74 As a percentage of all births to females in the same age bracket
— both married and unmarried— the births to unmarried females in this
age bracket were 13.4 percent of all the births to females of these ages in
1950, 14.8 in 1960, 29.5 in 1970, and 47.6 in 1980. As of the year 2000,
more than three quarters of all the births to females in this age bracket—
78.7 percent— were to unmarried females.75
The reason is not hard to find: The percentage of unmarried teenage
females who had engaged in sex was higher at every age from 15 through
19 by 1976 than it was just five years earlier.76 Nor is it hard to understand
why, when the specifics of what was called “sex education” included such
things as this:
Only when he can support himself and his family, choose his job and
make a living wage can an individual and his family exercise real
freedom. Otherwise he is a servant to survival without the means to
do what he wants to do.95
Dewey asked, “how does the desire for freedom compare in intensity
with the desire to feel equal with others, especially with those who have
previously been called superiors?”100 He said, “as we look at the world we
see supposedly free institutions in many countries not so much overthrown
as abandoned willingly, apparently with enthusiasm.”101
Although Dewey was a professor of philosophy, well aware that theories
“must be regarded as hypotheses” to be subjected to “actions which test
them,” so that they are not to be accepted as “rigid dogmas,”102 no such
tests or evidence accompanied his own sweeping pronouncements about
such things as “obvious social evils” in contemporary American society.103
Nor were such tests applied to Professor Dewey’s other sweeping
pronouncements about “our defective industrial regime,”104 his claim that
“Industrial entrepreneurs have reaped out of all proportion to what they
sowed,”105 or that schools needed to offset “the coarseness, blunders, and
prejudices of their elders” that children see at home.106
This casual contempt for ordinary people and their freedom was by no
means confined to John Dewey, or to educators. In the law as well, there
has been the same disregard of other people’s rights and values by
intellectual elites. One of the leading legal authorities of the Progressive era
was Roscoe Pound, who was for 20 years— from 1916 to 1936— Dean of
the Harvard Law School, which turned out many leading legal scholars
promoting an expansive role for judges in “interpreting” the Constitution to
loosen its restrictions on government power, in the cause of what Roscoe
Pound called “social justice,” as far back as 1907.107
Pound invoked the words “science” and “scientific” repeatedly in his
discussions,108 which had neither the procedures nor the precision of
science. There was to be a “science of politics,”109 and a “science of
law.”110 Similarly, Pound repeatedly called for “social engineering,”111 as if
other human beings were to be like inert components of social machinery,
to be constructed by elites into a society with “social justice.”
With Pound, as with Woodrow Wilson, what the public at large wanted
faded into the background. Pound lamented that “we still harp upon the
sacredness of property before the law” and approvingly cited the “progress
of law away from the older individualism” which “is not confined to
property rights.”112
Thus, in 1907 and 1908, Roscoe Pound set forth principles of judicial
activism— going beyond interpreting the law to making social policy—
that would still be dominant, more than a hundred years later, and on into
the present. One of the rationales for such an expanded role for judges has
been the claim that the Constitution is too hard to amend, so that judges
must amend it by “interpretation,” to adapt it to changing times.
Like so much that has been said and repeated endlessly by elites with the
social justice vision, this rationale is contradicted by readily available facts.
The Constitution of the United States was amended 4 times in 8 years—
from 1913 through 1920113— during the heyday of the Progressives, who
claimed that it was nearly impossible to amend the Constitution.114 When
the people wanted the Constitution amended, it was amended. When the
elites wanted it amended, but the people did not, that was not a “problem”
to be “solved.” That was democracy, even if it frustrated elites convinced
that their superior wisdom and virtue should be imposed on others.
Dean Pound simply dismissed as “dogma” the Constitution’s separation
of powers, because the separation of powers would “limit the courts to
interpretation and application” of the law.115 Pound’s own conception of the
role of judges was far more expansive.
As far back as 1908, Pound referred to the desirability of “a living
constitution by judicial interpretation.”116 He called for “an awakening of
juristic activity,” for “the sociological jurist,” and declared that law “must
be judged by the results it achieves.”117 What he called “mechanical”
jurisprudence118 was condemned for “its failure to respond to vital needs of
present-day life.” When law “becomes a body of rules,” that “is the
condition against which sociologists now protest, and protest rightly,”119 he
said. Why judges and sociologists should be making social policy, instead
of people elected as legislators or executives, was not explained.
Whether in law or in other areas, one of the hallmarks of elite
intellectuals’ seeking to preempt other people’s decisions— whether on
public policy or in their own private lives— is a reliance on unsubstantiated
pronouncements, based on elite consensus, treated as if they were equivalent
to documented facts. One revealing sign of this is how often the arguments
of people with other views are not answered with counter-arguments, but
with ad hominem assertions instead. This pattern has persisted for more
than a century, not only in discussions of social justice issues, but also of
other issues— and not only in the United States, but also among other
intellectual elites in countries on the other side of the Atlantic.
From the earliest days of the Progressive era in the United States, one of
the features of Progressives’ conceptions of advanced social thinking was
that automatic punishment of criminals should be replaced, or at least
supplemented, by treatment of the criminal, as if crime were a disease—
and a disease whose “root causes” could be traced to society, as well as to
the criminal. Such ideas can be traced back at least as far as such
eighteenth-century writers as William Godwin in England and the Marquis
de Condorcet in France.120 But these ideas were often presented by
twentieth-century Progressives as new revelations of modern “social
science” and were widely celebrated among intellectual elites.121
In this atmosphere, the Supreme Court of the United States, in a series of
early 1960s cases, began to “interpret” the Constitution as providing newly
discovered “rights” for criminals that had apparently escaped notice before.
These cases included Mapp v. Ohio (1961), Escobedo v. Illinois (1964) and
Miranda v. Arizona (1966). The Supreme Court majority, led by Chief
Justice Earl Warren, were undeterred by bitter dissenting opinions from
other Justices, who objected to both the dangers being created and the lack
of legal basis for the decisions.122
At a 1965 conference of judges and legal scholars, when a former police
commissioner complained about the trend of recent Supreme Court
decisions on criminal law, Justice William J. Brennan and Chief Justice Earl
Warren sat “stony-faced” during his presentation, according to a New York
Times account. But, after a law professor responded with scorn and ridicule
to what the commissioner said, Warren and Brennan “frequently roared
with laughter.”123
A mere police official opposing learned Olympians of the law may have
seemed humorous to elites at this gathering. But some crime statistics might
present a somewhat different perspective. Prior to the Supreme Court’s
remaking of the criminal law, beginning in the early 1960s, the homicide
rate in the United States had been going down for three consecutive decades
— and that rate, in proportion to population, was in 1960 just under half of
what it had been in 1934.124 But almost immediately after the Supreme
Court’s creation of sweeping new “rights” for criminals, the homicide rate
reversed. It doubled from 1963 to 1973.125
No one found that humorous, least of all the mothers, widows and
orphans of homicide victims. Although this was a nationwide trend, it was
especially severe in black communities— places supposedly being helped
by social justice advocates, who were often also advocates of a de-emphasis
of law enforcement and punishment, seeking instead to treat the “root
causes” of crime.
Both before and after the 1960s sudden upsurge in homicides, the
homicide rate among blacks was consistently some multiple of the
homicide rate among whites. In some years there were more black homicide
victims than white homicide victims— in absolute numbers126— even
though the size of the black population was only a fraction of the size of the
white population. This meant that the sudden upsurge in homicides took an
especially heavy toll in black communities.
Supreme Court Justices with lifetime tenure are classic examples of
elites who institutionally pay no price for being wrong— no matter how
wrong, and no matter how high the price paid by others. Chief Justice Earl
Warren did not even pay the price of admitting a mistake. In his memoirs,
he rejected critics of the Supreme Court’s criminal law decisions. He
blamed crime “in our disturbed society” on “the root causes” of crime—
citing such examples as “poverty,” “unemployment,” and “the degradation
of slum life.”127 But he offered no factual evidence that any of these things
had suddenly gotten worse in the 1960s than they had been in the three
preceding decades, when the homicide rate was going down.
IMPLICATIONS
How we see the distribution of consequential knowledge is crucial for
deciding what kinds of decisions make sense, through what kinds of
policies and institutions. We each have our own island of knowledge in a
sea of ignorance. Some islands are larger than others, but no island is as
large as the sea. As Hayek conceived it, the enormously vast amount of
consequential knowledge dispersed among the population of a whole
society makes the differences in the amount of such knowledge between
some people and other people “comparatively insignificant.”128
This conclusion provides little basis for intellectual elites to engage in
wholesale preemption of other people’s decisions, whether these are
decisions about how they live their own lives or decisions about the kinds
of laws the voting public want to live under, and the people they want in
charge of carrying out those laws. Intellectual elites with outstanding
achievements within their own respective specialties may give little thought
to how ignorant they may be on a vast spectrum of other concerns.
Even more dangerous than ignorance, however, is a fallacious certitude,
which can afflict people at all educational levels and all IQ levels. While we
may not see our own fallacies, the saving grace in this situation is that we
can often see other people’s fallacies much more clearly— and they can see
ours. In a world of inevitably fallible human beings, with inevitably
different viewpoints and different fragments of consequential knowledge,
our ability to correct each other can be essential to preventing our making
fatally dangerous mistakes as individuals, or as a society.
The fatal danger of our times today is a growing intolerance and
suppression of both opinions and evidence that differ from the prevailing
ideologies that dominate institutions, ranging from the academic world to
the corporate world, the media and governmental institutions.
Many intellectuals with high accomplishments seem to assume that
those accomplishments confer validity on their notions about a broad swath
of issues, ranging far beyond the scope of their accomplishments. But
stepping outside the scope of one’s expertise can be like stepping off a cliff.
A high IQ and low information can be a very dangerous combination, as
a basis for preempting other people’s decisions— especially when this
preemption takes place in circumstances where there is no price for
surrogate decision-makers to pay for being wrong.
Stupid people can create problems, but it often takes brilliant people to
create a real catastrophe. They have already done that enough times— and
in enough different ways— for us to reconsider, before joining their latest
stampedes, led by self-congratulatory elites, deaf to argument and immune
to evidence.
Chapter 5
Lionel Trilling1
People who may share many of the same basic concerns that social justice
advocates have do not necessarily share the same vision or agenda,
because they do not make the same assumptions about options, causation or
consequences. Iconic free-market economist Milton Friedman, for example,
said:
Clearly, Hayek also saw life in general as unfair, even with the free
markets he advocated. But that is not the same as saying that he saw society
as unfair. To Hayek, society was “an orderly structure,” but not a decision-
making unit, or an institution taking action.4 That is what governments do.5
But neither society nor government comprehends or controls all the many
and highly varied circumstances— including a large element of luck— that
can influence the fate of individuals, classes, races or nations.
Even within the same family, as we have seen, it matters whether you
were the first-born child or the last-born child. When the first-born child in
five-child families constituted 52 percent of the children from such families
to become National Merit Scholarship finalists, while the fifth-born child in
those families became the finalist just 6 percent of the time,6 that is a
disparity larger than most disparities between the sexes or the races.
In a growing economy, it also matters which generation of the family
you were born into.7 A facetious headline in The Economist magazine—
“Choose your parents wisely”8— highlighted another important truth about
inequalities, illustrated with this impossible advice. Circumstances beyond
our control are major factors in economic and other inequalities. Trying to
understand causation is not necessarily the same as looking for someone to
blame.
The totality of circumstances around us Hayek called a “cosmos”9 or
universe. In this context, what others call “social justice” might more
fittingly be called “cosmic justice,”10 since that is what would be required
to produce the results sought by many social justice advocates.
This is not simply a question about different names. It is a more
fundamental question about what we can and cannot do— and at what costs
and risks. When there are “differences in human fates for which clearly no
human agency is responsible,”11 as Hayek put it, we cannot demand justice
from the cosmos. No human beings, either singly or collectively, can
control the cosmos— that is, the whole universe of circumstances
surrounding us and affecting everyone’s chances in life. The large element
of luck in all our lives means that neither society nor government has either
causal control or moral responsibility extending to everything that has gone
right or wrong in everybody’s life.
Some of us may be able to think of some particular individual, whose
appearance in our lives at one particular juncture altered the trajectory of
our lives. There may be more than one such person, at different stages of
our lives, who changed our prospects in different ways, for better or worse.
Neither we nor surrogate decision-makers control such things. Those who
imagine that they can— that they are either a “self-made man” or surrogate
saviors of other people or the planet— operate in dangerous territory,
littered with human tragedies and national catastrophes.
If the world around us happened to provide equal chances for all people
in all endeavors— whether as individuals or as classes, races or nations—
that might well be seen as a world far superior to the world we actually see
around us today. Whether called social justice or cosmic justice, that might
be seen as ideal by many people who agree on little else. But our ideals tell
us nothing about our capabilities and their limits— or the dangers of trying
to go beyond those limits.
As just one example, from the earliest American Progressives onward,
there has been an ideal of applying criminal laws in a manner
individualized to the criminal, rather than generalized from the crime.12
Before even considering whether this is desirable, there is first the question
of whether human beings are even capable of doing such a thing. Where
would officials acquire such sweeping, intimate and accurate knowledge
about a stranger, much less have the superhuman wisdom to apply it in the
incalculable complications of life?
A murderer may have had an unhappy childhood, but does that justify
gambling other people’s lives, by turning him loose among them, after some
process that has been given the name “rehabilitation”? Are high-sounding
notions and fashionable catchwords important enough to risk the lives of
innocent men, women and children?
F.A. Hayek’s key insight was that all the consequential knowledge
essential to the functioning of a large society exists in its totality nowhere
in any given individual, class or institution. Therefore the functioning and
survival of a large society requires coordination among innumerable people
with innumerable fragments of consequential knowledge. This put Hayek in
opposition to various systems of centrally directed control, whether a
centrally planned economy, systems of comprehensive surrogate decision-
making in the interests of social justice, or presumptions of “society” being
morally responsible for all its inhabitants’ good or bad fates, when nobody
has the requisite knowledge for such responsibility.
The fact that we cannot do everything does not mean that we should do
nothing. But it does suggest that we need to make very sure that we have
our facts straight, so that we do not make things worse, while trying to
make them better. In a world of ever-changing facts and inherently fallible
human beings, that means leaving everything we say or do open to
criticism. Dogmatic certitudes and intolerance of dissent have often led to
major catastrophes, and nowhere more so than in the twentieth century. The
continuation and escalation of such practices in the twenty-first century is
by no means a hopeful sign.
Back in the eighteenth century, Edmund Burke made a fundamental
distinction between his ideals and his policy advocacies. “Preserving my
principles unshaken,” he said, “I reserve my activity for rational
endeavours.”13 In other words, having high ideals did not imply carrying
idealism to the extreme of trying to impose those ideals at all costs and
oblivious to all dangers.
Pursuing high ideals at all costs has already been tried, especially in
twentieth-century creations of totalitarian dictatorships, often based on
egalitarian goals with the highest moral principles. But powers conferred
for the finest reasons can be used for the worst purposes— and, beyond
some point, powers conferred cannot be taken back. Milton Friedman
clearly understood this:
F.A. Hayek— having lived through the era of the rise of totalitarian
dictatorships in twentieth-century Europe— and having witnessed how it
happened— arrived at essentially the same conclusions. But he did not
regard social justice advocates as evil people, plotting to create totalitarian
dictatorships. Hayek said that some of the leading advocates of social
justice included individuals whose unselfishness was “beyond question.”15
Hayek’s argument was that the kind of world idealized by social justice
advocates— a world with everyone having equal chances of success in all
endeavors— was not only unattainable, but that its fervent but futile pursuit
can lead to the opposite of what its advocates are seeking. It was not that
social justice advocates would create dictatorships, but that their passionate
attacks on existing democracies could weaken those democracies to the
point where others could seize dictatorial powers.
Social justice advocates themselves obviously do not share the
conclusions of their critics, such as Friedman and Hayek. But the
differences in their conclusions are not necessarily differences in
fundamental moral values. Their differences tend to be at the level of
fundamentally different beliefs about circumstances and assumptions
about causation that can produce very different conclusions. They envision
different worlds, operating on different principles, and describe these
worlds with words that have different meanings within the framework of
different visions.
When visions and vocabularies differ so fundamentally, an examination
of facts offers at least a hope of clarification.
Merit
Opponents of group preferences, such as affirmative action for hiring or
for college admissions, often say that each individual should be judged by
that individual’s own merit. In most cases, “merit” in this context seems to
mean individual capabilities that are relevant to the particular endeavor.
Merit in this sense is simply a factual question, and the validity of the
answer depends on the predictive validity of the criteria used to compare
different applicants’ capabilities.
Others, however— including social justice advocates— see not only a
factual issue, but also a moral issue, in the concept of merit. As far back as
the eighteenth century, social justice advocate William Godwin was
concerned not only about unequal outcomes, but especially “unmerited
advantage.”16 Twentieth-century Fabian socialist pioneer George Bernard
Shaw likewise said that “enormous fortunes are made without the least
merit.”17 He noted that not only the poor, but also many well-educated
people, “see successful men of business, inferior to themselves in
knowledge, talent, character, and public spirit, making much larger
incomes.”18
Here merit is no longer simply a factual question about who has the
particular capabilities relevant to success in a particular endeavor. There is
now also a moral question as to how those capabilities were acquired—
whether they were a result of some special personal exertions or were just
some “unmerited advantage,” perhaps due to being born into unusually
more favorable circumstances than the circumstances of most other people.
Merit in this sense, with a moral dimension, raises very different
questions, which can have very different answers. Do people born into
certain German families or certain German communities deserve to inherit
the benefits of the knowledge, experience and insights derived from more
than a thousand years of Germans brewing beer? Clearly, they do not! It is a
windfall gain. But, equally clearly, their possession of this valuable
knowledge is a fact of life today, whether we like it or not. Nor is this kind
of situation peculiar to Germans or to beer.
It so happens that the first black American to become a general in the
U.S. Air Force— General Benjamin O. Davis, Jr.— was the son of the first
black American to become a general in the U.S. Army, General Benjamin
O. Davis, Sr. Did other black Americans— or white Americans, for that
matter— have the same advantage of growing up in a military family,
automatically learning, from childhood onward, about the many aspects of a
career as a senior military officer?
Nor was this situation unique. One of the most famous American
generals in World War II— and one of the most famous in American
military history— was General Douglas MacArthur. His father was a young
commanding officer in the Civil War, where his performance on the
battlefield brought him the Congressional Medal of Honor. He ended his
long military career as a general.
None of this is peculiar to the military. In the National Football League,
quarterback Archie Manning had a long and distinguished career, in which
he threw more than a hundred touchdown passes.19 His sons— Peyton
Manning and Eli Manning— also had long and distinguished careers as
NFL quarterbacks, which in their cases included winning Super Bowls. Did
other quarterbacks, not having a father who had been an NFL quarterback
before them, have equal chances? Not very likely. But would football fans
rather watch other quarterbacks who were not as good, but who had been
chosen in order to promote social justice?
The advantages that some people have, in a given endeavor, are not just
disadvantages to everyone else. These advantages also benefit all the people
who pay for the product or service provided by that endeavor. It is not a
zero-sum situation. Mutual benefit is the only way the endeavor can
continue, in a competitive market, with vast numbers of people free to
decide what they are willing to pay for. The losers are the much smaller
number of people who wanted to supply the same product or service. But
the losers were unable to match what the successful producers offered,
regardless of whether the winners’ success was due to skills developed at
great sacrifice or skills that came their way from just happening to be in the
right place at the right time.
When computer-based products spread around the world, both their
producers and their consumers benefitted. It was bad news for
manufacturers of competing products such as typewriters, or the slide rules
that were once standard equipment used by engineers for making
mathematical calculations. Small computerized devices could make those
calculations faster, simpler and with a vastly larger range of applications.
But, in a free-market economy, progress based on new advances inevitably
means bad news for those whose goods or services are no longer the best.
Demographic “inclusion” requires some surrogate decision-makers,
empowered to overrule what consumers want.
A similar situation exists in the military. A country fighting for its life,
on the battlefield, cannot afford the luxury of choosing its generals on the
basis of demographic representation— “looking like America”— rather
than on the basis of military skills, regardless of how those skills were
acquired. Not if the country wants to win and survive. That is especially so
if the country wants to win its military victories without more losses of
soldiers’ lives than necessary. In that case, it cannot put generals in charge
of those soldiers when these are not the best generals available.
In the social justice literature, unmerited advantages tend to be treated as
if they are deductions from the well-being of the rest of the population. But
there is no fixed or predestined amount of well-being, whether measured in
financial terms or in terms of spectators enjoying a sport, or soldiers
surviving a battle. When President Barack Obama said: “The top 10 percent
no longer takes in one-third of our income, it now takes half,”20 that would
clearly be a deduction from other people’s incomes if there were a fixed or
predestined amount of total income.
This is not an incidental subtlety. It matters greatly whether people with
high incomes are adding to, or subtracting from, the incomes of the rest
of the population. Insinuations are a weak basis for making decisions about
a serious issue. It is too important to have that issue decided— or
obfuscated— by artful words. In plain English: Is the average American’s
income higher or lower because of the products created and sold by some
multi-billionaire?
Again, there is no fixed or predestined total amount of income or wealth
to be shared. If some people are creating more wealth than they are
receiving as income, then they are not making other people poorer. But if
they are creating products or services that are worth less than the income
they receive, then equally clearly they are making other people poorer. But,
although anyone can charge any price they want to, for whatever they are
selling, they are not likely to find people who will pay more than the
product or service is worth to themselves.
Arguing as if some people’s high incomes were deducted from some
fixed or predestined total income— leaving less for others— may be clever.
But cleverness is not wisdom, and artful insinuations are no substitute for
factual evidence, if your goal is knowing the facts. But, if your goals are
political or ideological, there is no question that one of the most politically
successful messages of the twentieth century was that the rich have gotten
rich by taking from the poor.
The Marxian message of “exploitation” helped sweep communists into
power in countries around the world in the twentieth century, at a pace and
on a scale seldom seen in history. There is clearly a political market for that
message, and communists are just one of the ideological groups to use it
successfully for their own purposes, despite how disastrously that turned
out for millions of other human beings living under communist
dictatorships.
The very possibility that poor Americans, for example, are having a
rising standard of living because of progress created by people who are
getting rich— as suggested by Herman Kahn21— would be anathema to
social justice advocates. But it is by no means obvious that empirical tests
of that hypothesis would vindicate those who advocate social justice. It
seems even less likely that social justice advocates would put that
hypothesis to an empirical test.
For people seeking facts, rather than political or ideological goals, there
are many factual tests that might be applied, in order to see if the wealth of
the wealthy is derived from the poverty of the poor. One way might be to
see if countries with many billionaires— either absolutely or relative to the
size of the population— have higher or lower standards of living among the
rest of their people. The United States, for example, has more billionaires
than there are in the entire continent of Africa plus the Middle East.22 But
even Americans living in conditions officially defined as poverty usually
have a higher standard of living than that of most of the people in Africa
and the Middle East.
Other factual tests might include examining the history of prosperous
ethnic minorities, who have often been depicted as “exploiters” in various
times and places over the years. Such minorities have, in many cases over
the years, been either expelled by governments or driven out of particular
cities or countries by mob violence, or both. This has happened to Jews a
number of times over the centuries in various parts of Europe.23 The
overseas Chinese have had similar experiences in various southeast Asian
countries.24 So have Indians and Pakistanis expelled from Uganda in East
Africa.25 So have the Chettiar money-lenders in Burma, after that country’s
laws confiscating much of their property in 1948 drove many of them out
of Burma.26
The Ugandan economy collapsed in the 1970s, after the government
expelled Asian business owners,27 who had supposedly been making
Africans worse off economically. Interest rates in Burma went up, not
down, after the Chettiars were gone.28 It was much the same story in the
Philippines, where 23,000 overseas Chinese were massacred in the
seventeenth century, after which there were shortages of the goods
produced by the Chinese.29
In centuries past, it was not uncommon for Jews in Europe to be driven
out— denounced as “exploiters” and “bloodsuckers”— from various cities
and countries, whether forced out by government edict or mob violence, or
both. What is remarkable is how often Jews were in later years invited back
to some of the places from which they had been expelled.30
Apparently some of those who drove them out discovered that the
country was worse off economically after the Jews were gone.
Although Catherine the Great banned Jews from immigrating into
Russia, in her later efforts to attract much-needed foreign skills from
Western Europe, including “some merchant people,” she wrote to one of her
officials that people in the occupations being sought should be given
passports to Russia, “not mentioning their nationality and without enquiring
into their confession.” To the formal Russian text of this message she added
a postscript in German saying, “If you don’t understand me, it will not be
my fault” and “keep all this secret.”31
In the wake of this message, Jews began to be recruited as immigrants to
Russia— even though, as a historian has noted, “throughout the whole
transaction any reference to Jewishness was scrupulously avoided.”32 In
short, even despotic rulers may seek to evade their own policies, when it is
impolitic to repeal those policies, and counterproductive to follow them.
These historical events are by no means the only factual tests that could
be used to determine whether more prosperous people are making other
people less prosperous. Nor are these necessarily the best factual tests. But
the far larger point is that a prevailing social vision does not have to
produce any factual test, when rhetoric and repetition can be sufficient to
accomplish their aims, especially when alternative views can be ignored
and/or suppressed. It is that suppression which is a key factor— and it is
already a large and growing factor in academic, political and other
institutions in our own times.
Today it is possible, even in our most prestigious educational institutions
at all levels, to go literally from kindergarten to a Ph.D., without ever
having read a single article— much less a book— by someone who
advocates free-market economies or who opposes gun control laws.
Whether you would agree with them or disagree with them, if you read
what they said, is not the issue. The far larger issue is why education has so
often become indoctrination— and for whose benefit.
The issue is not even whether what is being indoctrinated is true or false.
Even if we were to assume, for the sake of argument, that everything with
which students are being indoctrinated today is true, these issues of today
are by no means necessarily the same as the issues that are likely to arise
during the half-century or more of life that most students have ahead of
them after they have finished their education. What good would it do them
then, to have the right answers to yesterday’s questions?
What they will need then, in order to sort out the new controversial
issues, is an education that has equipped them with the intellectual skills,
knowledge and experience to confront and analyze opposing views— and
subject those views to scrutiny and systematic analysis. That is precisely
what they do not get when being indoctrinated with whatever is currently in
vogue today.
Such “education” sets up whole generations to become easy prey for
whatever clever demagogues come along, with heady rhetoric that can
manipulate people’s emotions. As John Stuart Mill put the issue, long ago:
He who knows only his own side of the case, knows little of that…
Nor is it enough that he should hear the arguments of adversaries
from his own teachers, presented as they state them, and
accompanied by what they offer as refutations. That is not the way to
do justice to the arguments, or bring them into real contact with his
own mind. He must be able to hear them from persons who actually
believe them; who defend them in earnest, and do their very utmost
for them. He must know them in their most plausible and persuasive
form…33
What Mill described is precisely what most students today do not get, in
even our most prestigious educational institutions. What they are more
likely to get are prepackaged conclusions, wrapped securely against the
intrusion of other ideas— or of facts inconsistent with the prevailing
narratives.
In the prevailing narratives of our time, someone else’s good luck is
your bad luck— and a “problem” to be “solved.” But when someone has,
however undeservedly, acquired some knowledge and insights that can be
used to design a product which enables billions of people around the world
to use computers— without knowing anything about the specifics of
computer science— that is a product which can, over the years, add trillions
of dollars’ worth of wealth to the world’s existing supply of wealth. If the
producer of that product becomes a multi-billionaire by selling it to those
billions of people, that does not make those people poorer.
People like British socialist George Bernard Shaw may lament that the
producer of this product may not have either the academic credentials or the
personal virtues which Shaw seems to attribute to himself, and to others like
himself. But that is not what the buyers of the computerized product are
paying for, with their own money. Nor is it obvious why a third party's
laments should be allowed to affect transactions which are not doing the
third party any harm. Nor is the general track record of third-party
preemptions encouraging.
None of this suggests that businesses have never done anything wrong.
Sainthood is not the norm in business, any more than in politics, in the
media or on academic campuses. That is why we have laws. But it is not a
reason to create ever more numerous and sweeping laws to put ever more
power in the hands of people who pay no price for being wrong, regardless
of how high a price is paid by others who are subject to their power.
Slippery words like “merit”— with multiple and conflicting meanings—
can make it hard to clearly understand what the issues are, much less see
how to resolve them.
Racism
“Racism” may be the most powerful word in the social justice
vocabulary. There is no question that racism has inflicted an enormous
amount of needless suffering on innocent people, punctuated by
unspeakable horrors, such as the Holocaust.
Racism might be analogized to some deadly pandemic disease. If so, it
may be worth considering the consequences of responding to pandemics in
different ways. We certainly cannot simply ignore the disease and hope for
the best. But we cannot go to the opposite extreme, and sacrifice every
other concern— including other deadly diseases— in hopes of reducing
fatalities from the pandemic. During the Covid-19 pandemic, for example,
death rates from other diseases went up,34 because many people feared
going to medical facilities, where they might catch Covid from other
patients.
Even the most terrible pandemics can subside or end. At some point,
continued preoccupation with the pandemic disease can then cause more
dangers and death from other diseases, and from other life stresses resulting
from continued restrictions that may have made sense when the pandemic
was in full force, but are counterproductive on net balance afterwards.
Everything depends on what the specific facts are at a given time and
place. That is not always easy to know. It may be especially difficult to
know, when special interests have benefitted politically or financially from
the pandemic restrictions, and therefore have every incentive to promote the
belief that those restrictions are still urgently needed.
Similarly, it can be especially hard to know about the current incidence
and consequences of racism, when racists do not publicly identify
themselves. Moreover, people who have incentives to maximize fears of
racism include politicians seeking to win votes by claiming to offer
protection from racists, or leaders of ethnic protest movements who can use
fears of racists to attract more followers, more donations and more power.
No sane person believes that there is zero racism in American society, or
in any other society. Here it may be worth recalling what Edmund Burke
said, back in the eighteenth century: “Preserving my principles unshaken, I
reserve my activity for rational endeavours.”35 Our principles can reject
racism completely. But neither a racial minority nor anyone else has
unlimited time, unlimited energy or unlimited resources to invest in seeking
out every possible trace of racism— or to invest in the even less promising
activity of trying to morally enlighten racists.
Even if, by some miracle, we could get to zero racism, we already know,
from the history of American hillbillies— who are physically
indistinguishable from other white people, and therefore face zero racism—
that even this is not enough to prevent poverty. Meanwhile, black married-
couple families, who are not exempt from racism, have nevertheless had
poverty rates in single digits, every year for more than a quarter of a
century.36 We also know that racists today cannot prevent black young
people from becoming pilots in the Air Force, or even generals in the Air
Force, nor from becoming millionaires, billionaires or President of the
United States.
Just as we need to recognize when the power of a pandemic has at least
subsided, so that we can use more of our limited time, energy and resources
against other dangers, so we also need to pay more attention to other
dangers besides racism. That is especially so for the younger generation,
who need to deal with the problems and dangers actually confronting them,
rather than remain fixated on the problems and dangers of the generations
before them. If racists cannot prevent today’s minority young people from
becoming pilots, the teachers unions can— by denying them a decent
education, in schools whose top priorities are iron-clad job security for
teachers, and billions of dollars in union dues for teachers unions.37
It is by no means certain whether the enemies of American minorities
are able to do them as much harm as their supposed “friends” and
“benefactors.” We have already seen some of the harm that minimum wage
laws have done, by denying black teenagers the option of taking jobs that
employers are willing to offer, at pay that teenagers are willing to accept,
because unaffected third parties choose to believe that they understand the
situation better than all the people directly involved.
Another “benefit” for minorities, from those with the social justice
vision and agenda, is “affirmative action.” This is an issue often discussed
in terms of the harm done to people who would have gotten particular jobs,
college admissions or other benefits, if these had been awarded on the basis
of qualifications, rather than demographic representation. But the harm
done to the supposed beneficiaries also needs to be understood— and that
harm can be even worse.
This possibility especially needs to be examined, because it goes
completely counter to the prevailing social justice agenda and its narrative
about the sources of black Americans’ advancement. In that narrative,
blacks’ rise out of poverty was due to the civil rights laws and social
welfare policies of the 1960s, including affirmative action. An empirical
test of that narrative is long overdue.
Affirmative Action
In the prevailing narrative on the socioeconomic progress of black
Americans, statistical data have been cited, showing declining proportions
of the black population living in poverty after the 1960s, and rising
proportions of the black population employed in professional occupations,
as well as having rising incomes. But, as with many other statements about
statistical trends over time, the arbitrary choice of which year to select as
the beginning of the statistical evaluation can be crucial in determining the
validity of the conclusions.
If the statistical data on the annual rate of poverty among black
Americans were to be presented, beginning in 1940— that is, 20 years
before the civil rights laws and expanded social welfare state policies of the
1960s— the conclusions would be very different.
These data show that the poverty rate among blacks fell from 87 percent
in 1940 to 47 percent over the next two decades38— that is, before the
major civil rights laws and social welfare policies of the 1960s. This trend
continued after the 1960s, but did not originate then and did not accelerate
then. The poverty rate among blacks fell an additional 17 points, to 30 percent in 1970— a rate of decline only slightly lower than that in the two preceding decades, but certainly not higher. The black poverty rate fell yet again
during the 1970s, from 30 percent in 1970 to 29 percent in 1980.39 This
one-percentage-point decline in poverty was clearly much less than in the
three preceding decades.
Where does affirmative action fit in with this history? The first use of the
phrase “affirmative action” in a Presidential Executive Order was by
President John F. Kennedy in 1961. That Executive Order said that federal
contractors should “take affirmative action to ensure that applicants are
employed, and that employees are treated during employment, without
regard to their race, creed, color, or national origin.”40 In other words, at
that point affirmative action meant equal opportunity for individuals, not
equal outcomes for groups. Subsequent Executive Orders by Presidents
Lyndon B. Johnson and Richard Nixon made numerical group outcomes the
test of affirmative action by the 1970s.
With affirmative action now transformed from equal individual
opportunity to equalized group outcomes, many people saw this as a more
beneficial policy for blacks and other low-income racial or ethnic groups to
whom this principle applied. Indeed, it was widely regarded as axiomatic
that this would better promote their progress in many areas. But the one-
percentage-point decline in black poverty during the 1970s, after
affirmative action meant group preferences or quotas, goes completely
counter to the prevailing narrative.
Over the years, as controversies raged about affirmative action as group
preferences, the prevailing narrative defended affirmative action as a major
contributor to black progress. As with many other controversial issues,
however, a consensus of elite opinion has been widely accepted, with little regard for the vast amount of empirical evidence to the contrary. Best-selling
author Shelby Steele, whose incisive books have explored the rationales and
incentives behind support for failed social policies,41 cited an encounter he
had with a man who had been a government official involved in the 1960s
policies:
“Damn it, we saved this country!” he all but shouted. “This country
was about to blow up. There were riots everywhere. You can stand
there now in hindsight and criticize, but we had to keep the country
together, my friend.”43
From a factual standpoint, this former 1960s official had the sequence
completely wrong. Nor was he unique in that. The massive ghetto riots
across the nation began during the Lyndon Johnson administration, on a
scale unseen before.44 The riots subsided after that administration ended,
and its “war on poverty” programs were repudiated by the next
administration. Still later, during the eight years of the Reagan
administration, which rejected that whole approach, there were no such
massive waves of riots.
Of course politicians have every incentive to depict black progress as
something for which politicians can take credit. So do social justice
advocates, who supported these policies. But that narrative enables some
critics to complain that blacks ought to lift themselves out of poverty, as
other groups have done. Yet the cold facts demonstrate that this is largely
what blacks did, during decades when blacks did not yet have even equal
opportunity, much less group preferences.
These were decades when neither the federal government, the media, nor
intellectual elites paid anything like the amount of attention to blacks that
they did from the 1960s on. As for the attention paid to blacks by
governments in Southern states during the 1940s and 1950s, that was
largely negative, in accordance with the racially discriminatory laws and
policies at that time.
Among the ways by which many blacks escaped from poverty in the
1940s and 1950s was migrating out of the South, gaining better economic
opportunities for adults and better education for their children.45 The Civil
Rights Act of 1964 was an overdue major factor in ending the denial of
basic Constitutional rights to blacks in the South.46 But there is no point
trying to make that also the main source of the black rise out of poverty.
The rate of rise of blacks into the professions more than doubled from 1954
to 196447— that is, before the historic Civil Rights Act of 1964. Nor can
the political left act as if the Civil Rights Act of 1964 was solely their work.
The Congressional Record shows that a higher percentage of Republicans
than Democrats voted for that Act.48
In short, during the decades when the rise of black Americans out of
poverty was greatest, the causes of that rise were most like the causes of the
rise of other low-income groups in the United States, and in other countries
around the world. That is, it was primarily a result of the individual
decisions of millions of ordinary people, on their own initiative, and owed
little to charismatic group leaders, to government programs, to intellectual
elites or to media publicity. It is doubtful if most Americans of that earlier
era even knew the names of leaders of the most prominent civil rights
organizations of that era.
Affirmative action in the United States, like similar group preference
policies in other countries, seldom provided much benefit for people in
poverty.49 A typical teenager in a low-income minority community in the
United States, having usually gotten a very poor education in such
neighborhoods, is unlikely to be able to make use of preferential admissions
to medical schools, when it would be a major challenge just to graduate
from an ordinary college. In a much poorer country, such as India, it could
be an even bigger challenge for a rural youngster from one of the
“scheduled castes”— formerly known as “untouchables.”50
Both in the United States and in other countries with group preference
policies, benefits created for poorer groups have often gone
disproportionately to the more prosperous members of these poorer
groups51— and sometimes to people more prosperous than the average
member of the larger society.52
The central premise of affirmative action is that group “under-
representation” is the problem, and proportional representation of groups is
the solution. This might make sense if all segments of a society had equal
capabilities in all endeavors. But neither social justice advocates, nor
anyone else, seems able to come up with an example of any such society
today, or in the thousands of years of recorded history. Even highly
successful groups have seldom been highly successful in all endeavors.
Asian Americans and Jewish Americans are seldom found among the leading athletic stars, nor are German Americans often found among charismatic politicians.
At the very least, it is worth considering such basic facts as the extent to
which affirmative action has been beneficial or harmful, on net balance, for
those it was designed to help— in a world where specific developed
capabilities are seldom equal, even when reciprocal inequalities are
common. One example is the widespread practice of admitting members of
low-income minority groups to colleges and universities under less
stringent requirements than other students have to meet.
Such affirmative action in college admissions policies has been widely
justified on the ground that few students educated in the public schools in
low-income minority neighborhoods have the kind of test scores that would
get them admitted to top-level colleges and universities otherwise. So group
preferences in admissions are thought to be a solution.
Despite the implicit assumption that students will get a better education
at a higher-ranked institution, there are serious reasons to doubt it.
Professors tend to teach at a pace, and at a level of complexity, appropriate
for the particular kinds of students they are teaching. A student who is fully
qualified to be admitted to many good quality colleges or universities can
nevertheless be overwhelmed by the pace and complexity of courses taught
at an elite institution, where most of the students score in the top ten percent
nationwide— or even the top one percent— on the mathematics and verbal
parts of the Scholastic Aptitude Test (SAT).
Admitting a student who scores at the 80th percentile to such an
institution, because that student is a member of a minority group, is no
favor. It can turn someone who is fully qualified for success into a
frustrated failure. An intelligent student who scored at the 80th percentile in
mathematics can find the pace of math courses far too fast to keep up with,
while the professor’s brief explanations of complex principles may be
readily understood by the other students in the class, who scored at the 99th
percentile. They may already have learned half this material in high school.
It can be much the same story with the amount and complexity of readings
assigned to students in an elite academic institution.
None of this is news to people familiar with top elite academic
institutions. But many young people from low-income minority communities may be the first members of their families to go to college. When
such a person is being congratulated for having been accepted into some
big-name college or university, they may not see the great risks there may
be in this situation. Given the low academic standards in most public
schools in low-income minority communities, the supposedly lucky student
may have been getting top grades with ease in high school, and can be
heading for a nasty shock when confronted with a wholly different situation
at the college level.
What is at issue is not whether the student is qualified to be in college,
but whether that student’s particular qualifications are a match or a
mismatch with the qualifications of the other students at the particular
college or university that grants admission. Empirical evidence suggests
that this can be a crucial factor.
In the University of California system, under affirmative action
admissions policies, the black and Hispanic students admitted to the top-
ranked campus at Berkeley had SAT scores just slightly above the national
average. But the white students admitted to UC Berkeley had SAT scores
more than 200 points higher— and the Asian American students had SAT
scores somewhat higher than the whites.53
In this setting, most black students failed to graduate— and, as the
number of black students admitted increased during the 1980s, the number
graduating actually decreased.54
California voters voted to put an end to affirmative action admissions in
the University of California system. Despite dire predictions that there
would be a drastic reduction in the number of minority students in the UC
system, there was in fact very little change in the total number of minority
students admitted to the system as a whole. But there was a radical
redistribution of minority students among the different campuses across the
state.
There was a drastic reduction in the number going to the two top-ranked
campuses— UC Berkeley and UCLA. Minority students were now going to
those particular UC campuses where the other students had academic
backgrounds more similar to their own, as measured by admissions test
scores. Under these new conditions, the number of black and Hispanic
students graduating from the University of California system as a whole
rose by more than a thousand students over a four-year span.55 There was
also an increase of 63 percent in the number graduating in four years with a
grade point average of 3.5 or higher.56
The minority students who fail to graduate under affirmative action
admissions policies are by no means the only ones who are harmed by
being admitted to institutions geared to students with better pre-college
educational backgrounds. Many minority students who enter college
expecting to major in challenging fields like science, technology,
engineering or mathematics— called STEM fields— are forced to abandon
such tough subjects and concentrate in easier fields. After affirmative action
in admissions was banned in the University of California system, not only
did more minority students graduate, the number graduating with degrees in
the STEM fields rose by 51 percent.57
What is crucial from the standpoint of minority students being able to
survive and flourish academically is not the absolute level of their pre-
college educational qualifications, as measured by admissions test scores,
but the difference between their test scores and the test scores of the other
students at the particular institutions they attend. Minority students who
score well above the average of American students as a whole on college
admissions tests can nevertheless be turned into failures by being admitted
to institutions where the other students score even farther above the
average of American students as a whole.
Data from the Massachusetts Institute of Technology illustrate this situation. Black students at MIT had average SAT math scores at the 90th percentile. But, although these students were in the top ten percent of American students in mathematics, they were in the bottom ten percent at MIT, where students' math scores were at the 99th percentile. The outcome was that 24 percent of these extremely
well-qualified black students failed to graduate at MIT, and those who did
graduate were concentrated in the lower half of their class.58 In most
American academic institutions, these same black students would have been
among the best students on campus.
Some people might say that even those students who were concentrated
in the lower half of their class at MIT gained the advantage of having been
educated at one of the leading engineering schools in the world. But this is
implicitly assuming that students automatically get a better education at a
higher-ranked institution. However, we cannot dismiss the possibility that
these students may learn less where the pace and complexity of the
education is geared to students with an extraordinarily stronger pre-college
educational background.
To test this possibility, we can turn to some fields, such as medicine and
the law, where there are independent tests of how much the students have
learned, after they have completed their formal education. The graduates of
both medical schools and law schools cannot become licensed to practice
their professions without passing these independent tests.
A study of five state-run medical schools found that the black-white
difference in passing the U.S. Medical Licensing Examination was
correlated with the black-white difference on the Medical College
Admission Test before entering medical school.
In other words, blacks trained at medical schools where there was little
difference between black and white students— in their scores on the test
that got them admitted to medical school— had less difference between the
races in their rates of passing the Medical Licensing test years later, after
graduating from medical school.59 The success or failure of blacks on these tests after graduation was correlated more with whether they were trained with other students whose admissions test scores were similar to theirs than with whether the medical school was highly ranked or lower ranked. Apparently they learned better where they were not
mismatched by affirmative action admissions policies.
There were similar results in a comparison of law school graduates who
took the independent bar examination, in order to become licensed as
lawyers. George Mason University law school’s student body as a whole
had higher law school admissions test scores than the admissions test scores
of the student body at the Howard University law school, a predominantly
black institution. But the black students at both institutions had law school
admissions test scores similar to each other. The net result was that black
students entered the law school at George Mason University with
admissions test scores lower than those of the other law school students there.
But apparently not so at Howard University.
Data on the percentage of black students admitted to each law school
who both graduated from law school and passed the bar examination on the
first try showed that 30 percent of the black students at George Mason
University law school did so— compared to 57 percent of the black
students from the Howard University law school who did so.60 Again, the
students who were mismatched did not succeed as well as those who were
not. As with the other examples, the students who were not mismatched
seemed to learn better when taught in classes where the other students had
educational preparation similar to their own.
These few examples need not be considered definitive. But they provide
data that many other institutions refuse to release. When UCLA Professor
Richard H. Sander sought to get California bar examination data, in order to
test whether affirmative action admissions policies produced more black
lawyers or fewer black lawyers, a lawsuit was threatened if the California
Bar Association released that data.61 The data were not released. Nor is this
an unusual pattern. Academic institutions across the country that proclaim the benefits of affirmative action “diversity” refuse to release data that
would put such claims to the test.62
A study that declared affirmative action admissions policies a success—
The Shape of the River by William Bowen and Derek Bok— was widely
praised in the media, but its authors refused to let critics see the raw data
from which they reached conclusions very different from the conclusions of
other studies— based on data these other authors made available.63
Moreover, other academic scholars found much to question about the
conclusions reached by former university presidents Bowen and Bok.64
Where damaging information about the actual consequences of
affirmative action admissions policies is brought to light and creates a
scandal, the response has seldom been to address the issue, but instead to
denounce the person who revealed the scandalous facts as a “racist.” This
was the response when Professor Bernard Davis of the Harvard medical
school said in the New England Journal of Medicine that black students
there, and at other medical schools, were being granted diplomas “on a
charitable basis.” He called it “cruel” to admit students unlikely to meet
medical school standards, and even more cruel “to abandon those standards
and allow the trusting patients to pay for our irresponsibility.”65
Although Professor Davis was denounced as a “racist,” black economist
Walter E. Williams had learned of such things elsewhere,66 and a private communication from an official at the Harvard medical school some years earlier had indicated that such things were being proposed.67
Similarly, when a student at Georgetown University revealed data
showing that the median score at which black students were admitted to that
law school was lower than the test score at which any white student was
admitted, the response was to denounce him as a “racist,” rather than
concentrating on the serious issue raised by that revelation.68 That median
score, incidentally, was at the 70th percentile, so these were not “unqualified” students, but students who would probably have had a better chance of success at some other law schools, and later when they had to pass a bar exam in order to become lawyers.
Being a failure at an elite institution does a student no good. But the
tenacity with which academic institutions fiercely resist anything that might
force them to abandon counterproductive admissions practices suggests that
these practices may be doing somebody some good. Even after California voters voted to end affirmative action admissions practices in the University of California system, there were continuing efforts to circumvent this prohibition.69 Why? What good does having a visible minority student
presence on campus do, if most of them do not graduate?
One clue might be what many colleges have long done with their athletic
teams in basketball and football, which can bring in millions of dollars in
what are classified as “amateur” sports. Some successful college football
coaches have incomes higher than the incomes of their college or university
presidents. But the athletes on their teams have been paid nothing70 for
spending years providing entertainment for others, at the risk of bodily
injuries— and the perhaps greater and longer-lasting risk to their character,
from spending years pretending to be getting an education, when many are
only doing enough to maintain their eligibility to play. An extremely small
percentage of college athletes in basketball and football go on to a career in
professional sports.
A disproportionate number of college basketball and football stars are
black71— and academic institutions have not hesitated to misuse them in
these ways. So we need not question whether these academic institutions
are morally capable of bringing minority youngsters on campus to serve the
institution’s own interests. Nor need we doubt academics’ verbal talents for
rationalization, whether trying to convince others or themselves.72
The factual question is simply whether there are institutional interests
being served by having a visible demographic representation of minority
students on campus, whether those students get an education and graduate
or not. The hundreds of millions of dollars of federal money that comes into an academic institution annually can be put at risk if ethnic minorities are seriously “under-represented” among the students, since that raises the prospect of under-representation being equated with racial discrimination, which can be a legal threat to vast amounts of government money.
Nor is this the only outside pressure on academic institutions to continue
affirmative action admissions policies that are damaging to the very groups
supposedly being favored. George Mason University’s law school was
threatened with losing its accreditation if it did not continue admitting
minority students who did not have qualifications as high as other students,
even though data showed that this was not in the minority students’ own
best interests.73 The reigning social justice fallacy that statistical disparities
in group representation mean racial discrimination has major impacts.
Minority students on campus are like human shields used to protect
institutional interests— and casualties among human shields can be very
high.
Many social policies help some groups while harming other groups.
Affirmative action in academia manages to inflict harm on both the students
who were not granted admissions, despite their qualifications, and also
many of those students who were admitted to institutions where they were
more likely to fail, even when they were fully qualified to succeed in other
institutions.
Economic self-interest is by no means the only factor leading some
individuals and institutions to persist in demonstrably counterproductive
affirmative action admissions policies. Ideological crusades are not readily
abandoned by people who are paying no price for being wrong, and who
could pay a heavy price— personally and socially— for breaking ranks
under fire and forfeiting both a cherished vision and a cherished place
among fellow elites. As with the genetic determinists and the “sex
education” advocates, there have been very few people willing to
acknowledge facts that contradict the prevailing narrative.
Even where there is good news about people whom surrogate decision-makers are supposedly helping, it seldom gets much attention when the
good results have been achieved independently of surrogate decision-
makers. For example, the fact that most of the rise of blacks out of poverty
occurred in the decades before the massive government social programs of
the 1960s, before the proliferation of charismatic “leaders,” and before
widespread media attention, has seldom been mentioned in the prevailing
social justice narrative.
Neither has there been much attention paid to the fact that homicide rates among non-white males (who were overwhelmingly black males in those years) went down by 18 percent during the 1940s, followed by a further decline of 22 percent in the 1950s. Then suddenly that reversed in
the 1960s,74 when criminal laws were weakened, amid heady catchwords
like “root causes” and “rehabilitation.” Perhaps the most dramatic— and
most consequential— contrast between the pre-1960s progress of blacks
and negative trends in the post-1960s era was that the proportion of black
children born to unmarried women quadrupled from just under 17 percent
in 1940 to just over 68 percent at the end of the century.75
Intellectual elites, politicians, activists and “leaders”— who took credit
for the black progress that supposedly all began in the 1960s— took no
responsibility for painful retrogressions that demonstrably did begin in the
1960s.
Such patterns are not peculiar to blacks or to the United States. Group
preference policies in other countries did little for people in poverty, just as
affirmative action did little for black Americans in poverty. The benefits of
preferential treatment in India, Malaysia and Sri Lanka, for example, tended
to go principally to more fortunate people in low-income groups in these
countries,76 just as in the United States.77
IMPLICATIONS
Where, fundamentally, did the social justice vision go wrong? Certainly
not in hoping for a better world than the world we see around us today, with
so many people suffering needlessly, in a world with ample resources to
have better outcomes. But the painful reality is that no human being has
either the vast range of consequential knowledge, or the overwhelming
power, required to make the social justice ideal become a reality. Some
fortunate societies have seen enough favorable factors come together to
create basic prosperity and common decency among free people. But that is
not enough for many social justice crusaders.
Intellectual elites may imagine that they have all the consequential
knowledge required to create the social justice world they seek, despite
considerable evidence to the contrary. But, even if they were somehow able
to handle the knowledge problem, there still remains the problem of having
enough power to do all that would need to be done. That is not just a
problem for intellectual elites. It is an even bigger problem— and danger—
for the people who might give them that power.
The history of totalitarian dictatorships that arose in the twentieth
century, and were responsible for the deaths of millions of their own people
in peacetime, should be an urgent warning against putting too much power
in the hands of any human beings. That some of these disastrous regimes
were established with the help of many sincere and earnest people, seeking
high ideals and a better life for the less fortunate, should be an especially
relevant warning to people seeking social justice, in disregard of the
dangers.
It is hard to think of any power exercised by human beings over other
human beings that has not been abused. Yet we must have laws and
governments, because anarchy is worse. But we cannot just keep
surrendering more and more of our freedoms to politicians, bureaucrats and
judges— who are what elected governments basically consist of— in
exchange for plausible-sounding rhetoric that we do not bother to subject to
the test of facts.
Among the many facts that need to be checked is the actual track record
of crusading intellectual elites, seeking to influence public policies and
shape national institutions, on a range of issues extending from social
justice to foreign policies and military conflict.
As regards social justice issues in general, and the situation of the poor in
particular, intellectual elites who have produced a wide variety of policies that claim to help the poor have shown a great reluctance to put the actual
consequences of those policies to any empirical test. Often they have been
hostile to others who have put these policies to some empirical test. Where
social justice advocates have had the power to do so, they have often
blocked access to data sought by scholars who want to do empirical tests on the consequences of such policies as affirmative action in academic admissions.
Perhaps most surprising of all, many social justice advocates have shown
little or no interest in remarkable examples of progress by the poor— when
that progress was not based on the kinds of policies promoted in the name
of social justice. The striking progress made by black Americans in the
decades before the 1960s has been widely ignored. So has the demonstrable
harm suffered by black Americans after the social justice policies of the
1960s. These included a sharp reversal of the homicide rate decline and a
quadrupling of the proportion of black children born to unmarried women.
Government policies made fathers a negative factor for mothers seeking
welfare benefits.
Social justice advocates who denounce elite New York City public high
schools that require an entrance examination for admissions pay no
attention to the fact that black student admissions to such schools were
much higher in the past, before the elementary schools and middle schools
in black communities were ruined by the kinds of policies favored by social
justice advocates. Back in 1938, the proportion of black students who
graduated from elite Stuyvesant High School was almost as high as the
proportion of blacks in the New York City population.78
As late as 1971, there were more black students than Asian students at
Stuyvesant.79 As of 1979, blacks were 12.9 percent of the students at
Stuyvesant, but that declined to 4.8 percent by 1995.80 By 2012, blacks
were just 1.2 percent of the students at Stuyvesant.81 Over a span of 33
years, the proportion of black students at Stuyvesant High School fell to
less than one tenth of what it had been before. Neither of the usual suspects
— genetics or racism— can explain these developments in those years. Nor
is there any evidence of soul-searching by social justice advocates for how
their ideas might have played a role in all this.
On an international scale, and on issues besides education, those with the
social justice vision often fail to show any serious interest in the progress of
the less fortunate, when it happens in ways unrelated to the social justice
agenda. The rate of socioeconomic progress of black Americans before the
1960s is a classic example. But there has been a similar lack of interest in
the ways by which poverty-stricken Eastern European Jewish immigrants,
living in slums, rose to prosperity, or how similarly poverty-stricken
Japanese immigrants in Canada did the same. In both cases, their current
prosperity has been dealt with rhetorically, by calling their achievements
“privilege.”82
There have been many examples of peoples and places around the world
that lifted themselves out of poverty in the second half of the twentieth
century. These would include Hong Kong,83 Singapore,84 and South
Korea.85 In the last quarter of the twentieth century, the huge nations of
India86 and China87 had vast millions of poor people rise out of poverty.
The common denominator in all these places was that their rise out of
poverty began after government micro-managing of the economy was
reduced. This was especially ironic in the case of China, with a communist
government.
With social justice advocates supposedly concerned with the fate of the poor, it may seem strange that they have paid remarkably little attention to places where the poor have risen out of poverty at a dramatic
rate and on a massive scale. That at least raises the question whether the
social justice advocates’ priorities are the poor themselves or the social
justice advocates’ own vision of the world and their own role in that vision.
What are those of us who are not followers of the social justice vision
and its agenda to do? At a minimum, we can turn our attention from
rhetoric to the realities of life. As the great Supreme Court Justice Oliver
Wendell Holmes said, “think things instead of words.”88 Today it is
especially important to get facts, rather than catchwords. These include not
only current facts, but also the vast array of facts about what others have
done in the past— both the successes and the failures. As the distinguished
British historian Paul Johnson said: