To show that discrimination is wrong, one must show that it is unjust, and nobody does this. Hence, there is no ex-ante reason to assume claims of discrimination have any real moral weight.
Discrimination by itself is clearly not morally wrong. Consider the following examples:
A bartender refuses to serve a black customer but agrees to serve a white customer because the first is black and the second is white.
A bartender refuses to serve a 13-year-old customer but agrees to serve a 21-year-old customer because the first is 13 years old and the second is 21 years old.
A bartender refuses to serve a drunk customer but agrees to serve a sober customer because the first is drunk and the second is sober.
All three of these, unambiguously, are discrimination—i.e., the bartender is discriminating between two people based on some difference in characteristics between them. Further, most people would say that the first is certainly immoral, the second is certainly licit, and the third is almost certainly licit. So why on earth do people seem to think crying “discrimination!” shows that a moral evil has occurred? Some other examples of discrimination that is obviously fine:
A wife will not consider it sexual harassment if a male who is her husband calls her beautiful but will consider it sexual harassment if a male who is her coworker calls her beautiful.
A country will allow its own citizens to enter without a visa but will not allow another country’s citizens to enter without a visa.
The police will arrest people who are suspected to have committed crimes but will not arrest people who are not suspected to have committed crimes.
What is my point here? The reason some discrimination is clearly fine, and other discrimination is clearly not fine, is that some discrimination is unjust—i.e., the discriminator fails to render unto a person what they are owed—whereas other discrimination is not unjust, and it is the injustice—not the discrimination—that is immoral.
In the above examples, a 13-year-old is not owed the right to purchase alcohol at a pub (indeed, it is actively inappropriate for them to do so), and so the discrimination is not only allowed but appropriate. But a black person is owed the right to purchase alcohol at a pub, and so the bartender’s failure to extend that right to them is unjust.
The reason I raise this is that The Discourse™ seems to think that merely identifying that discrimination has occurred is sufficient to show that something morally wrong has occurred. Obviously, it is discrimination for the state to allow vaccinated people but not unvaccinated people to attend large social gatherings—but is this discrimination unjust? Obviously, it is discrimination when only natal females are allowed to compete in female sport—but is this discrimination unjust? Obviously, it is discrimination when men who have sex with men are disallowed from donating blood—but is this discrimination unjust? On the basis of the arguments put forward in The Discourse™, I have no idea, and, more importantly, nobody else seems to have any idea either. All anyone seems able to identify is that discrimination has occurred, not some substantive conception of justice under which the discrimination would be unjust. So, trivially: to show that discrimination is wrong, one must show that it is unjust, and nobody does this. Hence, there is no ex-ante reason to assume claims of discrimination have any real moral weight.
Pro-life advocates cannot coherently advocate that abortion is wrong while also arguing that it is necessarily illegitimate to commit any acts of civil disobedience in defence of the unborn.
Here is what I think is a valid argument:
(P1) There exist instances where civil disobedience (i.e., disobedience of laws and even minimal violence) may be legitimate to correct grave injustice.
(P2) If there exist any instances where civil disobedience may be legitimate to correct grave injustice, then state-sanctioned mass murder is such an instance.
(P3) Murder is the deliberate killing of an innocent person.
(P4) If the unborn are innocent persons, then abortion is state-sanctioned mass murder.
(P5) The unborn are innocent persons.
(C) Therefore, civil disobedience may be legitimate in cases of abortion.
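The argument's validity (though of course not its soundness) can be checked mechanically. Here is a minimal propositional sketch in Lean; the proposition names are mine, not the author's, and (P3) is treated as a definition folded into the reading of (P4):

```lean
-- Hypothetical proposition names; this only certifies that (C) follows
-- from the premises, not that any premise is true.
variable (SomeCDLegit MassMurder UnbornInnocentPersons CDLegitForAbortion : Prop)

-- (P3) defines "murder" and is absorbed into the reading of (P4) below.
theorem conclusion_follows
    (p1 : SomeCDLegit)                                    -- (P1)
    (p2 : SomeCDLegit → MassMurder → CDLegitForAbortion)  -- (P2)
    (p4 : UnbornInnocentPersons → MassMurder)             -- (P4)
    (p5 : UnbornInnocentPersons) :                        -- (P5)
    CDLegitForAbortion :=                                 -- (C)
  p2 p1 (p4 p5)
```

The proof term is a straightforward chain of modus ponens, which is why the essay's burden falls entirely on the premises rather than the inference.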
The question of whether abortion should be legal is precisely a question about (P5): if the pro-life camp is correct, then abortion just is the state-sanctioned murder en masse of unborn human beings. What I therefore find somewhat strange is the idea that the abortion debate should purely be a civil debate (i.e., a debate where both parties argue their case before the body politic and accept its judgements without recourse to any extra-legal methods of achieving one’s ends). But if someone accepts (P1)–(P4), then the correctness of the pro-life position would necessarily entail that individuals are not bound to submit to the diktats of the body politic and are entitled to use extra-legal methods to prevent abortion. In other words, it is not coherent for a pro-lifer to think that a “civil debate” (as defined above) on abortion is even coherent: if the pro-life position is amenable to civil debate, then it cannot be correct. Symmetrically, a pro-choicer cannot think that a “civil debate” is coherent: if they are unwilling to grant the possibility that their opponents could be legitimate in using civil disobedience to achieve their ends, then they are unable to grant the possibility of their opponent’s arguments at all. Both of these conclusions follow necessarily, unless one abandons at least one of (P1)–(P4).
The problem is that (P1)–(P4) are simply not controversial. Almost nobody will want to say that North Korean dissidents, e.g., are acting illegitimately, which gives us (P1); (P2) seems likewise trivially obvious (do we really want to argue, e.g., that a dissident North Korean family that helps the to-be-executed to flee to South Korea is acting illegitimately?); (P3) seems likewise non-controversial unless one is going to tie oneself in knots; and (P4) is analytic on (P3), given that abortion laws simply are the state’s sanctioning of abortion and given that abortion occurs at a large scale (depending on the country, generally at least in the thousands or tens of thousands per year in places like Australia, up to 850,000 in places like the United States).
So, to insist that the legality of abortion be decided by civil debate (as defined above) is then seemingly precisely to insist that the legality of abortion need not be decided at all, since (P5) could not be true if abortion were able to be decided by civil debate.
To be clear, nothing in my argument above would indicate that civil disobedience is morally obligatory in respect of abortion, nor that pro-life advocates should not argue their case before the body politic. It is simply to argue that pro-life advocates cannot coherently advocate that abortion is wrong while also arguing that it is necessarily illegitimate to commit any acts of civil disobedience in defence of the unborn.
Almost all arguments will ultimately rely on some form of appeal to authority. If rationalists are disappointed by the insubstantiality of their own appeals, perhaps they should consider a philosophy that vindicates appeals to authority more rigorously.
Suppose that a caricatured rationalist atheist (R) and a caricatured devout Catholic (C) have a discussion about the existence of God. Let’s say R defends his position by some argument against God’s existence (e.g. the existence of evil), and let’s further assume that C rebuts R’s argument to the point where even R acknowledges that his defence fails. If R does not change his mind, is he being unreasonable? Conversely, let’s assume that C defends his position by some argument for God’s existence (e.g. the argument from contingency), and let’s further assume that R rebuts C’s argument to the point where even C acknowledges that his defence fails. If C does not change his mind, is he being unreasonable? Or, let’s take an even more extreme scenario: let’s assume that before R or C even present their arguments, both individuals try to get their counterparty to agree that he will abandon his position should his argument be defeated, to which both parties reply that no, even if all their arguments for believing their position are defeated, they will continue to hold that position. Are R and C both being unreasonable?
Conventional wisdom would suggest that both R and C are being equally unreasonable in the above: reasonable people change their mind when their arguments are defeated, or so custom dictates. A more sophisticated person might note that R’s and C’s apparently unreasonable behaviour might belie a less recalcitrant internal set of attitudes, and as such they might in fact think reasonably even if they are acting unreasonably. A yet more sophisticated person might note that, in line with something like the Duhem–Quine thesis, R and C have actually gone outside the realm of “reasonableness” altogether, since something like “belief in God” is a core position that will inform one’s entire worldview and as such it is not really amenable to persuasion. But I want to suggest something different: the only person who could possibly be accused of unreasonableness or need defending is R. C is entirely justified unless one begs the question.
Note that for R, his ability to rationally defend his positions is critical to his reasonableness. For (at least the caricature of) a rationalist, man truly is the measure of all things, and so he must either be able to defend his position directly or be able to defend an appeal to authority for his position. Critically, however, the second option will by his own standards be subject to an infinite regress: if he cites a psychology finding, for instance, we might ask him to justify his belief in psychology (at which point we could direct him to the replication crisis); if he wants to defend his claim against the replication crisis, he will need to appeal to the authority of statisticians or philosophers of science, at which point we might point him to rebuttals of them; and so on. R will, as an obvious empirical fact, not be able to justify the full epistemic chain, and, what’s worse, this will prevent him from having any proper justification in appealing to authority. Unless he is able to defend his own position to his own satisfaction, he will be forced to make a move that he himself considers unreasonable. His position will straightforwardly lack justification.
For C, however, no such dilemma ever obtains. Our devout Catholic C holds a position first and foremost because he believes that the Church possesses, through the grace of God and the work of the Holy Spirit, the unique gift of proclaiming truth to the world, and so what the Church teaches as dogma is therefore correct. C not only has personal experience of the Holy Spirit; he most likely also has experiences of the Church’s ability to give valid and strong arguments for its own positions (e.g. on the existence of God, one might look to the aforementioned argument from contingency, or the Ontological Argument, etc.). Hence, C has no empirical grounds to doubt the claim that the Church, by divine grace, does in fact teach the truth, and so if C is defeated in an argument, the most reasonable belief that C could hold is that he made an error in reasoning, not that his position was incorrect. God, not man, is the measure of all things, and self-evidently C’s personal fumblings in argumentation do not indict God. Hence, his appeal to authority will be entirely valid. There is no infinite regress. C has made no leaps of faith aside from his original leap of faith, and he can justify that in good part within his system by means of rational arguments that even his interlocutors must accept as ostensibly valid.
In other words, the only way we could come to the conclusion that R and C are both being unreasonable is if we presuppose the truth of rationalism—which is clearly not a legitimate move in a debate over whether rationalism is true (and a debate over the existence of the Christian God is precisely such a debate). R and C will both ultimately be forced to make appeals to authority in defending their beliefs, but only one of them is in any real sense justified in doing so. If caricatured rationalists are disappointed by the insubstantiality of their own appeals, perhaps they should consider a philosophy that vindicates appeals to authority more rigorously.
Note: “dogma” is a specific technical term in Catholicism, denoting a certain set of beliefs that have been divinely revealed (e.g. belief in the Trinity, the Immaculate Conception, etc.). Many beliefs held by Catholics—indeed, many beliefs advocated by the Church—are not dogma and therefore are not guaranteed infallibility. So to be convinced of the Church’s infallibility in teaching dogma is not to be convinced that, for instance, the Pope would never accidentally misspell someone’s name. For a broader dissection, see here: https://www.catholic.com/magazine/print-edition/dogma
Note: this diatribe is intended only against caricatured rationalists. While as an empirical point most rationalists do appear to act like caricatured rationalists, it is obviously false that the above succeeds in attacking any sophisticated doctrine.
Whether it be due to excessively tight deadlines, poor-quality cadets, ideological echo chambers, or just plain-old laziness, there’s very little case to be made that The Discourse in the media accurately reflects reality in any real sense.
I was listening to the excellent Two Psychologists, Four Beers episode with Gordon Pennycook, specifically the discussion on the credibility of the media in relation to conspiracy theories. Yoel, Mickey, and Gordon seemed to think there were two possibilities: “the mainstream media is deliberately dishonest and biased”, or “the mainstream media endeavours to report accurately and mostly achieves this”. Between these two options, I tend to side with the podcasters and say the second—but there is clearly a third option. So, in my most Very Online blog post ever, I give you the third option: “the media does try to report facts accurately when it reports them, but journalists are often incompetent, so The Discourse ends up sounding like lies even when the strict facts of the matter are true”.
To start, here are a few very conspiratorial-sounding questions on hot-button topics that have been prominently discussed in Anglosphere media (apologies, they’re Australia-focussed, but North America should get used to the fact that the rest of the world exists). See how many you can answer:
Euthanasia: In 2020, Germany’s Constitutional Court declared that German citizens had a constitutional right to assistance with suicide for what reasons?
Disability rights: What percentage of babies with Down syndrome are aborted in Denmark?
LGBTIQ issues: What percentage of Australian gay men have HIV?
Abortion: Following his death, deceased Indiana abortionist Ulrich Klopfer was found to have how many preserved foetuses at his house?
China: A number of Australian-university research institutes were found to have collaborated with which branch of the Chinese government in what program?
If you click through those links, you’ll notice that all of them either link to press releases from primary sources or to mainstream media outlets. It is certainly true that these things were reported on. Bret Weinstein would be, as per usual, absurd to suggest any arch-conspiracy to hush this stuff up. And yet, in spite of the literally unbelievable nature of the items (Assisted suicide for any reason at any stage of life? So high an incidence of aborting babies with Down syndrome that the program looks virtually indistinguishable from eugenics? Australian academics’ collaboration in an ongoing genocide?), and in spite of how commonly the media discusses the overarching topics, I’m willing to guess this is the first you’re hearing of most, if not all, of the above specific facts. If you follow any of the above topics even remotely closely, that is astonishing! Australia recently legalised euthanasia in two states, we have a periodic discussion about liberalising blood donation for men who have sex with men, we’re constantly discussing Chinese influence on Australia’s institutions, and yet whenever I cite any of the above to an interlocutor—even one who “follows these issues closely”—I am greeted with incredulous stares. However incredible these facts may be, they have not permeated the public consciousness at all.
Or at least, this situation would be astonishing if we assumed the commentators in these arenas themselves knew these things. And we have no reason to assume that. On each of the above, after the initial report there was often little-to-no follow up or commentary. These episodes certainly weren’t hashed through the media in the latest iteration of the Two Minutes Hate. So, unless journalists and commentators are doing their job really well and ensuring they stay abreast of all the latest in their area, even if it’s only reported briefly and once, there’s every reason to believe they have no knowledge of any of these facts. And especially when these are hardly the sort of facts that you’ll be popular for sharing at the water cooler, it’s unsurprising they make very little impact after their initial publication. This is doubly true in more complex domains: my day job is in the energy sector, and it’s a rare day when reporting on the industry isn’t riddled with basic conceptual errors (e.g. “levelized cost of energy” is self-evidently not a measure of how much end-consumers will pay for energy, so it cannot be cited to discuss the impact of renewables or fossil-fuel generation on the prices paid by end-consumers).
So, whether it be due to excessively tight deadlines, poor-quality cadets, ideological echo chambers, or just plain-old laziness, there’s very little case to be made that The Discourse in the media accurately reflects reality in any real sense. Yoel, Mickey, and Gordon are clearly correct to be dismissive of conspiracy theories about deliberate media bias, but as Hanlon’s Razor goes, never attribute to malice what can be attributed to stupidity. The papers of record do remain good papers of record, and they are good collators of raw facts, but any statement of credibility beyond that is, uh, a stretch.
If we want to say “it is immoral to try and influence people’s preferences because [insert boringly stupid Rawlsian reason here]”, then we should just say that, not pretend that the problem is far harder to solve than it actually is because we’ve restricted ourselves to assuming that everyone’s preference relation is purely self-interested and we just have to fix incentives to counter that.
While “improve incentive structures” is a good way to improve outcomes, it has obvious limits if everyone is particularly immoral, so “improve people” is both an inescapable goal and an almost domain-independent Pareto improvement.
A friend recently commented that this was very oblique, so this is a short post intended to elucidate what I mean.
Let’s consider a classic economic incentive problem: the principal–agent problem. I restate it here as follows. A principal desires that some certain outcome be achieved (e.g. a shareholder desires that the stock of a company increase in value), so they delegate their authority to an agent, who acts on their behalf (e.g. the shareholder delegates authority to the CEO). But the agent may have incentives that fail to align with the principal’s (e.g. the CEO may have an incentive to increase their own salary past some optimum point, which would not be in the interests of the company but would be in the interests of the CEO), so the principal has to put some set of checks and balances in place to ensure that the agent’s incentives align with the principal’s incentives.
The principal–agent problem is trivially “solvable” if we just define the agent’s preference relation to be the same as the principal’s preference relation: no checks and balances will be needed, since the agent will by construction always act in the interests of the principal. In order for us therefore to arrive at a principal–agent “problem” that we solve by incentives, we must assume their preference relations don’t align. In other words, preferences are (clearly) logically prior to incentive structures. The principal–agent problem, externality analysis, game theory, and indeed any other way of modelling decisions using preference relations can by definition tell us nothing about how such preferences are formed or what types of preferences exist in the real world, because these problems always assume some preference relation and then work from there.
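The triviality claimed above can be made concrete in a toy model. In the sketch below (all payoffs, names, and numbers are my illustrative assumptions, not a standard formulation), a self-interested agent shirks unless paid an incentive, while an agent whose preference relation simply is the principal’s needs no incentive structure at all:

```python
# Toy principal-agent sketch: incentives matter only because we have
# *assumed* the agent's preferences diverge from the principal's.

ACTIONS = ["work", "shirk"]

def principal_value(action: str) -> float:
    """Value to the principal of each action (illustrative numbers)."""
    return {"work": 10.0, "shirk": 0.0}[action]

def selfish_agent_utility(action: str, bonus: float) -> float:
    """A self-interested agent: bears an effort cost, receives any bonus paid for working."""
    effort_cost = {"work": 3.0, "shirk": 0.0}[action]
    paid = bonus if action == "work" else 0.0
    return paid - effort_cost

def choose(utility) -> str:
    """The agent picks whichever action maximises their own utility."""
    return max(ACTIONS, key=utility)

# Without an incentive, the selfish agent shirks...
assert choose(lambda a: selfish_agent_utility(a, bonus=0.0)) == "shirk"
# ...so the principal must buy alignment with a bonus exceeding the effort cost.
assert choose(lambda a: selfish_agent_utility(a, bonus=4.0)) == "work"
# But if the agent's preference relation just is the principal's, the "problem" vanishes.
assert choose(principal_value) == "work"
```

The third assertion is the whole point: once the preference relation is stipulated to coincide with the principal’s, no checks and balances are needed, which is why preferences are logically prior to incentive design.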
I do not believe I have said anything interesting or insightful in the above, and yet the above is seemingly forgotten in almost all discussions about incentive structures. For instance, large numbers of economic problems become trivially solvable if we assume that we can change people’s preferences to (e.g.) internalise externalities. Political science as a discipline largely falls away if you assume you have some means of ensuring that only people of moral integrity ever run for office. The constant refrain among replication-crisis commentators that “we must reform the incentives” seems to assume you could never reform people’s preferences to achieve the same effect. To belabour the point: all of these discussions take preferences as given and then discuss how to fix incentive structures to ensure outcomes, but it’s very rare that anyone gives a reason to take preferences as given and not to consider the possibility that we could get people to have different preferences.
The reason this silence on preference formation is so bizarre is that preferences clearly are malleable and varied—literally all wills are potential principal–agent problems, and yet many executors execute wills to the letter because they want to uphold the wishes of the deceased! And if we wanted to study how and why these preference relations develop, and what we might want to do to ensure that people’s preferences are a bit less dodgy, there are whole sub-disciplines that study preference formation (in psychology, sociology, anthropology, philosophy, etc.)! If we want to say “it is immoral to try and influence people’s preferences because [insert boringly stupid Rawlsian reason here]”, then we should just say that, not pretend that the problem is far harder to solve than it actually is because we’ve restricted ourselves to assuming that everyone’s preference relation is purely self-interested and we just have to fix incentives to counter that.
Karen Stenner’s The Authoritarian Dynamic is a seminal collection of evidence on when and how authoritarianism affects polities, but the nuance that she offers above and beyond previous investigations into authoritarianism begins to invite questions about whether “authoritarians” are truly the voters who should puzzle political psychologists.
Karen Stenner’s 2005 The Authoritarian Dynamic received an unexpected jolt into the spotlight on 8 November 2016. Almost every poll had predicted that Hillary Clinton would win the 2016 presidential election; almost every credible pundit had argued that Trump was unelectable. And then, as if out of nowhere, a sizeable chunk of the US population elected the vulgar former head of a reality show into the Oval Office. Who could have predicted such an event? As it turns out, Karen Stenner.
Earlier analyses of the “authoritarian personality”—i.e. of a theory predicting that some individuals would consistently express authoritarianism as a deep-seated aspect of their personality—were unable to account for the unexpectedness of Trump’s victory. Indeed, they were unable to account for much at all: a stable personality trait like “authoritarianism” should predict behaviour consistently across time, yet few investigations yielded any real stability in whatever measure of authoritarianism they posited. People did not appear to exhibit particularly consistent racist animosity or willingness to use the strong arm of the state to enforce morality. And indeed, there was little observable anti-establishment sentiment in the US polity in 2015—it would be difficult to attribute Trump’s win to Americans who had acted in a consistently authoritarian way prior to that point.
Here enters Stenner’s concept not of an authoritarian personality but of an authoritarian dynamic. What drives authoritarianism, per Stenner, is not fundamentally racism or a love of strongmen or a punitive moral compass—it is a psychological predisposition towards oneness and sameness. Authoritarians want to be assured that we are all fighting for the same team. Who “we” are is not so salient as the desire for groupishness per se—“it is a groupishness that generally comes from wanting to be part of some collective, not from identification with a particular group” (p.18). But authoritarians will fail to meet this need for groupishness under conditions of “normative threat”—i.e. a sense of the polity’s coming apart, be it through substantial divergence of public opinion, untrustworthy political leadership, or “diversity and freedom ‘run amok’” (p.17). Authoritarians are not more likely to perceive normative threat than are other citizens, but once they do perceive it, they will immediately man the barricades in defence of a reestablishment of a normative order—indeed, any normative order—that will return the community to a state of oneness and sameness.
Stenner marshals impressive evidence in favour of this point. To measure authoritarianism—i.e. a fundamental desire for oneness and sameness—she uses survey respondents’ endorsement of obedience, courtesy, and respect for elders as childrearing values, as compared with endorsements of the child’s taking responsibility for his own actions, his curiosity, or his following his conscience. Using this apparently minimal measure, Stenner shows that it is specifically the presence of normative threat that drives authoritarians to endorse traditionally authoritarian attitudes, such as an emphasis on “law and order” and a desire to crack down on “deviant groups and troublemakers”. If authoritarians are encouraged to believe that their polity is united and their leaders are honest, then they are indistinguishable from their more “libertarian” (Stenner’s term, used idiosyncratically) counterparts in their expression of authoritarian beliefs. Further, Stenner derives some impressive real-world results from this minimalist measure: using purely the respondent’s endorsed childrearing values, Stenner can predict general intolerance of difference substantially better than can almost any other single variable, including years of education, social class, religiosity, or political views. What’s more, she can predict it across a large number of diverse polities—from the Anglosphere through Eastern Europe to East Asia. In other words, Stenner appears justified in claiming to have identified a true fundamental psychological contributor to citizens’ authoritarian predispositions the world over.
From this model, Stenner makes some fascinating, if controversial, deductions. For instance, she suggests that the “genocide formula”—i.e. the preconditions for a country’s committing genocide—may lie not so much in average levels of ethnic prejudice as in the variance in public belief, since only the latter would indicate normative threat. In Stenner’s 1990–1995 survey data, “none of the six Yugoslav republics displayed especially high levels of authoritarianism on average… But Serbia is unparalleled across the eighty samples in terms of variance in authoritarianism” (p.113). In other words, it may not be deep-seated and widely shared prejudices that drive authoritarian actions. It may simply be that the proportion of the population predisposed to authoritarianism is driven to extremes of behaviour by the presence of normative threat.
Potentially even more controversially, Stenner suggests that “much of what we think of as racism, likewise political and moral intolerance, is more helpfully understood as ‘difference-ism’.” (p.276). Since racial differences, differences in sexual expression, and so on are necessarily socially salient deviations from an authoritarian’s conception of normality, they constitute a normative threat, and it is in that sense—not in the sense of a learned or structural prejudice, nor in the sense of a specific hatefulness towards that minority—that authoritarians’ prejudicial behaviours should be understood.
The other major contribution of Stenner’s work is to distinguish the psychological drivers of authoritarianism, status-quo conservatism, and laissez-faire conservatism (the latter two are Stenner’s terms). Just as authoritarianism is driven by a fundamental psychological need for oneness and sameness, status-quo conservatism is driven by a fundamental psychological need for stability. Status-quo conservatives do not particularly mind how much oneness and sameness there is in the polity, so long as the amount is not very different today from how it was yesterday. Laissez-faire conservatism, by contrast, is perhaps best not given the moniker “conservatism” at all and is identified most strongly by support for specific laissez-faire capitalist policies. Although contemporary conservative parties are often alliances among all three of these drivers as a matter of political convenience—think, for instance, of the Trumpian, moderate, and corporatist elements of the Republican Party—they are largely uncorrelated and separate at the level of individual voters. Contra the often sloppy analysis by political scientists and psychologists, neither status-quo nor laissez-faire conservatism is particularly predictive of prejudice once one accounts for a predisposition to authoritarianism.
Stenner’s argument begins to drift, however, when it comes to characterising what authoritarianism (and its counterpart, libertarianism) actually are.
Stenner identifies the fundamental psychological driver behind authoritarianism as a psychological need for oneness and sameness, but she argues the predisposition is content neutral beyond this point: authoritarianism merely means that “whatever it is that we stand for, we must all stand for it” (p.142). Symmetrically, libertarianism is merely the fundamental desire for “freedom and difference” (p.81), and it is that per se that libertarians desire. But it is obvious in neither case why these would be fundamental psychological needs in the first instance—leaving aside briefly libertarians’ need for freedom, there is no obvious reason why anyone would have a need specifically for oneness and sameness, nor for difference. Perhaps appropriately given its title, The Authoritarian Dynamic provides no account of why libertarians would desire diversity other than that they are “excited and engaged” (p.217) by difference. For authoritarianism, however, we see a slightly more fleshed-out picture. Modern liberal democracy engenders a “diversity of lifestyles and beliefs… [which] may be frightening, overwhelming, or isolating for many individuals, who may wish to divest themselves of the fear, stress, or loneliness of their own freedom, and/or to avoid the diverse and unpredictable consequences of the freedom of others” (p.143). This effect is compounded by authoritarians’ tendency to score lower on cognitive tests: in a sense, the diversity of modernity is more cognitive load than they can handle.
There is a strangeness to this case, however, compounded by her specific operationalisation of authoritarianism and her definition of normative threat. In one operationalisation, authoritarianism was measured by choosing the following as important (against the alternative in brackets) on a list of childrearing qualities:
“that a child obeys his parents” (“that he is responsible for his own actions”)
“that he has good manners” (“that he has good sense and sound judgment”)
“that he is neat and clean” (“that he is interested in how and why things happen”)
“that he has respect for his elders” (“that he thinks for himself”)
“that he follows the rules” (“that he follows his own conscience”)
Recall that Stenner defines authoritarianism as a fundamental preference for oneness and sameness, but a cursory glance over these items shows that they bear minimal resemblance, if any, to oneness and sameness; rather, they suggest a tendency towards “interdependence” instead of “independence”, to use Markus and Kitayama’s terminology (alternately, towards “collectivism” instead of “individualism”).
Stenner may well argue that to define authoritarianism in terms of preference for collectivist over individualist childrearing values is to make our operationalisation “tautological with the dependent variables it is designed to explain” (p.21), as she does for Altemeyer’s Right-Wing Authoritarianism scale, but there are two responses to this. The first is that this does not address the substantial independence of her operationalisation from any measure of “oneness and sameness”. But the second relates to what Stenner claims to measure, namely the desire for “oneness and sameness” at the level of the polity. In some meaningful sense, a necessary condition for a group’s being a community is that it be defined by a “oneness and sameness” about something: a religious community cannot be a religious community without a substantially shared religious outlook, a town community cannot be a town community without a substantially shared set of social norms, and so on. It is intuitive that individuals who value interdependent childrearing values would value strong collectivism at the local level (i.e. for their immediate community); it is not obvious that they would generalise this to the level of the polity as a whole. Stenner’s analysis of how childrearing values predict authoritarian attitudes, then, would show that individuals who are more interdependent at the local level are also more interdependent at the political level—i.e., individuals tend to see both their local environs and their polities as communities in the sense of “united under one and the same normative framework”, or they tend to see neither as communities in this sense.
That her measure of authoritarianism appears to be measuring the desire to have one’s polity be a community becomes doubly apparent when one remembers that Stenner’s definition of normative threat included not only divergence of public opinion, which is a per se threat to oneness and sameness, but also questionable authorities, which are not a per se threat to oneness and sameness but are a per se threat to a community in the thick sense of the word.
This seemingly minor distinction matters immensely when we recall Stenner’s claim that “the targets and content, though not the general form and function, of [authoritarianism’s] expression can vary depending on who ‘we’ are and what ‘we’ stand for” (p.142). This claim will be necessarily true only if it is oneness and sameness per se that authoritarians value. If what they value instead is that their polity be a community, then the specific contents of the community’s norms and values will substantially influence which normative orders will cause them to man the barricades.
Note that most of Stenner’s analysis would apply as well to the above framing as to hers—I am not seeking to dispute her empirical results, which are formidable. It is very plausible that those with lower cognitive ability would rely more heavily on the predictability that comes from individuals’ sharing a single normative framework in the one community. If we account for the fact that race has for the past several centuries been an unfortunate basis for dividing populations into sub-communities, much of her “difference-ism” analysis is perfectly transparent as “anti–non-community-member–ism”. And, perhaps most importantly, it makes the explanandum of her political psychology far easier to answer: the question becomes less “why are there authoritarians (qua collectivists)?” and more “why are there libertarians (qua WEIRDos)?”.
In case this last remark seems bad-faith, note that Stenner appears to define as authoritarian anything other than a purely morally relativist individualist liberalism. For instance, Stenner includes as authoritarian coercion a desire for “favourable treatment for those conforming with conventions” (p.90; this would seemingly include, for instance, only extending marriage rights to monogamous couples), sees any religious belief “beyond personal faith and individual codes of conduct… that is, a need to regulate other people’s behaviour” as necessarily authoritarian (i.e. she defines all religious belief as it has always been traditionally understood by religions themselves as necessarily authoritarian), and explicitly includes several items endorsing moral realism in the abstract under her measure of authoritarianism. Stenner explicitly cites an example of how one paradigmatic libertarian (i.e. an individual who selected mostly individualistic childrearing values) differed from the authoritarians in interviews: “I don’t think that people are any more or less moral by today’s standards than people a hundred years ago were by their standards. I just think our standards have changed” (p.235).
If this is her exemplar of authoritarianism’s negative, then it is hardly ambiguous why one might be attracted to Stenner’s formulation of authoritarianism. It is comprehensible (and not at all obviously “authoritarian” in the conventional sense) why a citizen would not want the government to be completely agnostic on the question of the good life. It is comprehensible why a citizen would want the polity to be (even if at a minimal level only) a community, and not simply a legal structure with associated institutions. Stenner argues that “authoritarians are never more tolerant than when reassured and pacified by an autocratic culture” (p.334), but in light of the above, this is perfectly limpid—her statement is equivalent to noting that authoritarians do not want their government (i.e. the body that determines their schools’ curricula, has an outsized impact on social norms, and determines funding decisions for key norm-making bodies) to be wholly morally relativistic. In Stenner’s analysis, only an individual willing to wholly abandon community at the political level and willing to wholly embrace atomised-individualistic politics will register as truly non-authoritarian.
This is not to deny that authoritarians in the sense Stenner describes—i.e. those who have no allegiance to any specific normative order but merely to oneness and sameness per se—do in fact exist. We simply have no means in Stenner’s operationalisation of distinguishing them from those with an intuitive sense of moral realism, from those who adhere to any collectivist set of norms, or even from those who merely do not want their government to be entirely neutral on conceptions of the good. As such, perhaps our takeaway from The Authoritarian Dynamic should not be determining how to address “the negative consequences we all suffer on account of [authoritarians’] neglect and discomfort” and instead should be to consider that some of them might have a point.
 All page-number references are to: Karen Stenner, The Authoritarian Dynamic (New York: Cambridge University Press, 2005), paperback edition, ISBN: 978-0-521-53478-9.
 Stenner’s main counterevidence on this point comes from her Multi-Investigator Study 99 (MIS99), in which authoritarians were primed with a story in which there was “belief diversity” (i.e. Americans, in the abstract, agreed with each other less than before), “stable diversity” (i.e. Americans, in the abstract, disagreed but had a stable society), and “changing together” (i.e. Americans, in the abstract, were in a changing society that was coalescing in its desires). Stenner found that “belief diversity” and “stable diversity” caused greater expressions of authoritarian attitudes and were related to lower levels of desire to preserve the existing political system than was “changing together”—i.e. authoritarians acted more authoritarian when the polity disagreed, and authoritarians are happy to change social structures entirely so long as everyone does it together. Notably, however, the stimuli in this experiment deliberately avoided reference to any specific change in social structure—the language in question was “pulling together” or “falling apart”. Given that my claim is simply that authoritarians are not agnostic to the content of a normative order, not that they are wedded to every aspect, I do not believe that the MIS99 constitutes a decisive blow against my argument. The specific text of the stimuli can be found on pages 46–47 of The Authoritarian Dynamic.
 In particular, “There is no ‘ONE right way’ to live life; everybody has to create their own way” and “It is wonderful that young people today have greater freedom to protest against things they don’t like, and to make their own ‘rules’ to govern their behavior.” For any form of moral realism that includes religion, also see: “Some of the best people in our country are those who are challenging our government, criticizing religion, and ignoring the ‘normal way’ things are supposed to be done” and “It would be best for everyone if the proper authorities censored magazines so that people could not get their hands on trashy and disgusting material.”.
Oliver Traldi proposes that academia can solve its current polarisation by focussing on the epistemic justification of knowledge. I argue the schisms of Protestantism indicate this is likely to fail.
37 Pilate therefore said unto him, Art thou a king then? Jesus answered, Thou sayest that I am a king. To this end was I born, and for this cause came I into the world, that I should bear witness unto the truth. Every one that is of the truth heareth my voice.
In 2016, Jonathan Haidt noted a new trend among universities. Their telos—their goal or fundamental value—had traditionally always been “truth”. According to Haidt, there had over the preceding 30 years emerged a new telos: social justice. But no man can serve two masters, and as such, Haidt argued that if universities deviated from their traditional telos, not only would truth suffer, but eventually—as academia’s knowledge of truly efficacious solutions to social-justice problems diminished—social justice would suffer too. Oliver Traldi recently published an excellent essay at Heterodox Academy buttressing Haidt: the telos of a university cannot merely be truth but rather knowledge—i.e. justified true belief. If the university is justifying its claims with reasons of social justice, then it is not justifying them with epistemically valid reasons, and as such it does not have knowledge. Vigorous and robust academic freedom is required to ensure that academia’s reasons for stating what it states are good reasons; otherwise, it merely has haphazard beliefs that will be true by coincidence at best.
But we may wish to ask here: is it true that “social justice” is really an epistemically invalid reason to believe a claim? Indeed, how might we even reconcile differences in what we take to be epistemic authorities? If I may be permitted one snarky remark for this essay, it is that the secular, liberal ivory tower often forgets it is not the first ivory tower. There are, in fact, already parallel institutions of higher learning that starkly disagree with the secular university’s permitted sources of epistemic authority—namely, the seminaries and universities of Christian churches.
It is flatly insufficient to claim these institutions do not believe they are seeking knowledge; it is flatly insufficient, too, to say that they are purely seeking especially religious knowledge. Both Catholic and Protestant universities generally have the full complement of non-religious faculties (see, for instance, the list of faculties at the Catholic University of America and at Baylor University respectively), and even seminaries often extend into usually secular disciplines (Fuller Theological Seminary, for instance, offers programs in psychology and cultural studies). These institutions believe they are seeking knowledge in the broad sense—justified, true beliefs about the world.
No, the division between these and secular institutions lies in what would justify the true beliefs for each—and therein, too, lies the crux of what would justify the limits of “academic freedom”. A brief overview, then, of the sources of epistemic authority for the major denominations of English-speaking Christianity:
Roman Catholicism accepts two sources of ultimate epistemic authority: the Christian Scriptures and the Tradition of the Catholic Church. This is not to say that science cannot provide epistemic justification for claims, merely that science operates at the level of “secondary causality” (i.e. causality as normally understood), and secondary causality depends on “primary causality” (i.e. God’s continually willing the universe into being). Science can therefore claim as it wishes—but only to the extent that it does not contradict the revelations of God.
Many Protestant denominations affirm sola scriptura, or “only scripture”—i.e. the only ultimate epistemic justification is scripture. For some Protestant denominations who read the entire Bible as entirely literal (as opposed to part-literal, part-poetic, etc.) and who believe this reading disavows the model of primary and secondary causality outlined above, sola scriptura yields doctrines like “young-Earth creationism” or the literal historicity of Adam and Eve.
Anglicanism, the deliberately milquetoast addition to the Christian flock, decided to “middle-road” the above two by saying that epistemic authority derives from scripture, tradition, and reason, and that all three must be present for a claim to be ultimately justified. It therefore generally affirms the findings of science and (unlike Catholicism) is generally happy to make inferences from science back to scripture and tradition, not merely the other way around.
Methodism, like Anglicanism, affirms scripture, tradition, and reason as sources of authority but adds the experience of the faithful (the famous “Wesleyan Quadrilateral”).
These differences in permissible ultimate epistemic authority also help us to understand how both religious and secular universities believe they are pursuing “academic freedom”, despite the insistence of the latter that the former are engaging in censorship. The telos of academic freedom in any institution, religious or otherwise, is to allow academics to pursue knowledge (see, for instance, Article 39 of the Catholic Church’s Sapientia Christiana for an affirmation thereof). But this necessarily entails that academics cannot pursue blatant falsehood—in particular, “falsehood” according to the epistemic authorities that the university has adopted.
In a secular institution, therefore, academic freedom would allow a geologist to broadly pursue their lines of research without hindrance, but it is seemingly no contravention of academic freedom if a university removed the geologist’s teaching authority for having taught flat-Earthism. Such a doctrine clearly contravenes truth and therefore cannot constitute knowledge. Accordingly, Catholic universities would not revoke an academic’s teaching authority for researching or teaching evolution (this is, after all, merely secondary causality). But, given the epistemic authorities affirmed by the Catholic church, Catholic universities would in full accordance with academic freedom revoke teaching authority for an academic who teaches that the inherent purpose of sexuality is not the creation of new life: knowledge cannot be contrary to truth. A Biblical-literalist college would likewise in full accordance with academic freedom dismiss a professor who did not affirm the historicity of Adam and Eve: if a literal reading of Scripture is the only ultimate epistemic justification, then the professor could not have been disseminating knowledge.
The above is not intended to convince readers of the legitimacy of the epistemic authorities appealed to above (I do not imagine readers of this blog generally find revealed religion to be particularly compelling). It is merely to note that there is nothing inherent in the concept of academic freedom per se that enables one to condemn the above as violations of academic freedom, since under the epistemic authorities to which those denominations have appealed, the censorship does not inhibit the pursuit of knowledge. And if Traldi’s telos-as-knowledge model cannot show that dismissing a professor for failing to affirm the historicity of Adam and Eve is a violation of academic freedom, it is unclear how he thinks it will resolve the debate between traditionalist and social-justice directions on secular-university campuses, since the steel-man of the social-justice side is that they wish to affirm social-justice considerations as a valid epistemic justification.
To this, I therefore say: academia, it is a contradiction in terms to have a rational debate about the legitimate sources of epistemic authority. You’ve had a good run, but the cleft is now too deep. Follow your predecessors in the Christian academy. It is time for a schism. Time and again, when Christianity has found itself with incommensurable sources of epistemic authority, we have threatened and executed schisms. Against the authority of Tradition, Lutheranism executed a schism. Against the treatment of Scripture as non-literal, Christian fundamentalism executed a schism. Against the increasingly liberal exegetical strategies of the Episcopal Church, dioceses and congregations continue to execute schisms. And now, 500 years after the Reformation, there are at least as many sources of epistemic authority as there are denominations, each with their own seminaries, each with their own journals, each with their own unique truth that it’s their telos to pursue. In Christian academia, there is no real need for debates over telos—if the teloi differ, at worst one can always schism again.
This is, of course, a joke. The fragmentation of Christianity is lamentable. It is hard to think of a period in its history when Christianity was less unified than it is now; it is unclear what unites a liberal Methodist, a conservative Catholic, and a prosperity-gospel Pentecostal today other than paraphernalia, and perhaps some creeds on whose interpretation all three differ. The three could certainly have a conversation about the teloi of their movements, and about what would justify justified true belief, but they would find simply that they flatly disagree. The difference between Christianity and academia is not that academia is actually more unified in its telos, but rather that it has not yet fully realised that it is splintered.
So, to Traldi, I propose an alternative solution: there should be no discussions of the university’s telos, and certainly not of the sources of epistemic authority. Learn from Christianity’s mistakes. Everyone should just shut up. If there is no actual common foundation of epistemic authority, then the most likely result of investigating the foundation of epistemic authority is schism. If the belief in the literal defeat of sin and death and the literal incarnation of God as man was not enough to bind Christianity fast through its investigations of epistemic foundation, it is not clear to me why the complete absence of any unifying characteristic would bind secular academia through its.
To return briefly to Haidt, his book The Righteous Mind ends with a curious remedy to political polarisation: more bipartisan BBQs. If Congressmen’s spouses are friends, and their children play on the same basketball teams, and they joke about the horrible weather in DC together in their carpools, then perhaps the debates on the floor of Congress would not be so acrimonious. It is likely not possible for Congressmen to build bipartisan friendships on the basis of a genuine shared moral foundation. There is none. But that is no problem: humans are naturally sociable creatures, and friendships can be built on gossamer threads. Academia, traditionally understood, should be irrelevant, dusty, and full of cobwebs. In such an academy—shared telos or no—there should be plenty of gossamer.
Peterson’s appeals to a mythological interpretation of the New Testament are fundamentally at odds with the historical evidence we have of how New Testament authors viewed what they were writing.
This essay is most likely of zero interest to Christians, but it may be of interest to atheists who have found Peterson’s mythological discussion of the New Testament interesting.
In justifying how different layers of truth (historical, metaphorical, archetypal, etc.) are imbued into a single great text, such as the Bible or the Mesopotamian mythos, Jordan Peterson is wont to appeal to the expanding “penumbra” of knowledge: there is a core circle of things that we clearly understand, there is a huge ocean of things that we don’t understand at all, and then there is a penumbra of things between the circle of things we know and the ocean of things we don’t know that we are attempting (badly) to understand. In Peterson’s understanding, this penumbra is the realm of myth—we cannot consciously explicate (for instance) what the optimal response to uncertainty is, so we instantiate the optimal response in myth so that we can attempt to understand the myth, which is more comprehensible than the uncertainty itself.
This essay is not intended as a criticism of this model. Certainly the model appears prima facie plausible for myth that began as oral tradition, and if we are willing to entertain some Jungian archetypes then it appears plausible even for narrative more generally. I merely want to point out that this model only makes sense when the author of a great text is intending to write a narrative, not when they are intending to convey a history of things that actually happened. There is obviously a great deal of interpretative work to be done in, for instance, a history of the First World War, but that interpretative work is constrained by the actual events that occurred: the interpretation is legitimate only to the extent that it admits Archduke Franz Ferdinand’s assassination on 28 June 1914 and the armistice’s being signed on 11 November 1918.
The problem then, possibly for Peterson and certainly for many of his followers, is that the historical record (i.e. the record that we can obtain on the historical evidence alone without any special pleading to divine revelation) clearly depicts the New Testament’s authors as believing the events they describe to have actually happened—they did not appear to believe they were writing myth.
The New Testament can be roughly divided as follows:
To begin, there are the four gospels (Matthew, Mark, Luke, and John), which function essentially as histories of Jesus’ life, followed by Acts, which is essentially a history of the early Church. It’s generally believed that these were written somewhere between 30 and 80 years after Jesus’ death.
To end, we have the Book of Revelation, which is a prophetical book that most major denominations do not believe should be interpreted in a straightforwardly literal way and therefore doesn’t concern us here.
Between these are the epistles, which are actual letters that actual Christians wrote to actual other Christians or to be read aloud to actual Christian congregations. They are generally (as with most letters now) intended to convey some information or exhortation to the recipient. The best historical evidence we have dates most of these as having been written earlier than the Gospels (starting from roughly 20 years after Jesus’ death).
To be clear, there is abundant purely historical evidence that most of these epistles were real letters written by real Christians to other real Christians, and not simply edited or cobbled together at a later date to reinforce an extant Christian mythology. For a start, Biblical scholars generally concede where sections were likely added in by later scribes (for instance, 1 Corinthians 14:34–35 is generally believed to have been added later), and while there is scholarly division over whether some epistles were written by their purported author (e.g. whether the Epistle to the Colossians was written by Saint Paul), there is no such division over other epistles (e.g. whether the Epistle to the Romans was written by Saint Paul). Indeed, even where there is division about whether an epistle was written by its purported author, the alternative author is generally proposed to be a contemporary admirer of the purported author, not some clergyman several hundred years later. In other words, the epistles are likely a very good reflection of actual Christian beliefs in the time shortly after Jesus’ death. And if we take that seriously, then the idea that the referents of the letters are intended primarily as mythological rather than historical is… bizarre.
Take, for instance, Chapter 15 from the First Epistle to the Corinthians, which according to scholarly consensus was indisputably written by Saint Paul. Emphasis is mine; it’s worth reading in full:
1 Now I would remind you, brothers and sisters, of the good news that I proclaimed to you, which you in turn received, in which also you stand, 2 through which also you are being saved, if you hold firmly to the message that I proclaimed to you—unless you have come to believe in vain.
3 For I handed on to you as of first importance what I in turn had received: that Christ died for our sins in accordance with the scriptures, 4 and that he was buried, and that he was raised on the third day in accordance with the scriptures, 5 and that he appeared to Cephas, then to the twelve. 6 Then he appeared to more than five hundred brothers and sisters at one time, most of whom are still alive, though some have died. 7 Then he appeared to James, then to all the apostles. […]
12 Now if Christ is proclaimed as raised from the dead, how can some of you say there is no resurrection of the dead? 13 If there is no resurrection of the dead, then Christ has not been raised; 14 and if Christ has not been raised, then our proclamation has been in vain and your faith has been in vain. 15 We are even found to be misrepresenting God, because we testified of God that he raised Christ—whom he did not raise if it is true that the dead are not raised. 16 For if the dead are not raised, then Christ has not been raised. 17 If Christ has not been raised, your faith is futile and you are still in your sins. 18 Then those also who have died in Christ have perished. 19 If for this life only we have hoped in Christ, we are of all people most to be pitied.
20 But in fact Christ has been raised from the dead, the first fruits of those who have died.
Note the underlined bits here. In the first, Paul is explicitly saying: “If you do not believe that the historical event of Christ’s resurrection happened, you can go and verify that it happened by speaking to those before whom Christ appeared after he was resurrected”. In the second, Paul is explicitly saying: “If you do not believe literally in the historical event of the resurrection—i.e. if you believe in a mythologised version instead—then your faith is in vain”. In the third, just in case he hadn’t already made it clear, Paul is explicitly saying: “But this historical event did happen, and will happen again when we Christians are resurrected”. Again, this was an actual letter that was written by an actual Christian to be read out before an actual Christian congregation that really existed. There is straightforwardly no way to interpret the above other than to say Paul is insisting on the utterly non-mythological historicity of Jesus’ resurrection from the dead.
In other words, even if we thought for some reason that the more narrative gospels were fundamentally intended as mythological in nature (which, incidentally, we also have no real evidence for), we could not possibly believe that the epistles—which broadly precede the gospels—were intended as fundamentally mythological. I am not trying to get into whether the resurrection actually did happen here—and others elsewhere have made persuasive cases that early Christians could believe that the resurrection actually happened without its having happened. My point is solely to say that Peterson’s (and others’) insistence on looking at these first and foremost as mythologies is utterly anachronistic and entirely unsupported by the historical evidence we have about the epistles’ authorship and audience.
This isn’t to say it’s illegitimate to read the New Testament as mythological—it’s simply illegitimate to read it as mythological unless you’re a Christian. I, as a Christian, can believe that Scripture is divinely inspired and therefore operates at all levels simultaneously—historical, mythological, eschatological all at once. If a non-Christian, however, wants to read them as mythological, they have to form a case whereby the books of the New Testament were not intended as myths at the time of their authorship, were canonised by the early Church understood explicitly as historical and not as myths, and have been affirmed by Christians throughout the ages explicitly as historical and not as myths, while the whole time they were actually just an exploration of the “penumbra of the unknown” in spite of a complete lack of evidence for this.
Peterson, to his credit, refuses to rule out the historical interpretation—he simply says he’s first and foremost interested in the mythological lens. His argument, if on shaky foundations, is therefore not technically unsound. Other scholars of mythology would do well to follow his lead, or to simply affirm the New Testament as historically false instead of affirming it as myth.
 To be clear, I am not arguing for the truth claims of the New Testament in this essay. Independently, as a Christian, I do believe the New Testament is true, but this essay is only intended to show that the authors believed what they were writing to not be mythological in nature.
“Well, Christ’s spirit lives on. It’s had a massive effect across time. Well, is that an answer to the question, “did his body resurrect?” I don’t know. I don’t know. The accounts aren’t clear, for one thing. What the accounts mean isn’t clear. I don’t know what happens to a person if they bring themselves completely into alignment. I’ve had intimations of what that might mean. We don’t understand the world very well. We don’t understand how the world could be mastered, if it was mastered completely. We don’t know how an individual might be able to manage that. We don’t know what transformations that might make possible.” (transcript here: https://www.jordanbpeterson.com/transcripts/transliminal/)
Dark fulfils the theodicy expressed by Dostoevsky: Evil exists because of the lies of Man, and all the suffering of innocents is preventable, but at the end “there will occur and be revealed something so precious that it will suffice for all human hearts”.
How much unjustifiable cruelty is the world’s salvation worth? This is the question that Ivan Karamazov poses to his monastic brother Alexei in Dostoevsky’s The Brothers Karamazov. If God plans for the salvation of mankind, then how can innocents like children be tortured and tormented? How can a good God allow for unjustifiable suffering? What theodicy—what reason for the suffering—could we possibly give?
Tell me straight out, I call on you—answer me: imagine that you yourself are building the edifice of human destiny with the object of making people happy in the finale, of giving them peace and rest at last, but for that you must inevitably and unavoidably torture just one tiny creature, that same child who was beating her chest with her little fist, and raise your edifice on the foundation of her unrequited tears—would you agree to be the architect on such conditions? (p.245)
The Brothers Karamazov understands that this is not a question that is answerable in a rational way. One can either accept the whole wretched injustice of the world, or one can (as Ivan does) “most respectfully return [to God] the ticket” (p.245).
This acceptance of being unable to rationally answer this question is mirrored by Netflix’s time-travel drama Dark (huge spoilers to follow). Through seasons one and two, it is revealed that the ostensible hero Jonas will in fact inevitably become the arch-villain Adam. Having discovered that his soulmate Martha is in fact his aunt, having realised that he inadvertently triggers his father’s suicide by going back in time to save him, having watched his aunt-soulmate Martha be killed by his older self, and having at every turn been thwarted in his attempts to set the world right, Jonas—as do Ivan and contemporary thinkers like David Benatar—eventually concludes that the wretched mess of the world would be better off never having existed. Accordingly, Adam does unforgivable things to bring about his ultimately compassionate goal of euthanising the world.
One parallel universe over, Jonas never existed, and Martha assumes the hero role that had been held in Jonas’s universe by Jonas. Eventually, Martha too falls from innocence—she conceives a child with Jonas, and after being forced by her older self to kill Jonas, her heart hardens and she becomes “Eve”. Unlike Jonas/Adam, Martha/Eve answers the unanswerable question in the affirmative: existence—in the form of her child—is worth it, so she will do unspeakable things not for the compassionate goal of ending existence, but for the compassionate goal of preserving it.
In this way, Jonas/Adam and Martha/Eve represent the two options that seem available to us (and to Ivan) in the face of the ineradicable and inexcusable misery of the world. We can sell our soul by admitting Evil as part of God’s plan, or we can sell our soul by rejecting God’s creation. Both options are borne of compassion, and both options end in apocalypse.
Of course, in our world, in The Brothers Karamazov, and in Dark, Evil is simultaneously a cosmic force and a conscious choice that we freely make. Jonas/Adam and Martha/Eve certainly conspire to ensure that the free choices of individuals lead to their ruinous ends, but they often rely on individuals’ cowardice and deceit to do so. In neither world would the apocalypse have taken place if not for Alexander’s succumbing to Hannah’s blackmailing, or the four families’ refusal to convey information to one another during the earlier seasons, or Ulrich’s (many) affairs. As Katharina notes, the entire city is “like an ulcer, and we are all a part of it”; as Franziska notes, “that’s exactly what’s ruined everything—all your fucking secrets”. The causal chain runs inexorably back and forth through time to ensure that precisely the miseries that have happened will happen again, but the causal chain is only inexorable because nobody chooses to stop it by telling the truth.
In this sense, too, Dark fulfils the theodicy expressed by Dostoevsky:
There is only one salvation for you: take yourself up, and make yourself responsible for all the sins of men. For indeed it is so, my friend, and the moment you make yourself sincerely responsible for everything and everyone, you will see at once that it really is so, that it is you who are guilty on behalf of all and for all. Whereas by shifting your own laziness and powerlessness onto others, you will end by sharing in Satan’s pride and murmuring against God. (p.320)
But Dark does not end with an admonition towards responsibility. In Dark’s finale, it is revealed that the entire wretched knot of causal connections that has caused such unjustifiable and heinous suffering all stems from a father’s grief over a car accident that killed his son (Marek), daughter-in-law (Sonja), and granddaughter in the “overworld”. Hubristically trying to reverse time and bring his family back, the father accidentally split the overworld reality in two and created the two ulcerous worlds that cause such misery both for themselves and for one another. A not-yet-fallen Martha and Jonas venture to the overworld and impede Marek’s passage, thereby preventing the car accident. But in their interaction, Martha and Jonas see their souls reflected in Marek and Sonja respectively. There are reflections of the ulcer in the overworld, but the reflections are not yet distorted and twisted. We see the world before the unjustifiable suffering has ever occurred.
Indeed, the salvation of Marek and his family in the overworld fulfils all the empty promises made in the ulcerous worlds. The henchman Noah becomes correct when he promises that God has a plan for each of them, Adam becomes correct when he promises deliverance into paradise, and Jonas is finally vindicated in his belief that he and Martha are soulmates. In one of the most touching scenes of the series, Adam reveals to Eve that they can both finally lay down their swords. He reveals that Jonas and Martha have been sent to the overworld and that the ulcer will finally be lanced: he will win, in that their worlds will be no more, and she will win, in that a world will remain. They have fought the good fight, they have finished the race, and they have kept the faith.
After she and Jonas save Marek and his family, but before the ulcer ceases to exist, Martha hauntingly asks “will anything of us remain?”. But this perhaps asks the wrong question. Had she asked instead whether anything of the misery and suffering of their worlds could be justified, she could have answered as Ivan had wanted to:
… I have a childlike conviction that the sufferings will be healed and smoothed over, that the whole offensive comedy of human contradictions will disappear like a pitiful mirage, a vile concoction of man’s Euclidean mind, feeble and puny as an atom, and that ultimately, at the world’s finale, in the moment of eternal harmony, there will occur and be revealed something so precious that it will suffice for all human hearts, to allay all indignation, to redeem all human villainy, all bloodshed: it will suffice not only to make forgiveness possible, but to justify all that has happened with men. (p.235–236)
“And I saw a new heaven and a new earth: for the first heaven and the first earth were passed away; and there was no more sea. And I John saw the holy city, new Jerusalem, coming down from God out of heaven, prepared as a bride adorned for her husband. And I heard a great voice out of heaven saying, Behold, the tabernacle of God is with men, and he will dwell with them, and they shall be his people, and God himself shall be with them, and be their God. And God shall wipe away all tears from their eyes; and there shall be no more death, neither sorrow, nor crying, neither shall there be any more pain: for the former things are passed away.”
 All page numbers cited are in reference to: Dostoevsky, Fyodor. 1992. The Brothers Karamazov. Translated by Richard Pevear and Larissa Volokhonsky. New York: Penguin Random House.
 See, for instance, his book Better Never to Have Been (Wikipedia page here).
 For more on this specific plot interpretation, see here or archived version here.
Society does not owe me a fair opportunity to become the next Brad Pitt. And society does not owe any of us a fair opportunity to become a lawyer, doctor, or academic.
I will never be the next Brad Pitt. Even if I had the talent for acting (I do not), at 23 I would likely be starting too late. In any event, becoming an actor requires a huge degree of luck and perseverance, and I am unwilling to endure the years of working three side-gigs to make ends meet until I am noticed and land my first big show. Being an actor is a luxury job, a “status” job, a lifestyle job, just like being an artist or a supermodel or a politician. We all understand that you need to be a very specific kind of person to even compete to be the next Brad Pitt, and even then it remains a pipe dream. As we all know, “sensible people” do jobs like teaching or accounting or plumbing—jobs that most people can do with sufficient tenacity. So if we acknowledge that individuals are not owed an opportunity to become the next Brad Pitt, why do we think individuals are owed opportunities to become lawyers, or academics, or doctors, or any other high-status career?
A traditional argument for equality of opportunity might go as follows. Citizens should not, by circumstances of their birth, be forced into poverty or social degradation. Society should therefore care deeply about improving the opportunities that those at the bottom of the social ladder have to climb it. At a minimum, this means that every industrious citizen should have access to the education and resources to become (for instance) a teacher or an accountant or a plumber.
A broader conception of opportunity might surpass mere material adequacy and argue that individuals should have a fair opportunity to access a life and career that is valuable to them. This argument is an important corrective: if an individual would be much happier as an accountant than as a plumber, then we should not be comfortable with that individual’s being denied access to the required education to tabulate expenses on the basis that they could reach material adequacy by installing sinks. But this broader conception quickly runs up against constraints—unless I am willing to take on substantial risk and effort, I will never be the next Brad Pitt, even if that would grant me inordinately more meaning than any other career.
To spell this out explicitly: we do not consistently value equality of opportunity, and with good reason. What we appear to value is equality of opportunity for a life that is good enough. When someone has access to a career that is good enough and then instead chooses a life of precarity and risk to attempt a high-status, well-paid career like Hollywood acting, we note that they consciously made the decision to accept that risk and hardship and adjust our sympathy accordingly.
It is worth noting, then, how far above “good enough” the conditions of those in elite careers are. The life of a senior lawyer will often consist of intellectually stimulating work remunerated in the six figures, in a career that everyone acknowledges as prestigious and desirable (hence the popularity of shows like Suits and How To Get Away With Murder). Academics spend most of their career reading and thinking about interesting problems, again at salaries in the six figures and again with high social status (who wouldn’t want to be Stellan Skarsgård from Good Will Hunting?). It is not at all surprising that there is intense competition for these roles—as with acting in Hollywood, once you’re “in”, it’s one of the best careers. Further, applicants to these careers would all have been able to access less competitive opportunities that offer stable and relatively interesting work from early on. Prospective lawyers could easily have become comfortably paid bureaucrats in the public service or the corporate world, prospective academics could easily have become high-school teachers, and prospective doctors could easily have become nurses.
As such, the complaints about poor conditions at the start of one’s career begin to ring slightly hollow. The conditions are so poor precisely because the end-conditions are so good for those who make it through (the starting salary is roughly AU$100,000 for academics and doctors), and the applicants choose to endure those risks and the hardships of the training because the end result is so good. If I were to read it uncharitably, the complaint seems to be “it’s unfair that I should have to endure hardship to obtain a job well above the average salary where I do work inordinately more interesting and high-status than I would be able to do anywhere else”.
It’s possible that you think my argument up till now is heartless and ignores people’s suffering on the basis that they opted into it. That is a fair criticism. But it is nonetheless clear that the equivalence drawn between the lack of fair opportunity in elite and in low-skill occupations is not legitimate. Take this gem:
The fact that many of those [academic casual staff] are underpaid for the work they do only reinforces the extent to which our universities are now dependent on the same kind of exploitative labour practices that blight our economy more broadly, but especially in the hospitality industry.
Colin Long, the 2018 Victorian secretary of the (Australian) National Tertiary Education Union
Casualisation of supermarket or hospitality employment is wholly different to the casualisation of academic (or legal, or medical) work. We are concerned when hospitality or retail workers are placed in insecure employment because there may well not have been alternative options: we as a society may well be denying citizens the opportunity for a life that is good enough. But academics (or lawyers, or doctors) always had a safe and comfortable opportunity available. They simply chose not to exercise it in favour of a riskier but more promising option.
None of this is to say the system is fine as it is: I lament the overpaid executives in universities and law firms and hospitals; I wish that less of our taxpayer money funded bureaucratic bloat in these institutions. But, critically, even if we dislike aspects of the current system, that does not mean society owes anyone a comfortable road to an elite career, at least not in the way that we owe all citizens a good high school education or a comfortable road to material sufficiency. So, when the arguments for increasing the ease of obtaining these jobs involve an increase in taxpayer expenditure (e.g. increased funding for universities), we should be rightly sceptical. Society does not owe me a fair opportunity to become the next Brad Pitt. And society does not owe any of us a fair opportunity to become a lawyer, doctor, or academic.
To end, an appropriate Existential Comic:
This is a substantially edited version of the original piece, which was published under the title “People seem very confused about the merits of equality of opportunity”.
 I want to point out that I am only considering arguments here for equality of opportunity, not arguments for ensuring the fundamental dignity of every human life. This is not really intended as a policy post in any event, but as an aside, I’m really comfortable with unemployment insurance, public (i.e. single-payer) free healthcare, state-run disability insurance, and so on. But these are not broadly relevant to equality of opportunity so much as to ensuring the dignity of all human beings, and that’s a separate concern.
 Link provided for the University of Melbourne. See p.41 for salary table and p.58 for the description of “Level B”.
 See Part 4, clause 16(h). When I say “starting”, I mean “beginning as a specialist”.
 There are obviously instrumental reasons to support offering access to high-status careers for certain individuals. We might, for instance, want to ensure affirmative action for women in a political party to signal that women (as a class) are politically equal to men. Or, alternately, we might wish to ensure a minimum number of women on a corporate board to decrease the likelihood that the company would overlook concerns relevant to women (e.g. maternity leave policies). I accept that there may be good grounds for any of these instrumental reasons. Critically, however, the instrumental concerns can be met with instrumental solutions: we need not necessarily care that Jane is denied the corporate board position if it is given instead to Jenny, since either will mean more female representation on the board. It does not appear that the lack of fair opportunity per se should be what bothers us.