An alternative source of analysis and commentary

Beyond OSU




You don't have a right to believe whatever you want to


Daniel DeNicola


Do we have the right to believe whatever we want to believe? This supposed right is often claimed as the last resort of the wilfully ignorant, the person who is cornered by evidence and mounting opinion: ‘I believe climate change is a hoax whatever anyone else says, and I have a right to believe it!’ But is there such a right?


We do recognise the right to know certain things. I have a right to know the conditions of my employment, the physician’s diagnosis of my ailments, the grades I achieved at school, the name of my accuser and the nature of the charges, and so on. But belief is not knowledge.


Beliefs are factive: to believe is to take to be true. It would be absurd, as the analytic philosopher G E Moore observed in the 1940s, to say: ‘It is raining, but I don’t believe that it is raining.’ Beliefs aspire to truth – but they do not entail it. Beliefs can be false, unwarranted by evidence or reasoned consideration. They can also be morally repugnant. Among likely candidates: beliefs that are sexist, racist or homophobic; the belief that proper upbringing of a child requires ‘breaking the will’ and severe corporal punishment; the belief that the elderly should routinely be euthanised; the belief that ‘ethnic cleansing’ is a political solution, and so on. If we find these morally wrong, we condemn not only the potential acts that spring from such beliefs, but the content of the belief itself, the act of believing it, and thus the believer.


Such judgments can imply that believing is a voluntary act. But beliefs are often more like states of mind or attitudes than decisive actions. Some beliefs, such as personal values, are not deliberately chosen; they are ‘inherited’ from parents, ‘acquired’ from peers or picked up inadvertently, inculcated by institutions and authorities, or assumed from hearsay. For this reason, I think, it is not always the coming-to-hold-this-belief that is problematic; it is rather the sustaining of such beliefs, the refusal to disbelieve or discard them, that can be voluntary and ethically wrong.


If the content of a belief is judged morally wrong, it is also thought to be false. The belief that one race is less than fully human is not only a morally repugnant, racist tenet; it is also thought to be a false claim – though not by the believer. The falsity of a belief is a necessary but not a sufficient condition for that belief to be morally wrong; nor is the ugliness of its content sufficient on its own. Alas, there are indeed morally repugnant truths, but it is not the believing that makes them so. Their moral ugliness is embedded in the world, not in one’s belief about the world.


‘Who are you to tell me what to believe?’ replies the zealot. It is a misguided challenge: it implies that certifying one’s beliefs is a matter of someone’s authority. It ignores the role of reality. Believing has what philosophers call a ‘mind-to-world direction of fit’. Our beliefs are intended to reflect the real world – and it is on this point that beliefs can go haywire. There are irresponsible beliefs; more precisely, there are beliefs that are acquired and retained in an irresponsible way. One might disregard evidence; accept gossip, rumour, or testimony from dubious sources; ignore incoherence with one’s other beliefs; embrace wishful thinking; or display a predilection for conspiracy theories.


I do not mean to revert to the stern evidentialism of the 19th-century mathematical philosopher William K Clifford, who claimed: ‘It is wrong, always, everywhere, and for anyone, to believe anything upon insufficient evidence.’ Clifford was trying to prevent irresponsible ‘overbelief’, in which wishful thinking, blind faith or sentiment (rather than evidence) stimulate or justify belief. This is too restrictive. In any complex society, one has to rely on the testimony of reliable sources, expert judgment and the best available evidence. Moreover, as the psychologist William James responded in 1896, some of our most important beliefs about the world and the human prospect must be formed without the possibility of sufficient evidence. In such circumstances (which are sometimes defined narrowly, sometimes more broadly in James’s writings), one’s ‘will to believe’ entitles us to choose to believe the alternative that projects a better life.


In exploring the varieties of religious experience, James would remind us that the ‘right to believe’ can establish a climate of religious tolerance. Those religions that define themselves by required beliefs (creeds) have engaged in repression, torture and countless wars against non-believers that can cease only with recognition of a mutual ‘right to believe’. Yet, even in this context, extremely intolerant beliefs cannot be tolerated. Rights have limits and carry responsibilities.


Unfortunately, many people today seem to take great licence with the right to believe, flouting their responsibility. The wilful ignorance and false knowledge that are commonly defended by the assertion ‘I have a right to my belief’ do not meet James’s requirements. Consider those who believe that the lunar landings or the Sandy Hook school shooting were unreal, government-created dramas; that Barack Obama is Muslim; that the Earth is flat; or that climate change is a hoax. In such cases, the right to believe is proclaimed as a negative right; that is, its intent is to foreclose dialogue, deflect all challenges, and enjoin others from interfering with one’s belief-commitment. The mind is closed, not open for learning. They might be ‘true believers’, but they are not believers in the truth.


Believing, like willing, seems fundamental to autonomy, the ultimate ground of one’s freedom. But, as Clifford also remarked: ‘No one man’s belief is in any case a private matter which concerns himself alone.’ Beliefs shape attitudes and motives, guide choices and actions. Believing and knowing are formed within an epistemic community, which also bears their effects. There is an ethic of believing, of acquiring, sustaining, and relinquishing beliefs – and that ethic both generates and limits our right to believe. If some beliefs are false, or morally repugnant, or irresponsible, some beliefs are also dangerous. And to those, we have no right.


_____________________________________________________________________________


About the author

Daniel DeNicola is Professor and Chair of Philosophy at Gettysburg College in Pennsylvania and the author of Understanding Ignorance: The Surprising Impact of What We Don’t Know (MIT Press, 2017), which received the 2018 PROSE Award in Philosophy from the Association of American Publishers.



This article was originally published at Aeon and has been republished under Creative Commons.








The Danger of Absolute Thinking is Absolutely Clear



Mohammed Al-Mosaiwi

Think of the happiest and most well-adjusted person you know – what can you say about their thinking style? Are they dogmatic, with an all-or-nothing outlook on the world? Do they place totally rigid demands on themselves and those around them? When confronted with stresses and misfortunes, are they apt to magnify and fixate on them? In short, do they have an absolutist thinking style?


‘Absolutism’ refers to ideas, phrases and words that denote totality, either in magnitude or probability. Absolutist thoughts are unqualified by nuance and overlook the complexity of a given subject.


There are generally two forms of absolutism: ‘dichotomous thinking’ and ‘categorical imperatives’. Dichotomous thinking – also referred to as ‘black-and-white’ or ‘all-or-nothing’ thinking – describes a binary outlook, where things in life are either ‘this’ or ‘that’, and nothing in between. Categorical imperatives are completely rigid demands that people place on themselves and others. The term is borrowed from Immanuel Kant’s deontological moral philosophy, which is grounded in an obligation- and rules-based ethical code.


In our research – and in clinical psychology more broadly – absolutist thinking is viewed as an unhealthy thinking style that disrupts emotion-regulation and hinders people from achieving their goals. Yet we all, to varying extents, are disposed to it – why is this? Primarily, because it’s much easier than dealing with the true complexities of life. The term ‘cognitive miser’, first introduced by the American psychologists Susan Fiske and Shelley Taylor in 1984, describes how humans seek the simplest and least effortful ways of thinking. Nuance and complexity are expensive – they take up precious time and energy – so wherever possible we try to cut corners. This is why we have biases and prejudices, and form habits. It’s why the study of heuristics (intuitive ‘gut-feeling’ judgments) is so useful in behavioural economics and political science.


But there is no such thing as a free lunch; the time and energy saved through absolutist thinking has a cost. In order to successfully navigate through life, we need to appreciate nuance, understand complexity and embrace flexibility. When we succumb to absolutist thinking for the most important matters in our lives – such as our goals, relationships and self-esteem – the consequences are disastrous.


In a recent research article in Clinical Psychological Science, my collaborator, the neuroscientist Tom Johnstone at the University of Reading in the UK, and I examined the prevalence of absolutist thinking in the natural language of more than 6,400 members of various online mental-health chat groups. From the outset, we predicted that those with depression, anxiety and suicidal ideation would have a more absolutist outlook, and that this would manifest in their style of language. Relative to 19 different online control chat groups on topics from cancer to parenting, the prevalence of absolutist words was approximately 50 per cent greater in the depression and anxiety groups, and approximately 80 per cent greater in the suicidal-ideation group.
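
For readers who want a concrete sense of the measure, the sketch below (a hypothetical Python example, not the authors’ code) shows one way the prevalence of absolutist words in a post could be estimated: count matches against a word list and divide by the total number of word tokens. The word list and example posts are invented purely for illustration.

import re

# Hypothetical, illustrative word list – the dictionary used in the actual
# study is different and considerably longer.
ABSOLUTIST_WORDS = {
    "always", "never", "completely", "totally", "nothing",
    "everything", "entirely", "whole", "must", "absolutely",
}

def absolutist_prevalence(text):
    """Return the share of word tokens that appear in the absolutist list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in ABSOLUTIST_WORDS)
    return hits / len(tokens)

# Invented example posts (not study data), just to show the comparison.
post_a = "I always fail at everything and nothing ever changes."
post_b = "Some days are harder than others, but things often improve."
for name, post in (("post_a", post_a), ("post_b", post_b)):
    print(f"{name}: {absolutist_prevalence(post) * 100:.1f}% absolutist words")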


Previously, the best-known linguistic markers for mental-health disorders had been an excessive use of first-person singular pronouns such as ‘me’, ‘myself’ and ‘I’, together with a reduced use of second- and third-person pronouns. This pattern of pronoun use reflects the isolation and self-focus common in depression. Negative-emotion words are also a strong linguistic marker for mental-health disorders; however, researchers have reported that pronouns are actually more reliable in identifying depression. We found that the prevalence of absolutist words is a better marker than either: absolutist words produced bigger differences between the mental-health and control groups than pronouns did, and they tracked the mental-health groups better than negative-emotion words did. Paradoxically, negative-emotion words were more prevalent in the anxiety and depression groups than in the suicidal-ideation group.


How do we know that a greater use of absolutist words actually reflects absolutist thinking, and is not simply a result of extreme emotions and psychological distress? In a second study, we compared the prevalence of absolutist words in chat groups for mental-health conditions known to be linked to absolutist thinking (borderline personality disorder and eating disorders) with groups for conditions not linked to it (post-traumatic stress disorder and schizophrenia). All of these groups showed the same levels of psychological distress, but only those linked to absolutist thinking had elevated levels of absolutist words. This confirms that a greater use of absolutist words is specific to absolutist thinking, and not to psychological distress per se.


Despite these correlations, nothing yet demonstrates that absolutist thinking causes depression. In a third study, we examined recovery groups, whose members believe they have recovered from a depressive episode and write positive, encouraging posts about their recovery. We found that positive-emotion words were elevated by approximately 70 per cent, yet members of these groups continued to use absolutist words at a high rate – significantly greater than in the control groups and much closer to the levels in the anxiety and depression groups. Crucially, those who have previously had depressive symptoms are more likely to have them again. Their greater tendency towards absolutist thinking, even when there are currently no symptoms of depression, is therefore a sign that it might play a role in causing depressive episodes.


These findings support the recent ‘third wave’ therapies that have entered clinical psychology. The best-known of these is ‘mindfulness’, but they all advocate a flexible outlook, acceptance, and freedom from attachments. An early exponent of mindfulness is the noted psychologist John Teasdale, whose lab has produced a wealth of empirical data to support its efficacy. In a landmark 2001 study, Teasdale and his colleagues at the University of Cambridge found that an ‘absolutist, dichotomous thinking style’ predicted future depressive relapse.


Many argue that the world is a harsh place, and that it is the stresses and misfortunes in life that make people depressed, not their thinking style. Wrong! Countless people suffer misfortunes and do not get depressed or anxious, while others seemingly suffer no misfortune at all, and are blighted with depression and anxiety. The Stoic philosopher (and former slave) Epictetus opined that ‘men are disturbed not by things, but by the view which they take of them’. A sentiment that is totally, completely and absolutely correct.




_____________________________________________________________________________


About the author

Mohammed Al-Mosaiwi is a postgraduate student in Psychology at the University of Reading in the UK.



This article was originally published at Aeon and has been republished under Creative Commons.