It’s Okay to Not Give Everyone Equal Attention

Kyle Whitaker
16 min read · Oct 4, 2020

I recently found myself saying to a friend, “I literally can’t imagine a lower epistemic bar than ‘Don’t trust people who think the X-Files are real.’” I won’t bore you with the details (though you may guess them), but the gist is that someone said some incomprehensibly dumb things, some other people defended trusting this person anyway, and my brain broke a little.

Many thoughtful, kind people — because they are thoughtful and kind — think that ignoring someone or a point of view entirely is somehow wrong. Or even more strongly, that not giving equal attention to every point of view is wrong. What makes it wrong is usually less clear: if not morally wrong, then the thought seems to be that it’s at least unhealthy, or bad for your goals as an informed person. “How hypocritical would I be if I didn’t engage with the other side?” they’ll say. Or: “I need to listen to all sides of the issue so I can choose the best option.”

But this is a mistake. And it’s a mistake for some interesting philosophical reasons. Getting clear on these reasons may just help you save some mental energy, and maybe not feel so guilty about clicking “unfollow” on that one social media friend (you know the one).

First, it’s important to understand that you don’t owe everyone your attention. In what follows, I’ll use “attention” in an epistemic sense to mean “consideration of the point of view of another in the process of forming beliefs.” To receive attention is to have your point of view taken seriously as an option worth considering by another person who is forming their own position on some issue. To give attention is to take seriously someone else’s view in the context of forming your own.

In this sense, attention is a commodity, and it is scarce. How best to divvy it up is a complicated calculation. We can analyze this issue at three levels: (1) the subjective, which focuses on the individual’s ability to give attention, (2) the intersubjective, which focuses on how attention is allocated in a group, and (3) the objective, which focuses on whether attention is a basic human right.

(1) Subjective: Owing to my personal gifts and limitations, I cannot give attention equally to all, so I must choose. If I am reasonable, I will likely choose based on fit between what others have to offer, what I need to know, and what sorts of information I am capable of receiving. For example, I’m not a sports fan, so I don’t devote a lot of my attention to people who talk about sports. I know other people for whom this is a priority, and a significant portion of their attention is directed at people with this ability. On the other hand, I have a pretty serious layperson’s interest in physics, so I devote more attention to experts who are able to explain complicated physical concepts in an accessible way. This goes in the other direction as well: I try to aim my own output so that it will be considered by those with the best fit between their interests and my expertise. I’m not knocking on the doors of the local Church of Scientology to discuss the epistemology of cults, for example. In addition to fit, impact is also important: all else equal, I should focus my attention where it will do the most good — but without falling into the trap of thinking that my attention should only be focused where it does the most quantifiable good.
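As a loose illustration of the “complicated calculation” above (the names, weights, and scores here are all hypothetical, not a real method), one might imagine scoring candidate sources by a blend of fit and impact:

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    fit: float     # 0-1: overlap between what they offer and what I can take in
    impact: float  # 0-1: expected good done by attending to them

def rank_by_attention_worthiness(sources, w_fit=0.6, w_impact=0.4):
    """Order sources by a weighted blend of fit and impact."""
    return sorted(sources,
                  key=lambda s: w_fit * s.fit + w_impact * s.impact,
                  reverse=True)

candidates = [
    Source("accessible physics expert", fit=0.9, impact=0.6),
    Source("sports commentator", fit=0.1, impact=0.3),
]
for s in rank_by_attention_worthiness(candidates):
    print(s.name)
```

The point of the toy is only that allocation is a trade-off under scarcity, not that anyone should literally compute such scores.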

(2) Intersubjective: All human individuals are limited in the way just described, and so in any given human community, there may be people who are not owed any attention at all. Let’s say that one is owed attention just in case: (a) one’s desire to have one’s views considered is sincere and legitimate, (b) there is someone in one’s community who has the ability to give the attention desired, (c) there is not a more pressing demand on the giver’s attention, and (d) giving the attention desired does not unduly burden or harm the giver or anyone else. Whether these conditions are met in a given community is a contingent matter, based on things like how big the community is, and the distribution of gifts, interests, personality types, needs, etc. And even if the conditions are all met, and so one is “owed” attention, it doesn’t follow that any particular individual is obligated to give it. This might result in a non-ideal scenario, in which people who should receive attention don’t, but it’s not the fault of anyone in particular. For example, someone needs an expert ear, but there are no experts in her community (such a community would likely be very small, and/or very unhealthy). Or the scenario might be perfectly innocent, or even ideal — e.g., a community in which all and only the worthwhile ideas receive attention. This idea — that there could be an ideal community where some people do not receive the attention they want — seems counterintuitive to some. This is because it is easy to conflate rights and obligations. Which brings us to…
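For readers who like things compressed, conditions (a) through (d) can be put into a single rough schema (my notation, not a standard formalism): write $S(p)$ for person $p$ having a sincere and legitimate desire to be heard, $A(g,p)$ for a giver $g$ being able to give the attention, $M(g)$ for a more pressing demand on $g$, and $B(g,p)$ for undue burden or harm:

$$\text{Owed}(p) \;\iff\; S(p) \,\wedge\, \exists g\,\big[\,A(g,p) \wedge \neg M(g) \wedge \neg B(g,p)\,\big]$$

The existential quantifier is doing the work: the schema says only that some suitable giver exists, and from $\exists g\,[\ldots]$ nothing follows about any particular $g_0$. That is exactly why being “owed” attention obligates no specific individual.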

(3) Objective: It is not the case that everyone is owed the attention of someone else just by virtue of being human, nor that everyone is obligated to give their attention indiscriminately. Note that this is a different point from that made in (2), since it holds independently of the contingent fact that a person is subjectively unable to give indiscriminate attention (i.e., it could be the case that one has an objective obligation that one cannot in fact meet given one’s subjective limitations, but that is not what is happening here). Put differently, even if one were unlimited and could give attention freely to all, it would not follow that one is obligated to do so.

Here we need to distinguish between rights — something that someone is owed as a matter of course — and obligations — the duties that people have toward one another. The concepts are obviously linked: all else equal, one person having a right creates an obligation for someone else to respect that right. But there are limits. For example, the right to life entails the moral obligation to not take life. However, it does not entail the obligation to provide the means necessary to sustain the life of another. There is a famous thought experiment in the philosophical literature on abortion that illustrates this point. Judith Jarvis Thomson argued in the 1970s that even if a fetus has a right to life, it does not follow that a mother has an obligation to sustain that life. She asks us to imagine the following scenario: you wake to find yourself in a hospital hooked up to a violinist with failing kidneys, who is being kept alive through the connection with you. If you disconnect, the violinist will die; but in nine months the connection can be safely severed. Are you obligated to maintain the connection? This is an extremely controversial argument, but one thing is clear from its history: the answer is not obvious. This implies that having a right does not obviously create a corresponding obligation for someone else, unless certain other conditions are met.

Moving back to the epistemological context, even if there is a basic right to attention, it doesn’t automatically follow that anyone has an obligation to give attention to anyone else. But is there such a right in the first place?

There is a difference between intrinsic or basic moral value, and having a demand on the attention of others, and these are often confused in the disagreements I’ve witnessed. Every influential moral system affirms that all people are intrinsically or basically morally equal. The reasons for this vary depending on the system. There are, for example, religious reasons, Kantian reasons, utilitarian reasons, Aristotelian reasons, contractual reasons, biological reasons, or even just reasons of political expediency. One can accept basic moral equality on any of these systems without also affirming that all people are owed the same amount of attention. It is likely that each of the types of reasons just mentioned will imply different connections between intrinsic value and receiving attention; but on none of them, as far as I can tell, is a right to attention basic and immutable. This strongly suggests that attention is not a basic human right.

It may, however, still be an expectation given certain conditions. Spelling out these conditions precisely is difficult and will depend on which moral and epistemological frameworks one adopts, but something like conditions (a) through (d) above can probably be universalized. What this means for the average person trying to avoid a fruitless argument on social media is that she is under no obligation to engage with anyone else unless all of those conditions are met, and maybe not even then.

Even though we may not be obligated to give attention, we often still want to, and we want to make wise decisions about how to allocate it. So practically, how can we tell which views are worth our attention? There’s a puzzle here that starts to feel like a Catch-22: in order to find out which views are worth my time to consider, I have to already know something about them.

But sometimes I can know, without looking into a position or a person, that they are not worth my time. For example, I personally have never read, seen, or heard anything by Ben Shapiro, and yet I am very confident that I don’t need to. Similarly, I have never seen 99% of horror films, and yet I know that I would hate most of them. How does this work? Moreover, how can this be a reasonable stance?

The answer has two parts. First, humans are very good at detecting patterns, and generalizing from known cases to unknown cases on the basis of those patterns. This process is fraught and often errs, but it’s also the basis of the scientific method, so with the proper controls, it can yield very reliable results. The key is to make sure that the patterns we’re detecting are actually representative of the whole scope of the issue. For example, Newtonian mechanics accurately explained the motions of the planets of the solar system until it was noticed in the 19th century that the precession of Mercury’s perihelion was not what Newton’s math predicted. It took Einsteinian relativity — a complete reframing of our understanding of space, time, and gravity — to explain the discrepancy. In other words, our perceived patterns, as well-confirmed and seemingly sufficient as they were for so long, were not really representative of the whole.

A more mundane example: when I was in college, most of my friends and family were young earth creationists, and none of my professors were. So I was torn: all the people I trusted told me creationism was true, but they also sent me to college to learn from people who thought it was obviously false. So I set out to settle the issue in my mind by examining both sides of the argument. As I was reading through creationist literature, which is voluminous, I began to notice patterns. They’d use similar argument forms, define important terms in similar ways, and appeal to the same sets of evidence and authorities. I pretty quickly realized that I didn’t need to read all of the books creationists ever wrote to see the flaws in their position. In fact, thankfully, it only took a few. The arguments which were representative of young earth creationism were weak, so I rejected the whole view, as my professors had done.

Second, dismissing a view without examining it or engaging with its proponents can be reasonable if we are basing our beliefs on the testimony of experts. In fact, the previous point depends on this one, since expert consensus is one of the primary ways that we can know if the patterns we’re detecting are actually representative. In the creationist example, I noticed patterns of reasoning that were represented by multiple independent creationist “experts,” and those same patterns were refuted by other independent non-creationist scientific experts, which clued me in that the patterns I was discerning were real and representative.

The independence bit is important: if three people all hold the same view, but two of them got it from the third one, then you have one view, not three, and in general, three people holding it doesn’t give you more reason to believe it than just the one person. (There are exceptions, having to do with especially skilled, virtuous reasoners, but let’s set that aside for the moment.) So the fact that I was picking up patterns from independent sources, each coming to them on their own, was further evidence that these were real patterns that accurately captured the scope of the issue.
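To see why this matters, here is a toy Bayesian sketch (the numbers and the witness framing are mine, purely illustrative): with conditionally independent sources, each testimony multiplies the odds in favor of a view, while a testimony copied from another source contributes no new factor.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Bayesian updating on conditionally independent testimony:
    posterior odds = prior odds x the product of each independent
    source's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Three genuinely independent sources, each favoring the view 4:1:
print(posterior_odds(1.0, [4, 4, 4]))  # 64.0 -- strong support

# Three sources, but two of them got the view from the third:
# only one independent likelihood ratio applies.
print(posterior_odds(1.0, [4]))        # 4.0 -- same as one source
```

On this picture, “three people holding a view” is only as strong as the number of independent likelihood ratios standing behind it.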

Experts are, roughly, people who have spent a great deal of time examining an issue in a critical environment. A critical environment encourages independent thought and following the evidence where it leads. So experts are best poised to understand the scope and depth of an issue, to know what the evidence on balance supports, and to know who the other experts are. In many cases, listening to — and indeed deferring to — these experts is the most reasonable thing the average person can do. And if the experts agree that a point of view or a person is not worth paying attention to — even if on their own merits that person might count as an expert — then it is reasonable for non-experts to dismiss that point of view. For example, over 99% of relevant experts agree that climate change is significantly caused by humans. The fact that a few people who might otherwise count as experts disagree is irrelevant to what I, as a non-expert, should believe about climate change. To override this sort of consensus, one would need very compelling reasons to doubt the integrity or ability of all of those experts, and such reasons are extremely unlikely to exist.

So if relevant experts agree that a view should be dismissed, you needn’t feel bad about dismissing it. But that’s not the only way it can be reasonable to deny attention to others.

What if a view is absurd — like tinfoil hat absurd? Or openly immoral? In such cases, you do not need to engage it, unless you have some personal interest in the person who holds the view. In fact, you ought not to engage views like this, because doing so makes them seem reasonable, and therefore worthy of dialogue. To go back to the creationism case (such a fruitful example!), many young earth creationists believe that there is a legitimate controversy over the age of the universe, and that many “experts” who hold creationist views have been silenced by mainstream academia. In reality, almost no one with any relevant credentials thinks that “scientific” creationism is reasonable, and most mainstream academics couldn’t pull off an organized campaign to save their lives (at institutions I’ve worked in, just getting faculty to attend meetings and read emails sometimes proves to be an insurmountable challenge). But engaging with creationists in argument, especially publicly and especially when it’s done by actual experts, lends credence to these false and pernicious ideas, and allows the “controversy” surrounding evolution to persist. Most scientists have realized this, which is why creationists (and their “ID” counterparts) are now largely ignored by the scientific establishment. This hasn’t eliminated creationism, of course, but it has made it more socially difficult to engage it seriously, which is appropriate, given its intellectual vacuity and frequent dishonesty. If this continues, creationism will likely die of attrition as cultural values shift.

The danger of engaging absurd and immoral views can have much larger and more serious consequences as well, such as when a political party is able to convince a large portion of the electorate that their historically and ideologically radical view is actually moderate and mainstream. The current GOP and its well-known, widely watched news arm have perfected this strategy. A large percentage of the American electorate believes that Hillary Clinton is a radical liberal, that AOC is a socialist, and that the Trump administration was traditionally conservative. These views are analogous to the “creationist controversy” views mentioned above: no one who knows what they’re talking about takes them seriously, but because the appearance of debate continues, the so-called “Overton Window” keeps shifting to the right. Allow this to continue for long enough, and we end up in our present situation: one whole political party can’t tell the difference between reality and nonsense, and believers in absurd conspiracy theories get elected to public office.

So not only do you not have an obligation to give everyone your attention, you often have a positive obligation to ignore entire points of view: in cases where expert consensus holds a view to be demonstrably unreasonable, and in cases where a view is absurd or immoral.

Unfortunately, ignoring nonsense doesn’t always make it go away. QAnon, for example, is apparently flourishing under the inattention of mainstream experts. Many people, admirably, feel obligated to do something about this. Additionally, it often happens that the person spewing the nonsensical view is someone you care about, and want to help. What should you do if you fall into one of those categories?

There are no easy answers here. The choice to engage is of course up to you, and there are various strategies you can employ to make your discussion as effective as possible. But this is complicated by the fact that many nonsensical and/or immoral views are the result of echo chambers — the phenomenon in which one’s epistemic environment immunizes one against critique. Often, this involves unquestioning acceptance of some epistemic authority, and/or deep distrust of expertise. Breaking through the barriers created by echo chambers is difficult and relatively rare, and a direct argumentative assault is almost never effective. The best strategy seems to be gaining trust on an interpersonal level, which itself might entail not engaging someone’s outrageous views directly. We should also keep in mind that sometimes people just aren’t ready to be in serious dialogue, even if they think they are. We must avoid trying to educate those who haven’t asked for it, and instead let them self-guide, always leaving them an easy, judgment-free exit.

It’s also important to be scrupulously honest should you choose to engage with people who hold such views. Unintentional deception is unfortunately common here, such as when one pretends to have more in common with another than is true for the sake of opening dialogue. For example, imagine I had said to my creationist friends, “You know, I can see where you’re coming from when it comes to evolutionists wanting to squelch free speech; we can agree that more constructive dialogue about this would be better.” That sounds irenic and productive, but given what we said above, and given my actual beliefs, it’s counterproductive, and a little dishonest. Eventually, if dialogue continues, I’m going to have to explain my actual view to my friends: that the fault in this disagreement lies entirely on one side (theirs). Any trust I have gained by pretending to be their ally will quickly evaporate at that point, and rightly so. Perhaps I intended my allyship to be personal rather than ideological, which is well and good, but if I know (or should reasonably be expected to know) that they might interpret it as ideological, then I am guilty of deception. This is an inherent danger in the “bothsidesism” approach to controversial issues: while it might open dialogue, it’s lousy at seeing it through. It’s also misleading in another way: being “in the middle” is not an epistemic virtue. The truth might just as easily lie at an extreme end of the ideological continuum as in the middle of it, and as we already noted, what counts as “extreme” depends on context anyway.

Someone might object to my defense of ignoring others by pointing to the danger of giving license to bad actors. It might work okay for well-intentioned people, they might say — people who will try to apportion their attention fairly and compassionately — but do we really want to encourage already-selfish people to ignore the viewpoints of others with impunity? Wouldn’t this just exacerbate the echo chamber problem?

This is an important objection that we should take seriously. But it can be mitigated. Ultimately, I believe that people who are interested in the truth will seek it out to the extent that they are able. If that’s right, then the potential negative practical outcomes of what I’m suggesting are not devastating. To see why, compare it to a common moral reasoning strategy that most people uncritically employ on a regular basis: I go out with my friends on a Friday night for some beers (imagine we’re in pre- or post-pandemic times), and post some photos of my favorites on social media. I have social media followers who are self-described alcoholics. I know that they might see my photos and be tempted to drink, a temptation which might be augmented by their respect for me. But I choose to post the photos anyway, and I don’t feel the slightest qualm about it. Similarly, most of us regularly engage in activities that would be risky for others, without thinking twice about the example we’re setting for them. And we’re reasonable to do so. What justifies our nonchalance in such cases is our intuitive awareness that the application of moral standards is context-sensitive. What goes for one doesn’t always go for another. [This is not to say that the moral standards themselves are context-dependent. That stronger claim is almost universally rejected by moral philosophers. But that’s a discussion for another time.] I’m suggesting that the application of epistemic standards is similarly context-sensitive, and that we all know it. If we didn’t, we would avoid things like scrolling Facebook for news, or watching lowbrow reality television, or reading Reddit threads, since we know that less intellectually virtuous people than ourselves would be endangered by such activities. In general, we don’t guide ourselves with norms that we think apply universally and impersonally, but rather with norms that apply given a good will. I don’t ask “would this be a good thing for anyone in my position to do?”; I ask “would this be a good thing for a virtuous person to do?”

So I think the person who wants to know the truth will be able to implement my advice here without too much damage, since that person will not be inclined to use it to escape epistemic responsibility. Another, perhaps more direct, way to respond to the objection is to say that I am not advocating the norm that one ought to ignore anyone they don’t want to listen to. Instead, I’m recommending the stricter norm that one ought to ignore views that are immoral and/or contrary to expert consensus, except where ignoring them facilitates their negative effects. Of course, a bad actor could use this to reject reasonable views as immoral or contrary to what they wrongly think is expert consensus. But they were going to do that anyway. The fault here is not with the norm, but rather with whatever intellectual vice is preventing them from recognizing genuine expertise.

So the lesson here is not to bury your head in the sand and avoid engaging with people who disagree with you. It’s that it’s okay to be selective about who you engage with in the first place. Indeed, being a virtuous reasoner demands it.
