Why We’re All Cognitively Near-Sighted

E.J. Yozamp
7 min read · Jun 1, 2021
Photo: Karpovich, Vlada. Untitled. N.d., Pexels.

We make decisions every day. Whether it is deciding which exit to take on the highway or whether to eat the expired yogurt that has been sitting in the back of the refrigerator, we are in a constant feedback loop with our world as we engage with it. There are three core elements to decision-making: judgment (how people predict the outcomes that follow potential options), preference (how people weigh those outcomes), and choice (how people combine their judgments and preferences to arrive at a decision). Judgment is the first step in decision-making, and that is how we will discuss it here, beginning with its definition and the criteria used to assess the quality of judgments.

Judgment and decision-making research is a relatively new frontier in the field of psychology. Its beginnings can be traced to the work of von Neumann and Morgenstern (1944), Tversky and Kahneman (1974), and others, as behavioral decision research gave rise to Bayesian models of judgment over the latter half of the twentieth century. Judgment research has since shifted from Bayesian inference to multi-method analyses that can account for known unknowns (variables one is aware of not knowing) in individuals' judgments. Important research using these methods to understand judgment errors includes work on how much unknown or missing evidence contributes to overconfidence (Walters et al., 2017), on the effect of mechanistic explanation (an explanation of how something functions) on individuals' extreme political attitudes about complicated policies (Fernbach et al., 2013), and on how the likelihood of evidence and the weight of evidence are treated differently in a jury trial (Curley, 2007).

Understanding how we formulate the judgments we do has important implications for the kinds of decisions we make in the domains of health, finance, public policy, intelligence analysis, and risk management. The following review aims to provide a concise examination of all of the above.

What is Judgment?

Judgment is defined as the prediction of the potential outcomes that follow possible choices. Two criteria that may be used to assess the quality of judgments are accuracy and consistency. Accuracy is the extent to which a judgment reflects actual knowledge of the world, while consistency is the extent to which that knowledge of one event generalizes to related events. The accuracy of a judgment depends on knowledge and on calibration, the degree to which confidence in one's beliefs matches their correctness; consistency depends on the degree to which judgments logically agree with one another, as in Bayesian inference, Dempster-Shafer inference, and similar frameworks (Fischhoff & Broomell, 2020, pp. 333–335).
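To make calibration concrete, here is a minimal sketch in Python; the confidence levels and outcomes are invented for illustration and are not drawn from Fischhoff and Broomell (2020). A well-calibrated judge is right about 90% of the time when claiming 90% confidence; a positive gap signals overconfidence.

```python
# Minimal sketch of a calibration check: compare stated confidence with
# the actual hit rate at each confidence level. Numbers are illustrative.
from collections import defaultdict

# Each record: (stated confidence that the answer is correct, was it correct?)
judgments = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, False),  # 90% claims, 50% correct
    (0.7, True), (0.7, True), (0.7, False),                 # 70% claims, ~67% correct
    (0.5, True), (0.5, False),                              # 50% claims, 50% correct
]

buckets = defaultdict(list)
for confidence, correct in judgments:
    buckets[confidence].append(correct)

for confidence in sorted(buckets, reverse=True):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    gap = confidence - hit_rate  # positive gap = overconfidence
    print(f"stated {confidence:.0%} -> correct {hit_rate:.0%} (gap {gap:+.0%})")
```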

Judgment Heuristics and Human Bias

The development of judgment research grew out of behavioral decision research, a branch of psychology that began with von Neumann and Morgenstern's (1944) rational choice theory: the theory that individuals rely on rational reasoning to make choices whose outcomes serve their own best interests.

A few decades later, Tversky and Kahneman (1974) sought to explore human decision-making further by examining the judgment involved, specifically how biases arise from heuristics, the cognitive "shortcuts" that humans subconsciously use to reduce the complexity of tasks. While these shortcuts are helpful for making quick judgments, they often come at the cost of accuracy, producing what one may recognize as bias. In 1979, Kahneman and Tversky published their work on the orderly but non-rational choice process that humans use, and later research examined the cognitive capabilities humans bring to decision-making (Karelaia & Hogarth, 2008; Lieder et al., 2018).

Overconfidence and Extremism

Contemporary research on judgment advanced by shifting from Bayesian analysis (a statistical method that answers questions about unknown parameters according to their probability) to multi-method approaches such as Dempster-Shafer inference, which reserves some probability for possibilities that are unknown to the person making the judgment. Measuring judgment this way lets researchers gauge a subject's true confidence without requiring knowledge of a particular skill, fact, or event for the judgment to be measurable, while also avoiding manufactured responses from the subject, such as feigned "goodness" (Fischhoff & Broomell, 2020, p. 335). Ultimately, the performance standards of accuracy and consistency allowed experimenters to design and analyze tasks that revealed the thought processes genuinely involved in participants' judgments.
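The contrast can be sketched with a toy example; the hypotheses and mass values below are invented for illustration and are not taken from any of the studies cited here. In a Bayesian analysis, all probability must be divided among the named hypotheses, whereas Dempster-Shafer theory lets some mass remain uncommitted on the whole set of possibilities, standing in for evidence the judge knows is missing.

```python
# Toy contrast between Bayesian and Dempster-Shafer-style uncertainty.
# The numbers are invented for illustration only.

# Bayesian: all probability is divided between the named hypotheses.
bayesian = {"guilty": 0.7, "not_guilty": 0.3}

# Dempster-Shafer: some mass sits on the whole frame {guilty, not_guilty},
# standing in for evidence the judge knows is missing (a "known unknown").
ds_mass = {
    ("guilty",): 0.5,
    ("not_guilty",): 0.2,
    ("guilty", "not_guilty"): 0.3,  # uncommitted mass
}

# Belief = mass that specifically supports a hypothesis.
# Plausibility = belief plus the uncommitted mass that does not rule it out.
belief_guilty = ds_mass[("guilty",)]
plausibility_guilty = ds_mass[("guilty",)] + ds_mass[("guilty", "not_guilty")]

print(f"Bayesian P(guilty)      = {bayesian['guilty']:.2f}")
print(f"DS belief(guilty)       = {belief_guilty:.2f}")
print(f"DS plausibility(guilty) = {plausibility_guilty:.2f}  # gap reflects missing evidence")
```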

An example of this is Walters et al.'s (2017) study, which sought to determine how much unknown or missing evidence plays a role in overconfidence; the researchers hypothesized that overconfidence is generated by neglecting such evidence. Previous research had shown that overconfidence stems from a bias toward a focal hypothesis generated only from known variables (Walters et al., 2017, p. 5). Participants judged the probability that the potential solutions they generated for scenarios with varying options were correct. Those prompted to think about known unknowns (variables one knows exist but lacks enough detail about to determine their significance for the problem) showed a larger reduction in overconfidence than those who considered alternative solutions, a classic debiasing technique; however, the prompt did not affect participants' confidence in judgments that were already well calibrated or underconfident.

Fernbach et al. (2013) demonstrated a related effect with mechanistic explanation (an explanation of how something functions) and individuals' extreme political attitudes about complicated policies. The researchers hypothesized that asking people to provide a mechanistic explanation for the policies they held extreme views about would undermine the illusion of explanatory depth (the belief that one understands more about the world than one actually does) and therefore lead to more moderate views. Because previous research suggests that explaining one's reasons for a position may actually encourage extremism (Tesser et al., 1995, as cited in Fernbach et al., 2013), Fernbach et al. reasoned that having participants explain how a policy works, rather than list reasons for their opinion on it, might force them to recognize their lack of understanding once they see how complex the policies are, and thereby challenge their beliefs.

The results of the study confirmed that mechanistic explanations produced more moderate views; the effect did not occur for those who were simply asked to list reasons for their extreme views. The results further revealed that generating mechanistic explanations reduced donations to advocacy groups for those policies. From these results the researchers conclude that political extremism may be attributed, in part, to a misreading of one's own level of understanding of the causal processes behind the relevant policies, driven by oversimplification.

This kind of miscalibration can arise from the brain's propensity to focus on the strength of evidence, or its representativeness, while undervaluing the actual weight of the evidence, meaning its quantity and credibility. Curley (2007) provides evidence for this by examining how contradictory evidence (evidence that disagrees about the same event) is treated in comparison to conflicting evidence (evidence that suggests different events) in a jury trial. While subjects' responses indicated that they understood weight and likelihood to be distinguishable, the likelihood of the events influenced their judgments more than the actual weight of the evidence did. These results suggest that the human brain is biased toward drawing on its store of lived experiences for representativeness rather than on the number of actual, reported occurrences of an event; and if that store is limited, one may oversimplify the range of potential variables involved in a situation. This would explain why generating unknown variables and mechanistic explanations can reduce overconfidence and moderate extreme attitudes.
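As a toy arithmetic illustration of strength versus weight (not drawn from Curley, 2007; the coin-flip numbers are invented), consider judging whether a coin is biased toward heads. A small sample with an extreme proportion often feels more convincing than a large sample with a milder proportion, even when a straightforward likelihood calculation says the larger sample carries more evidential weight:

```python
# Toy illustration of strength vs. weight of evidence (numbers are invented).
# "Strength" here is how extreme the sample proportion is; "weight" is how
# much evidence (sample size) stands behind it.
from math import log, exp

def log_likelihood_ratio(heads: int, flips: int, p_biased: float = 0.6) -> float:
    """Log likelihood ratio for 'coin is biased toward heads' vs. 'coin is fair'."""
    tails = flips - heads
    return (heads * log(p_biased) + tails * log(1 - p_biased)) - flips * log(0.5)

# Sample A: extreme proportion (90% heads) but little data behind it.
# Sample B: milder proportion (60% heads) but ten times the data.
for label, heads, flips in [("A: 9/10 heads", 9, 10), ("B: 60/100 heads", 60, 100)]:
    llr = log_likelihood_ratio(heads, flips)
    print(f"{label}: likelihood ratio of about {exp(llr):.1f} in favor of a biased coin")

# A proper weighing of the evidence favors sample B (about 7.5 vs. 4.1),
# yet the extreme 9/10 sample intuitively feels more convincing:
# the strength of the evidence crowds out its weight.
```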

Implications

According to Fischhoff and Broomell (2020), choice is the combination of judgments and preferences that forms a decision, and according to Tversky and Kahneman (1974), heuristics affect decision-making because they affect one's ability to make judgments. Therefore, if one is unable to make accurate judgments, one will struggle to make decisions that reflect reality; this is why judgment matters as the first step of decision-making.

Judgment research, both old and new, has demonstrated this importance by addressing the gaps in our ability to make sound judgments, by developing ways to test those judgments statistically, and by tracing the very real implications of both. By understanding these gaps, improvements can be made in health, finance, public policy, intelligence analysis, and risk management, at both the individual and the organizational level. More broadly, any situation that requires accurately interpreting situational variables to reach an effective conclusion, especially one that involves action, stands to benefit from judgment research. Whether it is judging whether a set of symptoms indicates a particular disease, whether a social media post is accurate, or whether it is worth investing an hour in streaming an episode of a new television series, our ability to form judgments determines our ability to make meaningful decisions.

ko-fi.com/ejyozamp

Curley, S. P. (2007). The application of Dempster-Shafer theory demonstrated with justification provided by legal evidence. Judgment and Decision Making, 2(5), 257–276.

Fernbach, P. M., Rogers, T., Fox, C. R., & Sloman, S. A. (2013). Political extremism is supported by an illusion of understanding. Psychological Science, 24(6), 939–946. https://doi.org/10.1177/0956797612464058

Fischhoff, B., & Broomell, S. B. (2020). Judgment and decision making. Annual Review of Psychology, 71, 331–355. https://doi.org/10.1146/annurev-psych-010419-050747

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292. https://doi.org/10.2307/1914185

Karelaia, N., & Hogarth, R. (2008). Determinants of linear judgment: A meta-analysis of lens model studies. Psychological Bulletin, 134(3), 404–426. https://doi.org/10.1037/0033-2909.134.3.404

Lieder, F., Griffiths, T. L., & Hsu, M. (2018). Overrepresentation of extreme events in decision making reflects rational use of cognitive resources. Psychological Review, 125(1), 1–32. https://doi.org/10.1037/rev0000074

Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton University Press.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124

Walters, D. J., Fernbach, P. M., Fox, C. R., & Sloman, S. A. (2017). Known unknowns: A critical determinant of confidence and calibration. Management Science, 63(12), 4298–4307. https://doi.org/10.1287/mnsc.2016.2580
