Research Article
Problem Testing (and beyond) in the Messy World of Public Policy
David Trafimow1, Nick Cosstick2 and Magda Osman2,3*
1Department of Psychology, MSC 3452, New Mexico State University, P. O. Box 30001, Las Cruces, NM 88003-8001, USA
2Judge Business School, Trumpington Street, University of Cambridge, Cambridge CB2 1AG, UK
3Centre for Decision Research, Leeds Business School, University of Leeds, Leeds, West Yorkshire, LS2 9JT, UK
*Corresponding author: Magda Osman, Judge Business School, Trumpington Street, University of Cambridge, Cambridge CB2 1AG, UK.
Received Date: August 30, 2024; Published Date: September 16, 2024
Abstract
Background: Accessible thoughts and affects influence everyone, and policymakers are not immune. These are ubiquitous within human
psychology, and therefore within policymaking. Furthermore, the very processes designed to expunge them may compound them.
Aims: In response to the fact that accessible assumptions (priors) and affect (emotions) play roles in policymaking, we propose an analytic framework to help guide policymaking toward a process that harnesses them for good. We first introduce the Taxonomy of Psychological Processes in Policymaking (ToPPP) and show how to apply it to policymaking using an example: addressing misinformation and antisocial behavior.
Methodology: ToPPP is a policy-focused extension of Trafimow’s taxonomy of assumptions, which was developed to expose the limitations academics face in developing theories and marshalling evidential support for them. ToPPP has three concerns: (1) the process required to test a policy problem, especially the types of assumptions needed; (2) how two main drivers of all human psychological activity, accessibility and affect, interact to influence the assumptions made by policy actors; and (3) how the assumptions and psychological drivers have an impact both at the level of the policy problem and on decision-making downstream of this.
Findings: Through the worked example, we show that accessible assumptions about the world are at the heart of policymaking, and partly determine which problems policymakers accept, as well as the policy decision-making downstream of this acceptance. We also show how ToPPP can be used to increase the specificity of planned policy interventions: doing so requires auxiliary assumptions, and each increase in specificity needed to ensure that policies are practical necessitates the addition of more auxiliary assumptions.
Synthesis: The systematic application of ToPPP to policymaking from start to end (i.e., from problem identification to problem solution) can, in a very simple way, help make the decision-making process of individual policymakers or groups of policymakers transparent.
Conclusions: Acknowledging that the effects of accessibility and affect are inevitable suggests that, rather than trying to remove their effects,
a more effective policymaking strategy would be to assume their presence and engage processes designed to increase the accessibility of more
possibilities and to allow more and potentially counteracting affects to play into decision processes.
Keywords: Policymaking psychology; policy cycle; problem testing; policy implementation; biases
Introduction
Context
Policymakers face the unenviable task of devising policies to address inherently difficult or impossible (“wicked”) problems [1-3]. However, the intractable nature of the problems policymakers face, and the overwhelming complexity of the policy process [4,5], means they can fail. In the messy world of public policy, reasons for policy failure are ubiquitous. They include: the policy problem being poorly identified or characterized [2]; the policy not adequately addressing the problem (perhaps just targeting a symptom) [2]; overoptimism regarding outcomes [6,7]; the difficulty of implementation, especially translating national policy at the local level [7]; and there being multiple models of failure (policy evaluation is tied to values and problem definition) [8].
Theories of policy change typically characterize change in terms of the balance struck between individual agency (the power and decisions of individual policymakers) and ‘structural factors’ (demographic, economic, historical-geographic, institutional, social, and technological factors which constitute the context within which policy decisions are made) [5]. When trying to reduce the failure of policies, a closely related distinction is salient: the process of policymaking versus the external factors that interact with it. These are two target areas which could be the focus of efforts to reduce the failure rate of policies. While the latter area is just too complex to be controllable, the former area is, prima facie, a viable target for improvement. That is, there are likely ways to support better policymaking processes concerning recognizing a problem, testing it, and then coming up with policies to address it. The idea is that if any improvements are made in, for example, testing policy problems, then any failures downstream of this are less likely to be attributable to this part of the process.
In light of this, the present paper concerns the policy problems that policy actors focus on and the potential downstream effects of this focus. We specify a taxonomy which concerns (1) the process required to test for the existence of a policy problem, especially the types of assumptions needed; (2) how two main drivers of all human psychological activity—accessibility and affect—interact to influence the assumptions made by policy actors; and (3) how the assumptions and psychological drivers have an impact both at the level of the policy problem and on decision-making downstream of this. Policy-problem testing (adapted from the taxonomy of assumptions in [9,10]) concerns the assumptions required to test the theory which underlies a (specific) policy problem. Thus, it can be viewed as offering a normative benchmark. Of course, the influence of psychological drivers will lead policy actors to stray from this benchmark. However, the way in which they do so is descriptively illuminating and provides the basis for pragmatic prescriptions to policy actors engaged in complex work. To avoid generating a taxonomy which provides an inaccurate picture of the policymaking process, and is unhelpful to policy practitioners, it is important to heed the lessons of modern policy literature. The specific lesson we focus on is the perilousness of overly neat and simplistic accounts of the policy process. We turn to this next.
Beyond the Policy Cycle: The Messiness of the Policy Process
For better or worse, one of the most influential (set of) models of the policymaking process has been the ‘policy-cycle’ model [4,5]. Jones (1970) [11] developed his policy-cycle model as an attempt to characterize the entire policy process in an intuitive way [12], but other policy-cycle models were soon proposed [13,14]. Policy-cycle models divide up the policy process into a sequence of (potentially iterative) stages. Different versions of the cycle are composed of different stages, but they share a strong family resemblance [5]. Cairney (2020) generated a ‘generic’ model of the policy cycle to consider these different accounts in aggregate. The generic model moves from ‘agenda setting’ (“the process of turning public issues into actionable government priorities” [15]); to policy formulation (choosing a particular policy); to ‘policy legitimation’ (ensuring the policy has the necessary support); to ‘implementation’ (using some organization to implement the policy); to ‘evaluation’ (assessing the policy’s success); and finally, to policy maintenance, modification, or termination. The policy cycle became “the textbook approach” [16,17] to policy theory, in the sense that many introductory textbooks would be structured according to its characterization of the policy process, with individual stages covered chapter-by-chapter [4].
Many criticisms of the policy-cycle conceptualization have been offered. [17] set out an insightful list. A point of relevance here is that the cycle lacks descriptive accuracy. Though individual stages might occur, they frequently do not occur in the order set out in the model [16]. For example, agenda setting does not always occur before policy formulation: “advocacy of solutions often precedes the highlighting of problems to which they become attached” [18]. Furthermore, its picture of a single cycle is false. A picture of “multiple, interacting cycles” [17,19] at different levels of government (each with level-relevant stages) would be more accurate. Relatedly, it assumes a ‘top-down’ account of the policy process: one which focuses on the intentions of policymakers over other important actors. For example, a network of actors, including “street-level bureaucrats” [20], is involved in the implementation of a policy, and policymaking can occur within this complex process [21,22].
For this reason, the policy cycle has been superseded by an altogether messier picture of policymaking [4]. Rather than a single theory of the policy process, there has been a proliferation of theories. Such theories tend to have been generated with an eye towards explaining particular facts about the policy process and then inducing out from these particular facts to the policy process in general [5]. Given this—rather unsurprisingly—no single theory is well-fitted to the entire policy process [12,5]. [23] summarized the complex picture produced by modern policy theory: “policymakers engage in a policy process over which they have limited knowledge and even less control”. His summary has two justifications: policymaker psychology and complex policymaking environments. Since the first is more suited to our purpose, that is our focus. Qua human reasoners and decision-makers, policymakers have limited working-memory capacity [24-30] and must use heuristics to make difficult decisions under time and resource constraints [31-36]. Moreover, affect is intertwined within policymakers’ reasoning processes [37-42]. In light of this complex picture, [43] rejected models which characterize the policy process in terms of simplicity and central control, including the policy cycle. At best, he argued, the policy-cycle stages can be useful as a “checklist of functions to carry out at some point”—so long as they are conceived broadly, with no fixed ordering, and the context of policymaking taken into consideration. With this in mind, we move on to considering the nature of policy problems.
Understanding Policy Problems
Policy problems are not fully mind-independent entities: they “do not exist ‘out there’” [44-46]. Instead, they are hybrid entities constituted by some facts and some mental states (assumptions, beliefs, attitudes, opinions, values, etc.), with social dynamics playing a role in the collective understanding of a problem [46,47]. For example, homelessness as a policy problem is partly constituted by facts about the number of homeless people in a district and partly by certain views: that this is wrong, that it is solvable [44], etc. This hybrid-entity conceptualization fits with the fact that policy problems are not defined in a value-free way; instead, this process depends upon one’s axiological vantage point [48,46,18].
For example, while same-sex marriage is a potential problem for religious social conservatives, this is not the case for social liberals, libertarians, and progressives. The modern literature treats this as one example of a wider phenomenon. Namely, ‘problem ambiguity’: there are many ways to think about the same phenomenon, which might not be reconcilable [49]; thus, many definitions of the problem are possible [46]. For example, anthropogenic climate change might be viewed as posing different policy problems. Beyond being a climate problem, it might be viewed as an economic problem by someone who believes that government interventions in the market will be required for the purpose of adaptation and mitigation. However, a differently disposed actor might see a different kind of economic problem in the same phenomenon: the problem of avoiding inefficiencies and unintended consequences caused by government intervention in the market for the purpose of adaptation and mitigation.
Clearly, understanding a person or group’s ideology goes some way to disambiguating a problem. However, it does not go the entire way. Individuals within a single ideological group may still have different interpretations of a problem due to their personal experiences, beliefs, and values [50]. For example, prominent libertarians have disagreed regarding the extent to which the political feasibility of libertarian ideals creates issues in developing policy. Thus, just knowing someone is a libertarian is not enough to determine their position on, say, school vouchers.
For [46], “The definition of a problem presupposes certain realities, and its solution or possible solutions are based on those presuppositions”. For Brewer and deLeon (1983), “Problems designate theory and methods, not the reverse”. In light of ambiguity, this cannot apply to broad categories labelled as problems (‘ambiguous problems’: e.g. anthropogenic climate change, economic growth, housing, etc.); it can only be true for an altogether more specific kind of thing: ‘specific problems’. Ambiguous problems come with the assumption that the phenomenon in question is problematic in some way, but this problematic nature can be cashed out in many different ways (as with the example of anthropogenic climate change above). In contrast, a specific problem is indexed by a specific sense in which the phenomenon in question is problematic, and by a specific domain and causal picture. For example, consider the difference between the ambiguous problem of misinformation (e.g., [51]) and the specific problem that social media misinformation is fueling anti-social behavior (e.g., [52,53]). The ambiguous problem does not concern a specific domain, nor a specific causal picture; by contrast, the specific problem does. Thus, a specific problem provides a ‘worldview’ for actors regarding the domain in question—it orientates their perception and conception of the domain and their action within it [50].
The recognition of problems has been a key part of the agenda-setting literature—one which, importantly, leaves a gap for problem testing. It is characterized as having a perceptual component [48,18,15], though different accounts exist. The account in [48] concerns noticing a conflict “between familiar patterns of behavior and expectation and one’s environment”. For example, an actor might notice a conflict between the expectation of affordable and dignified social care for older people and the facts on the ground. However, recognition alone cannot determine that this is a true policy problem: a problem with a true causal picture of the descriptive aspects of the problem [54,55]. To know that this is a true policy problem (and not just a problem with a specific care professional or care home), one must test for an aggregate pattern.
On Kingdon’s [18] account, problems are, in some way, already lurking in people’s minds. They are formally recognized and prioritized via indicators (such as the number of complaints against care-home staff per year), focusing events (such as crises or the personal experiences of actors), and feedback (such as the monitoring of a pre-existing policy’s implementation process). However, Kingdon did not think that the primary function of (for example) indicators is to test for the existence of a true problem. The information flowing from these channels (especially from indicators) might be used in analysis which infers the existence of a problem, or not, depending on the type and quality of the information [18]. Thus, on this account, problem recognition does not make problem testing superfluous.
These accounts leave a gap for problem testing, but why engage in this process? To see why, imagine a policy actor who recognizes a specific ‘problem’, but the assumptions behind it provide an inaccurate account of its domain. This actor has not actually identified a true problem; instead, they have identified a pseudo-problem. To the extent that they have power in, or over, the policy process, it will—at least in part—be used to target this pseudo-problem. We will consider the potential downstream effects of this in section 8. For now, note that a failure to identify a true problem means that the best we can hope for is that the messiness of the policy process transforms policies targeted at pseudo-problems into policies which (incidentally or through street-level implementation) address real problems. Thus, the failure to identify a real problem removes a reason to hope that policymakers have control over the policy process. The assumptions behind problem testing and its downstream effects (to the extent there are any such effects) can be explained by two of the main drivers of human psychological activity: accessibility and affect. We will address these in more detail in subsequent sections, but, for now, accessibility can be roughly characterized as how easily a cognition comes to mind and affect can be roughly characterized as how one feels about an accessible cognition. The role of affect is perhaps more obvious than that of accessibility. Indeed, section 2 covered modern work in policy studies which takes into account ‘policymaker psychology’, including the key role of affect. We agree that it is unrealistic to expect policy actors to avoid being guided by their affective responses (even with active strategies for such avoidance). Furthermore, section 10 argues that, even if it were possible to avoid being guided by accessibility and affect, this might actually be undesirable. For this reason, these two psychological drivers play a key role in the taxonomy we specify (in section 6).
Psychological Driver: Accessibility
Accessibility—the ease with which cognition comes to mind—has been the subject of so much work in psychology that it would take multiple books to outline it in full. Given its long history, and the robust foundations established in this domain, we introduce key illustrations of how it is studied and prototypical findings. Higgins [56] provided an early impetus towards studying the importance of accessibility in their so-called ‘Donald studies’. Participants were given a preliminary task that concerned processing words that were synonyms of ‘reckless’ or ‘adventurous’. In an ostensibly unrelated later study, they read a passage about Donald, who crossed the Atlantic Ocean in a sailboat. Participants who had previously been exposed to synonyms of ‘reckless’ judged Donald to be reckless, whereas participants who had previously been exposed to synonyms of ‘adventurous’ judged Donald to be adventurous. Thus, that which comes to mind easily because it has been primed influences judgments. There have been many different twists on this research paradigm, all of which demonstrate the power of accessibility to influence judgments.
Prima facie, it seems reasonable to argue for a limit on the importance of accessibility. If an accessibility manipulation is extremely subtle, then accessibility should only be slightly influenced—with little overall effect due to the slightness of the manipulation. In contrast, if an accessibility manipulation is unsubtle, then it should have a large effect, due to its power, but that effect can be actively discounted if the person is aware of it. Consequently, a comforting view might be that if accessibility is manipulated subtly, it does not have much effect on anything and if it is manipulated in such a way that activates conscious awareness, there is little to worry about, too, because the person can take cognitive countermeasures. However, [57,58] showed that subtle accessibility manipulations have impressive effects. Wyer and Srull [59,60] famously reviewed the literature on accessibility, and the principle remains strongly supported today (see [61] for a recent review).
Accessibility extends even to the way people think about themselves. [62] performed two relevant experiments. In one experiment, they increased the accessibility of the ‘private self’ (containing cognitions about a person’s traits and states) or the ‘collective self’ (containing cognitions about a person’s group memberships) by asking people to think for two minutes about how they are different from, or similar to, others. In a second (extremely subtle) experiment, they manipulated accessibility via an ostensibly unrelated story about an ancient Mesopotamian king who appointed a talented general (priming the private self) or member of his family (priming the collective self) to lead a detachment of soldiers. In both experiments, increasing the accessibility of the private self, or the collective self, increased the extent to which participants later described themselves via private self-cognitions or collective self-cognitions, respectively. Moreover, the manipulations worked both for American and Chinese participants, despite previous research proposing that Americans are “individualists” and Chinese “collectivists” (see [63] for a review). Underscoring how easily accessibility can influence how people think of themselves, [64] had bilingual Chinese people describe themselves in either English (a language used in individualist cultures) or Chinese (a language used in collectivist cultures). These researchers found that language, itself, acts as a self-prime; when participants described themselves in English, they wrote more private self-cognitions but when they described themselves in Chinese, they wrote more collective self-cognitions.
Since the work in the 1990s, there have been countless modifications and extensions. Oyserman and Lee [65] performed a review and meta-analysis demonstrating that these effects continue to replicate. Accessibility is crucial at the level of values too. Verplanken and Holland [66] performed six experiments showing that values influence behavioral preferences and behaviors, but only when the values are primed to become cognitively accessible. Unprimed values have no discernible effect. Following this, [67] showed that accessibility influences subjective certainty and commitment to values.
Not only does accessibility influence judgments, self-descriptions, and values, but it even influences intentions to commit crimes. Trafimow and Borrie [68] performed a complex experiment where participants were, or were not, informed about another person stealing valuable petrified wood samples from a national park. The idea was to manipulate the accessibility of the behavior. In an ostensibly unrelated task, participants were later asked about their own intentions to steal, with accessibility impressively increasing those intentions. Furthermore, due to the complexity of the experiment, [68] were able to rule out a variety of alternative explanations.
The evidence presented in this section illustrates how accessibility is likely to be a crucial factor for policymaking. This is because it is a fundamental property of all human cognition. Not only does it influence judgments in general, it even influences how people think of themselves and their intentions for the future. Moreover, if a problem is not accessible, the policymaker will be unlikely to think of it and, in turn, there will be no way to address it; only accessible problems are likely to be thought of. This has potential downsides too. This can be shown by considering three cases of (specific) problems, all of which rely on the fact that accessibility is a commonly used cue to indicate importance [67]. In the first case, an untested problem is cognitively accessible to a policy actor. The accessibility of the ‘problem’ might lead the actor to deem it important, thereby bypassing thorough testing. In the second case, a pseudo-problem is cognitively accessible, leading the actor to deem it important, despite its descriptive component being false. In the third case, a relatively unimportant problem (one which has passed/would pass testing, but this process has revealed/would reveal it is not as widespread and/or acute as initially thought) is cognitively accessible, leading the actor to overweight its importance. Thus, accessible policy problems can bias the policy process by presenting actors with unverified, wrong, or unimportant tools for determining how to proceed. As Maslow [69] warned: “it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail”.
Psychological Driver: Affect
The literature on accessibility may be vast, but the literature on affect is much larger. However, as before, an exhaustive review is not necessary. The issue of importance, here, is that affect can influence both cognitions and behaviors. It has long been known that affect can influence recall [70], judgments [71], current life-interpretations [72], perceptions of general life satisfaction (Strack et al., 1985), and many other outcomes (see [72,73] for reviews).
Trafimow [74] directly compared the ability of affect and cognition to influence behavioral intentions for 15 different behaviors, using both traditional between-participants analyses and non-traditional within-participants analyses. The former facilitates determining which behaviors are more under affective or cognitive control and the latter facilitates determining which people are more under affective or cognitive control. They found not only that more behaviors were under affective than cognitive control, but also that more people were under affective than cognitive control. A caveat is that, consistent with the previous section, accessible cognitions were nevertheless important, with the joint contribution quite impressive. When analyzing persons, the median multiple correlation for predicting behaviors, including both affect and accessible cognition as predictors, was high (0.84).
At, perhaps, a more basic level, Johnston [75] provided an evolutionary perspective on affect. The bottom line is that affect evolved because it increased the probability of survival and reproduction by turning organisms away from harmful stimuli and towards helpful stimuli. Johnston’s work suggests the intriguing possibility that cognition is an evolutionary add-on that extends the potential range of behaviors to which affect can apply, with affect remaining the primary determinant of behaviors. The structure of the brain, itself, supports this possibility [76]. Furthermore, Ketelaar’s [77] review reinforces the value of applying evolutionary thinking to psychology, which necessarily includes a consideration of affect as a crucial evolutionary determinant of behavior.
We have used ‘affect’ as a general descriptor to indicate valence of feeling, positive or negative, but social psychology research has gone far beyond that to identify specific emotions, such as anger, sadness, joy, and so on (see [78,79] for reviews). In addition, there has been an explosion of functional brain imaging research on emotion, to determine neurophysiological underpinnings [80]. Although the older research showing the many different effects of affect manipulations on how humans think and behave is sufficient for present purposes, the evolutionary and functional brain imaging work buttress both the existence and importance of affect.
The evidence considered in this section supports the view that affect is also likely a crucial factor in policymaking, due mostly to its fundamental influence on both judgment and behavior. In the mind of a policy actor, affects might be attached to different problems, solutions, ideologies, and even other actors—influencing attention, policy judgment and persuasion, and the mobilizing of coalitions [42]. With respect to policy problems, clearly affect is partly constitutive of policy actors’ (specific) problems: representing what is seen as good or bad within the domain via positive or negative valence. We will also consider how affect can interact with accessibility in important ways as part of outlining the taxonomy.
Taxonomy of Psychological Processes in Policymaking
The taxonomy of psychological processes in policymaking (ToPPP) was characterized broadly in section 1 as concerning (1) the process required to test a policy problem, especially the types of assumptions needed; (2) how two main drivers of all human psychological activity—accessibility and affect—interact to influence the assumptions made by policy actors; and (3) how the assumptions and psychological drivers have an impact both at the level of the policy problem and on decision-making downstream of this. The aim here is to introduce ToPPP as a means of characterizing how to take fundamental properties of human cognition—accessibility and affect—and highlight where they can influence problem testing and policy decision-making. To expose the various practical aspects of setting out what is needed to go from the worldview underpinning a (specific) policy problem to meaningful ways of measuring, quantifying, and inferring the causes of it, a simple framework is needed. We propose that the same framework that is used to help researchers move from theory to evidence and evaluation is applicable to policymakers wishing to move from (specific) problems to policies.
ToPPP is essentially an extension of the taxonomy of assumptions in [9,10]. This taxonomy concerns the four types of assumptions (outlined below) that move a researcher from a theory to evidence and from evidence to evaluation. We aim to show how the same assumptions can be used to move a policymaker from a (specific) policy problem to policies designed to address it. In this way, it deals with concern (1) of ToPPP by providing a normative account of how problem testing ought to be carried out. Importantly, not all policy problems need concern the use of all four types of assumptions; non-statistical problems need only make do with two.
Consider the four types of assumptions. First, theoretical assumptions: assumptions/propositions which jointly compose a theory about some domain—containing nonobservational terms referring to unobservable entities [79]. Second, auxiliary assumptions: assumptions which aid in moving from a theory about the world to an empirical hypothesis which can be tested, since they concern observable entities [79]. Specifically, auxiliary assumptions add information regarding how to manipulate and/or measure the referents of a theory’s nonobservational terms. Auxiliary assumptions can also specify initial conditions [80,81]. Third, ‘statistical assumptions’: assumptions which complete the process of moving from an empirical hypothesis to a statistical hypothesis by precisely specifying summary statistics of concern. Fourth, ‘inferential assumptions’: assumptions which justify inferences from the data to inferences concerning populations.
Trafimow [9] indicated that, although all four types of assumptions are crucial to most research in psychology, it is possible to have cases where only theoretical and auxiliary assumptions are needed. He used Halley’s comet as an example. Halley used Newton’s theory and auxiliary assumptions about the present position of the comet, the effects of gravitationally relevant bodies, and others, to arrive at his spectacularly successful prediction about the year of the return of the comet. In the policy world, non-statistical problems only require theoretical and auxiliary assumptions. For example, suppose that a (specific) policy problem assumes that there has been a cover-up within a government institution. (Potential cases include the Watergate Scandal, the Iran-Contra Affair, and the UK Post Office Scandal.) The testing of such a problem does not necessitate a statistical approach—indeed, such an approach may not even be possible. Instead, auxiliary assumptions would be needed to test for the unobservable conspiratorial intentions of the accused—assumptions, for instance, which provide rules regarding how to judge if a written communication is indicative of a cover-up (rather than a joke or honest intention).
Consider an example to illustrate how these different types of assumptions together achieve the process of moving from theory to evidence, and evidence to evaluation. Suppose our theory is composed of the sole theoretical assumption that feeling threatened causes prejudice [82]. ‘Threat’ and ‘prejudice’ are nonobservational terms, referring to unobservable entities. Thus, there is no way to test the theory directly. However, it can be tested indirectly by adding auxiliary assumptions about how to experimentally manipulate threat and measure prejudice. If these auxiliary assumptions are correct—which might not be the case—then a researcher can successfully manipulate threat and obtain an effect on the prejudice measure, thereby supporting (but not proving) the theory. Alternatively, the experiment might not work, in which case the theory might be false, or one or more auxiliary assumptions might be false [83-85,81].
Unfortunately, even the improved specificity engendered by the addition of auxiliary assumptions (pertaining to how to manipulate threat and measure prejudice) is nevertheless insufficient to provide evidence for the theory. Although the prediction is that there should be more prejudice in the threat condition than in the no-threat condition, it is not clear what we mean by ‘more’. We could mean that all people in the threat condition score higher on the prejudice measure than do all people in the no-threat condition. Alternatively, we could mean that the median score in the threat condition exceeds the median score in the no-threat condition. Or we could cash out ‘more’ through differences in means, locations, 75th percentiles, and so on. We need yet more assumptions—statistical assumptions—to increase specificity from ‘more’ to single summary statistics of focus. For example, if one assumes normal distributions, it makes sense to focus on a difference in means because means are parameters of normal distributions. However, if one assumes skew normal distributions, it makes more sense to focus on locations because locations are parameters of skew normal distributions, whereas means are not. And it is possible for differences in means and differences in locations to be in opposite directions, thereby implying opposing substantive stories [86].
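To illustrate the final point, the following sketch (ours, with invented parameter values) shows that, when the two groups have opposite skews, the ordering of skew-normal location parameters and the ordering of means can disagree:

```python
# A minimal sketch (ours, with invented parameter values) showing that the
# ordering of skew-normal location parameters and the ordering of means can
# point in opposite directions when the two groups have opposite skews.
from scipy.stats import skewnorm

# Hypothetical 'threat' and 'no-threat' groups.
a1, loc1, scale1 = 5, 0.0, 1.0     # positive skew, lower location parameter
a2, loc2, scale2 = -5, 0.4, 1.0    # negative skew, higher location parameter

mean1 = skewnorm.mean(a1, loc=loc1, scale=scale1)
mean2 = skewnorm.mean(a2, loc=loc2, scale=scale2)

print(f"locations: group 1 = {loc1:.2f}, group 2 = {loc2:.2f}")    # group 2 is higher
print(f"means:     group 1 = {mean1:.2f}, group 2 = {mean2:.2f}")  # group 1 is higher
# The two summary statistics imply opposite substantive stories, so the choice
# of statistical assumption (normal vs. skew normal) matters.
```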
Finally, researchers usually only care about sample summary statistics as a conduit to drawing conclusions about populations. Typically, researchers perform inferential statistical analyses to draw those conclusions. Although there is much recent debate about the soundness of alternative inferential procedures, they all have in common the necessity to make inferential assumptions. For example, a ubiquitous inferential assumption is that the researcher has randomly selected from the population [87,88]. The extent to which the necessity to make this assumption decreases the soundness of various inferential procedures is a matter of debate [9].
Moving on to ToPPP’s concern (2), accessibility and affect interact for all four assumption types. There are relevant cognitions that may be at differing levels of accessibility, and more accessible cognitions are more likely to influence policy. In addition, there are affects attached to accessible cognitions, and these affects also influence policymaking. However, affective experiences influence the accessibility of yet further cognitions, and so on. Thus, the interplay between the two is both complex and important, and the focus of a dedicated analysis in the following sections. To summarize thus far, ToPPP has multiple concerns—allowing it to serve multiple purposes. First, the assumptions outlined provide a means of evaluating a theory—by converting it into an empirical hypothesis for the purpose of testing. Through this process, it provides a means of testing a specific policy problem—by testing the theory specified by that problem. (Policy actors need not carry out this testing process, but we hope to show that it will aid them in their interactions with researchers and evidence-makers.) Second, two of the main drivers of all human psychological activity—accessibility and affect—interact in complex ways to influence the assumptions that actual policy actors make. This moves us on to concern (3): the complex interaction of accessibility and affect can have effects on multiple levels of actors’ policy processes. We have seen how it can influence the level of the policy problem. Yet, it can also influence policy decision-making which is downstream of the recognition and testing of policy problems. Thus, we must consider the complex interaction of accessibility and affect at two (broadly specified) levels of individuals’ (or groups’) policy processes:
• Level I — The Policy-Problem Level: Once a ‘problem’ has been recognized, ToPPP can be used to test it (Figure 1).
• Level II — Downstream of the Policy-Problem Level: Once a ‘problem’ has been recognized and accepted (whether as a result of a formal process of testing or not), aspects of the policy process which are downstream of this (including agenda setting and agitation, policy formulation, policy choice, implementation, and evaluation) can be affected (Figure 1).

The next section uses a more in-depth example to illustrate Level I.
Level I — The Policy-Problem Level
Theoretical assumptions
Let’s imagine that a problem identified by a group of policymakers is that antisocial behavior is on the rise. Moreover, the specific problem the group focuses on is that antisocial behavior (a bad thing) is caused by misinformation (also a bad thing)—providing a ‘worldview’ of this domain. This incorporates a folk ‘working theory’ of the phenomena in question. The theory the policymakers share about this domain is that, currently, antisocial behavior is caused by misinformation.
The theory also assumes that misinformation is on the rise, and antisocial behavior is on the rise, and the former causes the latter. Moreover, a further assumption is that misinformation is encountered and spread more through social media. Thus, the theory is composed of several theoretical assumptions that may have limited evidential support, or none (for review see [51]). Regardless, the policymakers accept this worldview and so task themselves with furthering policy that aims to responsibly reduce antisocial behavior by reducing misinformation.
The success of the group’s problem recognition (and further efforts regarding policy choice, implementation, etc.) will depend on accessibility and affect. Suppose that the members of the group take it as self-evident that social media is indeed causing misinformation on an ‘infodemic’ scale [89-91], and that misinformation is, in turn, causing a rise in anti-social behavior [92]. Alternative theories/worldviews may be inaccessible to these policymakers. If so, the theory—constituted by these theoretical assumptions—would be accepted without question, rather than being seen as a legitimate target of evaluation [93]. Thus, there would be no evaluation of their theoretical assumptions, because there would be no room for challenge if all agree, and little interest in sourcing evidence contrary to the theory. The impact of the inaccessibility of alternative theories (or worldviews) on problem recognition is obvious: since there is no reason to test a self-evident theory, successful problem recognition becomes a matter of epistemic luck [94]. Suppose that the policymakers have, in fact, identified a real problem, despite not evaluating the theory/worldview on which it is based. Such an outcome is not the result of their “powers, abilities, or skills”, and thus is ultimately out of their control, and so a matter of epistemic luck [94]. (This could have a downstream effect on policy choice and implementation; see section 8.) Clearly, a lack of such epistemic control is undesirable (even if it is very often unavoidable) because it leaves the success of a human endeavor in the lap of the gods from the very beginning!
Next, consider how affect complements the role of accessibility. The amount of conviction the policymakers have in the theory (or broader worldview)—and therefore its theoretical assumptions—would be accompanied by affect. The group of policymakers may feel confident, assured, and even righteous about their assumptions. In fact, they may even feel that there is a moral imperative to do something about misinformation, given their concerns about antisocial behavior [95]. If the affective associations with the theory—and therefore the theoretical assumptions—are strongly valenced, they might prove a spur to policymaking, and what follows will proceed from these theoretical assumptions.
If the affective associations with the worldview are only weakly valenced, policymakers might consider other accessible worldviews that would question both whether the problem is worth addressing and what the reasons for it are. As a result, there is room for the evaluation of their theoretical assumptions: since those assumptions are weakly held (either because there is doubt about them or because there is limited emotional commitment to them), some evaluation becomes possible and alternatives are sought. However, to truly test their theoretical assumptions, they must test them via the generation of auxiliary (and, potentially, statistical and inferential) assumptions, as outlined by ToPPP. The next few subsections cover these assumption types in turn.
Auxiliary assumptions
In order to test the theory, one would need to translate it into an empirical hypothesis. This means developing a scheme for measuring and/or manipulating its referents, in order for them to function as variables. To begin, it does not appear that the problem is well specified, as the policymakers have not focused on a specific type of antisocial behavior. Antisocial behavior includes littering, unlicensed street drinking, trespassing, begging, nuisance noise, nuisance calling, and misuse of fireworks [96]. Yet, policymakers may be more concerned with public disorder, such as riots, violent disorder, fear or provocation of violence, unlawful protest, inciting racial or religious hatred, and racially or religiously aggravated assault. So, terminology and its precision matter in the characterization of the problem, and (in this case) in determining the behaviors that need to be specifically reduced to see the reduction in anti-social behavior which the causal relationship (posited in the theory) predicts. This takes care of the conceptualization and measurement of antisocial behavior—the dependent variable. To measure misinformation, we need an understanding of the specific type of misinformation in question—even social media misinformation might come in multiple forms. For it to be manipulated, we require an understanding of the different possible states which might indicate causation versus mere correlation—for example, people with no access to social media versus social media users.
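As a purely illustrative sketch (ours; the categories, proxy, and names are hypothetical, not part of any official scheme), the auxiliary assumptions just described can be written down as an explicit operationalization of the theory’s terms:

```python
# A purely illustrative sketch (ours; the categories, proxy, and names are
# hypothetical) of how auxiliary assumptions operationalize the theory's terms.

# Auxiliary assumption 1: 'antisocial behavior' means counts of these specific
# public-disorder categories drawn from incident records.
PUBLIC_DISORDER_CATEGORIES = {"riot", "violent_disorder", "provocation_of_violence"}

# Auxiliary assumption 2: 'exposure to social media misinformation' is proxied
# (crudely) by whether a person uses social media at all.
def exposure_group(uses_social_media: bool) -> str:
    return "exposed" if uses_social_media else "unexposed"

def antisocial_count(incident_categories: list) -> int:
    """Count only incidents falling under the chosen operationalization."""
    return sum(1 for c in incident_categories if c in PUBLIC_DISORDER_CATEGORIES)

# The empirical hypothesis (still vague until statistical assumptions are added):
# antisocial_count is, in some sense, higher for the 'exposed' group.
print(exposure_group(True), antisocial_count(["riot", "littering", "violent_disorder"]))
```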
Just as theoretical assumptions can be more or less accessible, and associated with relatively strong or relatively weak (positive or negative) affect, the same is true for auxiliary assumptions. That is, the various auxiliary assumptions which could come to mind to evaluate the theory could be more or less accessible, and more or less tinged with (positive or negative) affect.
Statistical and Inferential Assumptions
To test the theory that misinformation is causing anti-social behavior, still further assumptions are needed: statistical and inferential assumptions. For example, what does it mean to say that anti-social behavior is on the rise? We could quantify this by specifying that the frequency of anti-social acts is greater each year than the previous year. An alternative way to quantify would be to specify that there is a positive correlation between year and frequency of anti-social acts. A third way would be to weight each anti-social act by how bad it is, sum the weights, and specify that the summed weights increase each year. A fourth way would be to establish a ratio of anti-social acts to pro-social acts and specify that this ratio increases each year. And there are many other ways, of which these are only a small sample. The larger point is that the empirical hypothesis that anti-social behavior is on the rise requires better specification to be useful. Statistical assumptions need to be added to transform the poorly specified empirical hypothesis into a better specified statistical hypothesis. That statistical assumptions are influenced by accessibility and affect has long been acknowledged in the sciences; once acknowledged, this can be used to understand how statistics are interpreted and used [97].
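The following sketch (ours, with invented yearly counts and severity weights) shows how these alternative specifications can be computed and how they can disagree on the same data:

```python
# A minimal sketch (ours, using invented yearly counts and severity weights) of
# how different statistical assumptions turn the vague claim "anti-social
# behavior is on the rise" into different, potentially conflicting, hypotheses.
import numpy as np

years      = np.array([2019, 2020, 2021, 2022, 2023])
antisocial = np.array([1040,  990, 1100, 1080, 1150])   # hypothetical counts
prosocial  = np.array([5000, 4300, 5200, 5400, 6100])   # hypothetical counts
severity   = np.array([1.0,  1.3,  1.1,  1.2,  1.1])    # hypothetical severity weights

# (1) Frequency strictly greater each year than the previous year?
strictly_rising = bool(np.all(np.diff(antisocial) > 0))
# (2) Positive correlation between year and frequency?
positively_correlated = bool(np.corrcoef(years, antisocial)[0, 1] > 0)
# (3) Severity-weighted totals increasing each year?
weighted_rising = bool(np.all(np.diff(antisocial * severity) > 0))
# (4) Ratio of anti-social to pro-social acts increasing each year?
ratio_rising = bool(np.all(np.diff(antisocial / prosocial) > 0))

print(strictly_rising, positively_correlated, weighted_rising, ratio_rising)
# With these invented numbers the four criteria disagree, so which statistical
# assumption is chosen determines whether the hypothesis is supported.
```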
Even a well-specified statistical hypothesis might be insufficient. Suppose a policy change is expected to affect the population of a particular geographical area. It might not be clear that the sample for which a statistical hypothesis was confirmed—if, indeed, that was so—can be taken as indicating that the statistical hypothesis works for the population of concern. It might be necessary to make the inferential assumption that the sample is representative of the population. In turn, to make that case, it might be necessary to argue that the sample was randomly selected from the population or that other measures were taken to ensure representativeness. Otherwise, even if the statistical hypothesis is confirmed with respect to the sample, there may be little reason to believe that it applies to the population. In that case, it might be ill-advised to expect a policy change based on the sample findings to work for the population where it is intended to work.
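A minimal simulation sketch (ours, with hypothetical subgroup sizes and effects) illustrates why the representativeness assumption matters: a convenience sample drawn mostly from one subgroup supports a conclusion that does not hold for the population the policy actually targets:

```python
# A minimal simulation sketch (ours, with hypothetical subgroup sizes and
# effects) of the representativeness point: a convenience sample drawn mostly
# from one subgroup misestimates the quantity for the target population.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: 30% heavy social media users, 70% light users, with
# the outcome elevated mainly in the heavy-use subgroup.
n_pop = 100_000
heavy = rng.random(n_pop) < 0.30
outcome = np.where(heavy, 0.8, 0.0) + rng.normal(0, 1, n_pop)

print(f"population mean outcome:         {outcome.mean():.2f}")

# Non-random (convenience) sample drawn entirely from heavy users.
convenience_idx = np.where(heavy)[0][:2_000]
print(f"convenience-sample mean outcome: {outcome[convenience_idx].mean():.2f}")

# A simple random sample of the same size tracks the population far better.
random_idx = rng.choice(n_pop, size=2_000, replace=False)
print(f"random-sample mean outcome:      {outcome[random_idx].mean():.2f}")
```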
A related issue is the extent to which standard (Neyman-Pearson) hypothesis testing can provide the requisite evidence for such an evaluation. Policymakers are often concerned with the statistical significance of the results they have access to—thus, this seems an easily accessible inferential question. The implication is that, if the findings are statistically significant, then that validates the policy problem and justifies going forward with policy formulation. There are many problems with such reasoning, but two will suffice for present purposes. One problem is that significance testing has increasingly come under scrutiny, with many experts arguing that it is unsound and should be abandoned (see the 43 articles in The American Statistician, 2019, special issue). Even ignoring the first problem and pretending that significance testing is a sound inferential procedure, statistical significance and practical significance are very different issues. Because the inferential model is never correct with respect to all assumptions [9], statistical significance is all but guaranteed provided the sample size is sufficiently large. However, statistical significance need not mean that the effect size is large enough to justify a change in policy or a new policy [98,99]. Rather than asking about statistical significance, policymakers ought to ask about effect sizes, as these are much more important for policymaking [99-101]. Moreover, a very recent statistical literature has developed for estimating probabilistic advantage; that is, the probability of being better off, and by varying amounts, in one condition of the study relative to another condition [102-105].
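A minimal simulation sketch (ours, with an arbitrarily tiny true effect and a very large hypothetical sample) of the gap between statistical and practical significance:

```python
# A minimal simulation sketch (ours, with an arbitrarily tiny true effect and a
# very large hypothetical sample) of the gap between statistical and practical
# significance.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 200_000                                          # very large sample per group
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)    # tiny true effect (d = 0.02)

t_stat, p_value = ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2
)
print(f"p = {p_value:.2e}, Cohen's d = {cohens_d:.3f}")
# p is far below 0.05, yet the effect size is trivial for policy purposes.
```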
Even assuming that inferential assumptions have been validly applied to arrive at, and support, an inferential hypothesis, there remains the issue of causation [106]. Regarding whether misinformation causes anti-social behavior, suppose that what is meant by ‘misinformation’ and ‘anti-social behavior’ has been extremely well specified. Suppose also there is good reason to believe that the correlation between the two pertains to the relevant population with a sufficiently large effect size to matter. What is left is to show that misinformation causes anti-social behavior. Unfortunately, the mere fact of a correlation coefficient, even if it is reasonably large, is insufficient. Many researchers resort to complex path models to make the case for causation, but caution should nevertheless remain. Ultimately, complex path models are based on an underlying matrix of correlation coefficients, and so they fail to solve the standard problem that correlation need not indicate causation [107]. In fact, it is possible to argue that complex path models make the problem even worse. In the case of a single underlying correlation coefficient, although it need not indicate causation, it is at least possible that causation is nevertheless there; that is, the correlation coefficient is for the correct theoretical reason. However, if there are three variables, so there are three underlying correlation coefficients, then all three must be for the right theoretical reason for the hypothesized causal pathway to be true. If we generously assume a probability of 0.70 for each correlation coefficient, the overall probability that all are for the correct theoretical reason is 0.70 × 0.70 × 0.70 = 0.343, much less than a coin toss. And the problem worsens as the model becomes increasingly complex (see [108-110] for thorough accounts).
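To make the arithmetic explicit (a generalization of ours, under the simplifying assumption that the links are independent), if each of the k underlying correlation coefficients has probability p of holding for the correct theoretical reason, then:

```latex
\[
P(\text{all } k \text{ links are for the correct reason}) \;=\; p^{k},
\qquad 0.70^{3} = 0.343, \qquad 0.70^{6} \approx 0.118 .
\]
```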
Level II — Downstream of the Policy-Problem Level
Once a policy problem has been accepted by a policymaker—or, in the case of our example, a group of policymakers—other policy-related decisions may come downstream of this acceptance. (Of course, such downstream effects are not certain. For example, a policymaker might accept that misinformation causing antisocial behavior is a problem, but then have no further involvement with this problem.) These include agenda-setting and agitation, policy formulation, policy choice, implementation, and monitoring and evaluation. Importantly, accessibility and affect might still combine in important ways in influencing such decision-making.
One potential issue is the extent to which the causal picture outlined in the specific problem’s theory is the whole story. It could be that misinformation spread through social media is a real problem but (comparatively) not the pivotal cause of antisocial behavior. Then, too, it is possible that there are more pressing problems to address than social media misinformation [111]. Yet, if it is a highly accessible causal factor, and/or one associated with highly valenced affect—perhaps due to its association with certain salient focusing events [18]—then it might receive undue weight in agenda setting.
Suppose that policymakers have avoided the preceding issue in the formulation of policy aimed at reducing antisocial behavior. An issue which might be faced is how to implement the chosen policy. Here, policy selection confronts a challenge similar to the one faced in problem testing. Namely, the chosen policy might be vague—just as the assumptions attached to a specific problem might be vague—and refinement might be necessary to reach the requisite level of specificity for an effective policy to adequately address the problem. Just as with problem testing, there can be no increased specificity without adding auxiliary assumptions. (This is true whether one is in a top-down or a bottom-up context. In the former case, the further auxiliary assumptions would be provided by the more senior policymaking teams in central government [112]. In the latter case, they would be provided (to a large extent) by street-level actors in an ad hoc manner [20].) This can be risky in the sense that each additional auxiliary assumption could be wrong. Nevertheless, the risk is necessary to achieve the requisite specificity for an implementable policy.
Suppose that the general policy is to reduce misinformation by targeting social media companies, and the intended effect is a reduction in public disorder. Then, too, specificity is needed with respect to precise numbers. By how much should misinformation be reduced? How many public disorder behaviors should be targeted? By how much should each public disorder behavior be reduced? There needs to be a specific number, or at least a range of numbers, that policymakers would deem acceptable to implement the policy in solving the problem. This presents an issue if nothing in the policy states the precise value, or range of values, acceptable to solve the problem. To traverse the distance between the policy and the greater specificity needed to carry it through, it is necessary to make more auxiliary assumptions. Perhaps too small a decrease in misinformation would be ineffective. But perhaps too large a decrease would cause public alarm and backfiring effects, given the secondary consequences [113] of the measures introduced [114]. To address these issues, it is necessary to make assumptions about the likely effects of various amounts of decreases in misinformation on social media.
An issue that is insufficiently (or not at all) appreciated is that the extra assumptions which come into play in this process are themselves associated with various cognitions and affects. Policymakers might have generally positive affect towards reducing misinformation to reduce, say, public disorder, but negative affect towards any single specific policy aimed at doing so. This might be for personal reasons. For example, some policymakers might take a strong line on free speech [95], and so might worry that a policy targeting reductions in misinformation would lead to stricter constraints on the ways in which people communicate via social media.
Policymakers might have general negative affect towards public campaigns designed to increase pressures on social media companies to change their monitoring of misinformation and be stricter in taking punitive actions against those posting misinformation [115-117]. This might be because they do not see the value in them, or because they are more inclined towards harder interventions. Alternatively, policymakers might consider simply changing the regulations and punishing social media companies with large fines [118] if they do not reduce the amount of misinformation annually by 50%. Some policymakers might have negative affect towards being so draconian and might instead favor a lower value, say 10%. Or, even if some policymakers have positive affect towards the thought of the 50% decrease, they might experience negative affect based on the response of a minister (or head of state) who is contemplating how this policy will impact their voters.
Level II Case Study: Worldview-Inconsistent Policy Choice
There are many examples from history of how policymakers have eschewed specific policies despite their consistency with those policymakers’ worldviews. Henry Clay (1777–1852), an often admirable and highly influential congressman, provided some cases in point. Clay famously touted his American System, which featured (a) high tariffs to promote American businesses and raise money for the government and (b) strong expenditures on internal improvements, especially those designed to facilitate transportation [119]. And yet, there were times when Clay militated against specific plans that were consistent with his general desire to promote the American System. For example, a fellow congressman named Benton pushed a proposal to appropriate $230,000 for a railroad in Missouri intended to feed a transcontinental line toward the Pacific. This proposal was exactly the sort one would have expected Clay to support, based on his American System. Yet, Clay opposed it largely because of his negative affect towards Benton [119]. The example of Clay versus Benton illustrates the importance of affect in policymaking. As a further illustration, a few days later Clay supported a similar bill, not proposed by Benton, pertaining to water transportation [119].
Summary and further considerations
To summarize thus far, at Level I (the policy-problem level), policymakers have worldviews that are more accessible or less accessible, and that are associated with more or less strongly valenced (positive or negative) affect. These might, in turn, stimulate yet more cognitions to become accessible or more positive or negative affect to ensue. Assuming the worldview is accessible and associated with strongly valenced affect, it may become strongly activated by conflict between one’s expectation and one’s environment, indicators, focusing events, and feedback. Upon such activation, the worldview may stimulate the formation of a policy problem (e.g., a rise in misinformation causes a rise in antisocial behavior). But this general plan is subject to: (a) accessible auxiliary (and potentially statistical and inferential) assumptions, and (b) affects activated by these assumptions. In turn, at Level II, in moving beyond the acceptance of the policy problem, there are additional processes. Indeed, for implementation, it is necessary to make yet more auxiliary assumptions which, in turn, activate more affects and cognitions. And once one has implemented a policy, that policy might be evaluated—bringing yet more auxiliary assumptions into play, and potentially activating yet more cognitions and affects! Indeed, even this greater specificity might be insufficient for implementation, and the process may need to be repeated once again, this time targeted at the further cognitions. Our expectation is that the number of times one goes through this process will depend on a variety of factors, such as the difficulty of implementing the plan, the creative abilities of the policymakers involved, and the amount of effort that the policymakers involved are willing to devote to the plan.
Mental Toggling
To avoid the mistake of the policy cycle model, further clarification must be added to the picture provided by Levels I and II of the taxonomy. Let us commence with an example and then generalize. Suppose that the policy problem we have been considering has been accepted by a policymaking team in central government, who have formulated a policy to address it. (Thus, this team has moved beyond Level I to Level II.) Part of the policy for reducing misinformation is imposing a fine on social media companies for failing to reduce it annually by 50%. However, a colleague within the team comes up with a brilliant argument about why the plan will do more harm than good: for instance, that the fines will impact advertising revenue for social media platforms, which will in turn lead social media companies to introduce high subscription charges for the use of certain services [120,121]. More than that, the colleague shows how the policymaking team can benefit politically by looking at other factors to reduce public disorder. It seems plausible that the team might adjust their specification of the problem, or at least the affect attached to it, in line with the new thinking. This might be done, for example, by deciding that, after all, misinformation is perhaps not a direct cause of increases in public disorder. (Thus, they move back to Level I and make alterations.)
To the extent that the policymaking team changes their thinking due to an expanded accessibility of potential benefits and harms, mental toggling is a good thing. An exception, which is likely rare, would be if the policymaking team acts primarily to benefit themselves at an overall negative expected value to the public. To the extent that policymakers already engage in mental toggling, the present comments are descriptive; however, our bet is that administrators could engage in much more mental toggling than is typical, with a substantially improved probability of benefitting the public. A counterargument is that mental toggling requires resources in terms of mental energy, time, and so on, but the potential gains may justify those costs.
In general, we recommend that policymakers mentally toggle between levels of the taxonomy, so the foregoing descriptive characterization is normative too. Toggling results in the reevaluation of plans at the two levels, and perhaps even of general worldviews. It also may result in yet more auxiliary assumptions coming to the fore. Policymakers might see where their auxiliary assumptions could be replaced with better ones, thereby leading to an improved policy. Moreover, mental toggling provides opportunities for policymakers to experience different affects, such as negative affect where there had previously only been positive affect, which, in turn, could lead them to realize that their auxiliary assumptions are poorer than originally suspected. In addition, experiencing a different affect can lead the cognitions associated with it to become accessible. In turn, these formerly inaccessible cognitions can lead to auxiliary assumption reappraisal or to better auxiliary assumptions, thus improving the policy. Returning to the misinformation example, mental toggling could lead policymakers to see why the original plan for reducing misinformation might not work, or might have negative consequences that outweigh the benefits. This could lead them to reject a plan that previously seemed beneficial but had a low probability of success, in favor of a new plan with a greater probability of being beneficial. We believe that the benefits of mentally toggling across levels of the taxonomy are not sufficiently appreciated: the influence of assumptions and affect can transmit not only from higher to lower levels, but from lower to higher levels too. A vital advantage of mental toggling is that considerations at one level can influence thinking at another level, to the overall betterment of the plan or the creation of superior alternative plans.
Case Study: Mental Toggling
A historical example of mental toggling might be the controversy between those favoring the gold standard, such as William McKinley [1843-1901], who became president in 1897, and those favoring the inclusion of silver too (bimetallism), such as William Jennings Bryan [1860-1925], who made the famous "Cross of Gold" speech (1896) but nevertheless lost to McKinley. McKinley favored the bankers, who did not want to be repaid in less valuable currency, whereas Bryan favored the debtors, for whom bimetallism-induced inflation would be a boon, allowing them to pay their debts in less valuable currency. The candidates' speeches exemplify not just top-down processes from general worldviews to specific policies, but also bottom-up processes in which general views about inflation were strongly influenced by the specific people and plans each of the protagonists wished to favor.
The Goodness and Badness of Accessibility and Affect
Because accessibility and affect are so important throughout the taxonomy, it is best to be upfront about why this is both good and bad. If the accessible worldviews or auxiliary assumptions are sound, especially if they are associated with sufficient affect to kickstart action, then the policy process proceeds from a sound starting point. But if the accessible worldviews or auxiliary assumptions are unsound, the resulting policy is relatively unlikely to benefit the public. We stress that there is much here that is potentially good as well as potentially bad.
In line with the importance of 'policymaker psychology' in the complex picture produced by modern policy theory [122], affect is a necessary part of policymaker psychology and, more generally, of policy-actor psychology. However, few appreciate the cruciality of accessibility: there is no way for policymakers to act on ideas unless those ideas are accessible. In addition, even if an idea is accessible, if it is associated with negative affect, policymakers are unlikely to act on it. Going the other way, if an idea is both accessible and associated with positive affect, policymakers are likely to act on it. As we have seen, the advantages of mental toggling include providing opportunities for contradictory affects to come to the fore, for contradictory or simply different cognitions to become accessible, and for complex interactions between these. Although the role of affect is known (though, we suspect, underappreciated), ToPPP fleshes out this picture with an added emphasis on accessibility. Like it or not, policy actors (qua human agents) are influenced by that which is accessible and by associated affects. The only way to be unbiased is to resign from humanity, given the evolutionary and adaptive functions of biases and heuristics [123]. Furthermore, this is not necessarily a bad thing: accessible cognitions and affects, and their biasing effects, can potentially spur creativity in policymaking, a highly desirable consequence.
Although each person is inevitably biased by accessible cognitions and associated affects, different people have different worldviews, different auxiliary assumptions accessible to them, and different associated affective reactions. An advantage of having more than one person involved in policymaking is that each person's complex of accessible cognitions and associated affects can act as a check on the others'. At least that is so hypothetically. A key issue is that people generally do not want to hear views that conflict with their own, and they assume that their arguments and views are objective and so are unlikely to be wrong [124]. The organizational science literature contains many accounts of the severe consequences faced by those who challenge the accepted positions of their peers and seniors [125,126], and policymaking environments are not immune to this [127,128].
Put dramatically, it is bad enough to disagree, but the larger crime is being right in that disagreement! Just as Abraham Lincoln [1809-1865, president 1861-1865], one of the greatest presidents ever, purposely included people in his cabinet precisely because he knew they would dissent, we suggest that actively including dissenters should become standard practice. This is not to say that dissenters should always carry the day, as this certainly did not happen under Lincoln, but rather that they should be heard on the chance that what they have to say will be valuable and will result in overcoming maladaptive biases to the betterment of policy. A potential gain of ToPPP is that it renders plainer the value of encouraging dissenting views.
Now, compare policymaking to the case of science. Kuhn [50,129] famously argued that a large number of factors lie behind scientists' theory choices, including many subjective factors ("idiosyncrasies of autobiography and personality"), along with factors concerning nationality and reputation. Such idiosyncrasies are helpful in providing a risk-spreading pattern at the group level. On occasion, a new theory is needed to resolve persistent anomalies with the dominant theory; if no scientists are willing to risk jumping ship, then, ultimately, progress cannot occur. The subjective factors that influence scientists' theory choices ensure a good proportion of loyal versus revolutionary scientists. This is the classic philosophical argument for viewpoint diversity in science. As indicated above, we argue that what is true for science is, in a sense, also true for public policy [130]. Idiosyncrasies in policy actors' complexes of accessible cognitions and associated affects can provide a risk-spreading mechanism at the group level.
Conclusion
We have seen that, like all humans, policymakers have cognitions that are accessible or that can be made accessible. These might come in the form of worldviews, but they can also be auxiliary assumptions that help traverse the distance from the policy-problem level to downstream policymaking activity, and from one downstream activity (e.g., implementation) to another (e.g., monitoring and evaluation). In addition, policymakers have affects associated with accessible cognitions, and these play an important role too, not just in the planning of policymaking activity but also in subjective evaluations of those plans. The modern policy process literature acknowledges the importance of affect. Recognizing the importance of accessibility, and its complex interactions with affect, enriches our understanding of policy-actor psychology. The psychological process is complex because accessibility and affect not only can influence each other but can lead to new cognitions becoming accessible and new affects becoming stimulated. Furthermore, understanding that an increase in specificity necessitates more auxiliary assumptions (which may activate more cognitions, with attached affects) provides a foundation for generating helpful prescriptions for practitioners: if they must engage in this process anyway, there may be easy improvements to be made. Finally, it is reasonable for policymakers to mentally toggle across different levels of specificity, potentially improving their decision-making process as more cognitions become accessible and more affects become stimulated.
Given that each increase in the specificity of a plan necessitates more auxiliary assumptions, which, in turn, provide further grist for increasingly complex interactions between accessible cognitions and affects, it is all but impossible for policy actors to be unbiased. It is futile to admonish them not to be biased; being human, they cannot help themselves. A better solution is for policymakers to solicit dissent and viewpoint diversity. Such diversity cannot eliminate bias, but it at least allows for the possibility of playing counteracting biases off against each other. This recommendation may be of limited use if dissent is downplayed, ignored, or outright cancelled. A way around this limitation would be to institute a policymaking culture that actively embraces diversity of opinion. Finally, we recommend that policymakers place less emphasis on statistical significance and more emphasis on practical significance when formulating their policies. This goes for assessment too: even if a policy results in a statistically significant effect in the desired direction, that is not necessarily equivalent to having provided sufficient benefit to justify the policy.
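To make this last point concrete, the sketch below is a minimal, hypothetical illustration in Python (not drawn from the article; the outcome measure, sample sizes, the 50% target, and the use of a t-test are our own assumptions). It simulates an evaluation in which a very large sample makes a tiny reduction in misinformation exposure statistically significant, even though the observed effect falls far short of a policy-relevant benchmark such as the 50% reduction discussed above.

```python
# Minimal, hypothetical sketch: statistical vs. practical significance in a
# policy evaluation. All numbers are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical outcome: misinformation items encountered per user per week,
# before and after an intervention. Huge samples, tiny true effect (~1%).
n = 100_000
before = rng.normal(loc=10.0, scale=4.0, size=n)
after = rng.normal(loc=9.9, scale=4.0, size=n)

t_stat, p_value = stats.ttest_ind(before, after)
observed_reduction = (before.mean() - after.mean()) / before.mean()

# A policy-relevant benchmark, e.g. the 50% reduction target discussed above.
practical_threshold = 0.50

print(f"p-value: {p_value:.4g}")                        # likely well below 0.05
print(f"observed reduction: {observed_reduction:.1%}")  # roughly 1%
print(f"practically significant (>= {practical_threshold:.0%})? "
      f"{observed_reduction >= practical_threshold}")
```

The point is not the particular test (the authors elsewhere favor alternatives to significance testing, such as gain-probability diagrams) but that the questions "is the effect detectable?" and "is the effect large enough to justify the policy?" come apart, and only the second speaks to practical significance.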
Policymaking in the public interest is difficult. Even in ideal cases where policymakers honestly wish to act in the public interest, there is no way to do so in an unbiased way. Even were it somehow possible to eliminate the influence of affects, policymakers, like all humans, would remain biased in the direction of their accessible cognitions. Then, too, as we have seen, there are complex interactions between affects and accessible cognitions, and these interactions are dynamic: they change as new cognitions become accessible and are, or become, associated with different affects. Rather than decrying the inevitability of these complex interactions at different levels of policymaking, an alternative is to make the best of them by creating opportunities for new cognition-affect interactions to provide a balance. One way of providing such opportunities is to train policymakers to increase mental toggling. Another is to actively encourage dissent, including by recruiting policymakers with viewpoint diversity. Neither of these is a panacea that will redress all of the biases that policymakers have. Nevertheless, we hope and expect that ToPPP, and the recommendations that stem from it, will aid policymakers in proposing and executing effective policies that better the human condition.
Acknowledgement
None
Conflict of Interest
No conflict of interest.
References
- Rittel HWJ, Webber MM (1973) Dilemmas in a general theory of planning. Policy Sciences 4: 155-
- Head BW (2008) Wicked problems in public policy. Public Policy 3(2): 101-
- Head BW (2022) Wicked problems in public policy: understanding and responding to complex challenges. Palgrave Macmillan, Cham.
- Cairney P (2015) How can policy theory have an impact on policymaking? The role of theory-led academic-practitioner discussions. Teaching Public Administration 33(1): 22-39.
- Cairney P (2020) Understanding public policy: theories and issues (2nd). Red Globe Press, London.
- National Audit Office (2013) Over-optimism in government projects.
- Hudson B, Hunter D, Peckham S (2019) Policy failure and the policy-implementation gap: can policy support programs help? Policy Design and Practice 2(1): 1-
- Matthews P (2012) Problem definition and re-evaluating a policy: the real successes of a regeneration scheme. Critical Policy Studies 6(3): 243-
- Trafimow D (2019) A taxonomy of model assumptions on which P is based and implications for added benefit in the sciences. International Journal of Social Research Methodology 22(6): 571-
- Trafimow D (2020) A taxonomy of major premises and implications for falsification and verification. International Studies in the Philosophy of Science 33(4): 211-
- Jones C (1970) An introduction to the study of political life (3rd). Duxberry Press, Berkeley.
- Smith KB, Larimer CW (2009) The public policy theory primer. Westview Press.
- Anderson JE (1979) Public policy-making (2nd edn). Holt, Rinehart and Winston, New York.
- Peters BG (1986) American public policy: Promise and performance (2nd). Chatham House Publishers.
- Zahariadis N (2016) Setting the agenda on agenda setting: Definitions, concepts, and controversies. In N. Zahariadis, (Ed.), Handbook of public policy agenda setting 1-22.
- Nakamura RT (1987) The textbook process and implementation research. Policy Studies Review 7(1): 142-
- Jenkins-Smith HC, Sabatier PA (1993) The study of public policy processes. In P.A. Sabatier and H.C. Jenkins-Smith (eds), Policy change and learning: an advocacy coalition approach (pp. 1-9). Westview Press, Boulder.
- Kingdon J (1984) Agendas, alternatives and public policies (1st edn). Little, Brown and Company, Boston.
- Sabatier PA (2007) The need for better theories. In P. A. Sabatier (Ed.), Theories of the policy process (2nd, pp. 3–17), Westview Press.
- Lipsky M (1980) Street-level bureaucracy: dilemmas of the individual in public services. Russell Sage Foundation, New York.
- Hjern B, Porter DO (1981) Implementation structures: a new unit of administrative analysis. Organization Studies. 2(3): 211-
- Hjern B (1982) Implementation research - the link gone missing. Journal of Public Policy 2(3): 301-
- Cairney P (2021) Taking lessons from policy theory into practice. In T. Mercer, R. Ayres, B.W. Head. and J. Wanna (eds). Learning policy, doing policy: interactions between public policy theory, practice and teaching (pp. 281–298). Australia National University Press, Acton.
- Cairney P, Kwiatkowski R (2017) How to communicate effectively with policymakers: combine insights from psychology and policy studies. Palgrave Communications. 3(1): 37.
- Chen Z, Cowan N (2009) Core verbal working-memory capacity: the limit in words retained without covert articulation. Q J Exp Psychol 62(7): 1420-1429.
- Cowan N (2001) The magical number 4 in short-term memory: a reconsideration of mental storage capacity. Behavioral and Brain Sciences 24(1): 87-114; discussion 114-185.
- Cowan N (2005) Working memory capacity. Psychology Press, New York.
- Cowan N (2010), The magical mystery four: how is working memory capacity limited, and why? Curr Dir Psychol Sci 19(1): 51–57.
- Miller GA (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review 63(2): 81-
- Thalmann M, Souza AS, Oberauer K (2019) How does chunking help working memory? Journal of Experimental Psychology: Learning, Memory, and Cognition 45(1): 37-
- Cairney P, Weible CM (2017) The new policy sciences: combining the cognitive science of choice, multiple theories of context, and basic and applied analysis. Policy Sciences 50(3): 619-627.
- Green-Pederson C, Princen S (2016) Punctuated equilibrium theory. In N. Zahariadis (ed) Handbook of public policy agenda setting (pp. 69–86). Edward Elgar Publishing, Cheltenham.
- Gigerenzer G, Selten R (2002) Rethinking rationality. In G. Gigerenzer and R. Selten (eds), Bounded rationality: the adaptive toolbox (pp. 1–12). The MIT Press, Cambridge, Mass.
- Gigerenzer G, Todd PM, the ABC Research Group (1999) Simple heuristics that make us smart. Oxford University Press, Oxford.
- Kahneman D, Tversky A (1972) Subjective probability: a judgment of representativeness. Cognitive Psychology 3(3): 430-
- Kahneman D, Tversky A (1973) On the psychology of prediction. Psychological Review 80(4): 237-
- Kahneman D, Slovic SP, Tversky A (1982) Judgment under uncertainty: heuristics and biases. Cambridge University Press, Cambridge.
- Morris JP, Squires NK, Taber CS, Lodge M (2003) Activation of political attitudes: a psychophysiological examination of the hot cognition hypothesis. Political Psychology 24(4): 727-
- Lodge M, Taber CS (2005) The automaticity of affect for political leaders, groups, and issues: an experimental test of the hot cognition hypothesis. Political Psychology 26(3): 455-
- Peterson HL, Jones MD (2016) Making sense of complexity: The narrative policy framework and agenda setting. In N. Zahariadis (Ed.), Handbook of public policy agenda-setting, (pp. 106–131). Edward Elgar Publishing.
- Shanahan EA, Jones MD, McBeth MK, Radaelli CM (2018) The narrative policy framework. In C. M. Weible & P. A. Sabatier (Eds.), Theories of the policy process pp. 173-213.
- Pierce JJ (2021) Emotions and the policy process: Enthusiasm, anger and fear. Policy & Politics 49(4): 595-
- Dery D (1984) Problem definition in policy analysis. University Press of Kansas, Lawrence.
- Lindblom CE, Cohen DK (1979) Usable knowledge: social science and social problem solving. Yale University Press, New Haven.
- Cobb RW, Elder CD (1983) Participation in American politics: the dynamics of agenda-building (2nd edn). The Johns Hopkins University Press, Baltimore.
- Blumer H (1971) Social problems as collective behavior. Social Problems 18(3): 298-306.
- Brewer G, deLeon P (1983) The foundations of policy analysis. The Dorsey Press, Pacific Grove.
- Zahariadis N (2007) The multiple streams framework. In P. Sabatier (Ed.), Theories of the policy process 2nd, pp. 65-92.
- Kuhn TS (1970) The structure of scientific revolutions (2nd). The University of Chicago Press, Chicago.
- Adams Z, Osman M, Bechlivanidis C, Meder B (2023) (Why) is misinformation a problem? Perspect Psychol Sci 18(6): 1436-1463.
- Enders A, Klofstad C, Stoler J, Uscinski JE (2023) How anti-social personality traits and anti-establishment views promote beliefs in election fraud, QAnon, and COVID-19 conspiracy theories and misinformation. Am Polit Res 51(2): 247-259.
- Gruzd A, Soares FB, Mai P (2023) Trust and safety on social media: understanding the impact of anti-social behavior and misinformation on content moderation and platform governance. Social Media + Society 9(3): 1-
- Cartwright N, Hardie J (2012) Evidence-based policy: a practical guide to doing it better. Oxford University Press, Oxford.
- Pearl J, Mackenzie D (2018) The book of why: The new science of cause and effect. Basic Books.
- Higgins ET, Rholes WS, Jones CR (1977) Category accessibility and impression formation. Journal of Experimental Social Psychology 13(2): 141-
- Martin LL (1985) Categorization and differentiation: a set, re-set, comparison analysis of the effects of context on person perception. Springer-Verlag, New York.
- Martin LL (1986) Set/reset: the use and disuse of concepts in impression formation. Journal of Personality and Social Psychology 51(3): 493-
- Wyer RS, Srull TK (1986) Human cognition in its social context. Psychological Review 93(3): 322-
- Wyer RS, Srull TK (1989) Memory and cognition in its social context. Lawrence Erlbaum Associates.
- Gawronski B, Bodenhausen GV (2015) Social-cognitive theories. In B Gawronski and GV Bodenhausen (eds), Theory and explanation in social psychology (pp. 65–83). The Guilford Press, New York.
- Trafimow D, Triandis HC, Goto SG (1991) Some tests of the distinction between the private self and the collective self. Journal of Personality and Social Psychology 60(5): 649-
- Triandis HC (1989) The self and social behavior in differing cultural contexts. Psychological Review 96(3): 506-
- Trafimow D, Silverman ES, Fan RMT, Law JSF (1997) The effects of language and priming on the relative accessibility of the private self and collective self. Journal of Cross-Cultural Psychology 28(1): 107-
- Oyserman D, Lee SWS (2008) Does culture influence what and how we think? Effects of priming individualism and collectivism. Psychol Bull 134(2): 311-
- Verplanken B, Holland RW (2002) Motivated decision making: Effects of activation and self-centrality of values on choices and behavior. Journal of Personality and Social Psychology 82(3): 434-
- Holland RW, Verplanken B, van Knippenberg A (2003) From repetition to conviction: attitude accessibility as a determinant of attitude certainty. Journal of Experimental Social Psychology 39(6): 594-
- Trafimow D, Borrie WT (1999) Influencing future behavior by priming past behavior: A test in the context of Petrified Forest National Park. Leisure Sciences 21(1): 31-
- Maslow AH (1966) The psychology of science: a reconnaissance. Harper & Row, New York.
- Bower GH, Gilligan SG, Monteiro KP (1981) Selectivity of learning caused by affective states. Journal of Experimental Psychology: General 110(4): 451-473.
- Manstead ASR, Parkinson B (2015) Emotion theories. In B. Gawronski and G.V. Bodenhausen (eds), Theory and explanation in social psychology (pp. 84–107). The Guilford Press, New York.
- Schwarz N, Clore GL (1983) Mood, misattribution, and judgments of well-being: Informative and directive functions of affective states. Journal of Personality and Social Psychology 45(3): 513-
- Trafimow D, Sheeran P, Lombardo B, Finlay KA, Brown J, et al. (2004) Affective and cognitive control of persons and behaviors. British Journal of Social Psychology 43(2): 207-
- Johnston VS (1999) Why we feel: the science of human emotions. Perseus Books, Cambridge, Mass.
- Trafimow D, Sheeran P (2004) A theory about the translation of cognition into affect and behavior. In G. Maio & G. Haddock (Eds.), Contemporary perspectives in the psychology of attitudes, 57-76, Psychology Press.
- Ketelaar T (2015) Evolutionary theories. In B. Gawronski and G.V. Bodenhausen (eds), Theory and explanation in social psychology (pp. 224-244), The Guilford Press, New York.
- Ekman P (2003) Emotions revealed: recognizing faces and feelings to improve communication and emotional life. Henry Holt and Company, New York.
- Keltner D, Oatley K, Jenkins JM (2019) Understanding emotions (4th edn).
- Aziz-Zadeh L, Damasio A (2008) Embodied semantics for actions: findings from functional brain imaging. J Physiol Paris 102(1-3): 35-39.
- Hempel CG (1958) The theoretician’s dilemma: a study in the logic of theory construction. In H. Feigl, M. Scriven and G. Maxwell, (eds), Minnesota studies in the philosophy of science 2: (pp. 37-98). University of Minnesota Press.
- Hempel CG (1965) Aspects of scientific explanation and other essays in the philosophy of science. The Free Press, New York.
- Meehl PE (1990) Appraising and amending theories: the strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry 1(2): 108-
- Stephan WG, Stephan CW (2000) An integrated threat theory of prejudice. In S. Oskamp (Ed.) Reducing prejudice and discrimination (pp. 23–45). Lawrence Erlbaum Associates.
- Duhem P (1954) The aim and structure of physical theory (P. Wiener, Trans.). Princeton University Press, Princeton. (Original work published 1914.)
- Lakatos I (1978) The methodology of scientific research programmes: vol. 1. philosophical papers (J. Worrall & G. Currie, eds). Cambridge University Press, Cambridge.
- Quine WVO (1951) Two dogmas of empiricism. Philosophical Review, 60: 20- Reprinted in W. V. O. Quine, From a logical point of view (1961, 2nd ed., pp. 20-46). Harper & Row, Publishers.
- Trafimow D, Roth N, Xu L, Toomasian D, Perello A, et al. (2023b) Surprising implications of differences in locations versus differences in means. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences 19(2): 152-
- Berk RA, Freedman DA (2003) Statistical assumptions as empirical commitments. In T. G. Blomberg and S. Cohen (eds), Law, punishment, and social control: essays in honor of Sheldon Messinger (2nd edn, pp. 235–254.) Aldine de Gruyter, New York.
- Hirschauer N, Grüner S, Musshoff O, Becker C, Jantsch A (2020) Can p-values be meaningfully interpreted without random sampling? Statistics Surveys 14: 71-91.
- Cinelli M, Quattrociocchi W, Galeazzi A, Valensise CM, Brugnoli E, et al. (2020) The COVID-19 social media infodemic. Scientific Reports 10: 16598.
- Larson HJ (2020) A call to arms: helping family, friends and communities navigate the COVID-19 infodemic. Nature Reviews Immunology 20: 449-
- Ying W, Cheng C (2021) Public emotional and coping responses to the COVID-19 infodemic: A review and recommendations. Front Psychiatry 12:
- Krešňáková VM, Sarnovský M, Butka P (2019) Deep learning methods for fake news detection. In 2019 IEEE 19th International Symposium on Computational Intelligence and Informatics and 7th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Sciences and Robotics (CINTI-MACRo), IEEE, 000143-000148.
- Riggs WD (2009) Luck, knowledge, and control. In A. Haddock, A. Millar, and D. Pritchard (Eds.), Epistemic value (pp. 204–221). Oxford University Press.
- Kozyreva A, Herzog SM, Lewandowsky S, Hertwig R, Lorenz-Spreen P, et al. (2023) Resolving content moderation dilemmas between free speech and harmful misinformation. Proc Nat Acad Sci 120(7):
- Pople L (2010) Responding to antisocial behaviour. In D. J. Smith (Ed.) A new response to youth crime (pp. 143-179),
- Berger JO, Berry DA (1988) Statistical analysis and the illusion of objectivity. American Scientist 76(2): 159-165.
- Peeters MJ (2016) Practical significance: Moving beyond statistical significance. Currents in Pharmacy Teaching and Learning 8(1): 83-
- Trafimow D, Osman M (2022) Editorial: Barriers to converting applied social psychology to bettering the human condition. Basic and Applied Social Psychology 4(1): 1-
- Trafimow D, Hyman MR, Kostyk A (2020) The (im)precision of scholarly consumer behavior research. Journal of Business Research 114: 93-
- Trafimow D, Hyman MR, Kostyk A (2023a) Enhancing predictive power by unamalgamating multi-item scales. Psychological Methods. Advance online publication.
- Tong T, Wang T, Trafimow D, Wang C (2022) The probability of being better or worse off, and by how much, depending on experimental conditions with skew normal populations. In S. Sriboonchitta, V. Kreinovich, W. Yamaka (Eds.), Credible asset allocation, opt
- Trafimow D, Hyman MR, Kostyk A, Wang Z, Tong T, et al. (2022) Gain-probability diagrams in consumer research. International Journal of Market Research 64(4): 470-
- Trafimow D, Wang Z, Tong T, Wang T (2023c) Gain-probability diagrams as an alternative to significance testing in economics and finance. Asian Journal of Economics and Banking 7(3): 333-
- Wang Z, Wang T, Trafimow D, Xu Z (2022) A different kind of effect size based on samples from two populations with delta log-skew-normal distributions. In N. N. Thach, D. T. Ha, N. D. Trung, V. Kreinovich (Eds.) Prediction and causality in econometrics and related topics pp. 97-112.
- Fenton N, Neil M (2019) Risk assessment and decision analysis with Bayesian networks (2nd edn). CRC Press, New York.
- Saylors R, Trafimow D (2021) Why the increasing use of complex causal models is a problem: On the danger sophisticated theoretical narratives pose to truth. Organizational Research Methods 24(3): 616-
- Trafimow D (2017) The probability of simple versus complex causal models in causal analyses. Behav Res Methods 49(2): 739-
- Hameed I, Irfan BZ (2021) Social media self‐control failure leading to antisocial aggressive behavior. Human Behavior and Emerging Technologies 3(2): 296-
- Marsh D (2008) Understanding British government: analysing competing models. The British Journal of Politics and International Relations 10(2): 251-
- Hazlitt H (1946) Economics in one lesson. Harper & Brothers Publishers, New York.
- Ecker UKH, Lewandowsky S, Chadwick M (2020) Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect. Cogn Res Princ Implic 5(1):
- Farrell J, McConnell K, Brulle R (2019) Evidence-based strategies to combat scientific misinformation. Nature Climate Change 9(3): 191-195.
- Sanderson JA, Ecker UKH (2020) The challenge of misinformation and ways to reduce its impact. In P. van Meter, A. List, D. Lombardi, & P. Kendeou, Handbook of learning from multiple representations and perspectives (pp. 461-476). Routledge.
- Roozenbeek J, Culloty E, Suiter J (2023) Countering misinformation: Evidence, knowledge gaps, and implications of current interventions. European Psychologist 28(3): 189-
- Aswad E (2020) In a world of 'fake news,' what's a social media company to do? Utah Law Review 2020 (4).
- Baxter MG (1995) Henry Clay and the American system. The University Press of Kentucky, Lexington.
- Li P, Mei S, Zhong W (2023) Fee or subsidy? Pricing strategies for digital content platforms with different content and advertising. Managerial and Decision Economics 44(8): 4482-
- Sconyers A (2018) Corporations, social media & advertising: Deceptive, profitable, or just smart marketing. The Journal of Corporation Law 43(2): 417-
- Haselton MG, Bryant GA, Wilke A, Frederick DA, Galperin A, et al. (2009) Adaptive rationality: an evolutionary perspective on cognitive bias. Social Cognition 27(5): 733-
- Pronin E, Gilovich T, Ross L (2004) Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review 111(3): 781-
- Berti M, Cunha MPE (2023) Paradox, dialectics or trade‐offs? A double loop model of paradox. Journal of Management Studies 60(4): 861-888.
- Berti M, Simpson AV (2021) The dark side of organizational paradoxes: the dynamics of disempowerment. Academy of Management Review 46(2): 252-274.
- Stevens A (2011) Telling policy stories: An ethnographic study of the use of evidence in policy-making in the UK. Journal of Social Policy 40(2): 237-
- Stevens A (2021) The politics of being an “expert”: A critical realist auto-ethnography of drug policy advisory panels in the UK. Journal of Qualitative Criminal Justice and Criminology 10(2): 1-
- Kuhn TS (1977) Objectivity, value judgment, and theory choice. In T.S. Kuhn, The essential tension: selected studies in scientific tradition and change (pp. 320-339). University of Chicago Press, Chicago.
- Scholten P (2020) Mainstreaming versus alienation: Conceptualising the role of complexity in migration and diversity policymaking. Journal of Ethnic and Migration Studies 46(1): 108-
- Block WE, Friedman M (2019) Block vs. Friedman on Hayek. In W.E. Block (ed), Property rights: The argument for privatization (pp. 51–73) Palgrave Macmillan, Cham.
- Bradley MT, Brand A (2016) Significance testing needs a taxonomy: or how the Fisher, Neyman–Pearson controversy resulted in the inferential tail wagging the measurement dog. Psychological Reports 119(2): 487-504.
- Clark SG (2002) The policy process: a practical guide for natural resources professionals. Yale University Press, New Haven.
- Duarte JL, Crawford JT, Stern S, Haidt J, Jussim L, et al. (2015) Political diversity will improve social psychological science. Behavioral and Brain Sciences 38:
- Hall PA, Taylor RCR (1996) Political science and the three new institutionalisms. Political Studies 44(5): 936-
- Lasswell HD (1956) The decision process: seven categories of functional analysis. Bureau of Governmental Research, University of Maryland Press, College Park.
- Lasswell HD (1971) A pre-view of policy sciences. American Elsevier, New York.
- Lasswell HD, Kaplan A (1950) Power and society: a framework for political inquiry. Yale University Press, New Haven.
- Medin DL, Lee CD (2012) Diversity makes better science. Association for Psychological Science Observer. [Column in APS’s Observer Series]. Online publication.
- Nápoles PR (2014) Macro policies for climate change: free market or state intervention? World Social and Economic Review 2014(3): 90-
- Redding RE (2001) Sociopolitical diversity in psychology: The case for pluralism. Am Psychol 56(3): 205-2
- Strack F, Schwarz N, Gschneidinger E (1985) Happiness and reminiscing: The role of time perspective, affect, and mode of thinking. Journal of Personality and Social Psychology 49(6): 1460-
- Tetlock PE (1994) Political psychology or politicized psychology: Is the road to scientific hell paved with good moral intentions? Political Psychology 15(3): 509-
- Weible CM, Cairney P, Yordy J (2022) A diamond in the rough: Digging up and polishing Harold D. Lasswell’s decision functions. Policy Sciences 55: 209-