The Pattern of Effects for Causal Attributions

A large body of research has shown that norms matter for ordinary causal attributions. When people are given scenarios in which two agents[1] jointly bring about an outcome by performing symmetric actions, but where one violates a norm and the other does not, time after time researchers have found that causal ratings for the norm-violating agent are significantly higher than for the norm-conforming agent (e.g., Knobe and Fraser 2008, Hitchcock and Knobe 2009, Sytsma et al. 2012). Call this the cross-agent effect.

A number of different explanations have been offered for this effect. Perhaps the most prominent is the counterfactual view. The basic idea is that in thinking about what caused an outcome, people consider counterfactuals, but they aren't equally likely to consider all counterfactuals—they're more likely to consider counterfactuals on which an abnormal event is replaced with a more normal event. And when the outcome doesn't occur on that counterfactual, people tend to treat the agent whose action was changed as the cause.
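To make the mechanism concrete, here is a minimal Monte Carlo sketch of normality-biased counterfactual sampling for a conjunctive two-agent case. This is my own illustration, not code from any of the papers; the function name, the replacement probabilities, and the crediting rule are all assumptions chosen just to show the direction of the effect.

```python
import random

def causal_credit(p_replace, n=100_000, seed=0):
    """Sample counterfactuals to a conjunctive case in which two agents
    both act and the outcome occurs. Each action is independently swapped
    for its normal alternative (not acting) with probability p_replace[agent];
    abnormal, norm-violating actions get a higher replacement probability.
    An agent is credited on a sample whenever its action was changed and
    the outcome thereby fails to occur."""
    rng = random.Random(seed)
    credit = dict.fromkeys(p_replace, 0)
    for _ in range(n):
        changed = {agent: rng.random() < p for agent, p in p_replace.items()}
        # Conjunctive structure: undoing either action undoes the outcome.
        if any(changed.values()):
            for agent, was_changed in changed.items():
                credit[agent] += was_changed
    total = sum(credit.values())
    return {agent: round(c / total, 2) for agent, c in credit.items()}

# Agent A violates a norm (abnormal action, often replaced); B conforms.
print(causal_credit({"A": 0.7, "B": 0.3}))  # -> roughly {'A': 0.7, 'B': 0.3}
```

The particular numbers are arbitrary; the point is that the agent whose action is more likely to be counterfactually replaced ends up with more of the causal credit, which is the cross-agent effect.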

In a recent series of papers, Kominsky et al. (2015), Icard et al. (2017), and Kominsky and Phillips (2019) have made a strong case for the counterfactual view by deriving novel predictions about variations on the basic type of scenario described above and providing evidence that the predicted pattern of effects is borne out. The first variation is to include a non-normed contrast condition. Comparing the normed and the non-normed conditions, we have one agent whose normative status is varied (violating a norm in one condition but not the other) and one agent whose normative status is fixed (not violating a norm in either condition). Looking across these two conditions, there are two further comparisons we can make: we can compare ratings for the varied agent in each condition, and we can compare ratings for the fixed agent in each condition. While a number of alternative explanations predict that ratings will be higher for the varied agent in the normed condition than in the non-normed condition (the varied agent effect), Kominsky et al. argue that the counterfactual view predicts the reverse for the fixed agent: ratings will be lower for the fixed agent in the normed condition than in the non-normed condition (the fixed agent effect).
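Schematically, the design and the three comparisons look like this (my summary of the setup just described):

    Condition      Varied agent      Fixed agent
    Normed         violates a norm   conforms
    Non-normed     conforms          conforms

    Cross-agent effect:   varied agent > fixed agent within the normed condition
    Varied agent effect:  varied agent rated higher in normed than in non-normed
    Fixed agent effect:   fixed agent rated lower in normed than in non-normed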

The second variation is to alter the causal structure of these scenarios. In the original conditions, both actions must take place for the outcome to occur (the scenarios are conjunctive); in the new conditions, either action alone is sufficient for the outcome to occur (the scenarios are disjunctive). Kominsky et al. predict that in disjunctive scenarios the fixed agent effect will be absent, while Icard et al. predict that the varied agent effect will be reversed. Putting these effects together, the counterfactual view also predicts that in normed disjunctive cases where the agents' actions are otherwise symmetric, there will be a reverse cross-agent effect.
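These structural predictions can be illustrated with a stripped-down version of the kind of necessity/sufficiency model Icard et al. (2017) propose. The sketch below is my own simplification under explicit assumptions (equal weighting of the two components via alpha=0.5, and an action's normality modeled as the probability n_other that it is retained rather than replaced in a sampled counterfactual); it shows the direction of the predicted effects, not the authors' exact model.

```python
def strength(structure, n_other, alpha=0.5):
    """Stripped-down actual causal strength for one of two agents who both
    act: a weighted mix of how necessary and how sufficient the agent's
    action is for the outcome, where the other agent's action is retained
    in a sampled counterfactual with probability n_other (high = normal)."""
    if structure == "conjunctive":   # outcome = A and B
        necessity = 1.0              # undoing my action always undoes the outcome
        sufficiency = n_other        # my action suffices only if the other also acts
    else:                            # disjunctive: outcome = A or B
        necessity = 1.0 - n_other    # undoing my action matters only if the other doesn't act
        sufficiency = 1.0            # my action alone guarantees the outcome
    return alpha * necessity + (1 - alpha) * sufficiency

# Normed condition: A violates a norm (retention 0.2), B conforms (0.8).
for s in ("conjunctive", "disjunctive"):
    print(s, "A:", round(strength(s, n_other=0.8), 2),
             "B:", round(strength(s, n_other=0.2), 2))
# conjunctive A: 0.9 B: 0.6  -> cross-agent effect (violator rated higher)
# disjunctive A: 0.6 B: 0.9  -> reverse cross-agent effect
```

Note that in this toy model an agent's strength depends only on how normal the other agent's action is, which is exactly why the fixed agent effect falls out in conjunctive cases: in a non-normed condition (both retentions 0.8) the fixed agent would score 0.9, but it drops to 0.6 when the other agent violates.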

This gives a pattern of six predicted effects—cross-agent, varied agent, and fixed agent effects in conjunctive cases, and their absence (fixed agent) or reversal (cross-agent, varied agent) in disjunctive cases. And Kominsky et al., Icard et al., and Kominsky and Phillips have provided evidence that this pattern of effects occurs. They then argue that while the counterfactual view predicts this pattern, competing views—including our responsibility view (Sytsma et al. 2012, Livengood et al. 2017, Sytsma et al. 2019, Livengood and Sytsma forthcoming)—are only able to explain some of the effects. If accurate, this would be strong evidence for the counterfactual view!
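In table form, the full predicted pattern (my summary of the predictions just described):

    Effect         Conjunctive cases                    Disjunctive cases
    Cross-agent    violator rated higher                reversed (violator lower)
    Varied agent   rated higher when violating          reversed (lower when violating)
    Fixed agent    rated lower when the other violates  absent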

In a recent manuscript, however, I argue that this undersells the alternative views, and I present evidence suggesting that the overall pattern of effects is quite different from the one found by advocates of the counterfactual view.

First, I argue that while the responsibility view (and other competing accounts) cannot directly explain the fixed agent effect, it can explain the effect indirectly, and that there is a ready alternative explanation: the fixed agent effect may simply be a context effect.

Second, while advocates of the counterfactual view have suggested that alternative views should make the same predictions for disjunctive cases that they make for conjunctive cases, these views in fact expect causal structure (whether the case is conjunctive or disjunctive) to affect causal attributions. The reason is that they either treat people's judgments that an agent caused an outcome as akin to judgments that the agent is responsible for the outcome (the responsibility view), hold that people interpret the causal questions as asking about the agent's responsibility for the outcome (the pragmatic view; Samland and Waldmann 2016, Samland et al. 2016), or hold that people's judgments that an agent is to blame for the outcome bias their causal judgments (the bias view; Alicke 1992, Alicke et al. 2011, Rose 2017). And there is reason to expect that such responsibility and blame attributions will themselves be sensitive to causal structure. As such, to determine how well these views can explain the effects for disjunctive cases, we also need to test these attributions.

Third, I present the results of a series of new studies investigating the overall pattern of effects predicted by the counterfactual view. Surprisingly, the results are rather different from those reported by advocates of the counterfactual view:

  • Looking at two standard cases from the literature (Pen Case, Computer Case), I find that the fixed agent effect does occur in conjunctive cases, but not reliably: it shows up in only a minority of the comparisons tested.
  • Looking at two disjunctive cases tested by Kominsky et al. and Icard et al. (Motion Detector Case, Email Case), I find no evidence of the reverse cross-agent effect or the reverse varied agent effect. Further, for the Email Case I find a fixed agent effect (rather than the predicted absence of an effect for the fixed agent).

Finally, for Kominsky et al.'s Motion Detector Case I also tested responsibility and blame attributions. Overall, the results were quite similar for each type of attribution, suggesting that the alternatives to the counterfactual view can in fact explain the results. Results are shown in the figure below. Further, this study was replicated for causal attributions and responsibility attributions with larger sample sizes (roughly N=200 per condition), with comparable results.

The upshot is that while I think advocates of the counterfactual view are right that it is important to look at the overall pattern of effects, it is unclear that the pattern conforms to their predictions. In fact, it looks like alternative accounts, such as the responsibility view, might be better able to explain the pattern.

[Figure: Results for Study 5 from Sytsma (ms), with histograms shown above plots of the means for each condition and 95% confidence intervals.]
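For concreteness, the between-condition comparisons behind plots like this are of the following general form. The ratings below are hypothetical placeholders, not data from the studies, and the choice of Welch's t-test is mine (the manuscript reports the actual data and analyses):

```python
# Hypothetical 1-7 causal ratings; placeholders only, not data from the studies.
from scipy import stats

fixed_agent_normed = [2, 3, 2, 4, 3, 2, 3, 1, 2, 3]
fixed_agent_non_normed = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]

# A fixed agent effect would show up as reliably lower ratings for the
# fixed agent in the normed condition than in the non-normed condition.
t, p = stats.ttest_ind(fixed_agent_normed, fixed_agent_non_normed,
                       equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```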

[Hearty thanks to Kominsky, Livengood, and Rose especially for their extremely helpful comments and suggestions on earlier drafts of this paper!]

References

Alicke, M. (1992). "Culpable causation." Journal of Personality and Social Psychology, 63: 368–378.

Alicke, M., D. Rose, and D. Bloom (2011). "Causation, Norm Violation and Culpable Control." Journal of Philosophy, 108: 670–696.

Hitchcock, C. and J. Knobe (2009). "Cause and Norm." The Journal of Philosophy, 106: 587–612.

Icard, T., J. Kominsky, and J. Knobe (2017). "Normality and actual causal strength." Cognition, 161: 80–93.

Knobe, J. and B. Fraser (2008). "Causal judgments and moral judgment: Two experiments." In W. Sinnott-Armstrong (ed.), Moral Psychology, Volume 2: The Cognitive Science of Morality, pp. 441–447, Cambridge: MIT Press.

Kominsky, J. and J. Phillips (2019). "Immoral Professors and Malfunctioning Tools: Counterfactual Relevance Accounts Explain the Effect of Norm Violations on Causal Selection." Cognitive Science, 43(11): e12792.

Kominsky, J., J. Phillips, T. Gerstenberg, D. Lagnado, and J. Knobe (2015). "Causal superseding." Cognition, 137: 196–209.

Livengood, J. and J. Sytsma (forthcoming). "Actual causation and compositionality." Philosophy of Science.

Livengood, J., J. Sytsma, and D. Rose (2017). "Following the FAD: Folk attributions and theories of actual causation." Review of Philosophy and Psychology, 8(2): 274–294.

Rose, D. (2017). "Folk Intuitions of Actual Causation: A Two-pronged Debunking Explanation." Philosophical Studies, 174(5): 1323–1361.

Samland, J. and M. R. Waldmann (2016). "How prescriptive norms influence causal inferences." Cognition, 156: 164–176.

Samland, J., M. Josephs, M. Waldmann, and H. Rakoczy (2016). "The Role of Prescriptive Norms and Knowledge in Children's and Adults' Causal Selection." Journal of Experimental Psychology: General, 145(2): 125–130.

Sytsma, J., J. Livengood, and D. Rose (2012). "Two types of typicality: Rethinking the role of statistical typicality in ordinary causal attributions." Studies in History and Philosophy of Biological and Biomedical Sciences, 43: 814–820.

Sytsma, J., R. Bluhm, P. Willemsen, and K. Reuter (2019). "Causal Attributions and Corpus Analysis." In E. Fischer and M. Curtis (eds.), Methodological Advances in Experimental Philosophy, London: Bloomsbury Press.


[1] While similar results are found for non-agents, to keep things simple I'll just focus on agents here.

Source: https://xphiblog.com/the-pattern-of-effects-for-causal-attributions/