Intergroup conflict profoundly affects the welfare of groups and can deteriorate intergroup relations long after the conflict is over. Here, we experimentally investigate how the experience of an intergroup conflict influences the ability of groups to establish cooperation after conflict. We induced conflict using a repeated attacker-defender game in which groups of four are divided into two ‘attackers’ who can invest resources to take resources away from the other two participants in the role of ‘defenders.’ After the conflict, groups engaged in a repeated public goods game with peer punishment, in which group members could invest resources to benefit the group and punish other group members for their decisions. Previous conflict did not significantly reduce group cooperation compared to a control treatment in which groups did not experience the intergroup conflict. However, after experiencing an intergroup conflict, individuals punished free-riding during the repeated public goods game less harshly and did not react to punishment by previous attackers, ultimately reducing group welfare. This result reveals an important boundary condition for peer punishment institutions: peer punishment is less able to efficiently promote cooperation amid a ‘shadow of conflict.’ In a third treatment, we tested whether such ‘maladaptive’ punishment patterns induced by previous conflict can be mitigated by hiding the group members’ conflict roles during the subsequent public goods game. We find more cooperation when individuals could not identify each other as (previous) attackers and defenders, and maladaptive punishment patterns disappeared. These results suggest that intergroup conflict undermines past perpetrators’ legitimacy to enforce cooperation norms. More generally, they reveal that past conflict can reduce the effectiveness of institutions for managing the commons.
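The incentive structure of a repeated public goods game with peer punishment, as used in the post-conflict phase, can be sketched as follows. The endowment, multiplier, and punishment cost/impact ratio below are common illustrative defaults, not the parameters of this study.

```python
def pgg_payoff(endowment, contributions, i, multiplier=1.6,
               punish_given=None, punish_received=0,
               cost_per_point=1, impact_per_point=3):
    """Per-round payoff of member i in a public goods game with
    peer punishment. All parameter values are illustrative."""
    n = len(contributions)
    # Contributions are multiplied and shared equally among all members.
    public_share = multiplier * sum(contributions) / n
    payoff = endowment - contributions[i] + public_share
    # Punishing others costs the punisher; each received punishment
    # point reduces the target's payoff by a larger amount.
    if punish_given:
        payoff -= cost_per_point * sum(punish_given)
    payoff -= impact_per_point * punish_received
    return payoff

# A free-rider among three full contributors earns more pre-punishment...
print(pgg_payoff(20, [0, 10, 10, 10], 0))                    # 32.0
# ...but peer punishment can erase that advantage.
print(pgg_payoff(20, [0, 10, 10, 10], 0, punish_received=5)) # 17.0
```

This makes the free-rider problem concrete: with a multiplier below the group size, each contributed unit returns less than one unit to the contributor, so only punishment (or its absence, as after conflict) shifts the individual calculus.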
Peaceful coexistence and trade among human groups can be fragile, and intergroup relations frequently transition to violent exchange and conflict. Here we specify how exogenous changes in groups' environment and the ensuing carrying-capacity stress can increase individual participation in intergroup conflict, and out-group aggression in particular. In two intergroup contest experiments, individuals could contribute private resources to out-group aggression (versus in-group defense). Environmental unpredictability, induced by making non-invested resources subject to risk of destruction (versus not), created psychological stress and increased participation in and coordination of out-group attacks. Archival analyses of interstate conflicts showed, likewise, that sovereign states engage in revisionist warfare more when their pre-conflict economic and climatic environments were more volatile and unpredictable. Because participation in conflict is wasteful, environmental unpredictability made groups not only more often victorious but also less wealthy. Macro-level changes in the natural and economic environment can thus be a root cause of out-group aggression and turn benign intergroup relations violent.
We examine dishonest behavior in the face of potential uncertain gains and losses in three pre-studies (N = 150, N = 225, N = 188) and a main study (N = 240). Ample research has shown that people cheat when presented with the opportunity. We use a die-under-cup paradigm, in which participants could dishonestly report a private die roll and thereby increase the odds of obtaining a desired outcome. Results showed that the framing of the uncertain situation mattered: participants who lied to decrease the likelihood of experiencing a loss used major lies (i.e., reporting a '6'), while those who lied to increase the chance of achieving an equivalent gain used more modest lies.
Intergroup conflict can be modeled as a two-level game of strategy, in which pro-sociality can take the form of trust and cooperation within groups or between groups. We review recent work, from our own laboratory and that of others, that shows how biological and socio-cultural mechanisms that promote pro-social preferences and beliefs create in-group bounded, parochial cooperation and, sometimes, parochial competition. We show when and how parochial cooperation and competition intensify rather than mitigate intergroup conflict.
Norms prescribe how to make decisions in social situations and play a crucial role in sustaining cooperative relationships and coordinating collective action. However, following norms often requires restricting one's behavior, curtailing selfishness, or suppressing personal goals. This raises the question of why people adhere to norms. We review recent theories and empirical findings that aim to explain why people follow norms even in private, when violations are difficult to detect and are not sanctioned. We discuss theories of norm internalization, social and self-image concerns, and social learning (i.e., preferences conditional on what others do/believe). Finally, we present two behavioral, incentivized tasks that can be used to elicit norms and measure the individual propensity to follow them.
Helping other people can entail risks for the helper. For example, when treating infectious patients, medical volunteers risk their own health. In such situations, decisions to help should depend on the individual’s valuation of others’ well-being (social preferences) and the degree of personal risk the individual finds acceptable (risk preferences). We investigated how these distinct preferences are psychologically and neurobiologically integrated when helping is risky. We used incentivized decision-making tasks (Study 1; N = 292 adults) and manipulated dopamine and norepinephrine levels in the brain by administering methylphenidate, atomoxetine, or a placebo (Study 2; N = 154 adults). We found that social and risk preferences are independent drivers of risky helping. Methylphenidate increased risky helping by selectively altering risk preferences rather than social preferences. Atomoxetine influenced neither risk preferences nor social preferences and did not affect risky helping. This suggests that methylphenidate-altered dopamine concentrations affect helping decisions that entail a risk to the helper.
Reputation has been shown to provide an informal solution to the problem of cooperation in human societies. After reviewing models that connect reputations and cooperation, we address how reputation results from information exchange embedded in a social network that itself changes endogenously. Theoretical studies highlight that network topologies have different effects on the extent of cooperation, since they can foster or hinder the flow of reputational information. Subsequently, we review models and empirical studies that aim to capture the coevolution of reputations, cooperation, and social networks. We identify open questions in the literature concerning how networks affect the accuracy of reputations, the honesty of shared information, and the spread of reputational information. Certain network topologies may facilitate biased beliefs and intergroup competition or in-group identity formation that could lead to high cooperation within, but conflict between, different subgroups of a network. Our review covers theoretical, experimental, and field studies across various disciplines that target these questions and could explain how the dynamics of interactions and reputations help or prevent the establishment and sustainability of cooperation in small- and large-scale societies.
Humans differ in their preferences for personal rewards, fairness, and others’ welfare. Such social preferences predict trust, public goods provision, and mutual gains bargaining, and have been linked to neural activity in regions involved in reward computation, cognitive control, and perspective taking. Although shaped by culture, social preferences are relatively stable across time, raising the question of whether differences in brain anatomy predict social preferences and their key components—concern for personal outcomes and concern for others’ outcomes. Here we examine this possibility by linking social preferences measured with incentivized economic games to 74 cortical parcels in N = 194 healthy humans. Neither concern for personal outcomes nor concern for the outcomes of others in isolation was related to anatomical differences. However, fitting earlier findings, social preferences positively scaled with cortical thickness in the left olfactory sulcus, a structure in the orbital frontal cortex previously shown to be involved in value-based decision making. Consistent with work showing that heavier usage corresponds to larger brain volume, these findings suggest that pro-social preferences relate to cortical thickness in the left olfactory sulcus because of heavier reliance on the orbital frontal cortex during social decision-making.
Political conflicts often revolve around changing versus defending a status quo. We propose to capture the dynamics between proponents and opponents of political change in terms of an asymmetric game of attack and defence with its equilibrium in mixed strategies. Formal analyses generate predictions about effort expended on revising and protecting the status quo, the form and function of false signalling and cheap talk, and how power differences impact conflict intensity and the likelihood of status quo revision. Laboratory experiments on the neurocognitive and hormonal foundations of attack and defence reveal that out-of-equilibrium investments in attack emerge because of non-selfish preferences, limited capacity to compute costs and benefits, and optimistic beliefs about the chances of winning against one's rival. We conclude with implications for the likelihood of political change and inertia, and discuss the role of ideology in political games of attack and defence.
Compared with working alone, interacting in groups can increase dishonesty and give rise to collaborative cheating—the joint violation of honesty. At the same time, collaborative cheating emerges some but not all of the time, even when dishonesty goes unsanctioned and is economically rational. Here, we address this conundrum. We show that people differ in the extent to which they follow arbitrary and costly rules and observe that “rule-followers” behave more honestly than “rule-violators.” Because rule-followers also resist the temptation to engage in collaborative cheating, dyads and groups with at least one high rule-follower have fewer instances of coordinated violations of honesty. Whereas social interaction can lead to a “social slippery slope” of increased cheating, rule-abiding individuals mitigate the emergence and spreading of collaborative cheating, leading to a transmission advantage of honesty. Accordingly, interindividual differences in rule following provide a basis through which honest behavior can persist.
Humans are considered a highly cooperative species. Through cooperation, we can tackle shared problems like climate change or pandemics and cater for shared needs like shelter, mobility, or healthcare. However, cooperation invites free-riding and can easily break down. Perhaps for this reason, societies also enable individuals to solve shared problems individually, as in the case of private healthcare plans or private retirement planning. Such “self-reliance” allows individuals to avoid problems related to public goods provision, like free-riding or underprovision, and decreases social interdependence. However, not everyone can equally afford to be self-reliant, and amid shared problems, self-reliance may lead to conflicts within groups on how to solve those problems. In two preregistered studies, we investigate how the ability to be self-reliant influences collective action and cooperation. We show that self-reliance crowds out cooperation and exacerbates inequality, especially when some heavily depend on collective action while others do not. However, we also show that groups are willing to curtail their ability to be self-reliant. When given the opportunity, groups overwhelmingly vote in favor of abolishing individual solutions to shared problems, which, in turn, increases cooperation and decreases inequality, particularly between group members that differ in their ability to be self-reliant. The support for such endogenously imposed interdependence, however, declines when individual solutions become more affordable, resonating with findings of increased individualism in wealthier societies and suggesting a link between wealth inequality and favoring individual independence and freedom over communalism and interdependence.
Humans establish public goods to provide for shared needs like safety or healthcare. Yet, public goods rely on cooperation, which can break down because of free-riding incentives. Previous research has extensively investigated how groups solve this free-rider problem but has ignored another challenge to public goods provision: some individuals do not need public goods to solve the problems they share with others. We investigate how such self-reliance influences cooperation by confronting groups in a laboratory experiment with a safety problem that could be solved either cooperatively or individually. We show that self-reliance leads to a decline in cooperation. Moreover, asymmetries in self-reliance undermine social welfare and increase wealth inequality between group members. Less dependent group members often choose to solve the shared problem individually, while more dependent members frequently fail to solve the problem, leaving them increasingly poor. While self-reliance circumvents the free-rider problem, it complicates the governing of the commons.
Group Cooperation, Carrying-Capacity Stress, and Intergroup Conflict
Peaceful intergroup relations deteriorate when individuals engage in parochial cooperation and parochial competition. To understand when and why intergroup relations change from peaceful to violent, we present a theoretical framework mapping out the different interdependence structures between groups. According to this framework, cooperation can lead to group expansion and ultimately to carrying-capacity stress. In such cases of endogenously created carrying-capacity stress, intergroup relations are more likely to become negatively interdependent, and parochial competition can emerge as a response. We discuss the cognitive, neural, and hormonal building blocks of parochial cooperation, and conclude that conflict between groups can be the inadvertent consequence of human preparedness – biological and cultural – to solve cooperation problems within groups.
Economic games offer an analytic tool to examine strategic decision-making in social interactions. Here we identify four sources of power that can be captured and studied with economic games – asymmetric dependence, the possibility to reduce dependence, the ability to punish and reward, and the use of knowledge and information. We review recent studies examining these distinct forms of power, highlight that the use of economic games can benefit our understanding of the behavioral and neurobiological underpinnings of power, and illustrate how power differences within and between groups impact cooperation, exploitation, and conflict.
Alone and together, climatic changes, population growth, and economic scarcity create shared problems that can be tackled effectively through cooperation and coordination. Perhaps because cooperation is fragile and easily breaks down, societies also provide individual solutions to shared problems, such as privatized health-care or retirement planning. But how does the availability of individual solutions affect free-riding and the efficient creation of public goods? We confronted groups of individuals with a shared problem that could be solved either individually or collectively. Across different cost-benefit ratios of individually versus collectively solving the shared problem, individuals display a remarkable tendency toward group-independent, individual solutions. This “individualism” leads to inefficient resource allocations and coordination failure. Introducing peer punishment further results in wasteful punishment feuds between “individualists” and “collectivists.” In the presence of individual solutions to shared problems, groups struggle to balance self-reliance and collective efficiency, leading to a “modern tragedy of the commons.”
Humans exhibit a remarkable capacity for cooperation among genetically unrelated individuals. Yet, human cooperation is neither universal, nor stable. Instead, cooperation is often bounded to members of particular groups, and such groups endogenously form or break apart. Cooperation networks are parochial and under constant reconfiguration. Here, we demonstrate how parochial cooperation networks endogenously emerge as a consequence of simple reputation heuristics people may use when deciding to cooperate or defect. These reputation heuristics, such as “a friend of a friend is a friend” and “the enemy of a friend is an enemy” further lead to the dynamic formation and fission of cooperative groups, accompanied by a dynamic rise and fall of cooperation among agents. The ability of humans to safeguard kin-independent cooperation through gossip and reputation may be, accordingly, closely interlinked with the formation of group-bounded cooperation networks that are under constant reconfiguration, ultimately preventing global and stable cooperation.
Intergroup conflict contributes to human discrimination and violence, but persists because individuals make costly contributions to their group’s fighting capacity. Yet, how group members effectively coordinate their contributions during intergroup conflict remains poorly understood. Here, we examine the role of oxytocin for (the coordination of) contributions to group attack or defense in multi-round, real-time feedback intergroup contests. In a double-blind placebo-controlled study with N = 480 males in Intergroup Attacker-Defender Contests, we found that oxytocin reduced contributions to attack and over time increased attackers’ within-group coordination of contributions. However, rather than becoming peaceful, attackers given oxytocin better tracked their rival’s historical defense and coordinated their contributions into well-timed and hence more profitable attacks. Our results reveal coordination of contributions as a critical component of successful attacks and support the possibility that oxytocin enables individuals to contribute to in-group efficiency and prosperity even when doing so implies outsiders are excluded or harmed.
Conflict can profoundly affect individuals and their groups. Oftentimes, conflict involves a clash between one side seeking change and increased gains through victory, and the other side defending the status quo and protecting against loss and defeat. However, theory and empirical research have largely neglected these conflicts between attackers and defenders, and the strategic, social, and psychological consequences of attack and defense remain poorly understood. To fill this void, we model (i) the clashing of attack and defense as games of strategy, reveal that (ii) attack benefits from mismatching its target's level of defense, whereas defense benefits from matching the attacker's competitiveness, suggest that (iii) attack recruits neuro-endocrine pathways underlying behavioral activation and overconfidence, whereas defense invokes neural networks for behavioral inhibition, vigilant scanning, and hostile attributions, and show that (iv) people invest less in attack than defense and attack often fails. Finally, we propose that (v) in intergroup conflict out-group attack needs institutional arrangements that motivate and coordinate collective action, whereas in-group defense benefits from endogenously emerging in-group identification. We discuss how games of attack and defense may have shaped human capacities for pro-sociality and aggression, how third parties can regulate such conflicts, and how to reduce their waste.
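Claim (ii) can be illustrated with a matching-pennies-style toy game: the defender succeeds by matching the attacker's choice, the attacker by mismatching it, which is why the game's equilibrium lies in mixed strategies. The action labels and win rule are our illustrative simplification, not the paper's exact parameterization.

```python
# Toy attack-defense game: the attack lands only on a mismatch.
ACTIONS = ("high", "low")  # e.g., levels of competitive effort

def winner(attacker_action, defender_action):
    """Defense holds when it matches the attack; otherwise the
    attack succeeds. Purely illustrative win rule."""
    return "defender" if attacker_action == defender_action else "attacker"

# Neither side has a winning pure strategy: whatever one side plays
# deterministically, the other can exploit. In equilibrium both
# randomize (here 50/50 over the two actions).
for a in ACTIONS:
    for d in ACTIONS:
        print(f"attack={a}, defense={d} -> {winner(a, d)} wins")
```

The same structure scales to continuous effort levels, where defense profits from tracking the attacker's competitiveness while attack profits from striking where defense is weak.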
Corruption is often the product of coordinated rule-violations. We investigate how such corrupt collaboration emerges and spreads when people can choose their partners (vs. not). Participants were assigned a partner and could increase their payoff by coordinated lying. After several interactions, they were either free to choose whether to stay or switch partners, or forced to stay with (or switch) their partner. Results reveal both dishonest and honest people exploit the freedom to choose a partner. Dishonest people seek and find a partner that will also lie — a "partner in crime." Honest people, by contrast, engage in ethical free-riding: they refrain from lying but also from leaving dishonest partners, taking advantage of their partners’ lies. We conclude that to curb collaborative corruption, relying on people’s honesty is insufficient. Encouraging honest individuals not to engage in ethical free-riding is essential.
Decisions are often governed by rules on adequate social behaviour. Recent research suggests that the right lateral prefrontal cortex (rLPFC) is involved in the implementation of internal fairness rules (norms) by controlling the impulse to act selfishly. A drawback of these studies is that the assumed norms and impulses have to be deduced from behaviour and that norm-following and pro-sociality are indistinguishable. Here, we directly confronted participants with a rule that required them to make advantageous or disadvantageous monetary allocations for themselves or another person. To disentangle its functional role in rule-following and pro-sociality, we divergently manipulated the rLPFC by applying cathodal or anodal transcranial direct current stimulation (tDCS). Cathodal tDCS increased participants’ rule-following, even for rules that required them to lose money or financially hurt another person. In contrast, anodal tDCS led participants to more often violate specifically those rules that were at odds with what they chose freely. Brain stimulation over the rLPFC thus did not simply increase or decrease selfishness. Instead, by disentangling rule-following and pro-sociality, our results point to a broader role of the rLPFC in integrating the costs and benefits of rules in order to align decisions with internal goals, ultimately enabling flexible adaptation of social behaviour.
Rules, whether in the form of norms, taboos, or laws, regulate and coordinate human life. Some rules, however, are arbitrary and adhering to them can be personally costly. Rigidly sticking to such rules can be considered maladaptive. Here, we test whether, at the neurobiological level, (mal)adaptive rule adherence is reduced by oxytocin – a hypothalamic neuropeptide that biases the biobehavioural approach-avoidance system. Participants self-administered oxytocin or placebo intranasally, and reported their need for structure and approach-avoidance sensitivity. Next, participants made binary decisions and were given an arbitrary rule that required them to forgo financial benefits. Under oxytocin, participants violated the rule more often, especially when they had a high need for structure and high approach sensitivity. Possibly, oxytocin dampens the need for a highly structured environment and enables individuals to flexibly trade off internal desires against external restrictions. Implications for the treatment of clinical disorders marked by maladaptive rule adherence are discussed.
Intergroup conflict persists when and because individuals make costly contributions to their group’s fighting capacity, but how groups organize contributions into effective collective action remains poorly understood. Here we distinguish contributions aimed at subordinating out-groups (out-group aggression) from those aimed at defending the in-group against possible out-group aggression (in-group defense). We conducted two experiments in which three-person aggressor groups confronted three-person defender groups in a multiround contest game. Individuals received an endowment from which they could contribute to their group’s fighting capacity. Contributions were always wasted, but when the aggressor group’s fighting capacity exceeded that of the defender group, the aggressor group acquired the defender group’s remaining resources. In-group defense appeared stronger and better coordinated than out-group aggression, and defender groups survived roughly 70% of the attacks. This low success rate for aggressor groups mirrors that of group-hunting predators such as wolves and chimpanzees, of hostile takeovers in industry, and of interstate conflicts. Furthermore, whereas peer punishment increased out-group aggression more than in-group defense without affecting success rates, sequential decision-making increased coordination of collective action for out-group aggression, doubling the aggressor’s success rate. The relatively high success rate of in-group defense suggests evolutionary and cultural pressures may have favored capacities for cooperation and coordination when the group goal is to defend, rather than to expand, dominate, and exploit.
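The contest rule described above—contributions are always wasted, and the aggressor group captures the defenders' leftovers only if its fighting capacity is strictly larger—can be sketched as a single round. Endowments, group size, and equal sharing of spoils are illustrative assumptions, not the study's exact parameters.

```python
def contest_round(attacker_contribs, defender_contribs,
                  attacker_endowments, defender_endowments):
    """One illustrative round of an attacker-defender contest.
    Contributions are sunk for both sides; attackers seize the
    defenders' remaining resources only if total attack strictly
    exceeds total defense. Returns (attacker_payoffs,
    defender_payoffs, attack_succeeded)."""
    attack = sum(attacker_contribs)
    defense = sum(defender_contribs)
    # Contributions are wasted regardless of the outcome.
    atk_left = [e - c for e, c in zip(attacker_endowments, attacker_contribs)]
    def_left = [e - c for e, c in zip(defender_endowments, defender_contribs)]
    if attack > defense:
        spoils = sum(def_left) / len(atk_left)  # assume equal sharing
        return [x + spoils for x in atk_left], [0] * len(def_left), True
    return atk_left, def_left, False

# Successful attack: 15 vs. 9 fighting capacity, endowments of 10 each.
print(contest_round([5, 5, 5], [3, 3, 3], [10] * 3, [10] * 3))
# -> ([12.0, 12.0, 12.0], [0, 0, 0], True)
```

Note the asymmetry the abstract exploits: a failed attack still destroys the attackers' contributions, while a successful defense merely preserves the status quo minus what was spent on defense.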
The prevalence of cooperation among humans is puzzling because cooperators can be exploited by free riders. Peer punishment has been suggested as a solution to this puzzle, but accumulating evidence questions its robustness in sustaining cooperation. Among other reasons, punishment fails when it is not powerful enough or when it elicits counter-punishment. Existing research, however, has ignored that the distribution of punishment power can be the result of social interactions. We introduce a novel experiment in which individuals can transfer punishment power to others. We find that while decentralised peer punishment fails to overcome free riding, the voluntary transfer of punishment power enables groups to sustain cooperation. This is achieved by non-punishing cooperators empowering those who are willing to punish in the interest of the group. Our results show how voluntary power centralisation can efficiently sustain cooperation, which could explain why hierarchical power structures are widespread among animals and humans.
One fundamental question in decision making research is how humans compute the values that guide their decisions. Recent studies showed that people assign higher value to goods that are closer to them, even when physical proximity should be irrelevant to the decision from a normative perspective. This phenomenon, however, seems reasonable from an evolutionary perspective: most foraging decisions of animals involve a trade-off between the value that can be obtained and the effort of obtaining it. The anticipated effort of physically obtaining a good could therefore affect the subjective value of that good. In this experiment, we test this hypothesis by letting participants state their subjective value for snack food while manipulating the effort that would be incurred when reaching for it. Even though reaching was not required in the experiment, we find that willingness to pay was significantly lower when subjects wore heavy wristbands on their arms. Thus, when reaching was more difficult, items were perceived as less valuable. Importantly, this was only the case when items were physically in front of the participants, but not when items were presented as text on a computer screen. Our results suggest automatic interactions of motor and valuation processes that remain unexplored to date and may account for irrational decisions that occur when reward is particularly easy to reach.
Social norms, such as treating others fairly regardless of kin relations, are essential for the functioning of human societies. Their existence may explain why humans, among all species, show unique patterns of prosocial behaviour. The maintenance of social norms often depends on external enforcement, as in the absence of credible sanctioning mechanisms prosocial behaviour deteriorates quickly. This sanction-dependent prosocial behaviour suggests that humans strategically adapt their behaviour, acting selfishly if possible but controlling selfish impulses if necessary. Recent studies point to the role of the dorsolateral prefrontal cortex (DLPFC) in controlling selfish impulses. We test whether the DLPFC is indeed involved in the control of selfish impulses as well as in the strategic acquisition of this control mechanism. Using repetitive transcranial magnetic stimulation, we provide evidence for the causal role of the right DLPFC in strategic fairness. Because the DLPFC is phylogenetically one of the latest developed neocortical regions, this could explain why complex norm systems exist in humans but not in other social animals.
Humans can choose between fundamentally different options such as watching a movie or going out for dinner. According to the utility concept, put forward by utilitarian philosophers and widely used in economics, this may be accomplished by mapping the value of different options onto a common scale, independent of specific option characteristics. If this is the case, value-related activity patterns in the brain should allow predictions of individual preferences across fundamentally different reward categories. We analyze fMRI data of the prefrontal cortex while subjects imagine the pleasure they would derive from items belonging to two distinct reward categories: engaging activities (like going out for drinks, daydreaming or doing sports) and snack foods. Support vector machines trained on brain patterns related to one category reliably predict individual preferences of the other category and vice versa. Further, we predict preferences across participants. These findings demonstrate that prefrontal cortex value signals follow a common scale representation of value that is even comparable across individuals and could in principle be used to predict choice.
My PhD dissertation "From Simple Choice to Social Decision – On the Neurobiological and Evolutionary Roots of Decision Making".
German introduction to R. R is a programming language and free software environment for statistical computing, and thus a free and very powerful alternative to commercial software like MATLAB, SPSS, Stata, or SAS. The reader covers basic syntax, data types, data import and export, plotting, basic descriptive statistics functions, distribution generation, loops, and regression analysis.
German introduction to basic concepts of statistics. The reader was used in bachelor level education and covers concepts of measurement theory, descriptive statistics, data visualization, an introduction to probability theory and inferential statistics.
My master's thesis on the neuronal and behavioral correlates of altruistic punishment – the punishment of unfair behavior from a bystander perspective (in German).