
Companies inadvertently fund online misinformation despite consumer backlash



Background on digital advertising

The predominant business model of several mainstream digital media platforms relies on monetizing attention via advertising3. While these platforms typically offer free content and services to individual consumers, they generate revenue by serving as an intermediary or advertising exchange connecting advertisers with independent websites that want to host advertisements. To do so, platforms run online auctions to algorithmically distribute advertising across websites, known as ‘programmatic advertising’. For example, Google distributes advertising in this manner to more than two million non-Google sites that are part of the Google Display Network. This allows websites to generate revenue for hosting advertising, and they share a percentage of this payment with the platform. In the USA, more than 80% of digital display advertisements are placed programmatically16. We refer to these advertising exchanges as digital advertising platforms and use the term digital platforms to collectively refer to all the services offered by such media platforms.

We examine the role of advertising companies and digital advertising platforms in monetizing online misinformation. While in other forms of (offline) media, advertisers typically have substantial control over where their advertisements appear, advertising placement through digital advertising platforms is mainly automated. Since most companies do not have the capacity to participate in high-frequency advertising auctions that require them to place individual bids for each advertising slot they are interested in, they typically outsource the bidding process to an advertising platform. Such programmatic advertising gives companies comparatively little control over where their advertisements end up online. However, companies can take steps to reduce advertising on misinformation websites, such as by participating only in advertising auctions for a select list of credible websites or by blocking advertisements from appearing on specific misinformation outlets.
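
To make the mechanics concrete, the following is a minimal sketch of one programmatic placement decision under a simplified sealed-bid second-price auction. It is illustrative only: the advertiser names, bids and the `blocklist` parameter are hypothetical, and real exchanges use far richer targeting, pricing and brand-safety machinery.

```python
def run_auction(bids, blocklist=frozenset()):
    """Pick a winner for one ad slot via a sealed-bid second-price rule.

    bids: dict mapping advertiser name to bid (in USD).
    blocklist: advertisers that excluded this site (brand safety).
    """
    eligible = {adv: bid for adv, bid in bids.items() if adv not in blocklist}
    if not eligible:
        return None, 0.0
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # The winner pays the second-highest bid (or its own bid if unopposed).
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# One ad slot on a hypothetical misinformation website: an advertiser
# that blocklists the site never appears there and forgoes the slot.
bids = {"FastFoodCo": 1.20, "RideShareCo": 0.90, "DeliveryCo": 0.75}
print(run_auction(bids))                            # ('FastFoodCo', 0.9)
print(run_auction(bids, blocklist={"FastFoodCo"}))  # ('RideShareCo', 0.75)
```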

Collecting website data

We collect data on misinformation websites in three steps. First, we use a dataset maintained by NewsGuard. This company rates all the news and information websites that account for 95% of online engagement in each of the five countries where it operates. Journalists and experienced editors manually generate these ratings by reviewing news and information websites according to nine apolitical journalistic criteria. Recent research has used this dataset to identify misinformation websites6,66,67. In this paper, we consider each website that NewsGuard rates as repeatedly publishing false content between 2019 and 2021 to be a misinformation website and all others to be non-misinformation websites, leading to a set of 1,546 misinformation websites and 6,499 non-misinformation websites. To get coverage throughout our study period, we sample websites provided by NewsGuard from the start, middle and end of each year from 2019 to 2021. We also sample websites from January 2022 and June 2022 to account for websites that existed during our study period but were discovered later. Supplementary Table 3 summarizes the characteristics of this dataset. Our NewsGuard dataset contains websites across the political spectrum, including left-leaning websites (for example, https://www.palmerreport.com/ and https://occupydemocrats.com/), politically neutral websites (for example, https://rt.com/ and https://www.nationalenquirer.com), and right-leaning websites (for example, https://www.thegatewaypundit.com/ and http://theconservativetreehouse.com/).

Note that prior research using the NewsGuard dataset has often used the term ‘untrustworthy’ to describe websites6,67. Such research has relied on NewsGuard’s aggregate classification, whereby a site that scores below a certain threshold (60 points) on NewsGuard’s weighted score system is labelled as untrustworthy. Instead of using NewsGuard’s overall score for a website, we use the first criterion rated by NewsGuard for each website, that is, whether a website repeatedly publishes false news, to identify a set of 1,546 misinformation websites. While 94% of the NewsGuard misinformation websites we identify in this manner are also untrustworthy based on NewsGuard’s classification, only about 52% of the untrustworthy websites are misinformation websites, that is, websites that repeatedly publish false news. Our measure of misinformation is, therefore, more conservative than prior work using NewsGuard’s ‘untrustworthy’ label.
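
To illustrate the difference between the two labels, consider a hypothetical ratings table; the column names (`publishes_false_content`, `score`) are illustrative and not NewsGuard’s actual schema.

```python
import pandas as pd

# Hypothetical NewsGuard-style ratings, one row per website.
ratings = pd.DataFrame({
    "domain": ["a.com", "b.com", "c.com", "d.com"],
    "publishes_false_content": [True, True, False, False],  # criterion 1
    "score": [20, 55, 45, 80],  # weighted overall score, 0-100
})

# Prior work: 'untrustworthy' = overall weighted score below 60 points.
ratings["untrustworthy"] = ratings["score"] < 60

# This paper: 'misinformation' = repeatedly publishes false content.
ratings["misinformation"] = ratings["publishes_false_content"]

# The misinformation label is the narrower one: here 'c.com' scores
# below 60 on other criteria without publishing false content, so it
# is untrustworthy but not misinformation.
print(ratings[["domain", "untrustworthy", "misinformation"]])
```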

In addition to the NewsGuard dataset, we use a list of websites provided by the GDI. This non-profit organization identifies disinformation by analysing both the content and context of a message, and how messages spread through networks and across platforms68. In this way, GDI maintains a monthly updated list of websites, which it also shares with interested advertising tech platforms to help reduce advertising on misinformation websites. The GDI list allows us to identify 1,869 additional misinformation websites. Finally, we augment our list of misinformation websites with 396 additional ones used in prior work69,70. Among the websites that NewsGuard rated as non-misinformation (at any point in our sample), 310 websites were considered to be misinformation websites by our other sources or by NewsGuard itself (during a different period in our sample). We categorize these websites as misinformation websites given their risk of producing misinformation.

Altogether, our website dataset consists of 10,310 websites, including 3,811 misinformation and 6,499 non-misinformation websites. Similar to prior work6,67, our final measure of misinformation is at the level of the website or online news outlet. Aggregating article-level information and using website-level metadata is meaningful since it reduces noise when arriving at a website-level measure. Finally, we use data from SEMRush, a leading online analytics platform, to determine the level of monthly traffic received by each website from 2019 to 2021.
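
A minimal sketch of how the three source lists could be combined into the final website-level dataset; the file names and loader are placeholders for the actual NewsGuard, GDI and prior-work data.

```python
def load_domains(path):
    """Read one domain per line from a plain-text list (placeholder)."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

newsguard_misinfo = load_domains("newsguard_misinfo.txt")   # 1,546 sites
newsguard_other = load_domains("newsguard_non_misinfo.txt")
gdi_misinfo = load_domains("gdi_misinfo.txt")               # +1,869 sites
prior_work = load_domains("prior_work_misinfo.txt")         # +396 sites

# Union of all sources; NewsGuard-rated non-misinformation sites flagged
# by another source (or by NewsGuard in a different period) count as
# misinformation given their risk of producing misinformation.
misinfo = newsguard_misinfo | gdi_misinfo | prior_work
non_misinfo = newsguard_other - misinfo

print(len(misinfo), len(non_misinfo))  # 3,811 and 6,499 in our data
```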

Consumer experiment design

This study was reviewed by the Stanford University Institutional Review Board (protocol no. IRB-63897) and the Carnegie Mellon University Institutional Review Board (protocol no. IRB00000603). Our study was pre-registered at the American Economic Association’s Registry under AEARCTR-0009973. Informed consent was obtained from all participants at the beginning of the survey.

Setting and sample recruitment

We recruited a sample of US internet users via CloudResearch. CloudResearch screened respondents for our study so that the sample is representative of the US population in terms of age, gender and race based on the US Census (2020). It is important to note that while we recruited our sample to be representative on these dimensions to improve the generalizability and external validity of our results, it is a diverse sample of US internet users that is not necessarily representative of the US population on other dimensions71. To ensure data quality, we include a screener in our survey to check whether participants pay attention to the information provided. Only participants who pass this screener can proceed with the survey. Our total sample includes 4,039 participants, who are randomized into five groups approximately evenly.
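
For concreteness, a balanced assignment into the five arms could be implemented as below. This is only a sketch: the actual randomization was handled within the survey flow, and the seed and group labels here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # illustrative seed
n_participants = 4039
groups = ["control", "T1", "T2", "T3", "T4"]

# Repeat the group labels to cover all participants, then shuffle so
# that arms end up approximately even (808/808/808/808/807 here).
assignment = np.resize(groups, n_participants)
rng.shuffle(assignment)
print({g: int((assignment == g).sum()) for g in groups})
```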

The flow of the survey study is shown in Supplementary Fig. 1. We begin by asking participants to report demographics such as age, gender and residence. From a list of trustworthy and misinformation outlets, we then ask participants questions about their behaviours in terms of the news outlets they have used in the past 12 months, their trust in the media (on a 5-point scale), the online services or platforms they have used and the number of petitions they have signed in the past 12 months.

Initial gift card preferences

We then inform participants that one in five (that is, 20% of all respondents) who complete the survey will be offered a US$25 gift card from a company of their choice out of six company options. Respondents are asked to rank the six gift card companies from their first choice (most preferred) to their sixth choice (least preferred). These six companies belong to one of three categories: fast food, food delivery and ride-sharing. All six companies appeared on the misinformation websites in our sample during the past three years (2019–2021), offer items below US$25, and are commonly used throughout the USA. The order in which the six companies are presented is randomized at the respondent level. As a robustness check, we also ask respondents to assign weights to each of the six gift card options. This question gives respondents greater flexibility by allowing them to indicate the possibility of indifference (that is, equal weights) between any set of options. We then ask participants to confirm which gift card they would like to receive if they are selected to ensure they have consistent preferences regardless of how the question is asked. At this initial elicitation stage, the respondents did not know that they would get another chance to revise their choice. Hence, these choices can be thought of as capturing their revealed preferences.

Information treatments

All participants in the experiment are given baseline information on misinformation and advertising. This is meant to ensure that all participants in our experiment are made aware of how we define misinformation along with examples of a few misinformation websites (including right-wing, neutral and left-wing misinformation websites), how misinformation websites are identified, and how companies advertise on misinformation websites (via an illustrative example) and use digital platforms to automate placing advertisements.

Participants are then randomized into one control and four treatment groups, in which the information treatments are all based on factual information from our data and prior research. We use an active control design to isolate the effect of providing information relevant to the practices of specific companies on people’s behaviour9. Participants in the control group are given generic information based on prior research that is unrelated to advertising companies or platforms but relevant to the topic of news and misinformation.

In our first ‘company only’ treatment group (T1), participants are given factual information stating that advertisements from their top choice gift card company appeared on misinformation websites in the recent past. Depending on their preferences, people may shift their final gift card choice away from their initial top-ranked company after receiving this information. It is unclear, however, whether advertising on misinformation websites would cause a sufficient change in consumption patterns and which sets of participants may be more affected.

Our second ‘platform only’ treatment group (T2) informs participants that companies using digital advertising platforms were about ten times more likely to appear on misinformation websites than companies that did not use such platforms in the recent past. This information treatment measures how people respond to the role of digital advertising platforms in financing misinformation news outlets. Since it does not contain information about advertising companies, it practically serves as a second control group for our company-level outcome and primarily measures responses on our platform-related outcome.

Because our descriptive data suggest that the use of digital advertising platforms amplifies advertising revenue for misinformation outlets, we are interested in measuring how consumers respond to a specific advertising company appearing on misinformation websites when they are also informed of the potential role of digital advertising platforms in placing companies’ advertising on misinformation websites. It is unclear whether consumers will attribute more blame to companies or to advertising platforms for financing misinformation websites when informed about the role of the different stakeholders in this ecosystem. For this reason, our third ‘company and platform’ treatment (T3) combines information from our first two treatments (T1 and T2). Similar to T1, participants are given factual information that advertisements from their top choice gift card company appeared on misinformation websites in the recent past. Additionally, we inform participants that their top choice company used digital advertising platforms and that companies that used such platforms were about ten times more likely to appear on misinformation websites than companies that did not, as mentioned in T2.

Finally, since several advertising companies appear on misinformation websites, we would like to determine whether informing consumers that other advertising companies also appear on misinformation websites changes their response towards their top choice company. In our fourth ‘company ranking’ treatment (T4), participants are given factual information, which states that “In the recent past, ads from all six companies below repeatedly appeared on misinformation websites in the following order of intensity”, and provided with a ranking from one of three years in our study period, that is, 2019, 2020 or 2021. We personalize these rankings by providing truthful information based on data from different years in the recent past such that the respondent’s top gift card choice company does not appear last in the ranking (that is, is not the company that advertises least on misinformation websites) and, in most cases, advertises more intensely on misinformation websites than its potential substitute in the same company category (for example, fast food, food delivery or ride-sharing). Such a treatment allows us to measure potential differences in the direction of consumers switching their gift card choices, such as switching towards companies that advertise more or less intensely on misinformation websites. It could also give consumers plausible deniability (for example, “everyone advertises on misinformation websites”), leading to ambiguous predictions about the exact treatment effect.
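
The personalization rule in T4 amounts to a small search over the three yearly rankings. The sketch below illustrates it with hypothetical company labels and rankings; the actual companies, rankings and survey-software implementation may differ.

```python
# Hypothetical yearly rankings of the six gift card companies, ordered
# from most to least intense advertiser on misinformation websites.
rankings = {
    2019: ["A", "B", "C", "D", "E", "F"],
    2020: ["B", "A", "D", "C", "F", "E"],
    2021: ["C", "D", "A", "B", "E", "F"],
}

def pick_ranking(top_choice, substitute=None):
    """Choose a year whose truthful ranking satisfies the T4 constraints:
    the respondent's top choice is not last and, where possible, ranks
    above its within-category substitute."""
    for year, order in rankings.items():
        if order[-1] == top_choice:
            continue  # top choice would be the least intense advertiser
        if substitute and order.index(top_choice) > order.index(substitute):
            continue  # top choice should out-advertise its substitute
        return year, order
    # Fall back: accept any year where the top choice is simply not last.
    for year, order in rankings.items():
        if order[-1] != top_choice:
            return year, order

print(pick_ranking("A", substitute="B"))  # -> (2019, ['A', 'B', ...])
```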

Outcomes

We measure two pre-registered behavioural outcomes that collectively capture how people respond to our information treatments in terms of both voice and exit25. After the information treatment, all participants are asked to make their final gift card choice from the same six options they were shown earlier. Our main outcome of interest is whether participants ‘exit’ or switch their gift card preference, that is, whether they select a different gift card after the information treatment than their top choice indicated before the information treatment. To ensure incentive compatibility, participants are (truthfully) told that those randomly selected to receive a gift card will be offered the gift card of their choice at the end of our study. As mentioned above, the probability of being randomly chosen to receive a gift card is 20%. We choose a high probability of receiving a gift card relative to other online experiments since prior work has shown that consumers process choice-relevant information more carefully as the realization probability increases72. To make the gift card outcome as realistic as possible, we also used a high-value gift card (US$25). The focus of our experiments is on single-shot outcomes. While it would have been interesting to capture longer-term effects, the cost of implementing our gift card outcome for a large sample and the expenditure on the other studies made a follow-up study cost-prohibitive.
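
Concretely, the main exit outcome (and the stricter cross-category variant reported later) reduces to simple comparisons between the pre- and post-treatment choices. A sketch with hypothetical column and company names:

```python
import pandas as pd

# Hypothetical respondent-level data; names are illustrative only.
df = pd.DataFrame({
    "initial_top":      ["FastFoodA", "DeliveryA", "RideA"],
    "final_choice":     ["FastFoodA", "DeliveryB", "FastFoodB"],
    "initial_category": ["fast food", "delivery",  "ride-sharing"],
    "final_category":   ["fast food", "delivery",  "fast food"],
})

# Main 'exit' outcome: did the respondent abandon their initial top choice?
df["switched"] = df["final_choice"] != df["initial_top"]

# Stricter variant: did they switch to a different product category?
df["switched_category"] = df["final_category"] != df["initial_category"]

print(df[["switched", "switched_category"]].mean())  # switching rates
```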

Second, participants are given the option to sign one of several real online petitions that we created and hosted on Change.org. Participants can opt to sign a petition that advocates for either blocking or allowing advertising on misinformation or choose not to sign any petition. Further, participants could choose between two petitions for blocking advertisements on misinformation websites, suggesting that either: (1) advertising companies, or (2) digital advertising platforms, need to block advertisements from appearing on misinformation websites. Overall, participants selected among the following five choices: (1) “Companies like X need to block their ads from appearing on misinformation websites.”, where X is their top choice gift card company; (2) “Companies like X need to allow their ads to appear on misinformation websites.”, where X is their top choice gift card company; (3) “Digital ad platforms used by companies need to block ads from appearing on misinformation websites.”; (4) “Digital ad platforms used by companies need to allow ads to appear on misinformation websites.”; and (5) “I do not want to sign any petition.” To track the number of petition signatures for each of these four petition options across our randomized groups, we provide separate petition links to participants in each randomized group. We record several petition-related outcomes. First, we measure participants’ intention to sign a petition based on the option they select in this question. Participants who pass our attention check and opt to sign a petition are later provided with a link to their petition of choice. This allows us to track whether participants click on the petition link provided. Participants can also self-report whether they signed the petition. Finally, for each randomized group, we can track the total number of actual petition signatures.

Our petition outcome serves two purposes. First, while our gift card outcome measures how people change their consumption behaviour in response to the information provided, people may also respond to our information treatments in alternative ways, for example, by voicing their concerns or supplying information to the parties involved25,26. Given that the process of signing a petition is costly, participants’ responses to this outcome constitute a meaningful measure similar to petition measures used in prior experimental work73,74. Second, since participants must choose between signing either company or platform petitions, this outcome allows us to measure whether or not, across our treatments, people hold advertising companies more responsible for financing misinformation than the digital advertising platforms that automatically place advertisements for companies.

In addition to our behavioural outcomes, we also record participants’ stated preferences. To do so, we ask participants about their degree of agreement with several statements about misinformation on a seven-point scale ranging from ‘strongly agree’ to ‘strongly disagree’. These include: (1) whether they think companies have an important role in reducing the spread of misinformation through their advertising practices; and (2) whether digital platforms should give companies the option to avoid advertising on misinformation websites.

Heterogeneous treatment effects

We explore heterogeneity in consumer responses along four pre-registered dimensions. First, prior research recognizes differences in the salience of prosocial motivations across gender75, with women being more affected by social-impact messages than men76 and more critical consumers of new media content77. Given these findings, we could expect female participants to be more strongly affected by our information treatments.

Responses to our information treatments may also differ by respondents’ political orientation. According to prior research, conservatives are especially likely to associate the mainstream media with the term ‘fake news’. These perceptions are generally linked to lower trust in media, voting for Trump, and higher belief in conspiracy theories78. Moreover, conservatives are more likely to consume misinformation2 and the supply of misinformation has been found to be higher on the ideological right than on the left79. Consequently, we might expect stronger treatment effects for left-wing respondents.

Consumers who more frequently use a company’s products or services could be presumed to be more loyal towards the company or derive greater utility from its use, which could limit changes in their behaviour37. Alternatively, more frequent consumers may be more strongly affected by our information treatments as they may perceive their usage as supporting such company practices to a greater extent than less frequent consumers.

Finally, we measure whether people’s responses differ by their own misinformation consumption, based on whether they reported using misinformation outlets when initially asked to select the news outlets they had used in the past 12 months.

Tackling experimental validity concerns

In our incentivized, online setting where we measure behavioural outcomes, we expect experimenter demand effects to be minimal, as has been evidenced in the experimental literature80,81. We take several steps to mitigate potential experimenter demand effects, including implementing best practices recommended in prior work9. First, our experiment maintains a neutral framing from participant recruitment onwards. While recruiting participants, we invite them to “take a survey about the news, technology and businesses” without making any specific references to misinformation or its effects. While introducing misinformation websites and how they are identified by independent non-partisan organizations, we include examples of misinformation websites across the political spectrum (including both right-wing and left-wing sites) and provide an illustrative example of misinformation by foreign actors. In drafting the survey instruments, we kept the phrasing of the questions and the available choices as neutral as possible. For example, while introducing our online petitions, we presented participants with the option to sign real petitions that suggest both blocking and allowing advertising on misinformation sites. Indeed, we find that the vast majority of participants believe that the information provided in the survey was unbiased, as shown in Supplementary Fig. 4. Only about 10% of participants chose one of the ‘biased’ or ‘very biased’ options when asked to rate the political bias of the survey information provided on a seven-point scale ranging from ‘very right-wing biased’ to ‘very left-wing biased’.

In our active control design, participants in all randomized groups are presented with the same baseline information about misinformation, given misinformation-related information in the information intervention and asked the same questions after the information intervention to emphasize the same topics and minimize potential differences in the understanding of the study across treatment groups. Moreover, to maximize privacy and increase truthful reporting82, respondents complete the surveys on their own devices without the physical presence of a researcher. We also do not collect respondents’ names or contact details (with the exception of eliciting emails to provide gift cards to participants at the end of the study).

In presenting our information interventions and measuring our behavioural outcomes, we take special care to not highlight the names of the specific entities being randomized across groups to avoid emphasizing what is being measured. We do, however, highlight our gift card incentives by putting the gift card information in bold text to ensure incentive compatibility since prior work has found that failing to make incentives conspicuous can vastly undermine their ability to shift behaviour83.

Apart from making the above design choices to minimize experimenter demand effects, we measure their relevance using a survey question. Since demand effects are less of a concern if participants cannot identify the intent of the study9, we ask participants an open-ended question, that is, “What do you think is the purpose of our study?”. Following prior work84,85, we then analyse the responses to this question to examine whether they differ across treatment groups. To measure potential differences in the respondents’ perceptions of the study, we examine their open-ended text responses about the purpose of the study using a support vector machine classifier, which incorporates several text features, including word, character and sentence counts, sentiments, topics (using Gensim) and word embeddings. We predict treatment status using the classifier, keeping 75% of the sample for the training set and the remaining 25% as the test set. The classifier predicts treatment status at a rate similar to chance for our main treatment groups relative to the control group, as shown in Supplementary Table 11. These results, which are similar in magnitude to those found in previous research84,85, suggest that our treatments do not substantially affect participants’ perceptions about the purpose of the study. Overall, this analysis gives us confidence that our main experimental findings are unlikely to be driven by experimenter demand effects.
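
A minimal sketch of this demand-effects check appears below. For brevity it uses only TF-IDF word features, whereas the analysis described above also incorporates counts, sentiment, Gensim topics and word embeddings; the loader functions are hypothetical placeholders for the survey data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical loaders: open-ended answers to the study-purpose question
# and each respondent's randomized group label (e.g. control vs T1).
texts = load_purpose_responses()
labels = load_treatment_labels()

# 75% training / 25% test split, as in the analysis described above.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0)

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(X_train, y_train)

# If the treatments do not shift perceived study purpose, test accuracy
# should hover around chance for a balanced two-group comparison.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```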

To address external validity concerns, we incorporate additional exit outcomes in the paper, showing that treated individuals switched to lower preference products (Table 1, columns 3 and 4) and products across categories (Table 1, columns 5 and 6) after our information interventions by 8 and 5 percentage points, respectively. We also show in Supplementary Table 8 that as the difference between participants’ highest weighted and second highest weighted gift card choice increases, their switching behaviour decreases. This shows that the weights assigned by participants to their gift card options are capturing meaningful and costly differences in value, highlighting the external validity of our findings. More generally, our pre-registered heterogeneity analysis lends credence to the study’s external validity. In line with expectations, we find that less frequent users and more politically liberal individuals are likelier to switch (see Extended Data Table 3 for the full set of pre-registered heterogeneity results). Moreover, we find that the cost of switching gift cards varies based on participants’ observable characteristics. For example, treated participants who reported not using any of the misinformation news outlets in our survey lost 50% of the median value (US$12.50) of their initial top choice gift card whereas treated participants who reported reading such outlets lost 33.3% of the median value (US$8.33) of their initial top choice gift card. Participants’ text responses also indicate that they believed their choices to be consequential (see Supplementary Tables 1 and 2). As an example, while explaining their choice of gift card, one participant stated, “Because I would most likely use this gift card on my next visit to… and it is less likely that i would use the others.” Regarding the petition outcome, one participant stated “The source of this problem seems to be from the digital advertising platforms, so I’d rather sign the petition that stops them from putting ads on misinformation websites.”

Decision-maker experiment design

We followed the same IRB review, pre-registration and consent procedures as those used for our consumer study. This study addresses two research questions. First, we aim to measure the existing beliefs and preferences decision-makers have about advertising on misinformation websites. This will help inform whether companies may be inadvertently or willingly sustaining online misinformation. Second, we ask: how do decision-makers update their beliefs and demand for a platform-based solution to avoid advertising on misinformation websites in response to information about the role of platforms in amplifying the financing of misinformation? This will suggest whether companies may be more interested in adopting advertising platforms that reduce the financing of misinformation. To this end, we conduct an information-provision experiment9. While past work has examined how firm behaviour regarding market decisions changes in response to new information48,49, it is unclear how information on the role of digital advertising platforms in amplifying advertising on misinformation would affect decision-makers’ non-market strategies.

Setting and sample recruitment

To recruit participants, we partnered with the executive education programmes at the Stanford Graduate School of Business and Heinz College at Carnegie Mellon University. We did so in order to survey senior managers and leaders who could influence strategic decision-making within their firms, in contrast to studies relying heavily on MBA students for understanding decision-making in various contexts such as competition, pricing, strategic alliances and marketing86,87,88,89. Additionally, partnering with two university programmes instead of a specific firm allowed us to access a more diverse sample of companies than prior work that sampled specific types of firms—for example, innovative firms, startups or small businesses90,91,92. Throughout this study, we use the preferences of decision-makers (for example, chief executive officers) as a proxy for company-level preferences since people in such roles shape the outcomes of their companies through their strategic decisions93,94.

Our partner organizations sent emails to their alumni on our behalf. We used neutral language in our study recruitment emails to attract a broad audience of participants to our survey regardless of their initial beliefs and concerns about misinformation, stating our goal as “conducting vital research on the role of digital technologies in impacting your organization” without mentioning misinformation. We received 567 complete responses, of which we keep the 90% that come from currently employed respondents. To ensure data quality, we dropped an additional 13% of responses where participants were inattentive in answering the survey, resulting in a final sample of 442 responses. These participants were determined to be inattentive because they provided an answer greater than 100 when asked to estimate a number out of 100 in the two questions eliciting their prior beliefs about companies and platforms before the information treatment was provided. Our final sample of 442 respondents is from companies that span all 23 industries in our descriptive analysis. We describe the observable characteristics of this sample when addressing external validity below.
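
The two sample-construction filters can be expressed as simple steps over the raw responses; a sketch with hypothetical file and column names:

```python
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # placeholder file name

# Step 1: keep currently employed respondents (~90% of 567 completes).
employed = responses[responses["employment_status"] != "not employed"]

# Step 2: drop inattentive respondents. The two prior-belief questions
# ask for estimates 'out of 100', so any answer above 100 on either
# question signals inattention (~13% of the remaining responses).
attentive = employed[
    (employed["prior_companies_est"] <= 100)
    & (employed["prior_platforms_est"] <= 100)
]

print(len(attentive))  # 442 respondents in the final sample
```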

Supplementary Fig. 2 shows the design of the survey study. We first elicit participants’ current employment status. All those working in some capacity are allowed to continue the survey, whereas the rest of the participants are screened out. After asking for their main occupation, all participants in the experiment are provided with baseline information on misinformation and advertising similar to that provided in the consumer experiment.

Baseline beliefs and preferences

In our pre-registration, we highlighted that we would measure the baseline beliefs and preferences of decision-makers. We measure participants’ baseline beliefs about the roles of companies in general, their own company and platforms in general in financing misinformation. Specifically, participants are asked to estimate the number of companies among the most active 100 advertisers whose advertisements appeared on misinformation websites during the past three years (2019–2021). Additionally, we ask participants to report whether they think their company or organization had its advertisements appear on misinformation websites in the past three years. Finally, we measure participants’ beliefs about the role of digital advertising platforms in placing advertisements on misinformation websites. To do so, we first inform participants that during the past three years (2019–2021), out of every 100 companies that did not use digital advertising platforms, eight companies appeared on misinformation websites on average. We then ask participants to provide their best estimate for the number of companies whose advertisements appeared on misinformation websites out of every 100 companies that did use digital advertising platforms.
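
Given the anchor provided to respondents (eight out of every 100 non-platform companies), each prior-belief estimate maps directly onto an implied amplification factor for platforms. A sketch, with hypothetical estimates:

```python
BASELINE = 8  # companies per 100 that appeared without using platforms

def implied_amplification(platform_estimate):
    """Implied prior belief about how much digital advertising platforms
    amplify the likelihood of appearing on misinformation websites.
    platform_estimate: guess, out of 100 platform-using companies, for
    how many appeared on misinformation websites."""
    return platform_estimate / BASELINE

# Hypothetical guesses; our data imply the factor is roughly 10x,
# corresponding to an estimate of about 80 out of 100.
for guess in (8, 40, 80):
    print(f"estimate {guess}/100 -> {implied_amplification(guess):.1f}x")
```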

In addition to recording participants’ stated preferences using self-reported survey measures, we measure participants’ revealed preferences. To ensure incentive compatibility, participants are asked three questions in a randomized order: (1) information demand about consumer responses—that is, whether they would like to learn how consumers respond to companies whose advertisements appear on misinformation websites (based on our consumer survey experiment); (2) advertisement check—that is, whether they would like to know about their own company’s advertisements appearing on misinformation websites in the recent past; and (3) demand for a solution—that is, whether they would like to sign up for a 15-minute information session on how companies can manage where their advertisements appear online. Participants are told they can receive information about consumer responses at the end of the study if they opt to receive it whereas the advertisement check and solution information are provided as a follow-up after the survey. Participants are required to provide their emails and company name for the advertisement check. To sign up for an information session from our industry partner on a potential solution to avoid advertising on misinformation websites, participants sign up on a separate form by providing their emails. Since all three types of information offered are novel and otherwise costly to obtain, we expect respondents’ demand for such information to capture their revealed preferences.

Information intervention

Participants are then randomized into a treatment group, which receives information about the role of digital advertising platforms in placing advertising on misinformation websites, and a control group, which does not receive this information. Based on the dataset we assembled, participants are given factual information that companies that used digital advertising platforms were about ten times more likely to appear on misinformation websites than companies that did not use such platforms in the recent past. This information is identical to the information provided to participants in the T2 (that is, platform only) group in the consumer experiment.

Outcomes

After the information intervention, we first measure participants’ posterior beliefs about the role of digital advertising platforms in placing advertisements on misinformation websites following our pre-registration. Participants are told about the average number of companies whose advertisements appear per month on misinformation websites that are not monetized by digital advertising platforms. They are then asked to estimate the average number of companies whose advertisements appear monthly on misinformation websites that use digital advertising platforms. This question measures whether participants believe that the use of digital advertising platforms amplifies advertising on misinformation websites.

We record two behavioural outcomes, which were pre-registered as our primary outcomes of interest after the information intervention. Our main outcome of interest is the respondents’ demand for a platform-based solution to avoid advertising on misinformation websites. Participants can opt to learn about one of two types of information, that is: (1) which platforms least frequently place companies’ advertising on misinformation websites; and (2) which types of analytics technologies are used to improve advertising performance; or opt not to receive any information. Since participants can only opt to receive one of the two types of information, this question is meant to capture the trade-off between respondents’ concern for avoiding misinformation outlets and their desire to improve advertising performance. Participants are told that they will be provided with the information they choose at the end of this study. Following the literature on measuring information acquisition97, we measure respondents’ demand for solution information, which serves as a revealed-preference proxy for their interest in implementing a solution for their organization.

Additionally, to measure whether the information treatment increases concern for financing misinformation in general, we record a second behavioural measure. Participants are told that the research team will donate US$100 to one of two organizations after randomly selecting one of the first hundred responses: (1) the GDI; and (2) DataKind, which helps mission-driven organizations increase their impact by unlocking their data science potential ethically and responsibly.

Tackling experimental validity concerns

As with our consumer experiment, this survey was carried out in an online setting, where experimenter demand effects are limited80,81. We followed best practices9 by keeping the treatment language neutral and ensuring the anonymity of the participants wherever possible. We find that most participants believe that the information provided in the survey was unbiased. Only about 7% of participants chose one of the ‘biased’ or ‘very biased’ options when asked to rate the political bias of the survey information provided on a seven-point scale ranging from ‘very right-wing biased’ to ‘very left-wing biased’.

Importantly, to ensure truthful reporting, our main experimental outcomes were incentive-compatible. In particular, respondents who chose our platform solution demand outcome to learn about which platforms least contribute to placing companies’ advertisements on misinformation websites had to forgo receiving information on improving advertising performance. Additionally, our baseline information demand outcomes elicited before the information intervention were also incentive-compatible in that participants who opted for additional information would be followed up with, via email or an online information session.

These design choices are made to minimize demand effects on our main outcomes of interest. However, it is possible that these effects are still relevant, partially because participants may have an interest in ‘doing the right thing’ on a survey administered by an institution they have a connection with. We measure the relevance of potential demand effects using a survey question, mirroring the approach used for our consumer experiment. To measure potential differences in the respondents’ perceptions of the study across our treatment and control groups, we predict treatment status based on respondents’ open-ended text responses about the purpose of the study via a support vector machine classifier, keeping 75% of the sample for the training set and the remaining 25% as the test set. We find that the classifier performs only slightly worse than random chance in predicting treatment status (Supplementary Table 16), with results similar in magnitude to those in the consumer experiment. Therefore, although experimenter demand effects may still be present, these results suggest that such effects do not drive our findings.

We address the external validity of our findings by verifying the decision-making capacity of our respondents within their organizations and by examining the generalizability of our sample. We find that the vast majority of those whose job titles we verify (94%) serve in executive or managerial roles within their organizations. The regression estimates in Supplementary Tables 18 and 19 show that our results remain qualitatively and quantitatively similar after the exclusion of the small sample of individuals in non-executive and non-managerial roles. Moreover, the verified and self-reported decision-makers are similar across observable characteristics as reported in Supplementary Table 17, suggesting limited selection in our verification process. To examine the generalizability of our sample, we investigate their observable characteristics. As shown in Supplementary Fig. 5, our sample of participants represents a broad array of company sizes and experience levels at their current roles. Additionally, about 22% of the executives in our sample (and 25% of all our participants) are women, which is aligned with the 21% to 26% industry estimates of women in senior roles globally95,96.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.


