Research Associate, Duke University
Hi, I'm Pëllumb Reshidi. I earned a PhD in Economics from Princeton University in 2022 and spent the last two years in the Economics Department at Duke University.
I will be on the 2024-2025 job market.
A manufacturer seeks to license a product to downstream competitors with unknown productivities. She can design a mechanism to allocate licenses to one or multiple competitors. We identify the revenue-maximizing mechanism and show it can be implemented through an interval auction: the highest bidder is licensed exclusively if her bid exceeds the others' by a sufficient margin, and multiple bidders are licensed otherwise. This mechanism does not allocate efficiently, and we characterize the distributions of buyer valuations that lead to over- or under-licensing. If buyers arrive over time, the seller may delay licensing, and we show that the seller commits to exclusive contracts only if she is less patient than the buyers.
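A minimal sketch of the allocation rule such an interval auction induces; the reserve price and exclusivity margin here are hypothetical parameters introduced for illustration (in the paper they would follow from the distribution of valuations), not values from the paper.

```python
# Stylized sketch of an interval-auction allocation rule.
# `reserve` and `margin` are hypothetical illustration parameters.

def interval_auction(bids, reserve=1.0, margin=0.5):
    """Top bidder wins exclusively when her bid exceeds the runner-up's
    by more than `margin`; otherwise every bidder above `reserve`
    receives a (non-exclusive) license."""
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return []                       # no license is sold
    eligible.sort(reverse=True)
    top_bid, top_bidder = eligible[0]
    runner_up = eligible[1][0] if len(eligible) > 1 else float("-inf")
    if top_bid - runner_up > margin:
        return [top_bidder]             # exclusive license
    return [i for _, i in eligible]     # multiple licenses

print(interval_auction([3.0, 1.2, 0.8]))  # [0]    -> exclusive
print(interval_auction([3.0, 2.8, 0.8]))  # [0, 1] -> shared
```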
We study asymptotic learning when the decision maker is ambiguous about the precision of her information sources. She aims to estimate a state and evaluates outcomes according to the worst-case scenario. Under prior-by-prior updating, we characterize the set of asymptotic posteriors the decision maker entertains, which consists of a continuum of degenerate distributions over an interval. Moreover, her asymptotic estimate of the state is generically incorrect. We show that even a small amount of ambiguity may lead to large estimation errors and illustrate how an econometrician who learns from observing others' actions may over- or underreact to information.
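To illustrate the prior-by-prior updating the abstract refers to, here is a minimal sketch with Gaussian signals in which the decision maker entertains an interval of candidate noise variances and updates each candidate by Bayes' rule separately, producing a set of posterior means rather than a single one. The Gaussian setup and all numbers are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Prior-by-prior updating: one Bayesian update per candidate noise
# variance, yielding a *set* of posteriors (illustrative only).

rng = np.random.default_rng(0)
theta = 1.0                          # true state
x = theta + rng.normal(0, 1.0, 50)   # signals with true noise sd = 1

mu0, tau2 = 0.0, 4.0                 # common N(mu0, tau2) prior on theta
post_means = []
for s2 in np.linspace(0.25, 4.0, 9):       # candidate noise variances
    prec = len(x) / s2 + 1 / tau2          # posterior precision
    mean = (x.sum() / s2 + mu0 / tau2) / prec
    post_means.append(mean)

print(f"posterior means span [{min(post_means):.3f}, {max(post_means):.3f}]")
```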
Many committees (juries, political task forces, etc.) spend time gathering costly information before reaching a decision. We report results from lab experiments on such dynamic information-collection processes, modeled as sequential hypothesis testing. We consider decisions made by individuals and by groups and compare how voting rules affect outcomes. Several insights emerge. First, average decision accuracies approximate those predicted theoretically, but accuracies decline over time: participants display non-stationary behavior. Second, groups behave markedly differently from individuals, with majority rule yielding faster but less accurate decisions. Accordingly, welfare is higher when groups collect sequential information under unanimity.
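A stylized simulation of the majority-versus-unanimity comparison, assuming each member accumulates a private log-likelihood ratio and the group stops once a decisive coalition (a majority, or everyone, depending on the rule) favors the same option. This toy parameterization is my own, not the experimental design.

```python
import numpy as np

rng = np.random.default_rng(1)

def group_run(n_members=3, drift=0.2, sd=1.0, threshold=2.0, rule="majority"):
    """Each member accumulates private evidence; the group stops when
    `need` members' evidence crosses the threshold on the same side."""
    llr = np.zeros(n_members)
    need = n_members // 2 + 1 if rule == "majority" else n_members
    t = 0
    while True:
        t += 1
        llr += rng.normal(drift, sd, n_members)   # true state favors +
        up = (llr >= threshold).sum()
        down = (llr <= -threshold).sum()
        if up >= need or down >= need:
            return t, bool(up >= need)            # (decision time, correct?)

for rule in ("majority", "unanimity"):
    res = [group_run(rule=rule) for _ in range(2000)]
    times, correct = zip(*res)
    print(f"{rule:9s} mean time {np.mean(times):5.1f}  accuracy {np.mean(correct):.3f}")
```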
We study the underlying reasons why individuals fail to adhere to Bayes' rule and decompose this departure into three elements: (i) task complexity, (ii) information structure, and (iii) timing of information release. In a series of controlled experiments, we systematically vary all three elements and quantify the magnitude of each effect. We link task complexity to the degree of non-linearity embedded in Bayesian updating, explore this link experimentally, and find empirical support for it.
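The non-linearity in question is visible in the simplest binary-state case: equal steps in the prior produce unequal steps in the posterior. A short illustration (the likelihood values are arbitrary):

```python
# Bayes' rule for a binary state: the posterior is a nonlinear
# function of the prior p, given fixed signal likelihoods.

def posterior(p, lik_true=0.7, lik_false=0.3):
    """P(state | signal) for prior p and the given likelihoods."""
    return p * lik_true / (p * lik_true + (1 - p) * lik_false)

# Equal prior steps of 0.2 produce unequal posterior steps:
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"prior {p:.1f} -> posterior {posterior(p):.3f}")
```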
In complex environments, where carrying out Bayesian updating is computationally infeasible, the DeGroot model has emerged as a reliable and heavily utilized alternative. An assumption present in practically all versions of this model is that agents receive information simultaneously. We relax this assumption by allowing for the sequential arrival of information. We find that final beliefs can be altered by varying only the sequencing of information, keeping its content unchanged. In this setup the "wisdom of crowds" typically fails: as the number of group members grows, the sequential arrival of information compromises the group's beliefs, and in all but special cases beliefs converge away from the truth. We identify the optimal and pessimal information-release sequences, which yield the highest and lowest attainable consensus, respectively. In doing so, we bound the variation in final beliefs that can be attributed to the variation in the sequencing of information. Groups in which all members are equally influential turn out to be the most susceptible to information sequencing.
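One stylized way to encode sequential arrival in a DeGroot model: agents average neighbors' beliefs each period, and an arriving signal is averaged into the recipient's current belief with weight lam. Under this assumed timing convention (not necessarily the paper's exact protocol), delivering the same two signals in opposite orders yields different consensus beliefs:

```python
import numpy as np

# DeGroot updating with sequentially arriving signals.  Two equally
# influential agents average beliefs via the row-stochastic matrix W.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
lam = 0.5            # weight a recipient puts on an arriving signal

def consensus(arrivals, T=100, n=2):
    """arrivals: list of (period, agent, signal)."""
    b = np.zeros(n)
    for t in range(1, T + 1):
        b = W @ b                         # DeGroot averaging step
        for (when, who, s) in arrivals:
            if when == t:                 # signal averaged into belief
                b[who] = (1 - lam) * b[who] + lam * s
    return b

# Same two signals, same recipient, delivered in opposite orders:
print(consensus([(1, 0, 0.0), (2, 0, 1.0)]))   # -> 0.25   (late signal dominates)
print(consensus([(1, 0, 1.0), (2, 0, 0.0)]))   # -> 0.1875
```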
We test whether the order and timing of information arrival affect beliefs formed within a group. In a lab experiment, participants estimate a parameter of interest using a common signal, a private signal, and the past guesses of group members. By varying the sequencing of information arrival, we find that, at odds with the Bayesian model, the order and timing of information affect final beliefs even when the information content is unchanged. Although behavior is non-Bayesian, it is robustly predicted by a model relying on simple heuristics. We explore ways in which the network structure and the timing of information help alleviate correlation neglect. Finally, we document an important heuristic: the influence of private information on participants' actions is time-independent.
Using a dataset of 81 million detailing decisions over an 11-year period, we examine how firms decide which doctors to target with drug advertising through sales representatives. Specifically, we analyze drug characteristics (such as whether the drug requires a repeat prescription and whether it is fully covered by insurance) and market factors, such as the number of competing firms, to understand firms' strategic choices. A central focus of our analysis is assessing whether firms perceive their competitors' targeting decisions as strategic complements or substitutes. Our analysis covers 187 markets, ranging from highly competitive environments with over 12 competing firms to monopolistic settings, making this the first comprehensive study to span such a wide spectrum of markets.
A large empirical literature on the timing of binary choices documents that quicker decisions are often more accurate, even when subjects know the quality of both available options. This evidence suggests that individuals lower their decision standards over time, at odds with the classic sequential testing model, in which optimal standards are time-independent. We use a novel approximation technique to show that incorporating risk aversion can account for time-dependent standards, and we provide sufficient conditions under which standards decrease for a family of utility functions. Our technique sidesteps many of the difficulties of solving non-stationary optimal stopping problems and allows us to partially characterize the optimal boundaries.
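The benchmark the abstract contrasts with can be checked by simulation: in a stationary sequential test with constant thresholds on the log-likelihood ratio, accuracy conditional on decision time is flat, so "quicker decisions are more accurate" requires declining standards. Parameter values below are illustrative.

```python
import numpy as np

# Classic stationary sequential test: stop when the accumulated
# log-likelihood ratio exits (-A, A).  With constant thresholds,
# accuracy conditional on decision time is approximately flat.

rng = np.random.default_rng(2)
drift, sd, A = 0.2, 1.0, 2.0
times, correct = [], []
for _ in range(20000):
    llr, t = 0.0, 0
    while abs(llr) < A:
        t += 1
        llr += rng.normal(drift, sd)   # true state favors the + option
    times.append(t)
    correct.append(llr >= A)

times, correct = np.array(times), np.array(correct)
for lo, hi in [(1, 5), (6, 10), (11, 20), (21, 1000)]:
    mask = (times >= lo) & (times <= hi)
    print(f"t in [{lo:>2},{hi:>4}]: accuracy {correct[mask].mean():.3f}")
```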