2007
Dynamic contracting with multiple agents is a classical decentralized decision-making problem with asymmetric information. In this paper, we extend the single-agent dynamic incentive contract model in continuous time to a multi-agent scheme over a finite horizon and allow the terminal reward to depend on the history of actions and incentives. We first derive a set of sufficient conditions for the existence of optimal contracts in the most general setting, together with conditions under which they form a Nash equilibrium. We then show that the principal's problem can be converted into solving a Hamilton-Jacobi-Bellman (HJB) equation subject to a static Nash-equilibrium constraint. Finally, we provide a framework for solving this problem via partial differential equations (PDEs) derived from backward stochastic differential equations (BSDEs).
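As a rough illustration of this setup (a schematic sketch only; the drift $b$, volatility $\sigma$, costs $c^i$, and utilities $U^i$ below are generic placeholders, not the paper's notation), suppose $n$ agents jointly control the drift of an output process

\[ dX_t = b(t, X_t, a^1_t, \dots, a^n_t)\, dt + \sigma(t, X_t)\, dW_t, \]

and agent $i$, offered a payment $\xi^i$, maximizes $\mathbb{E}\big[U^i(\xi^i) - \int_0^T c^i(t, a^i_t)\, dt\big]$ over efforts $a^i$. A contract profile $(\xi^1, \dots, \xi^n)$ is incentive compatible when the induced efforts form a Nash equilibrium, i.e., no agent gains by deviating unilaterally; the principal then optimizes over such profiles, which is what produces an HJB equation coupled with a static Nash-equilibrium condition.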
Mathematics
Multiagent incentive contracts are advanced techniques for solving decentralized decision-making problems with asymmetric information. The principal designs contracts that incentivize non-cooperating agents to act in the principal's interest. Because of the asymmetric information, the principal must balance efficiency loss against the security needed to retain the agents. We prove existence conditions for optimality and uniqueness conditions for computational tractability. The coupled principal-agent problems are converted into a Hamilton–Jacobi–Bellman equation with equilibrium constraints. Extending the incentive contract to a multiagent setting with history-dependent terminal conditions opens the door to new applications in corporate finance, institutional design, and operations research.
2020
This paper investigates an optimal dynamic incentive contract between a risk-averse principal (a system operator) and multiple risk-averse agents (subsystems) with independent local controllers in continuous-time controlled Markov processes, which can represent a variety of cyber-physical systems. The principal's incentive design and the agents' decision-making under an asymmetric information structure are known as principal-agent (PA) problems in economics. However, the standard economic framework cannot be applied directly to realistic control systems, including large-scale cyber-physical systems and complex networked systems, because some of its assumptions are unrealistic from an engineering perspective. In this paper, using a constructive approach based on techniques from classical stochastic control theory, we propose and solve a novel dynamic control/incentive synthesis for the PA problem under moral hazard.
European Journal of Political Economy, 1989
We formulate and solve a class of three-agent incentive decision problems with strict hierarchy and decentralized information.
2005
We consider a continuous-time setting, in which the agent can control both the drift and the volatility of the underlying process. The principal can observe the agent's action and can offer payment at a continuous rate, as well as a bulk payment at the end of the fixed time horizon. In examples, we show that if the principal and the agent have the same CRRA utility, or they both have (possibly different) CARA utilities, the optimal contract is (ex-post) linear; if they have different CRRA utilities, the optimal contract is nonlinear, and can be of the call option type. We use martingale/duality methods, which, in the general case, lead to the optimal contract as a fixed point of a functional that connects the agent's and the principal's utility maximization problems.
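To see why CARA utilities produce linear contracts, a one-period sketch is instructive (this static normal-output example is illustrative and simpler than the paper's continuous-time model; the symbols $\gamma_A$, $k$, $\alpha$, $\beta$ are placeholders). With output $X \sim \mathcal{N}(a, \sigma^2)$, a linear contract $C = \alpha + \beta X$, and agent utility $-e^{-\gamma_A x}$ net of an effort cost $k(a)$, the normal moment-generating function gives the certainty equivalent

\[ \mathrm{CE}(a) = \alpha + \beta a - k(a) - \tfrac{1}{2}\, \gamma_A \beta^2 \sigma^2, \]

so the agent's effort choice depends only on the slope $\beta$, while the intercept $\alpha$ merely splits the surplus. This separation of incentives from risk sharing is the mechanism behind the (ex-post) linear optimal contracts in the CARA case.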
For a non-cooperative m-person differential game, the value functions of the various players satisfy a system of Hamilton-Jacobi-Bellman equations. Nash equilibrium solutions in feedback form can be obtained by studying a related system of PDEs. A new approach proposed in this paper allows one to construct the feedback optimal controls $u^1(x), \ldots, u^m(x)$ and cost functions $J^i(t, x_0)$, $i = 1, \ldots, m$, directly, i.e., without any reference to the corresponding Hamilton-Jacobi-Bellman equations.
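Schematically (with generic dynamics $f$, running costs $L^i$, and terminal costs $g^i$ standing in for the paper's notation), the system in question reads, for $i = 1, \dots, m$,

\[ \partial_t V^i + \nabla_x V^i \cdot f\big(x, u^{1,*}, \dots, u^{m,*}\big) + L^i\big(x, u^{1,*}, \dots, u^{m,*}\big) = 0, \qquad V^i(T, x) = g^i(x), \]

where each feedback control $u^{i,*}(x, \nabla_x V^i)$ optimizes player $i$'s Hamiltonian given the other players' controls; the coupling through the $u^{j,*}$ is what makes the system hard to solve, and what the direct construction avoids.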
SIAM Journal on Control and Optimization
We consider a general formulation of the random horizon principal-agent problem with a continuous payment and a lump-sum payment at termination. In the European version of the problem, the random horizon is chosen solely by the principal, with no possible action from the agent other than exerting effort on the dynamics of the output process. We also consider the American version of the contract, where the agent can also quit by optimally choosing the termination time of the contract. Our main result reduces such non-zero-sum stochastic differential games to appropriate stochastic control problems, which may be solved by standard methods of stochastic control theory. This reduction is obtained by following the approach of Sannikov [22] and its subsequent developments. We first introduce an appropriate class of contracts for which the agent's optimal effort is immediately characterized by the standard verification argument of stochastic control theory. We then show that this class of contracts is dense in an appropriate sense, so that optimizing over this restricted family of contracts entails no loss of generality. The result relies on a recent well-posedness result for random horizon second-order backward SDEs.
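A minimal sketch of the verification step, in the simplest drift-control case (risk-neutral agent, no discounting; $Z$, $k$, and $H$ are illustrative notation, not the paper's): offer the terminal payment

\[ \xi = Y_0 + \int_0^T Z_t\, dX_t - \int_0^T H(Z_t)\, dt, \qquad H(z) := \sup_a \big(z a - k(a)\big), \]

where $dX_t = a_t\, dt + \sigma\, dW_t$ and $k$ is the effort cost. The agent's expected payoff is $Y_0 + \mathbb{E}\big[\int_0^T (Z_t a_t - k(a_t) - H(Z_t))\, dt\big] \le Y_0$, with equality exactly when $a_t$ attains the supremum, so the optimal effort is characterized pointwise from $Z$; a density result then lets the principal optimize over $(Y_0, Z)$ without loss of generality.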
SSRN Electronic Journal, 2000
A recursive dynamic agency model is developed for situations where the state of nature follows a Markov process. The repeated agency model is a special case. It is found that the optimal effort depends not only on current performance but also on past performances, and the disparity between current and past performances is a crucial determinant of the optimal contract. In a special case where both the principal and the agent are risk neutral, the first best is achieved by a semi-linear contract. In another special case, a repeated version of the model, the first best is again achieved when the discount rate converges to zero. For the general model, a computing algorithm is developed, which can be implemented in MathCAD to find the solution numerically.
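The risk-neutral benchmark can be seen in a one-period sketch (illustrative only, not the paper's recursive Markov model; $s$, $F$, and $k$ are placeholder notation): if the principal sells the output to the agent via $s(x) = x - F$ for a fixed fee $F$, the agent maximizes $\mathbb{E}[x \mid a] - F - k(a)$ and so chooses the surplus-maximizing effort, while $F$ merely transfers surplus to the principal; the semi-linear contract above plays the analogous role when the state of nature follows a Markov process.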
European Journal of Operational Research, 1999
Journal of Optimization Theory and Applications, 2012
International Series in Operations Research & Management Science, 2014
RePEc: Research Papers in Economics, 2010
arXiv: Optimization and Control, 2020
Journal of Mathematical Sciences, 2008
Eprint Arxiv 0806 0240, 2008
arXiv (Cornell University), 2013
IEEE Transactions on Automatic Control, 2017
Automatica, 1985
Applied Mathematics & Optimization, 1993
arXiv (Cornell University), 2013
Journal of Differential Equations, 2006
Frontiers of Dynamic Games, 2018
American Economic Journal: Microeconomics, 2015
The Fascination of Probability, Statistics and their Applications, 2015