Staffing, Routing, and Payment to Trade off Speed and Quality in Large Service Systems

Periodicals (2019)

Abstract
Three fundamental questions when operating a service system are (1) how many employees to staff, (2) how to route work to them, and (3) how to pay them. These questions have often been studied separately: the queueing and network-design literature that considers staffing and workload routing generally ignores payment, and the literature on employee payment generally ignores staffing and routing. In "Staffing, Routing, and Payment to Trade Off Speed and Quality in Large Service Systems," D. Zhan and A. R. Ward study how these three decisions jointly affect system throughput and the quality of the delivered service when employees choose their service speed to maximize their own expected utility. They find that the system manager should first solve a joint optimization problem to determine the staffing level, the routing policy, and the service speed, and then design a payment contract under which the employees work at the desired service speed.

Most common queueing models used for service-system design assume that the servers work at fixed (possibly heterogeneous) rates. However, real-life service systems are staffed by people, and people may change their service speed in response to incentives. The difficulty is that the resulting service speed is jointly affected by the staffing, routing, and payment decisions. Our objective in this paper is to find a joint staffing, routing, and payment policy that induces optimal service-system performance. We do this under the assumption that there is a trade-off between service speed and quality and that employees are paid based on both. The employees selfishly choose their service speed to maximize their own expected utility (which depends on the staffing level through their busy time). The endogenous service-rate assumption leads to a centralized control problem in which the system manager jointly optimizes over the staffing, routing, and service rate. By solving the centralized control problem under fluid scaling, we find four economically optimal operating regimes: critically loaded, efficiency driven, quality driven, and intentional idling (in which there is simultaneous customer abandonment and server idling). We then show that a simple piece-rate payment scheme solves the associated decentralized control problem under fluid scaling.
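To make the fluid-scale trade-off concrete, the following is a minimal Python sketch of a centralized search over staffing and service rate in the spirit of the model the abstract describes. The functional forms for quality, effort cost, revenue, and wages, and the constants LAMBDA, WAGE, and VALUE, are illustrative assumptions rather than the paper's specification; the piece-rate contract is only indicated in a closing comment.

```python
import numpy as np

# Hypothetical fluid-scale sketch of the joint staffing/service-rate problem
# described in the abstract. All functional forms and constants below are
# illustrative assumptions, not the paper's actual model.

LAMBDA = 100.0      # fluid-scaled arrival rate (assumed)
WAGE = 1.0          # staffing cost per server (assumed)
VALUE = 5.0         # revenue per served customer, before quality discount (assumed)

def quality(mu):
    """Assumed speed-quality trade-off: quality decreases in the service rate."""
    return 1.0 / (1.0 + 0.1 * mu)

def effort_cost(mu):
    """Assumed convex effort cost of working at rate mu."""
    return 0.05 * mu ** 2

def fluid_performance(n, mu):
    """Fluid-scale throughput, abandonment, and busy fraction for n servers at rate mu."""
    capacity = n * mu
    throughput = min(LAMBDA, capacity)          # customers served per unit time
    abandonment = max(LAMBDA - capacity, 0.0)   # customers lost when overloaded
    busy = min(LAMBDA / capacity, 1.0)          # long-run busy fraction per server
    return throughput, abandonment, busy

def manager_profit(n, mu):
    """Centralized objective: quality-adjusted revenue minus staffing and effort costs."""
    throughput, _, busy = fluid_performance(n, mu)
    return VALUE * quality(mu) * throughput - WAGE * n - n * busy * effort_cost(mu)

# Brute-force search over a coarse grid of staffing levels and service rates.
best = max(
    ((n, mu, manager_profit(n, mu))
     for n in range(1, 201)
     for mu in np.linspace(0.1, 5.0, 50)),
    key=lambda t: t[2],
)
n_star, mu_star, profit_star = best
print(f"staffing={n_star}, service rate={mu_star:.2f}, profit={profit_star:.1f}")

# A piece-rate contract would then be designed so that a server maximizing
# p * mu * busy - effort_cost(mu) * busy chooses a rate close to mu_star.
```

Depending on the cost and quality parameters, the maximizer lands in different regimes (capacity matching demand, deliberate overload with abandonment, or excess capacity), which loosely mirrors the four operating regimes identified in the paper.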
Keywords
service operations,queueing games,fluid limits,Erlang-A,strategic servers