Let’s Play A Game: Choose Effort Optimally (CEO)

Motivation:

Organizations with a hierarchy of people often face the question of how much effort can be expected from employees, given the organizational structure. For example, consider the following tree.

[Figure: organizational hierarchy]

Imagine the root node is a Senior Manager of the organization, to whom the nodes in the next level (Managers) report. The leaf nodes are freshly recruited employees who have just finished grad school and joined the organization. Work arrives at the organization at a constant rate {\lambda}. All employees, including the Managers and the Senior Manager, can exert effort to finish the jobs: each node chooses its working rate {\lambda_i} (its effort). However, nodes higher up in the hierarchy also receive a passive benefit from the work of their subtree. In the figure, the blue nodes on the path shown get a passive benefit from the work done by the red node. Effort also has a cost. So a node that earns a larger passive benefit may optimize his/her payoff by exerting less effort.

The Game:

Let us assume there are N players connected in a tree. Tasks arrive according to a Poisson process at rate {\lambda}. Each node chooses its working rate {\lambda_i}, also a Poisson rate. Tasks are matched to the agents on an FCFS basis. It can be shown that the expected direct benefit a node receives for choosing effort {\lambda_i} is {\frac{R \cdot \lambda_i}{\sum_{j \in N} \lambda_j}}. But (s)he also incurs a cost of {c \cdot \lambda_i}, where c is a constant. In addition, the node receives some passive benefits. Hence the total expected payoff of node {i} has the form:

Total Payoff {= \frac{R \cdot \lambda_i}{\sum_{j \in N} \lambda_j} - c \cdot \lambda_i + } Passive Benefits.
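To make the expression concrete, here is a minimal Python sketch of the direct part of the payoff only (passive benefits are ignored for the moment). The values R = 10, c = 1, and the effort profile are made-up illustrations; the closed-form best response simply comes from setting the derivative of the direct payoff to zero.

# Direct part of the payoff only; passive benefits are ignored here.
# R, c, and the effort profile are illustrative values, not from the experiment.

def direct_payoff(lam_i, lam_others, R=10.0, c=1.0):
    """Expected direct payoff R*lam_i / sum(lam) - c*lam_i of one node."""
    total = lam_i + sum(lam_others)
    return 0.0 if total == 0 else R * lam_i / total - c * lam_i

def best_response(lam_others, R=10.0, c=1.0):
    """Maximiser of the direct payoff over lam_i >= 0.
    With L = sum(lam_others), the derivative R*L/(lam_i + L)**2 - c vanishes
    at lam_i = sqrt(R*L/c) - L, which we clip at zero."""
    L = sum(lam_others)
    return max(0.0, (R * L / c) ** 0.5 - L)

others = [1.0, 1.5, 0.5]                            # efforts of the other players
lam_star = best_response(others)
print(lam_star, direct_payoff(lam_star, others))    # roughly 2.48 and 2.05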

The passive benefit in this game is illustrated in the following figure. The nodes on the path above a node receive a geometrically diminishing benefit from the reward earned by the red node.

[Figure: payoff tree]

It can be shown that if each player greedily maximizes his/her own payoff by adjusting effort, the game reaches an equilibrium at which no player can increase his/her payoff by unilaterally changing effort.
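To illustrate this convergence, here is a hedged best-response simulation on a small tree. The post does not specify how the passive benefit decays or what the parameters are, so the per-level geometric factor delta, the 7-node tree, R, c, and the simple grid search below are all assumptions made purely for illustration.

# A sketch of best-response dynamics with geometrically discounted passive benefits.
# The tree, R, c, and the discount factor `delta` are illustrative assumptions.

import numpy as np

R, c, delta = 10.0, 1.0, 0.5
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}      # node 0 is the Senior Manager
n = 7

def edges_up(ancestor, node):
    """Edges from `node` up to `ancestor`, or None if `ancestor` is not above `node`."""
    d = 0
    while node is not None:
        if node == ancestor:
            return d
        node = parent.get(node)
        d += 1
    return None

def payoff(i, lam):
    """Direct benefit - cost + passive benefit discounted by delta per level."""
    total = lam.sum()
    if total == 0:
        return 0.0
    direct = R * lam[i] / total - c * lam[i]
    passive = sum(delta ** edges_up(i, j) * R * lam[j] / total
                  for j in range(n) if j != i and edges_up(i, j) is not None)
    return direct + passive

def best_response(i, lam, grid=np.linspace(0.0, 10.0, 401)):
    """Numerically maximise node i's payoff over a grid of effort levels."""
    trial = lam.copy()
    def value(x):
        trial[i] = x
        return payoff(i, trial)
    return max(grid, key=value)

lam = np.ones(n)                                    # everyone starts at effort 1
for sweep in range(100):
    old = lam.copy()
    for i in range(n):
        lam[i] = best_response(i, lam)
    if np.max(np.abs(lam - old)) < 1e-9:
        break
print("Approximate equilibrium efforts:", np.round(lam, 2))

In this toy instance the internal nodes should settle at lower efforts than the leaves, consistent with the passive-benefit intuition above.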

Imagine You are a Player:

To help you choose your strategy, we provide an online game portal that computes your expected payoff for your chosen effort, given the effort profile of the other players. All you need to do is set your effort to the level that maximizes your expected payoff. We will provide you with a username to log in (the URL will be sent to you by email), and you can play the game at your leisure. Once you log in, you will see a screen similar to the one below, where you can increase or decrease your effort and watch how your payoff changes. Ideally, you will stop at an effort level where your utility (payoff) is maximized.

[Figure: screenshot of the game portal]

We call this game ‘Choose Effort Optimally’ (CEO). We will keep the game open for a few days, so that you get enough chances, at different times, to update your effort and maximize your payoff. Remember that your optimal effort depends on the effort levels of the other players: if any of them changes his/her effort, your optimal effort may change too, and you will need to update it. For example, if you are a manager and the employees you manage increase their efforts, you may be able to work less, since you receive more passive benefit for free. So, over the period the game is live, check your current payoff periodically and update your effort accordingly.

Purpose of the game:

This is a human computation experiment in which we want to see how actual human behavior compares to the predicted behavior, and whether human players can figure out and converge to the equilibrium point. You will not be told your position in the tree; despite that, we expect you to converge to the right equilibrium effort level.

Ready to play the game? Let’s get started:

Enter your name and email in this spreadsheet. We will send you the URL and your username. Thanks for your interest in our game.


The Weekly ECL Meeting Schedule (Spring 2012)

We are starting a little early this Spring. The first meeting will be on Dec 28, 2011. We will keep Wednesdays, 7-8 PM, as the default slot. Here is a tentative schedule, which I’ll keep updating.

Date | Time | Speaker | Venue | Topic | Reference(s)
Dec 28, 2011 | 7 PM | Video lecture (Bobby Kleinberg) | ECL | Network Formation in the Presence of Contagious Risk | Blume et al., EC 2011
Jan 18, 2012 | 7 PM | Raghunandan M. A. | 225 | Sustaining Cooperation on Networks: An Analytical Study Based on Evolutionary Game Theory | Raghunandan & Subramanian, AAMAS 2012
Feb 1, 2012 | 7 PM | Satyanath Bhat | 254 | Myerson Auction | Myerson 1981
Feb 17, 2012 | 5 PM | Balakrishnan Narayanaswamy (Murali), IBM IRL | 252 | Resource Management in the Smart Grid |
TBA | 7 PM | Moon K. Chetry | ECL | Interdependent Valuation with Multi-dimensional Signals | Bikhchandani 2006
Mar 28, 2012 | 5 PM | Shweta Jain | ECL | Learning, Regret Minimization and Equilibria | Avrim Blum and Yishay Mansour
TBA | 7 PM | Swaprava Nath | ECL | Who’s Who in Networks. Wanted: The Key Player | Ballester et al., Econometrica 2006
TBA | 7 PM | Rohith D. Vallam | ECL | TBA |
TBA | 7 PM | TBA | ECL | TBA |
TBA | 7 PM | TBA | ECL | TBA |
Apr 4, 2012 | 7 PM | Swapnil Dhamal | ECL | TBA |

Creating html tables.
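As a side note, here is a small, purely illustrative Python sketch of generating such an HTML table from rows of text; this is not how the schedule above was actually produced.

# Illustrative only: turn rows of text into an HTML table string.
import html

def html_table(header, rows):
    """Return an HTML <table> with a header row and one <tr> per data row."""
    def tr(cells, tag):
        return "<tr>" + "".join(f"<{tag}>{html.escape(c)}</{tag}>" for c in cells) + "</tr>"
    body = "\n".join(tr(r, "td") for r in rows)
    return f"<table>\n{tr(header, 'th')}\n{body}\n</table>"

header = ["Date", "Time", "Speaker", "Venue", "Topic"]
rows = [["Dec 28, 2011", "7 PM", "Video lecture (Bobby Kleinberg)", "ECL",
         "Network Formation in the Presence of Contagious Risk"]]
print(html_table(header, rows))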

Conferences and Journals in E-Commerce

Keeping eyes open …

Conference | Deadline | Decision date | Conference dates | Remarks
AAAI | Jan 20 (abstracts); Jan 24 (papers) | March 27 | July 21-25, Toronto | Tier 1
IJCAI | | | August 3-9, 2013, Beijing |
GAMES | Feb 1 | | |
ACM EC | Feb 6 | | |
SIGKDD | Feb 10 | | |
SIGIR | Feb 6 (abstracts); Feb 13 (papers) | | |
ICALP | Feb 21 | | |
ICML | three cycles (see below) | | June 16-21, 2013, Atlanta | long procedure
COLT | Feb 24 | | |
IEEE CASE | Feb 25 | | |
SSCW | Feb 28 | | Aug 17-21, New Delhi |
ECAI | March 6 | | |
UAI | March 16 | | |
FOCS | April 1 | | |
ECML-PKDD | April 1 (abstracts); April 8 (papers) | | |
AMMA | | | | bi-annual
SAGT | May 1 | | |
NIPS | June 2 | | |
CIKM | June 15 | | |
ICDM | June 15 | | |
FSTTCS | July 1 | | |
SODA | July 5 (abstracts); July 12 (papers) | | |
WINE | Aug 1, 11:59 PM GMT | September 16 | December 9-12, 2012, Liverpool |
WSDM | Aug 1 (abstracts); Aug 8 (papers) | | |
ITCS | August 13 | | | earlier called ICS
TARK | September 3 | | |
AAMAS | Oct 8 (abstracts); Oct 12 (papers) | RB: Nov 27-29; Dec 20 | May 6-10, 2013, Univ. of Minnesota |
PAKDD | Oct 1 | | |
SDM | Oct 1 | | |
STOC | Nov 1 | | |
WWW 2012 | Nov 19 (abstracts); Nov 26 (papers) | February 8, 2013 | May 13-17, 2013, Rio de Janeiro |

ICML cycles:
  • Cycle I: paper submissions due October 1, 2012; author response November 1-7, 2012; author notification November 30, 2012; final version due January 7, 2013
  • Cycle II: paper submissions due December 15, 2012; author response January 21-25, 2013; author notification February 15, 2013; final version due March 15, 2013
  • Cycle III: paper submissions due February 15, 2013; author response March 15-22, 2013; author notification April 15, 2013; final version due May 8, 2013

Journals

  1. Dynamic Games and Applications (special issue, deadline March 2012)
  2. IEEE Transactions on Automation Science and Engineering (TASE)
  3. IEEE Transactions on Knowledge and Data Engineering (TKDE)
  4. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
  5. IEEE Journal on Selected Areas in Communications (JSAC)
  6. ACM Transactions on Intelligent Systems and Technology
  7. ACM Transactions on the Web
  8. ACM Transactions on Economics and Computation
  9. ACM Transactions on Internet Technologies
  10. Journal of Machine Learning Research
  11. Artificial Intelligence
  12. Journal of Artificial Intelligence Research (JAIR)
  13. Theoretical Computer Science
  14. Social Networks
  15. Games and Economic Behaviour
  16. International Journal of Game Theory
  17. Journal of Economic Theory
  18. Social Choice and Welfare
  19. Journal of Mathematical Economics
  20. Theoretical Economics
  21. Economics Letters
  22. International Game Theory Review
  23. International Journal of Mathematics, Game Theory and Algebra
  24. Journal of Game Theory
  25. Games, MDPI

On reading papers

There is plenty of literature on this grand question, which keeps a researcher afloat. For example, see Michael Mitzenmacher’s blog or Jason Eisner’s detailed description. However, reading and summarizing papers is a lot of work, and when talking about some result, one has to remember things. It may not be possible to remember the whole content of a paper, let alone the proofs or other details. So I guess it is better to remember only three things about any paper:

  1. The model.
  2. The main result; forget about the proof and its correctness. If it is published material, it has already been reviewed by many people whom you can trust.
  3. The shortcomings; this is important if one is interested in working further on the problem. And rest assured, EVERY paper has shortcomings, even if it claims otherwise.

It really makes the job concise, doesn’t it?

Some new papers on stochastic games uploaded to arXiv

A few papers on stochastic games have recently been uploaded to arXiv. The common author of all of them is Krishnendu Chatterjee.

1. Magnifying Lens Abstraction for Stochastic Games with Discounted and Long-run Average Objectives

Authors: Krishnendu Chatterjee, Luca de Alfaro, and Pritam Roy

Turn-based stochastic games and its important subclass Markov decision processes (MDPs) provide models for systems with both probabilistic and nondeterministic behaviors. We consider turn-based stochastic games with two classical quantitative objectives: discounted-sum and long-run average objectives. The game models and the quantitative objectives are widely used in probabilistic verification, planning, optimal inventory control, network protocol and performance analysis. Games and MDPs that model realistic systems often have very large state spaces, and probabilistic abstraction techniques are necessary to handle the state-space explosion. The commonly used full-abstraction techniques do not yield space-savings for systems that have many states with similar value, but does not necessarily have similar transition structure. A semi-abstraction technique, namely Magnifying-lens abstractions (MLA), that clusters states based on value only, disregarding differences in their transition relation was proposed for qualitative objectives (reachability and safety objectives). In this paper we extend the MLA technique to solve stochastic games with discounted-sum and long-run average objectives. We present the MLA technique based abstraction-refinement algorithm for stochastic games and MDPs with discounted-sum objectives. For long-run average objectives, our solution works for all MDPs and a sub-class of stochastic games where every state has the same value.

ArXiv Paper

2. Partial-Observation Stochastic Games: How to Win when Belief Fails

Authors: Krishnendu Chatterjee and Laurent Doyen

In two-player finite-state stochastic games of partial observation on graphs, in every state of the graph, the players simultaneously choose an action, and their joint actions determine a probability distribution over the successor states. We consider reachability objectives where player 1 tries to ensure a target state to be visited almost-surely or positively. On the basis of information, the game can be one-sided with either (a)player 1 or (b)player 2 having partial observation, or two-sided with both players having partial observation. On the basis of randomization (a)players may not be allowed to use randomization (pure strategies), or (b)may choose a probability distribution over actions but the actual random choice is not visible (actions invisible), or (c)may use full randomization. Our results for pure strategies are as follows: (1)For one-sided games with player 2 perfect observation we show that belief-based strategies are not sufficient, and present an exponential upper bound on memory both for almost-sure and positive winning strategies; we show that the problem of deciding the existence of almost-sure and positive winning strategies for player 1 is EXPTIME-complete and present symbolic algorithms that avoid the explicit exponential construction. (2)For one-sided games with player 1 perfect observation we show that non-elementary memory is both necessary and sufficient for both almost-sure and positive winning strategies. (3)We show that for the two-sided case finite memory strategies are sufficient for both positive and almost-sure winning. We establish the equivalence of the almost-sure winning problem for pure strategies with randomized strategies with actions invisible. Our equivalence result exhibit serious flaws in previous results in the literature: we show a non-elementary memory lower bound for almost-sure winning whereas an exponential upper bound was claimed.

ArXiv Paper

3. Bounded Rationality in Concurrent Parity Games

Authors: Krishnendu Chatterjee

We consider 2-player games played on a finite state space for infinite rounds. The games are concurrent: in each round, the two players choose their moves simultaneously; the current state and the moves determine the successor. We consider omega-regular winning conditions given as parity objectives. We consider the qualitative analysis problems: the computation of the almost-sure and limit-sure winning set of states, where player 1 can ensure to win with probability 1 and with probability arbitrarily close to 1, respectively. In general the almost-sure and limit-sure winning strategies require both infinite-memory and infinite-precision. We study the bounded-rationality problem for qualitative analysis of concurrent parity games, where the strategy set of player 1 is restricted to bounded-resource strategies. In terms of precision, strategies can be deterministic, uniform, finite-precision or infinite-precision; and in terms of memory, strategies can be memoryless, finite-memory or infinite-memory. We present a precise and complete characterization of the qualitative winning sets for all combinations of classes of strategies. In particular, we show that uniform memoryless strategies are as powerful as finite-precision infinite-memory strategies, and infinite-precision memoryless strategies are as powerful as infinite-precision finite-memory strategies. We show that the winning sets can be computed in {O(n^{2d+3})} time, where n is the size of the game and 2d is the number of priorities, and our algorithms are symbolic. The membership problem of whether a state belongs to a winning set can be decided in NP ∩ coNP. While this complexity is the same as for the simpler class of turn-based games, where in each state only one of the players has a choice of moves, our algorithms, that are obtained by characterization of the winning sets as mu-calculus formulas, are considerably more involved.

ArXiv Paper

Some very interesting papers in SAGT 2011

Though there are quite a lot of papers covering different areas of AGT, the following three seem particularly interesting to me, as each either opens up a new area or explores a classical but important one.

Krzysztof Apt and Evangelos Markakis. Diffusion in Social Networks with Competing Products

Abstract: We introduce a new threshold model of social networks, in which the nodes influenced by their neighbours can adopt one out of several alternatives.  We identify a class of graphs that allow us to characterize social networks for which adoption of a product by the whole network is possible (respectively necessary) and the ones for which a unique outcome is guaranteed. We also provide a simple polynomial time algorithm that allows us to determine whether a graph belongs to this class. This directly yields polynomial time algorithms that allows us to determine whether a given social network satisfies one of the above properties. We also study algorithmic questions concerning networks without unique outcomes. In particular we show that the problem of computing the minimum possible spread of a product is NP-hard to approximate with an approximation ratio better than $\Omega(n)$, in contrast to the maximum spread, which is efficiently computable. We then move on to questions regarding the behavior of a node with respect to adopting some (resp. a given) product. We show that the problem of determining whether a given node has to adopt some (resp.~a given) product in all final networks is co-NP-complete.

Asaph Arnon and Yishay Mansour. Repeated Budgeted Second Price Ad Auction

Abstract: Our main goal is to abstract existing repeated sponsored search ad auction mechanisms which include budgets, and study their equilibrium and dynamics. Our abstraction has multiple agents bidding repeatedly for multiple identical items (such as impressions in an ad auction). The agents are budget limited and have a value per item. We abstract the repeated interaction as a one-shot game, which we call “budget auction”, where agents submit a bid and a budget, and then items are sold by a sequential second price auction. Once an agent exhausts its budget it does not participate in the proceeding auctions. Our main result is that if agents bid conservatively (never bid above their value) then there always exists a pure Nash equilibrium. We also study simple dynamics of repeated budget auctions, showing their convergence to a Nash equilibrium for two agents and for multiple agents with identical budgets.

Navendu Jain, Ishai Menache, Joseph Naor and Jonathan Yaniv. A Truthful Mechanism for Value-Based Scheduling in Cloud Computing

Abstract: Cloud computing provides an attractive computation platform, in which computing resources (e.g., virtual machines, storage capacity) are rented to end-users under a utility pricing model. The most common pricing offering is a pay-as-you-go scheme, in which users are charged a fixed price per unit resource per hour. While this scheme is simple to implement, it does not support value-based scheduling, where users assign a value to their job and the cloud schedules to maximize aggregate value (or prioritize competing jobs) under job demand and capacity constraints.  In this paper, we introduce a novel pricing approach for batch jobs (e.g., image processing, financial analytics, scientific simulations). In our economic model, users submit jobs with a value function that specifies willingness to pay as function of job due dates. Focusing on social-welfare as the system objective (especially relevant for private or in-house clouds), we construct a resource allocation algorithm which obtains a (small) constant-factor approximation of maximum aggregate value, assuming that user valuations are known. Based on this algorithm, we then design an efficient truthful-in-expectation mechanism, which significantly improves the running complexity of black-box reduction mechanisms that can be applied to the problem, thereby facilitating its implementation in real systems.

The complete list of accepted papers is here.