February 12, 2017 at 8:35 pm #115
I thought Hogeye might enjoy this paper by Peter Leeson, given his math background. I was surprised that a seemingly simple idea took so much math to formalize.
It’s titled “Social Distance and Self-Enforcing Exchange”, and discusses one way people can scale up self-enforcing social institutions. The idea is that “cheaters” have a “hit-and-run” style of social interaction: once they exploit someone, their victim will no longer cooperate with them. If they rely on an exploitative strategy to achieve their ends, then, it won’t be beneficial for them to have long-term relationships with others; long-term relationships are only beneficial if you plan to cooperate with someone.
Since different people discount the value of future gains relative to present gains to different degrees, those who favor short-term gain will be more likely to exploit (because they don’t value the extra long-term gain from cooperation highly enough), while those who favor long-term gain will be more likely to cooperate.
One can thus weed out the cheaters from the cooperators by figuring out who values short-term gain more than long-term gain and vice versa. One way to do this is to require those one interacts with to endure some sort of one-time cost. If they’re going to cooperate over a long period, and they value long-term gains, then they can gain enough to make the one-time cost worth paying; but if they plan to exploit someone once and leave, it may not be worth it.
That someone chooses to endure a short-term cost for long-term gain indicates, then, that they’re more likely to cooperate. Thus, one can often tell before interacting with an agent whether or not they are trustworthy, which allows people to interact only with those who won’t exploit them, without having to wait to be exploited to learn whom to trust.
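The screening logic above can be sketched in a few lines with hypothetical numbers (the function names and payoff values are illustrative, not from Leeson’s paper): a patient agent who discounts the future only lightly finds an infinite stream of cooperation gains worth the one-time entry cost, while an impatient agent, or one planning a single act of cheating, does not.

```python
# Minimal sketch of the screening condition, with made-up numbers.
# An agent pays a one-time cost `cost` to enter a relationship, then either
# cooperates (gain g per period forever, discounted by delta) or
# cheats once (one-shot gain d, after which the relationship ends).

def present_value_cooperate(g, delta, cost):
    # Discounted sum g + g*delta + g*delta^2 + ... = g / (1 - delta)
    return g / (1 - delta) - cost

def payoff_cheat(d, cost):
    # A cheater gets one payoff, then is shunned.
    return d - cost

# A patient agent (high delta) finds cooperation worth the entry cost:
patient = present_value_cooperate(g=1.0, delta=0.9, cost=5.0)    # ~10 - 5 = positive
# An impatient agent (low delta) does not:
impatient = present_value_cooperate(g=1.0, delta=0.5, cost=5.0)  # 2 - 5 = negative
# A would-be cheater's one-shot gain doesn't cover the cost either:
cheater = payoff_cheat(d=3.0, cost=5.0)                          # 3 - 5 = negative

print(patient > 0, impatient > 0, cheater > 0)
```

Only agents who value the long run will volunteer to pay the cost, so willingness to pay it acts as a credible signal of trustworthiness.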
Anyway, I thought people here might enjoy the read. Professor Leeson has many other papers available to read for free on his website as well, for those interested.
February 19, 2017 at 5:04 pm #125
Very interesting paper! It contains much that supports David Friedman’s competing PDA model. It would also seem to imply that xenophobic enclaves (such as racist groups) would likely be relatively poorer than tolerant communities, due to their extraordinarily high signaling costs. A simple example of signaling would be putting money in escrow in advance of trading, so that the sunk cost is greater than the one-time payoff for cheating.
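The escrow example reduces to a one-line comparison; here is a tiny sketch with hypothetical amounts (the figures are illustrative, not from the paper):

```python
# Hypothetical escrow screening: a trader deposits `escrow` before trading;
# cheating once yields `cheat_payoff` but forfeits the deposit, so cheating
# is unprofitable whenever escrow > cheat_payoff.
escrow = 100.0       # deposit required before trade
cheat_payoff = 60.0  # one-time gain from defecting

cheating_net = cheat_payoff - escrow  # negative: cheating loses money
print(cheating_net < 0)               # the escrow deters one-shot cheating
```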
Related to PDAs, the fact that an outsider has successfully applied to be a client (the small in-group in the paper being the PDA and its clients) means that the PDA is satisfied that the client’s expected fees are greater than the expected costs of people suing him for cheating. (Otherwise the PDA would decline membership to a customer likely to cost it money.) The PDA is loosely equivalent to the African “land priest” or headman in the paper’s examples.
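That admission rule is just an expected-value comparison; a minimal sketch, with hypothetical fee and dispute figures of my own invention:

```python
# Hypothetical PDA admission rule: accept a client only if expected fees
# exceed the expected liability from disputes the client is likely to cause.
def admit(annual_fee, dispute_prob, dispute_cost):
    expected_liability = dispute_prob * dispute_cost
    return annual_fee > expected_liability

# A low-risk applicant is accepted (500 > 0.05 * 2000 = 100):
print(admit(annual_fee=500.0, dispute_prob=0.05, dispute_cost=2000.0))  # True
# A high-risk applicant is turned away (500 < 0.40 * 2000 = 800):
print(admit(annual_fee=500.0, dispute_prob=0.40, dispute_cost=2000.0))  # False
```

So successful admission itself signals to third parties that the PDA expects the client to behave.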
I have been on Facebook arguing the anarchism without adjectives case to both ancap and ancom sectarians. This paper gives insight on how and why an anarcho-communist enclave and an anarcho-capitalist enclave may engage in mutually beneficial trade. Perhaps part of this would involve banishing the sectarians, since those assholes constantly signal cheating behavior.
February 21, 2017 at 7:13 pm #130
Blockchain technology allows virtually free public signaling, so it is a force for freedom.
February 26, 2017 at 3:01 pm #142
Good points, Hogeye! I hadn’t thought about the disadvantages xenophobic groups would have, and I had been thinking more of how signaling would affect relations between individuals than between PDAs. But perhaps this could give ancap and ancom groups a way to cooperate, if some members of each group could act, in a sense, as go-betweens by taking on characteristics of both groups.
I shall have to think about it a bit more.