
Commit daf0c1d

Merge pull request #866 from janga1997/SelfSteem
Add new Strategy SelfSteem
2 parents 094e839 + 90b8bd8 commit daf0c1d

File tree: 6 files changed, +98 −2 lines

axelrod/strategies/_strategies.py

Lines changed: 2 additions & 0 deletions

@@ -67,6 +67,7 @@
     Retaliate, Retaliate2, Retaliate3, LimitedRetaliate, LimitedRetaliate2,
     LimitedRetaliate3)
 from .sequence_player import SequencePlayer, ThueMorse, ThueMorseInverse
+from .selfsteem import SelfSteem
 from .shortmem import ShortMem
 from .titfortat import (
     TitForTat, TitFor2Tats, TwoTitsForTat, Bully, SneakyTitForTat,
@@ -210,6 +211,7 @@
     RevisedDowning,
     Ripoff,
     RiskyQLearner,
+    SelfSteem,
     ShortMem,
     Shubik,
     SlowTitForTwoTats,

axelrod/strategies/selfsteem.py

Lines changed: 53 additions & 0 deletions

from axelrod.actions import Action, Actions
from axelrod.player import Player
from axelrod.random_ import random_choice

from math import pi, sin

C, D = Actions.C, Actions.D


class SelfSteem(Player):
    """
    This strategy is based on the feeling with the same name.
    It is modeled on the sine curve f = sin(2 * pi * n / 10), which varies
    with the current iteration n.

    If f > 0.95, the 'ego' of the algorithm is inflated: it always defects.
    If 0.95 > abs(f) > 0.3, it behaves rationally and follows the Tit For Tat
    algorithm.
    If 0.3 > f > -0.3, it behaves randomly.
    If f < -0.95, the algorithm is at rock bottom: it always cooperates.

    Furthermore, the original strategy implements a retaliation policy: if the
    opponent defects, the sine curve is shifted. Due to lack of further
    information, this implementation does not include that phase change.

    Names:

    - SelfSteem: [Andre2013]_
    """

    name = 'SelfSteem'
    classifier = {
        'memory_depth': float("inf"),
        'stochastic': True,
        'makes_use_of': set(),
        'long_run_time': False,
        'inspects_source': False,
        'manipulates_source': False,
        'manipulates_state': False
    }

    def strategy(self, opponent: Player) -> Action:
        turns_number = len(self.history)
        sine_value = sin(2 * pi * turns_number / 10)

        # Inflated ego: always defect.
        if sine_value > 0.95:
            return D

        # Rational band: mirror the opponent's last move (Tit For Tat).
        if abs(sine_value) < 0.95 and abs(sine_value) > 0.3:
            return opponent.history[-1]

        # Around zero: play randomly.
        if sine_value < 0.3 and sine_value > -0.3:
            return random_choice()

        # Rock bottom: always cooperate.
        return C
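
Since the docstring describes the sine thresholds only in prose, here is a hedged usage sketch that plays the new strategy against Tit For Tat using the library's standard Match class (everything other than SelfSteem itself is pre-existing Axelrod API, not part of this commit):

    import axelrod as axl

    # Pit SelfSteem against TitForTat for 20 turns and inspect the plays.
    players = (axl.SelfSteem(), axl.TitForTat())
    match = axl.Match(players, turns=20)
    interactions = match.play()  # list of (action, action) pairs, one per turn
    print(interactions)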
Lines changed: 38 additions & 0 deletions

"""Tests for the SelfSteem strategy."""

import axelrod
import random
from .test_player import TestPlayer

C, D = axelrod.Actions.C, axelrod.Actions.D


class TestSelfSteem(TestPlayer):

    name = "SelfSteem"
    player = axelrod.SelfSteem
    expected_classifier = {
        'memory_depth': float("inf"),
        'stochastic': True,
        'makes_use_of': set(),
        'inspects_source': False,
        'manipulates_source': False,
        'manipulates_state': False
    }

    def test_strategy(self):
        # Check for f > 0.95
        self.responses_test([D], [C] * 2, [C] * 2)
        self.responses_test([D], [C] * 13, [C] * 13)

        # Check for f < -0.95
        self.responses_test([C], [C] * 7, [C] * 7)
        self.responses_test([C], [C] * 18, [D] * 18)

        # Check for -0.3 < f < 0.3
        self.responses_test([C], [C] * 20, [C] * 20, seed=6)
        self.responses_test([D], [C] * 20, [D] * 20, seed=5)

        # Check for 0.95 > abs(f) > 0.3
        self.responses_test([C], [C], [C])
        self.responses_test([D], [C] * 16, [D] * 16)
        self.responses_test([C], [D] * 9, [C] * 9)
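
The history lengths in these tests are chosen so that sin(2πn/10) lands in each regime. A small standalone sketch (not part of the commit) reproduces the mapping and shows why, for example, histories of length 2 and 13 trigger defection while 7 and 18 trigger cooperation:

    from math import pi, sin

    # Map a history length n to the behavioural regime described in the
    # SelfSteem docstring, using the same thresholds as the strategy code.
    def regime(n):
        f = sin(2 * pi * n / 10)
        if f > 0.95:
            return "defect"
        if 0.3 < abs(f) < 0.95:
            return "tit-for-tat"
        if -0.3 < f < 0.3:
            return "random"
        return "cooperate"

    for n in (1, 2, 7, 9, 13, 16, 18, 20):
        print(n, round(sin(2 * pi * n / 10), 3), regime(n))
    # n = 2 and 13 give f ≈ 0.951 (> 0.95, defect); n = 7 and 18 give
    # f ≈ -0.951 (cooperate); n = 20 gives f ≈ 0 (random); n = 1, 9 and 16
    # fall in the tit-for-tat band.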

docs/reference/all_strategies.rst

Lines changed: 3 additions & 0 deletions

@@ -135,6 +135,9 @@ Here are the docstrings of all the strategies in the library.
 .. automodule:: axelrod.strategies.shortmem
    :members:
    :undoc-members:
+.. automodule:: axelrod.strategies.selfsteem
+   :members:
+   :undoc-members:
 .. automodule:: axelrod.strategies.titfortat
    :members:
    :undoc-members:

docs/reference/bibliography.rst

Lines changed: 1 addition & 1 deletion

@@ -6,6 +6,7 @@ Bibliography
 This is a collection of various bibliographic items referenced in the
 documentation.

+.. [Andre2013] Andre L. C., Honovan P., Felipe T. and Frederico G. (2013). Iterated Prisoner’s Dilemma - An extended analysis, http://abricom.org.br/wp-content/uploads/2016/03/bricsccicbic2013_submission_202.pdf
 .. [Ashlock2006] Ashlock, D., & Kim E. Y, & Leahy, N. (2006). Understanding Representational Sensitivity in the Iterated Prisoner’s Dilemma with Fingerprints. IEEE Transactions On Systems, Man, And Cybernetics, Part C: Applications And Reviews, 36 (4)
 .. [Ashlock2008] Ashlock, D., & Kim, E. Y. (2008). Fingerprinting: Visualization and automatic analysis of prisoner’s dilemma strategies. IEEE Transactions on Evolutionary Computation, 12(5), 647–659. http://doi.org/10.1109/TEVC.2008.920675
 .. [Ashlock2009] Ashlock, D., Kim, E. Y., & Ashlock, W. (2009) Fingerprint analysis of the noisy prisoner’s dilemma using a finite-state representation. IEEE Transactions on Computational Intelligence and AI in Games. 1(2), 154-167 http://doi.org/10.1109/TCIAIG.2009.2018704
@@ -37,4 +38,3 @@ documentation.
 .. [Stewart2012] Stewart, a. J., & Plotkin, J. B. (2012). Extortion and cooperation in the Prisoner’s Dilemma. Proceedings of the National Academy of Sciences, 109(26), 10134–10135. http://doi.org/10.1073/pnas.1208087109
 .. [Szabó1992] Szabó, G., & Fáth, G. (2007). Evolutionary games on graphs. Physics Reports, 446(4-6), 97–216. http://doi.org/10.1016/j.physrep.2007.04.004
 .. [Tzafestas2000] Tzafestas, E. (2000). Toward adaptive cooperative behavior. From Animals to Animals: Proceedings of the 6th International Conference on the Simulation of Adaptive Behavior {(SAB-2000)}, 2, 334–340.
-.. [Andre2013] Andre L. C., Honovan P., Felipe T. and Frederico G. (2013). Iterated Prisoner’s Dilemma - An extended analysis, http://abricom.org.br/wp-content/uploads/2016/03/bricsccicbic2013_submission_202.pdf

docs/tutorials/advanced/classification_of_strategies.rst

Lines changed: 1 addition & 1 deletion

@@ -47,7 +47,7 @@ strategies::
     ... }
     >>> strategies = axl.filtered_strategies(filterset)
     >>> len(strategies)
-    64
+    65

 Or, to find out how many strategies only use 1 turn worth of memory to
 make a decision::
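
The exact filterset used at this point in the tutorial is elided in the hunk (only the closing brace is shown). As a hedged sketch with an illustrative filter key, not the documentation's exact set, counting matching strategies looks like this; the tutorial's count grows from 64 to 65 because SelfSteem matches its filter:

    import axelrod as axl

    # Illustrative filterset; the one in the tutorial is not shown in this diff.
    filterset = {
        'stochastic': True,
    }
    strategies = axl.filtered_strategies(filterset)
    # The result includes SelfSteem, since its classifier marks it stochastic.
    print(len(strategies))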
