A MasterMind player must discover a secret
combination by making guesses and using the hints obtained in
response to the previous ones. Finding a general strategy
that scales well with problem size is still an open issue, despite
having been approached from different angles, including
evolutionary algorithms. In previous papers we tested different
approaches to evolutionary MasterMind and found that
diversity is essential in this kind of combinatorial
optimization problem. In this paper we tune the search
methods to keep diversity high and thus solve
the puzzle in fewer evaluations on average and, if possible,
with fewer combinations played. This will reduce the time
needed to explore problems of bigger size.
Paper presented at the ICCS'12-Agadir conference
Optimizing search via diversity enhancement in evolutionary MasterMind
1. Optimizing search via diversity
enhancement in evolutionary
MasterMind
J. J. Merelo, A. Mora, C. Cotta, T. Runársson
U. Granada & Málaga (Spain) & Iceland
http://geneura.wordpress.com
http://twitter.com/geneura
2. Game of MasterMind
Fine tuning Evolutionary Mastermind - Merelo/Mora/Cotta/Runársson 2
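The core mechanic of the game is the response to a guess: black pegs for a right colour in the right place, white pegs for a right colour in the wrong place. A minimal sketch in Python (the `score` helper name is ours, not from the paper's code):

```python
from collections import Counter

def score(secret, guess):
    """Response to a guess: black pegs (right colour, right place)
    and white pegs (right colour, wrong place)."""
    black = sum(s == g for s, g in zip(secret, guess))
    # Colour overlap regardless of position, minus exact matches.
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black
```

For instance, `score("ABCD", "ABDC")` returns `(2, 2)`: two exact matches, two right colours misplaced.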
3. Let's play, then
5. Naïve Algorithm
Repeat: find a consistent combination and play it.
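The naïve loop can be sketched as follows, assuming the standard (black, white) peg scoring; candidates are enumerated exhaustively here, which is only feasible for small sizes (helper and parameter names are illustrative, not from the paper's code):

```python
from itertools import product
from collections import Counter

def score(secret, guess):
    """Standard Mastermind response: (black pegs, white pegs)."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def play_naive(secret, colors="ABCDEF", length=4):
    """Repeat: play any combination consistent with all hints so far."""
    candidates = ["".join(c) for c in product(colors, repeat=length)]
    guesses = []
    while True:
        guess = candidates[0]        # any consistent combination will do
        resp = score(secret, guess)
        guesses.append(guess)
        if resp == (length, 0):
            return guesses
        # Keep only combinations that would have produced the same hints.
        candidates = [c for c in candidates if score(guess, c) == resp]
```

Evolutionary versions replace the exhaustive enumeration with a population-based search for consistent combinations.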
6. Looking for consistent solutions
Optimization algorithm based on distance to consistency, computed over all combinations played (e.g., a candidate at distance D = 2).
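The distance-to-consistency fitness can be sketched like this: for each combination already played, compare the response a candidate would give (if it were the secret) with the response actually obtained, and sum up the mismatches; D = 0 means the candidate is consistent. Names are our own, not from the paper's code:

```python
from collections import Counter

def score(secret, guess):
    """Standard Mastermind response: (black pegs, white pegs)."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def consistency_distance(candidate, played):
    """Sum, over all combinations played, of the mismatch between the
    response the candidate would produce and the one actually obtained.
    D == 0 means the candidate is consistent with every hint so far."""
    d = 0
    for guess, (black, white) in played:
        b, w = score(guess, candidate)
        d += abs(b - black) + abs(w - white)
    return d
```

Minimizing this distance turns the search for a consistent combination into an optimization problem an EA can tackle.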
7. Not all consistent combinations are born the same
There's at least one better than the others (the solution). Some will reduce the remaining search space more. But scoring them is an open issue.
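One possible way to score consistent combinations (a "most parts"-style heuristic; this sketch is ours, and other scorings are possible) is to count how many distinct responses a candidate induces over the current set of consistent combinations: the more parts, the more the next hint narrows the search space, whatever the secret turns out to be:

```python
from collections import Counter

def score(secret, guess):
    """Standard Mastermind response: (black pegs, white pegs)."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def partition_score(candidate, consistent_set):
    """Count the distinct responses the candidate induces over the
    current consistent set; each distinct response is one part of the
    partition, and more parts means a bigger reduction of the set."""
    parts = Counter(score(candidate, c) for c in consistent_set)
    return len(parts)
```

A candidate that splits three survivors into three parts is preferable to one that lumps two of them under the same response.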
8. What we did before
Increase diversity in search via new operators and selection mechanisms
9. What we do now
Fine-tune evolutionary parameters to minimize evaluations and number of games played
10. Increase diversity. Increase speed to afford tackling bigger sizes. Obtain better solutions: fewer turns.
11. Consistent set size
12. Tournament size
13. Fine tuned!
Number of evaluations decreased by up to 30%! (Game performance stayed the same.)
14. Open source your science!
How would you play MasterMind? It's not easy to do, since there are many more possible branches than in Sudoku or even chess. In fact, this is the kind of game that a machine can play more easily than a person can. CC picture from http://www.flickr.com/photos/unloveable/2399932549/
One of the possible ways to find solutions. There could be others, of course, but this is a good one.
Like the birds: they look the same, but one of them has a bad hair day. Or rather a bad feather day. Let's just say that what we do is, once a solution is consistent, score it based on how the set of consistent solutions is partitioned when consistent solutions are compared with each other. In other papers we tested different ways of doing this, and we fix one here. Ideally, the solution should always have the maximum fitness, but I'm not sure it does (it will have to be checked).
Creative Commons image from Okinawa Soba at http://www.flickr.com/photos/24443965@N08/3606831198/ This was published in NICSO, Evostar, CIG, GECCO (as a poster) and eventually PPSN.
CC Picture from San Diego Shooter http://www.flickr.com/photos/nathaninsandiego/3758988303/ New is always better. And better is also always better. Mostly.
Picture from Philip James Claxton at http://www.flickr.com/photos/philipclaxton/4076919342/in/photostream/
Image from John Traynor at http://www.flickr.com/photos/trainor/3028243647/in/photostream/
All source code, data sets and experiment results for this paper are available from SourceForge (in fact, they were while we were working on it). The source is also available from the CPAN Perl module server worldwide, in two separate modules: the algorithm itself as the module Algorithm::Mastermind (along with other algorithms, for instance Knuth's algorithm), and the EA in the shape of the Evolutionary Algorithm library.