1. Search

I have moved all of the imports that I will need for this notebook to the top of the page.

In [ ]:
import matplotlib.pyplot as plt
import random
import math
import copy

%matplotlib inline

1.1 The TSP

For these examples, I need a hard problem. How about the Traveling Salesperson Problem (TSP)? The problem is: given a set of cities, what is the shortest route that visits all of them and returns home?

First we need a set of cities. I use a dictionary to associate each city's number with its geographical location:

In [ ]:
cities = {0: (60, 200), 1: (180, 200), 2: (80, 180), 3: (140, 180),
          4: (20, 160), 5: (100, 160), 6: (200, 160), 7: (140, 140),
          8: (40, 120), 9: (100, 120), 10: (180, 100), 11: (60, 80),
          12: (120, 80), 13: (180, 60), 14: (20, 40), 15: (100, 40),
          16: (200, 40), 17: (20, 20), 18: (60, 20), 19: (160, 20)}

Let's define a distance helper and some plotting functions, and then see where these cities are:

In [ ]:
def distance(a, b):
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
In [ ]:
def plot_tour(tour):
    plt.figure(figsize=(6, 6))
    xy = [cities[i] for i in tour] + [cities[tour[0]]]
    axes = plt.gca()
    axes.set_xlim([0, 200])
    axes.set_ylim([0, 200])
    plt.plot([d[0] for d in xy], [d[1] for d in xy], "-o")
    
def scatter_tour(tour):
    plt.figure(figsize=(6, 6))
    xy = [cities[i] for i in tour] + [cities[tour[0]]]
    axes = plt.gca()
    axes.set_xlim([0, 200])
    axes.set_ylim([0, 200])
    plt.scatter([d[0] for d in xy], [d[1] for d in xy])
    
def tour_distance(tour):
    total = 0
    current = tour[0]
    for city in tour[1:]:
        total += distance(cities[current], cities[city])
        current = city
    total += distance(cities[tour[-1]], cities[tour[0]])
    return total
In [ ]:
tour_distance(range(20))
In [ ]:
scatter_tour(range(20))
In [ ]:
plot_tour(range(20))

How hard is this problem? It turns out that the TSP is as hard as they come; we call such problems NP-hard. It is very hard because there are so many possibilities, each new city multiplies those possibilities, and the problem can't be divided into independent subproblems (the way sorting can be).

For an $n$-city problem, there are $(n - 1)!$ possible routes. Consider just 20 cities.

In [ ]:
# Iterative solution:
def factorial(n):
    retval = 1
    for i in range(1, n + 1):
        retval = retval * i
    return retval

# Recursive solution (this definition replaces the iterative one above):
def factorial(n):
    if n <= 1:
        return 1
    else:
        return n * factorial(n - 1)
In [ ]:
factorial(20 - 1)

To know that we found the shortest, we might have to check all of those. But $19! = 121{,}645{,}100{,}408{,}832{,}000$ (about $1.2 \times 10^{17}$) routes is far too many to check. This kind of problem is often called intractable.

$$ O((n - 1)!) $$

$$ O(n!) $$

1.2 Greedy Method

How can we solve this problem of searching for the optimal path? As an example, suppose that you could just pick the closest city to visit at each city. This is called a "greedy" method, because it greedily picks the locally closest city.

How good is it?

In [ ]:
def greedy_method():
    tour = [0]
    while True:
        # Find the nearest city not yet on the tour:
        min_dist = float("inf")
        min_city = -1
        for j in range(len(cities)):
            if j in tour:
                continue
            dist = distance(cities[tour[-1]], cities[j])
            if dist < min_dist:
                min_dist = dist
                min_city = j
        if min_city == -1:
            break
        tour.append(min_city)
    return tour
In [ ]:
%%time
greedy_tour = greedy_method()
greedy_tour
In [ ]:
tour_distance(greedy_tour)
In [ ]:
plot_tour(greedy_tour)

Not too bad, but we can do better. Let's try some other algorithms.

1.3 Guess and Step

Let's consider a simple method of problem solving: guess and step. First, just make a guess and note how close you are to a solution. Next, pick some piece of your guess and consider a slight change. If the variation is worse, don't make the change. If the variation is better, go ahead and make the change. Now, continue making slight changes until you get a good enough solution. We will need:

  • a method of representing a problem (e.g., DNA, or a tour)
  • a measure of how good the guess is (e.g., a fitness score)
  • a method of making slight changes (e.g., mutation)

At this point the analogy with biological systems is clear to see.

Let's explore an iterative guess-and-step algorithm.

We need a method for creating random "tours":

In [ ]:
def make_tour():
    # Build a random permutation of the city indices by rejection sampling:
    ret_list = []
    for i in range(len(cities)):
        city = random.choice(range(len(cities)))
        while city in ret_list:
            city = random.choice(range(len(cities)))
        ret_list.append(city)
    return ret_list
In [ ]:
guess = make_tour()
guess
In [ ]:
tour_distance(guess)
In [ ]:
plot_tour(guess)

And we need a method for measuring a tour's "fitness". We'll use total distance (here, lower is better).

In [ ]:
def fitness(tour):
    return tour_distance(tour)
In [ ]:
fitness(greedy_tour)
In [ ]:
fitness(guess)

Finally, we need a method of mutating a tour. Tours aren't just random numbers: they are a circuit. For this problem, we'll use a specialized mutation function that just swaps two cities. That way it will always be a valid circuit.

In [ ]:
def mutate(tour):
    tour = copy.copy(tour)
    # Pick two points and swap (they may coincide, leaving the tour unchanged):
    point1 = random.randint(0, len(cities) - 1)
    point2 = random.randint(0, len(cities) - 1)
    tour[point1], tour[point2] = tour[point2], tour[point1]
    return tour
In [ ]:
print(guess)
guess = mutate(guess)
print(guess)

Now, let's iteratively mutate the guess, ignoring the mutation if it is worse and replacing the original guess if it is better.

In [ ]:
guess = make_tour()
fit_list = [fitness(guess)]

for i in range(10000):
    new_guess = copy.copy(guess)
    for j in range(2):
        new_guess = mutate(new_guess)
    f1 = fitness(guess)
    f2 = fitness(new_guess)
    if f2 < f1:
        fit_list.append(f2)
        guess = new_guess
    else:
        fit_list.append(f1)
plt.plot(fit_list, marker="o")
plt.figure()
plot_tour(guess)
In [ ]:
guess, fitness(guess)

That's not too bad! Perhaps even a bit better than the greedy method.

1.4 Hill Climbing

This guess-and-step methodology is actually called hill climbing, due to the following metaphor. Start at a random place on a hill. Pick a direction to step. If the place you would step to is higher than where you are, make the step; otherwise, stay where you are. This little algorithm will eventually take you to the top of the hill. However, it might not take you to the highest place around, because you could get trapped on a little local peak (i.e., you would have to step to a lower place before stepping to even higher ground).

This is the part of Computer Science that views problem solving as "search for a solution."

In [ ]:
def hillclimb(times):
    guess = make_tour()
    fit_list = [fitness(guess)]

    for i in range(times):
        new_guess = mutate(guess)
        f1 = fitness(guess)
        f2 = fitness(new_guess)
        if f2 < f1:
            fit_list.append(f2)
            guess = new_guess
        else:
            fit_list.append(f1)
    plt.plot(fit_list, marker="o")
    plt.figure()
    plot_tour(guess)
    return guess, fitness(guess)
In [ ]:
hillclimb(10000)
In [ ]:
fit_list[-1]  # note: hillclimb's fit_list is local, so this still shows the earlier cell's list

1.5 Random Search

One solution to getting stuck on a plateau is to make random moves. If you do that all of the time, it is called "random search".

In [ ]:
def random_search(times):
    guess = make_tour()
    fit_list = [fitness(guess)]

    for i in range(times):
        new_guess = mutate(guess)
        f1 = fitness(guess)
        f2 = fitness(new_guess)
        if random.random() > .5:  # accept the mutation half the time, ignoring fitness
            fit_list.append(f2)
            guess = new_guess
        else:
            fit_list.append(f1)
    plt.plot(fit_list, marker="o")
    plt.figure()
    plot_tour(guess)
    return guess, fitness(guess)
In [ ]:
random_search(1000)

1.6 Simulated Annealing

A slightly better version of hill climbing allows you to take random steps sometimes, regardless of whether or not the step would put you on higher ground. Of course, you don't always want to take random steps (that's called random search, as above), so you'll need to control your randomness. If you only take random steps as dictated by a schedule, then you are using simulated annealing. Simulated annealing allows you to start off taking random steps quite often, but then slowly curb the habit. This works better than hill climbing when the ground is bumpy with many local peaks.

In [ ]:
def simulated_annealing(times):
    guess = make_tour()
    fit_list = [fitness(guess)]

    for i in range(times):
        new_guess = mutate(guess)
        f1 = fitness(guess)
        f2 = fitness(new_guess)
        # Linear cooling schedule: the chance of a random move starts
        # near 1 and falls toward 0 as i approaches times.
        if random.random() > i/times:
            # Take a random step: keep or switch with equal probability.
            if random.random() < .5:
                fit_list.append(f1)
            else:
                fit_list.append(f2)
                guess = new_guess
        elif f2 < f1:
            fit_list.append(f2)
            guess = new_guess
        else:
            fit_list.append(f1)
    plt.plot(fit_list, marker="o")
    plt.figure()
    plot_tour(guess)
    return guess, fitness(guess)
In [ ]:
simulated_annealing(100000)

1.7 Can we do better?

Simulated annealing can be described as a "balance between Exploitation and Exploration".

What could be better than simulated annealing? How about a whole group of people spread over the countryside, each acting as their own simulated annealer? And they can communicate with each other. "Hey, I'm on high ground over here!" or "This area looks promising! Come over here!" This is the idea behind evolutionary algorithms.

1.8 Evolutionary Algorithms

Evolutionary algorithms are techniques for searching through a solution landscape in a generally effective manner.

All evolutionary systems share a core set of properties:

  • A "population" of solutions
  • Limited resources (not all solutions will survive)
  • A measure of performance, called "fitness"
  • Preferential treatment toward the higher-scoring solutions (the most fit ones)
  • Reliance on randomness

A typical evolutionary system follows this basic algorithm (a minimal code sketch appears after the list):

  1. Create a random population of solutions
  2. Compute a fitness measure for each
  3. Create new members by mutating and/or combining old ones
  4. Select the most fit for the next generation
  5. Go to Step #3
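
Here is a minimal sketch of that loop, assuming higher fitness is better. The helpers make_individual, fitness, mutate, and good_enough are hypothetical placeholders that a concrete problem (like the GA in section 1.9) must supply:

In [ ]:
def evolve_sketch(make_individual, fitness, mutate, good_enough, pop_size=10):
    # 1. Create a random population of solutions:
    pop = [make_individual() for _ in range(pop_size)]
    while True:
        # 2. Compute a fitness measure for each (sort best-first):
        pop.sort(key=fitness, reverse=True)
        if good_enough(pop[0]):
            return pop[0]
        # 3. Create new members by mutating old ones (here, the better half):
        children = [mutate(random.choice(pop[:pop_size // 2]))
                    for _ in range(pop_size)]
        # 4. Select the most fit for the next generation...
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
        # 5. ...and repeat from step 3.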

1.8.1 Mutation

Mutation is the act of changing one gene in a genome. Simulated mutation often takes one of two forms: incremental mutation or random-value mutation. Incremental mutation takes the value of the gene and changes it by plus or minus a small amount. Random-value mutation replaces the value with a new random value between the minimum and maximum allowed.
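
Neither form is used elsewhere in this notebook, but here is a minimal sketch of both, assuming a genome represented as a list of floats bounded by min_val and max_val (all names here are illustrative):

In [ ]:
def incremental_mutation(genome, step=0.1, min_val=0.0, max_val=1.0):
    # Nudge one randomly chosen gene by plus or minus a small amount,
    # clamping the result to the allowed range:
    genome = copy.copy(genome)
    point = random.randint(0, len(genome) - 1)
    new_value = genome[point] + random.choice([-step, step])
    genome[point] = min(max_val, max(min_val, new_value))
    return genome

def random_value_mutation(genome, min_val=0.0, max_val=1.0):
    # Replace one randomly chosen gene with a fresh random value:
    genome = copy.copy(genome)
    point = random.randint(0, len(genome) - 1)
    genome[point] = random.uniform(min_val, max_val)
    return genome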

1.8.2 Crossover

Crossover exchanges pieces of two genomes. Crossover can be a perfect shuffle, uniform, or point-based. Shuffle simply alternates between each parent. Uniform tosses an even coin to see if each gene comes from the mother or the father. Point-based crossover will only cross over at particular points. A sketch of uniform and single-point crossover is shown below:
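
This is a minimal sketch, assuming equal-length list genomes (the function names are illustrative):

In [ ]:
def uniform_crossover(mother, father):
    # Toss an even coin for each gene to decide which parent it comes from:
    return [random.choice([m, f]) for m, f in zip(mother, father)]

def single_point_crossover(mother, father):
    # The child takes the mother's genes before a random point
    # and the father's genes from that point on:
    point = random.randint(0, len(mother) - 1)
    return mother[:point] + father[point:]

For example, single_point_crossover(list("AAAA"), list("BBBB")) yields ['A', 'A', 'B', 'B'] when the crossover point happens to be 2.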

1.8.3 Selection

Those genomes with higher fitness are more likely to survive to the next generation.

In many implementations, crossover and selection are combined such that selected parents give rise to children that replace the lower-fitness genomes, as the crossover() function in section 1.9 does.

1.8.4 Variations

There are two main flavors of evolutionary systems: the genetic algorithm (GA), and genetic programming (GP). GAs use a fixed-size genome composed of bits or, more generally, floating point numbers. GP uses trees of programming language expressions that can grow and shrink. If using a GA, then the human encoder of the problem assigns a meaning to each number in the gene. In a GP, the human encoder must create the functions and terminals used in the expressions, but the system builds its own representations. A rough illustration of the two representations follows; we will examine both of these techniques in the following sections.
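
As a purely hypothetical illustration of the difference (neither structure is used elsewhere in this notebook):

In [ ]:
# A GA genome: a fixed-size list of numbers; the human encoder assigns
# each position a meaning (e.g., gene 0 = speed, gene 1 = size, ...):
ga_genome = [0.72, 0.13, 0.98, 0.45]

# A GP genome: a tree of expressions over human-chosen functions and
# terminals; here nested tuples stand in for the tree x + (x * y):
gp_genome = ("add", "x", ("mul", "x", "y"))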

1.9 Genetic Algorithm

To get started, we need three things:

  1. a fitness function (fitness should always be >= 0)
  2. a function to determine when we should stop
  3. a random population
In [33]:
import string
In [34]:
target = "METHINKS IT IS LIKE A WEASEL"  # the string we are evolving toward
alphabet = string.ascii_uppercase + " "
In [35]:
def make_guess():
    return [random.choice(alphabet) for c in range(len(target))]

def make_pop(size):
    pop = []
    for i in range(size):
        pop.append([0, make_guess()])
    return pop

def mutate(guess):
    guess = copy.copy(guess)
    point = random.randint(0, len(target) - 1)
    guess[point] = random.choice(alphabet)
    return guess
    
def fitness(guess):
    return sum([guess[i] == target[i] for i in range(len(target))])

def select(pop):
    # Roulette-wheel (fitness-proportionate) selection:
    index = 0
    partsum = 0.0
    sumFitness = sum([item[0] for item in pop])
    if sumFitness == 0:
        raise Exception("Population has a total of zero fitness")
    spin = random.random() * sumFitness
    while index < len(pop) - 1:
        fit = pop[index][0]
        if fit < 0:
            raise Exception("Negative fitness in select: " + str(fit))
        partsum += fit
        if partsum >= spin:
            break
        index += 1
    return copy.copy(pop[index][1])

def crossover(pop):
    # Replace the bottom half of the population with single-point
    # crossover children of parents chosen by roulette-wheel selection:
    for j in range(int(len(pop)/2)):
        p1 = select(pop)
        p2 = select(pop)
        point = random.randint(0, len(target) - 1)
        child = []
        for i in range(len(target)):
            if i < point:
                child.append(p1[i])
            else:
                child.append(p2[i])
        pop[len(pop) - j - 1][1] = child
In [36]:
guess = make_guess()
"".join(guess)
Out[36]:
'IOWILEUOXQTHHKOZRECTUNXXADGF'
In [37]:
pop = make_pop(10)
In [38]:
pop[0]
Out[38]:
[0,
 ['X', 'X', 'W', 'L', 'R', 'Q', 'O', 'L', 'H', 'Z', 'U', 'U', 'T', 'N',
  'V', 'P', 'D', 'G', 'U', 'A', 'E', 'V', 'P', 'G', 'D', 'Q', 'K', 'S']]
In [39]:
guess = make_guess()
print(guess)
print(mutate(guess))
['N', 'U', 'A', 'J', 'A', 'V', 'B', 'L', 'T', 'G', 'Z', 'S', ' ', 'N', 'O', 'O', 'Y', 'L', 'G', 'K', 'I', 'L', 'T', 'I', 'G', 'R', ' ', 'O']
['N', 'U', 'A', 'J', 'A', 'V', 'B', 'L', 'T', 'G', 'Z', 'S', ' ', 'N', 'O', 'O', 'Y', 'L', 'G', 'K', 'I', 'L', 'T', 'I', 'K', 'R', ' ', 'O']
In [40]:
fitness(pop[0][1])
Out[40]:
0
In [41]:
def evolve(pop):
    generations = 0
    while True:
        # Score every member of the population:
        for i in range(len(pop)):
            pop[i][0] = fitness(pop[i][1])
        pop.sort(reverse=True)
        if pop[0][0] == len(target):
            break
        for i in range(2, len(pop)):  # mutate everyone but the top 20%
            pop[i][1] = mutate(pop[i][1])
        crossover(pop)
        if generations % 10 == 0:
            print("generations:", generations, "fitness:", pop[0][0], "".join(pop[0][1]))
        generations += 1
    print("generations:", generations, "fitness:", pop[0][0], "".join(pop[0][1]))
In [42]:
pop = make_pop(10)
evolve(pop)
generations: 0 fitness: 4 MPNXZOWZRPHLCS QXNONTQOAYLQL
generations: 10 fitness: 7 MLWMIFPV ZTNCH QXASKCVUEOSXF
generations: 20 fitness: 8 MLTMIQPV YTNCH QXASECVUEOSXF
generations: 30 fitness: 9 MLTMITKM YTWCH BXA KHAUEOSXF
generations: 40 fitness: 11 MQTMITKM YTNCH BXPEKHA ERSEF
generations: 50 fitness: 12 MSTMITKM YTNCH BMPE IATEESEF
generations: 60 fitness: 14 MQTHITKM YT CH BMPE IATEESEF
generations: 70 fitness: 15 MTTHITKM YT CH BXPE NATELSEL
generations: 80 fitness: 17 MTTHIWKM YT MH BXKE SATEASEL
generations: 90 fitness: 17 MZTHIWKM YT MH BXKE  ATEASEL
generations: 100 fitness: 18 MTTHIXKM YT KH BIKE SNTEASEL
generations: 110 fitness: 19 MZTHIYKM YT MH BIKE FJWEASEL
generations: 120 fitness: 20 MZTHIYKM YT MH BIKE AAWEASEL
generations: 130 fitness: 20 MZTHIYKT YT MH BIKE AAWEASEL
generations: 140 fitness: 20 MZTHIYKT YT MQ XIKE AAWEASEL
generations: 150 fitness: 20 MZTHTNKT YT MQ XIKE AJWEASEL
generations: 160 fitness: 20 MZTHTNKT YT MQ XIKE AJWEASEL
generations: 170 fitness: 20 MZTHXNKY RT PQ CIKE ATWEASEL
generations: 180 fitness: 21 METHXNKY RT GQ IIKE ACWEASEL
generations: 190 fitness: 21 METHXNKY RT GQ IIKE ACWEASEL
generations: 200 fitness: 21 METHXNKY RT GQ IIKE ACWEASEL
generations: 210 fitness: 22 METHXNK  RT RS BIKE ACWEASEL
generations: 220 fitness: 22 METHXNKI RT RS BIKE ACWEASEL
generations: 230 fitness: 23 METHXNKI RT IS RIKE ACWEASEL
generations: 240 fitness: 23 METHXNKI ZT IS WIKE AIWEASEL
generations: 250 fitness: 23 METHXNKQ ZT IS WIKE ASWEASEL
generations: 260 fitness: 24 METHINKI ZT IS WIKE ASWEASEL
generations: 270 fitness: 25 METHINKV IT IS WIKE ASWEASEL
generations: 280 fitness: 25 METHINKV IT IS WIKE AXWEASEL
generations: 290 fitness: 26 METHINKS IT IS WIKE AXWEASEL
generations: 300 fitness: 26 METHINKS IT IS WIKE AYWEASEL
generations: 310 fitness: 26 METHINKS IT IS XIKE AXWEASEL
generations: 320 fitness: 26 METHINKS IT IS XIKE AYWEASEL
generations: 330 fitness: 26 METHINKS IT IS XIKE AYWEASEL
generations: 340 fitness: 26 METHINKS IT IS XIKE AYWEASEL
generations: 350 fitness: 26 METHINKS IT IS XIKE AYWEASEL
generations: 360 fitness: 26 METHINKS IT IS XIKE AYWEASEL
generations: 370 fitness: 26 METHINKS IT IS XIKE AYWEASEL
generations: 380 fitness: 26 METHINKS IT IS XIKE AYWEASEL
generations: 390 fitness: 26 METHINKS IT IS XIKE AYWEASEL
generations: 400 fitness: 26 METHINKS IT ISLLIKE AHWEASEL
generations: 410 fitness: 26 METHINKS IT ISLLIKE ASWEASEL
generations: 420 fitness: 26 METHWNKS IT ISLLIKE A WEASEL
generations: 430 fitness: 27 METHINKS IT ISLLIKE A WEASEL
generations: 440 fitness: 27 METHINKS IT ISLLIKE A WEASEL
generations: 450 fitness: 27 METHINKS IT ISLLIKE A WEASEL
generations: 460 fitness: 27 METHINKS IT ISLLIKE A WEASEL
generations: 470 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 480 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 490 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 500 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 510 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 520 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 530 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 540 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 550 fitness: 27 METHINKS IT ISOLIKE A WEASEL
generations: 560 fitness: 27 METHINKS IT ISYLIKE A WEASEL
generations: 570 fitness: 27 METHINKS IT ISYLIKE A WEASEL
generations: 576 fitness: 28 METHINKS IT IS LIKE A WEASEL

2. Further Reading

  1. Holland, John H. (1975). Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor.