Searching Mazes NetLogo Model

Produced for the book series "Artificial Intelligence";

Author: W. J. Teahan; Publisher: Ventus Publishing Aps, Denmark.


view/download model file: Searching-Mazes.nlogo

WHAT IS IT?

This model applies standard search algorithms to the problem of searching mazes. Three mazes are provided in the model: the empty maze, with no internal walls; the Hampton Court maze, a schematic representation of the real-life garden maze at Hampton Court Palace in the U.K.; and the Chevening House maze, a representation of the real-life maze at Chevening House in the U.K., designed to thwart the hand-on-the-wall method of getting to the centre. The entrance of the empty maze is at the bottom middle, and the goal is to reach the exit at the top. The Hampton Court and Chevening House mazes are also entered at the bottom middle, and for both the goal is to reach the centre.

The problem is the following: how do you reach the goal from the entrance? To solve this problem, search needs to be employed to try out the different possible paths. This model implements some of the classic search algorithms, and these can be applied to this toy problem to see how the different search strategies perform against each other.


WHAT IS ITS PURPOSE?

The purpose of this model is to show how some of the classic search algorithms such as breadth-first search and depth-first search can be implemented in NetLogo, and also how they can then be applied to solving a classic toy problem in order to compare how they perform.


HOW IT WORKS

The model implements the search algorithms in a novel way by making use of an agent-oriented approach which is relatively easy to implement in NetLogo. Rather than use a queue data structure to implement the classic search algorithms (this is a standard solution; see, for example, Russell & Norvig's AI textbook), the model instead adopts a purely agent-oriented solution where the information is distributed amongst NetLogo agents. These agents conduct the search by transferring information to other agents that continue the search. Hence, an explicit queue data structure separate from what is happening in the search is not needed (although technically the information is still being stored in a queue behind the scenes, but the implementation of this is hidden behind the top-level commands that NetLogo provides).
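For contrast with the model's queueless agent-oriented approach, the classic textbook formulation makes the queue explicit. The following Python sketch (illustrative, not part of the model) shows breadth-first search with an explicit FIFO frontier of paths:

```python
from collections import deque

def bfs(start, goal, neighbours):
    """Classic queue-based breadth-first search, for contrast with the
    agent-oriented version used in the model. `neighbours` maps a
    state to the states reachable in one step."""
    frontier = deque([[start]])          # explicit FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # expand the oldest path first
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbours(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # no path exists

# Tiny 4-connected grid with one wall cell at (1, 1):
walls = {(1, 1)}
def neighbours(s):
    x, y = s
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps
            if 0 <= a <= 2 and 0 <= b <= 2 and (a, b) not in walls]

print(bfs((0, 0), (2, 2), neighbours))
```

In the model, the frontier above is replaced by the set of live searcher agents themselves: each expansion hatches children and the parent dies.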

The hope is that this "queueless" agent-oriented approach provides a more intuitive solution that makes it easier to grasp how the search strategies work. It adopts a situated, embodied, first-person design perspective, arguably making it easier to understand the essential differences between the search strategies.

The breed of searcher turtle agents is used for the searchers that move through the maze trying to reach the goal. These searcher agents maintain information about the current state of the search, such as the time taken, the path cost and the estimated cost. Each searcher agent expands the search in three directions: forward; a 90-degree right turn then forward; and a 90-degree left turn then forward. The forward movement behaviour of the searcher agents in the maze is controlled by two choosers, move-forward-behaviour and move-forward-behaviour-after-turning. The options for the type of move forward behaviour used by both these choosers are as follows:
- "Move forward n steps unless hit wall": The agent moves forward a number of steps determined by the move-forward-step-amount slider.
- "Move forward until hit wall": The agent will move forward indefinitely until it hits a wall.
- "Move forward until any open space": The agent will move forward until it has found open space (i.e. until there is a wall in the way or there is open space on both sides).
- "Move forward until open space after side wall": The agent will move forward until there is a wall in the way, or until there is open space on both sides, but in the latter case it will only stop if it has previously been following a wall on either side.
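The last of these behaviours can be sketched in Python (the function name, grid representation and `wall_at` predicate are illustrative assumptions, not the model's NetLogo code):

```python
def move_until_open_after_side_wall(pos, heading, wall_at, max_steps=100):
    """Walk one cell at a time in direction `heading`; stop at a wall
    ahead, or at open space on both sides once a side wall has been
    followed. `wall_at` reports whether a grid cell is a wall."""
    dx, dy = heading
    left, right = (-dy, dx), (dy, -dx)   # perpendicular directions
    by_wall = False
    for _ in range(max_steps):
        ahead = (pos[0] + dx, pos[1] + dy)
        if wall_at(ahead):               # wall in the way: stop
            break
        pos = ahead
        side_l = (pos[0] + left[0], pos[1] + left[1])
        side_r = (pos[0] + right[0], pos[1] + right[1])
        if wall_at(side_l) or wall_at(side_r):
            by_wall = True               # currently following a side wall
        elif by_wall:
            break                        # open space after a side wall
    return pos
```

For example, heading north past a wall on the right that ends after two cells, the agent stops one cell beyond the end of that wall.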


HOW TO USE IT

To initialise the search and draw the selected maze (specified by the maze-being-searched chooser), press the setup button in the Interface. A turtle agent with the shape of a person will slowly walk towards the entrance of the maze. Then the entrance will be blocked (to prevent the agent from exiting in that direction).

To start the search, either press the go-once button to make the search proceed one step at a time, or press the go-forever button to make the search proceed continuously until it reaches the goal state or it gets stuck.


THE INTERFACE

The model's Interface buttons are defined as follows:

- setup: This will clear the environment and all variables, and redraw the selected maze (chosen by using the maze-being-searched chooser).

- go-once: This will make the search proceed one step at a time.

- go-forever: This will make the search proceed continuously until it has reached the goal state or it gets stuck.

The model's Interface choosers, sliders and switch are defined as follows:

- maze-being-searched: This selects the maze to be searched. There are three available: the empty maze, the Hampton Court maze, and the Chevening House maze.

- search-behaviour: This specifies what search strategy the agents should employ when performing the search. A full description of the different search strategies can be found in the book Artificial Intelligence - Agent Behaviour I (see reference at the bottom of this Information).

- heuristic: This specifies the heuristic to be used for the Informed searches - Greedy Best First Search and A* Search.

- move-forward-behaviour: This specifies the type of move forward behaviour the searcher agent executes. (For an explanation of the options, see the How It Works section above).

- move-forward-behaviour-after-turning: This specifies the type of move forward behaviour the searcher agent executes after it has turned either right or left. (For an explanation of the options, see the How It Works section above).

- move-forward-step-amount: This is the amount of steps forward the agent makes when it moves forward (either directly ahead of itself on its current heading, or after it has made a right or left turn).

- allow-revisited-states?: This is a switch that, if set to On, will allow the search to proceed to already visited states.

- left-cols, right-cols, above-rows, below-rows, entrance-cols: These sliders set the height and width of the empty maze, and the size of the entrance.

- col-patches-width, row-patches-width: These sliders set the height and width of the Hampton Court and Chevening House mazes.

- max-agents-to-expand: This sets the maximum number of agents to expand the search each step. It is only used by the Multi-Agent Depth First Search.

- max-depth: This sets the maximum depth of the search. It is used only by the Depth Limited Search and the Iterative Deepening Search.

The model's Interface monitors are defined as follows:

- Active Searcher Agents: This shows the current number of active searcher agents still participating in the search.

- Maximum Active Searcher Agents: This shows the maximum value that the Active Searcher Agents monitor has achieved throughout the search.

- Total Searcher Agents: This shows the total number of searcher agents that the search has used.

- IDS Depth: This is the current depth during the execution of the Iterative Deepening Search.


THINGS TO NOTICE

Notice the effect of turning the flag allow-revisited-states? On and Off. Why is turning it Off so effective at dramatically reducing the search for the different search behaviours?

Notice that some of the searches if repeated with the same settings end up in different places. Why is this?

Notice that many of the behaviours result in the searching turtle agents getting stuck. When does this happen, and why?


THINGS TO TRY

Try slowing down the animation by adjusting the speed slider at the top of the Interface window. You will then be able to see how the search proceeds.

Which search behaviour seems to be the best at this problem? Which seems to be the worst? Try out each of the different search behaviours to see which one is the most effective. Find out how well each search algorithm performs against the four evaluation criteria - time complexity, space complexity, optimality and completeness. Relate the theory to what happens in practice.

Try switching the searching behaviour mid-stream. The difference in behaviours is most noticeable in the empty maze. For example, try switching from a breadth-first search to a depth-first search then back again (after setting the move-forward-behaviour and move-forward-behaviour-after-turning choosers to "Move forward n steps unless hit wall", and setting the move-forward-step-amount slider to 1).

Try sliding the move-forward-step-amount slider back and forth as the search proceeds if you have set either of the move-forward-behaviour and move-forward-behaviour-after-turning choosers to "Move forward n steps unless hit wall".

Try out the different heuristics for the informed searches (Greedy Best First Search and A* Search). Do they have any effect on how effective the search is?
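The two distance heuristics differ in a simple way: Euclidean distance is the straight-line distance to the goal, while Manhattan distance sums the horizontal and vertical offsets. Since movement in the maze is axis-aligned, Manhattan distance is usually the tighter estimate. A small Python illustration (not part of the model):

```python
import math

def euclidean(x, y, gx, gy):
    # straight-line distance to the goal
    return math.hypot(gx - x, gy - y)

def manhattan(x, y, gx, gy):
    # grid distance: sum of horizontal and vertical offsets
    return abs(gx - x) + abs(gy - y)

# From (0, 0) to a goal at (3, 4):
print(euclidean(0, 0, 3, 4))   # 5.0
print(manhattan(0, 0, 3, 4))   # 7
```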

Try out all the different combinations of the move-forward-behaviour and move-forward-behaviour-after-turning chooser settings, and the move-forward-step-amount slider setting. This results in a huge variety of behaviours for the searching turtle agents. For example, have a go at reproducing the behaviour shown in the screenshots in Chapter 8 (e.g. Figure 8.10) of the AI book series (see reference at the bottom of this Information).

Try changing the parameters of each maze by altering the settings of the sliders on the right.


EXTENDING THE MODEL

Try adding other search algorithms, or adding your own variations to the ones implemented in the model.

Try adding your own maze, perhaps a garden maze or another maze that exists in real life. If your maze has a real-life equivalent, note any discrepancies you have to introduce where the virtual representation does not exactly correspond to the real-life maze.
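A new maze is defined in the model by lists of horizontal and vertical wall segments (see setup-row and setup-col in the Procedures). The idea can be sketched in Python (names are illustrative; the model paints patches rather than building a set, and scales by row-patches-width / col-patches-width):

```python
def paint_walls(rows, cols):
    """Build the set of wall cells from segment lists, in the style of
    the model's setup-row / setup-col: each entry is
    (line, [(start, end), ...]) in grid units, one cell per unit."""
    walls = set()
    for y, segments in rows:                     # horizontal runs
        for x0, x1 in segments:
            walls.update((x, y) for x in range(x0, x1 + 1))
    for x, segments in cols:                     # vertical runs
        for y0, y1 in segments:
            walls.update((x, y) for y in range(y0, y1 + 1))
    return walls

# A 3x3 box whose bottom edge has a gap at (2, 0) for the entrance:
rows = [(0, [(0, 1)]), (3, [(0, 3)])]            # bottom (with gap), top
cols = [(0, [(0, 3)]), (3, [(0, 3)])]            # left and right sides
walls = paint_walls(rows, cols)
```

Sketching a maze on graph paper and reading off the runs of wall cells row by row and column by column gives the segment lists directly.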


NETLOGO FEATURES

Note the use of the hatch and die commands to clone child searchers and terminate parent searchers, thus avoiding the need for a separate queue data structure to maintain the information during the search. Note also how the different searches are defined by varying which searcher agents the ask command selects for expansion at each step.
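That "only the selection rule changes" idea can be shown outside NetLogo. In this Python sketch (illustrative, not the model's code) one generic engine takes a priority function, mirroring the model's sort-by expressions; like the model with allow-revisited-states? set to Off, it prunes already-visited states:

```python
def search(start, goal, neighbours, priority):
    """Generic best-first search: the `priority` function alone decides
    the strategy. Each frontier entry is (path, cost); `priority`
    maps an entry to the value to minimise."""
    frontier = [([start], 0)]
    visited = {start}
    while frontier:
        frontier.sort(key=priority)          # pick the best entry first
        path, cost = frontier.pop(0)
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbours(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((path + [nxt], cost + 1))
    return None

# The same engine yields the classic searches by changing the key:
#   breadth-first:  priority = lambda e: len(e[0])         (shallowest)
#   depth-first:    priority = lambda e: -len(e[0])        (deepest)
#   uniform cost:   priority = lambda e: e[1]              (cheapest)
#   A*:             priority = lambda e: e[1] + h(e[0][-1])
```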


RELATED MODELS

For other search problems, see the Searching For Kevin Bacon model. For an insight into the Manhattan distance and Euclidean distance metrics used as heuristics for the Informed searches, see the Manhattan Distance model.


CREDITS AND REFERENCES

This model was created by William John Teahan.

To refer to this model in publications, please use:

Searching Mazes NetLogo model.
Teahan, W. J. (2010). Artificial Intelligence. Ventus Publishing Aps.


PROCEDURES

; Searching Mazes model.
; 
; Gets turtles with searching behaviours such as BFS, DFS, Greedy Best-First and A* to search various mazes.

; Copyright 2010 William John Teahan. All Rights Reserved.

breed [mazes maze]
mazes-own
[ start-x start-y start-width; co-ordinates where the search starts
  entrance-x entrance-y entrance-width
  goal-type goal-x goal-y goal-width] ; specifies where the exit or goal is

breed [searchers searcher]
searchers-own
[ time                ; time step
  height              ; used for hill-climbing search
  path-cost           ; actual cost of search path so far (used for A-star search)
  estimated-cost ]    ; estimated cost of cheapest path to goal

globals
[ search-time-step    ; indicates what the current time step is during the search
  visited-states      ; indicates which states have already been visited - do not revisit them in that case
  search-completed    ; true when search is completed
  searchers-used      ; number of searcher agents used
  max-active-searchers; maximum number of active searchers at any one time
  path-found          ; true if we have found a path to expand; e.g. used by DFS to signal a dead-end
  person-colour       ; colour of person shown doing the search in the visualisation
  IDS-depth ]         ; current depth limit for IDS (Iterative Deepening Search)      

to setup
; setup maze at (0,0)
  ca ;; clear everything

  if (maze-being-searched = "Empty Maze")
      [setup-empty-maze
       create-mazes 1 [  
         set start-x 0
         set start-y (- below-rows - 5)
         set start-width 6
         set entrance-x (- entrance-cols) + 1
         set entrance-y (- below-rows)
         set entrance-width 2 * entrance-cols - 2
         set goal-type "Exit"
         set goal-x entrance-x
         set goal-y above-rows
         set goal-width entrance-width]]
  if (maze-being-searched = "Hampton Court Maze")
      [setup-hampton-court-maze
       create-mazes 1 [
         set start-x (col-patches-width / 2)
         set start-y (- row-patches-width * 7 - 1)
         set start-width row-patches-width + 2
         set entrance-x 0
         set entrance-y (- row-patches-width * 6)
         set entrance-width col-patches-width
         set goal-type "Centre"
         set goal-x 0
         set goal-y (- 2 * row-patches-width)
         set goal-width col-patches-width]]
  if (maze-being-searched = "Chevening House Maze")
      [setup-chevening-house-maze
       create-mazes 1 [
         set start-x (col-patches-width / 2)
         set start-y (- row-patches-width * 12)
         set start-width row-patches-width + 2
         set entrance-x 1
         set entrance-y (- row-patches-width * 11)
         set entrance-width col-patches-width - 2
         set goal-type "Centre"
         set goal-x 1
         set goal-y (- row-patches-width)
         set goal-width entrance-width]]

  create-searchers 1 [ setup-searcher ]
  
  ; close "door" once they are in
  ask patches
    [ let xpos ([entrance-x] of maze 0)
      if pxcor >= xpos and pxcor <= xpos + ([entrance-width] of maze 0) and
        pycor = ([entrance-y] of maze 0)
      [ set pcolor sky ]]
  
  set search-time-step 0
  set search-completed false
  set visited-states []
  set searchers-used 1
  set max-active-searchers 1
  set person-colour magenta
end

to setup-searcher
; initialises the searcher
  set searchers-used searchers-used + 1
      
  set size 3
  set pen-size 2
  set color magenta
  set shape "person"
      
  ; move the turtle slowly to the entrance
  setxy [start-x] of maze 0 [start-y] of maze 0
  set height hill-height xcor ycor
  set estimated-cost (heuristic-function xcor ycor)
  set path-cost 0

  set heading 0 ; head north
  repeat ([start-width] of maze 0) [forward 1 wait 0.2]
end

to setup-empty-maze
  ca ;; clear everything
  
  ask patches
  [
    set pcolor white ;; make background full of white patches
          
    if (pxcor >= (- left-cols - entrance-cols) and pxcor <= (- entrance-cols) and
        pycor = (- below-rows))
      [set pcolor blue] ;; draws bottom left horizontal wall in blue

    if (pxcor >= (entrance-cols) and pxcor <= (right-cols + entrance-cols) and
        pycor = (- below-rows))
      [set pcolor blue] ;; draws bottom right horizontal wall in blue

    if (pxcor >= (- left-cols - entrance-cols) and pxcor <= (- entrance-cols) and
        pycor = (above-rows))
      [set pcolor blue] ;; draws top left horizontal wall in blue

    if (pxcor >= (entrance-cols) and pxcor <= (right-cols + entrance-cols) and
        pycor = (above-rows))         
      [set pcolor blue] ;; draws top right horizontal wall in blue

    if (pxcor = (- left-cols - entrance-cols) and
        pycor >= (- below-rows) and pycor <= (above-rows))
      [set pcolor blue] ;; draws left vertical wall in blue
      
    if (pxcor = (right-cols + entrance-cols) and
        pycor >= (- below-rows) and pycor <= (above-rows))
      [set pcolor blue] ;; draws right vertical wall in blue
  ]
end

to setup-row [row colour segments]
  foreach segments
  [
      if pycor = row * row-patches-width and
        (pxcor >= col-patches-width * (item 0 ?)) and (pxcor <= col-patches-width * (item 1 ?))
          [set pcolor colour]
  ]
end

to setup-col [col colour segments]
  foreach segments
  [
      if pxcor = col * col-patches-width and
        (pycor >= row-patches-width * (item 0 ?)) and (pycor <= row-patches-width * (item 1 ?))
          [set pcolor colour]
  ]
end

to setup-hampton-court-maze
  ca ;; clear everything
  
  ask patches
  [
      if (pxcor >= min-pxcor and pxcor <= max-pxcor and
        pycor >= min-pycor and pycor <= max-pycor)
          [set pcolor white] ;; make background full of white patches
      
      setup-row  5 blue [[-9 10]]
      setup-row  4 blue [[-8 -5] [-3 -1] [0 3] [5 9]]
      setup-row  3 blue [[-7 -4] [-2 2] [4 8]]
      setup-row  2 blue [[-6 -1] [1 4] [5 7]]
      setup-row  1 blue [[-3 3] [8 9]]
      setup-row  0 blue [[-8 -7] [9 10]]
      setup-row -1 blue [[-9 -8]]
      setup-row -2 blue [[-8 -7] [-3 0] [1 3]]
      setup-row -3 blue [[-4 -1] [2 4] [6 8]]
      setup-row -4 blue [[-7 -1] [1 9]]
      setup-row -5 blue [[-8 10]]
      setup-row -6 blue [[-9 0] [1 10]]

      setup-col 10 blue [[-6 5]]
      setup-col  9 blue [[-4 -1] [1 4]]
      setup-col  8 blue [[-3 1] [2 3]]
      setup-col  7 blue [[-2 2]]
      setup-col  6 blue [[-4 1]]
      setup-col  5 blue [[-3 2]]
      setup-col  4 blue [[-3 2] [3 5]]
      setup-col  3 blue [[-2 1] [2 4]]
      setup-col  1 blue [[-4 -2]]
      setup-col  0 blue [[-5 -2] [1 3]]
      setup-col -1 blue [[-4 -3] [4 5]]                  
      setup-col -3 blue [[-2 1] [2 4]]
      setup-col -4 blue [[-3 2] [3 5]]
      setup-col -5 blue [[-4 1]]
      setup-col -6 blue [[-3 2]]
      setup-col -7 blue [[-4 -3] [-2 0] [1 3]]
      setup-col -8 blue [[-5 -2] [0 4]]
      setup-col -9 blue [[-6 5]]
  ]
end

to setup-chevening-house-maze
  ca ;; clear everything
  
  ask patches
  [
      if (pxcor >= min-pxcor and pxcor <= max-pxcor and
        pycor >= min-pycor and pycor <= max-pycor)
          [set pcolor white] ;; make background full of white patches
      
      setup-row  12 blue [[-11 12]]
      setup-row  11 blue [[-10 11]]
      setup-row  10 blue [[-9 10]]
      setup-row   9 blue [[-8 0] [1 9]]
      setup-row   8 blue [[-7 -1] [2 8]]
      setup-row   7 blue [[-6 0] [1 7]]
      setup-row   6 blue [[-5 0] [1 6]]
      setup-row   5 blue [[-4 -1] [2 5]]
      setup-row   4 blue [[-3 0] [1 4]]
      setup-row   3 blue [[-2 3]]
      setup-row   2 blue [[-1 2]]
      setup-row   1 blue [[-9 -5] [-4 -2] [3 5] [6 8] [10 11]]
      setup-row  -0 blue [[-9 -7] [3 5] [6 10]]
      setup-row  -1 blue [[-1 0] [1 2] [7 9]]
      setup-row  -2 blue [[-2 0] [1 3]]
      setup-row  -3 blue [[-3 -1] [1 4]]
      setup-row  -4 blue [[-4 -2] [2 5]]
      setup-row  -5 blue [[-5 -3] [2 6]]
      setup-row  -6 blue [[-6 -3] [1 7]]
      setup-row  -7 blue [[-7 -2] [0 8]]
      setup-row  -8 blue [[-8 1] [3 9]]
      setup-row  -9 blue [[-9 0] [3 10]]
      setup-row -10 blue [[-10 -1] [2 11]]
      setup-row -11 blue [[-11 0] [1 12]]

      setup-col  12 blue [[-11 12]]
      setup-col  11 blue [[-10 1] [2 11]]
      setup-col  10 blue [[-9 0] [1 10]]
      setup-col   9 blue [[-8 -1] [0 9]]
      setup-col   8 blue [[-7 -2] [1 8]]
      setup-col   7 blue [[-6 -1] [2 7]]
      setup-col   6 blue [[-5 0] [1 6]]
      setup-col   5 blue [[-4 0] [1 5]]
      setup-col   4 blue [[-3 -1] [2 4]]
      setup-col   3 blue [[-2 0] [1 3]]
      setup-col   2 blue [[-10 -7] [-1 2]]
      setup-col   1 blue [[-11 -8] [-6 -1] [4 6] [7 9]]
      setup-col   0 blue [[-11 -9] [-7 -1] [4 6] [7 9]]
      setup-col  -1 blue [[-8 -3] [-1 2]]
      setup-col  -2 blue [[-7 -4] [-2 0] [1 3]]
      setup-col  -3 blue [[-3 1] [2 4]]
      setup-col  -4 blue [[-4 0] [1 5]]
      setup-col  -5 blue [[-5 6]]
      setup-col  -6 blue [[-6 0] [2 7]]
      setup-col  -7 blue [[-7 0] [1 8]]
      setup-col  -8 blue [[-8 -1] [2 9]]
      setup-col  -9 blue [[-9 0] [1 10]]
      setup-col -10 blue [[-10 11]]
      setup-col -11 blue [[-11 12]]
  ]
end

to go
  if search-completed
    [ ifelse [goal-type] of maze 0 = "Exit"
       [ user-message "Found the exit!" ]
       [ user-message "Made it to the centre of the maze!" ]
      stop ]
      
  if (count searchers = 0)
    [ user-message "Can't find any more paths to search!"
      stop ]

  ;; uninformed search strategies (blind search)
  if search-behaviour = "Breadth First Search"
    [ expand-breadth-first-search ]
  if search-behaviour = "Uniform Cost Search"
    [ expand-uniform-cost-search ]
  if search-behaviour = "Depth First Search"
    [ expand-depth-first-search ]
  if search-behaviour = "Multi-Agent Depth First Search"
    [ expand-MA-depth-first-search ]
  if search-behaviour = "Depth Limited Search"
    [ expand-depth-limited-search ]
  if search-behaviour = "Iterative Deepening Search"
    [ expand-iterative-deepening-search ]

  ;; informed search strategies
  if search-behaviour = "Greedy Best First Search"
    [ expand-greedy-best-first-search ]
  if search-behaviour = "A* Search"
    [ expand-A-star-search ]

  ;; local search strategies
  if search-behaviour = "Hill Climbing Search"
    [ expand-hill-climbing-search ]
  
  set search-time-step search-time-step + 1
  
  if (count searchers = 0)
    [ user-message "No more searchers! Abort!"
      stop ]
  
  tick
end

to-report wall? [angle dist]
  ;; reports if there is a wall ahead at dist.
  ;; note that angle may be positive or negative.  if angle is
  ;; positive, the turtle looks right.  if angle is negative,
  ;; the turtle looks left.
  let patch-color [pcolor] of patch-right-and-ahead angle dist
      
  report patch-color = blue or patch-color = sky ; blue if it is a wall, sky if it is the closed entrance
end

to-report wall-ahead? [dist]
  ;; reports whether there is a wall directly ahead at dist.
  let patch-color [pcolor] of patch-ahead dist
  
  report patch-color = blue or patch-color = sky ; blue if it is a wall, sky if it is the closed entrance
end

to-report wall-anywhere-ahead? [dist]
  ;; reports whether there is a wall directly ahead anywhere up to dist.

  let d 1
  let wall false
  while [d <= dist and not wall]
  [ if wall-ahead? d
      [ set wall true ]
    set d d + 1 ]
  report wall
end

to-report state [searcher-agent searcher-behaviour]
  ;; reports the state as a list of the current co-ordinates + heading of the searcher-agent plus the behaviour chosen 
  report (list [xcor] of searcher-agent [ycor] of searcher-agent [heading] of searcher-agent searcher-behaviour)  
end

to-report goal [xpos ypos]
;; returns true if reached the goal
  let goal-xpos [goal-x] of maze 0
  ifelse xpos >= goal-xpos and xpos <= goal-xpos + [goal-width] of maze 0 and
         ypos = [goal-y] of maze 0
     [ set search-completed true
       report true ]
 ;else
     [ report false ]
end

to-report goal-state [this-state]
;; returns true if the searcher agent has reached the goal

  let xpos (item 0 this-state)
  let ypos (item 1 this-state) 
  report goal xpos ypos
end

to behaviour-move-forward [searcher-agent]
; move forward behaviour defined by the move-forward-behaviour chooser
  if (move-forward-behaviour = "Move forward n steps unless hit wall")
    [ behaviour-move-forward1 searcher-agent move-forward-step-amount ]
  if (move-forward-behaviour = "Move forward until hit wall")
    [ behaviour-move-forward2 searcher-agent ]
  if (move-forward-behaviour = "Move forward until any open space")
    [ behaviour-move-forward3 searcher-agent ]
  if (move-forward-behaviour = "Move forward until open space after side wall")
    [ behaviour-move-forward4 searcher-agent ]  
end

to behaviour-move-forward-after-turning [searcher-agent]
; move forward behaviour defined by the move-forward-behaviour-after-turning chooser
  if (move-forward-behaviour-after-turning = "Move forward n steps unless hit wall")
    [ behaviour-move-forward1 searcher-agent move-forward-step-amount ]
  if (move-forward-behaviour-after-turning = "Move forward until hit wall")
    [ behaviour-move-forward2 searcher-agent ]
  if (move-forward-behaviour-after-turning = "Move forward until any open space")
    [ behaviour-move-forward3 searcher-agent ]
  if (move-forward-behaviour-after-turning = "Move forward until open space after side wall")
    [ behaviour-move-forward4 searcher-agent ]  
end

to behaviour-move-forward1 [searcher-agent dist]
; move forward dist steps unless there is a wall in the way
  ask searcher-agent
  [
    ifelse (wall-anywhere-ahead? dist)
      [ die ] ; kill off searcher agents that don't move
              ; (to prevent infinite loops in A* & uniform-cost searches and save on needless search)
      [ ; move forward
        if goal xcor ycor
          [ stop ]
        fd dist
        set path-cost path-cost + dist
      ]
  ]
end

to behaviour-move-forward2 [searcher-agent]
; move forward until there is a wall in the way
  let did-move false
  
  ask searcher-agent
  [
    while [not wall-ahead? 1]
      [ ; move forward
        set did-move true
        if goal xcor ycor
          [ stop ]
        fd 1
        set path-cost path-cost + 1
      ]
    if not did-move
      [ die ] ; kill off searcher agents that don't move
              ; (to prevent infinite loops and save on needless search)
  ]
end

to behaviour-move-forward3 [searcher-agent]
; move forward until there is a wall in the way or there is open space on either side
  ask searcher-agent
  [
    let open-space false
    let did-move false
    while [not wall-ahead? 1 and not open-space]
      [ ; move forward
        set did-move true
        if goal xcor ycor
          [ stop ]
        fd 1
        set path-cost path-cost + 1
        if not (wall? -90 1) and not (wall? 90 1)
          [ set open-space true ] 
      ]
    if not did-move
      [ die ] ; kill off searcher agents that don't move
              ; (to prevent infinite loops and save on needless search)
  ]
end

to behaviour-move-forward4 [searcher-agent]
; move forward until there is a wall in the way or there is open space on either side
; but will only stop if it has been following a wall on either side
  ask searcher-agent
  [
    let open-space false
    let by-wall false
    let did-move false
    while [not wall-ahead? 1 and not open-space]
      [ ; move forward
        set did-move true
        if goal xcor ycor
          [ stop ]
        fd 1
        set path-cost path-cost + 1
        if (wall? -90 1) or (wall? 90 1)
          [ set by-wall true]
        if not (wall? -90 1) and not (wall? 90 1) and by-wall
          [ set open-space true ] 
      ]
    if not did-move
      [ die ] ; kill off searcher agents that don't move
              ; (to prevent infinite loops and save on needless search)
  ]
end

to behaviour-turn-left [searcher-agent]
; turn left 90 degrees.
  ask searcher-agent
  [ lt 90 ]
end

to behaviour-turn-right [searcher-agent]
; turn right 90 degrees.
  ask searcher-agent
  [ rt 90 ]
end

to behaviour-turn-left-then-forward [searcher-agent]
; turn left 90 degrees
  behaviour-turn-left searcher-agent
  behaviour-move-forward-after-turning searcher-agent
end

to behaviour-turn-right-then-forward [searcher-agent]
; turn right 90 degrees.
  behaviour-turn-right searcher-agent
  behaviour-move-forward-after-turning searcher-agent
end

to behaviour-follow-left-wall [searcher-agent]
; follow wall on the left until wall ends or hits another
  ask searcher-agent
  [
    while [(wall? -90 1) and not (wall-ahead? 1)]
      [ ; move forward
        fd 1
        set path-cost path-cost + 1
      ]
  ]
end

to behaviour-follow-right-wall [searcher-agent]
; follow wall on the right until wall ends or hits another
  ask searcher-agent
  [
    while [(wall? 90 1) and not (wall-ahead? 1)]
      [ ; move forward
        fd 1
        set path-cost path-cost + 1
      ]
  ]
end

to expand-breadth-first-search
  ; expand the search using breadth first search by adding more searcher-agents

  ask searchers
  [
    expand-paths (searcher who)
    die ; prune away parent agents used at lower time-steps
  ]
end

to expand-uniform-cost-search
  ; expand the search by following the lowest cost paths first

  set path-found false
  ask first (sort-by [[path-cost] of ?1 < [path-cost] of ?2] searchers)
  [
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-depth-first-search
  ; expand the search by following longest path
  ; do not follow the other paths in parallel; just follow one of them

  ask first (sort-by [[time] of ?1 > [time] of ?2] searchers)
  [       
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-MA-depth-first-search
  ; expand the search by following longest path
  ; follow the other paths in parallel; but do not follow all of them

  set path-found false
  let agents ifelse-value (count searchers < max-agents-to-expand)
    [ count searchers ]
    [ max-agents-to-expand ]
  ask n-of agents turtle-set (sort-by [[time] of ?1 > [time] of ?2] searchers)
  [      
    expand-paths (searcher who) 
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-depth-limited-search
  ; expand the search by following longest path
  ; do not follow the other paths in parallel; just follow one of them
  ; limit the search depth to max-depth

  expand-depth-limited-search-1 max-depth
end

to expand-depth-limited-search-1 [maxdepth]
  ; expand the search by following longest path
  ; do not follow the other paths in parallel; just follow one of them
  ; limit the search depth to maxdepth

  set path-found false
  ask first (sort-by [[time] of ?1 > [time] of ?2] searchers)
  [
    if (time <= maxdepth)     
      [ expand-paths (searcher who) ] ; only expand if not exceeded depth limit
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-iterative-deepening-search
  ; expand the search by iteratively performing depth-limited-search
  ; to increasingly greater depths
  set IDS-depth 1
  set person-colour magenta
  while [ IDS-depth <= max-depth ]
  [
    while [count searchers != 0]
      [ expand-depth-limited-search-1 IDS-depth ]
    set IDS-depth IDS-depth + 1
    ; change colour of person to reflect the increased maxdepth of search
    if (person-colour > 5)
      [ set person-colour person-colour - 5 ]
    create-searchers 1 [ setup-searcher ]
  ]
  set person-colour magenta ; restore default colour
end

to expand-greedy-best-first-search
  ; expand the search by adding more searcher-agents

  set path-found false
  ask first (sort-by [[estimated-cost] of ?1 < [estimated-cost] of ?2] searchers)
  [     
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-A-star-search
  ; expand the search by adding more searcher-agents

  set path-found false
  ask first (sort-by [[estimated-cost + path-cost] of ?1 < [estimated-cost + path-cost] of ?2] searchers)
  [
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-hill-climbing-search
  ; expand the search using hill-climbing search method
  
  set path-found false
  ask searchers ; there should always be only one searcher at this stage
  [ expand-paths (searcher who) ] ; look to see where I can go next
  
  foreach but-first (sort-by [[height] of ?1 < [height] of ?2] searchers)
  [ ; kill off all but the first
    ask ? [ die ]; only let the best of the new searchers continue
  ]
end

to-report count-active-searchers
  report count searchers
end

to-report maximum-active-searchers
  let c count searchers
  if (c > max-active-searchers)
  [ set max-active-searchers c]
  
  report max-active-searchers
end

to-report total-searchers
  report searchers-used
end

to-report euclidean-distance [x y x1 y1]
;; reports the Euclidean distance between points (x,y) and (x1,y1)
  report sqrt ((x1 - x) ^ 2  + (y1 - y) ^ 2)
end

to-report manhattan-distance [x y x1 y1]
;; reports the Manhattan distance between points (x,y) and (x1,y1)
  report abs (x1 - x) + abs (y1 - y)
end

to-report heuristic-function [x y]
;; reports the heuristic evaluation function value

  let goalx [goal-x] of maze 0
  let goaly [goal-y] of maze 0

  if (heuristic = "Zero")
    [ report 0 ]
  if (heuristic = "Euclidean distance")
    [ report euclidean-distance x y goalx goaly ]
  if (heuristic = "Manhattan distance")
    [ report manhattan-distance x y goalx goaly ]
  report 0 ; default case; avoids a runtime error if no heuristic matches
end

to-report hill-height [x y]
;; reports the "height" of the current search position;
;; a height of zero means the goal has been reached

  report heuristic-function x y
end

to expand-path [searcher-agent searcher-behaviour]
  ; the searcher-agent creates a new searcher-agent that draws a path in the maze from its
  ; current position based on the searcher-behaviour
  
  if not search-completed
    [ ; create a new path by creating an agent to search it
      ; check to see if the path has already been visited
      let s (state searcher-agent searcher-behaviour)
      if allow-revisited-states? or not member? s visited-states
        [
          set path-found true
          if not allow-revisited-states?
              [set visited-states fput s visited-states] ; add to front of visited-states set

          hatch-searchers 1
            [ ; clone searcher
              set searchers-used searchers-used + 1
              set size 3
              set pen-size 2
              set color person-colour
              set shape "person"

              set xcor [xcor] of searcher-agent
              set ycor [ycor] of searcher-agent
              set heading [heading] of searcher-agent
              set time [time] of searcher-agent + 1
              set path-cost [path-cost] of searcher-agent ; the behaviour procedures below add the cost of the move
              set estimated-cost (heuristic-function xcor ycor)
              pen-down

              if (searcher-behaviour = "Move forward")      [behaviour-move-forward self]
              if (searcher-behaviour = "Move left")         [behaviour-turn-left-then-forward self]
              if (searcher-behaviour = "Move right")        [behaviour-turn-right-then-forward self]
             ;if (searcher-behaviour = "Follow left wall")  [behaviour-follow-left-wall self]
             ;if (searcher-behaviour = "Follow right wall") [behaviour-follow-right-wall self]

              set height hill-height xcor ycor
              
              stamp
            ]
        ]

      if goal-state s
        [ set search-completed true ]
    ]
end

to expand-paths [searcher-agent]
;; expands all the possible paths for the searcher-agent

  expand-path searcher-agent "Move forward"      ; new searcher agent moves forward until it hits a wall
  expand-path searcher-agent "Move left"         ; new searcher agent turns left then moves forward
  expand-path searcher-agent "Move right"        ; new searcher agent turns right then moves forward
 ;expand-path searcher-agent "Follow left wall"  ; new searcher agent follows wall on the left if there is one
 ;expand-path searcher-agent "Follow right wall" ; new searcher agent follows wall on the right if there is one
end
;
; Copyright 2010 by William John Teahan.  All rights reserved.
;
; Permission to use, modify or redistribute this model is hereby granted,
; provided that both of the following requirements are followed:
; a) this copyright notice is included.
; b) this model will not be redistributed for profit without permission
;    from William John Teahan.
; Contact William John Teahan for appropriate licenses for redistribution for
; profit.
;
; To refer to this model in publications, please use:
;
; Teahan, W. J. (2010).  Searching Mazes NetLogo model.
;   Artificial Intelligence. Ventus Publishing Aps.
;