Missionaries and Cannibals NetLogo Model
Produced for the book series "Artificial Intelligence";
Author: W. J. Teahan; Publisher: Ventus Publishing Aps, Denmark.
view/download model file: Missionaries-and-Cannibals.nlogo
WHAT IS IT?
This model applies standard search algorithms to the classic search problem known as Missionaries and Cannibals. In this toy problem, there are 3 missionaries and 3 cannibals. They have arrived together on one side of a river and they all wish to cross to the other side. Luckily, a boat is available for crossing the river. However, there are two catches to the problem. The first catch is that only two people can fit in the boat at any one time. The second catch is that the cannibals must never outnumber the missionaries on either side of the river. If they do, they will overpower the missionaries and then eat them.
The problem is the following: Is it possible to get all the missionaries and cannibals safely across to the other side of the river? To solve this problem, search needs to be employed to try out the different scenarios. This model implements some of the classic search algorithms and these can be applied to this toy problem to see how the different search strategies perform against each other.
WHAT IS ITS PURPOSE?
The purpose of this model is to show how some of the classic search algorithms such as breadth-first search and depth-first search can be implemented in NetLogo, and also how they can then be applied to solving a classic toy problem in order to compare how they perform.
HOW IT WORKS
The model implements the search algorithms in a novel way by making use of an agent-oriented approach that is relatively easy to implement in NetLogo. Rather than use a queue data structure to implement the classic search algorithms (the standard solution; see, for example, Russell & Norvig's AI textbook), the model adopts a purely agent-oriented solution where the information is distributed amongst NetLogo agents. These agents conduct the search by transferring information to other agents that continue the search. Hence, no explicit queue data structure separate from the search itself is needed (technically, the information is still stored in a queue behind the scenes, but this is hidden behind the top-level commands that NetLogo provides).
The hope is that this "queueless" agent-oriented approach provides a more intuitive solution that makes it easier to grasp how the search strategies work. It uses a situated, embodied, first-person design perspective, arguably making it easier to understand the essential differences between the search strategies.
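The agent-expansion idea can be illustrated with a minimal, hypothetical Python sketch (not the NetLogo code itself): each "searcher" carries the path it has followed so far; at every step it hatches one child per successor state and then dies, so the population of agents doubles as the breadth-first frontier.

```python
# Hypothetical analogue of the model's hatch/die expansion: the agent
# population itself plays the role of the FIFO frontier of a BFS.

def agent_bfs(start, successors, is_goal, max_steps=20):
    population = [[start]]  # one path per live searcher agent
    for _ in range(max_steps):
        for path in population:
            if is_goal(path[-1]):
                return path
        # every agent expands (hatch) and is then removed (die)
        population = [path + [s]
                      for path in population
                      for s in successors(path[-1])]
        if not population:
            return None  # all searchers have died without reaching the goal
    return None

# toy state space: each integer state n leads to states 2n and 2n+1
path = agent_bfs(1,
                 lambda n: [2 * n, 2 * n + 1] if n < 16 else [],
                 lambda n: n == 13)
```

Because the whole population is replaced level by level, the expansion order is exactly that of breadth-first search, without an explicit queue appearing in the code.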
The breed of searcher turtle agents is used for the searchers that explore the state space trying to reach the goal. These searcher agents maintain information about the current state of the search, such as the time taken, the path cost and the estimated cost. Each searcher agent expands the search by seeing if the following actions are possible:
- two missionaries get in the row-boat and row to the other side;
- two cannibals get in the row-boat and row to the other side;
- one missionary and one cannibal get in the row-boat and row to the other side;
- just one missionary gets in the row-boat and rows to the other side;
- just one cannibal gets in the row-boat and rows to the other side.
Note that the states of the search for this problem can be represented by a 3-tuple:
(#missionaries on start side of river, #cannibals on start side of river, #boats on start side of river)
Hence, the start state is represented by the tuple (3, 3, 1) and the goal state by (0, 0, 0).
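The 3-tuple state space and the five boat moves listed above can be sketched in Python as follows (an illustrative analogue, not the NetLogo code; the `safe` and `successors` helpers are assumptions based on the rules described above):

```python
# Sketch of the (m, c, b) state space: missionaries, cannibals and boats
# on the start side of the river, searched breadth-first.
from collections import deque

def safe(m, c):
    # missionaries are never outnumbered on either bank (3 of each in total)
    return (m == 0 or m >= c) and (m == 3 or 3 - m >= 3 - c)

MOVES = [(2, 0), (0, 2), (1, 1), (1, 0), (0, 1)]  # the five boat loads

def successors(state):
    m, c, b = state
    sign = -1 if b == 1 else 1  # the boat leaves the bank it is on
    result = []
    for dm, dc in MOVES:
        m2, c2 = m + sign * dm, c + sign * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2):
            result.append((m2, c2, 1 - b))
    return result

def bfs(start=(3, 3, 1), goal=(0, 0, 0)):
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for s in successors(path[-1]):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None

solution = bfs()
```

Running this finds a shortest solution of 11 crossings (12 states including the start), matching the animated solution in the model.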
In the Interface, a graph using parallel co-ordinates is used to visualise the search as it proceeds. An animation of one possible solution to the problem is also provided below the graph. The graph is updated as the animated solution proceeds.
HOW TO USE IT
To initialise the search, press the setup button in the Interface. A person shape will appear at the start state (3, 3, 1) at the top left of the graph. The goal state is at the bottom of the graph, so a rough indication of whether a search is making progress is how far it has moved down the graph.
To start the search, either press the go-once button to make the search proceed one step at a time, or press the go-forever button to make the search proceed continuously until it reaches the goal state or some other termination criterion is satisfied (e.g. it reaches the right side of the graph without finding a solution).
Pressing the go-animation button will start the animation and simultaneously show the path taken in the graph, drawn in red.
The model's Interface buttons are defined as follows:
- setup: This will clear the environment and all variables, reset the animation and redraw the graph.
- go-once: This will make the search proceed one step at a time.
- go-forever: This will make the search proceed continuously until it has reached the goal state or it is deemed to be unsuccessful.
- go-animation: This will start the animation shown at the middle bottom of the Interface.
The model's Interface choosers, sliders and switch are defined as follows:
- search-behaviour: This specifies what search strategy the agents should employ when performing the search. A full description of the different search strategies can be found in Chapter 8 of Volume 1 of the Artificial Intelligence book (see reference at the bottom of this Information).
- heuristic: This specifies the heuristic to be used for the Informed searches - Greedy Best First Search and A* Search.
- max-depth: This sets the maximum depth of the search.
- max-agents-to-expand: This sets the maximum number of agents to expand the search each step. It is only used by the Multi-Agent Depth First Search.
- allow-revisited-states?: This is a flag that, when set to On, allows the search to proceed to already visited states.
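As a rough guide, the heuristic options offered by the heuristic chooser can be expressed in Python as follows (hypothetical helper names; the formulas follow the model's heuristic-function reporter, with the goal at the point (0, 0, 0)):

```python
# Python versions of the heuristic options for a state (m, c, b):
# missionaries, cannibals and boats on the start side of the river.
import math

def people_on_left(m, c, b):
    return m + c                   # "People on the left side"

def min_boat_trips(m, c, b):
    return (m + c) / 2             # the boat holds at most two people

def euclidean(m, c, b):
    # straight-line distance from (m, c, b) to the goal (0, 0, 0)
    return math.sqrt(m ** 2 + c ** 2 + b ** 2)

def manhattan(m, c, b):
    # all coordinates are non-negative, so no abs() is needed here
    return m + c + b
```

The "Zero" option simply reports 0 for every state, which reduces Greedy Best First Search to an arbitrary-order search and A* Search to Uniform Cost Search.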
The model's Interface monitors are defined as follows:
- Active Searcher Agents: This shows the current number of active searcher agents still participating in the search.
- Maximum Active Searcher Agents: This shows the maximum value that the Active Searcher Agents monitor has achieved throughout the search.
- Total Searcher Agents: This shows the total number of searcher agents that the search has used.
- IDS Depth: This is the current depth during the execution of the Iterative Deepening Search.
THINGS TO NOTICE
Notice the effect of turning the flag allow-revisited-states? On and Off. Why is turning it Off so effective at dramatically reducing the size of the search for the different search behaviours?
Notice that some of the searches, if repeated with the same settings, end up in different places (for example, try Greedy Best First Search several times using the heuristic "Min. no. boat trips needed" and with the flag allow-revisited-states? set to Off). Why is this?
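One way to see why pruning revisited states helps so much is to compare the size of the state space with the number of paths an unpruned search can follow. The sketch below (hypothetical Python, using the rules described earlier in this Information) counts both:

```python
# The whole problem has only a handful of legal states, while the number
# of depth-d paths (what an unpruned search explores) grows exponentially.

def safe(m, c):
    return (m == 0 or m >= c) and (m == 3 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state
    sign = -1 if b == 1 else 1
    result = []
    for dm, dc in [(2, 0), (0, 2), (1, 1), (1, 0), (0, 1)]:
        m2, c2 = m + sign * dm, c + sign * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2):
            result.append((m2, c2, 1 - b))
    return result

# every legal (m, c, b) combination: only 20 states in total
all_safe = [(m, c, b) for m in range(4) for c in range(4)
            for b in (0, 1) if safe(m, c)]

def count_paths(state, depth):
    # number of length-`depth` paths when revisits are allowed
    if depth == 0:
        return 1
    return sum(count_paths(s, depth - 1) for s in successors(state))
```

With revisits allowed, the number of depth-11 paths from (3, 3, 1) already dwarfs the 20 legal states, so almost all of an unpruned search's effort is spent re-exploring states it has seen before.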
THINGS TO TRY
Which search behaviour seems to be the best at this problem? Which seems to be the worst? Try out each of the different search behaviours to see which one is the most effective. Find out how well each search algorithm performs against the four evaluation criteria: time complexity, space complexity, optimality and completeness. Relate the theory to what happens in practice.
Try figuring out how many solutions there are to the problem (the one shown in the animation is not the only one).
Try out the different heuristics for the informed searches (Greedy Best First Search and A* Search). Do they have any effect on how effective the search is?
EXTENDING THE MODEL
Try adding other search algorithms, or adding your own variations to the ones implemented in the model.
Note the use of the hatch and die commands to clone child searchers and terminate parent searchers, thus avoiding the need for a separate queue data structure to maintain the information during the search. Note also how the variation in the ask command (which agents are asked to expand, and in what order) defines the different searches.
For other search problems, see the Searching For Kevin Bacon model, and the Searching Mazes model. For an insight into the Manhattan distance and Euclidean distance metrics used as heuristics for the Informed searches, see the Manhattan Distance model.
CREDITS AND REFERENCES
This model was created by William John Teahan.
To refer to this model in publications, please use:
Teahan, W. J. (2010). Missionaries and Cannibals NetLogo model. Artificial Intelligence. Ventus Publishing Aps.
; Missionaries and Cannibals model.
;
; Shows how the Missionaries and Cannibals search problem works.
;
; Copyright 2010 William John Teahan. All Rights Reserved.

breed [missionaries missionary]
breed [cannibals cannibal]
breed [searchers searcher]

searchers-own [
  time           ; time step
  height         ; used for hill-climbing search
  path-cost      ; actual cost of search path so far (used for A-star search)
  estimated-cost ; estimated cost of cheapest path to goal
  m-count c-count r-count] ; #missionaries, #cannibals, #rowboats on left side of river

breed [axis-points axis-point]

globals [
  search-time-step     ; indicates what the current time step is during the search
  visited-states       ; indicates which states have already been visited - do not revisit them in that case
  search-completed     ; true when search is completed
  width-between-axes   ; horizontal gap used when drawing vertical axes
  path-found           ; true if we have found a path to expand; e.g. used by DFS to signal a dead-end
  max-active-searchers ; maximum number of active searchers
  searchers-used       ; total number of searcher agents used
  person-colour        ; colour of person shown doing the search in the visualisation
  IDS-depth ]          ; current depth limit for IDS (Iterative Deepening Search)

to setup
  ca ;; clear everything
  set search-time-step 0
  set visited-states []
  set search-completed false
  set searchers-used 1
  set max-active-searchers 1
  set width-between-axes 18
  set-default-shape turtles "person"
  set person-colour sky
  ;set-default-shape searchers "person"
  set-default-shape axis-points "line"

  let apos -80 ; position of animation
  create-missionaries 3 [set size 18 set color gray]
  create-cannibals 3 [set size 18 set color red]
  ask missionary 0 [set xcor -90 set ycor apos]
  ask missionary 1 [set xcor -82 set ycor apos]
  ask missionary 2 [set xcor -74 set ycor apos]
  ask cannibal 3 [set xcor -90 set ycor apos - 20]
  ask cannibal 4 [set xcor -82 set ycor apos - 20]
  ask cannibal 5 [set xcor -74 set ycor apos - 20]
  draw-rowboat -48 -90

  create-searchers 1 [
    setup-searcher ]

  ask patches [
    set pcolor white ;; make background full of white patches
    if (pxcor >= -62 and pxcor <= 62 and pycor >= apos - 40 and pycor <= apos + 15)
      [set pcolor blue] ;; draws the "river" in blue
    if (pxcor >= -65 and pxcor <= -63) and (pycor >= apos - 40 and pycor <= apos + 15)
      [set pcolor brown] ;; draws the left "bank" in brown
    if (pxcor >= 63 and pxcor <= 65) and (pycor >= apos - 40 and pycor <= apos + 15)
      [set pcolor brown] ;; draws the right "bank" in brown

    ; draw the axes
    let xpos -140
    let t 0
    repeat max-depth + 1 [
      if (pxcor = xpos and pycor >= (- 45) and pycor <= 110)
        [set pcolor black] ;; draws the axis
      if (pxcor = xpos + 4 and pycor = (- 50))
        [set plabel (word "t=" t) set plabel-color black] ;; draws its label
      set xpos xpos + width-between-axes
      set t t + 1 ] ]

  setup-axis-points -140
end

to setup-searcher
; Initialises the searcher agent.
  set xcor -140
  set ycor 110
  pen-down
  set color sky
  set size 8
  stamp
  set time 0
  set path-cost 0
  set estimated-cost 0
  set m-count 3
  set c-count 3
  set r-count 1
  set height hill-height m-count c-count r-count
  if not allow-revisited-states?
    [ set visited-states fput (state-number 3 3 1) visited-states ]
end

to animate-solution-path [agent m c r]
; draw the solution path one state at a time
  ask searcher agent [
    pen-down
    set time [time] of searcher agent + 1
    let xpos xcor + width-between-axes ; move to next time step i.e. next axis
    let ypos -45 + (state-number m c r) ; use the state (m,c,r) encoded as an integer
    setxy xpos ypos
    stamp
    set m-count m
    set c-count c
    set r-count r]
end

to go-animation
; performs the animation
  let left-to-right true
  let right-to-left false
  let agent 0

  create-searchers 1 [
    setup-searcher
    set agent who
    set color red ]

  cross-river left-to-right 4 5      ; (3,1,0) cannibal & cannibal -->
  animate-solution-path agent 3 1 0
  wait 1.0
  cross-river right-to-left nobody 4 ; (3,2,1) <-- cannibal
  animate-solution-path agent 3 2 1
  wait 1.0
  cross-river left-to-right 3 4      ; (3,0,0) cannibal & cannibal -->
  animate-solution-path agent 3 0 0
  wait 1.0
  cross-river right-to-left nobody 3 ; (3,1,1) <-- cannibal
  animate-solution-path agent 3 1 1
  wait 1.0
  cross-river left-to-right 1 2      ; (1,1,0) missionary & missionary -->
  animate-solution-path agent 1 1 0
  wait 1.0
  cross-river right-to-left 1 4      ; (2,2,1) <-- missionary & cannibal
  animate-solution-path agent 2 2 1
  wait 1.0
  cross-river left-to-right 0 1      ; (0,2,0) missionary & missionary -->
  animate-solution-path agent 0 2 0
  wait 1.0
  cross-river right-to-left nobody 5 ; (0,3,1) <-- cannibal
  animate-solution-path agent 0 3 1
  wait 1.0
  cross-river left-to-right 4 5      ; (0,1,0) cannibal & cannibal -->
  animate-solution-path agent 0 1 0
  wait 1.0
  cross-river right-to-left nobody 4 ; (0,2,1) <-- cannibal
  animate-solution-path agent 0 2 1
  wait 1.0
  cross-river left-to-right 3 4      ; (0,0,0) cannibal & cannibal -->
  animate-solution-path agent 0 0 0
end

to-report state-number [m c r]
;; reports the state (m, c, r) as a single integer
  report m * 40 + c * 10 + r * 5
end

to draw-rowboat [x y]
;; for drawing the rowing boat
  create-turtles 1 [set xcor x set ycor y set color lime set shape "canoe"
                    set size 35 set heading 90]
end

to move-rowboat [agent1 agent2 step]
;; moves the canoe and the agents along by step increments horizontally
;; if either agent1 or agent2 is nobody, then doesn't draw them
  let xpos [xcor] of turtle 6
  let ypos [ycor] of turtle 6
  if agent1 != nobody
    [ask turtle agent1 [set xcor xpos + step - 4 set ycor ypos + 6]]
  if agent2 != nobody
    [ask turtle agent2 [set xcor xpos + step + 4 set ycor ypos + 6]]
  ask turtle 6 [set xcor xcor + step]
  wait 0.1 ; draw canoe last so that it covers the agents; turtles sit "inside" the canoe
  wait 0.1 ; slow down the animation so that it is not too quick
end

to cross-river [left-to-right agent1 agent2]
;; draws the canoe crossing the river. If left-to-right, it will draw it from the left to the right,
;; otherwise it will draw it from the other direction.
  let step ifelse-value left-to-right [5] [-5] ; go right or left 5
  repeat 19 [move-rowboat agent1 agent2 step]

  ; now disembark the agents
  ; first find position for each agent on the bank
  let x-offset ifelse-value left-to-right [74] [-90] ; set x-offset to the left side of people on this bank
  foreach list agent1 agent2
    [if ? != nobody
      [ask turtle ? [set xcor x-offset + (who mod 3) * 8
                     set ycor ifelse-value (who < 3) [-80] [-100]]] ]
end

to setup-axis-points [xpos]
  let ypos 110
  let m 3 ; m for missionaries
  repeat 4 [
    let c 3 ; c for cannibals
    repeat 4 [
      let r 1 ; r for row-boat
      repeat 2 [
        create-axis-points 1 [
          set color black
          set size 6
          set heading 90
          set xcor xpos
          set ycor ypos
          set label (word "(" m "," c "," r ") ")
          set label-color blue]
        set ypos ypos - 5
        set r r - 1 ]
      set c c - 1 ]
    set m m - 1 ]
end

to go
  if search-completed [
    user-message "Search is completed!"
    stop ]

  ; uninformed search strategies (blind search)
  if search-behaviour = "Breadth First Search"
    [ expand-breadth-first-search ]
  if search-behaviour = "Uniform Cost Search"
    [ expand-uniform-cost-search ]
  if search-behaviour = "Depth First Search"
    [ expand-depth-first-search ]
  if search-behaviour = "Multi-Agent Depth First Search"
    [ expand-MA-depth-first-search ]
  if search-behaviour = "Depth Limited Search"
    [ expand-depth-limited-search ]
  if search-behaviour = "Iterative Deepening Search"
    [ expand-iterative-deepening-search ]
  ; informed search strategies
  if search-behaviour = "Greedy Best First Search"
    [ expand-greedy-best-first-search ]
  if search-behaviour = "A* Search"
    [ expand-A-star-search ]
  ; local search strategies
  if search-behaviour = "Hill Climbing Search"
    [ expand-hill-climbing-search ]

  set search-time-step search-time-step + 1
  if (count searchers = 0) [
    user-message "No more searchers! Abort!"
    stop ]
  tick
end

to expand-breadth-first-search
; expand the search by adding more searcher-agents
  ask searchers [
    expand-paths (searcher who)
    die ; prune away parent agents used at lower time-steps
  ]
end

to expand-uniform-cost-search
; expand the search using uniform cost search by adding more searcher-agents
  set path-found false
  ask first (sort-by [[path-cost] of ?1 < [path-cost] of ?2] searchers) [
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-depth-first-search
; expand the search by following longest path
; do not follow the other paths in parallel; just follow one of them
  set path-found false
  ask first (sort-by [[time] of ?1 > [time] of ?2] searchers) [
    if time >= max-depth [
      user-message "This search is taking too long! Abort!"
      stop ]
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
  if not path-found [
    user-message "Dead-end found. I have to back-track!"
  ]
end

to expand-MA-depth-first-search
; expand the search by following longest path
; follow the other paths in parallel; but do not follow all of them
  set path-found false
  let agents ifelse-value (count searchers < max-agents-to-expand)
    [ count searchers ]
    [ max-agents-to-expand ]
  ask n-of agents turtle-set (sort-by [[time] of ?1 > [time] of ?2] searchers) [
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
  if not path-found [
    user-message "Dead-end found. I have to back-track!"
  ]
end

to expand-depth-limited-search
; expand the search by following longest path
; do not follow the other paths in parallel; just follow one of them
; limit the search depth to max-depth - 1
  expand-depth-limited-search-1 (max-depth - 1)
end

to expand-depth-limited-search-1 [maxdepth]
; expand the search by following longest path
; do not follow the other paths in parallel; just follow one of them
; limit the search depth to maxdepth
  set path-found false
  ask first (sort-by [[time] of ?1 > [time] of ?2] searchers) [
    if (time <= maxdepth)
      [ expand-paths (searcher who) ] ; only expand if not exceeded depth limit
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-iterative-deepening-search
; expand the search by iteratively performing depth-limited-search
; to increasingly greater depths
  set IDS-depth 1
  set person-colour sky
  while [ IDS-depth <= max-depth ] [
    while [count searchers != 0]
      [ expand-depth-limited-search-1 IDS-depth ]
    set IDS-depth IDS-depth + 1
    ; change colour of person to reflect the increased maxdepth of search
    if (person-colour > 5)
      [ set person-colour person-colour - 5 ]
    create-searchers 1 [
      setup-searcher ]
  ]
  set person-colour sky ; restore default colour
end

to expand-greedy-best-first-search
; expand the search using greedy best first search method
  set path-found false
  ask first (sort-by [[estimated-cost] of ?1 < [estimated-cost] of ?2] searchers) [
    if time >= max-depth [
      user-message "This search is taking too long! Abort!"
      stop ]
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-A-star-search
; expand the search using A* search method
  set path-found false
  ask first (sort-by [[estimated-cost + path-cost] of ?1 <
                      [estimated-cost + path-cost] of ?2] searchers) [
    if time >= max-depth [
      user-message "This search is taking too long! Abort!"
      stop ]
    expand-paths (searcher who)
    die ; this agent has done its work; its children are now doing the work
  ]
end

to expand-hill-climbing-search
; expand the search using hill-climbing search method
  set path-found false
  ask searchers ; there should always be only one searcher at this stage
  [ ; look to see where I can go next
    if time >= max-depth [
      user-message "This search is taking too long! Abort!"
      stop ]
    expand-paths (searcher who)
  ]
  foreach but-first (sort-by [[height] of ?1 < [height] of ?2] searchers)
  [ ; kill off all but the first
    ask ? [ die ] ; only let the best of the new searchers continue
  ]
end

to-report count-active-searchers
  report count searchers
end

to-report maximum-active-searchers
  let c count searchers
  if (c > max-active-searchers)
    [ set max-active-searchers c ]
  report max-active-searchers
end

to-report total-searchers
  report searchers-used
end

to-report path-ok [m c r m-in-rowboat c-in-rowboat]
; report whether this search path option is valid and safe to expand
  let m1 0
  let c1 0
  ifelse r = 1 ; i.e. row-boat is on left-hand-side
    [ set m1 m - m-in-rowboat
      set c1 c - c-in-rowboat ]
    ;else row-boat is on right-hand-side
    [ set m1 m + m-in-rowboat
      set c1 c + c-in-rowboat ]
  ifelse m1 < 0 or c1 < 0 or m1 > 3 or c1 > 3
    [ report false ]
    ;else
    [ report (m1 = 0 or m1 = 3) or ((m1 >= c1) and (3 - m1 >= 3 - c1)) ]
      ; report true only if missionaries are not outnumbered on both sides of river
end

to-report euclidean-distance [x y z x1 y1 z1]
;; reports the euclidean distance between points (x,y,z) and (x1,y1,z1)
  report sqrt ((x1 - x) ^ 2 + (y1 - y) ^ 2 + (z1 - z) ^ 2)
end

to-report manhattan-distance [x y z x1 y1 z1]
;; reports the manhattan distance between points (x,y,z) and (x1,y1,z1)
  report abs (x1 - x) + abs (y1 - y) + abs (z1 - z)
end

to-report heuristic-function [mcount ccount rcount]
;; reports the heuristic evaluation function value
  if (heuristic = "Zero")
    [ report 0 ]
  if (heuristic = "People on the left side")
    [ report mcount + ccount ]
  if (heuristic = "Min. no. boat trips needed")
    [ report (mcount + ccount) / 2 ] ; 2 is the capacity of the boat
  if (heuristic = "Euclidean distance")
    [ report euclidean-distance mcount ccount rcount 0 0 0 ] ; goal is point (0,0,0)
  if (heuristic = "Manhattan distance")
    [ report manhattan-distance mcount ccount rcount 0 0 0 ] ; goal is point (0,0,0)
end

to-report hill-height [mcount ccount rcount]
;; reports the "height" of the current search position
;; the zero height is the goal; the maximum height is the start
  report mcount * 8 + ccount * 2 + rcount
end

to expand-path [searcher-agent m-in-rowboat c-in-rowboat]
; the searcher-agent creates a new searcher-agent that draws a path on the plot from its
; current position to (m,c,r) if it is a valid move
  let s 0
  let m [m-count] of searcher-agent
  let c [c-count] of searcher-agent
  let r [r-count] of searcher-agent

  if not search-completed and path-ok m c r m-in-rowboat c-in-rowboat
  [ ; path is ok, so create it by creating an agent to search it
    ifelse r = 1 ; i.e. row-boat is on left-hand-side
      [ set m m - m-in-rowboat
        set c c - c-in-rowboat ]
      ;else row-boat is on the right-hand-side
      [ set m m + m-in-rowboat
        set c c + c-in-rowboat ]
    set r ifelse-value (r = 1) [0] [1] ; toggle r

    ; check to see if the path has already been visited
    set s (state-number m c r) ; convert state to a single number to make it easier to search visited-set
    if allow-revisited-states? or not member? s visited-states
    [ set path-found true
      if not allow-revisited-states?
        [ set visited-states fput s visited-states ] ; add to front of visited-states
      hatch-searchers 1 ; clone searcher
      [ set searchers-used searchers-used + 1
        set xcor [xcor] of searcher-agent
        set ycor [ycor] of searcher-agent
        set m-count m
        set c-count c
        set r-count r
        set height hill-height m c r
        set time [time] of searcher-agent + 1
        set path-cost [path-cost] of searcher-agent + 1 ; 1 is the cost of rowing across the river once
        set estimated-cost (heuristic-function m c r)
        ;; type "path cost = " type path-cost type " estimated cost = " print estimated-cost
        set color sky
        set size 8
        pen-down
        let xpos xcor + 18 ; move to next time step i.e. next axis
        let ypos -45 + s ; use the state (m,c,r) encoded as an integer
        setxy xpos ypos
        stamp ] ]
    if s = 0 ; goal state
      [ set search-completed true ] ]
end

to expand-paths [searcher-agent]
;; expands all the possible paths for searcher-agent
  expand-path searcher-agent 2 0 ; two missionaries in row-boat
  expand-path searcher-agent 0 2 ; two cannibals in row-boat
  expand-path searcher-agent 1 1 ; one missionary and one cannibal in row-boat
  expand-path searcher-agent 1 0 ; one missionary in row-boat
  expand-path searcher-agent 0 1 ; one cannibal in row-boat
end

; Copyright 2010 by William John Teahan. All rights reserved.
;
; Permission to use, modify or redistribute this model is hereby granted,
; provided that both of the following requirements are followed:
; a) this copyright notice is included.
; b) this model will not be redistributed for profit without permission
;    from William John Teahan.
; Contact William John Teahan for appropriate licenses for redistribution
; for profit.
;
; To refer to this model in publications, please use:
;
; Teahan, W. J. (2010). Missionaries and Cannibals NetLogo model.
;   Artificial Intelligence. Ventus Publishing Aps.