Computer Science 315
Artificial Intelligence

Assignment 3 - AIMA Chapter 3: Search, Part Deux

I'm crazy.
Allow me to a-maze thee.
They say I'm ugly but it just don't faze me.


Digital Underground, "The Humpty Dance" (1990)

Due Wednesday, 10 October

In this problem set, you will use your search code from the previous assignment to explore mazes with Quagents. This will require the following additional steps:
  1. Download mazetest.py and mazebot.py.

  2. Modify your Search.treeSearch method to keep a list of visited nodes, adding each node as you remove it from the fringe. Modify DFSearch.insertAll to insert unvisited nodes before visited ones. (You can actually put this step off at first; running without it will show you why it's necessary.)
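Step 2 might look something like the sketch below. This is only an illustration: the exact signatures of treeSearch and insertAll depend on your own search.py and dfsearch.py from the last assignment, and the assumption here is that successorFn returns (action, state) pairs and that the fringe is a list popped from the end.

```python
# Sketch of the visited-list change. Method names come from the assignment;
# the signatures are assumptions and may differ from your own code.

class Search:
    def treeSearch(self, problem, fringe):
        visited = []                           # states already taken off the fringe
        fringe.append(problem.initialState())
        while fringe:
            node = fringe.pop()                # depth-first: pop from the end
            if node in visited:
                continue
            visited.append(node)               # record it as we remove it
            if problem.goalTest(node):
                return node
            states = [s for (a, s) in problem.successorFn(node)]
            self.insertAll(states, fringe, visited)
        return None

class DFSearch(Search):
    def insertAll(self, states, fringe, visited):
        # Unvisited states go on last, so pop() reaches them first and the
        # search prefers unexplored cells over ones it has already seen.
        old = [s for s in states if s in visited]
        new = [s for s in states if s not in visited]
        fringe.extend(old + new)
```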

  3. Write a MazeAgent class to produce the states of the maze by exploring it. This class should be defined in a file called maze_agent.py. MazeAgent is essentially a connector between the Problem class and the MazeBot class that you will use to access the Quagent bot indirectly. Your MazeAgent will implement all the methods in the Problem class, with stepCost just returning zero, because it's too complicated to build mazes that have costs associated with paths.[1] The MazeAgent constructor should start like this:
      def __init__(self, bot):      # bot is a MazeBot object
    
    The main challenge will be determining the bot's state and successors of that state. The MazeBot class gives you very simple interaction with the bot: you can see ahead of you, turn to the left, run ahead into a wall, and check for gold, but you otherwise don't know anything about where you are or which way you're facing. So your MazeAgent will have to keep track of those things itself, as well as remembering the state it was in when it saw the gold – because finding the gold is the goal of the game. A simple solution is to store the agent's state and bearing as arbitrary integers, and build a table of successors of each state as you explore the maze. You'll want the bot to start in a corner (think about why). He doesn't necessarily do that, so you should get that part working before you tackle the successor function.
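One way to keep the suggested bookkeeping is a small helper class like the sketch below. None of these names come from mazebot.py; they are illustrations of the "arbitrary integers plus a successor table" idea, with the bearing advanced modulo 4 on each left turn.

```python
# Illustrative bookkeeping for a MazeAgent; names here are assumptions,
# not part of mazebot.py or the Problem class.

class MazeMap:
    N_BEARINGS = 4              # the bot faces one of four directions

    def __init__(self):
        self.bearing = 0        # which way we believe the bot is facing (0-3)
        self.state = 0          # arbitrary integer id of the cell we're in
        self.next_id = 1        # next unused cell id
        self.successors = {}    # (state, action) -> state, filled as we explore
        self.gold_state = None  # set when the bot reports seeing the gold

    def record_left_turn(self):
        # MazeBot can only turn left, so a turn steps the bearing one
        # position around the compass.
        self.bearing = (self.bearing + 1) % self.N_BEARINGS
```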

    To get started, take the "fun-first" approach I've described, and write stub methods for MazeAgent: have goalTest return False, initialState return zero, successorFn return a list containing the pair (None, None), and takeAction do nothing. Then you can test your code by running
      python mazetest.py 2
    
    for a 2x2 maze. You will find yourself hovering in view of the bot, so you can navigate around without blocking him.[3] You'll lose sight of him when you run through a wall, so it takes some skill and patience to track him. The currently supported maze sizes are 2x2, 3x3, and 4x4.
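The stub version of MazeAgent described above might look like this. The Problem-style method names follow the assignment text; stepCost takes variable arguments here only because its exact signature depends on your Problem class.

```python
# maze_agent.py -- stub version for the "fun-first" test run.

class MazeAgent:
    def __init__(self, bot):        # bot is a MazeBot object
        self.bot = bot

    def initialState(self):
        return 0                    # arbitrary id for the starting cell

    def goalTest(self, state):
        return False                # no gold found yet

    def successorFn(self, state):
        return [(None, None)]       # one dummy (action, state) pair

    def stepCost(self, *args):
        return 0                    # paths carry no cost in these mazes

    def takeAction(self, action):
        pass                        # does nothing yet
```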

    Because the Quagents program sends so many messages to the terminal, you may want to squelch them:
      python mazetest.py 2 1> /dev/null
    
    This says to send the standard-output stream (1>) to the "black hole" file /dev/null. So how do you then put debugging printouts into your test code? Just write a temporary debug function to send output to the standard-error stream, in any class where you want to do this:
    import sys
    
    # ...
    
    # this goes above any code that calls it
    def debug(s):
        sys.stderr.write(s + '\n')
        
    # ...
    
    debug(str(fringe))
    
    Of course, you should remove all debugging printout from your final code, or I'll take off points!

    What to Turn In

    Send me an email with maze_agent.py, search.py, and dfsearch.py attached. Your maze_agent.py should contain comments explaining how your successor function works.


    [1] One would have to build a maze with multiple paths to the goal, and then associate a cost with each path – perhaps total distance, or a "negative cost" obtained by picking up good stuff like tofu along the way. If you want to earn some extra credit by building your own mazes, ask me how!

    [2] Recall that the Euclidean distance between two points (x1, y1) and (x2, y2) is defined as √((x1 − x2)² + (y1 − y2)²).
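In Python, this formula is available directly as math.hypot:

```python
import math

def euclidean(p1, p2):
    # sqrt((x1 - x2)^2 + (y1 - y2)^2)
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

euclidean((0, 0), (3, 4))   # -> 5.0
```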

    [3] This is done by turning clipping off. Thanks to Sam for the tip!