This document provides an overview of a web crawler implemented as a case study. The crawler takes a starting web page and a target string to search for, and caps the crawl at 50 pages. It uses a graph to represent the pages and the links between them, and a hash set to track which pages have already been seen. The algorithm adds the starting page to the hash set and the graph, then repeatedly removes a page to search, checks its contents for the target string, adds any newly discovered links to the hash set and graph, and continues until every reachable page has been searched or the 50-page limit is reached.
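The loop described above can be sketched as a breadth-first crawl. This is a minimal illustration, not the case study's actual code: the `get_page` callback (which returns a page's text and outgoing links) and the return values are assumptions, standing in for real fetching and HTML parsing.

```python
from collections import deque

def crawl(start_page, target, get_page, max_pages=50):
    """Breadth-first crawl from start_page, searching each page for target.

    get_page(url) -> (text, links) is a hypothetical caller-supplied
    function; a real crawler would fetch and parse HTML here.
    Returns (found, graph): pages containing the target, and an
    adjacency list mapping each searched page to its outgoing links.
    """
    visited = {start_page}       # hash set of pages already queued
    graph = {}                   # graph of pages and their links
    queue = deque([start_page])  # pages waiting to be searched
    found = []

    while queue and len(graph) < max_pages:
        page = queue.popleft()
        text, links = get_page(page)
        graph[page] = links
        if target in text:
            found.append(page)
        for link in links:
            if link not in visited:  # hash-set lookup avoids revisits
                visited.add(link)
                queue.append(link)
    return found, graph

# Usage with a tiny in-memory "site" instead of live HTTP requests:
site = {
    "a.html": ("hello world", ["b.html", "c.html"]),
    "b.html": ("nothing here", ["a.html"]),
    "c.html": ("world again", []),
}
found, graph = crawl("a.html", "world", lambda url: site[url])
# found == ["a.html", "c.html"]; graph records links from all three pages
```

Using a queue gives breadth-first order; the hash set makes the "already visited" check constant time, so cycles between pages cannot loop the crawler.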