Crawl spawns a pool of workers that make requests to the endpoint.
It takes their results and adds each discovered link to the set of links,
making sure that the depth does not exceed the supplied parameter.
If the set has not grown for 10 seconds, crawling stops.
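
A minimal sketch of how Crawl might fit together, leaning on the Do, Page, and Result definitions described below; the function signature, worker count, channel sizes, and idle-timer mechanics are assumptions for illustration, not the actual implementation:

    import "time"

    // Crawl seeds a worker pool with a start URL and gathers every
    // link the workers discover, bounded by maxDepth.
    func Crawl(start string, maxDepth, workers int) map[string]bool {
        seen := map[string]bool{start: true} // the set of links
        work := make(chan *Page, 1024)       // buffered so the sketch avoids deadlock
        results := make(chan Result, 1024)

        // Spawn the worker pool; each worker runs Do on pages pulled
        // from the work channel and sends its findings back.
        for i := 0; i < workers; i++ {
            go func() {
                for page := range work {
                    for _, r := range Do(page) {
                        results <- r
                    }
                }
            }()
        }

        work <- &Page{URL: start, Depth: 0}
        idle := time.NewTimer(10 * time.Second)
        for {
            select {
            case r := <-results:
                // Drop errors, pages past the depth limit, and duplicates.
                if r.Err != nil || r.Depth > maxDepth || seen[r.URL] {
                    continue
                }
                seen[r.URL] = true
                work <- r.Page
                // The set grew, so restart the 10-second idle timer.
                if !idle.Stop() {
                    <-idle.C
                }
                idle.Reset(10 * time.Second)
            case <-idle.C:
                // The set has not grown for 10 seconds: stop crawling.
                close(work)
                return seen
            }
        }
    }
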
Do makes an HTTP request to a URL supplied by the work pool,
parses the response if it is HTML, looks for "form" and "a"
tags, and sends back the URLs they carry (in their action and
href attributes, respectively).
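
A sketch of Do under the same assumed types, using golang.org/x/net/html as one plausible parser; returning a slice of Results rather than sending on a channel is a simplification, and relative-URL resolution is omitted:

    import (
        "net/http"
        "strings"

        "golang.org/x/net/html"
    )

    // Do fetches one page from the work pool and extracts candidate URLs.
    func Do(page *Page) []Result {
        resp, err := http.Get(page.URL)
        if err != nil {
            return []Result{{Err: err}}
        }
        defer resp.Body.Close()

        // Only parse responses that claim to be HTML.
        if !strings.Contains(resp.Header.Get("Content-Type"), "text/html") {
            return nil
        }
        root, err := html.Parse(resp.Body)
        if err != nil {
            return []Result{{Err: err}}
        }

        var out []Result
        var walk func(*html.Node)
        walk = func(n *html.Node) {
            if n.Type == html.ElementNode && (n.Data == "a" || n.Data == "form") {
                // Anchors carry their URL in href, forms in action.
                key := "href"
                if n.Data == "form" {
                    key = "action"
                }
                for _, a := range n.Attr {
                    if a.Key == key && a.Val != "" {
                        out = append(out, Result{Page: &Page{URL: a.Val, Depth: page.Depth + 1}})
                    }
                }
            }
            for c := n.FirstChild; c != nil; c = c.NextSibling {
                walk(c)
            }
        }
        walk(root)
        return out
    }
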
Result is the value returned by the Do function.
It contains an embedded pointer to a discovered page
along with the depth at which it was found.
The Err field reports any error encountered while crawling.
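
Read literally, the description suggests shapes along these lines; the field names are assumptions, but the embedded pointer and the Err field come straight from the text:

    // Page is a hypothetical struct pairing a discovered URL with the
    // depth at which it was found.
    type Page struct {
        URL   string
        Depth int
    }

    // Result is what Do hands back for each URL it finds.
    type Result struct {
        *Page       // embedded pointer to the discovered page
        Err   error // any error encountered while crawling
    }
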