Documentation ¶
Index ¶
Constants ¶
const (
    CLUSTER_STATUS_INIT = iota
    CLUSTER_STATUS_JOIN
    CLUSTER_STATUS_ELECTION
    CLUSTER_STATUS_READY
)
Cluster status:

* init: everything has been initialized
* join: try to connect to another node; if none is reachable, the node makes itself master, otherwise it learns the master from the node it reached
* election (optional): once the circle of nodes is built, the nodes start electing a master
* ready: ready to start crawling
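As a rough illustration of this lifecycle, the sketch below walks a status value through the four states in order. Only the CLUSTER_STATUS_* constants come from this package; the statusName helper and the main function are hypothetical.

package main

import "fmt"

// Mirrors the constants declared above: a node progresses
// init -> join -> election -> ready.
const (
    CLUSTER_STATUS_INIT = iota
    CLUSTER_STATUS_JOIN
    CLUSTER_STATUS_ELECTION
    CLUSTER_STATUS_READY
)

// statusName is a hypothetical helper mapping a status to a label.
func statusName(status int) string {
    switch status {
    case CLUSTER_STATUS_INIT:
        return "init"
    case CLUSTER_STATUS_JOIN:
        return "join"
    case CLUSTER_STATUS_ELECTION:
        return "election"
    case CLUSTER_STATUS_READY:
        return "ready"
    default:
        return "unknown"
    }
}

func main() {
    for s := CLUSTER_STATUS_INIT; s <= CLUSTER_STATUS_READY; s++ {
        fmt.Printf("status %d: %s\n", s, statusName(s))
    }
}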
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Cluster ¶
type Cluster interface {
    CrawlStatus() *crawler.CrawlerStatus
    DeleteDeadNode(nodeName string)
    IsMasterNode() bool
    AddNode(nodeInfo *node.NodeInfo)
    MakeMasterNode(nodeName string)
    ElectMaster() *node.NodeInfo
    IsReady() bool
    IsSpiderRunning(spiderName string) bool
    StartSpider(spiderName string)
    AddRequest(request *http.Request)
    StopSpider(spiderName string)
    AddToCrawlingQuene(request *http.Request)
    Crawled(scrapyResult *crawler.ScrapeResult)
    CanWeStopSpider(spiderName string) bool
    IsStop() bool
    PopRequest() *http.Request
    Ready()
    Join()
    GetMasterNode() *node.NodeInfo
    GetMasterName() string
    GetClusterInfo() *ClusterInfo
    GetRequestStatus() *RequestStatus
    GetAllNode() []*node.NodeInfo
}
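A hypothetical caller might drive an implementation of Cluster like this. Every method used below is part of the interface, but the call order is an assumption read off the method names, not documented behaviour:

// runSpider is a hypothetical driver for one spider on one node.
func runSpider(c Cluster, spiderName string) {
    c.Join() // attach to the cluster, or become master if alone
    if !c.IsReady() {
        return // cluster is still joining or electing
    }
    if !c.IsSpiderRunning(spiderName) {
        c.StartSpider(spiderName)
    }
    for !c.CanWeStopSpider(spiderName) {
        if request := c.PopRequest(); request != nil {
            c.AddToCrawlingQuene(request)
            // ... fetch and parse the page, then report the result:
            // c.Crawled(result)
        }
    }
    c.StopSpider(spiderName)
}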
type ClusterInfo ¶
type ClusterInfo struct {
    Status     int
    Name       string
    NodeList   []*node.NodeInfo
    LocalNode  *node.NodeInfo
    MasterNode *node.NodeInfo
}
Basic cluster information.
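For instance, a status dump might read the fields like this; printClusterInfo is hypothetical, and the assumption that Status carries a CLUSTER_STATUS_* value follows from the constant names rather than explicit documentation:

// printClusterInfo is a hypothetical helper that only reads the
// fields documented above.
func printClusterInfo(info *ClusterInfo) {
    // Status is assumed to be one of the CLUSTER_STATUS_* constants.
    fmt.Printf("cluster %q: status=%d, %d node(s)\n",
        info.Name, info.Status, len(info.NodeList))
    // Whether LocalNode and MasterNode share pointer identity is an
    // implementation detail; Cluster.IsMasterNode is the documented check.
    fmt.Println("has a master:", info.MasterNode != nil)
}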
type RequestStatus ¶
type RequestStatus struct {
    CrawledMap   map[string]int                      // node name -> number of crawled requests
    CrawlingMap  map[string]map[string]*http.Request // node name -> request key -> in-flight request
    WaitingQuene *crawler.RequestQuene               // requests waiting to be dispatched
}
RequestStatus receives incoming requests and records which requests have been crawled.
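The sketch below mimics that bookkeeping with plain types: string keys stand in for whatever request key the package uses, and a slice stands in for *crawler.RequestQuene. None of it is the package's real code.

package main

import (
    "fmt"
    "net/http"
)

func main() {
    req, _ := http.NewRequest("GET", "https://example.com/page", nil)

    // CrawlingMap: node name -> request key -> in-flight request.
    crawling := map[string]map[string]*http.Request{
        "node-1": {req.URL.String(): req}, // node-1 is fetching this page
    }
    // CrawledMap: node name -> number of finished requests.
    crawled := map[string]int{"node-1": 41}
    // Stand-in for WaitingQuene: requests not yet handed to a node.
    waiting := []*http.Request{}

    fmt.Println(len(crawling["node-1"]), crawled["node-1"], len(waiting)) // 1 41 0
}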
func NewRequestStatus ¶
func NewRequestStatus() *RequestStatus
func (*RequestStatus) Crawled ¶
func (this *RequestStatus) Crawled(scrapyResult *crawler.ScrapeResult)
Removes the finished request from CrawlingMap and increments the owning node's count in CrawledMap.
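Continuing the sketch above, the documented effect amounts to the following; crawledSketch is a hypothetical mirror of this method, not the package's actual implementation:

// crawledSketch drops the finished request from the crawling map and
// bumps the owning node's counter in the crawled map.
func crawledSketch(crawling map[string]map[string]*http.Request,
    crawled map[string]int, nodeName, requestKey string) {
    delete(crawling[nodeName], requestKey)
    crawled[nodeName]++
}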
func (*RequestStatus) DeleteDeadNode ¶
func (this *RequestStatus) DeleteDeadNode(nodeName string)
Removes the dead node's requests from CrawlingMap and adds those requests back to the waiting queue (WaitingQuene).
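In the same sketch, the recovery this method performs would look roughly like this; deleteDeadNodeSketch is hypothetical, with a slice again standing in for WaitingQuene:

// deleteDeadNodeSketch re-queues everything the dead node was
// crawling, then forgets the node's in-flight set.
func deleteDeadNodeSketch(crawling map[string]map[string]*http.Request,
    waiting *[]*http.Request, nodeName string) {
    for _, request := range crawling[nodeName] {
        *waiting = append(*waiting, request)
    }
    delete(crawling, nodeName)
}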